home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/must-gather.logs:

[must-gather ] OUT 2026-01-20T10:58:04.738812382Z Using must-gather plug-in image: quay.io/openstack-k8s-operators/openstack-must-gather:latest
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304
ClientVersion: 4.20.8
ClusterVersion: Stable at "4.16.0"
ClusterOperators:
	clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13
	clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused
	clusteroperator/network is progressing: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
	clusteroperator/cloud-credential is missing
	clusteroperator/cluster-autoscaler is missing
	clusteroperator/insights is missing
	clusteroperator/monitoring is missing
	clusteroperator/storage is missing
[must-gather ] OUT 2026-01-20T10:58:04.810012317Z namespace/openshift-must-gather-jdb4k created
[must-gather ] OUT 2026-01-20T10:58:04.81661185Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-hpl5w created
[must-gather ] OUT 2026-01-20T10:58:05.765872775Z namespace/openshift-must-gather-jdb4k deleted

Reprinting Cluster State:
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304
ClientVersion: 4.20.8
ClusterVersion: Stable at "4.16.0"
ClusterOperators:
	clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13
	clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused
	clusteroperator/network is progressing: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
	clusteroperator/cloud-credential is missing
	clusteroperator/cluster-autoscaler is missing
	clusteroperator/insights is missing
	clusteroperator/monitoring is missing
	clusteroperator/storage is missing

home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/timestamp:

2026-01-20 10:58:04.822241707 +0000 UTC m=+0.196083467
2026-01-20 10:58:05.760395922 +0000 UTC m=+1.134237692

home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/event-filter.html:

Events
Time  Namespace  Component  RelatedObject  Reason  Message
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log:

2026-01-20T10:49:45.122177181+00:00 stdout F /etc/hosts.tmp /etc/hosts differ: char 159, line 3

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log:

(0 bytes)

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.log:

2026-01-20T10:47:24.816706741+00:00 stderr F I0120 10:47:24.816530 1 start.go:23] ClusterVersionOperator 4.16.0-202406131906.p0.g6f553e9.assembly.stream.el9-6f553e9
2026-01-20T10:47:24.838928452+00:00 stderr F I0120 10:47:24.838777 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock
skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is 26s.
2026-01-20T10:47:24.876692613+00:00 stderr F I0120 10:47:24.875972 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2026-01-20T10:47:24.876692613+00:00 stderr F I0120 10:47:24.876212 1 upgradeable.go:446] ConfigMap openshift-config/admin-acks added.
2026-01-20T10:47:24.893938230+00:00 stderr F I0120 10:47:24.893033 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:116
2026-01-20T10:47:24.925487372+00:00 stderr F I0120 10:47:24.922229 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:116
2026-01-20T10:47:24.928677498+00:00 stderr F I0120 10:47:24.928641 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:116
2026-01-20T10:47:24.931040113+00:00 stderr F I0120 10:47:24.929269 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2026-01-20T10:47:24.931040113+00:00 stderr F I0120 10:47:24.929779 1 upgradeable.go:446] ConfigMap openshift-config-managed/admin-gates added.
2026-01-20T10:47:24.945600216+00:00 stderr F I0120 10:47:24.945519 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:116
2026-01-20T10:47:24.967045536+00:00 stderr F I0120 10:47:24.966847 1 start.go:295] Waiting on 1 outstanding goroutines.
2026-01-20T10:47:24.967045536+00:00 stderr F I0120 10:47:24.966890 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-version/version...
2026-01-20T10:53:32.305977138+00:00 stderr F I0120 10:53:32.305902 1 leaderelection.go:260] successfully acquired lease openshift-cluster-version/version
2026-01-20T10:53:32.306113032+00:00 stderr F I0120 10:53:32.306007 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-cluster-version", Name:"version", UID:"4c78d446-a3f8-4879-b39d-248de5762283", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41766", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_f1f37c7d-2b52-4dd4-a43b-e74f99a4f31d became leader
2026-01-20T10:53:32.306378609+00:00 stderr F I0120 10:53:32.306240 1 start.go:565] FeatureGate found in cluster, using its feature set "" at startup
2026-01-20T10:53:32.306529912+00:00 stderr F I0120 10:53:32.306490 1 payload.go:307] Loading updatepayload from "/"
2026-01-20T10:53:32.307905639+00:00 stderr F I0120 10:53:32.307852 1 metrics.go:154] Metrics port listening for HTTPS on 0.0.0.0:9099
2026-01-20T10:53:32.308352530+00:00 stderr F I0120 10:53:32.308309 1 payload.go:403] Architecture from release-metadata (4.16.0) retrieved from runtime: "amd64"
2026-01-20T10:53:32.328179602+00:00 stderr F I0120 10:53:32.328127 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.338548540+00:00 stderr F I0120 10:53:32.338483 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.364992532+00:00 stderr F I0120 10:53:32.364916 1 payload.go:210] excluding Filename:
"0000_10_config-operator_01_authentications-Hypershift.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": include.release.openshift.io/self-managed-high-availability unset
2026-01-20T10:53:32.369738005+00:00 stderr F I0120 10:53:32.369693 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.374385466+00:00 stderr F I0120 10:53:32.374339 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2026-01-20T10:53:32.377682551+00:00 stderr F I0120 10:53:32.377643 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.379067076+00:00 stderr F I0120 10:53:32.379023 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.380564855+00:00 stderr F I0120 10:53:32.380534 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-TechPreviewNoUpgrade.crd.yaml" Group:
"apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.384448215+00:00 stderr F I0120 10:53:32.384401 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.387640727+00:00 stderr F I0120 10:53:32.387601 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2026-01-20T10:53:32.390037489+00:00 stderr F I0120 10:53:32.389998 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.415163928+00:00 stderr F I0120 10:53:32.415092 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.442857124+00:00 stderr F I0120 10:53:32.442797 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition"
Name: "infrastructures.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2026-01-20T10:53:32.454716620+00:00 stderr F I0120 10:53:32.454668 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.471615536+00:00 stderr F I0120 10:53:32.471496 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.473498645+00:00 stderr F I0120 10:53:32.473416 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2026-01-20T10:53:32.474422159+00:00 stderr F I0120 10:53:32.474346 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.484507269+00:00 stderr F I0120 10:53:32.484439 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name:
"etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.485423703+00:00 stderr F I0120 10:53:32.485364 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.489764715+00:00 stderr F I0120 10:53:32.489704 1 payload.go:210] excluding Filename: "0000_10_openshift_service-ca_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-service-ca": include.release.openshift.io/self-managed-high-availability unset
2026-01-20T10:53:32.491244283+00:00 stderr F I0120 10:53:32.491187 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.496653063+00:00 stderr F I0120 10:53:32.496542 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2026-01-20T10:53:32.500421861+00:00 stderr F I0120 10:53:32.499370 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.502582276+00:00 stderr F I0120 10:53:32.502547 1 payload.go:210]
excluding Filename: "0000_12_etcd_01_etcds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2026-01-20T10:53:32.509143376+00:00 stderr F I0120 10:53:32.509107 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.574752820+00:00 stderr F I0120 10:53:32.574667 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.625133981+00:00 stderr F I0120 10:53:32.624993 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_04_infrastructure-components.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.627208245+00:00 stderr F I0120 10:53:32.627158 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-aws": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.627208245+00:00 stderr F I0120 10:53:32.627181 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name:
"openshift-cluster-api-azure": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.627208245+00:00 stderr F I0120 10:53:32.627192 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-gcp": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.627243556+00:00 stderr F I0120 10:53:32.627203 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-powervs": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.627243556+00:00 stderr F I0120 10:53:32.627215 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-vsphere": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.627831431+00:00 stderr F I0120 10:53:32.627783 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.628318893+00:00 stderr F I0120 10:53:32.628271 1 payload.go:210] excluding Filename: "0000_30_cluster-api_01_images.configmap.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-images": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.628619011+00:00 stderr F I0120 10:53:32.628571 1 payload.go:210]
excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.628619011+00:00 stderr F I0120 10:53:32.628591 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "Secret" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-secret": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.628909349+00:00 stderr F I0120 10:53:32.628861 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_webhook-service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-webhook-service": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.629342100+00:00 stderr F I0120 10:53:32.629290 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.629342100+00:00 stderr F I0120 10:53:32.629311 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.631674200+00:00 stderr F I0120 10:53:32.631592 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.core-cluster-api.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.710763323+00:00 stderr F I0120 10:53:32.710608 1 payload.go:210]
excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-aws.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "aws": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.729315142+00:00 stderr F I0120 10:53:32.729245 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-gcp.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "gcp": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.752853710+00:00 stderr F I0120 10:53:32.752774 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-ibmcloud.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "ibmcloud": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.797488393+00:00 stderr F I0120 10:53:32.797397 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-vsphere.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "vsphere": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.882927699+00:00 stderr F I0120 10:53:32.882826 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterclasses.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.882927699+00:00 stderr F I0120 10:53:32.882861 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusters.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.882927699+00:00 stderr F I0120
10:53:32.882875 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machines.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.882927699+00:00 stderr F I0120 10:53:32.882891 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinesets.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.882927699+00:00 stderr F I0120 10:53:32.882906 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinedeployments.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.882927699+00:00 stderr F I0120 10:53:32.882921 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinepools.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.883024981+00:00 stderr F I0120 10:53:32.882932 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesets.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.883024981+00:00 stderr F I0120 10:53:32.882944 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name:
"clusterresourcesetbindings.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.883024981+00:00 stderr F I0120 10:53:32.882954 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinehealthchecks.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.883024981+00:00 stderr F I0120 10:53:32.882964 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "extensionconfigs.runtime.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2026-01-20T10:53:32.883711479+00:00 stderr F I0120 10:53:32.883660 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.883711479+00:00 stderr F I0120 10:53:32.883682 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.884434278+00:00 stderr F I0120 10:53:32.884390 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.884434278+00:00 stderr F I0120 10:53:32.884409 1 payload.go:210] excluding Filename:
"0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "validating-webhook-configuration": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.885028073+00:00 stderr F I0120 10:53:32.884993 1 payload.go:210] excluding Filename: "0000_30_cluster-api_11_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.885404783+00:00 stderr F I0120 10:53:32.885363 1 payload.go:210] excluding Filename: "0000_30_cluster-api_12_clusteroperator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2026-01-20T10:53:32.889766396+00:00 stderr F I0120 10:53:32.889718 1 payload.go:210] excluding Filename: "0000_30_machine-api-operator_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-machine-api-alibabacloud": include.release.openshift.io/self-managed-high-availability unset
2026-01-20T10:53:32.927826168+00:00 stderr F I0120 10:53:32.927696 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "Namespace" Name: "baremetal-operator-system": no annotations
2026-01-20T10:53:32.927826168+00:00 stderr F I0120 10:53:32.927804 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "ConfigMap" Namespace: "baremetal-operator-system" Name: "ironic": no annotations
2026-01-20T10:53:32.972251356+00:00 stderr F I0120 10:53:32.972132 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name:
"cluster": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:32.972941374+00:00 stderr F I0120 10:53:32.972859 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:32.973622222+00:00 stderr F I0120 10:53:32.973560 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:32.974257738+00:00 stderr F I0120 10:53:32.974209 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2026-01-20T10:53:32.974872604+00:00 stderr F I0120 10:53:32.974817 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2026-01-20T10:53:32.975466489+00:00 stderr F I0120 10:53:32.975421 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2026-01-20T10:53:32.978009045+00:00 stderr F I0120 10:53:32.977947 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_04_serviceaccount-hypershift.yaml" Group: "" 
Kind: "ServiceAccount" Name: "csi-snapshot-controller-operator": no annotations 2026-01-20T10:53:32.980912590+00:00 stderr F I0120 10:53:32.980832 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_05_operator_role-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Name: "csi-snapshot-controller-operator-role": no annotations 2026-01-20T10:53:32.983323132+00:00 stderr F I0120 10:53:32.983279 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_06_operator_rolebinding-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Name: "csi-snapshot-controller-operator-role": no annotations 2026-01-20T10:53:32.984343648+00:00 stderr F I0120 10:53:32.984297 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "csi-snapshot-controller-operator": no annotations 2026-01-20T10:53:32.985095548+00:00 stderr F I0120 10:53:32.985036 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "csi-snapshot-controller-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.011509340+00:00 stderr F I0120 10:53:33.011417 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.064013686+00:00 stderr F I0120 10:53:33.063906 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"configs.imageregistry.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.094888313+00:00 stderr F I0120 10:53:33.094775 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.119834978+00:00 stderr F I0120 10:53:33.119749 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-image-registry" Name: "cluster-image-registry-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.143391386+00:00 stderr F I0120 10:53:33.143307 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-ibmcloud": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.143391386+00:00 stderr F I0120 10:53:33.143345 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-powervs": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.143391386+00:00 stderr F I0120 10:53:33.143359 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" 
Namespace: "openshift-ingress-operator" Name: "openshift-ingress-alibabacloud": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.147667697+00:00 stderr F I0120 10:53:33.147610 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-ingress-operator" Name: "ingress-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.153800325+00:00 stderr F I0120 10:53:33.153730 1 payload.go:210] excluding Filename: "0000_50_cluster-kube-storage-version-migrator-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-kube-storage-version-migrator-operator" Name: "kube-storage-version-migrator-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.155670204+00:00 stderr F I0120 10:53:33.155620 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.155670204+00:00 stderr F I0120 10:53:33.155646 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.158535248+00:00 stderr F I0120 10:53:33.158474 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_04-deployment-capi.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-machine-approver" Name: "machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 
2026-01-20T10:53:33.403498754+00:00 stderr F I0120 10:53:33.403409 1 payload.go:210] excluding Filename: "0000_50_cluster-monitoring-operator_05-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-monitoring" Name: "cluster-monitoring-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.419220910+00:00 stderr F I0120 10:53:33.419046 1 payload.go:210] excluding Filename: "0000_50_cluster-node-tuning-operator_50-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-node-tuning-operator" Name: "cluster-node-tuning-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.446329510+00:00 stderr F I0120 10:53:33.446230 1 payload.go:210] excluding Filename: "0000_50_cluster-samples-operator_06-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-samples-operator" Name: "cluster-samples-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.461997945+00:00 stderr F I0120 10:53:33.461906 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-CustomNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.467385854+00:00 stderr F I0120 10:53:33.467330 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-TechPreviewNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.469587881+00:00 stderr F I0120 10:53:33.469508 1 payload.go:210] excluding Filename: 
"0000_50_cluster-storage-operator_06_operator_cr-hypershift.yaml" Group: "operator.openshift.io" Kind: "Storage" Name: "cluster": no annotations 2026-01-20T10:53:33.470261368+00:00 stderr F I0120 10:53:33.470204 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_07_service_account-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "cluster-storage-operator": no annotations 2026-01-20T10:53:33.470972286+00:00 stderr F I0120 10:53:33.470921 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_08_operator_rbac-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-storage-operator-role": no annotations 2026-01-20T10:53:33.477124085+00:00 stderr F I0120 10:53:33.477000 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "cluster-storage-operator": no annotations 2026-01-20T10:53:33.478995653+00:00 stderr F I0120 10:53:33.478936 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "cluster-storage-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.481303753+00:00 stderr F I0120 10:53:33.481245 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_11_cluster_operator-hypershift.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "storage": no annotations 2026-01-20T10:53:33.483402688+00:00 stderr F I0120 10:53:33.483357 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "aws-ebs-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.483402688+00:00 stderr F 
I0120 10:53:33.483391 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-disk-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.483432409+00:00 stderr F I0120 10:53:33.483413 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-file-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.483453289+00:00 stderr F I0120 10:53:33.483441 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ibm-vpc-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.483469519+00:00 stderr F I0120 10:53:33.483455 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "powervs-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.483485600+00:00 stderr F I0120 10:53:33.483472 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "manila-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.483501650+00:00 stderr F I0120 10:53:33.483490 1 payload.go:210] 
excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ovirt-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.483530861+00:00 stderr F I0120 10:53:33.483509 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vmware-vsphere-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.483547191+00:00 stderr F I0120 10:53:33.483528 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vsphere-problem-detector": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.508327661+00:00 stderr F I0120 10:53:33.508210 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-conversionwebhook-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-conversion-webhook": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.510213850+00:00 stderr F I0120 10:53:33.510117 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.538844189+00:00 stderr F I0120 10:53:33.538753 1 payload.go:210] excluding Filename: "0000_50_insights-operator_03-insightsdatagather-config-crd.yaml" Group: "apiextensions.k8s.io" Kind: 
"CustomResourceDefinition" Name: "insightsdatagathers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.543937230+00:00 stderr F I0120 10:53:33.543871 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-datagather-insights-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "datagathers.insights.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.544247238+00:00 stderr F I0120 10:53:33.544200 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-insightsdatagather-config-cr.yaml" Group: "config.openshift.io" Kind: "InsightsDataGather" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.546561708+00:00 stderr F I0120 10:53:33.546512 1 payload.go:210] excluding Filename: "0000_50_insights-operator_06-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-insights" Name: "insights-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.640633518+00:00 stderr F I0120 10:53:33.640562 1 payload.go:210] excluding Filename: "0000_50_olm_06-psm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "package-server-manager": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.645795431+00:00 stderr F I0120 10:53:33.645726 1 payload.go:210] excluding Filename: "0000_50_olm_07-olm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "olm-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.648818640+00:00 stderr F I0120 10:53:33.648761 1 payload.go:210] excluding Filename: 
"0000_50_olm_08-catalog-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "catalog-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.662871183+00:00 stderr F I0120 10:53:33.662795 1 payload.go:210] excluding Filename: "0000_50_operator-marketplace_09_operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-marketplace" Name: "marketplace-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.672773788+00:00 stderr F I0120 10:53:33.672719 1 payload.go:210] excluding Filename: "0000_50_service-ca-operator_05_deploy-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-service-ca-operator" Name: "service-ca-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.675631352+00:00 stderr F I0120 10:53:33.675585 1 payload.go:210] excluding Filename: "0000_50_tests_test-reporting.yaml" Group: "config.openshift.io" Kind: "TestReporting" Name: "cluster": no annotations 2026-01-20T10:53:33.676287118+00:00 stderr F I0120 10:53:33.676259 1 payload.go:210] excluding Filename: "0000_51_olm_00-olm-operator.yml" Group: "operator.openshift.io" Kind: "OLM" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.676707400+00:00 stderr F I0120 10:53:33.676667 1 payload.go:210] excluding Filename: "0000_51_olm_01_operator_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.678598338+00:00 stderr F I0120 10:53:33.678561 1 payload.go:210] excluding Filename: "0000_51_olm_02_operator_clusterrole.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-olm-operator": "Default" is required, and 
release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.678985818+00:00 stderr F I0120 10:53:33.678944 1 payload.go:210] excluding Filename: "0000_51_olm_03_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.679558544+00:00 stderr F I0120 10:53:33.679521 1 payload.go:210] excluding Filename: "0000_51_olm_04_metrics_service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator-metrics": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.680071547+00:00 stderr F I0120 10:53:33.680023 1 payload.go:210] excluding Filename: "0000_51_olm_05_operator_clusterrolebinding.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-olm-operator-role": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.681862682+00:00 stderr F I0120 10:53:33.681821 1 payload.go:210] excluding Filename: "0000_51_olm_06_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.682334505+00:00 stderr F I0120 10:53:33.682300 1 payload.go:210] excluding Filename: "0000_51_olm_07_cluster_operator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "olm": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.691212415+00:00 stderr F I0120 10:53:33.691030 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_02_rbac.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "default-account-cluster-network-operator": unrecognized value 
include.release.openshift.io/self-managed-high-availability=false 2026-01-20T10:53:33.693125943+00:00 stderr F I0120 10:53:33.692843 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_03_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-network-operator" Name: "network-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.703853211+00:00 stderr F I0120 10:53:33.703805 1 payload.go:210] excluding Filename: "0000_70_dns-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-dns-operator" Name: "dns-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:33.707395372+00:00 stderr F I0120 10:53:33.707355 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.709956168+00:00 stderr F I0120 10:53:33.709906 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.712442422+00:00 stderr F I0120 10:53:33.712385 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.730180361+00:00 stderr F I0120 10:53:33.730125 1 payload.go:210] excluding 
Filename: "0000_80_machine-config_01_controllerconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.755756051+00:00 stderr F I0120 10:53:33.755672 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.766973761+00:00 stderr F I0120 10:53:33.766926 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.770115722+00:00 stderr F I0120 10:53:33.770086 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.771961790+00:00 stderr F I0120 10:53:33.771927 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: 
CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.773699254+00:00 stderr F I0120 10:53:33.773671 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.776718802+00:00 stderr F I0120 10:53:33.776685 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.782009179+00:00 stderr F I0120 10:53:33.781955 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.788223699+00:00 stderr F I0120 10:53:33.785118 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.793278190+00:00 stderr F I0120 10:53:33.793230 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": 
"Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.802271862+00:00 stderr F I0120 10:53:33.802218 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.807784705+00:00 stderr F I0120 10:53:33.807742 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.809256993+00:00 stderr F I0120 10:53:33.809225 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.810714230+00:00 stderr F I0120 10:53:33.810669 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.812215579+00:00 stderr F I0120 10:53:33.812172 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" 
Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.813931074+00:00 stderr F I0120 10:53:33.813892 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.815805622+00:00 stderr F I0120 10:53:33.815741 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.817568818+00:00 stderr F I0120 10:53:33.817515 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.818563093+00:00 stderr F I0120 10:53:33.818508 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:33.819569080+00:00 stderr F I0120 10:53:33.819515 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: 
"CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:33.820518674+00:00 stderr F I0120 10:53:33.820475 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:33.826215171+00:00 stderr F I0120 10:53:33.826154 1 payload.go:210] excluding Filename: "0000_90_cluster-baremetal-operator_03_servicemonitor.yaml" Group: "monitoring.coreos.com" Kind: "ServiceMonitor" Namespace: "openshift-machine-api" Name: "cluster-baremetal-operator-servicemonitor": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.000631365+00:00 stderr F I0120 10:53:34.000467 1 cvo.go:315] Verifying release authenticity: All release image digests must have GPG signatures from verifier-public-key-redhat (567E347AD0044ADE55BA8A5F199E2F91FD431D51: Red Hat, Inc. (release key 2) , B08B659EE86AF623BC90E8DB938A80CAF21541EB: Red Hat, Inc. 
(beta key 2) ) - will check for signatures in containers/image format at serial signature store wrapping config maps in openshift-config-managed with label "release.openshift.io/verification-signatures", serial signature store wrapping ClusterVersion signatureStores unset, falling back to default stores, parallel signature store wrapping containers/image signature store under https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release, containers/image signature store under https://storage.googleapis.com/openshift-release/official/signatures/openshift/release 2026-01-20T10:53:34.000631365+00:00 stderr F I0120 10:53:34.000553 1 start.go:590] CVO features for version 4.16.0 enabled at startup: {desiredVersion:4.16.0 unknownVersion:false reconciliationIssuesCondition:false} 2026-01-20T10:53:34.000713247+00:00 stderr F I0120 10:53:34.000633 1 featurechangestopper.go:123] Starting stop-on-features-change controller with startingRequiredFeatureSet="" startingCvoGates={desiredVersion:4.16.0 unknownVersion:false reconciliationIssuesCondition:false} 2026-01-20T10:53:34.000713247+00:00 stderr F I0120 10:53:34.000660 1 cvo.go:415] Starting ClusterVersionOperator with minimum reconcile period 3m0.635374416s 2026-01-20T10:53:34.000713247+00:00 stderr F I0120 10:53:34.000687 1 cvo.go:483] Waiting on 6 outstanding goroutines. 
2026-01-20T10:53:34.000860631+00:00 stderr F I0120 10:53:34.000775 1 sync_worker.go:565] Start: starting sync worker 2026-01-20T10:53:34.000860631+00:00 stderr F I0120 10:53:34.000821 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2026-01-20T10:53:34.000883322+00:00 stderr F I0120 10:53:34.000833 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:34.000953123+00:00 stderr F I0120 10:53:34.000907 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:34.000953123+00:00 stderr F I0120 10:53:34.000936 1 sync_worker.go:461] Initializing prior known value of enabled capabilities from ClusterVersion status. 2026-01-20T10:53:34.001012135+00:00 stderr F I0120 10:53:34.000979 1 sync_worker.go:262] syncPayload: 4.16.0 (force=false) 2026-01-20T10:53:34.001033766+00:00 stderr F I0120 10:53:34.001021 1 payload.go:307] Loading updatepayload from "/" 2026-01-20T10:53:34.001204020+00:00 stderr F I0120 10:53:34.001146 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:34.001485317+00:00 stderr F I0120 10:53:34.001425 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterVersionOverrides' reason='ClusterVersionOverridesSet' message='Disabling ownership via cluster version overrides prevents upgrades. 
Please remove overrides before continuing.') 2026-01-20T10:53:34.001485317+00:00 stderr F I0120 10:53:34.001464 1 upgradeable.go:123] Cluster current version=4.16.0 2026-01-20T10:53:34.001571769+00:00 stderr F I0120 10:53:34.001528 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterOperators' reason='PoolUpdating' message='Cluster operator machine-config should not be upgraded between minor versions: One or more machine config pools are updating, please see `oc get mcp` for further details') 2026-01-20T10:53:34.001590051+00:00 stderr F I0120 10:53:34.001571 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (758.229µs) 2026-01-20T10:53:34.001644532+00:00 stderr F I0120 10:53:34.001613 1 availableupdates.go:61] First attempt to retrieve available updates 2026-01-20T10:53:34.001721124+00:00 stderr F I0120 10:53:34.001157 1 payload.go:403] Architecture from release-metadata (4.16.0) retrieved from runtime: "amd64" 2026-01-20T10:53:34.001985171+00:00 stderr F I0120 10:53:34.001178 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RetrievePayload' Retrieving and verifying payload version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" 2026-01-20T10:53:34.001985171+00:00 stderr F I0120 10:53:34.001952 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LoadPayload' Loading payload version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" 2026-01-20T10:53:34.021132604+00:00 stderr F I0120 10:53:34.020417 1 cincinnati.go:114] Using a root CA pool 
with 0 root CA subjects to request updates from https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.16&id=a84dabf3-edcf-4828-b6a1-f9d3a6f02304&version=4.16.0 2026-01-20T10:53:34.025116118+00:00 stderr F I0120 10:53:34.022370 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.032129059+00:00 stderr F I0120 10:53:34.031141 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.052717661+00:00 stderr F I0120 10:53:34.050666 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-Hypershift.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.056121159+00:00 stderr F I0120 10:53:34.053780 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.060111291+00:00 stderr F I0120 10:53:34.056865 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in 
release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:34.060111291+00:00 stderr F I0120 10:53:34.059120 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.060111291+00:00 stderr F I0120 10:53:34.059815 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.061838046+00:00 stderr F I0120 10:53:34.060476 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.064221508+00:00 stderr F I0120 10:53:34.062273 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.067940404+00:00 stderr F I0120 10:53:34.064410 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: 
CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:34.067940404+00:00 stderr F I0120 10:53:34.066216 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.096134322+00:00 stderr F I0120 10:53:34.095758 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.118106759+00:00 stderr F I0120 10:53:34.115271 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:34.129108064+00:00 stderr F I0120 10:53:34.128328 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.137205253+00:00 stderr F I0120 10:53:34.137157 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 
2026-01-20T10:53:34.138392763+00:00 stderr F I0120 10:53:34.138357 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:34.139178154+00:00 stderr F I0120 10:53:34.138924 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.147691214+00:00 stderr F I0120 10:53:34.147645 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.148427933+00:00 stderr F I0120 10:53:34.148397 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.150521937+00:00 stderr F I0120 10:53:34.150483 1 payload.go:210] excluding Filename: "0000_10_openshift_service-ca_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-service-ca": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.152127378+00:00 stderr F I0120 10:53:34.152098 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: 
"CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.153504344+00:00 stderr F I0120 10:53:34.153417 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:34.154323055+00:00 stderr F I0120 10:53:34.154207 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.155561126+00:00 stderr F I0120 10:53:34.155497 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.157978709+00:00 stderr F I0120 10:53:34.157906 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.216414578+00:00 stderr F I0120 10:53:34.216320 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-openstack": "Default" is required, and 
release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.272289931+00:00 stderr F I0120 10:53:34.272180 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_04_infrastructure-components.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.277519946+00:00 stderr F I0120 10:53:34.277450 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-aws": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.277560118+00:00 stderr F I0120 10:53:34.277526 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-azure": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.277560118+00:00 stderr F I0120 10:53:34.277541 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-gcp": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.277580208+00:00 stderr F I0120 10:53:34.277558 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-powervs": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.277638960+00:00 stderr F I0120 
10:53:34.277593 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-vsphere": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.277990008+00:00 stderr F I0120 10:53:34.277950 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.280390721+00:00 stderr F I0120 10:53:34.278276 1 payload.go:210] excluding Filename: "0000_30_cluster-api_01_images.configmap.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-images": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.280390721+00:00 stderr F I0120 10:53:34.278609 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.280390721+00:00 stderr F I0120 10:53:34.278624 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "Secret" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-secret": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.280390721+00:00 stderr F I0120 10:53:34.278862 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_webhook-service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-webhook-service": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 
2026-01-20T10:53:34.280390721+00:00 stderr F I0120 10:53:34.279204 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.280390721+00:00 stderr F I0120 10:53:34.279230 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.283335467+00:00 stderr F I0120 10:53:34.281180 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.core-cluster-api.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.300380927+00:00 stderr F I0120 10:53:34.300296 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:34.300380927+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:34.300380927+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:34.300524120+00:00 stderr F I0120 10:53:34.300472 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (299.336151ms) 2026-01-20T10:53:34.341137830+00:00 stderr F I0120 10:53:34.341057 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-aws.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "aws": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.353245752+00:00 stderr F I0120 10:53:34.353182 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-gcp.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "gcp": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.361643549+00:00 stderr F I0120 10:53:34.361395 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-ibmcloud.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "ibmcloud": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.403558142+00:00 stderr F I0120 10:53:34.403434 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-vsphere.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "vsphere": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473723114+00:00 stderr F I0120 10:53:34.473634 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterclasses.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473723114+00:00 
stderr F I0120 10:53:34.473676 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusters.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473723114+00:00 stderr F I0120 10:53:34.473691 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machines.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473723114+00:00 stderr F I0120 10:53:34.473705 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinesets.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473794936+00:00 stderr F I0120 10:53:34.473719 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinedeployments.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473794936+00:00 stderr F I0120 10:53:34.473731 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinepools.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473794936+00:00 stderr F I0120 10:53:34.473743 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"clusterresourcesets.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473794936+00:00 stderr F I0120 10:53:34.473755 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesetbindings.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473794936+00:00 stderr F I0120 10:53:34.473767 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinehealthchecks.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.473794936+00:00 stderr F I0120 10:53:34.473779 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "extensionconfigs.runtime.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2026-01-20T10:53:34.474141404+00:00 stderr F I0120 10:53:34.474101 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.474141404+00:00 stderr F I0120 10:53:34.474118 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.474703619+00:00 stderr F I0120 10:53:34.474670 1 
payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.474703619+00:00 stderr F I0120 10:53:34.474688 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "validating-webhook-configuration": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.475133111+00:00 stderr F I0120 10:53:34.475096 1 payload.go:210] excluding Filename: "0000_30_cluster-api_11_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.475367607+00:00 stderr F I0120 10:53:34.475336 1 payload.go:210] excluding Filename: "0000_30_cluster-api_12_clusteroperator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.479149464+00:00 stderr F I0120 10:53:34.479101 1 payload.go:210] excluding Filename: "0000_30_machine-api-operator_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-machine-api-alibabacloud": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.508282236+00:00 stderr F I0120 10:53:34.508197 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "Namespace" Name: "baremetal-operator-system": no annotations 2026-01-20T10:53:34.508282236+00:00 stderr F I0120 10:53:34.508249 1 payload.go:210] excluding Filename: 
"0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "ConfigMap" Namespace: "baremetal-operator-system" Name: "ironic": no annotations 2026-01-20T10:53:34.526417905+00:00 stderr F I0120 10:53:34.526347 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.526765924+00:00 stderr F I0120 10:53:34.526713 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.527095002+00:00 stderr F I0120 10:53:34.527047 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.528379056+00:00 stderr F I0120 10:53:34.528305 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2026-01-20T10:53:34.529217117+00:00 stderr F I0120 10:53:34.529150 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2026-01-20T10:53:34.530671084+00:00 stderr F I0120 10:53:34.530621 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml" 
Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2026-01-20T10:53:34.532666756+00:00 stderr F I0120 10:53:34.532599 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_04_serviceaccount-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "csi-snapshot-controller-operator": no annotations 2026-01-20T10:53:34.535084209+00:00 stderr F I0120 10:53:34.535002 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_05_operator_role-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Name: "csi-snapshot-controller-operator-role": no annotations 2026-01-20T10:53:34.536906726+00:00 stderr F I0120 10:53:34.536847 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_06_operator_rolebinding-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Name: "csi-snapshot-controller-operator-role": no annotations 2026-01-20T10:53:34.537670485+00:00 stderr F I0120 10:53:34.537613 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "csi-snapshot-controller-operator": no annotations 2026-01-20T10:53:34.538255880+00:00 stderr F I0120 10:53:34.538216 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "csi-snapshot-controller-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.554727286+00:00 stderr F I0120 10:53:34.554640 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.585097050+00:00 stderr F I0120 10:53:34.585014 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:34.595335494+00:00 stderr F I0120 10:53:34.595279 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.604613074+00:00 stderr F I0120 10:53:34.604570 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-image-registry" Name: "cluster-image-registry-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.626036277+00:00 stderr F I0120 10:53:34.625959 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-ibmcloud": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.626036277+00:00 stderr F I0120 10:53:34.626000 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: 
"openshift-cloud-credential-operator" Name: "openshift-ingress-powervs": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.626036277+00:00 stderr F I0120 10:53:34.626028 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-ingress-operator" Name: "openshift-ingress-alibabacloud": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.628885691+00:00 stderr F I0120 10:53:34.628827 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-ingress-operator" Name: "ingress-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.633211933+00:00 stderr F I0120 10:53:34.633154 1 payload.go:210] excluding Filename: "0000_50_cluster-kube-storage-version-migrator-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-kube-storage-version-migrator-operator" Name: "kube-storage-version-migrator-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.635136983+00:00 stderr F I0120 10:53:34.635089 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.635136983+00:00 stderr F I0120 10:53:34.635118 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 
2026-01-20T10:53:34.638306415+00:00 stderr F I0120 10:53:34.638258 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_04-deployment-capi.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-machine-approver" Name: "machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.886050672+00:00 stderr F I0120 10:53:34.885936 1 payload.go:210] excluding Filename: "0000_50_cluster-monitoring-operator_05-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-monitoring" Name: "cluster-monitoring-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.896884522+00:00 stderr F I0120 10:53:34.896822 1 payload.go:210] excluding Filename: "0000_50_cluster-node-tuning-operator_50-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-node-tuning-operator" Name: "cluster-node-tuning-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.905183657+00:00 stderr F I0120 10:53:34.905127 1 payload.go:210] excluding Filename: "0000_50_cluster-samples-operator_06-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-samples-operator" Name: "cluster-samples-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.910682659+00:00 stderr F I0120 10:53:34.910622 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-CustomNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:34.913905042+00:00 stderr F I0120 10:53:34.913849 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-TechPreviewNoUpgrade.yaml" 
Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.914906848+00:00 stderr F I0120 10:53:34.914854 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_06_operator_cr-hypershift.yaml" Group: "operator.openshift.io" Kind: "Storage" Name: "cluster": no annotations 2026-01-20T10:53:34.915133814+00:00 stderr F I0120 10:53:34.915099 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_07_service_account-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "cluster-storage-operator": no annotations 2026-01-20T10:53:34.915367670+00:00 stderr F I0120 10:53:34.915321 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_08_operator_rbac-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-storage-operator-role": no annotations 2026-01-20T10:53:34.918521891+00:00 stderr F I0120 10:53:34.918466 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "cluster-storage-operator": no annotations 2026-01-20T10:53:34.919504956+00:00 stderr F I0120 10:53:34.919464 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "cluster-storage-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.920628226+00:00 stderr F I0120 10:53:34.920570 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_11_cluster_operator-hypershift.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "storage": no annotations 2026-01-20T10:53:34.921810326+00:00 stderr F I0120 10:53:34.921755 1 payload.go:210] excluding Filename: 
"0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "aws-ebs-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.921810326+00:00 stderr F I0120 10:53:34.921777 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-disk-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.921810326+00:00 stderr F I0120 10:53:34.921787 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-file-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.921810326+00:00 stderr F I0120 10:53:34.921798 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ibm-vpc-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.921845627+00:00 stderr F I0120 10:53:34.921814 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "powervs-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.921845627+00:00 stderr F I0120 10:53:34.921825 1 payload.go:210] excluding Filename: 
"0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "manila-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.921845627+00:00 stderr F I0120 10:53:34.921835 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ovirt-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.921863698+00:00 stderr F I0120 10:53:34.921845 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vmware-vsphere-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.921863698+00:00 stderr F I0120 10:53:34.921858 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vsphere-problem-detector": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.951191565+00:00 stderr F I0120 10:53:34.950268 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-conversionwebhook-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-conversion-webhook": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.951664578+00:00 stderr F I0120 10:53:34.951602 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" 
Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:34.972769962+00:00 stderr F I0120 10:53:34.972653 1 payload.go:210] excluding Filename: "0000_50_insights-operator_03-insightsdatagather-config-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "insightsdatagathers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.976270382+00:00 stderr F I0120 10:53:34.976189 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-datagather-insights-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "datagathers.insights.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.976362885+00:00 stderr F I0120 10:53:34.976322 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-insightsdatagather-config-cr.yaml" Group: "config.openshift.io" Kind: "InsightsDataGather" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:34.977475154+00:00 stderr F I0120 10:53:34.977399 1 payload.go:210] excluding Filename: "0000_50_insights-operator_06-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-insights" Name: "insights-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.063467365+00:00 stderr F I0120 10:53:35.063351 1 payload.go:210] excluding Filename: "0000_50_olm_06-psm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "package-server-manager": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.065934938+00:00 stderr F I0120 10:53:35.065874 1 payload.go:210] excluding Filename: 
"0000_50_olm_07-olm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "olm-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.067516989+00:00 stderr F I0120 10:53:35.067468 1 payload.go:210] excluding Filename: "0000_50_olm_08-catalog-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "catalog-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.073240917+00:00 stderr F I0120 10:53:35.072942 1 payload.go:210] excluding Filename: "0000_50_operator-marketplace_09_operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-marketplace" Name: "marketplace-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.076319567+00:00 stderr F I0120 10:53:35.076274 1 payload.go:210] excluding Filename: "0000_50_service-ca-operator_05_deploy-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-service-ca-operator" Name: "service-ca-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.077190779+00:00 stderr F I0120 10:53:35.077155 1 payload.go:210] excluding Filename: "0000_50_tests_test-reporting.yaml" Group: "config.openshift.io" Kind: "TestReporting" Name: "cluster": no annotations 2026-01-20T10:53:35.077319913+00:00 stderr F I0120 10:53:35.077288 1 payload.go:210] excluding Filename: "0000_51_olm_00-olm-operator.yml" Group: "operator.openshift.io" Kind: "OLM" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.077455946+00:00 stderr F I0120 10:53:35.077391 1 payload.go:210] excluding Filename: "0000_51_olm_01_operator_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-olm-operator": "Default" is required, 
and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.078037171+00:00 stderr F I0120 10:53:35.078003 1 payload.go:210] excluding Filename: "0000_51_olm_02_operator_clusterrole.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.078416571+00:00 stderr F I0120 10:53:35.078125 1 payload.go:210] excluding Filename: "0000_51_olm_03_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.078416571+00:00 stderr F I0120 10:53:35.078271 1 payload.go:210] excluding Filename: "0000_51_olm_04_metrics_service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator-metrics": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.078416571+00:00 stderr F I0120 10:53:35.078388 1 payload.go:210] excluding Filename: "0000_51_olm_05_operator_clusterrolebinding.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-olm-operator-role": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.079301863+00:00 stderr F I0120 10:53:35.079256 1 payload.go:210] excluding Filename: "0000_51_olm_06_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.079442007+00:00 stderr F I0120 10:53:35.079409 1 payload.go:210] excluding Filename: "0000_51_olm_07_cluster_operator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "olm": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 
2026-01-20T10:53:35.081519191+00:00 stderr F I0120 10:53:35.081478 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_02_rbac.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "default-account-cluster-network-operator": unrecognized value include.release.openshift.io/self-managed-high-availability=false 2026-01-20T10:53:35.082310301+00:00 stderr F I0120 10:53:35.082269 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_03_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-network-operator" Name: "network-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.088084560+00:00 stderr F I0120 10:53:35.087990 1 payload.go:210] excluding Filename: "0000_70_dns-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-dns-operator" Name: "dns-operator": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.090029001+00:00 stderr F I0120 10:53:35.089984 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:35.091291203+00:00 stderr F I0120 10:53:35.091234 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:35.092442213+00:00 stderr F I0120 10:53:35.092399 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-TechPreviewNoUpgrade.crd.yaml" Group: 
"apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.127182120+00:00 stderr F I0120 10:53:35.127061 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:35.148963173+00:00 stderr F I0120 10:53:35.148889 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:35.160632924+00:00 stderr F I0120 10:53:35.160568 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.164681659+00:00 stderr F I0120 10:53:35.164625 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:35.166510016+00:00 stderr F I0120 10:53:35.166416 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-DevPreviewNoUpgrade.crd.yaml" Group: 
"apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:35.168207730+00:00 stderr F I0120 10:53:35.168128 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.171016072+00:00 stderr F I0120 10:53:35.170939 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:35.177707675+00:00 stderr F I0120 10:53:35.177625 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:35.180973330+00:00 stderr F I0120 10:53:35.180894 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.188043712+00:00 stderr F I0120 
10:53:35.187957 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:35.194637272+00:00 stderr F I0120 10:53:35.194566 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:35.212201946+00:00 stderr F I0120 10:53:35.212150 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.213743546+00:00 stderr F I0120 10:53:35.213724 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:35.215278956+00:00 stderr F I0120 10:53:35.215258 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: 
CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:35.216833195+00:00 stderr F I0120 10:53:35.216813 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.218560390+00:00 stderr F I0120 10:53:35.218541 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:35.220243094+00:00 stderr F I0120 10:53:35.220224 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:35.221938337+00:00 stderr F I0120 10:53:35.221919 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.222825940+00:00 stderr F I0120 10:53:35.222806 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is 
required, and release.openshift.io/feature-set=CustomNoUpgrade 2026-01-20T10:53:35.223895728+00:00 stderr F I0120 10:53:35.223879 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2026-01-20T10:53:35.224773671+00:00 stderr F I0120 10:53:35.224755 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2026-01-20T10:53:35.228996630+00:00 stderr F I0120 10:53:35.228976 1 payload.go:210] excluding Filename: "0000_90_cluster-baremetal-operator_03_servicemonitor.yaml" Group: "monitoring.coreos.com" Kind: "ServiceMonitor" Namespace: "openshift-machine-api" Name: "cluster-baremetal-operator-servicemonitor": include.release.openshift.io/self-managed-high-availability unset 2026-01-20T10:53:35.300843685+00:00 stderr F I0120 10:53:35.300716 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:35.301521623+00:00 stderr F I0120 10:53:35.301479 1 cache.go:131] {"type":"PromQL","promql":{"promql":"(\n group by (type) (cluster_infrastructure_provider{_id=\"\",type=\"Azure\"})\n or\n 0 * group by (type) (cluster_infrastructure_provider{_id=\"\"})\n)\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) 
(cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2026-01-20T10:53:35.304627983+00:00 stderr F I0120 10:53:35.304595 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:35.304627983+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:35.304627983+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:35.304928911+00:00 stderr F I0120 10:53:35.304669 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (3.976293ms) 2026-01-20T10:53:35.384012753+00:00 stderr F I0120 10:53:35.383930 1 sync_worker.go:366] Skipping preconditions for a local operator image payload. 2026-01-20T10:53:35.384052844+00:00 stderr F I0120 10:53:35.384012 1 sync_worker.go:415] Payload loaded from quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 with hash 6WUw5aCbcO4=, architecture amd64 2026-01-20T10:53:35.384052844+00:00 stderr F I0120 10:53:35.384025 1 sync_worker.go:527] Propagating initial target version { 4.16.0 quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 false} to sync worker loop in state Reconciling. 
2026-01-20T10:53:35.384108426+00:00 stderr F I0120 10:53:35.384080 1 sync_worker.go:551] Notify the sync worker that new work is available 2026-01-20T10:53:35.384321721+00:00 stderr F I0120 10:53:35.384194 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PayloadLoaded' Payload loaded version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" architecture="amd64" 2026-01-20T10:53:35.384382383+00:00 stderr F I0120 10:53:35.384324 1 sync_worker.go:584] new work is available 2026-01-20T10:53:35.384469655+00:00 stderr F I0120 10:53:35.384410 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:35.384530926+00:00 stderr F I0120 10:53:35.384514 1 sync_worker.go:807] Detected while calculating next work: version changed (from { false} to { 4.16.0 quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 false}), overrides changed ([] to [{Deployment apps openshift-monitoring cluster-monitoring-operator true} {ClusterOperator config.openshift.io monitoring true} {Deployment apps openshift-cloud-credential-operator cloud-credential-operator true} {ClusterOperator config.openshift.io cloud-credential true} 
{Deployment apps openshift-machine-api cluster-autoscaler-operator true} {ClusterOperator config.openshift.io cluster-autoscaler true} {Deployment apps openshift-cloud-controller-manager-operator cluster-cloud-controller-manager-operator true} {ClusterOperator config.openshift.io cloud-controller-manager true}]), capabilities changed (enabled map[] not equal to map[Build:{} Console:{} DeploymentConfig:{} ImageRegistry:{} Ingress:{} MachineAPI:{} OperatorLifecycleManager:{} marketplace:{} openshift-samples:{}]) 2026-01-20T10:53:35.384625729+00:00 stderr F I0120 10:53:35.384542 1 sync_worker.go:632] Previous sync status: &cvo.SyncWorkerStatus{Generation:4, Failure:error(nil), Done:0, Total:0, Completed:0, Reconciling:true, Initial:false, VersionHash:"", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc253f58bd6e35ca7, ext:370723773144, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", 
"CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2026-01-20T10:53:35.384938377+00:00 stderr F I0120 10:53:35.384913 1 sync_worker.go:883] apply: 4.16.0 on generation 4 in state Reconciling at attempt 0 2026-01-20T10:53:35.388652603+00:00 stderr F I0120 10:53:35.388626 1 task_graph.go:481] Running 0 on worker 1 2026-01-20T10:53:35.388664584+00:00 stderr F I0120 10:53:35.388652 1 task_graph.go:481] Running 2 on worker 1 2026-01-20T10:53:35.388740956+00:00 stderr F I0120 10:53:35.388698 1 task_graph.go:481] Running 1 on worker 0 2026-01-20T10:53:35.424419756+00:00 stderr F W0120 10:53:35.423027 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:35.424419756+00:00 stderr F I0120 10:53:35.423343 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2026-01-20T10:53:35.424419756+00:00 stderr F I0120 10:53:35.423356 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:35.424419756+00:00 stderr F I0120 10:53:35.423428 1 upgradeable.go:69] Upgradeability last checked 1.421862732s ago, will not re-check until 2026-01-20T10:55:34Z 2026-01-20T10:53:35.424419756+00:00 stderr F I0120 10:53:35.423438 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (106.072µs) 2026-01-20T10:53:35.424419756+00:00 stderr F I0120 10:53:35.423715 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:35.424419756+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing 
node scaleups and re-deployment failures. 2026-01-20T10:53:35.424419756+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:35.424419756+00:00 stderr F I0120 10:53:35.423783 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (440.831µs) 2026-01-20T10:53:35.428659766+00:00 stderr F I0120 10:53:35.428579 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.427672762s) 2026-01-20T10:53:35.428754189+00:00 stderr F I0120 10:53:35.428710 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:35.429196380+00:00 stderr F I0120 10:53:35.429138 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:35.429288802+00:00 stderr F I0120 10:53:35.429254 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:35.430091013+00:00 stderr F I0120 10:53:35.430001 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:35.453946469+00:00 stderr F I0120 10:53:35.453822 1 task_graph.go:481] Running 3 on worker 0 2026-01-20T10:53:35.465405715+00:00 stderr F W0120 10:53:35.465317 1 warnings.go:70] 
unknown field "spec.signatureStores" 2026-01-20T10:53:35.468935466+00:00 stderr F I0120 10:53:35.468867 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.154567ms) 2026-01-20T10:53:35.496371145+00:00 stderr F I0120 10:53:35.496292 1 task_graph.go:481] Running 4 on worker 0 2026-01-20T10:53:35.543506902+00:00 stderr F I0120 10:53:35.543429 1 task_graph.go:481] Running 5 on worker 0 2026-01-20T10:53:35.543711588+00:00 stderr F I0120 10:53:35.543674 1 task_graph.go:481] Running 6 on worker 0 2026-01-20T10:53:35.593211796+00:00 stderr F W0120 10:53:35.593106 1 helper.go:97] PrometheusRule "openshift-image-registry/image-registry-operator-alerts" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:35.593211796+00:00 stderr F I0120 10:53:35.593169 1 task_graph.go:481] Running 7 on worker 1 2026-01-20T10:53:36.244410914+00:00 stderr F I0120 10:53:36.244326 1 task_graph.go:481] Running 8 on worker 0 2026-01-20T10:53:36.244481676+00:00 stderr F I0120 10:53:36.244454 1 task_graph.go:481] Running 9 on worker 0 2026-01-20T10:53:36.305057521+00:00 stderr F I0120 10:53:36.304917 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:36.305310047+00:00 stderr F I0120 10:53:36.305250 1 cache.go:131] {"type":"PromQL","promql":{"promql":"topk(1,\n group by (label_node_openshift_io_os_id) (kube_node_labels{_id=\"\",label_node_openshift_io_os_id=\"rhel\"})\n or\n 0 * group by (label_node_openshift_io_os_id) (kube_node_labels{_id=\"\"})\n)\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2026-01-20T10:53:36.309583197+00:00 stderr F I0120 10:53:36.308736 
1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:36.309583197+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:36.309583197+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:36.309583197+00:00 stderr F I0120 10:53:36.308831 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (3.944572ms) 2026-01-20T10:53:36.309583197+00:00 stderr F I0120 10:53:36.308849 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:36.309583197+00:00 stderr F I0120 10:53:36.308905 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:36.309583197+00:00 stderr F I0120 10:53:36.308960 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:36.309583197+00:00 stderr F I0120 10:53:36.309274 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:36.330900098+00:00 stderr F W0120 10:53:36.330798 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:36.332769746+00:00 stderr F I0120 10:53:36.332723 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.869767ms) 2026-01-20T10:53:36.335742523+00:00 stderr F I0120 10:53:36.334755 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2026-01-20T10:53:36.335742523+00:00 stderr F I0120 10:53:36.334770 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:36.335742523+00:00 stderr F I0120 10:53:36.334807 1 upgradeable.go:69] Upgradeability last checked 2.333242099s ago, will not re-check until 2026-01-20T10:55:34Z 2026-01-20T10:53:36.335742523+00:00 stderr F I0120 10:53:36.334813 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (62.131µs) 2026-01-20T10:53:36.335742523+00:00 stderr F I0120 10:53:36.334821 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:36.335742523+00:00 stderr F I0120 10:53:36.334851 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:36.335742523+00:00 stderr F I0120 10:53:36.334900 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:36.335742523+00:00 stderr F I0120 10:53:36.335174 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:36.336102342+00:00 stderr F I0120 10:53:36.335962 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:36.336102342+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:36.336102342+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:36.336102342+00:00 stderr F I0120 10:53:36.336040 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.215232ms) 2026-01-20T10:53:36.371469155+00:00 stderr F W0120 10:53:36.371381 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:36.372985165+00:00 stderr F I0120 10:53:36.372923 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.154306ms) 2026-01-20T10:53:36.372985165+00:00 stderr F I0120 10:53:36.372954 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:36.373045286+00:00 stderr F I0120 10:53:36.373011 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:36.373222251+00:00 stderr F I0120 10:53:36.373057 1 
sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:36.373363854+00:00 stderr F I0120 10:53:36.373314 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:36.396457901+00:00 stderr F W0120 10:53:36.396375 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:36.397911479+00:00 stderr F I0120 10:53:36.397862 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.904493ms) 2026-01-20T10:53:36.447421878+00:00 stderr F I0120 10:53:36.447331 1 task_graph.go:481] Running 10 on worker 0 2026-01-20T10:53:36.796014890+00:00 stderr F I0120 10:53:36.795922 1 task_graph.go:481] Running 11 on worker 1 2026-01-20T10:53:37.243872907+00:00 stderr F I0120 10:53:37.243778 1 task_graph.go:481] Running 12 on worker 0 2026-01-20T10:53:37.309472541+00:00 stderr F I0120 10:53:37.309357 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:37.309592904+00:00 stderr F I0120 10:53:37.309567 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group(csv_succeeded{_id=\"\", name=~\"numaresources-operator[.].*\"})\nor\n0 * group(csv_count{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by 
(_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2026-01-20T10:53:37.312996262+00:00 stderr F I0120 10:53:37.312899 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:37.312996262+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:37.312996262+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:37.312996262+00:00 stderr F I0120 10:53:37.312980 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (3.634945ms) 2026-01-20T10:53:37.313117446+00:00 stderr F I0120 10:53:37.312997 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:37.313117446+00:00 stderr F I0120 10:53:37.313049 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:37.313141396+00:00 stderr F I0120 10:53:37.313125 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:37.313417563+00:00 stderr F I0120 10:53:37.313361 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), 
CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:37.334595090+00:00 stderr F W0120 10:53:37.334435 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:37.336464478+00:00 stderr F I0120 10:53:37.336398 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.395544ms) 2026-01-20T10:53:37.340733088+00:00 stderr F I0120 10:53:37.340694 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2026-01-20T10:53:37.340769719+00:00 stderr F I0120 10:53:37.340728 1 upgradeable.go:69] Upgradeability last checked 3.339163079s ago, will not re-check until 2026-01-20T10:55:34Z 2026-01-20T10:53:37.340769719+00:00 stderr F I0120 10:53:37.340734 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (45.261µs) 2026-01-20T10:53:37.340769719+00:00 stderr F I0120 10:53:37.340743 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:37.340820010+00:00 stderr F I0120 10:53:37.340783 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:37.340835801+00:00 stderr F I0120 10:53:37.340826 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:37.341099298+00:00 stderr F I0120 10:53:37.341032 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} 
last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:37.341724344+00:00 stderr F I0120 10:53:37.341665 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:37.341910179+00:00 stderr F I0120 10:53:37.341863 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:37.341910179+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:37.341910179+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:37.341935269+00:00 stderr F I0120 10:53:37.341913 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (241.856µs) 2026-01-20T10:53:37.346782915+00:00 stderr F I0120 10:53:37.346723 1 task_graph.go:481] Running 13 on worker 0 2026-01-20T10:53:37.381708156+00:00 stderr F W0120 10:53:37.381549 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:37.383912754+00:00 stderr F I0120 10:53:37.383859 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.109754ms) 2026-01-20T10:53:37.383912754+00:00 stderr F I0120 10:53:37.383896 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:37.384031247+00:00 stderr F I0120 10:53:37.383987 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:37.384089128+00:00 stderr F I0120 10:53:37.384053 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:37.384540100+00:00 stderr F I0120 10:53:37.384479 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 
2026-01-20T10:53:37.415881179+00:00 stderr F W0120 10:53:37.415780 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:37.417914582+00:00 stderr F I0120 10:53:37.417861 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.949347ms) 2026-01-20T10:53:37.457385101+00:00 stderr F I0120 10:53:37.457273 1 task_graph.go:481] Running 14 on worker 0 2026-01-20T10:53:37.543732121+00:00 stderr F I0120 10:53:37.543592 1 task_graph.go:481] Running 15 on worker 0 2026-01-20T10:53:38.045624893+00:00 stderr F I0120 10:53:38.045426 1 task_graph.go:481] Running 16 on worker 0 2026-01-20T10:53:38.045624893+00:00 stderr F I0120 10:53:38.045482 1 task_graph.go:481] Running 17 on worker 0 2026-01-20T10:53:38.317746930+00:00 stderr F I0120 10:53:38.316758 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:38.317746930+00:00 stderr F I0120 10:53:38.317044 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group(csv_succeeded{_id=\"\", name=~\"sriov-network-operator[.].*\"})\nor\n0 * group(csv_count{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2026-01-20T10:53:38.322452422+00:00 stderr F I0120 10:53:38.322205 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:38.322452422+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:38.322452422+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:38.322452422+00:00 stderr F I0120 10:53:38.322296 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (5.575474ms) 2026-01-20T10:53:38.322452422+00:00 stderr F I0120 10:53:38.322314 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:38.322452422+00:00 stderr F I0120 10:53:38.322365 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:38.322452422+00:00 stderr F I0120 10:53:38.322419 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:38.322861023+00:00 stderr F I0120 10:53:38.322787 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:38.363146493+00:00 stderr F W0120 10:53:38.362992 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:38.366126990+00:00 stderr F I0120 10:53:38.366027 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.707018ms) 2026-01-20T10:53:38.372911785+00:00 stderr F I0120 10:53:38.372692 1 cvo.go:745] Started syncing upgradeable 
"openshift-cluster-version/version" 2026-01-20T10:53:38.373120121+00:00 stderr F I0120 10:53:38.372749 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:38.373299505+00:00 stderr F I0120 10:53:38.373214 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:38.373324416+00:00 stderr F I0120 10:53:38.373307 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:38.373812838+00:00 stderr F I0120 10:53:38.373721 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:38.374114616+00:00 stderr F I0120 10:53:38.372773 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:38.374267600+00:00 stderr F I0120 10:53:38.372978 1 upgradeable.go:69] Upgradeability last checked 4.371407858s ago, will not re-check until 2026-01-20T10:55:34Z 2026-01-20T10:53:38.374267600+00:00 stderr F I0120 10:53:38.374207 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (1.520449ms) 2026-01-20T10:53:38.374586979+00:00 stderr F I0120 10:53:38.374499 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is 
Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:38.374586979+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:38.374586979+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:38.374623860+00:00 stderr F I0120 10:53:38.374584 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.809557ms) 2026-01-20T10:53:38.403945607+00:00 stderr F W0120 10:53:38.403795 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:38.408432413+00:00 stderr F I0120 10:53:38.408325 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.569169ms) 2026-01-20T10:53:38.408432413+00:00 stderr F I0120 10:53:38.408381 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:38.408565346+00:00 stderr F I0120 10:53:38.408499 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:38.408637928+00:00 stderr F I0120 10:53:38.408598 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:38.409276205+00:00 stderr F I0120 10:53:38.409193 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, 
time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:38.436038435+00:00 stderr F W0120 10:53:38.435870 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:38.439647239+00:00 stderr F I0120 10:53:38.439532 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.142344ms) 2026-01-20T10:53:39.323217158+00:00 stderr F I0120 10:53:39.323049 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:39.323321971+00:00 stderr F I0120 10:53:39.323285 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group by (_id, invoker) (cluster_installer{_id=\"\",invoker=\"hypershift\"})\nor\n0 * group by (_id, invoker) (cluster_installer{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2026-01-20T10:53:39.325848158+00:00 stderr F I0120 10:53:39.325809 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:39.325848158+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:39.325848158+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:39.325957461+00:00 stderr F I0120 10:53:39.325907 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (2.890017ms) 2026-01-20T10:53:39.325957461+00:00 stderr F I0120 10:53:39.325934 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:39.326045693+00:00 stderr F I0120 10:53:39.326000 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:39.326126875+00:00 stderr F I0120 10:53:39.326098 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:39.327004368+00:00 stderr F I0120 10:53:39.326942 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:39.349656013+00:00 stderr F W0120 10:53:39.349577 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:39.351782099+00:00 stderr F I0120 10:53:39.351669 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.731956ms) 2026-01-20T10:53:39.358273283+00:00 stderr F I0120 10:53:39.356818 1 cvo.go:745] Started syncing upgradeable 
"openshift-cluster-version/version" 2026-01-20T10:53:39.358273283+00:00 stderr F I0120 10:53:39.356869 1 upgradeable.go:69] Upgradeability last checked 5.355302036s ago, will not re-check until 2026-01-20T10:55:34Z 2026-01-20T10:53:39.358273283+00:00 stderr F I0120 10:53:39.356882 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (74.572µs) 2026-01-20T10:53:39.358273283+00:00 stderr F I0120 10:53:39.356895 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:39.358273283+00:00 stderr F I0120 10:53:39.356958 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:39.358273283+00:00 stderr F I0120 10:53:39.357008 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:39.358273283+00:00 stderr F I0120 10:53:39.357321 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:39.358273283+00:00 stderr F I0120 10:53:39.358192 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:39.358401686+00:00 stderr F I0120 10:53:39.358377 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is 
Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:39.358401686+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:39.358401686+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:39.358466638+00:00 stderr F I0120 10:53:39.358427 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (236.476µs) 2026-01-20T10:53:39.393037299+00:00 stderr F W0120 10:53:39.392897 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:39.395107265+00:00 stderr F I0120 10:53:39.395006 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.105467ms) 2026-01-20T10:53:39.395107265+00:00 stderr F I0120 10:53:39.395049 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:39.395224568+00:00 stderr F I0120 10:53:39.395161 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:39.395243378+00:00 stderr F I0120 10:53:39.395225 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:39.395563207+00:00 stderr F I0120 10:53:39.395495 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, 
time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:39.420892982+00:00 stderr F W0120 10:53:39.420760 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:39.423405989+00:00 stderr F I0120 10:53:39.423277 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.222452ms) 2026-01-20T10:53:40.327381517+00:00 stderr F I0120 10:53:40.327270 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:40.327897460+00:00 stderr F I0120 10:53:40.327856 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:40.327897460+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:40.327897460+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:40.330942711+00:00 stderr F I0120 10:53:40.327964 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (714.629µs) 2026-01-20T10:53:40.330942711+00:00 stderr F I0120 10:53:40.327999 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:40.330942711+00:00 stderr F I0120 10:53:40.328103 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:40.330942711+00:00 stderr F I0120 10:53:40.328189 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:40.330942711+00:00 stderr F I0120 10:53:40.328608 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:40.356947524+00:00 stderr F W0120 10:53:40.356857 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:40.359154953+00:00 stderr F I0120 10:53:40.359104 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.101259ms) 2026-01-20T10:53:41.144340514+00:00 stderr F I0120 10:53:41.144196 1 task_graph.go:481] Running 18 on worker 0 
2026-01-20T10:53:41.245480490+00:00 stderr F I0120 10:53:41.245375 1 task_graph.go:481] Running 19 on worker 0 2026-01-20T10:53:41.328968966+00:00 stderr F I0120 10:53:41.328876 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:41.330232100+00:00 stderr F I0120 10:53:41.330178 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:41.330232100+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:41.330232100+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:41.330375904+00:00 stderr F I0120 10:53:41.330336 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.475249ms) 2026-01-20T10:53:41.330387454+00:00 stderr F I0120 10:53:41.330374 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:41.330523127+00:00 stderr F I0120 10:53:41.330477 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:41.330585509+00:00 stderr F I0120 10:53:41.330564 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:41.331130063+00:00 stderr F I0120 10:53:41.331039 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:41.348128536+00:00 stderr F I0120 10:53:41.348026 1 task_graph.go:481] Running 20 on worker 0 2026-01-20T10:53:41.348542067+00:00 stderr F I0120 10:53:41.348372 1 task_graph.go:481] Running 21 on worker 0 2026-01-20T10:53:41.369657921+00:00 stderr F W0120 10:53:41.369585 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:41.372738613+00:00 stderr F I0120 10:53:41.372681 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.304208ms) 2026-01-20T10:53:41.744017810+00:00 stderr F I0120 10:53:41.743910 1 task_graph.go:481] Running 22 on worker 0 2026-01-20T10:53:41.844993441+00:00 stderr F I0120 10:53:41.844895 1 task_graph.go:481] Running 23 on worker 0 2026-01-20T10:53:42.331501691+00:00 stderr F I0120 10:53:42.331418 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:42.331870660+00:00 stderr F I0120 10:53:42.331840 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:42.331870660+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:42.331870660+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:42.331926092+00:00 stderr F I0120 10:53:42.331898 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (497.483µs) 2026-01-20T10:53:42.331926092+00:00 stderr F I0120 10:53:42.331919 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:42.331997714+00:00 stderr F I0120 10:53:42.331966 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:42.332038405+00:00 stderr F I0120 10:53:42.332021 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:42.332334433+00:00 stderr F I0120 10:53:42.332293 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:42.374310571+00:00 stderr F W0120 10:53:42.374222 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:42.375793152+00:00 stderr F I0120 10:53:42.375722 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.801189ms) 2026-01-20T10:53:42.744907521+00:00 stderr F I0120 10:53:42.744815 1 task_graph.go:481] Running 24 on worker 0 
2026-01-20T10:53:42.843047498+00:00 stderr F I0120 10:53:42.842965 1 task_graph.go:481] Running 25 on worker 0 2026-01-20T10:53:43.332831573+00:00 stderr F I0120 10:53:43.332740 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:43.333237025+00:00 stderr F I0120 10:53:43.333196 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:43.333237025+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:43.333237025+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:43.333321237+00:00 stderr F I0120 10:53:43.333291 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (563.816µs) 2026-01-20T10:53:43.333333797+00:00 stderr F I0120 10:53:43.333320 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:43.333419159+00:00 stderr F I0120 10:53:43.333387 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:43.333479311+00:00 stderr F I0120 10:53:43.333462 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:43.333931993+00:00 stderr F I0120 10:53:43.333867 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:43.372572553+00:00 stderr F W0120 10:53:43.372448 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:43.375349467+00:00 stderr F I0120 10:53:43.375291 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.967128ms) 2026-01-20T10:53:43.993672699+00:00 stderr F W0120 10:53:43.993545 1 helper.go:97] ConsoleQuickStart "ocs-install-tour" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:44.045285986+00:00 stderr F I0120 10:53:44.045151 1 task_graph.go:481] Running 26 on worker 0 2026-01-20T10:53:44.333840968+00:00 stderr F I0120 10:53:44.333748 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:44.334285750+00:00 stderr F I0120 10:53:44.334262 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:44.334285750+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:44.334285750+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:44.334377602+00:00 stderr F I0120 10:53:44.334359 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (631.226µs) 2026-01-20T10:53:44.334426183+00:00 stderr F I0120 10:53:44.334405 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:44.334506855+00:00 stderr F I0120 10:53:44.334486 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:44.334570787+00:00 stderr F I0120 10:53:44.334561 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:44.334818044+00:00 stderr F I0120 10:53:44.334788 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:44.376443853+00:00 stderr F W0120 10:53:44.376360 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:44.377956604+00:00 stderr F I0120 10:53:44.377910 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.50237ms) 2026-01-20T10:53:44.597409734+00:00 stderr F I0120 10:53:44.597320 1 task_graph.go:481] Running 27 on worker 1 
2026-01-20T10:53:44.695784246+00:00 stderr F I0120 10:53:44.695712 1 task_graph.go:481] Running 28 on worker 1 2026-01-20T10:53:45.045038947+00:00 stderr F I0120 10:53:45.044943 1 task_graph.go:481] Running 29 on worker 0 2026-01-20T10:53:45.094653509+00:00 stderr F I0120 10:53:45.094573 1 task_graph.go:481] Running 30 on worker 1 2026-01-20T10:53:45.294652321+00:00 stderr F I0120 10:53:45.294578 1 task_graph.go:481] Running 31 on worker 1 2026-01-20T10:53:45.334948704+00:00 stderr F I0120 10:53:45.334872 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:45.335412607+00:00 stderr F I0120 10:53:45.335378 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:45.335412607+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:45.335412607+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:45.335492839+00:00 stderr F I0120 10:53:45.335461 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (608.396µs) 2026-01-20T10:53:45.335501309+00:00 stderr F I0120 10:53:45.335491 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:45.335602462+00:00 stderr F I0120 10:53:45.335556 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:45.335694064+00:00 stderr F I0120 10:53:45.335629 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:45.336051354+00:00 stderr F I0120 10:53:45.336001 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:45.343656596+00:00 stderr F I0120 10:53:45.342803 1 task_graph.go:481] Running 32 on worker 0 2026-01-20T10:53:45.370865202+00:00 stderr F W0120 10:53:45.370767 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:45.373522113+00:00 stderr F I0120 10:53:45.373470 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.975792ms) 
2026-01-20T10:53:45.943723302+00:00 stderr F I0120 10:53:45.943641 1 task_graph.go:481] Running 33 on worker 0 2026-01-20T10:53:46.043987945+00:00 stderr F I0120 10:53:46.043897 1 task_graph.go:481] Running 34 on worker 0 2026-01-20T10:53:46.337049338+00:00 stderr F I0120 10:53:46.336277 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:46.337448439+00:00 stderr F I0120 10:53:46.337411 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:46.337448439+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:46.337448439+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:46.337519050+00:00 stderr F I0120 10:53:46.337484 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.222992ms) 2026-01-20T10:53:46.337519050+00:00 stderr F I0120 10:53:46.337509 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:46.337603963+00:00 stderr F I0120 10:53:46.337565 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:46.337648714+00:00 stderr F I0120 10:53:46.337630 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:46.337943722+00:00 stderr F I0120 10:53:46.337894 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:46.362662801+00:00 stderr F W0120 10:53:46.362603 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:46.365995459+00:00 stderr F I0120 10:53:46.365951 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.435158ms) 2026-01-20T10:53:46.795545571+00:00 stderr F I0120 10:53:46.795463 1 task_graph.go:481] Running 35 on worker 1 2026-01-20T10:53:47.338545385+00:00 stderr F I0120 10:53:47.338434 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:47.338964507+00:00 stderr F I0120 10:53:47.338904 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:47.338964507+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:47.338964507+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:47.339086610+00:00 stderr F I0120 10:53:47.339003 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (612.557µs) 2026-01-20T10:53:47.339086610+00:00 stderr F I0120 10:53:47.339035 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:47.339199973+00:00 stderr F I0120 10:53:47.339137 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:47.339252235+00:00 stderr F I0120 10:53:47.339218 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:47.339891291+00:00 stderr F I0120 10:53:47.339789 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:47.389005250+00:00 stderr F W0120 10:53:47.388893 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:47.392044951+00:00 stderr F I0120 10:53:47.391970 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.93074ms) 2026-01-20T10:53:48.339841447+00:00 stderr F I0120 10:53:48.339725 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:53:48.340368101+00:00 stderr F I0120 10:53:48.340321 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:48.340368101+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:48.340368101+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:48.340489654+00:00 stderr F I0120 10:53:48.340439 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (728.97µs) 2026-01-20T10:53:48.340502845+00:00 stderr F I0120 10:53:48.340492 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:48.340622938+00:00 stderr F I0120 10:53:48.340581 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:48.340846574+00:00 stderr F I0120 10:53:48.340791 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:48.341438079+00:00 stderr F I0120 10:53:48.341358 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:48.371364567+00:00 stderr F W0120 10:53:48.371227 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:48.374653685+00:00 stderr F I0120 10:53:48.374579 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.079458ms) 2026-01-20T10:53:48.545280703+00:00 stderr F I0120 10:53:48.545173 1 core.go:138] Updating ConfigMap openshift-machine-config-operator/kube-rbac-proxy due to diff: &v1.ConfigMap{ 2026-01-20T10:53:48.545280703+00:00 stderr F TypeMeta: v1.TypeMeta{ 2026-01-20T10:53:48.545280703+00:00 stderr F - Kind: "", 2026-01-20T10:53:48.545280703+00:00 stderr F + Kind: "ConfigMap", 2026-01-20T10:53:48.545280703+00:00 stderr F - APIVersion: "", 2026-01-20T10:53:48.545280703+00:00 stderr F + APIVersion: "v1", 2026-01-20T10:53:48.545280703+00:00 stderr F }, 2026-01-20T10:53:48.545280703+00:00 stderr F ObjectMeta: v1.ObjectMeta{ 2026-01-20T10:53:48.545280703+00:00 stderr F ... 
// 2 identical fields 2026-01-20T10:53:48.545280703+00:00 stderr F Namespace: "openshift-machine-config-operator", 2026-01-20T10:53:48.545280703+00:00 stderr F SelfLink: "", 2026-01-20T10:53:48.545280703+00:00 stderr F - UID: "ba7edbb4-1ba2-49b6-98a7-d849069e9f80", 2026-01-20T10:53:48.545280703+00:00 stderr F + UID: "", 2026-01-20T10:53:48.545280703+00:00 stderr F - ResourceVersion: "37089", 2026-01-20T10:53:48.545280703+00:00 stderr F + ResourceVersion: "", 2026-01-20T10:53:48.545280703+00:00 stderr F Generation: 0, 2026-01-20T10:53:48.545280703+00:00 stderr F - CreationTimestamp: v1.Time{Time: s"2024-06-26 12:39:23 +0000 UTC"}, 2026-01-20T10:53:48.545280703+00:00 stderr F + CreationTimestamp: v1.Time{}, 2026-01-20T10:53:48.545280703+00:00 stderr F DeletionTimestamp: nil, 2026-01-20T10:53:48.545280703+00:00 stderr F DeletionGracePeriodSeconds: nil, 2026-01-20T10:53:48.545280703+00:00 stderr F ... // 2 identical fields 2026-01-20T10:53:48.545280703+00:00 stderr F OwnerReferences: {{APIVersion: "config.openshift.io/v1", Kind: "ClusterVersion", Name: "version", UID: "a73cbaa6-40d3-4694-9b98-c0a6eed45825", ...}}, 2026-01-20T10:53:48.545280703+00:00 stderr F Finalizers: nil, 2026-01-20T10:53:48.545280703+00:00 stderr F - ManagedFields: []v1.ManagedFieldsEntry{ 2026-01-20T10:53:48.545280703+00:00 stderr F - { 2026-01-20T10:53:48.545280703+00:00 stderr F - Manager: "cluster-version-operator", 2026-01-20T10:53:48.545280703+00:00 stderr F - Operation: "Update", 2026-01-20T10:53:48.545280703+00:00 stderr F - APIVersion: "v1", 2026-01-20T10:53:48.545280703+00:00 stderr F - Time: s"2025-08-13 20:39:50 +0000 UTC", 2026-01-20T10:53:48.545280703+00:00 stderr F - FieldsType: "FieldsV1", 2026-01-20T10:53:48.545280703+00:00 stderr F - FieldsV1: s`{"f:data":{},"f:metadata":{"f:annotations":{".":{},"f:include.re`..., 2026-01-20T10:53:48.545280703+00:00 stderr F - }, 2026-01-20T10:53:48.545280703+00:00 stderr F - { 2026-01-20T10:53:48.545280703+00:00 stderr F - Manager: 
"machine-config-operator", 2026-01-20T10:53:48.545280703+00:00 stderr F - Operation: "Update", 2026-01-20T10:53:48.545280703+00:00 stderr F - APIVersion: "v1", 2026-01-20T10:53:48.545280703+00:00 stderr F - Time: s"2025-08-13 20:39:50 +0000 UTC", 2026-01-20T10:53:48.545280703+00:00 stderr F - FieldsType: "FieldsV1", 2026-01-20T10:53:48.545280703+00:00 stderr F - FieldsV1: s`{"f:data":{"f:config-file.yaml":{}}}`, 2026-01-20T10:53:48.545280703+00:00 stderr F - }, 2026-01-20T10:53:48.545280703+00:00 stderr F - }, 2026-01-20T10:53:48.545280703+00:00 stderr F + ManagedFields: nil, 2026-01-20T10:53:48.545280703+00:00 stderr F }, 2026-01-20T10:53:48.545280703+00:00 stderr F Immutable: nil, 2026-01-20T10:53:48.545280703+00:00 stderr F Data: {"config-file.yaml": "authorization:\n resourceAttributes:\n apiVersion: v1\n reso"...}, 2026-01-20T10:53:48.545280703+00:00 stderr F BinaryData: nil, 2026-01-20T10:53:48.545280703+00:00 stderr F } 2026-01-20T10:53:48.745450709+00:00 stderr F I0120 10:53:48.745320 1 task_graph.go:481] Running 36 on worker 0 2026-01-20T10:53:48.843859862+00:00 stderr F I0120 10:53:48.843714 1 task_graph.go:481] Running 37 on worker 0 2026-01-20T10:53:49.044461648+00:00 stderr F W0120 10:53:49.044349 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-etcd" not found. It either has already been removed or it has never been installed on this cluster. 
2026-01-20T10:53:49.044461648+00:00 stderr F I0120 10:53:49.044411 1 task_graph.go:481] Running 38 on worker 0 2026-01-20T10:53:49.341563177+00:00 stderr F I0120 10:53:49.340673 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:49.341563177+00:00 stderr F I0120 10:53:49.341099 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:49.341563177+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:49.341563177+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:49.341563177+00:00 stderr F I0120 10:53:49.341160 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (500.293µs) 2026-01-20T10:53:49.341563177+00:00 stderr F I0120 10:53:49.341181 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:49.341563177+00:00 stderr F I0120 10:53:49.341239 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:49.341563177+00:00 stderr F I0120 10:53:49.341297 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:49.341652390+00:00 stderr F I0120 10:53:49.341611 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:49.369361279+00:00 stderr F W0120 10:53:49.369218 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:49.371390723+00:00 stderr F I0120 10:53:49.371292 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.107203ms) 2026-01-20T10:53:49.444038330+00:00 stderr F I0120 10:53:49.443925 1 task_graph.go:481] Running 39 on worker 0 2026-01-20T10:53:49.699426467+00:00 stderr F W0120 10:53:49.699336 1 helper.go:97] imagestream "openshift/hello-openshift" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:49.993844245+00:00 stderr F I0120 10:53:49.993741 1 task_graph.go:481] Running 40 on worker 1 2026-01-20T10:53:50.294850799+00:00 stderr F W0120 10:53:50.294730 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-cluster-total" not found. It either has already been removed or it has never been installed on this cluster. 
2026-01-20T10:53:50.343844385+00:00 stderr F I0120 10:53:50.343779 1 task_graph.go:481] Running 41 on worker 0 2026-01-20T10:53:50.366936930+00:00 stderr F I0120 10:53:50.366822 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:50.367405254+00:00 stderr F I0120 10:53:50.367355 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:50.367405254+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:50.367405254+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:50.367509076+00:00 stderr F I0120 10:53:50.367462 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (656.808µs) 2026-01-20T10:53:50.367605399+00:00 stderr F I0120 10:53:50.367547 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:50.367765803+00:00 stderr F I0120 10:53:50.367714 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:50.367831335+00:00 stderr F I0120 10:53:50.367807 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:50.369145919+00:00 stderr F I0120 10:53:50.369016 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:50.412886766+00:00 stderr F W0120 10:53:50.412791 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:50.417289833+00:00 stderr F I0120 10:53:50.417186 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.645132ms) 2026-01-20T10:53:50.417289833+00:00 stderr F I0120 10:53:50.417252 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:50.417426756+00:00 stderr F I0120 10:53:50.417361 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:50.417483298+00:00 stderr F I0120 10:53:50.417449 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:50.417968331+00:00 stderr F I0120 10:53:50.417881 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 
2026-01-20T10:53:50.476039078+00:00 stderr F W0120 10:53:50.475918 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:50.478158695+00:00 stderr F I0120 10:53:50.478103 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (60.848653ms) 2026-01-20T10:53:50.493270018+00:00 stderr F W0120 10:53:50.493212 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-cluster" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:50.694676527+00:00 stderr F W0120 10:53:50.694556 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-namespace" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:50.895345606+00:00 stderr F W0120 10:53:50.895260 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-node" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:51.093261563+00:00 stderr F W0120 10:53:51.093177 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-pod" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:51.294623121+00:00 stderr F W0120 10:53:51.294531 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workload" not found. It either has already been removed or it has never been installed on this cluster. 
2026-01-20T10:53:51.367933035+00:00 stderr F I0120 10:53:51.367834 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:51.368588412+00:00 stderr F I0120 10:53:51.368547 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:51.368588412+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:51.368588412+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:51.368758417+00:00 stderr F I0120 10:53:51.368724 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (907.604µs) 2026-01-20T10:53:51.368826168+00:00 stderr F I0120 10:53:51.368804 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:51.368988023+00:00 stderr F I0120 10:53:51.368948 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:51.369143877+00:00 stderr F I0120 10:53:51.369119 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:51.369702351+00:00 stderr F I0120 10:53:51.369641 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:51.402899946+00:00 stderr F W0120 10:53:51.402814 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:51.405257920+00:00 stderr F I0120 10:53:51.405200 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.3946ms) 2026-01-20T10:53:51.493892622+00:00 stderr F W0120 10:53:51.493786 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workloads-namespace" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:51.732222945+00:00 stderr F W0120 10:53:51.732130 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-namespace-by-pod" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:51.744467561+00:00 stderr F I0120 10:53:51.744420 1 task_graph.go:481] Running 42 on worker 0 2026-01-20T10:53:51.895234770+00:00 stderr F W0120 10:53:51.895047 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-node-cluster-rsrc-use" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:52.050719564+00:00 stderr F I0120 10:53:52.050544 1 task_graph.go:481] Running 43 on worker 0 2026-01-20T10:53:52.095103518+00:00 stderr F W0120 10:53:52.094988 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-node-rsrc-use" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:52.293399013+00:00 stderr F W0120 10:53:52.293308 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-pod-total" not found. 
It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:52.369567383+00:00 stderr F I0120 10:53:52.369464 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:52.370144999+00:00 stderr F I0120 10:53:52.370028 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:52.370144999+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:52.370144999+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:52.370215431+00:00 stderr F I0120 10:53:52.370163 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (737.079µs) 2026-01-20T10:53:52.370215431+00:00 stderr F I0120 10:53:52.370189 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:52.370317844+00:00 stderr F I0120 10:53:52.370260 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:52.370377386+00:00 stderr F I0120 10:53:52.370347 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:52.370887909+00:00 stderr F I0120 10:53:52.370812 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} 
last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:52.403202240+00:00 stderr F W0120 10:53:52.403113 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:52.407026283+00:00 stderr F I0120 10:53:52.406971 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.778201ms) 2026-01-20T10:53:52.495020048+00:00 stderr F W0120 10:53:52.494894 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-prometheus" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:52.495020048+00:00 stderr F I0120 10:53:52.494951 1 task_graph.go:481] Running 44 on worker 1 2026-01-20T10:53:52.596532354+00:00 stderr F I0120 10:53:52.596438 1 task_graph.go:481] Running 45 on worker 1 2026-01-20T10:53:53.342765337+00:00 stderr F W0120 10:53:53.342691 1 helper.go:97] PrometheusRule "openshift-kube-apiserver/kube-apiserver-recording-rules" not found. It either has already been removed or it has never been installed on this cluster. 
2026-01-20T10:53:53.370895136+00:00 stderr F I0120 10:53:53.370797 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:53.371412530+00:00 stderr F I0120 10:53:53.371370 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:53.371412530+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:53.371412530+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:53.371622215+00:00 stderr F I0120 10:53:53.371585 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (807.001µs) 2026-01-20T10:53:53.371630516+00:00 stderr F I0120 10:53:53.371619 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:53.371759789+00:00 stderr F I0120 10:53:53.371704 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:53.371850241+00:00 stderr F I0120 10:53:53.371826 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:53.372381105+00:00 stderr F I0120 10:53:53.372311 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:53.408162290+00:00 stderr F W0120 10:53:53.408100 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:53.409616379+00:00 stderr F I0120 10:53:53.409571 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.951082ms) 2026-01-20T10:53:53.594128676+00:00 stderr F I0120 10:53:53.594030 1 task_graph.go:481] Running 46 on worker 1 2026-01-20T10:53:53.694440050+00:00 stderr F I0120 10:53:53.694358 1 task_graph.go:481] Running 47 on worker 1 2026-01-20T10:53:54.043616759+00:00 stderr F W0120 10:53:54.043543 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-api-performance" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:53:54.243704602+00:00 stderr F I0120 10:53:54.243602 1 task_graph.go:481] Running 48 on worker 0 2026-01-20T10:53:54.372102675+00:00 stderr F I0120 10:53:54.371890 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:54.373372339+00:00 stderr F I0120 10:53:54.373289 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:54.373372339+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:54.373372339+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:54.373486862+00:00 stderr F I0120 10:53:54.373421 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.558432ms) 2026-01-20T10:53:54.373486862+00:00 stderr F I0120 10:53:54.373465 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:54.373642846+00:00 stderr F I0120 10:53:54.373566 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:54.373723629+00:00 stderr F I0120 10:53:54.373682 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:54.374446398+00:00 stderr F I0120 10:53:54.374348 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:54.417535266+00:00 stderr F W0120 10:53:54.417398 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:54.422448117+00:00 stderr F I0120 10:53:54.422347 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.868302ms) 2026-01-20T10:53:54.748688004+00:00 stderr F I0120 10:53:54.748584 1 task_graph.go:481] Running 49 on worker 0 
2026-01-20T10:53:55.044295364+00:00 stderr F I0120 10:53:55.044198 1 task_graph.go:481] Running 50 on worker 0 2026-01-20T10:53:55.323131097+00:00 stderr F I0120 10:53:55.305212 1 task_graph.go:481] Running 51 on worker 1 2026-01-20T10:53:55.376114818+00:00 stderr F I0120 10:53:55.375310 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:55.376114818+00:00 stderr F I0120 10:53:55.375880 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:55.376114818+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:53:55.376114818+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:53:55.376114818+00:00 stderr F I0120 10:53:55.375938 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (648.017µs) 2026-01-20T10:53:55.376114818+00:00 stderr F I0120 10:53:55.375959 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:53:55.376114818+00:00 stderr F I0120 10:53:55.376092 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:53:55.376189511+00:00 stderr F I0120 10:53:55.376160 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:53:55.379190561+00:00 stderr F I0120 10:53:55.376715 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:53:55.425779943+00:00 stderr F W0120 10:53:55.425703 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:53:55.427899369+00:00 stderr F I0120 10:53:55.427851 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.891234ms) 2026-01-20T10:53:56.377008099+00:00 stderr F I0120 10:53:56.376908 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:53:56.377750799+00:00 stderr F I0120 10:53:56.377680 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:53:56.377750799+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:53:56.377750799+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:53:56.378005905+00:00 stderr F I0120 10:53:56.377851       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (960.375µs)
2026-01-20T10:53:56.378017056+00:00 stderr F I0120 10:53:56.377960       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:53:56.378196200+00:00 stderr F I0120 10:53:56.378150       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:53:56.378281483+00:00 stderr F I0120 10:53:56.378255       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:53:56.378975591+00:00 stderr F I0120 10:53:56.378906       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:53:56.407367889+00:00 stderr F W0120 10:53:56.407267       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:53:56.409414263+00:00 stderr F I0120 10:53:56.409354       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.403817ms)
2026-01-20T10:53:57.378695931+00:00 stderr F I0120 10:53:57.378538       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:53:57.379369529+00:00 stderr F I0120 10:53:57.379280       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:53:57.379369529+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:53:57.379369529+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:53:57.379430841+00:00 stderr F I0120 10:53:57.379394       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (902.205µs)
2026-01-20T10:53:57.379430841+00:00 stderr F I0120 10:53:57.379419       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:53:57.379577655+00:00 stderr F I0120 10:53:57.379497       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:53:57.379601166+00:00 stderr F I0120 10:53:57.379584       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:53:57.380108889+00:00 stderr F I0120 10:53:57.380005       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:53:57.427302167+00:00 stderr F W0120 10:53:57.427173       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:53:57.430796330+00:00 stderr F I0120 10:53:57.430703       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.278956ms)
2026-01-20T10:53:57.502629605+00:00 stderr F I0120 10:53:57.502560       1 task_graph.go:481] Running 52 on worker 0
2026-01-20T10:53:58.380445465+00:00 stderr F I0120 10:53:58.380338       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:53:58.380726372+00:00 stderr F I0120 10:53:58.380681       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:53:58.380726372+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:53:58.380726372+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:53:58.380767583+00:00 stderr F I0120 10:53:58.380737       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (409.961µs)
2026-01-20T10:53:58.380767583+00:00 stderr F I0120 10:53:58.380759       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:53:58.380846145+00:00 stderr F I0120 10:53:58.380802       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:53:58.380878006+00:00 stderr F I0120 10:53:58.380856       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:53:58.381158854+00:00 stderr F I0120 10:53:58.381107       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:53:58.404338451+00:00 stderr F W0120 10:53:58.404250       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:53:58.407134616+00:00 stderr F I0120 10:53:58.406986       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.223219ms)
2026-01-20T10:53:58.492754348+00:00 stderr F W0120 10:53:58.492654       1 helper.go:97] PrometheusRule "openshift-authentication-operator/authentication-operator" not found. It either has already been removed or it has never been installed on this cluster.
2026-01-20T10:53:58.493241851+00:00 stderr F I0120 10:53:58.493202       1 task_graph.go:481] Running 53 on worker 0
2026-01-20T10:53:58.493384295+00:00 stderr F I0120 10:53:58.493350       1 task_graph.go:481] Running 54 on worker 0
2026-01-20T10:53:58.895281439+00:00 stderr F I0120 10:53:58.895217       1 task_graph.go:481] Running 55 on worker 0
2026-01-20T10:53:59.381549961+00:00 stderr F I0120 10:53:59.381446       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:53:59.382528688+00:00 stderr F I0120 10:53:59.382466       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:53:59.382528688+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:53:59.382528688+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:53:59.382565399+00:00 stderr F I0120 10:53:59.382537       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.1074ms)
2026-01-20T10:53:59.382565399+00:00 stderr F I0120 10:53:59.382559       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:53:59.382649401+00:00 stderr F I0120 10:53:59.382605       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:53:59.382694872+00:00 stderr F I0120 10:53:59.382664       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:53:59.382989460+00:00 stderr F I0120 10:53:59.382926       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:53:59.414507740+00:00 stderr F W0120 10:53:59.414423       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:53:59.415884967+00:00 stderr F I0120 10:53:59.415818       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.260067ms)
2026-01-20T10:53:59.793897463+00:00 stderr F I0120 10:53:59.793764       1 task_graph.go:481] Running 56 on worker 0
2026-01-20T10:54:00.383102970+00:00 stderr F I0120 10:54:00.382910       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:00.383359957+00:00 stderr F I0120 10:54:00.383248       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:00.383359957+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:00.383359957+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:00.383359957+00:00 stderr F I0120 10:54:00.383311       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (414.421µs)
2026-01-20T10:54:00.383359957+00:00 stderr F I0120 10:54:00.383353       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:00.383457489+00:00 stderr F I0120 10:54:00.383401       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:00.383485210+00:00 stderr F I0120 10:54:00.383471       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:00.383838979+00:00 stderr F I0120 10:54:00.383734       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:00.407899841+00:00 stderr F W0120 10:54:00.407805       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:00.409477683+00:00 stderr F I0120 10:54:00.409416       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.058305ms)
2026-01-20T10:54:01.383975030+00:00 stderr F I0120 10:54:01.383850       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:01.384461353+00:00 stderr F I0120 10:54:01.384424       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:01.384461353+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:01.384461353+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:01.384556735+00:00 stderr F I0120 10:54:01.384523       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (688.528µs)
2026-01-20T10:54:01.384556735+00:00 stderr F I0120 10:54:01.384549       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:01.384675088+00:00 stderr F I0120 10:54:01.384623       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:01.384752000+00:00 stderr F I0120 10:54:01.384720       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:01.385235934+00:00 stderr F I0120 10:54:01.385160       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:01.426148954+00:00 stderr F W0120 10:54:01.426052       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:01.429840212+00:00 stderr F I0120 10:54:01.429803       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.247426ms)
2026-01-20T10:54:01.692211286+00:00 stderr F W0120 10:54:01.692125       1 helper.go:97] configmap "openshift-machine-api/cluster-autoscaler-operator-ca" not found. It either has already been removed or it has never been installed on this cluster.
2026-01-20T10:54:01.793445395+00:00 stderr F W0120 10:54:01.793362       1 helper.go:97] PrometheusRule "openshift-machine-api/cluster-autoscaler-operator-rules" not found. It either has already been removed or it has never been installed on this cluster.
2026-01-20T10:54:01.793445395+00:00 stderr F I0120 10:54:01.793408       1 task_graph.go:481] Running 57 on worker 0
2026-01-20T10:54:01.844813395+00:00 stderr F I0120 10:54:01.844732       1 task_graph.go:481] Running 58 on worker 1
2026-01-20T10:54:01.895373493+00:00 stderr F I0120 10:54:01.895286       1 task_graph.go:481] Running 59 on worker 0
2026-01-20T10:54:02.385599391+00:00 stderr F I0120 10:54:02.385482       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:02.386013601+00:00 stderr F I0120 10:54:02.385944       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:02.386013601+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:02.386013601+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:02.386107134+00:00 stderr F I0120 10:54:02.386038       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (570.125µs)
2026-01-20T10:54:02.386134955+00:00 stderr F I0120 10:54:02.386109       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:02.386225667+00:00 stderr F I0120 10:54:02.386179       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:02.386271068+00:00 stderr F I0120 10:54:02.386260       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:02.386729370+00:00 stderr F I0120 10:54:02.386644       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:02.431627557+00:00 stderr F W0120 10:54:02.431537       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:02.434599836+00:00 stderr F I0120 10:54:02.434550       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.437071ms)
2026-01-20T10:54:02.495897190+00:00 stderr F I0120 10:54:02.495836       1 task_graph.go:481] Running 60 on worker 0
2026-01-20T10:54:02.593164204+00:00 stderr F I0120 10:54:02.593040       1 task_graph.go:481] Running 61 on worker 0
2026-01-20T10:54:02.741890248+00:00 stderr F I0120 10:54:02.741814       1 task_graph.go:481] Running 62 on worker 1
2026-01-20T10:54:02.995301372+00:00 stderr F I0120 10:54:02.995218       1 task_graph.go:481] Running 63 on worker 0
2026-01-20T10:54:03.386500461+00:00 stderr F I0120 10:54:03.386413       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:03.386857010+00:00 stderr F I0120 10:54:03.386818       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:03.386857010+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:03.386857010+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:03.386924272+00:00 stderr F I0120 10:54:03.386893       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (496.613µs)
2026-01-20T10:54:03.386924272+00:00 stderr F I0120 10:54:03.386917       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:03.387009114+00:00 stderr F I0120 10:54:03.386972       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:03.387142528+00:00 stderr F I0120 10:54:03.387037       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:03.387519378+00:00 stderr F I0120 10:54:03.387463       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:03.411892578+00:00 stderr F W0120 10:54:03.411824       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:03.413929672+00:00 stderr F I0120 10:54:03.413892       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.970469ms)
2026-01-20T10:54:04.387551986+00:00 stderr F I0120 10:54:04.387433       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:04.389028476+00:00 stderr F I0120 10:54:04.388951       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:04.389028476+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:04.389028476+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:04.389092857+00:00 stderr F I0120 10:54:04.389039       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.656895ms)
2026-01-20T10:54:04.389092857+00:00 stderr F I0120 10:54:04.389085       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:04.389198260+00:00 stderr F I0120 10:54:04.389149       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:04.389262152+00:00 stderr F I0120 10:54:04.389229       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:04.389615871+00:00 stderr F I0120 10:54:04.389552       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:04.424754608+00:00 stderr F W0120 10:54:04.424654       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:04.427882902+00:00 stderr F I0120 10:54:04.427800       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.731063ms)
2026-01-20T10:54:04.594745369+00:00 stderr F I0120 10:54:04.594660       1 task_graph.go:481] Running 64 on worker 0
2026-01-20T10:54:05.385543260+00:00 stderr F I0120 10:54:05.385401       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:05.385608272+00:00 stderr F I0120 10:54:05.385560       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:05.385666373+00:00 stderr F I0120 10:54:05.385625       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:05.386120915+00:00 stderr F I0120 10:54:05.386002       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:05.389513236+00:00 stderr F I0120 10:54:05.389453       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:05.389862415+00:00 stderr F I0120 10:54:05.389797       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:05.389862415+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:05.389862415+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:05.389905106+00:00 stderr F I0120 10:54:05.389862       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (414.141µs)
2026-01-20T10:54:05.417366588+00:00 stderr F W0120 10:54:05.417285       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:05.419480085+00:00 stderr F I0120 10:54:05.419407       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.014468ms)
2026-01-20T10:54:05.419480085+00:00 stderr F I0120 10:54:05.419441       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:05.419560737+00:00 stderr F I0120 10:54:05.419513       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:05.419625949+00:00 stderr F I0120 10:54:05.419574       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:05.419959677+00:00 stderr F I0120 10:54:05.419879       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:05.462208883+00:00 stderr F W0120 10:54:05.462121       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:05.463819426+00:00 stderr F I0120 10:54:05.463781       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.337081ms)
2026-01-20T10:54:05.595007853+00:00 stderr F I0120 10:54:05.594921       1 task_graph.go:481] Running 65 on worker 0
2026-01-20T10:54:06.142928360+00:00 stderr F W0120 10:54:06.142806       1 helper.go:97] configmap "openshift-operator-lifecycle-manager/olm-operators" not found. It either has already been removed or it has never been installed on this cluster.
2026-01-20T10:54:06.244027295+00:00 stderr F W0120 10:54:06.243900       1 helper.go:97] CatalogSource "openshift-operator-lifecycle-manager/olm-operators" not found. It either has already been removed or it has never been installed on this cluster.
2026-01-20T10:54:06.390118019+00:00 stderr F I0120 10:54:06.389963       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:06.390451778+00:00 stderr F I0120 10:54:06.390409       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:06.390451778+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:06.390451778+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:06.390512449+00:00 stderr F I0120 10:54:06.390478       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (531.654µs)
2026-01-20T10:54:06.390512449+00:00 stderr F I0120 10:54:06.390504       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:06.390599641+00:00 stderr F I0120 10:54:06.390556       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:06.390640683+00:00 stderr F I0120 10:54:06.390621       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:06.391002662+00:00 stderr F I0120 10:54:06.390937       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:06.429618762+00:00 stderr F W0120 10:54:06.429548       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:06.431089031+00:00 stderr F I0120 10:54:06.431001       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.49533ms)
2026-01-20T10:54:06.494234854+00:00 stderr F I0120 10:54:06.494136       1 task_graph.go:481] Running 66 on worker 0
2026-01-20T10:54:06.494288175+00:00 stderr F I0120 10:54:06.494239       1 task_graph.go:481] Running 67 on worker 0
2026-01-20T10:54:06.543373974+00:00 stderr F W0120 10:54:06.543267       1 helper.go:97] Subscription "openshift-operator-lifecycle-manager/packageserver" not found. It either has already been removed or it has never been installed on this cluster.
2026-01-20T10:54:06.743501679+00:00 stderr F I0120 10:54:06.743398       1 task_graph.go:481] Running 68 on worker 1
2026-01-20T10:54:06.893903358+00:00 stderr F I0120 10:54:06.893784       1 task_graph.go:481] Running 69 on worker 0
2026-01-20T10:54:06.946415088+00:00 stderr F I0120 10:54:06.946319       1 task_graph.go:481] Running 70 on worker 1
2026-01-20T10:54:06.946415088+00:00 stderr F I0120 10:54:06.946403       1 task_graph.go:481] Running 71 on worker 1
2026-01-20T10:54:06.998033684+00:00 stderr F I0120 10:54:06.997940       1 task_graph.go:481] Running 72 on worker 0
2026-01-20T10:54:07.044136803+00:00 stderr F I0120 10:54:07.043999       1 task_graph.go:481] Running 73 on worker 1
2026-01-20T10:54:07.391403221+00:00 stderr F I0120 10:54:07.391290       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:07.391734479+00:00 stderr F I0120 10:54:07.391698       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:07.391734479+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:07.391734479+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:07.391816932+00:00 stderr F I0120 10:54:07.391784       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (525.475µs)
2026-01-20T10:54:07.391816932+00:00 stderr F I0120 10:54:07.391808       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:07.391928895+00:00 stderr F I0120 10:54:07.391886       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:07.391971466+00:00 stderr F I0120 10:54:07.391954       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:07.392388627+00:00 stderr F I0120 10:54:07.392330       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:07.417383713+00:00 stderr F W0120 10:54:07.417304       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:07.418896183+00:00 stderr F I0120 10:54:07.418867       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.057601ms)
2026-01-20T10:54:08.194586820+00:00 stderr F I0120 10:54:08.194517       1 task_graph.go:481] Running 74 on worker 0
2026-01-20T10:54:08.392529417+00:00 stderr F I0120 10:54:08.392439       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:08.392830625+00:00 stderr F I0120 10:54:08.392793       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:08.392830625+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:08.392830625+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:08.392984029+00:00 stderr F I0120 10:54:08.392950       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (526.784µs)
2026-01-20T10:54:08.392984029+00:00 stderr F I0120 10:54:08.392973       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:08.393079191+00:00 stderr F I0120 10:54:08.393025       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:08.393133173+00:00 stderr F I0120 10:54:08.393110       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:08.393496672+00:00 stderr F I0120 10:54:08.393428       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:08.423960544+00:00 stderr F W0120 10:54:08.423887       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:08.427260382+00:00 stderr F I0120 10:54:08.427222       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.239373ms)
2026-01-20T10:54:09.393806598+00:00 stderr F I0120 10:54:09.393720       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:09.394279781+00:00 stderr F I0120 10:54:09.394241       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:09.394279781+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:09.394279781+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:09.394380124+00:00 stderr F I0120 10:54:09.394354       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (652.528µs)
2026-01-20T10:54:09.394399634+00:00 stderr F I0120 10:54:09.394384       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:09.394514787+00:00 stderr F I0120 10:54:09.394474       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:09.394594569+00:00 stderr F I0120 10:54:09.394575       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:09.395052112+00:00 stderr F I0120 10:54:09.395003       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:09.450509730+00:00 stderr F W0120 10:54:09.450438       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:09.452017640+00:00 stderr F I0120 10:54:09.451947       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (57.562484ms)
2026-01-20T10:54:09.794189861+00:00 stderr F W0120 10:54:09.794032       1 warnings.go:70]
flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+; use flowcontrol.apiserver.k8s.io/v1 FlowSchema 2026-01-20T10:54:09.795046375+00:00 stderr F I0120 10:54:09.795004 1 task_graph.go:481] Running 75 on worker 0 2026-01-20T10:54:10.293509152+00:00 stderr F I0120 10:54:10.293399 1 task_graph.go:481] Running 76 on worker 0 2026-01-20T10:54:10.395157462+00:00 stderr F I0120 10:54:10.395037 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:10.395464240+00:00 stderr F I0120 10:54:10.395433 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:10.395464240+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:10.395464240+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:10.395540972+00:00 stderr F I0120 10:54:10.395509 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (497.593µs) 2026-01-20T10:54:10.395540972+00:00 stderr F I0120 10:54:10.395534 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:10.395644924+00:00 stderr F I0120 10:54:10.395581 1 task_graph.go:481] Running 77 on worker 0 2026-01-20T10:54:10.395644924+00:00 stderr F I0120 10:54:10.395618 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:10.395707706+00:00 stderr F I0120 10:54:10.395692 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:10.396097756+00:00 stderr F I0120 10:54:10.396019 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:10.396249580+00:00 stderr F I0120 10:54:10.396187 1 task_graph.go:481] Running 78 on worker 0 2026-01-20T10:54:10.446911131+00:00 stderr F W0120 10:54:10.446808 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:10.448951205+00:00 stderr F I0120 
10:54:10.448857 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.319222ms) 2026-01-20T10:54:11.194779217+00:00 stderr F W0120 10:54:11.194651 1 helper.go:97] clusterrolebinding "default-account-openshift-machine-config-operator" not found. It either has already been removed or it has never been installed on this cluster. 2026-01-20T10:54:11.194779217+00:00 stderr F I0120 10:54:11.194748 1 task_graph.go:478] Canceled worker 1 while waiting for work 2026-01-20T10:54:11.194779217+00:00 stderr F I0120 10:54:11.194762 1 task_graph.go:478] Canceled worker 0 while waiting for work 2026-01-20T10:54:11.194852409+00:00 stderr F I0120 10:54:11.194775 1 task_graph.go:527] Workers finished 2026-01-20T10:54:11.194852409+00:00 stderr F I0120 10:54:11.194792 1 task_graph.go:550] Result of work: [] 2026-01-20T10:54:11.396102334+00:00 stderr F I0120 10:54:11.395968 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:11.396457113+00:00 stderr F I0120 10:54:11.396396 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:11.396457113+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:11.396457113+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:11.396485504+00:00 stderr F I0120 10:54:11.396466 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (515.123µs) 2026-01-20T10:54:11.396501714+00:00 stderr F I0120 10:54:11.396486 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:11.396595717+00:00 stderr F I0120 10:54:11.396539 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:11.396631928+00:00 stderr F I0120 10:54:11.396602 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:11.396936586+00:00 stderr F I0120 10:54:11.396882 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:11.447166255+00:00 stderr F W0120 10:54:11.446973 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:11.450917905+00:00 stderr F I0120 10:54:11.450492 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.99702ms) 2026-01-20T10:54:12.396764258+00:00 stderr F I0120 10:54:12.396680 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:12.397386705+00:00 stderr F I0120 10:54:12.397345 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:12.397386705+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:12.397386705+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:12.397560330+00:00 stderr F I0120 10:54:12.397520 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (855.823µs) 2026-01-20T10:54:12.397645612+00:00 stderr F I0120 10:54:12.397615 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:12.397788476+00:00 stderr F I0120 10:54:12.397751 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:12.397896278+00:00 stderr F I0120 10:54:12.397875 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:12.398510245+00:00 stderr F I0120 10:54:12.398439 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:12.446011851+00:00 stderr F W0120 10:54:12.445934 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:12.449908934+00:00 stderr F I0120 10:54:12.449825 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.205901ms) 2026-01-20T10:54:13.398112581+00:00 stderr F I0120 10:54:13.398022 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:13.398642035+00:00 stderr F I0120 10:54:13.398599 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:13.398642035+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:13.398642035+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:13.398737658+00:00 stderr F I0120 10:54:13.398708 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (723.349µs) 2026-01-20T10:54:13.398745738+00:00 stderr F I0120 10:54:13.398736 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:13.398837190+00:00 stderr F I0120 10:54:13.398802 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:13.398902212+00:00 stderr F I0120 10:54:13.398879 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:13.399363534+00:00 stderr F I0120 10:54:13.399303 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:13.422202023+00:00 stderr F W0120 10:54:13.422123 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:13.425161722+00:00 stderr F I0120 10:54:13.425116 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.375232ms) 2026-01-20T10:54:14.399227167+00:00 stderr F I0120 10:54:14.399173 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:14.399557956+00:00 stderr F I0120 10:54:14.399541 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:14.399557956+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:14.399557956+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:14.399626278+00:00 stderr F I0120 10:54:14.399611 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (450.951µs) 2026-01-20T10:54:14.399657679+00:00 stderr F I0120 10:54:14.399648 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:14.399722140+00:00 stderr F I0120 10:54:14.399703 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:14.399779422+00:00 stderr F I0120 10:54:14.399770 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:14.400041939+00:00 stderr F I0120 10:54:14.400011 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:14.432576947+00:00 stderr F W0120 10:54:14.432446 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:14.433974924+00:00 stderr F I0120 10:54:14.433913 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.263273ms) 2026-01-20T10:54:15.399736478+00:00 stderr F I0120 10:54:15.399681 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:15.400101887+00:00 stderr F I0120 10:54:15.400084 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:15.400101887+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:15.400101887+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:15.400186629+00:00 stderr F I0120 10:54:15.400169 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (502.193µs) 2026-01-20T10:54:15.400219140+00:00 stderr F I0120 10:54:15.400208 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:15.400298612+00:00 stderr F I0120 10:54:15.400279 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:15.400363594+00:00 stderr F I0120 10:54:15.400353 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:15.400673473+00:00 stderr F I0120 10:54:15.400642 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:15.423821900+00:00 stderr F W0120 10:54:15.423774 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:15.425350440+00:00 stderr F I0120 10:54:15.425325 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.114429ms) 2026-01-20T10:54:16.400827614+00:00 stderr F I0120 10:54:16.400743 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:16.401097162+00:00 stderr F I0120 10:54:16.401047 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:16.401097162+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:16.401097162+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:16.401166103+00:00 stderr F I0120 10:54:16.401134 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (420.131µs) 2026-01-20T10:54:16.401166103+00:00 stderr F I0120 10:54:16.401157 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:16.401247465+00:00 stderr F I0120 10:54:16.401214 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:16.401295227+00:00 stderr F I0120 10:54:16.401270 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:16.401532793+00:00 stderr F I0120 10:54:16.401480 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:16.431598955+00:00 stderr F W0120 10:54:16.431558 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:16.433869964+00:00 stderr F I0120 10:54:16.433840 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.67834ms) 2026-01-20T10:54:17.401278333+00:00 stderr F I0120 10:54:17.401182 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:17.401731365+00:00 stderr F I0120 10:54:17.401604 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:17.401731365+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:17.401731365+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:17.401843958+00:00 stderr F I0120 10:54:17.401788 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (620.337µs) 2026-01-20T10:54:17.401843958+00:00 stderr F I0120 10:54:17.401818 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:17.401947941+00:00 stderr F I0120 10:54:17.401888 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:17.401973012+00:00 stderr F I0120 10:54:17.401962 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:17.402504096+00:00 stderr F I0120 10:54:17.402411 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:17.434984012+00:00 stderr F W0120 10:54:17.434896 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:17.436819790+00:00 stderr F I0120 10:54:17.436762 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.940471ms) 2026-01-20T10:54:18.401915097+00:00 stderr F I0120 10:54:18.401802 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:18.402157203+00:00 stderr F I0120 10:54:18.402121 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:18.402157203+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:18.402157203+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:18.402210245+00:00 stderr F I0120 10:54:18.402178 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (400.801µs) 2026-01-20T10:54:18.402210245+00:00 stderr F I0120 10:54:18.402197 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:18.402282557+00:00 stderr F I0120 10:54:18.402247 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:18.402341308+00:00 stderr F I0120 10:54:18.402312 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:18.404825184+00:00 stderr F I0120 10:54:18.404713 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:18.428151286+00:00 stderr F W0120 10:54:18.428076 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:18.430020466+00:00 stderr F I0120 10:54:18.429945 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.74492ms) 2026-01-20T10:54:19.403236729+00:00 stderr F I0120 10:54:19.403108 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:19.403522407+00:00 stderr F I0120 10:54:19.403468 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:19.403522407+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:19.403522407+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:19.403552677+00:00 stderr F I0120 10:54:19.403535 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (488.193µs) 2026-01-20T10:54:19.403570098+00:00 stderr F I0120 10:54:19.403554 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:19.403659981+00:00 stderr F I0120 10:54:19.403604 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:19.403703272+00:00 stderr F I0120 10:54:19.403667 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:19.404006790+00:00 stderr F I0120 10:54:19.403932 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:19.427378673+00:00 stderr F W0120 10:54:19.427271 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:19.431902934+00:00 stderr F I0120 10:54:19.431825 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.222544ms) 2026-01-20T10:54:20.384965920+00:00 stderr F I0120 10:54:20.384840 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:20.385100734+00:00 stderr F I0120 10:54:20.385037 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:20.385217887+00:00 stderr F I0120 10:54:20.385180 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:20.385678279+00:00 stderr F I0120 10:54:20.385549 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:20.404015938+00:00 stderr F I0120 10:54:20.403931 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:20.404705896+00:00 stderr F I0120 10:54:20.404648 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:20.404705896+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:20.404705896+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:20.404864501+00:00 stderr F I0120 10:54:20.404803 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (887.624µs) 2026-01-20T10:54:20.414997201+00:00 stderr F W0120 10:54:20.414903 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:20.417228130+00:00 stderr F I0120 10:54:20.417149 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.319461ms) 2026-01-20T10:54:20.417228130+00:00 stderr F I0120 10:54:20.417177 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:20.417381514+00:00 stderr F I0120 10:54:20.417321 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:20.417381514+00:00 stderr F I0120 10:54:20.417373 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:20.417817096+00:00 stderr F I0120 10:54:20.417752 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:20.446016118+00:00 stderr F W0120 10:54:20.445957 1 warnings.go:70] unknown field "spec.signatureStores" 
2026-01-20T10:54:20.447598409+00:00 stderr F I0120 10:54:20.447547 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.36339ms)
2026-01-20T10:54:21.405798802+00:00 stderr F I0120 10:54:21.405652 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:21.406473560+00:00 stderr F I0120 10:54:21.406396 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:21.406473560+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:21.406473560+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:21.406617325+00:00 stderr F I0120 10:54:21.406548 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (946.085µs)
2026-01-20T10:54:21.406617325+00:00 stderr F I0120 10:54:21.406597 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:21.406801439+00:00 stderr F I0120 10:54:21.406726 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:21.406906032+00:00 stderr F I0120 10:54:21.406852 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:21.407702303+00:00 stderr F I0120 10:54:21.407595 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:21.434346623+00:00 stderr F W0120 10:54:21.434232 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:21.437400594+00:00 stderr F I0120 10:54:21.437316 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.715509ms)
2026-01-20T10:54:22.407367891+00:00 stderr F I0120 10:54:22.407229 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:22.407760262+00:00 stderr F I0120 10:54:22.407700 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:22.407760262+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:22.407760262+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:22.407853434+00:00 stderr F I0120 10:54:22.407799 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (590.116µs)
2026-01-20T10:54:22.407853434+00:00 stderr F I0120 10:54:22.407836 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:22.408004808+00:00 stderr F I0120 10:54:22.407944 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:22.408067840+00:00 stderr F I0120 10:54:22.408029 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:22.408502371+00:00 stderr F I0120 10:54:22.408428 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:22.466009695+00:00 stderr F W0120 10:54:22.465906 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:22.467957696+00:00 stderr F I0120 10:54:22.467896 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (60.05987ms)
2026-01-20T10:54:23.408025965+00:00 stderr F I0120 10:54:23.407893 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:23.408386045+00:00 stderr F I0120 10:54:23.408320 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:23.408386045+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:23.408386045+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:23.408413326+00:00 stderr F I0120 10:54:23.408395 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (516.874µs)
2026-01-20T10:54:23.408429346+00:00 stderr F I0120 10:54:23.408413 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:23.408539729+00:00 stderr F I0120 10:54:23.408461 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:23.408539729+00:00 stderr F I0120 10:54:23.408522 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:23.408893868+00:00 stderr F I0120 10:54:23.408803 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:23.437862171+00:00 stderr F W0120 10:54:23.437723 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:23.439323480+00:00 stderr F I0120 10:54:23.439220 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.803972ms)
2026-01-20T10:54:24.409597713+00:00 stderr F I0120 10:54:24.409465 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:24.409913223+00:00 stderr F I0120 10:54:24.409866 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:24.409913223+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:24.409913223+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:24.409995595+00:00 stderr F I0120 10:54:24.409944 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (495.305µs)
2026-01-20T10:54:24.409995595+00:00 stderr F I0120 10:54:24.409971 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:24.410107888+00:00 stderr F I0120 10:54:24.410045 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:24.410203650+00:00 stderr F I0120 10:54:24.410163 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:24.410554220+00:00 stderr F I0120 10:54:24.410498 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:24.434736394+00:00 stderr F W0120 10:54:24.434650 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:24.436393828+00:00 stderr F I0120 10:54:24.436346 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.372663ms)
2026-01-20T10:54:25.412268482+00:00 stderr F I0120 10:54:25.411633 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:25.412755856+00:00 stderr F I0120 10:54:25.412597 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:25.412755856+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:25.412755856+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:25.412755856+00:00 stderr F I0120 10:54:25.412671 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.060139ms)
2026-01-20T10:54:25.412755856+00:00 stderr F I0120 10:54:25.412693 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:25.412784576+00:00 stderr F I0120 10:54:25.412759 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:25.412868099+00:00 stderr F I0120 10:54:25.412838 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:25.413718031+00:00 stderr F I0120 10:54:25.413667 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:25.439920209+00:00 stderr F W0120 10:54:25.439809 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:25.441530382+00:00 stderr F I0120 10:54:25.441478 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.776077ms)
2026-01-20T10:54:26.413863401+00:00 stderr F I0120 10:54:26.413769 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:26.414129708+00:00 stderr F I0120 10:54:26.414094 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:26.414129708+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:26.414129708+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:26.414175799+00:00 stderr F I0120 10:54:26.414149 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (408.901µs)
2026-01-20T10:54:26.414175799+00:00 stderr F I0120 10:54:26.414170 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:26.414251521+00:00 stderr F I0120 10:54:26.414213 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:26.414287172+00:00 stderr F I0120 10:54:26.414266 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:26.414525229+00:00 stderr F I0120 10:54:26.414479 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:26.437035269+00:00 stderr F W0120 10:54:26.436939 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:26.438632832+00:00 stderr F I0120 10:54:26.438488 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.314939ms)
2026-01-20T10:54:27.414357820+00:00 stderr F I0120 10:54:27.414255 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:27.414715720+00:00 stderr F I0120 10:54:27.414644 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:27.414715720+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:27.414715720+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:27.414760731+00:00 stderr F I0120 10:54:27.414714 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (474.383µs)
2026-01-20T10:54:27.414760731+00:00 stderr F I0120 10:54:27.414735 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:27.414854553+00:00 stderr F I0120 10:54:27.414793 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:27.414877304+00:00 stderr F I0120 10:54:27.414855 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:27.415234863+00:00 stderr F I0120 10:54:27.415178 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:27.451875610+00:00 stderr F W0120 10:54:27.451148 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:27.453527105+00:00 stderr F I0120 10:54:27.453471 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.730203ms)
2026-01-20T10:54:28.414987913+00:00 stderr F I0120 10:54:28.414898 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:28.415247470+00:00 stderr F I0120 10:54:28.415208 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:28.415247470+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:28.415247470+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:28.415268610+00:00 stderr F I0120 10:54:28.415257 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (385.05µs)
2026-01-20T10:54:28.415279681+00:00 stderr F I0120 10:54:28.415272 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:28.415353953+00:00 stderr F I0120 10:54:28.415313 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:28.415391394+00:00 stderr F I0120 10:54:28.415370 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:28.415788045+00:00 stderr F I0120 10:54:28.415667 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:28.456115970+00:00 stderr F W0120 10:54:28.456000 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:28.459588992+00:00 stderr F I0120 10:54:28.459495 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.214159ms)
2026-01-20T10:54:29.416286764+00:00 stderr F I0120 10:54:29.416176 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:29.416574792+00:00 stderr F I0120 10:54:29.416532 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:29.416574792+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:29.416574792+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:29.416626393+00:00 stderr F I0120 10:54:29.416599 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (448.632µs)
2026-01-20T10:54:29.416626393+00:00 stderr F I0120 10:54:29.416621 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:29.416771777+00:00 stderr F I0120 10:54:29.416680 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:29.416771777+00:00 stderr F I0120 10:54:29.416754 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:29.417046134+00:00 stderr F I0120 10:54:29.416987 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:29.442509923+00:00 stderr F W0120 10:54:29.442436 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:29.444012284+00:00 stderr F I0120 10:54:29.443965 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.342089ms)
2026-01-20T10:54:30.417190624+00:00 stderr F I0120 10:54:30.417026 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:30.417527343+00:00 stderr F I0120 10:54:30.417474 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:30.417527343+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:30.417527343+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:30.417598225+00:00 stderr F I0120 10:54:30.417549 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (544.715µs)
2026-01-20T10:54:30.417598225+00:00 stderr F I0120 10:54:30.417579 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:30.417721198+00:00 stderr F I0120 10:54:30.417665 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:30.417772520+00:00 stderr F I0120 10:54:30.417742 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:30.418189111+00:00 stderr F I0120 10:54:30.418118 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:30.445221232+00:00 stderr F W0120 10:54:30.445077 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:30.447097302+00:00 stderr F I0120 10:54:30.447005 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.423125ms)
2026-01-20T10:54:31.418173657+00:00 stderr F I0120 10:54:31.418004 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:31.418519566+00:00 stderr F I0120 10:54:31.418430 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:31.418519566+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:31.418519566+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:31.418544496+00:00 stderr F I0120 10:54:31.418518 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (535.275µs)
2026-01-20T10:54:31.418560837+00:00 stderr F I0120 10:54:31.418542 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:31.418673060+00:00 stderr F I0120 10:54:31.418605 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:31.418694220+00:00 stderr F I0120 10:54:31.418681 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:31.419051681+00:00 stderr F I0120 10:54:31.418978 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:31.464878652+00:00 stderr F W0120 10:54:31.464752 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:31.466352181+00:00 stderr F I0120 10:54:31.466274 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.730153ms)
2026-01-20T10:54:32.318033683+00:00 stderr F I0120 10:54:32.317931 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version"
2026-01-20T10:54:32.318033683+00:00 stderr F I0120 10:54:32.317988 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:32.318159836+00:00 stderr F I0120 10:54:32.318005 1 upgradeable.go:69] Upgradeability last checked 58.316435483s ago, will not re-check until 2026-01-20T10:55:34Z
2026-01-20T10:54:32.318159836+00:00 stderr F I0120 10:54:32.318117 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (183.186µs)
2026-01-20T10:54:32.318159836+00:00 stderr F I0120 10:54:32.317991 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:32.318285380+00:00 stderr F I0120 10:54:32.318223 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:32.318373122+00:00 stderr F I0120 10:54:32.318317 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:32.318373122+00:00 stderr F I0120 10:54:32.318351 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:32.318373122+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:32.318373122+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:32.318440864+00:00 stderr F I0120 10:54:32.318414 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (440.233µs)
2026-01-20T10:54:32.318716731+00:00 stderr F I0120 10:54:32.318641 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:32.345125365+00:00 stderr F W0120 10:54:32.345010 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:32.347165269+00:00 stderr F I0120 10:54:32.347048 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.089006ms)
2026-01-20T10:54:32.347165269+00:00 stderr F I0120 10:54:32.347104 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:32.347316673+00:00 stderr F I0120 10:54:32.347251 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:32.347336734+00:00 stderr F I0120 10:54:32.347325 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:32.347756555+00:00 stderr F I0120 10:54:32.347687 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:32.388508872+00:00 stderr F W0120 10:54:32.388434 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:32.390875765+00:00 stderr F I0120 10:54:32.390818 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.707465ms)
2026-01-20T10:54:32.419520158+00:00 stderr F I0120 10:54:32.419417 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:32.419903558+00:00 stderr F I0120 10:54:32.419864 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:32.419903558+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:32.419903558+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:32.419975390+00:00 stderr F I0120 10:54:32.419944 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (544.954µs) 2026-01-20T10:54:32.419975390+00:00 stderr F I0120 10:54:32.419969 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:32.420083013+00:00 stderr F I0120 10:54:32.420034 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:32.420193336+00:00 stderr F I0120 10:54:32.420165 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:32.420745850+00:00 stderr F I0120 10:54:32.420690 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:32.453846353+00:00 stderr F W0120 10:54:32.453782 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:32.455595940+00:00 stderr F I0120 10:54:32.455565 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.592579ms) 2026-01-20T10:54:33.420851309+00:00 stderr F I0120 10:54:33.420783 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:33.421264520+00:00 stderr F I0120 10:54:33.421224 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:33.421264520+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:33.421264520+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:33.421350782+00:00 stderr F I0120 10:54:33.421302 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (534.124µs) 2026-01-20T10:54:33.421350782+00:00 stderr F I0120 10:54:33.421327 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:33.421427855+00:00 stderr F I0120 10:54:33.421391 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:33.421486406+00:00 stderr F I0120 10:54:33.421461 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:33.421817676+00:00 stderr F I0120 10:54:33.421773 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:33.449479402+00:00 stderr F W0120 10:54:33.449393 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:33.451426295+00:00 stderr F I0120 10:54:33.451384 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.049481ms) 2026-01-20T10:54:34.421860103+00:00 stderr F I0120 10:54:34.421770 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:34.422439429+00:00 stderr F I0120 10:54:34.422414 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:34.422439429+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:34.422439429+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:34.422594903+00:00 stderr F I0120 10:54:34.422574 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (820.053µs) 2026-01-20T10:54:34.422643764+00:00 stderr F I0120 10:54:34.422629 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:34.422781497+00:00 stderr F I0120 10:54:34.422752 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:34.422887800+00:00 stderr F I0120 10:54:34.422873 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:34.423305721+00:00 stderr F I0120 10:54:34.423264 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:34.450504517+00:00 stderr F W0120 10:54:34.450414 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:34.452085388+00:00 stderr F I0120 10:54:34.452022 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.391993ms) 2026-01-20T10:54:35.384424382+00:00 stderr F I0120 10:54:35.384345 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:35.384552386+00:00 stderr F I0120 10:54:35.384513 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:35.384631288+00:00 stderr F I0120 10:54:35.384598 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:35.384994547+00:00 stderr F I0120 10:54:35.384950 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:35.409384258+00:00 stderr F W0120 10:54:35.409315 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:35.411021752+00:00 stderr F I0120 10:54:35.410999 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.672121ms) 2026-01-20T10:54:35.413704133+00:00 stderr F I0120 10:54:35.413664 1 sync_worker.go:234] Notify the sync worker: Cluster operator openshift-samples changed Degraded from "False" to "True" 2026-01-20T10:54:35.413704133+00:00 stderr F I0120 10:54:35.413695 1 sync_worker.go:584] Cluster operator openshift-samples changed Degraded from "False" to "True" 2026-01-20T10:54:35.413723334+00:00 stderr F I0120 10:54:35.413711 1 sync_worker.go:592] No change, waiting 2026-01-20T10:54:35.423644938+00:00 stderr F I0120 
10:54:35.423602 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:35.424039938+00:00 stderr F I0120 10:54:35.424017 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:35.424039938+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:35.424039938+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:35.424189792+00:00 stderr F I0120 10:54:35.424165 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (573.985µs) 2026-01-20T10:54:35.424244524+00:00 stderr F I0120 10:54:35.424228 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:35.424349726+00:00 stderr F I0120 10:54:35.424321 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:35.424438129+00:00 stderr F I0120 10:54:35.424422 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:35.424813960+00:00 stderr F I0120 10:54:35.424776 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, 
time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:35.449582489+00:00 stderr F W0120 10:54:35.449466 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:35.453004351+00:00 stderr F I0120 10:54:35.452905 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.669405ms) 2026-01-20T10:54:36.424669462+00:00 stderr F I0120 10:54:36.424590 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:36.425148505+00:00 stderr F I0120 10:54:36.425048 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:36.425148505+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:36.425148505+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:36.425241888+00:00 stderr F I0120 10:54:36.425222 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (654.468µs) 2026-01-20T10:54:36.425278459+00:00 stderr F I0120 10:54:36.425268 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:36.425365551+00:00 stderr F I0120 10:54:36.425343 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:36.425437363+00:00 stderr F I0120 10:54:36.425426 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:36.425791872+00:00 stderr F I0120 10:54:36.425762 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:36.456981344+00:00 stderr F W0120 10:54:36.456926 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:36.458748871+00:00 stderr F I0120 10:54:36.458717 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.445562ms) 2026-01-20T10:54:37.426321403+00:00 stderr F I0120 10:54:37.426258 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:37.426696763+00:00 stderr F I0120 10:54:37.426680 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:37.426696763+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:37.426696763+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:37.426781545+00:00 stderr F I0120 10:54:37.426760 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (543.544µs) 2026-01-20T10:54:37.426843507+00:00 stderr F I0120 10:54:37.426808 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:37.426932259+00:00 stderr F I0120 10:54:37.426910 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:37.427010321+00:00 stderr F I0120 10:54:37.426998 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:37.427392501+00:00 stderr F I0120 10:54:37.427357 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:37.463828512+00:00 stderr F W0120 10:54:37.463757 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:37.465429645+00:00 stderr F I0120 10:54:37.465405 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.594479ms) 2026-01-20T10:54:38.427954064+00:00 stderr F I0120 10:54:38.427850 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:38.428319853+00:00 stderr F I0120 10:54:38.428275 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:38.428319853+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:38.428319853+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:38.428364524+00:00 stderr F I0120 10:54:38.428333 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (504.444µs) 2026-01-20T10:54:38.428364524+00:00 stderr F I0120 10:54:38.428359 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:38.428467487+00:00 stderr F I0120 10:54:38.428418 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:38.428511698+00:00 stderr F I0120 10:54:38.428485 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:38.428831677+00:00 stderr F I0120 10:54:38.428779 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:38.452461346+00:00 stderr F W0120 10:54:38.452370 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:38.454588553+00:00 stderr F I0120 10:54:38.454557 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.194278ms) 2026-01-20T10:54:39.429033489+00:00 stderr F I0120 10:54:39.428945 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:39.429431610+00:00 stderr F I0120 10:54:39.429396 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:39.429431610+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:39.429431610+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:39.429526042+00:00 stderr F I0120 10:54:39.429491 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (569.375µs) 2026-01-20T10:54:39.429526042+00:00 stderr F I0120 10:54:39.429519 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:39.429686746+00:00 stderr F I0120 10:54:39.429629 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:39.429730708+00:00 stderr F I0120 10:54:39.429712 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:39.430198680+00:00 stderr F I0120 10:54:39.430146 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:39.451911279+00:00 stderr F W0120 10:54:39.451814 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:39.453413449+00:00 stderr F I0120 10:54:39.453370 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.849246ms) 2026-01-20T10:54:40.430037253+00:00 stderr F I0120 10:54:40.429958 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:40.430748921+00:00 stderr F I0120 10:54:40.430711 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:40.430748921+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:40.430748921+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:40.430910047+00:00 stderr F I0120 10:54:40.430878 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (937.125µs) 2026-01-20T10:54:40.430981508+00:00 stderr F I0120 10:54:40.430955 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:40.431155033+00:00 stderr F I0120 10:54:40.431114 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:40.431280716+00:00 stderr F I0120 10:54:40.431260 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:40.431990795+00:00 stderr F I0120 10:54:40.431867 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:40.477471087+00:00 stderr F W0120 10:54:40.477379 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:40.480722984+00:00 stderr F I0120 10:54:40.480615 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.655052ms) 2026-01-20T10:54:41.432017942+00:00 stderr F I0120 10:54:41.431444 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:41.432416523+00:00 stderr F I0120 10:54:41.432396 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:41.432416523+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:41.432416523+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:41.432501535+00:00 stderr F I0120 10:54:41.432484 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.053378ms) 2026-01-20T10:54:41.432533446+00:00 stderr F I0120 10:54:41.432523 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:41.432606078+00:00 stderr F I0120 10:54:41.432586 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:41.432678050+00:00 stderr F I0120 10:54:41.432667 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:41.432910246+00:00 stderr F I0120 10:54:41.432883 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:41.476929040+00:00 stderr F W0120 10:54:41.476846 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:41.480474144+00:00 stderr F I0120 10:54:41.480405 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.860745ms) 2026-01-20T10:54:42.433493068+00:00 stderr F I0120 10:54:42.432830 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:42.434284750+00:00 stderr F I0120 10:54:42.434239 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:42.434284750+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:42.434284750+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:42.434431524+00:00 stderr F I0120 10:54:42.434411 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.600753ms) 2026-01-20T10:54:42.434467445+00:00 stderr F I0120 10:54:42.434456 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:42.434543097+00:00 stderr F I0120 10:54:42.434522 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:42.434605688+00:00 stderr F I0120 10:54:42.434596 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:42.434856025+00:00 stderr F I0120 10:54:42.434826 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:42.487359565+00:00 stderr F W0120 10:54:42.487295 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:42.489171572+00:00 stderr F I0120 10:54:42.489143 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.682568ms) 2026-01-20T10:54:43.435417207+00:00 stderr F I0120 10:54:43.435345 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:43.435729475+00:00 stderr F I0120 10:54:43.435693 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:43.435729475+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:43.435729475+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:43.435814567+00:00 stderr F I0120 10:54:43.435786 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (470.662µs) 2026-01-20T10:54:43.435814567+00:00 stderr F I0120 10:54:43.435810 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:43.435891629+00:00 stderr F I0120 10:54:43.435861 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:43.435942860+00:00 stderr F I0120 10:54:43.435924 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:43.436492725+00:00 stderr F I0120 10:54:43.436443 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:43.471310404+00:00 stderr F W0120 10:54:43.471256 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:43.473578774+00:00 stderr F I0120 10:54:43.473545 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.730506ms) 2026-01-20T10:54:44.436427230+00:00 stderr F I0120 10:54:44.436329 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:44.436650656+00:00 stderr F I0120 10:54:44.436603 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:44.436650656+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:44.436650656+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:44.436667847+00:00 stderr F I0120 10:54:44.436655 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (345.509µs) 2026-01-20T10:54:44.436680087+00:00 stderr F I0120 10:54:44.436671 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:44.437271904+00:00 stderr F I0120 10:54:44.437213 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:44.437288004+00:00 stderr F I0120 10:54:44.437267 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:44.437522540+00:00 stderr F I0120 10:54:44.437471 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:44.461726305+00:00 stderr F W0120 10:54:44.461638 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:44.463123832+00:00 stderr F I0120 10:54:44.463078 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.387874ms) 2026-01-20T10:54:45.437772544+00:00 stderr F I0120 10:54:45.437651 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:45.438251267+00:00 stderr F I0120 10:54:45.438228 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:45.438251267+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:45.438251267+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:45.438359520+00:00 stderr F I0120 10:54:45.438340 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (708.969µs) 2026-01-20T10:54:45.438405371+00:00 stderr F I0120 10:54:45.438391 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:45.438515424+00:00 stderr F I0120 10:54:45.438487 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:45.438602706+00:00 stderr F I0120 10:54:45.438588 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:45.438993326+00:00 stderr F I0120 10:54:45.438956 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:45.476168688+00:00 stderr F W0120 10:54:45.476046 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:45.477546944+00:00 stderr F I0120 10:54:45.477474 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.082112ms) 2026-01-20T10:54:46.439125777+00:00 stderr F I0120 10:54:46.438936 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:46.439444705+00:00 stderr F I0120 10:54:46.439372 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:46.439444705+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:46.439444705+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:46.439496277+00:00 stderr F I0120 10:54:46.439436 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (533.574µs) 2026-01-20T10:54:46.439496277+00:00 stderr F I0120 10:54:46.439458 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:46.439566048+00:00 stderr F I0120 10:54:46.439515 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:46.439589259+00:00 stderr F I0120 10:54:46.439579 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:46.439937429+00:00 stderr F I0120 10:54:46.439850 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:46.482292428+00:00 stderr F W0120 10:54:46.482236 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:46.483810078+00:00 stderr F I0120 10:54:46.483736 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.276341ms) 2026-01-20T10:54:47.441142578+00:00 stderr F I0120 10:54:47.439828 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:47.441347213+00:00 stderr F I0120 10:54:47.441259 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:47.441347213+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:47.441347213+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:47.441347213+00:00 stderr F I0120 10:54:47.441319 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.51145ms) 2026-01-20T10:54:47.441347213+00:00 stderr F I0120 10:54:47.441335 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:47.441430415+00:00 stderr F I0120 10:54:47.441386 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:47.441452866+00:00 stderr F I0120 10:54:47.441442 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:47.441768514+00:00 stderr F I0120 10:54:47.441668 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:47.470621773+00:00 stderr F W0120 10:54:47.470552 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:47.472114393+00:00 stderr F I0120 10:54:47.472022 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.684068ms) 2026-01-20T10:54:48.445183872+00:00 stderr F I0120 10:54:48.444226 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:48.445183872+00:00 stderr F I0120 10:54:48.444570 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:48.445183872+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:48.445183872+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:48.445183872+00:00 stderr F I0120 10:54:48.444646 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (451.752µs) 2026-01-20T10:54:48.445183872+00:00 stderr F I0120 10:54:48.444666 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:48.445183872+00:00 stderr F I0120 10:54:48.444719 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:48.445183872+00:00 stderr F I0120 10:54:48.444780 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:48.445183872+00:00 stderr F I0120 10:54:48.445131 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:48.467864076+00:00 stderr F W0120 10:54:48.467812 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:48.469249834+00:00 stderr F I0120 10:54:48.469212 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.543615ms) 2026-01-20T10:54:49.444994074+00:00 stderr F I0120 10:54:49.444849 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:49.445460936+00:00 stderr F I0120 10:54:49.445385 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:49.445460936+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:49.445460936+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:49.445496727+00:00 stderr F I0120 10:54:49.445478 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (648.167µs) 2026-01-20T10:54:49.445516738+00:00 stderr F I0120 10:54:49.445504 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:49.445654812+00:00 stderr F I0120 10:54:49.445592 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:49.445721264+00:00 stderr F I0120 10:54:49.445685 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:49.446271058+00:00 stderr F I0120 10:54:49.446169 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:49.491804912+00:00 stderr F W0120 10:54:49.491720 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:49.493400315+00:00 stderr F I0120 10:54:49.493346 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.840076ms) 2026-01-20T10:54:50.384442738+00:00 stderr F I0120 10:54:50.384347 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:50.384514589+00:00 stderr F I0120 10:54:50.384454 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:50.384543650+00:00 stderr F I0120 10:54:50.384524 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:50.384850988+00:00 stderr F I0120 10:54:50.384791 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:50.414245251+00:00 stderr F W0120 10:54:50.414153 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:50.416123682+00:00 stderr F I0120 10:54:50.416053 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (31.716275ms) 2026-01-20T10:54:50.446268586+00:00 stderr F I0120 10:54:50.446158 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:50.446510912+00:00 stderr F I0120 10:54:50.446462 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:50.446510912+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:50.446510912+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:50.446612775+00:00 stderr F I0120 10:54:50.446543 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (394.021µs) 2026-01-20T10:54:50.446612775+00:00 stderr F I0120 10:54:50.446568 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:50.446666556+00:00 stderr F I0120 10:54:50.446635 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:50.446729018+00:00 stderr F I0120 10:54:50.446696 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:50.447044626+00:00 stderr F I0120 10:54:50.446981 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:50.486873637+00:00 stderr F W0120 10:54:50.486785 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:50.488795389+00:00 stderr F I0120 10:54:50.488751 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.179275ms) 2026-01-20T10:54:51.447363141+00:00 stderr F I0120 10:54:51.447250 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:51.447626048+00:00 stderr F I0120 10:54:51.447568 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:51.447626048+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:51.447626048+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:51.447651319+00:00 stderr F I0120 10:54:51.447626 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (389.94µs) 2026-01-20T10:54:51.447651319+00:00 stderr F I0120 10:54:51.447643 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:51.447744511+00:00 stderr F I0120 10:54:51.447686 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:51.447766752+00:00 stderr F I0120 10:54:51.447757 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:51.448895603+00:00 stderr F I0120 10:54:51.448750 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:51.482104517+00:00 stderr F W0120 10:54:51.481995 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:51.486545406+00:00 stderr F I0120 10:54:51.486453 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.796874ms) 2026-01-20T10:54:51.627254837+00:00 stderr F I0120 10:54:51.626779 1 sync_worker.go:234] Notify the sync worker: Cluster 
operator machine-config changed Degraded from "False" to "True" 2026-01-20T10:54:51.627254837+00:00 stderr F I0120 10:54:51.626818 1 sync_worker.go:584] Cluster operator machine-config changed Degraded from "False" to "True" 2026-01-20T10:54:51.627254837+00:00 stderr F I0120 10:54:51.626847 1 sync_worker.go:592] No change, waiting 2026-01-20T10:54:52.447974944+00:00 stderr F I0120 10:54:52.447786 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:52.448593251+00:00 stderr F I0120 10:54:52.448518 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:52.448593251+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:52.448593251+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:52.448714984+00:00 stderr F I0120 10:54:52.448653 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (890.354µs) 2026-01-20T10:54:52.448714984+00:00 stderr F I0120 10:54:52.448690 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:52.448822008+00:00 stderr F I0120 10:54:52.448771 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:52.448922120+00:00 stderr F I0120 10:54:52.448867 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:52.449600908+00:00 stderr F I0120 10:54:52.449469 1 status.go:100] merge into existing history 
completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:52.498504001+00:00 stderr F W0120 10:54:52.498391 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:52.500088724+00:00 stderr F I0120 10:54:52.500002 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.310288ms) 2026-01-20T10:54:53.448891656+00:00 stderr F I0120 10:54:53.448774 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:53.449412880+00:00 stderr F I0120 10:54:53.449350 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:53.449412880+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:53.449412880+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:53.449489292+00:00 stderr F I0120 10:54:53.449451 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (696.599µs) 2026-01-20T10:54:53.449525663+00:00 stderr F I0120 10:54:53.449483 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:53.449650576+00:00 stderr F I0120 10:54:53.449590 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:53.449754739+00:00 stderr F I0120 10:54:53.449687 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:53.450398306+00:00 stderr F I0120 10:54:53.450206 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:53.496810084+00:00 stderr F W0120 10:54:53.496725 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:53.500520022+00:00 stderr F I0120 10:54:53.500448 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.957928ms) 2026-01-20T10:54:54.449656553+00:00 stderr F I0120 10:54:54.449571 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:54:54.450422723+00:00 stderr F I0120 10:54:54.450348 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:54.450422723+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:54:54.450422723+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:54:54.450634879+00:00 stderr F I0120 10:54:54.450597 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.039608ms) 2026-01-20T10:54:54.450843274+00:00 stderr F I0120 10:54:54.450702 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:54:54.451051270+00:00 stderr F I0120 10:54:54.450957 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:54:54.451173543+00:00 stderr F I0120 10:54:54.451136 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:54:54.451928494+00:00 stderr F I0120 10:54:54.451817 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:54:54.501271009+00:00 stderr F W0120 10:54:54.501164 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:54:54.505244385+00:00 stderr F I0120 10:54:54.505155 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.468702ms) 2026-01-20T10:54:55.451589031+00:00 stderr F I0120 10:54:55.451458 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:54:55.452103436+00:00 stderr F I0120 10:54:55.451992 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:54:55.452103436+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:54:55.452103436+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:55.452271180+00:00 stderr F I0120 10:54:55.452225 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:55.452369423+00:00 stderr F I0120 10:54:55.452193 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (789.162µs)
2026-01-20T10:54:55.452369423+00:00 stderr F I0120 10:54:55.452338 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:55.452499296+00:00 stderr F I0120 10:54:55.452444 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:55.453050350+00:00 stderr F I0120 10:54:55.452968 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:55.488433904+00:00 stderr F W0120 10:54:55.488354 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:55.493096568+00:00 stderr F I0120 10:54:55.492963 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.747256ms)
2026-01-20T10:54:56.452459642+00:00 stderr F I0120 10:54:56.452373 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:56.452980246+00:00 stderr F I0120 10:54:56.452954 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:56.452980246+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:56.452980246+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:56.453173771+00:00 stderr F I0120 10:54:56.453148 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (792.011µs)
2026-01-20T10:54:56.453275503+00:00 stderr F I0120 10:54:56.453254 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:56.453450518+00:00 stderr F I0120 10:54:56.453413 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:56.453580551+00:00 stderr F I0120 10:54:56.453562 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:56.454146246+00:00 stderr F I0120 10:54:56.454090 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:56.484035583+00:00 stderr F W0120 10:54:56.483948 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:56.485440380+00:00 stderr F I0120 10:54:56.485392 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.140516ms)
2026-01-20T10:54:57.454870543+00:00 stderr F I0120 10:54:57.453968 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:57.455523031+00:00 stderr F I0120 10:54:57.455491 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:57.455523031+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:57.455523031+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:57.455650754+00:00 stderr F I0120 10:54:57.455626 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.678735ms)
2026-01-20T10:54:57.455702415+00:00 stderr F I0120 10:54:57.455687 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:57.455833499+00:00 stderr F I0120 10:54:57.455802 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:57.455928901+00:00 stderr F I0120 10:54:57.455913 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:57.456442355+00:00 stderr F I0120 10:54:57.456392 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:57.515027447+00:00 stderr F W0120 10:54:57.514902 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:57.518455168+00:00 stderr F I0120 10:54:57.518362 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (62.67155ms)
2026-01-20T10:54:58.455969560+00:00 stderr F I0120 10:54:58.455873 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:58.456339649+00:00 stderr F I0120 10:54:58.456292 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:58.456339649+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:58.456339649+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:58.456365860+00:00 stderr F I0120 10:54:58.456352 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (505.164µs)
2026-01-20T10:54:58.456404171+00:00 stderr F I0120 10:54:58.456374 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:58.456473133+00:00 stderr F I0120 10:54:58.456437 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:58.456534274+00:00 stderr F I0120 10:54:58.456509 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:58.456864733+00:00 stderr F I0120 10:54:58.456810 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:58.484950032+00:00 stderr F W0120 10:54:58.484862 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:58.486389871+00:00 stderr F I0120 10:54:58.486339 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.963019ms)
2026-01-20T10:54:59.457419335+00:00 stderr F I0120 10:54:59.457275 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:54:59.457836947+00:00 stderr F I0120 10:54:59.457773 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:54:59.457836947+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:54:59.457836947+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:54:59.457932450+00:00 stderr F I0120 10:54:59.457870 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (618.217µs)
2026-01-20T10:54:59.457932450+00:00 stderr F I0120 10:54:59.457903 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:54:59.458050133+00:00 stderr F I0120 10:54:59.457988 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:54:59.458173676+00:00 stderr F I0120 10:54:59.458129 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:54:59.458709530+00:00 stderr F I0120 10:54:59.458619 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:54:59.506371911+00:00 stderr F W0120 10:54:59.506264 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:54:59.508530938+00:00 stderr F I0120 10:54:59.508418 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.511416ms)
2026-01-20T10:55:00.462190909+00:00 stderr F I0120 10:55:00.458151 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:00.462190909+00:00 stderr F I0120 10:55:00.458479 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:00.462190909+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:00.462190909+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:00.462190909+00:00 stderr F I0120 10:55:00.458554 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (416.041µs)
2026-01-20T10:55:00.462190909+00:00 stderr F I0120 10:55:00.458576 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:00.462190909+00:00 stderr F I0120 10:55:00.458634 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:00.462190909+00:00 stderr F I0120 10:55:00.458704 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:00.462190909+00:00 stderr F I0120 10:55:00.458975 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:00.485097039+00:00 stderr F W0120 10:55:00.484988 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:00.486568049+00:00 stderr F I0120 10:55:00.486512 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.934175ms)
2026-01-20T10:55:01.459447403+00:00 stderr F I0120 10:55:01.459362 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:01.459833003+00:00 stderr F I0120 10:55:01.459793 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:01.459833003+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:01.459833003+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:01.459924236+00:00 stderr F I0120 10:55:01.459889 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (542.044µs)
2026-01-20T10:55:01.459942906+00:00 stderr F I0120 10:55:01.459922 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:01.460043499+00:00 stderr F I0120 10:55:01.460002 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:01.460158132+00:00 stderr F I0120 10:55:01.460128 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:01.460636575+00:00 stderr F I0120 10:55:01.460571 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:01.496013268+00:00 stderr F W0120 10:55:01.495913 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:01.499510971+00:00 stderr F I0120 10:55:01.499446 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.506483ms)
2026-01-20T10:55:02.460241351+00:00 stderr F I0120 10:55:02.460126 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:02.460782965+00:00 stderr F I0120 10:55:02.460721 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:02.460782965+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:02.460782965+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:02.460909799+00:00 stderr F I0120 10:55:02.460848 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (744.55µs)
2026-01-20T10:55:02.460909799+00:00 stderr F I0120 10:55:02.460885 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:02.461049923+00:00 stderr F I0120 10:55:02.460980 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:02.461212857+00:00 stderr F I0120 10:55:02.461165 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:02.461773212+00:00 stderr F I0120 10:55:02.461686 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:02.518034742+00:00 stderr F W0120 10:55:02.517923 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:02.522144361+00:00 stderr F I0120 10:55:02.522020 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (61.126808ms)
2026-01-20T10:55:03.461932204+00:00 stderr F I0120 10:55:03.461780 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:03.462695394+00:00 stderr F I0120 10:55:03.462602 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:03.462695394+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:03.462695394+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:03.462790896+00:00 stderr F I0120 10:55:03.462745 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (992.276µs)
2026-01-20T10:55:03.462805327+00:00 stderr F I0120 10:55:03.462791 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:03.462970391+00:00 stderr F I0120 10:55:03.462897 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:03.463098134+00:00 stderr F I0120 10:55:03.463017 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:03.463848245+00:00 stderr F I0120 10:55:03.463676 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:03.512114682+00:00 stderr F W0120 10:55:03.511995 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:03.514258939+00:00 stderr F I0120 10:55:03.514206 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.414171ms)
2026-01-20T10:55:04.463663475+00:00 stderr F I0120 10:55:04.463586 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:04.464221871+00:00 stderr F I0120 10:55:04.464191 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:04.464221871+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:04.464221871+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:04.464381025+00:00 stderr F I0120 10:55:04.464344 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (771.411µs)
2026-01-20T10:55:04.464453047+00:00 stderr F I0120 10:55:04.464435 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:04.464581191+00:00 stderr F I0120 10:55:04.464551 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:04.464669173+00:00 stderr F I0120 10:55:04.464653 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:04.465140945+00:00 stderr F I0120 10:55:04.465089 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:04.505862670+00:00 stderr F W0120 10:55:04.505784 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:04.507784292+00:00 stderr F I0120 10:55:04.507708 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.272363ms)
2026-01-20T10:55:05.386466034+00:00 stderr F I0120 10:55:05.385284 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:05.386691240+00:00 stderr F I0120 10:55:05.386657 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:05.386786332+00:00 stderr F I0120 10:55:05.386771 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:05.387229304+00:00 stderr F I0120 10:55:05.387189 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:05.420479091+00:00 stderr F W0120 10:55:05.420411 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:05.422635628+00:00 stderr F I0120 10:55:05.422606 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.334776ms)
2026-01-20T10:55:05.464663718+00:00 stderr F I0120 10:55:05.464578 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:05.465034578+00:00 stderr F I0120 10:55:05.464965 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:05.465034578+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:05.465034578+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:05.465110900+00:00 stderr F I0120 10:55:05.465038 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (494.673µs)
2026-01-20T10:55:05.465110900+00:00 stderr F I0120 10:55:05.465083 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:05.465208353+00:00 stderr F I0120 10:55:05.465157 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:05.465263724+00:00 stderr F I0120 10:55:05.465225 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:05.465619383+00:00 stderr F I0120 10:55:05.465537 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:05.499162428+00:00 stderr F W0120 10:55:05.499032 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:05.501260454+00:00 stderr F I0120 10:55:05.501162 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.100442ms)
2026-01-20T10:55:06.466124993+00:00 stderr F I0120 10:55:06.466006 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:06.466324538+00:00 stderr F I0120 10:55:06.466294 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:06.466324538+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:06.466324538+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:06.466363199+00:00 stderr F I0120 10:55:06.466342 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (349.849µs)
2026-01-20T10:55:06.466363199+00:00 stderr F I0120 10:55:06.466360 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:06.466423361+00:00 stderr F I0120 10:55:06.466402 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:06.466464752+00:00 stderr F I0120 10:55:06.466450 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:06.466735669+00:00 stderr F I0120 10:55:06.466698 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:06.493017850+00:00 stderr F W0120 10:55:06.492915 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:06.494999014+00:00 stderr F I0120 10:55:06.494954 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.590773ms)
2026-01-20T10:55:07.467396925+00:00 stderr F I0120 10:55:07.467292 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:07.467642571+00:00 stderr F I0120 10:55:07.467558 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:07.467642571+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:07.467642571+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:07.467642571+00:00 stderr F I0120 10:55:07.467618 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (345.389µs)
2026-01-20T10:55:07.467713893+00:00 stderr F I0120 10:55:07.467639 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:07.467713893+00:00 stderr F I0120 10:55:07.467697 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:07.467793865+00:00 stderr F I0120 10:55:07.467753 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:07.468106993+00:00 stderr F I0120 10:55:07.467977 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:07.506514348+00:00 stderr F W0120 10:55:07.506418 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:07.511684965+00:00 stderr F I0120 10:55:07.511624 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.971011ms)
2026-01-20T10:55:08.468748388+00:00 stderr F I0120 10:55:08.468625 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:08.469433646+00:00 stderr F I0120 10:55:08.469378 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:08.469433646+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:08.469433646+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:08.469502718+00:00 stderr F I0120 10:55:08.469458 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (848.663µs)
2026-01-20T10:55:08.469563350+00:00 stderr F I0120 10:55:08.469532 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:08.469662032+00:00 stderr F I0120 10:55:08.469625 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:08.469716634+00:00 stderr F I0120 10:55:08.469695 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:08.470189017+00:00 stderr F I0120 10:55:08.470121 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:08.503327750+00:00 stderr F W0120 10:55:08.503208 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:08.505329603+00:00 stderr F I0120 10:55:08.505278 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.791345ms)
2026-01-20T10:55:09.469673419+00:00 stderr F I0120 10:55:09.469596 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:09.470152702+00:00 stderr F I0120 10:55:09.470130 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:09.470152702+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:09.470152702+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:09.470269935+00:00 stderr F I0120 10:55:09.470246 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (664.429µs)
2026-01-20T10:55:09.470330847+00:00 stderr F I0120 10:55:09.470306 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:09.470437130+00:00 stderr F I0120 10:55:09.470414 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:09.470514692+00:00 stderr F I0120 10:55:09.470502 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:09.470893812+00:00 stderr F I0120 10:55:09.470855 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:09.492520568+00:00 stderr F W0120 10:55:09.492415 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:09.495490428+00:00 stderr F I0120 10:55:09.495411 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.099729ms)
2026-01-20T10:55:10.470802276+00:00 stderr F I0120 10:55:10.470710 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:10.471056162+00:00 stderr F I0120 10:55:10.471009 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:10.471056162+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:10.471056162+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:10.471121724+00:00 stderr F I0120 10:55:10.471084 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (387.89µs) 2026-01-20T10:55:10.471121724+00:00 stderr F I0120 10:55:10.471099 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:10.471176516+00:00 stderr F I0120 10:55:10.471139 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:10.471197046+00:00 stderr F I0120 10:55:10.471187 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:10.471461813+00:00 stderr F I0120 10:55:10.471399 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:10.516301938+00:00 stderr F W0120 10:55:10.516236 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:10.517753787+00:00 stderr F I0120 10:55:10.517694 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.592433ms) 2026-01-20T10:55:11.472220009+00:00 stderr F I0120 10:55:11.472152 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:11.472631140+00:00 stderr F I0120 10:55:11.472612 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:11.472631140+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:11.472631140+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:11.472735863+00:00 stderr F I0120 10:55:11.472716 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (575.535µs) 2026-01-20T10:55:11.472775724+00:00 stderr F I0120 10:55:11.472763 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:11.472866466+00:00 stderr F I0120 10:55:11.472841 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:11.472965480+00:00 stderr F I0120 10:55:11.472948 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:11.473335270+00:00 stderr F I0120 10:55:11.473299 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:11.499420615+00:00 stderr F W0120 10:55:11.499384 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:11.500945936+00:00 stderr F I0120 10:55:11.500895 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.129699ms) 2026-01-20T10:55:12.473036119+00:00 stderr F I0120 10:55:12.472973 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:12.473657325+00:00 stderr F I0120 10:55:12.473625 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:12.473657325+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:12.473657325+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:12.473803509+00:00 stderr F I0120 10:55:12.473774 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (814.372µs) 2026-01-20T10:55:12.473862081+00:00 stderr F I0120 10:55:12.473843 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:12.474001264+00:00 stderr F I0120 10:55:12.473968 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:12.474129298+00:00 stderr F I0120 10:55:12.474108 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:12.474606260+00:00 stderr F I0120 10:55:12.474554 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:12.528614119+00:00 stderr F W0120 10:55:12.528540 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:12.532639247+00:00 stderr F I0120 10:55:12.532591 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (58.745946ms) 2026-01-20T10:55:13.474766371+00:00 stderr F I0120 10:55:13.474648 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:13.475298885+00:00 stderr F I0120 10:55:13.475220 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:13.475298885+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:13.475298885+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:13.475350306+00:00 stderr F I0120 10:55:13.475321 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (695.748µs) 2026-01-20T10:55:13.475366977+00:00 stderr F I0120 10:55:13.475353 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:13.475513501+00:00 stderr F I0120 10:55:13.475437 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:13.475592833+00:00 stderr F I0120 10:55:13.475533 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:13.476200270+00:00 stderr F I0120 10:55:13.476055 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:13.522399391+00:00 stderr F W0120 10:55:13.521801 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:13.526649394+00:00 stderr F I0120 10:55:13.526532 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.173395ms) 2026-01-20T10:55:14.475629921+00:00 stderr F I0120 10:55:14.475478 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:14.475839457+00:00 stderr F I0120 10:55:14.475770 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:14.475839457+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:14.475839457+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:14.475865957+00:00 stderr F I0120 10:55:14.475830 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (366.139µs) 2026-01-20T10:55:14.475865957+00:00 stderr F I0120 10:55:14.475848 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:14.475940289+00:00 stderr F I0120 10:55:14.475890 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:14.475959120+00:00 stderr F I0120 10:55:14.475941 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:14.476284909+00:00 stderr F I0120 10:55:14.476200 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:14.516255905+00:00 stderr F W0120 10:55:14.515509 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:14.518122385+00:00 stderr F I0120 10:55:14.517726 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.872786ms) 2026-01-20T10:55:15.477278843+00:00 stderr F I0120 10:55:15.477151 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:15.477518919+00:00 stderr F I0120 10:55:15.477475 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:15.477518919+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:15.477518919+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:15.477641762+00:00 stderr F I0120 10:55:15.477552 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (421.831µs) 2026-01-20T10:55:15.477710914+00:00 stderr F I0120 10:55:15.477653 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:15.477863558+00:00 stderr F I0120 10:55:15.477817 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:15.477931160+00:00 stderr F I0120 10:55:15.477911 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:15.478739161+00:00 stderr F I0120 10:55:15.478660 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:15.506923043+00:00 stderr F W0120 10:55:15.506843 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:15.508409762+00:00 stderr F I0120 10:55:15.508354 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.721649ms) 2026-01-20T10:55:16.478686237+00:00 stderr F I0120 10:55:16.478530 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:16.479014666+00:00 stderr F I0120 10:55:16.478961 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:16.479014666+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:16.479014666+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:16.479161270+00:00 stderr F I0120 10:55:16.479106 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (594.497µs) 2026-01-20T10:55:16.479161270+00:00 stderr F I0120 10:55:16.479138 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:16.479271893+00:00 stderr F I0120 10:55:16.479218 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:16.479340825+00:00 stderr F I0120 10:55:16.479312 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:16.479784987+00:00 stderr F I0120 10:55:16.479700 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:16.512375045+00:00 stderr F W0120 10:55:16.512286 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:16.516358361+00:00 stderr F I0120 10:55:16.516303 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.160491ms) 2026-01-20T10:55:17.479282890+00:00 stderr F I0120 10:55:17.479180 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:17.479486016+00:00 stderr F I0120 10:55:17.479446 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:17.479486016+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:17.479486016+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:17.479510566+00:00 stderr F I0120 10:55:17.479490 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (323.819µs) 2026-01-20T10:55:17.479528227+00:00 stderr F I0120 10:55:17.479504 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:17.479582678+00:00 stderr F I0120 10:55:17.479546 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:17.479633260+00:00 stderr F I0120 10:55:17.479596 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:17.479891927+00:00 stderr F I0120 10:55:17.479815 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:17.512220638+00:00 stderr F W0120 10:55:17.512125 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:17.513596005+00:00 stderr F I0120 10:55:17.513495 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.989056ms) 2026-01-20T10:55:18.479739850+00:00 stderr F I0120 10:55:18.479604 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:18.480172711+00:00 stderr F I0120 10:55:18.480111 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:18.480172711+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:18.480172711+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:18.480307865+00:00 stderr F I0120 10:55:18.480215 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (632.076µs) 2026-01-20T10:55:18.480307865+00:00 stderr F I0120 10:55:18.480281 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:18.480464879+00:00 stderr F I0120 10:55:18.480397 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:18.480528291+00:00 stderr F I0120 10:55:18.480491 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:18.481003453+00:00 stderr F I0120 10:55:18.480916 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:18.523442164+00:00 stderr F W0120 10:55:18.523349 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:18.525035177+00:00 stderr F I0120 10:55:18.524972 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.691291ms) 2026-01-20T10:55:19.480473356+00:00 stderr F I0120 10:55:19.480356 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:19.480924358+00:00 stderr F I0120 10:55:19.480869 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:19.480924358+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:19.480924358+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:19.481091232+00:00 stderr F I0120 10:55:19.481013 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (672.508µs) 2026-01-20T10:55:19.481091232+00:00 stderr F I0120 10:55:19.481046 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:19.481267577+00:00 stderr F I0120 10:55:19.481211 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:19.481348049+00:00 stderr F I0120 10:55:19.481315 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:19.481874773+00:00 stderr F I0120 10:55:19.481794 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:19.524227593+00:00 stderr F W0120 10:55:19.524114 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:19.528046254+00:00 stderr F I0120 10:55:19.527947 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.89405ms) 2026-01-20T10:55:20.384653999+00:00 stderr F I0120 10:55:20.384577 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:20.384839684+00:00 stderr F I0120 10:55:20.384781 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:20.384927806+00:00 stderr F I0120 10:55:20.384894 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:20.385652445+00:00 stderr F I0120 10:55:20.385577 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:20.411397731+00:00 stderr F W0120 10:55:20.411329 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:20.412816990+00:00 stderr F I0120 10:55:20.412759 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (28.201172ms)
2026-01-20T10:55:20.481545561+00:00 stderr F I0120 10:55:20.481449 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:20.481800938+00:00 stderr F I0120 10:55:20.481759 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:20.481800938+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:20.481800938+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:20.481849989+00:00 stderr F I0120 10:55:20.481817 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (389.96µs)
2026-01-20T10:55:20.481849989+00:00 stderr F I0120 10:55:20.481842 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:20.481998033+00:00 stderr F I0120 10:55:20.481916 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:20.481998033+00:00 stderr F I0120 10:55:20.481984 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:20.482343243+00:00 stderr F I0120 10:55:20.482285 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:20.521125096+00:00 stderr F W0120 10:55:20.521030 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:20.524347403+00:00 stderr F I0120 10:55:20.524251 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.391281ms)
2026-01-20T10:55:21.482257116+00:00 stderr F I0120 10:55:21.482057 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:21.482667948+00:00 stderr F I0120 10:55:21.482588 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:21.482667948+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:21.482667948+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:21.482766880+00:00 stderr F I0120 10:55:21.482690 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (657.238µs)
2026-01-20T10:55:21.482766880+00:00 stderr F I0120 10:55:21.482729 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:21.482900094+00:00 stderr F I0120 10:55:21.482811 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:21.482926615+00:00 stderr F I0120 10:55:21.482910 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:21.483541841+00:00 stderr F I0120 10:55:21.483399 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:21.506685938+00:00 stderr F W0120 10:55:21.506512 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:21.509928034+00:00 stderr F I0120 10:55:21.509847 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.112462ms)
2026-01-20T10:55:22.483322382+00:00 stderr F I0120 10:55:22.483217 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:22.484128773+00:00 stderr F I0120 10:55:22.484011 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:22.484128773+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:22.484128773+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:22.484231016+00:00 stderr F I0120 10:55:22.484188 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (992.776µs)
2026-01-20T10:55:22.484243396+00:00 stderr F I0120 10:55:22.484226 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:22.484348639+00:00 stderr F I0120 10:55:22.484307 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:22.484398071+00:00 stderr F I0120 10:55:22.484378 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:22.484758370+00:00 stderr F I0120 10:55:22.484702 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:22.523583515+00:00 stderr F W0120 10:55:22.523506 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:22.526575545+00:00 stderr F I0120 10:55:22.526520 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.286667ms)
2026-01-20T10:55:23.485091156+00:00 stderr F I0120 10:55:23.484487 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:23.485407806+00:00 stderr F I0120 10:55:23.485373 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:23.485407806+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:23.485407806+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:23.485475127+00:00 stderr F I0120 10:55:23.485442 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (976.616µs)
2026-01-20T10:55:23.485475127+00:00 stderr F I0120 10:55:23.485469 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:23.485569230+00:00 stderr F I0120 10:55:23.485531 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:23.485634962+00:00 stderr F I0120 10:55:23.485610 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:23.485990521+00:00 stderr F I0120 10:55:23.485941 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:23.511736147+00:00 stderr F W0120 10:55:23.511623 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:23.514791078+00:00 stderr F I0120 10:55:23.514749 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.27435ms)
2026-01-20T10:55:24.486336747+00:00 stderr F I0120 10:55:24.486251 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:24.486542783+00:00 stderr F I0120 10:55:24.486517 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:24.486542783+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:24.486542783+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:24.486589454+00:00 stderr F I0120 10:55:24.486571 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (337.299µs)
2026-01-20T10:55:24.486596594+00:00 stderr F I0120 10:55:24.486589 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:24.486662816+00:00 stderr F I0120 10:55:24.486632 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:24.486948863+00:00 stderr F I0120 10:55:24.486728 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:24.487001175+00:00 stderr F I0120 10:55:24.486947 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:24.510361237+00:00 stderr F W0120 10:55:24.510261 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:24.514469987+00:00 stderr F I0120 10:55:24.514367 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.76082ms)
2026-01-20T10:55:25.487143576+00:00 stderr F I0120 10:55:25.487037 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:25.487428323+00:00 stderr F I0120 10:55:25.487381 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:25.487428323+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:25.487428323+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:25.487464244+00:00 stderr F I0120 10:55:25.487443 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (426.991µs)
2026-01-20T10:55:25.487474364+00:00 stderr F I0120 10:55:25.487464 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:25.487571057+00:00 stderr F I0120 10:55:25.487520 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:25.487605578+00:00 stderr F I0120 10:55:25.487583 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:25.487981948+00:00 stderr F I0120 10:55:25.487915 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:25.527923673+00:00 stderr F W0120 10:55:25.527791 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:25.529927676+00:00 stderr F I0120 10:55:25.529841 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.373459ms)
2026-01-20T10:55:26.488424527+00:00 stderr F I0120 10:55:26.488342 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:26.488727305+00:00 stderr F I0120 10:55:26.488690 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:26.488727305+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:26.488727305+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:26.488801187+00:00 stderr F I0120 10:55:26.488769 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (440.953µs)
2026-01-20T10:55:26.488801187+00:00 stderr F I0120 10:55:26.488791 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:26.488906379+00:00 stderr F I0120 10:55:26.488871 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:26.488952071+00:00 stderr F I0120 10:55:26.488932 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:26.489265989+00:00 stderr F I0120 10:55:26.489214 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:26.512348594+00:00 stderr F W0120 10:55:26.512263 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:26.514641135+00:00 stderr F I0120 10:55:26.514570 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.768927ms)
2026-01-20T10:55:27.489374109+00:00 stderr F I0120 10:55:27.489217 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:27.489655126+00:00 stderr F I0120 10:55:27.489604 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:27.489655126+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:27.489655126+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:27.489704348+00:00 stderr F I0120 10:55:27.489674 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (470.253µs)
2026-01-20T10:55:27.489704348+00:00 stderr F I0120 10:55:27.489695 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:27.489809100+00:00 stderr F I0120 10:55:27.489755 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:27.489847212+00:00 stderr F I0120 10:55:27.489824 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:27.490201281+00:00 stderr F I0120 10:55:27.490136 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:27.517999872+00:00 stderr F W0120 10:55:27.517938 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:27.519622586+00:00 stderr F I0120 10:55:27.519571 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.872157ms)
2026-01-20T10:55:28.490275250+00:00 stderr F I0120 10:55:28.490173 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:28.490669810+00:00 stderr F I0120 10:55:28.490631 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:28.490669810+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:28.490669810+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:28.490747752+00:00 stderr F I0120 10:55:28.490708 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (556.655µs)
2026-01-20T10:55:28.490747752+00:00 stderr F I0120 10:55:28.490742 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:28.490859205+00:00 stderr F I0120 10:55:28.490816 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:28.490923367+00:00 stderr F I0120 10:55:28.490896 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:28.491232605+00:00 stderr F I0120 10:55:28.491186 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:28.513862109+00:00 stderr F W0120 10:55:28.513781 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:28.515708228+00:00 stderr F I0120 10:55:28.515629 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.875463ms)
2026-01-20T10:55:29.490996996+00:00 stderr F I0120 10:55:29.490901 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:29.491421477+00:00 stderr F I0120 10:55:29.491380 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:29.491421477+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:29.491421477+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:29.491504470+00:00 stderr F I0120 10:55:29.491461 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (579.806µs)
2026-01-20T10:55:29.491504470+00:00 stderr F I0120 10:55:29.491490 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:29.491623024+00:00 stderr F I0120 10:55:29.491570 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:29.491688695+00:00 stderr F I0120 10:55:29.491655 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:29.492041965+00:00 stderr F I0120 10:55:29.491983 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:29.534173218+00:00 stderr F W0120 10:55:29.532934 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:29.545241493+00:00 stderr F I0120 10:55:29.535868 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.372862ms)
2026-01-20T10:55:30.492433432+00:00 stderr F I0120 10:55:30.492320 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:30.492966756+00:00 stderr F I0120 10:55:30.492925 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:30.492966756+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:30.492966756+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:30.493141041+00:00 stderr F I0120 10:55:30.493086 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (760.31µs)
2026-01-20T10:55:30.493171942+00:00 stderr F I0120 10:55:30.493149 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:30.493329966+00:00 stderr F I0120 10:55:30.493260 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:30.493417288+00:00 stderr F I0120 10:55:30.493376 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:30.494127967+00:00 stderr F I0120 10:55:30.493923 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:30.521588869+00:00 stderr F W0120 10:55:30.521500 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:30.524112677+00:00 stderr F I0120 10:55:30.524021 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.858473ms)
2026-01-20T10:55:31.494328390+00:00 stderr F I0120 10:55:31.494203 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:31.494967628+00:00 stderr F I0120 10:55:31.494886 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:31.494967628+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:31.494967628+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:31.495080221+00:00 stderr F I0120 10:55:31.495032 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (851.043µs)
2026-01-20T10:55:31.495138962+00:00 stderr F I0120 10:55:31.495113 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:31.495453361+00:00 stderr F I0120 10:55:31.495272 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:31.495498572+00:00 stderr F I0120 10:55:31.495457 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:31.496182360+00:00 stderr F I0120 10:55:31.496040 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:31.547306592+00:00 stderr F W0120 10:55:31.547200 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:31.552636904+00:00 stderr F I0120 10:55:31.552570 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (57.451651ms)
2026-01-20T10:55:32.496046023+00:00 stderr F I0120 10:55:32.495951 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:32.496992838+00:00 stderr F I0120 10:55:32.496947 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:32.496992838+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:32.496992838+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:32.497236675+00:00 stderr F I0120 10:55:32.497193 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.263103ms)
2026-01-20T10:55:32.497327638+00:00 stderr F I0120 10:55:32.497262 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:32.497560234+00:00 stderr F I0120 10:55:32.497458 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:32.497560234+00:00 stderr F I0120 10:55:32.497550 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:32.498127829+00:00 stderr F I0120 10:55:32.497984 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:32.551534463+00:00 stderr F W0120 10:55:32.551421 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:32.555148039+00:00 stderr F I0120 10:55:32.555027 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (57.76854ms)
2026-01-20T10:55:33.498320911+00:00 stderr F I0120 10:55:33.498202 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:33.498795364+00:00 stderr F I0120 10:55:33.498725 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:33.498795364+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:33.498795364+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:33.498961868+00:00 stderr F I0120 10:55:33.498875 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (693.158µs)
2026-01-20T10:55:33.498961868+00:00 stderr F I0120 10:55:33.498912 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:33.499094492+00:00 stderr F I0120 10:55:33.499002 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:33.499177234+00:00 stderr F I0120 10:55:33.499139 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:33.499759149+00:00 stderr F I0120 10:55:33.499675 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:33.546574717+00:00 stderr F W0120 10:55:33.546453 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:33.549613088+00:00 stderr F I0120 10:55:33.549534 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.617189ms)
2026-01-20T10:55:34.499372027+00:00 stderr F I0120 10:55:34.499254 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:34.499681615+00:00 stderr F I0120 10:55:34.499644 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:34.499681615+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:34.499681615+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:34.499727806+00:00 stderr F I0120 10:55:34.499700 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (461.022µs)
2026-01-20T10:55:34.499727806+00:00 stderr F I0120 10:55:34.499717 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:34.499799468+00:00 stderr F I0120 10:55:34.499757 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:34.499844169+00:00 stderr F I0120 10:55:34.499824 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:34.500142257+00:00 stderr F I0120 10:55:34.500099 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:34.526467099+00:00 stderr F W0120 10:55:34.526391 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:34.528618737+00:00 stderr F I0120 10:55:34.528578 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.85492ms)
2026-01-20T10:55:35.385367085+00:00 stderr F I0120 10:55:35.385283 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:35.385562541+00:00 stderr F I0120 10:55:35.385495 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:35.385626572+00:00 stderr F I0120 10:55:35.385607 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:35.386182787+00:00 stderr F I0120 10:55:35.386127 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:35.416248628+00:00 stderr F W0120 10:55:35.416185 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:35.417941943+00:00 stderr F I0120 10:55:35.417913 1 cvo.go:659] Finished syncing cluster version
"openshift-cluster-version/version" (32.64513ms) 2026-01-20T10:55:35.500747921+00:00 stderr F I0120 10:55:35.500645 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:35.501215253+00:00 stderr F I0120 10:55:35.501179 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:35.501215253+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:35.501215253+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:35.501301606+00:00 stderr F I0120 10:55:35.501253 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (626.916µs) 2026-01-20T10:55:35.501301606+00:00 stderr F I0120 10:55:35.501281 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:35.501390188+00:00 stderr F I0120 10:55:35.501348 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:35.501482050+00:00 stderr F I0120 10:55:35.501434 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:35.501904511+00:00 stderr F I0120 10:55:35.501847 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:35.542329539+00:00 stderr F W0120 10:55:35.542238 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:35.545707249+00:00 stderr F I0120 10:55:35.545625 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.333692ms) 2026-01-20T10:55:36.502344320+00:00 stderr F I0120 10:55:36.502267 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:36.504105308+00:00 stderr F I0120 10:55:36.502932 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:36.504105308+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:36.504105308+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:36.504105308+00:00 stderr F I0120 10:55:36.503420 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.165971ms) 2026-01-20T10:55:36.504105308+00:00 stderr F I0120 10:55:36.503447 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:36.506915332+00:00 stderr F I0120 10:55:36.503538 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:36.507101247+00:00 stderr F I0120 10:55:36.507051 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:36.507674542+00:00 stderr F I0120 10:55:36.507619 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:36.535477994+00:00 stderr F W0120 10:55:36.535420 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:36.538222727+00:00 stderr F I0120 10:55:36.538179 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.722865ms) 2026-01-20T10:55:37.503782596+00:00 stderr F I0120 10:55:37.503675 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:37.504157076+00:00 stderr F I0120 10:55:37.504099 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:37.504157076+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:37.504157076+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:37.504199917+00:00 stderr F I0120 10:55:37.504157 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (503.324µs) 2026-01-20T10:55:37.504199917+00:00 stderr F I0120 10:55:37.504178 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:37.504287649+00:00 stderr F I0120 10:55:37.504236 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:37.504340230+00:00 stderr F I0120 10:55:37.504306 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:37.504685460+00:00 stderr F I0120 10:55:37.504616 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:37.541802329+00:00 stderr F W0120 10:55:37.541725 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:37.543422342+00:00 stderr F I0120 10:55:37.543307 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.126163ms) 2026-01-20T10:55:38.504520942+00:00 stderr F I0120 10:55:38.504400 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:38.505198670+00:00 stderr F I0120 10:55:38.505048 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:38.505198670+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:38.505198670+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:38.505371264+00:00 stderr F I0120 10:55:38.505350 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (970.415µs) 2026-01-20T10:55:38.505408985+00:00 stderr F I0120 10:55:38.505398 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:38.505512198+00:00 stderr F I0120 10:55:38.505489 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:38.505577520+00:00 stderr F I0120 10:55:38.505565 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:38.505852437+00:00 stderr F I0120 10:55:38.505821 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:38.573744767+00:00 stderr F W0120 10:55:38.573636 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:38.575968066+00:00 stderr F I0120 10:55:38.575904 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (70.500729ms) 2026-01-20T10:55:39.505930066+00:00 stderr F I0120 10:55:39.505829 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:39.506536973+00:00 stderr F I0120 10:55:39.506476 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:39.506536973+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:39.506536973+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:39.506726768+00:00 stderr F I0120 10:55:39.506610 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (802.852µs) 2026-01-20T10:55:39.506726768+00:00 stderr F I0120 10:55:39.506655 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:39.506777519+00:00 stderr F I0120 10:55:39.506726 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:39.506786770+00:00 stderr F I0120 10:55:39.506779 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:39.507108058+00:00 stderr F I0120 10:55:39.507012 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:39.538528776+00:00 stderr F W0120 10:55:39.538451 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:39.540217851+00:00 stderr F I0120 10:55:39.540172 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.514384ms) 2026-01-20T10:55:40.506725255+00:00 stderr F I0120 10:55:40.506602 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:40.506973112+00:00 stderr F I0120 10:55:40.506915 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:40.506973112+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:40.506973112+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:40.506997402+00:00 stderr F I0120 10:55:40.506974 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (386.15µs) 2026-01-20T10:55:40.507029673+00:00 stderr F I0120 10:55:40.506993 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:40.507167407+00:00 stderr F I0120 10:55:40.507109 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:40.507194158+00:00 stderr F I0120 10:55:40.507167 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:40.507477385+00:00 stderr F I0120 10:55:40.507410 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:40.562253535+00:00 stderr F W0120 10:55:40.562121 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:40.563648732+00:00 stderr F I0120 10:55:40.563546 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (56.549948ms) 2026-01-20T10:55:41.507146213+00:00 stderr F I0120 10:55:41.507025 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:41.507499402+00:00 stderr F I0120 10:55:41.507437 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:41.507499402+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:41.507499402+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:41.507558144+00:00 stderr F I0120 10:55:41.507514 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (503.353µs) 2026-01-20T10:55:41.507558144+00:00 stderr F I0120 10:55:41.507541 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:41.507660117+00:00 stderr F I0120 10:55:41.507598 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:41.507679397+00:00 stderr F I0120 10:55:41.507664 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:41.508036407+00:00 stderr F I0120 10:55:41.507964 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:41.540648176+00:00 stderr F W0120 10:55:41.539990 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:41.541552360+00:00 stderr F I0120 10:55:41.541503 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.959786ms) 2026-01-20T10:55:42.508101306+00:00 stderr F I0120 10:55:42.507994 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:42.508370683+00:00 stderr F I0120 10:55:42.508336 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:42.508370683+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:42.508370683+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:42.508457985+00:00 stderr F I0120 10:55:42.508422 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (441.672µs) 2026-01-20T10:55:42.508457985+00:00 stderr F I0120 10:55:42.508450 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:42.508566948+00:00 stderr F I0120 10:55:42.508517 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:42.508633910+00:00 stderr F I0120 10:55:42.508610 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:42.509038090+00:00 stderr F I0120 10:55:42.508977 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:42.553204178+00:00 stderr F W0120 10:55:42.553121 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:42.554653487+00:00 stderr F I0120 10:55:42.554612 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.160681ms) 2026-01-20T10:55:43.508908924+00:00 stderr F I0120 10:55:43.508783 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:43.509251593+00:00 stderr F I0120 10:55:43.509161 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:43.509251593+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:43.509251593+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:43.509251593+00:00 stderr F I0120 10:55:43.509215 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (453.961µs) 2026-01-20T10:55:43.509251593+00:00 stderr F I0120 10:55:43.509244 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:43.509363846+00:00 stderr F I0120 10:55:43.509298 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:43.509416308+00:00 stderr F I0120 10:55:43.509364 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:43.509769698+00:00 stderr F I0120 10:55:43.509653 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:43.551763128+00:00 stderr F W0120 10:55:43.551633 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:43.553703019+00:00 stderr F I0120 10:55:43.553611 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.362023ms) 2026-01-20T10:55:44.510200026+00:00 stderr F I0120 10:55:44.510140 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:44.510677298+00:00 stderr F I0120 10:55:44.510650 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:44.510677298+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:44.510677298+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:44.510788641+00:00 stderr F I0120 10:55:44.510766 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (639.827µs) 2026-01-20T10:55:44.510844243+00:00 stderr F I0120 10:55:44.510829 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:44.510955696+00:00 stderr F I0120 10:55:44.510929 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:44.511039348+00:00 stderr F I0120 10:55:44.511025 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:44.511437698+00:00 stderr F I0120 10:55:44.511396 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:44.549879363+00:00 stderr F W0120 10:55:44.549807 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:44.554237429+00:00 stderr F I0120 10:55:44.554190 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.354906ms) 2026-01-20T10:55:45.512338779+00:00 stderr F I0120 10:55:45.511109 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:45.512606107+00:00 stderr F I0120 10:55:45.512578 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:45.512606107+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:45.512606107+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:45.512683099+00:00 stderr F I0120 10:55:45.512659 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.550402ms) 2026-01-20T10:55:45.512683099+00:00 stderr F I0120 10:55:45.512678 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:45.512795772+00:00 stderr F I0120 10:55:45.512762 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:45.512831373+00:00 stderr F I0120 10:55:45.512817 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:45.513149491+00:00 stderr F I0120 10:55:45.513108 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:45.557751570+00:00 stderr F W0120 10:55:45.557670 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:45.568125646+00:00 stderr F I0120 10:55:45.560097 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.412783ms) 2026-01-20T10:55:46.513788625+00:00 stderr F I0120 10:55:46.513587 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:46.513891897+00:00 stderr F I0120 10:55:46.513861 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:46.513891897+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:46.513891897+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:46.514045421+00:00 stderr F I0120 10:55:46.513941 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (366.94µs) 2026-01-20T10:55:46.514045421+00:00 stderr F I0120 10:55:46.513961 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:46.514045421+00:00 stderr F I0120 10:55:46.514002 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:46.514096133+00:00 stderr F I0120 10:55:46.514050 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:46.514398031+00:00 stderr F I0120 10:55:46.514326 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:46.544700779+00:00 stderr F W0120 10:55:46.544627 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:46.546173777+00:00 stderr F I0120 10:55:46.546114 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.151597ms) 2026-01-20T10:55:47.514644674+00:00 stderr F I0120 10:55:47.514512 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:47.515107096+00:00 stderr F I0120 10:55:47.514997 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:47.515107096+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:47.515107096+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:47.515146177+00:00 stderr F I0120 10:55:47.515120 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (628.906µs) 2026-01-20T10:55:47.515165668+00:00 stderr F I0120 10:55:47.515149 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:47.515262810+00:00 stderr F I0120 10:55:47.515209 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:47.515285741+00:00 stderr F I0120 10:55:47.515264 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:47.515579610+00:00 stderr F I0120 10:55:47.515500 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:47.553996423+00:00 stderr F W0120 10:55:47.553929 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:47.557406804+00:00 stderr F I0120 10:55:47.557354 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.197675ms) 2026-01-20T10:55:48.515481485+00:00 stderr F I0120 10:55:48.515386 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:48.516256436+00:00 stderr F I0120 10:55:48.516220 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:48.516256436+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:48.516256436+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:48.516466322+00:00 stderr F I0120 10:55:48.516433 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.09379ms) 2026-01-20T10:55:48.516528293+00:00 stderr F I0120 10:55:48.516476 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:48.516671737+00:00 stderr F I0120 10:55:48.516636 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:48.516843102+00:00 stderr F I0120 10:55:48.516782 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:48.517451368+00:00 stderr F I0120 10:55:48.517389 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:48.543985852+00:00 stderr F W0120 10:55:48.543920 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:48.545635936+00:00 stderr F I0120 10:55:48.545610 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.135203ms) 2026-01-20T10:55:49.516867367+00:00 stderr F I0120 10:55:49.516757 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:49.517921985+00:00 stderr F I0120 10:55:49.517250 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:49.517921985+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:49.517921985+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:49.517921985+00:00 stderr F I0120 10:55:49.517343 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (609.947µs) 2026-01-20T10:55:49.517921985+00:00 stderr F I0120 10:55:49.517368 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:49.517921985+00:00 stderr F I0120 10:55:49.517456 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:49.517921985+00:00 stderr F I0120 10:55:49.517547 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:49.518052179+00:00 stderr F I0120 10:55:49.517993 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:49.561992911+00:00 stderr F W0120 10:55:49.561889 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:49.565567337+00:00 stderr F I0120 10:55:49.565500 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.122005ms) 2026-01-20T10:55:50.384675006+00:00 stderr F I0120 10:55:50.384599 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:50.384948063+00:00 stderr F I0120 10:55:50.384903 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:50.385141238+00:00 stderr F I0120 10:55:50.385050 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:50.385688783+00:00 stderr F I0120 10:55:50.385613 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:50.429465681+00:00 stderr F W0120 10:55:50.429356 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:50.433385826+00:00 stderr F I0120 10:55:50.433287 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (48.70065ms) 2026-01-20T10:55:50.518565687+00:00 stderr F I0120 10:55:50.518472 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:50.519490413+00:00 stderr F I0120 10:55:50.519439 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:50.519490413+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:50.519490413+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:50.519759970+00:00 stderr F I0120 10:55:50.519711 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.256045ms) 2026-01-20T10:55:50.519860403+00:00 stderr F I0120 10:55:50.519827 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:50.520117840+00:00 stderr F I0120 10:55:50.520007 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:50.520309835+00:00 stderr F I0120 10:55:50.520272 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:50.521015234+00:00 stderr F I0120 10:55:50.520941 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:50.567415982+00:00 stderr F W0120 10:55:50.567336 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:50.570990258+00:00 stderr F I0120 10:55:50.570931 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.099934ms) 2026-01-20T10:55:51.520255407+00:00 stderr F I0120 10:55:51.520058 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:51.520750451+00:00 stderr F I0120 10:55:51.520668 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:51.520750451+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:51.520750451+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:51.520881654+00:00 stderr F I0120 10:55:51.520813 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (769.83µs) 2026-01-20T10:55:51.520881654+00:00 stderr F I0120 10:55:51.520862 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:51.521113580+00:00 stderr F I0120 10:55:51.520976 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:51.521143981+00:00 stderr F I0120 10:55:51.521118 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:51.521797969+00:00 stderr F I0120 10:55:51.521691 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:51.556567174+00:00 stderr F W0120 10:55:51.556014 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:51.558409624+00:00 stderr F I0120 10:55:51.558354 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.48699ms) 2026-01-20T10:55:52.521477476+00:00 stderr F I0120 10:55:52.521362 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:52.521985150+00:00 stderr F I0120 10:55:52.521953 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:52.521985150+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:52.521985150+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:52.522174645+00:00 stderr F I0120 10:55:52.522140 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (792.742µs) 2026-01-20T10:55:52.522268698+00:00 stderr F I0120 10:55:52.522244 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:52.522398751+00:00 stderr F I0120 10:55:52.522364 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:52.522545735+00:00 stderr F I0120 10:55:52.522501 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:52.523093270+00:00 stderr F I0120 10:55:52.523001 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:52.556206881+00:00 stderr F W0120 10:55:52.556117 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:52.561044601+00:00 stderr F I0120 10:55:52.560939 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.670611ms) 2026-01-20T10:55:53.523394833+00:00 stderr F I0120 10:55:53.523224 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:53.523978849+00:00 stderr F I0120 10:55:53.523889 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:53.523978849+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:53.523978849+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:53.524108302+00:00 stderr F I0120 10:55:53.524036 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (833.902µs) 2026-01-20T10:55:53.524138353+00:00 stderr F I0120 10:55:53.524122 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:53.524313118+00:00 stderr F I0120 10:55:53.524234 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:53.524409820+00:00 stderr F I0120 10:55:53.524359 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:53.525134691+00:00 stderr F I0120 10:55:53.525013 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:53.577270343+00:00 stderr F W0120 10:55:53.577145 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:53.580490889+00:00 stderr F I0120 10:55:53.580429 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (56.305565ms) 2026-01-20T10:55:54.525037323+00:00 stderr F I0120 10:55:54.524930 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:54.525448394+00:00 stderr F I0120 10:55:54.525399 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:54.525448394+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:54.525448394+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:54.525529906+00:00 stderr F I0120 10:55:54.525483 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (596.046µs) 2026-01-20T10:55:54.525529906+00:00 stderr F I0120 10:55:54.525518 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:54.525679390+00:00 stderr F I0120 10:55:54.525609 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:54.525735642+00:00 stderr F I0120 10:55:54.525702 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:54.526199384+00:00 stderr F I0120 10:55:54.526132 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:54.553823327+00:00 stderr F W0120 10:55:54.553764 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:54.555900094+00:00 stderr F I0120 10:55:54.555796 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.276455ms) 2026-01-20T10:55:55.526380784+00:00 stderr F I0120 10:55:55.526264 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:55.526649511+00:00 stderr F I0120 10:55:55.526594 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:55.526649511+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:55.526649511+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:55.526713503+00:00 stderr F I0120 10:55:55.526666 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (414.231µs) 2026-01-20T10:55:55.526713503+00:00 stderr F I0120 10:55:55.526690 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:55.526823076+00:00 stderr F I0120 10:55:55.526762 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:55.526864937+00:00 stderr F I0120 10:55:55.526828 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:55.527163805+00:00 stderr F I0120 10:55:55.527094 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:55.561984683+00:00 stderr F W0120 10:55:55.561896 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:55.563455522+00:00 stderr F I0120 10:55:55.563395 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.704208ms) 2026-01-20T10:55:56.527594393+00:00 stderr F I0120 10:55:56.527473 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:55:56.527849920+00:00 stderr F I0120 10:55:56.527797 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:56.527849920+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:55:56.527849920+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:55:56.527907981+00:00 stderr F I0120 10:55:56.527874 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (419.392µs) 2026-01-20T10:55:56.527907981+00:00 stderr F I0120 10:55:56.527896 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:55:56.528003334+00:00 stderr F I0120 10:55:56.527946 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:55:56.528108727+00:00 stderr F I0120 10:55:56.528018 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:55:56.528422545+00:00 stderr F I0120 10:55:56.528329 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:55:56.561300130+00:00 stderr F W0120 10:55:56.561173 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:55:56.563716504+00:00 stderr F I0120 10:55:56.563622 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.72138ms) 2026-01-20T10:55:57.528922994+00:00 stderr F I0120 10:55:57.528749 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:55:57.529406717+00:00 stderr F I0120 10:55:57.529214 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:55:57.529406717+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:55:57.529406717+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:57.529406717+00:00 stderr F I0120 10:55:57.529290 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (558.805µs)
2026-01-20T10:55:57.529406717+00:00 stderr F I0120 10:55:57.529314 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:57.529406717+00:00 stderr F I0120 10:55:57.529380 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:57.529461308+00:00 stderr F I0120 10:55:57.529448 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:57.529846118+00:00 stderr F I0120 10:55:57.529783 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:57.575249230+00:00 stderr F W0120 10:55:57.575148 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:57.576869774+00:00 stderr F I0120 10:55:57.576835 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.516568ms)
2026-01-20T10:55:58.530324107+00:00 stderr F I0120 10:55:58.530238 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:58.530802900+00:00 stderr F I0120 10:55:58.530730 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:58.530802900+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:58.530802900+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:58.530887923+00:00 stderr F I0120 10:55:58.530830 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (607.767µs)
2026-01-20T10:55:58.530887923+00:00 stderr F I0120 10:55:58.530861 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:58.530988665+00:00 stderr F I0120 10:55:58.530934 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:58.531040977+00:00 stderr F I0120 10:55:58.531013 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:58.531437677+00:00 stderr F I0120 10:55:58.531364 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:58.561099905+00:00 stderr F W0120 10:55:58.561007 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:58.562589415+00:00 stderr F I0120 10:55:58.562536 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.673112ms)
2026-01-20T10:55:59.531282859+00:00 stderr F I0120 10:55:59.531183 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:55:59.531654569+00:00 stderr F I0120 10:55:59.531636 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:55:59.531654569+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:55:59.531654569+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:55:59.531741431+00:00 stderr F I0120 10:55:59.531725 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (556.185µs)
2026-01-20T10:55:59.531775042+00:00 stderr F I0120 10:55:59.531763 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:55:59.531850174+00:00 stderr F I0120 10:55:59.531829 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:55:59.531911306+00:00 stderr F I0120 10:55:59.531900 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:55:59.532318126+00:00 stderr F I0120 10:55:59.532280 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:55:59.555366506+00:00 stderr F W0120 10:55:59.555309 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:55:59.556929819+00:00 stderr F I0120 10:55:59.556875 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.107845ms)
2026-01-20T10:56:00.532275280+00:00 stderr F I0120 10:56:00.532206 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:00.532668451+00:00 stderr F I0120 10:56:00.532650 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:00.532668451+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:00.532668451+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:00.532774453+00:00 stderr F I0120 10:56:00.532759 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (566.485µs)
2026-01-20T10:56:00.532807864+00:00 stderr F I0120 10:56:00.532798 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:00.532896947+00:00 stderr F I0120 10:56:00.532877 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:00.532971959+00:00 stderr F I0120 10:56:00.532961 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:00.533264387+00:00 stderr F I0120 10:56:00.533226 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:00.566871251+00:00 stderr F W0120 10:56:00.566800 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:00.568617358+00:00 stderr F I0120 10:56:00.568559 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.758472ms)
2026-01-20T10:56:01.532938694+00:00 stderr F I0120 10:56:01.532810 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:01.533265793+00:00 stderr F I0120 10:56:01.533201 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:01.533265793+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:01.533265793+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:01.533304654+00:00 stderr F I0120 10:56:01.533262 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (471.682µs)
2026-01-20T10:56:01.533304654+00:00 stderr F I0120 10:56:01.533281 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:01.533395996+00:00 stderr F I0120 10:56:01.533345 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:01.533423697+00:00 stderr F I0120 10:56:01.533411 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:01.533762257+00:00 stderr F I0120 10:56:01.533678 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:01.566786144+00:00 stderr F W0120 10:56:01.566740 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:01.569516348+00:00 stderr F I0120 10:56:01.569484 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.196644ms)
2026-01-20T10:56:02.533966198+00:00 stderr F I0120 10:56:02.533855 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:02.534266886+00:00 stderr F I0120 10:56:02.534213 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:02.534266886+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:02.534266886+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:02.534294896+00:00 stderr F I0120 10:56:02.534273 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (434.512µs)
2026-01-20T10:56:02.534308017+00:00 stderr F I0120 10:56:02.534294 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:02.534382009+00:00 stderr F I0120 10:56:02.534346 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:02.534423790+00:00 stderr F I0120 10:56:02.534398 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:02.534723488+00:00 stderr F I0120 10:56:02.534657 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:02.563090671+00:00 stderr F W0120 10:56:02.563030 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:02.564623502+00:00 stderr F I0120 10:56:02.564598 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.300436ms)
2026-01-20T10:56:03.535119773+00:00 stderr F I0120 10:56:03.535031 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:03.535804022+00:00 stderr F I0120 10:56:03.535755 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:03.535804022+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:03.535804022+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:03.535927125+00:00 stderr F I0120 10:56:03.535855 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (875.353µs)
2026-01-20T10:56:03.535927125+00:00 stderr F I0120 10:56:03.535886 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:03.536008547+00:00 stderr F I0120 10:56:03.535966 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:03.536079809+00:00 stderr F I0120 10:56:03.536050 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:03.536542291+00:00 stderr F I0120 10:56:03.536472 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:03.584456481+00:00 stderr F W0120 10:56:03.584372 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:03.586451754+00:00 stderr F I0120 10:56:03.586392 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.502478ms)
2026-01-20T10:56:04.536996480+00:00 stderr F I0120 10:56:04.536905 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:04.537317458+00:00 stderr F I0120 10:56:04.537262 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:04.537317458+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:04.537317458+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:04.537349279+00:00 stderr F I0120 10:56:04.537332 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (443.282µs)
2026-01-20T10:56:04.537363180+00:00 stderr F I0120 10:56:04.537352 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:04.537451462+00:00 stderr F I0120 10:56:04.537403 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:04.537499403+00:00 stderr F I0120 10:56:04.537466 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:04.537824752+00:00 stderr F I0120 10:56:04.537767 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:04.568806425+00:00 stderr F W0120 10:56:04.568717 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:04.570931373+00:00 stderr F I0120 10:56:04.570893 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.536802ms)
2026-01-20T10:56:05.384755258+00:00 stderr F I0120 10:56:05.384671 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:05.384862441+00:00 stderr F I0120 10:56:05.384812 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:05.384928863+00:00 stderr F I0120 10:56:05.384901 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:05.385308553+00:00 stderr F I0120 10:56:05.385264 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:05.425799043+00:00 stderr F W0120 10:56:05.425678 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:05.427229491+00:00 stderr F I0120 10:56:05.427189 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.532774ms)
2026-01-20T10:56:05.538180647+00:00 stderr F I0120 10:56:05.538042 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:05.538607838+00:00 stderr F I0120 10:56:05.538546 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:05.538607838+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:05.538607838+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:05.538703140+00:00 stderr F I0120 10:56:05.538654 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (632.597µs)
2026-01-20T10:56:05.538703140+00:00 stderr F I0120 10:56:05.538686 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:05.538813713+00:00 stderr F I0120 10:56:05.538763 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:05.538872905+00:00 stderr F I0120 10:56:05.538843 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:05.539349627+00:00 stderr F I0120 10:56:05.539278 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:05.577466163+00:00 stderr F W0120 10:56:05.577377 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:05.579643162+00:00 stderr F I0120 10:56:05.579583 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.89361ms)
2026-01-20T10:56:06.539938609+00:00 stderr F I0120 10:56:06.539251 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:06.539938609+00:00 stderr F I0120 10:56:06.539634 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:06.539938609+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:06.539938609+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:06.539938609+00:00 stderr F I0120 10:56:06.539697 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (460.322µs)
2026-01-20T10:56:06.539938609+00:00 stderr F I0120 10:56:06.539723 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:06.539938609+00:00 stderr F I0120 10:56:06.539783 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:06.539938609+00:00 stderr F I0120 10:56:06.539842 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:06.540358111+00:00 stderr F I0120 10:56:06.540206 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:06.570644856+00:00 stderr F W0120 10:56:06.570541 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:06.572134555+00:00 stderr F I0120 10:56:06.572030 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.305139ms)
2026-01-20T10:56:07.540146470+00:00 stderr F I0120 10:56:07.540068 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:07.540386357+00:00 stderr F I0120 10:56:07.540351 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:07.540386357+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:07.540386357+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:07.540429948+00:00 stderr F I0120 10:56:07.540401 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (359.129µs)
2026-01-20T10:56:07.540470049+00:00 stderr F I0120 10:56:07.540445 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:07.540526490+00:00 stderr F I0120 10:56:07.540490 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:07.540546281+00:00 stderr F I0120 10:56:07.540540 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:07.540793727+00:00 stderr F I0120 10:56:07.540746 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:07.564033422+00:00 stderr F W0120 10:56:07.563963 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:07.565429360+00:00 stderr F I0120 10:56:07.565385 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.959821ms)
2026-01-20T10:56:08.541142392+00:00 stderr F I0120 10:56:08.540970 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:08.541357698+00:00 stderr F I0120 10:56:08.541309 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:08.541357698+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:08.541357698+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:08.541385579+00:00 stderr F I0120 10:56:08.541361 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (413.261µs)
2026-01-20T10:56:08.541385579+00:00 stderr F I0120 10:56:08.541376 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:08.541503842+00:00 stderr F I0120 10:56:08.541426 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:08.541503842+00:00 stderr F I0120 10:56:08.541494 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:08.541790879+00:00 stderr F I0120 10:56:08.541706 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:08.583025359+00:00 stderr F W0120 10:56:08.582940 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:08.585130986+00:00 stderr F I0120 10:56:08.585085 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.678356ms)
2026-01-20T10:56:09.542237227+00:00 stderr F I0120 10:56:09.542156 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:09.542590596+00:00 stderr F I0120 10:56:09.542556 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:09.542590596+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:09.542590596+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:09.542657228+00:00 stderr F I0120 10:56:09.542629 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (488.063µs)
2026-01-20T10:56:09.542657228+00:00 stderr F I0120 10:56:09.542651 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:09.542731170+00:00 stderr F I0120 10:56:09.542702 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:09.542782091+00:00 stderr F I0120 10:56:09.542766 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:09.543162472+00:00 stderr F I0120 10:56:09.543111 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:09.579335496+00:00 stderr F W0120 10:56:09.579227 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:09.580746393+00:00 stderr F I0120 10:56:09.580676 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.022943ms)
2026-01-20T10:56:10.543738533+00:00 stderr F I0120 10:56:10.543211 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:10.543971499+00:00 stderr F I0120 10:56:10.543933 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:10.543971499+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:10.543971499+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:10.544034941+00:00 stderr F I0120 10:56:10.543997 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (808.512µs)
2026-01-20T10:56:10.544034941+00:00 stderr F I0120 10:56:10.544019 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:10.544124413+00:00 stderr F I0120 10:56:10.544083 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:10.544160414+00:00 stderr F I0120 10:56:10.544140 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:10.544407021+00:00 stderr F I0120 10:56:10.544357 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:10.577792929+00:00 stderr F W0120 10:56:10.577722 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:10.579315300+00:00 stderr F I0120 10:56:10.579259 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.237498ms)
2026-01-20T10:56:11.544453557+00:00 stderr F I0120 10:56:11.544378 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:11.544743825+00:00 stderr F I0120 10:56:11.544719 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:11.544743825+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:11.544743825+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:11.544814287+00:00 stderr F I0120 10:56:11.544784 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (419.071µs)
2026-01-20T10:56:11.544835838+00:00 stderr F I0120 10:56:11.544821 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:11.544901089+00:00 stderr F I0120 10:56:11.544871 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:11.544950081+00:00 stderr F I0120 10:56:11.544934 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:11.545200467+00:00 stderr F I0120 10:56:11.545168 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:11.582919153+00:00 stderr F W0120 10:56:11.582845 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:11.587182727+00:00 stderr F I0120 10:56:11.586961 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.128943ms)
2026-01-20T10:56:12.546541649+00:00 stderr F I0120 10:56:12.545757 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:12.547008832+00:00 stderr F I0120 10:56:12.546958 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:12.547008832+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:12.547008832+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:12.547223428+00:00 stderr F I0120 10:56:12.547164 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.424269ms) 2026-01-20T10:56:12.547238838+00:00 stderr F I0120 10:56:12.547219 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:12.547396212+00:00 stderr F I0120 10:56:12.547338 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:12.547486195+00:00 stderr F I0120 10:56:12.547448 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:12.548051959+00:00 stderr F I0120 10:56:12.547976 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:12.580218846+00:00 stderr F W0120 10:56:12.580139 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:12.583782081+00:00 stderr F I0120 10:56:12.583712 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.492182ms) 2026-01-20T10:56:13.547327625+00:00 stderr F I0120 10:56:13.547168 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:13.547625353+00:00 stderr F I0120 10:56:13.547568 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:13.547625353+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:13.547625353+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:13.547721386+00:00 stderr F I0120 10:56:13.547664 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (508.443µs) 2026-01-20T10:56:13.547721386+00:00 stderr F I0120 10:56:13.547697 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:13.547825818+00:00 stderr F I0120 10:56:13.547766 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:13.547865680+00:00 stderr F I0120 10:56:13.547844 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:13.548377383+00:00 stderr F I0120 10:56:13.548280 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:13.606769724+00:00 stderr F W0120 10:56:13.606674 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:13.610491625+00:00 stderr F I0120 10:56:13.610429 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (62.722917ms) 2026-01-20T10:56:14.547778203+00:00 stderr F I0120 10:56:14.547682 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:14.548148452+00:00 stderr F I0120 10:56:14.548108 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:14.548148452+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:14.548148452+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:14.548211284+00:00 stderr F I0120 10:56:14.548173 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (505.223µs) 2026-01-20T10:56:14.548211284+00:00 stderr F I0120 10:56:14.548198 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:14.548290306+00:00 stderr F I0120 10:56:14.548252 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:14.548344418+00:00 stderr F I0120 10:56:14.548323 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:14.548697637+00:00 stderr F I0120 10:56:14.548642 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:14.578986582+00:00 stderr F W0120 10:56:14.578894 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:14.580990226+00:00 stderr F I0120 10:56:14.580939 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.737491ms) 2026-01-20T10:56:15.549013332+00:00 stderr F I0120 10:56:15.548931 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:15.549298629+00:00 stderr F I0120 10:56:15.549227 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:15.549298629+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:15.549298629+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:15.549298629+00:00 stderr F I0120 10:56:15.549282 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (364.641µs) 2026-01-20T10:56:15.549324910+00:00 stderr F I0120 10:56:15.549300 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:15.549378422+00:00 stderr F I0120 10:56:15.549342 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:15.549418953+00:00 stderr F I0120 10:56:15.549396 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:15.549681770+00:00 stderr F I0120 10:56:15.549631 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:15.574640091+00:00 stderr F W0120 10:56:15.574561 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:15.576309315+00:00 stderr F I0120 10:56:15.576254 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.950045ms) 2026-01-20T10:56:16.549901052+00:00 stderr F I0120 10:56:16.549829 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:16.550155928+00:00 stderr F I0120 10:56:16.550125 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:16.550155928+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:16.550155928+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:16.550203810+00:00 stderr F I0120 10:56:16.550180 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (372.14µs) 2026-01-20T10:56:16.550203810+00:00 stderr F I0120 10:56:16.550198 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:16.550352174+00:00 stderr F I0120 10:56:16.550286 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:16.550352174+00:00 stderr F I0120 10:56:16.550336 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:16.550584280+00:00 stderr F I0120 10:56:16.550536 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:16.576510327+00:00 stderr F W0120 10:56:16.576413 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:16.577890395+00:00 stderr F I0120 10:56:16.577810 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.608603ms) 2026-01-20T10:56:17.550390330+00:00 stderr F I0120 10:56:17.550240 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:17.550606416+00:00 stderr F I0120 10:56:17.550555 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:17.550606416+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:17.550606416+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:17.550642937+00:00 stderr F I0120 10:56:17.550618 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (392µs) 2026-01-20T10:56:17.550642937+00:00 stderr F I0120 10:56:17.550637 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:17.550780430+00:00 stderr F I0120 10:56:17.550695 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:17.550780430+00:00 stderr F I0120 10:56:17.550755 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:17.551143310+00:00 stderr F I0120 10:56:17.551040 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:17.581199529+00:00 stderr F W0120 10:56:17.581125 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:17.583390718+00:00 stderr F I0120 10:56:17.583336 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.69445ms) 2026-01-20T10:56:18.551101464+00:00 stderr F I0120 10:56:18.550936 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:18.551384802+00:00 stderr F I0120 10:56:18.551326 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:18.551384802+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:18.551384802+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:18.551408082+00:00 stderr F I0120 10:56:18.551394 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (478.493µs) 2026-01-20T10:56:18.551418593+00:00 stderr F I0120 10:56:18.551411 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:18.551496725+00:00 stderr F I0120 10:56:18.551454 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:18.551534886+00:00 stderr F I0120 10:56:18.551514 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:18.551858974+00:00 stderr F I0120 10:56:18.551797 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:18.583959789+00:00 stderr F W0120 10:56:18.583884 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:18.585672184+00:00 stderr F I0120 10:56:18.585631 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.216491ms) 2026-01-20T10:56:19.551605263+00:00 stderr F I0120 10:56:19.551508 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:19.551826929+00:00 stderr F I0120 10:56:19.551768 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:19.551826929+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:19.551826929+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:19.551847140+00:00 stderr F I0120 10:56:19.551825 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (328.329µs) 2026-01-20T10:56:19.551857760+00:00 stderr F I0120 10:56:19.551848 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:19.551947053+00:00 stderr F I0120 10:56:19.551902 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:19.551960693+00:00 stderr F I0120 10:56:19.551954 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:19.552322393+00:00 stderr F I0120 10:56:19.552254 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:19.582538686+00:00 stderr F W0120 10:56:19.582459 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:19.584367485+00:00 stderr F I0120 10:56:19.584333 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.482554ms) 2026-01-20T10:56:20.384686858+00:00 stderr F I0120 10:56:20.384608 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:20.384757810+00:00 stderr F I0120 10:56:20.384717 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:20.384821991+00:00 stderr F I0120 10:56:20.384790 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:20.385158580+00:00 stderr F I0120 10:56:20.385088 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:20.408588490+00:00 stderr F W0120 10:56:20.408527 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:20.411740335+00:00 stderr F I0120 10:56:20.411683 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (27.079748ms) 2026-01-20T10:56:20.552985426+00:00 stderr F I0120 10:56:20.552915 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:20.553517280+00:00 stderr F I0120 10:56:20.553453 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:20.553517280+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:20.553517280+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:20.553598482+00:00 stderr F I0120 10:56:20.553554 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (656.477µs) 2026-01-20T10:56:20.553598482+00:00 stderr F I0120 10:56:20.553586 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:20.553730186+00:00 stderr F I0120 10:56:20.553681 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:20.553799208+00:00 stderr F I0120 10:56:20.553767 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:20.554275290+00:00 stderr F I0120 10:56:20.554200 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:20.601302356+00:00 stderr F W0120 10:56:20.601191 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:20.603717611+00:00 stderr F I0120 10:56:20.603412 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.822541ms) 2026-01-20T10:56:21.554047339+00:00 stderr F I0120 10:56:21.553932 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:21.554395929+00:00 stderr F I0120 10:56:21.554307 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:21.554395929+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:21.554395929+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:21.554395929+00:00 stderr F I0120 10:56:21.554378 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (461.522µs) 2026-01-20T10:56:21.554420589+00:00 stderr F I0120 10:56:21.554398 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:21.554508622+00:00 stderr F I0120 10:56:21.554453 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:21.554525742+00:00 stderr F I0120 10:56:21.554515 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:21.554859161+00:00 stderr F I0120 10:56:21.554790 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:21.588653000+00:00 stderr F W0120 10:56:21.588555 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:21.590149801+00:00 stderr F I0120 10:56:21.590099 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.69856ms) 2026-01-20T10:56:22.555250448+00:00 stderr F I0120 10:56:22.555157 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:22.555525485+00:00 stderr F I0120 10:56:22.555478 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:22.555525485+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:22.555525485+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:22.555574866+00:00 stderr F I0120 10:56:22.555543 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (399.761µs) 2026-01-20T10:56:22.555574866+00:00 stderr F I0120 10:56:22.555566 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:22.555679659+00:00 stderr F I0120 10:56:22.555622 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:22.555701539+00:00 stderr F I0120 10:56:22.555681 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:22.556010788+00:00 stderr F I0120 10:56:22.555942 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:22.578275747+00:00 stderr F W0120 10:56:22.578227 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:22.581288158+00:00 stderr F I0120 10:56:22.581207 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.639969ms) 2026-01-20T10:56:23.555972911+00:00 stderr F I0120 10:56:23.555847 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:23.556496735+00:00 stderr F I0120 10:56:23.556393 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:23.556496735+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:23.556496735+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:23.556522456+00:00 stderr F I0120 10:56:23.556501 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (676.748µs) 2026-01-20T10:56:23.556542317+00:00 stderr F I0120 10:56:23.556529 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:23.556695391+00:00 stderr F I0120 10:56:23.556611 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:23.556755692+00:00 stderr F I0120 10:56:23.556721 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:23.557458221+00:00 stderr F I0120 10:56:23.557360 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:23.609639005+00:00 stderr F W0120 10:56:23.609540 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:23.617887477+00:00 stderr F I0120 10:56:23.617798 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (61.253428ms) 2026-01-20T10:56:24.557593650+00:00 stderr F I0120 10:56:24.557508 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:24.557803475+00:00 stderr F I0120 10:56:24.557765 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:24.557803475+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:24.557803475+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:24.557857847+00:00 stderr F I0120 10:56:24.557827 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (350.18µs) 2026-01-20T10:56:24.557940539+00:00 stderr F I0120 10:56:24.557906 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:24.557987130+00:00 stderr F I0120 10:56:24.557959 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:24.558030241+00:00 stderr F I0120 10:56:24.558011 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:24.558340721+00:00 stderr F I0120 10:56:24.558283 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:24.580904548+00:00 stderr F W0120 10:56:24.580813 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:24.582886120+00:00 stderr F I0120 10:56:24.582829 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.91978ms) 2026-01-20T10:56:25.558852749+00:00 stderr F I0120 10:56:25.558762 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:25.559260130+00:00 stderr F I0120 10:56:25.559230 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:25.559260130+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:25.559260130+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:25.559321382+00:00 stderr F I0120 10:56:25.559299 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (554.265µs) 2026-01-20T10:56:25.559331172+00:00 stderr F I0120 10:56:25.559318 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:25.559405884+00:00 stderr F I0120 10:56:25.559371 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:25.559443615+00:00 stderr F I0120 10:56:25.559431 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:25.559778864+00:00 stderr F I0120 10:56:25.559717 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:25.588396584+00:00 stderr F W0120 10:56:25.588318 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:25.589804562+00:00 stderr F I0120 10:56:25.589757 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.436759ms) 2026-01-20T10:56:26.560419797+00:00 stderr F I0120 10:56:26.559794 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:26.560652533+00:00 stderr F I0120 10:56:26.560596 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:26.560652533+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:26.560652533+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:26.560676913+00:00 stderr F I0120 10:56:26.560660 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (881.544µs) 2026-01-20T10:56:26.560693364+00:00 stderr F I0120 10:56:26.560676 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:26.560764586+00:00 stderr F I0120 10:56:26.560722 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:26.560815127+00:00 stderr F I0120 10:56:26.560785 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:26.561129435+00:00 stderr F I0120 10:56:26.561041 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:26.607880954+00:00 stderr F W0120 10:56:26.607794 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:26.609409935+00:00 stderr F I0120 10:56:26.609230 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.551027ms) 2026-01-20T10:56:27.560962876+00:00 stderr F I0120 10:56:27.560882 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:27.561184992+00:00 stderr F I0120 10:56:27.561150 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:27.561184992+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:27.561184992+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:27.561228993+00:00 stderr F I0120 10:56:27.561201 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (331.919µs) 2026-01-20T10:56:27.561228993+00:00 stderr F I0120 10:56:27.561221 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:27.561293735+00:00 stderr F I0120 10:56:27.561258 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:27.561338637+00:00 stderr F I0120 10:56:27.561317 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:27.561564283+00:00 stderr F I0120 10:56:27.561522 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:27.588889198+00:00 stderr F W0120 10:56:27.588809 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:27.591128288+00:00 stderr F I0120 10:56:27.591082 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.855204ms) 2026-01-20T10:56:28.562698438+00:00 stderr F I0120 10:56:28.562049 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:28.563051478+00:00 stderr F I0120 10:56:28.562994 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:28.563051478+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:28.563051478+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:28.563226272+00:00 stderr F I0120 10:56:28.563079 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.046428ms) 2026-01-20T10:56:28.563226272+00:00 stderr F I0120 10:56:28.563203 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:28.563319615+00:00 stderr F I0120 10:56:28.563261 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:28.563368126+00:00 stderr F I0120 10:56:28.563339 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:28.563711895+00:00 stderr F I0120 10:56:28.563637 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:28.602353435+00:00 stderr F W0120 10:56:28.602262 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:28.604406591+00:00 stderr F I0120 10:56:28.604341 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.136208ms) 2026-01-20T10:56:29.563819863+00:00 stderr F I0120 10:56:29.563725 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:29.564253675+00:00 stderr F I0120 10:56:29.564202 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:29.564253675+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:29.564253675+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:29.564338788+00:00 stderr F I0120 10:56:29.564297 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (586.816µs) 2026-01-20T10:56:29.564338788+00:00 stderr F I0120 10:56:29.564330 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:29.564447961+00:00 stderr F I0120 10:56:29.564399 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:29.564503542+00:00 stderr F I0120 10:56:29.564478 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:29.564953314+00:00 stderr F I0120 10:56:29.564887 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:29.602824703+00:00 stderr F W0120 10:56:29.602138 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:29.607132589+00:00 stderr F I0120 10:56:29.607002 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.656197ms) 2026-01-20T10:56:30.565526875+00:00 stderr F I0120 10:56:30.565431 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:30.566331296+00:00 stderr F I0120 10:56:30.566286 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:30.566331296+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:30.566331296+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:30.566560932+00:00 stderr F I0120 10:56:30.566515 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.10273ms) 2026-01-20T10:56:30.566695216+00:00 stderr F I0120 10:56:30.566660 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:30.566875750+00:00 stderr F I0120 10:56:30.566826 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:30.567045105+00:00 stderr F I0120 10:56:30.567015 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:30.567781175+00:00 stderr F I0120 10:56:30.567698 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:30.603582998+00:00 stderr F W0120 10:56:30.603502 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:30.608380187+00:00 stderr F I0120 10:56:30.608306 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.642161ms) 2026-01-20T10:56:31.566768103+00:00 stderr F I0120 10:56:31.566698 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:31.567161843+00:00 stderr F I0120 10:56:31.567144 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:31.567161843+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:31.567161843+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:31.567239295+00:00 stderr F I0120 10:56:31.567223 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (538.434µs) 2026-01-20T10:56:31.567271066+00:00 stderr F I0120 10:56:31.567261 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:31.567350969+00:00 stderr F I0120 10:56:31.567331 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:31.567411721+00:00 stderr F I0120 10:56:31.567400 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:31.567664298+00:00 stderr F I0120 10:56:31.567636 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:31.592605509+00:00 stderr F W0120 10:56:31.592563 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:31.594315824+00:00 stderr F I0120 10:56:31.594290 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.027197ms) 2026-01-20T10:56:32.567880199+00:00 stderr F I0120 10:56:32.567726 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:32.568213027+00:00 stderr F I0120 10:56:32.568143 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:32.568213027+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:32.568213027+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:32.568272529+00:00 stderr F I0120 10:56:32.568211 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (505.893µs) 2026-01-20T10:56:32.568272529+00:00 stderr F I0120 10:56:32.568230 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:32.568344081+00:00 stderr F I0120 10:56:32.568282 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:32.568366152+00:00 stderr F I0120 10:56:32.568351 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:32.568766032+00:00 stderr F I0120 10:56:32.568658 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:32.626114295+00:00 stderr F W0120 10:56:32.625956 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:32.630212355+00:00 stderr F I0120 10:56:32.629714 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (61.478314ms) 2026-01-20T10:56:32.829012834+00:00 stderr F I0120 10:56:32.828733 1 sync_worker.go:234] Notify the sync worker: Cluster operator openshift-samples changed Degraded from "True" to "False" 2026-01-20T10:56:32.829012834+00:00 stderr F I0120 10:56:32.828760 1 sync_worker.go:584] Cluster operator openshift-samples changed Degraded from "True" to "False" 2026-01-20T10:56:32.829012834+00:00 stderr F I0120 10:56:32.828773 1 sync_worker.go:592] No change, waiting 2026-01-20T10:56:33.568925202+00:00 stderr F I0120 10:56:33.568757 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:33.569339433+00:00 stderr F I0120 10:56:33.569259 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:33.569339433+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:33.569339433+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:33.569378484+00:00 stderr F I0120 10:56:33.569334 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (611.817µs) 2026-01-20T10:56:33.569378484+00:00 stderr F I0120 10:56:33.569354 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:33.569487007+00:00 stderr F I0120 10:56:33.569408 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:33.569487007+00:00 stderr F I0120 10:56:33.569477 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:33.569936209+00:00 stderr F I0120 10:56:33.569796 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:33.607399377+00:00 stderr F W0120 10:56:33.607284 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:33.609301558+00:00 stderr F I0120 10:56:33.609154 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.793671ms) 2026-01-20T10:56:34.572145094+00:00 stderr F I0120 10:56:34.572036 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:34.572428852+00:00 stderr F I0120 10:56:34.572403 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:34.572428852+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:34.572428852+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:34.572509364+00:00 stderr F I0120 10:56:34.572480 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (473.223µs) 2026-01-20T10:56:34.572528174+00:00 stderr F I0120 10:56:34.572506 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:34.572597116+00:00 stderr F I0120 10:56:34.572557 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:34.572644437+00:00 stderr F I0120 10:56:34.572614 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:34.572922005+00:00 stderr F I0120 10:56:34.572879 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:34.605568673+00:00 stderr F W0120 10:56:34.605483 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:34.607278320+00:00 stderr F I0120 10:56:34.607227 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.717374ms) 2026-01-20T10:56:35.385020415+00:00 stderr F I0120 10:56:35.384531 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:35.385020415+00:00 stderr F I0120 10:56:35.384664 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:35.385020415+00:00 stderr F I0120 10:56:35.384734 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:35.385152218+00:00 stderr F I0120 10:56:35.385014 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:35.416960015+00:00 stderr F W0120 10:56:35.416812 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:35.419906434+00:00 stderr F I0120 10:56:35.419849 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (35.324911ms) 2026-01-20T10:56:35.573133386+00:00 stderr F I0120 10:56:35.573051 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:35.573164797+00:00 stderr F I0120 10:56:35.573153 1 availableupdates.go:70] Retrieving available updates again, because more than 3m0.635374416s has elapsed since 2026-01-20T10:53:34Z 2026-01-20T10:56:35.575657104+00:00 stderr F I0120 10:56:35.575612 1 cincinnati.go:114] Using a root CA pool with 0 root CA subjects to request updates from https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.16&id=a84dabf3-edcf-4828-b6a1-f9d3a6f02304&version=4.16.0 2026-01-20T10:56:35.806923017+00:00 stderr F I0120 10:56:35.806831 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:35.806923017+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:35.806923017+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:35.806967178+00:00 stderr F I0120 10:56:35.806953 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (233.912684ms) 2026-01-20T10:56:35.806979348+00:00 stderr F I0120 10:56:35.806971 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:35.807084801+00:00 stderr F I0120 10:56:35.807023 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:35.807101591+00:00 stderr F I0120 10:56:35.807094 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:35.807383329+00:00 stderr F I0120 10:56:35.807319 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:35.837860989+00:00 stderr F W0120 10:56:35.837779 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:35.839284097+00:00 stderr F I0120 10:56:35.839240 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.267037ms) 2026-01-20T10:56:36.807266811+00:00 stderr F I0120 10:56:36.807177 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:36.807774075+00:00 stderr F I0120 10:56:36.807731 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:36.807774075+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:36.807774075+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:36.807905278+00:00 stderr F I0120 10:56:36.807866 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (708.929µs) 2026-01-20T10:56:36.807919339+00:00 stderr F I0120 10:56:36.807907 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:36.808075883+00:00 stderr F I0120 10:56:36.808025 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:36.808202776+00:00 stderr F I0120 10:56:36.808170 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:36.808788383+00:00 stderr F I0120 10:56:36.808711 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:36.889142084+00:00 stderr F W0120 10:56:36.888536 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:36.894303703+00:00 stderr F I0120 10:56:36.890408 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (82.49887ms) 2026-01-20T10:56:37.808242053+00:00 stderr F I0120 10:56:37.808145 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:37.808623323+00:00 stderr F I0120 10:56:37.808579 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:37.808623323+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:37.808623323+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:37.808712935+00:00 stderr F I0120 10:56:37.808675 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (546.264µs) 2026-01-20T10:56:37.808732206+00:00 stderr F I0120 10:56:37.808707 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:37.808817729+00:00 stderr F I0120 10:56:37.808777 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:37.808882641+00:00 stderr F I0120 10:56:37.808855 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:37.809355413+00:00 stderr F I0120 10:56:37.809294 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:37.835952998+00:00 stderr F W0120 10:56:37.835878 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:37.839184455+00:00 stderr F I0120 10:56:37.839149 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.438579ms) 2026-01-20T10:56:38.809341168+00:00 stderr F I0120 10:56:38.809243 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:38.809799860+00:00 stderr F I0120 10:56:38.809773 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:38.809799860+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:38.809799860+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:38.809902023+00:00 stderr F I0120 10:56:38.809873 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (644.947µs) 2026-01-20T10:56:38.809924524+00:00 stderr F I0120 10:56:38.809904 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:38.810030156+00:00 stderr F I0120 10:56:38.810002 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:38.810164170+00:00 stderr F I0120 10:56:38.810130 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:38.810661013+00:00 stderr F I0120 10:56:38.810601 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:38.842694736+00:00 stderr F W0120 10:56:38.842579 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:38.846196060+00:00 stderr F I0120 10:56:38.846137 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.224835ms) 2026-01-20T10:56:39.810291989+00:00 stderr F I0120 10:56:39.810223 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:39.810523145+00:00 stderr F I0120 10:56:39.810504 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:39.810523145+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:39.810523145+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:39.810568246+00:00 stderr F I0120 10:56:39.810554 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (344.069µs) 2026-01-20T10:56:39.810576877+00:00 stderr F I0120 10:56:39.810568 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:39.810627568+00:00 stderr F I0120 10:56:39.810605 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:39.810678049+00:00 stderr F I0120 10:56:39.810655 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:39.810896415+00:00 stderr F I0120 10:56:39.810868 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:39.842932428+00:00 stderr F W0120 10:56:39.842852 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:39.847518170+00:00 stderr F I0120 10:56:39.847470 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.894443ms) 2026-01-20T10:56:40.811028574+00:00 stderr F I0120 10:56:40.810926 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:40.811501216+00:00 stderr F I0120 10:56:40.811454 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:40.811501216+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:40.811501216+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:40.811645100+00:00 stderr F I0120 10:56:40.811595 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (691.259µs) 2026-01-20T10:56:40.811645100+00:00 stderr F I0120 10:56:40.811631 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:40.811762463+00:00 stderr F I0120 10:56:40.811721 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:40.811843306+00:00 stderr F I0120 10:56:40.811816 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:40.812582016+00:00 stderr F I0120 10:56:40.812326 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:40.845688687+00:00 stderr F W0120 10:56:40.845606 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:40.847622539+00:00 stderr F I0120 10:56:40.847541 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.908196ms) 2026-01-20T10:56:41.812446848+00:00 stderr F I0120 10:56:41.812382 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:41.812761577+00:00 stderr F I0120 10:56:41.812746 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:41.812761577+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:41.812761577+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:41.812839959+00:00 stderr F I0120 10:56:41.812823 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (453.882µs) 2026-01-20T10:56:41.812870709+00:00 stderr F I0120 10:56:41.812861 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:41.812945811+00:00 stderr F I0120 10:56:41.812926 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:41.813008343+00:00 stderr F I0120 10:56:41.812997 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:41.813320031+00:00 stderr F I0120 10:56:41.813285 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:41.841279864+00:00 stderr F W0120 10:56:41.841224 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:41.842854746+00:00 stderr F I0120 10:56:41.842830 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.967007ms) 2026-01-20T10:56:42.813087541+00:00 stderr F I0120 10:56:42.812983 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:42.813375978+00:00 stderr F I0120 10:56:42.813335 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:42.813375978+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:42.813375978+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:42.813415729+00:00 stderr F I0120 10:56:42.813397 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (454.662µs) 2026-01-20T10:56:42.813423990+00:00 stderr F I0120 10:56:42.813414 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:42.813497611+00:00 stderr F I0120 10:56:42.813466 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:42.813553063+00:00 stderr F I0120 10:56:42.813532 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:42.813854371+00:00 stderr F I0120 10:56:42.813809 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:42.852480280+00:00 stderr F W0120 10:56:42.852369 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:42.854857175+00:00 stderr F I0120 10:56:42.854814 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.393964ms) 2026-01-20T10:56:43.813567729+00:00 stderr F I0120 10:56:43.813447 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:43.813989820+00:00 stderr F I0120 10:56:43.813940 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:43.813989820+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:43.813989820+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:43.814182675+00:00 stderr F I0120 10:56:43.814138 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (707.379µs) 2026-01-20T10:56:43.814281138+00:00 stderr F I0120 10:56:43.814210 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:43.814428802+00:00 stderr F I0120 10:56:43.814383 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:43.814486253+00:00 stderr F I0120 10:56:43.814467 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:43.814877904+00:00 stderr F I0120 10:56:43.814822 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:43.892753779+00:00 stderr F W0120 10:56:43.892605 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:43.894085226+00:00 stderr F I0120 10:56:43.893991 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (79.827338ms) 2026-01-20T10:56:44.816116823+00:00 stderr F I0120 10:56:44.815184 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:44.816116823+00:00 stderr F I0120 10:56:44.815515 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:44.816116823+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:44.816116823+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:44.816116823+00:00 stderr F I0120 10:56:44.815590 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (418.321µs) 2026-01-20T10:56:44.816116823+00:00 stderr F I0120 10:56:44.815610 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:44.816116823+00:00 stderr F I0120 10:56:44.815658 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:44.816116823+00:00 stderr F I0120 10:56:44.815715 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:44.816116823+00:00 stderr F I0120 10:56:44.816001 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:44.864476924+00:00 stderr F W0120 10:56:44.864407 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:44.865964504+00:00 stderr F I0120 10:56:44.865919 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.306344ms) 2026-01-20T10:56:45.816731205+00:00 stderr F I0120 10:56:45.816644 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:56:45.817078624+00:00 stderr F I0120 10:56:45.817028 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:45.817078624+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:56:45.817078624+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:45.817164156+00:00 stderr F I0120 10:56:45.817121 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (490.123µs) 2026-01-20T10:56:45.817164156+00:00 stderr F I0120 10:56:45.817154 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:45.817252999+00:00 stderr F I0120 10:56:45.817207 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:45.817298440+00:00 stderr F I0120 10:56:45.817274 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:45.817648289+00:00 stderr F I0120 10:56:45.817592 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:56:45.844118662+00:00 stderr F W0120 10:56:45.844035 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:56:45.847442281+00:00 stderr F I0120 10:56:45.847278 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.120721ms) 2026-01-20T10:56:46.817504741+00:00 stderr F I0120 10:56:46.817384 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:56:46.817932832+00:00 stderr F I0120 10:56:46.817880 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:56:46.817932832+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:56:46.817932832+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:56:46.818180660+00:00 stderr F I0120 10:56:46.817966 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (611.696µs) 2026-01-20T10:56:46.818180660+00:00 stderr F I0120 10:56:46.818003 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:56:46.818180660+00:00 stderr F I0120 10:56:46.818112 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:56:46.818212541+00:00 stderr F I0120 10:56:46.818192 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:56:46.818684323+00:00 stderr F I0120 10:56:46.818599 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:46.862566983+00:00 stderr F W0120 10:56:46.862457       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:46.863990602+00:00 stderr F I0120 10:56:46.863927       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.923086ms)
2026-01-20T10:56:47.818322459+00:00 stderr F I0120 10:56:47.818220       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:47.818570145+00:00 stderr F I0120 10:56:47.818528       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:47.818570145+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:47.818570145+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:47.818626927+00:00 stderr F I0120 10:56:47.818599       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (392.36µs)
2026-01-20T10:56:47.818626927+00:00 stderr F I0120 10:56:47.818620       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:47.818695669+00:00 stderr F I0120 10:56:47.818662       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:47.818765341+00:00 stderr F I0120 10:56:47.818717       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:47.819044378+00:00 stderr F I0120 10:56:47.818954       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:47.854149982+00:00 stderr F W0120 10:56:47.854047       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:47.860271977+00:00 stderr F I0120 10:56:47.858656       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.014556ms)
2026-01-20T10:56:48.819582868+00:00 stderr F I0120 10:56:48.819506       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:48.819955218+00:00 stderr F I0120 10:56:48.819925       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:48.819955218+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:48.819955218+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:48.820158413+00:00 stderr F I0120 10:56:48.820022       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (531.474µs)
2026-01-20T10:56:48.820181534+00:00 stderr F I0120 10:56:48.820154       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:48.820308977+00:00 stderr F I0120 10:56:48.820264       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:48.820385739+00:00 stderr F I0120 10:56:48.820347       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:48.820685277+00:00 stderr F I0120 10:56:48.820631       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:48.843698607+00:00 stderr F W0120 10:56:48.843642       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:48.845631239+00:00 stderr F I0120 10:56:48.845585       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.443294ms)
2026-01-20T10:56:49.820466757+00:00 stderr F I0120 10:56:49.820384       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:49.820863388+00:00 stderr F I0120 10:56:49.820822       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:49.820863388+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:49.820863388+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:49.820961390+00:00 stderr F I0120 10:56:49.820921       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (553.555µs)
2026-01-20T10:56:49.820961390+00:00 stderr F I0120 10:56:49.820953       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:49.821085873+00:00 stderr F I0120 10:56:49.821025       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:49.821209298+00:00 stderr F I0120 10:56:49.821173       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:49.821626299+00:00 stderr F I0120 10:56:49.821561       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:49.861413209+00:00 stderr F W0120 10:56:49.861335       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:49.865125619+00:00 stderr F I0120 10:56:49.864520       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.560802ms)
2026-01-20T10:56:50.385258653+00:00 stderr F I0120 10:56:50.385158       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:50.385312535+00:00 stderr F I0120 10:56:50.385288       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:50.385381887+00:00 stderr F I0120 10:56:50.385355       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:50.385725936+00:00 stderr F I0120 10:56:50.385668       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:50.407890062+00:00 stderr F W0120 10:56:50.407814       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:50.409991139+00:00 stderr F I0120 10:56:50.409943       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.793517ms)
2026-01-20T10:56:50.821335747+00:00 stderr F I0120 10:56:50.821235       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:50.821619614+00:00 stderr F I0120 10:56:50.821570       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:50.821619614+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:50.821619614+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:50.821675226+00:00 stderr F I0120 10:56:50.821639       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (417.732µs)
2026-01-20T10:56:50.821675226+00:00 stderr F I0120 10:56:50.821665       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:50.821766628+00:00 stderr F I0120 10:56:50.821715       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:50.821805329+00:00 stderr F I0120 10:56:50.821782       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:50.822152888+00:00 stderr F I0120 10:56:50.822049       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:50.869801940+00:00 stderr F W0120 10:56:50.869690       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:50.871692602+00:00 stderr F I0120 10:56:50.871633       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.966374ms)
2026-01-20T10:56:51.822267897+00:00 stderr F I0120 10:56:51.822139       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:51.822497723+00:00 stderr F I0120 10:56:51.822413       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:51.822497723+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:51.822497723+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:51.822531314+00:00 stderr F I0120 10:56:51.822496       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (370.08µs)
2026-01-20T10:56:51.822549465+00:00 stderr F I0120 10:56:51.822535       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:51.822627887+00:00 stderr F I0120 10:56:51.822579       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:51.822686938+00:00 stderr F I0120 10:56:51.822647       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:51.823020527+00:00 stderr F I0120 10:56:51.822929       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:51.849227102+00:00 stderr F W0120 10:56:51.849112       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:51.850639580+00:00 stderr F I0120 10:56:51.850549       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.011594ms)
2026-01-20T10:56:52.823276389+00:00 stderr F I0120 10:56:52.823199       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:52.823541876+00:00 stderr F I0120 10:56:52.823504       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:52.823541876+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:52.823541876+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:52.823588558+00:00 stderr F I0120 10:56:52.823557       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (371.09µs)
2026-01-20T10:56:52.823588558+00:00 stderr F I0120 10:56:52.823581       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:52.823662280+00:00 stderr F I0120 10:56:52.823631       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:52.823708891+00:00 stderr F I0120 10:56:52.823687       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:52.823967428+00:00 stderr F I0120 10:56:52.823917       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:52.855957789+00:00 stderr F W0120 10:56:52.855894       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:52.857370777+00:00 stderr F I0120 10:56:52.857321       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.739298ms)
2026-01-20T10:56:53.824371597+00:00 stderr F I0120 10:56:53.824227       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:53.824527421+00:00 stderr F I0120 10:56:53.824473       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:53.824527421+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:53.824527421+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:53.824583703+00:00 stderr F I0120 10:56:53.824551       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (336.41µs)
2026-01-20T10:56:53.824583703+00:00 stderr F I0120 10:56:53.824571       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:53.824637004+00:00 stderr F I0120 10:56:53.824609       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:53.824677265+00:00 stderr F I0120 10:56:53.824658       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:53.824905421+00:00 stderr F I0120 10:56:53.824864       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:53.854205176+00:00 stderr F W0120 10:56:53.854132       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:53.855609833+00:00 stderr F I0120 10:56:53.855572       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.997969ms)
2026-01-20T10:56:54.825226945+00:00 stderr F I0120 10:56:54.825162       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:54.825663206+00:00 stderr F I0120 10:56:54.825618       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:54.825663206+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:54.825663206+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:54.825776639+00:00 stderr F I0120 10:56:54.825753       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (603.646µs)
2026-01-20T10:56:54.825866172+00:00 stderr F I0120 10:56:54.825849       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:54.825961854+00:00 stderr F I0120 10:56:54.825934       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:54.826952540+00:00 stderr F I0120 10:56:54.826932       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:54.835374413+00:00 stderr F I0120 10:56:54.835287       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:54.881111083+00:00 stderr F W0120 10:56:54.877094       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:54.881111083+00:00 stderr F I0120 10:56:54.878600       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.750385ms)
2026-01-20T10:56:55.826439362+00:00 stderr F I0120 10:56:55.825856       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:55.826439362+00:00 stderr F I0120 10:56:55.826248       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:55.826439362+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:55.826439362+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:55.826439362+00:00 stderr F I0120 10:56:55.826306       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (463.292µs)
2026-01-20T10:56:55.826439362+00:00 stderr F I0120 10:56:55.826324       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:55.826439362+00:00 stderr F I0120 10:56:55.826374       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:55.826439362+00:00 stderr F I0120 10:56:55.826427       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:55.826770241+00:00 stderr F I0120 10:56:55.826711       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:55.852032489+00:00 stderr F W0120 10:56:55.851947       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:55.853459787+00:00 stderr F I0120 10:56:55.853408       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.081357ms)
2026-01-20T10:56:56.827219338+00:00 stderr F I0120 10:56:56.827136       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:56.827727862+00:00 stderr F I0120 10:56:56.827662       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:56.827727862+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:56.827727862+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:56.827898587+00:00 stderr F I0120 10:56:56.827851       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (730.41µs)
2026-01-20T10:56:56.827948798+00:00 stderr F I0120 10:56:56.827895       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:56.828047911+00:00 stderr F I0120 10:56:56.828015       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:56.828127783+00:00 stderr F I0120 10:56:56.828090       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:56.828414170+00:00 stderr F I0120 10:56:56.828369       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:56.869390433+00:00 stderr F W0120 10:56:56.869338       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:56.870832232+00:00 stderr F I0120 10:56:56.870802       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.921075ms)
2026-01-20T10:56:57.827987955+00:00 stderr F I0120 10:56:57.827915       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:57.828537009+00:00 stderr F I0120 10:56:57.828495       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:57.828537009+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:57.828537009+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:57.828632822+00:00 stderr F I0120 10:56:57.828598       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (700.119µs)
2026-01-20T10:56:57.828641422+00:00 stderr F I0120 10:56:57.828629       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:57.828747585+00:00 stderr F I0120 10:56:57.828704       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:57.828818036+00:00 stderr F I0120 10:56:57.828788       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:57.829319389+00:00 stderr F I0120 10:56:57.829254       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:57.869931894+00:00 stderr F W0120 10:56:57.869821       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:57.871556326+00:00 stderr F I0120 10:56:57.871504       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.873414ms)
2026-01-20T10:56:58.829525280+00:00 stderr F I0120 10:56:58.829460       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:58.829764586+00:00 stderr F I0120 10:56:58.829733       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:58.829764586+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:58.829764586+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:58.829828098+00:00 stderr F I0120 10:56:58.829792       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (363.049µs)
2026-01-20T10:56:58.829828098+00:00 stderr F I0120 10:56:58.829813       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:58.829878569+00:00 stderr F I0120 10:56:58.829850       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:58.829923821+00:00 stderr F I0120 10:56:58.829902       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:58.830224338+00:00 stderr F I0120 10:56:58.830184       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:58.861951498+00:00 stderr F W0120 10:56:58.861900       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:58.864220168+00:00 stderr F I0120 10:56:58.864166       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.347029ms)
2026-01-20T10:56:59.830560203+00:00 stderr F I0120 10:56:59.830484       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:56:59.830994874+00:00 stderr F I0120 10:56:59.830944       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:56:59.830994874+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:56:59.830994874+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:56:59.831097577+00:00 stderr F I0120 10:56:59.831042       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (575.026µs)
2026-01-20T10:56:59.831141278+00:00 stderr F I0120 10:56:59.831107       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:56:59.831235790+00:00 stderr F I0120 10:56:59.831189       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:56:59.831302332+00:00 stderr F I0120 10:56:59.831273       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:56:59.831778614+00:00 stderr F I0120 10:56:59.831700       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:56:59.859196920+00:00 stderr F W0120 10:56:59.859141       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:56:59.860701039+00:00 stderr F I0120 10:56:59.860679       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.570271ms)
2026-01-20T10:57:00.831874173+00:00 stderr F I0120 10:57:00.831794       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:00.832518529+00:00 stderr F I0120 10:57:00.832484       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:00.832518529+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:00.832518529+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:00.832721235+00:00 stderr F I0120 10:57:00.832690       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (920.064µs)
2026-01-20T10:57:00.832807837+00:00 stderr F I0120 10:57:00.832777       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:00.832909010+00:00 stderr F I0120 10:57:00.832868       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:00.832988332+00:00 stderr F I0120 10:57:00.832931       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:00.833359561+00:00 stderr F I0120 10:57:00.833248       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:00.872976370+00:00 stderr F W0120 10:57:00.872833       1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:00.875118756+00:00 stderr F I0120 10:57:00.875027       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.249237ms)
2026-01-20T10:57:01.832792512+00:00 stderr F I0120 10:57:01.832740       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:01.833219913+00:00 stderr F I0120 10:57:01.833191       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:01.833219913+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:01.833219913+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:01.833364017+00:00 stderr F I0120 10:57:01.833340       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (621.966µs)
2026-01-20T10:57:01.833502822+00:00 stderr F I0120 10:57:01.833441       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:01.833629655+00:00 stderr F I0120 10:57:01.833555       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:01.833666946+00:00 stderr F I0120 10:57:01.833645       1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:01.834178229+00:00 stderr F I0120 10:57:01.834127       1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0",
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:01.867984773+00:00 stderr F W0120 10:57:01.867901 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:01.871193508+00:00 stderr F I0120 10:57:01.871148 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.614115ms) 2026-01-20T10:57:02.833890487+00:00 stderr F I0120 10:57:02.833835 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:02.834240166+00:00 stderr F I0120 10:57:02.834221 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:02.834240166+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:02.834240166+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:02.834350209+00:00 stderr F I0120 10:57:02.834305 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (484.442µs) 2026-01-20T10:57:02.834460701+00:00 stderr F I0120 10:57:02.834387 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:02.834665647+00:00 stderr F I0120 10:57:02.834631 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:02.834756189+00:00 stderr F I0120 10:57:02.834741 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:02.835163450+00:00 stderr F I0120 10:57:02.835122 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:02.863411617+00:00 stderr F W0120 10:57:02.863341 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:02.866779646+00:00 stderr F I0120 10:57:02.866734 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.363385ms) 2026-01-20T10:57:03.835181865+00:00 stderr F I0120 10:57:03.835087 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:57:03.835887453+00:00 stderr F I0120 10:57:03.835845 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:03.835887453+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:03.835887453+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:03.836131540+00:00 stderr F I0120 10:57:03.836050 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.010386ms) 2026-01-20T10:57:03.836216512+00:00 stderr F I0120 10:57:03.836191 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:03.836391956+00:00 stderr F I0120 10:57:03.836350 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:03.836549472+00:00 stderr F I0120 10:57:03.836523 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:03.837276821+00:00 stderr F I0120 10:57:03.837197 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:03.884713025+00:00 stderr F W0120 10:57:03.884653 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:03.886321727+00:00 stderr F I0120 10:57:03.886299 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.111385ms) 2026-01-20T10:57:04.836292549+00:00 stderr F I0120 10:57:04.836170 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:04.836528775+00:00 stderr F I0120 10:57:04.836487 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:04.836528775+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:04.836528775+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:04.836591648+00:00 stderr F I0120 10:57:04.836553 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (398.012µs) 2026-01-20T10:57:04.836591648+00:00 stderr F I0120 10:57:04.836580 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:04.836662920+00:00 stderr F I0120 10:57:04.836628 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:04.836724481+00:00 stderr F I0120 10:57:04.836693 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:04.837106671+00:00 stderr F I0120 10:57:04.837039 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:04.869929588+00:00 stderr F W0120 10:57:04.869869 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:04.871797778+00:00 stderr F I0120 10:57:04.871754 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.17232ms) 2026-01-20T10:57:05.384869226+00:00 stderr F I0120 10:57:05.384779 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:05.385670488+00:00 stderr F I0120 10:57:05.385529 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:05.385670488+00:00 stderr F I0120 10:57:05.385624 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:05.386023568+00:00 stderr F I0120 10:57:05.385953 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:05.411126411+00:00 stderr F W0120 10:57:05.411009 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:05.413249527+00:00 stderr F I0120 10:57:05.413188 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.418342ms) 2026-01-20T10:57:05.837338693+00:00 stderr F I0120 10:57:05.837217 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:05.837655881+00:00 stderr F I0120 10:57:05.837606 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:05.837655881+00:00 stderr 
F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:05.837655881+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:05.837710882+00:00 stderr F I0120 10:57:05.837677 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (474.402µs) 2026-01-20T10:57:05.837710882+00:00 stderr F I0120 10:57:05.837700 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:05.837788114+00:00 stderr F I0120 10:57:05.837754 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:05.837837406+00:00 stderr F I0120 10:57:05.837814 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:05.838169754+00:00 stderr F I0120 10:57:05.838117 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:05.883767200+00:00 stderr F W0120 10:57:05.883710 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:05.885304261+00:00 stderr F I0120 10:57:05.885278 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (47.573468ms) 2026-01-20T10:57:06.838306003+00:00 stderr F I0120 10:57:06.838241 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:06.838659932+00:00 stderr F I0120 10:57:06.838629 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:06.838659932+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:06.838659932+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:06.838704133+00:00 stderr F I0120 10:57:06.838682 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (454.392µs) 2026-01-20T10:57:06.838712213+00:00 stderr F I0120 10:57:06.838702 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:06.838781085+00:00 stderr F I0120 10:57:06.838754 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:06.838821796+00:00 stderr F I0120 10:57:06.838805 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:06.839044422+00:00 stderr F I0120 10:57:06.839011 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:06.864361892+00:00 stderr F W0120 10:57:06.864306 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:06.866148269+00:00 stderr F I0120 10:57:06.866122 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.415585ms) 2026-01-20T10:57:07.839490179+00:00 stderr F I0120 10:57:07.839387 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:07.839908401+00:00 stderr F I0120 10:57:07.839838 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:07.839908401+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:07.839908401+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:07.840038754+00:00 stderr F I0120 10:57:07.839960 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (582.936µs) 2026-01-20T10:57:07.840038754+00:00 stderr F I0120 10:57:07.839990 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:07.840139527+00:00 stderr F I0120 10:57:07.840094 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:07.840236490+00:00 stderr F I0120 10:57:07.840185 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:07.840916787+00:00 stderr F I0120 10:57:07.840698 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:08.840331337+00:00 stderr F I0120 10:57:08.840192 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:08.841597230+00:00 stderr F I0120 10:57:08.841522 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure 
(failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:08.841597230+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:08.841597230+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:08.841790585+00:00 stderr F I0120 10:57:08.841705 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.52971ms) 2026-01-20T10:57:09.842718406+00:00 stderr F I0120 10:57:09.842614 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:09.843201968+00:00 stderr F I0120 10:57:09.843129 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:09.843201968+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:09.843201968+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:09.843345232+00:00 stderr F I0120 10:57:09.843298 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (715.409µs) 2026-01-20T10:57:10.844465926+00:00 stderr F I0120 10:57:10.844252 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:10.844656391+00:00 stderr F I0120 10:57:10.844610 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:10.844656391+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:10.844656391+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:10.844715003+00:00 stderr F I0120 10:57:10.844684 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (446.232µs) 2026-01-20T10:57:11.831412326+00:00 stderr F I0120 10:57:11.831308 1 sync_worker.go:582] Wait finished 2026-01-20T10:57:11.831505119+00:00 stderr F I0120 10:57:11.831373 1 sync_worker.go:632] Previous sync status: &cvo.SyncWorkerStatus{Generation:4, Failure:error(nil), Done:747, Total:955, Completed:1, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(2026, time.January, 20, 10, 54, 11, 194831918, time.Local), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", 
Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc253f58bd6e35ca7, ext:370723773144, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2026-01-20T10:57:11.831505119+00:00 stderr F I0120 10:57:11.831483 1 sync_worker.go:883] apply: 4.16.0 on generation 4 in state Reconciling at attempt 0 2026-01-20T10:57:11.833676446+00:00 stderr F I0120 10:57:11.833617 1 task_graph.go:481] Running 0 on worker 1 2026-01-20T10:57:11.833676446+00:00 stderr F I0120 10:57:11.833635 1 task_graph.go:481] Running 1 on worker 1 2026-01-20T10:57:11.833707167+00:00 stderr F I0120 10:57:11.833672 1 task_graph.go:481] Running 2 on worker 0 2026-01-20T10:57:11.835083974+00:00 stderr F I0120 10:57:11.834961 1 sync_worker.go:986] Unable to precreate resource clusteroperator "image-registry" (399 of 955): Post "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:11.835110335+00:00 
stderr F I0120 10:57:11.835016 1 sync_worker.go:986] Unable to precreate resource clusteroperator "service-ca" (754 of 955): Post "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:11.837165128+00:00 stderr F E0120 10:57:11.837093 1 task.go:122] error running apply for clusterrolebinding "system:openshift:operator:service-ca-operator" (746 of 955): Get "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:service-ca-operator": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:11.841628507+00:00 stderr F E0120 10:57:11.841551 1 task.go:122] error running apply for customresourcedefinition "configs.imageregistry.operator.openshift.io" (378 of 955): Get "https://api-int.crc.testing:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/configs.imageregistry.operator.openshift.io": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:11.845595531+00:00 stderr F I0120 10:57:11.845535 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:11.845845529+00:00 stderr F I0120 10:57:11.845792 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:11.845845529+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:11.845845529+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:11.845867799+00:00 stderr F I0120 10:57:11.845845 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (314.899µs) 2026-01-20T10:57:12.845807833+00:00 stderr F I0120 10:57:12.845662 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (5.005665005s) 2026-01-20T10:57:12.845807833+00:00 stderr F I0120 10:57:12.845697 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:12.845807833+00:00 stderr F I0120 10:57:12.845739 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:12.845894706+00:00 stderr F I0120 10:57:12.845810 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:12.845894706+00:00 stderr F I0120 10:57:12.845863 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:12.845916346+00:00 stderr F I0120 10:57:12.845893 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:12.846222764+00:00 stderr F I0120 10:57:12.846157 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, 
time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:12.846363258+00:00 stderr F I0120 10:57:12.846310 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:12.846363258+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:12.846363258+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:12.846437530+00:00 stderr F I0120 10:57:12.846400 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (518.484µs) 2026-01-20T10:57:12.847286962+00:00 stderr F I0120 10:57:12.847207 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.448008ms) 2026-01-20T10:57:12.847320593+00:00 stderr F I0120 10:57:12.847264 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:12.847371404+00:00 stderr F I0120 10:57:12.847332 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:12.847544579+00:00 stderr F I0120 10:57:12.847484 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:12.847636921+00:00 stderr F I0120 10:57:12.847597 1 sync_worker.go:479] 
Update work is equal to current target; no change required 2026-01-20T10:57:12.848213096+00:00 stderr F I0120 10:57:12.848138 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:12.849774098+00:00 stderr F I0120 10:57:12.849684 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.340782ms) 2026-01-20T10:57:12.849774098+00:00 stderr F I0120 10:57:12.849729 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:12.851672568+00:00 stderr F I0120 10:57:12.851614 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:12.851909484+00:00 stderr F I0120 10:57:12.851715 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:12.851979386+00:00 stderr F I0120 10:57:12.851943 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:12.852453409+00:00 stderr F I0120 10:57:12.852363 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:12.853625999+00:00 stderr F I0120 10:57:12.853548 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.933581ms) 2026-01-20T10:57:12.853625999+00:00 stderr F I0120 10:57:12.853581 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:12.894011258+00:00 stderr F I0120 10:57:12.893940 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:12.894140731+00:00 stderr F I0120 10:57:12.894107 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:12.894203723+00:00 stderr F I0120 10:57:12.894185 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:12.894587693+00:00 stderr F I0120 10:57:12.894538 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:12.896424292+00:00 stderr F I0120 10:57:12.896386 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.476616ms) 2026-01-20T10:57:12.896424292+00:00 stderr F I0120 10:57:12.896410 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:12.977200847+00:00 stderr F I0120 10:57:12.977118 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:12.977309140+00:00 stderr F I0120 10:57:12.977263 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:12.977395003+00:00 stderr F I0120 10:57:12.977353 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:12.977930067+00:00 stderr F I0120 10:57:12.977870 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:12.979458447+00:00 stderr F I0120 10:57:12.979405 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.297921ms) 2026-01-20T10:57:12.979491628+00:00 stderr F I0120 10:57:12.979454 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:13.140115096+00:00 stderr F I0120 10:57:13.140015 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:13.140223239+00:00 stderr F I0120 10:57:13.140186 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:13.140331782+00:00 stderr F I0120 10:57:13.140299 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:13.140958808+00:00 stderr F I0120 10:57:13.140672 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:13.141919173+00:00 stderr F I0120 10:57:13.141865 1 cvo.go:659] Finished 
syncing cluster version "openshift-cluster-version/version" (1.861969ms) 2026-01-20T10:57:13.141919173+00:00 stderr F I0120 10:57:13.141890 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:13.462439350+00:00 stderr F I0120 10:57:13.462298 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:13.462524522+00:00 stderr F I0120 10:57:13.462440 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:13.462524522+00:00 stderr F I0120 10:57:13.462510 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:13.462881772+00:00 stderr F I0120 10:57:13.462815 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:13.464321259+00:00 stderr F I0120 10:57:13.464258 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.954511ms) 2026-01-20T10:57:13.464351610+00:00 stderr F I0120 10:57:13.464312 1 cvo.go:636] Error handling openshift-cluster-version/version: Put 
"https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:13.847301468+00:00 stderr F I0120 10:57:13.847211 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:13.847512813+00:00 stderr F I0120 10:57:13.847478 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:13.847512813+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:13.847512813+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:13.847568265+00:00 stderr F I0120 10:57:13.847545 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (346.419µs) 2026-01-20T10:57:13.847568265+00:00 stderr F I0120 10:57:13.847563 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:13.847646197+00:00 stderr F I0120 10:57:13.847615 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:13.847694868+00:00 stderr F I0120 10:57:13.847676 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:13.847957785+00:00 stderr F I0120 10:57:13.847911 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:13.848977902+00:00 stderr F I0120 10:57:13.848923 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.350087ms) 2026-01-20T10:57:13.848977902+00:00 stderr F I0120 10:57:13.848958 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:14.105047314+00:00 stderr F I0120 10:57:14.104976 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:14.105138786+00:00 stderr F I0120 10:57:14.105101 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:14.105194508+00:00 stderr F I0120 10:57:14.105173 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:14.105532637+00:00 stderr F I0120 10:57:14.105486 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 
12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:14.106818741+00:00 stderr F I0120 10:57:14.106779 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.806728ms) 2026-01-20T10:57:14.106833681+00:00 stderr F I0120 10:57:14.106812 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:14.848278109+00:00 stderr F I0120 10:57:14.848208 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:14.848565016+00:00 stderr F I0120 10:57:14.848540 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:14.848565016+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:14.848565016+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:14.848618177+00:00 stderr F I0120 10:57:14.848592 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (398.411µs) 2026-01-20T10:57:14.848618177+00:00 stderr F I0120 10:57:14.848614 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:14.848692650+00:00 stderr F I0120 10:57:14.848660 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:14.848731971+00:00 stderr F I0120 10:57:14.848718 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:14.848967418+00:00 stderr F I0120 10:57:14.848930 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:14.849905892+00:00 stderr F I0120 10:57:14.849853 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.230833ms) 2026-01-20T10:57:14.849920002+00:00 stderr F I0120 10:57:14.849892 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 
38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:15.849413774+00:00 stderr F I0120 10:57:15.849310 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:15.849899457+00:00 stderr F I0120 10:57:15.849855 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:15.849899457+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:15.849899457+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:15.850193085+00:00 stderr F I0120 10:57:15.850109 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (748.18µs) 2026-01-20T10:57:15.850193085+00:00 stderr F I0120 10:57:15.850143 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:15.850461692+00:00 stderr F I0120 10:57:15.850402 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:15.850582505+00:00 stderr F I0120 10:57:15.850524 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:15.851244362+00:00 stderr F I0120 10:57:15.851157 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:15.853279776+00:00 stderr F I0120 10:57:15.853189 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (3.02626ms) 2026-01-20T10:57:15.853279776+00:00 stderr F I0120 10:57:15.853253 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:16.667420967+00:00 stderr F I0120 10:57:16.667313 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:16.667526009+00:00 stderr F I0120 10:57:16.667485 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:16.667613352+00:00 stderr F I0120 10:57:16.667574 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:16.668155446+00:00 stderr F I0120 10:57:16.668026 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:16.669556392+00:00 stderr F I0120 10:57:16.669490 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.182077ms) 2026-01-20T10:57:16.669556392+00:00 stderr F I0120 10:57:16.669530 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:16.851155325+00:00 stderr F I0120 10:57:16.851023 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:16.851433422+00:00 stderr F I0120 10:57:16.851389 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:16.851433422+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:16.851433422+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:16.851879115+00:00 stderr F I0120 10:57:16.851832 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (821.252µs) 2026-01-20T10:57:16.851879115+00:00 stderr F I0120 10:57:16.851857 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:16.851973587+00:00 stderr F I0120 10:57:16.851910 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:16.851993698+00:00 stderr F I0120 10:57:16.851971 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:16.852338387+00:00 stderr F I0120 10:57:16.852274 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:16.854169585+00:00 stderr F I0120 10:57:16.853453 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.580861ms) 2026-01-20T10:57:16.854169585+00:00 stderr F I0120 10:57:16.853497 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 
38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:17.852167108+00:00 stderr F I0120 10:57:17.852112 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:17.852506317+00:00 stderr F I0120 10:57:17.852490 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:17.852506317+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:17.852506317+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:17.852580739+00:00 stderr F I0120 10:57:17.852565 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (464.132µs) 2026-01-20T10:57:17.852611969+00:00 stderr F I0120 10:57:17.852602 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:17.852684411+00:00 stderr F I0120 10:57:17.852665 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:17.852743363+00:00 stderr F I0120 10:57:17.852733 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:17.852990309+00:00 stderr F I0120 10:57:17.852962 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:17.853967605+00:00 stderr F I0120 10:57:17.853912 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.295934ms) 2026-01-20T10:57:17.853986635+00:00 stderr F I0120 10:57:17.853962 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:18.852964574+00:00 stderr F I0120 10:57:18.852880 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:18.853435396+00:00 stderr F I0120 10:57:18.853381 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:18.853435396+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:18.853435396+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:18.853571960+00:00 stderr F I0120 10:57:18.853513 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (645.757µs) 2026-01-20T10:57:18.853607851+00:00 stderr F I0120 10:57:18.853591 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:18.853709544+00:00 stderr F I0120 10:57:18.853684 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:18.853790736+00:00 stderr F I0120 10:57:18.853778 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:18.854047542+00:00 stderr F I0120 10:57:18.854017 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:18.854990588+00:00 stderr F I0120 10:57:18.854933 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.360936ms) 2026-01-20T10:57:18.855225364+00:00 stderr F I0120 10:57:18.855164 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:18.856073876+00:00 stderr F E0120 10:57:18.856021 1 cvo.go:642] Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:18.856089227+00:00 stderr F I0120 10:57:18.856046 1 cvo.go:643] Dropping "openshift-cluster-version/version" out of the queue &{0xc000062e40 0xc00068e1b0}: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:19.854593112+00:00 stderr F I0120 10:57:19.853740 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:19.854593112+00:00 stderr F I0120 10:57:19.853982 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:19.854593112+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:19.854593112+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:19.854593112+00:00 stderr F I0120 10:57:19.854027 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (299.488µs) 2026-01-20T10:57:19.854593112+00:00 stderr F I0120 10:57:19.854041 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:19.854593112+00:00 stderr F I0120 10:57:19.854113 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:19.854593112+00:00 stderr F I0120 10:57:19.854160 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:19.854593112+00:00 stderr F I0120 10:57:19.854366 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:19.855190019+00:00 stderr F I0120 10:57:19.855144 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.09882ms) 2026-01-20T10:57:19.855190019+00:00 stderr F I0120 10:57:19.855174 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 
38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:19.860490209+00:00 stderr F I0120 10:57:19.860457 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:19.860561071+00:00 stderr F I0120 10:57:19.860534 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:19.860619762+00:00 stderr F I0120 10:57:19.860582 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:19.861005282+00:00 stderr F I0120 10:57:19.860855 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:19.862004088+00:00 stderr F I0120 10:57:19.861868 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.420537ms) 2026-01-20T10:57:19.862004088+00:00 stderr F I0120 10:57:19.861899 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:19.872194748+00:00 stderr F I0120 10:57:19.872152 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 
2026-01-20T10:57:19.872282460+00:00 stderr F I0120 10:57:19.872241 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:19.872335132+00:00 stderr F I0120 10:57:19.872306 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:19.872745463+00:00 stderr F I0120 10:57:19.872691 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:19.874043726+00:00 stderr F I0120 10:57:19.873984 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.824377ms) 2026-01-20T10:57:19.874068177+00:00 stderr F I0120 10:57:19.874025 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:19.894466597+00:00 stderr F I0120 10:57:19.894405 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:19.894589370+00:00 stderr F I0120 10:57:19.894543 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:19.894643942+00:00 stderr F I0120 10:57:19.894615 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:19.894939520+00:00 stderr F I0120 10:57:19.894891 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:19.896143981+00:00 stderr F I0120 10:57:19.896046 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.643243ms) 2026-01-20T10:57:19.896158061+00:00 stderr F I0120 10:57:19.896135 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:19.936481508+00:00 stderr F I0120 10:57:19.936429 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:19.936643203+00:00 stderr F I0120 10:57:19.936620 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:19.936704924+00:00 stderr F I0120 10:57:19.936695 1 sync_worker.go:479] Update work is equal to 
current target; no change required 2026-01-20T10:57:19.936953941+00:00 stderr F I0120 10:57:19.936922 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:19.937954037+00:00 stderr F I0120 10:57:19.937903 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.473108ms) 2026-01-20T10:57:19.937972667+00:00 stderr F I0120 10:57:19.937951 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:20.018345253+00:00 stderr F I0120 10:57:20.018252 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:20.018421545+00:00 stderr F I0120 10:57:20.018375 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:20.018470226+00:00 stderr F I0120 10:57:20.018443 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:20.018867467+00:00 stderr F I0120 10:57:20.018796 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:20.019882833+00:00 stderr F I0120 10:57:20.019817 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.565031ms) 2026-01-20T10:57:20.019912424+00:00 stderr F I0120 10:57:20.019865 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:20.181169409+00:00 stderr F I0120 10:57:20.180900 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:20.181169409+00:00 stderr F I0120 10:57:20.181021 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:20.181169409+00:00 stderr F I0120 10:57:20.181148 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:20.181510958+00:00 stderr F I0120 10:57:20.181434 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:20.182621228+00:00 stderr F I0120 10:57:20.182554 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.657574ms) 2026-01-20T10:57:20.182621228+00:00 stderr F I0120 10:57:20.182594 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:20.385050131+00:00 stderr F I0120 10:57:20.384961 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:20.385309547+00:00 stderr F I0120 10:57:20.385269 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:20.385437821+00:00 stderr F I0120 10:57:20.385416 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:20.385972735+00:00 stderr F I0120 10:57:20.385898 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:20.387336481+00:00 stderr F I0120 10:57:20.387300 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.351782ms) 2026-01-20T10:57:20.387336481+00:00 stderr F I0120 10:57:20.387318 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:20.503814831+00:00 stderr F I0120 10:57:20.503733 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:20.503896763+00:00 stderr F I0120 10:57:20.503867 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:20.504000226+00:00 stderr F I0120 10:57:20.503950 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:20.504478128+00:00 stderr F I0120 10:57:20.504398 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:20.506139673+00:00 stderr F I0120 10:57:20.506042 1 cvo.go:659] Finished 
syncing cluster version "openshift-cluster-version/version" (2.308342ms) 2026-01-20T10:57:20.506265206+00:00 stderr F I0120 10:57:20.506212 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:20.854393122+00:00 stderr F I0120 10:57:20.854285 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:20.854620168+00:00 stderr F I0120 10:57:20.854586 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:20.854620168+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:20.854620168+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:20.854650679+00:00 stderr F I0120 10:57:20.854635 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (364.429µs) 2026-01-20T10:57:20.854659259+00:00 stderr F I0120 10:57:20.854652 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:20.854738931+00:00 stderr F I0120 10:57:20.854709 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:20.854780482+00:00 stderr F I0120 10:57:20.854764 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:20.855033880+00:00 stderr F I0120 10:57:20.854989 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:20.856082747+00:00 stderr F I0120 10:57:20.855996 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.331065ms) 2026-01-20T10:57:20.856150319+00:00 stderr F I0120 10:57:20.856046 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 
38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:21.786566594+00:00 stderr F I0120 10:57:21.786483 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:21.786627815+00:00 stderr F I0120 10:57:21.786592 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:21.786682067+00:00 stderr F I0120 10:57:21.786657 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:21.786987925+00:00 stderr F I0120 10:57:21.786949 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:21.787964401+00:00 stderr F I0120 10:57:21.787905 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.420688ms) 2026-01-20T10:57:21.787964401+00:00 stderr F I0120 10:57:21.787941 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:21.839579556+00:00 stderr F E0120 10:57:21.839510 1 task.go:122] error running apply for clusterrolebinding "system:openshift:operator:service-ca-operator" (746 of 955): Get 
"https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:service-ca-operator": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:21.845856492+00:00 stderr F E0120 10:57:21.845810 1 task.go:122] error running apply for customresourcedefinition "configs.imageregistry.operator.openshift.io" (378 of 955): Get "https://api-int.crc.testing:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/configs.imageregistry.operator.openshift.io": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:21.854963953+00:00 stderr F I0120 10:57:21.854905 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:21.855217890+00:00 stderr F I0120 10:57:21.855183 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:21.855217890+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:21.855217890+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:21.855267401+00:00 stderr F I0120 10:57:21.855233 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (336.39µs) 2026-01-20T10:57:21.855267401+00:00 stderr F I0120 10:57:21.855249 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:21.855317522+00:00 stderr F I0120 10:57:21.855288 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:21.855366704+00:00 stderr F I0120 10:57:21.855340 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:21.855594930+00:00 stderr F I0120 10:57:21.855552 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:21.856162484+00:00 stderr F I0120 10:57:21.856134 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (885.893µs) 2026-01-20T10:57:21.856162484+00:00 stderr F I0120 10:57:21.856151 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 
38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:22.856220811+00:00 stderr F I0120 10:57:22.856156 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:22.856496969+00:00 stderr F I0120 10:57:22.856466 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:22.856496969+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:22.856496969+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:22.856539020+00:00 stderr F I0120 10:57:22.856516 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (373.92µs) 2026-01-20T10:57:22.856539020+00:00 stderr F I0120 10:57:22.856534 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:22.856611522+00:00 stderr F I0120 10:57:22.856573 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:22.856649373+00:00 stderr F I0120 10:57:22.856632 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:22.856890049+00:00 stderr F I0120 10:57:22.856850 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:23.857163501+00:00 stderr F I0120 10:57:23.857101 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:23.857516011+00:00 stderr F I0120 10:57:23.857498 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:23.857516011+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:23.857516011+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:23.857598883+00:00 stderr F I0120 10:57:23.857580 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (493.473µs) 2026-01-20T10:57:24.858415600+00:00 stderr F I0120 10:57:24.858330 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:24.858640996+00:00 stderr F I0120 10:57:24.858606 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:24.858640996+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:24.858640996+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:24.858681367+00:00 stderr F I0120 10:57:24.858654 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (337.569µs) 2026-01-20T10:57:25.859767911+00:00 stderr F I0120 10:57:25.859681 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:25.860028688+00:00 stderr F I0120 10:57:25.859996 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:25.860028688+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:25.860028688+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:25.860098680+00:00 stderr F I0120 10:57:25.860050 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (383.72µs) 2026-01-20T10:57:26.860362642+00:00 stderr F I0120 10:57:26.860218 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:26.860561287+00:00 stderr F I0120 10:57:26.860503 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:26.860561287+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:26.860561287+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:26.860585178+00:00 stderr F I0120 10:57:26.860558 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (354.249µs) 2026-01-20T10:57:27.861712344+00:00 stderr F I0120 10:57:27.861624 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:27.862100254+00:00 stderr F I0120 10:57:27.862017 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:27.862100254+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:27.862100254+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:27.862190516+00:00 stderr F I0120 10:57:27.862133 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (524.454µs) 2026-01-20T10:57:28.862471659+00:00 stderr F I0120 10:57:28.862396 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:28.862994193+00:00 stderr F I0120 10:57:28.862962 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:28.862994193+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:28.862994193+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:28.863231989+00:00 stderr F I0120 10:57:28.863193 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (806.921µs)
2026-01-20T10:57:29.218144545+00:00 stderr F W0120 10:57:29.218080 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:29.221142974+00:00 stderr F I0120 10:57:29.219530 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (6.362992251s)
2026-01-20T10:57:29.221142974+00:00 stderr F I0120 10:57:29.219557 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:29.221142974+00:00 stderr F I0120 10:57:29.219640 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:29.221142974+00:00 stderr F I0120 10:57:29.219682 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:29.221142974+00:00 stderr F I0120 10:57:29.219880 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:29.257687001+00:00 stderr F W0120 10:57:29.257624 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:29.260297870+00:00 stderr F I0120 10:57:29.259530 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.967798ms)
2026-01-20T10:57:29.863330337+00:00 stderr F I0120 10:57:29.863203 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:29.863589124+00:00 stderr F I0120 10:57:29.863550 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:29.863589124+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:29.863589124+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:29.863647595+00:00 stderr F I0120 10:57:29.863611 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (422.701µs)
2026-01-20T10:57:29.863647595+00:00 stderr F I0120 10:57:29.863635 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:29.863721497+00:00 stderr F I0120 10:57:29.863683 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:29.863763858+00:00 stderr F I0120 10:57:29.863743 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:29.864123249+00:00 stderr F I0120 10:57:29.864042 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:29.898392275+00:00 stderr F W0120 10:57:29.898306 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:29.899803131+00:00 stderr F I0120 10:57:29.899765 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.128535ms)
2026-01-20T10:57:30.863935748+00:00 stderr F I0120 10:57:30.863838 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:30.864417972+00:00 stderr F I0120 10:57:30.864374 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:30.864417972+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:30.864417972+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:30.864503284+00:00 stderr F I0120 10:57:30.864471 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (648.158µs)
2026-01-20T10:57:30.864511564+00:00 stderr F I0120 10:57:30.864501 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:30.864604286+00:00 stderr F I0120 10:57:30.864567 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:30.864676398+00:00 stderr F I0120 10:57:30.864653 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:30.865167671+00:00 stderr F I0120 10:57:30.865107 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:30.894054515+00:00 stderr F W0120 10:57:30.893975 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:30.898325318+00:00 stderr F I0120 10:57:30.898269 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.759302ms)
2026-01-20T10:57:31.865144256+00:00 stderr F I0120 10:57:31.864950 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:31.865591167+00:00 stderr F I0120 10:57:31.865533 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:31.865591167+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:31.865591167+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:31.865714011+00:00 stderr F I0120 10:57:31.865677 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (741.169µs)
2026-01-20T10:57:31.865724001+00:00 stderr F I0120 10:57:31.865715 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:31.865851254+00:00 stderr F I0120 10:57:31.865808 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:31.865937926+00:00 stderr F I0120 10:57:31.865911 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:31.866608374+00:00 stderr F I0120 10:57:31.866537 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:31.891679377+00:00 stderr F W0120 10:57:31.891554 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:31.896489535+00:00 stderr F I0120 10:57:31.896416 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.692092ms)
2026-01-20T10:57:32.866389434+00:00 stderr F I0120 10:57:32.866297 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:32.866693882+00:00 stderr F I0120 10:57:32.866656 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:32.866693882+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:32.866693882+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:32.866777084+00:00 stderr F I0120 10:57:32.866744 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (445.221µs)
2026-01-20T10:57:32.866777084+00:00 stderr F I0120 10:57:32.866765 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:32.866854906+00:00 stderr F I0120 10:57:32.866818 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:32.866902057+00:00 stderr F I0120 10:57:32.866883 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:32.867204446+00:00 stderr F I0120 10:57:32.867153 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:32.932672037+00:00 stderr F W0120 10:57:32.932604 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:32.934110164+00:00 stderr F I0120 10:57:32.934068 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (67.30115ms)
2026-01-20T10:57:33.867554041+00:00 stderr F I0120 10:57:33.867482 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:33.867880889+00:00 stderr F I0120 10:57:33.867851 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:33.867880889+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:33.867880889+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:33.867936811+00:00 stderr F I0120 10:57:33.867918 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (449.382µs)
2026-01-20T10:57:33.867958221+00:00 stderr F I0120 10:57:33.867938 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:33.868018553+00:00 stderr F I0120 10:57:33.867991 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:33.868090365+00:00 stderr F I0120 10:57:33.868054 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:33.868505745+00:00 stderr F I0120 10:57:33.868454 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:33.893219559+00:00 stderr F W0120 10:57:33.893072 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:33.895298624+00:00 stderr F I0120 10:57:33.895275 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.332202ms)
2026-01-20T10:57:34.840493850+00:00 stderr F I0120 10:57:34.840436 1 task_graph.go:481] Running 3 on worker 1
2026-01-20T10:57:34.846105218+00:00 stderr F I0120 10:57:34.845996 1 task_graph.go:481] Running 4 on worker 0
2026-01-20T10:57:34.850653939+00:00 stderr F I0120 10:57:34.850588 1 task_graph.go:481] Running 5 on worker 0
2026-01-20T10:57:34.850776882+00:00 stderr F I0120 10:57:34.850741 1 task_graph.go:481] Running 6 on worker 0
2026-01-20T10:57:34.868282455+00:00 stderr F I0120 10:57:34.868204 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:34.868582633+00:00 stderr F I0120 10:57:34.868529 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:34.868582633+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:34.868582633+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:34.868620144+00:00 stderr F I0120 10:57:34.868586 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (394.9µs)
2026-01-20T10:57:34.868620144+00:00 stderr F I0120 10:57:34.868601 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:34.868683045+00:00 stderr F I0120 10:57:34.868642 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:34.868700406+00:00 stderr F I0120 10:57:34.868689 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:34.868992334+00:00 stderr F I0120 10:57:34.868934 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:34.875555147+00:00 stderr F I0120 10:57:34.875473 1 task_graph.go:481] Running 7 on worker 0
2026-01-20T10:57:34.905166250+00:00 stderr F W0120 10:57:34.900261 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:34.905166250+00:00 stderr F I0120 10:57:34.901745 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.140877ms)
2026-01-20T10:57:34.970126028+00:00 stderr F I0120 10:57:34.968878 1 task_graph.go:481] Running 8 on worker 0
2026-01-20T10:57:34.970126028+00:00 stderr F I0120 10:57:34.968978 1 task_graph.go:481] Running 9 on worker 0
2026-01-20T10:57:34.978357666+00:00 stderr F I0120 10:57:34.978297 1 task_graph.go:481] Running 10 on worker 0
2026-01-20T10:57:35.053396110+00:00 stderr F I0120 10:57:35.053329 1 task_graph.go:481] Running 11 on worker 1
2026-01-20T10:57:35.094508318+00:00 stderr F I0120 10:57:35.094396 1 task_graph.go:481] Running 12 on worker 0
2026-01-20T10:57:35.209107138+00:00 stderr F I0120 10:57:35.209015 1 task_graph.go:481] Running 13 on worker 0
2026-01-20T10:57:35.296353366+00:00 stderr F E0120 10:57:35.296286 1 task.go:122] error running apply for imagestream "openshift/driver-toolkit" (657 of 955): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/driver-toolkit\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io driver-toolkit)
2026-01-20T10:57:35.385010900+00:00 stderr F I0120 10:57:35.384923 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:35.385143103+00:00 stderr F I0120 10:57:35.385091 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:35.385200825+00:00 stderr F I0120 10:57:35.385170 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:35.385553574+00:00 stderr F I0120 10:57:35.385485 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:35.408918012+00:00 stderr F W0120 10:57:35.408850 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:35.410312699+00:00 stderr F I0120 10:57:35.410279 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.389922ms)
2026-01-20T10:57:35.868996009+00:00 stderr F I0120 10:57:35.868905 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:35.869249026+00:00 stderr F I0120 10:57:35.869204 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:35.869249026+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:35.869249026+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:35.869282777+00:00 stderr F I0120 10:57:35.869262 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (368.89µs)
2026-01-20T10:57:35.869293727+00:00 stderr F I0120 10:57:35.869280 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:35.869356188+00:00 stderr F I0120 10:57:35.869320 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:35.869398480+00:00 stderr F I0120 10:57:35.869374 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:35.869642196+00:00 stderr F I0120 10:57:35.869587 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:35.896150037+00:00 stderr F W0120 10:57:35.896061 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:35.899383273+00:00 stderr F I0120 10:57:35.897682 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.397711ms)
2026-01-20T10:57:35.945639256+00:00 stderr F E0120 10:57:35.945552 1 task.go:122] error running apply for oauthclient "console" (591 of 955): an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/oauthclients/console\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding
2026-01-20T10:57:36.869582189+00:00 stderr F I0120 10:57:36.869475 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:36.869863337+00:00 stderr F I0120 10:57:36.869823 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:36.869863337+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:36.869863337+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:36.869911738+00:00 stderr F I0120 10:57:36.869889 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (427.591µs)
2026-01-20T10:57:36.869921828+00:00 stderr F I0120 10:57:36.869909 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:36.869996230+00:00 stderr F I0120 10:57:36.869960 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:36.870039931+00:00 stderr F I0120 10:57:36.870021 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:36.870407422+00:00 stderr F I0120 10:57:36.870350 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:36.895488864+00:00 stderr F W0120 10:57:36.895396 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:36.901532785+00:00 stderr F I0120 10:57:36.901460 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.543785ms)
2026-01-20T10:57:37.870633633+00:00 stderr F I0120 10:57:37.870548 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:37.870879720+00:00 stderr F I0120 10:57:37.870840 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:37.870879720+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:37.870879720+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:37.870927531+00:00 stderr F I0120 10:57:37.870892 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (360.04µs)
2026-01-20T10:57:37.870927531+00:00 stderr F I0120 10:57:37.870915 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:37.871036344+00:00 stderr F I0120 10:57:37.870956 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:37.871036344+00:00 stderr F I0120 10:57:37.871011 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:37.871343622+00:00 stderr F I0120 10:57:37.871280 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:37.918847958+00:00 stderr F W0120 10:57:37.918749 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:37.920892342+00:00 stderr F I0120 10:57:37.920810 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.889259ms)
2026-01-20T10:57:38.872118307+00:00 stderr F I0120 10:57:38.872009 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:38.872579839+00:00 stderr F I0120 10:57:38.872533 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:38.872579839+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:38.872579839+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:38.872643401+00:00 stderr F I0120 10:57:38.872613 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (644.567µs)
2026-01-20T10:57:38.872643401+00:00 stderr F I0120 10:57:38.872636 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:38.872753954+00:00 stderr F I0120 10:57:38.872692 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:38.872772244+00:00 stderr F I0120 10:57:38.872758 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:38.873076002+00:00 stderr F I0120 10:57:38.873015 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:38.905214683+00:00 stderr F W0120 10:57:38.905122 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:38.908272003+00:00 stderr F I0120 10:57:38.908193 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.54953ms)
2026-01-20T10:57:39.873010476+00:00 stderr F I0120 10:57:39.872912 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:39.873356765+00:00 stderr F I0120 10:57:39.873319 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:39.873356765+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:39.873356765+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:39.873471859+00:00 stderr F I0120 10:57:39.873393 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (496.913µs)
2026-01-20T10:57:39.873471859+00:00 stderr F I0120 10:57:39.873414 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:39.873488180+00:00 stderr F I0120 10:57:39.873467 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:39.873556291+00:00 stderr F I0120 10:57:39.873528 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:39.873865739+00:00 stderr F I0120 10:57:39.873809 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:39.897771191+00:00 stderr F W0120 10:57:39.897715 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:39.900825612+00:00 stderr F I0120 10:57:39.900769 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.349934ms)
2026-01-20T10:57:40.873902186+00:00 stderr F I0120 10:57:40.873838 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:40.874350127+00:00 stderr F I0120 10:57:40.874329 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:40.874350127+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:40.874350127+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:57:40.874452580+00:00 stderr F I0120 10:57:40.874431 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (606.805µs)
2026-01-20T10:57:40.874490081+00:00 stderr F I0120 10:57:40.874477 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:57:40.874575123+00:00 stderr F I0120 10:57:40.874551 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:57:40.874648955+00:00 stderr F I0120 10:57:40.874636 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:57:40.875031995+00:00 stderr F I0120 10:57:40.874993 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:57:40.898549087+00:00 stderr F W0120 10:57:40.898458 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:57:40.901902696+00:00 stderr F I0120 10:57:40.901774 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.289992ms)
2026-01-20T10:57:41.875503453+00:00 stderr F I0120 10:57:41.875418 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:57:41.875727779+00:00 stderr F I0120 10:57:41.875687 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:57:41.875727779+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:57:41.875727779+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:41.875783550+00:00 stderr F I0120 10:57:41.875757 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (362.819µs) 2026-01-20T10:57:41.875783550+00:00 stderr F I0120 10:57:41.875778 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:41.875845672+00:00 stderr F I0120 10:57:41.875817 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:41.875885643+00:00 stderr F I0120 10:57:41.875867 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:41.876148730+00:00 stderr F I0120 10:57:41.876101 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:41.908632179+00:00 stderr F W0120 10:57:41.908561 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:41.912528252+00:00 stderr F I0120 10:57:41.910489 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.707838ms) 2026-01-20T10:57:42.673402494+00:00 stderr F I0120 10:57:42.673352 1 reflector.go:351] Caches populated for 
*v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2026-01-20T10:57:42.673667541+00:00 stderr F I0120 10:57:42.673630 1 sync_worker.go:234] Notify the sync worker: Cluster operator network changed Degraded from "False" to "True" 2026-01-20T10:57:42.877013999+00:00 stderr F I0120 10:57:42.876352 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:42.877013999+00:00 stderr F I0120 10:57:42.876644 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:42.877013999+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:42.877013999+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:42.877013999+00:00 stderr F I0120 10:57:42.876692 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (354.86µs) 2026-01-20T10:57:42.877013999+00:00 stderr F I0120 10:57:42.876709 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:42.877013999+00:00 stderr F I0120 10:57:42.876752 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:42.877013999+00:00 stderr F I0120 10:57:42.876800 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:42.877137642+00:00 stderr F I0120 10:57:42.877045 1 status.go:100] merge into existing history completed=false 
desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:42.899875723+00:00 stderr F W0120 10:57:42.899781 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:42.901385373+00:00 stderr F I0120 10:57:42.901291 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.578569ms) 2026-01-20T10:57:42.920630622+00:00 stderr F I0120 10:57:42.920547 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2026-01-20T10:57:42.920746875+00:00 stderr F I0120 10:57:42.920701 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2026-01-20T10:57:42.920797106+00:00 stderr F I0120 10:57:42.920758 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:42.920797106+00:00 stderr F I0120 10:57:42.920782 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterVersionOverrides' reason='ClusterVersionOverridesSet' message='Disabling ownership via cluster version overrides prevents upgrades. 
Please remove overrides before continuing.') 2026-01-20T10:57:42.920814916+00:00 stderr F I0120 10:57:42.920800 1 upgradeable.go:123] Cluster current version=4.16.0 2026-01-20T10:57:42.920859358+00:00 stderr F I0120 10:57:42.920828 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterOperators' reason='DegradedPool' message='Cluster operator machine-config should not be upgraded between minor versions: One or more machine config pools are degraded, please see `oc get mcp` for further details and resolve before upgrading') 2026-01-20T10:57:42.920904639+00:00 stderr F I0120 10:57:42.920872 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (200.065µs) 2026-01-20T10:57:42.920967220+00:00 stderr F I0120 10:57:42.920939 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:42.921126095+00:00 stderr F I0120 10:57:42.920973 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:42.921126095+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:42.921126095+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:42.921158215+00:00 stderr F I0120 10:57:42.921140 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (388.59µs) 2026-01-20T10:57:42.921242478+00:00 stderr F I0120 10:57:42.921205 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:42.921349671+00:00 stderr F I0120 10:57:42.921329 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:42.922129422+00:00 stderr F I0120 10:57:42.922007 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:42.955937516+00:00 stderr F W0120 10:57:42.955891 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:42.957783275+00:00 stderr F I0120 10:57:42.957756 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.028559ms) 2026-01-20T10:57:42.957831576+00:00 stderr F I0120 10:57:42.957820 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:42.957858777+00:00 stderr F I0120 10:57:42.957850 1 cvo.go:668] Cluster version changed, waiting for 
newer event 2026-01-20T10:57:42.957881437+00:00 stderr F I0120 10:57:42.957872 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.841µs) 2026-01-20T10:57:42.961120373+00:00 stderr F I0120 10:57:42.961085 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2026-01-20T10:57:42.961139584+00:00 stderr F I0120 10:57:42.961128 1 upgradeable.go:69] Upgradeability last checked 40.256615ms ago, will not re-check until 2026-01-20T10:59:42Z 2026-01-20T10:57:42.961147194+00:00 stderr F I0120 10:57:42.961139 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (84.932µs) 2026-01-20T10:57:42.961154564+00:00 stderr F I0120 10:57:42.961092 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:42.961184995+00:00 stderr F I0120 10:57:42.961172 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:42.961483152+00:00 stderr F I0120 10:57:42.961467 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:42.961483152+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:42.961483152+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:42.961554074+00:00 stderr F I0120 10:57:42.961221 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:42.961578955+00:00 stderr F I0120 10:57:42.961540 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (370.08µs) 2026-01-20T10:57:42.961609676+00:00 stderr F I0120 10:57:42.961594 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:42.961968445+00:00 stderr F I0120 10:57:42.961913 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:42.987853099+00:00 stderr F W0120 10:57:42.987781 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:42.989281118+00:00 stderr F I0120 10:57:42.989247 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.186716ms) 2026-01-20T10:57:42.989281118+00:00 stderr F I0120 10:57:42.989274 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:42.989362820+00:00 stderr F I0120 10:57:42.989332 1 cvo.go:689] Desired version from operator is 
v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:42.989398451+00:00 stderr F I0120 10:57:42.989383 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:42.989641317+00:00 stderr F I0120 10:57:42.989604 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:43.010518259+00:00 stderr F W0120 10:57:43.010463 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:43.012131212+00:00 stderr F I0120 10:57:43.012097 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.817634ms) 2026-01-20T10:57:43.876977233+00:00 stderr F I0120 10:57:43.876871 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:43.877315782+00:00 stderr F I0120 10:57:43.877259 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:43.877315782+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:43.877315782+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:43.877426745+00:00 stderr F I0120 10:57:43.877357 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (512.573µs) 2026-01-20T10:57:43.877426745+00:00 stderr F I0120 10:57:43.877389 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:43.877488347+00:00 stderr F I0120 10:57:43.877445 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:43.877545578+00:00 stderr F I0120 10:57:43.877510 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:43.877872177+00:00 stderr F I0120 10:57:43.877803 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:43.900415123+00:00 stderr F W0120 10:57:43.900321 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:43.902479107+00:00 stderr F I0120 10:57:43.902410 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.018262ms) 2026-01-20T10:57:44.037786655+00:00 stderr F I0120 10:57:44.037701 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:44.878560280+00:00 stderr F I0120 10:57:44.878448 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:44.878929469+00:00 stderr F I0120 10:57:44.878892 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:44.878929469+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:44.878929469+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:44.878995881+00:00 stderr F I0120 10:57:44.878962 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (537.684µs) 2026-01-20T10:57:44.878995881+00:00 stderr F I0120 10:57:44.878989 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:44.879149365+00:00 stderr F I0120 10:57:44.879046 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:44.879217907+00:00 stderr F I0120 10:57:44.879172 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:44.879597628+00:00 stderr F I0120 10:57:44.879531 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} 
last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:44.906718354+00:00 stderr F W0120 10:57:44.906602 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:44.908727178+00:00 stderr F I0120 10:57:44.908673 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.679305ms) 2026-01-20T10:57:45.303173519+00:00 stderr F E0120 10:57:45.302827 1 task.go:122] error running apply for imagestream "openshift/driver-toolkit" (657 of 955): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/driver-toolkit\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io driver-toolkit) 2026-01-20T10:57:45.879381967+00:00 stderr F I0120 10:57:45.879316 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:45.879605112+00:00 stderr F I0120 10:57:45.879580 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:45.879605112+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:45.879605112+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:45.879644504+00:00 stderr F I0120 10:57:45.879624 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (321.09µs) 2026-01-20T10:57:45.879651755+00:00 stderr F I0120 10:57:45.879642 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:45.879717726+00:00 stderr F I0120 10:57:45.879688 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:45.879752217+00:00 stderr F I0120 10:57:45.879738 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:45.880005654+00:00 stderr F I0120 10:57:45.879954 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:45.904429479+00:00 stderr F W0120 10:57:45.904371 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:45.905847627+00:00 stderr F I0120 10:57:45.905815 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.170362ms) 2026-01-20T10:57:45.950608201+00:00 stderr F E0120 10:57:45.950545 1 task.go:122] error running apply for oauthclient 
"console" (591 of 955): an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/oauthclients/console\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:46.880223095+00:00 stderr F I0120 10:57:46.880156 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:46.880542753+00:00 stderr F I0120 10:57:46.880523 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:46.880542753+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:46.880542753+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:46.880624945+00:00 stderr F I0120 10:57:46.880608 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (466.392µs) 2026-01-20T10:57:46.880658326+00:00 stderr F I0120 10:57:46.880648 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:46.880765419+00:00 stderr F I0120 10:57:46.880740 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:46.880833281+00:00 stderr F I0120 10:57:46.880821 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:46.881113868+00:00 stderr F I0120 10:57:46.881081 1 status.go:100] merge into existing history 
completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:46.910244798+00:00 stderr F W0120 10:57:46.910186 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:46.911643646+00:00 stderr F I0120 10:57:46.911604 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.953888ms) 2026-01-20T10:57:47.881349530+00:00 stderr F I0120 10:57:47.881262 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:47.881906074+00:00 stderr F I0120 10:57:47.881874 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:47.881906074+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:47.881906074+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:47.882054468+00:00 stderr F I0120 10:57:47.882023 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (774.73µs) 2026-01-20T10:57:47.882241533+00:00 stderr F I0120 10:57:47.882152 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:47.882401287+00:00 stderr F I0120 10:57:47.882338 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:47.882473259+00:00 stderr F I0120 10:57:47.882442 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:47.883009834+00:00 stderr F I0120 10:57:47.882928 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:47.930721966+00:00 stderr F W0120 10:57:47.930620 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:47.933797417+00:00 stderr F I0120 10:57:47.933697 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.560504ms) 2026-01-20T10:57:48.100134835+00:00 stderr F I0120 10:57:48.100028 1 reflector.go:351] Caches populated for *v1.ConfigMap 
from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:48.455019901+00:00 stderr F I0120 10:57:48.454929 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2026-01-20T10:57:48.885276738+00:00 stderr F I0120 10:57:48.885168 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:48.885488754+00:00 stderr F I0120 10:57:48.885452 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:48.885488754+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:48.885488754+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:48.885669960+00:00 stderr F I0120 10:57:48.885638 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (484.694µs) 2026-01-20T10:57:48.885738731+00:00 stderr F I0120 10:57:48.885694 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:48.885894226+00:00 stderr F I0120 10:57:48.885860 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:48.885944317+00:00 stderr F I0120 10:57:48.885932 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:48.886294576+00:00 stderr F I0120 10:57:48.886244 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:48.920680725+00:00 stderr F W0120 10:57:48.920616 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:48.922316709+00:00 stderr F I0120 10:57:48.922184 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.507495ms) 2026-01-20T10:57:49.362111660+00:00 stderr F I0120 10:57:49.359172 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2026-01-20T10:57:49.886614110+00:00 stderr F I0120 10:57:49.886514 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:49.887016830+00:00 stderr F I0120 10:57:49.886958 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:49.887016830+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:49.887016830+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:49.887166284+00:00 stderr F I0120 10:57:49.887058 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (574.055µs) 2026-01-20T10:57:49.887166284+00:00 stderr F I0120 10:57:49.887121 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:49.887290777+00:00 stderr F I0120 10:57:49.887214 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:49.887331178+00:00 stderr F I0120 10:57:49.887316 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:49.887813221+00:00 stderr F I0120 10:57:49.887723 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:49.915494113+00:00 stderr F W0120 10:57:49.915364 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:49.918122573+00:00 stderr F I0120 10:57:49.918000 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.872886ms) 2026-01-20T10:57:50.357916303+00:00 stderr F I0120 10:57:50.357753 1 sync_worker.go:236] The sync worker already has a 
pending notification, so no need to inform about: Cluster operator network changed Degraded from "True" to "False" 2026-01-20T10:57:50.385578125+00:00 stderr F I0120 10:57:50.384705 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:50.385578125+00:00 stderr F I0120 10:57:50.384811 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:50.385578125+00:00 stderr F I0120 10:57:50.384878 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:50.393902835+00:00 stderr F I0120 10:57:50.393781 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:50.438171526+00:00 stderr F W0120 10:57:50.437645 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:50.442141510+00:00 stderr F I0120 10:57:50.439114 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.420909ms) 2026-01-20T10:57:50.887377445+00:00 stderr F I0120 10:57:50.887143 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:50.887577960+00:00 stderr F I0120 10:57:50.887526 1 availableupdates.go:145] Requeue available-update evaluation, 
because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:50.887577960+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:50.887577960+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:50.887632051+00:00 stderr F I0120 10:57:50.887600 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (478.252µs) 2026-01-20T10:57:50.887632051+00:00 stderr F I0120 10:57:50.887618 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:50.887696503+00:00 stderr F I0120 10:57:50.887658 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:50.887777105+00:00 stderr F I0120 10:57:50.887726 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:50.888146995+00:00 stderr F I0120 10:57:50.888033 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 
2026-01-20T10:57:50.933755632+00:00 stderr F W0120 10:57:50.933668 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:50.936199876+00:00 stderr F I0120 10:57:50.936158 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.535334ms) 2026-01-20T10:57:51.888443448+00:00 stderr F I0120 10:57:51.888366 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:51.888770738+00:00 stderr F I0120 10:57:51.888727 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:51.888770738+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:51.888770738+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:51.888833019+00:00 stderr F I0120 10:57:51.888800 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (447.152µs) 2026-01-20T10:57:51.888833019+00:00 stderr F I0120 10:57:51.888825 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:51.888919332+00:00 stderr F I0120 10:57:51.888881 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:51.888967243+00:00 stderr F I0120 10:57:51.888944 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:51.889328582+00:00 stderr F I0120 10:57:51.889276 1 status.go:100] merge into existing history completed=false 
desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:51.917132788+00:00 stderr F W0120 10:57:51.916828 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:51.918938315+00:00 stderr F I0120 10:57:51.918871 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.042994ms) 2026-01-20T10:57:52.889038210+00:00 stderr F I0120 10:57:52.888962 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:52.889498002+00:00 stderr F I0120 10:57:52.889249 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:52.889498002+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:52.889498002+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:52.889498002+00:00 stderr F I0120 10:57:52.889341 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (390.76µs) 2026-01-20T10:57:52.889498002+00:00 stderr F I0120 10:57:52.889356 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:52.889498002+00:00 stderr F I0120 10:57:52.889415 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:52.889498002+00:00 stderr F I0120 10:57:52.889473 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:52.889778029+00:00 stderr F I0120 10:57:52.889730 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:52.913836555+00:00 stderr F W0120 10:57:52.913717 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:52.919215527+00:00 stderr F I0120 10:57:52.919123 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.751606ms) 2026-01-20T10:57:53.889991970+00:00 stderr F I0120 10:57:53.889911 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:57:53.890311089+00:00 stderr F I0120 10:57:53.890273 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:53.890311089+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:53.890311089+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:53.890371530+00:00 stderr F I0120 10:57:53.890345 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (450.342µs) 2026-01-20T10:57:53.890382201+00:00 stderr F I0120 10:57:53.890370 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:53.890472323+00:00 stderr F I0120 10:57:53.890436 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:53.890521054+00:00 stderr F I0120 10:57:53.890501 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:53.890843593+00:00 stderr F I0120 10:57:53.890792 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:53.924156674+00:00 stderr F W0120 10:57:53.924052 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:53.925765747+00:00 stderr F I0120 10:57:53.925712 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.337725ms) 2026-01-20T10:57:54.890570731+00:00 stderr F I0120 10:57:54.890447 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:54.890866999+00:00 stderr F I0120 10:57:54.890820 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:54.890866999+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:54.890866999+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:54.890941861+00:00 stderr F I0120 10:57:54.890899 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (465.752µs) 2026-01-20T10:57:54.890941861+00:00 stderr F I0120 10:57:54.890923 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:54.891085694+00:00 stderr F I0120 10:57:54.891009 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:54.891162066+00:00 stderr F I0120 10:57:54.891128 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:54.891619148+00:00 stderr F I0120 10:57:54.891537 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:54.920203065+00:00 stderr F W0120 10:57:54.920072 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:54.922258139+00:00 stderr F I0120 10:57:54.922202 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.275277ms) 2026-01-20T10:57:55.891273064+00:00 stderr F I0120 10:57:55.891156 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:57:55.891522591+00:00 stderr F I0120 10:57:55.891470 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:55.891522591+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:55.891522591+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:55.891564012+00:00 stderr F I0120 10:57:55.891528 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (387.55µs) 2026-01-20T10:57:55.891564012+00:00 stderr F I0120 10:57:55.891546 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:55.891627364+00:00 stderr F I0120 10:57:55.891586 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:55.891683365+00:00 stderr F I0120 10:57:55.891653 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:55.891955363+00:00 stderr F I0120 10:57:55.891884 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:55.917778766+00:00 stderr F W0120 10:57:55.917675 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:55.921293829+00:00 stderr F I0120 10:57:55.921102 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.509081ms) 2026-01-20T10:57:56.892266957+00:00 stderr F I0120 10:57:56.892186 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:56.892789961+00:00 stderr F I0120 10:57:56.892661 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:56.892789961+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:56.892789961+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:56.892896193+00:00 stderr F I0120 10:57:56.892850 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (677.768µs) 2026-01-20T10:57:56.892910914+00:00 stderr F I0120 10:57:56.892898 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:56.893075258+00:00 stderr F I0120 10:57:56.893025 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:56.893247193+00:00 stderr F I0120 10:57:56.893220 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:56.893821938+00:00 stderr F I0120 10:57:56.893754 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:56.949810968+00:00 stderr F W0120 10:57:56.949729 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:56.951639447+00:00 stderr F I0120 10:57:56.951597 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (58.698153ms) 2026-01-20T10:57:57.893253318+00:00 stderr F I0120 10:57:57.893142 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:57:57.893479694+00:00 stderr F I0120 10:57:57.893425 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:57.893479694+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:57.893479694+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:57.893502045+00:00 stderr F I0120 10:57:57.893482 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (354.819µs) 2026-01-20T10:57:57.893502045+00:00 stderr F I0120 10:57:57.893497 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:57.893599497+00:00 stderr F I0120 10:57:57.893545 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:57.893616678+00:00 stderr F I0120 10:57:57.893598 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:57.893920956+00:00 stderr F I0120 10:57:57.893842 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:57.946112976+00:00 stderr F W0120 10:57:57.946009 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:57.947905793+00:00 stderr F I0120 10:57:57.947851 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.348286ms) 2026-01-20T10:57:58.303157738+00:00 stderr F I0120 10:57:58.303023 1 task_graph.go:481] Running 14 on worker 0 2026-01-20T10:57:58.348146777+00:00 stderr F I0120 10:57:58.345524 1 task_graph.go:481] Running 15 on worker 0 2026-01-20T10:57:58.385048004+00:00 stderr F I0120 10:57:58.384993 1 task_graph.go:481] Running 16 on worker 0 2026-01-20T10:57:58.385171837+00:00 stderr F I0120 10:57:58.385156 1 task_graph.go:481] Running 17 on worker 0 2026-01-20T10:57:58.496299206+00:00 stderr F I0120 10:57:58.496231 1 task_graph.go:481] Running 18 on worker 0 2026-01-20T10:57:58.498406472+00:00 stderr F I0120 10:57:58.498372 1 task_graph.go:481] Running 19 on worker 0 2026-01-20T10:57:58.504088042+00:00 stderr F I0120 10:57:58.504047 1 task_graph.go:481] Running 20 on worker 0 2026-01-20T10:57:58.504217355+00:00 stderr F I0120 10:57:58.504189 1 task_graph.go:481] Running 21 on worker 0 2026-01-20T10:57:58.513397808+00:00 stderr F I0120 10:57:58.513368 1 task_graph.go:481] Running 22 on worker 0 2026-01-20T10:57:58.517790614+00:00 stderr F I0120 10:57:58.517772 1 task_graph.go:481] Running 23 on worker 0 2026-01-20T10:57:58.894281182+00:00 stderr F I0120 10:57:58.894190 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:58.894631461+00:00 stderr F I0120 10:57:58.894579 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services 
"thanos-querier" not found) 2026-01-20T10:57:58.894631461+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:57:58.894631461+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:58.894724163+00:00 stderr F I0120 10:57:58.894675 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (516.133µs) 2026-01-20T10:57:58.894724163+00:00 stderr F I0120 10:57:58.894701 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:58.894820046+00:00 stderr F I0120 10:57:58.894766 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:58.894870218+00:00 stderr F I0120 10:57:58.894839 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:58.895351520+00:00 stderr F I0120 10:57:58.895278 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:58.935675384+00:00 stderr F W0120 10:57:58.935579 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:58.939339277+00:00 stderr F I0120 10:57:58.939295 1 
cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.585882ms) 2026-01-20T10:57:58.951545747+00:00 stderr F I0120 10:57:58.951491 1 task_graph.go:481] Running 24 on worker 1 2026-01-20T10:57:58.963995693+00:00 stderr F I0120 10:57:58.963851 1 task_graph.go:481] Running 25 on worker 0 2026-01-20T10:57:59.007792736+00:00 stderr F I0120 10:57:59.007704 1 task_graph.go:481] Running 26 on worker 1 2026-01-20T10:57:59.895143375+00:00 stderr F I0120 10:57:59.894975 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:57:59.895555856+00:00 stderr F I0120 10:57:59.895481 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:57:59.895555856+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:57:59.895555856+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:57:59.895608457+00:00 stderr F I0120 10:57:59.895581 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (625.145µs) 2026-01-20T10:57:59.895626407+00:00 stderr F I0120 10:57:59.895606 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:57:59.895778771+00:00 stderr F I0120 10:57:59.895716 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:57:59.895842683+00:00 stderr F I0120 10:57:59.895801 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:57:59.896424327+00:00 stderr F I0120 10:57:59.896329 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:57:59.956628717+00:00 stderr F W0120 10:57:59.955996 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:57:59.959323985+00:00 stderr F I0120 10:57:59.959187 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (63.574924ms) 2026-01-20T10:58:00.011203743+00:00 stderr F I0120 10:58:00.011006 1 task_graph.go:481] Running 27 on worker 1 
2026-01-20T10:58:00.108766301+00:00 stderr F I0120 10:58:00.108675 1 task_graph.go:481] Running 28 on worker 1
2026-01-20T10:58:00.159346486+00:00 stderr F I0120 10:58:00.159258 1 task_graph.go:481] Running 29 on worker 0
2026-01-20T10:58:00.459347136+00:00 stderr F I0120 10:58:00.459055 1 task_graph.go:481] Running 30 on worker 0
2026-01-20T10:58:00.507972701+00:00 stderr F I0120 10:58:00.507905 1 task_graph.go:481] Running 31 on worker 1
2026-01-20T10:58:00.657988651+00:00 stderr F I0120 10:58:00.657908 1 task_graph.go:481] Running 32 on worker 0
2026-01-20T10:58:00.896558751+00:00 stderr F I0120 10:58:00.896470 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:00.897000272+00:00 stderr F I0120 10:58:00.896969 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:00.897000272+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:00.897000272+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:00.897142596+00:00 stderr F I0120 10:58:00.897111 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (703.028µs)
2026-01-20T10:58:00.897156346+00:00 stderr F I0120 10:58:00.897144 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:00.897263859+00:00 stderr F I0120 10:58:00.897227 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:00.897345061+00:00 stderr F I0120 10:58:00.897322 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:00.897825124+00:00 stderr F I0120 10:58:00.897776 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:00.921439263+00:00 stderr F W0120 10:58:00.921390 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:00.922880759+00:00 stderr F I0120 10:58:00.922845 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.700002ms)
2026-01-20T10:58:01.257339465+00:00 stderr F I0120 10:58:01.257274 1 task_graph.go:481] Running 33 on worker 0
2026-01-20T10:58:01.358288289+00:00 stderr F I0120 10:58:01.358224 1 task_graph.go:481] Running 34 on worker 0
2026-01-20T10:58:01.897760971+00:00 stderr F I0120 10:58:01.897661 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:01.899533467+00:00 stderr F I0120 10:58:01.898177 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:01.899533467+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:01.899533467+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:01.899533467+00:00 stderr F I0120 10:58:01.898379 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (730.62µs)
2026-01-20T10:58:01.899533467+00:00 stderr F I0120 10:58:01.898407 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:01.899533467+00:00 stderr F I0120 10:58:01.898486 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:01.899533467+00:00 stderr F I0120 10:58:01.898566 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:01.899533467+00:00 stderr F I0120 10:58:01.898974 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:01.955323014+00:00 stderr F W0120 10:58:01.955260 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:01.957207052+00:00 stderr F I0120 10:58:01.957165 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (58.756693ms)
2026-01-20T10:58:02.008631248+00:00 stderr F I0120 10:58:02.008561 1 task_graph.go:481] Running 35 on worker 1
2026-01-20T10:58:02.899414704+00:00 stderr F I0120 10:58:02.899297 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:02.899754723+00:00 stderr F I0120 10:58:02.899698 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:02.899754723+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:02.899754723+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:02.899856785+00:00 stderr F I0120 10:58:02.899801 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (515.193µs)
2026-01-20T10:58:02.899856785+00:00 stderr F I0120 10:58:02.899834 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:02.899960288+00:00 stderr F I0120 10:58:02.899902 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:02.900010249+00:00 stderr F I0120 10:58:02.899983 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:02.900509272+00:00 stderr F I0120 10:58:02.900423 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:02.955490189+00:00 stderr F W0120 10:58:02.955402 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:02.963966824+00:00 stderr F I0120 10:58:02.963884 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (64.036217ms)
2026-01-20T10:58:03.856830403+00:00 stderr F I0120 10:58:03.856761 1 core.go:138] Updating ConfigMap openshift-machine-config-operator/kube-rbac-proxy due to diff: &v1.ConfigMap{
2026-01-20T10:58:03.856830403+00:00 stderr F TypeMeta: v1.TypeMeta{
2026-01-20T10:58:03.856830403+00:00 stderr F - Kind: "",
2026-01-20T10:58:03.856830403+00:00 stderr F + Kind: "ConfigMap",
2026-01-20T10:58:03.856830403+00:00 stderr F - APIVersion: "",
2026-01-20T10:58:03.856830403+00:00 stderr F + APIVersion: "v1",
2026-01-20T10:58:03.856830403+00:00 stderr F },
2026-01-20T10:58:03.856830403+00:00 stderr F ObjectMeta: v1.ObjectMeta{
2026-01-20T10:58:03.856830403+00:00 stderr F ... // 2 identical fields
2026-01-20T10:58:03.856830403+00:00 stderr F Namespace: "openshift-machine-config-operator",
2026-01-20T10:58:03.856830403+00:00 stderr F SelfLink: "",
2026-01-20T10:58:03.856830403+00:00 stderr F - UID: "ba7edbb4-1ba2-49b6-98a7-d849069e9f80",
2026-01-20T10:58:03.856830403+00:00 stderr F + UID: "",
2026-01-20T10:58:03.856830403+00:00 stderr F - ResourceVersion: "41874",
2026-01-20T10:58:03.856830403+00:00 stderr F + ResourceVersion: "",
2026-01-20T10:58:03.856830403+00:00 stderr F Generation: 0,
2026-01-20T10:58:03.856830403+00:00 stderr F - CreationTimestamp: v1.Time{Time: s"2024-06-26 12:39:23 +0000 UTC"},
2026-01-20T10:58:03.856830403+00:00 stderr F + CreationTimestamp: v1.Time{},
2026-01-20T10:58:03.856830403+00:00 stderr F DeletionTimestamp: nil,
2026-01-20T10:58:03.856830403+00:00 stderr F DeletionGracePeriodSeconds: nil,
2026-01-20T10:58:03.856830403+00:00 stderr F ... // 2 identical fields
2026-01-20T10:58:03.856830403+00:00 stderr F OwnerReferences: {{APIVersion: "config.openshift.io/v1", Kind: "ClusterVersion", Name: "version", UID: "a73cbaa6-40d3-4694-9b98-c0a6eed45825", ...}},
2026-01-20T10:58:03.856830403+00:00 stderr F Finalizers: nil,
2026-01-20T10:58:03.856830403+00:00 stderr F - ManagedFields: []v1.ManagedFieldsEntry{
2026-01-20T10:58:03.856830403+00:00 stderr F - {
2026-01-20T10:58:03.856830403+00:00 stderr F - Manager: "cluster-version-operator",
2026-01-20T10:58:03.856830403+00:00 stderr F - Operation: "Update",
2026-01-20T10:58:03.856830403+00:00 stderr F - APIVersion: "v1",
2026-01-20T10:58:03.856830403+00:00 stderr F - Time: s"2026-01-20 10:53:48 +0000 UTC",
2026-01-20T10:58:03.856830403+00:00 stderr F - FieldsType: "FieldsV1",
2026-01-20T10:58:03.856830403+00:00 stderr F - FieldsV1: s`{"f:data":{},"f:metadata":{"f:annotations":{".":{},"f:include.re`...,
2026-01-20T10:58:03.856830403+00:00 stderr F - },
2026-01-20T10:58:03.856830403+00:00 stderr F - {
2026-01-20T10:58:03.856830403+00:00 stderr F - Manager: "machine-config-operator",
2026-01-20T10:58:03.856830403+00:00 stderr F - Operation: "Update",
2026-01-20T10:58:03.856830403+00:00 stderr F - APIVersion: "v1",
2026-01-20T10:58:03.856830403+00:00 stderr F - Time: s"2026-01-20 10:54:18 +0000 UTC",
2026-01-20T10:58:03.856830403+00:00 stderr F - FieldsType: "FieldsV1",
2026-01-20T10:58:03.856830403+00:00 stderr F - FieldsV1: s`{"f:data":{"f:config-file.yaml":{}}}`,
2026-01-20T10:58:03.856830403+00:00 stderr F - },
2026-01-20T10:58:03.856830403+00:00 stderr F - },
2026-01-20T10:58:03.856830403+00:00 stderr F + ManagedFields: nil,
2026-01-20T10:58:03.856830403+00:00 stderr F },
2026-01-20T10:58:03.856830403+00:00 stderr F Immutable: nil,
2026-01-20T10:58:03.856830403+00:00 stderr F Data: {"config-file.yaml": "authorization:\n resourceAttributes:\n apiVersion: v1\n reso"...},
2026-01-20T10:58:03.856830403+00:00 stderr F BinaryData: nil,
2026-01-20T10:58:03.856830403+00:00 stderr F }
2026-01-20T10:58:03.900204494+00:00 stderr F I0120 10:58:03.900141 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:03.900583954+00:00 stderr F I0120 10:58:03.900528 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:03.900583954+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:03.900583954+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:03.900597274+00:00 stderr F I0120 10:58:03.900588 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (460.192µs)
2026-01-20T10:58:03.900638665+00:00 stderr F I0120 10:58:03.900613 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:03.900932984+00:00 stderr F I0120 10:58:03.900700 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:03.900932984+00:00 stderr F I0120 10:58:03.900783 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:03.901165399+00:00 stderr F I0120 10:58:03.901112 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:03.947318061+00:00 stderr F W0120 10:58:03.946887 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:03.949121947+00:00 stderr F I0120 10:58:03.949021 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.412931ms)
2026-01-20T10:58:04.057905061+00:00 stderr F E0120 10:58:04.057335 1 task.go:122] error running apply for clusteroperator "machine-config" (799 of 955): Cluster operator machine-config is degraded
2026-01-20T10:58:04.057905061+00:00 stderr F I0120 10:58:04.057818 1 task_graph.go:481] Running 36 on worker 0
2026-01-20T10:58:04.158513865+00:00 stderr F I0120 10:58:04.158417 1 task_graph.go:481] Running 37 on worker 0
2026-01-20T10:58:04.356493594+00:00 stderr F I0120 10:58:04.356396 1 task_graph.go:481] Running 38 on worker 0
2026-01-20T10:58:04.759183803+00:00 stderr F I0120 10:58:04.757013 1 task_graph.go:481] Running 39 on worker 0
2026-01-20T10:58:04.900692817+00:00 stderr F I0120 10:58:04.900599 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:04.900953513+00:00 stderr F I0120 10:58:04.900894 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:04.900953513+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:04.900953513+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:04.900974124+00:00 stderr F I0120 10:58:04.900959 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (372.18µs)
2026-01-20T10:58:04.900987834+00:00 stderr F I0120 10:58:04.900978 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:04.901082908+00:00 stderr F I0120 10:58:04.901026 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:04.901209071+00:00 stderr F I0120 10:58:04.901142 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:04.901644922+00:00 stderr F I0120 10:58:04.901580 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:04.942875608+00:00 stderr F W0120 10:58:04.942792 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:04.944322415+00:00 stderr F I0120 10:58:04.944255 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.27592ms)
2026-01-20T10:58:05.207231854+00:00 stderr F I0120 10:58:05.207161 1 task_graph.go:481] Running 40 on worker 1
2026-01-20T10:58:05.385277035+00:00 stderr F I0120 10:58:05.385196 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:05.385324197+00:00 stderr F I0120 10:58:05.385291 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:05.385376028+00:00 stderr F I0120 10:58:05.385346 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:05.385658426+00:00 stderr F I0120 10:58:05.385587 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:05.409692556+00:00 stderr F W0120 10:58:05.409620 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:05.411048411+00:00 stderr F I0120 10:58:05.411012 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.824487ms)
2026-01-20T10:58:05.657987303+00:00 stderr F I0120 10:58:05.657848 1 task_graph.go:481] Running 41 on worker 0
2026-01-20T10:58:05.901580781+00:00 stderr F I0120 10:58:05.901467 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:05.901925769+00:00 stderr F I0120 10:58:05.901804 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:05.901925769+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:05.901925769+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:05.901974810+00:00 stderr F I0120 10:58:05.901945 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (489.903µs)
2026-01-20T10:58:05.901974810+00:00 stderr F I0120 10:58:05.901965 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:05.902099313+00:00 stderr F I0120 10:58:05.902020 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:05.902126264+00:00 stderr F I0120 10:58:05.902105 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:05.902473263+00:00 stderr F I0120 10:58:05.902404 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:05.964588150+00:00 stderr F W0120 10:58:05.964480 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:05.970746117+00:00 stderr F I0120 10:58:05.970693 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (68.722345ms)
2026-01-20T10:58:06.902864333+00:00 stderr F I0120 10:58:06.902768 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:06.903284193+00:00 stderr F I0120 10:58:06.903244 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:06.903284193+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:06.903284193+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:06.903371236+00:00 stderr F I0120 10:58:06.903336 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (586.945µs)
2026-01-20T10:58:06.903371236+00:00 stderr F I0120 10:58:06.903364 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:06.903469218+00:00 stderr F I0120 10:58:06.903430 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:06.903531350+00:00 stderr F I0120 10:58:06.903506 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:06.903969721+00:00 stderr F I0120 10:58:06.903905 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:06.939951945+00:00 stderr F W0120 10:58:06.939867 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:06.942546941+00:00 stderr F I0120 10:58:06.942492 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.124644ms)
2026-01-20T10:58:07.057686455+00:00 stderr F I0120 10:58:07.057089 1 task_graph.go:481] Running 42 on worker 0
2026-01-20T10:58:07.363115713+00:00 stderr F I0120 10:58:07.360347 1 task_graph.go:481] Running 43 on worker 0
2026-01-20T10:58:07.706303851+00:00 stderr F I0120 10:58:07.706240 1 task_graph.go:481] Running 44 on worker 1
2026-01-20T10:58:07.810513088+00:00 stderr F I0120 10:58:07.810434 1 task_graph.go:481] Running 45 on worker 1
2026-01-20T10:58:07.909832541+00:00 stderr F I0120 10:58:07.908264 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:07.909832541+00:00 stderr F I0120 10:58:07.908547 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:07.909832541+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:07.909832541+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:07.909832541+00:00 stderr F I0120 10:58:07.908602 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (348.159µs)
2026-01-20T10:58:07.909832541+00:00 stderr F I0120 10:58:07.908616 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:07.909832541+00:00 stderr F I0120 10:58:07.908665 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:07.909832541+00:00 stderr F I0120 10:58:07.908720 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:07.909832541+00:00 stderr F I0120 10:58:07.909012 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:07.959213885+00:00 stderr F W0120 10:58:07.959008 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:07.960604580+00:00 stderr F I0120 10:58:07.960557 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.93733ms)
2026-01-20T10:58:08.807487341+00:00 stderr F I0120 10:58:08.807404 1 task_graph.go:481] Running 46 on worker 1
2026-01-20T10:58:08.907286576+00:00 stderr F I0120 10:58:08.907201 1 task_graph.go:481] Running 47 on worker 1
2026-01-20T10:58:08.909209215+00:00 stderr F I0120 10:58:08.909136 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:08.909983295+00:00 stderr F I0120 10:58:08.909825 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:08.909983295+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:08.909983295+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:08.910046007+00:00 stderr F I0120 10:58:08.910018 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (907.344µs)
2026-01-20T10:58:08.910105208+00:00 stderr F I0120 10:58:08.910052 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:08.910247192+00:00 stderr F I0120 10:58:08.910175 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:08.910301363+00:00 stderr F I0120 10:58:08.910272 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:08.910843867+00:00 stderr F I0120 10:58:08.910725 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:08.934783674+00:00 stderr F W0120 10:58:08.934719 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:08.936890478+00:00 stderr F I0120 10:58:08.936856 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.80265ms)
2026-01-20T10:58:09.556280420+00:00 stderr F I0120 10:58:09.555833 1 task_graph.go:481] Running 48 on worker 0
2026-01-20T10:58:09.910857017+00:00 stderr F I0120 10:58:09.910795 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:09.911077133+00:00 stderr F I0120 10:58:09.911029 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:09.911077133+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:09.911077133+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:09.911129684+00:00 stderr F I0120 10:58:09.911093 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (313.028µs)
2026-01-20T10:58:09.911129684+00:00 stderr F I0120 10:58:09.911112 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:09.911176895+00:00 stderr F I0120 10:58:09.911149 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:09.911201166+00:00 stderr F I0120 10:58:09.911191 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:09.911462632+00:00 stderr F I0120 10:58:09.911411 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:09.935772330+00:00 stderr F W0120 10:58:09.935690 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:09.937415691+00:00 stderr F I0120 10:58:09.937250 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.136854ms)
2026-01-20T10:58:10.057708707+00:00 stderr F I0120 10:58:10.057659 1 task_graph.go:481] Running 49 on worker 0
2026-01-20T10:58:10.357782649+00:00 stderr F I0120 10:58:10.357703 1 task_graph.go:481] Running 50 on worker 0
2026-01-20T10:58:10.511220226+00:00 stderr F I0120 10:58:10.507510 1 task_graph.go:481] Running 51 on worker 1
2026-01-20T10:58:10.911223467+00:00 stderr F I0120 10:58:10.911122 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:10.911515934+00:00 stderr F I0120 10:58:10.911477 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:10.911515934+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:10.911515934+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:10.911555875+00:00 stderr F I0120 10:58:10.911537 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (427.751µs)
2026-01-20T10:58:10.911555875+00:00 stderr F I0120 10:58:10.911551 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:10.911663018+00:00 stderr F I0120 10:58:10.911613 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:10.911705699+00:00 stderr F I0120 10:58:10.911680 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:10.912010816+00:00 stderr F I0120 10:58:10.911954 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:10.945160159+00:00 stderr F W0120 10:58:10.944512 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:10.949093698+00:00 stderr F I0120 10:58:10.946512 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.956238ms)
2026-01-20T10:58:11.911656928+00:00 stderr F I0120 10:58:11.911572 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:11.912221002+00:00 stderr F I0120 10:58:11.912161 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:11.912221002+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:11.912221002+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:11.912319065+00:00 stderr F I0120 10:58:11.912273 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (711.898µs)
2026-01-20T10:58:11.912331065+00:00 stderr F I0120 10:58:11.912313 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:11.912469048+00:00 stderr F I0120 10:58:11.912414 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:11.912548130+00:00 stderr F I0120 10:58:11.912523 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:11.913047254+00:00 stderr F I0120 10:58:11.912983 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2026-01-20T10:58:11.947956760+00:00 stderr F W0120 10:58:11.947874 1 warnings.go:70] unknown field "spec.signatureStores"
2026-01-20T10:58:11.949436458+00:00 stderr F I0120 10:58:11.949388 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.075563ms)
2026-01-20T10:58:12.608975010+00:00 stderr F I0120 10:58:12.608784 1 task_graph.go:481] Running 52 on worker 0
2026-01-20T10:58:12.913397563+00:00 stderr F I0120 10:58:12.913311 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2026-01-20T10:58:12.913737472+00:00 stderr F I0120 10:58:12.913705 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2026-01-20T10:58:12.913737472+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2026-01-20T10:58:12.913737472+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2026-01-20T10:58:12.913808633+00:00 stderr F I0120 10:58:12.913777 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (481.072µs)
2026-01-20T10:58:12.913832484+00:00 stderr F I0120 10:58:12.913814 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2026-01-20T10:58:12.913916136+00:00 stderr F I0120 10:58:12.913877 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2026-01-20T10:58:12.913960987+00:00 stderr F I0120 10:58:12.913941 1 sync_worker.go:479] Update work is equal to current target; no change required
2026-01-20T10:58:12.914415379+00:00 stderr F I0120 10:58:12.914354 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38,
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:12.948575466+00:00 stderr F W0120 10:58:12.948493 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:12.950282800+00:00 stderr F I0120 10:58:12.950233 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.416415ms) 2026-01-20T10:58:13.606749554+00:00 stderr F I0120 10:58:13.606657 1 task_graph.go:481] Running 53 on worker 0 2026-01-20T10:58:13.606893257+00:00 stderr F I0120 10:58:13.606851 1 task_graph.go:481] Running 54 on worker 0 2026-01-20T10:58:13.914557612+00:00 stderr F I0120 10:58:13.914460 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:58:13.914995443+00:00 stderr F I0120 10:58:13.914912 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:58:13.914995443+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:58:13.914995443+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:58:13.915145997+00:00 stderr F I0120 10:58:13.915109 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (661.267µs) 2026-01-20T10:58:13.915165017+00:00 stderr F I0120 10:58:13.915141 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:58:13.915292591+00:00 stderr F I0120 10:58:13.915241 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:58:13.915363562+00:00 stderr F I0120 10:58:13.915330 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:58:13.915846855+00:00 stderr F I0120 10:58:13.915776 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:13.961010743+00:00 stderr F W0120 10:58:13.960903 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:13.963948247+00:00 stderr F I0120 10:58:13.963882 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.734868ms) 2026-01-20T10:58:14.009867443+00:00 stderr F I0120 10:58:14.009772 1 task_graph.go:481] Running 55 on worker 0 
2026-01-20T10:58:14.908176161+00:00 stderr F I0120 10:58:14.908095 1 task_graph.go:481] Running 56 on worker 0 2026-01-20T10:58:14.915781643+00:00 stderr F I0120 10:58:14.915722 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:58:14.916217705+00:00 stderr F I0120 10:58:14.916178 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:58:14.916217705+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:58:14.916217705+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:58:14.916319468+00:00 stderr F I0120 10:58:14.916281 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (572.266µs) 2026-01-20T10:58:14.916319468+00:00 stderr F I0120 10:58:14.916312 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:58:14.916649446+00:00 stderr F I0120 10:58:14.916391 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:58:14.916649446+00:00 stderr F I0120 10:58:14.916470 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:58:14.916926603+00:00 stderr F I0120 10:58:14.916864 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:14.955980835+00:00 stderr F W0120 10:58:14.955898 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:14.958557571+00:00 stderr F I0120 10:58:14.958480 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.163171ms) 2026-01-20T10:58:15.917226081+00:00 stderr F I0120 10:58:15.917137 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:58:15.917505808+00:00 stderr F I0120 10:58:15.917462 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:58:15.917505808+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:58:15.917505808+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:58:15.917550719+00:00 stderr F I0120 10:58:15.917523 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (398.66µs) 2026-01-20T10:58:15.917550719+00:00 stderr F I0120 10:58:15.917546 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:58:15.917642281+00:00 stderr F I0120 10:58:15.917598 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:58:15.917681052+00:00 stderr F I0120 10:58:15.917659 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:58:15.917996660+00:00 stderr F I0120 10:58:15.917945 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:15.969920810+00:00 stderr F W0120 10:58:15.969821 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:15.972987237+00:00 stderr F I0120 10:58:15.972941 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (55.389527ms) 2026-01-20T10:58:16.917724004+00:00 stderr F I0120 10:58:16.917595 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2026-01-20T10:58:16.919112760+00:00 stderr F I0120 10:58:16.919046 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:58:16.919112760+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:58:16.919112760+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:58:16.919258603+00:00 stderr F I0120 10:58:16.919201 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.620042ms) 2026-01-20T10:58:16.919258603+00:00 stderr F I0120 10:58:16.919228 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:58:16.919348766+00:00 stderr F I0120 10:58:16.919292 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:58:16.919377216+00:00 stderr F I0120 10:58:16.919362 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:58:16.920268959+00:00 stderr F I0120 10:58:16.919831 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:16.922432264+00:00 stderr F I0120 10:58:16.922327 1 task_graph.go:481] Running 57 on worker 0 2026-01-20T10:58:16.964369729+00:00 stderr F W0120 10:58:16.964247 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:16.969564031+00:00 stderr F I0120 10:58:16.969512 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.274457ms) 2026-01-20T10:58:17.009305771+00:00 stderr F I0120 10:58:17.009224 1 task_graph.go:481] Running 58 on worker 0 2026-01-20T10:58:17.257906305+00:00 stderr F I0120 10:58:17.257827 1 task_graph.go:481] Running 59 on worker 1 2026-01-20T10:58:17.857863664+00:00 stderr F I0120 10:58:17.857797 1 task_graph.go:481] Running 60 on worker 1 2026-01-20T10:58:17.908138071+00:00 stderr F I0120 10:58:17.908038 1 task_graph.go:481] Running 61 on worker 0 2026-01-20T10:58:17.921832959+00:00 stderr F I0120 10:58:17.920164 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:58:17.922420314+00:00 stderr F I0120 10:58:17.922164 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:58:17.922420314+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:58:17.922420314+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:58:17.922559657+00:00 stderr F I0120 10:58:17.922461 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (2.309148ms) 2026-01-20T10:58:17.922559657+00:00 stderr F I0120 10:58:17.922535 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:58:17.924347902+00:00 stderr F I0120 10:58:17.922621 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:58:17.924429844+00:00 stderr F I0120 10:58:17.924412 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:58:17.925229345+00:00 stderr F I0120 10:58:17.925088 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:17.961296411+00:00 stderr F I0120 10:58:17.961217 1 task_graph.go:481] Running 62 on worker 1 2026-01-20T10:58:17.966740219+00:00 stderr F W0120 10:58:17.966684 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:17.976784484+00:00 stderr F I0120 10:58:17.976728 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.226547ms) 
2026-01-20T10:58:18.308636924+00:00 stderr F I0120 10:58:18.308546 1 task_graph.go:481] Running 63 on worker 0 2026-01-20T10:58:18.923256095+00:00 stderr F I0120 10:58:18.923154 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:58:18.923769088+00:00 stderr F I0120 10:58:18.923728 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:58:18.923769088+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2026-01-20T10:58:18.923769088+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:58:18.923872970+00:00 stderr F I0120 10:58:18.923841 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (708.517µs) 2026-01-20T10:58:18.923883361+00:00 stderr F I0120 10:58:18.923870 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:58:18.923991713+00:00 stderr F I0120 10:58:18.923951 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:58:18.924079556+00:00 stderr F I0120 10:58:18.924037 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:58:18.924859036+00:00 stderr F I0120 10:58:18.924790 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:18.982292954+00:00 stderr F W0120 10:58:18.982204 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:18.997870470+00:00 stderr F I0120 10:58:18.997768 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (73.886608ms) 2026-01-20T10:58:19.909083215+00:00 stderr F I0120 10:58:19.908360 1 task_graph.go:481] Running 64 on worker 0 2026-01-20T10:58:19.924150808+00:00 stderr F I0120 10:58:19.924055 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2026-01-20T10:58:19.924578298+00:00 stderr F I0120 10:58:19.924532 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2026-01-20T10:58:19.924578298+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2026-01-20T10:58:19.924578298+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2026-01-20T10:58:19.924717732+00:00 stderr F I0120 10:58:19.924619 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (578.984µs) 2026-01-20T10:58:19.924717732+00:00 stderr F I0120 10:58:19.924651 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:58:19.924804305+00:00 stderr F I0120 10:58:19.924726 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:58:19.924817015+00:00 stderr F I0120 10:58:19.924799 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:58:19.925201235+00:00 stderr F I0120 10:58:19.925143 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:19.950943069+00:00 stderr F W0120 10:58:19.950863 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:19.953263708+00:00 stderr F I0120 10:58:19.953198 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.542166ms) 2026-01-20T10:58:20.385466356+00:00 stderr F I0120 10:58:20.385162 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2026-01-20T10:58:20.385466356+00:00 stderr F I0120 10:58:20.385279 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2026-01-20T10:58:20.385466356+00:00 stderr F I0120 10:58:20.385365 1 sync_worker.go:479] Update work is equal to current target; no change required 2026-01-20T10:58:20.385722562+00:00 stderr F I0120 10:58:20.385669 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2026-01-20T10:58:20.416179426+00:00 stderr F W0120 10:58:20.416106 1 warnings.go:70] unknown field "spec.signatureStores" 2026-01-20T10:58:20.417538680+00:00 stderr F I0120 10:58:20.417501 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.349771ms) ././@LongLink0000644000000000000000000000033100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log.gzhome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-vers0000644000175000017500000244707715133657716033153 0ustar zuulzuul‹Î_oi0.logäYsÛH’€ß÷W ôâî]ƒ®ûІzÇÓGlÛ=ýäP€@QÄ 8(µzbþûf 
Q¤Š"ˆ’<m“H¢ª¾ÌÊÊ: ˆp)Ó/XŸstÎäˆi!ZþBçyE™<÷¾÷~B S¯%G9Ñ^ýƒX—£ËìœÐ¯Þß’EQšü&/â,ýmnò Ìr°!Ÿ ÂÀk$Fs4ºΩѣ (ÌlœÜŽŠ27Áldí7×þƒl©ªT SÎ5sTU*ŒYU51șĄ%ÔÎÖòÕû25Í%oyÍ»Œ¯Má1/7eÿ‚4ò‚$Én o ¢¨ð²‰&YxåWæfTÝåj16~0 “_ÃÝ¢ì&-ã™ñÊ,ih¼¸ð¤*FÞY^”^š¥þe„f²Hl ãá?qW•a2ct%þ°è¿ˆ(þ=ÚÂKˆ`’*,у¼@ŽKÉÄŠWn&–G–[T”cÐoNÆ<›/’ 4Qã?¯ñècžýyëMòlèÊéb< ³Ù»lnÒbOÊwa›´ô/³wa–NâËwq ?¼3‚ŤAr]MñnØ2o+ý`±½ARå®I©>¤Aß› \äæør¸fQL –;›¥…Räfý­ªæ/Á¼nÔØbœµš±®»º\Y]©éºº‹9rd‚qbìÝPáU½VpýiÍâÔÂ+è…Qd¢í&N)áš(ªu¡ÑÃLü®oÐ(8eú¯tµ äˆ:¢e+w=\Ó4§ œs5Ms¤Ô–!Gó¯Þ¸ÀôÒ`/[”p5ì—YãÔÛM†!D¨&Œ8KÐEG_½ ,Íl^Õ¤Ìjßœ¯Æ•Úa·L»&î7°Þ5FÛk*Á‰ÁB5•œ ΞÜ/<¢ºLèý‚? ÒàÒD¸„æ<è Q)Að®Jiˆ~_~'qyëAu¢zžqL¿)oçæý›ß׿Ü‹@Ûy™â h((²ôý›ŸMùÆ›™¢€†¾ó!.àf•©ß¤ 9çÞux1yyÙòçÞ<7× ÀbÙ ˆ$>Ö¦˜›ˆµDÇllcÀ„PÈèÍ·Çr”Bìâˆ[Ñ¢.òª¹lÀû:\<²t zÑâÒM¶ÕþB¼¿Áõlö œŽù|›†ßWwú®¢l_+ÎÏ?€“,ˆŠ}Ò2ƒ%µlé®Ãºt¯˜f‹$‚ ²m.›Á¿ËcRzI¶2’âÜÛ^éV{ϽbÚ&–…³QSrë-R{Ýú¹)øZû¯©5©. D©ßDUí½¼ªø¨úk´ê½àV–öêèGšÙ.tCTÂÏiÊÀö%V»­t)'~hd0²‹0åCà-h–hß!eGý$CfQÎúIŠÅýq— wïDÈÙ&HqºôEo½EaU ¾gRË>Jïì ÆÉú>‹ùŽªATOÂΪ)Ò çÁ­UL5Ð!ùÕû>Ø ,æÔ¯¹Zmgïζ—Ì €ð¸Jf„¶&••‡­Êì«÷wûéPÿ¨ñ“™€Baâ÷¯ÿÓèüìg«´³·Þ¯tç9ÌæÎÏvj°;?[þý§çg‘ ã#˜hŽõYˆ¤hnü ¥ÔÆ(b ûÝÇŸšáäü,̲p•Y5cû5¡O¦ÈyhV’D"&áÂ÷±I¢A9=?;û÷·Ð¡Àž{o~… H–޾ø¹2Ù¿7&ûÆ óðÂŒ£Gtâ³ÉÄøLCUU4>šŽu8FÊ€W ¡qÉïЇ@¦£NK`ly›%0ƒÌwy8K¨ŸµÁÊl5ø3S`÷M=à|ÛÌö¯Mc)ù¢š¾Ÿ{gÁ,l—Ù‘PÒYMÁ_Usf‹ ‹:ŒÃù¥þ ¡Y^zI †ZûµñÙ_¾|ülÃ[4ªþw®‘ÖÛë"1·I…«.ãÖ$¨…Œ`p8æO0Ūû|£QTü¹€ÿoX©¿†.^^[^ª=ô¯YãšGanƒYræýÎun¹Îc;oH«!©±Ì3¯ê+ÞYãàý`&qZÐgu¿ðÎîWŒêÀðŽË>1øe°HÊ3;X_oÝçÛjE§±„;¿x×ø+üÕûl‡®4Õ’i§³9Bz‡þÅ„Ó÷Åææ9“ßÖŒ-ø¡ÛR‚°Wñ ~+‡•ÄàÇ€¿žq´© À=„•S-üÁAæuKº†½QÔØqj`F[É&™,'Lþ4¾œúÁ5DšËàw‘òtíJL’t…<ȸ÷£ûêÿK]ý¿ëݯì¼_¿H¸ Í¥Ó°ET „þƒ¹îݹ쇑æ&Ì.Óø/®¯ƒda¬FîÕîÌF¥N}lùá{W)„ûõ­ /È¡jj{ÛXÀÛŸ!ÆLÃÛ϶Åe|mÞîxæŒ2íV2Äâz %2„ôßÉöWÇJ *¨K  Ý]OáÕb^ôîΚržÊqSy*´.cœtw[îð#li†¹Û†íì¡3ÌM ÏÀu̳$cS ÷ß)ô©¬[ØÝ¤” » àbúÅ>ÄØüô/d€0WÖŒ¹,Æý*xÈ©]ßkh˜¶a„‘K0'Ý9¶8äAQæ‹j᨟¶QÞ¹3L´Æ 9ÖÜ*9‰Uo°‡ðdûN 3ó ÒÒ¥Vk­Qv­ÖAüWÿiìœI¡rö&•ì {NM´HLÞ¿×ZõT‹KI ܉X`ªû@<„¯rb~!njoeJÎDÊÄCõÚiö‡- “a×ÊÈQ†ƒmÊ0²„íßC­v´Ê-µ<¸W’L ådk tÇvóíð#,X!¢)qº )?Ð÷¯jya“Ä¡ñÃÀnÈ¥Ë ü ´+Œ«þ·õFÿúV½íÛ`©9T:#>©©>œM¥|?‰'&¼ ÓTµòªY2뿳ÛBž®—+ˆ—í3.Æð ÓÞ€<Èù…Š" q…vj“hÕ_Ä…÷ÖmA›SÆeNÚL³ƒf¤äÎ9LÜñ„¾H#¤ÑNë9Lºà9X¬ñÄ–Jg ÕååAŽàCü]ïXþ<Ï®c(¯ª^B U‚–ÙÝÒ 
)üæ¼êè0ÉÑZì.«ðõm>Õw9kEôîF„Ûùëû­fL[â™võWµJ?”*,Ào»"CRõ£¶±Ôå‡Ùlž¥öèý®øpõ`Ãn¬Ê¼‹|P¸àH©°k"cå(>î³²ôà¦P J#â<…eå$}ejø p§ ã'F{ôMøëRÄe8R Œcg´cå(y]j˜g7&¿Ð5aÌÄ­ LÑëRÅu1Ÿš!Á’kg¤jåx7½â¨e°v„Óé éícIF–nd²ƒà¾Úµ+–+ÿ³`Þi€¸ü.lBñzµ¡.r83¬GIád !¹è€)Y®Í^a˜-ÒrÒϵØwµÔÑ\‡*0eî ƒ2¦† Nñx€£ºÏp<5GXº‡,ª í„çO³ìj¹òï0Ðãn”7YÆfÜ—AÄÐE×§ù8/ò,1›3m{ad™gyüWûÉǵ‡­ïô ~|*Ð)‘—ú©ñ­¹= ª°¢ÚyÄßÊñ.2ØE8ƒAÞ®­¿îe¬Âð‰aB9SÚµ r„uFuc­-¸év‘íøuÃyÉ%RÂålÔÊhOkúlÝç]¦&HÊi85áÕóäMŸI²úi½{WŒš q§#ðÙܹUríô¨Ç@¯VWþˆ5øÿ©ïqëũ𠀼Gð,ÊsŽÅÈ™º ä(é2F˸M¾6‡ta;vn.ã¢Ì·"þGÄQ`3|ÿQߥ^pZÔÒÏiS–a^¯ÄV;›áÁÁèÂ¥àÈI—à.vâ1¾XOÔîdóbÅñÃJêu~"4ЏYrÝÁÊ'&Ë«Ç 6mywc˜úmóÜWw{pঈb{LÞ¸Ââ@lâJ¿m'ò®]iðãÆÌf‹¤Ÿg,¹f’ ;#\›Éø Å*?rcó'wô±úvšÛòŽ>p ïþ ý⊚®4›M³²ÎÌwtÛ;äÕYÛwl í°Úc˳4÷ã!(¢B3å@ŽˆCl¯½Y“ÂÀ:µÉ:{+­ßø¡G8¸V2÷MçÖ[ÿ6PAÊy˜˜i"úgvÿ1Ú“ç§´–Œ9ùa)pßü¶>xêa$¥Êy`ä$A½ÜÌ®Ûm¿Ýòdö$ßO`À6>4×ÌK³œ#¡ÊД+Í4q‚¦º¿Î~:Уš:϶‹êÜðÐ{õ§C]ap·Ümê Ñc"ö‹4˜Ó¬z+V™gIR?™Ú„Ily’»9ÈíO×iêqHÞ’v”µg˜Ä0’9Ó’€U½Óá«„ ÕØÝpsvV¾-m_`TbMˆÓœ\õ LÜÖ,VËíÞÂo—ø˜ JKgˆiOJˆ¾ñÉÖjÎnh.ìtع}¨ØyÖä`j3(™x©’ Û”mNÖàÂ:g½:{¾~cËN›;䱑¦Ä=ÍŽÀ ¥”rv<‚űӽ{(î|2Aº˜?õÎApSøf\TgKêhb£|„&´x5š° øý(.®NU¿2eL )C+C øÊ ôµ(ÃVìzúã$ ¯NVœ¼}4{˧­Å^³‚ê¼Á¡a#öP„”¯EÙuœ—CëÁ>J¥[ºóµ‡ÓÕÃJözv¬3ž®9Q{¨æõŒ-Õ4:™çÙ813?2¥ {U§T ÷Z=È!|èi˜}ÚG`Ú‡%á»æpÄ2éF×Ë…¶°VÖ5Xæýè2X¼FD;!s˜Üu¹·-'Ûþýˆ&TCté$jOgh¶qZ@Êö.õ—_FA\åtý|b×­ï—´|±Ï“lh€ßæZ æ&ÞDtFœù-Ò«ë]³n3^òT”µbˆ 7eMI‡”²ëG,JþÔܿܿ‡ê6[¶" 3"äJTÉœÀ` IÑëÙϺ¸{.¢KÑÔ¹«Lìvæv™%3Ëo^ÌVíµ2Tu s÷ëÒ×Sú ¼²¤ÝK¬lÕ—ûbLRhc“œÁƒäT·kÒ#Y•rnƃÕú²ÊÁñ%Ùå‰ÐݬM„1æÆ$—·9&\£]µwäW¦œ'ä鋾¢ÜV1­U¥ÕwC@•ZJ.¸ªT‡Nšs`·wrÔVÛ5Ñ-…­·-·\ë¬æ‚"œ`õÁÖZš¢,ªÿú¹™g¹M0ùˆ@ê üðÓòw‡ÁÚkÎÅîVªƒ’ùq\{=tw<¹ÝÿˆÑo?ÿòTñ!%6©åÎ1~,ÑtðúN'o´Ý1‚…9‚ÔšL;i\k²¿g‰éôí‹OÄOq¥äüäaOi6üh?¯«½Ãì4€Jµ¢Ú ”|Œoc3SæqX\tózÕ½IúM¹¾Q.Žñ‡|koÒýôq.ñÕæLâ@H©PÌ•— ’ãRTtžÉù; TLíÉ-×¾-ÈÁÔ¿?=•špà®¶¿0ÌÉç®þr‡½Xg«î4ß&[èßÝÝ/ò$üÜ3Dœ{! 
^íò~?¬aò%­Þ?*ºáŸ6ó›ß¶ý­‘…¢Èr$P€èrª™o±¼“bÄÉ/è”xl ¿í·‡å°bQXÀt&„ €›RÒ.hWg£~&dzxq*<Ô¼šÐ{Á–ücgŠ’Íå<(DðÒLø|¯¾¤Í`º;lw žÔâåüñÛrµ9Wù°ƒùîØ;/i#4)‚m¢Á'ÂGÆt0ÊG…Éø¡Çy¿“+—Èû‰?©a„ÍfÎâ[­ø¾¼_¼„¬ì¤ºýŸÏ«¡úãû°•˜Ê¯Z)A6Äâ2ƒ¦¤ì-‰Ðxå–«Îg'›ì9»)Û¹|¸øÃˆ[¬P¤–C%Ѻò߯3’(‰*»•ª•©çéFÓèÆ–qm4+]÷°{Ž Ž[9%‹˜Ê®Äº0±þŽt6¦ˆdÀu3ɧ‰3Âñ¸&ÎèLDr|”„¨É¢œàEL¤crû ãX˜Sg€+0&¦ƒ£Œì2Ö‘C Œ÷d¸–`§lñ‹#m5ð8™ÜqjÓÛÄ— í‰bã™åAf¹2B(ãTD”ãÒå¸<\Rtç‚™¸*ÀzðôÕv;à‘&Wºâ#^C9äU(+¤c‘ä”.µÊ–;SDrè<6 (–—7Âc¥‹.w$§D†sï­O:®OvJøw¦ÔqE‰µ ¢X®²þÙ6ì^í@Eƒ'’+ÿŽÕ?Çh©‹ãÁ@©À©çL9á -bÙ@ÿÇ”óðü†› o–†G²=›nó¬·— N~±íŠ+ƒ$Rv••‘ è­\°wÜãÏ;w°Ö\÷;¦à1Y›1l#ñáu>%L˜S#9H+b^SRD¦²^I<Œ¶ן‚'5Ã.V=6iE Ÿó(Å.M*†+Š sÜÃè!xæ‘‚G̯ðÔ¹<Ô®îf[ÍP—oꔈG±,þ ËpX­p§¥M,œÑèÊ\Éä묱 …á´Œb!{ð9Rk¥ãÜA kbʸou“k¡v=™®|½!SjAG‰QðR9•Çûg´Z4Y)ñÙÉõ±Äç.8S;àq:GeºÇº„Ão£qͤˆqj4(a²DFsN@¯¥,o)xd©0<^,ëEsrq9›4T2fµbÐG/~¹äQ¸Öhž¥_ ÞY ºØ,ùòÁ7à^žVã6ëÈSÞrYª˜Ê‹A·–=¢ ô“ÁÒ•7ƒ<ªÏwPcs¹œyRC›,A­…Å7ü†•ä¤wˆ~‚ZÈâ­×ñ(µgñÎÄaIèÌc[Á÷Ï®Á •ò4᪵‰-^þ5U©ó¦‹åtu5žUM³>5©wæÉ5W4í” Æ×ÁÈ _’“ O#ƒÇGC)xS= r ír6ß!@ïüðCŽE˜ct ¬.Ê)-“«/å²ÜÝQj³[Cõ1)u=¬R xLê+¯KgRçÕWöÉ‹z5žœLªæìtQ-'8j6Ô‘+­R6XË’ëÂB¬±B 1Êñ†œžãdÜãœáù‡çÎ'üñ–âԩ‚DÒ‘¯ÕBdžÞ ÛC¿r˶jäf-{'äj\ÔÕàCq£xô¡VïqÞáQÒ)ˆãØóZ‰Ÿ¤ÛÏ ­Öáñ’8aµ¶q¨Tà kËÐr«2Pùe¦YÌÕ£ïcéÁçH†›8H½f´áÊïÖד·=rpY 2ˆKSÚÉØ46hæÉuL2›ðîXU uv¾ÑNÁcó¼H‹2hò0ht 0}šè2Ø'4èˆGª\%ßï’èÿµÆ$I·\Ú,\Z!YrÛè§^>!C <©÷|:_¾˜OïuHÙÀåÚ+:8 qè讎ÅLEIY®ð a·M-UBŒDš)u=œ©$à‘2¶ùµ^6S_0$tC#q¦U¸ÛY+' ÏÙG¤BY¼?\"ïœâE59ŸÎOªñ/äuE„2ë ÊQˆÜæhö¸w@›ÙqÕÇ(ïŽGäí+Ÿ’´¡9ÄxÚ1ªhÔÂË Ç²¼p?° ‹Gž|1Äã˜UAf9ú2#\俢—ù²v÷fI¡ü©eáòŸZ¶Ô}®V>þ†0wE»tØÊq‘-r,bª¸µ•†;·rJ±Ì€3#rÇ£Ùó­zr¯Eÿ`¡A)µS”c{¿XB`\9!â]ùr!·Üì0¤ÀSÓéÚ{Ë«e]ßÍMœz¢L(a×F=ì'úõ=`š¯÷ü¢½”C!>(Cz9Xv¸ª‹‹ÙU°Ñ7þ~ΕÍ`|¹\âÚßr9¯Ngõ`µœUó ý„‚Ëú_—øµƒgŸëÕú3š¡ÿÇ+ÿhMÓÍŠãÖ‰qîÂ-½ 4|ßãEKгM«]*aÙJšÜó Ͱeô~‡pħ‘‹@׸ª|…³2aG¯k Í;¹òGŸþ9T9ÚÚ8Í]f3èvùÛ˜ /ùtVÇ$ØXôb¥â–å¶‚¾ KÖCК‚GÈ|×Å:×½pL€Gw&Ð¥e2^*ˆTˆògIx€g=²è¨{ 2ߨ¡íèfKº—ÓŠg=§È‚ݹƒÑÊ•On ç*‹¹%ÔÊ¥^ œýíÀ£ òˆÑ-h€H´‚¸ÑÎÎwhÙ'pÍøN/:xäEGÚC”·º×Ë@è!“ø)C”Aä$Çgvœ{AçÌ*)£Ð³ÅW¢±–[­œ}¤w>dÔåŽk©´0 8z-{ÉóLWA`VCLG}OÊ>áX©ãx„%=Ã/¶9¹Þþ5×¼zF]Q…À¸â›fŠs§yQQ^(Ù•Ï(RðHÙ‡§ØÊlð†8"ÄåÕà–žÅ41ܺ~2ÃË«"dùN2ix’¯1î1ín²\<§¡dC …dÆEÞÇ‘3Ìöè; )á\ñ.)x8c¶ïq—[äV²¶~Šè"}ÑëøBÊziå3”<¸èу,&-¤S|´r‘» 
$ÇÑšîIáÊ–7‡<ÚÀo\Ó*ƒ´:FQš…˜û³tL©á2òêá-Ë›G Óc¼q±ðý‚é2v(”¦|(öu$'µë3ÒÈß•ÏúMãUÿ¾âšÕÐ1¡2&@º˜ÎéõTºñ¯ B¯•ù8x¿ª–«zrcÝ~bЛ‡Nد3¾ÍE=ŒÏªùçº9îÆåxPÍ'ƒ‹êj¶¨&[Ð[&á£9’SL?Do‘û7u3]"úk´Þn’˜¦ÍàW>üñbR­ê/¯–ã³éª¯.—õèAÿoû7£#ʳ2üä;Êdý벺N/ÆÔnkõb1¾À™ÕUSÿWsV¡å sõ§ñXžÖŠ.뛎óèS5kê?öeGmwWÒ K-þqR£‚3JïéR~hŸëÕ_óE7ˆ>Û‡ˆÝŒLZBkC& è5WäÍÂ{!«ÐðèÙr1Ÿþ{m.ã÷4/?MëÙdø5å7½6«góéìy'ðò+}úÓŸ¼Šïý§_þVÏë¶ ÉÚj:£A~ö—Îú:ðßøüû}ÌUŸA³z~ì!(?| #¸N¶ãÁëå¼á¼±ãÁ;ês2žÎçhµ¼¬ÑP(µeýðÝXÒ·8—GGú§SÕëÓñ?åKç»fWO´ÄOßVÍê‡åâ3NøfD= ‡oà3áÿñ÷j~Y-¯Žø[ûÏÿòǯö+üFD€öý®µÍ/‡µçß½­VÍèÅ‹jŒ>¬¢U«áxqþ¶ZU/Þ}ûþÕ Ú”D»‘ÿì5ÚÛ¼ž5£ÿûجèåˆí?<§S4„É5Ç4Ž?´ÃÙúˆ,áíýO¿¼_Õ££î3úu=ÁÇ|p<Ý/ü×Ýú—?w¤ý|Ôæ¶áåYÇUkÖÏôæñ39§WÈàÚß»ióKãÝÕµUû¼?ÏÒq7±G '÷pHß.ÆhÁ팢 ñaYÍŸ³óí¼ôӗߪÙl„ÓVðOÊ)§\Õ§cü«ú÷Õ¨üdV*ÇÈLÆ8ïýŸÒ×Ó—ÑŒn¸¬Ùó?Æëê¢:ÅÙ¼šÖÍš)Ý||ucIþwHîëváë¸|ø÷_¾ö““õß AoÿË«/Gÿ}9‘e½^̛Ŭ¦ßÜd6¼nËYág~¸ÞÕŸÑ.¯üsï5èÇïÛâW¯~øŽþõÏnÙ{;ýT¯Æ³ú{k,éwçzËÕÅŒöùÇ›’{‘—ÿ™/~›?N…÷ß½ŸWÍÙbåÿ9[\N^ßô¬\ƒÑþݪˆþsOõ¿Ckù|¶Ú@Å?p‡òá’¢²1ïñsü™~<­–õy½jaÅ ûƒ¦ÒÅl:ž®fW·a­uo[V|E/R ÒŒÖU'×ûn×UÆ>Îk\Ì1Æõ½þi£Õõ ÿ±¼ÂÈ´[Ö^ú)8˜´áÓËÿÐUd0C¯òò«÷ømKƒŸÙ¨ÑÍ"ßÐ…Á·ŽÇ¯ÇôŒë%ùrŽGh\“Åñìñ@¹Û5Ù»²ç7qÃþ&…ßÄÙÃo:ì8ÜwÕ÷—£ÍÌ­Û¡¤¹—[ u³ÁPõ›é|Úœí·<ㆸà6þ¼Ù¼sB ÎÆæÊáÒr€C–Ý6€‚®„€ˆäe´r¬ø;ö9F°H®t+'xg,ôÝòyJMRãŒQÚª˜e#¹è󠥄Ni¥Ê›E Ãú?m¹Cm ÿB0:8Ò¾iH•VŽóC8Ž2ºúüd2“D<Éí3̼ûoË\]ë”TV„¯œ{9PLÀ‘SGkSÞXRð}8ŸreHÒ|(¥âFŠp\âå@Y8 {)¦–†Òw+ñÈ¢ù¬7,žœ^]}’É´Š ix¸æ#ÉI+ ;–Âè /o xë%6ÙHª’êÀIkÍ#JЖH¹^ÜFY-´îÁ4Rð(ê%“úf¿»l–cªIí ¤h1Ô¸}•܆1½œr¶löû*p_35ªÜÅ—Žö9&V\´“Óýø‹­Ì4ÐFé3afINëg+Ó‡*ųâÓðHΊ{Ž;Lª0“N$Eî„3®¼ÇHƒ.¸4\oåÊ—!îðXd âx„Ô½yŠ;Œê £Ü#(ŠŒh€rZA"AÁ¸Õ Ãmw[9£xy£ Ûh¸ä…ÛQvrªè}ŒÉnoIdQ‚Å0(Î"*ædÙ;2¥`ïˆÇõrƒî.›6ȦÁ`Ø*ÜGÐ e–†½áÓ[8i¢ð)b.o „Ç€”.ŽGSÔÜ”ö4º Ž£u‚f1v\€-ì á– Ê ÙË>â.*t¢'‡±K§„ •0órÂZþŸR 2À¸V<ò’儱ºŸ˜(iàvÇoùŽu¼E¸¼EÚCy„RK©Ýo·á{‘ŽoÒ¡ºq\†š1„ZÖ 5äÔÄšEª1´rfïn!%àkÚ…¯Yùö9ΚpAµNNCÿÆ`‚l‚dL®ÔBràœÝj Ðþ±XM?]µN•h=tÉl·‰õ^‹f|VO.Q‰.i|2xS£ƒ ´ZÏÿÑ7”;tDNøèÃò²>ÚŸÚ×*#å@ÚØr˜ýøRéò¶œ‚'õ^Ë–Î2!5©kœhWÏX¤NrZi»Ý¤õÇÁ‡»¶<¨fÔÕê ÂfP P¯Íœ Ú¦ šÝ„˜×hÈh»Ó9ê~>¨N—«GOoÿ~&´“b£ÚBá>Ô¸ˆÚm¦¶@ùoW-€r²‹/õixROá2’ꂤRöC€±¡qÚ0{8ÓȦ…*Þ;(Ò¹êþGy ÓŽ…‚|3d”IÅ#½L[9%òÕ‰ïK5k¸`qÕ´)¾÷Ï1 T8k¿“ã𴌈™TÚÉ “Žh&¬2ɽ*ÿ,ª9Îdy#JÁÃM‘-ÈCw(%£:Ô,6%8®E™Ð½(jQ¼Òf"à%ã“M\BKË®&êû,3N‰¢aIIð¸Y²å 
!4…’mNÖóÊ۸ÐYtg,’5…rRBòÕšžÑÌ1ÃbŠ(\‹¿:ÆçP«wájnÓß"¡‚$rôCÉh_ÌKõ·HäC­Yù¡çF3Ü„0³ ×÷"¡ƒ\z5ª™/"ØAK–Ü ðÒ %žä´+oøF8Ç£øA äUrISLÄ ”c ºHdS„3QÞ@Rð¤fnï-gÏfƒLSk Žñ‘Žh¦ƒäJÞÕt‘F óČȅ™‰aŸ‰ÜöðrF=5#Š«æh«æÖ F…^ìCäÅ>}™¢6ÍVDª˜èa}Äçpn¤•;àÑ{ÆÈÝ2s2®gâTÍ.8j†:‚ˆM5”Sjßø8b«¤2‘ÖS\+>GY'•ˆãQ©‰T[ÂËy o/-WN‹H/y/GM_óÄÅ}çÒ”7€<©7r¶‡’›¨Üð™ç3´Ùt¸¯gZZné@r¨¦tÙbáƒ(`Ûiå‘‘•'í¡åSÊÒðè§3ÕÅ´M} ½‘528r¸1gŒ3`ÐÂi,ÇéL.Ô8¨¢¨©Qù¡ÇçøÛä*އ ÑÛЇ'­Äõ[rÌ©þ†>j^>ß* –ùævãR¹TVZ ‘Žç^Npž3©¤oðÊB Ëû6äRƒkˆ­L ‰±˜!d¯¬*o)xœÊzB»Î-Ÿ{^gRÀ†è>9h~/ÝÊvÒ;¡Í­ˆʆÛBtr¼ô~µ{ަüÖð¤Öž‰œ@%òꂼ]eÔ&œ¥ë夞÷(íPŠ%ËH ÃèAl SøQÓPT$¸$z9k¸;¤‰+‚[;NèäŠwcížã¬¾sHbyWá« ˆµAæ$£"Å[Ç'áÉÔÂ[» ñ›Aî4½jB!Áª…J|@°œ©ò‚‡—Ø~n¢‚š6GEÇlÓp!+±Ùxf ‚ÕÚÅ®£%”ö<†ÚcÜs—žEdÑ­[ÊÆP;a¹µ…⢰…êaðSðhQfõ¿¿VâRI·äªù¸>YÖãÅ’fà }G;½‡z †Üâbe,¾Qlå¤BqÀÁTÒ®xȘ„Ç0ÖÑlbUYL‚ËdD ¦`û2”¢j(éÊG U<\4aî4å·1P1¬·žP<\Ì6p‘9ã@'àyLù£B/¤P2NH-"ÁÁU_áb>ÌÖ”ö<®ÇpÑYTtJŠDP£œ–}†‹QØNRSÎ8l'DùÁ§ç8&9ßyá¢cA†5®Šb欨+{áb\%e gvrå÷þ9ÍXˆ8#ÔÁÂEÇì‚qœî&Ç´†;y¸p1Ÿ¸5ìÁ8ðˆ^ªXnœ†žYdÖZÎÕþŒhb hè§–[ª/~œ†Ç0Õ“‘<à4tìCÆQ©ul_ä˜VZ÷eå”0²øIDÅë=dY§t™$¨ É1#ܽG6Uô`$IxÜ¡¼‡ sê0ª Û10’][&oºaš×]I÷i†y×8©QÁUŠëŠÙÐ>׫¿RIºv}["v_d°¹‰°Uhxôl‰›ã¯Ídüžæå§i=› ¿¦Ê½o§ÍÊ7Úí^~E£Oú“W±kŽý·zNCN£ ·mÌŸý¥³¾Îü7R«î1YžÀ˜€=?ön”>P•ïN¶ÿgïêwÛ8’ü«úcm$¦?ª¿¸Ðgs .{1b#rlK‘:’²ã5ö±îîÉ®ªgH‘’ØÍ&»ÇJVÓd‘óëªêêªêîªu“Ýz4`§½ŸÑÆÉ1Aœm+ñ0¤#mÛº¶Õ¤ïq&Nô/o?©êåÅð'8§.ë[j绯Ÿ4½È_Íg¾ÏôF _¾êß[MoªùçÓ÷]{×ü‡oß¼|Aír—(ÅÁ´­òýöÀ$ÇW87:ªÿx÷Ý/¯—õõà¤}>ö½—ÿŠpMÂë^O«ëÅåléÿ9™ÝŒ^®/oÀh>A³€CDûyäð@mùp¹|€ÿ5ÕonÈ'‹1æ5¾¯éåE5¯¯êe+ΰÒTºžŒ‡ãåäó­D¸Ö˜·£W|½y¾ÿv]eì×ÞU‹9º¿¸¾×¿!Ûhu½lºÒ£_Ú.kç~ öFûtþ]Ez´*çZ›Çï6ø™#Z/òø ­|kxüzLÏX-É7S48Bãš,N{Òžö”»]“½){±ö¶LXø—þg÷éëÊᮩ¾»íR`κô"®ÀNßw躪ߧãÅåqñHï9U:sàäÕâŃ@1ð£äs¸äÑqmø×˺„‚¿¤APúwŸ+±*|%6í¡®üM‡D<©Q<ª×ìjÕ—xT¿O½wÒ;™ÖKúò¢¿ 5·{”ÐaÓÈqO'’¯C=Ü ÷ëì¡£:§-š°ÊEª-8ؽ@ù‚Wóz8¯+Jx¬ø³²0ë¬ÁISo½q VÆåìjüauT„ 2i¼J¿6ÞâQ.RθÅÙæHËÀ– ûðÅ¢¿~Ùÿ»]4jG§pCµÜàê)¿àé4O®÷øè† ËßJãmf¥ àõ6ò0i*f冀wr9Ø¢¸ˆ_¸òtº¿2’I06ŽG¤ÓZ·å¾·c½Ë¨n^„!œ4äÙ±põ¹†NÚÔ=šӕmâеë`a <Úˆ¸ @:ȱ˾'¿i¾ëvGÐEæ*VÓ¼¡K/0ÑÕ„VZ±øD'vÁ)ÐÖjÇ©*шð~v3]ÀÕ„‰A®;Ω尌ªDú°sc˜ P˜¦’º¼rÑs¬±Lí'õÖqëÅo·]|Æ·®k V¼$°8„ãZÄТó¨&uº}½1qn˜‰Š ÇÞÅ2Ex4“VÇñðÔzbÚCn>Å-û=Çt˜c–ºûh–¥—üÌ‘3ëˆØÒé7øî§ïÇ%Æ£õFBY뺙¡úŒnj‚ФN8îRC•ßÝmyK‘„Ç1w÷r»¯êå|<\x~Ú ?¹PŽ)T?Ò ÇóKËdyw6 
’y¹ò†°¹v3<Ü«ßõ1`w¶¤Üñm:c4-{ÕÇj<¡}ÑcE³yÁ<”J6‘TrÚCËßœIÄ“-5»ãåÍŢ߄ªw7‚™ Õ ú~‘r5DG5K³¥áºnÌ^Zg£Z'`ʱÈuzE5dÀ×Ù{cóðBï¹dI:Åeù9‘‚'õbЃù½;ì’av)F zÉœ8`9ò¥ð)ÆYyq¦à:WFiƒgßl¼ÞtnÑì©-cJI3Ú8u¨3öÕ‘;¡Ê«@ iÎûl¾m~É ¿Œ¢FŒ†ÅVaã/ÔžÄé  Ì¡|MÁ#õqÕöšdŸãB3,:TS~lênñvP 0=bþnøw«SèìŒê†i*Ì4Ãq`bnÒ°3üPjÎ;mÁOÉì¥îsÉ(!9ëéTrI–¯ƒS—ϤáIÍì¨l²—ý3A. «œn·ã逛LUã»…-dÂOÁ9îGïâárNlt6¬<m‹T)Á2‰ˆG—eO¿cØšïX”ˆGé£Sá±ésv›C6Ò…9I5&†\ôã“à_ºÝ$àI­ãøðÔ>fX˜‰Nk& ª¹NsYöͺEh²—Qô xR»Æ•pL‚f“ûvˆÍ°G%Fjg“©®Ú^óAi%f]¤FÑ ay¶ÒjÝ"—¦7.uÙ}øßÝ\AÙ‰Ö†q ANqߙπߢ "Žß”錯šÁŒÖ6ŽÇÒ‹ôp>†’Z¦/8F¼ÜDN•¨›Z¡7·±óÆ}h ½‘Ox)x$'¼µÿ:YÝ{^÷¢ßýц@]p£Éôq'È|GÆ‚tŒÛØñ1 ÆÊòŠ’‚Çåéºp<‹E˜ÅtUƒvˆ I°ãË*ç´\ÈHɬ7Üv ‚&”‰4=jð¤ºö»KNîò¬ªI=_® MPA^*a©­„4ì ;‘Ç(­Ø cR–•×$<6WÌâòäj«×sp·qYEy|[$‘Ž%7õ*®à à7ŠdÜÉ"Œ’Êy«=Çjz<‚å‹Z} *¥ðÍÆë-Õã.(=íÂÆM«Æ˜FÈŒQkäè/¨H÷†Nw¦hª9(cq<’éÒ*0\W#ò=xWÛöQ|Ôj;²qâé„eÙ´ —M€„âaO%:X‘†Õ²šÌ>ló‚<åt Œf‘1pEEºPвƒ0å›Ë¥ááÙÚLF&Úö‚ï‚':,u¾Lré"C”ßÊ×0è«a£CÈiá‘ÜLÚCw—wVüÚ{ý÷q£‹×íñh:-޾¿ÐL0* ×{ZÝ^q˜}ćG£zz,X]ÞÃJÓzªv÷ù{¯UHëõªËÌ»‡ð!˜bGôZs-Œ‹¡ \¸ânVšw^(Ñ*$àÉ™»¬g-Öf(X$ÅöµVÒÍld$H'D¾2?¹´ZS/ gyÌË!:Þ×…Ï‘B…=ð(“ÕF´KQsÜf‹§ÃÙ¼žÑå+ÏѰkèÐ8pÚ¡¹[PCõûõÍo®qÁÕt-mŸZ‚GÂr›Ç±7í+¨2wû–/JÇ“F ßWã ¢~¾ü|]Ÿ?{{ûÍí¢?5«V½x†*R-fÓóg;^×Ëg½«¦ òù³oÇ ü1âÇìÓ)/Ç×½ãê^µÅÙêë´²~¬§ËÅj‹~:‰¾B² Ò‹5¤¦Á଺!¡?;V¼Žkµ‹\à„YÝP\•4_vnªgûôÍêœR\U¦Ý_S÷¥Ÿ_?Ïn–5U;ÿÛwþ—¾­©^éÆg‹Áà[” U³^ìû• 5¸w¯sØ<½·¸œÝLFþçE½ò_/?Õõ´w5žÎÖJ²ô½1ÞAW‚^Ó7‚®‡¶bš|îÝøÊ¹Tþ²šŽèÕeS~¾^,{ÏG}oî÷ý_[Vyø¬±G‰j¶‹u]­ž½‚~üÑ÷kª0ÔTM´½ç¼/´B?\O5$•ŽT«Eú Cð/ч"‰;|·»›¿E²MÇ÷©ÅSЧ>O}(žúP<õ¡xêCqÔººUò©ÅSŠGЇ"I +Ù‡B+Ýçàþ¸Ü }?nZ]Â(tmqx–‹BJÇ£›¶FQ!–ŒÇpò'–†bÔ~û["r(Çö-STìYóÈC-›%¿×lå×ä„ñÉt­þø°Œ«ÕxzÚ»¨‡Õ ÚÓ+Ê¿-Ñõä•4}ƒ(„tB.z—7\HeÆÓaÝ»]a¨ö߇SÃul<œátÞ¥B¢Ò¾¹l´aÅÀjBè?{tUUÁá›Î–8ùç £ùÅ…Óã’2žR§”^u1»Y>PŸ¬^Gm 9Âà¼M_ù¨üä;2%'ô#'oÐç{Xù­5Ì ©×NtÚlôå]ÏLû€ 9hjöžs‰vW«ì®ôKXÇäcK¿¤ Wî_-ýb7¸‰ZÚj°]¦_IùÔô)ýò”~yJ¿<¥_žÒ/Oé—`ú%a]«ŸÒ/Oé—G–~IPàÍ™ùÓ/²¯­QF=7¹¾ Ò\Fly:ÜÆMk,X]›ãÚÏ?¹¾4(9Î{ Ä¸.PÏþ€h‘ƒÒ:hAëâçÉ“ðÎŽ¨Tx¯ Ëb8TÓÏ7ãµn ‹2eb퇈NiŽ)UØLguyá¦àq¶œpÃSBk&ÓJEP¢†n6˜Ò˜òÂMÁã ·p/gˆÝûÄ9äœ)¥Á¥#‚Ô m”Ì.à4¨Ž6llý îTΖ2⃈âqÌÚ²Bò†Îpe,/&p|CáëAu¥¯Ÿ§âq%„|E5ÕÏn-¡‰ðÏp%#5R:¦ËˆºàÀ활7Ü¢ƒ¼]Üù'$Uì×a+äép=â<`á¼ÀSðˆr3œçÂŒ£…ŠÇ€:P¦àÔΊÔêŒ8>Ça(¾TÖⱪ¤ˆ 2Nj¦0VD€"reEGj´6âH t`¶¥æŒÇãLglZ/é‡Î> 
_à"È=Å”âL³Z@ÏC–1Ú¥àvá”%àQŒ•¶sÏ Iw$c.¤²Üš.„…‹sÖ…km7tªx¢¤yŽáRDg6áv¼ °}é ë´’‚:çÊTtÄ$ˆ¼W~s\RMè¼³<J\û ñ‚¼´¸H[£y,”°¸pqUrâÇU7«-ÞK2‚³>"fk9¥b“z×[[ZÌÙ°*c;s«s‹y:›Îg³e»,ª 眖L¢üb®¹CG´€k^*W²¼Sð¤6nHå\(ûÈûÔñËmë§3Ür *FL„³á °Å0ÿ Ó‡ÇŠ2±ÍÙÆ9·Z‹ðÎ Ñ1ØIô™qLqÉÃæè,¶HÆûÐülxÁŠ'ÃÒð‘5×=¯'ÕE=y`÷à^ÙPä)ç`h##cà(ó;›IÊ›€5íÉ'ÿ<’ß¾|Ï52,p`è~„;Êy:¦•ËÐ̼kÔVç=püÏÅB3ÂãdzZª£¹Ÿ0PúQ1 ËeO¯g£–…¡•I‡9i8Zxà°zºÎ,A4ÚH:¼.xí锵ùv¿sð;ºU^sSðä+v}=Ÿ]Ô!&š0µÕŒ›ÈQiOÇén%йP #:}žÔ³ŠÓêª^\WÛã·¯Émö_A7#È:+•²tA-•è’Ï7äÇê€EZL´t¢1Ós,¢å{à1¼°˜C{’ P2Ò½³¡“F”s«<Ò-°¥cÅÅìŸc97lÞŽíê5¼™—ŸéÊtýÛ’Š‡/çÕ˜ÊÚŸ¬ãsæ‚ ãÄ-‡¾p Ò)Pùíc–Æ(É·ÖZ:éÊ‹Ÿã˜–‚ÇñXË2®×mí¸ +Ã3Fн-Â݉Ž[éíá2¨hBÁxya§àá¬Äœæ,È0‰â‡¡’wP (/ÒãRЅŰ #†qZÞï|k~]u >¦ñ-º;×õ°uT§­hN{ÕtÔ»nß‹ÞlLÀzê2ûmÓIzÖëÁÚ±/z¿‹õÇrÇî>ܹÔpÀû¯Ä“8!ÇÝ·9Í=TõòÏä²6BÄiõ?7ÄØ£‘=Ü0Ü*T<z9Ç©ð錿³8?®'£þ_æsê¾Xú¦Ú-ÁùŸHúôÕ_ü_ûw¿üG=%‘“4%²eòókµ¯Uÿ‹/ž³ß†Œ1Œ¯™b/N½õCòÓÞ›Ù²š p²­j×£;íý\£ ÇÄ9X¢«ŠBÖiÛ6Õ­&}_-.'ú—·ŸTõòbøœ£œ·Õ®ºiÀw¬ËWó™ï)¿Ñ®›¯zuWÓ›jþù´Ç}‡îõÿáÛ7/_Pkì%Jqðm¡~¿8ÉñU#ÎFèÒ„ï¾ûåõ²¾œ´ïÑǾÏú_ŽçEûÿ¹[»qþ®eÚ»“Þ˜˜†o”ç>¬ÚÐ|¦Wwdœîv>¿Õjª°2÷\:m'öàwaäî‹Ô÷œogMˆ7ójºow±§W_>U“ɧ­àï†fò‚«úbˆß¨s €ÄX ”c¤&Cœ÷ë–öôc4ãÑæP³ÿD/«ëêgór\/6Tiýöÿswu»‘å¸ùUŒ¹`Ý#Š¢$60Én‚ ’M™Ü.‚»¦ÛÿÁöÌ®±ØÇÊ äÉBž*Û埥::²“‹µ«ÔÖ§OERuÿ(IÓwBîÖcßrùúßÿå/Õ1?ÝýFzÿ¿¼ÿË7ÿðËÙ¹Jæ7¿·Ylýñwëëó«û QÉ¿*}égÓtýÇú‹(›ûéƒËIkè¿_Él_®ÿþßëoÿ¾Ýöþõì§õÉýÉùú÷›ê‹úÝÅJ´åÝõ¹ØtúëÓž~»REwûðò/—Wºe¨`3µ¾+󌲣o¶S>‘E’ 9Ð3g€Ò7Á›Kq uò«G’Ô­Y~º›ðÌy¼ Dåcˆêtb 1ê#qé½Ì1‡AOBÕãÞÒ@—V±#cìÒ.çÖÓìƒW§wÁ%MO2Qi–Äò2-ýLåÓržÖi’|‘¤à•$-P²Ÿ÷V·•ÖÓ]Õ‚'ûetÕD[,Ó3G¢lÂŒÉ#-4Ó}qâòI'mx 9OnÊ'D똴ïö©H§ÖGLl½Ð¾iç Ÿnöª9›ºYÛ-ÿ6åÔOò‘1WàáÃ&¹¢²é·kMh˜êR! Œ˜ÊNa¹A¦ö›Ã‹‹eþø-xšKm¿Q´Úb/ÙKúZ.°gmé€ôÁäµñ€úÚmx2 YðÜ'±sä˹Úsû‚ï,¡mp ×o:Íw#ž€íÇñ´¤+¹Ì%'´~´… ¸L½„°Ö#öDf¿OlÕîÓ9ÂÅ꺆Â=ŸoþÈÄ,™õIKBù1“©]‚Hÿ*KØ5e8˜tö ü&ÒÔ.úf+véõÛ¾àAô[ -x|ê¢ ÷%vYÌFWd6xt¡|0µÓpbë¦8ta7 ey©OsT£3³±Èl.êÅc$Q35sݱ¼7 Å//$-xдªöƒD(óËzËŒÊUÂ7í /½Ò'Ñ ;±yƒ=”Ìn€¨ P¾è·m‡´À[f~.<+ƒ©ÈiNŽ#‰ƒeŒAÚEG#­å½a@É ’<íR^9: zÙyä@h”ò™ÚZâͳÒÝ0°¼0´ài¾!ÕÓR>‰˜R²»&íb©ús½ëÞI¢›€ÇÅOañ´j†77æ&ïÖçš´wsÿ´û†ˆ}"»Á[p9ïßM!ds 8{°x)æM?ˆ²³f‚ïu×h¡M¼r‘W'–dK–É1çÔíìf‘heDk ÅW¾û ˆöÚ©]'Á; HpE^#xtËÏBlÚAæ÷{ 1—OY¶í–÷W7ý$ôÎUài­|ú˜Y>—S(ršÅ=G*X‹NÚÉX…cà "‰*ö ¢°µH? 
\06xò» FÙdc'ÿ¡O§ì23õ^̳‘Xß÷€6^ˆËË€öÃ²ç¦ <)ök=XTÒTyφN¾wþG?ü|¶á²æÏGW"Ë7g§§ë˽ø£¾ Q„¥)ýDHì¡Oιž§dß<¿æɚ騶£·‘Å€•3] ©f"\6¼Ý€ÒÝêöçÿúr³ºþ:Ý'ÎðÄ›æ•\>QÂÞNI+"=YŸ¯Xúû¬«ÿÏk’=m#OE³ÎzdŸ¡B2ë¹é`söÔú¶Ö"úQ‹<­5Ý_-Úç´Mla/¶¸ù`l,¼ä>Ôdr«}PÃV¯õ‹ŽyÉì\úH“‰ÐKº¾9»Òšeçë_×çsø—Måˆg7 7——¯ÏW—ëG%7eW…’ à?9HÑA.¿Zµi‡.ͼvÖiƒiC½xÆo#žÖX@™Ä·¸3f<ˆŸ$“™-¬ˆ[ÓŸ×à}L©ü<â¶Ž˜øàQ7´P‡'þ§ó«?Ýž|]_¬Þö™Žwr(C,Ò.k©*²`Šº¤æ#„wÂÉ~ùémÀö˜Þ’½š:q—]¦®š¼] 5€…åÛð@˜§À¯”¼§9~‹q¿ O £Íô\äÒcd1ŠÑ²{¤¾O¶X7©m8@ZðZ@¼E¡1ÿI”#OælÚÉà†í&xòÄ 6x0ÿI8'_'¤v.ÓÆ8„`Ù$Ò9,±æÛåT.8Z˜¥]L–:‚ÖAs©‹;\8]Iåä±ÝñŦôÛŒªú¼ãŒÉœLž ³±,ýˆ¨¯i'ËL“¹EÀ%]Ö~8óó¶û…]ÖÒÃtŠ!»ræ]!K¸MÃÍÌÐy@ª–ç¾]5ÆážÏ•W°„!GÀÌÎG.]ïåÛP7°»ñ‚ÒÙtØ9›v³;-̻ܵÇõ×7ëÙÇ&cOüêpæòêt}|·)Mzô·b<ߟžÝNEDNvk°=Õ2=ˆý^XÏåS'ùïi

–嫚´[ö½ w Æí"pÎ’@ËkÄCCv•)ä9±Ý- K–Ý>ÞÀqmFY«W۽ώêAyÿ±¤Ãçõ f…»‰ ò¬Q†1BŽBøXrÔšÉ_Ÿû9ZMj‰|ÔjØ;‡L¥L^ªÉä­î4ÁÇ«VSè幬|y³¾½-Ý'Leƒ8àåZiçVÒtHÊ»ôúvÙñ#¢%Ú9*?\¸m‡´ÌU‚o74¿žh,Û½!9/6ƒY$DNanåƒâ›9Ÿ³u—SÚ…¹iŠ'‘³ñï“'PCbyÎ §êl‰ª´Ãæ²ÆK¢¦à£9õÚ΃h?‰!» <{-øW<¾ü`bÒLÆ2‡d"— 4,´ «ôzv4ËœoÚ‘!Ég²‹"h;ê¦ò_^ÂÛûÍĨ! Ù—}°´?¥ºIB!n€žì-xrì•ÃQÇ$vb’±£¼3zúHÂÀˆÐK1² ÑÐ2Ï>›% HÝRj.ñ!w.v•<ÕhÔCnë”Òšðäyžêt=Xo Ÿþr¾[Sø•ÙdžŸ‘œ|ŒVÉÅè³o~faYÅÚ}]x$™á‘–N zš.:?Äe}^ÝžèxÂV–+oÄtšðt»–¸¥ñ‰ÅÛO?Ë_xyÓ ‘,é .EÓ?‰ú´››éÙ÷\î!bô`£#l|é‡Xuþ7xrŸzW \–ý%ñòRäÄ&viç\?¿#xY‚É*Þ?ç‘ÅCB“¹œ¤]¿øÞ>:÷|>ñZ6•PòNGûnà!¼–­­‘³õx›´KbÿÄn×»$§Y-xØÍÈ–{®kwÈ{0:6ú·|$™X·¾dÖ=“v¾ù``” ”ôñèìA„qé‡|ó´EñÄn¥ .V'_Ï.×Õœ–“î3EÀäÌÄ,v\€n‚Ñ}úX YÊZÛ¨¤¤Ù‡}p6ô©C&í»*±œmŸ9 ‚–ñ íbóÉAÖ70Zð´Þ-{doÙßeÕÉQŸásÞÂ*ípÎËÉsE²'˜ã<…#•º9ÞêD •k±˜ ½ ìLÔÞÍ~Êf¾hzp„¦O®`Ù˜râf;²ñÂ1‚·(,ï,¦¸³æaº(,nöÄœÅu5eTÚÅö¼ôÀÞÆÎÏ¿Q¹%–Ô%êËD(¯í|Ø_¿{LQçmÈý±~óïÖ_nV§òCuùf‡X«]ÖxÕ×™í¥›`g¿xô« Oìm!“ËdF"}¡,ðjëóá¦Lç•Z: <Ñ-} šX,/$1²F1œZ44A\(ܹ,ì½ÒÑa6Ϋ›:Ë?,Õ†§õmÈý+ùîfõeýëúæV>»8û²y¯ãÓƒÒynpPžNŸ9`2Îeí°¾¯ÔÇÜê§?kÑk;°Wã5W¡ßÔ=ìg°Ç(>!ÛhÅÒ~çg4N®.oÕ¦™cqAÔ„û, È“‹vÛ™Ó“¢Ïâ™Â-ípÀ6+ý '1·lš2; ž‹¹Ç6Ó‚>‘€z<­ï1>ßõêØ+{ºÁe½)Å–…!í¢›• Õ .Gm¸2 6médzN¤§p³«A“בXömBöÈCÿM»ý׆=м)w~öÓúäþä|ýð$òñõêägùasÁvÎÉÓH£.°)Ö.ªßë:ÍcB°2q7í X ÒOÔ§}*ðôßûê8-{â>{Ê-ÃVÛ5Gþ5s9Ûƒ@àÆj?]â <1Í®´yŸeÿœøŒÁRÚÎÞXzz‹’Ð@áMõ€Í¸†¶£.å ´ìx  slÉ´´k?#8†²{ ”è é'’(Š`ã‰gV7œsX 8{Hhe:0´Õ iÐ̶) ˆ).ŠëCxBê[&wËãDã–E!ñÓöÇ—'«Þ'ƒY1cRˆöHJÏþî‘áCAÄ@ÉÊîÛ“…CbMX0‰å [æà†(Ÿ,J.T¬”0ÏðÛoÖ_Înïnî·9w›R¦Œ…ìsöÑT9>Çæ#–>ø2ëµO_N¬H¾Äcû<ß3ßÇš±òĈ:’)tÒ.äŽçÁ0!§hÄ…ö#ª(§ iÙ¦%g^Ï>8„aß 4Ј‰oÀ8•®'¶¶d=ô&²°Y˜xQ}ô¬¬Á¤•¸töjž‘û½;»ÑâÛÒôåRo?m΃åÿ.ô ˜Ë1†DɬÍàcj5ÏR-i"ZÀD§—½—Ÿi­4ħõ6žä©ûŠ.ÇßYìo½=•1Qó‹Ëa5øœü€XA žà;¿%½ÕmÝç%C(|r!úÊ~ÛÔÎ5_Ø)®S¯žT«Øè<.~ájÓOu“+ðtH¡¶ìo/Öw7g'S¾gDä“õbô¦ïP¬£/tHÒq°¡òâ’[é¼<­'Gs™Œe&£f!'#…zjGéK\íöŒ Â'p™BÌ Æø¥]!Áh¡EPž:ˆè‚‘³içhùE ý°·jƒmÚµjÂ7OL.­ۜȉëdp-Cp™ƒ96-³ùî•àÈû×ÊËêoÓ ™ÖÊfÙ¼Í €FÅÆ©o~_íݧX+©$Îh3XN‚'f²ñ„æ?‹sË\ç@@ì,#NÚ…öGÚ>Àನ ¶WØR; RÑk>Ot<¿ÀÇí½|tñù‘ûϹaUÁ:.’‹.zï"X––´“eÑ¡ÈÒ£ ²9Ø£Áå¯I?Àz'%›ì‚8ºôÞ¢]™\â˜ù`)ti—ö_ù¦mÕnzÕXã'â»/s6 €‘=p\>í‡bP¹×³3³·±‰érü' †Ddº„ÒÎS¿—hÆ -èÓŸÄæÐ‚sl¼ÿeîêÖ¹qì«ø›ëµCHÎíîì ÌÜæB¶•nmÜ-lw’yúKn·Ü–ˆ‚‹¤5?_›é:A€ùäòŽSÇ.ž‰ 
®é„¥¡gV%K²ðÝ™‘¨hÈ€ÇÚIáøóž6Zƪ–£å=Ÿ*Uô06){fbQ@ ž·<õK‚Òn’KõIM*ÃçDU,æœ|ÐÅb‚ä)g¢™æ¨Ù\ǾYÆ\ù<…L¬Â¤àóÒ"Ä‹p²#R£ÁÒ4¥Ó+–!i½«öã|ê8½¬¨-E™7çT˜‰Cì9½*ÎÒ‘,é«W€ÎK]&% 7\%.}W´%šJÊDh’¯sòðYtD¾®£$^Dr „Û£¢k„Wß $ ¯«¯„-jÚI¬œær4Í?(BäL¬­“2.Žà@)ø..éxÌ•OëtóU0ßM…Ë+š¬G*8 4ä¹d6ÉÃ]F_^¹ÿÌ[ð°qõ¿è¬š SÞ°œÖ]•+דҕlÇÞf‡ÈÝ'Õ†ÇÜ øØÕS-(˜5êÇÕ¹\Õ5¤rýŒ aeœÁL‡ž¶j‚޳Rªb?Îcš”êû$®XÐñk¥—£¦³ú|‰3VµÇÎqI-B-匑[\¨dµA¸©?5,x¬7óGR†æe QõÕ=]eç}Α´õ%qT4Ÿ§ê4bó0àÉL¹y¼Ôt£êë{¾*µ #¥"]Wüí5]dêOžäzÞ«ÏvÕ«/Šù*`ö©W$*ã|îz¹ÜP$‰ôYñ@¦qbÒÈwB©rM:ž`=~;}ƒ¶x_Ÿ4uM äè³áOã\‹¸tse‚ú7õ±á±Ö@­jo*šCUY¥é´-OcÌå—‡¢ëÿ2̈‡[-mï¾ì7æJ´FŠCH•\ï2(·Ëaœ ¢q ™‡šÜжÑrm!Æ+ Yi]ÆQå*Ó|R±”ù&à±Q ùˆ¤Ñc§Ý‹ÃïÊeÞN?ªè“«ú,'ë]Ðô¢‹›\Õà÷|±<:îO 0žzþv·ýãáæóúËjÆý_}u•B•âò%Ͳ­ý.w T: ·›w Ï­ÍçÝöé¾¢ËX×%rJH!iØ‘É\+¿%içãäþO’mx¬ñAÛ ÿ‚!¹#¼ ºÿñk¯“"¦Éøõ¢œì.vÏPW÷÷w-Åj}V·y(Ùÿ«ï¯êl²¾™—x˜Â^Ka%$^ÅŒ2GÍ”¢FÜÞ¢.³ð„¬©OH­­iE©ªÆ\þ¡¶ù—d!ת¸]1®·Oûbð>ÆFi@ÁžL­jo½hìuNhír,]9 9;1U¨2.”4ÍJnõÆJý³»lx[-碿õýXÓW/¾Úl§5ãÚ(“›öw#îìψ12ºßÞÞnvO÷Ew×O·ŸÖóN¥yUÜìòþözÒl}yùä@Rêé˸ƒ¹úÞùŠ2‚$<äZ4Z¨è󦸞7¥½Ûo›»õÃDŠèê&$º§Ò>M‘ʵS;_°‘Á3 gOý9aÁóžªn/*{U+úª’0FQ#*ÆZ,j¹ÍïÂTƒ±«'ÈËz¾:iR!Dšzš(®lWŠ!¾£šÛûh:UiÀüðPlSÀë­ }Xü)BU¦†›Q,ãœ9Ó¿/# Ð= XÛ<œ›´ý±*TáBr>&GšJ-qlÓèÃ$ð! „q«Ô«NQÑiihRÔd(íM¸ÙýÇ Ñ¿@Ž;gn?|³›T%^Þ¬w“CU‹ä8pª÷BÂxïÍTè›bv²àAƒœçN‚(Ç'¥ä‹ŒC‡ÕŠ\ä%”éœZ^hˆE2ë¦ÓñÞÃÓõÃÍns_½HŽêÐs(åö“¶+Ë8àg…<蛕ã;2ûꈃJu±n]%äTzm)2•qŽš•áë.”ir^*Fן8ò’AÇ“²ï™ûý½Tå?Ö-²ãR07iö$ú’Þ5ë{¤0Á¸°à!lZ ê=+rÒp݆GÀr‘& 3´-ÕO$‰µÜëý87 R)߉¨ÕÃܳF*•B7ïÖ.×µ‹I|¹¤IƒÈ˜ÛÕíùxqÊ¡I²ðøÐ¢ Ëü$€òÕèXB¬¢+õê›Ôqé/TR©eïD%{ÇôÑ1æÆ€ÇÇeçà7wÛ§ÛK .oeÉnVw?Öæ›ŽÓI¹N°À®ÝŠ8øõ⟿oözlƒ÷ï·›‡’2v{q³º_]oî$ŠZ?”4ùãþûåO[,ׯ€‡ZuP7hÝ7c KšÁF–HçE–w6M9Ú¶ü”âOþfšhDX}DýŠ:…D¤Ò£çœˆ$xr»~íæ9¸,=¯÷k›ñ©ÖiãŸúË2ŒV5+5t°¢´ƒ¢KÁñ 4'ÁÍÝjó¥¦Íú •ìS.¨ Ùƒ jF„&iìDÌã€LºíÝ—¿~ý'Lê­Ýèä+h;GOuqʸҸ½XÄl | ƒ£²C‘Sv(ÛGûSÒ†‡›=â½~ÚÜÝ–U\ÒX~~úR}7)H}.åh5Ä\мŒHå\¸ŠæË“û7º°àAçàÝ):/é,? 
ü½Z˜.Vë¥ žHžKFª†;†ìr3k´ˆÊ³!“ÃY5ÈëÈðÑþÏËxR›C>™ÌÐêØ=S mÈ·š6ïÞj„zò¸ùŸû?a±(ýŸÚð@ÛC?MÑÐŒµçÆó®Kˆ»Sù¼¨|ëCÀ7º>õ‹IóØŒ+µr³ïG‰Ð<4«½Sa³¾³A Vz_=ƒ C;O„Yòßïö¿<˜è)h¾¼Þn_ª†sªGGèbJÊÝ[äšt¥ÿ(ôé5þ¡Ê?,­•Ž„eœpË+ ­onËZ­{+TH±ÿª°àanPN¨¨éжEE_(Æ¡Ü$høÐ‡Ø¤fs?€#&Ô€·ôE5‡OécÝ ÃÒu œÏ*”Ò~Ð8wý°TÚÏ5œ&ž¼üUÿ+R¼Ò‹uß)‹q %Iß 3K˜žÄM<֓壛õk½½ú§ƒÂ&¡®Äì\pä@2ÆÔÂÅŒ:Äçu<®ÏÔ߬.¯Ÿ¾ÞÞ­‹ú¨ª¾Ò‹:æ¬8Ë8„ÜiÒ;áaÆ-x2õ™îg;úZõSî©à¢C¥ó4ÎùØiÚ;ãÎ#¦>ïÒÒš Ç–ŽàÜßkÔãƒl4!³ÓÜ2Ó‚žHÍï À+ÏzθµÚÖ±WƯn³žz9³yPtIÓmŽÔc!×ÞXDß%³4¨Hƒ7‚óñ ö|8ÿ&Z­O99A]z„(ÀÉQ¥zá•|Oþ„`,xи¼t‡yóºú¦x˜²7±zîF^¬•õr¾3¢ìb0e<)-Ãç¦-D¾Õ.¤,DF@³D2Ž8/лH½Gu 88@+xˆ(%OÄГ7«ÇÕÝöÓq¥ÖO%‰E£‰³vAì<»®´#„ç:<ÖRœÇK@赫/Ÿ‹Wl›¾~J1„€Ú»²išõ†Š!ÓábÐÅ AÁCå囎‡ßù,f‘J_­;TT+þmˆèUQ2w­¥dõoc¦R7LŸ„FœQŠ|ö*žrlÖ(·r°ZT¨¸h³!SN}·ŽåDž-‹ìƒtNt(;Z«¬™e®Ÿ¿S⪑0eÈàš=¡;‘Äø 8l°à‰F¯ãf·ýúÛëUr}ýÀž%prâàkÎ{Šöfìg„>ô§‚Bgô0Þ›tY?»/í–ÅÈf-Ô.Ø÷v;ÍàÅ ;„¬ƒÏ#‚Ô‚§”×p*v!w&ÂÏÞ¤Ïú.gô™ôc ÎßQö¹‹c$˜¡ØV \ö…΄|G ~ Iǃ޵à@]}©~ÎKѲŒ¸ÑÅdîË>‚³òˆÇb<Œ ž­>}Ú­?­×—Åj®o7Óòž/dö>ªè¢ Knß ]’_° Ýiå5œL@ÔSÐ2Îåf“ùm³þcÒVÝôE Äòi-Í¡ŒsÐdí.³5˜‘u¼F,UœrÅ8Ov «›ÎKDŽX‰dÝz EZ\Òt6&( ©˜’‡!Ó&Áºð>ÎÀ}‹ÎôoxšÂ(D7 clhdçšàDa´®?ÆáS,ÕÉ9&ÕŒÉ8ïû¶J< @¦P“F:ù¬c'"צ㼕Œ„g½òê+(û’ØpÝdPiqÞ¹© }Ngƒ0 1à1§µìîç×3ïÕ§²$—忤E•Qv4ó#f±—^Ý”EPòaÀc 1¾ëåÓnûtt=üòén{}P×s¯Áº§F\ý•_Ì¥âüòW_?*tmJ‘%}Ù€.¤¢O¶¾‰|)©ú°[ÿûiýð8¯¬ëÛ'ˆ¢ÛËÕeöKW#ý’Ç™oy; ² 0­Mà³~,¡þó´[O3‘›Q*Ä¡Ô[Q†‘ŠÒy‘Ê|ÐÕt&¦Ÿ>®n~/³®±bøbg¹FdhXð|ìøéæ~šߊVì>†VoE(öî¬Åž>ÔZ}ÛLÍN›ùÔ ô1–ê­(ÃH…þ¼H…ü‘¤úöpÿy½wG›ÑŠ>ÆY?*Ì0bñ™Y+þPkµ¹þ2ý+Ód„fÌŠã³—fµµÌ—^M'ã~ûÇz÷mŠË±ÙNtîC˜uT˜QÄ*IûçD¬èÓGëëÓãêëæÏi.¸±ðcˆuT˜aÄÂó:hˆ!4(†õÓé1F…#åy ©ÙŸÉq&hQ «¾<$´Ÿ‹Gl56¬ø'êúå§s”ý ÑKżòþ·ªJCÚ]jòÀµ¬÷×;øÒþEÃÁ(i\:R±†¨ã Íi°½|Ü•K£RçfÒžrÞL¥ÅšSS­dœý³Ó›@NÕÄØ2.ŽX÷”8 AÇÃÖJŸªgÝ×ã쀑œš]jµ€oM±°°¯[ðX[C¶F±½PÚ?PšÔYg³ìæXRãø9‰:¬Éò焟¦•ïDê3î ·_Tãþ—›íý¦˜þ‡oSÒܳ¾Ž™PÀÈÿ®œ£â¹à }½çRûÍÞŽu#Ät2•úÕ¸Ž%m¿Ã²Žƒ×ñ6k¢ò¬Â‡«ç¿ù¹ù„8tu-‚,_È5Ô ;íÂ>¥ïcå|€!à€i6à!hÑœà­Úö›!×õ†Þ%¯.t9·ëݶ˜³a 40Ý<ÍWõú±¢H_Wdð!9wª*Ô«q>4i>ûN¢_j'èÛŽŒ£ qÇ©ðÒÀuà‡O—kO3èài†;þÑäË멨.”ä0µw|–lÙÉ‹c¼j”d0’òU,!´Ž=6ö}>¯WwŸo>¯o~¯(…ŠÇ1ê`¬t‹š•¿zzü\îÄDŠ(ѵ˜Rè?ãÞúaÖ(ã}ëµÓbÊó§îÒ^s[–àØ`´OÄnŒ~\¹¿{±Q4iCPü§TÚ0D¯FI6Ì…e»Ps¶Á0ó<>ö_쇚¬_6@i M¨^ç@HöTg]ðõ'µàéRŠšB'EÖï"З~)Îi!¹¸ú‘r³"C‘'?b/°à Л?zM颡~9!1r’ÿiØ%¸!¿Ü,ßÈ 
ˆ+ÍÔξ9ò;–.tJ…'~þ£™'BÝ™&9«a‹Œ£Ø¤/aOQ"1é¢D?!ß™ò]Ó <:’äXÛGTr(%LY³oÄÉß}§°8xïÜ&XðäÞ›…__6_÷ù¼*|ØoÀõ MÆÈ.ë¹*\J$SOCñ:— Ño©5Kõ“øH½ü_§^yËRg •p=cÌYCËž`ÀŨOh2ÕÏi,·U%ÖÏÛSŒ¨4EG9÷´ÿï1YôqD1xÇMÐPÀ]Ô‚Çó²…4Ó3¥ú‘¶|‹CÈjÞªð0ûгñÛ„j«¿òžåÇ ¤ú’° ][ÔõÄŽÜýÙ†'ÆV1òžôšêz)£¸A‘CÆEXê´XJŽ\J*d‘Ì  ByÇUÞYèx*9%fgº’“1DUä¡¿bA<,‡ÚqЄ'/èïQã"VU&Þ&ç¬-ªT:ä_/þõy?óß„Ww»õêö¯‹Ï«‡‹Õ…LûdS¿n7¿=Ïí]=–b"“„/—/¨s¿Ù§A_ÜH¸%ÅÅÿ¬?íV·ò7Ó ýí_»'ñÉäüÛÿ®îÖ[(;{{›ºÑóÃ>Äþ”¶à±ÖU˜kmïl(ÔÕÇbõYÛådY#Ž·°yUËU8B¸<£"G_)P³p‡3è|>^ðûSÖ‚'a#+üJ_TÕWé¸")IÓ2NœœìZ™¡>i„ ²àa×xB/×>^¤•MúãFúK•Òe¦FZí­EˆÀçD‚iéCQãÓ’‚.e ü6[ÿ?¡gõ [¿¼½šæà׋ÂB1ÐÏàV÷÷wÙÑq£~ʆÛ<Ÿìbõmµ¹+5ÝŽËC'òš¶C>ŒkoÉXyKfúhð4€róñØûœLdýn‘žõï§­hõjúËÏ Ë¾n‡”>Nó’lZo4ï0œ ™|ô!k€±Öš¨ä;¥އRƒ£³Ùj¬=eŸFÍ’q•G †c³v¸™ÑAÖqXÿœó9ó <ÙÜ5@QãÏQcý $‹gÍu¤Åo÷ ·©Êå9UŒHªÁõO·œ¾ƒ$êx<5{gs¿Ûþ¹Y?\íoOÞ¨PY=˜Å aR9€Y¢£Æ«ÞH×àˆ³:Û¥•ã;/߉åñuÖñ˜ëS/Þ2ûd9+Š Þ YŽÙ\OªOC+4êʸ!K=„ò^K=D.ã\³–´_—¤ÔýÛ´â¬ow›ÿL{æÍ*ËŠ™\`ÒÎêd\@nR„¨Y­¬ß”q~ÀA¾|§ôàÎ^Ç51zÊ ;E‡)䜎9nçt!²Ò T"‹AÎîNå;%ãŽÇž:wR³ë›§Ýæñ¯Bq¦å/¢ÞÕæëãÃÕ÷_½Ñkm™Á•s™€“’:«¼¶ˆ…¼žÀÄR{8ê ™ºïÓwRN g(1EhhædÓ°¯ë9¥@NÅŽÔl‡øðìc@žìúœLjƒºÚ¢óÌéÌ Ù¾ÐÁzÍ$%0ÿ<¹Ùñà®Ü¯îîþŸ»«mnãHÎ…ÅWVŠ/óÚ3Í”ª¢Èw±9ç’ä܇Ø–ÀŠD<´ÍSù¿§gØžf‡Lê\uÔb€}ºû™žîyé™>T3Xi“t[Ï&ÕøIÆ¿Z"dÅ vJô4[Ãc‹R†«;xœÞ™æ·ád:W԰ź̘lohQwk1L¯R~΢¦„Øê|QbvÖ:Nà3û0›v~Íùv-P8f"ü˜¢· ä•§¤,áœR¤ç• ”Í–º® õ8|/Kî.­ât·9¥’!8•ÈI‚h]Ï©kBoŠ…-¥ë¨ó"÷Ù}Ò-­×õÝ<”V9Ý©TÃ)U ã¬e…kòU ê‰Ö–WÇŠ*ŽÆ-£zÖKÑ‘Þ9þ¥:úõKÆn—ƒ¦Hà'KÒËkÆVš[+VLþ2ò±Õ·h¿¼S:¯Ñc„tNérÄ#§ãÞ¼„:}»T½ÞŤó/ eh ‘Ñå@›‡!AÕN*V8ê¶CeV`<'ÜÚCU#C e ¸6+£TÄØ™‰7|0¾ˆÇPÖ€dî´hÛ)›á .ÛÏw¶XÝ.qcˆCùÓ&B(KœÒdbP<ÅçiÛ)S‚aZÈP—"d«c˜&lèìû^2<³Ö!F8¨®ó _ñ¬œLyØZ9Jly€.Â6)½å#2°^°=w÷é¼NåwO+…À[RüÆFËÚi}A}NØY8Ó²+?¤ ´f.¤¡¡MPúÆâ1ʻϣçéÃÌL€±Z£öì NíºÎmШoI²0‹bÃÖ¥jÚImòÏÜdšs j+²›¨;!û¿<¥yƒp @”Àc ZlxGí¬Ëw ,Öº¶ÛºÞxo)™åµiÐd)tÕNz+ТfG&j'ä¾.e¸Yü†· (ˆ‘¼tánÊþîÉA…S<Y·üDéž#8E»Ê)äÁÛ¤Ñ'êLŒSk^H0¢cÊòxœÔYkêD)8Ê î|" æ]ö„‡3@i‡@¾Wƒ€㺇PáÔñZél›‡¢làꀥßAkÈámöDŸ‰A<Á EÆ)EYZƒ,BŸå|:mÀ¬?xl×gX!Ph—žtgCŸ‡A¨ÑiÃÎë•XðæÆøXžÚ©<‹Ä™WäHŠÿù=ƒ;4Ê,P&ž¡’ÊEØÕC ž…Zå4Jðù ¢‡^ye dV8PP~HÐ9q¨~è•(GVa¸ƒÝ Ë‹­¼(Â*îDOÇí¹Ö6Î; !9B¹p¸Øñ’€ÇÞ–2"EÈÄ%GQ’ð¼ÄÎ`.a¸]Oóxü>óBi=»Ã ÝKJP„ŽÎ³bçÏY0(Ï윢xË‚Ó\¦và ¬g©`n x<èr׺xðð¡–Z» ºyeŒ”ÚX.!P ƒ½K1ìÍáC¢ÅUZÊQN 
múYÉêV¿ÎC¥¥s™®"—æM‰ž<Í7¾»¬OÛ<¸[Ý&O”Ø+üMZ˜JªÅ %ý>þ$ZÏ6!TÚŠw:ÂÒL(±¾„çj²*†l$è*5´{÷L¸Òö/q˜#ÏqèÊÿvjÚe£AWÂÎçÆa,Í|aÞ ÷xábUùëÊ“«´…’Í\”é¼(ö°x"M¼Òð²òíd¡ù´ÉtXotmÌÆ%¯úžIÛ ¾4üË¥4ªƒ£”¯”«„ÈÅ #ôþñÉnX…mn¾(››WÊìбÌfzåŽIv£+Íý²z½9ô.âÈl`fãƒÝwHÇZš _;š1yQ >Ÿv*;Û ¨‰Þ7pÌÒœ(²ª’€'Gµ¤Hg›ô´B0†¤¢-Ì+àE1ÄÊý2‘LçH×7E¶YR«LÚ¨Ó«4¥¦^VÜjµ)ròº f—Õà4Õ½olmÊÕÛïIëNô¥T¢øc žÔ“ÕÛovɲÓ0\š‹S.6M.#Oi–ù—5×k½},ÛF²ÍýZ´ÏL·‚戗µ'öœ×I-u0NeÓÂòç¨i•"~YþtêÊó¬†e–j<ŸÕ¿«çÛºëªÍc7}l—+ÉlÃ`£'†‹Ëưoõ®“¾èh_VØ©—zn²êw³ºÛ\{MTÿ¸M¼øù§%æ]o¤èTè'ǯÊh¡/Òn—i‹}ÒFãÁâ¤]Þt‹KÎIQÌ{ÆKõÜt^’¥)x;ý­žý:ï6˜ÎFCm‹Ñ0M²g§¢‘/‹Š¦xDy5¸= n»˜lŒ‹.œÚ“ÏN,xaÄ‚âÄúu~{]sq‘ÍÅ8]‘µ´dÏKE#м¤ˆ/„û¾4'w‹j2ú½Û`‡Š$ +çüÒ${f*Êp¤6¦Î;²uÞ^ª„}QüWöZ¿½™NFÍÜÜÓó¨ËB1+~ìxNd@Å$7@î [g*´ëš]Üzœ¶° Ö£À Ò±ª“‘ €@a‘t<¶0AýF£Y&ñph¥°ü‘en Í}³úg„ /‚c¯* í\‰¡Ò¡Ó ¥åñ¸t&ìºxlS›¤À›Ñ¤Ýì]M†ã¥&»WJ5ÓìPbS;….Ù9ô Ý*çQóÐ;ö×ä#AxO¸jÀDà±"ÃnŒõÒ[v.@÷¢%y.ã´` ¡Pk\ña!É}%ˆbK܈‚eáaX…/7ZížðRI57ÜR;ɃB~BûP‘’tÁãíÐz><áÚQÉãm3ÕÝX»‚ôô¦^TcM£êM=µÞµººšÕWÍßF»×¥½ò䫸«j©›+,È-‚Shò"¸ƒ„W(A)ö2ŸÐN›"¤ZtÝ«ËÞjë5²…޼Úbé±"ޣŊaÈ“” C•1ÌIQ”ßÿJ6'ߌ(-oï…[*¯ÕÝ]ût!K.¿~ºüþéê5wOñsjÃF)´3eÜFÉ`›u-Sðx“÷>æue^VA¤ÐF‡Ýkÿ¨Q:-ÙDµ5z×ásR:^.v4%áÁ>=ÇÆÃÛétÌè¹;±WÞkãÙjˆÓ/÷î‹ôÚ ÁæÐ¡-°í1¸ƒžÄÜ;ë§Am4ò®ŽÚ Û«ÛÈÎg£œfkb†v%ÂOzq¿Ý3àAŸãÄÈf½óæTñ²6êëšâÑg Þ2B7í˜ýâάTn  ±Â*2öE„÷„ÛÀЩÉöæ Ö|k£DÝ­ÄPA¹ËTšvÎØþ잌ÃØ¥yÔ¨eÓcP£çMOíR-o?ÕL§%5Ž,Ï)1´“©7§÷ˆÚ‹Ò[µFõozz” ­‹À“:ùä`ÁšÎUYNUÚ“>#T•>—p86æ²€Çf¤-bFãÂåñxlâñ¼¥f®ëj¼¸\׃Ï1}bÇRƒN½*…(gÒ•ÐN9¦’‹;ðZáúçA žÔ"šE¯gwÛKc§-ÕønÍÿ§5/‰öºë¸­L(JéAÖöçwl‘±¶EFn}©Ñ ••ŽÑêƒM‡Ë!AÕ ¨4<Òîbk3ÜÈhÕ;-PiV ©¿')~œÕƒY]-ê!)t>½›QÈüdx]ÆÒ§ƒŠÀ9k8pœÇƒ£¡„É=…lÊú<èú¼zêQëAgØ9Þ¥O+¤ä\µ³Æd˜,ëA'¬aså (H˜è=h„ŽQ¨S‰„˜T7õü¶ÚÌ;w©­;:±šÜ=ç"hj§’ÇÐr¾*E _`HÀ£E†õ–aý©º/N—ÅÖNYMjÑ­S$jZÞÿS;Á%ô#„2\íÊ áR7žä쇄Óy«‹Ó (-Ò{-G™T“é§7Y}4µ³èKÝ ªùÙƒîÎVš=MEvÇ‚N¡áÖg8 X_%³Ý\ŸùPfžü#g=OágòRe¯êŠG.…ýó. 
<ż¥³žÓ5 æòºGŠ@Ñhîîô¦Uúð©åÑzê”Ar]ÚSCL€'µ^ÔÖyœmêÛòl9Ê4Iˆà4ªçY h$…Ã3¤ÑMX;œÕW£ùb¶1jÄLÃX_ÄäJÐx:Žå¤Ž*­=çgú´·Ócj×í1‘:¼A`çQ)*’Fæ˜ÌÎÛ|§Ÿ„g¿Mƒ*-z˜³uV2 R”Ï{Å%o¡]W}ì.ÇÞ"çK˜L™ˆ<{šlËí‘=¡Ñg÷Òz£Ñò‹Z¡Ý¾ÙúÀSRÁŽž!¯p%ú°¨µ4šÇƒ©wÏn_jMÑe÷b;¢4ÜR]h·–ì ~Ì`Ý/GÕ, Þ4Xöì£åFŸÍs»í µåGçËÿ?>9šßÖƒ£Áu5¹ªç'KkœU“áÑmu?žVÃ]è) &yô~mÎú}|[ÏG3Bÿ€¶1ý*îÍ~•g?Ý)<ùòfF©ø¢,îfõÅ1þ¯ö;ÇæL™ '߇qöâøïwÕ= ©çƒ°irq>ÜRè2®«yý/óëJY ÍÕŸsY[+Ðx-’N]×—õð“²õ †¡¯¥®)½¤¬Àšª¯íÐÔ ½ë/Sì/>UãyýÇí¡"lk:ö¬GZj5Ðd‚A'5 8>ZL)ò˜Í7‰WõâŸ&Ó¥婒ʯ]ȤŽXõ È֓Ɔ÷ñ–ˆG@¯gÓÉèk=˜~gþúÓ¨Ïþ<›Mgï(äùf2¿Z6xý§`ýðÕ¿5"~hž~ù·zR·ÛÑ.4©¶ƒ‘¿ù§%û–h~ñÕ7â÷¹©œ°âÕIãÉ òäèãtQ/¨³½ÞÜŽkêâäè}Ø7 çÅbvWQB„Fmó­˜ô]5¿¾8†¿ýô›­Þ^þj^“7iWÝ ÁÐÓwÕ|ñãlzEQßüb1º©Ï¾%€ßˆæÿ^MîªÙýÉ=kÿ5þôñ-Á~C¿Hˆßï[n~y^>ÿôþÝÅñõbq;¿8?§„;#]W‹³Áôæœ [-ªó÷ß}xsJœ2D#éko‰o“z<¿øï_(¸% 7Öþ£Ñ鈈0|Ðq°ã­9[£_&¼ûúé—‹úöâxù,|\é5?œFËšŸ{ô¯^*íçã£&ئýk^V­±ƒÞÙÐãçàœÞo‰ïGóÏóÆ]=°:Ì=Î-,;öÅÿ '÷Ô¤ï¦bpÛ£B‡ø8«&ó&õùH±þ z÷7ï¿ÿëÝh˜yüv:™OÇuøóÛU¸ð¶ÍˆèYc®÷™sx0i¼Føó‡vvùÍ߇ýu9ì½}ª÷ƒq½,n>»©È[.nÇÕ yÑZ,RG7?&½üÇdúÛd?>|ÿaRÝί§‹æŸ[kl<~²*Çq øß[®®[TñŸÓaýñ.Äbœb>´ûšÃŸ»Ý£öGèJ·ãÑ`´ß?€ÑZëÞvŽøSãªYŸWzW…øå覦ÁœÂ^ßëßImat½…ýÛ÷”¥/‡µ×M¤ø´ Ÿ^ÿ?EŽÆäU^ÿiå¿kÕÐôl’h5ÈÓ/,ÃàGÇÓŒÇáCòÝ„Ž“ÕÉ‘ö'GÇ䯕½ZÅ .¬û—,ý’Oéyíðµ«þz8ÚI`§QADȺ^+q•ŽP¨ú—Ñd4¿>,9úFž BÉ›ù¶¼Éœ ©Ñ“ØÝ‹km;á²ì¯Í•ôPVí ÞÓûŠ[ó)¬÷>wn³MØkÓ¼Q‡­-À#ÓÖºË6šQÊŠ#·Ú§˜síò*ÂRƸ"̱ S¬ Å“z%ÂêØõÉJÅô,­LØ:¦8ÛƒK F¦ÒXÛÎB ã…º€‚wøÔ.µŽI†*úê=­Æ£Ëê2ªÚ FÍ^„= ¬“£v®à­òÄV…Ý%·ÔÒ[ž^^*Q‚^”°ç¬Ô.Õ7ä7G¸8¤1†äÈ¥¤´"„ê*¼Ø3¹žJs(µ”‘/µêÿˆtûg0ÂsyÏí¹®š-ƒ”ÖsÄÒ$b„H蟋XOd9”Vš²@-x™µ)ÍxÊ€ŒÀãà™i5º¼L[khŽYF ´|dä|6—µMœCÉeˆ." 0EÒYç-…î>b8´B>3¹VÀ51¿¬£$.bH츷o~íèPŠÐ #†E‹ªÅ@{3,ÂîMòEý×Ã-EU¸Ü× i#F x¾¸¾K¬CÉæ”Ó6bpr²Ì`éB•šˆa%œŒyV²5f`æBå¢P啇ÚiõLìÚ”ã0:–P^ÏÒ‰Ú)]€NÄn©?5Ix’×ävN¯6ê½ÝMêYô«wD¢ „çgy¨Um¢¸IȾ†Íú‚ľÀZNÀcÂ:8Ç©CŠ0?,Ó? !V¼µ÷Œê´ØÌ†ÚuÞQ´}¹>ib4@±aóEYŠšÜ¬Q*K\BýTWaáUØ;¼®!†çè„òJ³=A¸Ô#(½C’%2ã<6ÑhM/ÛÐMwßR‡Ôšsô]}Èaù~ «)áBðØÔåðVg»B—óev¯Ÿ(°((HcÁƒ&ÕðÉ,ážÉ‚! 
ìL ïñJz'õˆïhòiV‘µîš]ÚJ궘¦P NIZj0©‡õ‹ Â¡oò©¦k6¯kG‹LÚ12yÅ»w‘€^ õ’lk””ù«ÓDh>(ú"ëX¨Õ‡°K‰ %(v%2´+B z(îFĶ]ê­ª<¤£a÷B(iJƬ ¤QTÉ®=ÙkiaBþây8JÀ¶ïq”ßc›˜«NêEs0r];Œ±ÂJxv²‡’}·‡O/Â8êKƒ1¬NŠûÂ{¬’ÚFàÑë\þjUœZ‘>ÔÈ‹á}jGÞ‹ª±pÐC +{¡”7"&®©LÃ^Ý Ý0¦¤PÂI¶@()–­’B~Ê‘d=vÅ0´ƒ6vBHìDgh—¼4»­ÌÌW¥†4:lÏa„K;•çhÔõÈNÅ„vÉ—,$3ÒK©(#gƒC/–°dØÚ¦¸Ë/[ÜN$—þŸz°ØÐN·©Œ(Á²›‡©(—^ñ³G8X`M“^ý÷û Ýtob Èч¬ 0R€ÞÃT½)1k˜‚S+®ëáÝxó‘¶™ôc\råž­_!Óu6N³gãR^Zb+Be¼HûëÊÆƒé¬žÎO/§ÓE3І]À¬uŠ5܃ŠA*[ÄöÒ¡d`…v`˜Öÿ|wYŸ¶·zœïV/Ž¡œƒ°ÿËܵtÇ‘ëæ¿¢3ëÈæ 1»,’uw›E[ÒŒûZGjÝÉäרnK-«› «X´Îœ3 V}Añ°ÑW[°æl)X¥ã1@" RÅÁJd©½céÄMomzâ„¡âȤìÚO½‰jöCH˜m;DQÙcк犳S(yhzÀ»{¸ß 7å§§Î’¸«»»C0åw9j—PLÁ¼¤„®4¬óüÑ_Œ{¦œ`gÍrt¥‹#ü"MHxš ÊjîÚ‰©Ñ†ÀÉ›Áx¥ÃÜÓx nöŽÇXÑŸAéF¤±ëwX|\®ÀƒÔSû_þ¹Ù}=£díxJìJèJMƒ.“àfï8/ì³ÑÈúÊgZScëìé#Þ‚µñbË%¬8êè «Í ãìý—¿Qfº„CöŸ¼£\¡0çõöÿòñfs}`1Zb@ æ]ÅõOWƒ“PgK'Îf!¥ C¤A_µ8WH'ç…‘Ð}Xñóé{Õˆ&¢6:‹Ú„Y|~ÂlÇ7wßu:F®X@bé%¹ô í3•¼ïãÙ›­í÷ì|0¥ƒ…@ÌÙR\#²b<"¾›CÌ£íû³ß7wºá!$ÐÉP¦ÎºÀC¸ì²ýz”Õh]#ö˜w† ¥]ZtÄœ5ÑÒšƒèŸ9;l 9ûdÚzBçÜL _‹pæénX¸ÛsŒ)&û NqC9-2˜œ%µxc“,:[êWò±$!.~ï>ËÙˇ/é«ênb¶ï&º+ôú2$ò'ß$¬Âƒ+ëŠs<¦ Ô.¡º¯OìóåEœOÇ5‹¤QÄ{5x°‹ßx¹»_Ÿ`ùÍîêòû·í¥¶ÙÝÞè`€‰çÑ”— j–@q‘O¹û|y©^\#/ \žœ:ú›3Xoy#1¹”}ÍJ8õðE—,a¶ôÔ®1¹1·Sòä¨ O·áÀ?ØýƒÛçG©‹‡eÊLÀ”jðûê䕾ÀgK ´»N€!’R'ºnEn?Ì7NZ2Q4Õz;³!®¿û ?Öî·æ9T°öõш¨ÛÞ£ï·÷§®¿ó”>Öηö¤x«N -И`Ñ‚‹kwºÐú;Ëü¡v¶ð~>Ï3ý.HoäÈÚ)–ý^Åf3I!Q®AéãÒ¨u Þü]­&ÞX¨¤ê»ëZËXáAS«_wzœhù…(›±E‚±nª5û–áœ/Õ r$—<ÖÜI:åuއw’¹r¼·Wû£eÅëblóíê¡ÍÝíìü¡v;·6³©,ä\Š‹]c1ÚJ•¥ ˆ=~¨å³Îñ¡’£AWb·MÜv°`][Òq ¡Ry6˜åÙâŸO]Æ-·Rè| «´u«ÏœR°bõšõ†J‡, ˜’9ðAéZã Õ½´ŽÙW>-@:Ü™!@*/íýµ ^ï”e¶à  ·|+õJ+‡¶!mªc¦¸‚œ¬ó©Oœ÷:/ˆfûñÊõþ’¢ñHyvã+Œ7ur‹æL5-”?–!öJ}¹™úY?><ït<ÓËg³HÊʇ¤•/Æ„Îw1õË1¤S žæNýgyúƒ{ß¿mkXš-–Fšä+–À­ÖÅ [Q×ÀÉÅ`¯!’")² ÉÆ“‚_2{7íÙâ–¸+brýŒî?Þ¡;îªa‡i«ƒ\Z´+îþÜæû÷Û¿ÛÑée0‘_¼Ð¿%¿Ø>]Ü?ì.6ÿÚloõ29»LÙœ¨tP×åÂ(îôG=qÔºxë£^'‹vS§}¾¬9ÚÚKèò€¼AôÙŌ櫢҅Ð+ºò³û¢NZ¾ÆÑ ¬½¼Ì‰B×ÞøÃ-Ü€¤õ<¾C;œëŒîhõ.Íæhgí¤TèãWvy€Í÷­JÇ”|.öBÉ#¦žèwÁÕàiõÊOz´/Ì:ÞÛdD¢1ʯ˜“‹1æÐ<•¬»Vc~ÄnÀÓÚÁô´…ùN!žäb9w!9¨f§H„ÀÍuù«ÂŽ:!LØcæVéw¦ž0x.´…ëÑò9LÉ ¤taæyž‰K›A˜96BGGO%c–Ì7Aùc¬t*˜ÁÀ}Z6ÿYš¦üP—ð‘¶Ò b3ØG ³©ß³+`X2ƒzß7õ÷…òûÙðÝï?®——®JÑ2Ä m;LèÏi× ýRÑñèÙ,ÙTºCDGÇüzW'Ç^¦‰ñGáóÓà]²„&$ðìmø!úÅ5wsp/—½‡P±¾ù òXn6OtÔÃY:sB/È&~ƒ%'‘2† t—›x©€$Ç™*TQÈ£xÄr0[Í)]k 
ïOcÐKŒž8Œ–H€ 9VhâD³î›ˆ …@þs*˜#úÍjêØÙ¬Õ9äkô8Ëú‰ëƠœÉU ‡¾µÿ•°— ¹ÌÉW,oD©‡âI",ÑÆÓœôo4ê¨8 F€´J%V8yHH}º‹´€^((œˆŠ}µÏC®¥ÑÖrT°õÎ]-o »·3žÙ’*üQбú©Á³t{cOÊVÒÝÓ’¢vœwxRìsiœM>)*bï,Ð>¶ä+ [T-^ÁRšRÇmõGiD²¥|B`[ù^%íœQÐÞŠ°’¨ÔŠ·Ð¼ÝºUºà_*>ĬX'y ”ïvͯÀ°8»°÷;áõíÍãáÖ¼<üû‰ûV€7;/˰õzvnAiÚrø …'kó*pËŒCb°Y>ŒÑWài±ÑÆý«‡Ë×jÁè­¨kλh«ðì‰W‘–sx—ŠGH¢Bí« ‡8äjÊ!«õZ{Ls/3üí`WoY³hi¶^̱z([GÐK%¢ó¾B]Æ4ÄËÑÊB¬0ƒslíqt>Ág®iE]sߨÂÎÉÃâX‹±T’*5‹Mc.$pì¬EÁÃ]š)Ìå¿§ÍúFUcÉ…ÁU]/`©‰í™+^•3à§ÁŒ¾b†¥âa^˜XQ~­?‹/ed;”1ÂÜúüUÓ2ŠÌºŠ{iHØ>S“VqRsÖ™U@çÑ?¯¾‹C:ýjo8Z¡ÚL:÷²â^¥êÆá#WµTÔ8ʽS¡[xHÆž|G›ÉTèFüÕ¢¶ÝÜɦLÛaÅ‚Ùv`Ÿh±Ü/²SëY(^ìXóØ*ÖMuIe¹"©Œ}¨2¸Øû!Y¬½6ì,T¡KÔ+•FçY¿\ñ–ó‰)Þ˜ÁFŽe#Ñt‡„®XZ_ÿ;+[º&ÆY-xRceÉ «^8uü”Œ–,N4çX‘º ¡[Qo±lXňyê-xÐÿ²³^6‰Ä¼ÇIi‹:È­»Ë¤V>I‰ƒrRº8@·g¹Ûg¬`§á»÷Zñsé—‡$‰³å“”‰!ˆSæJbüëôAî¶ ÎU›GÃGÕE\_(ð„´ì¹ó„<¾ëG*7`aÓà“‹ÉÅ Þ2ØØst­•à#OQÓRÒú½³ÛðÄ•ôÓ©öÔ©XX/¸³ÎIFãùi¢+L ªœMÚE†…É­FÉ{:?bã3ŸÁ(çÙãaê’5SÇÆòùñŽ' ¶Ò…ܧI`'ܬ­ºØÆÍ1¬¿ýÞe—äó&M èÕa³Q­N|-ë¯Y;"Æh­C'ØÎ¬kµ$Ïí… ÀÉ|°²êöx:Ù§ÎÙ&:,:Ñd`À:ÏÜ똇“CHÙÆ9`âÞôÍ×ë=}Õÿ î%Wæ^ÐÑm-m% »v%pŽ™Òœœê<±GÏŠw•“/…’fÏ8OàsF‚h-|¡sÞL;`†«±%?7ùÁ­ßjúN’Óo„¡&º¹É"¥[óóé!)r}&d¢FŸ0vhu±žøÖ¯b  xR·öÑíV¸ò9[Él~5Ñssp¦` F²28ötëOJÔï KL<èBZœ“úʾ«¯7Wß.o?lïw{ÍʬӞÉJæØÓ5?S ]åw¶^'º4ÂTïÈQ 6LÐO^úšÜ ®|~´ P{ZÖ€öU¡¨—lVÎz žCL£¸ê öÆ2{QüQñÍå`€:ŠÂ,¹mË4@êñ ëÒkŽ5Îk$%™ÖÄ1¯iÿûâ_÷Ûÿãugs«m]þ¾øºyºØ\ÌÉJ¼ØmÿØ^M …þíâéA~pqss}±{ŽËÒï.6_žw'šŠžXêÅÕ×ÍýŸò¯ÿýGkÑýþöŸ›Û'q9åþöÇç›ßN.š$Bi€‡¡x@U0[¯™.ý'õÓà´Kb6!Ê_â¥×fcôŽÁŒÛ)FÆÛ â6fçl<>6þ¸}øëItÆÝæôQyÚì9–ŠCŸ¦l¾¡…¢ØúÞ§W÷OûrW•ãNÈhÄ“` žÐ-7éÀ¯ó)ËŽÓTåK!š°È·¶Ãì$ƒZ2«)6Äý¬'<W2ÃJ×686q«|.d—œÏœ¬«‹\†æQ2+ bn v¹OLÏóÕíöúá¯ûÛ‡ÍõÓ§ÃÏÞ3´ì†¬qs+jN“ÛÒ÷d×È)j*BÎ`Ãã8 ¨¯ßÉ¢ ±¹¾wñË´œ “vNwhíªÐ2Oæïr)ÈÄ&ÓAPº4Àm—ï Î’¯àhsÛ4“¡7ÿ+æÏýæööáÏÛíý·SËKœ*Ý­øg†ä›'ôßzœáÈj+¹¤ÞêÑöÑWK žØ^¶0íÅ×ç/Ǚߡ|&E¡ˆECl9F<ÝÞZ¥×!t(ps²W8@³Èw(00Øx¨5 h2ÕàdÙQž&7$ &'] í5 sD´ÒñL®’æ¦æhùhL$ªOk·¶iFÓÓN\ô»#5ÿùúq«:~÷ðpûm»SǼ8Ì ?9—ô¢4Þ÷tÎ÷Ö% Ä~‚D![ñ¡=Ýú3T¦ïdôÙxQØãÞ>Èqd¾ÀÑTæ(2Z*pO×\÷ÒMb‘´ÏŽ‘×5Ñ庲ІjÑ?–] 6gr!K £¬‰M"§Ö±Ç·ú?_ž··×Ǻ¾Xg„Ÿ|J)'q¾ ,B!õV!]¿a i„*iÁz«’ÿyÞ^}ô»?¡ÌO ï’‰_mëVEÒ,Ÿõ`Žgm•F2FËG×ÏzkÄÃ|í;+ü˜tèÏüZݨ‰ƒˆÅ} !iRŽ1ûi¢Kí¥pNAËÖŸ Ü†©÷£Çæî»FœÏòÒŒA$Y·~@ïöè¹ÐA€0ã€[¥¥Ñ#Ø 
9àñªtdñ’N¯/-µ‹A1xC9cåV¿>ŽLdkÇ#\-|B¢Ã?2Ñ|™# y­~d")é™hƒ·õ‘‰,+¥u·G&ÀR|Zwb"iå²Aœ˜ÈBo¤ù´NLäI§þähùYÈPºñÄÄxbb<11ž˜OLŒ'&ÆMOLïi]¸oy/ëOLŒ'&ubBN i°Ês$%µ“¾üÖV>u–Xõ±£•ƒÇaG;Z/‰Ñ1bD$ïËnÄ…ZE¡ îN´¢Ô3PkÙËä7Ç£ Ô¾Äj±\+Cž |®° K‹ ï5„EœË¤vP9âT¾øY‚™Zïú»æÆPfìÈ I*J¬¦ínï‹;w›¼9xºÜäMîE¹2KÇ ÿ,9NÅ„ã…åjÐZ®÷æ ¥ïa¯.’Å®nŽDxH ÉHØðN ˜Úør³Þ bÓÇúÏÁ㊽‰¹žÝïoj%(ÓôÖ‘ž²Ž½Q :Ü»ok²râUªÐä¾Lõ4ÝØ®ï÷‹f•YPáµÞsõÔNzh™%… á‰/ÏÃNÔ-–›üШE<¦ÌõÜ9ÂL›«ÁiO>‰¯A‰r'áÚX1«]ÐZ¶lÊjõœÿæx¤Í,:_Íî»Ílþ”p ©çü:L¥f¤£‚ŸckL™€¿ †6FO–¬‡ì8Oî)“zZ+O¾I”qZC-¤\†ó_ÔNërGá:”+Ù½âÐN6;Ì鄸e¹K¾SjÕ½¶R?ž–Êí„)ðÂÝÇÆé³##ˆŠåÛåüñ<[ºÂŸd#AZÃ…&ŽRð컊Õp… RþÏ V èe¢3ðhWÌ AAZP}lØnȦ:TÁæ¸Meå¤Ì‡æÍ%ÛÆà;‘}&¹í46E&¡‡¨+Öç¶Ó;CA8i]¡´N £-·]B©üö®u7Ž[I¿ÊÀ¿œ n“E‹,‚lrp°Øœ ÉÁ»1ERbØ– ‘’õ¾×¾À>ÙÙsiIÓdsšÝêѹØÖpš_ɺ|,5ǧA8C€”ƒÇ@ÁÛÞÿqqÖ{²ã'ŠØ•d ÍK;…™”4ý^žOüöæb›ž¼Åùø¾‰ýM#þò÷0ÊZËRAÒ Ãœ°„Úl‚’ŠqÐ)Mpw¡ÒåÂÞ1s2 t¤ÆJ©ÏÃ#õèó)Þ¦÷Œ–yb †@Xce +r#UÄ°Ž˜ÃÁFnŠ/8Ðxrxÿ½ä}|ýð¯Ö§A¨2.T"KN'n)õíÀÏ 7]3Àœadà¡b&|}ööâüŽ£þ^…Iq)r Ú&\xßNèìêóMÜáormÆ<*Ê GˆÊ‘­3h’æJú:У«¥> ðª¬äá‘¶LrX–8U\œ X06”°¢˜ï7V™eÀŽLß‚³€ñJmÃm𸗯?òŸßŸì¤u² Œìîçl·q²K”,²_£’o@$Ž*€2õDÎÀO3Ä9x,‰¶ßAéé¨ô@*Ÿiœ L};ÿO®˜e+b!ÐÉä+èYâÈk]i•ÄKËèaüÞ.æâ¢!‰Â(‚’ùJ~‚É8n¤"}Á‘ÌÀƒ¶TþwRˆq~D±¿) ÊT¢üŽLö¦QÞüËÀ2‹µÎÂãJh lâ,ˆRÎZ:ëëÛ‘²®XžòØI8µž#rÏÁ“›’°r»Mû“ƒrŒ3 Z¢Î&ê ùvº»{?G1™p~K£“IU ‰HtùÅdÆ€/SL&‚ ¯õ‹ÉD%½Àb2cðŽ.&“§¥HL\LF4®§–LÐÎiÃEÔ’iÑ£2‰ä¹¶èçUK&¼5hŸ39@:$æ«%zÔ,˜uÐ-YkÉÔZ2µ–L­%SkÉÔZ2µ–L¼–L°ž¬ê”i+k:•öj-™ZKf µdüÄ4R¡IN`#….JCM2åŒéEgÑô$$÷ƒ,D—vµ ŠãJ%ÄÄDz»¹< r‹§Ú ±µ©±FC˜~?%[šÜè¶rð <šuÞ-lqÆs-Тc ©žÛÁ±çáÊLØá@•°3Œ{žÜÔ h{â{ðw/Æx†•1ƱŽOÂ6F)ǯû©ækþ92ArðX]âÔa_ZÍow?_¼ºùùôìÿà¿?YÆ“*ˆ|u0Ln}Î.4áÎÁmg˜9xN`vi5½BöB§XYË!¶PÉ kÑ`™Ã‹åfr/øÿxÞto9.TÍßÁl$øÌê„ðøH̰žƒíx/4+Õ/žNâKž‡‹æ’ÐÑn Ub5å¼…£9&Ä`<$¨Ø¹–š>*ÐxÚ˜#‰ S—Î8G¢€ŸZrJÅNBÀ ®k•9Îw$àÑÒÔñˆß9 Àê>e¨œŽÜ„õ'Yu6–,‘<\Rùã€Rg! 
ó󺦖´‡Ñ&\‚ö5íäó6 8ÊÏqL9 fŸ¼,Ðß{†$@d *ô€‰Bü0¯E”Nl‹s»PÂnüùòóžM ‚¸(2P9=ÃÜŽÇ‘¼¸W,¨®¦¨K£ü!7ö° b !9eE¹#†¥|ˆ¬7ÐjŽ1ÏÀ£M;œ#LŒ “=-š’Öi6”cäãgìpˆfú㤙x\²ä_l¾„“Ö‰>¶ö‘RøyîJ¹ 9=YÍ0Ú9x´é­–)%-=Ö³ÂÃNÚyÌS1OEŒîTÙEÍ ÔåÊ.o—àéõ¥7Ëþïvà䢸0j¶\ lÊá”Ú£ËÕ\}šW0¬8¥gbF§z]–ƒÇ;ïÚ5ZÓ¬‰ûõ[O‹©Ìç1S‘ê7¹Fv8@2r†¡ŽÇJ,¥dXhýk’R¦Þá ‰Ù Õvä‘בóq0TV0ÃpgàQPätdJ€q'^ZbÛ@É ‰Û Ke’’fC,§¯Ú‡ÇéR™ ÷dø@GÚ„Žô…[ÙIOÔ`òí@C¹Ë­Æ)¥Ðfv!HLŠXD“ ¶@J'j%M±4„qt8d sŒõp<.s¬÷úïæ.b[ÚÕ¦ °·Á›I¥e‰ô£ãg%h¡¬26‰T ˜ÁOóý9#à1²¼á~$¾xØ 8•Œ]ÁçÐåªð²S2ig“.àÙd€—Ñ)M_è,Ï1…Îp~þÈÈÇ*Zi„;×’”SIG†Û'&ps×€––-IZ¨ÜNÏàr?NƒÖ2Ç.Å?’›MÈ Œ“2Qð¼mî´=r6¢B‚$JTzŽ@žû1ìî9“ƃKս鑉gqê©ÛVÖ¸²ÜI“#ÁZ!­P*©vÈJ3 ´•hÁˆ$+€\ÁÌó ¾ÍïAn2!7´Îß¶’”J%Yš\}ÃK Òz‘Ûé9¼yîÇ‘K²­¾Ý‘|`£=.Á¸.TžNEE)]¨$ÛoS2“Vœ^ÞnAúFÃ0&†œ õÁ@ 7«¶¦ÚzõËæ@Ëèî:sôAw(¼ ²¾{wëGÚ¿ÛÉê¿ö¶`£ùâ[0—mñŸî‚÷»<›™³zô´¾«FÖmÕŸÎâõå«õ¦„Q÷ ÝmƒW7mx¨öLÀãÞb—¥vKÕÿˆxá‹í‰d1|"é£wÿº}îºÙÔ8êÝÉFIý¿¾¹¼º¹¼ýxöît}Ïùe·òæ›ÇÛKöKÂñ'9à¶Ò‹Û³óWç§ë·?_Þø{h­ÔÃçBøv,IÃ쟵9ݵÓZ÷ =¸nÙ;̾Û} ðèëþFw_«­O6à ûþv ‰¾õðÃÍñgl‡¼ùÜSèv&ò„ý¥‹Õþçþðr¨à~ëÈ\_µ+ðÑ“C¼õ-Ï·÷‘¾>¿¹ôW5Ý^]½ûíò6d\S^Woúu<éRü·ëÞÊ>EuV‰ðéÖ®ïÈžV¤œÅdÖ„ogô² ´TÆm1;ªþYhå·ÖÚÿ—”ŽÖ f,К‡ÌÊZ µh­ZkÖZ µh­Z‡hõÖ“»¢Æ¥­ì=®«h­ZP •'¦/i)ØSLM`2"R U¥Ü½ç5غŠüÝ`õÖ£{ÅH¯$w~sèmµç~N6l¼‡–ò^•e—^~zwó®õOV¹à˜z·®/ù˼nÎŽ$ýŠ>1lôüë×¾YãCñ+–V0xÍovígÈïòõcDë×D'«óK(œ]¯¤žx¶‘J4R†þNÇ={>t‹öÇ!ÿ£9~lHQèŒ^ºõëaèžtàúïï<4p» ‹wÔVÈËO[Wªùãòö-{½g¿„œî€#†_/n÷¥«n?^_<üгì»ß}õý«÷§í6µßRüËÍÕÝõÆsý7ö7^n®ZçÏ÷CÊ‘&ôù·ô_zôŸdÌ•§R²Cºß1ºÇîKMcζ]ïà­¢xÒ1é¿ÁúИìvÞŽô3vöæó ïƒjmÝhO9D òLØ~ˆ6DÊ–Ãùæêö‹!™C6Nwª0üö} NvÛaƒºÃóù×>X>YýôÍÕ·WçëÝÏOV?~¸â u×.õÍÃVÛ‡uò‘®ù¨<§>†$®fü`È¿P)qÜ`¤¶¡Šÿñc üo_p›ïÃÞïžClGáä¤óáw^ð_}8¿¾â¥÷E`—.ù1{ªò§í‡ÿ°Ãyï ›Ì¶ìg†xZ„ýôW{祳+_‘¹‘75rDÓÁzO‘=DÍÞÀúUG«½~{qúîöíÿ°úùêo_o€àûJÑðŒot£Å‰×°{@{}x¿ÉĺPç¸(ÔX]IêÈV÷žõh!~ñí_û–áæ#Ÿ1û“¯“ê2¼Xù%¾ãëãÁ/ß_ »¾›CxŠ­ŠÒNÿ‡0Ÿ ˜C†ì3kx.›±ÿWŸó2]šÎn®´›^¯æôà ¼ó˜—/þú!ùœõvPW›ö?¾h]œë^¨æ}›ØõöØÔ¼R¿½ZßòW-iy² {Ü¥õÝÏÿàÜn&Ü\ü~yñÇú_<ÕÎïs)Ö?T2©^|²z{ºæ ìâwFë#¤·më½6+e}Ç}^xÙŸ(&ÇÚçEåL”ɲ®zfÎý¯n2SÞ}\Ý…½4p¾=ýpîÿÔìK„vò(û¿Ü›H«û?É”þW¤¯ ·›ÑîMñz—ⲑçêå6åV½WÔ6”å#Þ¿{w{yýn3j¬¼n.¶ÓÉáz›-’ÿÓG¦`ýÀ‡úì>ÿ,îçzl÷LG6¨ééÃ’²÷oÆw>2ô $8ÍdÚh4²ì=†Ñ=–2á£Ã8ŽîtÊ»âà†“C»FÒ’½(*´’ë….ŠòÏÕÊ)¥SéÅÜ:…ºÊ_^"¥i|)mû2¡SZ8€T¶ŒsàœYZ&ôpôª[„ñ™dBgH'BM‘ 
ƒÌÐÌ™Ðz`&´þµÞeB³YB&´Ü¦G~¸;½ùøÙJ†¤¨Ý¿áÿÿðeÍ}®¹Ï5÷¹æ>×Üç?Sî³s¤ ¦+øû«è¤¬¹Ï5÷yQ¹ÏºþŒHˆ_ñÚAçÒÊB‘?Ù/ÕÒÅ#7ß¿ýß ®TðEs{7Ré&JIªsíÕö¸­¯5ô0p»»þõ†}šÀÂW)–Ûïàuº‚ !ÇæGA녅ȱÅÅÙoŒX¾÷)H­W§¿^}Öni©šW¡!OƒÛËw«0͉Pÿ9»î±HéÀàgÊ”Cv¡žÛÿ÷¿ãǽ{û"ö<ôÏ-`Ï’ö_]X>`ÏCÖ½ª¨ì5`¯{ ØkÀ^ö°°« V³Ö6 àüÅJ$ÁqC¬µMSE+#]~mÓ1àËÔ6 Èk½ðÚ¦QI/°¶é¼£k›fi)%Ô¤uÇQƒBöÇîŽ]T)¨¤5A¥g+=»0zVI%%à㜞ô¬¨;ÙžÅèY_¸ž|M¾Äò…ëÑNHÏ‚Ö ²>ÑÐÇχêgÎÒx:F%%1þè{n<¿5ÖÅ"-‰jNžÎ##Ò ›½Ä`åé*OWyºÊÓUž®òtÿd<ÛKYÀ»zog®Fn5r[Dä¦9pÃäÖ ÍM438e­H”ì í ^¢5„Aì•è?Ñ|<øRDs/‚¼Ö‹'š#’^$Ñ|<ÞD3w®’i-¥¬œ”hÖ e¡—hFÌ‹K¥Q¢R`\yžÎŸ“ºÑÍíXÚS€³²1$e?O—õÓõÄLÓ^–W±¢xÚŒ K`Vš.€#Ú¹$8K‚y•¦ëá_"]>M7|š.‚ ¯õÂiº¨¤HÓÁ;š¦ ;Tà0­¥Ü¡tº§ä¾<*'P‚°Iô žž÷ÞZjÃtÀØ*;)ë¬nØ ¶…j‰4¤RÎYO>ô†ØÕJ#YÓé*MWiºJÓUš®Òt•¦‹ÐtÁ^¢íX|5®ÒtË¢éxbZc´Õ.~Xh§eqšÎ4ȳÁh6‰„þ Ï”4éÆ("{êÉ“_Â1•9Ú)7/M:å‚ü48ëj6]’‰Htù4Ýðehº‚¼Ö §é¢’^ M7ïhšŽ;Wá¾ËDyǶ†I©c ±ÆõP9‚”ÂÕѲŽEHT§kÛõ¼ÅðÖào6uié@gNNÓ…•$9™JÓUš®Òt•¦«4]¥é*M×OÓ{É.™t"mW¨Õé*M·,šÎÍ«¬°qš.´Så¯}äç‚°žªKEnÜNé)i:'~Ciz²élî§CMw¡C;kç=ôê;•Ú°2IpR«ZÇ/É¿D$º|šn ø24]A^ë…ÓtQI/¦ƒw4M:7À¥U¨4ïø-{èÕˆFQ_¿‚%‰0*©eÑt[u)(žÜÖñ§¦éÂ[;‰µHKÇuÒ"'§é|àïáN”Dl‘¬4]¥é*MWiºJÓUš®Òtý4]°«*Ü –´«ìªJÓUšnQ4mŒ•hs¥é|;Ñ­ÓRˆ¦óÏušŒ›X@Æ"j1éõª±ìS’:ÌÓ9^ÃÊ1ŒD‚Eh‡fÞû6|§ÁÃ'H‚óqzåéRLD¢Ëç鯀/ÃÓEäµ^8O•ôyº1xGót¾s¶o(yL-H5í}l`p}étyPI.‹§ËB¯ úÏÍÓeIGËyº8Õcº¡ôLßu’ï`ë|¬½~œéödIèçÚ†ŸÉ$Dg:g:g:g:g:g:gºs¦+ý¥%Ž¤Û¯jÄäLçL·Óq™¥f!¨j“éJ…`w3]ãúÿ½ŒóèáÉE¯¶Å$ªpÌtRnaÃÖNZêÄ^eº‘pú¹ë“3݉¿4Zt}¦› Ó5ŒU/ÎtÍ–^éfòN3ÝÐS Åže:.[Bã ÓEµÅ JO?7hø»™n¬u>Þ9gº±dæ{Ó9Ó9Ó9Ó9Ó9Ó9Ó5˜n¨_Ug:gº¥˜Nʽò˜%5™®ÖÑÇâž›˜®|nÒp¡w¥”›äI¦‹$Ñx2›N‹Èæ›¶È×:~•éêE5boíp­“­”œéNü¥Ñ¢ë3ÝLø{˜®‘`¬zq¦k¶ô‚L7“wšéêÅ“•·þþSJn™v/ÓoΘn(êç.K0]Meù­¡ŸÞàËfÓåo-¡¬!ng™ë‹{Ó%‹~–¯333333]ƒéjIdLÜïW1¨33ÝRL§…ß‚$MÖd:­G=ÈíGH”ÏULݳÍö:à™tc!8Qº”ïà(d"í;½Ôæw'Ó …KÉOèòK£E×Wº™ð÷(]#ÁXõâJ×lé•n&ï´Ò<¥âçÌíG”NÃñL鯢Âb½Ž¥ÿ¹¸øïVº¡Ö‰ðâdº±däk^]é\é\é\é\é\éJ7Ô¯R®t®tK)]*kNµšJWën_óZ>7YRîÌÛëXŸ=èÕ@,œ0m!ÿp1¿c'© ÑZ·¡ôɾlOñ¡Ö±¹ˆÜÆ’‰ýç7¸ùÀÍn>pó[cà6Ò¯¨O¯ðÛZ·üVƒ¤ù'ܸ•ºÄzûÑùsS ¶o R—¿ó“« ’lŒˆ’În$1B'i®K»¯¿1¿¢\"ì†`öù½?œ7Ztýù3áï™_ÑH0V½øüŠfK/8¿b&ïôüŠ¡§?:¿‚o)ñÉüб¨²Ó•ô”{Vê§§ŸYþv¦Ëß:ÿÃjgNôÞ:œÞdº’,ˆ\Hö±›—3333333Ýq¿šb$¹ð>¤G!;Ó9Ó­ÁtdRâÓåºà~¦£Ü9 ÆÞ ”ë¶U¸q~lÊì˜é8ÔWh‹)5oõ½îðܪ瘮^ËSˆB?œ}lâLwì/­]žé¦ÂßÂt­cÕk3]»¥×cº©¼³L7ô” 
ð(ÓI[a3Ý`Ô£³.þ ÓýJ¯Âûé#×2¨ý[ƒ†ÎÎ~¿êâ{GÿíWDáÔ>¢èW2ò=ÅéœéœéœéœéœéΙnï/M˧~¿ªÇË‹éœéþÓå&‡(Z6¨l1Ý^>öN½‡éêç YÒÎ ”ë>yô…Í(lð˜éb¾…©ˆ¢¶oõX‡>–^eº¡p|·¢®¿4Zt}¦› Ó5ŒU/ÎtÍ–^éfòN3ÝØS*=Ëtùµm‹O˜n(ª\‹éÆÒ Ó µŽÂ{{Š&;þ«¿3333333Ýx¿JQ|·"gºµ˜.n ³µ™®ÖÞÎtåsËy{Q¬så:>ØþôF¦£ÍÊŸØOfÓA¹…ÁÊùͤ¥S´W™n$ÅQ¥3݉¿4Zt}¦› Ó5ŒU/ÎtÍ–^éfòN3ÝØSJéÙMÅnÀpÂtcQm­E¯céáç´Å¿›éê·FÆÎ!¾¿êÂ{›Šÿº¢\J}6333333]ƒéJÉù…Qûýª¨33ÝZL—˜å,ô¨ÐÜ›n¯Jw3]ùÜ”¿ŠAìÝ@–Ÿ<úi#Ôtröc¾…r;YûV¯u‘ã«LW/JÉRÂ~8úX;ìLwâ/]ŸéfÂßÃtcÕ‹3]³¥dº™¼ÓLW/΀ÖÞ"äWÝÃgÿ¡Æ él6Ý/D…´ÓÕTBA-]H¯_Æt{ëp‚x¡uäc‹ÄÇ™®^Q9‚Å ÉØgÓ9Ó9Ó9Ó9Ó9Ó9Ó5˜®ô—S$è5°!áL·ÓáÆAóoSƒ6™®Ö!†»™®|n2ãÔ}1-uHOΦƒ óð5ò1ÓQ¾…[‡éj]Lô*Ó勿‘1§~8Cgºž¿4Zt}¦› Ó5ŒU/ÎtÍ–^éfòN3ÝÐSŠåÙÙt1?탊>í¯G•µ˜n(½%ü.¦h\/îMW®Mb r!úÎtÎtÎtÎtÎtÎt ¦éW#%?B™n-¦£-?š1¿¬‡ölºRŸw¿‰éÊç&€ÎtcÉ|6333333]‹éJ™»¡ªÝ~•)Eg:gº¥˜N7.¿LæŸ/†ÿüûœëBâ»™®|nÒ˜ÿwèÜ@¹ŽøI¦SÚ 17ü1Ó•A%Ë·zû/ݵ.¼ÊtCáÐgÓõý¥Ñ¢ë3ÝLø{˜®‘`¬zq¦k¶ô‚L7“wšéÆžRž=Bpó$Æ’ÊbJ7”žÂ—)ÝXëЋk^Ç’}Ì6u¥s¥s¥s¥s¥s¥s¥›ëW%‰++ÝRJ—ÊZV1‰ÚVºR`ñn¥k\ÿÿþ{}d“•lÈýÑÉÖtVou.§Ý6“Ö: ïN¦+e0¶Î2ž½NÌ•®Ç/]_éfÂߣtcÕ‹+]³¥Tº™¼ÓJW/Nò{ÿ)…éY¥‹ÛÙÆt{вK?÷ƒâZHWS±¡õÓ3Ù1¯õ[KîáBW)Ÿ§4›®ë/]ŸéfÂßÃtcÕ‹3]³¥dº™¼ÓL·_œSJÖJ1Ò³L'´1Ÿ¬yKúó¤‹?«t5•p¢Ò'û.¥Û[G£\ia~OéêËùNF’}LØw¥s¥s¥s¥s¥s¥s¥;ìW%"AgÉ^­ â§¼ºÒ­¥t¸åGs¢¢5•®ÖéÇñ 7)]ù\Ö,ô^™s]î3žL—’’3¥3ƒѬw«çºpptÞ¸åT -õÓGÁo¸•Ö±”ÿáú­‹ñ^¸å+bJ‘ñB²ãÙÚ>pó›Ü|àæ7¸ùÀí£¿Ôh&,á´/.' L×ã/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº}pgÉIû«”CºÓlÆGÙtU$Κü ©ß›Òþ\L·«æ^õµ‡øgaºýWcK¸ï|z /ÇtûˆT†|cøº†N`ºÀtéÓ¦ L˜î)^fÎäýÝèÓ5ŽÀtéVÀtåŬ N›˜n·K|zmºú\Ê©Ì._JWÖ¦Û»ÉælG˜ÎLÊŸ¦·¹/vŠËÜܹv¤íª¯…Ó?íàV½Sææ¾w0ó·2¢r™{ï(³(*·8¸ÅÁ-nqp‹ƒ[óàæ.ˆ54u㪀jÜâà¶ÔÁÍ·DI•MÛùÅ-?Ø<éàÖÿO?Œ_ì’_™_‘·T6È~_᛺vŽ>ÕÜó­ù#â³F~EïÃyãëçẄ?'¿¢¡`ÌzñüЦ§̯˜Ñ;_1´J=ßò¸"¿‚L6ñ£üŠ1©kaº!õL–_1æË÷aº1e˜.0]`ºÀtéÓ¦k`º¡¸š%ŠŠ¦[ Ó ›gÒLW¯K¥|>¦;ÿÇ ÄÙ^”?=ÓÉÆBjúÓa*S˜kÖvÿnGŽz'¦{ˆ#Kî}qŒ× zü¥åÑå1Ý”øS0]KÁ˜õÚ˜®íéõ0Ý”ÞYL7¶JQ†k‹ŠKýôC¯1ÝC³›ÛRÍ—Âtcê™>+›îëWײâþ†w²Ý†é#ª´—üö Ó¦ L˜.0]`ºÀtǘn—‚š­M¾ìRdÓ¦[ ÓÕ“…IËÿ¶0ÝÃŽÔNÆtûss¢,ý Tì’]ÛûOUù5¦ƒ:… È´}ôÙíàÞÞAÙ3´›=}ÙQL×ã/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº}pÉà¤ýUJÒµ½ÿHaãH7&t±’âUÊ%^¾©$ógAº!ïèS¦áån1—MÂÊØÒ¤ H. 
]@º€tÇ®ÆËÈ)£wãªr@º€tKA:¨¹l†bMH·Û•×ülHWž+e_‡å_zˆÝ€¯„t¼!C¢×W^÷)\ ~¶¿sïvpï•×Ç YAÚ9½;H×£/ ®éfÄŸé Ƭ‡tMO/éfôNCº}p$ƒþ*e~í•×r¶Ø˜®¼ŽIu²µ0]U•Sù;»tÕçöY˜nÿÕPBtÒ7¼ã÷•ÿRV„±õ•DIñÀtéÓ¦ L˜®éöxéÀÞnó°3ˆ’âéÖÂtX1YY±‰év;¦Ó1]}.’Ø‹\¾'P‘èéÚ+¯TvÞŽ¯1Õ)\¶œEmSénÇH·bº:¨¥ša}qöÔÚ 0Ýixt}L7#þL×P0f½8¦kzzAL7£wÓ ­Rž¯Åtb´±åL7&ÕÃt»z ¢”ºê-}Xeº1ï@ºñÊë>"sçÊëCäÀtéÓ¦ L˜.0Ý1¦Û㥂s»×ïî¦ L·¦+/¦Yr0ç&¦{Ø¥Ó1]}®WJ×¹\ô°Ã+³é2l‚ôuç?ä2…=•ã·§zµ+ÞÄ[1Ý.NÔ©Ý îaÇÎézü¥áÑõ1ÝŒøs0]CÁ˜õ☮éé1ÝŒÞiL·^³¼ú«”^|åUsÞ2ø¦“ʼ¦ÛUYÑ…o„ƒ,ù³0]ùÕ˜Ê&4u0Ýî'½Ó=”¨ãʔӦ L˜.0]`ºÀtǘn«,šˆzqå˜.0ÝR˜®¼˜ZvvåiíK¯»>]³8 ÓÕç’)jo)Ó…˜Žic¯5_^c:©S¸uê\ÏÝíPäVLW…r(sË}q˜®Ç_]Ó͈?Ó5ŒY/Žéšž^ÓÍèÆt#«$¸¶D‰Ÿ›õy”Jº¦ROFŸ…éê¯.Q9k×;åÿ}cmº}D4Ð o( L˜.0]`ºÀtéÓµ0]—%ö u>Ìívœ(0]`º¥0]y1Õ™Ò‹ï·¿þð«q’³1Ý>¾;çN5–‡\Šél³Œ™ü5¦Óº5VÈ$í£Oµ…{³éFÄÃèóÚå/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº¡UŠ(_‹éH7Ò#L7&Uq-L7¤žíò醼#tã¥×1eÑB"0]`ºÀtéÓ¦kaº/ÅБ½WÙž?̦ L·¦ÓZóÍŒrÓívæz6¦+ÏU)Óµ»1UK×fÓaÒŒù5¦Ë[-(îàn1»ˆÝŠéê œUT¹+Ž1ú¼öùKãëcºñç`º†‚1ëÅ1]ÓÓ bº½Ó˜nh•¢$—b:(Kññjÿ¾T¤µ0ݘúüa-$†¼Ã(÷aº1e-$Ó¦ L˜.0]`º¦«ñRxêd©ïqõ¹wY`ºÀt+`º\³Ô„Y›˜®Úe0?Ó•çæECênLsJ¯Ät²e&?(Mgu;p IM¡»šÞJéê Fœ±+N î¼öñKãëSºñçPº†‚1ëÅ)]ÓÓ Rº½Ó”nl•Rº”Ò•ÝØÎÉt»L™:m½¿~ÒbÉt»*Gr}Cý«o_Ó”nÈ;êp¥R–SÜy J”.(]Pº tAé”®ÆKe÷Ü«J Aé‚Ò-EéÊ‹)FÙɨIéª]NΦtûø&Ù“ö&˜~ϯ8³Ñ+mŒ‰ý ™ÎëÖ¸0S§ ÍnGßkŠ_ŠéöAs½¡Ã}qÉt}þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦{ .”\ú«TÆkKÓQÅt‡w^Ǥ¾h•úS1Ý®ÊÍT¡¯Þ> Ó yÇéÆÒtuDMfèo„qgL˜.0]`ºÀtéÓcº=® iêÔ’Øí50]`º¥0]y1¥l v0]µW;ÓÕçfÍ–Ä{¨œšX¯Ät°Yqk‚—˜ŽÒ>…©Ö o)}ØИî1¨”…ß÷\‹(0ÝkþÒòèò˜nJü)˜®¥`ÌzmL×öôz˜nJï,¦{ ž©Vè¯Rš®Í¦“D›èA6Ý T²¥0ÝC•q-ÑÐWŸQ> ÓyÇ0߆é#:#g~C¦ L˜.0]`ºÀtéŽ1Ý/Ë6ÉûÇã Ñè50ÝZ˜®¾˜YSYž1µ0Ýn'F|2¦«ÏµZÜ1SîM ƒ|)¦ÙÊ\LúúÒ+Á&D^sÑÚSj³˜âÑ[1Ý8qL×ã/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº¡UJåZL§`@>ÀtcRób˜nH}¦ÏʦóŽ ß‡éÆ”y\z L˜.0]`ºÀté˜n ®–éfQš.0ÝZ˜6+¯oò²71ÝnGrv‰ý¹ ŠÞ™@æRÎ>WfÓ©l®èN¯9–9 b"Ü&ŠÕ®ìUïM§GÑéµ`]ŸÓ͈?‡Ó5ŒY/Îéšž^ÓÍèætC«Óµ-$¸†hÇN7&Ud-N7¤^‰?‹Óyç©mÖåœnHYNœ.8]pºàtÁé‚Ó§;æt{\Uõ±»q4Eqºàtkq:ܔ˙J-5;½îvõz÷Ùœ®,Ÿ®þjS“wþ¶~ãµ×!e€œ.8]pºàtÁé‚Ó§;ætCq•%§ N·§+/¦£¦œ¿Wáùõ‡Ø0ŸÞD¢5þÈ8yºòÚ+n% !Ò¦s×T$´›Òîv¢ëÜÔgöO;¸ xÇÌï<¸ (ó§‚^qp‹ƒ[Üâà·8¸ÅÁíe\-/©%Ôn\UŽƒ[Ü–:¸ñ–“)~ïrô»ƒ[µKxöÁ­>·ÌvlvYÚí2ë¥õŠ`3Í`¯»ÿB’ê‘Ì:J…ØåÞ{P»¸ò^¤,]q‚÷ ºÎ]?¿bFü9ù 
cÖ‹çW4=½`~ÅŒÞéüŠÇàhYßX¥(]›_AZW¬ƒîƒR‘×Ât»ªòã4Ùêó‡åW쿺îxûÞ‘œîÃtûˆ9 ´{!ýˆ{PéÓ¦ L˜.0]ÓÕx©ÊäLݸª¨‘_˜n5L§)q§ûßÃNýL§©È4w.èW;$¹ò”æ­ QÆxé¤LጠŠíTªÝîî²âû ¾×Në‹3ˆî]þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦Z¥übLge½g“L7&i-LWUYâ25å õ–> ÓíÞp„ÜõŽ%…û0ÝC™“tn…?”ydÓ¦ L˜.0]`ºÀt L·ÇKA³~\e´Àté–Ât²im¾ç/àÓ¯¿‹ã阮>×Ô!u&¢¾<†éÌêU’„ðÓé&\æ*¢´•V»ºU½Ó ‰#ˆjE]þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦[¥Ä®Í¦cÝœýÓIͺ¦RÏ(Ÿ…鯼“oÄtƒÊ,0]`ºÀtéÓ¦ LwŒé†âª=5/ L˜nLW^L4M–š˜®ÚqæÓ«Õç™zo™æte6]9¶¹½Æt¹La“DÖ¹ß^í8ÝŠé†Ä[`ºixt}L7#þL×P0f½8¦kzzAL7£wÓ­Ræ×b:£MåèÒëT†Åšÿ©Wû,L7ä›ÿ)øô˜.0]`ºÀtéÓ50ÝH\•µéÓ-†éÊ‹éœ@ÚÙtÕ婨ôI˜®1þÈÉ¿'Xœ‰é|+¾G~Mé¬Ìàì’´…Ú>Ó³ÜJé†Ä1E2]¿4<º>¥›¥k(³^œÒ5=½ ¥›Ñ;MéÆV©|íWFßXñ€Ò IX¬4ݘzý°;¯CÞÉOßÛ.§tcÊ0’é‚Ò¥ J”.(]Pº¥‰« ”.(ÝZ”ÎvúVNUÒN¦«v€éôdºú\–DØi}°Û‘^šLg›–iúÓU†I„:=Û½‚{JùVL7$N)’éºü¥áÑõ1ÝŒøs0]CÁ˜õ☮éé1ÝŒÞiL7¶Jå«“éòæv”L7&Õ»ó:¤¾ì3> Ó yÇùÆ;¯cÊžvCéÓ¦ L˜.0]`º©¸Z+Ö¦ L·¦+/f­“Êš˜®Ú‘ºœé|S0@ñÌ ¤ žøÊ¶ ¥×ÙtœÊTG«g²f³˜Ý.±ÞÚAbH<3ÄÀt¯ùKË£Ëcº)ñ§`º–‚1ëµ1]ÛÓëaº)½³˜nl•’‹;HÀà9ãüj_¢†/…éÆÔ—ÄGaº1ï˜Êm˜nL™sdÓ¦ L˜.0]`ºÀtǘn—bÐîþøUÓ¦[ ÓÕÓÔÌSò¦Û턟*eŸƒéöçº qoY.¿öBLGº»Z~}pƒ2…k¸bi+­vD‰nÅt#âJàK¯]þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦Z¥^468Ó‘³ÒüjÏÌk•¦{¨Rsbí«— Ÿ…醼£t_iºÇˆõjå7”±¦ L˜.0]`ºÀtéŽ1]—R^SiwxØý®µ{`ºÀt `:Ø4•! `ÓívøT å$LWž ™“ªHg;ð+KÓiÚ¸¸žò+p+G.3½}¿½Ú¥O-qØÜ#ñ?ýáÏÿù׿üÛÿüá¯ÿñ¯û?Ïÿþ7šóÿ§¡þïbSÖÝGTüÚéÿK9Býûí{¡ýìôË?–püK=üòÇú§ÿeZ¾Þô+òW‘iÇ2«—¿”ü]y]ÿRÕ—#–#¡dé¸în­9¸*he$]qò<=¿pµ†G×ǯ3âÏÁ¯ cÖ‹ãצ§į3z§ñë>8I9¯i•BãKñ«m¬ùõeæ1©ôª±üÏį»*Æú]õ õ? 
¿>¼S¿ù¦¾w˜oįûˆbå¿ô†2‰,ÉÀ¯_¿~ üøµ_k¼Ì)—HôÆnÎ_ÇÕÀ¯_~ÅM‘’dckâ×j‡ª§ã×ú\Ïœ¡{ (v˜àBüšiËZCÒküJe —sbSén§~o–dÔ’~ÿ3~—<0]¿4<º>¦›¦k(³^Ó5=½ ¦›Ñ;éÆV©ï…üÎÅtš6?ÀtCR Öª98¨^>«æà˜wøé¾Ãå˜nL@`ºÀtéÓ¦ L˜îÓ ÅÕ,˜.0ÝR˜Ž7Q,¯o§nµÏpkËqš$§€O=ªÐðèúðiFü9ð©¡`ÌzqøÔôô‚ðiFï4|Z¥Téâ+ºì˜ñàŠ.oY œ•Í´ùQf·Sdz?ÊÈ–¸<¶D¸ö§ŽjGùÒFPÅQVþf˜_ºjHªóZ-|‡Ôs’Ïjá;æ|§Sfœ.8]pºàtÁé‚Ó§;ætCqõ¹)]pºàt+p:)‡G@/Gï¼Àåé~ï]Î]i ]qÑñâÓðèúœnFü9œ®¡`ÌzqN×ôô‚œnFï4§Ûgv~c•²tmcZ·­Hj<*ìÅö²³úÏ)Ép¹N’ùš åI’5Q»xpUƘô|ëhåDÇÒÛ;d½Àšñ†æ9Àê]W­}©”e5ë” QJÔWÏÆŸF`¼#ÏÍU®'°#Ê$ 6lØ °A`[v ®:rØ °KXÝDÙÙ ÛgÅjGIîí9<$Ž<l­5<º>m(³^œÀ6=½ Ñ;M`‡V)ñ|)±MElÞ¥zJÖn+Qí('¿uµWN«}o7<ºþj?#þœÕ¾¡`ÌzñÕ¾ééWû½Ó«ýÐ*¥|míÔrPØÊË{°ÚÛ&E‘f¶öW™Ý.=}o;é«L}® då¶«v;Ö+[:â&ÉÊÙŠ]õ¶TI‹}•ÙU)£1¼¡þ{Iô¿í¯2ïx6¢¾wôμø}Ä,¦o÷eõ+â«L|•‰¯2ñU&¾ÊÄW™ÆW™/!%'éÇÕbG_eâ«ÌR_eÊ‹©ÙjσÔ{U]øVN7".#ÆW™.€ixt}N7#þN×P0f½8§kzzAN7£wšÓ ­R×r:ʸéQíTßÀ¹– ‡ÔÄtÕNÇtñÿôãøäreMq5Þ˜Uú“š+3;¢žæÏÂtcÞÑ1ݘ²å+Ó¦ L˜.0]`º¦Š«ÉÓéÃt^ÎŽFIÔ©ó;H÷bº:¨j9ótÅyb L×ã/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº}pN Aú«bº¸Ì,`.¡ã>¹X‚”)w8³ËSG»Ó8Ýáø?¸ªœ@®,r%o%†ªù°«^Hý^”ägsºõÈøiœnÄ;FwrºeétÁé‚Ó§ Nœ.8]“Ó ÄUE Nœn%N'i“š:‘5»¶?ì€ÏîÚ^Ÿ‹„ÌnMN¸Û\yJ2lÌågòË“[•o‰ö¤ª»ßZâ!Ž™º~Ì 3Òì°ª–G—GšSâOAš-cÖk#Ͷ§×CšSzg‘æ×à%¼÷W)Êz}E^>¨ÈûÀŒîú†Ôï¥+~*§{¨sï«gû,N7æyª‰{5§{Œ¨ Ø.¬ÿÛ/ÐàtÁé‚Ó§ Nœ.8Ý!§Ûã% vã*ü/{÷²+»øU×…ë’ГŒx”A€Ü<=ÉÛ‡dÃe‰,Bg/ôÈöêâ/î’(}E‘ˆNN·˜Ó‰SÖÚÞÀé¤üËõN§dóèÉ­Ô%»Óé@t“r‰ºïtPÎas&ÜC×:Hœuº©påá3œn0]ßé΄¿Æé: æªwºnO/ètgòžvº©«‘Ýët›§ƒW„'£æµ¶mŸKÏ߯:ûc;Ý\ïän.™Ä{¯átátátátátát§›W•â½×pºµœ6AðL®ÜuºV§èW;]ùÜ2  1çÁ TêPìÖ‹Êc[ý™}Ÿé°žÂâ‰úo«Ãg§ÓÕFËßGÝuáí©2˜îÀ_:=º>Ó  ÓuÌU/ÎtÝž^éÎä=Ít­q,·üÙÇW)ä{ßâ-‰0Ý+‚›Zú ª­µ<Ý+ åqzÊôµ˜®u&fù`°Ìo#ùíL×Z”¤åÈ?HÓé‚é‚é‚é‚é‚é‚ézLWÇKJ b0W}w¦`º`º?é¨}ÍÄûÓÙ^uøèòt¯FKˆÛ8¿Í¾|:P…N®OgÂ_ƒOsÕ‹ãS·§ħ3yOãSk\2¹Àø*%è÷âøÆz„O´‰”ƒÀlýEj8\þ£Lù\e0-rðª#½sòô ŒÜŽºJ3zÇöoudk9]KEõ¹ËÆéwæ-þØN׎š À`Ü;ï“ owºÖbÎÂÀãdï{ ‡Ó…Ó…Ó…Ó…Ó…Ó…Óí«ž¬|ÇãªåpºpºµœŽ·œYÁe°RK©cÃtøÓ2ÿó_þñ¿ÿöË¿ÿß_þöŸÿÖþó·ô¿ÿ•s~»ÇÿíÌü—ÿù¥>—ÿøº‰ý×òtðÿÛ†ùöXðÓ?•‘æ§zßûÓ?Ô£úéô1ðáîmÙÊ1,•Uû;;Öþþ–äïʃô/õ¢±ß¢zù×ÉaÔ¢’=*±3á8Å4À1±uzt}‰=þ‰í$˜«^\b»=½ ÄžÉ{Zb§®RàV‰å [¹?8ØÁ¥®«×•ØZ'ô¶†êEÛiÿ]ejxçF!´eª;·ü×lWíDe_KbKªr‘¡ ‹£ôulj/¶áTï ós;—,kHlHlHlHlHlHlHì±Äò&)!r¬ÏßêyHlHìbk‰²Y&<¹iI|ý“›!Vœ@†¥ 
o|rC¢4¹Áþ“[.Od–T”ûïCÕ:õ¤’fmÔë_yÎÐb¡Uuzt}Ò<þÒì$˜«^œ4»=½ ižÉ{š4§®R¤÷’fNi+Ï!¤9Õ×ÚÐw.=Ó{³y®wìA§›Kö¶ãv8]8]8]8]8]8]8ݹqÕS̘ §[Ëéò&,J°³ÏÏ¿ÿ —ïùåïºåú®Z’Á $’vžÜ.t:…ÍÅS¦}¦“-;'ð„}¦«u¿[Ãû ¦› Wž¡ƒéFþÒéÑõ™îLøk˜®“`®zq¦ëöô‚Lw&ïi¦›»JI¾wŸ2¦X>Pº™¤¤i±í|§ÒûWÛ&d®wòƒï5¿’‰)ùÉ$fÓ…Ò…Ò…Ò…Ò…Ò…Òu”®«J`jÃq•óþ,õPºPº?MédÓäåA›»J×ê”íj¥+Ÿ h9©ðàRä;߃Ð-“ÊÁkPZoÍͽ¿ U«ñG•NÛÛ2Dl8 'ò¶nQ(Ý¿tzt}¥;þ¥ë$˜«^\éº=½ ÒÉ{Z馮R 7+Ý–‰õøjÿyTZl›©ôFüµ˜nªwé9¦«-jbAÍ$c¦ ¦ ¦ ¦ ¦ ¦ ¦;f:Ý$ŽF|Ià,ÁtÁtK1nfÊŠžûL×êHõj¦+Ÿë˜´®Œ=8Ì!§;'Ó¡ó¦\Z9xéÕê=´eåÁOÝ­NåÙí|k£†T¾9iÎàím™pº€éôèúNw&ü5N×I0W½¸Óu{zA§;“÷´ÓµÆ)g\í_u7晴¶$z0îÁëÈøAT£µœ®¥be!§ç¯¶o;ê\ž@Óx$·LnçÛZL¥Ñ’IlçNNNNNN×qº6^:‹ÙãªC¼ôN·–Óùu t“®Óµ:•Ë_z-Ÿ+Vn˜ÕúµNÒÞãÐeN—E7©'óÁ ߨÕÜÓ ª”o8?»8ÝT8}{y8œî`:=º¾Ó ÓuÌU/îtÝž^ÐéÎä=ítSW)Û[8àÊý6¶”öÛ˜Šê)¯åtsé9-§›ëÇçœn&YypHátátátátátátátÇN75®"Åk¯át«9p¹)¯/ œ®.b—ÓõN'Ù‘ÈÄG'P¹ÝÛyAçB§³ ÁÊùÓiIVNa$ÄÜ›N÷k؃ÛâþÚ¨db·q¸÷i‰Át{þÒïÑÅ™îdø ˜®Ÿ`®ze¦õôjLw2ï9¦›¾J‰ÝËtu Ã=¦›Žj+íõ:›^¾ÓýzÔ–2dü wÔbºÙdšb:]0]0]0]0]0]0ÝÓ}/ ¤ÌÃqµœoLL·Ó½¾˜L&¢)3Ý·:´·÷c®`ºoŸ›M€²ŽN ΔóLǸ©€Šì3ÔS˜R¦nÒZ‡.éQ¦kሜЇá!Óü¥Ó£ë3Ý™ð×0]'Á\õâL×íé™îLÞÓL7w•ÒtïêtHeL9Pº¹¤«)Ý+}ý]-ÓÓ—Zœî×£æœÌñƒÞ1zNéZ‹ЉÇÉ0”.”.”.”.”.”.”îXéÚxé î:WÅb2](ÝZJW¾˜¤™Ì»J×êÊ“ÓÕJW?×Ë1Ûè"Ä•Žòf¨”u_鰜 TƬ¾Òµº´·úJ×åò÷Q‡#Œ=$†üÒéÑõ•îLøk”®“`®zq¥ëöô‚Jw&ïi¥k `J2¾Jå ÷®MǺQÊL×"(¤4ŽZ†¦µ˜®¥òL>èhû~ÖâÍt¯ÞQJ¢ãÞñÌÏ1]m1'eÕ¾uþ¶r0]0]0]0]0]0]0Ýþ¸ZwP‡ñˆŸ‘b‰`ºµ˜Žê“Ðú_`zÝÐ?;El&\×À§‘*tzt}|:þ|ê$˜«^Ÿº=½ >É{Ÿ¦®RÈ÷â9o˜áŸxË A”¡?uš7v/£ÒÕ?ÊtÚÿëïÛÏ)åïgÓ]¹±÷fžê>ç³]µÕu-§›J/¿–ÓÍõŽÃsN7™LÃéÂéÂéÂéÂéÂéÂéŽnf\…NN·”Óñ&è`BƒÕPJ]ýÇgW\+RqÁÑÙUâý±2œî`:=º¾Ó ÓuÌU/îtÝž^ÐéÎä=ítSW)Êt«Ó!lHŠt°ÀfÉÀÀZ·Ê@áõPwØþûŠTTî„:Ò-)Qžî©ï“jJ«9ÝLú,_Íé&zÇØŸtº™dátátátátátátát=§û|\5ÈñÚk8ÝjN§I¹|‰]n¥Ž”¯pÓº¶#øàµ×R è7>¸i59E†ý·\Náúæ- ’Ö:t~væáL¸òø¢9¤ªN®/šgÂ_#šsÕ‹‹f·§Í3yO‹æÜUÊåÞ=$Ä7U:˜y8•ÉÖbº¹ôúÅ^{êü6XÞÎtsÉ(¦ÓÓÓÓÓÓÓu˜®Ž— „èy<®z0]0ÝbL—·òD%€;›Âýüû/°¦äoËš]Ätås¡¾„ƒ÷Æ[ò/BYÞÐAí`~…”S8'a,Wëeº©pL7ö—N®ÏtgÂ_ÃtsÕ‹3]·§dº3yO3ÝÜUJý^¦CÞ²mõ:U¬Åtséå‹Í¦›ê[÷v¦›KF±:]0]0]0]0]0]0]‡éÚ¸*B&>W3ÓÓ-Åtå‹éåÞûLWëÌÓå³é¤ò_®oÞŽN RWŒ™N*ER9Æ}¦Óö"Q?i­ËšùQ¦«ÖŽt±a8ÑäÁt#éôèúLw&ü5L×I0W½8Óu{zA¦;“÷4ÓM]¥œïM&ÉйÚ+$ èú¾œÉL×R±@Τ§¬_‹é¦zç}¦äíL×ZÌZžŠùƒd9^z ¦ ¦ ¦ ¦ ¦ ¦ë0]/-å<¾ñTG ¦ ¦[Šél³rWNƒ—^[¼Íf»ˆéÊçºBVÔþ 
TëüN¦ËÙË_Í pÿÉÍÚ3cùÛ ¢Ö'<‘G®6j‚$ãpåëN7˜N®ïtgÂ_ãtsÕ‹;]·§tº3yO;ÝÔUª<Üêtâ¸ùÁdº© ´–ÒÍ¥7øZJ7Ó;åÞ^ŸSº¹d,¡t¡t¡t¡t¡t¡t¡tÇJ75®y(](ÝbJ§PCIÀ¥SHùm ‰Ë”N!Krœ@¥Ž,ß¹4n9)!—SØ]T°ÿz{«ÛY¬èV¤+ÖÑ´Nþ…ã”0n¨/]é΄¿é: æªGºnO/ˆtgòžFº©« ÞŠtš`>Pº¹¤y1¥›Jé‹­L7×;òà+¯sÉÞ–Õ¥ ¥ ¥ ¥ ¥ ¥ ¥;7®ŠæPºPº¥”®|1­-®H¹«tµ.§·—,.Rºò¹åXit™Þ¹óËæâ¥;v•R=…€¼›ôUgèO*]k13Ãb ¥ðK¯G—WºSá/Qº^‚¹êµ•®ßÓë)Ý©¼g•îÕ8å ýu|¿ÕQºw*ûFbûL÷Š`í—ÕqT–¼Ó½R¹òx¬* üRL׎˽Lù{ŸœL÷jØv ÙIf±D0]0]0]0]0]0Ý1Óµñ’Uáƒqµ ¿ÁtÁt+1]ýbd—”¼Çt¯ºWo Ñ>‘Æ7¦† w¾òÊióúRëþZEíÖûóf[쬡w+ÓM…ÓL7ô—N®ÏtgÂ_ÃtsÕ‹3]·§dº3yO3ÝÔUÊäæ}^µŒ)x°Ïë\T_‹éæÒËךM7Õ;X>ÿ9¦k-B9n÷’a0]0]0]0]0]0]0]‡éÚxÉHIu<®¢ÓÓ­Åtå‹©È#¦kudWo Ñ>—Õ¿l·:1¾‘é6ÝW:lgz¹¥¾'âë§úG×¥ûÎIr‡ã·I½¡tüÒéÑõ•îLøk”®“`®zq¥ëöô‚Jw&ïi¥k«É—P±›'Ó‘oˆG“馢–\k)]ME 2R§ß3ÆZéfz‡ÒÛ¾·+]k »ÛÉ Þy ¥ ¥ ¥ ¥ ¥ ¥ë(]/ \Çãjù_(](ÝRJW¾˜fXnù»J×ê ]ýÎký\GVB¥Ñ äH†wN¦ó­<¼R¢}¦£vkì¬Ö¿…~Õ¥ü(ÓµF™ðƒp ±ÍëÐ_:=º>Ó  ÓuÌU/ÎtÝž^éÎä=Ít­ñ,å:ã«T¦{W¦Ë,[¦|Àt-‚$ΊDu\‹é^éÍGõªû~ÖâÍtí¨5g%÷ŽòsKÓ½Z,5É|ël¡ë`º`º`º`º`º`º`ºßÆKõH&±Ñk8]8]8]8]8]8]ÇéÚxiXËq<®jL§ §[Ìéx3ð”,§¾ÓÕ:+wYW;]ù\4RB=•:ô;Ô·ò\–ØöŸÜrmWÿ6ÞÈÌõ\wçG®…Ë.2˜@ñª“xíu0]ßé΄¿Æé: æªwºnO/ètgòžvºÖ¸&!µñUên§Ó”¶Óé^I˃Ë`•°W/6®¦‚TÎLÓqz·/öÖkë,mõ7—ÿV÷ä[¯­E*·9Y>H–c:]0]0]0]0]0]0]‡éÚx©`ãû!(§A0]0ÝRL—7­‹ë²t™®ÕAº|mºú¹ VîíFn¥î}o³ë™Îi£º&ÒÁN¯ROáòì“¥¯t­.ѳ³éZ£™A?ÿê€CéFüÒéÑõ•îLøk”®“`®zq¥ëöô‚Jw&ïi¥“×ݤÁ†[­N,ß»8¦Ý˜®Ep7DGµÌk1]M…X% ‡é1qúZL7Õ; n!ÑZdg·ôA2 ¦ ¦ ¦ ¦ ¦ ¦ ¦ë0]/µí!1W…‚é‚éÖb:Ù\-È÷oþüû/°«:áÕLW>×´ük¢Ñ ä–3Ó[HmRÑfÓi9‡ LG³éZ]ùæ<êt­ÑòÍ‘Á‹<­Ž-V§L§G×wº3á¯qºN‚¹êÅ®ÛÓ :Ý™¼§®5.Ž&y|•±{.—!:9]‹`l¢\P5ÓZN×R¹(Ó;}±étõ¨9{ 9ძӵxô³ÿ+YŽ­^ÃéÂéÂéÂéÂéÂé:NׯKr“n<ë²¾átátK9ÖÍ!²£@î:]­S Ë§Ó•Ïṵhu)ç;N7t3ÜW:ÛD°þ ?€°R—…¾Å[•®†#…4z§Õ½o˜JwÀ/]_é΄¿Fé: æªWºnO/¨tgòžVºÖ¸3ð`²Ð«äÞw^·lŽ~“©8q*üaTN(k)]Kuðòä«ng£ÚZéÚQ#xù{Óƒ³é^-:+¥’qlõJJJJJJ×Qº:^–gr¶Áb+¯º¤¡t¡tK)·uKýzoONâS WÒ•Áb®ÜÝÆÆCUèôèúøt&ü5øÔI0W½8>u{zA|:“÷4>M]¥(ß‹O¤¶¡ÌóÍ˸BfÖ_á Õe»ü'™ú¹D lØS¥³Ý¹ié%ǤL~ØUĥܯQiµ­^[*ÓÁÎL­no£ÚšéÚQ«pLY{Õ¥·¨-r}·Á?8AR0]0]0]0]0]0]0Ý1Óµq•F/¼ê’ÓÓ­Ät˜¶„ZE ÷äöªc¹z ‰ö¹æFÒ_ìúU§rëÚty#EÁý½ÿJ‘òËì(i©³'AóÕ¨° Ø8œ¼½Æ ¹/U½]4O…¿4{ æª×Í~O¯š§òžÍWã†Hã«”ú½ ™¶2¢í‹f‹ )¥üÉÕÜ—bºWzÀúw¦×ä_ë××Q£¦Á\ÃouÉcºW‹ÌÆÉ!˜.˜.˜.˜.˜.˜.˜îéêx©`Šƒ­Ë¾ÕQ ¦ 
¦[Šé`KˆX›å.Ó•:pa»šé:íÿõí#Áï¼fÞRÎû{H”"Nl©¿ªx«3cyÔé ^`²3nðk]’·)=átÓéÑõîLøkœ®“`®zq§ëöô‚Nw&ïi§k—æIyx•Ù›»}¡Óáfu"9^íKTÁl2Žªlk9]Kå’ ‡púZ{H´£ÆDJDÃÞÁ÷UfowºÖ"Ëß:L±Õk8]8]8]8]8]8]ÏéÚx©\÷Ë«„ñÖk8ÝjN§r[…gàt¥Þ^,½Ìé4‹+YöÁ TêÈåF§3ÞÚŠ/ºÿà†å.=_þ2ý›ûZ‡ln!1NßVÎ ¦;ð—N®ÏtgÂ_ÃtsÕ‹3]·§dº3yO3ÝÔUjgÿÔkß6ÚŒá`:Ý\ÔÕ¦ÓM¥÷™~h¦›ëî­×W²ò@PnÇ>ILLLLLLL×aº6®2ÖÕi†ã*¡z0]0ÝRL‡›HúöήW“¹ã_刋…H¡q½¹\H(Z-Š”›$ZíÞDŠVsf90#f`Å·ÝÍa:œ§ícºûÁ᩽Y`jÚÿ®§ýR?—ËR½êu±#<Ó•çF H(¿)žY¯(â„ĉ7,(÷aåÜCý(Q±#‚ë{íÉÓ隦âÑñ9ÝñÇpºŠ‚>ëÁ9]ÕÓrº=zwsº¾QÊνꕌ'³-N×%UAÇât}êŸLü}sº.ïØ*1þtN×§L’s:çtÎéœÓ9§sNçœn›ÓõÌ«LàÇ^ÓÅéh²H”¥ZW¼ØiŒépN—Ÿk¤$”¬Ñ²];óØ+Ø”’np:Î}ØD•R½¯;Ž|Õû6úÄ™_öÚ0ŽÏéöˆ?†ÓUôYÎ骞ÓíÑ»›ÓõŒRèÜ|:Z®ïÞàt}R…Æât]êà¶8]Ÿw䊜®OYdçtÎéœÓ9§sNçœÎ9Ý6§Ëó¥I´Ö¼š{#{>sº±8O‰ ¦*§Ëv ëË^ât¼äó•kõ(»Dã™Ç^ó‚|Ò@‰é2§“)ÎH36ΤHYCÓÓ /Nåt]â’çÓµLÅ£ãsº=âát}Öƒsºª§ät{ôîæt=£”åñò\N7•”=“ý£½ië¶×>õ†é¶8]ŸwVi§sºe)¿íÕ9s:çtÎéœÓ9§«qºy¾dÄ ÚžWI<ŸÎ9ÝXœN&B@ÌõÛ^‹G;ü¶×ò\ )A#!u±:³¦üf”uJàSÓé`¢lã —Ð !Õ‰ülÇO늟ŠéJ£Gm\Æ1‹KÑÓµøKÅ£ãcº=âÁt}Öƒcºª§Ät{ôîÆt]£”E97›.Á$ȧ^û¤Z Óõ¨‡@7–M×çÕ‚Ó1]i1¿R[YÓ9¦sLç˜Î1c:ÇtÛ˜®k^5óË^Ó…étRŒyýTÇtÙMèðêtù¹œ×uŠ!5:RJt&¦-7Z¤Ë˜.å., ¥¾Ó]ì0J¼*¦ëG¨à˜®Å_*Óí ¦«(è³ÓU== ¦Û£w7¦ë¥èäË^pJ›—HôIU Óu©g·…麼³Þ<Óõ)uLç˜Î1c:ÇtŽéÓmcºžy•’c:ÇtCaº4%L&AR½8]š³éøpLWiÿß~Ù¾Æ ÅÙtÌF /s:Ë}8¢Q¤ú&üløºœnn4EŽmqÂ蜮`*ŸÓí §«(賜ÓU== §Û£w7§ë¥âÉÅéb±0mpº>©lcqº>õOS¿ßœ®¼µrÒ•6½£€Wät}ÊÔ/{uNçœÎ9s:çtÎé*œ®Ì—¥º~ ϘW£ø©Wçtcq:›S޵ëétÅŽíøË^­¤óQPÃÐè@ÙœÈé˜'&cà‹˜ŽBéÂB¬Z };ºtQáy˜nn4Í3*4Å¥üÔk‹¿Ô<:<¦Û%þLWSÐg=6¦«{zï]¯8ÝÒbÀz­ëŸ”±c:ÇtŽéÓ9¦sLç˜nÓÍó¥ä™©½š³°¾ÝÝ1cºßÓå3/DËάI Ó-v*Gß!1?7¢jé(Ûžz×kÔÉ0‹¹|ûAéÃdêuô;´«^"Qµ¼@–„Úg«ó2Îé6LÅ£ãsº=âát}Öƒsºª§ät{ôîæt}£T:—Óiɶ0]‡Òl0¦›UaŒV¿A}±»´ùõ»Æt]ÞA¾"¦›[dÎ_”=CÙåpÂ1c:ÇtŽéÓ9¦sL÷~¾$ÁƦël—ÿ¶c:ÇtCa:œ&ˆ«˜n¶ƒÕiɃ0]y®%PâúÂt¶SÖ1QIÙCݸý/Kˆ†@ ­%5¯¡_Óõˆ䘮Å_*Óí ¦«(è³ÓU== ¦Û£w7¦ë¥ÏÅt :á§ë“ǺëµO=¹-N×çõ˜³9]Ÿ²è—H8§sNçœÎ9s:çtN×5¯FHÎéœÓ Æé”B^ð)IƒÓ)®. 
=ŒÓ)%Eh¥ÓÍv(væ©×8b(—ïz%Ê]˜U'Rf»@pUL77š¥H[­ªd:¦Ûà/Žéöˆ?ÓUôYŽéªžÓíÑ»ÓõRIOÅt¤XŠ‘n`º.©ƺD¢Sýj÷í&0]—w$\1®Oˆc:ÇtŽéÓ9¦sLç˜nÓ•ùRÊåèøŒyÕÀÓ9¦ ÓѤDb1rõ‰ÅŽíðtºòÜ$šß¨ØA8·8]°¼¦ä-Lg¥ k”–R3äÄ£nêáÖ·ï\5pëQ†^®È7ÜëÁó+ªž0¿bÞÝù]£T<÷”„)2mäWôIű.ÿëS¯qý®1]wò'wEL×§ÌÓ9¦sLç˜Î1c:ÇtL×5¯JTÇtŽéÃtËuè´é²:Ó©& ’ê·RÏv9ôÁ1ÚdÀ6ªIéÂóJ³ÞÕ‹]´ÕI£k`ºYE‰Ëÿf;D/*Þä/Žéöˆ?ÓUôYŽéªžÓíÑ»ÓuR„'c:À’“´éú¤^8Hô›bº¢*¯I“ð3Ôó­UïóÎ bžŽéú”©9¦sLç˜Î1c:ÇtŽé¶1]×¼š08¦sL7¦“Ió@lŒ±^U¼ØAÒÃ1]~®Ä)¿q£iž+â©Ç Òl+›.NÄ‚¨…ªÒbGº½¸Gâÿ¾û÷×ï^ýx÷î«ûùŠÏ?}¤9÷Â_¼y(áwþÃe¹ÿòî?ǧsõÁ¿–Ÿýƒ#|ð—ŠÝ#þ[QÐg=8Š­zz@»Gïn[·@¬!´G)£s+R‰æUÅ Û'U+ߣ>OŽ7–19¿uþD!ò3¼#z=»(KŒöe‰Å:Šuë(ÖQ¬£XG±Û(vž/-¨Š´çUõÂñŽbC±qRÁ@´ž1™íȪÏA(¶´ùý¹Q`Ö©g¢X˜4;Ý6 Ç뼄cëú„«b:-™ÛA8Rh‹6Çt-þRñèø˜nøc0]EAŸõà˜®êé1ݽ»1]×(ÉNÅtQp²”60]ŸTì`s—z}:Wý¾1]ŸwøŠ˜®O™DÇtŽéÓ9¦sLç˜Î1Ý6¦›çU$n¥ØÌv°¾ÑÙ1cº0N "ZãNÁžÂ§‚t~Ž|ÿæïßå™z~-vêEWgbâ"°3±6ìØôèÿ&öëÅÅÄ6ôYÏÄ*ž’‰ýz½0±ŽQJƒœ{Š¡rŠ8K°ˆÊAZ«¨d¬«SÄ«‰¦H.™Ñ}üOóÊ-/ž¾}9G'ó›_ä£w?¾¹ÿìÿ¾ÿ›ÿwÕ÷Yôwù“ø0/Þæ0óà ƒÜ?Ì ·9FýìÃÏÞæ‡å!8[~õðæî‡‡OÐÜëÇ¿~÷æ»ûò²ùíã[¼îþs^䆿Éf+Ó/î_½Î}©¬¾ý>72}xÙI"bÃÙ.˜lùÞçÃ?ò¦Ç¨{YÚl´!Æb³õR8åW| Ö?߬ü)ÿùëoþüúûw÷Eýmé{Ÿß—ÅäêÏÞ~úéçù,¨áísÿÊê3xrŠàË¥õ»g=Ž­_Ü?¾òËüÏïþqÿmM¾}ý~š)ö%Ñ«÷ýt9n±KÈ Ÿ~¦¯¼û~kÊ •Ç֗埾ZØàýÛww½œÕß}7 Ÿæÿ›~îÁy‰ú(øã/ç¦ÿ©ï‡ÞrÝ5„¿|lûQúfQ‹yÈKí¯Ô”Ÿ.xõ°عâ½ûBw™7ªè¤ Eó‚«»çÅ®œò9z÷¼<7¿)j£`×lGL§–×)–ÔWÛvÕó¥Ú`yϳªñ基­ž£ÞÖ†ÊüÖR¢ØöŽ`¸Þ†JiÑ ”ÄÏP&ž÷ì*¾¡â*¾¡â*¾¡RÙPÉóeÌ#Cj®æbê*¾¡2Ö†Jþ0 S4nTŠí€ù„È­ì90hu c£tbä–ch! 
·7 ¨”l(Õ€ù¿êO8 $¾ÇÓ‚÷Ž¿Ç³Gü1{<}ÖƒïñT==àϽ»÷xºF)x:غÇS)kû{ŽÒu¨§Õa¦¡tÞá€×¤t=ÊÀ)S:§tNéœÒ9¥sJW¥tój"vJç”n0J7§ @TnP:eÁ$ÇSºÍöÙ8Fâ3«Д¥„¸QÀJ–ˆ u ?ÛqºnѹQ3c–¶¸¤à”®…_*ŸÒí ¥«(賜ÒU== ¥Û£w7¥+‹)„æ(Åáäûœ„ÓDºUD´O*DZ0]—z¸µê}ÞYUž8Óõ)‹~Ÿ“c:ÇtŽéÓ9¦sLWÁt]ój\…ÇŽéÓ€éò‡™’HŽ4¬ŠéŠÄãïsª´ÿË”Ô,žˆéMÆ‘6Šˆr(]=O[R¿Ng¶c W½v}—¿!lŠ0/˜Ðâ/5év‰?ÓÕôYéêžÓíÒ»Ó-S”<â·G):ÓQâ .cºN©JCaºESAÛê)ÊMaºå­-bh{‡WölLשL=›Î1c:ÇtŽéÓ9¦ÛÆtË|™Ê±½ÔžWs î˜Î1ÝH˜®|˜’@5/9k˜n± ¦cºù¹ѨÕò,¢vfµ"ž¨Ô*ÚÀt0)°D¨ÅbÄ0]Óõˆ‹y^uL×â/Žéöˆ?ÓUôYŽéªžÓíÑ»ÓõRrî]?Ì¥Àm`º>©caº.õŒ·Uš®Ó;ë½õ³1]§2ϦsLç˜Î1c:ÇtŽé*˜®k^Uñl:Çtca:œ0JHÉ”ª˜®Ø®ª)„éòs5hÔÆ5‹è©EÅcùÕÔX.s:Ì}8;RýòŒÅŽõªµé–FcŠ@ÔcÓ•ßóÏ÷¹OÞÿpYã‹¿¿xXQºoÊE…nÜÑ7åZ3ŽÉÞÞ}õ"‡_ç¶¼ò÷9Öÿ…ô<ð]~Ÿ˜=¨ÒzŸºÎ]|(ŽÍŸÝÃ<¸çHêí‚6¾+ ãO¼{óúõ×wÿxx÷Õ]øù¿½ýþ‹ÿÉ+°· Y®x|Û9>ýy|{ó°ºb Œoù¿|òx]Çß¾}õú“à“ü¯o¾ú—²Rül^þáËe´û,ü…y–Oí/?{‘øå‹/^ÑÇ÷/¿|õ1'L_Àǯì%½ˆ¯æ•ÒZ×l0M!Å`¦PO¦,vbœÒ¶ð[Å£ãSÚ=â¡´}ÖƒSÚª§¤´{ôsã yÅÌÍQ*ÂSxxl2¥¦)σyò¸<Û÷‰å±Š.ª8¿#â3Ôß§íòòõŠv*»O:§uNëœÖ9­sZç´ÎiWóeÉSmÏ«âœÖ9íPœ–&‹A‰êœv¶cã9m~.Zj-Ló¿Ÿyê™xŠˆ x9pãÜ…57°ž9=Û½n:e—8^]eä nƒÀT<:>¨Û#þPWQÐg=8¨«zz@P·GïnP×5J žœNuÊAÈF:eŸT caº.õñi…Žß7¦+oBÈ}Žwâõnú--Z©l¬õZç?)óSÏŽéÓ9¦sLç˜Î1] Ó•yc©­­yÕ‚Š:¦sL7¦ËBQ0)Cõ¦ßÅ,éÊsSBˆÍ%s¶Ó gÞ!…t™ÒIîÁò :Õó>¥Ä¯›LÙ#ÎÉ)] ¿T<:>¥Û#þJWQÐg=8¥«zz@J·GïnJ×5JåèàTJ§T2Ë·jöIc]!Ò§žn‹Òõy'Ñõ(]Ÿ2c§tNéœÒ9¥sJç”Î)Ý6¥+ój¡Ѭ5¯ZþÛNéœÒEédŒ!å@ª”®Ø1®Â‘ƒ(]y®%H`©ÑU1œ™L—&*w„ØeL hǤêݳ]ñª˜nn” ¸~ËbG~êµÍ_*Óí ¦«(è³ÓU== ¦Û£w7¦+æé_¡=Jŧ7[Šé¢âdh˜®KêÅ$ïßÓu©7¸1L×ç½â"}ÊRrLç˜Î1c:ÇtŽéÓmcºy^,W•6çÕr´Ï1cº¡0]œ€’1HÓ-váð3¯å¹Q·jÐÍvDxf2L")èF±"-¤b^Ö•ê¼#®{æµK®j;¦Ûà/Žéöˆ?ÓUôYŽéªžÓíÑ»ÓuR„xnq:¢ ¦ë“ʃyíS¯7vÓo—wxuØé˜®O©c:ÇtŽéÓ9¦sLç˜nÓõÌ«ê¥éÓ…ét JÆ«˜NË™SLp4¦«´ÿ‹”í"yæ•lB ãeL—JN Ð8ÞžæÐÇôª˜®GbdÇt-þRñèø˜nøc0]EAŸõà˜®êé1ݽ»1]×(µÎù=ÓY’)IÚÀt}Rm0L×¥ž)ݦëóNº"¦ëSfÑ1c:ÇtŽéÓ9¦sL·éºæU%ϦsL7¦K0 D\Åt³_š®<—r´Ä‘¨ìó™Ùtq”ˆ[˜Îr@&l)4”f; 8ZàÖ¡žo-pËoÍåø7>Ã;zÕÀ-·(¡œT|†².ðÀÍ7Ü©‘l(L·¨R&Dz†úd7…é–·N˜ŒcÛ;º¢Ågcº¹E )²<ãw3AÇtŽéÓ9¦sLç˜Î1Ý&¦[æUi¯æ//vLç˜î·ÂtùÃRÔ²­aºÅîø¢âå¹,ll©ÕФ3‹Š§){!¥tÓÁÜ…Eë$l¶ |ÕjEK£dÚâÖãcº þRñèø˜nøc0]EAŸõà˜®êé1ݽ»1ÝÜ8 ±ˆ|º'slQñ&£¢âR#Œ…éõˆÆü õo Ó-ÞI¤"mïðê3<ÓÍ-JYÇg(pLç˜Î1c:ÇtŽéÓmcºy¾LÀÈϘñóÒÀ1cº¡0”*D12A5›n±“ÕÎòA˜®<—%(¥V@‘íéLL§S˜‚]Æt8Áœu(ЧØ{š_q*¦+"äÿ™5Å9¦kñ—ŠGÇÇt{ăé* 
ú¬ÇtUOˆéöèÝéºF):ûÐ+ã”ç L×'5ÉX˜®K=ãeÓõyGãõ0]Ÿ²$ŽéÓ9¦sLç˜Î1cºmL×5¯*x6cº±0]þ0I@˜V1]±ËÍч^ççF¤[ˆDéÔC¯<ˆü/{gÐ3ÍmÛñ¯òÂçv,‘)úÖK^Š ø”““ >%hœýö¥4µŸiUÍŒ…wù¿ôè¿Ü‘(þ–¢žc:´)lnJ¶ m*-v¬Ÿâ¥˜nDœ€‚cºixt}L7#þL×P0f½8¦kzzAL7£wÓ ­Rˆz…¼©À¦“šh-L7¦þó=µ_7¦òч^Ç”%tLç˜Î1c:ÇtŽéÓcº¡¸ê˜Î1Ýj˜Î^LbËXbn^!QíÒé‡^ãÿv‡¯½B‚5‚TÓÙWj‚ýivÑ«vAõÞjº*‰´SÕ»ÛÇt]þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦«ƒJÐüÂ*•/>ôª´Ù ˜nH*…µ®ØU%!Lð‚úï…醼“è¾+$ö™´w¶`Wö‡Ó9¦sLç˜Î1c:ÇtOã*Ĩ™_ØÍ=æîŽéÓ­€éLh²HlWÓU;x8r¦+ÏÅ„€¡Ë—Ì*^‰é(n9õ¦K6…1ÛF·§z±ñVL7"cö›^»ü¥áÑõ1ÝŒøs0]CÁ˜õ☮éé1ÝŒÞiL7´J赘.ñv鯄~îCúûBº!õñ½ ݘwøFH7¨LÒ9¤sHçÎ!C:‡tÇ®ÆK-Åt¡W…’C:‡tKA:{1 ‰¤Ý™®Ú éÊscæÜ@”RÂkïyÍ)ðsHg)IÈØ»ù¯Ú…Ïõ —Bº:h& Aûâ$9¤ëÒ—†Gׇt3âÏt cÖ‹Cº¦§„t3z§!]\™¡¿JeÁK!EÎÍ"Ú¦’ª«ÝóZT¥`ß2¦Ô.Züº1Ýî”9õ½“ë÷/ÇtuÄòkdÊ/(c¿@Â1c:ÇtŽéÓ9¦k`º/I8&éÇU ~äÕ1ÝZ˜Î^ÌĶWÏMLWí œ~Dy®¤Ä,Ô›@IL啵t°%ûspâUlsÌ¡=Ó‹]Rº÷ÄkTˆÄttÅ&ê”®‡_]ŸÒ͈?‡Ò5ŒY/Néšž^ÒÍè¦tC«åkïH¤[> t#Rh±ÆtCê1å÷¢tõS'  Ü÷ÝIéve©\éõ‚2öÆtNéœÒ9¥sJç”Î)]ƒÒY¼QŒ¤Ô‹«@üšW§tkQ:{1Å^a ©}âu·‹álJWž‹˜Â Syz:çö&Ù¥te-]Ú‡ƒb:Ý"FÊ!aû8Š–©ÎñÞbºqOñ8¥;À/ ®OéfÄŸCé Ƭ§tMO/HéfôNSº¡U*‚\[L‡²Ybr€éƤæ´¦Ro±ÿ½0ݘw›I_醔=þfé˜Î1c:ÇtŽéÓ9¦›‹«¹»c:Çt+`:{19“J@mbºÝ.ÈÙ˜®œ;²Ù#П@¬ªWÓ‘nö é)¦Ë¦Ì¦°Mó¬Ò8GôaƒÞx}Ĩ8$ÇtmþÒöèâ˜nRü ˜®­`ÌzeL×óôj˜nRï¦^¥2][LxËŸaºQ©ÂB˜nXý[Ýò:ìÛ0ݨ²¼šÎ1c:ÇtŽéÓ9¦;ÂtƒqÕ^gqLç˜nL·¿˜9g¶•¶QM÷aGñá‡30Ýþ\UæÞÊÊ.Ätœ7È¢<ÇtѦ°ý% ¶§únïÅtuÐPúâ(;¦ëò—†G×Çt3âÏÁt cÖ‹cº¦§Ät3z§1Ý>x&ŽÔ_¥_‹éˆy³Ðs€éªRä¾Ôµ½þ¢JEc~AýÃÝíoéê§F”Ð÷ޤp¦«#ªís^yë4&ÇtŽéÓ9¦sLç˜Î1Ý1¦+ñ1¨½ÈÝ¸Š ÞšÎ1ÝZ˜.n6xˆ Ÿ/*þþ×/°½¿|rkºöøÿöÛñ‰ò•˜.ƸeF=âtPÚK*3¶.‹ù°CÍr+§Gœü ‰.€ixt}N7#þN×P0f½8§kzzAN7£wšÓ ­R¯=õ*õÆ xÀ鯤"­Åé†Ôgz3NW>u €¢¹ïe¼Ó•ÙÂ/wâ®LÈ9s:çtÎéœÓ9§sNwÌéFâjzlÖëœÎ9Ý œʩӠ¬š›œÎìX0ç³9]?¡&„î*W›Éŧ^²òsL‡eªc*÷Ò6•V»øùÔ¥˜® *¬$²Þ*ŽÁ¯èò—†G×Çt3âÏÁt cÖ‹cº¦§Ät3z§1ÝÐ*%!^ŠéØV,Nr€éƤ~nEúûbº!õùsé÷×鯼“ã}˜nH™;¦sLç˜Î1c:ÇtŽéŽ1ÝH\ oNç˜n-Lg/¦½¦ÄH±‰éª]ÄÓ1]y.ª2õ&P¦ø,:ï¦WÙPíŸCL§ Yb¤ÜQjvpµÄm@==é‰þ•'nÞ‘xk⦊ARTzAxWqOÜ’ÞZ_1"N³×Wô~8oxtýúŠñçÔW4ŒY/^_Ñôô‚õ3z§ë+†V©'ͮϽü/äMòQ»¢1©²XWñ!õIð½0Ýî)W™ô½Ã$÷aº2b–P$zAYRÇtŽéÓ9¦sLç˜Î1Ý1¦«q˜r§ÛJµ‹LŽéÓ-†ébÊ!—+€:˜.&¡‡® §aº˜T‰Rν ”Ò•—ÿaÜD˜)>Çti‹%j‰}7M¥©Lu•t+¦‡ÁÞÇt=þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦+ƒGû× ©¿J¥‹1]Ü, PJÇ«ýËRù¡å˜®¨2åhÿtÕ—$ë½0ÝwôáÊË1ݘ2õËÿÓ9¦sLç˜Î1cº¦+ñ²à 
~\M$ŽéÓ­…éRéB”$3·»U;âÓA•çZtˆQ»|‰ñRL—7ÎönêóÄm —ßÖClW(T»”øVLWÍ(Ⱥâì•ñnE]þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦«ƒ3c¯5ÝnGWcº˜b x¼Úg¥Ò®¦/Ux±nEE•¤~8ЀoÖT¼~êÈA…úÞ‰-×/ÇtuD[Q‘ò ʲ7wLç˜Î1c:ÇtŽé˜®ÆKfˆIûq•ýЫcºÅ0ü¥éó©íïó³%äélLgÏ•˜YBè&b Q¼ÓÁVŽÿƃú Ù¢¬ ©sTFêVûÙóbºq  cºixt}L7#þL×P0f½8¦kzzAL7£wÓ­RY.Æt-ÎN/ö0,VL7¦þI)àWM醼CwžySæ”Î)S:§tNéœÒ9¥kQº/1*Fˆý¸ª7§9¥sJ·¥³Sˆés3”ïó‹Æ gS:{nFL3ô&Pù| êÌ«ÿ`£RñzК.—)Ì¡s:·Úá³óBJWD™1vÅE‹½Nézø¥áÑõ)ÝŒøs(]CÁ˜õâ”®éé)ÝŒÞiJ7´JE¾öê¿ eÑ:jM7&5ÃZ˜nH=<‰ª_5¦óŽæû0Ý2 ~õŸc:ÇtŽéÓ9¦sL×Àt%^@’ÎÍe5®fÇtŽé–Âtöbæ`éHûÌk.˜Ž„ÏÆt¹^)XБö&PŽA/=󪛶óÓi!ò ¢ÊM¥Åâ³¾Jbº!qôp ×1Ýixt}L7#þL×P0f½8¦kzzAL7£wÓ ­R)ð¥˜%l⦓ ‹UÓ©×7;ó:äæt¦S&~Ñ«c:ÇtŽéÓ9¦sL×Àt#q{5cºµ0½˜"íýå&¦«válL§¿•;×:ÍXª£^‹éb€žß C-œ Û¿tïv˜n4ÅÛ7Hìv¤Ñ1]‡¿´<º<¦› ¦k)³^Óµ=½¦›Ò;‹é>W 7©¿J=¹ôÜ‹^5ohŸcº]§ }©×Ât»*áúêßëÐëþ©5"õ½“oÄtÊ4Û¢ÚW¦Ó9¦sLç˜Î1c:Çt‡˜®ÆK 当pvÞšÎ1ÝZ˜®¼˜² ½Ã-LWìDˆÎ¾A¢5þo'dåp%¦K›ÉPxÞ­(Æ:…¡ÜëÜTZ힃ºÓÕAÅâeÈ}qÌ~ƒD—¿4<º>¦›¦k(³^Ó5=½ ¦›Ñ;éöÁQ4¾°J ¤k1]’€0]•Y;w0|H%Y ÓUÉ"•v@TU¯ovÑëî$ιÉÈ}‡^?”q`zE™‚c:ÇtŽéÓ9¦sLç˜îÓÕx™C"Ð~\夎éÓ-…éìÅÔLårmbºÝ.ž]MgÏ…¢íê:U »\Ù›.¥ YósL6…Ybdlo¡¡. rëE¯â˜,eìŠc¡ä˜®Ç_]Ó͈?Ó5ŒY/Žéšž^ÓÍèÆtepû›D’ú«”ºú‰˜!êñjÿºÔ´¦Q/!è{aºú©£¢tj wïˆÞ‡é†”=v vLç˜Î1c:ÇtŽéÓ=«9H ðÂ~ˆÈ1cºµ0”*µ,¶§oö¦ÛíâÙ˜ê¡WJºÓ'ÐÕt´QÎ9&n€ 41ô0XdãÃͽýí¾üû_~úñÏÿûå§ÿúSýëüü»ŸiÎ?²¡ö•öc‹ÿÇ/ÿòKNZ³¦oþµ|Õß”¼à›ÿ´˜üͬpK:„§lÂ*Ï~Hù'{E, 铱,ÇT´}F¸Úë­ÈkLA‚NL©â4:rí²´†G×G®3âÏA® cÖ‹#צ§D®3z§‘ëØ*¥×ö´íÃfÛðƒÊÈ©9¬v€¹ªŠ63U_P¯o†\‡¼ùÆÊÈ:¢íjr§wW&âÈÕ‘«#WG®Ž\¹:r=F®5^rK}ûq•Ôû :r] ¹â¡ö^—fŸÁjèô>ƒõ¹©l™…:Èì‚Ê••‘¼©&Jϯ‰ö•&Œ!†ÎQëbú¬IÖ…˜® *9Ù»#]qòX`ê˜î€¿4<º>¦›¦k(³^Ó5=½ ¦›Ñ;é†V)¾ø3mBœ#¯ö/Kĵ0]YŽc$…®úƒ¼¦ò>T½^ŽéÆ”‰`vLç˜Î1c:ÇtŽé˜n$®ªíˆÓ9¦[ ÓQ9˜œs &¦«v¶fŸé¨à?-ÝûzÈìò¥} ·)¤ƒÄ-ÙW1w·T¶Ðï­¦Ç.?vLwÀ_]Ó͈?Ó5ŒY/Žéšž^ÓÍèÆtC«T¹ø³e@8½Ú›]X¬š®ªÊ˜Sçv¨j'ñÍ0]ùÔd¡¥ÿÝâc°¼Ó )cÊŽéÓ9¦sLç˜Î1cºcLgñRÕ¶s)j/®jfòë@Ó­…éÒ†!¤ CÓ¥ Táá(çI˜®ôö‹öA!ô&ZBñ,: ÓA[Í9=Ïܸ yJ¡‡ä«pº•Ó•A!"äûâ,}sN×0 ®ÏéfÄŸÃé ƬçtMO/ÈéfôNsº¡UJŸýÔq"§ÎüààÔë˜T kqºõ"¾§«Ÿ™rß;á>N7¦,ùµ½ÎéœÓ9§sNçœÎ9]ƒÓÕxÉ3s?®&ñS¯ÎéÖât\N½†Ò¾Ýh°Ø…ÇS§'qºò\=$éÁ/(Õ xe9mÈ)+<ÇtRSÆ í…jgZoÅtuPÆ“ôÅ¥DŽézü¥áÑõ1ÝŒøs0]CÁ˜õ☮éé1ÝŒÞiLWTè\w°ÛÅkïa‰[V=ÀtUBF|aµ^¬œ®ªRÆÔ¹»j·Ãð^˜®|j ¥»rÿ5ÄpÜØö|LWGŒ¬ÄÐW£ŸzuLç˜Î1c:ÇtŽé˜®ÄKÂD’ûû!Bðr:Çtka:©ø ìqík{«mÏÆtö\{* 
uûiœ*æqœûœÑu¤mÓ `Xý|¾ Oô]šÜÍ3+•ÍJì%—O·^CÇ…~mï–0™8aq@@â˜ÑÆ]· 3SµÃƒôE úè€õÍuœîðú_ƒ»¹a gE“ýˆi}&èsHݧq»½ç^Ù½T5Ö1ûc*.[ŽƒÎE“:é;¶¼^DÚzÈJÛ»OëÎþP)—r6X(L·æ%å'u\^¸â/£cÔBHÚïÛ6á‘>!þ[-ò½ö“²ýt gI1Yx5jm>±ëCFÎY+$d" .á€/¬³×‰7ªGmñHè6{³¡bÑP1‹¿i(a+Ó âô%ÜkLá<ÍõwßÓŽÞBœ}&óå6¢¾nˆlBD”½Ømë.j ãÀGqg¦ž´­D‹‹¿¶u ø>ÚÖ‚¶ÑG®m-Zúµ­Kð.Ö¶N„Hh{)Ø9^¥Ó°xuZ‡: O’ƒTUüqi[·è‰ˆ>í\aÿKh[§·žÖè`[§P)®¿¶uz"AâT±Œãn·›“¶õ¤m=i[OÚÖ“¶õ¤m=i[÷­«ì1J°Oö»utOÚÖ“¶õ´­JLŽÀ̳âÈš³ø—}ŠË²ñdn”XDü€“ÄZ<¢ §æNlj“Ÿ:bAÑ8º×U‚°yfÑÅùrÊ%çš-Gò·àÁŽGÁûê³£/_o%(Eaqçvëï¥0š›¶"Á¯(,ö.ž3øøæÀ f¹UbH±½W]ãĬG }ß-3íEƒ}ðn?m˜bb$ý æ §s›3SSd¦,Ñ2Á"D^ÿƒ·à¡.¼ÎrRÇÙ¥†7¿¿­åËà”õÀl"fêW^c!GëA§@>{žÖRÎ4o×ÄjIÝ„}»z~<ûÎS;¡rî&æ¦"^½ºUÇQ{‹ÊU8ªP’°!³ ˆe2a}”GˆºÍòÉRå°õ›‰FZVäÇ…N²àùL$ò„`›QÇ­ß×jzNðºSé¶Z?^h˜öüãpïÖTN¼$ÝÆDf¢£Ž‹í®±Q™‚÷æ}U礣äçèŽÌ¹ <€Ëçí¾zõ“ÙÊQ,SˆÞþ¢Ä=[-&"±þ8ذxi}޳Ù<dA‘ýÇ¿ôÏ7¿¾Yë×× Õùöó÷FaåTÌÜS§³¼¹ŽKÒc©žIØœ… ý¾|é…ï lÊqì”0¯±£5Ó™HpQ‹«õ8Zý ìü€Yß‚§Uè;åçìžF”³1%(M(tŒ´›yØ‚”|Ä<âzê¹öfÝúr+ìt‚Æ`¹ɪíÖõº•s XFÄÔ-x¨›2¯¼-ÁÒÉyÌ].õy`Ä Ó8lvʽÙ×· têõÅñHW-f1uÞSÙŒ$1%2êlLãBè÷Õ—1µ3Ë€O_g·ÉIÕ§I-™Om÷p}1ÙÍíæc¾LdˆN”ȱ¯ær G[pKXÿ{WãÉ ÃcOç^*Eç¹lÄ,YMÐľùãwei=ÐBXÑñk7ài½Ìz±Û‹r±d¾ÿ?›1ÍOV©Ìi\ØÕ±/`iêiýß‚'ÁšSý½—² Ù9Œè½Yã•Èó&ú*„mÀ-qÀ§¯Å£ã\X~šòé¨âílâ ‘³QcѨ1‹GUÿkçx]/PMàÌ> Xî[ð uô{h(ÇÆYs1—Kq« Cuù¡y»ËæRªàd§bÑI×y@°W°èñë:—€ï£ë, h}äºÎ¢¥P×¹ïb]çöáƒé¥dXU×9œ'çè:Û F9.]ç„*"ª@Ïî_K×Ùdø·¿®szbH|‚—7à“®ó¤ë<é:OºÎ“®ó¤ë<é:ë:§õÙ{w†Ý+Ä“®ó¤ë<]§3_‹[æÍZƙֿEqÑ‚'ÅYP:Óîžu#P8Ï/ŠçDw‘‹ºÂiGè­+Ì¿7äJšd0s°fÃR8×m“?Ô°TDï'¶®› (#\Ïãú9gÍ heÄDiÁÓé²Î¨9ÊWÝÀ!ÄÌ™ ¹ÎïÌ;ù•§vÃЀTœ<ÜxWsùžÎ6f*3E§œ u®¢ãœë4û—¸qpgÛ‚—¤]~þzu¶>öÛ­|×uKˆÑÌÓq…®«¾ôú<ÍJÁÙèÝ€¯N¹‘€¤PGæé›ÞŽ”ÿmµÐ°–úC 6ó0S¨¶ž¸„Ñ„'n'8{Ú<~ÿ¯ß6÷ߦSföï¶Ëz©Û×k"è¡> ™·Ôy\À ŸÓr©º¥¸_bÄÇíñÃî÷Š®ì;Ôm¸àÁü`¨3bYçB'×݀ؠOðkærík,Ê›RÌòAS2£À]J&- h=XÞɘ)ù6}Šþ2IÌN*Ê4€n xšW¥ƒjÖ‹í±ëýÍíÕÍöLtêaüòçÚV,g^Sîg§~ÑZéuœgX5}lƜɉ­ú.Øè£æçÃ+ð¤ÎÑéd®ò^Nƒ(J„æ1‰Ž‹ÒOÝ™±B!™2œ<êÂ1]Nt ̃û<îp_ áðŸ¿üÇ÷ë-Ñ„ªl#}áOÞûmEöÇo5¿þryý8ÝEür±{•óËÛøƒo ŽLÇÑ€œK}Ž@®—ó|³Ñ·»ÇÌÀüǸ¥] 6x—¼†1ÉU.52u»XŠt>òA¨óö»]É0‘íÇu\êG†›‹«?týøüõ-ÅâDtž)Vàf^ΉÀó©£Çd»×ƒBˆÀ.UàéGËÍÓfJù`^v ¯íö› -÷6Êù$H¼T8g¡ÔÈx"!TøÞÔ¯Áýo×7ªrý˜›ÍG{‹ ‰R„ ›JZëJ&TBO]øbªpÏI`0r€ ·‹­¥0ÛøÛæá2Û8Ϻ(FñÁ#vàB ÎùD Ì­ö›P¢!D`Îíl<…ÊLÍDЈìuÂå|Ý Ý~ 
Ñ8Z”]ÓBƒõw,§DâÙä@—ë¢Ù‘ºõòsRÅ´Sçkãµ×ðC[›tô>B  Ø"]©aáO ú/;*BŸ†,ô‹WD7è¹cñâ«û‡»?®s¦ˆþ'{ã¶d1#€ÚŽ)Pww‹™Ñx>5r€ªWÂ1ÔˆN<‡ <sªñßlîºßžm¼žÇ¿™ì½²ìÖø“ÉÑâˆnƒ›ý1ÆZŽtF>Ÿ,QHªx7Â!dL 4ïJL“½;{zÈf¿<»ØLöµÎM„|ânÌ(ÀœOƒsûERgäè2º <ä:ÖÂsͽ²u‰º>¨il¼Åê}ÕËHÐùTP.¸w­SpP÷#SD ‹ëDUºà—ÿr²¸už‰¹@oMO¾6Âè |>Uto±bñ ATa]ÎjÜ1ƒ[™*^}ývw÷}×àbt¢@t7ÈBk1¥ˆ{>Q„Õ;Vø¡!Ë 9/V«ùí8—z‰êfèdpë ”\JÀPBQ:ážMò)—´ßÌÇ!—häu®®ÀC©s8únã)5ááëæâLWþÿùk2µu>J!ä{¸NË^‘i%âùäùާæ`È Šƒº <Ô§¢á®±.zfÌLëì”bÖØKihXI—õÞa>4FwËErK‘!Ô,C‘ܲN@ó­n­xŽP±K Ìé ~>e@·î!.Ar;C .V$Mô¤Œafë5·H®XESà©E;ŸICÅB”Æšƺ…=.h^bØÕ:F%$¬ð¾ˆÕ!Éxó?;9ÒlŠȘÏN¹ðAÅòA»Ç†µ­“Sâ\q‘HTM†Ž çS„‘±â2Ÿ8¡ˆäÀ®ÂSqóµÜžÒ³Œn­RsEh*ín£öÙ„aQwæÛýT6oE°.l±"û’ÆRàc[­ì5Ä v¨ÆÞ×Fý0Ï'HÐðºbÓ›Y !Hv(ÎŽ58º¥²Ñéú¥Üµ¤4ú‰õ ^ƒŠŠÙ¶+)è.rSwÁ€1’½ccÝú aTòºSà <â÷Ãý¹±Æ§.*œÊ{ŽÞ%[¬ù¨ìÍq;è‘Zðဥõ¸Ð7{§¼ˆâ&[ßš(&h1SÇyîÐ w)5ëáp¼Õ€'¶ö¨mœ³Çˆå]ª„ÜàM­†Žcho®MØÜ))[&¹Ç΀듌'Ÿ,{OóõÉþî9?[r§}N0>{γï,®ê8è¨Ù]JÖ¼5böl£fàìuSà’Äh~oCÇ®?f·,NÿÏÞµ4·‘#é¿¢˜‹g#–l ñJ8bN½—>íltìe#ö@K´Åµ\Rr÷×o¢Š”Ê ‰¬BÕpÆœ9t· ©¾|ÈLä#ˆÑBÓŒCMìtg>“žàåè­ ñgë .àÎÌ=Á%ȼ»ô¿ô¿ô¿ô¿ô¿ô¿ôÏõ§ûÒ{@6û)­ëö„¼ô¿ô?ƒžàvI4:oµ ÙæÏÍ:èÄh+5N¿•…ÀÄdÛu*V™{;Ìso h‹ŠÙëí:˜¼a^ûDËøÀíº ëÇ<ùÛ çBžsFiãrêûê§ìðí—Z§Üéß„1øTÏ!EãjNgXAs¨g9Ó¿³{Ô³5‚½ß|i=ɦ[Y;ÍôîùËæá¥9âaáâeeÃOÌòSC@£ç¶;­ ÊOp2•ï/TÔ3ˆ^‚Ǹ)žbžèzM'û÷nH›‘wJLAϧ:Ñ&–÷ [N@7Ó)÷Æ«™7^ÑGÑâ ª'ÀãB­zÎõÓõÍ~yÔ¾·=ƒÊÊ €À`™ÆŽi]ˆòÆŽSmBíLˆL¾r»núVcÍw¼7PÀEôÒt‘ì¬öâöóœL³½-WdÕ¬“_4Sé,’ãJ‡wä1ÇÎàòÜYìY“>ØÄ›vŸCåPÓ ™´Øvò£óI±™CëÝ]æ5/µNÍI¶‰€³Çió¶æ`“Ñ›…  ŠÝæ‰Ä9¬Ü„ÇEË´`n×Y[ùØéfë˜ü†5>bʤdùæ#@œuð¼œëÔ#_Ï÷LÏpôüÏ_gð|lõ™žÏrú ÏÁ;zð¼è”ò'(ýs%™È¸3g’‰ Ùeðü%Éä’drI2¹$™\’L.I&¹$º/ƒ2Æ(dc A©nÊø%Éä’drI&†ÔÔ0l¼5x_ÞT1RšR>¼â‘Ûé‹Üšï¸4¤Lóxœ6µ*$ütY~ZåŒM°™t#Z§;‰«¥¥ª£¸/ͺî÷ 2Qp ŠlÞLkÓô&m¹ô Kÿs¾vŒ^Y¦ÏGV³,µã ¯2 OL-ëy<Ñc¥ã§ß5èŸÇ‡üN2‡Ñy~¦T7Íá¨Åϳìñ€€–VñÀô=8›ïð€žÇcT¨X+ÖËÒ¢U §ó‰Î¦©ÏÚr·£­¾ÖœòŠº.! 
„éUE‚½8q)1êúnsCWWløÓíúî~qüÏÅÝæák“ˆj Ï5$ËLq÷Ò:©™¤8½F“¦3£:+*„tXo¸ž:{ó™€tmåä#‡—T Ès‰*i°fôfz1Kð;©ÍÐÍ…0&ÏET©p–Bu­tu·j€v ÷WéW»NåMbc>õ.eGpl ­ËÌêk!N‹{ÃP‚õt†aööløšÏ‰Jæ¶²Îq „¹Ìÿ:f àøÀ6~uà‘ÖÓdç«'Ƶùßûåá?ßU7ù8Fk•wŠóqÈ·ÖOhŽÖåbB¬Ñ3X<Ç€»Çç§uÙIëò1™Hf ±‡-À uMµ”â:º,ïŸ^$xìØF„ݸ[>Ì’Z+¥d‹XQ.ãp¥ÖP3ÄExpOÿ¿Ï›ë¯ûôRtõ§ÕÍÍâv½º{º½¾]_¼¡¼Ó›š&Ũ<êM­|GÛñlWÇz–€ÏEžÜR0t2)iò& s”G©… bÆ-_ùµ{fºß0×?ýšE‡ÛéÇ[bBž˜¦õ^`†=7ëÀTŠjWÔ–¤w7½¶–§Ð‹Ÿ_ÒÐYšt*¶N… M"hi©óìú%¡fÀü¤·Û™Ì›§ÕÝÝâz÷ý1]N BÌ#Dc›(!‡Áz&úÄJD®ü§¶ë”-ª3LXûË‚÷J|ÔÅ:åÅô‡¶ÜürH«H¯º›Ï›ë×ú\›å@¤][ŽW´T…+g€ŽJ †ñ¶ÒúoÛ»ÇÝz±Ýl×w›‡ÃFyˆ©òÐFf€¬k*ÅÆÒ”Ò'Ü1j·&sµh§Xv§ÓÌ1ÜGÓ:¨ rCJßÚhçxŒ~ÀÔî·oi/,6ßÖ)å+á³*/(M>°øÒG˜µr®ù¨‰ZàÁ™NÔKå\OIT†£ç_97|ʹ Ùê3¯œËrú +çÆà]9×|Ü›týñ§”³nÒÊ9§üÒÛS9'ƒÚ‰ªEå\ƒ*ò@ ®ïÕÏU9'âNèÖ¯œK_ÔÚzú?‹LÓ=~Éð¿døŸU†?)&zPJEÖHF¯úGbÅtýu·>Ú=Ç7¦÷ϯs±¶»ÇoM¨ÂÚ| O1ŸqV÷ëývõ¦¡Û1°u«ÉûV;“î•M“Oë¬é´k¯”&O¿´&W¸x§ÕQÇ÷>Oò$ÞÞ‚Ï[òboÖÍ=ÔÛÀêŽë|º!¤óûðGMñUòoÚ7ÌÏ!tõç§ïÛõ_>üçëOþX¸õï„s·¡Cô)éjÿøð—= ÈÆþ@'sSfú—ÿÖ ïJü ¹ÐÊÛÍöêÛfõÎFx<þøÕv·N¾êþHÅ~yõ×fÏÓ‡ïiYgé§5éâ:CGÿ3}dùá_Fó±sݼá£óßGçåâ;ζgר¯ÛNìE¹Â kP¨]W¦Ï/mô½v $ø™½~8ß9.^¯;×Ëѯ8øZ^/Ùê³÷ú3œ>K¯8Þ ^¿à” &Nêõ KòÛ2§=©6â^”ˆ¤ˆq²@§Sé#({?)tû¥ŸI|B‚þD_¢òø„€;ZÅ9ãdÞ]â—øÄ¹Å'¬M‘3 \.•µN;7þ!TüÊh‰Êb`/KÔèWP2äw‹Í=›ä»Ÿ°=A±•Œ–‹J¸Ð]X/*჋Î.6ŸÖuŠñêßÒÖ-„þ[Ú‡Ú ðHý Óg²ø‰VÄ>¿ßÇwñ“tš]5‰„Øâ’”µ»ß_­¾<þkkÜ&Ëz·^4‹HžÈÔ}èÓÿ5wPq¢ˆj" úMHÐÓÎÎ0ó¨œõ¶€ÃÑøŸÎ0p§?wÃŒ¨E²ÈRS‰‹av1ÌÎÍ0óèRy†ãØ;wœÕ§I) ù}A]²«Jh}ý‡ˆ³_-ÎÚ‡@¶úüã¬ýœ>Ï8ë`¼5⬂SÊúi㬗*ôÛÈΠ²¬Œ`¢4¹ú|9X§cµÉ=€yû‹ýªmŸ%‚îVº6¸DZGw~}—Ýi­½ š³OÉê8ñPÓe‡%©î›ü(B}87ÿ­=ÙûFÿlþ›„;ý]‘¦ðßÈ~è9qñß.þÛYøoÎÐïÓ†õßœ+Ö® ÂjÅ¡©7»§Ô·`ñÐCY`úˆÁ¼/À{{_g݉Î*å‰Wåv‹‚ëß×€K°ÞÇØ{_#éx*iàbJú®×·²}†²šIÔº±ýK¡7¸ózêJ é5—°îS–X¯–uÀ–@ º^eøýóÝÓæÈòõÃbpÖäÁ¦—35·ÿhóuR…(N¶LVz"øy¾“LÈøâü¦ÙVÔçaZ"A+í rìãÍçÅðÓãsÓÛÍæ«SGSð.A뜂J- kꃈ£û¾vëËO¢ÍsSë"«‘» m…2ï !€˜´¿Ö)œ.òõî. 
†I]cÁ¤iÌÆ4òÕï§:Vê›–'‡£­Õ½”çq„€ˆÜÑEë”õõZ‘"@jG+ëÿ|zÜïëÕ6,þØ<Ý6FNƒ2ß*’ Ø[ Œªµ«Å¨£Ò0¨›^»Õößéßï?¾`ÿøÚàæãIÜ9n“;Ng§&‘ä°fU0^q)… ¦ŸGÖ*Bn‘Lòvw…Z/vŸVשáÛß¾7Øó¼Oá9ô^;õ $VI?5ŒwÎÒ£u{CìaÓÀ‹yxd Ú - Œáþ BÔ?«¢üËÁ£ö&;Ü£ÌóݬuÌT¤f¯a?HõÂP´gÀðøÄc³N¹×ûw.d¶ß¬OÍ/@{­‡0ЇYSÅZpÞ™ˆ,8òÂ%UŒËÊpôüSÅÆ€¯“*–A [}æ©bYNŸaªØ¼£SÅD§8=iª˜ÅTÌÔ׈«…¢‡XÕãyåã4¨Œ÷¨5Þœ(Óý§ÎÇi¨¶: 7Ü 3ÖS4_ôÑF6|’Öu#—|œK>Îäã$ÅŒ>gxŽ^ã°´ˆÉ\¤1hÔŽÇŽ¦Îä^ z›wO¬2È;2ÙÔ´L¬žj›~¯ÇTË]˜–ök„)G[Ã2Òénzªc A*u'¼ø·œÄN’Óíún{1]î¿¶>å$Ÿò÷>ŸòÅv\^ýF,yÃDÌíjµº#‹þæ;yPë‡C#­›«t­>5ý°&rÚ¿<üÂô×d°Òs |)£í·w´Å…æüh Q€h âxOŸËÝ×çý¯ÙyôiT¼gÚZ7ëhÓÕ™hWó((‡oŽœ%n…(”;:jb€~#¹fùŽi–ï—Ž¶DÔ ¸ëǥѹr€†l ÆÌÛÙ°½{ˆ#f'‚З1 ÝƒŠ»†’|ÃøN 7M`ÕPižåOޝ ëNà Þd’§«´óšsÍ=IÖ¸ÊÚW&Z ĨG+ßîöæöx·5¡ËÅ·Íêu£0ì ÁVàNZgƒóÈ<\âåÑÒ¾Ãg³ø‚²Æcäb´ÎÙ Ye£ä-ÛDTW7»ÃoÃÜŒgŸƒh?U‘~9@P®Ö4\¹Þ/0ßN Þe1#ùÒ`¸Œ¸fv“i¬@ 0妓™yÀtöŽÖÚö;bw¯ƒ‹¼c˜J6¶'‹—SZâ¢ÝÉ4!î§mŸª“Ùfùœ¢&ÕÐ{¦If³N» ùƒT€>n F$­¯¦Ûf à– ÃÅ”¸K† §£´Nc¬“3\Þt™“×¢4ÔI§Ñ üåÜ\²PXª”`Ðäá6ë¬7ÔS(zBgÊæªùŽ«¨F¡Öåø"ÒÕvÓæ3÷Á†”!…Œ‡Ö¡ÁJÉã±o¦0ª¬ #³85Ýçƒã4ƒÖ9ÐÕnËšª!!¡ÈÎí¨Àî(ÁG½®)ßw³€ÑCž=!º¨#>mÖ³#«DF©b9Pª50É#ŽYÄM)¬ LvZç£Ø~¯«åPƒšâ?Øä§S/ݬóâøÒDêP9HkJO¦×uy CJ‡ã¢£6…|•|ÈQ›FWÜŸjHXúT >½ÚðbôcSkcŠªS• ´#“å(û¨w£;[¥¾*—ü9“º¨Ð_3 i ´Š¡êÎ 5×€œ #ûX"ùh?ãˆ@ŽÒ×Ív›>»=$[6Ó§?¿Wéd»‘”fT¾¾}¼ºiZÓår½Ú¶Ó 6ëýÇ«ßÛÕ£ÑKÛ¨d›À_ï7‹›Ý&Ýi‰ [M»cªßé êry¾©Ùlµô&¯€î‹ ™§É§ZïߤÒ?>ß,^×¼î©ÕûÅúÓ¾Ci÷ZqNU“`¡D&'…¹-é#¿¾|c4íÒ÷ïÁ,HaÌQ÷µ— ºš@sO'u* f~™Š›âÂçÍݺ— PM¦8L˨™_¦Ò¦ŒC¹ÀÞ4ÎÔ©Svb‘ ˆ™]¢N›¹%úåz»ØÞôj·­&X0³ ¶”¦ùåk`ùn>Ý/¾m¯Ÿî¯û¯£jž‰³SïZ!AóKÖÍ´sïW›»U/|5‘Nn/•R2¿,ƒžéþ¶Ù=õ2 T%ê©Þ2Bæ—$ÆùÎÛíãëÝ·=sDa5©Æ8Ù+$jv {涘¾Ýÿ±"§àÛ~{»ÞõûÕ‚C`6ËIJÛüò6zvy˜±Ý=¡÷‹”ƒt}à„¯pòVÍ'å"Šæ—­ÅZ•.׎ô÷àØõ38Ï„˜BÄÔ 8rø›^sãg: |ï ”ØõwòïÁÊl ÚÂô§Üû=±33vB\áRO%Š 0 ­¥Ò‡÷“ì|`Í£Âæ]•ƒÊ‹ß׫+I9ØàB­ó#ÜÃ?œy]HYf@{«‘¡uæ>M§ ÜR#úîeË5pL-8¾ø­,cЛ˜gý™Î‚^¯ô¾¿h(µǃUƇJL©i2 C…UL¢xPÁ‰šî>R»ö,qL*€op¥Öâ8 “3Mµ.ûźµxM6tÞW“I´²m2 S‹+(¨ÒÛ*9–©™Û&%¦'˜ã܇Åêáf±O™$O‹ÔaqÝQÊPKRAƒ|÷Ôƒ?¹@O"¤owÏ÷ëÕÓÓêú6eû½£« Èèº*‡>T8ä S¤Ù)…³Xm÷·ï鋜hŠñ;;…hxà“ f„ ð–>2Ù7ÿ—æŒ?|Þ­öO»ç¦¼æ-mLž{Ân-׉ù@c/)èÁqÖeUunÑë»Õ~ÿŽ2ÖÓ+G^Qe‡ ƒö3èo& j ã~µyX´gq—Öa-ÅU…ë95ã…º ïž¹/ä˜jä¨~}‚3L:‹„¤¦xFDË"ŒÆOŸ'ÁPßP­kI0º)ó%D¤Ì"@_ïEëe;úùö‚`rš$˜Ñ䔂.$,βIp áXsE/y­¿c˜t®bài‘É2ñ(q´PB¨‘O¤ûÿ®yZß­¯39 “&¡¦ø9~R2F Î)Ïv 
K묪“]pÇÊϧú—¢œD~SS3JŒÁ–QìLjñëó§õbÿH¾ç §ÕóÓmªh‡á^hŠóîiÌk@nÿÑ:#Oƒ¯æn `Òy_qŒ×n}ý¸»Ù/˜{=m«òç\LéV±¥´Î˜0&í}µ)'Çê1 ¼Ï÷ººÁ xàR¥hvz´Ìâ¦uµ×5§€ÕQ$`ª ´; .‰9‘é›' D]d í:åÆœ8µu'D-DÇ·!Ìlx½Mž6Ù4H\j vdˆ¡u¾bùSE=Ò•Á[–Ts—kvâÝÊsth­<ó×Òç«Îñ¢¦Þ‘>Á¢¦mð¶.j%¥gÝ5%v¨;RÔ4ª8±¢¦Qè5¡WQÓ¸Ñ9~8“¾¨)¾Q€¹c "¤êz},júXÔôŠš"c!ˆ0*ÈÀ†›(ëeçæ•6’Z<”Âv„v{Õ»F<ºVAjlZ¤5µ3«tJFc[HPŒ‹à)<¶#m¯׺Œy¦²`™×€)iÓëà]^uØ4D *Áó¾;·,pç¶x©Ɔ7r£¹NQšÐ]¼ª²)õÿ b¬ó[yëc;-YòzÄÐ/…§†ß¥L«.ëc†iL@yyDáð~c™ YôÝ‘fçHq,[ –*8páa°~ç‘0’MƒA•ÐŽD&Z1õ!RÁk $’i¦…¥! í¨°É’u7œÅ´Œ¶¬ \o‰ø§Ti¨’ÁHX‰ ‡hÿ²8ÕÚ^“ÕÖ¦(%MéÞ…ºÔÑZnÕuÀ[£˜åRÒê S´ehQ ¦Pè*e4cí„a­ []jaÛ‰ZÔ Á¾2†· âî5êÓ}¹ô Ì?‚ÜAõ[Z½nLÛdƒvD›öå?rEL[ƒðø%åz«.0ÀÎ*ZuZk-&R3Cm¬ Ÿ€oÝ_«lü¾HÊê úmiÃ\h2@°ÍR"uÄ ‰c­†è| UÂämAƒ¬ ã\­Fž@)0&ÆP‚@”!†WÙ4!* í¥ú@UlºØ‡êGüh/<þ5üÈ´Þ/õã.ÐŽÑä±iÐ/•À\,x^€g®•„Ħ),”b?¶•cìÃr„!¤ VOW!‘Ý °\žl†ï©7uQÆýÀw*:ÎÜ„ÖÂ(I‚C­5&Mií«ê`YÁ 2K¬S ¬|(¹°ð« ÷’ a¸x$zëlܘc"Àj!’§ºøÅýƒ Æ|å·ç§Ò@,Xˆq¤¶’˜cý8Ή A“‡bþI†…¥|á…¯@—1ZÒ|hGUl¶‰e¦J'ÉKBÜ'Ä·¾™ÿü\ÂÁÖ D»ºvÌ—*ñ²¨ÏSøMÊ3ëUp ˜"° ÖºDIEññfáNTïëÚH/EX¶JqÐtA;Am²¨§.˜+‚¬Zß>=˨ƒ)f%km";%ó› Ú¥ˆ ûÐNª¤ziw¼UŸ$ETÊÕRìqq´ø½UÌ©TpåC;Œ–N»()wÅÀ7m¢âcaûõoK—þ€m ·ÑÊëC³P}b‘:ÕrÀÔú³|Gˆßµ g0ÍLÈ®ƒvĘVÙÐÓrS}Ü´E ùðíñÛ*n¿"n Á 2¤ÿY'o“­‚.8§>%Z$ÓŸv__]Íêã1V)qÕ ˆ’þbíE;Êe—[B ?€´d†‘0pÍD›Á`álˆ0„’àHƒz®Ó.íŠoêS#:Zu ±^B(Ã"XÒØ!áh'û@$#E „%*‡eâH?~¬ãDàÿCøµUÖv³ñN}Jt:eèòãgñÚ/ ü»339›>!;E`XVèü1øÃZ©B²’vÃòí9ˆ°ZÛà@C»èJ‡nGDºB }zé(¡^J8Ó06¨;@;#y2¿Ïp[§´‚AÂ’¦Ï?zÿ¦¯dÚOƒ¡ÍÚßc;ÆSܨéŽg0…Ó†Ia±‡Çx- ѹ£L)áóë^Êð3'apV?f súFôäc~[OóëC×ú´c~ý#}z1¿­ð¶ù-_n•¡ÁÚ Ûi̯åð¢g- PÀ*ë@µ§•5¶@Å,èI&ŒžVò¢~ 1¿ÕÊrE óëÞÈ !¼Î¼óóûó{R1¿À˜‚*:Üz³$íhÅé›&Õõk¨À® Hà jÛe$ªZ¥…¦—G6Ö¾ÒD†4vŒXÕ&i>ªtfjšÛ´%X¢NÂügò ÊØl,´ÙC;JI²0˜ô6j !:AA¨âmPÒ6¨•+ÊúÝS’seji€hïVx@žŠ¡CÙîÜͪÿ4^"”¦6´Š¡çI*suÀ6õi„´óí[·Û°ÐÁ{SùR stƒ@;JE‡>äLƒ5V´^–GW_…aòÉtå² øOàÌ„ÄÜ !ðXyµm¹ÞtìQµ–*ÉîÝï\Ò\JËŒ?Ö£h'¬I"ÆòKz)Ó¤4ƒR{µ\ßÌfƒ%&ÝXUÔ0î?b×XåÕР٨1y¢ŒÛé8¥>x«CÎ(ÔDïò»œÐ7LðÞíi²¶a˜¼Š§Lu^c–ÿO“_÷5 "PÀâ#hÇHšÔT0RJtè*Æð¿Áâ,A6ºß´ÌÛ±àní8×I3¯wÇTD 
¢[ß‚:jˆlón‚½àGl‰’=ÐŽVT”&nü¹)O5ë+7j_ËDsÌÚ/1™žŸzÿüÇ?ÿ§p›ïâÀãïŸ|w{•¯²ž=¯?Dv4€~žü8iT’Þï?ëá?÷zØÛ•óòÍëõI_C0;=üFcûpcû¡É¾9ûß|¼ÚLÉöc“Ù}2{/^ôXo:9;g³âüoÙ¤³¯×wS׳TùW#§u“Ñxò6Ÿ9¿?ðÊø™u¼ñýëÏuVôx–é|rv&ô,caÏÔÀšL&F@§6·ç†4â'ï[ÑñmcµaÓj®£õvÛlÿšÏó"ÞkHÒ_¡ï¾ô,W ×+ý¢ƒÇwÎSÐ[öÑE0 jÀT²·#Æ{Ð)ôùýw¯úš Q×7ZûO>Ë·nŽû=ϧ³V}þu+÷M¾˜^MÞbÞd9jÜgR±òÍïÀßæçù"Ÿ±šÖÝÝŽô-äÈN€à ”ȽbÃè— ?lü»(§Pjo¸Å×Ï4Ÿe™2ᡬØ3kc’©<Ÿi˜„Æ@Û‡f³÷ÅtžÍ¦ÂkGM§Ì±÷WÙ<{—O¾p#:êýô 0ÕγÏç«Åm“MúI£•€Žªþ=ÿLcù^öýÍõV6ô ‡bÞZnÙe/…Ü(þU¤‡îQ>âbDUEz4{C1‡¨kÎâôU_?PD¼üõ®>‚Ìú#?ð7h¸Yù>eóùÕª»ÅÃu«é¦t’ù¯Àûp|xþ/bÆDºÆ¿ /–=ô¨É{;Ù ó"üUÎÏùt–o³ËòÞ‡¿>³5Úèï ↻e£ÍäõååÍ C°FwþÏ2´!Šwýý€ñfuqµ˜þYðüÏóÞ&!ñËØÆg7+ØŽñq¯—]O7 ý-žaã>n—MÐý6ËÅm±‡¢%èpb¤µÜ{/ÛŠLTû„­â +•´‚nŒs– ?Æ2®©?Ï~ÑŽXÝex}ì9[öx››Kw%àf1sD/”k¨Ò*@Åžä°¶ >Š Ãòfw.¯æÓ{YÖ›Q"ý”hc…%¢]Hö!/œD±U}* k[&:®8¥{«e”2í(m]î¡SxÑî«ú ýõZ#_{;èz1au;že˰ðX[ƒ1|…§¸8´d`SÖÊ È…vœðfûô­óRŒi¹B¢ C/ÖÄJâímI×c7àâVÝj‘g—»‘ S̯€ÊEq7¢!`îöutλT‹©>JZÉòè3*(fb^ªd¥©Ü½. «ìE¾O“lyqv•-&0J†ú5 έ  Dv0ç†GWúkÅeÈ¡©Ø-VÓól\¤Lä~5¯’J!t"†Õ%©G×zºë#f”¤çOXXçðçóIpÀe8“Ó:³j¤ÔCÜŠu/S•ñð¨2«Øã­ØÐuGψžþ­Ø6àÓÜŠõ ˆk}â·b½#}‚·bÛàm}+_Ž)I™A)Å)SÞŠ•’SGnÅ:ŒR믒]B­”t8‰[±'FsFÏ û¸nÅ:ªÇŒ÷áÑǃˆÓߊÅ7‚"LTäíä…¼ûx+önÅÒ!ÚKÒ2î¿ëÚ±J ÏD·b±_£Œ¢$´´•Zò.oŲ¡”R©Ï‚„ä”H:èi£ÉÌÞ(ë2£McOç ÌÊ ’ûÏW˜ˆÖ*’–_ÚñÛØ–Œ@Îh½h<Šy©hyqÐ ¿)c”ÿàÓG3II*´cÑÇÐ 8/ŸaiÆî ÄÔ]9‚õŠiËë:m'½>V#X’«:AÄþBÅ5UD¢ÐŽ“˜M£ø ilAï#@Á+¡Q?4L¤HÐÉ€5IKs›¦í¼G@¶6Uù¦]ÐîS9¶.°û÷tÍ¥ÕŒÐÐI¡æ‚‘†gë²B6Jq)¦»vИX\ë… Ít²*K‰æ¼>xÅ’oÀ^qßåßbÑö–`M„Ί á‰dk]fˆÀFy½ø Á» æÝé'»f@åJ°€2X÷,¿åŒ×-I=¥Ø•bìÌrKE—r–°ÎäÖTôËDË–L´\¨è˦I˜-aìnxàÆÈnâ¥ð»­ÆÂ›¡5®4$eiÈÚ“\¡4™SôE6¹œÎÙø=J=ŸÌcCÂ9^éè<®•2 ÆLsBFe>¼Èg³«íÝkRúAjL-¤iÛ°¶„¥;³ˆšêŒ†¦wÑPß¡3OÒ¼X)Âr³o4þ¸‹Úq³>0q×%LýKíÙÝø‡½×«^>EٻȖ½l†÷ìo{gy>ï-`kü-ŸômVîëy‚ ørm¾MpgZ]L—ëãÅay¯÷È"ÑAHM4£A\ëfôŽô F3¶ÁÛ:šÑ½/[–Rš›N£)誒u8Ø$ªT§͈¨—\ÖЄ­ÜÃø(¢ãFG뇋fŒA&YÕæ~Œf|Œf§—£Áº5Šö~Žâ"×¥;ûÿƒ_ÚÒg=!T¡Ytsy™-¦â²ŠùÅòðK±Þl¢´ÆK=7µ©¦=ݽ­'ézË­Çç›rx¾¾Z½<>vqCÿô“‚¸¡ûÏ[§•Ž6ÚDíN7x>ý5±Qï¾¾zs5Ynžz•?ÁÐX\Ý©ÛÎzëÎ*"ÿ:¨@FÈæ·îXaø¬õdH1|Sn#r2Œ\wø÷»ÙðóÙìï¨Âȯ¾E·uåYeì¿»éÏ…òšÏ'×W`…­¡Ýæx#w¶Ò Œ4°{/ß¼vÛ@{ýf9 Øß‚à™ô0æ®[·Y÷ýåÔQs}µXzhæ|^~±¼ÛdD¯&Dw©fÑD¯$v]þy¯’®{7[·KBÛÇ÷/æ9ºbŸ÷Ê<ÈyñU™¨¶H¨ÝŸäç0´§Ö¯ûeæÚ>ˆ‘Û 
œ3Kâ<ËVŒõ€)–Qk3:˜¸ö~ææ>‡æzû^|¶Ò~öâšVß7µöóÿ˜ÿ½ýâ@Òá§ŸÀ0+ñÌšÏ{_fgh~^f×?hi‡Þõ×sµœá©Üv$Ñ„[¾Ÿ^.§È¡ýQµ¸Éq2^n3œîwX¾ï~¶a—]w÷aÙ²’Î÷§Ý>vóBËÏ{øé:}©c°fÁ™0MÛ죛ܣ;ìærïDzœ“—7ï`‰ÂtÁ<1˜(˜H³?c•„Ÿ•|Ÿ½MîΧŸìõ§gOÉcBXn3;&ðë·7gk²ÿ‡îÑ2_•£°^EÅÃò(áéø"¿ïíÒ‹‡È+ŒZ)„ë‹%,`íÌnŸE ÒÏ÷©9j(a©Èüöbq5‡Y.\þ½§k5‰_r=Ä#_Æ-ãËg£=Iº¼'Ÿû·¤l‘ïJÚdrwB…e/ü'Y®­\zo¾TQ®-%ApŠV*wsÓõ-èä°‚a„žnÍJaØ\òa W”:r™¯#¥TÎÎ?b» þäþü¿ûc9Ó´ ó¯vT‹ÕL˜¡,HÉ2C7G¶äÛj«^›ˆ_×-ŠmKµÆš©ñ~ùÍïþð«+T¦ñÇßþíNÿ\6ˆÿòFøuâ:.åã{E8ÓþÔ'ùI‡~ùȼUîÜw£D/üÑ=­½õ¹v,×±‹³è§Eïõ ê/üm‘ô­Ñ{½Q–ÐÆÊ)¢÷ˆÞ#zè=¢÷ˆÞ#zŸ‰ÞËwì˜ô…óQv¢7ÿOT&&FFE]Ž./üÄzèÑÿ ¿Äÿõâ¯ú%þPÁœõò¿Äw<½ä/ñ½Þ ~‰ŸX¥ëdÝñKYòhlÜj'. ­ ¹b—è\Èqc‡A…øf'O:z^ÈoˆœR~Èц%†ã²gõ[ð5»lïí“=%Ž:w¡?@¯®Oèψ¿†ÐwÌY/N軞^ПÑ{šÐ·ÁUÇ«Ôc;;=Tº~m7'Uë“=¥^3~¶›ó޼1ÛŽ*JIÙò Ê4°]`»ÀvíÛ¶ l7í&öY/ŸzTpl·¶£„MÒ“ë4_þ;ÂËïÊÕç*pΤƒ Tí€nÄvª³ÒslW³3²óà7úfÇêoÅvuP€ZhL†â K`»!éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØ® NIIòx•B’[±l˜ñÛU ȵa ¥²ØZØnJ½bú,l7å3z¶›RöØÈ'°]`»ÀvíÛ¶ l7Äv3û,[`»ÀvKa;®··1;u±]³ƒ‡*8a»ú\pt¢ +vFwb;N[9 ãQã5©SXRf—ߊí¦ÄÑCIÛÀv<¦ãÑõ±Ýñ×`»Ž‚9ëű]×Ó b»3zOc»¹UÊèÞl»Äp>ÀvSR9-vIvNý“+¾ÿ¯±Ýœwì}œç”=Ü}lØ.°]`»ÀvíÛ ±]Ý?kÁ%Eî³”ST¦l·¶“€1)kÛ5;H—gÛÕçzΈjM¦|oµ#Όٞc;­S¸V¦4{jv ïÅvuPFñ' ~Ç ØnÄc:]Û ¶ë(˜³^Ûu=½ ¶;£÷4¶kƒ‚%¯Rø,aùÊl;Êè¶›’J°XKŠªJÉÁQÇê>¬¡ä”wŒü}ØnN™q`»ÀvíÛ¶ lØîul7³Ï B)l·¶Ó–ÅVŽ€?ö.ûú󏨉ÐÕØN[CJG¼©Ø¡ÞYÛŽl+ŽWÄçØÎê6L’úe,›ë{kÛÕA5³¨ÚPœæ‡¶’íxLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvmpbM/¬Rj·b;aÛ,û¶›“꺶kªPнRw;àÏÂví­91Ñ ›%å7¶¤˜S†Ø.°]`»ÀvíÛ¶{ÛÕýÓ“¦”q¸ÏZÙËÛ¶[ ÛÙVïž–³{ê×¶kv(—·¤(ÏÅœ0y…GTÎT¤wfÛÁ¦ÄÙìÛ¹c.ÿ”z­;“V äŠz0@NCõX|ði\yëò\”(Ýí½3+#%uzAYŽ"åÈE \rÈE 7È•ýÓÌ}¼ÏÚC5Èä"[!ó­¼£AQп6ÕìHøê@®>WIˆ­`T;6²{9gÅ|Ð$Þ+‹!V(EWq{kþEÔ‰Á‡âL"ÿbôÃzÇ£ëç_œMþEGÁœõâù]O/˜qFïéü‹©UÊž¥°]˜‘7@—ãÕþu©«å_L©wû°Þ‚3ÞñôpÇøvl7§Ì!°]`»ÀvíÛ¶ l÷:¶›Úg)çÀvíÃv%táLè:ÀvµÚÒõØ®ì¢F8š@„XæÐØŽ¶zÒýi i£L’û™"»Ý“ºLwb»9q„Ñ[pÄcz]Û ¶ë)˜³^Ûõ=½¶;¥÷,¶›\¥ôæ"å"›ÓAoÁI©æKa»9õŒŸ…í&½£ï+R¾¨) â Ê,Š”¶ lØ.°]`»Àv¯c»¶zb Óá>Ëå?¶ l·¶«&¨KÊ=l÷ÍîF]ƒíÚs¹y¢y4ˆEñNl—·¬†ò¼Úä2…ÕLúS½Ú±Xz+¶›'ÙÛ yLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvS«Þ[í(ã–²`»9©š×ÂvSêô³°ÝœwsÚîÆvSÊ\â’l`»ÀvíÛ¶ l7íföYÞ‚íÖÂvåÃ$K)»v‹”7;‘|uµ£}ü27¨_…g·C¾ù’l²dÛA%ô57ºmD›ù³~ö7b»&ŽrÙUÓPœ#FoÁ!éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØnÜSÙÀÆ«TÙoîͶSÚ$Ó¶›“j°¶kª$%íWæÛí?«Hù7ïHò~¹‹¿xñ}½¿¨å;EYöÀvíÛ¶ 
lØ.°ÝëØ®ìŸPØÆ'H9I`»ÀvKa»òa–C"€æ~¶]³+GÝ«±]}®BÙÜFH5óEÊY7‘ÈdÛa=*{uU· ân§,oÅvX×IÏJ~/‹¶ñ˜ŽG×ÇvgÄ_ƒí: æ¬Çv]O/ˆíÎè=íÚàFf”Ç«”e¾¹¶]Jõ÷•ÃÕ’3ë+ ªùb—d«ªœÔu°«îvôa—dÛ[gcÂñg˜³¼ÛµÁÒø«Ëã’l`»ÀvíÛ¶ l7íÚþ)T¾ï³l‘mØn-lW>L6Ä—ÿúÝÌúø»üEØ®ã¨E8,ŽýHˆöxÞJíê jœ¸ßât‰µᘎG×§vgÄ_Cí: 欧v]O/HíÎè=Mí¦V©ÇžwP;Ýü¨²ÝœRÀµ Ýœzù0h7åÊö>h7§ s@»€víÚ´ hÐîuh7µÏŠD®]@»µ ]ù0Õ“ ríªI¾º!E}®åæ6daZÎTéÎ\;Û¤`)?§v\¦pùC1xª7;â÷^‘mƒj9rf‹Ó¤AíF8¦ãÑõ©Ýñ×P»Ž‚9ëÅ©]×Ó R»3zOS»6x9ó§AÓÝî͵C˧|€íªÌeg‚T£Åríšzð²]÷*„䟅íÚ[cRUzÁ;žß‡íve^^ZÇʱv`»ÀvíÛ¶ lØnˆíÚþYoÈú '€ƒÆOíÛýbØ®|˜ l‰»Ø®ÙÑÃU ‹°]}.qΜ}4˜üÞ†¬’˜Ÿc;)S¸V×Sì+mvéÇPèVl7%s\‘ò˜ŽG×ÇvgÄ_ƒí: æ¬Çv]O/ˆíÎè=íæV)¹9ÙNÓÆGý(æ”þXÆô—¥vSêéÓ Ûµ·–”Qaì¦7ö£Ø•ÕŸ÷ñee~ƒÚµ jÔ.¨]P» vCjW÷O6K¬/œÊ¦Ô.¨ÝRÔNj²›`êS;©ÉnÌ—·‘íŒÿý²LYî-l—@àˆÚiê^þ4Ò?ì7;Jï-l7%Î9úQ qLÇ£ëS»3⯡vsÖ‹S»®§¤vgôž¦vupIb>¨'²ÛÁ½ý(„hS:ÂvsRy±;²M%/ï8VŸ1}¶›òÎÏÊÇÝíꈚÊ䜆veý(Û¶ lØ.°]`» l7µÏjÜ‘ l·¶+&»±ÓÛ5»Çü‡‹°Ö‚u”“.w6;¸·,oÙMÒAY«S˜Y,õ;µ6;Ò÷V¶kƒZ.û¬ŽÅiŠÊvCÓñèúØîŒøk°]GÁœõâØ®ëé±Ý½§±µÓ§²­ŽW){VôÊd;Ï›ãÑÙ9©&ka»)õîôYØ®y3—/`腜އíÚˆdå­åeý(Û¶ lØ.°]`» l×öO%ŸTº`¶ l·¶³ÖgÂŒÀ»Ø®Ù]~G¶>×)e“ášM1Ý{GÖ(©´‘õvØwíwÎhv`ï½#Û•$O*þ(ŽÛxLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvmpD{a µ[±¥Të `»&ÁÀÊb9–ªy±ÒvM•›x‚±zû´ŽSÞqá÷a»:b½ºcƒ<À]™¶ lØ.°]`»Àví&°]ÛgKŒî:— ˆÛ¶[ Ûy-^þ‡t±]³ËWc»ú\fÉŽ8š@ÅîÄv µ,§ç}d1µ)\¶$ér»]r|'¶Û%.:x,-¶ð˜žG—Çv§Ä_‚íz æ¬×Æv}O¯‡íNé=‹íöÁ9»ºŽW)’{³íJü±•=ö9¶›”êkeÛíª(ÕóÃvð ØnkÍ9Œ½#ô>l7©Œ£#E`»ÀvíÛ¶ l÷:¶kû§'.Ñü û¬?ÄòíÛ-€íê‡)R^“zخٕµš/Æví¹ ^ží£ Tìò­—dmÓ\bçÔ.×lÙFq\³“‡V­ï v¹-C™­Ÿ?°‹ó‡ã~P»ÓñèúÔîŒøk¨]GÁœõâÔ®ëé©Ý½§©]“äá*å%î¿—Ú9oht@íšÎ9Óxµwd^‹Ú5UF˜²ŒÕ³ØgQ»)¯´Ý7eúÚW§9’í‚Úµ jÔ.¨]P» jWöOLŒJ<<”O"Ù.¨ÝZÔ.Wǵå\îR»fG·2.¢võ¹JH<š@Å.ɽԮD”ðÛAÂÀNýŽ·»]æ·–¶Ûe-q­Å=æ,¶;à1®íΈ¿ÛuÌY/Žíºž^ÛÑ{ÛµÁEú5žÙÁ½}dÙò¢Ø®I¨‰Õ^*º¶kªŒUÇÒý-> ÛÕ·Î){Ê/|†åDð>l7¥Ì)’íÛ¶ lØ.°]`» l×öYäò!÷Ù\N«íÛ-…í ––Ëõ’¬v±]³+ƒ^í vš(§THà $ÉñVl—6@#8¸#‹e *ã UÛÑÛó[±]§²ŒÅÁ㵟Àv<¦ãÑõ±Ýñ×`»Ž‚9ëű]×Ó b»3zOc»6xëNãUªDû·b»¼‰™uV{CAà±TƒÅîÈ6Uî¬à/¨Wÿ,l7åWz¶«#b2€”™¶ lØ.°]`»Àví^ÇvmŸÅr•4ÜgËFÙvíÖÂvåÔò<òÛ£~ýî0¹º#E{.¡çdÃ#´ÔãÞŽäÅxȹC‘p¤´Ø¡,ÈU ™5Õ“¥O äÊ[ 22½Ã’ßÈ•-¹¼òÕ±G \rÈE \rSœ{ùLk[åá>‹é¡YNrÈ­ÈÑ– ò~ï·Üíäúü‹òÜ2/jùñþýþfW¢¶{¯M[Öƒ_䨲˜r¬#°ÒúÛÓ[ó/fÄ¡æÈ¿þ°ÞñèúùgÄ_“ÑQ0g½xþE×Ó æ_œÑ{:ÿbj•²¬·æ_ûfÎצæ¤þ˜*òËb»ªŠ8;:ÕSú´jGSÞÉøÆåsÊ«§¶ 
lØ.°]`»ÀvíFØnjŸ¥íÛ-†íH QÂlG’oÀv$åjà0š@‚(·¶ô3«\›â:…Á€S06»Dï­vÔ%._ ŒÅ!``»éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØnn•²{±²m.ÔnJéÓ"|¿$µÛU) á êI?‹Úµ·gÜšÚ½ão¼55¥ŒŸGAí‚Úµ jÔ.¨]P»çÔ®íŸõžÛxŸ5‰Î‚AíÖ¢våÃÔ$å1Þ¯QÞìÓÕÔ®>² ê_4»"àÎd;Ý„@å µ ´©^óûS½Ù¼·µ`´–þ¥AßÃÝî!Ê jw€c:]ŸÚ µë(˜³^œÚu=½ µ;£÷4µkƒ£§4^¥0ý5ʉ6Ô#l7'•ÃvM‰ø Âún—óga»öÖeÄÁ™c÷޽1Ù®( 9¿°“[`»ÀvíÛ¶ lØîulW÷OIà ãXžz¶ l÷‹a;©-ûLüIåí¯ß}À¬x5¶ëŒÿý%ä;±lI!\‘Õ:ÓÑŒ Ç5;p{+µ«ƒjR‹ÓÇDÀ v8¦ãÑõ©Ýñ×P»Ž‚9ëÅ©]×Ó R»3zOS»6xözïp¼Jeñ{¯ÈZY´èèŠl“PNl£s]³µ¨]SE :há¸Û¥ë,ØÞšë, ±wøá÷·Û©]±œ…ÒàÇ·]µ jÔ.¨]P» vAí&¨]Ý?Ëù_ —MZ¸µ j·µ+¦—£–öK”7;$»šÚiMâC75M  ú­qKìØÎêQÙœqpï¨Ù=)ò}+¶«ƒ€¢ÀP\ùW l7â1®íΈ¿ÛuÌY/Žíºž^ÛÑ{ÛµÁ‘ƒŒW)à{“íÔdÇl7%ÖÂvMÕøÉ^P¯òYØ®½5kcïü,¥ínl×F,_¹’Ž•qT¶ lØ.°]`»Àvíf°]Ý?=«%LÃ}ÖSÜ‘ l·¶³ŠãŒ°Ÿl×ìÄ.¯lg펮‹a˜&V¿7Ù®–­s>Âvîh‚%ä(-vp\þùË?üáO¿ÿ×ÿúò§ûmûßßâõ_ÿ…îüotô›?›²îî»â·“ÿ¿|ù»ÿ U[0õÓß׿øO5\øéËÖüÓYýø3§Ÿ­è_F§UìPýüMʯÊûûº¬>Ñëâ,õëêØìèÍ­FÊ ”r"¤ý4»„9€ìˆ´u<º>=#þ ÛQ0g½8ízzA {Fïi Û‡”ÀÒx•ÊpoÍÂr(Ú˜ýÈÎI5\ È6U˜™ŒÆêËqæ³€l{kJæ"cï ¿±Cð® ×veÏŲdÈ @6€ìs ÛöO'†ñ>k@6€ìZ@¶|˜ZÓ-ìÇ«@_¿û€53ðÕ@¶>MItx„V¦{ó(ëMlz^´R›êb–ºØn·3ÌïÄvmÐ Ù±ÿÃÐn—9ŠŽxLÏ£Ëc»Sâ/Áv=sÖkc»¾§×Ãv§ôžÅvß÷ØÃx•º;²ì±› =Çv»¬]éñ©?vEùE±Ý7õ–Ëé`¬á³ò(÷·®¶~¶â7ïøûŠî#2>VF‘GØ.°]`»ÀvíÛM`»}ÿtW÷NwæÑ!8°ÝRØ®|˜œÈ8Kâ¶kv€|uÑÂú\ *aB¡²ÝšGie uÃçÔ.— I ¥ å¶"ð[[ìâP=õsw;x¨žÔîÇt<º>µ;#þj×Q0g½8µëzzAjwFïij7µJ¡Ø½ÔÊ¢•à€ÚíÜR?]í›ñZÔ®©’ÚÎ>Õ}VÑÂ9ïðC_®Û©Ýœ²,Aí‚Úµ jÔ.¨]P»×©]Ý?Y^Øg9¨]P»¥¨]®wYT’w©]³£|u‡àöÜJÄe4Êñníœ7Åœì ÙêTOõ xÛA›êüÖ^#»8jt(=°ÝˆÇt<º>¶;#þl×Q0g½8¶ëzzAlwFïil7·J9Ý‹íH·–`»)©”ÃvM#”Ýìõ‚Ÿ…íÚ[KJe€±w˜Þˆíve^NZø‚2ËíÛ¶ lØ.°]`»×±]Ý?)[úoöÎ¥W–ܶã_e0k» >Ez—M–Y @A2qŒÄpàLùö‘ÔgìνÝRiêq•9„wcÞÒ¿Ø¥‡"ÅvÉ¢×H`»µ°]ù0M8Ë;S?|ñ›øÙØ®>ב"&åäWvfÚ´8•ÞdÛQ”Ê1¾Oè›°ÜŠíê R¢GFŠ„ȶò˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàb(nãUêM:ÛÁFXâ{¿ÚKM8^íE}1lWUi*âwìUâ™?¶kÞÁrê‘<ôŽÂsƒÀ«±]‘UÁÇÛ¸bòÀvíÛ¶ lØ.°Ý~lW÷ÏŒe ÜœkvÉ5°]`»¥°]ù0k‡Cל»Ø®ÙA:»Ep{®”—¡MðÚl;ßkÉ×·P¨LöÁ‘š[ —níH1%NS´ó˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàà$ý:¦v —b;AÞÊ·û&ÛnN*ùZØ®ªÊ I3Õ#äÏ…íæ¼“o,m7§Ìâ’l`»ÀvíÛ¶ l7í¦öYÕȶ l·¶+¦—ðô³íªóS$u¶kã“”Y4œ@®(W–¶“´dT}í¤NarIÜ/§.¿åã­Ø® *Ê\¶Ù4—U#ÛnÈc:]Û¶ë(˜³^Ûu=½ ¶;¢÷0¶›Z¥Œõâl»G>ÚÛÕ~BªéZØnF½=ÿçS`»)ïÞxIvNÇ%ÙÀvíÛ¶ 
lØnÛMí³Š)°]`»¥°ÔŽr·‘ìÃNÎÆvõ¹J&&#Vížjë]€ílc$tyÈé#ÄðþT¯vÙsºÛ5qLåÃÁ¡8ã¸$;æ1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁExt7áa÷ª÷‰ØN7`{“m÷à‚¤;¤®…íš*5NìcõªŸ¬%E{ë\oÃØ;™ø>l×FtUмC™æÀvíÛ¶ lØ.°Ý~lW÷O§dîã}Ö1S`»ÀvKa»òaZuìb»f—äôl»ú\¤$£NÌÕØõÊK²¸yÙ‘ˆ_c»\§°B‰„ú\nGï|o¶ÝŒ8Gˆ–CÓñèúØîˆøs°]GÁœõâØ®ëé±Ý½‡±ÝÜ*er)¶#ÔÄß`»)©ôâšé7ÅvSêùëÛÈ¿nl7ç ÷a»9e–Û¶ lØ.°]`»Àvû±]Ù?%¥2“²÷Y3lØn)l—+6c¯ T»Ø®Ùœ^Û®>—‘3¹&PÍ»²%ñfDÉßt’µ:Õ•sä_4;pºÛÕA¡÷Ë~:WÌ(°ÝˆÇt<º>¶;"þl×Q0g½8¶ëzzAlwDïal×G@T¯R ùRlç™6Éï²í¦¤–¾¶{¨r'’ê>¶koM¢yPO÷a‡ù>l×FäòÍÙŽmœ$°]`»ÀvíÛ¶ l7íÚþY¾bï³êQÛ.°ÝZØÎ*++1k?Û®ÙáS£…“°]}®¸yS±Kt-¶SÏòî’¬m% JÄ‚£Ã¾;fÖÕ¹ªÞU“Õ—ïK?[ WÞÀTÒïìèÚÞ‚œuË™^_›š”*kU;z¨¢²Ž գЧÂvÞÉ8¨•ÿa‡v¶{ŒX¯ ‚íP&Ø.°]`»ÀvíÛ¶ÛíûgF‚´ãPv…ÀvíVÂvåì=ëm éa»‡] ¥NÆví¹ eƒ†GÅ.½ª*q^oAØŒÜì ¶ƒ:…ËdwëÞ|ØÑÍØ® šµì˜y,NsT;ò˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàæà¸c 5ök±ÛF(o°ÝœÔ¼Ö%Ù¦J¹£ŒÕ{þ\—d?¼£¹üoèêÅû°]Õm‡²§*UíÛ¶ lØ.°]`»!¶kû'£1î8 ¶ l·¶ƒÚ³D»Ùv»ôÄOÂvõ¹˜U9¥Ñ*þH—fÛå-£½®v$X§°g”Ô-[û°Ë|koÁ6¨‚þLÿ°K‘m7ä1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁ „Ç;V)çk±ÒFü.Ûî!l‡T„Ű]SEÊ€´C½}®"åsÞù?¥À¯ÆvmDó~#”eOý‹Û¶ lØ.°]`»ÀvCl×öÏLfý²;åÀvíÖÂvذ9ö/É>ìÂÙØŽ#Ò~ŸŸ‡\‹ítCÓ²3½ÆvT¦p½XÚäš]Êz+¶kƒ–û­°vÏwÛ½á1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁk’Sæñ*•.Åvª°øl7'Ua-lWU•#©ïØüÅß_5¶kÞDãßÖR¾ÛµËQŒÒøŒQÞ z ¶ lØ.°]`»ÀvØ®íŸÂ2¨!û°ãÀvíÃvåÃ4“ò°¯?à¾ø€ÍÈήmWŸë‰¬LŽáAÕú¥½q«•€^c;.SØ•\³#—[±Ý”¸L)°ÝˆÇt<º>¶;"þl×Q0g½8¶ëzzAlwDïal×wa†<^¥ŒíRl'`[z{IvNêj—dêËÞ¨6VïdŸ Û•·Ö¤bˆÃßV+„ºÛÍ)£¨mØ.°]`»ÀvíÛM`»©}–s lØn)lÇ›$¨°QÛ5;9ÛÕç‚”WL b|%¶cÚ°†ókj'ug&·þm^yœ¨om$ÛL€j×Ä9Gi»!Žéxt}jwDü9Ô®£`Îzqj×õô‚ÔîˆÞÃÔnf•‚„zq#ÙT7{¿Úï—Ê‹Q»)õþ¹¨Ýœwžo¢^MíÚˆD)XñC™Ei» vAí‚Úµ jÔn‚Úµý3KY}¼Ïj² vAí–¢våÃôzyÛ1u©]³ÎgS»ú\cG"M b‡|e²]Þ(“:¾ä´La„Zr©ï¨Ù%¦[±]TRx‡8& l7â1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛÕÁ)¥lƒBÀMdÖk±]&ÞÄàM²Ý”T—Ű]SåÈ9øÓã-ù“%Ûµ·FJ&ãÍ’Àò}Øî¡¬¼ï ðÃîéy`»ÀvíÛ¶ lØnˆíÚþ)ecÚË?ŸÛ¶[Û•Ó©ÄHÐÅvÍŽèô;²õ¹h†åᣠä%’“+KÛùfÉ<½éH‘Ë椚°Ÿ˜ÛaïͶkâ2¨ ¢Ìf'O•˜Û½á1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛM­R*WgÛ “+wVûÝR³­…íêÙÊÆ>VŸá“5’mom.’aìñ]Ñ9åL;”=]}lØ.°]`»ÀvíÛ ±]Ý?Ëjã@™Ø.°ÝRØ®|˜åãõ\žÓÅvÕÎÜðll—7)Ç`¯×7Φvõ¹9»¤¯©á—¨Ø ^Ií`kÍÙô5µó2…ÍË™ÔúS½Úe{ïÈN‰ÍAíF8¦ãÑõ©ÝñçP»Ž‚9ëÅ©]×Ó R»#zS»©UJ.nH!eÑz×bNªâZØnJ}†ô¹°ÝœwôÆd»)efØ.°]`»ÀvíÛ¶ÛíföYC‹d»Àvka;ßëÍŒ~ÙjGüTLå$lWŸ+”([L ¢ÓàÊd;Î$GGíj$)Î>ZìÈùÝY‰ÿñ»¿ûÓOø×ÿùî§û±ýßáúï~†; 
Žžtÿeiþí+áGð/ßýÍ_âÖY}ÿ·õçÿ¾Æßÿ}Ù§¿÷6bÎ ã·‘LïÞF¬¼Í½ªÕk"ÎXµ>ìþBµcý >¤ü¦|˨+î—#BaKTž”SîPåŸíD¿å¯ÞœÖÜ÷ðä»·1u¿šÝð«ïT’U»žò«sÆ:doQzØ©‰ð}„~Rœ§h>3@¯}.NèŠ?Ð÷ÌY¯LèGž^ÐÔ{ŒÐÏ®Rx-¡ÏiÃ׉µÓR¿.~ò ý´zýL×ág½ƒ7úie˜‚СB„>}ú ô; ýì>ËO±|ú ôߜзSÈs&|Oè¶<·Šå㹌تC&P±¿²et¦-IF~UŲ(€:…Áz}¦~¶+›Û­Ø® Z"åJäl$®Ø=ÄíÞð˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàîîãUj­lÕYõöõeþ_7 «o ‰ÑRzâ},¬ÊYi‡²×%«‚… ,,XX°°`a¯YXÛ?,Œ÷Y¢`aÁÂÖbaõÃD*ñ,í8à‹ÑÅEöõ-Œ’r$$a.¶ƒZíÒŸj3ž„íêø ÕóÀUÅNäJlGi33x‡í°üXÙ3›ôã’j§ÂéVl7#®œ £ùÌÇt<º>¶;"þl×Q0g½8¶ëzzAlwDïal7·Jéµ÷á-ÉÆ‰ÞdÛÍIÍ‹eÛM©'ùdÙvSÞál÷Æ)eÅ=Aƒ0a „1cÆý„±îŸYs¯µàÏvÉ%cÆ¥#Ö{æœÔ…»Ø®ÙŸŽíêsÀy0Š]‚+{F“m™ókjGu»•uÅ6;Ð|+µ›÷Üy;¨ÝÓñèúÔîˆøs¨]GÁœõâÔ®ëé©Ý½‡©ÝÔ*eùâÞ3X†!xCí¦¤:ÓZÔ®ªrp,»úP½'öÏEíÚ[ª½SƾڵYP w({ºzÔ.¨]P» vAí‚ÚµR»²Z—²0Ž÷ÙŒQÅ2¨ÝZÔŽ6¡b Ú¥vÍ.=å?œDíêsAˆXFF±c¼²ùŒà&åË4y‡íÜÊÉNFGêb—HV äŠ**Á°ÛXý«ä‘_y WÞZÝvü¶äpg WF4LÊ0VÆ\rÈE \rÈMrîDd>H¿hvø:Í1¹ä¾Y Ç[¯åo¸_ì¨Ú™=Uë:)+Ï-¯™xT!½Ú‰8\™~Q‚ÅlÅí¯9ÞÊ\­ÅJˈéÞü‹)qY"ÿbø‡õŽG×Ï¿8"þœü‹Ž‚9ëÅó/ºž^0ÿâˆÞÃùS«”§koçd[‰KÞä_ÌIÍi-l7¡¾˜|.l7ç¹ñÖÔœ2ºLíÛ¶ lØ.°Ý¶›ÚgŸÿ|Ø.°ÝØ.—cÈÛ;zêây¶Ë‰Ë‘¨D‚üÔØð|lþHÀÈðšÛIë”Ä©ŸÂPí’’ÝÊíê µ‹"Å•­ÁíF@¦ãÑõ¹Ýñçp»Ž‚9ëŹ]×Ó r»#zs»:xYA-óx Eÿºåݩܷ8å÷«ý^©”`±jGMU->¸õõPÿ‚:þª¹ÝÃ;b–aì ¹]±æÑíP¦Ü.¸]p»àvÁí‚Û·ÛÏíÚþi"ª;NÊ)¸]p»¥¸]ù0YMÍRîr»fW¦³¹]}®——ÍcÆfzi‘rÙŒ¡8öu §íH¶;"þl×Q0g½8¶ëzzAlwDïal×Ï’Ò ÎL³Óœ.ÅvbV+è½I·›’š“®…íš*—ÚÍz¬Þà“Ý’óŽÝXXÿtˆ&;”E¹£ÀvíÛ¶ lØnÛÍ쳌åŽÛ­…í´â0”rVéc»j—)Ÿ^î¨ïfÃðˆÍ˜¯Åv@,j¯±]nS8‰¥>¶kvék¥—b»6hY`x^ÑìÈr`»éxt}lwDü9Ø®£`Îzql×õô‚ØîˆÞÃØ® .èè0^¥XìRlǘ7d|ƒíæ¤:®…íê]h¬^?¶ko­lÐåa—Ò}Øî1¢¦r Ù¡ìévs`»ÀvíÛ¶ lØnˆíêþYwPÖq¸,)Å-ÙÀvka»òa2¨gÄÔÅvÍNäôl»ú\ªõÖl;ò ±Z#ûkl×r3J¸›÷P›]ºÛÕA3’èP\NíÆ<¦ãÑõ±Ýñç`»Ž‚9ëű]×Ó b»#zc»©UŠ/ζƒ(¿ÁvSRÃvM•Za»f'úÉ.ÉNyGïÃvmD+§ŒA/‘‡2‘ÀvíÛ¶ lØ.°Ý~lW÷OÏŽã}ÖRŽžíÖÂvU©x=v±ÝÃîé/Ï'a»úÜòå¼4„alhW6¤´9•€åÍ%YoGå$2PÚì8ß[Û® j@ÒX\~*רî éxt}lwDü9Ø®£`Îzql×õô‚ØîˆÞÃØÎÛ)È´l7ãUÊY®Í¶ÜTé ¶›“j°¶›Qo‰øsa»9ïäkÛµÁ˜IóoÛ¶ lØ.°]`»Àvû±]Û?Ë.‹iÇ  ˜¶ l·¶+& y‰³¼‹ívOÙn'a»úÜÌÈDy48æ+³í¼)}Ý’R™ÂΞ»Xš¹ßŠí¦Ä99¶ð˜žG—Çv‡ÄŸ‚íz æ¬×Æv}O¯‡íé=Šíƒ ¨@¯Rl×¶’e÷¿ÆvsRe±ÚvUš!ìPoŸ«•ì‡wLqì}nØz1¶«#bJI3íøÝôu@`»ÀvíÛ¶ lØî%¶kû,$M Ã}¶üãµíÛ-…íê‡)$Y•¸‡ívŒg·’mϲ4äM"È·¤È”€ù5¶ƒ:…É0çîÕš‡~ ]ŠíÚ ªžÆâÄ#ÛnÈc:]Û¶ë(˜³^Ûu=½ 
¶;¢÷0¶kƒçòÅÀŽ%TíêÚv°‰o°]•¡¸Ž¥fZ+ÛnJ}9¾êçÂvSÞ¼/ÛîC–?ïPFíÛ¶ lØ.°]`» l×öÏŒœÓ8\% lØn)l5‹M)iêc»f'‰ÎÆvõ¹žÊ!x4؜ӵµíÀÊÁÍÞa;…šY .¥ÅNù똳°/¹ÿþÏßÿ¹œiZÐùKÝ„¬/ßLµü´Bi¸<q®M|òKû•[|ôñŸÚ’ÜV‰ýøÏÿ^tÃuGáCÃ}÷O¿ÿÓoÔ£"—?ÿøÛfW>ÑŸþðß}¡—äK÷üõÏ_|˜“¿weɶñ[´P$bª­Ž†8Xí÷Ö£ÿXí/«}«`ÎzyVÛñô’¬ö—ë=Õ–Á)yB¯RHzmACÆM’ôTr¯ùø2õÿÛ²Ú¢ŠÙÓõ_ïX¿vV[½£©lécï0묶Œ(”Id‡²(h¬6Xm°Ú`µÁjƒÕαڲfsê÷ý°Ã¬6Xíb¬V™1s°Ze½€Õ*‹*¦1jähé•)–´%ÿ_öέՖã¸ã_å =E£®kwL0<æ!Ž,É:Á¶KvÈ·OwÏ–¼|´¦{š¹¤ñ*û`ðQiú?µ§/õÛÕUõyÀšŽRÚ¶ûÅ.¿Ý E«8èE™Õ Ï;¶Ûà1 Î펈?Û5ŒYOŽíšžžÛÑ{Û­ƒGFê/¡@ —b;2^¶R,«VL(}©Œsõ!YUåm5pê«—ðb7£qMœÂï˜Ü‡íVe– N¬vÁ±c;ÇvŽíÛ9¶sl7€íÊþ‰!ŸeÇ>kÑS,ÛÍ…íò‡Y˜7ÛW;0;»I}. Hêó&%~R1ýÞ‹ºÛ)cÇvŽíÛ9¶slçØÎ±Ý¶+û' ÔþA”¼‰c»¹°]þ0%ŸTâlöÅGpIŠKgc»òÜŒD¥7$šÙµ 8„›Ñ\§:åÙÞ¾‡Zíòöv+¶«ƒÆÒzúâ”ѱ]Ç4<:?¶;"þl×P0f=9¶kzzBlwDïal7´JÅpm´0Ù¶[%IÜ#u¶l»ªÊDÓõ^ Û­ÞÉÿÚû°]‘ƒ$ä_)9¶slçØÎ±c;ÇvŽíöc»ºÏR>Fëѱc»¹°¦F„íK²ÕNÒéÙvù¹X)¿Moi ½´‰.J7ÚH™Áf”ËM¡ÕŽÂ½ÝƒË ‚j’bWœ@rj×Å1 ÎO펈?‡Ú5ŒYONíšžžÚÑ{˜ÚÕÁ‰…C LñRjgKŠiƒÚ IÍ{Â\Ô®ªârKv¨×ôZÔnõN‚ Ô÷NÞÊï£vuDa…Îï0WeÂNíœÚ9µsjçÔΩS»ýÔ®îŸIòÎý}6¢wvj7µ“Ò•MÚÔ®Ú9½{p~®„|P¥~€!éJj‡²@Þ‘6¨æ¬ÁRê܆/vb ·R»*Ž“¥®8ÅhNíz8¦áÑù©ÝñçP»†‚1ëÉ©]ÓÓR»#zS»up“ÐI`[íøÚ+²Ì¼Ðµ“'»"[Uå3%ê«gx±†õ­%ïðΕíêˆ(‰ö•IP§vNíœÚ9µsjçÔΩÝ~jW÷O‹¬ûûlzÞøÉ©S»ÿ7j—?L!‹¸}E¶Ú¥¤gS»ò\AÓ^²êj÷$-àÔæÁ)ÿ6®ÈÆ<…#Hhc»jn®lWeîD™«]`Çv=ÓðèüØîˆøs°]CÁ˜õ䨮éé ±Ý½‡±]\©SÙnµ{–¯|&¶Ky½—­ÊvcRgë#[UiÞ¬Tw¨W}-lWß:樴ïüýÛ• „N«ŒªÌRrlçØÎ±c;ÇvŽíÛíÇvuŸ¥D$©»Ï¦Ç,ÇvŽífÀv±T–ˇÓÐI¶«v@§c»ò\SˆÔ›@Rr®¼" ‹åX‡âsl—–‚ÇNËj'tïÙ<(…@AH{â²ÝCÍ(Çv<¦áÑù±Ýñç`»†‚1ëɱ]ÓÓb»#zc»:xéºC• —b;1Z"Å lW% ä­wH“U¶«ª“ôÕçÝÿµ°ÝêSµØ÷N>Ü߇íꈂAiÇaplçØÎ±c;ÇvŽíÛ `»º&F ;öÙ¨ÞGÖ±Ý\Ø.Õþ°yÌ_æ}ñѬÀátlWžK¡Ç½«⥠)h %ù ma;3L=2f–l¶@.«Çüeuý\Þ’äÕ¹âUë{‘ï äòˆ:5üWeä=ó@Î9ä<ó@n(Ëû§˜jØqxüõ±rÈÍÈÙ‚DåvþE± )ž^ì¨<Ëo9:$¤ÚAºòÚ…%G:1èó@ÎF•dÔ¹6UìXS­þö°Äÿñî_¾ûñÃ7ÿûîÇo¿®ÿøí¯¿úé×ôŽ„ÿ¼6ö‡u)|‹¾z÷›ŸYC«Oþ¹üü?)ÁÃ'ÿ–7êO޾MÄÍ·‘”ßfRÕ¼Y§B ËÏàMÊ?äùCYrŸŽ(TÂò¸3b¶ po?É!qøp;³n6Ò)?ëæˆøs²n Ƭ'Ϻizz¬›#zgÝ ­Rôl =1ë&?QÞê'9&'ƒµcêÓ‹ÁÚ!ïðõ÷Ëaí 2ϺqXë°Öa­ÃZ‡µk`íÐ>«É³nÖÎk”#<©†ñ¬ÍvÄé|X«ÍBìä;T; |!¬Mº”NkJOa-†<… /¦6V^í ÜŠíÖA™,iì‹#IŽí:<¦åÑé±Ý!ñ§`»–‚1ë¹±]ÛÓóa»Czb»·Á%IܱJ1][šžXÚè'ù¦ ﮘv(™ŠÚ­ª„ xÇVõX{ý¨Ý›w¬äXö½#Íx®¦vëˆ1ä÷æÊ,8µsjçÔΩS;§vNívS»º2 1JwŸåðP Á©S» ¨]ù0ó#4 R‹Ú­v,p2µ«ÏÐhd½ 
”$è•wå-YröœÚAžÂ’Õäª][+Ó‰ŠMjW~œÿúuž’_ÿ幯÷¿{ÿáÙýá»?­äýâ9DdH?¼ûö}Ž~Ÿ××òÊþ˜#ÿ¿•⯘þ}ã}Tƒ0Äîû”Þ`uö‡âØüÕ}¨k+p¬~XAÇŸ öø§ß¼ûþ»ï~ÿî>üøí»ðóßýðç/ÿ+À~XÉÿùë~üùmk¸úóòöý‡åçŸP]Þòß|þçï÷§|4ýá??üñ›ï>ÿ |žÿï÷ßþc9(þº?ýíºØý:¯û…€–õáÓ_ýú}â¯Þù }öõW¿ýæ3N˜>ûRßÃgߨWô^¿ ˜JŸþtÞ]•§ÞŠù¤”7¡:ÞÊvhžjÙ…q ÎÏlˆ?‡Ù6ŒYOÎl›žžÙÑ{˜ÙÖÁóžÚùño"ãµ} òF¹0©%ÝÚíÄJsaÛªJËõ¯…mWï ‰iß;úXHìjl[GŒ!|ÒeèÉ–ŽmÛ:¶ulëØÖ±í¶-û'BPj犭û¬EulëØv*l›?LKœéÔ¼_íòùll[žkÌ9$£Þ2£¨W6KŠ!na[,S]4¦vg‚jfv+¶-ƒ”´o늣<Ù²Kdܸk(³žÜ5==!¸;¢÷0¸«ƒ£jŒý%”ôRpKÉ3Ý^í‰ ™H_jÞçÂvU™¥ê_ìŽô›wJõÐ÷Î#?¾ÛÕ50õ• ªc;ÇvŽíÛ9¶slçØn?¶«û§%¶°ã š,9¶sl7¶Ã%…8˜4;T;5=ÛåçB D½#t¶ O®sXÐR—˜’DyÈQžÂi'«vAã­Ø® * bꊓǺ‹Ží6xLãóc»#âÏÁv cÖ“c»¦§'ÄvGôÆvC«^ŒíBÌcn/öY'Ú¥ç¢vU&v7Ô7» ¯EíÆ¼£÷õ]G$Ù¡ìáw¨NíœÚ9µsjçÔΩS».µ«û§)î8¨§vNí¦¢v´¤Ò&Ô,q“Ú;²„gS»ü\ ¢Œ"LÕ.¤vÌ‹ 2l$ÛqžÂœum@_íòöv+µ«ƒ2’)öÅ=fW8µÛÀ1 ÎO펈?‡Ú5ŒYONíšžžÚÑ{˜ÚÕÁ…(©ôW)ŽáRj—Wò™7JIÍÑÍ\Ø®ªRÌâõ‚¯…톼£ ÷a»:b ”bØ¡ ½{´c;ÇvŽíÛ9¶sl7€íòþ™B r¡¿Ï³c;ÇvSa;^&k'ÛU;À³»G×çJKÜ;¨&dã+K–õyŠ"<ÇvR¦:åŸN'S¤Ú!ß{G¶ª¦¤;Ä)y²]—Ç4<:?¶;"þl×P0f=9¶kzzBlwDïalW‡|èϧþþ*éÚ>ÂliÑ$ØnHªE˜ Û¨/^^ ÛÕ·ÆÀfØ÷ðIVeQBâÊ’w$qlçØÎ±c;ÇvŽí°]Ý?%‘"õ÷YÇvŽíæÂv²ä¨%E‰MlWí@Îî#\ž›ã4!íL l'¯,mg‹h)üöÛ•´bVR›J«]‚{ïÈ–AK5(¤Ø!EÇv=ÓðèüØîˆøs°]CÁ˜õ䨮éé ±Ý½‡±]¼ì5ÈýUêI ÛÉwdSŽÒ·Wû¨Ê`;T%ž ÛÕjA; ªz ñµ°]õ…h«¨Õê‚\ŽíÖ*¤FØWF×wÛ9¶slçØÎ±c;Çv=lWöOé$ ­vÁ;R8¶› ÛiÍ¢‹!YÛU»ÀállWž«Â)FèL r€¶tei»´$IFü<‹Kpòa™;‰µÕŽžõA¼Û•A¡Ô(ìüš~µC¿$Ûå1 Î펈?Û5ŒYOŽíšžžÛÑ{ÛÕÁKûÖ/©v®½$ÅÜȶ“*“5’So/ÖHvÈ;(7fÛ)KÞHÖ±c;ÇvŽíÛ9¶ÀvuÿÔüuô÷Y‰Þ‘±Ý\Ø.. 
êz 19 OáƒY°"øc(¿øñ;üí‡Æ ¾š¼-GŸFÓ²ÉìÅ¿]`=—7Ób^tëã« _õäòo8+:¬ o¾{3+nª«ù2üº3Áøö/›\ä#»ÿ°åÝÕr‡(þ‘èO+œCS‚yS¯ëã·é$Qû‡ÒÍt2š,§Ÿn @H­6o{#¼ê8ÚYÉwùYÆ~\—àÜaRþ¾übCo{5Áý‹OƒÑÚÍ=C£}ôúÏÿŸz‘Á¬Êó?mÌã·µÂȆmœ><¡ ‹o ÏÚ?«ÆïþëjG˜Ú+Kw6Ðþlð¥Wn¹eºŸ¤]íë¿|ÒýêáKSý¥;ÚC`Ťg^“Ó¬TœºŸsU2©+òô×w§áI-|~´d[ÉWžªpÚÃlâË^üòe/D»ØÞ‡bˆ«Z¤¿ Võ Þ LOp6z®`V`؃ìóž÷+ƒWÏjRŠJ·vO6óg¿cªÀzðTò ˜í í®÷Íô=f+âØhבmçqÎüã¢\/Õ`æ+ÉîîŠ5Ë3ç£Ð=èVc8ŸcpÄã‘}TÕÜ£ê­Û{ÅV™Nyr¼À1E îp^’嶛ѣ;ºŠ1#OO‹<¶ZŒË·Åjºµ¾ðm~½Ž>€j“Ù¤N¿kd7*ª‹µì.6’…iNdw¨.­ÇÛ¬%e楅ØS*…ÂóÅzM‚9ïûh¤=½¦Sð(Õ—¦kóQ]ñ­¥·Wéðîn™:§œWвÐîûxRºÆ#W,ƒCLÁsà¡4ÂX>ƒøt1UAxÝCIã9^¨B€UFi¥zS{¿ÜMè„âä<’÷ݵ –P»óÁ«S{£ “WÕñ§Ž$iOHÇöµãÄ{³XÍÊEôÒ­ëÞ´ÓÎZι{ í´Õ}[î--¾ FfØHÁ“Zé|³]ƒ÷X­z[ݽ `Œ–a™í—½í1œ€­ =‘&Ã3¼G)ωÀ“\I¼]Ttxw7L\ïÎK×½GƒGn¸ã¤4a×ü°‡DZ& ²<ƒ2ðXìvs¡|²9 òì^“±Zsk=£ò Ó☷dz1+7ÂCxÀò•,MÔ}ë²øAÒ³avïÔ€Atxƒ e­3ÖøÃ†öi™ßç2X‚<©™C»ë3¥È²{ýÌI¯ Þ®L`‡v*™ '§rz-20!dýgÜFHÔwGÇÎ3+…$ó.¡á¼—bc=’9¼Éh–‚çÀìO08Ï™ç¤hdÞ‹cKŽg‘I³’,#„ƒéžÓ/Õ*sâñ˜Ä”¤âf‚ )íè[t,,=ŤtÔâ«W‚õRó¼Ë–Ò…Y¨)x”L!7¢<Æÿúî9¼]9£È»Þy™|¥ÎA9Ï2ã(ï‡e§¿2 °=÷ÿB¢09‰ŽëÝnée·ðœƒÐ F6³„H¨Nåd<§D]&àI=ô3žU[’1’YAm׆vÂó>2ߡި þßJ­æþôzÄ÷X+´ŽÀ£]¯ “õEgµÐºÝ•pV i5§@:«TêXLd w&¸54ã2ŒCÄÓLãh<6õ|çÛ2bïpù¹-!Û)!)ABÖk*Ä•á¶ó~ã)%±†cD2SÝN±ÓëÞ㙦N Ôxœ;EÒI[vݾl-Nj8J@ªS5| éœç\pMªóæxu£9×4žšCô¶%×)ô6ÂqOQ Ú ­N2‘ç€P€Vy¬â&ì ¯f^ñ<–hÂÞ_·{R I[KŠ&R&50™xñ`2TžJãs¾&³·‹”µ »5[BòBÒÒ…“g–í$÷§šlÆ3,¯bg ù…ܺ&çÝKcÕJã<¸A;îR­òálŒG%”ΠÝ<Ú§Ù°ùÔ–NgÙh}a€C†NMÞ Lž‡œ’l˜°®ajdhàZfP+¼Çóö<ž÷q0ë€ÅµnçfÀ™±R]g"mòøM¦fo2h8OrQ«Y¹ ÙPmétoÁœ ø/Åh§LêÀ=Ž FÃ1:ÃÂ,¼Çc`!,⤬9nRlɦ{ÑÚIc@’"´ÓLõrέwÛÒa{%1GwK¬BÞêÀ_A£ëXЉ=òvRxZgQf4ÓÛ…[»Bø©,®«sl‹5§0Þï”$„²L3Òì¹ÎrÏ{lg¢Á8¦eµ&à1*Y­ÿUŽ–[ÒéÞ¦ðÒ¬YN-{@;.Lo?õÁ²è6ƒsMÁ“zÒ,M’·¾ìV–ÝŠÇ\eL§°;Ëì÷&¦²2Ö’ð¸dÕ~ü´%›® sÁ¸õ04ˆãyØNËVõ¬g“À™Öx<;¸çPX‡DþÙÁcÀ÷sv°AZë~v°SÒðìà1x>;˜d¥Ú{é'9;Èå…–bÏÙÁ4¨­Àâì`z'ýß×ÙÁ4éìO†ìÿì`2ß¾û÷ñìàãÙÁdzƒgÏ>ž|<;HLñ³f+7àñìàãÙÁpvP{%ãÒP–Þ'oŽŸvÑ3€—˜-A‚Ç+sNž‚Þ¥µxœÊ³€¼©î_€]_ÖR„T|ÇáŸ_¾ì…1Ì÷}dŸë`Öîˆ3u;®OxdPŠ k¸Q×û¦öÎk'¤'Õí¼2"}줭*ǃÑÜf xDâfJ5ÝÂ(Ø*k(u§|`Xr#…§X¦P*“ÉÐ¥Δn´Šu•vU­Ò®ìØ—*“ÁΦ౲‡ã‰×u¨ÚÜr[cyVl^ã"g½ 47jAÛŸáú¬ð¡ˆ?}zž_„*)2HN‹6 ^2Ñg ° 
î`K¯l„eðÒf‰¼ÒVqNãQÒ÷iýÏßË«f)JãZzÁ#F‰æ¾'°ÜÁ×ÞÅøW¯Î¢qƒ%‹"ÆìK¶Ô‚Çò&>Bñ®«rÁ1ÿ.Œë†Ž`ú÷"Lj÷Œy°Ÿ¤‚vÖžNÿ狲7"6Ý4¯Î…‚FÌý)h°êl€¾xã¦û"NEbx4Œºä¸n§ÄqõŸêuÎg»ýª¥( 71¤UB$D€éøÖ»fLÑg²è];ࡤñè£æþ‘âu”ú¶LEŒ#ýq€8˜³À*±žŠ÷æýg lêÁÐ;¥ºÁì‘,±úç9ãüÙ»Ö%7nåü*¬ýqJJí÷ SªŠ"Û±ùX%ÙqUbÿ˜%gµŒ¸äiYVù±òy²4ÀËrwI40BLκÎ)I$8ó¡ñ¡Ñh »EŒÚ –E ÎÿX„mÇžÂBàcO‰-áÛ<¬\#QÊYžŒcAk‚Q@€EJñ%€ ͺé€X ­™à–\…Û3Tš2LP°à»~hÇÄ¡´ÀÙôòjÑ,ojž5îÌmî…M1Vh.”Q8vÍHvÅ€‚nÍ[ÁuCy†Àn@’](ìIWì“9âC´ð)1Bà]°FJ} ØÛò…+¤Àgˆ „Ñ2˜–’ãüe¬ç)»Š³¥Wv¸Cäõ|pvûatæ®Ïǵ»VìeÎ1¾0A-'xXÇGì­ù©»‡wŽ9Ÿ€÷Ê"&'ãŠgÜ´½Àhãûh|? ½e9v*]ºÐš=°'$·dYàÄ0+{¤Õ”Ñ<©>ŒÁ¢™Oofu3]Ìõ°¾MFËb kq¯¥Ýœ¯ÿv¾¢óÑÔ \bœQŠ™ˆ½8SÑ'Û™·fŠvEpË’iVf]2œh1;µ¦Ù²¹®Í{›ÌçÉ,¡œD¬ ÆÄú<[Cl;úœJ$¾Ïa¶ŒUÂ]ÎÁˆ}'Âfý»#9;Áö»ׯ°ˆ“|c¿ `ë‘gVbY“—í´-2ò\ƒ$A»Ôâ¡:ݾ#«4æ×ä®V²‰`¤`±×["µYIÑÌKĶÈÚÏ¥11álºp9ÕÏWRÞ{È®eذ` Á,¶Æ»J¥ÚæÐ­œp>x°€ S“[÷XÊ™À“êÞˆj#©í36aÃÚ@C…5#¥ËžcRsFc¥1ÌíÞ Vâ{4’˜<‚äšékéÝ~ň4| Æ2,=Æ‚vʦ©ëFÙhZˆaLÁ#U†³žÇ*ñ"ôåjGé%žGÒÕ>Ú¢=1 5I6e—¸ }°΀ðP’­f÷C{{SGp§!¾( \m*%Ñ{jÐŽ$׫,Kð„®PZ@u¤àaR»Îã… O2Ö€0–cÆÊh£U¶:áYYß…@VàŒTHÀ£3ä> ‰I~¥ìËe˜yí¤âRmh›¼ˆ·(—ÝÇ|™N¬¿‘k-Í>ºß O&W‹—J^¿Ñ.éœmW55{ã±J[` HÀ£RSží>†¤FwJ1|eÈpm,¬\jh§Í0âãnBOŠ„¤àá"—Á˜hry×|xžë*GPô Ò¸`Nž¥vvbÇÃÜ…ËH‡\}íMîÕª?;[ýÞKŸecO—˜Åîð‹‘GÑã"2‡$Ï`zvFÊ)ÏÅKøAزo)zXÊŠ6¹ŽÇ®!aß]ñ2Ù8Ây—ìí@#Jk<žTÚþÓí¶v¤ÌF+;§Fë܉BD‚•Æ¡„’×fÚÊ_e"/'w #'ÇE ÞõbEø´¾3>Ñ:qÃA¯Äw@ªãðÔ¼Îmã÷oFßnaCê¿ZŽ&[ ?pTg›^£š¥ÇE5k¾4ÕFÕ Š›‹d4úè¨LJÑ‹F^*³—Êâ_*ÊûHijy¦ýWR\ì;‡ËÊß²ÿb EJ·YnM*íϸ’ÿvmJGlç<4õL|‡¿e‡@ïœ,’„ÐÖh%”D´ ùbžsS9¡%Šø¥à±ò‹é‡¥Ï UV(†ÄòúvFÜM«ÁoS/L¦`a›W3—êfMŠê·j4v+Ooq;¬æusy[^Ä&7П‹ÕŸ'{Ài¦9Ñ88M·v,«÷{ò bvËí¢¾kp¶lÑ««Æ õø´â­ L>qNÕ¹ '½Q¿Lo`{9¬‡/~š|˜L?Nú½¯7¿úžWam.ÆÃÞd:_?ýûí´YÌêÞ|º’Go6j>ôþ:üÇtR_þ_~¿L…õ®ž¿Tc'ÄgWðH÷³a âº/{óëj2mzß½é¯Hvùáôk6rÇÉîÝW ‡ÏÓ$Úà ëf0ݺÞö{¾ X~,7U&ÓÉÙîç0—ÞC›¦÷q4÷\/zռ׬žr÷ܦ·N‘¿^ϹšóÌàzûºß»žÏo›þÅŨiÀè: 躚ŸÃ¸^\Φ›úâ‡Wo^½þ᧯ΘÕ2 AZëš¼ýöë×Kv½­]F´ï=ôèÝj…(OÐ=š¸¶k›o2…¯¶Hýœùô†‹™W¾ÉºóÃ…EÕ»Y>Ý—óÌxã˜ñý«Ψ&¬«–b[޹ 5¿ö¾õÞ\wÕ¡½gB°s«øÿüwó¼3Ô­ëÕ+¨°Ö=Öö+0멃ó´×ÜÖƒÞ8ù¾nNW‹ði¯š {·Õ§ñ´îEË%‹A¿•‹gÞefûªnF0¼´~Åß8‡@ÑÿFÏòÿürZd^æ0½ú'úß—¿é/Ÿ|çÒöOþ¶¨>Áj{1˜¹…ùbê¶üõ¸®šúŸšëŠI’«¯qYK Ʃᰘ2ÅëšØÁe=¼b²ÔjhjÊkNØ%t@ŠªR†Ë¡¨+eá]ßLÁè_Á¦±þsŸt\n[¤Ò¹oR¤´”€Ÿ¹N&°jV~³™s±Þ×ó„ »ÄžÛÈ:ÁîAfàJD 
»w£ÊÓÂ/Þïwôz6ŒþØ2Üà9Í‹«Q=ž=›Mg¯GÍüÙd4~¾jðâ/nôÝOö]|ç?ýü/õÄ ¹M¢]êÐþ³X±oEÿÄçÏÈïbbÉóSo÷µ §½§ójÜÓê4úÍíTð°ON½±“í<ýùlQQœõmýðm˜ômÕ\÷OÔÏ?}”Õ«ËÁâŒó}ÚU7C%àÓ×U33›ú$Býùè¦>ÿ >sÒ>íù¿\¼‡ÙvÚ£ü$ ºÿ³Óž‚)£Œvùv¯§°&B7^Âðýí’«Ÿ¿,¿">Y+bØQBG·1 t5¯.Þ~ûîåôZÀ4Áÿ&õ¸éÿç¯ÍÜ­#~ôÿô21†k™»q}³Þ% úޝ~úùݼ¾íŸ¬>s_×CxÍ÷ÇËbõ…ÜyñËJh¿€µè„^jð²j‹-ðNO—_œ²z ¼>¾#¢ñêkÍrw›}æ¥tºšèýÿJïñz*¯f˜› ?ΪIã·I?Ó—ÓÄýíóÇj<îÃ4fô vb”_RY_àWõïó>'BpAŒ–8š @lfŠ{˜Óv¢&Ïÿ¯¶Üi[TÚ|üiÃ$ÿwe(¯dùø÷Ÿ¿žxOÝö7@èý¿üôù䟣±cæÉ+°—Ý=.øëW›S°WË1ð™®·`“Ãôøä?˜x-âþº2”_¾ùÎýë‡Õ2øztU> Æ`GûÍ¥ûî¦í9¿Wÿ¢»5¾©œâkN@.ÿævEíºðî»w“ê¶¹žÎý?½‡r³ÇÝ‚ñÀwÙ±ûë$ÌEñW°D\¸-9&˜wð9üÝýõ²šÕ7õ| ØŸn*ÝŽGƒÑ|ü鎈ԖêmŸ`,0I¬àÛ™íJ`9 Oj­Db^Z2,-c(‘–af3ÚXÑb"Šˆñ¸­%F9!$[õ³t Çò°mÍáăº@;"MÞ A×xTö<©e¬1uxÏn ¯~ÜXK C*ÀûvJÐv.’2ôŒïˆ>|xMÊ3íÂviK›à–jK A`B;’|•/#OpRÁ£vö)ȃ¥M¬ ®o§¤Ìe˶ÄxœZð@À{,SÚšœÕ“áít4™/hدS„Áî í¸¹§~Î&ô@È›Ù<ŠdVõï`>Mªñxú~<š|¡…ÿGìê„0›ñ6@ ÇcÕ%¼)x$/blU/Û!Þð±Žá.]Xw ‡Ï­!r:¡¶Ä‘‚GçÞ ’Dø`X¸GnK>´úüŽïß:Œ yKz"å¥Ö c<ALrÍ`?6׋ËíÜ«,¼¬[n”¡Ja*­;’¹ÕN§’]Ѻ&ã™uÍd:]–·¹ ŸFZæÏJOÌ@·¿@p~&@²q×§8ª6â_J ) 6R𤞿ûXŽf>««›­uàb89…?ŸNÇFsç Öë6ËÈ&CŒ¾Ýv~‰(kãó Ç.0xpîPç)Ê Ÿ Hôø£¬»€Ïe@ÖúÈ£¬ƒ’>Â(ë.x;GY§h)C¶*"ÊZR~®ˆÜeUÙ㊲ö¨`2‚âè·ëAþ]DY§Igÿ…ÎüQÖþÐ[!y²íS³§(ë§(ë§(ë§(ë§(ë§(ë§(k,ÊÚ¯ŸRh¡#,±]2ó)Êú)Êú¢¬˜‚*¢•ešäöjgðÅ&uA²ƒ_ºIÃÃUfï6ì 73ò”ˆ<|nÂq×ËvD去vÏÕŠX-*?-·*’ä»fö\HrºÙ³·wµ4eR#H¿ær7ǃt@¢Îr.öRf?'Sð¤ÎÉKgSmÔãÜá\kNˆÁû [”„0QÀ”‚Gf “½®Ç7ƒëj6w˜^sçÜ}úXža‹Š-ago°SHîâÍx÷à¸.܇*i‰¡OÀÃX·L/Äö(»€Ð,(=ÁÁ.!ŠcD\§8”`jJlsx<° 6¹f>ŒûÕƒyŠXÃv­0JSÐ`Ø‘´“ºk®› 4އ«û˜†GäÉÚ#D¢tÉh©A5´£LgÓÙ¹›Òþß<6ñ¨ð±Ô>m“Â}}æ¿÷9çmx/V–¦ôÔ-q&Ofœ ´GHË•q¸ð¤&åÝ<üHŽ?ð’ ß{Q\»Y.äÐŽÙÔñÏÊÔ¤,.ùÆ<N4ú¦Õb~=.iù]¶0iÃv±2ÆZF(va_¹ 6[{Øfr‚Þ­p툎:5V話{˜±¸áæÚiV€R>]£§†®]jàiàÐm‡[ÇUZ@ŒY)‹ÀJ©vÏ Õd'7€ä®Î>q,§¶¸sÙ‹<ù| ]€Í¹þÝÊqHŒ$ì 4JÙµ#&ÖSy˜t%°ÎBŠè¨ŽS_U_ZRÉ¥˜tµäüí– xX8Œ“’ViW¿v.œF౩‡ðK©lß’6¼i5°Cä–)lÅ1TÚýû”¤L£Î ¿¥1‘K1ÍE*F hB÷X)‹À#³]ËÛ¤àñÞiàCµÁ)J¡(µ8j)¤HNóÒ‚vRhâb9q<Ê”UïB¸”8ÍDCgK…·–lcîâ¨1h.G\>ïhgÆ%À¶LÙ<šdËÎãÖ,·Joƒìeø`É *µ¶#'´Bd¹ûÍÎl’¸ã–‚‡Ú|ù^âxÞ©XïtÁr£ûvRå;éÌÃÅxìŠðk¦àa9Ã’@tû3* CX7 F rµß·ã‚Í•ä_ªŒÐˆiÙnK¢O¹’ö$Á Hôøs%uŸ'WRAZë#Ï•”ôæJê‚·s®$ÿrø t„–2ê ¹’¸ç|_ª¤%RØðÓ¤Û†çQ¤J²ËBêŒ"î“%ú­kÍ©’–ÒÑD"åèWRTåR%¹7Z­ ¬3J?¥JzJ•ô”*é)UÒSª¤§TIO©’âS%ùu–kξ[¶TɧTIO©’Ž*UÒÿ²wmËÝ:öWRç]n’ 
A0_1Uópž¦¦Ü¶ÒVµoãKrò÷nɶÚ-„Èͨ’}^Òíƒö^IÄ…‰ ‘™c” ˆê>SÈ8ÿ# ¥Ž^æ×çÜZ|ý’wˆmŒZ(+ŽÙáç–¯ÿþ”å¬é]õ(ÿÞ†5•$Ey¾dÒœUð"X{¤æ?ÇK¢5â6N1wËë˜|ÔäG¦˜qû$£&ï,‹Üd•¬Œ'™Ô½ç‘TÔ¡ è¢8ó,çÕ=/º®e R ó϶OÔ¾¨nò6IãMa¹IÕýäªæÓ“¯‚Û“ËJãkxˆ^Ü€²ó-úþ=7Oã$EO!Ø ƒöÌtî ¢Œ'ØnùD×÷ÏGXìW›.„0Õѳÿsí¬÷â&=òÁ+>²[ƒóO3‡ådd×Ùq}º¢>= ãë9JÇ]–3Ôa!W“¿Åd7é,w¼‚j¿9Îß¡h+ðDÓ°~?RݦT¸Õï›õ;£¥l›zâ §G”b–3¾ßAÜ‘†Š¤[¶UoÙUúÜþ¿ïZ=æŒ(6xMØð2¥í2äֿز¼Oæn˜JÇ„EˆÉÚ&Yþr©O †Õ¾Ko_}ÒÜ”»Z_ov™¦lÑf M$c‡åŒ ³,ýNTU $ ð¤hð ïº\or9ö»M~ ~^¿ÝSËþyd;)Š^–Ã6½“xc°hð i«×³3޾¸í )[Ä8Õ %1è³_ÚõÝ:µ~É °÷4x ïáTƒe/t®É%é e9ëScQž®*p:3àö¦ÂCÍK<'YÞM!ׇµW6’#%—¼t׌¹Zô]é§S9 8Ôxbg»þåò›¼Q–}ÓÙ1ŠèY΂m_æ|ÕÀ¥—: m¹¥Jí=¾~e+yÒ]Ù¦\äoÊÅ@:ßè{0U?Ñ€©¯Ç¦[FÚæþ·§Ëç—§×)Œô¨*ËÞìys¤ Açû;z3Ï¢¯¦­ìˆý^G`\©»íï.?Þ Ñ” â”Å`¢d€²\„~õöúЕ4D°$cO#Ü9<ú­ü)¸ö¸ p090Õ¾ƒŸ·rû=Íä¢n?êѹ²ƒl'·gC-¹¨‡“ K=û\Ô&ð]rQKtÒç‹ZÖôùå¢6ámÍEÝ~<ø <ÚíäöfçÈEõ .ùÃɨ;9¼¼b·žÎ*u‹ s<±—Ñ£qÿ¨dÔí¨£®"µÓ¦aɨJdû¡PK2ê’Œº$£.ɨK2ê’Œº$£ ɨùüäºc'³¹$wZ’Q—dÔsJFÍÄÌÆS ÑX7ZëDUÂŽs¿>)ñP—·æÃov¡¬µÜù:ÂÏñÀÿþ„ö¢ßúä¦N¿7’IdD-aÜoÑÝ?7ÕÓ……'§f¹øºKPÊÙ—}ômŽåzÔvÀ¢PàñÝ: ï:OÕa*êÐzp‰Ð€™å,õy…?e%+`ºãÑ"ý¦ZƒG;ÕGB”~Ìæýrð§«Ëë»Íý¤O,ë3™œ/”Dü)Û{Ý|#W i xR· ÚÉË}DÁè z2$À.÷²}"ëºÒV1op~hðØ> ðNSk,«5·ò°¥GP–sê -ó0¹187€ <Ðía~jXrTƒ¶¨A€\ÝÔ‹ï4 ÌÄ>]ðæà®bäüüLÐàØf쓱Ìa›ªl>Câ«u(!M&:ì¶ê›H›¬o¼!3¬0`²ù;Sû˜ <èúTiÚó(d…¹¢Â|ÎÖ`%…±¨£¬ûÒ‘!„œ:çe¨ÁØÒów"[n¡O >&þ{;Áƒú+[ǹG±3A´EXÎC¯2C:2bÈÅd„4`†ù;ÉF‚ <1t[½YQPTTà3×S i*³\'e3÷BÎ.KÖ;0á€C˜¿“œ%2žÔP¤âÝÄ9¤2[¶`s_\2N\®YÎt\®ì££1`ed1 8dù;DâÁå0©ûÙ±N~è+VvÄO­mƒI³XΆpzj µr›ÞE£iþ äïï“è,ËrÎö[‡*f¡-[˜Ñ¢1Ì™X'v+¿¸ËÈ>zg(»Ž·i‘BþÁVÎ8ì¸bO¢_^’È`Ù®0áÀ§E1®.oŠDýÒU¾,®®×·¹ÌÒnó+™ž¼ ¢ÙÎrûÝjÛ8ªýËXƒ§Cµ€ý~­_X“wë—›õëóê;MUKË>aò!˜ÜFÂêùr3œÈ'p”!'›Dó4ËxÍçï £`¼Œ-t[哦ʶ(Q´d<ŠDÌÉ:% ŸÎļQ‚¯mY. 
°»r?ÒìKxR7§ôóÕÍúúõöx[×Pvè¦àõ˜Ó<õ[ÏlLy§q(:0²pQñw¦RÌ^Æc­o_·‡ÊbMj+[±‰ooùIV2"XÎt,êÒLÄzØ<6jðøØPËëùOþóݯïÚúõ-°èÓùþózÄ%¯½0ÎW.´º•s6Í>ŠÎÛrñÈ­\Ø+R°d‚Iñ+hôü3A[À÷É- ÐIŸy&hQÓg˜ Ú‚·9túxÁ-ÈC{f‚BºÀG2AUPÉŸW[Ò ˜<ÍVDæ@SÕ¿u&è4jkÉ Og[ííŸ ºE–0 .Ó-²ý ¼%tÉ]2A—LÐ%tÉ]2A¥LÐéü >ùXqÎú`—LÐ%ô¬2AmNM$ïÈŠ†¨/EØÍ{ªÂçÐᱟ„¼²…²ârÍs2‹ÙŸ“\@èý™onDNT·÷=Cö§»ŽmŠtwì¦NѸ„("%Vhl)®?Ÿk¹~ F¬mëÊÉuµD”Šª±¼ ’³A‚Âr¨NñìºbH£·óO¢O0=Ó»áZ,k/GŠÆ ”²œä¼ú­\˹z,ü€™TàñÐk&ËÑhŠ*th]îÛ$9 Dc\×­Sا iþ×àñØ55³IocQà[WN1˜äÐc·Yobªspö*ðDm}ôÝËPI}¬»§ÍÕ¤7[Ö°:ƒDQ€RßÞÆÑjÜhüˆùVà Øss/õ“°eƒøjbmr^;i{"tei=Ð4ŒÖ²Þém—ÈXRß§¿g5º¢}CGçP€Ír&Ù®{K¨­`Çið˜s©t‘°eSØóò%f¢ô Îr@æ´…> aëq{p¦^Ç›v¿ÓO®ŠwßÄQ%g¥BQ©!§8/U,çý¼»@55˜ãË^ƒ‡z÷7PW¶§ŸÑ8,!ùΗyh«E`(ð¨ó{w[êåÕÕÃëýKiK-*´ìÔÏi¤SØ“0€œFŠ]÷SH¬@ëæÏJÓáÑö´<¢<¡¶…+ÛÐ9&&$›s§îm€™ «Áˆ;¾Óî×ïOû'+³ìçŽ J¶/– ÐÉÉÓF` â4À3¯Àƒ¦Å3ÿýõëzµ=Së­lDóÝÝ b l®\@Úd¦ùyZ>šï1<á´Bïá¹ÿ: -”´ øä—å,œfê ‚>®ò/—Ïßÿ÷ÛÓåãͱKöCw¹<ïý[Ƚmþ(†! ªÆm’?»wìçmèÖþ|)ï17{/¼” P5›œ(ÚN"þN dSªÀƒfN'Ñ¡Jï®|Õ¢\(9’N6– ¡O] ŠÖƒÅ½„¾Òž’Ä=EóQ°§hðøniÖWÛÕÇÛËûõÝ6žtêp¼ûóç\W,?é$@-dN³œ nV¿Ô kFƒ> ðP)ð8ûZ§“ºÊw¹” òÊéúÌrÁôkÇÝ™±ŠA„ª­'—¶ÍG æ¹ÿùå¿¿o¶D{|ZOÉ©YG<àŸ ½H_;j~ýåzó<ÅqÿrµÿË»|óhÀ=^ƒ§_ŠwÝ<)Z¼¤GoIŒtÛ¯+5$¼ Ĩ!FO5xúùíøºÿøôðû&çæñ?9h·‘èëôÚ­KàÓ©A@D©bHèGPÚ)1¤ö™èÝ¿ùÃëüÖ¶º|Ü|yóÇ¿«ì£ÒìVù“ÊQàHõ’«¶5:#?™,õc#sVdIÚxDYå_V/OYí׫«ËI¿±3ucFæì4`°gµgXcMÇÚøï[óç]™D"x°¯³Ž‘   T¨JC…Z<àšÐ*·àÝ¿œ4žDj„*Lxg­íÙxUª‡6ˆ*<Õà sSåõ×›‡‡ïû OFdJôr Tl€×Ê”"î¢TlQˆ¬ÀsbÚÛâº:)ÜŠDI±>)‰Ò wQªG6äZkÙÀ _ƒGý¢*Ùy:žBž¾^^­øäÿÏŸ“ª%ÿ¨µyò+€[ãzY¦•ˆO'Gý˜Æì"Ö±•+ðX×'sbOÙ?VX +Dº€uX3WK—ùÆÐ@ êQ"7>Ôàhlo|²Ö½ÈŸÀÕŒÁk˜Ó|e„j†—ÆP£15x‚ïGAÍAäHniV±GjÑ6¢v<ÑŽ!ù«ð´43ô*ºQm ©%U»ÚOwú´;“{U ¹!Ó^'v·?m‹žSgƒÁ ì…¨^½ÁQ º"R0ò0†"ÕxR{‡“”.ºV„`j†àH¿mtÂÞ@Ïÿ«Œq¼×ãI=RÊ­. 
ÁèÇMµOD釹 È¥­UDj<©1môH¸þQ`9$¢*÷‰ù¬#þ55˜cUi°yŠŽ¹Òð= «ÀC¾¹?îû~*ÏHÅîÂîÂ8o&á8Êröb¹G´Ý›À9ø%‚ón/“i»w¤ŸZA£çßv¯|Ÿ¶{:é3o»WÔô¶ÝkÁÛÜvoú8± V±Kí…¾ÏÑv/G»î©ÂÏ=VÿÚ®{*Ï“ tÝÛ¢?Ð3áoÝuoup|eô²vüñ$¨þ]÷tÈÐ.]÷–®{K×½¥ëÞÒuo麷tÝ«ïº7ŸÄ­­8gÙŠZºî-]÷ΪëžË]ᜱNx÷䬉‹š‰µqtøÂìžF¤Ñ|bƒ]¦I—‚ß‘¦Í‰Š ø&9ÜÛ¬:5àË¿—u’ß’336àCw’½;vgO" |{b9{J¥¿Îþd\Äk$FG&Và!3O«§JÄ¢- _…Ñ€šå@_~æ•­ïM˜Ÿ<6öèøô£&÷šé8aÚ =ú%®²ø0SÏ'=Y¨ç/ÿ¬ÂãµåŸ‹-UÄÞY|ä5é|® ­³Ñtiût*UP“uóOºóýù.ËhÒœ0É”xõF!wh’ó1ôlùÓÎÏzèÁ˜tu~Üöb¿³Óî6ß¶þâ¶·¯ß6÷ï…qw‚«wÉIŸTÔ'PNƉ á2H8Ãz¯g­jœ?lG‡ÃWŸ¾4çýòÏ}ËH˜ïùJí¢ôüÆrÔKaëàövýR|â{t3€z<Ý Í­_®®Ÿ/ÞØ÷¹n4Åyó-àd‚tÙe93Ë}â„å¢A7 <ÞøŽ¥ä¦¬.•5Idù$]Ê}nÓéºunãl5fÚ/AQÚk@Ük4¿ •o%ܽ¸ìÊ®?.…É—]¥È;dJAŒÜc9K©gS«æÅ¢€î€óF…'uÞvö½ã`½Ù€†‚“qbìà¨ìIÖjäÑÒˆ¯Çãb/Ÿ¥FŸAÐ'y­ˆŸ ØÞ‹_EÙj Þàˆ‰¯Ç£uUmvxÃäÿ¾Ýön‰`Ëú$dex’‚ÉY.P?fGæÖý€k®Oð}™GUZ%5iºì@Œ@€à’Du–sz5¿ëÈuÍhU4x’Þ#’uu»¹~øã~ €û×Íúönõö×Õíæþûä7WÔ…|w–/Ï,¾kÃóÙ­Zðî¥À£¾®ÊæØ¡½7 úCD`“@Ä‹!F½“¢ƒ«a¢ƒÓ\ü¬6Ã=ÃKz §“A´Ö•QOr…ÝêäË‚–*Ààg7uxB˜éŽpH$¨Ñ“ ”4œäÂÏa7³¦NÍMx„^•ÛAì=*,é†GòÈ =ÿtÃð}Ò tÒgžnXÔô¦¶àmN7Ü~œ¢êáoåöj Ì’ntþHº¡éω‘mºaFÅç°FDÌ^çâDº¡N;80ÝP‡l?ÀfI7\Ò —tÃ%ÝpI7\Ò —tC)ÝPu΂¡%ÝpI7<«tC&¦G¾w@@‰À,gí\¯•'øÐ¸çoˆ¦ÃÝ|”Å—œI¯IÒkòÆ’ÇAÑÍû$©q¥×ÃNÆ ¡C5ž~½®vŠÛ9?_ìþú9æ0¿€ulÌ›t(f¥Nr¸×>±SVjþ½¼ÅÅhE¯~p ãŒY©.^¤”ßïŽøv,+*²®$ߎµ˜"ÍøÛºêë„|êðxj{¶}zx}Y×IÁ”µ‘-Hï:,ǪêaÜgÕkÀϟׯãÎëÿ\v?Z.j"@~r´SÒvއà²ÖŽ˜>:)çÿ^7Wߟó%à—]^_¯nÖ—·/7W7ë«] –5FR"![r’+dsU¾¾wÜdêaS&Zû¼Ák”i‹ÊtÀ¨È'‰¥,çžnÓHSÄ,le$þÇü=½ 59Þ¬¶yÄ¿fµ7ûùŸo•[hqLIIÁ,çœïsûíÈ^ú8àÒ£ÁCÝŠÕ§,2éØuœKņ$Y¢,gÔéÉÃù®açkðxl¶Ø<~¹¼½]]=ýùaO+û™€øhs$/Pu@æ_FêúAù½®¥´B/¤ê>Š‚5x"ôÉžçî¦éúËΜý›ß6Wéçå{š÷&yKbð Ë¡‡v“å„5£€Í€™ÖàÑfÐØú?·OëÕãæq}»¹ßmÌeŒç»n4Ft³œW' ÍÉÆzÜ딃¸“ˆÖ¡„,·”tò8eB5qÀ5Mƒ'¶Ÿ´7¼W¬6÷¿¯ïóë]Ö—/_kCÊö“wãØÀ<þ~’Îþëiýæúæ.ûÙ³òÑí‰Í¨ß§SÓ{Û "°­iðxå¶vy·~~¼üT:çÍ= ;(O/w¡Òä$çm_²jþiÆ–±µŸY¯lŠ®6w¼<žîß6òadMiOa¹Ü´W7©ÝXFøQ4xÐt«hpD_>ÿ`õ|9)µÌ¾óð‘â¬ôÌÉrüƒöe}M íˆ}[ƒÇ‡n—ÞœéwuÃ?Ê W÷“ãË÷ÄÈÜ3|E—¬;–³§Öf@ÒúQ8æ_çøNT™“Z«ÊIe6šÄ†¼€›å‚Oýnž'7CMÑŠK,ËÑ€è Êî!ˆ1Éxb‡×°7Íݽ޾lÞ(°¾ÿÆ>)¯KCÙ+¶qžä¼o}ëJP¾Œš`•‡ã6SÇYÏ2‚wNƃŽú¤§ªÔYæAò69gŒäÎOÞÜùúõ~khØw*<©yÉ?\ÿ¶zSà ßy'•}79-× z !數W¡šžü¬Æ͈—p l¯5±ï½;¨=aö™à­“¼,gΠü«ó3ú€`+Ð;‡ÿ´¼D5ð­1B…vŽœ#/‘¿È·^°AFû…C–¼Ä%/qÉK\ò—¼Ä%/qÉK”óùü$Š$Ÿ³?ÔL\ò—¼Ä3ÈKô†Ív 
ñç÷¯’&¹ýh¡NIHù÷BJSù‚1ÉQš³5Ø‹HÝ‘Þxä[§Œ4¤DC«‰MàxbÃS—/sK51©LTA£ç_M¬|Ÿjb:é3¯&VÔôVkÁÛ\Mlú¸çP(¼•³0k51î"98RNl‚Ÿ‹¾*áy¹í&Th\4AFCû[»í¦Qÿ?{W³ÝF®£_%Ë»¹ù‚@žbΙÅ]ÍB±u;šØ±G²“îyú!K²­ØAT±Ø:·k•¤›v}?’ ˆŸè‚‡%úŽn»á‹ì=PżÅËœ,n»Åm·¸í·Ýâ¶[Üv‹Ûîc·]>?£ÀBñíaÜ/‹ÛnqÛ]†Û.]MBZPã‚sÓc†Ôa/:ˆqö°‚f±ù6¾ÙÝæ4ë,”uBöº(ycC°ÁÏà œ!X~@s–„â+Æ´ÉòYo,—ØkÄÙdðަ÷ô˜¤‚ &vX <–Zuô•X^¹Ö³ÞKø\Òˆ¨] ¤j +@2ØùgZƒgDúõ[ýÏ—ûý~µY?ÄÕÏíã×!¨|Жµ–óVÑXÉ3˜Æãfå1§Rš|‡¹Vàa7ª–×akÜÿ™þ~÷ùE—Ÿ_Ë#|þPåÙwiÁ:HÂÆ!7ȵIRÌ8UžU= ePÖ*ã=솱¬2ŠäÙYÉ¡™ÆEÇ•q›‡’ àó—éÓá‰fôÒ~Ù&Õê,ó –ÉMáH€ŸËû@ƒÆ1\Õ`仹5núò>TÇ9º]ÒÜ*£²Ê+ÙZ’^Éò“£:µ¡-ëñSì°ÊÓwÐb¨ÀCÔ¢Vι<‘oO_6«Ý—õu.õÇŸƒ.Ë\€œãÀI†&ä‚u¡ÁÁWF´–¸›¾ÄsØÀÁâÙ»í ..«+‘ŒÀ[é¡4‹šTËiÈGø€æZG[Àóò1>¬ÌƒàÙƒ B&u—. ìs-Oø¢3óOµ·Ós¯÷Åê¶éˉc¥²'Ã8#+ªÏÆÆzìÁuØÄ5x¼oÒY£M(3s!*o£de¢G†Çö²ÖCŒ BéðÀô*ÿû´Þ}{Úÿêª(–¬MŸN:@é3³Î´i/Ò’” ø f¼3~b‘|uÉ´á»5^Æ6L­–?3À ¢T¿…ú… ûä"ÚNºÝåBA§omcª£wn~¦kðxÓvo;{Å¢ºpE\&H»ogCƒ †7C%œÛ¡âžgâ„§€× )ê´•TD_ÖX2~ñî›kEÛxuVR­btÞò4x,ÎÝ×›¯Ï¶Úß·ú±]¿nlÂô²5ÒeVÂʆ¦xþÇ3°"Ûò<}§¯rû`@²‚Ò8“ø§ ¦ÃÁªÁcýLk÷´Ó_&›È¥Y ’ƒÓÖ›Õ²± ïáÐàQ;}Î6%KJÛ쯞Õö¶Yv$ž×aÈñ­¸†|ð]sZ5àâéû×’Óz&Y± ÑËÏi¾MNknô…ç´5}9­SðNÎi>ž Y#Ê»Ôé³ç9­äÂU°çrZUPñ¤wáEä´¨²kBèUz@ùï•Ó:HMhÑU–]¿œÖüEÊ•è°bÞØ/¥è–œÖ%§uÉi]rZ—œÖ%§U‘Ó:œ³‰ÄR ¨aœGZrZ—œÖ‹ÊiMÄ„dh ¥àÒ8ÏÆÎæ^®v‘êÇÙÃÎtxhúóÐa/ËQ·¯ 1ÇÀÎz¶åzƒÃ8sâ%m”áš~/Úœà*ßFÒ ØÏ™ášÆ9°öL½Á„€Ø£CoëÄNßf.‡r=èNyï§§Í^3°¨¸D¶à ù=Ã8Ï zmŽZ¼ 0'œžÒ¿úÿ@ Ì*F”:âæqÎÛ ?f<ÿꑺùßÂuxõô9ÕØoïÔ‹ês9”;Z”Üo.XçDÈë©èÒÍÀ`š6åqÆž4~-EàÅ“<3õ£Î»ùY¥Á¾Õí ÅÖÛC-ó¬‹ÑØ¥Å,Ù/Ž=h“(?iaÔ㠦þ¢ÁcqZó ²Þ¨¨7.7qÒþ—Æ›fÖ_Kª*D¡®g9‰;Žæ£=îd*<Ôo ææñëæi¿úF9‚•Е§‹˜ˆØHGXgG•àh¶4@#v˜b ­: —5ÈE ‚i”ê„ ãl4-§ZÍJTç:X<~»ôú²sv6 4.Û¬·ð$z* ÷°Uxxræî/ªK“½Û^2LQk¹ 58Ç’e“Æ‘·3X†jjjÇÎ" r£æùX´ v¥–=У‰d¼tôF8=w,5«aFc;œØ<.´ÈØ-n‡Ñµ‡=åcG@‹ÙÓÉã&¹1/} ×àÑNøë¶·{*¹ƒâÊVL@Ü‚Ò8ˆÜ$ow5ëáßá¶¥ÂÃótk‡O‘5A4Ò¸¨ÎÄž€)™§!V`:ÙQJ7s²qU¥]¸Bbžm¤³÷)”MáèÓÙj¢dÔÕšî\¤\å_F#z{Å‚?—Üú¯oÛ§ŽIÁÙ–~·ÄöǘŸOÿ¯°ÏŸn¶û!ˆçÓõi Ô§c„Ðdôó×ÔáÑV(ú0ãëùuüz¿]Ýì¶ÙÛ˜Õ ÍÖÆBœˆ9õqøfŸvò—5í¤=4_¢õö9H|³S2äþéfõ:æuÏYÿܯ6_ö'š?5KB0"C1W2dvQ$2ýǧ7Se&ôaW-k:±+牭’¶¿ÛŠ`±Ô~¡ ÁÒô瘣Ëâ˜:ÊgÒ¬ük{»9;+®Ç Çê¤éÏ1ˆ—űàúpL´T‚—(FŽÐÅ ™pnŠ)„iưzéû0¬ÙØ›a¿_?¬nή~‰,8ª­ô6јhµ2µã[µúœšÕx Ó©¹ýr·úñp½úr{}Þœ=„€¾F®RˆT²)jÇ´Z  éô\ •kðP¦Ý­¿oo×ggEŠQä*ª]sKÒŽ[µ¢wò^° Œ¦O'nÝÿØîÏNH”¨U-ÓÜGe 
͘Å]¨œ ôaV-ž^ž‹|œ<ÜÿÜì~ì…#…D–¹@±J6îpF*…jÇ8¢«Ñ‚ëĸZ<Þô¾ü¸û¹ÞmV?ö_7»ó~ÑyÏd<¹¹ÛM@+[;þÕ*úøj™s5ä <±û ôyrv÷Iñw«\ôêú83(>T FYW%Q+®‘1JƒªºÜ>ÉØ\UPÄ“pC«Tœ£k)­ï£célvC@á €rlÊ·çöÓÐŽŒ¨GÉ®ÇMPƒ§YßÙ3º6 ÊY¬ƒ´bÒ8×0§EøÐaÉkð ·ZòÇx’Ë1Dl‡¦Llb£n®Ðœ´ °ÔÁ¤ÐàáØ*¦úY}Ç?½•× ç°$ÏâóN›-öÉäTàî’+¡ÁÊ*ÈGe½l‘ƒz|3õ”ä1üÊ8FÅip¢¿¬iDn•ÓöüðwTð«É|î š‡f ˆµ¦ÿl"ÌNž{>çxE VàὊϩvµ;ìýA]UàEãko‡Ó€Ž%B½ |QD@ ÍZ^V¯½Õk Sê£hDq^·Ì ÃìôqFMzeÇ_nj¶ÍUºrÊs“ÚÕúûÍjŸ#ÃW»Í:y]´±sô»K;ø£IƒÞP¬/Ø>¤©Æg!ÍûÛ§»Íúñq}ý5g[½Ó8I„Iw  g JÙùcS},YªEcèB–Z<ÑØɲ?À}§onE•hÝT‘ÏM”h鲈âÚE»Í~ûI«ÛïÿÚ­÷»§¡4÷[] Ù×ì¾Á¤=;A ßƒr×oWƒ'6'ÈÑ*¼¾]ï÷ï4-z¾ª‘‡ØŽuÇ’£Z¤NÞ”j<±Ý1s·Þ~_ÎòSݺVt ßÀú@Ž&@`«„ˆ}P‹|[œžÙ/꽦ÕpƒoÄ2ÎÑ4ˆ­¯‘£Ï…¶B[o*Õx¼XÙæÃgaÅh\…s“Ai-ÎQr#ã`UxcŸ›A5mɹóå]Î.¨×t-ù))Í[š‘ ì\k%6=’ d|ö‹B‘9yç)•±±b¦(8–ŠŠæqÖ6«ð6"JFôÄËQÒŽ Gi>z>›MWä'Û#/®Íë—fd¹oKÅtÒÑl²D±Çõ”‚ ò}/ãÆf±¸ûí³Š_5|>ʧpè€ÁÅx耊]{™ãÐ5 eƒ‡q§apK/ó3Mª ½ü^æSÀ·ée^@ }á½Ì‹š¾À^æSðNîe>|܃uå]Êd ÌÑË0÷³;×Ë\/¬—¹=Óß«—ùA;¹0g…vüù´¸ö½Ì‡/‚$Ô—=J°ô2_z™/½Ì—^æK/ó¥—ùÒË\ÑË|8?sÞ/’|Î"àÒË|ée~Q½ÌJd;øì%§q…ìölèÿçÑU™Îèg/Ú{åö¸¤‹“å²›Llþxm%nò(~TÖés!_ÖZt ¾¯ôÏ7(£;*oÔ¶<ÿ^0¹ú¶¨¥è™íŒmË!\%UD¦myB3ü}´â©Xq@Ÿ5ߨ™¬bþ@u žhŒ²4ÿ‡ª;MnõÅbR‡BÙ¢g!j¨F>ÄèÏŸ=­ÃFÝÐü TZññØkÚÙá× óÀ";Òêl…L¡:Ga~a&‘ªZ`êCªZ<8g¦4 ¯™Œ¾XåB'Rœ%s”,](¹¥˜ Öài×òvôl ³`EFÕJÄ0c™•(Se {¡âqœéB(k‰Blû‘¦wéçF±ôÊ‹#à̶A|l-ØI¤¨Èõ±†¬'o| ž^G¢B݇0)^dDŒ5ÀÁL((1ñ$z„(…¯¥Â>ô¨Å£íÈñnýi÷ëÛ!¡ms»¹~<\œ}±NÍe2êC4Õ½8fc‘"3B¨}¬KìL žˆmŠ6˜ˆ ò‰)pPÕ%O{H3…V.Y¿*$æØ…VΦ­§30Vßž¾lVû?ÓÜÉ“õÓã×üætx<&Þ3€E>Ù¡`›=Œ³¥ºô¥>¶ µ ˜.t 6 ­½ù¾ßm®ïw7û«£ÏzBÁ”Ï%KÒ’‘ ôÁL©~Ù‡Æõâ€éðü¡Ác§¤„Ê>°×уËrç‘Ùú 9¤Ó8 íÜü ‰­€msVƒÇ5«Ž{ÔbÍ› ˜²ÅáÒ9ÒtKÎk—ïÃ~Ê ÑšË àØaOÐà‰¦óÅæmÍBåÇ!Áˆâ“FgL»Âº y­‘`þœA‚Þ]N" ¶Òk!F¢yS†8)[/èµ—E ëÌ_D ÕÐrv˜…ØŒFÞõ¦ÑGbt#“§Ë"üU»Qîô9ÌxV9oP|döÝ©ôNˆ©Dª–»©ö™úûún³X_ËÞ…r,ƒ§\Ý7ˆ·ù4§6}™Ý4«—&¸v¯7îÂ'ÃçU^ÿž¨ðûúqó¢×ýêñ~µ¾¹Û¼“Íh`T ½ ˆGûS5RuÙ x0t ÇíæçPÀ4cG¤9ÙñprÌßs@‡‡{ì_Öûíõêi,a[Q“"ç`÷ šxQDAë;åùxÙº£¹ÂLh*p0'W È'Ñ¥V:oúÐ¥OhÐËDéÚŒåç0¼óBû­a­›¾•U­ÙãÍNƒ±eÜÑGʳeëò+3+Å!û0Þë>C«ñG ü<ç[ð‡È¾_â>#4ÓdÄ)mŒÆ@}"hÄ"Y!þk¢…Þybù‰#¤UŒìÄK㊎¯êMdÒ–§@ëL‡‹©mÓÔö­Ö~ûPåKG rÞ9œÓ¸B¸¯²¯éìLV;ÜK5xÈÏ[ì·"µ b3íò\EUbŒ>v¢:sYDrvjàªò—“a:7¶ ðœå9ØØ9c+d8 ,/U9õB•SÝG{¡¨ð`ƒç˜Cæé2¶å›¦)ŠeÂ0mίç®7u0_4x¸ùr˜ë•Wö~áàa–h‰ù©Å·phèyYVm nqh{V ÷Oã‚qÍšÔœE Ú 
Äiwu[H™b‘}.vÍRjAÏ7_¨//sw¨)õš'™%€¹¬ ¡ 0 Òa‚x0¶ôúÿªº—Iþülß¾D®GÁ½C™£†Å™|ù‘¹*ö I5€{äã(𠡉aµ…­Q˜fbƒàIÚŠˆÀM‰¢ž‘ªõ2pìZ¯Á£m¼ô¡ÍpF“ƒêÊþ‹´ÿ€7l%_Rç8L0ÍRLfXƒ'ÆV)êvIaÂÓy“£ ¥£Ã³¸IuÐqö¼žh'Õ~§Â×úÒ`dÈâûǾA|8q07j]‘/PºT‚¸kˆ'…-Û·®pá*çuàݹË{.ž¼·PBV?°¶r'WcŒÆA‡e Àã[$íû¾=QV,*Ëú€Ñ£“Gi\@7n»k´HPÑÒüóªÁã°A#ŠwºK׎´ŽoV×ëA}ÂLSîPzȰùÁ¨ÉÒUñ°Íí¤ÃƒÔâÙìÇv÷x¢.*ªËA΃wBõìaxu2J{òið2Î?½ <Áp‹”ƒ:O˜õ´4Héò;x›ÓWÇJ¾ù#ÚtxÐ7XÄ?ö_7»Í‰Â¸¨0Oé‚#Lã<Ø:'Bî`L)ð€i1¥ßŸ×ß·¼*ŒLQaàcšk¥r‰Rmù£ù¶– ›ØÛ ÂNÆl‡ :}Ǧ ¿Aµfb*OmgÈás @žÝèÇEŽÄ•¡búà$عONâé͇ ÞFRÓ0}“ÝK»ÓUÖÏ.ÊÙu82žhe9M††Òy·ëÇ4äîDo¶¬·t€dÓI«/ƒóž¤úØù÷®éOÿ%Œ˜ØçHƈ¶ƒ·+‘#WàØª6õÁVß_õv¶€o ôÃz'é}<£HMr†FR“ØDãE \èÔ×pÚ‰­K+däïDÏÁVà ¾ÉL?(òtêƒ`ôa®“,CÉjÀÜõ2NwþÍ@^JxäqÞw˜ýôbt\'º^»Á/Ùß‘|Q¥1™ˆ!8ÑfJãœk“=:žÁ°=®-<Ú¦Ÿ Éæñëæi¿{º­RàooÿÃj8ð8XWÂ…"æ{¶3è$AÒ½?ºn[ƒŽÉ„!í²>È2„ÖþY­ê<»™ 7ëÍÝðö Šê$°ÉÒa#´iœ±ZÏFWrk$ÁÄÐà‰Ð¼Uð[Å–öcW¾pÓзƣt§qÆôÛ6ê9®ÀOì v /•뤾Ý.Qf{·y¶ÕZ=üóé¼üÖ^TR²í‰ƒt_Jã ¶‹¯™ƒì Q¬åùÉ¢ÁÓ¢ð9Õ­†ðããÒ{IˆÎåñ‹ÚÌ5M@B/t×ßCçãz½H:œ3<Zî&›M§Û’û®Œ7@2¦(š™®I£8—¾+t ÆÙoÚ’4z&° ÑËO¾MÒhnô…'5}I£SðNN=|<8¤Š]ʇ›%ié í¹¤Ñ |Ô““ë"’FT„®Ûôï ü{'´ƒ‘ ÊÚñú%_t ôÁx–`I]’F—¤Ñ%itI]’F—¤Ñú¤ÑáüL;½7öžÖ_’F—¤Ñ HÍÄ HÁFÙP Ás‹C ]¯:ø4{ O¨D«U£+«#{“îÊÅ,ÒaÜiÝFY¤ù÷rºf&†IjCvfÌ"Mß`Bˆtwî:ÏÑz h$¤Œ±©¯Otµ0Ñ@Õ£Àš½~{ú’6ãGbcQ±6wRŠ£ ˆõÈDSÊü¶Ýêq§ ˆóBƒG›Ë¨:œv÷üyªG_Ö#E"¶VòO¦qÚ½ÎÁäzI:”z×áQwo<«Ø_4ùp[«\**×yLDzd-¤qÖû9/%¿5À‰æg…Óf×;BYl¬4’Á㈠¶ÕÄ\<®—æ'Ñáq4Ë&Q«X.*ÖçÜFŒdå8+´³ØJb+³…ù© Áã`b[ˆQ‘Z¡¬OŠ" Í †q^ÝÆ±/—’pKBLœe“8jò| Qäÿgïêv#Ùqóõ¼…á«I÷臒(ß%{’`‘lv±¹ Ó¶Ûç8glÚž fó.û,ûd!«ºíöØ-«$¹7§v\õ‰bQÅTI™2)Ö»\õ¥nœUzj?‰ ê-™@ýN2<¡†#ÑÉ,}jî݇ÚäbÎÀMG+ùÓ5wø$Èik°ð<J" eýyþ¾“l:$ê,×ò!wfæÂqSº=OÒaÈ l•h[¨‡O¹D#H§1ßóäåZEG£Ì²#y/„r_sjj‹ârê*@ímà PÄ[È2­‹ŽsQç¶«ÈGLWÄ (¨¿ôœ$x\(Ò±úân³¾»_|üôáÃâž«Û=ì»lúLÎdTð1{©‘N÷¶ŒXNsà[\XKðÄ"UœÇœ³R1¼¸TVëŒI§btã´5MyÌÝK!ð¥^è¹ùm– šèñ󘧀/ÃcN >rsRÒGÈcž‚w2¹{¹#_tÞJ9mªò˜^£­=Àc–Auá¸xÌ*(È£÷~^<ænÖèµÊä]÷Ò9\À¦<¹{c4>WޤŸÁþ½ÜÌcžyÌ3yæ1Ï<æ™Ç<ó˜s.”_Og?zô™±'ôPzæ~ÿ‹Ó¯cD4äõ£¾ýwß­ŸN²ß>™Ž×“žù¯úrC‡Ì»Ëßq´ýòþlô3‹š•_ÿ/©Ço×WëÍúöbM¨¾|yf}{;òŒh÷ž-òI¿aœ>ûÐÏÙN±Õ~üA÷ñ®‚½8§C÷Ô¥]€°ˆçjå×ëKph ¦¹}·zthY}¸þ3½ölì’uêÝÇÊ.ÿ¥“èÙIÜzö³¾}Ø|³I¿õK[@›nV§ß·FÛ÷í³ûHaoNû˜ÑÈ'Mܲ·OéíFÿ_g=¶Oè¼<úŸÞ³ãÞЯ!ûZ„³ÿËõ¤gýQ3âûᅵ^‘W§gdèÏZÝþþ¶º½½{èé«üƒånÔõ--éåz¹YOº? 
Ç×x#e,äküê¢ÿÛÐEúÓv}®®?¬—ŸW7X÷¾~ý¾²Úè¿1Ä#wËQ›É/on>=ðÇÙèÿ»Ÿ!úÿ¾œ¾\Ú?=üx·¹þs¯óÿu{òØðùèl|þé¶cþñÉÉêãõ£BÿI÷?ãÁ§¼]ŽA÷O´Yn>÷ÇMðõ€¸áv÷2}x×^œÞϬ@xa8ä L¨ ‘à‘ö<=(:Šóf!)U«´v:ú\˜‰»É‹ŒÕCJ°×oj'Ã#nj7…aߥŽ|Ú|脊i¡Zo£÷ÙL>§Åuêõði˜úEÇdx̸7w·×œl8Y².-Ù­c¦Zn&1jð- IˆÔ\0‹èèÇp¬îŸeŒ"Û³¸ âè^)?}O! ’ÆÙ%µ‘L¡÷!Á#.ýí'ú2íŒæ3Ó†1£²µ¶“Hm„Z,ép<‡û¼¾¤}µ¥‡ÍzuóœP{Ý *íyǽMr„ àjZÜ2½”±Žö꣦¶…Ý$/ Ž©nr>ˆáþP.UV¤z]Ká¨m#l°°äy(Ý<˧º÷¢û¯-ØyKÊÎÒç¨"bnÿ· c©ìv¹GÉ¿]…%xl™|˜Ûõÿæâé´çÓ"Cr6è›Èm–“1C…w‘ë W,Á#æ vŸw¥w.×W×·]Ä÷±ÖÚîs¿|¥÷°JŠ€Ù³`sªIã”-ê¦#6ð˜%xâDBÁA ÷‹dP!-E Áx:€æPs]qÉ›ÊZ:»7 |1 « ð9C¯K2½5:ë½Òä¡eÓ80µ%D1åÀv¦Å—à‘îè‡JÁ „‰iaÒV¥0¨œöÒ89¥²ú  û¾»Op£ªOë‰Æ0…šÓXzVÒ ¨½Ø«šüR4õuM‚' 7ÝÒü°¹ûô¼UF§Áï_.ìÓ?rüÜd4½Eˆ>äàsL«ƒRè[ ŒŒUEK.îîv¢LïÛÁ.góE¤ñ‚J+ÀocƒÈ®Ï~÷b±¢'úNœœØI2­ ȱ ¥²Ö ·ò m ‡§ t-jc¼”hz?GäîâÎ`n´Ø,T6iìÆ«Ú Àt±ŽIä˜Ö&Ž9δÉà¦qNc“ªR5–LÁ78•Jð[sw ·~±;ÞïäÚI4dc G›Sf§œ/W {º6®Í J+ô­æ²C$/…Qœíêf_<æÌnup“b˧Z%‚^* H',H»Ù<ŽƦÌP 8Ð{©®33ôå/!Ñãg†N_†š@ }äÌФ¤:ïdf¨ÈJYc«r+ÜÒj8À­”AÝ«ç~$Vú—{Õÿof¨H:p¸ž`yf¨ Ù~™:3CgfèÌ ™¡33tf†æ˜¡¼:­ƒÍźqʇ™:3CŠÊŠÉ9†LV£Ö±êuÙ¸X®l õ‰NB<±ÅµÙAÉZ•–¬çniB’iÚS{§¿BLS~nðÖ•=ÅøàL¬É4µKHZææÐ)Ÿ/Ñ{Ì!ÅT÷ÌéŽBÒÃák£|B<±}€?$¥©ùÚ+‚‡ z64¹qk$SÁX_1$x"¶»y|¤wv2Õi™"Bäµ¹9 û¶"ƒØ žX$EzŒP1)TZhk17 —ða*¸aÅLÂh__3$x ´wÄžËÖ¤e‹´ò;hÒ8 åUWPóá³°ÐBCx¼™íHŸ8+ ¢¶9/™{$û·p7$z.˜ Ôg`ÉðX×ÐḻìÅiÓâDω¼™:*Ý8/æQ·Ópô\H7Ý~g;N…JÁLg4 À#%f.+”•knI:ª´¿OçeÐÙÜÁ ¬G„–~È0}àÐÀ|Hð zd'VH‹÷p59OŠy¨ª\_é&/˜Zh`[$x¤U:jK:}i.¸œƒå8$GfZt©©h<“àquÒ¦_ú~Q¥Þï2$¤nœóª¡G2L½ð½i°½HðØ7¸œÚI5}_Á­Vm &æfAø(–yUŒšÖ»AôC‚BÍèÇk²L;úÁrÓun ÖãÞÀ™¦È‚i¸'[ íÛùülÎÿëDš¾ŸuTs1½@»¡Çª!j=òÉ:¹¦}vR ÞúÜr2wRÒGHæž‚w2™»{¹§´j€•ÚKmªAæKtæ™»‡ŠœŸ‡ê÷Ú™›Qi ͣ礑Ÿ™[$<ìË•'sË¡ÉÜ3™{&sÏdî™Ì=“¹g2÷p2w·Ï:úÊ‚Ëî³öYp3™{&s™›ùFFaVѪ·¸å’DW;˜ µ2jÀtbõËÐW8àô‡ð¹3¤ *-eRNöñI’7ã^»¥IÞü\ç逓“*ÛËä(Oò6¸D>ú›C§ 6³ûÓºyfA&P-ïcƒI€'à[f„¤\µuÞ)ks6—Æ9‹oxY(³ ‚iyÝÀöJðSóÞðQŠ‹óÏ»,/Ðia2/ ÉÈG¼izA(6‘ú ˆex¤ ˆŸ,ìæ“S?(w“’6ÞX ÙéÌÌhœg¸VWxúÄNYNO$xLÖ«B5i¡bNÊÝæn(µ"o¬í‚©ah /<Ò2#µ%>SYæ»™™¥É‹™|Í>É,°}‘à‰u½“»Ëõãgs¿¹X|ºïý=›–h×P²gmË]ÓÌ‘—Xjjv`Ë “i™!{)¨*Á¦%bn=ÒÅÅ*Áù.­`(r‡s€iœU¡®4ò›’L!48CIð`“z8‡% iÉ¢³!xŸ ð8­&æ]–QgtåÒ!ZØzSÞÛ8–©|3PŽé•wœ›‚6æN|Ü89ª6þËH%&ˆQ‘ûhòSIôY-§Œ'Z—ÇÁUwbžIÒ¥%‰<ç¾äÓ8qÏǪڌœånœÏw¶Á¹‡ßƒhT€GÚØípœê5Q¾ò³NžiEðL_Œ6ë‡sŸ[oê; 
@wê«‚iç,<“húÆÃcp`ÈÈÍ É¥G] j³`8¬¹Í\$/ L’SP€A¶úxÍË•.XÒv™Å†6È€¦q´mµóR_` ¡…Ã*ÂSµòïǻ˧έ¾0 LY›½Ü\Q®A¬ˆú Pcƒk žšÙ€´õFú¶-Ęóíh@ÝRzÀvÊÖ_{ ÝÄy.ÍôÅ’Å"_ç¬3^·ûþ])Ô¶Åå¿Ñå¢Ãd™ÞG£ ¢6¹€E¤Sµiã Hx8|ÚÀ¨‚©ÊÒ„NŒé;¦8s&hNƒc²ÁƈԱ"*,8U« ð`hmRÛ«í¸\ÞG›Uñ8ï4å‹‹ÀíWÏùâˆÀ ‰?_| ø2|ñÙè#ç‹'%}„|ñ)x'óÅ%V*¨½ÈD¾xðKã5ÿî hðκP½9.¾¸½V?³æßݬi|$ïÛñÅeÈö;"Í|ñ™/>óÅg¾øÌŸùâ3_<Çí³Ïø 3_|æ‹_œ3:e¼6YŽ}Ý+ªáÑTnÀêwT= ÚØxÚTB~.N§Òâ V¡r/³iÿóøÁĽl…B p~.Ðs]¦'E7ÑÔl󖨽vîæÐy£C‹¾a7.‘S-ŒŸ =ß ° 3EைóÀÏ;¹bR®Ú‡Êg®#ºqàÛ¤$‹>~·×l=•f{ifjòKëw‚•áqÒÆdoînvŒüËõÕõmwj:9½]?ð/ß/wê¶|\drhíB°éµC$GÅkƒMJ ¦,³¼ìG$˜Hl`œxö«¢”`€ åšv• (£b.uª§¢¸$P=ÅàÞ®¦Œ’Ë%ƒAóãhœ2…IeW]0‘úM-exпáVé­’Ù1Ž“ 3ó qèÎ7#¿Ù¬w²»hç´>ÞœvHïûÇÎ_]Ü\ÿ°Ã &Ã]}=à's6F[nå׋¯»õ~üã’<èÞ.qa¿¤D‘\V>ó±L£ño¹áb¹‰xh æÝ×õÔrJ4m~†1ƦLöœ‰Ñd:Ÿ÷ãöÌüÌd?@QNHôø™ìSÀ—a²'ÈF9“=)é#d²OÁ;™ÉÞ½Üj:¨V*ºªLvqé1`²‹ Ú½xæQ0Ù;T èçfz§~^Lö^:Ü7+¿““¡“½coÜdû§µ™É>3Ùg&ûÌdŸ™ì3“}f²ç˜ìÝþ‰ƒü>û¬ÿÚÌdŸ™ìGÀd'Åô¸îwº‰u7Îîq[ Q˜ù¹Q™à1{<òˆkR˜ÕÒYcà…™ ùuà!ëRÓ8¥«§#fbŒ°Duxœk”ŽˆIjëuü?ö®nÇŽG¿Š¯‚¹XW$‘”ÈÁb1o°7f1Xtb;ñ®31lgæõ—Rµ=åîSâQ—ªZ;§_$6QúŠGEò“ø“,Ó®r@­—xcœD7¼b%Ë®ß.iÁž–©ôuë¶ùøîçOEŸÆ–áèœ4ñ3¥æ¤½¶ùõ˜™°-xäÀ¬E©j1ä®ÃJú-Ô!ïð'fŸíºu^€€öß-xŸ”>Ô«Z4ƒP«½ –=ŧå{tEÔØhÃÂJÕ*Ä£Q!^’Ø¿'0Ø|^G#߯ÀCÝJã¾üˆ¿þñÓ§iN(~Xß_­4P9S[âÀÉ©œÖhx¥ýËÚð°{¶TNñU­b̉F‰,מåÖ§:]߸byˆùâO MtH^—¶ÑáQ[^G4ÆÁ+ð´–š\¬à{ .0ÔÅ)…ƒ ¯ÔI?_F¦˜¯Áì’é¼²ð¥ë:‚Î'gãÏGtq¼hS‹f뾈 *óT{d¼‰Êùæ¦O6p¾@DÑMx¤WiÙBgß/þ{yò•Àø¡õý„,ÃIÌŒtHëÁ'ïÙë_EÜ֠σÆ.jö‘N¡ªÓ¨1)’óVl„ˆâºÕŒuØØ È Ø -xš§Ïþ³æ›Àé[}¿4+—Ö­h…*çØeZ·ëõ/áÃ?zx怫šMyÞ=€`«ÄðôJ®'ï߀¸ÿ˜É6øøæîõ›YiÆœgHC ¤:4Æg º½-FÔNQyqŠê·.z@»6<Þos>_vÜË÷_RïÏ߯ÿÓË^ÈŠAõRî-ÖÁ£@LQ¶D[¾”¹Yâî; „í]'Vít½èVÔ&ªÓ3ËjòÀfýÊî;õú—<À ´à!ì’k¶]Å5VO“CŸÌSݰe9Ç‹¦’G” pA=㢧Èy:‡_›µ Ž_2¼|Ÿ’á ‚6éÁK†«š°dx ÞÍ%ÃMV*,.©ö(tÅ•Šá6¤ÆªžQIÎo½}âÛª.oÓ]ØväÖ3œûW —Ñ'6&LÌÈ– qgÅðY1|V ŸÃgÅðY1|V [ÃÙñ>Ù~6úsöõY1š¨“ qcë‘–zß²dÈÞ½ -ÜÚh¤“\-ù^ÑØU¿wäæÉ·}°ÂÈ2±Wºyà>È3"¶‘J÷#¥‹»ôXç‰09+ô¡Ü<.ögÃðSôÖ¬ŒY¸_­‹»­åZ»|ãl™åRÝØ³ª¡ðsCxÄÔ+¬[ûP¿õ… ë¾0æ¬T4•Î »ùóÃÁ/«ÌT$þë‹?çz†÷o^-,xñ_ß)äܽûüåmò?m†±ðÌWÀp{Á€µZ é¯/æ|—O/ÞÞQo^naå,G.gÖ~úãýçüKçwûáÅýõ‚1_= °5< È¡ãC³ó¢U­äLpAýú™ml¥‘V4:~¶ñð}²+Ú¤Ï6®jzÀlã-x7g—Å£rÈ+Lh@†]³}nªŒÎ_¾·œ¡æËÛhCU–4VºqA%‘ƒ>ùt[éÆMÚaŒÇ¥çAw¾Ô9¿ìð”4¨œ|xfAYPý² ¢u ü«o÷onη8»è”“Ÿ+ùô!Yß×0~×¹ 
ÞË$.ä¬9Ÿ«­Á±{uSù¤?t0|PbóúMq†O%-°XFó{×£Oɹ[ó{-ÚY?ÝÃï5ì:òp0·¾»eWäÓ)ŸNy§Ü°å^}é_Õèÿ‡³¡§ƒïu6´Š Mzø³¡Š¦‡<z:ÞgC VÊG¿óÙPÄ„n=t9•ʆ*q1àxž—h³„Q÷U ¤ŠcÔxéÍÏÿ«/á ~ÒOÈ‘×wþôâî—ßÿmþåó¶ûøæeT®ôY÷Á˜ôCÀ¿lÆþø\+aº æFÞñâO0ïš‚ƒÏ}‘üãôÏ4Råpq£ÙF/âC2”<»K½ú•Wø‰ˆimü^Ò }ž™®5 WÓpkt­E;‘¤k È`Y°q2¢“ÀˆêóÄ‘é_t‹îà_B J`}?H~1Æ­¿ irÑ“‹—ýK*1±1¤ÈyÂCfÊ¢”;Àyú“ØšŒ¥¢Ññ‰íð}ˆmA›ôàĶªé‰í¼›‰m›•â¸s‹µd¢‚ Mzp2QÕô€db ÞÍd¢,žëp¡¢E.=>1ïJ&eYcMPa,6Á¥O[ð˜ÐF/·–IÖ¢rŽceEÔ÷5®&Фx²‰“MŒÄ&t_æ!‘¹Î&Š.fštbú\ñ.’cóË'iÏ êœƒàòü³56!¢¼K?t0f¹ £ùEÅìÀ{}Z´Šºÿ¢q…ó!z²µ#ÌGúEù°ÊF°ÀNÿrú—ü‹LåI„gO~ã_Š.†:ù—ü\Ѱ±nµg9t{^}ÇI<‡5ÿ¢0º“q.åH‚;ô´ª€ ɸ—/ra1@ü<­Z9†¨htüÓª-àûœVU´I~ZUÕô€§U[ðn>­j²R¾§UÞOÀaå´ª êhõþ¢¤”®@Ï7vZÕ¤\ïØŸM”“ú_¹b×Åå9ÚÉ&N61›Hy|­ h±‰ä–‰àýØ„†Ùú®F«ÇYp_68F‹l‚ܔӪ0@¬"å\¤#Ùļ(†à!ÚààœUl†‰5Ï&6ïÂ&jÚ¤ÇfuMÇ&6áÝÊ&î'õ$WX)„}+D!ÉDà/³‰{ѹú4°/¯†b3ª\uX¿¹¿G/·5¬ø^;Ƙ{¹ècóŠÂ˜êÝîåùÛ'›8ÙÄ󳉼/0²|7ðêÁþÍýÝ{³‰ò\Bç!˜1zRÛ¿sY^6¿I.³ ¯_°j;P}ªÀ,R<”M”EÕ>B}ÐÉ,Çg¿;L¬ht|6±|6QAÐ&=8›¨jz@6±ïf6‘Ÿ§IEÓJ‰Ù•M0¤)"®°‰ª€1ôê^.ÑXl¢  "±^Ux/oŒM”·VõØž\`}>P6QVL>9¶‘Åtf:lb(6‘û»'C4ØDé/L½ÙD~®äþTõü¡Yî¢ÕîW——ë4"ðåº< “#— E}òQ–C‡²‰DšàTîlòa‡‰ŽÏ&¶€ïÃ&*Ú¤gUMÈ&¶àÝÌ&Êâè5ÎÛJ¡Ã}3ˆ'5÷+lb†Àõùà÷rƒÕM̨ˆÍ›•Yo+Ói~ë2 P®ÐÎú Ãþl¢¬(ìbº™,'ºŸlâdÏÏ&t_²÷ XÏtšå\ìÝ2°<"FFÓjë«>>ñïÉ&`òˆ|¹#-Áý‡ž ~ÜSäøÒ¹ÚŽd"/ê!`òÎçÁù“LXQbE£ã“‰-àû‰ ‚6éÁÉDUÓ’‰-x7“‰²8"Yá÷raß&aòB+db† ±ÑP/¹¥ç$E¡zaÀ½Üµ œßZɱµÃWeEå¹l#[fœdâ$ Ý—‰DzœÉúêÁþMIBw2‘Ÿ+„.Šùe'AJû&:IJ1\.Â&Ô/88"À:›ÀÙ"Z„=ƒËue &¸€á,›0ÃÄŠFÇg[À÷amÒƒ³‰ª¦d[ðnfeqÊգѶR„ûaÇ Úó ›(¢Woe{UKtšÑ³q¸4ËÅÛº>¿uŠ|°µ“踡ëeEðœÑDÞÓŒN61›À¥;ýIªC×g9èÍ&ô¹¹é·ÒÓj³Kä÷d¿µÄ«<Ð’À좗Ery¨L'ñ€ø(J‰ò€Ô*›(r Ò›Mäç20£a–s{² “ª=!_ö/©|é¢Ô8RùÒáÐá¨38ìÄžwf˜XÑèølb ø>l¢‚ Mzp6QÕô€lb ÞÍl¢ÉJAJ;a§)¬ GmƒŠ—üÒs²‰•úKrW ÇkéTÞš0Z“f9wÜpÔ²bR†ëL‚ŒÓY„}²‰¡ØDÊDò,”*›(r¡7›ÈÏõì…(Zßûv^Gø`¥ ›õ ÌÕOõº<.Úã¡l¢ œ¸p² +L¬ht|6±|6QAÐ&=8›¨jz@6±ïf6Ñf¥dßq$0yI+l¢jr~¬áuèolv›v<X…Ý„l9tðd'›€Mp‰Ò½ó®^7Q䈻g:ñܪI_•­ïGå<í;¼EheÚ„”سl‹œ÷Ç&:•EÕÿ‰6¸èÏD'3J¬ht|2±|2QAÐ&=8™¨jz@2±ïf2Qg xl+•|Ü·Ûû)ĵ"ì6¨q°D§ŒŠÂUî€åÆZ:µh‡;nö¼"@â`Ǫ•“Lœdb(2!SîúÊÌ©N&Š`êM&òsSÔ·5*ۊܾýaQ&åKʘ.° 
RdùØÀëRkéôEÎù'aY4V+Ä¿¾Dª³‰üsþçý$ßüý2Æ»_îÞ-¸Äo¿œ³ð¤)aŒͧ¿Þ©}}÷áS~åwÓˆätÿCð¹ü>Q)YLµNy_äÜ"qKµªt×½+¦Õ£ÚûOsö1‡cþ~ÿý½†jŸ}á¾þݧ?~úu†ŸæÀMÉOŸ¿¾mñ¢_ÍÛ‡wÓ×_¨˜7ý›ïÿøðËÇ;µËÿýîooÿþïþ{ýß¿þû:Ùï~{ñ»Ÿgc÷£šýÌ̲}øîÝëï_ßýô^¾yýóÛ—È_þïüË·òîâ[Àáw÷¿û³Q¹¨­Ü#0´¥rÑ ½ê$¡®ÑÁ¹äFð¸dA›ôÈ\ÒÒôh\r#Þm\òËâ'_8·|l¥Ю\2hD¡q»gYóö-`i¤D·/¨$G3ÎFÏìoˆM¶jGåöe“÷+JHúJl"ÓXôl|²ÉqØä¼/õ£r‰_ ½z°Iéwe“÷Ï œ{eYMå’ ûNÉÍÍ<\ö/¾|éìR­•ñ½œÚ x(›,à”ÁQ &8EžôÉ'VÅŠFÇç[À÷ám҃󉪦ä[ðnæeq¤<í϶R¸3Ÿ —'X]œ„þBt ×@MT1¤m_%„x[l¢I;ÑÈ&ÊŠJ'¢gãY„²‰¡Ø„îKL>èÎLU6‘å¢@w6‘ŸËä"c²¾dD·oƒ`aç¿Ì&Â²Ú ÿ’圓t(›hp² 3L¬ht|6±|6QAÐ&=8›¨jz@6±ïf6Ñd¥Ðí[6>M$+d¢ ipc‘‰6ô‘n‹L4i‡G&š%8¯&N21™ùj‚‚ ‡*™(rco2‘Ÿ«Á-;­ï‡T'{VÍ@˜ÄèWÝ ÁâR0¹ŽMtË‹z̃[œ ·…:ÉÄJ”XÑèødb ø>d¢‚ Mzp2QÕô€db ÞÍd¢ÉJî{5&çרDÔ(c±‰}ò1=†KtšßZòl†+´sXàûƒcd¹b×ɲרÉ&N6ñülB÷et¬»÷±Ýyõ`ÿF%õfù¹às'­d}?Üã®»=Ëfü‰)Л ŒNÀH¥d>Žæ_DÀÅÀÆê,÷xŽû¿ºÑ·ö‚!Ú¿-èãô/º"$¹¹ó´êô/CùÌ;@€ÇVóÿRä¹ÞþEŸ\¤èR=©¥È‘ìzõ&f'+‡U˜DµÏh 9ŠÇVåEƒwä}0Á·è[{V­œBT4:þaÕð}«*Ú¤?¬ªjzÀê-x7V•Å‘Xø +‚»V±Ò‰°šG[ äI6TòƒåÑT‘ò­‰>†»ú.o­MrÅ6LpàÕw^Q_ýÈ””Ÿ=^N21™€¼ ‡˜ 2„Ëêƒnd(qJÕñ ÷rÑï:Ì `"&Ï+Uy”¿`!Åú±t– Dz‰.DMp9;èdV˜XÑèølb ø>l¢‚ Mzp6QÕô€lb ÞÍl¢,®QHºÂJ=ž7Û7ó0«2Ñ„E& *t)¢»=¦Û"³vÀ{ö¶vT‹Ç‘‰²bL)°,úófâ$C‘ ÊÁ<¤$U2Qä 7™ÈÏåä™^wENMôžd"¯¡ºÇËd"ꌤn8Ô«ŠÜ…:]ÉDY”u]68õÉ'™°¢ÄŠFÇ'[À÷!mÒƒ“‰ª¦$[ðn&±åÄS¶­”º„}'ã&šÜåɸ­P% –G;£'Çd£'Âm±‰òÖê䜑Ã6k‡á86QVDÌ9†62ÀójâdC±‰˜üSLøøB÷ÕƒýKqÙ¹º›ÈÏeò!ˆ£cعa º0é2›HŶ$ª¼Tü‹À¡l¢€Óu]……:ÙÄJ˜XÑèølb ø>l¢‚ Mzp6QÕô€lb ÞÍlb^c4ZÌr^ö½šˆqòa…L´!…ÁÈDA…ž%Å+Ч»š˜µy ‹­ ñ82QVd`ØF¦‘ÏI&N21™H¹u†t!D~õ`ÿª’îyNù¹Aß×[ß¡š€=‹&dò¹²iåj‚õ Ž1ùú±A‘ÓßðP2Qåè# .-ÌÐI&V¢ÄŠFÇ'[À÷!mÒƒ“‰ª¦$[ðn&eqq¨q¼m¥8†}ɇ)_amPÓ`l"£JÎS4ZEÌoà¶ØÄ¬r¶vt#»ãØDY|BaYð'›8ÙÄPl‚srSÁUÙD‘SÃÓ›MäçÇ.y3 V¹ ­:² š<0ÃJ ¶ä/˜9ZÍŠ\ÄcÙD^”÷&œàYƒm†‰ŽÏ&¶€ïÃ&*Ú¤gUMÈ&¶àÝÌ&Z¬;ˆû² ‘ ÓÚÝÄ E ]•+›(¨|ÐP‚¯@/7–è4k‡±ü{ç®ËmœáWaF;™B7ú0sâÈN\ŠÈ%Úf•-±DIU~{ã²´×çì;ÂÌæâ™}¦ÿé\>\ºûÑxcÂÀâ‘Èìßí9£æ¢‰EÐDú.Ù9¦ $7i¢Øáù Ós(µËzíGàŵ¼s:! 
×•·!¤ì"Ô «]P¼“&ªSÚ5«]|*½hâõ4±ÑéibHü)4ÑRpÌznšhGz>šÒ;JÇz) ïMPîï_ÃÄA¥¯¹~A˜8¨^ä£`âÂ4ïG‡á¾[ÙcT †íœ÷U™>ýn &Lüò0‘¿KËGÝš0QìÐøì¨å¹FB@Ök?ÉäB˜à\5º¾>èZzBØÙ<®vAo-ŒZr.K}qÏÅÞLìÌ&FÄŸ Ǭ'‡‰f¤'„‰½Ã0Qœ‹€;÷{)‰ñR˜`ñ Iwh¢HPéKÕ碉¢ÊLßìëmô_7Mä·†½ñúÓÆÍå4Q”¥)Hÿ«ƒÈ«–Ñ¢‰©hòÝjÎYšw°«~m¢ì`ÝšèÎ&FÄŸ Ǭ'‡‰f¤'„‰½Ã0QœSÞ™ð~/EáúZ°Skâ¡ ýû–R s±DQÅy/ÿ¡ŠäÃX¢¼µ‰í¢„(2ÜÇÅ£3Â;ÊŒK,–˜Š%bÉ΂„öÎD±‹ñì*Øå¹nÂâ±×~8Oä¯Ý™H-4AÕk˜ Ô‚£ä¢tí½ãbGrkvØêÔCè+vÊ+;lw–؈èü01"þ˜h(8f=9L4#=!LŒè†‰êÜÈô.Ô/ÎKŠ[ ÃMT éO|G*ã\4‘UD]õ”fcŸE5:h©çêG྄NÕ#‘¾ñ»E[µ&MLEé»L ‘f˜ÐL[íξ‚]ž‹”`Üb¯ýÆpeå:´-R=vh‚K vChßš(v¨·–Á®N5§Ð~Cœ<å\4±3MlDt~šM4³žœ&š‘ž&FôÓDuÎý^JQ¯¥ ¡ÂN庇„44uÀ§ÚÑd眊ª4'ÅÎ|¹ØØìô½¾&²Çœ‡:wuŠøÚ›X41Mp>?$fðõÆ÷o¾ø~Åù©4éI4‘ž«ix ÝF!8_yÎIM Úëô°é÷Ìé©]µ][ôaîMèTœæbí€ÕetêNŸ&FÄŸC Ǭ'§‰f¤'¤‰½Ã4Q;“~/£^|Ð))¼Ç”Êd[EiNeÞWOáÃ:•·fÖNž÷Gtì¾ì°Õ£:R»¦ÞÃîé¸ò‚‰À„䬫"A¹½5Qì8ž]»<Ò =öÚ ñÅW°ÉÄxçÖ„æ,é§ íZktïÖDqš@§wÊ·Ø™.˜èÎ&FÄŸ Ǭ'‡‰f¤'„‰½Ã0‘K! ý^ʉ/?è$¸wÐé˜Ô§Ô$SÐDUïf»ê%ÐgÕÁ®o~Z0êGðÆô°Å#A‚ è+‹¸hbÑÄT4¡…4R»r]µ£§+Ð'Ñ„Jȵ™¹×~•¯Üš@ÙÔÀNl°œ(¨´Ç—bÀn¥‰â” ¬CÅ.⪃Ý&6":?MŒˆ?‡& ŽYONÍHOH#z‡i¢8ç4ü‡7ºP²k+×‘áæ»4Q¥æ¬éЗʯàþ’4QT¥qU;—«úOKèTÞ:}¡Ò. øˆ"ßxÐ){4Ä<è*³ÿÃ9‹&Müò4‘¾Káô_£´÷&²Ÿ~ ;?Wb ìµÄxíµ …ÝÂu¶¹£"¦Ÿ¥#4ÙI€Ù†—¬^ Hì«W¤O^Ò[{ú;gØŠ Ü9¼¸çlöQúÊ"®áe /3 /¾`QŠÍá¥Ø=×:ixÉÏU„À¡ ň®^hcN-¼_¼Nv* d» ÷n}qÑœ)vÅú*ŒÚ]…hDtþŪñç,V5³ž|±ªé «Fô/VçœÓ]k¿—J赋Uœï6íÝÊ;$•Âd[ßÇÔÚ­¼ oD‡ïÌñQ<&‚1´¾2 ¼hbÑÄ\4‘Ï”˜Z‡&²]ˆçÓD”4‡ìÝö#LBפMòñ%M`(}‹#·ß<ìb¸“&ŠS‡ eÞçWeÔÞ4±ÑéibHü)4ÑRpÌznšhGz>šÒ;JÕ9FM“ø~/…—Ò„8n²s+¯*ˆ:·œJͦ‚‰‡z3 í«u§ð× õ­óμ1VÝw޶zÔ Á¥¯l•2Z01LäïRˆÈ$6·&ª]´³·&ÊsÙÔ;éûv®=GË ðõÖ7Â)`úÓ郊„[ÏÑV§äÎíËÁ»§H &vf‰ˆÎ#âω†‚cÖ“ÃD3ÒÂĈÞa˜(ÎYDã](ÃÅ Ó˜¢Ž;4Q%Xúºß‘*stªª4…ØW/>‹&ŽEçÆÅ#äû‚$]e>ÏE‹&f¢ ÈOÙ½YµÚ=o œD £{h—³®v.ÝšðÈ™_tBÌ-ØòM‡æAÚj'xkeÔ⃧ɻâ0ÄŲ;MlDt~šM4³žœ&š‘ž&FôÓDqŽ Iô{)trÝŒvnå”JsÝÊ;¨Þ?ë S}똾¯Îœ½F‘é>š(5Ÿa²¾2áumbÑÄT4‘¾KO¿‡³µi"Ù™;œã£øŽäíkÕŽâ•9>ÈsiTˆúš&bjÁù”vö¾‹ê½{Å©xòûâ˜VÆÀî4±ÑùibDü94ÑPpÌzršhFzBšÑ;L‡z)¹¸˜QØ$ÂM“J“t*ª4‚êc•ØgåøxD‡¢ËÑÑx#Md ï$´¯MT»çEÁE‹&& ‰ô]jΩQ›4Qì(ž¾7‘ŸÝ8B·ý$f¹²˜QØò9Ò×4A¹SŠ”´¹§Ø¡ßKÅ©¤`tÊ;ÖUµ;MlDt~šM4³žœ&š‘ž&FôÓÄ¡^J(\K¬©ÇÚ‰ªÀü¥yj,½öc9ó•{º! 
Ô—g£‰¢Ê¢w:Â¾ìø»r°ë[§u\Ûm£^vlÏÑDöh˜¶Ø_Æs¬³‰ESÑDú]Æ4ïè‡:EþùýÆ4kÆ«i"?7íG2)ôÆO²ÃpoE§ÂN¶ä,E¼MòïôMq-BW\š†xÑDo›Øˆèü41"þšh(8g=9M4#=!MŒè¦‰âœ8·6:0K¹ÞJ¤´…½£‰SJ çê\WU1Pÿ³áøÿ&jtRx:ÇNÕãs0QÄælbÚ3t±‹Ï6›(NƒH§I±#]º»ÄFD燉ñ×ÀDCÁ9ëÉa¢é abDï0L碦P7Û& 0íÎö0—ô}©î>Mõä.´‰j§_VÒ©¼µBúïÐÎ{#‘Ûi"{„ôuÒ&Š]X%MÌEI¥±’˜¶&ŠDºš&òs5ýĸ7~rÈ4A²¥ù7„£ïXÖ JžöbùÙ‹N±,~c_\^$Wëºî6±ÑùibDü54ÑPpÎzršhFzBšÑ;LÅ9¥uÍ¥?KнiæPÜ9›8'Ue.š(ªX-v>.U;ü²³‰òÖ‚Á:çNÕ.l&ÜÀ޲&šcÁX*½ë±÷'a¢©bÊ|¢Û퀾 &Ú[—u\ƒú°{tüÁŠNÍ£;0@ʶmÁÄ‚‰™`"oPÛx&¡þÑD³Ë/u©/‚‰ú\õT&¾hü;ËwÞsRܤ–:9¨ke •Ù% ‰f÷Úãï š¨N5¹iPÎp·ã•5n;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4ќ׎pžãY S¾·>¬Ë–Øhb—PFµ¤fŸ‹&š*N¯UJY¿‹&öèXm%G‡Už£‰ê1'ÂL+sY}°MLEV³Ì‘»4±ÛÁå4QŸ›3¨%›Þ{Ñ 6N@|pÑÉëHGJ9¸ÒØìÞÔ¿•&šSv Ú‰ïv´Î&âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šs¡ÄÈñ,Å~/M°ÂæÄ4qJªü~Œògi¢©Ê`(«Wø²ìsѱ+:U (Be–hÕ‡]41MxÝ¥ƒ`ú=à¯ý~k% Ë+:ÕçzM ‹Æ{2¿—&D-¾¡ (Êö‘Ž;ß{~Ù½^&º&~9•Zƒbq¯©ô‹&Þmûœ&Å_@}ç¬g¦‰(Ò³ÑÄ Þ1šøå¼ü'ëµæúe§zïM'%Ü8½í]÷·T!c‹¥fš©¢Ó/UìöÁr`öMg?oí‰kß“0:^~­ÑÄ/$e|J¬Œ^vÑÄ¢‰?Míw ¸l r§¢Ó/;Âk»Mü<—’BÊŒHø®.ö¥½ë‹{OPG0¦ÜíŸùË ?JÍi™ r,Ž|õ® ·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ…j~u›¨NÌØcqðzijhâ`›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰æ\³¥x–ò›;a£˜€Ïöå-\>j:MTUe» ª/ÿwþ.šhoXâèÀKNÌí4Ñ<—ßþ7¥u6±hb*šÀ­æÄ!¿9|þ럿ßb§¬WÓDy.Ö»VѬ]ìà÷l„+Ï&|÷2L߯/TG0%GìŸ}7;`x”&ªS¢¬bq„ ‹&¢mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsAò^ÝÒ_v,|3M%Îv<Û“0&§Xª ÎEMUæzé V¯”¾‹&öèhù :ùeÏs;MTÌœHãeœ M,š˜Š&¨P !¦þÙD³£—jiÑD}n–œ %?ÅŽ5ÝI´9ºÁÑòâåÏ’=C´¼x­d˳-/'Ôë›kÀÿñååLtÌž\^>WÆ)­´¼µ¼Lµ¼ð–X4Kÿ"m³c¼|y©Ïµòè(Ÿ¬Úe×;?V of–ù_Š‚\ï̳{¤4—í&=ú±ê”8̸>VE_!:ÿcÕˆøk>Vuœ³žücU7Ò~¬Ñ;ü±êÔ,E ·~¬B×ÍÍ.Òž“:[ZÞ9õoºzü§iâTt˜è9š8¥L_ÒŠM,š˜‚&€Ì…³A@@/îÚ÷ÿ¿û7Iw¦åiý£™ ¿§ i#¸Þçç®Òf'éÙ‹´Õ©¤Z‚„bq¶h"Þ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,å7}³ÒfÎ4qFª¤”梉]•%ù@½|Y‘öÖŠäqtØà9š¨UÀ4Vfyµ3Z41MÈVïî§‚ÃÚ¥‰f‡/í¸.¢‰úÜòžÂ)Ú£;‘;Ï&8o"%¶iyZGpAÖ—f§éÙ"Õ©‚ú›꿉Óä+-/Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9GË9¸ß¹ÛÝ[2qËzT2°I .+1| Õ`.šhª (ÄkUY|é»hâTt¤‰æÑ̈r¬,¯vF‹&æ¢ Ý ¸ª©¾©KÍ®üȯ¦‰ú\JboJþïßþÑýÎvF¢[™Eñ ë;—\?ëç ÎP³#ðGa¢9Í|mivb++/Ü%v":?LŒˆ¿&: ÎYOÝHO#z‡a¢9w”è“G³3O·ÂD™7¥|U‚Xt4Ñì’M–•×TaÌq  Ó—¥M´·&¯gqtʦã9˜¨=`¢ÙÑK.肉ÀDÞ pª‹xù_ Õ?‚jQŒ(frgVÓ&Šæòž&¬Í¼"jýC”f—\¥‰æT¸ü;ÆâxõF·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ5ÓS¨ÝKõºÈáE§]*”…ø©å•梉]•dw;Æï¢‰öÖY34±Û½\î¾&ªGçäM4;X½QMÌE¶A²\@ý÷ –ýó÷[ìÄ/ïfTž 
©ö2ÂhÖ.vIùÞÞ¨,e’9¸èäun)“Ÿÿ}ŸƒðQš¨NK‘CqeZgá6±ÑùibDü54ÑQpÎzršèFzBšÑ;L§f)¼—&0o9ËMœ“J“¥MøþíÚ<óêó—u3Ú£S/h×ûÛiÂÎ&„$V&ºÒ&MLE¾Õß.hæ~ýñf~y7£ú\UFõhü;¡;“°ËêR"+ü¾d ¤:‚•Púiw;V{’&ªS+Ã*aùØÅÙK Eï·‰½ˆNOCâ/¡‰ž‚sÖsÓD?ÒóÑÄÞQš83K»›K:‰Â†ªïiâG‚zêW²ýû•¦¢‰]º¦ÔûwMœ‹¦çêïµüIŒbe²Ò&MLEåw a7mb·ÃË Ä¶çZíÄŒŸÚÑOï¤ ²Íá}3#€6€•$u ìvœ½è´;5çœ?—_¢¸`â`—؉èü01"þ˜è(8g=9Lt#=!LŒè†‰êR-¥á,oÊ$]œ6‘7„ƒ´‰ ŒÂøÔ7ÍEÿ(L4U`e"×Ôçïªèt.: ÂDó(‚eû+ãÕuÁÄ\0õÈÁغõaw;õ«s°Ûs3Õ´@ ÆO±Ã7µ'.<šÐ ó{–À2~1+iê/„ÍôY–¨N‰ÈËß'G¨+i"Ü$v":?KŒˆ¿†%: ÎYOÎÝHOÈ#z‡Y¢9çâ»wèGd†›£bÙ¶‰ÏöKe˜ì`âœzû²ƒ‰öÖ’ÁãèÈ 'ÞÎÍ£Yú@™½4__,±Xb–À €Úyn7i¢Ù¡ñÕIí¹,T9%?ÅŽsº7i"INô¾sPÁEiùóôi¢Ù1<Úkbwš™,W˜lÑD´MìDt~š Mtœ³žœ&º‘ž&FôÓDsîPKÑij”é½',¶¤`7 R[†÷«þÿØ¡ÏEM`N/_VÐéç­’§¢cé9šh9òwãE‹&æ¢ ª”5—¹µKÍNíê‚Ní¹Âåú…3v;ò[i"m…Ph‚÷™WHûŸÿy__탽;­'GÁ=ÐÝŽW v¸MìDt~š Mtœ³žœ&º‘ž&FôÓDs^›Ü¯ZúcGùVšÈÈ[V8 ‰*AI4–ªÉm.šhªÊBE™bõD_v6ÑÞZ,i¿åÉÝ“÷œªÇ à9I¨,'^I‹&¦¢ ®gµä)÷ï95;–Ëï9Õçæ¢€(š÷Š]ºµs]¡ q88û–6Ò‰€ûÜSíÔŸm6ÑœbÒ,¡8K¾Î&Âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsA5‰§P£œo¾éÄêr”#×$d¦,)–ªïîßþIš¨ª¼V¡x9ðo냽¿5—?C²i¢,N\þ$þêj Y[4±hb&ššÚ,Tüõi¢Ù1]]Щ>˸HîÑ‹ªß{Ó)gd>8›Ð¶¾”(`ÿ²Ù‰=[ÐIÛ²P(&¸0¶Û½ô¼Y4q°MìDt~š Mtœ³žœ&º‘ž&FôÓDsÉ Ò³”ß\Ð ­ìÉÊÃþHU…à^ën2M4UÈÀÁ7º]½}ÙM§öÖAÙ—=ŠùÁ›NÍ£–Ÿ]¿Xþn'´Î&MLEÚò ½™wþúçï·ØÑK¢‹h¢>WsPv·•{Ëæ §÷4‘ë÷4aéÏÐÍ¥‰æTÑØ?'«¢S¼MìDt~š Mtœ³žœ&º‘ž&FôÓÄî\ë>.ž¥”î͆Íà¨<ì9©2Wëº]UV,{áX}Nö]4ÑÞºv!ˆ£c ÏÑDõµl-a¨ ð…aM,š˜€&Zƒk(S·w›M4;Óë³°[ƒë2—¡ŒŸjpwÞf88›¨_²€Sýët•6;Ôgi¢9ÍPÏ?cq «¦S¸MìDt~š Mtœ³žœ&º‘ž&FôÓĹYÊíæ,ì2ßÐÄ.•3|2Ûg˜, »ªÂz”cõ˜€¾‹&ÎEǤ‰æ‘ËN!¨oÙìhµ®[41MØT–Àï'ßýó÷[í®?›¨ÏE•ò¾ÑSì$ßyÓ‰Óf¢i‘IÀ¥Å.å4ÛúRÕ»SPè}·ËümëKykf“8:ÿ˜Åï__ŠÇ,î̱2}ÉkZëËZ_&X_|+lÞúU>ª•yóêõ¥>W X0kïvpëMZ®íW‹Â÷ë‹·½k‰TðżÚÕ¾K~­jâjG(ÃXì`}‡Ÿ!:ÿkÕˆøk¾Vuœ³žükU7Ò~­Ñ;üµª9'©iËñ,E‰ïýZE°™áÁתsRy²*M—µ1H€ÜíÞÜþOÓD{k©½Fäƒè¨é¤ms1éßÉjv*ÏÞtÒ6 aùM`(à¥I󢉃mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NÍR÷Öt’ŒñAwÔ ììë~ì&ËÂnªHŒ‚sô]½}MìÑÉàA½®ÝîÁÇ\^X,V¶Î&MLFåwY6éfœ¸KÍ.½|Ÿˆ&êsY­¬á¹,wö3¢Êæg¹Œ`$(3t}ivÉž½éÔœZΙ<W€hÑD´MìDt~š Mtœ³žœ&º‘ž&FôÓDs^›Ä§ÏRæpsÞ„mFGgU¥š)øÁlÿæ$úÏÒDSTöÂñr@ðm5Ú[$Sˆ£ƒüàM§æ1§ÿêHÓªé´hb*š(¿KÎÅ%©ti¢ÚIÁŽ«i¢ãÿßã‡UÝ雷²È–—|OÖÖɹßǵÙᛪ{·ÒDWëÒš†âèµ´Ü¢‰ƒmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&vç"Q!‹Ýåæ ±e…†£ ±ç¤NÖ½nW¥µÞÆê_w¥_Aí­ JD9ê¶ïí¼éT=ry!' 
•q2[4±hb&š°zƒˆ(1õ³°m¿‘DWÓD}®dÊl9?5¿Âî¼é„[2AÎïiÂë&!–þ‡µf‡ÓDsªlI-'+o"Þ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,¥è÷VˆÛ2ÕtÚ%dàäHåÉò&š*«‡Pøzÿ²›Ní­½faæ8:öºg¿›&ªG!eü`€ÒÊ›X41M”ߥp™3IúYØÍ.Ñågõ¹’¥LÛá¬-¢zgÞ„À–’%yÓ‰RÁ€DýÂfÇ®ð$Mìâj–<[(NÊokÑD°MìEtzš Môœ³ž›&ú‘ž&†ôŽÒÄî¼Ùî7çú±Kp3MЖè€&v ‚Éû57~¤²NEçÔË—Mìo­e‘fù :¯Ù 7ÓDó¨©N©9VæiUˆ]41MÔß%i™³x&v;ò«k:Õçrb6ÏáÃõ‚ê½bDßÓÔ‘N–ƒ>{»Ò£y»ÓÌ.ø¸×òè‹&¶‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎÝûu¬w;ËvïM'ÅÍÐhâ”T§¹n:5UÊZÖï[°Û%ÎßEí­‘P ÅÑ{. {÷X@¿à~¬¬`Ç¢‰E3ÑDù]jm$áh]š¨vètu/ìöÜzèúu½w;þ=·ùʳ Û„@ð}M'Â6óêÁþHǶ¾€>JM£QŽÅeâÕo"Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡ibwn…4ž¥˜o®éİ©Ôt:)Uçºé´«Ê b«LßE§¢£éÁ³‰ê±^P³~Ú]™ùÊ›X41M`͇ȉ½_Ói·¸º¦S{®b¶$R ¿¹¶²‚¿§ j#XSöþúÒìrz´ßDsj„Y‚06»ÿ³wfIŽãºÞ‘‚ÄŒ•ôþwrI*«Ã§Ò"¬ ¤f\ó¥2PÂ/X>À¢‰pšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰Ý9—Á&ޢܻ7A¾ÜÂþ‘à¢ýz?v2ÙI§¦ŠD 0VOÙ¾‹&ö·6 ¢£ÏÝ›Ø={î×òÝí”ÖÞÄ¢‰©h¢|—TpÂ%÷i¢ÚÁk†ã‹h¢>·Ð„²†-›$ù­µ°uÓÌ%îG4áÎR~‹&ìÅ™g_¼üÆ «7Éß6¾|´ùÉñ¥xt,C>ÇÊþ'ÿÈ_Öøòß/´%pb&êæ Üí/_­*ÏÅ\Ó‡·Ýšæ[ÇÞJçð>ËGQëã¹sz·CÇGW«šS+p?¡án§ë$m¼ щèü«U#â¯Y­ê(8g=ùjU7Ò®Vè^­jÎ$Ë»H‡{÷¾3lx”üGj•‚“N?¯DsÑDQU«`«&û@½ðwÑÄÉžÃèÔâ±üM4˜Ê/±2 µ÷½hb2šÈ('$i@ÅN…®§‰Œf)‡í§Øå›÷¾k± :ØûæÒ‚Kß›3öÏürëÄ¥‰&NDË/ŠËðRjÑÄÁ4±ÑùibDü54ÑQpÎzršèFzBšÑ;L§z)4¼•&òVKþä^oÿ©TJ“ÝË;§žå»hâTt8?¸÷]=BR”à^^Sf/{w‹&ML@\÷´IR¦þÞw³C¸:g`{.‘3öËoÿØe¹so·”²¾_¤õA„ìMÈÞ=›3PZ7äªÁ½¼&ÎlÕ3 §‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞaš8ÓKA’{OÒ²é–Qö&ÎIµÉh¢©‚2/íWcÚí²|Ù½¼=:ÌàGðAš¨ÑØ‚ꎻ2΋&MLEå»,_¥eÆ~ÎÀjW:ÍËsÖç‚‚PˆÚÃÛ*t×UGM1e?8é¤{ vMý½‰f÷š]ñ šhNMjŽ«Xœf]4M;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4±;wàìvJ÷f ÷´¡tjJ”Ä>‘ê“eù¨ª°žÀJ«wú®zF§¢ƒéÉ{Í#‚§XÐÊ@¾hb*šÐšÙ;y™Ð÷ïMh;i”.?éTýçì¤I¢öC9éÍ'²¿§ «-XJK—þŠF³#²Gi¢9õZObqʸh"š&v":?MŒˆ¿†&: ÎYONÝHOH#z‡ibwN²Æ½”ÜL¸ÉὉ]gû@*å¹h¢ª*Ãe™/C¨ž~ÙI§=:‚É(ŽNÁîçh¢y¤Ò¥¹eš¾|u‹&ML@VO©Hò~òfÇ~9MXÝóÀ$I†¤;÷&êþG™h+½§ ßê&/Áï¯hT;ry6g`G„Š)ÇÈ´h"š&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâT/E ÷Ò„Á–ì¨:ê.Ák—ÿTÖ¹h¢©ªÅ÷‚Ú{?oéßE§¢Ãòà-ìê±L…( Û”­ ä‹&&£‰ò]–·Ì~­Ò©Ú¡¼ìú]Dõ¹šËä›+Å”><ÛwÑÄ™èHBŽ&šG¢zÂ.V¾®M,š˜Š&ÊwÉ¢hÎÝJØ»¼ôšÑD}®b™§Gí‡K@èÞ;ØÜ¶@ßÓ„Õ\ë¾{ßÍŽ¥‰æÔ”>WÌMDÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4çŽBˆq/å‰o¯„ vP û¤T ¹h¢ªÒZ¤4ÈlÒìRÎßE§¢“ŸÌÛKg÷>M4»×Ò‹ÑDy®–ïÊàµqG¿ù6;É»{P”µ¾“agù·Cw€çhâ8.Ð+„ýÇŽltêOûœ&Å_@}ç¬g¦‰(Ò³ÑÄ Þ1šøã܈‰5î¥Äï-7¡ÆêÛ{$”q˜{…°ÿµƒ™hâGU­Îf 9û7ÑÄŸ·. 
ȆqtPŸ*7ñÇ£–/?Q¬Ló:é´hbšØ¿K-·çßçhÿùëû-v/+îWÐÄÏs)›‚‡½v±ƒ[ aã–]Eì=Mä½›[î*mv9ó£4qJ,šˆ¦‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞaš8×K½ËŠwå½ È¼?étVªd˜‹&Ω·/£‰öÖž³ÄÑQLÏÑDõ å •AÖµ7±hb*š(ߥZãëÒD³Ò«i¢<×Ê»¤µŸbG·Þ¶ò{š¨e*¡îŽJÝ Ù!>»7Q"0ˆÇâòº7N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ×Ì¥Iã^ o.…]F”D ḷGNæ’c©$4M4UR~æÞ­ßòÛö&Ú[k¡ øà3&ªÇÂÑÌ’Be”Ó*…½hb*š(ߥ‡’º4QíÈÓå4QŸ[0M1j?¦f|g)l*óáú–ïǬ-8)bÐU;t¤Gi¢:evÁ Úípeˆ §‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhνÀLþ —2½·ÜDQ\ื*ÏJ•$>M4UÚª*ÇêÅòwÑD}k-(©GG“Às4Q=ZÎ"±2ñuÒiÑÄT4•& ¯Uýó×÷kL)_Mõ¹Vº^ µ3¢;s:lX\¤|4¾ÔÛ’rŽh¢ØeÖÙÆ—¢ ËËõÊýû–.ß6¾”·.s {ñEÕ'Ç—â±Ì¢rJ±2Uu/S/´¥Òí¸öW«v»ëÇ—ú\2©‰—ºí§Ù©ð½9Dß/TgöX†bÏÒbþìjíxž!§i­V…ˈοZ5"þšÕªŽ‚sÖ“¯Vu#=ájÕˆÞáÕªæÜ©ô?÷RžåÖÕ*AÝÊÐsp’–ÚŠ†Ö"š¡Ôò²“¤mªÊtÓ]cõ¾lµª½5*±¦8:(î}7R~ˆJ^4±hb2š(]&g( ÐD±£—z\—ÑD®å ’j8G/vor¥^˜3P ñûÁÎ7×Ui-½KW'·vŽÏÞÊkâj5<Å™¤•ã#œ$v":?KŒˆ¿†%: ÎYOÎÝHOÈ#z‡Y¢9·2²1Ľ”ýNètñ9ZÈJÎÞÞ™(Èè´Û½É¹÷Ÿ²DUå9gv&š]ü.–Ø£ch)ÇÑ)Ÿòs,Q<²ˆ GÊXÒb‰Å3±Dù.Ý” ¥+¯Ù‰ÛÕ,Á¤ ¦N)h?Õ.§Y‚aƒZ4û`gBZÏËuŬ«TÚø’Ò£4ÑÄQé…²…âWþñxšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰æœ ÍK»]º7ÿ¸ä´íLìÜ,8 óóJ“e lªÊ¨.U.ɾ‹&NEGéÁsNÅ#§2ÑrðHY±[4±hb.š(ߥºPBïŸsjvœ/§‰ò\«¹ÃÆ2›Ý\IÞÓ„Ö Ê\|kv™äQšhNKx€<W¾®EÑ4±ÑùibDü54ÑQpÎzršèFzBšÑ;LÍy™ÅGý.RáÞjFè›9ÐÄ9©>ÙÞÄ®^Êcõ‚_vk¢½u=f¬ îÑy,o§‰ê1£x¶øwË ëœÓ¢‰©hBë,Ý„š&J‹”‹—‹­ &MLš‰&ô»LÂBø<ÁÇëϾß$Ä»'tÒç x'fÿpÉ[ÌÈiONñ*M€»ä*ÂÍ­ïjGˆgÒDi4p.¿MqÕÞ&M\Ÿ&¶<:JB±Õ#ÝÙ½¼5%¶‡ÇDpâ=·˜33E¹AYJsÁNчš¢ëwœ‹)¶ì<ÙœºJñ6^#{—0EŽÍ”NÕÎ-·îD9MŸäØp•Z¥C÷&/97ÐuOùɧªž[µ6u’€Ñx¦t2§‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš(çô—ì(¥Ìsð%ìtñ诟tÚ&5!>E•ètñõé¾îM”·eZJ7 –Âç%ˆ­ÊsÆ~S™x?K£Nð |ô»L!$¾RDáõg߯Ú-n$íDù¹ cŒÞì?j‡pì½ ¿– 0£zÐŽÐÅðÔ±¥Ñ\Ÿ™Ú×îªûYºÎœ&6<:>Môˆß‡& ¶YNMOH=z»i"7îc@hجvDt(Mø‹s a=Úß,•i¬ÒuÛÔ_ b·y'.îÄN¹E„6•©ü4ibÒÄH4¡ß¥ø|½ZÚ{Ùtê¾7Mäç¢'ôÁœ£‹2{8ò¤“»tÉ_/gù'õèÚ9X«¤SÄÖFu—älqae“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš('&gç¯"½…ÜE‡´•½‰*5&"¶¥&ìÞDV….H°Õ ÝW!ì'ï”*ЦwÐÅónaוô…¢­ŒÝL;ib(š L 1×Íl–›¨v÷¾…]ž‹NDМ£«Ý¡9”& æôVk4<ç‘Ð'CiYWs£/ÔãóÑ1fÂø\ý¯ïÿöóÛ·e$ÿ£´A–Âܽ {¼³œ}0ìmQ¶(j¼ød²2ñYØÓ_=~ÿøKí~|óøKùËÄöÅ_~ùß÷o_½ü¿ÿŸOhöôÛý«þî??ê¨øR?|øéÇW/W ”™^êHûáƒþ¯^þËã}XþŽ5n©åwï_üöøð¬Oþôññþç·¿©«>||‹—ÿV> mø5[˜~ûö†¼ü2:”ÿª\^v•´øþ?ó#xü¯Äúû/úô¯ê§ÝÙ:?ßn¥x%ønŒ /þ¢ïuѹ:¯ß ržþ˜3¬áfX[æ "'¯Bª8Ñoè\ffù–—V=úÿaò‹ßkrUÁ6ëáW!žròëÝarC”Šž¾9äÑEYÝs Êœ®dûŒ§Õü<b^.f2]¥và¼9—˜Ž 
ÈÙÙBÇÃéBÌõblõáù)›/ž[£:'DÛ;ÏåÖ!pb4•EG&&L r|ý4á%Ûì?¼D(ÙÑÚîÈvtäðBþâˆ[IšÉú“: ¡ñ­Ø-뀟]¥Ñà‘½ÊbÇ‹04±ke>ÝðèøØÕ#~ìj(Øf=8v5== võèíÆ®Ü8ƒ#ïÅŒRìü± ÄÉ…píðÇ6©i°£äE•pÆaéú–|g4Q½Sk{Çûi¢´¨¿ˆ´KU;^Å41ibš`A{L:'hÓDµƒ½ z•çbD°zv¶sxtÒLÑ©öuš¥§+÷øvO/v¼(rMäFƒS„¶8™4Óœ&6<:>Môˆß‡& ¶YNMOH=z»i¢6ÉÊâQíðXšÐX~qa&¶IåÁh¢¨*Ùõã êåÎÎÔ•·ö¹8Û;è<š(-²äŸ¶2’¹71ib(š%~¸–¦éõg߯Ú9¿7MäçFç8×=³]vÇ%O:ÑN+inbîÁ) Æö&}±‹x.MäF£zÒ7ZŠÝr‹éMäŸóßßj—|ûÛu{x\°ÄùÐlžƒ½À0^"i FñøáÅw_¿xÿ!¿òã:#ùT:¤ÀôŸ×ß'‚/G ¬÷‰ ïßý˜«_Ýc ­@ï?Ô ØÏy:öÏÿôâýO?}¯Sµ_¾{á~ÿ»¿~û?:~¨7H~øå÷·-£èïáíýãå÷_¨„7ý›o>-þïÇßýôÍoðþëûïþñAÙW?¼ ôÕ_k°{¥a?“YŽ_=¾yõèÍ÷ïðë·oþúîkJ>}ýmx€¯ßÉ|ïà}e Ž”ÄÈɨcQì–é¯'K®@Bãã³dø}X²¡`›õà,Ùôô€,Ù£·›%kã¢ÓegG)æcz„KÂ\zhm´ß"6Ʊh²¨R Ñ(Tí ÞM–·ŽÀH­X½³H†r8MæÅEcoª(“E!¾I““& Éx!”䟟Ô|ýé÷«v$»Ó¤>—”Û…ß‹IÇžtCAmôúø’JO—°ÝÓSéé|n †"=¨BSœxšiŽÌ‰bããóDø}x¢¡`›õà<Ñôô€<Ñ£·›'6E)tǦ9"p9mêÊÞT•ÀDF*œ¯4MUäuT¡ªwV‚¡zG\2R“V»3ïÍ”õ+£ö\± 8“¦NšŠ&Ò…‚K¼´¯e»eVìh"?W笩FÿÉ·æÝ› )ï¿E”5šÐ>. ÉŠAj½m|Ña0Rð¦zt÷6¾lð,vhO_´E•%le(qŽ/s|i|‘‹óúÊ•t÷ŸŒ/Å.î=¾äçzÊg ÚÛ¶ÕäØ?Ñ…éúø"Êg4œŽA†RµsáÜ{™¥QaHlqâgÁPs¢áÑñW«zÄï³ZÕP°ÍzðÕª¦§\­êÑÛ½Z•½˜Q Ü•\žû¦ÃÑ0­#íz´HQ_Ä– <ØjUQå“Ñ«À?«¾lš(oùÀg´½ƒg®V•€Ð ¿'˜41ib,š "íYú4¡veš 9‘¾±ÇPìŽ,ñC|Áàh%Ë‹wÚƒ½sÖ„½Ø)á™4QÅa ¶88ÏÒZÓÄ–G‡§‰.ñ»ÐDKÁ6ë±i¢íéñh¢Ko/MÔÆ)qrbG)í|/.¼v/³JÈ›8í<)Ov†¢‰ª*0cûœÖ“Þ×½ÌúÖDü ƒetç¤--¢cA°•éËûI“&¢‰ü]b Š©¹7Qíh5÷¡‰ü\rŠ(`õµstl–Çùbÿuš€[tp!C)ÔH~ê½ÌÒ(ê™LqŠnMôˆß‡& ¶YNMOH=z»ibS”‚ƒsFRrð¸BU‚N9 ðyz%‹&Š*OÀ7¨O÷u’¶¾5æ’›h{GÛ>&J‹Á‰qÒ©ÚQœC'M Eú]¢wQ§Í,/Åw§‰ü\Œ¬5 b€ÍòB1¯ \§ ¯=˜P[i±û$ñÈ 4‘Õ‡<³)Ž)ν sšØðèø4Ñ#~šh(Øf=8M4== Môèí¦‰Üx'1Ù!4¸DŸtª×£}ÐX?o[*ðXõ¬ª*ÔiE[½r_4Q½“’´sF>ÙQ:&J‹“—d+cž{“&†¢ ý.‘¼†éçõ¤^öý"î¾<—C®×hFmdf:öÞ‰[¡ ,‘7¯÷´'ìÅÎI8•&J£…¨l¨Ú-S×NšX™&6<:>Môˆß‡& ¶YNMOH=z»i¢4ò¸B(§c³|°sò´²7Q¥‚÷á©Áu »ªŠ:­h'‘}RÏw¶7Q½CˆtƒwâbßépšÈ-F`!¦²è¼71ib(šÀrÒI’÷í“NÕŽÜÞ4ù“c0ûÚùxäÞ„¶áóªÝuš Üƒ1ijsO±Ó ù©4QM¬(älqˤª“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnšÈ'DÀvIÀ*Râ±÷&’À%E^¡‰-R““±naoSñ¾r:móŽ_œY>œ&J‹)éTËÆS³žÕ¤‰¡hB¿Ëœ‹3^I³ñú³ï7/ÛǽiBŸËམ=[ýøØ ä)„šàyõŸv&ˆjçžç <”&J£ Ñ[ÛÅŽå'M¬LŸ&zÄïC Û¬§‰¦§¤‰½Ý4Q×a8ÙQ*R8”&"ò…Vh"Ký[GdKŒ&ŠzP¢íhqáÎnaoòÎ2_×á4QZ$h×d®v˜fòICÑ—:EJ Ò¬gTípqoa'šàRÏ(§;sVÿ!5;²žú `¢t=¹¹³Ž0©=a/vú”Si"Oåsº’%Níf}T{šØðèø4Ñ#~šh(Øf=8M4== 
Môèí¦‰Ò8†ŒËÁÕ½7‘"äŠ+4Q$„ÈÞ9[*EEU 9°­>ò}eˆ­o-¹ Ê ƒ¥,~ÛÃi"·rÂ)S Ìꨓ&†¢‰)!ç¸{>›ýÙ÷‹9µßÞ4ò &Ÿ]°ú9 t콉P²ý]§‰˜{0iœ5na;Lp*MäF½k)λy Ûž&6<:>Môˆß‡& ¶YNMOH=z»i¢4®ÿH ;JÁóRØûÒ„Æ{åšš(Åë×õ•Â`4QTQÄÐλþô–B÷Eå­™‘ÚÞáåœýhš(- åW²•ÅyÒiÒÄX4óI#í3ÞÈéËI#¿{N§ü\Ïù2B´úÚ=/¿½çI'wñ9CìÊ-ìT"´$0²·¥:›Ó©4= q† Ø…0iœ&6<:>Môˆß‡& ¶YNMOH=z»i"7Ž9Cµv”JÍ›#Vô+4±Iª¸Áhb›zö÷EÕ;L.ÚÞÑ&N¼…]Z̉èñeË\\“&&M @©œ` Dܾ7Qì<î~o"å=AFb—èÈêuJ,ù,¢»^µd|Bq±MÅŽR:•&J:*ŸóÞ‰)NåÁ¤ kšØðèø4Ñ#~šh(Øf=8M4== Môèí¦‰Òx.7áÑŽR>ÄCiB4Þ¯U¯Û$a°êuE“Dºa8 +Õ2¾hš(oPg¡öHNœN¬^—[ä’‡Le@h¦t*v.-R/ïDù¹ž¼6û{ý±{)X£ /LÎY4¡véZ™½?w|AGž0˜êÑ9¸·ñEß0qL7xgY4èøñE[¤R°ÖV†a&ù˜ãËPã ]œG%Î`¬V;Xô¬Æ—ü\v!ãÌO¶#!8vµÊ%k÷òHgˆ@18C©ÚùH§®V•FÕ‰F½·jâÜû6—!µªGü>«U Û¬_­jzzÀÕª½Ý«U›¢”ŽoÇ®V ^¢¬$ ¯RdÇ`KM~0šÈª¼s¢‘üõrg÷òªw”© Öªvg®V•sþq޶2t“&&M FÄœ³ö=ŸÍFj‡ûŸ¤ÍÏÕ¡'x»gsk'–ö£ ¾$uþušà܃!ˆ¸ö B®±ÊJ¥QFtälq´§'M¬LŸ&zÄïC Û¬§‰¦§¤‰½Ý4Q„.¢¥8â¡4Á!]"ÄšØ$5€ŒEEU®R7 ñŠú/š&6y'˜2°´ˆú展¡B줉I#Ñç=eH,®}/¯Ú¹½—çFrhö&ŸÞûœPV²H…܃9¦Ð.ƒ\íˆO-ŽZÕ? íêÑOv0³|˜ÓĆGǧ‰ñûÐDCÁ6ëÁi¢ééi¢Go7M”Æ1oG)$98ˇ QÇÙõhObtv´Ï7ÖÇ¢‰¬ŠÑ%7 wGÅ;¤XíÂyÅQk‹)±sl+‹4÷&&M Eú]jèÐ>ó<ÓëϾß"ì¾7‘ Ô…ˆÞŒ{1„CO:±¿X)ŽŠ1÷à\I.´wQŠãssæFƒýdÄ'‹¤ª“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš(C B`F©Ïkí3P£=âz´¿]*vÒ©¨ÂèÐÈQìüG­oMIÈ9Û;„'æ Ì-æ„g¶²è³¡I“&  ý.ƒ8 ÍrFÅ.9Þ&ô¹Q'è.y½‹Æþ#3§‹¯×FÅ”;0å+kí]ì<ñ©0‘MÎ'Ïd‹‹3¹=Klxt|˜è¿L4l³&šž&zôvÃĦ(%pìA'Í.:Ò®tªRƒCc‘¸J½6,ý™0QÕGÉVŸû‚‰òÖÀ ˜ÕîL˜(-rƱ•Ñâw›01ab˜ÐïR<$‡Ø>è”í\XÌ‘w‚‰¤µ¡ä­þÃλtäÖDÔ ½[£ É=XÇ3o$ù(vàÎ=è”Õ¯FYˆLq‚nG5§‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnšØ¥„Þš@Ñþf©Dƒ]›(ª¦Èb«gO÷EÕ;œÞààO¼„­-¢CJ‰Leúî3eउ±hB¿Ë˜"ø”Ú—°‹/ríDúÜä"±Øý'¹€|äA§\ÎHÖ¶¾ÉiεÒ(5Ïü;‡rê%ìmâ–Ki“&®O[ž&ºÄïB-۬Ǧ‰¶§Ç£‰.½½4±)Jà±4£ŽÐ¼’€|£Ôà†¢‰mêýó„!_4MlóúóÄnSƳ8꤉¡h"—‘}ÎÏŠ-š(viošÈÏM}4ÊU;#÷&H..ïÒÈuš€ÜƒóBŽoîMT»O½6Qõ$îÿØ;·å¸$ _ë-:x±1³k•uÎŽ˜ =ãUŒ=VX>\¬ë&Ù’{ÍSð`[£Ð»Ì³Ì“mf¡Iµ¨F%ÐÚå!x¡»“•?uúP…,“ënm7ít’§‰™ˆ–O}ÄCݬ §‰l¤ ¤‰>z{ÓDrŽ …Ûµ]Œqì—°µ ˜éíŠ4;Q*Ǿ,šHª4È¿B^ÛÁ#;uô@ŽŽÞØ1:M$>𙊲2o§ÃQ'š(Š&¨^íÀQ³ÉÒD²³ÃÓ—kçó¤öCvzTš€J£Âíg£ZM ØI–]ú®í”1{…‰ä”ixÅù'˜f‰™ˆ–}Äݬ ‡‰l¤ „‰>z{ÃDrŽÞ(å^ ͸¯M ØÊ£oXš` 
œI#h¹Cuʶ4‘TsÖÈc•óÈ–&ÒUkBàåèh³¿NµÇàU‹ZG¨8ÁÄ%ÁÕKšÇ£qÖfa‚í\PC¿6‘ÊEšß‚[vä\#gtÒ6Ú&˜püJ8­¡d§7FÂ]‹úûÙ·‹ÍPÖª›œ¹uŒ²3ë_ó†œ=»Zþ¼º¸½N_®ûùì?8€Ïé“o“õóôñëO—çSmùŽ'3{¾íU¹_æ¿‹IÑû·4UåuãöñÕÕâüzõî(Æÿ{ýËâôt®~=ÖðÂ!ÍÀ-Žé¯–¿Þ̲üð#Z‡Š«ÉñüÿyßR¸°?þAýj €]ª?¾!/.GÔ¸oVËëªtÿñ«û𔾣à®AzË÷ÿþõ_ΙN6¿¡ Ýü—¯^üùvuÊ5“¦ç×§Kþï'ËËÓ‹Wg4…£_¬^ògév}IÌNÍãUúà±ÜÚZµO”}2êyÖ>k;½×w¿k§A;g•,.¨énñYw&¢å/‰ô?Ì’HFA7ë—D²‘.pI¤ÞÞK"É9B$ÿr/qä¸C¬š†%–à”´Æœì¶åbúM—D’*m¬VVïŸMøï½$’®šÊu&ÈÑÑaû«’GïÀ‚—•¹¾‡\¤Mk"ÓšÈpk"T1=Ÿ jm6/mmg7Ö$Zár-‘AD±ÛöV™8âšˆŽ•â' ;¬LåƒR'Dv›Ùšr”È0葸ɉa†hÅÁ ڈʀ°>Ÿì‚kw¥Z¸R›zRDï¼ä”ì î•ÖØi¢(.(;eê§á™ˆ–Ok}ÄCkݬ §µl¤ ¤µ>z{ÓZí<†?YúNä¸ïÖÓd—Ƥ¦ lݤ†Âh-©ÒA)aT®íô#£µúª‰Ã­i`öGkÉ£×ZÇ÷Í›hm¢µ¢hÍò)"F£vyZc;½šÖ2þ6 raÜÎPÜ·ŸIH œV¼K;‚ ”ìL»Eh0"Ãx‘>&ðù'íV„ÁŠN]WAIÝ-Ùiô­œJk¥®Ò|¸!=Û¹¸±!çÔé¡]xƒè”Ær©­œ’]Œ~¯\ÊN©¾¡S²¸8ei#Ñò¹´øa¸4£ ›uá\št\ÚGoo.íÔKi?îá–¡ ¸´›ÔhËâÒNêv‹K»E'øýqi'e~ãô׉K'.-K©búM à²\šì|œK¹\ŒˆAY©‘…‘¹Ô)hÊÓ๩; JÌ/s±]tï'Ò'È©¥-ø ‰³ 6rM8Ñ0OÌD´|œè#~œÈ(èf]8Nd#] NôÑÛ'’sëœÑZÁ‘“¾EÞøh{{’(NF–j·K¿%N$UÁÓìdõÞ>®i꫎‘ßÿ‘£³ùÅè8ÁÁhï¬<Œƒ†8áÄ„Eá„gLà)&ä5$;³ñ  œàr©a{4Jj@¼Ê¦ÇÝ”¨éLJíL¨›º¶ÂŒ=ÙÛ﮹äÔi>Mg§]sò<1Ñòq¢øap"£ ›uá8‘t8ÑGooœ¨[¯ƒ‘{)7rÚ7«T¥±éDš$Áó«V¾…T§Ë‰Nê½ 'ê«ö}›è8½?œ`šAZµ˜c Ø '&œ( '¨bo©Í˜ì—ÉÎm&ê'¸Üïv–P>ø1W'|ôãV'bů¯8íðIv›g=çvXI¹!"÷/Ôá;áåšd§ö|ªfrJå[ˆ£:61Œ49ÍD´|†é#~†É(èf]8Ãd#] ÃôÑÛ›a’s`„ $ÉÎ|Žs¡ ¶éÍŸZªóZXª¯íla “T zY}xloþ¤«Få£),ÙÅöÇ0ì1­ˆxY™Ñ:L 31LQ yç?=Ñbö¨Z¾“òÌÜc:² ³àTÛ©Á“CP¹œ,;:#^4ÙiwƤž¡ œaˆ (¼•ìœiNZ‰·‡Ò4•ìÔ{låD§HãšâS™§ø]VNµì5'3¯”ì¬õ¥MH•‡èäÑU:÷Ø& ¢ãAïsÒ€Heœ°{')C5%wš& eMøüi˜1¼Ÿ7íñ;ÙEüÀ‹tþµ‹ b¾%;;êYܚèN¼ AKQTJí†RéuadØðÚJ‰•“Ýf:‚}<øLNÑ)/< Nv1âôàSz¢•‰hù>ûˆæÁgFA7ëÂ|f#]àƒÏ>z{?ødçi3¸7b/eiX9呯¨OnxðYKpªTWØ^ð¤J+…=ÅÉà‘mÞ¨£Cá1QŽŽV{Ly”?¯¥Ú²rò¬ÕëˆÐB=]壧úªi ù÷×ÖÑñû;Ù£öˆš7ƒÈÊ¢›vŒLàT8qÅ ¢1!»ë=ÙYoôÀà”óÿ°‡fÔœ<ä#ZcÀ *ãq:ûd®¶‹-÷QÉL©0KQw|~†à”Ÿß˜½žNX; ª8¿qtâÄ0 “ÓLDËg˜>â‡a˜Œ‚nÖ…3L6Ò2L½½¦vn8Õ¹ÜK­Æee*g]ÃÔX‹-¤ZUÃ$U¨ƒ^¯¶‹æ‘1 _5ol1XbóqìÃ3LRfͯœ¨,è)¯èÄ0…1 0hEbvñ§¶ƒ×mb.×óIéN"²s£æÕXñîr«›&jA)Ù©ÐêPw-œ$O…ñ~e¯Há!;ëÚ9‚S] à ÆüPšì”i·â„²S^[ã¼®’Ó[wa„ýýTuËh|É)Ùmyo|TDd§Q§NH‡~:À^œûg"Z>"ö? "ft³.³‘.ûè툵s~ÝØ‹½TTN{ôD •· GO¬%áÛH eí¬Ui‚q%U”\ˆØ-:~/F×-BŒQVöΡ"NˆX"jNšä´ ù#k;ëGD.7¢Vô¯Ô€5²8î2u­vû+Ê$ÀeQEaEm§Ú! 
ˆã5Áh”`íT«óë`ÍTšÈÏòäØN¡i…¥ÆˆN fµ6Ô$»à[í¾4Vtʽ7è¼à”{ïý¦ÿ­"W%”Å…‰å©&¢åbñÃbFA7ë 1é ±ÞÞ„ÈΑ;na¤¶3ãNVДþ·–QGq\c;]ØFÈ¤Š†*/ìJvÚ¨ÇEˆut8õ¦–£c6¦›“Gï¼5-îÛfj›‰'B, “ ‘1Ù)?ôd©\äíëùk;e͸‹ˆHÇwÛÑò›aÑÌgÁ¨í|h‡ˆÒË\¶² ‘ˆCÁ©u Ì^ÓÿvgÃt"¢89ÍD´|†é#~†É(èf]8Ãd#] ÃôÑÛ›a:õRÎŒ»Êe!V^71L7©Þ•Å0Ô{ÀÇÅ0Ý¢³ÇÖ;)e¦L~ÔÅ0T1cDPò!“]~#$•‹Ê‡ùüÙµs£fÁ †‰‘B¿a7áèÐ;õ’v{Mª×MœÛX¯™p¢až˜‰hù8ÑGü08‘QÐͺpœÈFº@œè£·7NtꥼwIÄUYÝ´i®›Ô-çqü¦8Áª´¢ §°|ŸÔo!þ(p¢ŽŽ÷ ¼ÞWÛm<'’Gc,ù¾im§'œ( '¨bFe›=Ø£¶3vèS¹4øÝ“;ò’Hô¨BNDM‡rJ JÉÎm*¥kO ÝÄuïÃ?¾8±zùùâröVÛY=·;û×?ÿõOn-÷¾î&žÿþÉW¯.—Ÿ/os#î~éXÐ!•óäo«sB…ú‡Æ•n%ü×{%Ü_T×¢’˜ž=½k‘;Šy§„Ÿ¡k)¶ovù£/ŽþFçû[òö×]îªf~8Ó³Õ q+ê¬êvz½Ka_œ-¯/4`ÏÞû9+óN!|ò|yúâ³ÕùO[\v¿³©n|ýô“m…Õ%-ÂòäèÈÂÑBZ<ò‡áð$òÃ\⋨vªOY¯;]Ç—Ëë‹[š=ÝWÓÍ5Áì$4[ìn÷ðÓå9W‡¥¥µË¥ÌŒ½µi>uv¹Q.wcôñë4¢Ï®x(?TþP{ÀççDT(•ùõW¼Ù%DmÜïÔöŸ|²|;yX2Í‘z•ùéµÜg4S¸8y¾¤¶yr=ß¹ÌA»•/~¡êñåòÅ’fá4…œÏ^¿~§÷­û‘꾃áé÷ȳzÀ8X?°ùæžÃ¸Ÿâ^ûþƒÔøÁÑÌéЪsÈÇ•âÆÃcµðËå‰uQ;2¦k{³ÛÝ£ÙÈâtõr;ßõ–¥êýùâœæŽ'MÏþç{ªTï|ö—óšïPò“þh-è*]ÕÁƒyÔÎýûºì/.ßö õÄÇ’zÙëRê~£þI½ÇºžåAœ[½Ñ{ìæ¡¾‡<×"õ/ß@¯²¾V|ýÃëƒs àâ`NýýÿŒ¦ ëßé7‚®‹›êkþ º³ZÓ-=Y¹ý@u'o>ø*ã@sßi]4¿ºHÿ[ߟ«ÓeõjqvÊuïÍ›öWÙvètÄ;Ž–; &OÏÎnoøQé|ç‘ÿ“3Dýóúàý;@#âíÍW«Ôuþ»óÙìj=ûûèææjut{CÃ1<›-.W÷úg¨?cã.wQ÷g,¯^Õw»ÀmO5|¥ÁÍçŸ*°]Ä–¯`yá,_VÖ¸è•à”ìŒnu¦ ¢Sèco§dçB»+¢S~š‰Æƒä”íŒÙëÊjrŠ^!SOmg¦•UqÉ,ÑòWVûˆfe5£ ›uá+«ÙH¸²ÚGoï•UvnZ’Š×vaܚљʡmXYMÀð ã²Tк¬•Õ¤J«èj¡>>®ãÊÖÑá´ ZŽŽö{ÌX™<òAëdenã]™ieuZY-aeÕsš>JÈX™ì „¡WV©ÜHc›Qœ“„qWV©gµqû‘ËŽÏÊ´  ']´c)c¨´Ç ƒ”0#Ùyß*‰•ò8A¸4Ùù°ßãÊØ©Õ&_Qœ´8I3âLD˧>⇧Œ‚nÖ…ƒS6Ò‚S½½Á)9'ߘ?çymgÇ=®Ì_ÃpJÔ KµË§¤Ê+m„CDë« ì ·::4c±ZŽŽß8)htpbNEºl+*sJMà4SYàøˆ.òå•Ë‚S²S8x–.—Ú,D¹iGš¥ûqó8¢qÎ4€Sà\‰ÖhƒQPÊ9µoÅ0 €SLý=ÒÐwšìlØïâO¬;¿à½Å¹Í /Ã4LN3-Ÿaúˆ†a2 ºYÎ0ÙHÈ0}ôöf˜ä\§ Kr/ÆÍ4¨u¥cÓkuµTGd©dTÃ$UV.‚¬Þ÷¸¦ŽN4ÚY9:vŸ “<«‚jqß‚š2 N SÃPÅŒÔ÷{oó Ãv>†Á³tDξ¡lP(΃#3j–[©+¥é˨…õy¶S¸‘¥#Ç0R^xäÔë1êApÊvÐΩZç zù§l§÷»ø“œò»MÂ!^É.†iñGœg"Z>8õ? 
8et³.œ²‘.œúèí Nì<(OL„b/ÔÈé ƒó•vMé “ú¬‘¥‚+쯤ÊÂîˆÚNÁã§tÕV[ݦZµÇíÉct¨„ÔúÉ.l4 œ&p*œÄ;gߟ$û GtˆCƒ•‹QçÁdc¦h7¦Bo 6qS NÖó´ ”ìbhõ¶‘µÂþ5¬"ï^Ó^ÀÊd[ÂZ6-¼!'ÜWyÐÙíwvÎÚýqÓÚi$2Uèeq¸‘Åxâ¦mâ|D 禞â়‚nÖ%s“éÒ¸©§Þ~Ütç\G…`Ä^ЬÆå&çuLØÆMwŒ ro!–”þN•µˆ:Êê-<¦·î®ÚQM´ GÇ™}-8Ýy䧦…²¸y,óÄM7ýÖÜTWLôÖ¼¿×øÛÙÙ Ü´.—š,µ ±ÛÆh·=¬ªtðà¬a€‰Fa‡V £§Kj·i΋ ƒŒQÆK˜@vZ·:OËÁ)Pg…ÊØ ó÷„í▤£‚Sç `Qº|Í85̈3-œúˆœ2 ºYNÙHN}ôö§Ú¹ƒ˜ÛO|o·mÍ~È4 !T$¦œj bl#Õ™²À)©ŠŽföQVì#'ºj¯q“Gr²3và”<òûÀÆÉÊŒvêMàT8A:ÏJ„<8%;âÐàÄå ‹bBC]äÈ N†ZéYÃøâ´â·Y4a'; ºÂd3ÍQaºRšj8á¹L²sªUj‹¢SÞ¼³^pj´ÕíòQ8%:¥Òt¹}wvNïw•‹¾F´(‹#Z›`Mš…g"Z>¬õ? ¬et³.Ö²‘.Öúèí k]z)PG^å‚ Ñ7ÀZ7©Ë‚µNêÿŸ½3Ù¡åÆÍð«xŸ  Š“˜,È: /‚^8= ƒ´ÓÒ¯Ju»sì{J¬B |Û›{éâ_<¥áÓ@§írŠŽO•žƒµê‘J½YµCÑ,~GÔ¤œ‚3ÈÕŽñáÒÕi©§ùCqjyfä ":½êŒøkÖ«: ŽY¾^Õô€ëUgôž^¯jαæä)a/UÞ$йt½*/Fô¾ÌÐß$ˆú„\c©¯Y[†À‰ªÊy°Žë±zø,œð·V¨‡s4çp;}©H{;NTY˜Ã9†þàãĉ‰cà'6¨+øN¸B¹'|~^KTjÔ´«]ºõ€tYJJÉlk€©Ééê˨©×œÅ¸ë’'çàÜ0{¿‘©¸Ž™äQ†iâKaŽÄiBÖÉ0Ñä´ÑñæŒøk¦£à˜õà Óô€ sFïi†Y[Ó¸—"¹7;, ÒÆ±®&¡ÔÊá;¤²å±æz¥üY ÓÞÚ2ú#ŽNy©T~;ÃTPÏ=J •¿Àd˜É0C1 ׬3‰ëá÷.ÃT;µ”¯f˜ú\#%“pl†%ÝY‹Ûk[Þ`Ø1¡Ôm÷@©ÛÁ¾´¢Œ!ÃøÃPügαS¿Ë)…NMýc85àiW.SæÀ©´R©?6;&¥µæ”êèœcq(ó„t8 ïDt|Z;#þZë(8f=8­u#= ­Ñ{šÖVç…1ï-Ž¢‹ÁÆécJu°ŒÂœ Ât³aº‘aÎè=0͹"øø÷R"éÞKž–TÚ`˜CR5éX ÓTIL9V_>-QM}kL8¸ ¶F§<È0MYv(2—7;xIŠ8f2Ì £ '‡ˆÜOT³Úå˦>s–œRЀÜŒî<4— ï]ß3L©MX…RP1§ÙIy6ïeuê?a’à‚P³›E Âyb'¢ããÄñ×àDGÁ1ëÁq¢éqâŒÞÓ8Ñœ#9cV;¸7ï¥0.m'V Å8X­ ­u³œÖº‘ÖÎè=MkÍ9€QP)uù.Iò•U 2.™·JÊ­R¹ˆr,õ]žÿŸ”Öš*Ÿ*@²X}þ´lí­ý¹­­véÁÍŸæQJÝ›Š•IšØ&­EkV7UøMª–oø» ^MkÖ21 óbÔñ¹ÝkòÞ[6XEÒÆm#[ŠÖª–Ì(u;È»ÀIRNfB˜Jà´Ú%ÜåBp2ÍP tª9¥]÷ª$È i¡¬õCèÞXí°ìJH!ØÓú0(–‰"§*IÓ“ˆ¸:%QÍ9‡e&¤ˆæþ½ˆˆ§Ä_‚ˆ=ǬÇFÄ~¤ÇCÄSzÏ"âêœÛû¸—"»7!‘,˜7Š@|‘ZO‡íÊP†BÄUUMÏVÒõ峒ꋎèsE šÇz:¨E¼Ú¥tw-NÔšòN¬R3eÙ!•ËÑýEU¡¢´C=}ÖŽÓúÖ‚û»‰_ì^ÝŽÕcñé1÷o ¯ÊìeqvâÄĉpïÞÙŠ|½eúí?`·#¸'êsÍê5Ë4 ·S¹3¿æ%“ùø±…%×\;ÆÑSr"ÚujN(Øɵ!ÉØOɲÚ!=zjnuê¿I[í„e2L49íDt|†9#þ†é(8f=8Ãt#= ÜÑ{šaõRŠ73LÊ ÃÃ4 …¹Ëæ_^i¬Ss«*ŸºøûÅê‹|ÃŠŽ½f¾›aªGCŸú(†Êlæèž 3ÃÔ.µ°fÎÝ;N«ÙÕu†êsëý]rŒ Ûq¹³.69}@öh|¿1À¨™ï¸‚|µÝw¬‹†ÁÚo˜9=õGµfWøÑ´oÕiIÙ?Õ°š¼$Ÿ ³19íDt|†9#þ†é(8f=8Ãt#= ÜÑ{šaõRï=Ö…¥,À©«Jå±ê ­ª9[Ú¡Þø³¦½5qâàDÅEyašÇ’³ûgÍNM†™ 3Ã`]òNÓ¾Î[øí?`·Ë×ïÃÔçÐ~N€ÕŽ î܇¡Å‡0Ô÷©«]AMˆ4`ÜŽyßÍ 
æ€SÉûÀI§´$EòÞª_vµCÞUÜH‚äþ0çGðÿ'¸„Óìì»ãdá›Öoº&ääÀ©Û)ïºã¤éJ§Ev…W!vª˜sÃÍ.Ѿ7ÍáoZ]’E_o³“}MF1|Sòž³ÞÒÓÀiýØ ?ŠýÕ)›& ›–Û½.Mìßà¹NDÇÇþ3â¯ÁþŽ‚cÖƒc7Òbÿ½§±ÿP/%póÖå5ÛÅÆaûcRÉÆÂþCêÒgaÿ¡è¸€ç°¿zà”Be:·.'ö…ý´°{·d"]ìov„W—nÏ­õL0h@n*÷n]r®Àð~€áµl^í7u^+LÚ.† *ýÖ‡Õ~ƒr?ËH³ãbÏ2L—-e±PœdÈ“a¢Éi'¢ã3Ìñ×0LGÁ1ëÁ¦éæŒÞÓ ÓœûìQû™ÏW;‚tsÒ–¹jcëò˜T¦±æzæòY ÓÞZ)ip{£ÙÉ“Ç/«G.Ú¯Xµ*³yür2Ì` Ãíø#)]#áÛ~À Æ×¿¬þ¡ži n‡6;,7o]š‡|£ên.óˆú )ÕN­XuH\I‡Ï4 ®—&ïÌHA´ ˜ÁÆž».§LÖ¯j‡FÛ¥‚ó»ßÿµ~üýï–ïþð_uì÷îé¿ËŸêò¿ðŸ¿þówøÍ¿ºÑ¿ÿò·¿þþ»ÚIýê×ø£ÏÜ¿L(½ix´þÑCþÿ#éúǘÿ៿©óûÀÇËã7ÞT©f é¯ÞT;F]›?lþhíIkr’À©Ûå$ÒZsjÙ §/ç-&­mLÃ;ŸÖΈ¿†Ö: ŽYNkÝHHkgôž¦5wn)© qÔK¹Ñ­´VÈaÚ µ&¡.¢ÆR!vo­©Ê>Ž•¯vY>‹ÖÚ[# #ÆÑÁü ­5" ˜ce¯÷J&­MZÖ´LS(ojöüˆAÜŽ¯¿·VŸ[RV°¨ã㬜nͨ *RÙ¤5ÖLþ«ÄJ5gÝw€MB†…ĤNE“Á®{kª{œRʱSÀGõ¢{ke¡zŽ2½)tý§Õ®m–oVŸqüËwÿóKo]¿úûõ¦oþú[Ÿ$ù7û_­gý‡õ¯NÊðyD:"#Ý%c3Þf~ñÍ·ÍûŸ¼e®­ä´»ÍÚÙÞuUýÓ_~÷ço~ÿ›önÿôÍüâ½CŸÓªu³W;gK…5§*E cq2«IÇÐÕ‰èøl~Fü5lÞQpÌzp6ïFz@6?£÷4›ê¥ôæœ2\¬^/Û`ócRy°¼˜M•O9À4V_?‹ÍÛ[“ÅÕîÉÜþÕcÎõZGü»e8»“úç?þe¢ùDó Ѽ8òRª¥Âúi1«¾æ]¼Íës5·*€Aû©vxgZL¬·ÂÄì}1iW`V_EJÝ.im|qUþ#kN±úüõ"ûÏ}|9„ôäøâ¥Ô½ãX¿ljÏñeŽ/Œ/¶¤œ,—þñ“jçóZ¾z|©ÏE,ýùY³ËVî_ ¦,sÎz?¾X›IzK·(u»ôõÕ¬[W«šSöyBºÙÍkÄá2D'¢ã¯VÍjUGÁ1ëÁW«º‘pµêŒÞÓ«U͹$–¬q/ÅŒ7_#ÖŽl¬V“úîéOI«ú:N±z¡K…ÔÞº@=°GG󃩪GôÙœ›]âI“&£‰ša¨ž¿¢€&Ü.‘]Oœ‹÷Ú\$h?nÇYî]­ò¦L™rª-˜ÛYÅžÒÕÎcú$M¬NUü£(±8š4L{ž&N‰¿„&z ŽYMýHG§ôž¥‰Õy±”KŽ{©×Œ–÷Ü"Ö…·n”*cÝ"nª(ù<¡?ª®ê ì£hâKtTsÿÎÁ»¬ÑÄê)kÿ@ßj—iÒĤ‰‘h¿KÆ VrÑM|±ƒ«sµç¢hÿñjÇ…î¤ [jÙú÷4µ×bý¼ªÍMqºŠ#Vë×n^í<Þ“&¢ib'¢ãÓÄñ×ÐDGÁ1ëÁi¢éiâŒÞÓ4Ñœ3ZÜ…’÷Ò„ä¥Ù ‰CRƺ庪jw‹h‡zù¬[®ë[« æqtôå·½&ªG(%TÆé¥ÒÖ¤‰IÐD«L öi¢ÚÁÕUÚs‘´`8 v;Ÿ¤ßH¹,E'6h"{ Î\úš[>z/ï8¦Mœ Mt³œ&º‘&Îè=MÇz©Rî¥ ÒóM’Êi¬{yÕógt:y¹½s;MQ&ø²b9ibÒÄ4‘ë½¼bZ‚½‰f§/y¡/¢‰ú\ v›]ηÖKH‹:ïËû”9½+î¯VU;VÂGi¢:-©fÖ‹ÅÍz {¦‰ˆŽOgÄ_CǬ§‰n¤¤‰3zOÓDs^/DǽTNv+Mä–}2Ávo_ЇÆ`\ZíEM!Ø1Pú°½‰öÖ †”ãèpz.çêѤhÙÑ@^O¨Mš˜41M ÏÒM hêÞÂ^í˜.ß›ðç"äZô,êµÝT契ºó¶ª¯eZ8%$‘Òß=¦6¾”giâˆ8Ó4i"œ&v":>Mœ Mt³œ&º‘&Îè=MÇz©›óùÂ%+ïìMG»6qH}áÏ*¾v,:–ܚدÌí²¦ &F‚ ª×j Ã.L4»‚W_ÂnÏ.‚˫ݽ)eña¡ÈÆb·–…µ›bµy4¥ÓêÔ¿ ýš¼Ô;™0±1KìDt|˜8#þ˜è(8f=8Lt#= LœÑ{&õRšÒÍxAÜHétPê›kÌ?)M4UÅ-±úBŸ•€¼½5$R‘#¹=˜ vUF™¸_€tµC˜×&&M Eþ]rÊ V¬Knç W'ˆmþkvXã¨ý0¼KewMpZüY6®Mˆ·àœÌÈú-½ÚÁkñª'h¢‰ãZ݉Cq™^ lÒÄÆ4±ÑñiâŒøkh¢£à˜õà4Ñô€4qFïišXs!Õ¸—â|oJ'Á¼d² 
šX%(ä´G*UnbU¥dEr¬þÓÄ‹ÎëáîÛi¢zô–Râ9&–I“&F¢ ÿ.‘ÀûMëÓDµC}É,zMÔçr½€%j?ÈXìæƒNY”7®MèÚ a?Al³x6¥SuŠÙ?š€&š Nšˆ¦‰ˆŽOgÄ_CǬ§‰n¤¤‰3zOÓDsNI1؈þ"o¦ ]r)4qH*Â`'Ž©ç»6±F§ÞÝJ&ˆm ×ceb³8꤉¡hBë,-[I]š¨v¤/»~ÑD}®dÿ/H‰Öì ÝLZ“1m”›(µ0I÷š|tïVš¨N©¦ fż,¥MšØ˜&v":>Mœ Mt³œ&º‘&Îè=M‡z©Œ÷^Âvžòk«ÜÄ!©”Û›hª˜jí½êˇ•›8~2¥SõX—û2H¬Ì2Mš˜41Møw‰…÷/a7;ÄËO:ùs)ø+§¨ý¸]N÷¯“"†ïaÂjC¯© S_h³C~vk¢:•T±bqÅfíºp–؉èø0qFü50ÑQpÌzp˜èFz@˜8£÷4L饼›½ù Ébyë Ó*A¸ïŠ2L4U ˆRv¨/üY0±F§¤ÄGÇ£øL4Þvv4ƹ51ab(˜ðï’(Õ£N}˜¨vuÿàj˜¨Ï¡,9ìµIÞ$ ¿&(-„ YÞÒ¦Ú‚ÝbwAcµc|´öêÔ(ûØ‹+23:EÓÄ^D‡§‰Sâ/¡‰ž‚cÖcÓD?ÒãÑÄ)½gi¢9WŸîp?ýåj—àÞƒNd¸øÈóž&JŠ&VU˜4Äê¡ÀGÑıèøÌä1šX=ŠÏÆ Æ•q¦tš41MÔŒîA§ÕŽèê”Ní¹˜s{mÂDrïÖ„©¥üþÚBë[jjñ~Kovð.¹á4k7$D;ıÌká4±ÑñiâŒøkh¢£à˜õà4Ñô€4qFïišX‹°ì襄ôÞ”N ‹ÓÂM4 >ÄB?íÏ©:M4Uê‹Õ—¯ýó¦‰5:h¨;¢c ÏÑDõXê_„Ê êÜ›˜41M@£‰”’tk×­v¯Un.¢‰ú\ÌŠ·l·Kt/M0²½‰\[°ÿR”»I>V»dö(M4§dJ±8¹7N;Ÿ&Έ¿†&: ŽYNÝHHgôž¦‰æ\˜£Î¾Ù±å›/aÛâB6hâTì¤Óªª¨Q?'Çj§øa4ÑÞÚ¸”²c$7 çh¢z´z œ,TfÙ`ÒĤ‰‘h¿KGtÁÝr«ÐÕ)Ús Ô¢]ÇÕ.ɽ•°É·±5µk=%Ô_WkvÌ–®«N!1¥ ¶çjG4·&ÂYb'¢ãÃÄñ×ÀDGÁ1ëÁa¢éaâŒÞÓ0Ñœ‹(šÄ½ëÝ¥ëêa*‚ÍÞ~¿Tù:ùÔO M•SéŽá@ì³òö·`Äx$Hú L4býË:«01ab$˜ðï’8{·ùužoôý¿&½&êsµ¸'ˆÚ)–tsí:BÖÍñŬ¦ÈÍ)u»ô®.ÆO;¾PŸßþ™/¢ƒ(OŽ/î±!r¬Ló¬f4Ç—¡ÆZü“fúúÈÍÆ—j‡¦Wg lÏeËïrŒüÛýs)|sþqFB{?¾P›»U–@i%|v뻉#HY8³šQ¸ Ñ‰èø«UgÄ_³ZÕQpÌzðÕªn¤\­:£÷ôjÕêÍyfG/e|ïAZ)KN²±õÝ$0Jâo¯4ØÖwS%Yr¬žùö¾Û[«$ƒƒ¥Âÿ±w¦9Û0½‘!îâIrÿ›Œ$WÏÒe± ÙŽ0% ù‘nÆü̲–§…|.c`óˆígK¡2„·L‹&MLAÂl„òÿ‹&Š^¿ZÅ-yGùóx€)îó­[ß¼93ñMÔc¶­Øö¹§ÚÛ£µQ›S*^5GðV|ÑÄÁ4±ÑùibDü54ÑQpÎzršèFzBšÑ;LÍy™-±ZÜKáÍ)aC#=î퉰ÌÍÒRe²kyM[=3«çôc{ßí­%“òŸ¡@~Ž&ªG®k–A"±¦Ì×µ¼EsÑ„Ô=å(©¶Ú©ÃÕùÇÛs½ži!Ú;êÍ{ß 2}Î?^þ²6`¨;Ÿ]¡ÍÎôÑüãÍi½µèý ³»¾][^0q0KìDt~˜ Ltœ³ž&º‘ž&FôÃÄî<«æ÷RÄrs1#ÝLŽn哪6L4U\+ïQ¬žñǶ&Ú[×µýôMtü¹Ò¨»GO.AŽfgie \01Lh¤—94K?c`µËY.?èTž[¸¢ §Á’á^˜@ö|”ãÃ6ÁÄ"}ºJ›r~”&šÓŒå7úBœÉº–N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ»d1‹{©÷šk·Ð„ÖšÕŒš(=e n‹ïv0ÙÖDS•svâX=æÛš8ÊÒÄ)eï)5M,š˜€&ÊwY§é¢ˆ]š¨vlxùÖD}®Q] ѨýˆÁ½Mk^ÂÏ4‘k .Á‚`ÂÞìXž½6Q–þåSù¿Ä•ñMDÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4çõÖ„XÜKßKn°¹ÙM4 Ì)åK-ƒ×\4ÑTÕÂÁ¡ÜÝ.åߢ‰öÖ Lä¨ï;wÓDõX^ªL‡$Væo‰ÄM,š˜€&Êw©ñïÚ¾ÿüëûÕ¤zumÔö\ÌZÏ0FíGQÑï­ ^ @|¦ ¯-48ÒØìPžÝ›¨N³"XpN¶Ù¯“Ná4±ÑùibDü54ÑQpÎzršèFzBšÑ;L§z)‚tï%ì-)Ëqoÿ½Tšloâ”zN?VõTtD¼„]=zùʳK¨Ìå-ò¢‰EÐDù.³dã¨6j³ÓëO:ÕçšÕ¬›á=›ÞZÍH|«'ôó½‰Â¥k¡îòÿnÇðhmÔÝ© 
J¿šÑë%2/𦉽ˆNOCâ/¡‰ž‚sÖsÓD?ÒóÑÄÞQšØgJà÷Rö©†Ã…4AÂlMœSZÿ{&˜ØU993TåËèt.:þ¶7LT娾w¤¬|š²`bÁÄL0Q¿KÉ̹»5±Û•žçb˜¨Ï-ˆlÉîv)ӽŌ(Õý—Ï0µ3$ñ¾ÒfGðhF§ÝivV“XÜ{%¿³ÄND燉ñ×ÀDGÁ9ëÉa¢é abDï0Lœê¥ ìV˜`L2ÐÄ9©4×%ì¦ R™•&Õgÿ­­‰sÑq~îÚÄ®ŒKë±*J«4ꢉ©hê,ÈRÒ.M4»÷ã;ÑD}.¸° FíG!ßK°'<Ø™¨µYA kh¿¡7;&{&ªSLåwdŽÅ9.˜g‰ˆÎ#â¯‰Ž‚sÖ“ÃD7ÒÂĈÞa˜hÎkú~õ°—B`¼&LmsØhj>ˆ{{Ä$sÁDSEBžâá «tÝþÖŒeÒ¡qtž+]·{´ZâBbešL,˜˜ &Êw©Z&É¢ÝsNÍNÞÓ_õ¹Voǽ¶ãÍw° ñÛgš Ò‚‰­+o·+]£4ÑœÖZEýË'»È*6N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4±;g-ÃMÜK)Ý[l‚4oùèœÓ®@¿R*sÕÁÞU匆«/£ïoÁÄ©èd{î vóÈàž„Ê`%tZ01LPÝqp©Õ„º0Ñì2]¾3QŸË–\™¢ö£zgå:Ì[ ½ÊçKy̵çò×Á곃”…‰êTjV®ô…8[ô[0q0KìDt~˜ Ltœ³ž&º‘ž&FôÃÄ™^ªtÅéæÊu) ˜÷ö_K˜+¡ÓIõF¿E§¢Cò\åºÝc®Ü1V¦ï—ÃM,šøïi¢|—¦mžÞ­ƒ½ÛÉÛ­Ÿ‹h¢>7§œ½_"èe‡wžsbÜvLhöÅïF¾ö¾MÌDõ»4"+³ù.M¼ìÞhøšhÏ•’ûËUÍŽ]n>IÛªÄÐÔ,äÁ1°ÝŽüÑœ»Ó¬‚ì±8}K–²hâ`šØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰êœêÆH¿ˆÃ.ÒáÞœB¼!éMœ“*<MìêÙ$T_ÞR‹&Ú[Cyñ~FÅWtòƒ4Ñ<²•÷úBéÚ›X41M@½o§À KÍN„®¦ h”âÙ2Fí§Øñ'(mT—s>Ÿ¤,-X¼OØZzz4ËÇ9qb+y8MìDt~š Mtœ³žœ&º‘ž&FôÓÄ©^JÁï-ŽJ°©ÜË;)UæÊxN½}(9þÿš&NE'?Gg”qZ IMÌEXgé5··vsîvlWß›hÏBG€¨ý˜ Ýœ\!3~¾7!TûGõþÝàÝîá äÍiXS(Ž ÖÞD8MìDt~š Mtœ³žœ&º‘ž&FôÓDs.‰0Xðovl÷Þ›P´M<ÐÄ.µ–XêlõŒvUE‚÷¯Ñ½ìÐ~‹&Ú[ç:šñæ·»š·ÓDõ(`\^8T&@‹&MLEÔ²wx¢¿+üó¯ï×Dàò½ jÙ;ÀÑ=j?f |'Mà¦(€ø™&¸¶`Kµ¦QWi³ƒlÒDuªÈ®ýœ/;\{á4±ÑùibDü54ÑQpÎzršèFzBšÑ;LÍy­{ÚÏRý²ãtwuTÕœù¸·×š¿ é ©>WÎÀ—ú:‰X=ëoå ÜßZØ£uÁÝŽž»…Ý×9zØk›£Þ\»®fS8ÚùÎ{ †B ]¥Õ.¹<[m¢‰cÑd±8 ƒÑ,±ÑùabDü50ÑQpÎzr˜èFzB˜Ñ; ͹'Ѹ—â|s~XÁÒcå‡=%Uh2šhª´hË_¨ÿ±Úu¯èXù×ãè¨>HÕ#¢±püÕ!˜/šX41Mä}kÂPúÕ&šÝûùÊ‹h¢<7·ã©`Qûɬ7ßš@r³Ï4áµ+e—þÅ„f'Ÿ6éo¤‰ê”Á“¹Rš8ë ML;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4q¦—"À{óÃ*Ù†‡ÎI­ÚÄ)õøkùaÏEç}Î~7MœR&b‹&MÌDå»ÌDÉòßÅIÿù×÷[5]~»>W¼Vg¥¨ýdQ±{3:¬Êôévýhk ®‰Å{Y2þØ¡>xÐéSSU·XœæU »?MìGtršMôœ³ž™&¢HÏFƒzÇhâóÌÙz‹Âÿ'’îÝ›€–süMœ•š?dXýïhâ*(CöêÕˆ&^oÍ ˆãèøÛ©à{iâ24ëÖÞú_;XµëMÌCûw™Í „:Õ&þØÙ[ž+hâõ\Çšv5l?Ù!Ë4!^ øLÐú #éån{Ù‘K~”&š822öPSòEÑ4±ÑùibDü54ÑQpÎzršèFzBšÑ;LÍ9+BÜK1­4Á9o¢|@ç¤Nµ7ñG•‰hþb8P…ߢ‰SÑ1ðçh¢z,ó UÂP™À¢‰EsÑDù.3ˆ&ëÒD³c¥«i¢>—˜A%œ£gB¶{ïMX-»AŸi_-8qE£Ú±?JM‹¹Çâ„Þ š/š8˜&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9×Ò…JŽ{)þ´½{åÞ„µœç4qJªØd4ÑT$ XhKοEí­3[‰£chÏÑDõ¨5ue¨L‰Ö½‰ESÑDù.s.ÿ” }—&š3^Må¹eOep û=/ÿó­{¼©eÍ{T[°b‰ûJ›¹=JÕ©)ÖC—¡8“´îM„ÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0MìÎëí×÷RJvsµ 
Jì`ǽ½i¶_H•ÉN:5U9#£Åêè·h¢¾uNZ~[Œ£ãOÒDS&¨œ¦nvÄ«vÝ¢‰©h¢|—íœSy\—&ªº^~Ò©>·˜Ö^îŒ?v¬z'Mè–S-s4¾¸cM“œ¶lvD8ÛøRT 1…ýT}Kƒ__jt e™ÆÑyÏlöÀøR¤‰sRg£‰¦ª¦îrøB}þ±Ò¨{tj⌣㔟£‰êQ €¾P¦ëÖÄ¢‰©hÂ7MìÊ3jvï©Ò.¢‰ú\eªe#‚öSì Ã4Ái#ª—¬?Ò¤Ú‚3rN]¥»ê£éÇ›ÓÂdŒ}ÔÙíè­Â좉ÏÓÄ^D§§‰!ñ—ÐDOÁ9ë¹i¢éùhbHï(MìÎ…Ôû“Ü—Hç[i"KÚŠ—Ï4qNª LE»*KɳÄêì§hâzÕÙâèo:u—‰ ŽOGÄŸC û¬§‰¦§¤‰#zÓDmÜH#öF©dÇ×ÞtJ˶…a¥ØD•€¬sã!u4š(ªˆCë«Ç;_lšØåz*Åq9M”-­†$ö•éS~ËI“&  Z ©¾(÷éËï7ÙèÙ4‘Ÿëé{ðÐëÙÑٮŽè+gœ{0é”F­v·ÆMÔF£Ó‹š!Å=§ ™4±²Llxt|š8"þšh(Øg=8M4== MÑ{˜&JãNÈíô};ÐKi"`XDp…&²LÃ=GÞ •ÇÊéôPŸ¦KèÏU˜fß÷¢‰òÖr&ɾwo<›(-ªÇ°¡ƒ Ø,„=ib(šàœ+‰‰^ì¸úòûMvÉðlšÈÏ$H«øNÿIvÀñâj@Ñ_—®C)cKÔÐÙX“:Bë­4Q•$¿³©Qì|ÒDo™Øðèø4qDü94ÑP°Ïzpšhzz@š8¢÷0M”ÆÒ*Œú£”ѵ4¡N‹ÑJ†ØR?Æøº4±K}|³BØï(@‡«ßH¹EÊÅ·µ¿ÆHk¶™ÓiÒÄP4!iDϤоé”ìÐáé*ëI4‘ÛWäø¢ýßô#ñW¹3N£ å…r±ú•›NZzºP öTìîÂÎ2`ú(B_\ú¼&Mô–‰ ŽOGÄŸC û¬§‰¦§¤‰#zÓDiõöfqÕ;!¸Kß;tçÙDiÑ „¸AÙ¼é4ib0šÐ´J,é IÅãÙ•°ósY£…À½þ“옮¼éd´"‡×bÓï©AÌÁ;§ÅäÞ¸‰]â»Mšè-Ÿ&Žˆ?‡& öYNMOHGô¦‰]£ÔåQØXÂÞ'u´¸‰]êß,nbŸwôFš(-FÒÉÅUíhžMLšŠ&¬Üt DÖÎéTìN›ÈÏåïÖëÙÉîEyÔ3£°}G +g!õ`M?NìDx;a¼•&J£.,öÅÅY½®¿Llxt|š8"þšh(Øg=8M4== MÑ{˜&rã†Q “À¯ÚáµbÓöµ¸‰}RMÇ¢‰¢Š)ýÕŸŒôÍÎ&ªw”µ“N½ÚÉ}ÕëJ‹!¿oøÝÌf-ìICÑD®1ÍÄc³z]µÃ§³µ“h"?WL ;ý'Ù_Yo‚r6BÔ×4KO'qhŸg;‹Qn¥‰"N€P¸+.ŸOšè-Ÿ&Žˆ?‡& öYNMOHGô¦‰Ò¸JÔN¢ÿ*Ò®Ž›ÐBX¡‰]RõEņ¯JE•%ñˆÔû›ÅM”·ÎGôqÃdùœ²äršÈ-FRLƒjWYú×YobÒÄP4—¼ õÅ÷ûéËï7ÛÑÙµ°Ës9}{ktK}ŒüÊ(l]Ü•Öh„¡7B';m~Éêƒs_=ÒÛÍ/{¼âóKj1aïܤ-vò”[fÎ/s~`~ñÒ ª‰Ú»UÅ.êégßù¹ÄÆ íÝÞj÷1ß™qy°pZ)®Ý¤õBéÇ1é(Í ÷îVåF# ;lçOAƒs·je¢áÑñw«Žˆ?g·ª¡`Ÿõà»UMO¸[uDïáݪ}£T¸6¹XX¯UG­R9ÍÁÚ•;û.ª0ºŠ÷ÕãÇTïlšðº'$½½¼jrM”-½¸R_™>®Nš˜41Mä«U±—WìäéÐô4š°’“ÖÜ:ý'Ùáµ»U¸€Ç ¯i‚ õ`7Ëîj)­vv'MäFÓ4M©gÅž¸Ï 'M¼^&¶<:ºËĆGǧ‰#âÏ¡‰†‚}ÖƒÓDÓÓÒĽ‡i¢4.î±³Œ«vxmT^Ì&WibŸÔY2¾*MU(è†é@Ã{ÕFÝç“û¢&J‹ žs÷•¹Í¨¼ICÑ•-ÿàÀÖ¤‰bgñìjFù¹ˆ˜ÂÞ¨ìàE4ò©9>,gb_¹ç”æž(ÀܹkYì’ÝJ»Ä©ÎÚ¨ÝebããÓÄñçÐDCÁ>ëÁi¢ééiâˆÞÃ4±k”2ºöl‚s3´RÍh§T±hbŸúÞ‹&vy'<Õ<¹œ&Š2JœÓŽ!­vè3ÿø¤‰¡h‚Ó*=µÄêÍìj§x:Mp©¹Í;W³]´K3²,ie„¯’”ŽcìÜD-vÈp+L”F•ÝÚùf«<7N˜XY%6<:>LL4ì³&šž&Žè= ¥qc°-C¨Æ«/:邼v4Q¥æ¨‰ Rm°ìª*-!Dlƒú ïÕ;âL¡ï/J] ¹Etè*S´ &†‚ É€êŽí¨‰bngÃD~nT–`½þ“KKˆ^ ¼DÌR^Ó„æLHhÍd Õb¸•&J£‚AÚyK«Û,Ú]&6<:>MM4쳜&šž&Žè=Lµñh½?J¥•ðµG!,+ÅŒvJµÁ.:U9K£l˜«Þ,{ŸwôÆ£‰Ü¢a®¿á«s™a“&†¢ ]Œr̰r3ýx±³çŒ}'ÑDi_Aˆ Ó’à•éa…RZ 
ú¬'§‰ª§'¤‰½Ã4QO¬^&áM$]…­*>§ØMìRU áRu®Ú¨»*Ÿ/¹ÁBû[Ú‹ÑD~k1ĶwìÆ”»2"NÐ^cøjlÝtZ41MðF a5y± ¦g×F-Ï¥ H”ý'ŸaÈ•gd$Ðç4!{ŽØƒŠ€ÞJ¥Q ð$üå½8¦…Ý\&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4±7Nì#£”¥‹o:ñfáè¦S—T_–ÎEEUŽòÃÌU¢¯•ÓéÍ;I¡^‚ôÍo<›È-æ\3C[™=ÜÔX4±hbšÈû3ÁŸBªUš(vN§‰ü\ Îh·lFéÊœNÑ×ÃJr7¡ÞƒÕq#5â&²O/·Gí§AM´–‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0MtR1\{ÓIƒl)ƚ蓊<Mt©OŒ¯E]Þ1¹1n¢(㘨qÅ ØQZ9MLEºQ.±€QªåŒv;±³3Ä–çFD!iõl·ƒhWÆMÀÆî{;ˆ›ˆÞƒ-‰Öw4²]»—&zÄù'#‹&ZËÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&ºF)»”&`CŠ!ÉøhŸX'‹›èR/øbQØ}Þy¸v9Mô(³HaÑÄ¢‰™h"æÌ¯Â> ÖÏ&Šų‹£–ç:¤môÊÈ/-^ç}4‚Dx>¿¤ÜƒCLV/ãZì’ɽ4Qı‘(7ÅãºéÔ\&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ7J¥‹£°%ÜÖƒ³‰"A@Rú€T i.š(ªòJXìêõÅêMtyGï¬7á-FòÕXäÐR ºé´hb*šHŠ©½¯—òí—߯۱žÓ)?—}7ŠØ•õ&”7ÍqÏa¼çB?ÖHR]ìˆî…‰Ü($M­³ÛbÇa%ˆm®+&FÄŸ}Ö“ÃDÕÓÂĈÞa˜è¥œ8®>š`‰¬Ç£ý‡¥êl0ѧ>¾LtyÇ×õ÷ÁDn1ˆR[™›­ ìSÁ„mœs¿¾¿hôí—߯ÛA’³a"?WA@-4úHº4A,mî ÏS:QØÇ–ëR»Þ{Ñ©4Š˜R#¥Ó›Ý*…Ý\&Ö<:=M ‰?…&j ú¬ç¦‰º§ç£‰!½£4±7N‘¤^é÷ÍNäÚ°‰È›Ä`ï „0µ•òûäS¿(L쪔AÒf…×Êè´¿u$sh{Ç;Ëm0QZ$ (¡½Ä dž &&‚ ÿ.‰}b@Õ¨‰b¤z2L”çúÛj£L®S /„‰è}T"‹=‡ (g1XªÏ„Ùo­]Wú @S®£‰æ*±âÑùabDü90QQÐg=9LT==!LŒè†‰¾QÊÂÅG!%Wr<ÚT*šëh¢O=½MôyçÆ{N¹ÅT‚ÃU[Êèºç´hb.š€“æýê*Md»€étš¨´ÿoß¾÷í+3:qÚ¢¿$Ì/˜Ç AêÕ&v;xvJ!M”FUØa¨-NV~Øö2±âÑùibDü94QQÐg=9MT==!MŒè¦‰®Qê±Âæ4Á 6N|p6Ñ'•Ã\4QT9i=³É›ú«6±¿µ¡Äˆm︀ûh"·(H`Me M,š˜‰&r…ify_NèÛ/¿ßƒ­g_t*Ï•èZ£v¶£K«M@ÎËÏ:yΑ[VOà½ÛQ¼µt]Ÿ¸H¼`¢µJ¬xt~˜LTôYOUOO#z‡a¢k”Jñâl -ÁA!ìN©6WzØ¢*‹©±ù¾¿¥Åׂ‰âˆØöN„pãE§Ò¢æW’¶2æu4±`b*˜ ò©Ÿ#8Vab·8&òs}ÐN>l7úO¶»´tà–=zDfsö«ÐPj†Ši¶ù¥C}|µù¥Ã;IàÎùåãÊÖü²æ—¹æÞBÎðAOŠ1¿»tzT^yn´¨±±YµÛ]›0Pòñ¾ÙÁüÂ&×TSC©¯$í}±¼Kw«zÄ¥`õôãùçü÷ßz—üíßžküô»OŸöªþøÃŸwðÿŠþHq‹¬ŠdH?}õûO>¾þáÓ?åWþü''Þ¿“.ÿ$òï“SÁû—Ñzïê{Õÿõ¯îsZ}¼ÿiü?gÜÿ×ùêÇ~øÃWÿûù/¿ÿ*üüw?ýõ7ÿí“áOûÆÀÿ8‡ÿåç·-³èÏÃÛŸ·Ÿ¡2¼ùßüú¯?þîÏŸ|\þÏÏúþ‡_ÿ ~íüñ÷ÿüÉ'Ùo>ýñ;å_ý×>Ø}ãÃ~ÞùËãï>÷ͧÄß}úÍ÷ôõo¿û¯ï¿öE^úú7ú ¾þÞ¾£Oú}@ ü«·ßý›}Pyê-IÌ Xm-vÎhaíU¶6¡*¯rDü9{•}Ö“ïUV==á^åˆÞá½Ê®QÊP¯-Œ|E!áð¢[ŸXž«4n—z úb©»¼á¾ôó}ÊèáÐqÑä¢É)hRÒÔH±A“ ‰ϧIeuöÀFÿQf‡Þ+Fâ–¯¤ƒ»’{0ÔCËĦ2÷Êʹhb*šMÉŸ•À Jn‡‰ñìÒ¸¥} I¥Ñ\§É¥¥qã!±ЄæBR¬Ï/ÙNCÃï  ÍÃPŽ!BiŠ3â–Ù\&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4‘ìó[k”2ä¯Í¹¡p:ë?.4¥ÉN&zÔ±¾Kôx‡ƒÜ˜âE7 r€[+ –K,–˜‰%4¯åÙüÔF;´ÓïQçç*Fhõb¾ôd‚a“œ\þ 
û|,œHŠõ]ƒl4Þ{Ï­GœU‹%š‹ÄŠGçg‰ñç°DEAŸõä,Qõô„,1¢w˜%ºF©«O&4¯É¢2û¤ÒdQ™}ê_¬–UŸwî«eÕ§,®¨™EsÑD.8Rb}OÃ_ЄÛ!„DgÓDn‚ õ3Ç¢ÓäÒôód[>ùô'RÙÈAUI ©n‡ÏÒ]ˆ¥QÍTöqb+ÿ|sXñèü81"þœ¨(賞'ªžž'FôãD×(¥rmi\fÝTñ'v 9Ë‹}@jÔ¹p¢¨²ï0µÕGx1œèòNB¹'r‹ìtêÓTÆáá2õ‰…Sà„"ŠÔHò’írEÅóqB‘‰c¢Ðè?n‡|åE' ›¨ T³²|@j>ô6*ê»(÷NäF£€ÿ‡šâRZ4ÑZ&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ5J¡^†-ˆ›¯Èh¢K*¡ÍE]ê^ìªSŸwBÔ/§‰.e*«6©hÂrvâ¼c]¥‰b‡kä“h"?×!%‘µúÛE WÒn”œjž‡åqÈ—“h¬_jt; ž_I}â„Wvk™Xóèô41$þš¨)賞›&êžž&†ôŽÒDç(•äÚ¤N6²ƒ«N}R5àT4Ñ©þI-®dšèôÎÃÖÛÕ4ѧÌt¥tZ41Mäï××,Uib·<›&òs $×Çþ“ƒ°ñÒjV´DŽÏórX…ùÏŒªÇô»O§·Òø0P£·ÅñCÔࢉƒebÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L]£”0^KI¶à€&ú¤êd4Ñ¥^_ì¦S§wâ}嬺”Yˆ+¥Ó¢‰©h¿ßd “¼O–öí—߯CºÄÓiŸkÁÀ—Ú¡ÑÜN€.¤ ƒÍ˜½Íç4¹§sÌHSUй§{Ÿ¾•&ºÄ¯¸‰æ2±âÑùibDü94QQÐg=9MT==!MŒè¦‰®QŠÃµaØ>ÃnÂGg}Ri®›NêÓk%ˆíóŽÐ}7ú”=Þ¦^4±hbšÀ’ø5xsuš(vhgÇM”çZ Ôê?ålä⸉”IžÓ•}ïÄž^ì€o-7±7*ŒîǶ8 ‹&šËÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&JãŠBõJµo"Ÿïž…l#àšè“ja.šØÕ§ëÉÞìèµâ&ö·N ÐöN|ÈS|9MäCĦ2x¬A¼hbÑÄ4A%E,s&ŠÉÙIÊs}¹ÕÜåÚ›N‘¼3>Âf.=]žyê ¥¼÷t½•&r£H1‰A[ÓJÛ\&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ7JÙµQØ >ÞÛAN§>©såtêT¯ñµh¢Ë;Š7žMt){,À¾hbÑÄ4Á¾‚6ó/•ëqÙ.Éù4áÏeL¤ª­5zdP»òl8lãQ¶äžžÔ˜ê=½Ø=ŽAwÐDn”rº¿¦8àE­ebÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L¹qJ!i{”2»¶xnöB‰,PSª@œ,n"«ŠˆÈp´Ä‡bÏ/AþÖÝ5’Ú“e”tãÙD‡2ˆÞ‰M,š˜‰&dKÔôïÇo¿ü~Ýîñ*ëI4áÏ•(¢X£ÿ$V WætB†˜°ջp9Gl}‚É•*Ò½‡E\Ie-qn÷pÏwáÄÁ:±âÑùqbDü98QQÐg=9NT==!NŒèƉÒ8IT…ö(EáÚ‚F¸%ˆ‡}Rg œ(ª˜0¶ç*Wÿd“î'vï¤ê'Þìô¾úu{‹I$Æ(‹a%uZ81NøwÉ€êHQ¿ê”«Q[äÓq¢Òþß÷·C¼6p"Eóë9MDïÁ yªŒ» ÷ÒD—8 +Els™Xñèü41"þš¨(賞œ&ªžž&FôÓD×(Å"_u ÂÑU§>©OêIÿ¢4QTå°zâõÿË£‰>ï<&b½š&r‹HI˜¥­ÌÂ*8±hb*šðïÒWò)DHUš(vòÀé'ÑD~.…ÄL±ÕÍ. 
œ€MlHŸÓDòÌ)™ÖKË;[i¢Gƒ¬«NÍebÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L]£^œ"¶ƒŒöŒ:Mt©'Â×¢‰.ïø?÷ÑD—²(¸hbÑÄL4áߥ„\çC•&ŠÛégù¹1_1jõÁ —ÒDÜbÂãÑübF¦n` ¥fŸ)ýeç—õÆújóËǽCAn_:”áJ¸æ—¹æÛ‚!óû}à/æ—b÷x?ô¤ù%?—HƒÕþ“íPã•ó Úf˜ô RÃ|˜SH% ¡nÏHëÂͪÜèÿ±w®É²œ¸Qz£‘xþ3¹@û–}*adf+NÑÑín|¥< ÉбlaÇâd?¤ŸBt<ÿ°jEü5‡UsÖÁ«ºžxXµ¢wù°jj–ÒßÄ^zXUhé(é“«ï9©ì!íœzç)ïØ[¹ŽÛa¢)Sä‚CeFiVm˜*¥Ÿ²IÁD±“ëãòj» >xˆ^íØóõŒÈBóüù2DRÁš”º#½Ùɇ:®wÒÄK¥š­e(NQöCÚÑ6±çÑð4±$þšè)˜³ŽM}OÇ£‰%½«4ñêœËò/:ž¥î­gÄ ¤“‡´“R‰BÑÄ/õ5¡S«gH_E¯_-–‰ñÞ±çrÖ1yK ?Vfº³|lšˆDÔ²½¢Ð½ú~Ù•Ùübš¨í*”þ¥_yáeg7Ò„°dµ˜Àgœ€2„sqÁ ‚°Ù™Ð£IçÄù[†'ûÄŽGããÄŠøkp¢£`Î:8Nt='Vô.ãÄÜ,%÷fù`’ÃàÄœTãX81£>'þ®ò¨“Þy/t7NL)#Ü/6Nà eusàD±£·ÛçËpB¹Ìgªƒƒ‚jW¿ù¥SYìóS'©Z©±÷•;pÆü(ML‰3ÕM£mbÇ£ñibEü54ÑQ0gœ&ºžH+z—ibj–Ê~o\[>0ŸM¬ˆ¿†&: 欃ÓD×ÓibEï2MÌÍR.7ç GYŸíAcÑÄœúüeØSÞ!zðrbJ™ð¾œØ4Š&ÊwiLŒ–ú—ÅNÜ®¦‰Nÿÿ?èw¦ub9’æâ×Ïë‹–lJÙ#½ÙÁÃO¦Ä 컉á6±ãÑø4±"þšè(˜³N]O¤‰½Ë417K݇M–Îpr7Ñ$XYP~ 5Ǫh4§þC6Þ?›&æ¼cúMÔ39*²¼”¹ï»‰M¡h¢|—jÙkéÞ.M4;“Ëã°k»Ž „y4~ÔAðNš€CÌøŒ&¬ŽàŒãÍ®ˆ}”&j§Ñ}0 5;ÐÕi¸Mìx4>M¬ˆ¿†&: 欃ÓD×ÓibEï2M´Î ÈÔÆ³ʽqØâv嚘“êÁh¢©bHe]«§ßkyÿÙ4ñòŽ&{‡é¹’¯Ëær+SßY6M„¢‰ò]fÀòiæÜ¥‰bgYáòÀ‰Nÿÿ?¹øåΗN5W3&úL¹ÞBZM©×?X«vùÃiÕ­41%Îy—œn;O+⯡‰Ž‚9ëà4Ñõt@šXÑ»Ls³”ß[ÀŽ3e =¡‰©ž¢…aO©/Ðñ]41çü`ŽØ—²wÉce–vV§M¡h¢|—™Êw)¿ßúýõŸï·b\~7QÛ§\þ1?e‹.ùNšÐ£ŸåˆõC)<ˆjvšìQš¨–^…†âx'un;O+⯡‰Ž‚9ëà4Ñõt@šXÑ»LS³²ÝJ…Vp?¡‰&ˆIåR bÑDS%,e£5VO®ßESÞað¥Sí‘QuTq¢){¯s²ibÓDšð·Pÿ“û9b›Zºš&J»9³ö Ý¿ìÈîLê$vÔ«aøP›ª²2‚‘Ê¿ïŒôì ?…ýO§ŠÄÇâ„Ó¦‰Þ6qàÑØ4±*~& æ¬ÓÄÐÓÁhbUïMüç™=ÿ` UÒ{_:¥| Éš˜—ªî&þQ•%«óX½ÁÝMüó«Ý´üqÇÞñ·››[iâ´¡²²Žï¸‰Mah¢~—p$Ê)!Ùyößv–õÒ—N¿ÚeÒvwüT;@NwÆM”åEÊ(uúŒPêÑà@jMÁð(NL‰³·'É'Nö‰ÆÇ‰ñ×àDGÁœupœèz: N¬è]Ɖ©YêC‡‹Ëa×I‹NpbNªy,œ˜Rï߆sÞqx'f”«ýÔiãD0œÈ’йPø'Š]rº'²HÙÄ{çÚño;º3©ba~–âõÏ8e‹c2§®ÔjWþ€ò(N̈ÐýÖi¸Oìx4>N¬ˆ¿': 欃ãD×ÓqbEï2NLÍRtoV'ÇNpbN*A,œ˜Soð]81åy'¦”I’'"á¶·Nª¤Ðʼnj‡tí[§Aÿÿ?FöûýÊvù(ÄÄhŸi‚ŽšlÁ[_iµ«5§¥‰)qB¾ib´Mìx4>M¬ˆ¿†&: 欃ÓD×ÓibEï2MÌÍR~oV'Ê5‹ßMLIUFsê¿©~Ý´wL¼œ˜QfðVÿkÓĦ‰4A5[SÖ$F]š¨v¦×æˆýÕn®ïxQh4~2"é½'¼ŒÒ3špWbM”J‹¤m}™P ¾m}™ñŽè“ë‹»±x2+ƽ¾ìõ%ÒúÂGB¯_¯Hw}ivüV²î¢õ¥´Kœ¥ü뎟fÇ„÷žV1ä²Ò}^_¸ìÝËHœ«U;KöìiUé”'ÌÀ#q\&QÞ§U£cˆŽGãŸV­ˆ¿æ´ª£`Î:øiU×ÓO«Vô.ŸV½:G°ü“YÊïÌ£ƒDõt¶çå¡$c© ÁžÒ6U’’ÿ@½ÙiÕ”w x?GµGHÊD4V–wE£MÑh¢æö–dìš(vêézšÐ¬¬()ÆOV¢tïi•—5ìäê[ÊFײ÷zµLú(L̈+ÚtÃÄh—Øñh|˜X 
LtÌY‡‰®§ÂÄŠÞe˜˜š¥àS,•I™;¹úž“* &¦Ôcæï‚‰)ï=SÊD÷CÚ ¡`B¢L½‚FÿØá[bí‹`¢¶+Rþ+>?ÅŽ>=Xº &ŒDgayzÔ#ƒ²ÈQÿj¢Ùò£4Ñ:5¶4xÏßìø-cÓÄÉ6±ãÑø4±"þšè(˜³N]O¤‰½Ë415K‰ÝœåCý0æš(ÑÌYX›T÷`I'ÔK²,ôGÓÄŒwÀýÁ°¼Ú£¦úˆÊ$[Þ4±i"M”ïÒ³ôÒ6;âËi¢¶[ k'ãÑøq4¾õ!m.«‹ƒž<¤µZE¨ßHSâÐwyÔá6±ãÑø4±"þšè(˜³N]O¤‰½Ë415K‘ÞK¤rˆŸÀÄ”RSêßsE|LÌyçio‡‰ÚcÙ  `¢){Ïà¼abÃD˜°š»Ã‘¬ ͮ쵯†‰Ú®3 épd›Sº&ˆ#,~ÿ ¹T`Y%—(Õ.}HI{+L̈«ùÞ7LŒv‰Ƈ‰ñ×ÀDGÁœup˜èz: L¬è]†‰©Y ÓÍOª£ÎKÅ`9>š*B…Á5úK½ç—w´&K{‡øÁzF­GÃäöeòv´»ibÓDšÈu—Ž)Áï#ë¯ÿ|¿–oÈñ‘k4jŽ6;ÊùÎä^þhe–9¡ ¯×£@NÚÏáíz4?ûЩ‰£òݨ Å!òÎ8Ü&v<Ÿ&VÄ_CsÖÁi¢ëé€4±¢w™&¦f)Ât/M9ŸÀÄœR ‚=§þ÷BÞ6L´_-)—Õ|ìÆüL´­xðwSÙ 7L„‚ ¯¡Ír-÷C°›Ý{Ž‹`¢¶kF5'öhüdÓ[£&HÔuõ& •œ³ê ¾ãe§)= µSMâ¬8çÙ7LŒv‰=†‡‰%ñ—ÀDOÁœul˜è{:L,é]…‰™YJ¤{c°áP@;í'¤Z¬ô°sê ¿ëjbÎ;,Ï]M´\´_*þe—Ô7MlšDõ»¬‰—Q{4ñ²Câ‹i¢´k)‘$"ŒŸb‡zçÕ§ˆ øóúRVŸšwÊÁº‘[Í>ˆ¸•&¦Äù~è4Þ&v<Ÿ&VÄ_CsÖÁi¢ëé€4±¢w™&ff)Lw?t>?_ML*•XïœæÔÇ‹•?&漓Ÿ‹šh=’`r++ ¸abÃD(˜€²I—$µÐV&ªg»ºÖDm—R.,c4?ÅÎnÍ«¹“‘}®5XF0ƒçÜÏFò²KþìÕDëÔFd³Ó·šƒ&Nv‰Ƈ‰ñ×ÀDGÁœup˜èz: L¬è]†‰Ú¹$ÔÌR~szØlxˆÈ M4©P·Ú0”ZŸàÇ¢‰¦ªl!’ËX=Ú—]M´_-¹¦~{‡!=GµÇòGƒÌ6T¦)íÊu›&BÑÖ+N¹[¹îe‡zuÔDk×3û UÛË îÁ†‚ ˆŸi‚ÊΩ̳ޚhvæøh öKœ”ïHi(.cÞ41Ü&v<Ÿ&VÄ_CsÖÁi¢ëé€4±¢w™&ZçZ#ôdM¬ˆ¿†&: 欃ÓD×ÓibEï2MLÍRY²˜EâÉÝÄœT³X41¥Þ/êñgÓÄœwôÁ»‰eµÆÉ¦‰M‘h­UR?j¢Ù)\þЩ¶›kV Ÿb‡éλ ¦¥L­'/jí>(›ñþ“¬fW¾ðGi¢uÊVþ÷q¼ï&ÆÛÄŽGãÓÄŠøkh¢£`Î:8Mt=&Vô.ÓDë\rN@ãYJ”ïÁ?Hô„&š„즃›èf§)M4Uîe_Ácõ®_ƒ]µ&Ï:È›ô²ËÆ`·IÇÊès6MlšøßÓDnw ĹKÕŽE®.]÷«ÁM¬ˆ¿†&: 欃ÓD×ÓibEï2MÌÍR.÷¦G;XÏ‚&¦¤2H,š˜So_FõWkÊ’szG ¼›hÊ `B?ðÿ—îôã›&BцDIÀûÏœªjºü™Sm—“•–Gã§ètº5h©”>3B©sKVïßM4;ägC°k§ŠÆâ²lšm;O+⯡‰Ž‚9ëà4Ñõt@šXÑ»L­sB~åº_"=ß\5éYº—в °_v‚±X¢©bœp¿ì¾íf¢ýê²@“ëØ;ðK´Ë_$Û”™ì›‰Í¡XBjš$fõ~)£—\žÎ©µK(lýäÊ/;`¸“%ô°Ì…>¯/ZFpæìŒýíúËŸ Àn–oÁìâLw)£á&±ãÑø,±"þ–è(˜³Î]Od‰½Ë,Ñ:wlÑ_vxoÌ„l'¥ŒšO 0ã¤æ`7M=”Ýò Òýeð]4Ñ~5BùHó¼ó`©‰W\KÅËX¿•Ù4±i"M”4ˆf¡AÔD³¿º0jk7×ãª~‡fW»‘&0èÊ|5aGE.b«5»dÏFM´N™ÜÈÇâtÓÄh›Øñh|šX MtÌY§‰®§ÒÄŠÞešh " 2â½ìÒ½4Áµ´Ò M4 õ)Éàtÿ%UƒÅ`7U–D²Õ—ÆwÑDûÕÝÓVr³ç £¶¡ì1’—qH¾“ÃnšEÖî³c—&Š]¡t·«i¢ö€BÑm.7Ç`SüLùhu(È]rËÒÄŒ8(ä¶ib´Mìx4>M¬ˆ¿†&: 欃ÓD×ÓibEï2M´Î %õër½ìTïM«Žœ…`Ï)õ &ª*LR60Vo_V{Î;Y|èÔ”•?Èÿ±w.=³ÛÈþ+Þ9+u%k1»ì³ ¼60Fr€`ư=ä߇¤ÚpO«Ø %™pÓKŸúT¯ªÅËÃK•ùUâvOk‚ &LL©Læ% þÖÄn÷ôýžå¹å¶] VË.v®­4¡Rê\¿† +-˜”•ü–nµ²{“ÃöˆCÂu»9Kt":?LŒˆ?&}֓Äé 
abDï0LTçÌfßè¥ôâ+Ø 7 |@}Rg;è´«×<áLmõyøý,š¨o-¥Þ ¼»±ÒDõh€&o(‹O3M,š˜€&lË%‘0‹KÕãé×&ÊsSÙõàÖ48ÛåNîBšÝÐ Ókš [pY4`t·¾«é×[ßWÒD—8FZéa[ÓD/¢ÓÓÄøShÂSÐg=7Mø‘ž&†ôŽÒDg/•ìÚ­ ’ ä n]ŸT‚¹ÒÃvª×øQ4ÑF½&ú”ŧ¼Œ‹&Müñ4‘¿ËHQK…v7=ìÃΦ‰úܲÙÚ(Œú°»´Øó–B@L¯ir –P*£º{ßÕŽ“ÜšvGÙE¤¦8¡°h¢9Mt":?MŒˆ?‡&}Ö“Ó„é ibDï0MTçLI@Þè¥ìZš¥Üß\Âî“Ê8Wì]•DPIo¨OôY4Ñ©5}5M ˜ò„¨©Lƒ¬*Ø‹&¦¢ ØrMœ7¥Ón‡pöµ‰ò\"HÉbh´ŸbG—VÁŽ[ €¥ëÊ9(SEfò[zµ¾õÚDŸ8~Ú8Y4q0Mt":?MŒˆ?‡&}Ö“Ó„é ibDï0MtõRצ‡¥”;-:¸7Ñ)Uçº7±«ŠÍ/6ñxKú¬{}щr_J§ê±ÔHþ¹ï‡-šX41M`I¼‚ú{ÅÓé'êsKRV¤Ôh?ÙðÊ{·{å×—°‰r N!‡ ú-}· ·¦tÚRɹemqD«ØDsšèDt~šM8 ú¬'§ 7ÒÒĈÞaš¨ÎË-9y£ e ×Þ›(s2:Héô ü[Ø;Õ¹h¢ª’<ãôoó>ìX?‹&ê[+IF…vtï+]·{4 loün)­“N‹&¦¢ Ú"•Ì«ÝÒuÕ.ÀS…“h"?—³¡ØšG*u]®<é6„@¬G4aVªMQC©ÚlãK‡z~Q"ðO>¾ôDÇðÎñ¥C™Æ•2p/S/¼åw´’>Ç_ŠQ<;ËG}®Y0ñïvµåŒ0­VqI& B ¥uÆyk9£êÔ(rnŠ3z:‚³V«–!œˆÎ¿Z5"þœÕ*GAŸõä«Un¤'\­Ñ;¼ZUçÙ’øE×v_/{’VrEG'i¹œƒ )j÷ö&¯vQþHš¨ê!I m¨/vòYY>ö·FÉó‰ÔŽÒ{ßÕ£@hd)Øí˜WqÔE“ÑD ’¢FM¤ÀúTRú4šHA#B«×Î: Ó…4!/A ¤ùÜ„sæ×qÍvy€É#æ­8QÅQÈS»Ø—íN4ç‰NDçljñçà„£ Ïzrœp#=!NŒèƉݹöËRþ*òÚÍo*8h¢Oél4QU1FÅð†ú¯3½ÿ¹i¢+: ÷•3Ú=Æ ÞøêÄÖIÚESÑ„”ûvI™8º4Qì¢Êé{ßRöÔU“´Ú&3¹¶8ªF”£{yZ× [g~«Û½{Õiþ`5šw»(«œQs–èDt~˜L8 ú¬'‡ 7ÒÂĈÞa˜¨Î-•î±ÝK¥¯kvž{’6ÐïåuI5šì$í®Ê¬5_Þí¢|M”·F€RÒ¤ü×7žtª‰"iûwÃ窄‹&ML@ù»$¤’Eß¿—WìÀðô{y幜›vhä ¬vrm=£Ôüßkš(9@Pò×8éTs…˜Þ{/¯Š+5Ô#6Åa’´h¢5Mt":?MŒˆ?‡&}Ö“Ó„é ibDï0MçBžI»—2Ž—ÒlmD÷IµÉ2Wõ@Кê)Ä;é´G'?¹ÁZ»Ë}4Q=r~íÛʈiÑÄ¢‰™h"–½.ƒ »4Qí0¥³i¢<!F0nµŸl®¬g„¶B|=¾¤Ò‚c)„ì_Q¯v‚x+MT§óGcmq)­½‰æ4щèü41"þšpôYONn¤'¤‰½Ã4±;7 ñ^ÊâÅ{J[Ôƒê¨Uç‰J á ©6WuÔ]=hžJpS=MìÑI((Þ˜¼zƒ7”‘-šX41M¤2K×<#`Ÿ&v;<=yy.k°V6Ðj'p%MPØRˆõŒ,·`Aai,ÿW;t+MT§IBh,FU»Öµ‰æ4щèü41"þšpôYONn¤'¤‰½Ã4Q[ž¢7ÊMì"¿>€s*M¤`›æ ,4²¹-õÅ%¹?–&ªzdÊãPS½|XòúÖTïk¶£ƒ‰ï£‰êÑRIöØV–xåtZ41MX¡I)™ŸÓ©Úqà³i¢<—c0°Ôj?™&D¯½7!†^çtâ[p ‰Ð_­ªvjtk=£])ä/§).b\È[ÓD/¢ÓÓÄøShÂSÐg=7Mø‘ž&†ôŽÒÄÃy*·äÚ½]œÓ‰7‚šØ%0aŠá ©i® ±»*%?úã-õ³2?¢Ãå¨P;:B÷e ¯‚¶·ô\+~ÑÄ¢‰?ž&Êw™{Sõhb·c:ûv}®…r'#¶Ú'»´:*Ã&Œ ¯ïMäÍ}K¦žÐ_ªò­·°‹SHd~1¿*.­½‰æ4щèü41"þšpôYONn¤'¤‰½Ã4QSîýÔ‡]¸–&hÓó¸wØÛwH•¹naïª$µÕ3~ÖI§GtrlÂÑÉQ¼&ªÇ„ ý½‰ÝN-.šX41Mäï’ÉRb7§Ón—âé4Qž+ üœh»Ýó½ N:Ñ–;˜ÆÌ-ýšj»]¾•&ªS2k$ª~Øáº7Ñœ&:Ÿ&FÄŸCŽ‚>ëÉiÂô„41¢w˜&ªsN)5vPw;¼:§“lQñ`o¢O*ó\4QUI’H¡­^ð³ª×ío­ÌI±çW—ÓDõhÂQ¡­,ÉªŽºhb*šÈß%EDz±óÝï¾_Šξ7QŸk5ùóv;‰WÒÆ-GÁ^'ˆeÊ 
¸ì?ùnéÖKØ»S1 ~žÝ‡ÝÓ0½`â`–èDt~˜L8 ú¬'‡ 7ÒÂĈÞa˜¨ÎËqþ¨í^J/Þšà=±ÇL숽#纄½«*«`o¨Ÿumbk+´Êí褧Ìb—ÃDñÈÊi§¦2†çM“ &þx˜Èߥ°”¤èÂDµã§”J'ÁDy®%EÔæ4Xò¼®<è¤@Ê}M\Zp(—´}¥\û*¹÷ÚDGÉ'Aw;X×&šÓD'¢óÓĈøshÂQÐg=9M¸‘ž&FôÓDuÎbÜ ‰ÝîU‹3iBtKztm¢O*OvЩª’hÚ ‰]½ÅÏ¢‰úÖ*ÂÚѹuk¢x” úÆ0n¼ÊM,š˜Š&òwÉ!¨rƒ&¸l $;&ŠÿRŽýô Õèëÿ3·&´¤ø`9؛҂K͋Ƶ‰j§z/M§‚ ´)Nð)É¢‰ƒi¢ÑùibDü94á(賞œ&ÜHOH#z‡i¢:ç²ÞßîB…âµ{aK•°w¹G‰m¥‚4LTUÙ¯Œô°cú,˜(o­TñßÖô¾ŒN»2!m+Ó™ &f‚ )“y‹ÊæÃDµKr:L”çj(—À¡Õ~8·ì‹óÃr¦…tpkB븑DJ«ë½ªÓD±•ѩڭJØöÎ#âÏ GAŸõä0áFzB˜Ñ; »óÜÝ“µ{©ÄéÚ;ؖʘv@»„¸qïàñJså‡ÝU™dlöz –ÑIwš(»2oD'Þ¸5Q•e€mÔ8yØ,šX41MhÙšˆX2\š¨v’N§‰òÜRÂD[í‡1¿øµ[˜ôàœSÌ ¸ìs#å^±S»ùÖDG%YSjŠËCòÚ™hΈÎ#âÏ GAŸõä0áFzB˜Ñ; ]½Ô‹,I'ÃDÜÔŽ`¢J`È{CjœìœSU•ë<°¶Õg^ú,˜è‹ŽÝxk¢zL¨­,†µ5±`b*˜ÈßeÙ –ü+ØÕŽèìbõ¹Y-4[6‹º¶ØD¹‹Nðš&RiÁ†d~9 ÝN¿.ft)M§©äiiìlW»ðÔ -š8˜&:Ÿ&FÄŸCŽ‚>ëÉiÂô„41¢w˜&ºz)€k·&8ò‹MôI¥É¶&úÔ¿8=ü§¦‰®è ݸ5Q= GjTc©v,‹&MLEu³W9¥¯ï#}÷»ï·l!œž6UJ  ¡Ù~˜‘äÚ­‰yÒ#˜0Cˆ±…=Õ.€Î6¼dUHy8–¶zú´á¥DGÊIÚvtð©Ü0¼dÊ‹¶2¦•}| /S /¶ĨÁ¾^lý—á¥ÚéÓ¥Ò“†—ò\†’2ÐßP®v.=G‹^Bz=ºX™¦v­¹j§"·.U§Vú `Mqžò´¬¥ªƒ5'¢ó/Uˆ?g©ÊQÐg=ùR•é —ªFô/UUç`‰Ú½_»Te%-,UõImã»ªÊ ‘_ ­?m㻾5%%Líèà},Q=F%“7†qI+÷øb‰ÉX¢ì:gO)4X"ÛÅó+•ç&L°Ù~²ò•KU™WX@^£•°E ¥¤ ¿ÚSìÀ,Ø4±‹# Ø—ÿX×¼Ö4Ñ‹èô41$þšðôYÏM~¤ç£‰!½£4ñp®¨úF/Eh'§ ñàm§T SÑÄ®ŠE Èêí³v&ÑÑäè°Þ·ñ½{ŒÊÜV¦kã{ÑÄT4Q¾Kæº^íføØíÎÞ™¨ÏÕTÚÓ`K—£•M1%}½ó-PZ°‘@ô[zµKpkºÀê´Ôpƒt·]{Íi¢ÑùibDü94á(賞œ&ÜHOH#z‡i¢«—Bº˜&RØ ìM<$D3|Gª¤¹h¢ª"õoº?ìÂgíM<Þ:rLïDGå>š(1@Ò(meö”¥rÑÄ¢‰ h"—µ”‘{Îi·£§N'ÑDy.&KàŸÚíâ«’çs‚-Ç]í`o÷ ê§äÛížwqî ‰â1k·¶8ħt³‹&¦‰ND秉ñçЄ£ Ïzršp#=!MŒè¦‰êœr_ ÝKÑ«{Í'Ò„”|x}|—À„ÖØ›Øí&»5±«)ÂÚê?+_àþÖªÜ8ü°c»&ŠÇ wä eÏåM,š˜€&òw™]”Ч‰jϦ‰ò\“Áš½v¶»öR^Ü td¯i‚j߸q+o·ã{V§$Ë‹Mq¹»‚E­i¢ÑùibDü94á(賞œ&ÜHOH#z‡i¢:Ç ¬Ðî¥ ^[5Ú(ÀÄ®´Ô릶Rä¹Jíª(SI[=ágÕEÝߺ$~g¬d¸ñ SõXN6ø—Ãw;M¼`bÁÄL0Aûµ Põa¢Ú_ʨ>7a~ÕÆÖÄn'|íA§”á&8·`,I ]¥ÅŽ,…[a¢Š#%h¬iT;Œ²`¢5Kt":?LŒˆ?&}֓Äé abDï0Lì΂ŸrõaGrmöq-9œá€&ú¤Ê\ùwU¥ˆ÷cU~Kù,šèŠË[ÕcÌ«o|uª«”Ñ¢‰©h‚Ë’¿•› âÒDµ»`k‚Ë*TKÐlÙøÚ|©îÿ¦ É-XŒg~«¼*~!MT§$Ëo‹c\4Ñœ&:Ÿ&FÄŸCŽ‚>ëÉiÂô„41¢w˜&vç¥r½ÑKÙµˆh‹G眺” È\0QU•ÓŒo¨—;çÔÅoMZ ¥7¦ÉVvØSÁ„ÔI:Ê‹Ó;ßýîû•ŸÎWžå¹5/v«ýÚµ07Öó×,¡¥¡GÕÄþªAµSº7¡Sqó7Mq17¿Å­I¢ÑùYbDü9,á(賞œ%ÜHOÈ#z‡Y¢«—¼ö˜“ ohr» ªïHež &ª*ÄRç õVÉhkBÀ íè<Ÿ†¸&ªGËßf[Çu{ÁÄT0‘¿KaQ{qé»ß}¿Â,§ïL”犥$Øœ£‹$µkë¢F„ bþ×Då$”¯´Úa 
[i¢:Uƒ fmq ë vsšèDt~šM8 ú¬'§ 7ÒÒĈÞaš¨ÎKê×@Øí"õ4!´AÔš¨,÷öÛROv»¨²€yºüÆp`ôaçœjt@©•6éÅi¢z¤üÖoünF´®`/š˜Š&òw)Š"û[ÕN/\WŸÉ'«ÐÅ4‘{Ѩ Ríƒ8Dõ·¾«Ý‹{—ÒDuJ‚©‘]¼Ú¡é¢‰Ö4щèü41"þšpôYONn¤'¤‰½Ã4±;Oepk÷Rùò[rt»*à˜ÞQj0LTU’± ±­^>-;l_tôÆKÕ£åÆÓX¬vQ×9§SÁD*[)’¨¿5QíâÓéГ`¢}LìÑ‘Ra«}*Ö{9L"emmeö´´»`bÁÄ0‘¿ËRRR2&ªž\ûñ\1ˆ‰›³`-Wůܚ  X#Òk˜ÀÒÒ™»J«]þ‰n…‰ê´” 4j‹SŽ &Z³D'¢óÃĈøs`ÂQÐg=9L¸‘ž&FôÃDW/)\ ̲%¦š¨R„<ξ!Up.šØÕçÿä õIé³h¢¾µ)F°vtì)aÉå4Q<2‚·7^[‹&¦¢ ,”rç¯âÒD±+½ëÙ4Qž+yòíæ0øÕñRš€-IJ¼¦ *-Øò>MT;r+Mt‰£§²z‹&¦‰ND秉ñçЄ£ Ïzršp#=!MŒè¦‰®^ŠåZš rïhk¢Oiœ &ºÔ ëgÁD_tžò•\Å£„”ÿ!¶•™®ô° &¦‚‰R¼G„}žõ»ß}¿1¿x6L”çÆ²ŸØX­ªv˜®Lèĸ%Â#–0ÂV”íx¶á¥C}Dø´á%¿u¶'ÒvtÒmù 9æwn*+%[Ö𲆗™†Þ˜%Ýá¥Ø%ˆñìá%?˾{ž|¹íg·Ãtm)#ËmôõøÂy‚¨!¦Ö}±¸u­ªKœ®µªÖ"„ÑùתFÄŸ³Vå(賞|­Êô„kU#z‡×ªºz©xñZ•€lyH9X¬ê’šH碉>õ/Γý©i¢+:&rM”êó`ÈH¡¥Œ?¥©\4±hb šˆ‰k-®Md;Né|šˆI³ Íö“T_„>&$l)*Æš-ä‰8æ‘Ã§¯7Cb™£ÿ~|ùÇÿõÓ÷ý¡Ž…ÿ?Žè‘%Ì¿­ö=¹.ò KÏøø__þçË/¥ßùÛ_¿üR’:Åøæß~ùßøË·ÿùÛ_>&Éïþ?²ÎŸ¾äþéÛÜe~ÿóßÿö—o òìõÛÜçýüsnùöß¿üœVâ‘¿ lùß_~üæŸ_¾ÿêgüû¯þÍ?ýðÏÜWÿüì]Û’E’}æ/Êúavé$®ámÆ3Œl`!@ ¶Tw—D/}³¾™þe¾e¾lÝ#K­T«2<³2³ˆ¡S¦‡®,¯ðž~ââñº×ÕâqêN¤øŒÄ¢‡«gÔø¸2äToIIõŸ ¶c#IÊ=;jcxÍu>®ÝÂPí~Cj{6tƒž­kñ>õÛ–ö H…EXýLÕʰjëDG¡¥lr’»‹°XcŒ|RdͽYs„5GXDXÜ.‘ ¥¬Ü~µÚí,dDaŠà9æYȖ饌EËŸ…~œYÈ ‚~Ò…ÏBf-]à,伃g!“ò¨ÁÛÁKá´‰†]å‘‘v–|tZ½{®úé=¨Á=:Ÿærƒ6A—L‚ jÚÍÊ)2D»¥:#ÕÚÇ&B0ÚSÓê€àÁ± ²ŽÕÔkeëãwÊ&¢JGÄ€' 6.@šÙÄÌ&Š`ZExû’0¾°œ¶ã/ZkTÚ*ÉkS :LyRÉTÀ@BÛø,x‡•€”ä¬ß5ï"¥H°12¸ çÝêV‹þ;ð®íÁÅ»Zô“.žwe,]$ïÚK‡—ŠÖN»ûÃÅÊ:h]›èÕ‡ÒØDô¸ázš?8›ècp»dÁòÔÞhY °if3›(‹Mr†Žw& l‚äœ㳉”Jç1¤þˆ1LœEÍ@P-YÔ€{0¯âø¼JrÖÙ² Vƒ!Ep¼ÛffR˜˜±hùlbøqØDA?éÂÙDÖÒ²‰!x³‰¤-(ÁÙ×rjÚU]ùh øvoJ‰§k¨¨Êb ½›.Zx=j÷Àö’§Zgo»–k$ó›œMïW:an5! 
g61³‰’Ø0KМïÜfÙD’SÆf\.º`<Š1: ~Úë"©‹’“Ù<¾îÁȤókIŽœÂNÙD`7Ä÷@'#¹ymB3-ŸM ?›È è']8›ÈZº@61ï`6‘”Š_!tðRQO»'ÌCE‘tËÚD/¨F¶6‘PYã¢SÐ?´ë"këÐÿÐÁ:v—l"iåƒ22oæ4j3›(ŠMP»¤0W)ôù´IÎ5v²ŽÄ&¸\¯#§€’úOpÍ«I&`®räYMËåó±>[.îiLrv›“™•j‹Áij[3›ÂÄŒEËgCÀÃ&2úIÎ&²–.M Á;˜M$åÎk¯‚쥜‰×&¯Ñèvo¯€w] ºX›H¨ìð¾HÖh´æ536ÓEÏlbf¿?› v4X0.ÃK’ó0ú¹ .×j¥E¯Í9™ãÄ7¼80¾em“‚0Ï&’œ»Ýé””¢åŒÊ28TóÚ„&f,Z>›~6‘AÐOºp6‘µtlbÞÁl‚•óÓ`;¸PŒŸW×±rÖ´¬MÔP©Âöu• »}>¡Ò6!^®Ñãc©ÖÆ ¦Þj+6.¢˜œM$C´VFæÜ¼61³‰¢ØµË5{ž|ÖL– ØØa9› r9‹ñFô{QyÐS²‰Xy«œÛœåC«J#z³ãK-ç\k&Í‹§Ë QÖ°Û”÷&ŸÄn Jë6e`IÙã«Õ/'·×é˵ã9Xü[ð =yš¤Ÿ¤Ç/?_ópÁ˜šþ_êøë€4_\¥îùáⳋógiüpñÍÅÍòô€lr×…¨íѯ˜NÐ{àØô€=õ‚ó“›’]7Ìu7ùëòú§ƒ=xúí¯~ùéáÑWîcjÄoL˳cpôô ò ¯.žS+º~»ƒúuüäö9½Xê––º)õKC-uUG-K[Í=÷­®ú i DP'ú®3à¸ãq~Ô¯ú€ßÿ÷Ÿ’›]]ì­Ÿñ×ÉÓ~Y'8½û"÷&ÒøøûµÑ¾'>ÉF£Ó[”-­…t¦æò=‡3÷}ßæ¶\0ÿaÑ»¯45åuãòÍÕòüúäíqŒÿzùëòôô@ývdô3OÞÂj¿:<¢_­~»9`—gŠÎ£âfrtðþÝõ.ìƒ÷ÕoVíVêƒWãÓåeW÷duÝhJw_ܵ¤ô÷íÔ¹ïþþåŸÏ™ 7¿¡ÝþË/÷þt{rÊ-“‚óë‹ÓÿùÙêòôâÅqôðÙÉs~–^××ÄÚ©{¼HΓá?×Tú“ÇøÓWë@ù‹“g«£G§Ä´Ïé·WüÝÙ’|äÍåéò()z3œ\/Ùñ]ï‘]þÆãßvUxòèÉùòòú§‹›ôñôâö˜jpsuqzººjÀ¨¿!·@U$:°ú¨µ<ÿéfƒ)þN\õ›[ß%ÃðÈNóŸ‡Ë«ÕÙꦆ%ìw¥ËÓ“£“›Óo€`µÚ½½:d‚j2c´?,–——§/u§_Pñün\\8žÓáQtÕéxhys³:»¼Y¨ð õð ´ñ<Ãs«oÞÍòúçÿ}~µ¼ü)1˜¨X|}{Îïf‘­g t›NêØFYgÔª‹NÝE'_„æUééŒNÓÐÙbÛè´Q6¤5ÉÅŽ]tZ±žÖ¡2Ú…(è´.’çï¢ÓÉ:=‚±Þ[I'ùùЩ ù.:‰xÓdÍyêŒNu‚ ¼ $$P;£3ˆmˆz¯ :IÜN“,%¥6(åƒÁYߨÁ5/lžÎY´øÅ‚AàGY,È!è']öbAÞÒå- Â;t± V@Œ²— 6L¼õH«Z’Û®¡b´Jöö¶™Š¿„Å‚„Êií•ò"z§Þ½cã½X°¶X£@¶Ž6¸³Å‚Z£ƒàc‡÷æôÀ”­É@ójÁ¼Z0Új7ÌàT …Ùœ­IÎb{ïQ*×å©®R ‚žvïQÔà”o`¢GçW HÎ6nÓËP˜(Ò&*‹ü^Ô ë„ntºâ”½#IëG¹)%* |ƒú()%¹°c®ÆJ½ÑZ8mSËi3ƒðŒEËçjCÀÃÕ2úIÎÕ²–.« Á;˜«ÕÊ#¤NöRfâc"]…¶å:äžPËâj •&äÏã¬åÞÍ<üÇæj©Ö ŒUÞ­Ó»ÛØUk àuþâu ‚¹ÚÌÕŠâjš¹‡Èùûk¹Ø¸o$®Æå£dÇÇé£&æjVÜ|½ˆ**ba€ÐÈ«RVµ–›¨0ò. 
J†ät[š5"oBOf ’O#¹è»)•JM¥€¨Ÿq1¿ œäÈ"”:Q©ö‘¸7 I®yÁoN©•( ´–jJr:-•jS©kÌç”Q)'Š0ÆåóÓ%9»Í!ÝÂ|¨È¬2-Ÿ€?Ï è']8ÏZº@>ï`ÞÇKÅÕ“p( ³º…€÷ƒªËÊÓн~`7Òô´Ž‡Ýð^Èló¼ÅLÀg^§†-ÅÕ¤2KÀ“œ5£/–r¹Þj4Nb$g NIÀ±2Ê[×ÂÀmêê ¢@“܆t×“Ò V´EoepA+œé„'f,Z>~:‘AÐOºp:‘µttbÞÁt")7ÑÓÐ%{)ã§]Ï‹ÑW¨ÚÖóËË9 Cµï.=þ¾t"¡rœ&eôÍ«A'R­½5^ØöTËéÝ]p™4FÁæ5Ôrôë™NÌt¢(:A 3‚ÁM»ûžÞkÀ”}=Ë CbƒÓSÒ §*…*ØÍ7\jÇ]Ø‘±B~›5Ë´;½“¦uDaÇC’ƒ8¯NˆqbÆ¢åÓ‰!àÇ¡ý¤ §YKH'†àL'’r ¡ì¥âÄwÒ Ð­b 衬.*Ê¢f¹k9õ°î¤©k­1Â÷Z®=§Ãøt"i¤b}dÖè™NÌt¢(:᪨}¤09K'’œo\å5àr1R…SšIΫ)éDP•wÚ†–Äo>uõ(å@­å¬ÇÒ‰¤4ãRe$¹sf1NÌX´|:1ü8t"ƒ Ÿtát"kééļƒé)÷J+g…DT $Úié„GäÜ-t¢ÔPVé>èyä}`›jë8åL”­£Íî.¥©5:ŽÈ: ³qÎ 1Ó‰²è7L´SOrº¢‘ø´ÃÊ*¥³WáÔrF¼:• 1¢Øk#è W€¸Ã *£¢j¹jDµ2Q˜fb¹€!tz=(¾ï”B!åe’ÓØé<ŒQc*µ®[þI)é%TÖs ù†ähôè¤ÔˆJÇH¿±’R>æâ:½ScE¥Á×—?IJi ÚõÚZgcTEpœ|&ÃËÉX´|2<ü8d8ƒ Ÿtád8kéÉð¼ƒÉpRî£Ö²—rÆOK†µ®À¸2\CˆN….P!”E†*‚öDÖrÆ=,2œj œ0_Ëí2õk4Zóé\™Q¥™ Ïd¸2 Ì7U€ >½×€£A=úÉ.—ú¬'R%u âÁLÉKmÅü\·œü ‰8yB²¼$ç:r)9D BÉð^’ÄN¹•’³‚@!‡TS’ó;ÎYÈJ©hV8tj&NRDœ±hùÄiøqˆSA?é‰SÖÒ§!x§Zy4NÈ5[Ë–8Y¨PC qª! E×®ÔhÊ"N •¦!Ò€Œ^›vÆ)ÕÚ(ž'ï`T»#NI£ œa^FfãLœfâTq¢†y¾iC2³§÷0}?g!—kƒ‹^‰ˆªjã”ÄIUÎEÔ-ùåcEC05Ý`ót‚å8Ë`'#åš‹ì7y|a½3ÉywÊaX)5œˆBÇaλ.§‹–Ïa†€‡Ãdô“.œÃd-] ‡‚w0‡é㥜ŠqZº¢ñ®…Ãô‚º)qÚïÊa*Ã'‡Lô^?,S[Ç¢”\­–kœþ˜œÃ$Ä­”ê€ÌÁÌafS‡‰¼¨b‚ÏÓäTcSàH†Ëå„rÖ‰Az TÛi<Ý[9 ïõW&x )¢AJ`}´ lã¨åt|h ÕÚ€ Â]µÜ.%^ƒ.(Or®Áñçf`J`°R&šè!ä'É’é{€¡r­Ò¼?'¡±~Äiw½‡J9à–+úÎ7 åë°Ó¡#]$O…å<€ÖKJÁ‚ê¶ë]: UPœ —­°\ÄŽÓ(Ö”tÊÑ'5’t;d¥Þðí^çõœgIžçÉX´üéÀ!àÇ™Ì è']øt`ÖÒNÁ;x:0)·HC¢’½” Óni FR…¶K oh,:ÑË:ÞÀîèDÒˆuþØa-üö3Ó‰²è5L¤‡}öXU-×¼kt$:Áåjª²7’œ‹nJ:a*ÅK?f3°•#@A›ü°ZN5ójSÝSG·qí}â§çÏNž¹¼\¼ÁvVÇvûGé»ý×®û£ŸoWûW‡Ë£ýË«‹ß^¼ŽŽOž=;XüëŸÿú'÷–»_öϿ«/W7Ë#^èYÐ>•óÞßNΉ*Ôÿh\éW¿SÂ]¥ú•À|òøÑë¹%˜·JøE÷-#ÙöÕ6?úêðÿht¾{%o>nóvß«ªjñÑG ³89¦!UÝO¯·)ìï˳Õõå’ìÅ;ÿöÄÆ¼• ß{²:}öÅÉùÏTö³©m|ûè³M…Õ%.ÃêøðÐíëÃ¥Ùwxû—aÿ8:T€+|ÕVí)«u«z|½º¾¸¥èé®™6K´žbÅ­€f‹Ýî~¾:ç&p¿´ôOmSõO™c¯GmЧÎ.å²£Ç/Óˆ¾¸Þã¡|_Á>‡ÚX¤ÒT(•ùí7Ÿî½ÚÆD]ÔoÕ÷ßûlõ&¹_2ÅHƒÊüüŠzîcŠ.ŽŸ¬¨o_l]æ¨nå«_©y|½z¶¢(œBȃÅË—oyßÚTw†Ã)öÈ‹zÀØ[OØ|wÇÃØO±×¾{:ßÞ2Ø£CŠœö:¶û|Bw1î©%¬VÇÎGãI˜êöj»·GÑÈòôä¤ö`ÛW–š÷—ËsŠÿ’,z°øŸ¨Q½õìÏç7oQò{[ýh 
è*Õjï^µµ_—ýÕåß°Wþ[–4pÈ^—Rûú_òë8Ê3á@aÃ{l§¡~‡kÎúÃwzPYßiF|ýã˽gdÀåÞùúûŒÂ†õgúD¤ëâ&™úšT¯¥NÎ镯ˆ¹ýHm+¯>üãH±ÆÜ§l‹ô×úý<;9]U/–g§Üö^½úqwm«þž#Þr´Üj0ytvv{ÃS¥[üŸ-™CÔÿ^î½ûhD¼½ùéâêäu›ÿþ|±¸ZGŸÜÜ\ÞÞÐpÌ‹ååÉ]ƒþE×ÏXx‡ËmÐý‰Ë«5Æí*øjãO@­‚qJàê@á¼étÁŠaˤ­‚‚J sI Óœ6ÈJA;>Ÿ´¾–Ó¸Ó›)“R‹lÐÐ8†9/r¶¬^e,Zþ"çðã,rfô“.|‘3ké9‡à¼ÈÙËKÓ&'v¦Ò±m‘³TWVîÈ~èãCÛ3ÙË:͘eòEÎ>ÈŒ1z^äœ9‹Z䤆‰àÌpKrƾg’ËåÓmZؿ䜚ôfJ¨Àk®m‘“ÏšEEh¤$Ýn¦´Qà0®² -’ËÏû—$gb·t$(* žü¨2®'9ïù<3J¸9ö¢r^RJív{À-)Eçµ°³7ÉÅ8_‡)†á‹–ÏÖ†€‡­eô“.œ­e-] [‚w0[«•‹B¶…ZÎOÌÖVh} [cVÑ ;@º,¶–ÐÓcÌ粯崇‡ÅÖjëxŠä‘Ü»C¶–4Rïð]Þ[´nfk3[+Š­¹*2·ˆFØ’šäÌèWÉp¹ZG'¢Ä ¢V&N™<Ò¢<ÊZÜÌÖ|J•hÒ‘$9§;­89-'Ï~ÃjÔùû¨k9£vzf­”où(ƒkfÎ9LKpš±hùføq8LA?éÂ9LÖÒr˜!xs˜Z¹G JöRÞL{¬Î²ÓBaú!õ®, “Pc´vÐã;U×Ë:jw†5:zª±CˆÔ03…™)L †&ÅÛäüó—•%9Âè§ê¸Ü€1h9 ÆqÒËÊ\ÅÙvâÿ³w.=´ã¶ÿ*³OaH|Il— tÝÅ]]Ló@‚¤M0“4_¿”tÚúÎ=íúaŽ€™ÅÜË1ÿæ±)ý,‰Ü(Ò!K©$9@€©v´¯=yUá¥ä6MÂT;yaŠS6†¤¾8ÍsÓœ;7íDt|„9#þ„é(8f=8Ât#= œÑ{aªsaÈÍRlYèV„IYA6æ˜Tű¦©O9ñõða=¼Ú]£A€S¦¸ÙÁƒ…«ÇdïfÜ¡LÖ D“a&à À0RØ$¤.Ã;H«Ýº1L¹.iÄ$â½@JIâ½UáíÍ [ £ÁBe€ÂŽR³[ŸÁé1ŒW>-PÊâfp §T;عöCÎþµ´ QSì·nvœwU…'vZ†TpÚT»㣴Vœ–î8Á'““Ö¼ix'¢ãÓÚñ×ÐZGÁ1ëÁi­éiíŒÞÓ´Vœ'(E¶ØÍRv…{7ÍÅ˦²ü~0mRm4uv!W»8Z¯ªŠX½ÍWÕ?mÓ\½kfÎyÇoKúàŠSñ˜Q9¹Ê2…YÇqÒÚX´–{J#”*¼]Z+v ruÇåzݤÁ^o/ñ™Ý›F»Ò‡År 2¾`ò‚F&–hûÍÕ›]H»:.“8 “bû¡‚ÕŽÒ®sU”\§%Ýæ ŽÓbŸ=mTª#ùâ4ÍÓFÑñÁéŒøkÀ©£à˜õààÔô€àtFïip*Îs$ œÝ,eëÍí´¢.*[Í›„lù^wHMƒÀ¯ª@€É«2Pø,pªwXŽEøÑÁÈÏSõXŠîÇè+cžË\œÆ§\–¹²sî‚SµK«ž³S¹.Çd™½Hù«²ò×/sÉRú;ÓF|]bH'tÊÀT;^U±è1ŒWB‰1ˆç m± š÷-s©ë4‰Fe8N-©åo{ÉÜ NUœÚå,ü5;žÃÜq'¢ãƒÓñ×€SGÁ1ëÁÁ©éÁéŒÞÓàdÎS@B7K•mòt÷Š“BJ[Ú) Ä ÑWú®“ïß”›ª*)Ÿ}ÅW/ðaGœê]çRüèdÂ縩xŒ6µ‰J®²È³JÃä¦Á¸I—˜¤l½írSµÃÕÁ‹¸©\×’(x4Rìnå&´ù0áæSj%@ ŽR³ØuĉCa0,–í¡Tí솧ÙEä]NcÁÉ.V™(!8N͎ã NÕiÓÔßšÑÄMnò'ĽˆÏM§Ä_ÂM=Ç¬Çæ¦~¤Çã¦SzÏrSsQ‘’›¥â:oß²àÄq!Ñ÷ N¥VÞ®©B )D_ý§-8‹Âs;õªG´¹àeëšœ&8 Nö`–]e"ñÛWëË×°Ä«œzþÿù§þ…ßm°¾œx!âø~+8F{…«‚1^Wiµ‹ãDu*1çþ†ÇfÇ8÷¯¹óÄNDÇlj3â¯Á‰Ž‚cÖƒãD7Ò⼧qâP–’÷VË^$nTË~I `s•Ra0œ¨ªr)Çv¨ÏôY8q(:ós8Q<"ØqbâÄ8 &”}³ßV«þòõ\Ök.¯–]¯k³t†þéfï]‡¡…T4¿ß¿†PÚ1%”þ÷ªjGiWñvJÍÙÅÊç P§˜x³Ãg«e7§‰ÈÙ¼Ûì$Í%wrÚ‰èø sFü5 ÓQpÌzp†éFz@†9£÷4ÃTç™cîo•y‰ät+ÃȈ sHjŽ<Ã4UIr+úÿÜåg•škw­ÌNã©—]xî NõHÀ™P\eÓ<ƒ3f,†Ê&Z6 w¦ÚÁª.ÈE uIcDoœË ¼“apÉHŒï‹ØßbÌbÿU/v¬û 
°±SjîS‰ûjt3¹N“=‚ÁrZvœ¦wõgeö&B*Í‚=§–qó³´VÚ°«¬¾¸´ª-4imcÞ‰èø´vFü5´ÖQpÌzpZëFz@Z;£÷4­ç,ýô;ˆ¿ìâÍ…Á1­Å Z«" ö‹~½ì¢ŒEkU•ÝœSRèeø³h­Þ5F¥aGtôÁ§êQÙùèZí8Ï l“ÖÆ¢5#ˆ–¾ÝSðåëØìb¾º0x½.dƒEõÈÀì„øÞþ¬ ¥°ó­©"°BöÀI æÑ˜ê†>m€9f~r€Ù¯ CÂ9ÀÌf¨†–€1uk™6»„—,×%ŽÉ95ÞìˆîÜ!M°)¿`¨tÅ‹bÓûè(•r¼}WYQç#ÙÄT"EDÏ©M`UžÝ–]ÅaÊŒÙ' ³¸ûõ£Ññ?’ÍG²Ž‚cÖƒ$ëFzÀdgôžþHÖœk–=Y ó½Û²‰dI°µ¥¡J ²4¶C*…±Êã4U¥¸Cý§òlÑ üèðª³Òí S=j§œ|³K³¬èd˜á&G©O=†1»u⹌ar̆'@ì¼@¥oCJw2L\XTÃð½Âe)(ô›q7;ágq¢8Í$)»â2§¹æîÎ;'Έ¿': ŽYŽÝHˆgôžÆ‰êÜž ;²”Þ]l³TÓÔíl¯±l¢õ³½†0V3î¦ !Dð]šâ~NÔ»&é÷pxÙÁƒK"æ1Û–"E_™ê\™81N𒃦˜ÞSþòõlvœ.Ç »nŒlÃ[&ç2;Ô;›q—ó™…&âûFÊR‡ 0Þ—ƒb—`_—NÎ’ˆ]Œ3 g·Qµ£}͸9;Õ6¥ì|18%„«]ÊûZ#°ºwj²ôâfïNKŽÏú(­IIó€()xâ,pi.þ¸ÓðNDǧµ3⯡µŽ‚cÖƒÓZ7ÒÒÚ½§i­:§Àèì®vˆ÷6ãΖ$´±øÓ¤FMºC*‹Öª*.mÂw ôí*ÛÏ›Öê]Û”%øÑa}®7Bõ"¨«Ìf¥6>¦CÙ%®¹§Ø•¥ê= #^o„´]ÛF`§dQ±+M÷v9®SæÄYS¿…çË.ìs "¦šI#‰óµµØ}˜ÖŠS@@q–(«Ð\[s§áˆŽOkgÄ_CkǬ§µn¤¤µ3zOÓZuN)§¨~–¢›;€Ç…ˆÆ­ÁôˆT¬úPUÅ”G_=»ˆùó¦µz×% îˆÎª¾ìí´V=ÚÜÁ)_™Â¤µIkcÑZj´ròÒ¦ÙÁê#WofŽ.˜~Ænû¼f'ùò½rÝij{Ó)Ü[¶UDCÜ8D›-o šØ~ðfñY†©Ns,öÅÙS=Æ›œv":>Ü Ãt³œaº‘aÎè=Í0‡²”À½T9„NˆcRy°.ÇÔ뇕L8ôdÉ„â±VPq•Q̳÷d˜±&/‚j'ªÝº·ÁE8Q® “8Í•«â­Tó"I¢ntÐYÙxCûJ›]صkN¼b¦Ú2Ÿ¥—à8-ã‚ð£ SœiiÏãŠ#Z­ÁM†Ù˜œv":>Ü Ãt³œaº‘aÎè=Í0Õ¹–î`àg©Dró:LȪ!n 1)s,µïÄ—ªy°º¢U=¤³2Íô³¦Þ5æ N…ðf·ÚZs;ÃTöPªìøÝÖ‡'ÃL†atÉÔrü·m¾|ý×Ý¡z5Ôë’ݯ ÷j—%™pg]QÐE@DÞ3 …²§N’’t?¤4;Ö}+VNo»XRKÄýZsÕÎFÇð$Ã4qöOþ4»¨³“79íEtx†9%þ†é)8f=6Ãô#=ÜÒ{–a^΃ìÈRâ½eßû÷ý:ÌK–.u;¤ÂX'š*BL;Ô§øQ óгMcüèЃݸ›ÇÄÌ‚¾2YUÞ 3f†±³œf }†©vÁf3L½®Ý§*y#L.gøï,û†q1)Ió{†±¿år®Cµ»­«Ù¥¸aœÒÕ/§¨_ºúe‡ûá$ש„ÄÙ.‡ŽÓb·êÑsê‡(FEè÷r}ÙÁ¾ðª{§F›AÅÙ€^í²îãÒ.tj`ºÏiô*@ÍÞojv–{…áêT˜¢îÇ«†7(§ÑñaøŒøk`¸£à˜õà0Üô€0|Fïi>”¥äæFÄi‘­èMB å$í©˜Æ‚á¦^ÊÆ_}ú°Fí®³Dõ£“W…šo‡áâ!Н¬´c›08 8u³œº‘œÎè= NÕy9ë|{ªv)Þ»Šhü¶¨È8“Êa,pªªL?ñŽá ¿é¡ò³§FÆÑQzn'dõH`ÀWFqu¶~‚Ó§À –í)'µº Sí@®>ÍU® A;E¾›Ç;WAJyÚ¨ïë.!äÒÇÑYÐkvûNs%r-o¹…LÉqjvHñQ†)NÔ˜[]q2wBv":>Ü Ãt³œaº‘aÎè=Í0Õ9³Æì§PAÅÛOsAÎqkˆ1 "$ΖšjGy0†)ªŒIuU}âÏ:ÍõŠNýëF'ÅðàâOõ˜ì±‹à+ãÕÂÉ0“aF`\ÌU2‚nEŠfwýi®z]$`p'éf÷f„¹x&€†‰@Loªµ©bòjIä§‘úÍþø×òüðÇ?,ßÿéweì·ôôûücy@þ+þû¯ÿü=~÷Ofô/¿üí¯ÿãû’¤~õë?ý`3÷ׄÒ^ Ð_ü…üÿFÒöÇ¿ø‡ïÊüÞñ±ºüÆ¢äDNºmv«jy=Zc‡Ö¨Žß©Ð¬ã´LIàÑÚÕi.û-›kš´æMÃ;ŸÖΈ¿†Ö: ŽYNkÝHHkgôž¦µâ\UÊî07K•­Ø·Ò,¬•¶ÓR×½ƒ† 
µCê?lÅéPtpUævZ;¤,­6ŽLZ›´6­‘QÑR̃˜§¼úÜp­•ë&ŽPÈìHâ½õ-²a‹Öx‰eOÄ~ j—ójm¬Ç0â0 /je’§²î<ã”.tªF·»œfשeHfÐà…×숟¥µÒ];ؽ¦¬Ž¸b“ÖÜix'¢ãÓÚñ×ÐZGÁ1ëÁi­éiíŒÞÓ´Vpô³„{;V%ÖÓV•‘&!%t³}±“±ª½7U6ñ>17;ú¬jïí®‰É;—ÝìV}[n§µê1%"Ù¡,­*\NZ›´6­q;cÄoØ~ùÉ Êáòýv] ™%ø#Liۛ蘆°p „é=­IÙ÷gxâT nvq»¿p²!ñ¿ÿÏ_ÚÏÿ«ÿ=ÞôÝ_k£¸õwõÕÿåëòW[2ì §U“›õˆŒxTf§ãî˶ؕÁFÝ/Õûöè´ŸqÓ]B²Ãl0³w«€Òùß¿ûãoê½ýýwÿúooZ–Tz:Ín½šö<§9$%W\©=áÑ£‚NDLJÇ3â¯ÇŽ‚cÖƒÃc7ÒÂã½§á±:·5ûY nÞ˜)¹äó­±Ý$`Š!_*¾iü7…ÇªŠµŒW¾zù,x,w!ftªÃÕè¬æÜU™ý¤þ)œlwüçþ2Ùq²ã…ì(e¥*öwV»È—ïË,×UR!pó^9{{k…Ê´Ž!ñûñ%ÕÌ«)9åKš]x¶à}qZö·fg²Ù­OšØ˜&v":>Mœ Mt³œ&º‘&Îè=MÕ¹D@g²Tíø]µŸKiÊ×5ÚÎö¥EŒâK‚±h¢ªÒ`¿BôÕ¯«Ø}M´è ýü;¢£ð`©Šâ‘)«?Ç È4ibÒÄH4‘Êé-Îz÷ÕŽ:g—þŸ4Q®‹€e?¸÷þØ,˜åÞž]Œh£RE.o0& ÐÓ«ȳ4QJvÖê›]˜kî4±ÑñiâŒøkh¢£à˜õà4Ñô€4qFïiš¨ÎSÄœw¤PÑ›+†››íŠáMji"²CjŠy,š¨ªlJûêó§C*wÍ6Œ§ýè(ás4Q•a‚Dþ0ΠsmbÒÄP4aÏ%ûº4Qí2ÑÕ4Q®«viää½?¤oö]H夓Xè7iB(%‰žÒrNu´ñÅTåZ¸ÆWocè§/ªÊ3ùÑÑž_Lg–¤®2\7›ãË__t 1sÎ*ý}ÓÕŽ(]=¾Øuí“hî¯1T»˜ï¬IiÉ–~·zCh™!jæà,ÜT»,øèתâT  GtÅI”Ù¦Áý щèø_«Îˆ¿ækUGÁ1ëÁ¿Vu#=àת3zO­:”¥€î-qŠJK¢­c˜Ç¤Žv ³ª*«;É«?­¿]½k¢Hη¼õ óÝ4Q=JB‘¿3Nš˜41M”–šeUš(;nùšÈ˜sÙõ“¼÷s áδ°eàoi‚Cyƒõ×¾›«>IÕ©µ$ý¦o/»&M8ÓÄ^D‡§‰Sâ/¡‰ž‚cÖcÓD?ÒãÑÄ)½gi¢9'ËŽýiÍéî¢.–±xƒ&^RqW¶·[‹&š*¶‰e¿"óKýj¾ü 4ñŠŽ†Ø§‰—=GÕ£ÍÔY•oÊT¤‰IÑDy.(¤îÚD³ƒÕªß54Q¯+ee¤ß‘§Ù­KØÞ²“-ô´Aµ]xPPì®V»¤ôh•&K%`pÅeȳD¤;MìDt|š8#þšè(8f=8Mt#= MœÑ{š&šóœ5ìÈR(÷¶&Å„lÐÄ1©i¬NMUÙ§…ì«'ø¬µ‰v׌e>¾#:úÜNÚæÑ¤±ìP&yî¤41MÄòÍß =¥néj—®®ç_¯«¤“ûþ˜]¼µž\J¡%Þ  °7X˶ß~©f#&MlL;Ÿ&Έ¿†&: ŽYNÝHHgôž¦‰æ<' èg)½¹ÊGDH²íHMcËkªl´LýSÍñÃh¢E§œ~Ë~tèÁ*Í£r„~]âf—Vt&MLš€& tý’lÞº;šËÕõæëu“Prú.5;Ê|﹉REJÞŸ›`´78F¤þøRíBx–&ªSK/Ø/hØìç¹ wšØ‰èø4qFü54ÑQpÌzpšèFz@š8£÷4MÊR‚÷Ö ¶¤•tcmâ˜ÔÆ¢‰ª*C‚°c¬JßPùyÓD¹kˆ!§~EÅ}°yS& Üo;ÖìhVù˜41MØs)öð*|{ÚìËOžßr.îê äõº¨œ¼êÕ.ç;ÏM±@È&ä=MPyƒí· Ðÿ¢A5åø(MTq™ÑûÜÒìVKL“&6¦‰ˆŽOgÄ_CǬ§‰n¤¤‰3zOÓDqŽ˜Äi†ÛìÂÝÈÉ2¡lg{$òúíò^·4Xò¦J3E_½ÄÏêgÔî:ÙuiÇc˜Âƒç&ŠGŠ!Q¿‚r³ ¢“&&MŒDö\R»â·§¾üäù-+§|5M”ë‚Ý*¨›µ©PûÝç&r ø~|áú¦s~„fðÙsÕ)—9‡« 2“&¼ib'¢ãÓÄñ×ÐDGÁ1ëÁi¢éiâŒÞÓ4q(KãÍç&t‰ºun¢J°‘8óŽlOi0š¨ª$ ùê™?lm¢EGÀþO?:òäN§â‘ƒÍ:q‡2]5™41ibš°ç’PFívGmvL—¯M”ëÚ =§~#žjWš¨Þyn",”sN_«¤¼él? 
ö?ÿW»7u2n¥‰ê4gá~¡Ýf—xÖtr§‰ˆŽOgÄ_CǬ§‰n¤¤‰3zOÓDu®1fL~–Êšî¥ N‹ Ÿ4Ѥ‚ 9;¤jLcÑDQ%Áþ¡°C=V?£WtQ£ ðàÚDõˆše2 ³¦Ó¤‰¡hBÊ,½|\ùoöÎ4×q\£; Ä™\Iï'Oí‡tW"Åí*jÔ¿&ÂϼÖp,‘ä~vµ#9ýl¢ü®8’¦á¬Í¢nמM˜JÂ74ay—Ì-Ã~†T±Óp¹•&ª8㡸,dßtn;]Ÿ&fÄŸCǬ§‰n¤¤‰½Ó4qh–"¼6o‚¤Ôt‚74Q%°RžÉ?*‹ÑÄ1õñe4q(:ÌzMTùq“} ,?À¦‰M+ÑD~/Ù\еŸ7Qíì)·ø$šÈ¿›7ðÀÔïþØìòº¶ß„’ókšðü]¥ßA³ÙÑÍý&ªÓRT‘h,îù*覉7ÛÄND×§‰ñçÐDGÁ1ëÅi¢éibFï4M4çF¬L¡Î×ÖtbŇػ³‰&!¯®ƒ;'?´XvQ@:*^ÕGŠï¢‰cÑyÎN¸š&ª²R±p"VíðùÖ¦‰Mž&¼ž H…þM§jgvzM§ò»†–çí4?ÙN®ÌÂF‰€¼¦‰x8&€Rs¡«´Ú¥W)„ÒDuÊÁ£¤Áf»Bìp›Ø‰èú41#þšè(8f½8Mt#½ MÌ覉ê\@ã“YJÒÅý&¢LZü†&š ôAú±[,o¢ªRƬªMý‹¬¿š&Zt Ò ò˯(ÞGÅcÆ‚ôÁß-€7MlšX‰&ò{ÉZ¾UI¿Blµ“8½ßDù]ËãB‡ßÑÎPèÚ ±„ñ:o³²:‚0t>ÿÿ²s¼ñ¦ÓÓüG PŠCöØ4ÑÝ&ö#º8MLŠ?&ú ŽY¯L£H¯F“zçhâè,¥$×trS~?Û.u©›NGÕ[J_D‡£#wu¯;¨ŒRÚ4±ibšhï%YõNM§_vxrÞÄÏﺚH Ç»¼êtMÈ# <ãëõ%¯>ÈD™i¼«´Ø&¸•&ª8UÅ¡8&£M£mb'¢ëÓÄŒøsh¢£à˜õâ4Ñô‚41£wš&ªs1#‘ñ,%pu÷:"ÉËGg¶ÿXꫯ\’&ª*ÃĽ.N¿ìù»h¢>u¶ù`±4’ûh¢x.E{Ç7!Ø56M,Eù½”¼´$sëÒDµ“§û;'ÑDù]—RÅ€FãG w\™7AÀxKXG:ê‹ZºÿRZíÐñVš¨NK y¯ˆÇ/;y*_»iâÍ6±ÑõibFü94ÑQpÌzqšèFzAš˜Ñ;M‡f)¥k+Ä’–ªã¯.:ý_Aä­| Td-˜¨ªJ–£Òêÿ &EÇäF˜(t€°Õ.ñ.é´ab)˜À²IGO*Þ…‰jOÝIO‚‰ü»š(, >¢T;”+&ˆ’iâ5LPÁ$l½;¿¿ì0Ý Õ©A"ú@Üs׎ ov‰ˆ®3âÏ‰Ž‚cÖ‹ÃD7Ò ÂÄŒÞi˜¨Î#o—˜Æ³”ûµiyÇü@ˆ74qHj€¯EE•¥PïµTþ¿z×0àÁ@³KqMT™d¬L‘6MlšX‰&j‹kâ¼ÅL]š¨vøt=ô$šh­°Â‡ãGÉ®m7QŠÐ–[ïh""o2ùŒ”f;g^m}9 >¿m}‰ À¼ÏŠ¢óô-ï†õ%+#-Ú†Êi}ïõe©õ… =oÚSô¿V; ‰³×—ò»¡è®ý‘]íˆìÚõERJñf}á¼CÌqÑÒúõOoýZUæi8ÆâvZÞð3D'¢ë­šÎ×ªŽ‚cÖ‹­êFzÁ¯U3z§¿V5çy÷jéƒYÊ®½HKló­ª àd$Ÿý½<ìŸe‰ªJÂÌ>X©8¾Œ%EGžZž\ÎÅcFmF‚±²Œû›%6K¬Å.¤ù·Ì,‘í„ð|–ðüÀàÂ2?ÂÏ%r.HÊË,™©ð5KHéy#è_£­v)Ò­,QŠ!G‹“´“ò†›ÄND×g‰ñç°DGÁ1ëÅY¢éYbFï4KTçÆL(ÌRqm‰‘¼B“¾¡‰*!R²Á7®öH kÑDS Žcõy·ñ]4‘Ÿš$t¾†”ÞHÕc~1•|¬ŒX6MlšX‰&¤Ü-}·»4Qí,~2!å~,ˆ gmM‰âÊ“‰ô@Ë«Àk˜Ð2€3Qù ˆFµSç[a¢8p£AIµK&F»ÄNDׇ‰ñçÀDGÁ1ëÅa¢éabFï4Lš¥@®íŒÊ^:£¾KÊ«R~ Õ»æÔÔ óX=þ^˜ñúÔ$–\ÇÑy>Å¿&ªGE"ûàï&¶ën˜X &´@y`P&šŸ~V+L( ÇO¶£‹{‹Æ›£ Ë#‰9AÿëµC»·úxuªYúXœÂîŒ:Ü&v"º>M̈?‡&: ŽY/NÝH/H3z§i¢:72ÐD³ƒk+|°¤GBzCU‚ Ð ½£IU]‹&ªª0Óø`9ø²¤¼òÔ囚 H±Ú%ºñh¢zT„Q2eµû׬M›&þM̈?‡&: ŽY/NÝH/H3z§i¢:gL<¸ÕÚDúÕ½Œ¬pÕûÉ^80 Œ•²-vÑ©ª#q«ù²ìúÔÊ ƒ¬‰f‡7^t*kYdÿ`ßY&–‚‰¨Õ¿=ù 
•Q³ÃÓs°£Õ!$t·hv×MèÃò2¯ë9A*#5ºsP³#¼&šSË´ÕÿÚÒìžIgÃÄë]b/¢ËÃÄ”øS`¢§à˜õÚ0Ñôz01¥w&šs¯ÍZÆ³Ô‹Äæs&ØyÃýúhâ˜Tçµ*:5UQZ£Æê_Ô£ú›i¢>µ%@îß·û‰¢ÝGM†Ê7{îp²ibÓÄŸ§‰ò^@]éæ`7»çìÝsh¢þ.f?Ñ/EÔì äÊ£‰’çMÂðš& Œ`*ýœºšÒ­IØÍié¨1§´/: ·‰ˆ®O3âÏ¡‰Ž‚cÖ‹ÓD7Ò ÒÄŒÞiš84K_{ÑIÀò†&šÔòŒ¥:¯Õ˨ªò„ù¯ cõÁé»h¢F§$ù‹£ãÿê?z5MT’€m<@Ê3lšØ4±Mä÷Òͥѩٜ]Ñ©þ® ä:?–si¯ {Döùš&°ŽàPƒnêV³C¹—&ªS Ädcq’tÓÄh›Ø‰èú41#þšè(8f½8Mt#½ MÌ覉ê\Ëÿôf©Ðk{M?Þ±DJy‘±P…µR°›*ã$öÁJ¥bßŇ¢cOKåå,QŽŽÞy͉+$ံ›죉 KÁ—£0ïV‡mvd~6L”ßUuÔsú±KqíÑä…áu…úÕ)úe «] Ã[a¢ŠcB$ V;z*à±aâÍ.±ÑõabFü90ÑQpÌzq˜èFzA˜˜Ñ; ͹$Žf)Æ‹[M(=Äße`“Êk‡mªòjͨ÷/»çÔ¢ ãèˆÜxÏ©z ¢ :ceù߆‰ +ÁDÉ:N”´Õ…Ά )'Š:¨ÔÖìDüJ˜ˆ(½¡ }”ïy,}¥Å.ÏS÷Þs:"Ÿûžnšx³MìDt}š˜Mt³^œ&º‘^&fôNÓÄ¡YŠ.n5a”.ïî9“ª¸MRÏßE‡¢#(÷ÑDñHTÎLx¨,›íÆu›&–¢‰ü^šiWÑ¿çTíÎo5Q7ÐP’ÆO¶£+&¨øàÐ×0am+ ’&Š:Þ›€]ÅA(„ Åð†‰á.±ÑõabFü90ÑQpÌzq˜èFzA˜˜Ñ; Õ9ZÞ{Çx–B”‹&ÒÃøÝÑÄ1©«•sªª(¯Tƒ”¦þÛŠÃÖ§æ”`PI½Eñéæå0Q=*Y’4V&¼ï9m˜X &¬l -S.R&ªøéGåw•M‡{ôlòªÛèy0ADéMÛ:ð<€Y9úB‹…¦[a¢8HC$«v vöp—؉èú01#þ˜è(8f½8Lt#½ LÌ膉êó`8ùnÑq¾s}‰ÒJÕb¨Œù)t¯/{}Y`}‰GÊ“aiŒÚí‹Zí Î?ù.¿KJyÚëÏÚÕŽâÒ ð|Ów;+È;Ä(Õ;| 4ÛéÍYyÅ©&Í/ÅiڌƟ!:]ÿkÕŒøs¾Vu³^ükU7Ò ~­šÑ;ýµª:á¼ÏR@÷E5~¾;únò®n°ãüy¤ÅîÑVUy%ÓÁM¨f—¾¬“Q{j1óO¢óTiærš¨µ–+cÙGß›&£ 7ÐD¶£§šÚ§Ñ„[~ÖÑø15¹”&ÒCBñu[TLyZ Š7»·Þ£mNËÕ ~¼fG¾ïÑŽv‰½ˆ.SâO‰ž‚cÖkÃD?ÒëÁÄ”ÞY˜hÎóZãYŠùââãò×,qP©¯uòý£^’÷ÏU~ìð»N¾ž:’2|§¯‚W³DóPš¼Œ•ùîd´Yb)–(ï¥a‚`è^£­v'³Dý]*rd8²P.-(3„7,y{¹W݃‰j—‰ãÖƒ‰&Ž¡_˰Ù!îÚãÃMb'¢ë³ÄŒøsX¢£à˜õâ,Ñô‚,1£wš%ªs._„`-5;ü®rí©óBýKÆÍNŸ.]Åc@æ4V¶¯Ñn˜X &ò{éÉâ÷žÿüçýuU:;'¯þ®SÊÓÞpìWLÄÃHÈð5M`餔£ÐUZíž7ìwÐDujùÅðĉnšn;]Ÿ&fÄŸCǬ§‰n¤¤‰½Ó4Ñœ‹ñx–2¸¶“G<å M“Êk]sjªòºêý®{?v/J§ÿÕ4QŸ:À±Ÿšðc—îKÊ+%•bký¶¨ÍžÄ6MlšX€&ò{éÊhݤ‰j—ÂξæT—HÂûÀªÚ«þsg¶E VMTJÕ’ a¤W;p¸•&ªSW”~¨f§¶K| ·‰ˆ®O3âÏ¡‰Ž‚cÖ‹ÓD7Ò ÒÄŒÞiš(Î ™ÁÍî¹íã4ál´w4qH*0¬EUUf E«Gþ®¤‰úÔ”BS|Ð/:œgÔàñ[GötAmÓĦ‰h"¿—!ÆÁÒ->Þìä©ÄÆI4Q~×ÈPFˆ»•Ѧ‰¥h‚Ë™Cv š¨vìx6M”ßå *D:?NFtmA§D˜'×4!uçØc?»ÚÜ›ƒ] Œ*ª6;RÚ41Ú&v"º>M̈?‡&: ŽY/NÝH/H3z§iâÐ,Å×ÒGž±ÞÑÄ1©–Ö¢‰¦žuPçýÇ.ùwÑD{je£Vr½&ªÇ D@ce¾{mšX‹&¤4-=Ô»4Qí@O?›rÓJœ¼_ÈîÇîUQïó’°=ÓD©§ðš&4`@ŒôbÇzóÙDgâ*Cqù!lÓÄh›Ø‰èú41#þšè(8f½8Mt#½ MÌ覉êGÑMäïu’N¥ ³ô Ç74qHj^Ö¢‰ªŠJ¿ž4VO‰¿‹&ŽEGà>š¨‘郷Nžê'ošØ4±MhÍ®Ö<ý§.MT;9½™Qý]Oœˆ‡#Û-#ǵyJ¥¯Ókš°<‚•Q%õOQªƽyÕ©©ËXœÒ¦‰á6±ÑõibFü94ÑQpÌzqšèFzAš˜Ñ;M‡f)K|m6úÓ›f¥âbYØU• 
)ûêõ˲°ëSG¹…ÆÑñ§½—ÓDñhH)†Ê Ð7MlšX‰&¬Ü4¢0ç>MT;ä³[×ÕßUÈû׎—+k:a< Üê}ݺ.¯=eRNƒêSÞæ »•&ŠSœ4»71Þ&v"º>M̈?‡&: ŽY/NÝH/H3z§i¢:Wc”jvŸMăàMT ¡ˆƒoGÕÎu±¼‰¢ªùàìzµK„ßEõ©1Yøx±t¸³ÝDõ(Ì®(cÝ76M,E^¾ù'¥~» ¯•dåô³ ¯bóì58u¬vèW·®CGzCQGºf¦êsOÔõ…ï=›¨âyT¦¢ÚY‚M£mb'¢ëÓÄŒøsh¢£à˜õâ4Ñô‚41£wš&šsÖAæ_"¯Í›€G2Èótg¶wÉ+ñ'R9Ö¢‰ª*¯Ö‰ÒêãËúM´èZú :¡|M%³Ñ‡Ê|gaošXŠ&¢PB"‹ß{²ýóŸ÷·¼èélš(¿kb‰GãÇ_ÝO=3oBS)Åþr}ɬQ äÝšNÍ.™ÜIÍ)Kx‚±8²}61Ú&ö"ºÂYb'¢óÃĈøk`¢£àœõä0Ñô„01¢w&šs"RËq/u°|)Ld°­töo`â”TJy.˜hªØ(w wõY¾ &Ú[ –éÅÑá¬ÏÁDõ¨˜4V¦Ö­¼SÁ„lž² ùŸsä_¿¿ÅNír˜(Ï…ÔÊE½v±ã|çÎ7ˆeÄ:¦ ­-8å2Âô4ª¸={޶‰L ~ÍŽò¢‰pšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰S½ë½¥Q%å-Ó»ç¤æÉ6UZ¦ ±zaý.šho]†cSŽ££/P{;MT†ÉÕã¯Î|ÑÄ¢‰™h¢|—NHYRŸ&š\Ÿã£>W½tÈýšž»2Ü»5%ð®Ç4aµ i½§Ùqzvo¢9­äqF‹&Âib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&vçõ6ŽT½9ý8l$ïhâœT“¹h¢©ªÇ‡Qbõ‡rÿmš¨o!a”­«Ù¥—9Çí4Ñ·^·KÆAûi{(zçÞDÚ²ÕK½Ç4‘K ®—&¨_ׯÙ僌´·ÒDqZÏ*¨Ó^Âd¥§‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞaš8ÑKY*n.ʈð¶·?!•&£‰¦ 12ÇêÁ¾,ÿx{kÒ2F{âû9šh½H Jœ4;{ÉŸ¼hbÑÄ4‘7‡Ú¡˜ö‹U»Òü.?éTŸ›™Š{ÚO±C¹³˜ç ‹}³7á¥'î/Í¥‰ê´f©È‰Cq5SÉ¢‰hšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰æ\‰ÁSÜK êÝ4‘Ëlß÷öKU˜,ÿxSå˜sʱúÕúÖDPó…Ñ¡×Ä•·ÓDõÈuŽ‘ãß!¯üã‹&¦¢ ßJ§S(×­ »Ù%ô«i¢<—Ð8iÐ~ŠªÝH’7 2¼¯Vµ .Aà~}‹fG(ùIš8'Î^2Ë-š8ž&ö":=M ‰¿„&z ÎYÏMýHÏGCzGiâ\/å.ø_J¥Ü ²Ÿt:)Uçº7qJ='ø®[Øç¢SÃÇhâœ2[4±hb"š(ߥ'4ÑîI§ÝŽ_N]Cí¹Ân©¿´Û¡Ù½µQKdÕŽ«1´,œû7KQû©™d“Þ[o‚¸.èÓ„Ô–®^«Þw•6;ÉV¯«Nsq îM4»´ª×ÅÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0MìÎk{)¼·zûæÙÞÐÄ9©6ÙI§¦Š¸¦ŠÕ#MœŠ=¹7Ñ<:bIl·“¼2Ä.š˜Š&jiDwñþI§fv9MÔç2(F-ÛÜøNš ­àBÑrLZZ0¤z »ßÒµõ"ÒDÇFŒŠ+°°h"œ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9/=}v{©×ÉÒ-·°ëœLßÝÂÞ¥ÖÅ#‹¥êQ¤¿IM•³ªI¬ÞáËN:Õ·®S;¦FÕ£@Âèò|³K/‰ÄL,˜˜&r;@„ZF˜.L4»ü²¦}Lä– VKÛNAû© jíæk¦‰”ßÑDžz{Q¢ »;8ãlãËçê1ýÉBÿúøRÞº&@fŠ£DOŽ/Å# ~ Œ2®ñe/3/¾%ÈeèÐÔ/gÔìòåÅQës11Õ…°nûivÈw& Ǽ‘«¹/Þfˆå—I(­sÜ?“HݺZÕÄ1”ß('hë m¸ щèü«U#â¯Y­ê(8g=ùjU7Ò®Vè^­:ÕKÑQzåAZ¨ Èó›ÕªsRg+ŽÚT•~Çêí»hâ\tòƒ4Q=jùì¢BKM™ñZ­Z41M8 ©j@õúžåëi¢®‚ai9h?ÅÎ’ßL$šŽW«$µ–ŽRà«§t·ãg‹£6§–JÙ?¼‹s[ÅQ£ib/¢ÓÓÄøKh¢§àœõÜ4Ñô|41¤w”&vç(ýó»(ÝJfºÉ»rFç¤"ÉT4±«âDÖ¿W±ÛùWÑÄþÖ’ÈÙâè0?—2°y̦¬}šØíuÑÄ¢‰‰h¢|—uOÙÅÿ,'ôë÷ï·Ø%¸zo¢=WËÈ‘ûåÀv;E¹“&|#%74­o©7p»«U»²>JÕi&-Äg±8QY4M;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4qª—Ò|oÊ@fÛ ¿£‰SR-ñ\4qJ}Æü]4q.:YŸ£‰¦LÔrÿ ÖnÇiG]41MÀæ$ΞR÷ZÞng—'ùhϵBã 
QË.vâwG­û‡µ|ÁšÀÚ‚Aj†¥®ÒfW¾ðGi¢9UB…‹“l‹&¢ib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šózD#}Ð…šÂ½{D5aÞš¨Ê åMb©ž&Û›hêÁ(¸ìþó–ü]÷òö·FBî'æû±{³ßMÍ£mþ2IëÞÄ¢‰©h7‡¬©&ÑïÒD³ã—$ÑÖT€\†Žp€©vpë½<Ø +”¨Óµ¾E÷oHívô(M4§b б8¶µ7N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ+ö3¢ýˆÔ›÷&”6Jïö&ÎIÍs% ÿQïΤ±z=8§õOÓD{ëb˜âèP¯ßUoâ':.-Žñƒ·°›Ç¬ŽÁ¬f'ºö&MLEZ÷ æ²w3Äîvàr5MÔçªfËõ{Õ.¥{i‚ ²Ðš°ÖÒ3[°‹Òì,={ »:•$`”5qþ²y»hâÍ4±ÑùibDü54ÑQpÎzršèFzBšÑ;LÍ9(YpIné÷Òle$}C§¤ÍUobW…–25¨?ÈmòOÓÄLe$Œ£ƒšŸ£‰æQò'¿›ðº…½hb*š°¶ç9co¢Ù%IWÓ„µ=V rg4;r¿7C,ŠdxC¹µ`ÏÜËkvœž=éÔœºðâ^7oM¼™&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâT/•ÿœä^{Ò)•9™òš8%Õa²“NU•B„Ô?P¯_F-:¥ßJÿ¶ ž£‰æ1cÆÿnå·Í‹&MÌD¹ÍÒÓ§ÿúýû­'Yýêêuí¹YŒ0HFTíL(ßœÓI2ë›{^[0©r°Mßìž­^×œš–†…±8Ãu ;œ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9wÎ9îÎëÑ» ±U‚•X$îP-ñd4ÑTaÊ¢«/MìÑ1ò°îvú\-ìÝ£d=¨%ü§2‘U {ÑÄT4áí´”þ° »Ù_¾7QŸ›½´k‹Zv±Ë"÷ÒD>ª7AEÙæ))±]”fWZºÒƒ'~Äg÷ÞRÚÿì^Ša,š8š&ö#:9M Š¿€&ú ÎYÏLQ¤g£‰A½c4ñŸs㺜÷Rê7gˆÝ’!®ýH@ì¦ÜþOj¯'¢‰“êȾˆ&ÎFÓSbÿ§Ì³dŠ•½æë_4±hâoÓÄþ]²±eëÝ›øÏŽøÚ½‰Ÿç:i­_µί–o¨^g[‚„ Çã ì=o?߸أ4ÑÄ1—ÿÈ¡8$[·°Ãib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šs!¦ ³ßEj¾÷¤ÒFYŽö&ÎJà¹h¢©R¨—ˆ?P¯ü]4ÑÞÚ@RoàQT}Ž&ªGJ^@ÇBe”Ò¢‰ESÑD­1í*”)wi¢ÙÉÅ9öçr­c9œs:\ºno"oLE‡ÓÖÌ¥=t³CÅGi¢9-"8g¸2ĆÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4ç®)KŽ{©œï=é¤$›ˆ¿¡‰SReý]š¨ª8%Ïâ¨Ïø]4Ñ¢IR°7±Gñõ®óÝ4Ñ<§øwcTY4±hb&š(ßeñ$Ч‰fzùÞD}.e­ÃOÔ~ïÌéÄyÃ$dzLTZ°äº Ñ_š¨—KçÕͯðŸ¨Ü[½NRA&;¾8•ŸÅ8ìƒ3òtãËçê‹å×/'¢ãÏŽ/+#D_ãË_f_x«g–ˆèÏբ߯—fÇÆW/õ¹j`9X‡nvšn­gD[ ¹!/Üæ®JÉ×ÀsØ~Ø3ßz-Ï6B¡cšÈ­oŽŠ 4;Éü(MäÖ Õ²TŠ£ô2ø-šx3MìDt~š Mtœ³žœ&º‘ž&FôÓDsN,øïv–îÝ›pÛ ÛšhD‚“£»â\4ÑT‘P²¢¢š:Ý{m‚Õ ý{çÖkI­Üñ¯2O<иnv9Ò(Š’—ó”(R„é(Úà°#.#†ƒ”oŸ²½!ÍìÕöjÜÝ8,Ï‘ÎPtý»Vûòó¥JoÓDL-X‚G¨¯÷;ç/¥‰ä” …,BMqGœ4Ñš&V":>Môˆ?†&* öYNÕHH=z»i";gvÒÈ‹]ì\<•&"†ÅÉÖ%ì,A¡‘ʶH ~,šÈª|JRÚê=„õï>Ôi"ۭχDnWSº‡ÝìµÐã™÷&d4Ü.g˜Zp ëû£Ån}wë šØ%.¬ªvÜ¢‰ôsþû—Ö$¿üé¶Æ§¿>=¯XâÛï(³7ô-…Å>&‘Þ¿ùúÉú×ožÞ½O¯üüÍH~-ã?ÿçí÷AgJµží½Ø9Y]RyNµ¯î9w­ÀÖß¿/°ÒtìŸÿéÍ»ï¿ÿƦj?~ýÆýòwïÿöùÛ`ø¾LÜl"ùþÇ_Þ6¢¿toïž—_~¡Ü½Ùß|ú·wýáÉúåÿzþî«ï?ý >µÿûîë|²AöíÓ·_xþè/¥³{kÝ~"³Ô?|ôüÅÛ'å/ž>ÿŠ>ùò‹¿|õ‰µ0ýäsÿŸ|¿ 'ÿ•C£ì^~÷·¥S¹-J×I)%„¬/¶e;ô0Y² •ˆŽÏ’=âaÉŠ‚}Öƒ³d5Ò²dÞn–ÌÎ9 ×5ÿ,òÜb#ȸ„t$yc´ß%–ÝXÅ ‹*oÀQ?•ý¢Þ?VN¯}ÑñணÉìÑ>;â;”Í Á“&£IJ7Y¼ý£J“ÙŽèèœ^ù¹ªè‚k¶ÕgfƸS¡ÛÅ !ý¤ÎQªZUš3ÅÄ‹Oºeq¤.º¦8³“y ¿9Q¬Dt|žè OT쳜'ª‘'zôvóDvÎÑ: n÷RìåÜ“n ‹}»{SY‚K[ª 
‹&²ªQáŽáÀ¿.õǦ‰üÖ*ùŽÏPW…âO§‰ä D¼£¸yÒmÒÄP4Ái–îíó}½'þÙ߯٭z̓hž›¶r”ÁµÚO:¢Ï­7’<úÛ9½@R Ö”¿¥¾í(\KÉ)‚³8ú¶8ã¡I­ib%¢ãÓDøch¢¢`Ÿõà4Qô€4Ñ£·›&Šsr\O€Zì\ÔSi"·PÄ šØ%^W—ú}i¢¨ ¬w¨—Ǫ^XÞY"¹vtÐ]HÙ£·!º±#–íÄÍz#“&†¢ û.£^¤Zo¤Ø1} ??×F7 ÙïÅTåðܽ‰Ño\›ñö‹¤ôõ³dÙν>H}*Ld§âƒn‹[çyž0±1K¬Dt|˜è LTì³&ª‘&zôvÃDvœ²Óv/åùG<ëbIJY‚Æèêu_^‰ÂX0‘Tq*\Ñ8>žÕGz¬r#»¢Ãà:˜È‰Ä·•!Å &F‚ û.S®,Š\ßšÈvNŽNœŸ+bÃ4§ÁQøÔÁK±Îõ6M„Ü·Pº†_UÊ8tíÖD—àz>Ë;7/á7§‰•ˆŽO=â¡‰Š‚}ÖƒÓD5ÒÒDÞnšÈÎ9i÷R7î"{Ð ÝB~ë Ó>©8V‚à¢J„¡‘´ª¨×;è”ßÚ¸vtÄÇëh"y”t˜úža<®Ȥ‰IÐDH³tóå]5Ap¶ãuñ̓h"=צfû‰ÞþœI¸ /ø-šð‚ ±¡Ôì¼ºÑÆ—=êýëºa‰0>Tÿr÷<ä¿•‚ÌjÊ Ð–µÎüþ ÃÞŽèèê¦Îê‡IQŠ˜F½—¿zþæùÇ4¦|÷Åó9Zyúøæï~üŸw_¾ýø?þï¿| —ý«E÷‡g{>¶áðéý÷ß½ýxÃÀÈäcÏÞ¿·8¾ýø_žßÛÃÒ×b½ƒY~ýüîÍOÏO¯¾üïþÏß¼ûáËŸl~ÿó[¼_Þü[¼9þÖÌV¦ŸùUJ‘~¡çïþfN–ÿ¾;ŽáÒéƒy´¹CŒ÷([Püà¤?ÿL¬¿(i&oHOUgݺjÛû:à/B¸Ñùîì¬ãÍ6§õ÷kTGrñ"Ž9EvÛ?Ÿ‚ÌLsö7Øì/}¿â¡‘B¬ØÑ<}ÇêÒfDÿ?,BþvñG-Bn*Øg=ü"d%ÒC.Bþv½,Bšs2Fd¹£—Š'goñé^¼nÏVì8ÿúÞÎ8ÌŽÏigÏ Î¹Tš³*#Á[·=Äé4”9ÜÄéJc§í[!h-[˜ÐÃ-×&ÔJ¿½oGW{®à­à‚Ø k[™ç¹\;'ì£MØcôNäu=µÇ—9ðñã‹uÚÑ[›mµl³S=õð‡[TéíêРö“z´¦‘“;Û9v—®d§‚B>¶ÅÑjÕ}rׯ„ºÑñ¹«Gü1ÜUQ°ÏzpîªFz@îêÑÛÍ]ûz)=·:4I\¶’ïÊn0–اž«6ôÎèÄ ’gj“ãF5·lçuÞJ,1KØw•ƒ²Ö~d;‡gÌLÏÄBB­ö#jVµòÖbs;wDG¯«YëÁi¢éi¢Go7Mdç’NZB»—â(§ÒDDX˜62Aï“*~0šHªH»c8ðä‹&vE'°^GYً׳;\sΤ‰I¿?MPÊž6‹Õr‘ÙN½1О‹Î»@­ö“’õ œ»7ÁöŠ·"ç.(D_/ï^윻vk";M°WÏ5Tì'L4g‰•ˆŽ=≊‚}ÖƒÃD5ÒÂDÞn˜ØÕK ž¼5A¸Ä­j‘;¥òX)vªvЩDGÑ5ŽC;áë`"y´Ñ™l–ÑTÆç%ì CÁçKТ±^Þ"Ûq £/aççzdmÍÑÍÎA87e çÚ·iB¬‹Cp±¾X•ìâµ×&v‰ÓÈ“&ZÓÄJDǧ‰ñÇÐDEÁ>ëÁi¢éi¢Go7Mdç@,õ²O?‹ëÁi¢éi¢Go7Mdçcl÷R륫³.a»Íü°û¤b‹&²ªÈ‘EÚêõѪMØ[«ÇÚØ¹Év.¼„= 05ª„d;¦y {ÒÄP4$Ìáu¢¢Ï~ýýš_Õw>ˆ&ÒsÙxFC«e#‘Æ3iBÒf<nÒ¹Ô‚5¨£j"ˆbàÒ{Ù)؇äšâ'M4¦‰µˆO]⡉š‚}ÖcÓD=ÒãÑD—Þ^š(Ιn÷R¤çÒ„¯ Ám˜Ø§”q¬­‰¢J©~mâE}„‡‚‰òÖÞ†qÛÑ‘pÝìâ1%t‚ölÆ4abÂÄ@0aßeºŽ@ž¥ ÅÎ^º.?WRÎÕúìb§þÔƒN¼0Y;¾Í» 3ðu¡ðÒ‘_ÊÉ):GNï&K´&‰•ˆŽÏ=âa‰Š‚}Öƒ³D5Ò²DÞn–ÈÎm¶³¸"’ϯ\§ncgâE*x wH‡cÁDV…lóºC½êcÁD~kréA;:xá­‰âÑ«Q`{G޳rÝ„‰¡`ÒmOÞúÕ*Ld»uiÒƒ`"=—£øFMÊ;çÏM«­Ÿ¹Mh-˜H%Ô/;ÄK‹M§!uDÐçqÞšhN+Ÿ&zÄCû¬§‰j¤¤‰½Ý4Qœ³¼£ '§‡µ1gq~#£Ó‹ wHe‹&’*¶nœê§ï‹Ý£ÝÁ~‰Žì¸X•y=&²G¡¨õ´¾ÅŽ×Ê&MLšøýiÓÝf˜]5=l±<:=l~n:Üëu°‹»5!‹ênßÁ&J-Øâ dGÖS]J{Äñºl뤉ib%¢ãÓDøch¢¢`Ÿõà4Qô€4Ñ£·›&²sEÞb»—òzrF'ñ‹êÆì}Ruk"«òÎ!Fjªvu{_tdÅZ§ÓÄ.e&MLšŠ&hAD²y:Æ*Md;Ô£K×åçR 
zQÊbçƒ;“&t‰ä4Ä-šˆ-L¾q1!Û±òhãËõþñÆ—ôÖªÐX4-vWî}'ŒÈÒTf/Ïs|™ãËHã /ƒ×T«:¾d;v‡/é¹Ö#6V鳓SÇ¿H0BÛã çb®HÖPjvzëJɉ«UÉ©i·£)Î#ÌüãÍeˆJDÇ_­êÌjUEÁ>ëÁW«ª‘pµªGo÷jUvn³%½£—R>wïhqqë$í.©ä[­ÊªƒM´îPï«4jykÄz–ï—(òu‹Ç`¿Ü=¿›ý™41ib,š@›úzÖ׫­Ð Ójµè0šØôÿ§ý‹gV3òq!uakï[¬+¤-õ>(Ùºxï;‹ã¼ªÑ§D3ÇGsšX‰èø4Ñ#þš¨(Øg=8MT#= Môèí¦‰ì\"†v/Å«­½3hŒ%8Tzû»¥ò`I>ŠzD÷¨¶7Q¢ã£ó±‘ ÷&’ÇèÀ ÚÊ"̽‰ICÑ„,HŽE™ê4‘í æ¦‰ô\4PñÔš›à™'i™"£Ü_|jéœj/×W«²úk³|d§éŠ…º¶¸0k£¶§‰•ˆŽO=â¡‰Š‚}ÖƒÓD5ÒÒDÞnš0çÑ9°yœ´{©HþäŒ~ñ>nìM©,T¯úb‡ƒeùȪ­«‡¶zàK˜ßšÀLíèàŠO§‰ìQ XÛÊ|˜{“&†¢ Ÿî»Å .ú*Md;•Ã÷&|>¡K©u7Ú™ɹY>À¥,!÷A6C}ží@¯Ý›ÈNM^l$ÊvtÒDkšX‰èø4Ñ#þš¨(Øg=8MT#= Môèí¦‰ä:W/ô"2žKjý= oÐD‘š– )DZh"«‚Àž¹­äÁh"¿µ¹j‘b±ã gbxïP&B“&&MŒD!Ýw‹ÁÝHþÙ¯¿_³S=º6j~®°ÅÖcvtj5#â…Ø†ºjFj- Å5nxd;äx)M$§6;ß¼LšhM+Ÿ&zÄCû¬§‰j¤¤‰½Ý4‘§BJäš½!ú³O:¥Å¡¸ÝÛ1ãR£ŒEY“·9s[=E÷X4‘ßZ"»ã3¾®6jöÈ€@ÔþÝØñ¬:ib(šÐ4KOGé©~ ;ÛwtmÔüÜ@.%Ài´³Ã[·ÝŽ£ Z¢‹ºuÒ)æl±’ú.d²£¯Í@žÅ1)KlŠc†™¼9M¬Dt|šè MT쳜&ª‘&zôvÓDv.LBØî¥ݹ·°Õ/Daco¢HHÙIè©a°“NYU mõ^lo"¿µãŒÐŽŽ®–ÞN§‰äQÈS[™@ “&&MŒDqA!\¯g”íÖů¢‰ô\*åDíÇì :ÎÍÈÆþöÞ»Ô‚í‡ñ¾º{\ìÈ_zÒ©8ÕTËÍ·Åé¼7Ñœ&Ö":ˆPOó‘íB<÷¶[7r2æ>(Ýð¨÷Иû ‘Ki"‹ÆÆüÿ²w®Ér«:‘ ¡÷HÎügrïœê›´Q»°}¨4¿£X_k›Ç2HÚíè%©cÑÄÁ6±ÑùibDü54ÑQpÎzršèFzBšÑ;L͹e´x–b¿·;ªpÞ4Mœ’*Ÿ¥rJo§Ðk³°A ‡gû\«üÁ\4qN=ÙM§SÑÉð\M§sÊWö¢‰©h¢¼—êµ*Eîfaïv—ŸM”ç´/Q«å¿o¤ K–À¿o^GT°r¿LÆnGòìÑDu eT•5,WæªÑ.±ÑùabDü50ÑQpÎzr˜èFzB˜Ñ; Í9´»5á,or®=šH¸ ÚÁÑÄ)©“¥M4UY‚«<»zý®$ìýW#¦(©d¢és0Ñ<–=FîöÝíXVÚÄ‚‰©`¢¼—ЩÕ.ÉKI¥‹`¢>—iŽÆ¡ç;a‚òf)£\tâ2‚³€‰÷7ìÍ.á³4Ñœºe‹Å½öÐ\4q°MìDt~š Mtœ³žœ&º‘ž&FôÓDu^6qIƒ$ìÝ.Ý{4á^&->¢‰sR³ÌEMX½Ô«/pù]4Ñ~5Å+9¾¶¡½&šGÁ\v±2~¹(²hbÑÄ4±·¸6+@Ñ¥‰Ý.]Ý »=·ÅHA^Þn—î¼è”m³ÃÞu$m ÊHÐÿÜÓìÞT²½&šSQ"ú@Ûê6î;&FÄ_ç¬'‡‰n¤'„‰½Ã0QSÎâäá,E@7Ã„ÚÆG°OJuš &šª²ÛD—Êßv4Ñ~5•íDP7i¢>x4Ñ<šå²Ï‰•©Ð‚‰3ÁDy/™ÜŒE»0ÑìL¯®Ûž+ˆ6mvï„ Î[y1‘ßw› msQ}ìiv˜í6±;1 ÎMšûÊÁ·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašØ›bؼÛé½Ý&P}sƒšØ%¸}"Õl.šhªÔ胵JéËŽ&Ú¯và GÇôÁú°Õ#g+ À¡2]Ý&MLEZ/:iJúgÇÏ~{)e¿ü¢S}.’ª9Fã‡àµM 9غ—©ÕÞÓ„Õ,àåïÓUÚì8åGi¢:•ÚØr,Îiå`‡ÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mœ™¥jWŠ{ëÃfÜ(ñMœ“š'£‰¦ªö;®ßïê¿­>lûÕ3©ÅÑzðl¢ydñÌ(+¸hbÑÄL4QÞKb¦„ÚO›hv×÷®kÏUPãhü&Iw¦MÔeö#špϦY%¢‰b—hºõå„úüçÉÊß¾¾œ‰Îë,~ÿúrB™dXëËZ_fZ_|+K ªŠö»5»üR¸ú¢õ¥>דyÔ.¯ÚÙëÝ‘ÖÙ´låà&­·bÙJB ”;´g{£6§eýÕ 
ÈG³YiyágˆNDçÿZ5"þš¯Uç¬'ÿZÕô„_«Fô­jÎkyZÌñ,¥ùÞÞ¨È^f¬ƒUç”Rž &võq+¦ÝŽ¿ìcUûÕ^{jaGËWå'Ï*ÓüR£`ÁÄ‚‰)`"sm9lÁD±“L×ÃDæú(ܳ+Ý|‘ÖMá -SÁµÿwÿÊün÷pkÔsâäeZ0ñ~—Ø‹èô01$þ˜è)8g=7Lô#=L é…‰“³”ß[1Po†ïaâœR…¹²òΩÿ6˜8§çN&ªGHL(3Êëä{ÁÄL0QßK4ÎY ›•·Ûá˼s LÔçd`KÈwž|#nè% ïYÚV“~Ò÷n—åY–¨NëÉ †âVcÔx“؉èü,1"þ–è(8g=9Kt#=!KŒèf‰Ý¹æ²tųætw+#CW8ží?—ÊsUßU1åL)VOô]Q¢£‚ qtø¥£Öí0Q—Á‘¢ñSskÖ¥MÄÌÞ—ø`Úê}, ¾4»ò›¥‰æÔIƒj†?v‹&¢mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&ªsDÎÎR’þlØy)MËfFgç¤N–ƒý£ÞrÕCþ®fFû¯®÷»Ù?ˆŽ?W0p÷X6˜â=†ÐKÁ³E‹&& ‰ò^’±È›÷÷ŸßÞ_2¢«;£¶çº»š„#›ÜÞÝO½2mÂ’–ྦྷ ®ß PÕ´?CW;)«õ£4qFœ¾^X[4q°MìDt~š Mtœ³žœ&º‘ž&FôÓĹYJï=› “°ÏIuž‹&N©Ïßv6q.:†ÏÑDó(b©ß´u·ãU~|ÑÄ\4Á•êE£Ü¿éÔìô¥5éE4QžËµ×3s8²9¡Üœ„ ’ü( [ʶ„|º5÷šzöGi¢‰Ct ®a5»¼h"Þ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,…éæò°f§ƒú°?ØÑà©oŠ"ý§4ÑT¡dü@ýú«i¢ýj®iêG‡äÁ›NÍ£Ifý@™r^4±hb&š(ï%Õîåº4Ñìñjš¨Ï5(@®²wfagß„-ñÁÙ„n9§” ƒºØK~´5ê.NÄ=@fÇ’MDÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4çFe'§ñ,¥ï¦ÐKk:”ý`g¶¯È1–ê6YÞDSŸ©û‡êK¾ì¦SûÕˆIRÜ£ø`ëºÝ£‹’¦X™¾d‘.šX41M”÷’ K”å¥_Ó©Ùe¿ü¦S}®3P Ç{ö;[סmXšjZÁêÒf—…¥‰æTjßű­¼‰p›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰Ýy™AùƒYJä^š À íèlb—àå>‘j“Uˆmª,©"Äê5ëwÑÄW Ê©ïvô`ÞDõ˜³‚ô{×ív`‹&MLEÖÎÙúyöûÙÀE4a-¢Ì½ñ¬M®vq¸òl¶¤fGb½Ž`ª¥ú#½Ù=KÍ©3›§Xœâê]n;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Q#YYî8œ¥åî³ B°ãÙþc©ïª"ý§4ÑTÕDcôX=Ó—Ýtj¿ZËïîw9ÿ‰¢æçh¢z,”kìñý¥„ò¢‰EÐDy/µ^4²?»(üóÛû«®ï7QŸ[8¡ Úpüh™î¬K¶9±è;š¨/m›yA€:ß{~ÙÛs4ñËiízËsXÝëúÛÄ~D'§‰AñÐD_Á9ë™i"Šôl41¨wŒ&~œS’l¤á,E ýö~žÞö›ø%ÐÍ<– y¦,ì_ªê:çX}NßtÓéׯFFÈGᩳ‰_…JøãeœpÑÄ¢‰ihb/…M,e?¦‰_vš®½éôó\Íœ’Z4~DÁüNšÀ-I!+{OPF0›% 6ìÍ(?JÕ© &MŠÌ+ ;Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9'JÖkÑü¯ðÝgIËTx<Û.q.šhªk¦ãê•¿‹&Ú¯ 'Ž£ó\/ì_] LH¬ÌÜM,š˜‰&jiJ\ïæti¢õ¢~ý¨}MÔç2hð¹ªÙÓ½YØ ÊÜJï×—\F°y èoØ«z†GiâŒ8˼Î&Âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NÍR˜î=› „›s>8›8'5Ë\4qN½èwÑÄ©èP¦çhâ”2}9[4±hbš(ï¥1h™Vû4ÑìRÆ«i¢<×… Is4~ °¿«ë}MˆlȤpðµ ËvM¨Áùh³Ëö,M4§¦’=ÅâŒÒ¢‰h›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰âSJµîGÍ£I¶ì±2áÕ¼nÁÄT0QÞK!#ÄÄ]˜hv"t5LÔç &³hüˆd¸3meËI•à&ÜH nJ‹Ý+vM²¾ÔvãÀ%бúüç:þ·¯/5: É4Ž¢>¹¾æI‚ ÐÍîµ8ËZ_Öú2ÁúB[m&„ìYºëK³ã‹Óò~žkà Ò¿òÓì’ä׿­À H~¿¾Pã’2»¸J+i >ú±ª‰#e±‰+v¼J†_!:ÿcÕˆøk>Vuœ³žücU7Ò~¬Ñ;ü±ª9åŒÏR‚toZžÕ“«š­™+ÕdsÁDSeÌÝêéÿÚ½9·ÿ«a¢ýꂊ.G§€ïs0Q=B¶28ce×É÷‚‰Ù`"—ý¸%6 `¢Ú1\Y”¤ÌÈáøEÅ{?VI­ƒ~p–ëæÔ__šÝ›<[a¢:­§,)Wû%,˜ˆv‰ˆÎ#â¯‰Ž‚sÖ“ÃD7ÒÂĈÞa˜h΃oÂÍŽ’Þ 
îeÆ:¬ñqNªOvôÝT‰gˆÕ³}ÙÑwûÕFÂIâèüßmÕ»i¢v /»cO†¡2·—e|ÑÄ¢‰ h¢¼—â&’4ui¢Ù•5âjšàZ‰P%—õ-? "·v3òòG+•÷4!ûT/u•6;‘gïÑV§ˆPߊP\YýhÑD´MìDt~š Mtœ³žœ&º‘ž&FôÓÄîÜÁƒ¤£ÝŽøæ{´y#?€‰¦€’gÿD©é\0ÑT1 Äê‰ü»`bÕŽ^qtós0Ñ<:eno7;Óu4±`b*˜zä-¡ô“òš¨] õ¹eæ-‘]PÂïlf„¸%e´ƒ{´ZF°Ô«ðÁTíØI…‰&NÀU,'¼î9Å»ÄND燉ñ×ÀDGÁ9ëÉa¢é abDï0LTçšËôÜ0ivð’t4Ž©BgˆÕ`û®-zûÕ$JˆqtèÉ-zõhZ68ÁµfG/µ¹Ö}mÑ'Ø¢×÷ÒµV›÷všäÞÛ¡y³d–÷ȵtGâÜ?š¨väx9MÔç*&2 ˜Z!çΣ N[r?ˆ“ÕUØ«Ì>K4;Óg‹[[¥k©–X\YÍK„›ÄNDçg‰ñ×°DGÁ9ëÉY¢é YbDï0K4ç˜Ø‚¹]¤§{Ë"lrX.p— ªþT„ɰÇö »qPÄzW/_v2±G‡kÓÓ8:ôdvóèˆ\Okv†«1êž©°Çj2‚¡÷+|4;3¹š%¬]_JäàÑøQ ¹µÂGù£•8Ñ„×h…úy ÍÞ-…7ÒDsʬ Y³{=Ø^4q°MìDt~š Mtœ³žœ&º‘ž&FôÓÄ©YŠs¾•&tC9€‰sJi2˜ØU9GY»~L´_-)8©Øí^JuÝ^s& \ƒH¥„+gbÁÄT0QÞËBßÄàýkNÍ.çËa¢>—UÝF›è­9ºAöœß÷E…TO›Ê ¹ûA£Ù™3= »8Ê..¡8GZGÑ.±ÑéabHü%0ÑSpÎzn˜èGz>˜Ò; ?ÎÉrЧÐ"òÞœ ØHù=MüH`+¯÷R‰¦¢‰]×Ë÷þzË_Eû¯¤òßãè°?w#k÷è*d(3_7²MÌDõ½-[èŒÝ£‰Ý.§«ë9Õçj2)ØqÎ~'M¤-i‰¿§ hßXô•BéæÒD'à†‰£Ä MDÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0MìÎÉ50K Ü{ÑI‘·ÄÕaw Ц”>j“ÑÄ®^2õ/ÌüØÑwåwì¿ÚËž£r³ÛÙË5°Ûi¢z„š|*ÐE‹&¦¢ ¨©Í šº4QíØÐ¯¦‰ú\圇³¶ü_æÖõ4‘eS"qO¹Œà]fýL˜f¨ü(Mœç¶ê9…ÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mœ™¥2¤{/:iò­,}4qN*Ë\4qJ}†ïê5±ÿjL˜ƒ=Ç—û·ÓDóXvYF+c^ëMLEå½Ô\vé¦ÝÆu»ÂÕ)Øí¹¨bâE‘[Ï&`c:¨ XG0)R¿9ón‡ôh¯‰Ý©!²| NaM„ÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4ç^Sä(ž¥Lï=›`J›èAÞÄ9©y.š¨ª°^ÀWù@ý—%aïÑ„ÖoAûÅ—“›Ûi¢y$Ïœˆív/Ëø¢‰EÐDy/ՌܹOÍNôò›Nå¹V^‡²ý·ÁV¿&Ü›7Q„0Üt¢6qBìÏÐôûôM4§Ä$ îvøÒM|ÑÄÁ6±ÑùibDü54ÑQpÎzršèFzBšÑ;L§f)B»·¤S=›°ƒ’N?L20ÛÏÕlbWŵ2~ Þ¿«sÝOt${æçú`ïÅÓoÊj6±hb*š ºKÏ*’»}°w;`¾š&ês¡¸‡xn%$éNš°-ƒ daוTæë×`Ýí(É£4QrâòÖH,ÎiõÁ·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ!a²»~Dʽ7XÊšrD»Ô²¨ÅR!MFMUÏÁ9ú®^ñ»h¢ýj”äÌqtò“YØÍ£*R¬LÒ:›X41MÔþÒšÝÕú4Ñì^ç‹h¢>×Ë/æx¬žïm]WþhF(øž&¤Ž`JÔ§‰f—õÙ³‰æÔ˜¥_k·[7>Ø&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,e¨·Ò„¤´9êMìÔË?| •m.šhªœ‚ŠT»zÿ²¼‰ú« £Ä+yÙ²?xÓ©yDC×+C\g‹&¦¢ ©ù†ß4rÿç·÷W3¿$Æ]Dõ¹Òo7ñc—î<›Èõlù 6hÁš€=¸I[í¤€×£4QZ™ùÊìŠSL«Bl¸MìDt~š Mtœ³žœ&º‘ž&FôÓĹYÊà^š0ßRòš8%•ÒdyçÔó—Ýt:ÖçúMT ‰%¢‰ªÌQVö¢‰©h¢6˜NJBM4;¹¼v{n-c›£ñc9‘ß{6!ž²°­Ž`pȹÛµÙ™ù³YØMœ×¶Šs}©½¸hâ`›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰S³”'¼•&`+úÌ'{N…5$82ß•â\ÍëvõPLL‘úò+ß$#þÕ0Ñ~uáø-,v/0o‡‰æ±ðKy/ceB &LLÿcïLsÇu5¼#CœÅ•ôþwr5äé®XŒá¡„:ÿš0_ó‹†ÇÉ\ÓÜ™ò8 
»Ù)\ž6QŸ›•¤¼l4~ræ…±/M›(S+È.L”uPµ~Ó”;am})ªL„’ÆêUÓ¯­/å­s.6£cÂO®/e¯‚e„8„ÊR^ëËZ_fZ_|KeÎ4ŒŠ|T;¡ë×—ò\cŒÒZ›]â;›£fØØ5í]¤õúY€Ì(PZv’ÙŸmŽÚÄ•’9,ëè;ü 1ˆèü«Îˆ¿æcÕ@Á1ëÉ?V #=áǪ3zO¬êÎYY%ž¥„î-@®’6żsôÝ%H!…o¤Êd_«šªò‚”òêó}··6 ¿¢ø`‘êËF ü eþÖ2qÑÄ¢‰)h¢i—‘ÅÐD-íÇx=M`&$r‰h¢Ø¥„÷ gVJü‘&0Õ‘µòp†îvï—ù ‰îTZöXÛ¢‰h›8Šèô4qJü%41RpÌznšGz>š8¥÷,MtçZ& ñÿ—¦›ÓòdÝ¡‰.Á(I’XªÁ\4ÑT‘rÑ ¡z’+ØßÚÊ¿q)—>Wä£yä¤f+ã„‹&MÌDõwYvó9'ùhve!Ћi¢=W\™0Eã'ÿ«:Å 4›»ÃÎÝ*„>·˜Œ/ów;¥ô(MT§\~yÜļÛ!¬äá6qÑùiâŒøkhb à˜õä41Œô„4qFïišhÎ)c<…2ɽ4Q`³Ç”rš«ÆGW¥\¿~¡Þ«›Ñ+:’•¿X+ ’=Õ£8Xüw”UãcÁÄT0õ‚,›ÚŸÁÿùÏï7˜] õ¹uʦqe¦ngpçEZôJèéóE'Ä6Ò=}À®)mvÈÏÂDuZ;oæPœÂ[ó;»ÄAD燉3⯉‚cÖ“ÃÄ0ÒÂĽ§a¢9'g‹½DÚíYyb£Éþk¥„s•øèª~¡þÇZ£ö·,» £Ãü\V^÷èêåceYWÖÄ‚‰©`¢ü.s™ù’%ÂD³ã|ukÔöÜ ¹žîEã'ç¤vï=§šÀ¡øy}¡:‚Ia|ÙìøÏúV˜¨N-µÓÛXœëjfî&Έ¿& ŽYOÃHOgôž†‰æR8KÙ‡ÉþÚ{N–6Ú»ætHi™/炉¦ “ê¸ßøÿ¿åo¥`÷·&Ê|¿ìQ|ûðv;L4ŠL±2ÕËhÁÄT0A &rò?GÖ?ÿùý;¼:i¢=7CÍÀgízëV˜H›c‰üÎ5'n#½`×8q«Û±?Úµ9Í©f–}!Îy%M„»ÄAD燉3⯉‚cÖ“ÃÄ0ÒÂĽ§a¢;¯)zÎREd¾&„u+bvhâ˜ÔOulÿ&M4UPV—X=ÐÑD{kdÍÁ=§n—žëeÔ=š(Ã3Vö¾[4±hbš(¿Ëlä„CšhvH—'MÔçf7‹n§v;Î÷ÒDñ€)¦ ©#˜³”ÝøPi³#zöh¢9­nÇÕr»¾µT[4±³MDt~š8#þš(8f=9M #=!MœÑ{š&šs »‹ä{&ÊL¿ ÒM’še2š8¤Þ?6ùŸ¦‰þÖ5è}Ù=™5Q=:%Í_üÝM,š˜‰&Êï²^ÒËŒÃ^F/»tùE§ú\Bb—ïïv äÞ^Fµÿƒ|ZGp™„ P •v»Oµ o¤‰âTYÑ–#q­%Ü¢‰h›8ˆèü4qFü541PpÌzršFzBš8£÷4M4çDŒñ,ÅùÞ³ ³ú…w&ºT1SŽ¥Ê§²å“&š*%K¤±zý‹ø?Mí­ëõ4ûb±T° SõX/×YpS£Ù%Z9Ø‹&¦¢ ­»tu³4l6Ñí$óÕ4QŸ+eíÇOýÖÂ7wFMµúgš°:‚É x¿4;ÄgÏ&šÓ\xr,.¿ÕÏ^4±³MDt~š8#þš(8f=9M #=!MœÑ{š&ªs¬{o‰§PüðéèRšÈI6ð=šhP¾˜Pp²ŠNMUY«X0V_þÿ[4q(:}Ÿ£‰æQ-[pa¹Ù½·8Y4±hbš(¿KO"„<¾éÔìèzš¨ÏeöìAFT·KùÞò°De(ïäMä6Ò8ï4æ>ÒíQš¨N)Jð1ª‰ËoÆE;ÛÄAD秉3⯡‰‚cÖ“ÓÄ0ÒÒĽ§i¢;2ˆ§PJÉï½é¤²ñM“Š“Ýtjª –I±zHü[4ÑÞs½LG§Ðïs4Ñ,›lä{4qLªMV ¶«§Zp#V/ðc­ëÚ[Š2³/¢£žMT’ [α2§•7±hb*šðJ eMÒ‹Ðá¬n÷áNÖ­4Ñœ–ÅÔéviÝt ·‰ƒˆÎOgÄ_CǬ'§‰a¤'¤‰3zOÓDsîä4Nmîv9ë½gÎâM’ê4×ÙDSeIJ¼¾XÜ~ël¢G4¥ñAþËžk^×=2d•Ñ[¡÷E‹&&  Ø( Ï&º]WÓD}.zÁÆO±Ë÷faËf¾{Ó‰°Ž`©·‡ç£ÝŽÓ£7ºÓLŽã ±ÝÎÞ.Œ-šØÙ&":?Mœ M ³žœ&†‘ž&Îè=M͹×=g©l7Wˆ5ÜìÜt:&Õa®,ì¦*'%¾Ñuõú[5E'—-às4Ñ<–½ Ûì(-šX41M`Ù¥— ²ëŸy?ÿüû÷[S+äêîuí¹"uí|Ù¥[+ÄÖ³ ´ ±Du{…ñÔì²>š…Ýœ:Š A(®ÞX[4mŸ&Έ¿†& ŽYONÃHOHgôž¦‰îd§y]àî9“B«ÉlË‹;"– P}±J¿¶¼x½Æ™5Ž=Xä£{MI¿ø»É[i™µ¼¬åe‚å…·ò$ ïvàW·3jÏÍJñK³“.\^´ÂKFþ¼¾pÛ –¡9PZìäÏöÕ·~¬*N­ 
«Ýçïv°.Ò†_!ÿcÕñ×|¬(8f=ùǪa¤'üXuFïéUÍ9zª+¼ìôÞ£o“¼9ï}¬jˆ5ÇR‰`.šhªØ$ŒÕ3ËoÑD{kaÌÁs·£/Ò6™å«¿›)-šX41M —ÿH hÛ—šëi¢]äÅäѬ]ìÒGßi˦ewô™&¤Œ`‚2Ÿ¥«]Y1ŸMËkâÈÄBq„¶J†ÛÄAD秉3⯡‰‚cÖ“ÓÄ0ÒÒĽ§iâÐ,E|o;#Rßa‡&ŽIÕÉÒòšªú…pÜ7êe÷cÈû[ Ô›t_DǤ‰æÑ”Å1V¦o{ŒE‹&&  iiqP3î†4Ñì˜/¿H[Ÿ+Ù)*QíÈ5ßKš .|† -˜!k’qQ¾jGùÏååV˜hâˆ! ‡âÓºGî&Έ¿& ŽYOÃHOgôž†‰c³”Þ\dÓÝ£‰cRódºú²%÷|Ù‘þLÔ·°²–ç8:ÿêt7LRf¸²òLLZ7óž¥qV^³3¾º›Q{®!`´G/và·ÖøàÍ“àÞE'k#]É‚ÏÍ.ѳ5>šÓš¹‹#L‹&¢mâ ¢óÓÄñ×ÐÄ@Á1ëÉibé iâŒÞÓ4Ñœ3”}núb–Ê~óÑ„mD{G‡¤r²¹h¢©’BBúÅZÅücYy=:HyÜ—ðeÖøh#ÇÊÞÛˆ,šX41M”=Q¶!M4;ðËi¢>Wrm®ìbWV“i‚Sõáy§b`.#XÙ¨üY†J›©>JÍ©Y²àSZ³S[4nŸ&Έ¿†& ŽYONÃHOHgôž¦‰C³”ñ½gä°éncR•碉¦Ê³åX}æ«?Þ£Ój³ÄÑy¯y;MTFR~<*3Е„½hb*šÈõÌ8ƒ/:5»D~5MÔçªÖ:|Ѭ]ë€Ø­Iؾ儈;i^G°§¬÷4;Ågi¢8-ÂÌ|Jkvö–˾hbg›8ˆèü4qFü541PpÌzršFzBš8£÷4MTçhµ­#„³~œB/¤ Ù 6Í»³ý©îsÑDUE€eY§X½«ÿMôè° ÆÑ! {£VŒ(¤f6ee/šX41MøF\‹˜Ž{£;rx;[»ˆ&Ês¥lH ÌDÛàbG|çÙ„ú¦äBŸó&8µ¹E Ëð{O·{ç®h¢;-k€Œ¡ìe÷ÖâvÑÄçmâ(¢ÓÓÄ)ñ—ÐÄHÁ1ë¹ibéùhâ”Þ³4Ñg'É)ž¥ŒîM¶̛}>›8&5Û\õÇ›*U‹Õ»ÿÖM§‚Q0>xÙÑsõÇ›GIZVhŠ•½÷8Y4±hâïÓD»ÝC`H4Ì›xÙÁÕ7ÚsY-œµ‹g¾9 ’~N›`¨Ø˜ÔÆB›ò£ÍŒšÓD73êv iÁD´KDt~˜8#þ˜(8f=9L #=!LœÑ{&ºsÎbÏRïkÍ-IØ7Ê;­QJ¥¹Z£vUXLç/ÔçßjúŠŽ+‹ÆÑAyîh¢{TJd+“·Š8 &LL°Õâ¬Z¼ ›5;pËWÃD}.±«C4²«Ý‡#Ù /:åP2íЖ¬É’Œ¤š8>š„ÝÅ ÄPœâÛ‚E;ÛÄAD秉3⯡‰‚cÖ“ÓÄ0ÒÒĽ§iâÐ,E”¶ât‡&ºö ßH¬ÛDWUë2¡Þè·h¢GÇ)%‹£óÞ‹ãvšh“é7]4±hb.šÀ­–fUt^têvï•—/¢‰ú\ÎHA·‰ng@wMš03¥Ï4AmK™„ÆsP³–&ªS+‹4fÅYÒÕ».Ü&":?Mœ M ³žœ&†‘ž&Îè=MÍ9Õ¹ÏR$ùÞ‹NÆ›¦$ì.A€)Ÿ.Õ'»ètH=Ûoõ®{E§¶Øâ8:Âψm NP7ŠïÊòJÂ^41MÐÖR<ù°¤S·KrùÙD}n™»œÆ=î»)Ý›„MF”vÎ&¸Ž`7ÎÁ•¬f—íÑvÍiFÉ<®SÝíÀW»‰p›8ˆèü4qFü541PpÌzršFzBš8£÷4M4çL€ãÜànG÷&a{ÚPm‡&ŽIu‹&š*I˜‚3ý-%ýMô蔿[ÁÓ&šÇ²ÇPöXY6]4±hb&šàz»§üxiHÍŽóåiõ¹Z“Ý$?ÅΘo>›`±æu,e;–åe\ä£Û%‘Gi¢9ŒJ:u;&[4mŸ&Έ¿†& ŽYONÃHOHgôž¦‰îœ9éS¨|º,ziÞÕóîšèªÖo¤ò\b»*UÎAÞD·ƒßj^×ߺ,ÇhúEtì¹±Õ£'(ãs¤¬µ[4±hb&šzæ@.éÏl€þýû-vøvEæ"š¨Ï5Jt]ëvˆrçÙ„mjR–Ï4¡unÉY£ÏÿÍîc­ôiBëôB)„²f‡«y]¼MDt~š8#þš(8f=9M #=!MœÑ{š&ºsDŽw_"ïm^'9mÆ{gM'ÌÁ]¡.u¶¼‰®^sÙiÅêËæõ·h¢½µ”ð¸~'i¢y4-C±2µuÓiÑÄT4¡a¡\$ÓD±÷Ë Ä6ÿPóŽ ?ÅŽìÖ³ Ø™ûÎÙ„µ¬eŸw»ôh»‰æ påP$X4nŸ&Έ¿†& ŽYONÃHOHgôž¦‰æY8c·Âæ\F0¡œ6;áô(MT§œ’±X,Î3.šˆ¶‰ƒˆÎOgÄ_CǬ'§‰a¤'¤‰3zOÓDsìÔ±îvxo» M¾íMt ÎÈø…Ôqm^ɦ¾×µI(¤`_HÕÉú5U*Dù‹µJ~-/ïPt¬@Þ<–¿ 1Ín5G]41M–y•íÏ;7ÿ¡‰b‡®×Ó•¡“)ß ºÜLœ¡Dý#MHa W 
’4L*ivâúhÍÀ.N4³P(Nù-dÑÄçmâ(¢ÓÓÄ)ñ—ÐÄHÁ1ë¹ibéùhâ”Þ³4ql–*»½›óòhó´sö}PªÌU士2r÷õèvšñ§h¢¿µ;~ñ·5}Ž&ŠGJV (;ÊŠ¥u“vÑÄL4Q—TËç »£v;…«+·çJ­ò¡Ÿ²W»·f ¤LdŸi¶ºqªÍ·‡_«º§ü(MT§9‰C€:ÝŽV?£p›8ˆèü4qFü541PpÌzršFzBš8£÷4M4çeU“qÚÄËŽôVšÈä[âš]•(k:½¤NV¼«ªoƼ_oùc7iû[—¦ñJ^ v=GÕ££y"¡2 ‹&ML@PÏ i¢Ù‘ÁÕ4QŸ+FåáÑS+¥ªÍteÍ@ª wÎ&p£ÄÙÅÇõ|š$z–&‰ó´ú…ÛÄAD秉3⯡‰‚cÖ“ÓÄ0ÒÒĽ§iâØ,e÷ö3rM›ûN•ƒR'ëŽzH=ýVÞÄÁèøs5){o$²hbÑÄ4Q~—”Œë— !M4;e¿š&ês‘£¯(Í®(½÷¦~®(ÔG0–` •6;ôG«|t§j%Ћӷ‹&v¶‰ƒˆÎOgÄ_CǬ'§‰a¤'¤‰3zOÓDuN`jã ä]¤û½y´e݉#J«Ö¹`âzøP:ë&Ú[“€±V–=ǃ0Ñ<* 1ÅÊäí2õ‚‰ÀDù]2#Y¶4„‰foÙ»ÁD}®˜±hü°°¦{K2f¢Ï,Áu{v”1K4»÷hoe‰ê´„‘‚öÝŽÝKD›ÄADçg‰3â¯a‰‚cÖ“³Ä0ҲĽ§Y¢9/ã˜âYJåÞ{N°pʸ?Û-Õ炉¦ªvÁÈ_,Ù~ &ê[ dΊqtü=Óùn˜hÊÆ•»»,˜X01Lp½Ûðì[7rÇ¿ÊÀÁn`°Èª"k€}pvs1²›5Öôb#Ëcy±$H£¬Ãß=Eö‘|$fu§/b|h¿ŒÎÔtý»Nóòk’UÁÅT…‰lçNw®XòŠ­‹]à·Üç$çK£åì¢7²‘;”·9qA¢4ÅQ€Øaš%V"Ú>L,¿LT̳n&ª‘n&–è] Å9ê¸åÙî¥Ðo|Ü!¬LÌ“zî0ßÇ„‰¢ŠRv¶z¹,˜(wÍì|=íËÑv<4‘=r.¡Eö÷Æ Ða¢ÃDK0Ay2ϹºE¨ÂD±Ã“ÃZ+ÁD¾n.,Œl¶ìÀÚж- }¡ .-˜Å(½=Ø9¿kzØÁ)%BB[A?‚mN+mŸ&–ˆ_‡&* æY7NÕH7HKô.¦‰â\¼<¡—J~ÛCQû{ô~„&²„˜_ÿÖËgo £‰¢>W#0^q;ˆñ²h¢ÜµÎ& žú÷hwµ›ÓDñ˜ê•me§Éò;Mtšh€&8ï*/‚êûœ²FZ&òu£vêf¯1ЖG°Ñå}NQF6:EmÁI§N:xT•;<—dCšÈNS½æÍÑ{± sšX‰hû4±Dü:4QQ0Ϻqš¨FºAšX¢w1MçyóP½&å`Gç2l¯»Ñ‰RŠ•Þ^8FLvo/ÌÁ.ªá„á žQÿ›¦‰r×’Ï?²ñ;žšPàJ®³€ó}m¢ÓD[4‘“% ⥾ѩØ!­žÐI¯‹ú8JÞj?˜ÛÙ¶G°õjaìv*}KʅߪJÓГï{;åî…È膻Ðw:™ÓÄJDÛ§‰%âס‰Š‚yÖÓD5Ò ÒĽ‹i¢8§hä\ì n»6x „‘µ‰"uün‚Tl«öQ½x£ òÑÎ_XB§r×1Ãâ”蜜1Üœ&²Ç|˜™LeÚŒ{B§NMÑ„>—˜«?£‘Ð)Û9¿þ±‰â_<#™½6B·ñN'ÎoíÎÓ„äRãmU± »ÒDq*‘“‘ew°s}mœ&V"Ú>M,¿MT̳nœ&ª‘n&–è]LÙ¹×9:Õ‘‚Û›ª›¤Æ|FÙ”êÝ™œH•&Š*Ï‚FšÄbwºÿþ"h¢ÜµÎ:ÀMønÃ~4‘=†\à$’­L ïtê4ÑMèsIˆÁŸYõ{ôÞóKÂ.×¥ˆ`Nƒ‰â¦‡°18œ-„Mª,·t‰K­ÈÞ[;¿ãN§7N "Öº¡7v(==l}šXhã4±Pü 4QW0Ϻeš°"ÝM,Ô»Œ&Þ8gòàÄî¥Øo›Ò‰ë˜r¶ö[ ’ –ä­]j©öU‰8¸ ÃAäK*„ý段ÛžðÊn4qôˆ:>‡ ÊP‰»ÓD§‰fhbx.IBÌÅ‘Çi✤$[ƒ&†ë2$fD³ßc` [ÒD<”À§ó4¹£†µ±oìüžkoœŠ`¬e|kwÒ uš™&V"Ú>M,¿MT̳nœ&ª‘n&–è]LÙ99Þxç1عmÏMĈd?Bƒ„àSš"ÕS[4QTÑ[½xY41DóŽ ;:§»«7§‰â1:S”Qè b;M4Eú\"Œä©JÙŽamšÈ×MeY›­öS²o¹ÓÉHH>MøLÚ®‚Uš(v‰xmšÈ×M!WâNVûQÁmiB‡ðgä4`"¿T¿>LÊ9r¼KVûÉ5pÛ—UŽyd#-iKÏԷV;û.}g§>¸1Þù»Ó·ù&Ff‰•ˆ¶Kį󬇉j¤„‰%zÃÄà<×¥c»— Ž6.gäb‚ñÞÞëaŠTtmÑDQ…Qœ­á–&Ê]“ƒ)ÑI;.} ráS[Yt}é»ÓDS4A9mR" ¡NÅ.ºÕ7ÒæëŠGã…ÿ`ç6M@Î „vçÇÖêoÿ‹ÝÞKÅ©8!cœ.vL®Ó„5M¬D´}šX"~š¨(˜gÝ8MT#Ý M,Ñ»˜&fõR‘7N˜èÀc§òf)M±1˜Ô£gˆ¶z 
[šÈw.0çë‡è$¿Le„ ³S"ù&Z‚‰œƒ•|PJ¯ÃD±׆‰’‚'ãôC±;[5b=˜ˆù+?1·`NŒSy±ôA¼ïÒDG\S¢ëÍYb%¢íÃÄñëÀDEÁ<ëÆa¢éab‰ÞÅ018OQÀOè¥dÛÚ¨:â’ðMÌ’JmÑDQ]Pe¶z† ËXî: 0a°ŒaÇ¥‰ì‘ò‰T¦2r©ÓD§‰¦h"æ DQ=~˜éÑ{Ï/9Y›&òu}H^Ç7«ýP^“Ý&|—LŒ3ØÅŽNVtW¢‰|Ý ÿY•ŠÝæ4¡´<²‘Vr &ï8Õ×&dèÜ®4Qœ¦¨3A\¤Næ4±Ñöib‰øuh¢¢`žuã4Qtƒ4±DïbšÈΣÃäŒW¯E¤ Û–&Æj£Î“šËXÔCÙT¯wé.‹&Ê]{1w: Q<)$¾9Mü`è(vC§‰N-Ñ„äL¬-Ë861ØùÕMäë&çcD³×¦()m{[t\ðçi\nÁ‰†êÚ÷`ìIÅiÊçë9<; ¾ÓÉš&Ö"Ú¦T§‰b‡' ŽW¢‰|]Ñù-ï Š]ÂÍ«–‘µ Ÿ[p"õM¿ƒ&»ÒDv Žc4^¥qr’_¼ÓÄÈ4±Ñöib‰øuh¢¢`žuã4Qtƒ4±Dïbš(Î=æä¯f/.âÆ4äÆhb–T8Sýí£ÒÄ >:¬¡}s——• v¸ë"S˜Ù¯vÝà‘Õgœð½‘ïå&:M4Eú\²~˜ß¯Ti¢ØAX»v]¹. €g³×V;€i‚ÄŸ?„ !7`ÌOD½¡;ŸöÝèTœF²ŠB v|2Lw˜™%V"Ú>L,¿LT̳n&ª‘n&–è] ÅyŠëEÃŽv~ã”N%g?ÀÄ<©­ÁDQ%‰¨~Ìøh¶4‘ïÚ;d©u>FGö;„=( ¨ó,û{ó>õN&š‚‰P´Ó‡“ùGï=¿Ä"«otÊ×M¥Þ…9G§„iÓCØ|`m‹Dçiµ‚DRoéÙÎËÎÇ&Џ$:K°Å…È}£“9M¬D´}šX"~š¨(˜gÝ8MT#Ý M,Ñ»˜&ŠsIÌõü}G;Ç[×®ËiE ÒÛO–꣉yê?,ëñÛ¦‰|×èHa›ÑÁw¶mMÅ£~!@”yî):M4Eú\r"‡‰êŠ]µa—ëJÄ0Y퇅iÓCØtп ç+aåŒè}=Iõ`çíJÅ) ’ñ*m°ÃžÒÉœ&V"Ú>M,¿MT̳nœ&ª‘n&–è]LŹöØ\ϱ}´óÛÂÖçÀcg°ç)¥ÐLU1$WOðwT/v»Üµ¶€ ceŠûU›()x™Ê´÷}N&š‚ }.DPLOU˜(v$«ïsÊ×ÕÁMo¬ö“7=ƒrHAuŒœÁæÒÒYÃ^mPì¼ÛµÚÄàT9޶8L½¶9K¬D´}˜X"~˜¨(˜gÝ8LT#Ý L,Ñ»&fõR´qµ‰éŒÐÄ<©©­JØóÔ³¿°Nó¢w<5‘=²6ðd*cçûF§NMÑç%‡$ Zmb°cY»v¾®ŽÆNÖ4±Ñöib‰øuh¢¢`žuã4Qtƒ4±Dïbš(Îг{)Èm{ÒÁE?B³¤¢o,£SQ• ¨F?A=§Ë¢‰!:Q¼‘7i° ;ÒDñ(!X‡uŠ]¢^m¢ÓDS4¡Ï%ko¨¾êŠÝé4t%šÈ×MÚ[¢Š]~Kš€C’Ä0²Ñ)i ލQçúÚw±;“Á{Sš(N#úHh‹ã^»Îž&V"Ú>M,¿MT̳nœ&ª‘n&–è]Lƒó3v/qãŒNà0JE‚8F‘ R¹­ÚuGõQ•­^<\M w-@~JtÒŽ;²Ç¤Jýö`«ç‡Í×%ð vËæ|ºcÛJØ>¦(8F"Ùkà ¥j§ÝPk㋪ÊYæMîòÒÆ—Ÿ_WÙÑ!à=Çõ˜@LxêbèùÇûøÒÔø"I™©:¾»tòŽ¥ñE¯ë]ŠÉG»3UÜV­fDÞÇ‘—U¢D!Ñ!TíÂΧòŠÓ䎶¸è{Æ@ó-D%¢í¿¬Z"~—Uó¬YUtƒ/«–è]ü²ª8í±Ú½ÔÙ4Ikf p€ÑŒ*K.ÅŒ;hìX^Q¥Ã¥ÎV. &†èøöHž£HûÁDñÈ”«ÚÚʨ—Fí0ÑLä]H>E&Ôît޼L„ˆsÏgµµ nÛ´^y‰Ïg ô®´àR0¯¦t° ~׌Å)è—HõýȃB/fdMkmž&‰_…&j æY·MõH·G‹ô.¥‰£ó<ÀMè¥Üƽ„TzûéR±­¥‰AUίÍb«çpYKÃ]§ ÔÁŽNÄý–¾‹Gï 1ØOìK&Z¢‰ü\j¯Œnd}ôÞóË¢kešÈ×\b«ýD [.MÈÑ9ïÏ/ûá¤OoUi±Kâw¥‰ìÔ#CâdŠóJo&¬ib%¢íÓÄñëÐDEÁ<ëÆi¢éib‰ÞÅ4Qœ3³ Ý…êà6¥ ? 
G’| $ï ó¶Ô©-šÈªPà ÃA®ìzY4QîÚ³';:÷[›Úq_›0§‰•ˆ¶OKįC󬧉j¤¤‰%zÓDqžr~´{©ˆ²u5#‡d¼·Ï›Nb½Ð UZ[›(êó¦]ãw±¹¬”ÇèHòbäïTÙœ&ŠG&ïýe|º«ÓD§‰Oú\ÆèˆÑÕ×&²+r¬Mùº¢ÄïL7?~ǨŸþYû„/^<{¢OÑËw(›àg¯žè«Í2h3ÕvàÓ+ÔæšP¿2Iï5ÔÏôúªç7Ú…~ØäYÇÃäcø¢¯ó·ÿç÷?ÕNööùõ'ÇÏò¯K?û•SbqüE¹Ü¯óŒ?|} Ú×J“9húÁöQSg7'ÏŠú,Ë×y2ó~ÏwþIÆ—ëÿ“¢¿Òò(ÛWn_½¸yúòîÝQ,ÿôóßoîï¯ÝO=|OB¾ºýö±þÕíO×Á)[ Ë%å\~L_ÿîß¶”|±ßÿÎý Þºßÿ¢2þxóüæ[mÛw·/O¥·¿~û$•ßip }Œå‡ÿó??Í,ðÝéoôÿË×?òO¯îîó“©Ó§/ŸÝßæÿtûüþÙëu §~÷$V¾®¿)³kóx]>xZúüã¤?ûâóü¯¿§É¾ûþöñëÇ÷ÊÙOõo_äßýx£=äÃóû›ÇÅѯƒÉË›Üí½üDãòïyôû¿Ý—ŸùôæùËž=”Þ?{õÞÁËg÷÷·/Nd ¿ÑnAoQ{Ó…·ÿ¹>-O~x8ŠÿPRýêUÝ­Àäq]Î?~{óâöÇÛ‡A–°_rSz~÷øîáþõ¯€µ¡{ûelÀŒÈQœ=`žòÓ{fJ᛫›çÏï___ þJçOÞŽŠW˜ßèä1ôötœËonn|þpåFäIpÀ›òÄŸìx¸yùßÿõäÅÍó ¿$øæêo¯žæïæª(;¾€QŸIÄX ìX¦øôS|’C'ö¼E'Ý'œø<ÛZ>Õît³~Åg°}&ïr¡ëqK $ú)>q‚Oqy Óg?éû¤)>uŒp ¦OñÓbˆO<8–œÕŒë÷™íRˆ<Åg4ž[ÌxžGhH†ÏŒñw]*ÈNHØySÃÉ)¸¾T0ò¸Ñö— –ˆ_g© ¢`žuãKÕH7¸T°Dï⥂âÜ‹°°ÝKŸØfã8L5sóÐ9÷iRS[9x{Už]ÞVï!Þ-@œ¼Ýõ³½Ç$Mêet¤zÄ[D©˜š@Z35T±Ø9¿xÚ,-7ÐmSÖ”£‡U7“åÎé­nþf ƒ‰@R!äÕJ#ÄÌaà NÁ ÅNu9Z@o:MzÔŽ!Z]©ØZ)…`:Í5Ïz²Â›u>aÜ“Zk¥ÔI•òäÈ8¥v1[ãZÒi¦q)ÙN¥’1Z2]ç߆Õi–˜Dc™¼Ø¹}µÔ¦œJDÛ‡á9â—ኂiÖÃp5Ò Âð½³axR+uäž”eVvi†{ º GH%j †‹*Œ2, ¶ztánÁpÿÔ”ò¨èÄín·ì=Ê#yo+“z·ÃðÃMÁ0é‰%ŽR5«×'÷v9º¥aXËM]2²»°êÉ*d]Šã4p²*É'ÌN§ê}¡Úe†msHq^H½~¿eo‡aÇ sœX‰hû81Gü28QQ0ͺqœ¨FºAœ˜£w6NLj¥<¬{¿e€Ð¥„8ÑKˆ!7Bjk›/‹ª€2*ÅêéŽe}ë£ÃR°£#5`;œ(3"9¤‹]:H²¾ãÄŽ-à„TL¡t§Ùª8Qìà`Ùz!œÐråÃÖât±£ë&jðÕ¡+i²~ÂÚ 3Å.ûm/¸§Ñavlœå*vÀqÇ kœX‰hû81Gü28QQ0ͺqœ¨FºAœ˜£w6Nçô7v+å]Xwu"ú.¦¡Õ‰^B¦ˆq„ÔcéHIœ(ªbŽÁبPìÁ݉IщiÃ$ÒêQx@jÔˆ$sÚqblj¦pB*f†@1c=‹t±Ãœ—Æ -7f0‹»¸nÞ·ÐIOçC8Ž\>ubg¬Ô«;rKÁª8QÄÅ\&Sø¸ã„9N¬D´}œ˜#~œ¨(˜fÝ8NT#Ý NÌÑ;'&µRG–¢—ʼnP’ø àÄ4©±1œ˜¤>¹Ÿó£Æ‰iÑ!¿NLzrÂV–SØqblj¦p‚5å²Rx'z;·8Nh¹ƒ€Š·> ]Ƶ Nø.ùÁ½N,£åœ(g¶¾tµsiÔy +5w¥=c4vð»Ç‡aÓ)an FRµó™F¥ñC7Æ)eñI¦SÍ•7Ê©‘€Ò»Î9½R¬úN‹]äƒ5šS4䘙êëq½]p£ÎU¡7ê+uõsѽ]Ͷ‚U t”lõ9Ü­;6úè` ö»¤í&[{ÈQFnŸlÝGGMŽúŠ™“´<#‚M/¢ëÅÉxEØÝ‡ðáZŸÎ¾/îåSéa^y§ËBÔu=6eI´ß¶mMYÕ"ÚüÌæ,ñ‹ÌlÖL³n{f³éöf6gé;³Ù;÷,Ô„v+u˜{þ ÔŠQÇL·¿*WȈäñÃrG‰˜|)¿zñÓÓ«Ïîóó_¾{MÂWòý<»–8ß—Ø^<òø³ûòmÜ—ÑP¹Ôå³û_\?—Â4ÊÒ®‰å_®Ÿžýx}ñÁ¸üÉ›??{úìêGÅ=óÏ»³¯K?+ŽoÄìÀôòê‘0»>Œ ·^Š“îþ`¥ Ä®~ñÖqåäÁ9ÇãÛ 'Jõyè•ú?¿iDߎ‹ßܨÓm†¼{ãÎô[»p¤·NG5±»>ûv>côy0D½^xµÙ ÞÓÒ‰BK¹ÂŸŽÈ;(ûgƒ‚ï'Çà 
àS&´…´ÇÓšîsжú##Çž§sðÙœR»ƒS[ðt1KÁ#Þ[œ{gåÎÓ;O/ÎÓâ)9 ÌFÿ"vñ þbý‹½+Fz@ã;ȼîæ%Ýt†ú]W„”ÍÑBYWä'|Ô»¯ßóskw°*±þ º FôÿáŸ.~)ÂT0ͺy¯DºIÂ?]ï„/΃n‚v+`åDMŽ:æp|ïR/!FÏõ‹ðn¥f×Mˆªä„l½­ž€ïMht Î[Ò„¤æù@˜ö“Õ;M4GBÂ,c²h"é±âhBù1EkBQìrpk&jÊòŒN˜ÿ8N€ntñ,¿¯Ø‹Ýa.¤Úž#—¯s0Vÿ‹¨å4šN¥Qc=~ §bG¼iJªâTg¥ 7Åw° g§q%¢íƒÓñË€SEÁ4ëÆÁ©éÁiŽÞÙàTœ#ˆ{´[)ð+§¤ŠÒ=D?NÓ¤~xûó/ N½z”>ÒÛêÑÝ1pꟚõâÀщnk,cr0æ½…ÃÓí;8íàÔ8žÍÎ(õ·š’ª·K°ô¡R.¡èÍH?ÿUSRAÇ‘8°Îzû†“—RϧRì§QWƒ q5ˆc4î‰îí2mš«8Þ ×9S\Ä´oï4§•ˆ¶Ï0sÄ/Ã0Ó¬g˜j¤d˜9zg3Lqtï~°[©ënKŒì;ty€aŠ„!!5a[ STÇÙVã;šÕGÇ ÊÛ=y$Üîàzï‘9f¡Œa_üÙ¦-†ÑÌÒž¤#§Ÿ¾}¯kºÅ·*k¹)z0,÷vÁóÊ Czšá8ÃøÎKWŒõd|½]ÀQ7 âáMƒpÔ©ÔƒàÀËÞj—ˆÆ­Î¦S¡°’«'ÓïíÈ;¹n—×Â"i#c¶œjW@›ÒZq*ˆˆÞÙâ’;­YÃðJDÛ§µ9â—¡µŠ‚iÖÓZ5Ò ÒÚ½³i­8×ÍÐÙÛ­TÎ+g-ιs˜hm’TöЭ©ª¤éˆ}4Õ'çîVÖâþ©ÑQÑY&w0ïº:­‘È®u)fÜim§µ¦hÍË 01Æüábî·ïV`±ª[šÖ¤\tzZ,2Ù¥itÅ #À­EùÊ™ØüÔÕGåóÎ'‚ˆ9ùd1ŒÚ!Œr &8%AάæVÑÚèÑ|RfD–ªoõ€bZë¿'¨ÇÏ4|ìý÷”èìÚ ÿfmü(šµØÞÓ„îýw[ýw$rÒÛ`®'†(v‡g꿵\MÕ˜\½ƒ*v1øuïDÖû\:Þ‚g½óÅJuHÏaÓé@uªçŸÉHüPì í'wÍyžJDÛŸœ#~™éÀŠ‚iÖOV#Ýàtཱི§'µRxìâ–%§ ;œì%dÈõÌôo)´…E•§”s¶Õ{wÇp¢j½D‰GDç öê8¡•ïì1;·çÚq¢5œï…¤=´›M’&rÔB½÷ÆÔ–õfwŠl0ŒÚLž,Æ0ž]Î1D󫻸&Ãxì¤ÉÂ8È0A¢®÷ÌYp ÀãöQø`¼ž¨Ç€eW¿X®·#µÕÞ[Ç…cW²j3o8; m §Áçd‹ aß¼aÃ+mŸÖæˆ_†Ö* ¦Y7NkÕH7HksôΦµÞypž²ÝJEäu §Ô¥0”g©H `Nn„ÔÐØâO¯Þ ‡POîŽmµ/O9ô#¢“7Üj/É2›nŠ2ÎûVûÖÚ¢µ¨[Ø)çêy–z;X<Ï’–Ë.F)Üú€ræ +/þ 63ÇÁ‰ôS÷œÐX¦*vHÛ¦m-N9Çhä{ëíÈí8a+m'æˆ_'* ¦Y7ŽÕH7ˆsôÎÆ u€AÆaf+.ÅUq"K{ŸÒNôRXéÜz»ØØ%E²ólõHw 'ÊSëæDãˆÞ.ox DñH9owã@qß ¾ãD[8Ae}Å;ÓWq¢Ø¡[|/8éúŠ<2’ù±  «à„w  ¦ä]„l5Ñb÷/Ï^>tŸo[•øù“Ç®ÿûË‹§g?k»éÇvçß—ÿwþ¦éþô¯//¯ÎŸ]^|þôÙ“ÿýéÍøááõ£GÎþñ÷ü]¿–·¾š&^ÿþ“?þôôêË«´xó‰K9ŸüûõcA…þGú•i%üÓ%¼}¨©E1¿þú·o¾ÈżSÂ0µŒÛ×§üÑW—ÿ#½óÛWòó?Oy»Ÿt]wöé§gxvýPº¸ki¬úïôù)…ýÇÅÍÕó§ÒaŸ}ðsϬÌ'…ð“?\ýðèw×ÿzÄåô7[êÆ7¿ýâXa}‰—éêáåe8‡Ë <|Içœ/Òùì+¬|Ų;©>U½žô¿¿zþ䥌žÞVÓÃÈŸ$´Zìiïð7Wµ ¼_Zùq§<úçÊØ·½¶Œ§nž”«Í˜üúUéÑÏžßÓ®üÜÑ9’tà8ûÏ?K¥zçwÿúø…Œ‹O(ù““þèVгòT÷ÞGܾߖýÕÓŸÛ†{ýÀÿÄ’fvÙ·¥ôíFÿSZÛÊ(Þm=NóпCk‰Îþ‚Yeý Tñóï^Ý{ô@xqï´?òß72l¸ý·üK ëÉ‹êçú‹îÕõcy¥¯„ܾ“º’Ž×ÿü UÆ…Æ{]\³.ÊݾŸG×?\u?]Üü uïõëï¶«l'uôï5Ä'ö–'u&¿½¹yùB§JœÜóq¡ Ñÿ¼º÷áñå‹¿ØNÅÎùQé‰=Ns'µ¥ bÝ©ÚÅ£²lgœp“ÂÈKCJ,§ä=ÅMiM=iŠ“wµ_&cÃ+mŸÖæˆ_†Ö* ¦Y7NkÕH7HksôΦµI­Ô;ÛpÖØ’|'×­M’ŠèÚ¢µiêìÌú¨imRt|Ø0½¡zŒôˆ¼­ŒqOo¸ÓZ[´&“u”œrýBÌb‡¸ø 7-7{ï³cëâ|d>pÉÔ 
¡s@.æ›&i~t$N†R±s4j™+€N¬i`!§:Ì;—·=VWœrˆœÑ—hÏ©hN+mŸaæˆ_†a* ¦Y7Î0ÕH7È0sôÎfuž@šnÌf+•œ§U†cî( åT,|`rÁ– Ìm1L¯žäG¨÷þŽ«+O8Œ¨†áà>²ÕF=fïB0æI‹º=KÇÎ0m1ŒTLÖ]kþßoß«À¬É¢–fîôºGÒÛ€HìŽä¾X–aä[ÌÇ“t§WNjrÃúlEo‡.×ƸwD Óf"p0œJÃÇœ·D˜^\ÔëN¼)NS%îcŒMkmaf‰_aj ¦Y·0õH·‡0³ôÎE˜[çÒþÔóÝÚù•ÓÂ3tìf¢Ô„M!L¯*yLžmõw a&F'»Í¦xdðÞ؈;xo;ÂìÓÂhÅdÍlCª!LoG9/Œ0Rnp.&2òmôvVÍÐ]Î0 1LÒÒœ¢h)¥`Ô-‘Á› “!z ÑìÕÄÇ¥…¡¾M c ’“áTìr·iÎH @[HŒàê×…övnÛMs½SJ!%°ÅÅq;­ Ã+mŸÖæˆ_†Ö* ¦Y7NkÕH7HksôΦµÞ9'ä­;%º ­‘Næe µ"!¹”GHå¶Ž8©ª$ý”t9d©OºGënÑZyjDÝgGaCZ+õ"p[Yô¼ÓÚNkMÑ P‚:­AY:há¢5-ׇÐyã;ȫҚï$îŽýÍ@#üÂ#DC©ØåqGœ ƒƒHÒîAõ´Qoa­%ÓiiÔä’áTµ—¹Ô)è³"Ûâ˜q'kD\‰hûà4Gü2àTQ0ͺqpªFºApš£w68õÎ3¥äÌV `íÛu§ž§pš&5µuÚ¨W…9Ë@ÈVìˆôG Nå©õèp=Íü­‹ÛSñ˜„Ã=Æ {nˆœÚ' ‘ÁçjüÞÎÌ7-NZnräœ1ó v:áµ&8Å.1ÆÌ7Œ^¼ÕD‹ð¨üvÁÈ ¡çk)…H©~ï{ow˜3¨æ”M§¥MÁ[Nµ3:¸ëw p*N…ë²QaŠÝáÝŠ;8 Œˆ+mœæˆ_œ* ¦Y7NÕH7NsôΧâ<Ç@Æ,[oçÝÊûsãÐþ@• ‹i„Tn œŠzg»¯Bx·À© ±ó~e„qŒÒÈÜ t0º¥‹#›}‹]H£NÃDo2Œ&Å9¶ÓÃëŸkNƒá4I嬕6­Þ¨»ÃÄž5§Ñt*•ÀE&é´¤ì¶]p*â¤Zƒ‹bwxãÜNkÃðJDÛ§µ9â—¡µŠ‚iÖÓZ5Ò ÒÚ½³i­w®½$Û­Ôá2«Ðšç.ó­M“C[´VTE´ÕGôw‹ÖÊS“ãÈc¢³%­©Ç좓•­Œ÷„ ;­5FkIrFõLsÅ.ÌÇ-DkR.8ByëÓ»¼2­Åð8¬åΡȤú€½ØºÚ(’0¹4 ¬iªÞÎm›º@J=Í.Øâ2î9ßì±i%¢í#ÌñË LEÁ4ëÆ¦éfŽÞÙSœ‡€Án¥ä•s¾¥.§¡œoEBBö#¤&h aŠ* >҈œàý¨¦<µ"²q\µ·s"Œzd§}ÙTƇ×ï³#L “u!Ç!çXÏù¦v™ÓÒ·³j¹ ß#Y"L^uÁ ;pžÓ Ãè1X]œ³”ŠrsŒ¨ ˜][½Ïù®u0Œ.¢6í`ÄcöÓe÷9²½ƒi«ƒáN/ÑÂbµƒ)vqùFÊÕS“s}!»Ø¹´fƒ¹“Áûÿ±w.;—ÜÆí4XW’É2@^À‹YY(¶‘"C’í×O‘}fr>ÏiV7ØÝ&t(í¾)tý»Nóòã¥J1½`r™#Z×";JÍÓ½5 ªS‰6g_œàÜrw"½ªGü9ëU Ǭ_¯jFzÀõª½ÝëUÕ¹ŠÍxÐï¥4\[£€T–´±ZuL(¶á^UE.Õv¨Ïov<º¾u¢œ:kõF˜0)Šýç)KöeÎÕª £ÁC BìÀ„Ù!^ hÃk@f(^»ZE)Fz}L5¤—¯78?}ü€Ùú÷O§Õ´n¼dvútÁ¤E¹‚¿>,fpÇ †˜Ó¥eâbÃQˆ¯óG ؈§6½ÔVZíì›Ø Nx`sjØÑ®'·ÚY[Þå|§öûcûÜójgV»œ:©‡ìad¶à8NÍñÖ“à«S-ëå;ĉÎÔCîÜ¿Ññ±Gü9ˆØPpÌzpDlFz@DìÑÛˆÅ9çT¶öÜ^ŠžK]ˆ¸æ¸5˜ª4ÖÞÚ1õ‘Þ«ú÷±è$É÷!âe6-{kÇBDX˜S²9wjc¨v¥òõÙˆhÏU %iœ‡#Ì9jºÖTà([Œ` 1Côfì‚ø\§¼Å0ä€SÙîã2h«Ü[O®:(W“¢+Î wÖ“s'§ˆŽÏ0=âÏa˜†‚cÖƒ3L3Ò2LÞn†Yk–vá‡À¥ £¬‹M7¶¹ªTµ7Ù!u´m®ªŠA±]˜lµ#|¯óèêžè0Ê} S• Mà++wö&ÃL†Ša°ìø`Fç6kµ3†‘³¦<7Ä`ðÙQºô| /Ê•ù× C €ÄHÛcaµ“¸«&¶²Ã0Tû¬è8¥’ä@oe˜Câô)"“a6&§ˆŽÏ0=âÏa˜†‚cÖƒ3L3Ò2LÞn†9ÔKÅ«“Š&]Œ46æ˜ÔÁ’ŠSŸ4¼ÃŠNF¼aŽ(‹ø”|2Ìd˜† ›°´êU; §3 U6aí:m«^[ hcÃk†áÂ&åt_h/’ñÊ:»®©8 õ ‰):Nˉޚ¥auš­5§ä‹‹œ'Ãx“ÓFDÇg˜ñç0LCÁ1ëÁ¦é¦Go7Ãç)@çÄë*òr† ÑÓe,†©êA0;Wc×·Œü^ 
³F'ÚÔ@ýè€è} S=2²wݨÚÑÓ0>f2Ì à  A³0Bµ ñü³då¹™3xݶÙ!_É0KIÞ¨û0²XÌ‘¨}ê­Ø)?%n1Œ: sÌé¾r ]§ Jc$Ç©Ù=¯m9õ’CÈB×{3ŽS³Ã¸/¼ÙwšRÎÀí‚};¸÷¨^qš…$rpÅ•Œ¢½¹#¢ã#bøs±¡à˜õàˆØŒô€ˆØ£·õR6ϸ6#…Ä…o â1©£Õ;¤þù0Ú[ â±è$¾«Ç$†íÙWöœ‚q"âDÄQJ¦ ƒ¿ à|Àfy׿M .ƒpŠº+û@ß){Æö¡Äj·áò\d0Ò辰^»¡g}’lÁ0«áEö:5³SÜ•#¢Kk%9¼ï›0;·Ë©w›KÊØ9~YíðÞÚ„«S ‰sôÅ Î]DwîßˆèøˆØ#þDl(8f=8"6#= "öèíFÄÕ9 ìéBðZDDÙˆ6ñ˜T c!bU‚„ê#¾"Ö·NÙÉl¶F‘n¼ÍU<"HDWΤ…ÇBD›ŠØô—B3ÛûjÂéàTžk]/8 Èì8_™?°”Q08üqc€)Iõ,VNâõj‡²œØ§Xú FElŸŸ¨v„÷&^¯NµdÚa_œ>¥ü› ³19mDt|†éÃ4³œaš‘azôv3Lu)y©V;䋦UÙJ¼^%ØÐŸ1í:XŪUU®µI|õIÞŒaE'Ó'!‹Gd›Ü©?Ç@œ'!'à Æ0q±n5#qhgÕ«v‰ÒÙ SžKF˜½y°Ùá‹k¬'nþÀ"B9ÐÓ‚%­ˆ¤f¤ªÒvŠúïøéïå+øù§–oÿòç2ö[÷ô?é—òü þû»_¿¥oþÃŒ~ÿ‡?}÷ã·¥“úãwùÙfî ¥5 Ì¿û ùÿ¤ëŸ ÷oß”ù½ããéño 8WȪýp»hMZK¥‡±ß˜§fGùÞ{kÕif¬¾¸(³º°; oDt|Zë­5³œÖš‘ÖzôvÓZqnsË“ßKå‹%ŠÚðuƒÖŽI̓í8­êS¹Þ@ô^´Vߣs§¡Ú=gѺœÖªGÔŒ¾2kÄ“Ö&­ Ek©ää„Á9ªWíì>›ÖR9H3{ ¨)¼4÷ÆrK¶h-j2ˆÑì1ŒÙIÚU<*ªË0%§•õø ŽS³ ²+iaŒ{œ 'åä;µ¯u—SïÞZ¶’C=wÑtZìJ݉[i­ŠC±<»ââ¤5wÞˆèø´Ö#þZk(8f=8­5#= ­õèí¦µÕy œvôR¨ñâ½5sƒ[çW YB ;¤&‹Öª*A¤¤¾zz·+dÇ¢ó4¥¼œÖªÇ’þü9˼B6im0ZË…‚2圵IkÕ.áé{kå¹H!‹ˆÓ€ÊŪ@×ÒZ†’òÇÆxÈ~©¼¦nvA7i-Úøïßþïìçÿã—ëMßüýO6Š[Pÿ\›þ÷Ö_—ꕨGdÀU2žöd?Ê´Q÷Sõþ‹}:ëÏØíŽ·gZÛ* ôË_øõ›Ÿ¾¯ïö¯ßüçý£C5¥ÃÎ¥~Lã»üb‡x<>œŠ|l‡_ìæå2‡ Ú;ÅŸmǬG†G/Ò£Ác§Þ>xü윣Íd²ßKñ«õ·3·úQyŸ%H–ÜÚúbp xü¬JcNÑ«,úFðøù­³QV Ñ>Û¥”n‚LJGÅRÇÌW¦Ýiöýù¯“';žÅŽõ»d¬Õ ¿¾1õéã÷kvÏeÖÏ`ÇÇs… p«\øg;ŒùJvÌKŒ`ãÇ/Ç(-X"‰¶'ìÕî9Ò4Qœ¸Â}q™çV”;MlDt|šèM4³œ&š‘&zôvÓDuÄâLÑW‘zñÁÁŒKÌqƒ&V©cÚ!ˆÇ¢‰ª -ÎYw¨Ïð^4QßšÊw˜üè` ÷ÑDõhsþ0qÒĤ‰‘h&k495ibµC9›&Ês#Ûÿ‚Nûá3|!MpXÐ^1¦×4Ö‚¢ý:í–^í‚ÜKÕiÉ8’ƒ/Nž2%NšØ˜&6":>Môˆ?‡& ŽYNÍHH=z»i¢:±ßKi¾ö`›–Ó )oÐÄ*U£8«û« FEUɾ }õã{ÑD*©žê¹\NÕ#ÛŸ|eä‰I#ÑÚ,]s`þ:Kù§ß¯ÑDJt6M”禢¨7G7;ÆKsFäEE_¦½3T0‘„ö&Jµ ù^˜¨N•Y}qL8a›%6":>Lôˆ?& ŽYÍH=z»aâP/%píA'¦¸Øx¶Ǥr &Ž©oõ­í eH~tô¶,ÚÕ#‡*ÁW– &LL˜ &¨”â5O@m˜¨vúÃ'ÁDynD4œðæèfrem]D ñL”æ[NÒzBsMA4ÚðbêQ!¸ê?$…y“á¥rbôÎH¯Q|º |ÃðR– D(‰¯LòMôˆ?‡& ŽYNÍHH=z»iâH/%¯ÝùÆ…“âvo_ºñýÞ^d,š¨ª(§Ä°C}z³­‰úÖöÚ¤ìG‡ŸÒ·\NRËÚ&'ÅGµ#I“&&MŒDö]æ9ÄÐÞš¨v!ÂÙ4Qž ¬h·×Î@/Nüœ¸ó­ö£Yl_/µ~µJC{±ªÚ…¯·è/…‰µ¸6go˜®vi„7KlDt|˜èL4³&š‘&zôvÃÄên5}ùÙm PyÒĤ‰‘h¾˔Ô0©IÕN€Ï¦‰òÜ’#‘Û~RN|iаØ0'¸QË(—l@ÅNÂÀjGpïÞDuj_Dr*£V;e™4áMŸ&zÄŸC Ǭ§‰f¤¤‰½Ý4q¨—ŠÀ—ïMÜÚ›8&•»6QU¥r¢-ûêÓ×Wü~Û4Qß:3±s©dÎSŽäËi¢xŒ 
ê+ЙÑiÒÄP4Qê9D‰¾Îƒ÷é¾ß9Ÿž0°Ö“QJ쵟ô*#í™{i‰Q¼>é¡´` =µ/a¯v€·¦_ eóë‹ã§Û[“&^O[ž&ºÄŸB-ǬǦ‰v¤Ç£‰.½½4±:·éRhgÅ~ˆ”kS:±¦y#ýøA©9E«ªRò´M·äôV4ñˆNI”üè<'g¼š&ªG´•¢ÿÕ!41ib š°ïRjª1þ:¿ñ§ß¯ÙÑSfÑsh¢>ׯHÑë÷ÌN‘®­Œ ¿† ( ¸HuæëÅÎÞ†n…‰*M[WœõBskÂ%6":>Lôˆ?& ŽYÍH=z»abužrJ¸£—zuVôL˜^ÌÇL¬rN¼Gj«–ÑªŠ•)G_=½LŠÃ}ùaW‘%ŠøÊô‰¶'LL˜&ì»Ì4ç6a¢ØY§yöµ‰ú\Ãu{íÌ.=è„‹áB­ ´L %MnS)ÖáÞj«¸T¶˜ÐG:óÃúÓÄFDǧ‰ñçÐDCÁ1ëÁi¢éi¢Go7Mçå GjŸæ_í‚^{m‚*Cn÷öŠPІûR!uÐiUE ’¯Ó{]›xD'r þHn£å[Õc å,²¯ì9%Τ‰IÐ.Éfóòõ¦ß§ß¯D¡³i¢<×(ARôZ¶Ù]›Ò BZç´qoÈš0‡L)·›új‡r+NT§ŒÂ@¾8ž›þ<±Ññq¢Gü98ÑPpÌzpœhFz@œèÑÛÕ¹@É“³£—J×â„Í&ó²±9±JÕìÜ’{ؽ*ÙýÏĉª*‰&_}zqNë7å­%PHÜèH¸'ªG&ûxÐWF³öĉ±p‚ÊæDbÍÔ̻کœ¾9Qž+6öd¿ýdQMWnNÀ¡\ÏzM\Z0(xËUÕ.„{7'ªSñ×ÒªóÜœp§‰ˆŽO=âÏ¡‰†‚cÖƒÓD3ÒÒDÞnšXgôÖ$.*Lšð¦‰ˆŽO=âÏ¡‰†‚cÖƒÓD3ÒÒDÞnš8ÔKåpñÅ ÂÃÖ-ìcRq°½‰#êã‹•¯ß6M‹ŽÜHÕ£äœÍ»jÇO¹¸&MLš€&J‰ie4Noß®vô´7pM”çÆH%e·×~̵bKõ:ÜØ›ÐÚ·”øvK×:¾ð½4QÅQLÞ­Žj‡‚“&¼ib#¢ãÓDøsh¢¡à˜õà4ÑŒô€4Ñ£·›&õRDçt ²HÚº†}Lª vq¢ªbæLÑWÏáÍ®a×·–ÀŽè<Íy.§‰ê1Eɾ2“&&M EZ®WÛ,=FnÒDµÓ§ièI4Qžk Éï÷2CÆ+3Äêba|]½bmÁœÚJãÚWå[i¢8M%3&_\ÎóÞ„;MlDt|šèM4³œ&š‘&zôvÓDuŽ`Ý#»½T‚L—Ò„Zo¸°AU“Whoµ#ˆcÑDUC¹bí«·Ÿã½h¢¾u)Të$ìZíèÆ¤NÅcÆÄÄU–çI§ICÑD,'˜‚ÄäÜ›°ï7笧Ÿt*þ9ÊNû1»ç ìMÐÔ×0‘j.Ç(ÛB«Ý‹,—ÂDuªö1øâ”Ä o–؈èø0Ñ#þ˜h(8f=8L4#= Lôè톉ê<‰‚³³Úáµ—°•eAÎ0Q%d(Õ…vH-§“©Š%QbÎ;†‹ô{ÁDN©ål;­QL7ÂDõhs!Fô•‘Îr&†‚‰T:•”dЄ‰jGO½æI0QžË"sÙ­Úñ\¹5—L9(mÑDÎXR‹O©ÙY0Úøbª%9Ř>¿å»/¢C!ß9¾˜Ç·“¯LŸ2ÇÏñeŽ/Œ/y±I[9âýu¯ùa|©v§—3*ÏMåι3s¬v¯_H¬ªy=¾ä:ÇójäÞzFÕ©î'0ÒºËˆŽ¿ZÕ#þœÕª†‚cÖƒ¯V5#=àjUÞîÕªê<¢ oÉï¥bÐkW«bZÂÖbUUˆ½¬r«Rl±ªªÊ%)–øê³Æ÷‚‰òÖ¥âIÄàFBäû`¢zTCXçhHµ“4O˜ &JÍÓ’„480Q²àE:&$€9Î}Ûj®½•WÒÍÒëâ¨}E¼¥tµ թÄ$}q‚i„3KlEtx˜è L´³&Ú‘&ºôöÂÄê¼Ü‡j][íT/†‰Óòš&ªDdH;zûŒ<M¬ê€Úµ²o)á­h¢¾52ŒJ~tòÓ9±«ibUÆ2fWQ˜çh'MŒDö]Jà$$¹¹5±Ú=WÕ<‡&êsµ”¸È^¯mv‚t%Mð¢@¤¯·&Jß"ÖAÇvK‡:¾ˆÞJP;H@iCÙúù)ᤉib#¢ãÓDøsh¢¡à˜õà4ÑŒô€4Ñ£·›&ªs°Á¦}û!2§Kirlq»·'È)e¿·'HcíM¬ª0ÙϾzÔ7£‰úÖÄ‘w –D7ÞÊ[=JFŠÁW&2÷&&M Eö]ædÓGlç¯vÑtMå¹Ùš¥sí{µ³™üµ9>løÀôú -¢µ`œ{ÓÕŽ2ßšãcG ÑÇøÔ MšØ˜&6":>Môˆ?‡& ŽYNÍHÿ{ç²sÉmÜñWñÊ;·Éº±*€yƒldŒ-+V`[‚eûùÃËgáÈsšuìî!t¸˜Å jºþ]§yùñR5!MŒè¦‰âœ)sÐéC¤\›ãƒ@7‚üãǤғz@_”&Ž©Wy/š8¦ûòSf‘M,š˜‰&`+Iý ¥ŸüÃîáÚÂI4QžkhQœÍïfÇW^› Ú S„çI>s – ÈŽR¬=B¼5ÉG—¿˜üñ¸âDVmTšØ‰èü41"þšè(8f=9Mt#=!MŒè¦‰êÜbîb“ßK©Æ‹÷&òl"î÷ö%Oó¥šÆ¹h¢ª‡rÏQ\õ)~¾ ô˦‰úÖ(€/ 
–éÎKØÍ£Hž*ø_]\ùÇMLE¸qHe?û'ªœ^ͨ>×Àƒ:í§ØÉ¥ÕŒ`ƒR˜,=_¨´àTòùôÏdU;½•&ŠSL¦Á§7ÄMìL;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4q¨—ŠéÚjF̶Y”½‰CR!LFU2ƒs–§©ç7Û›8„ïMTó G_óJ¸hb*š zob·šQ³Kñô½‰òÜd¢*ä´Ÿl§éÊ[Ø`›HJagµŠËZVä Úß›¨vnMéÔœ2¢×AV;¤¸h›&v":?MŒˆ?‡&: ŽYONÝHOH#z‡iâP/E‘.®J[žˆìÐÄ1©ˆsÑDU•‘@M fGïUµ½uþ_ æG'=Œä—ÓDö¨!XÄä*Ëvq%ˆ]41MpÍ—ÛÅÝjFÍNéìäå¹1˜f^¿Wì®­fkÅ$ع…-µKHÐç)}¥p+MTqƒ&öÄi@^·°Ýib'¢óÓĈøsh¢£à˜õä4Ñô„41¢w˜&šs ’ßKÑç5Î¥ ÍcŠÒMT 0ÅW¤Îv »ª’ÐÙYio‰ô^4Ñ¢‘å…èH¸qo¢z4a/(SY÷&MLEÒN0IÚ¥‰jÇtv†Øú\‹%g`tÚO¶Ë#Ì•{º1³î$ˆM¹—‰@}¡Õîq¾~L§H‰Õ‡xÁ„7KìDt~˜Lt³ž&º‘ž&FôÃDuΪ~ŠÂ¥0[Ì,‘`¿·]jš &ªªD™}õòn bË[S`õŽ,7;€û`¢z,eNúµ›ÀÚšX01L¤óä1©&èÂDµ#“³a¢?ÉúõÏ¿ßlG‹Ú'ÑDy.¶ˆ^û)vŸ'J:·Ü„ÚNñ: µop†ÂfùÖrÍiþrb|AÃ:éäM{ž&†ÄŸB=Ǭ禉~¤ç£‰!½£4Ñœ—R¥¤~/%tíI'2Ü v®M4 )õ‹×}Hµ¹.a7U øêóçðV4ÑÞÚ„ôÂ`©%®®¦‰êQ£at•id]4±hb"šÈß%GIÙtÄ6;‰p2MÔçj(t¼~¯ØÅKKa‡ #QŒÏi"¶­¿ ÙìÞš ¶9eÎß„ùâø!Oõ¢‰ib'¢óÓĈøsh¢£à˜õä4Ñô„41¢w˜&ªsÉþûÛ»v¯ªIÇwh¢JHÔ/ªúa7Yñº¦Ê°,ÀùêíIzÛ_4M”·¶€Äà†98tMT( /4CX bMLEqc`Óì²[¼®Ù%ŽgÓDyn‚h‚^¿—íâ“U”iB7BH¼³7YIÊ–úɧŠ]éÈoMéÔÄA¹Ož¸Ü?ÊJéäN;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4Ñœ›²³ƒÚìÒµ b Ó&švh¢JÀ`)¼"Õæº7ÑTQL”ÔWÿ˜/ü-h¢EÇXû§«ÿE¾&ªÇ-ùÊ$®{‹&¦¢ Ø% êïM;6;}o¢<·,¢`òzmÆr-îÊRغE*wÒžÓ–l¹êçtú°‹ñVš(Ns 3(€+.âCMìL;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4Q³¨8 2ÍîóÎþܽ‰díÁDU 1c—½ Ô&ÛšhêUœhvò^—°Û['uVÞšÝ×&ªGhôÂï!¬”N &¦‚ Ü8š0÷·&ªëÙµëÊs!æwu*aØ!_[»ŽK5 ݃‰ÌRòoEŽR³hÏ2Ù~Ùñ%«$JÉU!…w_Jt$©øXôúñ%{äòJ/ünÖµ¼5¾L5¾Ð¢âôù…ן/Å®ÔZ;{|ÉÏͶ;K@Ííâüãd'yVgˆl¥œ‘£´Ø…[“|4§–4úâ4­Å*w¢Ñù«FÄŸ³XÕQpÌzòŪn¤'\¬Ñ;¼XUœçÁ‹c ¿—²¯MòÃÂNÊÀƒRm®$M}¦1î×búxK”÷¢‰úÖûI>>¢(÷ÕFm)•ÜȾ2´u-oÑÄd4Á˗餭vŠv>MpDgµªØ‘2^[U™å9MpnÁòÈáÜ|+vhŒ·ÒD—½†à‹#[4áM;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4Q !²ø½”<˺zêÖ7n¹;ß¡‰*A9/ø‚Ôg5»¿$M4õ)9éÓÿù–o¶7QßÚJ!Ž>C»±œQõÈ•]e !-šX41Mpž¥1{’Bê럿\öôlš(ÏÍͽ›\i&]I6Ãç4!¥Ç<¾ôË„7»Àr+MT§B1ZôÅIÐEÞ4±ÑùibDü94ÑQpÌzršèFzBšÑ;LÕ¹i)sï÷R&צ ´Üc‰ì$ ¯8(š+U³2{_’&ª*JN‹f‡o–2°¾5(¼ðÛrºñ$mñ˜(•;C®²„itZ41MH¹–g˜ýi—&ªÝcÉÏ“h¢œÇ¸œ&ªÇ„ÅŸc¨¬,‹&æ¢ -kþ’ÿ|¾óýõϿ߲ý–N¿7‘ŸK‘D¼ Õbºro·€°SÌÈrû5€¸"«Úº÷ÖDuÊJj/ˆ# ‹%¼Ib'¢ó³ÄˆøsX¢£à˜õä,Ñô„,1¢w˜%ªs Îæî‡H½ö6§ tïœS“ Èú‚ÔÇ"S°DU¥æ¤ä¨v)¾Ù9§úÖy.NšüèX¸ñÖDÉ2PKût”e;xHR°Xb±Ä,‘¿K„ $ÎÎD±‹ÂÙ,Qž‹5º½6—«{—ÞÁÆÍ¨Jr Îól2íîŽV» |kF§CâbþM8ÓÄ^D§§‰!ñ§ÐDOÁ1ë¹i¢éùhbHï(Më¥ éÅ¥Qe‹qggâ˜T 
a*š8¨žÞ‹&ŽE‡o:_LÍ£2aŒ¾2yX\4±hâËÓDù.s·m!õi¢Ú‰>Ô=8‡&êsòÜk?Ù.¾ƒ]ưðüÖÅÚ‚-jŸ{š‘ÞJÅi†Â’|qÆ‹&Üib'¢óÓĈøsh¢£à˜õä4Ñô„41¢w˜&ªó<Ø%·—‚˜.Î脲å‘o‡&I˜ko¢©B%¬ ø^4QߺnÞ˜z¨$r9MT!úÊ’¬½‰ESÑD,÷Ð,Räî9§b—gÒrö9§òÜÜé)sÿˆNµÓ+iBÃV²)ðÎÞ”¬ÙŸôû j'rkiÔê9åo"¸âtÝšp§‰ˆÎO#âÏ¡‰Ž‚cÖ“ÓD7ÒÒĈÞaš¨Î%S]úC¤ØµÊ­ Ù©zPêdµQ›ª ûµ2>Þòó•¯_6MÔ·V*Ieüè$º¯v]õH™££ùÊ(¨,šX41M@ÉÔ“*vk×5»€x6M”ç¦Ùí÷0Ã8]yÒ‰¶LwîMæÌ,ûg«ɽÕ&ªS âl‘Ö—0XµëÜib'¢óÓĈøsh¢£à˜õä4Ñô„41¢w˜&šs àöR?l® ‰¸Qàô}MA$/mù‡Ò4WB§¦ CHÎÂW{K}¯bÇ¢Ê÷ÁDõ(™î,øÊXxÁÄ‚‰™`"—hœ€?ßôûú_¾_4³ÓÖçR ‰XÝñ…òL?\ ʉ«çã åœ$«ˆý>ˆjK÷nM—òK.˜ðf‰ˆÎ#âÏ‰Ž‚cÖ“ÃD7ÒÂĈÞa˜8ÖKéµ0Á&ƽ­‰CR#Ñ\4qH=À{Â> 7nMjæM™òJ»hb*š ²äŸ¿K“þÖÕ- NgÓDyn~ÙÜ~Ðk?˜¢×ÂŽY`JÏi‚KK)Iè¯þ7»›i¢:– f¾8Z4áM;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4Q«ö+ }Øñµ—°5…-h;4Q%”:{I|©F“]Â.ª,”ôYÁUoAÒ{ÑD}kàÌ«þg˜?ÖûÒÃ6’gäã¥6ᢉE3Ñ×Ù< vi¢ÚA<;¥S}®qJê¬T;¾’&ÊEoŠ;0!¥G%'ç^³ v/LT§ÌôqüP¶uÁÄÎ,±ÑùabDü90ÑQpÌzr˜èFzB˜Ñ; Õyî~Bb¿—2¸¶r•rÛ;0‘%Ä<ÄjpnÃV»'KG_&ªªŒlÅWÿXÈô-`âPt0ޘѩz£øŠ2¶•ÑiÁÄT0QêK§(¥ЩÚ:=£Sy®E ¬êµ´@WnM€mA 3ÓsšH¹GÈ‘²þ„½ÚÅ›óÃV§œÕøâ˜`Ñ„7MìDt~šMt³žœ&º‘ž&FôÓDu.,!’ßKÉÅ·&Èpc¡š¨4;ÑÕ.ádª*Ó<œ½hã7Ë[ÞcÍÝè@¸sk¢z¤¨@/(ÇÊ_‹&ML@ù»dˆ)ôóÃV»O?èTžKH19—ݪ\š–`#MÈüœ&´¶ôýðcyåïþ’g$ÿ"ÿ-âï¼)J²è¾¥øp$â»ØüÕ}W»ÖH¹¿ÿ±MÀþZ¦cÿù¿úáûïÿ”§jûã¯ÂOÿöãß÷y0ü±MÜòDòÇ¿ýô¶uý©{ûá»í§_¨voù_~û÷þ÷¯Ÿr¿ü?ßýåÛïûøÛü×þøïŸò ûÕ§?#ôëß·Îî«Üí2+ýï¿ûæ«OJß|úÝ·ø›?|óûoC ú›ßɧø›oíü$ß–2èôëßý«Ö©gaÿ‘PaÓ7R?U.ÿÇh⪚@œs¬^ó/õB?LOea¿5㸑&ªSOH¤‹sXÈÃmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&ÎÌRŸ Ä^KBÂMì -ÛRq²*M$SùFý¯Uùho¤Gç½^×í4Ñr«ˆœƒ/þÍ.3ßIº¡$ôƒšVGzÆÚT©«ÔÚHw~”&Έƒ‚n‹&¢mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NÍR$÷Ö ¤,[ÙHÐÄ9©Y碉Sê?õõøWÓÄ©èÈÛžãvšhËK[öX™™.šX41MX­ÅÇT»Lti¢Ù^ž7QŸ«Ì–ƒÚÍŽôΛNÄ›"úgšð2‚4qй¬Ù%|öl¢9%‡|nÙíÒ:›·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎê‚óÅ,ex;MP:¢‰SR9á\4ÑTiNÜ¿ovÂ?v6q*:*úMT¦šã=FyùuÓiÑÄT4áõ‘P²°›]~û\tMxËîÎ%æ5;ÎùÞ›Ne‚Q4©Ž`ÔÔUºÛAzôlbwÊ&,‹ã·ô“EŸ·‰½ˆNOCâ/¡‰ž‚sÖsÓD?ÒóÑÄÞQšØKbóof)O÷ÞtJ¸±äM¼¤BvýBª¤¹ò&^ª<‘}±VýÚM§ý­[wÆ/¢cþM4eu./.¡2†·3¥E‹&þyš(¿KKHP›¤õhb·Ã·b@×ÐD{n±5;?ÅN)Ýy6‘6Éð9 jïTÀdÝÏÿ»É£ÝQw§fŒýÃÛÝ.Ã:›·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎÊãYÊÓ½4!B›“ÐÄ.ÅD¾ŠsÝtjª„êÍúX½$ù­~F碞£‰æQÔŠˆXëºé´hb*š€²K7ìÞtÚí(_}Ó©=7SY&úýÀv;üðåÒ ±„ ôs…XÀ6ÒÍSðµªÙ±ÙM§]½×{:±z¡Ë›ho]~KŠGGéÁ³‰êÑ%ȵX™¯,ìEsфּ "@ëöÂÞíñjš¨ÏE1fíb—Ò­5hc·2|¦‰Üæ 
Ô²ØzOÏv>Ü»‰“`„€MqL«^d3L¬xt~šMTôYONUOOH#z‡i¢4žÀZ%w‹ÄkëEâÉâQ‚"AÉÁ'´¥&œì(yQe”D©­^íͲÜä·¦Ày«íK7nMeÂŒ/ünV,M,š˜€&ü»´ˆÌò5§úâûµõNO¢‰ü\bÓØž`r‰'»¶Â‹2=ß›€à=XØg¨®h;ŒwÒDŸ8cY4Ñkž&†ÄŸB5}ÖsÓDÝÓóÑÄÞQšè¥8(^›?Áp¿O*ð\4ѧŸ(û5ÓDŸw(ÜW}¾O™‘,šX41MøwÉùÞD¾ô]£‰bGôPGðšÈÏÂP£ÿ¸]ˆ—fà×MìàZ@îÁQƒjuEãÃnMs³7*"©^àÃŽM4ÃÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&Jã) Ò¥]œ4“`:¨>¿KÐdbÖ–ªÏ*Ã|Kš(ª,¯a[½É{U.o-•#5½#áá3¼œ&J‹±äÖi+‹‹&MÌDþ]ZòùÉæó§/¾_K gŸtòç–¤™ õºß¥}“¤×žtR`fžÓ挊œêJ‹Э'öF9$JÚ÷x©cÑÄA˜Xñèü41"þš¨(賞œ&ªžž&FôÓÄÞxì_¥8À¥4SØ ÀDŸR sÁDQ%(ˆ/LUÌo¶5±{‡R¬gàÿ°‹÷•óÚ[4ÿåø…ßMÍL,˜˜ &°éQ‘ê[Å.ðÙ9>ü¹>i ¤„¡ÑÜ.`¸6g&ù<û&¢÷à$‰8ÔÇ bÇéÖKØ{£¦þÙ@[œ=Ü>Y0q%V<:?LŒˆ?&* ú¬'‡‰ª§'„‰½Ã0‘×@ ÒB5<$”ºdkBb.÷~@»V©çBÿ°‹“t*ª€Ðg«Ôë{ÕóÚß%4ª}xñÆü°{‹¤ ¿.šX41MøwihJòu=­O_|¿†ª§tÊÏ¥´‘×~·c¾ò æšaŽU4á¬!Ðü¯ªÒl§&|+Mq‘Gkгˆ+¥S3L¬xt~šMTôYONUOOH#z‡i¢4N1µdv;¸6“.%Ý4ÑÄ.!ùd_*q.š(ªr aÜVÏo–ÒikqØ’ÔöŽ„û.aï-‡/(³UÏkÑÄ\4A[¾„‰ê—°‹bà³iŸ£2YcG9Û¡\J7ÈI<žÃo=„”¤\† g9å/„‰"ÎBÞ½m‰srKëÖD3J¬xt~˜LTôYOUOO#z‡a"7þWL‹]Ы‹GcKǃ=DŠ¥­Ÿœú¦,QT¥|HGÛêÅÞìÒD~kÌYRê•ëv»ððÛ^ιÅüÑ5jMìv1®K‹%¦b ÞÌT1ÖY¢Øéù—&ü¹S«ÿ¸Û•éa9l1q ñùü"e b‘zŠêÝøÞK¥Q&r!mq”xÁD+J¬xt~˜LTôYOUOO#z‡a¢4.ÈF±=JñÕW°sý/<º‚Ý'Uæª\×§þqíý-h¢¼µr !¼àº1¡Sn1Bà€©­ÌBX4±hb&š¼ãà!2|]BáÓß~¿92Ÿ6?7b¾Ž­='I’+w&Øò΄7òœ&RééùS} Úí½ J£‰YË-»]\[Í0±âÑùibDü94QQÐg=9MT==!MŒè¦‰Ò¸10c{”2¼öքư%£šÈrAŸ‹_ª“íMìêSŽ"šê)¼ÛìòÖ@wkÛ;€7ÒDi1W8‰ÜVæóø¢‰E3ÑDÚ„ÙÇC¨ÓÄnNß›ÈÏÝëµÂàlôÚbÎ4†{ê=˜=ÐæÆ.J¶û›zwÐD縇õJKvaÑD3L¬xt~šMTôYONUOOH#z‡i¢o”J×›ˆ>ÞÓQ­‰¢ ‚šÁ JU炉]= 6RívøfW°Ë[“ ¼àt#L”“$¨W®Ûí$®­‰SÁ„æDÑ„úA§b ÉN‚‰ü\´\³³ÕÜ.¾²Ö„®žŽ`ÂA# ¶Ž¢f»|ùb¶ù%«W¶dMõþ¿õÝæk`heþýð¢Ü9¿x‹ 16ã·£u)oÍ/sÍ/¶°!Q}±ªØ1¾XåÏE h–êü’íÐ]¹X[È÷î¶¾Í#DAI¬ÚPêvî=H›M”#Ƕ8 ¶«Z«οX5"þœÅªŠ‚>ëÉ«ªžžp±jDïðbUß(•®=H•768X­ê“j<Mt©ç$ïE]Þ¼1ÅG—2M+ýø¢‰Éh‚°ÆðuŒüMd;„ói‚CDfPiô·“kæ,KéyeT ¹§K P?«Zì„9ÜI¥Q !ÄúÁý%,Á¢‰F˜Xóèô41$þš¨)賞›&êžž&†ôŽÒÄÞ8h\þ©áâL„G{ò©)¶¥BœkobW…)j}w÷ÃîÍ®åíoMˆú‰»]|H’|5Mì-&$¢ö4®¢+ýø¢‰™h¿K,ð,Qé§¿ý~Ýîñ ø94Qž›X0q«gg;¼2ÉÚ½‹$‘B(=Ø Ôw?ìb¼•&¼Q 9YJ=‹{g ë m3L¬xt~šMTôYONUOOH#z‡i¢4ÎAš£”2¸˜&4 Žöê¨a±žÀûÃ.È\4QT©Æ¨ØVïSï{ÑDykSA|á·µ‡.—ÓDn8Qãô‡ÝýÕE‹&&  ð(]̈°N»]¤³i"?WIHBjôŸl¯<é„iSTIø|~AïÁ1RÀXïéÙo½–×%.YÅŒšabÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L]£Àµ4á@µ!œtꔊsÝËëTŸè½h¢Ë;È÷Ý›Ø[äÀZO¸Ûy̶hbÑÄL4çk?)pªÒD± 
ñìbFå¹"`POºÛÅ`WÞË‹$Ñ£½ïXÆ dõ½ïb‡÷Þ›Øe‹¡~ýäÃîáÖ¢‰ƒ0±âÑùibDü94QQÐg=9MT==!MŒè¦‰Ò¸pÔF·ÛAº69Ê–âAòN©q²½‰¢*¡yÌü‚z}¯[Øû[{ÀNõÒ¨^äûîM” ¸©Œ¬,‹&¦¢‰˜×ü Ð0Ti¢ØI:;y~.Ä|ù¨ÑÜ.@ºvo"¦Èppo‚rÎ…Šb&ŠÝJ¹Q– ÚǜҢ‰V˜Xñèü41"þš¨(賞œ&ªžž&FôÓDi\siRJW×FÅȇkGE‚‰’Z[ªÑ\ È‹*É)©”›êäÍh¢¼uÎ’Ü8¼Û=$¸œ&J‹N: ÔVæ4´hbÑÄL4A¥k^Å€X¥‰bÇÑüI4áÏÅZ1ºÛ½4¹ÿhè¬pp’–½û·¨¯Vqƒ€o¥‰qŠ5ÌM„‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0MtR‘®-g„[`cì5¦É¶&ºÔÓ»]›èòÇtLäÍûAh*³ ë Ó‚‰©`‚s½:¬ÂD±‘³a"?7RDkô·ÓkS:ÙMDõùü"¹;ti=yÛn÷¤ÀÅ¥0Ñ%NÂJéÔŒ+&FÄŸ}Ö“ÃDÕÓÂĈÞa˜è¥ìÚjF¶)tê’ªBsÑÄ®Þ@WÈw;y³kþÖ˜¶¼cÁ­î£‰Ü"DhlMeøpÂ`ÑÄ¢‰ hBò’Â|R¯JÅŽϦ‰ü\d¨­QÛíH®<èqKÓÑÖDÊ=XòÕ‘zO/vœî¥‰Ü(„±žg÷Ãn•›h‡‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0M”ÆQ ¶G)d»”&ÄÈG¬£KØE“¢…¶Ô(“t*ªR0hÐD±“'¥÷~Õ4QÞZ£jÛ;Šá>šÈ-bâ\r¢© å1ÙÔ¢‰Eßž&ÒÆ1Yða³Z¼®Ø‰éé4‘rÒo£|Û©ÑÜŽðJš`ÞS8(ŽŠê=˜s–ÚTïéÅ.>[X»&J£)r$h‹{\õ[4q&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Q×T_B} ½¸x]؈Žh¢KªÆÉö&Š*cÀƬº«·÷*^WÞZ<åFQ‡Ý‹w^›(Ê"˜²6• Ø¢‰ESÑ„æTMd©¾7QìôaQû$šÈÏMN È­QÛíâµ{°q. sp’Ö¼[Þm¬÷d»½×&ºÄE[—°›abÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L]£]\¼ŽlÆ|@]RùYªÁoI]êEã{ÑD—w¡öršèQ¦áá¾Ë¢‰EЄmì,‘„ª4áv˜?ô³i"·19¨ÄFÿq»gÉ'N¤ ÞÄ)@å)MÄPzpÎ+W]Ñ(vÉäVšØÅ%‘Ûâôñ Á¢‰çabÍ£ÓÓÄøSh¢¦ Ïznš¨{z>šÒ;J}£TÄki"&ÞPö&:¥NEêõ½na÷y‡J‰\M}ÊM,š˜‰&ü»tš {Åοñ³ËMìí‹YLÚè?ÙîÉ Ïi"nhh0yÙ@•ªg‰v;Ž·Ö®Ë»*-$jx1Û‰ëv3J¬xt~˜LTôYOUOO#z‡a¢k”R¹:?l$ðIöh´ï‘ªs]›èRïÑ'¾Ltyç±ÂÕå0Ñ¥L`-˜X0ñía¿˨̧W¯M; |v~ؽ}G ¬×~Üí\Y ;æJØ!ÄçÓ zÎë¨õ޾Û=«»t!LäFö˜ëÇdw;Lkg¢%V<:?LŒˆ?&* ú¬'‡‰ª§'„‰½Ã0±7®Œ`íQ*ʵéaIpƒH;Ea.Ö–J0×­‰]UÎç^/³ú¡ÞÞëœÓ‡w´u¶úÃî¡Løå0QZÌu%ñejkgbÁÄT0áß%‹ÿU-„]ì"ÑÙ¥ëÊsSð†fìéÊBضŸÓD.Þ—<·zF§bç#ù­¥ëúÄ)颉V˜Xñèü41"þš¨(賞œ&ªžž&FôÓDß(eßÁ6ÜäèvŸTƒ¹ªMt©—ovΩÏ;|ã9§.e¨+?좉©h"æ%‰9ev•&ŠÓÙw°ós)¨ÕSîvtíÖå¬kèˆ&ÌPС 5”º'˜m~Ééð\R[}’·›_ü­ÀHÚÞ1¼u~É9ïi³d»‡ÚÁk~YóËó m=Üöa§z+o·£xú9Úü\ I<úªöŸlféÚjF$)>Oè@‚Ñœ¿&ÝŠMŸ0ô‘àx ¨Žbž$ÈMì†còÜ.›«‹;ô"‘‰ž½ŸY:´Ì:|Ä m¨sǸٸmi)ï¼`U$˜çeò¨ôÍù›êMª½Ú»çbá"œÊ•så@Òo ñè +‘ŠewÜÔEàÐ-UãÌã¾­ÿTø>às*|AYëÊO…ZºÂSáûàÝûTx™—"7í…iÄMê9^4bmz§=¹07½Sb<ªÞ,Æ­dí¢w½S…Þ!ýe¾ðÏÊ$h;§\ù’&¿H}r§f˜ëº7ÂÁ+ãVvª£ÀônnXbcž…+B–Ür `¡ÀÊ(P8Ÿ ã‘¶Rœt¤JûH°(Í•E#mĬE‡#òÜÎï¸6{Òœ_ ¸à·Ôô’óëIæ X´þœß>à“ó@PÖºòœß ¥+Ìùíƒwïœ_‘— ~ÚJàS£AkOÖ¯ jLuIž"ôÛ[Æf!yʬ³]¼{jÉS„Œ`Éú-’§*É#M>m,âó—uŠ¿¾Xô» 4̱g£CTJó=I x1 
fvíà¸åïÛNIãOñrÙ¶ÍX´~ѳøÃˆže­+=ƒ–®PôìƒwoÑÓvÎÀöR¦=Ñ¥|‘ièi!ˆhÀ6Táʶv·¨’d»v0³óMyÔùº¤dTìÚáËß·=‚§`æoÛ^DÏ"zª=y^Fd’½9#NZî)ò ÚåM¯{”GFiÍCt ‚>Nö8X+¢k•í÷ëPQ¾;o3Ê4¯Jëˆ~1ÛÖ!<^¹§®G}"#žo-à-$¸àOO‚í¼ ‡÷«v퀦Ìüù&%D¿»ÜSPܱ!z&$¨±¶N!0¬Dys¾3ïWŽãrí¥•Ð²hõy¿½À$ï7„ ¬uÝy¿aK×—÷Û ï¾y¿®ó”OEÛKIœx³SCìwçý6P£Ú‰m¨)øÚ$ù69,k7èÅÍMòŒ·3ï×õˆ˜/°‘/’g‘<•I œ/*°¼¶s~JÉÇÄ=§œ ‘î¨> Íã[O)Lq8ñ×¶s|Ô¢]§ íœ Íc³­_óìþ0šgAYëÊ5Ï ¥+Ô<ûàÝ[ó´#çÍV¶—Â&.ê `ä)CJuÝqÜ¡" Èd£Ÿ[U‡nÔ18KÎvÖáãíïîzLè]Œ62‰K=óEòT%yò¼"7ç¯w“ÞlåëQ<-€¨Á<àižeì0¨ÞóN•¡V<¡u[ñ¨»»‹À©³çEñX¡ì€EëW<û€?Œâ@PÖºrÅ3hé Ï>x÷Võ’ £†š£ ÔólÓ¡j ”Œz'm;¬ Ð'7;,±ÎVÒæ$=4“ð1טX¬Œ£ýf‰Û•?; ‚ãD½JPè‹–<Û@}’Ù’`Q±À––Òv P ¦ä%D½ªÝÙ‘ Z'o½Š¦u’Ûª~z T…J‚#ž[‚°àB‚•‘ »ô÷Ñâm'4åÎlªkã>Ì@Õ8±&7[”¨JÁ§h¹#mäj#AE¥"{}œßöP5cdsU<·wL”ˆ^H$šÈÐ/W¸.$X JŒÎååjsþRbœtcŒz6ÁØG‚¨"r¶ƒŒŽfZâGç(€ðpÝжS¨‹3ª¼¤éMôìü¼î1ïFíƒ÷†ìÚm%‹''Á¶Gô#ÙÈ`kMe!Á…+ ÁoÑó&¸´})ã’÷ëIè X´þ¼ß>à“÷@PÖºò¼ß ¥+Ìûíƒwï¼_‘—ò¦Íû%h©'ïWc]’§E8†èef\´£ÇddE;+ò%OÛ# 'ñÜÈ/y¿EòT%y(ßÞù`zÍàó¤’'ŸØòþ¦Ï½vÕ­œ…ä‰ 8$5¢q'·ƒôe5†‡§l§à”±÷áãSë‚.T ÎŒ};ë@L¶uýñØ7÷˜‹{“÷&2mÓ[Œ8Oú§»û{}~ݯûzƒè\Š#zÛ:˜¸pýÂõp}Ñü…íÒå_¾-1ŸVì„ï£2EË}ýQd/Áî^ïNTËh?gWí ¢5æëkô]º½È¿x÷ ?^Ý®wæSÊú~}a fþïw××ê¤3#µopN,d«?|î|ÿ¾‰­qSñ¸…œQ4{ƒ‘¬që“ÎqŒóØ~_W§y9f,³¢úÝ·Nc̯³öÈG÷þº8ÝË‘Ãd#Gëi¼ÿpv±nÇ~°1k\v„™I{þþãmkŸÖyúL|¿Z½[¿¿º½Í£úñîDcÎ0(²s {¯[Ó†¥Y]Ý]ìo2'c(~$Œ8ªïˆ»LC{½ Ñ3 ±0Ô>˜DŸC”ËõYF7qs µ¾Þ·k _ÈüéÉÝíÉæO×w‘Z~–y6=ì -ºí3›Ï†@_ú䇯ÑîÖ?\­| Ğǘ’ý°Jß^åRÕ¼¶ÏŒÈaÒú›Â_ô3(J;p~~}•ÁCÖ ×W7WyTÙÔªtâÝfW}»Zç¯=ýìFÎnÏ5غØ{듨ãÑ“ÌO¢Ž·ŽÐq%j àmñy9¾ˆÆúD£@V¶˜Q@“ÞÔÄ„ÜW³JƒÈnôZDeò·mð«/™:ÅË»‡›ÕåU^öϸ–É`Aìd°ÀëBZ/ÔÁËgÖjC3¼@ÈŠÃ}YpåóÕ9ê{_öýÇ}{[à>ï± îÈ+|šg?¬WWïo»°÷ye¿› —jœ§«_¯GÎ<nÏnÖ÷*kßlMÿvâœÜœÝª_¸ØÌ£›³ûÇ7Ÿ»~þôðó¥Ïdß¼2 X¹ƒg¼9R°eA߯ï9!Ï/uÊfÏÆå~»‰rúúŽª9Ù}Ó.ÉöB™Üœ]Ý~«ìqî;­@ ŪW¡ša“G¿µáñf­„|ÞE'¿{÷Éê¿~ÿûï~×îk!Â<©Î öâ‹'•k»@€ÓWõׯRÏÑJ]ù'd¦ÓÕ?ýãã/þE#Šía¯*>Êë“F/ÙÄGýç(ålm·¾îÞ¶[ÞÓnÒéöµ´¾º¾;ÿs!¥Ü=\ä¸ü½–Ç ¥ìö"/H´íòÝLà7åƒðf^YçÈŽ9,ôyý%[PGþ«ö1~zûfNR±Áž9)zo¾¸ÝÃ+Iô¿¥‹×?¨ŽhûÏ/ÏÛüÓ/4žýöÝÿ*Žï×—:TÆþí¿¯n/N¿úM~v½ýöùAŸöGK›f§_}þùßü‡†›ç,ú|NÎàRNP8¼ƒtqP.ÖÄ1mûoß}ó)BÝ=Ý´Ñ÷ëÇ»ªÆ>µ‰Y£]­¯/¾;{úCÉ_ž®žþz¿>]}ý[ £Î®¿Ö tö˜eý׿i­üvóŠ|½Òéþ?ëwçþ.Oðòr}‚É­Oäâ2ž¸Ëˆçï4ÄuÒòÈö é}"1¹”F<ò¶;-Œ9‚÷Á±3—#ƒw‰m"ùÂe™“Q;ç¤ÏÒîÜÓ& ­Pý¼C±ÉÕ¼ 
²þá#&tÎF€;3|¶ßÍê«NOµòQ)B®‹Í½kÞÇK"X§æìñq}óîú¯*ÉõÙM³¾N'ÏŸýÃÿj&\õl±././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000755000175000017500000000000015133657716033103 5ustar zuulzuul././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000755000175000017500000000000015133657743033103 5ustar zuulzuul././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000000202015133657716033077 0ustar zuulzuul2026-01-20T10:49:35.841250700+00:00 stderr F W0120 10:49:35.839180 1 deprecated.go:66] 2026-01-20T10:49:35.841250700+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:49:35.841250700+00:00 stderr F 2026-01-20T10:49:35.841250700+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2026-01-20T10:49:35.841250700+00:00 stderr F 
2026-01-20T10:49:35.841250700+00:00 stderr F ===============================================
2026-01-20T10:49:35.841250700+00:00 stderr F 
2026-01-20T10:49:35.841250700+00:00 stderr F I0120 10:49:35.840397 1 kube-rbac-proxy.go:233] Valid token audiences: 
2026-01-20T10:49:35.841250700+00:00 stderr F I0120 10:49:35.840451 1 kube-rbac-proxy.go:347] Reading certificate files
2026-01-20T10:49:35.841250700+00:00 stderr F I0120 10:49:35.840949 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393
2026-01-20T10:49:35.843183320+00:00 stderr F I0120 10:49:35.841848 1 kube-rbac-proxy.go:402] Listening securely on :9393

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.log
2025-08-13T19:59:39.171318469+00:00 stderr F W0813 19:59:39.169690 1 deprecated.go:66] 
2025-08-13T19:59:39.171318469+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:59:39.171318469+00:00 stderr F 
2025-08-13T19:59:39.171318469+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:39.171318469+00:00 stderr F 
2025-08-13T19:59:39.171318469+00:00 stderr F ===============================================
2025-08-13T19:59:39.171318469+00:00 stderr F 
2025-08-13T19:59:39.189878528+00:00 stderr F I0813 19:59:39.189460 1 kube-rbac-proxy.go:233] Valid token audiences: 
2025-08-13T19:59:39.189878528+00:00 stderr F I0813 19:59:39.189516 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:59:39.255153229+00:00 stderr F I0813 19:59:39.253915 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393
2025-08-13T19:59:39.255217230+00:00 stderr F I0813 19:59:39.255157 1 kube-rbac-proxy.go:402] Listening securely on :9393
2025-08-13T20:42:42.013922763+00:00 stderr F I0813 20:42:42.013004 1 kube-rbac-proxy.go:493] received interrupt, shutting down

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.log
2026-01-20T10:51:45.140681125+00:00 stderr F 2026-01-20T10:51:45.139Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"}
2026-01-20T10:51:45.157094160+00:00 stderr F 2026-01-20T10:51:45.155Z INFO operator.main ingress-operator/start.go:64 registering Prometheus
metrics for canary_controller
2026-01-20T10:51:45.157094160+00:00 stderr F 2026-01-20T10:51:45.155Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller
2026-01-20T10:51:45.157094160+00:00 stderr F 2026-01-20T10:51:45.156Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"}
2026-01-20T10:51:45.157094160+00:00 stderr F 2026-01-20T10:51:45.156Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller
2026-01-20T10:51:45.157094160+00:00 stderr F 2026-01-20T10:51:45.156Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"}
2026-01-20T10:51:45.161082063+00:00 stderr F I0120 10:51:45.159215 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2026-01-20T10:51:45.166768804+00:00 stderr F 2026-01-20T10:51:45.165Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": ["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]}
2026-01-20T10:51:45.166768804+00:00 stderr F I0120 10:51:45.165789 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2026-01-20T10:51:45.166768804+00:00 stderr F I0120 10:51:45.166094 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods
2026-01-20T10:51:45.199495351+00:00 stderr F 2026-01-20T10:51:45.199Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server
2026-01-20T10:51:45.199675806+00:00 stderr F 2026-01-20T10:51:45.199Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false}
2026-01-20T10:51:45.266412377+00:00 stderr F I0120 10:51:45.266320 1 base_controller.go:73] Caches are synced for spread-default-router-pods
2026-01-20T10:51:45.266412377+00:00 stderr F I0120 10:51:45.266394 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ...
2026-01-20T10:51:45.301012907+00:00 stderr F 2026-01-20T10:51:45.300Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.301012907+00:00 stderr F 2026-01-20T10:51:45.300Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"}
2026-01-20T10:51:45.301042688+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"}
2026-01-20T10:51:45.301042688+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"}
2026-01-20T10:51:45.301042688+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"}
2026-01-20T10:51:45.301073929+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"}
2026-01-20T10:51:45.301083849+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"}
2026-01-20T10:51:45.301091169+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"}
2026-01-20T10:51:45.301106800+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc000a128b8"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.301Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc000a12d30"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc000a12d30"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.302Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.303Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"}
2026-01-20T10:51:45.304621469+00:00 stderr F 2026-01-20T10:51:45.303Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"}
2026-01-20T10:51:45.309755365+00:00 stderr F 2026-01-20T10:51:45.309Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"}
2026-01-20T10:51:45.402507652+00:00 stderr F 2026-01-20T10:51:45.402Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2026-01-20T10:51:45.404454108+00:00 stderr F 2026-01-20T10:51:45.404Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1}
2026-01-20T10:51:45.404464598+00:00 stderr F 2026-01-20T10:51:45.404Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1}
2026-01-20T10:51:45.404598502+00:00 stderr F 2026-01-20T10:51:45.404Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1}
2026-01-20T10:51:45.404606442+00:00 stderr F 2026-01-20T10:51:45.404Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2026-01-20T10:51:45.404940041+00:00 stderr F 2026-01-20T10:51:45.404Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2026-01-20T10:51:45.405074525+00:00 stderr F 2026-01-20T10:51:45.404Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2026-01-20T10:51:45.405509957+00:00 stderr F 2026-01-20T10:51:45.405Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2026-01-20T10:51:45.409442499+00:00 stderr F 2026-01-20T10:51:45.409Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"}
2026-01-20T10:51:45.409461380+00:00 stderr F 2026-01-20T10:51:45.409Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"}
2026-01-20T10:51:45.409504911+00:00 stderr F 2026-01-20T10:51:45.409Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"}
2026-01-20T10:51:45.409516901+00:00 stderr F 2026-01-20T10:51:45.409Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"}
2026-01-20T10:51:45.409548742+00:00 stderr F 2026-01-20T10:51:45.409Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"}
2026-01-20T10:51:45.511084139+00:00 stderr F 2026-01-20T10:51:45.510Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1}
2026-01-20T10:51:45.511238593+00:00 stderr F 2026-01-20T10:51:45.511Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1}
2026-01-20T10:51:45.511893901+00:00 stderr F 2026-01-20T10:51:45.511Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1}
2026-01-20T10:51:45.514859835+00:00 stderr F 2026-01-20T10:51:45.514Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1}
2026-01-20T10:51:45.514859835+00:00 stderr F 2026-01-20T10:51:45.514Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1}
2026-01-20T10:51:45.514859835+00:00 stderr F 2026-01-20T10:51:45.514Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2026-01-20T10:51:45.514859835+00:00 stderr F 2026-01-20T10:51:45.514Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2026-01-20T10:51:45.611229836+00:00 stderr F 2026-01-20T10:51:45.611Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1}
2026-01-20T10:51:45.615491366+00:00 stderr F 2026-01-20T10:51:45.615Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1}
2026-01-20T10:51:45.615515787+00:00 stderr F 2026-01-20T10:51:45.615Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}}
2026-01-20T10:51:45.810051999+00:00 stderr F 2026-01-20T10:51:45.807Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1}
2026-01-20T10:51:45.810051999+00:00 stderr F 2026-01-20T10:51:45.807Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2026-01-20T10:51:45.810051999+00:00 stderr F 2026-01-20T10:51:45.808Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""}
2026-01-20T10:51:45.810051999+00:00 stderr F 2026-01-20T10:51:45.808Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1}
2026-01-20T10:51:45.810051999+00:00 stderr F 2026-01-20T10:51:45.808Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2026-01-20T10:51:45.810051999+00:00 stderr F 2026-01-20T10:51:45.808Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""}
2026-01-20T10:51:45.820231567+00:00 stderr F 2026-01-20T10:51:45.817Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "canary_controller", "worker count": 1}
2026-01-20T10:51:45.834792320+00:00 stderr F 2026-01-20T10:51:45.834Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1}
2026-01-20T10:51:45.834792320+00:00 stderr F 2026-01-20T10:51:45.834Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}}
2026-01-20T10:51:46.026628114+00:00 stderr F 2026-01-20T10:51:46.026Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2026-01-20T10:57:45.383474552+00:00 stderr F 2026-01-20T10:57:45.382Z ERROR operator.init wait/backoff.go:226 failed to fetch ingress config {"error": "Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/ingresses/cluster\": dial tcp 10.217.4.1:443: connect: connection refused"}
2026-01-20T10:57:46.139111025+00:00 stderr F 2026-01-20T10:57:46.138Z ERROR operator.canary_controller wait/backoff.go:226 failed to get current canary route for canary check {"error": "Get \"https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-ingress-canary/routes/canary\": dial tcp 10.217.4.1:443: connect: connection refused"}
2026-01-20T10:58:19.543455868+00:00 stderr F 2026-01-20T10:58:19.542Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2026-01-20T10:58:19.543455868+00:00 stderr F 2026-01-20T10:58:19.543Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2026-01-20T10:58:19.543527480+00:00 stderr F 2026-01-20T10:58:19.543Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000007117515133657716033120 0ustar zuulzuul2025-08-13T20:08:00.875382117+00:00 stderr F 2025-08-13T20:08:00.873Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-08-13T20:08:00.953061394+00:00 stderr F 2025-08-13T20:08:00.952Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-08-13T20:08:00.953362393+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-08-13T20:08:00.953473236+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-08-13T20:08:00.953535928+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-08-13T20:08:00.953745004+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-08-13T20:08:00.998500257+00:00 stderr F I0813 20:08:00.997142 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:08:01.006195638+00:00 stderr F 2025-08-13T20:08:01.006Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": 
["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2025-08-13T20:08:01.009092641+00:00 stderr F I0813 20:08:01.008085 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 
'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:08:01.013531228+00:00 stderr F I0813 20:08:01.013036 1 
base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-08-13T20:08:01.086373496+00:00 stderr F 2025-08-13T20:08:01.085Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2025-08-13T20:08:01.086373496+00:00 stderr F 2025-08-13T20:08:01.085Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-08-13T20:08:01.116419978+00:00 stderr F I0813 20:08:01.115027 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-08-13T20:08:01.116419978+00:00 stderr F I0813 20:08:01.115134 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: 
*v1.Ingress"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 
stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc00007fc90"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init 
controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc0010b4270"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.426Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc0010b4270"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", 
"source": "kind source: *v1.IngressClass"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr P 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting EventSour 2025-08-13T20:08:01.435319931+00:00 stderr F ce {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-08-13T20:08:01.435319931+00:00 stderr F 
2025-08-13T20:08:01.434Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.498055630+00:00 stderr F 2025-08-13T20:08:01.497Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.569202870+00:00 stderr F 2025-08-13T20:08:01.569Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.569414636+00:00 stderr F 2025-08-13T20:08:01.569Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.582231593+00:00 stderr F 2025-08-13T20:08:01.582Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2025-08-13T20:08:01.582393098+00:00 stderr F 2025-08-13T20:08:01.582Z INFO operator.route_metrics_controller 
controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.601005591+00:00 stderr F 2025-08-13T20:08:01.598Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.651241102+00:00 stderr F 2025-08-13T20:08:01.651Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-08-13T20:08:01.651241102+00:00 stderr F 2025-08-13T20:08:01.651Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.754147262+00:00 stderr F 2025-08-13T20:08:01.754Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-08-13T20:08:01.754242945+00:00 stderr F 2025-08-13T20:08:01.754Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-08-13T20:08:01.770681766+00:00 stderr F 2025-08-13T20:08:01.770Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-08-13T20:08:01.787209920+00:00 stderr F 2025-08-13T20:08:01.787Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-08-13T20:08:01.802124928+00:00 stderr F 2025-08-13T20:08:01.800Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2025-08-13T20:08:01.802124928+00:00 stderr F 2025-08-13T20:08:01.801Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.963268078+00:00 stderr F 
2025-08-13T20:08:01.963Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-08-13T20:08:01.963495594+00:00 stderr F 2025-08-13T20:08:01.963Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.964Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.966Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.966Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:08:02.003855092+00:00 stderr F 2025-08-13T20:08:02.003Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "canary_controller", "worker count": 1} 2025-08-13T20:08:02.008856995+00:00 stderr F 2025-08-13T20:08:02.008Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-08-13T20:08:02.009029540+00:00 stderr F 2025-08-13T20:08:02.008Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:02.009560235+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:08:02.009712849+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-08-13T20:08:02.009870424+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.init 
controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-08-13T20:08:02.010245235+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:08:02.533313412+00:00 stderr F 2025-08-13T20:08:02.533Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:30.401847845+00:00 stderr F 2025-08-13T20:09:30.400Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952017060+00:00 stderr F 2025-08-13T20:09:44.951Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952105402+00:00 stderr F 2025-08-13T20:09:44.952Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952396641+00:00 stderr F 2025-08-13T20:09:44.952Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.957004893+00:00 stderr F 2025-08-13T20:09:44.955Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.957004893+00:00 stderr F 2025-08-13T20:09:44.955Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:46.234330695+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", 
"related": ""} 2025-08-13T20:09:46.234638084+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:46.234942633+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:51.588404831+00:00 stderr F 2025-08-13T20:09:51.585Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:09:51.588404831+00:00 stderr F 2025-08-13T20:09:51.587Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:09:51.588404831+00:00 stderr F 2025-08-13T20:09:51.587Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:55.836004494+00:00 stderr F 2025-08-13T20:09:55.835Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.894Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": 
"default", "related": ""} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.895Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.894Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:10:01.895506774+00:00 stderr F 2025-08-13T20:10:01.895Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:10:08.138086435+00:00 stderr F 2025-08-13T20:10:08.137Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.831Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.832Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.832Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:10:22.868218468+00:00 stderr F 2025-08-13T20:10:22.867Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:22.868218468+00:00 stderr F 2025-08-13T20:10:22.868Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:22.868270259+00:00 stderr F 2025-08-13T20:10:22.868Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": 
{"name":"default","namespace":"openshift-ingress-operator"}} home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log 2026-01-20T10:49:35.300489639+00:00 stderr F 2026-01-20T10:49:35.297Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2026-01-20T10:49:35.402667382+00:00 stderr F 2026-01-20T10:49:35.402Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2026-01-20T10:49:35.402753725+00:00 stderr F 2026-01-20T10:49:35.402Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2026-01-20T10:49:35.402763015+00:00 stderr F 2026-01-20T10:49:35.402Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2026-01-20T10:49:35.402988522+00:00 stderr F 2026-01-20T10:49:35.402Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2026-01-20T10:49:35.404176207+00:00 stderr F 2026-01-20T10:49:35.404Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2026-01-20T10:49:35.415425150+00:00 stderr F I0120 10:49:35.415385 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:49:35.422225358+00:00 stderr F I0120 10:49:35.422178 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 
'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:49:35.422324771+00:00 stderr F 2026-01-20T10:49:35.422Z INFO 
operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": ["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2026-01-20T10:49:35.423733693+00:00 stderr F I0120 10:49:35.423706 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2026-01-20T10:49:35.526051850+00:00 stderr F I0120 
10:49:35.525613 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2026-01-20T10:49:35.526051850+00:00 stderr F I0120 10:49:35.526030 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2026-01-20T10:49:35.526123662+00:00 stderr F 2026-01-20T10:49:35.525Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2026-01-20T10:49:35.530825746+00:00 stderr F 2026-01-20T10:49:35.526Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2026-01-20T10:49:35.629446280+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.629521612+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2026-01-20T10:49:35.629550613+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2026-01-20T10:49:35.629580614+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2026-01-20T10:49:35.629608054+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2026-01-20T10:49:35.629636525+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2026-01-20T10:49:35.629663896+00:00 stderr F 2026-01-20T10:49:35.629Z INFO 
operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2026-01-20T10:49:35.629691927+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2026-01-20T10:49:35.629720838+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2026-01-20T10:49:35.629859552+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.629901353+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2026-01-20T10:49:35.629926814+00:00 stderr F 2026-01-20T10:49:35.629Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2026-01-20T10:49:35.630246444+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.630290335+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2026-01-20T10:49:35.630330776+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2026-01-20T10:49:35.630364877+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": 
"error_page_configmap_controller"} 2026-01-20T10:49:35.630468200+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.630496231+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2026-01-20T10:49:35.630574974+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.630604984+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2026-01-20T10:49:35.630632245+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2026-01-20T10:49:35.630666896+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2026-01-20T10:49:35.630790390+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc000e0a3a0"} 2026-01-20T10:49:35.630918364+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2026-01-20T10:49:35.630951235+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc000e0a678"} 
2026-01-20T10:49:35.631090699+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.631131420+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2026-01-20T10:49:35.631271324+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2026-01-20T10:49:35.631306186+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"} 2026-01-20T10:49:35.631306186+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc000e0a678"} 2026-01-20T10:49:35.631322266+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2026-01-20T10:49:35.631322266+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2026-01-20T10:49:35.631434579+00:00 stderr F 2026-01-20T10:49:35.630Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2026-01-20T10:49:35.631444081+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2026-01-20T10:49:35.631469401+00:00 stderr F 2026-01-20T10:49:35.631Z INFO 
operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2026-01-20T10:49:35.631477412+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2026-01-20T10:49:35.631496522+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.631534303+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2026-01-20T10:49:35.631570885+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2026-01-20T10:49:35.631603395+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.631611906+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2026-01-20T10:49:35.631690668+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.631699408+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2026-01-20T10:49:35.631706669+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2026-01-20T10:49:35.631786821+00:00 stderr F 2026-01-20T10:49:35.631Z INFO 
operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2026-01-20T10:49:35.631823152+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2026-01-20T10:49:35.631831302+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2026-01-20T10:49:35.631852703+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2026-01-20T10:49:35.631875354+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2026-01-20T10:49:35.631909645+00:00 stderr F 2026-01-20T10:49:35.631Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2026-01-20T10:49:35.633203294+00:00 stderr F 2026-01-20T10:49:35.632Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"} 2026-01-20T10:49:35.633203294+00:00 stderr F 2026-01-20T10:49:35.632Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2026-01-20T10:49:35.644383594+00:00 stderr F 2026-01-20T10:49:35.642Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2026-01-20T10:49:35.649931133+00:00 
stderr F 2026-01-20T10:49:35.649Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:49:35.683918958+00:00 stderr F 2026-01-20T10:49:35.683Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2026-01-20T10:49:35.733791928+00:00 stderr F 2026-01-20T10:49:35.731Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2026-01-20T10:49:35.750331922+00:00 stderr F 2026-01-20T10:49:35.747Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2026-01-20T10:49:35.750639191+00:00 stderr F 2026-01-20T10:49:35.750Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2026-01-20T10:49:35.750917829+00:00 stderr F 2026-01-20T10:49:35.750Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:35.751720473+00:00 stderr F 2026-01-20T10:49:35.751Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2026-01-20T10:49:35.752393525+00:00 stderr F 2026-01-20T10:49:35.752Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2026-01-20T10:49:35.952169749+00:00 stderr F 2026-01-20T10:49:35.951Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2026-01-20T10:49:35.952169749+00:00 stderr F 2026-01-20T10:49:35.952Z INFO operator.init controller/controller.go:234 Starting workers 
{"controller": "error_page_configmap_controller", "worker count": 1} 2026-01-20T10:49:35.952268722+00:00 stderr F 2026-01-20T10:49:35.952Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2026-01-20T10:49:36.001560594+00:00 stderr F 2026-01-20T10:49:36.001Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2026-01-20T10:49:36.001590055+00:00 stderr F 2026-01-20T10:49:36.001Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.055127176+00:00 stderr F 2026-01-20T10:49:36.053Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2026-01-20T10:49:36.055127176+00:00 stderr F 2026-01-20T10:49:36.054Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.102774747+00:00 stderr F 2026-01-20T10:49:36.102Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2026-01-20T10:49:36.155936886+00:00 stderr F 2026-01-20T10:49:36.155Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2026-01-20T10:49:36.156330268+00:00 stderr F 2026-01-20T10:49:36.156Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2026-01-20T10:49:36.156366279+00:00 stderr F 2026-01-20T10:49:36.156Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2026-01-20T10:49:36.156396860+00:00 stderr F 2026-01-20T10:49:36.156Z INFO operator.ingress_controller 
handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2026-01-20T10:49:36.156426591+00:00 stderr F 2026-01-20T10:49:36.156Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2026-01-20T10:49:36.156451632+00:00 stderr F 2026-01-20T10:49:36.156Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.205833496+00:00 stderr F 2026-01-20T10:49:36.205Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2026-01-20T10:49:36.206002942+00:00 stderr F 2026-01-20T10:49:36.205Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2026-01-20T10:49:36.206102235+00:00 stderr F 2026-01-20T10:49:36.206Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2026-01-20T10:49:36.206149986+00:00 stderr F 2026-01-20T10:49:36.206Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.512305311+00:00 stderr F 2026-01-20T10:49:36.511Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.513818726+00:00 stderr F 2026-01-20T10:49:36.511Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.517499319+00:00 stderr F 2026-01-20T10:49:36.517Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "59m59.999976479s", "error": "IngressController may become degraded soon: 
DeploymentReplicasAllAvailable=False"} 2026-01-20T10:49:36.517577562+00:00 stderr F 2026-01-20T10:49:36.517Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.534740034+00:00 stderr F 2026-01-20T10:49:36.534Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.649188510+00:00 stderr F 2026-01-20T10:49:36.648Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:36.742475461+00:00 stderr F 2026-01-20T10:49:36.742Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "59m59.25890453s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2026-01-20T10:49:45.633386893+00:00 stderr F 2026-01-20T10:49:45.632Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:49:45.741628599+00:00 stderr F 2026-01-20T10:49:45.741Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2026-01-20T10:49:45.741681112+00:00 stderr F 2026-01-20T10:49:45.741Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2026-01-20T10:49:45.741681112+00:00 stderr F 2026-01-20T10:49:45.741Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:45.741716253+00:00 stderr F 2026-01-20T10:49:45.741Z 
INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2026-01-20T10:49:45.741759714+00:00 stderr F 2026-01-20T10:49:45.741Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2026-01-20T10:49:45.741759714+00:00 stderr F 2026-01-20T10:49:45.741Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2026-01-20T10:49:45.741771455+00:00 stderr F 2026-01-20T10:49:45.741Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2026-01-20T10:49:45.946804890+00:00 stderr F 2026-01-20T10:49:45.946Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:49:55.634526162+00:00 stderr F 2026-01-20T10:49:55.633Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:50:05.633178864+00:00 stderr F 2026-01-20T10:50:05.632Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:50:15.632891658+00:00 stderr F 2026-01-20T10:50:15.632Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:50:25.633383189+00:00 stderr F 2026-01-20T10:50:25.632Z 
ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:50:35.633876610+00:00 stderr F 2026-01-20T10:50:35.633Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:50:44.989100378+00:00 stderr F 2026-01-20T10:50:44.984Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2026-01-20T10:50:44.989100378+00:00 stderr F 2026-01-20T10:50:44.985Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2026-01-20T10:50:44.989100378+00:00 stderr F 2026-01-20T10:50:44.985Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:50:45.051037943+00:00 stderr F 2026-01-20T10:50:45.050Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "58m50.950736399s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2026-01-20T10:50:45.633333169+00:00 stderr F 2026-01-20T10:50:45.633Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:50:55.633520039+00:00 stderr F 2026-01-20T10:50:55.632Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a 
CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:51:05.633647400+00:00 stderr F 2026-01-20T10:51:05.633Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:51:15.001774222+00:00 stderr F 2026-01-20T10:51:15.001Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2026-01-20T10:51:15.001774222+00:00 stderr F 2026-01-20T10:51:15.001Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2026-01-20T10:51:15.001826783+00:00 stderr F 2026-01-20T10:51:15.001Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:51:15.054559973+00:00 stderr F 2026-01-20T10:51:15.054Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:51:15.054612784+00:00 stderr F 2026-01-20T10:51:15.054Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:51:15.054816350+00:00 stderr F 2026-01-20T10:51:15.054Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:51:15.054968784+00:00 stderr F 2026-01-20T10:51:15.054Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 
2026-01-20T10:51:15.058606140+00:00 stderr F 2026-01-20T10:51:15.057Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:51:15.065919570+00:00 stderr F 2026-01-20T10:51:15.065Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2026-01-20T10:51:15.632265947+00:00 stderr F 2026-01-20T10:51:15.632Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:51:25.633496729+00:00 stderr F 2026-01-20T10:51:25.632Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:51:35.636886652+00:00 stderr F 2026-01-20T10:51:35.636Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2026-01-20T10:51:35.952891115+00:00 stderr F 2026-01-20T10:51:35.952Z ERROR operator.init controller/controller.go:208 Could not wait for Cache to sync {"controller": "canary_controller", "error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"} 2026-01-20T10:51:35.952891115+00:00 stderr F 2026-01-20T10:51:35.952Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for non leader election runnables 2026-01-20T10:51:35.953410639+00:00 stderr F 
2026-01-20T10:51:35.953Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for leader election runnables 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "route_metrics_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "configurable_route_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingress_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "clientca_configmap_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "monitoring_dashboard_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "dns_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_publisher_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init 
manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingressclass_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "error_page_configmap_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "dns_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_publisher_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "clientca_configmap_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "gatewayapi_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "error_page_configmap_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingress_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "configurable_route_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown 
signal received, waiting for all workers to finish {"controller": "status_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "status_controller"} 2026-01-20T10:51:35.953410639+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "crl"} 2026-01-20T10:51:35.953458361+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "monitoring_dashboard_controller"} 2026-01-20T10:51:35.953458361+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "gatewayapi_controller"} 2026-01-20T10:51:35.953458361+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "route_metrics_controller"} 2026-01-20T10:51:35.953458361+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingressclass_controller"} 2026-01-20T10:51:35.953458361+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "crl"} 2026-01-20T10:51:35.953506072+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for caches 2026-01-20T10:51:35.953757569+00:00 stderr F W0120 10:51:35.953717 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.953800650+00:00 stderr F W0120 10:51:35.953778 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of 
*v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.953830731+00:00 stderr F W0120 10:51:35.953812 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.953838991+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for webhooks 2026-01-20T10:51:35.953846772+00:00 stderr F 2026-01-20T10:51:35.953Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for HTTP servers 2026-01-20T10:51:35.953874082+00:00 stderr F W0120 10:51:35.953856 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.953928034+00:00 stderr F W0120 10:51:35.953901 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.953976315+00:00 stderr F W0120 10:51:35.953964 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954055698+00:00 stderr F W0120 10:51:35.954034 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the 
watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954108880+00:00 stderr F W0120 10:51:35.954090 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954140341+00:00 stderr F W0120 10:51:35.954122 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954169552+00:00 stderr F W0120 10:51:35.954153 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954203643+00:00 stderr F W0120 10:51:35.954191 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954240064+00:00 stderr F W0120 10:51:35.954222 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954269164+00:00 stderr F W0120 10:51:35.954253 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.IngressClass ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request 
from succeeding 2026-01-20T10:51:35.954300005+00:00 stderr F W0120 10:51:35.954284 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954346247+00:00 stderr F W0120 10:51:35.954335 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.954396888+00:00 stderr F W0120 10:51:35.954386 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.955372145+00:00 stderr F W0120 10:51:35.954040 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2026-01-20T10:51:35.955372145+00:00 stderr F 2026-01-20T10:51:35.954Z INFO operator.init.controller-runtime.metrics runtime/asm_amd64.s:1650 Shutting down metrics server with timeout of 1 minute 2026-01-20T10:51:35.955372145+00:00 stderr F 2026-01-20T10:51:35.954Z INFO operator.init runtime/asm_amd64.s:1650 Wait completed, proceeding to shutdown the manager 2026-01-20T10:51:35.955372145+00:00 stderr F 2026-01-20T10:51:35.954Z ERROR operator.main cobra/command.go:944 error starting {"error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"}

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log

2025-08-13T20:05:08.191183858+00:00 stderr F 2025-08-13T20:05:08.190Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-08-13T20:05:08.198035654+00:00 stderr F 2025-08-13T20:05:08.197Z ERROR operator.main ingress-operator/start.go:64 failed to verify idling endpoints between endpoints and services {"error": "failed to list endpoints in all namespaces: failed to get API group resources: unable to retrieve the complete list of server APIs: v1: Get \"https://10.217.4.1:443/api/v1\": dial tcp 10.217.4.1:443: connect: connection refused"} 2025-08-13T20:05:08.198298121+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-08-13T20:05:08.198359993+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-08-13T20:05:08.198415215+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-08-13T20:05:08.198545378+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-08-13T20:05:08.200367231+00:00 stderr F 2025-08-13T20:05:08.200Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-08-13T20:05:08.229326390+00:00 stderr F I0813 20:05:08.229215 1 simple_featuregate_reader.go:171] Starting
feature-gate-detector 2025-08-13T20:05:08.236722812+00:00 stderr F W0813 20:05:08.236296 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236722812+00:00 stderr F W0813 20:05:08.236305 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236747412+00:00 stderr F E0813 20:05:08.236721 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236747412+00:00 stderr F E0813 20:05:08.236732 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.327244290+00:00 stderr F W0813 20:05:09.325689 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.327244290+00:00 stderr F E0813 20:05:09.326389 1 reflector.go:147] 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.566674106+00:00 stderr F W0813 20:05:09.565749 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.566674106+00:00 stderr F E0813 20:05:09.566641 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.263050674+00:00 stderr F W0813 20:05:11.262490 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.263050674+00:00 stderr F E0813 20:05:11.262989 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.421045724+00:00 stderr F W0813 20:05:12.420659 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.421105386+00:00 stderr F E0813 20:05:12.421044 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.769327666+00:00 stderr F W0813 20:05:15.768608 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.769327666+00:00 stderr F E0813 20:05:15.769244 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.912903143+00:00 stderr F W0813 20:05:16.911517 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.912903143+00:00 stderr F E0813 20:05:16.911649 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:05:27.134481088+00:00 stderr F 2025-08-13T20:05:27.133Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": ["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2025-08-13T20:05:27.134896079+00:00 stderr F I0813 20:05:27.133874 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", 
"TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:05:27.171135417+00:00 stderr F I0813 20:05:27.171080 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-08-13T20:05:27.271658006+00:00 stderr F I0813 20:05:27.271480 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-08-13T20:05:27.271658006+00:00 stderr F I0813 20:05:27.271548 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2025-08-13T20:05:28.707483552+00:00 stderr F 2025-08-13T20:05:28.706Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2025-08-13T20:05:28.863034757+00:00 stderr F 2025-08-13T20:05:28.862Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": 
"ingress_controller", "source": "kind source: *v1.Pod"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO 
operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting 
EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"}
2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc00010e648"}
2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"}
2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"}
2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"}
2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"}
2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"}
2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"}
2025-08-13T20:05:29.453085534+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"}
2025-08-13T20:05:29.455007799+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00010eba0"}
2025-08-13T20:05:29.455388390+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00010eba0"}
2025-08-13T20:05:29.455467832+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"}
2025-08-13T20:05:29.455765720+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"}
2025-08-13T20:05:29.455902264+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"}
2025-08-13T20:05:29.456298656+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"}
2025-08-13T20:05:29.456352127+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"}
2025-08-13T20:05:29.456389038+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"}
2025-08-13T20:05:29.456435940+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"}
2025-08-13T20:05:29.456476721+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"}
2025-08-13T20:05:29.456510342+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"}
2025-08-13T20:05:29.456558643+00:00 stderr F 2025-08-13T20:05:29.436Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"}
2025-08-13T20:05:29.456639825+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"}
2025-08-13T20:05:29.456694627+00:00 stderr F 2025-08-13T20:05:29.446Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"}
2025-08-13T20:05:29.456735988+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"}
2025-08-13T20:05:29.456882772+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"}
2025-08-13T20:05:29.463311917+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"}
2025-08-13T20:05:29.463311917+00:00 stderr F 2025-08-13T20:05:29.462Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"}
2025-08-13T20:05:29.574984784+00:00 stderr F 2025-08-13T20:05:29.563Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:05:29.945304648+00:00 stderr F 2025-08-13T20:05:29.917Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2025-08-13T20:05:29.965242119+00:00 stderr F 2025-08-13T20:05:29.965Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2025-08-13T20:05:30.014990724+00:00 stderr F 2025-08-13T20:05:30.014Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2025-08-13T20:05:30.023730434+00:00 stderr F 2025-08-13T20:05:30.023Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:05:30.026092791+00:00 stderr F 2025-08-13T20:05:30.023Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"}
2025-08-13T20:05:30.120415043+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1}
2025-08-13T20:05:30.120551686+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1}
2025-08-13T20:05:30.120979129+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2025-08-13T20:05:30.121431882+00:00 stderr F 2025-08-13T20:05:30.121Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1}
2025-08-13T20:05:30.219567762+00:00 stderr F 2025-08-13T20:05:30.219Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1}
2025-08-13T20:05:30.241382557+00:00 stderr F 2025-08-13T20:05:30.240Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1}
2025-08-13T20:05:30.315357705+00:00 stderr F 2025-08-13T20:05:30.315Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1}
2025-08-13T20:05:30.331297851+00:00 stderr F 2025-08-13T20:05:30.315Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}}
2025-08-13T20:05:30.378874514+00:00 stderr F 2025-08-13T20:05:30.378Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1}
2025-08-13T20:05:30.379387008+00:00 stderr F 2025-08-13T20:05:30.379Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2025-08-13T20:05:30.499907610+00:00 stderr F 2025-08-13T20:05:30.499Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""}
2025-08-13T20:05:30.522918519+00:00 stderr F 2025-08-13T20:05:30.518Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""}
2025-08-13T20:05:30.526560943+00:00 stderr F 2025-08-13T20:05:30.526Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1}
2025-08-13T20:05:30.526961224+00:00 stderr F 2025-08-13T20:05:30.526Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2025-08-13T20:05:30.598924545+00:00 stderr F 2025-08-13T20:05:30.598Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1}
2025-08-13T20:05:30.599024598+00:00 stderr F 2025-08-13T20:05:30.598Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2025-08-13T20:05:30.599125781+00:00 stderr F 2025-08-13T20:05:30.599Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1}
2025-08-13T20:05:30.629683626+00:00 stderr F 2025-08-13T20:05:30.629Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1}
2025-08-13T20:05:30.679536544+00:00 stderr F 2025-08-13T20:05:30.679Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1}
2025-08-13T20:05:30.679735869+00:00 stderr F 2025-08-13T20:05:30.679Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}}
2025-08-13T20:05:30.680070179+00:00 stderr F 2025-08-13T20:05:30.630Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2025-08-13T20:05:39.465426320+00:00 stderr F 2025-08-13T20:05:39.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:05:39.470629999+00:00 stderr F 2025-08-13T20:05:39.469Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:05:49.463650058+00:00 stderr F 2025-08-13T20:05:49.462Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:05:49.466441728+00:00 stderr F 2025-08-13T20:05:49.466Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}}
2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""}
2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.318Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2025-08-13T20:05:59.460963930+00:00 stderr F 2025-08-13T20:05:59.460Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:05:59.465319955+00:00 stderr F 2025-08-13T20:05:59.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:06:09.466039895+00:00 stderr F 2025-08-13T20:06:09.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:06:09.466039895+00:00 stderr F 2025-08-13T20:06:09.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:06:19.466356135+00:00 stderr F 2025-08-13T20:06:19.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:06:19.466356135+00:00 stderr F 2025-08-13T20:06:19.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:06:29.464497590+00:00 stderr F 2025-08-13T20:06:29.463Z ERROR 
operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:06:29.465170690+00:00 stderr F 2025-08-13T20:06:29.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:06:39.466094165+00:00 stderr F 2025-08-13T20:06:39.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:06:39.470533012+00:00 stderr F 2025-08-13T20:06:39.469Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:06:49.461005407+00:00 stderr F 2025-08-13T20:06:49.460Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:06:49.463712194+00:00 stderr F 2025-08-13T20:06:49.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:06:59.465943937+00:00 stderr F 2025-08-13T20:06:59.464Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:06:59.481697919+00:00 stderr F 2025-08-13T20:06:59.480Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:07:09.465054500+00:00 stderr F 2025-08-13T20:07:09.461Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:07:09.467221923+00:00 stderr F 2025-08-13T20:07:09.466Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:07:19.470896816+00:00 stderr F 2025-08-13T20:07:19.468Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:07:19.470896816+00:00 stderr F 2025-08-13T20:07:19.470Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:07:29.462609576+00:00 stderr F 2025-08-13T20:07:29.461Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"}
2025-08-13T20:07:29.463661686+00:00 stderr F 2025-08-13T20:07:29.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""}
2025-08-13T20:07:30.132827102+00:00 stderr F 2025-08-13T20:07:30.128Z ERROR operator.init controller/controller.go:208 Could not wait for Cache to sync {"controller": "canary_controller", "error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"}
2025-08-13T20:07:30.135698934+00:00 stderr F 2025-08-13T20:07:30.132Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for non leader election runnables
2025-08-13T20:07:30.135768856+00:00 stderr F 2025-08-13T20:07:30.135Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for leader election runnables
2025-08-13T20:07:30.137697782+00:00 stderr F 2025-08-13T20:07:30.135Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "configurable_route_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingress_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "dns_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "status_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingressclass_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_publisher_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "gatewayapi_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "error_page_configmap_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "monitoring_dashboard_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "clientca_configmap_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_controller"}
2025-08-13T20:07:30.154365289+00:00 stderr 
F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "crl"}
2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "monitoring_dashboard_controller"}
2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "gatewayapi_controller"}
2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "clientca_configmap_controller"}
2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_controller"}
2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "status_controller"}
2025-08-13T20:07:30.154507554+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1}
2025-08-13T20:07:30.154528604+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "route_metrics_controller"}
2025-08-13T20:07:30.154901485+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
2025-08-13T20:07:30.155027888+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "dns_controller"}
2025-08-13T20:07:30.155089260+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingressclass_controller"}
2025-08-13T20:07:30.155145122+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "crl"}
2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_publisher_controller"}
2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "configurable_route_controller"}
2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "error_page_configmap_controller"}
2025-08-13T20:07:30.163077299+00:00 stderr F 2025-08-13T20:07:30.162Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingress_controller"}
2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.174Z ERROR operator.init controller/controller.go:266 Reconciler error {"controller": "route_metrics_controller", "object": {"name":"default","namespace":"openshift-ingress-operator"}, "namespace": "openshift-ingress-operator", "name": "default", "reconcileID": "8a74f58b-c73b-4c1b-937a-14d64341d67c", "error": "failed to get Ingress Controller \"openshift-ingress-operator/default\": Timeout: failed waiting for *v1.IngressController Informer to sync"}
2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.176Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "route_metrics_controller"}
2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.176Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for caches
2025-08-13T20:07:30.195953332+00:00 stderr F 2025-08-13T20:07:30.195Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for webhooks
2025-08-13T20:07:30.195953332+00:00 stderr F 2025-08-13T20:07:30.195Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for HTTP servers
2025-08-13T20:07:30.198455734+00:00 stderr F 2025-08-13T20:07:30.198Z INFO operator.init.controller-runtime.metrics runtime/asm_amd64.s:1650 Shutting down metrics server with timeout of 1 minute
2025-08-13T20:07:30.200492422+00:00 stderr F W0813 20:07:30.198536 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Proxy ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.201920803+00:00 stderr F W0813 20:07:30.198690 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.202012955+00:00 stderr F W0813 20:07:30.201950 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.202052777+00:00 stderr F W0813 20:07:30.198766 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.202135089+00:00 stderr F W0813 20:07:30.198484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.202245812+00:00 stderr F W0813 20:07:30.202220 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.204503267+00:00 stderr F W0813 20:07:30.199382 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.223684287+00:00 stderr F 2025-08-13T20:07:30.223Z INFO operator.init runtime/asm_amd64.s:1650 Wait completed, proceeding to shutdown the manager
2025-08-13T20:07:30.229373850+00:00 stderr F 2025-08-13T20:07:30.228Z ERROR operator.main cobra/command.go:944 error starting {"error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"}
././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015133657715033071 5ustar zuulzuul
././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015133657734033072 5ustar zuulzuul
././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000000357515133657715033105 0ustar zuulzuul
2025-08-13T19:59:23.961605895+00:00 stderr F I0813 19:59:23.946532 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc000543180 max-eligible-revision:0xc000542f00 protected-revisions:0xc000542fa0 resource-dir:0xc000543040 static-pod-name:0xc0005430e0 v:0xc000543860] [0xc000543860 0xc000542f00 0xc000542fa0 0xc000543040 0xc000543180 0xc0005430e0] [] map[cert-dir:0xc000543180 help:0xc000543c20 log-flush-frequency:0xc0005437c0 max-eligible-revision:0xc000542f00 protected-revisions:0xc000542fa0 resource-dir:0xc000543040 static-pod-name:0xc0005430e0 v:0xc000543860 vmodule:0xc000543900] [0xc000542f00 0xc000542fa0 0xc000543040 0xc0005430e0 0xc000543180 0xc0005437c0 0xc000543860 0xc000543900 0xc000543c20] [0xc000543180 0xc000543c20 0xc0005437c0 0xc000542f00 0xc000542fa0 0xc000543040 0xc0005430e0 0xc000543860 0xc000543900] map[104:0xc000543c20 118:0xc000543860] [] -1 0 0xc000548120 true 0x73b100 []}
2025-08-13T19:59:23.962267703+00:00 stderr F I0813 19:59:23.962243 1 cmd.go:41] (*prune.PruneOptions)(0xc0004fe2d0)({
2025-08-13T19:59:23.962267703+00:00 stderr F  MaxEligibleRevision: (int) 8,
2025-08-13T19:59:23.962267703+00:00 stderr F  ProtectedRevisions: ([]int) (len=5 cap=5) {
2025-08-13T19:59:23.962267703+00:00 stderr F  (int) 4,
2025-08-13T19:59:23.962267703+00:00 stderr F  (int) 5,
2025-08-13T19:59:23.962267703+00:00 stderr F  (int) 6,
2025-08-13T19:59:23.962267703+00:00 stderr F  (int) 7,
2025-08-13T19:59:23.962267703+00:00 stderr F  (int) 8
2025-08-13T19:59:23.962267703+00:00 stderr F  },
2025-08-13T19:59:23.962267703+00:00 stderr F  ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
2025-08-13T19:59:23.962267703+00:00 stderr F  CertDir: (string) (len=29) "kube-controller-manager-certs",
2025-08-13T19:59:23.962267703+00:00 stderr F  StaticPodName: (string) (len=27) "kube-controller-manager-pod"
2025-08-13T19:59:23.962267703+00:00 stderr F })
././@LongLink0000644000000000000000000000025700000000000011607 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657715033114 5ustar zuulzuul
././@LongLink0000644000000000000000000000030400000000000011600 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea/marketplace-operator/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657735033116 5ustar zuulzuul
././@LongLink0000644000000000000000000000031100000000000011576 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea/marketplace-operator/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000001716215133657715033121 0ustar zuulzuul
2026-01-20T10:50:59.480462457+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime"
2026-01-20T10:50:59.480462457+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="Go OS/Arch: linux/amd64"
2026-01-20T10:50:59.480462457+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="[metrics] Registering marketplace metrics"
2026-01-20T10:50:59.480462457+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="[metrics] Serving marketplace metrics"
2026-01-20T10:50:59.480462457+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="TLS keys set, using https for metrics"
2026-01-20T10:50:59.539949530+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="Config API is available"
2026-01-20T10:50:59.539987972+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="setting up scheme"
2026-01-20T10:50:59.573241410+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="setting up health checks"
2026-01-20T10:50:59.575815404+00:00 stderr F I0120 10:50:59.575777 1 leaderelection.go:250] attempting to acquire leader lease openshift-marketplace/marketplace-operator-lock...
2026-01-20T10:50:59.589536709+00:00 stderr F I0120 10:50:59.589483 1 leaderelection.go:260] successfully acquired lease openshift-marketplace/marketplace-operator-lock
2026-01-20T10:50:59.589743475+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="became leader: marketplace-operator-8b455464d-nc8zc"
2026-01-20T10:50:59.589743475+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="registering components"
2026-01-20T10:50:59.589743475+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="setting up the marketplace clusteroperator status reporter"
2026-01-20T10:50:59.606008074+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="setting up controllers"
2026-01-20T10:50:59.608701701+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="starting the marketplace clusteroperator status reporter"
2026-01-20T10:50:59.608701701+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="starting manager"
2026-01-20T10:50:59.608701701+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"starting server","kind":"pprof","addr":"[::]:6060"}
2026-01-20T10:50:59.608701701+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting EventSource","controller":"configmap-controller","source":"kind source: 
*v1.ConfigMap"} 2026-01-20T10:50:59.608701701+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting Controller","controller":"configmap-controller"} 2026-01-20T10:50:59.608701701+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting EventSource","controller":"catalogsource-controller","source":"kind source: *v1alpha1.CatalogSource"} 2026-01-20T10:50:59.608701701+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting Controller","controller":"catalogsource-controller"} 2026-01-20T10:50:59.610189944+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting EventSource","controller":"operatorhub-controller","source":"kind source: *v1.OperatorHub"} 2026-01-20T10:50:59.610246796+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting Controller","controller":"operatorhub-controller"} 2026-01-20T10:50:59.712926445+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting workers","controller":"configmap-controller","worker count":1} 2026-01-20T10:50:59.713471940+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="Reconciling ConfigMap openshift-marketplace/marketplace-trusted-ca" 2026-01-20T10:50:59.714750416+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="[ca] Certificate Authorization ConfigMap openshift-marketplace/marketplace-trusted-ca is in sync with disk." 
name=marketplace-trusted-ca type=ConfigMap 2026-01-20T10:50:59.720818131+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting workers","controller":"catalogsource-controller","worker count":1} 2026-01-20T10:50:59.733753714+00:00 stderr F {"level":"info","ts":"2026-01-20T10:50:59Z","msg":"Starting workers","controller":"operatorhub-controller","worker count":1} 2026-01-20T10:50:59.741236850+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="Reconciling OperatorHub cluster" 2026-01-20T10:50:59.741460386+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:50:59.741511168+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec" 2026-01-20T10:50:59.741546909+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:50:59.741596280+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:51:00.916241124+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec" 2026-01-20T10:51:03.114631103+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec" 2026-01-20T10:51:05.910670292+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:51:06.916881932+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="[defaults] CatalogSource community-operators is annotated and 
its spec is the same as the default spec" 2026-01-20T10:51:31.546574739+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:51:32.941463398+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec" 2026-01-20T10:51:35.741117056+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:51:36.942008768+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:51:40.011165810+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:51:45.992950040+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:51:46.598645660+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:52:37.595495274+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2026-01-20T10:57:29.847974311+00:00 stderr F E0120 10:57:29.847895 1 leaderelection.go:332] error retrieving resource lock openshift-marketplace/marketplace-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-marketplace/leases/marketplace-operator-lock": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.log
2025-08-13T20:01:08.019466651+00:00 stderr F I0813 20:01:08.013488 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc0005aa140 max-eligible-revision:0xc000387ea0 protected-revisions:0xc000387f40 resource-dir:0xc0005aa000 static-pod-name:0xc0005aa0a0 v:0xc0005aa820] [0xc0005aa820 0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa140 0xc0005aa0a0] [] map[cert-dir:0xc0005aa140 help:0xc0005aabe0 log-flush-frequency:0xc0005aa780 max-eligible-revision:0xc000387ea0 protected-revisions:0xc000387f40 resource-dir:0xc0005aa000 static-pod-name:0xc0005aa0a0 v:0xc0005aa820 vmodule:0xc0005aa8c0] [0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa0a0 0xc0005aa140 0xc0005aa780 0xc0005aa820 0xc0005aa8c0 0xc0005aabe0] [0xc0005aa140 0xc0005aabe0 0xc0005aa780 0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa0a0
0xc0005aa820 0xc0005aa8c0] map[104:0xc0005aabe0 118:0xc0005aa820] [] -1 0 0xc000333a10 true 0x73b100 []} 2025-08-13T20:01:08.051902776+00:00 stderr F I0813 20:01:08.050031 1 cmd.go:41] (*prune.PruneOptions)(0xc000488eb0)({ 2025-08-13T20:01:08.051902776+00:00 stderr F MaxEligibleRevision: (int) 10, 2025-08-13T20:01:08.051902776+00:00 stderr F ProtectedRevisions: ([]int) (len=7 cap=7) { 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 4, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 5, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 6, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 7, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 8, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 9, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 10 2025-08-13T20:01:08.051902776+00:00 stderr F }, 2025-08-13T20:01:08.051902776+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.051902776+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T20:01:08.051902776+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T20:01:08.051902776+00:00 stderr F })
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.log
2025-08-13T19:59:46.011585192+00:00 stderr F I0813 19:59:45.901523 1 plugin.go:44] Starting Prometheus metrics endpoint server 2025-08-13T19:59:46.064059488+00:00 stderr F I0813 19:59:46.054048 1 plugin.go:47] Starting new HostPathDriver, config: {kubevirt.io.hostpath-provisioner unix:///csi/csi.sock crc map[] latest } 2025-08-13T19:59:54.824698355+00:00 stderr F I0813 19:59:54.823425 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.836695 1 hostpath.go:88] name: local, dataDir: /csi-data-dir 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.837369 1 hostpath.go:107] Driver: kubevirt.io.hostpath-provisioner, version: latest 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.839745 1 server.go:194] Starting domain socket: unix///csi/csi.sock 2025-08-13T19:59:54.846271330+00:00 stderr F I0813 19:59:54.843974 1 server.go:89] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"} 2025-08-13T19:59:56.223891950+00:00 stderr F I0813 19:59:56.210090 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T19:59:56.498880609+00:00 stderr F I0813 19:59:56.484339 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T19:59:58.205910298+00:00 stderr F I0813 19:59:58.122081 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T19:59:58.239965509+00:00 stderr F I0813 19:59:58.206037 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2025-08-13T20:00:06.991021332+00:00 stderr F I0813 20:00:06.970088 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T20:00:07.168862783+00:00 stderr F I0813 20:00:07.147043 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2025-08-13T20:00:07.353057946+00:00 stderr F I0813 20:00:07.320948 1 server.go:104] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 2025-08-13T20:00:07.589521437+00:00 stderr F I0813 20:00:07.589477 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2025-08-13T20:00:10.866698493+00:00 stderr F I0813 20:00:10.847387 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:00:11.139147861+00:00 stderr F I0813 20:00:11.038993 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543271 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543509 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 TargetPath:/var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543770 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount 2025-08-13T20:00:26.845955495+00:00 stderr F I0813 20:00:26.834162 1 server.go:104] GRPC call: 
/csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:26.990584009+00:00 stderr F I0813 20:00:26.989183 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.045191487+00:00 stderr F I0813 20:00:27.044369 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.073488273+00:00 stderr F I0813 20:00:27.073376 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.110229281+00:00 stderr F I0813 20:00:27.109569 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2025-08-13T20:00:27.132535977+00:00 stderr F I0813 20:00:27.110290 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-7cbd5666ff-bbfrf csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:42b6a393-6194-4620-bf8f-7e4b6cbe5679 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:27.701925913+00:00 stderr F I0813 20:00:27.692296 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.744057214+00:00 stderr F I0813 20:00:27.729731 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.744057214+00:00 stderr F I0813 20:00:27.738204 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 
2025-08-13T20:00:27.813944467+00:00 stderr F I0813 20:00:27.813337 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2025-08-13T20:00:27.813944467+00:00 stderr F I0813 20:00:27.813370 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75779c45fd-v2j2v csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:54.807024101+00:00 stderr F I0813 20:00:54.798046 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:54.807024101+00:00 stderr F I0813 20:00:54.805210 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:00:54.824138009+00:00 stderr F I0813 20:00:54.807706 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899397 1 healthcheck.go:84] fs available: 62292721664, total capacity: 85294297088, percentage available: 
73.03, number of free inodes: 41533206 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899696 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899712 1 nodeserver.go:330] Capacity: 85294297088 Used: 23001575424 Available: 62292721664 Inodes: 41680368 Free inodes: 41533206 Used inodes: 147162 2025-08-13T20:00:56.656209940+00:00 stderr F I0813 20:00:56.597000 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:01:10.384473536+00:00 stderr F I0813 20:01:10.384126 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:01:10.516433778+00:00 stderr F I0813 20:01:10.490297 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:01:10.516433778+00:00 stderr F I0813 20:01:10.490338 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:01:10.725402807+00:00 stderr F I0813 20:01:10.539162 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:01:10.742578247+00:00 stderr F I0813 20:01:10.725588 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012392 1 healthcheck.go:84] fs available: 61630750720, total capacity: 85294297088, percentage available: 72.26, number of free inodes: 41533186 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012581 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012604 1 nodeserver.go:330] Capacity: 85294297088 Used: 23663546368 Available: 61630750720 
Inodes: 41680368 Free inodes: 41533186 Used inodes: 147182 2025-08-13T20:01:56.740528286+00:00 stderr F I0813 20:01:56.740322 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:02:10.485513699+00:00 stderr F I0813 20:02:10.485234 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:02:10.485513699+00:00 stderr F I0813 20:02:10.485341 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:02:31.814900262+00:00 stderr F I0813 20:02:31.814574 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:02:31.816711174+00:00 stderr F I0813 20:02:31.816638 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:02:31.816934930+00:00 stderr F I0813 20:02:31.816735 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825011 1 healthcheck.go:84] fs available: 57135521792, total capacity: 85294297088, percentage available: 66.99, number of free inodes: 41516178 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825038 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825050 1 nodeserver.go:330] Capacity: 85294297088 Used: 28158775296 Available: 57135521792 Inodes: 41680368 Free inodes: 41516178 Used inodes: 164190 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225329 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225397 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 
TargetPath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225433 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount 2025-08-13T20:02:56.756340071+00:00 stderr F I0813 20:02:56.756230 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:03:10.486762909+00:00 stderr F I0813 20:03:10.486679 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:10.487170981+00:00 stderr F I0813 20:03:10.487152 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:11.493869339+00:00 stderr F I0813 20:03:11.493707 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:11.493869339+00:00 stderr F I0813 20:03:11.493746 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:13.499529645+00:00 stderr F I0813 20:03:13.499420 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:13.499529645+00:00 stderr F I0813 20:03:13.499457 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:17.502945632+00:00 stderr F I0813 20:03:17.502827 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:17.502945632+00:00 stderr F I0813 20:03:17.502899 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:25.509244321+00:00 stderr F I0813 20:03:25.508881 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:25.509244321+00:00 stderr F I0813 20:03:25.508945 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:41.517643887+00:00 stderr F I0813 20:03:41.517458 1 server.go:104] 
GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:41.517746540+00:00 stderr F I0813 20:03:41.517732 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:56.915135236+00:00 stderr F I0813 20:03:56.915006 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:04:01.000095908+00:00 stderr F I0813 20:04:00.999928 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:04:01.005914114+00:00 stderr F I0813 20:04:01.003058 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:04:01.005914114+00:00 stderr F I0813 20:04:01.003149 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:04:01.026741118+00:00 stderr F I0813 20:04:01.025128 1 healthcheck.go:84] fs available: 54261784576, total capacity: 85294297088, percentage available: 63.62, number of free inodes: 41488434 2025-08-13T20:04:01.026741118+00:00 stderr F I0813 20:04:01.025264 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:04:01.026741118+00:00 stderr F I0813 20:04:01.025369 1 nodeserver.go:330] Capacity: 85294297088 Used: 31032512512 Available: 54261784576 Inodes: 41680368 Free inodes: 41488434 Used inodes: 191934 2025-08-13T20:04:10.486988651+00:00 stderr F I0813 20:04:10.486697 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:10.487205307+00:00 stderr F I0813 20:04:10.487185 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:04:13.523738710+00:00 stderr F I0813 20:04:13.523676 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:13.524015298+00:00 stderr F 
I0813 20:04:13.523997 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:04:56.964622012+00:00 stderr F I0813 20:04:56.959874 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:05:10.491516851+00:00 stderr F I0813 20:05:10.491238 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:05:10.491516851+00:00 stderr F I0813 20:05:10.491302 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:05:46.708170333+00:00 stderr F I0813 20:05:46.708026 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:05:46.714759392+00:00 stderr F I0813 20:05:46.714656 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:05:46.714759392+00:00 stderr F I0813 20:05:46.714693 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728583 1 healthcheck.go:84] fs available: 47486980096, total capacity: 85294297088, percentage available: 55.67, number of free inodes: 41469070 2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728625 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728641 1 nodeserver.go:330] Capacity: 85294297088 Used: 37807316992 Available: 47486980096 Inodes: 41680368 Free inodes: 41469070 Used inodes: 211298 2025-08-13T20:05:56.984080993+00:00 stderr F I0813 20:05:56.983475 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:06:06.802203704+00:00 stderr F I0813 20:06:06.798708 1 server.go:104] GRPC 
call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:06.802203704+00:00 stderr F I0813 20:06:06.798750 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:06:10.487223827+00:00 stderr F I0813 20:06:10.487127 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:10.487332100+00:00 stderr F I0813 20:06:10.487318 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:06:21.529625288+00:00 stderr F I0813 20:06:21.529548 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:21.529924837+00:00 stderr F I0813 20:06:21.529905 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:06:57.096820633+00:00 stderr F I0813 20:06:57.092522 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:07:10.508947399+00:00 stderr F I0813 20:07:10.508715 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:07:10.513067847+00:00 stderr F I0813 20:07:10.510553 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:07:20.657193328+00:00 stderr F I0813 20:07:20.657106 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:07:20.695849096+00:00 stderr F I0813 20:07:20.685970 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:07:20.695849096+00:00 stderr F I0813 20:07:20.686005 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.733739 1 healthcheck.go:84] fs available: 49462222848, total capacity: 85294297088, percentage available: 57.99, number of free inodes: 41476143 
2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.738849 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.738935 1 nodeserver.go:330] Capacity: 85294297088 Used: 35832098816 Available: 49462198272 Inodes: 41680368 Free inodes: 41476150 Used inodes: 204218
2025-08-13T20:07:57.145481707+00:00 stderr F I0813 20:07:57.144413 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:08:10.502928778+00:00 stderr F I0813 20:08:10.499463 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:08:10.502928778+00:00 stderr F I0813 20:08:10.499507 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:08:36.367695340+00:00 stderr F I0813 20:08:36.367492 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:08:36.369372678+00:00 stderr F I0813 20:08:36.369306 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:08:36.369396839+00:00 stderr F I0813 20:08:36.369339 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380557 1 healthcheck.go:84] fs available: 51706716160, total capacity: 85294297088, percentage available: 60.62, number of free inodes: 41485057
2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380626 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380647 1 nodeserver.go:330] Capacity: 85294297088 Used: 33587580928 Available: 51706716160 Inodes: 41680368 Free inodes: 41485057 Used inodes: 195311
2025-08-13T20:08:57.162177496+00:00 stderr F I0813 20:08:57.160999 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:09:10.507345642+00:00 stderr F I0813 20:09:10.507012 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:09:10.507345642+00:00 stderr F I0813 20:09:10.507250 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:09:39.085725629+00:00 stderr F I0813 20:09:39.085497 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:09:39.090844026+00:00 stderr F I0813 20:09:39.090722 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:09:39.090984640+00:00 stderr F I0813 20:09:39.090767 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105406 1 healthcheck.go:84] fs available: 51679031296, total capacity: 85294297088, percentage available: 60.59, number of free inodes: 41484961
2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105490 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105509 1 nodeserver.go:330] Capacity: 85294297088 Used: 33615265792 Available: 51679031296 Inodes: 41680368 Free inodes: 41484961 Used inodes: 195407
2025-08-13T20:09:57.199103275+00:00 stderr F I0813 20:09:57.198850 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:10:10.518927925+00:00 stderr F I0813 20:10:10.505588 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:10:10.518927925+00:00 stderr F I0813 20:10:10.505631 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:10:36.529453897+00:00 stderr F I0813 20:10:36.528695 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:10:36.529453897+00:00 stderr F I0813 20:10:36.528761 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:10:40.251553782+00:00 stderr F I0813 20:10:40.251303 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:10:40.253756016+00:00 stderr F I0813 20:10:40.253705 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:10:40.253836788+00:00 stderr F I0813 20:10:40.253736 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:10:40.267351885+00:00 stderr F I0813 20:10:40.267295 1 healthcheck.go:84] fs available: 51647049728, total capacity: 85294297088, percentage available: 60.55, number of free inodes: 41484928
2025-08-13T20:10:40.267418467+00:00 stderr F I0813 20:10:40.267405 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:10:40.278090563+00:00 stderr F I0813 20:10:40.276155 1 nodeserver.go:330] Capacity: 85294297088 Used: 33647247360 Available: 51647049728 Inodes: 41680368 Free inodes: 41484928 Used inodes: 195440
2025-08-13T20:10:57.213663048+00:00 stderr F I0813 20:10:57.212333 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:11:10.503087476+00:00 stderr F I0813 20:11:10.502811 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:11:10.503087476+00:00 stderr F I0813 20:11:10.502957 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:11:57.227966364+00:00 stderr F I0813 20:11:57.227715 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:12:10.505112391+00:00 stderr F I0813 20:12:10.504953 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:12:10.505112391+00:00 stderr F I0813 20:12:10.505061 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:12:36.137386269+00:00 stderr F I0813 20:12:36.136412 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:12:36.141004743+00:00 stderr F I0813 20:12:36.140064 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:12:36.141004743+00:00 stderr F I0813 20:12:36.140092 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150177 1 healthcheck.go:84] fs available: 51640684544, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485041
2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150248 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150271 1 nodeserver.go:330] Capacity: 85294297088 Used: 33653612544 Available: 51640684544 Inodes: 41680368 Free inodes: 41485041 Used inodes: 195327
2025-08-13T20:12:57.251129275+00:00 stderr F I0813 20:12:57.251039 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:13:10.503850112+00:00 stderr F I0813 20:13:10.503729 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:13:10.503917984+00:00 stderr F I0813 20:13:10.503904 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:13:57.263200051+00:00 stderr F I0813 20:13:57.262977 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:14:04.803178800+00:00 stderr F I0813 20:14:04.802515 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:14:04.804445436+00:00 stderr F I0813 20:14:04.804387 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:14:04.805386423+00:00 stderr F I0813 20:14:04.804429 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811560 1 healthcheck.go:84] fs available: 51640279040, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485153
2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811598 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811614 1 nodeserver.go:330] Capacity: 85294297088 Used: 33654018048 Available: 51640279040 Inodes: 41680368 Free inodes: 41485153 Used inodes: 195215
2025-08-13T20:14:10.505047437+00:00 stderr F I0813 20:14:10.504870 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:14:10.505047437+00:00 stderr F I0813 20:14:10.504991 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:14:57.277850577+00:00 stderr F I0813 20:14:57.277638 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:15:10.509881435+00:00 stderr F I0813 20:15:10.505916 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:15:10.509881435+00:00 stderr F I0813 20:15:10.506155 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:15:51.769155450+00:00 stderr F I0813 20:15:51.769000 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:15:51.772506196+00:00 stderr F I0813 20:15:51.772372 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:15:51.772761773+00:00 stderr F I0813 20:15:51.772515 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781436 1 healthcheck.go:84] fs available: 51640188928, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485105
2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781516 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781541 1 nodeserver.go:330] Capacity: 85294297088 Used: 33654108160 Available: 51640188928 Inodes: 41680368 Free inodes: 41485105 Used inodes: 195263
2025-08-13T20:15:57.291996787+00:00 stderr F I0813 20:15:57.291842 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:16:10.506534710+00:00 stderr F I0813 20:16:10.506390 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:16:10.506534710+00:00 stderr F I0813 20:16:10.506435 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:16:57.304684138+00:00 stderr F I0813 20:16:57.304610 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:17:10.519147126+00:00 stderr F I0813 20:17:10.518943 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:17:10.523121470+00:00 stderr F I0813 20:17:10.519870 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:17:36.432045642+00:00 stderr F I0813 20:17:36.431604 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:17:36.438031932+00:00 stderr F I0813 20:17:36.435931 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:17:36.438031932+00:00 stderr F I0813 20:17:36.436033 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466277 1 healthcheck.go:84] fs available: 50719629312, total capacity: 85294297088, percentage available: 59.46, number of free inodes: 41481141
2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466319 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466336 1 nodeserver.go:330] Capacity: 85294297088 Used: 34574667776 Available: 50719629312 Inodes: 41680368 Free inodes: 41481141 Used inodes: 199227
2025-08-13T20:17:57.326616570+00:00 stderr F I0813 20:17:57.326474 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:18:10.526943734+00:00 stderr F I0813 20:18:10.524915 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:18:10.526943734+00:00 stderr F I0813 20:18:10.524956 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:18:57.355476292+00:00 stderr F I0813 20:18:57.355306 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:19:10.522212718+00:00 stderr F I0813 20:19:10.521996 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:19:10.522212718+00:00 stderr F I0813 20:19:10.522137 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:19:29.886707247+00:00 stderr F I0813 20:19:29.886609 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:19:29.890321280+00:00 stderr F I0813 20:19:29.890184 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:19:29.890591578+00:00 stderr F I0813 20:19:29.890222 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898142 1 healthcheck.go:84] fs available: 49281728512, total capacity: 85294297088, percentage available: 57.78, number of free inodes: 41478728
2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898180 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898196 1 nodeserver.go:330] Capacity: 85294297088 Used: 36012568576 Available: 49281728512 Inodes: 41680368 Free inodes: 41478728 Used inodes: 201640
2025-08-13T20:19:57.368016391+00:00 stderr F I0813 20:19:57.367718 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:20:10.519551602+00:00 stderr F I0813 20:20:10.519431 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:20:10.519551602+00:00 stderr F I0813 20:20:10.519479 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:20:57.389371162+00:00 stderr F I0813 20:20:57.389152 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:21:10.523616582+00:00 stderr F I0813 20:21:10.523457 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:21:10.523616582+00:00 stderr F I0813 20:21:10.523543 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:21:19.817840363+00:00 stderr F I0813 20:21:19.817595 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:21:19.821330213+00:00 stderr F I0813 20:21:19.820144 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:21:19.821330213+00:00 stderr F I0813 20:21:19.820181 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841120 1 healthcheck.go:84] fs available: 51637252096, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485141
2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841205 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841221 1 nodeserver.go:330] Capacity: 85294297088 Used: 33657044992 Available: 51637252096 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:21:57.403416385+00:00 stderr F I0813 20:21:57.403188 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:22:10.524463732+00:00 stderr F I0813 20:22:10.524337 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:22:10.524463732+00:00 stderr F I0813 20:22:10.524388 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:22:56.284454137+00:00 stderr F I0813 20:22:56.283874 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:22:56.287110313+00:00 stderr F I0813 20:22:56.287065 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:22:56.287344410+00:00 stderr F I0813 20:22:56.287175 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296459 1 healthcheck.go:84] fs available: 51642810368, total capacity: 85294297088, percentage available: 60.55, number of free inodes: 41485141
2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296511 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296527 1 nodeserver.go:330] Capacity: 85294297088 Used: 33651486720 Available: 51642810368 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:22:57.418258791+00:00 stderr F I0813 20:22:57.416314 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:23:10.527744748+00:00 stderr F I0813 20:23:10.527566 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:23:10.527744748+00:00 stderr F I0813 20:23:10.527615 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:23:57.431277872+00:00 stderr F I0813 20:23:57.431156 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:24:10.527718182+00:00 stderr F I0813 20:24:10.527537 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:24:10.527718182+00:00 stderr F I0813 20:24:10.527594 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:24:18.305041165+00:00 stderr F I0813 20:24:18.303307 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:24:18.306838946+00:00 stderr F I0813 20:24:18.306669 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:24:18.307206067+00:00 stderr F I0813 20:24:18.306756 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.314975 1 healthcheck.go:84] fs available: 51577217024, total capacity: 85294297088, percentage available: 60.47, number of free inodes: 41485141
2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.315035 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.315051 1 nodeserver.go:330] Capacity: 85294297088 Used: 33717080064 Available: 51577217024 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:24:57.451153962+00:00 stderr F I0813 20:24:57.450706 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:25:10.527768953+00:00 stderr F I0813 20:25:10.527627 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:25:10.527768953+00:00 stderr F I0813 20:25:10.527694 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:25:54.737841501+00:00 stderr F I0813 20:25:54.737523 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:25:54.740075855+00:00 stderr F I0813 20:25:54.739979 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:25:54.740249760+00:00 stderr F I0813 20:25:54.740049 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:25:54.764451082+00:00 stderr F I0813 20:25:54.764195 1 healthcheck.go:84] fs available: 51575119872, total capacity: 85294297088, percentage available: 60.47, number of free inodes: 41485141
2025-08-13T20:25:54.765378308+00:00 stderr F I0813 20:25:54.765255 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:25:54.765662736+00:00 stderr F I0813 20:25:54.765639 1 nodeserver.go:330] Capacity: 85294297088 Used: 33719177216 Available: 51575119872 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:25:57.469624523+00:00 stderr F I0813 20:25:57.469476 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:26:10.527753664+00:00 stderr F I0813 20:26:10.527598 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:26:10.527753664+00:00 stderr F I0813 20:26:10.527645 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:26:57.484049366+00:00 stderr F I0813 20:26:57.483879 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:27:10.562146652+00:00 stderr F I0813 20:27:10.561015 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:27:10.562146652+00:00 stderr F I0813 20:27:10.561118 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:27:16.955541565+00:00 stderr F I0813 20:27:16.955467 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:27:16.960923459+00:00 stderr F I0813 20:27:16.957556 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:27:16.960923459+00:00 stderr F I0813 20:27:16.957592 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986431 1 healthcheck.go:84] fs available: 50553176064, total capacity: 85294297088, percentage available: 59.27, number of free inodes: 41482383
2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986493 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986513 1 nodeserver.go:330] Capacity: 85294297088 Used: 34741121024 Available: 50553176064 Inodes: 41680368 Free inodes: 41482383 Used inodes: 197985
2025-08-13T20:27:57.502633244+00:00 stderr F I0813 20:27:57.502378 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:28:10.531412180+00:00 stderr F I0813 20:28:10.531096 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:28:10.531412180+00:00 stderr F I0813 20:28:10.531169 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:28:33.112173790+00:00 stderr F I0813 20:28:33.111824 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:28:33.115109274+00:00 stderr F I0813 20:28:33.114985 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:28:33.115441193+00:00 stderr F I0813 20:28:33.115092 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123446 1 healthcheck.go:84] fs available: 51568517120, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123954 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123971 1 nodeserver.go:330] Capacity: 85294297088 Used: 33725779968 Available: 51568517120 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:28:57.520944432+00:00 stderr F I0813 20:28:57.520680 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:29:10.533174696+00:00 stderr F I0813 20:29:10.532999 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:29:10.533311740+00:00 stderr F I0813 20:29:10.533291 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:29:44.340528574+00:00 stderr F I0813 20:29:44.340329 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:29:44.341679667+00:00 stderr F I0813 20:29:44.341619 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:29:44.341708008+00:00 stderr F I0813 20:29:44.341651 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:29:44.365910144+00:00 stderr F I0813 20:29:44.365397 1 healthcheck.go:84] fs available: 50792988672, total capacity: 85294297088, percentage available: 59.55, number of free inodes: 41482040
2025-08-13T20:29:44.366058328+00:00 stderr F I0813 20:29:44.365988 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:29:44.367612433+00:00 stderr F I0813 20:29:44.366118 1 nodeserver.go:330] Capacity: 85294297088 Used: 34501308416 Available: 50792988672 Inodes: 41680368 Free inodes: 41482040 Used inodes: 198328
2025-08-13T20:29:57.699870673+00:00 stderr F I0813 20:29:57.699698 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:30:10.532263930+00:00 stderr F I0813 20:30:10.532146 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:30:10.532459155+00:00 stderr F I0813 20:30:10.532436 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:30:57.725149969+00:00 stderr F I0813 20:30:57.725009 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:31:07.875860146+00:00 stderr F I0813 20:31:07.875731 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:31:07.878296166+00:00 stderr F I0813 20:31:07.878232 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:31:07.878572504+00:00 stderr F I0813 20:31:07.878335 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887145 1 healthcheck.go:84] fs available: 51566333952, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887179 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887194 1 nodeserver.go:330] Capacity: 85294297088 Used: 33727963136 Available: 51566333952 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:31:10.532145442+00:00 stderr F I0813 20:31:10.532100 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:31:10.532199013+00:00 stderr F I0813 20:31:10.532186 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:31:57.750300037+00:00 stderr F I0813 20:31:57.749893 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:32:10.533952959+00:00 stderr F I0813 20:32:10.533818 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:32:10.533986770+00:00 stderr F I0813 20:32:10.533975 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:32:21.647876845+00:00 stderr F I0813 20:32:21.647682 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:32:21.650811060+00:00 stderr F I0813 20:32:21.650714 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:32:21.651427357+00:00 stderr F I0813 20:32:21.650756 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660146 1 healthcheck.go:84] fs available: 51570003968, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660184 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660201 1 nodeserver.go:330] Capacity: 85294297088 Used: 33724293120 Available: 51570003968 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:32:57.765045545+00:00 stderr F I0813 20:32:57.764764 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:33:10.536131259+00:00 stderr F I0813 20:33:10.535921 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:33:10.536131259+00:00 stderr F I0813 20:33:10.535980 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:33:22.980927831+00:00 stderr F I0813 20:33:22.980505 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:33:22.984701520+00:00 stderr F I0813 20:33:22.984616 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:33:22.984751711+00:00 stderr F I0813 20:33:22.984661 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:33:22.993114092+00:00 stderr F I0813 20:33:22.993024 1 healthcheck.go:84] fs available: 51570139136, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:33:22.993114092+00:00 stderr F I0813 20:33:22.993106 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:33:22.993139062+00:00 stderr F I0813 20:33:22.993127 1 nodeserver.go:330] Capacity: 85294297088 Used: 33724157952 Available: 51570139136 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:33:57.792135620+00:00 stderr F I0813 20:33:57.791627 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:34:10.537730761+00:00 stderr F I0813 20:34:10.537651 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:34:10.537912446+00:00 stderr F I0813 20:34:10.537894 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:34:57.805666005+00:00 stderr F I0813 20:34:57.805502 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:35:01.408516971+00:00 stderr F I0813 20:35:01.408392 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:35:01.410161969+00:00 stderr F I0813 20:35:01.410032 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:35:01.410270932+00:00 stderr F I0813 20:35:01.410066 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416222 1 healthcheck.go:84] fs available: 51570311168, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416257 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416277 1 nodeserver.go:330] Capacity: 85294297088 Used: 33723985920 Available: 51570311168 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:35:10.537402616+00:00 stderr F I0813 20:35:10.537200 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:35:10.537402616+00:00 stderr F I0813 20:35:10.537241 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:35:57.828048567+00:00 stderr F I0813 20:35:57.827894 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:36:10.540230706+00:00 stderr F I0813 20:36:10.540159 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:36:10.540230706+00:00 stderr F I0813 20:36:10.540203 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:36:45.003879095+00:00 stderr F I0813 20:36:44.997700 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:36:45.007661814+00:00 stderr F I0813 20:36:45.006888 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:36:45.007661814+00:00 stderr F I0813 20:36:45.006921 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017730 1 healthcheck.go:84] fs available: 51568951296, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485140
2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017765 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017833 1 nodeserver.go:330] Capacity: 85294297088 Used: 33725345792 Available: 51568951296 Inodes: 41680368 Free inodes: 41485140 Used inodes: 195228
2025-08-13T20:36:57.844487042+00:00 stderr F I0813 20:36:57.844282 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:37:10.538597061+00:00 stderr F I0813 20:37:10.538482 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:37:10.538597061+00:00 stderr F I0813 20:37:10.538537 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:37:57.863257984+00:00 stderr F I0813 20:37:57.862999 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:38:10.540288405+00:00 stderr F I0813 20:38:10.540127 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:38:10.540288405+00:00 stderr F I0813 20:38:10.540205 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:38:36.464710306+00:00 stderr F I0813 20:38:36.464263 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:38:36.467944479+00:00 stderr F I0813 20:38:36.466838 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:38:36.467944479+00:00 stderr F I0813 20:38:36.466865 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477635 1 healthcheck.go:84] fs available: 51565973504, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485098
2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477678 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477694 1 nodeserver.go:330] Capacity: 85294297088 Used: 33728323584 Available: 51565973504 Inodes: 41680368 Free inodes: 41485098 Used inodes: 195270
2025-08-13T20:38:57.880440552+00:00 stderr F I0813 20:38:57.880262 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:39:10.541717859+00:00 stderr F I0813 20:39:10.541462 1 server.go:104] GRPC
call: /csi.v1.Controller/GetCapacity 2025-08-13T20:39:10.541717859+00:00 stderr F I0813 20:39:10.541506 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:39:57.907047914+00:00 stderr F I0813 20:39:57.905077 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:40:10.541471925+00:00 stderr F I0813 20:40:10.541375 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:40:10.541471925+00:00 stderr F I0813 20:40:10.541406 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:40:35.536596687+00:00 stderr F I0813 20:40:35.536482 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:40:35.538166962+00:00 stderr F I0813 20:40:35.537987 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:40:35.538166962+00:00 stderr F I0813 20:40:35.538023 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546307 1 healthcheck.go:84] fs available: 51564703744, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485140 2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546353 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546369 1 nodeserver.go:330] Capacity: 85294297088 Used: 33729593344 Available: 51564703744 Inodes: 41680368 Free inodes: 41485140 Used inodes: 195228 2025-08-13T20:40:57.920707454+00:00 stderr F I0813 20:40:57.920519 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 
2025-08-13T20:41:10.545660471+00:00 stderr F I0813 20:41:10.545536 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:41:10.545660471+00:00 stderr F I0813 20:41:10.545577 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:41:57.937944436+00:00 stderr F I0813 20:41:57.937728 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:42:10.552582979+00:00 stderr F I0813 20:42:10.552449 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:42:10.552739754+00:00 stderr F I0813 20:42:10.552697 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:42:16.573332707+00:00 stderr F I0813 20:42:16.572654 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:42:16.586949320+00:00 stderr F I0813 20:42:16.577589 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:42:16.586949320+00:00 stderr F I0813 20:42:16.577636 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589379 1 healthcheck.go:84] fs available: 51561603072, total capacity: 85294297088, percentage available: 60.45, number of free inodes: 41485107 2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589439 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589487 1 nodeserver.go:330] Capacity: 85294297088 Used: 33732689920 Available: 51561607168 Inodes: 41680368 Free inodes: 41485108 Used inodes: 195260 ././@LongLink0000644000000000000000000000027400000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000004132315133657716033207 0ustar zuulzuul2026-01-20T10:49:36.663005501+00:00 stderr F I0120 10:49:36.637373 1 plugin.go:44] Starting Prometheus metrics endpoint server 2026-01-20T10:49:36.663005501+00:00 stderr F I0120 10:49:36.641212 1 plugin.go:47] Starting new HostPathDriver, config: {kubevirt.io.hostpath-provisioner unix:///csi/csi.sock crc map[] latest } 2026-01-20T10:49:36.915363898+00:00 stderr F I0120 10:49:36.904788 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS 2026-01-20T10:49:36.915363898+00:00 stderr F I0120 10:49:36.904866 1 hostpath.go:88] name: local, dataDir: /csi-data-dir 2026-01-20T10:49:36.915363898+00:00 stderr F I0120 10:49:36.904941 1 hostpath.go:107] Driver: kubevirt.io.hostpath-provisioner, version: latest 2026-01-20T10:49:36.915363898+00:00 stderr F I0120 10:49:36.905309 1 server.go:194] Starting domain socket: unix///csi/csi.sock 2026-01-20T10:49:36.915363898+00:00 stderr F I0120 10:49:36.905462 1 server.go:89] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"} 2026-01-20T10:49:37.068120830+00:00 stderr F I0120 10:49:37.066310 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:49:38.583844218+00:00 stderr F I0120 10:49:38.583524 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2026-01-20T10:49:39.000754507+00:00 stderr F I0120 10:49:38.997420 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2026-01-20T10:49:39.623814925+00:00 stderr F I0120 10:49:39.623296 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2026-01-20T10:49:40.398501051+00:00 stderr F I0120 10:49:40.398189 1 
server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2026-01-20T10:49:40.399003416+00:00 stderr F I0120 10:49:40.398961 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2026-01-20T10:49:40.400634847+00:00 stderr F I0120 10:49:40.399688 1 server.go:104] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 2026-01-20T10:49:40.401344458+00:00 stderr F I0120 10:49:40.400981 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2026-01-20T10:49:40.583618380+00:00 stderr F I0120 10:49:40.583531 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:49:40.583618380+00:00 stderr F I0120 10:49:40.583555 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:50:37.077762391+00:00 stderr F I0120 10:50:37.077661 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:50:40.568999200+00:00 stderr F I0120 10:50:40.568920 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:50:40.568999200+00:00 stderr F I0120 10:50:40.568948 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:51:34.018250984+00:00 stderr F I0120 10:51:34.018170 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:51:35.872233300+00:00 stderr F I0120 10:51:35.868806 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:51:35.876120880+00:00 stderr F I0120 10:51:35.876088 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:51:35.903019622+00:00 stderr F I0120 10:51:35.902946 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:51:35.904482804+00:00 stderr F I0120 10:51:35.904436 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2026-01-20T10:51:35.904579216+00:00 stderr F I0120 10:51:35.904451 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: 
TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75779c45fd-v2j2v csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2026-01-20T10:51:37.088991582+00:00 stderr F I0120 10:51:37.088722 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:51:40.570360632+00:00 stderr F I0120 10:51:40.570252 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:51:40.570505716+00:00 stderr F I0120 10:51:40.570487 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:52:37.099257918+00:00 stderr F I0120 10:52:37.098739 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:52:40.571113924+00:00 stderr F I0120 10:52:40.570987 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:52:40.571113924+00:00 stderr F I0120 10:52:40.571006 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:53:02.124572401+00:00 stderr F I0120 10:53:02.124468 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:53:02.126226624+00:00 stderr F I0120 10:53:02.126195 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 
2026-01-20T10:53:02.126502571+00:00 stderr F I0120 10:53:02.126220 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2026-01-20T10:53:02.134334204+00:00 stderr F I0120 10:53:02.134274 1 healthcheck.go:84] fs available: 44546015232, total capacity: 85294297088, percentage available: 52.23, number of free inodes: 41448668 2026-01-20T10:53:02.134334204+00:00 stderr F I0120 10:53:02.134316 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2026-01-20T10:53:02.134360434+00:00 stderr F I0120 10:53:02.134344 1 nodeserver.go:330] Capacity: 85294297088 Used: 40748281856 Available: 44546015232 Inodes: 41680320 Free inodes: 41448668 Used inodes: 231652 2026-01-20T10:53:37.117954505+00:00 stderr F I0120 10:53:37.117168 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:53:40.571972136+00:00 stderr F I0120 10:53:40.571860 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:53:40.571972136+00:00 stderr F I0120 10:53:40.571903 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:54:37.155693458+00:00 stderr F I0120 10:54:37.155596 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:54:40.572856970+00:00 stderr F I0120 10:54:40.572696 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:54:40.572856970+00:00 stderr F I0120 10:54:40.572748 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:54:54.647143967+00:00 stderr F I0120 10:54:54.647033 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:54:54.649288094+00:00 stderr F 
I0120 10:54:54.649208 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2026-01-20T10:54:54.649438768+00:00 stderr F I0120 10:54:54.649396 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2026-01-20T10:54:54.659455286+00:00 stderr F I0120 10:54:54.659366 1 healthcheck.go:84] fs available: 44523102208, total capacity: 85294297088, percentage available: 52.20, number of free inodes: 41448665 2026-01-20T10:54:54.659581739+00:00 stderr F I0120 10:54:54.659556 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2026-01-20T10:54:54.659737963+00:00 stderr F I0120 10:54:54.659655 1 nodeserver.go:330] Capacity: 85294297088 Used: 40771194880 Available: 44523102208 Inodes: 41680320 Free inodes: 41448665 Used inodes: 231655 2026-01-20T10:55:16.240609850+00:00 stderr F I0120 10:55:16.239443 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:55:16.241417563+00:00 stderr F I0120 10:55:16.241305 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:55:16.244733991+00:00 stderr F I0120 10:55:16.243623 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:55:16.245373207+00:00 stderr F I0120 10:55:16.245292 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2026-01-20T10:55:16.245417919+00:00 stderr F I0120 10:55:16.245319 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] 
VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75b7bb6564-ln84v csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2026-01-20T10:55:37.164208723+00:00 stderr F I0120 10:55:37.164122 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:55:40.573378341+00:00 stderr F I0120 10:55:40.573265 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:55:40.573378341+00:00 stderr F I0120 10:55:40.573289 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:56:02.048312150+00:00 stderr F I0120 10:56:02.048225 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume 2026-01-20T10:56:02.048312150+00:00 stderr F I0120 10:56:02.048247 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2026-01-20T10:56:02.048312150+00:00 stderr F I0120 10:56:02.048276 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount 2026-01-20T10:56:04.803045728+00:00 stderr F I0120 10:56:04.802955 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:56:04.811499995+00:00 stderr F I0120 10:56:04.810762 
1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2026-01-20T10:56:04.811524406+00:00 stderr F I0120 10:56:04.811475 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2026-01-20T10:56:04.819297004+00:00 stderr F I0120 10:56:04.819223 1 healthcheck.go:84] fs available: 44513927168, total capacity: 85294297088, percentage available: 52.19, number of free inodes: 41448653 2026-01-20T10:56:04.819297004+00:00 stderr F I0120 10:56:04.819263 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2026-01-20T10:56:04.819297004+00:00 stderr F I0120 10:56:04.819281 1 nodeserver.go:330] Capacity: 85294297088 Used: 40780369920 Available: 44513927168 Inodes: 41680320 Free inodes: 41448653 Used inodes: 231667 2026-01-20T10:56:37.175342024+00:00 stderr F I0120 10:56:37.175277 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:56:40.575259491+00:00 stderr F I0120 10:56:40.575194 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:56:40.575259491+00:00 stderr F I0120 10:56:40.575218 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:57:37.185684859+00:00 stderr F I0120 10:57:37.185614 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2026-01-20T10:57:40.575198906+00:00 stderr F I0120 10:57:40.575131 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:40.575198906+00:00 stderr F I0120 10:57:40.575157 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:57:41.578765175+00:00 stderr F I0120 10:57:41.578655 1 server.go:104] GRPC call: 
/csi.v1.Controller/GetCapacity 2026-01-20T10:57:41.578765175+00:00 stderr F I0120 10:57:41.578699 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:57:43.582211077+00:00 stderr F I0120 10:57:43.582142 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:43.582211077+00:00 stderr F I0120 10:57:43.582166 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:57:47.585703722+00:00 stderr F I0120 10:57:47.585633 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:47.585703722+00:00 stderr F I0120 10:57:47.585661 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:57:55.589676359+00:00 stderr F I0120 10:57:55.589570 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:55.589676359+00:00 stderr F I0120 10:57:55.589602 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:57:59.923374012+00:00 stderr F I0120 10:57:59.923134 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2026-01-20T10:57:59.924742217+00:00 stderr F I0120 10:57:59.924660 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2026-01-20T10:57:59.924742217+00:00 stderr F I0120 10:57:59.924694 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2026-01-20T10:57:59.935889730+00:00 stderr F I0120 10:57:59.935793 1 healthcheck.go:84] fs available: 44322746368, total capacity: 85294297088, percentage available: 51.96, number of free inodes: 41446621 2026-01-20T10:57:59.935889730+00:00 stderr F I0120 10:57:59.935830 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2026-01-20T10:57:59.935889730+00:00 
stderr F I0120 10:57:59.935853 1 nodeserver.go:330] Capacity: 85294297088 Used: 40971550720 Available: 44322746368 Inodes: 41680320 Free inodes: 41446621 Used inodes: 233699 2026-01-20T10:58:16.659460604+00:00 stderr F I0120 10:58:16.659370 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:58:16.659460604+00:00 stderr F I0120 10:58:16.659405 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:58:17.666149614+00:00 stderr F I0120 10:58:17.666089 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:58:17.666149614+00:00 stderr F I0120 10:58:17.666109 1 controllerserver.go:230] Checking capacity for storage pool local 2026-01-20T10:58:19.673386059+00:00 stderr F I0120 10:58:19.673313 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:58:19.673386059+00:00 stderr F I0120 10:58:19.673341 1 controllerserver.go:230] Checking capacity for storage pool local ././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015133657741033200 5ustar zuulzuul././@LongLink0000644000000000000000000000027500000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000000277515133657716033217 0ustar zuulzuul2026-01-20T10:49:38.556039481+00:00 stderr F I0120 10:49:38.550957 1 main.go:135] Version: v4.15.0-202406180807.p0.g9005584.assembly.stream.el8-0-g69b18c7-dirty 
2026-01-20T10:49:38.556039481+00:00 stderr F I0120 10:49:38.551796 1 main.go:136] Running node-driver-registrar in mode= 2026-01-20T10:49:38.556039481+00:00 stderr F I0120 10:49:38.551803 1 main.go:157] Attempting to open a gRPC connection with: "/csi/csi.sock" 2026-01-20T10:49:38.560497827+00:00 stderr F I0120 10:49:38.560463 1 main.go:164] Calling CSI driver to discover driver name 2026-01-20T10:49:38.661613387+00:00 stderr F I0120 10:49:38.645873 1 main.go:173] CSI driver name: "kubevirt.io.hostpath-provisioner" 2026-01-20T10:49:38.661613387+00:00 stderr F I0120 10:49:38.645931 1 node_register.go:55] Starting Registration Server at: /registration/kubevirt.io.hostpath-provisioner-reg.sock 2026-01-20T10:49:38.661613387+00:00 stderr F I0120 10:49:38.653104 1 node_register.go:64] Registration Server started at: /registration/kubevirt.io.hostpath-provisioner-reg.sock 2026-01-20T10:49:38.661872835+00:00 stderr F I0120 10:49:38.661850 1 node_register.go:88] Skipping HTTP server because endpoint is set to: "" 2026-01-20T10:49:38.994228258+00:00 stderr F I0120 10:49:38.994142 1 main.go:90] Received GetInfo call: &InfoRequest{} 2026-01-20T10:49:39.023045056+00:00 stderr F I0120 10:49:39.022989 1 main.go:101] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,} ././@LongLink0000644000000000000000000000027500000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000000277515133657716033217 0ustar zuulzuul2025-08-13T19:59:54.983940954+00:00 stderr F I0813 19:59:54.882907 1 main.go:135] Version: v4.15.0-202406180807.p0.g9005584.assembly.stream.el8-0-g69b18c7-dirty 2025-08-13T19:59:55.140220789+00:00 stderr F I0813 19:59:55.102275 1 main.go:136] Running 
node-driver-registrar in mode= 2025-08-13T19:59:55.336952618+00:00 stderr F I0813 19:59:55.271313 1 main.go:157] Attempting to open a gRPC connection with: "/csi/csi.sock" 2025-08-13T19:59:56.036679094+00:00 stderr F I0813 19:59:55.934459 1 main.go:164] Calling CSI driver to discover driver name 2025-08-13T19:59:56.994891017+00:00 stderr F I0813 19:59:56.988897 1 main.go:173] CSI driver name: "kubevirt.io.hostpath-provisioner" 2025-08-13T19:59:56.994891017+00:00 stderr F I0813 19:59:56.991755 1 node_register.go:55] Starting Registration Server at: /registration/kubevirt.io.hostpath-provisioner-reg.sock 2025-08-13T19:59:57.003131672+00:00 stderr F I0813 19:59:56.996236 1 node_register.go:64] Registration Server started at: /registration/kubevirt.io.hostpath-provisioner-reg.sock 2025-08-13T19:59:57.043204285+00:00 stderr F I0813 19:59:57.030985 1 node_register.go:88] Skipping HTTP server because endpoint is set to: "" 2025-08-13T19:59:58.136694005+00:00 stderr F I0813 19:59:58.083078 1 main.go:90] Received GetInfo call: &InfoRequest{} 2025-08-13T19:59:58.490232413+00:00 stderr F I0813 19:59:58.490135 1 main.go:101] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,} ././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015133657742033201 5ustar zuulzuul././@LongLink0000644000000000000000000000026700000000000011610 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000016544215133657716033220 0ustar zuulzuul2026-01-20T10:49:40.373743537+00:00 stderr F W0120 10:49:40.372418 1 feature_gate.go:241] Setting GA feature gate Topology=true. It will be removed in a future release. 2026-01-20T10:49:40.373743537+00:00 stderr F I0120 10:49:40.372732 1 feature_gate.go:249] feature gates: &{map[Topology:true]} 2026-01-20T10:49:40.373743537+00:00 stderr F I0120 10:49:40.373138 1 csi-provisioner.go:154] Version: v4.15.0-202406180807.p0.gce5a1a3.assembly.stream.el8-0-g9363464-dirty 2026-01-20T10:49:40.373743537+00:00 stderr F I0120 10:49:40.373147 1 csi-provisioner.go:177] Building kube configs for running in cluster... 2026-01-20T10:49:40.382459632+00:00 stderr F I0120 10:49:40.381744 1 connection.go:215] Connecting to unix:///csi/csi.sock 2026-01-20T10:49:40.391935842+00:00 stderr F I0120 10:49:40.391367 1 common.go:138] Probing CSI driver for readiness 2026-01-20T10:49:40.391935842+00:00 stderr F I0120 10:49:40.391407 1 connection.go:244] GRPC call: /csi.v1.Identity/Probe 2026-01-20T10:49:40.396420688+00:00 stderr F I0120 10:49:40.391412 1 connection.go:245] GRPC request: {} 2026-01-20T10:49:40.397672226+00:00 stderr F I0120 10:49:40.397641 1 connection.go:251] GRPC response: {} 2026-01-20T10:49:40.397672226+00:00 stderr F I0120 10:49:40.397660 1 connection.go:252] GRPC error: 2026-01-20T10:49:40.397689947+00:00 stderr F I0120 10:49:40.397677 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo 2026-01-20T10:49:40.397699407+00:00 stderr F I0120 10:49:40.397681 1 connection.go:245] GRPC request: {} 2026-01-20T10:49:40.398739689+00:00 stderr F I0120 10:49:40.398713 1 connection.go:251] GRPC response: 
{"name":"kubevirt.io.hostpath-provisioner","vendor_version":"latest"} 2026-01-20T10:49:40.398739689+00:00 stderr F I0120 10:49:40.398727 1 connection.go:252] GRPC error: 2026-01-20T10:49:40.398757029+00:00 stderr F I0120 10:49:40.398738 1 csi-provisioner.go:230] Detected CSI driver kubevirt.io.hostpath-provisioner 2026-01-20T10:49:40.398757029+00:00 stderr F I0120 10:49:40.398749 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2026-01-20T10:49:40.398788390+00:00 stderr F I0120 10:49:40.398753 1 connection.go:245] GRPC request: {} 2026-01-20T10:49:40.399532983+00:00 stderr F I0120 10:49:40.399510 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"Service":{"type":2}}}]} 2026-01-20T10:49:40.399532983+00:00 stderr F I0120 10:49:40.399523 1 connection.go:252] GRPC error: 2026-01-20T10:49:40.399547304+00:00 stderr F I0120 10:49:40.399530 1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 2026-01-20T10:49:40.399554594+00:00 stderr F I0120 10:49:40.399536 1 connection.go:245] GRPC request: {} 2026-01-20T10:49:40.400029358+00:00 stderr F I0120 10:49:40.400008 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":11}}}]} 2026-01-20T10:49:40.400029358+00:00 stderr F I0120 10:49:40.400020 1 connection.go:252] GRPC error: 2026-01-20T10:49:40.400677368+00:00 stderr F I0120 10:49:40.400656 1 csi-provisioner.go:302] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments 2026-01-20T10:49:40.400677368+00:00 stderr F I0120 10:49:40.400670 1 connection.go:244] GRPC call: /csi.v1.Node/NodeGetInfo 2026-01-20T10:49:40.400723349+00:00 stderr F I0120 10:49:40.400674 1 connection.go:245] GRPC request: {} 2026-01-20T10:49:40.403848714+00:00 stderr F I0120 10:49:40.403795 1 connection.go:251] GRPC response: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"node_id":"crc"} 2026-01-20T10:49:40.403848714+00:00 stderr F I0120 10:49:40.403826 1 connection.go:252] GRPC error: 2026-01-20T10:49:40.413696105+00:00 stderr F I0120 10:49:40.403893 1 csi-provisioner.go:351] using local topology with Node = &Node{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[topology.hostpath.csi/node:crc] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} and CSINode = &CSINode{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:CSINodeSpec{Drivers:[]CSINodeDriver{CSINodeDriver{Name:kubevirt.io.hostpath-provisioner,NodeID:crc,TopologyKeys:[topology.hostpath.csi/node],Allocatable:nil,},},},} 2026-01-20T10:49:40.464977446+00:00 stderr F I0120 10:49:40.449910 1 csi-provisioner.go:464] using apps/v1/DaemonSet csi-hostpathplugin as owner of CSIStorageCapacity objects 2026-01-20T10:49:40.464977446+00:00 stderr F I0120 10:49:40.449952 1 csi-provisioner.go:483] producing CSIStorageCapacity objects with fixed topology segment [topology.hostpath.csi/node: crc] 2026-01-20T10:49:40.464977446+00:00 stderr F I0120 10:49:40.453434 1 csi-provisioner.go:529] using the CSIStorageCapacity v1 API 2026-01-20T10:49:40.464977446+00:00 stderr F I0120 10:49:40.462773 1 capacity.go:339] Capacity Controller: topology changed: added [0xc000015c08 = topology.hostpath.csi/node: crc], removed [] 
2026-01-20T10:49:40.464977446+00:00 stderr F I0120 10:49:40.463191 1 controller.go:732] Using saving PVs to API server in background 2026-01-20T10:49:40.481273653+00:00 stderr F I0120 10:49:40.466375 1 reflector.go:289] Starting reflector *v1.CSIStorageCapacity (1h0m0s) from k8s.io/client-go/informers/factory.go:150 2026-01-20T10:49:40.481273653+00:00 stderr F I0120 10:49:40.466400 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2026-01-20T10:49:40.481273653+00:00 stderr F I0120 10:49:40.466472 1 reflector.go:289] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:150 2026-01-20T10:49:40.481273653+00:00 stderr F I0120 10:49:40.466487 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2026-01-20T10:49:40.481273653+00:00 stderr F I0120 10:49:40.466904 1 reflector.go:289] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:150 2026-01-20T10:49:40.481273653+00:00 stderr F I0120 10:49:40.466911 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150 2026-01-20T10:49:40.481273653+00:00 stderr F I0120 10:49:40.471656 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added 2026-01-20T10:49:40.481273653+00:00 stderr F I0120 10:49:40.471667 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566134 1 shared_informer.go:341] caches populated 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566168 1 shared_informer.go:341] caches populated 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566207 1 controller.go:811] Starting provisioner controller 
kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_baf21605-2a81-4b43-8cdb-fb16a53559dc! 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566261 1 volume_store.go:97] Starting save volume queue 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566341 1 capacity.go:243] Starting Capacity Controller 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566463 1 shared_informer.go:341] caches populated 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566653 1 reflector.go:289] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566665 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566469 1 capacity.go:339] Capacity Controller: topology changed: added [0xc000015c08 = topology.hostpath.csi/node: crc], removed [] 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566941 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566954 1 capacity.go:279] Initial number of topology segments 1, storage classes 1, potential CSIStorageCapacity objects 1 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566959 1 capacity.go:290] Checking for existing CSIStorageCapacity objects 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.566932 1 reflector.go:289] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.567047 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 
2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.567090 1 capacity.go:725] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 matches {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.567187 1 capacity.go:255] Started Capacity Controller 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.567208 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.567250 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.567358 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.567409 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:49:40.572117320+00:00 stderr F I0120 10:49:40.567419 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:49:40.591848081+00:00 stderr F I0120 10:49:40.584165 1 connection.go:251] GRPC response: {"available_capacity":52441284608,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:49:40.591848081+00:00 stderr F I0120 10:49:40.584185 1 connection.go:252] GRPC error: 2026-01-20T10:49:40.591848081+00:00 stderr F I0120 10:49:40.584205 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51212192Ki, new maximumVolumeSize 83295212Ki 
2026-01-20T10:49:40.594757779+00:00 stderr F I0120 10:49:40.594202 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 40458 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:49:40.594757779+00:00 stderr F I0120 10:49:40.594358 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 40458 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 51212192Ki 2026-01-20T10:49:40.668597499+00:00 stderr F I0120 10:49:40.667139 1 shared_informer.go:341] caches populated 2026-01-20T10:49:40.668597499+00:00 stderr F I0120 10:49:40.667510 1 controller.go:860] Started provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_baf21605-2a81-4b43-8cdb-fb16a53559dc! 2026-01-20T10:49:40.672091215+00:00 stderr F I0120 10:49:40.667553 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: 
local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2026-01-20T10:49:40.672091215+00:00 stderr F I0120 10:49:40.670078 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2026-01-20T10:49:40.672091215+00:00 stderr F I0120 10:49:40.670087 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2026-01-20T10:50:40.568363382+00:00 stderr F I0120 10:50:40.568268 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2026-01-20T10:50:40.568363382+00:00 stderr F I0120 10:50:40.568319 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:50:40.568363382+00:00 stderr F I0120 10:50:40.568343 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:50:40.568518606+00:00 stderr F I0120 10:50:40.568349 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:50:40.569441822+00:00 stderr F I0120 10:50:40.569404 1 connection.go:251] GRPC response: {"available_capacity":44676927488,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:50:40.569441822+00:00 stderr F I0120 10:50:40.569434 1 connection.go:252] GRPC error: 2026-01-20T10:50:40.569483644+00:00 stderr F I0120 10:50:40.569453 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43629812Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:50:40.583722594+00:00 stderr F I0120 10:50:40.583471 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41103 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 43629812Ki 2026-01-20T10:50:40.583722594+00:00 stderr F I0120 10:50:40.583686 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41103 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:51:40.569827877+00:00 stderr F I0120 10:51:40.568690 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2026-01-20T10:51:40.569827877+00:00 stderr F I0120 10:51:40.568744 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:51:40.569827877+00:00 stderr F I0120 10:51:40.568781 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:51:40.569827877+00:00 stderr F I0120 10:51:40.568786 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:51:40.572585846+00:00 stderr F I0120 10:51:40.571171 1 connection.go:251] GRPC response: {"available_capacity":44055433216,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:51:40.572585846+00:00 stderr F I0120 10:51:40.571185 1 connection.go:252] GRPC error: 2026-01-20T10:51:40.572585846+00:00 stderr F I0120 10:51:40.571204 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43022884Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:51:40.581400245+00:00 stderr F I0120 10:51:40.581336 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41515 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:51:40.581550049+00:00 stderr F I0120 10:51:40.581515 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41515 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 43022884Ki 2026-01-20T10:52:40.570302853+00:00 stderr F I0120 10:52:40.569579 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2026-01-20T10:52:40.570302853+00:00 stderr F I0120 10:52:40.569775 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:52:40.570302853+00:00 stderr F I0120 10:52:40.569824 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:52:40.570604941+00:00 stderr F I0120 10:52:40.569832 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:52:40.571704350+00:00 stderr F I0120 10:52:40.571668 1 connection.go:251] GRPC response: {"available_capacity":44545966080,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:52:40.571704350+00:00 stderr F I0120 10:52:40.571691 1 connection.go:252] GRPC error: 2026-01-20T10:52:40.571769881+00:00 stderr F I0120 10:52:40.571727 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43501920Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:52:40.586037320+00:00 stderr F I0120 10:52:40.585960 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41640 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:52:40.586567453+00:00 stderr F I0120 10:52:40.586527 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41640 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 43501920Ki 2026-01-20T10:53:40.571851163+00:00 stderr F I0120 10:53:40.570849 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2026-01-20T10:53:40.571851163+00:00 stderr F I0120 10:53:40.570952 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:53:40.571851163+00:00 stderr F I0120 10:53:40.570993 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:53:40.571851163+00:00 stderr F I0120 10:53:40.571001 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:53:40.572981243+00:00 stderr F I0120 10:53:40.572929 1 connection.go:251] GRPC response: {"available_capacity":44543975424,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:53:40.572981243+00:00 stderr F I0120 10:53:40.572950 1 connection.go:252] GRPC error: 2026-01-20T10:53:40.572999813+00:00 stderr F I0120 10:53:40.572976 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43499976Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:53:40.583917225+00:00 stderr F I0120 10:53:40.583838 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41794 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:53:40.583982096+00:00 stderr F I0120 10:53:40.583892 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41794 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 43499976Ki 2026-01-20T10:54:40.571722090+00:00 stderr F I0120 10:54:40.571541 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2026-01-20T10:54:40.571722090+00:00 stderr F I0120 10:54:40.571643 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:54:40.571854383+00:00 stderr F I0120 10:54:40.571729 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:54:40.571911345+00:00 stderr F I0120 10:54:40.571740 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:54:40.573351753+00:00 stderr F I0120 10:54:40.573275 1 connection.go:251] GRPC response: {"available_capacity":44522663936,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:54:40.573351753+00:00 stderr F I0120 10:54:40.573301 1 connection.go:252] GRPC error: 2026-01-20T10:54:40.573385874+00:00 stderr F I0120 10:54:40.573334 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43479164Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:54:40.589812032+00:00 stderr F I0120 10:54:40.589673 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41953 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:54:40.589960776+00:00 stderr F I0120 10:54:40.589866 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41953 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 43479164Ki 2026-01-20T10:55:30.485609950+00:00 stderr F I0120 10:55:30.484915 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 13 items received 2026-01-20T10:55:40.572629242+00:00 stderr F I0120 10:55:40.572466 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2026-01-20T10:55:40.572629242+00:00 stderr F I0120 10:55:40.572557 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:55:40.572629242+00:00 stderr F I0120 10:55:40.572611 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 
2026-01-20T10:55:40.572779676+00:00 stderr F I0120 10:55:40.572618 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:55:40.573864215+00:00 stderr F I0120 10:55:40.573755 1 connection.go:251] GRPC response: {"available_capacity":44521271296,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:55:40.573864215+00:00 stderr F I0120 10:55:40.573781 1 connection.go:252] GRPC error: 2026-01-20T10:55:40.573864215+00:00 stderr F I0120 10:55:40.573803 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43477804Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:55:40.584603201+00:00 stderr F I0120 10:55:40.584531 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 42260 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 43477804Ki 2026-01-20T10:55:40.584642242+00:00 stderr F I0120 10:55:40.584628 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 42260 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:55:47.474130355+00:00 stderr F I0120 10:55:47.473980 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received 2026-01-20T10:55:51.575321569+00:00 stderr F I0120 10:55:51.574327 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 7 items received 2026-01-20T10:56:40.574498711+00:00 stderr F I0120 10:56:40.573672 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 
2026-01-20T10:56:40.574498711+00:00 stderr F I0120 10:56:40.574436 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:56:40.574586503+00:00 stderr F I0120 10:56:40.574495 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:56:40.574713186+00:00 stderr F I0120 10:56:40.574501 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:56:40.575785785+00:00 stderr F I0120 10:56:40.575716 1 connection.go:251] GRPC response: {"available_capacity":44451770368,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:56:40.575785785+00:00 stderr F I0120 10:56:40.575773 1 connection.go:252] GRPC error: 2026-01-20T10:56:40.575875797+00:00 stderr F I0120 10:56:40.575834 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43409932Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:56:40.588705893+00:00 stderr F I0120 10:56:40.588631 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 43042 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 43409932Ki 2026-01-20T10:56:40.589054682+00:00 stderr F I0120 10:56:40.588981 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 43042 is already known to match {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:57:09.880669029+00:00 stderr F I0120 10:57:09.878534 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 7 items received 2026-01-20T10:57:09.883033401+00:00 stderr F I0120 10:57:09.882934 1 reflector.go:421] 
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=43008&timeout=9m6s&timeoutSeconds=546&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:09.914179345+00:00 stderr F I0120 10:57:09.911730 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 1 items received 2026-01-20T10:57:09.935583291+00:00 stderr F I0120 10:57:09.935501 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=43154&timeout=7m31s&timeoutSeconds=451&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:09.936634879+00:00 stderr F I0120 10:57:09.936545 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 7 items received 2026-01-20T10:57:09.936825124+00:00 stderr F I0120 10:57:09.936803 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 1 items received 2026-01-20T10:57:09.940356347+00:00 stderr F I0120 10:57:09.939774 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 3 items received 2026-01-20T10:57:09.951276176+00:00 stderr F I0120 10:57:09.950307 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43159&timeout=7m51s&timeoutSeconds=471&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:09.951730108+00:00 stderr F I0120 
10:57:09.951699 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43029&timeout=9m42s&timeoutSeconds=582&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:09.957540092+00:00 stderr F I0120 10:57:09.957217 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=43042&timeout=6m35s&timeoutSeconds=395&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:10.795215285+00:00 stderr F I0120 10:57:10.795106 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43029&timeout=5m4s&timeoutSeconds=304&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:10.988745272+00:00 stderr F I0120 10:57:10.988602 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=43008&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:11.198099848+00:00 stderr F I0120 10:57:11.197955 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=43042&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:11.413675239+00:00 stderr F I0120 10:57:11.412644 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43159&timeout=9m16s&timeoutSeconds=556&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:11.494203209+00:00 stderr F I0120 10:57:11.491256 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=43154&timeout=7m33s&timeoutSeconds=453&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:12.692117269+00:00 stderr F I0120 10:57:12.692030 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=43008&timeout=8m56s&timeoutSeconds=536&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:12.960403443+00:00 stderr F I0120 10:57:12.960306 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43029&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 10.217.4.1:443: 
connect: connection refused - backing off 2026-01-20T10:57:13.057857490+00:00 stderr F I0120 10:57:13.057799 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43159&timeout=6m3s&timeoutSeconds=363&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:13.756423125+00:00 stderr F I0120 10:57:13.756342 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=43154&timeout=9m0s&timeoutSeconds=540&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:14.146600023+00:00 stderr F I0120 10:57:14.146510 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=43042&timeout=6m34s&timeoutSeconds=394&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:16.932863166+00:00 stderr F I0120 10:57:16.932766 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43159&timeout=5m58s&timeoutSeconds=358&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:17.422972987+00:00 stderr F I0120 10:57:17.422820 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: 
watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=43042&timeout=6m7s&timeoutSeconds=367&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:17.595996663+00:00 stderr F I0120 10:57:17.595902 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=43008&timeout=5m40s&timeoutSeconds=340&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:17.687404460+00:00 stderr F I0120 10:57:17.687322 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43029&timeout=6m32s&timeoutSeconds=392&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:20.045966354+00:00 stderr F I0120 10:57:20.045873 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=43154&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:24.730308962+00:00 stderr F I0120 10:57:24.730216 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43159&timeout=6m2s&timeoutSeconds=362&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:25.296607688+00:00 stderr F I0120 10:57:25.296530 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=43008&timeout=9m22s&timeoutSeconds=562&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:26.678301057+00:00 stderr F I0120 10:57:26.678216 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=43029&timeout=8m8s&timeoutSeconds=488&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:27.326100509+00:00 stderr F I0120 10:57:27.326011 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=43154&timeout=7m38s&timeoutSeconds=458&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:29.583154718+00:00 stderr F I0120 10:57:29.579929 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=43042&timeout=9m2s&timeoutSeconds=542&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 
2026-01-20T10:57:40.574648192+00:00 stderr F I0120 10:57:40.574551 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2026-01-20T10:57:40.574648192+00:00 stderr F I0120 10:57:40.574613 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:57:40.574648192+00:00 stderr F I0120 10:57:40.574638 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:40.574736774+00:00 stderr F I0120 10:57:40.574642 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:57:40.575468743+00:00 stderr F I0120 10:57:40.575439 1 connection.go:251] GRPC response: {"available_capacity":44323704832,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:57:40.575468743+00:00 stderr F I0120 10:57:40.575450 1 connection.go:252] GRPC error: 2026-01-20T10:57:40.575485904+00:00 stderr F I0120 10:57:40.575466 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43284868Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:57:40.577226240+00:00 stderr F E0120 10:57:40.577188 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.577226240+00:00 stderr F W0120 10:57:40.577201 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc000015c08), storageClassName:"crc-csi-hostpath-provisioner"} after 0 failures 2026-01-20T10:57:40.930858622+00:00 stderr F 
I0120 10:57:40.930764 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=43008&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:41.167597362+00:00 stderr F I0120 10:57:41.164230 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=43154&timeout=7m0s&timeoutSeconds=420&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:41.578027616+00:00 stderr F I0120 10:57:41.577873 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:57:41.578027616+00:00 stderr F I0120 10:57:41.577934 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:41.578177890+00:00 stderr F I0120 10:57:41.577947 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:57:41.579089924+00:00 stderr F I0120 10:57:41.579031 1 connection.go:251] GRPC response: {"available_capacity":44323680256,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:57:41.579089924+00:00 stderr F I0120 10:57:41.579055 1 connection.go:252] GRPC error: 2026-01-20T10:57:41.579148145+00:00 stderr F I0120 10:57:41.579109 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43284844Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:57:41.581169329+00:00 stderr F E0120 
10:57:41.581099 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.581169329+00:00 stderr F W0120 10:57:41.581129 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc000015c08), storageClassName:"crc-csi-hostpath-provisioner"} after 1 failures 2026-01-20T10:57:43.581662693+00:00 stderr F I0120 10:57:43.581587 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:57:43.581662693+00:00 stderr F I0120 10:57:43.581644 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:43.581734295+00:00 stderr F I0120 10:57:43.581648 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:57:43.582825444+00:00 stderr F I0120 10:57:43.582560 1 connection.go:251] GRPC response: {"available_capacity":44323618816,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:57:43.582825444+00:00 stderr F I0120 10:57:43.582572 1 connection.go:252] GRPC error: 2026-01-20T10:57:43.582825444+00:00 stderr F I0120 10:57:43.582588 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43284784Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:57:43.583639356+00:00 stderr F E0120 10:57:43.583602 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": 
dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:43.583639356+00:00 stderr F W0120 10:57:43.583619 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc000015c08), storageClassName:"crc-csi-hostpath-provisioner"} after 2 failures 2026-01-20T10:57:47.584921890+00:00 stderr F I0120 10:57:47.584828 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:57:47.584953231+00:00 stderr F I0120 10:57:47.584924 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:47.585207138+00:00 stderr F I0120 10:57:47.584937 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:57:47.586283147+00:00 stderr F I0120 10:57:47.586238 1 connection.go:251] GRPC response: {"available_capacity":44323110912,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:57:47.586283147+00:00 stderr F I0120 10:57:47.586265 1 connection.go:252] GRPC error: 2026-01-20T10:57:47.586347009+00:00 stderr F I0120 10:57:47.586305 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43284288Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:57:47.588358431+00:00 stderr F E0120 10:57:47.588325 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.588375691+00:00 stderr F W0120 10:57:47.588350 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc000015c08), 
storageClassName:"crc-csi-hostpath-provisioner"} after 3 failures 2026-01-20T10:57:47.807147798+00:00 stderr F I0120 10:57:47.806991 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=43042&timeout=5m33s&timeoutSeconds=333&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2026-01-20T10:57:48.797852457+00:00 stderr F I0120 10:57:48.797771 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 43159 (43263) 2026-01-20T10:57:49.760532525+00:00 stderr F I0120 10:57:49.760436 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 43029 (43263) 2026-01-20T10:57:55.588838356+00:00 stderr F I0120 10:57:55.588778 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:57:55.588972740+00:00 stderr F I0120 10:57:55.588959 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:57:55.589116514+00:00 stderr F I0120 10:57:55.588988 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:57:55.590033239+00:00 stderr F I0120 10:57:55.590017 1 connection.go:251] GRPC response: {"available_capacity":44322549760,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:57:55.590089900+00:00 stderr F I0120 10:57:55.590078 1 connection.go:252] 
GRPC error: 2026-01-20T10:57:55.590144382+00:00 stderr F I0120 10:57:55.590124 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43283740Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:57:55.594825886+00:00 stderr F I0120 10:57:55.594802 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 43404 for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} with capacity 43283740Ki 2026-01-20T10:58:09.039303710+00:00 stderr F I0120 10:58:09.039226 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 43008 (43263) 2026-01-20T10:58:16.656687424+00:00 stderr F I0120 10:58:16.655687 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2026-01-20T10:58:16.658602633+00:00 stderr F I0120 10:58:16.658557 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added 2026-01-20T10:58:16.658602633+00:00 stderr F I0120 10:58:16.658583 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:58:16.658631553+00:00 stderr F I0120 10:58:16.658608 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:58:16.658645034+00:00 stderr F I0120 10:58:16.658633 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:58:16.658849799+00:00 stderr F I0120 10:58:16.658640 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:58:16.660181662+00:00 stderr F I0120 10:58:16.660154 1 connection.go:251] 
GRPC response: {"available_capacity":44323356672,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:58:16.660181662+00:00 stderr F I0120 10:58:16.660163 1 connection.go:252] GRPC error: 2026-01-20T10:58:16.660210263+00:00 stderr F I0120 10:58:16.660181 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43284528Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:58:16.664284557+00:00 stderr F E0120 10:58:16.664192 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}: Operation cannot be fulfilled on csistoragecapacities.storage.k8s.io "csisc-f2s8x": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:58:16.664284557+00:00 stderr F W0120 10:58:16.664207 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc000015c08), storageClassName:"crc-csi-hostpath-provisioner"} after 0 failures 2026-01-20T10:58:17.475980013+00:00 stderr F I0120 10:58:17.475910 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 43154 (43263) 2026-01-20T10:58:17.665416646+00:00 stderr F I0120 10:58:17.665333 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:58:17.665416646+00:00 stderr F I0120 10:58:17.665393 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:58:17.665652461+00:00 stderr F I0120 10:58:17.665402 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:58:17.666432531+00:00 stderr F I0120 10:58:17.666401 1 connection.go:251] GRPC response: 
{"available_capacity":44323356672,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:58:17.666432531+00:00 stderr F I0120 10:58:17.666423 1 connection.go:252] GRPC error: 2026-01-20T10:58:17.666481282+00:00 stderr F I0120 10:58:17.666453 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43284528Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:58:17.671155261+00:00 stderr F E0120 10:58:17.671125 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}: Operation cannot be fulfilled on csistoragecapacities.storage.k8s.io "csisc-f2s8x": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:58:17.671212223+00:00 stderr F W0120 10:58:17.671193 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc000015c08), storageClassName:"crc-csi-hostpath-provisioner"} after 1 failures 2026-01-20T10:58:19.672494375+00:00 stderr F I0120 10:58:19.671873 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner} 2026-01-20T10:58:19.672494375+00:00 stderr F I0120 10:58:19.672473 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2026-01-20T10:58:19.672555227+00:00 stderr F I0120 10:58:19.672479 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2026-01-20T10:58:19.673892061+00:00 stderr F I0120 10:58:19.673841 1 connection.go:251] GRPC response: {"available_capacity":44324585472,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2026-01-20T10:58:19.673892061+00:00 stderr F I0120 10:58:19.673855 1 connection.go:252] GRPC error: 2026-01-20T10:58:19.673892061+00:00 stderr 
F I0120 10:58:19.673876 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43285728Ki, new maximumVolumeSize 83295212Ki 2026-01-20T10:58:19.682580132+00:00 stderr F E0120 10:58:19.682551 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc000015c08 storageClassName:crc-csi-hostpath-provisioner}: Operation cannot be fulfilled on csistoragecapacities.storage.k8s.io "csisc-f2s8x": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:58:19.682639433+00:00 stderr F W0120 10:58:19.682620 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc000015c08), storageClassName:"crc-csi-hostpath-provisioner"} after 2 failures
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log
2025-08-13T20:00:06.445084695+00:00 stderr F W0813 20:00:06.443324 1 feature_gate.go:241] Setting GA feature gate Topology=true. It will be removed in a future release. 2025-08-13T20:00:06.445484987+00:00 stderr F I0813 20:00:06.445445 1 feature_gate.go:249] feature gates: &{map[Topology:true]} 2025-08-13T20:00:06.445541259+00:00 stderr F I0813 20:00:06.445526 1 csi-provisioner.go:154] Version: v4.15.0-202406180807.p0.gce5a1a3.assembly.stream.el8-0-g9363464-dirty 2025-08-13T20:00:06.445584310+00:00 stderr F I0813 20:00:06.445567 1 csi-provisioner.go:177] Building kube configs for running in cluster... 
2025-08-13T20:00:06.695957019+00:00 stderr F I0813 20:00:06.658088 1 connection.go:215] Connecting to unix:///csi/csi.sock 2025-08-13T20:00:06.730316519+00:00 stderr F I0813 20:00:06.730187 1 common.go:138] Probing CSI driver for readiness 2025-08-13T20:00:06.730426782+00:00 stderr F I0813 20:00:06.730385 1 connection.go:244] GRPC call: /csi.v1.Identity/Probe 2025-08-13T20:00:06.763211887+00:00 stderr F I0813 20:00:06.730446 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925372 1 connection.go:251] GRPC response: {} 2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925421 1 connection.go:252] GRPC error: 2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925443 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925449 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034174 1 connection.go:251] GRPC response: {"name":"kubevirt.io.hostpath-provisioner","vendor_version":"latest"} 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034258 1 connection.go:252] GRPC error: 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034275 1 csi-provisioner.go:230] Detected CSI driver kubevirt.io.hostpath-provisioner 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034292 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034297 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:07.272894790+00:00 stderr F I0813 20:00:07.242737 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"Service":{"type":2}}}]} 2025-08-13T20:00:07.293117656+00:00 stderr F I0813 20:00:07.273106 1 connection.go:252] GRPC error: 2025-08-13T20:00:07.293289511+00:00 stderr F I0813 20:00:07.293266 1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 
2025-08-13T20:00:07.293398524+00:00 stderr F I0813 20:00:07.293311 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:07.558471632+00:00 stderr F I0813 20:00:07.557998 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":11}}}]} 2025-08-13T20:00:07.558471632+00:00 stderr F I0813 20:00:07.558063 1 connection.go:252] GRPC error: 2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578185 1 csi-provisioner.go:302] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments 2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578240 1 connection.go:244] GRPC call: /csi.v1.Node/NodeGetInfo 2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578246 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:07.691395962+00:00 stderr F I0813 20:00:07.676466 1 connection.go:251] GRPC response: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"node_id":"crc"} 2025-08-13T20:00:07.691395962+00:00 stderr F I0813 20:00:07.676565 1 connection.go:252] GRPC error: 2025-08-13T20:00:07.929995405+00:00 stderr F I0813 20:00:07.676661 1 csi-provisioner.go:351] using local topology with Node = &Node{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[topology.hostpath.csi/node:crc] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} and 
CSINode = &CSINode{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:CSINodeSpec{Drivers:[]CSINodeDriver{CSINodeDriver{Name:kubevirt.io.hostpath-provisioner,NodeID:crc,TopologyKeys:[topology.hostpath.csi/node],Allocatable:nil,},},},} 2025-08-13T20:00:08.534874933+00:00 stderr F I0813 20:00:08.532659 1 csi-provisioner.go:464] using apps/v1/DaemonSet csi-hostpathplugin as owner of CSIStorageCapacity objects 2025-08-13T20:00:08.534874933+00:00 stderr F I0813 20:00:08.532718 1 csi-provisioner.go:483] producing CSIStorageCapacity objects with fixed topology segment [topology.hostpath.csi/node: crc] 2025-08-13T20:00:08.555891592+00:00 stderr F I0813 20:00:08.555419 1 csi-provisioner.go:529] using the CSIStorageCapacity v1 API 2025-08-13T20:00:08.558949119+00:00 stderr F I0813 20:00:08.556337 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0001296f8 = topology.hostpath.csi/node: crc], removed [] 2025-08-13T20:00:08.558949119+00:00 stderr F I0813 20:00:08.557178 1 controller.go:732] Using saving PVs to API server in background 2025-08-13T20:00:08.584915160+00:00 stderr F I0813 20:00:08.576298 1 reflector.go:289] Starting reflector *v1.CSIStorageCapacity (1h0m0s) from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.584915160+00:00 stderr F I0813 20:00:08.576380 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.601974936+00:00 stderr F I0813 20:00:08.591735 1 reflector.go:289] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.601974936+00:00 stderr F I0813 20:00:08.591826 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.617532940+00:00 stderr F I0813 20:00:08.612115 1 reflector.go:289] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:150 
2025-08-13T20:00:08.617532940+00:00 stderr F I0813 20:00:08.612165 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:10.156014488+00:00 stderr F I0813 20:00:10.115770 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added 2025-08-13T20:00:10.156148102+00:00 stderr F I0813 20:00:10.156111 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168114 1 shared_informer.go:341] caches populated 2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168191 1 shared_informer.go:341] caches populated 2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168371 1 controller.go:811] Starting provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_9c0cb162-9831-443d-a7c1-5af5632fc687! 2025-08-13T20:00:10.200235409+00:00 stderr F I0813 20:00:10.191948 1 capacity.go:243] Starting Capacity Controller 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380224 1 shared_informer.go:341] caches populated 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380278 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0001296f8 = topology.hostpath.csi/node: crc], removed [] 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380614 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380631 1 capacity.go:279] Initial number of topology segments 1, storage classes 1, potential CSIStorageCapacity objects 1 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380815 1 capacity.go:290] Checking for existing CSIStorageCapacity objects 2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380903 1 capacity.go:725] Capacity 
Controller: CSIStorageCapacity csisc-f2s8x with resource version 24362 matches {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380917 1 capacity.go:255] Started Capacity Controller 2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380933 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:00:10.380968402+00:00 stderr F I0813 20:00:10.380954 1 reflector.go:289] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-08-13T20:00:10.380968402+00:00 stderr F I0813 20:00:10.380961 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.384352 1 volume_store.go:97] Starting save volume queue 2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.385212 1 reflector.go:289] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.385224 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.609281 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 24362 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.635463 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.636202 1 connection.go:244] GRPC call: 
/csi.v1.Controller/GetCapacity 2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.636211 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:00:11.139235764+00:00 stderr F I0813 20:00:11.119364 1 connection.go:251] GRPC response: {"available_capacity":63507808256,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.358355 1 connection.go:252] GRPC error: 2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.358538 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 62019344Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.178084 1 shared_informer.go:341] caches populated 2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.359593 1 controller.go:860] Started provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_9c0cb162-9831-443d-a7c1-5af5632fc687! 
2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.359641 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi 
BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.620275 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.620441 1 controller.go:1260] shouldDelete 
volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:00:11.648860944+00:00 stderr F I0813 20:00:11.646741 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 29050 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:11.648860944+00:00 stderr F I0813 20:00:11.646990 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 29050 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 62019344Ki 2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.440600 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.441262 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.442478 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.442489 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.566852 1 connection.go:251] GRPC response: {"available_capacity":61631873024,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.567008 1 connection.go:252] GRPC error: 2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.567315 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 60187376Ki, new maximumVolumeSize 83295212Ki 
2025-08-13T20:01:14.381652061+00:00 stderr F I0813 20:01:14.376669 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 30349 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:01:14.446122080+00:00 stderr F I0813 20:01:14.427122 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 30349 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 60187376Ki 2025-08-13T20:02:10.481966068+00:00 stderr F I0813 20:02:10.481609 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:02:10.481966068+00:00 stderr F I0813 20:02:10.481897 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:02:10.482290887+00:00 stderr F I0813 20:02:10.482127 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:02:10.482566955+00:00 stderr F I0813 20:02:10.482150 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:02:10.486612800+00:00 stderr F I0813 20:02:10.486560 1 connection.go:251] GRPC response: {"available_capacity":57153896448,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:02:10.486612800+00:00 stderr F I0813 20:02:10.486584 1 connection.go:252] GRPC error: 2025-08-13T20:02:10.486756145+00:00 stderr F I0813 20:02:10.486636 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 55814352Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:02:13.039704301+00:00 stderr F I0813 20:02:13.038706 1 capacity.go:715] Capacity Controller: 
CSIStorageCapacity csisc-f2s8x with resource version 30628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:02:13.052607229+00:00 stderr F I0813 20:02:13.052485 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 30628 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 55814352Ki 2025-08-13T20:02:29.596263111+00:00 stderr F I0813 20:02:29.591484 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 2 items received 2025-08-13T20:02:29.643445047+00:00 stderr F I0813 20:02:29.643044 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 2 items received 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.676729 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 5 items received 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.677578 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 2 items received 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.679991 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 2 items received 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.682654 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=9m8s&timeoutSeconds=548&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.683529 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m32s&timeoutSeconds=512&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:29.701086371+00:00 stderr F I0813 20:02:29.690134 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m10s&timeoutSeconds=310&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:29.708374689+00:00 stderr F I0813 20:02:29.704397 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m6s&timeoutSeconds=366&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:29.708374689+00:00 stderr F I0813 20:02:29.707113 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:30.540496978+00:00 stderr F I0813 20:02:30.540338 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:30.583592947+00:00 stderr F I0813 20:02:30.583343 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m20s&timeoutSeconds=560&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:31.109182150+00:00 stderr F I0813 20:02:31.109009 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m44s&timeoutSeconds=464&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:31.124899568+00:00 stderr F I0813 20:02:31.124727 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m38s&timeoutSeconds=458&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:31.156989043+00:00 stderr F I0813 20:02:31.156761 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m2s&timeoutSeconds=302&watch=true": dial tcp 10.217.4.1:443: 
connect: connection refused - backing off 2025-08-13T20:02:32.744557062+00:00 stderr F I0813 20:02:32.744349 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:32.838680177+00:00 stderr F I0813 20:02:32.838465 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:33.163339149+00:00 stderr F I0813 20:02:33.163234 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=9m8s&timeoutSeconds=548&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:33.547760206+00:00 stderr F I0813 20:02:33.547306 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m10s&timeoutSeconds=370&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:33.892723506+00:00 stderr F I0813 20:02:33.892171 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: 
watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:36.877874593+00:00 stderr F I0813 20:02:36.877703 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m23s&timeoutSeconds=443&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:38.534622675+00:00 stderr F I0813 20:02:38.534406 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.060985840+00:00 stderr F I0813 20:02:39.060316 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m12s&timeoutSeconds=372&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.439426896+00:00 stderr F I0813 20:02:39.439295 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=9m11s&timeoutSeconds=551&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.521892639+00:00 stderr F I0813 20:02:39.521633 1 reflector.go:421] 
k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:44.532424188+00:00 stderr F I0813 20:02:44.530300 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=5m1s&timeoutSeconds=301&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:47.765023075+00:00 stderr F I0813 20:02:47.764585 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m18s&timeoutSeconds=498&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:48.193273602+00:00 stderr F I0813 20:02:48.193156 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:49.033706956+00:00 stderr F I0813 20:02:49.033579 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:51.421538544+00:00 stderr F I0813 20:02:51.421281 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=6m19s&timeoutSeconds=379&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:57.470707981+00:00 stderr F I0813 20:02:57.470627 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:07.083578567+00:00 stderr F I0813 20:03:07.083080 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:09.932190890+00:00 stderr F I0813 20:03:09.932042 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 10.217.4.1:443: 
connect: connection refused - backing off 2025-08-13T20:03:10.483036413+00:00 stderr F I0813 20:03:10.482545 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:03:10.483876137+00:00 stderr F I0813 20:03:10.483736 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:10.484152475+00:00 stderr F I0813 20:03:10.484091 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:10.484967098+00:00 stderr F I0813 20:03:10.484128 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:10.488412296+00:00 stderr F I0813 20:03:10.488337 1 connection.go:251] GRPC response: {"available_capacity":55023792128,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:10.488412296+00:00 stderr F I0813 20:03:10.488355 1 connection.go:252] GRPC error: 2025-08-13T20:03:10.488824748+00:00 stderr F I0813 20:03:10.488495 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53734172Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:10.490741163+00:00 stderr F E0813 20:03:10.490570 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:10.490741163+00:00 stderr F W0813 20:03:10.490611 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 0 failures 
2025-08-13T20:03:10.693011203+00:00 stderr F I0813 20:03:10.692499 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:11.492386517+00:00 stderr F I0813 20:03:11.492204 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:11.492601553+00:00 stderr F I0813 20:03:11.492514 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:11.493017445+00:00 stderr F I0813 20:03:11.492540 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:11.494756084+00:00 stderr F I0813 20:03:11.494698 1 connection.go:251] GRPC response: {"available_capacity":54914170880,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:11.494756084+00:00 stderr F I0813 20:03:11.494734 1 connection.go:252] GRPC error: 2025-08-13T20:03:11.496875165+00:00 stderr F I0813 20:03:11.494947 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53627120Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:11.497473372+00:00 stderr F E0813 20:03:11.497368 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.497473372+00:00 stderr F W0813 20:03:11.497402 1 
capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 1 failures 2025-08-13T20:03:13.498706382+00:00 stderr F I0813 20:03:13.498530 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:13.498706382+00:00 stderr F I0813 20:03:13.498619 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:13.498881047+00:00 stderr F I0813 20:03:13.498628 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:13.499975518+00:00 stderr F I0813 20:03:13.499936 1 connection.go:251] GRPC response: {"available_capacity":54619185152,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:13.499975518+00:00 stderr F I0813 20:03:13.499953 1 connection.go:252] GRPC error: 2025-08-13T20:03:13.500033699+00:00 stderr F I0813 20:03:13.499987 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53339048Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:13.501631665+00:00 stderr F E0813 20:03:13.501561 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.501631665+00:00 stderr F W0813 20:03:13.501614 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 2 failures 2025-08-13T20:03:16.036603780+00:00 stderr F I0813 20:03:16.036401 1 reflector.go:421] 
k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:17.501995015+00:00 stderr F I0813 20:03:17.501871 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:17.501995015+00:00 stderr F I0813 20:03:17.501973 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:17.502119369+00:00 stderr F I0813 20:03:17.501980 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:17.503699514+00:00 stderr F I0813 20:03:17.503642 1 connection.go:251] GRPC response: {"available_capacity":53907591168,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:17.503699514+00:00 stderr F I0813 20:03:17.503676 1 connection.go:252] GRPC error: 2025-08-13T20:03:17.503722575+00:00 stderr F I0813 20:03:17.503701 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 52644132Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:17.505298440+00:00 stderr F E0813 20:03:17.505246 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:17.505298440+00:00 stderr F W0813 20:03:17.505284 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 3 failures 2025-08-13T20:03:25.507222693+00:00 stderr F I0813 20:03:25.506678 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:25.507309735+00:00 stderr F I0813 20:03:25.507277 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:25.507887532+00:00 stderr F I0813 20:03:25.507288 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:25.509641382+00:00 stderr F I0813 20:03:25.509551 1 connection.go:251] GRPC response: {"available_capacity":52684136448,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:25.509641382+00:00 stderr F I0813 20:03:25.509589 1 connection.go:252] GRPC error: 2025-08-13T20:03:25.509764366+00:00 stderr F I0813 20:03:25.509677 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51449352Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:25.511732552+00:00 stderr F E0813 20:03:25.511669 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.511732552+00:00 stderr F W0813 20:03:25.511709 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 4 failures 
2025-08-13T20:03:41.513143198+00:00 stderr F I0813 20:03:41.512502 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:41.513143198+00:00 stderr F I0813 20:03:41.512892 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:41.513257972+00:00 stderr F I0813 20:03:41.512904 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:41.518480131+00:00 stderr F I0813 20:03:41.518403 1 connection.go:251] GRPC response: {"available_capacity":51869433856,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:41.518480131+00:00 stderr F I0813 20:03:41.518435 1 connection.go:252] GRPC error: 2025-08-13T20:03:41.518547163+00:00 stderr F I0813 20:03:41.518467 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50653744Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:41.520934581+00:00 stderr F E0813 20:03:41.520881 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:41.520934581+00:00 stderr F W0813 20:03:41.520922 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 5 failures 2025-08-13T20:03:42.890655574+00:00 stderr F I0813 20:03:42.890433 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m10s&timeoutSeconds=490&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:47.364616962+00:00 stderr F I0813 20:03:47.364496 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m45s&timeoutSeconds=405&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:48.306607314+00:00 stderr F I0813 20:03:48.306279 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m14s&timeoutSeconds=434&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:50.851697078+00:00 stderr F I0813 20:03:50.851564 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:52.214568860+00:00 stderr F I0813 20:03:52.214453 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing 
off 2025-08-13T20:04:10.484515361+00:00 stderr F I0813 20:04:10.484202 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:04:10.484515361+00:00 stderr F I0813 20:04:10.484433 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:04:10.484827289+00:00 stderr F I0813 20:04:10.484734 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:10.485284332+00:00 stderr F I0813 20:04:10.484765 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:04:10.488669139+00:00 stderr F I0813 20:04:10.488550 1 connection.go:251] GRPC response: {"available_capacity":53007642624,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:04:10.488948477+00:00 stderr F I0813 20:04:10.488876 1 connection.go:252] GRPC error: 2025-08-13T20:04:10.489021569+00:00 stderr F I0813 20:04:10.488938 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51765276Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:04:10.492210750+00:00 stderr F E0813 20:04:10.492117 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:10.492210750+00:00 stderr F W0813 20:04:10.492154 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 6 failures 2025-08-13T20:04:13.522250948+00:00 stderr 
F I0813 20:04:13.521990 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:04:13.522464694+00:00 stderr F I0813 20:04:13.522448 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:13.522983819+00:00 stderr F I0813 20:04:13.522486 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:04:13.524461261+00:00 stderr F I0813 20:04:13.524434 1 connection.go:251] GRPC response: {"available_capacity":52794683392,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:04:13.524668987+00:00 stderr F I0813 20:04:13.524619 1 connection.go:252] GRPC error: 2025-08-13T20:04:13.524953595+00:00 stderr F I0813 20:04:13.524744 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51557308Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:04:13.526714495+00:00 stderr F E0813 20:04:13.526603 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:13.526714495+00:00 stderr F W0813 20:04:13.526646 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 7 failures 2025-08-13T20:04:18.700132527+00:00 stderr F I0813 20:04:18.694650 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m24s&timeoutSeconds=504&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:30.683585878+00:00 stderr F I0813 20:04:30.683216 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=5m58s&timeoutSeconds=358&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:34.170544641+00:00 stderr F I0813 20:04:34.170234 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m13s&timeoutSeconds=373&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:42.112098456+00:00 stderr F I0813 20:04:42.111886 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=9m26s&timeoutSeconds=566&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:49.089169411+00:00 stderr F I0813 20:04:49.086947 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=5m27s&timeoutSeconds=327&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing 
off 2025-08-13T20:05:02.568697791+00:00 stderr F I0813 20:05:02.568367 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m4s&timeoutSeconds=484&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:05:10.486039784+00:00 stderr F I0813 20:05:10.485702 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:05:10.486039784+00:00 stderr F I0813 20:05:10.486018 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:05:10.486309361+00:00 stderr F I0813 20:05:10.486220 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:05:10.486683242+00:00 stderr F I0813 20:05:10.486245 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:05:10.492159129+00:00 stderr F I0813 20:05:10.492067 1 connection.go:251] GRPC response: {"available_capacity":48768638976,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:05:10.492159129+00:00 stderr F I0813 20:05:10.492101 1 connection.go:252] GRPC error: 2025-08-13T20:05:10.492188450+00:00 stderr F I0813 20:05:10.492134 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 47625624Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:05:10.495313319+00:00 stderr F E0813 20:05:10.494837 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:10.495313319+00:00 stderr F W0813 20:05:10.494884 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 8 failures 2025-08-13T20:05:27.772514028+00:00 stderr F I0813 20:05:27.772064 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity closed with: too old resource version: 30628 (30718) 2025-08-13T20:05:29.777518023+00:00 stderr F I0813 20:05:29.777231 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 30619 (30718) 2025-08-13T20:05:32.165918598+00:00 stderr F I0813 20:05:32.165240 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 30620 (30718) 2025-08-13T20:05:34.082920372+00:00 stderr F I0813 20:05:34.081109 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 30625 (30718) 2025-08-13T20:05:51.792116716+00:00 stderr F I0813 20:05:51.791632 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 30620 (30718) 2025-08-13T20:05:59.738435116+00:00 stderr F I0813 20:05:59.736072 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:05:59.787175102+00:00 stderr F I0813 20:05:59.786630 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 30628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 
2025-08-13T20:06:06.772732740+00:00 stderr F I0813 20:06:06.772544 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:06:06.795659576+00:00 stderr F I0813 20:06:06.795480 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added 2025-08-13T20:06:06.795721518+00:00 stderr F I0813 20:06:06.795633 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:06.796370387+00:00 stderr F I0813 20:06:06.796282 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:06.797268303+00:00 stderr F I0813 20:06:06.796407 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:06.798042965+00:00 stderr F I0813 20:06:06.796415 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:06:06.799757244+00:00 stderr F I0813 20:06:06.799632 1 connection.go:251] GRPC response: {"available_capacity":47480958976,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:06:06.799974140+00:00 stderr F I0813 20:06:06.799949 1 connection.go:252] GRPC error: 2025-08-13T20:06:06.800095603+00:00 stderr F I0813 20:06:06.800050 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46368124Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:06:06.820370124+00:00 stderr F I0813 20:06:06.819646 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 31987 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46368124Ki 2025-08-13T20:06:06.826149130+00:00 stderr F 
I0813 20:06:06.826035 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 31987 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486104 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486227 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486337 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486344 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516209 1 connection.go:251] GRPC response: {"available_capacity":47480971264,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516250 1 connection.go:252] GRPC error: 2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516275 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46368136Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:06:10.580150658+00:00 stderr F I0813 20:06:10.579169 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32004 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:10.583094132+00:00 stderr F I0813 20:06:10.582981 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource 
version 32004 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46368136Ki 2025-08-13T20:06:18.840921105+00:00 stderr F I0813 20:06:18.839541 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:06:21.527915969+00:00 stderr F I0813 20:06:21.527730 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:21.528032313+00:00 stderr F I0813 20:06:21.527996 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:21.528336031+00:00 stderr F I0813 20:06:21.528020 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:06:21.535661441+00:00 stderr F I0813 20:06:21.535546 1 connection.go:251] GRPC response: {"available_capacity":47480754176,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:06:21.535661441+00:00 stderr F I0813 20:06:21.535579 1 connection.go:252] GRPC error: 2025-08-13T20:06:21.535744454+00:00 stderr F I0813 20:06:21.535615 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46367924Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:06:21.550683701+00:00 stderr F I0813 20:06:21.550618 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32037 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46367924Ki 2025-08-13T20:06:21.551119974+00:00 stderr F I0813 20:06:21.551008 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32037 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 
2025-08-13T20:06:21.866016731+00:00 stderr F I0813 20:06:21.865906 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-08-13T20:06:28.050184360+00:00 stderr F I0813 20:06:28.044093 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-08-13T20:06:28.062771960+00:00 stderr F I0813 20:06:28.055746 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi 
BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:06:28.062771960+00:00 stderr F I0813 20:06:28.062736 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:06:28.062945775+00:00 stderr F I0813 20:06:28.062823 1 controller.go:1260] shouldDelete 
volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:07:10.490577272+00:00 stderr F I0813 20:07:10.490248 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:07:10.490577272+00:00 stderr F I0813 20:07:10.490483 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:07:10.491729745+00:00 stderr F I0813 20:07:10.490719 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:07:10.491729745+00:00 stderr F I0813 20:07:10.490752 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:07:10.513115619+00:00 stderr F I0813 20:07:10.512331 1 connection.go:251] GRPC response: {"available_capacity":49674403840,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:07:10.514217710+00:00 stderr F I0813 20:07:10.514163 1 connection.go:252] GRPC error: 2025-08-13T20:07:10.514511549+00:00 stderr F I0813 20:07:10.514232 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48510160Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:07:10.571925205+00:00 stderr F I0813 20:07:10.571615 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32389 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:07:10.573383717+00:00 stderr F I0813 20:07:10.571987 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32389 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48510160Ki 
2025-08-13T20:08:10.496212475+00:00 stderr F I0813 20:08:10.494950 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:08:10.496212475+00:00 stderr F I0813 20:08:10.495758 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:08:10.497057509+00:00 stderr F I0813 20:08:10.496985 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:08:10.497678547+00:00 stderr F I0813 20:08:10.497017 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505524 1 connection.go:251] GRPC response: {"available_capacity":51680923648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505565 1 connection.go:252] GRPC error: 2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505623 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50469652Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:08:10.529526140+00:00 stderr F I0813 20:08:10.529455 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32843 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50469652Ki 2025-08-13T20:08:10.529643024+00:00 stderr F I0813 20:08:10.529465 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32843 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:08:26.211449194+00:00 stderr F I0813 20:08:26.209243 1 reflector.go:790] 
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 1 items received 2025-08-13T20:08:26.232885748+00:00 stderr F I0813 20:08:26.226685 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 2 items received 2025-08-13T20:08:26.245728977+00:00 stderr F I0813 20:08:26.245648 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:26.245971094+00:00 stderr F I0813 20:08:26.245940 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:26.329062316+00:00 stderr F I0813 20:08:26.329006 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 7 items received 2025-08-13T20:08:26.338875157+00:00 stderr F I0813 20:08:26.337266 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 2 items received 2025-08-13T20:08:26.339112204+00:00 stderr F I0813 20:08:26.339083 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 2 items received 2025-08-13T20:08:26.339557827+00:00 stderr F I0813 20:08:26.339500 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=7m55s&timeoutSeconds=475&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:26.348191124+00:00 stderr F I0813 20:08:26.348106 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=9m15s&timeoutSeconds=555&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:26.354041382+00:00 stderr F I0813 20:08:26.353987 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m32s&timeoutSeconds=392&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:27.271516707+00:00 stderr F I0813 20:08:27.271446 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:27.443477858+00:00 stderr F I0813 20:08:27.441995 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=5m47s&timeoutSeconds=347&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - 
backing off
2025-08-13T20:08:27.459096985+00:00 stderr F I0813 20:08:27.458965 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m54s&timeoutSeconds=534&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:27.719648486+00:00 stderr F I0813 20:08:27.715522 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:27.915456340+00:00 stderr F I0813 20:08:27.915341 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:28.964473995+00:00 stderr F I0813 20:08:28.964297 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=7m31s&timeoutSeconds=451&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:29.650917816+00:00 stderr F I0813 20:08:29.650675 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:30.207930386+00:00 stderr F I0813 20:08:30.198136 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=9m44s&timeoutSeconds=584&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:30.226399856+00:00 stderr F I0813 20:08:30.219066 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=6m28s&timeoutSeconds=388&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:31.042481194+00:00 stderr F I0813 20:08:31.042351 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=6m39s&timeoutSeconds=399&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:33.424378073+00:00 stderr F I0813 20:08:33.424253 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:34.154934419+00:00 stderr F I0813 20:08:34.153059 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:34.169285610+00:00 stderr F I0813 20:08:34.169180 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:34.971095929+00:00 stderr F I0813 20:08:34.970921 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:36.935505080+00:00 stderr F I0813 20:08:36.935337 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=5m59s&timeoutSeconds=359&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:41.031694191+00:00 stderr F I0813 20:08:41.026092 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:42.459972361+00:00 stderr F I0813 20:08:42.453283 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:43.610085645+00:00 stderr F I0813 20:08:43.608699 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:44.909074679+00:00 stderr F I0813 20:08:44.907690 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m34s&timeoutSeconds=514&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:48.575034265+00:00 stderr F I0813 20:08:48.574857 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:57.009273752+00:00 stderr F I0813 20:08:57.009001 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=9m52s&timeoutSeconds=592&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:58.719116385+00:00 stderr F I0813 20:08:58.718969 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:09:00.268765025+00:00 stderr F I0813 20:09:00.265320 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=8m22s&timeoutSeconds=502&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:09:03.686105372+00:00 stderr F I0813 20:09:03.685709 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 32890 (32913)
2025-08-13T20:09:03.686105372+00:00 stderr F I0813 20:09:03.685724 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of
*v1.CSIStorageCapacity closed with: too old resource version: 32843 (32913)
2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498204 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498512 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498673 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498682 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509149 1 connection.go:251] GRPC response: {"available_capacity":51679956992,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509163 1 connection.go:252] GRPC error:
2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509258 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50468708Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:09:10.536886979+00:00 stderr F I0813 20:09:10.536467 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33017 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50468708Ki
2025-08-13T20:09:31.302131848+00:00 stderr F I0813 20:09:31.300214 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 32890 (32913)
2025-08-13T20:09:32.674389312+00:00 stderr F I0813 20:09:32.674302 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 32566 (32913)
2025-08-13T20:09:34.049559569+00:00 stderr F I0813 20:09:34.049452 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:09:34.054219852+00:00 stderr F I0813 20:09:34.054183 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33017 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:09:34.387063665+00:00 stderr F I0813 20:09:34.386506 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848
2025-08-13T20:09:42.139423570+00:00 stderr F I0813 20:09:42.139143 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 32800 (32913)
2025-08-13T20:10:10.500175528+00:00 stderr F I0813 20:10:10.499530 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:10:10.500175528+00:00 stderr F I0813 20:10:10.499996 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:10:10.500258670+00:00 stderr F I0813 20:10:10.500203 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:10:10.505378987+00:00 stderr F I0813 20:10:10.500259 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506508 1 connection.go:251] GRPC response: {"available_capacity":51678912512,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506547 1 connection.go:252] GRPC error:
2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506612 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50467688Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:10:10.525308928+00:00 stderr F I0813 20:10:10.525178 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33141 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:10:10.525308928+00:00 stderr F I0813 20:10:10.525256 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33141 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50467688Ki
2025-08-13T20:10:11.134723951+00:00 stderr F I0813 20:10:11.134630 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845
2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.139712 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},}
2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.140144 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97"
2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.140154 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released
2025-08-13T20:10:11.329857436+00:00 stderr F I0813 20:10:11.325844 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:10:36.518379150+00:00 stderr F I0813 20:10:36.518068 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:10:36.527029198+00:00 stderr F I0813 20:10:36.526895 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added
2025-08-13T20:10:36.527029198+00:00 stderr F I0813 20:10:36.527007 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:10:36.527070579+00:00 stderr F I0813 20:10:36.527043 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:10:36.527211473+00:00 stderr F I0813 20:10:36.527164 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:10:36.527713867+00:00 stderr F I0813 20:10:36.527203 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:10:36.529408726+00:00 stderr F I0813 20:10:36.529332 1 connection.go:251] GRPC response: {"available_capacity":51647598592,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:10:36.529408726+00:00 stderr F I0813 20:10:36.529359 1 connection.go:252] GRPC error:
2025-08-13T20:10:36.529526919+00:00 stderr F I0813 20:10:36.529439 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50437108Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:10:36.549439450+00:00 stderr F I0813 20:10:36.549272 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33214 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50437108Ki
2025-08-13T20:10:36.549439450+00:00 stderr F I0813 20:10:36.549307 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33214 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:11:10.500524073+00:00 stderr F I0813 20:11:10.500223 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:11:10.500600195+00:00 stderr F I0813 20:11:10.500577 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:11:10.500900653+00:00 stderr F I0813 20:11:10.500838 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:11:10.501486520+00:00 stderr F I0813 20:11:10.500871 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:11:10.504303841+00:00 stderr F I0813 20:11:10.504273 1 connection.go:251] GRPC response: {"available_capacity":51642920960,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:11:10.504357362+00:00 stderr F I0813 20:11:10.504341 1 connection.go:252] GRPC error:
2025-08-13T20:11:10.504544088+00:00 stderr F I0813 20:11:10.504452 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50432540Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:11:10.524081448+00:00 stderr F I0813 20:11:10.524021 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33386 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50432540Ki
2025-08-13T20:11:10.527750263+00:00 stderr F I0813 20:11:10.525990 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33386 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:12:10.502114835+00:00 stderr F I0813 20:12:10.501553 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:12:10.502316631+00:00 stderr F I0813 20:12:10.502191 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:12:10.502569629+00:00 stderr F I0813 20:12:10.502518 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:12:10.503161966+00:00 stderr F I0813 20:12:10.502541 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:12:10.505836932+00:00 stderr F I0813 20:12:10.505727 1 connection.go:251] GRPC response: {"available_capacity":51640741888,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:12:10.505978606+00:00 stderr F I0813 20:12:10.505891 1 connection.go:252] GRPC error:
2025-08-13T20:12:10.506156291+00:00 stderr F I0813 20:12:10.506014 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50430412Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:12:10.524858878+00:00 stderr F I0813 20:12:10.524703 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33496 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50430412Ki
2025-08-13T20:12:10.525074764+00:00 stderr F I0813 20:12:10.524990 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33496 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:13:10.502418991+00:00 stderr F I0813 20:13:10.502124 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:13:10.502418991+00:00 stderr F I0813 20:13:10.502280 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:13:10.502514664+00:00 stderr F I0813 20:13:10.502454 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:13:10.502838033+00:00 stderr F I0813 20:13:10.502495 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:13:10.504610113+00:00 stderr F I0813 20:13:10.504575 1 connection.go:251] GRPC response: {"available_capacity":51640733696,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:13:10.504610113+00:00 stderr F I0813 20:13:10.504588 1 connection.go:252] GRPC error:
2025-08-13T20:13:10.504770328+00:00 stderr F I0813 20:13:10.504630 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50430404Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:13:10.527331795+00:00 stderr F I0813 20:13:10.527240 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33585 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:13:10.527486770+00:00 stderr F I0813 20:13:10.527375 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33585 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50430404Ki
2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.502892 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.503054 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.503103 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:14:10.503470731+00:00 stderr F I0813 20:14:10.503113 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:14:10.505520070+00:00 stderr F I0813 20:14:10.505426 1 connection.go:251] GRPC response: {"available_capacity":51640299520,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:14:10.505520070+00:00 stderr F I0813 20:14:10.505470 1 connection.go:252] GRPC error:
2025-08-13T20:14:10.505621053+00:00 stderr F I0813 20:14:10.505495 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50429980Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:14:10.526839471+00:00 stderr F I0813 20:14:10.526667 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33694 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50429980Ki
2025-08-13T20:14:10.527307325+00:00 stderr F I0813 20:14:10.527213 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33694 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504159 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504294 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504337 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:15:10.504606455+00:00 stderr F I0813 20:15:10.504343 1 connection.go:245] GRPC request:
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:15:10.507081435+00:00 stderr F I0813 20:15:10.507023 1 connection.go:251] GRPC response: {"available_capacity":51639975936,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:15:10.507081435+00:00 stderr F I0813 20:15:10.507056 1 connection.go:252] GRPC error: 2025-08-13T20:15:10.507177058+00:00 stderr F I0813 20:15:10.507079 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50429664Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:15:10.556052854+00:00 stderr F I0813 20:15:10.555982 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33862 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50429664Ki 2025-08-13T20:15:10.556361263+00:00 stderr F I0813 20:15:10.556289 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33862 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:16:03.530489450+00:00 stderr F I0813 20:16:03.530193 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 6 items received 2025-08-13T20:16:10.505406177+00:00 stderr F I0813 20:16:10.505244 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:16:10.505517601+00:00 stderr F I0813 20:16:10.505459 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:16:10.505533831+00:00 stderr F I0813 20:16:10.505515 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 
2025-08-13T20:16:10.505725797+00:00 stderr F I0813 20:16:10.505523 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:16:10.507020333+00:00 stderr F I0813 20:16:10.506933 1 connection.go:251] GRPC response: {"available_capacity":51641487360,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:16:10.507020333+00:00 stderr F I0813 20:16:10.506989 1 connection.go:252] GRPC error: 2025-08-13T20:16:10.507103596+00:00 stderr F I0813 20:16:10.507039 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50431140Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:16:10.535623750+00:00 stderr F I0813 20:16:10.535510 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33987 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:16:10.536485935+00:00 stderr F I0813 20:16:10.536359 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33987 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50431140Ki 2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.512414 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.512837 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.513068 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.513080 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:17:10.521410071+00:00 stderr F I0813 20:17:10.521268 1 connection.go:251] GRPC response: {"available_capacity":51120128000,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:17:10.521410071+00:00 stderr F I0813 20:17:10.521396 1 connection.go:252] GRPC error: 2025-08-13T20:17:10.521606207+00:00 stderr F I0813 20:17:10.521489 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49922000Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:17:10.625718320+00:00 stderr F I0813 20:17:10.625536 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34153 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49922000Ki 2025-08-13T20:17:10.627687856+00:00 stderr F I0813 20:17:10.627461 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34153 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:17:25.400881884+00:00 stderr F I0813 20:17:25.400545 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 9 items received 2025-08-13T20:17:36.059047411+00:00 stderr F I0813 20:17:36.055432 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 18 items received 2025-08-13T20:17:53.142821593+00:00 stderr F I0813 20:17:53.142547 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 9 items received 2025-08-13T20:18:10.516188997+00:00 stderr F I0813 20:18:10.515307 1 capacity.go:518] 
Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:18:10.516438474+00:00 stderr F I0813 20:18:10.516409 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:18:10.517535065+00:00 stderr F I0813 20:18:10.516641 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:18:10.518358489+00:00 stderr F I0813 20:18:10.517614 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531300 1 connection.go:251] GRPC response: {"available_capacity":49833275392,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531345 1 connection.go:252] GRPC error: 2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531403 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48665308Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:18:11.155853054+00:00 stderr F I0813 20:18:11.155701 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34319 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:18:11.156017789+00:00 stderr F I0813 20:18:11.155957 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34319 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48665308Ki 2025-08-13T20:18:17.333243462+00:00 stderr F I0813 20:18:17.333025 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 9 items received 
2025-08-13T20:19:10.517415961+00:00 stderr F I0813 20:19:10.517177 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:19:10.517690579+00:00 stderr F I0813 20:19:10.517663 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:19:10.517998237+00:00 stderr F I0813 20:19:10.517936 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:19:10.518746609+00:00 stderr F I0813 20:19:10.518045 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523373 1 connection.go:251] GRPC response: {"available_capacity":49260212224,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523409 1 connection.go:252] GRPC error:
2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523467 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48105676Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:19:10.548734825+00:00 stderr F I0813 20:19:10.547481 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34447 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48105676Ki
2025-08-13T20:19:10.548734825+00:00 stderr F I0813 20:19:10.547860 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34447 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.517892 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518171 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518324 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518330 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:20:10.520223671+00:00 stderr F I0813 20:20:10.520172 1 connection.go:251] GRPC response: {"available_capacity":51637207040,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:20:10.520462378+00:00 stderr F I0813 20:20:10.520441 1 connection.go:252] GRPC error:
2025-08-13T20:20:10.520596282+00:00 stderr F I0813 20:20:10.520548 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50426960Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:20:10.562236352+00:00 stderr F I0813 20:20:10.561759 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34570 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:20:10.562236352+00:00 stderr F I0813 20:20:10.562004 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34570 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50426960Ki
2025-08-13T20:21:10.520579165+00:00 stderr F I0813 20:21:10.520199 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:21:10.520690438+00:00 stderr F I0813 20:21:10.520562 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:21:10.520998747+00:00 stderr F I0813 20:21:10.520895 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:21:10.521716037+00:00 stderr F I0813 20:21:10.520905 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:21:10.524537428+00:00 stderr F I0813 20:21:10.524468 1 connection.go:251] GRPC response: {"available_capacity":51637256192,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:21:10.524537428+00:00 stderr F I0813 20:21:10.524501 1 connection.go:252] GRPC error:
2025-08-13T20:21:10.525048933+00:00 stderr F I0813 20:21:10.524594 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50427008Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:21:10.583213255+00:00 stderr F I0813 20:21:10.583011 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34707 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:21:10.583606476+00:00 stderr F I0813 20:21:10.583515 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34707 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50427008Ki
2025-08-13T20:22:10.521492087+00:00 stderr F I0813 20:22:10.520940 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:22:10.521492087+00:00 stderr F I0813 20:22:10.521372 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:22:10.522094324+00:00 stderr F I0813 20:22:10.521708 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:22:10.522720792+00:00 stderr F I0813 20:22:10.521741 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:22:10.525556303+00:00 stderr F I0813 20:22:10.525450 1 connection.go:251] GRPC response: {"available_capacity":51641798656,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:22:10.525556303+00:00 stderr F I0813 20:22:10.525489 1 connection.go:252] GRPC error:
2025-08-13T20:22:10.525664026+00:00 stderr F I0813 20:22:10.525569 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50431444Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:22:10.541904360+00:00 stderr F I0813 20:22:10.541672 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34825 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:22:10.544800883+00:00 stderr F I0813 20:22:10.544685 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34825 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50431444Ki
2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.523137 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.523664 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.524175 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.524187 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.528753 1 connection.go:251] GRPC response: {"available_capacity":51645571072,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.528970 1 connection.go:252] GRPC error:
2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.529061 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50435128Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:23:10.544002692+00:00 stderr F I0813 20:23:10.543885 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34939 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50435128Ki
2025-08-13T20:23:10.544418204+00:00 stderr F I0813 20:23:10.544368 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34939 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:23:46.064547252+00:00 stderr F I0813 20:23:46.063669 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 13 items received
2025-08-13T20:24:10.525005884+00:00 stderr F I0813 20:24:10.524701 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:24:10.525238161+00:00 stderr F I0813 20:24:10.525152 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:24:10.525447487+00:00 stderr F I0813 20:24:10.525375 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:24:10.526138997+00:00 stderr F I0813 20:24:10.525406 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:24:10.528528825+00:00 stderr F I0813 20:24:10.528481 1 connection.go:251] GRPC response: {"available_capacity":51577241600,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:24:10.528583917+00:00 stderr F I0813 20:24:10.528567 1 connection.go:252] GRPC error:
2025-08-13T20:24:10.528880635+00:00 stderr F I0813 20:24:10.528721 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50368400Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:24:10.544500702+00:00 stderr F I0813 20:24:10.544404 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35044 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50368400Ki
2025-08-13T20:24:10.544734719+00:00 stderr F I0813 20:24:10.544673 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35044 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:24:21.337206815+00:00 stderr F I0813 20:24:21.336881 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received
2025-08-13T20:24:27.539097402+00:00 stderr F I0813 20:24:27.537890 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 9 items received
2025-08-13T20:24:48.149301469+00:00 stderr F I0813 20:24:48.149070 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 8 items received
2025-08-13T20:25:10.525298092+00:00 stderr F I0813 20:25:10.525174 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:25:10.525392985+00:00 stderr F I0813 20:25:10.525296 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:25:10.525691593+00:00 stderr F I0813 20:25:10.525633 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:25:10.526530837+00:00 stderr F I0813 20:25:10.525670 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:25:10.528561555+00:00 stderr F I0813 20:25:10.528481 1 connection.go:251] GRPC response: {"available_capacity":51577327616,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:25:10.528561555+00:00 stderr F I0813 20:25:10.528519 1 connection.go:252] GRPC error:
2025-08-13T20:25:10.528626707+00:00 stderr F I0813 20:25:10.528560 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50368484Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:25:10.566050317+00:00 stderr F I0813 20:25:10.565076 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35166 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50368484Ki
2025-08-13T20:25:10.566050317+00:00 stderr F I0813 20:25:10.565288 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35166 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:25:11.140600555+00:00 stderr F I0813 20:25:11.140487 1 reflector.go:378] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: forcing resync
2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.140929 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},}
2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.141596 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97"
2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.141725 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released
2025-08-13T20:25:11.332425400+00:00 stderr F I0813 20:25:11.330959 1 reflector.go:378] k8s.io/client-go/informers/factory.go:150: forcing resync
2025-08-13T20:26:07.408562415+00:00 stderr F I0813 20:26:07.408258 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 10 items received
2025-08-13T20:26:10.525908641+00:00 stderr F I0813 20:26:10.525712 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:26:10.526061135+00:00 stderr F I0813 20:26:10.525979 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:26:10.526190229+00:00 stderr F I0813 20:26:10.526140 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:26:10.526542699+00:00 stderr F I0813 20:26:10.526164 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:26:10.528577897+00:00 stderr F I0813 20:26:10.528486 1 connection.go:251] GRPC response: {"available_capacity":51575119872,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:26:10.528577897+00:00 stderr F I0813 20:26:10.528540 1 connection.go:252] GRPC error:
2025-08-13T20:26:10.528706281+00:00 stderr F I0813 20:26:10.528596 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50366328Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:26:10.540600391+00:00 stderr F I0813 20:26:10.540490 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35280 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:26:10.540904880+00:00 stderr F I0813 20:26:10.540752 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35280 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50366328Ki
2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527220 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527530 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527858 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527870 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:27:10.562296477+00:00 stderr F I0813 20:27:10.562169 1 connection.go:251] GRPC response: {"available_capacity":51174940672,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:27:10.562374879+00:00 stderr F I0813 20:27:10.562202 1 connection.go:252] GRPC error:
2025-08-13T20:27:10.562616176+00:00 stderr F I0813 20:27:10.562443 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49975528Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:27:10.621948322+00:00 stderr F I0813 20:27:10.621763 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35442 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:27:10.622171959+00:00 stderr F I0813 20:27:10.622064 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35442 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49975528Ki
2025-08-13T20:28:10.528586639+00:00 stderr F I0813 20:28:10.528237 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:28:10.528586639+00:00 stderr F I0813 20:28:10.528479 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:28:10.529034552+00:00 stderr F I0813 20:28:10.528970 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:28:10.529561847+00:00 stderr F I0813 20:28:10.529026 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:28:10.532628515+00:00 stderr F I0813 20:28:10.532243 1 connection.go:251] GRPC response: {"available_capacity":51567644672,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:28:10.532681407+00:00 stderr F I0813 20:28:10.532664 1 connection.go:252] GRPC error:
2025-08-13T20:28:10.532953675+00:00 stderr F I0813 20:28:10.532871 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50359028Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:28:10.559875859+00:00 stderr F I0813 20:28:10.556836 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35577 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:28:10.559875859+00:00 stderr F I0813 20:28:10.557475 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35577 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50359028Ki
2025-08-13T20:29:10.529715136+00:00 stderr F I0813 20:29:10.529278 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:29:10.529715136+00:00 stderr F I0813 20:29:10.529606 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:29:10.530144499+00:00 stderr F I0813 20:29:10.530084 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:29:10.530790507+00:00 stderr F I0813 20:29:10.530110 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:29:10.535148743+00:00 stderr F I0813 20:29:10.534987 1 connection.go:251] GRPC response: {"available_capacity":51567120384,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:29:10.535148743+00:00 stderr F I0813 20:29:10.535126 1 connection.go:252] GRPC error:
2025-08-13T20:29:10.535486382+00:00 stderr F I0813 20:29:10.535285 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50358516Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:29:10.549869346+00:00 stderr F I0813 20:29:10.549666 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35709 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:29:10.549869346+00:00 stderr F I0813 20:29:10.549727 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35709 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50358516Ki
2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530088 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530249 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530382 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:30:10.530663024+00:00 stderr F I0813 20:30:10.530389 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:30:10.535724309+00:00 stderr F I0813 20:30:10.535645 1 connection.go:251] GRPC response: {"available_capacity":49228713984,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:30:10.535724309+00:00 stderr F I0813 20:30:10.535675 1 connection.go:252] GRPC error:
2025-08-13T20:30:10.535972716+00:00 stderr F I0813 20:30:10.535726 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48074916Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:30:10.550323769+00:00 stderr F I0813 20:30:10.550147 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35890 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48074916Ki
2025-08-13T20:30:10.550751311+00:00 stderr F I0813 20:30:10.550690 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35890 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:30:27.073297549+00:00 stderr F I0813 20:30:27.073049 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 14 items received
2025-08-13T20:30:43.344585819+00:00 stderr F I0813 20:30:43.344328 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received
2025-08-13T20:30:44.158134804+00:00 stderr F I0813 20:30:44.158056 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received
2025-08-13T20:31:10.530875095+00:00 stderr F I0813 20:31:10.530527 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:31:10.530875095+00:00 stderr F I0813 20:31:10.530696 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:31:10.530951338+00:00 stderr F I0813 20:31:10.530912 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:31:10.531405681+00:00 stderr F I0813 20:31:10.530922 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:31:10.532597835+00:00 stderr F I0813 20:31:10.532491 1 connection.go:251] GRPC response: {"available_capacity":51566354432,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:31:10.532597835+00:00 stderr F I0813 20:31:10.532523 1 connection.go:252] GRPC error:
2025-08-13T20:31:10.532734639+00:00 stderr F I0813 20:31:10.532566 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50357768Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:31:10.544753214+00:00 stderr F I0813 20:31:10.544528 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36027 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:31:10.545745613+00:00 stderr F I0813 20:31:10.545608 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36027 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50357768Ki
2025-08-13T20:32:10.531836828+00:00 stderr F I0813 20:32:10.531466 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:32:10.531836828+00:00 stderr F I0813 20:32:10.531705 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:32:10.532556459+00:00 stderr F I0813 20:32:10.532005 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:32:10.532820336+00:00 stderr F I0813 20:32:10.532035 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:32:10.534975388+00:00 stderr F I0813 20:32:10.534850 1 connection.go:251] GRPC response: {"available_capacity":51568164864,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:32:10.534975388+00:00 stderr F I0813 20:32:10.534875 1 connection.go:252] GRPC error:
2025-08-13T20:32:10.535175014+00:00 stderr F I0813 20:32:10.534946 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50359536Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:32:10.553530152+00:00 stderr F I0813 20:32:10.553268 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36146 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:32:10.554085678+00:00 stderr F I0813 20:32:10.553945 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36146 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50359536Ki
2025-08-13T20:32:19.542178946+00:00 stderr F I0813 20:32:19.541133 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 9 items received
2025-08-13T20:32:39.416373564+00:00 stderr F I0813 20:32:39.416148 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 7 items received
2025-08-13T20:33:10.533275907+00:00 stderr F I0813 20:33:10.532719 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:33:10.533340859+00:00 stderr F I0813 20:33:10.533270 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:33:10.533504244+00:00 stderr F I0813 20:33:10.533448 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:33:10.534404340+00:00 stderr F I0813 20:33:10.533472 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:33:10.537336664+00:00 stderr F I0813 20:33:10.537200 1 connection.go:251] GRPC response: {"available_capacity":51570118656,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:33:10.537336664+00:00 stderr F I0813 20:33:10.537232 1 connection.go:252] GRPC error:
2025-08-13T20:33:10.537362225+00:00 stderr F I0813 20:33:10.537289 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50361444Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:33:10.550672547+00:00 stderr F I0813 20:33:10.550436 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36259 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50361444Ki
2025-08-13T20:33:10.551315246+00:00 stderr F I0813 20:33:10.551053 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36259 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:34:10.533967613+00:00 stderr F I0813 20:34:10.533496 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:34:10.533967613+00:00 stderr F I0813 20:34:10.533736 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:34:10.534398095+00:00 stderr F I0813 20:34:10.534321 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:34:10.536031192+00:00 stderr F I0813 20:34:10.534354 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:34:10.539150562+00:00 stderr F I0813 20:34:10.538875 1 connection.go:251] GRPC response: {"available_capacity":51570245632,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:34:10.539150562+00:00 stderr F I0813 20:34:10.539064 1 connection.go:252] GRPC error:
2025-08-13T20:34:10.539240014+00:00 stderr F I0813 20:34:10.539147 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50361568Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:34:10.554901905+00:00 stderr F I0813 20:34:10.554593 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36359 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:34:10.554901905+00:00 stderr F I0813 20:34:10.554608 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36359 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50361568Ki
2025-08-13T20:35:10.534873013+00:00 stderr F I0813 20:35:10.534360 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:35:10.534985776+00:00 stderr F I0813 20:35:10.534872 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:35:10.535289855+00:00 stderr F I0813 20:35:10.535233 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:35:10.536180410+00:00 stderr F I0813 20:35:10.535261 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:35:10.538021483+00:00 stderr F I0813 20:35:10.537966 1 connection.go:251] GRPC response: {"available_capacity":51569307648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:35:10.538021483+00:00 stderr F I0813 20:35:10.537992 1 connection.go:252] GRPC error:
2025-08-13T20:35:10.538226819+00:00 stderr F I0813 20:35:10.538058 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50360652Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:35:10.552433408+00:00 stderr F I0813 20:35:10.552356 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36494 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50360652Ki 2025-08-13T20:35:10.552599143+00:00 stderr F I0813 20:35:10.552576 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36494 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.535607 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.535982 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.536269 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.536277 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:36:10.541271176+00:00 stderr F I0813 20:36:10.541238 1 connection.go:251] GRPC response: {"available_capacity":51568975872,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:36:10.541353548+00:00 stderr F I0813 20:36:10.541337 1 connection.go:252] GRPC error: 2025-08-13T20:36:10.541559594+00:00 stderr F I0813 20:36:10.541489 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50360328Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:36:10.562043683+00:00 stderr F I0813 20:36:10.561909 1 capacity.go:715] Capacity Controller: 
CSIStorageCapacity csisc-f2s8x with resource version 36628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.562558298+00:00 stderr F I0813 20:36:10.562329 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36628 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50360328Ki 2025-08-13T20:36:15.162456315+00:00 stderr F I0813 20:36:15.162055 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-08-13T20:37:02.083170034+00:00 stderr F I0813 20:37:02.082585 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 13 items received 2025-08-13T20:37:10.537019775+00:00 stderr F I0813 20:37:10.536721 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:37:10.537075267+00:00 stderr F I0813 20:37:10.537017 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:37:10.537088937+00:00 stderr F I0813 20:37:10.537082 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:37:10.537475688+00:00 stderr F I0813 20:37:10.537088 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:37:10.539417454+00:00 stderr F I0813 20:37:10.539325 1 connection.go:251] GRPC response: {"available_capacity":51568967680,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:37:10.539417454+00:00 stderr F I0813 20:37:10.539367 1 connection.go:252] GRPC error: 2025-08-13T20:37:10.539441095+00:00 stderr F I0813 
20:37:10.539415 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49180Mi, new maximumVolumeSize 83295212Ki 2025-08-13T20:37:10.547160107+00:00 stderr F I0813 20:37:10.547063 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36750 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49180Mi 2025-08-13T20:37:10.547378274+00:00 stderr F I0813 20:37:10.547148 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36750 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:37:31.353038325+00:00 stderr F I0813 20:37:31.352700 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received 2025-08-13T20:38:10.537857165+00:00 stderr F I0813 20:38:10.537355 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:38:10.537937767+00:00 stderr F I0813 20:38:10.537858 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:38:10.539092880+00:00 stderr F I0813 20:38:10.539018 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:38:10.539377699+00:00 stderr F I0813 20:38:10.539057 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:38:10.540839291+00:00 stderr F I0813 20:38:10.540716 1 connection.go:251] GRPC response: {"available_capacity":51566211072,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:38:10.540839291+00:00 stderr F I0813 20:38:10.540743 1 
connection.go:252] GRPC error: 2025-08-13T20:38:10.540984755+00:00 stderr F I0813 20:38:10.540877 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50357628Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:38:10.575113129+00:00 stderr F I0813 20:38:10.574998 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36879 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:38:10.575113129+00:00 stderr F I0813 20:38:10.575017 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36879 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50357628Ki 2025-08-13T20:39:10.539029861+00:00 stderr F I0813 20:39:10.538567 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:39:10.539123284+00:00 stderr F I0813 20:39:10.539028 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:39:10.539391742+00:00 stderr F I0813 20:39:10.539323 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:39:10.540115593+00:00 stderr F I0813 20:39:10.539350 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542210 1 connection.go:251] GRPC response: {"available_capacity":51565502464,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542237 1 connection.go:252] GRPC error: 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542316 1 
capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50356936Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:39:10.555006812+00:00 stderr F I0813 20:39:10.554849 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37007 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50356936Ki 2025-08-13T20:39:10.555279120+00:00 stderr F I0813 20:39:10.555226 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37007 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540041 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540237 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540309 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:40:10.540570459+00:00 stderr F I0813 20:40:10.540314 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:40:10.542059762+00:00 stderr F I0813 20:40:10.541991 1 connection.go:251] GRPC response: {"available_capacity":51564032000,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:40:10.542059762+00:00 stderr F I0813 20:40:10.542021 1 connection.go:252] GRPC error: 2025-08-13T20:40:10.542082482+00:00 stderr F I0813 20:40:10.542041 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 
storageClassName:crc-csi-hostpath-provisioner}, new capacity 50355500Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:40:10.555260212+00:00 stderr F I0813 20:40:10.555106 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37130 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50355500Ki 2025-08-13T20:40:10.555260212+00:00 stderr F I0813 20:40:10.555237 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37130 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:11.141967927+00:00 stderr F I0813 20:40:11.141843 1 reflector.go:378] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: forcing resync 2025-08-13T20:40:11.142890584+00:00 stderr F I0813 20:40:11.142131 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: 
local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:40:11.142890584+00:00 stderr F I0813 20:40:11.142763 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:40:11.142914225+00:00 stderr F I0813 20:40:11.142894 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:40:11.331528343+00:00 stderr F I0813 20:40:11.331348 1 reflector.go:378] k8s.io/client-go/informers/factory.go:150: forcing resync 2025-08-13T20:41:10.541671996+00:00 stderr F I0813 20:41:10.541162 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:41:10.541847171+00:00 stderr F I0813 20:41:10.541674 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:41:10.542047867+00:00 stderr F I0813 20:41:10.541965 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:41:10.544402995+00:00 stderr F I0813 20:41:10.542001 1 
connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:41:10.546830845+00:00 stderr F I0813 20:41:10.546706 1 connection.go:251] GRPC response: {"available_capacity":51564699648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:41:10.546830845+00:00 stderr F I0813 20:41:10.546734 1 connection.go:252] GRPC error: 2025-08-13T20:41:10.546981129+00:00 stderr F I0813 20:41:10.546873 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50356152Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:41:10.566679587+00:00 stderr F I0813 20:41:10.566585 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37251 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:41:10.566918594+00:00 stderr F I0813 20:41:10.566893 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37251 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50356152Ki 2025-08-13T20:41:43.426306873+00:00 stderr F I0813 20:41:43.424528 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 9 items received 2025-08-13T20:41:52.548745175+00:00 stderr F I0813 20:41:52.546570 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 10 items received 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542362 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542512 1 capacity.go:574] 
Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542703 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:42:10.547095161+00:00 stderr F I0813 20:42:10.542711 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 20:42:10.555855 1 connection.go:251] GRPC response: {"available_capacity":49223258112,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 20:42:10.555892 1 connection.go:252] GRPC error: 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 20:42:10.555940 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48069588Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:42:10.582284396+00:00 stderr F I0813 20:42:10.582197 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37403 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48069588Ki 2025-08-13T20:42:10.584191091+00:00 stderr F I0813 20:42:10.584074 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:42:36.366462995+00:00 stderr F I0813 20:42:36.366327 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.366722473+00:00 stderr F I0813 20:42:36.366690 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 0 items received 
2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.372885 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.373023 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 11 items received 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.378923 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.378981 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 5 items received 2025-08-13T20:42:36.417150636+00:00 stderr F I0813 20:42:36.406967 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.417150636+00:00 stderr F I0813 20:42:36.407104 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-08-13T20:42:36.481197803+00:00 stderr F I0813 20:42:36.479176 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.492491459+00:00 stderr F I0813 20:42:36.490471 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 0 items received 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.604957 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=7m4s&timeoutSeconds=424&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605095 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=5m12s&timeoutSeconds=312&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605145 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605333 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.606900 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:37.849300036+00:00 stderr F I0813 20:42:37.849155 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get 
"https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:37.870661042+00:00 stderr F I0813 20:42:37.870079 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.091716185+00:00 stderr F I0813 20:42:38.089119 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.150465799+00:00 stderr F I0813 20:42:38.139752 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.205443034+00:00 stderr F I0813 20:42:38.205106 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 
2025-08-13T20:42:40.059831485+00:00 stderr F I0813 20:42:40.059687 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=6m21s&timeoutSeconds=381&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:42:40.085673111+00:00 stderr F I0813 20:42:40.085582 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=7m46s&timeoutSeconds=466&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:42:40.347258672+00:00 stderr F I0813 20:42:40.347129 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=5m7s&timeoutSeconds=307&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:42:40.721875683+00:00 stderr F I0813 20:42:40.721701 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:42:41.306913760+00:00 stderr F I0813 20:42:41.306567 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=5m13s&timeoutSeconds=313&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.log

2026-01-20T10:49:39.622840495+00:00 stderr F I0120 10:49:39.622348 1 main.go:149] calling CSI driver to discover driver name
2026-01-20T10:49:39.628503088+00:00 stderr F I0120 10:49:39.628462 1 main.go:155] CSI driver name: "kubevirt.io.hostpath-provisioner"
2026-01-20T10:49:39.628503088+00:00 stderr F I0120 10:49:39.628490 1 main.go:183] ServeMux listening at "0.0.0.0:9898"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.log

2025-08-13T19:59:57.766335818+00:00 stderr F I0813 19:59:57.747274 1 main.go:149] calling CSI driver to discover driver name
2025-08-13T19:59:58.592328403+00:00 stderr F I0813 19:59:58.579963 1 main.go:155] CSI driver name: "kubevirt.io.hostpath-provisioner"
2025-08-13T19:59:58.592328403+00:00 stderr F I0813 19:59:58.580031 1 main.go:183] ServeMux listening at "0.0.0.0:9898"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.log

2025-08-13T20:05:30.576738510+00:00 stderr F I0813 20:05:30.564427 1 cmd.go:241] Using service-serving-cert provided certificates
2025-08-13T20:05:30.576738510+00:00 stderr F I0813 20:05:30.572354 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:05:30.585268894+00:00 stderr F I0813 20:05:30.583436 1 observer_polling.go:159] Starting file observer
2025-08-13T20:05:30.862237025+00:00 stderr F I0813 20:05:30.862163 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6
2025-08-13T20:05:31.764230725+00:00 stderr F I0813 20:05:31.762474 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T20:05:31.764230725+00:00 stderr F W0813 20:05:31.762888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T20:05:31.764230725+00:00 stderr F W0813 20:05:31.762898 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T20:05:31.771968887+00:00 stderr F I0813 20:05:31.771303 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T20:05:31.771968887+00:00 stderr F I0813 20:05:31.771722 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock...
2025-08-13T20:05:31.842474786+00:00 stderr F I0813 20:05:31.841086 1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T20:05:31.844878425+00:00 stderr F I0813 20:05:31.844819 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:05:31.845188073+00:00 stderr F I0813 20:05:31.844915 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T20:05:31.846711397+00:00 stderr F I0813 20:05:31.846567 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T20:05:31.850923628+00:00 stderr F I0813 20:05:31.846618 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:05:31.851387891+00:00 stderr F I0813 20:05:31.846641 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:05:31.851501344+00:00 stderr F I0813 20:05:31.851481 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:05:31.856524428+00:00 stderr F I0813 20:05:31.856485 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T20:05:31.856905299+00:00 stderr F I0813 20:05:31.856602 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:05:31.873882465+00:00 stderr F I0813 20:05:31.873732 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock
2025-08-13T20:05:31.875052119+00:00 stderr F I0813 20:05:31.875001 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31332", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_e7b654d9-77cd-448f-885c-1fad32cba9ad became leader
2025-08-13T20:05:31.893305981+00:00 stderr F I0813 20:05:31.893203 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T20:05:31.945610709+00:00 stderr F I0813 20:05:31.945524 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T20:05:31.958186119+00:00 stderr F I0813 20:05:31.958120 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.954502 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.962226 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.962346 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:05:32.186166178+00:00 stderr F I0813 20:05:32.181264 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T20:05:32.207337004+00:00 stderr F I0813 20:05:32.206298 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T20:05:32.209975339+00:00 stderr F I0813 20:05:32.208241 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-08-13T20:05:32.225986218+00:00 stderr F I0813 20:05:32.224997 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.237841 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238049 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238171 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238193 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238208 1 base_controller.go:67] Waiting for caches to sync for PruneController
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238241 1 base_controller.go:67] Waiting for caches to sync for NodeController
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238265 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238281 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238297 1 base_controller.go:67] Waiting for caches to sync for GuardController
2025-08-13T20:05:32.260341982+00:00 stderr F I0813 20:05:32.260284 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler
2025-08-13T20:05:32.260746383+00:00 stderr F I0813 20:05:32.260726 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources
2025-08-13T20:05:32.261051902+00:00 stderr F I0813 20:05:32.261025 1 base_controller.go:67] Waiting for caches to sync for InstallerController
2025-08-13T20:05:32.261267278+00:00 stderr F I0813 20:05:32.261248 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541429 1 base_controller.go:73] Caches are synced for PruneController
2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541471 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541501 1 base_controller.go:73] Caches are synced for NodeController
2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541506 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.542500 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.542511 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:05:32.550244423+00:00 stderr F I0813 20:05:32.550159 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T20:05:32.550244423+00:00 stderr F I0813 20:05:32.550199 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T20:05:32.550411688+00:00 stderr F I0813 20:05:32.550356 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:05:32.550952684+00:00 stderr F I0813 20:05:32.550889 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:05:32.582093436+00:00 stderr F I0813 20:05:32.581994 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T20:05:32.582234650+00:00 stderr F I0813 20:05:32.582216 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T20:05:32.609923302+00:00 stderr F I0813 20:05:32.603422 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:05:32.641180047+00:00 stderr F I0813 20:05:32.639060 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T20:05:32.641180047+00:00 stderr F I0813 20:05:32.639217 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T20:05:32.648563229+00:00 stderr F I0813 20:05:32.648487 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:05:32.649318871+00:00 stderr F I0813 20:05:32.649295 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-08-13T20:05:32.649368642+00:00 stderr F I0813 20:05:32.649354 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-08-13T20:05:32.649417263+00:00 stderr F I0813 20:05:32.649404 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-08-13T20:05:32.649448994+00:00 stderr F I0813 20:05:32.649437 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.650958 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.679140 1 base_controller.go:73] Caches are synced for InstallerController
2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.695711 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.679216 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler
2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.696984 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ...
2025-08-13T20:05:32.738932587+00:00 stderr F I0813 20:05:32.738728 1 base_controller.go:73] Caches are synced for GuardController
2025-08-13T20:05:32.738932587+00:00 stderr F I0813 20:05:32.738891 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-08-13T20:05:32.751229279+00:00 stderr F I0813 20:05:32.750144 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-08-13T20:05:32.751229279+00:00 stderr F I0813 20:05:32.750230 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-08-13T20:05:32.814209622+00:00 stderr F I0813 20:05:32.814019 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:05:32.995206675+00:00 stderr F I0813 20:05:32.986392 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:05:33.056409658+00:00 stderr F I0813 20:05:33.056318 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:05:33.072895720+00:00 stderr F I0813 20:05:33.072733 1 base_controller.go:73] Caches are synced for BackingResourceController
2025-08-13T20:05:33.072895720+00:00 stderr F I0813 20:05:33.072839 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ...
2025-08-13T20:05:33.119484604+00:00 stderr F I0813 20:05:33.119347 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T20:05:33.119484604+00:00 stderr F I0813 20:05:33.119392 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T20:05:33.124444846+00:00 stderr F I0813 20:05:33.120767 1 base_controller.go:73] Caches are synced for TargetConfigController
2025-08-13T20:05:33.124444846+00:00 stderr F I0813 20:05:33.123645 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ...
2025-08-13T20:05:33.127741881+00:00 stderr F I0813 20:05:33.127651 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T20:05:33.127984328+00:00 stderr F I0813 20:05:33.127958 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T20:05:33.193213226+00:00 stderr F I0813 20:05:33.193157 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:05:33.261045287+00:00 stderr F I0813 20:05:33.260983 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources
2025-08-13T20:05:33.261153770+00:00 stderr F I0813 20:05:33.261137 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ...
2025-08-13T20:05:33.780178083+00:00 stderr F I0813 20:05:33.779760 1 request.go:697] Waited for 1.082746184s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc
2025-08-13T20:05:35.270288154+00:00 stderr F I0813 20:05:35.269500 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) {
2025-08-13T20:05:35.270288154+00:00 stderr F NodeName: (string) (len=3) "crc",
2025-08-13T20:05:35.270288154+00:00 stderr F CurrentRevision: (int32) 7,
2025-08-13T20:05:35.270288154+00:00 stderr F TargetRevision: (int32) 0,
2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedRevision: (int32) 5,
2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0016d1d70)(2024-06-26 12:52:00 +0000 UTC),
2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed",
2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedCount: (int) 1,
2025-08-13T20:05:35.270288154+00:00 stderr F LastFallbackCount: (int) 0,
2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) {
2025-08-13T20:05:35.270288154+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n"
2025-08-13T20:05:35.270288154+00:00 stderr F }
2025-08-13T20:05:35.270288154+00:00 stderr F }
2025-08-13T20:05:35.270288154+00:00 stderr F because static pod is ready
2025-08-13T20:05:35.360014093+00:00 stderr F I0813 20:05:35.359946 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:05:35.361982280+00:00 stderr F I0813 20:05:35.361950 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 6 to 7 because static pod is ready
2025-08-13T20:05:35.372351637+00:00 stderr F I0813 20:05:35.369665 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:05:35Z","message":"NodeInstallerProgressing: 1 node is at revision 7","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:05:35.451418841+00:00 stderr F I0813 20:05:35.451335 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 7"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7"
2025-08-13T20:07:15.704486660+00:00 stderr F I0813 20:07:15.701599 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 8 triggered by "required secret/localhost-recovery-client-token has changed"
2025-08-13T20:07:15.783097454+00:00 stderr F I0813 20:07:15.780987 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-pod-8 -n openshift-kube-scheduler because it was missing
2025-08-13T20:07:15.815624836+00:00 stderr F I0813 20:07:15.815563 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-8 -n openshift-kube-scheduler because it was missing
2025-08-13T20:07:16.977272212+00:00 stderr F I0813 20:07:16.976932 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-8 -n openshift-kube-scheduler because it was missing
2025-08-13T20:07:17.180451136+00:00 stderr F I0813 20:07:17.173849 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/scheduler-kubeconfig-8 -n openshift-kube-scheduler because it was missing
2025-08-13T20:07:17.505357562+00:00 stderr F I0813 20:07:17.505279 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-8 -n openshift-kube-scheduler because it was missing
2025-08-13T20:07:18.719316847+00:00 stderr F I0813 20:07:18.718327 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-8 -n openshift-kube-scheduler because it was missing
2025-08-13T20:07:19.941996393+00:00 stderr F I0813 20:07:19.938574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-8 -n openshift-kube-scheduler because it was missing
2025-08-13T20:07:19.981566097+00:00 stderr F I0813 20:07:19.979750 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 8 triggered by "required secret/localhost-recovery-client-token has changed"
2025-08-13T20:07:20.101976720+00:00 stderr F I0813 20:07:20.100453 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:07:20.114002764+00:00 stderr F I0813 20:07:20.112277 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 8 created because required secret/localhost-recovery-client-token has changed
2025-08-13T20:07:21.320572926+00:00 stderr F I0813 20:07:21.318488 1 installer_controller.go:524] node crc with revision 7 is the oldest and needs new revision 8
2025-08-13T20:07:21.320572926+00:00 stderr F I0813 20:07:21.319194 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) {
2025-08-13T20:07:21.320572926+00:00 stderr F NodeName: (string) (len=3) "crc",
2025-08-13T20:07:21.320572926+00:00 stderr F CurrentRevision: (int32) 7,
2025-08-13T20:07:21.320572926+00:00 stderr F TargetRevision: (int32) 8,
2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedRevision: (int32) 5,
2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001c32198)(2024-06-26 12:52:00 +0000 UTC),
2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed",
2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedCount: (int) 1,
2025-08-13T20:07:21.320572926+00:00 stderr F LastFallbackCount: (int) 0,
2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) {
2025-08-13T20:07:21.320572926+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n"
2025-08-13T20:07:21.320572926+00:00 stderr F }
2025-08-13T20:07:21.320572926+00:00 stderr F }
2025-08-13T20:07:21.364919068+00:00 stderr F I0813 20:07:21.362636 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 7 to 8 because node crc with revision 7 is the oldest
2025-08-13T20:07:21.401748604+00:00 stderr F I0813 20:07:21.400658 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:21Z","message":"NodeInstallerProgressing: 1 node is at revision 7; 0 nodes have achieved new revision 8","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:21.405192112+00:00 stderr F I0813 20:07:21.405162 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:07:21.431219419+00:00 stderr F I0813 20:07:21.429589 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 7; 0 nodes have achieved new revision 8"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8" 2025-08-13T20:07:23.084966174+00:00 stderr F I0813 20:07:23.081293 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-8-crc -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:23.920610573+00:00 stderr F I0813 20:07:23.915736 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:26.094867570+00:00 stderr F I0813 20:07:26.093717 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:27.701233176+00:00 stderr F I0813 20:07:27.691110 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:58.161748364+00:00 stderr F I0813 20:07:58.161265 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:59.618478561+00:00 stderr F I0813 
20:07:59.618418 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because waiting for static pod of revision 8, found 7 2025-08-13T20:08:00.260228759+00:00 stderr F I0813 20:08:00.257628 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because waiting for static pod of revision 8, found 7 2025-08-13T20:08:08.302031825+00:00 stderr F I0813 20:08:08.299863 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:08.669468880+00:00 stderr F I0813 20:08:08.668167 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:09.661708029+00:00 stderr F I0813 20:08:09.660459 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:11.298218219+00:00 stderr F I0813 20:08:11.296588 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:32.423503917+00:00 stderr F E0813 20:08:32.421764 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.657075264+00:00 stderr F E0813 20:08:32.656916 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.659262606+00:00 stderr F E0813 20:08:32.659211 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.665237858+00:00 stderr F E0813 20:08:32.664273 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.671484387+00:00 stderr F E0813 20:08:32.670156 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.676878851+00:00 stderr F E0813 20:08:32.676854 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.684176151+00:00 stderr F E0813 20:08:32.684095 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.700397636+00:00 stderr F E0813 20:08:32.700290 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.709248700+00:00 stderr F E0813 20:08:32.709170 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.742218485+00:00 stderr F E0813 20:08:32.742106 1 base_controller.go:268] 
InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.754565889+00:00 stderr F E0813 20:08:32.754514 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.857328175+00:00 stderr F E0813 20:08:32.857281 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.058487083+00:00 stderr F E0813 20:08:33.058429 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.256689975+00:00 stderr F E0813 20:08:33.256581 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.460638473+00:00 stderr F E0813 20:08:33.460541 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:33.859451947+00:00 stderr F E0813 20:08:33.859324 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.259980410+00:00 stderr F E0813 20:08:34.259926 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:34.459226493+00:00 stderr F E0813 20:08:34.458756 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.065538257+00:00 stderr F E0813 20:08:35.065047 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.260978180+00:00 stderr F E0813 20:08:35.259879 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:35.857700678+00:00 stderr F E0813 20:08:35.857534 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.062962473+00:00 stderr F E0813 20:08:36.061522 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.460504571+00:00 stderr F E0813 20:08:36.460427 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused] 2025-08-13T20:08:36.657974753+00:00 stderr F E0813 20:08:36.657874 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.863096394+00:00 stderr F E0813 20:08:36.862953 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.060307248+00:00 stderr F E0813 20:08:37.060175 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.465059463+00:00 stderr F E0813 20:08:37.464972 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.860871651+00:00 stderr F E0813 20:08:37.858753 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.260733035+00:00 stderr F E0813 20:08:38.260682 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.663861994+00:00 stderr F E0813 20:08:38.662819 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.262952820+00:00 stderr F E0813 20:08:39.260829 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:39.462171091+00:00 stderr F E0813 20:08:39.462085 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:40.063432860+00:00 stderr F E0813 20:08:40.059750 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.486148519+00:00 stderr F E0813 20:08:40.481360 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.059009174+00:00 stderr F E0813 20:08:41.058291 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.466963720+00:00 stderr F E0813 20:08:41.465427 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.659714477+00:00 stderr F E0813 20:08:41.659572 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.265556167+00:00 stderr F E0813 20:08:42.265423 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.861740440+00:00 stderr F E0813 20:08:42.861671 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.469264278+00:00 stderr F E0813 20:08:43.463841 1 base_controller.go:268] 
KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:44.859357553+00:00 stderr F E0813 20:08:44.859243 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.105574872+00:00 stderr F E0813 20:08:45.105237 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.260272658+00:00 stderr F E0813 20:08:45.260163 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.657439815+00:00 stderr F E0813 20:08:46.657116 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.060711047+00:00 stderr F E0813 20:08:47.060353 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:47.264939003+00:00 stderr F E0813 20:08:47.262274 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.466948065+00:00 stderr F E0813 20:08:47.464918 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.860878800+00:00 stderr F E0813 20:08:48.860045 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.261750484+00:00 stderr F E0813 20:08:49.261652 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.664740178+00:00 stderr F E0813 20:08:50.664349 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: 
["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:50.865964007+00:00 stderr F E0813 20:08:50.864296 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:51.260165459+00:00 stderr F E0813 20:08:51.260058 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.462859232+00:00 stderr F E0813 20:08:52.462645 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.061524716+00:00 stderr F E0813 20:08:53.061168 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.261631214+00:00 stderr F E0813 20:08:54.261472 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.864943431+00:00 stderr F E0813 20:08:54.861968 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:55.861874535+00:00 stderr F E0813 20:08:55.861383 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.900825473+00:00 stderr F E0813 20:08:56.900663 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.440125344+00:00 stderr F E0813 20:08:57.440012 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:57.507969159+00:00 stderr F E0813 20:08:57.507842 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.434416582+00:00 stderr F E0813 20:08:58.433978 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:28.101218495+00:00 stderr F I0813 20:09:28.100409 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:29.927306530+00:00 stderr F I0813 20:09:29.926836 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:29.930692147+00:00 stderr F I0813 20:09:29.930523 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:29.952076110+00:00 stderr F I0813 20:09:29.951739 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:29.952076110+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:29.952076110+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:09:29.952076110+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00042e9d8)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:29.952076110+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) 
\"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 
2025-08-13T20:09:29.952076110+00:00 stderr F } 2025-08-13T20:09:29.952076110+00:00 stderr F } 2025-08-13T20:09:29.952076110+00:00 stderr F because static pod is ready 2025-08-13T20:09:29.974254396+00:00 stderr F I0813 20:09:29.974145 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 7 to 8 because static pod is ready 2025-08-13T20:09:29.975662596+00:00 stderr F I0813 20:09:29.975627 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:29.976646144+00:00 stderr F I0813 20:09:29.976583 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:29.995650949+00:00 stderr F I0813 20:09:29.995595 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 
'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 8"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" 2025-08-13T20:09:30.537031651+00:00 stderr F I0813 20:09:30.536506 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.543282740+00:00 stderr F E0813 20:09:30.543234 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.544034722+00:00 stderr F I0813 20:09:30.544004 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are 
ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.550363613+00:00 stderr F E0813 20:09:30.550266 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.551084374+00:00 stderr F I0813 20:09:30.550996 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.556504790+00:00 stderr F E0813 20:09:30.556470 1 
base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.561870313+00:00 stderr F I0813 20:09:30.561683 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.568154224+00:00 stderr F E0813 20:09:30.568032 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.609735966+00:00 stderr F I0813 20:09:30.609603 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 
8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.616439068+00:00 stderr F E0813 20:09:30.616301 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.699459548+00:00 stderr F I0813 20:09:30.697607 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.707994923+00:00 stderr F E0813 20:09:30.707681 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been 
modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.869500534+00:00 stderr F I0813 20:09:30.869428 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.880470038+00:00 stderr F E0813 20:09:30.880354 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:31.203732086+00:00 stderr F I0813 20:09:31.203624 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:31Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:31.213663921+00:00 stderr F E0813 20:09:31.213471 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:31.855883135+00:00 stderr F I0813 20:09:31.855749 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:31Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:31.862935347+00:00 stderr F E0813 20:09:31.862764 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:32.274533878+00:00 stderr F I0813 20:09:32.274336 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:32.348989672+00:00 stderr F I0813 20:09:32.347245 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:32.702465977+00:00 stderr F I0813 20:09:32.702301 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:32.709356515+00:00 stderr F E0813 20:09:32.709256 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:33.136957823+00:00 stderr F I0813 20:09:33.136156 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:33Z","message":"NodeInstallerProgressing: 1 node is at revision 
8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:33.147025742+00:00 stderr F E0813 20:09:33.146737 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:33.148276978+00:00 stderr F I0813 20:09:33.148199 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:33Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:33.157408120+00:00 stderr F E0813 20:09:33.157256 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been 
modified; please apply your changes to the latest version and try again 2025-08-13T20:09:34.113335487+00:00 stderr F I0813 20:09:34.113272 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:34.534085630+00:00 stderr F I0813 20:09:34.534017 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:34.774416151+00:00 stderr F I0813 20:09:34.774049 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:35.933337538+00:00 stderr F I0813 20:09:35.932834 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:36.134601698+00:00 stderr F I0813 20:09:36.134500 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:36.873630366+00:00 stderr F I0813 20:09:36.873530 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:37.075117703+00:00 stderr F I0813 20:09:37.074269 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:37.130367177+00:00 stderr F I0813 20:09:37.130258 1 request.go:697] Waited for 1.19496227s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc 2025-08-13T20:09:37.136475272+00:00 stderr F I0813 20:09:37.136238 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are 
ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:37Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:37.142530496+00:00 stderr F E0813 20:09:37.142432 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:37.143245286+00:00 stderr F I0813 20:09:37.143168 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:37Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:37.149396453+00:00 stderr F E0813 20:09:37.149312 1 
base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:37.334031207+00:00 stderr F I0813 20:09:37.333836 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:38.268554420+00:00 stderr F I0813 20:09:38.268410 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.274959354+00:00 stderr F E0813 20:09:38.274735 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.330433744+00:00 stderr F I0813 20:09:38.330277 1 request.go:697] Waited for 1.195071584s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc 2025-08-13T20:09:38.340314518+00:00 
stderr F I0813 20:09:38.340212 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.346515885+00:00 stderr F E0813 20:09:38.346338 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.347238156+00:00 stderr F I0813 20:09:38.347124 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.357465189+00:00 stderr F E0813 20:09:38.357309 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.533927949+00:00 stderr F I0813 20:09:38.533022 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:41.658292706+00:00 stderr F I0813 20:09:41.658105 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:41.798998570+00:00 stderr F I0813 20:09:41.798438 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:43.796436379+00:00 stderr F I0813 20:09:43.795019 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:44.200848973+00:00 stderr F I0813 20:09:44.198357 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:49.203368260+00:00 stderr F I0813 20:09:49.202253 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:51.434943531+00:00 stderr F I0813 20:09:51.431532 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:52.016265048+00:00 stderr F I0813 20:09:52.016174 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:12.320140828+00:00 
stderr F I0813 20:10:12.317306 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:14.400878154+00:00 stderr F I0813 20:10:14.399461 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:18.062113022+00:00 stderr F I0813 20:10:18.058972 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:19.598567013+00:00 stderr F I0813 20:10:19.598491 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:26.215610350+00:00 stderr F I0813 20:10:26.214704 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:27.797339089+00:00 stderr F I0813 20:10:27.797234 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:28.405947179+00:00 stderr F I0813 20:10:28.405471 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:31.021413336+00:00 stderr F I0813 20:10:31.021171 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:40.552074109+00:00 stderr F I0813 20:10:40.543346 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:41.705436466+00:00 stderr F I0813 20:10:41.705315 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:42:36.355339714+00:00 stderr F I0813 20:42:36.341575 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373365804+00:00 stderr F I0813 
20:42:36.350385 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350426 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350439 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350460 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373428316+00:00 stderr F I0813 20:42:36.350472 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350487 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350501 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350524 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350535 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350575 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350587 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.320254 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350642 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350656 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350666 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350677 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350687 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350723 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350738 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350754 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350883 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350908 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350917 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.393459673+00:00 stderr F I0813 20:42:36.350935 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.412702238+00:00 stderr F I0813 20:42:36.412551 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:39.402538706+00:00 stderr F I0813 20:42:39.401878 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:39.403501624+00:00 stderr F I0813 20:42:39.402611 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:39.403579226+00:00 stderr F E0813 20:42:39.403535 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.403682989+00:00 stderr F W0813 20:42:39.403643 1 leaderelection.go:85] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log

2025-08-13T19:59:06.750257233+00:00 stderr F I0813 19:59:06.742937 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T19:59:06.786971529+00:00 stderr F I0813 19:59:06.782877 1 leaderelection.go:122] The 
leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:06.942901614+00:00 stderr F I0813 19:59:06.940217 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:07.926525343+00:00 stderr F I0813 19:59:07.924691 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6 2025-08-13T19:59:12.720179267+00:00 stderr F I0813 19:59:12.719111 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:12.820604490+00:00 stderr F I0813 19:59:12.820420 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:12.820604490+00:00 stderr F W0813 19:59:12.820512 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:12.820604490+00:00 stderr F W0813 19:59:12.820521 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:12.823878544+00:00 stderr F I0813 19:59:12.822329 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock... 
2025-08-13T19:59:14.074160714+00:00 stderr F I0813 19:59:14.073686 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock 2025-08-13T19:59:14.086618029+00:00 stderr F I0813 19:59:14.082412 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"27949", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_b7a0aeae-1f73-4439-9109-a6d072941331 became leader 2025-08-13T19:59:14.277518981+00:00 stderr F I0813 19:59:14.276568 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:14.277518981+00:00 stderr F I0813 19:59:14.277119 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563198 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563643 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563239 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:15.587516315+00:00 stderr F I0813 19:59:15.582508 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:15.607910957+00:00 stderr F I0813 19:59:15.605047 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2025-08-13T19:59:15.607910957+00:00 stderr F I0813 19:59:15.605131 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:15.620592328+00:00 stderr F I0813 19:59:15.620545 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:15.620755023+00:00 stderr F I0813 19:59:15.620734 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:15.845312024+00:00 stderr F I0813 19:59:15.821569 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests 
UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:15.845858660+00:00 stderr F I0813 19:59:15.845737 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", 
"OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:15.846489558+00:00 stderr F I0813 19:59:15.846218 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:15.847163457+00:00 stderr F I0813 19:59:15.846455 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:16.319895263+00:00 stderr F I0813 19:59:16.318321 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:16.319895263+00:00 stderr F E0813 19:59:16.318472 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.319895263+00:00 stderr F E0813 19:59:16.318510 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.334965852+00:00 stderr F E0813 19:59:16.326704 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.360658714+00:00 stderr F E0813 19:59:16.360212 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.360658714+00:00 stderr F E0813 19:59:16.360280 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.381641183+00:00 stderr F E0813 19:59:16.378296 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.445417691+00:00 stderr F E0813 19:59:16.436610 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.488232291+00:00 stderr F E0813 19:59:16.483222 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.568202701+00:00 stderr F E0813 19:59:16.558084 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.654130840+00:00 stderr F E0813 19:59:16.651388 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.654130840+00:00 stderr F E0813 19:59:16.651483 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.861331357+00:00 stderr F 
E0813 19:59:16.813732 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.861331357+00:00 stderr F E0813 19:59:16.827618 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:17.177242403+00:00 stderr F E0813 19:59:17.034141 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:17.177242403+00:00 stderr F E0813 19:59:17.176599 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.088712 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.118470 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.161507 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.175011 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T19:59:19.941138489+00:00 stderr F I0813 19:59:19.923556 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T19:59:19.984238058+00:00 stderr F I0813 19:59:19.938257 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:19.989219850+00:00 stderr F I0813 19:59:19.988407 1 base_controller.go:67] Waiting for 
caches to sync for ResourceSyncController 2025-08-13T19:59:19.989219850+00:00 stderr F E0813 19:59:19.988689 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154621 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154758 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154911 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154937 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155286 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155309 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155322 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155766 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler 2025-08-13T19:59:20.217902409+00:00 stderr F I0813 19:59:20.217548 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T19:59:20.238744153+00:00 stderr F I0813 19:59:20.238127 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:20.571913630+00:00 stderr F E0813 19:59:20.571743 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027082 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027502 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027548 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027556 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T19:59:24.075747208+00:00 stderr F E0813 19:59:24.031037 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.075747208+00:00 stderr F E0813 19:59:24.031073 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173715 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173875 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173900 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:24.180871845+00:00 stderr F I0813 19:59:24.178934 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 
2025-08-13T19:59:24.197711115+00:00 stderr F I0813 19:59:24.197643 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T19:59:24.252630560+00:00 stderr F I0813 19:59:24.252208 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T19:59:24.285859177+00:00 stderr F I0813 19:59:24.282293 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:24.298903559+00:00 stderr F I0813 19:59:24.286007 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:24.308657957+00:00 stderr F I0813 19:59:24.308498 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:24.308755230+00:00 stderr F I0813 19:59:24.308735 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:24.309183862+00:00 stderr F I0813 19:59:24.309119 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:24.309183862+00:00 stderr F I0813 19:59:24.309171 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T19:59:24.309538402+00:00 stderr F I0813 19:59:24.308546 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:24.309596684+00:00 stderr F I0813 19:59:24.309577 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T19:59:24.313347991+00:00 stderr F I0813 19:59:24.286034 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:24.313418613+00:00 stderr F I0813 19:59:24.313402 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 
2025-08-13T19:59:24.337597902+00:00 stderr F I0813 19:59:24.337309 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:24.338316453+00:00 stderr F I0813 19:59:24.338235 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:24.339141446+00:00 stderr F I0813 19:59:24.338647 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:24.345083176+00:00 stderr F I0813 19:59:24.345016 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.356059319+00:00 stderr F I0813 19:59:24.355992 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.357731286+00:00 stderr F I0813 19:59:24.357700 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler 2025-08-13T19:59:24.360130935+00:00 stderr F I0813 19:59:24.360021 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ... 
2025-08-13T19:59:24.362277496+00:00 stderr F I0813 19:59:24.362238 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:24Z","message":"NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:58:01Z","message":"NodeInstallerProgressing: 1 node is at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:24.371413966+00:00 stderr F I0813 19:59:24.364198 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.419431825+00:00 stderr F I0813 19:59:24.419309 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:24.419431825+00:00 stderr F I0813 19:59:24.419354 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:24.461939797+00:00 stderr F I0813 19:59:24.458873 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T19:59:24.461939797+00:00 stderr F I0813 19:59:24.458941 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T19:59:25.315461666+00:00 stderr F E0813 19:59:25.312038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.516356702+00:00 stderr F I0813 19:59:25.504740 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:25.820550084+00:00 stderr F I0813 19:59:25.820094 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:58:01Z","message":"NodeInstallerProgressing: 1 node is at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:26.105266230+00:00 stderr F I0813 19:59:26.105188 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:26.237973862+00:00 stderr F I0813 19:59:26.192630 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:26.593444855+00:00 stderr F E0813 19:59:26.593093 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.892963843+00:00 stderr F I0813 19:59:26.892221 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.979948 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.979994 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.980052 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.980059 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T19:59:27.875125470+00:00 stderr F E0813 19:59:27.872973 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:27.919694851+00:00 stderr F I0813 19:59:27.919459 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:27.919999479+00:00 stderr F I0813 19:59:27.919974 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T19:59:31.716486518+00:00 stderr F E0813 19:59:31.716107 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:32.995353672+00:00 stderr F E0813 19:59:32.994615 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.965062365+00:00 stderr F E0813 19:59:41.962572 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.238992878+00:00 stderr F E0813 19:59:43.238112 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.822111606+00:00 stderr F I0813 19:59:51.815170 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915673 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.915501758 +0000 UTC))" 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915890 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.915755946 +0000 UTC))" 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915924 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.91590089 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915949 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.915930981 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915971 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915958592 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915988 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915976512 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916011 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915993482 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916029 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916018183 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916055 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916034914 +0000 UTC))" 2025-08-13T19:59:51.917876096+00:00 stderr F I0813 19:59:51.916576 1 tlsconfig.go:200] "Loaded serving 
cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 19:59:51.916552738 +0000 UTC))" 2025-08-13T19:59:52.021908452+00:00 stderr F I0813 19:59:52.021159 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 19:59:52.021103789 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.728508 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.728468532 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729386 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.729306736 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729417 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.729396818 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729494 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.72946132 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729511 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.729500051 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.729770 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.729519012 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799009 
1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.7988868 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799112 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799092906 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799206 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.799118016 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799338 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799217719 +0000 UTC))" 2025-08-13T20:00:05.825056146+00:00 stderr F I0813 
20:00:05.822722 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:00:05.822678308 +0000 UTC))" 2025-08-13T20:00:05.825056146+00:00 stderr F I0813 20:00:05.823980 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:00:05.823609605 +0000 UTC))" 2025-08-13T20:00:30.817609083+00:00 stderr F I0813 20:00:30.815856 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 7 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:30.880127836+00:00 stderr F I0813 20:00:30.874647 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-pod-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:30.901642529+00:00 stderr F I0813 20:00:30.900057 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:31.038110090+00:00 stderr F I0813 20:00:31.035252 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:31.628087483+00:00 stderr F I0813 20:00:31.626200 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/scheduler-kubeconfig-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.047671160+00:00 stderr F I0813 20:00:33.034887 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.234393285+00:00 stderr F I0813 20:00:33.230541 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): 
type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.384646049+00:00 stderr F I0813 20:00:33.383876 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.794477925+00:00 stderr F I0813 20:00:33.794386 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 7 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:33.994328463+00:00 stderr F I0813 20:00:33.993397 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 7 created because optional secret/serving-cert has changed 2025-08-13T20:00:33.997412101+00:00 stderr F I0813 20:00:33.997345 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:35.638695201+00:00 stderr F I0813 20:00:35.635725 1 installer_controller.go:524] node crc with revision 6 is the oldest and needs new revision 7 2025-08-13T20:00:35.638695201+00:00 stderr F I0813 20:00:35.636690 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:35.638695201+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:35.638695201+00:00 stderr F CurrentRevision: (int32) 6, 
2025-08-13T20:00:35.638695201+00:00 stderr F TargetRevision: (int32) 7, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0017426d8)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:35.638695201+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller 
reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:00:35.638695201+00:00 stderr F } 2025-08-13T20:00:35.638695201+00:00 stderr F } 2025-08-13T20:00:35.744594801+00:00 stderr F I0813 20:00:35.744447 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 6 to 7 because node crc with revision 6 is the oldest 2025-08-13T20:00:35.765922529+00:00 stderr F I0813 20:00:35.762651 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:35Z","message":"NodeInstallerProgressing: 1 node is at revision 6; 0 nodes have achieved new revision 7","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6; 0 nodes have achieved new revision 
7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:35.843115420+00:00 stderr F I0813 20:00:35.842474 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:35.907574728+00:00 stderr F I0813 20:00:35.905035 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 6; 0 nodes have achieved new revision 7"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6; 0 nodes have achieved new revision 7" 2025-08-13T20:00:37.153230506+00:00 stderr F I0813 20:00:37.136051 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-7-crc -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:38.492115353+00:00 stderr F I0813 20:00:38.485670 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:39.595945127+00:00 stderr F I0813 20:00:39.578461 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not 
finished, but in Pending phase 2025-08-13T20:00:46.881302911+00:00 stderr F I0813 20:00:46.866452 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.981038 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.980942085 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998578 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.994291525 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998644 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.998610639 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998677 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.99865393 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998719 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998701701 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998744 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998726352 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999094 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998750653 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999209 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.999111613 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999251 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.999224796 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999302 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.999281778 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999330 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.999312609 +0000 UTC))" 2025-08-13T20:01:00.028106330+00:00 stderr F I0813 20:01:00.016271 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:01:00.016232431 +0000 UTC))" 2025-08-13T20:01:00.028106330+00:00 stderr F I0813 20:01:00.020201 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:01:00.020135532 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.375177 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.375957 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.376642 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382332 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:20.382291382 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382367 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:20.382353574 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382388 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:20.382372944 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382413 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:20.382396385 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382431 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382419856 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382454 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382441626 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382474 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382461357 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382505 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382480117 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382523 1 tlsconfig.go:178] "Loaded client CA" index=8 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:20.382512488 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382541 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:20.382530409 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382567 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.38255572 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382990 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-08-13 20:01:20.382970791 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.383322 1 
named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:01:20.383296541 +0000 UTC))" 2025-08-13T20:01:21.951018043+00:00 stderr F I0813 20:01:21.950025 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="bb1e896cccd42fdf3b040bf941474d8b98c2e1008b067302988ef61fbb74af9d", new="f2ef63f4fe25e3a28410fbdc21c93cf27decbc59c0810ae7fae1df548bef3156") 2025-08-13T20:01:21.951018043+00:00 stderr F W0813 20:01:21.950997 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.key was modified 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951354 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="af4bed0830029e9d25ab9fe2cabce4be2989fe96db922a2b2f78449a73557f43", new="fd988104fafb68af302e48a9bb235ee14d5ce3d51ab6bad4c4b27339f3fa7c47") 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951708 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951904 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951966 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952066 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952097 1 base_controller.go:172] Shutting down NodeController ... 
2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952166 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952182 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952302 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952312 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952406 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952419 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952426 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952430 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952437 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952448 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952452 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952461 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952508 1 base_controller.go:172] Shutting down StatusSyncer_kube-scheduler ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952533 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952555 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952570 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952584 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952597 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952609 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952624 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952641 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952694 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952829 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952872 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952884 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.953768 1 base_controller.go:150] All StatusSyncer_kube-scheduler post start hooks have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962390 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962408 1 
controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962418 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962424 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962495 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962552 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962565 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962602 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962665 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:21.962742357+00:00 stderr F E0813 20:01:21.962704 1 request.go:1116] Unexpected error when reading response body: context canceled 2025-08-13T20:01:21.962967624+00:00 stderr F I0813 20:01:21.962524 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963093 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963127 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963139 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963148 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 2025-08-13T20:01:21.963176590+00:00 stderr F I0813 20:01:21.963155 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:21.963176590+00:00 stderr F I0813 20:01:21.963160 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:01:21.963189860+00:00 stderr F I0813 20:01:21.963180 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963187 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963193 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963201 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963209 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963085 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963231 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963236 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963248 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-scheduler controller ... 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963255 1 base_controller.go:104] All StatusSyncer_kube-scheduler workers have been terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963265 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:21.963284713+00:00 stderr F I0813 20:01:21.963272 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:21.963284713+00:00 stderr F I0813 20:01:21.963277 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:21.963441377+00:00 stderr F I0813 20:01:21.963377 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:21.963531170+00:00 stderr F I0813 20:01:21.963478 1 builder.go:330] server exited 2025-08-13T20:01:21.963705225+00:00 stderr F I0813 20:01:21.963681 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:21.969730416+00:00 stderr F I0813 20:01:21.964094 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:01:21.969887911+00:00 stderr F I0813 20:01:21.964110 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:01:21.969938282+00:00 stderr F I0813 20:01:21.964128 1 base_controller.go:114] Shutting down worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T20:01:21.969994814+00:00 stderr F I0813 20:01:21.969976 1 base_controller.go:104] All KubeControllerManagerStaticResources workers have been terminated 2025-08-13T20:01:21.970029175+00:00 stderr F I0813 20:01:21.964139 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 2025-08-13T20:01:21.970070696+00:00 stderr F I0813 20:01:21.970054 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:21.970116817+00:00 stderr F I0813 20:01:21.970100 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:21.970154508+00:00 stderr F I0813 20:01:21.964419 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:01:21.970195970+00:00 stderr F I0813 20:01:21.970180 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:21.970282332+00:00 stderr F I0813 20:01:21.964467 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:21.970337104+00:00 stderr F I0813 20:01:21.964538 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:21.970382325+00:00 stderr F I0813 20:01:21.970365 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:21.997038065+00:00 stderr F E0813 20:01:21.972707 1 base_controller.go:268] TargetConfigController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:21.997038065+00:00 stderr F I0813 20:01:21.972864 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 
2025-08-13T20:01:21.997038065+00:00 stderr F I0813 20:01:21.972905 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:22.298082319+00:00 stderr F W0813 20:01:22.298025 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log
2026-01-20T10:49:34.206912290+00:00 stderr F I0120 10:49:34.206768 1 cmd.go:241] Using service-serving-cert provided certificates 2026-01-20T10:49:34.206912290+00:00 stderr F I0120 10:49:34.206875 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:34.209270772+00:00 stderr F I0120 10:49:34.209248 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:34.299843760+00:00 stderr F I0120 10:49:34.299235 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6 2026-01-20T10:49:34.743826204+00:00 stderr F I0120 10:49:34.741915 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:34.744112883+00:00 stderr F I0120 10:49:34.744082 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock...
2026-01-20T10:49:34.750596170+00:00 stderr F I0120 10:49:34.750548 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:34.750596170+00:00 stderr F W0120 10:49:34.750569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:34.750596170+00:00 stderr F W0120 10:49:34.750575 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:34.755096087+00:00 stderr F I0120 10:49:34.755029 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:34.756854710+00:00 stderr F I0120 10:49:34.756809 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:34.756895682+00:00 stderr F I0120 10:49:34.756873 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:34.756921232+00:00 stderr F I0120 10:49:34.756908 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:34.756961994+00:00 stderr F I0120 10:49:34.756879 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:34.758085378+00:00 stderr F I0120 10:49:34.757211 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:34.758182421+00:00 stderr F I0120 10:49:34.758115 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:34.758397338+00:00 stderr F I0120 10:49:34.757285 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:34.758408968+00:00 stderr F I0120 10:49:34.758394 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:34.767478454+00:00 stderr F I0120 10:49:34.767441 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:34.767522935+00:00 stderr F I0120 10:49:34.767441 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:34.770077463+00:00 stderr F I0120 10:49:34.769933 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:55:29.005459193+00:00 stderr F I0120 10:55:29.004791 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock 2026-01-20T10:55:29.005459193+00:00 stderr F I0120 10:55:29.005315 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42172", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_547f24ff-f3b2-4aee-a79f-47d4e9ce9736 became leader 2026-01-20T10:55:29.006913143+00:00 stderr F I0120 10:55:29.006820 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:55:29.012515272+00:00 stderr F I0120 10:55:29.012416 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver 
DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:55:29.012515272+00:00 stderr F I0120 10:55:29.012421 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", 
"ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:55:29.026571607+00:00 stderr F I0120 10:55:29.026461 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2026-01-20T10:55:29.026633728+00:00 stderr F I0120 10:55:29.026575 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2026-01-20T10:55:29.026748971+00:00 stderr F I0120 10:55:29.026690 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2026-01-20T10:55:29.026933086+00:00 stderr F I0120 10:55:29.026849 1 base_controller.go:67] Waiting for 
caches to sync for BackingResourceController 2026-01-20T10:55:29.026933086+00:00 stderr F I0120 10:55:29.026883 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:55:29.027162452+00:00 stderr F I0120 10:55:29.027097 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2026-01-20T10:55:29.027162452+00:00 stderr F I0120 10:55:29.027145 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:55:29.027202823+00:00 stderr F I0120 10:55:29.026455 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2026-01-20T10:55:29.027202823+00:00 stderr F I0120 10:55:29.027181 1 base_controller.go:67] Waiting for caches to sync for GuardController 2026-01-20T10:55:29.027323126+00:00 stderr F I0120 10:55:29.026492 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2026-01-20T10:55:29.027323126+00:00 stderr F I0120 10:55:29.026510 1 base_controller.go:67] Waiting for caches to sync for PruneController 2026-01-20T10:55:29.027323126+00:00 stderr F I0120 10:55:29.027294 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2026-01-20T10:55:29.027323126+00:00 stderr F I0120 10:55:29.026520 1 base_controller.go:67] Waiting for caches to sync for NodeController 2026-01-20T10:55:29.027355607+00:00 stderr F I0120 10:55:29.026512 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2026-01-20T10:55:29.027355607+00:00 stderr F I0120 10:55:29.026532 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2026-01-20T10:55:29.027916082+00:00 stderr F I0120 10:55:29.027812 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler 2026-01-20T10:55:29.031222620+00:00 stderr F I0120 10:55:29.031118 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2026-01-20T10:55:29.126846319+00:00 stderr F I0120 10:55:29.126745 
1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2026-01-20T10:55:29.126846319+00:00 stderr F I0120 10:55:29.126792 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2026-01-20T10:55:29.127256791+00:00 stderr F I0120 10:55:29.127208 1 base_controller.go:73] Caches are synced for ConfigObserver 2026-01-20T10:55:29.127256791+00:00 stderr F I0120 10:55:29.127232 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2026-01-20T10:55:29.127314622+00:00 stderr F I0120 10:55:29.127277 1 base_controller.go:73] Caches are synced for BackingResourceController 2026-01-20T10:55:29.127314622+00:00 stderr F I0120 10:55:29.127302 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2026-01-20T10:55:29.127326252+00:00 stderr F I0120 10:55:29.127277 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2026-01-20T10:55:29.127355863+00:00 stderr F I0120 10:55:29.127320 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:55:29.127355863+00:00 stderr F I0120 10:55:29.127335 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2026-01-20T10:55:29.127355863+00:00 stderr F I0120 10:55:29.127341 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2026-01-20T10:55:29.127578279+00:00 stderr F I0120 10:55:29.127536 1 base_controller.go:73] Caches are synced for PruneController 2026-01-20T10:55:29.127627790+00:00 stderr F I0120 10:55:29.127613 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2026-01-20T10:55:29.127960699+00:00 stderr F I0120 10:55:29.127934 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler 2026-01-20T10:55:29.127960699+00:00 stderr F I0120 10:55:29.127950 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ... 
2026-01-20T10:55:29.128259157+00:00 stderr F I0120 10:55:29.128239 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:55:29.237322244+00:00 stderr F I0120 10:55:29.237244 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:55:29.430762541+00:00 stderr F I0120 10:55:29.430646 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:55:29.532133073+00:00 stderr F I0120 10:55:29.529326 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2026-01-20T10:55:29.532133073+00:00 stderr F I0120 10:55:29.529368 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2026-01-20T10:55:29.532133073+00:00 stderr F I0120 10:55:29.529414 1 base_controller.go:73] Caches are synced for StaticPodStateController 2026-01-20T10:55:29.532133073+00:00 stderr F I0120 10:55:29.529422 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2026-01-20T10:55:29.532133073+00:00 stderr F I0120 10:55:29.529455 1 base_controller.go:73] Caches are synced for InstallerStateController 2026-01-20T10:55:29.532133073+00:00 stderr F I0120 10:55:29.529462 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2026-01-20T10:55:29.540209638+00:00 stderr F I0120 10:55:29.537272 1 base_controller.go:73] Caches are synced for InstallerController 2026-01-20T10:55:29.540209638+00:00 stderr F I0120 10:55:29.537321 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 
2026-01-20T10:55:29.540209638+00:00 stderr F I0120 10:55:29.539199 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2026-01-20T10:55:29.576120036+00:00 stderr F E0120 10:55:29.573228 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2026-01-20T10:55:29.585101525+00:00 stderr F I0120 10:55:29.584875 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:55:29.588106435+00:00 stderr F I0120 10:55:29.585688 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:55:29.588106435+00:00 stderr F I0120 10:55:29.587514 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2026-01-20T10:55:29.611188070+00:00 stderr F I0120 10:55:29.608132 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready" 2026-01-20T10:55:29.617084697+00:00 stderr F E0120 10:55:29.616723 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2026-01-20T10:55:29.617130329+00:00 stderr F E0120 10:55:29.617095 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2026-01-20T10:55:29.621158726+00:00 stderr F I0120 10:55:29.617688 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, 
secrets: localhost-recovery-client-token-8 2026-01-20T10:55:29.627765082+00:00 stderr F E0120 10:55:29.627706 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2026-01-20T10:55:29.628177573+00:00 stderr F I0120 10:55:29.628141 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2026-01-20T10:55:29.632377006+00:00 stderr F I0120 10:55:29.632328 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:55:29.668899369+00:00 stderr F E0120 10:55:29.668826 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2026-01-20T10:55:29.668899369+00:00 stderr F I0120 10:55:29.668881 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2026-01-20T10:55:29.727733937+00:00 stderr F I0120 10:55:29.727623 1 base_controller.go:73] Caches are synced for GuardController 2026-01-20T10:55:29.727733937+00:00 stderr F I0120 10:55:29.727682 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2026-01-20T10:55:29.727844830+00:00 stderr F I0120 10:55:29.727630 1 base_controller.go:73] Caches are synced for NodeController 2026-01-20T10:55:29.727902472+00:00 stderr F I0120 10:55:29.727859 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2026-01-20T10:55:29.749549579+00:00 stderr F E0120 10:55:29.749480 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2026-01-20T10:55:29.749607051+00:00 stderr F I0120 10:55:29.749531 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2026-01-20T10:55:29.833208669+00:00 stderr F I0120 10:55:29.833048 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:55:29.928817898+00:00 stderr F I0120 10:55:29.928198 1 base_controller.go:73] Caches are synced for RevisionController 2026-01-20T10:55:29.928817898+00:00 stderr F I0120 10:55:29.928246 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2026-01-20T10:55:29.928817898+00:00 stderr F I0120 10:55:29.928319 1 base_controller.go:73] Caches are synced for TargetConfigController 2026-01-20T10:55:29.928817898+00:00 stderr F I0120 10:55:29.928340 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2026-01-20T10:55:29.931791457+00:00 stderr F I0120 10:55:29.931699 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2026-01-20T10:55:29.931791457+00:00 stderr F I0120 10:55:29.931740 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 2026-01-20T10:55:30.030683553+00:00 stderr F I0120 10:55:30.030602 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:55:30.128245854+00:00 stderr F I0120 10:55:30.128139 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:55:30.128245854+00:00 stderr F I0120 10:55:30.128183 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2026-01-20T10:55:30.227055198+00:00 stderr F I0120 10:55:30.226935 1 request.go:697] Waited for 1.099067088s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa 2026-01-20T10:55:31.427664314+00:00 stderr F I0120 10:55:31.427536 1 request.go:697] Waited for 1.19289973s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa 2026-01-20T10:55:31.846542939+00:00 stderr F I0120 10:55:31.846383 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:55:31.847086873+00:00 stderr F I0120 10:55:31.846979 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 
8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:55:31.863454250+00:00 stderr F I0120 10:55:31.863296 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.104637 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.104570271 +0000 UTC))" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.104869 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] 
issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.104829727 +0000 UTC))" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.104897 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.104880919 +0000 UTC))" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.104928 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.104902209 +0000 UTC))" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.104949 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.10493461 +0000 UTC))" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.104968 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.104954701 +0000 UTC))" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.104990 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.104973821 +0000 UTC))" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.105029 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.104995572 +0000 UTC))" 2026-01-20T10:56:07.105295500+00:00 stderr F I0120 10:56:07.105049 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.105035613 +0000 UTC))" 2026-01-20T10:56:07.108735243+00:00 stderr F I0120 10:56:07.108692 1 tlsconfig.go:178] "Loaded client CA" index=9 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.108668591 +0000 UTC))" 2026-01-20T10:56:07.108756714+00:00 stderr F I0120 10:56:07.108737 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.108715953 +0000 UTC))" 2026-01-20T10:56:07.108795005+00:00 stderr F I0120 10:56:07.108773 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.108746753 +0000 UTC))" 2026-01-20T10:56:07.109215916+00:00 stderr F I0120 10:56:07.109193 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2026-01-20 10:56:07.109172335 +0000 UTC))" 2026-01-20T10:56:07.109567325+00:00 
stderr F I0120 10:56:07.109541 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906174\" (2026-01-20 09:49:34 +0000 UTC to 2027-01-20 09:49:34 +0000 UTC (now=2026-01-20 10:56:07.109524874 +0000 UTC))" 2026-01-20T10:57:29.027633047+00:00 stderr F E0120 10:57:29.026969 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.149354425+00:00 stderr F E0120 10:57:29.149252 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:29.168402269+00:00 stderr F E0120 10:57:29.168178 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:29.195733582+00:00 stderr F E0120 10:57:29.195359 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:29.223142607+00:00 stderr F E0120 10:57:29.223016 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:29.268697222+00:00 stderr F E0120 10:57:29.267084 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): 
Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:29.358349993+00:00 stderr F E0120 10:57:29.358298 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:29.524632190+00:00 stderr F E0120 10:57:29.524556 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:29.530764283+00:00 stderr F E0120 10:57:29.530708 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.531761999+00:00 stderr F E0120 10:57:29.531725 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.538016024+00:00 stderr F E0120 10:57:29.537952 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.538228149+00:00 stderr F E0120 10:57:29.538190 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.549265401+00:00 stderr F E0120 10:57:29.549156 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.740373085+00:00 stderr F E0120 10:57:29.740289 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.938819293+00:00 stderr F E0120 10:57:29.938735 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:30.139830929+00:00 stderr F E0120 10:57:30.139770 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:30.340997849+00:00 stderr F E0120 10:57:30.340921 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:30.938786238+00:00 stderr F E0120 10:57:30.938681 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:31.140873192+00:00 stderr F E0120 10:57:31.140777 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:31.741137377+00:00 stderr F E0120 10:57:31.741022 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:31.939615275+00:00 stderr F E0120 10:57:31.939533 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.140877467+00:00 stderr F E0120 10:57:32.140800 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.739378015+00:00 stderr F E0120 10:57:32.739304 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.975940931+00:00 stderr F E0120 10:57:32.975879 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.342172286+00:00 stderr F E0120 10:57:33.342043 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial 
tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:33.542956966+00:00 stderr F E0120 10:57:33.542796 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:33.738521478+00:00 stderr F E0120 10:57:33.738424 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.940277524+00:00 stderr F E0120 10:57:33.940183 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.140104688+00:00 stderr F E0120 10:57:34.139681 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.944806328+00:00 stderr F E0120 10:57:34.944435 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": 
dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.540662686+00:00 stderr F E0120 10:57:35.540608 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.940745227+00:00 stderr F E0120 10:57:35.940662 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:36.541274427+00:00 stderr F E0120 10:57:36.541211 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:36.740598528+00:00 stderr F E0120 10:57:36.740487 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.939254702+00:00 stderr F E0120 10:57:36.939172 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.549663884+00:00 stderr F E0120 10:57:37.549514 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.140755036+00:00 stderr F E0120 10:57:38.140602 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:39.140171466+00:00 stderr F E0120 10:57:39.140109 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.742851264+00:00 stderr F E0120 10:57:39.742779 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:39.939561366+00:00 stderr F E0120 10:57:39.939190 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.546154728+00:00 stderr F E0120 10:57:40.546086 1 base_controller.go:268] StaticPodStateController reconciliation 
failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.541708566+00:00 stderr F E0120 10:57:41.541294 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.742742242+00:00 stderr F E0120 10:57:41.742645 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:42.146412068+00:00 stderr F E0120 10:57:42.146346 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:43.541133952+00:00 stderr F E0120 10:57:43.541050 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused] 2026-01-20T10:57:43.743503853+00:00 stderr F E0120 10:57:43.743431 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.140539128+00:00 stderr F E0120 10:57:45.140467 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:45.539393086+00:00 stderr F E0120 10:57:45.539293 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.738818600+00:00 stderr F E0120 10:57:45.738747 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.141638462+00:00 stderr F E0120 10:57:46.141592 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.341701318+00:00 stderr F E0120 10:57:47.340875 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:47.741356617+00:00 stderr F E0120 10:57:47.740636 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:48.949041335+00:00 stderr F E0120 10:57:48.948982 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:49.966867252+00:00 stderr F I0120 10:57:49.966784 1 helpers.go:260] lister was stale at resourceVersion=42196, live get showed resourceVersion=43333 2026-01-20T10:57:49.974160235+00:00 stderr F E0120 10:57:49.973913 1 base_controller.go:268] TargetConfigController reconciliation failed: synthetic requeue request 2026-01-20T10:58:16.389915268+00:00 stderr F I0120 10:58:16.389381 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:58:16.418172015+00:00 stderr F I0120 10:58:16.418109 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000755000175000017500000000000015133657735033106 5ustar zuulzuul././@LongLink0000644000000000000000000000040400000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000000012315133657715033102 0ustar zuulzuul2025-08-13T20:42:54.210046648+00:00 stderr F Shutting down, got signal: Terminated ././@LongLink0000644000000000000000000000040400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/d9aeaa1aa1d02c7e8201fbb13a3ee252fd99aa6b0819f3318aaa2bd88982712e.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000000000015133657715033074 0ustar zuulzuul././@LongLink0000644000000000000000000000033700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000755000175000017500000000000015133657735033106 5ustar zuulzuul././@LongLink0000644000000000000000000000034400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000000123015133657715033102 0ustar zuulzuul2026-01-20T10:49:37.606758597+00:00 stderr F 
I0120 10:49:37.606305 20 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ... 2026-01-20T10:49:37.608230112+00:00 stderr F I0120 10:49:37.607485 20 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) { 2026-01-20T10:49:37.608230112+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.crt", 2026-01-20T10:49:37.608230112+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.key" 2026-01-20T10:49:37.608230112+00:00 stderr F } 2026-01-20T10:49:37.622190017+00:00 stderr F I0120 10:49:37.613600 20 observer_polling.go:159] Starting file observer ././@LongLink0000644000000000000000000000034400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000000770515133657715033117 0ustar zuulzuul2025-08-13T19:59:27.826236007+00:00 stderr F I0813 19:59:27.767514 20 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ... 
2025-08-13T19:59:28.048819791+00:00 stderr F I0813 19:59:28.024085 20 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) { 2025-08-13T19:59:28.048819791+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.crt", 2025-08-13T19:59:28.048819791+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.key" 2025-08-13T19:59:28.048819791+00:00 stderr F } 2025-08-13T19:59:28.504245112+00:00 stderr F I0813 19:59:28.496669 20 observer_polling.go:159] Starting file observer 2025-08-13T20:00:33.509123438+00:00 stderr F I0813 20:00:33.508126 20 observer_polling.go:120] Observed file "/proc/7/root/etc/secrets/tls.crt" has been modified (old="cdb17acdd32bfc0645d11444c7bea3d36372b393d1d931de26e0171fac0f40c1", new="efb887ba7696e196412e106436a291839e27a1e68375ce9e547b2e2b32b7e988") 2025-08-13T20:00:33.774400752+00:00 stderr F I0813 20:00:33.767934 20 cmd.go:292] Sending TERM signal to 7 ... 2025-08-13T20:00:34.034707685+00:00 stderr F W0813 20:00:34.034452 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 67, err: ) 2025-08-13T20:00:34.273950867+00:00 stderr F I0813 20:00:34.272174 20 observer_polling.go:114] Observed file "/proc/7/root/etc/secrets/tls.key" has been deleted 2025-08-13T20:00:34.273950867+00:00 stderr F I0813 20:00:34.272491 20 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ... 
2025-08-13T20:00:34.273950867+00:00 stderr F W0813 20:00:34.273404 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 1, err: ) 2025-08-13T20:00:35.365647885+00:00 stderr F W0813 20:00:35.357860 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 2, err: ) 2025-08-13T20:00:36.281903040+00:00 stderr F W0813 20:00:36.279125 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 3, err: ) 2025-08-13T20:00:37.283066398+00:00 stderr F W0813 20:00:37.276990 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 4, err: ) 2025-08-13T20:00:38.308991331+00:00 stderr F W0813 20:00:38.305516 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 5, err: ) 2025-08-13T20:00:38.520605965+00:00 stderr F I0813 20:00:38.518352 20 observer_polling.go:162] Shutting down file observer 2025-08-13T20:00:39.284050724+00:00 stderr F W0813 20:00:39.283223 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 6, err: ) 2025-08-13T20:00:40.280375272+00:00 stderr F W0813 20:00:40.277531 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 7, err: ) 2025-08-13T20:00:41.276881747+00:00 stderr F W0813 20:00:41.276510 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 8, err: ) 2025-08-13T20:00:42.286214347+00:00 stderr F W0813 20:00:42.278618 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 9, err: ) 2025-08-13T20:00:43.275711531+00:00 stderr F W0813 20:00:43.275310 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 10, err: ) 2025-08-13T20:00:44.275672973+00:00 stderr F I0813 20:00:44.274906 20 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) { 2025-08-13T20:00:44.275672973+00:00 stderr F (string) (len=33) "/proc/38/root/etc/secrets/tls.crt", 2025-08-13T20:00:44.275672973+00:00 stderr F (string) (len=33) 
"/proc/38/root/etc/secrets/tls.key" 2025-08-13T20:00:44.275672973+00:00 stderr F } 2025-08-13T20:00:45.576200637+00:00 stderr F I0813 20:00:45.575191 20 observer_polling.go:159] Starting file observer 2025-08-13T20:42:42.279269663+00:00 stderr F W0813 20:42:42.278947 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 2529, err: ) ././@LongLink0000644000000000000000000000033100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000755000175000017500000000000015133657735033106 5ustar zuulzuul././@LongLink0000644000000000000000000000033600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000033314115133657715033113 0ustar zuulzuul2025-08-13T20:00:44.607246338+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-08-13T20:00:44.607860915+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="Go OS/Arch: linux/amd64" 2025-08-13T20:00:44.643869742+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="template client &v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc000789040)}" 2025-08-13T20:00:44.643869742+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc0007890e0)}" 2025-08-13T20:00:46.045395765+00:00 stderr F time="2025-08-13T20:00:46Z" level=info msg="waiting for informer 
caches to sync" 2025-08-13T20:00:46.490100176+00:00 stderr F W0813 20:00:46.490036 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:46.490191988+00:00 stderr F E0813 20:00:46.490176 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:46.556919121+00:00 stderr F W0813 20:00:46.550017 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:46.556919121+00:00 stderr F E0813 20:00:46.550134 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:47.714511817+00:00 stderr F W0813 20:00:47.713583 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:47.714623810+00:00 stderr F E0813 20:00:47.714577 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:47.909393354+00:00 stderr F W0813 20:00:47.908970 38 reflector.go:539] 
github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:47.909393354+00:00 stderr F E0813 20:00:47.909304 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:50.029249460+00:00 stderr F W0813 20:00:50.028537 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:50.029249460+00:00 stderr F E0813 20:00:50.029222 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:50.706434319+00:00 stderr F W0813 20:00:50.705977 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:50.706434319+00:00 stderr F E0813 20:00:50.706347 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:54.091117328+00:00 stderr F W0813 20:00:54.087954 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the 
request (get templates.template.openshift.io) 2025-08-13T20:00:54.091117328+00:00 stderr F E0813 20:00:54.088658 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:56.391398369+00:00 stderr F W0813 20:00:56.389256 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:56.391398369+00:00 stderr F E0813 20:00:56.389730 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:06.689662603+00:00 stderr F W0813 20:01:06.687460 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:06.693053539+00:00 stderr F E0813 20:01:06.688866 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:08.723703810+00:00 stderr F W0813 20:01:08.722018 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:08.723703810+00:00 stderr F E0813 20:01:08.722756 38 reflector.go:147] 
github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:26.625246002+00:00 stderr F W0813 20:01:26.623011 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:26.633354693+00:00 stderr F E0813 20:01:26.633140 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:28.382170209+00:00 stderr F W0813 20:01:28.381670 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:28.382270301+00:00 stderr F E0813 20:01:28.382213 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:59.134006323+00:00 stderr F W0813 20:01:59.133064 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:59.134006323+00:00 stderr F E0813 20:01:59.133733 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server 
is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:02:17.309913758+00:00 stderr F W0813 20:02:17.309063 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:02:17.310005721+00:00 stderr F E0813 20:02:17.309935 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:02:36.746592578+00:00 stderr F W0813 20:02:36.745612 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.746592578+00:00 stderr F E0813 20:02:36.746462 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.876275642+00:00 stderr F W0813 20:02:49.875554 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.876275642+00:00 stderr F E0813 20:02:49.876223 38 reflector.go:147] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.230143008+00:00 stderr F W0813 20:03:22.229209 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.230143008+00:00 stderr F E0813 20:03:22.229992 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.101454885+00:00 stderr F W0813 20:03:26.101276 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.101677402+00:00 stderr F E0813 20:03:26.101486 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.219366650+00:00 stderr F W0813 20:03:59.218715 38 reflector.go:539] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.219366650+00:00 stderr F E0813 20:03:59.219326 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.239806871+00:00 stderr F W0813 20:04:06.239069 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.239806871+00:00 stderr F E0813 20:04:06.239748 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.574106586+00:00 stderr F W0813 20:04:37.561396 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.574975231+00:00 stderr F E0813 20:04:37.574405 38 reflector.go:147] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.987897501+00:00 stderr F W0813 20:04:38.985355 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.987897501+00:00 stderr F E0813 20:04:38.985464 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:33.947007920+00:00 stderr F W0813 20:05:33.933063 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:05:33.947007920+00:00 stderr F E0813 20:05:33.942434 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:05:34.721279152+00:00 stderr F W0813 20:05:34.720916 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 
2025-08-13T20:05:34.721279152+00:00 stderr F E0813 20:05:34.721267 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:08.857838209+00:00 stderr F W0813 20:06:08.856659 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:08.857988443+00:00 stderr F E0813 20:06:08.857853 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:21.536005791+00:00 stderr F W0813 20:06:21.533951 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:21.536005791+00:00 stderr F E0813 20:06:21.534644 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:56.337219614+00:00 stderr F W0813 20:06:56.335478 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:56.337219614+00:00 stderr F E0813 20:06:56.336353 38 reflector.go:147] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:07:06.011577326+00:00 stderr F W0813 20:07:06.010522 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:06.011577326+00:00 stderr F E0813 20:07:06.011292 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:08:02.348213355+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="started events processor" 2025-08-13T20:08:02.372559943+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.372559943+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:08:02.403632764+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.403632764+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:08:02.410222553+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.410222553+00:00 stderr F time="2025-08-13T20:08:02Z" level=info 
msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:08:02.427300232+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:02.427300232+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:08:02.429669810+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:02.435338613+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:08:02.435626801+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:08:02.447449820+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.447449820+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:08:02.448483380+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:02.455757238+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:08:02.455757238+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:08:02.461821002+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream 
fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.461821002+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:08:02.803253941+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:08:02.803253941+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:08:02.807970266+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.807970266+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:08:02.872235759+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:08:02.872235759+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:08:02.888209217+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.888209217+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:08:02.896845954+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:02.896845954+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or 
image imports in flight for imagestream java-runtime" 2025-08-13T20:08:02.901936570+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.901936570+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:08:02.910760503+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.910760503+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:08:02.914398708+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.914398708+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:08:02.921727088+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.921727088+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:08:02.926546616+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.926546616+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in 
flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:08:02.940195247+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.940195247+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:08:02.947607250+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:08:02.947672352+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:08:02.955914108+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.955914108+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:08:02.962386474+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.962386474+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:08:02.969634151+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.969634151+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream 
jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:08:02.974028547+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.974028547+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:08:02.982579443+00:00 stderr F time="2025-08-13T20:08:02Z" level=warning msg="Image import for imagestream jenkins-agent-base tag scheduled-upgrade generation 3 failed with detailed message Internal error occurred: registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:v4.13.0: Get \"https://registry.redhat.io/v2/ocp-tools-4/jenkins-agent-base-rhel8/manifests/v4.13.0\": unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" 2025-08-13T20:08:03.922922212+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="initiated an imagestreamimport retry for imagestream/tag jenkins-agent-base/scheduled-upgrade" 2025-08-13T20:08:03.935561485+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:08:03.935561485+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:08:03.938521480+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.938521480+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:08:03.942338049+00:00 stderr F time="2025-08-13T20:08:03Z" 
level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:08:03.942393101+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:08:03.945844949+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:08:03.945939002+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:08:03.952315515+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:08:03.952315515+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:08:03.960539321+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:08:03.960848920+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:08:03.965657717+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:08:03.965880724+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:08:03.969083446+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:08:03.969203969+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 
2025-08-13T20:08:03.973291166+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:08:03.973291166+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:08:03.977096866+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:08:03.977170668+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:08:03.980565505+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:08:03.980597226+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:08:03.984014454+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.984014454+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:08:03.986409873+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.986738382+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:08:03.989862012+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing 
tags" 2025-08-13T20:08:03.989969005+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:08:03.994198356+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.994198356+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:08:03.999411255+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.999511038+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:08:04.009067372+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:04.009667959+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:08:04.013335484+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.013420207+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:08:04.016498465+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.016678530+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in 
flight for imagestream ubi8-openjdk-8" 2025-08-13T20:08:04.020407737+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.020590873+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:08:04.026368188+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.026455471+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:08:04.031284829+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearing error messages from configmap for stream jenkins-agent-base and tag scheduled-upgrade" 2025-08-13T20:08:04.040103402+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:08:04.049922974+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.049922974+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:08:04.050622114+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="CRDUPDATE importerrors false update" 2025-08-13T20:08:04.053712322+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.053762564+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream 
ubi8-openjdk-21-runtime" 2025-08-13T20:08:04.057615664+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.057615664+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:08:07.114613122+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:07.114613122+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:07.222850344+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:07.223007889+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:11.779708813+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:11.779708813+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:11.861485008+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:11.861485008+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:12.282581371+00:00 stderr F time="2025-08-13T20:08:12Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:12.282581371+00:00 stderr F 
time="2025-08-13T20:08:12Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:09:54.830597497+00:00 stderr F time="2025-08-13T20:09:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:09:54.830597497+00:00 stderr F time="2025-08-13T20:09:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:09:55.128982603+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:09:55.129065715+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:01.566036489+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:01.566036489+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:39.024057289+00:00 stderr F time="2025-08-13T20:10:39Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:39.024302786+00:00 stderr F time="2025-08-13T20:10:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:59.781430907+00:00 stderr F time="2025-08-13T20:10:59Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:59.781430907+00:00 stderr F time="2025-08-13T20:10:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:00.084006942+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="no global imagestream configuration 
will block imagestream creation using " 2025-08-13T20:11:00.084006942+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:00.374907753+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:11:00.375574102+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:19.467053212+00:00 stderr F time="2025-08-13T20:11:19Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:11:19.467053212+00:00 stderr F time="2025-08-13T20:11:19Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:18:02.307339975+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:18:02.307339975+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:18:02.966757286+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:18:02.966757286+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 
2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:18:03.520430957+00:00 stderr F time="2025-08-13T20:18:03Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:18:03.520430957+00:00 stderr F time="2025-08-13T20:18:03Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:18:04.535876206+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:04.535876206+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:18:04.978403133+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:04.978403133+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:18:06.390925831+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:18:06.390925831+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:18:06.407627178+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 
2025-08-13T20:18:06.407627178+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:18:06.416865322+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.416865322+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:18:06.446104327+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:06.446104327+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:18:06.452899421+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.452899421+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:18:06.459854979+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:18:06.459854979+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:18:06.469164085+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:18:06.469164085+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight 
for imagestream golang" 2025-08-13T20:18:06.568908434+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.568908434+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:18:06.574094902+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.574094902+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:18:06.579286820+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.579286820+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:18:06.582616595+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.582616595+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:18:06.587947887+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:18:06.587947887+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:18:06.599131687+00:00 stderr F 
time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:18:06.599131687+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:18:06.605735775+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.605735775+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:18:06.612459677+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:06.612459677+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:18:06.623063180+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags" 2025-08-13T20:18:06.623063180+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:18:06.634421014+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.634421014+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:18:06.643141153+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream java already deleted 
so no worries on clearing tags" 2025-08-13T20:18:06.643141153+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:18:06.654255851+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.654255851+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:18:06.662020883+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:06.662020883+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:18:06.667241402+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.667241402+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:18:06.685541374+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.685541374+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:18:06.688721875+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.688721875+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no 
more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:18:06.693616445+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.693698747+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:18:06.699126952+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:18:06.699126952+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:18:06.702904520+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.702904520+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:18:06.707837741+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.707897353+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:18:06.714696467+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.714696467+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for 
imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:18:06.726013190+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.726013190+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:18:06.731489846+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:18:06.731489846+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:18:06.736492549+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.736492549+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:18:06.739185426+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.739252558+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:18:06.744658042+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:06.744658042+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:18:06.748432640+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: 
stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.748432640+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:18:06.751220370+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.751220370+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:18:06.754631607+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:18:06.754631607+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:18:06.758684183+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:18:06.758684183+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:18:06.767243277+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.767243277+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:18:06.776049099+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.776049099+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more 
errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-08-13T20:18:06.779757575+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.779757575+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:18:06.783944384+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:18:06.783944384+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:18:06.787472455+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.787543947+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:19:54.880115698+00:00 stderr F time="2025-08-13T20:19:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:19:54.880115698+00:00 stderr F time="2025-08-13T20:19:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:19:55.141373815+00:00 stderr F time="2025-08-13T20:19:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:19:55.141494039+00:00 stderr F time="2025-08-13T20:19:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:22:18.915201214+00:00 stderr F time="2025-08-13T20:22:18Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries 
on clearing tags" 2025-08-13T20:22:18.915201214+00:00 stderr F time="2025-08-13T20:22:18Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:28:02.308714006+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.308714006+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:28:02.319453535+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.319453535+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:28:02.323832900+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:28:02.323923243+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:28:02.329429961+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.329429961+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:28:02.333298503+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.333298503+00:00 stderr F 
time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:28:02.337924946+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.337924946+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:28:02.341408756+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.341408756+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:28:02.346147092+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.346147092+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:28:02.350829617+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.350829617+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:28:02.355305725+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.355305725+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no 
more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:28:02.360209206+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:28:02.360209206+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:28:02.364812158+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.364812158+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:28:02.371478110+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.371478110+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:28:02.373967922+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.373967922+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:28:02.378446610+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.378446610+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:28:02.381110237+00:00 stderr F time="2025-08-13T20:28:02Z" level=info 
msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.381110237+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:28:02.384421912+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:28:02.384421912+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:28:02.386399489+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.386399489+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:28:02.389844498+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.389844498+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:28:02.392765632+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.392765632+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:28:02.395645355+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:28:02.395645355+00:00 stderr F 
time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:28:02.403048348+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:28:02.403142020+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:28:02.411039457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.411039457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:28:02.415214457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.415214457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:28:02.419758268+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:28:02.419758268+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:28:02.424949507+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:28:02.424949507+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:28:02.430269180+00:00 stderr F time="2025-08-13T20:28:02Z" level=info 
msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.430269180+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:28:02.440613397+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.440613397+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:28:02.444632473+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:28:02.444632473+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:28:02.448248957+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.448248957+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-08-13T20:28:02.454270920+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:28:02.454270920+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:28:02.463883476+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:28:02.463883476+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for 
imagestream dotnet" 2025-08-13T20:28:02.467411198+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.467411198+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:28:02.471754743+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:28:02.471754743+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:28:02.477402255+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:28:02.477402255+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:28:02.482131731+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.482131731+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:28:02.488397691+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.488397691+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:28:02.491967054+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted 
so no worries on clearing tags" 2025-08-13T20:28:02.492082687+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:28:02.495688911+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.495688911+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:28:02.498623645+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:28:02.498623645+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:28:02.501133997+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:28:02.501185759+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:28:02.503971799+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.503971799+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:28:02.506392838+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.506446380+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image 
imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:28:02.509020194+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.509020194+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:28:02.511605938+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.511653530+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:28:02.514507372+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags" 2025-08-13T20:28:02.514507372+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:28:02.521752660+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.522160162+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:28:02.526249639+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:28:02.526249639+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 
2025-08-13T20:28:02.529900994+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.529900994+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:29:54.885451542+00:00 stderr F time="2025-08-13T20:29:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:29:54.885451542+00:00 stderr F time="2025-08-13T20:29:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:29:55.027747743+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:29:55.030641676+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:29:55.147048872+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:29:55.147048872+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:38:02.309098888+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.309269413+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:38:02.326326495+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream 
jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.326326495+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:38:02.338454975+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.338454975+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:38:02.342211403+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.342211403+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:38:02.346724143+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags" 2025-08-13T20:38:02.346865967+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:38:02.351160871+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:38:02.351160871+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:38:02.354514998+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.354514998+00:00 stderr F 
time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:38:02.358551754+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.358598555+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:38:02.362463087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.362463087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:38:02.366733010+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.366863374+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:38:02.370736005+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:38:02.370736005+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:38:02.375228885+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.375308407+00:00 stderr F time="2025-08-13T20:38:02Z" 
level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:38:02.379249151+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.379295622+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:38:02.383374900+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.383423071+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:38:02.385961164+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.386057937+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:38:02.388712063+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.388831257+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:38:02.391967277+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.391967277+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for 
imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:38:02.399498344+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.399547966+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:38:02.403921842+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.403990024+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:38:02.407048532+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.407236408+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:38:02.409688868+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.409733350+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:38:02.413591791+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:38:02.413642652+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:38:02.416944857+00:00 stderr F time="2025-08-13T20:38:02Z" 
level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:38:02.416994169+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:38:02.419545742+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:38:02.419632375+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:38:02.422408055+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.422457466+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:38:02.425163324+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.425163324+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:38:02.427469901+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.427516402+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:38:02.430865569+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:38:02.430920290+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There 
are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:38:02.436699087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:38:02.436699087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:38:02.439641982+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.439641982+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:38:02.442081142+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.442081142+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:38:02.446396566+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.446396566+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:38:02.455233461+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:38:02.455233461+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:38:02.458619829+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream 
fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.458619829+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:38:02.463981274+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.463981274+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-08-13T20:38:02.469686288+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:38:02.469686288+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:38:02.475182526+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.475182526+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:38:02.479040148+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.479040148+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:38:02.489055846+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:38:02.489055846+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or 
image imports in flight for imagestream postgresql" 2025-08-13T20:38:02.491998151+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:38:02.492242748+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:38:02.496285355+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:38:02.496285355+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:38:02.499108776+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.499213759+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:38:02.502560806+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:38:02.502649578+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:38:02.507321043+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.507537159+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:38:02.512683188+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on 
clearing tags" 2025-08-13T20:38:02.513183312+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:38:02.518338191+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:38:02.518338191+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:38:02.522257654+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:38:02.522257654+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:38:02.528893235+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.528893235+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:38:02.533061065+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.533061065+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:39:54.858926167+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:39:54.858926167+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 
2025-08-13T20:39:54.929035027+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:39:54.929035027+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:39:55.130956989+00:00 stderr F time="2025-08-13T20:39:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:39:55.130956989+00:00 stderr F time="2025-08-13T20:39:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:20.405267752+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:20.405267752+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:20.467397943+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:20.467397943+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:21.394447020+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:21.396885090+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:21.465459317+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:21.465459317+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="At steady 
state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:22.370705595+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:22.371446596+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:22.433963449+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:22.433963449+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:23.380587010+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:23.380587010+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:23.461112032+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:23.461112032+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:24.381655721+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:24.381655721+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:24.458359573+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="no global imagestream configuration will block imagestream creation using " 
2025-08-13T20:42:24.458471076+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:25.417401671+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:25.417401671+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:25.505537472+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:25.505537472+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:26.408426263+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:26.408524316+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:26.480985995+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:26.480985995+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:27.472023757+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:27.472023757+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:27.612048704+00:00 stderr F time="2025-08-13T20:42:27Z" 
level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:27.614752602+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:28.392512125+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:28.392512125+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:28.468054283+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:28.468054283+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:29.407675142+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:29.407719013+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:29.479289737+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:29.479289737+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:30.404129860+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:30.404129860+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version 
correct" 2025-08-13T20:42:30.506750169+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:30.506750169+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:31.393037481+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:31.393037481+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:31.456042957+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:31.456042957+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:32.390306402+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:32.390306402+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:32.467966580+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:32.467966580+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:33.431971123+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:33.431971123+00:00 stderr F time="2025-08-13T20:42:33Z" level=info 
msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:33.545267030+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:33.545267030+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:34.405879691+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:34.405879691+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:34.501982352+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:34.501982352+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:35.404360718+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:35.405447990+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:35.490723988+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:35.490723988+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:41.328870733+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="shutting down events processor" 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log 2026-01-20T10:49:37.251462455+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2026-01-20T10:49:37.251462455+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="Go OS/Arch: linux/amd64" 2026-01-20T10:49:37.255817537+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="template client &v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc000743540)}" 2026-01-20T10:49:37.255817537+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc000743680)}" 2026-01-20T10:49:37.451976092+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="waiting for informer caches to sync" 2026-01-20T10:49:37.519388525+00:00 stderr F W0120 10:49:37.519287 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2026-01-20T10:49:37.519388525+00:00 stderr F E0120 10:49:37.519374 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2026-01-20T10:49:37.521691126+00:00 stderr F W0120 10:49:37.521628 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list 
*v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2026-01-20T10:49:37.521709636+00:00 stderr F E0120 10:49:37.521694 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2026-01-20T10:49:38.827435208+00:00 stderr F W0120 10:49:38.826395 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2026-01-20T10:49:38.827435208+00:00 stderr F E0120 10:49:38.827177 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2026-01-20T10:49:38.939104049+00:00 stderr F W0120 10:49:38.938353 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2026-01-20T10:49:38.939104049+00:00 stderr F E0120 10:49:38.938388 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2026-01-20T10:49:41.939291963+00:00 stderr F W0120 10:49:41.939008 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2026-01-20T10:49:41.939291963+00:00 stderr F E0120 10:49:41.939276 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2026-01-20T10:49:41.977878038+00:00 stderr F W0120 10:49:41.977817 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2026-01-20T10:49:41.977878038+00:00 stderr F E0120 10:49:41.977854 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2026-01-20T10:49:46.151023100+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="started events processor"
2026-01-20T10:49:46.153283488+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags"
2026-01-20T10:49:46.153283488+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet"
2026-01-20T10:49:46.162097948+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.162097948+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift"
2026-01-20T10:49:46.169674888+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.169674888+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11"
2026-01-20T10:49:46.173625509+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.173625509+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift"
2026-01-20T10:49:46.173625509+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.173625509+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift"
2026-01-20T10:49:46.176352992+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags"
2026-01-20T10:49:46.176352992+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd"
2026-01-20T10:49:46.185211961+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags"
2026-01-20T10:49:46.185211961+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime"
2026-01-20T10:49:46.185211961+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.185211961+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift"
2026-01-20T10:49:46.190181133+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags"
2026-01-20T10:49:46.190181133+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream java"
2026-01-20T10:49:46.190181133+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2026-01-20T10:49:46.200976711+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.200976711+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11"
2026-01-20T10:49:46.218146375+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags"
2026-01-20T10:49:46.218146375+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream golang"
2026-01-20T10:49:46.218146375+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:49:46.283993620+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags"
2026-01-20T10:49:46.284074392+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime"
2026-01-20T10:49:46.314308513+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.314378155+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift"
2026-01-20T10:49:46.318482530+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.318542982+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift"
2026-01-20T10:49:46.322210324+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.322210324+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift"
2026-01-20T10:49:46.328425973+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.329276229+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift"
2026-01-20T10:49:46.331103315+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.331103315+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2026-01-20T10:49:46.333038374+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.333038374+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2026-01-20T10:49:46.334482998+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.334482998+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift"
2026-01-20T10:49:46.335977964+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.335977964+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift"
2026-01-20T10:49:46.337387976+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.337387976+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift"
2026-01-20T10:49:46.341169222+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags"
2026-01-20T10:49:46.341169222+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb"
2026-01-20T10:49:46.343119931+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.343119931+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2026-01-20T10:49:46.345096891+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.345096891+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8"
2026-01-20T10:49:46.347369211+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags"
2026-01-20T10:49:46.347369211+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql"
2026-01-20T10:49:46.352807286+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags"
2026-01-20T10:49:46.352807286+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx"
2026-01-20T10:49:46.354300001+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags"
2026-01-20T10:49:46.354300001+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins"
2026-01-20T10:49:46.359284013+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags"
2026-01-20T10:49:46.359284013+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base"
2026-01-20T10:49:46.365887734+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.365887734+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7"
2026-01-20T10:49:46.370074382+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags"
2026-01-20T10:49:46.370128344+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream php"
2026-01-20T10:49:46.374392633+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags"
2026-01-20T10:49:46.374392633+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream perl"
2026-01-20T10:49:46.376791946+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags"
2026-01-20T10:49:46.376791946+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream python"
2026-01-20T10:49:46.379145118+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.379145118+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8"
2026-01-20T10:49:46.381459659+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags"
2026-01-20T10:49:46.381459659+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs"
2026-01-20T10:49:46.383806350+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags"
2026-01-20T10:49:46.383806350+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql"
2026-01-20T10:49:46.385819691+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags"
2026-01-20T10:49:46.385819691+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby"
2026-01-20T10:49:46.388677878+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.388677878+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8"
2026-01-20T10:49:46.396227328+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.396227328+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8"
2026-01-20T10:49:46.399169448+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.399169448+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8"
2026-01-20T10:49:46.401305413+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags"
2026-01-20T10:49:46.401305413+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream redis"
2026-01-20T10:49:46.403164790+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags"
2026-01-20T10:49:46.403164790+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift"
2026-01-20T10:49:46.406330406+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags"
2026-01-20T10:49:46.406330406+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime"
2026-01-20T10:49:46.408079719+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags"
2026-01-20T10:49:46.408096970+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime"
2026-01-20T10:49:46.409762611+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.409762611+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2026-01-20T10:49:46.412263637+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags"
2026-01-20T10:49:46.412263637+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime"
2026-01-20T10:49:46.413111793+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.413111793+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11"
2026-01-20T10:49:46.415150705+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.415150705+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17"
2026-01-20T10:49:46.419295231+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags"
2026-01-20T10:49:46.419295231+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime"
2026-01-20T10:49:46.419295231+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags"
2026-01-20T10:49:46.419295231+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8"
2026-01-20T10:49:53.183736362+00:00 stderr F time="2026-01-20T10:49:53Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2026-01-20T10:49:53.183736362+00:00 stderr F time="2026-01-20T10:49:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:51:15.084725932+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2026-01-20T10:51:15.084725932+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:54:32.393739821+00:00 stderr F time="2026-01-20T10:54:32Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:54:32.393739821+00:00 stderr F time="2026-01-20T10:54:32Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:54:32.393739821+00:00 stderr F time="2026-01-20T10:54:32Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:54:32.393739821+00:00 stderr F time="2026-01-20T10:54:32Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:54:32.393739821+00:00 stderr F time="2026-01-20T10:54:32Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:54:32.393739821+00:00 stderr F time="2026-01-20T10:54:32Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:54:32.393837964+00:00 stderr F time="2026-01-20T10:54:32Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:54:32.393837964+00:00 stderr F time="2026-01-20T10:54:32Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:54:35.439398008+00:00 stderr F time="2026-01-20T10:54:35Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:54:35.439398008+00:00 stderr F time="2026-01-20T10:54:35Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:54:35.439398008+00:00 stderr F time="2026-01-20T10:54:35Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:54:35.439398008+00:00 stderr F time="2026-01-20T10:54:35Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:54:35.439398008+00:00 stderr F time="2026-01-20T10:54:35Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:54:35.439398008+00:00 stderr F time="2026-01-20T10:54:35Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:54:35.439398008+00:00 stderr F time="2026-01-20T10:54:35Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:54:35.439398008+00:00 stderr F time="2026-01-20T10:54:35Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:54:38.457792878+00:00 stderr F time="2026-01-20T10:54:38Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:54:38.457792878+00:00 stderr F time="2026-01-20T10:54:38Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:54:38.457792878+00:00 stderr F time="2026-01-20T10:54:38Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:54:38.457792878+00:00 stderr F time="2026-01-20T10:54:38Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:54:38.457792878+00:00 stderr F time="2026-01-20T10:54:38Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:54:38.457792878+00:00 stderr F time="2026-01-20T10:54:38Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:54:38.457792878+00:00 stderr F time="2026-01-20T10:54:38Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:54:38.457832680+00:00 stderr F time="2026-01-20T10:54:38Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:54:51.653529228+00:00 stderr F time="2026-01-20T10:54:51Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:54:51.653529228+00:00 stderr F time="2026-01-20T10:54:51Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:54:51.653529228+00:00 stderr F time="2026-01-20T10:54:51Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:54:51.653529228+00:00 stderr F time="2026-01-20T10:54:51Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:54:51.653529228+00:00 stderr F time="2026-01-20T10:54:51Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:54:51.653529228+00:00 stderr F time="2026-01-20T10:54:51Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:54:51.653529228+00:00 stderr F time="2026-01-20T10:54:51Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:54:51.653529228+00:00 stderr F time="2026-01-20T10:54:51Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:54:54.684790481+00:00 stderr F time="2026-01-20T10:54:54Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:54:54.684790481+00:00 stderr F time="2026-01-20T10:54:54Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:54:54.684790481+00:00 stderr F time="2026-01-20T10:54:54Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:54:54.684790481+00:00 stderr F time="2026-01-20T10:54:54Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:54:54.684790481+00:00 stderr F time="2026-01-20T10:54:54Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:54:54.684790481+00:00 stderr F time="2026-01-20T10:54:54Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:54:54.684790481+00:00 stderr F time="2026-01-20T10:54:54Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:54:54.684790481+00:00 stderr F time="2026-01-20T10:54:54Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:54:58.946522436+00:00 stderr F time="2026-01-20T10:54:58Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:54:58.946522436+00:00 stderr F time="2026-01-20T10:54:58Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:54:58.946522436+00:00 stderr F time="2026-01-20T10:54:58Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:54:58.946522436+00:00 stderr F time="2026-01-20T10:54:58Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:54:58.946522436+00:00 stderr F time="2026-01-20T10:54:58Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:54:58.946522436+00:00 stderr F time="2026-01-20T10:54:58Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:54:58.946588998+00:00 stderr F time="2026-01-20T10:54:58Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:54:58.946588998+00:00 stderr F time="2026-01-20T10:54:58Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:01.976519897+00:00 stderr F time="2026-01-20T10:55:01Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:01.976519897+00:00 stderr F time="2026-01-20T10:55:01Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:01.976519897+00:00 stderr F time="2026-01-20T10:55:01Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:01.976519897+00:00 stderr F time="2026-01-20T10:55:01Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:01.976519897+00:00 stderr F time="2026-01-20T10:55:01Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:01.976519897+00:00 stderr F time="2026-01-20T10:55:01Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:01.976584049+00:00 stderr F time="2026-01-20T10:55:01Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:01.976584049+00:00 stderr F time="2026-01-20T10:55:01Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:05.004846121+00:00 stderr F time="2026-01-20T10:55:05Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:05.004846121+00:00 stderr F time="2026-01-20T10:55:05Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:05.004846121+00:00 stderr F time="2026-01-20T10:55:05Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:05.004846121+00:00 stderr F time="2026-01-20T10:55:05Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:05.004846121+00:00 stderr F time="2026-01-20T10:55:05Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:05.004846121+00:00 stderr F time="2026-01-20T10:55:05Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:05.004945484+00:00 stderr F time="2026-01-20T10:55:05Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:05.004945484+00:00 stderr F time="2026-01-20T10:55:05Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:08.030232569+00:00 stderr F time="2026-01-20T10:55:08Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:08.030232569+00:00 stderr F time="2026-01-20T10:55:08Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:08.030232569+00:00 stderr F time="2026-01-20T10:55:08Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:08.030232569+00:00 stderr F time="2026-01-20T10:55:08Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:08.030281220+00:00 stderr F time="2026-01-20T10:55:08Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:08.030281220+00:00 stderr F time="2026-01-20T10:55:08Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:08.030281220+00:00 stderr F time="2026-01-20T10:55:08Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:08.030291340+00:00 stderr F time="2026-01-20T10:55:08Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:11.051803833+00:00 stderr F time="2026-01-20T10:55:11Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:11.051803833+00:00 stderr F time="2026-01-20T10:55:11Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:11.051803833+00:00 stderr F time="2026-01-20T10:55:11Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:11.051803833+00:00 stderr F time="2026-01-20T10:55:11Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:11.051874545+00:00 stderr F time="2026-01-20T10:55:11Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:11.051874545+00:00 stderr F time="2026-01-20T10:55:11Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:11.051874545+00:00 stderr F time="2026-01-20T10:55:11Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:11.051874545+00:00 stderr F time="2026-01-20T10:55:11Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:14.077282352+00:00 stderr F time="2026-01-20T10:55:14Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:14.077282352+00:00 stderr F time="2026-01-20T10:55:14Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:14.077282352+00:00 stderr F time="2026-01-20T10:55:14Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:14.077282352+00:00 stderr F time="2026-01-20T10:55:14Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:14.077282352+00:00 stderr F time="2026-01-20T10:55:14Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:14.077282352+00:00 stderr F time="2026-01-20T10:55:14Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:14.077282352+00:00 stderr F time="2026-01-20T10:55:14Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:14.077345594+00:00 stderr F time="2026-01-20T10:55:14Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:17.208437610+00:00 stderr F time="2026-01-20T10:55:17Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:17.208437610+00:00 stderr F time="2026-01-20T10:55:17Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:17.208437610+00:00 stderr F time="2026-01-20T10:55:17Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:17.208437610+00:00 stderr F time="2026-01-20T10:55:17Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:17.208437610+00:00 stderr F time="2026-01-20T10:55:17Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:17.208437610+00:00 stderr F time="2026-01-20T10:55:17Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:17.208437610+00:00 stderr F time="2026-01-20T10:55:17Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:17.208482601+00:00 stderr F time="2026-01-20T10:55:17Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:20.234513436+00:00 stderr F time="2026-01-20T10:55:20Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:20.234513436+00:00 stderr F time="2026-01-20T10:55:20Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:20.234513436+00:00 stderr F time="2026-01-20T10:55:20Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:20.234513436+00:00 stderr F time="2026-01-20T10:55:20Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:20.234513436+00:00 stderr F time="2026-01-20T10:55:20Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:20.234513436+00:00 stderr F time="2026-01-20T10:55:20Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:20.234619749+00:00 stderr F time="2026-01-20T10:55:20Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:20.234619749+00:00 stderr F time="2026-01-20T10:55:20Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:23.257783917+00:00 stderr F time="2026-01-20T10:55:23Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:23.257783917+00:00 stderr F time="2026-01-20T10:55:23Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:23.257783917+00:00 stderr F time="2026-01-20T10:55:23Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:23.257783917+00:00 stderr F time="2026-01-20T10:55:23Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:23.257783917+00:00 stderr F time="2026-01-20T10:55:23Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:23.257783917+00:00 stderr F time="2026-01-20T10:55:23Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:23.257783917+00:00 stderr F time="2026-01-20T10:55:23Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:23.257783917+00:00 stderr F time="2026-01-20T10:55:23Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:26.279289072+00:00 stderr F time="2026-01-20T10:55:26Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:26.279289072+00:00 stderr F time="2026-01-20T10:55:26Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:26.279289072+00:00 stderr F time="2026-01-20T10:55:26Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:26.279289072+00:00 stderr F time="2026-01-20T10:55:26Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:26.279289072+00:00 stderr F time="2026-01-20T10:55:26Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:26.279289072+00:00 stderr F time="2026-01-20T10:55:26Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:26.279289072+00:00 stderr F time="2026-01-20T10:55:26Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:26.279289072+00:00 stderr F time="2026-01-20T10:55:26Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:29.299866251+00:00 stderr F time="2026-01-20T10:55:29Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:29.299866251+00:00 stderr F time="2026-01-20T10:55:29Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:29.299866251+00:00 stderr F time="2026-01-20T10:55:29Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:29.299866251+00:00 stderr F time="2026-01-20T10:55:29Z" level=info msg="considering allowed registry registry.redhat.io for "
2026-01-20T10:55:29.299866251+00:00 stderr F time="2026-01-20T10:55:29Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for "
2026-01-20T10:55:29.299866251+00:00 stderr F time="2026-01-20T10:55:29Z" level=info msg="no allowed registries items will permit the use of "
2026-01-20T10:55:29.299942933+00:00 stderr F time="2026-01-20T10:55:29Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}"
2026-01-20T10:55:29.299942933+00:00 stderr F time="2026-01-20T10:55:29Z" level=info msg="CRDUPDATE bad spec validation update"
2026-01-20T10:55:32.324767908+00:00 stderr F time="2026-01-20T10:55:32Z" level=info msg="considering allowed registry 38.102.83.51:5001 for "
2026-01-20T10:55:32.324767908+00:00 stderr F time="2026-01-20T10:55:32Z" level=info msg="considering allowed registry quay.io for "
2026-01-20T10:55:32.324767908+00:00 stderr F time="2026-01-20T10:55:32Z" level=info msg="considering allowed registry gcr.io for "
2026-01-20T10:55:32.324767908+00:00 stderr F
time="2026-01-20T10:55:32Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:32.324767908+00:00 stderr F time="2026-01-20T10:55:32Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:32.324767908+00:00 stderr F time="2026-01-20T10:55:32Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:32.324767908+00:00 stderr F time="2026-01-20T10:55:32Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:32.324767908+00:00 stderr F time="2026-01-20T10:55:32Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:55:35.351944415+00:00 stderr F time="2026-01-20T10:55:35Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:35.351944415+00:00 stderr F time="2026-01-20T10:55:35Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:35.351944415+00:00 stderr F time="2026-01-20T10:55:35Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:55:35.351944415+00:00 stderr F time="2026-01-20T10:55:35Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:35.351944415+00:00 stderr F time="2026-01-20T10:55:35Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:35.351944415+00:00 stderr F time="2026-01-20T10:55:35Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:35.352035997+00:00 stderr F time="2026-01-20T10:55:35Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:35.352035997+00:00 stderr F time="2026-01-20T10:55:35Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:55:38.383408244+00:00 stderr F 
time="2026-01-20T10:55:38Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:38.383408244+00:00 stderr F time="2026-01-20T10:55:38Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:38.383408244+00:00 stderr F time="2026-01-20T10:55:38Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:55:38.383408244+00:00 stderr F time="2026-01-20T10:55:38Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:38.383408244+00:00 stderr F time="2026-01-20T10:55:38Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:38.383408244+00:00 stderr F time="2026-01-20T10:55:38Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:38.383408244+00:00 stderr F time="2026-01-20T10:55:38Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:38.383488236+00:00 stderr F time="2026-01-20T10:55:38Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:55:41.402127953+00:00 stderr F time="2026-01-20T10:55:41Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:41.402127953+00:00 stderr F time="2026-01-20T10:55:41Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:41.402127953+00:00 stderr F time="2026-01-20T10:55:41Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:55:41.402127953+00:00 stderr F time="2026-01-20T10:55:41Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:41.402127953+00:00 stderr F time="2026-01-20T10:55:41Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:41.402127953+00:00 stderr F time="2026-01-20T10:55:41Z" level=info msg="no allowed registries items will 
permit the use of " 2026-01-20T10:55:41.402127953+00:00 stderr F time="2026-01-20T10:55:41Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:41.402127953+00:00 stderr F time="2026-01-20T10:55:41Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:55:44.431778506+00:00 stderr F time="2026-01-20T10:55:44Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:44.431778506+00:00 stderr F time="2026-01-20T10:55:44Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:44.431778506+00:00 stderr F time="2026-01-20T10:55:44Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:55:44.431778506+00:00 stderr F time="2026-01-20T10:55:44Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:44.431778506+00:00 stderr F time="2026-01-20T10:55:44Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:44.431778506+00:00 stderr F time="2026-01-20T10:55:44Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:44.431778506+00:00 stderr F time="2026-01-20T10:55:44Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:44.431778506+00:00 stderr F time="2026-01-20T10:55:44Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:55:47.463010228+00:00 stderr F time="2026-01-20T10:55:47Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:47.463010228+00:00 stderr F time="2026-01-20T10:55:47Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:47.463010228+00:00 stderr F time="2026-01-20T10:55:47Z" level=info msg="considering allowed registry gcr.io for " 
2026-01-20T10:55:47.463010228+00:00 stderr F time="2026-01-20T10:55:47Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:47.463010228+00:00 stderr F time="2026-01-20T10:55:47Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:47.463010228+00:00 stderr F time="2026-01-20T10:55:47Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:47.463010228+00:00 stderr F time="2026-01-20T10:55:47Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:47.463010228+00:00 stderr F time="2026-01-20T10:55:47Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:55:50.499118105+00:00 stderr F time="2026-01-20T10:55:50Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:50.499118105+00:00 stderr F time="2026-01-20T10:55:50Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:50.499118105+00:00 stderr F time="2026-01-20T10:55:50Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:55:50.499118105+00:00 stderr F time="2026-01-20T10:55:50Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:50.499118105+00:00 stderr F time="2026-01-20T10:55:50Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:50.499173227+00:00 stderr F time="2026-01-20T10:55:50Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:50.499173227+00:00 stderr F time="2026-01-20T10:55:50Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:50.499173227+00:00 stderr F time="2026-01-20T10:55:50Z" level=info msg="CRDUPDATE bad spec validation update" 
2026-01-20T10:55:53.526900517+00:00 stderr F time="2026-01-20T10:55:53Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:53.526900517+00:00 stderr F time="2026-01-20T10:55:53Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:53.526900517+00:00 stderr F time="2026-01-20T10:55:53Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:55:53.526900517+00:00 stderr F time="2026-01-20T10:55:53Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:53.526900517+00:00 stderr F time="2026-01-20T10:55:53Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:53.526900517+00:00 stderr F time="2026-01-20T10:55:53Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:53.526900517+00:00 stderr F time="2026-01-20T10:55:53Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:53.526976589+00:00 stderr F time="2026-01-20T10:55:53Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:55:56.554468485+00:00 stderr F time="2026-01-20T10:55:56Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:56.554613799+00:00 stderr F time="2026-01-20T10:55:56Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:56.554682211+00:00 stderr F time="2026-01-20T10:55:56Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:55:56.554750643+00:00 stderr F time="2026-01-20T10:55:56Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:56.554819945+00:00 stderr F time="2026-01-20T10:55:56Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:56.554911667+00:00 stderr F time="2026-01-20T10:55:56Z" 
level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:56.555001910+00:00 stderr F time="2026-01-20T10:55:56Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:56.555126973+00:00 stderr F time="2026-01-20T10:55:56Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:55:59.586438662+00:00 stderr F time="2026-01-20T10:55:59Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:55:59.586438662+00:00 stderr F time="2026-01-20T10:55:59Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:55:59.586438662+00:00 stderr F time="2026-01-20T10:55:59Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:55:59.586438662+00:00 stderr F time="2026-01-20T10:55:59Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:55:59.586438662+00:00 stderr F time="2026-01-20T10:55:59Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:55:59.586438662+00:00 stderr F time="2026-01-20T10:55:59Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:55:59.586438662+00:00 stderr F time="2026-01-20T10:55:59Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:55:59.586438662+00:00 stderr F time="2026-01-20T10:55:59Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:02.610476256+00:00 stderr F time="2026-01-20T10:56:02Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:02.610476256+00:00 stderr F time="2026-01-20T10:56:02Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:02.610476256+00:00 stderr F time="2026-01-20T10:56:02Z" level=info msg="considering allowed 
registry gcr.io for " 2026-01-20T10:56:02.610476256+00:00 stderr F time="2026-01-20T10:56:02Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:02.610476256+00:00 stderr F time="2026-01-20T10:56:02Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:02.610476256+00:00 stderr F time="2026-01-20T10:56:02Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:02.610476256+00:00 stderr F time="2026-01-20T10:56:02Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:02.610476256+00:00 stderr F time="2026-01-20T10:56:02Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:05.633476900+00:00 stderr F time="2026-01-20T10:56:05Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:05.633476900+00:00 stderr F time="2026-01-20T10:56:05Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:05.633476900+00:00 stderr F time="2026-01-20T10:56:05Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:56:05.633476900+00:00 stderr F time="2026-01-20T10:56:05Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:05.633476900+00:00 stderr F time="2026-01-20T10:56:05Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:05.633562473+00:00 stderr F time="2026-01-20T10:56:05Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:05.633562473+00:00 stderr F time="2026-01-20T10:56:05Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:05.633562473+00:00 stderr F time="2026-01-20T10:56:05Z" level=info msg="CRDUPDATE bad 
spec validation update" 2026-01-20T10:56:08.651607444+00:00 stderr F time="2026-01-20T10:56:08Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:08.651607444+00:00 stderr F time="2026-01-20T10:56:08Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:08.651607444+00:00 stderr F time="2026-01-20T10:56:08Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:56:08.651607444+00:00 stderr F time="2026-01-20T10:56:08Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:08.651607444+00:00 stderr F time="2026-01-20T10:56:08Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:08.651607444+00:00 stderr F time="2026-01-20T10:56:08Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:08.651607444+00:00 stderr F time="2026-01-20T10:56:08Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:08.651607444+00:00 stderr F time="2026-01-20T10:56:08Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:11.670411566+00:00 stderr F time="2026-01-20T10:56:11Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:11.670411566+00:00 stderr F time="2026-01-20T10:56:11Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:11.670411566+00:00 stderr F time="2026-01-20T10:56:11Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:56:11.670411566+00:00 stderr F time="2026-01-20T10:56:11Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:11.670411566+00:00 stderr F time="2026-01-20T10:56:11Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:11.670411566+00:00 stderr F 
time="2026-01-20T10:56:11Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:11.670411566+00:00 stderr F time="2026-01-20T10:56:11Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:11.670455288+00:00 stderr F time="2026-01-20T10:56:11Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:14.697561383+00:00 stderr F time="2026-01-20T10:56:14Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:14.697561383+00:00 stderr F time="2026-01-20T10:56:14Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:14.697561383+00:00 stderr F time="2026-01-20T10:56:14Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:56:14.697561383+00:00 stderr F time="2026-01-20T10:56:14Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:14.697561383+00:00 stderr F time="2026-01-20T10:56:14Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:14.697561383+00:00 stderr F time="2026-01-20T10:56:14Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:14.697561383+00:00 stderr F time="2026-01-20T10:56:14Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:14.697612044+00:00 stderr F time="2026-01-20T10:56:14Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:17.717860866+00:00 stderr F time="2026-01-20T10:56:17Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:17.717860866+00:00 stderr F time="2026-01-20T10:56:17Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:17.717860866+00:00 stderr F time="2026-01-20T10:56:17Z" level=info 
msg="considering allowed registry gcr.io for " 2026-01-20T10:56:17.717860866+00:00 stderr F time="2026-01-20T10:56:17Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:17.717860866+00:00 stderr F time="2026-01-20T10:56:17Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:17.717860866+00:00 stderr F time="2026-01-20T10:56:17Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:17.717937988+00:00 stderr F time="2026-01-20T10:56:17Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:17.717937988+00:00 stderr F time="2026-01-20T10:56:17Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:20.735276121+00:00 stderr F time="2026-01-20T10:56:20Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:20.735276121+00:00 stderr F time="2026-01-20T10:56:20Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:20.735276121+00:00 stderr F time="2026-01-20T10:56:20Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:56:20.735276121+00:00 stderr F time="2026-01-20T10:56:20Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:20.735276121+00:00 stderr F time="2026-01-20T10:56:20Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:20.735276121+00:00 stderr F time="2026-01-20T10:56:20Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:20.735276121+00:00 stderr F time="2026-01-20T10:56:20Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:20.735276121+00:00 stderr F time="2026-01-20T10:56:20Z" 
level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:23.758694415+00:00 stderr F time="2026-01-20T10:56:23Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:23.758694415+00:00 stderr F time="2026-01-20T10:56:23Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:23.758694415+00:00 stderr F time="2026-01-20T10:56:23Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:56:23.758694415+00:00 stderr F time="2026-01-20T10:56:23Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:23.758694415+00:00 stderr F time="2026-01-20T10:56:23Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:23.758694415+00:00 stderr F time="2026-01-20T10:56:23Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:23.758694415+00:00 stderr F time="2026-01-20T10:56:23Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:23.758694415+00:00 stderr F time="2026-01-20T10:56:23Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry 38.102.83.51:5001 for " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry quay.io for " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry gcr.io for " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry registry.k8s.io for " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry registry.redhat.io for " 2026-01-20T10:56:26.786119450+00:00 stderr F 
time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry registry.connect.redhat.com for " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry registry-proxy.engineering.redhat.com for " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry images.paas.redhat.com for " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="considering allowed registry image-registry.openshift-image-registry.svc:5000 for " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="no allowed registries items will permit the use of " 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="&errors.errorString{s:\"global openshift image configuration prevents the creation of imagestreams using the registry \"}" 2026-01-20T10:56:26.786119450+00:00 stderr F time="2026-01-20T10:56:26Z" level=info msg="CRDUPDATE bad spec validation update" 2026-01-20T10:56:29.792385063+00:00 stderr F time="2026-01-20T10:56:29Z" level=info msg="CRDERROR bad spec validation update" 2026-01-20T10:56:29.792385063+00:00 stderr F time="2026-01-20T10:56:29Z" level=error msg="unable to sync: Operation cannot be fulfilled on configs.samples.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again, requeuing" 2026-01-20T10:56:29.807365736+00:00 stderr F time="2026-01-20T10:56:29Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:29.807365736+00:00 stderr F time="2026-01-20T10:56:29Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:29.807365736+00:00 stderr F time="2026-01-20T10:56:29Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:29.807365736+00:00 stderr F 
time="2026-01-20T10:56:29Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:29.807365736+00:00 stderr F time="2026-01-20T10:56:29Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:29.807365736+00:00 stderr F time="2026-01-20T10:56:29Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:29.807445598+00:00 stderr F time="2026-01-20T10:56:29Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:29.807445598+00:00 stderr F time="2026-01-20T10:56:29Z" level=info msg="CRDUPDATE spec corrected" 2026-01-20T10:56:32.851090429+00:00 stderr F time="2026-01-20T10:56:32Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:32.851090429+00:00 stderr F time="2026-01-20T10:56:32Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:32.851090429+00:00 stderr F time="2026-01-20T10:56:32Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:32.851090429+00:00 stderr F time="2026-01-20T10:56:32Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:32.851090429+00:00 stderr F time="2026-01-20T10:56:32Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:32.851166831+00:00 stderr F time="2026-01-20T10:56:32Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:32.851166831+00:00 stderr F time="2026-01-20T10:56:32Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:32.851166831+00:00 stderr F time="2026-01-20T10:56:32Z" 
level=info msg="SamplesRegistry changed from to registry.redhat.io" 2026-01-20T10:56:32.851264103+00:00 stderr F time="2026-01-20T10:56:32Z" level=info msg="ENTERING UPSERT / STEADY STATE PATH ExistTrue true ImageInProgressFalse true VersionOK true ConfigChanged true ManagementStateChanged true" 2026-01-20T10:56:33.143485785+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream fuse7-eap-openshift" 2026-01-20T10:56:33.143765653+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2026-01-20T10:56:33.161439719+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2026-01-20T10:56:33.161439719+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream fuse7-karaf-openshift-jdk11" 2026-01-20T10:56:33.178384495+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream golang" 2026-01-20T10:56:33.179751761+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2026-01-20T10:56:33.200281343+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream redhat-openjdk18-openshift" 2026-01-20T10:56:33.202380650+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2026-01-20T10:56:33.222643225+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2026-01-20T10:56:33.223313954+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream php" 2026-01-20T10:56:33.285025204+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 
2026-01-20T10:56:33.285224319+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream java-runtime"
2026-01-20T10:56:33.340581789+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream ubi8-openjdk-21"
2026-01-20T10:56:33.340952549+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2026-01-20T10:56:33.410249342+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream ubi8-openjdk-8-runtime"
2026-01-20T10:56:33.412101652+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime"
2026-01-20T10:56:33.462790836+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream dotnet-runtime"
2026-01-20T10:56:33.463691580+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime"
2026-01-20T10:56:33.522151933+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream jboss-eap74-openjdk8-openshift"
2026-01-20T10:56:33.522329168+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift"
2026-01-20T10:56:33.579544288+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd"
2026-01-20T10:56:33.580128653+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream httpd"
2026-01-20T10:56:33.639757647+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream java"
2026-01-20T10:56:33.641399522+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream java"
2026-01-20T10:56:33.710447590+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream ubi8-openjdk-11-runtime"
2026-01-20T10:56:33.712302329+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime"
2026-01-20T10:56:33.770076964+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17"
2026-01-20T10:56:33.770326031+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream ubi8-openjdk-17"
2026-01-20T10:56:33.803536335+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags"
2026-01-20T10:56:33.803536335+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2026-01-20T10:56:33.830175061+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream jboss-eap74-openjdk11-openshift"
2026-01-20T10:56:33.834238151+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift"
2026-01-20T10:56:33.937050256+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base"
2026-01-20T10:56:33.941480516+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream jenkins-agent-base"
2026-01-20T10:56:33.968202735+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8"
2026-01-20T10:56:33.968202735+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="updated imagestream postgresql13-for-sso76-openshift-rhel8"
2026-01-20T10:56:34.036039710+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream sso76-openshift-rhel8"
2026-01-20T10:56:34.036039710+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8"
2026-01-20T10:56:34.087114704+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2026-01-20T10:56:34.090855894+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2026-01-20T10:56:34.142857464+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream jboss-eap74-openjdk11-runtime-openshift"
2026-01-20T10:56:34.143485921+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift"
2026-01-20T10:56:34.201590874+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift"
2026-01-20T10:56:34.202625202+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream fuse7-java11-openshift"
2026-01-20T10:56:34.298837981+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream ubi8-openjdk-17-runtime"
2026-01-20T10:56:34.301241625+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime"
2026-01-20T10:56:34.338293333+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream python"
2026-01-20T10:56:34.341544520+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream python"
2026-01-20T10:56:34.384181007+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7"
2026-01-20T10:56:34.384711441+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream openjdk-11-rhel7"
2026-01-20T10:56:34.446714489+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins"
2026-01-20T10:56:34.448017065+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream jenkins"
2026-01-20T10:56:34.504600277+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream jboss-datagrid73-openshift"
2026-01-20T10:56:34.504600277+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift"
2026-01-20T10:56:34.569887713+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11"
2026-01-20T10:56:34.570413408+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream fuse7-eap-openshift-java11"
2026-01-20T10:56:34.592025749+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags"
2026-01-20T10:56:34.592025749+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime"
2026-01-20T10:56:34.639950198+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8"
2026-01-20T10:56:34.640502074+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream ubi8-openjdk-8"
2026-01-20T10:56:34.699856361+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8"
2026-01-20T10:56:34.700063216+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8"
2026-01-20T10:56:34.783318076+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream fuse7-karaf-openshift"
2026-01-20T10:56:34.783657395+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift"
2026-01-20T10:56:34.817795434+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb"
2026-01-20T10:56:34.818376009+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream mariadb"
2026-01-20T10:56:34.877789407+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8"
2026-01-20T10:56:34.879313899+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream sso75-openshift-rhel8"
2026-01-20T10:56:34.893842400+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags"
2026-01-20T10:56:34.893842400+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime"
2026-01-20T10:56:34.963882884+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift"
2026-01-20T10:56:34.964191612+00:00 stderr F time="2026-01-20T10:56:34Z" level=info msg="updated imagestream jboss-eap-xp4-openjdk11-openshift"
2026-01-20T10:56:35.018806022+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2026-01-20T10:56:35.019307455+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2026-01-20T10:56:35.081507879+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream fuse7-java-openshift"
2026-01-20T10:56:35.081975801+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift"
2026-01-20T10:56:35.138789720+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8"
2026-01-20T10:56:35.139119129+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream postgresql13-for-sso75-openshift-rhel8"
2026-01-20T10:56:35.199385380+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime"
2026-01-20T10:56:35.200232153+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream ubi8-openjdk-21-runtime"
2026-01-20T10:56:35.262745005+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11"
2026-01-20T10:56:35.266177328+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream ubi8-openjdk-11"
2026-01-20T10:56:35.323258673+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift"
2026-01-20T10:56:35.323258673+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream jboss-eap74-openjdk8-runtime-openshift"
2026-01-20T10:56:35.380250477+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream nginx"
2026-01-20T10:56:35.382649091+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx"
2026-01-20T10:56:35.442882312+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs"
2026-01-20T10:56:35.443468497+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream nodejs"
2026-01-20T10:56:35.498742235+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream perl"
2026-01-20T10:56:35.499181737+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream perl"
2026-01-20T10:56:35.560711152+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream redis"
2026-01-20T10:56:35.564953857+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream redis"
2026-01-20T10:56:35.621288512+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags"
2026-01-20T10:56:35.621288512+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime"
2026-01-20T10:56:35.626592985+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream postgresql"
2026-01-20T10:56:35.631565118+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql"
2026-01-20T10:56:35.781859672+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby"
2026-01-20T10:56:35.781859672+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream ruby"
2026-01-20T10:56:35.795963462+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream dotnet"
2026-01-20T10:56:35.796085795+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet"
2026-01-20T10:56:35.820617485+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream jboss-eap-xp3-openjdk11-openshift"
2026-01-20T10:56:35.820770319+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift"
2026-01-20T10:56:35.877013582+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2026-01-20T10:56:35.877280219+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2026-01-20T10:56:35.937864620+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql"
2026-01-20T10:56:35.938256800+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="updated imagestream mysql"
2026-01-20T10:56:35.938256800+00:00 stderr F time="2026-01-20T10:56:35Z" level=info msg="CRDUPDATE samples upserted; set clusteroperator ready, steady state"
2026-01-20T10:56:36.995949688+00:00 stderr F time="2026-01-20T10:56:36Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags"
2026-01-20T10:56:36.995949688+00:00 stderr F time="2026-01-20T10:56:36Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet"
2026-01-20T10:56:37.606421133+00:00 stderr F time="2026-01-20T10:56:37Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags"
2026-01-20T10:56:37.606421133+00:00 stderr F time="2026-01-20T10:56:37Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime"
2026-01-20T10:56:37.610801771+00:00 stderr F time="2026-01-20T10:56:37Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags"
2026-01-20T10:56:37.610801771+00:00 stderr F time="2026-01-20T10:56:37Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17"
2026-01-20T10:56:37.708114069+00:00 stderr F time="2026-01-20T10:56:37Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags"
2026-01-20T10:56:37.708114069+00:00 stderr F time="2026-01-20T10:56:37Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7"
2026-01-20T10:56:37.741516627+00:00 stderr F time="2026-01-20T10:56:37Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags"
2026-01-20T10:56:37.741516627+00:00 stderr F time="2026-01-20T10:56:37Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime"
2026-01-20T10:56:38.037575233+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags"
2026-01-20T10:56:38.037647735+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime"
2026-01-20T10:56:38.966873827+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:38.966873827+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:38.966873827+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:38.966873827+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:38.966873827+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:38.966873827+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:38.966929808+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:38.966929808+00:00 stderr F time="2026-01-20T10:56:38Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:39.039113190+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:39.039113190+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:39.039113190+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:39.039113190+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:39.039113190+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:39.039113190+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:39.039113190+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:39.039113190+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:39.425326392+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags"
2026-01-20T10:56:39.425326392+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift"
2026-01-20T10:56:39.683642472+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags"
2026-01-20T10:56:39.683642472+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8"
2026-01-20T10:56:39.705181630+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags"
2026-01-20T10:56:39.705181630+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11"
2026-01-20T10:56:39.952430064+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:39.952430064+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:39.952430064+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:39.952430064+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:39.952430064+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:39.952430064+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:39.952430064+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:39.952430064+00:00 stderr F time="2026-01-20T10:56:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:40.019629372+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:40.019629372+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:40.019629372+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:40.019629372+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:40.019629372+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:40.019629372+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:40.019629372+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:40.019701093+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:40.944408333+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:40.944408333+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:40.944408333+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:40.944408333+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:40.944408333+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:40.944408333+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:40.944408333+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:40.944408333+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:40.991817209+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:40.991913691+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:40.991950382+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:40.991989573+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:40.992025604+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:40.992092136+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:40.992133277+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:40.992180938+00:00 stderr F time="2026-01-20T10:56:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:41.937442331+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:41.937504202+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:41.937533723+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:41.937565114+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:41.937604205+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:41.937637766+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:41.937668807+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:41.937724808+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:41.996903931+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:41.996903931+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:41.996903931+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:41.996903931+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:41.996903931+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:41.996903931+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:41.996903931+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:41.996938722+00:00 stderr F time="2026-01-20T10:56:41Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:42.962839929+00:00 stderr F time="2026-01-20T10:56:42Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:42.962839929+00:00 stderr F time="2026-01-20T10:56:42Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:42.962839929+00:00 stderr F time="2026-01-20T10:56:42Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:42.962839929+00:00 stderr F time="2026-01-20T10:56:42Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:42.962839929+00:00 stderr F time="2026-01-20T10:56:42Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:42.962839929+00:00 stderr F time="2026-01-20T10:56:42Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:42.962839929+00:00 stderr F time="2026-01-20T10:56:42Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:42.962839929+00:00 stderr F time="2026-01-20T10:56:42Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:43.026872662+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:43.026872662+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:43.026872662+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:43.026872662+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:43.026872662+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:43.026933244+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:43.026933244+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:43.026962595+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:43.347948461+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:43.347948461+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:43.347948461+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:43.347948461+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:43.347948461+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:43.347948461+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:43.347948461+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:43.347948461+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:43.437795389+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:43.437882581+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:43.437938683+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:43.437974593+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:43.438005974+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:43.438040355+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:43.438096367+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:43.438167529+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:43.499258262+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:43.499258262+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:43.499258262+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:43.499258262+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:43.499258262+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:43.499258262+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:43.499258262+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:43.499258262+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:43.942378575+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:43.942378575+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:43.942378575+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:43.942378575+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:43.942378575+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:43.942378575+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:43.942378575+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:43.942453917+00:00 stderr F time="2026-01-20T10:56:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:44.003721705+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:44.003820528+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:44.003858429+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:44.003895470+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:44.003932731+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:44.003972082+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:44.004009083+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:44.004146597+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:44.292689150+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io"
2026-01-20T10:56:44.292746261+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry quay.io for registry.redhat.io"
2026-01-20T10:56:44.292772392+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io"
2026-01-20T10:56:44.292797233+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io"
2026-01-20T10:56:44.292820413+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io"
2026-01-20T10:56:44.292846284+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io"
2026-01-20T10:56:44.292877895+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2026-01-20T10:56:44.292925926+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2026-01-20T10:56:44.702640209+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:44.702640209+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:44.702640209+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:44.702640209+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:44.702640209+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:44.702640209+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:44.702640209+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:44.702776263+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:44.936643886+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:44.936643886+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:44.936643886+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:44.936643886+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:44.936643886+00:00 stderr F 
time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:44.936643886+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:44.936643886+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:44.937380285+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:44.989828167+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:44.989909809+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:44.989934310+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:44.989958700+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:44.989982071+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:44.990008252+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:44.990086644+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:44.990144615+00:00 stderr F time="2026-01-20T10:56:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and 
version correct" 2026-01-20T10:56:45.497111625+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:45.497111625+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:45.497111625+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:45.497111625+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:45.497111625+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:45.497111625+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:45.497111625+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:45.497171977+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:45.893289534+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:45.893289534+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:45.893289534+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:45.893289534+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 
2026-01-20T10:56:45.893289534+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:45.893289534+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:45.893289534+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:45.893289534+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:45.946041914+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:45.946041914+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:45.946041914+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:45.946041914+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:45.946041914+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:45.946041914+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:45.946041914+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:45.946106256+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="At steady state: config the same 
and exists is true, in progress false, and version correct" 2026-01-20T10:56:45.991389844+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:45.991389844+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:45.991389844+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:45.991389844+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:45.991389844+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:45.991389844+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:45.991389844+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:45.991426715+00:00 stderr F time="2026-01-20T10:56:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:46.943315676+00:00 stderr F time="2026-01-20T10:56:46Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:46.943315676+00:00 stderr F time="2026-01-20T10:56:46Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:46.943315676+00:00 stderr F time="2026-01-20T10:56:46Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:46.943315676+00:00 stderr F time="2026-01-20T10:56:46Z" level=info msg="considering allowed registry registry.k8s.io for 
registry.redhat.io" 2026-01-20T10:56:46.943315676+00:00 stderr F time="2026-01-20T10:56:46Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:46.943315676+00:00 stderr F time="2026-01-20T10:56:46Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:46.943315676+00:00 stderr F time="2026-01-20T10:56:46Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:46.943315676+00:00 stderr F time="2026-01-20T10:56:46Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:47.006659021+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:47.006659021+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:47.006659021+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:47.006659021+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:47.006659021+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:47.006659021+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:47.006659021+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:47.006659021+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="At steady 
state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:47.946828455+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:47.946828455+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:47.946828455+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:47.946888387+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:47.946888387+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:47.946888387+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:47.946888387+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:47.946924318+00:00 stderr F time="2026-01-20T10:56:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:48.005130524+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:48.005130524+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:48.005130524+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:48.005130524+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry 
registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:48.005130524+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:48.005130524+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:48.005130524+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:48.005130524+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:48.935563168+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:48.935563168+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:48.935563168+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:48.935563168+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:48.935563168+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:48.935563168+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:48.935563168+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:48.935563168+00:00 stderr F time="2026-01-20T10:56:48Z" level=info 
msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:48.994313509+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:48.994313509+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:48.994313509+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:48.994313509+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:48.994313509+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:48.994374361+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:48.994374361+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:48.994374361+00:00 stderr F time="2026-01-20T10:56:48Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:49.972003424+00:00 stderr F time="2026-01-20T10:56:49Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:49.972003424+00:00 stderr F time="2026-01-20T10:56:49Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:49.972003424+00:00 stderr F time="2026-01-20T10:56:49Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:49.972003424+00:00 stderr F time="2026-01-20T10:56:49Z" level=info msg="considering 
allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:49.972003424+00:00 stderr F time="2026-01-20T10:56:49Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:49.972073236+00:00 stderr F time="2026-01-20T10:56:49Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:49.972073236+00:00 stderr F time="2026-01-20T10:56:49Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:49.972116037+00:00 stderr F time="2026-01-20T10:56:49Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:50.044646499+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:50.044646499+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:50.044646499+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:50.044646499+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:50.044646499+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:50.044646499+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:50.044646499+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:50.044646499+00:00 stderr F 
time="2026-01-20T10:56:50Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:50.950827771+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:50.950827771+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:50.950827771+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:50.950827771+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:50.950827771+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:50.950827771+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:50.950827771+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:50.950874762+00:00 stderr F time="2026-01-20T10:56:50Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:51.016548948+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:51.016548948+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:51.016548948+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:51.016548948+00:00 stderr F 
time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:51.016548948+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:51.016548948+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:51.016548948+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:51.016595770+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:51.942920753+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:51.942920753+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:51.942920753+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:51.942920753+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:51.942920753+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:51.942920753+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:51.942920753+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 
2026-01-20T10:56:51.942920753+00:00 stderr F time="2026-01-20T10:56:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:52.006552796+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:52.006552796+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:52.006552796+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:52.006552796+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:52.006552796+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:52.006552796+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:52.006552796+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:52.006630528+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:52.937881093+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:52.937881093+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:52.937881093+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 
2026-01-20T10:56:52.937881093+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:52.937881093+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:52.937881093+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:52.937881093+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:52.937926244+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:52.982008940+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:52.982008940+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:52.982008940+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:52.982008940+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:52.982008940+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:52.982008940+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:52.982008940+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="no global imagestream configuration will block imagestream 
creation using registry.redhat.io" 2026-01-20T10:56:52.982045851+00:00 stderr F time="2026-01-20T10:56:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:53.941602737+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:53.941602737+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:53.941602737+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:53.941602737+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:53.941602737+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:53.941602737+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:53.941602737+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:53.941692879+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:53.987596304+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:53.987596304+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:53.987596304+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry gcr.io for 
registry.redhat.io" 2026-01-20T10:56:53.987596304+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:53.987596304+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:53.987596304+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:53.987671335+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:53.987671335+00:00 stderr F time="2026-01-20T10:56:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:54.957887463+00:00 stderr F time="2026-01-20T10:56:54Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:54.958793767+00:00 stderr F time="2026-01-20T10:56:54Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:54.958793767+00:00 stderr F time="2026-01-20T10:56:54Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:54.958793767+00:00 stderr F time="2026-01-20T10:56:54Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:54.958793767+00:00 stderr F time="2026-01-20T10:56:54Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:54.958793767+00:00 stderr F time="2026-01-20T10:56:54Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:54.958813047+00:00 stderr F time="2026-01-20T10:56:54Z" level=info msg="no global imagestream configuration will 
block imagestream creation using registry.redhat.io" 2026-01-20T10:56:54.958857498+00:00 stderr F time="2026-01-20T10:56:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:55.011153152+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:55.011239904+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:55.011265545+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:55.011290585+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:55.011315916+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:55.011341677+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:55.011367017+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:55.011409108+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:55.938221788+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:55.938221788+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:55.938221788+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry 
gcr.io for registry.redhat.io" 2026-01-20T10:56:55.938221788+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:55.938221788+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:55.938221788+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:55.938261990+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:55.938261990+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:55.987836811+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:55.987836811+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:55.987836811+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:55.987836811+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:55.987836811+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:55.987836811+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:55.987836811+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="no global imagestream 
configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:55.987924633+00:00 stderr F time="2026-01-20T10:56:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:56.947193131+00:00 stderr F time="2026-01-20T10:56:56Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:56.947193131+00:00 stderr F time="2026-01-20T10:56:56Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:56.947193131+00:00 stderr F time="2026-01-20T10:56:56Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:56.947193131+00:00 stderr F time="2026-01-20T10:56:56Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:56.947193131+00:00 stderr F time="2026-01-20T10:56:56Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:56.947193131+00:00 stderr F time="2026-01-20T10:56:56Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:56.947193131+00:00 stderr F time="2026-01-20T10:56:56Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:56.947193131+00:00 stderr F time="2026-01-20T10:56:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:57.001573920+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:57.001573920+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:57.001573920+00:00 stderr F time="2026-01-20T10:56:57Z" level=info 
msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:57.001573920+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:57.001573920+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:57.001573920+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:57.001573920+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:57.001573920+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:57.953235426+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:57.953235426+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:57.953235426+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:57.953235426+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:57.953235426+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:57.953235426+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:57.953235426+00:00 stderr F time="2026-01-20T10:56:57Z" level=info 
msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:57.953326038+00:00 stderr F time="2026-01-20T10:56:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:58.013197632+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:58.013197632+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:58.013197632+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:58.013197632+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:58.013197632+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:58.013197632+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:58.013197632+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:58.013253074+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:58.864547957+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:58.864547957+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:58.864547957+00:00 stderr F 
time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:58.864584998+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:58.864584998+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:58.864584998+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:58.864584998+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:58.864611789+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:58.930701426+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:58.930701426+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:58.930701426+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:58.930701426+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:58.930701426+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:58.930701426+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:58.930701426+00:00 stderr F 
time="2026-01-20T10:56:58Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:58.930701426+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:58.975402878+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:58.975402878+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:58.975402878+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:58.975402878+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:58.975402878+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:58.975402878+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:58.975402878+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:58.975448769+00:00 stderr F time="2026-01-20T10:56:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:59.945051920+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:59.945051920+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 
2026-01-20T10:56:59.945051920+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:59.945051920+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:59.945051920+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:59.945051920+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:56:59.945051920+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:59.945113151+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:56:59.992522675+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:56:59.992522675+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:56:59.992522675+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:56:59.992522675+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:56:59.992522675+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:56:59.992522675+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 
2026-01-20T10:56:59.992522675+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:56:59.992522675+00:00 stderr F time="2026-01-20T10:56:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:00.948851706+00:00 stderr F time="2026-01-20T10:57:00Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:00.948932748+00:00 stderr F time="2026-01-20T10:57:00Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:00.948958099+00:00 stderr F time="2026-01-20T10:57:00Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:00.948983110+00:00 stderr F time="2026-01-20T10:57:00Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:00.949014311+00:00 stderr F time="2026-01-20T10:57:00Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:00.949041421+00:00 stderr F time="2026-01-20T10:57:00Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:00.949082992+00:00 stderr F time="2026-01-20T10:57:00Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:00.949136724+00:00 stderr F time="2026-01-20T10:57:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:01.003777358+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:01.003835070+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry quay.io for 
registry.redhat.io" 2026-01-20T10:57:01.003860320+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:01.003885771+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:01.003909212+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:01.003934902+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:01.003957913+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:01.003995234+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:01.946529710+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:01.946529710+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:01.946529710+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:01.946529710+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:01.946529710+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:01.946529710+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param 
registry.redhat.io" 2026-01-20T10:57:01.946574852+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:01.946574852+00:00 stderr F time="2026-01-20T10:57:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:02.002962473+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:02.002962473+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:02.002962473+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:02.002962473+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:02.002962473+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:02.002962473+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:02.003013224+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:02.003013224+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:02.943386552+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:02.943386552+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry 
quay.io for registry.redhat.io" 2026-01-20T10:57:02.943386552+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:02.943386552+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:02.943386552+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:02.943386552+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:02.943437783+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:02.943437783+00:00 stderr F time="2026-01-20T10:57:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:03.001039516+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:03.001134169+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:03.001160319+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:03.001200990+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:03.001224561+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:03.001249482+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for 
search param registry.redhat.io" 2026-01-20T10:57:03.001277492+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:03.001315213+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:03.965975454+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:03.966132398+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:03.966208960+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:03.966277642+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:03.966341064+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:03.966423776+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:03.966492108+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:03.966595130+00:00 stderr F time="2026-01-20T10:57:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:04.042839706+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:04.042924799+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering 
allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:04.042968930+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:04.043016501+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:04.043095673+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:04.043150355+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:04.043194346+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:04.043255197+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:04.947094789+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:04.947094789+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:04.947094789+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:04.947094789+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:04.947094789+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:04.947165051+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="the allowed registry registry.redhat.io allows 
imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:04.947165051+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:04.947165051+00:00 stderr F time="2026-01-20T10:57:04Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:05.007982500+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:05.007982500+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:05.007982500+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:05.007982500+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:05.007982500+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:05.007982500+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:05.007982500+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:05.007982500+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:05.948004069+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:05.948004069+00:00 stderr F time="2026-01-20T10:57:05Z" 
level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:05.948004069+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:05.948004069+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:05.948004069+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:05.948004069+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:05.948004069+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:05.948157063+00:00 stderr F time="2026-01-20T10:57:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:06.001738830+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:06.001884243+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:06.001974857+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:06.002142551+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:06.002252204+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:06.002342646+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="the allowed registry 
registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:06.002514251+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:06.002650084+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:06.939250362+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:06.939250362+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:06.939250362+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:06.939250362+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:06.939250362+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:06.939250362+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:06.939250362+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:06.939308154+00:00 stderr F time="2026-01-20T10:57:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2026-01-20T10:57:07.017638315+00:00 stderr F time="2026-01-20T10:57:07Z" level=info msg="considering allowed registry 38.102.83.51:5001 for registry.redhat.io" 2026-01-20T10:57:07.017638315+00:00 stderr F 
time="2026-01-20T10:57:07Z" level=info msg="considering allowed registry quay.io for registry.redhat.io" 2026-01-20T10:57:07.017638315+00:00 stderr F time="2026-01-20T10:57:07Z" level=info msg="considering allowed registry gcr.io for registry.redhat.io" 2026-01-20T10:57:07.017638315+00:00 stderr F time="2026-01-20T10:57:07Z" level=info msg="considering allowed registry registry.k8s.io for registry.redhat.io" 2026-01-20T10:57:07.017638315+00:00 stderr F time="2026-01-20T10:57:07Z" level=info msg="considering allowed registry registry.redhat.io for registry.redhat.io" 2026-01-20T10:57:07.017708537+00:00 stderr F time="2026-01-20T10:57:07Z" level=info msg="the allowed registry registry.redhat.io allows imagestream creation for search param registry.redhat.io" 2026-01-20T10:57:07.017708537+00:00 stderr F time="2026-01-20T10:57:07Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2026-01-20T10:57:07.017708537+00:00 stderr F time="2026-01-20T10:57:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.log
2025-08-13T19:59:23.745759522+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-08-13T19:59:23.745759522+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="Go OS/Arch: linux/amd64" 2025-08-13T19:59:25.650871697+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="template client
&v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc0005c8f00)}" 2025-08-13T19:59:25.650871697+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc0005c8fa0)}" 2025-08-13T19:59:31.855068557+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="waiting for informer caches to sync" 2025-08-13T19:59:34.408955148+00:00 stderr F W0813 19:59:34.375704 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:34.408955148+00:00 stderr F E0813 19:59:34.378903 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:34.408955148+00:00 stderr F W0813 19:59:32.402524 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:34.408955148+00:00 stderr F E0813 19:59:34.378961 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:35.874613566+00:00 stderr F W0813 19:59:35.873930 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:35.874724959+00:00 stderr F E0813 19:59:35.874709 7 reflector.go:147] 
github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:35.874972206+00:00 stderr F W0813 19:59:35.874946 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:35.875028847+00:00 stderr F E0813 19:59:35.875010 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:38.636207756+00:00 stderr F W0813 19:59:38.635498 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:38.636318039+00:00 stderr F E0813 19:59:38.636299 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:38.659493230+00:00 stderr F W0813 19:59:38.659430 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:38.659730837+00:00 stderr F E0813 19:59:38.659714 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is 
currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:44.388028852+00:00 stderr F W0813 19:59:44.386596 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:44.388028852+00:00 stderr F W0813 19:59:44.387299 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:44.388028852+00:00 stderr F E0813 19:59:44.387307 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:44.388028852+00:00 stderr F E0813 19:59:44.387323 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:56.216972453+00:00 stderr F W0813 19:59:56.214886 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:56.216972453+00:00 stderr F E0813 19:59:56.215543 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:56.216972453+00:00 stderr F W0813 19:59:56.215600 7 reflector.go:539] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:56.216972453+00:00 stderr F E0813 19:59:56.215622 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:20.485015857+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="started events processor" 2025-08-13T20:00:20.695061156+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:20.695061156+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:00:20.975764440+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:00:20.975764440+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:00:21.317402651+00:00 stderr F time="2025-08-13T20:00:21Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:21.317402651+00:00 stderr F time="2025-08-13T20:00:21Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:00:22.029163475+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:22.029163475+00:00 stderr F 
time="2025-08-13T20:00:22Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:00:22.088482377+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:22.121173649+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:22.363082328+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:22.363082328+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:00:23.156969546+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:23.156969546+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:00:23.197957675+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:00:23.197957675+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:00:23.444171106+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:23.514211193+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:00:23.514211193+00:00 stderr F 
time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:00:23.514211193+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:23.795935686+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:00:23.795935686+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:00:24.208376076+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:00:24.208454759+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:00:24.576855803+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:00:24.576855803+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:00:24.752157662+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:24.752157662+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:00:24.953569996+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:24.953569996+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are 
no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:00:25.068954766+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.076592703+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:00:25.222950897+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.222950897+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:00:25.293323493+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.293323493+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:00:25.355468586+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream 
jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.355468586+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:00:25.373568142+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:26.063921706+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:26.092546152+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:00:26.092546152+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:00:26.098371999+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:26.098420160+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:00:26.104509704+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:26.104509704+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:00:26.108077015+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="At steady state: config the same and exists is true, in 
progress false, and version correct" 2025-08-13T20:00:26.116665780+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:00:26.116665780+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:00:26.132354727+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:26.132354727+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:00:26.137939257+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:00:26.140329065+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:00:26.989988312+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:00:26.990079965+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:00:27.020001668+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:00:27.020001668+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:00:27.047936015+00:00 stderr F time="2025-08-13T20:00:27Z" level=warning msg="Image 
import for imagestream jenkins-agent-base tag scheduled-upgrade generation 3 failed with detailed message Internal error occurred: registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:v4.13.0: Get \"https://registry.redhat.io/v2/ocp-tools-4/jenkins-agent-base-rhel8/manifests/v4.13.0\": unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" 2025-08-13T20:00:27.227916267+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:27.457083101+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="initiated an imagestreamimport retry for imagestream/tag jenkins-agent-base/scheduled-upgrade" 2025-08-13T20:00:27.488319112+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:00:27.488528778+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:00:27.517631908+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:00:27.535678842+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:27.535678842+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:00:27.567528931+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:00:27.567528931+00:00 stderr F time="2025-08-13T20:00:27Z" level=info 
msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:00:27.611589907+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:00:27.611589907+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:00:27.912992651+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:00:27.916887692+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:00:27.931633623+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:00:27.931633623+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:00:27.938067556+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:00:27.938067556+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:00:28.025671784+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:00:28.026030494+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:00:28.038732106+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so 
no worries on clearing tags" 2025-08-13T20:00:28.038896461+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:00:28.046319203+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:00:28.046395345+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:00:28.086291793+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:28.086672094+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:28.086710605+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:00:28.135997070+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:00:28.135997070+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:00:28.137027379+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:28.183570867+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.183570867+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 
2025-08-13T20:00:28.429676585+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.429676585+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:00:28.449937642+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.449937642+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:00:28.499246578+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:28.499246578+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:00:28.513314029+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.513314029+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-08-13T20:00:28.523039677+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.523039677+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:00:28.541055630+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no 
worries on clearing tags" 2025-08-13T20:00:28.541055630+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:00:28.549866492+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:28.549866492+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:00:28.560148035+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:28.560148035+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:00:28.566894357+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:28.566894357+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:00:28.570909852+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:28.583921022+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:29.732106741+00:00 stderr F time="2025-08-13T20:00:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:29.732106741+00:00 stderr F time="2025-08-13T20:00:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, 
and version correct" 2025-08-13T20:00:30.031907490+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:30.031907490+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:30.300890139+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:30.300890139+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:30.833731993+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:30.833894197+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:31.044594205+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:31.044757080+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:31.258115964+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:31.258263938+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:31.592207790+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:31.592207790+00:00 stderr F time="2025-08-13T20:00:31Z" 
level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:33.490730614+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:33.491354072+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:33.777940123+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="shutting down events processor"

[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.log]

2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.779880 1 flags.go:64] FLAG: 
--add-dir-header="false" 2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.779986 1 flags.go:64] FLAG: --allow-paths="[]" 2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.779998 1 flags.go:64] FLAG: --alsologtostderr="false" 2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.780003 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.780007 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.780013 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.780016 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.780019 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2026-01-20T10:47:24.780032969+00:00 stderr F I0120 10:47:24.780025 1 flags.go:64] FLAG: --client-ca-file="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780028 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780031 1 flags.go:64] FLAG: --help="false" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780037 1 flags.go:64] FLAG: --http2-disable="false" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780040 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780044 1 flags.go:64] FLAG: --http2-max-size="262144" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780048 1 flags.go:64] FLAG: --ignore-paths="[]" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780051 1 flags.go:64] FLAG: --insecure-listen-address="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780054 1 flags.go:64] FLAG: --kube-api-burst="0" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780075 1 
flags.go:64] FLAG: --kube-api-qps="0" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780080 1 flags.go:64] FLAG: --kubeconfig="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780083 1 flags.go:64] FLAG: --log-backtrace-at="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780086 1 flags.go:64] FLAG: --log-dir="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780090 1 flags.go:64] FLAG: --log-file="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780094 1 flags.go:64] FLAG: --log-file-max-size="0" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780097 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780102 1 flags.go:64] FLAG: --logtostderr="true" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780105 1 flags.go:64] FLAG: --oidc-ca-file="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780108 1 flags.go:64] FLAG: --oidc-clientID="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780111 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780114 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780116 1 flags.go:64] FLAG: --oidc-issuer="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780119 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780130 1 flags.go:64] FLAG: --oidc-username-claim="email" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780134 1 flags.go:64] FLAG: --one-output="false" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780137 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780140 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:9192" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780144 1 
flags.go:64] FLAG: --skip-headers="false" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780149 1 flags.go:64] FLAG: --skip-log-headers="false" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780153 1 flags.go:64] FLAG: --stderrthreshold="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780156 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780159 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780167 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780173 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780177 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780182 1 flags.go:64] FLAG: --upstream="http://127.0.0.1:9191/" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780185 1 flags.go:64] FLAG: --upstream-ca-file="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780188 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780191 1 flags.go:64] FLAG: --upstream-client-key-file="" 2026-01-20T10:47:24.780198034+00:00 stderr F I0120 10:47:24.780194 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2026-01-20T10:47:24.780228304+00:00 stderr F I0120 10:47:24.780197 1 flags.go:64] FLAG: --v="3" 2026-01-20T10:47:24.780228304+00:00 stderr F I0120 10:47:24.780200 1 flags.go:64] FLAG: --version="false" 2026-01-20T10:47:24.780228304+00:00 stderr F I0120 10:47:24.780205 1 flags.go:64] FLAG: 
--vmodule="" 2026-01-20T10:47:24.780228304+00:00 stderr F W0120 10:47:24.780213 1 deprecated.go:66] 2026-01-20T10:47:24.780228304+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:47:24.780228304+00:00 stderr F 2026-01-20T10:47:24.780228304+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2026-01-20T10:47:24.780228304+00:00 stderr F 2026-01-20T10:47:24.780228304+00:00 stderr F =============================================== 2026-01-20T10:47:24.780228304+00:00 stderr F 2026-01-20T10:47:24.780239515+00:00 stderr F I0120 10:47:24.780226 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2026-01-20T10:47:24.782399273+00:00 stderr F I0120 10:47:24.781529 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:47:24.782399273+00:00 stderr F I0120 10:47:24.781576 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:47:24.783802971+00:00 stderr F I0120 10:47:24.783688 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9192 2026-01-20T10:47:24.784010286+00:00 stderr F I0120 10:47:24.783982 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9192

[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.log]

2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312609 1 flags.go:64] FLAG: --add-dir-header="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312952 1 flags.go:64] FLAG: --allow-paths="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312962 1 flags.go:64] FLAG: --alsologtostderr="false" 2025-08-13T19:50:43.315215609+00:00 
stderr F I0813 19:50:43.312973 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312977 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312983 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312990 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312994 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312999 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313003 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313007 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313011 1 flags.go:64] FLAG: --http2-disable="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313015 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313020 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313024 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313030 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313037 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313042 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313054 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313621 1 flags.go:64] FLAG: --log-backtrace-at="" 
2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313717 1 flags.go:64] FLAG: --log-dir="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313723 1 flags.go:64] FLAG: --log-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313728 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313735 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313753 1 flags.go:64] FLAG: --logtostderr="true" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313759 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313765 1 flags.go:64] FLAG: --oidc-clientID="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313770 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313861 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313867 1 flags.go:64] FLAG: --oidc-issuer="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313871 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313879 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313883 1 flags.go:64] FLAG: --one-output="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313887 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313892 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:9192" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313906 1 flags.go:64] FLAG: --skip-headers="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313912 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313917 1 flags.go:64] FLAG: 
--stderrthreshold="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313921 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313925 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313936 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313941 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313945 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313951 1 flags.go:64] FLAG: --upstream="http://127.0.0.1:9191/" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313956 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313962 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313966 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313970 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313974 1 flags.go:64] FLAG: --v="3" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313979 1 flags.go:64] FLAG: --version="false" 2025-08-13T19:50:43.318225195+00:00 stderr F I0813 19:50:43.313985 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:50:43.318225195+00:00 stderr F W0813 19:50:43.316320 1 deprecated.go:66] 2025-08-13T19:50:43.318225195+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:50:43.318225195+00:00 stderr F 
2025-08-13T19:50:43.318225195+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:50:43.318225195+00:00 stderr F 2025-08-13T19:50:43.318225195+00:00 stderr F =============================================== 2025-08-13T19:50:43.318225195+00:00 stderr F 2025-08-13T19:50:43.318225195+00:00 stderr F I0813 19:50:43.316363 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:50:43.323724493+00:00 stderr F I0813 19:50:43.321089 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:50:43.323724493+00:00 stderr F I0813 19:50:43.321223 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:50:43.324886206+00:00 stderr F I0813 19:50:43.324349 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9192 2025-08-13T19:50:43.333905294+00:00 stderr F I0813 19:50:43.331692 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9192 2025-08-13T20:42:46.941041162+00:00 stderr F I0813 20:42:46.940758 1 kube-rbac-proxy.go:493] received interrupt, shutting down

[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.log]

2026-01-20T10:47:25.944287561+00:00 stderr F W0120 10:47:25.943894 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2026-01-20T10:47:25.944721483+00:00 stderr F W0120 10:47:25.944472 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2026-01-20T10:47:25.944721483+00:00 stderr F I0120 10:47:25.944651 1 main.go:150] setting up manager 2026-01-20T10:47:25.945510254+00:00 stderr F I0120 10:47:25.945366 1 main.go:168] registering components 2026-01-20T10:47:25.945510254+00:00 stderr F I0120 10:47:25.945383 1 main.go:170] setting up scheme 2026-01-20T10:47:25.945803721+00:00 stderr F I0120 10:47:25.945770 1 main.go:208] setting up controllers 2026-01-20T10:47:25.945813992+00:00 stderr F I0120 10:47:25.945803 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2026-01-20T10:47:25.945839202+00:00 stderr F I0120 10:47:25.945817 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2026-01-20T10:47:25.946193522+00:00 stderr F I0120 10:47:25.946170 1 main.go:233] starting the cmd 2026-01-20T10:47:25.947094697+00:00 stderr F I0120 10:47:25.946523 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2026-01-20T10:47:25.949829701+00:00 stderr F I0120 10:47:25.949795 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader... 
2026-01-20T10:47:25.950303033+00:00 stderr F I0120 10:47:25.950199 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2026-01-20T10:47:55.954026917+00:00 stderr F E0120 10:47:55.953861 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:50:59.120976659+00:00 stderr F I0120 10:50:59.120909 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2026-01-20T10:50:59.121334339+00:00 stderr F I0120 10:50:59.121282 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2026-01-20T10:50:59.121376711+00:00 stderr F I0120 10:50:59.121347 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2026-01-20T10:50:59.121415432+00:00 stderr F I0120 10:50:59.121385 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2026-01-20T10:50:59.121497675+00:00 stderr F I0120 10:50:59.121456 1 status.go:97] Starting cluster operator status controller 2026-01-20T10:50:59.166586224+00:00 stderr F I0120 10:50:59.166468 1 recorder.go:104] "crc_2538a32f-aa32-46fd-a48a-923069a2d441 became leader" logger="events" type="Normal" 
object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"41285"} reason="LeaderElection" 2026-01-20T10:50:59.169423485+00:00 stderr F I0120 10:50:59.169373 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2026-01-20T10:50:59.173152422+00:00 stderr F I0120 10:50:59.173049 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:50:59.307330158+00:00 stderr F I0120 10:50:59.307252 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:50:59.371467126+00:00 stderr F I0120 10:50:59.371396 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2026-01-20T10:57:57.516418672+00:00 stderr F I0120 10:57:57.516353 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2026-01-20T10:57:57.954242211+00:00 stderr F I0120 10:57:57.954158 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:57:58.001806099+00:00 stderr F I0120 10:57:58.001714 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105

[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.log]

2025-08-13T20:05:12.855041932+00:00 stderr F W0813 20:05:12.854614 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T20:05:12.856080652+00:00 stderr F W0813 20:05:12.855207 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T20:05:12.856080652+00:00 stderr F I0813 20:05:12.855407 1 main.go:150] setting up manager 2025-08-13T20:05:12.856517694+00:00 stderr F I0813 20:05:12.856386 1 main.go:168] registering components 2025-08-13T20:05:12.856517694+00:00 stderr F I0813 20:05:12.856465 1 main.go:170] setting up scheme 2025-08-13T20:05:12.857273606+00:00 stderr F I0813 20:05:12.857204 1 main.go:208] setting up controllers 2025-08-13T20:05:12.857273606+00:00 stderr F I0813 20:05:12.857261 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2025-08-13T20:05:12.857315647+00:00 stderr F I0813 20:05:12.857279 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2025-08-13T20:05:12.860378595+00:00 stderr F I0813 20:05:12.860230 1 main.go:233] starting the cmd 2025-08-13T20:05:12.860902380+00:00 stderr F I0813 20:05:12.860686 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T20:05:12.861960160+00:00 stderr F I0813 20:05:12.861896 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader... 
2025-08-13T20:05:12.863479814+00:00 stderr F I0813 20:05:12.863313 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2025-08-13T20:08:02.991035955+00:00 stderr F I0813 20:08:02.989962 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2025-08-13T20:08:02.991690854+00:00 stderr F I0813 20:08:02.990989 1 recorder.go:104] "crc_38dd04bb-211c-4052-882f-1b12e44fa6dd became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"32804"} reason="LeaderElection" 2025-08-13T20:08:03.000627140+00:00 stderr F I0813 20:08:03.000537 1 status.go:97] Starting cluster operator status controller 2025-08-13T20:08:03.018414770+00:00 stderr F I0813 20:08:03.017937 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T20:08:03.021056156+00:00 stderr F I0813 20:08:03.020547 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2025-08-13T20:08:03.021056156+00:00 stderr F I0813 20:08:03.020594 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:08:03.078039859+00:00 stderr F I0813 20:08:03.075934 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:03.078039859+00:00 stderr F I0813 20:08:03.077053 1 reflector.go:351] 
Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T20:08:03.349822692+00:00 stderr F I0813 20:08:03.349675 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:03.368563049+00:00 stderr F I0813 20:08:03.368427 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2025-08-13T20:08:59.007661338+00:00 stderr F E0813 20:08:59.007399 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:09:05.293722175+00:00 stderr F I0813 20:09:05.293507 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:05.536332881+00:00 stderr F I0813 20:09:05.536106 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T20:09:05.790060166+00:00 stderr F I0813 20:09:05.789989 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:42:36.491959513+00:00 stderr F I0813 20:42:36.480040 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514396170+00:00 stderr F I0813 20:42:36.479177 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514396170+00:00 stderr F I0813 20:42:36.489175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:39.634924855+00:00 stderr F I0813 20:42:39.633727 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:42:39.635499292+00:00 stderr F I0813 20:42:39.634967 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:42:39.639672802+00:00 stderr F I0813 20:42:39.639594 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:39.639672802+00:00 stderr F I0813 20:42:39.639655 1 controller.go:242] "All workers finished" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:39.639736334+00:00 stderr F I0813 20:42:39.639706 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:42:39.644435049+00:00 stderr F I0813 20:42:39.644388 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:42:39.644719988+00:00 stderr F I0813 20:42:39.644686 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:42:39.647757065+00:00 stderr F I0813 20:42:39.647042 1 server.go:231] "Shutting down metrics server with timeout of 1 minute" logger="controller-runtime.metrics" 2025-08-13T20:42:39.647757065+00:00 stderr F I0813 20:42:39.647249 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" 2025-08-13T20:42:39.651514124+00:00 stderr F E0813 20:42:39.651384 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: connect: connection refused

[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log]

2025-08-13T19:50:52.226135278+00:00 stderr F W0813 19:50:52.191052 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:50:52.238309136+00:00 stderr F W0813 19:50:52.238020 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:50:52.241364754+00:00 stderr F I0813 19:50:52.239659 1 main.go:150] setting up manager 2025-08-13T19:50:52.290188489+00:00 stderr F I0813 19:50:52.289306 1 main.go:168] registering components 2025-08-13T19:50:52.290188489+00:00 stderr F I0813 19:50:52.289358 1 main.go:170] setting up scheme 2025-08-13T19:50:52.291735653+00:00 stderr F I0813 19:50:52.291002 1 main.go:208] setting up controllers 2025-08-13T19:50:52.292664000+00:00 stderr F I0813 19:50:52.291879 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2025-08-13T19:50:52.292664000+00:00 stderr F I0813 19:50:52.292278 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2025-08-13T19:50:52.295243134+00:00 stderr F I0813 19:50:52.295107 1 main.go:233] starting the cmd 2025-08-13T19:50:52.308932555+00:00 stderr F I0813 19:50:52.307024 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T19:50:52.327417733+00:00 stderr F I0813 19:50:52.326253 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader... 
2025-08-13T19:50:52.343960816+00:00 stderr F I0813 19:50:52.343359 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2025-08-13T19:51:22.437513386+00:00 stderr F E0813 19:51:22.437168 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:38.535367297+00:00 stderr F E0813 19:52:38.535240 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:54.958972291+00:00 stderr F E0813 19:53:54.958581 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:58.672042812+00:00 stderr F E0813 19:54:58.671715 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:15.692884729+00:00 stderr F E0813 19:56:15.692681 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:11.955337238+00:00 stderr F E0813 19:57:11.955019 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:58:04.802508691+00:00 stderr F I0813 19:58:04.802165 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2025-08-13T19:58:04.803114468+00:00 stderr F I0813 19:58:04.803034 1 status.go:97] Starting cluster operator status controller 2025-08-13T19:58:04.805735363+00:00 stderr F I0813 19:58:04.804317 1 recorder.go:104] "crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"27679"} reason="LeaderElection" 2025-08-13T19:58:04.806422283+00:00 stderr F I0813 19:58:04.806346 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T19:58:04.809971334+00:00 stderr F I0813 19:58:04.806616 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2025-08-13T19:58:04.810082087+00:00 stderr F I0813 19:58:04.810029 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" 
controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T19:58:04.814620396+00:00 stderr F I0813 19:58:04.813702 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T19:58:04.819118554+00:00 stderr F I0813 19:58:04.818976 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:58:04.999867317+00:00 stderr F I0813 19:58:04.999733 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:58:05.021482623+00:00 stderr F I0813 19:58:05.021318 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2025-08-13T19:58:05.021741430+00:00 stderr F I0813 19:58:05.021656 1 controller.go:120] Reconciling CSR: csr-fxkbs 2025-08-13T19:58:05.068715569+00:00 stderr F I0813 19:58:05.068648 1 controller.go:213] csr-fxkbs: CSR is already approved 2025-08-13T19:59:54.738971221+00:00 stderr F I0813 19:59:54.736169 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783131 1 csr_check.go:163] system:openshift:openshift-authenticator-dk965: CSR does not appear to be client csr 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783176 1 csr_check.go:59] system:openshift:openshift-authenticator-dk965: CSR does not appear to be a node serving cert 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783237 1 controller.go:232] system:openshift:openshift-authenticator-dk965: CSR not authorized 2025-08-13T19:59:56.015861990+00:00 stderr F I0813 19:59:56.015717 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 
2025-08-13T19:59:57.081251229+00:00 stderr F I0813 19:59:57.073260 1 controller.go:213] system:openshift:openshift-authenticator-dk965: CSR is already approved 2025-08-13T20:00:01.347367856+00:00 stderr F I0813 20:00:01.346761 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 2025-08-13T20:00:01.631136935+00:00 stderr F I0813 20:00:01.631069 1 controller.go:213] system:openshift:openshift-authenticator-dk965: CSR is already approved 2025-08-13T20:03:21.934357110+00:00 stderr F E0813 20:03:21.934093 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:04:17.937495002+00:00 stderr F E0813 20:04:17.937199 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:04:38.942610615+00:00 stderr F I0813 20:04:38.936003 1 leaderelection.go:285] failed to renew lease openshift-cluster-machine-approver/cluster-machine-approver-leader: timed out waiting for the condition 2025-08-13T20:05:08.957482381+00:00 stderr F E0813 20:05:08.957257 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:05:08.990523638+00:00 stderr F F0813 20:05:08.990431 1 main.go:235] unable to run the manager: leader election lost 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028498 1 internal.go:516] "Stopping and waiting for non leader election 
runnables" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028591 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028608 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028585 1 recorder.go:104] "crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"30699"} reason="LeaderElection" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028819 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028849 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028884 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" ././@LongLink0000644000000000000000000000024400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657715033114 5ustar zuulzuul././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/registry-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657734033115 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/registry-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000067015133657715033121 0ustar zuulzuul2026-01-20T10:51:17.066521450+00:00 stderr F time="2026-01-20T10:51:17Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2026-01-20T10:51:19.991430861+00:00 stderr F time="2026-01-20T10:51:19Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2026-01-20T10:51:19.991430861+00:00 stderr F time="2026-01-20T10:51:19Z" level=info msg="stopped caching cpu profile data" address="localhost:6060" ././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-content/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657734033115 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-content/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015133657715033104 0ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-utilities/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657734033115 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-utilities/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015133657715033104 0ustar zuulzuul././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657715033114 5ustar zuulzuul././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-content/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657735033116 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-content/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015133657715033104 0ustar zuulzuul././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-utilities/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657735033116 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-utilities/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015133657715033104 0ustar zuulzuul././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/registry-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657735033116 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/registry-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000067015133657715033121 0ustar zuulzuul2026-01-20T10:51:45.717756804+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2026-01-20T10:51:51.504114167+00:00 stderr F time="2026-01-20T10:51:51Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2026-01-20T10:51:51.504114167+00:00 stderr F time="2026-01-20T10:51:51Z" level=info msg="stopped caching cpu profile data" address="localhost:6060" ././@LongLink0000644000000000000000000000024200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-cana0000755000175000017500000000000015133657716033040 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-cana0000755000175000017500000000000015133657736033042 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-cana0000644000175000017500000000113215133657716033037 0ustar zuulzuul2026-01-20T10:49:35.984751072+00:00 stdout F serving on 8888 2026-01-20T10:49:35.984751072+00:00 stdout F serving on 8080 2026-01-20T10:51:45.931461188+00:00 stdout F Serving canary healthcheck request 2026-01-20T10:52:45.967598595+00:00 stdout F Serving canary healthcheck request 2026-01-20T10:53:46.011395007+00:00 stdout F Serving canary healthcheck request 2026-01-20T10:54:46.045819313+00:00 stdout F Serving canary healthcheck request 2026-01-20T10:55:46.095348631+00:00 stdout F Serving canary healthcheck request 2026-01-20T10:56:46.125247236+00:00 stdout F Serving canary healthcheck request ././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-cana0000644000175000017500000000555215133657716033051 0ustar zuulzuul2025-08-13T19:59:34.879020507+00:00 stdout F serving on 8888 2025-08-13T19:59:35.198359410+00:00 stdout F serving on 8080 2025-08-13T20:08:02.339625798+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:09:02.392271856+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:10:02.482997678+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:11:02.534550541+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:12:02.579987363+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:13:02.622390324+00:00 stdout F Serving canary healthcheck 
request 2025-08-13T20:14:02.665508540+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:15:02.709870087+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:16:02.748527721+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:17:02.809741878+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:18:03.008946071+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:19:04.599125722+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:20:04.657458550+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:21:04.729419456+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:22:04.794676989+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:23:04.860257187+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:24:04.910380940+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:25:04.970661472+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:26:05.020180631+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:27:05.076309120+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:28:05.120889853+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:29:05.169859934+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:30:05.293400605+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:31:05.339396053+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:32:05.396747728+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:33:05.450110989+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:34:05.494021784+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:35:05.541527757+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:36:05.601877841+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:37:05.651693983+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:38:05.693964495+00:00 stdout F Serving canary healthcheck 
request 2025-08-13T20:39:05.739405167+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:40:05.786630512+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:41:05.844252540+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:42:05.898761088+00:00 stdout F Serving canary healthcheck request ././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015133657716033015 5ustar zuulzuul././@LongLink0000644000000000000000000000031000000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015133657737033020 5ustar zuulzuul././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000013175715133657716033035 0ustar zuulzuul2025-08-13T19:50:48.104229662+00:00 stderr F I0813 19:50:48.084890 1 start.go:38] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:51:18.557700290+00:00 stderr F W0813 19:51:18.557095 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.558076260+00:00 stderr F I0813 19:51:18.558043 1 trace.go:236] Trace[912093740]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.535) (total time: 30022ms): 2025-08-13T19:51:18.558076260+00:00 stderr F Trace[912093740]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30021ms (19:51:18.556) 2025-08-13T19:51:18.558076260+00:00 stderr F Trace[912093740]: [30.022734787s] [30.022734787s] END 2025-08-13T19:51:18.558524563+00:00 stderr F E0813 19:51:18.558463 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.569047554+00:00 stderr F W0813 19:51:18.568932 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.569072735+00:00 stderr F I0813 19:51:18.569052 1 trace.go:236] Trace[2105199760]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.532) (total time: 30036ms): 
2025-08-13T19:51:18.569072735+00:00 stderr F Trace[2105199760]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30036ms (19:51:18.568) 2025-08-13T19:51:18.569072735+00:00 stderr F Trace[2105199760]: [30.036876891s] [30.036876891s] END 2025-08-13T19:51:18.569072735+00:00 stderr F E0813 19:51:18.569067 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.572313797+00:00 stderr F W0813 19:51:18.572229 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.572412030+00:00 stderr F I0813 19:51:18.572388 1 trace.go:236] Trace[153696363]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.532) (total time: 30040ms): 2025-08-13T19:51:18.572412030+00:00 stderr F Trace[153696363]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30040ms (19:51:18.572) 2025-08-13T19:51:18.572412030+00:00 stderr F Trace[153696363]: [30.04035192s] [30.04035192s] END 2025-08-13T19:51:18.572482572+00:00 stderr F E0813 19:51:18.572458 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch 
*v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.663598296+00:00 stderr F W0813 19:51:18.663514 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.663703289+00:00 stderr F I0813 19:51:18.663686 1 trace.go:236] Trace[1936765059]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:50:48.640) (total time: 30023ms): 2025-08-13T19:51:18.663703289+00:00 stderr F Trace[1936765059]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30023ms (19:51:18.663) 2025-08-13T19:51:18.663703289+00:00 stderr F Trace[1936765059]: [30.02355184s] [30.02355184s] END 2025-08-13T19:51:18.663749820+00:00 stderr F E0813 19:51:18.663735 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.504049251+00:00 stderr F W0813 19:51:49.503980 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.504303129+00:00 stderr F I0813 19:51:49.504282 1 trace.go:236] Trace[1149040783]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:51:19.502) 
(total time: 30001ms): 2025-08-13T19:51:49.504303129+00:00 stderr F Trace[1149040783]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:51:49.503) 2025-08-13T19:51:49.504303129+00:00 stderr F Trace[1149040783]: [30.001813737s] [30.001813737s] END 2025-08-13T19:51:49.504357790+00:00 stderr F E0813 19:51:49.504343 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.604755611+00:00 stderr F W0813 19:51:49.604696 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.604995868+00:00 stderr F I0813 19:51:49.604972 1 trace.go:236] Trace[933594454]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.603) (total time: 30001ms): 2025-08-13T19:51:49.604995868+00:00 stderr F Trace[933594454]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:51:49.604) 2025-08-13T19:51:49.604995868+00:00 stderr F Trace[933594454]: [30.001502258s] [30.001502258s] END 2025-08-13T19:51:49.605047339+00:00 stderr F E0813 19:51:49.605033 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list 
*v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.608579750+00:00 stderr F W0813 19:51:49.608494 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.608737084+00:00 stderr F I0813 19:51:49.608587 1 trace.go:236] Trace[802142435]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.607) (total time: 30000ms): 2025-08-13T19:51:49.608737084+00:00 stderr F Trace[802142435]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (19:51:49.608) 2025-08-13T19:51:49.608737084+00:00 stderr F Trace[802142435]: [30.000833509s] [30.000833509s] END 2025-08-13T19:51:49.608737084+00:00 stderr F E0813 19:51:49.608606 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.955594646+00:00 stderr F W0813 19:51:49.955445 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.956119431+00:00 
stderr F I0813 19:51:49.955722 1 trace.go:236] Trace[375825009]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.952) (total time: 30003ms): 2025-08-13T19:51:49.956119431+00:00 stderr F Trace[375825009]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:51:49.955) 2025-08-13T19:51:49.956119431+00:00 stderr F Trace[375825009]: [30.003125253s] [30.003125253s] END 2025-08-13T19:51:49.956178603+00:00 stderr F E0813 19:51:49.956178 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:07.688377796+00:00 stderr F W0813 19:52:07.688204 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:07.688377796+00:00 stderr F I0813 19:52:07.688318 1 trace.go:236] Trace[1219952241]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:52.075) (total time: 15613ms): 2025-08-13T19:52:07.688377796+00:00 stderr F Trace[1219952241]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 15613ms (19:52:07.688) 
2025-08-13T19:52:07.688377796+00:00 stderr F Trace[1219952241]: [15.613260573s] [15.613260573s] END 2025-08-13T19:52:07.688473449+00:00 stderr F E0813 19:52:07.688372 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:08.200402353+00:00 stderr F W0813 19:52:08.200209 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:08.200402353+00:00 stderr F I0813 19:52:08.200319 1 trace.go:236] Trace[66411733]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:51:52.591) (total time: 15608ms): 2025-08-13T19:52:08.200402353+00:00 stderr F Trace[66411733]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 15608ms (19:52:08.200) 2025-08-13T19:52:08.200402353+00:00 stderr F Trace[66411733]: [15.608347813s] [15.608347813s] END 2025-08-13T19:52:08.200402353+00:00 stderr F E0813 19:52:08.200340 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:21.221752251+00:00 stderr F W0813 19:52:21.221643 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:21.221752251+00:00 stderr F I0813 19:52:21.221724 1 trace.go:236] Trace[1453919014]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:51.217) (total time: 30004ms): 2025-08-13T19:52:21.221752251+00:00 stderr F Trace[1453919014]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30004ms (19:52:21.221) 2025-08-13T19:52:21.221752251+00:00 stderr F Trace[1453919014]: [30.004688526s] [30.004688526s] END 2025-08-13T19:52:21.221908706+00:00 stderr F E0813 19:52:21.221764 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:22.252234010+00:00 stderr F W0813 19:52:22.252150 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:22.252393324+00:00 stderr F I0813 19:52:22.252369 1 trace.go:236] Trace[1556460362]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:52.249) (total time: 30002ms): 
2025-08-13T19:52:22.252393324+00:00 stderr F Trace[1556460362]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:52:22.252) 2025-08-13T19:52:22.252393324+00:00 stderr F Trace[1556460362]: [30.002882453s] [30.002882453s] END 2025-08-13T19:52:22.252472317+00:00 stderr F E0813 19:52:22.252453 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.086000804+00:00 stderr F W0813 19:52:42.084352 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.086000804+00:00 stderr F I0813 19:52:42.084477 1 trace.go:236] Trace[1272804760]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:12.081) (total time: 30003ms): 2025-08-13T19:52:42.086000804+00:00 stderr F Trace[1272804760]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (19:52:42.084) 2025-08-13T19:52:42.086000804+00:00 stderr F Trace[1272804760]: [30.003320052s] [30.003320052s] END 2025-08-13T19:52:42.086000804+00:00 stderr F E0813 19:52:42.084495 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch 
*v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.671043855+00:00 stderr F W0813 19:52:42.670931 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.671043855+00:00 stderr F I0813 19:52:42.671022 1 trace.go:236] Trace[1619673043]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:52:12.669) (total time: 30001ms): 2025-08-13T19:52:42.671043855+00:00 stderr F Trace[1619673043]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:42.670) 2025-08-13T19:52:42.671043855+00:00 stderr F Trace[1619673043]: [30.001258406s] [30.001258406s] END 2025-08-13T19:52:42.671085227+00:00 stderr F E0813 19:52:42.671038 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:55.955644476+00:00 stderr F W0813 19:52:55.955445 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:55.955708578+00:00 stderr F I0813 19:52:55.955686 1 trace.go:236] Trace[1977466941]: "Reflector ListAndWatch" 
name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:25.954) (total time: 30001ms): 2025-08-13T19:52:55.955708578+00:00 stderr F Trace[1977466941]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:55.955) 2025-08-13T19:52:55.955708578+00:00 stderr F Trace[1977466941]: [30.001497346s] [30.001497346s] END 2025-08-13T19:52:55.955838651+00:00 stderr F E0813 19:52:55.955727 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:57.480925777+00:00 stderr F W0813 19:52:57.480755 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:57.480925777+00:00 stderr F I0813 19:52:57.480906 1 trace.go:236] Trace[1866283227]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:27.479) (total time: 30001ms): 2025-08-13T19:52:57.480925777+00:00 stderr F Trace[1866283227]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:57.480) 2025-08-13T19:52:57.480925777+00:00 stderr F Trace[1866283227]: [30.00181764s] [30.00181764s] END 2025-08-13T19:52:57.481038301+00:00 stderr F E0813 
19:52:57.480924 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:20.367755117+00:00 stderr F W0813 19:53:20.367425 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:20.368044136+00:00 stderr F I0813 19:53:20.368019 1 trace.go:236] Trace[1083300361]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:50.364) (total time: 30003ms): 2025-08-13T19:53:20.368044136+00:00 stderr F Trace[1083300361]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:53:20.367) 2025-08-13T19:53:20.368044136+00:00 stderr F Trace[1083300361]: [30.003506528s] [30.003506528s] END 2025-08-13T19:53:20.368124388+00:00 stderr F E0813 19:53:20.368101 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:21.438760162+00:00 stderr F W0813 19:53:21.438626 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:21.438955457+00:00 stderr F I0813 19:53:21.438756 1 trace.go:236] Trace[702819263]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:52:51.436) (total time: 30001ms): 2025-08-13T19:53:21.438955457+00:00 stderr F Trace[702819263]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:53:21.438) 2025-08-13T19:53:21.438955457+00:00 stderr F Trace[702819263]: [30.001816914s] [30.001816914s] END 2025-08-13T19:53:21.438955457+00:00 stderr F E0813 19:53:21.438873 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:34.333573367+00:00 stderr F W0813 19:53:34.333471 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:34.333968239+00:00 stderr F I0813 19:53:34.333752 1 trace.go:236] Trace[623327545]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:04.330) (total time: 30002ms): 2025-08-13T19:53:34.333968239+00:00 stderr F Trace[623327545]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms 
(19:53:34.333) 2025-08-13T19:53:34.333968239+00:00 stderr F Trace[623327545]: [30.002761313s] [30.002761313s] END 2025-08-13T19:53:34.334038991+00:00 stderr F E0813 19:53:34.334010 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:38.954408297+00:00 stderr F W0813 19:53:38.954273 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:38.954472968+00:00 stderr F I0813 19:53:38.954398 1 trace.go:236] Trace[32431997]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:08.953) (total time: 30001ms): 2025-08-13T19:53:38.954472968+00:00 stderr F Trace[32431997]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:53:38.954) 2025-08-13T19:53:38.954472968+00:00 stderr F Trace[32431997]: [30.001223625s] [30.001223625s] END 2025-08-13T19:53:38.954472968+00:00 stderr F E0813 19:53:38.954423 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:04.670979405+00:00 stderr F W0813 19:54:04.670474 
1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:04.670979405+00:00 stderr F I0813 19:54:04.670582 1 trace.go:236] Trace[1588578066]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:34.668) (total time: 30001ms): 2025-08-13T19:54:04.670979405+00:00 stderr F Trace[1588578066]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:54:04.670) 2025-08-13T19:54:04.670979405+00:00 stderr F Trace[1588578066]: [30.001681551s] [30.001681551s] END 2025-08-13T19:54:04.670979405+00:00 stderr F E0813 19:54:04.670604 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:06.844605739+00:00 stderr F W0813 19:54:06.844423 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:06.844650260+00:00 stderr F I0813 19:54:06.844638 1 trace.go:236] Trace[737086321]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:53:36.843) (total time: 30001ms): 2025-08-13T19:54:06.844650260+00:00 stderr F Trace[737086321]: ---"Objects listed" 
error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:54:06.844) 2025-08-13T19:54:06.844650260+00:00 stderr F Trace[737086321]: [30.001231928s] [30.001231928s] END 2025-08-13T19:54:06.844730882+00:00 stderr F E0813 19:54:06.844665 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:09.417087090+00:00 stderr F W0813 19:54:09.416716 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:54:09.417087090+00:00 stderr F E0813 19:54:09.416971 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:54:22.292448382+00:00 stderr F W0813 19:54:22.292119 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:22.292448382+00:00 stderr F I0813 19:54:22.292366 1 trace.go:236] Trace[1159168860]: "Reflector ListAndWatch" 
name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:52.288) (total time: 30003ms): 2025-08-13T19:54:22.292448382+00:00 stderr F Trace[1159168860]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (19:54:22.292) 2025-08-13T19:54:22.292448382+00:00 stderr F Trace[1159168860]: [30.003899291s] [30.003899291s] END 2025-08-13T19:54:22.292554615+00:00 stderr F E0813 19:54:22.292436 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:02.279342700+00:00 stderr F W0813 19:55:02.279209 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:02.279342700+00:00 stderr F I0813 19:55:02.279323 1 trace.go:236] Trace[316618279]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:54:32.277) (total time: 30002ms): 2025-08-13T19:55:02.279342700+00:00 stderr F Trace[316618279]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:02.279) 2025-08-13T19:55:02.279342700+00:00 stderr F Trace[316618279]: [30.002099667s] [30.002099667s] END 2025-08-13T19:55:02.279416412+00:00 stderr F E0813 
19:55:02.279339 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:04.185069915+00:00 stderr F W0813 19:55:04.184917 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:04.185069915+00:00 stderr F I0813 19:55:04.185050 1 trace.go:236] Trace[151776207]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:54:34.182) (total time: 30002ms): 2025-08-13T19:55:04.185069915+00:00 stderr F Trace[151776207]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:04.184) 2025-08-13T19:55:04.185069915+00:00 stderr F Trace[151776207]: [30.00292695s] [30.00292695s] END 2025-08-13T19:55:04.185110916+00:00 stderr F E0813 19:55:04.185068 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:23.981912405+00:00 stderr F W0813 19:55:23.981607 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 
2025-08-13T19:55:23.982536453+00:00 stderr F I0813 19:55:23.982465 1 trace.go:236] Trace[1215179368]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:54:53.979) (total time: 30003ms): 2025-08-13T19:55:23.982536453+00:00 stderr F Trace[1215179368]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:23.981) 2025-08-13T19:55:23.982536453+00:00 stderr F Trace[1215179368]: [30.003092165s] [30.003092165s] END 2025-08-13T19:55:23.982720378+00:00 stderr F E0813 19:55:23.982695 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:30.972367921+00:00 stderr F W0813 19:55:30.972184 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:30.972367921+00:00 stderr F I0813 19:55:30.972342 1 trace.go:236] Trace[1271936866]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:55:00.971) (total time: 30001ms): 2025-08-13T19:55:30.972367921+00:00 stderr F Trace[1271936866]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (19:55:30.972) 2025-08-13T19:55:30.972367921+00:00 
stderr F Trace[1271936866]: [30.001039224s] [30.001039224s] END 2025-08-13T19:55:30.972367921+00:00 stderr F E0813 19:55:30.972360 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:51.572445390+00:00 stderr F W0813 19:55:51.571947 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:55:51.572445390+00:00 stderr F E0813 19:55:51.572021 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:56:16.186532855+00:00 stderr F W0813 19:56:16.185956 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:16.186532855+00:00 stderr F I0813 19:56:16.186062 1 trace.go:236] Trace[1037743998]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:55:46.184) (total time: 30001ms): 2025-08-13T19:56:16.186532855+00:00 stderr F Trace[1037743998]: ---"Objects listed" error:Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:16.185) 2025-08-13T19:56:16.186532855+00:00 stderr F Trace[1037743998]: [30.001968233s] [30.001968233s] END 2025-08-13T19:56:16.186532855+00:00 stderr F E0813 19:56:16.186078 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:51.014032237+00:00 stderr F W0813 19:56:51.013906 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:51.014345846+00:00 stderr F I0813 19:56:51.014263 1 trace.go:236] Trace[324623291]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:21.012) (total time: 30002ms): 2025-08-13T19:56:51.014345846+00:00 stderr F Trace[324623291]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:51.013) 2025-08-13T19:56:51.014345846+00:00 stderr F Trace[324623291]: [30.002111695s] [30.002111695s] END 2025-08-13T19:56:51.014450029+00:00 stderr F E0813 19:56:51.014402 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": 
dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:52.162970465+00:00 stderr F W0813 19:56:52.162515 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:52.162970465+00:00 stderr F I0813 19:56:52.162611 1 trace.go:236] Trace[633606038]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:22.160) (total time: 30002ms): 2025-08-13T19:56:52.162970465+00:00 stderr F Trace[633606038]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:56:52.162) 2025-08-13T19:56:52.162970465+00:00 stderr F Trace[633606038]: [30.002157687s] [30.002157687s] END 2025-08-13T19:56:52.162970465+00:00 stderr F E0813 19:56:52.162625 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:58.950133678+00:00 stderr F W0813 19:56:58.949498 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:58.950133678+00:00 stderr F I0813 19:56:58.949878 1 trace.go:236] Trace[1460267524]: "Reflector ListAndWatch" 
name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:28.947) (total time: 30002ms): 2025-08-13T19:56:58.950133678+00:00 stderr F Trace[1460267524]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:56:58.949) 2025-08-13T19:56:58.950133678+00:00 stderr F Trace[1460267524]: [30.0023144s] [30.0023144s] END 2025-08-13T19:56:58.950133678+00:00 stderr F E0813 19:56:58.949947 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:36.904398000+00:00 stderr F I0813 19:57:36.904118 1 trace.go:236] Trace[561210625]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:57:09.705) (total time: 27198ms): 2025-08-13T19:57:36.904398000+00:00 stderr F Trace[561210625]: ---"Objects listed" error: 27197ms (19:57:36.903) 2025-08-13T19:57:36.904398000+00:00 stderr F Trace[561210625]: [27.198137534s] [27.198137534s] END 2025-08-13T19:57:37.027895736+00:00 stderr F I0813 19:57:37.027722 1 api.go:65] Launching server on :22624 2025-08-13T19:57:37.032064235+00:00 stderr F I0813 19:57:37.030734 1 api.go:65] Launching server on :22623
==== home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.log ====
2026-01-20T10:47:24.913099588+00:00 stderr F I0120 10:47:24.912912 1 start.go:38] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2026-01-20T10:47:54.918963340+00:00 stderr F W0120 10:47:54.918698 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.918963340+00:00 stderr F I0120 10:47:54.918940 1 trace.go:236] Trace[533201751]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (20-Jan-2026 10:47:24.916) (total time: 30002ms): 2026-01-20T10:47:54.918963340+00:00 stderr F Trace[533201751]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (10:47:54.918) 2026-01-20T10:47:54.918963340+00:00 stderr F Trace[533201751]: [30.002283045s] [30.002283045s] END 2026-01-20T10:47:54.919046413+00:00 stderr F E0120 10:47:54.918980 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 
2026-01-20T10:47:54.919309910+00:00 stderr F W0120 10:47:54.919174 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.919309910+00:00 stderr F I0120 10:47:54.919266 1 trace.go:236] Trace[1190776164]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (20-Jan-2026 10:47:24.916) (total time: 30002ms): 2026-01-20T10:47:54.919309910+00:00 stderr F Trace[1190776164]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (10:47:54.919) 2026-01-20T10:47:54.919309910+00:00 stderr F Trace[1190776164]: [30.002558123s] [30.002558123s] END 2026-01-20T10:47:54.919309910+00:00 stderr F E0120 10:47:54.919282 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.919452693+00:00 stderr F W0120 10:47:54.919382 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.919452693+00:00 stderr F I0120 10:47:54.919445 1 trace.go:236] Trace[1363704821]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (20-Jan-2026 10:47:24.916) (total time: 30002ms): 
2026-01-20T10:47:54.919452693+00:00 stderr F Trace[1363704821]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (10:47:54.919) 2026-01-20T10:47:54.919452693+00:00 stderr F Trace[1363704821]: [30.002651405s] [30.002651405s] END 2026-01-20T10:47:54.919477304+00:00 stderr F E0120 10:47:54.919459 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.919597567+00:00 stderr F W0120 10:47:54.919470 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.919693250+00:00 stderr F I0120 10:47:54.919630 1 trace.go:236] Trace[816473774]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (20-Jan-2026 10:47:24.917) (total time: 30002ms): 2026-01-20T10:47:54.919693250+00:00 stderr F Trace[816473774]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (10:47:54.919) 2026-01-20T10:47:54.919693250+00:00 stderr F Trace[816473774]: [30.002244745s] [30.002244745s] END 2026-01-20T10:47:54.919693250+00:00 stderr F E0120 10:47:54.919663 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:25.885805144+00:00 stderr F W0120 10:48:25.884902 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:25.885950258+00:00 stderr F I0120 10:48:25.885925 1 trace.go:236] Trace[166573614]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (20-Jan-2026 10:47:55.884) (total time: 30001ms): 2026-01-20T10:48:25.885950258+00:00 stderr F Trace[166573614]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (10:48:25.884) 2026-01-20T10:48:25.885950258+00:00 stderr F Trace[166573614]: [30.001732978s] [30.001732978s] END 2026-01-20T10:48:25.886020600+00:00 stderr F E0120 10:48:25.885999 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.304901447+00:00 stderr F W0120 10:48:26.304767 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.304901447+00:00 stderr F 
I0120 10:48:26.304850 1 trace.go:236] Trace[1678790560]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (20-Jan-2026 10:47:56.303) (total time: 30001ms): 2026-01-20T10:48:26.304901447+00:00 stderr F Trace[1678790560]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (10:48:26.304) 2026-01-20T10:48:26.304901447+00:00 stderr F Trace[1678790560]: [30.001068171s] [30.001068171s] END 2026-01-20T10:48:26.304901447+00:00 stderr F E0120 10:48:26.304865 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.323543441+00:00 stderr F W0120 10:48:26.323440 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.323543441+00:00 stderr F I0120 10:48:26.323527 1 trace.go:236] Trace[1724296556]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (20-Jan-2026 10:47:56.322) (total time: 30001ms): 2026-01-20T10:48:26.323543441+00:00 stderr F Trace[1724296556]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (10:48:26.323) 2026-01-20T10:48:26.323543441+00:00 stderr F Trace[1724296556]: [30.001307678s] [30.001307678s] END 2026-01-20T10:48:26.323636384+00:00 stderr F E0120 10:48:26.323548 1 
reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.323756247+00:00 stderr F W0120 10:48:26.323699 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.323866730+00:00 stderr F I0120 10:48:26.323842 1 trace.go:236] Trace[67536016]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (20-Jan-2026 10:47:56.322) (total time: 30001ms): 2026-01-20T10:48:26.323866730+00:00 stderr F Trace[67536016]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (10:48:26.323) 2026-01-20T10:48:26.323866730+00:00 stderr F Trace[67536016]: [30.001568044s] [30.001568044s] END 2026-01-20T10:48:26.323973743+00:00 stderr F E0120 10:48:26.323917 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:35.318520023+00:00 stderr F I0120 10:48:35.318338 1 api.go:65] Launching server on :22624 2026-01-20T10:48:35.318520023+00:00 stderr F I0120 10:48:35.318403 1 api.go:65] Launching server on :22623
==== home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.log ====
2026-01-20T10:49:33.430007236+00:00 stdout F .:5353 2026-01-20T10:49:33.430007236+00:00 stdout F hostname.bind.:5353 2026-01-20T10:49:33.430007236+00:00 stdout F [INFO] plugin/reload: Running configuration SHA512 = c40f1fac74a6633c6b1943fe251ad80adf3d5bd9b35c9e7d9b72bc260c5e2455f03e403e3b79d32f0936ff27e81ff6d07c68a95724b1c2c23510644372976718 2026-01-20T10:49:33.430007236+00:00 stdout F CoreDNS-1.11.1 2026-01-20T10:49:33.430007236+00:00 stdout F linux/amd64, go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime, 2026-01-20T10:55:13.317725595+00:00 stdout F [INFO] 10.217.0.8:54233 - 22353 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002883577s 2026-01-20T10:55:13.317725595+00:00 stdout F [INFO] 10.217.0.8:50495 - 51250 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002945419s 2026-01-20T10:55:32.258614224+00:00 stdout F [INFO] 10.217.0.8:50119 - 18067 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00264315s 2026-01-20T10:55:32.258734507+00:00 stdout F [INFO] 10.217.0.8:57057 - 30253 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002945208s 2026-01-20T10:55:54.293628627+00:00 stdout F [INFO] 10.217.0.8:51274 - 62020 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000765431s 2026-01-20T10:55:54.294010217+00:00 stdout F [INFO] 10.217.0.8:44389 - 44524 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001644564s 2026-01-20T10:56:32.257476967+00:00 stdout F [INFO] 10.217.0.8:38797 - 30995 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002740064s 2026-01-20T10:56:32.257476967+00:00 stdout F [INFO] 10.217.0.8:47144 - 12976 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003028171s 2026-01-20T10:57:32.257724288+00:00 stdout F [INFO] 10.217.0.8:55140 - 57131 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002387003s 2026-01-20T10:57:32.257724288+00:00 stdout F [INFO] 10.217.0.8:48292 - 7484 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002585389s
==== home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.log ====
2025-08-13T19:59:13.144252487+00:00 stdout F [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server 2025-08-13T19:59:13.171934436+00:00 stdout F .:5353 2025-08-13T19:59:13.171934436+00:00 stdout F hostname.bind.:5353 2025-08-13T19:59:13.185586915+00:00 stdout F [INFO] plugin/reload: Running configuration SHA512 = c40f1fac74a6633c6b1943fe251ad80adf3d5bd9b35c9e7d9b72bc260c5e2455f03e403e3b79d32f0936ff27e81ff6d07c68a95724b1c2c23510644372976718 2025-08-13T19:59:13.187380976+00:00 stdout F CoreDNS-1.11.1 2025-08-13T19:59:13.187380976+00:00 stdout F linux/amd64, go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime, 2025-08-13T19:59:36.359190859+00:00 stdout F [INFO] 10.217.0.28:60726 - 4384 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009569913s 2025-08-13T19:59:36.359190859+00:00 stdout F [INFO] 10.217.0.28:45746 - 61404 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.016871841s 2025-08-13T19:59:38.555978409+00:00 stdout F [INFO] 10.217.0.8:37135 - 10343 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001011259s 2025-08-13T19:59:38.555978409+00:00 stdout F [INFO] 10.217.0.8:58657 - 31103 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001289036s 2025-08-13T19:59:39.582450718+00:00 stdout F [INFO] 10.217.0.8:36699 - 25225 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003627074s 2025-08-13T19:59:39.583116537+00:00 stdout F [INFO] 10.217.0.8:46453 - 52750 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000979658s 2025-08-13T19:59:41.285089343+00:00 stdout F [INFO] 10.217.0.8:42982 - 35440 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005392324s 2025-08-13T19:59:41.372498954+00:00 stdout F [INFO] 10.217.0.28:53074 - 61598 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.044448346s 2025-08-13T19:59:41.372498954+00:00 stdout F [INFO] 10.217.0.28:47243 - 33124 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.045254849s 2025-08-13T19:59:41.380854483+00:00 stdout F [INFO] 10.217.0.8:59732 - 7106 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.093183886s 2025-08-13T19:59:42.056944115+00:00 stdout F [INFO] 10.217.0.8:57861 - 34485 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001998527s 2025-08-13T19:59:42.057040607+00:00 stdout F [INFO] 10.217.0.8:51920 - 49588 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00312535s 2025-08-13T19:59:42.254946729+00:00 stdout F [INFO] 10.217.0.8:42744 - 36863 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002500741s 2025-08-13T19:59:42.368712312+00:00 stdout F [INFO] 10.217.0.8:41487 - 58644 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00389273s 2025-08-13T19:59:43.477540588+00:00 stdout F [INFO] 10.217.0.8:55842 - 16014 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002701517s 2025-08-13T19:59:43.477540588+00:00 stdout F [INFO] 10.217.0.8:59959 - 45350 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004039455s 2025-08-13T19:59:44.068239376+00:00 stdout F [INFO] 10.217.0.8:54207 - 19718 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003326715s 2025-08-13T19:59:44.068239376+00:00 stdout F [INFO] 10.217.0.8:55710 - 12381 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000859364s 2025-08-13T19:59:44.473616752+00:00 stdout F [INFO] 10.217.0.8:57433 - 12555 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001231785s 2025-08-13T19:59:44.490141913+00:00 stdout F [INFO] 10.217.0.8:56361 - 611 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001532184s 2025-08-13T19:59:45.207580854+00:00 stdout F [INFO] 10.217.0.8:44517 - 45189 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007351389s 2025-08-13T19:59:45.331979930+00:00 stdout F [INFO] 10.217.0.8:58571 - 60387 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.010958382s 2025-08-13T19:59:46.275200306+00:00 stdout F [INFO] 10.217.0.28:56315 - 52153 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004062336s 2025-08-13T19:59:46.275701470+00:00 stdout F [INFO] 10.217.0.28:53644 - 10701 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004936511s 2025-08-13T19:59:46.778551484+00:00 stdout F [INFO] 10.217.0.8:35729 - 16750 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003115839s 2025-08-13T19:59:46.779616544+00:00 stdout F [INFO] 10.217.0.8:47577 - 4218 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004051835s 2025-08-13T19:59:49.490117401+00:00 stdout F [INFO] 10.217.0.8:60652 - 34962 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000776823s 2025-08-13T19:59:49.492174519+00:00 stdout F [INFO] 10.217.0.8:42073 - 18763 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000911126s 2025-08-13T19:59:51.325499579+00:00 stdout F [INFO] 10.217.0.28:35410 - 28476 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002997815s 2025-08-13T19:59:51.325499579+00:00 stdout F [INFO] 10.217.0.28:38192 - 43866 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002657256s 2025-08-13T19:59:54.722954265+00:00 stdout F [INFO] 10.217.0.8:57411 - 62430 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003547181s 2025-08-13T19:59:54.722954265+00:00 stdout F [INFO] 10.217.0.8:44346 - 25135 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004123868s 2025-08-13T19:59:56.279401503+00:00 stdout F [INFO] 10.217.0.28:55304 - 30922 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002699447s 2025-08-13T19:59:56.402210944+00:00 stdout F [INFO] 10.217.0.28:60105 - 977 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.129498722s 2025-08-13T20:00:01.312229374+00:00 stdout F [INFO] 10.217.0.28:36085 - 33371 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003943962s 2025-08-13T20:00:01.312229374+00:00 stdout F [INFO] 10.217.0.28:33968 - 25713 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004474897s 2025-08-13T20:00:05.154259269+00:00 stdout F [INFO] 10.217.0.8:38795 - 21470 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003238472s 2025-08-13T20:00:05.154259269+00:00 stdout F [INFO] 10.217.0.8:39200 - 34446 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.023923592s 2025-08-13T20:00:06.214910512+00:00 stdout F [INFO] 10.217.0.28:43430 - 45449 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000923157s 2025-08-13T20:00:06.221060848+00:00 stdout F [INFO] 10.217.0.28:53417 - 39326 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009344247s 2025-08-13T20:00:10.325143240+00:00 stdout F [INFO] 10.217.0.62:51993 - 11598 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004463237s 2025-08-13T20:00:10.333127238+00:00 stdout F [INFO] 10.217.0.62:44300 - 42845 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001539534s 2025-08-13T20:00:11.276530577+00:00 stdout F [INFO] 10.217.0.28:43084 - 32054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006750782s 2025-08-13T20:00:11.276530577+00:00 stdout F [INFO] 10.217.0.28:53563 - 42854 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006401943s 2025-08-13T20:00:11.908425165+00:00 stdout F [INFO] 10.217.0.62:54342 - 11297 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.013080363s 2025-08-13T20:00:11.909993260+00:00 stdout F [INFO] 10.217.0.62:39933 - 33204 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.014236276s 2025-08-13T20:00:12.249274154+00:00 stdout F [INFO] 10.217.0.19:58421 - 64290 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001664357s 2025-08-13T20:00:12.250137289+00:00 stdout F [INFO] 10.217.0.19:52477 - 46487 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004760996s 2025-08-13T20:00:12.416577515+00:00 stdout F [INFO] 10.217.0.19:53799 - 52499 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000827873s 2025-08-13T20:00:12.416577515+00:00 stdout F [INFO] 10.217.0.19:60061 - 51150 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000839763s 2025-08-13T20:00:12.441540027+00:00 stdout F [INFO] 10.217.0.62:34840 - 21342 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000948127s 2025-08-13T20:00:12.453288212+00:00 stdout F [INFO] 10.217.0.62:42451 - 35945 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01227864s 2025-08-13T20:00:12.493549240+00:00 stdout F [INFO] 10.217.0.62:50935 - 27932 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002008878s 2025-08-13T20:00:12.493702974+00:00 stdout F [INFO] 10.217.0.62:43620 - 46295 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001971266s 2025-08-13T20:00:12.530296808+00:00 stdout F [INFO] 10.217.0.62:36702 - 14398 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000981408s 2025-08-13T20:00:12.530296808+00:00 stdout F [INFO] 10.217.0.62:48646 - 64315 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000479213s 2025-08-13T20:00:13.138332395+00:00 stdout F [INFO] 10.217.0.62:59404 - 49497 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00140676s 2025-08-13T20:00:13.138332395+00:00 stdout F [INFO] 10.217.0.62:49332 - 15686 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002175362s 2025-08-13T20:00:13.283059932+00:00 stdout F [INFO] 10.217.0.62:44541 - 25632 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002145041s 2025-08-13T20:00:13.283059932+00:00 stdout F [INFO] 10.217.0.62:40546 - 5445 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002069449s 2025-08-13T20:00:13.353282534+00:00 stdout F [INFO] 10.217.0.62:54016 - 46275 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071551s 2025-08-13T20:00:13.354444558+00:00 stdout F [INFO] 10.217.0.62:33622 - 22522 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000671669s 2025-08-13T20:00:13.439033769+00:00 stdout F [INFO] 10.217.0.62:39950 - 27175 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007004119s 2025-08-13T20:00:13.443953430+00:00 stdout F [INFO] 10.217.0.62:48086 - 49881 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004105107s 2025-08-13T20:00:13.541132851+00:00 stdout F [INFO] 10.217.0.62:58381 - 51106 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001316337s 2025-08-13T20:00:13.541132851+00:00 stdout F [INFO] 10.217.0.62:54860 - 61170 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002034738s 2025-08-13T20:00:13.653005921+00:00 stdout F [INFO] 10.217.0.62:48034 - 54387 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001380509s 2025-08-13T20:00:13.653005921+00:00 stdout F [INFO] 10.217.0.62:48917 - 55266 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001103781s 2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.19:38017 - 48268 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004880009s 2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.62:49113 - 33930 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003019526s 2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.62:47685 - 37637 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00420269s 2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.19:36992 - 13239 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00596202s 2025-08-13T20:00:13.954743274+00:00 stdout F [INFO] 10.217.0.62:60184 - 33262 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001215854s 2025-08-13T20:00:13.971888523+00:00 stdout F [INFO] 10.217.0.62:40495 - 54729 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.018309722s 2025-08-13T20:00:14.090604728+00:00 stdout F [INFO] 10.217.0.62:44602 - 62643 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139623s 2025-08-13T20:00:14.090604728+00:00 stdout F [INFO] 10.217.0.62:38107 - 49721 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002221754s 2025-08-13T20:00:14.159122312+00:00 stdout F [INFO] 10.217.0.62:47344 - 18930 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009264744s 2025-08-13T20:00:14.159179524+00:00 stdout F [INFO] 10.217.0.62:51560 - 51651 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010123818s 2025-08-13T20:00:14.262518960+00:00 stdout F [INFO] 10.217.0.62:39670 - 1640 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000935326s 2025-08-13T20:00:14.264760374+00:00 stdout F [INFO] 10.217.0.62:55417 - 30464 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001238945s 2025-08-13T20:00:14.346152915+00:00 stdout F [INFO] 10.217.0.62:56143 - 12731 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003476399s 2025-08-13T20:00:14.346643759+00:00 stdout F [INFO] 10.217.0.62:48892 - 34607 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004013275s 2025-08-13T20:00:14.509059210+00:00 stdout F [INFO] 10.217.0.62:56488 - 15404 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001027549s 2025-08-13T20:00:14.514045602+00:00 stdout F [INFO] 10.217.0.62:47329 - 1471 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000592067s 2025-08-13T20:00:14.622904326+00:00 stdout F [INFO] 10.217.0.62:56101 - 63034 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001539474s 2025-08-13T20:00:14.622904326+00:00 stdout F [INFO] 10.217.0.62:42829 - 34852 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001581765s 2025-08-13T20:00:14.723699989+00:00 stdout F [INFO] 10.217.0.62:38772 - 52371 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000794353s 2025-08-13T20:00:14.723699989+00:00 stdout F [INFO] 10.217.0.62:50000 - 58314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000788752s 2025-08-13T20:00:14.852071560+00:00 stdout F [INFO] 10.217.0.19:57963 - 54633 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001571125s 2025-08-13T20:00:14.852071560+00:00 stdout F [INFO] 10.217.0.19:49761 - 48106 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00210259s 2025-08-13T20:00:14.891880995+00:00 stdout F [INFO] 10.217.0.62:45637 - 43595 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002088s 2025-08-13T20:00:14.891880995+00:00 stdout F [INFO] 10.217.0.62:46509 - 9221 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002678606s 2025-08-13T20:00:14.902471267+00:00 stdout F [INFO] 10.217.0.19:44812 - 34538 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001137942s 2025-08-13T20:00:14.902471267+00:00 stdout F [INFO] 10.217.0.19:39668 - 49216 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000528135s 2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.19:60068 - 15629 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.024030195s 2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.19:41174 - 41426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.024001304s 2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.62:42114 - 47386 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.016223713s 2025-08-13T20:00:15.017236019+00:00 stdout F [INFO] 10.217.0.62:51210 - 5963 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.037691795s 2025-08-13T20:00:15.061301206+00:00 stdout F [INFO] 10.217.0.19:35651 - 11770 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003591183s 2025-08-13T20:00:15.087382469+00:00 stdout F [INFO] 10.217.0.19:37849 - 7839 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003403527s 2025-08-13T20:00:15.093767751+00:00 stdout F [INFO] 10.217.0.62:46650 - 34229 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001093931s 2025-08-13T20:00:15.093767751+00:00 stdout F [INFO] 10.217.0.62:39635 - 29161 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00105456s 2025-08-13T20:00:15.226151576+00:00 stdout F [INFO] 10.217.0.19:46945 - 33977 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000949407s 2025-08-13T20:00:15.226151576+00:00 stdout F [INFO] 10.217.0.19:57137 - 54899 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001574874s 2025-08-13T20:00:15.240936918+00:00 stdout F [INFO] 10.217.0.62:42099 - 59560 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002851612s 2025-08-13T20:00:15.240936918+00:00 stdout F [INFO] 10.217.0.62:41175 - 29037 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002982265s 2025-08-13T20:00:15.364911513+00:00 stdout F [INFO] 10.217.0.62:49482 - 56674 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007002209s 2025-08-13T20:00:15.368139115+00:00 stdout F [INFO] 10.217.0.62:53468 - 9573 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002738768s 2025-08-13T20:00:15.382737091+00:00 stdout F [INFO] 10.217.0.19:57361 - 17104 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106734s 2025-08-13T20:00:15.382737091+00:00 stdout F [INFO] 10.217.0.19:55283 - 16862 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.011326993s 2025-08-13T20:00:15.433983542+00:00 stdout F [INFO] 10.217.0.62:55667 - 33923 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004534109s 2025-08-13T20:00:15.433983542+00:00 stdout F [INFO] 10.217.0.62:34787 - 37314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004371375s 2025-08-13T20:00:15.444336067+00:00 stdout F [INFO] 10.217.0.19:51870 - 64590 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001194404s 2025-08-13T20:00:15.446909311+00:00 stdout F [INFO] 10.217.0.19:43903 - 19405 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001676148s 2025-08-13T20:00:15.517874164+00:00 stdout F [INFO] 10.217.0.62:38759 - 35666 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003969233s 2025-08-13T20:00:15.531892114+00:00 stdout F [INFO] 10.217.0.62:56587 - 13504 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01158042s 2025-08-13T20:00:15.573548082+00:00 stdout F [INFO] 10.217.0.62:52776 - 23634 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00211457s 2025-08-13T20:00:15.573548082+00:00 stdout F [INFO] 10.217.0.62:42107 - 27037 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002034778s 2025-08-13T20:00:15.654391257+00:00 stdout F [INFO] 10.217.0.62:32812 - 42091 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00282523s 2025-08-13T20:00:15.656534878+00:00 stdout F [INFO] 10.217.0.62:38907 - 27002 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002048938s 2025-08-13T20:00:15.734330996+00:00 stdout F [INFO] 10.217.0.62:33135 - 32921 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003792468s 2025-08-13T20:00:15.734330996+00:00 stdout F [INFO] 10.217.0.62:33955 - 62297 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004643453s 2025-08-13T20:00:15.802234793+00:00 stdout F [INFO] 10.217.0.62:49884 - 1877 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00387168s 2025-08-13T20:00:15.802278394+00:00 stdout F [INFO] 10.217.0.62:49483 - 64427 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00490845s 2025-08-13T20:00:15.873532576+00:00 stdout F [INFO] 10.217.0.62:43741 - 36144 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000938287s 2025-08-13T20:00:15.873532576+00:00 stdout F [INFO] 10.217.0.62:41760 - 12545 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000846474s 2025-08-13T20:00:15.908057300+00:00 stdout F [INFO] 10.217.0.62:33669 - 26634 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001498842s 2025-08-13T20:00:15.908057300+00:00 stdout F [INFO] 10.217.0.62:49123 - 62305 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001537134s 2025-08-13T20:00:15.958915760+00:00 stdout F [INFO] 10.217.0.62:36466 - 6221 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000944747s 2025-08-13T20:00:15.958915760+00:00 stdout F [INFO] 10.217.0.62:45514 - 61891 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001165713s 2025-08-13T20:00:16.003345277+00:00 stdout F [INFO] 10.217.0.62:35535 - 24579 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000768992s 2025-08-13T20:00:16.003458100+00:00 stdout F [INFO] 10.217.0.62:54227 - 43386 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000751341s 2025-08-13T20:00:16.025173939+00:00 stdout F [INFO] 10.217.0.62:47898 - 47331 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009788199s 2025-08-13T20:00:16.028196506+00:00 stdout F [INFO] 10.217.0.62:53790 - 34665 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011007464s 2025-08-13T20:00:16.063730659+00:00 stdout F [INFO] 10.217.0.19:38695 - 5246 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002941634s 2025-08-13T20:00:16.070752309+00:00 stdout F [INFO] 10.217.0.19:34631 - 33937 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007993128s 2025-08-13T20:00:16.071029697+00:00 stdout F [INFO] 10.217.0.62:45478 - 2016 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000954457s 2025-08-13T20:00:16.073662712+00:00 stdout F [INFO] 10.217.0.62:60256 - 52513 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001386379s 2025-08-13T20:00:16.115580247+00:00 stdout F [INFO] 10.217.0.62:60048 - 4345 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000984398s 2025-08-13T20:00:16.115580247+00:00 stdout F [INFO] 10.217.0.62:60259 - 45250 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002088889s 2025-08-13T20:00:16.150981417+00:00 stdout F [INFO] 10.217.0.62:33976 - 29295 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010768027s 2025-08-13T20:00:16.159019016+00:00 stdout F [INFO] 10.217.0.62:53332 - 40899 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012587289s 2025-08-13T20:00:16.197911625+00:00 stdout F [INFO] 10.217.0.62:46654 - 45279 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000622438s 2025-08-13T20:00:16.200930001+00:00 stdout F [INFO] 10.217.0.62:32870 - 21450 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001038189s 2025-08-13T20:00:16.205693417+00:00 stdout F [INFO] 10.217.0.28:51187 - 30233 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001215535s 2025-08-13T20:00:16.205693417+00:00 stdout F [INFO] 10.217.0.28:51035 - 51486 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000806912s 2025-08-13T20:00:16.336719233+00:00 stdout F [INFO] 10.217.0.62:60998 - 63904 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001224225s 2025-08-13T20:00:16.336825176+00:00 stdout F [INFO] 10.217.0.62:33232 - 15158 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001088981s 2025-08-13T20:00:16.356991191+00:00 stdout F [INFO] 10.217.0.62:41518 - 40868 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000610358s 2025-08-13T20:00:16.357465835+00:00 stdout F [INFO] 10.217.0.62:54709 - 24528 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001147052s 2025-08-13T20:00:16.379975566+00:00 stdout F [INFO] 10.217.0.62:57672 - 38980 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000834984s 2025-08-13T20:00:16.380039328+00:00 stdout F [INFO] 10.217.0.62:49914 - 18773 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000918706s 2025-08-13T20:00:16.407303966+00:00 stdout F [INFO] 10.217.0.62:52427 - 11793 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001208755s 2025-08-13T20:00:16.408590282+00:00 stdout F [INFO] 10.217.0.62:43965 - 9519 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002598674s 2025-08-13T20:00:16.452031651+00:00 stdout F [INFO] 10.217.0.62:55006 - 16870 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000919176s 2025-08-13T20:00:16.452031651+00:00 stdout F [INFO] 10.217.0.62:49785 - 28542 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001365109s 2025-08-13T20:00:17.145072472+00:00 stdout F [INFO] 10.217.0.19:41177 - 20131 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001652497s 2025-08-13T20:00:17.148561112+00:00 stdout F [INFO] 10.217.0.19:37598 - 42797 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006819774s 2025-08-13T20:00:17.197991761+00:00 stdout F [INFO] 10.217.0.19:33492 - 52647 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001516074s 2025-08-13T20:00:17.198367222+00:00 stdout F [INFO] 10.217.0.19:45055 - 1509 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001945665s 2025-08-13T20:00:17.757549517+00:00 stdout F [INFO] 10.217.0.62:43319 - 34607 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001677958s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.19:55481 - 21056 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000947157s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.19:58786 - 15965 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000674169s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.62:33169 - 37945 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001128812s 2025-08-13T20:00:17.816943650+00:00 stdout F [INFO] 10.217.0.62:42205 - 36373 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002212463s 2025-08-13T20:00:17.816943650+00:00 stdout F [INFO] 10.217.0.62:37387 - 16790 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001350018s 2025-08-13T20:00:17.868672735+00:00 stdout F [INFO] 10.217.0.62:52021 - 9648 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001696059s 2025-08-13T20:00:17.874387088+00:00 stdout F [INFO] 10.217.0.62:40260 - 51115 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000826174s 2025-08-13T20:00:17.998417495+00:00 stdout F [INFO] 10.217.0.62:47087 - 30590 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000785533s 2025-08-13T20:00:18.001043570+00:00 stdout F [INFO] 10.217.0.62:47798 - 1955 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003256683s 2025-08-13T20:00:18.093719402+00:00 stdout F [INFO] 10.217.0.62:58441 - 33701 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021269306s 2025-08-13T20:00:18.093719402+00:00 stdout F [INFO] 10.217.0.62:41589 - 40585 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.030229852s 2025-08-13T20:00:18.296816152+00:00 stdout F [INFO] 10.217.0.19:51832 - 36823 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003318795s 2025-08-13T20:00:18.301997970+00:00 stdout F [INFO] 10.217.0.19:39903 - 35 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001972347s 2025-08-13T20:00:18.350136493+00:00 stdout F [INFO] 10.217.0.19:58831 - 64918 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002482641s 2025-08-13T20:00:18.350136493+00:00 stdout F [INFO] 10.217.0.19:42371 - 39239 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006809914s 2025-08-13T20:00:18.407327564+00:00 stdout F [INFO] 10.217.0.62:52472 - 11023 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011434426s 2025-08-13T20:00:18.423120254+00:00 stdout F [INFO] 10.217.0.62:47420 - 64726 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.020940467s 2025-08-13T20:00:18.629197680+00:00 stdout F [INFO] 10.217.0.62:37245 - 2390 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002279585s 2025-08-13T20:00:18.631098444+00:00 stdout F [INFO] 10.217.0.62:50831 - 11251 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003223551s 2025-08-13T20:00:18.761415450+00:00 stdout F [INFO] 10.217.0.62:46429 - 45837 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001951445s 2025-08-13T20:00:18.815524273+00:00 stdout F [INFO] 10.217.0.62:36125 - 11874 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.033757773s 2025-08-13T20:00:18.929433271+00:00 stdout F [INFO] 10.217.0.19:57431 - 54919 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001774561s 2025-08-13T20:00:18.940378423+00:00 stdout F [INFO] 10.217.0.19:56896 - 39088 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002388608s 2025-08-13T20:00:19.114070866+00:00 stdout F [INFO] 10.217.0.62:33352 - 47244 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021087921s 2025-08-13T20:00:19.114070866+00:00 stdout F [INFO] 10.217.0.62:60188 - 11847 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021673368s 2025-08-13T20:00:19.237767773+00:00 stdout F [INFO] 10.217.0.62:56582 - 19683 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001947166s 2025-08-13T20:00:19.260334966+00:00 stdout F [INFO] 10.217.0.62:33301 - 31898 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011323893s 2025-08-13T20:00:19.527445143+00:00 stdout F [INFO] 10.217.0.19:60066 - 34703 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001614406s 2025-08-13T20:00:19.528323238+00:00 stdout F [INFO] 10.217.0.19:60305 - 37834 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002391868s 2025-08-13T20:00:20.196990224+00:00 stdout F [INFO] 10.217.0.19:52258 - 25009 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002468161s 2025-08-13T20:00:20.198550649+00:00 stdout F [INFO] 10.217.0.19:38223 - 9118 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007779282s 2025-08-13T20:00:20.476452013+00:00 stdout F [INFO] 10.217.0.62:54708 - 21880 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002848321s 2025-08-13T20:00:20.476452013+00:00 stdout F [INFO] 10.217.0.62:59683 - 4772 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005088505s 2025-08-13T20:00:20.675957121+00:00 stdout F [INFO] 10.217.0.62:50588 - 46196 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004520369s 2025-08-13T20:00:20.714120399+00:00 stdout F [INFO] 10.217.0.62:41010 - 48406 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009880922s 2025-08-13T20:00:20.839418082+00:00 stdout F [INFO] 10.217.0.62:47437 - 37198 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006881437s 2025-08-13T20:00:20.839561556+00:00 stdout F [INFO] 10.217.0.62:36545 - 56669 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007588906s 2025-08-13T20:00:20.923534181+00:00 stdout F [INFO] 10.217.0.62:41433 - 54004 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000731301s 2025-08-13T20:00:20.923853600+00:00 stdout F [INFO] 10.217.0.62:58985 - 8745 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001453541s 2025-08-13T20:00:21.140617421+00:00 stdout F [INFO] 10.217.0.62:50318 - 16622 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012183297s 2025-08-13T20:00:21.140617421+00:00 stdout F [INFO] 10.217.0.62:43541 - 15801 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012824826s 2025-08-13T20:00:21.197936515+00:00 stdout F [INFO] 10.217.0.62:49442 - 22994 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004789596s 2025-08-13T20:00:21.198102180+00:00 stdout F [INFO] 10.217.0.28:32829 - 14166 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002330176s 2025-08-13T20:00:21.202505155+00:00 stdout F [INFO] 10.217.0.62:33830 - 17870 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00597609s 2025-08-13T20:00:21.202505155+00:00 stdout F [INFO] 10.217.0.28:53244 - 10983 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002501252s 2025-08-13T20:00:21.450165117+00:00 stdout F [INFO] 10.217.0.62:50542 - 30667 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002195952s 2025-08-13T20:00:21.450165117+00:00 stdout F [INFO] 10.217.0.62:37026 - 47114 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002694096s 2025-08-13T20:00:21.548909932+00:00 stdout F [INFO] 10.217.0.62:58992 - 14098 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001445921s 2025-08-13T20:00:21.548909932+00:00 stdout F [INFO] 10.217.0.62:59478 - 12237 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007775702s 2025-08-13T20:00:21.665943849+00:00 stdout F [INFO] 10.217.0.62:53629 - 30765 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002404769s 2025-08-13T20:00:21.670443357+00:00 stdout F [INFO] 10.217.0.62:59632 - 3278 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008741309s 2025-08-13T20:00:21.703125519+00:00 stdout F [INFO] 10.217.0.62:41766 - 63549 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001135602s 2025-08-13T20:00:21.703719906+00:00 stdout F [INFO] 10.217.0.62:46054 - 10024 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000784832s 2025-08-13T20:00:22.579485308+00:00 stdout F [INFO] 10.217.0.19:50242 - 14711 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003977904s 2025-08-13T20:00:22.579485308+00:00 stdout F [INFO] 10.217.0.19:48700 - 53632 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004684324s 2025-08-13T20:00:25.731086805+00:00 stdout F [INFO] 10.217.0.8:59710 - 37958 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003472519s 2025-08-13T20:00:25.731086805+00:00 stdout F [INFO] 10.217.0.8:38997 - 4199 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000922526s 2025-08-13T20:00:25.739917427+00:00 stdout F [INFO] 10.217.0.8:37909 - 64583 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.005826467s 2025-08-13T20:00:25.743884900+00:00 stdout F [INFO] 10.217.0.8:51606 - 4042 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001596626s 2025-08-13T20:00:26.211451573+00:00 stdout F [INFO] 10.217.0.28:41789 - 33440 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004783477s 2025-08-13T20:00:26.215747035+00:00 stdout F [INFO] 10.217.0.28:54682 - 18635 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005841127s 2025-08-13T20:00:27.306241280+00:00 stdout F [INFO] 10.217.0.37:49903 - 59099 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002873802s 2025-08-13T20:00:27.306241280+00:00 stdout F [INFO] 10.217.0.37:49337 - 16367 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00315367s 2025-08-13T20:00:27.610176267+00:00 stdout F [INFO] 10.217.0.57:48884 - 35058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001445402s 2025-08-13T20:00:27.611670049+00:00 stdout F [INFO] 10.217.0.57:47676 - 42222 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001946096s 2025-08-13T20:00:27.716045735+00:00 stdout F [INFO] 10.217.0.57:44140 - 9399 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.013504715s 2025-08-13T20:00:27.717506627+00:00 stdout F [INFO] 10.217.0.57:45630 - 4081 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.013814204s 2025-08-13T20:00:31.268346355+00:00 stdout F [INFO] 10.217.0.28:57797 - 30523 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001610226s 2025-08-13T20:00:31.268346355+00:00 stdout F [INFO] 10.217.0.28:51665 - 51102 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002051629s 2025-08-13T20:00:31.425624620+00:00 stdout F [INFO] 10.217.0.62:40352 - 63069 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003710086s 2025-08-13T20:00:31.437602572+00:00 stdout F [INFO] 10.217.0.62:41024 - 28867 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004605691s 2025-08-13T20:00:31.661447155+00:00 stdout F [INFO] 10.217.0.62:44788 - 55371 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001476762s 2025-08-13T20:00:31.689217696+00:00 stdout F [INFO] 10.217.0.62:44360 - 51842 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.023932763s 2025-08-13T20:00:31.811253746+00:00 stdout F [INFO] 10.217.0.62:57778 - 8850 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001100381s 2025-08-13T20:00:31.811253746+00:00 stdout F [INFO] 10.217.0.62:60857 - 53999 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000835464s 2025-08-13T20:00:31.961965074+00:00 stdout F [INFO] 10.217.0.62:34727 - 35445 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002562353s 2025-08-13T20:00:31.964171786+00:00 stdout F [INFO] 10.217.0.62:41993 - 40439 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006554117s 2025-08-13T20:00:32.113323289+00:00 stdout F [INFO] 10.217.0.62:58232 - 45749 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001293527s 2025-08-13T20:00:32.114298577+00:00 stdout F [INFO] 10.217.0.62:35112 - 22784 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000848074s 2025-08-13T20:00:32.737430424+00:00 stdout F [INFO] 10.217.0.57:38966 - 30327 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000583237s 2025-08-13T20:00:32.737498686+00:00 stdout F [INFO] 10.217.0.57:50692 - 52016 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001420221s 2025-08-13T20:00:33.045863489+00:00 stdout F [INFO] 10.217.0.62:58824 - 1702 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001806381s 2025-08-13T20:00:33.058230141+00:00 stdout F [INFO] 10.217.0.62:36864 - 16355 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008756479s 2025-08-13T20:00:33.577261071+00:00 stdout F [INFO] 10.217.0.62:43777 - 9593 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001006539s 2025-08-13T20:00:33.577261071+00:00 stdout F [INFO] 10.217.0.62:53467 - 42328 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000907196s 2025-08-13T20:00:33.863106352+00:00 stdout F [INFO] 10.217.0.62:38630 - 31176 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001723559s 2025-08-13T20:00:33.872896041+00:00 stdout F [INFO] 10.217.0.62:60120 - 36301 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002984125s 2025-08-13T20:00:33.968965360+00:00 stdout F [INFO] 10.217.0.62:34743 - 13834 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00177437s 2025-08-13T20:00:33.968965360+00:00 stdout F [INFO] 10.217.0.62:44966 - 17140 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002161702s 2025-08-13T20:00:34.058497063+00:00 stdout F [INFO] 10.217.0.62:43193 - 43833 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001836962s 2025-08-13T20:00:34.058497063+00:00 stdout F [INFO] 10.217.0.62:34426 - 45919 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002009107s 2025-08-13T20:00:35.433094038+00:00 stdout F [INFO] 10.217.0.19:49445 - 33612 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000984848s 2025-08-13T20:00:35.433094038+00:00 stdout F [INFO] 10.217.0.19:41366 - 18651 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000927016s 2025-08-13T20:00:35.453677195+00:00 stdout F [INFO] 10.217.0.8:57298 - 52946 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007615537s 2025-08-13T20:00:35.453677195+00:00 stdout F [INFO] 10.217.0.8:38349 - 16220 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006572848s 2025-08-13T20:00:35.468442306+00:00 stdout F [INFO] 10.217.0.8:33452 - 50656 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012375263s 2025-08-13T20:00:35.468442306+00:00 stdout F [INFO] 10.217.0.8:35773 - 41908 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012355652s 2025-08-13T20:00:35.690673373+00:00 stdout F [INFO] 10.217.0.62:55989 - 53229 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0014117s 2025-08-13T20:00:35.690673373+00:00 stdout F [INFO] 10.217.0.62:33065 - 53085 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001248106s 2025-08-13T20:00:35.730503909+00:00 stdout F [INFO] 10.217.0.19:36292 - 10939 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005190688s 2025-08-13T20:00:35.770042926+00:00 stdout F [INFO] 10.217.0.19:54938 - 27133 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.048229325s 2025-08-13T20:00:35.897148751+00:00 stdout F [INFO] 10.217.0.62:51579 - 52497 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010913982s 2025-08-13T20:00:35.897148751+00:00 stdout F [INFO] 10.217.0.62:32787 - 24254 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011512098s 2025-08-13T20:00:35.952043906+00:00 stdout F [INFO] 10.217.0.19:33608 - 30398 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000857724s 2025-08-13T20:00:35.952043906+00:00 stdout F [INFO] 10.217.0.19:39154 - 13296 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001591015s 2025-08-13T20:00:36.052158070+00:00 stdout F [INFO] 10.217.0.62:57742 - 50873 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007803622s 2025-08-13T20:00:36.052158070+00:00 stdout F [INFO] 10.217.0.62:50217 - 39537 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008163043s 2025-08-13T20:00:36.085994515+00:00 stdout F [INFO] 10.217.0.62:46973 - 4617 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000783212s 2025-08-13T20:00:36.085994515+00:00 stdout F [INFO] 10.217.0.62:48190 - 9675 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000738442s 2025-08-13T20:00:36.123936097+00:00 stdout F [INFO] 10.217.0.62:57435 - 21834 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000900436s 2025-08-13T20:00:36.123936097+00:00 stdout F [INFO] 10.217.0.62:50783 - 58379 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000881065s 2025-08-13T20:00:36.192447920+00:00 stdout F [INFO] 10.217.0.28:35258 - 57233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001739479s 2025-08-13T20:00:36.201027864+00:00 stdout F [INFO] 10.217.0.28:45699 - 57411 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010188299s 2025-08-13T20:00:36.562031178+00:00 stdout F [INFO] 10.217.0.19:51585 - 9027 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966748s 2025-08-13T20:00:36.575971016+00:00 stdout F [INFO] 10.217.0.19:34532 - 16670 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.014703769s 2025-08-13T20:00:36.983461205+00:00 stdout F [INFO] 10.217.0.19:41101 - 30346 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001602416s 2025-08-13T20:00:36.983886077+00:00 stdout F [INFO] 10.217.0.19:37160 - 8672 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001703779s 2025-08-13T20:00:37.283931612+00:00 stdout F [INFO] 10.217.0.19:51863 - 63110 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003548381s 2025-08-13T20:00:37.285398654+00:00 stdout F [INFO] 10.217.0.19:46973 - 8780 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001810482s 2025-08-13T20:00:37.544949185+00:00 stdout F [INFO] 10.217.0.19:60998 - 12802 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002060749s 2025-08-13T20:00:37.553034096+00:00 stdout F [INFO] 10.217.0.19:57940 - 9552 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010101818s 2025-08-13T20:00:37.790369403+00:00 stdout F [INFO] 10.217.0.57:43550 - 43696 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009638015s 2025-08-13T20:00:37.796695383+00:00 stdout F [INFO] 10.217.0.57:39451 - 44600 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006570608s 2025-08-13T20:00:40.173948128+00:00 stdout F [INFO] 10.217.0.60:48850 - 60117 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008025469s 2025-08-13T20:00:40.173948128+00:00 stdout F [INFO] 10.217.0.60:48884 - 7163 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011604921s 2025-08-13T20:00:40.183890401+00:00 stdout F [INFO] 10.217.0.60:53961 - 51717 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009816699s 2025-08-13T20:00:40.184241341+00:00 stdout F [INFO] 10.217.0.60:46675 - 43378 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009938023s 2025-08-13T20:00:40.236573113+00:00 stdout F [INFO] 10.217.0.62:40825 - 24788 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004861068s 2025-08-13T20:00:40.236573113+00:00 stdout F [INFO] 10.217.0.62:46421 - 61090 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005456386s 2025-08-13T20:00:40.714710607+00:00 stdout F [INFO] 10.217.0.62:59322 - 41606 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006050943s 2025-08-13T20:00:40.714710607+00:00 stdout F [INFO] 10.217.0.62:39975 - 59107 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007196325s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:45725 - 34122 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.013637669s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:44229 - 60604 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.014164904s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:43328 - 58288 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.015254415s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:38617 - 57227 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.014962096s 2025-08-13T20:00:40.910345735+00:00 stdout F [INFO] 10.217.0.60:45732 - 4038 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004323434s 2025-08-13T20:00:40.930483439+00:00 stdout F [INFO] 10.217.0.60:49251 - 15133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.021362629s 2025-08-13T20:00:40.990210252+00:00 stdout F [INFO] 10.217.0.60:40178 - 34377 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004410105s 2025-08-13T20:00:41.060271440+00:00 stdout F [INFO] 10.217.0.60:50125 - 63735 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018152517s 2025-08-13T20:00:41.075524395+00:00 stdout F [INFO] 10.217.0.62:57370 - 34368 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.017149759s 2025-08-13T20:00:41.075931257+00:00 stdout F [INFO] 10.217.0.62:41324 - 39724 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.018058485s 2025-08-13T20:00:41.190088592+00:00 stdout F [INFO] 10.217.0.60:36300 - 17768 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007318269s 2025-08-13T20:00:41.190088592+00:00 stdout F [INFO] 10.217.0.60:60017 - 36037 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016632274s 2025-08-13T20:00:41.224605096+00:00 stdout F [INFO] 10.217.0.28:51635 - 40403 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006965329s 2025-08-13T20:00:41.264087632+00:00 stdout F [INFO] 10.217.0.28:52062 - 1904 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009673976s 2025-08-13T20:00:41.373989136+00:00 stdout F [INFO] 10.217.0.62:56057 - 9966 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001195254s 2025-08-13T20:00:41.387945613+00:00 stdout F [INFO] 10.217.0.62:46435 - 5126 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.016528461s 2025-08-13T20:00:41.715311878+00:00 stdout F [INFO] 10.217.0.62:50311 - 25129 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0063246s 2025-08-13T20:00:41.717438859+00:00 stdout F [INFO] 10.217.0.62:33386 - 40928 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006839125s 2025-08-13T20:00:41.738119748+00:00 stdout F [INFO] 10.217.0.60:52373 - 23632 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007761871s 2025-08-13T20:00:41.801297700+00:00 stdout F [INFO] 10.217.0.60:35320 - 1700 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.058023705s 2025-08-13T20:00:41.801297700+00:00 stdout F [INFO] 10.217.0.60:55328 - 10605 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.058711244s 2025-08-13T20:00:41.829259987+00:00 stdout F [INFO] 10.217.0.60:53501 - 19974 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.05925692s 2025-08-13T20:00:41.976174496+00:00 stdout F [INFO] 10.217.0.60:52578 - 9173 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008371928s 2025-08-13T20:00:41.976497765+00:00 stdout F [INFO] 10.217.0.60:40532 - 45632 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00071053s 2025-08-13T20:00:41.976709652+00:00 stdout F [INFO] 10.217.0.60:34514 - 826 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001254386s 2025-08-13T20:00:41.977127923+00:00 stdout F [INFO] 10.217.0.60:57847 - 46655 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009665195s 2025-08-13T20:00:42.007513300+00:00 stdout F [INFO] 10.217.0.60:54739 - 10924 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003385036s 2025-08-13T20:00:42.007513300+00:00 stdout F [INFO] 10.217.0.60:59114 - 30470 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005906868s 2025-08-13T20:00:42.084584237+00:00 stdout F [INFO] 10.217.0.60:56862 - 58713 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003955053s 2025-08-13T20:00:42.087243233+00:00 stdout F [INFO] 10.217.0.60:36311 - 27915 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002747588s 2025-08-13T20:00:42.132939816+00:00 stdout F [INFO] 10.217.0.60:52763 - 16178 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009929473s 2025-08-13T20:00:42.133577344+00:00 stdout F [INFO] 10.217.0.60:49355 - 29298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012692052s 2025-08-13T20:00:42.153883023+00:00 stdout F [INFO] 10.217.0.19:47374 - 58670 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105662s 2025-08-13T20:00:42.153944575+00:00 stdout F [INFO] 10.217.0.19:53419 - 55595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000777443s 2025-08-13T20:00:42.164854346+00:00 stdout F [INFO] 10.217.0.60:32773 - 53922 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.006955098s 2025-08-13T20:00:42.164854346+00:00 stdout F [INFO] 10.217.0.60:33864 - 8621 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.005889688s 2025-08-13T20:00:42.206111563+00:00 stdout F [INFO] 10.217.0.60:57809 - 30579 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005044634s 2025-08-13T20:00:42.206578306+00:00 stdout F [INFO] 10.217.0.60:58796 - 61858 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006533616s 2025-08-13T20:00:42.290129518+00:00 stdout F [INFO] 10.217.0.60:47662 - 60054 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008448151s 2025-08-13T20:00:42.290129518+00:00 stdout F [INFO] 10.217.0.60:44908 - 34608 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009088019s 2025-08-13T20:00:42.372369263+00:00 stdout F [INFO] 10.217.0.60:50074 - 31458 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003794648s 2025-08-13T20:00:42.374170995+00:00 stdout F [INFO] 10.217.0.60:59507 - 41881 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004698834s 2025-08-13T20:00:42.381148524+00:00 stdout F [INFO] 10.217.0.60:53163 - 6696 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00353352s 2025-08-13T20:00:42.381148524+00:00 stdout F [INFO] 10.217.0.60:47961 - 23913 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003714746s 2025-08-13T20:00:42.395730410+00:00 stdout F [INFO] 10.217.0.60:51772 - 27470 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004109837s 2025-08-13T20:00:42.396155742+00:00 stdout F [INFO] 10.217.0.60:56559 - 41180 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003775757s 2025-08-13T20:00:42.465712745+00:00 stdout F [INFO] 10.217.0.60:52007 - 64110 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002959514s 2025-08-13T20:00:42.466142467+00:00 stdout F [INFO] 10.217.0.60:46205 - 30385 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000848104s 2025-08-13T20:00:42.466550819+00:00 stdout F [INFO] 10.217.0.60:39664 - 25521 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003301974s 2025-08-13T20:00:42.466616831+00:00 stdout F [INFO] 10.217.0.60:58488 - 44759 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001436741s 2025-08-13T20:00:42.501886346+00:00 stdout F [INFO] 10.217.0.60:53857 - 2484 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005065864s 2025-08-13T20:00:42.502274967+00:00 stdout F [INFO] 10.217.0.60:37557 - 36840 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006229778s 2025-08-13T20:00:42.541595989+00:00 stdout F [INFO] 10.217.0.60:49941 - 58907 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001070981s 2025-08-13T20:00:42.542671779+00:00 stdout F [INFO] 10.217.0.60:33250 - 47268 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001080841s 2025-08-13T20:00:42.549360000+00:00 stdout F [INFO] 10.217.0.60:36173 - 36326 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00417675s 2025-08-13T20:00:42.549577626+00:00 stdout F [INFO] 10.217.0.60:44656 - 34287 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00457103s 2025-08-13T20:00:42.612258244+00:00 stdout F [INFO] 10.217.0.60:36746 - 61964 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.007478603s 2025-08-13T20:00:42.612546452+00:00 stdout F [INFO] 10.217.0.60:48936 - 60887 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.008014958s 2025-08-13T20:00:42.612857751+00:00 stdout F [INFO] 10.217.0.60:41597 - 9052 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010254542s 2025-08-13T20:00:42.613108138+00:00 stdout F [INFO] 10.217.0.60:60388 - 16520 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010896681s 2025-08-13T20:00:42.626540471+00:00 stdout F [INFO] 10.217.0.19:47160 - 61990 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001217815s 2025-08-13T20:00:42.626767237+00:00 stdout F [INFO] 10.217.0.19:35835 - 10492 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001272586s 2025-08-13T20:00:42.635307571+00:00 stdout F [INFO] 10.217.0.60:47210 - 51012 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00493136s 2025-08-13T20:00:42.635508907+00:00 stdout F [INFO] 10.217.0.60:48041 - 17561 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004893059s 2025-08-13T20:00:42.662218808+00:00 stdout F [INFO] 10.217.0.60:52906 - 27055 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009696496s 2025-08-13T20:00:42.662218808+00:00 stdout F [INFO] 10.217.0.60:36666 - 6155 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009805049s 2025-08-13T20:00:42.716915738+00:00 stdout F [INFO] 10.217.0.60:60334 - 9510 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005900578s 2025-08-13T20:00:42.716915738+00:00 stdout F [INFO] 10.217.0.60:56325 - 59807 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006511966s 2025-08-13T20:00:42.722290411+00:00 stdout F [INFO] 10.217.0.60:56599 - 47234 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.007804482s 2025-08-13T20:00:42.722639381+00:00 stdout F [INFO] 10.217.0.60:40308 - 51070 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.008097021s 2025-08-13T20:00:42.751493194+00:00 stdout F [INFO] 10.217.0.60:42287 - 5955 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068765s 2025-08-13T20:00:42.753440729+00:00 stdout F [INFO] 10.217.0.60:43479 - 31758 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000434712s 2025-08-13T20:00:42.758242896+00:00 stdout F [INFO] 10.217.0.57:34553 - 58336 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003449808s 2025-08-13T20:00:42.758242896+00:00 stdout F [INFO] 10.217.0.57:55698 - 58679 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003164171s 2025-08-13T20:00:42.799761920+00:00 stdout F [INFO] 10.217.0.60:57471 - 53951 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001028359s 2025-08-13T20:00:42.799761920+00:00 stdout F [INFO] 10.217.0.60:36137 - 30042 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000925436s 2025-08-13T20:00:42.821276633+00:00 stdout F [INFO] 10.217.0.60:51024 - 25845 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002521761s 2025-08-13T20:00:42.852377870+00:00 stdout F [INFO] 10.217.0.60:45732 - 49655 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.015162122s 2025-08-13T20:00:42.877721733+00:00 stdout F [INFO] 10.217.0.60:36587 - 34390 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008998267s 2025-08-13T20:00:42.877761484+00:00 stdout F [INFO] 10.217.0.60:41455 - 12455 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009635164s 2025-08-13T20:00:42.934947915+00:00 stdout F [INFO] 10.217.0.60:48487 - 16479 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001001568s 2025-08-13T20:00:42.934947915+00:00 stdout F [INFO] 10.217.0.60:39248 - 18897 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001283497s 2025-08-13T20:00:42.992199317+00:00 stdout F [INFO] 10.217.0.60:38306 - 15345 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001791081s 2025-08-13T20:00:42.997569910+00:00 stdout F [INFO] 10.217.0.60:43691 - 39626 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006527846s 2025-08-13T20:00:43.062744239+00:00 stdout F [INFO] 10.217.0.60:43124 - 10054 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002526562s 2025-08-13T20:00:43.062819551+00:00 stdout F [INFO] 10.217.0.60:43696 - 21466 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002597884s 2025-08-13T20:00:43.124412067+00:00 stdout F [INFO] 10.217.0.60:40037 - 48306 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001489412s 2025-08-13T20:00:43.124412067+00:00 stdout F [INFO] 10.217.0.60:56776 - 5951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001802502s 2025-08-13T20:00:43.124448128+00:00 stdout F [INFO] 10.217.0.60:35495 - 36278 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001221295s 2025-08-13T20:00:43.125673583+00:00 stdout F [INFO] 10.217.0.60:33856 - 56194 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002322437s 2025-08-13T20:00:43.147262809+00:00 stdout F [INFO] 10.217.0.60:36329 - 25227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003368526s 2025-08-13T20:00:43.148305638+00:00 stdout F [INFO] 10.217.0.60:48432 - 30770 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003434678s 2025-08-13T20:00:43.187327211+00:00 stdout F [INFO] 10.217.0.60:42719 - 44038 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002487311s 2025-08-13T20:00:43.187622319+00:00 stdout F [INFO] 10.217.0.60:33733 - 28042 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002977814s 2025-08-13T20:00:43.193190838+00:00 stdout F [INFO] 10.217.0.60:40321 - 64415 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001383639s 2025-08-13T20:00:43.193190838+00:00 stdout F [INFO] 10.217.0.60:58984 - 37137 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001215215s 2025-08-13T20:00:43.281338372+00:00 stdout F [INFO] 10.217.0.60:40457 - 23589 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001961056s 2025-08-13T20:00:43.281886087+00:00 stdout F [INFO] 10.217.0.60:52727 - 44082 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002660136s 2025-08-13T20:00:43.285920862+00:00 stdout F [INFO] 10.217.0.60:55415 - 57509 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069516s 2025-08-13T20:00:43.285920862+00:00 stdout F [INFO] 10.217.0.60:58720 - 48518 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000799473s 2025-08-13T20:00:43.352745757+00:00 stdout F [INFO] 10.217.0.60:47603 - 41130 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007655488s 2025-08-13T20:00:43.352745757+00:00 stdout F [INFO] 10.217.0.60:58734 - 17536 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007685769s 2025-08-13T20:00:43.423710250+00:00 stdout F [INFO] 10.217.0.60:51827 - 61021 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003569272s 2025-08-13T20:00:43.424613746+00:00 stdout F [INFO] 10.217.0.60:49551 - 22902 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004468317s 2025-08-13T20:00:43.425194043+00:00 stdout F [INFO] 10.217.0.60:43655 - 46137 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004508809s 2025-08-13T20:00:43.425534112+00:00 stdout F [INFO] 10.217.0.60:57763 - 5602 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005004903s 2025-08-13T20:00:43.470829214+00:00 stdout F [INFO] 10.217.0.60:60522 - 63475 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001313588s 2025-08-13T20:00:43.472331027+00:00 stdout F [INFO] 10.217.0.60:47979 - 23540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002337996s 2025-08-13T20:00:43.497125284+00:00 stdout F [INFO] 10.217.0.60:39595 - 21645 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001085451s 2025-08-13T20:00:43.497502724+00:00 stdout F [INFO] 10.217.0.60:44441 - 51694 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001836472s 2025-08-13T20:00:43.543204928+00:00 stdout F [INFO] 10.217.0.60:44521 - 34550 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001911934s 2025-08-13T20:00:43.546595764+00:00 stdout F [INFO] 10.217.0.60:52466 - 14265 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000969237s 2025-08-13T20:00:43.557742862+00:00 stdout F [INFO] 10.217.0.60:59650 - 5341 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001627867s 2025-08-13T20:00:43.557742862+00:00 stdout F [INFO] 10.217.0.60:51584 - 49883 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001501333s 2025-08-13T20:00:43.600072219+00:00 stdout F [INFO] 10.217.0.60:49431 - 52323 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845184s 2025-08-13T20:00:43.600110150+00:00 stdout F [INFO] 10.217.0.60:48025 - 60773 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000760792s 2025-08-13T20:00:43.611390982+00:00 stdout F [INFO] 10.217.0.60:40820 - 1549 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070815s 2025-08-13T20:00:43.611544296+00:00 stdout F [INFO] 10.217.0.60:47757 - 10448 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754742s 2025-08-13T20:00:43.618045112+00:00 stdout F [INFO] 10.217.0.60:53912 - 30298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001381489s 2025-08-13T20:00:43.619334828+00:00 stdout F [INFO] 10.217.0.60:44704 - 44979 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000822413s 2025-08-13T20:00:43.662573931+00:00 stdout F [INFO] 10.217.0.60:55017 - 30285 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001868524s 2025-08-13T20:00:43.662573931+00:00 stdout F [INFO] 10.217.0.60:59906 - 48194 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001780561s 2025-08-13T20:00:43.678353481+00:00 stdout F [INFO] 10.217.0.60:55405 - 6328 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000933286s 2025-08-13T20:00:43.678353481+00:00 stdout F [INFO] 10.217.0.60:47560 - 58287 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001986826s 2025-08-13T20:00:43.679677589+00:00 stdout F [INFO] 10.217.0.60:41177 - 47820 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001985617s 2025-08-13T20:00:43.686079122+00:00 stdout F [INFO] 10.217.0.60:35239 - 45681 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008523043s 2025-08-13T20:00:43.729225602+00:00 stdout F [INFO] 10.217.0.60:36044 - 4180 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001164383s 2025-08-13T20:00:43.729225602+00:00 stdout F [INFO] 10.217.0.60:41479 - 54020 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001488832s 2025-08-13T20:00:43.749673445+00:00 stdout F [INFO] 10.217.0.60:51760 - 2234 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001074331s 2025-08-13T20:00:43.750405596+00:00 stdout F [INFO] 10.217.0.60:56738 - 50301 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001634247s 2025-08-13T20:00:43.788935744+00:00 stdout F [INFO] 10.217.0.60:58467 - 36904 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003258043s 2025-08-13T20:00:43.788935744+00:00 stdout F [INFO] 10.217.0.60:51533 - 39214 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003825169s 2025-08-13T20:00:43.806927437+00:00 stdout F [INFO] 10.217.0.60:34408 - 40072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070067s 2025-08-13T20:00:43.807457072+00:00 stdout F [INFO] 10.217.0.60:49017 - 43687 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001157863s 2025-08-13T20:00:43.848398340+00:00 stdout F [INFO] 10.217.0.60:53118 - 52132 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000759662s 2025-08-13T20:00:43.848634737+00:00 stdout F [INFO] 10.217.0.60:35567 - 5264 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000990089s 2025-08-13T20:00:43.872894978+00:00 stdout F [INFO] 10.217.0.60:45098 - 58996 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001257796s 2025-08-13T20:00:43.876055308+00:00 stdout F [INFO] 10.217.0.60:41662 - 8062 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003664225s 2025-08-13T20:00:43.922988587+00:00 stdout F [INFO] 10.217.0.60:60175 - 57474 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000804053s 2025-08-13T20:00:43.923065349+00:00 stdout F [INFO] 10.217.0.60:58165 - 10664 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001026449s 2025-08-13T20:00:43.935124723+00:00 stdout F [INFO] 10.217.0.60:37231 - 28474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776202s 2025-08-13T20:00:43.935243866+00:00 stdout F [INFO] 10.217.0.60:34968 - 12153 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001174304s 2025-08-13T20:00:44.003189984+00:00 stdout F [INFO] 10.217.0.60:37021 - 40929 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000838734s 2025-08-13T20:00:44.003408840+00:00 stdout F [INFO] 10.217.0.60:38598 - 52744 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001257846s 2025-08-13T20:00:44.042574607+00:00 stdout F [INFO] 10.217.0.60:34231 - 46460 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000661409s 2025-08-13T20:00:44.043194384+00:00 stdout F [INFO] 10.217.0.60:58181 - 15108 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001057461s 2025-08-13T20:00:44.097995907+00:00 stdout F [INFO] 10.217.0.60:39022 - 58329 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000672049s 2025-08-13T20:00:44.098756259+00:00 stdout F [INFO] 10.217.0.60:57403 - 10634 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001475332s 2025-08-13T20:00:44.111347118+00:00 stdout F [INFO] 10.217.0.60:37653 - 37404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000602627s 2025-08-13T20:00:44.111596875+00:00 stdout F [INFO] 10.217.0.60:47515 - 2054 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467333s 2025-08-13T20:00:44.238898615+00:00 stdout F [INFO] 10.217.0.60:42808 - 16861 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000642628s 2025-08-13T20:00:44.238898615+00:00 stdout F [INFO] 10.217.0.60:48067 - 30534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001094392s 2025-08-13T20:00:44.239324577+00:00 stdout F [INFO] 10.217.0.60:35307 - 23336 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001235565s 2025-08-13T20:00:44.239324577+00:00 stdout F [INFO] 10.217.0.60:39319 - 35875 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001458431s 2025-08-13T20:00:44.306349678+00:00 stdout F [INFO] 10.217.0.60:38046 - 11013 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001185764s 2025-08-13T20:00:44.306349678+00:00 stdout F [INFO] 10.217.0.60:43413 - 39794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105296s 2025-08-13T20:00:44.308755196+00:00 stdout F [INFO] 10.217.0.60:54687 - 38217 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004081087s 2025-08-13T20:00:44.309022484+00:00 stdout F [INFO] 10.217.0.60:43534 - 29464 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004097267s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:53728 - 6570 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002602834s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:41092 - 5177 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005096716s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:35936 - 46005 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005082925s 2025-08-13T20:00:44.373211944+00:00 stdout F [INFO] 10.217.0.60:39366 - 34168 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006175236s 2025-08-13T20:00:44.467058530+00:00 stdout F [INFO] 10.217.0.60:50003 - 16 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005760904s 2025-08-13T20:00:44.470581051+00:00 stdout F [INFO] 10.217.0.60:59145 - 51973 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00663318s 2025-08-13T20:00:44.561180964+00:00 stdout F [INFO] 10.217.0.60:38896 - 9837 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007873105s 2025-08-13T20:00:44.562378618+00:00 stdout F [INFO] 10.217.0.60:57914 - 49009 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008189754s 2025-08-13T20:00:44.565446676+00:00 stdout F [INFO] 10.217.0.60:44070 - 11996 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003979883s 2025-08-13T20:00:44.570577672+00:00 stdout F [INFO] 10.217.0.60:58860 - 63938 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008014928s 2025-08-13T20:00:44.634733022+00:00 stdout F [INFO] 10.217.0.60:57237 - 12292 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00106751s 2025-08-13T20:00:44.635281027+00:00 stdout F [INFO] 10.217.0.60:50472 - 10469 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001678718s 2025-08-13T20:00:44.636369068+00:00 stdout F [INFO] 10.217.0.60:59839 - 51565 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002900223s 2025-08-13T20:00:44.636465431+00:00 stdout F [INFO] 10.217.0.60:34556 - 21555 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002778619s 2025-08-13T20:00:44.718334255+00:00 stdout F [INFO] 10.217.0.60:47654 - 43073 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009952494s 2025-08-13T20:00:44.725470629+00:00 stdout F [INFO] 10.217.0.60:37658 - 61873 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007332789s 2025-08-13T20:00:44.725470629+00:00 stdout F [INFO] 10.217.0.60:47601 - 29425 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007493024s 2025-08-13T20:00:44.725535371+00:00 stdout F [INFO] 10.217.0.60:36558 - 10190 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007303668s 2025-08-13T20:00:44.804289786+00:00 stdout F [INFO] 10.217.0.60:59348 - 55222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001225224s 2025-08-13T20:00:44.807936300+00:00 stdout F [INFO] 10.217.0.60:40745 - 55897 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004331593s 2025-08-13T20:00:44.872898843+00:00 stdout F [INFO] 10.217.0.60:35807 - 51060 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934206s 2025-08-13T20:00:44.872898843+00:00 stdout F [INFO] 10.217.0.60:45328 - 15524 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001419791s 2025-08-13T20:00:44.946685987+00:00 stdout F [INFO] 10.217.0.60:52632 - 32111 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002485871s 2025-08-13T20:00:44.948327163+00:00 stdout F [INFO] 10.217.0.60:59941 - 31677 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775922s 2025-08-13T20:00:45.044009342+00:00 stdout F [INFO] 10.217.0.60:36355 - 57491 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003067507s 2025-08-13T20:00:45.044009342+00:00 stdout F [INFO] 10.217.0.60:49443 - 46875 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004174929s 2025-08-13T20:00:45.067966155+00:00 stdout F [INFO] 10.217.0.60:45455 - 970 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007233066s 2025-08-13T20:00:45.069930651+00:00 stdout F [INFO] 10.217.0.60:47235 - 43002 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007697979s 2025-08-13T20:00:45.128341376+00:00 stdout F [INFO] 10.217.0.60:48452 - 47585 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004124938s 2025-08-13T20:00:45.128606654+00:00 stdout F [INFO] 10.217.0.60:43867 - 19263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00526546s 2025-08-13T20:00:45.135204322+00:00 stdout F [INFO] 10.217.0.60:37334 - 26237 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00631382s 2025-08-13T20:00:45.139650159+00:00 stdout F [INFO] 10.217.0.60:36882 - 30502 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006561027s 2025-08-13T20:00:45.152938708+00:00 stdout F [INFO] 10.217.0.60:49344 - 27017 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001373699s 2025-08-13T20:00:45.153824703+00:00 stdout F [INFO] 10.217.0.60:33307 - 39959 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002010507s 2025-08-13T20:00:45.207455652+00:00 stdout F [INFO] 10.217.0.60:60805 - 45507 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001168484s 2025-08-13T20:00:45.209456329+00:00 stdout F [INFO] 10.217.0.60:36405 - 47965 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00210645s 2025-08-13T20:00:45.212187877+00:00 stdout F [INFO] 10.217.0.60:40040 - 8629 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002089839s 2025-08-13T20:00:45.212291200+00:00 stdout F [INFO] 10.217.0.60:50600 - 32146 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002621445s 2025-08-13T20:00:45.286609639+00:00 stdout F [INFO] 10.217.0.60:39091 - 50762 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002044838s 2025-08-13T20:00:45.286928948+00:00 stdout F [INFO] 10.217.0.60:39286 - 6970 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0024663s 2025-08-13T20:00:45.297360786+00:00 stdout F [INFO] 10.217.0.60:45099 - 30506 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001081961s 2025-08-13T20:00:45.297486899+00:00 stdout F [INFO] 10.217.0.60:40857 - 4648 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000881935s 2025-08-13T20:00:45.358146569+00:00 stdout F [INFO] 10.217.0.60:54841 - 9214 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001787301s 2025-08-13T20:00:45.358523760+00:00 stdout F [INFO] 10.217.0.60:56436 - 23377 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002150051s 2025-08-13T20:00:45.389388330+00:00 stdout F [INFO] 10.217.0.60:54857 - 31695 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006964878s 2025-08-13T20:00:45.390320876+00:00 stdout F [INFO] 10.217.0.60:43585 - 49599 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008879413s 2025-08-13T20:00:45.413340543+00:00 stdout F [INFO] 10.217.0.60:59884 - 19312 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00456255s 2025-08-13T20:00:45.416187324+00:00 stdout F [INFO] 10.217.0.60:42278 - 12738 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005400374s 2025-08-13T20:00:45.539007396+00:00 stdout F [INFO] 10.217.0.60:59731 - 36791 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.046739423s 2025-08-13T20:00:45.539007396+00:00 stdout F [INFO] 10.217.0.60:58492 - 4853 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.047440983s 2025-08-13T20:00:45.543086592+00:00 stdout F [INFO] 10.217.0.60:44716 - 64555 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001795521s 2025-08-13T20:00:45.568481187+00:00 stdout F [INFO] 10.217.0.60:55385 - 23971 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002639415s 2025-08-13T20:00:45.661917681+00:00 stdout F [INFO] 10.217.0.60:48047 - 11680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.019125105s 2025-08-13T20:00:45.662415625+00:00 stdout F [INFO] 10.217.0.60:39642 - 1573 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.019904978s 2025-08-13T20:00:45.664545706+00:00 stdout F [INFO] 10.217.0.60:36013 - 44109 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002240734s 2025-08-13T20:00:45.664806083+00:00 stdout F [INFO] 10.217.0.60:44076 - 19830 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002743738s 2025-08-13T20:00:45.711686910+00:00 stdout F [INFO] 10.217.0.60:58769 - 4854 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001804002s 2025-08-13T20:00:45.711913676+00:00 stdout F [INFO] 10.217.0.60:43779 - 32061 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002489711s 2025-08-13T20:00:45.763128247+00:00 stdout F [INFO] 10.217.0.60:38757 - 37067 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001899084s 2025-08-13T20:00:45.763128247+00:00 stdout F [INFO] 10.217.0.60:47541 - 44972 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002489831s 2025-08-13T20:00:45.818947768+00:00 stdout F [INFO] 10.217.0.60:45022 - 59800 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004771886s 2025-08-13T20:00:45.849971173+00:00 stdout F [INFO] 10.217.0.60:41359 - 41174 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.033949028s 2025-08-13T20:00:45.946985419+00:00 stdout F [INFO] 10.217.0.60:55262 - 56115 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025679812s 2025-08-13T20:00:45.946985419+00:00 stdout F [INFO] 10.217.0.60:57747 - 17936 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.026174786s 2025-08-13T20:00:46.032202749+00:00 stdout F [INFO] 10.217.0.60:41161 - 13567 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013157245s 2025-08-13T20:00:46.032202749+00:00 stdout F [INFO] 10.217.0.60:56298 - 13419 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013566517s 2025-08-13T20:00:46.065677784+00:00 stdout F [INFO] 10.217.0.60:44922 - 19924 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008302017s 2025-08-13T20:00:46.066911359+00:00 stdout F [INFO] 10.217.0.60:56113 - 53771 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0091097s 2025-08-13T20:00:46.151485830+00:00 stdout F [INFO] 10.217.0.60:52318 - 47301 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000995248s 2025-08-13T20:00:46.173898789+00:00 stdout F [INFO] 10.217.0.60:43487 - 39893 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016371676s 2025-08-13T20:00:46.174119226+00:00 stdout F [INFO] 10.217.0.60:40927 - 1924 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.023281174s 2025-08-13T20:00:46.174286081+00:00 stdout F [INFO] 10.217.0.60:58486 - 48914 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016530531s 2025-08-13T20:00:46.191491961+00:00 stdout F [INFO] 10.217.0.60:40447 - 49758 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.01370845s 2025-08-13T20:00:46.192810969+00:00 stdout F [INFO] 10.217.0.60:54234 - 36503 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000944556s 2025-08-13T20:00:46.334162769+00:00 stdout F [INFO] 10.217.0.28:54681 - 4446 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001799722s 2025-08-13T20:00:46.350929327+00:00 stdout F [INFO] 10.217.0.28:50313 - 58891 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.018742515s 2025-08-13T20:00:46.411343540+00:00 stdout F [INFO] 10.217.0.60:52088 - 60139 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.033143935s 2025-08-13T20:00:46.426122801+00:00 stdout F [INFO] 10.217.0.60:33487 - 5955 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.015007558s 2025-08-13T20:00:46.430542497+00:00 stdout F [INFO] 10.217.0.60:35828 - 34714 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011280352s 2025-08-13T20:00:46.430755333+00:00 stdout F [INFO] 10.217.0.60:40533 - 36830 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012218549s 2025-08-13T20:00:46.433404079+00:00 stdout F [INFO] 10.217.0.60:44919 - 20101 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014276107s 2025-08-13T20:00:46.453394739+00:00 stdout F [INFO] 10.217.0.60:39833 - 42467 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00278477s 2025-08-13T20:00:46.535225702+00:00 stdout F [INFO] 10.217.0.60:43467 - 41891 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013027052s 2025-08-13T20:00:46.538984440+00:00 stdout F [INFO] 10.217.0.60:47507 - 26638 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017371926s 2025-08-13T20:00:46.540622626+00:00 stdout F [INFO] 10.217.0.60:44240 - 8524 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004996362s 2025-08-13T20:00:46.540914895+00:00 stdout F [INFO] 10.217.0.60:48175 - 63130 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005273221s 2025-08-13T20:00:46.634162373+00:00 stdout F [INFO] 10.217.0.60:37456 - 31623 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012673661s 2025-08-13T20:00:46.644251291+00:00 stdout F [INFO] 10.217.0.60:53598 - 25926 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.021997027s 2025-08-13T20:00:46.754285509+00:00 stdout F [INFO] 10.217.0.60:44935 - 57055 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003675785s 2025-08-13T20:00:46.756616595+00:00 stdout F [INFO] 10.217.0.60:51680 - 59997 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006043852s 2025-08-13T20:00:46.768956457+00:00 stdout F [INFO] 10.217.0.60:60644 - 39110 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011803606s 2025-08-13T20:00:46.769154143+00:00 stdout F [INFO] 10.217.0.60:35175 - 29320 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012395984s 2025-08-13T20:00:46.769327967+00:00 stdout F [INFO] 10.217.0.60:60554 - 44922 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.013082673s 2025-08-13T20:00:46.769385279+00:00 stdout F [INFO] 10.217.0.60:54831 - 58016 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.013392642s 2025-08-13T20:00:46.839294423+00:00 stdout F [INFO] 10.217.0.60:56661 - 11234 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008934975s 2025-08-13T20:00:46.854649420+00:00 stdout F [INFO] 10.217.0.60:54181 - 15570 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023440799s 2025-08-13T20:00:46.943090421+00:00 stdout F [INFO] 10.217.0.60:56336 - 41120 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.034635307s 2025-08-13T20:00:46.943090421+00:00 stdout F [INFO] 10.217.0.60:44456 - 12995 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.035224234s 2025-08-13T20:00:47.116218208+00:00 stdout F [INFO] 10.217.0.60:37550 - 43782 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020021221s 2025-08-13T20:00:47.116218208+00:00 stdout F [INFO] 10.217.0.60:40522 - 2678 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.02034309s 2025-08-13T20:00:47.126230043+00:00 stdout F [INFO] 10.217.0.60:36157 - 52261 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011332573s 2025-08-13T20:00:47.126230043+00:00 stdout F [INFO] 10.217.0.60:47728 - 17977 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011963282s 2025-08-13T20:00:47.241771228+00:00 stdout F [INFO] 10.217.0.60:41268 - 12463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008230535s 2025-08-13T20:00:47.242500149+00:00 stdout F [INFO] 10.217.0.60:48998 - 45260 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009925393s 2025-08-13T20:00:47.272471063+00:00 stdout F [INFO] 10.217.0.60:48206 - 29639 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010554841s 2025-08-13T20:00:47.272471063+00:00 stdout F [INFO] 10.217.0.60:46995 - 41796 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011676323s 2025-08-13T20:00:47.353176194+00:00 stdout F [INFO] 10.217.0.60:52090 - 63475 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001874713s 2025-08-13T20:00:47.353440092+00:00 stdout F [INFO] 10.217.0.60:45588 - 13464 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767912s 2025-08-13T20:00:47.376652584+00:00 stdout F [INFO] 10.217.0.60:56400 - 20881 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003841279s 2025-08-13T20:00:47.379424483+00:00 stdout F [INFO] 10.217.0.60:49524 - 56709 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004009245s 2025-08-13T20:00:47.593026593+00:00 stdout F [INFO] 10.217.0.60:44764 - 44056 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005789535s 2025-08-13T20:00:47.595032430+00:00 stdout F [INFO] 10.217.0.60:53477 - 46362 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006140115s 2025-08-13T20:00:47.622130023+00:00 stdout F [INFO] 10.217.0.60:33700 - 42114 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014865764s 2025-08-13T20:00:47.622935996+00:00 stdout F [INFO] 10.217.0.60:48559 - 14693 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004471528s 2025-08-13T20:00:47.721827146+00:00 stdout F [INFO] 10.217.0.60:44528 - 15152 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007532715s 2025-08-13T20:00:47.722296829+00:00 stdout F [INFO] 10.217.0.60:50872 - 49613 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007892625s 2025-08-13T20:00:47.748162137+00:00 stdout F [INFO] 10.217.0.60:38608 - 25117 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012239068s 2025-08-13T20:00:47.748162137+00:00 stdout F [INFO] 10.217.0.60:42298 - 23915 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009985865s 2025-08-13T20:00:47.752151451+00:00 stdout F [INFO] 10.217.0.60:52605 - 33456 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002111341s 2025-08-13T20:00:47.752151451+00:00 stdout F [INFO] 10.217.0.60:54884 - 59917 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002028918s 2025-08-13T20:00:47.804053931+00:00 stdout F [INFO] 10.217.0.57:34181 - 56876 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.032832606s 2025-08-13T20:00:47.809603409+00:00 stdout F [INFO] 10.217.0.57:53332 - 16628 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007190365s 2025-08-13T20:00:47.853002786+00:00 stdout F [INFO] 10.217.0.60:60728 - 54856 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011329993s 2025-08-13T20:00:47.853002786+00:00 stdout F [INFO] 10.217.0.60:47761 - 10494 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.012030293s 2025-08-13T20:00:47.864434812+00:00 stdout F [INFO] 10.217.0.60:51567 - 53307 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012441734s 2025-08-13T20:00:47.881486958+00:00 stdout F [INFO] 10.217.0.60:37162 - 20491 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.026721132s 2025-08-13T20:00:47.918687149+00:00 stdout F [INFO] 10.217.0.60:33731 - 24998 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009373047s 2025-08-13T20:00:47.941229022+00:00 stdout F [INFO] 10.217.0.60:51821 - 8384 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.031317193s 2025-08-13T20:00:47.941717446+00:00 stdout F [INFO] 10.217.0.60:47409 - 36059 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006863795s 2025-08-13T20:00:47.942178689+00:00 stdout F [INFO] 10.217.0.60:50438 - 6738 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007836173s 2025-08-13T20:00:48.101958315+00:00 stdout F [INFO] 10.217.0.60:37129 - 8864 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004393355s 2025-08-13T20:00:48.113905356+00:00 stdout F [INFO] 10.217.0.60:46038 - 64197 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012032263s 2025-08-13T20:00:48.113905356+00:00 stdout F [INFO] 10.217.0.60:52966 - 53758 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012346242s 2025-08-13T20:00:48.140736431+00:00 stdout F [INFO] 10.217.0.60:57444 - 30845 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.047522346s 2025-08-13T20:00:48.221997768+00:00 stdout F [INFO] 10.217.0.60:52176 - 15093 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000942287s 2025-08-13T20:00:48.221997768+00:00 stdout F [INFO] 10.217.0.60:43544 - 8170 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001601525s 2025-08-13T20:00:48.288368820+00:00 stdout F [INFO] 10.217.0.60:47883 - 29163 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005282701s 2025-08-13T20:00:48.288368820+00:00 stdout F [INFO] 10.217.0.60:40600 - 44845 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005914099s 2025-08-13T20:00:48.300356172+00:00 stdout F [INFO] 10.217.0.60:60257 - 34117 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001176293s 2025-08-13T20:00:48.300356172+00:00 stdout F [INFO] 10.217.0.60:54557 - 16227 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005456785s 2025-08-13T20:00:48.333949810+00:00 stdout F [INFO] 10.217.0.60:36088 - 663 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001502822s 2025-08-13T20:00:48.333949810+00:00 stdout F [INFO] 10.217.0.60:54236 - 5321 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001722179s 2025-08-13T20:00:48.378015887+00:00 stdout F [INFO] 10.217.0.60:57981 - 48896 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00138745s
2025-08-13T20:00:48.378015887+00:00 stdout F [INFO] 10.217.0.60:57451 - 16785 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140207s
2025-08-13T20:00:48.450697889+00:00 stdout F [INFO] 10.217.0.60:48824 - 688 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010017566s
2025-08-13T20:00:48.464770230+00:00 stdout F [INFO] 10.217.0.60:48120 - 24857 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.029571773s
2025-08-13T20:00:48.465644615+00:00 stdout F [INFO] 10.217.0.60:41695 - 60353 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007614157s
2025-08-13T20:00:48.518190334+00:00 stdout F [INFO] 10.217.0.60:58007 - 53703 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.033442244s
2025-08-13T20:00:48.538927885+00:00 stdout F [INFO] 10.217.0.60:48976 - 17944 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008436611s
2025-08-13T20:00:48.538927885+00:00 stdout F [INFO] 10.217.0.60:56226 - 25509 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008007849s
2025-08-13T20:00:48.604984298+00:00 stdout F [INFO] 10.217.0.60:39769 - 21331 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002969005s
2025-08-13T20:00:48.604984298+00:00 stdout F [INFO] 10.217.0.60:58238 - 34786 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003174291s
2025-08-13T20:00:48.626890253+00:00 stdout F [INFO] 10.217.0.60:57081 - 27014 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001323508s
2025-08-13T20:00:48.633335527+00:00 stdout F [INFO] 10.217.0.60:55145 - 13147 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011070895s
2025-08-13T20:00:48.677906238+00:00 stdout F [INFO] 10.217.0.60:48452 - 48371 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007958427s
2025-08-13T20:00:48.677906238+00:00 stdout F [INFO] 10.217.0.60:59529 - 56136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00839607s
2025-08-13T20:00:48.817630152+00:00 stdout F [INFO] 10.217.0.60:35230 - 49847 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.019805685s
2025-08-13T20:00:48.843040606+00:00 stdout F [INFO] 10.217.0.60:34693 - 19528 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.037023836s
2025-08-13T20:00:48.879189817+00:00 stdout F [INFO] 10.217.0.60:34184 - 25098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.015109961s
2025-08-13T20:00:48.894400171+00:00 stdout F [INFO] 10.217.0.60:34465 - 55609 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.042471811s
2025-08-13T20:00:48.957136080+00:00 stdout F [INFO] 10.217.0.60:38476 - 51226 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008033729s
2025-08-13T20:00:48.991085998+00:00 stdout F [INFO] 10.217.0.60:34121 - 45872 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018326873s
2025-08-13T20:00:49.007615269+00:00 stdout F [INFO] 10.217.0.60:52068 - 8984 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006058543s
2025-08-13T20:00:49.007615269+00:00 stdout F [INFO] 10.217.0.60:41706 - 21787 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005875008s
2025-08-13T20:00:49.091360167+00:00 stdout F [INFO] 10.217.0.60:45325 - 10432 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.01261853s
2025-08-13T20:00:49.091360167+00:00 stdout F [INFO] 10.217.0.60:43459 - 12755 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.01299332s
2025-08-13T20:00:49.120658922+00:00 stdout F [INFO] 10.217.0.60:60336 - 33140 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003302204s
2025-08-13T20:00:49.120658922+00:00 stdout F [INFO] 10.217.0.60:33883 - 18191 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004276132s
2025-08-13T20:00:49.186982213+00:00 stdout F [INFO] 10.217.0.60:45906 - 53699 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011465726s
2025-08-13T20:00:49.189545497+00:00 stdout F [INFO] 10.217.0.60:39046 - 2349 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.013666889s
2025-08-13T20:00:49.335934651+00:00 stdout F [INFO] 10.217.0.60:45971 - 35326 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003226722s
2025-08-13T20:00:49.339332508+00:00 stdout F [INFO] 10.217.0.60:55148 - 46935 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006413193s
2025-08-13T20:00:49.401101669+00:00 stdout F [INFO] 10.217.0.60:43142 - 57420 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.029564173s
2025-08-13T20:00:49.401101669+00:00 stdout F [INFO] 10.217.0.60:53476 - 47620 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.030470539s
2025-08-13T20:00:49.475906792+00:00 stdout F [INFO] 10.217.0.60:44808 - 38767 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00664648s
2025-08-13T20:00:49.476184320+00:00 stdout F [INFO] 10.217.0.60:59050 - 11320 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007127623s
2025-08-13T20:00:49.499986538+00:00 stdout F [INFO] 10.217.0.60:56114 - 55312 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001972917s
2025-08-13T20:00:49.500952016+00:00 stdout F [INFO] 10.217.0.60:58021 - 19747 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003528551s
2025-08-13T20:00:49.585704093+00:00 stdout F [INFO] 10.217.0.60:53492 - 45836 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004307853s
2025-08-13T20:00:49.587022820+00:00 stdout F [INFO] 10.217.0.60:59742 - 9936 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006042932s
2025-08-13T20:00:49.619867467+00:00 stdout F [INFO] 10.217.0.60:40608 - 36044 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002080209s
2025-08-13T20:00:49.621386450+00:00 stdout F [INFO] 10.217.0.60:54497 - 59787 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003969453s
2025-08-13T20:00:49.738205081+00:00 stdout F [INFO] 10.217.0.60:59059 - 46154 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0013962s
2025-08-13T20:00:49.738205081+00:00 stdout F [INFO] 10.217.0.60:37185 - 27738 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002493211s
2025-08-13T20:00:49.742536965+00:00 stdout F [INFO] 10.217.0.60:49568 - 23639 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006554967s
2025-08-13T20:00:49.742898255+00:00 stdout F [INFO] 10.217.0.60:60560 - 13861 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007278567s
2025-08-13T20:00:49.772066467+00:00 stdout F [INFO] 10.217.0.60:37772 - 50665 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.009398078s
2025-08-13T20:00:49.798546172+00:00 stdout F [INFO] 10.217.0.60:41603 - 50543 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.035662117s
2025-08-13T20:00:49.876354960+00:00 stdout F [INFO] 10.217.0.60:39133 - 51596 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003322455s
2025-08-13T20:00:49.879992484+00:00 stdout F [INFO] 10.217.0.60:48144 - 11011 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004976322s
2025-08-13T20:00:49.899650055+00:00 stdout F [INFO] 10.217.0.60:34077 - 56114 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001278427s
2025-08-13T20:00:49.900298013+00:00 stdout F [INFO] 10.217.0.60:47986 - 62439 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001832252s
2025-08-13T20:00:50.012628316+00:00 stdout F [INFO] 10.217.0.60:39637 - 26080 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00632463s
2025-08-13T20:00:50.012688988+00:00 stdout F [INFO] 10.217.0.60:53716 - 534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010245322s
2025-08-13T20:00:50.051583307+00:00 stdout F [INFO] 10.217.0.60:55657 - 19501 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004846708s
2025-08-13T20:00:50.058106233+00:00 stdout F [INFO] 10.217.0.60:40058 - 18678 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011136948s
2025-08-13T20:00:50.141707167+00:00 stdout F [INFO] 10.217.0.60:60817 - 13198 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008362138s
2025-08-13T20:00:50.142358925+00:00 stdout F [INFO] 10.217.0.60:46182 - 57950 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009989265s
2025-08-13T20:00:50.152990589+00:00 stdout F [INFO] 10.217.0.60:60651 - 47522 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001214825s
2025-08-13T20:00:50.153652057+00:00 stdout F [INFO] 10.217.0.60:51945 - 46133 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005919309s
2025-08-13T20:00:50.216588312+00:00 stdout F [INFO] 10.217.0.60:36086 - 48239 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009558012s
2025-08-13T20:00:50.230207140+00:00 stdout F [INFO] 10.217.0.60:41438 - 20130 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001547914s
2025-08-13T20:00:50.246760042+00:00 stdout F [INFO] 10.217.0.60:37703 - 41222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000947307s
2025-08-13T20:00:50.286219267+00:00 stdout F [INFO] 10.217.0.60:44233 - 28746 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.038549609s
2025-08-13T20:00:50.426368064+00:00 stdout F [INFO] 10.217.0.60:59761 - 18861 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103962s
2025-08-13T20:00:50.426610421+00:00 stdout F [INFO] 10.217.0.60:50474 - 18337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001572425s
2025-08-13T20:00:50.476992067+00:00 stdout F [INFO] 10.217.0.60:45294 - 4861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011467187s
2025-08-13T20:00:50.479194270+00:00 stdout F [INFO] 10.217.0.60:60484 - 20544 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014007289s
2025-08-13T20:00:50.487049573+00:00 stdout F [INFO] 10.217.0.60:50323 - 53459 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00952296s
2025-08-13T20:00:50.487462955+00:00 stdout F [INFO] 10.217.0.60:53709 - 16232 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010004194s
2025-08-13T20:00:50.631761129+00:00 stdout F [INFO] 10.217.0.60:43049 - 58271 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004848749s
2025-08-13T20:00:50.631761129+00:00 stdout F [INFO] 10.217.0.60:51596 - 4810 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005409235s
2025-08-13T20:00:50.632212972+00:00 stdout F [INFO] 10.217.0.60:36500 - 56939 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005445896s
2025-08-13T20:00:50.653875190+00:00 stdout F [INFO] 10.217.0.60:37162 - 20095 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002929244s
2025-08-13T20:00:50.710595157+00:00 stdout F [INFO] 10.217.0.60:53343 - 65404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001281806s
2025-08-13T20:00:50.710917926+00:00 stdout F [INFO] 10.217.0.60:46189 - 9933 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002245714s
2025-08-13T20:00:50.741703954+00:00 stdout F [INFO] 10.217.0.60:43271 - 32831 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003597993s
2025-08-13T20:00:50.741906950+00:00 stdout F [INFO] 10.217.0.60:49322 - 45791 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004130768s
2025-08-13T20:00:50.814239543+00:00 stdout F [INFO] 10.217.0.60:43252 - 38980 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008340968s
2025-08-13T20:00:50.821518250+00:00 stdout F [INFO] 10.217.0.60:33887 - 23010 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008209924s
2025-08-13T20:00:50.822469257+00:00 stdout F [INFO] 10.217.0.60:52123 - 64764 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000922446s
2025-08-13T20:00:50.822531939+00:00 stdout F [INFO] 10.217.0.60:47079 - 60051 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000604947s
2025-08-13T20:00:50.902917641+00:00 stdout F [INFO] 10.217.0.60:37685 - 16757 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001947336s
2025-08-13T20:00:50.903207449+00:00 stdout F [INFO] 10.217.0.60:57688 - 53872 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00208412s
2025-08-13T20:00:50.903287672+00:00 stdout F [INFO] 10.217.0.60:60105 - 24057 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002338506s
2025-08-13T20:00:50.903567440+00:00 stdout F [INFO] 10.217.0.60:38381 - 3564 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00279449s
2025-08-13T20:00:51.002414068+00:00 stdout F [INFO] 10.217.0.60:41363 - 55003 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001155413s
2025-08-13T20:00:51.004027554+00:00 stdout F [INFO] 10.217.0.60:41385 - 36490 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104889s
2025-08-13T20:00:51.017230481+00:00 stdout F [INFO] 10.217.0.60:51407 - 52430 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003244232s
2025-08-13T20:00:51.018354993+00:00 stdout F [INFO] 10.217.0.60:48463 - 37591 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004471547s
2025-08-13T20:00:51.101497723+00:00 stdout F [INFO] 10.217.0.60:46161 - 3185 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00561465s
2025-08-13T20:00:51.101497723+00:00 stdout F [INFO] 10.217.0.60:51482 - 46090 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005485686s
2025-08-13T20:00:51.142181403+00:00 stdout F [INFO] 10.217.0.60:56445 - 47860 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004654473s
2025-08-13T20:00:51.146638801+00:00 stdout F [INFO] 10.217.0.60:45442 - 19436 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005363643s
2025-08-13T20:00:51.204234113+00:00 stdout F [INFO] 10.217.0.60:35174 - 59788 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007879465s
2025-08-13T20:00:51.204373347+00:00 stdout F [INFO] 10.217.0.60:38913 - 63826 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009311036s
2025-08-13T20:00:51.219247781+00:00 stdout F [INFO] 10.217.0.28:60652 - 32272 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005673381s
2025-08-13T20:00:51.219938511+00:00 stdout F [INFO] 10.217.0.28:33440 - 14907 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007741481s
2025-08-13T20:00:51.220274210+00:00 stdout F [INFO] 10.217.0.60:33067 - 24093 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003064477s
2025-08-13T20:00:51.220749424+00:00 stdout F [INFO] 10.217.0.60:60012 - 42704 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002688427s
2025-08-13T20:00:51.288859666+00:00 stdout F [INFO] 10.217.0.60:49681 - 14797 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000803023s
2025-08-13T20:00:51.288859666+00:00 stdout F [INFO] 10.217.0.60:44507 - 3751 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000508474s
2025-08-13T20:00:51.306944231+00:00 stdout F [INFO] 10.217.0.60:40419 - 58620 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007643068s
2025-08-13T20:00:51.306944231+00:00 stdout F [INFO] 10.217.0.60:51791 - 27961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008525664s
2025-08-13T20:00:51.375923848+00:00 stdout F [INFO] 10.217.0.60:34231 - 35820 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000928496s
2025-08-13T20:00:51.376458394+00:00 stdout F [INFO] 10.217.0.60:43604 - 31528 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001008199s
2025-08-13T20:00:51.386691535+00:00 stdout F [INFO] 10.217.0.60:60210 - 17663 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001227575s
2025-08-13T20:00:51.386691535+00:00 stdout F [INFO] 10.217.0.60:44075 - 52352 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001294827s
2025-08-13T20:00:51.611735065+00:00 stdout F [INFO] 10.217.0.60:38690 - 52947 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.077360954s
2025-08-13T20:00:51.612714183+00:00 stdout F [INFO] 10.217.0.60:41948 - 6937 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.078289441s
2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:42928 - 19370 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001911985s
2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:32987 - 60196 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002667866s
2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:49495 - 63508 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002623465s
2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:52718 - 33970 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002470911s
2025-08-13T20:00:51.742160331+00:00 stdout F [INFO] 10.217.0.60:48686 - 41468 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001831722s
2025-08-13T20:00:51.742427329+00:00 stdout F [INFO] 10.217.0.60:35098 - 63774 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002177992s
2025-08-13T20:00:51.743220981+00:00 stdout F [INFO] 10.217.0.60:43032 - 9717 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001273976s
2025-08-13T20:00:51.744057455+00:00 stdout F [INFO] 10.217.0.60:34108 - 623 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002023838s
2025-08-13T20:00:51.748934934+00:00 stdout F [INFO] 10.217.0.60:40602 - 21272 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000581176s
2025-08-13T20:00:51.749930403+00:00 stdout F [INFO] 10.217.0.60:37496 - 12601 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001247316s
2025-08-13T20:00:51.822689737+00:00 stdout F [INFO] 10.217.0.60:38232 - 17431 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00665717s
2025-08-13T20:00:51.823074108+00:00 stdout F [INFO] 10.217.0.60:58055 - 13489 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007323248s
2025-08-13T20:00:51.823325085+00:00 stdout F [INFO] 10.217.0.60:47561 - 18500 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007794712s
2025-08-13T20:00:51.823613724+00:00 stdout F [INFO] 10.217.0.60:47869 - 39243 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007890675s
2025-08-13T20:00:51.855978707+00:00 stdout F [INFO] 10.217.0.60:34863 - 45394 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004177349s
2025-08-13T20:00:51.867184376+00:00 stdout F [INFO] 10.217.0.60:39514 - 30757 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.012415594s
2025-08-13T20:00:51.910303916+00:00 stdout F [INFO] 10.217.0.60:49136 - 577 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003391667s
2025-08-13T20:00:51.910573373+00:00 stdout F [INFO] 10.217.0.60:42532 - 1690 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004396395s
2025-08-13T20:00:51.942863424+00:00 stdout F [INFO] 10.217.0.60:33964 - 26144 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002259924s
2025-08-13T20:00:51.945255812+00:00 stdout F [INFO] 10.217.0.60:58693 - 49707 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004602351s
2025-08-13T20:00:51.984627495+00:00 stdout F [INFO] 10.217.0.60:38197 - 6440 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012006032s
2025-08-13T20:00:51.984761369+00:00 stdout F [INFO] 10.217.0.60:55819 - 1373 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012547668s
2025-08-13T20:00:51.999348845+00:00 stdout F [INFO] 10.217.0.60:44642 - 63594 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004095017s
2025-08-13T20:00:51.999725875+00:00 stdout F [INFO] 10.217.0.60:53602 - 61788 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004738385s
2025-08-13T20:00:52.015235118+00:00 stdout F [INFO] 10.217.0.60:52813 - 36985 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001107691s
2025-08-13T20:00:52.015343841+00:00 stdout F [INFO] 10.217.0.60:55494 - 9494 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001735579s
2025-08-13T20:00:52.067732165+00:00 stdout F [INFO] 10.217.0.60:56886 - 32543 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002148031s
2025-08-13T20:00:52.068143936+00:00 stdout F [INFO] 10.217.0.60:52460 - 57024 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002538542s
2025-08-13T20:00:52.108533718+00:00 stdout F [INFO] 10.217.0.60:51310 - 53415 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016064728s
2025-08-13T20:00:52.109018332+00:00 stdout F [INFO] 10.217.0.60:59350 - 62708 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016686755s
2025-08-13T20:00:52.174940892+00:00 stdout F [INFO] 10.217.0.60:39079 - 60775 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005555518s
2025-08-13T20:00:52.183123615+00:00 stdout F [INFO] 10.217.0.60:57759 - 31828 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013467164s
2025-08-13T20:00:52.184567376+00:00 stdout F [INFO] 10.217.0.60:58451 - 2791 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001231495s
2025-08-13T20:00:52.186513841+00:00 stdout F [INFO] 10.217.0.60:32892 - 3317 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002866262s
2025-08-13T20:00:52.252289647+00:00 stdout F [INFO] 10.217.0.60:49551 - 64846 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001023919s
2025-08-13T20:00:52.252707309+00:00 stdout F [INFO] 10.217.0.60:49202 - 12073 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00142371s
2025-08-13T20:00:52.342704175+00:00 stdout F [INFO] 10.217.0.60:43192 - 32993 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.016870411s
2025-08-13T20:00:52.342704175+00:00 stdout F [INFO] 10.217.0.60:38616 - 39260 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017765217s
2025-08-13T20:00:52.343319953+00:00 stdout F [INFO] 10.217.0.60:32855 - 42141 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.034283607s
2025-08-13T20:00:52.343319953+00:00 stdout F [INFO] 10.217.0.60:58760 - 51997 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.034872634s
2025-08-13T20:00:52.363666863+00:00 stdout F [INFO] 10.217.0.60:34342 - 39912 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002230364s
2025-08-13T20:00:52.364055264+00:00 stdout F [INFO] 10.217.0.60:40112 - 47926 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003496609s
2025-08-13T20:00:52.368147640+00:00 stdout F [INFO] 10.217.0.60:41418 - 27502 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000772232s
2025-08-13T20:00:52.368147640+00:00 stdout F [INFO] 10.217.0.60:54568 - 8178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000862675s
2025-08-13T20:00:52.417474187+00:00 stdout F [INFO] 10.217.0.60:34401 - 41823 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001162503s
2025-08-13T20:00:52.417474187+00:00 stdout F [INFO] 10.217.0.60:54281 - 1628 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000844794s
2025-08-13T20:00:52.425915018+00:00 stdout F [INFO] 10.217.0.60:44410 - 34400 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001689318s
2025-08-13T20:00:52.425915018+00:00 stdout F [INFO] 10.217.0.60:41198 - 33445 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002418249s
2025-08-13T20:00:52.478947420+00:00 stdout F [INFO] 10.217.0.60:41674 - 36850 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004340144s
2025-08-13T20:00:52.478947420+00:00 stdout F [INFO] 10.217.0.60:47806 - 36647 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004133997s
2025-08-13T20:00:52.503407487+00:00 stdout F [INFO] 10.217.0.60:49901 - 36499 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00488833s
2025-08-13T20:00:52.503434608+00:00 stdout F [INFO] 10.217.0.60:56089 - 38963 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003719806s
2025-08-13T20:00:52.612027434+00:00 stdout F [INFO] 10.217.0.60:49947 - 17990 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002598664s
2025-08-13T20:00:52.612027434+00:00 stdout F [INFO] 10.217.0.60:42773 - 36346 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003213992s
2025-08-13T20:00:52.631294194+00:00 stdout F [INFO] 10.217.0.60:35440 - 16529 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007477253s
2025-08-13T20:00:52.631294194+00:00 stdout F [INFO] 10.217.0.60:38024 - 19973 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00633636s
2025-08-13T20:00:52.669176664+00:00 stdout F [INFO] 10.217.0.60:60705 - 31478 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001868913s
2025-08-13T20:00:52.686242461+00:00 stdout F [INFO] 10.217.0.60:40476 - 8782 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017952332s
2025-08-13T20:00:52.700432845+00:00 stdout F [INFO] 10.217.0.60:38184 - 24580 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006019771s
2025-08-13T20:00:52.700533838+00:00 stdout F [INFO] 10.217.0.60:60768 - 46385 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006904217s
2025-08-13T20:00:52.708316640+00:00 stdout F [INFO] 10.217.0.60:41556 - 44564 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003196391s
2025-08-13T20:00:52.708471474+00:00 stdout F [INFO] 10.217.0.60:46736 - 43501 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003790588s
2025-08-13T20:00:52.753748055+00:00 stdout F [INFO] 10.217.0.60:40008 - 18248 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003482239s
2025-08-13T20:00:52.757647567+00:00 stdout F [INFO] 10.217.0.60:39281 - 40776 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007330079s
2025-08-13T20:00:52.793241162+00:00 stdout F [INFO] 10.217.0.57:45300 - 46926 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.014451182s
2025-08-13T20:00:52.793326134+00:00 stdout F [INFO] 10.217.0.57:35437 - 60936 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.008610236s
2025-08-13T20:00:52.804324888+00:00 stdout F [INFO] 10.217.0.60:50313 - 39461 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025408345s
2025-08-13T20:00:52.804324888+00:00 stdout F [INFO] 10.217.0.60:34678 - 44426 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025856147s
2025-08-13T20:00:52.860209981+00:00 stdout F [INFO] 10.217.0.60:51260 - 353 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003001965s
2025-08-13T20:00:52.861057635+00:00 stdout F [INFO] 10.217.0.60:47222 - 2537 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006474134s
2025-08-13T20:00:52.881133378+00:00 stdout F [INFO] 10.217.0.60:52091 - 13165 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000937946s
2025-08-13T20:00:52.881607451+00:00 stdout F [INFO] 10.217.0.60:54382 - 48231 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001847163s
2025-08-13T20:00:52.885077150+00:00 stdout F [INFO] 10.217.0.60:47517 - 41143 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003975963s
2025-08-13T20:00:52.901372165+00:00 stdout F [INFO] 10.217.0.60:49983 - 15446 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020226507s
2025-08-13T20:00:52.945679118+00:00 stdout F [INFO] 10.217.0.60:57504 - 17108 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002316026s
2025-08-13T20:00:52.946193043+00:00 stdout F [INFO] 10.217.0.60:55459 - 34942 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003532381s
2025-08-13T20:00:52.965326068+00:00 stdout F [INFO] 10.217.0.60:52607 - 15292 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001042949s
2025-08-13T20:00:52.966944765+00:00 stdout F [INFO] 10.217.0.60:48379 - 16992 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002693877s
2025-08-13T20:00:52.968991643+00:00 stdout F [INFO] 10.217.0.60:34347 - 19971 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001370789s
2025-08-13T20:00:52.968991643+00:00 stdout F [INFO] 10.217.0.60:58227 - 27878 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002165991s
2025-08-13T20:00:53.008765737+00:00 stdout F [INFO] 10.217.0.60:50325 - 4808 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004454127s
2025-08-13T20:00:53.008765737+00:00 stdout F [INFO] 10.217.0.60:53513 - 15907 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004383155s
2025-08-13T20:00:53.042605572+00:00 stdout F [INFO] 10.217.0.60:50040 - 6391 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001593495s
2025-08-13T20:00:53.042711485+00:00 stdout F [INFO] 10.217.0.60:56947 - 35540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001457651s
2025-08-13T20:00:53.098025152+00:00 stdout F [INFO] 10.217.0.60:47610 - 52365 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002047098s
2025-08-13T20:00:53.098424924+00:00 stdout F [INFO] 10.217.0.60:46903 - 19146 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.011319583s
2025-08-13T20:00:53.099042741+00:00 stdout F [INFO] 10.217.0.60:40196 - 10562 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0035066s
2025-08-13T20:00:53.099131124+00:00 stdout F [INFO] 10.217.0.60:40279 - 44420 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012856286s
2025-08-13T20:00:53.201288987+00:00 stdout F [INFO] 10.217.0.60:44271 - 34314 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004942361s
2025-08-13T20:00:53.201394970+00:00 stdout F [INFO] 10.217.0.60:40528 - 65373 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003594543s
2025-08-13T20:00:53.201640297+00:00 stdout F [INFO] 10.217.0.60:45273 - 26892 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005355922s
2025-08-13T20:00:53.201729409+00:00 stdout F [INFO] 10.217.0.60:57186 - 19083 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00348992s
2025-08-13T20:00:53.252955060+00:00 stdout F [INFO] 10.217.0.60:54736 - 18202 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006892836s
2025-08-13T20:00:53.255122962+00:00 stdout F [INFO] 10.217.0.60:56460 - 34298 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006607799s
2025-08-13T20:00:53.286328582+00:00 stdout F [INFO] 10.217.0.60:46439 - 33364 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006952718s
2025-08-13T20:00:53.286657981+00:00 stdout F [INFO] 10.217.0.60:50478 - 32524 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006934677s
2025-08-13T20:00:53.289987276+00:00 stdout F [INFO] 10.217.0.60:47197 - 42349 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002953375s
2025-08-13T20:00:53.289987276+00:00 stdout F [INFO] 10.217.0.60:55550 - 14391 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003505219s
2025-08-13T20:00:53.340759034+00:00 stdout F [INFO] 10.217.0.60:36130 - 51725 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005857947s
2025-08-13T20:00:53.340759034+00:00 stdout F [INFO] 10.217.0.60:47471 - 34726 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005486817s
2025-08-13T20:00:53.411344546+00:00 stdout F [INFO] 10.217.0.60:54552 - 11612 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023914062s
2025-08-13T20:00:53.411344546+00:00 stdout F [INFO] 10.217.0.60:43437 - 29520 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025134666s
2025-08-13T20:00:53.432280653+00:00 stdout F [INFO] 10.217.0.60:38044 - 13694 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008570454s
2025-08-13T20:00:53.433227400+00:00 stdout F [INFO] 10.217.0.60:54568 - 64079 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007869224s
2025-08-13T20:00:53.471951515+00:00 stdout F [INFO] 10.217.0.60:53072 - 41075 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002703447s
2025-08-13T20:00:53.471951515+00:00 stdout F [INFO] 10.217.0.60:49580 - 61579 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002508161s
2025-08-13T20:00:53.516481754+00:00 stdout F [INFO] 10.217.0.60:50918 - 35890 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003679485s
2025-08-13T20:00:53.516481754+00:00 stdout F [INFO] 10.217.0.60:43105 - 22887 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003932652s
2025-08-13T20:00:53.516594207+00:00 stdout F [INFO] 10.217.0.60:33168 - 53941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023277554s
2025-08-13T20:00:53.535903078+00:00 stdout F [INFO] 10.217.0.60:42750 - 34289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.038139318s
2025-08-13T20:00:53.570983318+00:00 stdout F [INFO] 10.217.0.60:41931 - 41111 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003255393s
2025-08-13T20:00:53.573029197+00:00 stdout F [INFO] 10.217.0.60:40847 - 48976 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006612799s
2025-08-13T20:00:53.597868635+00:00 stdout F [INFO] 10.217.0.60:59360 - 2136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005416224s
2025-08-13T20:00:53.597868635+00:00 stdout F [INFO] 10.217.0.60:52461 - 27104 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001841083s
2025-08-13T20:00:53.601450557+00:00 stdout F [INFO] 10.217.0.60:54454 - 54017 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006414802s
2025-08-13T20:00:53.601450557+00:00 stdout F [INFO] 10.217.0.60:37130 - 13113 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001838232s
2025-08-13T20:00:53.665990417+00:00 stdout F [INFO] 10.217.0.60:46913 - 29149 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006355062s
2025-08-13T20:00:53.678985018+00:00 stdout F [INFO] 10.217.0.60:60979 - 7034 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002641435s
2025-08-13T20:00:53.679034359+00:00 stdout F [INFO] 10.217.0.60:42536 - 55728 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002853981s
2025-08-13T20:00:53.692268767+00:00 stdout F [INFO] 10.217.0.60:59147 - 6840 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020945667s
2025-08-13T20:00:53.759479803+00:00 stdout F [INFO] 10.217.0.60:45721 - 40846 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001359888s
2025-08-13T20:00:53.759479803+00:00 stdout F [INFO] 10.217.0.60:59305 - 1822 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002186972s
2025-08-13T20:00:53.766363369+00:00 stdout F [INFO] 10.217.0.60:35259 - 58487 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003414577s
2025-08-13T20:00:53.766543834+00:00 stdout F [INFO] 10.217.0.60:37103 - 25785 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004143848s
2025-08-13T20:00:53.848258384+00:00 stdout F [INFO] 10.217.0.60:47545 - 58113 "AAAA IN registry.access.redhat.com.crc.testing.
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002532112s 2025-08-13T20:00:53.848258384+00:00 stdout F [INFO] 10.217.0.60:42233 - 41616 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004171259s 2025-08-13T20:00:53.864414455+00:00 stdout F [INFO] 10.217.0.60:51098 - 41843 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004119977s 2025-08-13T20:00:53.864721734+00:00 stdout F [INFO] 10.217.0.60:35925 - 20271 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004664763s 2025-08-13T20:00:53.981440612+00:00 stdout F [INFO] 10.217.0.60:55209 - 35425 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003106589s 2025-08-13T20:00:53.984768667+00:00 stdout F [INFO] 10.217.0.60:49145 - 61100 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003199231s 2025-08-13T20:00:54.006162107+00:00 stdout F [INFO] 10.217.0.60:36995 - 2348 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001911485s 2025-08-13T20:00:54.006666721+00:00 stdout F [INFO] 10.217.0.60:44304 - 50706 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002065359s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:58116 - 63292 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.017787097s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:56494 - 34065 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.018799796s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:40690 - 33753 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018499167s 2025-08-13T20:00:54.091014255+00:00 stdout F [INFO] 10.217.0.60:52303 - 57777 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.021306917s 2025-08-13T20:00:54.177955294+00:00 stdout F [INFO] 10.217.0.60:45987 - 37479 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002613084s 2025-08-13T20:00:54.177955294+00:00 stdout F [INFO] 10.217.0.60:58306 - 8356 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003806909s 2025-08-13T20:00:54.190158042+00:00 stdout F [INFO] 10.217.0.60:49703 - 48052 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005040684s 2025-08-13T20:00:54.190158042+00:00 stdout F [INFO] 10.217.0.60:47515 - 57080 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005875278s 2025-08-13T20:00:54.210693428+00:00 stdout F [INFO] 10.217.0.60:53317 - 61694 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001216725s 2025-08-13T20:00:54.211213623+00:00 stdout F [INFO] 10.217.0.60:36938 - 24375 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001097501s 2025-08-13T20:00:54.261073224+00:00 stdout F [INFO] 10.217.0.60:48205 - 50646 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001469772s 2025-08-13T20:00:54.261451375+00:00 stdout F [INFO] 10.217.0.60:48539 - 55836 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005469026s 2025-08-13T20:00:54.346313265+00:00 stdout F [INFO] 10.217.0.60:54730 - 32651 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.045075945s 2025-08-13T20:00:54.346313265+00:00 stdout F [INFO] 10.217.0.60:43637 - 9908 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.046208437s 2025-08-13T20:00:54.358372139+00:00 stdout F [INFO] 10.217.0.60:56901 - 19987 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006063023s 2025-08-13T20:00:54.358429720+00:00 stdout F [INFO] 10.217.0.60:39734 - 36586 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008345908s 2025-08-13T20:00:54.413487130+00:00 stdout F [INFO] 10.217.0.60:35374 - 37770 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003413067s 2025-08-13T20:00:54.414041696+00:00 stdout F [INFO] 10.217.0.60:32861 - 23169 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003239342s 2025-08-13T20:00:54.444068402+00:00 stdout F [INFO] 10.217.0.60:49969 - 9790 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003211791s 2025-08-13T20:00:54.444068402+00:00 stdout F [INFO] 10.217.0.60:48352 - 43128 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00142177s 2025-08-13T20:00:54.496125126+00:00 stdout F [INFO] 10.217.0.60:50646 - 6041 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001598405s 2025-08-13T20:00:54.500601564+00:00 stdout F [INFO] 10.217.0.60:54137 - 5363 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005470266s 2025-08-13T20:00:54.562235942+00:00 stdout F [INFO] 10.217.0.60:34907 - 2227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009674926s 2025-08-13T20:00:54.562235942+00:00 stdout F [INFO] 10.217.0.60:56654 - 46794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009203472s 2025-08-13T20:00:54.605821094+00:00 stdout F [INFO] 10.217.0.60:38008 - 56395 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006390213s 2025-08-13T20:00:54.605821094+00:00 stdout F [INFO] 10.217.0.60:60224 - 57243 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006918808s 2025-08-13T20:00:54.641390439+00:00 stdout F [INFO] 10.217.0.60:35101 - 21936 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00350588s 2025-08-13T20:00:54.642122009+00:00 stdout F [INFO] 10.217.0.60:36606 - 13769 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00490198s 2025-08-13T20:00:54.701656737+00:00 stdout F [INFO] 10.217.0.60:60974 - 25606 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00630711s 2025-08-13T20:00:54.717990603+00:00 stdout F [INFO] 10.217.0.60:56711 - 1491 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.021716869s 2025-08-13T20:00:54.769018978+00:00 stdout F [INFO] 10.217.0.60:56705 - 50944 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002386768s 2025-08-13T20:00:54.800321270+00:00 stdout F [INFO] 10.217.0.60:33894 - 34414 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.020661909s 2025-08-13T20:00:54.821698250+00:00 stdout F [INFO] 10.217.0.60:46811 - 16255 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001141063s 2025-08-13T20:00:54.821698250+00:00 stdout F [INFO] 10.217.0.60:41651 - 35052 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002192633s 2025-08-13T20:00:56.237173901+00:00 stdout F [INFO] 10.217.0.28:57650 - 26271 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006250578s 2025-08-13T20:00:56.237173901+00:00 stdout F [INFO] 10.217.0.28:36911 - 23347 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006025012s 2025-08-13T20:00:56.426192821+00:00 stdout F [INFO] 10.217.0.64:60386 - 16371 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002096029s 2025-08-13T20:00:56.426755407+00:00 stdout F [INFO] 10.217.0.64:37132 - 17691 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002402099s 2025-08-13T20:00:56.429936718+00:00 stdout F [INFO] 10.217.0.64:56566 - 24357 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001352609s 2025-08-13T20:00:56.429936718+00:00 stdout F [INFO] 10.217.0.64:56185 - 8747 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001461742s 2025-08-13T20:00:57.764385539+00:00 stdout F [INFO] 10.217.0.57:55995 - 61941 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002156022s 2025-08-13T20:00:57.764487992+00:00 stdout F [INFO] 10.217.0.57:32930 - 18533 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003038847s 2025-08-13T20:01:01.217889967+00:00 stdout F [INFO] 10.217.0.28:60933 - 64320 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00173895s 2025-08-13T20:01:01.229498738+00:00 stdout F [INFO] 10.217.0.28:45794 - 37899 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004856648s 2025-08-13T20:01:02.786206288+00:00 stdout F [INFO] 10.217.0.57:40045 - 52840 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004254891s 2025-08-13T20:01:02.786714652+00:00 stdout F [INFO] 10.217.0.57:37315 - 53363 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004645843s 2025-08-13T20:01:06.248307858+00:00 stdout F [INFO] 10.217.0.28:32940 - 64022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010549551s 2025-08-13T20:01:06.248307858+00:00 stdout F [INFO] 10.217.0.28:39158 - 51585 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010443988s 2025-08-13T20:01:07.803260386+00:00 stdout F [INFO] 10.217.0.57:55745 - 47377 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.008387099s 2025-08-13T20:01:07.803260386+00:00 stdout F [INFO] 10.217.0.57:52414 - 56772 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001229835s 2025-08-13T20:01:07.876341910+00:00 stdout F [INFO] 10.217.0.62:36493 - 37658 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006798604s 2025-08-13T20:01:07.928674602+00:00 stdout F [INFO] 10.217.0.62:47351 - 5566 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.054731061s 2025-08-13T20:01:08.958312120+00:00 stdout F [INFO] 10.217.0.62:45058 - 48452 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009701407s 2025-08-13T20:01:08.958708771+00:00 stdout F [INFO] 10.217.0.62:55693 - 35669 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010232312s 2025-08-13T20:01:09.463479824+00:00 stdout F [INFO] 10.217.0.62:42400 - 55577 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003704086s 2025-08-13T20:01:09.463555486+00:00 stdout F [INFO] 10.217.0.62:37144 - 52861 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003480729s 2025-08-13T20:01:09.791534678+00:00 stdout F [INFO] 10.217.0.62:35264 - 59363 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008885523s 2025-08-13T20:01:09.792368432+00:00 stdout F [INFO] 10.217.0.62:51739 - 64573 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.019929048s 2025-08-13T20:01:09.883410818+00:00 stdout F [INFO] 10.217.0.62:33611 - 14300 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003421418s 2025-08-13T20:01:09.883410818+00:00 stdout F [INFO] 10.217.0.62:43024 - 10224 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004042806s 2025-08-13T20:01:10.148001973+00:00 stdout F [INFO] 10.217.0.62:46260 - 39301 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005178448s 2025-08-13T20:01:10.153873390+00:00 stdout F [INFO] 10.217.0.62:57629 - 50837 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011846428s 2025-08-13T20:01:10.366079851+00:00 stdout F [INFO] 10.217.0.62:53398 - 6757 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010000395s 2025-08-13T20:01:10.368951013+00:00 stdout F [INFO] 10.217.0.62:34726 - 49492 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009213843s 2025-08-13T20:01:10.599053724+00:00 stdout F [INFO] 10.217.0.62:38447 - 31498 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006358471s 2025-08-13T20:01:10.602083821+00:00 stdout F [INFO] 10.217.0.62:35021 - 53431 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.028566614s 2025-08-13T20:01:11.189276624+00:00 stdout F [INFO] 10.217.0.28:57278 - 49657 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000955917s 2025-08-13T20:01:11.190236392+00:00 stdout F [INFO] 10.217.0.28:51161 - 41838 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001605305s 2025-08-13T20:01:12.737583602+00:00 stdout F [INFO] 10.217.0.57:38545 - 42595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000573556s 2025-08-13T20:01:12.738199670+00:00 stdout F [INFO] 10.217.0.57:37428 - 59560 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000958997s 2025-08-13T20:01:16.220937836+00:00 stdout F [INFO] 10.217.0.28:53038 - 50311 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004677073s 2025-08-13T20:01:16.272384153+00:00 stdout F [INFO] 10.217.0.28:41185 - 25277 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.059117746s 2025-08-13T20:01:17.742430731+00:00 stdout F [INFO] 10.217.0.57:55347 - 17398 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001211184s 2025-08-13T20:01:17.743296655+00:00 stdout F [INFO] 10.217.0.57:50425 - 39022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001655518s 2025-08-13T20:01:21.215652285+00:00 stdout F [INFO] 10.217.0.28:39681 - 25914 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001337798s 2025-08-13T20:01:21.218189227+00:00 stdout F [INFO] 10.217.0.28:50437 - 47565 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00278562s 2025-08-13T20:01:22.758813675+00:00 stdout F [INFO] 10.217.0.57:51254 - 6258 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00737564s 2025-08-13T20:01:22.759524666+00:00 stdout F [INFO] 10.217.0.57:39283 - 43233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00418449s 2025-08-13T20:01:22.875226975+00:00 stdout F [INFO] 10.217.0.8:35450 - 460 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002742558s 2025-08-13T20:01:22.899969020+00:00 stdout F [INFO] 10.217.0.8:50023 - 58252 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.012143567s 2025-08-13T20:01:22.903084419+00:00 stdout F [INFO] 10.217.0.8:37803 - 13193 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001617146s 2025-08-13T20:01:22.903387998+00:00 stdout F [INFO] 10.217.0.8:33487 - 42396 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001981667s 2025-08-13T20:01:22.948433202+00:00 stdout F [INFO] 10.217.0.8:60399 - 35132 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000913156s 2025-08-13T20:01:22.948694070+00:00 stdout F [INFO] 10.217.0.8:58767 - 6469 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00105186s 2025-08-13T20:01:22.955416611+00:00 stdout F [INFO] 10.217.0.8:35574 - 36598 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0017425s 2025-08-13T20:01:22.956320387+00:00 stdout F [INFO] 10.217.0.8:52173 - 4616 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00209741s 2025-08-13T20:01:22.983271905+00:00 stdout F [INFO] 10.217.0.8:42709 - 34679 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000729141s 2025-08-13T20:01:22.983556464+00:00 stdout F [INFO] 10.217.0.8:37180 - 59501 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000854214s 2025-08-13T20:01:22.985588542+00:00 stdout F [INFO] 10.217.0.8:46898 - 18051 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00067822s 2025-08-13T20:01:22.986040514+00:00 stdout F [INFO] 10.217.0.8:55598 - 646 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000887455s 2025-08-13T20:01:23.019215770+00:00 stdout F [INFO] 10.217.0.8:43776 - 16490 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000885795s 2025-08-13T20:01:23.019563030+00:00 stdout F [INFO] 10.217.0.8:55377 - 13705 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000998358s 2025-08-13T20:01:23.021052543+00:00 stdout F [INFO] 10.217.0.8:32974 - 5627 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000607378s 2025-08-13T20:01:23.021367562+00:00 stdout F [INFO] 10.217.0.8:46715 - 588 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000855444s 2025-08-13T20:01:23.072998254+00:00 stdout F [INFO] 10.217.0.8:46363 - 14241 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000838344s 2025-08-13T20:01:23.073300263+00:00 stdout F [INFO] 10.217.0.8:39978 - 64267 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000959147s 2025-08-13T20:01:23.075549817+00:00 stdout F [INFO] 10.217.0.8:50302 - 46004 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00036754s 2025-08-13T20:01:23.080598711+00:00 stdout F [INFO] 10.217.0.8:36433 - 63422 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001915785s 2025-08-13T20:01:23.174177349+00:00 stdout F [INFO] 10.217.0.8:44290 - 35137 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001082941s 2025-08-13T20:01:23.174453997+00:00 stdout F [INFO] 10.217.0.8:58110 - 59594 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001618486s 2025-08-13T20:01:23.182685632+00:00 stdout F [INFO] 10.217.0.8:41613 - 40136 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0024516s 2025-08-13T20:01:23.183076613+00:00 stdout F [INFO] 10.217.0.8:49877 - 49171 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003004526s 2025-08-13T20:01:23.356737254+00:00 stdout F [INFO] 10.217.0.8:44016 - 20804 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000900336s 2025-08-13T20:01:23.356737254+00:00 stdout F [INFO] 10.217.0.8:50779 - 40285 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000814863s 2025-08-13T20:01:23.362991283+00:00 stdout F [INFO] 10.217.0.8:58191 - 8238 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003906692s 2025-08-13T20:01:23.363173828+00:00 stdout F [INFO] 10.217.0.8:34112 - 1811 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.004521639s 2025-08-13T20:01:23.697180232+00:00 stdout F [INFO] 10.217.0.8:45455 - 40717 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005744714s 2025-08-13T20:01:23.697623495+00:00 stdout F [INFO] 10.217.0.8:36955 - 33064 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006594509s 2025-08-13T20:01:23.710523722+00:00 stdout F [INFO] 10.217.0.8:53645 - 3233 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.011739265s 2025-08-13T20:01:23.711900992+00:00 stdout F [INFO] 10.217.0.8:59710 - 44829 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012081614s 2025-08-13T20:01:24.356571364+00:00 stdout F [INFO] 10.217.0.8:60593 - 48150 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000811133s 2025-08-13T20:01:24.356620205+00:00 stdout F [INFO] 10.217.0.8:55236 - 41544 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000640498s 2025-08-13T20:01:24.357769188+00:00 stdout F [INFO] 10.217.0.8:41541 - 33055 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000577307s 2025-08-13T20:01:24.358411346+00:00 stdout F [INFO] 10.217.0.8:47550 - 49968 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000632798s 2025-08-13T20:01:25.676466559+00:00 stdout F [INFO] 10.217.0.8:59216 - 3385 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006457944s 2025-08-13T20:01:25.676466559+00:00 stdout F [INFO] 10.217.0.8:41508 - 20230 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007326559s 2025-08-13T20:01:25.677325214+00:00 stdout F [INFO] 10.217.0.8:35700 - 28311 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069899s 2025-08-13T20:01:25.678744934+00:00 stdout F [INFO] 10.217.0.8:57821 - 32512 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000834894s 2025-08-13T20:01:26.195177770+00:00 stdout F [INFO] 10.217.0.28:45247 - 60056 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001457361s 2025-08-13T20:01:26.197937238+00:00 stdout F [INFO] 10.217.0.28:36922 - 18578 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00243877s 2025-08-13T20:01:27.747230374+00:00 stdout F [INFO] 10.217.0.57:42648 - 41427 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003019136s 2025-08-13T20:01:27.747230374+00:00 stdout F [INFO] 10.217.0.57:45322 - 51505 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003414618s 2025-08-13T20:01:28.252961704+00:00 stdout F [INFO] 10.217.0.8:38267 - 2620 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001890944s 2025-08-13T20:01:28.253402637+00:00 stdout F [INFO] 10.217.0.8:33474 - 20943 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002586484s 2025-08-13T20:01:28.257625927+00:00 stdout F [INFO] 10.217.0.8:46050 - 9905 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002819021s 2025-08-13T20:01:28.258326047+00:00 stdout F [INFO] 10.217.0.8:52537 - 2133 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000777933s 2025-08-13T20:01:31.226187453+00:00 stdout F [INFO] 10.217.0.28:54125 - 15824 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010559841s 2025-08-13T20:01:31.226275255+00:00 stdout F [INFO] 10.217.0.28:34218 - 4235 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010554371s 2025-08-13T20:01:32.746987848+00:00 stdout F [INFO] 10.217.0.57:48872 - 60983 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001016209s 2025-08-13T20:01:32.747463922+00:00 stdout F [INFO] 10.217.0.57:44677 - 14267 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001244225s 2025-08-13T20:01:33.390184889+00:00 stdout F [INFO] 10.217.0.8:51622 - 5984 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001513223s 2025-08-13T20:01:33.390184889+00:00 stdout F [INFO] 10.217.0.8:55023 - 43646 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001353799s 2025-08-13T20:01:33.391758223+00:00 stdout F [INFO] 10.217.0.8:36157 - 4425 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000748261s 2025-08-13T20:01:33.427944867+00:00 stdout F [INFO] 10.217.0.8:50839 - 60785 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.036577225s 2025-08-13T20:01:36.185028201+00:00 stdout F [INFO] 10.217.0.28:53977 - 19614 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000873425s 2025-08-13T20:01:36.185510115+00:00 stdout F [INFO] 10.217.0.28:48739 - 23520 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000631698s 2025-08-13T20:01:37.742700716+00:00 stdout F [INFO] 10.217.0.57:37766 - 57355 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001464932s 2025-08-13T20:01:37.742909242+00:00 stdout F [INFO] 10.217.0.57:45806 - 61763 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001585065s 2025-08-13T20:01:42.737094876+00:00 stdout F [INFO] 10.217.0.57:51744 - 57853 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000895476s 2025-08-13T20:01:42.738118555+00:00 stdout F [INFO] 10.217.0.57:50810 - 5972 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001739799s 2025-08-13T20:01:43.701713921+00:00 stdout F [INFO] 10.217.0.8:36382 - 38398 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001006558s 2025-08-13T20:01:43.701753482+00:00 stdout F [INFO] 10.217.0.8:47387 - 17641 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001006888s 2025-08-13T20:01:43.704013746+00:00 stdout F [INFO] 10.217.0.8:54898 - 64653 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000649298s 2025-08-13T20:01:43.704736497+00:00 stdout F [INFO] 10.217.0.8:32768 - 24650 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001379599s 2025-08-13T20:01:47.743368384+00:00 stdout F [INFO] 10.217.0.57:58791 - 31751 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001133693s 2025-08-13T20:01:47.744028232+00:00 stdout F [INFO] 10.217.0.57:60427 - 36503 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001844383s 2025-08-13T20:01:52.740083758+00:00 stdout F [INFO] 10.217.0.57:55965 - 27277 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001546664s 2025-08-13T20:01:52.740083758+00:00 stdout F [INFO] 10.217.0.57:43501 - 22555 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001841002s 2025-08-13T20:01:56.381289243+00:00 stdout F [INFO] 10.217.0.64:33868 - 27378 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001576005s 2025-08-13T20:01:56.381949141+00:00 stdout F [INFO] 10.217.0.64:43107 - 15688 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000985269s 2025-08-13T20:01:56.385583775+00:00 stdout F [INFO] 10.217.0.64:38774 - 34772 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000796203s 2025-08-13T20:01:56.386594514+00:00 stdout F [INFO] 10.217.0.64:51409 - 59863 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001705519s 2025-08-13T20:01:57.740516440+00:00 stdout F [INFO] 10.217.0.57:40457 - 8246 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000826874s 2025-08-13T20:01:57.740968553+00:00 stdout F [INFO] 10.217.0.57:41951 - 15134 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000784732s 2025-08-13T20:02:02.752936391+00:00 stdout F [INFO] 10.217.0.57:45491 - 28069 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00174288s 2025-08-13T20:02:02.752936391+00:00 stdout F [INFO] 10.217.0.57:51623 - 20798 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001608546s 2025-08-13T20:02:04.196547714+00:00 stdout F [INFO] 10.217.0.8:53185 - 3699 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001116902s 2025-08-13T20:02:04.196872024+00:00 stdout F [INFO] 10.217.0.8:41818 - 23258 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001150073s 2025-08-13T20:02:04.198577372+00:00 stdout F [INFO] 10.217.0.8:59757 - 57451 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00072344s 2025-08-13T20:02:04.198696666+00:00 stdout F [INFO] 10.217.0.8:40174 - 24364 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069757s 2025-08-13T20:02:07.740393995+00:00 stdout F [INFO] 10.217.0.57:44423 - 57160 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001285037s 2025-08-13T20:02:07.740930810+00:00 stdout F [INFO] 10.217.0.57:37229 - 23125 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001233406s 2025-08-13T20:02:12.737589534+00:00 stdout F [INFO] 10.217.0.57:60917 - 6095 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001498463s 2025-08-13T20:02:12.740934829+00:00 stdout F [INFO] 10.217.0.57:46568 - 39709 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005422165s 2025-08-13T20:02:17.741689966+00:00 stdout F [INFO] 10.217.0.57:39448 - 45084 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001421561s 2025-08-13T20:02:17.742317324+00:00 stdout F [INFO] 10.217.0.57:58611 - 38262 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175589s 2025-08-13T20:02:22.742260017+00:00 stdout F [INFO] 10.217.0.57:37594 - 46202 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001674168s 2025-08-13T20:02:22.743136312+00:00 stdout F [INFO] 10.217.0.57:46178 - 3725 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002001987s 2025-08-13T20:02:22.850376041+00:00 stdout F [INFO] 10.217.0.8:37535 - 5907 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000958718s 2025-08-13T20:02:22.850376041+00:00 stdout F [INFO] 10.217.0.8:57939 - 54092 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00138282s 2025-08-13T20:02:22.851987297+00:00 stdout F [INFO] 10.217.0.8:47235 - 23930 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000638688s 2025-08-13T20:02:22.852372948+00:00 stdout F [INFO] 10.217.0.8:35321 - 3816 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000816614s 2025-08-13T20:02:45.175333378+00:00 stdout F [INFO] 10.217.0.8:44330 - 47496 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002702667s 2025-08-13T20:02:45.175333378+00:00 stdout F [INFO] 10.217.0.8:53638 - 28468 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003530061s 2025-08-13T20:02:45.178408116+00:00 stdout F [INFO] 10.217.0.8:34486 - 6540 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001221594s 2025-08-13T20:02:45.181027171+00:00 stdout F [INFO] 10.217.0.8:50562 - 34559 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003549701s 2025-08-13T20:02:56.361633502+00:00 stdout F [INFO] 10.217.0.64:54018 - 28556 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002588663s 2025-08-13T20:02:56.361633502+00:00 stdout F [INFO] 10.217.0.64:52974 - 20654 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002253644s 2025-08-13T20:02:56.361737164+00:00 stdout F [INFO] 10.217.0.64:55753 - 41252 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002745678s 2025-08-13T20:02:56.361757175+00:00 stdout F [INFO] 10.217.0.64:57017 - 50212 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002530172s 2025-08-13T20:03:22.854718995+00:00 stdout F [INFO] 10.217.0.8:48056 - 26529 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005041034s 2025-08-13T20:03:22.855475337+00:00 stdout F [INFO] 10.217.0.8:36036 - 55050 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00594783s 2025-08-13T20:03:22.858104102+00:00 stdout F [INFO] 10.217.0.8:35111 - 28439 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001604606s 2025-08-13T20:03:22.858457422+00:00 stdout F [INFO] 10.217.0.8:38433 - 51125 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00106096s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:44595 - 12598 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 2.98031181s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:51254 - 44553 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 2.9808070239999997s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:47570 - 1814 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 2.9712284s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:38933 - 34536 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 2.971904639s 2025-08-13T20:04:22.855177511+00:00 stdout F [INFO] 10.217.0.8:36152 - 1487 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003761058s 2025-08-13T20:04:22.855177511+00:00 stdout F [INFO] 10.217.0.8:57798 - 5862 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00453144s 2025-08-13T20:04:22.859450584+00:00 stdout F [INFO] 10.217.0.8:51299 - 43533 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001581495s 2025-08-13T20:04:22.859450584+00:00 stdout F [INFO] 10.217.0.8:33424 - 53620 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001211915s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:53398 - 7438 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002246825s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:53707 - 34441 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001054011s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:35175 - 59215 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000854044s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:52207 - 4802 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002560213s 2025-08-13T20:05:22.834833023+00:00 stdout F [INFO] 10.217.0.57:56324 - 25639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002765099s 2025-08-13T20:05:22.834833023+00:00 stdout F [INFO] 10.217.0.57:56473 - 6138 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003923752s 2025-08-13T20:05:22.874352144+00:00 stdout F [INFO] 10.217.0.8:57994 - 3404 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002601494s 2025-08-13T20:05:22.874352144+00:00 stdout F [INFO] 10.217.0.8:56186 - 51110 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003099199s 2025-08-13T20:05:22.877030471+00:00 stdout F [INFO] 10.217.0.8:39214 - 1864 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000801883s 2025-08-13T20:05:22.877568387+00:00 stdout F [INFO] 10.217.0.8:52797 - 23506 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000858964s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:42329 - 34614 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001238436s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:34828 - 37668 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002666427s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:50135 - 64401 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002043728s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:47646 - 27514 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001369899s 2025-08-13T20:05:30.484629752+00:00 stdout F [INFO] 10.217.0.73:35591 - 37435 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129143s 2025-08-13T20:05:30.485074535+00:00 stdout F [INFO] 10.217.0.73:46188 - 1945 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001558785s 2025-08-13T20:05:31.039418839+00:00 stdout F [INFO] 10.217.0.57:55199 - 28877 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002956185s 2025-08-13T20:05:31.067272687+00:00 stdout F [INFO] 10.217.0.57:34733 - 49370 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.032735907s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:50848 - 53238 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003411278s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:39821 - 33040 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003394617s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:37737 - 1404 "A IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003336776s 2025-08-13T20:05:56.380818118+00:00 stdout F [INFO] 10.217.0.64:35989 - 676 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003093488s 2025-08-13T20:05:57.260526589+00:00 stdout F [INFO] 10.217.0.19:55716 - 30971 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001340569s 2025-08-13T20:05:57.260526589+00:00 stdout F [INFO] 10.217.0.19:48212 - 28087 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001226735s 2025-08-13T20:06:02.817339144+00:00 stdout F [INFO] 10.217.0.19:42395 - 20503 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001299787s 2025-08-13T20:06:02.817339144+00:00 stdout F [INFO] 10.217.0.19:58590 - 35761 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001914884s 2025-08-13T20:06:10.928750040+00:00 stdout F [INFO] 10.217.0.19:40611 - 63299 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001083611s 2025-08-13T20:06:10.928750040+00:00 stdout F [INFO] 10.217.0.19:45243 - 63194 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001169673s 2025-08-13T20:06:13.331971681+00:00 stdout F [INFO] 10.217.0.19:60442 - 24966 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00211397s 2025-08-13T20:06:13.331971681+00:00 stdout F [INFO] 10.217.0.19:49512 - 53168 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002144402s 2025-08-13T20:06:19.649446948+00:00 stdout F [INFO] 10.217.0.19:33262 - 15714 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002213083s 2025-08-13T20:06:19.649446948+00:00 stdout F [INFO] 10.217.0.19:51577 - 30006 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002758709s 2025-08-13T20:06:22.858350418+00:00 stdout F [INFO] 10.217.0.8:54831 - 57999 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.0014117s 2025-08-13T20:06:22.858350418+00:00 stdout F [INFO] 10.217.0.8:51901 - 11298 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001923375s 2025-08-13T20:06:22.860114628+00:00 stdout F [INFO] 10.217.0.8:41737 - 24832 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000652439s 2025-08-13T20:06:22.860657794+00:00 stdout F [INFO] 10.217.0.8:51273 - 60742 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001147012s 2025-08-13T20:06:24.856002612+00:00 stdout F [INFO] 10.217.0.19:46264 - 24052 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000716081s 2025-08-13T20:06:24.856002612+00:00 stdout F [INFO] 10.217.0.19:38175 - 50312 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000926017s 2025-08-13T20:06:24.982110083+00:00 stdout F [INFO] 10.217.0.19:41025 - 12544 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00067708s 2025-08-13T20:06:24.983396940+00:00 stdout F [INFO] 10.217.0.19:48621 - 6324 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000912816s 2025-08-13T20:06:42.398374316+00:00 stdout F [INFO] 10.217.0.19:40544 - 31548 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003992194s 2025-08-13T20:06:42.398551511+00:00 stdout F [INFO] 10.217.0.19:47961 - 38383 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004599002s 2025-08-13T20:06:44.831326502+00:00 stdout F [INFO] 10.217.0.19:46233 - 36481 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000929797s 2025-08-13T20:06:44.831326502+00:00 stdout F [INFO] 10.217.0.19:52474 - 16656 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001438521s 2025-08-13T20:06:45.977531073+00:00 stdout F [INFO] 10.217.0.19:42773 - 41690 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000721611s 2025-08-13T20:06:45.977531073+00:00 stdout F [INFO] 10.217.0.19:48183 - 6599 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000650689s 2025-08-13T20:06:49.310538523+00:00 stdout F [INFO] 10.217.0.19:40594 - 10031 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001391659s 2025-08-13T20:06:49.310569354+00:00 stdout F [INFO] 10.217.0.19:50841 - 61241 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000930497s 2025-08-13T20:06:56.401041584+00:00 stdout F [INFO] 10.217.0.64:35784 - 58433 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005351913s 2025-08-13T20:06:56.401041584+00:00 stdout F [INFO] 10.217.0.64:56005 - 56756 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005147368s 2025-08-13T20:06:56.408528299+00:00 stdout F [INFO] 10.217.0.64:53375 - 63899 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001581075s 2025-08-13T20:06:56.409009593+00:00 stdout F [INFO] 10.217.0.64:60446 - 44331 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002788589s 2025-08-13T20:07:12.391819733+00:00 stdout F [INFO] 10.217.0.19:51663 - 10457 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003456329s 2025-08-13T20:07:12.392347668+00:00 stdout F [INFO] 10.217.0.19:36490 - 30092 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002132671s 2025-08-13T20:07:22.916941526+00:00 stdout F [INFO] 10.217.0.8:49926 - 26483 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004742656s 2025-08-13T20:07:22.919477009+00:00 stdout F [INFO] 10.217.0.8:44496 - 22655 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00626125s 2025-08-13T20:07:22.925974105+00:00 stdout F [INFO] 10.217.0.8:43816 - 62129 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001857553s 2025-08-13T20:07:22.926229433+00:00 stdout F [INFO] 10.217.0.8:38711 - 50307 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00279s 2025-08-13T20:07:42.446369421+00:00 stdout F [INFO] 10.217.0.19:35262 - 46720 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004152379s 2025-08-13T20:07:42.446369421+00:00 stdout F [INFO] 10.217.0.19:58765 - 30779 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004750666s 2025-08-13T20:07:55.895470739+00:00 stdout F [INFO] 10.217.0.19:55147 - 31451 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002516592s 2025-08-13T20:07:55.895470739+00:00 stdout F [INFO] 10.217.0.19:57149 - 51963 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001709219s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:57024 - 62161 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001544464s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:40706 - 49086 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001788221s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:33994 - 7565 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001635137s 2025-08-13T20:07:56.368462020+00:00 stdout F [INFO] 10.217.0.64:38183 - 47201 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00557274s 2025-08-13T20:07:58.927633963+00:00 stdout F [INFO] 10.217.0.19:35433 - 24636 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002575593s 2025-08-13T20:07:58.927706205+00:00 stdout F [INFO] 10.217.0.19:38853 - 41776 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003402368s 2025-08-13T20:07:59.051685730+00:00 stdout F [INFO] 10.217.0.19:39692 - 59438 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001105412s 2025-08-13T20:07:59.053430520+00:00 stdout F [INFO] 10.217.0.19:37686 - 44994 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001551274s 2025-08-13T20:08:02.186596361+00:00 stdout F [INFO] 10.217.0.45:41379 - 46721 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000921637s 2025-08-13T20:08:02.189161414+00:00 stdout F [INFO] 10.217.0.45:59167 - 54274 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001012559s 2025-08-13T20:08:03.090366973+00:00 stdout F [INFO] 10.217.0.82:44963 - 1211 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001140693s 2025-08-13T20:08:03.090366973+00:00 stdout F [INFO] 10.217.0.82:46007 - 19049 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000906806s 2025-08-13T20:08:03.829207705+00:00 stdout F [INFO] 10.217.0.82:50887 - 47041 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.00105427s 2025-08-13T20:08:03.829537805+00:00 stdout F [INFO] 10.217.0.82:49104 - 36628 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001064701s 2025-08-13T20:08:11.673531949+00:00 stdout F [INFO] 10.217.0.62:53197 - 19130 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000967497s 2025-08-13T20:08:11.673531949+00:00 stdout F [INFO] 10.217.0.62:55102 - 58758 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001366819s 2025-08-13T20:08:11.863221898+00:00 stdout F [INFO] 10.217.0.62:46299 - 6790 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002732218s 2025-08-13T20:08:11.863307000+00:00 stdout F [INFO] 10.217.0.62:52890 - 25150 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002978285s 2025-08-13T20:08:12.460165583+00:00 stdout F [INFO] 10.217.0.19:55674 - 5069 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001345239s 2025-08-13T20:08:12.460165583+00:00 stdout F [INFO] 10.217.0.19:53687 - 56231 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001230745s 2025-08-13T20:08:16.460247929+00:00 stdout F [INFO] 10.217.0.19:33883 - 22269 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001597046s 2025-08-13T20:08:16.460247929+00:00 stdout F [INFO] 10.217.0.19:55451 - 9936 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001798961s 2025-08-13T20:08:17.093307339+00:00 stdout F [INFO] 10.217.0.74:47638 - 51886 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0014134s 2025-08-13T20:08:17.093307339+00:00 stdout F [INFO] 10.217.0.74:43722 - 53547 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002339637s 2025-08-13T20:08:17.093651509+00:00 stdout F [INFO] 10.217.0.74:47930 - 29842 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001831943s 2025-08-13T20:08:17.093928847+00:00 stdout F [INFO] 10.217.0.74:60430 - 2357 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002843112s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:37794 - 44338 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002469581s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:36380 - 24703 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002938054s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:54413 - 4769 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006599279s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:58887 - 6627 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007189946s 2025-08-13T20:08:17.313472882+00:00 stdout F [INFO] 10.217.0.74:34420 - 20953 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654899s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:40265 - 13881 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000726571s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:60934 - 58201 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001991577s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:56818 - 41121 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001181694s 2025-08-13T20:08:17.353452958+00:00 stdout F [INFO] 10.217.0.74:59414 - 1737 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001528744s 2025-08-13T20:08:17.353452958+00:00 stdout F [INFO] 10.217.0.74:35775 - 32087 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001648827s 2025-08-13T20:08:17.374565733+00:00 stdout F [INFO] 10.217.0.74:51839 - 10472 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000966457s 2025-08-13T20:08:17.374623335+00:00 stdout F [INFO] 10.217.0.74:59412 - 56029 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000880815s 2025-08-13T20:08:17.379713471+00:00 stdout F [INFO] 10.217.0.74:59850 - 37284 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467644s 2025-08-13T20:08:17.379713471+00:00 stdout F [INFO] 10.217.0.74:48117 - 9957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000437173s 2025-08-13T20:08:17.414002834+00:00 stdout F [INFO] 10.217.0.74:53202 - 11313 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0007195s 2025-08-13T20:08:17.415289781+00:00 stdout F [INFO] 10.217.0.74:34969 - 34716 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001980387s 2025-08-13T20:08:17.438063694+00:00 stdout F [INFO] 10.217.0.74:44550 - 1766 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001649777s 2025-08-13T20:08:17.438179977+00:00 stdout F [INFO] 10.217.0.74:33996 - 10441 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002167302s 2025-08-13T20:08:17.506074713+00:00 stdout F [INFO] 10.217.0.74:36687 - 64047 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00101641s 2025-08-13T20:08:17.506356141+00:00 stdout F [INFO] 10.217.0.74:60766 - 9260 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001620316s 2025-08-13T20:08:17.511222911+00:00 stdout F [INFO] 10.217.0.74:58673 - 45520 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001016829s 2025-08-13T20:08:17.512197309+00:00 stdout F [INFO] 10.217.0.74:48516 - 11841 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001193584s 2025-08-13T20:08:17.512382915+00:00 stdout F [INFO] 10.217.0.74:43668 - 62296 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001465352s 2025-08-13T20:08:17.513393153+00:00 stdout F [INFO] 10.217.0.74:53257 - 38114 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001675628s 2025-08-13T20:08:17.531563284+00:00 stdout F [INFO] 10.217.0.74:32914 - 43822 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000721731s 2025-08-13T20:08:17.531563284+00:00 stdout F [INFO] 10.217.0.74:57557 - 60863 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000788833s 2025-08-13T20:08:17.571268303+00:00 stdout F [INFO] 10.217.0.74:34472 - 31021 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001095471s 2025-08-13T20:08:17.571371496+00:00 stdout F [INFO] 10.217.0.74:45301 - 50358 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771102s 2025-08-13T20:08:17.576149263+00:00 stdout F [INFO] 10.217.0.74:38042 - 53403 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00139997s 2025-08-13T20:08:17.577165722+00:00 stdout F [INFO] 10.217.0.74:50457 - 1804 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000845834s 2025-08-13T20:08:17.587648382+00:00 stdout F [INFO] 10.217.0.74:33807 - 22539 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00101709s 2025-08-13T20:08:17.587648382+00:00 stdout F [INFO] 10.217.0.74:50078 - 57611 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000952327s 2025-08-13T20:08:17.596519817+00:00 stdout F [INFO] 10.217.0.74:51619 - 1271 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000502865s 2025-08-13T20:08:17.596723603+00:00 stdout F [INFO] 10.217.0.74:33030 - 31537 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527365s 2025-08-13T20:08:17.633355123+00:00 stdout F [INFO] 10.217.0.74:49719 - 52641 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757552s 2025-08-13T20:08:17.633409224+00:00 stdout F [INFO] 10.217.0.74:44615 - 29659 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824994s 2025-08-13T20:08:17.659664257+00:00 stdout F [INFO] 10.217.0.74:60079 - 38710 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002021758s 2025-08-13T20:08:17.659664257+00:00 stdout F [INFO] 10.217.0.74:46372 - 29784 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002361298s 2025-08-13T20:08:17.697094000+00:00 stdout F [INFO] 10.217.0.74:37367 - 17356 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175155s 2025-08-13T20:08:17.697094000+00:00 stdout F [INFO] 10.217.0.74:33453 - 15295 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001937126s 2025-08-13T20:08:17.720648796+00:00 stdout F [INFO] 10.217.0.74:56978 - 59221 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001009399s 2025-08-13T20:08:17.720648796+00:00 stdout F [INFO] 10.217.0.74:37924 - 45832 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001598885s 2025-08-13T20:08:17.757582705+00:00 stdout F [INFO] 10.217.0.74:50255 - 7778 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010567s 2025-08-13T20:08:17.757663517+00:00 stdout F [INFO] 10.217.0.74:49792 - 21282 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001282237s 2025-08-13T20:08:17.780338727+00:00 stdout F [INFO] 10.217.0.74:56255 - 42807 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001229795s 2025-08-13T20:08:17.780338727+00:00 stdout F [INFO] 10.217.0.74:59643 - 60857 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001216784s 2025-08-13T20:08:17.796204772+00:00 stdout F [INFO] 10.217.0.74:39042 - 60622 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002146812s 2025-08-13T20:08:17.796295525+00:00 stdout F [INFO] 10.217.0.74:51432 - 60269 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002134271s 2025-08-13T20:08:17.819143930+00:00 stdout F [INFO] 10.217.0.74:33576 - 27950 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103722s 2025-08-13T20:08:17.819928862+00:00 stdout F [INFO] 10.217.0.74:46599 - 38481 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000400511s 2025-08-13T20:08:17.858210460+00:00 stdout F [INFO] 10.217.0.74:40244 - 6782 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000604517s 2025-08-13T20:08:17.858261901+00:00 stdout F [INFO] 10.217.0.74:34036 - 38337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001010159s 2025-08-13T20:08:17.858490348+00:00 stdout F [INFO] 10.217.0.74:59286 - 12399 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000644209s 2025-08-13T20:08:17.858646702+00:00 stdout F [INFO] 10.217.0.74:59246 - 35961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001014419s 2025-08-13T20:08:17.880153489+00:00 stdout F [INFO] 10.217.0.74:36022 - 45218 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000942657s 2025-08-13T20:08:17.880526860+00:00 stdout F [INFO] 10.217.0.74:34981 - 33902 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001340278s 2025-08-13T20:08:17.881396075+00:00 stdout F [INFO] 10.217.0.74:49413 - 27840 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000502115s 2025-08-13T20:08:17.883545775+00:00 stdout F [INFO] 10.217.0.74:57490 - 176 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002198163s 2025-08-13T20:08:17.923052838+00:00 stdout F [INFO] 10.217.0.74:42831 - 31660 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003493061s 2025-08-13T20:08:17.924053797+00:00 stdout F [INFO] 10.217.0.74:56272 - 13437 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003915012s 2025-08-13T20:08:17.930211313+00:00 stdout F [INFO] 10.217.0.74:51914 - 40356 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001364369s 2025-08-13T20:08:17.930663796+00:00 stdout F [INFO] 10.217.0.74:35796 - 40289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001480022s 2025-08-13T20:08:17.939123259+00:00 stdout F [INFO] 10.217.0.74:48300 - 58931 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000874155s 2025-08-13T20:08:17.939484169+00:00 stdout F [INFO] 10.217.0.74:44967 - 4651 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001081602s 2025-08-13T20:08:17.957997040+00:00 stdout F [INFO] 10.217.0.74:45670 - 58180 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001100601s 2025-08-13T20:08:17.958301839+00:00 stdout F [INFO] 10.217.0.74:39047 - 29042 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000748512s 2025-08-13T20:08:18.001523288+00:00 stdout F [INFO] 10.217.0.74:60147 - 42545 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001336129s 2025-08-13T20:08:18.002163866+00:00 stdout F [INFO] 10.217.0.74:52066 - 45542 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001274117s 2025-08-13T20:08:18.003171135+00:00 stdout F [INFO] 10.217.0.74:44306 - 52127 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001024219s 2025-08-13T20:08:18.003308809+00:00 stdout F [INFO] 10.217.0.74:54199 - 52772 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001133893s 2025-08-13T20:08:18.042742490+00:00 stdout F [INFO] 10.217.0.74:49386 - 1226 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768522s 2025-08-13T20:08:18.042914435+00:00 stdout F [INFO] 10.217.0.74:47491 - 30793 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000759702s 2025-08-13T20:08:18.057213194+00:00 stdout F [INFO] 10.217.0.74:35466 - 46773 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001504533s 2025-08-13T20:08:18.058068579+00:00 stdout F [INFO] 10.217.0.74:47145 - 28223 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001951675s 2025-08-13T20:08:18.060862139+00:00 stdout F [INFO] 10.217.0.74:43862 - 43274 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000581036s 2025-08-13T20:08:18.060862139+00:00 stdout F [INFO] 10.217.0.74:53185 - 40376 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613208s 2025-08-13T20:08:18.101125083+00:00 stdout F [INFO] 10.217.0.74:41513 - 8463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105484s 2025-08-13T20:08:18.101125083+00:00 stdout F [INFO] 10.217.0.74:57968 - 54244 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001155973s 2025-08-13T20:08:18.119922342+00:00 stdout F [INFO] 10.217.0.74:40669 - 25614 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000916636s 2025-08-13T20:08:18.120042356+00:00 stdout F [INFO] 10.217.0.74:52622 - 18205 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000878835s 2025-08-13T20:08:18.155849493+00:00 stdout F [INFO] 10.217.0.74:59445 - 26368 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769882s 2025-08-13T20:08:18.156263654+00:00 stdout F [INFO] 10.217.0.74:60747 - 31872 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104462s 2025-08-13T20:08:18.180989663+00:00 stdout F [INFO] 10.217.0.74:60997 - 17041 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000908776s 2025-08-13T20:08:18.180989663+00:00 stdout F [INFO] 10.217.0.74:46551 - 10659 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001160833s 2025-08-13T20:08:18.213105324+00:00 stdout F [INFO] 10.217.0.74:39244 - 55181 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070225s 2025-08-13T20:08:18.213105324+00:00 stdout F [INFO] 10.217.0.74:51836 - 55722 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000638858s 2025-08-13T20:08:18.239968064+00:00 stdout F [INFO] 10.217.0.74:52292 - 12385 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729421s 2025-08-13T20:08:18.240025206+00:00 stdout F [INFO] 10.217.0.74:39250 - 28993 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000791312s 2025-08-13T20:08:18.271506388+00:00 stdout F [INFO] 10.217.0.74:58756 - 712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000933617s 2025-08-13T20:08:18.271506388+00:00 stdout F [INFO] 10.217.0.74:46085 - 64615 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001323918s 2025-08-13T20:08:18.296870226+00:00 stdout F [INFO] 10.217.0.74:51242 - 53671 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001269976s 2025-08-13T20:08:18.296870226+00:00 stdout F [INFO] 10.217.0.74:38515 - 27078 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001712259s 2025-08-13T20:08:18.331032635+00:00 stdout F [INFO] 10.217.0.74:50758 - 41176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000789273s 2025-08-13T20:08:18.331032635+00:00 stdout F [INFO] 10.217.0.74:35435 - 27337 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001215435s 2025-08-13T20:08:18.340077824+00:00 stdout F [INFO] 10.217.0.74:52631 - 10801 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000752912s 2025-08-13T20:08:18.340128446+00:00 stdout F [INFO] 10.217.0.74:49085 - 51573 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104725s 2025-08-13T20:08:18.357214316+00:00 stdout F [INFO] 10.217.0.74:36458 - 4215 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00101837s 2025-08-13T20:08:18.357267757+00:00 stdout F [INFO] 10.217.0.74:37756 - 51038 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001114382s 2025-08-13T20:08:18.388217535+00:00 stdout F [INFO] 10.217.0.74:55365 - 27500 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001113472s 2025-08-13T20:08:18.388380129+00:00 stdout F [INFO] 10.217.0.74:50138 - 31839 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001151443s 2025-08-13T20:08:18.395161234+00:00 stdout F [INFO] 10.217.0.74:34389 - 8169 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000879295s 2025-08-13T20:08:18.395161234+00:00 stdout F [INFO] 10.217.0.74:39526 - 1664 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000770822s 2025-08-13T20:08:18.421573191+00:00 stdout F [INFO] 10.217.0.74:42922 - 4581 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002536073s 2025-08-13T20:08:18.421699015+00:00 stdout F [INFO] 10.217.0.74:36744 - 20784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003105509s 2025-08-13T20:08:18.445528128+00:00 stdout F [INFO] 10.217.0.74:40515 - 63429 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000746172s 2025-08-13T20:08:18.446329821+00:00 stdout F [INFO] 10.217.0.74:40343 - 7258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000667309s 2025-08-13T20:08:18.454415793+00:00 stdout F [INFO] 10.217.0.74:43286 - 24798 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000833254s 2025-08-13T20:08:18.456846412+00:00 stdout F [INFO] 10.217.0.74:45836 - 44265 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002275765s 2025-08-13T20:08:18.477692230+00:00 stdout F [INFO] 10.217.0.74:40548 - 35806 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000763682s 2025-08-13T20:08:18.478217505+00:00 stdout F [INFO] 10.217.0.74:51750 - 50242 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104433s 2025-08-13T20:08:18.501767740+00:00 stdout F [INFO] 10.217.0.74:45811 - 21551 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001103142s 2025-08-13T20:08:18.501767740+00:00 stdout F [INFO] 10.217.0.74:35630 - 41838 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010681s 2025-08-13T20:08:18.513550628+00:00 stdout F [INFO] 10.217.0.74:37388 - 59626 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000973008s 2025-08-13T20:08:18.513633050+00:00 stdout F [INFO] 10.217.0.74:41749 - 40718 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001201095s 2025-08-13T20:08:18.533399707+00:00 stdout F [INFO] 10.217.0.74:37546 - 7034 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001270477s 2025-08-13T20:08:18.533399707+00:00 stdout F [INFO] 10.217.0.74:39060 - 18619 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001359849s 2025-08-13T20:08:18.561114502+00:00 stdout F [INFO] 10.217.0.74:50716 - 20934 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001269167s 2025-08-13T20:08:18.561666278+00:00 stdout F [INFO] 10.217.0.74:41229 - 6523 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001590155s 2025-08-13T20:08:18.568039400+00:00 stdout F [INFO] 10.217.0.74:46810 - 19974 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00174046s 2025-08-13T20:08:18.568272437+00:00 stdout F [INFO] 10.217.0.74:52908 - 17620 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001644087s 2025-08-13T20:08:18.590841034+00:00 stdout F [INFO] 10.217.0.74:39127 - 36265 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000630128s 2025-08-13T20:08:18.591324348+00:00 stdout F [INFO] 10.217.0.74:40898 - 29496 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103056s 2025-08-13T20:08:18.616736236+00:00 stdout F [INFO] 10.217.0.74:36035 - 28386 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000574817s 2025-08-13T20:08:18.617176059+00:00 stdout F [INFO] 10.217.0.74:39294 - 50408 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000754191s 2025-08-13T20:08:18.653306395+00:00 stdout F [INFO] 10.217.0.74:49643 - 20012 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793343s 2025-08-13T20:08:18.653829190+00:00 stdout F [INFO] 10.217.0.74:49400 - 43367 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001490833s 2025-08-13T20:08:18.681602586+00:00 stdout F [INFO] 10.217.0.74:34632 - 19788 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000816623s 2025-08-13T20:08:18.681980997+00:00 stdout F [INFO] 10.217.0.74:60503 - 51226 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001154653s 2025-08-13T20:08:18.714427047+00:00 stdout F [INFO] 10.217.0.74:57813 - 21578 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103567s 2025-08-13T20:08:18.714427047+00:00 stdout F [INFO] 10.217.0.74:44347 - 19831 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001489173s 2025-08-13T20:08:18.728724417+00:00 stdout F [INFO] 10.217.0.74:56433 - 9101 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000562716s 2025-08-13T20:08:18.730372845+00:00 stdout F [INFO] 10.217.0.74:43298 - 41510 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668719s 2025-08-13T20:08:18.739250759+00:00 stdout F [INFO] 10.217.0.74:58344 - 16334 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000675279s 2025-08-13T20:08:18.739410844+00:00 stdout F [INFO] 10.217.0.74:55776 - 48013 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000848244s 2025-08-13T20:08:18.772723499+00:00 stdout F [INFO] 10.217.0.74:52116 - 40428 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000591977s 2025-08-13T20:08:18.772940755+00:00 stdout F [INFO] 10.217.0.74:56184 - 26165 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000535085s 2025-08-13T20:08:18.784148006+00:00 stdout F [INFO] 10.217.0.74:59633 - 62916 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071752s 2025-08-13T20:08:18.784242399+00:00 stdout F [INFO] 10.217.0.74:36176 - 20344 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000646268s 2025-08-13T20:08:18.829873817+00:00 stdout F [INFO] 10.217.0.74:53963 - 24242 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000631288s 2025-08-13T20:08:18.829873817+00:00 stdout F [INFO] 10.217.0.74:53356 - 40328 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000655019s 2025-08-13T20:08:18.842616683+00:00 stdout F [INFO] 10.217.0.74:44219 - 60953 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001134443s 2025-08-13T20:08:18.842616683+00:00 stdout F [INFO] 10.217.0.74:53785 - 15265 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001234925s 2025-08-13T20:08:18.888835948+00:00 stdout F [INFO] 10.217.0.74:45759 - 30313 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000572467s 2025-08-13T20:08:18.889302571+00:00 stdout F [INFO] 10.217.0.74:42183 - 64045 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001291417s 2025-08-13T20:08:18.889944740+00:00 stdout F [INFO] 10.217.0.74:53201 - 28698 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00066511s 2025-08-13T20:08:18.891499084+00:00 stdout F [INFO] 10.217.0.74:34657 - 40843 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001221505s 2025-08-13T20:08:18.901686146+00:00 stdout F [INFO] 10.217.0.74:59660 - 46487 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000527275s 2025-08-13T20:08:18.902438668+00:00 stdout F [INFO] 10.217.0.74:39170 - 27000 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000802423s 2025-08-13T20:08:18.943844235+00:00 stdout F [INFO] 10.217.0.74:47238 - 51574 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000864945s 2025-08-13T20:08:18.943958148+00:00 stdout F [INFO] 10.217.0.74:40533 - 50479 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000665359s 2025-08-13T20:08:18.945034189+00:00 stdout F [INFO] 10.217.0.74:59443 - 32212 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070142s 2025-08-13T20:08:18.945284256+00:00 stdout F [INFO] 10.217.0.74:44930 - 37490 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000979978s 2025-08-13T20:08:18.999972634+00:00 stdout F [INFO] 10.217.0.74:49226 - 54351 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069281s 2025-08-13T20:08:19.000010075+00:00 stdout F [INFO] 10.217.0.74:33374 - 675 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000548685s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:34487 - 13154 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845254s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:47712 - 4340 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000636399s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:35471 - 58428 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001321158s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:60689 - 274 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734062s 2025-08-13T20:08:19.111097770+00:00 stdout F [INFO] 10.217.0.74:46092 - 31714 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001013969s 2025-08-13T20:08:19.111729979+00:00 stdout F [INFO] 10.217.0.74:35110 - 49539 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000569996s 2025-08-13T20:08:19.123997620+00:00 stdout F [INFO] 10.217.0.74:50974 - 12345 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000832623s 2025-08-13T20:08:19.124532406+00:00 stdout F [INFO] 10.217.0.74:32788 - 22690 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001368809s 2025-08-13T20:08:19.173362416+00:00 stdout F [INFO] 10.217.0.74:57072 - 18749 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002017758s 2025-08-13T20:08:19.173960953+00:00 stdout F [INFO] 10.217.0.74:58011 - 64217 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003071098s 2025-08-13T20:08:19.176240778+00:00 stdout F [INFO] 10.217.0.74:49301 - 14570 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001588786s 2025-08-13T20:08:19.176240778+00:00 stdout F [INFO] 10.217.0.74:51980 - 56486 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001845263s 2025-08-13T20:08:19.231853013+00:00 stdout F [INFO] 10.217.0.74:42889 - 57106 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001345139s 2025-08-13T20:08:19.232280125+00:00 stdout F [INFO] 10.217.0.74:38556 - 62901 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001417371s 2025-08-13T20:08:19.234921181+00:00 stdout F [INFO] 10.217.0.74:36720 - 15603 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00138679s 2025-08-13T20:08:19.237827514+00:00 stdout F [INFO] 10.217.0.74:59480 - 37520 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104755s 2025-08-13T20:08:19.254981216+00:00 stdout F [INFO] 10.217.0.74:36818 - 57438 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000951858s 2025-08-13T20:08:19.254981216+00:00 stdout F [INFO] 10.217.0.74:55824 - 7717 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001288587s 2025-08-13T20:08:19.287223030+00:00 stdout F [INFO] 10.217.0.74:45490 - 37893 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742451s 2025-08-13T20:08:19.288857277+00:00 stdout F [INFO] 10.217.0.74:54329 - 23127 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000895675s 2025-08-13T20:08:19.309303843+00:00 stdout F [INFO] 10.217.0.74:46021 - 745 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000624868s 2025-08-13T20:08:19.309303843+00:00 stdout F [INFO] 10.217.0.74:60525 - 34491 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000919096s 2025-08-13T20:08:19.344323977+00:00 stdout F [INFO] 10.217.0.74:35478 - 40915 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001874983s 2025-08-13T20:08:19.344323977+00:00 stdout F [INFO] 10.217.0.74:34458 - 1330 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002321437s 2025-08-13T20:08:19.404959636+00:00 stdout F [INFO] 10.217.0.74:49339 - 57172 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000725511s 2025-08-13T20:08:19.404959636+00:00 stdout F [INFO] 10.217.0.74:42428 - 29004 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000939657s 2025-08-13T20:08:19.440233127+00:00 stdout F [INFO] 10.217.0.74:34899 - 32956 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001207635s 2025-08-13T20:08:19.440233127+00:00 stdout F [INFO] 10.217.0.74:48635 - 22276 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000980108s 2025-08-13T20:08:19.458175032+00:00 stdout F [INFO] 10.217.0.74:57975 - 38076 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000981929s 2025-08-13T20:08:19.458175032+00:00 stdout F [INFO] 10.217.0.74:46325 - 41507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000946747s 2025-08-13T20:08:19.491980691+00:00 stdout F [INFO] 10.217.0.74:43809 - 28638 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070041s 2025-08-13T20:08:19.495368818+00:00 stdout F [INFO] 10.217.0.74:54058 - 23406 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003173411s 2025-08-13T20:08:19.499145456+00:00 stdout F [INFO] 10.217.0.74:32823 - 17562 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000504084s 2025-08-13T20:08:19.499145456+00:00 stdout F [INFO] 10.217.0.74:56726 - 343 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000561356s 2025-08-13T20:08:19.514751294+00:00 stdout F [INFO] 10.217.0.74:41593 - 63316 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069s 2025-08-13T20:08:19.514751294+00:00 stdout F [INFO] 10.217.0.74:42069 - 26740 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000919106s 2025-08-13T20:08:19.557246502+00:00 stdout F [INFO] 10.217.0.74:36148 - 26416 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000834204s 2025-08-13T20:08:19.557246502+00:00 stdout F [INFO] 10.217.0.74:53623 - 53928 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612508s 2025-08-13T20:08:19.560498005+00:00 stdout F [INFO] 10.217.0.74:57862 - 55531 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001326568s 2025-08-13T20:08:19.561442072+00:00 stdout F [INFO] 10.217.0.74:46271 - 32034 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001072861s 2025-08-13T20:08:19.574042364+00:00 stdout F [INFO] 10.217.0.74:60622 - 623 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000890776s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:44083 - 31197 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001470792s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:33941 - 39911 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001874684s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:52521 - 63095 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001776991s 2025-08-13T20:08:19.619598280+00:00 stdout F [INFO] 10.217.0.74:57139 - 31701 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001597296s 2025-08-13T20:08:19.619631901+00:00 stdout F [INFO] 10.217.0.74:51704 - 25051 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001798212s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:48185 - 21897 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004593802s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:42680 - 51080 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002749438s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:53337 - 43838 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002883242s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:49462 - 22842 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004737485s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:47487 - 10576 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000635059s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:55514 - 64163 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001015499s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:59295 - 60158 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657648s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:56438 - 61941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000965287s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:51878 - 49945 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657199s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:36094 - 50966 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001020449s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:55603 - 60901 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069649s 2025-08-13T20:08:19.752212632+00:00 stdout F [INFO] 10.217.0.74:56998 - 15524 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000435012s 2025-08-13T20:08:19.809153344+00:00 stdout F [INFO] 10.217.0.74:53401 - 39051 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001184504s 2025-08-13T20:08:19.809153344+00:00 stdout F [INFO] 10.217.0.74:51215 - 21865 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000982628s 2025-08-13T20:08:19.812235183+00:00 stdout F [INFO] 10.217.0.74:40889 - 52437 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000586647s 2025-08-13T20:08:19.813729486+00:00 stdout F [INFO] 10.217.0.74:48421 - 25312 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001891555s 2025-08-13T20:08:19.863663957+00:00 stdout F [INFO] 10.217.0.74:60962 - 58177 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000864505s 2025-08-13T20:08:19.863663957+00:00 stdout F [INFO] 10.217.0.74:53141 - 18297 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000925797s 2025-08-13T20:08:19.920427445+00:00 stdout F [INFO] 10.217.0.74:46406 - 6689 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001152653s 2025-08-13T20:08:19.921107744+00:00 stdout F [INFO] 10.217.0.74:52820 - 53416 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558376s 2025-08-13T20:08:19.922021890+00:00 stdout F [INFO] 10.217.0.74:47651 - 17881 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001226225s 2025-08-13T20:08:19.922523145+00:00 stdout F [INFO] 10.217.0.74:52083 - 21131 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001890835s 2025-08-13T20:08:19.964350344+00:00 stdout F [INFO] 10.217.0.74:46219 - 37278 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069833s 2025-08-13T20:08:19.964350344+00:00 stdout F [INFO] 10.217.0.74:56445 - 19602 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000983288s 2025-08-13T20:08:19.980119026+00:00 stdout F [INFO] 10.217.0.74:47896 - 63739 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002735378s 2025-08-13T20:08:19.980119026+00:00 stdout F [INFO] 10.217.0.74:53528 - 63966 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001429141s 2025-08-13T20:08:19.982137164+00:00 stdout F [INFO] 10.217.0.74:36392 - 821 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000785813s 2025-08-13T20:08:19.983587326+00:00 stdout F [INFO] 10.217.0.74:37348 - 50385 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001475423s 2025-08-13T20:08:20.026518367+00:00 stdout F [INFO] 10.217.0.74:53987 - 20951 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000582706s 2025-08-13T20:08:20.026565948+00:00 stdout F [INFO] 10.217.0.74:59175 - 57290 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000483584s 2025-08-13T20:08:20.037504422+00:00 stdout F [INFO] 10.217.0.74:59422 - 54552 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000653699s 2025-08-13T20:08:20.037552413+00:00 stdout F [INFO] 10.217.0.74:42804 - 8086 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626458s 2025-08-13T20:08:20.045226223+00:00 stdout F [INFO] 10.217.0.74:38048 - 34636 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000587927s 2025-08-13T20:08:20.045523631+00:00 stdout F [INFO] 10.217.0.74:46147 - 23800 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000991229s 2025-08-13T20:08:20.092669803+00:00 stdout F [INFO] 10.217.0.74:42603 - 6548 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000541455s 2025-08-13T20:08:20.092669803+00:00 stdout F [INFO] 10.217.0.74:59176 - 45031 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000920527s 2025-08-13T20:08:20.132848905+00:00 stdout F [INFO] 10.217.0.74:39842 - 39036 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000801383s 2025-08-13T20:08:20.132848905+00:00 stdout F [INFO] 10.217.0.74:55461 - 20359 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000863444s 2025-08-13T20:08:20.148882125+00:00 stdout F [INFO] 10.217.0.74:48741 - 47367 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000800953s 2025-08-13T20:08:20.148882125+00:00 stdout F [INFO] 10.217.0.74:49669 - 13644 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000790093s 2025-08-13T20:08:20.188878572+00:00 stdout F [INFO] 10.217.0.74:45619 - 23079 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000639578s 2025-08-13T20:08:20.189544251+00:00 stdout F [INFO] 10.217.0.74:35659 - 16828 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000816493s 2025-08-13T20:08:20.203331556+00:00 stdout F [INFO] 10.217.0.74:54643 - 54596 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001219905s 2025-08-13T20:08:20.203816790+00:00 stdout F [INFO] 10.217.0.74:44079 - 24799 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001638507s 2025-08-13T20:08:20.222126875+00:00 stdout F [INFO] 10.217.0.74:35728 - 57603 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000833413s 2025-08-13T20:08:20.223450793+00:00 stdout F [INFO] 10.217.0.74:51625 - 51357 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001006178s 2025-08-13T20:08:20.224555275+00:00 stdout F [INFO] 10.217.0.74:55727 - 11533 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000638348s 2025-08-13T20:08:20.225231624+00:00 stdout F [INFO] 10.217.0.74:47336 - 45122 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000887886s 2025-08-13T20:08:20.234617923+00:00 stdout F [INFO] 10.217.0.74:48099 - 10617 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001363009s 2025-08-13T20:08:20.234617923+00:00 stdout F [INFO] 10.217.0.74:40174 - 24904 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001339848s 2025-08-13T20:08:20.253737411+00:00 stdout F [INFO] 10.217.0.74:42674 - 40263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000679899s 2025-08-13T20:08:20.254765271+00:00 stdout F [INFO] 10.217.0.74:34954 - 36129 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000766252s 2025-08-13T20:08:20.266526848+00:00 stdout F [INFO] 10.217.0.74:47334 - 28205 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001164104s 2025-08-13T20:08:20.266742764+00:00 stdout F [INFO] 10.217.0.74:53173 - 33111 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001522993s 2025-08-13T20:08:20.289119096+00:00 stdout F [INFO] 10.217.0.74:58640 - 24864 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070667s 2025-08-13T20:08:20.289391903+00:00 stdout F [INFO] 10.217.0.74:50835 - 5956 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740561s 2025-08-13T20:08:20.301440449+00:00 stdout F [INFO] 10.217.0.74:49383 - 62464 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104861s 2025-08-13T20:08:20.301440449+00:00 stdout F [INFO] 10.217.0.74:36017 - 30424 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001671858s 2025-08-13T20:08:20.316057878+00:00 stdout F [INFO] 10.217.0.74:36367 - 62148 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000437042s 2025-08-13T20:08:20.316353217+00:00 stdout F [INFO] 10.217.0.74:56467 - 60074 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000994989s 2025-08-13T20:08:20.347647934+00:00 stdout F [INFO] 10.217.0.74:43603 - 36237 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001694369s 2025-08-13T20:08:20.347647934+00:00 stdout F [INFO] 10.217.0.74:58264 - 25764 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002189132s 2025-08-13T20:08:20.374389510+00:00 stdout F [INFO] 10.217.0.74:40516 - 61985 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000824754s 2025-08-13T20:08:20.374389510+00:00 stdout F [INFO] 10.217.0.74:41851 - 23707 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000762832s 2025-08-13T20:08:20.404057201+00:00 stdout F [INFO] 10.217.0.74:45825 - 36179 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000787853s 2025-08-13T20:08:20.404187815+00:00 stdout F [INFO] 10.217.0.74:60727 - 34978 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001304217s 2025-08-13T20:08:20.433333370+00:00 stdout F [INFO] 10.217.0.74:47313 - 41258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001661527s 2025-08-13T20:08:20.433333370+00:00 stdout F [INFO] 10.217.0.74:37101 - 36275 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001336558s 2025-08-13T20:08:20.434687389+00:00 stdout F [INFO] 10.217.0.74:50737 - 46844 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000860355s 2025-08-13T20:08:20.434687389+00:00 stdout F [INFO] 10.217.0.74:34462 - 31226 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001051541s 2025-08-13T20:08:20.468028635+00:00 stdout F [INFO] 10.217.0.74:57365 - 25742 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000990839s 2025-08-13T20:08:20.469755305+00:00 stdout F [INFO] 10.217.0.74:39107 - 47850 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001581315s 2025-08-13T20:08:20.477362013+00:00 stdout F [INFO] 10.217.0.74:47989 - 52317 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649848s 2025-08-13T20:08:20.477449415+00:00 stdout F [INFO] 10.217.0.74:40591 - 60541 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000771862s 2025-08-13T20:08:20.510911725+00:00 stdout F [INFO] 10.217.0.74:48028 - 3073 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000765232s 2025-08-13T20:08:20.510911725+00:00 stdout F [INFO] 10.217.0.74:46573 - 64702 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001097651s 2025-08-13T20:08:20.530874907+00:00 stdout F [INFO] 10.217.0.74:41963 - 44907 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002360568s 2025-08-13T20:08:20.531193616+00:00 stdout F [INFO] 10.217.0.74:53470 - 27807 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002826101s 2025-08-13T20:08:20.544350043+00:00 stdout F [INFO] 10.217.0.74:53647 - 29671 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009632836s 2025-08-13T20:08:20.544350043+00:00 stdout F [INFO] 10.217.0.74:57620 - 55875 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009781851s 2025-08-13T20:08:20.570202815+00:00 stdout F [INFO] 10.217.0.74:44231 - 53021 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651088s 2025-08-13T20:08:20.571462151+00:00 stdout F [INFO] 10.217.0.74:42901 - 59801 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768212s 2025-08-13T20:08:20.576648890+00:00 stdout F [INFO] 10.217.0.74:57027 - 64889 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000545476s 2025-08-13T20:08:20.576802884+00:00 stdout F [INFO] 10.217.0.74:33614 - 61489 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000609417s 2025-08-13T20:08:20.588171280+00:00 stdout F [INFO] 10.217.0.74:49097 - 19093 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000917837s 2025-08-13T20:08:20.588599452+00:00 stdout F [INFO] 10.217.0.74:55195 - 10285 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001500893s 2025-08-13T20:08:20.629764072+00:00 stdout F [INFO] 10.217.0.74:49525 - 63178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000586707s 2025-08-13T20:08:20.630031790+00:00 stdout F [INFO] 10.217.0.74:43936 - 49048 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000670439s 2025-08-13T20:08:20.633615463+00:00 stdout F [INFO] 10.217.0.74:56815 - 9782 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000677729s 2025-08-13T20:08:20.633615463+00:00 stdout F [INFO] 10.217.0.74:50379 - 52189 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158163s 2025-08-13T20:08:20.648194051+00:00 stdout F [INFO] 10.217.0.74:40370 - 31715 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001159454s 2025-08-13T20:08:20.648194051+00:00 stdout F [INFO] 10.217.0.74:46407 - 49932 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001255207s 2025-08-13T20:08:20.651815225+00:00 stdout F [INFO] 10.217.0.74:39433 - 20178 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001055991s 2025-08-13T20:08:20.656983383+00:00 stdout F [INFO] 10.217.0.74:44582 - 39095 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140599s 2025-08-13T20:08:20.665125626+00:00 stdout F [INFO] 10.217.0.74:44221 - 55255 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734341s 2025-08-13T20:08:20.665251720+00:00 stdout F [INFO] 10.217.0.74:39018 - 60238 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000659679s 2025-08-13T20:08:20.689938068+00:00 stdout F [INFO] 10.217.0.74:41391 - 51096 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002542943s 2025-08-13T20:08:20.690173254+00:00 stdout F [INFO] 10.217.0.74:47992 - 23554 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934757s 2025-08-13T20:08:20.712100273+00:00 stdout F [INFO] 10.217.0.74:59180 - 1034 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000846694s 2025-08-13T20:08:20.712746112+00:00 stdout F [INFO] 10.217.0.74:50309 - 3207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000605618s 2025-08-13T20:08:20.722083729+00:00 stdout F [INFO] 10.217.0.74:33841 - 24481 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069825s 2025-08-13T20:08:20.722843781+00:00 stdout F [INFO] 10.217.0.74:41004 - 7845 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000935487s 2025-08-13T20:08:20.749768843+00:00 stdout F [INFO] 10.217.0.74:35507 - 16179 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000877175s 2025-08-13T20:08:20.753088058+00:00 stdout F [INFO] 10.217.0.74:59183 - 20585 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753702s 2025-08-13T20:08:20.767614805+00:00 stdout F [INFO] 10.217.0.74:58570 - 41317 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000853814s 2025-08-13T20:08:20.767841851+00:00 stdout F [INFO] 10.217.0.74:55816 - 52170 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000998769s 2025-08-13T20:08:20.814605662+00:00 stdout F [INFO] 10.217.0.74:43412 - 35534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070546s 2025-08-13T20:08:20.815200969+00:00 stdout F [INFO] 10.217.0.74:36960 - 17945 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001435021s 2025-08-13T20:08:20.829635843+00:00 stdout F [INFO] 10.217.0.74:36040 - 9387 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000743341s 2025-08-13T20:08:20.829635843+00:00 stdout F [INFO] 10.217.0.74:42363 - 34278 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615428s 2025-08-13T20:08:20.873258044+00:00 stdout F [INFO] 10.217.0.74:57034 - 44179 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813913s 2025-08-13T20:08:20.873319845+00:00 stdout F [INFO] 10.217.0.74:46023 - 9625 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001219815s 2025-08-13T20:08:20.899992170+00:00 stdout F [INFO] 10.217.0.74:44255 - 47347 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000738771s 2025-08-13T20:08:20.900095283+00:00 stdout F [INFO] 10.217.0.74:45113 - 757 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000606878s 2025-08-13T20:08:20.922948738+00:00 stdout F [INFO] 10.217.0.74:43853 - 47584 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000456963s 2025-08-13T20:08:20.923684229+00:00 stdout F [INFO] 10.217.0.74:59554 - 41272 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001007579s 2025-08-13T20:08:20.932553414+00:00 stdout F [INFO] 10.217.0.74:51472 - 20194 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000796723s 2025-08-13T20:08:20.932639126+00:00 stdout F [INFO] 10.217.0.74:42302 - 22466 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105013s 2025-08-13T20:08:20.958865018+00:00 stdout F [INFO] 10.217.0.74:34581 - 22745 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071544s 2025-08-13T20:08:20.959120615+00:00 stdout F [INFO] 10.217.0.74:51406 - 23089 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000901286s 2025-08-13T20:08:20.967416073+00:00 stdout F [INFO] 10.217.0.74:45014 - 37578 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870665s 2025-08-13T20:08:20.967648760+00:00 stdout F [INFO] 10.217.0.74:47564 - 64455 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001078651s 2025-08-13T20:08:20.984656478+00:00 stdout F [INFO] 10.217.0.74:34397 - 48184 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003339386s 2025-08-13T20:08:20.984656478+00:00 stdout F [INFO] 10.217.0.74:57204 - 2685 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003514661s 2025-08-13T20:08:20.995082076+00:00 stdout F [INFO] 10.217.0.74:55637 - 63608 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068953s 2025-08-13T20:08:20.996061235+00:00 stdout F [INFO] 10.217.0.74:60552 - 45167 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000512405s 2025-08-13T20:08:21.027181477+00:00 stdout F [INFO] 10.217.0.74:45214 - 44061 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001082451s 2025-08-13T20:08:21.028565126+00:00 stdout F [INFO] 10.217.0.74:34987 - 36881 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000870085s 2025-08-13T20:08:21.029176524+00:00 stdout F [INFO] 10.217.0.74:57687 - 39493 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000792063s 2025-08-13T20:08:21.029436942+00:00 stdout F [INFO] 10.217.0.74:48392 - 28980 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000893525s 2025-08-13T20:08:21.049611960+00:00 stdout F [INFO] 10.217.0.74:57286 - 65085 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105317s 2025-08-13T20:08:21.049670302+00:00 stdout F [INFO] 10.217.0.74:43053 - 12216 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657459s 2025-08-13T20:08:21.095700921+00:00 stdout F [INFO] 10.217.0.74:43606 - 1746 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001463912s 2025-08-13T20:08:21.095700921+00:00 stdout F [INFO] 10.217.0.74:59907 - 36665 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000466304s 2025-08-13T20:08:21.097937406+00:00 stdout F [INFO] 10.217.0.74:54768 - 998 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000672219s 2025-08-13T20:08:21.097963896+00:00 stdout F [INFO] 10.217.0.74:45132 - 28321 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000950917s 2025-08-13T20:08:21.156295639+00:00 stdout F [INFO] 10.217.0.74:53052 - 4292 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813174s 2025-08-13T20:08:21.157350069+00:00 stdout F [INFO] 10.217.0.74:43527 - 27937 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000610728s 2025-08-13T20:08:21.159719867+00:00 stdout F [INFO] 10.217.0.74:35243 - 38330 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001566785s 2025-08-13T20:08:21.159719867+00:00 stdout F [INFO] 10.217.0.74:44258 - 26336 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001432661s 2025-08-13T20:08:21.214957301+00:00 stdout F [INFO] 10.217.0.74:33663 - 52704 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000819633s 2025-08-13T20:08:21.215178627+00:00 stdout F [INFO] 10.217.0.74:54171 - 1 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000600528s 2025-08-13T20:08:21.251848858+00:00 stdout F [INFO] 10.217.0.74:41906 - 5676 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001237486s 2025-08-13T20:08:21.253050703+00:00 stdout F [INFO] 10.217.0.74:47230 - 11157 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001222845s 2025-08-13T20:08:21.276477684+00:00 stdout F [INFO] 10.217.0.74:36698 - 36123 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000664929s 2025-08-13T20:08:21.277333799+00:00 stdout F [INFO] 10.217.0.74:38715 - 787 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000642098s 2025-08-13T20:08:21.277816763+00:00 stdout F [INFO] 10.217.0.74:41323 - 41761 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000915877s 2025-08-13T20:08:21.278021229+00:00 stdout F [INFO] 10.217.0.74:44740 - 7711 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001326388s 2025-08-13T20:08:21.318615902+00:00 stdout F [INFO] 10.217.0.74:47736 - 41622 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001616496s 2025-08-13T20:08:21.318761397+00:00 stdout F [INFO] 10.217.0.74:60351 - 34419 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002020678s 2025-08-13T20:08:21.348972883+00:00 stdout F [INFO] 10.217.0.74:51905 - 53438 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003447209s 2025-08-13T20:08:21.350017763+00:00 stdout F [INFO] 10.217.0.74:53640 - 36814 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004793708s 2025-08-13T20:08:21.350343552+00:00 stdout F [INFO] 10.217.0.74:53486 - 4265 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001804502s 2025-08-13T20:08:21.352277748+00:00 stdout F [INFO] 10.217.0.74:42630 - 31454 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002065249s 2025-08-13T20:08:21.381030062+00:00 stdout F [INFO] 10.217.0.74:40627 - 8986 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000994388s 2025-08-13T20:08:21.381292199+00:00 stdout F [INFO] 10.217.0.74:56655 - 62455 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000911647s 2025-08-13T20:08:21.391505122+00:00 stdout F [INFO] 10.217.0.74:38469 - 59137 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000469634s 2025-08-13T20:08:21.391505122+00:00 stdout F [INFO] 10.217.0.74:33175 - 27614 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000449763s 2025-08-13T20:08:21.406906914+00:00 stdout F [INFO] 10.217.0.74:59333 - 37348 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001283207s 2025-08-13T20:08:21.407047058+00:00 stdout F [INFO] 10.217.0.74:47742 - 60191 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001129573s 2025-08-13T20:08:21.443857173+00:00 stdout F [INFO] 10.217.0.74:40528 - 41650 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000792613s 2025-08-13T20:08:21.444235444+00:00 stdout F [INFO] 10.217.0.74:47840 - 21143 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003349207s 2025-08-13T20:08:21.451058140+00:00 stdout F [INFO] 10.217.0.74:41261 - 13281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775202s 2025-08-13T20:08:21.451058140+00:00 stdout F [INFO] 10.217.0.74:45851 - 14398 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826563s 2025-08-13T20:08:21.467184651+00:00 stdout F [INFO] 10.217.0.74:51266 - 7856 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000544876s 2025-08-13T20:08:21.467329675+00:00 stdout F [INFO] 10.217.0.74:42910 - 23102 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009659s 2025-08-13T20:08:21.507686802+00:00 stdout F [INFO] 10.217.0.74:57006 - 16266 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000783853s 2025-08-13T20:08:21.507686802+00:00 stdout F [INFO] 10.217.0.74:59004 - 2798 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070864s 2025-08-13T20:08:21.508597748+00:00 stdout F [INFO] 10.217.0.74:36516 - 35450 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000865265s 2025-08-13T20:08:21.509967498+00:00 stdout F [INFO] 10.217.0.74:58294 - 64367 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001360009s 2025-08-13T20:08:21.524058432+00:00 stdout F [INFO] 10.217.0.74:57622 - 23682 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000840354s 2025-08-13T20:08:21.524096323+00:00 stdout F [INFO] 10.217.0.74:36061 - 43951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754442s 2025-08-13T20:08:21.567555409+00:00 stdout F [INFO] 10.217.0.74:46832 - 53516 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000781792s 2025-08-13T20:08:21.567607830+00:00 stdout F [INFO] 10.217.0.74:59571 - 25182 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000803143s 2025-08-13T20:08:21.578841352+00:00 stdout F [INFO] 10.217.0.74:35014 - 55842 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000644738s 2025-08-13T20:08:21.581847819+00:00 stdout F [INFO] 10.217.0.74:55775 - 55026 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102618s 2025-08-13T20:08:21.627158318+00:00 stdout F [INFO] 10.217.0.74:52303 - 42393 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000750662s 2025-08-13T20:08:21.627158318+00:00 stdout F [INFO] 10.217.0.74:59691 - 22958 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000549025s 2025-08-13T20:08:21.641962452+00:00 stdout F [INFO] 10.217.0.74:54771 - 8887 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000581156s 2025-08-13T20:08:21.641962452+00:00 stdout F [INFO] 10.217.0.74:49009 - 37894 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104908s 2025-08-13T20:08:21.681094214+00:00 stdout F [INFO] 10.217.0.74:37748 - 51751 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001596456s 2025-08-13T20:08:21.681628369+00:00 stdout F [INFO] 10.217.0.74:36028 - 57991 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002143281s 2025-08-13T20:08:21.699708158+00:00 stdout F [INFO] 10.217.0.74:35438 - 20392 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000583846s 2025-08-13T20:08:21.699708158+00:00 stdout F [INFO] 10.217.0.74:45645 - 53652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000513624s 2025-08-13T20:08:21.740565719+00:00 stdout F [INFO] 10.217.0.74:43636 - 21088 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001104292s 2025-08-13T20:08:21.740881948+00:00 stdout F [INFO] 10.217.0.74:60974 - 8141 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001300058s 2025-08-13T20:08:21.753051197+00:00 stdout F [INFO] 10.217.0.74:58618 - 16057 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001003989s 2025-08-13T20:08:21.756076784+00:00 stdout F [INFO] 10.217.0.74:43844 - 23921 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004007895s 2025-08-13T20:08:21.758245566+00:00 stdout F [INFO] 10.217.0.74:47124 - 59391 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936547s 2025-08-13T20:08:21.758441132+00:00 stdout F [INFO] 10.217.0.74:49102 - 43624 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001713939s 2025-08-13T20:08:21.772176476+00:00 stdout F [INFO] 10.217.0.74:46657 - 4769 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000960378s 2025-08-13T20:08:21.772284439+00:00 stdout F [INFO] 10.217.0.74:35549 - 18077 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001296937s 2025-08-13T20:08:21.781389130+00:00 stdout F [INFO] 10.217.0.74:35033 - 499 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000914066s 2025-08-13T20:08:21.781597126+00:00 stdout F [INFO] 10.217.0.74:52691 - 12784 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000996999s 2025-08-13T20:08:21.795328699+00:00 stdout F [INFO] 10.217.0.74:39043 - 52094 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000585827s 2025-08-13T20:08:21.795527995+00:00 stdout F [INFO] 10.217.0.74:38492 - 24303 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870904s 2025-08-13T20:08:21.812769619+00:00 stdout F [INFO] 10.217.0.74:44059 - 37291 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002576694s 2025-08-13T20:08:21.812980465+00:00 stdout F [INFO] 10.217.0.74:35021 - 5072 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002731668s 2025-08-13T20:08:21.815246030+00:00 stdout F [INFO] 10.217.0.74:51931 - 8604 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001972337s 2025-08-13T20:08:21.815246030+00:00 stdout F [INFO] 10.217.0.74:46159 - 49185 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002262854s 2025-08-13T20:08:21.836761357+00:00 stdout F [INFO] 10.217.0.74:41242 - 33438 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001621987s 2025-08-13T20:08:21.837088427+00:00 stdout F [INFO] 10.217.0.74:56449 - 54193 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0020712s 2025-08-13T20:08:21.852242801+00:00 stdout F [INFO] 10.217.0.74:40528 - 28590 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000773742s 2025-08-13T20:08:21.853209469+00:00 stdout F [INFO] 10.217.0.74:40518 - 52546 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001800842s 2025-08-13T20:08:21.874935372+00:00 stdout F [INFO] 10.217.0.74:48787 - 4883 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742461s 2025-08-13T20:08:21.875084876+00:00 stdout F [INFO] 10.217.0.74:53900 - 17729 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001129183s 2025-08-13T20:08:21.909328318+00:00 stdout F [INFO] 10.217.0.74:58437 - 7710 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000685829s 2025-08-13T20:08:21.909587265+00:00 stdout F [INFO] 10.217.0.74:41177 - 8284 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000807493s 2025-08-13T20:08:21.931037270+00:00 stdout F [INFO] 10.217.0.74:50656 - 30424 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070117s 2025-08-13T20:08:21.931133773+00:00 stdout F [INFO] 10.217.0.74:54700 - 47502 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001229895s 2025-08-13T20:08:21.968340560+00:00 stdout F [INFO] 10.217.0.74:51881 - 37498 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001640137s 2025-08-13T20:08:21.968429062+00:00 stdout F [INFO] 10.217.0.74:40751 - 48109 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001837213s 2025-08-13T20:08:21.973223730+00:00 stdout F [INFO] 10.217.0.74:47528 - 5737 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527265s 2025-08-13T20:08:21.973223730+00:00 stdout F [INFO] 10.217.0.74:36322 - 65342 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000478184s 2025-08-13T20:08:21.979201161+00:00 stdout F [INFO] 10.217.0.74:40156 - 14196 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000723381s 2025-08-13T20:08:21.980020715+00:00 stdout F [INFO] 10.217.0.74:46359 - 39016 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001127842s 2025-08-13T20:08:21.982708972+00:00 stdout F [INFO] 10.217.0.74:52956 - 1784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001235185s 2025-08-13T20:08:21.983336250+00:00 stdout F [INFO] 10.217.0.74:49008 - 65323 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001339569s 2025-08-13T20:08:22.031831640+00:00 stdout F [INFO] 10.217.0.74:59415 - 64727 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000711211s 2025-08-13T20:08:22.031917433+00:00 stdout F [INFO] 10.217.0.74:54528 - 22049 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001002928s 2025-08-13T20:08:22.042281320+00:00 stdout F [INFO] 10.217.0.74:35159 - 49207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068618s 2025-08-13T20:08:22.042378513+00:00 stdout F [INFO] 10.217.0.74:54247 - 9704 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000502274s 2025-08-13T20:08:22.044223495+00:00 stdout F [INFO] 10.217.0.74:56155 - 28029 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001385s 2025-08-13T20:08:22.044223495+00:00 stdout F [INFO] 10.217.0.74:49841 - 61991 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102096s 2025-08-13T20:08:22.083609545+00:00 stdout F [INFO] 10.217.0.74:46141 - 31380 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000932726s 2025-08-13T20:08:22.084075808+00:00 stdout F [INFO] 10.217.0.74:47850 - 15680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001004049s 2025-08-13T20:08:22.098749489+00:00 stdout F [INFO] 10.217.0.74:36090 - 7351 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000636378s 2025-08-13T20:08:22.099014486+00:00 stdout F [INFO] 10.217.0.74:38597 - 21700 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000812803s 2025-08-13T20:08:22.139299391+00:00 stdout F [INFO] 10.217.0.74:39378 - 44582 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000653949s 2025-08-13T20:08:22.139647061+00:00 stdout F [INFO] 10.217.0.74:36225 - 22223 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001251976s 2025-08-13T20:08:22.155679041+00:00 stdout F [INFO] 10.217.0.74:37035 - 15705 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000885045s 2025-08-13T20:08:22.158742589+00:00 stdout F [INFO] 10.217.0.74:35138 - 51780 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001082841s 2025-08-13T20:08:22.179351120+00:00 stdout F [INFO] 10.217.0.74:45734 - 36900 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000978308s 2025-08-13T20:08:22.179593517+00:00 stdout F [INFO] 10.217.0.74:40799 - 54997 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001279057s 2025-08-13T20:08:22.201436993+00:00 stdout F [INFO] 10.217.0.74:44396 - 55140 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001330658s 2025-08-13T20:08:22.201560556+00:00 stdout F [INFO] 10.217.0.74:33447 - 1163 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00139551s 2025-08-13T20:08:22.214420215+00:00 stdout F [INFO] 10.217.0.74:52628 - 34934 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000763552s 2025-08-13T20:08:22.214820137+00:00 stdout F [INFO] 10.217.0.74:52648 - 24657 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001467352s 2025-08-13T20:08:22.239860595+00:00 stdout F [INFO] 10.217.0.74:51385 - 16973 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105344s 2025-08-13T20:08:22.239860595+00:00 stdout F [INFO] 10.217.0.74:49919 - 34860 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001381359s 2025-08-13T20:08:22.261126194+00:00 stdout F [INFO] 10.217.0.74:43526 - 6054 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000821284s 2025-08-13T20:08:22.261374731+00:00 stdout F [INFO] 10.217.0.74:40719 - 25417 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001145663s 2025-08-13T20:08:22.267248970+00:00 stdout F [INFO] 10.217.0.74:44331 - 53040 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002694397s 2025-08-13T20:08:22.267341953+00:00 stdout F [INFO] 10.217.0.74:56326 - 34656 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002656876s 2025-08-13T20:08:22.276757652+00:00 stdout F [INFO] 10.217.0.74:52375 - 43508 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001356439s 2025-08-13T20:08:22.276943238+00:00 stdout F [INFO] 10.217.0.74:58513 - 13901 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001270606s 2025-08-13T20:08:22.332239413+00:00 stdout F [INFO] 10.217.0.74:41229 - 42767 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001075081s 2025-08-13T20:08:22.332239413+00:00 stdout F [INFO] 10.217.0.74:52021 - 30200 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000567466s 2025-08-13T20:08:22.398007929+00:00 stdout F [INFO] 10.217.0.74:47267 - 62452 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001238796s 2025-08-13T20:08:22.398975957+00:00 stdout F [INFO] 10.217.0.74:57833 - 38362 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000962667s 2025-08-13T20:08:22.399740409+00:00 stdout F [INFO] 10.217.0.74:38237 - 53887 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069585s 2025-08-13T20:08:22.401736936+00:00 stdout F [INFO] 10.217.0.74:44127 - 22403 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105724s 2025-08-13T20:08:22.433097905+00:00 stdout F [INFO] 10.217.0.74:52965 - 42770 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000876085s 2025-08-13T20:08:22.434533936+00:00 stdout F [INFO] 10.217.0.74:58613 - 49472 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002353548s 2025-08-13T20:08:22.455585540+00:00 stdout F [INFO] 10.217.0.74:60674 - 20048 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000941467s 2025-08-13T20:08:22.463268320+00:00 stdout F [INFO] 10.217.0.74:41587 - 40893 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000694699s 2025-08-13T20:08:22.466577205+00:00 stdout F [INFO] 10.217.0.74:46766 - 32712 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000801933s 2025-08-13T20:08:22.466577205+00:00 stdout F [INFO] 10.217.0.74:45096 - 7155 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000899256s 2025-08-13T20:08:22.510876515+00:00 stdout F [INFO] 10.217.0.74:54552 - 47724 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005303202s 2025-08-13T20:08:22.511946716+00:00 stdout F [INFO] 10.217.0.74:45434 - 9918 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006879638s 2025-08-13T20:08:22.519684217+00:00 stdout F [INFO] 10.217.0.74:37786 - 11215 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000952157s 2025-08-13T20:08:22.520166181+00:00 stdout F [INFO] 10.217.0.74:36296 - 46459 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757792s 2025-08-13T20:08:22.522077546+00:00 stdout F [INFO] 10.217.0.74:51580 - 63207 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000449953s 2025-08-13T20:08:22.522077546+00:00 stdout F [INFO] 10.217.0.74:40251 - 25049 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000519685s 2025-08-13T20:08:22.576696652+00:00 stdout F [INFO] 10.217.0.74:57622 - 63627 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069116s 2025-08-13T20:08:22.577608658+00:00 stdout F [INFO] 10.217.0.74:45888 - 2144 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001443201s 2025-08-13T20:08:22.636625250+00:00 stdout F [INFO] 10.217.0.74:52830 - 1253 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813273s 2025-08-13T20:08:22.636625250+00:00 stdout F [INFO] 10.217.0.74:37953 - 41778 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001006499s 2025-08-13T20:08:22.638191015+00:00 stdout F [INFO] 10.217.0.74:41117 - 48794 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000588647s 2025-08-13T20:08:22.638569146+00:00 stdout F [INFO] 10.217.0.74:41318 - 41580 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000748311s 2025-08-13T20:08:22.648858241+00:00 stdout F [INFO] 10.217.0.74:45123 - 27094 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000730341s 2025-08-13T20:08:22.649838699+00:00 stdout F [INFO] 10.217.0.74:38611 - 55493 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001436781s 2025-08-13T20:08:22.664156650+00:00 stdout F [INFO] 10.217.0.74:43102 - 36070 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070239s 2025-08-13T20:08:22.664418027+00:00 stdout F [INFO] 10.217.0.74:32935 - 60085 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615287s 2025-08-13T20:08:22.691543585+00:00 stdout F [INFO] 10.217.0.74:57635 - 58953 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001525044s 2025-08-13T20:08:22.691858614+00:00 stdout F [INFO] 10.217.0.74:50777 - 5102 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139997s 2025-08-13T20:08:22.693921343+00:00 stdout F [INFO] 10.217.0.74:34461 - 46395 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000959297s 2025-08-13T20:08:22.694219062+00:00 stdout F [INFO] 10.217.0.74:34317 - 59106 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001340569s 2025-08-13T20:08:22.702807788+00:00 stdout F [INFO] 10.217.0.74:53439 - 28469 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000780463s 2025-08-13T20:08:22.703057565+00:00 stdout F [INFO] 10.217.0.74:45448 - 47572 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071143s 2025-08-13T20:08:22.721880755+00:00 stdout F [INFO] 10.217.0.74:51174 - 61205 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579236s 2025-08-13T20:08:22.721984198+00:00 stdout F [INFO] 10.217.0.74:40056 - 50199 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000744641s 2025-08-13T20:08:22.744304728+00:00 stdout F [INFO] 10.217.0.74:42321 - 7798 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613208s 2025-08-13T20:08:22.745109991+00:00 stdout F [INFO] 10.217.0.74:42452 - 58325 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000604637s 2025-08-13T20:08:22.802369352+00:00 stdout F [INFO] 10.217.0.74:43377 - 56553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826424s 2025-08-13T20:08:22.802422454+00:00 stdout F [INFO] 10.217.0.74:36584 - 51319 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934307s 2025-08-13T20:08:22.826702370+00:00 stdout F [INFO] 10.217.0.74:50113 - 4511 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001435291s 2025-08-13T20:08:22.827707989+00:00 stdout F [INFO] 10.217.0.74:60437 - 29422 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002011328s 2025-08-13T20:08:22.841089162+00:00 stdout F [INFO] 10.217.0.74:40733 - 6592 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000608397s 2025-08-13T20:08:22.841724611+00:00 stdout F [INFO] 10.217.0.74:48412 - 63283 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000536045s 2025-08-13T20:08:22.853629642+00:00 stdout F [INFO] 10.217.0.74:57847 - 38739 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000649439s 2025-08-13T20:08:22.853831288+00:00 stdout F [INFO] 10.217.0.74:44784 - 44679 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068887s 2025-08-13T20:08:22.862625980+00:00 stdout F [INFO] 10.217.0.8:39285 - 842 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000741802s 2025-08-13T20:08:22.862710062+00:00 stdout F [INFO] 10.217.0.8:49026 - 54348 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000958068s 2025-08-13T20:08:22.865088361+00:00 stdout F [INFO] 10.217.0.8:45372 - 17228 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000958098s 2025-08-13T20:08:22.865369679+00:00 stdout F [INFO] 10.217.0.8:59953 - 11278 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001006529s 2025-08-13T20:08:22.866947584+00:00 stdout F [INFO] 10.217.0.74:50270 - 28207 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000865685s 2025-08-13T20:08:22.867147420+00:00 stdout F [INFO] 10.217.0.74:35084 - 42316 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103552s 2025-08-13T20:08:22.883126038+00:00 stdout F [INFO] 10.217.0.74:43979 - 42279 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000583907s 2025-08-13T20:08:22.884513628+00:00 stdout F [INFO] 10.217.0.74:36172 - 11931 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001900804s 2025-08-13T20:08:22.901498455+00:00 stdout F [INFO] 10.217.0.74:53107 - 13887 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001286337s 2025-08-13T20:08:22.901736311+00:00 stdout F [INFO] 10.217.0.74:58698 - 43570 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001462982s 2025-08-13T20:08:22.906571780+00:00 stdout F [INFO] 10.217.0.74:38047 - 19286 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000501705s 2025-08-13T20:08:22.906856988+00:00 stdout F [INFO] 10.217.0.74:60554 - 11266 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069933s 2025-08-13T20:08:22.921088926+00:00 stdout F [INFO] 10.217.0.74:54298 - 2714 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000801863s 2025-08-13T20:08:22.921255611+00:00 stdout F [INFO] 10.217.0.74:51577 - 64057 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070477s 2025-08-13T20:08:22.943009155+00:00 stdout F [INFO] 10.217.0.74:56610 - 49819 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000551956s 2025-08-13T20:08:22.943108448+00:00 stdout F [INFO] 10.217.0.74:60473 - 63037 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001326858s 2025-08-13T20:08:22.964922453+00:00 stdout F [INFO] 10.217.0.74:40775 - 7463 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003672445s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:38048 - 46678 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004013355s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:41056 - 56626 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004316424s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:46554 - 45253 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004679264s 2025-08-13T20:08:22.983985159+00:00 stdout F [INFO] 10.217.0.74:35219 - 42924 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00314427s 2025-08-13T20:08:22.984067622+00:00 stdout F [INFO] 10.217.0.74:33846 - 27423 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003050767s 2025-08-13T20:08:22.998949769+00:00 stdout F [INFO] 10.217.0.74:34661 - 48996 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009099s 2025-08-13T20:08:22.998949769+00:00 stdout F [INFO] 10.217.0.74:49940 - 35189 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001271607s 2025-08-13T20:08:23.028307460+00:00 stdout F [INFO] 10.217.0.74:50107 - 58863 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579406s 2025-08-13T20:08:23.028442514+00:00 stdout F [INFO] 10.217.0.74:47941 - 33212 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000843434s 2025-08-13T20:08:23.028560918+00:00 stdout F [INFO] 10.217.0.74:53687 - 62909 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000572706s 2025-08-13T20:08:23.029147614+00:00 stdout F [INFO] 10.217.0.74:33055 - 46652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587257s 2025-08-13T20:08:23.041401596+00:00 stdout F [INFO] 10.217.0.74:50392 - 19123 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000922237s 2025-08-13T20:08:23.041401596+00:00 stdout F [INFO] 10.217.0.74:59302 - 36026 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001150433s 2025-08-13T20:08:23.067258497+00:00 stdout F [INFO] 10.217.0.74:48739 - 15689 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001277987s 2025-08-13T20:08:23.067258497+00:00 stdout F [INFO] 10.217.0.74:45224 - 16916 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001430021s 2025-08-13T20:08:23.086723925+00:00 stdout F [INFO] 10.217.0.74:46805 - 11307 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001466452s 2025-08-13T20:08:23.086723925+00:00 stdout F [INFO] 10.217.0.74:44357 - 57641 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001210575s 2025-08-13T20:08:23.102167298+00:00 stdout F [INFO] 10.217.0.74:33526 - 21050 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001001959s 2025-08-13T20:08:23.102167298+00:00 stdout F [INFO] 10.217.0.74:44924 - 63144 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001224765s 2025-08-13T20:08:23.130224712+00:00 stdout F [INFO] 10.217.0.74:51112 - 22062 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001516313s 2025-08-13T20:08:23.130457029+00:00 stdout F [INFO] 10.217.0.74:45256 - 58772 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002293196s 2025-08-13T20:08:23.142601387+00:00 stdout F [INFO] 10.217.0.74:39991 - 54096 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001159244s 2025-08-13T20:08:23.142601387+00:00 stdout F [INFO] 10.217.0.74:53534 - 40160 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824164s 2025-08-13T20:08:23.158352619+00:00 stdout F [INFO] 10.217.0.74:43614 - 27969 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000830473s 2025-08-13T20:08:23.158352619+00:00 stdout F [INFO] 10.217.0.74:51746 - 46118 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00071584s 2025-08-13T20:08:23.173865214+00:00 stdout F [INFO] 10.217.0.74:56815 - 10800 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001192754s 2025-08-13T20:08:23.173865214+00:00 stdout F [INFO] 10.217.0.74:37470 - 26160 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001359149s 2025-08-13T20:08:23.186530057+00:00 stdout F [INFO] 10.217.0.74:35698 - 17396 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001814522s 2025-08-13T20:08:23.186576378+00:00 stdout F [INFO] 10.217.0.74:47297 - 35966 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002178742s 2025-08-13T20:08:23.201879497+00:00 stdout F [INFO] 10.217.0.74:44242 - 61307 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000986358s 2025-08-13T20:08:23.204248675+00:00 stdout F [INFO] 10.217.0.74:49688 - 14474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001558625s 2025-08-13T20:08:23.207265181+00:00 stdout F [INFO] 10.217.0.74:60793 - 2671 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507625s 2025-08-13T20:08:23.207265181+00:00 stdout F [INFO] 10.217.0.74:47305 - 44007 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00280909s 2025-08-13T20:08:23.224348531+00:00 stdout F [INFO] 10.217.0.74:43667 - 49108 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139733s 2025-08-13T20:08:23.224348531+00:00 stdout F [INFO] 10.217.0.74:59356 - 15274 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001646567s 2025-08-13T20:08:23.240945167+00:00 stdout F [INFO] 10.217.0.74:37437 - 18158 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001358979s 2025-08-13T20:08:23.245963521+00:00 stdout F [INFO] 10.217.0.74:36155 - 8426 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002000028s 2025-08-13T20:08:23.247389952+00:00 stdout F [INFO] 10.217.0.74:36748 - 23802 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000823644s 2025-08-13T20:08:23.247389952+00:00 stdout F [INFO] 10.217.0.74:34141 - 30409 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001084881s 2025-08-13T20:08:23.265588203+00:00 stdout F [INFO] 10.217.0.74:33930 - 425 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002363888s 2025-08-13T20:08:23.265588203+00:00 stdout F [INFO] 10.217.0.74:48650 - 34517 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002275536s 2025-08-13T20:08:23.265645915+00:00 stdout F [INFO] 10.217.0.74:45625 - 16991 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002644116s 2025-08-13T20:08:23.265862481+00:00 stdout F [INFO] 10.217.0.74:44165 - 37866 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002362178s 2025-08-13T20:08:23.286846603+00:00 stdout F [INFO] 10.217.0.74:43829 - 8853 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001176033s 2025-08-13T20:08:23.287205573+00:00 stdout F [INFO] 10.217.0.74:59220 - 37306 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001708199s 2025-08-13T20:08:23.302813801+00:00 stdout F [INFO] 10.217.0.74:35066 - 6993 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069728s 2025-08-13T20:08:23.303000866+00:00 stdout F [INFO] 10.217.0.74:58090 - 34860 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070014s 2025-08-13T20:08:23.323088672+00:00 stdout F [INFO] 10.217.0.74:58163 - 25525 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000577657s 2025-08-13T20:08:23.323339729+00:00 stdout F [INFO] 10.217.0.74:47008 - 15212 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000624078s 2025-08-13T20:08:23.346958476+00:00 stdout F [INFO] 10.217.0.74:36598 - 11240 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000590827s 2025-08-13T20:08:23.347015058+00:00 stdout F [INFO] 10.217.0.74:55618 - 51013 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006817s 2025-08-13T20:08:23.365053625+00:00 stdout F [INFO] 10.217.0.74:56789 - 10666 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001782331s 2025-08-13T20:08:23.365137098+00:00 stdout F [INFO] 10.217.0.74:50602 - 20928 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001779181s 2025-08-13T20:08:23.385098650+00:00 stdout F [INFO] 10.217.0.74:38012 - 36657 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002411469s 2025-08-13T20:08:23.385748948+00:00 stdout F [INFO] 10.217.0.74:39530 - 9020 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002152452s 2025-08-13T20:08:23.403006163+00:00 stdout F [INFO] 10.217.0.74:46876 - 29006 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001095072s 2025-08-13T20:08:23.403196749+00:00 stdout F [INFO] 10.217.0.74:55259 - 46879 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001254786s 2025-08-13T20:08:23.423972174+00:00 stdout F [INFO] 10.217.0.74:45266 - 62572 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001078551s 2025-08-13T20:08:23.424981563+00:00 stdout F [INFO] 10.217.0.74:56831 - 15440 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002025318s 2025-08-13T20:08:23.448177158+00:00 stdout F [INFO] 10.217.0.74:35227 - 32690 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000593787s 2025-08-13T20:08:23.448323243+00:00 stdout F [INFO] 10.217.0.74:35367 - 16258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001110942s 2025-08-13T20:08:23.462711955+00:00 stdout F [INFO] 10.217.0.74:46751 - 5046 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000588527s 2025-08-13T20:08:23.463100416+00:00 stdout F [INFO] 10.217.0.74:40424 - 57957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000795833s 2025-08-13T20:08:23.483741708+00:00 stdout F [INFO] 10.217.0.74:57391 - 28731 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004024075s 2025-08-13T20:08:23.483866722+00:00 stdout F [INFO] 10.217.0.74:50791 - 22189 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000505134s 2025-08-13T20:08:23.504960606+00:00 stdout F [INFO] 10.217.0.74:56036 - 54608 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000469544s 2025-08-13T20:08:23.505524843+00:00 stdout F [INFO] 10.217.0.74:34403 - 11947 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001620877s 2025-08-13T20:08:23.521936363+00:00 stdout F [INFO] 10.217.0.74:55722 - 8768 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000545856s 2025-08-13T20:08:23.521936363+00:00 stdout F [INFO] 10.217.0.74:35719 - 8130 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000526285s 2025-08-13T20:08:23.547486986+00:00 stdout F [INFO] 10.217.0.74:40315 - 7041 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001810692s 2025-08-13T20:08:23.548531626+00:00 stdout F [INFO] 10.217.0.74:38044 - 53013 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002714748s 2025-08-13T20:08:23.566259754+00:00 stdout F [INFO] 10.217.0.74:43842 - 53315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175518s 2025-08-13T20:08:23.566840831+00:00 stdout F [INFO] 10.217.0.74:39656 - 15795 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001910445s 2025-08-13T20:08:23.604865841+00:00 stdout F [INFO] 10.217.0.74:45080 - 14184 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000976128s 2025-08-13T20:08:23.604865841+00:00 stdout F [INFO] 10.217.0.74:52623 - 24062 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001460812s 2025-08-13T20:08:23.606214639+00:00 stdout F [INFO] 10.217.0.74:34918 - 62766 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070184s 2025-08-13T20:08:23.607222188+00:00 stdout F [INFO] 10.217.0.74:55300 - 45356 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002002047s 2025-08-13T20:08:23.617876724+00:00 stdout F [INFO] 10.217.0.74:33261 - 28747 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000575036s 2025-08-13T20:08:23.618124261+00:00 stdout F [INFO] 10.217.0.74:47962 - 437 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000735731s 2025-08-13T20:08:23.649416848+00:00 stdout F [INFO] 10.217.0.74:46369 - 10987 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00175553s 2025-08-13T20:08:23.649961034+00:00 stdout F [INFO] 10.217.0.74:42929 - 63396 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002341427s 2025-08-13T20:08:23.664390887+00:00 stdout F [INFO] 10.217.0.74:59747 - 35086 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000679749s 2025-08-13T20:08:23.665021106+00:00 stdout F [INFO] 10.217.0.74:47958 - 54146 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000965598s 2025-08-13T20:08:23.665225681+00:00 stdout F [INFO] 10.217.0.74:60738 - 28876 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001199405s 2025-08-13T20:08:23.665372356+00:00 stdout F [INFO] 10.217.0.74:43542 - 36132 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000755901s 2025-08-13T20:08:23.673935971+00:00 stdout F [INFO] 10.217.0.74:49465 - 19319 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069495s 2025-08-13T20:08:23.673935971+00:00 stdout F [INFO] 10.217.0.74:38700 - 37429 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587637s 2025-08-13T20:08:23.711639802+00:00 stdout F [INFO] 10.217.0.74:45354 - 1150 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000710531s 2025-08-13T20:08:23.712135206+00:00 stdout F [INFO] 10.217.0.74:55757 - 61526 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001368139s 2025-08-13T20:08:23.724351257+00:00 stdout F [INFO] 10.217.0.74:43712 - 32007 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000786572s 2025-08-13T20:08:23.725416347+00:00 stdout F [INFO] 10.217.0.74:60368 - 40387 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001848233s 2025-08-13T20:08:23.790917425+00:00 stdout F [INFO] 10.217.0.74:35075 - 58159 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105166s 2025-08-13T20:08:23.790917425+00:00 stdout F [INFO] 10.217.0.74:41707 - 58369 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000997008s 2025-08-13T20:08:23.790977527+00:00 stdout F [INFO] 10.217.0.74:42168 - 54228 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005657392s 2025-08-13T20:08:23.790993397+00:00 stdout F [INFO] 10.217.0.74:57634 - 13866 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005406595s 2025-08-13T20:08:23.861389176+00:00 stdout F [INFO] 10.217.0.74:53505 - 55365 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004804728s 2025-08-13T20:08:23.862470117+00:00 stdout F [INFO] 10.217.0.74:49058 - 22335 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005747865s 2025-08-13T20:08:23.865482123+00:00 stdout F [INFO] 10.217.0.74:44041 - 4083 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003981994s 2025-08-13T20:08:23.865695549+00:00 stdout F [INFO] 10.217.0.74:38165 - 44456 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004146199s 2025-08-13T20:08:23.918755860+00:00 stdout F [INFO] 10.217.0.74:37762 - 27467 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006508776s 2025-08-13T20:08:23.919418649+00:00 stdout F [INFO] 10.217.0.74:38545 - 48889 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006821736s 2025-08-13T20:08:23.930539608+00:00 stdout F [INFO] 10.217.0.74:37549 - 38725 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000463294s 2025-08-13T20:08:23.941953185+00:00 stdout F [INFO] 10.217.0.74:40745 - 59090 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012452197s 2025-08-13T20:08:23.993117352+00:00 stdout F [INFO] 10.217.0.74:45014 - 24003 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002614755s 2025-08-13T20:08:23.993117352+00:00 stdout F [INFO] 10.217.0.74:59484 - 61983 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002975335s 2025-08-13T20:08:24.024252145+00:00 stdout F [INFO] 10.217.0.74:48887 - 8971 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000939367s 2025-08-13T20:08:24.024723789+00:00 stdout F [INFO] 10.217.0.74:42036 - 20136 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001184584s 2025-08-13T20:08:24.062560143+00:00 stdout F [INFO] 10.217.0.74:52891 - 55952 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00069057s 2025-08-13T20:08:24.062560143+00:00 stdout F [INFO] 10.217.0.74:55499 - 34115 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000744161s 2025-08-13T20:08:24.079944652+00:00 stdout F [INFO] 10.217.0.74:49890 - 38847 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767392s 2025-08-13T20:08:24.079944652+00:00 stdout F [INFO] 10.217.0.74:60726 - 27498 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001103431s 2025-08-13T20:08:24.110453807+00:00 stdout F [INFO] 10.217.0.74:39015 - 8181 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000881105s 2025-08-13T20:08:24.110453807+00:00 stdout F [INFO] 10.217.0.74:46491 - 49334 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803433s 2025-08-13T20:08:24.131459409+00:00 stdout F [INFO] 10.217.0.74:60382 - 27381 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000645609s 2025-08-13T20:08:24.134677111+00:00 stdout F [INFO] 10.217.0.74:39985 - 63434 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002143151s 2025-08-13T20:08:24.141847427+00:00 stdout F [INFO] 10.217.0.74:39208 - 986 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000837914s 2025-08-13T20:08:24.141847427+00:00 stdout F [INFO] 10.217.0.74:51325 - 7814 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001170333s 2025-08-13T20:08:24.170697164+00:00 stdout F [INFO] 10.217.0.74:33143 - 6862 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834134s 2025-08-13T20:08:24.170986162+00:00 stdout F [INFO] 10.217.0.74:42045 - 3662 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103698s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:48666 - 48400 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000563326s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:46676 - 6547 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000519965s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:35811 - 22073 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000716421s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:36893 - 38752 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000668669s 2025-08-13T20:08:24.234172204+00:00 stdout F [INFO] 10.217.0.74:59916 - 9133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000994929s 2025-08-13T20:08:24.234172204+00:00 stdout F [INFO] 10.217.0.74:53675 - 62828 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001190384s 2025-08-13T20:08:24.244733557+00:00 stdout F [INFO] 10.217.0.74:45956 - 55947 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104376s 2025-08-13T20:08:24.249329228+00:00 stdout F [INFO] 10.217.0.74:32801 - 25347 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00071749s 2025-08-13T20:08:24.264423121+00:00 stdout F [INFO] 10.217.0.74:38307 - 32343 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000897656s 2025-08-13T20:08:24.264423121+00:00 stdout F [INFO] 10.217.0.74:33335 - 51186 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000887916s 2025-08-13T20:08:24.271386791+00:00 stdout F [INFO] 10.217.0.74:43747 - 31683 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002332827s 2025-08-13T20:08:24.272200764+00:00 stdout F [INFO] 10.217.0.74:50597 - 61774 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000667099s 2025-08-13T20:08:24.291488567+00:00 stdout F [INFO] 10.217.0.74:38457 - 23497 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001091461s 2025-08-13T20:08:24.292299590+00:00 stdout F [INFO] 10.217.0.74:60003 - 46978 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740811s 2025-08-13T20:08:24.300860186+00:00 stdout F [INFO] 10.217.0.74:33575 - 3747 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834954s 2025-08-13T20:08:24.301911626+00:00 stdout F [INFO] 10.217.0.74:49103 - 44491 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587857s 2025-08-13T20:08:24.327883021+00:00 stdout F [INFO] 10.217.0.74:43019 - 60503 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000859995s 2025-08-13T20:08:24.328021395+00:00 stdout F [INFO] 10.217.0.74:44241 - 44160 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001261347s 2025-08-13T20:08:24.345248108+00:00 stdout F [INFO] 10.217.0.74:47062 - 48859 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000619378s 2025-08-13T20:08:24.345599408+00:00 stdout F [INFO] 10.217.0.74:47666 - 58634 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612027s 2025-08-13T20:08:24.366537759+00:00 stdout F [INFO] 10.217.0.74:47016 - 59253 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001480562s 2025-08-13T20:08:24.366744295+00:00 stdout F [INFO] 10.217.0.74:46246 - 30705 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001438961s 2025-08-13T20:08:24.392236476+00:00 stdout F [INFO] 10.217.0.74:60212 - 51027 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008598517s 2025-08-13T20:08:24.392592016+00:00 stdout F [INFO] 10.217.0.74:58066 - 61218 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008901875s 2025-08-13T20:08:24.410356635+00:00 stdout F [INFO] 10.217.0.74:40348 - 18397 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000849115s 2025-08-13T20:08:24.410696465+00:00 stdout F [INFO] 10.217.0.74:55095 - 42434 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001425661s 2025-08-13T20:08:24.426346164+00:00 stdout F [INFO] 10.217.0.74:58429 - 37747 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002262725s 2025-08-13T20:08:24.426429916+00:00 stdout F [INFO] 10.217.0.74:47306 - 45293 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002616975s 2025-08-13T20:08:24.450868817+00:00 stdout F [INFO] 10.217.0.74:55881 - 27796 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000377991s 2025-08-13T20:08:24.451188236+00:00 stdout F [INFO] 10.217.0.74:45482 - 49898 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000913146s 2025-08-13T20:08:24.452458552+00:00 stdout F [INFO] 10.217.0.74:33213 - 47965 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000542426s 2025-08-13T20:08:24.453206924+00:00 stdout F [INFO] 10.217.0.74:46715 - 48650 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000621578s 2025-08-13T20:08:24.508503639+00:00 stdout F [INFO] 10.217.0.74:35011 - 31718 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001217464s 2025-08-13T20:08:24.508854299+00:00 stdout F [INFO] 10.217.0.74:52715 - 20222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768762s 2025-08-13T20:08:24.520193884+00:00 stdout F [INFO] 10.217.0.74:51498 - 7713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000939187s 2025-08-13T20:08:24.520439531+00:00 stdout F [INFO] 10.217.0.74:38612 - 10532 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001119952s 2025-08-13T20:08:24.553056217+00:00 stdout F [INFO] 10.217.0.74:39589 - 34577 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00073006s 2025-08-13T20:08:24.553266833+00:00 stdout F [INFO] 10.217.0.74:50887 - 44553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000965228s 2025-08-13T20:08:24.578471775+00:00 stdout F [INFO] 10.217.0.74:49744 - 44397 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000530205s 2025-08-13T20:08:24.578997010+00:00 stdout F [INFO] 10.217.0.74:60908 - 2189 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731991s 2025-08-13T20:08:24.617388391+00:00 stdout F [INFO] 10.217.0.74:48084 - 23967 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000972908s 2025-08-13T20:08:24.617695760+00:00 stdout F [INFO] 10.217.0.74:36137 - 28536 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000564016s 2025-08-13T20:08:24.618158183+00:00 stdout F [INFO] 10.217.0.74:43087 - 46692 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001262616s 2025-08-13T20:08:24.618211615+00:00 stdout F [INFO] 10.217.0.74:52408 - 20765 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00138048s 2025-08-13T20:08:24.645443035+00:00 stdout F [INFO] 10.217.0.74:38609 - 47142 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001852633s 2025-08-13T20:08:24.646116835+00:00 stdout F [INFO] 10.217.0.74:41031 - 52842 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002800981s 2025-08-13T20:08:24.646466725+00:00 stdout F [INFO] 10.217.0.74:59849 - 60627 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000685259s 2025-08-13T20:08:24.646518566+00:00 stdout F [INFO] 10.217.0.74:35121 - 7797 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626798s 2025-08-13T20:08:24.682149238+00:00 stdout F [INFO] 10.217.0.74:34031 - 10475 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158263s 2025-08-13T20:08:24.682391585+00:00 stdout F [INFO] 10.217.0.74:35405 - 13007 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001918105s 2025-08-13T20:08:24.688375276+00:00 stdout F [INFO] 10.217.0.74:42975 - 3941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000588656s 2025-08-13T20:08:24.688375276+00:00 stdout F [INFO] 10.217.0.74:40270 - 27750 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626067s 2025-08-13T20:08:24.719747866+00:00 stdout F [INFO] 10.217.0.74:54688 - 30674 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002935224s 2025-08-13T20:08:24.719747866+00:00 stdout F [INFO] 10.217.0.74:47077 - 35095 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003633474s 2025-08-13T20:08:24.722207566+00:00 stdout F [INFO] 10.217.0.74:39440 - 61086 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002880883s 2025-08-13T20:08:24.722207566+00:00 stdout F [INFO] 10.217.0.74:47023 - 61827 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003258813s 2025-08-13T20:08:24.755713397+00:00 stdout F [INFO] 10.217.0.74:35434 - 20749 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000781653s 2025-08-13T20:08:24.756109458+00:00 stdout F [INFO] 10.217.0.74:53059 - 9205 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001007379s 2025-08-13T20:08:24.784571244+00:00 stdout F [INFO] 10.217.0.74:36555 - 53245 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000628068s 2025-08-13T20:08:24.785053848+00:00 stdout F [INFO] 10.217.0.74:56648 - 41834 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001459212s 2025-08-13T20:08:24.805676099+00:00 stdout F [INFO] 10.217.0.74:54450 - 39176 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001632277s 2025-08-13T20:08:24.805729691+00:00 stdout F [INFO] 10.217.0.74:53352 - 1136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001506933s 2025-08-13T20:08:24.851120342+00:00 stdout F [INFO] 10.217.0.74:38902 - 48894 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002668936s 2025-08-13T20:08:24.852221004+00:00 stdout F [INFO] 10.217.0.74:43404 - 30730 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003059038s 2025-08-13T20:08:24.873870125+00:00 stdout F [INFO] 10.217.0.74:42313 - 6315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000823914s 2025-08-13T20:08:24.873870125+00:00 stdout F [INFO] 10.217.0.74:44164 - 12605 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000872185s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:42936 - 10687 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103921s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:36100 - 37711 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985678s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:56708 - 58003 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768852s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:55787 - 15515 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001686759s 2025-08-13T20:08:24.929399557+00:00 stdout F [INFO] 10.217.0.74:36348 - 27003 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728081s 2025-08-13T20:08:24.929612103+00:00 stdout F [INFO] 10.217.0.74:58709 - 40669 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001330818s 2025-08-13T20:08:24.930241351+00:00 stdout F [INFO] 10.217.0.74:43717 - 42072 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001431632s 2025-08-13T20:08:24.930980552+00:00 stdout F [INFO] 10.217.0.74:50271 - 30858 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001860083s 2025-08-13T20:08:24.972041189+00:00 stdout F [INFO] 10.217.0.74:36524 - 45953 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004283983s 2025-08-13T20:08:24.972041189+00:00 stdout F [INFO] 10.217.0.74:46433 - 44305 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005543429s 2025-08-13T20:08:24.978928047+00:00 stdout F [INFO] 10.217.0.74:39950 - 50856 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000684569s 2025-08-13T20:08:24.978928047+00:00 stdout F [INFO] 10.217.0.74:51766 - 25395 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000776972s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:34690 - 7086 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003429738s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:43571 - 61951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003356256s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:56828 - 58461 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007452054s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:46054 - 64977 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007036922s 2025-08-13T20:08:25.036035054+00:00 stdout F [INFO] 10.217.0.74:53095 - 17901 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001446761s 2025-08-13T20:08:25.036035054+00:00 stdout F [INFO] 10.217.0.74:57138 - 43105 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001692948s 2025-08-13T20:08:25.052948038+00:00 stdout F [INFO] 10.217.0.74:43926 - 25694 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002316425s 2025-08-13T20:08:25.052997559+00:00 stdout F [INFO] 10.217.0.74:53532 - 45713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001506383s 2025-08-13T20:08:25.096404534+00:00 stdout F [INFO] 10.217.0.74:45404 - 13830 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002596194s 2025-08-13T20:08:25.096613950+00:00 stdout F [INFO] 10.217.0.74:51308 - 24763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000542626s 2025-08-13T20:08:25.111347213+00:00 stdout F [INFO] 10.217.0.74:40585 - 22852 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808484s 2025-08-13T20:08:25.111431875+00:00 stdout F [INFO] 10.217.0.74:52176 - 59269 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000953717s 2025-08-13T20:08:25.142186147+00:00 stdout F [INFO] 10.217.0.74:49925 - 55227 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000873406s 2025-08-13T20:08:25.142406793+00:00 stdout F [INFO] 10.217.0.74:58393 - 32978 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001325018s 2025-08-13T20:08:25.154245632+00:00 stdout F [INFO] 10.217.0.74:33747 - 17097 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000816724s 2025-08-13T20:08:25.154245632+00:00 stdout F [INFO] 10.217.0.74:57980 - 13317 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104587s 2025-08-13T20:08:25.157223058+00:00 stdout F [INFO] 10.217.0.74:52234 - 27248 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000541706s 2025-08-13T20:08:25.157223058+00:00 stdout F [INFO] 10.217.0.74:35217 - 42622 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668129s 2025-08-13T20:08:25.174913265+00:00 stdout F [INFO] 10.217.0.74:54396 - 14624 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000631218s 2025-08-13T20:08:25.174913265+00:00 stdout F [INFO] 10.217.0.74:43944 - 47763 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001561625s 2025-08-13T20:08:25.199658625+00:00 stdout F [INFO] 10.217.0.74:59881 - 6776 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068003s 2025-08-13T20:08:25.199985764+00:00 stdout F [INFO] 10.217.0.74:51326 - 7569 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001062151s 2025-08-13T20:08:25.227633737+00:00 stdout F [INFO] 10.217.0.74:46695 - 35811 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000990998s 2025-08-13T20:08:25.228805970+00:00 stdout F [INFO] 10.217.0.74:36601 - 55298 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000490834s 2025-08-13T20:08:25.239563699+00:00 stdout F [INFO] 10.217.0.74:57169 - 22926 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000628898s 2025-08-13T20:08:25.239563699+00:00 stdout F [INFO] 10.217.0.74:45489 - 37868 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734892s 2025-08-13T20:08:25.295034379+00:00 stdout F [INFO] 10.217.0.74:46388 - 48751 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001222805s 2025-08-13T20:08:25.295085910+00:00 stdout F [INFO] 10.217.0.74:51846 - 12374 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001515264s 2025-08-13T20:08:41.635823402+00:00 stdout F [INFO] 10.217.0.62:34379 - 7559 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003923512s 2025-08-13T20:08:41.636428109+00:00 stdout F [INFO] 10.217.0.62:45312 - 22703 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004136568s 2025-08-13T20:08:42.402994408+00:00 stdout F [INFO] 10.217.0.19:40701 - 2139 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003220132s 2025-08-13T20:08:42.403236254+00:00 stdout F [INFO] 10.217.0.19:39684 - 2175 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003694346s 2025-08-13T20:08:56.374659197+00:00 stdout F [INFO] 10.217.0.64:34472 - 48135 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003421968s 2025-08-13T20:08:56.374745359+00:00 stdout F [INFO] 10.217.0.64:54476 - 13919 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002192603s 2025-08-13T20:08:56.374840782+00:00 stdout F [INFO] 10.217.0.64:40326 - 5569 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004057756s 2025-08-13T20:08:56.375154361+00:00 stdout F [INFO] 10.217.0.64:49383 - 27450 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004855109s 2025-08-13T20:09:02.378239024+00:00 stdout F [INFO] 10.217.0.45:43397 - 49512 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. 
udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002536683s 2025-08-13T20:09:02.378239024+00:00 stdout F [INFO] 10.217.0.45:56823 - 15158 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002466611s 2025-08-13T20:09:11.725896608+00:00 stdout F [INFO] 10.217.0.62:34550 - 12666 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006674761s 2025-08-13T20:09:11.725896608+00:00 stdout F [INFO] 10.217.0.62:39907 - 38685 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00801045s 2025-08-13T20:09:22.862762824+00:00 stdout F [INFO] 10.217.0.8:39156 - 25361 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002846862s 2025-08-13T20:09:22.862953529+00:00 stdout F [INFO] 10.217.0.8:53927 - 1632 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002753809s 2025-08-13T20:09:22.865524273+00:00 stdout F [INFO] 10.217.0.8:54729 - 65332 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001623787s 2025-08-13T20:09:22.865728939+00:00 stdout F [INFO] 10.217.0.8:49636 - 52910 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00175812s 2025-08-13T20:09:26.078490881+00:00 stdout F [INFO] 10.217.0.19:48977 - 50456 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001002199s 2025-08-13T20:09:26.078490881+00:00 stdout F [INFO] 10.217.0.19:59578 - 61462 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106775s 2025-08-13T20:09:33.484850278+00:00 stdout F [INFO] 10.217.0.62:42372 - 190 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139979s 2025-08-13T20:09:33.485393903+00:00 stdout F [INFO] 10.217.0.62:34167 - 57309 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00244067s 2025-08-13T20:09:33.523096954+00:00 stdout F [INFO] 10.217.0.62:49737 - 32251 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000720071s 2025-08-13T20:09:33.523096954+00:00 stdout F [INFO] 10.217.0.62:40671 - 52652 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000871775s 2025-08-13T20:09:35.187928017+00:00 stdout F [INFO] 10.217.0.19:33301 - 21434 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001188724s 2025-08-13T20:09:35.188043610+00:00 stdout F [INFO] 10.217.0.19:52934 - 11233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001575785s 2025-08-13T20:09:38.306477737+00:00 stdout F [INFO] 10.217.0.62:40952 - 6986 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001327978s 2025-08-13T20:09:38.306477737+00:00 stdout F [INFO] 10.217.0.62:45564 - 26341 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002726588s 2025-08-13T20:09:41.638121368+00:00 stdout F [INFO] 10.217.0.62:56173 - 24407 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002718738s 2025-08-13T20:09:41.638416066+00:00 stdout F [INFO] 10.217.0.62:47549 - 44536 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003351996s 2025-08-13T20:09:42.030655352+00:00 stdout F [INFO] 10.217.0.19:56636 - 61643 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001563925s 2025-08-13T20:09:42.030851607+00:00 stdout F [INFO] 10.217.0.19:51975 - 50744 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002233574s 2025-08-13T20:09:42.183194045+00:00 stdout F [INFO] 10.217.0.19:33628 - 49975 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003283644s 2025-08-13T20:09:42.183194045+00:00 stdout F [INFO] 10.217.0.19:45670 - 3055 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003702877s 2025-08-13T20:09:42.529990608+00:00 stdout F [INFO] 10.217.0.62:48401 - 656 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071058s 2025-08-13T20:09:42.529990608+00:00 stdout F [INFO] 10.217.0.62:58670 - 49481 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000742051s 2025-08-13T20:09:45.431538738+00:00 stdout F [INFO] 10.217.0.62:43098 - 58085 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00107588s 2025-08-13T20:09:45.431538738+00:00 stdout F [INFO] 10.217.0.62:59851 - 55795 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000526385s 2025-08-13T20:09:54.586441237+00:00 stdout F [INFO] 10.217.0.19:47711 - 21321 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001843932s 2025-08-13T20:09:54.586441237+00:00 stdout F [INFO] 10.217.0.19:44898 - 45604 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002623275s 2025-08-13T20:09:56.371181378+00:00 stdout F [INFO] 10.217.0.64:41542 - 41108 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001686828s 2025-08-13T20:09:56.371181378+00:00 stdout F [INFO] 10.217.0.64:48576 - 20336 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00139462s 2025-08-13T20:09:56.372300050+00:00 stdout F [INFO] 10.217.0.64:38485 - 9093 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000542365s 2025-08-13T20:09:56.374142783+00:00 stdout F [INFO] 10.217.0.64:46788 - 13302 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001550825s 2025-08-13T20:09:57.900691400+00:00 stdout F [INFO] 10.217.0.19:53614 - 57161 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001425781s 2025-08-13T20:09:57.901094752+00:00 stdout F [INFO] 10.217.0.19:53845 - 1062 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000942357s 2025-08-13T20:09:57.983843465+00:00 stdout F [INFO] 10.217.0.19:39361 - 11239 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002185313s 2025-08-13T20:09:57.983843465+00:00 stdout F [INFO] 10.217.0.19:41001 - 62298 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002361678s 2025-08-13T20:10:01.056405577+00:00 stdout F [INFO] 10.217.0.19:39388 - 44237 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002664086s 2025-08-13T20:10:01.056405577+00:00 stdout F [INFO] 10.217.0.19:33615 - 53295 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002689067s 2025-08-13T20:10:01.079333115+00:00 stdout F [INFO] 10.217.0.19:50682 - 20587 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000795232s 2025-08-13T20:10:01.081832366+00:00 stdout F [INFO] 10.217.0.19:44383 - 40450 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000861545s 2025-08-13T20:10:02.444873055+00:00 stdout F [INFO] 10.217.0.45:44013 - 11251 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000983468s 2025-08-13T20:10:02.453875573+00:00 stdout F [INFO] 10.217.0.45:37306 - 39606 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001917425s 2025-08-13T20:10:05.186876261+00:00 stdout F [INFO] 10.217.0.19:58118 - 11638 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002586544s 2025-08-13T20:10:05.187021395+00:00 stdout F [INFO] 10.217.0.19:38219 - 21719 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002535682s 2025-08-13T20:10:08.345010648+00:00 stdout F [INFO] 10.217.0.19:51265 - 1330 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002312606s 2025-08-13T20:10:08.345010648+00:00 stdout F [INFO] 10.217.0.19:59348 - 19091 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002631336s 2025-08-13T20:10:08.355087687+00:00 stdout F [INFO] 10.217.0.19:42792 - 61879 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002574304s 2025-08-13T20:10:08.355139488+00:00 stdout F [INFO] 10.217.0.19:39369 - 9267 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002482411s 2025-08-13T20:10:08.411554935+00:00 stdout F [INFO] 10.217.0.19:35601 - 58707 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001258786s 2025-08-13T20:10:08.413167012+00:00 stdout F [INFO] 10.217.0.19:56524 - 65241 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002158461s 2025-08-13T20:10:08.548968705+00:00 stdout F [INFO] 10.217.0.19:45966 - 19689 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001272617s 2025-08-13T20:10:08.548968705+00:00 stdout F [INFO] 10.217.0.19:48885 - 11127 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001335808s 2025-08-13T20:10:11.628730835+00:00 stdout F [INFO] 10.217.0.62:36584 - 46375 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001654868s 2025-08-13T20:10:11.633039398+00:00 stdout F [INFO] 10.217.0.62:41877 - 29105 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000915227s 2025-08-13T20:10:22.867019153+00:00 stdout F [INFO] 10.217.0.8:49904 - 64741 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003766368s 2025-08-13T20:10:22.867019153+00:00 stdout F [INFO] 10.217.0.8:41201 - 45742 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004181499s 2025-08-13T20:10:22.867959440+00:00 stdout F [INFO] 10.217.0.8:35597 - 48884 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001018489s 2025-08-13T20:10:22.868719412+00:00 stdout F [INFO] 10.217.0.8:47410 - 20918 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000770882s 2025-08-13T20:10:30.808858992+00:00 stdout F [INFO] 10.217.0.73:39897 - 44453 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001530163s 2025-08-13T20:10:30.808951815+00:00 stdout F [INFO] 10.217.0.73:50070 - 51054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00174087s 2025-08-13T20:10:33.362905250+00:00 stdout F [INFO] 10.217.0.19:32790 - 5866 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004795368s 2025-08-13T20:10:33.362905250+00:00 stdout F [INFO] 10.217.0.19:49534 - 57961 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004693464s 2025-08-13T20:10:33.373874594+00:00 stdout F [INFO] 10.217.0.19:37942 - 8212 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000595517s 2025-08-13T20:10:33.374417290+00:00 stdout F [INFO] 10.217.0.19:56916 - 12238 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000661719s 2025-08-13T20:10:35.199277209+00:00 stdout F [INFO] 10.217.0.19:41870 - 23972 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001476932s 2025-08-13T20:10:35.199277209+00:00 stdout F [INFO] 10.217.0.19:58212 - 53423 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001832952s 2025-08-13T20:10:35.697241647+00:00 stdout F [INFO] 10.217.0.19:33071 - 20323 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00069523s 2025-08-13T20:10:35.697303468+00:00 stdout F [INFO] 10.217.0.19:42008 - 33508 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000793093s 2025-08-13T20:10:41.622326063+00:00 stdout F [INFO] 10.217.0.62:34619 - 105 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001195944s 2025-08-13T20:10:41.622417725+00:00 stdout F [INFO] 10.217.0.62:43729 - 23355 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001137973s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:35428 - 7758 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002594974s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:53534 - 35151 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003622834s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:60247 - 46067 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003174031s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:49932 - 39403 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003596073s 2025-08-13T20:11:02.520033866+00:00 stdout F [INFO] 10.217.0.45:39664 - 38219 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001183744s 2025-08-13T20:11:02.520033866+00:00 stdout F [INFO] 10.217.0.45:60311 - 9110 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001162163s 2025-08-13T20:11:03.399531671+00:00 stdout F [INFO] 10.217.0.87:37356 - 34781 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001512653s 2025-08-13T20:11:03.399531671+00:00 stdout F [INFO] 10.217.0.87:54183 - 38414 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002224574s 2025-08-13T20:11:03.399949703+00:00 stdout F [INFO] 10.217.0.87:35523 - 20655 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000555516s 2025-08-13T20:11:03.399949703+00:00 stdout F [INFO] 10.217.0.87:49559 - 28694 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001214345s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:35061 - 433 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000936687s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:40965 - 48513 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000667779s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:54542 - 34394 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000680099s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:38160 - 26586 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000648358s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:41763 - 22360 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001318338s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:47208 - 63365 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000723751s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:53434 - 45248 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070464s 2025-08-13T20:11:03.649033824+00:00 stdout F [INFO] 10.217.0.87:53713 - 18434 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002473161s 2025-08-13T20:11:03.718848076+00:00 stdout F [INFO] 10.217.0.87:41691 - 15051 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002671346s 2025-08-13T20:11:03.718848076+00:00 stdout F [INFO] 10.217.0.87:46056 - 9861 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103799s 2025-08-13T20:11:03.722611354+00:00 stdout F [INFO] 10.217.0.87:39086 - 34308 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000556906s 2025-08-13T20:11:03.722611354+00:00 stdout F [INFO] 10.217.0.87:42891 - 6546 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000504144s 2025-08-13T20:11:03.767221563+00:00 stdout F [INFO] 10.217.0.87:58197 - 30208 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00385624s 2025-08-13T20:11:03.767221563+00:00 stdout F [INFO] 10.217.0.87:39728 - 49827 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003767278s 2025-08-13T20:11:03.817666909+00:00 stdout F [INFO] 10.217.0.87:35172 - 55283 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001101361s 2025-08-13T20:11:03.817995749+00:00 stdout F [INFO] 10.217.0.87:39261 - 6732 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000712641s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53119 - 44401 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001572155s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:56071 - 13263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001206034s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53447 - 33479 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001892754s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53190 - 29850 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00175118s 2025-08-13T20:11:03.847942587+00:00 stdout F [INFO] 10.217.0.87:56508 - 38072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004455958s 2025-08-13T20:11:03.848072911+00:00 stdout F [INFO] 10.217.0.87:33109 - 11968 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004842229s 2025-08-13T20:11:03.897437386+00:00 stdout F [INFO] 10.217.0.87:53430 - 55335 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000998288s 2025-08-13T20:11:03.897437386+00:00 stdout F [INFO] 10.217.0.87:32838 - 14045 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001481643s 2025-08-13T20:11:03.900510014+00:00 stdout F [INFO] 10.217.0.87:43972 - 23286 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000949927s 2025-08-13T20:11:03.900756701+00:00 stdout F [INFO] 10.217.0.87:49838 - 38792 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001019979s 2025-08-13T20:11:03.902765709+00:00 stdout F [INFO] 10.217.0.87:47774 - 42440 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000679239s 2025-08-13T20:11:03.904858479+00:00 stdout F [INFO] 10.217.0.87:57291 - 56285 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001835753s 2025-08-13T20:11:03.913620140+00:00 stdout F [INFO] 10.217.0.87:46856 - 11308 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000987408s 2025-08-13T20:11:03.914091244+00:00 stdout F [INFO] 10.217.0.87:55527 - 2267 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001479192s 2025-08-13T20:11:03.954577195+00:00 stdout F [INFO] 10.217.0.87:51392 - 21341 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000807283s 2025-08-13T20:11:03.954937355+00:00 stdout F [INFO] 10.217.0.87:41890 - 15684 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001436721s 2025-08-13T20:11:03.977215704+00:00 stdout F [INFO] 10.217.0.87:47581 - 1434 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104783s 2025-08-13T20:11:03.977509832+00:00 stdout F [INFO] 10.217.0.87:54607 - 53936 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001454602s 2025-08-13T20:11:04.023222913+00:00 stdout F [INFO] 10.217.0.87:54496 - 16868 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003156581s 2025-08-13T20:11:04.023686356+00:00 stdout F [INFO] 10.217.0.87:42267 - 51763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003255144s 2025-08-13T20:11:04.039212591+00:00 stdout F [INFO] 10.217.0.87:53493 - 33994 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001088152s 2025-08-13T20:11:04.041175287+00:00 stdout F [INFO] 10.217.0.87:40575 - 33957 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003272184s 2025-08-13T20:11:04.041577319+00:00 stdout F [INFO] 10.217.0.87:50647 - 26691 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010365s 2025-08-13T20:11:04.045511112+00:00 stdout F [INFO] 10.217.0.87:54103 - 16592 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000857164s 2025-08-13T20:11:04.102489046+00:00 stdout F [INFO] 10.217.0.87:55724 - 48220 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006211418s 2025-08-13T20:11:04.133961708+00:00 stdout F [INFO] 10.217.0.87:43240 - 60487 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001790991s 2025-08-13T20:11:04.142111361+00:00 stdout F [INFO] 10.217.0.87:41388 - 46250 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003411068s 2025-08-13T20:11:04.142305827+00:00 stdout F [INFO] 10.217.0.87:36215 - 4218 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003739277s 2025-08-13T20:11:04.202104702+00:00 stdout F [INFO] 10.217.0.87:55842 - 26479 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001143423s 2025-08-13T20:11:04.202351719+00:00 stdout F [INFO] 10.217.0.87:34872 - 6296 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000963698s 2025-08-13T20:11:04.209565155+00:00 stdout F [INFO] 10.217.0.87:36576 - 31736 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000777092s 2025-08-13T20:11:04.209661408+00:00 stdout F [INFO] 10.217.0.87:52223 - 46340 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000739101s 2025-08-13T20:11:04.267950309+00:00 stdout F [INFO] 10.217.0.87:47717 - 18265 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001877574s 2025-08-13T20:11:04.269479953+00:00 stdout F [INFO] 10.217.0.87:49071 - 3191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001551425s 2025-08-13T20:11:04.271193682+00:00 stdout F [INFO] 10.217.0.87:39213 - 49156 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000561976s 2025-08-13T20:11:04.271193682+00:00 stdout F [INFO] 10.217.0.87:45159 - 61063 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000781293s 2025-08-13T20:11:04.334709023+00:00 stdout F [INFO] 10.217.0.87:42608 - 45702 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000631568s 2025-08-13T20:11:04.336961018+00:00 stdout F [INFO] 10.217.0.87:40086 - 23695 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000568236s 2025-08-13T20:11:04.337177644+00:00 stdout F [INFO] 10.217.0.87:44964 - 114 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000677759s 2025-08-13T20:11:04.337395390+00:00 stdout F [INFO] 10.217.0.87:46041 - 32750 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003591873s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:42635 - 50007 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002949285s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:41785 - 28793 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003545352s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:47944 - 7692 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000543566s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:37949 - 25447 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000756501s 2025-08-13T20:11:04.463665041+00:00 stdout F [INFO] 10.217.0.87:36079 - 39964 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001139033s 2025-08-13T20:11:04.463665041+00:00 stdout F [INFO] 10.217.0.87:43169 - 50861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001230846s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:48399 - 60449 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002498091s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:37499 - 27113 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001824773s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:58069 - 60634 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004967292s 2025-08-13T20:11:04.538092615+00:00 stdout F [INFO] 10.217.0.87:49277 - 25371 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007577988s 2025-08-13T20:11:04.569854585+00:00 stdout F [INFO] 10.217.0.87:43743 - 36368 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001295347s 2025-08-13T20:11:04.569854585+00:00 stdout F [INFO] 10.217.0.87:39846 - 32417 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105676s 2025-08-13T20:11:04.618958633+00:00 stdout F [INFO] 10.217.0.87:50590 - 58487 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005965221s 2025-08-13T20:11:04.621902648+00:00 stdout F [INFO] 10.217.0.87:33159 - 18725 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006726893s 2025-08-13T20:11:04.653088392+00:00 stdout F [INFO] 10.217.0.87:40765 - 34294 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001484582s 2025-08-13T20:11:04.653088392+00:00 stdout F [INFO] 10.217.0.87:44177 - 23695 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001702179s 2025-08-13T20:11:04.715885942+00:00 stdout F [INFO] 10.217.0.87:33043 - 12052 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002513822s 2025-08-13T20:11:04.723084329+00:00 stdout F [INFO] 10.217.0.87:35338 - 1652 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006784525s 2025-08-13T20:11:04.805091720+00:00 stdout F [INFO] 10.217.0.87:52674 - 55206 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006223669s 2025-08-13T20:11:04.805091720+00:00 stdout F [INFO] 10.217.0.87:52033 - 60656 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006853436s 2025-08-13T20:11:04.816179148+00:00 stdout F [INFO] 10.217.0.87:50848 - 46737 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001064251s 2025-08-13T20:11:04.816179148+00:00 stdout F [INFO] 10.217.0.87:42103 - 9497 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001151093s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:47340 - 39540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003902982s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:39624 - 35803 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004711905s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:52299 - 14108 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006719073s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:51626 - 3473 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006789615s 2025-08-13T20:11:04.954439932+00:00 stdout F [INFO] 10.217.0.87:52193 - 7609 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888555s 2025-08-13T20:11:04.954660798+00:00 stdout F [INFO] 10.217.0.87:38807 - 33179 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000561866s 2025-08-13T20:11:04.954746981+00:00 stdout F [INFO] 10.217.0.87:51130 - 30582 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000945637s 2025-08-13T20:11:04.954990948+00:00 stdout F [INFO] 10.217.0.87:35982 - 7711 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001363909s 2025-08-13T20:11:05.014950167+00:00 stdout F [INFO] 10.217.0.87:42495 - 23659 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000950157s 2025-08-13T20:11:05.014950167+00:00 stdout F [INFO] 10.217.0.87:56385 - 22005 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001003799s 2025-08-13T20:11:05.067956076+00:00 stdout F [INFO] 10.217.0.87:47869 - 28411 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158573s 2025-08-13T20:11:05.067956076+00:00 stdout F [INFO] 10.217.0.87:53901 - 44918 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000643019s 2025-08-13T20:11:05.072057984+00:00 stdout F [INFO] 10.217.0.87:49967 - 54232 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001286327s 2025-08-13T20:11:05.072114736+00:00 stdout F [INFO] 10.217.0.87:37700 - 58806 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102417s 2025-08-13T20:11:05.102279941+00:00 stdout F [INFO] 10.217.0.87:46754 - 38555 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001617416s 2025-08-13T20:11:05.116212460+00:00 stdout F [INFO] 10.217.0.87:51552 - 34798 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.022058323s 2025-08-13T20:11:05.145679675+00:00 stdout F [INFO] 10.217.0.87:36269 - 19193 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000509534s 2025-08-13T20:11:05.151134801+00:00 stdout F [INFO] 10.217.0.87:45946 - 16784 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007139375s 2025-08-13T20:11:05.155084265+00:00 stdout F [INFO] 10.217.0.87:54068 - 64440 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002909163s 2025-08-13T20:11:05.160362146+00:00 stdout F [INFO] 10.217.0.87:52370 - 7710 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006949079s 2025-08-13T20:11:05.204469531+00:00 stdout F [INFO] 10.217.0.87:46241 - 8297 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069313s 2025-08-13T20:11:05.208651580+00:00 stdout F [INFO] 10.217.0.87:40189 - 37269 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619368s 2025-08-13T20:11:05.241635746+00:00 stdout F [INFO] 10.217.0.87:44567 - 60474 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006494957s 2025-08-13T20:11:05.241752309+00:00 stdout F [INFO] 10.217.0.87:38105 - 27874 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006842646s 2025-08-13T20:11:05.249504242+00:00 stdout F [INFO] 10.217.0.87:57155 - 58400 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005020243s 2025-08-13T20:11:05.250721467+00:00 stdout F [INFO] 10.217.0.87:52971 - 8666 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005474167s 2025-08-13T20:11:05.258077158+00:00 stdout F [INFO] 10.217.0.19:40122 - 47771 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005301442s 2025-08-13T20:11:05.259867269+00:00 stdout F [INFO] 10.217.0.19:52923 - 1992 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001806382s 2025-08-13T20:11:05.290980531+00:00 stdout F [INFO] 10.217.0.87:51112 - 26133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001604326s 2025-08-13T20:11:05.290980531+00:00 stdout F [INFO] 10.217.0.87:57064 - 21762 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004293173s 2025-08-13T20:11:05.302120030+00:00 stdout F [INFO] 10.217.0.87:40821 - 41355 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000800522s 2025-08-13T20:11:05.302379718+00:00 stdout F [INFO] 10.217.0.87:56713 - 19604 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001137903s 2025-08-13T20:11:05.313414544+00:00 stdout F [INFO] 10.217.0.87:36400 - 26073 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000772012s 2025-08-13T20:11:05.317306166+00:00 stdout F [INFO] 10.217.0.87:53031 - 42574 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003964054s 2025-08-13T20:11:05.321047143+00:00 stdout F [INFO] 10.217.0.87:53956 - 17610 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068611s 2025-08-13T20:11:05.321278060+00:00 stdout F [INFO] 10.217.0.87:46140 - 44894 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000664749s 2025-08-13T20:11:05.342043325+00:00 stdout F [INFO] 10.217.0.87:35847 - 16647 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000609347s 2025-08-13T20:11:05.342043325+00:00 stdout F [INFO] 10.217.0.87:54364 - 5238 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000893065s 2025-08-13T20:11:05.362751859+00:00 stdout F [INFO] 10.217.0.87:37115 - 46250 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000650209s 2025-08-13T20:11:05.362751859+00:00 stdout F [INFO] 10.217.0.87:49240 - 12686 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000519375s 2025-08-13T20:11:05.367891566+00:00 stdout F [INFO] 10.217.0.87:34732 - 2819 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000436073s 2025-08-13T20:11:05.367969348+00:00 stdout F [INFO] 10.217.0.87:33759 - 26935 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000470533s 2025-08-13T20:11:05.397690750+00:00 stdout F [INFO] 10.217.0.87:35998 - 34083 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003644675s 2025-08-13T20:11:05.397754462+00:00 stdout F [INFO] 10.217.0.87:52131 - 3546 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003547612s 2025-08-13T20:11:05.420550816+00:00 stdout F [INFO] 10.217.0.87:39735 - 12055 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000621878s 2025-08-13T20:11:05.420843084+00:00 stdout F [INFO] 10.217.0.87:42570 - 32800 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000528725s 2025-08-13T20:11:05.457708621+00:00 stdout F [INFO] 10.217.0.87:48256 - 42070 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001333268s 2025-08-13T20:11:05.457865856+00:00 stdout F [INFO] 10.217.0.87:59243 - 325 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001696519s 2025-08-13T20:11:05.459695468+00:00 stdout F [INFO] 10.217.0.87:36953 - 24753 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000388781s 2025-08-13T20:11:05.459695468+00:00 stdout F [INFO] 10.217.0.87:50391 - 5271 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000510425s 2025-08-13T20:11:05.476240483+00:00 stdout F [INFO] 10.217.0.87:53411 - 28017 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000549455s 2025-08-13T20:11:05.477273462+00:00 stdout F [INFO] 10.217.0.87:54565 - 53660 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001512104s 2025-08-13T20:11:05.517877896+00:00 stdout F [INFO] 10.217.0.87:36552 - 9276 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000594707s 2025-08-13T20:11:05.517877896+00:00 stdout F [INFO] 10.217.0.87:57974 - 5702 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000671679s 2025-08-13T20:11:05.518295358+00:00 stdout F [INFO] 10.217.0.87:35507 - 62258 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000621747s 2025-08-13T20:11:05.518966027+00:00 stdout F [INFO] 10.217.0.87:48489 - 51566 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001242945s 2025-08-13T20:11:05.532273779+00:00 stdout F [INFO] 10.217.0.87:54360 - 8769 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000570296s 2025-08-13T20:11:05.532620469+00:00 stdout F [INFO] 10.217.0.87:45071 - 2076 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000563326s 2025-08-13T20:11:05.572070690+00:00 stdout F [INFO] 10.217.0.87:35327 - 48220 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001152263s 2025-08-13T20:11:05.573180142+00:00 stdout F [INFO] 10.217.0.87:40319 - 52828 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888025s 2025-08-13T20:11:05.573360557+00:00 stdout F [INFO] 10.217.0.87:57234 - 30752 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001159563s 2025-08-13T20:11:05.574248242+00:00 stdout F [INFO] 10.217.0.87:50779 - 25703 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000996218s 2025-08-13T20:11:05.636950840+00:00 stdout F [INFO] 10.217.0.87:38613 - 10822 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001529404s 2025-08-13T20:11:05.636950840+00:00 stdout F [INFO] 10.217.0.87:34827 - 31320 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001443362s 2025-08-13T20:11:05.695036296+00:00 stdout F [INFO] 10.217.0.87:50823 - 21243 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000660819s 2025-08-13T20:11:05.695036296+00:00 stdout F [INFO] 10.217.0.87:34187 - 38762 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649678s 2025-08-13T20:11:05.695768016+00:00 stdout F [INFO] 10.217.0.87:48186 - 4485 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000949247s 2025-08-13T20:11:05.695871989+00:00 stdout F [INFO] 10.217.0.87:55448 - 36001 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001257476s 2025-08-13T20:11:05.727963740+00:00 stdout F [INFO] 10.217.0.87:45140 - 62363 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001007239s 2025-08-13T20:11:05.727963740+00:00 stdout F [INFO] 10.217.0.87:41301 - 4913 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000911526s 2025-08-13T20:11:05.737425181+00:00 stdout F [INFO] 10.217.0.87:54940 - 54838 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068333s 2025-08-13T20:11:05.737695999+00:00 stdout F [INFO] 10.217.0.87:47179 - 4957 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000484444s 2025-08-13T20:11:05.750248559+00:00 stdout F [INFO] 10.217.0.87:38521 - 7178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070606s 2025-08-13T20:11:05.750464235+00:00 stdout F [INFO] 10.217.0.87:40720 - 32764 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742241s 2025-08-13T20:11:05.758315660+00:00 stdout F [INFO] 10.217.0.87:53784 - 14976 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000741112s 2025-08-13T20:11:05.759021180+00:00 stdout F [INFO] 10.217.0.87:47766 - 50912 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000766572s 2025-08-13T20:11:05.787698922+00:00 stdout F [INFO] 10.217.0.87:46050 - 38144 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000734861s 2025-08-13T20:11:05.787846096+00:00 stdout F [INFO] 10.217.0.87:49776 - 41453 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069027s 2025-08-13T20:11:05.808445967+00:00 stdout F [INFO] 10.217.0.87:34882 - 53698 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775223s 2025-08-13T20:11:05.810042633+00:00 stdout F [INFO] 10.217.0.87:57934 - 36471 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613088s 2025-08-13T20:11:05.815279783+00:00 stdout F [INFO] 10.217.0.87:59022 - 44298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000583917s 2025-08-13T20:11:05.815389526+00:00 stdout F [INFO] 10.217.0.87:50553 - 13065 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000446953s 2025-08-13T20:11:05.847885058+00:00 stdout F [INFO] 10.217.0.87:50104 - 3863 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000556416s 2025-08-13T20:11:05.849309909+00:00 stdout F [INFO] 10.217.0.87:42086 - 17959 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001748971s 2025-08-13T20:11:05.850025639+00:00 stdout F [INFO] 10.217.0.87:46475 - 670 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000601117s 2025-08-13T20:11:05.850271846+00:00 stdout F [INFO] 10.217.0.87:54185 - 52777 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000961198s 2025-08-13T20:11:05.864866695+00:00 stdout F [INFO] 10.217.0.87:35066 - 21339 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000673089s 2025-08-13T20:11:05.864866695+00:00 stdout F [INFO] 10.217.0.87:52115 - 23043 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000745862s 2025-08-13T20:11:05.905121909+00:00 stdout F [INFO] 10.217.0.87:41027 - 45325 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649229s 2025-08-13T20:11:05.905199141+00:00 stdout F [INFO] 10.217.0.87:35153 - 34789 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000867205s 2025-08-13T20:11:05.905703236+00:00 stdout F [INFO] 10.217.0.87:48675 - 18007 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000562456s 2025-08-13T20:11:05.905742957+00:00 stdout F [INFO] 10.217.0.87:43674 - 13853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000559216s 2025-08-13T20:11:05.921436857+00:00 stdout F [INFO] 10.217.0.87:48274 - 1386 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000690469s 2025-08-13T20:11:05.921436857+00:00 stdout F [INFO] 10.217.0.87:42202 - 38824 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000865555s 2025-08-13T20:11:05.946203447+00:00 stdout F [INFO] 10.217.0.87:56106 - 42543 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072583s 2025-08-13T20:11:05.946437223+00:00 stdout F [INFO] 10.217.0.87:56970 - 21694 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012729s 2025-08-13T20:11:05.961452604+00:00 stdout F [INFO] 10.217.0.87:55002 - 42104 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834214s 2025-08-13T20:11:05.962054481+00:00 stdout F [INFO] 10.217.0.87:57408 - 45063 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001483442s 2025-08-13T20:11:05.978225745+00:00 stdout F [INFO] 10.217.0.87:54164 - 2186 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000564496s 2025-08-13T20:11:05.978225745+00:00 stdout F [INFO] 10.217.0.87:49778 - 4297 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001432481s 2025-08-13T20:11:06.001327877+00:00 stdout F [INFO] 10.217.0.87:53683 - 16082 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000602247s 2025-08-13T20:11:06.001327877+00:00 stdout F [INFO] 10.217.0.87:50274 - 56137 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070625s 2025-08-13T20:11:06.024428690+00:00 stdout F [INFO] 10.217.0.87:59104 - 10557 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000574946s 2025-08-13T20:11:06.024482571+00:00 stdout F [INFO] 10.217.0.87:57333 - 36257 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000501454s 2025-08-13T20:11:06.042144708+00:00 stdout F [INFO] 10.217.0.87:39448 - 51107 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729891s 2025-08-13T20:11:06.042350963+00:00 stdout F [INFO] 10.217.0.87:53358 - 62191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761701s 2025-08-13T20:11:06.064882039+00:00 stdout F [INFO] 10.217.0.87:46028 - 30827 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000779163s 2025-08-13T20:11:06.064965342+00:00 stdout F [INFO] 10.217.0.87:34523 - 33195 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976068s 2025-08-13T20:11:06.079122668+00:00 stdout F [INFO] 10.217.0.87:47662 - 3356 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976048s 2025-08-13T20:11:06.079490318+00:00 stdout F [INFO] 10.217.0.87:39599 - 47911 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001112542s 2025-08-13T20:11:06.081466575+00:00 stdout F [INFO] 10.217.0.87:56701 - 58527 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001175663s 2025-08-13T20:11:06.081526277+00:00 stdout F [INFO] 10.217.0.87:46845 - 61569 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001124562s 2025-08-13T20:11:06.094482548+00:00 stdout F [INFO] 10.217.0.87:44174 - 39781 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000914246s 2025-08-13T20:11:06.094591531+00:00 stdout F [INFO] 10.217.0.87:57763 - 55061 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00136419s 2025-08-13T20:11:06.117101526+00:00 stdout F [INFO] 10.217.0.87:35335 - 509 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558666s 2025-08-13T20:11:06.117376293+00:00 stdout F [INFO] 10.217.0.87:45399 - 59288 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826444s 2025-08-13T20:11:06.134357430+00:00 stdout F [INFO] 10.217.0.87:57107 - 7410 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000519265s 2025-08-13T20:11:06.134609068+00:00 stdout F [INFO] 10.217.0.87:38962 - 36757 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000547605s 2025-08-13T20:11:06.136617685+00:00 stdout F [INFO] 10.217.0.87:40779 - 41501 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000559076s 2025-08-13T20:11:06.136842652+00:00 stdout F [INFO] 10.217.0.87:53109 - 41446 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000586077s 2025-08-13T20:11:06.153357345+00:00 stdout F [INFO] 10.217.0.87:53791 - 54818 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000739891s 2025-08-13T20:11:06.153455868+00:00 stdout F [INFO] 10.217.0.87:40529 - 22388 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000568216s 2025-08-13T20:11:06.174253844+00:00 stdout F [INFO] 10.217.0.87:33434 - 53255 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000811123s 2025-08-13T20:11:06.174311886+00:00 stdout F [INFO] 10.217.0.87:41504 - 17167 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000981359s 2025-08-13T20:11:06.191676854+00:00 stdout F [INFO] 10.217.0.87:44850 - 25924 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000839094s 2025-08-13T20:11:06.191676854+00:00 stdout F [INFO] 10.217.0.87:35096 - 8796 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000593937s 2025-08-13T20:11:06.194354941+00:00 stdout F [INFO] 10.217.0.87:42299 - 56354 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000612367s 2025-08-13T20:11:06.195107692+00:00 stdout F [INFO] 10.217.0.87:60823 - 24639 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000432882s 2025-08-13T20:11:06.222853908+00:00 stdout F [INFO] 10.217.0.87:58939 - 34814 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000455973s 2025-08-13T20:11:06.223381183+00:00 stdout F [INFO] 10.217.0.87:41083 - 28150 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000423563s 2025-08-13T20:11:06.229876849+00:00 stdout F [INFO] 10.217.0.87:50140 - 37233 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00036622s 2025-08-13T20:11:06.229876849+00:00 stdout F [INFO] 10.217.0.87:44139 - 19006 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000462453s 2025-08-13T20:11:06.235723447+00:00 stdout F [INFO] 10.217.0.87:37000 - 48562 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615558s 2025-08-13T20:11:06.236182840+00:00 stdout F [INFO] 10.217.0.87:37401 - 26048 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000878205s 2025-08-13T20:11:06.248820622+00:00 stdout F [INFO] 10.217.0.87:53517 - 14644 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000478294s 2025-08-13T20:11:06.249197003+00:00 stdout F [INFO] 10.217.0.87:46966 - 49070 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00102005s 2025-08-13T20:11:06.252328663+00:00 stdout F [INFO] 10.217.0.87:49959 - 54517 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936917s 2025-08-13T20:11:06.252872748+00:00 stdout F [INFO] 10.217.0.87:34122 - 64368 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001336928s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:54879 - 31349 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757111s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:36976 - 4227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072202s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:50486 - 30028 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467073s 2025-08-13T20:11:06.290157227+00:00 stdout F [INFO] 10.217.0.87:44581 - 20684 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000646279s 2025-08-13T20:11:06.306865616+00:00 stdout F [INFO] 10.217.0.87:52436 - 41201 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000389121s 2025-08-13T20:11:06.307338690+00:00 stdout F [INFO] 10.217.0.87:46131 - 60825 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000489754s 2025-08-13T20:11:06.347124001+00:00 stdout F [INFO] 10.217.0.87:50795 - 29243 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000588857s 2025-08-13T20:11:06.347124001+00:00 stdout F [INFO] 10.217.0.87:40418 - 64190 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070063s 2025-08-13T20:11:06.354559374+00:00 stdout F [INFO] 10.217.0.87:38571 - 1815 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000470153s 2025-08-13T20:11:06.355035317+00:00 stdout F [INFO] 10.217.0.87:33273 - 16168 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579847s 2025-08-13T20:11:06.400201202+00:00 stdout F [INFO] 10.217.0.87:60599 - 15192 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002863203s 2025-08-13T20:11:06.400386088+00:00 stdout F [INFO] 10.217.0.87:39072 - 30678 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002858582s 2025-08-13T20:11:06.412956878+00:00 stdout F [INFO] 10.217.0.87:55153 - 53749 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003419738s 2025-08-13T20:11:06.412956878+00:00 stdout F [INFO] 10.217.0.87:39601 - 28661 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003660195s 2025-08-13T20:11:06.454535910+00:00 stdout F [INFO] 10.217.0.87:35573 - 1487 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000883155s 2025-08-13T20:11:06.454573761+00:00 stdout F [INFO] 10.217.0.87:53850 - 41953 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000804833s 2025-08-13T20:11:06.463589630+00:00 stdout F [INFO] 10.217.0.87:34383 - 55906 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000546776s 2025-08-13T20:11:06.463877388+00:00 stdout F [INFO] 10.217.0.87:50571 - 33343 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808043s 2025-08-13T20:11:06.491525171+00:00 stdout F [INFO] 10.217.0.87:41911 - 60063 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001192454s 2025-08-13T20:11:06.492985652+00:00 stdout F [INFO] 10.217.0.87:45782 - 22682 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002186713s 2025-08-13T20:11:06.509816265+00:00 stdout F [INFO] 10.217.0.87:52306 - 22289 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000733441s 2025-08-13T20:11:06.509855916+00:00 stdout F [INFO] 10.217.0.87:35555 - 28182 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000762071s 2025-08-13T20:11:06.533077352+00:00 stdout F [INFO] 10.217.0.87:43457 - 64202 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824794s 2025-08-13T20:11:06.534133062+00:00 stdout F [INFO] 10.217.0.87:50391 - 2706 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000806523s 2025-08-13T20:11:06.552254362+00:00 stdout F [INFO] 10.217.0.87:41758 - 54760 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000894436s 2025-08-13T20:11:06.552254362+00:00 stdout F [INFO] 10.217.0.87:41418 - 37950 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102568s 2025-08-13T20:11:06.565714768+00:00 stdout F [INFO] 10.217.0.87:53002 - 34946 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000669639s 2025-08-13T20:11:06.565991156+00:00 stdout F [INFO] 10.217.0.87:52134 - 13366 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000942697s 2025-08-13T20:11:06.592532397+00:00 stdout F [INFO] 10.217.0.87:41702 - 16932 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000916666s 2025-08-13T20:11:06.592709062+00:00 stdout F [INFO] 10.217.0.87:33361 - 18852 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009309s 2025-08-13T20:11:06.608624028+00:00 stdout F [INFO] 10.217.0.87:37399 - 31636 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000787072s 2025-08-13T20:11:06.609027200+00:00 stdout F [INFO] 10.217.0.87:38988 - 32905 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001131022s 2025-08-13T20:11:06.650274692+00:00 stdout F [INFO] 10.217.0.87:56662 - 29308 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001180094s 2025-08-13T20:11:06.651669022+00:00 stdout F [INFO] 10.217.0.87:41415 - 62225 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001204265s 2025-08-13T20:11:06.665180180+00:00 stdout F [INFO] 10.217.0.87:35263 - 58718 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000958997s 2025-08-13T20:11:06.665440387+00:00 stdout F [INFO] 10.217.0.87:34409 - 3458 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000758072s 2025-08-13T20:11:06.665900150+00:00 stdout F [INFO] 10.217.0.87:38774 - 44257 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069317s 2025-08-13T20:11:06.666261191+00:00 stdout F [INFO] 10.217.0.87:51417 - 37318 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000584717s 2025-08-13T20:11:06.710675574+00:00 stdout F [INFO] 10.217.0.87:58364 - 51070 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012859s 2025-08-13T20:11:06.710869830+00:00 stdout F [INFO] 10.217.0.87:33657 - 35571 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009739s 2025-08-13T20:11:06.719730754+00:00 stdout F [INFO] 10.217.0.87:56472 - 42237 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000597037s 2025-08-13T20:11:06.720049443+00:00 stdout F [INFO] 10.217.0.87:34285 - 51553 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070137s 2025-08-13T20:11:06.725270743+00:00 stdout F [INFO] 10.217.0.87:56793 - 19375 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000649548s 2025-08-13T20:11:06.725506199+00:00 stdout F [INFO] 10.217.0.87:37335 - 60642 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00069964s 2025-08-13T20:11:06.743459024+00:00 stdout F [INFO] 10.217.0.87:49928 - 38274 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000782013s 2025-08-13T20:11:06.743459024+00:00 stdout F [INFO] 10.217.0.87:38836 - 29306 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000717341s 2025-08-13T20:11:06.768242695+00:00 stdout F [INFO] 10.217.0.87:60924 - 22492 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001305128s 2025-08-13T20:11:06.768311347+00:00 stdout F [INFO] 10.217.0.87:33026 - 58340 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001566935s 2025-08-13T20:11:06.774824223+00:00 stdout F [INFO] 10.217.0.87:35023 - 36636 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000527365s 2025-08-13T20:11:06.775069740+00:00 stdout F [INFO] 10.217.0.87:36959 - 13740 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000538765s 2025-08-13T20:11:06.804987058+00:00 stdout F [INFO] 10.217.0.87:52927 - 15321 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000732581s 2025-08-13T20:11:06.805176204+00:00 stdout F [INFO] 10.217.0.87:41921 - 3811 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001665357s 2025-08-13T20:11:06.824849498+00:00 stdout F [INFO] 10.217.0.87:56546 - 11258 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651899s 2025-08-13T20:11:06.825061844+00:00 stdout F [INFO] 10.217.0.87:53868 - 34815 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001021429s 2025-08-13T20:11:06.843694408+00:00 stdout F [INFO] 10.217.0.87:39912 - 28227 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000592707s 2025-08-13T20:11:06.844727207+00:00 stdout F [INFO] 10.217.0.87:43094 - 49549 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000830364s 2025-08-13T20:11:06.880210985+00:00 stdout F [INFO] 10.217.0.87:39540 - 28116 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000741151s 2025-08-13T20:11:06.880420891+00:00 stdout F [INFO] 10.217.0.87:37296 - 5530 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000985599s 2025-08-13T20:11:06.898124619+00:00 stdout F [INFO] 10.217.0.87:34894 - 32594 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001575145s 2025-08-13T20:11:06.898483399+00:00 stdout F [INFO] 10.217.0.87:52980 - 13091 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001652387s 2025-08-13T20:11:06.929318283+00:00 stdout F [INFO] 10.217.0.87:59919 - 19170 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769912s 2025-08-13T20:11:06.929360744+00:00 stdout F [INFO] 10.217.0.87:47630 - 55246 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000912626s 2025-08-13T20:11:06.937565909+00:00 stdout F [INFO] 10.217.0.87:40289 - 56438 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001221625s 2025-08-13T20:11:06.937699333+00:00 stdout F [INFO] 10.217.0.87:52474 - 24238 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001223485s 2025-08-13T20:11:06.939627758+00:00 stdout F [INFO] 10.217.0.87:36682 - 2516 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001295677s 2025-08-13T20:11:06.939627758+00:00 stdout F [INFO] 10.217.0.87:56748 - 10632 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001002569s 2025-08-13T20:11:06.986652897+00:00 stdout F [INFO] 10.217.0.87:49396 - 42637 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002637565s 2025-08-13T20:11:06.986720629+00:00 stdout F [INFO] 10.217.0.87:41630 - 25830 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002414649s 2025-08-13T20:11:06.997326363+00:00 stdout F [INFO] 10.217.0.87:51569 - 44524 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104834s 2025-08-13T20:11:06.997622911+00:00 stdout F [INFO] 10.217.0.87:36794 - 18852 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001446492s 2025-08-13T20:11:07.044982969+00:00 stdout F [INFO] 10.217.0.87:51837 - 36888 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936357s 2025-08-13T20:11:07.045427022+00:00 stdout F [INFO] 10.217.0.87:58976 - 21162 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001307418s 2025-08-13T20:11:07.045548015+00:00 stdout F [INFO] 10.217.0.87:36974 - 32460 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000771272s 2025-08-13T20:11:07.047007987+00:00 stdout F [INFO] 10.217.0.87:58415 - 60280 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002137671s 2025-08-13T20:11:07.105296868+00:00 stdout F [INFO] 10.217.0.87:60538 - 54742 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001012649s 2025-08-13T20:11:07.105296868+00:00 stdout F [INFO] 10.217.0.87:58811 - 17080 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0017433s 2025-08-13T20:11:07.105495824+00:00 stdout F [INFO] 10.217.0.87:51688 - 12168 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002047849s 2025-08-13T20:11:07.105664929+00:00 stdout F [INFO] 10.217.0.87:50539 - 42293 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002032268s 2025-08-13T20:11:07.163007373+00:00 stdout F [INFO] 10.217.0.87:58079 - 39541 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000982978s 2025-08-13T20:11:07.163007373+00:00 stdout F [INFO] 10.217.0.87:37302 - 55370 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001032149s 2025-08-13T20:11:07.170320102+00:00 stdout F [INFO] 10.217.0.87:49364 - 65517 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000833624s 2025-08-13T20:11:07.171272730+00:00 stdout F [INFO] 10.217.0.87:36435 - 40868 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000906396s 2025-08-13T20:11:07.171989020+00:00 stdout F [INFO] 10.217.0.87:59848 - 32310 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000610828s 2025-08-13T20:11:07.172141005+00:00 stdout F [INFO] 10.217.0.87:55904 - 37116 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000517145s 2025-08-13T20:11:07.207731465+00:00 stdout F [INFO] 10.217.0.87:43437 - 61119 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728381s 2025-08-13T20:11:07.207861279+00:00 stdout F [INFO] 10.217.0.87:57484 - 35925 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000743692s 2025-08-13T20:11:07.221266323+00:00 stdout F [INFO] 10.217.0.87:43164 - 16768 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000541336s 2025-08-13T20:11:07.221266323+00:00 stdout F [INFO] 10.217.0.87:60482 - 24325 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000885805s 2025-08-13T20:11:07.225198466+00:00 stdout F [INFO] 10.217.0.87:49620 - 5559 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587827s 2025-08-13T20:11:07.225319079+00:00 stdout F [INFO] 10.217.0.87:48813 - 53088 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000534395s 2025-08-13T20:11:07.257577124+00:00 stdout F [INFO] 10.217.0.87:40760 - 5643 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000937127s 2025-08-13T20:11:07.257826671+00:00 stdout F [INFO] 10.217.0.87:41535 - 4835 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001141493s 2025-08-13T20:11:07.262181946+00:00 stdout F [INFO] 10.217.0.87:49409 - 29932 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072507s 2025-08-13T20:11:07.262240898+00:00 stdout F [INFO] 10.217.0.87:33200 - 42103 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000580437s 2025-08-13T20:11:07.276715173+00:00 stdout F [INFO] 10.217.0.87:49404 - 50720 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000742431s 2025-08-13T20:11:07.276846737+00:00 stdout F [INFO] 10.217.0.87:54369 - 4775 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000498485s 2025-08-13T20:11:07.280807520+00:00 stdout F [INFO] 10.217.0.87:53914 - 14210 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000432693s 2025-08-13T20:11:07.281426688+00:00 stdout F [INFO] 10.217.0.87:51940 - 43696 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888956s 2025-08-13T20:11:07.317145732+00:00 stdout F [INFO] 10.217.0.87:59791 - 56988 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000923027s 2025-08-13T20:11:07.317197224+00:00 stdout F [INFO] 10.217.0.87:43385 - 12255 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00072117s 2025-08-13T20:11:07.334514660+00:00 stdout F [INFO] 10.217.0.87:34980 - 10777 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070021s 2025-08-13T20:11:07.334691555+00:00 stdout F [INFO] 10.217.0.87:55757 - 15662 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000538395s 2025-08-13T20:11:07.376549415+00:00 stdout F [INFO] 10.217.0.87:53403 - 11578 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070054s 2025-08-13T20:11:07.376549415+00:00 stdout F [INFO] 10.217.0.87:35946 - 17798 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001085221s 2025-08-13T20:11:07.389303621+00:00 stdout F [INFO] 10.217.0.87:44091 - 57874 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000605428s 2025-08-13T20:11:07.389703323+00:00 stdout F [INFO] 10.217.0.87:53756 - 50686 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000758902s 2025-08-13T20:11:07.417015286+00:00 stdout F [INFO] 10.217.0.87:40347 - 17567 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001583275s 2025-08-13T20:11:07.417194241+00:00 stdout F [INFO] 10.217.0.87:33768 - 59369 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001353908s 2025-08-13T20:11:07.443644379+00:00 stdout F [INFO] 10.217.0.87:50810 - 56645 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776422s 2025-08-13T20:11:07.443644379+00:00 stdout F [INFO] 10.217.0.87:46506 - 36903 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000660439s 2025-08-13T20:11:07.472205178+00:00 stdout F [INFO] 10.217.0.87:47203 - 54622 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000751011s 2025-08-13T20:11:07.472205178+00:00 stdout F [INFO] 10.217.0.87:57837 - 6935 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000608628s 2025-08-13T20:11:07.481557286+00:00 stdout F [INFO] 10.217.0.87:34574 - 39418 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000713751s 2025-08-13T20:11:07.482115062+00:00 stdout F [INFO] 10.217.0.87:60348 - 34743 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001230536s 2025-08-13T20:11:07.507484500+00:00 stdout F [INFO] 10.217.0.87:34603 - 8271 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000671269s 2025-08-13T20:11:07.507538051+00:00 stdout F [INFO] 10.217.0.87:46661 - 53710 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000798843s 2025-08-13T20:11:07.527233716+00:00 stdout F [INFO] 10.217.0.87:50539 - 40214 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000919156s 2025-08-13T20:11:07.527354589+00:00 stdout F [INFO] 10.217.0.87:58467 - 3852 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000786952s 2025-08-13T20:11:07.535472752+00:00 stdout F [INFO] 10.217.0.87:34042 - 43843 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000737821s 2025-08-13T20:11:07.535559044+00:00 stdout F [INFO] 10.217.0.87:35568 - 3264 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000796383s 2025-08-13T20:11:07.557271757+00:00 stdout F [INFO] 10.217.0.87:38703 - 62460 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070702s 2025-08-13T20:11:07.557654668+00:00 stdout F [INFO] 10.217.0.87:54204 - 9243 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000859315s 2025-08-13T20:11:07.571607918+00:00 stdout F [INFO] 10.217.0.87:51349 - 43568 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000394932s 2025-08-13T20:11:07.571909177+00:00 stdout F [INFO] 10.217.0.87:55665 - 10478 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000890045s 2025-08-13T20:11:07.579506694+00:00 stdout F [INFO] 10.217.0.87:41713 - 44982 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000639949s 2025-08-13T20:11:07.579822964+00:00 stdout F [INFO] 10.217.0.87:35599 - 30094 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000641579s 2025-08-13T20:11:07.611660656+00:00 stdout F [INFO] 10.217.0.87:52989 - 57443 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000811524s 2025-08-13T20:11:07.612091619+00:00 stdout F [INFO] 10.217.0.87:56925 - 44043 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000982728s 2025-08-13T20:11:07.630198318+00:00 stdout F [INFO] 10.217.0.87:59685 - 54799 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001186674s 2025-08-13T20:11:07.630585489+00:00 stdout F [INFO] 10.217.0.87:41588 - 19165 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001908224s 2025-08-13T20:11:07.638215148+00:00 stdout F [INFO] 10.217.0.87:60419 - 15905 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870815s 2025-08-13T20:11:07.638310660+00:00 stdout F [INFO] 10.217.0.87:39051 - 25853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001563095s 2025-08-13T20:11:07.668658981+00:00 stdout F [INFO] 10.217.0.87:36510 - 56531 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753982s 2025-08-13T20:11:07.668910118+00:00 stdout F [INFO] 10.217.0.87:45382 - 41023 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000604917s 2025-08-13T20:11:07.688061667+00:00 stdout F [INFO] 10.217.0.87:43511 - 46046 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000736041s 2025-08-13T20:11:07.688430487+00:00 stdout F [INFO] 10.217.0.87:55325 - 56240 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001117712s 2025-08-13T20:11:07.693539064+00:00 stdout F [INFO] 10.217.0.87:57667 - 32897 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000927677s 2025-08-13T20:11:07.693603626+00:00 stdout F [INFO] 10.217.0.87:53696 - 30464 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103904s 2025-08-13T20:11:07.716821652+00:00 stdout F [INFO] 10.217.0.87:47346 - 60672 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000607528s 2025-08-13T20:11:07.717144831+00:00 stdout F [INFO] 10.217.0.87:57924 - 33305 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000790483s 2025-08-13T20:11:07.743893558+00:00 stdout F [INFO] 10.217.0.87:56868 - 30677 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000767852s 2025-08-13T20:11:07.744194106+00:00 stdout F [INFO] 10.217.0.87:49842 - 57859 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000859045s 2025-08-13T20:11:07.772413855+00:00 stdout F [INFO] 10.217.0.87:33450 - 52319 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000833204s 2025-08-13T20:11:07.772864258+00:00 stdout F [INFO] 10.217.0.87:49356 - 2562 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001155553s 2025-08-13T20:11:07.799166922+00:00 stdout F [INFO] 10.217.0.87:53352 - 29022 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00142271s 2025-08-13T20:11:07.799415080+00:00 stdout F [INFO] 10.217.0.87:49746 - 13386 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001722979s 2025-08-13T20:11:07.826411524+00:00 stdout F [INFO] 10.217.0.87:35786 - 37507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000635989s 2025-08-13T20:11:07.826551608+00:00 stdout F [INFO] 10.217.0.87:40944 - 25651 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000536725s 2025-08-13T20:11:07.866043630+00:00 stdout F [INFO] 10.217.0.87:46701 - 20031 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803634s 2025-08-13T20:11:07.866305297+00:00 stdout F [INFO] 10.217.0.87:49754 - 37423 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000920356s 2025-08-13T20:11:07.881445101+00:00 stdout F [INFO] 10.217.0.87:41833 - 64957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000903816s 2025-08-13T20:11:07.881964906+00:00 stdout F [INFO] 10.217.0.87:43415 - 47315 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001354449s 2025-08-13T20:11:07.882467471+00:00 stdout F [INFO] 10.217.0.87:48205 - 62842 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731091s 2025-08-13T20:11:07.882601685+00:00 stdout F [INFO] 10.217.0.87:48027 - 3372 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105356s 2025-08-13T20:11:07.905826531+00:00 stdout F [INFO] 10.217.0.87:58159 - 46154 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000625888s 2025-08-13T20:11:07.906098518+00:00 stdout F [INFO] 10.217.0.87:36665 - 53602 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000843184s 2025-08-13T20:11:07.921844300+00:00 stdout F [INFO] 10.217.0.87:52715 - 35017 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000484183s 2025-08-13T20:11:07.922081147+00:00 stdout F [INFO] 10.217.0.87:52898 - 49650 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000426353s 2025-08-13T20:11:07.938112126+00:00 stdout F [INFO] 10.217.0.87:49711 - 41337 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000938537s 2025-08-13T20:11:07.938154007+00:00 stdout F [INFO] 10.217.0.87:44817 - 58022 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001096921s 2025-08-13T20:11:07.962470985+00:00 stdout F [INFO] 10.217.0.87:41285 - 13116 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000762342s 2025-08-13T20:11:07.962525396+00:00 stdout F [INFO] 10.217.0.87:38417 - 4434 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000501594s 2025-08-13T20:11:07.981162011+00:00 stdout F [INFO] 10.217.0.87:37682 - 11071 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072595s 2025-08-13T20:11:07.981193221+00:00 stdout F [INFO] 10.217.0.87:45937 - 140 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000706161s 2025-08-13T20:11:07.999602619+00:00 stdout F [INFO] 10.217.0.87:48786 - 24055 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000580597s 2025-08-13T20:11:07.999602619+00:00 stdout F [INFO] 10.217.0.87:39752 - 12452 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000929997s 2025-08-13T20:11:08.064061297+00:00 stdout F [INFO] 10.217.0.87:49777 - 44088 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000721461s 2025-08-13T20:11:08.064688655+00:00 stdout F [INFO] 10.217.0.87:58214 - 49133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001578405s 2025-08-13T20:11:08.081518438+00:00 stdout F [INFO] 10.217.0.87:36948 - 4802 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000605337s 2025-08-13T20:11:08.081581300+00:00 stdout F [INFO] 10.217.0.87:41574 - 52127 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793012s 2025-08-13T20:11:08.119315391+00:00 stdout F [INFO] 10.217.0.87:34539 - 59046 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001407601s 2025-08-13T20:11:08.119385703+00:00 stdout F [INFO] 10.217.0.87:53056 - 45122 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001035959s 2025-08-13T20:11:08.140376275+00:00 stdout F [INFO] 10.217.0.87:32830 - 23502 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000724451s 2025-08-13T20:11:08.141998682+00:00 stdout F [INFO] 10.217.0.87:41187 - 7567 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001931556s 2025-08-13T20:11:08.146236623+00:00 stdout F [INFO] 10.217.0.87:45883 - 45200 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000991418s 2025-08-13T20:11:08.146587473+00:00 stdout F [INFO] 10.217.0.87:59245 - 9995 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001093662s 2025-08-13T20:11:08.176333046+00:00 stdout F [INFO] 10.217.0.87:38631 - 5845 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000641959s 2025-08-13T20:11:08.176854491+00:00 stdout F [INFO] 10.217.0.87:45038 - 42993 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985579s 2025-08-13T20:11:08.195841936+00:00 stdout F [INFO] 10.217.0.87:35908 - 62593 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00107632s 2025-08-13T20:11:08.195879627+00:00 stdout F [INFO] 10.217.0.87:57842 - 25290 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001344349s 2025-08-13T20:11:08.201090476+00:00 stdout F [INFO] 10.217.0.87:40521 - 43457 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070197s 2025-08-13T20:11:08.201833717+00:00 stdout F [INFO] 10.217.0.87:53603 - 40731 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001195764s 2025-08-13T20:11:08.236267675+00:00 stdout F [INFO] 10.217.0.87:54377 - 33154 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000904966s 2025-08-13T20:11:08.236475721+00:00 stdout F [INFO] 10.217.0.87:49212 - 45423 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000958377s 2025-08-13T20:11:08.251370648+00:00 stdout F [INFO] 10.217.0.87:55160 - 29098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000675209s 2025-08-13T20:11:08.251516222+00:00 stdout F [INFO] 10.217.0.87:42220 - 5777 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000705991s 2025-08-13T20:11:08.259369497+00:00 stdout F [INFO] 10.217.0.87:53420 - 37723 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000562176s 2025-08-13T20:11:08.259485550+00:00 stdout F [INFO] 10.217.0.87:37634 - 18652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000766012s 2025-08-13T20:11:08.306407546+00:00 stdout F [INFO] 10.217.0.87:58080 - 12192 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000620858s 2025-08-13T20:11:08.307266460+00:00 stdout F [INFO] 10.217.0.87:41950 - 42971 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001173934s 2025-08-13T20:11:08.317512554+00:00 stdout F [INFO] 10.217.0.87:41163 - 25281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000689609s 2025-08-13T20:11:08.317614577+00:00 stdout F [INFO] 10.217.0.87:43491 - 18647 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000532295s 2025-08-13T20:11:08.322396894+00:00 stdout F [INFO] 10.217.0.87:35754 - 20409 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000502435s 2025-08-13T20:11:08.322396894+00:00 stdout F [INFO] 10.217.0.87:52198 - 36446 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000585887s 2025-08-13T20:11:08.339933417+00:00 stdout F [INFO] 10.217.0.87:50506 - 34942 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000850635s 2025-08-13T20:11:08.340078971+00:00 stdout F [INFO] 10.217.0.87:41082 - 58001 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001259356s 2025-08-13T20:11:08.347099922+00:00 stdout F [INFO] 10.217.0.87:48202 - 24673 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000706341s 2025-08-13T20:11:08.347345749+00:00 stdout F [INFO] 10.217.0.87:55112 - 2851 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000876806s 2025-08-13T20:11:08.363877823+00:00 stdout F [INFO] 10.217.0.87:45181 - 18776 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068396s 2025-08-13T20:11:08.364125350+00:00 stdout F [INFO] 10.217.0.87:33510 - 43360 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000972978s 2025-08-13T20:11:08.376663840+00:00 stdout F [INFO] 10.217.0.87:54478 - 11527 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000708951s 2025-08-13T20:11:08.376987139+00:00 stdout F [INFO] 10.217.0.87:48980 - 35262 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651708s 2025-08-13T20:11:08.384834434+00:00 stdout F [INFO] 10.217.0.87:35665 - 2681 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000518975s 2025-08-13T20:11:08.384966958+00:00 stdout F [INFO] 10.217.0.87:33876 - 4161 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000749152s 2025-08-13T20:11:08.395677425+00:00 stdout F [INFO] 10.217.0.87:57212 - 64901 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000905326s 2025-08-13T20:11:08.395890481+00:00 stdout F [INFO] 10.217.0.87:45257 - 8753 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069839s 2025-08-13T20:11:08.404554680+00:00 stdout F [INFO] 10.217.0.87:59219 - 40644 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001885915s 2025-08-13T20:11:08.404554680+00:00 stdout F [INFO] 10.217.0.87:50503 - 20139 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001935436s 2025-08-13T20:11:08.423018619+00:00 stdout F [INFO] 10.217.0.87:60190 - 12866 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001440531s 2025-08-13T20:11:08.423496763+00:00 stdout F [INFO] 10.217.0.87:55511 - 27204 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00209878s 2025-08-13T20:11:08.432903562+00:00 stdout F [INFO] 10.217.0.87:35410 - 46456 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507545s 2025-08-13T20:11:08.432903562+00:00 stdout F [INFO] 10.217.0.87:58066 - 60152 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000623148s 2025-08-13T20:11:08.459096243+00:00 stdout F [INFO] 10.217.0.87:55275 - 54993 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000979079s 2025-08-13T20:11:08.459194356+00:00 stdout F [INFO] 10.217.0.87:33489 - 1806 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000961338s 2025-08-13T20:11:08.487465687+00:00 stdout F [INFO] 10.217.0.87:56155 - 40098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728021s 2025-08-13T20:11:08.487522958+00:00 stdout F [INFO] 10.217.0.87:34779 - 64998 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000902886s 2025-08-13T20:11:08.499420449+00:00 stdout F [INFO] 10.217.0.87:53081 - 28211 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000496094s 2025-08-13T20:11:08.499476821+00:00 stdout F [INFO] 10.217.0.87:44623 - 61713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000791513s 2025-08-13T20:11:08.517257761+00:00 stdout F [INFO] 10.217.0.87:60530 - 2512 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005661673s 2025-08-13T20:11:08.517432846+00:00 stdout F [INFO] 10.217.0.87:46936 - 26510 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005884159s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:46216 - 38326 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000781543s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:55829 - 44853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001203765s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:36518 - 59457 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001556755s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:38180 - 64319 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001057991s 2025-08-13T20:11:08.555623421+00:00 stdout F [INFO] 10.217.0.87:55842 - 205 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000616398s 2025-08-13T20:11:08.555623421+00:00 stdout F [INFO] 10.217.0.87:45745 - 7394 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000842004s 2025-08-13T20:11:08.573364259+00:00 stdout F [INFO] 10.217.0.87:57068 - 35192 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001069521s 2025-08-13T20:11:08.573364259+00:00 stdout F [INFO] 10.217.0.87:49760 - 49044 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000945517s 2025-08-13T20:11:08.579015571+00:00 stdout F [INFO] 10.217.0.87:43119 - 21963 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000549946s 2025-08-13T20:11:08.579708371+00:00 stdout F [INFO] 10.217.0.87:60286 - 28624 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000616738s 2025-08-13T20:11:08.599127888+00:00 stdout F [INFO] 10.217.0.87:42287 - 8556 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001494493s 2025-08-13T20:11:08.599256052+00:00 stdout F [INFO] 10.217.0.87:38544 - 61819 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001566324s 2025-08-13T20:11:08.599857979+00:00 stdout F [INFO] 10.217.0.87:54830 - 40587 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000758882s 2025-08-13T20:11:08.600256170+00:00 stdout F [INFO] 10.217.0.87:45174 - 390 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001139543s 2025-08-13T20:11:08.627299166+00:00 stdout F [INFO] 10.217.0.87:52597 - 28415 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000736661s 2025-08-13T20:11:08.627299166+00:00 stdout F [INFO] 10.217.0.87:57052 - 4137 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000971697s 2025-08-13T20:11:08.632964848+00:00 stdout F [INFO] 10.217.0.87:39800 - 50300 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000512485s 2025-08-13T20:11:08.633108582+00:00 stdout F [INFO] 10.217.0.87:58174 - 39741 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000584247s 2025-08-13T20:11:08.655018860+00:00 stdout F [INFO] 10.217.0.87:47248 - 53195 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000666499s 2025-08-13T20:11:08.655018860+00:00 stdout F [INFO] 10.217.0.87:60705 - 40238 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729491s 2025-08-13T20:11:08.661694522+00:00 stdout F [INFO] 10.217.0.87:53642 - 49494 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000573287s 2025-08-13T20:11:08.661694522+00:00 stdout F [INFO] 10.217.0.87:36968 - 25355 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000502334s 2025-08-13T20:11:08.687659976+00:00 stdout F [INFO] 10.217.0.87:48084 - 29807 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000730811s 2025-08-13T20:11:08.687659976+00:00 stdout F [INFO] 10.217.0.87:53674 - 5248 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000805873s 2025-08-13T20:11:08.711454949+00:00 stdout F [INFO] 10.217.0.87:34393 - 12701 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000739862s 2025-08-13T20:11:08.712082747+00:00 stdout F [INFO] 10.217.0.87:42170 - 56756 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000760271s 2025-08-13T20:11:08.741562942+00:00 stdout F [INFO] 10.217.0.87:47188 - 22454 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000750161s 2025-08-13T20:11:08.741692766+00:00 stdout F [INFO] 10.217.0.87:60309 - 59151 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001182084s 2025-08-13T20:11:08.745069502+00:00 stdout F [INFO] 10.217.0.87:45344 - 61474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668139s 2025-08-13T20:11:08.745069502+00:00 stdout F [INFO] 10.217.0.87:58293 - 48213 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105222s 2025-08-13T20:11:08.766544188+00:00 stdout F [INFO] 10.217.0.87:40412 - 29040 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000454903s 2025-08-13T20:11:08.766693232+00:00 stdout F [INFO] 10.217.0.87:59933 - 26530 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000411332s 2025-08-13T20:11:08.797008712+00:00 stdout F [INFO] 10.217.0.87:45446 - 13925 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613957s 2025-08-13T20:11:08.797008712+00:00 stdout F [INFO] 10.217.0.87:57036 - 43938 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000506345s 2025-08-13T20:11:08.820598308+00:00 stdout F [INFO] 10.217.0.87:45608 - 64359 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000469703s 2025-08-13T20:11:08.820818864+00:00 stdout F [INFO] 10.217.0.87:39399 - 5121 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000514944s 2025-08-13T20:11:08.849384853+00:00 stdout F [INFO] 10.217.0.87:40247 - 24315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068251s 2025-08-13T20:11:08.850164516+00:00 stdout F [INFO] 10.217.0.87:59475 - 26937 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001337518s 2025-08-13T20:11:08.874556005+00:00 stdout F [INFO] 10.217.0.87:57108 - 6458 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000702661s 2025-08-13T20:11:08.874619757+00:00 stdout F [INFO] 10.217.0.87:44884 - 13914 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001088391s 2025-08-13T20:11:08.888978959+00:00 stdout F [INFO] 10.217.0.87:53148 - 29687 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000759401s 2025-08-13T20:11:08.889120743+00:00 stdout F [INFO] 10.217.0.87:38152 - 61846 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000890716s 2025-08-13T20:11:08.905832352+00:00 stdout F [INFO] 10.217.0.87:33541 - 15399 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000869564s 2025-08-13T20:11:08.906256264+00:00 stdout F [INFO] 10.217.0.87:36658 - 33176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001183774s 2025-08-13T20:11:08.942624797+00:00 stdout F [INFO] 10.217.0.87:60473 - 46967 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000935207s 2025-08-13T20:11:08.942624797+00:00 stdout F [INFO] 10.217.0.87:42459 - 50629 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000915966s 2025-08-13T20:11:08.956306379+00:00 stdout F [INFO] 10.217.0.87:39704 - 64412 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069933s 2025-08-13T20:11:08.956390161+00:00 stdout F [INFO] 10.217.0.87:35286 - 54664 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657089s 2025-08-13T20:11:09.009847054+00:00 stdout F [INFO] 10.217.0.87:54051 - 28377 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000598137s 2025-08-13T20:11:09.010077731+00:00 stdout F [INFO] 10.217.0.87:37021 - 5506 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000633918s 2025-08-13T20:11:09.064643675+00:00 stdout F [INFO] 10.217.0.87:48804 - 14694 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769092s 2025-08-13T20:11:09.064643675+00:00 stdout F [INFO] 10.217.0.87:40276 - 41515 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000978868s 2025-08-13T20:11:09.075868827+00:00 stdout F [INFO] 10.217.0.87:52252 - 12463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000681859s 2025-08-13T20:11:09.076070703+00:00 stdout F [INFO] 10.217.0.87:40170 - 19513 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001002759s 2025-08-13T20:11:09.115422351+00:00 stdout F [INFO] 10.217.0.87:37118 - 64783 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000903006s 2025-08-13T20:11:09.115540514+00:00 stdout F [INFO] 10.217.0.87:55116 - 52020 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845964s 2025-08-13T20:11:09.123530044+00:00 stdout F [INFO] 10.217.0.87:46018 - 56449 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654459s 2025-08-13T20:11:09.124037828+00:00 stdout F [INFO] 10.217.0.87:34184 - 43139 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000538175s 2025-08-13T20:11:09.130141053+00:00 stdout F [INFO] 10.217.0.87:56344 - 19 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817974s 2025-08-13T20:11:09.130393150+00:00 stdout F [INFO] 10.217.0.87:46527 - 2906 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000835174s 2025-08-13T20:11:09.165076025+00:00 stdout F [INFO] 10.217.0.87:45989 - 3354 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000988799s 2025-08-13T20:11:09.165162427+00:00 stdout F [INFO] 10.217.0.87:40822 - 23382 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001084201s 2025-08-13T20:11:09.177453200+00:00 stdout F [INFO] 10.217.0.87:56347 - 34656 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000718601s 2025-08-13T20:11:09.177554863+00:00 stdout F [INFO] 10.217.0.87:45640 - 4199 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768752s 2025-08-13T20:11:09.185731777+00:00 stdout F [INFO] 10.217.0.87:36847 - 50417 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000633828s 2025-08-13T20:11:09.185731777+00:00 stdout F [INFO] 10.217.0.87:52308 - 60129 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776212s 2025-08-13T20:11:09.221102691+00:00 stdout F [INFO] 10.217.0.87:47405 - 29150 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558266s 2025-08-13T20:11:09.221102691+00:00 stdout F [INFO] 10.217.0.87:45271 - 42783 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761961s 2025-08-13T20:11:09.231398136+00:00 stdout F [INFO] 10.217.0.87:33150 - 48647 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000595407s 2025-08-13T20:11:09.231444548+00:00 stdout F [INFO] 10.217.0.87:55990 - 64284 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000492205s 2025-08-13T20:11:09.241229768+00:00 stdout F [INFO] 10.217.0.87:35019 - 34439 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612528s 2025-08-13T20:11:09.241851496+00:00 stdout F [INFO] 10.217.0.87:57066 - 18806 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000641939s 2025-08-13T20:11:09.275480630+00:00 stdout F [INFO] 10.217.0.87:46739 - 53451 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000772852s 2025-08-13T20:11:09.275853701+00:00 stdout F [INFO] 10.217.0.87:55427 - 54278 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103188s 2025-08-13T20:11:09.276502500+00:00 stdout F [INFO] 10.217.0.87:49219 - 48743 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000627578s 2025-08-13T20:11:09.277077556+00:00 stdout F [INFO] 10.217.0.87:38267 - 55558 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001083721s 2025-08-13T20:11:09.285312502+00:00 stdout F [INFO] 10.217.0.87:57050 - 26639 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527796s 2025-08-13T20:11:09.285312502+00:00 stdout F [INFO] 10.217.0.87:38263 - 35891 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000778362s 2025-08-13T20:11:09.328960133+00:00 stdout F [INFO] 10.217.0.87:35786 - 6519 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000763642s 2025-08-13T20:11:09.329544330+00:00 stdout F [INFO] 10.217.0.87:51541 - 48774 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001336938s 2025-08-13T20:11:09.330937810+00:00 stdout F [INFO] 10.217.0.87:45477 - 8398 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767262s 2025-08-13T20:11:09.331439575+00:00 stdout F [INFO] 10.217.0.87:47873 - 29317 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001480123s 2025-08-13T20:11:09.339755763+00:00 stdout F [INFO] 10.217.0.87:53989 - 17357 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001479943s 2025-08-13T20:11:09.339946839+00:00 stdout F [INFO] 10.217.0.87:39615 - 48203 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001503723s 2025-08-13T20:11:09.371635847+00:00 stdout F [INFO] 10.217.0.87:33870 - 47553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803013s 2025-08-13T20:11:09.372254395+00:00 stdout F [INFO] 10.217.0.87:33091 - 45955 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001228876s 2025-08-13T20:11:09.392121074+00:00 stdout F [INFO] 10.217.0.87:43512 - 7191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619788s 2025-08-13T20:11:09.392121074+00:00 stdout F [INFO] 10.217.0.87:43215 - 45310 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000675009s 2025-08-13T20:11:09.427988023+00:00 stdout F [INFO] 10.217.0.87:33535 - 58269 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000669729s 2025-08-13T20:11:09.428161888+00:00 stdout F [INFO] 10.217.0.87:40410 - 40341 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000664079s 2025-08-13T20:11:09.448156561+00:00 stdout F [INFO] 10.217.0.87:54954 - 42761 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000628158s 2025-08-13T20:11:09.448156561+00:00 stdout F [INFO] 10.217.0.87:53421 - 38162 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00138773s 2025-08-13T20:11:09.449071847+00:00 stdout F [INFO] 10.217.0.87:55224 - 28314 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000527796s 2025-08-13T20:11:09.449071847+00:00 stdout F [INFO] 10.217.0.87:49659 - 6648 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000764562s 2025-08-13T20:11:09.482376662+00:00 stdout F [INFO] 10.217.0.87:48689 - 21850 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000638679s 2025-08-13T20:11:09.482376662+00:00 stdout F [INFO] 10.217.0.87:57101 - 38149 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000632798s 2025-08-13T20:11:09.503586530+00:00 stdout F [INFO] 10.217.0.87:52881 - 6877 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000808803s 2025-08-13T20:11:09.503586530+00:00 stdout F [INFO] 10.217.0.87:34201 - 21870 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000537925s 2025-08-13T20:11:09.505160085+00:00 stdout F [INFO] 10.217.0.87:35048 - 16070 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000575357s 2025-08-13T20:11:09.505404882+00:00 stdout F [INFO] 10.217.0.87:41863 - 24054 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00105403s 2025-08-13T20:11:09.537342868+00:00 stdout F [INFO] 10.217.0.87:60456 - 44692 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000797203s 2025-08-13T20:11:09.537342868+00:00 stdout F [INFO] 10.217.0.87:38893 - 64497 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001186754s 2025-08-13T20:11:09.565609508+00:00 stdout F [INFO] 10.217.0.87:49518 - 2987 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000829094s 2025-08-13T20:11:09.566091892+00:00 stdout F [INFO] 10.217.0.87:33729 - 40144 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000658899s 2025-08-13T20:11:09.567072360+00:00 stdout F [INFO] 10.217.0.87:53459 - 9026 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000851374s 2025-08-13T20:11:09.567072360+00:00 stdout F [INFO] 10.217.0.87:51738 - 51961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000828444s 2025-08-13T20:11:09.619399371+00:00 stdout F [INFO] 10.217.0.87:46668 - 55956 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985679s 2025-08-13T20:11:09.619708900+00:00 stdout F [INFO] 10.217.0.87:42988 - 40163 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000664769s 2025-08-13T20:11:09.621897792+00:00 stdout F [INFO] 10.217.0.87:48469 - 28257 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000531815s 2025-08-13T20:11:09.622085258+00:00 stdout F [INFO] 10.217.0.87:56697 - 386 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006824s 2025-08-13T20:11:09.644941383+00:00 stdout F [INFO] 10.217.0.87:50544 - 34100 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001134353s 2025-08-13T20:11:09.645365005+00:00 stdout F [INFO] 10.217.0.87:53854 - 56413 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140471s 2025-08-13T20:11:09.676304942+00:00 stdout F [INFO] 10.217.0.87:56020 - 5860 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000740331s 2025-08-13T20:11:09.676304942+00:00 stdout F [INFO] 10.217.0.87:39917 - 33779 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000581146s 2025-08-13T20:11:09.678225417+00:00 stdout F [INFO] 10.217.0.87:44903 - 15129 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000491185s 2025-08-13T20:11:09.678534746+00:00 stdout F [INFO] 10.217.0.87:47391 - 53784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000419112s 2025-08-13T20:11:09.699531377+00:00 stdout F [INFO] 10.217.0.87:40503 - 29441 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000648348s 2025-08-13T20:11:09.699735143+00:00 stdout F [INFO] 10.217.0.87:56421 - 20914 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000838404s 2025-08-13T20:11:09.732739419+00:00 stdout F [INFO] 10.217.0.87:60895 - 25087 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000904066s 2025-08-13T20:11:09.732879243+00:00 stdout F [INFO] 10.217.0.87:45233 - 38453 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001086661s 2025-08-13T20:11:09.754002619+00:00 stdout F [INFO] 10.217.0.87:47687 - 30565 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753871s 2025-08-13T20:11:09.754062771+00:00 stdout F [INFO] 10.217.0.87:56598 - 25031 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000797913s 2025-08-13T20:11:09.791738541+00:00 stdout F [INFO] 10.217.0.87:35890 - 8037 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070223s 2025-08-13T20:11:09.791738541+00:00 stdout F [INFO] 10.217.0.87:54483 - 44678 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000723801s 2025-08-13T20:11:09.801146801+00:00 stdout F [INFO] 10.217.0.87:50514 - 38291 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00313687s 2025-08-13T20:11:09.803285852+00:00 stdout F [INFO] 10.217.0.87:39839 - 32977 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001900175s 2025-08-13T20:11:09.810966062+00:00 stdout F [INFO] 10.217.0.87:48258 - 20246 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002353797s 2025-08-13T20:11:09.811439386+00:00 stdout F [INFO] 10.217.0.87:60135 - 51291 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002544313s 2025-08-13T20:11:09.815948235+00:00 stdout F [INFO] 10.217.0.87:40563 - 6633 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000910756s 2025-08-13T20:11:09.815948235+00:00 stdout F [INFO] 10.217.0.87:37855 - 63755 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000946797s 2025-08-13T20:11:09.855902510+00:00 stdout F [INFO] 10.217.0.87:55357 - 52910 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000672089s 2025-08-13T20:11:09.856062125+00:00 stdout F [INFO] 10.217.0.87:56983 - 37252 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012639s 2025-08-13T20:11:09.871047725+00:00 stdout F [INFO] 10.217.0.87:35549 - 20648 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976718s 2025-08-13T20:11:09.871047725+00:00 stdout F [INFO] 10.217.0.87:44595 - 1932 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002394738s 2025-08-13T20:11:09.871549349+00:00 stdout F [INFO] 10.217.0.87:40336 - 25626 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001868553s 2025-08-13T20:11:09.871549349+00:00 stdout F [INFO] 10.217.0.87:46912 - 10871 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003059918s 2025-08-13T20:11:09.910681771+00:00 stdout F [INFO] 10.217.0.87:45858 - 55022 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001129743s 2025-08-13T20:11:09.910974900+00:00 stdout F [INFO] 10.217.0.87:55087 - 63530 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001228406s 2025-08-13T20:11:09.914111879+00:00 stdout F [INFO] 10.217.0.87:38666 - 64429 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000674719s 2025-08-13T20:11:09.914236783+00:00 stdout F [INFO] 10.217.0.87:47423 - 54243 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000635178s 2025-08-13T20:11:09.924884558+00:00 stdout F [INFO] 10.217.0.87:55691 - 6387 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000615748s 2025-08-13T20:11:09.925091264+00:00 stdout F [INFO] 10.217.0.87:36726 - 7563 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000820113s 2025-08-13T20:11:09.933855386+00:00 stdout F [INFO] 10.217.0.87:41358 - 31832 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000607047s 2025-08-13T20:11:09.934167695+00:00 stdout F [INFO] 10.217.0.87:54170 - 36712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000620148s 2025-08-13T20:11:09.967295634+00:00 stdout F [INFO] 10.217.0.87:58101 - 34961 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000751712s 2025-08-13T20:11:09.967366646+00:00 stdout F [INFO] 10.217.0.87:49474 - 38655 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000732611s 2025-08-13T20:11:09.969581790+00:00 stdout F [INFO] 10.217.0.87:47263 - 41007 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000959338s 2025-08-13T20:11:09.969630631+00:00 stdout F [INFO] 10.217.0.87:58934 - 8505 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00107031s 2025-08-13T20:11:09.979130774+00:00 stdout F [INFO] 10.217.0.87:58145 - 58179 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754711s 2025-08-13T20:11:09.979260177+00:00 stdout F [INFO] 10.217.0.87:38045 - 32921 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793963s 2025-08-13T20:11:09.988431340+00:00 stdout F [INFO] 10.217.0.87:58901 - 15757 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000785472s 2025-08-13T20:11:09.988598135+00:00 stdout F [INFO] 10.217.0.87:40838 - 44964 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001242825s 2025-08-13T20:11:10.020712196+00:00 stdout F [INFO] 10.217.0.87:47276 - 32294 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000511265s 2025-08-13T20:11:10.020857110+00:00 stdout F [INFO] 10.217.0.87:53692 - 50495 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000708841s 2025-08-13T20:11:10.031652339+00:00 stdout F [INFO] 10.217.0.87:35587 - 6934 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771622s 2025-08-13T20:11:10.031652339+00:00 stdout F [INFO] 10.217.0.87:49877 - 38155 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000954388s 2025-08-13T20:11:10.040906155+00:00 stdout F [INFO] 10.217.0.87:38169 - 44285 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000416262s 2025-08-13T20:11:10.041016538+00:00 stdout F [INFO] 10.217.0.87:48760 - 59142 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070476s 2025-08-13T20:11:10.075523907+00:00 stdout F [INFO] 10.217.0.87:40850 - 53105 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000811223s 2025-08-13T20:11:10.075755104+00:00 stdout F [INFO] 10.217.0.87:38297 - 14321 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000948948s 2025-08-13T20:11:10.084645699+00:00 stdout F [INFO] 10.217.0.87:34519 - 42542 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000430953s 2025-08-13T20:11:10.084704071+00:00 stdout F [INFO] 10.217.0.87:43438 - 12631 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00067879s 2025-08-13T20:11:10.093303337+00:00 stdout F [INFO] 10.217.0.87:56908 - 17450 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000708811s 2025-08-13T20:11:10.093442521+00:00 stdout F [INFO] 10.217.0.87:51618 - 15318 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000817544s 2025-08-13T20:11:10.101313267+00:00 stdout F [INFO] 10.217.0.87:37133 - 19887 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000712141s 2025-08-13T20:11:10.101512533+00:00 stdout F [INFO] 10.217.0.87:50412 - 11192 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000948597s 2025-08-13T20:11:10.129508615+00:00 stdout F [INFO] 10.217.0.87:43831 - 20450 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000750722s 2025-08-13T20:11:10.129556986+00:00 stdout F [INFO] 10.217.0.87:52358 - 12808 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000957057s 2025-08-13T20:11:10.136328551+00:00 stdout F [INFO] 10.217.0.87:52783 - 29135 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000594527s 2025-08-13T20:11:10.136569338+00:00 stdout F [INFO] 10.217.0.87:34678 - 60145 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000918516s 2025-08-13T20:11:10.152319909+00:00 stdout F [INFO] 10.217.0.87:37166 - 20284 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071952s 2025-08-13T20:11:10.152695840+00:00 stdout F [INFO] 10.217.0.87:58908 - 31337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001094711s 2025-08-13T20:11:10.186369485+00:00 stdout F [INFO] 10.217.0.87:40691 - 64 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00106306s 2025-08-13T20:11:10.186424177+00:00 stdout F [INFO] 10.217.0.87:59312 - 58681 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000909676s 2025-08-13T20:11:10.212445733+00:00 stdout F [INFO] 10.217.0.87:36036 - 47426 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001638717s 2025-08-13T20:11:10.212445733+00:00 stdout F [INFO] 10.217.0.87:47264 - 35254 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001092741s 2025-08-13T20:11:10.214609215+00:00 stdout F [INFO] 10.217.0.87:60530 - 26229 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000657939s 2025-08-13T20:11:10.214609215+00:00 stdout F [INFO] 10.217.0.87:41544 - 60427 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001028729s 2025-08-13T20:11:10.240992232+00:00 stdout F [INFO] 10.217.0.87:54263 - 21128 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001061571s 2025-08-13T20:11:10.241217088+00:00 stdout F [INFO] 10.217.0.87:47748 - 54361 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001358849s 2025-08-13T20:11:10.282300966+00:00 stdout F [INFO] 10.217.0.87:49674 - 704 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003252953s 2025-08-13T20:11:10.282300966+00:00 stdout F [INFO] 10.217.0.87:44821 - 63231 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00314728s 2025-08-13T20:11:10.291850080+00:00 stdout F [INFO] 10.217.0.87:35498 - 30057 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000780553s 2025-08-13T20:11:10.292084226+00:00 stdout F [INFO] 10.217.0.87:34433 - 61062 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761342s 2025-08-13T20:11:10.308115376+00:00 stdout F [INFO] 10.217.0.87:45267 - 53227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000601597s 2025-08-13T20:11:10.308174398+00:00 stdout F [INFO] 10.217.0.87:53644 - 2574 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000985089s 2025-08-13T20:11:10.345653612+00:00 stdout F [INFO] 10.217.0.87:54134 - 43179 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740462s 2025-08-13T20:11:10.345706304+00:00 stdout F [INFO] 10.217.0.87:51934 - 41503 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068563s 2025-08-13T20:11:10.347835605+00:00 stdout F [INFO] 10.217.0.87:48788 - 65440 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000528275s 2025-08-13T20:11:10.348054241+00:00 stdout F [INFO] 10.217.0.87:46192 - 13272 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000802233s 2025-08-13T20:11:10.365227793+00:00 stdout F [INFO] 10.217.0.87:56813 - 61336 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000841114s 2025-08-13T20:11:10.365558593+00:00 stdout F [INFO] 10.217.0.87:56406 - 44405 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001118022s 2025-08-13T20:11:10.404568631+00:00 stdout F [INFO] 10.217.0.87:39247 - 53082 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000962568s 2025-08-13T20:11:10.404568631+00:00 stdout F [INFO] 10.217.0.87:39015 - 64029 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001217245s 2025-08-13T20:11:10.407330981+00:00 stdout F [INFO] 10.217.0.87:48101 - 3473 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104135s 2025-08-13T20:11:10.407428593+00:00 stdout F [INFO] 10.217.0.87:54432 - 43945 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000922756s 2025-08-13T20:11:10.427216141+00:00 stdout F [INFO] 10.217.0.87:58629 - 23429 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069069s 2025-08-13T20:11:10.427321594+00:00 stdout F [INFO] 10.217.0.87:53692 - 13549 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000727961s 2025-08-13T20:11:10.442966342+00:00 stdout F [INFO] 10.217.0.87:35003 - 8346 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654699s 2025-08-13T20:11:10.443015794+00:00 stdout F [INFO] 10.217.0.87:57738 - 6845 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000876895s 2025-08-13T20:11:10.462218544+00:00 stdout F [INFO] 10.217.0.87:38434 - 47432 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001120352s 2025-08-13T20:11:10.462218544+00:00 stdout F [INFO] 10.217.0.87:46544 - 18300 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001356399s 2025-08-13T20:11:10.483870565+00:00 stdout F [INFO] 10.217.0.87:56416 - 36850 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761392s 2025-08-13T20:11:10.483943937+00:00 stdout F [INFO] 10.217.0.87:35365 - 54110 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001481532s 2025-08-13T20:11:10.497518346+00:00 stdout F [INFO] 10.217.0.87:36394 - 20484 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000662119s 2025-08-13T20:11:10.497561348+00:00 stdout F [INFO] 10.217.0.87:47551 - 7794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000662119s 2025-08-13T20:11:10.524954413+00:00 stdout F [INFO] 10.217.0.87:58045 - 1135 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000569947s 2025-08-13T20:11:10.525179859+00:00 stdout F [INFO] 10.217.0.87:45550 - 18182 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102599s 2025-08-13T20:11:10.535454884+00:00 stdout F [INFO] 10.217.0.87:38958 - 24486 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001087191s 2025-08-13T20:11:10.535617379+00:00 stdout F [INFO] 10.217.0.87:35462 - 51414 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001264926s 2025-08-13T20:11:10.555337624+00:00 stdout F [INFO] 10.217.0.87:37227 - 51475 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000821594s 2025-08-13T20:11:10.555405996+00:00 stdout F [INFO] 10.217.0.87:35920 - 64361 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069905s 2025-08-13T20:11:10.582638297+00:00 stdout F [INFO] 10.217.0.87:46664 - 5406 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808283s 2025-08-13T20:11:10.582867113+00:00 stdout F [INFO] 10.217.0.87:37306 - 48071 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00106375s 2025-08-13T20:11:10.594028313+00:00 stdout F [INFO] 10.217.0.87:59666 - 45350 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001113462s 2025-08-13T20:11:10.594028313+00:00 stdout F [INFO] 10.217.0.87:53509 - 55627 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001300197s 2025-08-13T20:11:10.610093124+00:00 stdout F [INFO] 10.217.0.87:46572 - 58676 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001320778s 2025-08-13T20:11:10.610148496+00:00 stdout F [INFO] 10.217.0.87:40323 - 64318 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139729s 2025-08-13T20:11:10.635853543+00:00 stdout F [INFO] 10.217.0.87:53169 - 9990 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069213s 2025-08-13T20:11:10.636072669+00:00 stdout F [INFO] 10.217.0.87:50177 - 43293 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000754261s 2025-08-13T20:11:10.668321404+00:00 stdout F [INFO] 10.217.0.87:53293 - 62044 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001576965s 2025-08-13T20:11:10.668321404+00:00 stdout F [INFO] 10.217.0.87:57662 - 55233 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001761411s 2025-08-13T20:11:10.697836330+00:00 stdout F [INFO] 10.217.0.87:41254 - 48236 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001464082s 2025-08-13T20:11:10.697836330+00:00 stdout F [INFO] 10.217.0.87:37182 - 29642 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001693129s 2025-08-13T20:11:10.725196374+00:00 stdout F [INFO] 10.217.0.87:34029 - 25028 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731301s 2025-08-13T20:11:10.725431641+00:00 stdout F [INFO] 10.217.0.87:51302 - 1519 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817833s 2025-08-13T20:11:10.739264747+00:00 stdout F [INFO] 10.217.0.87:46810 - 32281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000892646s 2025-08-13T20:11:10.739420212+00:00 stdout F [INFO] 10.217.0.87:43458 - 39469 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000972398s 2025-08-13T20:11:10.751142388+00:00 stdout F [INFO] 10.217.0.87:36895 - 9809 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000612998s 2025-08-13T20:11:10.751430606+00:00 stdout F [INFO] 10.217.0.87:39596 - 65208 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000677019s 2025-08-13T20:11:10.764475080+00:00 stdout F [INFO] 10.217.0.87:47548 - 14641 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000989968s 2025-08-13T20:11:10.764557333+00:00 stdout F [INFO] 10.217.0.87:43050 - 3786 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000903196s 2025-08-13T20:11:10.779200152+00:00 stdout F [INFO] 10.217.0.87:37918 - 49565 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001230145s 2025-08-13T20:11:10.779329346+00:00 stdout F [INFO] 10.217.0.87:41188 - 48358 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001293657s 2025-08-13T20:11:10.791840205+00:00 stdout F [INFO] 10.217.0.87:56862 - 47485 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001256216s 2025-08-13T20:11:10.791965058+00:00 stdout F [INFO] 10.217.0.87:56497 - 6626 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001321368s 2025-08-13T20:11:10.806885366+00:00 stdout F [INFO] 10.217.0.87:42559 - 25132 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740422s 2025-08-13T20:11:10.807054091+00:00 stdout F [INFO] 10.217.0.87:38089 - 18651 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000656199s 2025-08-13T20:11:10.821087963+00:00 stdout F [INFO] 10.217.0.87:53932 - 43404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000529605s 2025-08-13T20:11:10.821087963+00:00 stdout F [INFO] 10.217.0.87:53669 - 32799 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000495664s 2025-08-13T20:11:10.838031589+00:00 stdout F [INFO] 10.217.0.87:48051 - 47009 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069481s 2025-08-13T20:11:10.838031589+00:00 stdout F [INFO] 10.217.0.87:49633 - 2906 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000731321s 2025-08-13T20:11:10.847942923+00:00 stdout F [INFO] 10.217.0.87:40866 - 54204 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000492464s 2025-08-13T20:11:10.847942923+00:00 stdout F [INFO] 10.217.0.87:55638 - 5418 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507535s 2025-08-13T20:11:10.859496155+00:00 stdout F [INFO] 10.217.0.87:43040 - 23737 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000779872s 2025-08-13T20:11:10.859560776+00:00 stdout F [INFO] 10.217.0.87:32768 - 51691 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000733041s 2025-08-13T20:11:10.897474393+00:00 stdout F [INFO] 10.217.0.87:44023 - 5516 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068915s 2025-08-13T20:11:10.898269516+00:00 stdout F [INFO] 10.217.0.87:43332 - 54899 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001154573s 2025-08-13T20:11:10.913750840+00:00 stdout F [INFO] 10.217.0.87:44629 - 53763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000848825s 2025-08-13T20:11:10.913846803+00:00 stdout F [INFO] 10.217.0.87:47543 - 42445 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000658709s 2025-08-13T20:11:10.949764543+00:00 stdout F [INFO] 10.217.0.87:60784 - 25294 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001517523s 2025-08-13T20:11:10.949888526+00:00 stdout F [INFO] 10.217.0.87:43931 - 17207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175024s 2025-08-13T20:11:10.968321445+00:00 stdout F [INFO] 10.217.0.87:39431 - 5861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000533925s 2025-08-13T20:11:10.969151149+00:00 stdout F [INFO] 10.217.0.87:36790 - 35647 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001096441s 2025-08-13T20:11:10.989866742+00:00 stdout F [INFO] 10.217.0.87:40960 - 23472 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000554456s 2025-08-13T20:11:10.990262134+00:00 stdout F [INFO] 10.217.0.87:51522 - 65437 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000818503s 2025-08-13T20:11:11.006375476+00:00 stdout F [INFO] 10.217.0.87:52401 - 28255 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934187s 2025-08-13T20:11:11.006555201+00:00 stdout F [INFO] 10.217.0.87:44627 - 52350 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000977718s 2025-08-13T20:11:11.006699505+00:00 stdout F [INFO] 10.217.0.87:39108 - 49176 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870595s 2025-08-13T20:11:11.006741886+00:00 stdout F [INFO] 10.217.0.87:34385 - 6198 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001256896s 2025-08-13T20:11:11.032143735+00:00 stdout F [INFO] 10.217.0.87:49275 - 4543 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000543846s 2025-08-13T20:11:11.032388102+00:00 stdout F [INFO] 10.217.0.87:53875 - 52880 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000647548s 2025-08-13T20:11:11.042664306+00:00 stdout F [INFO] 10.217.0.87:49993 - 57969 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000487754s 2025-08-13T20:11:11.042990406+00:00 stdout F [INFO] 10.217.0.87:42332 - 18348 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072954s 2025-08-13T20:11:11.060284971+00:00 stdout F [INFO] 10.217.0.87:46737 - 15987 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000730151s 2025-08-13T20:11:11.061205788+00:00 stdout F [INFO] 10.217.0.87:37223 - 10334 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001298888s 2025-08-13T20:11:11.061591239+00:00 stdout F [INFO] 10.217.0.87:46133 - 23565 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000556496s 2025-08-13T20:11:11.061872807+00:00 stdout F [INFO] 10.217.0.87:35095 - 42296 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000506325s 2025-08-13T20:11:11.086639487+00:00 stdout F [INFO] 10.217.0.87:34121 - 241 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619908s 2025-08-13T20:11:11.086905245+00:00 stdout F [INFO] 10.217.0.87:53216 - 15697 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000627458s 2025-08-13T20:11:11.099091904+00:00 stdout F [INFO] 10.217.0.87:52035 - 14176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000739021s 2025-08-13T20:11:11.099598979+00:00 stdout F [INFO] 10.217.0.87:43061 - 9950 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000636628s 2025-08-13T20:11:11.115611158+00:00 stdout F [INFO] 10.217.0.87:55396 - 28029 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105816s 2025-08-13T20:11:11.115710071+00:00 stdout F [INFO] 10.217.0.87:47389 - 50072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001448772s 2025-08-13T20:11:11.116040450+00:00 stdout F [INFO] 10.217.0.87:38105 - 49459 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001355359s 2025-08-13T20:11:11.116375710+00:00 stdout F [INFO] 10.217.0.87:60529 - 58340 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001838553s 2025-08-13T20:11:11.139878123+00:00 stdout F [INFO] 10.217.0.87:34386 - 39284 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587697s 2025-08-13T20:11:11.139997127+00:00 stdout F [INFO] 10.217.0.87:33415 - 62766 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000665539s 2025-08-13T20:11:11.153224846+00:00 stdout F [INFO] 10.217.0.87:58006 - 26905 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000784372s 2025-08-13T20:11:11.153637368+00:00 stdout F [INFO] 10.217.0.87:56675 - 53466 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001014779s 2025-08-13T20:11:11.173482927+00:00 stdout F [INFO] 10.217.0.87:37374 - 59512 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002216123s 2025-08-13T20:11:11.173804926+00:00 stdout F [INFO] 10.217.0.87:43308 - 15355 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002719148s 2025-08-13T20:11:11.193315066+00:00 stdout F [INFO] 10.217.0.87:39323 - 45299 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000551476s 2025-08-13T20:11:11.193843811+00:00 stdout F [INFO] 10.217.0.87:60659 - 56528 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000749141s 2025-08-13T20:11:11.203904119+00:00 stdout F [INFO] 10.217.0.87:35418 - 56911 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771722s 2025-08-13T20:11:11.204142896+00:00 stdout F [INFO] 10.217.0.87:39299 - 56238 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817754s 2025-08-13T20:11:11.206533775+00:00 stdout F [INFO] 10.217.0.87:39203 - 4198 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000597697s 2025-08-13T20:11:11.206586266+00:00 stdout F [INFO] 10.217.0.87:33058 - 32028 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000648949s 2025-08-13T20:11:11.221888475+00:00 stdout F [INFO] 10.217.0.87:57419 - 33230 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000789802s 2025-08-13T20:11:11.222177353+00:00 stdout F [INFO] 10.217.0.87:45855 - 55824 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001099731s 2025-08-13T20:11:11.252482972+00:00 stdout F [INFO] 10.217.0.87:49168 - 33322 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000678s 2025-08-13T20:11:11.253498791+00:00 stdout F [INFO] 10.217.0.87:41421 - 56012 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001364589s 2025-08-13T20:11:11.258970418+00:00 stdout F [INFO] 10.217.0.87:34340 - 41230 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000517045s 2025-08-13T20:11:11.259189624+00:00 stdout F [INFO] 10.217.0.87:36332 - 57392 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006936s 2025-08-13T20:11:11.277766107+00:00 stdout F [INFO] 10.217.0.87:55318 - 35910 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000542755s 2025-08-13T20:11:11.277852489+00:00 stdout F [INFO] 10.217.0.87:57870 - 21939 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000753292s 2025-08-13T20:11:11.307979793+00:00 stdout F [INFO] 10.217.0.87:51817 - 47893 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649408s 2025-08-13T20:11:11.308105307+00:00 stdout F [INFO] 10.217.0.87:33695 - 38695 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000747191s 2025-08-13T20:11:11.313892163+00:00 stdout F [INFO] 10.217.0.87:35898 - 19074 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068864s 2025-08-13T20:11:11.314104319+00:00 stdout F [INFO] 10.217.0.87:47876 - 34289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000799533s 2025-08-13T20:11:11.367508110+00:00 stdout F [INFO] 10.217.0.87:52614 - 11229 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069437s 2025-08-13T20:11:11.367664444+00:00 stdout F [INFO] 10.217.0.87:59094 - 25413 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768972s 2025-08-13T20:11:11.643660627+00:00 stdout F [INFO] 10.217.0.62:54796 - 40368 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001262316s 2025-08-13T20:11:11.643823292+00:00 stdout F [INFO] 10.217.0.62:57586 - 55468 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001439811s 2025-08-13T20:11:22.864396336+00:00 stdout F [INFO] 10.217.0.8:43207 - 51542 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001437441s 2025-08-13T20:11:22.864396336+00:00 stdout F [INFO] 10.217.0.8:32904 - 36126 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001346429s 2025-08-13T20:11:22.866861087+00:00 stdout F [INFO] 10.217.0.8:58933 - 4423 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001564905s 2025-08-13T20:11:22.867523206+00:00 stdout F [INFO] 10.217.0.8:59732 - 57745 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000567686s 2025-08-13T20:11:35.191236613+00:00 stdout F [INFO] 10.217.0.19:43624 - 31557 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004043076s 2025-08-13T20:11:35.191360136+00:00 stdout F [INFO] 10.217.0.19:42091 - 49079 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004665044s 2025-08-13T20:11:41.626012045+00:00 stdout F [INFO] 10.217.0.62:49209 - 7004 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00136998s 2025-08-13T20:11:41.626231631+00:00 stdout F [INFO] 10.217.0.62:48100 - 28186 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001462432s 2025-08-13T20:11:45.337920649+00:00 stdout F [INFO] 10.217.0.19:55508 - 38516 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000946417s 2025-08-13T20:11:45.337920649+00:00 stdout F [INFO] 10.217.0.19:57786 - 17416 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001212935s 2025-08-13T20:11:53.267739152+00:00 stdout F [INFO] 10.217.0.19:38914 - 46916 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105049s 2025-08-13T20:11:53.268465283+00:00 stdout F [INFO] 10.217.0.19:41853 - 32507 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001277896s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:34223 - 33431 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004252351s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:34960 - 46826 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004715415s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:55176 - 46961 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002248995s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:47156 - 33166 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003679365s 2025-08-13T20:12:02.567650759+00:00 stdout F [INFO] 10.217.0.45:44131 - 57505 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00210487s 2025-08-13T20:12:02.567650759+00:00 stdout F [INFO] 10.217.0.45:47913 - 32986 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00244551s 2025-08-13T20:12:05.182422296+00:00 stdout F [INFO] 10.217.0.19:34869 - 29012 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175998s 2025-08-13T20:12:05.187282006+00:00 stdout F [INFO] 10.217.0.19:47330 - 29258 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003057838s 2025-08-13T20:12:11.627950944+00:00 stdout F [INFO] 10.217.0.62:40275 - 29937 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002399889s 2025-08-13T20:12:11.628227361+00:00 stdout F [INFO] 10.217.0.62:39484 - 32290 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003273974s 2025-08-13T20:12:22.871627992+00:00 stdout F [INFO] 10.217.0.8:39575 - 52483 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004259432s 2025-08-13T20:12:22.871905410+00:00 stdout F [INFO] 10.217.0.8:53081 - 41816 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00452466s 2025-08-13T20:12:22.876477911+00:00 stdout F [INFO] 10.217.0.8:35821 - 12538 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002057629s 2025-08-13T20:12:22.877191531+00:00 stdout F [INFO] 10.217.0.8:48890 - 2287 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069125s 2025-08-13T20:12:35.190887254+00:00 stdout F [INFO] 10.217.0.19:60434 - 29890 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002610955s 2025-08-13T20:12:35.191185352+00:00 stdout F [INFO] 10.217.0.19:47054 - 51416 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002316357s 2025-08-13T20:12:41.625981740+00:00 stdout F [INFO] 10.217.0.62:40202 - 46466 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001112132s 2025-08-13T20:12:41.625981740+00:00 stdout F [INFO] 10.217.0.62:38107 - 12855 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001321438s 2025-08-13T20:12:54.987282390+00:00 stdout F [INFO] 10.217.0.19:45800 - 50279 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003032327s 2025-08-13T20:12:54.990679337+00:00 stdout F [INFO] 10.217.0.19:53152 - 56491 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003755488s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:35513 - 44991 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001019439s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:45882 - 50490 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001013849s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:37570 - 4790 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001625246s 2025-08-13T20:12:56.376464509+00:00 stdout F [INFO] 10.217.0.64:46358 - 22273 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.008205426s 2025-08-13T20:13:02.615040274+00:00 stdout F [INFO] 10.217.0.45:37088 - 44026 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000938007s 2025-08-13T20:13:02.615040274+00:00 stdout F [INFO] 10.217.0.45:35431 - 27214 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000953788s 2025-08-13T20:13:05.198056111+00:00 stdout F [INFO] 10.217.0.19:35572 - 34597 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000881615s 2025-08-13T20:13:05.198056111+00:00 stdout F [INFO] 10.217.0.19:59582 - 43606 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002272025s 2025-08-13T20:13:11.639921623+00:00 stdout F [INFO] 10.217.0.62:52711 - 32476 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001424541s 2025-08-13T20:13:11.639921623+00:00 stdout F [INFO] 10.217.0.62:57132 - 59874 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001547004s 2025-08-13T20:13:22.865124441+00:00 stdout F [INFO] 10.217.0.8:59533 - 12813 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001793821s 2025-08-13T20:13:22.865124441+00:00 stdout F [INFO] 10.217.0.8:35766 - 13841 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002905793s 2025-08-13T20:13:22.866911342+00:00 stdout F [INFO] 10.217.0.8:35077 - 20431 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000996988s 2025-08-13T20:13:22.867195700+00:00 stdout F [INFO] 10.217.0.8:60321 - 18676 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001366739s 2025-08-13T20:13:35.203554812+00:00 stdout F [INFO] 10.217.0.19:50652 - 1843 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001442701s 2025-08-13T20:13:35.204251312+00:00 stdout F [INFO] 10.217.0.19:44566 - 54018 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004432657s 2025-08-13T20:13:41.632163817+00:00 stdout F [INFO] 10.217.0.62:59452 - 58538 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001657787s 2025-08-13T20:13:41.632163817+00:00 stdout F [INFO] 10.217.0.62:33873 - 61617 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00068974s 2025-08-13T20:13:51.958542921+00:00 stdout F [INFO] 10.217.0.19:34570 - 51195 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001959286s 2025-08-13T20:13:51.958542921+00:00 stdout F [INFO] 10.217.0.19:52745 - 6433 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002972695s 2025-08-13T20:13:56.366582744+00:00 stdout F [INFO] 10.217.0.64:58857 - 31612 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001163063s 2025-08-13T20:13:56.366997556+00:00 stdout F [INFO] 10.217.0.64:38313 - 44558 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001188384s 2025-08-13T20:13:56.367223423+00:00 stdout F [INFO] 10.217.0.64:43580 - 26926 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001452902s 2025-08-13T20:13:56.368567151+00:00 stdout F [INFO] 10.217.0.64:34606 - 26831 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001230565s 2025-08-13T20:14:02.654198796+00:00 stdout F [INFO] 10.217.0.45:34942 - 58963 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001013979s 2025-08-13T20:14:02.654198796+00:00 stdout F [INFO] 10.217.0.45:33630 - 17756 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001152493s 2025-08-13T20:14:04.578540679+00:00 stdout F [INFO] 10.217.0.19:36000 - 321 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000795613s 2025-08-13T20:14:04.578714824+00:00 stdout F [INFO] 10.217.0.19:39517 - 29052 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000842304s 2025-08-13T20:14:05.187584410+00:00 stdout F [INFO] 10.217.0.19:46122 - 58576 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001175993s 2025-08-13T20:14:05.187721404+00:00 stdout F [INFO] 10.217.0.19:57402 - 32181 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001507053s 2025-08-13T20:14:11.629651830+00:00 stdout F [INFO] 10.217.0.62:59159 - 41701 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001221915s 2025-08-13T20:14:11.629651830+00:00 stdout F [INFO] 10.217.0.62:50146 - 57371 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00104821s 2025-08-13T20:14:22.866394636+00:00 stdout F [INFO] 10.217.0.8:37824 - 64992 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002725368s 2025-08-13T20:14:22.866496279+00:00 stdout F [INFO] 10.217.0.8:59572 - 3383 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002805781s 2025-08-13T20:14:22.868010292+00:00 stdout F [INFO] 10.217.0.8:60365 - 35111 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000760842s 2025-08-13T20:14:22.868010292+00:00 stdout F [INFO] 10.217.0.8:32914 - 8157 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001276487s 2025-08-13T20:14:35.203869808+00:00 stdout F [INFO] 10.217.0.19:60879 - 49232 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003147531s 2025-08-13T20:14:35.203869808+00:00 stdout F [INFO] 10.217.0.19:52879 - 60081 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003865761s 2025-08-13T20:14:41.632919724+00:00 stdout F [INFO] 10.217.0.62:59797 - 26522 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001363269s 2025-08-13T20:14:41.632919724+00:00 stdout F [INFO] 10.217.0.62:35790 - 34336 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001472952s 2025-08-13T20:14:56.364108879+00:00 stdout F [INFO] 10.217.0.64:43764 - 10505 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003449399s 2025-08-13T20:14:56.364148850+00:00 stdout F [INFO] 10.217.0.64:56395 - 23031 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005052034s 2025-08-13T20:14:56.364314075+00:00 stdout F [INFO] 10.217.0.64:38623 - 43236 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00524927s 2025-08-13T20:14:56.364473110+00:00 stdout F [INFO] 10.217.0.64:48325 - 42341 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003925912s 2025-08-13T20:15:02.699019916+00:00 stdout F [INFO] 10.217.0.45:34508 - 55080 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001459191s 2025-08-13T20:15:02.699019916+00:00 stdout F [INFO] 10.217.0.45:37620 - 24835 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000971367s 2025-08-13T20:15:05.193938748+00:00 stdout F [INFO] 10.217.0.19:40224 - 64478 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002663827s 2025-08-13T20:15:05.193938748+00:00 stdout F [INFO] 10.217.0.19:56288 - 41557 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004044026s 2025-08-13T20:15:11.637670792+00:00 stdout F [INFO] 10.217.0.62:47926 - 48891 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000847794s 2025-08-13T20:15:11.637670792+00:00 stdout F [INFO] 10.217.0.62:39812 - 38108 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000968387s 2025-08-13T20:15:14.207678284+00:00 stdout F [INFO] 10.217.0.19:50192 - 41833 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000742601s 2025-08-13T20:15:14.207824008+00:00 stdout F [INFO] 10.217.0.19:35203 - 22436 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001242815s 2025-08-13T20:15:22.865495047+00:00 stdout F [INFO] 10.217.0.8:48730 - 3306 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001072211s 2025-08-13T20:15:22.865621930+00:00 stdout F [INFO] 10.217.0.8:57376 - 21441 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000529565s 2025-08-13T20:15:22.866675980+00:00 stdout F [INFO] 10.217.0.8:43108 - 55457 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000485033s 2025-08-13T20:15:22.866750692+00:00 stdout F [INFO] 10.217.0.8:37354 - 59539 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000607437s 2025-08-13T20:15:30.900572435+00:00 stdout F [INFO] 10.217.0.73:36123 - 30169 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001139382s 2025-08-13T20:15:30.900679548+00:00 stdout F [INFO] 10.217.0.73:59681 - 33505 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001239995s 2025-08-13T20:15:35.189505304+00:00 stdout F [INFO] 10.217.0.19:57428 - 57631 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000825914s 2025-08-13T20:15:35.196183465+00:00 stdout F [INFO] 10.217.0.19:53814 - 50916 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001068761s 2025-08-13T20:15:41.629469340+00:00 stdout F [INFO] 10.217.0.62:34583 - 42316 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001442091s 2025-08-13T20:15:41.629735768+00:00 stdout F [INFO] 10.217.0.62:59091 - 44739 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001546074s 2025-08-13T20:15:50.643763762+00:00 stdout F [INFO] 10.217.0.19:39438 - 48635 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001065651s 2025-08-13T20:15:50.644611426+00:00 stdout F [INFO] 10.217.0.19:53652 - 38101 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001177314s 2025-08-13T20:15:56.362018229+00:00 stdout F [INFO] 10.217.0.64:41088 - 873 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001000629s 2025-08-13T20:15:56.362251565+00:00 stdout F [INFO] 10.217.0.64:54845 - 9984 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001465762s 2025-08-13T20:15:56.362723509+00:00 stdout F [INFO] 10.217.0.64:55499 - 33178 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001249626s 2025-08-13T20:15:56.363073269+00:00 stdout F [INFO] 10.217.0.64:47932 - 55426 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002146091s 2025-08-13T20:16:02.739548364+00:00 stdout F [INFO] 10.217.0.45:60222 - 28348 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000946617s 2025-08-13T20:16:02.739548364+00:00 stdout F [INFO] 10.217.0.45:48591 - 5325 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001547304s 2025-08-13T20:16:05.205548545+00:00 stdout F [INFO] 10.217.0.19:53559 - 36203 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001881074s 2025-08-13T20:16:05.209619061+00:00 stdout F [INFO] 10.217.0.19:49043 - 24811 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001894224s 2025-08-13T20:16:11.634174071+00:00 stdout F [INFO] 10.217.0.62:48721 - 37751 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002248584s 2025-08-13T20:16:11.634174071+00:00 stdout F [INFO] 10.217.0.62:38022 - 4312 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002118481s 2025-08-13T20:16:22.867687818+00:00 stdout F [INFO] 10.217.0.8:33638 - 1399 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002938574s 2025-08-13T20:16:22.867687818+00:00 stdout F [INFO] 10.217.0.8:60628 - 5377 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002751959s 2025-08-13T20:16:22.869538900+00:00 stdout F [INFO] 10.217.0.8:50403 - 34129 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001442112s 2025-08-13T20:16:22.869829829+00:00 stdout F [INFO] 10.217.0.8:56570 - 9246 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001772s 2025-08-13T20:16:23.832644995+00:00 stdout F [INFO] 10.217.0.19:37855 - 59033 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001472782s 2025-08-13T20:16:23.833301754+00:00 stdout F [INFO] 10.217.0.19:51228 - 32795 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001648597s 2025-08-13T20:16:35.191683097+00:00 stdout F [INFO] 10.217.0.19:58939 - 48494 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001907314s 2025-08-13T20:16:35.191683097+00:00 stdout F [INFO] 10.217.0.19:58218 - 25332 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003341335s 2025-08-13T20:16:41.630991336+00:00 stdout F [INFO] 10.217.0.62:36001 - 43032 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001942895s 2025-08-13T20:16:41.630991336+00:00 stdout F [INFO] 10.217.0.62:47530 - 49232 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002037278s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:41050 - 32678 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004154149s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:40954 - 48337 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00383684s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:40716 - 7441 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003682005s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:33635 - 35645 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003746728s 2025-08-13T20:17:02.788890842+00:00 stdout F [INFO] 10.217.0.45:34768 - 24306 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00141146s 2025-08-13T20:17:02.788934033+00:00 stdout F [INFO] 10.217.0.45:33409 - 18159 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002433539s 2025-08-13T20:17:05.225377351+00:00 stdout F [INFO] 10.217.0.19:40180 - 29563 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003059687s 2025-08-13T20:17:05.225767672+00:00 stdout F [INFO] 10.217.0.19:36494 - 38063 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003259813s 2025-08-13T20:17:11.648223999+00:00 stdout F [INFO] 10.217.0.62:46663 - 8219 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002281925s 2025-08-13T20:17:11.648223999+00:00 stdout F [INFO] 10.217.0.62:41844 - 54966 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000982588s 2025-08-13T20:17:22.871070539+00:00 stdout F [INFO] 10.217.0.8:32890 - 11124 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002656126s 2025-08-13T20:17:22.871352047+00:00 stdout F [INFO] 10.217.0.8:44256 - 52256 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003753657s 2025-08-13T20:17:22.874458756+00:00 stdout F [INFO] 10.217.0.8:58412 - 22097 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001787121s 2025-08-13T20:17:22.874552369+00:00 stdout F [INFO] 10.217.0.8:35431 - 14077 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001840333s 2025-08-13T20:17:34.179906137+00:00 stdout F [INFO] 10.217.0.19:44118 - 63889 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004259092s 2025-08-13T20:17:34.179906137+00:00 stdout F [INFO] 10.217.0.19:37748 - 10294 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005022833s 2025-08-13T20:17:35.190115476+00:00 stdout F [INFO] 10.217.0.19:40477 - 2326 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000728961s 2025-08-13T20:17:35.190115476+00:00 stdout F [INFO] 10.217.0.19:34273 - 12052 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000755701s 2025-08-13T20:17:41.668105549+00:00 stdout F [INFO] 10.217.0.62:46066 - 43508 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001331998s 2025-08-13T20:17:41.668105549+00:00 stdout F [INFO] 10.217.0.62:51181 - 37419 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00176431s 2025-08-13T20:17:49.346102970+00:00 stdout F [INFO] 10.217.0.19:36761 - 23898 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001356448s 2025-08-13T20:17:49.346102970+00:00 stdout F [INFO] 10.217.0.19:48274 - 65185 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001533424s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:38161 - 13410 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002194382s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:56655 - 1458 "AAAA IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003056427s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:33396 - 10926 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000674519s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:57160 - 47287 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000974197s 2025-08-13T20:18:02.989608509+00:00 stdout F [INFO] 10.217.0.45:45054 - 18970 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000822773s 2025-08-13T20:18:02.989696421+00:00 stdout F [INFO] 10.217.0.45:33510 - 45927 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001276436s 2025-08-13T20:18:05.199108315+00:00 stdout F [INFO] 10.217.0.19:54250 - 7398 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001587865s 2025-08-13T20:18:05.200078023+00:00 stdout F [INFO] 10.217.0.19:41213 - 54456 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000583547s 2025-08-13T20:18:11.579762720+00:00 stdout F [INFO] 10.217.0.62:41421 - 22525 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001183184s 2025-08-13T20:18:11.580219803+00:00 stdout F [INFO] 10.217.0.62:36929 - 36123 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000424312s 2025-08-13T20:18:11.649163252+00:00 stdout F [INFO] 10.217.0.62:42461 - 29602 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000733311s 2025-08-13T20:18:11.649199703+00:00 stdout F [INFO] 10.217.0.62:56212 - 10407 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000868245s 2025-08-13T20:18:22.870151721+00:00 stdout F [INFO] 10.217.0.8:49188 - 57176 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002603574s 2025-08-13T20:18:22.870151721+00:00 stdout F [INFO] 10.217.0.8:36995 - 42927 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002829531s 2025-08-13T20:18:26.402111553+00:00 stdout F [INFO] 10.217.0.19:58040 - 22048 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001067321s 2025-08-13T20:18:26.404015028+00:00 stdout F [INFO] 10.217.0.19:40164 - 61985 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002737419s 2025-08-13T20:18:33.587042474+00:00 stdout F [INFO] 10.217.0.82:53079 - 31680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001442992s 2025-08-13T20:18:33.587042474+00:00 stdout F [INFO] 10.217.0.82:56870 - 30507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001379969s 2025-08-13T20:18:34.092944759+00:00 stdout F [INFO] 10.217.0.82:35359 - 18217 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.003619673s 2025-08-13T20:18:34.094876035+00:00 stdout F [INFO] 10.217.0.82:46518 - 34600 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001328408s 2025-08-13T20:18:35.226993336+00:00 stdout F [INFO] 10.217.0.19:33501 - 47608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001215595s 2025-08-13T20:18:35.228134039+00:00 stdout F [INFO] 10.217.0.19:60161 - 50650 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002118781s 2025-08-13T20:18:41.636032550+00:00 stdout F [INFO] 10.217.0.62:38503 - 27459 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0010409s 2025-08-13T20:18:41.636032550+00:00 stdout F [INFO] 10.217.0.62:33295 - 42276 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001495803s 2025-08-13T20:18:43.085667028+00:00 stdout F [INFO] 10.217.0.19:60883 - 52790 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001888864s 2025-08-13T20:18:43.086028788+00:00 stdout F [INFO] 10.217.0.19:36040 - 17832 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000813713s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:41379 - 8858 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002860291s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:54022 - 180 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004060546s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:47538 - 9813 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004242701s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:45471 - 9488 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004319933s 2025-08-13T20:19:04.585379040+00:00 stdout F [INFO] 10.217.0.45:33914 - 2665 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002576953s 2025-08-13T20:19:04.585379040+00:00 stdout F [INFO] 10.217.0.45:39467 - 47346 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002866012s 2025-08-13T20:19:05.231523052+00:00 stdout F [INFO] 10.217.0.19:40211 - 52386 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001249576s 2025-08-13T20:19:05.231523052+00:00 stdout F [INFO] 10.217.0.19:33331 - 15313 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000792802s 2025-08-13T20:19:11.643530019+00:00 stdout F [INFO] 10.217.0.62:33123 - 49781 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000986178s 2025-08-13T20:19:11.643530019+00:00 stdout F [INFO] 10.217.0.62:49419 - 12181 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000731901s 2025-08-13T20:19:22.872287200+00:00 stdout F [INFO] 10.217.0.8:38758 - 21581 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002758359s 2025-08-13T20:19:22.872287200+00:00 stdout F [INFO] 10.217.0.8:46333 - 9209 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003077518s 2025-08-13T20:19:33.482702098+00:00 stdout F [INFO] 10.217.0.62:60479 - 42267 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002999965s 2025-08-13T20:19:33.482702098+00:00 stdout F [INFO] 10.217.0.62:60878 - 6876 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003089438s 2025-08-13T20:19:33.502325339+00:00 stdout F [INFO] 10.217.0.62:39222 - 4986 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001012609s 2025-08-13T20:19:33.502364490+00:00 stdout F [INFO] 10.217.0.62:60567 - 47569 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001098921s 2025-08-13T20:19:35.213953105+00:00 stdout F [INFO] 10.217.0.19:46242 - 54493 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001431731s 2025-08-13T20:19:35.213953105+00:00 stdout F [INFO] 10.217.0.19:36706 - 45113 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001586256s 2025-08-13T20:19:38.306546218+00:00 stdout F [INFO] 10.217.0.62:38211 - 9131 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000923397s 2025-08-13T20:19:38.306630511+00:00 stdout F [INFO] 10.217.0.62:46397 - 22873 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00072067s 2025-08-13T20:19:41.636439144+00:00 stdout F [INFO] 10.217.0.62:47865 - 15773 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000987888s 2025-08-13T20:19:41.636439144+00:00 stdout F [INFO] 10.217.0.62:50558 - 51504 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0010408s 2025-08-13T20:19:42.026544983+00:00 stdout F [INFO] 10.217.0.19:36856 - 18795 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000823494s 2025-08-13T20:19:42.026938724+00:00 stdout F [INFO] 10.217.0.19:34625 - 9057 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001165133s 2025-08-13T20:19:42.164504595+00:00 stdout F [INFO] 10.217.0.19:45942 - 61324 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000872075s 2025-08-13T20:19:42.164671840+00:00 stdout F [INFO] 10.217.0.19:41972 - 52459 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106331s 2025-08-13T20:19:42.532376799+00:00 stdout F [INFO] 10.217.0.62:46454 - 63259 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000792552s 2025-08-13T20:19:42.535296553+00:00 stdout F [INFO] 10.217.0.62:47598 - 51230 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004496309s 2025-08-13T20:19:45.401583608+00:00 stdout F [INFO] 10.217.0.62:56666 - 9412 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001494062s 2025-08-13T20:19:45.401583608+00:00 stdout F [INFO] 10.217.0.62:38089 - 57843 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001731939s 2025-08-13T20:19:48.020856666+00:00 stdout F [INFO] 10.217.0.19:49297 - 40636 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000908326s 2025-08-13T20:19:48.021008500+00:00 stdout F [INFO] 10.217.0.19:50217 - 33124 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000479443s 2025-08-13T20:19:52.707583818+00:00 stdout F [INFO] 10.217.0.19:41970 - 64782 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206844s 2025-08-13T20:19:52.707583818+00:00 stdout F [INFO] 10.217.0.19:38499 - 56421 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001794481s 2025-08-13T20:19:56.368549176+00:00 stdout F [INFO] 10.217.0.64:45663 - 24171 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001554525s 2025-08-13T20:19:56.368829954+00:00 stdout F [INFO] 10.217.0.64:56459 - 43664 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00173986s 2025-08-13T20:19:56.369091442+00:00 stdout F [INFO] 10.217.0.64:46431 - 38097 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002498151s 2025-08-13T20:19:56.369270407+00:00 stdout F [INFO] 10.217.0.64:46771 - 2172 "A IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002322767s 2025-08-13T20:20:01.071900077+00:00 stdout F [INFO] 10.217.0.19:36348 - 12264 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001161613s 2025-08-13T20:20:01.071900077+00:00 stdout F [INFO] 10.217.0.19:58531 - 49546 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880615s 2025-08-13T20:20:01.094909815+00:00 stdout F [INFO] 10.217.0.19:45307 - 27566 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001574105s 2025-08-13T20:20:01.094909815+00:00 stdout F [INFO] 10.217.0.19:46628 - 46836 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001631457s 2025-08-13T20:20:04.641614348+00:00 stdout F [INFO] 10.217.0.45:35084 - 63330 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000804223s 2025-08-13T20:20:04.641962067+00:00 stdout F [INFO] 10.217.0.45:45080 - 40107 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001006019s 2025-08-13T20:20:05.195151847+00:00 stdout F [INFO] 10.217.0.19:56212 - 48003 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000984178s 2025-08-13T20:20:05.196163296+00:00 stdout F [INFO] 10.217.0.19:39285 - 14780 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002337637s 2025-08-13T20:20:08.340700585+00:00 stdout F [INFO] 10.217.0.19:53833 - 40272 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001970646s 2025-08-13T20:20:08.340700585+00:00 stdout F [INFO] 10.217.0.19:41946 - 24164 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002420129s 2025-08-13T20:20:08.363431225+00:00 stdout F [INFO] 10.217.0.19:46307 - 5452 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000778262s 2025-08-13T20:20:08.363431225+00:00 stdout F [INFO] 10.217.0.19:35245 - 39248 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000445573s 2025-08-13T20:20:08.502443628+00:00 stdout F [INFO] 10.217.0.19:44873 - 4998 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001938985s 2025-08-13T20:20:08.502494999+00:00 stdout F [INFO] 10.217.0.19:44407 - 54307 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001842663s 2025-08-13T20:20:11.654028374+00:00 stdout F [INFO] 10.217.0.62:38324 - 64366 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002006857s 2025-08-13T20:20:11.655275660+00:00 stdout F [INFO] 10.217.0.62:45022 - 17853 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071154s 2025-08-13T20:20:22.874056324+00:00 stdout F [INFO] 10.217.0.8:56271 - 7607 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003067057s 2025-08-13T20:20:22.874056324+00:00 stdout F [INFO] 10.217.0.8:53777 - 62518 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004086857s 2025-08-13T20:20:22.875510136+00:00 stdout F [INFO] 10.217.0.8:47578 - 64153 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000949147s 2025-08-13T20:20:22.877051280+00:00 stdout F [INFO] 10.217.0.8:38799 - 52729 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000611507s 2025-08-13T20:20:30.984937467+00:00 stdout F [INFO] 10.217.0.73:34657 - 18108 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001264556s 2025-08-13T20:20:30.984937467+00:00 stdout F [INFO] 10.217.0.73:54396 - 42450 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000660829s 2025-08-13T20:20:35.217291606+00:00 stdout F [INFO] 10.217.0.19:58010 - 34249 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001632276s 2025-08-13T20:20:35.217291606+00:00 stdout F [INFO] 10.217.0.19:43193 - 27871 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001431521s 2025-08-13T20:20:41.640317422+00:00 stdout F [INFO] 10.217.0.62:33136 - 3157 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000968398s 2025-08-13T20:20:41.640317422+00:00 stdout F [INFO] 10.217.0.62:42410 - 28426 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001284317s 2025-08-13T20:20:56.375567479+00:00 stdout F [INFO] 10.217.0.64:46364 - 61033 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004658863s 2025-08-13T20:20:56.375655501+00:00 stdout F [INFO] 10.217.0.64:41556 - 23747 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003774458s 2025-08-13T20:20:56.375691332+00:00 stdout F [INFO] 10.217.0.64:37617 - 7065 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003570242s 2025-08-13T20:20:56.376064003+00:00 stdout F [INFO] 10.217.0.64:40574 - 17849 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004663733s 2025-08-13T20:21:02.344894508+00:00 stdout F [INFO] 10.217.0.19:36337 - 62106 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002328257s 2025-08-13T20:21:02.344894508+00:00 stdout F [INFO] 10.217.0.19:42735 - 19208 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002479191s 2025-08-13T20:21:04.715673334+00:00 stdout F [INFO] 10.217.0.45:52465 - 28402 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001366489s 2025-08-13T20:21:04.715673334+00:00 stdout F [INFO] 10.217.0.45:46488 - 13842 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00104926s 2025-08-13T20:21:05.226410270+00:00 stdout F [INFO] 10.217.0.19:46338 - 63671 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001718309s 2025-08-13T20:21:05.228319235+00:00 stdout F [INFO] 10.217.0.19:45280 - 55564 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000554165s 2025-08-13T20:21:11.645199285+00:00 stdout F [INFO] 10.217.0.62:40198 - 22184 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002646386s 2025-08-13T20:21:11.646182993+00:00 stdout F [INFO] 10.217.0.62:45571 - 1066 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003772287s 2025-08-13T20:21:22.874469926+00:00 stdout F [INFO] 10.217.0.8:52225 - 3320 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000639608s 2025-08-13T20:21:22.874469926+00:00 stdout F [INFO] 10.217.0.8:54099 - 33964 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003515781s 2025-08-13T20:21:22.877588095+00:00 stdout F [INFO] 10.217.0.8:52824 - 64521 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000438082s 2025-08-13T20:21:22.877652327+00:00 stdout F [INFO] 10.217.0.8:57555 - 59549 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001463112s 2025-08-13T20:21:35.222143431+00:00 stdout F [INFO] 10.217.0.19:60649 - 50574 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002335237s 2025-08-13T20:21:35.222143431+00:00 stdout F [INFO] 10.217.0.19:53041 - 33994 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006000452s 2025-08-13T20:21:41.644282842+00:00 stdout F [INFO] 10.217.0.62:40499 - 18965 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001027549s 2025-08-13T20:21:41.644545980+00:00 stdout F [INFO] 10.217.0.62:56677 - 33292 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001361999s 2025-08-13T20:21:46.720648942+00:00 stdout F [INFO] 10.217.0.19:52844 - 54819 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000996739s 2025-08-13T20:21:46.720648942+00:00 stdout F [INFO] 10.217.0.19:35401 - 32403 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106424s 2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:53848 - 12324 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001368299s 2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:58459 - 32629 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001531664s 2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:54440 - 63440 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001854953s 2025-08-13T20:21:56.365450051+00:00 stdout F [INFO] 10.217.0.64:45586 - 58199 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003015456s 2025-08-13T20:22:04.772447424+00:00 stdout F [INFO] 10.217.0.45:47208 - 6310 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002282725s 2025-08-13T20:22:04.773388391+00:00 stdout F [INFO] 10.217.0.45:40257 - 64603 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002641876s 2025-08-13T20:22:05.225832351+00:00 stdout F [INFO] 10.217.0.19:48589 - 4475 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206624s 2025-08-13T20:22:05.226189052+00:00 stdout F [INFO] 10.217.0.19:44554 - 62237 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001643067s 2025-08-13T20:22:09.059585917+00:00 stdout F [INFO] 10.217.0.8:46872 - 45123 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001205414s 2025-08-13T20:22:09.059585917+00:00 stdout F [INFO] 10.217.0.8:44809 - 58206 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001253486s 2025-08-13T20:22:09.061384968+00:00 stdout F [INFO] 10.217.0.8:40056 - 55732 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000912436s 2025-08-13T20:22:09.062433578+00:00 stdout F [INFO] 10.217.0.8:50159 - 60649 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000761292s 2025-08-13T20:22:11.644848192+00:00 stdout F [INFO] 10.217.0.62:55597 - 19802 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00140525s 2025-08-13T20:22:11.644848192+00:00 stdout F [INFO] 10.217.0.62:37054 - 62067 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001243296s 2025-08-13T20:22:11.959636427+00:00 stdout F [INFO] 10.217.0.19:37069 - 44420 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001090712s 2025-08-13T20:22:11.959636427+00:00 stdout F [INFO] 10.217.0.19:51196 - 10736 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001195314s 2025-08-13T20:22:18.381498281+00:00 stdout F [INFO] 10.217.0.82:33260 - 39922 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001861193s 2025-08-13T20:22:18.381703217+00:00 stdout F [INFO] 10.217.0.82:57325 - 15076 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002520261s 2025-08-13T20:22:18.819005535+00:00 stdout F [INFO] 10.217.0.82:59297 - 26216 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.00105767s 2025-08-13T20:22:18.819194680+00:00 stdout F [INFO] 10.217.0.82:59783 - 53810 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001294797s 2025-08-13T20:22:18.930505661+00:00 stdout F [INFO] 10.217.0.87:37155 - 34712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001688419s 2025-08-13T20:22:18.930695497+00:00 stdout F [INFO] 10.217.0.87:43269 - 1726 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001895594s 2025-08-13T20:22:18.996825837+00:00 stdout F [INFO] 10.217.0.87:57214 - 3424 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001348508s 2025-08-13T20:22:18.996914279+00:00 stdout F [INFO] 10.217.0.87:44706 - 18025 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002379739s 2025-08-13T20:22:22.879676495+00:00 stdout F [INFO] 10.217.0.8:46274 - 2923 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004247991s 2025-08-13T20:22:22.879676495+00:00 stdout F [INFO] 10.217.0.8:45842 - 32928 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004246601s 2025-08-13T20:22:22.881415405+00:00 stdout F [INFO] 10.217.0.8:57451 - 31094 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000848284s 2025-08-13T20:22:22.881448626+00:00 stdout F [INFO] 10.217.0.8:36628 - 63016 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001175654s 2025-08-13T20:22:35.224010916+00:00 stdout F [INFO] 10.217.0.19:43222 - 5389 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002881602s 2025-08-13T20:22:35.224683185+00:00 stdout F [INFO] 10.217.0.19:42907 - 51848 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004116278s 2025-08-13T20:22:41.646202026+00:00 stdout F [INFO] 10.217.0.62:43760 - 28539 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001611936s 2025-08-13T20:22:41.646202026+00:00 stdout F [INFO] 10.217.0.62:35224 - 43702 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002153242s 2025-08-13T20:22:56.369606611+00:00 stdout F [INFO] 10.217.0.64:59458 - 7965 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002547783s 2025-08-13T20:22:56.369606611+00:00 stdout F [INFO] 10.217.0.64:48398 - 54107 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002361567s 2025-08-13T20:22:56.370722603+00:00 stdout F [INFO] 10.217.0.64:36212 - 12955 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001255206s 2025-08-13T20:22:56.372176595+00:00 stdout F [INFO] 10.217.0.64:36092 - 52355 "A IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001346788s 2025-08-13T20:23:04.846709970+00:00 stdout F [INFO] 10.217.0.45:40440 - 63694 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001198544s 2025-08-13T20:23:04.846827404+00:00 stdout F [INFO] 10.217.0.45:38541 - 28253 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001752361s 2025-08-13T20:23:05.213124442+00:00 stdout F [INFO] 10.217.0.19:56738 - 47459 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000649099s 2025-08-13T20:23:05.213124442+00:00 stdout F [INFO] 10.217.0.19:46391 - 51426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00070577s 2025-08-13T20:23:11.652968446+00:00 stdout F [INFO] 10.217.0.62:53728 - 49866 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00210305s 2025-08-13T20:23:11.653085550+00:00 stdout F [INFO] 10.217.0.62:45456 - 15639 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002129911s 2025-08-13T20:23:21.588321740+00:00 stdout F [INFO] 10.217.0.19:60874 - 35135 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002127911s 2025-08-13T20:23:21.588321740+00:00 stdout F [INFO] 10.217.0.19:38718 - 31175 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002510541s 2025-08-13T20:23:22.871336408+00:00 stdout F [INFO] 10.217.0.8:36103 - 24028 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000782063s 2025-08-13T20:23:22.871515043+00:00 stdout F [INFO] 10.217.0.8:49021 - 53513 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000862385s 2025-08-13T20:23:22.873339626+00:00 stdout F [INFO] 10.217.0.8:37985 - 20136 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000531166s 2025-08-13T20:23:22.873339626+00:00 stdout F [INFO] 10.217.0.8:55991 - 14323 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000785993s 2025-08-13T20:23:35.238045523+00:00 stdout F [INFO] 10.217.0.19:36684 - 2848 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002465901s 2025-08-13T20:23:35.240079521+00:00 stdout F [INFO] 10.217.0.19:33588 - 27633 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001195794s 2025-08-13T20:23:41.647280071+00:00 stdout F [INFO] 10.217.0.62:42476 - 43342 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001642987s 2025-08-13T20:23:41.647415965+00:00 stdout F [INFO] 10.217.0.62:54440 - 6040 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001811622s 2025-08-13T20:23:45.402046379+00:00 stdout F [INFO] 10.217.0.19:44745 - 60022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000837994s 2025-08-13T20:23:45.402108621+00:00 stdout F [INFO] 10.217.0.19:43993 - 40694 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000823644s 2025-08-13T20:23:56.365249950+00:00 stdout F [INFO] 10.217.0.64:40236 - 37886 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003663245s 2025-08-13T20:23:56.365372663+00:00 stdout F [INFO] 10.217.0.64:46539 - 61891 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004957812s 2025-08-13T20:23:56.365478967+00:00 stdout F [INFO] 10.217.0.64:56603 - 40332 "AAAA IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004119848s 2025-08-13T20:23:56.365851677+00:00 stdout F [INFO] 10.217.0.64:58763 - 26661 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004710615s 2025-08-13T20:24:04.894208688+00:00 stdout F [INFO] 10.217.0.45:53176 - 54604 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001489782s 2025-08-13T20:24:04.895019631+00:00 stdout F [INFO] 10.217.0.45:35227 - 1868 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001315408s 2025-08-13T20:24:05.215869215+00:00 stdout F [INFO] 10.217.0.19:50430 - 16440 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002552023s 2025-08-13T20:24:05.215869215+00:00 stdout F [INFO] 10.217.0.19:43727 - 5350 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002644575s 2025-08-13T20:24:11.654122291+00:00 stdout F [INFO] 10.217.0.62:47207 - 44200 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000967948s 2025-08-13T20:24:11.654122291+00:00 stdout F [INFO] 10.217.0.62:35930 - 61246 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000842024s 2025-08-13T20:24:22.877013284+00:00 stdout F [INFO] 10.217.0.8:34036 - 32585 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002436069s 2025-08-13T20:24:22.877013284+00:00 stdout F [INFO] 10.217.0.8:37756 - 19886 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002389609s 2025-08-13T20:24:22.877728545+00:00 stdout F [INFO] 10.217.0.8:43919 - 30620 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000891985s
2025-08-13T20:24:22.877890659+00:00 stdout F [INFO] 10.217.0.8:59344 - 6990 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00070076s
2025-08-13T20:24:31.218619713+00:00 stdout F [INFO] 10.217.0.19:38893 - 10072 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001308328s
2025-08-13T20:24:31.218619713+00:00 stdout F [INFO] 10.217.0.19:49427 - 61245 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000495315s
2025-08-13T20:24:35.210617970+00:00 stdout F [INFO] 10.217.0.19:34324 - 3422 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206925s
2025-08-13T20:24:35.216068206+00:00 stdout F [INFO] 10.217.0.19:35128 - 8581 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000886335s
2025-08-13T20:24:41.649324149+00:00 stdout F [INFO] 10.217.0.62:45676 - 32314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001269026s
2025-08-13T20:24:41.650352639+00:00 stdout F [INFO] 10.217.0.62:59504 - 25827 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002697517s
2025-08-13T20:24:56.364831731+00:00 stdout F [INFO] 10.217.0.64:46731 - 49096 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002134331s
2025-08-13T20:24:56.364904373+00:00 stdout F [INFO] 10.217.0.64:47783 - 39549 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003447299s
2025-08-13T20:24:56.364915843+00:00 stdout F [INFO] 10.217.0.64:58109 - 10846 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004238141s
2025-08-13T20:24:56.365140799+00:00 stdout F [INFO] 10.217.0.64:42859 - 26107 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003592322s
2025-08-13T20:25:04.954625974+00:00 stdout F [INFO] 10.217.0.45:56098 - 16684 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00105109s
2025-08-13T20:25:04.954625974+00:00 stdout F [INFO] 10.217.0.45:60242 - 56074 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001544635s
2025-08-13T20:25:05.216681297+00:00 stdout F [INFO] 10.217.0.19:43870 - 53964 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880955s
2025-08-13T20:25:05.218245922+00:00 stdout F [INFO] 10.217.0.19:38683 - 13931 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002842042s
2025-08-13T20:25:11.655107217+00:00 stdout F [INFO] 10.217.0.62:42560 - 56615 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001234255s
2025-08-13T20:25:11.655152038+00:00 stdout F [INFO] 10.217.0.62:51024 - 56501 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001732259s
2025-08-13T20:25:22.875674448+00:00 stdout F [INFO] 10.217.0.8:37291 - 10141 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002284745s
2025-08-13T20:25:22.875674448+00:00 stdout F [INFO] 10.217.0.8:53584 - 44780 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00104894s
2025-08-13T20:25:22.876833261+00:00 stdout F [INFO] 10.217.0.8:44784 - 23561 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000829414s
2025-08-13T20:25:22.877283834+00:00 stdout F [INFO] 10.217.0.8:34509 - 32310 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000649649s
2025-08-13T20:25:31.108541627+00:00 stdout F [INFO] 10.217.0.73:54395 - 27034 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001383989s
2025-08-13T20:25:31.108673060+00:00 stdout F [INFO] 10.217.0.73:50430 - 47262 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000849244s
2025-08-13T20:25:35.223646663+00:00 stdout F [INFO] 10.217.0.19:59430 - 40751 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003284004s
2025-08-13T20:25:35.223646663+00:00 stdout F [INFO] 10.217.0.19:38543 - 12475 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003529581s
2025-08-13T20:25:40.839594434+00:00 stdout F [INFO] 10.217.0.19:54153 - 43863 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966588s
2025-08-13T20:25:40.839594434+00:00 stdout F [INFO] 10.217.0.19:60269 - 32295 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001198244s
2025-08-13T20:25:41.654132535+00:00 stdout F [INFO] 10.217.0.62:57722 - 60781 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001192605s
2025-08-13T20:25:41.654225638+00:00 stdout F [INFO] 10.217.0.62:57240 - 9692 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000768012s
2025-08-13T20:25:44.091295833+00:00 stdout F [INFO] 10.217.0.19:47837 - 6058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000956198s
2025-08-13T20:25:44.091480398+00:00 stdout F [INFO] 10.217.0.19:42351 - 47123 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001190144s
2025-08-13T20:25:56.362749204+00:00 stdout F [INFO] 10.217.0.64:57008 - 2843 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001875883s
2025-08-13T20:25:56.362749204+00:00 stdout F [INFO] 10.217.0.64:59145 - 41905 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000892495s
2025-08-13T20:25:56.364121553+00:00 stdout F [INFO] 10.217.0.64:54461 - 27316 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000776812s
2025-08-13T20:25:56.364366410+00:00 stdout F [INFO] 10.217.0.64:49156 - 29068 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001151533s
2025-08-13T20:26:05.008171728+00:00 stdout F [INFO] 10.217.0.45:56933 - 22116 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001376049s
2025-08-13T20:26:05.008171728+00:00 stdout F [INFO] 10.217.0.45:38062 - 21287 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001615627s
2025-08-13T20:26:05.225685178+00:00 stdout F [INFO] 10.217.0.19:42480 - 2952 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001715269s
2025-08-13T20:26:05.229846137+00:00 stdout F [INFO] 10.217.0.19:38966 - 45405 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000540405s
2025-08-13T20:26:11.660082231+00:00 stdout F [INFO] 10.217.0.62:36357 - 25846 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001317148s
2025-08-13T20:26:11.660082231+00:00 stdout F [INFO] 10.217.0.62:40131 - 27947 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001431501s
2025-08-13T20:26:22.885119750+00:00 stdout F [INFO] 10.217.0.8:39265 - 12506 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.008256897s
2025-08-13T20:26:22.885300815+00:00 stdout F [INFO] 10.217.0.8:60052 - 21113 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.009225783s
2025-08-13T20:26:22.887208370+00:00 stdout F [INFO] 10.217.0.8:50059 - 64664 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001089472s
2025-08-13T20:26:22.888349282+00:00 stdout F [INFO] 10.217.0.8:58525 - 41733 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002595604s
2025-08-13T20:26:35.217672305+00:00 stdout F [INFO] 10.217.0.19:50871 - 46595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003050657s
2025-08-13T20:26:35.223889833+00:00 stdout F [INFO] 10.217.0.19:34499 - 21925 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001911215s
2025-08-13T20:26:41.663241068+00:00 stdout F [INFO] 10.217.0.62:37952 - 57689 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002296435s
2025-08-13T20:26:41.663450064+00:00 stdout F [INFO] 10.217.0.62:32910 - 1261 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002552483s
2025-08-13T20:26:50.459861487+00:00 stdout F [INFO] 10.217.0.19:59463 - 36264 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129462s
2025-08-13T20:26:50.460897926+00:00 stdout F [INFO] 10.217.0.19:43264 - 12719 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003320605s
2025-08-13T20:26:56.361161538+00:00 stdout F [INFO] 10.217.0.64:40095 - 42976 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001215025s
2025-08-13T20:26:56.362470865+00:00 stdout F [INFO] 10.217.0.64:51703 - 18236 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00106133s
2025-08-13T20:26:56.362470865+00:00 stdout F [INFO] 10.217.0.64:42173 - 15221 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001270597s
2025-08-13T20:26:56.366062168+00:00 stdout F [INFO] 10.217.0.64:56428 - 48198 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.006411643s
2025-08-13T20:27:05.061678481+00:00 stdout F [INFO] 10.217.0.45:44274 - 14734 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001185503s
2025-08-13T20:27:05.061678481+00:00 stdout F [INFO] 10.217.0.45:55812 - 7493 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000598277s
2025-08-13T20:27:05.211882976+00:00 stdout F [INFO] 10.217.0.19:52471 - 16429 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001636387s
2025-08-13T20:27:05.215722676+00:00 stdout F [INFO] 10.217.0.19:54830 - 38464 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000740491s
2025-08-13T20:27:11.668728224+00:00 stdout F [INFO] 10.217.0.62:46677 - 64812 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001148663s
2025-08-13T20:27:11.668728224+00:00 stdout F [INFO] 10.217.0.62:51999 - 8607 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001020539s
2025-08-13T20:27:22.879288288+00:00 stdout F [INFO] 10.217.0.8:37705 - 46550 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002511792s
2025-08-13T20:27:22.879288288+00:00 stdout F [INFO] 10.217.0.8:37535 - 28497 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003007216s
2025-08-13T20:27:22.880758070+00:00 stdout F [INFO] 10.217.0.8:51257 - 27538 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001197134s
2025-08-13T20:27:22.880952866+00:00 stdout F [INFO] 10.217.0.8:53250 - 20415 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001178664s
2025-08-13T20:27:35.228295304+00:00 stdout F [INFO] 10.217.0.19:57583 - 19387 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003182911s
2025-08-13T20:27:35.228295304+00:00 stdout F [INFO] 10.217.0.19:53003 - 43498 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001328308s
2025-08-13T20:27:41.658872270+00:00 stdout F [INFO] 10.217.0.62:45081 - 383 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001441351s
2025-08-13T20:27:41.658872270+00:00 stdout F [INFO] 10.217.0.62:33850 - 51076 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001704479s
2025-08-13T20:27:42.780742399+00:00 stdout F [INFO] 10.217.0.19:39615 - 13995 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001031459s
2025-08-13T20:27:42.780742399+00:00 stdout F [INFO] 10.217.0.19:51920 - 40149 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000956647s
2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:58075 - 52190 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004438267s
2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:48199 - 47108 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005041774s
2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:51669 - 65440 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.005462966s
2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:47060 - 8587 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.005900449s
2025-08-13T20:27:58.930109056+00:00 stdout F [INFO] 10.217.0.19:55625 - 45930 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000983058s
2025-08-13T20:27:58.930109056+00:00 stdout F [INFO] 10.217.0.19:42361 - 34807 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129392s
2025-08-13T20:28:00.077029005+00:00 stdout F [INFO] 10.217.0.19:53335 - 44549 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000760012s
2025-08-13T20:28:00.077029005+00:00 stdout F [INFO] 10.217.0.19:40680 - 63548 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000908866s
2025-08-13T20:28:05.109126285+00:00 stdout F [INFO] 10.217.0.45:51541 - 679 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000994358s
2025-08-13T20:28:05.109126285+00:00 stdout F [INFO] 10.217.0.45:51904 - 3423 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001150933s
2025-08-13T20:28:05.222457713+00:00 stdout F [INFO] 10.217.0.19:56911 - 31417 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001446252s
2025-08-13T20:28:05.222507354+00:00 stdout F [INFO] 10.217.0.19:35415 - 62687 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001685248s
2025-08-13T20:28:11.573842296+00:00 stdout F [INFO] 10.217.0.62:38831 - 24140 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01252948s
2025-08-13T20:28:11.574213006+00:00 stdout F [INFO] 10.217.0.62:52754 - 21246 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012967142s
2025-08-13T20:28:11.652482776+00:00 stdout F [INFO] 10.217.0.62:44148 - 61190 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139288s
2025-08-13T20:28:11.652482776+00:00 stdout F [INFO] 10.217.0.62:40743 - 20956 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001218395s
2025-08-13T20:28:22.878744002+00:00 stdout F [INFO] 10.217.0.8:46071 - 3347 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002957745s
2025-08-13T20:28:22.878744002+00:00 stdout F [INFO] 10.217.0.8:60971 - 48670 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003520741s
2025-08-13T20:28:22.880619546+00:00 stdout F [INFO] 10.217.0.8:58226 - 45100 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001212545s
2025-08-13T20:28:22.880977026+00:00 stdout F [INFO] 10.217.0.8:44978 - 48564 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001512964s
2025-08-13T20:28:35.237885214+00:00 stdout F [INFO] 10.217.0.19:54157 - 35131 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002727999s
2025-08-13T20:28:35.238211853+00:00 stdout F [INFO] 10.217.0.19:37322 - 15708 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003765728s
2025-08-13T20:28:41.656353106+00:00 stdout F [INFO] 10.217.0.62:34350 - 14878 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001065571s
2025-08-13T20:28:41.656353106+00:00 stdout F [INFO] 10.217.0.62:42110 - 17498 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000952677s
2025-08-13T20:28:56.363747197+00:00 stdout F [INFO] 10.217.0.64:59117 - 59507 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003759828s
2025-08-13T20:28:56.363747197+00:00 stdout F [INFO] 10.217.0.64:39236 - 60297 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004420417s
2025-08-13T20:28:56.364644013+00:00 stdout F [INFO] 10.217.0.64:39886 - 59994 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002289126s
2025-08-13T20:28:56.364993613+00:00 stdout F [INFO] 10.217.0.64:51482 - 47269 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001537855s
2025-08-13T20:29:05.158973531+00:00 stdout F [INFO] 10.217.0.45:49586 - 293 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001536384s
2025-08-13T20:29:05.158973531+00:00 stdout F [INFO] 10.217.0.45:37945 - 29941 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002622176s
2025-08-13T20:29:05.246980231+00:00 stdout F [INFO] 10.217.0.19:39196 - 22276 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000976709s
2025-08-13T20:29:05.247213568+00:00 stdout F [INFO] 10.217.0.19:50331 - 9060 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000653139s
2025-08-13T20:29:09.709941691+00:00 stdout F [INFO] 10.217.0.19:49760 - 39795 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001007329s
2025-08-13T20:29:09.710193458+00:00 stdout F [INFO] 10.217.0.19:52124 - 16792 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002368648s
2025-08-13T20:29:11.659446941+00:00 stdout F [INFO] 10.217.0.62:48548 - 25764 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000931517s
2025-08-13T20:29:11.659446941+00:00 stdout F [INFO] 10.217.0.62:52247 - 39213 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001345389s
2025-08-13T20:29:22.877995183+00:00 stdout F [INFO] 10.217.0.8:55820 - 24792 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004234281s
2025-08-13T20:29:22.877995183+00:00 stdout F [INFO] 10.217.0.8:36516 - 18894 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003181971s
2025-08-13T20:29:22.879826456+00:00 stdout F [INFO] 10.217.0.8:35384 - 34055 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0010181s
2025-08-13T20:29:22.880051702+00:00 stdout F [INFO] 10.217.0.8:37455 - 35412 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001280907s
2025-08-13T20:29:33.508718749+00:00 stdout F [INFO] 10.217.0.62:52420 - 65467 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005813477s
2025-08-13T20:29:33.508718749+00:00 stdout F [INFO] 10.217.0.62:59946 - 45827 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005716655s
2025-08-13T20:29:33.548111311+00:00 stdout F [INFO] 10.217.0.62:39576 - 4767 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001328578s
2025-08-13T20:29:33.549247944+00:00 stdout F [INFO] 10.217.0.62:39409 - 20523 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00106722s
2025-08-13T20:29:35.242365363+00:00 stdout F [INFO] 10.217.0.19:52444 - 13513 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000664429s
2025-08-13T20:29:35.245163164+00:00 stdout F [INFO] 10.217.0.19:57824 - 18608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003030437s
2025-08-13T20:29:38.331371308+00:00 stdout F [INFO] 10.217.0.62:50284 - 47289 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000978308s
2025-08-13T20:29:38.331371308+00:00 stdout F [INFO] 10.217.0.62:54421 - 29466 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00069253s
2025-08-13T20:29:41.472703057+00:00 stdout F [INFO] 10.217.0.19:50960 - 37288 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00139353s
2025-08-13T20:29:41.472703057+00:00 stdout F [INFO] 10.217.0.19:40772 - 35345 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000867005s
2025-08-13T20:29:41.655946815+00:00 stdout F [INFO] 10.217.0.62:46017 - 45350 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000554786s
2025-08-13T20:29:41.657692135+00:00 stdout F [INFO] 10.217.0.62:38458 - 9290 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002201973s
2025-08-13T20:29:42.029686128+00:00 stdout F [INFO] 10.217.0.19:49384 - 17937 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001275446s
2025-08-13T20:29:42.030119931+00:00 stdout F [INFO] 10.217.0.19:34126 - 31563 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002031728s
2025-08-13T20:29:42.195863505+00:00 stdout F [INFO] 10.217.0.19:50292 - 8646 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000771612s
2025-08-13T20:29:42.195863505+00:00 stdout F [INFO] 10.217.0.19:41813 - 53616 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001189794s
2025-08-13T20:29:42.548149462+00:00 stdout F [INFO] 10.217.0.62:46761 - 59090 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001441271s
2025-08-13T20:29:42.548391029+00:00 stdout F [INFO] 10.217.0.62:44356 - 37096 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001953676s
2025-08-13T20:29:45.402769419+00:00 stdout F [INFO] 10.217.0.62:48046 - 45459 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000817103s
2025-08-13T20:29:45.402769419+00:00 stdout F [INFO] 10.217.0.62:49429 - 57122 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000449153s
2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:57376 - 19028 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003027407s
2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:53232 - 19566 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003590323s
2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:50097 - 25136 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003240624s
2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:47419 - 35557 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003612844s
2025-08-13T20:30:01.075376634+00:00 stdout F [INFO] 10.217.0.19:42400 - 32330 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00315681s
2025-08-13T20:30:01.075376634+00:00 stdout F [INFO] 10.217.0.19:43531 - 21848 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003187702s
2025-08-13T20:30:01.100151146+00:00 stdout F [INFO] 10.217.0.19:37195 - 2542 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000633388s
2025-08-13T20:30:01.100151146+00:00 stdout F [INFO] 10.217.0.19:48203 - 48919 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000567806s
2025-08-13T20:30:05.230154427+00:00 stdout F [INFO] 10.217.0.45:36707 - 63835 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001633947s
2025-08-13T20:30:05.230154427+00:00 stdout F [INFO] 10.217.0.45:54164 - 55506 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002323076s
2025-08-13T20:30:05.241957226+00:00 stdout F [INFO] 10.217.0.19:56331 - 59077 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003138751s
2025-08-13T20:30:05.241957226+00:00 stdout F [INFO] 10.217.0.19:45527 - 23788 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003551052s
2025-08-13T20:30:08.348942269+00:00 stdout F [INFO] 10.217.0.19:33338 - 2468 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00103812s
2025-08-13T20:30:08.348942269+00:00 stdout F [INFO] 10.217.0.19:45130 - 60291 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001927446s
2025-08-13T20:30:08.397565217+00:00 stdout F [INFO] 10.217.0.19:35130 - 55305 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000807704s
2025-08-13T20:30:08.397565217+00:00 stdout F [INFO] 10.217.0.19:43466 - 56354 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002227894s
2025-08-13T20:30:08.542631706+00:00 stdout F [INFO] 10.217.0.19:32789 - 5660 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001609206s
2025-08-13T20:30:08.542692378+00:00 stdout F [INFO] 10.217.0.19:34852 - 15652 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001358189s
2025-08-13T20:30:11.680732082+00:00 stdout F [INFO] 10.217.0.62:57583 - 18489 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001232076s
2025-08-13T20:30:11.680732082+00:00 stdout F [INFO] 10.217.0.62:41359 - 11743 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001834313s
2025-08-13T20:30:19.348744243+00:00 stdout F [INFO] 10.217.0.19:60500 - 65234 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002055099s
2025-08-13T20:30:19.348744243+00:00 stdout F [INFO] 10.217.0.19:34223 - 16093 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002216144s
2025-08-13T20:30:22.880352091+00:00 stdout F [INFO] 10.217.0.8:43568 - 50549 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001704109s
2025-08-13T20:30:22.880603728+00:00 stdout F [INFO] 10.217.0.8:47766 - 6147 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00312927s
2025-08-13T20:30:22.882453871+00:00 stdout F [INFO] 10.217.0.8:54083 - 61218 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000749791s
2025-08-13T20:30:22.882881273+00:00 stdout F [INFO] 10.217.0.8:54024 - 55183 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001355379s
2025-08-13T20:30:31.155485163+00:00 stdout F [INFO] 10.217.0.73:38005 - 4639 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002329047s
2025-08-13T20:30:31.155485163+00:00 stdout F [INFO] 10.217.0.73:38446 - 24841 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003351347s
2025-08-13T20:30:33.360074255+00:00 stdout F [INFO] 10.217.0.19:45240 - 18271 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001849443s
2025-08-13T20:30:33.360074255+00:00 stdout F [INFO] 10.217.0.19:56701 - 35882 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002291556s
2025-08-13T20:30:33.386018850+00:00 stdout F [INFO] 10.217.0.19:59664 - 50456 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004310074s
2025-08-13T20:30:33.386313009+00:00 stdout F [INFO] 10.217.0.19:58470 - 46639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005115767s
2025-08-13T20:30:35.215184861+00:00 stdout F [INFO] 10.217.0.19:49596 - 39944 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002485991s
2025-08-13T20:30:35.217097636+00:00 stdout F [INFO] 10.217.0.19:37961 - 22633 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000818494s
2025-08-13T20:30:41.662681529+00:00 stdout F [INFO] 10.217.0.62:49564 - 50238 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000863175s
2025-08-13T20:30:41.662681529+00:00 stdout F [INFO] 10.217.0.62:38814 - 44130 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001675658s
2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:43125 - 35104 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002406829s
2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:35781 - 62498 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003089479s
2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:37735 - 22294 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00384699s
2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:37353 - 20789 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003985004s
2025-08-13T20:31:05.224302085+00:00 stdout F [INFO] 10.217.0.19:57220 - 36252 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001177644s
2025-08-13T20:31:05.224433359+00:00 stdout F [INFO] 10.217.0.19:59782 - 30926 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000577116s
2025-08-13T20:31:05.326079580+00:00 stdout F [INFO] 10.217.0.45:45832 - 46685 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000889425s
2025-08-13T20:31:05.326079580+00:00 stdout F [INFO] 10.217.0.45:40414 - 1549 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001170953s
2025-08-13T20:31:11.673462830+00:00 stdout F [INFO] 10.217.0.62:38378 - 23668 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002980646s
2025-08-13T20:31:11.673636245+00:00 stdout F [INFO] 10.217.0.62:40505 - 54986 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002723098s
2025-08-13T20:31:22.881264896+00:00 stdout F [INFO] 10.217.0.8:55399 - 52791 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003612124s
2025-08-13T20:31:22.881264896+00:00 stdout F [INFO] 10.217.0.8:38597 - 2000 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004345355s
2025-08-13T20:31:22.883736167+00:00 stdout F [INFO] 10.217.0.8:50837 - 15887 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00104287s
2025-08-13T20:31:22.883736167+00:00 stdout F [INFO] 10.217.0.8:43696 - 36032 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001087321s
2025-08-13T20:31:28.968581310+00:00 stdout F [INFO] 10.217.0.19:39400 - 8426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001307347s
2025-08-13T20:31:28.968882459+00:00 stdout F [INFO] 10.217.0.19:36145 - 62501 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001686888s
2025-08-13T20:31:35.240270472+00:00 stdout F [INFO] 10.217.0.19:36421 - 17002 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002607855s
2025-08-13T20:31:35.241578470+00:00 stdout F [INFO] 10.217.0.19:59967 - 53451 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004725376s
2025-08-13T20:31:40.160974940+00:00 stdout F [INFO] 10.217.0.19:38477 - 33833 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001063061s
2025-08-13T20:31:40.160974940+00:00 stdout F [INFO] 10.217.0.19:37658 - 8896 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001155733s
2025-08-13T20:31:41.663014977+00:00 stdout F [INFO] 10.217.0.62:51660 - 16571 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000794503s
2025-08-13T20:31:41.663014977+00:00 stdout F [INFO] 10.217.0.62:52306 - 23225 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000757912s
2025-08-13T20:31:56.361142474+00:00 stdout F [INFO] 10.217.0.64:60172 - 9058 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00172696s
2025-08-13T20:31:56.361142474+00:00 stdout F [INFO] 10.217.0.64:48943 - 4720 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003051658s
2025-08-13T20:31:56.363961735+00:00 stdout F [INFO] 10.217.0.64:50037 - 34579 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001489083s
2025-08-13T20:31:56.365112368+00:00 stdout F [INFO] 10.217.0.64:43784 - 39155 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001770351s
2025-08-13T20:32:05.226220156+00:00 stdout F [INFO] 10.217.0.19:43101 - 63759 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175152s
2025-08-13T20:32:05.226220156+00:00 stdout F [INFO] 10.217.0.19:55083 - 62337 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002368358s
2025-08-13T20:32:05.381093178+00:00 stdout F [INFO] 10.217.0.45:52832 - 65452 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000965308s
2025-08-13T20:32:05.381833029+00:00 stdout F [INFO] 10.217.0.45:36205 - 44015 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001442462s
2025-08-13T20:32:11.669659366+00:00 stdout F [INFO] 10.217.0.62:46206 - 42199 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001127753s
2025-08-13T20:32:11.669659366+00:00 stdout F [INFO] 10.217.0.62:55044 - 47589 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001251706s
2025-08-13T20:32:22.887274833+00:00 stdout F [INFO] 10.217.0.8:57254 - 11688 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002743639s
2025-08-13T20:32:22.887327804+00:00 stdout F [INFO] 10.217.0.8:40915 - 29430 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003152451s
2025-08-13T20:32:22.889946979+00:00 stdout F [INFO] 10.217.0.8:58018 - 20602 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001598576s
2025-08-13T20:32:22.890211667+00:00 stdout F [INFO] 10.217.0.8:35529 - 19346 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001166383s
2025-08-13T20:32:35.244724858+00:00 stdout F [INFO] 10.217.0.19:45669 - 34426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002952165s
2025-08-13T20:32:35.244724858+00:00 stdout F [INFO] 10.217.0.19:51147 - 51510 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003438999s
2025-08-13T20:32:38.580841186+00:00 stdout F [INFO] 10.217.0.19:43690 - 3324 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001319678s
2025-08-13T20:32:38.580988980+00:00 stdout F [INFO] 10.217.0.19:56587 - 24648 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001099972s
2025-08-13T20:32:41.667486954+00:00 stdout F [INFO] 10.217.0.62:55855 - 31828 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001149163s
2025-08-13T20:32:41.667486954+00:00 stdout F [INFO] 10.217.0.62:34829 - 3463 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001708029s
2025-08-13T20:32:56.366707428+00:00 stdout F [INFO] 10.217.0.64:51627 - 65020 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003232263s
2025-08-13T20:32:56.366707428+00:00 stdout F [INFO] 10.217.0.64:52149 - 8902 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003574023s
2025-08-13T20:32:56.368728506+00:00 stdout F [INFO] 10.217.0.64:51624 - 35655 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000891486s
2025-08-13T20:32:56.369118527+00:00 stdout F [INFO] 10.217.0.64:60490 - 38135 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00103067s
2025-08-13T20:33:05.238556718+00:00 stdout F [INFO] 10.217.0.19:59718 - 38615 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00139407s
2025-08-13T20:33:05.238651740+00:00 stdout F [INFO] 10.217.0.19:54459 - 19380 "A IN oauth-openshift.apps-crc.testing.crc.testing.
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001287367s 2025-08-13T20:33:05.431433402+00:00 stdout F [INFO] 10.217.0.45:50366 - 22220 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000820193s 2025-08-13T20:33:05.431991198+00:00 stdout F [INFO] 10.217.0.45:52058 - 7055 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000636528s 2025-08-13T20:33:11.674968627+00:00 stdout F [INFO] 10.217.0.62:53746 - 415 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00138406s 2025-08-13T20:33:11.674968627+00:00 stdout F [INFO] 10.217.0.62:46126 - 49757 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002078379s 2025-08-13T20:33:22.881237326+00:00 stdout F [INFO] 10.217.0.8:40930 - 16922 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00244019s 2025-08-13T20:33:22.881237326+00:00 stdout F [INFO] 10.217.0.8:34743 - 60811 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002328907s 2025-08-13T20:33:22.883380997+00:00 stdout F [INFO] 10.217.0.8:58520 - 29998 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001180044s 2025-08-13T20:33:22.883380997+00:00 stdout F [INFO] 10.217.0.8:33257 - 4060 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001460842s 2025-08-13T20:33:33.398825661+00:00 stdout F [INFO] 10.217.0.82:32850 - 61432 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003149961s 2025-08-13T20:33:33.398825661+00:00 stdout F [INFO] 10.217.0.82:33287 - 31809 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003351926s 2025-08-13T20:33:33.878633203+00:00 stdout F [INFO] 10.217.0.82:59293 - 7022 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000809964s 2025-08-13T20:33:33.878633203+00:00 stdout F [INFO] 10.217.0.82:60830 - 12546 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000924657s 2025-08-13T20:33:35.232345647+00:00 stdout F [INFO] 10.217.0.19:36818 - 54936 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001196634s 2025-08-13T20:33:35.232345647+00:00 stdout F [INFO] 10.217.0.19:41457 - 46863 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001223505s 2025-08-13T20:33:38.860763048+00:00 stdout F [INFO] 10.217.0.19:34258 - 15358 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001795122s 2025-08-13T20:33:38.860763048+00:00 stdout F [INFO] 10.217.0.19:42002 - 13607 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002488091s 2025-08-13T20:33:41.668370103+00:00 stdout F [INFO] 10.217.0.62:60543 - 3360 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001166404s 2025-08-13T20:33:41.668370103+00:00 stdout F [INFO] 10.217.0.62:56859 - 37798 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001088791s 2025-08-13T20:33:48.204972862+00:00 stdout F [INFO] 10.217.0.19:60096 - 61982 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000928067s 2025-08-13T20:33:48.206303960+00:00 stdout F [INFO] 10.217.0.19:60303 - 53773 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002658236s 2025-08-13T20:33:56.360014243+00:00 stdout F [INFO] 10.217.0.64:51406 - 31658 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001163614s 2025-08-13T20:33:56.360014243+00:00 stdout F [INFO] 10.217.0.64:48360 - 33699 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001532054s 2025-08-13T20:33:56.361576218+00:00 stdout F [INFO] 10.217.0.64:50492 - 53197 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000629309s 2025-08-13T20:33:56.362663479+00:00 stdout F [INFO] 10.217.0.64:48910 - 50058 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001494913s 2025-08-13T20:34:05.229192181+00:00 stdout F [INFO] 10.217.0.19:41931 - 36188 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001559254s 2025-08-13T20:34:05.229192181+00:00 stdout F [INFO] 10.217.0.19:56602 - 24472 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001454581s 2025-08-13T20:34:05.481479033+00:00 stdout F [INFO] 10.217.0.45:46813 - 52256 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000608437s 2025-08-13T20:34:05.481870144+00:00 stdout F [INFO] 10.217.0.45:42826 - 54158 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000893826s 2025-08-13T20:34:11.675507967+00:00 stdout F [INFO] 10.217.0.62:42151 - 37111 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001868794s 2025-08-13T20:34:11.679175752+00:00 stdout F [INFO] 10.217.0.62:59737 - 62734 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001298907s 2025-08-13T20:34:22.881957784+00:00 stdout F [INFO] 10.217.0.8:43179 - 46202 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002114951s 2025-08-13T20:34:22.881957784+00:00 stdout F [INFO] 10.217.0.8:42515 - 56482 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002568244s 2025-08-13T20:34:22.883479128+00:00 stdout F [INFO] 10.217.0.8:39755 - 48303 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000877095s 2025-08-13T20:34:22.883726015+00:00 stdout F [INFO] 10.217.0.8:39774 - 22805 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000591607s 2025-08-13T20:34:35.232811425+00:00 stdout F [INFO] 10.217.0.19:60022 - 37769 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003804899s 2025-08-13T20:34:35.232811425+00:00 stdout F [INFO] 10.217.0.19:39283 - 32612 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004103508s 2025-08-13T20:34:41.678398886+00:00 stdout F [INFO] 10.217.0.62:35730 - 62817 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001451642s 2025-08-13T20:34:41.678520039+00:00 stdout F [INFO] 10.217.0.62:47850 - 28599 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003685396s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:38897 - 642 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00246145s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:52998 - 17838 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002885533s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:52702 - 63679 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003412048s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:54492 - 49052 "A IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002701758s 2025-08-13T20:34:57.836345007+00:00 stdout F [INFO] 10.217.0.19:42490 - 13001 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002930475s 2025-08-13T20:34:57.836416799+00:00 stdout F [INFO] 10.217.0.19:46472 - 20548 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003218083s 2025-08-13T20:35:05.227701626+00:00 stdout F [INFO] 10.217.0.19:52788 - 47586 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00484177s 2025-08-13T20:35:05.227701626+00:00 stdout F [INFO] 10.217.0.19:51739 - 54276 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005381574s 2025-08-13T20:35:05.528765310+00:00 stdout F [INFO] 10.217.0.45:41036 - 749 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001621397s 2025-08-13T20:35:05.528765310+00:00 stdout F [INFO] 10.217.0.45:40449 - 23309 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001683448s 2025-08-13T20:35:11.673861094+00:00 stdout F [INFO] 10.217.0.62:50333 - 63789 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000766982s 2025-08-13T20:35:11.674033429+00:00 stdout F [INFO] 10.217.0.62:51423 - 972 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001759371s 2025-08-13T20:35:22.883890497+00:00 stdout F [INFO] 10.217.0.8:35533 - 23190 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002822982s 2025-08-13T20:35:22.883890497+00:00 stdout F [INFO] 10.217.0.8:48943 - 18475 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003335756s 2025-08-13T20:35:22.884929167+00:00 stdout F [INFO] 10.217.0.8:51669 - 49205 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000912526s 2025-08-13T20:35:22.885243076+00:00 stdout F [INFO] 10.217.0.8:35489 - 30954 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000887996s 2025-08-13T20:35:31.201730869+00:00 stdout F [INFO] 10.217.0.73:36025 - 37682 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00104204s 2025-08-13T20:35:31.201730869+00:00 stdout F [INFO] 10.217.0.73:44698 - 26406 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001180854s 2025-08-13T20:35:35.230980092+00:00 stdout F [INFO] 10.217.0.19:50589 - 7915 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001655618s 2025-08-13T20:35:35.230980092+00:00 stdout F [INFO] 10.217.0.19:55591 - 64446 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001892654s 2025-08-13T20:35:37.541280062+00:00 stdout F [INFO] 10.217.0.19:36145 - 55938 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000741561s 2025-08-13T20:35:37.541280062+00:00 stdout F [INFO] 10.217.0.19:36908 - 8687 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000822553s 2025-08-13T20:35:41.673136284+00:00 stdout F [INFO] 10.217.0.62:47360 - 54352 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000790583s 2025-08-13T20:35:41.675182283+00:00 stdout F [INFO] 10.217.0.62:52225 - 16649 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001279057s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:40533 - 46498 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002024898s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:53651 - 56781 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001327868s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:40804 - 18331 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002547243s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:49348 - 64759 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001311637s 2025-08-13T20:36:05.242703867+00:00 stdout F [INFO] 10.217.0.19:52997 - 8478 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002759859s 2025-08-13T20:36:05.242703867+00:00 stdout F [INFO] 10.217.0.19:37916 - 39157 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003782448s 2025-08-13T20:36:05.582597387+00:00 stdout F [INFO] 10.217.0.45:41747 - 26619 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000789732s 2025-08-13T20:36:05.582597387+00:00 stdout F [INFO] 10.217.0.45:56187 - 63967 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000641038s 2025-08-13T20:36:07.456016709+00:00 stdout F [INFO] 10.217.0.19:55081 - 39422 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000868175s 2025-08-13T20:36:07.456016709+00:00 stdout F [INFO] 10.217.0.19:50072 - 41015 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000941947s 2025-08-13T20:36:11.676425877+00:00 stdout F [INFO] 10.217.0.62:37814 - 7136 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001359319s 2025-08-13T20:36:11.676425877+00:00 stdout F [INFO] 10.217.0.62:33472 - 12989 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001268686s 2025-08-13T20:36:22.883978362+00:00 stdout F [INFO] 10.217.0.8:58021 - 44272 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003780959s 2025-08-13T20:36:22.884527128+00:00 stdout F [INFO] 10.217.0.8:35563 - 10048 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004389926s 2025-08-13T20:36:22.889843281+00:00 stdout F [INFO] 10.217.0.8:51549 - 40045 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001655578s 2025-08-13T20:36:22.889843281+00:00 stdout F [INFO] 10.217.0.8:41648 - 55672 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001504023s 2025-08-13T20:36:35.261662630+00:00 stdout F [INFO] 10.217.0.19:44748 - 55600 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007348752s 2025-08-13T20:36:35.261936927+00:00 stdout F [INFO] 10.217.0.19:58670 - 34210 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00869192s 2025-08-13T20:36:41.673019005+00:00 stdout F [INFO] 10.217.0.62:47774 - 57002 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000775162s 2025-08-13T20:36:41.673019005+00:00 stdout F [INFO] 10.217.0.62:58481 - 36932 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00069989s 2025-08-13T20:36:56.360899890+00:00 stdout F [INFO] 10.217.0.64:49690 - 6346 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001882895s 2025-08-13T20:36:56.360899890+00:00 stdout F [INFO] 10.217.0.64:33376 - 48808 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002107641s 2025-08-13T20:36:56.362291470+00:00 stdout F [INFO] 10.217.0.64:46296 - 927 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00072105s 2025-08-13T20:36:56.362291470+00:00 stdout F [INFO] 10.217.0.64:36520 - 31547 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000756802s 2025-08-13T20:37:05.233258069+00:00 stdout F [INFO] 10.217.0.19:59772 - 14035 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002645536s 2025-08-13T20:37:05.233258069+00:00 stdout F [INFO] 10.217.0.19:37719 - 14045 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004455419s 2025-08-13T20:37:05.640907632+00:00 stdout F [INFO] 10.217.0.45:38874 - 9095 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000904947s 2025-08-13T20:37:05.640907632+00:00 stdout F [INFO] 10.217.0.45:47158 - 35513 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000973378s 2025-08-13T20:37:11.675012994+00:00 stdout F [INFO] 10.217.0.62:52041 - 28255 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001223665s 2025-08-13T20:37:11.675012994+00:00 stdout F [INFO] 10.217.0.62:53127 - 64300 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001225125s 2025-08-13T20:37:17.087581728+00:00 stdout F [INFO] 10.217.0.19:41701 - 21437 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001998977s 2025-08-13T20:37:17.088880585+00:00 stdout F [INFO] 10.217.0.19:35947 - 41854 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002072s 2025-08-13T20:37:22.884701150+00:00 stdout F [INFO] 10.217.0.8:33740 - 41869 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001245746s 2025-08-13T20:37:22.884864495+00:00 stdout F [INFO] 10.217.0.8:47477 - 64327 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002157842s 2025-08-13T20:37:22.885529914+00:00 stdout F [INFO] 10.217.0.8:36284 - 15082 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000625468s 2025-08-13T20:37:22.885654827+00:00 stdout F [INFO] 10.217.0.8:51765 - 18434 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000766943s 2025-08-13T20:37:35.259845730+00:00 stdout F [INFO] 10.217.0.19:57972 - 33316 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002853822s 2025-08-13T20:37:35.259933912+00:00 stdout F [INFO] 10.217.0.19:38435 - 20685 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000810904s 2025-08-13T20:37:36.229228104+00:00 stdout F [INFO] 10.217.0.19:52269 - 385 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000973738s 2025-08-13T20:37:36.229265696+00:00 stdout F [INFO] 10.217.0.19:51103 - 9389 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001151813s 2025-08-13T20:37:41.676036215+00:00 stdout F [INFO] 10.217.0.62:57770 - 42152 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00107205s 2025-08-13T20:37:41.676216830+00:00 stdout F [INFO] 10.217.0.62:38336 - 46730 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001156253s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:57040 - 20480 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003036318s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:58395 - 10701 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002653297s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:42967 - 18626 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002456391s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:46951 - 59146 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002577614s 2025-08-13T20:38:05.237364761+00:00 stdout F [INFO] 10.217.0.19:58012 - 10248 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00347164s 2025-08-13T20:38:05.237436263+00:00 stdout F [INFO] 10.217.0.19:43124 - 45715 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003759018s 2025-08-13T20:38:05.684249535+00:00 stdout F [INFO] 10.217.0.45:44888 - 52674 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000809783s 2025-08-13T20:38:05.684453351+00:00 stdout F [INFO] 10.217.0.45:51261 - 16994 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. 
udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000764562s 2025-08-13T20:38:11.559879617+00:00 stdout F [INFO] 10.217.0.62:40287 - 3642 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001132343s 2025-08-13T20:38:11.559879617+00:00 stdout F [INFO] 10.217.0.62:43178 - 21365 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001228095s 2025-08-13T20:38:11.671238298+00:00 stdout F [INFO] 10.217.0.62:53652 - 48116 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000799183s 2025-08-13T20:38:11.671454264+00:00 stdout F [INFO] 10.217.0.62:47552 - 28323 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000930827s 2025-08-13T20:38:22.888462531+00:00 stdout F [INFO] 10.217.0.8:60705 - 40432 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004011316s 2025-08-13T20:38:22.888594465+00:00 stdout F [INFO] 10.217.0.8:51067 - 23753 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002672687s 2025-08-13T20:38:22.890079978+00:00 stdout F [INFO] 10.217.0.8:49803 - 63924 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000994649s 2025-08-13T20:38:22.890333025+00:00 stdout F [INFO] 10.217.0.8:40392 - 33949 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001184864s 2025-08-13T20:38:26.708978077+00:00 stdout F [INFO] 10.217.0.19:48385 - 4246 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001006469s 2025-08-13T20:38:26.708978077+00:00 stdout F [INFO] 10.217.0.19:40797 - 41192 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001494034s 2025-08-13T20:38:35.230301728+00:00 stdout F [INFO] 10.217.0.19:35587 - 13762 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001269237s 2025-08-13T20:38:35.230301728+00:00 stdout F [INFO] 10.217.0.19:36692 - 31283 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001164714s 2025-08-13T20:38:41.688856539+00:00 stdout F [INFO] 10.217.0.62:43247 - 17071 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001293217s 2025-08-13T20:38:41.689246861+00:00 stdout F [INFO] 10.217.0.62:48262 - 45415 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001867384s 2025-08-13T20:38:49.075275340+00:00 stdout F [INFO] 10.217.0.8:48946 - 32235 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001550515s 2025-08-13T20:38:49.075275340+00:00 stdout F [INFO] 10.217.0.8:51973 - 5060 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000840614s 2025-08-13T20:38:49.075678612+00:00 stdout F [INFO] 10.217.0.8:58876 - 13836 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000530995s 2025-08-13T20:38:49.076039632+00:00 stdout F [INFO] 10.217.0.8:36573 - 30703 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000560596s 2025-08-13T20:38:56.362727357+00:00 stdout F [INFO] 10.217.0.64:36222 - 957 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001191765s 2025-08-13T20:38:56.362727357+00:00 stdout F [INFO] 10.217.0.64:34160 - 60776 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001621137s 2025-08-13T20:38:56.363249612+00:00 stdout F [INFO] 10.217.0.64:49956 - 6395 "A IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00106333s 2025-08-13T20:38:56.363525860+00:00 stdout F [INFO] 10.217.0.64:34527 - 596 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001017109s 2025-08-13T20:39:05.244100877+00:00 stdout F [INFO] 10.217.0.19:41309 - 38781 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001548515s 2025-08-13T20:39:05.248323839+00:00 stdout F [INFO] 10.217.0.19:39054 - 64832 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001515684s 2025-08-13T20:39:05.727542485+00:00 stdout F [INFO] 10.217.0.45:51846 - 56137 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000750081s 2025-08-13T20:39:05.727619187+00:00 stdout F [INFO] 10.217.0.45:33209 - 27535 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000454923s 2025-08-13T20:39:11.675957299+00:00 stdout F [INFO] 10.217.0.62:59504 - 55315 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001278777s 2025-08-13T20:39:11.675957299+00:00 stdout F [INFO] 10.217.0.62:53199 - 51466 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001345839s 2025-08-13T20:39:22.886569621+00:00 stdout F [INFO] 10.217.0.8:44293 - 63474 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002724538s 2025-08-13T20:39:22.886569621+00:00 stdout F [INFO] 10.217.0.8:45409 - 47820 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00310802s 2025-08-13T20:39:22.888062534+00:00 stdout F [INFO] 10.217.0.8:46886 - 26202 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001225315s 2025-08-13T20:39:22.889389783+00:00 stdout F [INFO] 10.217.0.8:34919 - 32205 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000997758s 2025-08-13T20:39:33.486891188+00:00 stdout F [INFO] 10.217.0.62:38465 - 36749 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003028897s 2025-08-13T20:39:33.487008922+00:00 stdout F [INFO] 10.217.0.62:59209 - 25765 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00138855s 2025-08-13T20:39:33.520351993+00:00 stdout F [INFO] 10.217.0.62:54424 - 53500 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002620206s 2025-08-13T20:39:33.520574099+00:00 stdout F [INFO] 10.217.0.62:34405 - 52422 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002562044s 2025-08-13T20:39:34.929199110+00:00 stdout F [INFO] 10.217.0.19:59832 - 28710 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105181s 2025-08-13T20:39:34.929293403+00:00 stdout F [INFO] 10.217.0.19:52613 - 8655 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001231076s 2025-08-13T20:39:35.238379334+00:00 stdout F [INFO] 10.217.0.19:53361 - 39733 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001116923s 2025-08-13T20:39:35.238379334+00:00 stdout F [INFO] 10.217.0.19:40427 - 35608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000605167s 2025-08-13T20:39:36.328013239+00:00 stdout F [INFO] 10.217.0.19:37534 - 9463 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000661189s 2025-08-13T20:39:36.328494453+00:00 stdout F [INFO] 10.217.0.19:40881 - 3299 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001098372s 2025-08-13T20:39:38.315116047+00:00 stdout F [INFO] 10.217.0.62:36946 - 5637 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001194124s 2025-08-13T20:39:38.315116047+00:00 stdout F [INFO] 10.217.0.62:36687 - 12833 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001436381s 2025-08-13T20:39:41.673972293+00:00 stdout F [INFO] 10.217.0.62:52646 - 36352 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000531765s 2025-08-13T20:39:41.673972293+00:00 stdout F [INFO] 10.217.0.62:55867 - 16523 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001416061s 2025-08-13T20:39:42.023083748+00:00 stdout F [INFO] 10.217.0.19:34636 - 26806 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000735301s 2025-08-13T20:39:42.023083748+00:00 stdout F [INFO] 10.217.0.19:35424 - 55421 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000683849s 2025-08-13T20:39:42.111955400+00:00 stdout F [INFO] 10.217.0.19:54314 - 8861 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000667829s 2025-08-13T20:39:42.112000792+00:00 stdout F [INFO] 10.217.0.19:33866 - 17232 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000617808s 2025-08-13T20:39:42.548542797+00:00 stdout F [INFO] 10.217.0.62:46643 - 51865 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000583177s 2025-08-13T20:39:42.548542797+00:00 stdout F [INFO] 10.217.0.62:57527 - 7741 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001258096s 2025-08-13T20:39:45.396443433+00:00 stdout F [INFO] 10.217.0.62:36515 - 31345 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002234595s 2025-08-13T20:39:45.396822784+00:00 stdout F [INFO] 10.217.0.62:45617 - 47171 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002168872s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:42472 - 6754 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00345008s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:48733 - 49549 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002868173s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:49502 - 50789 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004282123s 2025-08-13T20:39:56.365445019+00:00 stdout F [INFO] 10.217.0.64:43991 - 12908 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003339476s 2025-08-13T20:40:01.060378034+00:00 stdout F [INFO] 10.217.0.19:47434 - 25551 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001119163s 2025-08-13T20:40:01.060378034+00:00 stdout F [INFO] 10.217.0.19:51981 - 9328 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00137947s 2025-08-13T20:40:01.096217857+00:00 stdout F [INFO] 10.217.0.19:39773 - 1053 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001982967s 2025-08-13T20:40:01.096217857+00:00 stdout F [INFO] 10.217.0.19:34232 - 3657 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002569844s 2025-08-13T20:40:05.319413503+00:00 stdout F [INFO] 10.217.0.19:53209 - 19770 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001161094s 2025-08-13T20:40:05.330872523+00:00 stdout F [INFO] 10.217.0.19:47054 - 30121 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000943507s 2025-08-13T20:40:05.767886341+00:00 stdout F [INFO] 10.217.0.45:37617 - 52692 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000704861s 2025-08-13T20:40:05.767886341+00:00 stdout F [INFO] 10.217.0.45:36136 - 29941 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000623578s 2025-08-13T20:40:08.338925635+00:00 stdout F [INFO] 10.217.0.19:40398 - 2296 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000933276s 2025-08-13T20:40:08.338925635+00:00 stdout F [INFO] 10.217.0.19:37518 - 13823 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001020039s 2025-08-13T20:40:08.364254426+00:00 stdout F [INFO] 10.217.0.19:44262 - 12027 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000547166s 2025-08-13T20:40:08.364254426+00:00 stdout F [INFO] 10.217.0.19:52854 - 44366 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000602588s 2025-08-13T20:40:08.482902576+00:00 stdout F [INFO] 10.217.0.19:46869 - 62396 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000638609s 2025-08-13T20:40:08.482902576+00:00 stdout F [INFO] 10.217.0.19:59250 - 28552 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966127s 2025-08-13T20:40:11.682310416+00:00 stdout F [INFO] 10.217.0.62:34923 - 63174 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000663609s 2025-08-13T20:40:11.682310416+00:00 stdout F [INFO] 10.217.0.62:48099 - 56805 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001330308s 2025-08-13T20:40:22.885384643+00:00 stdout F [INFO] 10.217.0.8:34635 - 60579 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002014208s 2025-08-13T20:40:22.885384643+00:00 stdout F [INFO] 10.217.0.8:50611 - 3878 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000949757s 2025-08-13T20:40:22.887684909+00:00 stdout F [INFO] 10.217.0.8:50782 - 41587 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000613388s 2025-08-13T20:40:22.888213085+00:00 stdout F [INFO] 10.217.0.8:60131 - 22403 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002481611s 2025-08-13T20:40:31.249961554+00:00 stdout F [INFO] 10.217.0.73:56729 - 39008 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001197004s 2025-08-13T20:40:31.249961554+00:00 stdout F [INFO] 10.217.0.73:50058 - 34761 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001338929s 2025-08-13T20:40:35.247958776+00:00 stdout F [INFO] 10.217.0.19:32839 - 29831 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002160112s 2025-08-13T20:40:35.247958776+00:00 stdout F [INFO] 10.217.0.19:45865 - 28487 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002171683s 2025-08-13T20:40:41.677077336+00:00 stdout F [INFO] 10.217.0.62:39935 - 29145 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001932495s 2025-08-13T20:40:41.677077336+00:00 stdout F [INFO] 10.217.0.62:44261 - 57272 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002200554s 2025-08-13T20:40:45.955422642+00:00 stdout F [INFO] 10.217.0.19:49738 - 38120 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000867465s 2025-08-13T20:40:45.955559056+00:00 stdout F [INFO] 10.217.0.19:38158 - 58947 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880456s 2025-08-13T20:40:56.365382803+00:00 stdout F [INFO] 10.217.0.64:40460 - 52275 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004399127s 2025-08-13T20:40:56.365452485+00:00 stdout F [INFO] 10.217.0.64:55259 - 53324 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003396638s 2025-08-13T20:40:56.365603599+00:00 stdout F [INFO] 10.217.0.64:41027 - 3240 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003673406s 2025-08-13T20:40:56.366382632+00:00 stdout F [INFO] 10.217.0.64:40992 - 1845 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005853039s 2025-08-13T20:41:03.402012231+00:00 stdout F [INFO] 10.217.0.82:36224 - 24973 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005070816s 2025-08-13T20:41:03.402012231+00:00 stdout F [INFO] 10.217.0.82:58091 - 39129 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005111768s 2025-08-13T20:41:03.942112441+00:00 stdout F [INFO] 10.217.0.82:45684 - 38759 "AAAA IN cdn01.quay.io.crc.testing. 
udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.002796691s 2025-08-13T20:41:03.942412880+00:00 stdout F [INFO] 10.217.0.82:60802 - 56082 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000828994s 2025-08-13T20:41:05.259690387+00:00 stdout F [INFO] 10.217.0.19:54024 - 37942 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001103122s 2025-08-13T20:41:05.259690387+00:00 stdout F [INFO] 10.217.0.19:55781 - 37536 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000913356s 2025-08-13T20:41:05.828109685+00:00 stdout F [INFO] 10.217.0.45:58352 - 47304 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000848504s 2025-08-13T20:41:05.828612010+00:00 stdout F [INFO] 10.217.0.45:37579 - 6043 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000965248s 2025-08-13T20:41:11.678751738+00:00 stdout F [INFO] 10.217.0.62:37982 - 19302 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00105337s 2025-08-13T20:41:11.679052437+00:00 stdout F [INFO] 10.217.0.62:36701 - 22169 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001077221s 2025-08-13T20:41:22.888874237+00:00 stdout F [INFO] 10.217.0.8:40546 - 41194 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002784941s 2025-08-13T20:41:22.888976080+00:00 stdout F [INFO] 10.217.0.8:49312 - 22262 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00451814s 2025-08-13T20:41:22.890701110+00:00 stdout F [INFO] 10.217.0.8:35622 - 3186 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001110432s 2025-08-13T20:41:22.893256974+00:00 stdout F [INFO] 10.217.0.8:46249 - 43292 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001048781s 2025-08-13T20:41:33.614180660+00:00 stdout F [INFO] 10.217.0.19:34045 - 59058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003720818s 2025-08-13T20:41:33.614180660+00:00 stdout F [INFO] 10.217.0.19:42856 - 33590 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004016166s 2025-08-13T20:41:35.258012771+00:00 stdout F [INFO] 10.217.0.19:47082 - 44019 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004036316s 2025-08-13T20:41:35.258012771+00:00 stdout F [INFO] 10.217.0.19:53023 - 62739 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004330375s 2025-08-13T20:41:41.687703349+00:00 stdout F [INFO] 10.217.0.62:59948 - 65257 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003153271s 2025-08-13T20:41:41.687703349+00:00 stdout F [INFO] 10.217.0.62:46724 - 35067 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00419401s 2025-08-13T20:41:55.588870352+00:00 stdout F [INFO] 10.217.0.19:60199 - 65263 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002129141s 2025-08-13T20:41:55.588870352+00:00 stdout F [INFO] 10.217.0.19:37883 - 24039 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003023257s 2025-08-13T20:41:56.366333937+00:00 stdout F [INFO] 10.217.0.64:45373 - 9031 "A IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000812273s 2025-08-13T20:41:56.366770229+00:00 stdout F [INFO] 10.217.0.64:49602 - 22957 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001103102s 2025-08-13T20:41:56.367114399+00:00 stdout F [INFO] 10.217.0.64:57192 - 51247 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001530174s 2025-08-13T20:41:56.369267911+00:00 stdout F [INFO] 10.217.0.64:43691 - 12648 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001527594s 2025-08-13T20:42:05.249404267+00:00 stdout F [INFO] 10.217.0.19:56767 - 13605 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001139273s 2025-08-13T20:42:05.249404267+00:00 stdout F [INFO] 10.217.0.19:42307 - 25762 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002037729s 2025-08-13T20:42:05.884957991+00:00 stdout F [INFO] 10.217.0.45:34593 - 23662 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001245216s 2025-08-13T20:42:05.886252038+00:00 stdout F [INFO] 10.217.0.45:41209 - 48819 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002737289s 2025-08-13T20:42:10.672284590+00:00 stdout F [INFO] 10.217.0.19:36729 - 28054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000588907s 2025-08-13T20:42:10.674088373+00:00 stdout F [INFO] 10.217.0.19:43538 - 27639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001673758s 2025-08-13T20:42:11.682824284+00:00 stdout F [INFO] 10.217.0.62:47919 - 6529 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000978938s 2025-08-13T20:42:11.682824284+00:00 stdout F [INFO] 10.217.0.62:48501 - 2072 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001006219s 2025-08-13T20:42:22.889244705+00:00 stdout F [INFO] 10.217.0.8:57028 - 4650 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003507911s 2025-08-13T20:42:22.889244705+00:00 stdout F [INFO] 10.217.0.8:55542 - 40045 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001010219s 2025-08-13T20:42:22.890900792+00:00 stdout F [INFO] 10.217.0.8:51800 - 58877 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001256136s 2025-08-13T20:42:22.891156140+00:00 stdout F [INFO] 10.217.0.8:58299 - 14903 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001578475s 2025-08-13T20:42:35.276591715+00:00 stdout F [INFO] 10.217.0.19:42163 - 4628 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003242554s 2025-08-13T20:42:35.276591715+00:00 stdout F [INFO] 10.217.0.19:43475 - 45715 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004115779s 2025-08-13T20:42:43.641185657+00:00 stdout F [INFO] SIGTERM: Shutting down servers then terminating 2025-08-13T20:42:43.648934750+00:00 stdout F [INFO] plugin/health: Going into lameduck mode for 20s ././@LongLink0000644000000000000000000000024400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000755000175000017500000000000015133657744033014 5ustar zuulzuul././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000644000175000017500000000202015133657716033007 0ustar zuulzuul2026-01-20T10:49:33.593793214+00:00 stderr F W0120 10:49:33.590584 1 deprecated.go:66] 2026-01-20T10:49:33.593793214+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:49:33.593793214+00:00 stderr F 2026-01-20T10:49:33.593793214+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2026-01-20T10:49:33.593793214+00:00 stderr F 2026-01-20T10:49:33.593793214+00:00 stderr F =============================================== 2026-01-20T10:49:33.593793214+00:00 stderr F 2026-01-20T10:49:33.593793214+00:00 stderr F I0120 10:49:33.591701 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:49:33.593793214+00:00 stderr F I0120 10:49:33.591748 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:49:33.593793214+00:00 stderr F I0120 10:49:33.592332 1 kube-rbac-proxy.go:395] Starting TCP socket on :9154 2026-01-20T10:49:33.593793214+00:00 stderr F I0120 10:49:33.592822 1 kube-rbac-proxy.go:402] Listening securely on :9154 ././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000644000175000017500000000222515133657716033016 0ustar zuulzuul2025-08-13T19:59:22.553763903+00:00 stderr F W0813 19:59:22.553003 1 deprecated.go:66] 2025-08-13T19:59:22.553763903+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.553763903+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.553763903+00:00 stderr F =============================================== 2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.658394206+00:00 stderr F I0813 19:59:22.658018 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:22.658394206+00:00 stderr F I0813 19:59:22.658104 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:22.714929947+00:00 stderr F I0813 19:59:22.702562 1 kube-rbac-proxy.go:395] Starting TCP socket on :9154 2025-08-13T19:59:22.796718549+00:00 stderr F I0813 19:59:22.796259 1 kube-rbac-proxy.go:402] Listening securely on :9154 2025-08-13T20:42:42.272284042+00:00 stderr F I0813 20:42:42.271402 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000755000175000017500000000000015133657715033121 5ustar zuulzuul././@LongLink0000644000000000000000000000030200000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000755000175000017500000000000015133657735033123 5ustar zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000644000175000017500000214255315133657715033137 0ustar zuulzuul2025-08-13T20:05:42.847593472+00:00 stderr F I0813 20:05:42.847360 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:42.849079444+00:00 stderr F I0813 20:05:42.848832 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:42.850582657+00:00 stderr F I0813 20:05:42.850357 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:42.959341362+00:00 stderr F I0813 20:05:42.959227 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-08-13T20:05:43.835638706+00:00 stderr F I0813 20:05:43.832980 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:43.835638706+00:00 stderr F W0813 20:05:43.833036 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:43.835638706+00:00 stderr F W0813 20:05:43.833046 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:05:43.839256959+00:00 stderr F I0813 20:05:43.837451 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:43.842104231+00:00 stderr F I0813 20:05:43.841213 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:43.842104231+00:00 stderr F I0813 20:05:43.841744 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:43.842415920+00:00 stderr F I0813 20:05:43.842362 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:43.842824481+00:00 stderr F I0813 20:05:43.842538 1 secure_serving.go:213] Serving securely on [::]:9104 2025-08-13T20:05:43.842953695+00:00 stderr F I0813 20:05:43.842935 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:43.843316465+00:00 stderr F I0813 20:05:43.843296 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:43.847064563+00:00 stderr F I0813 20:05:43.846944 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-operator/network-operator-lock... 
2025-08-13T20:05:43.849444071+00:00 stderr F I0813 20:05:43.847922 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:43.849444071+00:00 stderr F I0813 20:05:43.848425 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:43.849493042+00:00 stderr F I0813 20:05:43.849475 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:43.950929837+00:00 stderr F I0813 20:05:43.950757 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:43.950983549+00:00 stderr F I0813 20:05:43.950918 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:43.952076690+00:00 stderr F I0813 20:05:43.951937 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:10:14.618367979+00:00 stderr F I0813 20:10:14.615016 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock 2025-08-13T20:10:14.619163142+00:00 stderr F I0813 20:10:14.619073 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33148", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_c4643de4-0a66-40c8-abff-4239e04f61ab became leader 2025-08-13T20:10:14.783630177+00:00 stderr F I0813 20:10:14.783197 1 operator.go:97] Creating status manager for stand-alone cluster 2025-08-13T20:10:14.784895684+00:00 stderr F I0813 20:10:14.784846 1 operator.go:102] Adding controller-runtime controllers 2025-08-13T20:10:14.806739700+00:00 stderr F 
I0813 20:10:14.805093 1 operconfig_controller.go:102] Waiting for feature gates initialization... 2025-08-13T20:10:14.813119033+00:00 stderr F I0813 20:10:14.812083 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:10:14.823420598+00:00 stderr F I0813 20:10:14.823371 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:10:14.825390405+00:00 stderr F I0813 
20:10:14.825331 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", 
"SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:10:14.838390267+00:00 stderr F I0813 20:10:14.838340 1 client.go:239] Starting informers... 2025-08-13T20:10:14.839898711+00:00 stderr F I0813 20:10:14.839877 1 client.go:250] Waiting for informers to sync... 2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.041276 1 client.go:271] Informers started and synced 2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.041345 1 operator.go:126] Starting controller-manager 2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.042155 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T20:10:15.043980672+00:00 stderr F I0813 20:10:15.043893 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:10:15.044043254+00:00 stderr F I0813 20:10:15.044026 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:10:15.044104816+00:00 stderr F I0813 20:10:15.044071 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:10:15.045016342+00:00 stderr F I0813 20:10:15.044954 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false 2025-08-13T20:10:15.045158616+00:00 stderr F I0813 20:10:15.045136 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T20:10:15.045206317+00:00 stderr F I0813 20:10:15.045189 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T20:10:15.045246928+00:00 stderr F I0813 20:10:15.045232 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046648 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI" 2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046696 1 controller.go:186] "Starting Controller" controller="pki-controller" 2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046726 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network" 2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046768 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047049 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047177 1 controller.go:186] "Starting Controller" controller="egress-router-controller" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047487 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047536 1 controller.go:186] "Starting Controller" controller="signer-controller" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047670 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000b7bb80" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048008 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000b7bc30" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048136 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048165 1 controller.go:220] "Starting 
workers" controller="configmap-trust-bundle-injector-controller" worker count=1 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048265 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048279 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller" 2025-08-13T20:10:15.052039263+00:00 stderr F I0813 20:10:15.049571 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T20:10:15.052097065+00:00 stderr F I0813 20:10:15.047085 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-08-13T20:10:15.052127916+00:00 stderr F I0813 20:10:15.050615 1 controller.go:178] "Starting EventSource" controller="ingress-config-controller" source="kind source: *v1.IngressController" 2025-08-13T20:10:15.052158247+00:00 stderr F I0813 20:10:15.050697 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc000b7bd90" 2025-08-13T20:10:15.052283460+00:00 stderr F I0813 20:10:15.052266 1 controller.go:186] "Starting Controller" controller="dashboard-controller" 2025-08-13T20:10:15.052378773+00:00 stderr F I0813 20:10:15.052362 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1 2025-08-13T20:10:15.055895624+00:00 stderr F I0813 20:10:15.052956 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-08-13T20:10:15.064980024+00:00 stderr F I0813 20:10:15.056021 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc000b7bad0" 2025-08-13T20:10:15.065230501+00:00 stderr F I0813 20:10:15.065197 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node" 
2025-08-13T20:10:15.065286083+00:00 stderr F I0813 20:10:15.065268 1 controller.go:186] "Starting Controller" controller="operconfig-controller" 2025-08-13T20:10:15.065560431+00:00 stderr F I0813 20:10:15.056196 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T20:10:15.066300372+00:00 stderr F I0813 20:10:15.066210 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.069116893+00:00 stderr F I0813 20:10:15.068249 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055259 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055426 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc000b7bce0" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074287 1 controller.go:186] "Starting Controller" controller="allowlist-controller" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074304 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055632 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000b7be40" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074346 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000b7bef0" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074372 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc0002ae790" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074386 1 controller.go:186] "Starting Controller" controller="pod-watcher" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 
20:10:15.074392 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055668 1 controller.go:186] "Starting Controller" controller="ingress-config-controller" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.058618 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.080641983+00:00 stderr F I0813 20:10:15.079665 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.080641983+00:00 stderr F I0813 20:10:15.080391 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081068 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081116 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081450 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081473 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081485 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081490 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081497 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081502 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.058942 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.082569 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.084970727+00:00 stderr F I0813 20:10:15.083428 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.084970727+00:00 stderr F I0813 20:10:15.084043 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.085413390+00:00 stderr F I0813 20:10:15.085380 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.085639496+00:00 stderr F I0813 20:10:15.059371 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.085742399+00:00 stderr F I0813 20:10:15.059579 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.086010897+00:00 stderr F I0813 20:10:15.054200 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000b7b6b0" 2025-08-13T20:10:15.086122110+00:00 stderr F I0813 20:10:15.086106 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy" 2025-08-13T20:10:15.086163021+00:00 stderr F I0813 20:10:15.086150 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller" 
2025-08-13T20:10:15.088455547+00:00 stderr F I0813 20:10:15.086260 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.088631592+00:00 stderr F I0813 20:10:15.059959 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.088903120+00:00 stderr F I0813 20:10:15.088853 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.088969962+00:00 stderr F I0813 20:10:15.060065 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089088075+00:00 stderr F I0813 20:10:15.060137 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089189328+00:00 stderr F I0813 20:10:15.060191 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089282461+00:00 stderr F I0813 20:10:15.060315 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089355763+00:00 stderr F I0813 20:10:15.061244 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089459266+00:00 stderr F I0813 20:10:15.061712 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089474106+00:00 stderr F I0813 20:10:15.059849 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089570569+00:00 stderr F I0813 20:10:15.086609 1 reflector.go:351] Caches populated for *v1.Node from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089879838+00:00 stderr F I0813 20:10:15.089859 1 log.go:245] openshift-network-operator/iptables-alerter-script changed, triggering operconf reconciliation 2025-08-13T20:10:15.089982301+00:00 stderr F I0813 20:10:15.089963 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation 2025-08-13T20:10:15.090061853+00:00 stderr F I0813 20:10:15.090013 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation 2025-08-13T20:10:15.090105684+00:00 stderr F I0813 20:10:15.090093 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-08-13T20:10:15.100008418+00:00 stderr F I0813 20:10:15.099132 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist 2025-08-13T20:10:15.141355304+00:00 stderr F I0813 20:10:15.137225 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:15.145938415+00:00 stderr F I0813 20:10:15.144502 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150383 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1 2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150424 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1 2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150547 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1 2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150641 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:10:15.158031662+00:00 stderr F I0813 20:10:15.157991 1 reflector.go:351] Caches populated for *v1.ConfigMap 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.158970089+00:00 stderr F I0813 20:10:15.158871 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1 2025-08-13T20:10:15.159072092+00:00 stderr F I0813 20:10:15.159011 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-08-13T20:10:15.159352200+00:00 stderr F I0813 20:10:15.159283 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1 2025-08-13T20:10:15.159451862+00:00 stderr F I0813 20:10:15.159384 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1 2025-08-13T20:10:15.159468423+00:00 stderr F I0813 20:10:15.159449 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.159886 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger" 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.159941 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger" 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.166186 1 log.go:245] /crc changed, triggering operconf reconciliation 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.171500 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.172980 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.188693941+00:00 stderr F I0813 20:10:15.188566 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1 2025-08-13T20:10:15.191884182+00:00 stderr F I0813 20:10:15.188865 1 log.go:245] Reconciling additional trust bundle configmap 
'openshift-config/etcd-metric-serving-ca' 2025-08-13T20:10:15.197997997+00:00 stderr F I0813 20:10:15.193331 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T20:10:15.197997997+00:00 stderr F I0813 20:10:15.193401 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T20:10:15.214512021+00:00 stderr F I0813 20:10:15.214316 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T20:10:15.224471057+00:00 stderr F I0813 20:10:15.222161 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.224471057+00:00 stderr F I0813 20:10:15.222263 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca' 2025-08-13T20:10:15.229861391+00:00 stderr F I0813 20:10:15.228541 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle 2025-08-13T20:10:15.240978060+00:00 stderr F I0813 20:10:15.233518 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:15.265250526+00:00 stderr F I0813 20:10:15.260705 1 log.go:245] successful reconciliation 2025-08-13T20:10:15.269867948+00:00 stderr F I0813 20:10:15.267717 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1 2025-08-13T20:10:15.269867948+00:00 stderr F I0813 20:10:15.267902 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:10:15.284082236+00:00 stderr F I0813 20:10:15.276520 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.284082236+00:00 stderr 
F I0813 20:10:15.276604 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt' 2025-08-13T20:10:15.284082236+00:00 stderr F I0813 20:10:15.278257 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle 2025-08-13T20:10:15.308118515+00:00 stderr F I0813 20:10:15.305285 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:15.308468075+00:00 stderr F I0813 20:10:15.308386 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.308486515+00:00 stderr F I0813 20:10:15.308478 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2025-08-13T20:10:15.338965739+00:00 stderr F I0813 20:10:15.338564 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.338965739+00:00 stderr F I0813 20:10:15.338664 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2025-08-13T20:10:15.353518877+00:00 stderr F I0813 20:10:15.353420 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.353565208+00:00 stderr F I0813 20:10:15.353520 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle' 2025-08-13T20:10:15.360472336+00:00 stderr F I0813 20:10:15.360382 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.360514647+00:00 stderr F I0813 20:10:15.360482 1 log.go:245] Reconciling 
additional trust bundle configmap 'openshift-config/openshift-install-manifests' 2025-08-13T20:10:15.381515149+00:00 stderr F I0813 20:10:15.379059 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.381515149+00:00 stderr F I0813 20:10:15.379154 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs' 2025-08-13T20:10:15.387460300+00:00 stderr F I0813 20:10:15.387420 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.387566783+00:00 stderr F I0813 20:10:15.387553 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 2025-08-13T20:10:15.520354019+00:00 stderr F I0813 20:10:15.520245 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.520690129+00:00 stderr F I0813 20:10:15.520661 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca' 2025-08-13T20:10:15.551942095+00:00 stderr F I0813 20:10:15.551594 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.553274773+00:00 stderr F I0813 20:10:15.553210 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T20:10:15.553274773+00:00 stderr F I0813 20:10:15.553242 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 
2025-08-13T20:10:15.600361053+00:00 stderr F I0813 20:10:15.600133 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.625530425+00:00 stderr F I0813 20:10:15.624979 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.635443619+00:00 stderr F I0813 20:10:15.634847 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.657638876+00:00 stderr F I0813 20:10:15.657545 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.696231862+00:00 stderr F I0813 20:10:15.695244 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.710192442+00:00 stderr F I0813 20:10:15.707050 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:15.713141237+00:00 stderr F I0813 20:10:15.713054 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.713343193+00:00 stderr F I0813 20:10:15.713319 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.713431175+00:00 stderr F I0813 20:10:15.713417 1 log.go:245] Reconciling proxy 'cluster' 2025-08-13T20:10:15.727638152+00:00 stderr F I0813 20:10:15.727582 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.745508955+00:00 stderr F I0813 20:10:15.740998 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T20:10:15.746159484+00:00 stderr F I0813 20:10:15.746132 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:10:15.746209645+00:00 stderr F I0813 20:10:15.746197 1 log.go:245] Successfully updated Operator config from Cluster config 2025-08-13T20:10:15.747837812+00:00 stderr F I0813 20:10:15.746759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.756361906+00:00 stderr F I0813 20:10:15.755365 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.787690624+00:00 stderr F I0813 20:10:15.787166 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2025-08-13T20:10:15.905854422+00:00 stderr F I0813 20:10:15.905622 1 log.go:245] Reconciling proxy 'cluster' complete 2025-08-13T20:10:16.493622982+00:00 stderr F I0813 20:10:16.492608 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T20:10:16.504737851+00:00 stderr F I0813 20:10:16.504547 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T20:10:16.521899913+00:00 stderr F I0813 20:10:16.521531 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T20:10:16.537375427+00:00 stderr F I0813 20:10:16.537311 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T20:10:16.537504620+00:00 stderr F I0813 20:10:16.537487 1 
log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T20:10:16.551889763+00:00 stderr F I0813 20:10:16.551720 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T20:10:16.692362681+00:00 stderr F I0813 20:10:16.692297 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:10:16.700454323+00:00 stderr F I0813 20:10:16.700272 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.702027648+00:00 stderr F I0813 20:10:16.700874 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.795859278+00:00 stderr F I0813 20:10:16.795186 1 log.go:245] successful reconciliation 2025-08-13T20:10:17.097212428+00:00 stderr F I0813 20:10:17.096540 1 log.go:245] Reconciling configmap from openshift-machine-api/mao-trusted-ca 2025-08-13T20:10:17.100598595+00:00 stderr F I0813 20:10:17.100378 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:17.300866087+00:00 stderr F I0813 20:10:17.300728 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:10:17.304838281+00:00 stderr F I0813 20:10:17.304748 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:10:17.310176314+00:00 stderr F I0813 20:10:17.310118 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:10:17.310201994+00:00 stderr F I0813 20:10:17.310159 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000940380 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] 
DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:10:17.318161843+00:00 stderr F I0813 20:10:17.317414 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:10:17.318235395+00:00 stderr F I0813 20:10:17.318219 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:10:17.318275836+00:00 stderr F I0813 20:10:17.318262 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:10:17.322330792+00:00 stderr F I0813 20:10:17.322283 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:17.322399284+00:00 stderr F I0813 20:10:17.322386 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:17.322429815+00:00 stderr F I0813 20:10:17.322418 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:17.322458676+00:00 stderr F I0813 20:10:17.322448 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:17.322533848+00:00 stderr F I0813 20:10:17.322507 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:10:17.491410320+00:00 stderr F I0813 20:10:17.491325 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-08-13T20:10:17.703263844+00:00 stderr F I0813 20:10:17.703141 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:17.719277103+00:00 stderr F I0813 20:10:17.719118 1 log.go:245] Failed to update the operator configuration: 
could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:17.719277103+00:00 stderr F E0813 20:10:17.719231 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="1f50cad5-60d2-4f3d-b8ab-18987764a41f" 2025-08-13T20:10:17.725032788+00:00 stderr F I0813 20:10:17.724614 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:10:17.897303557+00:00 stderr F I0813 20:10:17.896154 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T20:10:17.902614249+00:00 stderr F I0813 20:10:17.902576 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T20:10:17.923173799+00:00 stderr F I0813 20:10:17.922722 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T20:10:17.943853801+00:00 stderr F I0813 20:10:17.943707 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T20:10:17.943853801+00:00 stderr F I0813 20:10:17.943755 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T20:10:17.957862963+00:00 stderr F I0813 20:10:17.957742 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T20:10:18.093558154+00:00 stderr F I0813 20:10:18.093248 1 log.go:245] 
Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:10:18.099429212+00:00 stderr F I0813 20:10:18.099385       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:18.109217933+00:00 stderr F I0813 20:10:18.108590       1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:18.178884190+00:00 stderr F I0813 20:10:18.178330       1 allowlist_controller.go:146] Successfully updated sysctl allowlist
2025-08-13T20:10:18.200862150+00:00 stderr F I0813 20:10:18.198317       1 log.go:245] successful reconciliation
2025-08-13T20:10:18.289874792+00:00 stderr F I0813 20:10:18.289321       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:10:18.313854600+00:00 stderr F I0813 20:10:18.313708       1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:10:18.313854600+00:00 stderr F I0813 20:10:18.313768       1 log.go:245] Successfully updated Operator config from Cluster config
2025-08-13T20:10:18.315971261+00:00 stderr F I0813 20:10:18.315882       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T20:10:18.492054889+00:00 stderr F I0813 20:10:18.491952       1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle
2025-08-13T20:10:18.495456387+00:00 stderr F I0813 20:10:18.495370       1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:19.498520906+00:00 stderr F I0813 20:10:19.498423       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:10:19.503122328+00:00 stderr F I0813 20:10:19.503082       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:10:19.507103102+00:00 stderr F I0813 20:10:19.507068       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:10:19.507187084+00:00 stderr F I0813 20:10:19.507146       1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003acdb80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:10:19.517054227+00:00 stderr F I0813 20:10:19.517007       1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:10:19.517129689+00:00 stderr F I0813 20:10:19.517115       1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:10:19.517163580+00:00 stderr F I0813 20:10:19.517152       1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:10:19.521718241+00:00 stderr F I0813 20:10:19.521682       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:10:19.521859855+00:00 stderr F I0813 20:10:19.521764       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:10:19.521903126+00:00 stderr F I0813 20:10:19.521888       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:10:19.522002509+00:00 stderr F I0813 20:10:19.521986       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:10:19.522061621+00:00 stderr F I0813 20:10:19.522049       1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:10:19.703470441+00:00 stderr F I0813 20:10:19.703361       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:10:19.720904881+00:00 stderr F I0813 20:10:19.720475       1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:10:19.720904881+00:00 stderr F E0813 20:10:19.720610       1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="046702fa-95f0-4511-9599-1da40328714e"
2025-08-13T20:10:19.731743692+00:00 stderr F I0813 20:10:19.731668       1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:10:19.892101659+00:00 stderr F I0813 20:10:19.890303       1 log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle
2025-08-13T20:10:19.893996923+00:00 stderr F I0813 20:10:19.893943       1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:20.091372732+00:00 stderr F I0813 20:10:20.091233       1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca
2025-08-13T20:10:20.093704069+00:00 stderr F I0813 20:10:20.093616       1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:20.493945505+00:00 stderr F I0813 20:10:20.493604       1 log.go:245] Reconciling configmap from openshift-image-registry/trusted-ca
2025-08-13T20:10:20.498337961+00:00 stderr F I0813 20:10:20.498289       1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:20.891459272+00:00 stderr F I0813 20:10:20.891337       1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca
2025-08-13T20:10:20.905319379+00:00 stderr F I0813 20:10:20.905262       1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.103277445+00:00 stderr F I0813 20:10:21.103186       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:10:21.106639701+00:00 stderr F I0813 20:10:21.106580       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:10:21.108664629+00:00 stderr F I0813 20:10:21.108585       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:10:21.108664629+00:00 stderr F I0813 20:10:21.108619       1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0010c5800 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114159       1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114206       1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114224       1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.117969       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118007       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118026       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118033       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:10:21.118318786+00:00 stderr F I0813 20:10:21.118274       1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:10:21.295458115+00:00 stderr F I0813 20:10:21.293016       1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle
2025-08-13T20:10:21.297510904+00:00 stderr F I0813 20:10:21.297171       1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.491060843+00:00 stderr F I0813 20:10:21.490953       1 log.go:245] Reconciling configmap from openshift-kube-controller-manager/trusted-ca-bundle
2025-08-13T20:10:21.494009628+00:00 stderr F I0813 20:10:21.493448       1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.504048745+00:00 stderr F I0813 20:10:21.503017       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:10:21.523490663+00:00 stderr F I0813 20:10:21.523122       1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:10:21.523490663+00:00 stderr F I0813 20:10:21.523183       1 log.go:245] Starting render phase
2025-08-13T20:10:21.561189264+00:00 stderr F I0813 20:10:21.559944       1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601509       1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601549       1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601586       1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601624       1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:10:21.691747777+00:00 stderr F I0813 20:10:21.691660       1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca
2025-08-13T20:10:21.695859975+00:00 stderr F I0813 20:10:21.694041       1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.707107557+00:00 stderr F I0813 20:10:21.707020       1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:10:21.707107557+00:00 stderr F I0813 20:10:21.707053       1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:10:21.894252693+00:00 stderr F I0813 20:10:21.893607       1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle
2025-08-13T20:10:21.897754513+00:00 stderr F I0813 20:10:21.897684       1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps
2025-08-13T20:10:21.897838216+00:00 stderr F I0813 20:10:21.897816       1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897857       1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897952       1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897985       1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898029       1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898082       1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898122       1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898151       1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898173       1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898203       1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898244       1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898265       1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:21.939293424+00:00 stderr F I0813 20:10:21.938402       1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:10:22.312176035+00:00 stderr F I0813 20:10:22.310021       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:10:22.325019283+00:00 stderr F I0813 20:10:22.323766       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:10:22.325019283+00:00 stderr F I0813 20:10:22.324015       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:10:22.336750280+00:00 stderr F I0813 20:10:22.336655       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:10:22.336750280+00:00 stderr F I0813 20:10:22.336710       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:10:22.346664414+00:00 stderr F I0813 20:10:22.346573       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:10:22.346753177+00:00 stderr F I0813 20:10:22.346663       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:10:22.362561200+00:00 stderr F I0813 20:10:22.362508       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:10:22.362671523+00:00 stderr F I0813 20:10:22.362656       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:10:22.370158458+00:00 stderr F I0813 20:10:22.369302       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:10:22.370158458+00:00 stderr F I0813 20:10:22.369394       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:10:22.382879762+00:00 stderr F I0813 20:10:22.382733       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:10:22.382879762+00:00 stderr F I0813 20:10:22.382865       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:10:22.397029888+00:00 stderr F I0813 20:10:22.396511       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:10:22.397029888+00:00 stderr F I0813 20:10:22.396601       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:10:22.409078684+00:00 stderr F I0813 20:10:22.408017       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:10:22.409078684+00:00 stderr F I0813 20:10:22.408086       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:10:22.421082108+00:00 stderr F I0813 20:10:22.419645       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:10:22.421082108+00:00 stderr F I0813 20:10:22.419731       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:10:22.428624814+00:00 stderr F I0813 20:10:22.427127       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:10:22.428624814+00:00 stderr F I0813 20:10:22.427193       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:10:22.520892639+00:00 stderr F I0813 20:10:22.520085       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:10:22.520892639+00:00 stderr F I0813 20:10:22.520153       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:10:22.721411529+00:00 stderr F I0813 20:10:22.721251       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:10:22.721411529+00:00 stderr F I0813 20:10:22.721321       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:10:22.927994422+00:00 stderr F I0813 20:10:22.927876       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:10:22.927994422+00:00 stderr F I0813 20:10:22.927981       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:10:23.123548518+00:00 stderr F I0813 20:10:23.121703       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:10:23.123548518+00:00 stderr F I0813 20:10:23.121845       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:10:23.323863840+00:00 stderr F I0813 20:10:23.322137       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:10:23.323863840+00:00 stderr F I0813 20:10:23.322283       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:10:23.524312658+00:00 stderr F I0813 20:10:23.521103       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:10:23.524312658+00:00 stderr F I0813 20:10:23.521193       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:10:23.721742038+00:00 stderr F I0813 20:10:23.721581       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:10:23.721742038+00:00 stderr F I0813 20:10:23.721722       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:10:23.924133671+00:00 stderr F I0813 20:10:23.922598       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:10:23.924133671+00:00 stderr F I0813 20:10:23.922687       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:10:24.119658897+00:00 stderr F I0813 20:10:24.119602       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:10:24.119879823+00:00 stderr F I0813 20:10:24.119855       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:10:24.319565018+00:00 stderr F I0813 20:10:24.319507       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:10:24.319760864+00:00 stderr F I0813 20:10:24.319741       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:10:24.522948220+00:00 stderr F I0813 20:10:24.522836       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:10:24.522948220+00:00 stderr F I0813 20:10:24.522904       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:10:24.799190120+00:00 stderr F I0813 20:10:24.799067       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:10:24.799372835+00:00 stderr F I0813 20:10:24.799350       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:10:24.931274677+00:00 stderr F I0813 20:10:24.931220       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:10:24.931429361+00:00 stderr F I0813 20:10:24.931414       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:10:25.124266860+00:00 stderr F I0813 20:10:25.123539       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:10:25.124382073+00:00 stderr F I0813 20:10:25.124367       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:10:25.320061254+00:00 stderr F I0813 20:10:25.320004       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:10:25.320175797+00:00 stderr F I0813 20:10:25.320161       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:10:25.519275435+00:00 stderr F I0813 20:10:25.518652       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:10:25.519275435+00:00 stderr F I0813 20:10:25.518731       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:10:25.736134673+00:00 stderr F I0813 20:10:25.736036       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:10:25.736134673+00:00 stderr F I0813 20:10:25.736108       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:10:25.925590275+00:00 stderr F I0813 20:10:25.925203       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:10:25.925590275+00:00 stderr F I0813 20:10:25.925275       1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:10:26.123879040+00:00 stderr F I0813 20:10:26.123028       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:10:26.123879040+00:00 stderr F I0813 20:10:26.123099       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:10:26.323700779+00:00 stderr F I0813 20:10:26.323533       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:10:26.323700779+00:00 stderr F I0813 20:10:26.323636       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:10:26.527292577+00:00 stderr F I0813 20:10:26.527233       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:10:26.527426310+00:00 stderr F I0813 20:10:26.527402       1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:10:26.724639985+00:00 stderr F I0813 20:10:26.724538       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:10:26.724699056+00:00 stderr F I0813 20:10:26.724643       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:10:26.923624379+00:00 stderr F I0813 20:10:26.923318       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:10:26.923887836+00:00 stderr F I0813 20:10:26.923867       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:10:27.128076081+00:00 stderr F I0813 20:10:27.127842       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:10:27.128076081+00:00 stderr F I0813 20:10:27.128023       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:10:27.325535142+00:00 stderr F I0813 20:10:27.325407       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:10:27.325535142+00:00 stderr F I0813 20:10:27.325510       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:10:27.529757257+00:00 stderr F I0813 20:10:27.529598       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:10:27.529757257+00:00 stderr F I0813 20:10:27.529682       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:10:27.724759538+00:00 stderr F I0813 20:10:27.724633       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:10:27.724759538+00:00 stderr F I0813 20:10:27.724711       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:10:27.920503770+00:00 stderr F I0813 20:10:27.920391       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:10:27.920503770+00:00 stderr F I0813 20:10:27.920468       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:10:28.124461178+00:00 stderr F I0813 20:10:28.123842       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:10:28.124461178+00:00 stderr F I0813 20:10:28.124025       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:10:28.319762888+00:00 stderr F I0813 20:10:28.319650       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:10:28.319762888+00:00 stderr F I0813 20:10:28.319733       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:10:28.522318335+00:00 stderr F I0813 20:10:28.522195       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:10:28.522318335+00:00 stderr F I0813 20:10:28.522260       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:10:28.721852396+00:00 stderr F I0813 20:10:28.721622       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:10:28.721852396+00:00 stderr F I0813 20:10:28.721715       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:10:28.927618676+00:00 stderr F I0813 20:10:28.927482       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:10:28.927618676+00:00 stderr F I0813 20:10:28.927562       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:10:29.136455113+00:00 stderr F I0813 20:10:29.136266       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:10:29.136455113+00:00 stderr F I0813 20:10:29.136352       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:10:29.326397239+00:00 stderr F I0813 20:10:29.326310       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:10:29.326440441+00:00 stderr F I0813 20:10:29.326399       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:10:29.529527483+00:00 stderr F I0813 20:10:29.529420       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:10:29.529527483+00:00 stderr F I0813 20:10:29.529505       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:10:29.724242336+00:00 stderr F I0813 20:10:29.724035       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:10:29.724242336+00:00 stderr F I0813 20:10:29.724102       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:10:29.952650554+00:00 stderr F I0813 20:10:29.952593       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:10:29.952941233+00:00 stderr F I0813 20:10:29.952896       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:10:30.156097907+00:00 stderr F I0813 20:10:30.155900       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:10:30.156097907+00:00 stderr F I0813 20:10:30.156025       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:10:30.319976035+00:00 stderr F I0813 20:10:30.319680       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:10:30.319976035+00:00 stderr F I0813 20:10:30.319769       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:10:30.520501744+00:00 stderr F I0813 20:10:30.520394       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:10:30.520563216+00:00 stderr F I0813 20:10:30.520555       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:10:30.734934362+00:00 stderr F I0813 20:10:30.734182       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:10:30.734934362+00:00 stderr F I0813 20:10:30.734256       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:10:30.921047819+00:00 stderr F I0813 20:10:30.920898       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:10:30.921177432+00:00 stderr F I0813 20:10:30.921161       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:10:31.119574791+00:00 stderr F I0813 20:10:31.119426       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:10:31.119574791+00:00 stderr F I0813 20:10:31.119505       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:10:31.322412276+00:00 stderr F I0813 20:10:31.322271       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:10:31.322412276+00:00 stderr F I0813 20:10:31.322363       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:10:31.521363790+00:00 stderr F I0813 20:10:31.521222       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:10:31.521363790+00:00 stderr F I0813 20:10:31.521297       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:10:31.723192067+00:00 stderr F I0813 20:10:31.723043       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:10:31.723192067+00:00 stderr F I0813 20:10:31.723129       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:10:31.920969337+00:00 stderr F I0813 20:10:31.920680       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:10:31.920969337+00:00 stderr F I0813 20:10:31.920767       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:10:32.121627650+00:00 stderr F I0813 20:10:32.121471       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:10:32.121627650+00:00 stderr F I0813 20:10:32.121564       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:10:32.324740024+00:00 stderr F I0813 20:10:32.324619       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:10:32.324740024+00:00 stderr F I0813 20:10:32.324722       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:10:32.518459638+00:00 stderr F I0813 20:10:32.518327       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:10:32.518459638+00:00 stderr F I0813 20:10:32.518405       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:10:32.718275557+00:00 stderr F I0813 20:10:32.718118       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:10:32.718275557+00:00 stderr F I0813 20:10:32.718222       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:10:32.922672908+00:00 stderr F I0813 20:10:32.922513       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:10:32.922672908+00:00 stderr F I0813 20:10:32.922624       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:10:33.121058765+00:00 stderr F I0813 20:10:33.120958       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:10:33.121058765+00:00 stderr F I0813 20:10:33.121044       1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:10:33.322137191+00:00 stderr F I0813 20:10:33.321998       1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:10:33.322137191+00:00 stderr F I0813 20:10:33.322076       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:10:33.521951420+00:00 stderr F I0813 20:10:33.521861       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:10:33.522072753+00:00 stderr F I0813 20:10:33.522056       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:10:33.721470860+00:00 stderr F I0813 20:10:33.721344       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:10:33.721470860+00:00 stderr F I0813 20:10:33.721428       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:10:33.924454139+00:00 stderr F I0813 20:10:33.924308       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:10:33.924660925+00:00 stderr F I0813 20:10:33.924645       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:10:34.122531938+00:00 stderr F I0813 20:10:34.122431       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:10:34.122590230+00:00 stderr F I0813 20:10:34.122528       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:10:34.318450145+00:00 stderr F I0813 20:10:34.318305       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:10:34.318450145+00:00 stderr F I0813 20:10:34.318389       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:10:34.527260152+00:00 stderr F I0813 20:10:34.527157       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:10:34.528939790+00:00 stderr F I0813 20:10:34.528564       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:10:34.725633460+00:00 stderr F I0813 20:10:34.725198       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:10:34.725633460+00:00 stderr F I0813 20:10:34.725356       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:10:34.925560072+00:00 stderr F I0813 20:10:34.925500       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:10:34.925682115+00:00 stderr F I0813 20:10:34.925667       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:10:35.119538073+00:00 stderr F I0813 20:10:35.119417       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:10:35.119578044+00:00 stderr F I0813 20:10:35.119548       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:10:35.318718504+00:00 stderr F I0813 20:10:35.318551       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:10:35.319024753+00:00 stderr F I0813 20:10:35.319001       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding)
openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:10:35.519099909+00:00 stderr F I0813 20:10:35.518651 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:10:35.519099909+00:00 stderr F I0813 20:10:35.518748 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:10:35.722389858+00:00 stderr F I0813 20:10:35.722324 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:10:35.722567473+00:00 stderr F I0813 20:10:35.722546 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:10:35.921644231+00:00 stderr F I0813 20:10:35.921586 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:10:35.922010341+00:00 stderr F I0813 20:10:35.921740 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:10:36.129045217+00:00 stderr F I0813 20:10:36.128878 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:10:36.129045217+00:00 stderr F I0813 20:10:36.128967 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:10:36.342385344+00:00 stderr F I0813 20:10:36.342256 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:10:36.342385344+00:00 stderr F I0813 20:10:36.342348 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:10:36.519449990+00:00 stderr F I0813 20:10:36.519340 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:10:36.519449990+00:00 stderr F I0813 20:10:36.519424 1 log.go:245] 
reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:10:36.719878037+00:00 stderr F I0813 20:10:36.719706 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:10:36.719948579+00:00 stderr F I0813 20:10:36.719876 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:10:36.919298924+00:00 stderr F I0813 20:10:36.919195 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:10:36.919298924+00:00 stderr F I0813 20:10:36.919269 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:10:37.118161506+00:00 stderr F I0813 20:10:37.117957 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:10:37.118161506+00:00 stderr F I0813 20:10:37.118039 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:10:37.322144624+00:00 stderr F I0813 20:10:37.322025 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:10:37.322144624+00:00 stderr F I0813 20:10:37.322107 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:10:37.519550883+00:00 stderr F I0813 20:10:37.519429 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:10:37.519550883+00:00 stderr F I0813 20:10:37.519499 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:10:37.718666492+00:00 
stderr F I0813 20:10:37.718559 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:10:37.718666492+00:00 stderr F I0813 20:10:37.718648 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:10:37.940935225+00:00 stderr F I0813 20:10:37.938980 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:10:37.940935225+00:00 stderr F I0813 20:10:37.939317 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:10:38.123136988+00:00 stderr F I0813 20:10:38.123011 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:10:38.123136988+00:00 stderr F I0813 20:10:38.123099 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:10:38.327121127+00:00 stderr F I0813 20:10:38.326997 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:10:38.327121127+00:00 stderr F I0813 20:10:38.327105 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:10:38.518325349+00:00 stderr F I0813 20:10:38.518189 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:10:38.518325349+00:00 stderr F I0813 20:10:38.518302 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:10:38.720081113+00:00 stderr F I0813 20:10:38.719936 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:10:38.720081113+00:00 stderr F I0813 20:10:38.720022 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:10:38.924877685+00:00 stderr F I0813 20:10:38.924686 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:10:38.924877685+00:00 stderr F I0813 20:10:38.924830 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:10:39.119301820+00:00 stderr F I0813 20:10:39.119180 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:10:39.119301820+00:00 stderr F I0813 20:10:39.119262 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:10:39.322502676+00:00 stderr F I0813 20:10:39.322362 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:10:39.322502676+00:00 stderr F I0813 20:10:39.322449 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:10:39.522842260+00:00 stderr F I0813 20:10:39.522711 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:10:39.522905202+00:00 stderr F I0813 20:10:39.522864 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:10:39.720545158+00:00 stderr F I0813 20:10:39.720445 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:10:39.720545158+00:00 stderr F I0813 20:10:39.720510 1 log.go:245] 
reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:10:39.925620748+00:00 stderr F I0813 20:10:39.923488 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:10:39.925620748+00:00 stderr F I0813 20:10:39.923561 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:10:40.118724484+00:00 stderr F I0813 20:10:40.118611 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:10:40.118724484+00:00 stderr F I0813 20:10:40.118691 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:10:40.320312704+00:00 stderr F I0813 20:10:40.320183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:10:40.320312704+00:00 stderr F I0813 20:10:40.320273 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:10:40.530647374+00:00 stderr F I0813 20:10:40.530503 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:10:40.530647374+00:00 stderr F I0813 20:10:40.530601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:10:40.719393536+00:00 stderr F I0813 20:10:40.719282 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:10:40.719446957+00:00 stderr F I0813 20:10:40.719386 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:10:40.919968957+00:00 stderr F I0813 20:10:40.919588 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:10:40.919968957+00:00 stderr F I0813 20:10:40.919670 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:10:41.120396652+00:00 stderr F I0813 20:10:41.120317 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:10:41.120438443+00:00 stderr F I0813 20:10:41.120409 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:10:41.327983544+00:00 stderr F I0813 20:10:41.327640 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:10:41.328054846+00:00 stderr F I0813 20:10:41.327997 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:10:41.522964074+00:00 stderr F I0813 20:10:41.522755 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:10:41.523020465+00:00 stderr F I0813 20:10:41.522971 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:10:41.725399998+00:00 stderr F I0813 20:10:41.725267 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:10:41.725399998+00:00 stderr F I0813 20:10:41.725342 1 log.go:245] reconciling (monitoring.coreos.com/v1, 
Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:10:41.919321038+00:00 stderr F I0813 20:10:41.919192 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:10:41.919472992+00:00 stderr F I0813 20:10:41.919407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:10:42.120738593+00:00 stderr F I0813 20:10:42.120473 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:10:42.120738593+00:00 stderr F I0813 20:10:42.120550 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:10:42.321723816+00:00 stderr F I0813 20:10:42.321611 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:10:42.321841499+00:00 stderr F I0813 20:10:42.321711 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:10:42.526454565+00:00 stderr F I0813 20:10:42.526331 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:10:42.526454565+00:00 stderr F I0813 20:10:42.526424 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:10:42.719186791+00:00 stderr F I0813 20:10:42.719072 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:10:42.719252073+00:00 stderr F I0813 20:10:42.719180 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:10:42.922417198+00:00 stderr F I0813 20:10:42.922270 1 log.go:245] 
Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:10:42.939641362+00:00 stderr F I0813 20:10:42.939553 1 log.go:245] Operconfig Controller complete 2025-08-13T20:13:15.311294346+00:00 stderr F I0813 20:13:15.310918 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:13:42.940568520+00:00 stderr F I0813 20:13:42.940483 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:13:43.374828721+00:00 stderr F I0813 20:13:43.374716 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:13:43.377434225+00:00 stderr F I0813 20:13:43.377324 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:13:43.380157543+00:00 stderr F I0813 20:13:43.380092 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:13:43.380179564+00:00 stderr F I0813 20:13:43.380129 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00074a600 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385539 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385589 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385600 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 
2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389128 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389169 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389192 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389200 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389330 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:13:43.403831402+00:00 stderr F I0813 20:13:43.403740 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:13:43.418676848+00:00 stderr F I0813 20:13:43.418573 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:13:43.418676848+00:00 stderr F I0813 20:13:43.418656 1 log.go:245] Starting render phase 2025-08-13T20:13:43.432573196+00:00 stderr F I0813 20:13:43.432482 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469024 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469065 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469136 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:13:43.469218897+00:00 stderr F I0813 20:13:43.469166 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:13:43.487704557+00:00 stderr F I0813 20:13:43.487597 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:13:43.487704557+00:00 stderr F I0813 20:13:43.487653 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:13:43.503889401+00:00 stderr F I0813 20:13:43.503672 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:13:43.520217339+00:00 stderr F I0813 20:13:43.520092 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:13:43.527586020+00:00 stderr F I0813 20:13:43.527422 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:13:43.527586020+00:00 stderr F I0813 20:13:43.527494 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:13:43.536987710+00:00 stderr F I0813 20:13:43.536676 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 
2025-08-13T20:13:43.536987710+00:00 stderr F I0813 20:13:43.536759 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:13:43.545637138+00:00 stderr F I0813 20:13:43.545533 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:13:43.545637138+00:00 stderr F I0813 20:13:43.545579 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:13:43.554432040+00:00 stderr F I0813 20:13:43.554328 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:13:43.554432040+00:00 stderr F I0813 20:13:43.554370 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:13:43.563117879+00:00 stderr F I0813 20:13:43.562860 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:13:43.563117879+00:00 stderr F I0813 20:13:43.562923 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:13:43.570672736+00:00 stderr F I0813 20:13:43.570530 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:13:43.570672736+00:00 stderr F I0813 20:13:43.570618 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:13:43.578617994+00:00 stderr F I0813 20:13:43.578499 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:13:43.578617994+00:00 stderr F I0813 20:13:43.578605 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:13:43.584363808+00:00 stderr F I0813 20:13:43.584218 1 
log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:13:43.584363808+00:00 stderr F I0813 20:13:43.584292 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:13:43.590532865+00:00 stderr F I0813 20:13:43.590429 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:13:43.590532865+00:00 stderr F I0813 20:13:43.590502 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:13:43.615068418+00:00 stderr F I0813 20:13:43.614748 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:13:43.615068418+00:00 stderr F I0813 20:13:43.615055 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:13:43.816850564+00:00 stderr F I0813 20:13:43.816620 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:13:43.816850564+00:00 stderr F I0813 20:13:43.816732 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:13:44.045211281+00:00 stderr F I0813 20:13:44.045101 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:13:44.045211281+00:00 stderr F I0813 20:13:44.045173 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:13:44.214676170+00:00 stderr F I0813 20:13:44.214560 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:13:44.214676170+00:00 stderr F I0813 20:13:44.214633 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:13:44.414017305+00:00 stderr F I0813 20:13:44.413907 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:13:44.414017305+00:00 stderr F I0813 20:13:44.414004 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:13:44.616854340+00:00 stderr F I0813 20:13:44.616651 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:13:44.617054796+00:00 stderr F I0813 20:13:44.616892 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:13:44.816605287+00:00 stderr F I0813 20:13:44.815293 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:13:44.816605287+00:00 stderr F I0813 20:13:44.816217 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:13:45.018822195+00:00 stderr F I0813 20:13:45.018280 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:13:45.018822195+00:00 stderr F I0813 20:13:45.018359 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:13:45.216616916+00:00 stderr F I0813 20:13:45.216222 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:13:45.216616916+00:00 stderr F I0813 20:13:45.216293 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:13:45.417322111+00:00 stderr F I0813 20:13:45.417262 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was 
successful 2025-08-13T20:13:45.417534628+00:00 stderr F I0813 20:13:45.417515 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:13:45.615292018+00:00 stderr F I0813 20:13:45.615154 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:13:45.615292018+00:00 stderr F I0813 20:13:45.615220 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:13:45.814613313+00:00 stderr F I0813 20:13:45.814560 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:13:45.814888881+00:00 stderr F I0813 20:13:45.814867 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:13:46.025141219+00:00 stderr F I0813 20:13:46.024971 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:13:46.025141219+00:00 stderr F I0813 20:13:46.025052 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:13:46.225747440+00:00 stderr F I0813 20:13:46.225614 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:13:46.225747440+00:00 stderr F I0813 20:13:46.225690 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:13:46.414999516+00:00 stderr F I0813 20:13:46.414885 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:13:46.415107769+00:00 stderr F I0813 20:13:46.415093 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:13:46.614743393+00:00 stderr F I0813 20:13:46.614643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 
2025-08-13T20:13:46.614913388+00:00 stderr F I0813 20:13:46.614882 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:13:46.815699004+00:00 stderr F I0813 20:13:46.815560 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:13:46.815699004+00:00 stderr F I0813 20:13:46.815642 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:13:47.022632887+00:00 stderr F I0813 20:13:47.022324 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:13:47.022632887+00:00 stderr F I0813 20:13:47.022434 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:13:47.218154962+00:00 stderr F I0813 20:13:47.217983 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:13:47.218154962+00:00 stderr F I0813 20:13:47.218104 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:13:47.420447001+00:00 stderr F I0813 20:13:47.420287 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:13:47.420447001+00:00 stderr F I0813 20:13:47.420392 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:13:47.615309948+00:00 stderr F I0813 20:13:47.615199 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:13:47.615309948+00:00 stderr F I0813 20:13:47.615294 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:13:47.815000913+00:00 stderr F I0813 20:13:47.814648 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:13:47.815000913+00:00 stderr F I0813 20:13:47.814727 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:13:48.018637922+00:00 stderr F I0813 20:13:48.018534 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:13:48.018637922+00:00 stderr F I0813 20:13:48.018579 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:13:48.217180944+00:00 stderr F I0813 20:13:48.216324 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:13:48.217180944+00:00 stderr F I0813 20:13:48.216413 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:13:48.414719087+00:00 stderr F I0813 20:13:48.414622 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:13:48.414719087+00:00 stderr F I0813 20:13:48.414691 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:13:48.614108514+00:00 stderr F I0813 20:13:48.613413 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:13:48.614149025+00:00 stderr F I0813 20:13:48.614131 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:13:48.817015652+00:00 stderr F I0813 20:13:48.816916 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:13:48.817055383+00:00 stderr F I0813 20:13:48.817009 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:13:49.019493637+00:00 stderr F I0813 20:13:49.019344 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:13:49.019493637+00:00 stderr F I0813 20:13:49.019461 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:13:49.217746911+00:00 stderr F I0813 20:13:49.216564 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:13:49.217746911+00:00 stderr F I0813 20:13:49.216637 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:13:49.443239016+00:00 stderr F I0813 20:13:49.442371 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:13:49.443239016+00:00 stderr F I0813 20:13:49.443043 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:13:49.618431979+00:00 stderr F I0813 20:13:49.617657 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:13:49.618431979+00:00 stderr F I0813 20:13:49.618405 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:13:49.816567380+00:00 stderr F I0813 20:13:49.816398 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:13:49.816667963+00:00 stderr F I0813 20:13:49.816646 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:13:50.014527865+00:00 stderr F I0813 20:13:50.014388 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:13:50.014527865+00:00 stderr F I0813 20:13:50.014460 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:13:50.225897456+00:00 stderr F I0813 20:13:50.225706 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:13:50.225897456+00:00 stderr F I0813 20:13:50.225863 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:13:50.420505806+00:00 stderr F I0813 20:13:50.420379 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:13:50.420505806+00:00 stderr F I0813 20:13:50.420455 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:13:50.625005169+00:00 stderr F I0813 20:13:50.624863 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:13:50.625005169+00:00 stderr F I0813 20:13:50.624933 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:13:50.824855698+00:00 stderr F I0813 20:13:50.824663 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:13:50.824855698+00:00 stderr F I0813 20:13:50.824758 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:13:51.017863611+00:00 stderr F I0813 20:13:51.017673 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:13:51.017863611+00:00 stderr F I0813 20:13:51.017749 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:13:51.289760177+00:00 stderr F I0813 20:13:51.287445 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:13:51.289760177+00:00 stderr F I0813 20:13:51.287519 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:13:51.453939414+00:00 stderr F I0813 20:13:51.453768 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:13:51.453939414+00:00 stderr F I0813 20:13:51.453894 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:13:51.615114005+00:00 stderr F I0813 20:13:51.615004 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:13:51.615114005+00:00 stderr F I0813 20:13:51.615074 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:13:51.817841177+00:00 stderr F I0813 20:13:51.817663 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:13:51.817841177+00:00 stderr F I0813 20:13:51.817744 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:13:52.016338579+00:00 stderr F I0813 20:13:52.016238 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:13:52.016338579+00:00 stderr F I0813 20:13:52.016312 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:13:52.216276261+00:00 stderr F I0813 20:13:52.216221 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:13:52.216375604+00:00 stderr F I0813 20:13:52.216361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:13:52.414331500+00:00 stderr F I0813 20:13:52.414205 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:13:52.414331500+00:00 stderr F I0813 20:13:52.414278 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:13:52.615858418+00:00 stderr F I0813 20:13:52.615651 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:13:52.615858418+00:00 stderr F I0813 20:13:52.615720 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:13:52.816060237+00:00 stderr F I0813 20:13:52.815899 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:13:52.816060237+00:00 stderr F I0813 20:13:52.816013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:13:53.019017056+00:00 stderr F I0813 20:13:53.018889 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:13:53.019066518+00:00 stderr F I0813 20:13:53.019021 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:13:53.216182769+00:00 stderr F I0813 20:13:53.216074 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:13:53.216182769+00:00 stderr F I0813 20:13:53.216144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:13:53.418753757+00:00 stderr F I0813 20:13:53.418643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:13:53.418753757+00:00 stderr F I0813 20:13:53.418731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:13:53.625225817+00:00 stderr F I0813 20:13:53.625005 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:13:53.625225817+00:00 stderr F I0813 20:13:53.625073 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:13:53.815445371+00:00 stderr F I0813 20:13:53.815317 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:13:53.815445371+00:00 stderr F I0813 20:13:53.815387 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:13:54.014515318+00:00 stderr F I0813 20:13:54.014401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:13:54.014515318+00:00 stderr F I0813 20:13:54.014468 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:13:54.231292933+00:00 stderr F I0813 20:13:54.231165 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:13:54.231500299+00:00 stderr F I0813 20:13:54.231279 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:13:54.418158130+00:00 stderr F I0813 20:13:54.417727 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:13:54.418158130+00:00 stderr F I0813 20:13:54.417841 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:13:54.618489853+00:00 stderr F I0813 20:13:54.618262 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:13:54.618489853+00:00 stderr F I0813 20:13:54.618344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:13:54.815864362+00:00 stderr F I0813 20:13:54.815675 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:13:54.815864362+00:00 stderr F I0813 20:13:54.815760 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:13:55.017643068+00:00 stderr F I0813 20:13:55.017533 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:13:55.017643068+00:00 stderr F I0813 20:13:55.017618 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:13:55.221649977+00:00 stderr F I0813 20:13:55.221007 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:13:55.221649977+00:00 stderr F I0813 20:13:55.221077 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:13:55.420225810+00:00 stderr F I0813 20:13:55.419610 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:13:55.420225810+00:00 stderr F I0813 20:13:55.419684 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:13:55.616715154+00:00 stderr F I0813 20:13:55.616622 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:13:55.616715154+00:00 stderr F I0813 20:13:55.616674 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:13:55.819831198+00:00 stderr F I0813 20:13:55.819646 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:13:55.820014403+00:00 stderr F I0813 20:13:55.819929 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:13:56.020237654+00:00 stderr F I0813 20:13:56.020119 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:13:56.020237654+00:00 stderr F I0813 20:13:56.020193 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:13:56.218649183+00:00 stderr F I0813 20:13:56.217976 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:13:56.218649183+00:00 stderr F I0813 20:13:56.218612 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:13:56.443436028+00:00 stderr F I0813 20:13:56.442659 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:13:56.443436028+00:00 stderr F I0813 20:13:56.442737 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:13:56.620633618+00:00 stderr F I0813 20:13:56.620546 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:13:56.620681910+00:00 stderr F I0813 20:13:56.620650 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:13:56.817469731+00:00 stderr F I0813 20:13:56.817347 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:13:56.817704398+00:00 stderr F I0813 20:13:56.817688 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:13:57.016028065+00:00 stderr F I0813 20:13:57.015855 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:13:57.016028065+00:00 stderr F I0813 20:13:57.015967 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:13:57.220501447+00:00 stderr F I0813 20:13:57.219443 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:13:57.220501447+00:00 stderr F I0813 20:13:57.220307 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:13:57.424032872+00:00 stderr F I0813 20:13:57.423901 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:13:57.424093534+00:00 stderr F I0813 20:13:57.424028 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:13:57.637005778+00:00 stderr F I0813 20:13:57.636883 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:13:57.637005778+00:00 stderr F I0813 20:13:57.636933 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:13:57.815596899+00:00 stderr F I0813 20:13:57.815465 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:13:57.815596899+00:00 stderr F I0813 20:13:57.815547 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:13:58.015258352+00:00 stderr F I0813 20:13:58.015162 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:13:58.015258352+00:00 stderr F I0813 20:13:58.015233 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:13:58.221014882+00:00 stderr F I0813 20:13:58.220477 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:13:58.221014882+00:00 stderr F I0813 20:13:58.220645 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:13:58.416857407+00:00 stderr F I0813 20:13:58.416681 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:13:58.416857407+00:00 stderr F I0813 20:13:58.416762 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:13:58.614262667+00:00 stderr F I0813 20:13:58.614088 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:13:58.614262667+00:00 stderr F I0813 20:13:58.614173 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:13:58.816146785+00:00 stderr F I0813 20:13:58.816089 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:13:58.816263938+00:00 stderr F I0813 20:13:58.816250 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:13:59.015859581+00:00 stderr F I0813 20:13:59.015758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:13:59.015904282+00:00 stderr F I0813 20:13:59.015867 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:13:59.226871771+00:00 stderr F I0813 20:13:59.226666 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:13:59.226871771+00:00 stderr F I0813 20:13:59.226742 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:13:59.419289098+00:00 stderr F I0813 20:13:59.419152 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:13:59.419289098+00:00 stderr F I0813 20:13:59.419230 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:13:59.619827387+00:00 stderr F I0813 20:13:59.619650 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:13:59.619827387+00:00 stderr F I0813 20:13:59.619724 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:13:59.815385904+00:00 stderr F I0813 20:13:59.815249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:13:59.815385904+00:00 stderr F I0813 20:13:59.815325 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:14:00.015931724+00:00 stderr F I0813 20:14:00.015717 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:14:00.015931724+00:00 stderr F I0813 20:14:00.015885 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:14:00.234377297+00:00 stderr F I0813 20:14:00.234281 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:14:00.234377297+00:00 stderr F I0813 20:14:00.234351 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:14:00.414505691+00:00 stderr F I0813 20:14:00.414410 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:14:00.414505691+00:00 stderr F I0813 20:14:00.414484 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:14:00.615417922+00:00 stderr F I0813 20:14:00.615262 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:14:00.615417922+00:00 stderr F I0813 20:14:00.615346 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:14:00.815018555+00:00 stderr F I0813 20:14:00.814875 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:14:00.815018555+00:00 stderr F I0813 20:14:00.814972 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:14:01.019400795+00:00 stderr F I0813 20:14:01.018477 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:14:01.019612631+00:00 stderr F I0813 20:14:01.019521 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:14:01.217608247+00:00 stderr F I0813 20:14:01.216765 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:14:01.217608247+00:00 stderr F I0813 20:14:01.216920 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:14:01.415330817+00:00 stderr F I0813 20:14:01.415212 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:14:01.415330817+00:00 stderr F I0813 20:14:01.415284 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:14:01.617152582+00:00 stderr F I0813 20:14:01.617097 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:14:01.617282986+00:00 stderr F I0813 20:14:01.617268 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:14:01.814648095+00:00 stderr F I0813 20:14:01.814586 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:14:01.814864021+00:00 stderr F I0813 20:14:01.814760 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:14:02.014336540+00:00 stderr F I0813 20:14:02.014160 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:14:02.015155753+00:00 stderr F I0813 20:14:02.015102 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:14:02.215230920+00:00 stderr F I0813 20:14:02.214761 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:14:02.215286621+00:00 stderr F I0813 20:14:02.215233 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:14:02.415911973+00:00 stderr F I0813 20:14:02.415713 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:14:02.416126300+00:00 stderr F I0813 20:14:02.416052 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:14:02.615518257+00:00 stderr F I0813 20:14:02.615394 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:14:02.615518257+00:00 stderr F I0813 20:14:02.615476 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:14:02.838552571+00:00 stderr F I0813 20:14:02.838390 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:14:02.838617163+00:00 stderr F I0813 20:14:02.838584 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:14:03.027283372+00:00 stderr F I0813 20:14:03.027133 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:14:03.027283372+00:00 stderr F I0813 20:14:03.027242 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:14:03.222765997+00:00 stderr F I0813 20:14:03.222560 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:14:03.222765997+00:00 stderr F I0813 20:14:03.222681 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:14:03.420601300+00:00 stderr F I0813 20:14:03.420453 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:14:03.421113364+00:00 stderr F I0813 20:14:03.421057 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:14:03.617066523+00:00 stderr F I0813 20:14:03.616967 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:14:03.617121434+00:00 stderr F I0813 20:14:03.617064 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:14:03.818043865+00:00 stderr F I0813 20:14:03.817875 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:14:03.818043865+00:00 stderr F I0813 20:14:03.817982 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:14:04.021085856+00:00 stderr F I0813 20:14:04.020926 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:14:04.021085856+00:00 stderr F I0813 20:14:04.021019 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:14:04.222729668+00:00 stderr F I0813 20:14:04.222669 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:14:04.245133170+00:00 stderr F I0813 20:14:04.245074 1 log.go:245] Operconfig Controller complete
2025-08-13T20:14:46.226385533+00:00 stderr F I0813 20:14:46.226016 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.236012789+00:00 stderr F I0813 20:14:46.235859 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.243878934+00:00 stderr F I0813 20:14:46.243702 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.255624171+00:00 stderr F I0813 20:14:46.255576 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.263317642+00:00 stderr F I0813 20:14:46.263194 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.277946161+00:00 stderr F I0813 20:14:46.276878 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.290137251+00:00 stderr F I0813 20:14:46.289856 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.301030933+00:00 stderr F I0813 20:14:46.300886 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.311577085+00:00 stderr F I0813 20:14:46.311445 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:15:16.693640856+00:00 stderr F I0813 20:15:16.693425 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:15:16.694565642+00:00 stderr F I0813 20:15:16.694470 1 log.go:245] successful reconciliation
2025-08-13T20:15:18.098319579+00:00 stderr F I0813 20:15:18.093704 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:15:18.099662928+00:00 stderr F I0813 20:15:18.099572 1 log.go:245] successful reconciliation
2025-08-13T20:15:19.293654066+00:00 stderr F I0813 20:15:19.291996 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:15:19.293654066+00:00 stderr F I0813 20:15:19.292752 1 log.go:245] successful reconciliation
2025-08-13T20:16:15.330726733+00:00 stderr F I0813 20:16:15.330619 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:17:04.248934446+00:00 stderr F I0813 20:17:04.247105 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:17:04.923761548+00:00 stderr F I0813 20:17:04.923308 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:17:04.928896624+00:00 stderr F I0813 20:17:04.927616 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:17:04.933402173+00:00 stderr F I0813 20:17:04.933366 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:17:04.933532467+00:00 stderr F I0813 20:17:04.933442 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0038ed480 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948374 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948451 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948460 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:17:04.953758454+00:00 stderr F I0813 20:17:04.953665 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:17:04.953864467+00:00 stderr F I0813 20:17:04.953849 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:17:04.953902188+00:00 stderr F I0813 20:17:04.953887 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:17:04.953931809+00:00 stderr F I0813 20:17:04.953920 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:17:04.954125055+00:00 stderr F I0813 20:17:04.954104 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:17:04.975956108+00:00 stderr F I0813 20:17:04.975644 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:17:05.003896586+00:00 stderr F I0813 20:17:05.002820 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:17:05.003896586+00:00 stderr F I0813 20:17:05.002885 1 log.go:245] Starting render phase
2025-08-13T20:17:05.018018340+00:00 stderr F I0813 20:17:05.016832 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined.
Using: 9107 2025-08-13T20:17:05.095545084+00:00 stderr F I0813 20:17:05.095477 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:17:05.095627966+00:00 stderr F I0813 20:17:05.095611 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:17:05.095709958+00:00 stderr F I0813 20:17:05.095682 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:17:05.095826582+00:00 stderr F I0813 20:17:05.095761 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:17:05.114408032+00:00 stderr F I0813 20:17:05.114126 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:17:05.114408032+00:00 stderr F I0813 20:17:05.114169 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:17:05.128207176+00:00 stderr F I0813 20:17:05.126949 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:17:05.148047803+00:00 stderr F I0813 20:17:05.146815 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:17:05.155462774+00:00 stderr F I0813 20:17:05.155385 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:17:05.155462774+00:00 stderr F I0813 20:17:05.155434 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:17:05.165527762+00:00 stderr F I0813 20:17:05.165441 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 
2025-08-13T20:17:05.165630375+00:00 stderr F I0813 20:17:05.165615 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:17:05.189557328+00:00 stderr F I0813 20:17:05.187764 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:17:05.189557328+00:00 stderr F I0813 20:17:05.187892 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:17:05.199769360+00:00 stderr F I0813 20:17:05.198740 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:17:05.200000276+00:00 stderr F I0813 20:17:05.199944 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:17:05.215723615+00:00 stderr F I0813 20:17:05.215615 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:17:05.216057955+00:00 stderr F I0813 20:17:05.216006 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:17:05.229190230+00:00 stderr F I0813 20:17:05.229001 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:17:05.231911968+00:00 stderr F I0813 20:17:05.231476 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:17:05.249879071+00:00 stderr F I0813 20:17:05.247699 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:17:05.249879071+00:00 stderr F I0813 20:17:05.247765 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:17:05.262497681+00:00 stderr F I0813 20:17:05.262441 1 
log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:17:05.262631155+00:00 stderr F I0813 20:17:05.262611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:17:05.269325766+00:00 stderr F I0813 20:17:05.269269 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:17:05.269427789+00:00 stderr F I0813 20:17:05.269414 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:17:05.278072156+00:00 stderr F I0813 20:17:05.276175 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:17:05.278072156+00:00 stderr F I0813 20:17:05.276242 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:17:05.396580740+00:00 stderr F I0813 20:17:05.396502 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:17:05.396580740+00:00 stderr F I0813 20:17:05.396569 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:17:05.600000119+00:00 stderr F I0813 20:17:05.599714 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:17:05.600124203+00:00 stderr F I0813 20:17:05.600107 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:17:05.800443824+00:00 stderr F I0813 20:17:05.799029 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:17:05.800443824+00:00 stderr F I0813 20:17:05.799429 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:17:05.994423033+00:00 stderr F I0813 20:17:05.994317 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:17:05.995766151+00:00 stderr F I0813 20:17:05.994432 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:17:06.200320373+00:00 stderr F I0813 20:17:06.199748 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:17:06.200430726+00:00 stderr F I0813 20:17:06.200412 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:17:06.397920626+00:00 stderr F I0813 20:17:06.396137 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:17:06.397920626+00:00 stderr F I0813 20:17:06.396208 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:17:06.597172416+00:00 stderr F I0813 20:17:06.595172 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:17:06.597172416+00:00 stderr F I0813 20:17:06.595253 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:17:06.961050417+00:00 stderr F I0813 20:17:06.960676 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:17:06.961753458+00:00 stderr F I0813 20:17:06.961479 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:17:07.025874209+00:00 stderr F I0813 20:17:07.025442 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was 
successful 2025-08-13T20:17:07.025874209+00:00 stderr F I0813 20:17:07.025856 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:17:07.600142918+00:00 stderr F I0813 20:17:07.596664 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:17:07.600142918+00:00 stderr F I0813 20:17:07.596734 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:17:08.303768101+00:00 stderr F I0813 20:17:08.303595 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:17:08.303768101+00:00 stderr F I0813 20:17:08.303664 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:17:09.114260336+00:00 stderr F I0813 20:17:09.114146 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:17:09.114260336+00:00 stderr F I0813 20:17:09.114227 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:17:09.141484114+00:00 stderr F I0813 20:17:09.141308 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:17:09.141484114+00:00 stderr F I0813 20:17:09.141358 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:17:09.151986274+00:00 stderr F I0813 20:17:09.149873 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:17:09.151986274+00:00 stderr F I0813 20:17:09.150046 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:17:09.436568531+00:00 stderr F I0813 20:17:09.436517 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 
2025-08-13T20:17:09.436715335+00:00 stderr F I0813 20:17:09.436697 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:17:09.450451017+00:00 stderr F I0813 20:17:09.450399 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:17:09.450556220+00:00 stderr F I0813 20:17:09.450539 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:17:09.467747911+00:00 stderr F I0813 20:17:09.467593 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:17:09.467747911+00:00 stderr F I0813 20:17:09.467664 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:17:09.491358585+00:00 stderr F I0813 20:17:09.490353 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:17:09.492730374+00:00 stderr F I0813 20:17:09.492588 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:17:09.513332593+00:00 stderr F I0813 20:17:09.513225 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:17:09.513332593+00:00 stderr F I0813 20:17:09.513291 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:17:09.521854906+00:00 stderr F I0813 20:17:09.519131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:17:09.521854906+00:00 stderr F I0813 20:17:09.519202 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:17:09.525744807+00:00 stderr F I0813 
20:17:09.525647 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:17:09.525744807+00:00 stderr F I0813 20:17:09.525719 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:17:09.623143019+00:00 stderr F I0813 20:17:09.622999 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:17:09.623143019+00:00 stderr F I0813 20:17:09.623069 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:17:09.809079268+00:00 stderr F I0813 20:17:09.808507 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:17:09.809079268+00:00 stderr F I0813 20:17:09.808594 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:17:10.025402256+00:00 stderr F I0813 20:17:10.025194 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:17:10.025402256+00:00 stderr F I0813 20:17:10.025275 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:17:10.209152154+00:00 stderr F I0813 20:17:10.209003 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:17:10.209152154+00:00 stderr F I0813 20:17:10.209075 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:17:10.399028096+00:00 stderr F I0813 20:17:10.398950 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:17:10.399139459+00:00 
stderr F I0813 20:17:10.399124 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:17:10.624912547+00:00 stderr F I0813 20:17:10.624513 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:17:10.625088892+00:00 stderr F I0813 20:17:10.625068 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:17:10.819764451+00:00 stderr F I0813 20:17:10.819627 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:17:10.819845573+00:00 stderr F I0813 20:17:10.819766 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:17:11.011765324+00:00 stderr F I0813 20:17:11.011626 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:17:11.011765324+00:00 stderr F I0813 20:17:11.011698 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:17:11.206567386+00:00 stderr F I0813 20:17:11.206413 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:17:11.206567386+00:00 stderr F I0813 20:17:11.206509 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:17:11.409652566+00:00 stderr F I0813 20:17:11.409552 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:17:11.409652566+00:00 stderr F I0813 20:17:11.409636 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 
2025-08-13T20:17:11.599700713+00:00 stderr F I0813 20:17:11.599627 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:17:11.599743204+00:00 stderr F I0813 20:17:11.599697 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:17:11.828678432+00:00 stderr F I0813 20:17:11.828561 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:17:11.828678432+00:00 stderr F I0813 20:17:11.828666 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:17:12.007482418+00:00 stderr F I0813 20:17:12.007375 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:17:12.007482418+00:00 stderr F I0813 20:17:12.007460 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:17:12.246413281+00:00 stderr F I0813 20:17:12.245524 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:17:12.246413281+00:00 stderr F I0813 20:17:12.245593 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:17:12.421217743+00:00 stderr F I0813 20:17:12.421129 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:17:12.421217743+00:00 stderr F I0813 20:17:12.421201 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:17:12.615136101+00:00 stderr F I0813 20:17:12.615078 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:17:12.615335817+00:00 stderr F I0813 20:17:12.615316 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:17:12.841163046+00:00 stderr F I0813 20:17:12.841047 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:17:12.841163046+00:00 stderr F I0813 20:17:12.841126 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:17:13.041407154+00:00 stderr F I0813 20:17:13.041305 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:17:13.041407154+00:00 stderr F I0813 20:17:13.041386 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:17:13.195388202+00:00 stderr F I0813 20:17:13.195274 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:17:13.195388202+00:00 stderr F I0813 20:17:13.195345 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:17:13.397167614+00:00 stderr F I0813 20:17:13.395151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:17:13.397167614+00:00 stderr F I0813 20:17:13.395395 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:17:14.550767769+00:00 stderr F I0813 20:17:14.546523 1 log.go:245] Apply / Create 
of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:17:14.550767769+00:00 stderr F I0813 20:17:14.546616 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:17:14.751016676+00:00 stderr F I0813 20:17:14.750921 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:17:14.751136970+00:00 stderr F I0813 20:17:14.751116 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:17:15.457592522+00:00 stderr F I0813 20:17:15.457543 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:17:15.458085836+00:00 stderr F I0813 20:17:15.457691 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:17:16.081516709+00:00 stderr F I0813 20:17:16.080115 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:17:16.081516709+00:00 stderr F I0813 20:17:16.080196 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:17:18.510213935+00:00 stderr F I0813 20:17:18.509540 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:17:18.510213935+00:00 stderr F I0813 20:17:18.509656 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:17:20.689240763+00:00 stderr F I0813 20:17:20.687479 1 log.go:245] Apply / Create of (/v1, 
Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:17:20.689240763+00:00 stderr F I0813 20:17:20.687544 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.815747 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.815890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.827718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.828089 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:17:21.110686038+00:00 stderr F I0813 20:17:21.110587 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:21.110732109+00:00 stderr F I0813 20:17:21.110719 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:17:21.455195706+00:00 stderr F I0813 20:17:21.455105 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:21.455195706+00:00 stderr F I0813 20:17:21.455173 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:17:21.468591539+00:00 stderr F I0813 20:17:21.468122 1 log.go:245] Apply / 
Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:21.468591539+00:00 stderr F I0813 20:17:21.468188 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:17:21.479685966+00:00 stderr F I0813 20:17:21.477249 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:17:21.479685966+00:00 stderr F I0813 20:17:21.477320 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:17:21.609719289+00:00 stderr F I0813 20:17:21.609615 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:17:21.609759620+00:00 stderr F I0813 20:17:21.609712 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:17:21.643704690+00:00 stderr F I0813 20:17:21.642515 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:17:21.643704690+00:00 stderr F I0813 20:17:21.642611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:17:22.156243025+00:00 stderr F I0813 20:17:22.155518 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:17:22.163195764+00:00 stderr F I0813 20:17:22.162545 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:17:22.175694211+00:00 stderr F I0813 20:17:22.174350 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 
2025-08-13T20:17:22.175940878+00:00 stderr F I0813 20:17:22.175917 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:17:24.354742039+00:00 stderr F I0813 20:17:24.354686 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:17:24.354908564+00:00 stderr F I0813 20:17:24.354893 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:17:25.198849754+00:00 stderr F I0813 20:17:25.196380 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:17:25.198849754+00:00 stderr F I0813 20:17:25.196470 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:17:25.367419628+00:00 stderr F I0813 20:17:25.367017 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:17:25.367419628+00:00 stderr F I0813 20:17:25.367087 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:17:25.611864598+00:00 stderr F I0813 20:17:25.610414 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:17:25.611864598+00:00 stderr F I0813 20:17:25.610533 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:17:25.865057108+00:00 stderr F I0813 20:17:25.864496 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:17:25.865057108+00:00 stderr F I0813 20:17:25.864581 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:17:25.964360494+00:00 stderr F I0813 20:17:25.964059 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:17:25.964360494+00:00 stderr F I0813 20:17:25.964140 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:17:25.976997645+00:00 stderr F I0813 20:17:25.976861 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:17:25.976997645+00:00 stderr F I0813 20:17:25.976925 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:17:25.993494326+00:00 stderr F I0813 20:17:25.993267 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:17:25.993494326+00:00 stderr F I0813 20:17:25.993333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:17:26.007510616+00:00 stderr F I0813 20:17:26.006831 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:17:26.007510616+00:00 stderr F I0813 20:17:26.006899 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:17:26.015620538+00:00 stderr F I0813 20:17:26.015577 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:17:26.015652549+00:00 stderr F I0813 20:17:26.015626 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:17:26.028443164+00:00 stderr F I0813 20:17:26.028273 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:17:26.028443164+00:00 stderr F I0813 20:17:26.028365 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:17:26.156008707+00:00 stderr F I0813 20:17:26.155864 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:17:26.156008707+00:00 stderr F I0813 20:17:26.155938 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:17:26.192566731+00:00 stderr F I0813 20:17:26.192221 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:17:26.192566731+00:00 stderr F I0813 20:17:26.192307 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:17:26.202242798+00:00 stderr F I0813 20:17:26.202193 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:17:26.202305549+00:00 stderr F I0813 20:17:26.202262 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:17:26.206278563+00:00 stderr F I0813 20:17:26.206218 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:17:26.206525590+00:00 stderr F I0813 20:17:26.206505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:17:26.210906375+00:00 stderr F I0813 20:17:26.210881 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:17:26.211096360+00:00 stderr F I0813 20:17:26.211077 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:17:26.272601667+00:00 stderr F I0813 20:17:26.272547 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:17:26.272877395+00:00 stderr F I0813 20:17:26.272858 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:17:26.497129599+00:00 stderr F I0813 20:17:26.497077 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:17:26.497238062+00:00 stderr F I0813 20:17:26.497223 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:17:26.671165119+00:00 stderr F I0813 20:17:26.671060 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:17:26.671215510+00:00 stderr F I0813 20:17:26.671180 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:17:26.878282874+00:00 stderr F I0813 20:17:26.876824 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:17:26.878282874+00:00 stderr F I0813 20:17:26.876943 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:17:27.078192682+00:00 stderr F I0813 20:17:27.077948 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:17:27.078192682+00:00 stderr F I0813 20:17:27.078042 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:17:27.281010014+00:00 stderr F I0813 20:17:27.280909 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:17:27.281151648+00:00 stderr F I0813 20:17:27.281132 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:17:27.475283332+00:00 stderr F I0813 20:17:27.475194 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:17:27.475283332+00:00 stderr F I0813 20:17:27.475264 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:17:27.673828522+00:00 stderr F I0813 20:17:27.673727 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:17:27.673934355+00:00 stderr F I0813 20:17:27.673919 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:17:27.882406239+00:00 stderr F I0813 20:17:27.881724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:17:27.882453420+00:00 stderr F I0813 20:17:27.882413 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:17:28.089025749+00:00 stderr F I0813 20:17:28.088921 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:17:28.089025749+00:00 stderr F I0813 20:17:28.089011 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:17:28.316000211+00:00 stderr F I0813 20:17:28.312312 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:17:28.316000211+00:00 stderr F I0813 20:17:28.312405 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:17:28.474872858+00:00 stderr F I0813 20:17:28.474171 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:17:28.474872858+00:00 stderr F I0813 20:17:28.474241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:17:28.677523755+00:00 stderr F I0813 20:17:28.677094 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:17:28.677523755+00:00 stderr F I0813 20:17:28.677162 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:17:28.878186476+00:00 stderr F I0813 20:17:28.877952 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:17:28.878186476+00:00 stderr F I0813 20:17:28.878077 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:17:29.074164322+00:00 stderr F I0813 20:17:29.072985 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:17:29.074345078+00:00 stderr F I0813 20:17:29.074323 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:17:29.810163930+00:00 stderr F I0813 20:17:29.808755 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:17:29.810163930+00:00 stderr F I0813 20:17:29.808885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:17:29.979875406+00:00 stderr F I0813 20:17:29.979530 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:17:29.980053091+00:00 stderr F I0813 20:17:29.980026 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:17:29.987354000+00:00 stderr F I0813 20:17:29.987307 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:17:29.987504534+00:00 stderr F I0813 20:17:29.987452 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:17:30.003726477+00:00 stderr F I0813 20:17:30.003608 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:17:30.003848001+00:00 stderr F I0813 20:17:30.003737 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:17:30.081601471+00:00 stderr F I0813 20:17:30.081307 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:17:30.081601471+00:00 stderr F I0813 20:17:30.081480 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:17:30.285579766+00:00 stderr F I0813 20:17:30.285454 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:17:30.285700750+00:00 stderr F I0813 20:17:30.285616 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:17:30.476206270+00:00 stderr F I0813 20:17:30.474574 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:17:30.476206270+00:00 stderr F I0813 20:17:30.474644 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:17:30.681186174+00:00 stderr F I0813 20:17:30.679527 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:17:30.681186174+00:00 stderr F I0813 20:17:30.679616 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:17:30.899633072+00:00 stderr F I0813 20:17:30.898764 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:17:30.899633072+00:00 stderr F I0813 20:17:30.898896 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:17:31.086475278+00:00 stderr F I0813 20:17:31.086416 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:17:31.086651643+00:00 stderr F I0813 20:17:31.086601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:17:31.278907913+00:00 stderr F I0813 20:17:31.276175 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:17:31.278907913+00:00 stderr F I0813 20:17:31.276245 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:17:31.489349833+00:00 stderr F I0813 20:17:31.488684 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:17:31.489349833+00:00 stderr F I0813 20:17:31.489295 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:17:31.673177362+00:00 stderr F I0813 20:17:31.672490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:17:31.673177362+00:00 stderr F I0813 20:17:31.672744 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:17:32.992174549+00:00 stderr F I0813 20:17:32.991855 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:17:32.992174549+00:00 stderr F I0813 20:17:32.991920 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:17:34.146403610+00:00 stderr F I0813 20:17:34.141673 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:17:34.418636985+00:00 stderr F I0813 20:17:34.418051 1 log.go:245] Operconfig Controller complete
2025-08-13T20:19:15.390012368+00:00 stderr F I0813 20:19:15.389744 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:20:15.101084387+00:00 stderr F I0813 20:20:15.100748 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.114815029+00:00 stderr F I0813 20:20:15.114695 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.125764452+00:00 stderr F I0813 20:20:15.125638 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.137224420+00:00 stderr F I0813 20:20:15.137120 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.152676501+00:00 stderr F I0813 20:20:15.152593 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.164924782+00:00 stderr F I0813 20:20:15.164672 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.176847732+00:00 stderr F I0813 20:20:15.176734 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.187516697+00:00 stderr F I0813 20:20:15.187416 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.198853051+00:00 stderr F I0813 20:20:15.198680 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.220993194+00:00 stderr F I0813 20:20:15.220128 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.300436784+00:00 stderr F I0813 20:20:15.300324 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.503327633+00:00 stderr F I0813 20:20:15.503190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.707100297+00:00 stderr F I0813 20:20:15.706874 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:15.899922868+00:00 stderr F I0813 20:20:15.899841 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:16.109863998+00:00 stderr F I0813 20:20:16.109035 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:20:16.300858926+00:00 stderr F I0813 20:20:16.300648 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:20:16.502383996+00:00 stderr F I0813 20:20:16.502267 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:16.700403735+00:00 stderr F I0813 20:20:16.700085 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:16.709526806+00:00 stderr F I0813 20:20:16.709440 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:20:16.722279010+00:00 stderr F I0813 20:20:16.722178 1 log.go:245] successful reconciliation
2025-08-13T20:20:16.903428678+00:00 stderr F I0813 20:20:16.903340 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:20:17.101131458+00:00 stderr F I0813 20:20:17.101067 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:17.302029259+00:00 stderr F I0813 20:20:17.301849 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:17.506245175+00:00 stderr F I0813 20:20:17.506137 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:17.698525250+00:00 stderr F I0813 20:20:17.698399 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:17.901593583+00:00 stderr F I0813 20:20:17.901476 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:20:18.098873941+00:00 stderr F I0813 20:20:18.098756 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:20:18.112693666+00:00 stderr F I0813 20:20:18.112663 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:20:18.113472659+00:00 stderr F I0813 20:20:18.113449 1 log.go:245] successful reconciliation
2025-08-13T20:20:18.300057921+00:00 stderr F I0813 20:20:18.299926 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:20:18.499598024+00:00 stderr F I0813 20:20:18.499481 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:20:19.309851151+00:00 stderr F I0813 20:20:19.309652 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:20:19.318428196+00:00 stderr F I0813 20:20:19.318244 1 log.go:245] successful reconciliation
2025-08-13T20:20:34.428447821+00:00 stderr F I0813 20:20:34.419477 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:20:34.851948604+00:00 stderr F I0813 20:20:34.851346 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:20:34.855412533+00:00 stderr F I0813 20:20:34.855289 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:20:34.858759729+00:00 stderr F I0813 20:20:34.858709 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:20:34.858873532+00:00 stderr F I0813 20:20:34.858750 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000ff3900 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866657 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866693 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866702 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:20:34.870574977+00:00 stderr F I0813 20:20:34.870536 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870556 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870577 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870582 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:20:34.873440519+00:00 stderr F I0813 20:20:34.873276 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:20:34.900480641+00:00 stderr F I0813 20:20:34.900355 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:20:34.919302469+00:00 stderr F I0813 20:20:34.919179 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:20:34.919409652+00:00 stderr F I0813 20:20:34.919351 1 log.go:245] Starting render phase
2025-08-13T20:20:34.938492368+00:00 stderr F I0813 20:20:34.938343 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990139 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990184 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990234 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990288 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:20:35.004262808+00:00 stderr F I0813 20:20:35.004169 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:20:35.004262808+00:00 stderr F I0813 20:20:35.004218 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:20:35.017341791+00:00 stderr F I0813 20:20:35.017274 1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:20:35.051393225+00:00 stderr F I0813 20:20:35.051231 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:20:35.058551829+00:00 stderr F I0813 20:20:35.058446 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:20:35.059034963+00:00 stderr F I0813 20:20:35.058900 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:20:35.079694123+00:00 stderr F I0813 20:20:35.079607 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:20:35.079694123+00:00 stderr F I0813 20:20:35.079679 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:20:35.090843102+00:00 stderr F I0813 20:20:35.090591 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:20:35.090989686+00:00 stderr F I0813 20:20:35.090906 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:20:35.108679192+00:00 stderr F I0813 20:20:35.108582 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:20:35.108679192+00:00 stderr F I0813 20:20:35.108661 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:20:35.115992141+00:00 stderr F I0813 20:20:35.115879 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:20:35.115992141+00:00 stderr F I0813 20:20:35.115945 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:20:35.123428763+00:00 stderr F I0813 20:20:35.123263 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:20:35.123428763+00:00 stderr F I0813 20:20:35.123331 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:20:35.129516987+00:00 stderr F I0813 20:20:35.129375 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:20:35.129516987+00:00 stderr F I0813 20:20:35.129442 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:20:35.135237861+00:00 stderr F I0813 20:20:35.135154 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:20:35.135276982+00:00 stderr F I0813 20:20:35.135234 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:20:35.140937714+00:00 stderr F I0813 20:20:35.140874 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:20:35.140998655+00:00 stderr F I0813 20:20:35.140939 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:20:35.146605056+00:00 stderr F I0813 20:20:35.146536 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:20:35.146636316+00:00 stderr F I0813 20:20:35.146606 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:20:35.316872931+00:00 stderr F I0813 20:20:35.316759 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:20:35.316911002+00:00 stderr F I0813 20:20:35.316867 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:20:35.515310052+00:00 stderr F I0813 20:20:35.515201 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:20:35.515310052+00:00 stderr F I0813 20:20:35.515289 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:20:35.716497682+00:00 stderr F I0813 20:20:35.714520 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:20:35.716497682+00:00 stderr F I0813 20:20:35.714574 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:20:35.914878012+00:00 stderr F I0813 20:20:35.914749 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:20:35.914878012+00:00 stderr F I0813 20:20:35.914866 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:20:36.115593748+00:00 stderr F I0813 20:20:36.115430 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:20:36.115593748+00:00 stderr F I0813 20:20:36.115478 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:20:36.314676358+00:00 stderr F I0813 20:20:36.314602 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:20:36.314676358+00:00 stderr F I0813 20:20:36.314651 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:20:36.516143806+00:00 stderr F I0813 20:20:36.516030 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:20:36.516143806+00:00 stderr F I0813 20:20:36.516127 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:20:36.714144305+00:00 stderr F I0813 20:20:36.714085 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:20:36.714343080+00:00 stderr F I0813 20:20:36.714319 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:20:36.914930013+00:00 stderr F I0813 20:20:36.914717 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:20:36.915104228+00:00 stderr F I0813 20:20:36.915080 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:20:37.115703191+00:00 stderr F I0813 20:20:37.115575 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:20:37.115703191+00:00 stderr F I0813 20:20:37.115659 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:20:37.318935769+00:00 stderr F I0813 20:20:37.318860 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:20:37.319066933+00:00 stderr F I0813 20:20:37.319052 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:20:37.525262466+00:00 stderr F I0813 20:20:37.525170 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:20:37.525304217+00:00 stderr F I0813 20:20:37.525259 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:20:37.732249852+00:00 stderr F I0813 20:20:37.732083 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:20:37.732249852+00:00 stderr F I0813 20:20:37.732159 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:20:37.914341566+00:00 stderr F I0813 20:20:37.914221 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:20:37.914404717+00:00 stderr F I0813 20:20:37.914348 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:20:38.118032347+00:00 stderr F I0813 20:20:38.117908 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:20:38.118032347+00:00 stderr F I0813 20:20:38.118019 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:20:38.318535957+00:00 stderr F I0813 20:20:38.318407 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:20:38.318535957+00:00 stderr F I0813 20:20:38.318516 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:20:38.522085155+00:00 stderr F I0813 20:20:38.522006 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:20:38.522085155+00:00 stderr F I0813 20:20:38.522072 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:20:38.729408640+00:00 stderr F I0813 20:20:38.729323 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:20:38.729408640+00:00 stderr F I0813 20:20:38.729392 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:20:38.919334157+00:00 stderr F I0813 20:20:38.919168 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:20:38.919334157+00:00 stderr F I0813 20:20:38.919238 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:20:39.114197166+00:00 stderr F I0813 20:20:39.113946 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:20:39.114197166+00:00 stderr F I0813 20:20:39.114030 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:20:39.314688925+00:00 stderr F I0813 20:20:39.314561 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:20:39.314688925+00:00 stderr F I0813 20:20:39.314633 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:20:39.516921546+00:00 stderr F I0813 20:20:39.516837 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:20:39.516921546+00:00 stderr F I0813 20:20:39.516910 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:20:39.713355890+00:00 stderr F I0813 20:20:39.713214 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:20:39.713355890+00:00 stderr F I0813 20:20:39.713285 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:20:39.913859030+00:00 stderr F I0813 20:20:39.913665 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:20:39.913859030+00:00 stderr F I0813 20:20:39.913753 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:20:40.119089585+00:00 stderr F I0813 20:20:40.118954 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:20:40.119089585+00:00 stderr F I0813 20:20:40.119072 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:20:40.317057773+00:00 stderr F I0813 20:20:40.316939 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:20:40.317107275+00:00
stderr F I0813 20:20:40.317073 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:20:40.522411752+00:00 stderr F I0813 20:20:40.522288 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:20:40.522411752+00:00 stderr F I0813 20:20:40.522356 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:20:40.717497637+00:00 stderr F I0813 20:20:40.717372 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:20:40.717497637+00:00 stderr F I0813 20:20:40.717442 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:20:40.915160267+00:00 stderr F I0813 20:20:40.915061 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:20:40.915160267+00:00 stderr F I0813 20:20:40.915130 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:20:41.114289977+00:00 stderr F I0813 20:20:41.114186 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:20:41.114289977+00:00 stderr F I0813 20:20:41.114261 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:20:41.316610860+00:00 stderr F I0813 20:20:41.316514 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:20:41.316610860+00:00 stderr F I0813 20:20:41.316597 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 
2025-08-13T20:20:41.514700131+00:00 stderr F I0813 20:20:41.514583 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:20:41.514700131+00:00 stderr F I0813 20:20:41.514658 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:20:41.746578498+00:00 stderr F I0813 20:20:41.746453 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:20:41.746578498+00:00 stderr F I0813 20:20:41.746523 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:20:41.920291183+00:00 stderr F I0813 20:20:41.920172 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:20:41.920291183+00:00 stderr F I0813 20:20:41.920282 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:20:42.123346296+00:00 stderr F I0813 20:20:42.123249 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:20:42.123346296+00:00 stderr F I0813 20:20:42.123323 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:20:42.360555405+00:00 stderr F I0813 20:20:42.359752 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:20:42.360555405+00:00 stderr F I0813 20:20:42.359862 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:20:42.521421362+00:00 stderr F I0813 20:20:42.521174 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:20:42.521421362+00:00 stderr F I0813 20:20:42.521269 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:20:42.764649163+00:00 stderr F I0813 20:20:42.764515 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:20:42.764649163+00:00 stderr F I0813 20:20:42.764587 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:20:42.947993663+00:00 stderr F I0813 20:20:42.947726 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:20:42.947993663+00:00 stderr F I0813 20:20:42.947869 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:20:43.115195341+00:00 stderr F I0813 20:20:43.115053 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:20:43.115195341+00:00 stderr F I0813 20:20:43.115161 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:20:43.314743734+00:00 stderr F I0813 20:20:43.314552 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:20:43.314743734+00:00 stderr F I0813 20:20:43.314640 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:20:43.538945162+00:00 stderr F I0813 20:20:43.536506 1 log.go:245] Apply / Create 
of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:20:43.538945162+00:00 stderr F I0813 20:20:43.536611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:20:43.714613733+00:00 stderr F I0813 20:20:43.714499 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:20:43.714613733+00:00 stderr F I0813 20:20:43.714569 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:20:43.916191274+00:00 stderr F I0813 20:20:43.916093 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:20:43.916425280+00:00 stderr F I0813 20:20:43.916375 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:20:44.112591947+00:00 stderr F I0813 20:20:44.112526 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:20:44.112694870+00:00 stderr F I0813 20:20:44.112680 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:20:44.313680614+00:00 stderr F I0813 20:20:44.313552 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:20:44.313680614+00:00 stderr F I0813 20:20:44.313627 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:20:44.521593585+00:00 stderr F I0813 20:20:44.521003 1 log.go:245] Apply / Create of (/v1, 
Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:20:44.521593585+00:00 stderr F I0813 20:20:44.521564 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:20:44.714654432+00:00 stderr F I0813 20:20:44.714600 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:20:44.714764915+00:00 stderr F I0813 20:20:44.714751 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:20:44.915892243+00:00 stderr F I0813 20:20:44.915660 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:20:44.915892243+00:00 stderr F I0813 20:20:44.915760 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:20:45.115664543+00:00 stderr F I0813 20:20:45.115534 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:20:45.115664543+00:00 stderr F I0813 20:20:45.115620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:20:45.341162817+00:00 stderr F I0813 20:20:45.341077 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:20:45.341162817+00:00 stderr F I0813 20:20:45.341146 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:20:45.516009704+00:00 stderr F I0813 20:20:45.515460 1 log.go:245] Apply / 
Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:20:45.516009704+00:00 stderr F I0813 20:20:45.515531 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:20:45.718480460+00:00 stderr F I0813 20:20:45.718322 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:20:45.718480460+00:00 stderr F I0813 20:20:45.718406 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:20:45.917883789+00:00 stderr F I0813 20:20:45.917660 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:20:45.917883789+00:00 stderr F I0813 20:20:45.917741 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:20:46.115838446+00:00 stderr F I0813 20:20:46.115696 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:20:46.115838446+00:00 stderr F I0813 20:20:46.115764 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:20:46.317691464+00:00 stderr F I0813 20:20:46.317581 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:20:46.317691464+00:00 stderr F I0813 20:20:46.317634 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:20:46.523676001+00:00 stderr F I0813 20:20:46.523581 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 
2025-08-13T20:20:46.523720803+00:00 stderr F I0813 20:20:46.523669 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:20:46.719160048+00:00 stderr F I0813 20:20:46.718504 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:20:46.719160048+00:00 stderr F I0813 20:20:46.718650 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:20:46.918250678+00:00 stderr F I0813 20:20:46.918163 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:20:46.918250678+00:00 stderr F I0813 20:20:46.918244 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:20:47.114304512+00:00 stderr F I0813 20:20:47.114169 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:20:47.114304512+00:00 stderr F I0813 20:20:47.114274 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:20:47.317070466+00:00 stderr F I0813 20:20:47.316924 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:20:47.317070466+00:00 stderr F I0813 20:20:47.317018 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:20:47.524885645+00:00 stderr F I0813 20:20:47.524665 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:20:47.524885645+00:00 stderr F I0813 20:20:47.524762 1 log.go:245] reconciling 
(monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:20:47.719041623+00:00 stderr F I0813 20:20:47.718885 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:20:47.719041623+00:00 stderr F I0813 20:20:47.719023 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:20:47.916272160+00:00 stderr F I0813 20:20:47.916161 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:20:47.916272160+00:00 stderr F I0813 20:20:47.916232 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:20:48.114940447+00:00 stderr F I0813 20:20:48.114849 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:20:48.114940447+00:00 stderr F I0813 20:20:48.114925 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:20:48.318125074+00:00 stderr F I0813 20:20:48.318029 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:20:48.318125074+00:00 stderr F I0813 20:20:48.318095 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:20:48.515077522+00:00 stderr F I0813 20:20:48.514919 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:20:48.515077522+00:00 stderr F I0813 20:20:48.515033 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:20:48.716356645+00:00 stderr F I0813 20:20:48.716255 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) 
openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:20:48.716400156+00:00 stderr F I0813 20:20:48.716355 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:20:48.943259860+00:00 stderr F I0813 20:20:48.943095 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:20:48.943259860+00:00 stderr F I0813 20:20:48.943245 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:20:49.169943599+00:00 stderr F I0813 20:20:49.168227 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:20:49.169943599+00:00 stderr F I0813 20:20:49.168305 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:20:49.317727692+00:00 stderr F I0813 20:20:49.317655 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:20:49.317727692+00:00 stderr F I0813 20:20:49.317717 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:20:49.518447409+00:00 stderr F I0813 20:20:49.518139 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:20:49.518447409+00:00 stderr F I0813 20:20:49.518411 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:20:49.715203211+00:00 stderr F I0813 20:20:49.715070 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:20:49.715203211+00:00 stderr F I0813 20:20:49.715160 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-network-diagnostics/network-diagnostics 2025-08-13T20:20:49.923103433+00:00 stderr F I0813 20:20:49.922956 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:20:49.923204656+00:00 stderr F I0813 20:20:49.923166 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:20:50.116761217+00:00 stderr F I0813 20:20:50.116517 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:20:50.116761217+00:00 stderr F I0813 20:20:50.116673 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:20:50.314908870+00:00 stderr F I0813 20:20:50.314731 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:20:50.315132137+00:00 stderr F I0813 20:20:50.315102 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:20:50.514398462+00:00 stderr F I0813 20:20:50.514290 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:20:50.514398462+00:00 stderr F I0813 20:20:50.514382 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:20:50.726556925+00:00 stderr F I0813 20:20:50.726420 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:20:50.726556925+00:00 stderr F I0813 20:20:50.726535 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:20:50.919371475+00:00 stderr F I0813 20:20:50.919234 1 log.go:245] Apply / Create of (/v1, Kind=Service) 
openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:20:50.919371475+00:00 stderr F I0813 20:20:50.919331 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:20:51.116153669+00:00 stderr F I0813 20:20:51.116044 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:20:51.116153669+00:00 stderr F I0813 20:20:51.116120 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:20:51.316356891+00:00 stderr F I0813 20:20:51.316310 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:20:51.316471514+00:00 stderr F I0813 20:20:51.316456 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:20:51.516163331+00:00 stderr F I0813 20:20:51.516066 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:20:51.516163331+00:00 stderr F I0813 20:20:51.516133 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:20:51.719646336+00:00 stderr F I0813 20:20:51.719546 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:20:51.719770710+00:00 stderr F I0813 20:20:51.719756 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:20:51.916366349+00:00 stderr F I0813 20:20:51.915645 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:20:51.918048257+00:00 stderr 
F I0813 20:20:51.918017 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:20:52.117860137+00:00 stderr F I0813 20:20:52.117626 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:20:52.117860137+00:00 stderr F I0813 20:20:52.117750 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:20:52.314295181+00:00 stderr F I0813 20:20:52.314191 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:20:52.314295181+00:00 stderr F I0813 20:20:52.314259 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:20:52.516091738+00:00 stderr F I0813 20:20:52.516039 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:20:52.516239083+00:00 stderr F I0813 20:20:52.516222 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:20:52.719680517+00:00 stderr F I0813 20:20:52.719550 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:20:52.719680517+00:00 stderr F I0813 20:20:52.719668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:20:52.914934307+00:00 stderr F I0813 20:20:52.914744 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:20:52.915017569+00:00 stderr F I0813 20:20:52.914998 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) 
/network-node-identity 2025-08-13T20:20:53.121269393+00:00 stderr F I0813 20:20:53.121151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:20:53.121269393+00:00 stderr F I0813 20:20:53.121221 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:20:53.318071618+00:00 stderr F I0813 20:20:53.317222 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:20:53.318071618+00:00 stderr F I0813 20:20:53.317285 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:20:53.515249083+00:00 stderr F I0813 20:20:53.515093 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:20:53.515249083+00:00 stderr F I0813 20:20:53.515188 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:20:53.715506386+00:00 stderr F I0813 20:20:53.715429 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:20:53.715561408+00:00 stderr F I0813 20:20:53.715502 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:20:53.928710710+00:00 stderr F I0813 20:20:53.928579 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:20:53.928710710+00:00 stderr F I0813 20:20:53.928647 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) 
openshift-network-node-identity/network-node-identity 2025-08-13T20:20:54.115359034+00:00 stderr F I0813 20:20:54.115214 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:20:54.115431756+00:00 stderr F I0813 20:20:54.115353 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:20:54.329936146+00:00 stderr F I0813 20:20:54.329755 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:20:54.329936146+00:00 stderr F I0813 20:20:54.329912 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:20:54.530932861+00:00 stderr F I0813 20:20:54.529003 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:20:54.530932861+00:00 stderr F I0813 20:20:54.530847 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:20:54.721034324+00:00 stderr F I0813 20:20:54.720904 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:20:54.721113416+00:00 stderr F I0813 20:20:54.721035 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:20:54.915171662+00:00 stderr F I0813 20:20:54.915090 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:20:54.915229324+00:00 stderr F I0813 20:20:54.915182 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) 
openshift-network-operator/iptables-alerter 2025-08-13T20:20:55.115471167+00:00 stderr F I0813 20:20:55.115377 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:20:55.115552259+00:00 stderr F I0813 20:20:55.115490 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:20:55.315049650+00:00 stderr F I0813 20:20:55.314920 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:20:55.315049650+00:00 stderr F I0813 20:20:55.315022 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:20:55.515115428+00:00 stderr F I0813 20:20:55.514999 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:20:55.515172839+00:00 stderr F I0813 20:20:55.515132 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:20:55.721548628+00:00 stderr F I0813 20:20:55.721333 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:20:55.742750874+00:00 stderr F I0813 20:20:55.742651 1 log.go:245] Operconfig Controller complete 2025-08-13T20:22:15.414660491+00:00 stderr F I0813 20:22:15.414332 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:23:55.745175459+00:00 stderr F I0813 20:23:55.744924 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:23:56.056716478+00:00 stderr F I0813 20:23:56.056606 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:23:56.062176634+00:00 stderr F I0813 20:23:56.062079 1 ovn_kubernetes.go:753] For Label 
network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:23:56.066000683+00:00 stderr F I0813 20:23:56.065853 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:23:56.066000683+00:00 stderr F I0813 20:23:56.065883 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003b9ea80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.071972 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.072031 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.072040 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077868 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077895 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077901 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077907 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:23:56.078091609+00:00 stderr F 
I0813 20:23:56.078022 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:23:56.091823722+00:00 stderr F I0813 20:23:56.091671 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:23:56.106619705+00:00 stderr F I0813 20:23:56.106510 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:23:56.106619705+00:00 stderr F I0813 20:23:56.106587 1 log.go:245] Starting render phase 2025-08-13T20:23:56.124137446+00:00 stderr F I0813 20:23:56.124053 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161549 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161596 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161653 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:23:56.161763471+00:00 stderr F I0813 20:23:56.161690 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:23:56.175322409+00:00 stderr F I0813 20:23:56.175221 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:23:56.175322409+00:00 stderr F I0813 20:23:56.175269 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:23:56.189743842+00:00 stderr F I0813 20:23:56.189626 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:23:56.223138826+00:00 stderr F I0813 20:23:56.223025 1 log.go:245] reconciling (/v1, 
Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:23:56.229318543+00:00 stderr F I0813 20:23:56.229226 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:23:56.229318543+00:00 stderr F I0813 20:23:56.229284 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:23:56.238065373+00:00 stderr F I0813 20:23:56.238029 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:23:56.238157806+00:00 stderr F I0813 20:23:56.238144 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:23:56.248264415+00:00 stderr F I0813 20:23:56.248236 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:23:56.248351667+00:00 stderr F I0813 20:23:56.248335 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:23:56.256419778+00:00 stderr F I0813 20:23:56.256333 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:23:56.256419778+00:00 stderr F I0813 20:23:56.256408 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:23:56.263159241+00:00 stderr F I0813 20:23:56.263114 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:23:56.263267974+00:00 stderr F I0813 20:23:56.263249 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:23:56.271216531+00:00 stderr F I0813 20:23:56.271161 1 log.go:245] Apply 
/ Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:23:56.271329714+00:00 stderr F I0813 20:23:56.271314 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:23:56.279186849+00:00 stderr F I0813 20:23:56.279141 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:23:56.279288662+00:00 stderr F I0813 20:23:56.279274 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:23:56.285667304+00:00 stderr F I0813 20:23:56.285512 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:23:56.285667304+00:00 stderr F I0813 20:23:56.285593 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:23:56.292482569+00:00 stderr F I0813 20:23:56.292447 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:23:56.292577362+00:00 stderr F I0813 20:23:56.292562 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:23:56.301649901+00:00 stderr F I0813 20:23:56.301509 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:23:56.301649901+00:00 stderr F I0813 20:23:56.301564 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:23:56.506046986+00:00 stderr F I0813 20:23:56.505885 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:23:56.506090177+00:00 stderr F I0813 20:23:56.506040 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:23:56.703672107+00:00 
stderr F I0813 20:23:56.703336 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:23:56.703672107+00:00 stderr F I0813 20:23:56.703429 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:23:56.903017297+00:00 stderr F I0813 20:23:56.902868 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:23:56.903017297+00:00 stderr F I0813 20:23:56.902937 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:23:57.105524158+00:00 stderr F I0813 20:23:57.105419 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:23:57.105524158+00:00 stderr F I0813 20:23:57.105497 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:23:57.304444006+00:00 stderr F I0813 20:23:57.304332 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:23:57.304444006+00:00 stderr F I0813 20:23:57.304407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:23:57.503685003+00:00 stderr F I0813 20:23:57.503636 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:23:57.503957881+00:00 stderr F I0813 20:23:57.503938 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:23:57.704355091+00:00 stderr F I0813 20:23:57.704301 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:23:57.704573877+00:00 stderr 
F I0813 20:23:57.704552 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:23:57.905446501+00:00 stderr F I0813 20:23:57.902869 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:23:57.905587625+00:00 stderr F I0813 20:23:57.905570 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:23:58.103037451+00:00 stderr F I0813 20:23:58.102875 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:23:58.103037451+00:00 stderr F I0813 20:23:58.102951 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:23:58.305037147+00:00 stderr F I0813 20:23:58.304951 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:23:58.305097379+00:00 stderr F I0813 20:23:58.305086 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:23:58.504179291+00:00 stderr F I0813 20:23:58.504046 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:23:58.504179291+00:00 stderr F I0813 20:23:58.504118 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:23:58.716846142+00:00 stderr F I0813 20:23:58.716743 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:23:58.716968416+00:00 stderr F I0813 20:23:58.716953 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:23:58.915933825+00:00 stderr F I0813 20:23:58.915878 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:23:58.916067789+00:00 
stderr F I0813 20:23:58.916051 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:23:59.107436621+00:00 stderr F I0813 20:23:59.107367 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:23:59.107547914+00:00 stderr F I0813 20:23:59.107531 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:23:59.303224768+00:00 stderr F I0813 20:23:59.303168 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:23:59.303363672+00:00 stderr F I0813 20:23:59.303344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:23:59.504193185+00:00 stderr F I0813 20:23:59.504044 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:23:59.504193185+00:00 stderr F I0813 20:23:59.504108 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:23:59.707910500+00:00 stderr F I0813 20:23:59.707758 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:23:59.708084205+00:00 stderr F I0813 20:23:59.708067 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:23:59.905622053+00:00 stderr F I0813 20:23:59.905511 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:23:59.905622053+00:00 stderr F I0813 20:23:59.905586 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:24:00.111376367+00:00 stderr F I0813 20:24:00.111228 1 log.go:245] Apply / Create of (/v1, 
Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:24:00.111376367+00:00 stderr F I0813 20:24:00.111333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:24:00.309879493+00:00 stderr F I0813 20:24:00.307164 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:00.309879493+00:00 stderr F I0813 20:24:00.307251 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:24:00.502397528+00:00 stderr F I0813 20:24:00.502198 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:00.502397528+00:00 stderr F I0813 20:24:00.502279 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:24:00.703889160+00:00 stderr F I0813 20:24:00.703701 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:24:00.703889160+00:00 stderr F I0813 20:24:00.703847 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:24:00.932875887+00:00 stderr F I0813 20:24:00.932711 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:24:00.932875887+00:00 stderr F I0813 20:24:00.932857 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:24:01.109561740+00:00 stderr F I0813 20:24:01.107418 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:24:01.109561740+00:00 stderr F I0813 20:24:01.107519 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) 
/multus-admission-controller-webhook 2025-08-13T20:24:01.303446083+00:00 stderr F I0813 20:24:01.303328 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:24:01.303446083+00:00 stderr F I0813 20:24:01.303412 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:24:01.505444009+00:00 stderr F I0813 20:24:01.505299 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:24:01.505444009+00:00 stderr F I0813 20:24:01.505379 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:24:01.707515097+00:00 stderr F I0813 20:24:01.707435 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:24:01.707628561+00:00 stderr F I0813 20:24:01.707610 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:24:01.903363987+00:00 stderr F I0813 20:24:01.903289 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:24:01.903363987+00:00 stderr F I0813 20:24:01.903342 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:24:02.104367165+00:00 stderr F I0813 20:24:02.104131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:02.104571011+00:00 stderr F I0813 20:24:02.104553 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:24:02.304961631+00:00 stderr F I0813 20:24:02.304722 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:02.304961631+00:00 stderr F I0813 20:24:02.304918 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:24:02.504882538+00:00 stderr F I0813 20:24:02.504097 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:24:02.504882538+00:00 stderr F I0813 20:24:02.504765 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:24:02.704090464+00:00 stderr F I0813 20:24:02.703946 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:24:02.704171946+00:00 stderr F I0813 20:24:02.704101 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:24:02.910926357+00:00 stderr F I0813 20:24:02.910395 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:24:02.910926357+00:00 stderr F I0813 20:24:02.910473 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:24:03.109578138+00:00 stderr F I0813 20:24:03.109479 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:24:03.109650070+00:00 stderr F I0813 20:24:03.109604 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:24:03.311612165+00:00 stderr F I0813 20:24:03.311452 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:24:03.311612165+00:00 stderr F I0813 20:24:03.311527 1 
log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:24:03.518483780+00:00 stderr F I0813 20:24:03.518204 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:24:03.518483780+00:00 stderr F I0813 20:24:03.518275 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:24:03.709574234+00:00 stderr F I0813 20:24:03.709513 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:24:03.709700518+00:00 stderr F I0813 20:24:03.709686 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:24:03.941181797+00:00 stderr F I0813 20:24:03.941042 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:24:03.941181797+00:00 stderr F I0813 20:24:03.941116 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:24:04.148030891+00:00 stderr F I0813 20:24:04.147255 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:24:04.148075283+00:00 stderr F I0813 20:24:04.148031 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:24:04.303652371+00:00 stderr F I0813 20:24:04.303504 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:24:04.303652371+00:00 stderr F I0813 20:24:04.303550 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:24:04.506547913+00:00 stderr F I0813 20:24:04.506449 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:24:04.506547913+00:00 stderr F I0813 20:24:04.506515 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:24:04.706940153+00:00 stderr F I0813 20:24:04.706715 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:24:04.706940153+00:00 stderr F I0813 20:24:04.706865 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:24:04.906138199+00:00 stderr F I0813 20:24:04.905277 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:24:04.906138199+00:00 stderr F I0813 20:24:04.905349 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:24:05.102190895+00:00 stderr F I0813 20:24:05.102072 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:24:05.102190895+00:00 stderr F I0813 20:24:05.102163 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:24:05.303721187+00:00 stderr F I0813 20:24:05.303579 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 
2025-08-13T20:24:05.303721187+00:00 stderr F I0813 20:24:05.303651 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:24:05.504195840+00:00 stderr F I0813 20:24:05.504088 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:24:05.504195840+00:00 stderr F I0813 20:24:05.504182 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:24:05.703151339+00:00 stderr F I0813 20:24:05.703030 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:24:05.703295753+00:00 stderr F I0813 20:24:05.703218 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:24:05.905630218+00:00 stderr F I0813 20:24:05.905487 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:24:05.905630218+00:00 stderr F I0813 20:24:05.905566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:24:06.104202557+00:00 stderr F I0813 20:24:06.104087 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:24:06.104202557+00:00 stderr F I0813 20:24:06.104178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:24:06.304850084+00:00 stderr F I0813 20:24:06.304742 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:24:06.304896585+00:00 stderr F I0813 20:24:06.304870 1 
log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:24:06.502601377+00:00 stderr F I0813 20:24:06.502497 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:24:06.502601377+00:00 stderr F I0813 20:24:06.502574 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:24:06.707914818+00:00 stderr F I0813 20:24:06.707659 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:24:06.707914818+00:00 stderr F I0813 20:24:06.707874 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:24:06.905040635+00:00 stderr F I0813 20:24:06.904866 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:24:06.905040635+00:00 stderr F I0813 20:24:06.904934 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:24:07.104293682+00:00 stderr F I0813 20:24:07.104200 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:24:07.104293682+00:00 stderr F I0813 20:24:07.104283 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:24:07.305269639+00:00 stderr F I0813 20:24:07.305107 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:24:07.305269639+00:00 stderr F I0813 20:24:07.305193 1 
log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:24:07.505268098+00:00 stderr F I0813 20:24:07.505131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:24:07.505268098+00:00 stderr F I0813 20:24:07.505260 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:24:07.707370547+00:00 stderr F I0813 20:24:07.706715 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:24:07.707370547+00:00 stderr F I0813 20:24:07.707020 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:24:07.908327433+00:00 stderr F I0813 20:24:07.908202 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:24:07.908327433+00:00 stderr F I0813 20:24:07.908274 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:24:08.107377655+00:00 stderr F I0813 20:24:08.107259 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:24:08.107377655+00:00 stderr F I0813 20:24:08.107332 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:24:08.304240254+00:00 stderr F I0813 20:24:08.304146 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:24:08.304240254+00:00 stderr F I0813 20:24:08.304217 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 
2025-08-13T20:24:08.506155518+00:00 stderr F I0813 20:24:08.506100 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:24:08.506360984+00:00 stderr F I0813 20:24:08.506343 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:24:08.708078002+00:00 stderr F I0813 20:24:08.707897 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:24:08.708078002+00:00 stderr F I0813 20:24:08.708005 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:24:08.905852117+00:00 stderr F I0813 20:24:08.905680 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:24:08.905852117+00:00 stderr F I0813 20:24:08.905748 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:24:09.105362862+00:00 stderr F I0813 20:24:09.105229 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:24:09.105362862+00:00 stderr F I0813 20:24:09.105302 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:24:09.308393187+00:00 stderr F I0813 20:24:09.308281 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:24:09.308393187+00:00 stderr F I0813 20:24:09.308351 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:24:09.503841136+00:00 stderr F I0813 20:24:09.503616 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:24:09.503841136+00:00 stderr F I0813 20:24:09.503689 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:24:09.710423843+00:00 stderr F I0813 20:24:09.710291 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:24:09.710423843+00:00 stderr F I0813 20:24:09.710370 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:24:09.904903784+00:00 stderr F I0813 20:24:09.904143 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:24:09.905225733+00:00 stderr F I0813 20:24:09.905103 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:24:10.117640306+00:00 stderr F I0813 20:24:10.117359 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:24:10.117640306+00:00 stderr F I0813 20:24:10.117462 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:24:10.334751444+00:00 stderr F I0813 20:24:10.334449 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:24:10.334751444+00:00 stderr F I0813 20:24:10.334553 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:24:10.505470656+00:00 stderr F I0813 20:24:10.505341 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:24:10.505470656+00:00 stderr F I0813 20:24:10.505420 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:24:10.705679281+00:00 stderr F I0813 20:24:10.705551 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:24:10.705679281+00:00 stderr F I0813 20:24:10.705638 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:24:10.902584351+00:00 stderr F I0813 20:24:10.902460 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:24:10.902584351+00:00 stderr F I0813 20:24:10.902548 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:24:11.104597097+00:00 stderr F I0813 20:24:11.104095 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:24:11.104635988+00:00 stderr F I0813 20:24:11.104593 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:24:11.303297819+00:00 stderr F I0813 20:24:11.303215 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:24:11.303340690+00:00 stderr F I0813 20:24:11.303301 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:24:11.507098977+00:00 stderr F I0813 20:24:11.506921 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:24:11.507098977+00:00 stderr F I0813 20:24:11.507039 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:24:11.704892512+00:00 stderr F I0813 20:24:11.704748 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:24:11.704962174+00:00 stderr F I0813 20:24:11.704935 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:24:11.908494334+00:00 stderr F I0813 20:24:11.908373 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:24:11.908494334+00:00 stderr F I0813 20:24:11.908476 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:24:12.107716141+00:00 stderr F I0813 20:24:12.107620 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:24:12.107922847+00:00 stderr F I0813 20:24:12.107719 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:24:12.306970778+00:00 stderr F I0813 20:24:12.306848 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:24:12.306970778+00:00 stderr F I0813 20:24:12.306943 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:24:12.510876339+00:00 stderr F I0813 20:24:12.510737 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:24:12.510937191+00:00 stderr F I0813 20:24:12.510900 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:24:12.703612550+00:00 stderr F I0813 20:24:12.703499 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:24:12.703612550+00:00 stderr F I0813 20:24:12.703577 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:24:12.908060046+00:00 stderr F I0813 20:24:12.907951 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:24:12.908060046+00:00 stderr F I0813 20:24:12.908041 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:24:13.106532681+00:00 stderr F I0813 20:24:13.106404 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:24:13.106532681+00:00 stderr F I0813 20:24:13.106494 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:24:13.304753239+00:00 stderr F I0813 20:24:13.304631 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:24:13.304753239+00:00 stderr F I0813 20:24:13.304735 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:24:13.505024716+00:00 stderr F I0813 20:24:13.504860 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:24:13.505024716+00:00 stderr F I0813 20:24:13.505011 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:24:13.705484167+00:00 stderr F I0813 20:24:13.705360 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:24:13.705484167+00:00 stderr F I0813 20:24:13.705464 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:24:13.903625263+00:00 stderr F I0813 20:24:13.903529 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:24:13.903625263+00:00 stderr F I0813 20:24:13.903602 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:24:14.104092195+00:00 stderr F I0813 20:24:14.103966 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:24:14.104133356+00:00 stderr F I0813 20:24:14.104092 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:24:14.303117126+00:00 stderr F I0813 20:24:14.302972 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:24:14.303117126+00:00 stderr F I0813 20:24:14.303097 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:24:14.508547180+00:00 stderr F I0813 20:24:14.508316 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:24:14.508547180+00:00 stderr F I0813 20:24:14.508405 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:24:14.706561422+00:00 stderr F I0813 20:24:14.706437 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:24:14.706561422+00:00 stderr F I0813 20:24:14.706527 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:24:14.904234084+00:00 stderr F I0813 20:24:14.904096 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:24:14.904234084+00:00 stderr F I0813 20:24:14.904220 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:24:15.102945636+00:00 stderr F I0813 20:24:15.102856 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:24:15.102945636+00:00 stderr F I0813 20:24:15.102920 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:24:15.304903511+00:00 stderr F I0813 20:24:15.304838 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:24:15.305076096+00:00 stderr F I0813 20:24:15.305056 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:24:15.507388510+00:00 stderr F I0813 20:24:15.507226 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:24:15.507492543+00:00 stderr F I0813 20:24:15.507399 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:24:15.720216476+00:00 stderr F I0813 20:24:15.719621 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:24:15.720216476+00:00 stderr F I0813 20:24:15.719699 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:24:15.904202717+00:00 stderr F I0813 20:24:15.904044 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:24:15.904202717+00:00 stderr F I0813 20:24:15.904115 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:24:16.105078811+00:00 stderr F I0813 20:24:16.104583 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:24:16.105078811+00:00 stderr F I0813 20:24:16.104659 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:24:16.303380021+00:00 stderr F I0813 20:24:16.303289 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:24:16.303380021+00:00 stderr F I0813 20:24:16.303362 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:24:16.504512272+00:00 stderr F I0813 20:24:16.504309 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:24:16.504512272+00:00 stderr F I0813 20:24:16.504421 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:24:16.704225613+00:00 stderr F I0813 20:24:16.704172 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:24:16.704360707+00:00 stderr F I0813 20:24:16.704345 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:24:16.911404137+00:00 stderr F I0813 20:24:16.911266 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:24:16.937133472+00:00 stderr F I0813 20:24:16.936845 1 log.go:245] Operconfig Controller complete
2025-08-13T20:25:15.429967135+00:00 stderr F I0813 20:25:15.429662 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:25:16.740137398+00:00 stderr F I0813 20:25:16.739949 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:25:16.742323171+00:00 stderr F I0813 20:25:16.742244 1 log.go:245] successful reconciliation
2025-08-13T20:25:18.126867019+00:00 stderr F I0813 20:25:18.126717 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:25:18.127921029+00:00 stderr F I0813 20:25:18.127892 1 log.go:245] successful reconciliation
2025-08-13T20:25:19.346569016+00:00 stderr F I0813 20:25:19.345537 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:25:19.346569016+00:00 stderr F I0813 20:25:19.346393 1 log.go:245] successful reconciliation
2025-08-13T20:27:16.938669622+00:00 stderr F I0813 20:27:16.938467 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:27:19.812921138+00:00 stderr F I0813 20:27:19.812057 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:27:19.816840280+00:00 stderr F I0813 20:27:19.814837 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:27:19.819847356+00:00 stderr F I0813 20:27:19.817458 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:27:19.819847356+00:00 stderr F I0813 20:27:19.817517 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a8ed00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824289 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824340 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824379 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828891 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828939 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828948 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828956 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:27:19.829950725+00:00 stderr F I0813 20:27:19.829083 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:27:19.861508177+00:00 stderr F I0813 20:27:19.860090 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:27:19.882350973+00:00 stderr F I0813 20:27:19.879935 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:27:19.882350973+00:00 stderr F I0813 20:27:19.880025 1 log.go:245] Starting render phase
2025-08-13T20:27:19.896873369+00:00 stderr F I0813 20:27:19.896220 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931254 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931296 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931322 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931349 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:27:19.944909902+00:00 stderr F I0813 20:27:19.943601 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:27:19.944909902+00:00 stderr F I0813 20:27:19.943649 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:27:19.957856962+00:00 stderr F I0813 20:27:19.955095 1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:27:19.990336141+00:00 stderr F I0813 20:27:19.982749 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:27:19.999886924+00:00 stderr F I0813 20:27:19.995326 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:27:19.999886924+00:00 stderr F I0813 20:27:19.995387 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:27:20.006681439+00:00 stderr F I0813 20:27:20.004174 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:27:20.006681439+00:00 stderr F I0813 20:27:20.004253 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:27:20.019916377+00:00 stderr F I0813 20:27:20.019718 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:27:20.019916377+00:00 stderr F I0813 20:27:20.019844 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:27:20.032081685+00:00 stderr F I0813 20:27:20.030848 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:27:20.032081685+00:00 stderr F I0813 20:27:20.030919 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:27:20.039156817+00:00 stderr F I0813 20:27:20.038392 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:27:20.039156817+00:00 stderr F I0813 20:27:20.038461 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:27:20.048527175+00:00 stderr F I0813 20:27:20.046105 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:27:20.048527175+00:00 stderr F I0813 20:27:20.046172 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:27:20.052883210+00:00 stderr F I0813 20:27:20.051561 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:27:20.052883210+00:00 stderr F I0813 20:27:20.051642 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:27:20.058851640+00:00 stderr F I0813 20:27:20.057424 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:27:20.058851640+00:00 stderr F I0813 20:27:20.057470 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:27:20.065037407+00:00 stderr F I0813 20:27:20.062875 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:27:20.065037407+00:00 stderr F I0813 20:27:20.062920 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:27:20.072289915+00:00 stderr F I0813 20:27:20.072255 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:27:20.072368487+00:00 stderr F I0813 20:27:20.072355 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:27:20.277151312+00:00 stderr F I0813 20:27:20.277043 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:27:20.277151312+00:00 stderr F I0813 20:27:20.277109 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:27:20.479363295+00:00 stderr F I0813 20:27:20.479249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:27:20.479363295+00:00 stderr F I0813 20:27:20.479316 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:27:20.674626708+00:00 stderr F I0813 20:27:20.674501 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:27:20.674626708+00:00 stderr F I0813 20:27:20.674573 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:27:20.877436387+00:00 stderr F I0813 20:27:20.877296 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:27:20.877436387+00:00 stderr F I0813 20:27:20.877385 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:27:21.073222746+00:00 stderr F I0813 20:27:21.073047 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:27:21.073222746+00:00 stderr F I0813 20:27:21.073108 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:27:21.275581292+00:00 stderr F I0813 20:27:21.275420 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:27:21.275581292+00:00 stderr F I0813 20:27:21.275493 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:27:21.475888709+00:00 stderr F I0813 20:27:21.475668 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:27:21.476053114+00:00 stderr F I0813 20:27:21.475950 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:27:21.674091187+00:00 stderr F I0813 20:27:21.673969 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:27:21.674091187+00:00 stderr F I0813 20:27:21.674079 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:27:21.877266086+00:00 stderr F I0813 20:27:21.877110 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:27:21.877266086+00:00 stderr F I0813 20:27:21.877249 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:27:22.075151775+00:00 stderr F I0813 20:27:22.075092 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:27:22.075265938+00:00 stderr F I0813 20:27:22.075247 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:27:22.275664958+00:00 stderr F I0813 20:27:22.275608 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:27:22.275879375+00:00 stderr F I0813 20:27:22.275859 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:27:22.492454097+00:00 stderr F I0813 20:27:22.492318 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:27:22.492579351+00:00 stderr F I0813 20:27:22.492565 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:27:22.686749513+00:00 stderr F I0813 20:27:22.686621 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:27:22.686749513+00:00 stderr F I0813 20:27:22.686691 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:27:22.873680618+00:00 stderr F I0813 20:27:22.873593 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:27:22.873680618+00:00 stderr F I0813 20:27:22.873661 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:27:23.072481833+00:00 stderr F I0813 20:27:23.072305 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:27:23.072481833+00:00 stderr F I0813 20:27:23.072374 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:27:23.276458574+00:00 stderr F I0813 20:27:23.276323 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:27:23.276458574+00:00 stderr F I0813 20:27:23.276436 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:27:23.484912135+00:00 stderr F I0813 20:27:23.483658 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:27:23.484912135+00:00 stderr F I0813 20:27:23.484887 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:27:23.677401369+00:00 stderr F I0813 20:27:23.677220 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:27:23.677401369+00:00 stderr F I0813 20:27:23.677313 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:27:23.880580669+00:00 stderr F I0813 20:27:23.880454 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:27:23.880580669+00:00 stderr F I0813 20:27:23.880539 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:27:24.074331689+00:00 stderr F I0813 20:27:24.074174 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:27:24.074331689+00:00 stderr F I0813 20:27:24.074250 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:27:24.275878012+00:00 stderr F I0813 20:27:24.275651 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:27:24.275878012+00:00 stderr F I0813 20:27:24.275734 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:27:24.490976322+00:00 stderr F I0813 20:27:24.490877 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:27:24.490976322+00:00 stderr F I0813 20:27:24.490949 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:27:24.678108233+00:00 stderr F I0813 20:27:24.677687 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:27:24.678108233+00:00 stderr F I0813 20:27:24.677785 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:27:24.879197503+00:00 stderr F I0813 20:27:24.879127 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:27:24.879323967+00:00 stderr F I0813 20:27:24.879309 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:27:25.074599921+00:00 stderr F I0813 20:27:25.074447 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:27:25.074599921+00:00 stderr F I0813 20:27:25.074543 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:27:25.278032578+00:00 stderr F I0813 20:27:25.277904 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:27:25.278091839+00:00 stderr F I0813 20:27:25.278028 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:27:25.482462533+00:00 stderr F I0813 20:27:25.481667 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:27:25.482462533+00:00 stderr F I0813 20:27:25.481758 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:27:25.675832623+00:00 stderr F I0813 20:27:25.675602 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:27:25.675832623+00:00 stderr F I0813 20:27:25.675686 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:27:25.875290516+00:00 stderr F I0813 20:27:25.875144 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:27:25.875290516+00:00 stderr F I0813 20:27:25.875220 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:27:26.074473591+00:00 stderr F I0813 20:27:26.074356 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:27:26.074473591+00:00 stderr F I0813 20:27:26.074429 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:27:26.275488309+00:00 stderr F I0813 20:27:26.275383 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:27:26.275488309+00:00 stderr F I0813 20:27:26.275470 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:27:26.475374105+00:00 stderr F I0813 20:27:26.475280 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:27:26.475374105+00:00 stderr F I0813 20:27:26.475356 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:27:26.682038284+00:00 stderr F I0813 20:27:26.681942 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:27:26.682249990+00:00 stderr F I0813 20:27:26.682232 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:27:26.880741565+00:00 stderr F I0813 20:27:26.880534 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:27:26.880741565+00:00 stderr F I0813 20:27:26.880624 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:27:27.085580382+00:00 stderr F I0813 20:27:27.085487 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:27:27.085580382+00:00 stderr F I0813 20:27:27.085559 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:27:27.294074424+00:00 stderr F I0813 20:27:27.293871 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:27:27.294074424+00:00 stderr F I0813 20:27:27.293955 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:27:27.479526467+00:00 stderr F I0813 20:27:27.479420 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:27:27.479526467+00:00 stderr F I0813 20:27:27.479508 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:27:27.713295301+00:00 stderr F I0813 20:27:27.713142 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:27:27.713479096+00:00 stderr F I0813 20:27:27.713458 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:27:27.905872938+00:00 stderr F I0813 20:27:27.905709 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:27:27.905872938+00:00 stderr F I0813 20:27:27.905783 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:27:28.076200558+00:00 stderr F I0813 20:27:28.075234 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:27:28.076200558+00:00 stderr F I0813 20:27:28.075333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:27:28.272687606+00:00 stderr F I0813 20:27:28.272579 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:27:28.272983305+00:00 stderr F I0813 20:27:28.272963 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:27:28.475242958+00:00 stderr F I0813 20:27:28.475165 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:27:28.475400733+00:00 stderr F I0813 20:27:28.475385 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:27:28.676608776+00:00 stderr F I0813 20:27:28.676464 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:27:28.676866624+00:00 stderr F I0813 20:27:28.676840 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:27:28.874520685+00:00 stderr F I0813 20:27:28.874390 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:27:28.874520685+00:00 stderr F I0813 20:27:28.874488 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:27:29.078941491+00:00 stderr F I0813 20:27:29.078704 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:27:29.078941491+00:00 stderr F I0813 20:27:29.078885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:27:29.274753910+00:00 stderr F I0813 20:27:29.274270 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:27:29.274753910+00:00 stderr F I0813 20:27:29.274357 1 log.go:245] 
reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:27:29.478494395+00:00 stderr F I0813 20:27:29.476646 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:27:29.478494395+00:00 stderr F I0813 20:27:29.476729 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:27:29.673460880+00:00 stderr F I0813 20:27:29.673334 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:27:29.673460880+00:00 stderr F I0813 20:27:29.673448 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:29.903861329+00:00 stderr F I0813 20:27:29.903288 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:29.903924220+00:00 stderr F I0813 20:27:29.903904 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.075440425+00:00 stderr F I0813 20:27:30.075353 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:30.075577639+00:00 stderr F I0813 20:27:30.075562 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.275364651+00:00 stderr F I0813 20:27:30.275309 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:30.275477125+00:00 stderr F I0813 20:27:30.275462 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.475919065+00:00 stderr F I0813 20:27:30.475812 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:30.475919065+00:00 stderr F I0813 20:27:30.475891 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:27:30.676348546+00:00 stderr F I0813 20:27:30.676239 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:27:30.676403658+00:00 stderr F I0813 20:27:30.676357 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:27:30.875751948+00:00 stderr F I0813 20:27:30.875595 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:27:30.875751948+00:00 stderr F I0813 20:27:30.875706 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:27:31.077562149+00:00 stderr F I0813 20:27:31.077426 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:27:31.077562149+00:00 stderr F I0813 20:27:31.077503 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:27:31.275134298+00:00 stderr F I0813 20:27:31.274718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:27:31.275134298+00:00 stderr F I0813 20:27:31.274879 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 
2025-08-13T20:27:31.478038670+00:00 stderr F I0813 20:27:31.477907 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:27:31.478038670+00:00 stderr F I0813 20:27:31.477991 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:27:31.677351119+00:00 stderr F I0813 20:27:31.677256 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:27:31.677411741+00:00 stderr F I0813 20:27:31.677367 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:27:31.902942360+00:00 stderr F I0813 20:27:31.902022 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:27:31.902942360+00:00 stderr F I0813 20:27:31.902168 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:27:32.074235438+00:00 stderr F I0813 20:27:32.074163 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:27:32.074288709+00:00 stderr F I0813 20:27:32.074270 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:27:32.275389269+00:00 stderr F I0813 20:27:32.274892 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:27:32.275389269+00:00 stderr F I0813 20:27:32.274967 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:27:32.478609180+00:00 stderr F I0813 20:27:32.478481 1 log.go:245] Apply / Create of (/v1, 
Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:27:32.478609180+00:00 stderr F I0813 20:27:32.478551 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:27:32.676428627+00:00 stderr F I0813 20:27:32.676267 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:27:32.676428627+00:00 stderr F I0813 20:27:32.676390 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:27:32.875700675+00:00 stderr F I0813 20:27:32.875605 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:27:32.875700675+00:00 stderr F I0813 20:27:32.875675 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:27:33.074303934+00:00 stderr F I0813 20:27:33.074195 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:27:33.074303934+00:00 stderr F I0813 20:27:33.074294 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:27:33.275156507+00:00 stderr F I0813 20:27:33.275042 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:27:33.275156507+00:00 stderr F I0813 20:27:33.275119 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:27:33.486121099+00:00 stderr F I0813 20:27:33.485938 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:27:33.486121099+00:00 stderr F I0813 20:27:33.486059 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) 
openshift-host-network/host-network-namespace-quotas 2025-08-13T20:27:33.676598736+00:00 stderr F I0813 20:27:33.676496 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:27:33.676658168+00:00 stderr F I0813 20:27:33.676624 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:27:33.884867211+00:00 stderr F I0813 20:27:33.884746 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:27:33.884867211+00:00 stderr F I0813 20:27:33.884851 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:27:34.117624456+00:00 stderr F I0813 20:27:34.117501 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:27:34.117624456+00:00 stderr F I0813 20:27:34.117581 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:27:34.274883432+00:00 stderr F I0813 20:27:34.274724 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:27:34.274883432+00:00 stderr F I0813 20:27:34.274855 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.475435797+00:00 stderr F I0813 20:27:34.474983 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:27:34.475435797+00:00 stderr F I0813 20:27:34.475083 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.674946292+00:00 stderr F I0813 20:27:34.674724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was 
successful 2025-08-13T20:27:34.674946292+00:00 stderr F I0813 20:27:34.674915 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.877277537+00:00 stderr F I0813 20:27:34.877183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:27:34.877430331+00:00 stderr F I0813 20:27:34.877415 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:27:35.074172647+00:00 stderr F I0813 20:27:35.074074 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:27:35.074302281+00:00 stderr F I0813 20:27:35.074286 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:27:35.324443254+00:00 stderr F I0813 20:27:35.323637 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:27:35.324562067+00:00 stderr F I0813 20:27:35.324546 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:27:35.476392218+00:00 stderr F I0813 20:27:35.476338 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:27:35.476532242+00:00 stderr F I0813 20:27:35.476514 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:27:35.679270359+00:00 stderr F I0813 20:27:35.679214 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:35.679430624+00:00 stderr F I0813 20:27:35.679415 1 log.go:245] reconciling (/v1, Kind=Service) 
openshift-network-diagnostics/network-check-source 2025-08-13T20:27:35.878030423+00:00 stderr F I0813 20:27:35.877335 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:35.878030423+00:00 stderr F I0813 20:27:35.877973 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:27:36.078351921+00:00 stderr F I0813 20:27:36.078295 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:36.078520286+00:00 stderr F I0813 20:27:36.078497 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:27:36.277145725+00:00 stderr F I0813 20:27:36.276983 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:27:36.277145725+00:00 stderr F I0813 20:27:36.277125 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:27:36.473252673+00:00 stderr F I0813 20:27:36.473140 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:27:36.473252673+00:00 stderr F I0813 20:27:36.473234 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:27:36.682613589+00:00 stderr F I0813 20:27:36.682452 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:27:36.682848456+00:00 stderr F I0813 20:27:36.682747 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:27:36.875639039+00:00 stderr F I0813 
20:27:36.875564 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:27:36.875764662+00:00 stderr F I0813 20:27:36.875750 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:27:37.077621424+00:00 stderr F I0813 20:27:37.077553 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:27:37.077683456+00:00 stderr F I0813 20:27:37.077622 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:27:37.275223275+00:00 stderr F I0813 20:27:37.275132 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:27:37.275264046+00:00 stderr F I0813 20:27:37.275234 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:27:37.476175620+00:00 stderr F I0813 20:27:37.475378 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:27:37.476175620+00:00 stderr F I0813 20:27:37.475454 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:37.675347645+00:00 stderr F I0813 20:27:37.675220 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:37.675347645+00:00 stderr F I0813 20:27:37.675305 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:27:37.876045223+00:00 stderr F I0813 20:27:37.875901 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) 
/network-node-identity was successful 2025-08-13T20:27:37.876045223+00:00 stderr F I0813 20:27:37.876018 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:27:38.075255540+00:00 stderr F I0813 20:27:38.075132 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:27:38.075255540+00:00 stderr F I0813 20:27:38.075206 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:27:38.275369412+00:00 stderr F I0813 20:27:38.275255 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:27:38.275369412+00:00 stderr F I0813 20:27:38.275327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:27:38.476051670+00:00 stderr F I0813 20:27:38.475905 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:27:38.476093981+00:00 stderr F I0813 20:27:38.476044 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:27:38.673141516+00:00 stderr F I0813 20:27:38.672967 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:27:38.673141516+00:00 stderr F I0813 20:27:38.673065 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:27:38.874939536+00:00 stderr F I0813 20:27:38.874841 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:27:38.874983707+00:00 stderr F I0813 20:27:38.874942 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:39.080548935+00:00 stderr F I0813 20:27:39.080478 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:39.080742101+00:00 stderr F I0813 20:27:39.080726 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:27:39.275239472+00:00 stderr F I0813 20:27:39.275107 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:27:39.275320675+00:00 stderr F I0813 20:27:39.275260 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:39.479622147+00:00 stderr F I0813 20:27:39.479412 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:39.479622147+00:00 stderr F I0813 20:27:39.479482 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:27:39.678976277+00:00 stderr F I0813 20:27:39.678915 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:27:39.679137262+00:00 stderr F I0813 20:27:39.679119 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:27:39.874529669+00:00 stderr F I0813 20:27:39.874368 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:27:39.874529669+00:00 stderr F I0813 20:27:39.874466 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:27:40.074724013+00:00 stderr F I0813 20:27:40.074608 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:27:40.074724013+00:00 stderr F I0813 20:27:40.074684 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:27:40.277202343+00:00 stderr F I0813 20:27:40.277103 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:27:40.277240424+00:00 stderr F I0813 20:27:40.277203 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:27:40.475867144+00:00 stderr F I0813 20:27:40.475539 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:27:40.476030788+00:00 stderr F I0813 20:27:40.475980 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:27:40.684934542+00:00 stderr F I0813 20:27:40.684749 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:27:40.717508413+00:00 stderr F I0813 20:27:40.717296 1 log.go:245] Operconfig Controller complete 2025-08-13T20:28:15.445211430+00:00 stderr F I0813 20:28:15.444915 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:30:15.103488261+00:00 stderr F I0813 20:30:15.102262 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n 
openshift-network-diagnostics is applied 2025-08-13T20:30:15.120589893+00:00 stderr F I0813 20:30:15.120430 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.131220018+00:00 stderr F I0813 20:30:15.131144 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.144720936+00:00 stderr F I0813 20:30:15.144172 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.162873468+00:00 stderr F I0813 20:30:15.160154 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.176998854+00:00 stderr F I0813 20:30:15.175634 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.238151032+00:00 stderr F I0813 20:30:15.235906 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.253899975+00:00 stderr F I0813 20:30:15.253546 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.271076388+00:00 stderr F I0813 20:30:15.270870 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.319889072+00:00 stderr F I0813 20:30:15.319191 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.333916245+00:00 stderr F I0813 20:30:15.332979 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.501238095+00:00 stderr F I0813 20:30:15.501167 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.701316776+00:00 stderr F I0813 20:30:15.700247 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.899667878+00:00 stderr F I0813 20:30:15.899606 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.105764482+00:00 stderr F I0813 20:30:16.105626 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.308067017+00:00 stderr F I0813 20:30:16.307675 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.500222871+00:00 stderr F I0813 20:30:16.500105 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.707581772+00:00 stderr F I0813 20:30:16.707413 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.765385643+00:00 stderr F I0813 20:30:16.765326 1 
log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:30:16.766400673+00:00 stderr F I0813 20:30:16.766196 1 log.go:245] successful reconciliation
2025-08-13T20:30:16.903174684+00:00 stderr F I0813 20:30:16.903121 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:30:17.102322299+00:00 stderr F I0813 20:30:17.101879 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:30:17.308739373+00:00 stderr F I0813 20:30:17.307947 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:30:17.504629714+00:00 stderr F I0813 20:30:17.503536 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:30:17.700326829+00:00 stderr F I0813 20:30:17.700165 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:30:17.902967744+00:00 stderr F I0813 20:30:17.902587 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:30:18.101906923+00:00 stderr F I0813 20:30:18.101174 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:30:18.146436613+00:00 stderr F I0813 20:30:18.146369 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:30:18.147537174+00:00 stderr F I0813 20:30:18.147506 1 log.go:245] successful reconciliation
2025-08-13T20:30:18.301902702+00:00 stderr F I0813 20:30:18.301414 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:30:18.497639668+00:00 stderr F I0813 20:30:18.497531 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:30:19.369631973+00:00 stderr F I0813 20:30:19.368156 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:30:19.369722486+00:00 stderr F I0813 20:30:19.369627 1 log.go:245] successful reconciliation
2025-08-13T20:30:40.720631459+00:00 stderr F I0813 20:30:40.720502 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:30:41.019113419+00:00 stderr F I0813 20:30:41.018948 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:30:41.023017072+00:00 stderr F I0813 20:30:41.022967 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:30:41.027077108+00:00 stderr F I0813 20:30:41.026934 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:30:41.027077108+00:00 stderr F I0813 20:30:41.026973 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a58f80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:30:41.033230705+00:00 stderr F I0813 20:30:41.033178 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:30:41.033311127+00:00 stderr F I0813 20:30:41.033296 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:30:41.033345358+00:00 stderr F I0813 20:30:41.033333 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:30:41.037094376+00:00 stderr F I0813 20:30:41.037018 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:30:41.037164898+00:00 stderr F I0813 20:30:41.037147 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:30:41.037224280+00:00 stderr F I0813 20:30:41.037206 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:30:41.037263221+00:00 stderr F I0813 20:30:41.037248 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:30:41.037358094+00:00 stderr F I0813 20:30:41.037340 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:30:41.053752475+00:00 stderr F I0813 20:30:41.053611 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:30:41.072225106+00:00 stderr F I0813 20:30:41.072124 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:30:41.072312749+00:00 stderr F I0813 20:30:41.072299 1 log.go:245] Starting render phase
2025-08-13T20:30:41.085439336+00:00 stderr F I0813 20:30:41.085383 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:30:41.132865619+00:00 stderr F I0813 20:30:41.132699 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:30:41.132865619+00:00 stderr F I0813 20:30:41.132756 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:30:41.132968252+00:00 stderr F I0813 20:30:41.132901 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:30:41.132968252+00:00 stderr F I0813 20:30:41.132950 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:30:41.151236047+00:00 stderr F I0813 20:30:41.151108 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:30:41.151236047+00:00 stderr F I0813 20:30:41.151155 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:30:41.164714875+00:00 stderr F I0813 20:30:41.164602 1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:30:41.181887598+00:00 stderr F I0813 20:30:41.181699 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:30:41.189108226+00:00 stderr F I0813 20:30:41.188930 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:30:41.189108226+00:00 stderr F I0813 20:30:41.189004 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:30:41.203751137+00:00 stderr F I0813 20:30:41.203609 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:30:41.203751137+00:00 stderr F I0813 20:30:41.203682 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:30:41.213886538+00:00 stderr F I0813 20:30:41.213712 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:30:41.213886538+00:00 stderr F I0813 20:30:41.213845 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:30:41.224961757+00:00 stderr F I0813 20:30:41.224729 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:30:41.224961757+00:00 stderr F I0813 20:30:41.224884 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:30:41.231178145+00:00 stderr F I0813 20:30:41.231089 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:30:41.231178145+00:00 stderr F I0813 20:30:41.231141 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:30:41.236415726+00:00 stderr F I0813 20:30:41.236325 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:30:41.236415726+00:00 stderr F I0813 20:30:41.236383 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:30:41.241243505+00:00 stderr F I0813 20:30:41.241148 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:30:41.241243505+00:00 stderr F I0813 20:30:41.241199 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:30:41.246169636+00:00 stderr F I0813 20:30:41.246013 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:30:41.246169636+00:00 stderr F I0813 20:30:41.246093 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:30:41.251924512+00:00 stderr F I0813 20:30:41.251822 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:30:41.251924512+00:00 stderr F I0813 20:30:41.251873 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:30:41.265869343+00:00 stderr F I0813 20:30:41.265736 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:30:41.265869343+00:00 stderr F I0813 20:30:41.265849 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:30:41.466570622+00:00 stderr F I0813 20:30:41.466472 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:30:41.466570622+00:00 stderr F I0813 20:30:41.466545 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:30:41.669875076+00:00 stderr F I0813 20:30:41.668975 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:30:41.669875076+00:00 stderr F I0813 20:30:41.669075 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:30:41.871059470+00:00 stderr F I0813 20:30:41.870961 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:30:41.871245545+00:00 stderr F I0813 20:30:41.871225 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:30:42.066910000+00:00 stderr F I0813 20:30:42.066720 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:30:42.066982132+00:00 stderr F I0813 20:30:42.066927 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:30:42.265595481+00:00 stderr F I0813 20:30:42.265493 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:30:42.265595481+00:00 stderr F I0813 20:30:42.265566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:30:42.467688321+00:00 stderr F I0813 20:30:42.467458 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:30:42.467688321+00:00 stderr F I0813 20:30:42.467506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:30:42.664977302+00:00 stderr F I0813 20:30:42.664915 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:30:42.664977302+00:00 stderr F I0813 20:30:42.664963 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:30:42.865076634+00:00 stderr F I0813 20:30:42.864938 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:30:42.865076634+00:00 stderr F I0813 20:30:42.865013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:30:43.064592409+00:00 stderr F I0813 20:30:43.064424 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:30:43.064592409+00:00 stderr F I0813 20:30:43.064503 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:30:43.266429742+00:00 stderr F I0813 20:30:43.266315 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:30:43.266429742+00:00 stderr F I0813 20:30:43.266384 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:30:43.467720158+00:00 stderr F I0813 20:30:43.467559 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:30:43.467720158+00:00 stderr F I0813 20:30:43.467643 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:30:43.682762459+00:00 stderr F I0813 20:30:43.682707 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:30:43.682942494+00:00 stderr F I0813 20:30:43.682926 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:30:43.901669821+00:00 stderr F I0813 20:30:43.899014 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:30:43.901669821+00:00 stderr F I0813 20:30:43.899096 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:30:44.067638902+00:00 stderr F I0813 20:30:44.067574 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:30:44.067746905+00:00 stderr F I0813 20:30:44.067732 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:30:44.268109565+00:00 stderr F I0813 20:30:44.268021 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:30:44.268392203+00:00 stderr F I0813 20:30:44.268377 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:30:44.467892528+00:00 stderr F I0813 20:30:44.466314 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:30:44.467892528+00:00 stderr F I0813 20:30:44.466385 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:30:44.672502490+00:00 stderr F I0813 20:30:44.672371 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:30:44.672502490+00:00 stderr F I0813 20:30:44.672441 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:30:44.866728623+00:00 stderr F I0813 20:30:44.866626 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:30:44.866728623+00:00 stderr F I0813 20:30:44.866699 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:30:45.068913005+00:00 stderr F I0813 20:30:45.068734 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:30:45.068913005+00:00 stderr F I0813 20:30:45.068884 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:30:45.266746562+00:00 stderr F I0813 20:30:45.266691 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:30:45.267020740+00:00 stderr F I0813 20:30:45.266993 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:30:45.467309668+00:00 stderr F I0813 20:30:45.467206 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:30:45.467309668+00:00 stderr F I0813 20:30:45.467294 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:30:45.668306165+00:00 stderr F I0813 20:30:45.667615 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:30:45.668306165+00:00 stderr F I0813 20:30:45.668254 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:30:45.868098279+00:00 stderr F I0813 20:30:45.867943 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:30:45.868159231+00:00 stderr F I0813 20:30:45.868026 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:30:46.066620786+00:00 stderr F I0813 20:30:46.066456 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:30:46.066620786+00:00 stderr F I0813 20:30:46.066532 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:30:46.266411099+00:00 stderr F I0813 20:30:46.266159 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:30:46.266411099+00:00 stderr F I0813 20:30:46.266255 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:30:46.476275982+00:00 stderr F I0813 20:30:46.476164 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:30:46.476275982+00:00 stderr F I0813 20:30:46.476242 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:30:46.669208458+00:00 stderr F I0813 20:30:46.669152 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:30:46.669312791+00:00 stderr F I0813 20:30:46.669298 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:30:46.866456978+00:00 stderr F I0813 20:30:46.865708 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:30:46.866456978+00:00 stderr F I0813 20:30:46.866417 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:30:47.065006916+00:00 stderr F I0813 20:30:47.064927 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:30:47.065006916+00:00 stderr F I0813 20:30:47.064995 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:30:47.266742004+00:00 stderr F I0813 20:30:47.266681 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:30:47.266862928+00:00 stderr F I0813 20:30:47.266745 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:30:47.470235984+00:00 stderr F I0813 20:30:47.470126 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:30:47.470235984+00:00 stderr F I0813 20:30:47.470209 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:30:47.666509226+00:00 stderr F I0813 20:30:47.666430 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:30:47.666509226+00:00 stderr F I0813 20:30:47.666477 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:30:47.875247236+00:00 stderr F I0813 20:30:47.875142 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:30:47.875247236+00:00 stderr F I0813 20:30:47.875215 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:30:48.071547479+00:00 stderr F I0813 20:30:48.071454 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:30:48.071547479+00:00 stderr F I0813 20:30:48.071525 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:30:48.272230598+00:00 stderr F I0813 20:30:48.272099 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:30:48.272230598+00:00 stderr F I0813 20:30:48.272166 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:30:48.476084348+00:00 stderr F I0813 20:30:48.475878 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:30:48.476084348+00:00 stderr F I0813 20:30:48.475947 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:30:48.670865127+00:00 stderr F I0813 20:30:48.670753 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:30:48.670932099+00:00 stderr F I0813 20:30:48.670882 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:30:48.907618253+00:00 stderr F I0813 20:30:48.907289 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:30:48.907618253+00:00 stderr F I0813 20:30:48.907398 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:30:49.098693795+00:00 stderr F I0813 20:30:49.098640 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:30:49.098886131+00:00 stderr F I0813 20:30:49.098866 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:30:49.265969404+00:00 stderr F I0813 20:30:49.265883 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:30:49.265969404+00:00 stderr F I0813 20:30:49.265948 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:30:49.472286255+00:00 stderr F I0813 20:30:49.470468 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:30:49.472286255+00:00 stderr F I0813 20:30:49.470607 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:30:49.667573459+00:00 stderr F I0813 20:30:49.667472 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:30:49.667630500+00:00 stderr F I0813 20:30:49.667601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:30:49.866920589+00:00 stderr F I0813 20:30:49.866592 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:30:49.866920589+00:00 stderr F I0813 20:30:49.866668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:30:50.067448213+00:00 stderr F I0813 20:30:50.067332 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:30:50.067448213+00:00 stderr F I0813 20:30:50.067404 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:30:50.270901692+00:00 stderr F I0813 20:30:50.269555 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:30:50.270901692+00:00 stderr F I0813 20:30:50.269634 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:30:50.465650800+00:00 stderr F I0813 20:30:50.465403 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:30:50.465650800+00:00 stderr F I0813 20:30:50.465480 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:30:50.664696432+00:00 stderr F I0813 20:30:50.664595 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:30:50.664696432+00:00 stderr F I0813 20:30:50.664675 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:30:50.864598677+00:00 stderr F I0813 20:30:50.864493 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:30:50.864598677+00:00 stderr F I0813 20:30:50.864558 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:30:51.068003334+00:00 stderr F I0813 20:30:51.067922 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:30:51.068003334+00:00 stderr F I0813 20:30:51.067976 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:30:51.268356824+00:00 stderr F I0813 20:30:51.268193 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:30:51.268356824+00:00 stderr F I0813 20:30:51.268267 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:30:51.465676606+00:00 stderr F I0813 20:30:51.465523 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:30:51.465676606+00:00 stderr F I0813 20:30:51.465609 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:30:51.667382744+00:00 stderr F I0813 20:30:51.667285 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:30:51.667382744+00:00 stderr F I0813 20:30:51.667357 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:30:51.872449269+00:00 stderr F I0813 20:30:51.872009 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:30:51.872449269+00:00 stderr F I0813 20:30:51.872198 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:30:52.067623169+00:00 stderr F I0813 20:30:52.067490 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:30:52.067623169+00:00 stderr F I0813 20:30:52.067578 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:30:52.266402163+00:00 stderr F I0813 20:30:52.266348 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:30:52.266521707+00:00 stderr F I0813 20:30:52.266506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:30:52.465656291+00:00 stderr F I0813 20:30:52.465489 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:30:52.465873007+00:00 stderr F I0813 20:30:52.465847 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:30:52.667109222+00:00 stderr F I0813 20:30:52.667051 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:30:52.667232086+00:00 stderr F I0813 20:30:52.667218 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:30:52.870098207+00:00 stderr F I0813 20:30:52.869973 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:30:52.870146849+00:00 stderr F I0813 20:30:52.870093 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:30:53.069201471+00:00 stderr F I0813 20:30:53.069143 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:30:53.069335705+00:00 stderr F I0813 20:30:53.069321 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:30:53.269534440+00:00 stderr F I0813 20:30:53.269479 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:30:53.269729705+00:00 stderr F I0813 20:30:53.269682 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:30:53.465182654+00:00 stderr F I0813 20:30:53.465126 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:30:53.465304477+00:00 stderr F I0813 20:30:53.465290 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:30:53.668848778+00:00 stderr F I0813 20:30:53.668682 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:30:53.668913010+00:00 stderr F I0813 20:30:53.668852 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:30:53.868876008+00:00 stderr F I0813 20:30:53.868747 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:30:53.868934110+00:00 stderr F I0813 20:30:53.868885 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:30:54.068198808+00:00 stderr F I0813 20:30:54.068087 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:30:54.068198808+00:00 stderr F I0813 20:30:54.068153 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:30:54.268524407+00:00 stderr F I0813 20:30:54.268393 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:30:54.268524407+00:00 stderr F I0813 20:30:54.268475 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:30:54.465879089+00:00 stderr F I0813 20:30:54.465370 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:30:54.465879089+00:00 stderr F I0813 20:30:54.465444 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:30:54.670215923+00:00 stderr F I0813 20:30:54.670087 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:30:54.670215923+00:00 stderr F I0813 20:30:54.670201 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:30:54.867530015+00:00 stderr F I0813 20:30:54.867414 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:30:54.867530015+00:00 stderr F I0813 20:30:54.867497 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:30:55.074336970+00:00 stderr F I0813 20:30:55.074217 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:30:55.074336970+00:00 stderr F I0813 20:30:55.074299 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:30:55.287739444+00:00 stderr F I0813 20:30:55.287549 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:30:55.287739444+00:00 stderr F I0813 20:30:55.287648 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:30:55.466752630+00:00 stderr F I0813 20:30:55.466509 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:30:55.467058379+00:00 stderr F I0813 20:30:55.466932 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:30:55.667846611+00:00 stderr F I0813 20:30:55.667238 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:30:55.668121898+00:00 stderr F I0813 20:30:55.668065 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:30:55.865593585+00:00 stderr F I0813 20:30:55.865535 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:30:55.865732009+00:00 stderr F I0813 20:30:55.865716 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:30:56.065572483+00:00 stderr F I0813 20:30:56.065470 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:30:56.065572483+00:00 stderr F I0813 20:30:56.065543 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:30:56.265318505+00:00 stderr F I0813 20:30:56.265244 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:30:56.265318505+00:00 stderr F I0813 20:30:56.265297 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:30:56.464912203+00:00 stderr F I0813 20:30:56.464767 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:30:56.465016976+00:00 stderr F I0813 20:30:56.465001 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:30:56.667616009+00:00 stderr F I0813 20:30:56.667003 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:30:56.667877337+00:00 stderr F I0813 20:30:56.667855 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:30:56.872565761+00:00 stderr F I0813 20:30:56.872507 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:30:56.872705995+00:00 stderr F I0813 20:30:56.872686 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:30:57.069824261+00:00 stderr F I0813 20:30:57.069557 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:30:57.069824261+00:00 stderr F I0813 20:30:57.069631 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:30:57.267319578+00:00 stderr F I0813 20:30:57.267193 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:30:57.267319578+00:00 stderr F I0813 20:30:57.267269 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:30:57.467141612+00:00 stderr F I0813 20:30:57.466963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:30:57.467141612+00:00 stderr F I0813 20:30:57.467073 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:30:57.665717791+00:00 stderr F I0813 20:30:57.665665 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:30:57.665977288+00:00 stderr F I0813 20:30:57.665939 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:30:57.871306161+00:00 stderr F I0813 20:30:57.871173 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:30:57.871306161+00:00 stderr F I0813 20:30:57.871241 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:30:58.065934614+00:00 stderr F I0813 20:30:58.065885 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:30:58.066085039+00:00 stderr
F I0813 20:30:58.066066 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:30:58.264672847+00:00 stderr F I0813 20:30:58.264573 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:30:58.264672847+00:00 stderr F I0813 20:30:58.264642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:30:58.465591573+00:00 stderr F I0813 20:30:58.465490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:30:58.465735327+00:00 stderr F I0813 20:30:58.465714 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:30:58.667376133+00:00 stderr F I0813 20:30:58.667275 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:30:58.667376133+00:00 stderr F I0813 20:30:58.667355 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:30:58.865918911+00:00 stderr F I0813 20:30:58.865764 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:30:58.865918911+00:00 stderr F I0813 20:30:58.865873 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:30:59.065382604+00:00 stderr F I0813 20:30:59.065328 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:30:59.065550759+00:00 stderr F I0813 20:30:59.065531 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) 
/network-node-identity 2025-08-13T20:30:59.266353811+00:00 stderr F I0813 20:30:59.266260 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:30:59.266526286+00:00 stderr F I0813 20:30:59.266505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:30:59.467545325+00:00 stderr F I0813 20:30:59.467453 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:30:59.467727930+00:00 stderr F I0813 20:30:59.467713 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:30:59.668898103+00:00 stderr F I0813 20:30:59.668676 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:30:59.668898103+00:00 stderr F I0813 20:30:59.668761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:30:59.867370678+00:00 stderr F I0813 20:30:59.866937 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:30:59.867370678+00:00 stderr F I0813 20:30:59.867013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:31:00.067346366+00:00 stderr F I0813 20:31:00.067238 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:31:00.067346366+00:00 stderr F I0813 20:31:00.067294 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) 
openshift-network-node-identity/network-node-identity 2025-08-13T20:31:00.267185481+00:00 stderr F I0813 20:31:00.266536 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:31:00.267185481+00:00 stderr F I0813 20:31:00.266619 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:31:00.468525989+00:00 stderr F I0813 20:31:00.468416 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:31:00.468525989+00:00 stderr F I0813 20:31:00.468500 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:31:00.679869654+00:00 stderr F I0813 20:31:00.677152 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:31:00.679869654+00:00 stderr F I0813 20:31:00.677235 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:31:00.866958832+00:00 stderr F I0813 20:31:00.866765 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:31:00.867020224+00:00 stderr F I0813 20:31:00.866957 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:31:01.066267221+00:00 stderr F I0813 20:31:01.066207 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:31:01.066368744+00:00 stderr F I0813 20:31:01.066354 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) 
openshift-network-operator/iptables-alerter 2025-08-13T20:31:01.266406504+00:00 stderr F I0813 20:31:01.266312 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:31:01.266406504+00:00 stderr F I0813 20:31:01.266382 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:31:01.468601207+00:00 stderr F I0813 20:31:01.468511 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:31:01.468601207+00:00 stderr F I0813 20:31:01.468561 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:31:01.668015968+00:00 stderr F I0813 20:31:01.667907 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:31:01.668015968+00:00 stderr F I0813 20:31:01.667978 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:31:01.874729560+00:00 stderr F I0813 20:31:01.874545 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:31:01.897344430+00:00 stderr F I0813 20:31:01.897256 1 log.go:245] Operconfig Controller complete 2025-08-13T20:31:15.463724055+00:00 stderr F I0813 20:31:15.463641 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:34:01.898846109+00:00 stderr F I0813 20:34:01.898266 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:34:02.347061513+00:00 stderr F I0813 20:34:02.346995 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:34:02.349605686+00:00 stderr F I0813 20:34:02.349583 1 ovn_kubernetes.go:753] For Label 
network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:34:02.353003844+00:00 stderr F I0813 20:34:02.352974 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:34:02.353157229+00:00 stderr F I0813 20:34:02.353045 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002c81300 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:34:02.358610565+00:00 stderr F I0813 20:34:02.358578 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:34:02.358666277+00:00 stderr F I0813 20:34:02.358653 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:34:02.358710538+00:00 stderr F I0813 20:34:02.358698 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:34:02.362291441+00:00 stderr F I0813 20:34:02.362252 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:34:02.362351813+00:00 stderr F I0813 20:34:02.362339 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:34:02.362383404+00:00 stderr F I0813 20:34:02.362371 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:34:02.362413865+00:00 stderr F I0813 20:34:02.362401 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:34:02.362584459+00:00 stderr F 
I0813 20:34:02.362561 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:34:02.376896681+00:00 stderr F I0813 20:34:02.376855 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:34:02.394217089+00:00 stderr F I0813 20:34:02.394175 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:34:02.394371933+00:00 stderr F I0813 20:34:02.394355 1 log.go:245] Starting render phase 2025-08-13T20:34:02.408479919+00:00 stderr F I0813 20:34:02.408389 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459339 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459359 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459390 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459415 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:34:02.481868178+00:00 stderr F I0813 20:34:02.477364 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:34:02.481868178+00:00 stderr F I0813 20:34:02.477405 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:34:02.490105455+00:00 stderr F I0813 20:34:02.490028 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:34:02.516704930+00:00 stderr F I0813 20:34:02.515292 1 log.go:245] reconciling (/v1, 
Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:34:02.525922575+00:00 stderr F I0813 20:34:02.525857 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:34:02.526061659+00:00 stderr F I0813 20:34:02.526039 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:34:02.538826236+00:00 stderr F I0813 20:34:02.538702 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:34:02.538885037+00:00 stderr F I0813 20:34:02.538825 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:34:02.549844182+00:00 stderr F I0813 20:34:02.549367 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:34:02.549844182+00:00 stderr F I0813 20:34:02.549439 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:34:02.557690168+00:00 stderr F I0813 20:34:02.557571 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:34:02.557690168+00:00 stderr F I0813 20:34:02.557644 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:34:02.564064961+00:00 stderr F I0813 20:34:02.563968 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:34:02.564064961+00:00 stderr F I0813 20:34:02.564018 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:34:02.571366011+00:00 stderr F I0813 20:34:02.571235 1 log.go:245] Apply 
/ Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:34:02.571528926+00:00 stderr F I0813 20:34:02.571361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:34:02.580674579+00:00 stderr F I0813 20:34:02.580567 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:34:02.580925406+00:00 stderr F I0813 20:34:02.580869 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:34:02.590179692+00:00 stderr F I0813 20:34:02.590113 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:34:02.590179692+00:00 stderr F I0813 20:34:02.590160 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:34:02.596106172+00:00 stderr F I0813 20:34:02.595989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:34:02.596217205+00:00 stderr F I0813 20:34:02.596144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:34:02.605350648+00:00 stderr F I0813 20:34:02.605284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:34:02.605471461+00:00 stderr F I0813 20:34:02.605412 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:34:02.791759386+00:00 stderr F I0813 20:34:02.791696 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:34:02.791857339+00:00 stderr F I0813 20:34:02.791826 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:34:02.992460636+00:00 
stderr F I0813 20:34:02.992338 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:34:02.992460636+00:00 stderr F I0813 20:34:02.992427 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:34:03.190330584+00:00 stderr F I0813 20:34:03.190203 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:34:03.190330584+00:00 stderr F I0813 20:34:03.190270 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:34:03.392275479+00:00 stderr F I0813 20:34:03.391893 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:34:03.392275479+00:00 stderr F I0813 20:34:03.391985 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:34:03.590293521+00:00 stderr F I0813 20:34:03.590185 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:34:03.590293521+00:00 stderr F I0813 20:34:03.590276 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:34:03.793694477+00:00 stderr F I0813 20:34:03.793546 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:34:03.793694477+00:00 stderr F I0813 20:34:03.793631 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:34:03.991153244+00:00 stderr F I0813 20:34:03.990960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:34:03.991420951+00:00 stderr 
F I0813 20:34:03.991281 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:34:04.193105408+00:00 stderr F I0813 20:34:04.192850 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:34:04.194444156+00:00 stderr F I0813 20:34:04.193990 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:34:04.390930445+00:00 stderr F I0813 20:34:04.390837 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:34:04.390930445+00:00 stderr F I0813 20:34:04.390908 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:34:04.592739116+00:00 stderr F I0813 20:34:04.592578 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:34:04.592739116+00:00 stderr F I0813 20:34:04.592648 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:34:04.791824868+00:00 stderr F I0813 20:34:04.791398 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:34:04.791824868+00:00 stderr F I0813 20:34:04.791478 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:34:05.006237832+00:00 stderr F I0813 20:34:05.006111 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:34:05.006237832+00:00 stderr F I0813 20:34:05.006213 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:34:05.205585162+00:00 stderr F I0813 20:34:05.205455 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:34:05.205585162+00:00 
stderr F I0813 20:34:05.205563 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:34:05.391462976+00:00 stderr F I0813 20:34:05.391375 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:34:05.391462976+00:00 stderr F I0813 20:34:05.391447 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:34:05.592371751+00:00 stderr F I0813 20:34:05.592168 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:34:05.592371751+00:00 stderr F I0813 20:34:05.592327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:34:05.792461253+00:00 stderr F I0813 20:34:05.792350 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:34:05.792461253+00:00 stderr F I0813 20:34:05.792418 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:34:05.997855747+00:00 stderr F I0813 20:34:05.997661 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:34:05.997855747+00:00 stderr F I0813 20:34:05.997731 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:34:06.192750000+00:00 stderr F I0813 20:34:06.192023 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:34:06.192750000+00:00 stderr F I0813 20:34:06.192131 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:34:06.405499526+00:00 stderr F I0813 20:34:06.405402 1 log.go:245] Apply / Create of (/v1, 
Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:34:06.405499526+00:00 stderr F I0813 20:34:06.405474 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:34:06.591590125+00:00 stderr F I0813 20:34:06.591475 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:34:06.591590125+00:00 stderr F I0813 20:34:06.591543 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:34:06.791561694+00:00 stderr F I0813 20:34:06.791434 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:34:06.791599175+00:00 stderr F I0813 20:34:06.791554 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:34:06.990274817+00:00 stderr F I0813 20:34:06.990146 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:34:06.990274817+00:00 stderr F I0813 20:34:06.990229 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:34:07.193212091+00:00 stderr F I0813 20:34:07.193041 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:34:07.193212091+00:00 stderr F I0813 20:34:07.193178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:34:07.390697817+00:00 stderr F I0813 20:34:07.390573 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:34:07.390697817+00:00 stderr F I0813 20:34:07.390674 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) 
/multus-admission-controller-webhook 2025-08-13T20:34:07.590741787+00:00 stderr F I0813 20:34:07.590633 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:34:07.590741787+00:00 stderr F I0813 20:34:07.590711 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:34:07.793677420+00:00 stderr F I0813 20:34:07.793387 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:34:07.793677420+00:00 stderr F I0813 20:34:07.793517 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:34:07.997853869+00:00 stderr F I0813 20:34:07.997702 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:34:07.997900471+00:00 stderr F I0813 20:34:07.997890 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:34:08.192646179+00:00 stderr F I0813 20:34:08.192560 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:34:08.192688900+00:00 stderr F I0813 20:34:08.192641 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:34:08.391741752+00:00 stderr F I0813 20:34:08.391636 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:34:08.391741752+00:00 stderr F I0813 20:34:08.391731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:34:08.590589948+00:00 stderr F I0813 20:34:08.590476 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:34:08.590589948+00:00 stderr F I0813 20:34:08.590547       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:34:08.792295437+00:00 stderr F I0813 20:34:08.792186       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:34:08.792295437+00:00 stderr F I0813 20:34:08.792256       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:34:08.994584142+00:00 stderr F I0813 20:34:08.994441       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:34:08.994584142+00:00 stderr F I0813 20:34:08.994536       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:34:09.200691657+00:00 stderr F I0813 20:34:09.200536       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:34:09.200691657+00:00 stderr F I0813 20:34:09.200660       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:34:09.401433477+00:00 stderr F I0813 20:34:09.401329       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:34:09.401433477+00:00 stderr F I0813 20:34:09.401420       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:34:09.603680051+00:00 stderr F I0813 20:34:09.602910       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:34:09.603680051+00:00 stderr F I0813 20:34:09.603043       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:34:09.803002570+00:00 stderr F I0813 20:34:09.802890       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:34:09.803002570+00:00 stderr F I0813 20:34:09.802975       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:34:09.996217015+00:00 stderr F I0813 20:34:09.995977       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:34:09.996217015+00:00 stderr F I0813 20:34:09.996101       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:34:10.230448068+00:00 stderr F I0813 20:34:10.229725       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:34:10.230448068+00:00 stderr F I0813 20:34:10.230408       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:34:10.428630135+00:00 stderr F I0813 20:34:10.428498       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:34:10.428630135+00:00 stderr F I0813 20:34:10.428569       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:34:10.589824859+00:00 stderr F I0813 20:34:10.589701       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:34:10.589869290+00:00 stderr F I0813 20:34:10.589847       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:34:10.792166955+00:00 stderr F I0813 20:34:10.792015       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:34:10.792166955+00:00 stderr F I0813 20:34:10.792135       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:34:10.990958880+00:00 stderr F I0813 20:34:10.990854       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:34:10.990958880+00:00 stderr F I0813 20:34:10.990926       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:34:11.194214743+00:00 stderr F I0813 20:34:11.194059       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:34:11.194214743+00:00 stderr F I0813 20:34:11.194150       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:34:11.391465992+00:00 stderr F I0813 20:34:11.391317       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:34:11.391465992+00:00 stderr F I0813 20:34:11.391386       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:34:11.590546464+00:00 stderr F I0813 20:34:11.590394       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:34:11.590546464+00:00 stderr F I0813 20:34:11.590443       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:34:11.792278223+00:00 stderr F I0813 20:34:11.792183       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:34:11.792278223+00:00 stderr F I0813 20:34:11.792257       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:34:11.994766304+00:00 stderr F I0813 20:34:11.994636       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:34:11.994878057+00:00 stderr F I0813 20:34:11.994733       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:34:12.189735708+00:00 stderr F I0813 20:34:12.189641       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:34:12.189735708+00:00 stderr F I0813 20:34:12.189727       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:34:12.393122385+00:00 stderr F I0813 20:34:12.392978       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:34:12.393122385+00:00 stderr F I0813 20:34:12.393100       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:34:12.590111908+00:00 stderr F I0813 20:34:12.589937       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:34:12.590111908+00:00 stderr F I0813 20:34:12.590006       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:34:12.790832658+00:00 stderr F I0813 20:34:12.790703       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:34:12.790873779+00:00 stderr F I0813 20:34:12.790835       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:34:12.990068935+00:00 stderr F I0813 20:34:12.989969       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:34:12.990068935+00:00 stderr F I0813 20:34:12.990038       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:34:13.193573535+00:00 stderr F I0813 20:34:13.193472       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:34:13.193573535+00:00 stderr F I0813 20:34:13.193543       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:34:13.392244116+00:00 stderr F I0813 20:34:13.392125       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:34:13.392244116+00:00 stderr F I0813 20:34:13.392208       1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:34:13.592920595+00:00 stderr F I0813 20:34:13.592819       1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:34:13.592920595+00:00 stderr F I0813 20:34:13.592891       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:34:13.791756610+00:00 stderr F I0813 20:34:13.791647       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:34:13.791756610+00:00 stderr F I0813 20:34:13.791744       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:34:13.994705574+00:00 stderr F I0813 20:34:13.994533       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:34:13.994705574+00:00 stderr F I0813 20:34:13.994646       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:34:14.198914114+00:00 stderr F I0813 20:34:14.198527       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:34:14.199246014+00:00 stderr F I0813 20:34:14.199219       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:34:14.396138834+00:00 stderr F I0813 20:34:14.395985       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:34:14.396138834+00:00 stderr F I0813 20:34:14.396098       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:34:14.591892783+00:00 stderr F I0813 20:34:14.591738       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:34:14.591939165+00:00 stderr F I0813 20:34:14.591899       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:34:14.791929321+00:00 stderr F I0813 20:34:14.791768       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:34:14.791992423+00:00 stderr F I0813 20:34:14.791889       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:34:14.999191288+00:00 stderr F I0813 20:34:14.999125       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:34:14.999324092+00:00 stderr F I0813 20:34:14.999307       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:34:15.193444792+00:00 stderr F I0813 20:34:15.193317       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:34:15.193444792+00:00 stderr F I0813 20:34:15.193407       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:34:15.391907047+00:00 stderr F I0813 20:34:15.391685       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:34:15.391907047+00:00 stderr F I0813 20:34:15.391845       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:34:15.472661219+00:00 stderr F I0813 20:34:15.472518       1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:34:15.590637810+00:00 stderr F I0813 20:34:15.590511       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:34:15.590637810+00:00 stderr F I0813 20:34:15.590611       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:34:15.792261286+00:00 stderr F I0813 20:34:15.792159       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:34:15.792261286+00:00 stderr F I0813 20:34:15.792251       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:34:15.991879234+00:00 stderr F I0813 20:34:15.991680       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:34:15.991879234+00:00 stderr F I0813 20:34:15.991757       1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:34:16.192925793+00:00 stderr F I0813 20:34:16.192759       1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:34:16.193001135+00:00 stderr F I0813 20:34:16.192915       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:34:16.398388169+00:00 stderr F I0813 20:34:16.398285       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:34:16.398388169+00:00 stderr F I0813 20:34:16.398371       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:34:16.626511537+00:00 stderr F I0813 20:34:16.626381       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:34:16.626511537+00:00 stderr F I0813 20:34:16.626463       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:34:16.790697576+00:00 stderr F I0813 20:34:16.790576       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:34:16.790697576+00:00 stderr F I0813 20:34:16.790651       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:34:16.990958463+00:00 stderr F I0813 20:34:16.990747       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:34:16.990958463+00:00 stderr F I0813 20:34:16.990950       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:34:17.190279383+00:00 stderr F I0813 20:34:17.190166       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:34:17.190279383+00:00 stderr F I0813 20:34:17.190238       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:34:17.391147327+00:00 stderr F I0813 20:34:17.391009       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:34:17.391212799+00:00 stderr F I0813 20:34:17.391144       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:34:17.593873105+00:00 stderr F I0813 20:34:17.593713       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:34:17.593957077+00:00 stderr F I0813 20:34:17.593935       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:34:17.790564209+00:00 stderr F I0813 20:34:17.790444       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:34:17.790564209+00:00 stderr F I0813 20:34:17.790539       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:34:17.990932369+00:00 stderr F I0813 20:34:17.990765       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:34:17.990932369+00:00 stderr F I0813 20:34:17.990915       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:34:18.197471136+00:00 stderr F I0813 20:34:18.197310       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:34:18.197471136+00:00 stderr F I0813 20:34:18.197417       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:34:18.395934641+00:00 stderr F I0813 20:34:18.395759       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:34:18.395934641+00:00 stderr F I0813 20:34:18.395897       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:34:18.595005712+00:00 stderr F I0813 20:34:18.594883       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:34:18.595005712+00:00 stderr F I0813 20:34:18.594963       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:34:18.792035076+00:00 stderr F I0813 20:34:18.791938       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:34:18.792035076+00:00 stderr F I0813 20:34:18.792008       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:34:18.991953233+00:00 stderr F I0813 20:34:18.991677       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:34:18.992025365+00:00 stderr F I0813 20:34:18.991976       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:34:19.199985023+00:00 stderr F I0813 20:34:19.199520       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:34:19.199985023+00:00 stderr F I0813 20:34:19.199645       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:34:19.392242189+00:00 stderr F I0813 20:34:19.392145       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:34:19.392242189+00:00 stderr F I0813 20:34:19.392214       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:34:19.590447337+00:00 stderr F I0813 20:34:19.590324       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:34:19.590447337+00:00 stderr F I0813 20:34:19.590411       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:34:19.790043175+00:00 stderr F I0813 20:34:19.789893       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:34:19.790303302+00:00 stderr F I0813 20:34:19.790283       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:34:19.991140495+00:00 stderr F I0813 20:34:19.990982       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:34:19.991140495+00:00 stderr F I0813 20:34:19.991069       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:34:20.190366642+00:00 stderr F I0813 20:34:20.190237       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:34:20.190366642+00:00 stderr F I0813 20:34:20.190307       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:34:20.391485854+00:00 stderr F I0813 20:34:20.391364       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:34:20.391527305+00:00 stderr F I0813 20:34:20.391494       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:34:20.590484144+00:00 stderr F I0813 20:34:20.590359       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:34:20.590484144+00:00 stderr F I0813 20:34:20.590468       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:34:20.792546193+00:00 stderr F I0813 20:34:20.792493       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:34:20.792848241+00:00 stderr F I0813 20:34:20.792768       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:34:20.993187900+00:00 stderr F I0813 20:34:20.992968       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:34:20.993187900+00:00 stderr F I0813 20:34:20.993038       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:34:21.190684068+00:00 stderr F I0813 20:34:21.190623       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:34:21.190863913+00:00 stderr F I0813 20:34:21.190838       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:34:21.391707346+00:00 stderr F I0813 20:34:21.391595       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:34:21.391707346+00:00 stderr F I0813 20:34:21.391685       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:34:21.600415226+00:00 stderr F I0813 20:34:21.600355       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:34:21.600548830+00:00 stderr F I0813 20:34:21.600533       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:34:21.794062293+00:00 stderr F I0813 20:34:21.793997       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:34:21.794323100+00:00 stderr F I0813 20:34:21.794298       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:34:21.999035905+00:00 stderr F I0813 20:34:21.998981       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:34:21.999185979+00:00 stderr F I0813 20:34:21.999170       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:34:22.195112490+00:00 stderr F I0813 20:34:22.194997       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:34:22.195171272+00:00 stderr F I0813 20:34:22.195105       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:34:22.391537307+00:00 stderr F I0813 20:34:22.391375       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:34:22.391537307+00:00 stderr F I0813 20:34:22.391449       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:34:22.590581448+00:00 stderr F I0813 20:34:22.590471       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:34:22.590581448+00:00 stderr F I0813 20:34:22.590564       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:34:22.791332809+00:00 stderr F I0813 20:34:22.791203       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:34:22.791332809+00:00 stderr F I0813 20:34:22.791295       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:34:22.994927962+00:00 stderr F I0813 20:34:22.994657       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:34:22.994927962+00:00 stderr F I0813 20:34:22.994740       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:34:23.196711822+00:00 stderr F I0813 20:34:23.196646       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:34:23.217697855+00:00 stderr F I0813 20:34:23.217574       1 log.go:245] Operconfig Controller complete
2025-08-13T20:35:16.785492872+00:00 stderr F I0813 20:35:16.785143       1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:35:16.787115989+00:00 stderr F I0813 20:35:16.787010       1 log.go:245] successful reconciliation
2025-08-13T20:35:18.170311261+00:00 stderr F I0813 20:35:18.170171       1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:35:18.174400889+00:00 stderr F I0813 20:35:18.174311       1 log.go:245] successful reconciliation
2025-08-13T20:35:19.385155273+00:00 stderr F I0813 20:35:19.385001       1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:35:19.385899974+00:00 stderr F I0813 20:35:19.385759       1 log.go:245] successful reconciliation
2025-08-13T20:37:15.481051441+00:00 stderr F I0813 20:37:15.480749       1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:37:23.219393589+00:00 stderr F I0813 20:37:23.219216       1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:37:23.607865939+00:00 stderr F I0813 20:37:23.605470       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:37:23.614544421+00:00 stderr F I0813 20:37:23.613518       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:37:23.629073360+00:00 stderr F I0813 20:37:23.627334       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:37:23.629073360+00:00 stderr F I0813 20:37:23.627379       1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003acd900 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.636944       1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.637162       1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.637178       1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642346       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642392       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642416       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642423       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642612       1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:37:23.695010751+00:00 stderr F I0813 20:37:23.694666       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:37:23.717315084+00:00 stderr F I0813 20:37:23.717177       1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:37:23.717315084+00:00 stderr F I0813 20:37:23.717248       1 log.go:245] Starting render phase
2025-08-13T20:37:23.737492456+00:00 stderr F I0813 20:37:23.737362       1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784170       1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784204       1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784244       1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:37:23.784358457+00:00 stderr F I0813 20:37:23.784270       1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:37:23.800890364+00:00 stderr F I0813 20:37:23.799868       1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:37:23.800890364+00:00 stderr F I0813 20:37:23.799906       1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:37:23.815894706+00:00 stderr F I0813 20:37:23.814725       1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:37:23.842947236+00:00 stderr F I0813 20:37:23.842823       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:37:23.848839156+00:00 stderr F I0813 20:37:23.848721       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:37:23.848839156+00:00 stderr F I0813 20:37:23.848759       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:37:23.858370741+00:00 stderr F I0813 20:37:23.858253       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:37:23.858370741+00:00 stderr F I0813 20:37:23.858323       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:37:23.874865737+00:00 stderr F I0813 20:37:23.874692       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:37:23.875210397+00:00 stderr F I0813 20:37:23.874977       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:37:23.887039078+00:00 stderr F I0813 20:37:23.886972       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:37:23.887209623+00:00 stderr F I0813 20:37:23.887190       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:37:23.894961726+00:00 stderr F I0813 20:37:23.894886       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:37:23.895193193+00:00 stderr F I0813 20:37:23.895165       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:37:23.902042160+00:00 stderr F I0813 20:37:23.901964       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:37:23.902236726+00:00 stderr F I0813 20:37:23.902211       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:37:23.910201475+00:00 stderr F I0813 20:37:23.910042       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:37:23.910201475+00:00 stderr F I0813 20:37:23.910184       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:37:23.916724563+00:00 stderr F I0813 20:37:23.916579       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:37:23.916724563+00:00 stderr F I0813 20:37:23.916673       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:37:23.924086736+00:00 stderr F I0813 20:37:23.923638       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:37:23.924270911+00:00 stderr F I0813 20:37:23.924248       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:37:23.931562381+00:00 stderr F I0813 20:37:23.931519       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:37:23.931682785+00:00 stderr F I0813 20:37:23.931662       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:37:24.107742351+00:00 stderr F I0813 20:37:24.107614       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:37:24.107742351+00:00 stderr F I0813 20:37:24.107698       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:37:24.307965673+00:00 stderr F I0813 20:37:24.307867       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:37:24.308059906+00:00 stderr F I0813 20:37:24.307958       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:37:24.507947379+00:00 stderr F I0813 20:37:24.507853       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:37:24.507947379+00:00 stderr F I0813 20:37:24.507926       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:37:24.710617201+00:00 stderr F I0813 20:37:24.710550       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:37:24.710695853+00:00 stderr F I0813 20:37:24.710630       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:37:24.908331961+00:00 stderr F I0813 20:37:24.908164       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:37:24.908331961+00:00 stderr F I0813 20:37:24.908232       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:37:25.107497343+00:00 stderr F I0813 20:37:25.107416       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:37:25.107497343+00:00 stderr F I0813 20:37:25.107465       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:37:25.307848999+00:00 stderr F I0813 20:37:25.307732       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:37:25.307909041+00:00 stderr F I0813 20:37:25.307859       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:37:25.511641345+00:00 stderr F I0813 20:37:25.511530       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:37:25.511641345+00:00 stderr F I0813 20:37:25.511627       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:37:25.707709358+00:00 stderr F I0813 20:37:25.707622       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:37:25.707759669+00:00 stderr F I0813 20:37:25.707702       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:37:25.912432070+00:00 stderr F I0813 20:37:25.912270       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:37:25.912432070+00:00 stderr F I0813 20:37:25.912342       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:37:26.107642188+00:00 stderr F I0813 20:37:26.107510       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:37:26.107642188+00:00 stderr F I0813 20:37:26.107596       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:37:26.316556871+00:00 stderr F I0813 20:37:26.316473       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:37:26.316599892+00:00 stderr F I0813 20:37:26.316557       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:37:26.519348457+00:00 stderr F I0813 20:37:26.519242       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:37:26.519348457+00:00 stderr F I0813 20:37:26.519323       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:37:26.707399369+00:00 stderr F I0813 20:37:26.707282       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:37:26.707399369+00:00 stderr F I0813 20:37:26.707350       1 log.go:245]
reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:37:26.908764955+00:00 stderr F I0813 20:37:26.908643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:37:26.908880428+00:00 stderr F I0813 20:37:26.908761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:37:27.108116552+00:00 stderr F I0813 20:37:27.107985 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:37:27.108116552+00:00 stderr F I0813 20:37:27.108061 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:37:27.316842560+00:00 stderr F I0813 20:37:27.316700 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:37:27.316842560+00:00 stderr F I0813 20:37:27.316832 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:37:27.510084171+00:00 stderr F I0813 20:37:27.509947 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:37:27.510084171+00:00 stderr F I0813 20:37:27.510036 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:37:27.713203738+00:00 stderr F I0813 20:37:27.713073 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:37:27.713203738+00:00 stderr F I0813 20:37:27.713185 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:37:27.913630256+00:00 stderr F I0813 20:37:27.913477 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:27.913680007+00:00 stderr F I0813 20:37:27.913631 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:37:28.107087573+00:00 stderr F I0813 20:37:28.106960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:28.107087573+00:00 stderr F I0813 20:37:28.107035 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:37:28.309265712+00:00 stderr F I0813 20:37:28.309159 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:37:28.309265712+00:00 stderr F I0813 20:37:28.309246 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:37:28.508677781+00:00 stderr F I0813 20:37:28.508560 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:37:28.508677781+00:00 stderr F I0813 20:37:28.508632 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:37:28.707441951+00:00 stderr F I0813 20:37:28.707314 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:37:28.707441951+00:00 stderr F I0813 20:37:28.707387 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:37:28.910568438+00:00 stderr F I0813 20:37:28.910450 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:37:28.910568438+00:00 stderr F I0813 20:37:28.910523 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, 
Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:37:29.114977161+00:00 stderr F I0813 20:37:29.113892 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:37:29.114977161+00:00 stderr F I0813 20:37:29.113979 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:37:29.316423349+00:00 stderr F I0813 20:37:29.316314 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:37:29.316423349+00:00 stderr F I0813 20:37:29.316386 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:37:29.510629207+00:00 stderr F I0813 20:37:29.510455 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:37:29.510629207+00:00 stderr F I0813 20:37:29.510544 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:37:29.708267586+00:00 stderr F I0813 20:37:29.708112 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:29.708267586+00:00 stderr F I0813 20:37:29.708211 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:37:29.909850897+00:00 stderr F I0813 20:37:29.909753 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:29.909956990+00:00 stderr F I0813 20:37:29.909879 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:37:30.109516274+00:00 stderr F I0813 20:37:30.109431 1 
log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:37:30.109516274+00:00 stderr F I0813 20:37:30.109502 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:37:30.307823971+00:00 stderr F I0813 20:37:30.307667 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:37:30.307823971+00:00 stderr F I0813 20:37:30.307760 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:37:30.514521560+00:00 stderr F I0813 20:37:30.514407 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:37:30.514582472+00:00 stderr F I0813 20:37:30.514548 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:37:30.717672507+00:00 stderr F I0813 20:37:30.717559 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:37:30.717672507+00:00 stderr F I0813 20:37:30.717643 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:37:30.917608311+00:00 stderr F I0813 20:37:30.917472 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:37:30.917608311+00:00 stderr F I0813 20:37:30.917582 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:37:31.118597046+00:00 stderr F I0813 20:37:31.118382 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:37:31.118597046+00:00 
stderr F I0813 20:37:31.118471 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:37:31.313304599+00:00 stderr F I0813 20:37:31.313188 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:37:31.313304599+00:00 stderr F I0813 20:37:31.313259 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:37:31.549347115+00:00 stderr F I0813 20:37:31.549115 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:37:31.549347115+00:00 stderr F I0813 20:37:31.549216 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:37:31.742304547+00:00 stderr F I0813 20:37:31.742197 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:37:31.742304547+00:00 stderr F I0813 20:37:31.742290 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:37:31.908476348+00:00 stderr F I0813 20:37:31.908341 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:37:31.908476348+00:00 stderr F I0813 20:37:31.908430 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:37:32.110072410+00:00 stderr F I0813 20:37:32.109951 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:37:32.110072410+00:00 stderr F 
I0813 20:37:32.110045 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:37:32.311913049+00:00 stderr F I0813 20:37:32.311826 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:37:32.311963980+00:00 stderr F I0813 20:37:32.311911 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:37:32.508538458+00:00 stderr F I0813 20:37:32.508483 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:37:32.508677642+00:00 stderr F I0813 20:37:32.508661 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:37:32.707224186+00:00 stderr F I0813 20:37:32.707114 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:37:32.707224186+00:00 stderr F I0813 20:37:32.707206 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:37:32.907333975+00:00 stderr F I0813 20:37:32.907219 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:37:32.907333975+00:00 stderr F I0813 20:37:32.907320 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:37:33.107524727+00:00 stderr F I0813 20:37:33.107397 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was 
successful 2025-08-13T20:37:33.107524727+00:00 stderr F I0813 20:37:33.107482 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:37:33.308833780+00:00 stderr F I0813 20:37:33.308683 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:37:33.308883852+00:00 stderr F I0813 20:37:33.308767 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:37:33.508307632+00:00 stderr F I0813 20:37:33.508186 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:37:33.508307632+00:00 stderr F I0813 20:37:33.508259 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:33.711968483+00:00 stderr F I0813 20:37:33.711904 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:33.712018365+00:00 stderr F I0813 20:37:33.711971 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:33.909602181+00:00 stderr F I0813 20:37:33.909480 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:33.909602181+00:00 stderr F I0813 20:37:33.909548 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:34.108732982+00:00 stderr F I0813 20:37:34.108526 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:34.108732982+00:00 
stderr F I0813 20:37:34.108624 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:34.308305936+00:00 stderr F I0813 20:37:34.308054 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:34.308305936+00:00 stderr F I0813 20:37:34.308156 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:37:34.510729862+00:00 stderr F I0813 20:37:34.510621 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:37:34.510729862+00:00 stderr F I0813 20:37:34.510707 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:37:34.708272368+00:00 stderr F I0813 20:37:34.708107 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:37:34.708272368+00:00 stderr F I0813 20:37:34.708211 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:37:34.908726827+00:00 stderr F I0813 20:37:34.908629 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:37:34.908726827+00:00 stderr F I0813 20:37:34.908697 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:37:35.106672564+00:00 stderr F I0813 20:37:35.106564 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:37:35.106672564+00:00 stderr F I0813 20:37:35.106637 1 log.go:245] 
reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:37:35.311843897+00:00 stderr F I0813 20:37:35.311735 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:37:35.311906899+00:00 stderr F I0813 20:37:35.311858 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:37:35.512505492+00:00 stderr F I0813 20:37:35.512411 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:37:35.512505492+00:00 stderr F I0813 20:37:35.512488 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:37:35.722896327+00:00 stderr F I0813 20:37:35.722061 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:37:35.722896327+00:00 stderr F I0813 20:37:35.722160 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:37:35.908469097+00:00 stderr F I0813 20:37:35.908325 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:37:35.908469097+00:00 stderr F I0813 20:37:35.908411 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:37:36.109765210+00:00 stderr F I0813 20:37:36.109588 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:37:36.109765210+00:00 stderr F I0813 20:37:36.109734 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 
2025-08-13T20:37:36.311869647+00:00 stderr F I0813 20:37:36.311711 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:37:36.311936499+00:00 stderr F I0813 20:37:36.311867 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:37:36.511670837+00:00 stderr F I0813 20:37:36.511531 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:37:36.511670837+00:00 stderr F I0813 20:37:36.511653 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:37:36.712617920+00:00 stderr F I0813 20:37:36.711973 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:37:36.712617920+00:00 stderr F I0813 20:37:36.712579 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:37:36.913230494+00:00 stderr F I0813 20:37:36.913096 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:37:36.913230494+00:00 stderr F I0813 20:37:36.913195 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:37:37.108671609+00:00 stderr F I0813 20:37:37.108545 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:37:37.108671609+00:00 stderr F I0813 20:37:37.108643 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:37:37.311879127+00:00 stderr F I0813 20:37:37.311227 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 
2025-08-13T20:37:37.311879127+00:00 stderr F I0813 20:37:37.311295 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:37:37.511670577+00:00 stderr F I0813 20:37:37.511561 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:37:37.511716399+00:00 stderr F I0813 20:37:37.511668 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:37:37.718119269+00:00 stderr F I0813 20:37:37.717978 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:37:37.718119269+00:00 stderr F I0813 20:37:37.718057 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:37:37.931684726+00:00 stderr F I0813 20:37:37.931575 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:37:37.931684726+00:00 stderr F I0813 20:37:37.931669 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:37:38.110214683+00:00 stderr F I0813 20:37:38.109307 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:37:38.110278385+00:00 stderr F I0813 20:37:38.110221 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.310597560+00:00 stderr F I0813 20:37:38.310485 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.310597560+00:00 stderr F I0813 20:37:38.310559 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.517357531+00:00 stderr F I0813 20:37:38.516962 1 log.go:245] 
Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.517881746+00:00 stderr F I0813 20:37:38.517768 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.713851576+00:00 stderr F I0813 20:37:38.713639 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.713851576+00:00 stderr F I0813 20:37:38.713761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:37:38.913720767+00:00 stderr F I0813 20:37:38.913474 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:37:38.913720767+00:00 stderr F I0813 20:37:38.913587 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:37:39.111837119+00:00 stderr F I0813 20:37:39.111638 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:37:39.111965403+00:00 stderr F I0813 20:37:39.111890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:37:39.308543710+00:00 stderr F I0813 20:37:39.308471 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:37:39.308610902+00:00 stderr F I0813 20:37:39.308549 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:37:39.516062593+00:00 stderr F I0813 20:37:39.515940 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 
2025-08-13T20:37:39.516062593+00:00 stderr F I0813 20:37:39.516029 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:37:39.720623170+00:00 stderr F I0813 20:37:39.720563 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:37:39.721215527+00:00 stderr F I0813 20:37:39.721163 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:37:39.908000432+00:00 stderr F I0813 20:37:39.907906 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:37:39.908000432+00:00 stderr F I0813 20:37:39.907979 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:37:40.107714410+00:00 stderr F I0813 20:37:40.107612 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:37:40.107714410+00:00 stderr F I0813 20:37:40.107682 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:37:40.310193258+00:00 stderr F I0813 20:37:40.310015 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:37:40.311905397+00:00 stderr F I0813 20:37:40.310253 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:37:40.513283703+00:00 stderr F I0813 20:37:40.513228 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:37:40.513401556+00:00 stderr F I0813 20:37:40.513388 1 log.go:245] reconciling (/v1, 
Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:37:40.708574643+00:00 stderr F I0813 20:37:40.708517 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:37:40.708689806+00:00 stderr F I0813 20:37:40.708674 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:37:40.909344761+00:00 stderr F I0813 20:37:40.909216 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:37:40.909344761+00:00 stderr F I0813 20:37:40.909309 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:37:41.107342230+00:00 stderr F I0813 20:37:41.107235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:37:41.107509094+00:00 stderr F I0813 20:37:41.107328 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:37:41.309322663+00:00 stderr F I0813 20:37:41.309263 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:37:41.309322663+00:00 stderr F I0813 20:37:41.309315 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:37:41.508740131+00:00 stderr F I0813 20:37:41.508433 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:37:41.508740131+00:00 stderr F I0813 20:37:41.508505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:37:41.709441598+00:00 stderr F I0813 
20:37:41.709284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:37:41.709441598+00:00 stderr F I0813 20:37:41.709355 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:37:41.908863267+00:00 stderr F I0813 20:37:41.908758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:37:41.908921099+00:00 stderr F I0813 20:37:41.908905 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:37:42.112349844+00:00 stderr F I0813 20:37:42.112233 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:37:42.112349844+00:00 stderr F I0813 20:37:42.112308 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:37:42.308936191+00:00 stderr F I0813 20:37:42.308859 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:37:42.309073755+00:00 stderr F I0813 20:37:42.309013 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:37:42.520145489+00:00 stderr F I0813 20:37:42.511742 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:37:42.520145489+00:00 stderr F I0813 20:37:42.511854 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:37:42.708728676+00:00 stderr F I0813 
20:37:42.708617 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:37:42.708728676+00:00 stderr F I0813 20:37:42.708693 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:37:42.916895448+00:00 stderr F I0813 20:37:42.915329 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:37:42.916895448+00:00 stderr F I0813 20:37:42.915397 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:37:43.108367398+00:00 stderr F I0813 20:37:43.108240 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:37:43.108367398+00:00 stderr F I0813 20:37:43.108322 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:37:43.314610504+00:00 stderr F I0813 20:37:43.314521 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:37:43.314610504+00:00 stderr F I0813 20:37:43.314597 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:37:43.510358067+00:00 stderr F I0813 20:37:43.510242 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:37:43.510358067+00:00 stderr F I0813 20:37:43.510319 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:37:43.707886952+00:00 stderr F 
I0813 20:37:43.707754 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:37:43.708051337+00:00 stderr F I0813 20:37:43.707980 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:37:43.909417972+00:00 stderr F I0813 20:37:43.909287 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:37:43.909460144+00:00 stderr F I0813 20:37:43.909423 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:37:44.108268745+00:00 stderr F I0813 20:37:44.107990 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:37:44.108268745+00:00 stderr F I0813 20:37:44.108067 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:37:44.310035742+00:00 stderr F I0813 20:37:44.309955 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:37:44.310080864+00:00 stderr F I0813 20:37:44.310047 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:37:44.522115036+00:00 stderr F I0813 20:37:44.520425 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:37:44.546810448+00:00 stderr F I0813 20:37:44.546650 1 log.go:245] Operconfig Controller complete 2025-08-13T20:40:15.097755834+00:00 stderr F I0813 20:40:15.096603 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.106656070+00:00 stderr F I0813 20:40:15.106593 1 log.go:245] The 
check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.116356380+00:00 stderr F I0813 20:40:15.116120 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.126956326+00:00 stderr F I0813 20:40:15.126523 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.135594905+00:00 stderr F I0813 20:40:15.135452 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.145745757+00:00 stderr F I0813 20:40:15.145707 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.155562220+00:00 stderr F I0813 20:40:15.155518 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.164362444+00:00 stderr F I0813 20:40:15.164243 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.172441127+00:00 stderr F I0813 20:40:15.172395 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.187869062+00:00 stderr F I0813 20:40:15.187749 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.297835262+00:00 stderr F I0813 20:40:15.297718 1 
log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.494120081+00:00 stderr F I0813 20:40:15.494020 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:40:15.495948704+00:00 stderr F I0813 20:40:15.495463 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.699396089+00:00 stderr F I0813 20:40:15.699217 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.895930785+00:00 stderr F I0813 20:40:15.895768 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.098507356+00:00 stderr F I0813 20:40:16.098373 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.297440331+00:00 stderr F I0813 20:40:16.297311 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.494455340+00:00 stderr F I0813 20:40:16.494324 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.698251816+00:00 stderr F I0813 20:40:16.698096 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.803980614+00:00 stderr F I0813 20:40:16.803839 1 log.go:245] Reconciling pki.network.operator.openshift.io 
openshift-network-node-identity/network-node-identity 2025-08-13T20:40:16.805148378+00:00 stderr F I0813 20:40:16.805001 1 log.go:245] successful reconciliation 2025-08-13T20:40:16.901600799+00:00 stderr F I0813 20:40:16.901504 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.097096905+00:00 stderr F I0813 20:40:17.097019 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.297375289+00:00 stderr F I0813 20:40:17.297223 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.499045973+00:00 stderr F I0813 20:40:17.498826 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.696763303+00:00 stderr F I0813 20:40:17.696577 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.896643316+00:00 stderr F I0813 20:40:17.896577 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:40:18.097033654+00:00 stderr F I0813 20:40:18.096890 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:40:18.191251460+00:00 stderr F I0813 20:40:18.191147 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:40:18.192369292+00:00 stderr F I0813 20:40:18.192299 1 log.go:245] successful reconciliation 
2025-08-13T20:40:18.297909155+00:00 stderr F I0813 20:40:18.297078 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:18.495371918+00:00 stderr F I0813 20:40:18.495316 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:19.402151561+00:00 stderr F I0813 20:40:19.402033 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:40:19.402839470+00:00 stderr F I0813 20:40:19.402737 1 log.go:245] successful reconciliation 2025-08-13T20:40:44.550953322+00:00 stderr F I0813 20:40:44.547924 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:40:44.880082911+00:00 stderr F I0813 20:40:44.879976 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:40:44.885301581+00:00 stderr F I0813 20:40:44.885179 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:40:44.888364619+00:00 stderr F I0813 20:40:44.888287 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:40:44.888387960+00:00 stderr F I0813 20:40:44.888319 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a58f80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893531 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 
2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893575 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893586 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900643 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900709 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900726 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900732 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900864 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:40:44.922083981+00:00 stderr F I0813 20:40:44.921997 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:40:44.939979497+00:00 stderr F I0813 20:40:44.938052 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:40:44.939979497+00:00 stderr F I0813 20:40:44.938126 1 log.go:245] Starting render phase 2025-08-13T20:40:44.959439848+00:00 stderr F I0813 20:40:44.959229 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T20:40:45.000872032+00:00 stderr F I0813 20:40:45.000752 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:40:45.000995475+00:00 stderr F I0813 20:40:45.000950 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:40:45.001087238+00:00 stderr F I0813 20:40:45.001035 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:40:45.001255463+00:00 stderr F I0813 20:40:45.001101 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:40:45.021410054+00:00 stderr F I0813 20:40:45.021335 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:40:45.021410054+00:00 stderr F I0813 20:40:45.021374 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:40:45.031547496+00:00 stderr F I0813 20:40:45.031435 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:40:45.047656371+00:00 stderr F I0813 20:40:45.047569 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:40:45.054137838+00:00 stderr F I0813 20:40:45.054082 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:40:45.054293752+00:00 stderr F I0813 20:40:45.054245 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:40:45.066159814+00:00 stderr F I0813 20:40:45.066090 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 
2025-08-13T20:40:45.066159814+00:00 stderr F I0813 20:40:45.066154 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:40:45.076235345+00:00 stderr F I0813 20:40:45.075669 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:40:45.076235345+00:00 stderr F I0813 20:40:45.075868 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:40:45.083839714+00:00 stderr F I0813 20:40:45.083750 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:40:45.083937557+00:00 stderr F I0813 20:40:45.083899 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:40:45.090948699+00:00 stderr F I0813 20:40:45.090864 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:40:45.090948699+00:00 stderr F I0813 20:40:45.090934 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:40:45.096176520+00:00 stderr F I0813 20:40:45.096122 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:40:45.096229091+00:00 stderr F I0813 20:40:45.096178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:40:45.103018037+00:00 stderr F I0813 20:40:45.102976 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:40:45.103082479+00:00 stderr F I0813 20:40:45.103024 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:40:45.108667400+00:00 stderr F I0813 20:40:45.108546 1 
log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:40:45.108667400+00:00 stderr F I0813 20:40:45.108620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:40:45.113478218+00:00 stderr F I0813 20:40:45.113383 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:40:45.113478218+00:00 stderr F I0813 20:40:45.113442 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:40:45.132231419+00:00 stderr F I0813 20:40:45.132133 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:40:45.132263690+00:00 stderr F I0813 20:40:45.132233 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:40:45.334941113+00:00 stderr F I0813 20:40:45.333055 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:40:45.334941113+00:00 stderr F I0813 20:40:45.333120 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:40:45.534044994+00:00 stderr F I0813 20:40:45.533935 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:40:45.534044994+00:00 stderr F I0813 20:40:45.534020 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:40:45.734093761+00:00 stderr F I0813 20:40:45.733991 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:40:45.734093761+00:00 stderr F I0813 20:40:45.734056 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:40:45.934989923+00:00 stderr F I0813 20:40:45.934862 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:40:45.934989923+00:00 stderr F I0813 20:40:45.934937 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:40:46.136050940+00:00 stderr F I0813 20:40:46.135386 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:40:46.136261176+00:00 stderr F I0813 20:40:46.136241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:40:46.334704357+00:00 stderr F I0813 20:40:46.334389 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:40:46.334704357+00:00 stderr F I0813 20:40:46.334492 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:40:46.534071955+00:00 stderr F I0813 20:40:46.533976 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:40:46.534071955+00:00 stderr F I0813 20:40:46.534046 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:40:46.734065191+00:00 stderr F I0813 20:40:46.733923 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:40:46.734065191+00:00 stderr F I0813 20:40:46.734018 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:40:46.935165579+00:00 stderr F I0813 20:40:46.935051 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was 
successful 2025-08-13T20:40:46.935165579+00:00 stderr F I0813 20:40:46.935123 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:40:47.133985861+00:00 stderr F I0813 20:40:47.133850 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:40:47.133985861+00:00 stderr F I0813 20:40:47.133914 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:40:47.336224892+00:00 stderr F I0813 20:40:47.336084 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:40:47.337163039+00:00 stderr F I0813 20:40:47.336992 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:40:47.556753560+00:00 stderr F I0813 20:40:47.556557 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:40:47.556753560+00:00 stderr F I0813 20:40:47.556645 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:40:47.744716309+00:00 stderr F I0813 20:40:47.744594 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:40:47.744716309+00:00 stderr F I0813 20:40:47.744664 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:40:47.933715097+00:00 stderr F I0813 20:40:47.933590 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:40:47.933715097+00:00 stderr F I0813 20:40:47.933657 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:40:48.133974231+00:00 stderr F I0813 20:40:48.133292 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 
2025-08-13T20:40:48.133974231+00:00 stderr F I0813 20:40:48.133946 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:40:48.333302858+00:00 stderr F I0813 20:40:48.333182 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:40:48.333302858+00:00 stderr F I0813 20:40:48.333279 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:40:48.539219144+00:00 stderr F I0813 20:40:48.539086 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:40:48.539219144+00:00 stderr F I0813 20:40:48.539181 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:40:48.735148462+00:00 stderr F I0813 20:40:48.735023 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:40:48.735148462+00:00 stderr F I0813 20:40:48.735103 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:40:48.942234152+00:00 stderr F I0813 20:40:48.942114 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:40:48.942272123+00:00 stderr F I0813 20:40:48.942231 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:40:49.133289281+00:00 stderr F I0813 20:40:49.133155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:40:49.133289281+00:00 stderr F I0813 20:40:49.133256 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:40:49.338485556+00:00 stderr F I0813 
20:40:49.338022 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:40:49.338485556+00:00 stderr F I0813 20:40:49.338108 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:40:49.536014531+00:00 stderr F I0813 20:40:49.535911 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:40:49.536014531+00:00 stderr F I0813 20:40:49.535998 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:40:49.732605459+00:00 stderr F I0813 20:40:49.732506 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:40:49.732605459+00:00 stderr F I0813 20:40:49.732579 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:40:49.934277163+00:00 stderr F I0813 20:40:49.934067 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:40:49.934277163+00:00 stderr F I0813 20:40:49.934237 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:40:50.133117616+00:00 stderr F I0813 20:40:50.132972 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:40:50.133117616+00:00 stderr F I0813 20:40:50.133083 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:40:50.336270013+00:00 stderr F I0813 20:40:50.336095 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:40:50.336270013+00:00 
stderr F I0813 20:40:50.336180 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:40:50.541051407+00:00 stderr F I0813 20:40:50.540908 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:40:50.541051407+00:00 stderr F I0813 20:40:50.540998 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:40:50.738020606+00:00 stderr F I0813 20:40:50.737874 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:40:50.738020606+00:00 stderr F I0813 20:40:50.737961 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:40:50.933706277+00:00 stderr F I0813 20:40:50.933583 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:40:50.933767719+00:00 stderr F I0813 20:40:50.933716 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:40:51.134281030+00:00 stderr F I0813 20:40:51.134102 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:40:51.134281030+00:00 stderr F I0813 20:40:51.134177 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:40:51.337694324+00:00 stderr F I0813 20:40:51.337469 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:40:51.338152948+00:00 stderr F I0813 20:40:51.337707 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 
2025-08-13T20:40:51.535758445+00:00 stderr F I0813 20:40:51.535658 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:40:51.535758445+00:00 stderr F I0813 20:40:51.535737 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:40:51.741660291+00:00 stderr F I0813 20:40:51.741520 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:40:51.741905518+00:00 stderr F I0813 20:40:51.741876 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:40:51.942284875+00:00 stderr F I0813 20:40:51.942181 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:40:51.942435259+00:00 stderr F I0813 20:40:51.942409 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:40:52.141424596+00:00 stderr F I0813 20:40:52.141239 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:40:52.141424596+00:00 stderr F I0813 20:40:52.141311 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:40:52.344283844+00:00 stderr F I0813 20:40:52.344100 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:40:52.344283844+00:00 stderr F I0813 20:40:52.344176 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:40:52.543569269+00:00 stderr F I0813 20:40:52.543477 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:40:52.543725784+00:00 stderr F I0813 20:40:52.543709       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:40:52.775044693+00:00 stderr F I0813 20:40:52.774900       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:40:52.775302360+00:00 stderr F I0813 20:40:52.775289       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:40:52.965434652+00:00 stderr F I0813 20:40:52.965380       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:40:52.965575536+00:00 stderr F I0813 20:40:52.965557       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:40:53.133901969+00:00 stderr F I0813 20:40:53.133753       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:40:53.134093365+00:00 stderr F I0813 20:40:53.134070       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:40:53.333476783+00:00 stderr F I0813 20:40:53.333361       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:40:53.333476783+00:00 stderr F I0813 20:40:53.333426       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:40:53.532567543+00:00 stderr F I0813 20:40:53.532463       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:40:53.532567543+00:00 stderr F I0813 20:40:53.532532       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:40:53.733328031+00:00 stderr F I0813 20:40:53.733183       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:40:53.733328031+00:00 stderr F I0813 20:40:53.733301       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:40:53.934893202+00:00 stderr F I0813 20:40:53.934723       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:40:53.934893202+00:00 stderr F I0813 20:40:53.934877       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:40:54.140545551+00:00 stderr F I0813 20:40:54.140415       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:40:54.140545551+00:00 stderr F I0813 20:40:54.140522       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:40:54.334310287+00:00 stderr F I0813 20:40:54.334151       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:40:54.334310287+00:00 stderr F I0813 20:40:54.334278       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:40:54.540096500+00:00 stderr F I0813 20:40:54.539963       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:40:54.540096500+00:00 stderr F I0813 20:40:54.540064       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:40:54.737360797+00:00 stderr F I0813 20:40:54.737277       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:40:54.737448070+00:00 stderr F I0813 20:40:54.737359       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:40:54.936895240+00:00 stderr F I0813 20:40:54.936435       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:40:54.936895240+00:00 stderr F I0813 20:40:54.936568       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:40:55.135324501+00:00 stderr F I0813 20:40:55.135174       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:40:55.135324501+00:00 stderr F I0813 20:40:55.135311       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:40:55.333496524+00:00 stderr F I0813 20:40:55.333401       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:40:55.333496524+00:00 stderr F I0813 20:40:55.333474       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:40:55.534464098+00:00 stderr F I0813 20:40:55.533916       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:40:55.534464098+00:00 stderr F I0813 20:40:55.534436       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:40:55.733257638+00:00 stderr F I0813 20:40:55.733132       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:40:55.733315220+00:00 stderr F I0813 20:40:55.733259       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:40:55.935170400+00:00 stderr F I0813 20:40:55.935001       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:40:55.935170400+00:00 stderr F I0813 20:40:55.935125       1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:40:56.137363189+00:00 stderr F I0813 20:40:56.137224       1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:40:56.137363189+00:00 stderr F I0813 20:40:56.137345       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:40:56.333316829+00:00 stderr F I0813 20:40:56.333179       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:40:56.333379720+00:00 stderr F I0813 20:40:56.333317       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:40:56.537067023+00:00 stderr F I0813 20:40:56.536396       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:40:56.537067023+00:00 stderr F I0813 20:40:56.537048       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:40:56.739909091+00:00 stderr F I0813 20:40:56.739827       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:40:56.739955312+00:00 stderr F I0813 20:40:56.739917       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:40:56.939033072+00:00 stderr F I0813 20:40:56.938924       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:40:56.939033072+00:00 stderr F I0813 20:40:56.939011       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:40:57.134730513+00:00 stderr F I0813 20:40:57.134621       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:40:57.134730513+00:00 stderr F I0813 20:40:57.134702       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:40:57.335601675+00:00 stderr F I0813 20:40:57.335497       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:40:57.335601675+00:00 stderr F I0813 20:40:57.335564       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:40:57.536922069+00:00 stderr F I0813 20:40:57.536749       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:40:57.537086954+00:00 stderr F I0813 20:40:57.537025       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:40:57.735013100+00:00 stderr F I0813 20:40:57.734861       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:40:57.735013100+00:00 stderr F I0813 20:40:57.734952       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:40:57.935669555+00:00 stderr F I0813 20:40:57.935559       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:40:57.935669555+00:00 stderr F I0813 20:40:57.935638       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:40:58.134578770+00:00 stderr F I0813 20:40:58.134440       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:40:58.134578770+00:00 stderr F I0813 20:40:58.134514       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:40:58.334404561+00:00 stderr F I0813 20:40:58.334294       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:40:58.334404561+00:00 stderr F I0813 20:40:58.334365       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:40:58.534003205+00:00 stderr F I0813 20:40:58.533921       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:40:58.534003205+00:00 stderr F I0813 20:40:58.533985       1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:40:58.735044331+00:00 stderr F I0813 20:40:58.734959       1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:40:58.735145574+00:00 stderr F I0813 20:40:58.735044       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:40:58.940868225+00:00 stderr F I0813 20:40:58.940706       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:40:58.940868225+00:00 stderr F I0813 20:40:58.940847       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:40:59.169112596+00:00 stderr F I0813 20:40:59.168295       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:40:59.169112596+00:00 stderr F I0813 20:40:59.168393       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:40:59.333872705+00:00 stderr F I0813 20:40:59.333697       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:40:59.333930886+00:00 stderr F I0813 20:40:59.333875       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:40:59.533867841+00:00 stderr F I0813 20:40:59.533268       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:40:59.533867841+00:00 stderr F I0813 20:40:59.533351       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:40:59.733832806+00:00 stderr F I0813 20:40:59.733672       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:40:59.733832806+00:00 stderr F I0813 20:40:59.733746       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:40:59.933404709+00:00 stderr F I0813 20:40:59.933267       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:40:59.933404709+00:00 stderr F I0813 20:40:59.933356       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:41:00.135516177+00:00 stderr F I0813 20:41:00.135412       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:41:00.135516177+00:00 stderr F I0813 20:41:00.135481       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:41:00.337475089+00:00 stderr F I0813 20:41:00.337316       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:41:00.337475089+00:00 stderr F I0813 20:41:00.337384       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:41:00.532647036+00:00 stderr F I0813 20:41:00.532587       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:41:00.532765110+00:00 stderr F I0813 20:41:00.532749       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:41:00.753741351+00:00 stderr F I0813 20:41:00.752926       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:41:00.753741351+00:00 stderr F I0813 20:41:00.753011       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:41:00.939638220+00:00 stderr F I0813 20:41:00.939578       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:41:00.939826206+00:00 stderr F I0813 20:41:00.939761       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:41:01.134297502+00:00 stderr F I0813 20:41:01.134230       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:41:01.134439496+00:00 stderr F I0813 20:41:01.134425       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:41:01.335296097+00:00 stderr F I0813 20:41:01.335059       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:41:01.335296097+00:00 stderr F I0813 20:41:01.335145       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:41:01.534527091+00:00 stderr F I0813 20:41:01.534407       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:41:01.534527091+00:00 stderr F I0813 20:41:01.534504       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:41:01.740891471+00:00 stderr F I0813 20:41:01.739968       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:41:01.740891471+00:00 stderr F I0813 20:41:01.740139       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:41:01.935963905+00:00 stderr F I0813 20:41:01.935834       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:41:01.935963905+00:00 stderr F I0813 20:41:01.935913       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:41:02.133495580+00:00 stderr F I0813 20:41:02.133434       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:41:02.133631094+00:00 stderr F I0813 20:41:02.133612       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:41:02.334090013+00:00 stderr F I0813 20:41:02.333947       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:41:02.334090013+00:00 stderr F I0813 20:41:02.334034       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:41:02.540864845+00:00 stderr F I0813 20:41:02.540755       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:41:02.541058460+00:00 stderr F I0813 20:41:02.541038       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:41:02.733239701+00:00 stderr F I0813 20:41:02.733155       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:41:02.733383725+00:00 stderr F I0813 20:41:02.733345       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:41:02.934099401+00:00 stderr F I0813 20:41:02.933992       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:41:02.934099401+00:00 stderr F I0813 20:41:02.934069       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:41:03.138715190+00:00 stderr F I0813 20:41:03.138572       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:41:03.138715190+00:00 stderr F I0813 20:41:03.138639       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:41:03.333914997+00:00 stderr F I0813 20:41:03.332724       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:41:03.333914997+00:00 stderr F I0813 20:41:03.332971       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:41:03.532882163+00:00 stderr F I0813 20:41:03.532038       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:41:03.532882163+00:00 stderr F I0813 20:41:03.532120       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:41:03.735113184+00:00 stderr F I0813 20:41:03.734248       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:41:03.735113184+00:00 stderr F I0813 20:41:03.734328       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:41:03.934974305+00:00 stderr F I0813 20:41:03.933942       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:41:03.934974305+00:00 stderr F I0813 20:41:03.934050       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:41:04.137248687+00:00 stderr F I0813 20:41:04.136951       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:41:04.137248687+00:00 stderr F I0813 20:41:04.137087       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:41:04.334109462+00:00 stderr F I0813 20:41:04.333452       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:41:04.334325979+00:00 stderr F I0813 20:41:04.334306       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:41:04.556272377+00:00 stderr F I0813 20:41:04.555674       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:41:04.556420642+00:00 stderr F I0813 20:41:04.556405       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:41:04.749264212+00:00 stderr F I0813 20:41:04.745995       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:41:04.749264212+00:00 stderr F I0813 20:41:04.746078       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:41:04.939856926+00:00 stderr F I0813 20:41:04.939308       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:41:04.939856926+00:00 stderr F I0813 20:41:04.939425       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:41:05.134035275+00:00 stderr F I0813 20:41:05.133413       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:41:05.134089146+00:00 stderr F I0813 20:41:05.134032       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:41:05.333707351+00:00 stderr F I0813 20:41:05.332665       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:41:05.333707351+00:00 stderr F I0813 20:41:05.332833       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:41:05.536474187+00:00 stderr F I0813 20:41:05.536400       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:41:05.536603541+00:00 stderr F I0813 20:41:05.536587       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:41:05.737175183+00:00 stderr F I0813 20:41:05.737070       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:41:05.758628642+00:00 stderr F I0813 20:41:05.758569       1 log.go:245] Operconfig Controller complete
2025-08-13T20:42:36.391312212+00:00 stderr F I0813 20:42:36.391113       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.391312212+00:00 stderr F I0813 20:42:36.389176       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.391453956+00:00 stderr F I0813 20:42:36.389413       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.392183       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.389336       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.395504       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.440454488+00:00 stderr F I0813 20:42:36.440420       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.460945159+00:00 stderr F I0813 20:42:36.460876       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.512163326+00:00 stderr F I0813 20:42:36.512108       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.512654420+00:00 stderr F I0813 20:42:36.512632       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.512876606+00:00 stderr F I0813 20:42:36.512854       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513069152+00:00 stderr F I0813 20:42:36.513050       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513195025+00:00 stderr F I0813 20:42:36.513178       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513426082+00:00 stderr F I0813 20:42:36.513406       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513558686+00:00 stderr F I0813 20:42:36.513541       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513675699+00:00 stderr F I0813 20:42:36.513659       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513892706+00:00 stderr F I0813 20:42:36.513871       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514085401+00:00 stderr F I0813 20:42:36.514061       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514206105+00:00 stderr F I0813 20:42:36.514189       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514409870+00:00 stderr F I0813 20:42:36.514388       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514547994+00:00 stderr F I0813 20:42:36.514529       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514919475+00:00 stderr F I0813 20:42:36.514897       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515101870+00:00 stderr F I0813 20:42:36.515083       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515254945+00:00 stderr F I0813 20:42:36.515205       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515385399+00:00 stderr F I0813 20:42:36.515367       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515515932+00:00 stderr F I0813 20:42:36.515499       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515645166+00:00 stderr F I0813 20:42:36.515624       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515849292+00:00 stderr F I0813 20:42:36.515828       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515985746+00:00 stderr F I0813 20:42:36.515967       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.516156521+00:00 stderr F I0813 20:42:36.516139       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.516309315+00:00 stderr F I0813 20:42:36.516288       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.516422969+00:00 stderr F I0813 20:42:36.516406       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.522322549+00:00 stderr F I0813 20:42:36.520212       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.522322549+00:00 stderr F I0813 20:42:36.520650       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.522705880+00:00 stderr F I0813 20:42:36.522676       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.522979318+00:00 stderr F I0813 20:42:36.522958       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.523127952+00:00 stderr F I0813 20:42:36.523111       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.523295327+00:00 stderr F I0813 20:42:36.523274       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.523473222+00:00 stderr F I0813 20:42:36.523454       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.523632686+00:00 stderr F I0813 20:42:36.523613       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.523770170+00:00 stderr F I0813 20:42:36.523753       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.524150731+00:00 stderr F I0813 20:42:36.524127       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.524397088+00:00 stderr F I0813 20:42:36.524376       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.534004525+00:00 stderr F I0813 20:42:36.524513       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.556483373+00:00 stderr F I0813 20:42:36.555278       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:40.495414124+00:00 stderr F I0813 20:42:40.494156       1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:40.498079220+00:00 stderr F I0813 20:42:40.498000       1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:40.499157992+00:00 stderr F I0813 20:42:40.499087       1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:42:40.502500498+00:00 stderr F I0813 20:42:40.502429       1 base_controller.go:172] Shutting down ManagementStateController ...
2025-08-13T20:42:40.502583810+00:00 stderr F I0813 20:42:40.502556       1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:40.502687953+00:00 stderr F I0813 20:42:40.502645       1 base_controller.go:172] Shutting down ConnectivityCheckController ...
2025-08-13T20:42:40.503464056+00:00 stderr F I0813 20:42:40.503402       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...
2025-08-13T20:42:40.503464056+00:00 stderr F I0813 20:42:40.503449       1 base_controller.go:104] All ManagementStateController workers have been terminated
2025-08-13T20:42:40.503516897+00:00 stderr F I0813 20:42:40.503468       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:42:40.503516897+00:00 stderr F I0813 20:42:40.503477       1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:42:40.505134334+00:00 stderr F I0813 20:42:40.504123       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:42:40.505327259+00:00 stderr F I0813 20:42:40.505277       1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ...
2025-08-13T20:42:40.505347790+00:00 stderr F I0813 20:42:40.505336       1 base_controller.go:104] All ConnectivityCheckController workers have been terminated
2025-08-13T20:42:40.505361020+00:00 stderr F I0813 20:42:40.505342       1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:42:40.508271484+00:00 stderr F E0813 20:42:40.507176       1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock?timeout=4m0s": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:42:40.508370927+00:00 stderr F I0813 20:42:40.508323       1 internal.go:516] "Stopping and waiting for non leader election runnables"
2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509205       1 secure_serving.go:258] Stopped listening on [::]:9104
2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509260       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509264       1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509298       1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
2025-08-13T20:42:40.509335995+00:00 stderr F I0813 20:42:40.509314       1 builder.go:330] server exited
2025-08-13T20:42:40.509335995+00:00 stderr F I0813 20:42:40.509321 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.509346375+00:00 stderr F I0813 20:42:40.509338 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.510411026+00:00 stderr F I0813 20:42:40.510349 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.510652763+00:00 stderr F I0813 20:42:40.510631 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:42:40.511647552+00:00 stderr F I0813 20:42:40.511625 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="operconfig-controller" 2025-08-13T20:42:40.511714894+00:00 stderr F I0813 20:42:40.511700 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="proxyconfig-controller" 2025-08-13T20:42:40.511752765+00:00 stderr F I0813 20:42:40.511741 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="ingress-config-controller" 2025-08-13T20:42:40.511843317+00:00 stderr F I0813 20:42:40.511825 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="infrastructureconfig-controller" 2025-08-13T20:42:40.511911469+00:00 stderr F I0813 20:42:40.511898 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="clusterconfig-controller" 2025-08-13T20:42:40.511946980+00:00 stderr F I0813 20:42:40.511935 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pki-controller" 2025-08-13T20:42:40.511981081+00:00 stderr F I0813 20:42:40.511969 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" 
controller="egress-router-controller" 2025-08-13T20:42:40.512019012+00:00 stderr F I0813 20:42:40.512007 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="signer-controller" 2025-08-13T20:42:40.512057463+00:00 stderr F I0813 20:42:40.512046 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pod-watcher" 2025-08-13T20:42:40.512091064+00:00 stderr F I0813 20:42:40.512079 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="allowlist-controller" 2025-08-13T20:42:40.512144096+00:00 stderr F I0813 20:42:40.512132 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="dashboard-controller" 2025-08-13T20:42:40.512217028+00:00 stderr F I0813 20:42:40.512204 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:42:40.512325451+00:00 stderr F I0813 20:42:40.512311 1 controller.go:242] "All workers finished" controller="pod-watcher" 2025-08-13T20:42:40.512364382+00:00 stderr F I0813 20:42:40.512352 1 controller.go:242] "All workers finished" controller="allowlist-controller" 2025-08-13T20:42:40.512399143+00:00 stderr F I0813 20:42:40.512388 1 controller.go:242] "All workers finished" controller="proxyconfig-controller" 2025-08-13T20:42:40.512432934+00:00 stderr F I0813 20:42:40.512421 1 controller.go:242] "All workers finished" controller="infrastructureconfig-controller" 2025-08-13T20:42:40.512473035+00:00 stderr F I0813 20:42:40.512458 1 controller.go:242] "All workers finished" controller="dashboard-controller" 2025-08-13T20:42:40.512513917+00:00 stderr F I0813 20:42:40.512501 1 controller.go:242] "All workers finished" controller="ingress-config-controller" 2025-08-13T20:42:40.512548758+00:00 stderr F I0813 20:42:40.512537 1 controller.go:242] "All workers finished" controller="clusterconfig-controller" 
2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512597 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512608 1 controller.go:242] "All workers finished" controller="operconfig-controller" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512684 1 controller.go:242] "All workers finished" controller="signer-controller" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512685 1 controller.go:242] "All workers finished" controller="egress-router-controller" 2025-08-13T20:42:40.512716922+00:00 stderr F I0813 20:42:40.512699 1 controller.go:242] "All workers finished" controller="pki-controller" 2025-08-13T20:42:40.512754053+00:00 stderr F I0813 20:42:40.512739 1 controller.go:242] "All workers finished" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:42:40.512845306+00:00 stderr F I0813 20:42:40.512827 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:42:40.513871366+00:00 stderr F I0813 20:42:40.513704 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_c4643de4-0a66-40c8-abff-4239e04f61ab stopped leading 2025-08-13T20:42:40.514043371+00:00 stderr F I0813 20:42:40.513966 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:42:40.514043371+00:00 stderr F I0813 20:42:40.514017 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:42:40.516137041+00:00 stderr F W0813 20:42:40.516017 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.log
2026-01-20T10:47:26.005252369+00:00 stderr F I0120 10:47:26.002261 1 cmd.go:241] Using service-serving-cert provided certificates 2026-01-20T10:47:26.005252369+00:00 stderr F I0120 10:47:26.004767 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:47:26.005702372+00:00 stderr F I0120 10:47:26.005658 1 observer_polling.go:159] Starting file observer 2026-01-20T10:47:26.028509158+00:00 stderr F I0120 10:47:26.027464 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2026-01-20T10:47:26.477090278+00:00 stderr F I0120 10:47:26.476965 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:47:26.477090278+00:00 stderr F W0120 10:47:26.477004 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:47:26.477090278+00:00 stderr F W0120 10:47:26.477017 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2026-01-20T10:47:26.480156741+00:00 stderr F I0120 10:47:26.480089 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:47:26.481739513+00:00 stderr F I0120 10:47:26.481670 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:47:26.481763493+00:00 stderr F I0120 10:47:26.481727 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:47:26.481852406+00:00 stderr F I0120 10:47:26.481768 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:47:26.482090983+00:00 stderr F I0120 10:47:26.482022 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:47:26.482516744+00:00 stderr F I0120 10:47:26.482458 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:47:26.482516744+00:00 stderr F I0120 10:47:26.482484 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-operator/network-operator-lock... 
2026-01-20T10:47:26.483193703+00:00 stderr F I0120 10:47:26.482458 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:47:26.483348277+00:00 stderr F I0120 10:47:26.482459 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:47:26.483419139+00:00 stderr F I0120 10:47:26.483387 1 secure_serving.go:213] Serving securely on [::]:9104 2026-01-20T10:47:26.483434019+00:00 stderr F I0120 10:47:26.483420 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:47:26.584146532+00:00 stderr F I0120 10:47:26.584091 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:47:26.584248475+00:00 stderr F I0120 10:47:26.584208 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:47:26.584389619+00:00 stderr F I0120 10:47:26.584165 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:53:15.183977649+00:00 stderr F I0120 10:53:15.180740 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock 2026-01-20T10:53:15.183977649+00:00 stderr F I0120 10:53:15.181329 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_abc14e8f-ed5f-4eee-bd1e-d55aa90f8ce2 became leader 2026-01-20T10:53:15.195721192+00:00 stderr F I0120 10:53:15.195613 1 operator.go:97] Creating status manager for stand-alone cluster 2026-01-20T10:53:15.195721192+00:00 stderr F I0120 10:53:15.195704 1 operator.go:102] Adding controller-runtime controllers 
2026-01-20T10:53:15.196490252+00:00 stderr F I0120 10:53:15.196446 1 operconfig_controller.go:102] Waiting for feature gates initialization... 2026-01-20T10:53:15.198417821+00:00 stderr F I0120 10:53:15.198346 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:53:15.201013239+00:00 stderr F I0120 10:53:15.200928 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", 
"MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:53:15.201013239+00:00 stderr F I0120 10:53:15.200956 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification 
TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:53:15.201691797+00:00 stderr F I0120 10:53:15.201635 1 client.go:239] Starting informers... 2026-01-20T10:53:15.202043776+00:00 stderr F I0120 10:53:15.201997 1 client.go:250] Waiting for informers to sync... 2026-01-20T10:53:15.303276270+00:00 stderr F I0120 10:53:15.302503 1 client.go:271] Informers started and synced 2026-01-20T10:53:15.303389723+00:00 stderr F I0120 10:53:15.303374 1 operator.go:126] Starting controller-manager 2026-01-20T10:53:15.305251721+00:00 stderr F I0120 10:53:15.305226 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2026-01-20T10:53:15.305551849+00:00 stderr F I0120 10:53:15.305531 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false 2026-01-20T10:53:15.307144980+00:00 stderr F I0120 10:53:15.307121 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:53:15.307195441+00:00 stderr F I0120 10:53:15.307182 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:53:15.307233862+00:00 stderr F I0120 10:53:15.307220 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2026-01-20T10:53:15.307475498+00:00 stderr F I0120 10:53:15.307455 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI" 2026-01-20T10:53:15.307522559+00:00 stderr F I0120 10:53:15.307509 1 controller.go:186] "Starting Controller" controller="pki-controller" 2026-01-20T10:53:15.307902529+00:00 stderr F I0120 10:53:15.307882 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000a36370" 2026-01-20T10:53:15.307989362+00:00 stderr F I0120 10:53:15.307971 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy" 2026-01-20T10:53:15.308045334+00:00 stderr F I0120 10:53:15.308026 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller" 2026-01-20T10:53:15.309463110+00:00 stderr F I0120 10:53:15.309385 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2026-01-20T10:53:15.309485970+00:00 stderr F I0120 10:53:15.309468 1 base_controller.go:73] Caches are synced for ManagementStateController 2026-01-20T10:53:15.309495981+00:00 stderr F I0120 10:53:15.309483 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2026-01-20T10:53:15.311032981+00:00 stderr F I0120 10:53:15.310987 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter" 2026-01-20T10:53:15.311032981+00:00 stderr F I0120 10:53:15.310993 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2026-01-20T10:53:15.311061221+00:00 stderr F I0120 10:53:15.311025 1 controller.go:186] "Starting Controller" controller="egress-router-controller" 2026-01-20T10:53:15.311594195+00:00 stderr F I0120 10:53:15.311546 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network" 2026-01-20T10:53:15.311594195+00:00 stderr F I0120 10:53:15.311584 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller" 2026-01-20T10:53:15.311842701+00:00 stderr F I0120 10:53:15.311804 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest" 2026-01-20T10:53:15.311842701+00:00 stderr F I0120 10:53:15.311836 1 controller.go:186] "Starting Controller" controller="signer-controller" 2026-01-20T10:53:15.312053987+00:00 stderr F I0120 10:53:15.312025 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2026-01-20T10:53:15.312053987+00:00 stderr F I0120 10:53:15.312047 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2026-01-20T10:53:15.312071487+00:00 stderr F I0120 10:53:15.312062 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc000f2e000" 2026-01-20T10:53:15.312182410+00:00 stderr F I0120 10:53:15.312130 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node" 2026-01-20T10:53:15.312182410+00:00 stderr F I0120 10:53:15.312149 1 controller.go:186] "Starting Controller" 
controller="operconfig-controller" 2026-01-20T10:53:15.312259702+00:00 stderr F I0120 10:53:15.312232 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000f2e0b0" 2026-01-20T10:53:15.312299933+00:00 stderr F I0120 10:53:15.312275 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000f2e160" 2026-01-20T10:53:15.312354984+00:00 stderr F I0120 10:53:15.312333 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller" 2026-01-20T10:53:15.312368115+00:00 stderr F I0120 10:53:15.312360 1 controller.go:220] "Starting workers" controller="configmap-trust-bundle-injector-controller" worker count=1 2026-01-20T10:53:15.312607751+00:00 stderr F I0120 10:53:15.312578 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure" 2026-01-20T10:53:15.312607751+00:00 stderr F I0120 10:53:15.312597 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller" 2026-01-20T10:53:15.313304509+00:00 stderr F I0120 10:53:15.313273 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.314440298+00:00 stderr F I0120 10:53:15.314394 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.314528341+00:00 stderr F I0120 10:53:15.314462 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.315348772+00:00 stderr F I0120 10:53:15.315302 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2026-01-20T10:53:15.315510696+00:00 stderr F I0120 10:53:15.315484 1 log.go:245] openshift-network-operator/iptables-alerter-script 
changed, triggering operconf reconciliation 2026-01-20T10:53:15.315510696+00:00 stderr F I0120 10:53:15.315498 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation 2026-01-20T10:53:15.315510696+00:00 stderr F I0120 10:53:15.315505 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation 2026-01-20T10:53:15.316254645+00:00 stderr F I0120 10:53:15.315719 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.316254645+00:00 stderr F I0120 10:53:15.315927 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc000f2e2c0" 2026-01-20T10:53:15.316254645+00:00 stderr F I0120 10:53:15.316021 1 controller.go:186] "Starting Controller" controller="dashboard-controller" 2026-01-20T10:53:15.316254645+00:00 stderr F I0120 10:53:15.316035 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1 2026-01-20T10:53:15.318185705+00:00 stderr F I0120 10:53:15.316924 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.318185705+00:00 stderr F I0120 10:53:15.317136 1 log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle 2026-01-20T10:53:15.318185705+00:00 stderr F I0120 10:53:15.317155 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.318373870+00:00 stderr F I0120 10:53:15.318338 1 dashboard_controller.go:113] Reconcile dashboards 2026-01-20T10:53:15.318526764+00:00 stderr F I0120 10:53:15.318494 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.318660067+00:00 stderr F I0120 10:53:15.318597 1 reflector.go:351] Caches populated for *v1.Endpoints from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.324921519+00:00 stderr F I0120 10:53:15.324814 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.325147915+00:00 stderr F I0120 10:53:15.325089 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.325643218+00:00 stderr F I0120 10:53:15.325573 1 controller.go:178] "Starting EventSource" controller="ingress-config-controller" source="kind source: *v1.IngressController" 2026-01-20T10:53:15.325643218+00:00 stderr F I0120 10:53:15.325596 1 controller.go:186] "Starting Controller" controller="ingress-config-controller" 2026-01-20T10:53:15.325860743+00:00 stderr F I0120 10:53:15.325805 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc000f2e210" 2026-01-20T10:53:15.325860743+00:00 stderr F I0120 10:53:15.325840 1 controller.go:186] "Starting Controller" controller="allowlist-controller" 2026-01-20T10:53:15.325860743+00:00 stderr F I0120 10:53:15.325853 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1 2026-01-20T10:53:15.326277654+00:00 stderr F I0120 10:53:15.326233 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.326415277+00:00 stderr F I0120 10:53:15.326379 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.326468309+00:00 stderr F I0120 10:53:15.326443 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000f2e370" 2026-01-20T10:53:15.326568301+00:00 stderr F I0120 10:53:15.326549 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000f2e420" 2026-01-20T10:53:15.326645923+00:00 
stderr F I0120 10:53:15.326626 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000f2e4d0" 2026-01-20T10:53:15.326732786+00:00 stderr F I0120 10:53:15.326704 1 controller.go:186] "Starting Controller" controller="pod-watcher" 2026-01-20T10:53:15.326800687+00:00 stderr F I0120 10:53:15.326776 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1 2026-01-20T10:53:15.327248479+00:00 stderr F I0120 10:53:15.327149 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.327271119+00:00 stderr F I0120 10:53:15.327252 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2026-01-20T10:53:15.327325821+00:00 stderr F I0120 10:53:15.327290 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2026-01-20T10:53:15.327359602+00:00 stderr F I0120 10:53:15.327305 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2026-01-20T10:53:15.327431143+00:00 stderr F I0120 10:53:15.327400 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2026-01-20T10:53:15.327502055+00:00 stderr F I0120 10:53:15.327461 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.327536116+00:00 stderr F I0120 10:53:15.327470 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2026-01-20T10:53:15.327578338+00:00 stderr F I0120 10:53:15.327561 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2026-01-20T10:53:15.327616419+00:00 stderr F I0120 10:53:15.327603 1 pod_watcher.go:131] Operand /, Kind= 
openshift-network-operator/iptables-alerter updated, re-generating status 2026-01-20T10:53:15.327651340+00:00 stderr F I0120 10:53:15.327638 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:53:15.328124122+00:00 stderr F I0120 10:53:15.328067 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.329378904+00:00 stderr F I0120 10:53:15.329351 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.329974919+00:00 stderr F I0120 10:53:15.329934 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.330055541+00:00 stderr F I0120 10:53:15.330021 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:15.330466893+00:00 stderr F I0120 10:53:15.330436 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.330608116+00:00 stderr F I0120 10:53:15.330557 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.335470561+00:00 stderr F I0120 10:53:15.335397 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist 2026-01-20T10:53:15.335514312+00:00 stderr F I0120 10:53:15.335462 1 dashboard_controller.go:139] Applying dashboards manifests 2026-01-20T10:53:15.336072517+00:00 stderr F I0120 10:53:15.336019 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.336871358+00:00 stderr F I0120 10:53:15.336847 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2026-01-20T10:53:15.341140017+00:00 stderr F I0120 10:53:15.338347 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.343132509+00:00 stderr F I0120 10:53:15.343087 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2026-01-20T10:53:15.364303056+00:00 stderr F I0120 10:53:15.363938 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger" 2026-01-20T10:53:15.364303056+00:00 stderr F I0120 10:53:15.363957 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger" 2026-01-20T10:53:15.368171206+00:00 stderr F I0120 10:53:15.368091 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2026-01-20T10:53:15.368171206+00:00 stderr F I0120 10:53:15.368118 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2026-01-20T10:53:15.401692762+00:00 stderr F I0120 10:53:15.395448 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2026-01-20T10:53:15.413323412+00:00 stderr F I0120 10:53:15.410339 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1 2026-01-20T10:53:15.413323412+00:00 stderr F I0120 10:53:15.410473 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2026-01-20T10:53:15.413323412+00:00 stderr F I0120 10:53:15.411087 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1 2026-01-20T10:53:15.413323412+00:00 stderr F I0120 10:53:15.411131 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2026-01-20T10:53:15.413323412+00:00 stderr F 
I0120 10:53:15.412342 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1 2026-01-20T10:53:15.413323412+00:00 stderr F I0120 10:53:15.412454 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1 2026-01-20T10:53:15.413323412+00:00 stderr F I0120 10:53:15.412469 1 log.go:245] Reconciling Network.config.openshift.io cluster 2026-01-20T10:53:15.415392426+00:00 stderr F I0120 10:53:15.413663 1 log.go:245] /crc changed, triggering operconf reconciliation 2026-01-20T10:53:15.426219865+00:00 stderr F I0120 10:53:15.422635 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.426219865+00:00 stderr F I0120 10:53:15.423798 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.430142026+00:00 stderr F I0120 10:53:15.426526 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1 2026-01-20T10:53:15.430142026+00:00 stderr F I0120 10:53:15.426646 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2026-01-20T10:53:15.460814179+00:00 stderr F I0120 10:53:15.460710 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.461315511+00:00 stderr F I0120 10:53:15.461274 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1 2026-01-20T10:53:15.461591668+00:00 stderr F I0120 10:53:15.461545 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1 2026-01-20T10:53:15.461752422+00:00 stderr F I0120 10:53:15.461725 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1 2026-01-20T10:53:15.461852895+00:00 stderr F I0120 10:53:15.461831 1 log.go:245] Reconciling Network.operator.openshift.io cluster 
2026-01-20T10:53:15.478202497+00:00 stderr F I0120 10:53:15.476773 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.478202497+00:00 stderr F I0120 10:53:15.476881 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca' 2026-01-20T10:53:15.487199730+00:00 stderr F I0120 10:53:15.487166 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.487236560+00:00 stderr F I0120 10:53:15.487205 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca' 2026-01-20T10:53:15.511415175+00:00 stderr F I0120 10:53:15.511362 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.511554039+00:00 stderr F I0120 10:53:15.511532 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca' 2026-01-20T10:53:15.512207546+00:00 stderr F I0120 10:53:15.511691 1 log.go:245] "network-node-identity-cert" in "openshift-network-node-identity" requires a new target cert/key pair: past its latest possible time 2026-01-06 19:56:26.8 +0000 UTC 2026-01-20T10:53:15.523336983+00:00 stderr F I0120 10:53:15.523262 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.523393294+00:00 stderr F I0120 10:53:15.523343 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests' 2026-01-20T10:53:15.530966480+00:00 stderr F I0120 10:53:15.530918 1 log.go:245] configmap 
'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.530998691+00:00 stderr F I0120 10:53:15.530985 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2026-01-20T10:53:15.545145856+00:00 stderr F I0120 10:53:15.545081 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.545264559+00:00 stderr F I0120 10:53:15.545253 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 2026-01-20T10:53:15.555088004+00:00 stderr F I0120 10:53:15.555019 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.555088004+00:00 stderr F I0120 10:53:15.555081 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle' 2026-01-20T10:53:15.563268365+00:00 stderr F I0120 10:53:15.562701 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.563268365+00:00 stderr F I0120 10:53:15.562747 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt' 2026-01-20T10:53:15.624700161+00:00 stderr F I0120 10:53:15.624627 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:15.702493960+00:00 stderr F I0120 10:53:15.701797 1 log.go:245] Updated Secret/network-node-identity-cert -n openshift-network-node-identity because it changed 2026-01-20T10:53:15.702493960+00:00 stderr F I0120 10:53:15.701835 1 log.go:245] successful reconciliation 2026-01-20T10:53:15.711919664+00:00 
stderr F I0120 10:53:15.711852 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2026-01-20T10:53:15.711919664+00:00 stderr F I0120 10:53:15.711894 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2026-01-20T10:53:15.736110628+00:00 stderr F I0120 10:53:15.736022 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2026-01-20T10:53:15.738742066+00:00 stderr F I0120 10:53:15.738715 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.741629581+00:00 stderr F I0120 10:53:15.741579 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.741666062+00:00 stderr F I0120 10:53:15.741655 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs' 2026-01-20T10:53:15.749236657+00:00 stderr F I0120 10:53:15.749200 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.755637442+00:00 stderr F I0120 10:53:15.755147 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2026-01-20T10:53:15.755637442+00:00 stderr F I0120 10:53:15.755171 1 log.go:245] Successfully updated Operator config from Cluster config 2026-01-20T10:53:15.756637299+00:00 stderr F I0120 10:53:15.756592 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:53:15.762613093+00:00 stderr F I0120 10:53:15.758147 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.773107154+00:00 stderr F I0120 10:53:15.773017 1 
log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.779796187+00:00 stderr F I0120 10:53:15.779755 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.789972780+00:00 stderr F I0120 10:53:15.787433 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.795087681+00:00 stderr F I0120 10:53:15.794685 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.803149040+00:00 stderr F I0120 10:53:15.803094 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.810996262+00:00 stderr F I0120 10:53:15.810967 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:53:15.946423540+00:00 stderr F I0120 10:53:15.946380 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:53:15.946541263+00:00 stderr F I0120 10:53:15.946530 1 log.go:245] Reconciling proxy 'cluster' 2026-01-20T10:53:16.064988802+00:00 stderr F I0120 10:53:16.064896 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2026-01-20T10:53:16.140633276+00:00 stderr F I0120 10:53:16.140433 1 log.go:245] Reconciling proxy 'cluster' complete 2026-01-20T10:53:16.341418921+00:00 stderr F I0120 10:53:16.341377 1 log.go:245] Reconciling configmap from 
openshift-image-registry/trusted-ca 2026-01-20T10:53:16.352506257+00:00 stderr F I0120 10:53:16.352463 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:16.534405525+00:00 stderr F I0120 10:53:16.534360 1 dashboard_controller.go:113] Reconcile dashboards 2026-01-20T10:53:16.541150970+00:00 stderr F I0120 10:53:16.540800 1 dashboard_controller.go:139] Applying dashboards manifests 2026-01-20T10:53:16.549389863+00:00 stderr F I0120 10:53:16.548666 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2026-01-20T10:53:16.561848704+00:00 stderr F I0120 10:53:16.561798 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2026-01-20T10:53:16.561959987+00:00 stderr F I0120 10:53:16.561947 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2026-01-20T10:53:16.574219454+00:00 stderr F I0120 10:53:16.574170 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2026-01-20T10:53:16.734970616+00:00 stderr F I0120 10:53:16.734924 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2026-01-20T10:53:16.738573058+00:00 stderr F I0120 10:53:16.738542 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:16.740282483+00:00 stderr F I0120 10:53:16.740225 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:16.836695982+00:00 stderr F I0120 10:53:16.836335 1 log.go:245] "ovn-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: past its latest possible time 2026-01-06 19:56:29.8 +0000 UTC 2026-01-20T10:53:16.936682085+00:00 stderr F I0120 10:53:16.934388 1 log.go:245] Reconciling 
Network.config.openshift.io cluster 2026-01-20T10:53:16.989417057+00:00 stderr F I0120 10:53:16.989347 1 log.go:245] Updated Secret/ovn-cert -n openshift-ovn-kubernetes because it changed 2026-01-20T10:53:16.989417057+00:00 stderr F I0120 10:53:16.989377 1 log.go:245] successful reconciliation 2026-01-20T10:53:17.334848868+00:00 stderr F I0120 10:53:17.334766 1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle 2026-01-20T10:53:17.337034535+00:00 stderr F I0120 10:53:17.336998 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:17.532665408+00:00 stderr F I0120 10:53:17.532578 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2026-01-20T10:53:17.547220773+00:00 stderr F I0120 10:53:17.547152 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:53:17.547527661+00:00 stderr F I0120 10:53:17.547501 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2026-01-20T10:53:17.547527661+00:00 stderr F I0120 10:53:17.547515 1 log.go:245] Successfully updated Operator config from Cluster config 2026-01-20T10:53:17.738653237+00:00 stderr F I0120 10:53:17.738589 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2026-01-20T10:53:17.740060334+00:00 stderr F I0120 10:53:17.740033 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2026-01-20T10:53:17.741431509+00:00 stderr F I0120 10:53:17.741393 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2026-01-20T10:53:17.741431509+00:00 stderr F I0120 10:53:17.741409 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002c17280 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] 
DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2026-01-20T10:53:17.768014096+00:00 stderr F I0120 10:53:17.767959 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2026-01-20T10:53:17.768136689+00:00 stderr F I0120 10:53:17.768093 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2026-01-20T10:53:17.768174510+00:00 stderr F I0120 10:53:17.768160 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2026-01-20T10:53:17.771967587+00:00 stderr F I0120 10:53:17.771715 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2026-01-20T10:53:17.771967587+00:00 stderr F I0120 10:53:17.771746 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2026-01-20T10:53:17.771967587+00:00 stderr F I0120 10:53:17.771752 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2026-01-20T10:53:17.771967587+00:00 stderr F I0120 10:53:17.771759 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2026-01-20T10:53:17.771994958+00:00 stderr F I0120 10:53:17.771979 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2026-01-20T10:53:18.136840121+00:00 stderr F I0120 10:53:18.136775 1 dashboard_controller.go:113] Reconcile dashboards 2026-01-20T10:53:18.140747872+00:00 stderr F I0120 10:53:18.140719 1 dashboard_controller.go:139] Applying dashboards manifests 2026-01-20T10:53:18.141330307+00:00 stderr F I0120 10:53:18.141299 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 
2026-01-20T10:53:18.153509401+00:00 stderr F I0120 10:53:18.153438 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2026-01-20T10:53:18.158330556+00:00 stderr F I0120 10:53:18.158297 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:53:18.158482710+00:00 stderr F E0120 10:53:18.158462 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="88d783ff-2c56-477b-b208-5c2551e444f6" 2026-01-20T10:53:18.158572762+00:00 stderr F I0120 10:53:18.158561 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2026-01-20T10:53:18.167656666+00:00 stderr F I0120 10:53:18.167556 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2026-01-20T10:53:18.167656666+00:00 stderr F I0120 10:53:18.167596 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2026-01-20T10:53:18.179140503+00:00 stderr F I0120 10:53:18.178199 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2026-01-20T10:53:18.336656592+00:00 stderr F I0120 10:53:18.336593 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2026-01-20T10:53:18.342231336+00:00 stderr 
F I0120 10:53:18.341002 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:18.345279634+00:00 stderr F I0120 10:53:18.345221 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:18.438518562+00:00 stderr F I0120 10:53:18.438353 1 log.go:245] "signer-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: past its latest possible time 2026-01-06 19:56:32.8 +0000 UTC 2026-01-20T10:53:18.527788878+00:00 stderr F I0120 10:53:18.527719 1 log.go:245] Updated Secret/signer-cert -n openshift-ovn-kubernetes because it changed 2026-01-20T10:53:18.527788878+00:00 stderr F I0120 10:53:18.527745 1 log.go:245] successful reconciliation 2026-01-20T10:53:18.533204908+00:00 stderr F I0120 10:53:18.533135 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle 2026-01-20T10:53:18.538556186+00:00 stderr F I0120 10:53:18.538467 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:19.548378946+00:00 stderr F I0120 10:53:19.548314 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2026-01-20T10:53:19.551294811+00:00 stderr F I0120 10:53:19.551235 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2026-01-20T10:53:19.554635957+00:00 stderr F I0120 10:53:19.554609 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2026-01-20T10:53:19.554701839+00:00 stderr F I0120 10:53:19.554674 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0006a9080 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] 
SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2026-01-20T10:53:19.560139059+00:00 stderr F I0120 10:53:19.560093 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2026-01-20T10:53:19.560139059+00:00 stderr F I0120 10:53:19.560110 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2026-01-20T10:53:19.560139059+00:00 stderr F I0120 10:53:19.560116 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2026-01-20T10:53:19.564788900+00:00 stderr F I0120 10:53:19.563578 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2026-01-20T10:53:19.564788900+00:00 stderr F I0120 10:53:19.563600 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2026-01-20T10:53:19.564788900+00:00 stderr F I0120 10:53:19.563606 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2026-01-20T10:53:19.564788900+00:00 stderr F I0120 10:53:19.563613 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2026-01-20T10:53:19.564788900+00:00 stderr F I0120 10:53:19.563646 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2026-01-20T10:53:19.937080825+00:00 stderr F I0120 10:53:19.936411 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle 2026-01-20T10:53:19.941899399+00:00 stderr F I0120 10:53:19.941734 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2026-01-20T10:53:19.944320622+00:00 stderr F I0120 10:53:19.943225 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 
2026-01-20T10:53:19.956120527+00:00 stderr F I0120 10:53:19.954766 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2026-01-20T10:53:19.956120527+00:00 stderr F I0120 10:53:19.954798 1 log.go:245] Starting render phase 2026-01-20T10:53:19.982667732+00:00 stderr F I0120 10:53:19.982598 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2026-01-20T10:53:20.016874425+00:00 stderr F I0120 10:53:20.015292 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2026-01-20T10:53:20.016874425+00:00 stderr F I0120 10:53:20.015315 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2026-01-20T10:53:20.016874425+00:00 stderr F I0120 10:53:20.015340 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2026-01-20T10:53:20.016874425+00:00 stderr F I0120 10:53:20.015370 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2026-01-20T10:53:20.133748214+00:00 stderr F I0120 10:53:20.133670 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle 2026-01-20T10:53:20.137928502+00:00 stderr F I0120 10:53:20.137823 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:20.142174761+00:00 stderr F I0120 10:53:20.142134 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2026-01-20T10:53:20.142174761+00:00 stderr F I0120 10:53:20.142164 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2026-01-20T10:53:20.336047488+00:00 stderr F I0120 10:53:20.335969 1 log.go:245] Reconciling configmap from 
openshift-kube-controller-manager/trusted-ca-bundle 2026-01-20T10:53:20.340813272+00:00 stderr F I0120 10:53:20.340764 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:20.346750475+00:00 stderr F I0120 10:53:20.345784 1 log.go:245] Render phase done, rendered 112 objects 2026-01-20T10:53:20.534563915+00:00 stderr F I0120 10:53:20.533931 1 log.go:245] Reconciling configmap from openshift-machine-api/mao-trusted-ca 2026-01-20T10:53:20.542562222+00:00 stderr F I0120 10:53:20.542457 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:20.934871684+00:00 stderr F I0120 10:53:20.934314 1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca 2026-01-20T10:53:20.934871684+00:00 stderr F I0120 10:53:20.934619 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2026-01-20T10:53:20.937974514+00:00 stderr F I0120 10:53:20.936493 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:20.940856608+00:00 stderr F I0120 10:53:20.940813 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2026-01-20T10:53:20.940877409+00:00 stderr F I0120 10:53:20.940857 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2026-01-20T10:53:20.948387453+00:00 stderr F I0120 10:53:20.947664 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2026-01-20T10:53:20.948387453+00:00 stderr F I0120 10:53:20.947715 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2026-01-20T10:53:20.956133884+00:00 stderr F I0120 10:53:20.955234 1 log.go:245] 
Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2026-01-20T10:53:20.956133884+00:00 stderr F I0120 10:53:20.955319 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2026-01-20T10:53:20.963692538+00:00 stderr F I0120 10:53:20.963632 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2026-01-20T10:53:20.963717509+00:00 stderr F I0120 10:53:20.963705 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2026-01-20T10:53:20.968594715+00:00 stderr F I0120 10:53:20.968540 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2026-01-20T10:53:20.968613635+00:00 stderr F I0120 10:53:20.968597 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2026-01-20T10:53:20.979869506+00:00 stderr F I0120 10:53:20.979802 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2026-01-20T10:53:20.979926027+00:00 stderr F I0120 10:53:20.979891 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2026-01-20T10:53:20.987605926+00:00 stderr F I0120 10:53:20.987545 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2026-01-20T10:53:20.987623386+00:00 stderr F I0120 10:53:20.987601 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2026-01-20T10:53:20.993841557+00:00 stderr F I0120 10:53:20.993784 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2026-01-20T10:53:20.993865807+00:00 stderr F I0120 10:53:20.993856 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 
2026-01-20T10:53:20.998334603+00:00 stderr F I0120 10:53:20.998283 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2026-01-20T10:53:20.998353024+00:00 stderr F I0120 10:53:20.998346 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2026-01-20T10:53:21.002034458+00:00 stderr F I0120 10:53:21.002011 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2026-01-20T10:53:21.002126081+00:00 stderr F I0120 10:53:21.002114 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2026-01-20T10:53:21.140893175+00:00 stderr F I0120 10:53:21.140813 1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle 2026-01-20T10:53:21.145617146+00:00 stderr F I0120 10:53:21.145192 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2026-01-20T10:53:21.145617146+00:00 stderr F I0120 10:53:21.145260 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2026-01-20T10:53:21.147360792+00:00 stderr F I0120 10:53:21.146403 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.337626726+00:00 stderr F I0120 10:53:21.337561 1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca 2026-01-20T10:53:21.341227629+00:00 stderr F I0120 10:53:21.340469 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2026-01-20T10:53:21.341227629+00:00 stderr F I0120 10:53:21.340520 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2026-01-20T10:53:21.342144963+00:00 stderr F I0120 10:53:21.342065 1 log.go:245] 
ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.537924188+00:00 stderr F I0120 10:53:21.537845 1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca 2026-01-20T10:53:21.540722911+00:00 stderr F I0120 10:53:21.540665 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.543037790+00:00 stderr F I0120 10:53:21.542967 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2026-01-20T10:53:21.543037790+00:00 stderr F I0120 10:53:21.543014 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2026-01-20T10:53:21.739298729+00:00 stderr F I0120 10:53:21.739240 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2026-01-20T10:53:21.739358431+00:00 stderr F I0120 10:53:21.739300 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2026-01-20T10:53:21.740323966+00:00 stderr F I0120 10:53:21.740266 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2026-01-20T10:53:21.745466668+00:00 stderr F I0120 10:53:21.745415 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2026-01-20T10:53:21.745514969+00:00 stderr F I0120 10:53:21.745479 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745554982+00:00 stderr F I0120 10:53:21.745532 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745597633+00:00 stderr F I0120 10:53:21.745577 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745642164+00:00 stderr F I0120 10:53:21.745621 1 
log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745677155+00:00 stderr F I0120 10:53:21.745653 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745715596+00:00 stderr F I0120 10:53:21.745687 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745754357+00:00 stderr F I0120 10:53:21.745732 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745797738+00:00 stderr F I0120 10:53:21.745775 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745838339+00:00 stderr F I0120 10:53:21.745818 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745884780+00:00 stderr F I0120 10:53:21.745859 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745921571+00:00 stderr F I0120 10:53:21.745901 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.745964592+00:00 stderr F I0120 10:53:21.745944 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:53:21.945161636+00:00 stderr F I0120 10:53:21.945098 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2026-01-20T10:53:21.945199507+00:00 stderr F I0120 10:53:21.945165 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2026-01-20T10:53:22.141016635+00:00 stderr F I0120 10:53:22.140949 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 
2026-01-20T10:53:22.141016635+00:00 stderr F I0120 10:53:22.141001 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2026-01-20T10:53:22.340456355+00:00 stderr F I0120 10:53:22.340380 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2026-01-20T10:53:22.340456355+00:00 stderr F I0120 10:53:22.340437 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2026-01-20T10:53:22.540100771+00:00 stderr F I0120 10:53:22.540041 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2026-01-20T10:53:22.540172423+00:00 stderr F I0120 10:53:22.540107 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2026-01-20T10:53:22.739800458+00:00 stderr F I0120 10:53:22.739742 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2026-01-20T10:53:22.739862640+00:00 stderr F I0120 10:53:22.739808 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2026-01-20T10:53:22.945160093+00:00 stderr F I0120 10:53:22.944642 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2026-01-20T10:53:22.945160093+00:00 stderr F I0120 10:53:22.945152 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2026-01-20T10:53:23.141208326+00:00 stderr F I0120 10:53:23.141135 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2026-01-20T10:53:23.141208326+00:00 stderr F I0120 10:53:23.141198 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2026-01-20T10:53:23.351918948+00:00 stderr F I0120 10:53:23.351704 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2026-01-20T10:53:23.351918948+00:00 stderr F I0120 10:53:23.351755 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2026-01-20T10:53:23.558816671+00:00 stderr F I0120 10:53:23.558727 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2026-01-20T10:53:23.558879393+00:00 stderr F I0120 10:53:23.558825 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2026-01-20T10:53:23.743367328+00:00 stderr F I0120 10:53:23.743283 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2026-01-20T10:53:23.743367328+00:00 stderr F I0120 10:53:23.743350 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2026-01-20T10:53:23.942252044+00:00 stderr F I0120 10:53:23.942182 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2026-01-20T10:53:23.942309986+00:00 stderr F I0120 10:53:23.942246 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2026-01-20T10:53:24.142643839+00:00 stderr F I0120 10:53:24.142556 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2026-01-20T10:53:24.142643839+00:00 stderr F I0120 10:53:24.142631 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2026-01-20T10:53:24.348622949+00:00 stderr F I0120 10:53:24.348108 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2026-01-20T10:53:24.348622949+00:00 stderr F I0120 10:53:24.348169 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2026-01-20T10:53:24.546002996+00:00 stderr F I0120 10:53:24.545904 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2026-01-20T10:53:24.546002996+00:00 stderr F I0120 10:53:24.545964 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2026-01-20T10:53:24.741142157+00:00 stderr F I0120 10:53:24.741025 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2026-01-20T10:53:24.741142157+00:00 stderr F I0120 10:53:24.741107 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2026-01-20T10:53:24.940208848+00:00 stderr F I0120 10:53:24.940127 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2026-01-20T10:53:24.940208848+00:00 stderr F I0120 10:53:24.940186 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2026-01-20T10:53:25.141246690+00:00 stderr F I0120 10:53:25.141172 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2026-01-20T10:53:25.141246690+00:00 stderr F I0120 10:53:25.141227 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2026-01-20T10:53:25.354248171+00:00 stderr F I0120 10:53:25.352818 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2026-01-20T10:53:25.354248171+00:00 stderr F I0120 10:53:25.352878 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2026-01-20T10:53:25.541829836+00:00 stderr F I0120 10:53:25.541732 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2026-01-20T10:53:25.541829836+00:00 stderr F I0120 10:53:25.541792 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2026-01-20T10:53:25.741820831+00:00 stderr F I0120 10:53:25.741760 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2026-01-20T10:53:25.741878542+00:00 stderr F I0120 10:53:25.741823 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2026-01-20T10:53:25.943150730+00:00 stderr F I0120 10:53:25.942946 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2026-01-20T10:53:25.943150730+00:00 stderr F I0120 10:53:25.943006 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2026-01-20T10:53:26.147834247+00:00 stderr F I0120 10:53:26.147734 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2026-01-20T10:53:26.147834247+00:00 stderr F I0120 10:53:26.147811 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2026-01-20T10:53:26.346033946+00:00 stderr F I0120 10:53:26.345959 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2026-01-20T10:53:26.346033946+00:00 stderr F I0120 10:53:26.346021 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2026-01-20T10:53:26.371021231+00:00 stderr F I0120 10:53:26.370938 1 allowlist_controller.go:146] Successfully updated sysctl allowlist
2026-01-20T10:53:26.547469318+00:00 stderr F I0120 10:53:26.544437 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2026-01-20T10:53:26.547469318+00:00 stderr F I0120 10:53:26.544514 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2026-01-20T10:53:26.740607946+00:00 stderr F I0120 10:53:26.740540 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2026-01-20T10:53:26.740607946+00:00 stderr F I0120 10:53:26.740592 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2026-01-20T10:53:26.947236832+00:00 stderr F I0120 10:53:26.945420 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2026-01-20T10:53:26.947236832+00:00 stderr F I0120 10:53:26.945474 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2026-01-20T10:53:27.144212580+00:00 stderr F I0120 10:53:27.144135 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2026-01-20T10:53:27.144212580+00:00 stderr F I0120 10:53:27.144198 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2026-01-20T10:53:27.343818485+00:00 stderr F I0120 10:53:27.343742 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2026-01-20T10:53:27.343818485+00:00 stderr F I0120 10:53:27.343807 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2026-01-20T10:53:27.547796193+00:00 stderr F I0120 10:53:27.547724 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2026-01-20T10:53:27.547796193+00:00 stderr F I0120 10:53:27.547775 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2026-01-20T10:53:27.754604204+00:00 stderr F I0120 10:53:27.754495 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2026-01-20T10:53:27.754604204+00:00 stderr F I0120 10:53:27.754550 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2026-01-20T10:53:27.948475701+00:00 stderr F I0120 10:53:27.947752 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2026-01-20T10:53:27.948475701+00:00 stderr F I0120 10:53:27.947823 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2026-01-20T10:53:28.156379490+00:00 stderr F I0120 10:53:28.156332 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2026-01-20T10:53:28.156493933+00:00 stderr F I0120 10:53:28.156483 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2026-01-20T10:53:28.347122416+00:00 stderr F I0120 10:53:28.344219 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2026-01-20T10:53:28.347122416+00:00 stderr F I0120 10:53:28.344277 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2026-01-20T10:53:28.581220493+00:00 stderr F I0120 10:53:28.579858 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2026-01-20T10:53:28.581220493+00:00 stderr F I0120 10:53:28.579913 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2026-01-20T10:53:28.781170016+00:00 stderr F I0120 10:53:28.780646 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2026-01-20T10:53:28.781170016+00:00 stderr F I0120 10:53:28.780699 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2026-01-20T10:53:28.942335159+00:00 stderr F I0120 10:53:28.942247 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2026-01-20T10:53:28.942335159+00:00 stderr F I0120 10:53:28.942296 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2026-01-20T10:53:29.140411754+00:00 stderr F I0120 10:53:29.140308 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2026-01-20T10:53:29.140411754+00:00 stderr F I0120 10:53:29.140363 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2026-01-20T10:53:29.342330649+00:00 stderr F I0120 10:53:29.342258 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2026-01-20T10:53:29.342393700+00:00 stderr F I0120 10:53:29.342333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2026-01-20T10:53:29.545458645+00:00 stderr F I0120 10:53:29.545391 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2026-01-20T10:53:29.545458645+00:00 stderr F I0120 10:53:29.545452 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2026-01-20T10:53:29.741327314+00:00 stderr F I0120 10:53:29.740776 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2026-01-20T10:53:29.741327314+00:00 stderr F I0120 10:53:29.741318 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2026-01-20T10:53:29.941186815+00:00 stderr F I0120 10:53:29.941108 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2026-01-20T10:53:29.941246076+00:00 stderr F I0120 10:53:29.941178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2026-01-20T10:53:30.140952564+00:00 stderr F I0120 10:53:30.140879 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2026-01-20T10:53:30.140952564+00:00 stderr F I0120 10:53:30.140943 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2026-01-20T10:53:30.345161338+00:00 stderr F I0120 10:53:30.341221 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2026-01-20T10:53:30.345161338+00:00 stderr F I0120 10:53:30.341270 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2026-01-20T10:53:30.541734994+00:00 stderr F I0120 10:53:30.541628 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2026-01-20T10:53:30.541734994+00:00 stderr F I0120 10:53:30.541709 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:53:30.744538253+00:00 stderr F I0120 10:53:30.743810 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:53:30.744538253+00:00 stderr F I0120 10:53:30.744496 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:53:30.942466153+00:00 stderr F I0120 10:53:30.942364 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:53:30.942466153+00:00 stderr F I0120 10:53:30.942414 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:53:31.142645284+00:00 stderr F I0120 10:53:31.142522 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:53:31.142645284+00:00 stderr F I0120 10:53:31.142587 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:53:31.341379626+00:00 stderr F I0120 10:53:31.341292 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:53:31.341379626+00:00 stderr F I0120 10:53:31.341338 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2026-01-20T10:53:31.542891661+00:00 stderr F I0120 10:53:31.542803 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2026-01-20T10:53:31.542891661+00:00 stderr F I0120 10:53:31.542856 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2026-01-20T10:53:31.742652920+00:00 stderr F I0120 10:53:31.742520 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2026-01-20T10:53:31.742652920+00:00 stderr F I0120 10:53:31.742603 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2026-01-20T10:53:31.944673627+00:00 stderr F I0120 10:53:31.944595 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2026-01-20T10:53:31.944673627+00:00 stderr F I0120 10:53:31.944649 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2026-01-20T10:53:32.141245004+00:00 stderr F I0120 10:53:32.141146 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2026-01-20T10:53:32.141245004+00:00 stderr F I0120 10:53:32.141226 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2026-01-20T10:53:32.342762608+00:00 stderr F I0120 10:53:32.342675 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2026-01-20T10:53:32.342762608+00:00 stderr F I0120 10:53:32.342729 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2026-01-20T10:53:32.544610591+00:00 stderr F I0120 10:53:32.544497 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2026-01-20T10:53:32.544610591+00:00 stderr F I0120 10:53:32.544555 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2026-01-20T10:53:32.745167401+00:00 stderr F I0120 10:53:32.745026 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2026-01-20T10:53:32.745252463+00:00 stderr F I0120 10:53:32.745186 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2026-01-20T10:53:32.942563419+00:00 stderr F I0120 10:53:32.942421 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2026-01-20T10:53:32.942563419+00:00 stderr F I0120 10:53:32.942479 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2026-01-20T10:53:33.143618582+00:00 stderr F I0120 10:53:33.143537 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2026-01-20T10:53:33.143618582+00:00 stderr F I0120 10:53:33.143589 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2026-01-20T10:53:33.348053111+00:00 stderr F I0120 10:53:33.347919 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2026-01-20T10:53:33.348053111+00:00 stderr F I0120 10:53:33.347993 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2026-01-20T10:53:33.543625173+00:00 stderr F I0120 10:53:33.543506 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2026-01-20T10:53:33.543625173+00:00 stderr F I0120 10:53:33.543564 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2026-01-20T10:53:33.742143610+00:00 stderr F I0120 10:53:33.741941 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2026-01-20T10:53:33.742143610+00:00 stderr F I0120 10:53:33.742000 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2026-01-20T10:53:33.940089552+00:00 stderr F I0120 10:53:33.939974 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2026-01-20T10:53:33.940089552+00:00 stderr F I0120 10:53:33.940032 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2026-01-20T10:53:34.140423946+00:00 stderr F I0120 10:53:34.140296 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2026-01-20T10:53:34.140423946+00:00 stderr F I0120 10:53:34.140354 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2026-01-20T10:53:34.342918155+00:00 stderr F I0120 10:53:34.342786 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2026-01-20T10:53:34.342918155+00:00 stderr F I0120 10:53:34.342866 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2026-01-20T10:53:34.542661304+00:00 stderr F I0120 10:53:34.542570 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2026-01-20T10:53:34.542661304+00:00 stderr F I0120 10:53:34.542629 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2026-01-20T10:53:34.753852538+00:00 stderr F I0120 10:53:34.753755 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2026-01-20T10:53:34.753852538+00:00 stderr F I0120 10:53:34.753821 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2026-01-20T10:53:34.983924000+00:00 stderr F I0120 10:53:34.983837 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2026-01-20T10:53:34.983924000+00:00 stderr F I0120 10:53:34.983891 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2026-01-20T10:53:35.143214214+00:00 stderr F I0120 10:53:35.143117 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2026-01-20T10:53:35.143214214+00:00 stderr F I0120 10:53:35.143184 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:53:35.342156392+00:00 stderr F I0120 10:53:35.342080 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:53:35.342156392+00:00 stderr F I0120 10:53:35.342136 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:53:35.446375014+00:00 stderr F I0120 10:53:35.446231 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.456433734+00:00 stderr F I0120 10:53:35.456355 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.465470557+00:00 stderr F I0120 10:53:35.465386 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.476635045+00:00 stderr F I0120 10:53:35.476563 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.483855242+00:00 stderr F I0120 10:53:35.483792 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.493415409+00:00 stderr F I0120 10:53:35.493348 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.500452271+00:00 stderr F I0120 10:53:35.500371 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.506258931+00:00 stderr F I0120 10:53:35.506180 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.512826450+00:00 stderr F I0120 10:53:35.512760 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:35.542020714+00:00 stderr F I0120 10:53:35.541824 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:53:35.542020714+00:00 stderr F I0120 10:53:35.541893 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:53:35.741930447+00:00 stderr F I0120 10:53:35.741864 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:53:35.742053780+00:00 stderr F I0120 10:53:35.742038 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2026-01-20T10:53:35.943322919+00:00 stderr F I0120 10:53:35.943165 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2026-01-20T10:53:35.943322919+00:00 stderr F I0120 10:53:35.943216 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2026-01-20T10:53:36.140547442+00:00 stderr F I0120 10:53:36.140426 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2026-01-20T10:53:36.140547442+00:00 stderr F I0120 10:53:36.140485 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2026-01-20T10:53:36.339512270+00:00 stderr F I0120 10:53:36.339462 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2026-01-20T10:53:36.339606943+00:00 stderr F I0120 10:53:36.339596 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2026-01-20T10:53:36.351434758+00:00 stderr F I0120 10:53:36.351339 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2026-01-20T10:53:36.360538164+00:00 stderr F I0120 10:53:36.360458 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:36.366366374+00:00 stderr F I0120 10:53:36.366298 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:36.374183125+00:00 stderr F I0120 10:53:36.374117 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:36.380726025+00:00 stderr F I0120 10:53:36.380644 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:36.446954365+00:00 stderr F I0120 10:53:36.446383 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2026-01-20T10:53:36.549129394+00:00 stderr F I0120 10:53:36.549017 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2026-01-20T10:53:36.549198726+00:00 stderr F I0120 10:53:36.549123 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2026-01-20T10:53:36.649757823+00:00 stderr F I0120 10:53:36.649619 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2026-01-20T10:53:36.743598046+00:00 stderr F I0120 10:53:36.743499 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2026-01-20T10:53:36.743598046+00:00 stderr F I0120 10:53:36.743552 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2026-01-20T10:53:36.847864190+00:00 stderr F I0120 10:53:36.847785 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:36.942871383+00:00 stderr F I0120 10:53:36.942797 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2026-01-20T10:53:36.942871383+00:00 stderr F I0120 10:53:36.942861 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2026-01-20T10:53:37.045823502+00:00 stderr F I0120 10:53:37.045737 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:37.141325928+00:00 stderr F I0120 10:53:37.141230 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2026-01-20T10:53:37.141325928+00:00 stderr F I0120 10:53:37.141310 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2026-01-20T10:53:37.345394649+00:00 stderr F I0120 10:53:37.345270 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2026-01-20T10:53:37.345394649+00:00 stderr F I0120 10:53:37.345331 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2026-01-20T10:53:37.361118605+00:00 stderr F I0120 10:53:37.361026 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2026-01-20T10:53:37.447278980+00:00 stderr F I0120 10:53:37.447168 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:37.549807148+00:00 stderr F I0120 10:53:37.549622 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2026-01-20T10:53:37.549807148+00:00 stderr F I0120 10:53:37.549674 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2026-01-20T10:53:37.645736875+00:00 stderr F I0120 10:53:37.645635 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:37.741222861+00:00 stderr F I0120 10:53:37.741100 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2026-01-20T10:53:37.741222861+00:00 stderr F I0120 10:53:37.741165 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2026-01-20T10:53:37.873309313+00:00 stderr F I0120 10:53:37.872132 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:37.942542591+00:00 stderr F I0120 10:53:37.942290 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2026-01-20T10:53:37.942542591+00:00 stderr F I0120 10:53:37.942367 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2026-01-20T10:53:38.048937598+00:00 stderr F I0120 10:53:38.048792 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:38.143802648+00:00 stderr F I0120 10:53:38.143725 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2026-01-20T10:53:38.143802648+00:00 stderr F I0120 10:53:38.143791 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2026-01-20T10:53:38.250182656+00:00 stderr F I0120 10:53:38.250119 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2026-01-20T10:53:38.342648374+00:00 stderr F I0120 10:53:38.342562 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2026-01-20T10:53:38.342648374+00:00 stderr F I0120 10:53:38.342638 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2026-01-20T10:53:38.445365427+00:00 stderr F I0120 10:53:38.445258 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2026-01-20T10:53:38.542492805+00:00 stderr F I0120 10:53:38.542412 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2026-01-20T10:53:38.542492805+00:00 stderr F I0120 10:53:38.542482 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2026-01-20T10:53:38.644927610+00:00 stderr F I0120 10:53:38.644859 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:38.741321619+00:00 stderr F I0120 10:53:38.741235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2026-01-20T10:53:38.741321619+00:00 stderr F I0120 10:53:38.741303 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2026-01-20T10:53:38.844610029+00:00 stderr F I0120 10:53:38.844551 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:38.941593245+00:00 stderr F I0120 10:53:38.941534 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2026-01-20T10:53:38.941662617+00:00 stderr F I0120 10:53:38.941620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2026-01-20T10:53:39.048316670+00:00 stderr F I0120 10:53:39.048248 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2026-01-20T10:53:39.141121663+00:00 stderr F I0120 10:53:39.141039 1 log.go:245] Apply
/ Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2026-01-20T10:53:39.141121663+00:00 stderr F I0120 10:53:39.141103 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2026-01-20T10:53:39.246187065+00:00 stderr F I0120 10:53:39.246092 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:53:39.341092235+00:00 stderr F I0120 10:53:39.340982 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2026-01-20T10:53:39.341155337+00:00 stderr F I0120 10:53:39.341057 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2026-01-20T10:53:39.448204860+00:00 stderr F I0120 10:53:39.448122 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:53:39.541205609+00:00 stderr F I0120 10:53:39.541133 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2026-01-20T10:53:39.541205609+00:00 stderr F I0120 10:53:39.541198 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2026-01-20T10:53:39.650412860+00:00 stderr F I0120 10:53:39.650343 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:53:39.741640623+00:00 stderr F I0120 10:53:39.741587 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-network-node-identity/ovnkube-identity-cm was successful 2026-01-20T10:53:39.741748075+00:00 stderr F I0120 10:53:39.741734 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2026-01-20T10:53:39.847926015+00:00 stderr F I0120 10:53:39.847834 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:53:39.942914897+00:00 stderr F I0120 10:53:39.942822 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2026-01-20T10:53:39.942914897+00:00 stderr F I0120 10:53:39.942894 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2026-01-20T10:53:40.048746329+00:00 stderr F I0120 10:53:40.048635 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:53:40.141807719+00:00 stderr F I0120 10:53:40.141730 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2026-01-20T10:53:40.141844220+00:00 stderr F I0120 10:53:40.141805 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2026-01-20T10:53:40.249661405+00:00 stderr F I0120 10:53:40.249553 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:53:40.351988732+00:00 stderr F I0120 10:53:40.351876 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2026-01-20T10:53:40.351988732+00:00 stderr F I0120 
10:53:40.351950 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2026-01-20T10:53:40.447200960+00:00 stderr F I0120 10:53:40.447122 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:53:40.544303909+00:00 stderr F I0120 10:53:40.544228 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2026-01-20T10:53:40.544454683+00:00 stderr F I0120 10:53:40.544431 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2026-01-20T10:53:40.645557868+00:00 stderr F I0120 10:53:40.645482 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:53:40.740109809+00:00 stderr F I0120 10:53:40.740016 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2026-01-20T10:53:40.740109809+00:00 stderr F I0120 10:53:40.740100 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2026-01-20T10:53:40.847893411+00:00 stderr F I0120 10:53:40.847804 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:53:40.940306525+00:00 stderr F I0120 10:53:40.940239 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2026-01-20T10:53:40.940360376+00:00 stderr F I0120 10:53:40.940303 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2026-01-20T10:53:41.047009329+00:00 stderr F I0120 
10:53:41.046937 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:53:41.142318521+00:00 stderr F I0120 10:53:41.142249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2026-01-20T10:53:41.142318521+00:00 stderr F I0120 10:53:41.142313 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2026-01-20T10:53:41.248248704+00:00 stderr F I0120 10:53:41.248121 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:53:41.341769597+00:00 stderr F I0120 10:53:41.341704 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2026-01-20T10:53:41.341832869+00:00 stderr F I0120 10:53:41.341768 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2026-01-20T10:53:41.447961838+00:00 stderr F I0120 10:53:41.447858 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:53:41.544281386+00:00 stderr F I0120 10:53:41.544210 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2026-01-20T10:53:41.560056236+00:00 stderr F I0120 10:53:41.560011 1 log.go:245] Operconfig Controller complete 2026-01-20T10:53:41.560128028+00:00 stderr F I0120 10:53:41.560095 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2026-01-20T10:53:41.656933698+00:00 stderr F I0120 10:53:41.653725 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 
2026-01-20T10:53:41.846148183+00:00 stderr F I0120 10:53:41.845988 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2026-01-20T10:53:41.857455294+00:00 stderr F I0120 10:53:41.857357 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2026-01-20T10:53:41.860330511+00:00 stderr F I0120 10:53:41.859766 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2026-01-20T10:53:41.862532200+00:00 stderr F I0120 10:53:41.862487 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2026-01-20T10:53:41.862556020+00:00 stderr F I0120 10:53:41.862518 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc004691e00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2026-01-20T10:53:41.867319516+00:00 stderr F I0120 10:53:41.867266 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2026-01-20T10:53:41.867319516+00:00 stderr F I0120 10:53:41.867297 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2026-01-20T10:53:41.867319516+00:00 stderr F I0120 10:53:41.867309 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2026-01-20T10:53:41.870103031+00:00 stderr F I0120 10:53:41.870030 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2026-01-20T10:53:41.870149512+00:00 stderr F I0120 10:53:41.870057 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2026-01-20T10:53:41.870149512+00:00 stderr F I0120 10:53:41.870135 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2026-01-20T10:53:41.870149512+00:00 stderr F I0120 10:53:41.870143 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2026-01-20T10:53:41.870210474+00:00 stderr F I0120 10:53:41.870175 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2026-01-20T10:53:41.882341488+00:00 stderr F I0120 10:53:41.882280 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2026-01-20T10:53:41.901733665+00:00 stderr F I0120 10:53:41.901652 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2026-01-20T10:53:41.901733665+00:00 stderr F I0120 10:53:41.901699 1 log.go:245] Starting render phase
2026-01-20T10:53:41.922606701+00:00 stderr F I0120 10:53:41.922545 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2026-01-20T10:53:41.957930753+00:00 stderr F I0120 10:53:41.957506 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2026-01-20T10:53:41.957930753+00:00 stderr F I0120 10:53:41.957534 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2026-01-20T10:53:41.957930753+00:00 stderr F I0120 10:53:41.957562 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2026-01-20T10:53:41.957930753+00:00 stderr F I0120 10:53:41.957595 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2026-01-20T10:53:41.968528084+00:00 stderr F I0120 10:53:41.968452 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2026-01-20T10:53:41.968528084+00:00 stderr F I0120 10:53:41.968484 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2026-01-20T10:53:41.976564839+00:00 stderr F I0120 10:53:41.976485 1 log.go:245] Render phase done, rendered 112 objects
2026-01-20T10:53:41.990325336+00:00 stderr F I0120 10:53:41.990210 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2026-01-20T10:53:41.999110560+00:00 stderr F I0120 10:53:41.999002 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2026-01-20T10:53:41.999110560+00:00 stderr F I0120 10:53:41.999098 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2026-01-20T10:53:42.046900504+00:00 stderr F I0120 10:53:42.046814 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2026-01-20T10:53:42.144213048+00:00 stderr F I0120 10:53:42.144136 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2026-01-20T10:53:42.144277400+00:00 stderr F I0120 10:53:42.144207 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2026-01-20T10:53:42.249588627+00:00 stderr F I0120 10:53:42.249511 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:53:42.343783868+00:00 stderr F I0120 10:53:42.343712 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2026-01-20T10:53:42.343783868+00:00 stderr F I0120 10:53:42.343765 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2026-01-20T10:53:42.445910371+00:00 stderr F I0120 10:53:42.445813 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:53:42.545586477+00:00 stderr F I0120 10:53:42.545473 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2026-01-20T10:53:42.545586477+00:00 stderr F I0120 10:53:42.545548 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2026-01-20T10:53:42.740732150+00:00 stderr F I0120 10:53:42.740628 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2026-01-20T10:53:42.740732150+00:00 stderr F I0120 10:53:42.740687 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2026-01-20T10:53:42.941374419+00:00 stderr F I0120 10:53:42.941284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2026-01-20T10:53:42.941374419+00:00 stderr F I0120 10:53:42.941339 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2026-01-20T10:53:43.139968173+00:00 stderr F I0120 10:53:43.139879 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2026-01-20T10:53:43.139968173+00:00 stderr F I0120 10:53:43.139933 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2026-01-20T10:53:43.341960647+00:00 stderr F I0120 10:53:43.341881 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2026-01-20T10:53:43.341960647+00:00 stderr F I0120 10:53:43.341939 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2026-01-20T10:53:43.539743209+00:00 stderr F I0120 10:53:43.539654 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2026-01-20T10:53:43.539743209+00:00 stderr F I0120 10:53:43.539711 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2026-01-20T10:53:43.743103520+00:00 stderr F I0120 10:53:43.742369 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2026-01-20T10:53:43.743103520+00:00 stderr F I0120 10:53:43.742423 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2026-01-20T10:53:43.943145572+00:00 stderr F I0120 10:53:43.943004 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2026-01-20T10:53:43.943145572+00:00 stderr F I0120 10:53:43.943117 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2026-01-20T10:53:44.142548308+00:00 stderr F I0120 10:53:44.141349 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2026-01-20T10:53:44.142548308+00:00 stderr F I0120 10:53:44.141445 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2026-01-20T10:53:44.340557557+00:00 stderr F I0120 10:53:44.340451 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2026-01-20T10:53:44.340557557+00:00 stderr F I0120 10:53:44.340509 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2026-01-20T10:53:44.542114980+00:00 stderr F I0120 10:53:44.542020 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2026-01-20T10:53:44.542162681+00:00 stderr F I0120 10:53:44.542135 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2026-01-20T10:53:44.740639692+00:00 stderr F I0120 10:53:44.740521 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2026-01-20T10:53:44.740639692+00:00 stderr F I0120 10:53:44.740577 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2026-01-20T10:53:44.942779480+00:00 stderr F I0120 10:53:44.942713 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2026-01-20T10:53:44.942779480+00:00 stderr F I0120 10:53:44.942768 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2026-01-20T10:53:45.141368994+00:00 stderr F I0120 10:53:45.141268 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2026-01-20T10:53:45.141368994+00:00 stderr F I0120 10:53:45.141335 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2026-01-20T10:53:45.341038547+00:00 stderr F I0120 10:53:45.340956 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2026-01-20T10:53:45.341128859+00:00 stderr F I0120 10:53:45.341036 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2026-01-20T10:53:45.548672341+00:00 stderr F I0120 10:53:45.548611 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2026-01-20T10:53:45.548672341+00:00 stderr F I0120 10:53:45.548666 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2026-01-20T10:53:45.741715027+00:00 stderr F I0120 10:53:45.741651 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2026-01-20T10:53:45.741766229+00:00 stderr F I0120 10:53:45.741710 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2026-01-20T10:53:45.942145381+00:00 stderr F I0120 10:53:45.941639 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2026-01-20T10:53:45.942145381+00:00 stderr F I0120 10:53:45.942127 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2026-01-20T10:53:46.153890525+00:00 stderr F I0120 10:53:46.153799 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2026-01-20T10:53:46.153969987+00:00 stderr F I0120 10:53:46.153891 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2026-01-20T10:53:46.352672364+00:00 stderr F I0120 10:53:46.352561 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was 
successful 2026-01-20T10:53:46.352672364+00:00 stderr F I0120 10:53:46.352609 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2026-01-20T10:53:46.542023702+00:00 stderr F I0120 10:53:46.541895 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2026-01-20T10:53:46.542023702+00:00 stderr F I0120 10:53:46.541981 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2026-01-20T10:53:46.743186215+00:00 stderr F I0120 10:53:46.743056 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2026-01-20T10:53:46.743186215+00:00 stderr F I0120 10:53:46.743158 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2026-01-20T10:53:46.941855521+00:00 stderr F I0120 10:53:46.941781 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2026-01-20T10:53:46.941855521+00:00 stderr F I0120 10:53:46.941839 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2026-01-20T10:53:47.144828841+00:00 stderr F I0120 10:53:47.144736 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2026-01-20T10:53:47.144828841+00:00 stderr F I0120 10:53:47.144799 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2026-01-20T10:53:47.344140654+00:00 stderr F I0120 10:53:47.344007 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2026-01-20T10:53:47.344189945+00:00 stderr F I0120 10:53:47.344141 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2026-01-20T10:53:47.548277726+00:00 stderr F I0120 10:53:47.548180 
1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2026-01-20T10:53:47.548277726+00:00 stderr F I0120 10:53:47.548251 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2026-01-20T10:53:47.743196263+00:00 stderr F I0120 10:53:47.743047 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2026-01-20T10:53:47.743196263+00:00 stderr F I0120 10:53:47.743149 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2026-01-20T10:53:47.942719881+00:00 stderr F I0120 10:53:47.942631 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2026-01-20T10:53:47.942719881+00:00 stderr F I0120 10:53:47.942705 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2026-01-20T10:53:48.145734242+00:00 stderr F I0120 10:53:48.144795 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2026-01-20T10:53:48.145734242+00:00 stderr F I0120 10:53:48.144851 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2026-01-20T10:53:48.344705397+00:00 stderr F I0120 10:53:48.344622 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2026-01-20T10:53:48.344705397+00:00 stderr F I0120 10:53:48.344670 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2026-01-20T10:53:48.541404330+00:00 stderr F I0120 10:53:48.541299 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2026-01-20T10:53:48.541404330+00:00 stderr F I0120 10:53:48.541375 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2026-01-20T10:53:48.743353622+00:00 stderr F I0120 10:53:48.743258 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2026-01-20T10:53:48.743353622+00:00 stderr F I0120 10:53:48.743336 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2026-01-20T10:53:48.944385581+00:00 stderr F I0120 10:53:48.944252 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2026-01-20T10:53:48.944385581+00:00 stderr F I0120 10:53:48.944334 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2026-01-20T10:53:49.151928803+00:00 stderr F I0120 10:53:49.151832 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2026-01-20T10:53:49.152006475+00:00 stderr F I0120 10:53:49.151926 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2026-01-20T10:53:49.345608146+00:00 stderr F I0120 10:53:49.345039 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2026-01-20T10:53:49.345608146+00:00 stderr F I0120 10:53:49.345144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2026-01-20T10:53:49.542092563+00:00 stderr F I0120 10:53:49.541943 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2026-01-20T10:53:49.542092563+00:00 stderr F I0120 10:53:49.542006 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2026-01-20T10:53:49.742292300+00:00 stderr F I0120 
10:53:49.742194 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2026-01-20T10:53:49.742292300+00:00 stderr F I0120 10:53:49.742279 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2026-01-20T10:53:49.944617003+00:00 stderr F I0120 10:53:49.944497 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2026-01-20T10:53:49.944706115+00:00 stderr F I0120 10:53:49.944631 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2026-01-20T10:53:50.145376784+00:00 stderr F I0120 10:53:50.145277 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2026-01-20T10:53:50.145376784+00:00 stderr F I0120 10:53:50.145357 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2026-01-20T10:53:50.354422527+00:00 stderr F I0120 10:53:50.354320 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2026-01-20T10:53:50.354422527+00:00 stderr F I0120 10:53:50.354392 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2026-01-20T10:53:50.546381374+00:00 stderr F I0120 10:53:50.546315 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2026-01-20T10:53:50.546445745+00:00 stderr F I0120 10:53:50.546391 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2026-01-20T10:53:50.750401302+00:00 stderr F I0120 10:53:50.750316 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2026-01-20T10:53:50.750401302+00:00 stderr F I0120 
10:53:50.750373 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2026-01-20T10:53:50.964406778+00:00 stderr F I0120 10:53:50.964342 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2026-01-20T10:53:50.964464640+00:00 stderr F I0120 10:53:50.964431 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2026-01-20T10:53:51.148016122+00:00 stderr F I0120 10:53:51.147936 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2026-01-20T10:53:51.148016122+00:00 stderr F I0120 10:53:51.147988 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2026-01-20T10:53:51.400346668+00:00 stderr F I0120 10:53:51.400260 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2026-01-20T10:53:51.400346668+00:00 stderr F I0120 10:53:51.400329 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2026-01-20T10:53:51.599630980+00:00 stderr F I0120 10:53:51.599546 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2026-01-20T10:53:51.599630980+00:00 stderr F I0120 10:53:51.599608 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2026-01-20T10:53:51.739555850+00:00 stderr F I0120 10:53:51.739494 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2026-01-20T10:53:51.739611061+00:00 stderr F I0120 10:53:51.739552 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2026-01-20T10:53:51.942788017+00:00 stderr F I0120 10:53:51.942664 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2026-01-20T10:53:51.942788017+00:00 stderr F I0120 10:53:51.942742 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2026-01-20T10:53:52.141764792+00:00 stderr F I0120 10:53:52.141660 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2026-01-20T10:53:52.141764792+00:00 stderr F I0120 10:53:52.141723 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2026-01-20T10:53:52.340398976+00:00 stderr F I0120 10:53:52.340309 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2026-01-20T10:53:52.340398976+00:00 stderr F I0120 10:53:52.340367 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2026-01-20T10:53:52.540537231+00:00 stderr F I0120 10:53:52.540436 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2026-01-20T10:53:52.540537231+00:00 stderr F I0120 10:53:52.540483 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2026-01-20T10:53:52.741119938+00:00 stderr F I0120 10:53:52.740974 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2026-01-20T10:53:52.741119938+00:00 stderr F I0120 10:53:52.741036 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2026-01-20T10:53:52.941347315+00:00 stderr F I0120 10:53:52.941233 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2026-01-20T10:53:52.941347315+00:00 stderr F I0120 10:53:52.941310 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2026-01-20T10:53:53.141412339+00:00 stderr F I0120 10:53:53.141163 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2026-01-20T10:53:53.141412339+00:00 stderr F I0120 10:53:53.141217 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2026-01-20T10:53:53.340100815+00:00 stderr F I0120 10:53:53.339857 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2026-01-20T10:53:53.340100815+00:00 stderr F I0120 10:53:53.339909 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:53:53.544093474+00:00 stderr F I0120 10:53:53.543767 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:53:53.544093474+00:00 stderr F I0120 10:53:53.543824 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:53:53.739521672+00:00 stderr F I0120 10:53:53.739418 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:53:53.739521672+00:00 stderr F I0120 10:53:53.739480 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:53:53.940739446+00:00 stderr F I0120 10:53:53.940527 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:53:53.940739446+00:00 stderr F I0120 10:53:53.940650 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:54:54.143740588+00:00 stderr F I0120 10:53:54.143642 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:53:54.143821930+00:00 stderr F I0120 10:53:54.143731 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2026-01-20T10:53:54.342927668+00:00 stderr F I0120 10:53:54.342799 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2026-01-20T10:53:54.342927668+00:00 stderr F I0120 10:53:54.342859 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2026-01-20T10:53:54.543146894+00:00 stderr F I0120 10:53:54.543033 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2026-01-20T10:53:54.543265908+00:00 stderr F I0120 10:53:54.543164 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2026-01-20T10:53:54.748187360+00:00 stderr F I0120 10:53:54.747561 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2026-01-20T10:53:54.748187360+00:00 stderr F I0120 10:53:54.747648 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2026-01-20T10:53:54.941714399+00:00 stderr F I0120 10:53:54.941632 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2026-01-20T10:53:54.941784181+00:00 stderr F I0120 10:53:54.941721 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2026-01-20T10:53:55.147048263+00:00 stderr F I0120 10:53:55.146934 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2026-01-20T10:53:55.147048263+00:00 stderr F I0120 10:53:55.146987 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2026-01-20T10:53:55.354134933+00:00 stderr F I0120 10:53:55.351454 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2026-01-20T10:53:55.354134933+00:00 stderr F I0120 10:53:55.351519 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2026-01-20T10:53:55.544340023+00:00 stderr F I0120 10:53:55.544256 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2026-01-20T10:53:55.544340023+00:00 stderr F I0120 10:53:55.544310 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2026-01-20T10:53:55.748531986+00:00 stderr F I0120 10:53:55.747935 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2026-01-20T10:53:55.748531986+00:00 stderr F I0120 10:53:55.748477 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2026-01-20T10:53:55.942008233+00:00 stderr F I0120 10:53:55.941922 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2026-01-20T10:53:55.942008233+00:00 stderr F I0120 10:53:55.941975 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2026-01-20T10:53:56.140971368+00:00 stderr F I0120 10:53:56.140890 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2026-01-20T10:53:56.141112401+00:00 stderr F I0120 10:53:56.140980 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2026-01-20T10:53:56.341448321+00:00 stderr F I0120 10:53:56.340911 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2026-01-20T10:53:56.341448321+00:00 stderr F I0120 10:53:56.341414 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2026-01-20T10:53:56.545820259+00:00 stderr F I0120 10:53:56.545715 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2026-01-20T10:53:56.545820259+00:00 stderr F I0120 10:53:56.545787 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2026-01-20T10:53:56.741248109+00:00 stderr F I0120 10:53:56.741168 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2026-01-20T10:53:56.741248109+00:00 stderr F I0120 10:53:56.741235 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2026-01-20T10:53:56.942947805+00:00 stderr F I0120 10:53:56.941758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2026-01-20T10:53:56.942947805+00:00 stderr F I0120 10:53:56.941813 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2026-01-20T10:53:57.144135049+00:00 stderr F I0120 10:53:57.143986 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2026-01-20T10:53:57.144135049+00:00 stderr F I0120 10:53:57.144088 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2026-01-20T10:53:57.343790861+00:00 stderr F I0120 10:53:57.343654 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2026-01-20T10:53:57.343790861+00:00 stderr F I0120 10:53:57.343718 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2026-01-20T10:53:57.554950189+00:00 stderr F I0120 10:53:57.554904 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2026-01-20T10:53:57.555536165+00:00 stderr F I0120 10:53:57.555523 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2026-01-20T10:53:57.780746599+00:00 stderr F I0120 10:53:57.780694 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2026-01-20T10:53:57.780882922+00:00 stderr F I0120 10:53:57.780871 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2026-01-20T10:53:57.941346140+00:00 stderr F I0120 10:53:57.941292 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2026-01-20T10:53:57.941395631+00:00 stderr F I0120 10:53:57.941342 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:53:58.141192477+00:00 stderr F I0120 10:53:58.141101 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:53:58.141192477+00:00 stderr F I0120 10:53:58.141152 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:53:58.342831963+00:00 stderr F I0120 10:53:58.342724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:53:58.342831963+00:00 stderr F I0120 10:53:58.342792 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:53:58.540274965+00:00 stderr F I0120 10:53:58.540200 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:53:58.540274965+00:00 stderr F I0120 10:53:58.540259 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2026-01-20T10:53:58.744029217+00:00 stderr F I0120 10:53:58.743922 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2026-01-20T10:53:58.744029217+00:00 stderr F I0120 10:53:58.743974 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2026-01-20T10:53:58.941456719+00:00 stderr F I0120 10:53:58.941326 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2026-01-20T10:53:58.941456719+00:00 stderr F I0120 10:53:58.941416 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2026-01-20T10:53:59.141555094+00:00 stderr F I0120 10:53:59.141399 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2026-01-20T10:53:59.141555094+00:00 stderr F I0120 10:53:59.141507 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2026-01-20T10:53:59.350760040+00:00 stderr F I0120 10:53:59.350649 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2026-01-20T10:53:59.350760040+00:00 stderr F I0120 10:53:59.350733 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2026-01-20T10:53:59.543126519+00:00 stderr F I0120 10:53:59.543007 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2026-01-20T10:53:59.543126519+00:00 stderr F I0120 10:53:59.543111 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2026-01-20T10:53:59.742187315+00:00 stderr F I0120 10:53:59.742125 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2026-01-20T10:53:59.742249217+00:00 stderr F I0120 10:53:59.742185 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2026-01-20T10:53:59.939550606+00:00 stderr F I0120 10:53:59.939488 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2026-01-20T10:53:59.939550606+00:00 stderr F I0120 10:53:59.939536 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2026-01-20T10:54:00.143437681+00:00 stderr F I0120 10:54:00.143346 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2026-01-20T10:54:00.143437681+00:00 stderr F I0120 10:54:00.143406 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2026-01-20T10:54:00.344398458+00:00 stderr F I0120 10:54:00.344293 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2026-01-20T10:54:00.344398458+00:00 stderr F I0120 10:54:00.344368 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2026-01-20T10:54:00.541342828+00:00 stderr F I0120 10:54:00.541267 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2026-01-20T10:54:00.541401489+00:00 stderr F I0120 10:54:00.541356 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2026-01-20T10:54:00.742792908+00:00 stderr F I0120 10:54:00.742557 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2026-01-20T10:54:00.742792908+00:00 stderr F I0120 10:54:00.742619 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2026-01-20T10:54:00.942539513+00:00 stderr F I0120 10:54:00.942457 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2026-01-20T10:54:00.942539513+00:00 stderr F I0120 10:54:00.942527 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2026-01-20T10:54:01.141499907+00:00 stderr F I0120 10:54:01.141405 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2026-01-20T10:54:01.141539198+00:00 stderr F I0120 10:54:01.141524 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2026-01-20T10:54:01.341771345+00:00 stderr F I0120 10:54:01.341686 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2026-01-20T10:54:01.341895178+00:00 stderr F I0120 10:54:01.341818 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2026-01-20T10:54:01.541684264+00:00 stderr F I0120 10:54:01.541572 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2026-01-20T10:54:01.541684264+00:00 stderr F I0120 10:54:01.541631 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2026-01-20T10:54:01.743272417+00:00 stderr F I0120 10:54:01.743046 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2026-01-20T10:54:01.743272417+00:00 stderr F I0120 10:54:01.743134 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2026-01-20T10:54:01.940920297+00:00 stderr F I0120 10:54:01.940784 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2026-01-20T10:54:01.940920297+00:00 stderr F I0120 10:54:01.940876 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2026-01-20T10:54:02.140399044+00:00 stderr F I0120 10:54:02.140319 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2026-01-20T10:54:02.140399044+00:00 stderr F I0120 10:54:02.140371 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2026-01-20T10:54:02.341778392+00:00 stderr F I0120 10:54:02.341691 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2026-01-20T10:54:02.341778392+00:00 stderr F I0120 10:54:02.341765 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2026-01-20T10:54:02.542311078+00:00 stderr F I0120 10:54:02.542159 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2026-01-20T10:54:02.542311078+00:00 stderr F I0120 10:54:02.542259 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2026-01-20T10:54:02.741436196+00:00 stderr F I0120 10:54:02.741347 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2026-01-20T10:54:02.741436196+00:00 stderr F I0120 10:54:02.741421 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2026-01-20T10:54:02.941760615+00:00 stderr F I0120 10:54:02.941047 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2026-01-20T10:54:02.941760615+00:00 stderr F I0120 10:54:02.941134 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2026-01-20T10:54:03.145179658+00:00 stderr F I0120 10:54:03.145124 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2026-01-20T10:54:03.145240100+00:00 stderr F I0120 10:54:03.145181 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2026-01-20T10:54:03.343203496+00:00 stderr F I0120 10:54:03.343106 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2026-01-20T10:54:03.343203496+00:00 stderr F I0120 10:54:03.343188 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2026-01-20T10:54:03.540483566+00:00 stderr F I0120 10:54:03.539995 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2026-01-20T10:54:03.540483566+00:00 stderr F I0120 10:54:03.540453 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2026-01-20T10:54:03.741904735+00:00 stderr F I0120 10:54:03.741783 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2026-01-20T10:54:03.741904735+00:00 stderr F I0120 10:54:03.741871 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2026-01-20T10:54:03.942188754+00:00 stderr F I0120 10:54:03.942110 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2026-01-20T10:54:03.942188754+00:00 stderr F I0120 10:54:03.942172 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2026-01-20T10:54:04.140666505+00:00 stderr F I0120 10:54:04.140598 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2026-01-20T10:54:04.140666505+00:00 stderr F I0120 10:54:04.140657 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2026-01-20T10:54:04.347975651+00:00 stderr F I0120 10:54:04.347897 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2026-01-20T10:54:04.368223140+00:00 stderr F I0120 10:54:04.368116 1 log.go:245] Operconfig Controller complete
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096180 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.096134294 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096226 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.096208616 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096250 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.096231776 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096271 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.096255047 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096294 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096277457 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096313 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096298738 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096334 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096318528 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096356 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096340339 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096377 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.09636144 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096399 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.09638582 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096422 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.096405381 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096443 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096426581 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.096843 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2026-01-20 10:56:07.096815192 +0000 UTC))"
2026-01-20T10:56:07.097674585+00:00 stderr F I0120 10:56:07.097257 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906046\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906046\" (2026-01-20 09:47:26 +0000 UTC to 2027-01-20 09:47:26 +0000 UTC (now=2026-01-20 10:56:07.097236824 +0000 UTC))"
2026-01-20T10:56:15.536380351+00:00 stderr F I0120 10:56:15.536296 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2026-01-20T10:56:28.737079990+00:00 stderr F I0120 10:56:28.736964 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:56:28.993638653+00:00 stderr F I0120 10:56:28.993581 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2026-01-20T10:56:28.995970146+00:00 stderr F I0120 10:56:28.995936 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2026-01-20T10:56:28.998359629+00:00 stderr F I0120 10:56:28.998254 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2026-01-20T10:56:28.998359629+00:00 stderr F I0120 10:56:28.998290 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00274b480 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2026-01-20T10:56:29.002595714+00:00 stderr F I0120 10:56:29.002538 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2026-01-20T10:56:29.002595714+00:00 stderr F I0120 10:56:29.002562 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2026-01-20T10:56:29.002595714+00:00 stderr F I0120 10:56:29.002569 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2026-01-20T10:56:29.005355898+00:00 stderr F I0120 10:56:29.005297 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2026-01-20T10:56:29.005355898+00:00 stderr F I0120 10:56:29.005322 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2026-01-20T10:56:29.005355898+00:00 stderr F I0120 10:56:29.005328 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2026-01-20T10:56:29.005355898+00:00 stderr F I0120 10:56:29.005335 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2026-01-20T10:56:29.005387409+00:00 stderr F I0120 10:56:29.005365 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2026-01-20T10:56:29.016645512+00:00 stderr F I0120 10:56:29.016586 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2026-01-20T10:56:29.028956173+00:00 stderr F I0120 10:56:29.028892 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2026-01-20T10:56:29.029639302+00:00 stderr F I0120 10:56:29.029597 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2026-01-20T10:56:29.029639302+00:00 stderr F I0120 10:56:29.029632 1 log.go:245] Starting render phase
2026-01-20T10:56:29.042737993+00:00 stderr F I0120 10:56:29.042649 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2026-01-20T10:56:29.077847089+00:00 stderr F I0120 10:56:29.077672 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2026-01-20T10:56:29.077847089+00:00 stderr F I0120 10:56:29.077699 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2026-01-20T10:56:29.077847089+00:00 stderr F I0120 10:56:29.077725 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2026-01-20T10:56:29.077847089+00:00 stderr F I0120 10:56:29.077749 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2026-01-20T10:56:29.086343827+00:00 stderr F I0120 10:56:29.086289 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2026-01-20T10:56:29.086422739+00:00 stderr F I0120 10:56:29.086407 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2026-01-20T10:56:29.094413304+00:00 stderr F I0120 10:56:29.094344 1 log.go:245] Render phase done, rendered 112 objects
2026-01-20T10:56:29.109013727+00:00 stderr F I0120 10:56:29.107845 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2026-01-20T10:56:29.112968973+00:00 stderr F I0120 10:56:29.112915 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2026-01-20T10:56:29.112994754+00:00 stderr F I0120 10:56:29.112984 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2026-01-20T10:56:29.119738176+00:00 stderr F I0120 10:56:29.119698 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2026-01-20T10:56:29.119762736+00:00 stderr F I0120 10:56:29.119752 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2026-01-20T10:56:29.127208896+00:00 stderr F I0120 10:56:29.127139 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2026-01-20T10:56:29.127208896+00:00 stderr F I0120 10:56:29.127166 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2026-01-20T10:56:29.132698744+00:00 stderr F I0120 10:56:29.132658 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2026-01-20T10:56:29.132717514+00:00 stderr F I0120 10:56:29.132700 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2026-01-20T10:56:29.139629711+00:00 stderr F I0120 10:56:29.139593 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2026-01-20T10:56:29.139629711+00:00 stderr F I0120 10:56:29.139623 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2026-01-20T10:56:29.144709138+00:00 stderr F I0120 10:56:29.144676 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2026-01-20T10:56:29.144709138+00:00 stderr F I0120 10:56:29.144702 1 log.go:245] reconciling
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2026-01-20T10:56:29.149209378+00:00 stderr F I0120 10:56:29.149178 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2026-01-20T10:56:29.149296790+00:00 stderr F I0120 10:56:29.149284 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2026-01-20T10:56:29.153993207+00:00 stderr F I0120 10:56:29.153941 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2026-01-20T10:56:29.154050359+00:00 stderr F I0120 10:56:29.154019 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2026-01-20T10:56:29.158436017+00:00 stderr F I0120 10:56:29.158413 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2026-01-20T10:56:29.158503419+00:00 stderr F I0120 10:56:29.158491 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2026-01-20T10:56:29.224996217+00:00 stderr F I0120 10:56:29.224933 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2026-01-20T10:56:29.225235873+00:00 stderr F I0120 10:56:29.225210 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2026-01-20T10:56:29.426042767+00:00 stderr F I0120 10:56:29.425927 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2026-01-20T10:56:29.426042767+00:00 stderr F I0120 10:56:29.426002 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2026-01-20T10:56:29.625994517+00:00 stderr F I0120 10:56:29.625942 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 
2026-01-20T10:56:29.626100330+00:00 stderr F I0120 10:56:29.626088 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2026-01-20T10:56:29.828896435+00:00 stderr F I0120 10:56:29.828208 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2026-01-20T10:56:29.828896435+00:00 stderr F I0120 10:56:29.828292 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2026-01-20T10:56:30.026167133+00:00 stderr F I0120 10:56:30.026048 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2026-01-20T10:56:30.026167133+00:00 stderr F I0120 10:56:30.026136 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2026-01-20T10:56:30.227103919+00:00 stderr F I0120 10:56:30.227024 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2026-01-20T10:56:30.227166091+00:00 stderr F I0120 10:56:30.227111 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2026-01-20T10:56:30.427439160+00:00 stderr F I0120 10:56:30.427343 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2026-01-20T10:56:30.427517622+00:00 stderr F I0120 10:56:30.427449 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2026-01-20T10:56:30.627692667+00:00 stderr F I0120 10:56:30.627617 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2026-01-20T10:56:30.627692667+00:00 stderr F I0120 10:56:30.627677 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2026-01-20T10:56:30.826949408+00:00 stderr F I0120 10:56:30.826872 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2026-01-20T10:56:30.826949408+00:00 stderr F I0120 10:56:30.826935 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2026-01-20T10:56:31.025389027+00:00 stderr F I0120 10:56:31.025320 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2026-01-20T10:56:31.025389027+00:00 stderr F I0120 10:56:31.025383 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2026-01-20T10:56:31.226772046+00:00 stderr F I0120 10:56:31.226682 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2026-01-20T10:56:31.226772046+00:00 stderr F I0120 10:56:31.226752 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2026-01-20T10:56:31.426727965+00:00 stderr F I0120 10:56:31.426655 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2026-01-20T10:56:31.426727965+00:00 stderr F I0120 10:56:31.426719 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2026-01-20T10:56:31.636868639+00:00 stderr F I0120 10:56:31.636730 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2026-01-20T10:56:31.636868639+00:00 stderr F I0120 10:56:31.636811 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2026-01-20T10:56:31.850381464+00:00 stderr F I0120 10:56:31.850315 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2026-01-20T10:56:31.850443866+00:00 stderr F I0120 10:56:31.850378 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2026-01-20T10:56:32.028292960+00:00 stderr F I0120 10:56:32.028234 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2026-01-20T10:56:32.028292960+00:00 stderr F I0120 10:56:32.028286 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2026-01-20T10:56:32.226983976+00:00 stderr F I0120 10:56:32.226914 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2026-01-20T10:56:32.227044898+00:00 stderr F I0120 10:56:32.227009 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2026-01-20T10:56:32.426775411+00:00 stderr F I0120 10:56:32.426690 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2026-01-20T10:56:32.426775411+00:00 stderr F I0120 10:56:32.426746 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2026-01-20T10:56:32.640566324+00:00 stderr F I0120 10:56:32.640496 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2026-01-20T10:56:32.640638896+00:00 stderr F I0120 10:56:32.640587 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2026-01-20T10:56:32.826939508+00:00 stderr F I0120 10:56:32.826561 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2026-01-20T10:56:32.826939508+00:00 stderr F I0120 10:56:32.826623 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2026-01-20T10:56:33.028295846+00:00 stderr F I0120 10:56:33.028187 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2026-01-20T10:56:33.028295846+00:00 stderr F I0120 10:56:33.028273 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2026-01-20T10:56:33.226876599+00:00 stderr F I0120 10:56:33.226795 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2026-01-20T10:56:33.226920240+00:00 stderr F I0120 10:56:33.226869 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2026-01-20T10:56:33.426351306+00:00 stderr F I0120 10:56:33.426210 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2026-01-20T10:56:33.426351306+00:00 stderr F I0120 10:56:33.426296 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2026-01-20T10:56:33.627489108+00:00 stderr F I0120 10:56:33.627406 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2026-01-20T10:56:33.627489108+00:00 stderr F I0120 10:56:33.627469 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2026-01-20T10:56:33.770841744+00:00 stderr F I0120 10:56:33.770722 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.782253691+00:00 stderr F I0120 10:56:33.779971 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.794248024+00:00 stderr F I0120 10:56:33.793316 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.805743514+00:00 stderr F I0120 10:56:33.805670 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.814661144+00:00 stderr F I0120 10:56:33.814617 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.822877385+00:00 stderr F I0120 10:56:33.822842 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.827119289+00:00 stderr F I0120 10:56:33.827066 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2026-01-20T10:56:33.827138039+00:00 stderr F I0120 10:56:33.827123 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2026-01-20T10:56:33.829949215+00:00 stderr F I0120 10:56:33.829906 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.836953794+00:00 stderr F I0120 10:56:33.836902 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.845711979+00:00 stderr F I0120 10:56:33.845666 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.887796312+00:00 stderr F I0120 10:56:33.887714 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2026-01-20T10:56:33.989155998+00:00 stderr F I0120 10:56:33.986789 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:34.036775390+00:00 stderr F I0120 10:56:34.035810 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2026-01-20T10:56:34.036775390+00:00 stderr F I0120 10:56:34.035867 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2026-01-20T10:56:34.171843544+00:00 stderr F I0120 10:56:34.171595 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:34.225518138+00:00 stderr F I0120 10:56:34.225423 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2026-01-20T10:56:34.225518138+00:00 stderr F I0120 10:56:34.225489 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2026-01-20T10:56:34.370234432+00:00 stderr F I0120 10:56:34.368640 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:34.439366792+00:00 stderr F I0120 10:56:34.439139 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2026-01-20T10:56:34.439366792+00:00 stderr F I0120 10:56:34.439225 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2026-01-20T10:56:34.569802991+00:00 stderr F I0120 10:56:34.568678 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:34.630344770+00:00 stderr F I0120 10:56:34.630248 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2026-01-20T10:56:34.630344770+00:00 stderr F I0120 10:56:34.630308 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2026-01-20T10:56:34.769460073+00:00 stderr F I0120 10:56:34.769397 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2026-01-20T10:56:34.827776362+00:00 stderr F I0120 10:56:34.826029 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2026-01-20T10:56:34.827776362+00:00 stderr F I0120 10:56:34.826109 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2026-01-20T10:56:34.973330618+00:00 stderr F I0120 10:56:34.972874 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2026-01-20T10:56:35.026772056+00:00 stderr F I0120 10:56:35.026693 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2026-01-20T10:56:35.026772056+00:00 stderr F I0120 10:56:35.026749 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2026-01-20T10:56:35.169963769+00:00 stderr F I0120 10:56:35.169894 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:35.226500560+00:00 stderr F I0120 10:56:35.226427 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2026-01-20T10:56:35.226548351+00:00 stderr F I0120 10:56:35.226498 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2026-01-20T10:56:35.368466499+00:00 stderr F I0120 10:56:35.367775 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:35.426284775+00:00 stderr F I0120 10:56:35.426215 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2026-01-20T10:56:35.426284775+00:00 stderr F I0120 10:56:35.426268 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2026-01-20T10:56:35.572829378+00:00 stderr F I0120 10:56:35.572782 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2026-01-20T10:56:35.627434157+00:00 stderr F I0120 10:56:35.627373 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2026-01-20T10:56:35.627434157+00:00 stderr F I0120 10:56:35.627427 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2026-01-20T10:56:35.778172083+00:00 stderr F I0120 10:56:35.768507 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:35.830754347+00:00 stderr F I0120 10:56:35.830684 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2026-01-20T10:56:35.830790768+00:00 stderr F I0120 10:56:35.830755 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2026-01-20T10:56:35.967053895+00:00 stderr F I0120 10:56:35.966925 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:36.030415040+00:00 stderr F I0120 10:56:36.030339 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2026-01-20T10:56:36.030415040+00:00 stderr F I0120 10:56:36.030391 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2026-01-20T10:56:36.171367812+00:00 stderr F I0120 10:56:36.171311 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:36.231844680+00:00 stderr F I0120 10:56:36.231765 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2026-01-20T10:56:36.231920172+00:00 stderr F I0120 10:56:36.231840 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2026-01-20T10:56:36.368042974+00:00 stderr F I0120 10:56:36.367962 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:36.433294679+00:00 stderr F I0120 10:56:36.433239 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2026-01-20T10:56:36.433360222+00:00 stderr F I0120 10:56:36.433291 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2026-01-20T10:56:36.575145196+00:00 stderr F I0120 10:56:36.575026 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2026-01-20T10:56:36.634525183+00:00 stderr F I0120 10:56:36.634435 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2026-01-20T10:56:36.634525183+00:00 stderr F I0120 10:56:36.634514 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2026-01-20T10:56:36.771865628+00:00 stderr F I0120 10:56:36.771757 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2026-01-20T10:56:36.929112299+00:00 stderr F I0120 10:56:36.925525 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2026-01-20T10:56:36.929112299+00:00 stderr F I0120 10:56:36.925593 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2026-01-20T10:56:36.967733378+00:00 stderr F I0120 10:56:36.967668 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:56:37.061733808+00:00 stderr F I0120 10:56:37.061647 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2026-01-20T10:56:37.061733808+00:00 stderr F I0120 10:56:37.061700 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2026-01-20T10:56:37.167772671+00:00 stderr F I0120 10:56:37.167708 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:56:37.225249377+00:00 stderr F I0120 10:56:37.225188 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2026-01-20T10:56:37.225309419+00:00 stderr F I0120 10:56:37.225251 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2026-01-20T10:56:37.427904570+00:00 stderr F I0120 10:56:37.427814 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2026-01-20T10:56:37.427904570+00:00 stderr F I0120 10:56:37.427886 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2026-01-20T10:56:37.625931628+00:00 stderr F I0120 10:56:37.625843 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2026-01-20T10:56:37.625931628+00:00 stderr F I0120 10:56:37.625908 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2026-01-20T10:56:37.828669813+00:00 stderr F I0120 10:56:37.828566 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2026-01-20T10:56:37.828669813+00:00 stderr F I0120 10:56:37.828628 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2026-01-20T10:56:38.026762142+00:00 stderr F I0120 10:56:38.026660 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2026-01-20T10:56:38.026762142+00:00 stderr F I0120 10:56:38.026735 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2026-01-20T10:56:38.226226409+00:00 stderr F I0120 10:56:38.226160 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2026-01-20T10:56:38.226265580+00:00 stderr F I0120 10:56:38.226226 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2026-01-20T10:56:38.425912212+00:00 stderr F I0120 10:56:38.425840 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2026-01-20T10:56:38.425912212+00:00 stderr F I0120 10:56:38.425905 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2026-01-20T10:56:38.626725944+00:00 stderr F I0120 10:56:38.626652 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2026-01-20T10:56:38.626725944+00:00 stderr F I0120 10:56:38.626709 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2026-01-20T10:56:38.840005863+00:00 stderr F I0120 10:56:38.839938 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2026-01-20T10:56:38.840256390+00:00 stderr F I0120 10:56:38.840222 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:56:39.029455460+00:00 stderr F I0120 10:56:39.029353 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:56:39.029455460+00:00 stderr F I0120 10:56:39.029414 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:56:39.227175450+00:00 stderr F I0120 10:56:39.227106 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:56:39.227175450+00:00 stderr F I0120 10:56:39.227160 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:56:39.430146861+00:00 stderr F I0120 10:56:39.430056 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:56:39.430214583+00:00 stderr F I0120 10:56:39.430161 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2026-01-20T10:56:39.625107247+00:00 stderr F I0120 10:56:39.625014 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2026-01-20T10:56:39.625140177+00:00 stderr F I0120 10:56:39.625105 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2026-01-20T10:56:39.831172690+00:00 stderr F I0120 10:56:39.830681 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2026-01-20T10:56:39.831172690+00:00 stderr F I0120 10:56:39.830752 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2026-01-20T10:56:40.028340316+00:00 stderr F I0120 10:56:40.028265 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2026-01-20T10:56:40.028384487+00:00 stderr F I0120 10:56:40.028335 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2026-01-20T10:56:40.227683519+00:00 stderr F I0120 10:56:40.227618 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2026-01-20T10:56:40.227683519+00:00 stderr F I0120 10:56:40.227672 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2026-01-20T10:56:40.426587501+00:00 stderr F I0120 10:56:40.426524 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2026-01-20T10:56:40.426643382+00:00 stderr F I0120 10:56:40.426585 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2026-01-20T10:56:40.630962059+00:00 stderr F I0120 10:56:40.630863 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2026-01-20T10:56:40.630962059+00:00 stderr F I0120 10:56:40.630942 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2026-01-20T10:56:40.831089094+00:00 stderr F I0120 10:56:40.831004 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2026-01-20T10:56:40.831137765+00:00 stderr F I0120 10:56:40.831085 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2026-01-20T10:56:41.030028376+00:00 stderr F I0120 10:56:41.029952 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2026-01-20T10:56:41.030085888+00:00 stderr F I0120 10:56:41.030032 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2026-01-20T10:56:41.225123706+00:00 stderr F I0120 10:56:41.224953 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2026-01-20T10:56:41.225123706+00:00 stderr F I0120 10:56:41.225009 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2026-01-20T10:56:41.428879888+00:00 stderr F I0120 10:56:41.428817 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2026-01-20T10:56:41.428879888+00:00 stderr F I0120 10:56:41.428865 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2026-01-20T10:56:41.630126313+00:00 stderr F I0120 10:56:41.630026 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2026-01-20T10:56:41.630126313+00:00 stderr F I0120 10:56:41.630102 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2026-01-20T10:56:41.827964765+00:00 stderr F I0120 10:56:41.827866 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2026-01-20T10:56:41.827964765+00:00 stderr F I0120 10:56:41.827923 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2026-01-20T10:56:42.028990174+00:00 stderr F I0120 10:56:42.028009 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2026-01-20T10:56:42.028990174+00:00 stderr F I0120 10:56:42.028489 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2026-01-20T10:56:42.226164559+00:00 stderr F I0120 10:56:42.225463 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2026-01-20T10:56:42.226164559+00:00 stderr F I0120 10:56:42.225516 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2026-01-20T10:56:42.425435800+00:00 stderr F I0120 10:56:42.425357 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was
successful 2026-01-20T10:56:42.425435800+00:00 stderr F I0120 10:56:42.425423 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2026-01-20T10:56:42.628578876+00:00 stderr F I0120 10:56:42.628488 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2026-01-20T10:56:42.628578876+00:00 stderr F I0120 10:56:42.628563 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2026-01-20T10:56:42.830206561+00:00 stderr F I0120 10:56:42.829397 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2026-01-20T10:56:42.830206561+00:00 stderr F I0120 10:56:42.829461 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2026-01-20T10:56:43.035337900+00:00 stderr F I0120 10:56:43.035269 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2026-01-20T10:56:43.035397442+00:00 stderr F I0120 10:56:43.035337 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2026-01-20T10:56:43.259579133+00:00 stderr F I0120 10:56:43.259473 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:56:43.259579133+00:00 stderr F I0120 10:56:43.259501 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:56:43.261007962+00:00 stderr F I0120 10:56:43.260693 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2026-01-20T10:56:43.261007962+00:00 stderr F I0120 10:56:43.260737 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2026-01-20T10:56:43.319674401+00:00 stderr F I0120 10:56:43.319583 1 pod_watcher.go:131] Operand /, Kind= 
openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:56:43.319674401+00:00 stderr F I0120 10:56:43.319615 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:56:43.348120286+00:00 stderr F I0120 10:56:43.335350 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:56:43.348120286+00:00 stderr F I0120 10:56:43.336626 1 log.go:245] Network operator config updated with conditions: 2026-01-20T10:56:43.348120286+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:43.348120286+00:00 stderr F status: "False" 2026-01-20T10:56:43.348120286+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:43.348120286+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:43.348120286+00:00 stderr F status: "False" 2026-01-20T10:56:43.348120286+00:00 stderr F type: Degraded 2026-01-20T10:56:43.348120286+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:43.348120286+00:00 stderr F status: "True" 2026-01-20T10:56:43.348120286+00:00 stderr F type: Upgradeable 2026-01-20T10:56:43.348120286+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:43.348120286+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is being processed 2026-01-20T10:56:43.348120286+00:00 stderr F (generation 3, observed generation 2) 2026-01-20T10:56:43.348120286+00:00 stderr F reason: Deploying 2026-01-20T10:56:43.348120286+00:00 stderr F status: "True" 2026-01-20T10:56:43.348120286+00:00 stderr F type: Progressing 2026-01-20T10:56:43.348120286+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:43.348120286+00:00 stderr F status: "True" 2026-01-20T10:56:43.348120286+00:00 stderr F type: Available 2026-01-20T10:56:43.376316934+00:00 stderr F I0120 10:56:43.373469 1 log.go:245] ClusterOperator config status updated with 
conditions: 2026-01-20T10:56:43.376316934+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:43.376316934+00:00 stderr F status: "False" 2026-01-20T10:56:43.376316934+00:00 stderr F type: Degraded 2026-01-20T10:56:43.376316934+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:43.376316934+00:00 stderr F status: "True" 2026-01-20T10:56:43.376316934+00:00 stderr F type: Upgradeable 2026-01-20T10:56:43.376316934+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:43.376316934+00:00 stderr F status: "False" 2026-01-20T10:56:43.376316934+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:43.376316934+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:43.376316934+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is being processed 2026-01-20T10:56:43.376316934+00:00 stderr F (generation 3, observed generation 2) 2026-01-20T10:56:43.376316934+00:00 stderr F reason: Deploying 2026-01-20T10:56:43.376316934+00:00 stderr F status: "True" 2026-01-20T10:56:43.376316934+00:00 stderr F type: Progressing 2026-01-20T10:56:43.376316934+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:43.376316934+00:00 stderr F status: "True" 2026-01-20T10:56:43.376316934+00:00 stderr F type: Available 2026-01-20T10:56:43.426702500+00:00 stderr F I0120 10:56:43.426658 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2026-01-20T10:56:43.426791733+00:00 stderr F I0120 10:56:43.426780 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2026-01-20T10:56:43.436720749+00:00 stderr F I0120 10:56:43.435654 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:56:43.436720749+00:00 stderr F I0120 10:56:43.435694 1 log.go:245] Network operator config updated with conditions: 
2026-01-20T10:56:43.436720749+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:43.436720749+00:00 stderr F status: "False" 2026-01-20T10:56:43.436720749+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:43.436720749+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:43.436720749+00:00 stderr F status: "False" 2026-01-20T10:56:43.436720749+00:00 stderr F type: Degraded 2026-01-20T10:56:43.436720749+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:43.436720749+00:00 stderr F status: "True" 2026-01-20T10:56:43.436720749+00:00 stderr F type: Upgradeable 2026-01-20T10:56:43.436720749+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:43.436720749+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out 2026-01-20T10:56:43.436720749+00:00 stderr F (0 out of 1 updated) 2026-01-20T10:56:43.436720749+00:00 stderr F reason: Deploying 2026-01-20T10:56:43.436720749+00:00 stderr F status: "True" 2026-01-20T10:56:43.436720749+00:00 stderr F type: Progressing 2026-01-20T10:56:43.436720749+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:43.436720749+00:00 stderr F status: "True" 2026-01-20T10:56:43.436720749+00:00 stderr F type: Available 2026-01-20T10:56:43.473484419+00:00 stderr F I0120 10:56:43.471488 1 log.go:245] ClusterOperator config status updated with conditions: 2026-01-20T10:56:43.473484419+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:43.473484419+00:00 stderr F status: "False" 2026-01-20T10:56:43.473484419+00:00 stderr F type: Degraded 2026-01-20T10:56:43.473484419+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:43.473484419+00:00 stderr F status: "True" 2026-01-20T10:56:43.473484419+00:00 stderr F type: Upgradeable 2026-01-20T10:56:43.473484419+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 
2026-01-20T10:56:43.473484419+00:00 stderr F status: "False" 2026-01-20T10:56:43.473484419+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:43.473484419+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:43.473484419+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out 2026-01-20T10:56:43.473484419+00:00 stderr F (0 out of 1 updated) 2026-01-20T10:56:43.473484419+00:00 stderr F reason: Deploying 2026-01-20T10:56:43.473484419+00:00 stderr F status: "True" 2026-01-20T10:56:43.473484419+00:00 stderr F type: Progressing 2026-01-20T10:56:43.473484419+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:43.473484419+00:00 stderr F status: "True" 2026-01-20T10:56:43.473484419+00:00 stderr F type: Available 2026-01-20T10:56:43.625505649+00:00 stderr F I0120 10:56:43.624780 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2026-01-20T10:56:43.625505649+00:00 stderr F I0120 10:56:43.625485 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2026-01-20T10:56:43.690668302+00:00 stderr F I0120 10:56:43.690105 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2026-01-20T10:56:43.690668302+00:00 stderr F I0120 10:56:43.690645 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2026-01-20T10:56:43.742866947+00:00 stderr F I0120 10:56:43.742808 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:56:43.742866947+00:00 stderr F I0120 10:56:43.742846 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:56:43.885966397+00:00 stderr F I0120 10:56:43.885618 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2026-01-20T10:56:43.885966397+00:00 stderr F I0120 10:56:43.885672 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2026-01-20T10:56:44.026386105+00:00 stderr F I0120 10:56:44.026299 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2026-01-20T10:56:44.026386105+00:00 stderr F I0120 10:56:44.026351 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2026-01-20T10:56:44.226652663+00:00 stderr F I0120 10:56:44.226558 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2026-01-20T10:56:44.226652663+00:00 stderr F I0120 10:56:44.226618 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2026-01-20T10:56:44.299612617+00:00 stderr F I0120 10:56:44.299508 1 log.go:245] Network operator config updated with conditions: 2026-01-20T10:56:44.299612617+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:44.299612617+00:00 stderr F status: "False" 2026-01-20T10:56:44.299612617+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:44.299612617+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:44.299612617+00:00 stderr F status: "False" 2026-01-20T10:56:44.299612617+00:00 stderr F type: Degraded 2026-01-20T10:56:44.299612617+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:44.299612617+00:00 stderr F status: "True" 2026-01-20T10:56:44.299612617+00:00 stderr F type: Upgradeable 2026-01-20T10:56:44.299612617+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:44.299612617+00:00 stderr F message: |- 2026-01-20T10:56:44.299612617+00:00 stderr F DaemonSet 
"/openshift-ovn-kubernetes/ovnkube-node" update is rolling out (0 out of 1 updated) 2026-01-20T10:56:44.299612617+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2026-01-20T10:56:44.299612617+00:00 stderr F reason: Deploying 2026-01-20T10:56:44.299612617+00:00 stderr F status: "True" 2026-01-20T10:56:44.299612617+00:00 stderr F type: Progressing 2026-01-20T10:56:44.299612617+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:44.299612617+00:00 stderr F status: "True" 2026-01-20T10:56:44.299612617+00:00 stderr F type: Available 2026-01-20T10:56:44.299612617+00:00 stderr F I0120 10:56:44.299572 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:56:44.426964893+00:00 stderr F I0120 10:56:44.426851 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2026-01-20T10:56:44.426964893+00:00 stderr F I0120 10:56:44.426905 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2026-01-20T10:56:44.626143622+00:00 stderr F I0120 10:56:44.626030 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2026-01-20T10:56:44.626143622+00:00 stderr F I0120 10:56:44.626132 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2026-01-20T10:56:44.677666337+00:00 stderr F I0120 10:56:44.677332 1 log.go:245] ClusterOperator config status updated with conditions: 2026-01-20T10:56:44.677666337+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:44.677666337+00:00 stderr F status: "False" 2026-01-20T10:56:44.677666337+00:00 stderr F type: Degraded 2026-01-20T10:56:44.677666337+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:44.677666337+00:00 stderr F status: "True" 
2026-01-20T10:56:44.677666337+00:00 stderr F type: Upgradeable 2026-01-20T10:56:44.677666337+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:44.677666337+00:00 stderr F status: "False" 2026-01-20T10:56:44.677666337+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:44.677666337+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:44.677666337+00:00 stderr F message: |- 2026-01-20T10:56:44.677666337+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out (0 out of 1 updated) 2026-01-20T10:56:44.677666337+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2026-01-20T10:56:44.677666337+00:00 stderr F reason: Deploying 2026-01-20T10:56:44.677666337+00:00 stderr F status: "True" 2026-01-20T10:56:44.677666337+00:00 stderr F type: Progressing 2026-01-20T10:56:44.677666337+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:44.677666337+00:00 stderr F status: "True" 2026-01-20T10:56:44.677666337+00:00 stderr F type: Available 2026-01-20T10:56:44.841680711+00:00 stderr F I0120 10:56:44.840196 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2026-01-20T10:56:44.841680711+00:00 stderr F I0120 10:56:44.840263 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2026-01-20T10:56:45.029027951+00:00 stderr F I0120 10:56:45.028970 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2026-01-20T10:56:45.029085703+00:00 stderr F I0120 10:56:45.029037 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2026-01-20T10:56:45.225653571+00:00 stderr F I0120 10:56:45.225582 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) 
openshift-network-diagnostics/network-check-source was successful 2026-01-20T10:56:45.225653571+00:00 stderr F I0120 10:56:45.225638 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2026-01-20T10:56:45.425371755+00:00 stderr F I0120 10:56:45.425304 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2026-01-20T10:56:45.425371755+00:00 stderr F I0120 10:56:45.425364 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2026-01-20T10:56:45.487510018+00:00 stderr F I0120 10:56:45.487455 1 log.go:245] Network operator config updated with conditions: 2026-01-20T10:56:45.487510018+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:45.487510018+00:00 stderr F status: "False" 2026-01-20T10:56:45.487510018+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:45.487510018+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:45.487510018+00:00 stderr F status: "False" 2026-01-20T10:56:45.487510018+00:00 stderr F type: Degraded 2026-01-20T10:56:45.487510018+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:45.487510018+00:00 stderr F status: "True" 2026-01-20T10:56:45.487510018+00:00 stderr F type: Upgradeable 2026-01-20T10:56:45.487510018+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:45.487510018+00:00 stderr F message: |- 2026-01-20T10:56:45.487510018+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2026-01-20T10:56:45.487510018+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2026-01-20T10:56:45.487510018+00:00 stderr F reason: Deploying 2026-01-20T10:56:45.487510018+00:00 stderr F status: "True" 2026-01-20T10:56:45.487510018+00:00 stderr F type: Progressing 
2026-01-20T10:56:45.487510018+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:45.487510018+00:00 stderr F status: "True" 2026-01-20T10:56:45.487510018+00:00 stderr F type: Available 2026-01-20T10:56:45.487773415+00:00 stderr F I0120 10:56:45.487728 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:56:45.625020927+00:00 stderr F I0120 10:56:45.624956 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2026-01-20T10:56:45.625255843+00:00 stderr F I0120 10:56:45.625231 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2026-01-20T10:56:45.833326042+00:00 stderr F I0120 10:56:45.833268 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2026-01-20T10:56:45.833381063+00:00 stderr F I0120 10:56:45.833323 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2026-01-20T10:56:45.872631799+00:00 stderr F I0120 10:56:45.872393 1 log.go:245] ClusterOperator config status updated with conditions: 2026-01-20T10:56:45.872631799+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:45.872631799+00:00 stderr F status: "False" 2026-01-20T10:56:45.872631799+00:00 stderr F type: Degraded 2026-01-20T10:56:45.872631799+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:45.872631799+00:00 stderr F status: "True" 2026-01-20T10:56:45.872631799+00:00 stderr F type: Upgradeable 2026-01-20T10:56:45.872631799+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:45.872631799+00:00 stderr F status: "False" 2026-01-20T10:56:45.872631799+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:45.872631799+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 
2026-01-20T10:56:45.872631799+00:00 stderr F message: |- 2026-01-20T10:56:45.872631799+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2026-01-20T10:56:45.872631799+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2026-01-20T10:56:45.872631799+00:00 stderr F reason: Deploying 2026-01-20T10:56:45.872631799+00:00 stderr F status: "True" 2026-01-20T10:56:45.872631799+00:00 stderr F type: Progressing 2026-01-20T10:56:45.872631799+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:45.872631799+00:00 stderr F status: "True" 2026-01-20T10:56:45.872631799+00:00 stderr F type: Available 2026-01-20T10:56:46.025472952+00:00 stderr F I0120 10:56:46.025415 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2026-01-20T10:56:46.025529073+00:00 stderr F I0120 10:56:46.025470 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2026-01-20T10:56:46.225753380+00:00 stderr F I0120 10:56:46.225685 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2026-01-20T10:56:46.225904294+00:00 stderr F I0120 10:56:46.225890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2026-01-20T10:56:46.425980268+00:00 stderr F I0120 10:56:46.425905 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2026-01-20T10:56:46.425980268+00:00 stderr F I0120 10:56:46.425960 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2026-01-20T10:56:46.627047817+00:00 stderr F I0120 10:56:46.626934 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) 
/openshift-network-node-identity was successful 2026-01-20T10:56:46.627047817+00:00 stderr F I0120 10:56:46.627017 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2026-01-20T10:56:46.826272517+00:00 stderr F I0120 10:56:46.826208 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2026-01-20T10:56:46.826272517+00:00 stderr F I0120 10:56:46.826263 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2026-01-20T10:56:47.028103487+00:00 stderr F I0120 10:56:47.028006 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2026-01-20T10:56:47.028163860+00:00 stderr F I0120 10:56:47.028140 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2026-01-20T10:56:47.226961947+00:00 stderr F I0120 10:56:47.226857 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2026-01-20T10:56:47.226961947+00:00 stderr F I0120 10:56:47.226938 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2026-01-20T10:56:47.427524824+00:00 stderr F I0120 10:56:47.427438 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2026-01-20T10:56:47.427683938+00:00 stderr F I0120 10:56:47.427663 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2026-01-20T10:56:47.626206020+00:00 stderr F I0120 10:56:47.626121 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 
2026-01-20T10:56:47.626206020+00:00 stderr F I0120 10:56:47.626177 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2026-01-20T10:56:47.827741692+00:00 stderr F I0120 10:56:47.827558 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2026-01-20T10:56:47.827741692+00:00 stderr F I0120 10:56:47.827620 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2026-01-20T10:56:48.026486319+00:00 stderr F I0120 10:56:48.026404 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2026-01-20T10:56:48.026486319+00:00 stderr F I0120 10:56:48.026464 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2026-01-20T10:56:48.227641582+00:00 stderr F I0120 10:56:48.226964 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2026-01-20T10:56:48.227641582+00:00 stderr F I0120 10:56:48.227019 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2026-01-20T10:56:48.426649636+00:00 stderr F I0120 10:56:48.426602 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2026-01-20T10:56:48.426767269+00:00 stderr F I0120 10:56:48.426756 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2026-01-20T10:56:48.632035012+00:00 stderr F I0120 10:56:48.631966 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was 
successful 2026-01-20T10:56:48.632035012+00:00 stderr F I0120 10:56:48.632028 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2026-01-20T10:56:48.827187363+00:00 stderr F I0120 10:56:48.827129 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2026-01-20T10:56:48.827187363+00:00 stderr F I0120 10:56:48.827180 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2026-01-20T10:56:49.027902773+00:00 stderr F I0120 10:56:49.027831 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2026-01-20T10:56:49.027967824+00:00 stderr F I0120 10:56:49.027897 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2026-01-20T10:56:49.229223149+00:00 stderr F I0120 10:56:49.228654 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2026-01-20T10:56:49.229223149+00:00 stderr F I0120 10:56:49.229200 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2026-01-20T10:56:49.426552619+00:00 stderr F I0120 10:56:49.426476 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2026-01-20T10:56:49.426552619+00:00 stderr F I0120 10:56:49.426528 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2026-01-20T10:56:49.626113438+00:00 stderr F I0120 10:56:49.626019 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2026-01-20T10:56:49.626177980+00:00 stderr F I0120 10:56:49.626114 1 log.go:245] reconciling (apps/v1, 
Kind=DaemonSet) openshift-network-operator/iptables-alerter 2026-01-20T10:56:49.829134170+00:00 stderr F I0120 10:56:49.828733 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2026-01-20T10:56:49.841795071+00:00 stderr F I0120 10:56:49.841742 1 log.go:245] Operconfig Controller complete 2026-01-20T10:56:49.841833752+00:00 stderr F I0120 10:56:49.841808 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2026-01-20T10:56:50.074484412+00:00 stderr F I0120 10:56:50.073957 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2026-01-20T10:56:50.077027030+00:00 stderr F I0120 10:56:50.076983 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2026-01-20T10:56:50.078791497+00:00 stderr F I0120 10:56:50.078751 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2026-01-20T10:56:50.078791497+00:00 stderr F I0120 10:56:50.078769 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00274bf00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2026-01-20T10:56:50.082519158+00:00 stderr F I0120 10:56:50.082474 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2026-01-20T10:56:50.082519158+00:00 stderr F I0120 10:56:50.082495 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2026-01-20T10:56:50.082519158+00:00 stderr F I0120 10:56:50.082501 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2026-01-20T10:56:50.087170343+00:00 stderr F 
I0120 10:56:50.087096 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 3 -> 3 2026-01-20T10:56:50.087170343+00:00 stderr F I0120 10:56:50.087119 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 3 -> 3 2026-01-20T10:56:50.087170343+00:00 stderr F I0120 10:56:50.087127 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=true 2026-01-20T10:56:50.097542102+00:00 stderr F I0120 10:56:50.097463 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2026-01-20T10:56:50.109342059+00:00 stderr F I0120 10:56:50.109292 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2026-01-20T10:56:50.109342059+00:00 stderr F I0120 10:56:50.109325 1 log.go:245] Starting render phase 2026-01-20T10:56:50.109786622+00:00 stderr F I0120 10:56:50.109754 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:56:50.119574115+00:00 stderr F I0120 10:56:50.119531 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2026-01-20T10:56:50.147015823+00:00 stderr F I0120 10:56:50.146920 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2026-01-20T10:56:50.147015823+00:00 stderr F I0120 10:56:50.146946 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2026-01-20T10:56:50.147015823+00:00 stderr F I0120 10:56:50.146965 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2026-01-20T10:56:50.147015823+00:00 stderr F I0120 10:56:50.146989 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2026-01-20T10:56:50.155489001+00:00 stderr F I0120 10:56:50.155428 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2026-01-20T10:56:50.155489001+00:00 stderr F I0120 10:56:50.155452 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2026-01-20T10:56:50.162572351+00:00 stderr F I0120 10:56:50.162532 1 log.go:245] Render phase done, rendered 112 objects 2026-01-20T10:56:50.183630928+00:00 stderr F I0120 10:56:50.183575 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2026-01-20T10:56:50.225784003+00:00 stderr F I0120 10:56:50.225727 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2026-01-20T10:56:50.225909386+00:00 stderr F I0120 10:56:50.225893 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2026-01-20T10:56:50.426395351+00:00 stderr F I0120 10:56:50.426327 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 
2026-01-20T10:56:50.426395351+00:00 stderr F I0120 10:56:50.426380 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2026-01-20T10:56:50.628591941+00:00 stderr F I0120 10:56:50.628539 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2026-01-20T10:56:50.628723884+00:00 stderr F I0120 10:56:50.628709 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2026-01-20T10:56:50.826953158+00:00 stderr F I0120 10:56:50.826887 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2026-01-20T10:56:50.826953158+00:00 stderr F I0120 10:56:50.826939 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2026-01-20T10:56:51.027780911+00:00 stderr F I0120 10:56:51.027712 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2026-01-20T10:56:51.027780911+00:00 stderr F I0120 10:56:51.027763 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2026-01-20T10:56:51.225987004+00:00 stderr F I0120 10:56:51.225882 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2026-01-20T10:56:51.225987004+00:00 stderr F I0120 10:56:51.225937 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2026-01-20T10:56:51.427112745+00:00 stderr F I0120 10:56:51.427017 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2026-01-20T10:56:51.427112745+00:00 stderr F I0120 10:56:51.427093 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2026-01-20T10:56:51.625946825+00:00 stderr F I0120 10:56:51.625815 1 
log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2026-01-20T10:56:51.626014267+00:00 stderr F I0120 10:56:51.625976 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2026-01-20T10:56:51.827900139+00:00 stderr F I0120 10:56:51.827788 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2026-01-20T10:56:51.827900139+00:00 stderr F I0120 10:56:51.827842 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2026-01-20T10:56:52.025301719+00:00 stderr F I0120 10:56:52.025225 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2026-01-20T10:56:52.025301719+00:00 stderr F I0120 10:56:52.025283 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2026-01-20T10:56:52.227459978+00:00 stderr F I0120 10:56:52.227367 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2026-01-20T10:56:52.227524661+00:00 stderr F I0120 10:56:52.227476 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2026-01-20T10:56:52.425429615+00:00 stderr F I0120 10:56:52.425377 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2026-01-20T10:56:52.425462576+00:00 stderr F I0120 10:56:52.425432 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2026-01-20T10:56:52.627907323+00:00 stderr F I0120 10:56:52.627798 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2026-01-20T10:56:52.627907323+00:00 stderr F I0120 10:56:52.627858 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-whereabouts 2026-01-20T10:56:52.824613816+00:00 stderr F I0120 10:56:52.824092 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2026-01-20T10:56:52.824613816+00:00 stderr F I0120 10:56:52.824141 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2026-01-20T10:56:53.025682015+00:00 stderr F I0120 10:56:53.025624 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2026-01-20T10:56:53.025734156+00:00 stderr F I0120 10:56:53.025685 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2026-01-20T10:56:53.226208641+00:00 stderr F I0120 10:56:53.225838 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2026-01-20T10:56:53.226208641+00:00 stderr F I0120 10:56:53.225905 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2026-01-20T10:56:53.429674835+00:00 stderr F I0120 10:56:53.428468 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2026-01-20T10:56:53.429674835+00:00 stderr F I0120 10:56:53.428525 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2026-01-20T10:56:53.623858404+00:00 stderr F I0120 10:56:53.623791 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2026-01-20T10:56:53.623918916+00:00 stderr F I0120 10:56:53.623852 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2026-01-20T10:56:53.824623784+00:00 stderr F I0120 10:56:53.824568 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was 
successful 2026-01-20T10:56:53.824656255+00:00 stderr F I0120 10:56:53.824632 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2026-01-20T10:56:54.029236105+00:00 stderr F I0120 10:56:54.029175 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2026-01-20T10:56:54.029273536+00:00 stderr F I0120 10:56:54.029235 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2026-01-20T10:56:54.225534746+00:00 stderr F I0120 10:56:54.225458 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2026-01-20T10:56:54.225534746+00:00 stderr F I0120 10:56:54.225513 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2026-01-20T10:56:54.445817561+00:00 stderr F I0120 10:56:54.445747 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2026-01-20T10:56:54.445855792+00:00 stderr F I0120 10:56:54.445815 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2026-01-20T10:56:54.640566912+00:00 stderr F I0120 10:56:54.640481 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2026-01-20T10:56:54.640566912+00:00 stderr F I0120 10:56:54.640543 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2026-01-20T10:56:54.826673953+00:00 stderr F I0120 10:56:54.826563 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2026-01-20T10:56:54.826673953+00:00 stderr F I0120 10:56:54.826633 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2026-01-20T10:56:55.026146599+00:00 stderr F I0120 10:56:55.025352 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 
2026-01-20T10:56:55.026146599+00:00 stderr F I0120 10:56:55.025416 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2026-01-20T10:56:55.225807799+00:00 stderr F I0120 10:56:55.225741 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2026-01-20T10:56:55.225807799+00:00 stderr F I0120 10:56:55.225794 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2026-01-20T10:56:55.430757009+00:00 stderr F I0120 10:56:55.430676 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2026-01-20T10:56:55.430757009+00:00 stderr F I0120 10:56:55.430726 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2026-01-20T10:56:55.631251761+00:00 stderr F I0120 10:56:55.631174 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2026-01-20T10:56:55.631251761+00:00 stderr F I0120 10:56:55.631237 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2026-01-20T10:56:55.826176665+00:00 stderr F I0120 10:56:55.826094 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2026-01-20T10:56:55.826176665+00:00 stderr F I0120 10:56:55.826153 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2026-01-20T10:56:56.026425451+00:00 stderr F I0120 10:56:56.026357 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2026-01-20T10:56:56.026425451+00:00 stderr F I0120 10:56:56.026410 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2026-01-20T10:56:56.226302697+00:00 stderr F I0120 
10:56:56.225666 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2026-01-20T10:56:56.226302697+00:00 stderr F I0120 10:56:56.225727 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2026-01-20T10:56:56.426230974+00:00 stderr F I0120 10:56:56.425684 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2026-01-20T10:56:56.426230974+00:00 stderr F I0120 10:56:56.426210 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2026-01-20T10:56:56.624977370+00:00 stderr F I0120 10:56:56.624912 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2026-01-20T10:56:56.624977370+00:00 stderr F I0120 10:56:56.624964 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2026-01-20T10:56:56.826999962+00:00 stderr F I0120 10:56:56.826946 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2026-01-20T10:56:56.827034363+00:00 stderr F I0120 10:56:56.826996 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2026-01-20T10:56:57.027270369+00:00 stderr F I0120 10:56:57.027194 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2026-01-20T10:56:57.027270369+00:00 stderr F I0120 10:56:57.027250 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2026-01-20T10:56:57.227624447+00:00 stderr F I0120 10:56:57.227144 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2026-01-20T10:56:57.227624447+00:00 
stderr F I0120 10:56:57.227596 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2026-01-20T10:56:57.429026623+00:00 stderr F I0120 10:56:57.428912 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2026-01-20T10:56:57.429026623+00:00 stderr F I0120 10:56:57.428975 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2026-01-20T10:56:57.625993383+00:00 stderr F I0120 10:56:57.625839 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2026-01-20T10:56:57.625993383+00:00 stderr F I0120 10:56:57.625887 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2026-01-20T10:56:57.826050093+00:00 stderr F I0120 10:56:57.825976 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2026-01-20T10:56:57.826050093+00:00 stderr F I0120 10:56:57.826034 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2026-01-20T10:56:58.026172845+00:00 stderr F I0120 10:56:58.026099 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2026-01-20T10:56:58.026172845+00:00 stderr F I0120 10:56:58.026150 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2026-01-20T10:56:58.226907064+00:00 stderr F I0120 10:56:58.226800 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2026-01-20T10:56:58.226907064+00:00 stderr F I0120 10:56:58.226853 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 
2026-01-20T10:56:58.425217948+00:00 stderr F I0120 10:56:58.425134 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2026-01-20T10:56:58.425217948+00:00 stderr F I0120 10:56:58.425189 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2026-01-20T10:56:58.631576626+00:00 stderr F I0120 10:56:58.631507 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2026-01-20T10:56:58.631647918+00:00 stderr F I0120 10:56:58.631619 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2026-01-20T10:56:58.811439422+00:00 stderr F I0120 10:56:58.810806 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2026-01-20T10:56:58.811439422+00:00 stderr F I0120 10:56:58.810847 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2026-01-20T10:56:58.835112578+00:00 stderr F I0120 10:56:58.833872 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2026-01-20T10:56:58.835112578+00:00 stderr F I0120 10:56:58.833922 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2026-01-20T10:56:58.865625045+00:00 stderr F I0120 10:56:58.865551 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:56:58.866589700+00:00 stderr F I0120 10:56:58.866538 1 log.go:245] Network operator config updated with conditions: 2026-01-20T10:56:58.866589700+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:58.866589700+00:00 stderr F status: "False" 2026-01-20T10:56:58.866589700+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:58.866589700+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 
2026-01-20T10:56:58.866589700+00:00 stderr F status: "False" 2026-01-20T10:56:58.866589700+00:00 stderr F type: Degraded 2026-01-20T10:56:58.866589700+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:58.866589700+00:00 stderr F status: "True" 2026-01-20T10:56:58.866589700+00:00 stderr F type: Upgradeable 2026-01-20T10:56:58.866589700+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:58.866589700+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 2026-01-20T10:56:58.866589700+00:00 stderr F 1 nodes) 2026-01-20T10:56:58.866589700+00:00 stderr F reason: Deploying 2026-01-20T10:56:58.866589700+00:00 stderr F status: "True" 2026-01-20T10:56:58.866589700+00:00 stderr F type: Progressing 2026-01-20T10:56:58.866589700+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:58.866589700+00:00 stderr F status: "True" 2026-01-20T10:56:58.866589700+00:00 stderr F type: Available 2026-01-20T10:56:58.887202956+00:00 stderr F I0120 10:56:58.887127 1 log.go:245] ClusterOperator config status updated with conditions: 2026-01-20T10:56:58.887202956+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2026-01-20T10:56:58.887202956+00:00 stderr F status: "False" 2026-01-20T10:56:58.887202956+00:00 stderr F type: Degraded 2026-01-20T10:56:58.887202956+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:58.887202956+00:00 stderr F status: "True" 2026-01-20T10:56:58.887202956+00:00 stderr F type: Upgradeable 2026-01-20T10:56:58.887202956+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:56:58.887202956+00:00 stderr F status: "False" 2026-01-20T10:56:58.887202956+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:56:58.887202956+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:56:58.887202956+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not 
available (awaiting 2026-01-20T10:56:58.887202956+00:00 stderr F 1 nodes) 2026-01-20T10:56:58.887202956+00:00 stderr F reason: Deploying 2026-01-20T10:56:58.887202956+00:00 stderr F status: "True" 2026-01-20T10:56:58.887202956+00:00 stderr F type: Progressing 2026-01-20T10:56:58.887202956+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:56:58.887202956+00:00 stderr F status: "True" 2026-01-20T10:56:58.887202956+00:00 stderr F type: Available 2026-01-20T10:56:59.030151056+00:00 stderr F I0120 10:56:59.030045 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2026-01-20T10:56:59.030151056+00:00 stderr F I0120 10:56:59.030113 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2026-01-20T10:56:59.248431438+00:00 stderr F I0120 10:56:59.248365 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2026-01-20T10:56:59.248431438+00:00 stderr F I0120 10:56:59.248423 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2026-01-20T10:56:59.433827371+00:00 stderr F I0120 10:56:59.433718 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2026-01-20T10:56:59.433827371+00:00 stderr F I0120 10:56:59.433797 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2026-01-20T10:56:59.651785945+00:00 stderr F I0120 10:56:59.651704 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2026-01-20T10:56:59.651785945+00:00 stderr F I0120 10:56:59.651776 1 log.go:245] reconciling (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2026-01-20T10:56:59.854198347+00:00 stderr F I0120 10:56:59.853888 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2026-01-20T10:56:59.854198347+00:00 stderr F I0120 10:56:59.853938 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2026-01-20T10:57:00.026936746+00:00 stderr F I0120 10:57:00.026857 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2026-01-20T10:57:00.027088230+00:00 stderr F I0120 10:57:00.027075 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2026-01-20T10:57:00.226188675+00:00 stderr F I0120 10:57:00.226120 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2026-01-20T10:57:00.226253457+00:00 stderr F I0120 10:57:00.226192 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2026-01-20T10:57:00.426742238+00:00 stderr F I0120 10:57:00.426661 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2026-01-20T10:57:00.426742238+00:00 stderr F I0120 10:57:00.426717 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2026-01-20T10:57:00.627440536+00:00 stderr F I0120 10:57:00.627371 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2026-01-20T10:57:00.627490847+00:00 stderr F I0120 10:57:00.627442 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2026-01-20T10:57:00.826603063+00:00 stderr F I0120 10:57:00.826508 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2026-01-20T10:57:00.826603063+00:00 stderr F I0120 10:57:00.826577 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2026-01-20T10:57:01.025846942+00:00 stderr F I0120 10:57:01.025772 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2026-01-20T10:57:01.025846942+00:00 stderr F I0120 10:57:01.025825 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2026-01-20T10:57:01.225250845+00:00 stderr F I0120 10:57:01.225189 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2026-01-20T10:57:01.225250845+00:00 stderr F I0120 10:57:01.225239 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2026-01-20T10:57:01.426363444+00:00 stderr F I0120 10:57:01.426268 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2026-01-20T10:57:01.426363444+00:00 stderr F I0120 10:57:01.426323 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2026-01-20T10:57:01.625821519+00:00 stderr F I0120 10:57:01.625282 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2026-01-20T10:57:01.625821519+00:00 stderr F I0120 10:57:01.625815 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) 
/openshift-ovn-kubernetes-control-plane-limited 2026-01-20T10:57:01.830096811+00:00 stderr F I0120 10:57:01.830031 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2026-01-20T10:57:01.830147712+00:00 stderr F I0120 10:57:01.830118 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2026-01-20T10:57:02.026003072+00:00 stderr F I0120 10:57:02.025947 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2026-01-20T10:57:02.026168986+00:00 stderr F I0120 10:57:02.026154 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2026-01-20T10:57:02.225335103+00:00 stderr F I0120 10:57:02.225276 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2026-01-20T10:57:02.225389434+00:00 stderr F I0120 10:57:02.225331 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2026-01-20T10:57:02.425690012+00:00 stderr F I0120 10:57:02.425250 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2026-01-20T10:57:02.425806625+00:00 stderr F I0120 10:57:02.425795 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2026-01-20T10:57:02.626664417+00:00 stderr F I0120 10:57:02.626618 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2026-01-20T10:57:02.626775719+00:00 stderr F I0120 10:57:02.626765 1 
log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2026-01-20T10:57:02.827280531+00:00 stderr F I0120 10:57:02.827219 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2026-01-20T10:57:02.827401624+00:00 stderr F I0120 10:57:02.827391 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2026-01-20T10:57:03.028582955+00:00 stderr F I0120 10:57:03.028503 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2026-01-20T10:57:03.028582955+00:00 stderr F I0120 10:57:03.028559 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2026-01-20T10:57:03.226984322+00:00 stderr F I0120 10:57:03.226899 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2026-01-20T10:57:03.226984322+00:00 stderr F I0120 10:57:03.226964 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2026-01-20T10:57:03.429551338+00:00 stderr F I0120 10:57:03.429468 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2026-01-20T10:57:03.429551338+00:00 stderr F I0120 10:57:03.429545 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2026-01-20T10:57:03.633597014+00:00 stderr F I0120 10:57:03.633528 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2026-01-20T10:57:03.633597014+00:00 stderr F I0120 10:57:03.633580 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2026-01-20T10:57:03.830330216+00:00 stderr 
F I0120 10:57:03.830208 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2026-01-20T10:57:03.830330216+00:00 stderr F I0120 10:57:03.830270 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2026-01-20T10:57:04.027527572+00:00 stderr F I0120 10:57:04.027455 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2026-01-20T10:57:04.027565423+00:00 stderr F I0120 10:57:04.027527 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2026-01-20T10:57:04.226114144+00:00 stderr F I0120 10:57:04.226017 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2026-01-20T10:57:04.226114144+00:00 stderr F I0120 10:57:04.226096 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2026-01-20T10:57:04.429278466+00:00 stderr F I0120 10:57:04.429144 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2026-01-20T10:57:04.429278466+00:00 stderr F I0120 10:57:04.429213 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2026-01-20T10:57:04.628495984+00:00 stderr F I0120 10:57:04.628394 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2026-01-20T10:57:04.628495984+00:00 stderr F I0120 10:57:04.628475 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2026-01-20T10:57:04.826868840+00:00 stderr F I0120 10:57:04.826721 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2026-01-20T10:57:04.826868840+00:00 stderr F I0120 10:57:04.826780 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2026-01-20T10:57:05.027115626+00:00 stderr F I0120 10:57:05.026514 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2026-01-20T10:57:05.027115626+00:00 stderr F I0120 10:57:05.026591 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2026-01-20T10:57:05.226250632+00:00 stderr F I0120 10:57:05.226174 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2026-01-20T10:57:05.226250632+00:00 stderr F I0120 10:57:05.226237 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2026-01-20T10:57:05.426474927+00:00 stderr F I0120 10:57:05.426400 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2026-01-20T10:57:05.426537548+00:00 stderr F I0120 10:57:05.426474 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2026-01-20T10:57:05.628733995+00:00 stderr F I0120 10:57:05.628633 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2026-01-20T10:57:05.628733995+00:00 stderr F I0120 10:57:05.628708 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2026-01-20T10:57:05.834786935+00:00 stderr F I0120 10:57:05.834664 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2026-01-20T10:57:05.835495223+00:00 stderr F I0120 10:57:05.835433 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2026-01-20T10:57:06.061155172+00:00 stderr F I0120 10:57:06.057581 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2026-01-20T10:57:06.061155172+00:00 stderr F I0120 10:57:06.057644 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2026-01-20T10:57:06.226733370+00:00 stderr F I0120 10:57:06.226647 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2026-01-20T10:57:06.226733370+00:00 stderr F I0120 10:57:06.226709 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:57:06.431540416+00:00 stderr F I0120 10:57:06.431459 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:57:06.431540416+00:00 stderr F I0120 10:57:06.431524 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:57:06.625602408+00:00 stderr F I0120 10:57:06.625533 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:57:06.625602408+00:00 stderr F I0120 10:57:06.625593 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2026-01-20T10:57:06.826316726+00:00 stderr F I0120 10:57:06.826240 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2026-01-20T10:57:06.826316726+00:00 stderr F I0120 10:57:06.826305 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2026-01-20T10:57:07.028933014+00:00 stderr F I0120 10:57:07.028467 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2026-01-20T10:57:07.028959465+00:00 stderr F I0120 10:57:07.028932 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2026-01-20T10:57:07.227406913+00:00 stderr F I0120 10:57:07.227337 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2026-01-20T10:57:07.227463344+00:00 stderr F I0120 10:57:07.227414 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2026-01-20T10:57:07.426015125+00:00 stderr F I0120 10:57:07.425910 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2026-01-20T10:57:07.426110958+00:00 stderr F I0120 10:57:07.426009 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2026-01-20T10:57:07.634951540+00:00 stderr F I0120 10:57:07.634875 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2026-01-20T10:57:07.635013512+00:00 stderr F I0120 10:57:07.634959 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2026-01-20T10:57:07.830870351+00:00 stderr F I0120 10:57:07.830806 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2026-01-20T10:57:07.830870351+00:00 stderr F I0120 10:57:07.830860 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2026-01-20T10:57:08.022425358+00:00 stderr F I0120 10:57:08.022332 1 log.go:245] could not apply (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source: failed to apply / update (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source: Patch "https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-network-diagnostics/servicemonitors/network-check-source?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:08.022425358+00:00 stderr F I0120 10:57:08.022414 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2026-01-20T10:57:08.221784109+00:00 stderr F I0120 10:57:08.221694 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/roles/prometheus-k8s?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:08.221784109+00:00 stderr F I0120 10:57:08.221754 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2026-01-20T10:57:08.421410048+00:00 stderr F I0120 10:57:08.421202 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/rolebindings/prometheus-k8s?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:08.421410048+00:00 stderr F I0120 10:57:08.421265 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2026-01-20T10:57:08.621917021+00:00 stderr F I0120 10:57:08.621833 1 log.go:245] could not apply (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target: Patch "https://api-int.crc.testing:6443/apis/apps/v1/namespaces/openshift-network-diagnostics/daemonsets/network-check-target?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:08.621989093+00:00 stderr F I0120 10:57:08.621918 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2026-01-20T10:57:08.822203097+00:00 stderr F I0120 10:57:08.822015 1 log.go:245] could not apply (/v1, Kind=Service) openshift-network-diagnostics/network-check-target: failed to apply / update (/v1, Kind=Service) openshift-network-diagnostics/network-check-target: Patch "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/services/network-check-target?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:08.822314940+00:00 stderr F I0120 10:57:08.822205 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2026-01-20T10:57:09.022138105+00:00 stderr F I0120 10:57:09.021984 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/openshift-network-public-role?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:09.022229337+00:00 stderr F I0120 10:57:09.022143 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2026-01-20T10:57:09.221807185+00:00 stderr F I0120 10:57:09.221662 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/openshift-network-public-role-binding?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:09.221807185+00:00 stderr F I0120 10:57:09.221764 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2026-01-20T10:57:09.422774900+00:00 stderr F I0120 10:57:09.422635 1 log.go:245] could not apply (/v1, Kind=Namespace) /openshift-network-node-identity: failed to apply / update (/v1, Kind=Namespace) /openshift-network-node-identity: Patch "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:09.422848852+00:00 stderr F I0120 10:57:09.422804 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2026-01-20T10:57:09.621838684+00:00 stderr F I0120 10:57:09.621755 1 log.go:245] could not apply (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity: Patch "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:09.621909756+00:00 stderr F I0120 10:57:09.621838 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2026-01-20T10:57:09.822687605+00:00 stderr F I0120 10:57:09.821827 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:09.822687605+00:00 stderr F I0120 10:57:09.821914 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2026-01-20T10:57:10.020626890+00:00 stderr F I0120 10:57:10.020567 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:10.020713972+00:00 stderr F I0120 10:57:10.020629 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2026-01-20T10:57:10.221433741+00:00 stderr F I0120 10:57:10.221336 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-node-identity/rolebindings/network-node-identity-leases?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:10.221498282+00:00 stderr F I0120 10:57:10.221433 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2026-01-20T10:57:10.422306432+00:00 stderr F I0120 10:57:10.422177 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-node-identity/roles/network-node-identity-leases?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:10.422306432+00:00 stderr F I0120 10:57:10.422275 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2026-01-20T10:57:10.622330932+00:00 stderr F I0120 10:57:10.622218 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-node-identity/rolebindings/system:openshift:scc:hostnetwork-v2?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:10.622330932+00:00 stderr F I0120 10:57:10.622292 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2026-01-20T10:57:10.821826068+00:00 stderr F I0120 10:57:10.821713 1 log.go:245] could not apply (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm: Patch "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps/ovnkube-identity-cm?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:10.821826068+00:00 stderr F I0120 10:57:10.821804 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2026-01-20T10:57:11.021272552+00:00 stderr F I0120 10:57:11.021156 1 log.go:245] could not apply (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity: failed to apply / update (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity: Patch "https://api-int.crc.testing:6443/apis/network.operator.openshift.io/v1/namespaces/openshift-network-node-identity/operatorpkis/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:11.021272552+00:00 stderr F I0120 10:57:11.021256 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2026-01-20T10:57:11.221234501+00:00 stderr F I0120 10:57:11.221091 1 log.go:245] could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io: Patch "https://api-int.crc.testing:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/network-node-identity.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:11.221234501+00:00 stderr F I0120 10:57:11.221166 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2026-01-20T10:57:11.421235880+00:00 stderr F I0120 10:57:11.421147 1 log.go:245] could not apply (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity: Patch "https://api-int.crc.testing:6443/apis/apps/v1/namespaces/openshift-network-node-identity/daemonsets/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:11.421235880+00:00 stderr F I0120 10:57:11.421209 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2026-01-20T10:57:11.621914357+00:00 stderr F I0120 10:57:11.621783 1 log.go:245] could not apply (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules: failed to apply / update (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules: Patch "https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-network-operator/prometheusrules/openshift-network-operator-ipsec-rules?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:11.621914357+00:00 stderr F I0120 10:57:11.621856 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2026-01-20T10:57:11.822408418+00:00 stderr F I0120 10:57:11.822265 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:11.822408418+00:00 stderr F I0120 10:57:11.822342 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2026-01-20T10:57:12.022298475+00:00 stderr F I0120 10:57:12.022196 1 log.go:245] could not apply (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter: Patch "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.022386417+00:00 stderr F I0120 10:57:12.022294 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2026-01-20T10:57:12.222029057+00:00 stderr F I0120 10:57:12.221901 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/openshift-iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.222029057+00:00 stderr F I0120 10:57:12.221996 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2026-01-20T10:57:12.422567220+00:00 stderr F I0120 10:57:12.422449 1 log.go:245] could not apply (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: Patch "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps/iptables-alerter-script?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.422567220+00:00 stderr F I0120 10:57:12.422543 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2026-01-20T10:57:12.622266911+00:00 stderr F I0120 10:57:12.621578 1 log.go:245] could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: Patch "https://api-int.crc.testing:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.623010711+00:00 stderr F I0120 10:57:12.622956 1 log.go:245] Failed to set operator status: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.623940115+00:00 stderr F I0120 10:57:12.623886 1 log.go:245] Failed to set ClusterOperator: Get "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusteroperators/network": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.624082349+00:00 stderr F E0120 10:57:12.624030 1 controller.go:329] "Reconciler error" err="could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: Patch \"https://api-int.crc.testing:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="74eb3ec8-e266-4d7c-bd55-934e87e78149"
2026-01-20T10:57:12.629682827+00:00 stderr F I0120 10:57:12.629608 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:12.630661293+00:00 stderr F I0120 10:57:12.630610 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.630822338+00:00 stderr F E0120 10:57:12.630776 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="b2b8ce21-3299-4366-8b2a-358b891b09c7"
2026-01-20T10:57:12.641185431+00:00 stderr F I0120 10:57:12.641113 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:12.641748307+00:00 stderr F I0120 10:57:12.641689 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.641847369+00:00 stderr F E0120 10:57:12.641795 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="edcb0b76-a66a-4203-aba6-0732cdf2e88f"
2026-01-20T10:57:12.662489865+00:00 stderr F I0120 10:57:12.662397 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:12.663673887+00:00 stderr F I0120 10:57:12.663630 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.663801260+00:00 stderr F E0120 10:57:12.663747 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="bc9ffbd6-c356-4f2f-a5ab-469a14244451"
2026-01-20T10:57:12.703922031+00:00 stderr F I0120 10:57:12.703843 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:12.704641509+00:00 stderr F I0120 10:57:12.704589 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.704779503+00:00 stderr F E0120 10:57:12.704732 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="2260d3ca-f71f-4c72-9813-302c916125e9"
2026-01-20T10:57:12.785748084+00:00 stderr F I0120 10:57:12.785623 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:12.786503954+00:00 stderr F I0120 10:57:12.786440 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.786622487+00:00 stderr F E0120 10:57:12.786569 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="d8283a4f-7a27-4486-9e49-260c7def579d"
2026-01-20T10:57:12.947585775+00:00 stderr F I0120 10:57:12.947498 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:12.948089698+00:00 stderr F I0120 10:57:12.948019 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:12.948156839+00:00 stderr F E0120 10:57:12.948127 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="a5564a02-072c-4a8b-a2fa-5456676b3ddb"
2026-01-20T10:57:13.269079087+00:00 stderr F I0120 10:57:13.268972 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:13.269791495+00:00 stderr F I0120 10:57:13.269747 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:13.269988540+00:00 stderr F E0120 10:57:13.269897 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="aff26bb7-209c-4623-9e71-c2c02eb5cfea"
2026-01-20T10:57:13.910028656+00:00 stderr F I0120 10:57:13.909964 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:13.910683354+00:00 stderr F I0120 10:57:13.910645 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:13.910760346+00:00 stderr F E0120 10:57:13.910736 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="5f3b1683-9cc8-4bbb-bfe6-f38196d404f4"
2026-01-20T10:57:15.191448884+00:00 stderr F I0120 10:57:15.191369 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:15.192306687+00:00 stderr F I0120 10:57:15.192228 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:15.192417829+00:00 stderr F E0120 10:57:15.192370 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="1224a0b1-2cc9-4dda-ad15-648db35d7d57"
2026-01-20T10:57:15.223761388+00:00 stderr F E0120 10:57:15.223651 1 leaderelection.go:332] error retrieving resource lock openshift-network-operator/network-operator-lock: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock?timeout=4m0s": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:17.754526456+00:00 stderr F I0120 10:57:17.754416 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:17.755537992+00:00 stderr F I0120 10:57:17.755467 1 log.go:245] Unable to retrieve Network.operator.openshift.io object: Get "https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster": dial tcp 38.102.83.220:6443: connect: connection refused
2026-01-20T10:57:17.755658945+00:00 stderr F E0120 10:57:17.755605 1 controller.go:329] "Reconciler error" err="Get \"https://api-int.crc.testing:6443/apis/operator.openshift.io/v1/networks/cluster\": dial tcp 38.102.83.220:6443: connect: connection refused" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="08339f3a-3aa0-4fe7-be34-70fbee4ac2e1"
2026-01-20T10:57:22.876610940+00:00 stderr F I0120 10:57:22.876523 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2026-01-20T10:57:29.446133134+00:00 stderr F I0120 10:57:29.446033 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2026-01-20T10:57:29.452090011+00:00 stderr F I0120 10:57:29.451320 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2026-01-20T10:57:29.460107663+00:00 stderr F I0120 10:57:29.459454 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2026-01-20T10:57:29.460107663+00:00 stderr F I0120 10:57:29.459490 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003e98c00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2026-01-20T10:57:29.472113282+00:00 stderr F I0120 10:57:29.471713 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2026-01-20T10:57:29.472113282+00:00 stderr F I0120 10:57:29.471762 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2026-01-20T10:57:29.472113282+00:00 stderr F I0120 10:57:29.471780 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2026-01-20T10:57:29.478106970+00:00 stderr F I0120 10:57:29.477999 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 3 -> 3
2026-01-20T10:57:29.478106970+00:00 stderr F I0120 10:57:29.478014 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 3 -> 3
2026-01-20T10:57:29.478106970+00:00 stderr F I0120 10:57:29.478022 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=true
2026-01-20T10:57:29.505110733+00:00 stderr F I0120 10:57:29.505034 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2026-01-20T10:57:29.524094536+00:00 stderr F I0120 10:57:29.524000 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2026-01-20T10:57:29.524094536+00:00 stderr F I0120 10:57:29.524036 1 log.go:245] Starting render phase
2026-01-20T10:57:29.533732111+00:00 stderr F I0120 10:57:29.533673 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2026-01-20T10:57:29.572624069+00:00 stderr F I0120 10:57:29.572572 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2026-01-20T10:57:29.572624069+00:00 stderr F I0120 10:57:29.572593 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2026-01-20T10:57:29.572624069+00:00 stderr F I0120 10:57:29.572613 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2026-01-20T10:57:29.572661920+00:00 stderr F I0120 10:57:29.572639 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2026-01-20T10:57:29.583270531+00:00 stderr F I0120 10:57:29.583197 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2026-01-20T10:57:29.583270531+00:00 stderr F I0120 10:57:29.583231 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2026-01-20T10:57:29.591934380+00:00 stderr F I0120 10:57:29.591868 1 log.go:245] Render phase done, rendered 112 objects
2026-01-20T10:57:29.606409623+00:00 stderr F I0120 10:57:29.606343 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2026-01-20T10:57:29.615116443+00:00 stderr F I0120 10:57:29.611694 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2026-01-20T10:57:29.615116443+00:00 stderr F I0120 10:57:29.611756 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2026-01-20T10:57:29.618137173+00:00 stderr F I0120 10:57:29.617642 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2026-01-20T10:57:29.618137173+00:00 stderr F I0120 10:57:29.617683 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2026-01-20T10:57:29.623942767+00:00 stderr F I0120 10:57:29.623924 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2026-01-20T10:57:29.623995108+00:00 stderr F I0120 10:57:29.623985 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2026-01-20T10:57:29.629039272+00:00 stderr F I0120 10:57:29.628992 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2026-01-20T10:57:29.629039272+00:00 stderr F I0120 10:57:29.629024 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2026-01-20T10:57:29.634699411+00:00 stderr F I0120 10:57:29.634555 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2026-01-20T10:57:29.634699411+00:00 stderr F I0120 10:57:29.634588 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2026-01-20T10:57:29.640726800+00:00 stderr F I0120 10:57:29.640704 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2026-01-20T10:57:29.640789362+00:00 stderr F I0120 10:57:29.640777 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2026-01-20T10:57:29.644256334+00:00 stderr F I0120 10:57:29.644232 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2026-01-20T10:57:29.644274914+00:00 stderr F I0120 10:57:29.644255 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2026-01-20T10:57:29.649306047+00:00 stderr F I0120 10:57:29.649265 1
log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2026-01-20T10:57:29.649426800+00:00 stderr F I0120 10:57:29.649412 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2026-01-20T10:57:29.654811553+00:00 stderr F I0120 10:57:29.654768 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2026-01-20T10:57:29.654811553+00:00 stderr F I0120 10:57:29.654800 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2026-01-20T10:57:29.714299676+00:00 stderr F I0120 10:57:29.714231 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2026-01-20T10:57:29.714434009+00:00 stderr F I0120 10:57:29.714422 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2026-01-20T10:57:29.915608330+00:00 stderr F I0120 10:57:29.914645 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2026-01-20T10:57:29.915608330+00:00 stderr F I0120 10:57:29.914703 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2026-01-20T10:57:30.115223978+00:00 stderr F I0120 10:57:30.113965 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2026-01-20T10:57:30.115223978+00:00 stderr F I0120 10:57:30.114016 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2026-01-20T10:57:30.317402995+00:00 stderr F I0120 10:57:30.317333 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2026-01-20T10:57:30.317402995+00:00 stderr F I0120 10:57:30.317391 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-whereabouts 2026-01-20T10:57:30.514857797+00:00 stderr F I0120 10:57:30.514771 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2026-01-20T10:57:30.514857797+00:00 stderr F I0120 10:57:30.514820 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2026-01-20T10:57:30.715229515+00:00 stderr F I0120 10:57:30.715104 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2026-01-20T10:57:30.715229515+00:00 stderr F I0120 10:57:30.715156 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2026-01-20T10:57:30.916097538+00:00 stderr F I0120 10:57:30.916012 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2026-01-20T10:57:30.916097538+00:00 stderr F I0120 10:57:30.916080 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2026-01-20T10:57:31.114900775+00:00 stderr F I0120 10:57:31.114810 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2026-01-20T10:57:31.114900775+00:00 stderr F I0120 10:57:31.114882 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2026-01-20T10:57:31.316281531+00:00 stderr F I0120 10:57:31.316193 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2026-01-20T10:57:31.316281531+00:00 stderr F I0120 10:57:31.316245 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2026-01-20T10:57:31.516987939+00:00 stderr F I0120 10:57:31.516901 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was 
successful 2026-01-20T10:57:31.516987939+00:00 stderr F I0120 10:57:31.516958 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2026-01-20T10:57:31.716107495+00:00 stderr F I0120 10:57:31.715961 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2026-01-20T10:57:31.716107495+00:00 stderr F I0120 10:57:31.716013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2026-01-20T10:57:31.916632237+00:00 stderr F I0120 10:57:31.916567 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2026-01-20T10:57:31.916632237+00:00 stderr F I0120 10:57:31.916622 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2026-01-20T10:57:32.128194892+00:00 stderr F I0120 10:57:32.128022 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2026-01-20T10:57:32.128351936+00:00 stderr F I0120 10:57:32.128293 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2026-01-20T10:57:32.323410654+00:00 stderr F I0120 10:57:32.323329 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2026-01-20T10:57:32.323410654+00:00 stderr F I0120 10:57:32.323384 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2026-01-20T10:57:32.516554212+00:00 stderr F I0120 10:57:32.516399 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2026-01-20T10:57:32.516554212+00:00 stderr F I0120 10:57:32.516481 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2026-01-20T10:57:32.714935799+00:00 stderr F I0120 10:57:32.714844 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 
2026-01-20T10:57:32.714935799+00:00 stderr F I0120 10:57:32.714906 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2026-01-20T10:57:32.923147914+00:00 stderr F I0120 10:57:32.923071 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2026-01-20T10:57:32.923215027+00:00 stderr F I0120 10:57:32.923147 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2026-01-20T10:57:33.118385458+00:00 stderr F I0120 10:57:33.118330 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2026-01-20T10:57:33.118442329+00:00 stderr F I0120 10:57:33.118387 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2026-01-20T10:57:33.320489042+00:00 stderr F I0120 10:57:33.320428 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2026-01-20T10:57:33.320555294+00:00 stderr F I0120 10:57:33.320493 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2026-01-20T10:57:33.517091222+00:00 stderr F I0120 10:57:33.517001 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2026-01-20T10:57:33.517154404+00:00 stderr F I0120 10:57:33.517114 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2026-01-20T10:57:33.716196458+00:00 stderr F I0120 10:57:33.715608 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2026-01-20T10:57:33.716196458+00:00 stderr F I0120 10:57:33.716177 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2026-01-20T10:57:33.914524572+00:00 stderr F I0120 
10:57:33.914454 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2026-01-20T10:57:33.914524572+00:00 stderr F I0120 10:57:33.914515 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2026-01-20T10:57:34.126345644+00:00 stderr F I0120 10:57:34.126283 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2026-01-20T10:57:34.126397195+00:00 stderr F I0120 10:57:34.126344 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2026-01-20T10:57:34.316547264+00:00 stderr F I0120 10:57:34.316434 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2026-01-20T10:57:34.316547264+00:00 stderr F I0120 10:57:34.316521 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2026-01-20T10:57:34.515787913+00:00 stderr F I0120 10:57:34.515676 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2026-01-20T10:57:34.515787913+00:00 stderr F I0120 10:57:34.515736 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2026-01-20T10:57:34.714115618+00:00 stderr F I0120 10:57:34.713998 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2026-01-20T10:57:34.714159279+00:00 stderr F I0120 10:57:34.714117 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2026-01-20T10:57:34.920454714+00:00 stderr F I0120 10:57:34.920383 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2026-01-20T10:57:34.920454714+00:00 
stderr F I0120 10:57:34.920444 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2026-01-20T10:57:35.127687324+00:00 stderr F I0120 10:57:35.127613 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2026-01-20T10:57:35.127718385+00:00 stderr F I0120 10:57:35.127687 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2026-01-20T10:57:35.316126638+00:00 stderr F I0120 10:57:35.315897 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2026-01-20T10:57:35.316126638+00:00 stderr F I0120 10:57:35.315957 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2026-01-20T10:57:35.514115754+00:00 stderr F I0120 10:57:35.514030 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2026-01-20T10:57:35.514115754+00:00 stderr F I0120 10:57:35.514100 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2026-01-20T10:57:35.713669322+00:00 stderr F I0120 10:57:35.713617 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2026-01-20T10:57:35.713726283+00:00 stderr F I0120 10:57:35.713670 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2026-01-20T10:57:35.915573271+00:00 stderr F I0120 10:57:35.915483 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2026-01-20T10:57:35.915573271+00:00 stderr F I0120 10:57:35.915552 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 
2026-01-20T10:57:36.115921149+00:00 stderr F I0120 10:57:36.115847 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2026-01-20T10:57:36.116087193+00:00 stderr F I0120 10:57:36.116033 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2026-01-20T10:57:36.326231310+00:00 stderr F I0120 10:57:36.326165 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2026-01-20T10:57:36.326231310+00:00 stderr F I0120 10:57:36.326224 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2026-01-20T10:57:36.527249256+00:00 stderr F I0120 10:57:36.527183 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2026-01-20T10:57:36.527249256+00:00 stderr F I0120 10:57:36.527236 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2026-01-20T10:57:36.723537007+00:00 stderr F I0120 10:57:36.723463 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2026-01-20T10:57:36.723537007+00:00 stderr F I0120 10:57:36.723518 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2026-01-20T10:57:36.924847191+00:00 stderr F I0120 10:57:36.924775 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2026-01-20T10:57:36.924847191+00:00 stderr F I0120 10:57:36.924829 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2026-01-20T10:57:37.121198764+00:00 stderr F I0120 10:57:37.121103 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2026-01-20T10:57:37.121198764+00:00 stderr F I0120 10:57:37.121172 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2026-01-20T10:57:37.340106883+00:00 stderr F I0120 10:57:37.340003 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2026-01-20T10:57:37.340106883+00:00 stderr F I0120 10:57:37.340077 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2026-01-20T10:57:37.574670956+00:00 stderr F I0120 10:57:37.574549 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2026-01-20T10:57:37.574670956+00:00 stderr F I0120 10:57:37.574607 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2026-01-20T10:57:37.717990206+00:00 stderr F I0120 10:57:37.717871 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2026-01-20T10:57:37.717990206+00:00 stderr F I0120 10:57:37.717960 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2026-01-20T10:57:37.917124152+00:00 stderr F I0120 10:57:37.917032 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2026-01-20T10:57:37.917191284+00:00 stderr F I0120 10:57:37.917136 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2026-01-20T10:57:38.115266012+00:00 stderr F I0120 10:57:38.115194 1 log.go:245] Apply / Create 
of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2026-01-20T10:57:38.115266012+00:00 stderr F I0120 10:57:38.115254 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2026-01-20T10:57:38.317048528+00:00 stderr F I0120 10:57:38.316930 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2026-01-20T10:57:38.317048528+00:00 stderr F I0120 10:57:38.316989 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2026-01-20T10:57:38.515818605+00:00 stderr F I0120 10:57:38.515686 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2026-01-20T10:57:38.515917288+00:00 stderr F I0120 10:57:38.515829 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2026-01-20T10:57:38.716284406+00:00 stderr F I0120 10:57:38.716182 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2026-01-20T10:57:38.716284406+00:00 stderr F I0120 10:57:38.716251 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2026-01-20T10:57:38.914758135+00:00 stderr F I0120 10:57:38.914687 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2026-01-20T10:57:38.914758135+00:00 stderr F I0120 10:57:38.914744 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2026-01-20T10:57:39.115947585+00:00 stderr F I0120 10:57:39.115883 1 log.go:245] Apply / Create of (/v1, 
Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2026-01-20T10:57:39.115947585+00:00 stderr F I0120 10:57:39.115936 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2026-01-20T10:57:39.314300961+00:00 stderr F I0120 10:57:39.314235 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2026-01-20T10:57:39.314300961+00:00 stderr F I0120 10:57:39.314291 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2026-01-20T10:57:39.394197004+00:00 stderr F I0120 10:57:39.394118 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:39.515491102+00:00 stderr F I0120 10:57:39.515424 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2026-01-20T10:57:39.515527173+00:00 stderr F I0120 10:57:39.515496 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2026-01-20T10:57:39.713351844+00:00 stderr F I0120 10:57:39.713264 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2026-01-20T10:57:39.713351844+00:00 stderr F I0120 10:57:39.713326 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2026-01-20T10:57:39.915354636+00:00 stderr F I0120 10:57:39.915278 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2026-01-20T10:57:39.915397497+00:00 stderr F I0120 10:57:39.915369 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2026-01-20T10:57:40.083209035+00:00 stderr F I0120 10:57:40.083101 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:40.115576611+00:00 stderr F I0120 10:57:40.115510 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2026-01-20T10:57:40.115576611+00:00 stderr F I0120 10:57:40.115562 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2026-01-20T10:57:40.315945470+00:00 stderr F I0120 10:57:40.315856 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2026-01-20T10:57:40.315945470+00:00 stderr F I0120 10:57:40.315925 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2026-01-20T10:57:40.514998514+00:00 stderr F I0120 10:57:40.514932 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2026-01-20T10:57:40.514998514+00:00 stderr F I0120 10:57:40.514985 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2026-01-20T10:57:40.715587349+00:00 stderr F I0120 10:57:40.715525 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2026-01-20T10:57:40.715587349+00:00 stderr F I0120 10:57:40.715574 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2026-01-20T10:57:40.914171330+00:00 stderr F I0120 10:57:40.914088 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was 
successful 2026-01-20T10:57:40.914171330+00:00 stderr F I0120 10:57:40.914146 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2026-01-20T10:57:41.129139345+00:00 stderr F I0120 10:57:41.127000 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2026-01-20T10:57:41.129139345+00:00 stderr F I0120 10:57:41.127064 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2026-01-20T10:57:41.134106217+00:00 stderr F I0120 10:57:41.131440 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:41.319982552+00:00 stderr F I0120 10:57:41.319917 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2026-01-20T10:57:41.320017433+00:00 stderr F I0120 10:57:41.319987 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2026-01-20T10:57:41.516668014+00:00 stderr F I0120 10:57:41.516603 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2026-01-20T10:57:41.516701914+00:00 stderr F I0120 10:57:41.516666 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2026-01-20T10:57:41.524813749+00:00 stderr F I0120 10:57:41.524701 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:41.670617965+00:00 stderr F I0120 10:57:41.670537 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:41.686398992+00:00 stderr F I0120 10:57:41.686331 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.696509879+00:00 stderr F I0120 10:57:41.696416 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.703932166+00:00 stderr F I0120 10:57:41.703860 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.713143969+00:00 stderr F I0120 10:57:41.713054 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.718627084+00:00 stderr F I0120 10:57:41.718571 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2026-01-20T10:57:41.718627084+00:00 stderr F I0120 10:57:41.718617 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2026-01-20T10:57:41.722704663+00:00 stderr F I0120 10:57:41.722649 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.735297325+00:00 stderr F I0120 10:57:41.735226 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.745257299+00:00 stderr F I0120 10:57:41.745169 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.752715386+00:00 stderr F I0120 10:57:41.752532 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.758571630+00:00 stderr F I0120 10:57:41.758513 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:41.831745526+00:00 stderr F I0120 10:57:41.831651 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:41.832240279+00:00 stderr F I0120 10:57:41.832190 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2026-01-20T10:57:41.835644999+00:00 stderr F I0120 10:57:41.835551 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2026-01-20T10:57:41.835644999+00:00 stderr F I0120 10:57:41.835619 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.835703811+00:00 stderr F I0120 10:57:41.835663 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.835703811+00:00 stderr F I0120 10:57:41.835689 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.835738102+00:00 stderr F I0120 10:57:41.835716 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.835813263+00:00 stderr F I0120 10:57:41.835784 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.835847264+00:00 stderr F I0120 10:57:41.835829 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.835900296+00:00 stderr F I0120 10:57:41.835871 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 
2026-01-20T10:57:41.835935697+00:00 stderr F I0120 10:57:41.835918 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.835977828+00:00 stderr F I0120 10:57:41.835960 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.836020229+00:00 stderr F I0120 10:57:41.836003 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.836079080+00:00 stderr F I0120 10:57:41.836045 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.836094461+00:00 stderr F I0120 10:57:41.836086 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2026-01-20T10:57:41.848995702+00:00 stderr F I0120 10:57:41.848938 1 log.go:245] Network operator config updated with conditions: 2026-01-20T10:57:41.848995702+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:57:41.848995702+00:00 stderr F status: "False" 2026-01-20T10:57:41.848995702+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:57:41.848995702+00:00 stderr F - lastTransitionTime: "2026-01-20T10:57:41Z" 2026-01-20T10:57:41.848995702+00:00 stderr F message: 'Error while updating operator configuration: could not apply (apps/v1, 2026-01-20T10:57:41.848995702+00:00 stderr F Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / 2026-01-20T10:57:41.848995702+00:00 stderr F update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: 2026-01-20T10:57:41.848995702+00:00 stderr F Patch "https://api-int.crc.testing:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": 2026-01-20T10:57:41.848995702+00:00 stderr F dial tcp 38.102.83.220:6443: connect: connection 
refused' 2026-01-20T10:57:41.848995702+00:00 stderr F reason: ApplyOperatorConfig 2026-01-20T10:57:41.848995702+00:00 stderr F status: "True" 2026-01-20T10:57:41.848995702+00:00 stderr F type: Degraded 2026-01-20T10:57:41.848995702+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:57:41.848995702+00:00 stderr F status: "True" 2026-01-20T10:57:41.848995702+00:00 stderr F type: Upgradeable 2026-01-20T10:57:41.848995702+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:57:41.848995702+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 2026-01-20T10:57:41.848995702+00:00 stderr F 1 nodes) 2026-01-20T10:57:41.848995702+00:00 stderr F reason: Deploying 2026-01-20T10:57:41.848995702+00:00 stderr F status: "True" 2026-01-20T10:57:41.848995702+00:00 stderr F type: Progressing 2026-01-20T10:57:41.848995702+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:57:41.848995702+00:00 stderr F status: "True" 2026-01-20T10:57:41.848995702+00:00 stderr F type: Available 2026-01-20T10:57:41.881153512+00:00 stderr F I0120 10:57:41.881089 1 log.go:245] ClusterOperator config status updated with conditions: 2026-01-20T10:57:41.881153512+00:00 stderr F - lastTransitionTime: "2026-01-20T10:57:41Z" 2026-01-20T10:57:41.881153512+00:00 stderr F message: 'Error while updating operator configuration: could not apply (apps/v1, 2026-01-20T10:57:41.881153512+00:00 stderr F Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / 2026-01-20T10:57:41.881153512+00:00 stderr F update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: 2026-01-20T10:57:41.881153512+00:00 stderr F Patch "https://api-int.crc.testing:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": 2026-01-20T10:57:41.881153512+00:00 stderr F dial tcp 38.102.83.220:6443: connect: 
connection refused' 2026-01-20T10:57:41.881153512+00:00 stderr F reason: ApplyOperatorConfig 2026-01-20T10:57:41.881153512+00:00 stderr F status: "True" 2026-01-20T10:57:41.881153512+00:00 stderr F type: Degraded 2026-01-20T10:57:41.881153512+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:57:41.881153512+00:00 stderr F status: "True" 2026-01-20T10:57:41.881153512+00:00 stderr F type: Upgradeable 2026-01-20T10:57:41.881153512+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2026-01-20T10:57:41.881153512+00:00 stderr F status: "False" 2026-01-20T10:57:41.881153512+00:00 stderr F type: ManagementStateDegraded 2026-01-20T10:57:41.881153512+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z" 2026-01-20T10:57:41.881153512+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 2026-01-20T10:57:41.881153512+00:00 stderr F 1 nodes) 2026-01-20T10:57:41.881153512+00:00 stderr F reason: Deploying 2026-01-20T10:57:41.881153512+00:00 stderr F status: "True" 2026-01-20T10:57:41.881153512+00:00 stderr F type: Progressing 2026-01-20T10:57:41.881153512+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2026-01-20T10:57:41.881153512+00:00 stderr F status: "True" 2026-01-20T10:57:41.881153512+00:00 stderr F type: Available 2026-01-20T10:57:41.914853193+00:00 stderr F I0120 10:57:41.914776 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2026-01-20T10:57:41.914853193+00:00 stderr F I0120 10:57:41.914834 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2026-01-20T10:57:41.915654955+00:00 stderr F I0120 10:57:41.915605 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:41.987713300+00:00 stderr F I0120 10:57:41.987624 1 reflector.go:351] Caches 
populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:42.126609174+00:00 stderr F I0120 10:57:42.126495 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2026-01-20T10:57:42.126609174+00:00 stderr F I0120 10:57:42.126582 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2026-01-20T10:57:42.179831561+00:00 stderr F I0120 10:57:42.179750 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:42.318179540+00:00 stderr F I0120 10:57:42.318091 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2026-01-20T10:57:42.318179540+00:00 stderr F I0120 10:57:42.318147 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2026-01-20T10:57:42.519256397+00:00 stderr F I0120 10:57:42.519142 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2026-01-20T10:57:42.519256397+00:00 stderr F I0120 10:57:42.519195 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2026-01-20T10:57:42.716869954+00:00 stderr F I0120 10:57:42.716750 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2026-01-20T10:57:42.716869954+00:00 stderr F I0120 10:57:42.716839 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2026-01-20T10:57:42.847652322+00:00 stderr F I0120 10:57:42.847540 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:42.869484549+00:00 stderr F I0120 10:57:42.869376 1 log.go:245] The 
check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:57:42.879589947+00:00 stderr F I0120 10:57:42.879508 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:42.886630363+00:00 stderr F I0120 10:57:42.886586 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:42.896216447+00:00 stderr F I0120 10:57:42.894190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:42.901331591+00:00 stderr F I0120 10:57:42.900914 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:42.907183866+00:00 stderr F I0120 10:57:42.907132 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:57:42.913090243+00:00 stderr F I0120 10:57:42.913045 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2026-01-20T10:57:42.913161285+00:00 stderr F I0120 10:57:42.913130 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2026-01-20T10:57:42.914045188+00:00 stderr F I0120 10:57:42.914002 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:57:43.024726854+00:00 stderr F I0120 10:57:43.024652 1 reflector.go:351] Caches populated for *v1.Endpoints from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:43.085707947+00:00 stderr F I0120 10:57:43.085603 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:43.113995385+00:00 stderr F I0120 10:57:43.113892 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2026-01-20T10:57:43.113995385+00:00 stderr F I0120 10:57:43.113982 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2026-01-20T10:57:43.283858718+00:00 stderr F I0120 10:57:43.283775 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:43.321018700+00:00 stderr F I0120 10:57:43.320936 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2026-01-20T10:57:43.321114493+00:00 stderr F I0120 10:57:43.321030 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2026-01-20T10:57:43.485671055+00:00 stderr F I0120 10:57:43.485579 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:57:43.521217625+00:00 stderr F I0120 10:57:43.521138 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2026-01-20T10:57:43.521217625+00:00 stderr F I0120 10:57:43.521194 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2026-01-20T10:57:43.684130163+00:00 stderr F I0120 10:57:43.684049 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 
2026-01-20T10:57:43.731296861+00:00 stderr F I0120 10:57:43.731229 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2026-01-20T10:57:43.731296861+00:00 stderr F I0120 10:57:43.731291 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2026-01-20T10:57:43.815335533+00:00 stderr F I0120 10:57:43.815257 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:43.888716733+00:00 stderr F I0120 10:57:43.888647 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:43.913291584+00:00 stderr F I0120 10:57:43.913229 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2026-01-20T10:57:43.913291584+00:00 stderr F I0120 10:57:43.913277 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2026-01-20T10:57:44.086929015+00:00 stderr F I0120 10:57:44.086855 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:44.113697663+00:00 stderr F I0120 10:57:44.113641 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2026-01-20T10:57:44.113755304+00:00 stderr F I0120 10:57:44.113698 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2026-01-20T10:57:44.238024451+00:00 stderr F I0120 10:57:44.237969 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:44.284989883+00:00 stderr F I0120 10:57:44.284916 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:44.315195702+00:00 stderr F I0120 10:57:44.315144 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2026-01-20T10:57:44.315222892+00:00 stderr F I0120 10:57:44.315196 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2026-01-20T10:57:44.486473141+00:00 stderr F I0120 10:57:44.486392 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:57:44.516385752+00:00 stderr F I0120 10:57:44.516308 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2026-01-20T10:57:44.516427383+00:00 stderr F I0120 10:57:44.516386 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2026-01-20T10:57:44.685295079+00:00 stderr F I0120 10:57:44.685229 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:57:44.715168699+00:00 stderr F I0120 10:57:44.715104 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2026-01-20T10:57:44.715168699+00:00 stderr F I0120 10:57:44.715159 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2026-01-20T10:57:44.767345498+00:00 stderr F I0120 10:57:44.767262 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:44.883550982+00:00 stderr F I0120 10:57:44.883445 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:44.914465249+00:00 stderr F I0120 10:57:44.914377 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2026-01-20T10:57:44.914465249+00:00 stderr F I0120 10:57:44.914451 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2026-01-20T10:57:45.019718233+00:00 stderr F I0120 10:57:45.019651 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.082920614+00:00 stderr F I0120 10:57:45.082433 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:45.101972848+00:00 stderr F I0120 10:57:45.101825 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.102205844+00:00 stderr F I0120 10:57:45.102174 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2026-01-20T10:57:45.113399150+00:00 stderr F I0120 10:57:45.113365 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2026-01-20T10:57:45.113422151+00:00 stderr F I0120 10:57:45.113416 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2026-01-20T10:57:45.286374705+00:00 stderr F I0120 10:57:45.285777 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:57:45.322088149+00:00 stderr F I0120 10:57:45.322015 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 
was successful 2026-01-20T10:57:45.322141801+00:00 stderr F I0120 10:57:45.322085 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2026-01-20T10:57:45.485992954+00:00 stderr F I0120 10:57:45.485879 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:45.519537361+00:00 stderr F I0120 10:57:45.518984 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2026-01-20T10:57:45.519651124+00:00 stderr F I0120 10:57:45.519529 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2026-01-20T10:57:45.624089175+00:00 stderr F I0120 10:57:45.623439 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.624089175+00:00 stderr F I0120 10:57:45.623681 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2026-01-20T10:57:45.624642260+00:00 stderr F I0120 10:57:45.624191 1 log.go:245] successful reconciliation 2026-01-20T10:57:45.635327603+00:00 stderr F I0120 10:57:45.635253 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2026-01-20T10:57:45.635910609+00:00 stderr F I0120 10:57:45.635872 1 log.go:245] successful reconciliation 2026-01-20T10:57:45.646788246+00:00 stderr F I0120 10:57:45.646719 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2026-01-20T10:57:45.647163106+00:00 stderr F I0120 10:57:45.647131 1 log.go:245] successful reconciliation 2026-01-20T10:57:45.685619703+00:00 stderr F I0120 10:57:45.685547 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics 
is applied 2026-01-20T10:57:45.691123528+00:00 stderr F I0120 10:57:45.690929 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.716871639+00:00 stderr F I0120 10:57:45.716532 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2026-01-20T10:57:45.716871639+00:00 stderr F I0120 10:57:45.716582 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2026-01-20T10:57:45.775903680+00:00 stderr F I0120 10:57:45.775845 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.861030542+00:00 stderr F I0120 10:57:45.860906 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.887875471+00:00 stderr F I0120 10:57:45.887796 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:45.915739708+00:00 stderr F I0120 10:57:45.915659 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2026-01-20T10:57:45.915739708+00:00 stderr F I0120 10:57:45.915716 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2026-01-20T10:57:45.922703163+00:00 stderr F I0120 10:57:45.922639 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.995119848+00:00 stderr F I0120 10:57:45.994957 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:46.019107672+00:00 stderr F 
I0120 10:57:46.018891 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:46.093150830+00:00 stderr F I0120 10:57:46.091102 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:46.125147646+00:00 stderr F I0120 10:57:46.121437 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2026-01-20T10:57:46.125147646+00:00 stderr F I0120 10:57:46.121491 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2026-01-20T10:57:46.292174514+00:00 stderr F I0120 10:57:46.291261 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:57:46.324408226+00:00 stderr F I0120 10:57:46.321519 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2026-01-20T10:57:46.324408226+00:00 stderr F I0120 10:57:46.321589 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2026-01-20T10:57:46.482014974+00:00 stderr F I0120 10:57:46.481926 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:46.516639319+00:00 stderr F I0120 10:57:46.516561 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2026-01-20T10:57:46.516639319+00:00 stderr F I0120 10:57:46.516621 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2026-01-20T10:57:46.689997444+00:00 stderr F I0120 10:57:46.689916 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:57:46.717118491+00:00 stderr F I0120 10:57:46.717034 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2026-01-20T10:57:46.717118491+00:00 stderr F I0120 10:57:46.717110 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2026-01-20T10:57:46.887109166+00:00 stderr F I0120 10:57:46.885795 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:46.917679885+00:00 stderr F I0120 10:57:46.917600 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2026-01-20T10:57:46.917679885+00:00 stderr F I0120 10:57:46.917670 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2026-01-20T10:57:47.082598417+00:00 stderr F I0120 10:57:47.081809 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:47.119956374+00:00 stderr F I0120 10:57:47.119589 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2026-01-20T10:57:47.119956374+00:00 stderr F I0120 10:57:47.119731 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2026-01-20T10:57:47.288937444+00:00 stderr F I0120 10:57:47.288300 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:57:47.315174497+00:00 stderr F I0120 10:57:47.315096 
1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2026-01-20T10:57:47.315174497+00:00 stderr F I0120 10:57:47.315155 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2026-01-20T10:57:47.342648754+00:00 stderr F I0120 10:57:47.342584 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:47.342723226+00:00 stderr F I0120 10:57:47.342686 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2026-01-20T10:57:47.342723226+00:00 stderr F I0120 10:57:47.342708 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2026-01-20T10:57:47.342723226+00:00 stderr F I0120 10:57:47.342714 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2026-01-20T10:57:47.342723226+00:00 stderr F I0120 10:57:47.342719 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2026-01-20T10:57:47.342748046+00:00 stderr F I0120 10:57:47.342736 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2026-01-20T10:57:47.342748046+00:00 stderr F I0120 10:57:47.342743 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2026-01-20T10:57:47.342756457+00:00 stderr F I0120 10:57:47.342748 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2026-01-20T10:57:47.342756457+00:00 stderr F I0120 10:57:47.342753 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2026-01-20T10:57:47.342785327+00:00 stderr F I0120 10:57:47.342763 1 pod_watcher.go:131] Operand /, 
Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2026-01-20T10:57:47.342785327+00:00 stderr F I0120 10:57:47.342769 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2026-01-20T10:57:47.342785327+00:00 stderr F I0120 10:57:47.342773 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:57:47.342785327+00:00 stderr F I0120 10:57:47.342777 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:57:47.488141472+00:00 stderr F I0120 10:57:47.488040 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:47.514252462+00:00 stderr F I0120 10:57:47.514146 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2026-01-20T10:57:47.514252462+00:00 stderr F I0120 10:57:47.514201 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2026-01-20T10:57:47.579738843+00:00 stderr F I0120 10:57:47.579587 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:47.581840479+00:00 stderr F I0120 10:57:47.579875 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2026-01-20T10:57:47.689291720+00:00 stderr F I0120 10:57:47.689170 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:47.714254230+00:00 stderr F I0120 10:57:47.714182 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2026-01-20T10:57:47.714254230+00:00 stderr F 
I0120 10:57:47.714244 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2026-01-20T10:57:47.783213904+00:00 stderr F I0120 10:57:47.783105 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:47.892499735+00:00 stderr F I0120 10:57:47.892394 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:47.916291604+00:00 stderr F I0120 10:57:47.916221 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2026-01-20T10:57:47.916291604+00:00 stderr F I0120 10:57:47.916271 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2026-01-20T10:57:48.084843961+00:00 stderr F I0120 10:57:48.084775 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:48.113732005+00:00 stderr F I0120 10:57:48.113665 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2026-01-20T10:57:48.113732005+00:00 stderr F I0120 10:57:48.113717 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2026-01-20T10:57:48.285309802+00:00 stderr F I0120 10:57:48.285239 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:57:48.313796615+00:00 stderr F I0120 10:57:48.313728 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2026-01-20T10:57:48.313834726+00:00 stderr F I0120 10:57:48.313791 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2026-01-20T10:57:48.483780151+00:00 stderr F I0120 10:57:48.483714 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:57:48.514224876+00:00 stderr F I0120 10:57:48.514161 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2026-01-20T10:57:48.514224876+00:00 stderr F I0120 10:57:48.514218 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2026-01-20T10:57:48.682994509+00:00 stderr F I0120 10:57:48.682930 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:48.714302577+00:00 stderr F I0120 10:57:48.714106 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2026-01-20T10:57:48.714302577+00:00 stderr F I0120 10:57:48.714158 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2026-01-20T10:57:48.883999245+00:00 stderr F I0120 10:57:48.883939 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:48.906193672+00:00 stderr F I0120 10:57:48.906125 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:48.914291086+00:00 stderr F I0120 10:57:48.914237 1 log.go:245] Apply / Create 
of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2026-01-20T10:57:48.914317557+00:00 stderr F I0120 10:57:48.914297 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2026-01-20T10:57:49.013673844+00:00 stderr F I0120 10:57:49.013601 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.086259324+00:00 stderr F I0120 10:57:49.086177 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:57:49.101517028+00:00 stderr F I0120 10:57:49.101448 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.121271240+00:00 stderr F I0120 10:57:49.121216 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2026-01-20T10:57:49.121302991+00:00 stderr F I0120 10:57:49.121295 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2026-01-20T10:57:49.361557845+00:00 stderr F I0120 10:57:49.360467 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.365113479+00:00 stderr F I0120 10:57:49.365033 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2026-01-20T10:57:49.365147600+00:00 stderr F I0120 10:57:49.365113 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2026-01-20T10:57:49.368523599+00:00 stderr F I0120 10:57:49.368419 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:49.488400719+00:00 stderr F I0120 10:57:49.486968 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:49.506980320+00:00 stderr F I0120 10:57:49.505930 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.515241718+00:00 stderr F I0120 10:57:49.515155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2026-01-20T10:57:49.515280479+00:00 stderr F I0120 10:57:49.515262 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2026-01-20T10:57:49.685519121+00:00 stderr F I0120 10:57:49.685445 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:49.715496175+00:00 stderr F I0120 10:57:49.715423 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2026-01-20T10:57:49.715534126+00:00 stderr F I0120 10:57:49.715493 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2026-01-20T10:57:49.825992526+00:00 stderr F I0120 10:57:49.825916 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.884174945+00:00 stderr F I0120 10:57:49.884035 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:49.917710762+00:00 stderr F I0120 10:57:49.917634 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2026-01-20T10:57:49.917710762+00:00 stderr F I0120 10:57:49.917694 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2026-01-20T10:57:50.085827558+00:00 stderr F I0120 10:57:50.085719 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:57:50.116600631+00:00 stderr F I0120 10:57:50.116518 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2026-01-20T10:57:50.116600631+00:00 stderr F I0120 10:57:50.116589 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2026-01-20T10:57:50.288460576+00:00 stderr F I0120 10:57:50.288354 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:57:50.320552875+00:00 stderr F I0120 10:57:50.320469 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2026-01-20T10:57:50.328321551+00:00 stderr F I0120 10:57:50.328247 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:50.328576198+00:00 stderr F I0120 10:57:50.328530 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 2026-01-20T10:57:50.335356366+00:00 stderr F I0120 10:57:50.335288 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2026-01-20T10:57:50.335387637+00:00 stderr F I0120 10:57:50.335356 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 
2026-01-20T10:57:50.339711352+00:00 stderr F I0120 10:57:50.339653 1 log.go:245] Network operator config updated with conditions:
2026-01-20T10:57:50.339711352+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2026-01-20T10:57:50.339711352+00:00 stderr F status: "False"
2026-01-20T10:57:50.339711352+00:00 stderr F type: ManagementStateDegraded
2026-01-20T10:57:50.339711352+00:00 stderr F - lastTransitionTime: "2026-01-20T10:57:50Z"
2026-01-20T10:57:50.339711352+00:00 stderr F status: "False"
2026-01-20T10:57:50.339711352+00:00 stderr F type: Degraded
2026-01-20T10:57:50.339711352+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2026-01-20T10:57:50.339711352+00:00 stderr F status: "True"
2026-01-20T10:57:50.339711352+00:00 stderr F type: Upgradeable
2026-01-20T10:57:50.339711352+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z"
2026-01-20T10:57:50.339711352+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting
2026-01-20T10:57:50.339711352+00:00 stderr F 1 nodes)
2026-01-20T10:57:50.339711352+00:00 stderr F reason: Deploying
2026-01-20T10:57:50.339711352+00:00 stderr F status: "True"
2026-01-20T10:57:50.339711352+00:00 stderr F type: Progressing
2026-01-20T10:57:50.339711352+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2026-01-20T10:57:50.339711352+00:00 stderr F status: "True"
2026-01-20T10:57:50.339711352+00:00 stderr F type: Available
2026-01-20T10:57:50.341230152+00:00 stderr F I0120 10:57:50.339947 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2026-01-20T10:57:50.342266540+00:00 stderr F I0120 10:57:50.342208 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.342266540+00:00 stderr F I0120 10:57:50.342260 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle'
2026-01-20T10:57:50.346898412+00:00 stderr F I0120 10:57:50.346837 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.346926772+00:00 stderr F I0120 10:57:50.346901 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca'
2026-01-20T10:57:50.355281293+00:00 stderr F I0120 10:57:50.352823 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.355281293+00:00 stderr F I0120 10:57:50.352874 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca'
2026-01-20T10:57:50.358133819+00:00 stderr F I0120 10:57:50.357981 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.358133819+00:00 stderr F I0120 10:57:50.358078 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca'
2026-01-20T10:57:50.358877028+00:00 stderr F I0120 10:57:50.358821 1 log.go:245] ClusterOperator config status updated with conditions:
2026-01-20T10:57:50.358877028+00:00 stderr F - lastTransitionTime: "2026-01-20T10:57:50Z"
2026-01-20T10:57:50.358877028+00:00 stderr F status: "False"
2026-01-20T10:57:50.358877028+00:00 stderr F type: Degraded
2026-01-20T10:57:50.358877028+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2026-01-20T10:57:50.358877028+00:00 stderr F status: "True"
2026-01-20T10:57:50.358877028+00:00 stderr F type: Upgradeable
2026-01-20T10:57:50.358877028+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2026-01-20T10:57:50.358877028+00:00 stderr F status: "False"
2026-01-20T10:57:50.358877028+00:00 stderr F type: ManagementStateDegraded
2026-01-20T10:57:50.358877028+00:00 stderr F - lastTransitionTime: "2026-01-20T10:56:43Z"
2026-01-20T10:57:50.358877028+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting
2026-01-20T10:57:50.358877028+00:00 stderr F 1 nodes)
2026-01-20T10:57:50.358877028+00:00 stderr F reason: Deploying
2026-01-20T10:57:50.358877028+00:00 stderr F status: "True"
2026-01-20T10:57:50.358877028+00:00 stderr F type: Progressing
2026-01-20T10:57:50.358877028+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2026-01-20T10:57:50.358877028+00:00 stderr F status: "True"
2026-01-20T10:57:50.358877028+00:00 stderr F type: Available
2026-01-20T10:57:50.358877028+00:00 stderr F I0120 10:57:50.358845 1 log.go:245] Operconfig Controller complete
2026-01-20T10:57:50.374159113+00:00 stderr F I0120 10:57:50.371145 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.374159113+00:00 stderr F I0120 10:57:50.371204 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt'
2026-01-20T10:57:50.385935764+00:00 stderr F I0120 10:57:50.384410 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.385935764+00:00 stderr F I0120 10:57:50.384496 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests'
2026-01-20T10:57:50.395783155+00:00 stderr F I0120 10:57:50.395629 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.395783155+00:00 stderr F I0120 10:57:50.395686 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2026-01-20T10:57:50.402295227+00:00 stderr F I0120 10:57:50.402240 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.402295227+00:00 stderr F I0120 10:57:50.402280 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs'
2026-01-20T10:57:50.407505385+00:00 stderr F I0120 10:57:50.407474 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2026-01-20T10:57:50.485585320+00:00 stderr F I0120 10:57:50.485518 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:57:50.686033610+00:00 stderr F I0120 10:57:50.685965 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:57:50.889139962+00:00 stderr F I0120 10:57:50.888877 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2026-01-20T10:57:51.087003284+00:00 stderr F I0120 10:57:51.086901 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:57:51.136867323+00:00 stderr F I0120 10:57:51.136786 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:51.286157440+00:00 stderr F I0120 10:57:51.286088 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2026-01-20T10:57:51.310490804+00:00 stderr F I0120 10:57:51.310440 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:51.491179793+00:00 stderr F I0120 10:57:51.490562 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:51.688177453+00:00 stderr F I0120 10:57:51.688096 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:51.885436229+00:00 stderr F I0120 10:57:51.885321 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:57:52.087083831+00:00 stderr F I0120 10:57:52.086972 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:57:52.285622962+00:00 stderr F I0120 10:57:52.285530 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:52.482718325+00:00 stderr F I0120 10:57:52.482640 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:52.575755705+00:00 stderr F I0120 10:57:52.575696 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:52.576002271+00:00 stderr F I0120 10:57:52.575907 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2026-01-20T10:57:52.576002271+00:00 stderr F I0120 10:57:52.575926 1 pod_watcher.go:131] Operand /, Kind= 
openshift-multus/multus-admission-controller updated, re-generating status 2026-01-20T10:57:52.576002271+00:00 stderr F I0120 10:57:52.575933 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2026-01-20T10:57:52.576002271+00:00 stderr F I0120 10:57:52.575936 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2026-01-20T10:57:53.068562568+00:00 stderr F I0120 10:57:53.068474 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:53.120994653+00:00 stderr F I0120 10:57:53.120911 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:53.121117467+00:00 stderr F I0120 10:57:53.121090 1 log.go:245] Reconciling proxy 'cluster' 2026-01-20T10:57:53.123805878+00:00 stderr F I0120 10:57:53.123769 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2026-01-20T10:57:53.135272722+00:00 stderr F I0120 10:57:53.135200 1 log.go:245] Reconciling proxy 'cluster' complete 2026-01-20T10:57:53.583127175+00:00 stderr F I0120 10:57:53.583014 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:53.586091693+00:00 stderr F I0120 10:57:53.586027 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:54.051938873+00:00 stderr F I0120 10:57:54.051857 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:55.352898407+00:00 stderr F I0120 10:57:55.351454 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:55.525326078+00:00 stderr F I0120 10:57:55.525249 1 
reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:55.534944371+00:00 stderr F I0120 10:57:55.534864 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:57:55.540997251+00:00 stderr F I0120 10:57:55.540931 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:55.546672071+00:00 stderr F I0120 10:57:55.546613 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:55.548565102+00:00 stderr F I0120 10:57:55.548513 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:55.553016820+00:00 stderr F I0120 10:57:55.552973 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:55.557570980+00:00 stderr F I0120 10:57:55.557543 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:57:55.564948435+00:00 stderr F I0120 10:57:55.564889 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2026-01-20T10:57:55.571977221+00:00 stderr F I0120 10:57:55.571930 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2026-01-20T10:57:55.576578153+00:00 stderr F I0120 10:57:55.576545 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:57:55.582529760+00:00 stderr F I0120 10:57:55.582478 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2026-01-20T10:58:07.514103278+00:00 stderr F I0120 10:58:07.513671 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2026-01-20T10:58:07.523685752+00:00 stderr F I0120 10:58:07.519594 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:58:07.529195593+00:00 stderr F I0120 10:58:07.527268 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:58:07.529195593+00:00 stderr F I0120 10:58:07.527301 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2026-01-20T10:58:07.529469639+00:00 stderr F I0120 10:58:07.529432 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:58:07.545354882+00:00 stderr F I0120 10:58:07.539441 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2026-01-20T10:58:07.545354882+00:00 stderr F I0120 10:58:07.545024 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2026-01-20T10:58:07.559547344+00:00 stderr F I0120 10:58:07.559483 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2026-01-20T10:58:07.574572225+00:00 stderr F I0120 10:58:07.574519 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2026-01-20T10:58:07.606644929+00:00 stderr F I0120 10:58:07.606587 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2026-01-20T10:58:07.635221286+00:00 stderr F I0120 10:58:07.631813 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2026-01-20T10:58:07.635221286+00:00 stderr F I0120 10:58:07.632494 1 log.go:245] Network operator config updated with conditions:
2026-01-20T10:58:07.635221286+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2026-01-20T10:58:07.635221286+00:00 stderr F status: "False"
2026-01-20T10:58:07.635221286+00:00 stderr F type: ManagementStateDegraded
2026-01-20T10:58:07.635221286+00:00 stderr F - lastTransitionTime: "2026-01-20T10:57:50Z"
2026-01-20T10:58:07.635221286+00:00 stderr F status: "False"
2026-01-20T10:58:07.635221286+00:00 stderr F type: Degraded
2026-01-20T10:58:07.635221286+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2026-01-20T10:58:07.635221286+00:00 stderr F status: "True"
2026-01-20T10:58:07.635221286+00:00 stderr F type: Upgradeable
2026-01-20T10:58:07.635221286+00:00 stderr F - lastTransitionTime: "2026-01-20T10:58:07Z"
2026-01-20T10:58:07.635221286+00:00 stderr F status: "False"
2026-01-20T10:58:07.635221286+00:00 stderr F type: Progressing
2026-01-20T10:58:07.635221286+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2026-01-20T10:58:07.635221286+00:00 stderr F status: "True"
2026-01-20T10:58:07.635221286+00:00 stderr F type: Available
2026-01-20T10:58:07.635221286+00:00 stderr F I0120 10:58:07.632796 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2026-01-20T10:58:07.693660029+00:00 stderr F I0120 10:58:07.693602 1 log.go:245] ClusterOperator config status updated with conditions:
2026-01-20T10:58:07.693660029+00:00 stderr F - lastTransitionTime: "2026-01-20T10:57:50Z"
2026-01-20T10:58:07.693660029+00:00 stderr F status: "False"
2026-01-20T10:58:07.693660029+00:00 stderr F type: Degraded
2026-01-20T10:58:07.693660029+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2026-01-20T10:58:07.693660029+00:00 stderr F status: "True"
2026-01-20T10:58:07.693660029+00:00 stderr F type: Upgradeable
2026-01-20T10:58:07.693660029+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2026-01-20T10:58:07.693660029+00:00 stderr F status: "False"
2026-01-20T10:58:07.693660029+00:00 stderr F type: ManagementStateDegraded
2026-01-20T10:58:07.693660029+00:00 stderr F - lastTransitionTime: "2026-01-20T10:58:07Z"
2026-01-20T10:58:07.693660029+00:00 stderr F status: "False"
2026-01-20T10:58:07.693660029+00:00 stderr F type: Progressing
2026-01-20T10:58:07.693660029+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2026-01-20T10:58:07.693660029+00:00 stderr F status: "True"
2026-01-20T10:58:07.693660029+00:00 stderr F type: Available
2026-01-20T10:58:16.736195073+00:00 stderr F I0120 10:58:16.736108 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2026-01-20T10:58:16.736777868+00:00 stderr F I0120 10:58:16.736733 1 log.go:245] successful reconciliation
2026-01-20T10:58:18.337548338+00:00 stderr F I0120 10:58:18.337481 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2026-01-20T10:58:18.337958468+00:00 stderr F I0120 10:58:18.337921 1 log.go:245] successful reconciliation
2026-01-20T10:58:19.736597064+00:00 stderr F I0120 10:58:19.736549 1 log.go:245]
Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2026-01-20T10:58:19.737082026+00:00 stderr F I0120 10:58:19.737053 1 log.go:245] successful reconciliation

[archive member boundary — next file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.log]

2025-08-13T19:50:51.526005638+00:00 stderr F I0813 19:50:51.524133 1 cmd.go:241] Using service-serving-cert provided certificates
2025-08-13T19:50:51.805548568+00:00 stderr F I0813 19:50:51.796567 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:50:51.907527603+00:00 stderr F I0813 19:50:51.906451 1 observer_polling.go:159] Starting file observer
2025-08-13T19:50:52.430967163+00:00 stderr F I0813 19:50:52.430191 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98
2025-08-13T19:50:53.696147402+00:00 stderr F I0813 19:50:53.658721 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:50:53.696515842+00:00 stderr F W0813 19:50:53.696475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:50:53.696565593+00:00 stderr F W0813 19:50:53.696546 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:50:53.711051698+00:00 stderr F I0813 19:50:53.688429 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:50:53.729164305+00:00 stderr F I0813 19:50:53.728532 1 secure_serving.go:213] Serving securely on [::]:9104 2025-08-13T19:50:53.732060848+00:00 stderr F I0813 19:50:53.731301 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:50:53.733718265+00:00 stderr F I0813 19:50:53.733681 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.737647 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.739633 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748216 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748340 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748333 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748369 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748549 1 leaderelection.go:250] attempting to acquire leader lease 
openshift-network-operator/network-operator-lock... 2025-08-13T19:50:53.855680041+00:00 stderr F I0813 19:50:53.851938 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:50:53.855680041+00:00 stderr F I0813 19:50:53.852293 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:50:53.871563255+00:00 stderr F I0813 19:50:53.871012 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:50:53.878122182+00:00 stderr F E0813 19:50:53.878073 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.878459512+00:00 stderr F E0813 19:50:53.878441 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.884662709+00:00 stderr F E0813 19:50:53.883761 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.888043106+00:00 stderr F E0813 19:50:53.887730 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.894232243+00:00 stderr F E0813 19:50:53.894194 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.921112841+00:00 stderr F E0813 19:50:53.921050 1 
configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.921221634+00:00 stderr F E0813 19:50:53.921202 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.944473119+00:00 stderr F E0813 19:50:53.944405 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.965390587+00:00 stderr F E0813 19:50:53.964305 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.990391071+00:00 stderr F E0813 19:50:53.989304 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.053048102+00:00 stderr F E0813 19:50:54.051645 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.076296547+00:00 stderr F E0813 19:50:54.076033 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.214466776+00:00 stderr F E0813 19:50:54.213642 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.237106333+00:00 stderr F E0813 19:50:54.236942 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.539054753+00:00 stderr F E0813 19:50:54.535234 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.558006074+00:00 stderr F E0813 19:50:54.557434 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:55.176672496+00:00 stderr F E0813 19:50:55.176032 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:55.199176329+00:00 stderr F E0813 19:50:55.198548 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:56.458594364+00:00 stderr F E0813 19:50:56.458543 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:56.485217234+00:00 stderr F E0813 19:50:56.481127 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:59.034562497+00:00 stderr F E0813 19:50:59.034344 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:59.044633055+00:00 stderr F E0813 19:50:59.044467 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:04.165674402+00:00 stderr F E0813 19:51:04.162623 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:04.166167186+00:00 stderr F E0813 19:51:04.166139 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:14.403445769+00:00 stderr F E0813 19:51:14.403252 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:14.407896356+00:00 stderr F E0813 19:51:14.406714 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:34.885304072+00:00 stderr F E0813 19:51:34.884178 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:34.887387631+00:00 stderr F E0813 19:51:34.887255 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:53.853097978+00:00 stderr F E0813 19:51:53.852977 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:52:15.845380614+00:00 stderr F E0813 19:52:15.845240 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:52:15.847854545+00:00 stderr F E0813 19:52:15.847751 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:52:53.852904639+00:00 stderr F E0813 19:52:53.852557 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:53:37.766395565+00:00 stderr F E0813 19:53:37.766247 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:53:53.853033193+00:00 stderr F E0813 19:53:53.852902 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:54:53.853306268+00:00 stderr F E0813 19:54:53.853107 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:54:59.688769553+00:00 stderr F E0813 19:54:59.688621 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:55:53.852650979+00:00 stderr F E0813 19:55:53.852507 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:56:21.606934133+00:00 stderr F E0813 19:56:21.606713 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:56:27.298752589+00:00 stderr F I0813 19:56:27.298520 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"f2f09683-2189-4368-ac3d-7dc7538da4b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"27121", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_3457bc4e-7009-4fb9-bd36-e077ad27f0dd became leader
2025-08-13T19:56:27.299471170+00:00 stderr F I0813 19:56:27.299360 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock
2025-08-13T19:56:27.351114894+00:00 stderr F I0813 19:56:27.351001 1 operator.go:97] Creating status manager for stand-alone cluster
2025-08-13T19:56:27.351565257+00:00 stderr F I0813 19:56:27.351512 1 operator.go:102] Adding controller-runtime controllers
2025-08-13T19:56:27.354990565+00:00 stderr F I0813 19:56:27.354921 1 operconfig_controller.go:102] Waiting for feature gates initialization...
2025-08-13T19:56:27.358337231+00:00 stderr F I0813 19:56:27.358191 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:56:27.363699364+00:00 stderr F I0813 19:56:27.363602 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:56:27.364136466+00:00 stderr F I0813 19:56:27.364072 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:56:27.366122993+00:00 stderr F I0813 19:56:27.366051 1 client.go:239] Starting informers...
2025-08-13T19:56:27.366292048+00:00 stderr F I0813 19:56:27.366232 1 client.go:250] Waiting for informers to sync...
2025-08-13T19:56:27.567890954+00:00 stderr F I0813 19:56:27.567745 1 client.go:271] Informers started and synced
2025-08-13T19:56:27.567980227+00:00 stderr F I0813 19:56:27.567965 1 operator.go:126] Starting controller-manager
2025-08-13T19:56:27.569429138+00:00 stderr F I0813 19:56:27.569399 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:56:27.569530191+00:00 stderr F I0813 19:56:27.569512 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:56:27.569658345+00:00 stderr F I0813 19:56:27.569634 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:56:27.569953743+00:00 stderr F I0813 19:56:27.569405 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController
2025-08-13T19:56:27.569976414+00:00 stderr F I0813 19:56:27.569955 1 base_controller.go:73] Caches are synced for ManagementStateController
2025-08-13T19:56:27.569988794+00:00 stderr F I0813 19:56:27.569970 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-08-13T19:56:27.570112128+00:00 stderr F I0813 19:56:27.570015 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics"
2025-08-13T19:56:27.571338433+00:00 stderr F I0813 19:56:27.570882 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false
2025-08-13T19:56:27.573138374+00:00 stderr F I0813 19:56:27.573062 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000d2d970"
2025-08-13T19:56:27.573237987+00:00 stderr F I0813 19:56:27.573176 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter"
2025-08-13T19:56:27.573380101+00:00 stderr F I0813 19:56:27.573272 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy"
2025-08-13T19:56:27.573393931+00:00 stderr F I0813 19:56:27.573376 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller"
2025-08-13T19:56:27.573393931+00:00 stderr F I0813 19:56:27.573376 1 controller.go:186] "Starting Controller" controller="egress-router-controller"
2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.573928 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network"
2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.574014 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller"
2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.573086 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI"
2025-08-13T19:56:27.574103762+00:00 stderr F I0813 19:56:27.574085 1 controller.go:186] "Starting Controller" controller="pki-controller"
2025-08-13T19:56:27.574197864+00:00 stderr F I0813 19:56:27.574118 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network"
2025-08-13T19:56:27.574211965+00:00 stderr F I0813 19:56:27.574195 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network"
2025-08-13T19:56:27.574211965+00:00 stderr F I0813 19:56:27.574207 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc001030000"
2025-08-13T19:56:27.574395470+00:00 stderr F I0813 19:56:27.574278 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node"
2025-08-13T19:56:27.574395470+00:00 stderr F I0813 19:56:27.574335 1 controller.go:186] "Starting Controller" controller="operconfig-controller"
2025-08-13T19:56:27.574414261+00:00 stderr F I0813 19:56:27.574389 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure"
2025-08-13T19:56:27.574426401+00:00 stderr F I0813 19:56:27.574417 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller"
2025-08-13T19:56:27.574506623+00:00 stderr F I0813 19:56:27.574434 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc0010302c0"
2025-08-13T19:56:27.574542484+00:00 stderr F I0813 19:56:27.574279 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest"
2025-08-13T19:56:27.574664318+00:00 stderr F I0813 19:56:27.574565 1 controller.go:186] "Starting Controller" controller="dashboard-controller"
2025-08-13T19:56:27.574776281+00:00 stderr F I0813 19:56:27.574601 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc001030210"
2025-08-13T19:56:27.574888574+00:00 stderr F I0813 19:56:27.574574 1 controller.go:186] "Starting Controller" controller="signer-controller"
2025-08-13T19:56:27.575969945+00:00 stderr F I0813 19:56:27.574542 1 controller.go:178] "Starting EventSource" controller="ingress-config-controller" source="kind source: *v1.IngressController"
2025-08-13T19:56:27.576037697+00:00 stderr F I0813 19:56:27.576022 1 controller.go:186] "Starting Controller" controller="ingress-config-controller"
2025-08-13T19:56:27.576094398+00:00 stderr F I0813 19:56:27.574714 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1
2025-08-13T19:56:27.576491250+00:00 stderr F I0813 19:56:27.574875 1 controller.go:186] "Starting Controller" controller="allowlist-controller"
2025-08-13T19:56:27.576617133+00:00 stderr F I0813 19:56:27.576531 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1
2025-08-13T19:56:27.576926262+00:00 stderr F I0813 19:56:27.574307 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc0010300b0"
2025-08-13T19:56:27.576951953+00:00 stderr F I0813 19:56:27.576927 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc001030160"
2025-08-13T19:56:27.577041156+00:00 stderr F I0813 19:56:27.576992 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller"
2025-08-13T19:56:27.577074326+00:00 stderr F I0813 19:56:27.576996 1 dashboard_controller.go:113] Reconcile dashboards
2025-08-13T19:56:27.577215551+00:00 stderr F I0813 19:56:27.577133 1 controller.go:220] "Starting workers" controller="configmap-trust-bundle-injector-controller" worker count=1
2025-08-13T19:56:27.577215551+00:00 stderr F I0813 19:56:27.574959 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc001030370"
2025-08-13T19:56:27.577232791+00:00 stderr F I0813 19:56:27.577215 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc001030420"
2025-08-13T19:56:27.577244481+00:00 stderr F I0813 19:56:27.577233 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc0010304d0"
2025-08-13T19:56:27.577254382+00:00 stderr F I0813 19:56:27.577246 1 controller.go:186] "Starting Controller" controller="pod-watcher"
2025-08-13T19:56:27.577264902+00:00 stderr F I0813 19:56:27.577254 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1
2025-08-13T19:56:27.577437857+00:00 stderr F I0813 19:56:27.575918 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController
2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577667 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577720 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577732 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577737 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status
2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577744 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577750 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status
2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577767 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status
2025-08-13T19:56:27.578450466+00:00 stderr F I0813 19:56:27.577775 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:27.578575999+00:00 stderr F I0813 19:56:27.578480 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.578575999+00:00 stderr F I0813 19:56:27.578544 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.578921419+00:00 stderr F I0813 19:56:27.575299 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation
2025-08-13T19:56:27.579011332+00:00 stderr F I0813 19:56:27.578991 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation
2025-08-13T19:56:27.579067203+00:00 stderr F I0813 19:56:27.579050 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T19:56:27.579356502+00:00 stderr F I0813 19:56:27.579333 1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca
2025-08-13T19:56:27.580948257+00:00 stderr F I0813 19:56:27.579506 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.587549346+00:00 stderr F I0813 19:56:27.584463 1 log.go:245] openshift-network-operator/iptables-alerter-script changed, triggering operconf reconciliation
2025-08-13T19:56:27.591530269+00:00 stderr F I0813 19:56:27.590312 1 dashboard_controller.go:139] Applying dashboards manifests
2025-08-13T19:56:27.591530269+00:00 stderr F I0813 19:56:27.591221 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.591860269+00:00 stderr F I0813 19:56:27.591702 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.592230659+00:00 stderr F I0813 19:56:27.592106 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:27.592852457+00:00 stderr F I0813 19:56:27.592740 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.593769843+00:00 stderr F I0813 19:56:27.593747 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.594589167+00:00 stderr F I0813 19:56:27.594464 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.595849663+00:00 stderr F I0813 19:56:27.595733 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.596215333+00:00 stderr F I0813 19:56:27.596118 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.599428015+00:00 stderr F I0813 19:56:27.599166 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.599659351+00:00 stderr F I0813 19:56:27.599636 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.601773632+00:00 stderr F I0813 19:56:27.600107 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.602556664+00:00 stderr F I0813 19:56:27.602266 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.603653376+00:00 stderr F I0813 19:56:27.603261 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.605116727+00:00 stderr F I0813 19:56:27.605009 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.605654523+00:00 stderr F I0813 19:56:27.605606 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.606413 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.606613 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.607599 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.608162284+00:00 stderr F I0813 19:56:27.608138 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.609076180+00:00 stderr F I0813 19:56:27.609050 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist
2025-08-13T19:56:27.609481932+00:00 stderr F I0813 19:56:27.609420 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.612131368+00:00 stderr F I0813 19:56:27.612050 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-08-13T19:56:27.635883836+00:00 stderr F I0813 19:56:27.635730 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-08-13T19:56:27.635883836+00:00 stderr F I0813 19:56:27.635768 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-08-13T19:56:27.639028526+00:00 stderr F I0813 19:56:27.638976 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-08-13T19:56:27.639028526+00:00 stderr F I0813 19:56:27.639008 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-08-13T19:56:27.644852602+00:00 stderr F I0813 19:56:27.644681 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger"
2025-08-13T19:56:27.644852602+00:00 stderr F I0813 19:56:27.644740 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger"
2025-08-13T19:56:27.662414693+00:00 stderr F I0813 19:56:27.661700 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-08-13T19:56:27.677311729+00:00 stderr F I0813 19:56:27.675885 1 log.go:245] /crc changed, triggering operconf reconciliation
2025-08-13T19:56:27.677694760+00:00 stderr F I0813 19:56:27.677612 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1
2025-08-13T19:56:27.677762442+00:00 stderr F I0813 19:56:27.677703 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1
2025-08-13T19:56:27.677977208+00:00 stderr F I0813 19:56:27.677866 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1
2025-08-13T19:56:27.678407880+00:00 stderr F I0813 19:56:27.678174 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.683757 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.683940 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684316 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1
2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684541 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684572 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1
2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684543 1 log.go:245] Reconciling Network.config.openshift.io cluster
2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684871 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1
2025-08-13T19:56:27.685271766+00:00 stderr F I0813 19:56:27.685248 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685387 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt'
2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685465 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1
2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685503 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1
2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685712 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T19:56:27.698346949+00:00 stderr F I0813 19:56:27.698189 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.701887971+00:00 stderr F I0813 19:56:27.700366 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:27.728995845+00:00 stderr F I0813 19:56:27.728910 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:27.729443867+00:00 stderr F I0813 19:56:27.729417 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs'
2025-08-13T19:56:27.743084397+00:00 stderr F I0813 19:56:27.742986 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:56:27.743084397+00:00 stderr F I0813 19:56:27.743033 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:56:27.749590293+00:00 stderr F I0813 19:56:27.749370 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:27.751598850+00:00 stderr F I0813 19:56:27.749864 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks'
2025-08-13T19:56:27.796282896+00:00 stderr F I0813 19:56:27.793146 1 log.go:245] "network-node-identity-cert" in "openshift-network-node-identity" requires a new target cert/key pair: already expired
2025-08-13T19:56:27.814100495+00:00 stderr F I0813 19:56:27.808184 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:27.814100495+00:00 stderr F I0813 19:56:27.808279 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca'
2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838241 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838324 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca'
2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838474 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838488 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:56:27.849897277+00:00 stderr F I0813 19:56:27.849302 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:27.849897277+00:00 stderr F I0813 19:56:27.849392 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca'
2025-08-13T19:56:27.865566954+00:00 stderr F I0813 19:56:27.864192 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:27.865566954+00:00 stderr F I0813 19:56:27.864295 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests'
2025-08-13T19:56:27.872170163+00:00 stderr F I0813 19:56:27.871893 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:27.872170163+00:00 stderr F I0813 19:56:27.872007 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2025-08-13T19:56:27.915571142+00:00 stderr F I0813 19:56:27.915478 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T19:56:27.917293291+00:00 stderr F I0813 19:56:27.917181 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:27.917293291+00:00 stderr F I0813 19:56:27.917230 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:27.979374604+00:00 stderr F I0813 19:56:27.979044 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T19:56:27.979374604+00:00 stderr F I0813 19:56:27.979088 1 log.go:245] Successfully updated Operator config from Cluster config
2025-08-13T19:56:27.984852501+00:00 stderr F I0813 19:56:27.980469 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:28.002926967+00:00 stderr F I0813 19:56:28.002764 1 log.go:245] Updated Secret/network-node-identity-cert -n openshift-network-node-identity because it changed
2025-08-13T19:56:28.002926967+00:00 stderr F I0813 19:56:28.002871 1 log.go:245] successful reconciliation
2025-08-13T19:56:28.027777056+00:00 stderr F I0813 19:56:28.023051 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:28.027777056+00:00 stderr F I0813 19:56:28.025626 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:28.028953530+00:00 stderr F I0813 19:56:28.028774 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:28.029014962+00:00 stderr F I0813 19:56:28.028982 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca'
2025-08-13T19:56:28.045195174+00:00 stderr F I0813 19:56:28.045140 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:28.045630416+00:00 stderr F I0813 19:56:28.045572 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.045630416+00:00 stderr F status: "False"
2025-08-13T19:56:28.045630416+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.045630416+00:00 stderr F status: "False"
2025-08-13T19:56:28.045630416+00:00 stderr F type: Degraded
2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.045630416+00:00 stderr F status: "True"
2025-08-13T19:56:28.045630416+00:00 stderr F type: Upgradeable
2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:28.045630416+00:00 stderr F message: |-
2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.045630416+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:28.045630416+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.045630416+00:00 stderr F reason: Deploying
2025-08-13T19:56:28.045630416+00:00 stderr F status: "True"
2025-08-13T19:56:28.045630416+00:00 stderr F type: Progressing 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "True" 2025-08-13T19:56:28.045630416+00:00 stderr F type: Available 2025-08-13T19:56:28.048961891+00:00 stderr F I0813 19:56:28.048120 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:28.078560756+00:00 stderr F I0813 19:56:28.078446 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:56:28.078560756+00:00 stderr F I0813 19:56:28.078494 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T19:56:28.109633104+00:00 stderr F I0813 19:56:28.109512 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:33Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "False" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Degraded 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "False" 2025-08-13T19:56:28.109633104+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.109633104+00:00 stderr F message: |- 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 
stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Progressing 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Available 2025-08-13T19:56:28.169379430+00:00 stderr F I0813 19:56:28.168772 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.171271324+00:00 stderr F I0813 19:56:28.171043 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.171271324+00:00 stderr F status: "False" 2025-08-13T19:56:28.171271324+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.171271324+00:00 stderr F message: |- 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last 
change 2024-06-27T13:34:15Z 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:28.171271324+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" rollout is not making progress - last change 2024-06-27T13:34:19Z 2025-08-13T19:56:28.171271324+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:28.171271324+00:00 stderr F status: "True" 2025-08-13T19:56:28.171271324+00:00 stderr F type: Degraded 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.171271324+00:00 stderr F status: "True" 2025-08-13T19:56:28.171271324+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.171271324+00:00 stderr F message: |- 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.171271324+00:00 stderr 
F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.171271324+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.171271324+00:00 stderr F status: "True" 2025-08-13T19:56:28.171271324+00:00 stderr F type: Progressing 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.171271324+00:00 stderr F status: "True" 2025-08-13T19:56:28.171271324+00:00 stderr F type: Available 2025-08-13T19:56:28.182717181+00:00 stderr F I0813 19:56:28.182605 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.190129682+00:00 stderr F I0813 19:56:28.190008 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.215646811+00:00 stderr F I0813 19:56:28.215536 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.221927250+00:00 stderr F I0813 19:56:28.221689 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.221927250+00:00 stderr F message: |- 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" 
rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:28.221927250+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" rollout is not making progress - last change 2024-06-27T13:34:19Z 2025-08-13T19:56:28.221927250+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:28.221927250+00:00 stderr F status: "True" 2025-08-13T19:56:28.221927250+00:00 stderr F type: Degraded 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.221927250+00:00 stderr F status: "True" 2025-08-13T19:56:28.221927250+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.221927250+00:00 stderr F status: "False" 2025-08-13T19:56:28.221927250+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.221927250+00:00 stderr F message: |- 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet 
"/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.221927250+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.221927250+00:00 stderr F status: "True" 2025-08-13T19:56:28.221927250+00:00 stderr F type: Progressing 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.221927250+00:00 stderr F status: "True" 2025-08-13T19:56:28.221927250+00:00 stderr F type: Available 2025-08-13T19:56:28.226875732+00:00 stderr F I0813 19:56:28.226700 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.235752065+00:00 stderr F I0813 19:56:28.235649 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.251752272+00:00 stderr F I0813 19:56:28.251700 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.418311498+00:00 stderr F I0813 19:56:28.417090 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.418311498+00:00 stderr F I0813 19:56:28.417135 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.446930105+00:00 stderr F I0813 
19:56:28.446671 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.446930105+00:00 stderr F I0813 19:56:28.446723 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.447300036+00:00 stderr F I0813 19:56:28.446986 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:28.447300036+00:00 stderr F I0813 19:56:28.447064 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle' 2025-08-13T19:56:28.541006542+00:00 stderr F I0813 19:56:28.540908 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.541331291+00:00 stderr F I0813 19:56:28.541228 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.541331291+00:00 stderr F status: "False" 2025-08-13T19:56:28.541331291+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.541331291+00:00 stderr F message: |- 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last 
change 2024-06-27T13:34:18Z 2025-08-13T19:56:28.541331291+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:28.541331291+00:00 stderr F status: "True" 2025-08-13T19:56:28.541331291+00:00 stderr F type: Degraded 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.541331291+00:00 stderr F status: "True" 2025-08-13T19:56:28.541331291+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.541331291+00:00 stderr F message: |- 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.541331291+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.541331291+00:00 stderr F status: "True" 2025-08-13T19:56:28.541331291+00:00 stderr F type: Progressing 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 
2025-08-13T19:56:28.541331291+00:00 stderr F status: "True" 2025-08-13T19:56:28.541331291+00:00 stderr F type: Available 2025-08-13T19:56:28.819877075+00:00 stderr F I0813 19:56:28.819654 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:28.819877075+00:00 stderr F I0813 19:56:28.819756 1 log.go:245] Reconciling proxy 'cluster' 2025-08-13T19:56:28.822445798+00:00 stderr F I0813 19:56:28.822344 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2025-08-13T19:56:28.896447592+00:00 stderr F I0813 19:56:28.896354 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.896447592+00:00 stderr F message: |- 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:28.896447592+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:28.896447592+00:00 stderr F status: "True" 2025-08-13T19:56:28.896447592+00:00 stderr F type: Degraded 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.896447592+00:00 stderr F status: "True" 2025-08-13T19:56:28.896447592+00:00 stderr F type: Upgradeable 
2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.896447592+00:00 stderr F status: "False" 2025-08-13T19:56:28.896447592+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.896447592+00:00 stderr F message: |- 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.896447592+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.896447592+00:00 stderr F status: "True" 2025-08-13T19:56:28.896447592+00:00 stderr F type: Progressing 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.896447592+00:00 stderr F status: "True" 2025-08-13T19:56:28.896447592+00:00 stderr F type: Available 2025-08-13T19:56:29.014293887+00:00 stderr F I0813 19:56:29.014225 1 log.go:245] Reconciling proxy 
'cluster' complete 2025-08-13T19:56:29.259978901+00:00 stderr F I0813 19:56:29.259705 1 log.go:245] Reconciling configmap from openshift-kube-controller-manager/trusted-ca-bundle 2025-08-13T19:56:29.262194434+00:00 stderr F I0813 19:56:29.262113 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:29.568310526+00:00 stderr F I0813 19:56:29.568258 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T19:56:29.573930976+00:00 stderr F I0813 19:56:29.573686 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T19:56:29.629145853+00:00 stderr F I0813 19:56:29.629050 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T19:56:29.717455055+00:00 stderr F I0813 19:56:29.717192 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T19:56:29.717455055+00:00 stderr F I0813 19:56:29.717251 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T19:56:29.725126934+00:00 stderr F I0813 19:56:29.724986 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-08-13T19:56:29.727114510+00:00 stderr F I0813 19:56:29.727056 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T19:56:29.731300240+00:00 stderr F I0813 19:56:29.730746 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T19:56:29.733667117+00:00 stderr F I0813 19:56:29.732921 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T19:56:29.737509137+00:00 stderr F I0813 19:56:29.737396 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T19:56:29.737509137+00:00 stderr 
F I0813 19:56:29.737431 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000b25200 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T19:56:29.863906976+00:00 stderr F I0813 19:56:29.861509 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T19:56:29.869447885+00:00 stderr F I0813 19:56:29.868308 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:29.944962301+00:00 stderr F I0813 19:56:29.944068 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:29.963897282+00:00 stderr F I0813 19:56:29.962888 1 log.go:245] "ovn-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: already expired 2025-08-13T19:56:30.021370463+00:00 stderr F I0813 19:56:30.021290 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3 2025-08-13T19:56:30.021404664+00:00 stderr F I0813 19:56:30.021394 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true 2025-08-13T19:56:30.024986796+00:00 stderr F I0813 19:56:30.024932 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:30.024986796+00:00 stderr F W0813 19:56:30.024966 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:30.025103729+00:00 stderr F I0813 19:56:30.025059 1 ovn_kubernetes.go:1626] daemonset 
openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:30.025103729+00:00 stderr F W0813 19:56:30.025087 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:30.025313625+00:00 stderr F I0813 19:56:30.025270 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T19:56:30.257542217+00:00 stderr F I0813 19:56:30.257476 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:30.304158318+00:00 stderr F I0813 19:56:30.303717 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T19:56:30.304158318+00:00 stderr F I0813 19:56:30.303893 1 log.go:245] Successfully updated Operator config from Cluster config 2025-08-13T19:56:30.305343021+00:00 stderr F I0813 19:56:30.305227 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:30.407072026+00:00 stderr F I0813 19:56:30.406632 1 log.go:245] Updated Secret/ovn-cert -n openshift-ovn-kubernetes because it changed 2025-08-13T19:56:30.407072026+00:00 stderr F I0813 19:56:30.406702 1 log.go:245] successful reconciliation 2025-08-13T19:56:30.414902470+00:00 stderr F I0813 19:56:30.414745 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:30.433026738+00:00 stderr F I0813 19:56:30.432771 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:56:30.434142459+00:00 stderr F E0813 19:56:30.434050 1 controller.go:329] "Reconciler error" err="could not apply 
(/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="3ac04967-f509-46f0-a407-9eb8ddb74597"
2025-08-13T19:56:30.440151681+00:00 stderr F I0813 19:56:30.439985 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T19:56:30.705619982+00:00 stderr F I0813 19:56:30.705474 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:30.705619982+00:00 stderr F status: "False"
2025-08-13T19:56:30.705619982+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:30.705619982+00:00 stderr F message: |-
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:30.705619982+00:00 stderr F reason: RolloutHung
2025-08-13T19:56:30.705619982+00:00 stderr F status: "True"
2025-08-13T19:56:30.705619982+00:00 stderr F type: Degraded
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:30.705619982+00:00 stderr F status: "True"
2025-08-13T19:56:30.705619982+00:00 stderr F type: Upgradeable
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:30.705619982+00:00 stderr F message: |-
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:30.705619982+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F reason: Deploying
2025-08-13T19:56:30.705619982+00:00 stderr F status: "True"
2025-08-13T19:56:30.705619982+00:00 stderr F type: Progressing
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:30.705619982+00:00 stderr F status: "True"
2025-08-13T19:56:30.705619982+00:00 stderr F type: Available
2025-08-13T19:56:30.706326832+00:00 stderr F I0813 19:56:30.706243 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:31.279067037+00:00 stderr F I0813 19:56:31.278972 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:31.279067037+00:00 stderr F message: |-
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:31.279067037+00:00 stderr F reason: RolloutHung
2025-08-13T19:56:31.279067037+00:00 stderr F status: "True"
2025-08-13T19:56:31.279067037+00:00 stderr F type: Degraded
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:31.279067037+00:00 stderr F status: "True"
2025-08-13T19:56:31.279067037+00:00 stderr F type: Upgradeable
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:31.279067037+00:00 stderr F status: "False"
2025-08-13T19:56:31.279067037+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:31.279067037+00:00 stderr F message: |-
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:31.279067037+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F reason: Deploying
2025-08-13T19:56:31.279067037+00:00 stderr F status: "True"
2025-08-13T19:56:31.279067037+00:00 stderr F type: Progressing
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:31.279067037+00:00 stderr F status: "True"
2025-08-13T19:56:31.279067037+00:00 stderr F type: Available
2025-08-13T19:56:31.307715565+00:00 stderr F I0813 19:56:31.307639 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:31.307715565+00:00 stderr F status: "False"
2025-08-13T19:56:31.307715565+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:31.307715565+00:00 stderr F message: |-
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:31.307715565+00:00 stderr F reason: RolloutHung
2025-08-13T19:56:31.307715565+00:00 stderr F status: "True"
2025-08-13T19:56:31.307715565+00:00 stderr F type: Degraded
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:31.307715565+00:00 stderr F status: "True"
2025-08-13T19:56:31.307715565+00:00 stderr F type: Upgradeable
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:31.307715565+00:00 stderr F message: |-
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:31.307715565+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F reason: Deploying
2025-08-13T19:56:31.307715565+00:00 stderr F status: "True"
2025-08-13T19:56:31.307715565+00:00 stderr F type: Progressing
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:31.307715565+00:00 stderr F status: "True"
2025-08-13T19:56:31.307715565+00:00 stderr F type: Available
2025-08-13T19:56:31.308573909+00:00 stderr F I0813 19:56:31.308483 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:31.870004360+00:00 stderr F I0813 19:56:31.869375 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T19:56:31.871996847+00:00 stderr F I0813 19:56:31.871895 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T19:56:31.874093797+00:00 stderr F I0813 19:56:31.873954 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T19:56:31.874184540+00:00 stderr F I0813 19:56:31.874125 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003cc6380 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T19:56:31.880532731+00:00 stderr F I0813 19:56:31.880410 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3
2025-08-13T19:56:31.880532731+00:00 stderr F I0813 19:56:31.880446 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true
2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886341 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2
2025-08-13T19:56:31.886454200+00:00 stderr F W0813 19:56:31.886377 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing
2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886385 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2
2025-08-13T19:56:31.886454200+00:00 stderr F W0813 19:56:31.886390 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing
2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886414 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T19:56:32.073586814+00:00 stderr F I0813 19:56:32.073229 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:32.073586814+00:00 stderr F message: |-
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:32.073586814+00:00 stderr F reason: RolloutHung
2025-08-13T19:56:32.073586814+00:00 stderr F status: "True"
2025-08-13T19:56:32.073586814+00:00 stderr F type: Degraded
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:32.073586814+00:00 stderr F status: "True"
2025-08-13T19:56:32.073586814+00:00 stderr F type: Upgradeable
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:32.073586814+00:00 stderr F status: "False"
2025-08-13T19:56:32.073586814+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:32.073586814+00:00 stderr F message: |-
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:32.073586814+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F reason: Deploying
2025-08-13T19:56:32.073586814+00:00 stderr F status: "True"
2025-08-13T19:56:32.073586814+00:00 stderr F type: Progressing
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:32.073586814+00:00 stderr F status: "True"
2025-08-13T19:56:32.073586814+00:00 stderr F type: Available
2025-08-13T19:56:32.215632320+00:00 stderr F I0813 19:56:32.215466 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T19:56:32.231190424+00:00 stderr F I0813 19:56:32.231059 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:56:32.231261466+00:00 stderr F E0813 19:56:32.231196 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="e3de9d8e-7ea9-4fab-b6a7-656f1b43f054"
2025-08-13T19:56:32.241853199+00:00 stderr F I0813 19:56:32.241740 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T19:56:32.461214022+00:00 stderr F I0813 19:56:32.460764 1 log.go:245] Reconciling configmap from openshift-machine-api/mao-trusted-ca
2025-08-13T19:56:32.463244990+00:00 stderr F I0813 19:56:32.463203 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:32.669238543+00:00 stderr F I0813 19:56:32.669132 1 dashboard_controller.go:113] Reconcile dashboards
2025-08-13T19:56:32.674038810+00:00 stderr F I0813 19:56:32.673902 1 dashboard_controller.go:139] Applying dashboards manifests
2025-08-13T19:56:32.683527121+00:00 stderr F I0813 19:56:32.683422 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-08-13T19:56:32.699001412+00:00 stderr F I0813 19:56:32.698891 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-08-13T19:56:32.699001412+00:00 stderr F I0813 19:56:32.698936 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-08-13T19:56:32.710297945+00:00 stderr F I0813 19:56:32.710190 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-08-13T19:56:33.461166035+00:00 stderr F I0813 19:56:33.461019 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T19:56:33.466487867+00:00 stderr F I0813 19:56:33.466357 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:33.468734431+00:00 stderr F I0813 19:56:33.468632 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:33.563471056+00:00 stderr F I0813 19:56:33.563150 1 log.go:245] "signer-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: already expired
2025-08-13T19:56:33.664681716+00:00 stderr F I0813 19:56:33.664587 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T19:56:33.667218929+00:00 stderr F I0813 19:56:33.667162 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T19:56:33.670093191+00:00 stderr F I0813 19:56:33.670062 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T19:56:33.670153243+00:00 stderr F I0813 19:56:33.670127 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000abf900 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T19:56:33.675535246+00:00 stderr F I0813 19:56:33.675435 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3
2025-08-13T19:56:33.675535246+00:00 stderr F I0813 19:56:33.675515 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true
2025-08-13T19:56:33.678138111+00:00 stderr F I0813 19:56:33.678080 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2
2025-08-13T19:56:33.678138111+00:00 stderr F W0813 19:56:33.678122 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing
2025-08-13T19:56:33.678138111+00:00 stderr F I0813 19:56:33.678131 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2
2025-08-13T19:56:33.678163041+00:00 stderr F W0813 19:56:33.678138 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing
2025-08-13T19:56:33.678172702+00:00 stderr F I0813 19:56:33.678163 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T19:56:33.702260569+00:00 stderr F I0813 19:56:33.702159 1 log.go:245] Updated Secret/signer-cert -n openshift-ovn-kubernetes because it changed
2025-08-13T19:56:33.702260569+00:00 stderr F I0813 19:56:33.702186 1 log.go:245] successful reconciliation
2025-08-13T19:56:33.859683725+00:00 stderr F I0813 19:56:33.859617 1 log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle
2025-08-13T19:56:33.862468614+00:00 stderr F I0813 19:56:33.862444 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:34.017071259+00:00 stderr F I0813 19:56:34.017011 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T19:56:34.036715309+00:00 stderr F I0813 19:56:34.036622 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T19:56:34.036914765+00:00 stderr F I0813 19:56:34.036872 1 log.go:245] Starting render phase
2025-08-13T19:56:34.037773970+00:00 stderr F I0813 19:56:34.037655 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:34.068159047+00:00 stderr F I0813 19:56:34.068024 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106431 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106484 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106516 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T19:56:34.106600245+00:00 stderr F I0813 19:56:34.106554 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T19:56:34.218480220+00:00 stderr F I0813 19:56:34.218361 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 1 -> 1
2025-08-13T19:56:34.421917649+00:00 stderr F I0813 19:56:34.421720 1 node_identity.go:204] network-node-identity webhook will not be applied, the deployment/daemonset is not ready
2025-08-13T19:56:34.421917649+00:00 stderr F I0813 19:56:34.421856 1 node_identity.go:208] network-node-identity webhook will not be applied, if it already exists it won't be removed
2025-08-13T19:56:34.426493139+00:00 stderr F I0813 19:56:34.426420 1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T19:56:34.461098758+00:00 stderr F I0813 19:56:34.460764 1 log.go:245] Reconciling configmap from openshift-image-registry/trusted-ca
2025-08-13T19:56:34.464623398+00:00 stderr F I0813 19:56:34.464156 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:34.862418207+00:00 stderr F I0813 19:56:34.862291 1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle
2025-08-13T19:56:34.863104167+00:00 stderr F I0813 19:56:34.863025 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T19:56:34.864596619+00:00 stderr F I0813 19:56:34.864483 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:34.870510848+00:00 stderr F I0813 19:56:34.870429 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T19:56:34.870543839+00:00 stderr F I0813 19:56:34.870507 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T19:56:34.880596586+00:00 stderr F I0813 19:56:34.880486 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T19:56:34.880596586+00:00 stderr F I0813 19:56:34.880545 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T19:56:34.891750095+00:00 stderr F I0813 19:56:34.891663 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T19:56:34.892159396+00:00 stderr F I0813 19:56:34.892081 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T19:56:34.902209913+00:00 stderr F I0813 19:56:34.902054 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T19:56:34.902209913+00:00 stderr F I0813 19:56:34.902128 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T19:56:34.910589693+00:00 stderr F I0813 19:56:34.910560 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T19:56:34.910863740+00:00 stderr F I0813 19:56:34.910730 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T19:56:34.919874608+00:00 stderr F I0813 19:56:34.919635 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T19:56:34.919874608+00:00 stderr F I0813 19:56:34.919709 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T19:56:34.926683742+00:00 stderr F I0813 19:56:34.926177 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T19:56:34.927102754+00:00 stderr F I0813 19:56:34.927021 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T19:56:34.934686861+00:00 stderr F I0813 19:56:34.934559 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T19:56:34.934686861+00:00 stderr F I0813 19:56:34.934619 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T19:56:34.940050964+00:00 stderr F I0813 19:56:34.939867 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T19:56:34.940050964+00:00 stderr F I0813 19:56:34.939909 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T19:56:34.944758798+00:00 stderr F I0813 19:56:34.944607 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T19:56:34.944758798+00:00 stderr F I0813 19:56:34.944675 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T19:56:35.060988547+00:00 stderr F I0813 19:56:35.060754 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle
2025-08-13T19:56:35.063339344+00:00 stderr F I0813 19:56:35.063294 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:35.069140020+00:00 stderr F I0813 19:56:35.068999 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T19:56:35.069140020+00:00 stderr F I0813 19:56:35.069061 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T19:56:35.269428489+00:00 stderr F I0813 19:56:35.269371 1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca
2025-08-13T19:56:35.273852706+00:00 stderr F I0813 19:56:35.273718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T19:56:35.273878236+00:00 stderr F I0813 19:56:35.273861 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T19:56:35.274124703+00:00 stderr F I0813 19:56:35.274103 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:35.463171062+00:00 stderr F I0813 19:56:35.463013 1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle
2025-08-13T19:56:35.466748384+00:00 stderr F I0813 19:56:35.466671 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:35.473264810+00:00 stderr F I0813 19:56:35.473220 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T19:56:35.473337272+00:00 stderr F I0813 19:56:35.473284 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T19:56:35.661679340+00:00 stderr F I0813 19:56:35.661587 1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca
2025-08-13T19:56:35.666281531+00:00 stderr F I0813 19:56:35.666181 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:35.670559483+00:00 stderr F I0813 19:56:35.669432 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T19:56:35.670559483+00:00 stderr F I0813 19:56:35.669496 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T19:56:35.863934805+00:00 stderr F I0813 19:56:35.863675 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle
2025-08-13T19:56:35.868876786+00:00 stderr F I0813 19:56:35.866928 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:35.877013529+00:00 stderr F I0813 19:56:35.873385 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T19:56:35.877013529+00:00 stderr F I0813 19:56:35.873492 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T19:56:36.114440068+00:00 stderr F I0813 19:56:36.107927 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle
2025-08-13T19:56:36.114440068+00:00 stderr F I0813 19:56:36.112344 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.115323574+00:00 stderr F I0813 19:56:36.115043 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T19:56:36.115475538+00:00 stderr F I0813 19:56:36.115455 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T19:56:36.260204401+00:00 stderr F I0813 19:56:36.260087 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle
2025-08-13T19:56:36.262636010+00:00 stderr F I0813 19:56:36.262575 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps
2025-08-13T19:56:36.262660061+00:00 stderr F I0813 19:56:36.262630 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.262660061+00:00 stderr F I0813 19:56:36.262656 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.262716893+00:00 stderr F I0813 19:56:36.262679 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262721 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262743 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262765 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262859 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.263059382+00:00 stderr F I0813 19:56:36.262890 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.263074453+00:00 stderr F I0813 19:56:36.263057 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.263221087+00:00 stderr F I0813 19:56:36.263138 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.263367011+00:00 stderr F I0813 19:56:36.263260 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.263367011+00:00 stderr F I0813 19:56:36.263308 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:36.270019731+00:00 stderr F I0813 19:56:36.269948 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T19:56:36.270156515+00:00 stderr F I0813 19:56:36.270141 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T19:56:36.468757485+00:00 stderr F I0813 19:56:36.468668 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T19:56:36.468907739+00:00 stderr F I0813 19:56:36.468753 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T19:56:36.669737104+00:00 stderr F I0813 19:56:36.669618 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T19:56:36.669737104+00:00 stderr F I0813 19:56:36.669696 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T19:56:36.870671062+00:00 stderr F I0813 19:56:36.870397 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T19:56:36.870671062+00:00 stderr F I0813 19:56:36.870476 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T19:56:37.070240691+00:00 stderr F I0813 19:56:37.070104 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T19:56:37.070585640+00:00 stderr F I0813 19:56:37.070449 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T19:56:37.282758169+00:00 stderr F I0813 19:56:37.282586 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T19:56:37.282758169+00:00 stderr F I0813 19:56:37.282706 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T19:56:37.486565189+00:00 stderr F I0813 19:56:37.486446 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T19:56:37.486565189+00:00 stderr F I0813 19:56:37.486513 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T19:56:37.671652084+00:00 stderr F I0813 19:56:37.671551 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T19:56:37.671652084+00:00 stderr F I0813 19:56:37.671620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T19:56:37.870397229+00:00 stderr F I0813 19:56:37.870274 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T19:56:37.870397229+00:00 stderr F I0813 19:56:37.870350 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T19:56:38.070926605+00:00 stderr F I0813 19:56:38.070534 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T19:56:38.070926605+00:00 stderr F I0813 19:56:38.070592 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T19:56:38.280464439+00:00 stderr F I0813 19:56:38.280129 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T19:56:38.280464439+00:00 stderr F I0813 19:56:38.280180 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T19:56:38.473032357+00:00 stderr F I0813 19:56:38.472924 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T19:56:38.473032357+00:00 stderr F I0813 19:56:38.473016 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T19:56:38.672108052+00:00 stderr F I0813 19:56:38.671995 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T19:56:38.672211555+00:00 stderr F I0813 19:56:38.672197 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T19:56:38.870135517+00:00 stderr F I0813 19:56:38.869996 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T19:56:38.870135517+00:00 stderr F I0813 19:56:38.870081 1 log.go:245] reconciling
(rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T19:56:39.069669964+00:00 stderr F I0813 19:56:39.069522 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:39.069669964+00:00 stderr F I0813 19:56:39.069628 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T19:56:39.270930261+00:00 stderr F I0813 19:56:39.270701 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T19:56:39.270930261+00:00 stderr F I0813 19:56:39.270901 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T19:56:39.469740118+00:00 stderr F I0813 19:56:39.469665 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T19:56:39.469952214+00:00 stderr F I0813 19:56:39.469936 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T19:56:39.669101311+00:00 stderr F I0813 19:56:39.668988 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T19:56:39.669101311+00:00 stderr F I0813 19:56:39.669087 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T19:56:39.869489454+00:00 stderr F I0813 19:56:39.869368 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T19:56:39.869489454+00:00 stderr F I0813 19:56:39.869457 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T19:56:40.080668163+00:00 stderr F I0813 19:56:40.079891 1 log.go:245] Apply / Create of 
(admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T19:56:40.080668163+00:00 stderr F I0813 19:56:40.079990 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T19:56:40.274378724+00:00 stderr F I0813 19:56:40.274266 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T19:56:40.274378724+00:00 stderr F I0813 19:56:40.274335 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T19:56:40.470202326+00:00 stderr F I0813 19:56:40.470092 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T19:56:40.470202326+00:00 stderr F I0813 19:56:40.470165 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T19:56:40.669492477+00:00 stderr F I0813 19:56:40.669386 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:40.669492477+00:00 stderr F I0813 19:56:40.669450 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T19:56:40.872887145+00:00 stderr F I0813 19:56:40.871054 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:40.872887145+00:00 stderr F I0813 19:56:40.871135 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T19:56:41.072220557+00:00 stderr F I0813 19:56:41.072099 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 
2025-08-13T19:56:41.072279518+00:00 stderr F I0813 19:56:41.072217 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T19:56:41.271660712+00:00 stderr F I0813 19:56:41.271529 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T19:56:41.271660712+00:00 stderr F I0813 19:56:41.271631 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T19:56:41.477445898+00:00 stderr F I0813 19:56:41.477261 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T19:56:41.477445898+00:00 stderr F I0813 19:56:41.477350 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T19:56:41.689719429+00:00 stderr F I0813 19:56:41.689618 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T19:56:41.689719429+00:00 stderr F I0813 19:56:41.689703 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T19:56:41.875882805+00:00 stderr F I0813 19:56:41.875677 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T19:56:41.875882805+00:00 stderr F I0813 19:56:41.875862 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T19:56:42.083596536+00:00 stderr F I0813 19:56:42.083236 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T19:56:42.083596536+00:00 stderr F I0813 19:56:42.083547 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/egressservices.k8s.ovn.org 2025-08-13T19:56:42.289346682+00:00 stderr F I0813 19:56:42.289193 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T19:56:42.289346682+00:00 stderr F I0813 19:56:42.289326 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T19:56:42.505345469+00:00 stderr F I0813 19:56:42.505191 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T19:56:42.505345469+00:00 stderr F I0813 19:56:42.505331 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T19:56:42.726707130+00:00 stderr F I0813 19:56:42.726650 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T19:56:42.726974988+00:00 stderr F I0813 19:56:42.726959 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T19:56:42.869456647+00:00 stderr F I0813 19:56:42.869380 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T19:56:42.869582880+00:00 stderr F I0813 19:56:42.869565 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T19:56:43.070341413+00:00 stderr F I0813 19:56:43.070201 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T19:56:43.070341413+00:00 stderr F I0813 19:56:43.070278 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T19:56:43.270219901+00:00 stderr F I0813 19:56:43.270102 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T19:56:43.270219901+00:00 stderr F I0813 19:56:43.270191 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T19:56:43.473703151+00:00 stderr F I0813 19:56:43.473570 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T19:56:43.473703151+00:00 stderr F I0813 19:56:43.473654 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T19:56:43.671879419+00:00 stderr F I0813 19:56:43.671643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T19:56:43.671879419+00:00 stderr F I0813 19:56:43.671717 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T19:56:43.870044978+00:00 stderr F I0813 19:56:43.869963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T19:56:43.870044978+00:00 stderr F I0813 19:56:43.870036 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T19:56:44.071046257+00:00 stderr F I0813 19:56:44.070938 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T19:56:44.071046257+00:00 stderr F I0813 19:56:44.071025 1 log.go:245] 
reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T19:56:44.270071800+00:00 stderr F I0813 19:56:44.269958 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T19:56:44.270071800+00:00 stderr F I0813 19:56:44.270023 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T19:56:44.470025050+00:00 stderr F I0813 19:56:44.469913 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T19:56:44.470025050+00:00 stderr F I0813 19:56:44.469980 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T19:56:44.669212908+00:00 stderr F I0813 19:56:44.669080 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T19:56:44.669212908+00:00 stderr F I0813 19:56:44.669163 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T19:56:44.868872549+00:00 stderr F I0813 19:56:44.868710 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T19:56:44.868993452+00:00 stderr F I0813 19:56:44.868978 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T19:56:45.069440356+00:00 stderr F I0813 19:56:45.069285 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T19:56:45.069440356+00:00 stderr F I0813 19:56:45.069366 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T19:56:45.274258124+00:00 stderr F I0813 19:56:45.273359 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T19:56:45.274258124+00:00 stderr F I0813 19:56:45.273617 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T19:56:45.474058959+00:00 stderr F I0813 19:56:45.473945 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T19:56:45.474058959+00:00 stderr F I0813 19:56:45.474024 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T19:56:45.672082224+00:00 stderr F I0813 19:56:45.671497 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T19:56:45.672082224+00:00 stderr F I0813 19:56:45.671605 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T19:56:45.872153397+00:00 stderr F I0813 19:56:45.872041 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T19:56:45.872153397+00:00 stderr F I0813 19:56:45.872110 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T19:56:46.071185660+00:00 stderr F I0813 19:56:46.071082 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T19:56:46.071185660+00:00 stderr F I0813 19:56:46.071169 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 
2025-08-13T19:56:46.272108338+00:00 stderr F I0813 19:56:46.271639 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T19:56:46.272108338+00:00 stderr F I0813 19:56:46.271731 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T19:56:46.475244508+00:00 stderr F I0813 19:56:46.475168 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T19:56:46.475364051+00:00 stderr F I0813 19:56:46.475347 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T19:56:46.672464330+00:00 stderr F I0813 19:56:46.672407 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T19:56:46.672563062+00:00 stderr F I0813 19:56:46.672549 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T19:56:46.870759002+00:00 stderr F I0813 19:56:46.870673 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T19:56:46.871024059+00:00 stderr F I0813 19:56:46.871000 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T19:56:47.070729842+00:00 stderr F I0813 19:56:47.070569 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T19:56:47.070769013+00:00 stderr F I0813 19:56:47.070732 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T19:56:47.272746319+00:00 stderr F I0813 19:56:47.272608 1 log.go:245] Apply / Create of (/v1, 
Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T19:56:47.272746319+00:00 stderr F I0813 19:56:47.272689 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T19:56:47.472268006+00:00 stderr F I0813 19:56:47.472172 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T19:56:47.472268006+00:00 stderr F I0813 19:56:47.472243 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T19:56:47.669651062+00:00 stderr F I0813 19:56:47.669352 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T19:56:47.669651062+00:00 stderr F I0813 19:56:47.669530 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T19:56:47.869551281+00:00 stderr F I0813 19:56:47.869456 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T19:56:47.869551281+00:00 stderr F I0813 19:56:47.869518 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T19:56:48.070491658+00:00 stderr F I0813 19:56:48.070388 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T19:56:48.070491658+00:00 stderr F I0813 19:56:48.070460 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T19:56:48.270906480+00:00 stderr F I0813 19:56:48.270569 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T19:56:48.270906480+00:00 stderr F I0813 19:56:48.270648 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) 
openshift-host-network/host-network-namespace-quotas 2025-08-13T19:56:48.471754555+00:00 stderr F I0813 19:56:48.471633 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T19:56:48.471754555+00:00 stderr F I0813 19:56:48.471707 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T19:56:48.679738444+00:00 stderr F I0813 19:56:48.679667 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T19:56:48.680020982+00:00 stderr F I0813 19:56:48.679998 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T19:56:48.927119287+00:00 stderr F I0813 19:56:48.919690 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T19:56:48.927631082+00:00 stderr F I0813 19:56:48.927557 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T19:56:49.053428524+00:00 stderr F I0813 19:56:49.053356 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.067288300+00:00 stderr F I0813 19:56:49.067134 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.069858223+00:00 stderr F I0813 19:56:49.069697 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T19:56:49.070064309+00:00 stderr F I0813 19:56:49.069941 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T19:56:49.075114563+00:00 stderr F I0813 19:56:49.075070 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.091433090+00:00 stderr F I0813 19:56:49.091319 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.098720678+00:00 stderr F I0813 19:56:49.098614 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.108031803+00:00 stderr F I0813 19:56:49.106716 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.269661349+00:00 stderr F I0813 19:56:49.269561 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T19:56:49.269661349+00:00 stderr F I0813 19:56:49.269630 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T19:56:49.471665687+00:00 stderr F I0813 19:56:49.471548 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T19:56:49.471665687+00:00 stderr F I0813 19:56:49.471642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T19:56:49.669349102+00:00 stderr F I0813 19:56:49.669279 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T19:56:49.669378623+00:00 stderr F I0813 19:56:49.669344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T19:56:49.871717901+00:00 stderr F 
I0813 19:56:49.871571 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T19:56:49.871717901+00:00 stderr F I0813 19:56:49.871636 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T19:56:50.069382135+00:00 stderr F I0813 19:56:50.069261 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T19:56:50.069382135+00:00 stderr F I0813 19:56:50.069333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T19:56:50.294058591+00:00 stderr F I0813 19:56:50.293963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T19:56:50.294058591+00:00 stderr F I0813 19:56:50.294035 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T19:56:50.474174344+00:00 stderr F I0813 19:56:50.474124 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T19:56:50.474286327+00:00 stderr F I0813 19:56:50.474266 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T19:56:50.678359292+00:00 stderr F I0813 19:56:50.678236 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T19:56:50.678404604+00:00 stderr F I0813 19:56:50.678369 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T19:56:50.876748957+00:00 stderr F I0813 19:56:50.876645 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 
2025-08-13T19:56:50.876748957+00:00 stderr F I0813 19:56:50.876715 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T19:56:51.084106038+00:00 stderr F I0813 19:56:51.083989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T19:56:51.084106038+00:00 stderr F I0813 19:56:51.084060 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T19:56:51.268857194+00:00 stderr F I0813 19:56:51.268668 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T19:56:51.268857194+00:00 stderr F I0813 19:56:51.268735 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T19:56:51.476582676+00:00 stderr F I0813 19:56:51.476441 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T19:56:51.476582676+00:00 stderr F I0813 19:56:51.476509 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T19:56:51.670874374+00:00 stderr F I0813 19:56:51.670615 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T19:56:51.670874374+00:00 stderr F I0813 19:56:51.670759 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T19:56:51.869480295+00:00 stderr F I0813 19:56:51.869386 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T19:56:51.869480295+00:00 stderr F I0813 19:56:51.869456 1 log.go:245] reconciling 
(rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T19:56:52.072013618+00:00 stderr F I0813 19:56:52.071894 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T19:56:52.072013618+00:00 stderr F I0813 19:56:52.071968 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T19:56:52.272030859+00:00 stderr F I0813 19:56:52.271918 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T19:56:52.272030859+00:00 stderr F I0813 19:56:52.271999 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T19:56:52.469568890+00:00 stderr F I0813 19:56:52.469439 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T19:56:52.469568890+00:00 stderr F I0813 19:56:52.469506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T19:56:52.669634632+00:00 stderr F I0813 19:56:52.669507 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T19:56:52.669634632+00:00 stderr F I0813 19:56:52.669602 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T19:56:52.870676873+00:00 stderr F I0813 19:56:52.870269 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T19:56:52.870676873+00:00 stderr F I0813 19:56:52.870349 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T19:56:53.071203919+00:00 stderr F I0813 
19:56:53.070908 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T19:56:53.071203919+00:00 stderr F I0813 19:56:53.070992 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T19:56:53.272353583+00:00 stderr F I0813 19:56:53.272206 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T19:56:53.272353583+00:00 stderr F I0813 19:56:53.272315 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T19:56:53.470096510+00:00 stderr F I0813 19:56:53.469939 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T19:56:53.470096510+00:00 stderr F I0813 19:56:53.470021 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T19:56:53.670002818+00:00 stderr F I0813 19:56:53.669895 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T19:56:53.670002818+00:00 stderr F I0813 19:56:53.669958 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T19:56:53.853625041+00:00 stderr F E0813 19:56:53.853468 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:56:53.872491880+00:00 stderr F I0813 19:56:53.872351 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T19:56:53.872491880+00:00 stderr F I0813 19:56:53.872427 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T19:56:53.872585833+00:00 stderr F I0813 19:56:53.872497 1 log.go:245] Object (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io has create-wait annotation, skipping apply.
2025-08-13T19:56:53.872585833+00:00 stderr F I0813 19:56:53.872529 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T19:56:54.079313836+00:00 stderr F I0813 19:56:54.079125 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T19:56:54.079313836+00:00 stderr F I0813 19:56:54.079196 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T19:56:54.274740046+00:00 stderr F I0813 19:56:54.274540 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T19:56:54.274740046+00:00 stderr F I0813 19:56:54.274617 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T19:56:54.474113979+00:00 stderr F I0813 19:56:54.473973 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T19:56:54.474113979+00:00 stderr F I0813 19:56:54.474062 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T19:56:54.670616190+00:00 stderr F I0813 19:56:54.670498 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T19:56:54.670616190+00:00 stderr F I0813 19:56:54.670566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T19:56:54.871342992+00:00 stderr F I0813 19:56:54.871161 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T19:56:54.871342992+00:00 stderr F I0813 19:56:54.871256 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T19:56:55.073747421+00:00 stderr F I0813 19:56:55.073604 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T19:56:55.073747421+00:00 stderr F I0813 19:56:55.073706 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T19:56:55.282257015+00:00 stderr F I0813 19:56:55.281910 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T19:56:55.309945396+00:00 stderr F I0813 19:56:55.309311 1 log.go:245] Operconfig Controller complete
2025-08-13T19:57:16.034315282+00:00 stderr F I0813 19:57:16.034152 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.042003061+00:00 stderr F I0813 19:57:16.041865 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.070489014+00:00 stderr F I0813 19:57:16.070373 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.088582771+00:00 stderr F I0813 19:57:16.088455 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.192429817+00:00 stderr F I0813 19:57:16.192311 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.202311909+00:00 stderr F I0813 19:57:16.202193 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.225711577+00:00 stderr F I0813 19:57:16.225656 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.245669617+00:00 stderr F I0813 19:57:16.245515 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.254938681+00:00 stderr F I0813 19:57:16.254894 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.281678735+00:00 stderr F I0813 19:57:16.281563 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.339818425+00:00 stderr F I0813 19:57:16.339719 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.431200795+00:00 stderr F I0813 19:57:16.431143 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.761302101+00:00 stderr F I0813 19:57:16.761186 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:57:16.761302101+00:00 stderr F I0813 19:57:16.761234 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:57:16.802314282+00:00 stderr F I0813 19:57:16.800418 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:57:16.802314282+00:00 stderr F I0813 19:57:16.800463 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:57:16.808145299+00:00 stderr F I0813 19:57:16.808077 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:57:16.808291143+00:00 stderr F I0813 19:57:16.808189 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:57:16.969932258+00:00 stderr F I0813 19:57:16.969745 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "False"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   message: |-
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:16.969932258+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "True"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: Degraded
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "True"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   message: |-
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:16.969932258+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:16.969932258+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:16.969932258+00:00 stderr F   reason: Deploying
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "True"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: Progressing
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "True"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: Available
2025-08-13T19:57:16.972714958+00:00 stderr F I0813 19:57:16.972639 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:17.003603950+00:00 stderr F I0813 19:57:17.003501 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:17.003603950+00:00 stderr F   message: |-
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:17.003603950+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:17.003603950+00:00 stderr F   status: "True"
2025-08-13T19:57:17.003603950+00:00 stderr F   type: Degraded
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.003603950+00:00 stderr F   status: "True"
2025-08-13T19:57:17.003603950+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.003603950+00:00 stderr F   status: "False"
2025-08-13T19:57:17.003603950+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:17.003603950+00:00 stderr F   message: |-
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.003603950+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:17.003603950+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.003603950+00:00 stderr F   reason: Deploying
2025-08-13T19:57:17.003603950+00:00 stderr F   status: "True"
2025-08-13T19:57:17.003603950+00:00 stderr F   type: Progressing
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:17.003603950+00:00 stderr F   status: "True"
2025-08-13T19:57:17.003603950+00:00 stderr F   type: Available
2025-08-13T19:57:17.027331517+00:00 stderr F I0813 19:57:17.025409 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.027331517+00:00 stderr F   status: "False"
2025-08-13T19:57:17.027331517+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:17.027331517+00:00 stderr F   message: |-
2025-08-13T19:57:17.027331517+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:17.027331517+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:17.027331517+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:17.027331517+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:17.027331517+00:00 stderr F   status: "True"
2025-08-13T19:57:17.027331517+00:00 stderr F   type: Degraded
2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.027331517+00:00 stderr F   status: "True"
2025-08-13T19:57:17.027331517+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:17.027331517+00:00 stderr F   message: |-
2025-08-13T19:57:17.027331517+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.027331517+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:17.027331517+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.027331517+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:17.027331517+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.027331517+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:17.027331517+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.027331517+00:00 stderr F   reason: Deploying
2025-08-13T19:57:17.027331517+00:00 stderr F   status: "True"
2025-08-13T19:57:17.027331517+00:00 stderr F   type: Progressing
2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:17.027331517+00:00 stderr F   status: "True"
2025-08-13T19:57:17.027331517+00:00 stderr F   type: Available
2025-08-13T19:57:17.027331517+00:00 stderr F I0813 19:57:17.025657 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:17.064305253+00:00 stderr F I0813 19:57:17.064072 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:17.064305253+00:00 stderr F   message: |-
2025-08-13T19:57:17.064305253+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:17.064305253+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:17.064305253+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:17.064305253+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:17.064305253+00:00 stderr F   status: "True"
2025-08-13T19:57:17.064305253+00:00 stderr F   type: Degraded
2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.064305253+00:00 stderr F   status: "True"
2025-08-13T19:57:17.064305253+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.064305253+00:00 stderr F   status: "False"
2025-08-13T19:57:17.064305253+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:17.064305253+00:00 stderr F   message: |-
2025-08-13T19:57:17.064305253+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.064305253+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:17.064305253+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.064305253+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:17.064305253+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.064305253+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:17.064305253+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.064305253+00:00 stderr F   reason: Deploying
2025-08-13T19:57:17.064305253+00:00 stderr F   status: "True"
2025-08-13T19:57:17.064305253+00:00 stderr F   type: Progressing
2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:17.064305253+00:00 stderr F   status: "True"
2025-08-13T19:57:17.064305253+00:00 stderr F   type: Available
2025-08-13T19:57:17.088951607+00:00 stderr F I0813 19:57:17.088886 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:57:17.089022279+00:00 stderr F I0813 19:57:17.089009 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:57:17.201901152+00:00 stderr F I0813 19:57:17.201844 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.201901152+00:00 stderr F   status: "False"
2025-08-13T19:57:17.201901152+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:17.201901152+00:00 stderr F   message: |-
2025-08-13T19:57:17.201901152+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:17.201901152+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:17.201901152+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:17.201901152+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:17.201901152+00:00 stderr F   status: "True"
2025-08-13T19:57:17.201901152+00:00 stderr F   type: Degraded
2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.201901152+00:00 stderr F   status: "True"
2025-08-13T19:57:17.201901152+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:17.201901152+00:00 stderr F   message: |-
2025-08-13T19:57:17.201901152+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:17.201901152+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:17.201901152+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.201901152+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.201901152+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:17.201901152+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.201901152+00:00 stderr F   reason: Deploying
2025-08-13T19:57:17.201901152+00:00 stderr F   status: "True"
2025-08-13T19:57:17.201901152+00:00 stderr F   type: Progressing
2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:17.201901152+00:00 stderr F   status: "True"
2025-08-13T19:57:17.201901152+00:00 stderr F   type: Available
2025-08-13T19:57:17.202354635+00:00 stderr F I0813 19:57:17.202188 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:17.223129958+00:00 stderr F I0813 19:57:17.222995 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T19:57:17.232103015+00:00 stderr F I0813 19:57:17.232074 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:57:17.245811336+00:00 stderr F I0813 19:57:17.245701 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:17.245811336+00:00 stderr F   message: |-
2025-08-13T19:57:17.245811336+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:17.245811336+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:17.245811336+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:17.245811336+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:17.245811336+00:00 stderr F   status: "True"
2025-08-13T19:57:17.245811336+00:00 stderr F   type: Degraded
2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.245811336+00:00 stderr F   status: "True"
2025-08-13T19:57:17.245811336+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.245811336+00:00 stderr F   status: "False"
2025-08-13T19:57:17.245811336+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:17.245811336+00:00 stderr F   message: |-
2025-08-13T19:57:17.245811336+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:17.245811336+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:17.245811336+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.245811336+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.245811336+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:17.245811336+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.245811336+00:00 stderr F   reason: Deploying
2025-08-13T19:57:17.245811336+00:00 stderr F   status: "True"
2025-08-13T19:57:17.245811336+00:00 stderr F   type: Progressing
2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:17.245811336+00:00 stderr F   status: "True"
2025-08-13T19:57:17.245811336+00:00 stderr F   type: Available
2025-08-13T19:57:17.249137321+00:00 stderr F I0813 19:57:17.249110 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:17.259308692+00:00 stderr F I0813 19:57:17.259190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:17.270118710+00:00 stderr F I0813 19:57:17.269988 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:57:17.274218327+00:00 stderr F I0813 19:57:17.274162 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:17.275952317+00:00 stderr F I0813 19:57:17.274720 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.275952317+00:00 stderr F   status: "False"
2025-08-13T19:57:17.275952317+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:17.275952317+00:00 stderr F   message: |-
2025-08-13T19:57:17.275952317+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:17.275952317+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:17.275952317+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:17.275952317+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:17.275952317+00:00 stderr F   status: "True"
2025-08-13T19:57:17.275952317+00:00 stderr F   type: Degraded
2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.275952317+00:00 stderr F   status: "True"
2025-08-13T19:57:17.275952317+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:17.275952317+00:00 stderr F   message: |-
2025-08-13T19:57:17.275952317+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:17.275952317+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:17.275952317+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.275952317+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.275952317+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:17.275952317+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.275952317+00:00 stderr F   reason: Deploying
2025-08-13T19:57:17.275952317+00:00 stderr F   status: "True"
2025-08-13T19:57:17.275952317+00:00 stderr F   type: Progressing
2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:17.275952317+00:00 stderr F   status: "True"
2025-08-13T19:57:17.275952317+00:00 stderr F   type: Available
2025-08-13T19:57:17.438910890+00:00 stderr F I0813 19:57:17.438515 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:57:17.591118266+00:00 stderr F I0813 19:57:17.587982 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:17.591118266+00:00 stderr F   message: |-
2025-08-13T19:57:17.591118266+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:17.591118266+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:17.591118266+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:17.591118266+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:17.591118266+00:00 stderr F   status: "True"
2025-08-13T19:57:17.591118266+00:00 stderr F   type: Degraded
2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.591118266+00:00 stderr F   status: "True"
2025-08-13T19:57:17.591118266+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.591118266+00:00 stderr F   status: "False"
2025-08-13T19:57:17.591118266+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:17.591118266+00:00 stderr F   message: |-
2025-08-13T19:57:17.591118266+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:17.591118266+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:17.591118266+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.591118266+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.591118266+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:17.591118266+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:17.591118266+00:00 stderr F   reason: Deploying
2025-08-13T19:57:17.591118266+00:00 stderr F   status: "True"
2025-08-13T19:57:17.591118266+00:00 stderr F   type: Progressing
2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:17.591118266+00:00 stderr F   status: "True"
2025-08-13T19:57:17.591118266+00:00 stderr F   type: Available
2025-08-13T19:57:17.632087666+00:00 stderr F I0813 19:57:17.631991 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:17.642120213+00:00 stderr F I0813 19:57:17.641978 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T19:57:17.831496050+00:00 stderr F I0813 19:57:17.831406 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:57:18.036197856+00:00 stderr F I0813 19:57:18.035861 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:18.133008850+00:00 stderr F I0813 19:57:18.132935 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:57:18.133102643+00:00 stderr F I0813 19:57:18.133088 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:57:18.196054480+00:00 stderr F I0813 19:57:18.195709 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:18.196054480+00:00 stderr F   status: "False"
2025-08-13T19:57:18.196054480+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:18.196054480+00:00 stderr F   message: |-
2025-08-13T19:57:18.196054480+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:18.196054480+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:18.196054480+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:18.196054480+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:18.196054480+00:00 stderr F   status: "True"
2025-08-13T19:57:18.196054480+00:00 stderr F   type: Degraded
2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:18.196054480+00:00 stderr F   status: "True"
2025-08-13T19:57:18.196054480+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:18.196054480+00:00 stderr F   message: |-
2025-08-13T19:57:18.196054480+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:18.196054480+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:18.196054480+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:18.196054480+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:18.196054480+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:18.196054480+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:18.196054480+00:00 stderr F   reason: Deploying
2025-08-13T19:57:18.196054480+00:00 stderr F   status: "True"
2025-08-13T19:57:18.196054480+00:00 stderr F   type: Progressing
2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:18.196054480+00:00 stderr F   status: "True"
2025-08-13T19:57:18.196054480+00:00 stderr F   type: Available
2025-08-13T19:57:18.202886366+00:00 stderr F I0813 19:57:18.202753 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:18.236058863+00:00 stderr F I0813 19:57:18.235671 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:57:18.430881896+00:00 stderr F I0813 19:57:18.430665 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:18.579966113+00:00 stderr F I0813 19:57:18.579881 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:18.579966113+00:00 stderr F   message: |-
2025-08-13T19:57:18.579966113+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:18.579966113+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:18.579966113+00:00 stderr F     DaemonSet "/openshift-multus/multus"
rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.579966113+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Degraded 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.579966113+00:00 stderr F status: "False" 2025-08-13T19:57:18.579966113+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.579966113+00:00 stderr F message: |- 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Progressing 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.579966113+00:00 stderr F 
status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Available 2025-08-13T19:57:18.603711061+00:00 stderr F I0813 19:57:18.603625 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.603711061+00:00 stderr F status: "False" 2025-08-13T19:57:18.603711061+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.603711061+00:00 stderr F message: |- 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.603711061+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Degraded 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.603711061+00:00 stderr F message: |- 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is 
waiting for other operators to become ready 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.603711061+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:18.603711061+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.603711061+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Progressing 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Available 2025-08-13T19:57:18.604328479+00:00 stderr F I0813 19:57:18.604264 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:18.630157446+00:00 stderr F I0813 19:57:18.630055 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.831180777+00:00 stderr F I0813 19:57:18.831084 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.982905299+00:00 stderr F I0813 19:57:18.982710 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.982905299+00:00 stderr F message: |- 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - 
pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.982905299+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.982905299+00:00 stderr F status: "True" 2025-08-13T19:57:18.982905299+00:00 stderr F type: Degraded 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.982905299+00:00 stderr F status: "True" 2025-08-13T19:57:18.982905299+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.982905299+00:00 stderr F status: "False" 2025-08-13T19:57:18.982905299+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.982905299+00:00 stderr F message: |- 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.982905299+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become 
ready 2025-08-13T19:57:18.982905299+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.982905299+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.982905299+00:00 stderr F status: "True" 2025-08-13T19:57:18.982905299+00:00 stderr F type: Progressing 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.982905299+00:00 stderr F status: "True" 2025-08-13T19:57:18.982905299+00:00 stderr F type: Available 2025-08-13T19:57:19.031076985+00:00 stderr F I0813 19:57:19.030963 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:19.674537108+00:00 stderr F I0813 19:57:19.674439 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:19.674537108+00:00 stderr F status: "False" 2025-08-13T19:57:19.674537108+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:19.674537108+00:00 stderr F message: |- 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:19.674537108+00:00 stderr F reason: RolloutHung 
2025-08-13T19:57:19.674537108+00:00 stderr F status: "True" 2025-08-13T19:57:19.674537108+00:00 stderr F type: Degraded 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:19.674537108+00:00 stderr F status: "True" 2025-08-13T19:57:19.674537108+00:00 stderr F type: Upgradeable 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:19.674537108+00:00 stderr F message: |- 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:19.674537108+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:19.674537108+00:00 stderr F reason: Deploying 2025-08-13T19:57:19.674537108+00:00 stderr F status: "True" 2025-08-13T19:57:19.674537108+00:00 stderr F type: Progressing 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:19.674537108+00:00 stderr F status: "True" 2025-08-13T19:57:19.674537108+00:00 stderr F type: Available 2025-08-13T19:57:19.675118124+00:00 stderr F I0813 19:57:19.675020 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:19.984504589+00:00 stderr F I0813 19:57:19.984386 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:19.984504589+00:00 stderr F 
message: |- 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:19.984504589+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:19.984504589+00:00 stderr F status: "True" 2025-08-13T19:57:19.984504589+00:00 stderr F type: Degraded 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:19.984504589+00:00 stderr F status: "True" 2025-08-13T19:57:19.984504589+00:00 stderr F type: Upgradeable 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:19.984504589+00:00 stderr F status: "False" 2025-08-13T19:57:19.984504589+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:19.984504589+00:00 stderr F message: |- 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 
2025-08-13T19:57:19.984504589+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:19.984504589+00:00 stderr F reason: Deploying 2025-08-13T19:57:19.984504589+00:00 stderr F status: "True" 2025-08-13T19:57:19.984504589+00:00 stderr F type: Progressing 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:19.984504589+00:00 stderr F status: "True" 2025-08-13T19:57:19.984504589+00:00 stderr F type: Available 2025-08-13T19:57:20.007128255+00:00 stderr F I0813 19:57:20.006991 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:20.007128255+00:00 stderr F status: "False" 2025-08-13T19:57:20.007128255+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:20.007128255+00:00 stderr F message: |- 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:20.007128255+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:20.007128255+00:00 stderr F status: "True" 2025-08-13T19:57:20.007128255+00:00 stderr F type: Degraded 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 
2025-08-13T19:57:20.007128255+00:00 stderr F status: "True" 2025-08-13T19:57:20.007128255+00:00 stderr F type: Upgradeable 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:20.007128255+00:00 stderr F message: |- 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:20.007128255+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:20.007128255+00:00 stderr F reason: Deploying 2025-08-13T19:57:20.007128255+00:00 stderr F status: "True" 2025-08-13T19:57:20.007128255+00:00 stderr F type: Progressing 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:20.007128255+00:00 stderr F status: "True" 2025-08-13T19:57:20.007128255+00:00 stderr F type: Available 2025-08-13T19:57:20.007128255+00:00 stderr F I0813 19:57:20.007109 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:20.384928903+00:00 stderr F I0813 19:57:20.384687 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:20.384928903+00:00 stderr F message: |- 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:20.384928903+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:20.384928903+00:00 stderr F status: "True" 2025-08-13T19:57:20.384928903+00:00 stderr F type: Degraded 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:20.384928903+00:00 stderr F status: "True" 2025-08-13T19:57:20.384928903+00:00 stderr F type: Upgradeable 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:20.384928903+00:00 stderr F status: "False" 2025-08-13T19:57:20.384928903+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:20.384928903+00:00 stderr F message: |- 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:20.384928903+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:20.384928903+00:00 stderr F reason: Deploying 
2025-08-13T19:57:20.384928903+00:00 stderr F status: "True" 2025-08-13T19:57:20.384928903+00:00 stderr F type: Progressing 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:20.384928903+00:00 stderr F status: "True" 2025-08-13T19:57:20.384928903+00:00 stderr F type: Available 2025-08-13T19:57:27.646199035+00:00 stderr F E0813 19:57:27.645984 1 allowlist_controller.go:142] Failed to verify ready status on allowlist daemonset pods: client rate limiter Wait returned an error: context deadline exceeded 2025-08-13T19:57:36.833565267+00:00 stderr F I0813 19:57:36.829316 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:57:36.834918476+00:00 stderr F I0813 19:57:36.834168 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:57:36.943902088+00:00 stderr F I0813 19:57:36.943626 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:57:36.943902088+00:00 stderr F I0813 19:57:36.943743 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:57:37.094172179+00:00 stderr F I0813 19:57:37.088743 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:37.094172179+00:00 stderr F I0813 19:57:37.090964 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.094172179+00:00 stderr F status: "False" 2025-08-13T19:57:37.094172179+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:37.094172179+00:00 stderr F message: |- 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod 
ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:37.094172179+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:37.094172179+00:00 stderr F status: "True" 2025-08-13T19:57:37.094172179+00:00 stderr F type: Degraded 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.094172179+00:00 stderr F status: "True" 2025-08-13T19:57:37.094172179+00:00 stderr F type: Upgradeable 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:37.094172179+00:00 stderr F message: |- 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:37.094172179+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:37.094172179+00:00 stderr F reason: Deploying 2025-08-13T19:57:37.094172179+00:00 stderr F status: "True" 2025-08-13T19:57:37.094172179+00:00 stderr F type: Progressing 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:37.094172179+00:00 stderr F status: "True" 
2025-08-13T19:57:37.094172179+00:00 stderr F type: Available 2025-08-13T19:57:37.165531076+00:00 stderr F I0813 19:57:37.165462 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:37.165531076+00:00 stderr F message: |- 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:37.165531076+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:37.165531076+00:00 stderr F status: "True" 2025-08-13T19:57:37.165531076+00:00 stderr F type: Degraded 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.165531076+00:00 stderr F status: "True" 2025-08-13T19:57:37.165531076+00:00 stderr F type: Upgradeable 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.165531076+00:00 stderr F status: "False" 2025-08-13T19:57:37.165531076+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:37.165531076+00:00 stderr F message: |- 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet 
"/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:37.165531076+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:37.165531076+00:00 stderr F reason: Deploying 2025-08-13T19:57:37.165531076+00:00 stderr F status: "True" 2025-08-13T19:57:37.165531076+00:00 stderr F type: Progressing 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:37.165531076+00:00 stderr F status: "True" 2025-08-13T19:57:37.165531076+00:00 stderr F type: Available 2025-08-13T19:57:37.199699932+00:00 stderr F I0813 19:57:37.199578 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:37.200026730+00:00 stderr F I0813 19:57:37.199884 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.200026730+00:00 stderr F status: "False" 2025-08-13T19:57:37.200026730+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:37.200026730+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making 2025-08-13T19:57:37.200026730+00:00 stderr F progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:37.200026730+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:37.200026730+00:00 stderr F status: "True" 2025-08-13T19:57:37.200026730+00:00 stderr F type: Degraded 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.200026730+00:00 stderr F status: "True" 2025-08-13T19:57:37.200026730+00:00 stderr F type: Upgradeable 
2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:37.200026730+00:00 stderr F message: |- 2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:37.200026730+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:37.200026730+00:00 stderr F reason: Deploying 2025-08-13T19:57:37.200026730+00:00 stderr F status: "True" 2025-08-13T19:57:37.200026730+00:00 stderr F type: Progressing 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:37.200026730+00:00 stderr F status: "True" 2025-08-13T19:57:37.200026730+00:00 stderr F type: Available 2025-08-13T19:57:37.267659872+00:00 stderr F I0813 19:57:37.267557 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:37.267659872+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making 2025-08-13T19:57:37.267659872+00:00 stderr F progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:37.267659872+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:37.267659872+00:00 stderr F status: "True" 2025-08-13T19:57:37.267659872+00:00 stderr F type: Degraded 2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.267659872+00:00 stderr F status: "True" 2025-08-13T19:57:37.267659872+00:00 stderr F type: Upgradeable 2025-08-13T19:57:37.267659872+00:00 
stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.267659872+00:00 stderr F status: "False"
2025-08-13T19:57:37.267659872+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:37.267659872+00:00 stderr F message: |-
2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:37.267659872+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:37.267659872+00:00 stderr F reason: Deploying
2025-08-13T19:57:37.267659872+00:00 stderr F status: "True"
2025-08-13T19:57:37.267659872+00:00 stderr F type: Progressing
2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:37.267659872+00:00 stderr F status: "True"
2025-08-13T19:57:37.267659872+00:00 stderr F type: Available
2025-08-13T19:57:40.331406117+00:00 stderr F I0813 19:57:40.331298 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:57:40.331406117+00:00 stderr F I0813 19:57:40.331339 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:57:40.363359359+00:00 stderr F I0813 19:57:40.362143 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:57:40.363359359+00:00 stderr F I0813 19:57:40.362189 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:57:40.422123417+00:00 stderr F I0813 19:57:40.421990 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:40.422558630+00:00 stderr F I0813 19:57:40.422454 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.422558630+00:00 stderr F status: "False"
2025-08-13T19:57:40.422558630+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:40.422558630+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
2025-08-13T19:57:40.422558630+00:00 stderr F progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:40.422558630+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:40.422558630+00:00 stderr F status: "True"
2025-08-13T19:57:40.422558630+00:00 stderr F type: Degraded
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.422558630+00:00 stderr F status: "True"
2025-08-13T19:57:40.422558630+00:00 stderr F type: Upgradeable
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:40.422558630+00:00 stderr F message: |-
2025-08-13T19:57:40.422558630+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:40.422558630+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:40.422558630+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:40.422558630+00:00 stderr F reason: Deploying
2025-08-13T19:57:40.422558630+00:00 stderr F status: "True"
2025-08-13T19:57:40.422558630+00:00 stderr F type: Progressing
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:40.422558630+00:00 stderr F status: "True"
2025-08-13T19:57:40.422558630+00:00 stderr F type: Available
2025-08-13T19:57:40.451011842+00:00 stderr F I0813 19:57:40.450905 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:40.451011842+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
2025-08-13T19:57:40.451011842+00:00 stderr F progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:40.451011842+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True"
2025-08-13T19:57:40.451011842+00:00 stderr F type: Degraded
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True"
2025-08-13T19:57:40.451011842+00:00 stderr F type: Upgradeable
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.451011842+00:00 stderr F status: "False"
2025-08-13T19:57:40.451011842+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:40.451011842+00:00 stderr F message: |-
2025-08-13T19:57:40.451011842+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:40.451011842+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:40.451011842+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:40.451011842+00:00 stderr F reason: Deploying
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True"
2025-08-13T19:57:40.451011842+00:00 stderr F type: Progressing
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True"
2025-08-13T19:57:40.451011842+00:00 stderr F type: Available
2025-08-13T19:57:40.483402657+00:00 stderr F I0813 19:57:40.483342 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:40.485476046+00:00 stderr F I0813 19:57:40.484150 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.485476046+00:00 stderr F status: "False"
2025-08-13T19:57:40.485476046+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:57:40.485476046+00:00 stderr F status: "False"
2025-08-13T19:57:40.485476046+00:00 stderr F type: Degraded
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.485476046+00:00 stderr F status: "True"
2025-08-13T19:57:40.485476046+00:00 stderr F type: Upgradeable
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:40.485476046+00:00 stderr F message: |-
2025-08-13T19:57:40.485476046+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:40.485476046+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:40.485476046+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:40.485476046+00:00 stderr F reason: Deploying
2025-08-13T19:57:40.485476046+00:00 stderr F status: "True"
2025-08-13T19:57:40.485476046+00:00 stderr F type: Progressing
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:40.485476046+00:00 stderr F status: "True"
2025-08-13T19:57:40.485476046+00:00 stderr F type: Available
2025-08-13T19:57:40.515236706+00:00 stderr F I0813 19:57:40.515082 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:57:40.515236706+00:00 stderr F status: "False"
2025-08-13T19:57:40.515236706+00:00 stderr F type: Degraded
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.515236706+00:00 stderr F status: "True"
2025-08-13T19:57:40.515236706+00:00 stderr F type: Upgradeable
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.515236706+00:00 stderr F status: "False"
2025-08-13T19:57:40.515236706+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:40.515236706+00:00 stderr F message: |-
2025-08-13T19:57:40.515236706+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:40.515236706+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:40.515236706+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:40.515236706+00:00 stderr F reason: Deploying
2025-08-13T19:57:40.515236706+00:00 stderr F status: "True"
2025-08-13T19:57:40.515236706+00:00 stderr F type: Progressing
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:40.515236706+00:00 stderr F status: "True"
2025-08-13T19:57:40.515236706+00:00 stderr F type: Available
2025-08-13T19:57:48.718553717+00:00 stderr F I0813 19:57:48.718162 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status
2025-08-13T19:57:48.718553717+00:00 stderr F I0813 19:57:48.718217 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status
2025-08-13T19:57:50.186632608+00:00 stderr F I0813 19:57:50.186549 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:50.186632608+00:00 stderr F status: "False"
2025-08-13T19:57:50.186632608+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:57:50.186632608+00:00 stderr F status: "False"
2025-08-13T19:57:50.186632608+00:00 stderr F type: Degraded
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:50.186632608+00:00 stderr F status: "True"
2025-08-13T19:57:50.186632608+00:00 stderr F type: Upgradeable
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:50.186632608+00:00 stderr F message: |-
2025-08-13T19:57:50.186632608+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:50.186632608+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:50.186632608+00:00 stderr F reason: Deploying
2025-08-13T19:57:50.186632608+00:00 stderr F status: "True"
2025-08-13T19:57:50.186632608+00:00 stderr F type: Progressing
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:50.186632608+00:00 stderr F status: "True"
2025-08-13T19:57:50.186632608+00:00 stderr F type: Available
2025-08-13T19:57:50.188397668+00:00 stderr F I0813 19:57:50.187123 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:50.711039923+00:00 stderr F I0813 19:57:50.710052 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:57:50.711039923+00:00 stderr F status: "False"
2025-08-13T19:57:50.711039923+00:00 stderr F type: Degraded
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:50.711039923+00:00 stderr F status: "True"
2025-08-13T19:57:50.711039923+00:00 stderr F type: Upgradeable
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:50.711039923+00:00 stderr F status: "False"
2025-08-13T19:57:50.711039923+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:50.711039923+00:00 stderr F message: |-
2025-08-13T19:57:50.711039923+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:50.711039923+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:50.711039923+00:00 stderr F reason: Deploying
2025-08-13T19:57:50.711039923+00:00 stderr F status: "True"
2025-08-13T19:57:50.711039923+00:00 stderr F type: Progressing
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:50.711039923+00:00 stderr F status: "True"
2025-08-13T19:57:50.711039923+00:00 stderr F type: Available
2025-08-13T19:57:53.854169574+00:00 stderr F E0813 19:57:53.854019 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:58:53.853220130+00:00 stderr F E0813 19:58:53.852971 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.933536645+00:00 stderr F I0813 19:59:16.928079 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T19:59:17.449368829+00:00 stderr F I0813 19:59:17.449298 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:17.540009033+00:00 stderr F I0813 19:59:17.539733 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:17.797601196+00:00 stderr F I0813 19:59:17.794238 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:17.849392622+00:00 stderr F I0813 19:59:17.849009 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:18.021281852+00:00 stderr F I0813 19:59:18.020347 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:18.078086281+00:00 stderr F I0813 19:59:18.076133 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status
2025-08-13T19:59:18.078086281+00:00 stderr F I0813 19:59:18.076275 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status
2025-08-13T19:59:18.247723307+00:00 stderr F I0813 19:59:18.247439 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:18.718923629+00:00 stderr F I0813 19:59:18.716957 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:19.320117607+00:00 stderr F I0813 19:59:19.294730 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T19:59:19.725952816+00:00 stderr F I0813 19:59:19.703422 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:20.361934375+00:00 stderr F I0813 19:59:20.359979 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:20.786513098+00:00 stderr F I0813 19:59:20.784268 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:21.111025047+00:00 stderr F I0813 19:59:21.108740 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:21.648204880+00:00 stderr F I0813 19:59:21.648113 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:21.852747820+00:00 stderr F I0813 19:59:21.852489 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:21.966195494+00:00 stderr F I0813 19:59:21.945572 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:22.005721751+00:00 stderr F I0813 19:59:22.003273 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:59:22.007893113+00:00 stderr F I0813 19:59:22.006732 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:22.007893113+00:00 stderr F status: "False"
2025-08-13T19:59:22.007893113+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:59:22.007893113+00:00 stderr F status: "False"
2025-08-13T19:59:22.007893113+00:00 stderr F type: Degraded
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:22.007893113+00:00 stderr F status: "True"
2025-08-13T19:59:22.007893113+00:00 stderr F type: Upgradeable
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:59:22.007893113+00:00 stderr F message: Deployment "/openshift-multus/multus-admission-controller" is waiting for
2025-08-13T19:59:22.007893113+00:00 stderr F other operators to become ready
2025-08-13T19:59:22.007893113+00:00 stderr F reason: Deploying
2025-08-13T19:59:22.007893113+00:00 stderr F status: "True"
2025-08-13T19:59:22.007893113+00:00 stderr F type: Progressing
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:59:22.007893113+00:00 stderr F status: "True"
2025-08-13T19:59:22.007893113+00:00 stderr F type: Available
2025-08-13T19:59:22.623140311+00:00 stderr F I0813 19:59:22.622565 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:59:22.623140311+00:00 stderr F status: "False"
2025-08-13T19:59:22.623140311+00:00 stderr F type: Degraded
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:22.623140311+00:00 stderr F status: "True"
2025-08-13T19:59:22.623140311+00:00 stderr F type: Upgradeable
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:22.623140311+00:00 stderr F status: "False"
2025-08-13T19:59:22.623140311+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:59:22.623140311+00:00 stderr F message: Deployment "/openshift-multus/multus-admission-controller" is waiting for
2025-08-13T19:59:22.623140311+00:00 stderr F other operators to become ready
2025-08-13T19:59:22.623140311+00:00 stderr F reason: Deploying
2025-08-13T19:59:22.623140311+00:00 stderr F status: "True"
2025-08-13T19:59:22.623140311+00:00 stderr F type: Progressing
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:59:22.623140311+00:00 stderr F status: "True"
2025-08-13T19:59:22.623140311+00:00 stderr F type: Available
2025-08-13T19:59:23.111462501+00:00 stderr F I0813 19:59:23.104684 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:23.350067482+00:00 stderr F I0813 19:59:23.315737 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:23.446624725+00:00 stderr F I0813 19:59:23.425940 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:23.962965053+00:00 stderr F I0813 19:59:23.962769 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:24.943743630+00:00 stderr F I0813 19:59:24.935070 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:25.812553016+00:00 stderr F I0813 19:59:25.811360 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:26.292940969+00:00 stderr F I0813 19:59:26.292674 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:26.995629630+00:00 stderr F I0813 19:59:26.995568 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:27.126353436+00:00 stderr F I0813 19:59:27.126299 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:27.330258469+00:00 stderr F I0813 19:59:27.330199 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:27.764258490+00:00 stderr F I0813 19:59:27.760309 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:27.805641620+00:00 stderr F I0813 19:59:27.802246 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T19:59:28.078679293+00:00 stderr F I0813 19:59:28.078586 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:28.101021519+00:00 stderr F I0813 19:59:28.098559 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status
2025-08-13T19:59:28.101021519+00:00 stderr F I0813 19:59:28.098616 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status
2025-08-13T19:59:28.363168061+00:00 stderr F I0813 19:59:28.363039 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:28.696077801+00:00 stderr F I0813 19:59:28.692985 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:28.843750770+00:00 stderr F I0813 19:59:28.839683 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.196290770+00:00 stderr F I0813 19:59:29.196230 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.349409284+00:00 stderr F I0813 19:59:29.348558 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.477965059+00:00 stderr F I0813 19:59:29.477700 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.641111209+00:00 stderr F I0813 19:59:29.641055 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.699073332+00:00 stderr F I0813 19:59:29.685576 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:59:29.709884170+00:00 stderr F I0813 19:59:29.709004 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:29.709884170+00:00 stderr F status: "False"
2025-08-13T19:59:29.709884170+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:59:29.709884170+00:00 stderr F status: "False"
2025-08-13T19:59:29.709884170+00:00 stderr F type: Degraded
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:29.709884170+00:00 stderr F status: "True"
2025-08-13T19:59:29.709884170+00:00 stderr F type: Upgradeable
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2025-08-13T19:59:29Z"
2025-08-13T19:59:29.709884170+00:00 stderr F status: "False"
2025-08-13T19:59:29.709884170+00:00 stderr F type: Progressing
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:59:29.709884170+00:00 stderr F status: "True"
2025-08-13T19:59:29.709884170+00:00 stderr F type: Available
2025-08-13T19:59:30.241056071+00:00 stderr F I0813 19:59:30.241003 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:31.334293824+00:00 stderr F I0813 19:59:31.332858 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:31.510259090+00:00 stderr F I0813 19:59:31.509102 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:59:31.510259090+00:00 stderr F status: "False"
2025-08-13T19:59:31.510259090+00:00 stderr F type: Degraded
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:31.510259090+00:00 stderr F status: "True"
2025-08-13T19:59:31.510259090+00:00 stderr F type: Upgradeable
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:31.510259090+00:00 stderr F status: "False"
2025-08-13T19:59:31.510259090+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2025-08-13T19:59:30Z"
2025-08-13T19:59:31.510259090+00:00 stderr F status: "False"
2025-08-13T19:59:31.510259090+00:00 stderr F type: Progressing
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:59:31.510259090+00:00 stderr F status: "True"
2025-08-13T19:59:31.510259090+00:00 stderr F type: Available
2025-08-13T19:59:48.140554559+00:00 stderr F I0813 19:59:48.131293 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:48.391260996+00:00 stderr F I0813 19:59:48.389276 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:48.570048173+00:00 stderr F I0813 19:59:48.568410 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:48.833236585+00:00 stderr F I0813 19:59:48.832707 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:48.974349148+00:00 stderr F I0813 19:59:48.973569 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:49.216714687+00:00 stderr F I0813 19:59:49.205400 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:49.290936663+00:00 stderr F I0813 19:59:49.287212 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:49.426012323+00:00 stderr F I0813 19:59:49.425152 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:50.346580474+00:00 stderr F I0813 19:59:50.346523 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:50.613374570+00:00 stderr F I0813 19:59:50.609724 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:50.864720594+00:00 stderr F I0813 19:59:50.864066 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:51.243916454+00:00 stderr F I0813 19:59:51.243825 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:51.804263647+00:00 stderr F I0813 19:59:51.804204 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:51.834284163+00:00 stderr F I0813 19:59:51.834085 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:51.979645897+00:00 stderr F I0813 19:59:51.979480 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.954065868 +0000 UTC))"
2025-08-13T19:59:51.979891574+00:00 stderr F I0813 19:59:51.979865 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.979721539 +0000 UTC))"
2025-08-13T19:59:51.980096460+00:00 stderr F I0813 19:59:51.979996 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.979966846 +0000 UTC))"
2025-08-13T19:59:51.980178442+00:00 stderr F I0813 19:59:51.980157 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.98011923 +0000 UTC))"
2025-08-13T19:59:51.980242584+00:00 stderr F I0813 19:59:51.980217 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980200393 +0000 UTC))"
2025-08-13T19:59:51.980446290+00:00 stderr F I0813 19:59:51.980416 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980275205 +0000 UTC))"
2025-08-13T19:59:51.980535682+00:00 stderr F I0813 19:59:51.980508 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.98047528 +0000 UTC))"
2025-08-13T19:59:51.980603604+00:00 stderr F I0813 19:59:51.980583 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980560143 +0000 UTC))"
2025-08-13T19:59:51.980667716+00:00 stderr F I0813 19:59:51.980653 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980630925 +0000 UTC))"
2025-08-13T19:59:52.032246636+00:00 stderr F I0813 19:59:52.030423 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 19:59:52.030310541 +0000 UTC))"
2025-08-13T19:59:52.032246636+00:00 stderr F I0813 19:59:52.031124 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 19:59:52.031027672 +0000 UTC))"
2025-08-13T19:59:52.146350009+00:00 stderr F I0813 19:59:52.144680 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:52.231451655+00:00 stderr F I0813 19:59:52.230944 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:52.309745156+00:00 stderr F I0813 19:59:52.309454 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:52.637698115+00:00 stderr F I0813 19:59:52.634134 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T19:59:55.316949758+00:00 stderr F I0813 19:59:55.316873 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T19:59:58.072087613+00:00 stderr F I0813 19:59:58.071428 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.305632231+00:00 stderr F I0813 19:59:58.304088 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.377723176+00:00 stderr F I0813 19:59:58.367820 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.434523765+00:00 stderr F I0813 19:59:58.434098 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.482619416+00:00 stderr F I0813 19:59:58.480009 1 log.go:245] The check
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.528341879+00:00 stderr F I0813 19:59:58.522759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.545087616+00:00 stderr F I0813 19:59:58.543626 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.600185317+00:00 stderr F I0813 19:59:58.589599 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.703059880+00:00 stderr F I0813 19:59:58.701218 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.738516400+00:00 stderr F I0813 19:59:58.737600 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.787730443+00:00 stderr F I0813 19:59:58.787055 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.841489106+00:00 stderr F I0813 19:59:58.838749 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.884679367+00:00 stderr F I0813 19:59:58.856358 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.963088732+00:00 stderr F I0813 19:59:58.961511 1 log.go:245] The 
check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:59.131081701+00:00 stderr F I0813 19:59:59.125180 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:59.191984487+00:00 stderr F I0813 19:59:59.189505 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.016409427+00:00 stderr F I0813 20:00:00.015619 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:00:00.044321713+00:00 stderr F I0813 20:00:00.039911 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:00:00.051241160+00:00 stderr F I0813 20:00:00.047119 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:00:00.051241160+00:00 stderr F I0813 20:00:00.047155 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000abff80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085465 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085513 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085531 1 ovn_kubernetes.go:1134] 
ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.100949 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101022 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101030 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101036 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101079 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:00:00.240105194+00:00 stderr F I0813 20:00:00.236030 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:00:00.415568186+00:00 stderr F I0813 20:00:00.415503 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:00:00.419339323+00:00 stderr F I0813 20:00:00.419293 1 log.go:245] Starting render phase 2025-08-13T20:00:00.537154070+00:00 stderr F I0813 20:00:00.537099 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.583753909+00:00 stderr F I0813 20:00:00.576994 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T20:00:00.585098887+00:00 stderr F I0813 20:00:00.585069 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 
2025-08-13T20:00:00.606038084+00:00 stderr F I0813 20:00:00.605336 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.704095379+00:00 stderr F I0813 20:00:00.685563 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.755946427+00:00 stderr F I0813 20:00:00.750192 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:00:00.756553244+00:00 stderr F I0813 20:00:00.756405 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.810522553+00:00 stderr F I0813 20:00:00.810427 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.935619299+00:00 stderr F I0813 20:00:00.925007 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.966938491+00:00 stderr F I0813 20:00:00.966348 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.012044437+00:00 stderr F I0813 20:00:00.994077 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.198850282+00:00 stderr F I0813 20:00:01.196550 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.378529464+00:00 stderr F I0813 
20:00:01.373556 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.526869373+00:00 stderr F I0813 20:00:01.526736 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.793436441+00:00 stderr F I0813 20:00:01.793339 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.974855962+00:00 stderr F I0813 20:00:01.963018 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.048943 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.048980 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.049010 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.049058 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:00:02.145098276+00:00 stderr F I0813 20:00:02.144755 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.293526629+00:00 stderr F I0813 20:00:02.293464 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 
1 2025-08-13T20:00:02.293683323+00:00 stderr F I0813 20:00:02.293670 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:00:02.323525774+00:00 stderr F I0813 20:00:02.323471 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.433367366+00:00 stderr F I0813 20:00:02.425473 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:00:02.833232198+00:00 stderr F I0813 20:00:02.833170 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.912416596+00:00 stderr F I0813 20:00:02.912359 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.960892078+00:00 stderr F I0813 20:00:02.959429 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:00:02.990424180+00:00 stderr F I0813 20:00:02.990367 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.003689948+00:00 stderr F I0813 20:00:03.003291 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:00:03.003689948+00:00 stderr F I0813 20:00:03.003423 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:00:03.024762009+00:00 stderr F I0813 20:00:03.024709 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:00:03.024974555+00:00 stderr F I0813 
20:00:03.024956 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:00:03.157048601+00:00 stderr F I0813 20:00:03.156993 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.204146314+00:00 stderr F I0813 20:00:03.203526 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:03.204146314+00:00 stderr F I0813 20:00:03.203594 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:00:03.428984805+00:00 stderr F I0813 20:00:03.428928 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:03.429097758+00:00 stderr F I0813 20:00:03.429084 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:00:03.441752649+00:00 stderr F I0813 20:00:03.440235 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.529341597+00:00 stderr F I0813 20:00:03.528243 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:00:03.538519809+00:00 stderr F I0813 20:00:03.528329 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:00:03.588427262+00:00 stderr F I0813 20:00:03.588335 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.640600989+00:00 stderr F I0813 20:00:03.640539 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:00:03.641026771+00:00 stderr F I0813 20:00:03.641007 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:00:03.772039317+00:00 stderr F I0813 20:00:03.765621 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:00:03.772039317+00:00 stderr F I0813 20:00:03.769461 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:00:03.782352661+00:00 stderr F I0813 20:00:03.772442 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.871938865+00:00 stderr F I0813 20:00:03.866133 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:00:03.872073069+00:00 stderr F I0813 20:00:03.872053 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:00:03.917299689+00:00 stderr F I0813 20:00:03.914318 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:00:03.920611853+00:00 stderr F I0813 20:00:03.917435 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:00:03.929444985+00:00 stderr F I0813 20:00:03.929282 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:00:03.929554168+00:00 stderr F I0813 20:00:03.929539 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:00:03.961414587+00:00 stderr F I0813 20:00:03.961358 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was 
successful 2025-08-13T20:00:03.961531410+00:00 stderr F I0813 20:00:03.961513 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:00:03.985005849+00:00 stderr F I0813 20:00:03.982309 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:00:04.003910608+00:00 stderr F I0813 20:00:03.999925 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:00:04.005635377+00:00 stderr F I0813 20:00:04.005601 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.027927552+00:00 stderr F I0813 20:00:04.027069 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:00:04.045293458+00:00 stderr F I0813 20:00:04.045115 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:00:04.063575569+00:00 stderr F I0813 20:00:04.063518 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:00:04.063912578+00:00 stderr F I0813 20:00:04.063892 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:00:04.088456888+00:00 stderr F I0813 20:00:04.088401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:00:04.089140188+00:00 stderr F I0813 20:00:04.088955 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:00:04.131601498+00:00 stderr F I0813 20:00:04.131542 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.182439358+00:00 stderr F I0813 20:00:04.182377 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:00:04.182577902+00:00 stderr F I0813 20:00:04.182554 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:00:04.322262595+00:00 stderr F I0813 20:00:04.322190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.371423647+00:00 stderr F I0813 20:00:04.371362 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:00:04.371567621+00:00 stderr F I0813 20:00:04.371546 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:00:04.563670579+00:00 stderr F I0813 20:00:04.563619 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.579273074+00:00 stderr F I0813 20:00:04.577416 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:00:04.579526481+00:00 stderr F I0813 20:00:04.579502 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:00:04.768828728+00:00 stderr F I0813 20:00:04.768573 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:00:04.768828728+00:00 stderr F I0813 20:00:04.768703 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:00:05.089912124+00:00 stderr F 
I0813 20:00:05.089594 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:00:05.090033748+00:00 stderr F I0813 20:00:05.090018 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:00:05.245299915+00:00 stderr F I0813 20:00:05.243752 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:00:05.245299915+00:00 stderr F I0813 20:00:05.243892 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:00:05.402293141+00:00 stderr F I0813 20:00:05.402240 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:00:05.402397694+00:00 stderr F I0813 20:00:05.402383 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:00:05.494256684+00:00 stderr F I0813 20:00:05.494179 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.521273424+00:00 stderr F I0813 20:00:05.521228 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.571085004+00:00 stderr F I0813 20:00:05.571031 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.585370512+00:00 stderr F I0813 20:00:05.585318 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.592692180+00:00 stderr F I0813 20:00:05.592578 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.613300498+00:00 stderr F I0813 20:00:05.612470 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:00:05.613300498+00:00 stderr F I0813 20:00:05.612559 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740411 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.740375171 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740519 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.740496435 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740546 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.740529816 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740565 1 
tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.740551536 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740584 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740572197 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740624 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740610108 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740643 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740629369 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F 
I0813 20:00:05.740660 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740648819 +0000 UTC))"
2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740682 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.74066537 +0000 UTC))"
2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740703 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.74069001 +0000 UTC))"
2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.741475 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 20:00:05.741448102 +0000 UTC))"
2025-08-13T20:00:05.748960716+00:00 stderr F I0813 20:00:05.748933 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:05.782825402+00:00 stderr F I0813 20:00:05.782349 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:00:05.782298587 +0000 UTC))"
2025-08-13T20:00:05.829491792+00:00 stderr F I0813 20:00:05.827515 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:00:05.829491792+00:00 stderr F I0813 20:00:05.827590 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:00:05.936560876+00:00 stderr F I0813 20:00:05.935582 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:05.973523859+00:00 stderr F I0813 20:00:05.973349 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:00:05.973523859+00:00 stderr F I0813 20:00:05.973409 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:00:06.147090158+00:00 stderr F I0813 20:00:06.146975 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:06.173986895+00:00 stderr F I0813 20:00:06.171235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:00:06.174203062+00:00 stderr F I0813 20:00:06.174180 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:00:06.335594323+00:00 stderr F I0813 20:00:06.335530 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:06.413332790+00:00 stderr F I0813 20:00:06.411275 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:00:06.413332790+00:00 stderr F I0813 20:00:06.411388 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:00:06.543155352+00:00 stderr F I0813 20:00:06.538471 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:06.591404678+00:00 stderr F I0813 20:00:06.587102 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:00:06.591567032+00:00 stderr F I0813 20:00:06.591551 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:00:06.727749705+00:00 stderr F I0813 20:00:06.726725 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:06.834137209+00:00 stderr F I0813 20:00:06.833488 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:00:06.834137209+00:00 stderr F I0813 20:00:06.833576 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:00:06.972059312+00:00 stderr F I0813 20:00:06.969446 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:07.068523202+00:00 stderr F I0813 20:00:07.067173 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:00:07.068523202+00:00 stderr F I0813 20:00:07.067241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:00:07.126004661+00:00 stderr F I0813 20:00:07.125906 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:07.181911975+00:00 stderr F I0813 20:00:07.179116 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:00:07.181911975+00:00 stderr F I0813 20:00:07.179189 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:00:07.367050765+00:00 stderr F I0813 20:00:07.364935 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:07.439404058+00:00 stderr F I0813 20:00:07.439299 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:00:07.439404058+00:00 stderr F I0813 20:00:07.439362 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:00:07.570584558+00:00 stderr F I0813 20:00:07.566959 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:07.610458055+00:00 stderr F I0813 20:00:07.609288 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:00:07.610458055+00:00 stderr F I0813 20:00:07.609359 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:00:07.740640546+00:00 stderr F I0813 20:00:07.740525 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:07.913240478+00:00 stderr F I0813 20:00:07.908830 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:00:07.913240478+00:00 stderr F I0813 20:00:07.908932 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:00:08.024148710+00:00 stderr F I0813 20:00:08.021659 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:00:08.024148710+00:00 stderr F I0813 20:00:08.021742 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:00:08.076240345+00:00 stderr F I0813 20:00:08.072585 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:08.234126367+00:00 stderr F I0813 20:00:08.232925 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:00:08.234126367+00:00 stderr F I0813 20:00:08.232991 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:00:08.246128620+00:00 stderr F I0813 20:00:08.245571 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:08.484883697+00:00 stderr F I0813 20:00:08.484422 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:00:08.484883697+00:00 stderr F I0813 20:00:08.484544 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:00:08.509748366+00:00 stderr F I0813 20:00:08.507183 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:08.724308484+00:00 stderr F I0813 20:00:08.723940 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:08.890706279+00:00 stderr F I0813 20:00:08.890426 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:00:08.906373966+00:00 stderr F I0813 20:00:08.905441 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:00:08.923653378+00:00 stderr F I0813 20:00:08.920199 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:09.990633902+00:00 stderr F I0813 20:00:09.985965 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:00:09.990633902+00:00 stderr F I0813 20:00:09.986155 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:00:10.110526881+00:00 stderr F I0813 20:00:10.110462 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:10.206356513+00:00 stderr F I0813 20:00:10.206050 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:00:10.206356513+00:00 stderr F I0813 20:00:10.206099 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:00:10.221050462+00:00 stderr F I0813 20:00:10.220763 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:10.611933238+00:00 stderr F I0813 20:00:10.607622 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:10.612359180+00:00 stderr F I0813 20:00:10.612334 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:00:10.612448893+00:00 stderr F I0813 20:00:10.612432 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:00:10.825699854+00:00 stderr F I0813 20:00:10.825565 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:00:10.830917772+00:00 stderr F I0813 20:00:10.825762 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:00:10.852116287+00:00 stderr F I0813 20:00:10.848476 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:11.308090267+00:00 stderr F I0813 20:00:11.307681 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:00:11.308090267+00:00 stderr F I0813 20:00:11.307755 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:00:11.316324702+00:00 stderr F I0813 20:00:11.316292 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:11.695420422+00:00 stderr F I0813 20:00:11.695361 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:11.700181848+00:00 stderr F I0813 20:00:11.698073 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:00:11.700181848+00:00 stderr F I0813 20:00:11.698161 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:00:11.924462803+00:00 stderr F I0813 20:00:11.924415 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:00:11.924557115+00:00 stderr F I0813 20:00:11.924540 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:00:12.057868176+00:00 stderr F I0813 20:00:12.057643 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.138956629+00:00 stderr F I0813 20:00:12.138102 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.219931 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.220000 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.225358 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.341434712+00:00 stderr F I0813 20:00:12.338142 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.382584996+00:00 stderr F I0813 20:00:12.381413 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:00:12.382584996+00:00 stderr F I0813 20:00:12.381486 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:00:12.822989293+00:00 stderr F I0813 20:00:12.816201 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.912339241+00:00 stderr F I0813 20:00:12.902022 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:00:12.912339241+00:00 stderr F I0813 20:00:12.902123 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:00:13.015617166+00:00 stderr F I0813 20:00:13.012751 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:13.232013697+00:00 stderr F I0813 20:00:13.223038 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:00:13.232013697+00:00 stderr F I0813 20:00:13.223117 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:00:13.250970737+00:00 stderr F I0813 20:00:13.239234 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:13.306879791+00:00 stderr F I0813 20:00:13.298383 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:00:13.306879791+00:00 stderr F I0813 20:00:13.298451 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:00:13.338236715+00:00 stderr F I0813 20:00:13.335162 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:13.359024598+00:00 stderr F I0813 20:00:13.358209 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:00:13.359024598+00:00 stderr F I0813 20:00:13.358297 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471221 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471283 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471658 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:13.638533468+00:00 stderr F I0813 20:00:13.634291 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:00:13.638533468+00:00 stderr F I0813 20:00:13.634535 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:00:13.678753645+00:00 stderr F I0813 20:00:13.677949 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:00:13.678753645+00:00 stderr F I0813 20:00:13.678384 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:00:14.228906282+00:00 stderr F I0813 20:00:14.228151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:00:14.228906282+00:00 stderr F I0813 20:00:14.228229 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:00:14.997396433+00:00 stderr F I0813 20:00:14.993453 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2025-08-13T20:00:15.392155450+00:00 stderr F I0813 20:00:15.389447 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:00:15.392155450+00:00 stderr F I0813 20:00:15.389661 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:00:16.733912929+00:00 stderr F I0813 20:00:16.729412 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:00:16.733912929+00:00 stderr F I0813 20:00:16.729568 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:00:17.055233911+00:00 stderr F I0813 20:00:17.046488 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:00:17.055233911+00:00 stderr F I0813 20:00:17.046589 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:17.084964209+00:00 stderr F I0813 20:00:17.080672 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:00:17.396449320+00:00 stderr F I0813 20:00:17.396312 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:17.396449320+00:00 stderr F I0813 20:00:17.396395 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:17.518882001+00:00 stderr F I0813 20:00:17.509358 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2025-08-13T20:00:17.608238829+00:00 stderr F I0813 20:00:17.607567 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:17.608238829+00:00 stderr F I0813 20:00:17.607642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:17.971545149+00:00 stderr F I0813 20:00:17.971490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:17.972118835+00:00 stderr F I0813 20:00:17.971982 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:17.999936048+00:00 stderr F I0813 20:00:17.976912 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:00:18.198934632+00:00 stderr F I0813 20:00:18.198525 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:18.198934632+00:00 stderr F I0813 20:00:18.198631 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:00:18.375177797+00:00 stderr F I0813 20:00:18.370384 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:00:18.375177797+00:00 stderr F I0813 20:00:18.370537 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:00:18.554072558+00:00 stderr F I0813 20:00:18.553209 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:00:18.554072558+00:00 stderr F I0813 20:00:18.553370 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:00:18.920695282+00:00 stderr F I0813 20:00:18.920636 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:00:18.920943779+00:00 stderr F I0813 20:00:18.920917 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:00:19.139574513+00:00 stderr F I0813 20:00:19.137568 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:00:19.139574513+00:00 stderr F I0813 20:00:19.137636 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:00:19.404905919+00:00 stderr F I0813 20:00:19.402702 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:00:19.404905919+00:00 stderr F I0813 20:00:19.402860 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:00:19.573699642+00:00 stderr F I0813 20:00:19.573642 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:00:19.574088273+00:00 stderr F I0813 20:00:19.574064 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:00:19.724215163+00:00 stderr F I0813 20:00:19.723080 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:19.874118948+00:00 stderr F I0813 20:00:19.872469 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:00:19.874118948+00:00 stderr F I0813 20:00:19.872687 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:00:19.946009208+00:00 stderr F I0813 20:00:19.922655 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.075301534+00:00 stderr F I0813 20:00:20.075240 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.090052095+00:00 stderr F I0813 20:00:20.076416 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:00:20.117163328+00:00 stderr F I0813 20:00:20.105138 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:00:20.251120698+00:00 stderr F I0813 20:00:20.251042 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.314602038+00:00 stderr F I0813 20:00:20.314148 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:00:20.314602038+00:00 stderr F I0813 20:00:20.314290 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:00:20.361443863+00:00 stderr F I0813 20:00:20.361386 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.584204975+00:00 stderr F I0813 20:00:20.584143 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:00:20.584352459+00:00 stderr F I0813 20:00:20.584337 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:00:20.607314304+00:00 stderr F I0813 20:00:20.599333 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.794267785+00:00 stderr F I0813 20:00:20.794008 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:00:20.794267785+00:00 stderr F I0813 20:00:20.794119 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:00:20.801894842+00:00 stderr F I0813 20:00:20.800106 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.932976790+00:00 stderr F I0813 20:00:20.932921 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.967903216+00:00 stderr F I0813 20:00:20.960389 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:00:20.967903216+00:00 stderr F I0813 20:00:20.963155 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:00:21.002545324+00:00 stderr F I0813 20:00:21.001559 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:21.317566856+00:00 stderr F I0813 20:00:21.316563 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:00:21.317566856+00:00 stderr F I0813 20:00:21.316644 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.014466 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.014514 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.015032 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.015078 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:00:22.106024227+00:00 stderr F I0813 20:00:22.105687 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:00:22.106024227+00:00 stderr F I0813 20:00:22.105765 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:00:22.130968929+00:00 stderr F I0813 20:00:22.130918 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:00:22.131218176+00:00 stderr F I0813 20:00:22.131200 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:00:22.191311289+00:00 stderr F I0813 20:00:22.174576 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:00:22.191311289+00:00 stderr F I0813 20:00:22.174677 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:00:22.548052572+00:00 stderr F I0813 20:00:22.547727 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:00:22.548052572+00:00 stderr F I0813 20:00:22.547934 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:00:22.649770082+00:00 stderr F I0813 20:00:22.649264 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:00:22.649770082+00:00 stderr F I0813 20:00:22.649389 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:00:23.170936145+00:00 stderr F I0813 20:00:23.169369 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:00:23.170936145+00:00 stderr F I0813 20:00:23.169451 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:00:23.219909761+00:00 stderr F I0813 20:00:23.216758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:00:23.219909761+00:00 stderr F I0813 20:00:23.216930 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:00:23.277926796+00:00 stderr F I0813 20:00:23.277766 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:00:23.278022798+00:00 stderr F I0813 20:00:23.278009 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:00:23.311979927+00:00 stderr F I0813 20:00:23.311917 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:00:23.312134401+00:00 stderr F I0813 20:00:23.312106 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:00:23.448147939+00:00 stderr F I0813 20:00:23.446470 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:00:23.448147939+00:00 stderr F I0813 20:00:23.446518 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:00:23.809745610+00:00 stderr F I0813 20:00:23.808217 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:00:23.809745610+00:00 stderr F I0813 20:00:23.808331 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:00:24.459348903+00:00 stderr F I0813 20:00:24.455964 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:00:24.459348903+00:00 stderr F I0813 20:00:24.456049 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:00:24.754925571+00:00 stderr F I0813 20:00:24.744957 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:00:24.754925571+00:00 stderr F I0813 20:00:24.745021 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:00:24.983936881+00:00 stderr F I0813 20:00:24.982250 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:00:24.983936881+00:00 stderr F I0813 20:00:24.982318 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:00:25.021522503+00:00 stderr F I0813 20:00:25.017295 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:00:25.021522503+00:00 stderr F I0813 20:00:25.017361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:00:25.081994857+00:00 stderr F I0813 20:00:25.081905 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:00:25.082097940+00:00 stderr F I0813 20:00:25.082084 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:00:25.145287482+00:00 stderr F I0813 20:00:25.145235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:00:25.145432446+00:00 stderr F I0813 20:00:25.145414 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:00:25.237749949+00:00 stderr F I0813 20:00:25.237611 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:00:25.237953855+00:00 stderr F I0813 20:00:25.237936 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:00:25.276163404+00:00 stderr F I0813 20:00:25.276107 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:00:25.276324179+00:00 stderr F I0813 20:00:25.276304 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:00:25.304971936+00:00 stderr F I0813 20:00:25.304829 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:00:25.305650655+00:00 stderr F I0813 20:00:25.305185 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:00:25.352379178+00:00 stderr F I0813 20:00:25.352324 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:00:25.356604208+00:00 stderr F I0813 20:00:25.356569 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:00:25.519980045+00:00 stderr F I0813 20:00:25.519607 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:00:25.519980045+00:00 stderr F I0813 20:00:25.519720 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:00:25.547145310+00:00 stderr F I0813 20:00:25.546995 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:00:25.547145310+00:00 stderr F I0813 20:00:25.547060 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:00:25.626914625+00:00 stderr F I0813 20:00:25.621566 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:00:25.626914625+00:00 stderr F I0813 20:00:25.621871 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:00:25.678964769+00:00 stderr F I0813 20:00:25.677521 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:00:25.678964769+00:00 stderr F I0813 20:00:25.677575 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:00:25.724325682+00:00 stderr F I0813 
20:00:25.724270 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:00:25.724439176+00:00 stderr F I0813 20:00:25.724418 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:00:25.755367988+00:00 stderr F I0813 20:00:25.755303 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:00:25.755481341+00:00 stderr F I0813 20:00:25.755466 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:00:25.810361636+00:00 stderr F I0813 20:00:25.810284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:00:25.810565262+00:00 stderr F I0813 20:00:25.810536 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:00:26.008240528+00:00 stderr F I0813 20:00:26.006854 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:00:26.008240528+00:00 stderr F I0813 20:00:26.006968 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:00:26.200354916+00:00 stderr F I0813 20:00:26.200288 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:00:26.200509281+00:00 stderr F I0813 20:00:26.200482 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) 
/network-node-identity.openshift.io 2025-08-13T20:00:26.409191671+00:00 stderr F I0813 20:00:26.407084 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:00:26.409191671+00:00 stderr F I0813 20:00:26.407611 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:00:26.663284947+00:00 stderr F I0813 20:00:26.663029 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:00:26.663441071+00:00 stderr F I0813 20:00:26.663423 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:00:26.803330350+00:00 stderr F I0813 20:00:26.803278 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:00:26.803432753+00:00 stderr F I0813 20:00:26.803415 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:00:27.017349632+00:00 stderr F I0813 20:00:27.017209 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:00:27.019970497+00:00 stderr F I0813 20:00:27.017413 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:00:27.224954812+00:00 stderr F I0813 20:00:27.221811 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:00:27.224954812+00:00 stderr F I0813 20:00:27.221904 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:00:27.410078641+00:00 stderr F I0813 
20:00:27.410001 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:00:27.410425941+00:00 stderr F I0813 20:00:27.410404 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:00:27.604440143+00:00 stderr F I0813 20:00:27.604250 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:00:27.604440143+00:00 stderr F I0813 20:00:27.604339 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:00:27.835231354+00:00 stderr F I0813 20:00:27.833215 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:00:28.068597348+00:00 stderr F I0813 20:00:28.064478 1 log.go:245] Operconfig Controller complete 2025-08-13T20:00:28.068597348+00:00 stderr F I0813 20:00:28.064713 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:00:29.775416506+00:00 stderr F I0813 20:00:29.771208 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:00:29.775416506+00:00 stderr F I0813 20:00:29.775167 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:00:29.794362896+00:00 stderr F I0813 20:00:29.784451 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:00:29.794362896+00:00 stderr F I0813 20:00:29.784499 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00428ad00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] 
MgmtPortResourceName:} 2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798045 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798086 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798095 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823867 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823908 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823915 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823921 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823948 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:00:29.932167476+00:00 stderr F I0813 20:00:29.918701 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:00:29.980936696+00:00 stderr F I0813 20:00:29.975894 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:00:29.989299175+00:00 stderr F I0813 20:00:29.989037 1 log.go:245] Starting render phase 2025-08-13T20:00:30.016067508+00:00 stderr F I0813 20:00:30.013338 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is 
not defined. Using: 9107 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107529 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107568 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107595 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107618 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:00:30.125986172+00:00 stderr F I0813 20:00:30.124066 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:00:30.125986172+00:00 stderr F I0813 20:00:30.124102 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:00:30.144326355+00:00 stderr F I0813 20:00:30.144236 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:00:30.213997792+00:00 stderr F I0813 20:00:30.212562 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:00:30.237987136+00:00 stderr F I0813 20:00:30.237936 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:00:30.238115149+00:00 stderr F I0813 20:00:30.238100 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:00:30.273752145+00:00 stderr F I0813 20:00:30.273704 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was 
successful 2025-08-13T20:00:30.273929241+00:00 stderr F I0813 20:00:30.273912 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:00:30.287072325+00:00 stderr F I0813 20:00:30.286516 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:30.287072325+00:00 stderr F I0813 20:00:30.286581 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:00:30.299328935+00:00 stderr F I0813 20:00:30.299291 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:30.299507580+00:00 stderr F I0813 20:00:30.299451 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:00:30.306408037+00:00 stderr F I0813 20:00:30.306354 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:00:30.306479469+00:00 stderr F I0813 20:00:30.306465 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:00:30.315886657+00:00 stderr F I0813 20:00:30.314876 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:00:30.315886657+00:00 stderr F I0813 20:00:30.315010 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:00:30.329411403+00:00 stderr F I0813 20:00:30.329281 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:00:30.329411403+00:00 stderr F I0813 20:00:30.329360 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:00:30.342455165+00:00 stderr F I0813 
20:00:30.342345 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:00:30.342687001+00:00 stderr F I0813 20:00:30.342668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:00:30.353904471+00:00 stderr F I0813 20:00:30.353878 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:00:30.354048575+00:00 stderr F I0813 20:00:30.353994 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:00:30.368868138+00:00 stderr F I0813 20:00:30.366356 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:00:30.368868138+00:00 stderr F I0813 20:00:30.366421 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:00:30.457129684+00:00 stderr F I0813 20:00:30.454154 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:00:30.457129684+00:00 stderr F I0813 20:00:30.454266 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:00:30.734759971+00:00 stderr F I0813 20:00:30.732826 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:00:30.734759971+00:00 stderr F I0813 20:00:30.732939 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:00:30.846214329+00:00 stderr F I0813 20:00:30.846151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:00:30.846328932+00:00 stderr F I0813 20:00:30.846314 1 log.go:245] reconciling 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:00:31.033400936+00:00 stderr F I0813 20:00:31.033259 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:00:31.033400936+00:00 stderr F I0813 20:00:31.033344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:00:31.264141776+00:00 stderr F I0813 20:00:31.263064 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:00:31.264141776+00:00 stderr F I0813 20:00:31.263128 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:00:31.423354566+00:00 stderr F I0813 20:00:31.423155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:00:31.423354566+00:00 stderr F I0813 20:00:31.423228 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:00:31.642241897+00:00 stderr F I0813 20:00:31.641700 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:00:31.642241897+00:00 stderr F I0813 20:00:31.641940 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:00:31.830724141+00:00 stderr F I0813 20:00:31.829337 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:00:31.830724141+00:00 stderr F I0813 20:00:31.829470 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:00:33.021246527+00:00 stderr F I0813 20:00:33.017039 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:00:33.021246527+00:00 stderr F I0813 20:00:33.017201 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:00:33.214942320+00:00 stderr F I0813 20:00:33.211612 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:00:33.225679446+00:00 stderr F I0813 20:00:33.223342 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:00:33.291981177+00:00 stderr F I0813 20:00:33.291921 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:00:33.292109290+00:00 stderr F I0813 20:00:33.292076 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:00:33.518331391+00:00 stderr F I0813 20:00:33.511285 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:00:33.518331391+00:00 stderr F I0813 20:00:33.511361 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:00:33.618065725+00:00 stderr F I0813 20:00:33.613622 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:00:33.618065725+00:00 stderr F I0813 20:00:33.613729 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:00:33.781043142+00:00 stderr F I0813 20:00:33.779456 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:00:33.781043142+00:00 stderr F I0813 20:00:33.779529 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:00:33.809218685+00:00 stderr F I0813 20:00:33.808024 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:00:33.809218685+00:00 stderr F I0813 20:00:33.808281 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:00:33.976896016+00:00 stderr F I0813 20:00:33.976325 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:00:33.980126638+00:00 stderr F I0813 20:00:33.978085 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:00:34.074260693+00:00 stderr F I0813 20:00:34.074164 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:00:34.076908508+00:00 stderr F I0813 20:00:34.074419 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:00:34.180443360+00:00 stderr F I0813 20:00:34.179269 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:00:34.180443360+00:00 stderr F I0813 20:00:34.179367 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:00:34.240868913+00:00 stderr F I0813 20:00:34.239584 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:00:34.240868913+00:00 stderr F I0813 20:00:34.239660 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:34.345699852+00:00 stderr F I0813 20:00:34.345343 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:34.345699852+00:00 stderr F I0813 20:00:34.345407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-multus/prometheus-k8s 2025-08-13T20:00:34.494032052+00:00 stderr F I0813 20:00:34.493918 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:34.494032052+00:00 stderr F I0813 20:00:34.493988 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:00:34.638011747+00:00 stderr F I0813 20:00:34.636415 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:34.638011747+00:00 stderr F I0813 20:00:34.636571 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:00:34.941888462+00:00 stderr F I0813 20:00:34.940449 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:00:34.941888462+00:00 stderr F I0813 20:00:34.940613 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:00:35.142183403+00:00 stderr F I0813 20:00:35.129461 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:00:35.142183403+00:00 stderr F I0813 20:00:35.129619 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:00:35.292967053+00:00 stderr F I0813 20:00:35.292333 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:00:35.292967053+00:00 stderr F I0813 20:00:35.292402 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:00:35.599894075+00:00 stderr F I0813 20:00:35.597497 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, 
Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:00:35.599894075+00:00 stderr F I0813 20:00:35.597575 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:00:35.786772323+00:00 stderr F I0813 20:00:35.786550 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:35.786951618+00:00 stderr F I0813 20:00:35.786915 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:00:35.919109847+00:00 stderr F I0813 20:00:35.908447 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:00:35.919109847+00:00 stderr F I0813 20:00:35.908521 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:36.280135760+00:00 stderr F I0813 20:00:36.279934 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:36.280135760+00:00 stderr F I0813 20:00:36.279997 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:00:36.304390692+00:00 stderr F I0813 20:00:36.304280 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:36.304390692+00:00 stderr F I0813 20:00:36.304353 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:00:36.527641798+00:00 stderr F I0813 20:00:36.526617 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:00:36.527641798+00:00 stderr F I0813 
20:00:36.526714 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:00:36.872335626+00:00 stderr F I0813 20:00:36.870403 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:00:36.872335626+00:00 stderr F I0813 20:00:36.870490 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:00:37.039218725+00:00 stderr F I0813 20:00:37.035778 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:00:37.039218725+00:00 stderr F I0813 20:00:37.035911 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:00:37.061460459+00:00 stderr F I0813 20:00:37.060257 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:00:37.061460459+00:00 stderr F I0813 20:00:37.060813 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:00:37.405100147+00:00 stderr F I0813 20:00:37.404219 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:00:37.405100147+00:00 stderr F I0813 20:00:37.404308 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:00:37.677505535+00:00 stderr F I0813 20:00:37.676925 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:00:37.677505535+00:00 stderr F I0813 20:00:37.677028 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:00:38.481051027+00:00 stderr 
F I0813 20:00:38.480998 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:00:38.481223452+00:00 stderr F I0813 20:00:38.481204 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:00:39.484972473+00:00 stderr F I0813 20:00:39.482212 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:00:39.484972473+00:00 stderr F I0813 20:00:39.482313 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:00:39.657008589+00:00 stderr F I0813 20:00:39.655702 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:39.718378388+00:00 stderr F I0813 20:00:39.716749 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:39.811242325+00:00 stderr F I0813 20:00:39.809485 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:39.926295526+00:00 stderr F I0813 20:00:39.925431 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:40.132291870+00:00 stderr F I0813 20:00:40.127019 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:40.192144436+00:00 stderr F I0813 20:00:40.190426 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:40.247094393+00:00 stderr F I0813 20:00:40.246116 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:40.264164620+00:00 stderr F I0813 20:00:40.260585 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:00:40.264164620+00:00 stderr F I0813 20:00:40.260662 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.294411 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.294480 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.295031 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:40.343954825+00:00 stderr F I0813 20:00:40.341401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:00:40.343954825+00:00 stderr F I0813 20:00:40.341469 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:00:40.350194063+00:00 stderr F I0813 20:00:40.348485 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:40.386627212+00:00 stderr F I0813 20:00:40.385264 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:00:40.386627212+00:00 stderr F I0813 20:00:40.385333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:00:40.447356633+00:00 stderr F I0813 20:00:40.446253 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:00:40.447356633+00:00 stderr F I0813 20:00:40.446327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:00:40.512614574+00:00 stderr F I0813 20:00:40.512450 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:00:40.512614574+00:00 stderr F I0813 20:00:40.512525 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:00:40.564477983+00:00 stderr F I0813 20:00:40.564392 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:00:40.574181020+00:00 stderr F I0813 20:00:40.573921 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:00:40.617123694+00:00 stderr F I0813 20:00:40.616045 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:00:40.617123694+00:00 stderr F I0813 20:00:40.616113 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:00:40.665397691+00:00 stderr F I0813 20:00:40.665036 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:00:40.665397691+00:00 stderr F I0813 20:00:40.665126 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:00:40.817811727+00:00 stderr F I0813 20:00:40.768086 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:00:40.825312231+00:00 stderr F I0813 20:00:40.817770 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:40.865493456+00:00 stderr F I0813 20:00:40.864759 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:40.893745262+00:00 stderr F I0813 20:00:40.893356 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:40.940753422+00:00 stderr F I0813 20:00:40.933778 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:40.940753422+00:00 stderr F I0813 20:00:40.933933 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:40.959415614+00:00 stderr F I0813 20:00:40.953109 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:40.959415614+00:00 stderr F I0813 20:00:40.953208 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:41.088007401+00:00 stderr F I0813 20:00:41.078313 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:41.088007401+00:00 stderr F I0813 20:00:41.078427 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:00:41.111112900+00:00 stderr F I0813 20:00:41.107727 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:00:41.111112900+00:00 stderr F I0813 20:00:41.107887 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:00:41.287864020+00:00 stderr F I0813 20:00:41.287155 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:00:41.287864020+00:00 stderr F I0813 20:00:41.287242 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:00:41.573910536+00:00 stderr F I0813 20:00:41.571755 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:00:41.573910536+00:00 stderr F I0813 20:00:41.571885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:00:41.589723997+00:00 stderr F I0813 20:00:41.589567 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T20:00:41.804696177+00:00 stderr F I0813 20:00:41.799270 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:41.884044949+00:00 stderr F I0813 20:00:41.882960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:00:41.884044949+00:00 stderr F I0813 20:00:41.883028 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:00:42.031881395+00:00 stderr F I0813 20:00:42.029309 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:45.426981842+00:00 stderr F I0813 20:00:45.426759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:45.465658575+00:00 stderr F I0813 20:00:45.460570 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:00:45.465658575+00:00 stderr F I0813 20:00:45.460646 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:00:45.854964905+00:00 stderr F I0813 20:00:45.847386 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:00:45.854964905+00:00 stderr F I0813 20:00:45.847471 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:00:46.049895544+00:00 stderr F I0813 20:00:46.049222 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:46.185547212+00:00 stderr F I0813 20:00:46.177358 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:00:46.185547212+00:00 stderr F I0813 20:00:46.177434 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:00:46.357890236+00:00 stderr F I0813 20:00:46.355693 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:46.557055535+00:00 stderr F I0813 20:00:46.556373 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:00:46.557055535+00:00 stderr F I0813 20:00:46.556443 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:00:46.638880888+00:00 stderr F I0813 20:00:46.638271 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:46.862627028+00:00 stderr F I0813 20:00:46.861683 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:00:46.862627028+00:00 stderr F I0813 20:00:46.861773 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:00:47.049860316+00:00 stderr F I0813 20:00:47.046662 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:47.103034452+00:00 stderr F I0813 20:00:47.099528 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:00:47.103034452+00:00 stderr F I0813 20:00:47.099593 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:00:47.445677632+00:00 stderr F I0813 20:00:47.442375 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:00:47.445677632+00:00 stderr F I0813 20:00:47.442477 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:00:47.461549324+00:00 stderr F I0813 20:00:47.460309 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:47.653116347+00:00 stderr F I0813 20:00:47.640629 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca'
2025-08-13T20:00:48.149622854+00:00 stderr F I0813 20:00:48.147700 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:00:48.149622854+00:00 stderr F I0813 20:00:48.147769 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:00:48.711000051+00:00 stderr F I0813 20:00:48.710428 1 log.go:245] Deleted PodNetworkConnectivityCheck.controlplane.operator.openshift.io/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics because it is no more valid.
2025-08-13T20:00:48.751613939+00:00 stderr F I0813 20:00:48.751242 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:00:48.766983538+00:00 stderr F I0813 20:00:48.763258 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:00:48.766983538+00:00 stderr F I0813 20:00:48.763335 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:00:48.785319110+00:00 stderr F I0813 20:00:48.783615 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T20:00:49.312439251+00:00 stderr F I0813 20:00:49.311670 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:00:49.312439251+00:00 stderr F I0813 20:00:49.311743 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:00:49.381325275+00:00 stderr F I0813 20:00:49.377963 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:49.780431785+00:00 stderr F I0813 20:00:49.780116 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:49.828949339+00:00 stderr F I0813 20:00:49.828457 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:00:49.828949339+00:00 stderr F I0813 20:00:49.828519 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:00:50.410139431+00:00 stderr F I0813 20:00:50.408428 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:50.447921398+00:00 stderr F I0813 20:00:50.446764 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:00:50.447921398+00:00 stderr F I0813 20:00:50.447337 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.311867 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.311985 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.312066 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:59.931671730+00:00 stderr F I0813 20:00:59.928650 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.928441377 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997081 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.928746796 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997209 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997146407 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997306 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997279051 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997471 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997384424 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997510 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997484367 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997536 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997519628 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997585 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997542278 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997621 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.99759535 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997647 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.997632671 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997905 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997665592 +0000 UTC))"
2025-08-13T20:01:00.013692589+00:00 stderr F I0813 20:01:00.013315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 20:01:00.012919427 +0000 UTC))"
2025-08-13T20:01:00.013764041+00:00 stderr F I0813 20:01:00.013712 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:01:00.013687419 +0000 UTC))"
2025-08-13T20:01:03.338570499+00:00 stderr F I0813 20:01:03.337009 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:01:03.338570499+00:00 stderr F I0813 20:01:03.337223 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:01:03.347989128+00:00 stderr F I0813 20:01:03.340281 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:01:05.775416984+00:00 stderr F I0813 20:01:05.773578 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:01:05.851148293+00:00 stderr F I0813 20:01:05.848382 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:01:05.851148293+00:00 stderr F I0813 20:01:05.848503 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:01:06.799594387+00:00 stderr F I0813 20:01:06.795264 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:01:06.975266097+00:00 stderr F I0813 20:01:06.965133 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:01:06.975266097+00:00 stderr F I0813 20:01:06.965226 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:01:07.408913532+00:00 stderr F I0813 20:01:07.408155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:01:07.408913532+00:00 stderr F I0813 20:01:07.408221 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:01:07.505907187+00:00 stderr F I0813 20:01:07.496222 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:01:07.544588930+00:00 stderr F I0813 20:01:07.542569 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:01:07.544588930+00:00 stderr F I0813 20:01:07.542660 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:01:07.713043074+00:00 stderr F I0813 20:01:07.705253 1 log.go:245] PodNetworkConnectivityCheck.controlplane.operator.openshift.io/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics: podnetworkconnectivitychecks.controlplane.operator.openshift.io "network-check-source-crc-to-openshift-apiserver-endpoint-crc" not found
2025-08-13T20:01:07.800526638+00:00 stderr F I0813 20:01:07.798978 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:01:07.800526638+00:00 stderr F I0813 20:01:07.799081 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:01:07.868560558+00:00 stderr F I0813 20:01:07.867235 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T20:01:07.997000220+00:00 stderr F I0813 20:01:07.996687 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.001494768+00:00 stderr F I0813 20:01:07.997167 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:01:08.001494768+00:00 stderr F I0813 20:01:07.997252 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:01:08.111965388+00:00 stderr F I0813 20:01:08.111873 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.179292768+00:00 stderr F I0813 20:01:08.178584 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:01:08.179292768+00:00 stderr F I0813 20:01:08.178677 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:01:08.267942886+00:00 stderr F I0813 20:01:08.262127 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.424896791+00:00 stderr F I0813 20:01:08.421744 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.425373585+00:00 stderr F I0813 20:01:08.425345 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:01:08.425494888+00:00 stderr F I0813 20:01:08.425473 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:01:08.557829191+00:00 stderr F I0813 20:01:08.557427 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:01:08.557973345+00:00 stderr F I0813 20:01:08.557950 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:01:08.574873937+00:00 stderr F I0813 20:01:08.570390 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.658270425+00:00 stderr F I0813 20:01:08.653255 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:01:08.658270425+00:00 stderr F I0813 20:01:08.653325 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:01:08.809928729+00:00 stderr F I0813 20:01:08.809867 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:01:08.810065923+00:00 stderr F I0813 20:01:08.810045 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:01:08.850252439+00:00 stderr F I0813 20:01:08.850045 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:01:09.377500883+00:00 stderr F I0813 20:01:09.377442 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:01:09.377669027+00:00 stderr F I0813 20:01:09.377650 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:01:09.495525358+00:00 stderr F I0813 20:01:09.495391 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:01:09.495525358+00:00 stderr F I0813 20:01:09.495459 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:01:09.585283377+00:00 stderr F I0813 20:01:09.578555 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:01:09.674311386+00:00 stderr F I0813 20:01:09.670623 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:01:09.764236940+00:00 stderr F I0813 20:01:09.763933 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:01:09.957706507+00:00 stderr F I0813 20:01:09.957533 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:01:09.982307818+00:00 stderr F I0813 20:01:09.982249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:01:09.982460653+00:00 stderr F I0813 20:01:09.982442 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:01:10.205196494+00:00 stderr F I0813 20:01:10.204294 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:01:10.205196494+00:00 stderr F I0813 20:01:10.204407 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:01:14.365655855+00:00 stderr F I0813 20:01:14.365324 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:01:14.365655855+00:00 stderr F I0813 20:01:14.365415 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:01:18.807159000+00:00 stderr F I0813 20:01:18.807070 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:01:18.807159000+00:00 stderr F I0813 20:01:18.807143 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:01:20.173082827+00:00 stderr F I0813 20:01:20.171163 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:01:20.173082827+00:00 stderr F I0813 20:01:20.171250 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:01:20.947087297+00:00 stderr F I0813 20:01:20.945174 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:01:20.947087297+00:00 stderr F I0813 20:01:20.945275 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:01:21.297330493+00:00 stderr F I0813 20:01:21.296989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:01:21.297330493+00:00 stderr F I0813 20:01:21.297045 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:01:21.660375606+00:00 stderr F I0813 20:01:21.659963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:01:21.660375606+00:00 stderr F I0813 20:01:21.660031 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:01:21.927616236+00:00 stderr F I0813 20:01:21.922315 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:01:21.927616236+00:00 stderr F I0813 20:01:21.922433 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:01:21.978028653+00:00 stderr F I0813 20:01:21.975980 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:01:22.195034151+00:00 stderr F I0813 20:01:22.194690 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:01:22.195234016+00:00 stderr F I0813 20:01:22.195212 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:01:22.838528918+00:00 stderr F I0813 20:01:22.832272 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:01:22.838528918+00:00 stderr F I0813 20:01:22.832354 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:01:23.444975610+00:00 stderr F I0813 20:01:23.443616 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:01:23.444975610+00:00 stderr F I0813 20:01:23.443759 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:01:23.878311757+00:00 stderr F I0813 20:01:23.872714 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:01:23.879092539+00:00 stderr F I0813 20:01:23.878991 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:01:23.995899970+00:00 stderr F I0813 20:01:23.994182 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 
2025-08-13T20:01:23.995899970+00:00 stderr F I0813 20:01:23.994291 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:01:28.283247368+00:00 stderr F I0813 20:01:28.283056 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.283877656+00:00 stderr F I0813 20:01:28.283516 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:28.284144654+00:00 stderr F I0813 20:01:28.284080 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284554 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:28.284501904 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284631 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:28.284602557 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284655 1 tlsconfig.go:178] "Loaded 
client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.284638218 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284674 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.284660188 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284729 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284692069 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284747 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.28473534 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F 
I0813 20:01:28.284860 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284752491 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284887 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284871524 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284907 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:28.284895155 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284936 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:28.284925406 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284957 1 
tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284944016 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.285444 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:01:28.285370599 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.285903 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:01:28.285885023 +0000 UTC))" 2025-08-13T20:01:29.312658341+00:00 stderr F I0813 20:01:29.310941 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:01:29.312658341+00:00 stderr F I0813 20:01:29.311047 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:01:29.863595680+00:00 stderr F I0813 20:01:29.863511 1 log.go:245] Reconciling pki.network.operator.openshift.io 
openshift-network-node-identity/network-node-identity 2025-08-13T20:01:29.868243463+00:00 stderr F I0813 20:01:29.868114 1 log.go:245] successful reconciliation 2025-08-13T20:01:31.577566643+00:00 stderr F I0813 20:01:31.573682 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:01:31.577566643+00:00 stderr F I0813 20:01:31.574006 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:01:31.911241717+00:00 stderr F I0813 20:01:31.911140 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="7cb00b31757c1394caec2ab807eb732759e4f60c33864abe1196343a32306fb6", new="9ad6ea9e8750b1797a2290d936071e1b6afeb9dca994b721070d9e8357ccc62d") 2025-08-13T20:01:31.911347140+00:00 stderr F W0813 20:01:31.911330 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:31.911447063+00:00 stderr F I0813 20:01:31.911429 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="4b32fe525f439299dff9032d1fedd04523f56def3ae405a8eb1dcca9b4fa85c6", new="87b482cbe679adfeab619a3107dca513400610c55f0dbc209ea08b45f985b260") 2025-08-13T20:01:31.911757862+00:00 stderr F E0813 20:01:31.911723 1 leaderelection.go:369] Failed to update lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock?timeout=4m0s": context canceled 2025-08-13T20:01:31.911910516+00:00 stderr F I0813 20:01:31.911891 1 leaderelection.go:285] failed to renew lease openshift-network-operator/network-operator-lock: timed out waiting for the condition 2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.914450 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 
2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.916186 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.916519 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:31.919079251+00:00 stderr F I0813 20:01:31.919055 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:31.919271436+00:00 stderr F I0813 20:01:31.919254 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:31.924649409+00:00 stderr F I0813 20:01:31.919711 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:31.925018960+00:00 stderr F I0813 20:01:31.924991 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:31.925073401+00:00 stderr F I0813 20:01:31.925059 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:01:31.925122453+00:00 stderr F I0813 20:01:31.925106 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:31.925156124+00:00 stderr F I0813 20:01:31.925144 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:31.925197425+00:00 stderr F I0813 20:01:31.925182 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:01:31.925228356+00:00 stderr F I0813 20:01:31.925217 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:01:31.925347939+00:00 stderr F I0813 20:01:31.925333 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:31.926140032+00:00 stderr F I0813 20:01:31.926120 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 
2025-08-13T20:01:31.926218674+00:00 stderr F I0813 20:01:31.926205 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:01:31.926305776+00:00 stderr F I0813 20:01:31.926240 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:01:31.927496950+00:00 stderr F I0813 20:01:31.927469 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:31.927554022+00:00 stderr F I0813 20:01:31.927541 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929071 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929147 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929151 1 secure_serving.go:258] Stopped listening on [::]:9104 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929197 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929514 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/openshift-iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": context canceled 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929620 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:31.934501850+00:00 stderr F 
I0813 20:01:31.929647 1 builder.go:330] server exited 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929764 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="dashboard-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929929 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="operconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929942 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="infrastructureconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929954 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="ingress-config-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929946 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929976 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="clusterconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930177 1 log.go:245] could not apply (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930268 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930294 1 controller.go:242] "All workers finished" controller="dashboard-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930368 1 controller.go:242] "All workers finished" controller="ingress-config-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F 
I0813 20:01:31.930381 1 controller.go:242] "All workers finished" controller="infrastructureconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930419 1 controller.go:242] "All workers finished" controller="clusterconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930448 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="signer-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930454 1 controller.go:242] "All workers finished" controller="signer-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930465 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pki-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930474 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="egress-router-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930479 1 controller.go:242] "All workers finished" controller="egress-router-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930489 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930496 1 controller.go:242] "All workers finished" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930505 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pod-watcher" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930510 1 controller.go:242] "All workers finished" controller="pod-watcher" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930519 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="allowlist-controller" 
2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930525 1 controller.go:242] "All workers finished" controller="allowlist-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929965 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="proxyconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.931928 1 controller.go:242] "All workers finished" controller="proxyconfig-controller" 2025-08-13T20:01:31.937819265+00:00 stderr F I0813 20:01:31.937655 1 log.go:245] could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:35.298495932+00:00 stderr F E0813 20:01:35.297231 1 leaderelection.go:308] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "network-operator-lock": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:35.298495932+00:00 stderr F W0813 20:01:35.297515 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v4_join_subnet_opt= 2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]] 2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v6_join_subnet_opt= 2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]] 2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v4_transit_switch_subnet_opt= 2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]] 2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v6_transit_switch_subnet_opt= 2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]] 2025-08-13T19:50:47.008111355+00:00 stderr F + dns_name_resolver_enabled_flag= 2025-08-13T19:50:47.008111355+00:00 stderr F + [[ false == \t\r\u\e ]] 2025-08-13T19:50:47.008704362+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T19:50:47.017865444+00:00 stdout F I0813 19:50:47.015608289 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc 2025-08-13T19:50:47.017888264+00:00 stderr F + echo 'I0813 19:50:47.015608289 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc' 2025-08-13T19:50:47.017888264+00:00 stderr F 
+ exec /usr/bin/ovnkube --enable-interconnect --init-cluster-manager crc --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4 --metrics-bind-address 127.0.0.1:29108 --metrics-enable-pprof --metrics-enable-config-duration 2025-08-13T19:50:50.137364720+00:00 stderr F I0813 19:50:50.135597 1 config.go:2178] Parsed config file /run/ovnkube-config/ovnkube.conf 2025-08-13T19:50:50.138350768+00:00 stderr F I0813 19:50:50.137279 1 config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 LFlowCacheLimitKb:1048576 RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes 
OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:shared Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2025-08-13T19:50:50.196606393+00:00 stderr F I0813 19:50:50.194947 1 leaderelection.go:250] attempting to acquire leader lease 
openshift-ovn-kubernetes/ovn-kubernetes-master... 2025-08-13T19:50:50.221444453+00:00 stderr F I0813 19:50:50.219718 1 metrics.go:532] Starting metrics server at address "127.0.0.1:29108" 2025-08-13T19:50:51.311304292+00:00 stderr F I0813 19:50:51.308268 1 leaderelection.go:260] successfully acquired lease openshift-ovn-kubernetes/ovn-kubernetes-master 2025-08-13T19:50:51.313362211+00:00 stderr F I0813 19:50:51.312386 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"252a0995-33c8-4721-8f3e-d993f8bb73c8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"25604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ovnkube-control-plane-77c846df58-6l97b became leader 2025-08-13T19:50:51.323193252+00:00 stderr F I0813 19:50:51.313472 1 ovnkube.go:386] Won leader election; in active mode 2025-08-13T19:50:51.783390905+00:00 stderr F I0813 19:50:51.783329 1 secondary_network_cluster_manager.go:38] Creating secondary network cluster manager 2025-08-13T19:50:51.785274099+00:00 stderr F I0813 19:50:51.785245 1 egressservice_cluster.go:97] Setting up event handlers for Egress Services 2025-08-13T19:50:51.786952736+00:00 stderr F I0813 19:50:51.786879 1 clustermanager.go:123] Starting the cluster manager 2025-08-13T19:50:51.787671477+00:00 stderr F I0813 19:50:51.787651 1 factory.go:405] Starting watch factory 2025-08-13T19:50:51.792769953+00:00 stderr F I0813 19:50:51.792729 1 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.793460833+00:00 stderr F I0813 19:50:51.792660 1 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.793460833+00:00 stderr F I0813 19:50:51.793314 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.821478453+00:00 stderr F I0813 19:50:51.796024 1 
reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.822853883+00:00 stderr F I0813 19:50:51.796034 1 reflector.go:289] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833061844+00:00 stderr F I0813 19:50:51.833013 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833449055+00:00 stderr F I0813 19:50:51.793187 1 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833536038+00:00 stderr F I0813 19:50:51.833482 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.866091138+00:00 stderr F I0813 19:50:51.866025 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.879486051+00:00 stderr F I0813 19:50:51.879300 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.886033998+00:00 stderr F I0813 19:50:51.884115 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:52.016009213+00:00 stderr F I0813 19:50:52.013643 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:52.138039531+00:00 stderr F I0813 19:50:52.133891 1 reflector.go:289] Starting reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.138039531+00:00 stderr F I0813 19:50:52.134003 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.225412408+00:00 stderr F I0813 19:50:52.223535 1 
reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.252296116+00:00 stderr F I0813 19:50:52.246244 1 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.252296116+00:00 stderr F I0813 19:50:52.246268 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.256614110+00:00 stderr F I0813 19:50:52.255960 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.352689005+00:00 stderr F I0813 19:50:52.352212 1 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.352689005+00:00 stderr F I0813 19:50:52.352334 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.360154769+00:00 stderr F I0813 19:50:52.359628 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.455446802+00:00 stderr F I0813 19:50:52.454619 1 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.455446802+00:00 stderr F I0813 
19:50:52.455116 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.473755876+00:00 stderr F I0813 19:50:52.473481 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.559030883+00:00 stderr F I0813 19:50:52.558718 1 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.559030883+00:00 stderr F I0813 19:50:52.558875 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.563253964+00:00 stderr F I0813 19:50:52.563219 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.702912 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.703033 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.703049 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:50:52.706934640+00:00 stderr F I0813 19:50:52.706318 1 zone_cluster_controller.go:244] Node crc has the id 2 set 2025-08-13T19:50:52.714757014+00:00 stderr F I0813 19:50:52.714187 1 kube.go:128] Setting annotations 
map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:50:52.809440979+00:00 stderr F E0813 19:50:52.803290 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:50:52.809440979+00:00 stderr F E0813 19:50:52.803425 1 obj_retry.go:533] Failed to create *v1.Node crc, error: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:50:52.809440979+00:00 stderr F I0813 19:50:52.803522 1 secondary_network_cluster_manager.go:65] Starting secondary network cluster manager 2025-08-13T19:50:52.856380130+00:00 stderr F I0813 19:50:52.853690 1 network_attach_def_controller.go:134] Starting cluster-manager NAD controller 2025-08-13T19:50:52.856380130+00:00 stderr F I0813 19:50:52.855582 1 shared_informer.go:311] Waiting for caches to sync for cluster-manager 2025-08-13T19:50:52.857419180+00:00 stderr F I0813 19:50:52.857099 1 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T19:50:52.857419180+00:00 stderr F I0813 19:50:52.857151 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T19:50:52.870162564+00:00 stderr F I0813 19:50:52.869381 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T19:50:52.958608312+00:00 stderr F I0813 19:50:52.957364 1 shared_informer.go:318] Caches are synced for cluster-manager 2025-08-13T19:50:52.958608312+00:00 stderr F I0813 19:50:52.957688 1 network_attach_def_controller.go:182] Starting repairing loop for cluster-manager 2025-08-13T19:50:52.959329373+00:00 stderr F I0813 19:50:52.959096 1 network_attach_def_controller.go:184] Finished repairing loop for cluster-manager: 1.407231ms err: 2025-08-13T19:50:52.959329373+00:00 stderr F I0813 19:50:52.959265 1 network_attach_def_controller.go:153] Starting workers for cluster-manager NAD controller 2025-08-13T19:50:52.969034880+00:00 stderr F W0813 19:50:52.966294 1 egressip_healthcheck.go:165] Health checking using insecure connection 2025-08-13T19:50:53.964938774+00:00 stderr F W0813 19:50:53.963445 1 egressip_healthcheck.go:182] Could not connect to crc (10.217.0.2:9107): context deadline exceeded 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975184 1 egressip_controller.go:459] EgressIP node reachability enabled and using gRPC port 9107 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975308 1 egressservice_cluster.go:170] Starting Egress Services Controller 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975431 1 shared_informer.go:311] Waiting for caches to sync for egressservices 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975463 1 shared_informer.go:318] Caches are synced for egressservices 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975470 1 shared_informer.go:311] Waiting for caches to sync for 
egressservices_services 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975481 1 shared_informer.go:318] Caches are synced for egressservices_services 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975489 1 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975494 1 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975501 1 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975506 1 shared_informer.go:318] Caches are synced for egressservices_nodes 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975510 1 egressservice_cluster.go:187] Repairing Egress Services 2025-08-13T19:50:53.979904332+00:00 stderr F I0813 19:50:53.978374 1 kube.go:267] Setting labels map[] on node crc 2025-08-13T19:50:54.041876633+00:00 stderr F E0813 19:50:54.041484 1 egressservice_cluster.go:190] Failed to repair Egress Services entries: failed to remove stale labels map[] from node crc, err: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z 2025-08-13T19:50:54.041876633+00:00 stderr F I0813 19:50:54.041708 1 status_manager.go:210] Starting StatusManager with typed managers: map[adminpolicybasedexternalroutes:0xc000543280 egressfirewalls:0xc000543640 egressqoses:0xc000543a00] 2025-08-13T19:50:54.058164418+00:00 stderr F I0813 19:50:54.056436 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-08-13T19:50:54.060862575+00:00 stderr F I0813 19:50:54.060104 1 controller.go:69] Adding controller zone_tracker 
event handlers 2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063359 1 shared_informer.go:311] Waiting for caches to sync for zone_tracker 2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063452 1 shared_informer.go:318] Caches are synced for zone_tracker 2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063583 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2025-08-13T19:50:54.071896051+00:00 stderr F I0813 19:50:54.071052 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 2.665757ms 2025-08-13T19:50:54.071896051+00:00 stderr F I0813 19:50:54.071883 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.072169 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076453 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076479 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076560 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076567 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076573 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076580 1 controller.go:93] Starting controller zone_tracker with 1 workers 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076820 1 controller.go:69] Adding controller egressqoses_statusmanager event handlers 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.089351 1 shared_informer.go:311] Waiting for caches to 
sync for egressqoses_statusmanager 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090597 1 shared_informer.go:318] Caches are synced for egressqoses_statusmanager 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090843 1 controller.go:93] Starting controller egressqoses_statusmanager with 1 workers 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090973 1 controller.go:69] Adding controller adminpolicybasedexternalroutes_statusmanager event handlers 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091220 1 shared_informer.go:311] Waiting for caches to sync for adminpolicybasedexternalroutes_statusmanager 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091228 1 shared_informer.go:318] Caches are synced for adminpolicybasedexternalroutes_statusmanager 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091236 1 controller.go:93] Starting controller adminpolicybasedexternalroutes_statusmanager with 1 workers 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091582 1 controller.go:69] Adding controller egressfirewalls_statusmanager event handlers 2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110354 1 shared_informer.go:311] Waiting for caches to sync for egressfirewalls_statusmanager 2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110587 1 shared_informer.go:318] Caches are synced for egressfirewalls_statusmanager 2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110615 1 controller.go:93] Starting controller egressfirewalls_statusmanager with 1 workers 2025-08-13T19:51:22.807417979+00:00 stderr F I0813 19:51:22.806770 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:51:22.807519982+00:00 stderr F I0813 19:51:22.807427 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:51:22.808847550+00:00 stderr F I0813 19:51:22.807642 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} 
k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:51:22.823338894+00:00 stderr F E0813 19:51:22.823289 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:51:22.823429986+00:00 stderr F I0813 19:51:22.823396 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.806947 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.807028 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.807070 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:51:52.820544981+00:00 stderr F E0813 19:51:52.820407 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z 
2025-08-13T19:51:52.820677944+00:00 stderr F I0813 19:51:52.820623 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:52:22.806992436+00:00 stderr F I0813 19:52:22.806741 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:52:22.806992436+00:00 stderr F I0813 19:52:22.806917 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:52:22.807074858+00:00 stderr F I0813 19:52:22.806963 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:52:22.830313290+00:00 stderr F E0813 19:52:22.829879 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:52:22.830313290+00:00 stderr F I0813 19:52:22.829935 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:52:52.807391803+00:00 
stderr F I0813 19:52:52.807110 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:52:52.807391803+00:00 stderr F I0813 19:52:52.807254 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:52:52.807391803+00:00 stderr F I0813 19:52:52.807342 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:52:52.825204490+00:00 stderr F E0813 19:52:52.825150 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:52:52.825331403+00:00 stderr F I0813 19:52:52.825297 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:53:22.807657141+00:00 stderr F I0813 19:53:22.807266 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:53:22.807657141+00:00 stderr F I0813 19:53:22.807365 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:53:22.811209102+00:00 stderr F I0813 19:53:22.807710 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:53:22.827475965+00:00 stderr F E0813 
19:53:22.827347 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:53:22.827475965+00:00 stderr F I0813 19:53:22.827388 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:54:22.807698924+00:00 stderr F I0813 19:54:22.807408 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:54:22.807698924+00:00 stderr F I0813 19:54:22.807554 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:54:22.808136047+00:00 stderr F I0813 19:54:22.807918 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:54:22.820529181+00:00 stderr F E0813 19:54:22.820369 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:54:22.820529181+00:00 stderr F I0813 19:54:22.820465 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add 
failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:55:52.807349912+00:00 stderr F I0813 19:55:52.807274 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:55:52.807456975+00:00 stderr F I0813 19:55:52.807442 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:55:52.807543087+00:00 stderr F I0813 19:55:52.807501 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:55:52.829499734+00:00 stderr F E0813 19:55:52.829448 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:55:52.829591107+00:00 stderr F I0813 19:55:52.829564 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:56:40.869402446+00:00 stderr F I0813 19:56:40.869230 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 7 items received 
2025-08-13T19:56:56.883563870+00:00 stderr F I0813 19:56:56.883367 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 8 items received 2025-08-13T19:57:18.475975004+00:00 stderr F I0813 19:57:18.475715 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 7 items received 2025-08-13T19:57:22.807232882+00:00 stderr F I0813 19:57:22.807088 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:57:22.807232882+00:00 stderr F I0813 19:57:22.807163 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:57:22.807461968+00:00 stderr F I0813 19:57:22.807315 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:57:22.821010285+00:00 stderr F I0813 19:57:22.820923 1 obj_retry.go:379] Retry successful for *v1.Node crc after 8 failed attempt(s) 2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936725 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936855 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936874 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245549 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245606 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245619 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 
2025-08-13T19:57:48.015393389+00:00 stderr F I0813 19:57:48.014453 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-08-13T19:57:48.016040407+00:00 stderr F I0813 19:57:48.015562 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 1.166293ms 2025-08-13T19:58:08.260292188+00:00 stderr F I0813 19:58:08.260083 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 9 items received 2025-08-13T19:58:11.367112049+00:00 stderr F I0813 19:58:11.366955 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received 2025-08-13T19:58:28.569171919+00:00 stderr F I0813 19:58:28.569104 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 8 items received 2025-08-13T19:58:30.882072410+00:00 stderr F I0813 19:58:30.882015 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 8 items received 2025-08-13T19:59:31.931567968+00:00 stderr F I0813 19:59:31.931278 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:31.931723763+00:00 stderr F I0813 19:59:31.931703 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:31.931768044+00:00 stderr F I0813 19:59:31.931753 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:59:36.344737507+00:00 stderr F I0813 19:59:36.344667 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 
2025-08-13T19:59:36.344924792+00:00 stderr F I0813 19:59:36.344903 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:36.344973634+00:00 stderr F I0813 19:59:36.344959 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:59:39.558692531+00:00 stderr F I0813 19:59:39.558625 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:39.558876106+00:00 stderr F I0813 19:59:39.558765 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:39.558924078+00:00 stderr F I0813 19:59:39.558909 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.446758 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.447112 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.447132 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552263 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552292 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552301 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:06.018912174+00:00 stderr F I0813 20:00:06.018621 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 267 items received 2025-08-13T20:00:10.536567409+00:00 stderr F I0813 20:00:10.536496 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:10.546479252+00:00 stderr F I0813 20:00:10.546405 1 
node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:10.553482522+00:00 stderr F I0813 20:00:10.553237 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:10.904909542+00:00 stderr F I0813 20:00:10.901702 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 117 items received 2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372350 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372401 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372442 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964555 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964670 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964684 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207613 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207664 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207677 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.175693 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.175986 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 
2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.176002 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:37.239886826+00:00 stderr F I0813 20:00:37.239669 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received 2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721539 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721626 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721637 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183454 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183508 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183522 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482592 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482642 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482653 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:02:27.230902514+00:00 stderr F E0813 20:02:27.230715 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:02:29.555021265+00:00 stderr F I0813 20:02:29.543445 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 5 items received 2025-08-13T20:02:29.567434599+00:00 stderr F I0813 20:02:29.559371 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=8m24s&timeoutSeconds=504&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.618309570+00:00 stderr F I0813 20:02:29.617674 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 29 items received 2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.634907 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 41 items received 2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.635379 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m17s&timeoutSeconds=437&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.635438 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m7s&timeoutSeconds=367&watch=true": dial 
tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.650166689+00:00 stderr F I0813 20:02:29.648275 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 190 items received 2025-08-13T20:02:29.650864819+00:00 stderr F I0813 20:02:29.650415 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m24s&timeoutSeconds=564&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.728826623+00:00 stderr F I0813 20:02:29.728640 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 3 items received 2025-08-13T20:02:29.730307945+00:00 stderr F I0813 20:02:29.729917 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m26s&timeoutSeconds=326&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.766212049+00:00 stderr F I0813 20:02:29.766065 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 1 items received 2025-08-13T20:02:29.769020879+00:00 stderr F I0813 20:02:29.768893 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=9m32s&timeoutSeconds=572&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.786236280+00:00 stderr F I0813 20:02:29.785403 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 5 items received 2025-08-13T20:02:29.790440980+00:00 stderr F I0813 20:02:29.790200 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 4 items received 2025-08-13T20:02:29.791201392+00:00 stderr F I0813 20:02:29.790954 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.791201392+00:00 stderr F I0813 20:02:29.791122 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.821539418+00:00 stderr F I0813 20:02:29.819711 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 4 items received 
2025-08-13T20:02:29.826120728+00:00 stderr F I0813 20:02:29.823557 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m17s&timeoutSeconds=377&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.850282528+00:00 stderr F I0813 20:02:29.848563 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 4 items received 2025-08-13T20:02:29.863768742+00:00 stderr F I0813 20:02:29.862059 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.426758203+00:00 stderr F I0813 20:02:30.426630 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=5m0s&timeoutSeconds=300&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.613562752+00:00 stderr F I0813 20:02:30.613459 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get 
"https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=6m49s&timeoutSeconds=409&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.622867737+00:00 stderr F I0813 20:02:30.622702 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m40s&timeoutSeconds=340&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.734187682+00:00 stderr F I0813 20:02:30.733902 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m49s&timeoutSeconds=349&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.827708730+00:00 stderr F I0813 20:02:30.827599 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m30s&timeoutSeconds=510&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.969348540+00:00 stderr F I0813 20:02:30.969254 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.986989914+00:00 stderr F I0813 20:02:30.986882 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=9m21s&timeoutSeconds=561&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.148498841+00:00 stderr F I0813 20:02:31.148283 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=7m59s&timeoutSeconds=479&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.185367203+00:00 stderr F I0813 20:02:31.185265 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=5m28s&timeoutSeconds=328&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.205262971+00:00 stderr F I0813 20:02:31.205151 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m37s&timeoutSeconds=337&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:32.811521383+00:00 stderr F I0813 20:02:32.811380 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m53s&timeoutSeconds=353&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:32.967218024+00:00 stderr F I0813 20:02:32.966975 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.267354416+00:00 stderr F I0813 20:02:33.267244 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.283575399+00:00 stderr F I0813 20:02:33.283445 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.311052643+00:00 stderr F I0813 20:02:33.310927 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.497556503+00:00 stderr F I0813 20:02:33.497437 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m37s&timeoutSeconds=577&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.812968671+00:00 stderr F I0813 20:02:33.812640 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m17s&timeoutSeconds=557&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.896226736+00:00 stderr F I0813 20:02:33.896089 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:34.043488597+00:00 
stderr F I0813 20:02:34.043275 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:34.127397161+00:00 stderr F I0813 20:02:34.124967 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:36.687493352+00:00 stderr F I0813 20:02:36.687328 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m6s&timeoutSeconds=426&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:37.379416071+00:00 stderr F I0813 20:02:37.379304 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m34s&timeoutSeconds=454&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:37.775255493+00:00 stderr F I0813 20:02:37.775089 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService 
returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=7m16s&timeoutSeconds=436&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:38.458428071+00:00 stderr F I0813 20:02:38.458252 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:38.775456795+00:00 stderr F I0813 20:02:38.775309 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.049732509+00:00 stderr F I0813 20:02:39.049528 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=7m58s&timeoutSeconds=478&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.266981397+00:00 stderr F I0813 20:02:39.266750 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get 
"https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=9m17s&timeoutSeconds=557&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.303708335+00:00 stderr F I0813 20:02:39.303525 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m31s&timeoutSeconds=391&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.370876081+00:00 stderr F I0813 20:02:39.369179 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:40.381014997+00:00 stderr F I0813 20:02:40.380834 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=6m14s&timeoutSeconds=374&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:44.406123425+00:00 stderr F I0813 20:02:44.406046 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:47.103889375+00:00 stderr F I0813 20:02:47.103631 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m40s&timeoutSeconds=520&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:48.121575296+00:00 stderr F I0813 20:02:48.121432 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m59s&timeoutSeconds=479&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:48.670409442+00:00 stderr F I0813 20:02:48.670299 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.210064637+00:00 stderr F I0813 20:02:49.209985 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.265045125+00:00 stderr F I0813 20:02:49.264951 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.804731651+00:00 stderr F I0813 20:02:49.804662 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m4s&timeoutSeconds=484&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:50.484299458+00:00 stderr F I0813 20:02:50.484202 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:50.736101791+00:00 stderr F I0813 20:02:50.736002 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m34s&timeoutSeconds=334&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:51.559878001+00:00 stderr F I0813 20:02:51.559728 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=8m18s&timeoutSeconds=498&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:53.234302317+00:00 stderr F E0813 20:02:53.233647 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:01.312578358+00:00 stderr F I0813 20:03:01.312464 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:03.902879611+00:00 stderr F I0813 20:03:03.902658 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:04.115010862+00:00 
stderr F I0813 20:03:04.114909 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:04.838387938+00:00 stderr F I0813 20:03:04.838262 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:05.747741640+00:00 stderr F I0813 20:03:05.747634 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m48s&timeoutSeconds=528&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:07.821412355+00:00 stderr F I0813 20:03:07.821340 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=5m5s&timeoutSeconds=305&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:08.326698340+00:00 stderr F I0813 20:03:08.326594 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=7m33s&timeoutSeconds=453&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:09.777199688+00:00 stderr F I0813 20:03:09.777085 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=6m33s&timeoutSeconds=393&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:10.723459022+00:00 stderr F I0813 20:03:10.723347 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:12.627312223+00:00 stderr F I0813 20:03:12.627170 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m29s&timeoutSeconds=569&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:19.234222432+00:00 stderr F E0813 20:03:19.233955 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:30.614507142+00:00 stderr F I0813 20:03:30.614265 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:31.360239635+00:00 stderr F I0813 20:03:31.360179 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m37s&timeoutSeconds=337&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:34.585615087+00:00 stderr F I0813 20:03:34.585528 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m39s&timeoutSeconds=519&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:36.600715002+00:00 stderr F I0813 20:03:36.600587 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:37.001654079+00:00 stderr F I0813 20:03:37.001568 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m37s&timeoutSeconds=577&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:45.054617405+00:00 stderr F I0813 20:03:45.054499 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:45.234548188+00:00 stderr F E0813 20:03:45.234001 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:55.172078932+00:00 stderr F I0813 20:03:55.171354 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=5m56s&timeoutSeconds=356&watch=true": dial tcp 192.168.130.11:6443: connect: connection 
refused - backing off 2025-08-13T20:03:59.287601266+00:00 stderr F I0813 20:03:59.287477 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:59.287647398+00:00 stderr F I0813 20:03:59.287609 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=6m2s&timeoutSeconds=362&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:59.288739989+00:00 stderr F I0813 20:03:59.287711 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:04:08.669168343+00:00 stderr F I0813 20:04:08.669023 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 30619 (30722) 2025-08-13T20:04:11.111392643+00:00 stderr F I0813 20:04:11.109692 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 30717 (30724) 2025-08-13T20:04:15.899353869+00:00 stderr F I0813 20:04:15.899290 1 reflector.go:449] 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 30528 (30740) 2025-08-13T20:04:21.176697857+00:00 stderr F I0813 20:04:21.175306 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 30644 (30758) 2025-08-13T20:04:32.645446258+00:00 stderr F I0813 20:04:32.644704 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 30620 (30792) 2025-08-13T20:04:34.205071440+00:00 stderr F I0813 20:04:34.204960 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 30713 (30718) 2025-08-13T20:04:35.147270382+00:00 stderr F I0813 20:04:35.144833 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 30556 (30718) 2025-08-13T20:04:43.777106034+00:00 stderr F I0813 20:04:43.776984 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 30708 (30718) 2025-08-13T20:04:44.225162345+00:00 stderr F I0813 20:04:44.221249 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 30574 (30718) 2025-08-13T20:04:51.763337638+00:00 stderr F I0813 20:04:51.763204 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:04:51.774225740+00:00 stderr F I0813 20:04:51.774007 1 reflector.go:351] Caches populated for *v1.EgressService from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:04:58.390947006+00:00 stderr F I0813 20:04:58.390185 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 30542 (30756) 2025-08-13T20:05:04.968210444+00:00 stderr F I0813 20:05:04.968084 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:04.971231360+00:00 stderr F I0813 20:05:04.971150 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.117422378+00:00 stderr F I0813 20:05:10.117132 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.178603540+00:00 stderr F I0813 20:05:10.178488 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.489933925+00:00 stderr F I0813 20:05:10.489759 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.495159075+00:00 stderr F I0813 20:05:10.495008 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:13.591742609+00:00 stderr 
F I0813 20:05:13.591678 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:05:13.595271800+00:00 stderr F I0813 20:05:13.595146 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:05:14.414762877+00:00 stderr F I0813 20:05:14.414695 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.435980504+00:00 stderr F I0813 20:05:14.435741 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.973598850+00:00 stderr F I0813 20:05:14.973540 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.982395342+00:00 stderr F I0813 20:05:14.982351 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:20.334415922+00:00 stderr F I0813 20:05:20.334322 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:20.343099340+00:00 stderr F I0813 20:05:20.342971 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:23.010243446+00:00 stderr F I0813 20:05:23.007982 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:23.085513441+00:00 stderr F I0813 20:05:23.085366 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:40.178921662+00:00 stderr F I0813 20:05:40.178700 1 reflector.go:325] Listing and watching *v1.EgressIP from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:40.183126722+00:00 stderr F I0813 20:05:40.181664 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034127 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034184 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034195 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:14.803185921+00:00 stderr F I0813 20:06:14.802522 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:14.803185921+00:00 stderr F I0813 20:06:14.802582 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:14.803185921+00:00 stderr F I0813 20:06:14.802595 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900523 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900591 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900602 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:50.817129669+00:00 stderr F I0813 20:06:50.815968 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:50.817129669+00:00 stderr F I0813 20:06:50.816015 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:50.817129669+00:00 stderr F 
I0813 20:06:50.816025 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.439960 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.440007 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.440018 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:08:26.090759383+00:00 stderr F I0813 20:08:26.089857 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 8 items received 2025-08-13T20:08:26.238631123+00:00 stderr F I0813 20:08:26.238046 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 222 items received 2025-08-13T20:08:26.259847771+00:00 stderr F I0813 20:08:26.259645 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 3 items received 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.276954 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=8m9s&timeoutSeconds=489&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.277048 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m23s&timeoutSeconds=323&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.277107 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get 
"https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=5m20s&timeoutSeconds=320&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.292522258+00:00 stderr F I0813 20:08:26.292372 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 39 items received 2025-08-13T20:08:26.298874000+00:00 stderr F I0813 20:08:26.298551 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m5s&timeoutSeconds=425&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.365083809+00:00 stderr F I0813 20:08:26.364958 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 3 items received 2025-08-13T20:08:26.378009589+00:00 stderr F I0813 20:08:26.377942 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.393010649+00:00 stderr F I0813 20:08:26.392832 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 2 items received 2025-08-13T20:08:26.397152478+00:00 stderr F I0813 20:08:26.395718 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=5m31s&timeoutSeconds=331&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.435614541+00:00 stderr F I0813 20:08:26.435110 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 3 items received 2025-08-13T20:08:26.437728472+00:00 stderr F I0813 20:08:26.436745 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.455392008+00:00 stderr F I0813 20:08:26.454752 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 3 items received 2025-08-13T20:08:26.485053188+00:00 stderr F I0813 20:08:26.479510 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get 
"https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.505489924+00:00 stderr F I0813 20:08:26.504755 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 3 items received 2025-08-13T20:08:26.520279368+00:00 stderr F I0813 20:08:26.520223 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=6m12s&timeoutSeconds=372&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.524691415+00:00 stderr F I0813 20:08:26.524132 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 3 items received 2025-08-13T20:08:26.536512614+00:00 stderr F I0813 20:08:26.535729 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.439708929+00:00 stderr F I0813 20:08:27.437496 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute 
returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=5m56s&timeoutSeconds=356&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554592 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554639 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554759 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=6m4s&timeoutSeconds=364&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.581716051+00:00 stderr F I0813 20:08:27.581655 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=5m28s&timeoutSeconds=328&watch=true": dial tcp 
192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.827098016+00:00 stderr F I0813 20:08:27.827034 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=9m32s&timeoutSeconds=572&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.848690705+00:00 stderr F I0813 20:08:27.847430 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.856049506+00:00 stderr F I0813 20:08:27.854639 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=7m43s&timeoutSeconds=463&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.874058453+00:00 stderr F I0813 20:08:27.873571 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:28.064758340+00:00 stderr F 
I0813 20:08:28.064695 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=8m40s&timeoutSeconds=520&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:29.331667863+00:00 stderr F I0813 20:08:29.331478 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=5m44s&timeoutSeconds=344&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:29.808479154+00:00 stderr F I0813 20:08:29.808398 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=6m7s&timeoutSeconds=367&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.058301906+00:00 stderr F I0813 20:08:30.058178 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 
2025-08-13T20:08:30.059010837+00:00 stderr F I0813 20:08:30.058958 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m29s&timeoutSeconds=389&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.073310337+00:00 stderr F I0813 20:08:30.073260 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.449455051+00:00 stderr F I0813 20:08:30.449389 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=7m52s&timeoutSeconds=472&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.647255572+00:00 stderr F I0813 20:08:30.647054 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=9m42s&timeoutSeconds=582&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.682630716+00:00 stderr F I0813 20:08:30.682560 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get 
"https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=9m49s&timeoutSeconds=589&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.707877010+00:00 stderr F I0813 20:08:30.707073 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.992885373+00:00 stderr F I0813 20:08:30.992644 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=6m17s&timeoutSeconds=377&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:32.581485986+00:00 stderr F I0813 20:08:32.581307 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:34.324217982+00:00 stderr F I0813 20:08:34.324044 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get 
"https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=6m2s&timeoutSeconds=362&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:34.885714871+00:00 stderr F I0813 20:08:34.885543 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=8m27s&timeoutSeconds=507&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.087059034+00:00 stderr F I0813 20:08:35.086944 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.131211520+00:00 stderr F I0813 20:08:35.131130 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=5m13s&timeoutSeconds=313&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.199171128+00:00 stderr F E0813 20:08:35.198986 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 
2025-08-13T20:08:35.380952220+00:00 stderr F I0813 20:08:35.380700 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m54s&timeoutSeconds=414&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.555910746+00:00 stderr F I0813 20:08:35.555680 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.059852564+00:00 stderr F I0813 20:08:36.059700 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.252016463+00:00 stderr F I0813 20:08:36.251855 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=7m51s&timeoutSeconds=471&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.662864323+00:00 stderr F I0813 20:08:36.661881 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS 
returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:44.451271463+00:00 stderr F I0813 20:08:44.451033 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 32732 (32918) 2025-08-13T20:08:44.844100816+00:00 stderr F I0813 20:08:44.843246 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 32690 (32918) 2025-08-13T20:08:45.099720865+00:00 stderr F I0813 20:08:45.098729 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 32875 (32913) 2025-08-13T20:08:45.348963531+00:00 stderr F I0813 20:08:45.348844 1 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 32829 (32918) 2025-08-13T20:08:46.097685517+00:00 stderr F I0813 20:08:46.097516 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 32814 (32918) 2025-08-13T20:08:47.272971763+00:00 stderr F I0813 20:08:47.269375 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 32798 (32918) 2025-08-13T20:08:47.290868536+00:00 stderr F I0813 20:08:47.290586 1 reflector.go:449] 
k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 32843 (32913) 2025-08-13T20:08:47.326887289+00:00 stderr F I0813 20:08:47.326144 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m15s&timeoutSeconds=435&watch=true 2025-08-13T20:08:47.334399114+00:00 stderr F I0813 20:08:47.334296 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=7m19s&timeoutSeconds=439&watch=true 2025-08-13T20:08:47.336839134+00:00 stderr F I0813 20:08:47.334591 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 32909 (32913) 2025-08-13T20:08:47.343543856+00:00 stderr F I0813 20:08:47.342605 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 32837 (32918) 2025-08-13T20:08:48.497341147+00:00 stderr F I0813 20:08:48.497028 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m40s&timeoutSeconds=340&watch=true 2025-08-13T20:08:48.500319193+00:00 stderr F I0813 20:08:48.500176 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 32901 (32913) 2025-08-13T20:09:00.365291112+00:00 stderr F I0813 20:09:00.365157 1 reflector.go:325] Listing and watching *v1.EgressIP from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:00.370503812+00:00 stderr F I0813 20:09:00.370417 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:02.210017681+00:00 stderr F I0813 20:09:02.208986 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:02.217704731+00:00 stderr F I0813 20:09:02.217587 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:03.178241281+00:00 stderr F I0813 20:09:03.178084 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:03.183834491+00:00 stderr F I0813 20:09:03.183731 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:03.256741451+00:00 stderr F I0813 20:09:03.256609 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:03.258701918+00:00 stderr F I0813 20:09:03.258600 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:04.923135168+00:00 stderr F I0813 20:09:04.922990 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:04.925005072+00:00 stderr F I0813 20:09:04.924884 1 reflector.go:351] Caches populated for *v1.EgressService from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:07.684448359+00:00 stderr F I0813 20:09:07.684300 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:07.686339683+00:00 stderr F I0813 20:09:07.686236 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:08.255527921+00:00 stderr F I0813 20:09:08.255435 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:09:08.257354203+00:00 stderr F I0813 20:09:08.257266 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:09:08.414106628+00:00 stderr F I0813 20:09:08.413950 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:08.421467149+00:00 stderr F I0813 20:09:08.421398 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:09.945760781+00:00 stderr F I0813 20:09:09.945677 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:09.948237462+00:00 stderr F I0813 20:09:09.948206 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 
2025-08-13T20:09:10.057412292+00:00 stderr F I0813 20:09:10.057293 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:10.081556054+00:00 stderr F I0813 20:09:10.081404 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:15:03.950126047+00:00 stderr F I0813 20:15:03.949893 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 6 items received 2025-08-13T20:16:32.083872167+00:00 stderr F I0813 20:16:32.083696 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 58 items received 2025-08-13T20:16:58.188249111+00:00 stderr F I0813 20:16:58.188171 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 10 items received 2025-08-13T20:17:00.691109205+00:00 stderr F I0813 20:17:00.689700 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received 2025-08-13T20:17:05.423536700+00:00 stderr F I0813 20:17:05.423393 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received 2025-08-13T20:17:25.219155214+00:00 stderr F I0813 20:17:25.219006 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 19 items received 2025-08-13T20:17:27.261038704+00:00 stderr F I0813 20:17:27.259905 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 9 items received 2025-08-13T20:17:29.374880989+00:00 stderr F I0813 20:17:29.373161 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received 2025-08-13T20:18:30.926830606+00:00 stderr F I0813 20:18:30.926629 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 10 items received 2025-08-13T20:18:46.261049777+00:00 stderr F I0813 20:18:46.260940 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 10 items received 2025-08-13T20:22:38.193464370+00:00 stderr F I0813 20:22:38.192907 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 7 items received 2025-08-13T20:23:32.958254448+00:00 stderr F I0813 20:23:32.957887 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 10 items received 2025-08-13T20:23:54.378534512+00:00 stderr F I0813 20:23:54.377539 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 8 items received 2025-08-13T20:24:42.425620956+00:00 stderr F I0813 20:24:42.425447 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received 2025-08-13T20:25:18.087113742+00:00 stderr F I0813 20:25:18.086745 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 73 items received 2025-08-13T20:25:45.268485504+00:00 stderr F I0813 20:25:45.268365 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch 
close - *v1.NetworkAttachmentDefinition total 10 items received 2025-08-13T20:26:09.696522126+00:00 stderr F I0813 20:26:09.696386 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 10 items received 2025-08-13T20:26:19.269665650+00:00 stderr F I0813 20:26:19.269527 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 9 items received 2025-08-13T20:26:36.221522339+00:00 stderr F I0813 20:26:36.221340 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 11 items received 2025-08-13T20:27:13.935322595+00:00 stderr F I0813 20:27:13.935240 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 9 items received 2025-08-13T20:29:38.197111009+00:00 stderr F I0813 20:29:38.196647 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 8 items received 2025-08-13T20:30:32.963915317+00:00 stderr F I0813 20:30:32.963851 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 7 items received 2025-08-13T20:32:27.381759922+00:00 stderr F I0813 20:32:27.381663 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received 2025-08-13T20:33:02.700387816+00:00 stderr F I0813 20:33:02.700286 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS 
total 8 items received 2025-08-13T20:34:09.277196906+00:00 stderr F I0813 20:34:09.276918 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 10 items received 2025-08-13T20:34:15.428383736+00:00 stderr F I0813 20:34:15.428177 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 11 items received 2025-08-13T20:34:37.095177679+00:00 stderr F I0813 20:34:37.094865 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 87 items received 2025-08-13T20:35:23.231610773+00:00 stderr F I0813 20:35:23.231439 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 9 items received 2025-08-13T20:35:51.277656871+00:00 stderr F I0813 20:35:51.277406 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 11 items received 2025-08-13T20:37:01.945895656+00:00 stderr F I0813 20:37:01.945690 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 10 items received 2025-08-13T20:37:25.979330639+00:00 stderr F I0813 20:37:25.979247 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 8 items received 2025-08-13T20:38:59.199981815+00:00 stderr F I0813 20:38:59.199865 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 13 items received 2025-08-13T20:40:09.103307343+00:00 stderr F I0813 20:40:09.102623 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod 
total 38 items received 2025-08-13T20:41:27.433448588+00:00 stderr F I0813 20:41:27.432302 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received 2025-08-13T20:42:06.389294881+00:00 stderr F I0813 20:42:06.388390 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 11 items received 2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322153 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322268 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322281 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:14.045741887+00:00 stderr F I0813 20:42:14.045658 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:14.046010575+00:00 stderr F I0813 20:42:14.045977 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:14.046096927+00:00 stderr F I0813 20:42:14.046069 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:16.636605691+00:00 stderr F I0813 20:42:16.636532 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:16.636711154+00:00 stderr F I0813 20:42:16.636691 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:16.636762906+00:00 stderr F I0813 20:42:16.636744 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:21.922931155+00:00 stderr F I0813 20:42:21.922766 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:21.923024178+00:00 stderr F I0813 20:42:21.923009 1 
node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:21.923060749+00:00 stderr F I0813 20:42:21.923047 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099356 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099402 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099413 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:36.331250880+00:00 stderr F I0813 20:42:36.331166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.331349213+00:00 stderr F I0813 20:42:36.331311 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 7 items received 2025-08-13T20:42:36.331592590+00:00 stderr F I0813 20:42:36.331573 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.331637981+00:00 stderr F I0813 20:42:36.331624 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 5 items received 2025-08-13T20:42:36.331898209+00:00 stderr F I0813 20:42:36.331828 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.331990341+00:00 stderr F I0813 20:42:36.331914 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 0 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340240 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340465 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 8 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340514 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340560 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 24 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.342976 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.343115 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 6 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378439 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 5 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378738 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378756 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 11 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378898 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378911 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 9 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.379107 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.379119 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 1 items received 2025-08-13T20:42:36.468721613+00:00 stderr F I0813 20:42:36.468553 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=6m18s&timeoutSeconds=378&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469025152+00:00 stderr F I0813 20:42:36.468987 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=7m30s&timeoutSeconds=450&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469131115+00:00 stderr F I0813 20:42:36.469109 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469254349+00:00 stderr F I0813 20:42:36.469202 1 
reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=5m27s&timeoutSeconds=327&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469357872+00:00 stderr F I0813 20:42:36.469335 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=8m0s&timeoutSeconds=480&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469467895+00:00 stderr F I0813 20:42:36.469426 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=7m32s&timeoutSeconds=452&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469565308+00:00 stderr F I0813 20:42:36.469543 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=7m11s&timeoutSeconds=431&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 
2025-08-13T20:42:36.469649140+00:00 stderr F I0813 20:42:36.469629 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=7m17s&timeoutSeconds=437&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.474744937+00:00 stderr F I0813 20:42:36.474712 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=8m48s&timeoutSeconds=528&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.512134905+00:00 stderr F I0813 20:42:36.510142 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=5m3s&timeoutSeconds=303&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.323390694+00:00 stderr F I0813 20:42:37.323003 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.366734713+00:00 stderr F I0813 20:42:37.365096 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get 
"https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.368280888+00:00 stderr F I0813 20:42:37.368204 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.469326791+00:00 stderr F I0813 20:42:37.469035 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=5m52s&timeoutSeconds=352&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.507438310+00:00 stderr F I0813 20:42:37.507270 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=9m21s&timeoutSeconds=561&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.576580683+00:00 stderr F I0813 20:42:37.576505 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.719706180+00:00 stderr F I0813 20:42:37.719420 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=9m3s&timeoutSeconds=543&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.761913206+00:00 stderr F I0813 20:42:37.761702 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.893640494+00:00 stderr F I0813 20:42:37.893106 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=5m24s&timeoutSeconds=324&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:38.751053994+00:00 stderr F I0813 20:42:38.747855 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:39.422669377+00:00 stderr F I0813 20:42:39.422559 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=8m51s&timeoutSeconds=531&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:39.647877839+00:00 stderr F I0813 20:42:39.647552 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:39.747394858+00:00 stderr F I0813 20:42:39.747311 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=6m33s&timeoutSeconds=393&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.014528689+00:00 stderr F I0813 20:42:40.014415 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=7m2s&timeoutSeconds=422&watch=true": dial tcp 192.168.130.11:6443: 
connect: connection refused - backing off 2025-08-13T20:42:40.169436885+00:00 stderr F I0813 20:42:40.169161 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.328353177+00:00 stderr F I0813 20:42:40.328195 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=8m57s&timeoutSeconds=537&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.614885258+00:00 stderr F I0813 20:42:40.614162 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.765210432+00:00 stderr F I0813 20:42:40.765040 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.865262196+00:00 stderr F I0813 20:42:40.864737 1 reflector.go:425] 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=7m6s&timeoutSeconds=426&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:41.495986521+00:00 stderr F I0813 20:42:41.495869 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:43.194981592+00:00 stderr F I0813 20:42:43.194872 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:43.685720091+00:00 stderr F I0813 20:42:43.685610 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:43.877741517+00:00 stderr F I0813 20:42:43.877605 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get 
"https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:44.027481544+00:00 stderr F I0813 20:42:44.027363 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=7m13s&timeoutSeconds=433&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:44.107128320+00:00 stderr F I0813 20:42:44.107005 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.013280195+00:00 stderr F I0813 20:42:45.013115 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.065594613+00:00 stderr F I0813 20:42:45.065486 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get 
"https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.541671939+00:00 stderr F I0813 20:42:45.541600 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.848624948+00:00 stderr F I0813 20:42:45.848520 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=8m1s&timeoutSeconds=481&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.871742875+00:00 stderr F I0813 20:42:45.871625 1 ovnkube.go:129] Received signal terminated. Shutting down 2025-08-13T20:42:45.871742875+00:00 stderr F I0813 20:42:45.871726 1 ovnkube.go:577] Stopping ovnkube... 
2025-08-13T20:42:45.871926880+00:00 stderr F I0813 20:42:45.871874 1 metrics.go:552] Stopping metrics server at address "127.0.0.1:29108" 2025-08-13T20:42:45.872162857+00:00 stderr F I0813 20:42:45.872084 1 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872178347+00:00 stderr F I0813 20:42:45.872161 1 reflector.go:295] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872398174+00:00 stderr F I0813 20:42:45.872346 1 clustermanager.go:170] Stopping the cluster manager 2025-08-13T20:42:45.872412244+00:00 stderr F I0813 20:42:45.872395 1 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872426 1 secondary_network_cluster_manager.go:100] Stopping secondary network cluster manager 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872458 1 network_attach_def_controller.go:166] Shutting down cluster-manager NAD controller 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872461 1 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872497767+00:00 stderr F I0813 20:42:45.872488 1 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872595679+00:00 stderr F I0813 20:42:45.872537 1 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872608610+00:00 stderr F I0813 
20:42:45.872591 1 reflector.go:295] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872618260+00:00 stderr F I0813 20:42:45.872341 1 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872841806+00:00 stderr F I0813 20:42:45.872669 1 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:42:45.872841806+00:00 stderr F I0813 20:42:45.872371 1 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.873977959+00:00 stderr F E0813 20:42:45.873903 1 handler.go:199] Removing already-removed *v1.EgressIP event handler 4 2025-08-13T20:42:45.874051631+00:00 stderr F E0813 20:42:45.873997 1 handler.go:199] Removing already-removed *v1.Node event handler 1 2025-08-13T20:42:45.874079042+00:00 stderr F E0813 20:42:45.874061 1 handler.go:199] Removing already-removed *v1.Node event handler 2 2025-08-13T20:42:45.874090303+00:00 stderr F E0813 20:42:45.874077 1 handler.go:199] Removing already-removed *v1.Node event handler 3 2025-08-13T20:42:45.875302367+00:00 stderr F I0813 20:42:45.875214 1 egressservice_cluster.go:218] Shutting down Egress Services controller 2025-08-13T20:42:45.877599304+00:00 stderr F I0813 20:42:45.877560 1 ovnkube.go:581] Stopped ovnkube 2025-08-13T20:42:45.877876942+00:00 stderr F E0813 20:42:45.877692 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:45.880356233+00:00 stderr F I0813 20:42:45.880204 1 ovnkube.go:395] No longer 
leader; exiting 

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log <==

2026-01-20T10:47:24.770843761+00:00 stderr F + [[ -f /env/_master ]] 2026-01-20T10:47:24.770843761+00:00 stderr F + ovn_v4_join_subnet_opt= 2026-01-20T10:47:24.770843761+00:00 stderr F + [[ '' != '' ]] 2026-01-20T10:47:24.770960764+00:00 stderr F + ovn_v6_join_subnet_opt= 2026-01-20T10:47:24.770960764+00:00 stderr F + [[ '' != '' ]] 2026-01-20T10:47:24.770960764+00:00 stderr F + ovn_v4_transit_switch_subnet_opt= 2026-01-20T10:47:24.770960764+00:00 stderr F + [[ '' != '' ]] 2026-01-20T10:47:24.770960764+00:00 stderr F + ovn_v6_transit_switch_subnet_opt= 2026-01-20T10:47:24.770960764+00:00 stderr F + [[ '' != '' ]] 2026-01-20T10:47:24.770960764+00:00 stderr F + dns_name_resolver_enabled_flag= 2026-01-20T10:47:24.770960764+00:00 stderr F + [[ false == \t\r\u\e ]] 2026-01-20T10:47:24.771341074+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2026-01-20T10:47:24.774409937+00:00 stdout F I0120 10:47:24.773495653 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc 2026-01-20T10:47:24.774424188+00:00 stderr F + echo 'I0120 10:47:24.773495653 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc' 2026-01-20T10:47:24.774478979+00:00 stderr F + exec /usr/bin/ovnkube --enable-interconnect --init-cluster-manager crc --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4 --metrics-bind-address 127.0.0.1:29108 --metrics-enable-pprof --metrics-enable-config-duration 2026-01-20T10:47:24.976967544+00:00 stderr F I0120 10:47:24.976708 1 config.go:2178] Parsed config file 
/run/ovnkube-config/ovnkube.conf 2026-01-20T10:47:24.977029226+00:00 stderr F I0120 10:47:24.976936 1 config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 LFlowCacheLimitKb:1048576 RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: 
OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:shared Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2026-01-20T10:47:24.983995105+00:00 stderr F I0120 10:47:24.983946 1 metrics.go:532] Starting metrics server at address "127.0.0.1:29108" 2026-01-20T10:47:24.985889415+00:00 stderr F I0120 10:47:24.984108 1 leaderelection.go:250] attempting to acquire leader lease openshift-ovn-kubernetes/ovn-kubernetes-master... 
2026-01-20T10:47:25.004884639+00:00 stderr F I0120 10:47:25.004837 1 leaderelection.go:260] successfully acquired lease openshift-ovn-kubernetes/ovn-kubernetes-master 2026-01-20T10:47:25.005106645+00:00 stderr F I0120 10:47:25.005083 1 ovnkube.go:386] Won leader election; in active mode 2026-01-20T10:47:25.005171327+00:00 stderr F I0120 10:47:25.005122 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"191c52d0-8e80-4a25-b5b6-abbf211ef81a", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"38009", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ovnkube-control-plane-77c846df58-6l97b became leader 2026-01-20T10:47:25.008106996+00:00 stderr F I0120 10:47:25.008071 1 secondary_network_cluster_manager.go:38] Creating secondary network cluster manager 2026-01-20T10:47:25.008516958+00:00 stderr F I0120 10:47:25.008498 1 egressservice_cluster.go:97] Setting up event handlers for Egress Services 2026-01-20T10:47:25.008653862+00:00 stderr F I0120 10:47:25.008626 1 clustermanager.go:123] Starting the cluster manager 2026-01-20T10:47:25.008653862+00:00 stderr F I0120 10:47:25.008637 1 factory.go:405] Starting watch factory 2026-01-20T10:47:25.009099983+00:00 stderr F I0120 10:47:25.008999 1 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.009099983+00:00 stderr F I0120 10:47:25.009074 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.009874224+00:00 stderr F I0120 10:47:25.009830 1 reflector.go:289] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.009874224+00:00 stderr F I0120 10:47:25.009857 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.010140491+00:00 stderr F I0120 10:47:25.010083 1 reflector.go:289] Starting reflector *v1.Service 
(0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.010140491+00:00 stderr F I0120 10:47:25.010122 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.010269565+00:00 stderr F I0120 10:47:25.010248 1 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.010269565+00:00 stderr F I0120 10:47:25.010259 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.011188980+00:00 stderr F I0120 10:47:25.011162 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.015901917+00:00 stderr F I0120 10:47:25.015865 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.016553174+00:00 stderr F I0120 10:47:25.016530 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.031163360+00:00 stderr F I0120 10:47:25.031112 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.110789873+00:00 stderr F I0120 10:47:25.110612 1 reflector.go:289] Starting reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.110789873+00:00 stderr F I0120 10:47:25.110636 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.127048793+00:00 stderr F I0120 10:47:25.126922 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 
2026-01-20T10:47:25.211608519+00:00 stderr F I0120 10:47:25.211434 1 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.211608519+00:00 stderr F I0120 10:47:25.211470 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.224765935+00:00 stderr F I0120 10:47:25.224726 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.311831059+00:00 stderr F I0120 10:47:25.311774 1 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.311831059+00:00 stderr F I0120 10:47:25.311801 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.336756743+00:00 stderr F I0120 10:47:25.336723 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.412999015+00:00 stderr F I0120 10:47:25.412937 1 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.412999015+00:00 stderr F I0120 10:47:25.412964 1 reflector.go:325] Listing and watching *v1.EgressService from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.428570096+00:00 stderr F I0120 10:47:25.428535 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.513643116+00:00 stderr F I0120 10:47:25.513469 1 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.513643116+00:00 stderr F I0120 10:47:25.513488 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.528899318+00:00 stderr F I0120 10:47:25.528807 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:47:25.617135315+00:00 stderr F I0120 10:47:25.614264 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2026-01-20T10:47:25.617135315+00:00 stderr F I0120 10:47:25.614292 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2026-01-20T10:47:25.617135315+00:00 stderr F I0120 10:47:25.614301 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2026-01-20T10:47:25.617135315+00:00 stderr F I0120 10:47:25.614321 1 zone_cluster_controller.go:244] Node crc has the id 2 set 2026-01-20T10:47:25.617135315+00:00 stderr F I0120 10:47:25.614498 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 
k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2026-01-20T10:47:25.629164640+00:00 stderr F I0120 10:47:25.629090 1 secondary_network_cluster_manager.go:65] Starting secondary network cluster manager 2026-01-20T10:47:25.629316794+00:00 stderr F I0120 10:47:25.629299 1 network_attach_def_controller.go:134] Starting cluster-manager NAD controller 2026-01-20T10:47:25.629507349+00:00 stderr F I0120 10:47:25.629489 1 shared_informer.go:311] Waiting for caches to sync for cluster-manager 2026-01-20T10:47:25.629999782+00:00 stderr F I0120 10:47:25.629929 1 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:47:25.629999782+00:00 stderr F I0120 10:47:25.629974 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:47:25.644591827+00:00 stderr F I0120 10:47:25.644502 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:47:25.730341605+00:00 stderr F I0120 10:47:25.730285 1 shared_informer.go:318] Caches are synced for cluster-manager 2026-01-20T10:47:25.730341605+00:00 stderr F I0120 10:47:25.730326 1 network_attach_def_controller.go:182] Starting repairing loop for cluster-manager 2026-01-20T10:47:25.730476519+00:00 stderr F I0120 10:47:25.730445 1 network_attach_def_controller.go:184] Finished repairing loop for cluster-manager: 139.654µs err: 2026-01-20T10:47:25.730476519+00:00 stderr F I0120 10:47:25.730457 1 network_attach_def_controller.go:153] Starting workers for cluster-manager NAD controller 2026-01-20T10:47:25.731479177+00:00 stderr F W0120 
10:47:25.731439 1 egressip_healthcheck.go:165] Health checking using insecure connection 2026-01-20T10:47:26.731120696+00:00 stderr F W0120 10:47:26.730988 1 egressip_healthcheck.go:182] Could not connect to crc (10.217.0.2:9107): context deadline exceeded 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731045 1 egressip_controller.go:459] EgressIP node reachability enabled and using gRPC port 9107 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731051 1 egressservice_cluster.go:170] Starting Egress Services Controller 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731073 1 shared_informer.go:311] Waiting for caches to sync for egressservices 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731078 1 shared_informer.go:318] Caches are synced for egressservices 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731084 1 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731087 1 shared_informer.go:318] Caches are synced for egressservices_services 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731092 1 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731095 1 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731105 1 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731108 1 shared_informer.go:318] Caches are synced for egressservices_nodes 2026-01-20T10:47:26.731120696+00:00 stderr F I0120 10:47:26.731111 1 egressservice_cluster.go:187] Repairing Egress Services 2026-01-20T10:47:26.731175377+00:00 stderr F I0120 10:47:26.731163 1 kube.go:267] Setting labels map[] on node crc 2026-01-20T10:47:26.743937133+00:00 stderr F I0120 10:47:26.743825 1 
status_manager.go:210] Starting StatusManager with typed managers: map[adminpolicybasedexternalroutes:0xc0004b0480 egressfirewalls:0xc0004b0840 egressqoses:0xc0004b17c0] 2026-01-20T10:47:26.743937133+00:00 stderr F I0120 10:47:26.743857 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2026-01-20T10:47:26.743937133+00:00 stderr F I0120 10:47:26.743903 1 controller.go:69] Adding controller zone_tracker event handlers 2026-01-20T10:47:26.743937133+00:00 stderr F I0120 10:47:26.743887 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 43.441µs 2026-01-20T10:47:26.743973864+00:00 stderr F I0120 10:47:26.743945 1 shared_informer.go:311] Waiting for caches to sync for zone_tracker 2026-01-20T10:47:26.743973864+00:00 stderr F I0120 10:47:26.743953 1 shared_informer.go:318] Caches are synced for zone_tracker 2026-01-20T10:47:26.744031105+00:00 stderr F I0120 10:47:26.743982 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2026-01-20T10:47:26.744031105+00:00 stderr F I0120 10:47:26.744014 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2026-01-20T10:47:26.744031105+00:00 stderr F I0120 10:47:26.744020 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2026-01-20T10:47:26.744044866+00:00 stderr F I0120 10:47:26.744040 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2026-01-20T10:47:26.744053176+00:00 stderr F I0120 10:47:26.744045 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2026-01-20T10:47:26.744053176+00:00 stderr F I0120 10:47:26.744050 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2026-01-20T10:47:26.744074957+00:00 stderr F I0120 10:47:26.744053 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2026-01-20T10:47:26.744074957+00:00 stderr F I0120 10:47:26.744068 1 controller.go:199] Controller 
egressfirewalls_statusmanager: full reconcile 2026-01-20T10:47:26.744086197+00:00 stderr F I0120 10:47:26.744074 1 controller.go:93] Starting controller zone_tracker with 1 workers 2026-01-20T10:47:26.744095807+00:00 stderr F I0120 10:47:26.744091 1 controller.go:69] Adding controller adminpolicybasedexternalroutes_statusmanager event handlers 2026-01-20T10:47:26.744124368+00:00 stderr F I0120 10:47:26.744107 1 shared_informer.go:311] Waiting for caches to sync for adminpolicybasedexternalroutes_statusmanager 2026-01-20T10:47:26.744124368+00:00 stderr F I0120 10:47:26.744113 1 shared_informer.go:318] Caches are synced for adminpolicybasedexternalroutes_statusmanager 2026-01-20T10:47:26.744124368+00:00 stderr F I0120 10:47:26.744119 1 controller.go:93] Starting controller adminpolicybasedexternalroutes_statusmanager with 1 workers 2026-01-20T10:47:26.744145108+00:00 stderr F I0120 10:47:26.744129 1 controller.go:69] Adding controller egressfirewalls_statusmanager event handlers 2026-01-20T10:47:26.744145108+00:00 stderr F I0120 10:47:26.744138 1 shared_informer.go:311] Waiting for caches to sync for egressfirewalls_statusmanager 2026-01-20T10:47:26.744145108+00:00 stderr F I0120 10:47:26.744141 1 shared_informer.go:318] Caches are synced for egressfirewalls_statusmanager 2026-01-20T10:47:26.744155169+00:00 stderr F I0120 10:47:26.744145 1 controller.go:93] Starting controller egressfirewalls_statusmanager with 1 workers 2026-01-20T10:47:26.744155169+00:00 stderr F I0120 10:47:26.744152 1 controller.go:69] Adding controller egressqoses_statusmanager event handlers 2026-01-20T10:47:26.744165629+00:00 stderr F I0120 10:47:26.744161 1 shared_informer.go:311] Waiting for caches to sync for egressqoses_statusmanager 2026-01-20T10:47:26.744173739+00:00 stderr F I0120 10:47:26.744165 1 shared_informer.go:318] Caches are synced for egressqoses_statusmanager 2026-01-20T10:47:26.744173739+00:00 stderr F I0120 10:47:26.744169 1 controller.go:93] Starting controller 
egressqoses_statusmanager with 1 workers 2026-01-20T10:47:26.997364816+00:00 stderr F I0120 10:47:26.996564 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2026-01-20T10:47:26.997364816+00:00 stderr F I0120 10:47:26.996605 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2026-01-20T10:47:26.997364816+00:00 stderr F I0120 10:47:26.996621 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2026-01-20T10:47:27.614486903+00:00 stderr F I0120 10:47:27.608410 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2026-01-20T10:47:27.614486903+00:00 stderr F I0120 10:47:27.608448 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 45.001µs 2026-01-20T10:48:34.151753213+00:00 stderr F I0120 10:48:34.151542 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2026-01-20T10:48:34.151753213+00:00 stderr F I0120 10:48:34.151576 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2026-01-20T10:48:34.151753213+00:00 stderr F I0120 10:48:34.151587 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2026-01-20T10:48:34.269683937+00:00 stderr F I0120 10:48:34.269616 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2026-01-20T10:48:34.269683937+00:00 stderr F I0120 10:48:34.269647 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2026-01-20T10:48:34.269683937+00:00 stderr F I0120 10:48:34.269657 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2026-01-20T10:48:36.367908663+00:00 stderr F I0120 10:48:36.367804 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2026-01-20T10:48:36.367908663+00:00 stderr F I0120 10:48:36.367841 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 52.831µs 2026-01-20T10:52:42.018838313+00:00 stderr F 
I0120 10:52:42.018706 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 143 items received 2026-01-20T10:52:58.034286404+00:00 stderr F I0120 10:52:58.033823 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 385 items received 2026-01-20T10:53:29.129264066+00:00 stderr F I0120 10:53:29.129087 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 7 items received 2026-01-20T10:54:01.338725544+00:00 stderr F I0120 10:54:01.338623 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 7 items received 2026-01-20T10:54:13.018454590+00:00 stderr F I0120 10:54:13.018405 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 7 items received 2026-01-20T10:54:20.542741456+00:00 stderr F I0120 10:54:20.542677 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2026-01-20T10:54:20.542816218+00:00 stderr F I0120 10:54:20.542801 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2026-01-20T10:54:20.542852269+00:00 stderr F I0120 10:54:20.542840 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2026-01-20T10:54:21.013537166+00:00 stderr F I0120 10:54:21.013461 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 18 items received 2026-01-20T10:54:37.530890290+00:00 stderr F I0120 10:54:37.530779 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 8 items received 2026-01-20T10:54:38.226590546+00:00 stderr F I0120 10:54:38.226482 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 8 items received 2026-01-20T10:54:48.275309944+00:00 stderr F I0120 10:54:48.275221 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2026-01-20T10:54:48.275309944+00:00 stderr F I0120 10:54:48.275251 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2026-01-20T10:54:48.275309944+00:00 stderr F I0120 10:54:48.275259 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2026-01-20T10:54:49.204123813+00:00 stderr F I0120 10:54:49.199465 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2026-01-20T10:54:49.204123813+00:00 stderr F I0120 10:54:49.199522 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2026-01-20T10:54:49.204123813+00:00 stderr F I0120 10:54:49.199545 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2026-01-20T10:55:21.646415632+00:00 stderr F I0120 10:55:21.646345 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 8 items received 2026-01-20T10:56:53.435213334+00:00 stderr F I0120 10:56:53.428043 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2026-01-20T10:56:53.435213334+00:00 stderr F I0120 10:56:53.428097 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2026-01-20T10:56:53.435213334+00:00 stderr F I0120 10:56:53.428108 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2026-01-20T10:56:55.429966788+00:00 stderr F I0120 10:56:55.429886 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - 
*v1.EgressService total 10 items received 2026-01-20T10:57:09.874702671+00:00 stderr F I0120 10:57:09.873821 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 6 items received 2026-01-20T10:57:09.887300784+00:00 stderr F I0120 10:57:09.886969 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=43189&timeout=9m55s&timeoutSeconds=595&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.914416491+00:00 stderr F I0120 10:57:09.911896 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 5 items received 2026-01-20T10:57:09.914416491+00:00 stderr F I0120 10:57:09.912430 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42888&timeout=7m4s&timeoutSeconds=424&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.920149743+00:00 stderr F I0120 10:57:09.920039 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 16 items received 2026-01-20T10:57:09.930014914+00:00 stderr F I0120 10:57:09.928121 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 65 items received 2026-01-20T10:57:09.930014914+00:00 stderr F I0120 10:57:09.928238 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get 
"https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=43258&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.932637453+00:00 stderr F I0120 10:57:09.931263 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43251&timeout=9m55s&timeoutSeconds=595&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.951050940+00:00 stderr F I0120 10:57:09.950635 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 2 items received 2026-01-20T10:57:09.952597901+00:00 stderr F I0120 10:57:09.952528 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=43021&timeout=6m8s&timeoutSeconds=368&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.962131893+00:00 stderr F I0120 10:57:09.962073 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 0 items received 2026-01-20T10:57:09.963771827+00:00 stderr F I0120 10:57:09.963719 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=43181&timeout=6m17s&timeoutSeconds=377&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.974834469+00:00 stderr F I0120 10:57:09.974698 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 3 items received 2026-01-20T10:57:09.976684438+00:00 stderr F I0120 10:57:09.975213 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=43202&timeout=9m24s&timeoutSeconds=564&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.992999109+00:00 stderr F I0120 10:57:09.992437 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 1 items received 2026-01-20T10:57:09.993866892+00:00 stderr F I0120 10:57:09.993230 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42489&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.000947030+00:00 stderr F I0120 10:57:10.000769 1 
reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 2 items received 2026-01-20T10:57:10.001193046+00:00 stderr F I0120 10:57:10.001163 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 3 items received 2026-01-20T10:57:10.001984767+00:00 stderr F I0120 10:57:10.001351 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=43030&timeout=8m25s&timeoutSeconds=505&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.002355747+00:00 stderr F I0120 10:57:10.002281 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42605&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.789352139+00:00 stderr F I0120 10:57:10.789248 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=43181&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.934050536+00:00 stderr F I0120 10:57:10.933976 1 reflector.go:425] 
k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=43189&timeout=7m51s&timeoutSeconds=471&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.999699832+00:00 stderr F I0120 10:57:10.999620 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=43202&timeout=7m19s&timeoutSeconds=439&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.064088135+00:00 stderr F I0120 10:57:11.063955 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42605&timeout=8m57s&timeoutSeconds=537&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.228625396+00:00 stderr F I0120 10:57:11.228530 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=43021&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.305266033+00:00 stderr F I0120 10:57:11.305167 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get 
"https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42888&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.362704301+00:00 stderr F I0120 10:57:11.362612 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=43030&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.384433466+00:00 stderr F I0120 10:57:11.384315 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43251&timeout=5m44s&timeoutSeconds=344&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.524898591+00:00 stderr F I0120 10:57:11.524786 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=43258&timeout=8m38s&timeoutSeconds=518&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.565571567+00:00 stderr F I0120 10:57:11.565457 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get 
"https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42489&timeout=9m33s&timeoutSeconds=573&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:12.666554103+00:00 stderr F I0120 10:57:12.666418 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=43181&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:12.752326540+00:00 stderr F I0120 10:57:12.752208 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=43202&timeout=6m21s&timeoutSeconds=381&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.255813236+00:00 stderr F I0120 10:57:13.255711 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42888&timeout=5m54s&timeoutSeconds=354&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.333476660+00:00 stderr F I0120 10:57:13.333382 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42605&timeout=6m6s&timeoutSeconds=366&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.348201879+00:00 stderr F I0120 10:57:13.348090 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42489&timeout=8m16s&timeoutSeconds=496&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.521105062+00:00 stderr F I0120 10:57:13.520997 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=43189&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.898800059+00:00 stderr F I0120 10:57:13.898709 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=43021&timeout=9m24s&timeoutSeconds=564&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:14.463422021+00:00 stderr F I0120 10:57:14.463326 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=43030&timeout=7m33s&timeoutSeconds=453&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:14.487740405+00:00 stderr F I0120 10:57:14.487679 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=43258&timeout=8m39s&timeoutSeconds=519&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:14.577736195+00:00 stderr F I0120 10:57:14.577657 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43251&timeout=6m48s&timeoutSeconds=408&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:15.993906755+00:00 stderr F I0120 10:57:15.993782 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=43181&timeout=8m7s&timeoutSeconds=487&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:17.088702637+00:00 stderr F I0120 10:57:17.088568 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=43202&timeout=8m15s&timeoutSeconds=495&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:17.528251921+00:00 stderr F I0120 10:57:17.528152 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42489&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:17.576586429+00:00 stderr F I0120 10:57:17.575900 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42888&timeout=5m14s&timeoutSeconds=314&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:17.999976626+00:00 stderr F I0120 10:57:17.999887 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=43030&timeout=5m49s&timeoutSeconds=349&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:18.332532811+00:00 stderr F I0120 10:57:18.332418 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43251&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 38.102.83.220:6443: 
connect: connection refused - backing off 2026-01-20T10:57:18.363962892+00:00 stderr F I0120 10:57:18.363853 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=43021&timeout=8m56s&timeoutSeconds=536&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:18.903502791+00:00 stderr F I0120 10:57:18.903415 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=43258&timeout=6m1s&timeoutSeconds=361&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.503258401+00:00 stderr F I0120 10:57:19.503127 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=43189&timeout=8m58s&timeoutSeconds=538&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.659301248+00:00 stderr F I0120 10:57:19.659226 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42605&timeout=8m48s&timeoutSeconds=528&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:26.000852162+00:00 stderr F I0120 10:57:26.000773 1 
reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 43202 (43268) 2026-01-20T10:57:26.122620933+00:00 stderr F I0120 10:57:26.122548 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 43258 (43263) 2026-01-20T10:57:26.545796634+00:00 stderr F I0120 10:57:26.545721 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 43181 (43291) 2026-01-20T10:57:28.023398249+00:00 stderr F I0120 10:57:28.023354 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 43030 (43268) 2026-01-20T10:57:29.077555787+00:00 stderr F I0120 10:57:29.077487 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master 2026-01-20T10:57:29.515054917+00:00 stderr F I0120 10:57:29.514997 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 43021 (43291) 2026-01-20T10:57:29.736970515+00:00 stderr F I0120 10:57:29.736929 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 43251 (43263) 2026-01-20T10:57:29.850112498+00:00 stderr F I0120 10:57:29.850038 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 42888 (43263) 
2026-01-20T10:57:30.246282634+00:00 stderr F I0120 10:57:30.246193 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42489&timeout=7m12s&timeoutSeconds=432&watch=true 2026-01-20T10:57:30.248510024+00:00 stderr F I0120 10:57:30.248465 1 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 42489 (43291) 2026-01-20T10:57:31.297733990+00:00 stderr F I0120 10:57:31.297694 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 42605 (43291) 2026-01-20T10:57:32.273281028+00:00 stderr F I0120 10:57:32.273198 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 43189 (43263) 2026-01-20T10:57:39.430111974+00:00 stderr F I0120 10:57:39.429941 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:39.436333268+00:00 stderr F I0120 10:57:39.436213 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:40.368840498+00:00 stderr F I0120 10:57:40.368766 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:40.372299241+00:00 stderr F I0120 10:57:40.372263 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:44.185452051+00:00 stderr F I0120 
10:57:44.185369 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:44.188003478+00:00 stderr F I0120 10:57:44.187961 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:44.729277712+00:00 stderr F I0120 10:57:44.729224 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:44.731218124+00:00 stderr F I0120 10:57:44.731163 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:44.852003868+00:00 stderr F I0120 10:57:44.851903 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:57:44.853926148+00:00 stderr F I0120 10:57:44.853885 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:57:45.690677507+00:00 stderr F I0120 10:57:45.690603 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:45.692685030+00:00 stderr F I0120 10:57:45.692639 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:46.996110219+00:00 stderr F I0120 10:57:46.995996 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:47.016015765+00:00 stderr F I0120 10:57:47.015915 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:47.632246662+00:00 stderr 
F I0120 10:57:47.632162 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:47.635153439+00:00 stderr F I0120 10:57:47.635102 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:51.383164076+00:00 stderr F I0120 10:57:51.383100 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:51.389348879+00:00 stderr F I0120 10:57:51.389322 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:52.640834146+00:00 stderr F I0120 10:57:52.640740 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:52.642167732+00:00 stderr F I0120 10:57:52.642137 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 ././@LongLink0000644000000000000000000000030400000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015133657741033103 5ustar 
zuulzuul././@LongLink0000644000000000000000000000031100000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000000244415133657716033113 0ustar zuulzuul2025-08-13T19:50:44.187310475+00:00 stdout F 2025-08-13T19:50:44+00:00 INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy 2025-08-13T19:50:45.643448822+00:00 stderr F W0813 19:50:45.641749 1 deprecated.go:66] 2025-08-13T19:50:45.643448822+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:50:45.643448822+00:00 stderr F 2025-08-13T19:50:45.643448822+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:50:45.643448822+00:00 stderr F 2025-08-13T19:50:45.643448822+00:00 stderr F =============================================== 2025-08-13T19:50:45.643448822+00:00 stderr F 2025-08-13T19:50:45.669538817+00:00 stderr F I0813 19:50:45.668563 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:50:45.669538817+00:00 stderr F I0813 19:50:45.668714 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:50:45.683583669+00:00 stderr F I0813 19:50:45.683419 1 kube-rbac-proxy.go:395] Starting TCP socket on :9108 2025-08-13T19:50:45.688697335+00:00 stderr F I0813 19:50:45.688343 1 kube-rbac-proxy.go:402] Listening securely on :9108 2025-08-13T20:42:46.651250588+00:00 stderr F I0813 20:42:46.650986 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000031100000000000011576 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000000223715133657716033113 0ustar zuulzuul2026-01-20T10:47:24.615533391+00:00 stdout F 2026-01-20T10:47:24+00:00 INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy 2026-01-20T10:47:24.668224356+00:00 stderr F W0120 10:47:24.667725 1 deprecated.go:66] 2026-01-20T10:47:24.668224356+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:47:24.668224356+00:00 stderr F 2026-01-20T10:47:24.668224356+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2026-01-20T10:47:24.668224356+00:00 stderr F 2026-01-20T10:47:24.668224356+00:00 stderr F =============================================== 2026-01-20T10:47:24.668224356+00:00 stderr F 2026-01-20T10:47:24.668359409+00:00 stderr F I0120 10:47:24.668341 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:47:24.668404781+00:00 stderr F I0120 10:47:24.668390 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:47:24.668847452+00:00 stderr F I0120 10:47:24.668820 1 kube-rbac-proxy.go:395] Starting TCP socket on :9108 2026-01-20T10:47:24.669437939+00:00 stderr F I0120 10:47:24.669219 1 kube-rbac-proxy.go:402] Listening securely on :9108 ././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015133657715033071 5ustar zuulzuul././@LongLink0000644000000000000000000000026300000000000011604 
Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015133657734033072 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000000366515133657715033105 0ustar zuulzuul2025-08-13T20:00:19.608298628+00:00 stderr F I0813 20:00:19.605121 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc0008f6be0 max-eligible-revision:0xc0008f6960 protected-revisions:0xc0008f6a00 resource-dir:0xc0008f6aa0 static-pod-name:0xc0008f6b40 v:0xc0008f72c0] [0xc0008f72c0 0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6be0 0xc0008f6b40] [] map[cert-dir:0xc0008f6be0 help:0xc0008f7680 log-flush-frequency:0xc0008f7220 max-eligible-revision:0xc0008f6960 protected-revisions:0xc0008f6a00 resource-dir:0xc0008f6aa0 static-pod-name:0xc0008f6b40 v:0xc0008f72c0 vmodule:0xc0008f7360] [0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6b40 0xc0008f6be0 0xc0008f7220 0xc0008f72c0 0xc0008f7360 0xc0008f7680] [0xc0008f6be0 0xc0008f7680 0xc0008f7220 0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6b40 0xc0008f72c0 0xc0008f7360] map[104:0xc0008f7680 118:0xc0008f72c0] [] -1 0 0xc0008adc80 true 0x73b100 []} 2025-08-13T20:00:19.608298628+00:00 stderr F I0813 20:00:19.606611 1 cmd.go:41] (*prune.PruneOptions)(0xc0008e31d0)({ 2025-08-13T20:00:19.608298628+00:00 stderr F MaxEligibleRevision: (int) 9, 2025-08-13T20:00:19.608298628+00:00 stderr F ProtectedRevisions: ([]int) (len=6 cap=6) { 
2025-08-13T20:00:19.608298628+00:00 stderr F (int) 4, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 5, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 6, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 7, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 8, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 9 2025-08-13T20:00:19.608298628+00:00 stderr F }, 2025-08-13T20:00:19.608298628+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:00:19.608298628+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T20:00:19.608298628+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T20:00:19.608298628+00:00 stderr F }) ././@LongLink0000644000000000000000000000027700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015133657715033014 5ustar zuulzuul././@LongLink0000644000000000000000000000031700000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015133657735033016 5ustar zuulzuul././@LongLink0000644000000000000000000000032400000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000227415133657715033023 0ustar zuulzuul2026-01-20T10:49:36.780222092+00:00 stderr F W0120 10:49:36.780084 1 deprecated.go:66] 2026-01-20T10:49:36.780222092+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:49:36.780222092+00:00 stderr F 2026-01-20T10:49:36.780222092+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2026-01-20T10:49:36.780222092+00:00 stderr F 2026-01-20T10:49:36.780222092+00:00 stderr F =============================================== 2026-01-20T10:49:36.780222092+00:00 stderr F 2026-01-20T10:49:36.780222092+00:00 stderr F I0120 10:49:36.780189 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2026-01-20T10:49:36.782139159+00:00 stderr F I0120 10:49:36.780796 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:49:36.782139159+00:00 stderr F I0120 10:49:36.780835 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:49:36.782139159+00:00 stderr F I0120 10:49:36.781990 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2026-01-20T10:49:36.786111321+00:00 stderr F I0120 10:49:36.782541 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 ././@LongLink0000644000000000000000000000032400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000250115133657715033014 0ustar 
zuulzuul2025-08-13T19:59:12.246877885+00:00 stderr F W0813 19:59:12.142202 1 deprecated.go:66] 2025-08-13T19:59:12.246877885+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F =============================================== 2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F I0813 19:59:12.224140 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:59:12.550927052+00:00 stderr F I0813 19:59:12.536614 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:12.551285632+00:00 stderr F I0813 19:59:12.551190 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:12.558915680+00:00 stderr F I0813 19:59:12.558720 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-08-13T19:59:12.724634104+00:00 stderr F I0813 19:59:12.644589 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 2025-08-13T20:42:43.553128358+00:00 stderr F I0813 20:42:43.552988 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015133657735033016 5ustar zuulzuul././@LongLink0000644000000000000000000000033400000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000014016615133657715033026 0ustar zuulzuul2026-01-20T10:49:35.329928956+00:00 stderr F I0120 10:49:35.328962 1 start.go:52] Version: 4.16.0 (Raw: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty, Hash: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2026-01-20T10:49:35.330247666+00:00 stderr F I0120 10:49:35.330199 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:35.332398012+00:00 stderr F I0120 10:49:35.330383 1 metrics.go:100] Registering Prometheus metrics 2026-01-20T10:49:35.332398012+00:00 stderr F I0120 10:49:35.330444 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2026-01-20T10:49:35.361130107+00:00 stderr F I0120 10:49:35.356860 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config... 
2026-01-20T10:54:17.717104382+00:00 stderr F I0120 10:54:17.716348 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config 2026-01-20T10:54:17.731617799+00:00 stderr F I0120 10:54:17.731530 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:54:17.767267319+00:00 stderr F I0120 10:54:17.766926 1 start.go:127] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 
2026-01-20T10:54:17.767375342+00:00 stderr F I0120 10:54:17.767300 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", 
"ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:54:17.993340496+00:00 stderr F I0120 10:54:17.993205 1 operator.go:396] Change observed to kube-apiserver-server-ca 2026-01-20T10:54:18.002774687+00:00 stderr F I0120 10:54:18.002667 1 operator.go:376] Starting MachineConfigOperator 2026-01-20T10:54:51.592154401+00:00 stderr F E0120 10:54:51.592010 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:51.614732373+00:00 stderr F I0120 10:54:51.614619 1 event.go:364] Event(v1.ObjectReference{Kind:"", Namespace:"openshift-machine-config-operator", Name:"machine-config", UID:"7f2912cb-6bc0-4410-a0a8-5ee7300fa84b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OperatorDegraded: RequiredPoolsFailed' Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused 2026-01-20T10:54:58.912202482+00:00 stderr F E0120 10:54:58.912031 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:54:59.937402450+00:00 stderr F E0120 10:54:59.935963 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:00.928246662+00:00 stderr F E0120 10:55:00.928162 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:01.927752727+00:00 stderr F E0120 10:55:01.927664 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:02.932534732+00:00 stderr F E0120 10:55:02.931049 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:03.925953302+00:00 stderr F E0120 10:55:03.925614 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:04.925757302+00:00 stderr F E0120 10:55:04.925566 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:05.930541196+00:00 stderr F E0120 10:55:05.928881 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:06.925089688+00:00 stderr F E0120 10:55:06.924666 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:07.931031034+00:00 stderr F E0120 10:55:07.930373 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:08.930498216+00:00 stderr F E0120 10:55:08.930436 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:09.927090192+00:00 stderr F E0120 10:55:09.926970 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:10.924389867+00:00 stderr F E0120 10:55:10.924301 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:11.926133639+00:00 stderr F E0120 10:55:11.926016 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:12.927533483+00:00 stderr F E0120 10:55:12.927436 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:13.933433258+00:00 stderr F E0120 10:55:13.932722 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:14.933151538+00:00 stderr F E0120 10:55:14.931443 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:15.930287939+00:00 stderr F E0120 10:55:15.929215 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:16.921694187+00:00 stderr F E0120 10:55:16.921604 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:17.931572177+00:00 stderr F E0120 10:55:17.930641 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:18.927149006+00:00 stderr F E0120 10:55:18.927038 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:19.926626729+00:00 stderr F E0120 10:55:19.926556 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:20.927497648+00:00 stderr F E0120 10:55:20.927402 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:21.925694497+00:00 stderr F E0120 10:55:21.923859 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:22.926000863+00:00 stderr F E0120 10:55:22.925937 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:23.928013294+00:00 stderr F E0120 10:55:23.927942 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:24.923751407+00:00 stderr F E0120 10:55:24.923353 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:25.921171525+00:00 stderr F E0120 10:55:25.921099 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:26.928814776+00:00 stderr F E0120 10:55:26.928705 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:27.931445104+00:00 stderr F E0120 10:55:27.930345 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:28.924039813+00:00 stderr F E0120 10:55:28.923381 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:29.925283143+00:00 stderr F E0120 10:55:29.922620 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:30.957537141+00:00 stderr F E0120 10:55:30.957418 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:31.931414611+00:00 stderr F E0120 10:55:31.928948 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:32.933359581+00:00 stderr F E0120 10:55:32.932757 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:33.934501089+00:00 stderr F E0120 10:55:33.933952 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:34.933292464+00:00 stderr F E0120 10:55:34.932653 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:35.935277074+00:00 stderr F E0120 10:55:35.935200 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:36.930481243+00:00 stderr F E0120 10:55:36.927354 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:37.925227590+00:00 stderr F E0120 10:55:37.924149 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:38.928506834+00:00 stderr F E0120 10:55:38.927825 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:39.925265865+00:00 stderr F E0120 10:55:39.925166 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:40.925851407+00:00 stderr F E0120 10:55:40.925737 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:41.924468288+00:00 stderr F E0120 10:55:41.924222 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:42.928531892+00:00 stderr F E0120 10:55:42.926713 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:43.937247883+00:00 stderr F E0120 10:55:43.937151 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:44.927949011+00:00 stderr F E0120 10:55:44.926985 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:45.930414164+00:00 stderr F E0120 10:55:45.926132 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:46.926880016+00:00 stderr F E0120 10:55:46.926387 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:47.930825999+00:00 stderr F E0120 10:55:47.929406 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:48.926708229+00:00 stderr F E0120 10:55:48.926574 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:49.936021145+00:00 stderr F E0120 10:55:49.935888 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:50.928825576+00:00 stderr F E0120 10:55:50.927827 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:51.932978752+00:00 stderr F E0120 10:55:51.931101 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:52.934188331+00:00 stderr F E0120 10:55:52.932314 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:53.930860367+00:00 stderr F E0120 10:55:53.930309 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:54.927383758+00:00 stderr F E0120 10:55:54.926924 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:55.925703319+00:00 stderr F E0120 10:55:55.924664 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:56.927521442+00:00 stderr F E0120 10:55:56.927352 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:57.930487698+00:00 stderr F E0120 10:55:57.928287 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:58.928763188+00:00 stderr F E0120 10:55:58.928526 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:55:59.933412988+00:00 stderr F E0120 10:55:59.930628 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:00.937906314+00:00 stderr F E0120 10:56:00.935019 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:01.928214479+00:00 stderr F E0120 10:56:01.927495 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:02.933275160+00:00 stderr F E0120 10:56:02.933171 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:03.925745893+00:00 stderr F E0120 10:56:03.925603 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:04.929935022+00:00 stderr F E0120 10:56:04.927757 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:05.925713203+00:00 stderr F E0120 10:56:05.924417 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:06.927237120+00:00 stderr F E0120 10:56:06.927152 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:07.923105553+00:00 stderr F E0120 10:56:07.923014 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:08.925429952+00:00 stderr F E0120 10:56:08.922361 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:09.922845807+00:00 stderr F E0120 10:56:09.922747 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:10.922513784+00:00 stderr F E0120 10:56:10.922412 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:11.932776676+00:00 stderr F E0120 10:56:11.930226 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:12.932083762+00:00 stderr F E0120 10:56:12.931952 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:13.930127815+00:00 stderr F E0120 10:56:13.929974 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:14.928723962+00:00 stderr F E0120 10:56:14.927991 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:15.925092151+00:00 stderr F E0120 10:56:15.924241 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:16.930269396+00:00 stderr F E0120 10:56:16.930193 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:17.925079871+00:00 stderr F E0120 10:56:17.924667 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:18.923318859+00:00 stderr F E0120 10:56:18.923199 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:19.921138206+00:00 stderr F E0120 10:56:19.920876 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:20.926106234+00:00 stderr F E0120 10:56:20.925993 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:21.921445214+00:00 stderr F E0120 10:56:21.921334 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:22.924825190+00:00 stderr F E0120 10:56:22.922317 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:23.923786607+00:00 stderr F E0120 10:56:23.923690 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:24.924699217+00:00 stderr F E0120 10:56:24.922698 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:25.925097312+00:00 stderr F E0120 10:56:25.924993 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:26.926622280+00:00 stderr F E0120 10:56:26.926549 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:27.946971622+00:00 stderr F E0120 10:56:27.944004 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:28.927157164+00:00 stderr F E0120 10:56:28.926817 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:29.926230344+00:00 stderr F E0120 10:56:29.925649 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:30.926833936+00:00 stderr F E0120 10:56:30.923784 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:31.930512129+00:00 stderr F E0120 10:56:31.930337 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:32.933507166+00:00 stderr F E0120 10:56:32.932883 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:33.938103775+00:00 stderr F E0120 10:56:33.937540 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:34.932021747+00:00 stderr F E0120 10:56:34.931458 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:35.921259393+00:00 stderr F E0120 10:56:35.921194 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:36.932579443+00:00 stderr F E0120 10:56:36.932212 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:37.931579281+00:00 stderr F E0120 10:56:37.931513 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:38.930929379+00:00 stderr F E0120 10:56:38.930872 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:39.930664688+00:00 stderr F E0120 10:56:39.930587 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:40.924682032+00:00 stderr F E0120 10:56:40.924631 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:41.923650139+00:00 stderr F E0120 10:56:41.923592 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:42.939738348+00:00 stderr F E0120 10:56:42.937926 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:43.928199053+00:00 stderr F E0120 10:56:43.928128 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:44.923081131+00:00 stderr F E0120 10:56:44.922442 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:45.925945213+00:00 stderr F E0120 10:56:45.925349 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. 
Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:46.930300237+00:00 stderr F E0120 10:56:46.930219 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:47.922941483+00:00 stderr F E0120 10:56:47.922889 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:48.923941876+00:00 stderr F E0120 10:56:48.923041 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:49.954200845+00:00 stderr F E0120 10:56:49.953011 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:50.929289681+00:00 stderr F E0120 10:56:50.926256 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:51.927172299+00:00 stderr F E0120 10:56:51.926116 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:52.924904344+00:00 stderr F E0120 10:56:52.924822 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:53.925529362+00:00 stderr F E0120 10:56:53.924485 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:54.939117886+00:00 stderr F E0120 10:56:54.938972 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:55.925451631+00:00 stderr F E0120 10:56:55.925190 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:56.927312485+00:00 stderr F E0120 10:56:56.924832 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:57.928881563+00:00 stderr F E0120 10:56:57.928799 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:58.923885656+00:00 stderr F E0120 10:56:58.923276 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:56:59.925858853+00:00 stderr F E0120 10:56:59.923545 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:00.923474015+00:00 stderr F E0120 10:57:00.923396 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:01.929405538+00:00 stderr F E0120 10:57:01.926245 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:02.926035173+00:00 stderr F E0120 10:57:02.925976 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:03.926463659+00:00 stderr F E0120 10:57:03.926254 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:04.926375292+00:00 stderr F E0120 10:57:04.922972 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:05.926697916+00:00 stderr F E0120 10:57:05.926328 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:06.923440314+00:00 stderr F E0120 10:57:06.923356 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:17.744820368+00:00 stderr F E0120 10:57:17.743958 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:48.924197028+00:00 stderr F E0120 10:57:48.923647 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:49.932127383+00:00 stderr F E0120 10:57:49.931551 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:50.925127244+00:00 stderr F E0120 10:57:50.925002 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:51.924151853+00:00 stderr F E0120 10:57:51.923444 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:52.923102191+00:00 stderr F E0120 10:57:52.923005 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:53.925408767+00:00 stderr F E0120 10:57:53.925314 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:54.924052126+00:00 stderr F E0120 10:57:54.923971 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:55.925481360+00:00 stderr F E0120 10:57:55.925409 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:56.934767571+00:00 stderr F E0120 10:57:56.933956 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:57.934239693+00:00 stderr F E0120 10:57:57.932881 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:58.936815092+00:00 stderr F E0120 10:57:58.936737 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:57:59.932302329+00:00 stderr F E0120 10:57:59.932197 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:00.922771376+00:00 stderr F E0120 10:58:00.922703 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. 
Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:01.927211190+00:00 stderr F E0120 10:58:01.927097 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:02.931379526+00:00 stderr F E0120 10:58:02.931259 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:03.928188725+00:00 stderr F E0120 10:58:03.926035 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:04.937365528+00:00 stderr F E0120 10:58:04.936738 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:05.929451618+00:00 stderr F E0120 10:58:05.928003 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:06.923297012+00:00 stderr F E0120 10:58:06.923203 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:07.927848318+00:00 stderr F E0120 10:58:07.927350 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:08.924451372+00:00 stderr F E0120 10:58:08.924387 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:09.922441981+00:00 stderr F E0120 10:58:09.922379 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:10.923676023+00:00 stderr F E0120 10:58:10.923163 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:11.924449553+00:00 stderr F E0120 10:58:11.924391 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:12.928725782+00:00 stderr F E0120 10:58:12.928674 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:13.926838224+00:00 stderr F E0120 10:58:13.926772 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:14.924842744+00:00 stderr F E0120 10:58:14.924775 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. 
Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:15.925688406+00:00 stderr F E0120 10:58:15.925084 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:16.934717116+00:00 stderr F E0120 10:58:16.934632 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:17.928137809+00:00 stderr F E0120 10:58:17.928051 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:18.929256467+00:00 stderr F E0120 10:58:18.928660 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2026-01-20T10:58:19.924975929+00:00 stderr F E0120 10:58:19.924903 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" ././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000003771515133657715033033 0ustar zuulzuul2025-08-13T19:59:03.793003276+00:00 stderr F I0813 19:59:03.792203 1 start.go:52] Version: 4.16.0 (Raw: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty, Hash: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:03.810241137+00:00 stderr F I0813 19:59:03.806593 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:03.811597626+00:00 stderr F I0813 19:59:03.811551 1 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:59:03.811901004+00:00 stderr F I0813 19:59:03.811879 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:59:05.606955533+00:00 stderr F I0813 19:59:05.602330 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config... 
2025-08-13T19:59:06.352317300+00:00 stderr F I0813 19:59:06.351645 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config 2025-08-13T19:59:06.624953102+00:00 stderr F I0813 19:59:06.621000 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:07.641337833+00:00 stderr F I0813 19:59:07.604244 1 start.go:127] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 
2025-08-13T19:59:07.641337833+00:00 stderr F I0813 19:59:07.605338 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", 
"ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:17.879134960+00:00 stderr F I0813 19:59:17.868467 1 trace.go:236] Trace[2088585093]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:07.208) (total time: 10647ms): 2025-08-13T19:59:17.879134960+00:00 stderr F Trace[2088585093]: ---"Objects listed" error: 10625ms (19:59:17.833) 2025-08-13T19:59:17.879134960+00:00 stderr F Trace[2088585093]: [10.647548997s] [10.647548997s] END 2025-08-13T19:59:17.879134960+00:00 stderr F I0813 19:59:17.878865 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T19:59:18.050386191+00:00 stderr F I0813 19:59:18.044491 1 operator.go:376] Starting MachineConfigOperator 2025-08-13T20:02:26.192144382+00:00 stderr F I0813 20:02:26.175662 1 sync.go:419] Detecting changes in merged-trusted-image-registry-ca, creating patch 2025-08-13T20:02:26.192144382+00:00 stderr F I0813 20:02:26.191605 1 sync.go:424] JSONPATCH: 2025-08-13T20:02:26.192144382+00:00 stderr F {"data":{"image-registry.openshift-image-registry.svc..5000":"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n","image-registry.openshift-image-registry.svc.cluster.local..5000":"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"namespace":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:03:15.119052825+00:00 stderr F E0813 20:03:15.118870 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.125557835+00:00 stderr F E0813 20:04:15.124918 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.128905567+00:00 stderr F E0813 20:05:15.128095 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.763145649+00:00 stderr F E0813 20:05:15.763084 1 operator.go:448] error syncing progressing status: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:24.640926653+00:00 stderr F I0813 20:06:24.640737 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T20:09:39.828705851+00:00 stderr F I0813 20:09:39.828618 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T20:42:28.362519840+00:00 stderr F E0813 20:42:28.360448 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:29.356362823+00:00 stderr F E0813 20:42:29.352075 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:30.358835494+00:00 stderr F E0813 20:42:30.355618 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:31.358397922+00:00 stderr F E0813 20:42:31.356276 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab 
rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.362554222+00:00 stderr F E0813 20:42:32.361311 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:33.358700581+00:00 stderr F E0813 20:42:33.358518 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:34.357667441+00:00 stderr F E0813 20:42:34.355686 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:35.354656805+00:00 stderr F E0813 20:42:35.353969 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:44.634028771+00:00 stderr F I0813 20:42:44.633156 1 helpers.go:93] Shutting down due to: terminated 2025-08-13T20:42:44.634028771+00:00 stderr F I0813 20:42:44.633964 1 helpers.go:96] Context cancelled 2025-08-13T20:42:44.637659186+00:00 stderr F I0813 20:42:44.637441 1 operator.go:386] Shutting down MachineConfigOperator 2025-08-13T20:42:44.639461528+00:00 stderr F E0813 20:42:44.638840 1 leaderelection.go:308] Failed to release lock: Put 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.641299241+00:00 stderr F I0813 20:42:44.641137 1 start.go:150] Stopped leading. Terminating. ././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015133657716033071 5ustar zuulzuul././@LongLink0000644000000000000000000000031300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015133657736033073 5ustar zuulzuul././@LongLink0000644000000000000000000000032000000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000152053215133657716033103 0ustar zuulzuul2026-01-20T10:49:34.141970662+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="log level info" 2026-01-20T10:49:34.146180409+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="TLS keys set, using https for metrics" 2026-01-20T10:49:34.146180409+00:00 stderr F W0120 10:49:34.145906 1 client_config.go:618] Neither --kubeconfig nor --master was specified. 
Using the inClusterConfig. This might not work. 2026-01-20T10:49:34.147230712+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="Using in-cluster kube client config" 2026-01-20T10:49:34.174810442+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="Using in-cluster kube client config" 2026-01-20T10:49:34.174810442+00:00 stderr F W0120 10:49:34.174507 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2026-01-20T10:49:34.197096320+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterroles" 2026-01-20T10:49:34.197096320+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterrolebindings" 2026-01-20T10:49:34.197096320+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="skipping irrelevant gvr" gvr="apps/v1, Resource=deployments" 2026-01-20T10:49:34.452570003+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="detected ability to filter informers" canFilter=true 2026-01-20T10:49:34.455022307+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="registering owner reference fixer" gvr="/v1, Resource=services" 2026-01-20T10:49:34.458297037+00:00 stderr F W0120 10:49:34.458235 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2026-01-20T10:49:34.463275339+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2026-01-20T10:49:34.463275339+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="operator ready" 2026-01-20T10:49:34.463275339+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="starting informers..." 
2026-01-20T10:49:34.463275339+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="informers started" 2026-01-20T10:49:34.463275339+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="waiting for caches to sync..." 2026-01-20T10:49:34.563925844+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="starting workers..." 2026-01-20T10:49:34.569680389+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2026-01-20T10:49:34.569680389+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="operator ready" 2026-01-20T10:49:34.569680389+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="starting informers..." 2026-01-20T10:49:34.569680389+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="informers started" 2026-01-20T10:49:34.569680389+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="waiting for caches to sync..." 2026-01-20T10:49:34.603091787+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.603091787+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.607882473+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.607882473+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.607882473+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=cYKX9 namespace=default 2026-01-20T10:49:34.607882473+00:00 stderr F time="2026-01-20T10:49:34Z" level=info 
msg="checking if subscriptions need update" id=cYKX9 namespace=default 2026-01-20T10:49:34.607882473+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=Fe/z0 namespace=hostpath-provisioner 2026-01-20T10:49:34.607882473+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=Fe/z0 namespace=hostpath-provisioner 2026-01-20T10:49:34.608309466+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=Fe/z0 namespace=hostpath-provisioner 2026-01-20T10:49:34.608309466+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=1F0E4 namespace=kube-node-lease 2026-01-20T10:49:34.608309466+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=1F0E4 namespace=kube-node-lease 2026-01-20T10:49:34.608449320+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="No subscriptions were found in namespace default" id=cYKX9 namespace=default 2026-01-20T10:49:34.608449320+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=A3xXm namespace=kube-public 2026-01-20T10:49:34.608449320+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=A3xXm namespace=kube-public 2026-01-20T10:49:34.609184893+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.609408090+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.610152862+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.610152862+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.610152862+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=1X/SI 2026-01-20T10:49:34.610173763+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.610173763+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.610720649+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.610720649+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.610720649+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checked registry server health" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ikAwl 2026-01-20T10:49:34.610743810+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.610743810+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.613257127+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="No subscriptions were found in namespace kube-public" id=A3xXm namespace=kube-public 2026-01-20T10:49:34.613282717+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=jXTOL namespace=kube-system 2026-01-20T10:49:34.613282717+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=jXTOL namespace=kube-system 2026-01-20T10:49:34.613454293+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=1F0E4 namespace=kube-node-lease 2026-01-20T10:49:34.613477713+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=r+G4p namespace=openshift 2026-01-20T10:49:34.613477713+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=r+G4p namespace=openshift 2026-01-20T10:49:34.618199537+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="No subscriptions were found in namespace openshift" id=r+G4p namespace=openshift 2026-01-20T10:49:34.618199537+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=I8jyH namespace=openshift-apiserver 2026-01-20T10:49:34.618199537+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=I8jyH namespace=openshift-apiserver 2026-01-20T10:49:34.618199537+00:00 stderr F 
time="2026-01-20T10:49:34Z" level=info msg="No subscriptions were found in namespace kube-system" id=jXTOL namespace=kube-system 2026-01-20T10:49:34.618199537+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=aB+/q namespace=openshift-apiserver-operator 2026-01-20T10:49:34.618199537+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=aB+/q namespace=openshift-apiserver-operator 2026-01-20T10:49:34.667389185+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=I8jyH namespace=openshift-apiserver 2026-01-20T10:49:34.667424176+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=hBqYP namespace=openshift-authentication 2026-01-20T10:49:34.667424176+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=hBqYP namespace=openshift-authentication 2026-01-20T10:49:34.667499749+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="starting workers..." 
2026-01-20T10:49:34.667669504+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.667843590+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.667874981+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.667951353+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ikAwl 2026-01-20T10:49:34.671844001+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING" 2026-01-20T10:49:34.694689987+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE" 2026-01-20T10:49:34.866937824+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.867189522+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.867189522+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.867189522+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="catalog update required at 2026-01-20 10:49:34.86714119 +0000 UTC m=+0.976592768" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:34.867819081+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=aB+/q namespace=openshift-apiserver-operator 2026-01-20T10:49:34.867819081+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="resolving sources" id=kU4Bl namespace=openshift-authentication-operator 2026-01-20T10:49:34.867819081+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="checking if subscriptions need update" id=kU4Bl namespace=openshift-authentication-operator 2026-01-20T10:49:35.068123211+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=hBqYP namespace=openshift-authentication 2026-01-20T10:49:35.068161493+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="resolving sources" id=KYITP namespace=openshift-cloud-network-config-controller 2026-01-20T10:49:35.068161493+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="checking if subscriptions need update" id=KYITP namespace=openshift-cloud-network-config-controller 2026-01-20T10:49:35.075304450+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="requeueing registry 
server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1X/SI 2026-01-20T10:49:35.075640201+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2026-01-20T10:49:35.091567176+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE" 2026-01-20T10:49:35.275275681+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GWzMS 2026-01-20T10:49:35.275275681+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GWzMS 2026-01-20T10:49:35.277969653+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GWzMS 2026-01-20T10:49:35.277969653+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GWzMS 2026-01-20T10:49:35.277969653+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GWzMS 2026-01-20T10:49:35.277969653+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="checked registry server 
health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=GWzMS
2026-01-20T10:49:35.277969653+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GWzMS
2026-01-20T10:49:35.277969653+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GWzMS
2026-01-20T10:49:35.479933005+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=kU4Bl namespace=openshift-authentication-operator
2026-01-20T10:49:35.479933005+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="resolving sources" id=v00aw namespace=openshift-cloud-platform-infra
2026-01-20T10:49:35.479933005+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="checking if subscriptions need update" id=v00aw namespace=openshift-cloud-platform-infra
2026-01-20T10:49:35.670045656+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=KYITP namespace=openshift-cloud-network-config-controller
2026-01-20T10:49:35.670132238+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="resolving sources" id=ic3H2 namespace=openshift-cluster-machine-approver
2026-01-20T10:49:35.670132238+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="checking if subscriptions need update" id=ic3H2 namespace=openshift-cluster-machine-approver
2026-01-20T10:49:35.670323604+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GWzMS
2026-01-20T10:49:35.670723367+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GWzMS
2026-01-20T10:49:35.670723367+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GWzMS
2026-01-20T10:49:35.670723367+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="catalog update required at 2026-01-20 10:49:35.670544351 +0000 UTC m=+1.779995919" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GWzMS
2026-01-20T10:49:35.877855115+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GWzMS
2026-01-20T10:49:35.877900207+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING"
2026-01-20T10:49:35.882025493+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:35.882089235+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:35.909928333+00:00 stderr F time="2026-01-20T10:49:35Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE"
2026-01-20T10:49:36.069797392+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.069878485+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.069878485+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.069889755+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uEmE4
2026-01-20T10:49:36.069898745+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.069906065+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.069999758+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=v00aw namespace=openshift-cloud-platform-infra
2026-01-20T10:49:36.070031269+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="resolving sources" id=npybw namespace=openshift-cluster-samples-operator
2026-01-20T10:49:36.070031269+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="checking if subscriptions need update" id=npybw namespace=openshift-cluster-samples-operator
2026-01-20T10:49:36.269671570+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=ic3H2 namespace=openshift-cluster-machine-approver
2026-01-20T10:49:36.269671570+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="resolving sources" id=HzIEt namespace=openshift-cluster-storage-operator
2026-01-20T10:49:36.269671570+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="checking if subscriptions need update" id=HzIEt namespace=openshift-cluster-storage-operator
2026-01-20T10:49:36.484133633+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.484133633+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.484133633+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.484133633+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="catalog update required at 2026-01-20 10:49:36.481838673 +0000 UTC m=+2.591290241" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.489413264+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:36.489413264+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:36.673586274+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=npybw namespace=openshift-cluster-samples-operator
2026-01-20T10:49:36.673652096+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="resolving sources" id=on5CU namespace=openshift-cluster-version
2026-01-20T10:49:36.673652096+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="checking if subscriptions need update" id=on5CU namespace=openshift-cluster-version
2026-01-20T10:49:36.677319627+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uEmE4
2026-01-20T10:49:36.679010968+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING"
2026-01-20T10:49:36.753172067+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE"
2026-01-20T10:49:36.873280456+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=HzIEt namespace=openshift-cluster-storage-operator
2026-01-20T10:49:36.873280456+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="resolving sources" id=6hs1w namespace=openshift-config
2026-01-20T10:49:36.873280456+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="checking if subscriptions need update" id=6hs1w namespace=openshift-config
2026-01-20T10:49:36.879010131+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:36.879010131+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:36.879010131+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:36.879010131+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=bcfRx
2026-01-20T10:49:36.879010131+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:36.879010131+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:37.068637817+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=on5CU namespace=openshift-cluster-version
2026-01-20T10:49:37.068637817+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="resolving sources" id=DomYD namespace=openshift-config-managed
2026-01-20T10:49:37.068637817+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="checking if subscriptions need update" id=DomYD namespace=openshift-config-managed
2026-01-20T10:49:37.291593387+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:37.291729901+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:37.291740651+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:37.291812414+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bcfRx
2026-01-20T10:49:37.292257457+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.292257457+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RrV5K
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="No subscriptions were found in namespace openshift-config" id=6hs1w namespace=openshift-config
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="resolving sources" id=gJAof namespace=openshift-config-operator
2026-01-20T10:49:37.474149607+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="checking if subscriptions need update" id=gJAof namespace=openshift-config-operator
2026-01-20T10:49:37.679983067+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=DomYD namespace=openshift-config-managed
2026-01-20T10:49:37.679983067+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="resolving sources" id=NJ7KC namespace=openshift-console
2026-01-20T10:49:37.679983067+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="checking if subscriptions need update" id=NJ7KC namespace=openshift-console
2026-01-20T10:49:37.875536244+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:37.875536244+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:37.875536244+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.875536244+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.875536244+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:37.875536244+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RrV5K
2026-01-20T10:49:38.080643101+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.080643101+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.080643101+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.080643101+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=T8y58
2026-01-20T10:49:38.080643101+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.080643101+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.085401955+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=gJAof namespace=openshift-config-operator
2026-01-20T10:49:38.085401955+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="resolving sources" id=uHFVz namespace=openshift-console-operator
2026-01-20T10:49:38.085401955+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="checking if subscriptions need update" id=uHFVz namespace=openshift-console-operator
2026-01-20T10:49:38.284595304+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="No subscriptions were found in namespace openshift-console" id=NJ7KC namespace=openshift-console
2026-01-20T10:49:38.284595304+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="resolving sources" id=rEanX namespace=openshift-console-user-settings
2026-01-20T10:49:38.284595304+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="checking if subscriptions need update" id=rEanX namespace=openshift-console-user-settings
2026-01-20T10:49:38.474036173+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.474036173+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.474036173+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.474036173+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T8y58
2026-01-20T10:49:38.474036173+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:38.474036173+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:38.669801536+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=uHFVz namespace=openshift-console-operator
2026-01-20T10:49:38.669801536+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="resolving sources" id=ITjL4 namespace=openshift-controller-manager
2026-01-20T10:49:38.669801536+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="checking if subscriptions need update" id=ITjL4 namespace=openshift-controller-manager
2026-01-20T10:49:38.674701516+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:38.674833470+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:38.674864501+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:38.674895162+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=YwuW3
2026-01-20T10:49:38.674918872+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:38.674951433+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:38.867158258+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=rEanX namespace=openshift-console-user-settings
2026-01-20T10:49:38.867158258+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="resolving sources" id=E8jmJ namespace=openshift-controller-manager-operator
2026-01-20T10:49:38.867158258+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="checking if subscriptions need update" id=E8jmJ namespace=openshift-controller-manager-operator
2026-01-20T10:49:39.068212992+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:39.068300184+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:39.068300184+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:39.068372276+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YwuW3
2026-01-20T10:49:39.073525394+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.073525394+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.266586694+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.266720108+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.266720108+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.266720108+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hxKbk
2026-01-20T10:49:39.266720108+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.266720108+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.267841483+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=ITjL4 namespace=openshift-controller-manager
2026-01-20T10:49:39.267841483+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="resolving sources" id=C+/HR namespace=openshift-dns
2026-01-20T10:49:39.267841483+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="checking if subscriptions need update" id=C+/HR namespace=openshift-dns
2026-01-20T10:49:39.471101023+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=E8jmJ namespace=openshift-controller-manager-operator
2026-01-20T10:49:39.471101023+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="resolving sources" id=8ffCy namespace=openshift-dns-operator
2026-01-20T10:49:39.471101023+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="checking if subscriptions need update" id=8ffCy namespace=openshift-dns-operator
2026-01-20T10:49:39.670044653+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.670524397+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.670563519+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.670646461+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hxKbk
2026-01-20T10:49:39.670728574+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:39.670773015+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:39.671482877+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:39.671533409+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=C+/HR namespace=openshift-dns
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="resolving sources" id=CWumZ namespace=openshift-etcd
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="checking if subscriptions need update" id=CWumZ namespace=openshift-etcd
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=W9rTB
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:39.873191381+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=kgC5p
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=8ffCy namespace=openshift-dns-operator
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="resolving sources" id=y/J2W namespace=openshift-etcd-operator
2026-01-20T10:49:40.071106739+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="checking if subscriptions need update" id=y/J2W namespace=openshift-etcd-operator
2026-01-20T10:49:40.267574133+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=CWumZ namespace=openshift-etcd
2026-01-20T10:49:40.267574133+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="resolving sources" id=bnoc5 namespace=openshift-host-network
2026-01-20T10:49:40.267574133+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="checking if subscriptions need update" id=bnoc5 namespace=openshift-host-network
2026-01-20T10:49:40.466385829+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=y/J2W namespace=openshift-etcd-operator
2026-01-20T10:49:40.466385829+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="resolving sources" id=cQGO9 namespace=openshift-image-registry
2026-01-20T10:49:40.466385829+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="checking if subscriptions need update" id=cQGO9 namespace=openshift-image-registry
2026-01-20T10:49:40.666488134+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:40.666659600+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:40.666690810+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:40.666763063+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=W9rTB
2026-01-20T10:49:40.666834665+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5G5xk
2026-01-20T10:49:40.666869396+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5G5xk
2026-01-20T10:49:40.671525908+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=bnoc5 namespace=openshift-host-network
2026-01-20T10:49:40.671611451+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="resolving sources" id=5vYTH namespace=openshift-infra
2026-01-20T10:49:40.671638891+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="checking if subscriptions need update" id=5vYTH namespace=openshift-infra
2026-01-20T10:49:40.870206800+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=cQGO9 namespace=openshift-image-registry
2026-01-20T10:49:40.870206800+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="resolving sources" id=ftbxY namespace=openshift-ingress
2026-01-20T10:49:40.870206800+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="checking if subscriptions need update" id=ftbxY namespace=openshift-ingress
2026-01-20T10:49:40.870206800+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:40.870257351+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:40.870257351+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:40.870471498+00:00 stderr F time="2026-01-20T10:49:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kgC5p
2026-01-20T10:49:41.066925481+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=5vYTH namespace=openshift-infra
2026-01-20T10:49:41.066925481+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="resolving sources" id=Jir+3 namespace=openshift-ingress-canary
2026-01-20T10:49:41.066925481+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="checking if subscriptions need update" id=Jir+3 namespace=openshift-ingress-canary
2026-01-20T10:49:41.067101147+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5G5xk
2026-01-20T10:49:41.067256071+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5G5xk
2026-01-20T10:49:41.067256071+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5G5xk
2026-01-20T10:49:41.067283912+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=5G5xk
2026-01-20T10:49:41.067283912+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5G5xk
2026-01-20T10:49:41.067283912+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="ensuring
registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5G5xk 2026-01-20T10:49:41.287773218+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=ftbxY namespace=openshift-ingress 2026-01-20T10:49:41.287773218+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="resolving sources" id=51Wc6 namespace=openshift-ingress-operator 2026-01-20T10:49:41.287773218+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="checking if subscriptions need update" id=51Wc6 namespace=openshift-ingress-operator 2026-01-20T10:49:41.466438080+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5G5xk 2026-01-20T10:49:41.466503852+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5G5xk 2026-01-20T10:49:41.466512332+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5G5xk 2026-01-20T10:49:41.466634976+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5G5xk 2026-01-20T10:49:41.467471212+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=Jir+3 namespace=openshift-ingress-canary 
2026-01-20T10:49:41.467684588+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="resolving sources" id=VmF15 namespace=openshift-kni-infra 2026-01-20T10:49:41.467728529+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="checking if subscriptions need update" id=VmF15 namespace=openshift-kni-infra 2026-01-20T10:49:41.677926322+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=51Wc6 namespace=openshift-ingress-operator 2026-01-20T10:49:41.678047586+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="resolving sources" id=1zjMT namespace=openshift-kube-apiserver 2026-01-20T10:49:41.678089357+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="checking if subscriptions need update" id=1zjMT namespace=openshift-kube-apiserver 2026-01-20T10:49:41.866219227+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=VmF15 namespace=openshift-kni-infra 2026-01-20T10:49:41.866219227+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="resolving sources" id=vNLn8 namespace=openshift-kube-apiserver-operator 2026-01-20T10:49:41.866219227+00:00 stderr F time="2026-01-20T10:49:41Z" level=info msg="checking if subscriptions need update" id=vNLn8 namespace=openshift-kube-apiserver-operator 2026-01-20T10:49:42.071115848+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=1zjMT namespace=openshift-kube-apiserver 2026-01-20T10:49:42.071115848+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="resolving sources" id=R4VA4 namespace=openshift-kube-controller-manager 2026-01-20T10:49:42.071115848+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="checking if subscriptions need update" id=R4VA4 namespace=openshift-kube-controller-manager 2026-01-20T10:49:42.265885001+00:00 stderr F time="2026-01-20T10:49:42Z" level=info 
msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=vNLn8 namespace=openshift-kube-apiserver-operator 2026-01-20T10:49:42.265885001+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="resolving sources" id=YpYg6 namespace=openshift-kube-controller-manager-operator 2026-01-20T10:49:42.265885001+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="checking if subscriptions need update" id=YpYg6 namespace=openshift-kube-controller-manager-operator 2026-01-20T10:49:42.470127362+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=R4VA4 namespace=openshift-kube-controller-manager 2026-01-20T10:49:42.470127362+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="resolving sources" id=PqEFN namespace=openshift-kube-scheduler 2026-01-20T10:49:42.470127362+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="checking if subscriptions need update" id=PqEFN namespace=openshift-kube-scheduler 2026-01-20T10:49:42.671403473+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=YpYg6 namespace=openshift-kube-controller-manager-operator 2026-01-20T10:49:42.671403473+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="resolving sources" id=ZiJ0e namespace=openshift-kube-scheduler-operator 2026-01-20T10:49:42.671403473+00:00 stderr F time="2026-01-20T10:49:42Z" level=info msg="checking if subscriptions need update" id=ZiJ0e namespace=openshift-kube-scheduler-operator 2026-01-20T10:49:43.258460864+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=ZiJ0e namespace=openshift-kube-scheduler-operator 2026-01-20T10:49:43.258641619+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=PqEFN 
namespace=openshift-kube-scheduler 2026-01-20T10:49:43.258641619+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="resolving sources" id=XW0/9 namespace=openshift-kube-storage-version-migrator-operator 2026-01-20T10:49:43.258641619+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="checking if subscriptions need update" id=XW0/9 namespace=openshift-kube-storage-version-migrator-operator 2026-01-20T10:49:43.258681131+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="resolving sources" id=RUKEw namespace=openshift-kube-storage-version-migrator 2026-01-20T10:49:43.258681131+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="checking if subscriptions need update" id=RUKEw namespace=openshift-kube-storage-version-migrator 2026-01-20T10:49:43.268504370+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=XW0/9 namespace=openshift-kube-storage-version-migrator-operator 2026-01-20T10:49:43.268621674+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="resolving sources" id=nfiRh namespace=openshift-machine-api 2026-01-20T10:49:43.268651355+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="checking if subscriptions need update" id=nfiRh namespace=openshift-machine-api 2026-01-20T10:49:43.466668266+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=RUKEw namespace=openshift-kube-storage-version-migrator 2026-01-20T10:49:43.466846101+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="resolving sources" id=w7iJD namespace=openshift-machine-config-operator 2026-01-20T10:49:43.466873352+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="checking if subscriptions need update" id=w7iJD namespace=openshift-machine-config-operator 2026-01-20T10:49:43.667667708+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="No 
subscriptions were found in namespace openshift-machine-api" id=nfiRh namespace=openshift-machine-api 2026-01-20T10:49:43.667775131+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="resolving sources" id=XmzfA namespace=openshift-marketplace 2026-01-20T10:49:43.667818703+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="checking if subscriptions need update" id=XmzfA namespace=openshift-marketplace 2026-01-20T10:49:43.871118895+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=w7iJD namespace=openshift-machine-config-operator 2026-01-20T10:49:43.871472896+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="resolving sources" id=wjSlk namespace=openshift-monitoring 2026-01-20T10:49:43.871504547+00:00 stderr F time="2026-01-20T10:49:43Z" level=info msg="checking if subscriptions need update" id=wjSlk namespace=openshift-monitoring 2026-01-20T10:49:44.066507167+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=XmzfA namespace=openshift-marketplace 2026-01-20T10:49:44.066507167+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="resolving sources" id=IYq0K namespace=openshift-multus 2026-01-20T10:49:44.066507167+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="checking if subscriptions need update" id=IYq0K namespace=openshift-multus 2026-01-20T10:49:44.274427110+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=wjSlk namespace=openshift-monitoring 2026-01-20T10:49:44.274427110+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="resolving sources" id=pG5SQ namespace=openshift-network-diagnostics 2026-01-20T10:49:44.274427110+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="checking if subscriptions need update" id=pG5SQ namespace=openshift-network-diagnostics 
2026-01-20T10:49:44.467885403+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=IYq0K namespace=openshift-multus 2026-01-20T10:49:44.467951895+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="resolving sources" id=T4bYO namespace=openshift-network-node-identity 2026-01-20T10:49:44.467951895+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="checking if subscriptions need update" id=T4bYO namespace=openshift-network-node-identity 2026-01-20T10:49:44.673106243+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=pG5SQ namespace=openshift-network-diagnostics 2026-01-20T10:49:44.673106243+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="resolving sources" id=ml4TT namespace=openshift-network-operator 2026-01-20T10:49:44.673106243+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="checking if subscriptions need update" id=ml4TT namespace=openshift-network-operator 2026-01-20T10:49:44.867560726+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=T4bYO namespace=openshift-network-node-identity 2026-01-20T10:49:44.867709101+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="resolving sources" id=13H2K namespace=openshift-node 2026-01-20T10:49:44.867756433+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="checking if subscriptions need update" id=13H2K namespace=openshift-node 2026-01-20T10:49:45.067253349+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=ml4TT namespace=openshift-network-operator 2026-01-20T10:49:45.067253349+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="resolving sources" id=NaiYq namespace=openshift-nutanix-infra 2026-01-20T10:49:45.067253349+00:00 stderr F 
time="2026-01-20T10:49:45Z" level=info msg="checking if subscriptions need update" id=NaiYq namespace=openshift-nutanix-infra 2026-01-20T10:49:45.267478087+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="No subscriptions were found in namespace openshift-node" id=13H2K namespace=openshift-node 2026-01-20T10:49:45.267478087+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="resolving sources" id=+EB0W namespace=openshift-oauth-apiserver 2026-01-20T10:49:45.267478087+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="checking if subscriptions need update" id=+EB0W namespace=openshift-oauth-apiserver 2026-01-20T10:49:45.474209044+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=NaiYq namespace=openshift-nutanix-infra 2026-01-20T10:49:45.474209044+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="resolving sources" id=F0xzX namespace=openshift-openstack-infra 2026-01-20T10:49:45.474209044+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="checking if subscriptions need update" id=F0xzX namespace=openshift-openstack-infra 2026-01-20T10:49:45.667473371+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=+EB0W namespace=openshift-oauth-apiserver 2026-01-20T10:49:45.667518362+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="resolving sources" id=so1r8 namespace=openshift-operator-lifecycle-manager 2026-01-20T10:49:45.667518362+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="checking if subscriptions need update" id=so1r8 namespace=openshift-operator-lifecycle-manager 2026-01-20T10:49:45.866705589+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=F0xzX namespace=openshift-openstack-infra 2026-01-20T10:49:45.866794662+00:00 stderr F time="2026-01-20T10:49:45Z" level=info 
msg="resolving sources" id=HgLt4 namespace=openshift-operators 2026-01-20T10:49:45.866831803+00:00 stderr F time="2026-01-20T10:49:45Z" level=info msg="checking if subscriptions need update" id=HgLt4 namespace=openshift-operators 2026-01-20T10:49:46.066835496+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=so1r8 namespace=openshift-operator-lifecycle-manager 2026-01-20T10:49:46.066926809+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="resolving sources" id=HVf6y namespace=openshift-ovirt-infra 2026-01-20T10:49:46.066953119+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="checking if subscriptions need update" id=HVf6y namespace=openshift-ovirt-infra 2026-01-20T10:49:46.269100376+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=HgLt4 namespace=openshift-operators 2026-01-20T10:49:46.269100376+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="resolving sources" id=xer7Z namespace=openshift-ovn-kubernetes 2026-01-20T10:49:46.269100376+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="checking if subscriptions need update" id=xer7Z namespace=openshift-ovn-kubernetes 2026-01-20T10:49:46.466773737+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=HVf6y namespace=openshift-ovirt-infra 2026-01-20T10:49:46.466861230+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="resolving sources" id=5vHE4 namespace=openshift-route-controller-manager 2026-01-20T10:49:46.466886220+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="checking if subscriptions need update" id=5vHE4 namespace=openshift-route-controller-manager 2026-01-20T10:49:46.666901803+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=xer7Z 
namespace=openshift-ovn-kubernetes 2026-01-20T10:49:46.666901803+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="resolving sources" id=1Og1G namespace=openshift-service-ca 2026-01-20T10:49:46.666901803+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="checking if subscriptions need update" id=1Og1G namespace=openshift-service-ca 2026-01-20T10:49:46.867511143+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=5vHE4 namespace=openshift-route-controller-manager 2026-01-20T10:49:46.867511143+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="resolving sources" id=jaFnb namespace=openshift-service-ca-operator 2026-01-20T10:49:46.867562215+00:00 stderr F time="2026-01-20T10:49:46Z" level=info msg="checking if subscriptions need update" id=jaFnb namespace=openshift-service-ca-operator 2026-01-20T10:49:47.066592827+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=1Og1G namespace=openshift-service-ca 2026-01-20T10:49:47.066633078+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="resolving sources" id=VnrtC namespace=openshift-user-workload-monitoring 2026-01-20T10:49:47.066633078+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="checking if subscriptions need update" id=VnrtC namespace=openshift-user-workload-monitoring 2026-01-20T10:49:47.266700763+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=jaFnb namespace=openshift-service-ca-operator 2026-01-20T10:49:47.266700763+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="resolving sources" id=HNjIj namespace=openshift-vsphere-infra 2026-01-20T10:49:47.266700763+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="checking if subscriptions need update" id=HNjIj namespace=openshift-vsphere-infra 
2026-01-20T10:49:47.466581581+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=VnrtC namespace=openshift-user-workload-monitoring 2026-01-20T10:49:47.466581581+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="resolving sources" id=5j9j+ namespace=openshift-monitoring 2026-01-20T10:49:47.466581581+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="checking if subscriptions need update" id=5j9j+ namespace=openshift-monitoring 2026-01-20T10:49:47.672123362+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=HNjIj namespace=openshift-vsphere-infra 2026-01-20T10:49:47.672215315+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="resolving sources" id=66Mm7 namespace=openshift-operator-lifecycle-manager 2026-01-20T10:49:47.672241805+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="checking if subscriptions need update" id=66Mm7 namespace=openshift-operator-lifecycle-manager 2026-01-20T10:49:47.867326227+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=5j9j+ namespace=openshift-monitoring 2026-01-20T10:49:47.867421710+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="resolving sources" id=TR5D8 namespace=openshift-operators 2026-01-20T10:49:47.867447321+00:00 stderr F time="2026-01-20T10:49:47Z" level=info msg="checking if subscriptions need update" id=TR5D8 namespace=openshift-operators 2026-01-20T10:49:48.068448643+00:00 stderr F time="2026-01-20T10:49:48Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=66Mm7 namespace=openshift-operator-lifecycle-manager 2026-01-20T10:49:48.266894048+00:00 stderr F time="2026-01-20T10:49:48Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=TR5D8 
namespace=openshift-operators 2026-01-20T10:49:49.094831836+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/6El3 2026-01-20T10:49:49.094831836+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/6El3 2026-01-20T10:49:49.095679832+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.095696293+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.101125748+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.101125748+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.101125748+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.101125748+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=M5mgE 
2026-01-20T10:49:49.101125748+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.101125748+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.107330437+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/6El3 2026-01-20T10:49:49.107330437+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/6El3 2026-01-20T10:49:49.107330437+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/6El3 2026-01-20T10:49:49.107330437+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=/6El3 2026-01-20T10:49:49.107330437+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/6El3 2026-01-20T10:49:49.107330437+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/6El3 
2026-01-20T10:49:49.124383656+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.124563812+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.124607243+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.124696946+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=M5mgE 2026-01-20T10:49:49.124771998+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=q7L9Q 2026-01-20T10:49:49.124807979+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=q7L9Q 2026-01-20T10:49:49.124984255+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/6El3 2026-01-20T10:49:49.125107128+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/6El3
2026-01-20T10:49:49.125142979+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/6El3
2026-01-20T10:49:49.125627644+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/6El3
2026-01-20T10:49:49.125697256+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.125734187+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.130768151+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=q7L9Q
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=c0KaQ
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.132671149+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.300564393+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.300707437+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.300707437+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.300793380+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=q7L9Q
2026-01-20T10:49:49.300857242+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:49.300857242+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:49.496739418+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.496834021+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.496834021+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.496909943+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=c0KaQ
2026-01-20T10:49:49.496973195+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:49.496973195+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:49.696157872+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:49.696253855+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:49.696253855+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:49.696266196+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=/O2VW
2026-01-20T10:49:49.696275116+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:49.696283976+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:49.896257838+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:49.896446943+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:49.896488595+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:49.896523216+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=5hxPh
2026-01-20T10:49:49.896552387+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:49.896578927+00:00 stderr F time="2026-01-20T10:49:49Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:50.496408958+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:50.496507471+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:50.496507471+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:50.496643565+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/O2VW
2026-01-20T10:49:50.496697066+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:50.496697066+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:50.697186253+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:50.697407050+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:50.697407050+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:50.697407050+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5hxPh
2026-01-20T10:49:50.898754982+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:50.898754982+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:50.898754982+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:50.898754982+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=s7idk
2026-01-20T10:49:50.898754982+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:50.898754982+00:00 stderr F time="2026-01-20T10:49:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:51.302456580+00:00 stderr F time="2026-01-20T10:49:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:51.302561643+00:00 stderr F time="2026-01-20T10:49:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:51.302561643+00:00 stderr F time="2026-01-20T10:49:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=s7idk
2026-01-20T10:49:51.302561643+00:00 stderr F time="2026-01-20T10:49:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s7idk
2026-01-20T10:50:04.669054007+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.669054007+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.680341101+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.680498816+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.680498816+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.680498816+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=h7vFR
2026-01-20T10:50:04.680535257+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.680535257+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.692138530+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.692406638+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.692406638+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:04.692585514+00:00 stderr F time="2026-01-20T10:50:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h7vFR
2026-01-20T10:50:05.075510168+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.075510168+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.079962464+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.080176620+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.080200401+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.080200401+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=3cbV7
2026-01-20T10:50:05.080217551+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.080217551+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.090177164+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.090369120+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.090369120+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.090501634+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3cbV7
2026-01-20T10:50:05.878238688+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.878238688+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.881288231+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.881400755+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.881400755+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.881414465+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=KcRtm
2026-01-20T10:50:05.881414465+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.881425095+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.889465250+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.889554613+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.889554613+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:05.889634115+00:00 stderr F time="2026-01-20T10:50:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KcRtm
2026-01-20T10:50:06.678260396+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.678260396+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.682096214+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.682429404+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.682429404+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.682429404+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=iCA4j
2026-01-20T10:50:06.682429404+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.682429404+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.690937543+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.691014875+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.691014875+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:06.691132159+00:00 stderr F time="2026-01-20T10:50:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iCA4j
2026-01-20T10:50:34.693415724+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.693415724+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.700545949+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.700710674+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.700721474+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.700741584+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=IVZs6
2026-01-20T10:50:34.700760465+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.700760465+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.712116002+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.712311948+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.712311948+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:34.712392600+00:00 stderr F time="2026-01-20T10:50:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IVZs6
2026-01-20T10:50:35.091539755+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.091539755+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.094283694+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.094409627+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.094419138+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.094429058+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=6mZMS
2026-01-20T10:50:35.094437008+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.094444848+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.100671568+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.100848553+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.100866423+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.100977596+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=6mZMS
2026-01-20T10:50:35.890050331+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.890050331+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.893763708+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.893915282+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.893915282+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.893915282+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=RM4xF
2026-01-20T10:50:35.893937253+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.893937253+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.903687374+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.903773757+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.903773757+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:35.903797647+00:00 stderr F time="2026-01-20T10:50:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RM4xF
2026-01-20T10:50:36.691583505+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hDAZz
2026-01-20T10:50:36.691583505+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hDAZz
2026-01-20T10:50:36.695082855+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hDAZz
2026-01-20T10:50:36.695200659+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hDAZz
2026-01-20T10:50:36.695200659+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hDAZz
2026-01-20T10:50:36.695239430+00:00 stderr F time="2026-01-20T10:50:36Z"
level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=hDAZz 2026-01-20T10:50:36.695239430+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hDAZz 2026-01-20T10:50:36.695239430+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hDAZz 2026-01-20T10:50:36.703440386+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hDAZz 2026-01-20T10:50:36.703486627+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hDAZz 2026-01-20T10:50:36.703486627+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hDAZz 2026-01-20T10:50:36.703503858+00:00 stderr F time="2026-01-20T10:50:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hDAZz 2026-01-20T10:50:43.801249786+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.801249786+00:00 stderr F 
time="2026-01-20T10:50:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.804168180+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.804277773+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.804277773+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.804277773+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Q9w5u 2026-01-20T10:50:43.804277773+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.804277773+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.818693248+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.818693248+00:00 stderr F 
time="2026-01-20T10:50:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.818693248+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.818693248+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Q9w5u 2026-01-20T10:50:43.851499694+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.851499694+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.874216258+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.874303591+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.874303591+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="of 1 pods matching label 
selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.874303591+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=YTYAt 2026-01-20T10:50:43.874314541+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.874314541+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.875254548+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.875384401+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.878875981+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.878952394+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.878952394+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct 
images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.878962284+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=VsNu7 2026-01-20T10:50:43.878962284+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.878970174+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.882587629+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.882673882+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.882673882+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.882754834+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=YTYAt 2026-01-20T10:50:43.882795915+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:43.882805165+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:43.884567385+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:43.884639798+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:43.884639798+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:43.884639798+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=rv0hW 2026-01-20T10:50:43.884639798+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:43.884670169+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:43.884828694+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.884891596+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.884891596+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:43.884942587+00:00 stderr F time="2026-01-20T10:50:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VsNu7 2026-01-20T10:50:44.202290130+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:44.202382923+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:44.202382923+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:44.202466695+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=rv0hW 2026-01-20T10:50:44.832169607+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.832169607+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.832286750+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.832738753+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.832738753+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.832738753+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=U+QeE 
2026-01-20T10:50:44.832738753+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.832738753+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.838926281+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.839078216+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.839088446+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.839169148+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U+QeE 2026-01-20T10:50:44.845042657+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:44.845042657+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=v4A2e 
2026-01-20T10:50:44.874090765+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:44.874090765+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.001756612+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:45.001756612+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:45.206000897+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.206150061+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.206150061+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.206163712+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="checked registry server health" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=UAKF8 2026-01-20T10:50:45.206171042+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.206178122+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.329637969+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:45.329637969+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=v4A2e 2026-01-20T10:50:45.329637969+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:45.329637969+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:45.392163761+00:00 stderr F 2026/01/20 10:50:45 http: TLS handshake error from 10.217.0.27:41054: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "Red Hat, Inc.") 2026-01-20T10:50:45.802707329+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="searching 
for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.802825112+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.802833172+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.802923075+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UAKF8 2026-01-20T10:50:45.802980836+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fa5Ub 2026-01-20T10:50:45.803004167+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fa5Ub 2026-01-20T10:50:46.004013169+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:46.004355198+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:46.004355198+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:46.004355198+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=v4A2e 2026-01-20T10:50:46.004355198+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7lB5o 2026-01-20T10:50:46.004355198+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7lB5o 2026-01-20T10:50:46.202910769+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fa5Ub 2026-01-20T10:50:46.202970091+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fa5Ub 2026-01-20T10:50:46.202970091+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=fa5Ub 2026-01-20T10:50:46.202984611+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=fa5Ub 2026-01-20T10:50:46.202984611+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fa5Ub 2026-01-20T10:50:46.202984611+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fa5Ub 2026-01-20T10:50:46.403689474+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7lB5o 2026-01-20T10:50:46.406037792+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7lB5o 2026-01-20T10:50:46.406037792+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7lB5o 2026-01-20T10:50:46.406037792+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=7lB5o 2026-01-20T10:50:46.406037792+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="registry state good" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=7lB5o
2026-01-20T10:50:46.406037792+00:00 stderr F time="2026-01-20T10:50:46Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7lB5o
2026-01-20T10:50:47.004160924+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fa5Ub
2026-01-20T10:50:47.004160924+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fa5Ub
2026-01-20T10:50:47.004160924+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fa5Ub
2026-01-20T10:50:47.004160924+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fa5Ub
2026-01-20T10:50:47.004160924+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:47.004160924+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:47.205719991+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7lB5o
2026-01-20T10:50:47.206104572+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7lB5o
2026-01-20T10:50:47.206154964+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7lB5o
2026-01-20T10:50:47.206267427+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7lB5o
2026-01-20T10:50:47.206380360+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:47.206438722+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:47.401811441+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:47.401920554+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:47.401920554+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:47.401931024+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=YJssD
2026-01-20T10:50:47.401938545+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:47.401945785+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:47.602024149+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:47.602144453+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:47.602165304+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:47.602165304+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=AkA5Y
2026-01-20T10:50:47.602174304+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:47.602181054+00:00 stderr F time="2026-01-20T10:50:47Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:48.204133527+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:48.204133527+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:48.204133527+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:48.204133527+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YJssD
2026-01-20T10:50:48.204133527+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:48.204133527+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:48.403057238+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:48.403179602+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:48.403179602+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:48.403297536+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AkA5Y
2026-01-20T10:50:48.403366438+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:48.403366438+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:48.603259317+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:48.603520304+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:48.603568026+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:48.603613577+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=1h4/z
2026-01-20T10:50:48.603648828+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:48.603683539+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:48.803013892+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:48.803096444+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:48.803096444+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:48.803096444+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=5Jp/w
2026-01-20T10:50:48.803096444+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:48.803096444+00:00 stderr F time="2026-01-20T10:50:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:49.403611547+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:49.403611547+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:49.403611547+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:49.403684749+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1h4/z
2026-01-20T10:50:49.403696209+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:49.403706679+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:49.603154886+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:49.603154886+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:49.603154886+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:49.603256908+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5Jp/w
2026-01-20T10:50:49.802785957+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:49.802907500+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:49.802907500+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:49.802907500+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=MCvhw
2026-01-20T10:50:49.802907500+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:49.802907500+00:00 stderr F time="2026-01-20T10:50:49Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:50.204868782+00:00 stderr F time="2026-01-20T10:50:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:50.204970435+00:00 stderr F time="2026-01-20T10:50:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:50.204970435+00:00 stderr F time="2026-01-20T10:50:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:50.205146100+00:00 stderr F time="2026-01-20T10:50:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MCvhw
2026-01-20T10:50:52.920590766+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.920590766+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.924333513+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.924432936+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.924432936+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.924447506+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=G9c6K
2026-01-20T10:50:52.924447506+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.924447506+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.932263432+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.932360094+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.932360094+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.932419746+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=G9c6K
2026-01-20T10:50:52.980273535+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.980273535+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.986423313+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.986564247+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.986564247+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.986589017+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=zJ5jO
2026-01-20T10:50:52.986589017+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.986597358+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.991977823+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.991977823+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.991977823+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:52.991977823+00:00 stderr F time="2026-01-20T10:50:52Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zJ5jO
2026-01-20T10:50:53.022187192+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.022187192+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.025473617+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.025574781+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.025574781+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.025598921+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=m/ja6
2026-01-20T10:50:53.025598921+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.025606702+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.033814257+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.034537729+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.034537729+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:53.034537729+00:00 stderr F time="2026-01-20T10:50:53Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m/ja6
2026-01-20T10:50:54.009048046+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.009048046+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.011114415+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.011218518+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.011218518+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.011218518+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=IuxDc
2026-01-20T10:50:54.011232808+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.011241339+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.017807648+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.017857690+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.018288402+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:54.018288402+00:00 stderr F time="2026-01-20T10:50:54Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IuxDc
2026-01-20T10:50:55.919357916+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.919357916+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.924645497+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.924737700+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.924737700+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.924755450+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=H5X/j
2026-01-20T10:50:55.924755450+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.924755450+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.931839215+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.932010410+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.932010410+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:55.932096522+00:00 stderr F time="2026-01-20T10:50:55Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=H5X/j
2026-01-20T10:50:56.694974572+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.694974572+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.696999020+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.697112633+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.697112633+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.697124774+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RRH5E
2026-01-20T10:50:56.697124774+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.697138734+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.702866849+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.702942201+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.702953461+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.703012823+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRH5E
2026-01-20T10:50:56.708736448+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.708736448+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.713596008+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.713702101+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.713702101+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.713713882+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=IvxSG
2026-01-20T10:50:56.713713882+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.713723342+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.720121906+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=o5M4c
2026-01-20T10:50:56.720121906+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=o5M4c
2026-01-20T10:50:56.723850094+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.723987418+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.724007219+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.724119542+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IvxSG
2026-01-20T10:50:56.724586855+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=o5M4c
2026-01-20T10:50:56.724657517+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=o5M4c
2026-01-20T10:50:56.724657517+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash"
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=o5M4c 2026-01-20T10:50:56.724669787+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=o5M4c 2026-01-20T10:50:56.724678168+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=o5M4c 2026-01-20T10:50:56.724686238+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=o5M4c 2026-01-20T10:50:56.732188484+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=o5M4c 2026-01-20T10:50:56.732324968+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=o5M4c 2026-01-20T10:50:56.732324968+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=o5M4c 2026-01-20T10:50:56.732413061+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=o5M4c 2026-01-20T10:50:56.732461692+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:56.732480773+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:56.734929733+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:56.735107818+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:56.735107818+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:56.735107818+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=E3J1x 2026-01-20T10:50:56.735107818+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:56.735107818+00:00 stderr F time="2026-01-20T10:50:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:57.099103836+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:57.099334472+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:57.099374594+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:57.099573959+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E3J1x 2026-01-20T10:50:57.206214362+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:57.206214362+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:57.224049605+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:57.224049605+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="synchronizing registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:57.298409828+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:57.298503721+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:57.298503721+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:57.298539302+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=5hCmZ 2026-01-20T10:50:57.298539302+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:57.298539302+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:57.500402178+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:57.500402178+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:57.500402178+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:57.500402178+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=ZdnN0 2026-01-20T10:50:57.500402178+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:57.500402178+00:00 stderr F time="2026-01-20T10:50:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:58.098585452+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:58.098671884+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:58.098671884+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:58.098791928+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5hCmZ 2026-01-20T10:50:58.098878501+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:58.098878501+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:58.297281697+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:58.297388890+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:58.297388890+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:58.297458572+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=ZdnN0 2026-01-20T10:50:58.297519613+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:58.297540994+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:58.498787583+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:58.499172583+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:58.499172583+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:58.499172583+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=0aD6v 2026-01-20T10:50:58.499172583+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:58.499172583+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:58.697705934+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:58.697775256+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:58.697775256+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:58.697791866+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=LMt0C 2026-01-20T10:50:58.697791866+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:58.697791866+00:00 stderr F time="2026-01-20T10:50:58Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:59.298027091+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:59.298298248+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:59.298312839+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:59.298402471+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0aD6v 2026-01-20T10:50:59.298571586+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:50:59.298602957+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:50:59.499132494+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:59.499215516+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:59.499215516+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:59.499304499+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LMt0C 2026-01-20T10:50:59.499359340+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:50:59.499380701+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:50:59.697333465+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:50:59.697444838+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:50:59.697444838+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:50:59.697444838+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true 
id=bU4W2 2026-01-20T10:50:59.697444838+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:50:59.697456368+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:50:59.898342056+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:50:59.898342056+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:50:59.898342056+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:50:59.898342056+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=cMyKF 2026-01-20T10:50:59.898342056+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:50:59.898342056+00:00 stderr F time="2026-01-20T10:50:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:51:00.498814557+00:00 
stderr F time="2026-01-20T10:51:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:51:00.498970561+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:51:00.498983892+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=bU4W2 2026-01-20T10:51:00.499079574+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="catalog polling result: update pod redhat-marketplace-qr2m4 failed to start" UpdatePod=redhat-marketplace-qr2m4 2026-01-20T10:51:00.698016336+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:51:00.698157780+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:51:00.698220422+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cMyKF 
2026-01-20T10:51:00.698263813+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cMyKF 2026-01-20T10:51:00.914709460+00:00 stderr F E0120 10:51:00.914262 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod redhat-marketplace-qr2m4 in a Failed state: deleted update pod 2026-01-20T10:51:00.914709460+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:00.914709460+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.098790683+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.099024220+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.099024220+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.099038010+00:00 stderr F time="2026-01-20T10:51:01Z" 
level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=VwHaR 2026-01-20T10:51:01.099038010+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.099046000+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.498980483+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.499230400+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.499230400+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=VwHaR 2026-01-20T10:51:01.499311243+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="catalog polling result: update pod redhat-marketplace-qr2m4 failed to start" UpdatePod=redhat-marketplace-qr2m4 2026-01-20T10:51:01.699768079+00:00 stderr F E0120 10:51:01.699702 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod redhat-marketplace-qr2m4 in a Failed state: deleted 
update pod 2026-01-20T10:51:01.699829390+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:01.699888902+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:01.898710560+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:01.898915006+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:01.898927847+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:01.898939657+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=gcGdj 2026-01-20T10:51:01.899001059+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:01.899001059+00:00 stderr F time="2026-01-20T10:51:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gcGdj 
2026-01-20T10:51:02.298577022+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:02.298834959+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:02.298834959+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gcGdj 2026-01-20T10:51:02.298961422+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="catalog polling result: update pod redhat-marketplace-qr2m4 failed to start" UpdatePod=redhat-marketplace-qr2m4 2026-01-20T10:51:02.498361827+00:00 stderr F E0120 10:51:02.498304 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod redhat-marketplace-qr2m4 in a Failed state: deleted update pod 2026-01-20T10:51:02.498399718+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:02.498434349+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:02.698799303+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:02.698886245+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:02.698904126+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:02.698917946+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=T/V6j 2026-01-20T10:51:02.698917946+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:02.698931407+00:00 stderr F time="2026-01-20T10:51:02Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:03.098958701+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:03.099095195+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=T/V6j 
2026-01-20T10:51:03.099095195+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:03.099253920+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:03.099253920+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=T/V6j 2026-01-20T10:51:03.105724437+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.105757718+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.298829121+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.298871472+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.298871472+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.298879692+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=EBNrF 2026-01-20T10:51:03.298886902+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.298894142+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.426262972+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:03.426331834+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:03.698836726+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:03.698946279+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:03.698946279+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=false id=e8blq 
2026-01-20T10:51:03.698946279+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:03.900207617+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.900340030+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.900392622+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:03.900555337+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="creating desired pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF pod.name= pod.namespace=openshift-marketplace 2026-01-20T10:51:04.310542290+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:04.310542290+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EBNrF 2026-01-20T10:51:04.317577242+00:00 stderr F time="2026-01-20T10:51:04Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"redhat-marketplace\": the object has been modified; please apply your changes to 
the latest version and try again" id=EBNrF 2026-01-20T10:51:04.317638733+00:00 stderr F E0120 10:51:04.317600 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-marketplace": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:51:04.317825949+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:04.317844419+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:04.499158024+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:04.499452212+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:04.499452212+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:04.499816372+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="creating desired pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq pod.name= pod.namespace=openshift-marketplace 2026-01-20T10:51:04.698981811+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 
2026-01-20T10:51:04.699156246+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:04.699156246+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=false id=20g9a 2026-01-20T10:51:04.699156246+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:04.911045941+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e8blq 2026-01-20T10:51:04.911045941+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:04.911111213+00:00 stderr F time="2026-01-20T10:51:04Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:05.298924117+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:05.298993719+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:05.298993719+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="checked registry server health" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=false id=5dedt 2026-01-20T10:51:05.298993719+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:05.498416954+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:05.498549268+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:05.498549268+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:05.498696272+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="creating desired pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a pod.name= pod.namespace=openshift-marketplace 2026-01-20T10:51:05.901948930+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:05.901948930+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=20g9a 2026-01-20T10:51:05.908009875+00:00 stderr F time="2026-01-20T10:51:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:05.908009875+00:00 stderr F time="2026-01-20T10:51:05Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:06.100645305+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:06.100752508+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:06.100752508+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:06.100825970+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="creating desired pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt pod.name= pod.namespace=openshift-marketplace 2026-01-20T10:51:06.301099370+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:06.301099370+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:06.301099370+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace 
id=HLTBp 2026-01-20T10:51:06.301099370+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=HLTBp 2026-01-20T10:51:06.301099370+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:06.301099370+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:06.507314712+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="catalog update required at 2026-01-20 10:51:06.50723947 +0000 UTC m=+92.616691038" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:06.905080872+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5dedt 2026-01-20T10:51:06.915398260+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:06.915398260+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:07.103239692+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:07.103239692+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:07.103239692+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:07.103239692+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:07.103239692+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HLTBp 2026-01-20T10:51:07.103239692+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:07.103239692+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:07.298980482+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:07.299067634+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=gL9O4 
2026-01-20T10:51:07.299067634+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:07.299136276+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=gL9O4 2026-01-20T10:51:07.299136276+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:07.299136276+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:07.497725868+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:07.497810020+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:07.497810020+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:07.497819730+00:00 stderr F 
time="2026-01-20T10:51:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=EgD26 2026-01-20T10:51:07.497819730+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:07.497827321+00:00 stderr F time="2026-01-20T10:51:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:08.099783053+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:08.101482503+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:08.101482503+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:08.101482503+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=gL9O4 2026-01-20T10:51:08.101482503+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=gL9O4 
2026-01-20T10:51:08.101482503+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc1i8 2026-01-20T10:51:08.101482503+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc1i8 2026-01-20T10:51:08.298814989+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:08.298906071+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:08.298906071+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:08.298991704+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:08.298991704+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=EgD26 2026-01-20T10:51:08.299045935+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oOyES 
2026-01-20T10:51:08.299074056+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oOyES 2026-01-20T10:51:08.498176922+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc1i8 2026-01-20T10:51:08.498289906+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=Rc1i8 2026-01-20T10:51:08.498299436+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=Rc1i8 2026-01-20T10:51:08.498309916+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Rc1i8 2026-01-20T10:51:08.498318406+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc1i8 2026-01-20T10:51:08.498318406+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc1i8 2026-01-20T10:51:08.701660974+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oOyES 
2026-01-20T10:51:08.701660974+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:08.701660974+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:08.701660974+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=oOyES
2026-01-20T10:51:08.701660974+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:08.701660974+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:09.297996106+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc1i8
2026-01-20T10:51:09.298152230+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=Rc1i8
2026-01-20T10:51:09.298152230+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=Rc1i8
2026-01-20T10:51:09.298563662+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc1i8
2026-01-20T10:51:09.298563662+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:09.298563662+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:09.498280107+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:09.498337769+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:09.498337769+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:09.498432261+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:09.498432261+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oOyES
2026-01-20T10:51:09.498487873+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:09.498512044+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:09.698237218+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:09.698308730+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:09.698308730+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:09.698317860+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=bWtBa
2026-01-20T10:51:09.698325280+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:09.698325280+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:09.898199929+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:09.898261620+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:09.898261620+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:09.898274501+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=xXzWl
2026-01-20T10:51:09.898274501+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:09.898284071+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:10.498023422+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:10.498167566+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:10.498167566+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:10.498287369+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:10.498287369+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWtBa
2026-01-20T10:51:10.498355321+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:10.498365861+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:10.698996031+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:10.698996031+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:10.698996031+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:10.698996031+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:10.698996031+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xXzWl
2026-01-20T10:51:10.698996031+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:10.698996031+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:10.898161739+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:10.898372775+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:10.898427077+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:10.898471389+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=F1xBd
2026-01-20T10:51:10.898514540+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:10.898548831+00:00 stderr F time="2026-01-20T10:51:10Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:11.100520270+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.100520270+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.100520270+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.100520270+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=3j2q1
2026-01-20T10:51:11.100520270+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.100520270+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.698018135+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:11.698154349+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:11.698154349+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:11.698239171+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F1xBd
2026-01-20T10:51:11.898516802+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.898610935+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.898610935+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.899325575+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:11.899325575+00:00 stderr F time="2026-01-20T10:51:11Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3j2q1
2026-01-20T10:51:13.156622050+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.156622050+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.159628787+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.159823402+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.159823402+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.159837063+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=WQiSs
2026-01-20T10:51:13.159837063+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.159847273+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.169856501+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.169856501+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.169856501+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.169856501+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WQiSs
2026-01-20T10:51:13.176091021+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.176091021+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.180877288+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.180996162+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.180996162+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.181006922+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=RTyYW
2026-01-20T10:51:13.181016102+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.181023383+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.185933054+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.186006846+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.186006846+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:13.186086579+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RTyYW
2026-01-20T10:51:14.794022286+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.794022286+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.796610171+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.796836907+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.796836907+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.796856688+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=VFhp2
2026-01-20T10:51:14.796856688+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.796882998+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.802295764+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.802336346+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.802336346+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.802450699+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:14.802450699+00:00 stderr F time="2026-01-20T10:51:14Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=VFhp2
2026-01-20T10:51:15.253160405+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.253160405+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.255953295+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.256086169+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.256115540+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.256123240+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=4vtQP
2026-01-20T10:51:15.256130470+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.256137730+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.260827716+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.261022681+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.261022681+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.261174846+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.261174846+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4vtQP
2026-01-20T10:51:15.370220918+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.370220918+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.372866814+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.372951796+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.372951796+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.372961187+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=U9aKD
2026-01-20T10:51:15.372968507+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.372968507+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.377738043+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.377816847+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.377816847+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.377893379+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:15.377893379+00:00 stderr F time="2026-01-20T10:51:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=U9aKD
2026-01-20T10:51:16.246221246+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.246221246+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.255334049+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.255424622+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.255424622+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.255433592+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RtTua
2026-01-20T10:51:16.255441222+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.255448383+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.261882688+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.262008041+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.262008041+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.262125835+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:16.262125835+00:00 stderr F time="2026-01-20T10:51:16Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RtTua
2026-01-20T10:51:18.193928713+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.193928713+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.195812777+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.195908670+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.195908670+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.195922640+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=fd2Oa
2026-01-20T10:51:18.195922640+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.195922640+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.202944793+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.203046766+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=fd2Oa
2026-01-20T10:51:18.203046766+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace
correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=fd2Oa 2026-01-20T10:51:18.203153039+00:00 stderr F time="2026-01-20T10:51:18Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fd2Oa 2026-01-20T10:51:26.226297549+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.226297549+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.229826020+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.231175899+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.231175899+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.231175899+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=9faDW 
2026-01-20T10:51:26.231175899+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.231175899+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.241475386+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.241590649+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.241590649+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:26.241657302+00:00 stderr F time="2026-01-20T10:51:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9faDW 2026-01-20T10:51:27.007613300+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.007613300+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.013222921+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.013343335+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.013343335+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.013354355+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=jUfwc 2026-01-20T10:51:27.013354355+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.013363015+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.021268343+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.021408197+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.021408197+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.021498610+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jUfwc 2026-01-20T10:51:27.348451950+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.348451950+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.353831005+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.353949439+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.353959589+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.353970789+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=nZhRG 2026-01-20T10:51:27.353979139+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.353987510+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.360700693+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.360806686+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.360806686+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=nZhRG 2026-01-20T10:51:27.360855487+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
id=nZhRG 2026-01-20T10:51:27.364769880+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.364769880+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.369919688+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.370039371+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.370039371+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.370048602+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=J3FUt 2026-01-20T10:51:27.370055812+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.370096463+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=J3FUt 
2026-01-20T10:51:27.376789047+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.376965262+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.376965262+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:27.377043114+00:00 stderr F time="2026-01-20T10:51:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=J3FUt 2026-01-20T10:51:28.354946739+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.354946739+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.357579855+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.357632546+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.357652817+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.357652817+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=cruQf 2026-01-20T10:51:28.357663607+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.357672847+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.365476352+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.365533274+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.365533274+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.372174465+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.372174465+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cruQf 2026-01-20T10:51:28.372996138+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.372996138+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.375502931+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.375613984+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.375613984+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.375626724+00:00 stderr F 
time="2026-01-20T10:51:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=cB860 2026-01-20T10:51:28.375626724+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.375646015+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.380750812+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.380816984+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.380816984+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.411920010+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cB860 2026-01-20T10:51:28.411920010+00:00 stderr F time="2026-01-20T10:51:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cB860 
2026-01-20T10:51:31.513277006+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY" 2026-01-20T10:51:31.513372478+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.513372478+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.513425840+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="resolving sources" id=dhIdD namespace=openshift-marketplace 2026-01-20T10:51:31.513425840+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="checking if subscriptions need update" id=dhIdD namespace=openshift-marketplace 2026-01-20T10:51:31.516940810+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.517180207+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.517180207+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.517180207+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="checked registry server health" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=dPtkx 2026-01-20T10:51:31.517180207+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.517180207+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.518957677+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=dhIdD namespace=openshift-marketplace 2026-01-20T10:51:31.526204752+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.526479560+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.526479560+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.529228688+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.529228688+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="requeuing registry server sync based on polling interval 
10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dPtkx 2026-01-20T10:51:31.539833689+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.539833689+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.544975004+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.545160240+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.545160240+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.545160240+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=8S0MY 2026-01-20T10:51:31.545196241+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.545196241+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="ensuring registry server" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.557618532+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.557618532+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.557618532+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.560737250+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:31.560737250+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8S0MY 2026-01-20T10:51:32.913503176+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY" 2026-01-20T10:51:32.913503176+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.913503176+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="synchronizing registry server" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.913575448+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="resolving sources" id=xgWZQ namespace=openshift-marketplace 2026-01-20T10:51:32.913575448+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="checking if subscriptions need update" id=xgWZQ namespace=openshift-marketplace 2026-01-20T10:51:32.918362954+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=xgWZQ namespace=openshift-marketplace 2026-01-20T10:51:32.918674252+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.918780725+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.918780725+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.918780725+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=98Dni 2026-01-20T10:51:32.918780725+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.918780725+00:00 stderr 
F time="2026-01-20T10:51:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.924873988+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.924909189+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.924909189+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.925038262+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.925038262+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=98Dni 2026-01-20T10:51:32.937356092+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.937356092+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.942470206+00:00 
stderr F time="2026-01-20T10:51:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.942630552+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.942630552+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.942630552+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=ZfRl0 2026-01-20T10:51:32.942651022+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.942651022+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.951113742+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.951220915+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.951220915+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.951348918+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:32.951348918+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ZfRl0 2026-01-20T10:51:35.714563643+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY" 2026-01-20T10:51:35.714612834+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.714654465+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.715186620+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="resolving sources" id=zF6kl namespace=openshift-marketplace 2026-01-20T10:51:35.715186620+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="checking if subscriptions need update" id=zF6kl namespace=openshift-marketplace 2026-01-20T10:51:35.717949369+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.718056612+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.718089722+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.718116553+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=yu75Z 2026-01-20T10:51:35.718125443+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.718133904+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.718627718+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=zF6kl namespace=openshift-marketplace 2026-01-20T10:51:35.728931710+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.729181337+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.729181337+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.729377582+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.729377582+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yu75Z 2026-01-20T10:51:35.740633191+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.740754015+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.746068065+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.746218289+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 
current-pod.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.746218289+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.746245540+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=7zvXi 2026-01-20T10:51:35.746245540+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.746253590+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.753680401+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.753769944+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.753769944+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace 
id=7zvXi 2026-01-20T10:51:35.753907927+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:35.753915878+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7zvXi 2026-01-20T10:51:36.909235509+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.909235509+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.909235509+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.909235509+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.909235509+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.909235509+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=rZaeR 2026-01-20T10:51:36.909235509+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.909235509+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.916021222+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.916168946+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.916168946+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.918572744+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.918572744+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rZaeR 2026-01-20T10:51:36.930290636+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="syncing catalog source" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.930290636+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.940885426+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.940885426+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.940885426+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.940885426+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=SN2Ly 2026-01-20T10:51:36.940885426+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.940885426+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.948375538+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="searching for current pods" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.948404378+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.948404378+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.953441071+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:36.953441071+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SN2Ly 2026-01-20T10:51:39.978114274+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.978114274+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.985208144+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.985208144+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="evaluating 
current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.985208144+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.985208144+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=TjT5/ 2026-01-20T10:51:39.985208144+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.985208144+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.994106907+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.994106907+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.994106907+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=TjT5/ 2026-01-20T10:51:39.994106907+00:00 stderr F time="2026-01-20T10:51:39Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk 2026-01-20T10:51:40.002106294+00:00 stderr F E0120 10:51:40.001457 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod community-operators-9k9jk in a Failed state: deleted update pod 2026-01-20T10:51:40.002106294+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.002106294+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.011108149+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.011108149+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.011108149+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.011108149+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=EbPjD 2026-01-20T10:51:40.011108149+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.011108149+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.019089005+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.019089005+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.019089005+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=EbPjD 2026-01-20T10:51:40.019089005+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk 2026-01-20T10:51:40.020089863+00:00 stderr F E0120 10:51:40.019928 1 queueinformer_operator.go:319] sync {"update" 
"openshift-marketplace/community-operators"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod community-operators-9k9jk in a Failed state: deleted update pod 2026-01-20T10:51:40.020089863+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.020089863+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.024086646+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.024086646+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.024086646+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.024086646+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GUELy 2026-01-20T10:51:40.024086646+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GUELy 
2026-01-20T10:51:40.024086646+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.183395009+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.183528303+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.183528303+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=GUELy 2026-01-20T10:51:40.183584385+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk 2026-01-20T10:51:40.382590603+00:00 stderr F E0120 10:51:40.382495 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod community-operators-9k9jk in a Failed state: deleted update pod 2026-01-20T10:51:40.382590603+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xyIfa 2026-01-20T10:51:40.382590603+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="synchronizing registry server" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xyIfa 2026-01-20T10:51:40.581202640+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xyIfa 2026-01-20T10:51:40.581326093+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=xyIfa 2026-01-20T10:51:40.581345974+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=xyIfa 2026-01-20T10:51:40.581382595+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=xyIfa 2026-01-20T10:51:40.581391405+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xyIfa 2026-01-20T10:51:40.581399515+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xyIfa 2026-01-20T10:51:40.984607319+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xyIfa 2026-01-20T10:51:40.984686911+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=xyIfa
2026-01-20T10:51:40.984686911+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=xyIfa
2026-01-20T10:51:40.985525405+00:00 stderr F time="2026-01-20T10:51:40Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk
2026-01-20T10:51:41.185410288+00:00 stderr F E0120 10:51:41.181529 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod community-operators-9k9jk in a Failed state: deleted update pod
2026-01-20T10:51:41.185410288+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.185410288+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.382998135+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.383119740+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.383119740+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.383136190+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=buKG8
2026-01-20T10:51:41.383136190+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.383145840+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.780723583+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.780818366+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.780818366+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=buKG8
2026-01-20T10:51:41.780981131+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk
2026-01-20T10:51:41.983616482+00:00 stderr F E0120 10:51:41.983540 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod community-operators-9k9jk in a Failed state: deleted update pod
2026-01-20T10:51:41.983716365+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:41.983757136+00:00 stderr F time="2026-01-20T10:51:41Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.180308014+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.180494289+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.180526060+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.180595972+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+Ophv
2026-01-20T10:51:42.180621823+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.180645244+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.581735337+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.582105788+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.582105788+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=+Ophv
2026-01-20T10:51:42.582105788+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk
2026-01-20T10:51:42.782476974+00:00 stderr F E0120 10:51:42.782388 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod community-operators-9k9jk in a Failed state: deleted update pod
2026-01-20T10:51:42.782548316+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:42.782548316+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:42.980358401+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:42.980549296+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:42.980582517+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:42.980613118+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+Sdcs
2026-01-20T10:51:42.980636479+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:42.980665739+00:00 stderr F time="2026-01-20T10:51:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:43.382852383+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:43.383281255+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:43.383321407+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=+Sdcs
2026-01-20T10:51:43.383400160+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk
2026-01-20T10:51:43.582131519+00:00 stderr F E0120 10:51:43.582090 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod community-operators-9k9jk in a Failed state: deleted update pod
2026-01-20T10:51:43.582217912+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:43.582549232+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:43.780396186+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:43.780396186+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:43.780396186+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:43.780396186+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=jd/Pf
2026-01-20T10:51:43.780396186+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:43.780396186+00:00 stderr F time="2026-01-20T10:51:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:44.182358655+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:44.182539280+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:44.182581271+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=jd/Pf
2026-01-20T10:51:44.182651473+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk
2026-01-20T10:51:44.386232251+00:00 stderr F E0120 10:51:44.384518 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: couldn't ensure registry server - error ensuring updated catalog source pod: : update pod community-operators-9k9jk in a Failed state: deleted update pod
2026-01-20T10:51:44.386232251+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.386232251+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.586545135+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.589103788+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.589103788+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.589103788+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=he6bh
2026-01-20T10:51:44.589103788+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.589103788+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.848924119+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:44.848924119+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:44.980666482+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.980772845+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.980772845+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=he6bh
2026-01-20T10:51:44.980872197+00:00 stderr F time="2026-01-20T10:51:44Z" level=info msg="catalog polling result: update pod community-operators-9k9jk failed to start" UpdatePod=community-operators-9k9jk
2026-01-20T10:51:45.182908061+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.182983463+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.182983463+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.182983463+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=e1pLS
2026-01-20T10:51:45.183003834+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.183003834+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.382182647+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:45.382301170+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:45.783786244+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:45.783901247+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:45.783901247+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:45.783901247+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ZYdCX
2026-01-20T10:51:45.783901247+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:45.783916518+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:45.983170104+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.983170104+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.983170104+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.983170104+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.983170104+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e1pLS
2026-01-20T10:51:45.989032920+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:45.989032920+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:46.381792217+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:46.381900300+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:46.381909460+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:46.381927681+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=s19QI
2026-01-20T10:51:46.381927681+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:46.381935641+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:46.583337097+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:46.583427739+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:46.583439500+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:46.583624155+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:46.583624155+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZYdCX
2026-01-20T10:51:46.591922030+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:46.591922030+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:46.982560317+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:46.982560317+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:46.982560317+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:46.982560317+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=K0PcX
2026-01-20T10:51:46.982560317+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:46.982560317+00:00 stderr F time="2026-01-20T10:51:46Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:47.183158560+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:47.183308034+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:47.183308034+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:47.183533311+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:47.183533311+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=s19QI
2026-01-20T10:51:47.190328223+00:00 stderr F time="2026-01-20T10:51:47Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"redhat-operators\": the object has been modified; please apply your changes to the latest version and try again" id=s19QI
2026-01-20T10:51:47.190400616+00:00 stderr F E0120 10:51:47.190327 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-operators": the object has been modified; please apply your changes to the latest version and try again
2026-01-20T10:51:47.190420096+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:47.190485928+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:47.581077634+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:47.581143716+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:47.581143716+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:47.581171306+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=2b9VN
2026-01-20T10:51:47.581171306+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:47.581171306+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:47.781047299+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:47.781152852+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:47.781152852+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:47.781379268+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:47.781379268+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=K0PcX
2026-01-20T10:51:47.786178884+00:00 stderr F time="2026-01-20T10:51:47Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=K0PcX
2026-01-20T10:51:47.786225495+00:00 stderr F E0120 10:51:47.786178 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again
2026-01-20T10:51:47.786225495+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G5S4B
2026-01-20T10:51:47.786256356+00:00 stderr F time="2026-01-20T10:51:47Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G5S4B
2026-01-20T10:51:48.181748981+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G5S4B
2026-01-20T10:51:48.181827913+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=G5S4B
2026-01-20T10:51:48.181827913+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=G5S4B
2026-01-20T10:51:48.181836983+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=G5S4B
2026-01-20T10:51:48.181844154+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G5S4B
2026-01-20T10:51:48.181851354+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G5S4B
2026-01-20T10:51:48.382671624+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:48.382793557+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:48.382793557+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:48.382893910+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:48.382893910+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2b9VN
2026-01-20T10:51:48.382944581+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+lus3
2026-01-20T10:51:48.382963592+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+lus3
2026-01-20T10:51:48.781229855+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+lus3
2026-01-20T10:51:48.781229855+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=+lus3
2026-01-20T10:51:48.781229855+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=+lus3
2026-01-20T10:51:48.781229855+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=+lus3
2026-01-20T10:51:48.781229855+00:00 stderr F
time="2026-01-20T10:51:48Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+lus3 2026-01-20T10:51:48.781229855+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+lus3 2026-01-20T10:51:48.985139702+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G5S4B 2026-01-20T10:51:48.985139702+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=G5S4B 2026-01-20T10:51:48.985139702+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=G5S4B 2026-01-20T10:51:48.985139702+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G5S4B 2026-01-20T10:51:48.985139702+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G5S4B 2026-01-20T10:51:48.985139702+00:00 stderr F time="2026-01-20T10:51:48Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:48.985139702+00:00 stderr F 
time="2026-01-20T10:51:48Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.381601815+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.381601815+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.381733798+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.381733798+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ODAMs 2026-01-20T10:51:49.381733798+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.381733798+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.581243921+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+lus3 2026-01-20T10:51:49.581348723+00:00 stderr F 
time="2026-01-20T10:51:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=+lus3 2026-01-20T10:51:49.581348723+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=+lus3 2026-01-20T10:51:49.581574790+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+lus3 2026-01-20T10:51:49.581574790+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+lus3 2026-01-20T10:51:49.981432638+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.981517131+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.981517131+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 
current-pod.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.981613404+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:49.981613404+00:00 stderr F time="2026-01-20T10:51:49Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ODAMs 2026-01-20T10:51:56.350289553+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.350289553+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.359775343+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.359849345+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.359849345+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.359860535+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace health=true id=rWp6r 2026-01-20T10:51:56.359860535+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.359869835+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.367005647+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.367121870+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.367121870+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.367203003+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.367203003+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rWp6r 2026-01-20T10:51:56.449588317+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.449588317+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.452614073+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.452614073+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.452614073+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.452614073+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=xBRnk 2026-01-20T10:51:56.452614073+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.452614073+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.460490146+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.460575269+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.460575269+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.460660901+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:51:56.460660901+00:00 stderr F time="2026-01-20T10:51:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xBRnk 2026-01-20T10:52:37.564602886+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY" 2026-01-20T10:52:37.564674518+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.564721359+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.564773421+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="resolving sources" id=PrmPb namespace=openshift-marketplace 2026-01-20T10:52:37.564773421+00:00 stderr F 
time="2026-01-20T10:52:37Z" level=info msg="checking if subscriptions need update" id=PrmPb namespace=openshift-marketplace 2026-01-20T10:52:37.571016333+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=PrmPb namespace=openshift-marketplace 2026-01-20T10:52:37.572506891+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.572673865+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.572683095+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.572716246+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=TGClT 2026-01-20T10:52:37.572723906+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.572733076+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.580512137+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.580759984+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.580759984+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.580952489+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.580952489+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=TGClT 2026-01-20T10:52:37.589962421+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.589962421+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.595164676+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.595337440+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.595337440+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.595368311+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=XO4mx 2026-01-20T10:52:37.595375811+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.595383051+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.601288024+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.601388777+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.601388777+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.601478259+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:52:37.601478259+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XO4mx 2026-01-20T10:56:24.917091013+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="resolving sources" id=ZvyNw namespace=openstack 2026-01-20T10:56:24.917168745+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="checking if subscriptions need update" id=ZvyNw namespace=openstack 2026-01-20T10:56:24.921502781+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="No subscriptions were found in namespace openstack" id=ZvyNw namespace=openstack 2026-01-20T10:56:24.940856972+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="resolving sources" id=r5egM namespace=openstack 2026-01-20T10:56:24.940856972+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="checking if subscriptions need update" id=r5egM namespace=openstack 2026-01-20T10:56:24.947475339+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="No subscriptions were found in namespace openstack" id=r5egM namespace=openstack 2026-01-20T10:56:24.977889638+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="resolving sources" id=VyKrZ namespace=openstack 2026-01-20T10:56:24.977960440+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="checking if subscriptions need update" id=VyKrZ namespace=openstack 2026-01-20T10:56:24.980734414+00:00 stderr F time="2026-01-20T10:56:24Z" level=info msg="No subscriptions were found in namespace openstack" id=VyKrZ 
namespace=openstack 2026-01-20T10:56:25.748992665+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="resolving sources" id=UdJWr namespace=openstack-operators 2026-01-20T10:56:25.748992665+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="checking if subscriptions need update" id=UdJWr namespace=openstack-operators 2026-01-20T10:56:25.754267316+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=UdJWr namespace=openstack-operators 2026-01-20T10:56:25.758790898+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="resolving sources" id=eHNTX namespace=openstack-operators 2026-01-20T10:56:25.758790898+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="checking if subscriptions need update" id=eHNTX namespace=openstack-operators 2026-01-20T10:56:25.763150936+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=eHNTX namespace=openstack-operators 2026-01-20T10:56:25.767873252+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="resolving sources" id=Uaao9 namespace=openstack-operators 2026-01-20T10:56:25.767873252+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="checking if subscriptions need update" id=Uaao9 namespace=openstack-operators 2026-01-20T10:56:25.771512331+00:00 stderr F time="2026-01-20T10:56:25Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=Uaao9 namespace=openstack-operators 2026-01-20T10:56:30.677288042+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="resolving sources" id=hm285 namespace=cert-manager-operator 2026-01-20T10:56:30.677288042+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="checking if subscriptions need update" id=hm285 namespace=cert-manager-operator 2026-01-20T10:56:30.686932271+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="No subscriptions were found in namespace 
cert-manager-operator" id=hm285 namespace=cert-manager-operator 2026-01-20T10:56:30.687127876+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="resolving sources" id=Bc+A/ namespace=cert-manager-operator 2026-01-20T10:56:30.687200928+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="checking if subscriptions need update" id=Bc+A/ namespace=cert-manager-operator 2026-01-20T10:56:30.693970191+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="No subscriptions were found in namespace cert-manager-operator" id=Bc+A/ namespace=cert-manager-operator 2026-01-20T10:56:30.700009803+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="resolving sources" id=vM+2b namespace=cert-manager-operator 2026-01-20T10:56:30.700009803+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="checking if subscriptions need update" id=vM+2b namespace=cert-manager-operator 2026-01-20T10:56:30.708337437+00:00 stderr F time="2026-01-20T10:56:30Z" level=info msg="No subscriptions were found in namespace cert-manager-operator" id=vM+2b namespace=cert-manager-operator 2026-01-20T10:56:33.648937545+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="resolving sources" id=nsAG9 namespace=cert-manager 2026-01-20T10:56:33.648937545+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="checking if subscriptions need update" id=nsAG9 namespace=cert-manager 2026-01-20T10:56:33.660349562+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="No subscriptions were found in namespace cert-manager" id=nsAG9 namespace=cert-manager 2026-01-20T10:56:33.666587940+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="resolving sources" id=EITwz namespace=cert-manager 2026-01-20T10:56:33.666650111+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="checking if subscriptions need update" id=EITwz namespace=cert-manager 2026-01-20T10:56:33.673220028+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="No subscriptions were found in namespace 
cert-manager" id=EITwz namespace=cert-manager 2026-01-20T10:56:33.678291804+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="resolving sources" id=+j10J namespace=cert-manager 2026-01-20T10:56:33.678379077+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="checking if subscriptions need update" id=+j10J namespace=cert-manager 2026-01-20T10:56:33.688180271+00:00 stderr F time="2026-01-20T10:56:33Z" level=info msg="No subscriptions were found in namespace cert-manager" id=+j10J namespace=cert-manager 2026-01-20T10:58:19.732335556+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.732335556+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.732407968+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.732407968+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.734577683+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.734681256+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.734681256+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="of 1 
pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.734681256+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=EXL3U 2026-01-20T10:58:19.734681256+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.734681256+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.735152117+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.735210409+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.735210409+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.735210409+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="checked registry server health" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=NuVoU 2026-01-20T10:58:19.735224279+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.735224279+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.742561245+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.742717220+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.742717220+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-mpjb7 current-pod.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.742842753+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.742842753+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EXL3U 2026-01-20T10:58:19.745162561+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="searching 
for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.745247153+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.745247153+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-6m4w2 current-pod.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.745414258+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.745414258+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NuVoU 2026-01-20T10:58:19.750322902+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:19.750322902+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:19.751092082+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:19.751092082+00:00 stderr F time="2026-01-20T10:58:19Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:19.752494258+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:19.752562410+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:19.752562410+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:19.752575410+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=+6WUf 2026-01-20T10:58:19.752575410+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:19.752586450+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:19.752944309+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:19.753010021+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="evaluating 
current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:19.753010021+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:19.753010021+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=k98v6 2026-01-20T10:58:19.753024061+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:19.753024061+00:00 stderr F time="2026-01-20T10:58:19Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:20.126236791+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:20.126374544+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:20.126374544+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-2mx7j current-pod.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:20.126472236+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:20.126472236+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+6WUf 2026-01-20T10:58:20.326534689+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:20.326534689+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:20.326534689+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-2nxg8 current-pod.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:20.326534689+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k98v6 2026-01-20T10:58:20.326534689+00:00 stderr F time="2026-01-20T10:58:20Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k98v6 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.log0000644000175000017500000560135215133657716033107 0ustar zuulzuul2025-08-13T19:59:14.867388586+00:00 stderr F time="2025-08-13T19:59:14Z" level=info msg="log level info" 2025-08-13T19:59:14.944315019+00:00 stderr F time="2025-08-13T19:59:14Z" level=info msg="TLS keys set, using https for metrics" 2025-08-13T19:59:15.055645443+00:00 stderr F W0813 19:59:15.055574 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:59:15.250577380+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="Using in-cluster kube client config" 2025-08-13T19:59:16.004044899+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="Using in-cluster kube client config" 2025-08-13T19:59:16.313683285+00:00 stderr F W0813 19:59:16.310301 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-08-13T19:59:19.740591563+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="apps/v1, Resource=deployments" 2025-08-13T19:59:19.790674340+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterroles" 2025-08-13T19:59:19.801225861+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterrolebindings" 2025-08-13T19:59:22.002547450+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="detected ability to filter informers" canFilter=true 2025-08-13T19:59:22.214694658+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="registering owner reference fixer" gvr="/v1, Resource=services" 2025-08-13T19:59:22.327504744+00:00 stderr F W0813 19:59:22.326666 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:59:22.425442905+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:22.425657291+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="operator ready" 2025-08-13T19:59:22.425657291+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting informers..." 2025-08-13T19:59:22.513393653+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="informers started" 2025-08-13T19:59:22.524531920+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="waiting for caches to sync..." 2025-08-13T19:59:23.026938681+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting workers..." 
2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=1tszK namespace=default 2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=1tszK namespace=default 2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=bkTke namespace=hostpath-provisioner 2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=bkTke namespace=hostpath-provisioner 2025-08-13T19:59:23.096940857+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="connection established. 
cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="operator ready" 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting informers..." 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="informers started" 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="waiting for caches to sync..." 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=bkTke namespace=hostpath-provisioner 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=SfIzQ namespace=kube-node-lease 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=SfIzQ namespace=kube-node-lease 2025-08-13T19:59:23.284758571+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting workers..." 
2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=SfIzQ namespace=kube-node-lease 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace default" id=1tszK namespace=default 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=Nm0y1 namespace=kube-public 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=Nm0y1 namespace=kube-public 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=Fd4f+ namespace=kube-system 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=Fd4f+ namespace=kube-system 2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-public" id=Nm0y1 namespace=kube-public 2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=IcxZR namespace=openshift 2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=IcxZR namespace=openshift 2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-system" id=Fd4f+ namespace=kube-system 2025-08-13T19:59:23.516924149+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.523150956+00:00 stderr F time="2025-08-13T19:59:23Z" level=info 
msg="resolving sources" id=NduNP namespace=openshift-apiserver 2025-08-13T19:59:23.523150956+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=NduNP namespace=openshift-apiserver 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=L1+GA 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.530038362+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE 
2025-08-13T19:59:23.530123125+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=2zAJE 2025-08-13T19:59:23.530157036+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.530187557+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=NduNP namespace=openshift-apiserver 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=vgUXD namespace=openshift-apiserver-operator 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=vgUXD namespace=openshift-apiserver-operator 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace openshift" id=IcxZR namespace=openshift 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=A0rZ7 namespace=openshift-authentication 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=A0rZ7 namespace=openshift-authentication 2025-08-13T19:59:24.726412565+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=A0rZ7 namespace=openshift-authentication 2025-08-13T19:59:24.726522058+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="resolving sources" id=N9wn5 
namespace=openshift-authentication-operator 2025-08-13T19:59:24.726556649+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="checking if subscriptions need update" id=N9wn5 namespace=openshift-authentication-operator 2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=vgUXD namespace=openshift-apiserver-operator 2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="resolving sources" id=NVR/s namespace=openshift-cloud-network-config-controller 2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="checking if subscriptions need update" id=NVR/s namespace=openshift-cloud-network-config-controller 2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=N9wn5 namespace=openshift-authentication-operator 2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=gBgIJ namespace=openshift-cloud-platform-infra 2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=gBgIJ namespace=openshift-cloud-platform-infra 2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=NVR/s namespace=openshift-cloud-network-config-controller 2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=rCCoh namespace=openshift-cluster-machine-approver 2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=rCCoh namespace=openshift-cluster-machine-approver 2025-08-13T19:59:25.805455853+00:00 stderr F time="2025-08-13T19:59:25Z" 
level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:25.805642469+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:25.805682480+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:25.805870025+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:25.807135631+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:25.807258165+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:25.807296346+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:25.807404539+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:25.807543523+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=gBgIJ namespace=openshift-cloud-platform-infra 2025-08-13T19:59:25.807607975+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=wU+NU namespace=openshift-cluster-samples-operator 2025-08-13T19:59:25.807646726+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=wU+NU namespace=openshift-cluster-samples-operator 2025-08-13T19:59:25.818163626+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2025-08-13T19:59:25.822749276+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING" 2025-08-13T19:59:26.200523475+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace 
openshift-cluster-machine-approver" id=rCCoh namespace=openshift-cluster-machine-approver 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=P7R3s namespace=openshift-cluster-storage-operator 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=P7R3s namespace=openshift-cluster-storage-operator 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=wU+NU namespace=openshift-cluster-samples-operator 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=iF9KB namespace=openshift-cluster-version 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=iF9KB namespace=openshift-cluster-version 2025-08-13T19:59:26.315502542+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:26.323310225+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.323437499+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.630627235+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.631473109+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.631561792+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.631884371+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=yW7NQ 2025-08-13T19:59:26.631966313+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.632391166+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.633056135+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=iF9KB namespace=openshift-cluster-version 2025-08-13T19:59:26.633345513+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=/0z19 namespace=openshift-config 2025-08-13T19:59:26.673319412+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=/0z19 namespace=openshift-config 2025-08-13T19:59:26.707423114+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=P7R3s namespace=openshift-cluster-storage-operator 2025-08-13T19:59:26.707529307+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=8nG21 
namespace=openshift-config-managed 2025-08-13T19:59:26.707577859+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=8nG21 namespace=openshift-config-managed 2025-08-13T19:59:26.740619411+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.744728288+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.744964494+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.745052047+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=U8uxi 2025-08-13T19:59:26.745615003+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.745987864+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=8nG21 namespace=openshift-config-managed 2025-08-13T19:59:27.068613230+00:00 stderr F 
time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=Wm9lv namespace=openshift-config-operator 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=Wm9lv namespace=openshift-config-operator 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config" id=/0z19 namespace=openshift-config 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=9Gh0e namespace=openshift-console 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=9Gh0e namespace=openshift-console 2025-08-13T19:59:27.126553402+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:27.126753548+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:27.127020695+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:27.127736336+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:27.167088657+00:00 
stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-08-13T19:59:27.264241327+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console" id=9Gh0e namespace=openshift-console 2025-08-13T19:59:27.264353730+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=T/C3j namespace=openshift-console-operator 2025-08-13T19:59:27.264394321+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=T/C3j namespace=openshift-console-operator 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" 
id=Wm9lv namespace=openshift-config-operator 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=4m9Zr namespace=openshift-console-user-settings 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=4m9Zr namespace=openshift-console-user-settings 2025-08-13T19:59:27.341884690+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=T/C3j namespace=openshift-console-operator 2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=8fXaO namespace=openshift-controller-manager 2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=8fXaO namespace=openshift-controller-manager 2025-08-13T19:59:27.496996842+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.497136946+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.737028164+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=4m9Zr namespace=openshift-console-user-settings 2025-08-13T19:59:27.737266901+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=chpL+ namespace=openshift-controller-manager-operator 2025-08-13T19:59:27.737314662+00:00 stderr F time="2025-08-13T19:59:27Z" level=info 
msg="checking if subscriptions need update" id=chpL+ namespace=openshift-controller-manager-operator 2025-08-13T19:59:27.737550989+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:27.737615391+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="ensuring registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=8fXaO namespace=openshift-controller-manager 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:27.843101857+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" 
level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=chpL+ namespace=openshift-controller-manager-operator 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.042618355+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:28.042629975+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.042934364+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" 
level=info msg="checking if subscriptions need update" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=nujbR namespace=openshift-image-registry 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=nujbR namespace=openshift-image-registry 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.340480004+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.340734442+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.340734442+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.341944786+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.348934225+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.352167718+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=nujbR namespace=openshift-image-registry 
2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:28.443163431+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.623900403+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.623900403+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:28.766033965+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.329208979+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:29.329330142+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.329376943+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.329596840+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330245548+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330298460+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330675310+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.398870494+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.399252315+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.428178660+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:29.428306033+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.428348624+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.551404102+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.551521896+00:00 stderr F time="2025-08-13T19:59:29Z" level=info 
msg="resolving sources" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:29.551563397+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:29.618814094+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F 
time="2025-08-13T19:59:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="resolving sources" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="checking if subscriptions need update" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="resolving sources" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info 
msg="checking if subscriptions need update" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:31.003420522+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:31.004217255+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.004274696+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.141924400+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:31.142061084+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.142096805+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.355001094+00:00 
stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:31.390212098+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.422030935+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.422030935+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" 
level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.391256522+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:32.584702026+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:32.585387936+00:00 stderr F time="2025-08-13T19:59:32Z" level=info 
msg="resolving sources" id=OIaZW namespace=openshift-kube-storage-version-migrator 2025-08-13T19:59:32.585524430+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=OIaZW namespace=openshift-kube-storage-version-migrator 2025-08-13T19:59:32.593423435+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=CW6fM namespace=openshift-operators 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=n+zaX 2025-08-13T19:59:32.669492833+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669492833+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669542154+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=OIaZW namespace=openshift-kube-storage-version-migrator 
2025-08-13T19:59:32.669727270+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T19:59:32.669768551+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T19:59:32.714419844+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T19:59:32.715084313+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=Tvm/D namespace=openshift-machine-api
2025-08-13T19:59:32.715348650+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=Tvm/D namespace=openshift-machine-api
2025-08-13T19:59:32.792942022+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=CW6fM namespace=openshift-operators
2025-08-13T19:59:32.952874511+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=Tvm/D namespace=openshift-machine-api
2025-08-13T19:59:32.953173450+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=xPiUN namespace=openshift-machine-config-operator
2025-08-13T19:59:32.953307313+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=xPiUN namespace=openshift-machine-config-operator
2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA
2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA
2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA
2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA
2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=xPiUN namespace=openshift-machine-config-operator
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=DH/KU namespace=openshift-marketplace
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=DH/KU namespace=openshift-marketplace
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=h2NDS
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=CW6fM namespace=openshift-operators
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=7gj+h namespace=openshift-monitoring
2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=7gj+h namespace=openshift-monitoring
2025-08-13T19:59:33.147542840+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=7gj+h namespace=openshift-monitoring
2025-08-13T19:59:33.147659554+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=BL42U namespace=openshift-multus
2025-08-13T19:59:33.147693865+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=BL42U namespace=openshift-multus
2025-08-13T19:59:33.147978023+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=DH/KU namespace=openshift-marketplace
2025-08-13T19:59:33.148037714+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=cyGEC namespace=openshift-network-diagnostics
2025-08-13T19:59:33.148078215+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=cyGEC namespace=openshift-network-diagnostics
2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS
2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.310597278+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.310660420+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.310660420+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.310676000+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=EG+R5
2025-08-13T19:59:33.310688081+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.310688081+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.385050291+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.385050291+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.385362430+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=BL42U namespace=openshift-multus
2025-08-13T19:59:33.385441562+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=4wzOg namespace=openshift-network-node-identity
2025-08-13T19:59:33.385441562+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=4wzOg namespace=openshift-network-node-identity
2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=tZPiK
2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.674899343+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=cyGEC namespace=openshift-network-diagnostics
2025-08-13T19:59:33.674899343+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=IzSRQ namespace=openshift-network-operator
2025-08-13T19:59:33.676986902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.706623267+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.706623267+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.706731020+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5
2025-08-13T19:59:33.740440261+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=4wzOg namespace=openshift-network-node-identity
2025-08-13T19:59:33.740507163+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=c3ECV namespace=openshift-node
2025-08-13T19:59:33.740507163+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=c3ECV namespace=openshift-node
2025-08-13T19:59:33.741243924+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:33.840507144+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=IzSRQ namespace=openshift-network-operator
2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-node" id=c3ECV namespace=openshift-node
2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=X8taY namespace=openshift-nutanix-infra
2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=X8taY namespace=openshift-nutanix-infra
2025-08-13T19:59:33.905607960+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK
2025-08-13T19:59:33.905647991+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:33.905750974+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:33.939003481+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:33.939536447+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:33.939582898+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:33.939737092+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=xySjj
2025-08-13T19:59:33.939947768+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:33.940077102+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:33.985690332+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj
2025-08-13T19:59:34.161310969+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=IzSRQ namespace=openshift-network-operator
2025-08-13T19:59:34.161679489+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=MMlDc namespace=openshift-oauth-apiserver
2025-08-13T19:59:34.162027529+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=MMlDc namespace=openshift-oauth-apiserver
2025-08-13T19:59:34.194938967+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:34.197689775+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=X8taY namespace=openshift-nutanix-infra
2025-08-13T19:59:34.198110157+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=CD0QG namespace=openshift-openstack-infra
2025-08-13T19:59:34.198179299+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=CD0QG namespace=openshift-openstack-infra
2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=OxmCH
2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:34.411095919+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.411177581+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.473884238+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.474142106+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.474217108+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.474275190+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1LWtC
2025-08-13T19:59:34.474325011+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.474374332+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=MMlDc namespace=openshift-oauth-apiserver
2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=Cbtq2 namespace=openshift-operator-lifecycle-manager
2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=Cbtq2 namespace=openshift-operator-lifecycle-manager
2025-08-13T19:59:34.641284630+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=CD0QG namespace=openshift-openstack-infra
2025-08-13T19:59:34.655651460+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=8Hevs namespace=openshift-operators
2025-08-13T19:59:34.655940278+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=8Hevs namespace=openshift-operators
2025-08-13T19:59:34.703154954+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.703583876+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.703643278+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.703752451+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC
2025-08-13T19:59:34.826184401+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=Cbtq2 namespace=openshift-operator-lifecycle-manager
2025-08-13T19:59:34.826184401+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=9c8wv namespace=openshift-ovirt-infra
2025-08-13T19:59:34.826243543+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=9c8wv namespace=openshift-ovirt-infra
2025-08-13T19:59:34.956635909+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:34.956762983+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:35.028125617+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH
2025-08-13T19:59:35.028125617+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.028139348+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=8Hevs namespace=openshift-operators
2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=YWQxZ namespace=openshift-ovn-kubernetes
2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=YWQxZ namespace=openshift-ovn-kubernetes
2025-08-13T19:59:35.104361450+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wkF6t
2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=9c8wv namespace=openshift-ovirt-infra
2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=e+NiR namespace=openshift-route-controller-manager
2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=e+NiR namespace=openshift-route-controller-manager
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=YWQxZ namespace=openshift-ovn-kubernetes
2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=g+Jzv namespace=openshift-service-ca
2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=g+Jzv namespace=openshift-service-ca
2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=e+NiR namespace=openshift-route-controller-manager
2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=/7VRD namespace=openshift-service-ca-operator
2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=/7VRD namespace=openshift-service-ca-operator
2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=g+Jzv namespace=openshift-service-ca
2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=V0Mj5 namespace=openshift-user-workload-monitoring
2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=V0Mj5 namespace=openshift-user-workload-monitoring
2025-08-13T19:59:35.939629879+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=/7VRD namespace=openshift-service-ca-operator
2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="resolving sources" id=uVRCy namespace=openshift-vsphere-infra
2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="checking if subscriptions need update" id=uVRCy namespace=openshift-vsphere-infra
2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:36.283543052+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=V0Mj5 namespace=openshift-user-workload-monitoring
2025-08-13T19:59:36.467016732+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=uVRCy namespace=openshift-vsphere-infra
2025-08-13T19:59:39.198040701+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.198040701+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 
2025-08-13T19:59:39.386097141+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.386338798+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.386388440+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.386757920+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.433271946+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.433271946+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.451519856+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.451879316+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.451936398+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.451982749+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=06pP6 2025-08-13T19:59:39.452017720+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.452048641+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.583471257+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.583720424+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.583761046+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.584012163+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.857200820+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.857328944+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F 
time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:40.178530660+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:42.951135533+00:00 stderr F time="2025-08-13T19:59:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:42.983729312+00:00 stderr F time="2025-08-13T19:59:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 
2025-08-13T19:59:43.117258258+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.343336543+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:46.735288911+00:00 stderr F time="2025-08-13T19:59:46Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:46.735288911+00:00 stderr F time="2025-08-13T19:59:46Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169190439+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0nGjK 2025-08-13T19:59:47.169573950+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169573950+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:49.859424817+00:00 stderr F 
time="2025-08-13T19:59:49Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:49.859482799+00:00 stderr F time="2025-08-13T19:59:49Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.197821493+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.197821493+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.348487368+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F 
time="2025-08-13T19:59:50Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.609499119+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.944928531+00:00 stderr F 
time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.207667451+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.207943779+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.208099823+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.208140744+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=wt1Gj 2025-08-13T19:59:51.208179315+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.208221417+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.344408219+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:51.344730288+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:51.344875572+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:51.345033086+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:52.079561575+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.079561575+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.145560836+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching 
for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.145821764+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.145901376+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.146001069+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=gPEn3
2025-08-13T19:59:52.146040170+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.146079781+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.312279509+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.336229291+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.336381076+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.336600242+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.568228405+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.568329207+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.613310480+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.648750480+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.648970566+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.650705826+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=wkEPj
2025-08-13T19:59:52.650861700+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.650908281+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.843163742+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.843427369+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.843467740+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.843584284+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:55.809157948+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:55.809157948+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:55.814073058+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:55.814073058+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189320515+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:57.570590918+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:57.571053191+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:57.571149394+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=J6dpF
2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:57.691340420+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=xg3sJ
2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:58.146626098+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:58.147017139+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:58.147089101+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:58.147257586+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ
2025-08-13T19:59:58.301985327+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:58.302308336+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:58.302525802+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:58.302713597+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF
2025-08-13T19:59:58.328573935+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.328699418+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.445077216+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=LLmbp
2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.536339537+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.536660586+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.536703237+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.536883623+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp
2025-08-13T19:59:58.690032828+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.690915883+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Io3rP
2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP
2025-08-13T19:59:59.388897670+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.389099906+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.402952380+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.404585347+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.405404920+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.405570945+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=rcxaT
2025-08-13T19:59:59.406018848+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.407268553+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.451741681+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.452075531+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.452189124+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT
2025-08-13T19:59:59.452303107+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT
2025-08-13T20:00:00.491755946+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:00.523464760+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:00.695830604+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:00.858367957+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:00.858768298+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:00.859165879+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=b3BRZ
2025-08-13T20:00:00.859254962+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:00.895947778+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ
2025-08-13T20:00:01.397743552+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.397743552+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1zHG8
2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8
2025-08-13T20:00:10.257916954+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:10.257916954+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:10.618114084+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:10.618688671+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:10.618764433+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:10.633569755+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=215t+
2025-08-13T20:00:10.633639977+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:10.633680268+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:11.625957411+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:11.626000152+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:11.626000152+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:11.626870777+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+
2025-08-13T20:00:11.986223504+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:11.986223504+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hpl2D
2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.858314771+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D
2025-08-13T20:00:13.518068723+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP
2025-08-13T20:00:13.518068723+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP
2025-08-13T20:00:13.668883343+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP
2025-08-13T20:00:13.671459407+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP
2025-08-13T20:00:13.678907139+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.679191017+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=9FaCP 2025-08-13T20:00:13.679475516+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.679511957+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.735700629+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:13.737032737+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477549732+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.490480350+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.490755508+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491100338+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491267593+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=iRqKV 2025-08-13T20:00:14.491357865+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491395416+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 
2025-08-13T20:00:16.614995068+00:00 stderr F time="2025-08-13T20:00:16Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:16.615298246+00:00 stderr F time="2025-08-13T20:00:16Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 
2025-08-13T20:00:17.102506279+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:22.519198049+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.519198049+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F 
time="2025-08-13T20:00:22Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:25.342577698+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.342733923+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 
2025-08-13T20:00:25.382118736+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:27.574588102+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.575607891+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.575607891+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.578337239+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.665193065+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.705098873+00:00 
stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:28.159993484+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.159993484+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289965351+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.304474084+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.304474084+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.377364413+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378421583+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378490995+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378537576+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Gc25i 2025-08-13T20:00:28.378580947+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="registry state good" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378627869+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:57.666292432+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:00:57.666721735+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:00:57.704953045+00:00 stderr F time="2025-08-13T20:00:57Z" level=info 
msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:00:57.704953045+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.329597053+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.348178953+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.348262555+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.348433920+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=kPFEg 2025-08-13T20:01:03.348525503+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.348570214+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info 
msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.388328095+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.388328095+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520350389+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520576086+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520616927+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520654998+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=2qdyS 2025-08-13T20:01:07.520690729+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="registry state good" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520727110+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.528978965+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.529331725+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.529379006+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.529418868+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=IRiQU 2025-08-13T20:01:07.529449949+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.529480639+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:08.064295639+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:08.064629838+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:08.064711361+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:08.065045300+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:08.096452916+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:08.097157546+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:08.097301700+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:08.097634070+00:00 stderr F 
time="2025-08-13T20:01:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:37.338197592+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:37.338363566+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:37.338668485+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:37.338668485+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.216011662+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo 
2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.799299583+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:51.799687804+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:51.799687804+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:51.799711044+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=fpzh6 2025-08-13T20:01:51.799892539+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 
2025-08-13T20:01:51.799892539+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.318031038+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.318031038+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:01.338527872+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:01.338527872+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.455292138+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd 
2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F 
time="2025-08-13T20:02:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 
2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:29.459578992+00:00 stderr F time="2025-08-13T20:02:29Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:02:31.320216660+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKVh 2025-08-13T20:02:31.320359594+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OFmS9 2025-08-13T20:02:31.320380295+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OFmS9 2025-08-13T20:02:31.320392025+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry 
server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKVh 2025-08-13T20:02:31.324995286+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=buKVh 2025-08-13T20:02:31.325200072+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=OFmS9 2025-08-13T20:02:31.325309705+00:00 stderr F E0813 20:02:31.325252 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.325442679+00:00 stderr F E0813 20:02:31.325238 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.331356318+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YuG3U 2025-08-13T20:02:31.331434770+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YuG3U 2025-08-13T20:02:31.332093909+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8sZdC 2025-08-13T20:02:31.332093909+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8sZdC 2025-08-13T20:02:31.335483205+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=YuG3U 2025-08-13T20:02:31.335630260+00:00 stderr F E0813 20:02:31.335536 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.336220676+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=8sZdC 2025-08-13T20:02:31.336242677+00:00 stderr F E0813 20:02:31.336211 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.347115237+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=tUqdu 2025-08-13T20:02:31.347115237+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=tUqdu 2025-08-13T20:02:31.347680633+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fxfKq 2025-08-13T20:02:31.347680633+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fxfKq 2025-08-13T20:02:31.350528075+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=tUqdu 2025-08-13T20:02:31.350528075+00:00 stderr F E0813 20:02:31.350496 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.359026477+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fxfKq 2025-08-13T20:02:31.359140730+00:00 stderr F E0813 20:02:31.359109 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.371356229+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VKUpv 2025-08-13T20:02:31.371457772+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VKUpv 2025-08-13T20:02:31.377650838+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=VKUpv 2025-08-13T20:02:31.377997318+00:00 stderr F E0813 20:02:31.377921 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.379327976+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JDVSq 2025-08-13T20:02:31.379422999+00:00 stderr F 
time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JDVSq 2025-08-13T20:02:31.382383563+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JDVSq 2025-08-13T20:02:31.382383563+00:00 stderr F E0813 20:02:31.382323 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.421214611+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=x39iE 2025-08-13T20:02:31.421214611+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=x39iE 2025-08-13T20:02:31.422724244+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3OfOW 2025-08-13T20:02:31.422936740+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3OfOW 2025-08-13T20:02:31.423440045+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=x39iE 2025-08-13T20:02:31.423440045+00:00 stderr F E0813 20:02:31.423385 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.425895305+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=3OfOW 2025-08-13T20:02:31.425982657+00:00 stderr F E0813 20:02:31.425952 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.504993781+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=p3gN7 2025-08-13T20:02:31.504993781+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=p3gN7 2025-08-13T20:02:31.510025254+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1Lp29 
2025-08-13T20:02:31.510164538+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1Lp29 2025-08-13T20:02:31.524701023+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=p3gN7 2025-08-13T20:02:31.524701023+00:00 stderr F E0813 20:02:31.524677 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.685335536+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wq+F6 2025-08-13T20:02:31.685376547+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wq+F6 2025-08-13T20:02:31.723889646+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=1Lp29 2025-08-13T20:02:31.723946007+00:00 stderr F E0813 20:02:31.723879 1 queueinformer_operator.go:319] sync {"update" 
"openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.884613581+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cysEU 2025-08-13T20:02:31.884613581+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cysEU 2025-08-13T20:02:31.924894470+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=wq+F6 2025-08-13T20:02:31.924894470+00:00 stderr F E0813 20:02:31.924867 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.124681629+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=cysEU 2025-08-13T20:02:32.124681629+00:00 stderr F E0813 20:02:32.124632 1 queueinformer_operator.go:319] sync {"update" 
"openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.245592088+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y7GJB 2025-08-13T20:02:32.245592088+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y7GJB 2025-08-13T20:02:32.324189910+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=y7GJB 2025-08-13T20:02:32.324312864+00:00 stderr F E0813 20:02:32.324289 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.446008686+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uuH+l 2025-08-13T20:02:32.446008686+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uuH+l 2025-08-13T20:02:32.524368101+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=uuH+l 2025-08-13T20:02:32.524368101+00:00 stderr F E0813 20:02:32.524259 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.964896128+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=59J53 2025-08-13T20:02:32.964928159+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=59J53 2025-08-13T20:02:32.968003557+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=59J53 2025-08-13T20:02:33.165883152+00:00 stderr F time="2025-08-13T20:02:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ezGB7 2025-08-13T20:02:33.165883152+00:00 stderr F time="2025-08-13T20:02:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ezGB7 2025-08-13T20:02:33.169296029+00:00 stderr F time="2025-08-13T20:02:33Z" level=error 
msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ezGB7 2025-08-13T20:02:34.461571413+00:00 stderr F time="2025-08-13T20:02:34Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:02:41.180077903+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EdhsD 2025-08-13T20:02:41.180258628+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EdhsD 2025-08-13T20:02:41.184228181+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=EdhsD 2025-08-13T20:02:41.184272712+00:00 stderr F E0813 20:02:41.184250 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.189750679+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=Oj8ti 2025-08-13T20:02:41.189907113+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Oj8ti 2025-08-13T20:02:41.191991262+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Oj8ti 2025-08-13T20:02:41.192127796+00:00 stderr F E0813 20:02:41.192004 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.194489064+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cne/A 2025-08-13T20:02:41.194563256+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cne/A 2025-08-13T20:02:41.196163562+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Cne/A 2025-08-13T20:02:41.196213943+00:00 stderr F E0813 20:02:41.196155 1 queueinformer_operator.go:319] sync 
{"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.201539315+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=upscx 2025-08-13T20:02:41.201573576+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=upscx 2025-08-13T20:02:41.202747019+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fcrac 2025-08-13T20:02:41.202912874+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fcrac 2025-08-13T20:02:41.203769559+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=upscx 2025-08-13T20:02:41.203863561+00:00 stderr F E0813 20:02:41.203812 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.206941009+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fcrac 2025-08-13T20:02:41.206962590+00:00 stderr F E0813 20:02:41.206940 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.214092483+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5gZ8D 2025-08-13T20:02:41.214092483+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5gZ8D 2025-08-13T20:02:41.216442510+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=5gZ8D 2025-08-13T20:02:41.216522152+00:00 stderr F E0813 20:02:41.216451 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.228136414+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eTECs 2025-08-13T20:02:41.228136414+00:00 
stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eTECs 2025-08-13T20:02:41.230565593+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eTECs 2025-08-13T20:02:41.230565593+00:00 stderr F E0813 20:02:41.230267 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.238051627+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=kH+Io 2025-08-13T20:02:41.238081248+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=kH+Io 2025-08-13T20:02:41.240472596+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=kH+Io 2025-08-13T20:02:41.240557138+00:00 stderr F E0813 20:02:41.240493 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.271165252+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ogtb3 2025-08-13T20:02:41.271206903+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ogtb3 2025-08-13T20:02:41.274379723+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ogtb3 2025-08-13T20:02:41.274413014+00:00 stderr F E0813 20:02:41.274362 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.281761504+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RsNAH 2025-08-13T20:02:41.281835696+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RsNAH 2025-08-13T20:02:41.284457551+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=RsNAH 2025-08-13T20:02:41.284457551+00:00 stderr F E0813 20:02:41.284415 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.355263271+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eeoYO 2025-08-13T20:02:41.355263271+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eeoYO 2025-08-13T20:02:41.365502733+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=R8X3r 2025-08-13T20:02:41.365502733+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=R8X3r 2025-08-13T20:02:41.385820093+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eeoYO 2025-08-13T20:02:41.385903655+00:00 stderr F E0813 20:02:41.385835 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.546526037+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=vpLbh 2025-08-13T20:02:41.546526037+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=vpLbh 2025-08-13T20:02:41.584814309+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=R8X3r 2025-08-13T20:02:41.584935812+00:00 stderr F E0813 20:02:41.584887 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.745618686+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z8Tk9 2025-08-13T20:02:41.745735399+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z8Tk9 2025-08-13T20:02:41.784504675+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=vpLbh 2025-08-13T20:02:41.784504675+00:00 stderr F E0813 20:02:41.784476 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.985068617+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=z8Tk9 2025-08-13T20:02:41.985068617+00:00 stderr F E0813 20:02:41.984942 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.105127703+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fKVKG 2025-08-13T20:02:42.105127703+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fKVKG 2025-08-13T20:02:42.184517948+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fKVKG 2025-08-13T20:02:42.184564359+00:00 stderr F E0813 20:02:42.184495 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.305245692+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z/+5s 2025-08-13T20:02:42.305245692+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z/+5s 2025-08-13T20:02:42.384638557+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=z/+5s 2025-08-13T20:02:42.384638557+00:00 stderr F E0813 20:02:42.384559 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.825919346+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=e2NJz 2025-08-13T20:02:42.825919346+00:00 stderr F time="2025-08-13T20:02:42Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=e2NJz 2025-08-13T20:02:42.828727216+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=e2NJz 2025-08-13T20:02:42.830009803+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:02:43.025300714+00:00 stderr F time="2025-08-13T20:02:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=pBUc+ 2025-08-13T20:02:43.025300714+00:00 stderr F time="2025-08-13T20:02:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=pBUc+ 2025-08-13T20:02:43.028725332+00:00 stderr F time="2025-08-13T20:02:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=pBUc+ 2025-08-13T20:02:47.833958362+00:00 stderr F time="2025-08-13T20:02:47Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: 
connection refused" 2025-08-13T20:06:11.493510853+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY" 2025-08-13T20:06:11.513128685+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.513128685+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.527002293+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="resolving sources" id=k2b6s namespace=openshift-marketplace 2025-08-13T20:06:11.527002293+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="checking if subscriptions need update" id=k2b6s namespace=openshift-marketplace 2025-08-13T20:06:11.530683628+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=k2b6s namespace=openshift-marketplace 2025-08-13T20:06:11.564933489+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.613763127+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.613963973+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.614943581+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=6ycoF 2025-08-13T20:06:11.615105376+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.615142827+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.717925600+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.718236119+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.718306171+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.719213827+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:13.735607719+00:00 stderr F time="2025-08-13T20:06:13Z" 
level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY" 2025-08-13T20:06:13.735857166+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="resolving sources" id=emBXK namespace=openshift-marketplace 2025-08-13T20:06:13.735949859+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="checking if subscriptions need update" id=emBXK namespace=openshift-marketplace 2025-08-13T20:06:13.736274268+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.738287036+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.740709755+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=emBXK namespace=openshift-marketplace 2025-08-13T20:06:13.741268591+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F 
time="2025-08-13T20:06:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.766092952+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.766359430+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.766359430+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.766474793+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:26.110215478+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.110277409+00:00 
stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.113314296+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.113314296+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr 
F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114995094+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Yd8EB 2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.115174030+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F 
time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update 
check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.135973235+00:00 stderr F time="2025-08-13T20:06:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Yd8EB 2025-08-13T20:06:26.139168817+00:00 stderr F E0813 20:06:26.139011 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:26.139256829+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.139256829+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.146991231+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 
have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.153847167+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.153847167+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283168110+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and 
matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=O7Sr0 2025-08-13T20:06:26.283317255+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283317255+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.485976288+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.486146673+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.486161263+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.486262386+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.500887035+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.500887035+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.083998242+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:27.096395597+00:00 stderr F time="2025-08-13T20:06:27Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"redhat-operators\": the object has been modified; please apply your changes to the latest version and try again" id=O7Sr0 2025-08-13T20:06:27.096551121+00:00 stderr F E0813 20:06:27.096522 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-operators": the object has been modified; please apply your changes to the latest version and try 
again 2025-08-13T20:06:27.102279406+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.102279406+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481279009+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481479894+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481529046+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481574137+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=+4AoL 2025-08-13T20:06:27.481606088+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481635659+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.685135696+00:00 
stderr F time="2025-08-13T20:06:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.685418774+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.685461056+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.685586239+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.699274701+00:00 stderr F time="2025-08-13T20:06:27Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Zh+dj 2025-08-13T20:06:27.699274701+00:00 stderr F E0813 20:06:27.698399 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:27.709208696+00:00 stderr F 
time="2025-08-13T20:06:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:27.709451833+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.085910023+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.086225762+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.086282274+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.086363386+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Oq41s 2025-08-13T20:06:28.086406027+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.086482439+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.282898944+00:00 stderr F 
time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:28.283227293+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:28.283351627+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:28.283547252+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:28.302014561+00:00 stderr F time="2025-08-13T20:06:28Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"redhat-operators\": the object has been modified; please apply your changes to the latest version and try again" id=+4AoL 2025-08-13T20:06:28.302014561+00:00 stderr F E0813 20:06:28.301962 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:28.302055262+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="syncing 
catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.302055262+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685616206+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685669818+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685683108+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Tff8j
2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.883888374+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.884000247+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.884000247+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.884082709+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.902330863+00:00 stderr F time="2025-08-13T20:06:28Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Oq41s
2025-08-13T20:06:28.902330863+00:00 stderr F E0813 20:06:28.902313 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:28.904563686+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:28.904659029+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.484531994+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:29.484769601+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:29.484769601+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:29.484890994+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:29.485084470+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:29.485084470+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.081941562+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.082042295+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.082042295+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.082058995+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Zknej
2025-08-13T20:06:30.082109716+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.082109716+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:30.693063231+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:30.693214205+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:30.693433561+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:30.693498553+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:31.087252319+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=CNIN0
2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:31.410178215+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:31.410178215+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:31.815722402+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:31.815722402+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:31.889601161+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:31.889944851+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:31.889994392+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:31.890068284+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=3/b+M
2025-08-13T20:06:31.890100255+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:31.890156607+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:32.086024412+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:32.086065673+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:32.086075844+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:32.610152570+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:32.610152570+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0
2025-08-13T20:06:32.679106567+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:32.688346182+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M
2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:32.881690145+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:32.882261601+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=8ka7C
2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=q32a1
2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.682592218+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="catalog update required at 2025-08-13 20:06:33.682825164 +0000 UTC m=+449.334213281" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1
2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:34.203693937+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C
2025-08-13T20:06:34.230721402+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:34.231202476+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:34.294228903+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:34.294601363+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:34.294688216+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:34.294835090+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GYogS
2025-08-13T20:06:34.294933823+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:34.295012845+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:34.531050753+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:34.531341911+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:34.531425763+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:34.531661660+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=EgTk3
2025-08-13T20:06:34.531742733+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:34.531845646+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:35.087074635+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:35.087519547+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:35.087647251+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:35.087946150+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS
2025-08-13T20:06:35.088233678+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:35.088348361+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:35.319184129+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:35.319619572+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:35.319726125+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:35.320045724+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="catalog update required at 2025-08-13 20:06:35.319971052 +0000 UTC m=+450.971359429" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:35.492294943+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:35.492363785+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:35.492363785+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=daLHK
2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:35.716204932+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3
2025-08-13T20:06:35.739636144+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:35.739636144+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=em2ci
2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="catalog update required at 2025-08-13 20:06:36.290062885 +0000 UTC m=+451.941451072" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:36.701448760+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK
2025-08-13T20:06:36.733992323+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ
2025-08-13T20:06:36.733992323+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ
2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci
2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC
2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC
2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ
2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ
2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ
2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=V1QwJ
2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ
2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ
2025-08-13T20:06:37.283263672+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC
2025-08-13T20:06:37.283379175+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC
2025-08-13T20:06:37.283379175+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC
2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="checked registry server health"
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XhBeC 2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="catalog update required at 2025-08-13 20:06:37.884757656 +0000 UTC m=+453.536145863" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" 
level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.092241975+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY" 2025-08-13T20:06:38.092323467+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="resolving sources" id=qJFDI namespace=openshift-marketplace 2025-08-13T20:06:38.092323467+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checking if subscriptions need update" id=qJFDI namespace=openshift-marketplace 2025-08-13T20:06:38.104610420+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=qJFDI 
namespace=openshift-marketplace 2025-08-13T20:06:38.315227018+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:38.355523894+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.355523894+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.484491041+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484713547+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484754199+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484928834+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=4TwJq 2025-08-13T20:06:38.484971205+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.485004926+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.286016762+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.286108774+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.286108774+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.286166416+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.411929782+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.411929782+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 
2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.558727521+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.558727521+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.682668194+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683063256+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683152478+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 
2025-08-13T20:06:39.683388495+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wIhww 2025-08-13T20:06:39.683499848+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683588961+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.895125296+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.895463175+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.895557448+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.896552426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=k0tK8 2025-08-13T20:06:39.896742382+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 
2025-08-13T20:06:39.896977069+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.518465107+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.518599201+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.683681214+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683840249+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683840249+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683945572+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.702434612+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:40.702434612+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 
2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F 
time="2025-08-13T20:06:41Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.683858140+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684409336+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684473457+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684592981+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684758366+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 
2025-08-13T20:06:41.684909280+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:41.884354688+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.884581335+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.885465550+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.885964304+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.886066947+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:41.886122629+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F 
time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info 
msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890383862+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:42.892015309+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.867567909+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 
2025-08-13T20:06:43.867911889+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.867963541+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.868006242+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GW6YT 2025-08-13T20:06:43.868038283+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.868068834+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.975695169+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:43.975974367+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 
2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="of 1 pods matching 
label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="checked registry server health" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="resolving sources" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="checking if subscriptions need update" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="resolving sources" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="checking if subscriptions need update" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="resolving sources" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="checking if subscriptions need update" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:48.529694646+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.529741967+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="evaluating 
current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.529741967+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:48.530299093+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:49.170617092+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:49.183381657+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" 
level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.186333622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918537555+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918658748+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918658748+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace health=true id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY" 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="resolving sources" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checking if subscriptions need update" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:50.513729180+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:50.655559256+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.714640670+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.714640670+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.797510826+00:00 stderr F 
time="2025-08-13T20:06:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.797510826+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.805644769+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.805963649+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.806042481+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.806094592+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=TdbPT 2025-08-13T20:06:50.806133023+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.810715425+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.832434047+00:00 stderr F 
time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.901265181+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.901662052+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.902011852+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.902231359+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" 
level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.133741756+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.133741756+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="registry 
state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.146006978+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149216880+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149323193+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149441806+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=+VsiI 2025-08-13T20:06:51.149549129+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149635552+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.509337165+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.525533099+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.525657003+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 
2025-08-13T20:06:51.526219469+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI
2025-08-13T20:06:56.134097751+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.134097751+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.376976224+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.378197979+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.378197979+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.380846025+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=dA2JE
2025-08-13T20:06:56.381006220+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.381068022+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE
2025-08-13T20:06:56.490907001+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.490907001+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=6Fxp2
2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2
2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=Jgp93 namespace=default
2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=Jgp93 namespace=default
2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=8627U namespace=hostpath-provisioner
2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=8627U namespace=hostpath-provisioner
2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace default" id=Jgp93 namespace=default
2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=GAzm9 namespace=kube-node-lease
2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=GAzm9 namespace=kube-node-lease
2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=8627U namespace=hostpath-provisioner
2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=UgcrS namespace=kube-public
2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=UgcrS namespace=kube-public
2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=GAzm9 namespace=kube-node-lease
2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-public" id=UgcrS namespace=kube-public
2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=dUTGp namespace=kube-system
2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=dUTGp namespace=kube-system
2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=YBTPP namespace=openshift
2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=YBTPP namespace=openshift
2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift" id=YBTPP namespace=openshift
2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=ScKe9 namespace=openshift-apiserver
2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=ScKe9 namespace=openshift-apiserver
2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-system" id=dUTGp namespace=kube-system
2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=1r5b5 namespace=openshift-apiserver-operator
2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=1r5b5 namespace=openshift-apiserver-operator
2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=1r5b5 namespace=openshift-apiserver-operator
2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=vxc7b namespace=openshift-authentication
2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=vxc7b namespace=openshift-authentication
2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=ScKe9 namespace=openshift-apiserver
2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=D5zUy namespace=openshift-authentication-operator
2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=D5zUy namespace=openshift-authentication-operator
2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=vxc7b namespace=openshift-authentication
2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=GH+4T namespace=openshift-cloud-network-config-controller
2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=GH+4T namespace=openshift-cloud-network-config-controller
2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=D5zUy namespace=openshift-authentication-operator
2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=EFPAp namespace=openshift-cloud-platform-infra
2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=EFPAp namespace=openshift-cloud-platform-infra
2025-08-13T20:06:57.103426132+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=GH+4T namespace=openshift-cloud-network-config-controller
2025-08-13T20:06:57.103552366+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=jSDho namespace=openshift-cluster-machine-approver
2025-08-13T20:06:57.103587677+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=jSDho namespace=openshift-cluster-machine-approver
2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=EFPAp namespace=openshift-cloud-platform-infra
2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=ivWJs namespace=openshift-cluster-samples-operator
2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=ivWJs namespace=openshift-cluster-samples-operator
2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=jSDho namespace=openshift-cluster-machine-approver
2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=fKmPg namespace=openshift-cluster-storage-operator
2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=fKmPg namespace=openshift-cluster-storage-operator
2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=ivWJs namespace=openshift-cluster-samples-operator
2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=RF8fd namespace=openshift-cluster-version
2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=RF8fd namespace=openshift-cluster-version
2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=RF8fd namespace=openshift-cluster-version
2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="resolving sources" id=YcZnZ namespace=openshift-config
2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="checking if subscriptions need update" id=YcZnZ namespace=openshift-config
2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=fKmPg namespace=openshift-cluster-storage-operator
2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="resolving sources" id=fnLg3 namespace=openshift-config-managed
2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="checking if subscriptions need update" id=fnLg3 namespace=openshift-config-managed
2025-08-13T20:06:59.170645801+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config" id=YcZnZ namespace=openshift-config
2025-08-13T20:06:59.170830256+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=fJd8l namespace=openshift-config-operator
2025-08-13T20:06:59.170918399+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=fJd8l namespace=openshift-config-operator
2025-08-13T20:06:59.175838440+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.175838440+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.181687667+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=fnLg3 namespace=openshift-config-managed
2025-08-13T20:06:59.181903334+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=R3oAq namespace=openshift-console
2025-08-13T20:06:59.181993026+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=R3oAq namespace=openshift-console
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=qgbwR
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console" id=R3oAq namespace=openshift-console
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=88gS+ namespace=openshift-console-operator
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=88gS+ namespace=openshift-console-operator
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=fJd8l namespace=openshift-config-operator
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=jVVWB namespace=openshift-console-user-settings
2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=jVVWB namespace=openshift-console-user-settings
2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=88gS+ namespace=openshift-console-operator
2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=7ABnv namespace=openshift-controller-manager
2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=7ABnv namespace=openshift-controller-manager
2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR
2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=jVVWB namespace=openshift-console-user-settings
2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=bozjR namespace=openshift-controller-manager-operator
2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=bozjR namespace=openshift-controller-manager-operator
2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=7ABnv namespace=openshift-controller-manager
2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=Bio85 namespace=openshift-dns
2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=Bio85 namespace=openshift-dns
2025-08-13T20:06:59.703075856+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=bozjR namespace=openshift-controller-manager-operator
2025-08-13T20:06:59.703192020+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=Af1UN namespace=openshift-dns-operator
2025-08-13T20:06:59.703232731+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=Af1UN namespace=openshift-dns-operator
2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=Bio85 namespace=openshift-dns
2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=fddqw namespace=openshift-etcd
2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=fddqw namespace=openshift-etcd
2025-08-13T20:07:00.096657181+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.096657181+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.098826673+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.099016628+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.099061860+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.099155562+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Nrb/Z
2025-08-13T20:07:00.099232584+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.099268175+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.103694382+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=Af1UN namespace=openshift-dns-operator
2025-08-13T20:07:00.103694382+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=uuetg namespace=openshift-etcd-operator
2025-08-13T20:07:00.103694382+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=uuetg namespace=openshift-etcd-operator
2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z
2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=fddqw namespace=openshift-etcd
2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=jF/Mz namespace=openshift-host-network
2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=jF/Mz namespace=openshift-host-network
2025-08-13T20:07:00.503281089+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=uuetg namespace=openshift-etcd-operator
2025-08-13T20:07:00.503375092+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=kLxN2 namespace=openshift-image-registry
2025-08-13T20:07:00.503409213+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=kLxN2 namespace=openshift-image-registry
2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=jF/Mz namespace=openshift-host-network
2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=+/qN1 namespace=openshift-infra
2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=+/qN1 namespace=openshift-infra
2025-08-13T20:07:00.935907223+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=kLxN2 namespace=openshift-image-registry
2025-08-13T20:07:00.936043507+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=Ty/lT namespace=openshift-ingress
2025-08-13T20:07:00.936077668+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=Ty/lT namespace=openshift-ingress
2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=+/qN1 namespace=openshift-infra
2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=cRYWK namespace=openshift-ingress-canary
2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=cRYWK namespace=openshift-ingress-canary
2025-08-13T20:07:01.301361841+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=Ty/lT namespace=openshift-ingress
2025-08-13T20:07:01.301466174+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=tkPdT namespace=openshift-ingress-operator
2025-08-13T20:07:01.301499775+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=tkPdT namespace=openshift-ingress-operator
2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=cRYWK namespace=openshift-ingress-canary
2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=E3P2t namespace=openshift-kni-infra
2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=E3P2t namespace=openshift-kni-infra
2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=tkPdT namespace=openshift-ingress-operator
2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=X/Qr9 namespace=openshift-kube-apiserver
2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=X/Qr9 namespace=openshift-kube-apiserver
2025-08-13T20:07:01.909935219+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=E3P2t namespace=openshift-kni-infra
2025-08-13T20:07:01.910360421+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=EvcdC namespace=openshift-kube-apiserver-operator
2025-08-13T20:07:01.910960379+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=EvcdC namespace=openshift-kube-apiserver-operator
2025-08-13T20:07:02.428272220+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=EvcdC namespace=openshift-kube-apiserver-operator
2025-08-13T20:07:02.428531668+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="resolving sources" id=MfPwX namespace=openshift-kube-controller-manager
2025-08-13T20:07:02.437184656+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="checking if subscriptions need update" id=MfPwX namespace=openshift-kube-controller-manager
2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=X/Qr9 namespace=openshift-kube-apiserver
2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="resolving sources" id=/r0yC namespace=openshift-kube-controller-manager-operator
2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="checking if subscriptions need update" id=/r0yC namespace=openshift-kube-controller-manager-operator
2025-08-13T20:07:03.884353897+00:00 stderr F time="2025-08-13T20:07:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:03.884477280+00:00 stderr F time="2025-08-13T20:07:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=MfPwX namespace=openshift-kube-controller-manager
2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=M48GQ namespace=openshift-kube-scheduler
2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=M48GQ namespace=openshift-kube-scheduler
2025-08-13T20:07:04.452815165+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=/r0yC namespace=openshift-kube-controller-manager-operator
2025-08-13T20:07:04.452930688+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=MhMX5 namespace=openshift-kube-scheduler-operator
2025-08-13T20:07:04.452972260+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=MhMX5 namespace=openshift-kube-scheduler-operator
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=OmDfI
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=M48GQ namespace=openshift-kube-scheduler
2025-08-13T20:07:04.519269010+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=/ZiT9 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:07:04.519269010+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=/ZiT9 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=MhMX5 namespace=openshift-kube-scheduler-operator
2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=1p/uA namespace=openshift-machine-api
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=1p/uA namespace=openshift-machine-api
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=/ZiT9 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=7tI/m namespace=openshift-machine-config-operator
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=7tI/m namespace=openshift-machine-config-operator
2025-08-13T20:07:04.574843154+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.574843154+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=7tI/m namespace=openshift-machine-config-operator
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=iPKzd namespace=openshift-marketplace
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=iPKzd namespace=openshift-marketplace
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=1p/uA namespace=openshift-machine-api
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=KEqPT namespace=openshift-monitoring
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=KEqPT namespace=openshift-monitoring
2025-08-13T20:07:04.612855994+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=iPKzd namespace=openshift-marketplace 2025-08-13T20:07:04.642263837+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=XK4U+ namespace=openshift-multus 2025-08-13T20:07:04.642263837+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=XK4U+ namespace=openshift-multus 2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=KEqPT namespace=openshift-monitoring 2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=sfNTf namespace=openshift-network-diagnostics 2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=sfNTf namespace=openshift-network-diagnostics 2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=XK4U+ namespace=openshift-multus 
2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=z0TgY namespace=openshift-network-node-identity 2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=z0TgY namespace=openshift-network-node-identity 2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.675230562+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.720861450+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=sfNTf namespace=openshift-network-diagnostics 2025-08-13T20:07:04.723694311+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=UXPNK namespace=openshift-network-operator 2025-08-13T20:07:04.723694311+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if 
subscriptions need update" id=UXPNK namespace=openshift-network-operator 2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=z0TgY namespace=openshift-network-node-identity 2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=+Tnz6 namespace=openshift-node 2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=+Tnz6 namespace=openshift-node 2025-08-13T20:07:04.998404328+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:04.998404328+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F 
time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.037186670+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.037311213+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.037311213+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.037565210+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.098929900+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 
2025-08-13T20:07:05.098929900+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=UXPNK namespace=openshift-network-operator 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=EF1d4 namespace=openshift-nutanix-infra 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=EF1d4 namespace=openshift-nutanix-infra 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state 
good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.173327193+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.194309614+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.194309614+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.194476909+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.200168582+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.200168582+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206701680+00:00 stderr F time="2025-08-13T20:07:05Z" 
level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206727580+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206740501+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206753011+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=EM2Dn 2025-08-13T20:07:05.206763421+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206814553+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-node" id=+Tnz6 namespace=openshift-node 2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=90LR/ namespace=openshift-oauth-apiserver 2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=90LR/ namespace=openshift-oauth-apiserver 
2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.495970383+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.495970383+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.507549825+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=EF1d4 namespace=openshift-nutanix-infra 2025-08-13T20:07:05.507549825+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=bzNXl namespace=openshift-openstack-infra 2025-08-13T20:07:05.507549825+00:00 
stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=bzNXl namespace=openshift-openstack-infra 2025-08-13T20:07:05.527593480+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.527648152+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.527648152+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=cmZbo 2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.720449489+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.720449489+00:00 stderr F time="2025-08-13T20:07:05Z" level=info 
msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=90LR/ namespace=openshift-oauth-apiserver 2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=7fSuO namespace=openshift-operator-lifecycle-manager 2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=7fSuO namespace=openshift-operator-lifecycle-manager 2025-08-13T20:07:05.927625519+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=bzNXl namespace=openshift-openstack-infra 2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=7AwXH namespace=openshift-operators 2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=7AwXH namespace=openshift-operators 2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=7fSuO namespace=openshift-operator-lifecycle-manager 2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" 
level=info msg="resolving sources" id=Fif+h namespace=openshift-ovirt-infra 2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=Fif+h namespace=openshift-ovirt-infra 2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=7AwXH namespace=openshift-operators 2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=8/ggq namespace=openshift-ovn-kubernetes 2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=8/ggq namespace=openshift-ovn-kubernetes 2025-08-13T20:07:06.552257327+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.552257327+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.556577681+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.556577681+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=Fif+h namespace=openshift-ovirt-infra 2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=KRghI namespace=openshift-route-controller-manager 
2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=KRghI namespace=openshift-route-controller-manager 2025-08-13T20:07:06.735860501+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.736024806+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.736024806+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.736105458+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.903362294+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=8/ggq namespace=openshift-ovn-kubernetes 2025-08-13T20:07:06.903362294+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=HgYn+ namespace=openshift-service-ca 2025-08-13T20:07:06.903425696+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=HgYn+ namespace=openshift-service-ca 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="searching for current 
pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=KRghI namespace=openshift-route-controller-manager 2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=5YpIl namespace=openshift-service-ca-operator 2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=5YpIl 
namespace=openshift-service-ca-operator 2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=HgYn+ namespace=openshift-service-ca 2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=PVkqD namespace=openshift-user-workload-monitoring 2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=PVkqD namespace=openshift-user-workload-monitoring 2025-08-13T20:07:07.429827448+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.433853584+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.433853584+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=5YpIl namespace=openshift-service-ca-operator 2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=t5Uw+ namespace=openshift-vsphere-infra 2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=t5Uw+ 
namespace=openshift-vsphere-infra 2025-08-13T20:07:07.523702840+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.523702840+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:08.166985893+00:00 stderr F time="2025-08-13T20:07:08Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=t5Uw+ namespace=openshift-vsphere-infra 2025-08-13T20:07:08.167210020+00:00 stderr F time="2025-08-13T20:07:08Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=PVkqD namespace=openshift-user-workload-monitoring 2025-08-13T20:07:09.012897627+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.012897627+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.026530958+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.026739694+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.026849607+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the 
correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.026943929+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=oO6F4 2025-08-13T20:07:09.026994031+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.027029852+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.295143979+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.295143979+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352439852+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352667768+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352715280+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352948736+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.544717705+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.544928851+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace 
id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927146228+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:17.402023609+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.402023609+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.471673926+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.471673926+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.540994884+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.540994884+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 
2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.569925323+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.569925323+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:19.897743354+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.901121591+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.962722627+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=XXh2y 2025-08-13T20:07:19.963908401+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963908401+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109529166+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109769743+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109941668+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.110055541+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.363563220+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.363563220+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383363787+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383672586+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383724398+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383768529+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=xkuDP 2025-08-13T20:07:20.383921013+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383967415+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488287136+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.494112093+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.494289488+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521108847+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521445106+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521614701+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521707434+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=SeWd4 2025-08-13T20:07:20.522132546+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.522691662+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.539406671+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.539712020+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.565002835+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.565767577+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1
2025-08-13T20:07:20.566136148+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1
2025-08-13T20:07:20.566481028+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=mvFP1
2025-08-13T20:07:20.566576080+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1
2025-08-13T20:07:20.566707794+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1
2025-08-13T20:07:20.650509956+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4
2025-08-13T20:07:20.650744512+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4
2025-08-13T20:07:20.650846445+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4
2025-08-13T20:07:20.650978819+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4
2025-08-13T20:07:20.822077504+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1
2025-08-13T20:07:20.822822676+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1
2025-08-13T20:07:20.822964710+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1
2025-08-13T20:07:20.823405262+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1
2025-08-13T20:07:21.118466272+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.118579555+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.143076477+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.143546361+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.143663764+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.143761227+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=TpWB0
2025-08-13T20:07:21.143910861+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.144029865+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.222951427+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.223850313+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.223993457+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.224400239+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.224472851+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0
2025-08-13T20:07:21.224765519+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.224996136+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.258483186+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.258935579+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.259019062+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.259229778+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=kdqU8
2025-08-13T20:07:21.259333681+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.259418713+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.513392535+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.513501248+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.513501248+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.513692293+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:21.513692293+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8
2025-08-13T20:07:26.432005016+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.432005016+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=BKWN2
2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.463750046+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.463760816+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2
2025-08-13T20:07:26.537822570+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.537822570+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.545020466+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.545276254+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.545319895+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.545360856+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=+5tYT
2025-08-13T20:07:26.545391007+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.545420538+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT
2025-08-13T20:07:34.621979639+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.621979639+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=rpx5y
2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y
2025-08-13T20:07:35.201738891+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.202012579+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=QgRXm
2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:35.227093568+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm
2025-08-13T20:07:39.803162468+00:00 stderr F time="2025-08-13T20:07:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:39.803162468+00:00 stderr F time="2025-08-13T20:07:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.284657233+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.284698754+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.284708834+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.284718375+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kwBL/
2025-08-13T20:07:40.284728345+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.284728345+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.534016432+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.534016432+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/
2025-08-13T20:07:40.534107155+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.534107155+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=A24F2
2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.572495475+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=CQ/MI
2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.590760549+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.591497060+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.591582493+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2
2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.608217870+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GGhoG
2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.611012370+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI
2025-08-13T20:07:40.890683498+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.891013548+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:40.891013548+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:41.092128714+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:41.092128714+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:42.210912870+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.210912870+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.235076863+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.238390857+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.238522041+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.238632754+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GWpeZ
2025-08-13T20:07:42.238742578+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.238829830+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.328618204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.328618204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.331729294+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kRsnw
2025-08-13T20:07:42.332483155+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.332483155+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators
catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.361420195+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.361420195+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.363117934+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.363152725+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.363162755+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.363394412+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.507196204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.507257866+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.507267827+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.507277557+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uUuJK 2025-08-13T20:07:42.507287257+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.507287257+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.615342925+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.615342925+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892730938+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.090820408+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:43.091066445+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:43.091110296+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:43.091269920+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:43.492354760+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.492641138+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.492689280+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.492920576+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.492966527+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.493119482+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.493202214+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.690738548+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.691115039+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.691176620+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.691215671+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=zL7jK 2025-08-13T20:07:43.691245762+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.691275603+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.090946862+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.091207440+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.091248581+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.091381265+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.091425996+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:57.427919925+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.427919925+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.616078620+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.616078620+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.621872686+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.622178515+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.622240697+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.622283578+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=/W5/o 2025-08-13T20:07:57.622338369+00:00 stderr F time="2025-08-13T20:07:57Z" 
level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.622556936+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.635754994+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.636161676+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.636214817+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.661724129+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.661833862+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.661948955+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.661995626+00:00 stderr F time="2025-08-13T20:07:57Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.669732978+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.669973195+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.670018046+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.670056607+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=ZaL6T 2025-08-13T20:07:57.670089528+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.670119569+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.686739826+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.687120957+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.687190539+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.691604545+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.691604545+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:59.719443285+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.719556459+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.730188293+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.730188293+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR 
2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=/d3LR 2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.745876913+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.746115750+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.746156171+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.746274225+00:00 stderr F time="2025-08-13T20:07:59Z" 
level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:08:00.174554403+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.174687257+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179552996+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=7VIOl 2025-08-13T20:08:00.179719591+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179719591+00:00 stderr F 
time="2025-08-13T20:08:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.237632841+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.237918690+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.238012702+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.238112515+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:01.069850143+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.069850143+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F 
time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.120543376+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.120871825+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.120973828+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.121166044+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.121215975+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.121343809+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.121392790+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.136554085+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.136739570+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.136851224+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl 
2025-08-13T20:08:01.136933486+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=yqVMl 2025-08-13T20:08:01.136987037+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.137161492+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.167616776+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.167848692+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.167848692+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.176215862+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.176215862+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 
2025-08-13T20:08:04.740190434+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.740190434+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.744980792+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.792837464+00:00 
stderr F time="2025-08-13T20:08:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793194344+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793234185+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793402310+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793441461+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:05.227830806+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.227830806+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr 
F time="2025-08-13T20:08:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258361051+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258426143+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258436623+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct 
images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258636439+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258636439+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:34.569230327+00:00 stderr F time="2025-08-13T20:08:34Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:09:45.312959779+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.312959779+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.313585607+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.313674139+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.325440706+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329209134+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329468842+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329510783+00:00 stderr F 
time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329549094+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=m5f4a 2025-08-13T20:09:45.329580195+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329610826+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info 
msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" 
level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.387238498+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395537146+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395660890+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395710771+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=auOVA 2025-08-13T20:09:45.395743392+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395877206+00:00 stderr F 
time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398470770+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398541992+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398585564+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Id+B8 2025-08-13T20:09:45.398619555+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398650715+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.399083668+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 
2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:53.174562738+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:53.175244858+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:53.175335740+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.175335740+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.179189551+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.179189551+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.179215391+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.197599259+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.199541824+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" 
id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.751689806+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752277003+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752326864+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752379006+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=HVKfg 2025-08-13T20:09:55.752410847+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752451938+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752714516+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.753997462+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754057224+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754096405+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=OemHP 2025-08-13T20:09:55.754128546+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754158867+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=OemHP
2025-08-13T20:09:55.770968679+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP
2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP
2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP
2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP
2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP
2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:55.776185209+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg
2025-08-13T20:09:55.776207459+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg
2025-08-13T20:09:55.776207459+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg
2025-08-13T20:09:55.776339553+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg
2025-08-13T20:09:55.776339553+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg
2025-08-13T20:09:55.777490676+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=afjez
2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=z5DhF
2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:56.145267191+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez
2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF
2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:56.546288508+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=AUBJc
2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:56.748351692+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:56.748483165+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:56.748483165+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:56.748497786+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=IakPh
2025-08-13T20:09:56.748497786+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:56.748508116+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:57.156892975+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=aUKHs namespace=default
2025-08-13T20:09:57.156892975+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=aUKHs namespace=default
2025-08-13T20:09:57.157035829+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=NGiwD namespace=hostpath-provisioner
2025-08-13T20:09:57.157035829+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=NGiwD namespace=hostpath-provisioner
2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=NGiwD namespace=hostpath-provisioner
2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=VX3sc namespace=kube-node-lease
2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=VX3sc namespace=kube-node-lease
2025-08-13T20:09:57.165284335+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace default" id=aUKHs namespace=default
2025-08-13T20:09:57.165284335+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=aaOn8 namespace=kube-public
2025-08-13T20:09:57.165311576+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=aaOn8 namespace=kube-public
2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=VX3sc namespace=kube-node-lease
2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=d1BGQ namespace=kube-system
2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=d1BGQ namespace=kube-system
2025-08-13T20:09:57.168197969+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-public" id=aaOn8 namespace=kube-public
2025-08-13T20:09:57.168197969+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=cmzNJ namespace=openshift
2025-08-13T20:09:57.168212199+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=cmzNJ namespace=openshift
2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift" id=cmzNJ namespace=openshift
2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=4umjd namespace=openshift-apiserver
2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=4umjd namespace=openshift-apiserver
2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-system" id=d1BGQ namespace=kube-system
2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=WMEZ7 namespace=openshift-apiserver-operator
2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=WMEZ7 namespace=openshift-apiserver-operator
2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=4umjd namespace=openshift-apiserver
2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=Z2Ddy namespace=openshift-authentication
2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=Z2Ddy namespace=openshift-authentication
2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=WMEZ7 namespace=openshift-apiserver-operator
2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=jYGI7 namespace=openshift-authentication-operator
2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=jYGI7 namespace=openshift-authentication-operator
2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=Z2Ddy namespace=openshift-authentication
2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=F/ZKL namespace=openshift-cloud-network-config-controller
2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=F/ZKL namespace=openshift-cloud-network-config-controller
2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=jYGI7 namespace=openshift-authentication-operator
2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=XQ/f/ namespace=openshift-cloud-platform-infra
2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=XQ/f/ namespace=openshift-cloud-platform-infra
2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=F/ZKL namespace=openshift-cloud-network-config-controller
2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=EJvXe namespace=openshift-cluster-machine-approver
2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=EJvXe namespace=openshift-cluster-machine-approver
2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc
2025-08-13T20:09:57.549209973+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:57.549209973+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=XQ/f/ namespace=openshift-cloud-platform-infra
2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=YCKSt namespace=openshift-cluster-samples-operator
2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=YCKSt namespace=openshift-cluster-samples-operator
2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh
2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=EJvXe namespace=openshift-cluster-machine-approver
2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=a5iDW namespace=openshift-cluster-storage-operator
2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=a5iDW namespace=openshift-cluster-storage-operator
2025-08-13T20:09:57.946444862+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=rG7wq
2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=YCKSt namespace=openshift-cluster-samples-operator
2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=N9XnC namespace=openshift-cluster-version
2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=N9XnC namespace=openshift-cluster-version
2025-08-13T20:09:58.148277418+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.148434292+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.148434292+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=FPqH4
2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=a5iDW namespace=openshift-cluster-storage-operator
2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=shDZ2 namespace=openshift-config
2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=shDZ2 namespace=openshift-config
2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=N9XnC namespace=openshift-cluster-version
2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=u9Mnp namespace=openshift-config-managed
2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=u9Mnp namespace=openshift-config-managed
2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config" id=shDZ2 namespace=openshift-config
2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=rL09V namespace=openshift-config-operator
2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=rL09V namespace=openshift-config-operator
2025-08-13T20:09:58.745648065+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:58.745931463+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:58.745931463+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:58.746050217+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:58.746050217+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq
2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=u9Mnp namespace=openshift-config-managed
2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=xns6Z namespace=openshift-console
2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=xns6Z namespace=openshift-console
2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4
2025-08-13T20:09:58.962240075+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=rL09V namespace=openshift-config-operator
2025-08-13T20:09:58.962337268+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=hQUnk namespace=openshift-console-operator
2025-08-13T20:09:58.962337268+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=hQUnk namespace=openshift-console-operator
2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console" id=xns6Z namespace=openshift-console
2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=2uTnu namespace=openshift-console-user-settings
2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=2uTnu namespace=openshift-console-user-settings
2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=hQUnk namespace=openshift-console-operator
2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=cDtnC namespace=openshift-controller-manager
2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=cDtnC namespace=openshift-controller-manager
2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=2uTnu namespace=openshift-console-user-settings
2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=L69/p namespace=openshift-controller-manager-operator
2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=L69/p namespace=openshift-controller-manager-operator
2025-08-13T20:09:59.762659064+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=cDtnC namespace=openshift-controller-manager
2025-08-13T20:09:59.762659064+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=CTpzL namespace=openshift-dns
2025-08-13T20:09:59.762715376+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=CTpzL namespace=openshift-dns
2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=L69/p namespace=openshift-controller-manager-operator
2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=WzHQ7 namespace=openshift-dns-operator
2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=WzHQ7 namespace=openshift-dns-operator
2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=CTpzL namespace=openshift-dns
2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=sAyKO namespace=openshift-etcd
2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=sAyKO namespace=openshift-etcd
2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=WzHQ7 namespace=openshift-dns-operator
2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=RUWXH namespace=openshift-etcd-operator
2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=RUWXH namespace=openshift-etcd-operator
2025-08-13T20:10:00.562843906+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=sAyKO namespace=openshift-etcd
2025-08-13T20:10:00.562987640+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=b1K1t namespace=openshift-host-network
2025-08-13T20:10:00.563004551+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=b1K1t namespace=openshift-host-network
2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=RUWXH namespace=openshift-etcd-operator
2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=T2uVn namespace=openshift-image-registry
2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=T2uVn namespace=openshift-image-registry
2025-08-13T20:10:00.962427103+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=b1K1t namespace=openshift-host-network
2025-08-13T20:10:00.962524555+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=vUTpE namespace=openshift-infra
2025-08-13T20:10:00.962524555+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=vUTpE namespace=openshift-infra
2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=T2uVn namespace=openshift-image-registry
2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=KYZtS namespace=openshift-ingress
2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=KYZtS namespace=openshift-ingress
2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=vUTpE namespace=openshift-infra
2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=oFtfD namespace=openshift-ingress-canary
2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=oFtfD namespace=openshift-ingress-canary
2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=KYZtS namespace=openshift-ingress
2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=iqP7R namespace=openshift-ingress-operator
2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=iqP7R namespace=openshift-ingress-operator
2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=oFtfD namespace=openshift-ingress-canary
2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=7k1Dn namespace=openshift-kni-infra
2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=7k1Dn namespace=openshift-kni-infra
2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=iqP7R namespace=openshift-ingress-operator
2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=vvE+v namespace=openshift-kube-apiserver
2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=vvE+v namespace=openshift-kube-apiserver
2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=7k1Dn namespace=openshift-kni-infra
2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=1Jzn4 namespace=openshift-kube-apiserver-operator
2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=1Jzn4
namespace=openshift-kube-apiserver-operator 2025-08-13T20:10:02.372387967+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=vvE+v namespace=openshift-kube-apiserver 2025-08-13T20:10:02.372522941+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=I4eoQ namespace=openshift-kube-controller-manager 2025-08-13T20:10:02.372563762+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=I4eoQ namespace=openshift-kube-controller-manager 2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=1Jzn4 namespace=openshift-kube-apiserver-operator 2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=RBqCK namespace=openshift-kube-controller-manager-operator 2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=RBqCK namespace=openshift-kube-controller-manager-operator 2025-08-13T20:10:02.760844894+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=I4eoQ namespace=openshift-kube-controller-manager 2025-08-13T20:10:02.760844894+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=QGL7g namespace=openshift-kube-scheduler 2025-08-13T20:10:02.760895106+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=QGL7g namespace=openshift-kube-scheduler 2025-08-13T20:10:02.964298548+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=RBqCK namespace=openshift-kube-controller-manager-operator 2025-08-13T20:10:02.964298548+00:00 stderr F 
time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=EN/zw namespace=openshift-kube-scheduler-operator 2025-08-13T20:10:02.964298548+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=EN/zw namespace=openshift-kube-scheduler-operator 2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=QGL7g namespace=openshift-kube-scheduler 2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=MpSL1 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=MpSL1 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=EN/zw namespace=openshift-kube-scheduler-operator 2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=MpSL1 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=dyEQA namespace=openshift-machine-api 2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=dyEQA namespace=openshift-machine-api 
2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=/z821 namespace=openshift-machine-config-operator 2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=/z821 namespace=openshift-machine-config-operator 2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=dyEQA namespace=openshift-machine-api 2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=wmntX namespace=openshift-marketplace 2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=wmntX namespace=openshift-marketplace 2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=/z821 namespace=openshift-machine-config-operator 2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=/GelK namespace=openshift-monitoring 2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=/GelK namespace=openshift-monitoring 2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=wmntX namespace=openshift-marketplace 2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=K7dM5 namespace=openshift-multus 
2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=K7dM5 namespace=openshift-multus 2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=/GelK namespace=openshift-monitoring 2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=IjjbX namespace=openshift-network-diagnostics 2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=IjjbX namespace=openshift-network-diagnostics 2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=K7dM5 namespace=openshift-multus 2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=9ERws namespace=openshift-network-node-identity 2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=9ERws namespace=openshift-network-node-identity 2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=IjjbX namespace=openshift-network-diagnostics 2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=ejPIl namespace=openshift-network-operator 2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=ejPIl namespace=openshift-network-operator 2025-08-13T20:10:05.161489993+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=9ERws namespace=openshift-network-node-identity 2025-08-13T20:10:05.161489993+00:00 
stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=3JdwK namespace=openshift-node 2025-08-13T20:10:05.161489993+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=3JdwK namespace=openshift-node 2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=ejPIl namespace=openshift-network-operator 2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=cxk9z namespace=openshift-nutanix-infra 2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=cxk9z namespace=openshift-nutanix-infra 2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-node" id=3JdwK namespace=openshift-node 2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=twwxk namespace=openshift-oauth-apiserver 2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=twwxk namespace=openshift-oauth-apiserver 2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=cxk9z namespace=openshift-nutanix-infra 2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=LOHtz namespace=openshift-openstack-infra 2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=LOHtz namespace=openshift-openstack-infra 2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=twwxk 
namespace=openshift-oauth-apiserver 2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=PrCpi namespace=openshift-operator-lifecycle-manager 2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=PrCpi namespace=openshift-operator-lifecycle-manager 2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=LOHtz namespace=openshift-openstack-infra 2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=077is namespace=openshift-operators 2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=077is namespace=openshift-operators 2025-08-13T20:10:06.361410876+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=PrCpi namespace=openshift-operator-lifecycle-manager 2025-08-13T20:10:06.361549950+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=gIJXC namespace=openshift-ovirt-infra 2025-08-13T20:10:06.361585521+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=gIJXC namespace=openshift-ovirt-infra 2025-08-13T20:10:06.560976667+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=077is namespace=openshift-operators 2025-08-13T20:10:06.561110021+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=ikEL/ namespace=openshift-ovn-kubernetes 2025-08-13T20:10:06.561188093+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=ikEL/ namespace=openshift-ovn-kubernetes 2025-08-13T20:10:06.761085465+00:00 stderr 
F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=gIJXC namespace=openshift-ovirt-infra 2025-08-13T20:10:06.761085465+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=oFghW namespace=openshift-route-controller-manager 2025-08-13T20:10:06.761085465+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=oFghW namespace=openshift-route-controller-manager 2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=ikEL/ namespace=openshift-ovn-kubernetes 2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=jtVwc namespace=openshift-service-ca 2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=jtVwc namespace=openshift-service-ca 2025-08-13T20:10:07.162838483+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=oFghW namespace=openshift-route-controller-manager 2025-08-13T20:10:07.162959517+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=4EA5b namespace=openshift-service-ca-operator 2025-08-13T20:10:07.162959517+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="checking if subscriptions need update" id=4EA5b namespace=openshift-service-ca-operator 2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=jtVwc namespace=openshift-service-ca 2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=+3L9L namespace=openshift-user-workload-monitoring 2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" 
level=info msg="checking if subscriptions need update" id=+3L9L namespace=openshift-user-workload-monitoring 2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=4EA5b namespace=openshift-service-ca-operator 2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=ANtt4 namespace=openshift-vsphere-infra 2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="checking if subscriptions need update" id=ANtt4 namespace=openshift-vsphere-infra 2025-08-13T20:10:07.763725242+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=+3L9L namespace=openshift-user-workload-monitoring 2025-08-13T20:10:07.962111509+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=ANtt4 namespace=openshift-vsphere-infra 2025-08-13T20:10:36.567661543+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.567730145+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.568002362+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.568002362+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.571734019+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching 
for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.571849393+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572242304+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572242304+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572262824+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=52tz4 2025-08-13T20:10:36.572318606+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572318606+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=Upb4M 2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Upb4M 2025-08-13T20:10:36.572574683+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.572574683+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.592592257+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.592866545+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.592866545+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.593097232+00:00 
stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.593097232+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.593356229+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593453532+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.593453532+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.593564375+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593564375+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593581236+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 
2025-08-13T20:10:36.593592416+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593727950+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.593727950+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596436368+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596566301+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596566301+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596580592+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=h3sA4 2025-08-13T20:10:36.596590752+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 
2025-08-13T20:10:36.596590752+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596769277+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.597005344+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.597005344+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=zC15b 2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.967022983+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 
2025-08-13T20:10:36.967294600+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.967346602+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.967502926+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.967541448+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:37.169114087+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:37.169361304+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:37.169429406+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:37.169642292+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:37.169769006+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:16:57.983401621+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.983401621+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994107757+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="checked 
registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=NJHA9 2025-08-13T20:16:57.994495598+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994495598+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.052560476+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.052759281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.052759281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.052999578+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="catalog update required at 2025-08-13 20:16:58.052873305 +0000 UTC m=+1073.704261512" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.104206561+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
id=NJHA9 2025-08-13T20:16:58.160920630+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.160920630+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.182861157+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 
2025-08-13T20:16:58.301356281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.301490114+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.301490114+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.301576117+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.301687300+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.301687300+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349110124+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349228398+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349244878+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349256929+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Q4L1p 2025-08-13T20:16:58.349267839+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349267839+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.366071269+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.366071269+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371099302+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=bWm0C 
2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.869911227+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.869911227+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187540528+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187934079+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187934079+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.188080593+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.982015846+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.982015846+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.985854136+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.010869410+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="catalog update required at 2025-08-13 20:17:00.011047765 +0000 UTC m=+1075.662435882" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=4Ikty 2025-08-13T20:17:00.118078552+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.206741004+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.206741004+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.222015870+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.222015870+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.247353173+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247353173+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.388159314+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="registry state good" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.065919068+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.065989060+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.065989060+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.388421828+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 
2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:02.214701065+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.214916361+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.257914299+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.328990569+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.329082781+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F 
time="2025-08-13T20:17:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.226194831+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.226194831+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231187223+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231403959+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231451051+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231496242+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=NcZkZ 2025-08-13T20:17:03.231594725+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 
2025-08-13T20:17:03.231634786+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:13.567481128+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.567481128+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575424395+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575732133+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575732133+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575751934+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=cWRIS 2025-08-13T20:17:13.575766694+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575833406+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="catalog update required at 2025-08-13 20:17:14.574302031 +0000 UTC m=+1090.225690378" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.761215717+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:16.057152613+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:16.057152613+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.090869583+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.092864910+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.092945742+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.093063286+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=DQw9G 2025-08-13T20:17:17.093101357+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.093149408+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 
2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.167229383+00:00 
stderr F time="2025-08-13T20:17:21Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.167229383+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468679401+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.623563934+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.623563934+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.643182265+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643390751+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643430382+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643529405+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163396920+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:25.192955276+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.192955276+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.588933943+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.588933943+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.613010941+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for 
current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614580255+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614639487+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614680998+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=LEDYg 2025-08-13T20:17:25.614712339+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614743140+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 
2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 
2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:26.014261069+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.446116762+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.446166283+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.450755694+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 
2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8
2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=VNRX8
2025-08-13T20:17:26.451029212+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8
2025-08-13T20:17:26.451029212+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8
2025-08-13T20:17:26.500597568+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8
2025-08-13T20:17:26.500840705+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8
2025-08-13T20:17:26.500840705+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8
2025-08-13T20:17:26.501008230+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8
2025-08-13T20:17:28.104889022+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.104942904+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:29.083862719+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.083862719+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780267686+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.988379249+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.988635976+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.988635976+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.988765340+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="catalog update required at 2025-08-13 20:17:29.988696568 +0000 UTC m=+1105.640085225" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:30.091590506+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:30.118922297+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.118922297+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258727749+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258895684+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258914255+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258914255+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=tlYOF
2025-08-13T20:17:30.258924595+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258934245+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.300262586+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.359155597+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.359155597+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447140790+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447489830+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447532691+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447571042+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=qpwQE
2025-08-13T20:17:30.447625614+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447664765+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.505590159+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.507065671+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.507065671+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.786895173+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.786895173+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.790183456+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.790392212+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.790466695+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.790550767+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.790622629+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:30.790622629+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:31.237228273+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.237274404+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.237287695+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=le9Xh
2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.786854869+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:31.790262216+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:31.790262216+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.355004368+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.301474856+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.301877788+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.302487085+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.330228528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.330280479+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.330280479+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.338001729+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.338001729+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.351170726+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=LtZef
2025-08-13T20:17:35.351245528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.351245528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.593362432+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.593362432+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.849649031+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.849649031+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info 
msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195428065+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195492227+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195492227+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.196427574+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.624680583+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.624680583+00:00 stderr F 
time="2025-08-13T20:17:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847434594+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847479315+00:00 stderr F 
time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847479315+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847670481+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.853169388+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.853169388+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label 
selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.600328055+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.600328055+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.603665960+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="registry state good" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013003720+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013050411+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013050411+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.014143572+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.014143572+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.847365117+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.847365117+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="synchronizing registry 
server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852446052+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852603126+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852616807+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852652978+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=t60me 2025-08-13T20:17:38.852664618+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852664618+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032347619+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032394491+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032394491+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172130361+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172130361+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172176812+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.172192393+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace 
id=i/23X 2025-08-13T20:17:39.588865252+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.588865252+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.588918974+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.588937064+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787139864+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace health=true id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.661614527+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.662073730+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.662073730+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.101329484+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.101422527+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.763039721+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="syncing catalog source" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.763039721+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.975679963+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976215999+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976273680+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976273680+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=ja6zk 2025-08-13T20:17:44.976289581+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976289581+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.993212864+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:44.993404710+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:45.068746621+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf 
2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.251885971+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.251934492+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.251944773+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.252406326+00:00 stderr F 
time="2025-08-13T20:17:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.252493518+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.252493518+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.304897995+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.305139592+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.305181903+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.305221494+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=cxXAg 2025-08-13T20:17:45.305253065+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 
2025-08-13T20:17:45.305284176+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.449427192+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.449815113+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.449882175+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.450066551+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.450107972+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.450257196+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.450310008+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460243081+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XeZZz 2025-08-13T20:17:45.460418126+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460418126+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.495863248+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.496298881+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.496298881+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.496430045+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.496430045+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:58.140876162+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.140938434+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy 
2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.187421121+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.188168663+00:00 stderr F 
time="2025-08-13T20:17:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:18:00.092684011+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.092684011+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.292591780+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.292647911+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.552924954+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.553143560+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=BGjV2 
2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967893429+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967893429+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.999135901+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 
2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:15.062646720+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.062646720+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070203696+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070523866+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070523866+00:00 stderr F time="2025-08-13T20:18:15Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=dO5Zc 2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.101226102+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.101420618+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.101420618+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.101629364+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="requeueing registry server for catalog update check: update 
pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:24.802453013+00:00 stderr F time="2025-08-13T20:18:24Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:24.802453013+00:00 stderr F time="2025-08-13T20:18:24Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371243652+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="ensuring registry 
server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" 
level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:33.000200576+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.000200576+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.005656402+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:45.102857093+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.102857093+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="synchronizing registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.106837916+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.127030363+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.127136296+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.127136296+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.127194348+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:50.911552532+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.911657815+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.945205163+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.945329567+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.945342337+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.945415929+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
id=h+jCC 2025-08-13T20:18:51.034061451+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.034061451+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038585820+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 
2025-08-13T20:18:51.087387074+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.087558829+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.087558829+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.203088938+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.203088938+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.205561498+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.205561498+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209178852+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209348007+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209386928+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209424589+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=x6WRy 2025-08-13T20:18:51.209455260+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209484670+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.224655474+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.225014904+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.225014904+00:00 stderr F 
time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.229637206+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.229702278+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:56.030993378+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.030993378+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.048494758+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534042774+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534369633+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534431465+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.536889185+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
id=dSllX 2025-08-13T20:18:56.537126812+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.537176154+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.573850301+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574244392+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574735616+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574735616+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=sZqtq 2025-08-13T20:18:56.574752107+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574752107+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 
2025-08-13T20:18:56.669666167+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.670265584+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.670265584+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740431948+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.844820709+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845126518+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845241561+00:00 stderr F 
time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845402136+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845440837+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:59.231328542+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.231328542+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.265665972+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.265926380+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266036373+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266100075+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Cle0k 2025-08-13T20:18:59.266134555+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266166386+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.285993252+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286239890+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286239890+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286408614+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:19:00.223460944+00:00 stderr F 
time="2025-08-13T20:19:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237103764+00:00 stderr F time="2025-08-13T20:19:00Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237207807+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237207807+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237387262+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:03.108079571+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.108079571+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111257432+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111529539+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111672513+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111731485+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=bi/DE 2025-08-13T20:19:03.111766506+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111874869+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.356746543+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:15.128864790+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.128864790+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.137729953+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
health=true id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.202372679+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:31.079602079+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.079602079+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112271453+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=a07yu 2025-08-13T20:19:31.112512779+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112512779+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130389610+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130656218+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130656218+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130855944+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.205528878+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.205528878+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212642071+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212748464+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212764055+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212867548+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=LvVPs 2025-08-13T20:19:31.212867548+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.213025682+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.120932469+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.120932469+00:00 stderr F time="2025-08-13T20:19:34Z" 
level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.131326806+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.131326806+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" 
level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.828544909+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.828666523+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.843087175+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:35.843087175+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 
2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 
2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
id=Oo754 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.699018575+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 
2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702572287+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.718816861+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.718913884+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.718913884+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.719110969+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.719110969+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:45.203583479+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.203583479+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.212295488+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace health=true id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231634581+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231717194+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231717194+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.232405603+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.232405603+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:22:23.582698448+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.582698448+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.584282953+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.584467578+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.591555321+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.591886760+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=nw+2Z 2025-08-13T20:22:23.593865707+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593865707+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608149195+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=OTm45
2025-08-13T20:22:23.608293229+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45
2025-08-13T20:22:23.608306630+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45
2025-08-13T20:22:23.608468524+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45
2025-08-13T20:22:23.608482785+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z
2025-08-13T20:22:23.608669270+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45
2025-08-13T20:22:23.608669270+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z
2025-08-13T20:22:23.608716671+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z
2025-08-13T20:22:23.608923097+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.608941598+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.609014160+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z
2025-08-13T20:22:23.609014160+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z
2025-08-13T20:22:23.609103472+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.609138513+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.612150039+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.612229572+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.612336435+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.612375166+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.612413887+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Yo163
2025-08-13T20:22:23.612444628+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.612503430+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=JcgEP
2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.789613431+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.789740455+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.789757655+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.790150497+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.790150497+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP
2025-08-13T20:22:23.987111926+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.987264800+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.987282611+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.987452915+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163
2025-08-13T20:22:23.987452915+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163
2025-08-13T20:23:25.274452558+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=WKAf+ namespace=openshift-network-diagnostics
2025-08-13T20:23:25.274452558+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=WKAf+ namespace=openshift-network-diagnostics
2025-08-13T20:23:25.274585281+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=26pRh namespace=openshift-infra
2025-08-13T20:23:25.274585281+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=26pRh namespace=openshift-infra
2025-08-13T20:23:25.278488203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=WKAf+ namespace=openshift-network-diagnostics
2025-08-13T20:23:25.278587946+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=v44d/ namespace=openshift-operator-lifecycle-manager
2025-08-13T20:23:25.278587946+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=v44d/ namespace=openshift-operator-lifecycle-manager
2025-08-13T20:23:25.278848883+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=26pRh namespace=openshift-infra
2025-08-13T20:23:25.278907275+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=3ImAl namespace=openshift-ovn-kubernetes
2025-08-13T20:23:25.278907275+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=3ImAl namespace=openshift-ovn-kubernetes
2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=v44d/ namespace=openshift-operator-lifecycle-manager
2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=myf9p namespace=openshift-apiserver
2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=myf9p namespace=openshift-apiserver
2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=3ImAl namespace=openshift-ovn-kubernetes
2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=aQ7Eq namespace=openshift-console-operator
2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=aQ7Eq namespace=openshift-console-operator
2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=myf9p namespace=openshift-apiserver
2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=UuCS+ namespace=openshift-dns
2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=UuCS+ namespace=openshift-dns
2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=aQ7Eq namespace=openshift-console-operator
2025-08-13T20:23:25.285220695+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=SHiBK namespace=openshift-ingress-operator
2025-08-13T20:23:25.285326038+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=SHiBK namespace=openshift-ingress-operator
2025-08-13T20:23:25.287261794+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=UuCS+ namespace=openshift-dns
2025-08-13T20:23:25.287322135+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=1wSOa namespace=openshift
2025-08-13T20:23:25.287322135+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=1wSOa namespace=openshift
2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=SHiBK namespace=openshift-ingress-operator
2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=7OGDf namespace=openshift-cloud-platform-infra
2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=7OGDf namespace=openshift-cloud-platform-infra
2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift" id=1wSOa namespace=openshift
2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=zioiR namespace=openshift-ingress
2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=zioiR namespace=openshift-ingress
2025-08-13T20:23:25.290204408+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=7OGDf namespace=openshift-cloud-platform-infra
2025-08-13T20:23:25.290217238+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=/X4+y namespace=openshift-kube-apiserver-operator
2025-08-13T20:23:25.290226318+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=/X4+y namespace=openshift-kube-apiserver-operator
2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=zioiR namespace=openshift-ingress
2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=27eAx namespace=openshift-monitoring
2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=27eAx namespace=openshift-monitoring
2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=/X4+y namespace=openshift-kube-apiserver-operator
2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=9qgNb namespace=openshift-controller-manager-operator
2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=9qgNb namespace=openshift-controller-manager-operator
2025-08-13T20:23:25.880302963+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=27eAx namespace=openshift-monitoring
2025-08-13T20:23:25.880302963+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=8Gm9g namespace=openshift-machine-api
2025-08-13T20:23:25.880302963+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=8Gm9g namespace=openshift-machine-api
2025-08-13T20:23:26.079347861+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=9qgNb namespace=openshift-controller-manager-operator
2025-08-13T20:23:26.079520806+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=+A9qY namespace=openshift-service-ca-operator
2025-08-13T20:23:26.079556917+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=+A9qY namespace=openshift-service-ca-operator
2025-08-13T20:23:26.279465021+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=8Gm9g namespace=openshift-machine-api
2025-08-13T20:23:26.279616845+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=52xeD namespace=openshift-multus
2025-08-13T20:23:26.279665146+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=52xeD namespace=openshift-multus
2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=+A9qY namespace=openshift-service-ca-operator
2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=rigbO namespace=openshift-ovirt-infra
2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=rigbO namespace=openshift-ovirt-infra
2025-08-13T20:23:26.678405412+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=52xeD namespace=openshift-multus
2025-08-13T20:23:26.678405412+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=yPz0V namespace=kube-node-lease
2025-08-13T20:23:26.678462454+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=yPz0V namespace=kube-node-lease
2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=rigbO namespace=openshift-ovirt-infra
2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=qg5x0 namespace=openshift-authentication
2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=qg5x0 namespace=openshift-authentication
2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=yPz0V namespace=kube-node-lease
2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=Na6cW namespace=openshift-ingress-canary
2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=Na6cW namespace=openshift-ingress-canary
2025-08-13T20:23:27.278844792+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=qg5x0 namespace=openshift-authentication
2025-08-13T20:23:27.278904683+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=ofcr8 namespace=openshift-kube-scheduler
2025-08-13T20:23:27.278904683+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=ofcr8 namespace=openshift-kube-scheduler
2025-08-13T20:23:27.480060842+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=Na6cW namespace=openshift-ingress-canary
2025-08-13T20:23:27.480060842+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=pbqJ6 namespace=openshift-nutanix-infra
2025-08-13T20:23:27.480105244+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=pbqJ6 namespace=openshift-nutanix-infra
2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=ofcr8 namespace=openshift-kube-scheduler
2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=Gox56 namespace=openshift-service-ca
2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=Gox56 namespace=openshift-service-ca
2025-08-13T20:23:27.879351374+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=pbqJ6 namespace=openshift-nutanix-infra
2025-08-13T20:23:27.879351374+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=75adD namespace=openshift-openstack-infra
2025-08-13T20:23:27.879391395+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=75adD namespace=openshift-openstack-infra
2025-08-13T20:23:28.079429683+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=Gox56 namespace=openshift-service-ca
2025-08-13T20:23:28.079655029+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=lEKVF namespace=openshift-route-controller-manager
2025-08-13T20:23:28.079693940+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=lEKVF namespace=openshift-route-controller-manager
2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=75adD namespace=openshift-openstack-infra
2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=95zqT namespace=openshift-apiserver-operator
2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=95zqT namespace=openshift-apiserver-operator
2025-08-13T20:23:28.478929790+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=lEKVF namespace=openshift-route-controller-manager
2025-08-13T20:23:28.478967151+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=e20x5 namespace=openshift-config
2025-08-13T20:23:28.478967151+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=e20x5 namespace=openshift-config
2025-08-13T20:23:28.679541924+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=95zqT namespace=openshift-apiserver-operator
2025-08-13T20:23:28.679586375+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=w/k7b namespace=openshift-dns-operator
2025-08-13T20:23:28.679586375+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=w/k7b namespace=openshift-dns-operator
2025-08-13T20:23:28.879235711+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-config" id=e20x5 namespace=openshift-config
2025-08-13T20:23:28.879235711+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=4F67A namespace=openshift-console
2025-08-13T20:23:28.879292752+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=4F67A namespace=openshift-console
2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=w/k7b namespace=openshift-dns-operator
2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=VxLwh namespace=openshift-console-user-settings
2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=VxLwh namespace=openshift-console-user-settings
2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-console" id=4F67A namespace=openshift-console
2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=ncGTk namespace=openshift-host-network
2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=ncGTk namespace=openshift-host-network
2025-08-13T20:23:29.479273590+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=VxLwh namespace=openshift-console-user-settings
2025-08-13T20:23:29.480161175+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=kUVi4 namespace=openshift-kni-infra
2025-08-13T20:23:29.480161175+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=kUVi4 namespace=openshift-kni-infra
2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=ncGTk namespace=openshift-host-network
2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=AQ+aa namespace=openshift-kube-controller-manager
2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=AQ+aa namespace=openshift-kube-controller-manager
2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=kUVi4 namespace=openshift-kni-infra
2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=lZ2JR namespace=kube-public
2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=lZ2JR namespace=kube-public
2025-08-13T20:23:30.078891927+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=AQ+aa namespace=openshift-kube-controller-manager
2025-08-13T20:23:30.078938818+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=0sdrF namespace=openshift-cluster-machine-approver
2025-08-13T20:23:30.078938818+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=0sdrF namespace=openshift-cluster-machine-approver
2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace kube-public" id=lZ2JR namespace=kube-public
2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=jWYDM namespace=openshift-cluster-version
2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=jWYDM namespace=openshift-cluster-version
2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=0sdrF namespace=openshift-cluster-machine-approver
2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=ODqCf namespace=openshift-vsphere-infra
2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=ODqCf namespace=openshift-vsphere-infra
2025-08-13T20:23:30.685425780+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=jWYDM namespace=openshift-cluster-version
2025-08-13T20:23:30.685467922+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=YADVZ namespace=default
2025-08-13T20:23:30.685467922+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=YADVZ namespace=default
2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=ODqCf namespace=openshift-vsphere-infra
2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=0dv1p namespace=openshift-etcd
2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=0dv1p namespace=openshift-etcd
2025-08-13T20:23:31.080339247+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace default" id=YADVZ namespace=default
2025-08-13T20:23:31.080339247+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=B89A7 namespace=openshift-operators
2025-08-13T20:23:31.080339247+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=B89A7 namespace=openshift-operators
2025-08-13T20:23:31.279559751+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=0dv1p namespace=openshift-etcd
2025-08-13T20:23:31.279559751+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=h8mP3 namespace=openshift-kube-controller-manager-operator
2025-08-13T20:23:31.279559751+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=h8mP3 namespace=openshift-kube-controller-manager-operator
2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=B89A7 namespace=openshift-operators
2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=LBeOL namespace=openshift-machine-config-operator
2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=LBeOL namespace=openshift-machine-config-operator
2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=h8mP3 namespace=openshift-kube-controller-manager-operator
2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=h1KWz namespace=openshift-network-node-identity
2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=h1KWz namespace=openshift-network-node-identity
2025-08-13T20:23:31.879869898+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=LBeOL namespace=openshift-machine-config-operator
2025-08-13T20:23:31.879929879+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=34qy7 namespace=openshift-user-workload-monitoring
2025-08-13T20:23:31.879929879+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=34qy7 namespace=openshift-user-workload-monitoring
2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=h1KWz namespace=openshift-network-node-identity
2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=LE6nr namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=LE6nr namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=34qy7 namespace=openshift-user-workload-monitoring
2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=ZqsjM namespace=openshift-network-operator
2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=ZqsjM namespace=openshift-network-operator
2025-08-13T20:23:32.477210620+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=LE6nr namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:23:32.477394285+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=489c1 namespace=openshift-oauth-apiserver
2025-08-13T20:23:32.477449906+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=489c1 namespace=openshift-oauth-apiserver
2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=ZqsjM namespace=openshift-network-operator
2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=lrgTb namespace=openshift-node
2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=lrgTb namespace=openshift-node
2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=489c1 namespace=openshift-oauth-apiserver
2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=vq/Dx namespace=openshift-controller-manager
2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=vq/Dx namespace=openshift-controller-manager
2025-08-13T20:23:33.079031800+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-node" id=lrgTb namespace=openshift-node
2025-08-13T20:23:33.079093591+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=4aVqI namespace=openshift-kube-storage-version-migrator
2025-08-13T20:23:33.079093591+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=4aVqI namespace=openshift-kube-storage-version-migrator
2025-08-13T20:23:33.281458495+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=vq/Dx namespace=openshift-controller-manager
2025-08-13T20:23:33.281458495+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=X2XhN namespace=openshift-marketplace
2025-08-13T20:23:33.281458495+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=X2XhN namespace=openshift-marketplace
2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=4aVqI namespace=openshift-kube-storage-version-migrator
2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=L94Te namespace=openshift-config-operator
2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=L94Te namespace=openshift-config-operator
2025-08-13T20:23:33.680558511+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=X2XhN namespace=openshift-marketplace
2025-08-13T20:23:33.680625313+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=rcLeM namespace=openshift-kube-scheduler-operator
2025-08-13T20:23:33.680625313+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=rcLeM namespace=openshift-kube-scheduler-operator
2025-08-13T20:23:33.880255458+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=L94Te namespace=openshift-config-operator
2025-08-13T20:23:33.880255458+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=ByakK namespace=openshift-image-registry
2025-08-13T20:23:33.880255458+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=ByakK namespace=openshift-image-registry
2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=rcLeM namespace=openshift-kube-scheduler-operator
2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=tqBUa namespace=hostpath-provisioner
2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=tqBUa namespace=hostpath-provisioner
2025-08-13T20:23:34.279279401+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=ByakK namespace=openshift-image-registry
2025-08-13T20:23:34.279338033+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=ttsJq namespace=openshift-authentication-operator
2025-08-13T20:23:34.279338033+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=ttsJq namespace=openshift-authentication-operator
2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=tqBUa namespace=hostpath-provisioner
2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=A3r/Z namespace=openshift-etcd-operator
2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=A3r/Z namespace=openshift-etcd-operator
2025-08-13T20:23:34.679611322+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=ttsJq namespace=openshift-authentication-operator
2025-08-13T20:23:34.679729226+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=8fxIB namespace=openshift-cluster-storage-operator
2025-08-13T20:23:34.679864840+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=8fxIB namespace=openshift-cluster-storage-operator
2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=A3r/Z namespace=openshift-etcd-operator
2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=w6Ebf namespace=openshift-config-managed
2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=w6Ebf namespace=openshift-config-managed
2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=8fxIB namespace=openshift-cluster-storage-operator
2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=27YHB namespace=openshift-kube-apiserver
2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=27YHB namespace=openshift-kube-apiserver
2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=w6Ebf namespace=openshift-config-managed
2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=g5QKY namespace=kube-system
2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=g5QKY namespace=kube-system
2025-08-13T20:23:35.479221045+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace
openshift-kube-apiserver" id=27YHB namespace=openshift-kube-apiserver 2025-08-13T20:23:35.479338269+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:35.479374890+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:35.679251252+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace kube-system" id=g5QKY namespace=kube-system 2025-08-13T20:23:35.679371545+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:23:35.679412346+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:23:35.879907187+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:36.080542461+00:00 stderr F time="2025-08-13T20:23:36Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:26:10.557713661+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.557975158+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.558019659+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.558225385+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.568387246+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.568607772+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569072225+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569218189+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569314292+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=jjdyi 2025-08-13T20:26:10.569368614+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569411965+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569503438+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569503438+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569521388+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=82cCN 2025-08-13T20:26:10.569521388+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569549739+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.586239956+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586475883+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586475883+00:00 stderr F 
time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586656668+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586656668+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586729980+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.586942976+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.587275366+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587413600+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587457221+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587657317+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587725259+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587913264+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.588059168+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 
stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.591514757+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591539028+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591551748+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591563829+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=N73fY 2025-08-13T20:26:10.591573769+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591573769+00:00 stderr F 
time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.764340349+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764581136+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764621647+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764746011+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764856804+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.963193385+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963378190+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963378190+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963556586+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963570866+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:51.378101812+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.378101812+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.378541324+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.378604646+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 
2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.384194766+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384194766+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384215967+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label 
selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384225547+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Fbx2n 2025-08-13T20:26:51.384234957+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384234957+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407581495+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407823032+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407823032+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408155261+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408155261+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408313286+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.408313286+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 
10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.412890567+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XVosD 2025-08-13T20:26:51.413051251+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413051251+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.415528632+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=mMHet 2025-08-13T20:26:51.415630715+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415640495+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.582409004+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582568818+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582568818+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582686762+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582739173+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.787308723+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787364754+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787364754+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 
2025-08-13T20:26:51.787450447+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet
2025-08-13T20:26:51.787450447+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet
2025-08-13T20:27:02.910862330+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=kfcha namespace=openshift-monitoring
2025-08-13T20:27:02.910862330+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=kfcha namespace=openshift-monitoring
2025-08-13T20:27:02.910954953+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=L/JIx namespace=openshift-operator-lifecycle-manager
2025-08-13T20:27:02.910954953+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=L/JIx namespace=openshift-operator-lifecycle-manager
2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=L/JIx namespace=openshift-operator-lifecycle-manager
2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=OATTi namespace=openshift-operators
2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=OATTi namespace=openshift-operators
2025-08-13T20:27:02.919337803+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=kfcha namespace=openshift-monitoring
2025-08-13T20:27:02.921322119+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=OATTi namespace=openshift-operators
2025-08-13T20:27:05.587701902+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.587701902+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.588240057+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.588240057+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.592149169+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.592277173+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.592456148+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.592527050+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.592573421+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Dg+Zu
2025-08-13T20:27:05.592618812+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.592650153+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.592747646+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.592747646+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.592764706+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=u13ew
2025-08-13T20:27:05.592825458+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.592825458+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.604485182+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.604519133+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.604519133+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="catalog update required at 2025-08-13 20:27:05.604565974 +0000 UTC m=+1681.255954091" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.608475986+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.609234727+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.609306930+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.609376271+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=7QFOv
2025-08-13T20:27:05.609411883+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.609442743+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.630010321+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.630208127+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.630247778+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.630540807+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="catalog update required at 2025-08-13 20:27:05.63031641 +0000 UTC m=+1681.281704527" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.631330109+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.648020906+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.648020906+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.817190904+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.845462672+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:05.845653888+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:05.992745104+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.992979530+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.993040852+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.993040852+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=uIXIK
2025-08-13T20:27:05.993040852+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.993065643+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.197191580+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.197351684+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.197351684+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=X7mdk
2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.793177852+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793548082+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793593863+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793714697+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793750328+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793931393+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:06.794043356+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.192680665+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.192848760+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e6I4b
2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.393473886+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.992046822+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.992230498+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.992230498+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.992306210+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.993027990+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:07.993027990+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:08.199428362+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.199428362+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.593088429+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.593317615+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.593372507+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.593428788+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=QgpZY
2025-08-13T20:27:08.593471799+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.593515401+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.392242049+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.393138455+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.393260138+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.393695830+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.394145433+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.394391530+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.798720253+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.799315680+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.799440413+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.799544036+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1ECed
2025-08-13T20:27:09.799696700+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.799976328+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:10.394728315+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:10.395145047+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:10.395195749+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:10.395315932+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace
id=1ECed 2025-08-13T20:27:15.986079095+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.986217479+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.994691361+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.995077322+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.995133143+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.995188995+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=usioD 2025-08-13T20:27:15.995220696+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.995250277+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 
2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.013114138+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:18.640321989+00:00 stderr F time="2025-08-13T20:27:18Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:18.640387931+00:00 stderr F time="2025-08-13T20:27:18Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.021962132+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022034294+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022034294+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.108886688+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.109065703+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=4kQ+B 
2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:20.031982022+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.031982022+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038391595+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038626582+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038665963+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038708324+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=o+5uM 2025-08-13T20:27:20.038754476+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038842488+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:26.208424501+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.208424501+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218349525+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218567871+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236246947+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236426812+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236426812+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236544435+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 
2025-08-13T20:27:26.341705882+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.341705882+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348538678+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348749164+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 
2025-08-13T20:27:26.362693263+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363116775+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363181356+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363332101+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:27.200910640+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.200910640+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209475655+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ppEqc 2025-08-13T20:27:27.209613099+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209613099+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243302082+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.243302082+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254441610+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254681467+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254681467+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254701548+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=YUGao 2025-08-13T20:27:27.254713588+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254725039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring 
registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.261695998+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.261695998+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.263760517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.263760517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274579836+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274873295+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274891105+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274907706+00:00 stderr F time="2025-08-13T20:27:27Z" level=info 
msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=lMTM+ 2025-08-13T20:27:27.274907706+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274960297+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274960297+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.275287447+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.275304057+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435446946+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435533299+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435661092+00:00 stderr F 
time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:27.435724374+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:27.612335504+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+
2025-08-13T20:27:27.612424517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+
2025-08-13T20:27:27.612424517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+
2025-08-13T20:27:27.813162157+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Pgq+4
2025-08-13T20:27:27.813255879+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:27.813255879+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:28.018274392+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+
2025-08-13T20:27:28.018274392+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+
2025-08-13T20:27:28.414124611+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:28.414321676+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:28.414321676+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:28.614579873+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:28.614579873+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4
2025-08-13T20:27:29.587962126+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.587962126+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.594576925+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.594716109+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.594729579+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=VPS8m
2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.616736668+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.616736668+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m
2025-08-13T20:27:29.630929544+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.631161171+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.631161171+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.631176021+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=xYlza
2025-08-13T20:27:29.631176021+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.631186272+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.813262168+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.813532566+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.813624288+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:29.813756632+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza
2025-08-13T20:27:30.145874999+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.145874999+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jA0Zq
2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.190230007+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:30.190230007+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:30.413088979+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=DnChm
2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:30.414129448+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:30.612113699+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.612253553+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.612253553+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.612269604+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.612279504+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq
2025-08-13T20:27:30.612460769+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:30.612460769+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.014216317+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.014375462+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.014390642+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=GOJCo
2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.212750144+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:31.212984231+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:31.212984231+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:31.213207357+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:31.213207357+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm
2025-08-13T20:27:31.213262039+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:31.213262039+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:31.612159995+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:31.612246318+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:31.612258898+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:31.612268088+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=b/75+
2025-08-13T20:27:31.612268088+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:31.612277558+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:31.812117543+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.812200225+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.812200225+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.812390290+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:31.812390290+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo
2025-08-13T20:27:32.212036668+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:32.212237254+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:32.212237254+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:32.212348927+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:32.212348927+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+
2025-08-13T20:27:35.632209844+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.632209844+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.635748505+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.636015453+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.636015453+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.636046523+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Wr/Yi
2025-08-13T20:27:35.636060424+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.636060424+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.646249685+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.646408050+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.646408050+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.646563154+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.646563154+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi
2025-08-13T20:27:35.818397507+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.818397507+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.822343920+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XKv1U
2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.833224791+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.833372116+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.833372116+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.834251871+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:27:35.834251871+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U
2025-08-13T20:28:43.254469814+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.254538446+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.259174940+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.260038304+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.260137017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=lmNDI
2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.279125173+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.279191525+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.279204205+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.279445772+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="catalog update required at 2025-08-13 20:28:43.279262877 +0000 UTC m=+1778.930651114" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.310497095+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI
2025-08-13T20:28:43.333983480+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.333983480+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+dw+v
2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.376674797+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.377066058+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.377131780+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.377261594+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v
2025-08-13T20:28:43.377602304+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.377688916+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.385157771+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.385536162+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.385599744+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.385647325+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=orR+p
2025-08-13T20:28:43.385686206+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.385724237+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.407342689+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.407498503+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.407516784+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.407669618+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p
2025-08-13T20:28:43.407986017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx
2025-08-13T20:28:43.407986017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx
2025-08-13T20:28:43.460921429+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx
2025-08-13T20:28:43.460984801+00:00 stderr F time="2025-08-13T20:28:43Z"
level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461026952+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461044412+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=VzCkx 2025-08-13T20:28:43.461056563+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461056563+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.863512192+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864122329+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864190921+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864355826+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:44.065517389+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.065517389+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069128302+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069128302+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460311807+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460738529+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460925065+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.461180082+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.686685004+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.686893840+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="synchronizing registry server" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692123461+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692302356+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692302356+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692316516+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=j9Ilt 2025-08-13T20:28:44.692326567+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692336137+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.059853951+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060085458+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060085458+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060159620+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.700760064+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.701313660+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.706951412+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707230380+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707278661+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707317832+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1eV/T 2025-08-13T20:28:45.707350223+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707380704+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.720619575+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721039397+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721089478+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721202291+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:58.823046762+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.823046762+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828352615+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828554991+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828554991+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="ensuring registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853182689+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853304382+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853304382+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853536609+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:59.821934486+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.821934486+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835347291+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835511536+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835511536+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835528157+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Cs38y 2025-08-13T20:28:59.835528157+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835539857+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:29:03.819867187+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.819867187+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828040382+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
health=true id=XHcJM 2025-08-13T20:29:03.828116945+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828116945+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846457512+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846674518+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846674518+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846767951+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.847152722+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.847152722+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850550909+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869118033+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869326689+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869343280+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869458703+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.952824390+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.954735224+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958091961+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958263086+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958312147+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958349778+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=31e1z 2025-08-13T20:29:03.958381039+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958411290+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.990631976+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.990631976+00:00 stderr F 
time="2025-08-13T20:29:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.991181802+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:03.991248434+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024139789+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024275293+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024275293+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024318385+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=MbB2T 2025-08-13T20:29:04.024340705+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 
2025-08-13T20:29:04.024340705+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.422918813+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.423215961+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.423215961+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.625743933+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.625743933+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:06.308482025+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.308965369+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.313576681+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.313885620+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.313946132+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.314136067+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=QlN5u 2025-08-13T20:29:06.314181639+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.314245120+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.332115274+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.332262018+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.332262018+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.332358481+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.895144589+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.895237722+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899833244+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899922396+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899922396+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899935767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=AKAco 2025-08-13T20:29:06.899945767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899945767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.928347703+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.928548919+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.928548919+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.928665273+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
id=AKAco 2025-08-13T20:29:07.468727926+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.468904331+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 
2025-08-13T20:29:07.489868624+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490161342+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490161342+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490364788+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490364788+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490537513+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.490537513+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498314657+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=/DcUI 2025-08-13T20:29:07.498476511+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498476511+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524169690+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524287303+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524287303+00:00 stderr F 
time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524534580+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524534580+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:13.310707337+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.310707337+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315120664+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Qy4PJ 2025-08-13T20:29:13.315419963+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315419963+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.331589987+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.332102452+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.332164764+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.332345859+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.332396120+00:00 stderr F 
time="2025-08-13T20:29:13Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:25.860089594+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=IDLE" 2025-08-13T20:29:25.861642709+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=IDLE" 2025-08-13T20:29:25.861642709+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2025-08-13T20:29:25.861876145+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.861904356+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.862066681+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.862136663+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.862667748+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING" 2025-08-13T20:29:25.871682017+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
id=ODM5w 2025-08-13T20:29:25.872280065+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.872413568+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.872543412+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ODM5w 2025-08-13T20:29:25.872703947+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.872856731+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.873487219+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.873558501+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.873571482+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 
pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.873583572+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=bHhfJ 2025-08-13T20:29:25.873661704+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.873661704+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.886810192+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY" 2025-08-13T20:29:25.889217131+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY" 2025-08-13T20:29:25.889388116+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="resolving sources" id=lbixZ namespace=openshift-marketplace 2025-08-13T20:29:25.889428317+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checking if subscriptions need update" id=lbixZ namespace=openshift-marketplace 2025-08-13T20:29:25.892328661+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.892516766+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.892576638+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.892613499+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.892702362+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=lbixZ namespace=openshift-marketplace 2025-08-13T20:29:25.892873136+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.892942178+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.893078802+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.893124664+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="requeuing registry 
server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.893542786+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.893594217+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.908448774+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.908448774+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.909181525+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.920732907+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929641733+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929762747+00:00 
stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.067388593+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.067605369+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.067605369+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.067731293+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.067731293+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 
2025-08-13T20:29:26.074127517+00:00 stderr F time="2025-08-13T20:29:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"certified-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Ag3x9 2025-08-13T20:29:26.080554121+00:00 stderr F E0813 20:29:26.080435 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "certified-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:29:26.080590313+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.080590313+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.291902467+00:00 stderr F time="2025-08-13T20:29:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=9p/HG 2025-08-13T20:29:26.292289898+00:00 stderr F E0813 20:29:26.292263 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:29:26.292381251+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.292439932+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.471669454+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.472093807+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.472144748+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.472197850+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=IVAIN 2025-08-13T20:29:26.472254471+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.472286312+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.669128741+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.669350997+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.669390618+00:00 stderr F 
time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.669429659+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=qvytb 2025-08-13T20:29:26.669463340+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.669494351+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.130067450+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=IDLE" 2025-08-13T20:29:27.130067450+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY" 2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="resolving sources" id=Iz3// namespace=openshift-marketplace 2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checking if subscriptions need update" id=Iz3// namespace=openshift-marketplace 2025-08-13T20:29:27.141432467+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="No subscriptions were found in namespace 
openshift-marketplace" id=Iz3// namespace=openshift-marketplace 2025-08-13T20:29:27.267853131+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268166730+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268208791+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268346585+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268390547+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.286310242+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.289581716+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.343343171+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace 
state.Key.Name=redhat-operators state.State=IDLE" 2025-08-13T20:29:27.343343171+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-08-13T20:29:27.351528807+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY" 2025-08-13T20:29:27.351769783+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="resolving sources" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.351913928+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checking if subscriptions need update" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.355880932+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.467052507+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467199872+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467199872+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467314815+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensured registry server" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467314815+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.485198729+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.485312222+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.667667774+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.667957622+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668053655+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668101757+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jYXYT 2025-08-13T20:29:27.668139228+00:00 stderr F time="2025-08-13T20:29:27Z" level=info 
msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668233650+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for 
current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.490677703+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.490677703+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating 
current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.690657910+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:28.690657910+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.067641717+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 
2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.673468052+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="catalog update required at 2025-08-13 20:29:29.673964586 +0000 UTC m=+1825.325352703" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.902366472+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:29.902366472+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.086921327+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:30.118863966+00:00 stderr F time="2025-08-13T20:29:30Z" level=info 
msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.118863966+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="searching for 
current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace 
id=GzliX 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075965238+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075965238+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.076230606+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.076230606+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.277869532+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278112969+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278154690+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278281154+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278316915+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278427688+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.278528761+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.670110057+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.268088386+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.268333793+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.268375374+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.268557919+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.268596570+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668717822+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074224299+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074672562+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074721983+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074868407+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 
2025-08-13T20:29:33.074958670+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.075066323+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274758184+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.673948288+00:00 
stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.674263387+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.674334649+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.674479074+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:30:00.087849848+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.087849848+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093647334+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093815869+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093815869+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093848830+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=qnVZ4 2025-08-13T20:30:00.093848830+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093859011+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:01.153067387+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:01.154602241+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:01.154602241+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:01.154681723+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:07.395009878+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.395009878+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.403667227+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.404198012+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.404251244+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.404293905+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=dPB/P 2025-08-13T20:30:07.404431929+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="registry state good" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.404467380+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.429060657+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.429265763+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.429359416+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.429459739+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:08.381187566+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.381409602+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.391396369+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.391868703+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.391966646+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.392115790+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=z9jxi 2025-08-13T20:30:08.392218983+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.392318836+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.411661382+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.412101244+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.412224308+00:00 
stderr F time="2025-08-13T20:30:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.412458885+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:30.678726219+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.680324975+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696571992+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696663364+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696663364+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696677685+00:00 stderr F 
time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=0ogZg 2025-08-13T20:30:30.696728886+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696728886+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.824839489+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.825205609+00:00 
stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.833672703+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.847428678+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.847579832+00:00 stderr F time="2025-08-13T20:30:30Z" 
level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.847579832+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.862139941+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.862139941+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.862649906+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.863433168+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.880538440+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.880734075+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.880734075+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog 
2025-08-13T20:30:30.896538670+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.896538670+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:31.156699008+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.156767480+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=06773 
2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.684967394+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.684967394+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:32.978336031+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.978336031+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993658712+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993713763+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993724044+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993736414+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Q/ZaW
2025-08-13T20:30:32.993746194+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993746194+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.008257061+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.008360794+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.008374535+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.008452197+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.566198690+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.566198690+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570876814+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570933446+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570945806+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570955057+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Lr1gT
2025-08-13T20:30:33.570964237+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570964237+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:34.179811709+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.179811709+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190097114+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Ol4C3
2025-08-13T20:30:34.190097114+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190097114+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.201019318+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.201146142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.201146142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.201285946+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.201285946+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.201343868+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.201343868+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.204170099+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.204192700+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.204202330+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.204225051+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=IDV2+
2025-08-13T20:30:34.204225051+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.204241781+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.217487252+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.217738399+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.217738399+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.217754369+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:30:34.217754369+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+
2025-08-13T20:31:03.010201680+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.010201680+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.019081985+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=g3ts9
2025-08-13T20:31:03.019250450+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.019250450+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.034408246+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.034641553+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.034641553+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.034722565+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:31:03.034722565+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9
2025-08-13T20:35:01.852202125+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.853252566+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.853374639+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.853462172+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.862532792+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.863551292+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.863916342+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.863982874+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.864066337+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=rC0bJ
2025-08-13T20:35:01.864156769+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.864251112+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.867295509+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.867501785+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.867715691+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=FHAUm
2025-08-13T20:35:01.867848785+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.867940438+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.883066263+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.883214097+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.883214097+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.883411403+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.883411403+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ
2025-08-13T20:35:01.883555537+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:01.883908367+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.884227916+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:01.884511274+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.884619467+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.884982768+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.885125652+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm
2025-08-13T20:35:01.885442081+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:01.885664577+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:01.887966714+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:01.888236191+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:01.888306573+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:01.888385266+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=HxuCG
2025-08-13T20:35:01.888419297+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:01.888462108+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:01.889839087+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:01.890179037+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:01.890303701+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:01.890431634+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=s7WmR
2025-08-13T20:35:01.890742873+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:01.890963630+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:02.058641240+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:02.058641240+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG
2025-08-13T20:35:02.257482765+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:02.257756573+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:02.258041451+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:02.258291279+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:35:02.258360451+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR
2025-08-13T20:36:53.388034701+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=VFOrE namespace=openshift-config
2025-08-13T20:36:53.388034701+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=VFOrE namespace=openshift-config
2025-08-13T20:36:53.388325389+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=JJ+8B namespace=openshift-apiserver-operator
2025-08-13T20:36:53.388325389+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=JJ+8B namespace=openshift-apiserver-operator
2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-config" id=VFOrE namespace=openshift-config
2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=rBOYg namespace=openshift-dns-operator
2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=rBOYg namespace=openshift-dns-operator
2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=JJ+8B namespace=openshift-apiserver-operator
2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=stn8i namespace=openshift-openstack-infra
2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=stn8i namespace=openshift-openstack-infra
2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=stn8i namespace=openshift-openstack-infra
2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=T2eSg namespace=openshift-route-controller-manager
2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=T2eSg namespace=openshift-route-controller-manager
2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=rBOYg namespace=openshift-dns-operator
2025-08-13T20:36:53.399407719+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=oxUpj namespace=openshift-kube-controller-manager
2025-08-13T20:36:53.399407719+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=oxUpj namespace=openshift-kube-controller-manager
2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=oxUpj namespace=openshift-kube-controller-manager
2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=SBluW namespace=kube-public
2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=SBluW namespace=kube-public
2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=T2eSg namespace=openshift-route-controller-manager
2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=lGYyO namespace=openshift-cluster-machine-approver
2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=lGYyO namespace=openshift-cluster-machine-approver
2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace kube-public" id=SBluW namespace=kube-public
2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=rEkak namespace=openshift-cluster-version
2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=rEkak namespace=openshift-cluster-version
2025-08-13T20:36:53.406287067+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=lGYyO namespace=openshift-cluster-machine-approver
2025-08-13T20:36:53.406287067+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=IE4fJ namespace=openshift-console
2025-08-13T20:36:53.406287067+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=IE4fJ namespace=openshift-console
2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-console" id=IE4fJ namespace=openshift-console
2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=wMzTv namespace=openshift-console-user-settings
2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=wMzTv namespace=openshift-console-user-settings
2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=rEkak namespace=openshift-cluster-version
2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=nVLri namespace=openshift-host-network
2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=nVLri namespace=openshift-host-network
2025-08-13T20:36:53.595268516+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=wMzTv namespace=openshift-console-user-settings
2025-08-13T20:36:53.595308527+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=RdSIx namespace=openshift-kni-infra
2025-08-13T20:36:53.595308527+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=RdSIx namespace=openshift-kni-infra
2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=nVLri namespace=openshift-host-network
2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=VjCvs namespace=openshift-vsphere-infra
2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=VjCvs namespace=openshift-vsphere-infra
2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=RdSIx namespace=openshift-kni-infra
2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=Db5z+ namespace=default
2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=Db5z+ namespace=default
2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=VjCvs namespace=openshift-vsphere-infra
2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=hcG5S namespace=openshift-etcd
2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=hcG5S namespace=openshift-etcd
2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace default" id=Db5z+ namespace=default
2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=XSilc namespace=openshift-kube-controller-manager-operator
2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking
if subscriptions need update" id=XSilc namespace=openshift-kube-controller-manager-operator 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=hcG5S namespace=openshift-etcd 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=XSilc namespace=openshift-kube-controller-manager-operator 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=ljE/t namespace=openshift-network-node-identity 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=ljE/t namespace=openshift-network-node-identity 2025-08-13T20:36:54.992814608+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.992814608+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:54.992941382+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:55.197920681+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=ljE/t namespace=openshift-network-node-identity 2025-08-13T20:36:55.197920681+00:00 stderr F 
time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.197920681+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=E1REP namespace=openshift-user-workload-monitoring 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=E1REP namespace=openshift-user-workload-monitoring 
2025-08-13T20:36:55.991897921+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.991949463+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:55.991949463+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=E1REP namespace=openshift-user-workload-monitoring 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.393842649+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:56.393907951+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.393907951+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=zlUkI 
namespace=openshift-node 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=zlUkI namespace=openshift-node 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:56.992554081+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-node" id=zlUkI namespace=openshift-node 2025-08-13T20:36:56.992604432+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=Piv70 namespace=openshift-kube-scheduler-operator 2025-08-13T20:36:56.992604432+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=Piv70 namespace=openshift-kube-scheduler-operator 2025-08-13T20:36:57.191831596+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:57.191893087+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.191893087+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.392847441+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=Piv70 namespace=openshift-kube-scheduler-operator 
2025-08-13T20:36:57.392905693+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.392905693+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=yhel9 namespace=kube-system 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=yhel9 namespace=kube-system 2025-08-13T20:36:58.191854536+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions 
were found in namespace openshift-image-registry" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:58.191919628+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.191919628+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace kube-system" id=yhel9 namespace=kube-system 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.593005012+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.593005012+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.593066134+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" 
level=info msg="checking if subscriptions need update" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:59.191148527+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:59.191266030+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.191300851+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.391858323+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:59.392011648+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.392056369+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.593216928+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.593342931+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=sTFxb 
namespace=openshift-dns 2025-08-13T20:36:59.593383303+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=sTFxb namespace=openshift-dns 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=5vN/7 namespace=openshift-infra 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=5vN/7 namespace=openshift-infra 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=sTFxb namespace=openshift-dns 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=5vN/7 namespace=openshift-infra 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.392604635+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.392604635+00:00 stderr F 
time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=yhBsE namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.392604635+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=yhBsE namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=rtok6 namespace=openshift 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=rtok6 namespace=openshift 2025-08-13T20:37:00.793269346+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=yhBsE namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.793269346+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=iZvx5 namespace=openshift-cloud-platform-infra 2025-08-13T20:37:00.793302827+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=iZvx5 namespace=openshift-cloud-platform-infra 2025-08-13T20:37:00.993206561+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift" id=rtok6 namespace=openshift 2025-08-13T20:37:00.993246192+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:00.993246192+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:01.193548277+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=iZvx5 
namespace=openshift-cloud-platform-infra 2025-08-13T20:37:01.193548277+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.193596558+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.392715689+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:01.392715689+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.392851213+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.593042834+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.593042834+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.593103245+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=2SdCa namespace=openshift-controller-manager-operator 2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=2SdCa namespace=openshift-controller-manager-operator 
2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=2SdCa namespace=openshift-controller-manager-operator 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.393097139+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:02.393097139+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.393186041+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.592714244+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.592714244+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.592714244+00:00 stderr F 
time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:03.192958918+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:03.192958918+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.192958918+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=n2bJb namespace=openshift-kube-scheduler 
2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=n2bJb namespace=openshift-kube-scheduler 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=n2bJb namespace=openshift-kube-scheduler 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:03.993434825+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:04.192877725+00:00 stderr F time="2025-08-13T20:37:04Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:48.140845145+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.140845145+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.149819614+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150244806+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150244806+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=LJPfk 2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169262244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169486381+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169486381+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169585623+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="catalog update required at 2025-08-13 20:37:48.169524322 +0000 UTC m=+2323.820912439" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.202895864+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.235902465+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.235902465+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250294840+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250387523+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250387523+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250403694+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=V0BeR 2025-08-13T20:37:48.250416114+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250416114+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR 
2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310183017+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310202588+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.310218458+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346458733+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346649158+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346649158+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346667969+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=exBoo 2025-08-13T20:37:48.346667969+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346683899+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo 
2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.901461744+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.901461744+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E 
2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.652853206+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.659175078+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.666604712+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.667440096+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.667749245+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.667766916+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OvvLL 2025-08-13T20:37:49.667766916+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.667854928+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:50.659916190+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.659916190+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW 
2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.677160507+00:00 stderr F 
time="2025-08-13T20:37:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:54.702537060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.702537060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 
2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.719991153+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.720081666+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.720094756+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.720294862+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:56.510581626+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.510581626+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:57.176652949+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:57.176819174+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:57.176819174+00:00 stderr F 
time="2025-08-13T20:37:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:57.176939727+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:58.709366357+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.709366357+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716500383+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716531534+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716541624+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716561715+00:00 stderr F 
time="2025-08-13T20:37:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=v6ChI 2025-08-13T20:37:58.716561715+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716571635+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.730188238+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.730385453+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.730385453+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.730493686+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:38:08.712229842+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 
2025-08-13T20:38:08.712864230+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.723938010+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.724248169+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.724316031+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.724378502+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=inZ7f 2025-08-13T20:38:08.724420964+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.724460165+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.734251827+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 
2025-08-13T20:38:08.734417332+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.734417332+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.753557394+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.753748839+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.757069885+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.757069885+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.779865862+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.780030897+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.780257533+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.785683050+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.785683050+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:09.211962620+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.211962620+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.244304872+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.244395695+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.244395695+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.244537729+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.827574717+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.827574717+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833049285+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866775578+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866913492+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.867066806+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.867117897+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=s+jRy
2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:10.120128792+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:10.120208544+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:10.120208544+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:10.120326007+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:10.120326007+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:18.205309937+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.205309937+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.209619681+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.209919810+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.210004862+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.210114695+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=/UUWu
2025-08-13T20:38:18.210206248+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.210281980+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.229032341+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.229131134+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.229131134+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.229330149+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:18.229330149+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu
2025-08-13T20:38:27.984278184+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:27.984278184+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:27.989278628+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Ld4lM
2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:27.989432803+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:28.002103098+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:28.002278123+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:28.002278123+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:28.002394966+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:28.002394966+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM
2025-08-13T20:38:36.025954327+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.025954327+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.033188556+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.033395082+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.033395082+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.033409762+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mCjzB
2025-08-13T20:38:36.033419982+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.033419982+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.046728436+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.046952883+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.046952883+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.047027775+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="catalog update required at 2025-08-13 20:38:36.046963033 +0000 UTC m=+2371.698351150" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.076431152+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB
2025-08-13T20:38:36.095997827+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.096064658+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.101432033+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.101737922+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.101888866+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.101944148+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=w3m+T
2025-08-13T20:38:36.101993429+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.102024590+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.119409181+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.119694810+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.119854244+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.119982568+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T
2025-08-13T20:38:36.144541655+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.144623367+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.154501002+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=8BKfc
2025-08-13T20:38:36.154645996+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.154645996+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.174859609+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.175226120+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.175274451+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.175394795+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc
2025-08-13T20:38:36.175509788+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.175567700+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=OcfXv
2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.635668854+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.635745037+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.635745037+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.636006694+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv
2025-08-13T20:38:36.809771714+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:36.815577961+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:36.835503266+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=1C3N0
2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:37.231557874+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:37.231700468+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:37.231749590+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:37.232066649+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0
2025-08-13T20:38:38.040583679+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.040583679+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=d0AYk
2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk
2025-08-13T20:38:39.052584605+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.052584605+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.056694574+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.056851708+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.056869919+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.056869919+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=yPMA5
2025-08-13T20:38:39.056909940+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.056909940+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:39.071094319+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5
2025-08-13T20:38:44.095858713+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="syncing catalog source"
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.095858713+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101091804+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mFy1J 2025-08-13T20:38:44.101312300+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101312300+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.111966228+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112211775+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112326378+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112477592+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:45.083822447+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.083822447+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090675814+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090946492+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090995984+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.091035015+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e92MU 2025-08-13T20:38:45.091067476+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.091098456+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102374282+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102573637+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102573637+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace 
id=e92MU 2025-08-13T20:38:45.102639139+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:46.581209017+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.581315311+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590748013+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=1XQaS 2025-08-13T20:38:46.590888397+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="registry state good" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590888397+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.602829551+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603099199+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603099199+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603118799+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:56.600761379+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.600761379+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.606826364+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.624625037+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.624850884+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 
2025-08-13T20:38:56.624850884+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.635995575+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.635995575+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.640464834+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.640464834+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649345390+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649572396+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649627118+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649676889+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=y9XcG 2025-08-13T20:38:56.649722221+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649763432+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663227400+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663358624+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663358624+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.668764420+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 
2025-08-13T20:38:56.668926914+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:57.573863334+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.573863334+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="registry state good" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592442830+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592547973+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592598544+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592670976+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:58.200207671+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.200207671+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=am3VE 2025-08-13T20:38:58.204892796+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204892796+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 
2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222947846+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.225684215+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.225684215+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.229899257+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 
2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246645790+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246645790+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246723762+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.246752153+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249256685+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249283486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249283486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249299486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RxOF8 2025-08-13T20:38:58.249311097+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249311097+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.605917108+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606131374+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606131374+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606323190+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606323190+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:39:06.078032810+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.078032810+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086442792+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=jXQ9t 2025-08-13T20:39:06.086792913+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.087428181+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098113519+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098339335+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 
2025-08-13T20:39:06.098339335+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098408087+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098408087+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:41:21.359712161+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.360188655+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.375548007+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376392892+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376455903+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376567807+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JA2JG 2025-08-13T20:41:21.376613018+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376644079+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398404336+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398714105+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398935692+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.399160668+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="catalog update required at 2025-08-13 20:41:21.399119637 +0000 UTC m=+2537.050507884" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 
2025-08-13T20:41:21.436382651+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.456508562+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.456702157+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.491948363+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492104778+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492104778+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492119768+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492539550+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.492539550+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.524944875+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry 
server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966515775+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966740402+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:21.966754632+00:00 stderr F time="2025-08-13T20:41:21Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568117450+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568162071+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568176081+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568338156+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568922683+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.568922683+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.764863052+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.199135652+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.199298947+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365362345+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365735835+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365899840+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365962612+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=aQO+y 2025-08-13T20:41:23.366007463+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.366048504+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765080429+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765328116+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765372407+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765521581+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:24.189760582+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.189921307+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195527539+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195663143+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195702974+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195784976+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JTrAU 2025-08-13T20:41:24.195873519+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195923920+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:49.595426319+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.597122208+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" 
level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:50.478897890+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.479014664+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.484344197+00:00 stderr F time="2025-08-13T20:41:50Z" 
level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.485094339+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.485154241+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.485255504+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=aMQtg 2025-08-13T20:41:50.485363477+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.485441989+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.508896725+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.509281986+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace 
id=aMQtg 2025-08-13T20:41:50.509281986+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.509305417+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:51.436771096+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.436771096+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.444349654+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3 
2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:42:12.028532201+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
id=XMdr1 2025-08-13T20:42:12.028532201+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.049278609+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 
2025-08-13T20:42:12.049386252+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.049386252+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.049439004+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.132087226+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.132206940+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.141983392+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info 
msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.172837501+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.173045127+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.178349390+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.178349390+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190517391+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=SyITD 2025-08-13T20:42:12.190695276+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190695276+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.234955482+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.234955482+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:14.280544576+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.280575917+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=umXeC 2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.301883032+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.301883032+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.301922413+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.302129409+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.700878763+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.700878763+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.742541894+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.742769351+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.742873784+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.742977057+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 
2025-08-13T20:42:15.645178848+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.645371383+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656382491+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656490364+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656490364+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656508114+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=HqDrp 2025-08-13T20:42:15.656508114+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656521225+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.699921786+00:00 
stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.700378689+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.700378689+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.700669237+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.700669237+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.701007197+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.701007197+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715175146+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715338750+00:00 stderr F 
time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715338750+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715338750+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Fz3M4 2025-08-13T20:42:15.715401792+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715401792+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738736995+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738940741+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:21.467882987+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.467882987+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.471869692+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.472009786+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.472009786+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo 
2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=nrngo 2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486397541+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486512274+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486512274+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486632568+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486632568+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 
2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391838604+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.392165434+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.392253346+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.392324568+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=R4BYM
2025-08-13T20:42:25.392357319+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.392404501+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.424544857+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.424712972+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.424917948+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.424965669+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.425070212+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.425070212+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.425168135+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.425207176+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.425410092+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.426424491+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.426525494+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.426562195+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.426633837+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.426673749+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.434557386+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.434833034+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.434885505+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.434924907+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=zTeMR
2025-08-13T20:42:25.434955887+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.434991648+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.439813977+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=SZPn1
2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.574757068+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.575143659+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.575186140+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.575378466+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.575417057+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.775083633+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.775301020+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.775301020+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.775429033+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="catalog update required at 2025-08-13 20:42:25.775357471 +0000 UTC m=+2601.426745588" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:26.001076909+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:26.026720348+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.026720348+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.174703455+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=3reUD
2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.174897940+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD
2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=gFay8
2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:27.177567398+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:27.177697992+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:27.177697992+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:27.177759503+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8
2025-08-13T20:42:27.892832629+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.892832629+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.896906027+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.896963068+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1id1d
2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d
2025-08-13T20:42:31.791832838+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.791832838+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.798975694+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.799129189+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.799129189+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.799142949+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=jifKp
2025-08-13T20:42:31.799142949+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.799152399+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.819035763+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp
2025-08-13T20:42:39.302358298+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=IDLE"
2025-08-13T20:42:39.303379778+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING"
2025-08-13T20:42:39.303487781+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KaU8M
2025-08-13T20:42:39.303530122+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KaU8M
2025-08-13T20:42:39.305189010+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=KaU8M
2025-08-13T20:42:39.307013572+00:00 stderr F E0813 20:42:39.306924 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.313402777+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OZNsZ
2025-08-13T20:42:39.313424687+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OZNsZ
2025-08-13T20:42:39.314723615+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=OZNsZ
2025-08-13T20:42:39.314861459+00:00 stderr F E0813 20:42:39.314735 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.325173216+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JdCT7
2025-08-13T20:42:39.325173216+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JdCT7
2025-08-13T20:42:39.328115041+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JdCT7
2025-08-13T20:42:39.328311126+00:00 stderr F E0813 20:42:39.328173 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.333503286+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE"
2025-08-13T20:42:39.333526417+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=60fxG
2025-08-13T20:42:39.333536657+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=60fxG
2025-08-13T20:42:39.334476064+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=60fxG
2025-08-13T20:42:39.334476064+00:00 stderr F E0813 20:42:39.334461 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.349912389+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3IXh3
2025-08-13T20:42:39.349912389+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3IXh3
2025-08-13T20:42:39.351027821+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=3IXh3
2025-08-13T20:42:39.351027821+00:00 stderr F E0813 20:42:39.350986 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.431494521+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XvpGc
2025-08-13T20:42:39.431494521+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XvpGc
2025-08-13T20:42:39.432640164+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=XvpGc
2025-08-13T20:42:39.432693016+00:00 stderr F E0813 20:42:39.432643 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.594484399+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U1hZz
2025-08-13T20:42:39.594539681+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U1hZz
2025-08-13T20:42:39.598815574+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=U1hZz
2025-08-13T20:42:39.599062621+00:00 stderr F E0813 20:42:39.598878 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.725401654+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=IDLE"
2025-08-13T20:42:39.725401654+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=KqOOE
2025-08-13T20:42:39.725451675+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=KqOOE
2025-08-13T20:42:39.725716933+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING"
2025-08-13T20:42:39.732380185+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=KqOOE
2025-08-13T20:42:39.732402876+00:00 stderr F E0813 20:42:39.732374 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.732446807+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZKca
2025-08-13T20:42:39.732543970+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZKca
2025-08-13T20:42:39.733369753+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=yZKca
2025-08-13T20:42:39.733389254+00:00 stderr F E0813 20:42:39.733357 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.734430794+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE"
2025-08-13T20:42:39.734430794+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZS0Pg
2025-08-13T20:42:39.734451695+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZS0Pg
2025-08-13T20:42:39.735339760+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ZS0Pg
2025-08-13T20:42:39.735339760+00:00 stderr F E0813 20:42:39.735305 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.737593425+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=22fgy
2025-08-13T20:42:39.737593425+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=22fgy
2025-08-13T20:42:39.738130551+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=22fgy
2025-08-13T20:42:39.738130551+00:00 stderr F E0813 20:42:39.738105 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.778287578+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Nmv/v
2025-08-13T20:42:39.778287578+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Nmv/v
2025-08-13T20:42:39.778901296+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Nmv/v
2025-08-13T20:42:39.778926287+00:00 stderr F E0813 20:42:39.778894 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.859767828+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JD95p
2025-08-13T20:42:39.859767828+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JD95p 2025-08-13T20:42:39.904756515+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JD95p 2025-08-13T20:42:39.904756515+00:00 stderr F E0813 20:42:39.904736 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.920288802+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=8rqVT 2025-08-13T20:42:39.920288802+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=8rqVT 2025-08-13T20:42:39.944602543+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=IDLE" 2025-08-13T20:42:39.944602543+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CHY0C 2025-08-13T20:42:39.944656885+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
id=CHY0C 2025-08-13T20:42:39.945185070+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2025-08-13T20:42:39.953284364+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:40.040316833+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=IDLE" 2025-08-13T20:42:40.040436596+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-08-13T20:42:40.044638258+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:40.105641986+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=8rqVT 2025-08-13T20:42:40.105641986+00:00 stderr F E0813 20:42:40.105594 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.105711408+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ONbEu 2025-08-13T20:42:40.105711408+00:00 stderr F 
time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ONbEu 2025-08-13T20:42:40.305063236+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=CHY0C 2025-08-13T20:42:40.305100627+00:00 stderr F E0813 20:42:40.305053 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.305173279+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=asLjU 2025-08-13T20:42:40.305173279+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=asLjU 2025-08-13T20:42:40.506030160+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ONbEu 2025-08-13T20:42:40.506030160+00:00 stderr F E0813 20:42:40.505984 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506125882+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YSP3F 2025-08-13T20:42:40.506141043+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YSP3F 2025-08-13T20:42:40.705592583+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=asLjU 2025-08-13T20:42:40.705592583+00:00 stderr F E0813 20:42:40.705573 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.705642695+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=suOAk 2025-08-13T20:42:40.705642695+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=suOAk 2025-08-13T20:42:40.904975241+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=YSP3F 2025-08-13T20:42:40.904975241+00:00 stderr F E0813 20:42:40.904949 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.905013752+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=j66ZP 2025-08-13T20:42:40.905058694+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=j66ZP 2025-08-13T20:42:41.106209573+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=suOAk 2025-08-13T20:42:41.106209573+00:00 stderr F E0813 20:42:41.105955 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.106209573+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=No0Sy 2025-08-13T20:42:41.106209573+00:00 stderr F 
time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=No0Sy 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=j66ZP 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=UnIEX 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=UnIEX 2025-08-13T20:42:41.505318040+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=No0Sy 2025-08-13T20:42:41.505351801+00:00 stderr F E0813 20:42:41.505293 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.505501865+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=m4nH4 2025-08-13T20:42:41.505501865+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m4nH4 2025-08-13T20:42:41.705478940+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=UnIEX 2025-08-13T20:42:41.705517131+00:00 stderr F E0813 20:42:41.705468 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.705572143+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PpzAk 2025-08-13T20:42:41.705572143+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PpzAk 2025-08-13T20:42:41.905277341+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=m4nH4 2025-08-13T20:42:41.905324902+00:00 stderr F E0813 20:42:41.905211 1 queueinformer_operator.go:319] sync {"update" 
"openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.925762921+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dwSwo 2025-08-13T20:42:41.925897595+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dwSwo 2025-08-13T20:42:42.105688269+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=PpzAk 2025-08-13T20:42:42.105688269+00:00 stderr F E0813 20:42:42.105656 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.147917036+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Wl+0f 2025-08-13T20:42:42.147917036+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Wl+0f 2025-08-13T20:42:42.304827760+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=dwSwo 2025-08-13T20:42:42.304827760+00:00 stderr F E0813 20:42:42.304758 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.345775970+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RVYto 2025-08-13T20:42:42.345775970+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RVYto 2025-08-13T20:42:42.505527846+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Wl+0f 2025-08-13T20:42:42.505527846+00:00 stderr F E0813 20:42:42.505496 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.505635749+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=x8BXC 2025-08-13T20:42:42.505635749+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x8BXC 2025-08-13T20:42:42.704977556+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=RVYto 2025-08-13T20:42:42.704977556+00:00 stderr F E0813 20:42:42.704955 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.705079099+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cAhw1 2025-08-13T20:42:42.705092210+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cAhw1 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=x8BXC 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing 
catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eJByL 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eJByL 2025-08-13T20:42:43.105845533+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=cAhw1 2025-08-13T20:42:43.106050719+00:00 stderr F E0813 20:42:43.105946 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.266869965+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w1tnL 2025-08-13T20:42:43.266869965+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w1tnL 2025-08-13T20:42:43.305633583+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eJByL 2025-08-13T20:42:43.305681664+00:00 stderr 
F E0813 20:42:43.305620 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.466166321+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ffEpJ 2025-08-13T20:42:43.466166321+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ffEpJ 2025-08-13T20:42:43.504700592+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=w1tnL 2025-08-13T20:42:43.504926168+00:00 stderr F E0813 20:42:43.504895 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.705847031+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ffEpJ 2025-08-13T20:42:43.706270283+00:00 stderr F E0813 20:42:43.706201 1 
queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.826166130+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=pD+NU 2025-08-13T20:42:43.826166130+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=pD+NU 2025-08-13T20:42:43.905995921+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=pD+NU 2025-08-13T20:42:43.906124515+00:00 stderr F E0813 20:42:43.906098 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.027184745+00:00 stderr F time="2025-08-13T20:42:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1gecN 2025-08-13T20:42:44.027350100+00:00 stderr F time="2025-08-13T20:42:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1gecN 2025-08-13T20:42:44.106599205+00:00 stderr F time="2025-08-13T20:42:44Z" level=error msg="UpdateStatus - error while setting 
CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=1gecN 2025-08-13T20:42:44.106599205+00:00 stderr F E0813 20:42:44.106445 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000022200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015133657716033245 5ustar zuulzuul././@LongLink0000644000000000000000000000023600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015133657737033250 5ustar zuulzuul././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000137466515133657716033275 0ustar zuulzuul2025-08-13T19:57:38.998949389+00:00 stdout F 2025-08-13T19:57:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_d86c2164-4a1e-4b53-8f29-3287db575df7 2025-08-13T19:57:39.063740029+00:00 stdout F 2025-08-13T19:57:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d86c2164-4a1e-4b53-8f29-3287db575df7 to /host/opt/cni/bin/ 2025-08-13T19:57:39.142894289+00:00 stderr F 2025-08-13T19:57:39Z [verbose] multus-daemon started 2025-08-13T19:57:39.142894289+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Readiness Indicator file check 2025-08-13T19:57:39.143093375+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Readiness Indicator file check done! 2025-08-13T19:57:39.150443065+00:00 stderr F I0813 19:57:39.150296 23104 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-08-13T19:57:39.155552761+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Waiting for certificate 2025-08-13T19:57:40.156328197+00:00 stderr F I0813 19:57:40.156164 23104 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-08-13T19:57:40.156925264+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Certificate found! 2025-08-13T19:57:40.158536691+00:00 stderr F 2025-08-13T19:57:40Z [verbose] server configured with chroot: /hostroot 2025-08-13T19:57:40.158536691+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Filtering pod watch for node "crc" 2025-08-13T19:57:40.264016993+00:00 stderr F 2025-08-13T19:57:40Z [verbose] API readiness check 2025-08-13T19:57:40.269915831+00:00 stderr F 2025-08-13T19:57:40Z [verbose] API readiness check done! 
2025-08-13T19:57:40.269915831+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"} 2025-08-13T19:57:40.270096556+00:00 stderr F 2025-08-13T19:57:40Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf 2025-08-13T19:57:48.872958586+00:00 stderr F 2025-08-13T19:57:48Z [verbose] ADD starting CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"" 2025-08-13T19:57:49.433950375+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"" 2025-08-13T19:57:49.630465507+00:00 stderr F 2025-08-13T19:57:49Z [verbose] Add: openshift-marketplace:community-operators-k9qqb:ccdf38cf-634a-41a2-9c8b-74bb86af80a7:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"ac543dfbb4577c1","mac":"9e:fb:45:69:5c:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:1d","sandbox":"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.29/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:49.630527319+00:00 stderr F 2025-08-13T19:57:49Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29251905-zmjv9:8500d7bd-50fb-4ca6-af41-b7a24cae43cd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8eb40cf57cd4084","mac":"aa:3f:6d:4c:4e:9e"},{"name":"eth0","mac":"0a:58:0a:d9:00:23","sandbox":"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.35/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:49.638874547+00:00 stderr F I0813 19:57:49.638647 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251905-zmjv9", UID:"8500d7bd-50fb-4ca6-af41-b7a24cae43cd", APIVersion:"v1", ResourceVersion:"27591", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.35/23] from ovn-kubernetes 2025-08-13T19:57:49.638874547+00:00 stderr F I0813 19:57:49.638760 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-k9qqb", UID:"ccdf38cf-634a-41a2-9c8b-74bb86af80a7", APIVersion:"v1", ResourceVersion:"27590", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.29/23] from ovn-kubernetes 2025-08-13T19:57:49.749688731+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"" 2025-08-13T19:57:49.783585229+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"" 2025-08-13T19:57:50.109627859+00:00 stderr F 2025-08-13T19:57:50Z [verbose] ADD finished CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:fb:45:69:5c:25\",\"name\":\"ac543dfbb4577c1\"},{\"mac\":\"0a:58:0a:d9:00:1d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343\"}],\"ips\":[{\"address\":\"10.217.0.29/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:50.109852686+00:00 stderr P 2025-08-13T19:57:50Z [verbose] 2025-08-13T19:57:50.109924838+00:00 stderr P ADD finished CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:3f:6d:4c:4e:9e\",\"name\":\"8eb40cf57cd4084\"},{\"mac\":\"0a:58:0a:d9:00:23\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44\"}],\"ips\":[{\"address\":\"10.217.0.35/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:50.109956979+00:00 stderr F 2025-08-13T19:57:50.216443459+00:00 stderr F 2025-08-13T19:57:50Z [verbose] Add: openshift-marketplace:certified-operators-g4v97:bb917686-edfb-4158-86ad-6fce0abec64c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2c30e71c46910d5","mac":"4a:8a:38:55:61:94"},{"name":"eth0","mac":"0a:58:0a:d9:00:21","sandbox":"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.33/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:50.216899512+00:00 stderr F I0813 19:57:50.216754 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-g4v97", UID:"bb917686-edfb-4158-86ad-6fce0abec64c", APIVersion:"v1", ResourceVersion:"27585", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.33/23] from ovn-kubernetes 2025-08-13T19:57:50.257050749+00:00 stderr F 2025-08-13T19:57:50Z [verbose] Add: openshift-marketplace:redhat-operators-dcqzh:6db26b71-4e04-4688-a0c0-00e06e8c888d:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"fd8d1d12d982e02","mac":"62:c6:c2:53:ad:e3"},{"name":"eth0","mac":"0a:58:0a:d9:00:22","sandbox":"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.34/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:50.257381338+00:00 stderr F I0813 19:57:50.257318 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-dcqzh", UID:"6db26b71-4e04-4688-a0c0-00e06e8c888d", APIVersion:"v1", ResourceVersion:"27584", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.34/23] from ovn-kubernetes 2025-08-13T19:57:51.134684360+00:00 stderr F 2025-08-13T19:57:51Z [verbose] ADD finished CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:c6:c2:53:ad:e3\",\"name\":\"fd8d1d12d982e02\"},{\"mac\":\"0a:58:0a:d9:00:22\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43\"}],\"ips\":[{\"address\":\"10.217.0.34/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:51.134684360+00:00 stderr F 2025-08-13T19:57:51Z [verbose] ADD finished CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4a:8a:38:55:61:94\",\"name\":\"2c30e71c46910d5\"},{\"mac\":\"0a:58:0a:d9:00:21\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9\"}],\"ips\":[{\"address\":\"10.217.0.33/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:56.898496694+00:00 stderr P 2025-08-13T19:57:56Z [verbose] 2025-08-13T19:57:56.898613597+00:00 stderr P DEL starting CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"" 2025-08-13T19:57:56.898643558+00:00 stderr F 2025-08-13T19:57:56.900093550+00:00 stderr P 2025-08-13T19:57:56Z [verbose] 2025-08-13T19:57:56.900135291+00:00 stderr P Del: openshift-operator-lifecycle-manager:collect-profiles-29251905-zmjv9:8500d7bd-50fb-4ca6-af41-b7a24cae43cd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:57:56.900171212+00:00 stderr F 2025-08-13T19:57:57.037192858+00:00 stderr P 2025-08-13T19:57:57Z [verbose] 2025-08-13T19:57:57.037266280+00:00 stderr P DEL finished CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" 
Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"", result: "", err: 2025-08-13T19:57:57.037292021+00:00 stderr F 2025-08-13T19:58:54.366505502+00:00 stderr F 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff" Netns:"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2025-08-13T19:58:54.618147395+00:00 stderr P 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb" Netns:"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2025-08-13T19:58:54.618344581+00:00 stderr F 2025-08-13T19:58:54.735013927+00:00 stderr F 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0" Netns:"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"" 2025-08-13T19:58:55.107383541+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a3a061a59b867b6","mac":"12:21:e7:30:c2:18"},{"name":"eth0","mac":"0a:58:0a:d9:00:3f","sandbox":"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.63/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.110313245+00:00 stderr F I0813 19:58:55.108594 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-controller-6df6df6b6b-58shh", UID:"297ab9b6-2186-4d5b-a952-2bfd59af63c4", APIVersion:"v1", ResourceVersion:"27254", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.63/23] from ovn-kubernetes 2025-08-13T19:58:55.227223477+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff" Netns:"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"12:21:e7:30:c2:18\",\"name\":\"a3a061a59b867b6\"},{\"mac\":\"0a:58:0a:d9:00:3f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e\"}],\"ips\":[{\"address\":\"10.217.0.63/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.287557247+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368" Netns:"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"" 2025-08-13T19:58:55.309136562+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Netns:"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2025-08-13T19:58:55.316436250+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cb33d2fb758e44e","mac":"26:8b:b4:e3:af:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:15","sandbox":"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.21/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.317640615+00:00 stderr F 2025-08-13T19:58:55Z 
[verbose] ADD starting CNI request ContainerID:"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8" Netns:"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-08-13T19:58:55.318273593+00:00 stderr F I0813 19:58:55.318144 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-operator-76788bff89-wkjgm", UID:"120b38dc-8236-4fa6-a452-642b8ad738ee", APIVersion:"v1", ResourceVersion:"27443", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.21/23] from ovn-kubernetes 2025-08-13T19:58:55.373365343+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb" Netns:"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:8b:b4:e3:af:9c\",\"name\":\"cb33d2fb758e44e\"},{\"mac\":\"0a:58:0a:d9:00:15\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486\"}],\"ips\":[{\"address\":\"10.217.0.21/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.393036774+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219" Netns:"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2025-08-13T19:58:55.478690086+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884" Netns:"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2025-08-13T19:58:55.537093720+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"caf64d49987c99e","mac":"b2:ea:84:15:ac:21"},{"name":"eth0","mac":"0a:58:0a:d9:00:05","sandbox":"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.5/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.537340517+00:00 stderr F I0813 19:58:55.537273 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"machine-api-operator-788b7c6b6c-ctdmb", UID:"4f8aa612-9da0-4a2b-911e-6a1764a4e74e", APIVersion:"v1", ResourceVersion:"27399", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.5/23] from ovn-kubernetes 2025-08-13T19:58:55.584707128+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.584768759+00:00 stderr P ADD starting CNI request ContainerID:"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2" Netns:"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-08-13T19:58:55.584951845+00:00 stderr F 2025-08-13T19:58:55.625408338+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0" Netns:"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:ea:84:15:ac:21\",\"name\":\"caf64d49987c99e\"},{\"mac\":\"0a:58:0a:d9:00:05\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e\"}],\"ips\":[{\"address\":\"10.217.0.5/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.632598563+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.632651844+00:00 stderr P Add: openshift-machine-api:control-plane-machine-set-operator-649bd778b4-tt5tw:45a8038e-e7f2-4d93-a6f5-7753aa54e63f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2e8f0bacebafcab","mac":"9a:5d:fa:e3:14:9e"},{"name":"eth0","mac":"0a:58:0a:d9:00:14","sandbox":"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.20/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.632683775+00:00 stderr F 2025-08-13T19:58:55.635280089+00:00 stderr F I0813 19:58:55.633408 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"control-plane-machine-set-operator-649bd778b4-tt5tw", 
UID:"45a8038e-e7f2-4d93-a6f5-7753aa54e63f", APIVersion:"v1", ResourceVersion:"27292", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.20/23] from ovn-kubernetes 2025-08-13T19:58:55.723115223+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5" Netns:"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 2025-08-13T19:58:55.735739483+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368" Netns:"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:5d:fa:e3:14:9e\",\"name\":\"2e8f0bacebafcab\"},{\"mac\":\"0a:58:0a:d9:00:14\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad\"}],\"ips\":[{\"address\":\"10.217.0.20/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.814497258+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"9ed66fef0dec7ca","mac":"d2:7f:2e:a1:42:5d"},{"name":"eth0","mac":"0a:58:0a:d9:00:2f","sandbox":"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.47/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.814882729+00:00 stderr F I0813 19:58:55.814750 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-7287f", UID:"887d596e-c519-4bfa-af90-3edd9e1b2f0f", APIVersion:"v1", ResourceVersion:"27417", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.47/23] from ovn-kubernetes 2025-08-13T19:58:55.862531407+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.862588959+00:00 stderr F ADD starting CNI request ContainerID:"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933" Netns:"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-08-13T19:58:55.867309433+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"861ac63b0e0c6ab","mac":"5e:7f:a7:dd:ef:03"},{"name":"eth0","mac":"0a:58:0a:d9:00:0b","sandbox":"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.11/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.867309433+00:00 stderr F I0813 19:58:55.867187 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"catalog-operator-857456c46-7f5wf", 
UID:"8a5ae51d-d173-4531-8975-f164c975ce1f", APIVersion:"v1", ResourceVersion:"27311", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.11/23] from ovn-kubernetes 2025-08-13T19:58:55.900977383+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.901038685+00:00 stderr P ADD finished CNI request ContainerID:"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8" Netns:"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:7f:2e:a1:42:5d\",\"name\":\"9ed66fef0dec7ca\"},{\"mac\":\"0a:58:0a:d9:00:2f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be\"}],\"ips\":[{\"address\":\"10.217.0.47/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.901073766+00:00 stderr F 2025-08-13T19:58:55.932652876+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884" Netns:"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5e:7f:a7:dd:ef:03\",\"name\":\"861ac63b0e0c6ab\"},{\"mac\":\"0a:58:0a:d9:00:0b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba\"}],\"ips\":[{\"address\":\"10.217.0.11/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T19:58:55.937498794+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.937598317+00:00 stderr P ADD starting CNI request ContainerID:"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724" Netns:"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2025-08-13T19:58:55.937638998+00:00 stderr F 2025-08-13T19:58:56.012273095+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3a1adfc54f586eb","mac":"a2:c6:80:90:78:65"},{"name":"eth0","mac":"0a:58:0a:d9:00:2b","sandbox":"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.43/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.012273095+00:00 stderr F I0813 19:58:56.012117 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-8464bcc55b-sjnqz", UID:"bd556935-a077-45df-ba3f-d42c39326ccd", APIVersion:"v1", ResourceVersion:"27446", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.43/23] from ovn-kubernetes 2025-08-13T19:58:56.142080505+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"07c341dd7186a1b","mac":"06:3c:d7:23:95:35"},{"name":"eth0","mac":"0a:58:0a:d9:00:0c","sandbox":"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.12/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.142080505+00:00 stderr F I0813 19:58:56.141432 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7", UID:"71af81a9-7d43-49b2-9287-c375900aa905", APIVersion:"v1", ResourceVersion:"27289", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.12/23] from ovn-kubernetes 2025-08-13T19:58:56.157497584+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2680ced3658686e","mac":"2a:b7:0a:d6:5a:09"},{"name":"eth0","mac":"0a:58:0a:d9:00:03","sandbox":"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.3/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.157497584+00:00 stderr F I0813 19:58:56.157423 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-qdfr4", UID:"a702c6d2-4dde-4077-ab8c-0f8df804bf7a", APIVersion:"v1", ResourceVersion:"27375", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.3/23] from ovn-kubernetes 2025-08-13T19:58:56.186366667+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"1f2d8ae3277a5b2","mac":"b6:78:0d:36:15:42"},{"name":"eth0","mac":"0a:58:0a:d9:00:0d","sandbox":"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.13/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.187137139+00:00 stderr F I0813 19:58:56.187067 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-f9xdt", UID:"3482be94-0cdb-4e2a-889b-e5fac59fdbf5", APIVersion:"v1", ResourceVersion:"27286", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.13/23] from ovn-kubernetes 2025-08-13T19:58:56.207969403+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219" Netns:"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:c6:80:90:78:65\",\"name\":\"3a1adfc54f586eb\"},{\"mac\":\"0a:58:0a:d9:00:2b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe\"}],\"ips\":[{\"address\":\"10.217.0.43/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.216271910+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Netns:"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:3c:d7:23:95:35\",\"name\":\"07c341dd7186a1b\"},{\"mac\":\"0a:58:0a:d9:00:0c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47\"}],\"ips\":[{\"address\":\"10.217.0.12/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.216271910+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5" Netns:"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:b7:0a:d6:5a:09\",\"name\":\"2680ced3658686e\"},{\"mac\":\"0a:58:0a:d9:00:03\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9\"}],\"ips\":[{\"address\":\"10.217.0.3/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.259076390+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2" Netns:"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b6:78:0d:36:15:42\",\"name\":\"1f2d8ae3277a5b2\"},{\"mac\":\"0a:58:0a:d9:00:0d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda\"}],\"ips\":[{\"address\":\"10.217.0.13/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.286691187+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e" Netns:"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2025-08-13T19:58:56.297972859+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"042b00f26918850","mac":"76:a6:6d:aa:9c:56"},{"name":"eth0","mac":"0a:58:0a:d9:00:30","sandbox":"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.48/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.297972859+00:00 stderr F I0813 19:58:56.297589 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-8jhz6", UID:"3f4dca86-e6ee-4ec9-8324-86aff960225e", APIVersion:"v1", ResourceVersion:"27295", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.48/23] from ovn-kubernetes 2025-08-13T19:58:56.363031603+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"51987a02e71ec40","mac":"f2:7b:ae:d5:12:4e"},{"name":"eth0","mac":"0a:58:0a:d9:00:18","sandbox":"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.24/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.364867235+00:00 stderr F I0813 19:58:56.363236 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"package-server-manager-84d578d794-jw7r2", UID:"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be", APIVersion:"v1", ResourceVersion:"27283", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.24/23] from ovn-kubernetes 2025-08-13T19:58:56.365723000+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.366205074+00:00 stderr P ADD starting CNI request ContainerID:"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Netns:"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2025-08-13T19:58:56.366299326+00:00 stderr F 2025-08-13T19:58:56.397057653+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748" Netns:"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-08-13T19:58:56.397057653+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request 
ContainerID:"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98" Netns:"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2025-08-13T19:58:56.450894008+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933" Netns:"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"76:a6:6d:aa:9c:56\",\"name\":\"042b00f26918850\"},{\"mac\":\"0a:58:0a:d9:00:30\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c\"}],\"ips\":[{\"address\":\"10.217.0.48/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.458953357+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724" Netns:"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:7b:ae:d5:12:4e\",\"name\":\"51987a02e71ec40\"},{\"mac\":\"0a:58:0a:d9:00:18\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3\"}],\"ips\":[{\"address\":\"10.217.0.24/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.477898467+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2" Netns:"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2025-08-13T19:58:56.521298975+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"76a23bcc5261ffe","mac":"1a:bb:ff:c9:e7:24"},{"name":"eth0","mac":"0a:58:0a:d9:00:07","sandbox":"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.7/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.521637754+00:00 stderr F I0813 19:58:56.521579 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-78d54458c4-sc8h7", UID:"ed024e5d-8fc2-4c22-803d-73f3c9795f19", APIVersion:"v1", ResourceVersion:"27411", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.7/23] from ovn-kubernetes 2025-08-13T19:58:56.576623112+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.576744955+00:00 stderr P ADD starting CNI request 
ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T19:58:56.576936031+00:00 stderr F 2025-08-13T19:58:56.586952556+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e" Netns:"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:bb:ff:c9:e7:24\",\"name\":\"76a23bcc5261ffe\"},{\"mac\":\"0a:58:0a:d9:00:07\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8\"}],\"ips\":[{\"address\":\"10.217.0.7/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.782631784+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.782699766+00:00 stderr P Add: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"489c96bd95d523f","mac":"a2:cb:e6:94:36:cf"},{"name":"eth0","mac":"0a:58:0a:d9:00:09","sandbox":"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.9/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.782731397+00:00 stderr F 2025-08-13T19:58:56.783256972+00:00 stderr F I0813 
19:58:56.783225 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-7978d7d7f6-2nt8z", UID:"0f394926-bdb9-425c-b36e-264d7fd34550", APIVersion:"v1", ResourceVersion:"27338", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.9/23] from ovn-kubernetes 2025-08-13T19:58:56.841606195+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"40aef0eb1bbaaf5","mac":"6a:17:c4:5e:dd:74"},{"name":"eth0","mac":"0a:58:0a:d9:00:32","sandbox":"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.50/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.841606195+00:00 stderr F I0813 19:58:56.840907 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-f4jkp", UID:"4092a9f8-5acc-4932-9e90-ef962eeb301a", APIVersion:"v1", ResourceVersion:"27305", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.50/23] from ovn-kubernetes 2025-08-13T19:58:56.841606195+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Netns:"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:cb:e6:94:36:cf\",\"name\":\"489c96bd95d523f\"},{\"mac\":\"0a:58:0a:d9:00:09\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7\"}],\"ips\":[{\"address\":\"10.217.0.9/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.886460464+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"88c60b5e25b2ce0","mac":"d2:40:20:33:89:cb"},{"name":"eth0","mac":"0a:58:0a:d9:00:20","sandbox":"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.32/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.886460464+00:00 stderr F I0813 19:58:56.886090 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"multus-admission-controller-6c7c885997-4hbbc", UID:"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0", APIVersion:"v1", ResourceVersion:"27266", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.32/23] from ovn-kubernetes 2025-08-13T19:58:56.890648493+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.893887085+00:00 stderr P ADD finished CNI request ContainerID:"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748" Netns:"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6a:17:c4:5e:dd:74\",\"name\":\"40aef0eb1bbaaf5\"},{\"mac\":\"0a:58:0a:d9:00:32\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6\"}],\"ips\":[{\"address\":\"10.217.0.50/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.893974558+00:00 stderr F 2025-08-13T19:58:56.916500400+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb" Netns:"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2025-08-13T19:58:56.917071956+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"44f5ef3518ac6b9","mac":"2a:0a:70:95:cf:7d"},{"name":"eth0","mac":"0a:58:0a:d9:00:19","sandbox":"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.25/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.917188900+00:00 stderr F I0813 19:58:56.917099 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator", Name:"migrator-f7c6d88df-q2fnv", UID:"cf1a8966-f594-490a-9fbb-eec5bafd13d3", APIVersion:"v1", ResourceVersion:"27336", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.25/23] from ovn-kubernetes 2025-08-13T19:58:57.000396391+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98" 
Netns:"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:40:20:33:89:cb\",\"name\":\"88c60b5e25b2ce0\"},{\"mac\":\"0a:58:0a:d9:00:20\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298\"}],\"ips\":[{\"address\":\"10.217.0.32/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.004141538+00:00 stderr F 2025-08-13T19:58:57Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-vlbxv:378552fd-5e53-4882-87ff-95f3d9198861:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fbf310c9137d286","mac":"b2:ca:c3:72:ee:8c"},{"name":"eth0","mac":"0a:58:0a:d9:00:1a","sandbox":"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.26/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.004686544+00:00 stderr F I0813 19:58:57.004388 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-vlbxv", UID:"378552fd-5e53-4882-87ff-95f3d9198861", APIVersion:"v1", ResourceVersion:"27387", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.26/23] from ovn-kubernetes 2025-08-13T19:58:57.056690216+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:ca:c3:72:ee:8c\",\"name\":\"fbf310c9137d286\"},{\"mac\":\"0a:58:0a:d9:00:1a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1\"}],\"ips\":[{\"address\":\"10.217.0.26/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.057946752+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.058000283+00:00 stderr P ADD finished CNI request ContainerID:"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2" Netns:"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:0a:70:95:cf:7d\",\"name\":\"44f5ef3518ac6b9\"},{\"mac\":\"0a:58:0a:d9:00:19\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51\"}],\"ips\":[{\"address\":\"10.217.0.25/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.058026644+00:00 stderr F 2025-08-13T19:58:57.105462326+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.105524388+00:00 stderr P ADD starting CNI request ContainerID:"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Netns:"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2025-08-13T19:58:57.105548729+00:00 stderr F 2025-08-13T19:58:57.244323695+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821" Netns:"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 2025-08-13T19:58:57.294940708+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T19:58:57.333358203+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc" Netns:"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"" 2025-08-13T19:58:57.427415884+00:00 stderr F 2025-08-13T19:58:57Z [verbose] Add: 
openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2c45b735c45341a","mac":"ca:10:40:2a:6a:05"},{"name":"eth0","mac":"0a:58:0a:d9:00:2e","sandbox":"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.46/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.428259008+00:00 stderr F I0813 19:58:57.428100 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-cluster-samples-operator", Name:"cluster-samples-operator-bc474d5d6-wshwg", UID:"f728c15e-d8de-4a9a-a3ea-fdcead95cb91", APIVersion:"v1", ResourceVersion:"27350", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.46/23] from ovn-kubernetes 2025-08-13T19:58:57.480594800+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892" Netns:"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2025-08-13T19:58:57.561150286+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb" Netns:"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ca:10:40:2a:6a:05\",\"name\":\"2c45b735c45341a\"},{\"mac\":\"0a:58:0a:d9:00:2e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82\"}],\"ips\":[{\"address\":\"10.217.0.46/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.616173514+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Netns:"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"" 2025-08-13T19:58:57.690381570+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d" Netns:"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2025-08-13T19:58:57.869494565+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.869559487+00:00 stderr P Add: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"20a42c53825c918","mac":"4e:5c:e6:eb:13:aa"},{"name":"eth0","mac":"0a:58:0a:d9:00:12","sandbox":"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.18/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.869596958+00:00 stderr F 2025-08-13T19:58:57.870607277+00:00 stderr F I0813 
19:58:57.870464 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns-operator", Name:"dns-operator-75f687757b-nz2xb", UID:"10603adc-d495-423c-9459-4caa405960bb", APIVersion:"v1", ResourceVersion:"27299", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.18/23] from ovn-kubernetes 2025-08-13T19:58:57.947280493+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821" Netns:"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:5c:e6:eb:13:aa\",\"name\":\"20a42c53825c918\"},{\"mac\":\"0a:58:0a:d9:00:12\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4\"}],\"ips\":[{\"address\":\"10.217.0.18/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.983576217+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T19:58:58.085763880+00:00 stderr P 2025-08-13T19:58:58Z [verbose] 2025-08-13T19:58:58.085948135+00:00 stderr P Add: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"fe503da15decef9","mac":"62:a9:f3:84:d8:42"},{"name":"eth0","mac":"0a:58:0a:d9:00:16","sandbox":"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.22/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:58.085979036+00:00 stderr F 2025-08-13T19:58:58.087337515+00:00 stderr F I0813 19:58:58.086439 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator-7769bd8d7d-q5cvv", UID:"b54e8941-2fc4-432a-9e51-39684df9089e", APIVersion:"v1", ResourceVersion:"27428", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.22/23] from ovn-kubernetes 2025-08-13T19:58:58.167669555+00:00 stderr P 2025-08-13T19:58:58Z [verbose] 2025-08-13T19:58:58.168224991+00:00 stderr P ADD finished CNI request ContainerID:"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Netns:"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:a9:f3:84:d8:42\",\"name\":\"fe503da15decef9\"},{\"mac\":\"0a:58:0a:d9:00:16\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6\"}],\"ips\":[{\"address\":\"10.217.0.22/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:58.168290953+00:00 stderr F 2025-08-13T19:58:58.213958704+00:00 stderr F 2025-08-13T19:58:58Z [verbose] Add: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"e6ed8c1e93f8bc4","mac":"aa:e8:ef:66:06:69"},{"name":"eth0","mac":"0a:58:0a:d9:00:1c","sandbox":"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.28/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:58.216982181+00:00 stderr F I0813 19:58:58.216294 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-84fccc7b6-mkncc", UID:"b233d916-bfe3-4ae5-ae39-6b574d1aa05e", APIVersion:"v1", ResourceVersion:"27421", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.28/23] from ovn-kubernetes 2025-08-13T19:58:58.295951372+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD finished CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:e8:ef:66:06:69\",\"name\":\"e6ed8c1e93f8bc4\"},{\"mac\":\"0a:58:0a:d9:00:1c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d\"}],\"ips\":[{\"address\":\"10.217.0.28/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:58.338013061+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Netns:"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"" 2025-08-13T19:58:58.473564435+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3" Netns:"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2025-08-13T19:58:58.634314047+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Netns:"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2025-08-13T19:58:59.005862598+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"282af480c29eba8","mac":"5a:fa:7e:60:ae:8e"},{"name":"eth0","mac":"0a:58:0a:d9:00:0a","sandbox":"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.10/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.007142414+00:00 stderr F I0813 19:58:59.007096 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-546b4f8984-pwccz", UID:"6d67253e-2acd-4bc1-8185-793587da4f17", APIVersion:"v1", ResourceVersion:"27329", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.10/23] from ovn-kubernetes 2025-08-13T19:58:59.143136761+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-authentication:oauth-openshift-765b47f944-n2lhl:13ad7555-5f28-4555-a563-892713a8433a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8266ab3300c992b","mac":"16:7d:9f:58:7f:21"},{"name":"eth0","mac":"0a:58:0a:d9:00:1e","sandbox":"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.30/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.143136761+00:00 stderr F I0813 19:58:59.119382 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-765b47f944-n2lhl", UID:"13ad7555-5f28-4555-a563-892713a8433a", APIVersion:"v1", ResourceVersion:"27326", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.30/23] from ovn-kubernetes 2025-08-13T19:58:59.184869251+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Netns:"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:fa:7e:60:ae:8e\",\"name\":\"282af480c29eba8\"},{\"mac\":\"0a:58:0a:d9:00:0a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0\"}],\"ips\":[{\"address\":\"10.217.0.10/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.197442879+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:7d:9f:58:7f:21\",\"name\":\"8266ab3300c992b\"},{\"mac\":\"0a:58:0a:d9:00:1e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d\"}],\"ips\":[{\"address\":\"10.217.0.30/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.216022159+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238" Netns:"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2025-08-13T19:58:59.216075330+00:00 stderr P 2025-08-13T19:58:59Z [verbose] 2025-08-13T19:58:59.216086340+00:00 stderr F ADD starting CNI request ContainerID:"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a" Netns:"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"" 2025-08-13T19:58:59.226760295+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2cacd5e0efb1ce8","mac":"ea:0d:4c:ae:13:d8"},{"name":"eth0","mac":"0a:58:0a:d9:00:08","sandbox":"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.8/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.226760295+00:00 stderr F I0813 19:58:59.219586 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-etcd-operator", Name:"etcd-operator-768d5b5d86-722mg", UID:"0b5c38ff-1fa8-4219-994d-15776acd4a4d", APIVersion:"v1", ResourceVersion:"27425", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.8/23] from ovn-kubernetes 2025-08-13T19:58:59.226760295+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2aed5bade7f294b","mac":"4e:5a:b9:b2:bb:00"},{"name":"eth0","mac":"0a:58:0a:d9:00:0f","sandbox":"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.15/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.226951730+00:00 stderr F I0813 19:58:59.226928 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-6f6cb54958-rbddb", UID:"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf", 
APIVersion:"v1", ResourceVersion:"27367", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.15/23] from ovn-kubernetes 2025-08-13T19:58:59.238044236+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-dns:dns-default-gbw49:13045510-8717-4a71-ade4-be95a76440a7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"63f14f64c728127","mac":"9a:23:de:02:96:5e"},{"name":"eth0","mac":"0a:58:0a:d9:00:1f","sandbox":"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.31/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.238044236+00:00 stderr F I0813 19:58:59.231537 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-gbw49", UID:"13045510-8717-4a71-ade4-be95a76440a7", APIVersion:"v1", ResourceVersion:"27378", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.31/23] from ovn-kubernetes 2025-08-13T19:58:59.405539971+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a" Netns:"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 2025-08-13T19:58:59.419100297+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a" Netns:"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc" Netns:"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:23:de:02:96:5e\",\"name\":\"63f14f64c728127\"},{\"mac\":\"0a:58:0a:d9:00:1f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079\"}],\"ips\":[{\"address\":\"10.217.0.31/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Netns:"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:5a:b9:b2:bb:00\",\"name\":\"2aed5bade7f294b\"},{\"mac\":\"0a:58:0a:d9:00:0f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa\"}],\"ips\":[{\"address\":\"10.217.0.15/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892" Netns:"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:0d:4c:ae:13:d8\",\"name\":\"2cacd5e0efb1ce8\"},{\"mac\":\"0a:58:0a:d9:00:08\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c\"}],\"ips\":[{\"address\":\"10.217.0.8/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.499960322+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T19:58:59.600975011+00:00 stderr P 2025-08-13T19:58:59Z [verbose] 2025-08-13T19:58:59.601505556+00:00 stderr P ADD starting CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"" 2025-08-13T19:58:59.601565358+00:00 stderr F 
2025-08-13T19:58:59.631759808+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"" 2025-08-13T19:58:59.647303371+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7" Netns:"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2025-08-13T19:58:59.805428699+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3" Netns:"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 2025-08-13T19:58:59.844184134+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5" Netns:"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2025-08-13T19:59:00.168377535+00:00 stderr P 2025-08-13T19:59:00Z [verbose] 2025-08-13T19:59:00.170681861+00:00 stderr P ADD starting CNI request ContainerID:"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31" Netns:"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 2025-08-13T19:59:00.170862256+00:00 stderr F 2025-08-13T19:59:00.171981088+00:00 stderr F 2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"" 2025-08-13T19:59:00.202354764+00:00 stderr F 2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T19:59:00.324034272+00:00 stderr F 
2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7" Netns:"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2025-08-13T19:59:00.808159642+00:00 stderr P 2025-08-13T19:59:00Z [verbose] 2025-08-13T19:59:00.808287966+00:00 stderr P ADD starting CNI request ContainerID:"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed" Netns:"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2025-08-13T19:59:00.808756179+00:00 stderr F 2025-08-13T19:59:01.123363757+00:00 stderr F 2025-08-13T19:59:01Z [verbose] ADD starting CNI request ContainerID:"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621" Netns:"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2025-08-13T19:59:03.135317658+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD starting CNI request ContainerID:"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007" Netns:"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2025-08-13T19:59:03.136943044+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.137268004+00:00 stderr P Add: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"aab926f26907ff6","mac":"ae:8f:58:c5:04:5d"},{"name":"eth0","mac":"0a:58:0a:d9:00:42","sandbox":"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.66/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.137298555+00:00 stderr F 2025-08-13T19:59:03.181010350+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7c70e17033c6821","mac":"a6:4b:a8:66:58:77"},{"name":"eth0","mac":"0a:58:0a:d9:00:0e","sandbox":"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.14/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.181010350+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"10cfef5f94c814c","mac":"fa:06:81:ef:ea:29"},{"name":"eth0","mac":"0a:58:0a:d9:00:33","sandbox":"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.51/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.181010350+00:00 stderr P 2025-08-13T19:59:03Z [verbose] Add: 
openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d3db60615905e44","mac":"d6:ab:ce:8e:34:62"},{"name":"eth0","mac":"0a:58:0a:d9:00:17","sandbox":"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.23/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F 2025-08-13T19:59:03.301128335+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"5aa1911bfbbdddf","mac":"ea:f0:97:ca:28:ed"},{"name":"eth0","mac":"0a:58:0a:d9:00:04","sandbox":"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.4/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-5c4dbb8899-tchz5:af6b67a3-a2bd-4051-9adc-c208a5a65d79:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"893b4f9b5ed2707","mac":"d2:c7:9d:0a:38:17"},{"name":"eth0","mac":"0a:58:0a:d9:00:11","sandbox":"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.17/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.181666 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"downloads-65476884b9-9wcvx", UID:"6268b7fe-8910-4505-b404-6f1df638105c", APIVersion:"v1", ResourceVersion:"27396", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.66/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.280298 23104 event.go:364] 
Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"olm-operator-6d8474f75f-x54mh", UID:"c085412c-b875-46c9-ae3e-e6b0d8067091", APIVersion:"v1", ResourceVersion:"27257", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.14/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.280323 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-8s8pc", UID:"c782cf62-a827-4677-b3c2-6f82c5f09cbb", APIVersion:"v1", ResourceVersion:"27308", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.51/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288692 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-config-operator", Name:"openshift-config-operator-77658b5b66-dq5sc", UID:"530553aa-0a1d-423e-8a22-f5eb4bdbb883", APIVersion:"v1", ResourceVersion:"27263", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.23/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288731 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-v54bt", UID:"34a48baf-1bee-4921-8bb2-9b7320e76f79", APIVersion:"v1", ResourceVersion:"27275", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.4/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288748 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-5c4dbb8899-tchz5", UID:"af6b67a3-a2bd-4051-9adc-c208a5a65d79", APIVersion:"v1", ResourceVersion:"27278", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.17/23] from ovn-kubernetes 2025-08-13T19:59:03.321165456+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request 
ContainerID:"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d" Netns:"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:8f:58:c5:04:5d\",\"name\":\"aab926f26907ff6\"},{\"mac\":\"0a:58:0a:d9:00:42\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e\"}],\"ips\":[{\"address\":\"10.217.0.66/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.321165456+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request ContainerID:"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3" Netns:"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a6:4b:a8:66:58:77\",\"name\":\"7c70e17033c6821\"},{\"mac\":\"0a:58:0a:d9:00:0e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4\"}],\"ips\":[{\"address\":\"10.217.0.14/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.672140931+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request ContainerID:"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a" Netns:"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"fa:06:81:ef:ea:29\",\"name\":\"10cfef5f94c814c\"},{\"mac\":\"0a:58:0a:d9:00:33\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6\"}],\"ips\":[{\"address\":\"10.217.0.51/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.684501513+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.685468030+00:00 stderr P Add: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b27ef0e5311849c","mac":"5a:be:36:6c:e5:ff"},{"name":"eth0","mac":"0a:58:0a:d9:00:27","sandbox":"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.39/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.685510352+00:00 stderr F 2025-08-13T19:59:03.685942824+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.685981775+00:00 stderr P Add: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:87df87f4-ba66-4137-8e41-1fa632ad4207:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4916f2a17d27bbf","mac":"4e:22:5c:05:c0:19"},{"name":"eth0","mac":"0a:58:0a:d9:00:24","sandbox":"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.36/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.686051727+00:00 stderr F 2025-08-13T19:59:03.686445048+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.686494450+00:00 stderr P Add: 
openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"526dc34c7f02246","mac":"1e:22:ca:d9:c0:4a"},{"name":"eth0","mac":"0a:58:0a:d9:00:06","sandbox":"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.6/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.686523791+00:00 stderr F 2025-08-13T19:59:03.701115536+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.701925940+00:00 stderr F Add: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"906e45421a720cb","mac":"26:7a:19:ff:0e:c3"},{"name":"eth0","mac":"0a:58:0a:d9:00:13","sandbox":"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.19/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.702007492+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.702074924+00:00 stderr P ADD finished CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:c7:9d:0a:38:17\",\"name\":\"893b4f9b5ed2707\"},{\"mac\":\"0a:58:0a:d9:00:11\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a\"}],\"ips\":[{\"address\":\"10.217.0.17/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T19:59:03.702175467+00:00 stderr F 2025-08-13T19:59:03.702379302+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.702452915+00:00 stderr P Add: openshift-marketplace:redhat-marketplace-rmwfn:9ad279b4-d9dc-42a8-a1c8-a002bd063482:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9218677c9aa0f21","mac":"3e:06:24:c1:84:12"},{"name":"eth0","mac":"0a:58:0a:d9:00:36","sandbox":"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.54/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.702478165+00:00 stderr F 2025-08-13T19:59:03.702995640+00:00 stderr F I0813 19:59:03.702962 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"5bacb25d-97b6-4491-8fb4-99feae1d802a", APIVersion:"v1", ResourceVersion:"27346", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.39/23] from ovn-kubernetes 2025-08-13T19:59:03.703055082+00:00 stderr F I0813 19:59:03.703034 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-6ff78978b4-q4vv8", UID:"87df87f4-ba66-4137-8e41-1fa632ad4207", APIVersion:"v1", ResourceVersion:"27269", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.36/23] from ovn-kubernetes 2025-08-13T19:59:03.703092063+00:00 stderr F I0813 19:59:03.703076 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-7c88c4c865-kn67m", UID:"43ae1c37-047b-4ee2-9fee-41e337dd4ac8", APIVersion:"v1", ResourceVersion:"27332", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.6/23] from ovn-kubernetes 2025-08-13T19:59:03.703125654+00:00 stderr F I0813 19:59:03.703110 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication-operator", 
Name:"authentication-operator-7cc7ff75d5-g9qv8", UID:"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e", APIVersion:"v1", ResourceVersion:"27314", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.19/23] from ovn-kubernetes 2025-08-13T19:59:03.703158455+00:00 stderr F I0813 19:59:03.703143 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-rmwfn", UID:"9ad279b4-d9dc-42a8-a1c8-a002bd063482", APIVersion:"v1", ResourceVersion:"27389", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.54/23] from ovn-kubernetes 2025-08-13T19:59:03.732219983+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.732304806+00:00 stderr P Add: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4146ac88f77df20","mac":"7a:30:67:39:88:7d"},{"name":"eth0","mac":"0a:58:0a:d9:00:47","sandbox":"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.71/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.732330756+00:00 stderr F 2025-08-13T19:59:03.733101848+00:00 stderr F I0813 19:59:03.733068 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-2vhcn", UID:"0b5d722a-1123-4935-9740-52a08d018bc9", APIVersion:"v1", ResourceVersion:"27357", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.71/23] from ovn-kubernetes 2025-08-13T19:59:03.880295054+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.881361964+00:00 stderr P ADD finished CNI request ContainerID:"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a" Netns:"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:f0:97:ca:28:ed\",\"name\":\"5aa1911bfbbdddf\"},{\"mac\":\"0a:58:0a:d9:00:04\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c\"}],\"ips\":[{\"address\":\"10.217.0.4/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.881411256+00:00 stderr F 2025-08-13T19:59:03.936494066+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.936603749+00:00 stderr P ADD finished CNI request ContainerID:"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7" Netns:"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:be:36:6c:e5:ff\",\"name\":\"b27ef0e5311849c\"},{\"mac\":\"0a:58:0a:d9:00:27\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff\"}],\"ips\":[{\"address\":\"10.217.0.39/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.936678871+00:00 stderr F 2025-08-13T19:59:03.937121224+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.937362851+00:00 stderr P ADD finished CNI request ContainerID:"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31" Netns:"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:7a:19:ff:0e:c3\",\"name\":\"906e45421a720cb\"},{\"mac\":\"0a:58:0a:d9:00:13\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d\"}],\"ips\":[{\"address\":\"10.217.0.19/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.937438253+00:00 stderr F 2025-08-13T19:59:03.937627508+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.937896776+00:00 stderr P ADD finished CNI request ContainerID:"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3" Netns:"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:30:67:39:88:7d\",\"name\":\"4146ac88f77df20\"},{\"mac\":\"0a:58:0a:d9:00:47\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c\"}],\"ips\":[{\"address\":\"10.217.0.71/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.938123652+00:00 stderr F 2025-08-13T19:59:03.953421098+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"0e119602de1750a","mac":"5a:68:88:74:1a:1a"},{"name":"eth0","mac":"0a:58:0a:d9:00:3e","sandbox":"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.62/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.953911272+00:00 stderr F I0813 19:59:03.953591 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5dbbc74dc9-cp5cd", UID:"e9127708-ccfd-4891-8a3a-f0cacb77e0f4", APIVersion:"v1", ResourceVersion:"27354", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.62/23] from ovn-kubernetes 2025-08-13T19:59:04.073471741+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-apiserver:apiserver-67cbf64bc9-mtx25:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"961449f5e5e8534","mac":"06:02:94:00:bf:62"},{"name":"eth0","mac":"0a:58:0a:d9:00:25","sandbox":"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.37/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.075157649+00:00 stderr F I0813 19:59:04.075076 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-67cbf64bc9-mtx25", UID:"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab", APIVersion:"v1", ResourceVersion:"27361", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.37/23] from ovn-kubernetes 2025-08-13T19:59:04.183987791+00:00 stderr F 2025-08-13T19:59:04Z [verbose] ADD finished CNI request ContainerID:"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a" Netns:"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:22:ca:d9:c0:4a\",\"name\":\"526dc34c7f02246\"},{\"mac\":\"0a:58:0a:d9:00:06\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c\"}],\"ips\":[{\"address\":\"10.217.0.6/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:04.521248175+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-8-crc:72854c1e-5ae2-4ed6-9e50-ff3bccde2635:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d84dd6581e40bee","mac":"f2:91:66:93:67:34"},{"name":"eth0","mac":"0a:58:0a:d9:00:37","sandbox":"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.55/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.521441280+00:00 stderr F I0813 19:59:04.521386 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-8-crc", UID:"72854c1e-5ae2-4ed6-9e50-ff3bccde2635", APIVersion:"v1", ResourceVersion:"27298", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.55/23] from ovn-kubernetes 2025-08-13T19:59:04.538269750+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"717e351e369b4a5","mac":"52:85:10:ff:fa:5e"},{"name":"eth0","mac":"0a:58:0a:d9:00:10","sandbox":"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.16/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.538555898+00:00 stderr F I0813 19:59:04.538471 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator-686c6c748c-qbnnr", UID:"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7", APIVersion:"v1", ResourceVersion:"27371", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.16/23] from ovn-kubernetes 2025-08-13T19:59:04.658192458+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"97418fd7ce5644b","mac":"6e:43:34:af:3f:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:40","sandbox":"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.64/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.658192458+00:00 stderr F I0813 19:59:04.653636 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-source-5c5478f8c-vqvt7", UID:"d0f40333-c860-4c04-8058-a0bf572dcf12", APIVersion:"v1", ResourceVersion:"27272", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.64/23] from ovn-kubernetes 2025-08-13T19:59:04.667030550+00:00 stderr P 2025-08-13T19:59:04Z [verbose] 2025-08-13T19:59:04.667081782+00:00 stderr P Add: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"22d48c9fe60d97e","mac":"c6:21:89:7b:ef:07"},{"name":"eth0","mac":"0a:58:0a:d9:00:2d","sandbox":"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.45/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.667113393+00:00 stderr F 2025-08-13T19:59:04.667529494+00:00 stderr F I0813 19:59:04.667467 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-operator", Name:"ingress-operator-7d46d5bb6d-rrg6t", UID:"7d51f445-054a-4e4f-a67b-a828f5a32511", APIVersion:"v1", ResourceVersion:"27414", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.45/23] from ovn-kubernetes 2025-08-13T19:59:04.774100162+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a10fd87b4b9fef3","mac":"7a:f0:82:7d:d8:bb"},{"name":"eth0","mac":"0a:58:0a:d9:00:3d","sandbox":"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.61/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.774299488+00:00 stderr F I0813 19:59:04.774247 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-conversion-webhook-595f9969b-l6z49", UID:"59748b9b-c309-4712-aa85-bb38d71c4915", APIVersion:"v1", ResourceVersion:"27381", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.61/23] from ovn-kubernetes 2025-08-13T19:59:04.817896801+00:00 stderr F 2025-08-13T19:59:04Z [verbose] ADD finished CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"3e:06:24:c1:84:12\",\"name\":\"9218677c9aa0f21\"},{\"mac\":\"0a:58:0a:d9:00:36\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782\"}],\"ips\":[{\"address\":\"10.217.0.54/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:04.874933427+00:00 stderr P 2025-08-13T19:59:04Z [verbose] 2025-08-13T19:59:04.874985878+00:00 stderr P Add: hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ce1a5d3596103f2","mac":"8e:1b:59:6f:cb:45"},{"name":"eth0","mac":"0a:58:0a:d9:00:31","sandbox":"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.49/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.875017139+00:00 stderr F 2025-08-13T19:59:04.875932285+00:00 stderr F I0813 19:59:04.875899 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"hostpath-provisioner", Name:"csi-hostpathplugin-hvm8g", UID:"12e733dd-0939-4f1b-9cbb-13897e093787", APIVersion:"v1", ResourceVersion:"27304", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.49/23] from ovn-kubernetes 2025-08-13T19:59:05.358683366+00:00 stderr F 2025-08-13T19:59:05Z [verbose] ADD finished CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:22:5c:05:c0:19\",\"name\":\"4916f2a17d27bbf\"},{\"mac\":\"0a:58:0a:d9:00:24\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c\"}],\"ips\":[{\"address\":\"10.217.0.36/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:05.358683366+00:00 stderr F 2025-08-13T19:59:05Z [verbose] ADD finished CNI request ContainerID:"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Netns:"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:ab:ce:8e:34:62\",\"name\":\"d3db60615905e44\"},{\"mac\":\"0a:58:0a:d9:00:17\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7\"}],\"ips\":[{\"address\":\"10.217.0.23/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.132119023+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:91:66:93:67:34\",\"name\":\"d84dd6581e40bee\"},{\"mac\":\"0a:58:0a:d9:00:37\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976\"}],\"ips\":[{\"address\":\"10.217.0.55/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.132119023+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7" Netns:"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6e:43:34:af:3f:cd\",\"name\":\"97418fd7ce5644b\"},{\"mac\":\"0a:58:0a:d9:00:40\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a\"}],\"ips\":[{\"address\":\"10.217.0.64/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.280437451+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed" Netns:"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c6:21:89:7b:ef:07\",\"name\":\"22d48c9fe60d97e\"},{\"mac\":\"0a:58:0a:d9:00:2d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738\"}],\"ips\":[{\"address\":\"10.217.0.45/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.310012694+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5" Netns:"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"52:85:10:ff:fa:5e\",\"name\":\"717e351e369b4a5\"},{\"mac\":\"0a:58:0a:d9:00:10\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b\"}],\"ips\":[{\"address\":\"10.217.0.16/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.310012694+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007" Netns:"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:f0:82:7d:d8:bb\",\"name\":\"a10fd87b4b9fef3\"},{\"mac\":\"0a:58:0a:d9:00:3d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f\"}],\"ips\":[{\"address\":\"10.217.0.61/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.313906515+00:00 stderr P 2025-08-13T19:59:06Z [verbose] 2025-08-13T19:59:06.313973167+00:00 stderr P ADD finished CNI request ContainerID:"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238" Netns:"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:68:88:74:1a:1a\",\"name\":\"0e119602de1750a\"},{\"mac\":\"0a:58:0a:d9:00:3e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a\"}],\"ips\":[{\"address\":\"10.217.0.62/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.314000308+00:00 stderr F 2025-08-13T19:59:06.314444290+00:00 stderr P 2025-08-13T19:59:06Z [verbose] 2025-08-13T19:59:06.314487182+00:00 stderr P ADD finished CNI request ContainerID:"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621" Netns:"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:1b:59:6f:cb:45\",\"name\":\"ce1a5d3596103f2\"},{\"mac\":\"0a:58:0a:d9:00:31\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232\"}],\"ips\":[{\"address\":\"10.217.0.49/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.314518213+00:00 stderr F 2025-08-13T19:59:06.320966546+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:02:94:00:bf:62\",\"name\":\"961449f5e5e8534\"},{\"mac\":\"0a:58:0a:d9:00:25\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454\"}],\"ips\":[{\"address\":\"10.217.0.37/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:27.144002369+00:00 stderr F 2025-08-13T19:59:27Z [verbose] DEL starting CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"" 2025-08-13T19:59:27.179924703+00:00 stderr F 2025-08-13T19:59:27Z [verbose] Del: openshift-kube-controller-manager:revision-pruner-8-crc:72854c1e-5ae2-4ed6-9e50-ff3bccde2635:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:33.096082993+00:00 stderr F 2025-08-13T19:59:33Z [verbose] DEL finished CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"", result: "", err: 2025-08-13T19:59:41.933253729+00:00 stderr F 2025-08-13T19:59:41Z [verbose] ADD starting CNI request ContainerID:"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee" Netns:"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"" 2025-08-13T19:59:43.456335124+00:00 stderr F 2025-08-13T19:59:43Z [verbose] DEL starting CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T19:59:43.456335124+00:00 stderr F 2025-08-13T19:59:43Z [verbose] Del: openshift-service-ca:service-ca-666f99b6f-vlbxv:378552fd-5e53-4882-87ff-95f3d9198861:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:47.068968522+00:00 stderr F 2025-08-13T19:59:47Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-kk8kg:e4a7de23-6134-4044-902a-0900dc04a501:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5069234e6bbbde","mac":"46:16:5d:0b:45:7c"},{"name":"eth0","mac":"0a:58:0a:d9:00:28","sandbox":"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.40/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:47.099974756+00:00 stderr F I0813 19:59:47.099179 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-kk8kg", UID:"e4a7de23-6134-4044-902a-0900dc04a501", APIVersion:"v1", ResourceVersion:"28290", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.40/23] from ovn-kubernetes 2025-08-13T19:59:47.897507141+00:00 stderr F 2025-08-13T19:59:47Z [verbose] ADD finished CNI request ContainerID:"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee" Netns:"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"46:16:5d:0b:45:7c\",\"name\":\"c5069234e6bbbde\"},{\"mac\":\"0a:58:0a:d9:00:28\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566\"}],\"ips\":[{\"address\":\"10.217.0.40/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T19:59:49.047495393+00:00 stderr F 2025-08-13T19:59:49Z [verbose] DEL finished CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "", err: 2025-08-13T19:59:52.638690193+00:00 stderr F 2025-08-13T19:59:52Z [verbose] DEL starting CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T19:59:52.658090086+00:00 stderr F 2025-08-13T19:59:52Z [verbose] Del: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:87df87f4-ba66-4137-8e41-1fa632ad4207:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:52.849384859+00:00 stderr P 2025-08-13T19:59:52Z [verbose] 2025-08-13T19:59:52.849595085+00:00 stderr P DEL starting CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"" 2025-08-13T19:59:52.850056428+00:00 stderr F 2025-08-13T19:59:52.850768769+00:00 stderr P 2025-08-13T19:59:52Z [verbose] 2025-08-13T19:59:52.850910913+00:00 stderr P Del: openshift-route-controller-manager:route-controller-manager-5c4dbb8899-tchz5:af6b67a3-a2bd-4051-9adc-c208a5a65d79:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:52.850952694+00:00 stderr F 2025-08-13T19:59:53.782082985+00:00 stderr F 2025-08-13T19:59:53Z [verbose] DEL finished CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "", err: 2025-08-13T19:59:54.606403153+00:00 stderr F 2025-08-13T19:59:54Z [verbose] DEL finished CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"", result: "", err: 
2025-08-13T19:59:57.244142212+00:00 stderr F 2025-08-13T19:59:57Z [verbose] ADD starting CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"" 2025-08-13T19:59:57.594150390+00:00 stderr F 2025-08-13T19:59:57Z [verbose] ADD starting CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"" 2025-08-13T19:59:59.956411127+00:00 stderr F 2025-08-13T19:59:59Z [verbose] Add: openshift-controller-manager:controller-manager-c4dd57946-mpxjt:16f68e98-a8f9-417a-b92b-37bfd7b11e01:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4cfa6ec97b88dab","mac":"d6:04:2f:51:ff:10"},{"name":"eth0","mac":"0a:58:0a:d9:00:29","sandbox":"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.41/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:59.956411127+00:00 stderr F I0813 19:59:59.954708 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-c4dd57946-mpxjt", UID:"16f68e98-a8f9-417a-b92b-37bfd7b11e01", APIVersion:"v1", ResourceVersion:"28746", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.41/23] from ovn-kubernetes 2025-08-13T20:00:00.044587720+00:00 
stderr F 2025-08-13T20:00:00Z [verbose] ADD finished CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:04:2f:51:ff:10\",\"name\":\"4cfa6ec97b88dab\"},{\"mac\":\"0a:58:0a:d9:00:29\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6\"}],\"ips\":[{\"address\":\"10.217.0.41/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:01.558864525+00:00 stderr F 2025-08-13T20:00:01Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-5b77f9fd48-hb8xt:83bf0764-e80c-490b-8d3c-3cf626fdb233:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"13b18d12f5f999b","mac":"f6:89:4d:62:4a:70"},{"name":"eth0","mac":"0a:58:0a:d9:00:2a","sandbox":"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.42/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:01.559106632+00:00 stderr F I0813 20:00:01.559027 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-5b77f9fd48-hb8xt", UID:"83bf0764-e80c-490b-8d3c-3cf626fdb233", APIVersion:"v1", ResourceVersion:"28749", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.42/23] from ovn-kubernetes 2025-08-13T20:00:01.691357391+00:00 stderr P 2025-08-13T20:00:01Z [verbose] 2025-08-13T20:00:01.731302750+00:00 stderr P ADD finished CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" 
Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f6:89:4d:62:4a:70\",\"name\":\"13b18d12f5f999b\"},{\"mac\":\"0a:58:0a:d9:00:2a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39\"}],\"ips\":[{\"address\":\"10.217.0.42/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:01.731524846+00:00 stderr F 2025-08-13T20:00:02.963959695+00:00 stderr F 2025-08-13T20:00:02Z [verbose] ADD starting CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"" 2025-08-13T20:00:05.345329857+00:00 stderr P 2025-08-13T20:00:05Z [verbose] 2025-08-13T20:00:05.345387239+00:00 stderr P Add: openshift-operator-lifecycle-manager:collect-profiles-29251920-wcws2:deaee4f4-7b7a-442d-99b7-c8ac62ef5f27:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"eae823dac0e12a2","mac":"92:41:c9:a3:ea:a6"},{"name":"eth0","mac":"0a:58:0a:d9:00:2c","sandbox":"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.44/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:05.345418890+00:00 stderr F 2025-08-13T20:00:05.346879541+00:00 stderr F I0813 20:00:05.345996 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251920-wcws2", UID:"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27", APIVersion:"v1", ResourceVersion:"28823", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.44/23] from ovn-kubernetes 2025-08-13T20:00:05.381500918+00:00 stderr P 2025-08-13T20:00:05Z [verbose] 2025-08-13T20:00:05.381557790+00:00 stderr P ADD finished CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:41:c9:a3:ea:a6\",\"name\":\"eae823dac0e12a2\"},{\"mac\":\"0a:58:0a:d9:00:2c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984\"}],\"ips\":[{\"address\":\"10.217.0.44/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:05.381588991+00:00 stderr F 2025-08-13T20:00:08.083714518+00:00 stderr F 2025-08-13T20:00:08Z [verbose] ADD starting CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"" 2025-08-13T20:00:11.240574632+00:00 stderr F 2025-08-13T20:00:11Z [verbose] ADD starting CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"" 2025-08-13T20:00:12.220030431+00:00 stderr F 2025-08-13T20:00:12Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-9-crc:a0453d24-e872-43af-9e7a-86227c26d200:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"beb700893f285f1","mac":"06:fd:56:87:f4:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:34","sandbox":"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.52/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:12.220030431+00:00 stderr F I0813 20:00:12.216134 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-9-crc", UID:"a0453d24-e872-43af-9e7a-86227c26d200", APIVersion:"v1", ResourceVersion:"28975", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.52/23] from ovn-kubernetes 2025-08-13T20:00:12.800695578+00:00 stderr F 2025-08-13T20:00:12Z [verbose] ADD finished CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:fd:56:87:f4:cd\",\"name\":\"beb700893f285f1\"},{\"mac\":\"0a:58:0a:d9:00:34\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173\"}],\"ips\":[{\"address\":\"10.217.0.52/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T20:00:13.096603935+00:00 stderr F 2025-08-13T20:00:13Z [verbose] Add: openshift-kube-controller-manager:installer-9-crc:227e3650-2a85-4229-8099-bb53972635b2:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ca267bd7a205181","mac":"06:5e:50:09:30:d5"},{"name":"eth0","mac":"0a:58:0a:d9:00:35","sandbox":"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.53/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:13.096603935+00:00 stderr F I0813 20:00:13.095603 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-9-crc", UID:"227e3650-2a85-4229-8099-bb53972635b2", APIVersion:"v1", ResourceVersion:"29034", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.53/23] from ovn-kubernetes 2025-08-13T20:00:13.126610891+00:00 stderr P 2025-08-13T20:00:13Z [verbose] 2025-08-13T20:00:13.126685123+00:00 stderr P DEL starting CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"" 2025-08-13T20:00:13.126721704+00:00 stderr F 2025-08-13T20:00:13.127349202+00:00 stderr P 2025-08-13T20:00:13Z [verbose] 2025-08-13T20:00:13.127392773+00:00 stderr P Del: openshift-route-controller-manager:route-controller-manager-5b77f9fd48-hb8xt:83bf0764-e80c-490b-8d3c-3cf626fdb233:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:13.127422254+00:00 stderr F 2025-08-13T20:00:13.427259784+00:00 stderr F 2025-08-13T20:00:13Z [verbose] ADD finished CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:5e:50:09:30:d5\",\"name\":\"ca267bd7a205181\"},{\"mac\":\"0a:58:0a:d9:00:35\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330\"}],\"ips\":[{\"address\":\"10.217.0.53/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:13.463043484+00:00 stderr F 2025-08-13T20:00:13Z [verbose] DEL starting CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"" 2025-08-13T20:00:13.463043484+00:00 stderr F 2025-08-13T20:00:13Z [verbose] Del: openshift-controller-manager:controller-manager-c4dd57946-mpxjt:16f68e98-a8f9-417a-b92b-37bfd7b11e01:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:14.913106910+00:00 stderr F 2025-08-13T20:00:14Z [verbose] DEL finished CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"", result: "", err: 2025-08-13T20:00:15.107138713+00:00 stderr F 2025-08-13T20:00:15Z [verbose] DEL starting CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"" 2025-08-13T20:00:15.107138713+00:00 stderr F 2025-08-13T20:00:15Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29251920-wcws2:deaee4f4-7b7a-442d-99b7-c8ac62ef5f27:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:15.630039383+00:00 stderr P 2025-08-13T20:00:15Z [verbose] 2025-08-13T20:00:15.630093634+00:00 stderr P DEL finished CNI request 
ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"", result: "", err: 2025-08-13T20:00:15.630118135+00:00 stderr F 2025-08-13T20:00:16.272621295+00:00 stderr F 2025-08-13T20:00:16Z [verbose] DEL finished CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"", result: "", err: 2025-08-13T20:00:18.052931309+00:00 stderr F 2025-08-13T20:00:18Z [verbose] DEL starting CNI request ContainerID:"628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28658250-dvzvw;K8S_POD_INFRA_CONTAINER_ID=628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292;K8S_POD_UID=05fb6e44-aaf9-4fbc-a235-7a3447ac3086" Path:"" 2025-08-13T20:00:18.053876806+00:00 stderr F 2025-08-13T20:00:18Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292: no such file or directory, cannot properly delete 2025-08-13T20:00:18.053876806+00:00 stderr F 2025-08-13T20:00:18Z [verbose] DEL finished CNI request ContainerID:"628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28658250-dvzvw;K8S_POD_INFRA_CONTAINER_ID=628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292;K8S_POD_UID=05fb6e44-aaf9-4fbc-a235-7a3447ac3086" Path:"", result: "", err: 2025-08-13T20:00:18.953674012+00:00 stderr F 2025-08-13T20:00:18Z [verbose] ADD starting CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"" 2025-08-13T20:00:19.081176678+00:00 stderr F 2025-08-13T20:00:19Z [verbose] DEL starting CNI request ContainerID:"d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"" 2025-08-13T20:00:19.153086928+00:00 stderr F 2025-08-13T20:00:19Z [verbose] Del: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:19.425343031+00:00 stderr F 2025-08-13T20:00:19Z [verbose] ADD starting CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"" 2025-08-13T20:00:20.398503070+00:00 stderr F 2025-08-13T20:00:20Z [verbose] ADD starting CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"" 2025-08-13T20:00:21.775932115+00:00 stderr F 2025-08-13T20:00:21Z [verbose] ADD starting CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"" 2025-08-13T20:00:21.808731090+00:00 stderr F 2025-08-13T20:00:21Z [verbose] DEL finished CNI request ContainerID:"d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"", result: "", err: 2025-08-13T20:00:22.604078219+00:00 stderr F 2025-08-13T20:00:22Z [verbose] DEL starting CNI request ContainerID:"a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"" 2025-08-13T20:00:22.624992706+00:00 stderr F 2025-08-13T20:00:22Z [verbose] Del: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:23.221625190+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.221874807+00:00 stderr P DEL starting CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"" 2025-08-13T20:00:23.221917299+00:00 stderr F 2025-08-13T20:00:23.222874806+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.222953108+00:00 stderr P Del: openshift-kube-controller-manager:revision-pruner-9-crc:a0453d24-e872-43af-9e7a-86227c26d200:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:23.222991239+00:00 stderr F 2025-08-13T20:00:23.426680097+00:00 stderr F 2025-08-13T20:00:23Z [verbose] 
Add: openshift-route-controller-manager:route-controller-manager-6cfd9fc8fc-7sbzw:1713e8bc-bab0-49a8-8618-9ded2e18906c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"1f55b781eeb63db","mac":"4a:a0:3c:7f:f6:89"},{"name":"eth0","mac":"0a:58:0a:d9:00:38","sandbox":"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.56/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:23.427156031+00:00 stderr F I0813 20:00:23.427101 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-6cfd9fc8fc-7sbzw", UID:"1713e8bc-bab0-49a8-8618-9ded2e18906c", APIVersion:"v1", ResourceVersion:"29279", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.56/23] from ovn-kubernetes 2025-08-13T20:00:23.524430564+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.524518267+00:00 stderr P Add: openshift-kube-apiserver:installer-9-crc:2ad657a4-8b02-4373-8d0d-b0e25345dc90:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9b70547ed21fdd5","mac":"1e:44:eb:8d:8e:6a"},{"name":"eth0","mac":"0a:58:0a:d9:00:37","sandbox":"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.55/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:23.524584588+00:00 stderr F 2025-08-13T20:00:23.525128274+00:00 stderr F I0813 20:00:23.525048 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-9-crc", UID:"2ad657a4-8b02-4373-8d0d-b0e25345dc90", APIVersion:"v1", ResourceVersion:"29261", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.55/23] from ovn-kubernetes 2025-08-13T20:00:23.693962418+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.694021390+00:00 stderr P Add: 
openshift-console:console-5d9678894c-wx62n:384ed0e8-86e4-42df-bd2c-604c1f536a15:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"612e7824c92f4db","mac":"2a:0d:bb:e8:fc:b3"},{"name":"eth0","mac":"0a:58:0a:d9:00:39","sandbox":"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.57/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:23.694045971+00:00 stderr F 2025-08-13T20:00:23.694451672+00:00 stderr F I0813 20:00:23.694330 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-5d9678894c-wx62n", UID:"384ed0e8-86e4-42df-bd2c-604c1f536a15", APIVersion:"v1", ResourceVersion:"29333", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.57/23] from ovn-kubernetes 2025-08-13T20:00:23.700320369+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: "", err: 2025-08-13T20:00:23.734697240+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL starting CNI request ContainerID:"f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"" 2025-08-13T20:00:23.735698498+00:00 stderr F 2025-08-13T20:00:23Z [verbose] Del: 
openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:23.764698625+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.764912391+00:00 stderr P Add: openshift-controller-manager:controller-manager-67685c4459-7p2h8:a560ec6a-586f-403c-a08e-e3a76fa1b7fd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"51aea926a857cd4","mac":"9e:95:94:ba:9d:71"},{"name":"eth0","mac":"0a:58:0a:d9:00:3a","sandbox":"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.58/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:23.765013234+00:00 stderr F 2025-08-13T20:00:23.765593531+00:00 stderr F I0813 20:00:23.765564 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-67685c4459-7p2h8", UID:"a560ec6a-586f-403c-a08e-e3a76fa1b7fd", APIVersion:"v1", ResourceVersion:"29411", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.58/23] from ovn-kubernetes 2025-08-13T20:00:23.818472889+00:00 stderr F 2025-08-13T20:00:23Z [verbose] ADD finished CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4a:a0:3c:7f:f6:89\",\"name\":\"1f55b781eeb63db\"},{\"mac\":\"0a:58:0a:d9:00:38\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588\"}],\"ips\":[{\"address\":\"10.217.0.56/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:23.895917767+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"", result: "", err: 2025-08-13T20:00:23.993536050+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: "", err: 2025-08-13T20:00:24.155722865+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-08-13T20:00:24.158038961+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:24.250949330+00:00 stderr P 2025-08-13T20:00:24Z [verbose] 2025-08-13T20:00:24.282009636+00:00 stderr F ADD finished CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:44:eb:8d:8e:6a\",\"name\":\"9b70547ed21fdd5\"},{\"mac\":\"0a:58:0a:d9:00:37\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d\"}],\"ips\":[{\"address\":\"10.217.0.55/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:24.282009636+00:00 stderr F 2025-08-13T20:00:24Z [verbose] ADD finished CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:0d:bb:e8:fc:b3\",\"name\":\"612e7824c92f4db\"},{\"mac\":\"0a:58:0a:d9:00:39\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0\"}],\"ips\":[{\"address\":\"10.217.0.57/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T20:00:24.282009636+00:00 stderr F 2025-08-13T20:00:24Z [verbose] ADD finished CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:95:94:ba:9d:71\",\"name\":\"51aea926a857cd4\"},{\"mac\":\"0a:58:0a:d9:00:3a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d\"}],\"ips\":[{\"address\":\"10.217.0.58/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:24.344766175+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL finished CNI request ContainerID:"657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "", err: 2025-08-13T20:00:24.388626996+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 2025-08-13T20:00:24.400294769+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: 
openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:24.770294289+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL finished CNI request ContainerID:"4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "", err: 2025-08-13T20:00:24.831022011+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2025-08-13T20:00:24.837189417+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:26.002482674+00:00 stderr F 2025-08-13T20:00:26Z [verbose] DEL finished CNI request ContainerID:"defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e" 
Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: "", err: 2025-08-13T20:00:26.422336186+00:00 stderr F 2025-08-13T20:00:26Z [verbose] DEL starting CNI request ContainerID:"6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 2025-08-13T20:00:26.434518494+00:00 stderr F 2025-08-13T20:00:26Z [verbose] Del: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:27.168170163+00:00 stderr F 2025-08-13T20:00:27Z [verbose] DEL starting CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"" 2025-08-13T20:00:27.168170163+00:00 stderr F 2025-08-13T20:00:27Z [verbose] Del: openshift-controller-manager:controller-manager-67685c4459-7p2h8:a560ec6a-586f-403c-a08e-e3a76fa1b7fd:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:27.794460121+00:00 stderr F 2025-08-13T20:00:27Z [verbose] DEL starting CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"" 2025-08-13T20:00:27.794704278+00:00 stderr F 2025-08-13T20:00:27Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-6cfd9fc8fc-7sbzw:1713e8bc-bab0-49a8-8618-9ded2e18906c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:28.483037086+00:00 stderr P 2025-08-13T20:00:28Z [verbose] 2025-08-13T20:00:28.483116038+00:00 stderr P ADD starting CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"" 2025-08-13T20:00:28.483149849+00:00 stderr F 2025-08-13T20:00:28.644932272+00:00 stderr F 2025-08-13T20:00:28Z [verbose] ADD starting CNI request 
ContainerID:"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e" Netns:"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2025-08-13T20:00:28.787764605+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL finished CNI request ContainerID:"6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "", err: 2025-08-13T20:00:28.855300300+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL finished CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"", result: "", err: 2025-08-13T20:00:28.919684786+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL starting CNI request ContainerID:"9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-08-13T20:00:28.922399074+00:00 stderr F 2025-08-13T20:00:28Z [verbose] Del: 
openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:29.981503632+00:00 stderr F 2025-08-13T20:00:29Z [verbose] ADD starting CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"" 2025-08-13T20:00:30.201077733+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL finished CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"", result: "", err: 2025-08-13T20:00:30.542571871+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL finished CNI request ContainerID:"9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: "", err: 2025-08-13T20:00:30.750113158+00:00 stderr P 2025-08-13T20:00:30Z [verbose] 
2025-08-13T20:00:30.750212571+00:00 stderr P Add: openshift-image-registry:image-registry-7cbd5666ff-bbfrf:42b6a393-6194-4620-bf8f-7e4b6cbe5679:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"958ba1ee8e9afa1","mac":"72:4f:d8:41:fe:b2"},{"name":"eth0","mac":"0a:58:0a:d9:00:26","sandbox":"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.38/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:30.750352425+00:00 stderr F 2025-08-13T20:00:30.757356695+00:00 stderr F I0813 20:00:30.750923 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-7cbd5666ff-bbfrf", UID:"42b6a393-6194-4620-bf8f-7e4b6cbe5679", APIVersion:"v1", ResourceVersion:"27607", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.38/23] from ovn-kubernetes 2025-08-13T20:00:30.851936702+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL starting CNI request ContainerID:"e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T20:00:30.851936702+00:00 stderr F 2025-08-13T20:00:30Z [verbose] Del: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:30.863932404+00:00 stderr F 2025-08-13T20:00:30Z [verbose] ADD finished CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" 
Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:4f:d8:41:fe:b2\",\"name\":\"958ba1ee8e9afa1\"},{\"mac\":\"0a:58:0a:d9:00:26\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda\"}],\"ips\":[{\"address\":\"10.217.0.38/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:30.912946971+00:00 stderr F 2025-08-13T20:00:30Z [verbose] Add: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7356b549b0982e9","mac":"1a:15:48:83:02:58"},{"name":"eth0","mac":"0a:58:0a:d9:00:3b","sandbox":"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.59/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:30.912946971+00:00 stderr F I0813 20:00:30.912354 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75779c45fd-v2j2v", UID:"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319", APIVersion:"v1", ResourceVersion:"29604", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.59/23] from ovn-kubernetes 2025-08-13T20:00:30.967560289+00:00 stderr F 2025-08-13T20:00:30Z [verbose] ADD finished CNI request ContainerID:"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e" Netns:"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:15:48:83:02:58\",\"name\":\"7356b549b0982e9\"},{\"mac\":\"0a:58:0a:d9:00:3b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f\"}],\"ips\":[{\"address\":\"10.217.0.59/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:31.002266648+00:00 stderr P 2025-08-13T20:00:30Z [verbose] 2025-08-13T20:00:31.002340550+00:00 stderr P Add: openshift-controller-manager:controller-manager-78589965b8-vmcwt:00d32440-4cce-4609-96f3-51ac94480aab:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"97945bb2ed21e57","mac":"de:38:c2:b4:d8:3a"},{"name":"eth0","mac":"0a:58:0a:d9:00:3c","sandbox":"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.60/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:31.002466734+00:00 stderr F 2025-08-13T20:00:31.003170114+00:00 stderr F I0813 20:00:31.003086 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-78589965b8-vmcwt", UID:"00d32440-4cce-4609-96f3-51ac94480aab", APIVersion:"v1", ResourceVersion:"29670", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.60/23] from ovn-kubernetes 2025-08-13T20:00:31.082701722+00:00 stderr F 2025-08-13T20:00:31Z [verbose] ADD finished CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"de:38:c2:b4:d8:3a\",\"name\":\"97945bb2ed21e57\"},{\"mac\":\"0a:58:0a:d9:00:3c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52\"}],\"ips\":[{\"address\":\"10.217.0.60/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:32.195444221+00:00 stderr F 2025-08-13T20:00:32Z [verbose] DEL finished CNI request ContainerID:"e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "", err: 2025-08-13T20:00:32.558349439+00:00 stderr F 2025-08-13T20:00:32Z [verbose] DEL starting CNI request ContainerID:"9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-08-13T20:00:32.651056871+00:00 stderr F 2025-08-13T20:00:32Z [verbose] Del: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 
2025-08-13T20:00:34.140405589+00:00 stderr F 2025-08-13T20:00:34Z [verbose] ADD starting CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"" 2025-08-13T20:00:34.673187000+00:00 stderr F 2025-08-13T20:00:34Z [verbose] DEL finished CNI request ContainerID:"9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "", err: 2025-08-13T20:00:34.835387915+00:00 stderr F 2025-08-13T20:00:34Z [verbose] DEL starting CNI request ContainerID:"fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2025-08-13T20:00:34.844419193+00:00 stderr F 2025-08-13T20:00:34Z [verbose] Del: openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:35.948650999+00:00 stderr P 2025-08-13T20:00:35Z 
[verbose] 2025-08-13T20:00:35.948741002+00:00 stderr P Add: openshift-route-controller-manager:route-controller-manager-846977c6bc-7gjhh:ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7b8bdc9f188dc33","mac":"76:3b:e8:1c:21:23"},{"name":"eth0","mac":"0a:58:0a:d9:00:41","sandbox":"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.65/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:35.948823624+00:00 stderr F 2025-08-13T20:00:35.967116546+00:00 stderr F I0813 20:00:35.953067 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-846977c6bc-7gjhh", UID:"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d", APIVersion:"v1", ResourceVersion:"29760", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.65/23] from ovn-kubernetes 2025-08-13T20:00:36.106877411+00:00 stderr F 2025-08-13T20:00:36Z [verbose] DEL finished CNI request ContainerID:"fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: "", err: 2025-08-13T20:00:36.200960723+00:00 stderr P 2025-08-13T20:00:36Z [verbose] 2025-08-13T20:00:36.201043076+00:00 stderr P DEL starting CNI request ContainerID:"9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2025-08-13T20:00:36.201067356+00:00 stderr F 
2025-08-13T20:00:36.214635213+00:00 stderr P 2025-08-13T20:00:36Z [verbose] 2025-08-13T20:00:36.214699455+00:00 stderr P Del: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:36.214723896+00:00 stderr F 2025-08-13T20:00:36.550105928+00:00 stderr F 2025-08-13T20:00:36Z [verbose] ADD finished CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"76:3b:e8:1c:21:23\",\"name\":\"7b8bdc9f188dc33\"},{\"mac\":\"0a:58:0a:d9:00:41\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28\"}],\"ips\":[{\"address\":\"10.217.0.65/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:37.125114874+00:00 stderr F 2025-08-13T20:00:37Z [verbose] DEL finished CNI request ContainerID:"9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: "", err: 2025-08-13T20:00:37.215378848+00:00 stderr F 2025-08-13T20:00:37Z [verbose] DEL starting CNI request 
ContainerID:"fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T20:00:37.262763049+00:00 stderr F 2025-08-13T20:00:37Z [verbose] Del: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:38.651627631+00:00 stderr P 2025-08-13T20:00:38Z [verbose] 2025-08-13T20:00:38.651700183+00:00 stderr P DEL finished CNI request ContainerID:"fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "", err: 2025-08-13T20:00:38.651743074+00:00 stderr F 2025-08-13T20:00:38.821651149+00:00 stderr P 2025-08-13T20:00:38Z [verbose] 2025-08-13T20:00:38.821763572+00:00 stderr P ADD starting CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"" 2025-08-13T20:00:38.827408263+00:00 stderr F 
2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [verbose] DEL starting CNI request ContainerID:"16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61;K8S_POD_UID=7fc6841e-1b13-42dc-8470-506b09b9d82d" Path:"" 2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61: no such file or directory, cannot properly delete 2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [verbose] DEL finished CNI request ContainerID:"16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61;K8S_POD_UID=7fc6841e-1b13-42dc-8470-506b09b9d82d" Path:"", result: "", err: 2025-08-13T20:00:41.080083045+00:00 stderr F 2025-08-13T20:00:41Z [verbose] DEL starting CNI request ContainerID:"8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2025-08-13T20:00:41.080083045+00:00 stderr F 2025-08-13T20:00:41Z [verbose] Del: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:42.117281860+00:00 stderr F 2025-08-13T20:00:42Z [verbose] ADD starting CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"" 2025-08-13T20:00:42.225946918+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL starting CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T20:00:42.228035068+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Del: openshift-authentication:oauth-openshift-765b47f944-n2lhl:13ad7555-5f28-4555-a563-892713a8433a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:42.356362727+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL finished CNI request ContainerID:"8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: "", err: 2025-08-13T20:00:42.478464869+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Add: openshift-kube-scheduler:installer-7-crc:b57cce81-8ea0-4c4d-aae1-ee024d201c15:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"639e0e9093fe7c9","mac":"d2:b3:7b:29:cb:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:43","sandbox":"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.67/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:42.479200910+00:00 stderr F I0813 20:00:42.479162 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler", Name:"installer-7-crc", UID:"b57cce81-8ea0-4c4d-aae1-ee024d201c15", APIVersion:"v1", ResourceVersion:"29872", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.67/23] from ovn-kubernetes 2025-08-13T20:00:42.730766093+00:00 stderr P 2025-08-13T20:00:42Z [verbose] 2025-08-13T20:00:42.730891296+00:00 stderr P DEL starting CNI request ContainerID:"da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-585546dd8b-v5m4t;K8S_POD_INFRA_CONTAINER_ID=da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02;K8S_POD_UID=c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Path:"" 2025-08-13T20:00:42.730916937+00:00 stderr F 2025-08-13T20:00:42.941020918+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-10-crc:2f155735-a9be-4621-a5f2-5ab4b6957acd:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"c05ff35bd00034f","mac":"d2:80:43:7f:e4:85"},{"name":"eth0","mac":"0a:58:0a:d9:00:44","sandbox":"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.68/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:42.944025443+00:00 stderr F I0813 20:00:42.941635 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-10-crc", UID:"2f155735-a9be-4621-a5f2-5ab4b6957acd", APIVersion:"v1", ResourceVersion:"29896", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.68/23] from ovn-kubernetes 2025-08-13T20:00:42.950006964+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL finished CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "", err: 2025-08-13T20:00:42.992405573+00:00 stderr P 2025-08-13T20:00:42Z [verbose] 2025-08-13T20:00:42.992465665+00:00 stderr P ADD starting CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"" 2025-08-13T20:00:42.992498166+00:00 stderr F 2025-08-13T20:00:43.013136444+00:00 stderr F 2025-08-13T20:00:43Z [verbose] Del: openshift-image-registry:image-registry-585546dd8b-v5m4t:unknownUID:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:43.393009776+00:00 stderr F 2025-08-13T20:00:43Z [verbose] DEL finished CNI request ContainerID:"da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-585546dd8b-v5m4t;K8S_POD_INFRA_CONTAINER_ID=da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02;K8S_POD_UID=c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Path:"", result: "", err: 2025-08-13T20:00:43.421580150+00:00 stderr P 2025-08-13T20:00:43Z [verbose] 2025-08-13T20:00:43.421641321+00:00 stderr P Add: openshift-kube-controller-manager:installer-10-crc:79050916-d488-4806-b556-1b0078b31e53:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5d98545d20b610","mac":"1e:0a:7c:9b:81:94"},{"name":"eth0","mac":"0a:58:0a:d9:00:45","sandbox":"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.69/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:43.421713993+00:00 stderr F 2025-08-13T20:00:43.422160696+00:00 stderr F I0813 20:00:43.422095 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-10-crc", UID:"79050916-d488-4806-b556-1b0078b31e53", APIVersion:"v1", ResourceVersion:"29912", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.69/23] from ovn-kubernetes 2025-08-13T20:00:43.680240285+00:00 stderr F 2025-08-13T20:00:43Z [verbose] DEL starting CNI request ContainerID:"5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2025-08-13T20:00:43.758502797+00:00 stderr F 2025-08-13T20:00:43Z [verbose] Del: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:44.114893609+00:00 stderr F 2025-08-13T20:00:44Z [verbose] DEL finished CNI request ContainerID:"5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: "", err: 2025-08-13T20:00:44.184127013+00:00 stderr F 2025-08-13T20:00:44Z [verbose] DEL starting CNI request ContainerID:"a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2025-08-13T20:00:44.217335970+00:00 stderr F 2025-08-13T20:00:44Z [verbose] Del: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:45.211890339+00:00 stderr F 2025-08-13T20:00:45Z [verbose] DEL finished CNI request ContainerID:"a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: "", err: 2025-08-13T20:00:45.672314437+00:00 stderr P 2025-08-13T20:00:45Z [verbose] 2025-08-13T20:00:45.672434441+00:00 stderr P DEL starting CNI request ContainerID:"5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2025-08-13T20:00:45.672463122+00:00 stderr F 2025-08-13T20:00:45.697065183+00:00 stderr P 2025-08-13T20:00:45Z [verbose] 2025-08-13T20:00:45.697150805+00:00 stderr P Del: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:45.697175946+00:00 stderr F 2025-08-13T20:00:45.805306359+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI 
request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:0a:7c:9b:81:94\",\"name\":\"c5d98545d20b610\"},{\"mac\":\"0a:58:0a:d9:00:45\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f\"}],\"ips\":[{\"address\":\"10.217.0.69/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:45.911488567+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:80:43:7f:e4:85\",\"name\":\"c05ff35bd00034f\"},{\"mac\":\"0a:58:0a:d9:00:44\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5\"}],\"ips\":[{\"address\":\"10.217.0.68/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:45.911685283+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:b3:7b:29:cb:9c\",\"name\":\"639e0e9093fe7c9\"},{\"mac\":\"0a:58:0a:d9:00:43\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11\"}],\"ips\":[{\"address\":\"10.217.0.67/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:46.769819461+00:00 stderr F 2025-08-13T20:00:46Z [verbose] ADD starting CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"" 2025-08-13T20:00:47.097347730+00:00 stderr F 2025-08-13T20:00:47Z [verbose] DEL finished CNI request ContainerID:"5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", result: "", err: 2025-08-13T20:00:47.471940481+00:00 stderr F 2025-08-13T20:00:47Z [verbose] DEL starting CNI request ContainerID:"7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2025-08-13T20:00:47.557602813+00:00 stderr F 2025-08-13T20:00:47Z [verbose] Del: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:50.754210741+00:00 stderr F 2025-08-13T20:00:50Z [verbose] Add: openshift-apiserver:apiserver-67cbf64bc9-jjfds:b23d6435-6431-4905-b41b-a517327385e5:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"411add17e78de78","mac":"86:a5:06:68:9f:58"},{"name":"eth0","mac":"0a:58:0a:d9:00:46","sandbox":"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.70/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:50.754265362+00:00 stderr F I0813 20:00:50.754227 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-67cbf64bc9-jjfds", UID:"b23d6435-6431-4905-b41b-a517327385e5", APIVersion:"v1", ResourceVersion:"29962", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.70/23] from ovn-kubernetes 2025-08-13T20:00:50.832530764+00:00 stderr F 2025-08-13T20:00:50Z [verbose] DEL finished CNI request ContainerID:"7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: "", err: 2025-08-13T20:00:51.690436508+00:00 stderr F 2025-08-13T20:00:51Z [verbose] DEL starting CNI request ContainerID:"c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T20:00:51.757410636+00:00 stderr F 2025-08-13T20:00:51Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-mtx25:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:52.963080514+00:00 stderr F 2025-08-13T20:00:52Z [verbose] ADD starting CNI request ContainerID:"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404" Netns:"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"" 2025-08-13T20:00:53.012391360+00:00 stderr F 2025-08-13T20:00:53Z [verbose] DEL finished CNI request ContainerID:"c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "", err: 2025-08-13T20:00:54.999966983+00:00 stderr F 2025-08-13T20:00:54Z [verbose] Add: openshift-authentication:oauth-openshift-74fc7c67cc-xqf8b:01feb2e0-a0f4-4573-8335-34e364e0ef40:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ca33bd29c9a026f","mac":"d2:28:de:94:9a:bd"},{"name":"eth0","mac":"0a:58:0a:d9:00:48","sandbox":"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.72/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:54.999966983+00:00 stderr F I0813 20:00:54.997914 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-74fc7c67cc-xqf8b", UID:"01feb2e0-a0f4-4573-8335-34e364e0ef40", APIVersion:"v1", ResourceVersion:"30093", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.72/23] from ovn-kubernetes 2025-08-13T20:00:55.784357859+00:00 stderr P 2025-08-13T20:00:55Z [verbose] 2025-08-13T20:00:55.784484773+00:00 stderr P DEL starting CNI request ContainerID:"df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 2025-08-13T20:00:55.784519304+00:00 stderr F 2025-08-13T20:00:55.801439626+00:00 stderr P 2025-08-13T20:00:55Z [verbose] 2025-08-13T20:00:55.801514399+00:00 stderr P Del: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:55.801574770+00:00 stderr F 2025-08-13T20:00:57.685908652+00:00 stderr P 2025-08-13T20:00:57Z [verbose] 2025-08-13T20:00:57.686257362+00:00 stderr P DEL finished CNI request ContainerID:"df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: "", err: 2025-08-13T20:00:57.686300333+00:00 stderr F 2025-08-13T20:00:59.625135578+00:00 stderr F 2025-08-13T20:00:59Z [verbose] DEL starting CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"" 2025-08-13T20:00:59.626535218+00:00 stderr F 2025-08-13T20:00:59Z [verbose] Del: openshift-kube-controller-manager:installer-9-crc:227e3650-2a85-4229-8099-bb53972635b2:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:59.949886749+00:00 stderr F 2025-08-13T20:00:59Z [verbose] ADD finished CNI request ContainerID:"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404" 
Netns:"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:28:de:94:9a:bd\",\"name\":\"ca33bd29c9a026f\"},{\"mac\":\"0a:58:0a:d9:00:48\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931\"}],\"ips\":[{\"address\":\"10.217.0.72/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:59.955906181+00:00 stderr F 2025-08-13T20:00:59Z [verbose] ADD finished CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"86:a5:06:68:9f:58\",\"name\":\"411add17e78de78\"},{\"mac\":\"0a:58:0a:d9:00:46\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b\"}],\"ips\":[{\"address\":\"10.217.0.70/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:01:00.262048961+00:00 stderr P 2025-08-13T20:01:00Z [verbose] 2025-08-13T20:01:00.262101852+00:00 stderr P DEL starting CNI request ContainerID:"2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2025-08-13T20:01:00.262127363+00:00 stderr F 2025-08-13T20:01:00.306681903+00:00 stderr P 2025-08-13T20:01:00Z [verbose] 2025-08-13T20:01:00.306880019+00:00 stderr P Del: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:00.306920850+00:00 stderr F 2025-08-13T20:01:01.496994785+00:00 stderr F 2025-08-13T20:01:01Z [verbose] DEL finished CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"", result: "", err: 2025-08-13T20:01:01.818888194+00:00 stderr F 2025-08-13T20:01:01Z [verbose] DEL finished CNI request ContainerID:"2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: "", err: 2025-08-13T20:01:04.274439165+00:00 stderr P 2025-08-13T20:01:04Z [verbose] 
2025-08-13T20:01:04.274510537+00:00 stderr P DEL starting CNI request ContainerID:"432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 2025-08-13T20:01:04.274656522+00:00 stderr F 2025-08-13T20:01:04.279231072+00:00 stderr P 2025-08-13T20:01:04Z [verbose] 2025-08-13T20:01:04.279270203+00:00 stderr P Del: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:04.279294144+00:00 stderr F 2025-08-13T20:01:06.005889996+00:00 stderr P 2025-08-13T20:01:06Z [verbose] 2025-08-13T20:01:06.006018259+00:00 stderr P DEL finished CNI request ContainerID:"432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: "", err: 2025-08-13T20:01:06.006046850+00:00 stderr F 2025-08-13T20:01:07.641698699+00:00 stderr F 2025-08-13T20:01:07Z [verbose] DEL starting CNI request ContainerID:"33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2025-08-13T20:01:07.655044230+00:00 stderr F 2025-08-13T20:01:07Z [verbose] Del: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:11.315883054+00:00 stderr F 2025-08-13T20:01:11Z [verbose] DEL finished CNI request ContainerID:"33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: "", err: 2025-08-13T20:01:11.611486543+00:00 stderr P 2025-08-13T20:01:11Z [verbose] 2025-08-13T20:01:11.611557375+00:00 stderr P DEL starting CNI request ContainerID:"dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2025-08-13T20:01:11.611620457+00:00 stderr F 2025-08-13T20:01:11.640926123+00:00 stderr P 2025-08-13T20:01:11Z [verbose] 2025-08-13T20:01:11.641012095+00:00 stderr P Del: 
openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:11.641042516+00:00 stderr F 2025-08-13T20:01:12.677367365+00:00 stderr P 2025-08-13T20:01:12Z [verbose] 2025-08-13T20:01:12.677746566+00:00 stderr P DEL finished CNI request ContainerID:"dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: "", err: 2025-08-13T20:01:12.677957122+00:00 stderr F 2025-08-13T20:01:15.978545495+00:00 stderr F 2025-08-13T20:01:15Z [verbose] DEL starting CNI request ContainerID:"0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2025-08-13T20:01:16.068195151+00:00 stderr P 2025-08-13T20:01:16Z [verbose] 2025-08-13T20:01:16.068257523+00:00 stderr P DEL starting CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"" 2025-08-13T20:01:16.068284344+00:00 stderr F 2025-08-13T20:01:16.068958033+00:00 stderr P 2025-08-13T20:01:16Z [verbose] 2025-08-13T20:01:16.069011444+00:00 stderr P Del: openshift-kube-controller-manager:revision-pruner-10-crc:2f155735-a9be-4621-a5f2-5ab4b6957acd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:16.069044455+00:00 stderr F 2025-08-13T20:01:16.129297173+00:00 stderr F 2025-08-13T20:01:16Z [verbose] Del: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:17.040373572+00:00 stderr F 2025-08-13T20:01:17Z [verbose] DEL finished CNI request ContainerID:"0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "", err: 2025-08-13T20:01:17.168158526+00:00 stderr P 2025-08-13T20:01:17Z [verbose] 2025-08-13T20:01:17.168284669+00:00 stderr P DEL finished CNI request 
ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"", result: "", err: 2025-08-13T20:01:17.168357521+00:00 stderr F 2025-08-13T20:01:18.589913375+00:00 stderr F 2025-08-13T20:01:18Z [verbose] DEL starting CNI request ContainerID:"878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"" 2025-08-13T20:01:18.595353010+00:00 stderr F 2025-08-13T20:01:18Z [verbose] Del: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:19.197903550+00:00 stderr F 2025-08-13T20:01:19Z [verbose] DEL finished CNI request ContainerID:"878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: "", err: 2025-08-13T20:01:19.430217025+00:00 stderr P 2025-08-13T20:01:19Z [verbose] 2025-08-13T20:01:19.430279626+00:00 stderr P DEL 
starting CNI request ContainerID:"21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2025-08-13T20:01:19.430305647+00:00 stderr F 2025-08-13T20:01:19.469115584+00:00 stderr P 2025-08-13T20:01:19Z [verbose] 2025-08-13T20:01:19.469181956+00:00 stderr P Del: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:19.469207707+00:00 stderr F 2025-08-13T20:01:20.235586379+00:00 stderr F 2025-08-13T20:01:20Z [verbose] DEL finished CNI request ContainerID:"21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: "", err: 2025-08-13T20:01:20.796826942+00:00 stderr F 2025-08-13T20:01:20Z [verbose] DEL starting CNI request ContainerID:"2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2025-08-13T20:01:20.829113883+00:00 stderr F 
2025-08-13T20:01:20Z [verbose] Del: openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:21.226157394+00:00 stderr F 2025-08-13T20:01:21Z [verbose] DEL finished CNI request ContainerID:"2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "", err: 2025-08-13T20:01:22.043060597+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL starting CNI request ContainerID:"3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2025-08-13T20:01:22.078591830+00:00 stderr F 2025-08-13T20:01:22Z [verbose] Del: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:22.231411208+00:00 stderr F 2025-08-13T20:01:22Z [verbose] ADD starting CNI request 
ContainerID:"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Netns:"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"" 2025-08-13T20:01:22.447936762+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL finished CNI request ContainerID:"3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "", err: 2025-08-13T20:01:22.762623774+00:00 stderr F 2025-08-13T20:01:22Z [verbose] Add: openshift-console:console-644bb77b49-5x5xk:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"48ddb06f60b4f68","mac":"2e:65:98:07:c8:3f"},{"name":"eth0","mac":"0a:58:0a:d9:00:49","sandbox":"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.73/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:01:22.763233211+00:00 stderr F I0813 20:01:22.763087 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-644bb77b49-5x5xk", UID:"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1", APIVersion:"v1", ResourceVersion:"30362", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.73/23] from ovn-kubernetes 2025-08-13T20:01:22.999714184+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL starting CNI request ContainerID:"2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2025-08-13T20:01:23.071366437+00:00 stderr F 2025-08-13T20:01:23Z [verbose] Del: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:23.408762758+00:00 stderr F 2025-08-13T20:01:23Z [verbose] DEL finished CNI request ContainerID:"2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: "", err: 2025-08-13T20:01:23.633597139+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:23.634064242+00:00 stderr P ADD finished CNI request ContainerID:"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Netns:"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:65:98:07:c8:3f\",\"name\":\"48ddb06f60b4f68\"},{\"mac\":\"0a:58:0a:d9:00:49\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d\"}],\"ips\":[{\"address\":\"10.217.0.73/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:01:23.634102513+00:00 stderr F 2025-08-13T20:01:23.794134056+00:00 stderr F 2025-08-13T20:01:23Z [verbose] DEL starting CNI request ContainerID:"2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2025-08-13T20:01:23.798542252+00:00 stderr F 2025-08-13T20:01:23Z [verbose] Del: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:23.997263328+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:23.997368721+00:00 stderr P DEL starting CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"" 2025-08-13T20:01:23.997394662+00:00 stderr F 
2025-08-13T20:01:24.002935580+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:24.003018803+00:00 stderr P Del: openshift-controller-manager:controller-manager-78589965b8-vmcwt:00d32440-4cce-4609-96f3-51ac94480aab:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:24.003107155+00:00 stderr F 2025-08-13T20:01:24.207200205+00:00 stderr F 2025-08-13T20:01:24Z [verbose] DEL finished CNI request ContainerID:"2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "", err: 2025-08-13T20:01:24.405154259+00:00 stderr P 2025-08-13T20:01:24Z [verbose] 2025-08-13T20:01:24.405207660+00:00 stderr P DEL finished CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"", result: "", err: 2025-08-13T20:01:24.405232041+00:00 stderr F 2025-08-13T20:01:24.676881827+00:00 stderr F 2025-08-13T20:01:24Z [verbose] DEL starting CNI request ContainerID:"d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 2025-08-13T20:01:24.977894500+00:00 stderr F 2025-08-13T20:01:24Z [verbose] Del: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:25.162886715+00:00 stderr F 2025-08-13T20:01:25Z [verbose] DEL finished CNI request ContainerID:"d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "", err: 2025-08-13T20:01:25.410320910+00:00 stderr F 2025-08-13T20:01:25Z [verbose] DEL starting CNI request ContainerID:"c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2025-08-13T20:01:25.473106201+00:00 stderr F 2025-08-13T20:01:25Z [verbose] Del: openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:26.572894659+00:00 stderr F 2025-08-13T20:01:26Z [verbose] DEL finished CNI request ContainerID:"c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "", err: 2025-08-13T20:01:27.222996886+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL starting CNI request ContainerID:"1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-08-13T20:01:27.225452106+00:00 stderr F 2025-08-13T20:01:27Z [verbose] Del: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:27.256063179+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL starting CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"" 2025-08-13T20:01:27.256524022+00:00 stderr F 2025-08-13T20:01:27Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-846977c6bc-7gjhh:ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:27.767518242+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL finished CNI request ContainerID:"1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "", err: 2025-08-13T20:01:27.879041363+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL finished CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"", result: "", err: 2025-08-13T20:01:28.293605073+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL starting CNI request ContainerID:"5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T20:01:28.599038402+00:00 stderr F 2025-08-13T20:01:28Z [verbose] Del: openshift-authentication:oauth-openshift-765b47f944-n2lhl:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:28.839098428+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL finished CNI request ContainerID:"5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "", err: 2025-08-13T20:01:28.913564581+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL starting CNI request ContainerID:"2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T20:01:29.073348387+00:00 stderr F 2025-08-13T20:01:29Z [verbose] Del: openshift-service-ca:service-ca-666f99b6f-vlbxv:unknownUID:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:29.443305406+00:00 stderr F 2025-08-13T20:01:29Z [verbose] DEL finished CNI request ContainerID:"2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "", err: 2025-08-13T20:01:29.693927872+00:00 stderr P 2025-08-13T20:01:29Z [verbose] 2025-08-13T20:01:29.694009855+00:00 stderr P DEL starting CNI request ContainerID:"dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2025-08-13T20:01:29.694036865+00:00 stderr F 2025-08-13T20:01:29.695198629+00:00 stderr P 2025-08-13T20:01:29Z [verbose] 2025-08-13T20:01:29.695294501+00:00 stderr P Del: openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:29.695434705+00:00 stderr F 2025-08-13T20:01:30.418929144+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL finished CNI request 
ContainerID:"dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: "", err: 2025-08-13T20:01:30.747711369+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL starting CNI request ContainerID:"a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2025-08-13T20:01:30.750870159+00:00 stderr F 2025-08-13T20:01:30Z [verbose] Del: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:30.992747486+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL finished CNI request ContainerID:"a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: "", err: 2025-08-13T20:01:31.345338550+00:00 stderr F 2025-08-13T20:01:31Z [verbose] DEL starting CNI request 
ContainerID:"0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2025-08-13T20:01:31.353528964+00:00 stderr F 2025-08-13T20:01:31Z [verbose] Del: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:31.967512002+00:00 stderr F 2025-08-13T20:01:31Z [verbose] DEL finished CNI request ContainerID:"0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: "", err: 2025-08-13T20:01:32.461093585+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.461155507+00:00 stderr P DEL starting CNI request ContainerID:"b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2025-08-13T20:01:32.461180178+00:00 stderr F 2025-08-13T20:01:32.509104104+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.509161716+00:00 stderr P 
Del: openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:32.509186517+00:00 stderr F 2025-08-13T20:01:32.840936847+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.840995988+00:00 stderr P DEL finished CNI request ContainerID:"b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: "", err: 2025-08-13T20:01:32.841020399+00:00 stderr F 2025-08-13T20:01:32.931127618+00:00 stderr F 2025-08-13T20:01:32Z [verbose] DEL starting CNI request ContainerID:"d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-08-13T20:01:32.957988004+00:00 stderr F 2025-08-13T20:01:32Z [verbose] Del: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:33.181215610+00:00 stderr F 2025-08-13T20:01:33Z [verbose] DEL 
finished CNI request ContainerID:"d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "", err: 2025-08-13T20:01:33.285728640+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.285874134+00:00 stderr P DEL starting CNI request ContainerID:"af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2025-08-13T20:01:33.285905965+00:00 stderr F 2025-08-13T20:01:33.301964453+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.302017954+00:00 stderr P Del: hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:33.302042765+00:00 stderr F 2025-08-13T20:01:33.561984896+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.562075539+00:00 stderr P DEL finished CNI request ContainerID:"af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: "", err: 
2025-08-13T20:01:33.562102600+00:00 stderr F 2025-08-13T20:02:24.117412736+00:00 stderr F 2025-08-13T20:02:24Z [verbose] DEL starting CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"" 2025-08-13T20:02:24.118743814+00:00 stderr F 2025-08-13T20:02:24Z [verbose] Del: openshift-kube-apiserver:installer-9-crc:2ad657a4-8b02-4373-8d0d-b0e25345dc90:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:24.384304969+00:00 stderr F 2025-08-13T20:02:24Z [verbose] DEL finished CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"", result: "", err: 2025-08-13T20:02:25.102333633+00:00 stderr F 2025-08-13T20:02:25Z [verbose] DEL starting CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"" 
2025-08-13T20:02:25.102500167+00:00 stderr F 2025-08-13T20:02:25Z [verbose] Del: openshift-kube-scheduler:installer-7-crc:b57cce81-8ea0-4c4d-aae1-ee024d201c15:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:25.345992404+00:00 stderr F 2025-08-13T20:02:25Z [verbose] DEL finished CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"", result: "", err: 2025-08-13T20:02:33.574743915+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T20:02:33.575285181+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.592362268+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request 
ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"" 2025-08-13T20:02:33.592662666+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-image-registry:image-registry-7cbd5666ff-bbfrf:42b6a393-6194-4620-bf8f-7e4b6cbe5679:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.602590850+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T20:02:33.602590850+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-mtx25:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.797138049+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" 
Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "", err: 2025-08-13T20:02:33.958714969+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "", err: 2025-08-13T20:02:33.961291092+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"", result: "", err: 2025-08-13T20:02:34.574369441+00:00 stderr F 2025-08-13T20:02:34Z [verbose] DEL starting CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"" 2025-08-13T20:02:34.574644859+00:00 stderr F 2025-08-13T20:02:34Z [verbose] Del: 
openshift-kube-controller-manager:installer-10-crc:79050916-d488-4806-b556-1b0078b31e53:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:34.770582238+00:00 stderr F 2025-08-13T20:02:34Z [verbose] DEL finished CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"", result: "", err: 2025-08-13T20:05:14.544423840+00:00 stderr P 2025-08-13T20:05:14Z [verbose] 2025-08-13T20:05:14.544546203+00:00 stderr P ADD starting CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:14.544572164+00:00 stderr F 2025-08-13T20:05:14.562162048+00:00 stderr F 2025-08-13T20:05:14Z [verbose] ADD starting CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:17.045696786+00:00 stderr P 2025-08-13T20:05:17Z [error] 2025-08-13T20:05:17.046235931+00:00 stderr P Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found 2025-08-13T20:05:17.046279673+00:00 stderr F 2025-08-13T20:05:17.046380996+00:00 stderr P 2025-08-13T20:05:17Z [verbose] 2025-08-13T20:05:17.046411026+00:00 stderr P ADD finished CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found 2025-08-13T20:05:17.046435127+00:00 stderr F 2025-08-13T20:05:17.065878714+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found 2025-08-13T20:05:17.065878714+00:00 stderr F 2025-08-13T20:05:17Z [verbose] ADD finished CNI request 
ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found 2025-08-13T20:05:17.113344323+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL starting CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:17.115397422+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626: no such file or directory, cannot properly delete 2025-08-13T20:05:17.115397422+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL finished CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL starting CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23: no such file or directory, cannot properly delete 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL finished CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: 2025-08-13T20:05:19.113406156+00:00 stderr F 2025-08-13T20:05:19Z [verbose] ADD starting CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:19.788248221+00:00 stderr F 2025-08-13T20:05:19Z [verbose] ADD starting CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:22.561108354+00:00 stderr F 2025-08-13T20:05:22Z [verbose] Add: openshift-controller-manager:controller-manager-598fc85fd4-8wlsm:8b8d1c48-5762-450f-bd4d-9134869f432b:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7814bf45dce77ed","mac":"de:08:bb:e3:37:f7"},{"name":"eth0","mac":"0a:58:0a:d9:00:4a","sandbox":"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.74/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:22.561108354+00:00 stderr F I0813 20:05:22.559936 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-598fc85fd4-8wlsm", UID:"8b8d1c48-5762-450f-bd4d-9134869f432b", APIVersion:"v1", ResourceVersion:"31131", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.74/23] from ovn-kubernetes 2025-08-13T20:05:22.604005233+00:00 stderr F 2025-08-13T20:05:22Z [verbose] ADD finished CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"de:08:bb:e3:37:f7\",\"name\":\"7814bf45dce77ed\"},{\"mac\":\"0a:58:0a:d9:00:4a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742\"}],\"ips\":[{\"address\":\"10.217.0.74/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:05:22.654375065+00:00 stderr F 2025-08-13T20:05:22Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-6884dcf749-n4qpx:becc7e17-2bc7-417d-832f-55127299d70f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"924f68f94ccf00f","mac":"32:fc:e5:39:3f:bc"},{"name":"eth0","mac":"0a:58:0a:d9:00:4b","sandbox":"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.75/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:22.654375065+00:00 stderr F I0813 20:05:22.654203 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-6884dcf749-n4qpx", UID:"becc7e17-2bc7-417d-832f-55127299d70f", APIVersion:"v1", ResourceVersion:"31132", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.75/23] from ovn-kubernetes 2025-08-13T20:05:22.708178386+00:00 stderr F 2025-08-13T20:05:22Z [verbose] ADD finished CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:fc:e5:39:3f:bc\",\"name\":\"924f68f94ccf00f\"},{\"mac\":\"0a:58:0a:d9:00:4b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340\"}],\"ips\":[{\"address\":\"10.217.0.75/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:05:57.593828054+00:00 stderr F 2025-08-13T20:05:57Z [verbose] ADD starting CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"" 2025-08-13T20:05:57.913180129+00:00 stderr F 2025-08-13T20:05:57Z [verbose] Add: openshift-kube-controller-manager:installer-10-retry-1-crc:dc02677d-deed-4cc9-bb8c-0dd300f83655:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0d375f365a8fdeb","mac":"1a:07:0f:a8:db:26"},{"name":"eth0","mac":"0a:58:0a:d9:00:4c","sandbox":"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.76/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:57.913180129+00:00 stderr F I0813 20:05:57.907516 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-10-retry-1-crc", UID:"dc02677d-deed-4cc9-bb8c-0dd300f83655", APIVersion:"v1", ResourceVersion:"31897", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 
[10.217.0.76/23] from ovn-kubernetes 2025-08-13T20:05:57.935943931+00:00 stderr F 2025-08-13T20:05:57Z [verbose] ADD finished CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:07:0f:a8:db:26\",\"name\":\"0d375f365a8fdeb\"},{\"mac\":\"0a:58:0a:d9:00:4c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63\"}],\"ips\":[{\"address\":\"10.217.0.76/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:07.174029432+00:00 stderr F 2025-08-13T20:06:07Z [verbose] DEL starting CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"" 2025-08-13T20:06:07.175242426+00:00 stderr F 2025-08-13T20:06:07Z [verbose] Del: openshift-console:console-5d9678894c-wx62n:384ed0e8-86e4-42df-bd2c-604c1f536a15:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:07.412890992+00:00 stderr F 2025-08-13T20:06:07Z [verbose] DEL finished CNI request 
ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"", result: "", err: 2025-08-13T20:06:30.919161705+00:00 stderr F 2025-08-13T20:06:30Z [verbose] DEL starting CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"" 2025-08-13T20:06:30.929945744+00:00 stderr F 2025-08-13T20:06:30Z [verbose] Del: openshift-marketplace:redhat-marketplace-rmwfn:9ad279b4-d9dc-42a8-a1c8-a002bd063482:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:31.214078963+00:00 stderr F 2025-08-13T20:06:31Z [verbose] DEL finished CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"", result: "", err: 2025-08-13T20:06:32.106536910+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL starting CNI request 
ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"" 2025-08-13T20:06:32.107840658+00:00 stderr F 2025-08-13T20:06:32Z [verbose] Del: openshift-marketplace:redhat-operators-dcqzh:6db26b71-4e04-4688-a0c0-00e06e8c888d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:32.435188093+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL finished CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"", result: "", err: 2025-08-13T20:06:32.925951714+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL starting CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"" 2025-08-13T20:06:32.928683942+00:00 stderr F 2025-08-13T20:06:32Z [verbose] Del: 
openshift-marketplace:community-operators-k9qqb:ccdf38cf-634a-41a2-9c8b-74bb86af80a7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:33.103497344+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL starting CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"" 2025-08-13T20:06:33.106290584+00:00 stderr F 2025-08-13T20:06:33Z [verbose] Del: openshift-marketplace:certified-operators-g4v97:bb917686-edfb-4158-86ad-6fce0abec64c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:33.192759834+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL finished CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"", result: "", err: 2025-08-13T20:06:33.476751036+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL finished CNI request 
ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"", result: "", err: 2025-08-13T20:06:34.983841015+00:00 stderr F 2025-08-13T20:06:34Z [verbose] DEL starting CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"" 2025-08-13T20:06:34.983841015+00:00 stderr F 2025-08-13T20:06:34Z [verbose] Del: openshift-kube-controller-manager:installer-10-retry-1-crc:dc02677d-deed-4cc9-bb8c-0dd300f83655:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:35.070086737+00:00 stderr F 2025-08-13T20:06:35Z [verbose] ADD starting CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"" 2025-08-13T20:06:35.488599537+00:00 stderr P 2025-08-13T20:06:35Z [verbose] 
2025-08-13T20:06:35.493270091+00:00 stderr F DEL finished CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"", result: "", err: 2025-08-13T20:06:35.892583669+00:00 stderr F 2025-08-13T20:06:35Z [verbose] Add: openshift-marketplace:redhat-marketplace-4txfd:af6c965e-9dc8-417a-aa1c-303a50ec9adc:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0ac24e234dbea3f","mac":"9a:93:0b:c4:ff:6b"},{"name":"eth0","mac":"0a:58:0a:d9:00:4d","sandbox":"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.77/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:35.893138495+00:00 stderr F I0813 20:06:35.893084 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-4txfd", UID:"af6c965e-9dc8-417a-aa1c-303a50ec9adc", APIVersion:"v1", ResourceVersion:"32097", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.77/23] from ovn-kubernetes 2025-08-13T20:06:35.925956266+00:00 stderr F 2025-08-13T20:06:35Z [verbose] ADD finished CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:93:0b:c4:ff:6b\",\"name\":\"0ac24e234dbea3f\"},{\"mac\":\"0a:58:0a:d9:00:4d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c\"}],\"ips\":[{\"address\":\"10.217.0.77/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:36.181685578+00:00 stderr F 2025-08-13T20:06:36Z [verbose] ADD starting CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"" 2025-08-13T20:06:36.609644798+00:00 stderr F 2025-08-13T20:06:36Z [verbose] Add: openshift-marketplace:certified-operators-cfdk8:5391dc5d-0f00-4464-b617-b164e2f9b77a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"93c5c47bf133377","mac":"42:38:8c:ef:03:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:4e","sandbox":"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.78/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:36.609644798+00:00 stderr F I0813 20:06:36.608961 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-cfdk8", UID:"5391dc5d-0f00-4464-b617-b164e2f9b77a", APIVersion:"v1", ResourceVersion:"32114", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.78/23] from ovn-kubernetes 2025-08-13T20:06:36.644232890+00:00 stderr P 2025-08-13T20:06:36Z [verbose] 2025-08-13T20:06:36.644306972+00:00 stderr P ADD finished CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"42:38:8c:ef:03:cd\",\"name\":\"93c5c47bf133377\"},{\"mac\":\"0a:58:0a:d9:00:4e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8\"}],\"ips\":[{\"address\":\"10.217.0.78/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:36.644332063+00:00 stderr F 2025-08-13T20:06:37.252751407+00:00 stderr F 2025-08-13T20:06:37Z [verbose] ADD starting CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"" 2025-08-13T20:06:37.613569952+00:00 stderr F 2025-08-13T20:06:37Z [verbose] Add: openshift-marketplace:redhat-operators-pmqwc:0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3025039c6358002","mac":"c6:bc:20:ad:82:0d"},{"name":"eth0","mac":"0a:58:0a:d9:00:4f","sandbox":"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.79/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:37.615822376+00:00 stderr F I0813 20:06:37.615700 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-pmqwc", UID:"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed", APIVersion:"v1", ResourceVersion:"32132", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 
[10.217.0.79/23] from ovn-kubernetes 2025-08-13T20:06:37.662039492+00:00 stderr P 2025-08-13T20:06:37Z [verbose] 2025-08-13T20:06:37.662084033+00:00 stderr F ADD finished CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c6:bc:20:ad:82:0d\",\"name\":\"3025039c6358002\"},{\"mac\":\"0a:58:0a:d9:00:4f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0\"}],\"ips\":[{\"address\":\"10.217.0.79/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:38.808915803+00:00 stderr F 2025-08-13T20:06:38Z [verbose] ADD starting CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"" 2025-08-13T20:06:39.261856519+00:00 stderr F 2025-08-13T20:06:39Z [verbose] Add: openshift-marketplace:community-operators-p7svp:8518239d-8dab-48ac-a3c1-e775566b9bff:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4a52c9653485366","mac":"5e:de:da:57:2f:c1"},{"name":"eth0","mac":"0a:58:0a:d9:00:50","sandbox":"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.80/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:39.261856519+00:00 stderr F I0813 20:06:39.258584 23104 
event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-p7svp", UID:"8518239d-8dab-48ac-a3c1-e775566b9bff", APIVersion:"v1", ResourceVersion:"32155", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.80/23] from ovn-kubernetes 2025-08-13T20:06:39.384274459+00:00 stderr F 2025-08-13T20:06:39Z [verbose] ADD finished CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5e:de:da:57:2f:c1\",\"name\":\"4a52c9653485366\"},{\"mac\":\"0a:58:0a:d9:00:50\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0\"}],\"ips\":[{\"address\":\"10.217.0.80/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:05.047576957+00:00 stderr F 2025-08-13T20:07:05Z [verbose] ADD starting CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"" 2025-08-13T20:07:05.529643079+00:00 stderr F 2025-08-13T20:07:05Z [verbose] Add: openshift-kube-apiserver:installer-11-crc:47a054e4-19c2-4c12-a054-fc5edc98978a:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"82592d624297fdd","mac":"72:ad:68:88:87:91"},{"name":"eth0","mac":"0a:58:0a:d9:00:51","sandbox":"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.81/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:05.531470061+00:00 stderr F I0813 20:07:05.530332 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-11-crc", UID:"47a054e4-19c2-4c12-a054-fc5edc98978a", APIVersion:"v1", ResourceVersion:"32354", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.81/23] from ovn-kubernetes 2025-08-13T20:07:05.592743088+00:00 stderr F 2025-08-13T20:07:05Z [verbose] ADD finished CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:ad:68:88:87:91\",\"name\":\"82592d624297fdd\"},{\"mac\":\"0a:58:0a:d9:00:51\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7\"}],\"ips\":[{\"address\":\"10.217.0.81/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:08.199712932+00:00 stderr F 2025-08-13T20:07:08Z [verbose] DEL starting CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"" 2025-08-13T20:07:08.211746267+00:00 stderr F 2025-08-13T20:07:08Z [verbose] Del: openshift-marketplace:redhat-marketplace-4txfd:af6c965e-9dc8-417a-aa1c-303a50ec9adc:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:08.631969465+00:00 stderr F 2025-08-13T20:07:08Z [verbose] DEL finished CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"", result: "", err: 2025-08-13T20:07:15.563655952+00:00 stderr F 2025-08-13T20:07:15Z [verbose] DEL starting CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"" 2025-08-13T20:07:15.567894734+00:00 stderr F 2025-08-13T20:07:15Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-jjfds:b23d6435-6431-4905-b41b-a517327385e5:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:16.064026918+00:00 stderr F 2025-08-13T20:07:16Z [verbose] DEL finished CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"", result: "", err: 2025-08-13T20:07:19.217331386+00:00 stderr F 2025-08-13T20:07:19Z [verbose] DEL starting CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"" 2025-08-13T20:07:19.299711168+00:00 stderr F 2025-08-13T20:07:19Z [verbose] Del: openshift-marketplace:certified-operators-cfdk8:5391dc5d-0f00-4464-b617-b164e2f9b77a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:20.041520296+00:00 stderr F 2025-08-13T20:07:20Z [verbose] DEL finished CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"", result: "", err: 2025-08-13T20:07:20.591508015+00:00 stderr P 2025-08-13T20:07:20Z [verbose] 2025-08-13T20:07:20.591577117+00:00 stderr P ADD starting CNI request ContainerID:"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8" Netns:"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"" 2025-08-13T20:07:20.591602628+00:00 stderr F 2025-08-13T20:07:21.160681172+00:00 stderr F 2025-08-13T20:07:21Z [verbose] Add: openshift-apiserver:apiserver-7fc54b8dd7-d2bhp:41e8708a-e40d-4d28-846b-c52eda4d1755:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2059a6e71652337","mac":"7a:aa:4b:73:44:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:52","sandbox":"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.82/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:21.160681172+00:00 stderr F I0813 20:07:21.159554 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", UID:"41e8708a-e40d-4d28-846b-c52eda4d1755", APIVersion:"v1", ResourceVersion:"32500", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.82/23] from ovn-kubernetes 2025-08-13T20:07:21.342528146+00:00 stderr F 2025-08-13T20:07:21Z [verbose] ADD finished CNI request ContainerID:"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8" Netns:"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:aa:4b:73:44:25\",\"name\":\"2059a6e71652337\"},{\"mac\":\"0a:58:0a:d9:00:52\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0\"}],\"ips\":[{\"address\":\"10.217.0.82/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:22.608537384+00:00 stderr F 2025-08-13T20:07:22Z [verbose] ADD starting CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"" 2025-08-13T20:07:23.304977332+00:00 stderr F 2025-08-13T20:07:23Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-11-crc:1784282a-268d-4e44-a766-43281414e2dc:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a480fccd2debaaf","mac":"92:1b:9d:51:b7:76"},{"name":"eth0","mac":"0a:58:0a:d9:00:53","sandbox":"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.83/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:23.304977332+00:00 stderr F I0813 20:07:23.303547 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-11-crc", UID:"1784282a-268d-4e44-a766-43281414e2dc", APIVersion:"v1", ResourceVersion:"32536", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.83/23] from ovn-kubernetes 
2025-08-13T20:07:23.501200018+00:00 stderr F 2025-08-13T20:07:23Z [verbose] ADD finished CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:1b:9d:51:b7:76\",\"name\":\"a480fccd2debaaf\"},{\"mac\":\"0a:58:0a:d9:00:53\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3\"}],\"ips\":[{\"address\":\"10.217.0.83/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:23.524278799+00:00 stderr F 2025-08-13T20:07:23Z [verbose] ADD starting CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"" 2025-08-13T20:07:24.002870461+00:00 stderr F 2025-08-13T20:07:24Z [verbose] Add: openshift-kube-scheduler:installer-8-crc:aca1f9ff-a685-4a78-b461-3931b757f754:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d0ba8aa29fc697e","mac":"32:07:c5:1f:d6:82"},{"name":"eth0","mac":"0a:58:0a:d9:00:54","sandbox":"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.84/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:24.003631173+00:00 stderr F I0813 20:07:24.003197 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-kube-scheduler", Name:"installer-8-crc", UID:"aca1f9ff-a685-4a78-b461-3931b757f754", APIVersion:"v1", ResourceVersion:"32550", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.84/23] from ovn-kubernetes 2025-08-13T20:07:24.105089962+00:00 stderr F 2025-08-13T20:07:24Z [verbose] ADD finished CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:07:c5:1f:d6:82\",\"name\":\"d0ba8aa29fc697e\"},{\"mac\":\"0a:58:0a:d9:00:54\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac\"}],\"ips\":[{\"address\":\"10.217.0.84/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:25.184381715+00:00 stderr F 2025-08-13T20:07:25Z [verbose] ADD starting CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"" 2025-08-13T20:07:25.783963976+00:00 stderr F 2025-08-13T20:07:25Z [verbose] Add: openshift-kube-controller-manager:installer-11-crc:a45bfab9-f78b-4d72-b5b7-903e60401124:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"8f0bbf4ce8e2b74","mac":"62:a3:62:e6:c6:48"},{"name":"eth0","mac":"0a:58:0a:d9:00:55","sandbox":"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.85/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:25.783963976+00:00 stderr F I0813 20:07:25.783385 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-11-crc", UID:"a45bfab9-f78b-4d72-b5b7-903e60401124", APIVersion:"v1", ResourceVersion:"32572", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.85/23] from ovn-kubernetes 2025-08-13T20:07:26.201866818+00:00 stderr F 2025-08-13T20:07:26Z [verbose] ADD finished CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:a3:62:e6:c6:48\",\"name\":\"8f0bbf4ce8e2b74\"},{\"mac\":\"0a:58:0a:d9:00:55\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187\"}],\"ips\":[{\"address\":\"10.217.0.85/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:27.651630154+00:00 stderr F 2025-08-13T20:07:27Z [verbose] DEL starting CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"" 2025-08-13T20:07:27.659907761+00:00 stderr P 2025-08-13T20:07:27Z [verbose] Del: openshift-kube-controller-manager:revision-pruner-11-crc:1784282a-268d-4e44-a766-43281414e2dc:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:27.659977833+00:00 stderr F 2025-08-13T20:07:28.002170003+00:00 stderr F 2025-08-13T20:07:28Z [verbose] DEL finished CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"", result: "", err: 2025-08-13T20:07:34.403942158+00:00 stderr F 2025-08-13T20:07:34Z [verbose] ADD starting CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"" 2025-08-13T20:07:34.844143269+00:00 stderr P 2025-08-13T20:07:34Z [verbose] 2025-08-13T20:07:34.844201801+00:00 stderr P Add: 
openshift-kube-apiserver:installer-12-crc:3557248c-8f70-4165-aa66-8df983e7e01a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"afb6a839e21ef78","mac":"06:11:98:91:d8:e0"},{"name":"eth0","mac":"0a:58:0a:d9:00:56","sandbox":"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.86/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:34.844227922+00:00 stderr F 2025-08-13T20:07:34.847824415+00:00 stderr F I0813 20:07:34.846601 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-12-crc", UID:"3557248c-8f70-4165-aa66-8df983e7e01a", APIVersion:"v1", ResourceVersion:"32679", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.86/23] from ovn-kubernetes 2025-08-13T20:07:34.913360634+00:00 stderr P 2025-08-13T20:07:34Z [verbose] 2025-08-13T20:07:34.913470257+00:00 stderr P ADD finished CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:11:98:91:d8:e0\",\"name\":\"afb6a839e21ef78\"},{\"mac\":\"0a:58:0a:d9:00:56\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c\"}],\"ips\":[{\"address\":\"10.217.0.86/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:34.913683113+00:00 stderr F 2025-08-13T20:07:39.789523187+00:00 stderr P 2025-08-13T20:07:39Z [verbose] 2025-08-13T20:07:39.789634540+00:00 stderr P DEL starting CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" 
Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"" 2025-08-13T20:07:39.789669071+00:00 stderr F 2025-08-13T20:07:39.811267440+00:00 stderr P 2025-08-13T20:07:39Z [verbose] 2025-08-13T20:07:39.811328592+00:00 stderr P Del: openshift-kube-apiserver:installer-11-crc:47a054e4-19c2-4c12-a054-fc5edc98978a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:39.811353123+00:00 stderr F 2025-08-13T20:07:40.270850337+00:00 stderr P 2025-08-13T20:07:40Z [verbose] 2025-08-13T20:07:40.270932819+00:00 stderr P DEL finished CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"", result: "", err: 2025-08-13T20:07:40.270960230+00:00 stderr F 2025-08-13T20:07:41.160427082+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.160537705+00:00 stderr P DEL starting CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"" 2025-08-13T20:07:41.160629108+00:00 stderr F 2025-08-13T20:07:41.161322108+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.161356919+00:00 stderr P Del: openshift-marketplace:community-operators-p7svp:8518239d-8dab-48ac-a3c1-e775566b9bff:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:41.161380659+00:00 stderr F 2025-08-13T20:07:41.453042622+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.453105944+00:00 stderr P DEL finished CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"", result: "", err: 2025-08-13T20:07:41.453130364+00:00 stderr F 2025-08-13T20:07:59.167879871+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL starting CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"" 2025-08-13T20:07:59.169982662+00:00 stderr F 
2025-08-13T20:07:59Z [verbose] Del: openshift-kube-scheduler:installer-8-crc:aca1f9ff-a685-4a78-b461-3931b757f754:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:59.413508534+00:00 stderr P 2025-08-13T20:07:59Z [verbose] 2025-08-13T20:07:59.413580176+00:00 stderr P DEL starting CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"" 2025-08-13T20:07:59.413611157+00:00 stderr F 2025-08-13T20:07:59.416382616+00:00 stderr P 2025-08-13T20:07:59Z [verbose] 2025-08-13T20:07:59.416426968+00:00 stderr P Del: openshift-marketplace:redhat-operators-pmqwc:0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:59.416457268+00:00 stderr F 2025-08-13T20:07:59.503252977+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL finished CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" 
Path:"", result: "", err: 2025-08-13T20:07:59.646335739+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL finished CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"", result: "", err: 2025-08-13T20:08:02.315139306+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.315255940+00:00 stderr P DEL starting CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"" 2025-08-13T20:08:02.315282180+00:00 stderr F 2025-08-13T20:08:02.347661799+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.347721001+00:00 stderr P Del: openshift-kube-controller-manager:installer-11-crc:a45bfab9-f78b-4d72-b5b7-903e60401124:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:08:02.347754912+00:00 stderr F 2025-08-13T20:08:02.634587615+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.634645187+00:00 stderr P DEL finished CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"", result: "", err: 2025-08-13T20:08:02.634669568+00:00 stderr F 2025-08-13T20:08:25.500144370+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.500255923+00:00 stderr P DEL starting CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"" 2025-08-13T20:08:25.500286974+00:00 stderr F 2025-08-13T20:08:25.501293113+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.501330984+00:00 stderr P Del: openshift-kube-apiserver:installer-12-crc:3557248c-8f70-4165-aa66-8df983e7e01a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:08:25.501355614+00:00 stderr F 2025-08-13T20:08:25.826326272+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.826536238+00:00 stderr P DEL finished CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"", result: "", err: 
2025-08-13T20:08:25.826562909+00:00 stderr F 2025-08-13T20:11:00.076861257+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL starting CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:11:00.078535555+00:00 stderr F 2025-08-13T20:11:00Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-6884dcf749-n4qpx:becc7e17-2bc7-417d-832f-55127299d70f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:11:00.078535555+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL starting CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:11:00.082829858+00:00 stderr F 2025-08-13T20:11:00Z [verbose] Del: openshift-controller-manager:controller-manager-598fc85fd4-8wlsm:8b8d1c48-5762-450f-bd4d-9134869f432b:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:11:00.299981454+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL finished CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: 2025-08-13T20:11:00.443386906+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL finished CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: 2025-08-13T20:11:01.950907168+00:00 stderr F 2025-08-13T20:11:01Z [verbose] ADD starting CNI request ContainerID:"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c" Netns:"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"" 2025-08-13T20:11:01.963896741+00:00 stderr F 2025-08-13T20:11:01Z [verbose] ADD starting CNI request 
ContainerID:"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88" Netns:"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"" 2025-08-13T20:11:02.160903619+00:00 stderr F 2025-08-13T20:11:02Z [verbose] Add: openshift-controller-manager:controller-manager-778975cc4f-x5vcf:1a3e81c3-c292-4130-9436-f94062c91efd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"67a3c779a8c87e7","mac":"aa:6d:9a:25:77:61"},{"name":"eth0","mac":"0a:58:0a:d9:00:57","sandbox":"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.87/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:11:02.160903619+00:00 stderr F I0813 20:11:02.158865 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-778975cc4f-x5vcf", UID:"1a3e81c3-c292-4130-9436-f94062c91efd", APIVersion:"v1", ResourceVersion:"33335", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.87/23] from ovn-kubernetes 2025-08-13T20:11:02.189402916+00:00 stderr F 2025-08-13T20:11:02Z [verbose] ADD finished CNI request ContainerID:"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c" Netns:"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:6d:9a:25:77:61\",\"name\":\"67a3c779a8c87e7\"},{\"mac\":\"0a:58:0a:d9:00:57\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8\"}],\"ips\":[{\"address\":\"10.217.0.87/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:11:02.247865802+00:00 stderr F 2025-08-13T20:11:02Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-776b8b7477-sfpvs:21d29937-debd-4407-b2b1-d1053cb0f342:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5bff19800c2cb5","mac":"ae:fe:5d:38:ef:2e"},{"name":"eth0","mac":"0a:58:0a:d9:00:58","sandbox":"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.88/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:11:02.248631374+00:00 stderr F I0813 20:11:02.248566 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-776b8b7477-sfpvs", UID:"21d29937-debd-4407-b2b1-d1053cb0f342", APIVersion:"v1", ResourceVersion:"33336", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.88/23] from ovn-kubernetes 2025-08-13T20:11:02.285459280+00:00 stderr P 2025-08-13T20:11:02Z [verbose] ADD finished CNI request ContainerID:"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88" Netns:"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:fe:5d:38:ef:2e\",\"name\":\"c5bff19800c2cb5\"},{\"mac\":\"0a:58:0a:d9:00:58\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727\"}],\"ips\":[{\"address\":\"10.217.0.88/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:11:02.286108679+00:00 stderr F 2025-08-13T20:15:00.781729927+00:00 stderr F 2025-08-13T20:15:00Z [verbose] ADD starting CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"" 2025-08-13T20:15:00.963120417+00:00 stderr F 2025-08-13T20:15:00Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29251935-d7x6j:51936587-a4af-470d-ad92-8ab9062cbc72:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"21feea149913711","mac":"be:26:57:2f:28:77"},{"name":"eth0","mac":"0a:58:0a:d9:00:59","sandbox":"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.89/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:15:00.963591081+00:00 stderr F I0813 20:15:00.963481 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251935-d7x6j", UID:"51936587-a4af-470d-ad92-8ab9062cbc72", APIVersion:"v1", ResourceVersion:"33816", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.89/23] from ovn-kubernetes 2025-08-13T20:15:01.019053031+00:00 stderr F 2025-08-13T20:15:01Z [verbose] ADD finished CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" 
Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:26:57:2f:28:77\",\"name\":\"21feea149913711\"},{\"mac\":\"0a:58:0a:d9:00:59\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647\"}],\"ips\":[{\"address\":\"10.217.0.89/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:15:04.402616130+00:00 stderr F 2025-08-13T20:15:04Z [verbose] DEL starting CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"" 2025-08-13T20:15:04.403546217+00:00 stderr F 2025-08-13T20:15:04Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29251935-d7x6j:51936587-a4af-470d-ad92-8ab9062cbc72:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:15:04.593539334+00:00 stderr F 2025-08-13T20:15:04Z [verbose] DEL finished CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"", result: "", err: 2025-08-13T20:16:58.608948335+00:00 stderr F 2025-08-13T20:16:58Z [verbose] ADD starting CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"" 2025-08-13T20:16:58.796684776+00:00 stderr F 2025-08-13T20:16:58Z [verbose] Add: openshift-marketplace:certified-operators-8bbjz:8e241cc6-c71d-4fa0-9a1a-18098bcf6594:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"18af4daca70b211","mac":"9a:cd:12:c0:68:18"},{"name":"eth0","mac":"0a:58:0a:d9:00:5a","sandbox":"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.90/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:16:58.797372606+00:00 stderr F I0813 20:16:58.797261 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-8bbjz", UID:"8e241cc6-c71d-4fa0-9a1a-18098bcf6594", APIVersion:"v1", ResourceVersion:"34100", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.90/23] from ovn-kubernetes 2025-08-13T20:16:58.860039395+00:00 stderr F 2025-08-13T20:16:58Z [verbose] ADD finished CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:cd:12:c0:68:18\",\"name\":\"18af4daca70b211\"},{\"mac\":\"0a:58:0a:d9:00:5a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474\"}],\"ips\":[{\"address\":\"10.217.0.90/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:00.557066007+00:00 stderr P 2025-08-13T20:17:00Z [verbose] 2025-08-13T20:17:00.557183100+00:00 stderr P ADD starting CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"" 2025-08-13T20:17:00.557291693+00:00 stderr F 2025-08-13T20:17:00.797736670+00:00 stderr F 2025-08-13T20:17:00Z [verbose] Add: openshift-marketplace:redhat-marketplace-nsk78:a084eaff-10e9-439e-96f3-f3450fb14db7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"95f40ae6abffb8f","mac":"9e:b4:d7:fb:98:d3"},{"name":"eth0","mac":"0a:58:0a:d9:00:5b","sandbox":"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.91/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:00.797736670+00:00 stderr F I0813 20:17:00.796302 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-nsk78", UID:"a084eaff-10e9-439e-96f3-f3450fb14db7", APIVersion:"v1", ResourceVersion:"34117", FieldPath:""}): type: 'Normal' reason: 
'AddedInterface' Add eth0 [10.217.0.91/23] from ovn-kubernetes 2025-08-13T20:17:01.051256190+00:00 stderr P 2025-08-13T20:17:01Z [verbose] 2025-08-13T20:17:01.051408334+00:00 stderr P ADD finished CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:b4:d7:fb:98:d3\",\"name\":\"95f40ae6abffb8f\"},{\"mac\":\"0a:58:0a:d9:00:5b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063\"}],\"ips\":[{\"address\":\"10.217.0.91/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:01.051495627+00:00 stderr F 2025-08-13T20:17:20.945952804+00:00 stderr F 2025-08-13T20:17:20Z [verbose] ADD starting CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"" 2025-08-13T20:17:21.291695047+00:00 stderr P 2025-08-13T20:17:21Z [verbose] 2025-08-13T20:17:21.291757969+00:00 stderr P Add: openshift-marketplace:redhat-operators-swl5s:407a8505-ab64-42f9-aa53-a63f8e97c189:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"011ddcc3b1f8c14","mac":"9a:a7:0d:46:5e:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:5c","sandbox":"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.92/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:21.291910763+00:00 stderr F 2025-08-13T20:17:21.293331844+00:00 stderr F I0813 20:17:21.292950 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-swl5s", UID:"407a8505-ab64-42f9-aa53-a63f8e97c189", APIVersion:"v1", ResourceVersion:"34179", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.92/23] from ovn-kubernetes 2025-08-13T20:17:21.607094354+00:00 stderr P 2025-08-13T20:17:21Z [verbose] 2025-08-13T20:17:21.607148426+00:00 stderr P ADD finished CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:a7:0d:46:5e:25\",\"name\":\"011ddcc3b1f8c14\"},{\"mac\":\"0a:58:0a:d9:00:5c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38\"}],\"ips\":[{\"address\":\"10.217.0.92/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:21.607191417+00:00 stderr F 2025-08-13T20:17:30.765245514+00:00 stderr P 2025-08-13T20:17:30Z [verbose] 2025-08-13T20:17:30.765323897+00:00 stderr P ADD starting CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"" 2025-08-13T20:17:30.765348217+00:00 stderr F 2025-08-13T20:17:31.059174718+00:00 stderr F 2025-08-13T20:17:31Z [verbose] Add: openshift-marketplace:community-operators-tfv59:718f06fe-dcad-4053-8de2-e2c38fb7503d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b983de43e586634","mac":"8e:ce:68:75:7d:52"},{"name":"eth0","mac":"0a:58:0a:d9:00:5d","sandbox":"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.93/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:31.059174718+00:00 stderr F I0813 20:17:31.056753 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-tfv59", UID:"718f06fe-dcad-4053-8de2-e2c38fb7503d", APIVersion:"v1", ResourceVersion:"34230", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.93/23] from ovn-kubernetes 2025-08-13T20:17:31.140038977+00:00 stderr F 2025-08-13T20:17:31Z [verbose] ADD finished CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:ce:68:75:7d:52\",\"name\":\"b983de43e586634\"},{\"mac\":\"0a:58:0a:d9:00:5d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103\"}],\"ips\":[{\"address\":\"10.217.0.93/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:35.355986043+00:00 stderr F 2025-08-13T20:17:35Z [verbose] DEL starting CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"" 2025-08-13T20:17:35.356862828+00:00 stderr F 2025-08-13T20:17:35Z [verbose] Del: openshift-marketplace:redhat-marketplace-nsk78:a084eaff-10e9-439e-96f3-f3450fb14db7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:17:35.677012541+00:00 stderr F 2025-08-13T20:17:35Z [verbose] DEL finished CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"", result: "", err: 2025-08-13T20:17:42.940476414+00:00 stderr P 2025-08-13T20:17:42Z [verbose] 2025-08-13T20:17:42.940551276+00:00 stderr P DEL starting CNI request 
ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"" 2025-08-13T20:17:42.940576557+00:00 stderr F 2025-08-13T20:17:42.941672528+00:00 stderr P 2025-08-13T20:17:42Z [verbose] 2025-08-13T20:17:42.941711539+00:00 stderr P Del: openshift-marketplace:certified-operators-8bbjz:8e241cc6-c71d-4fa0-9a1a-18098bcf6594:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:17:42.941735740+00:00 stderr F 2025-08-13T20:17:43.130319066+00:00 stderr P 2025-08-13T20:17:43Z [verbose] 2025-08-13T20:17:43.130398278+00:00 stderr P DEL finished CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"", result: "", err: 2025-08-13T20:17:43.130429249+00:00 stderr F 2025-08-13T20:18:52.308718151+00:00 stderr P 2025-08-13T20:18:52Z [verbose] 2025-08-13T20:18:52.308860535+00:00 stderr P DEL starting CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"" 2025-08-13T20:18:52.308910326+00:00 stderr F 2025-08-13T20:18:52.322174545+00:00 stderr P 2025-08-13T20:18:52Z [verbose] 2025-08-13T20:18:52.322215476+00:00 stderr P Del: openshift-marketplace:community-operators-tfv59:718f06fe-dcad-4053-8de2-e2c38fb7503d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:18:52.322271608+00:00 stderr F 2025-08-13T20:18:54.275913969+00:00 stderr P 2025-08-13T20:18:54Z [verbose] 2025-08-13T20:18:54.276014152+00:00 stderr P DEL finished CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"", result: "", err: 2025-08-13T20:18:54.276041162+00:00 stderr F 2025-08-13T20:19:34.296937659+00:00 stderr F 2025-08-13T20:19:34Z [verbose] DEL starting CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"" 2025-08-13T20:19:34.300885202+00:00 stderr F 
2025-08-13T20:19:34Z [verbose] Del: openshift-marketplace:redhat-operators-swl5s:407a8505-ab64-42f9-aa53-a63f8e97c189:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:19:34.534297221+00:00 stderr F 2025-08-13T20:19:34Z [verbose] DEL finished CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"", result: "", err: 2025-08-13T20:27:06.122840134+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD starting CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"" 2025-08-13T20:27:06.262561679+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD starting CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"" 2025-08-13T20:27:06.458150272+00:00 stderr F 2025-08-13T20:27:06Z [verbose] 
Add: openshift-marketplace:redhat-marketplace-jbzn9:b152b92f-8fab-4b74-8e68-00278380759d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"65efa81c3e0e120","mac":"e6:4c:64:f5:01:ca"},{"name":"eth0","mac":"0a:58:0a:d9:00:5e","sandbox":"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.94/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:27:06.479224494+00:00 stderr F I0813 20:27:06.473549 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-jbzn9", UID:"b152b92f-8fab-4b74-8e68-00278380759d", APIVersion:"v1", ResourceVersion:"35401", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.94/23] from ovn-kubernetes 2025-08-13T20:27:06.522091700+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD finished CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"e6:4c:64:f5:01:ca\",\"name\":\"65efa81c3e0e120\"},{\"mac\":\"0a:58:0a:d9:00:5e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b\"}],\"ips\":[{\"address\":\"10.217.0.94/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:27:06.569207297+00:00 stderr F 2025-08-13T20:27:06Z [verbose] Add: openshift-marketplace:certified-operators-xldzg:926ac7a4-e156-4e71-9681-7a48897402eb:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"d26f242e575b9e4","mac":"12:4b:e0:9e:55:86"},{"name":"eth0","mac":"0a:58:0a:d9:00:5f","sandbox":"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.95/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:27:06.569458594+00:00 stderr F I0813 20:27:06.569413 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-xldzg", UID:"926ac7a4-e156-4e71-9681-7a48897402eb", APIVersion:"v1", ResourceVersion:"35408", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.95/23] from ovn-kubernetes 2025-08-13T20:27:06.626562697+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD finished CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"12:4b:e0:9e:55:86\",\"name\":\"d26f242e575b9e4\"},{\"mac\":\"0a:58:0a:d9:00:5f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79\"}],\"ips\":[{\"address\":\"10.217.0.95/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:27:29.231263536+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL starting CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"" 2025-08-13T20:27:29.236216528+00:00 stderr F 2025-08-13T20:27:29Z [verbose] Del: openshift-marketplace:certified-operators-xldzg:926ac7a4-e156-4e71-9681-7a48897402eb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:27:29.284492108+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL starting CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"" 2025-08-13T20:27:29.284642242+00:00 stderr F 2025-08-13T20:27:29Z [verbose] Del: openshift-marketplace:redhat-marketplace-jbzn9:b152b92f-8fab-4b74-8e68-00278380759d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:27:29.434254651+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL finished CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"", result: "", err: 2025-08-13T20:27:29.494300477+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL finished CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"", result: "", err: 2025-08-13T20:28:43.767683367+00:00 stderr F 2025-08-13T20:28:43Z [verbose] ADD starting CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"" 2025-08-13T20:28:44.030064569+00:00 stderr F 2025-08-13T20:28:44Z [verbose] Add: openshift-marketplace:community-operators-hvwvm:bfb8fd54-a923-43fe-a0f5-bc4066352d71:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"786926dc94686ef","mac":"f2:1e:d5:6b:2f:19"},{"name":"eth0","mac":"0a:58:0a:d9:00:60","sandbox":"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.96/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:28:44.030692537+00:00 stderr F I0813 20:28:44.030422 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-hvwvm", 
UID:"bfb8fd54-a923-43fe-a0f5-bc4066352d71", APIVersion:"v1", ResourceVersion:"35629", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.96/23] from ovn-kubernetes 2025-08-13T20:28:44.068653379+00:00 stderr F 2025-08-13T20:28:44Z [verbose] ADD finished CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:1e:d5:6b:2f:19\",\"name\":\"786926dc94686ef\"},{\"mac\":\"0a:58:0a:d9:00:60\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610\"}],\"ips\":[{\"address\":\"10.217.0.96/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:29:06.009323725+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.009411328+00:00 stderr P DEL starting CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"" 2025-08-13T20:29:06.009436298+00:00 stderr F 2025-08-13T20:29:06.010357595+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.010394066+00:00 stderr P Del: openshift-marketplace:community-operators-hvwvm:bfb8fd54-a923-43fe-a0f5-bc4066352d71:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:29:06.010418947+00:00 stderr F 2025-08-13T20:29:06.211321622+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.211421805+00:00 stderr P DEL finished CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"", result: "", err: 2025-08-13T20:29:06.211461296+00:00 stderr F 2025-08-13T20:29:30.524869297+00:00 stderr F 2025-08-13T20:29:30Z [verbose] ADD starting CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"" 2025-08-13T20:29:30.754850807+00:00 stderr P 2025-08-13T20:29:30Z [verbose] 2025-08-13T20:29:30.754914439+00:00 stderr P Add: openshift-marketplace:redhat-operators-zdwjn:6d579e1a-3b27-4c1f-9175-42ac58490d42:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3fdb2c96a67c002","mac":"06:8b:ac:a2:0e:cc"},{"name":"eth0","mac":"0a:58:0a:d9:00:61","sandbox":"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.97/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:29:30.754940140+00:00 stderr F 
2025-08-13T20:29:30.762887008+00:00 stderr F I0813 20:29:30.762765 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-zdwjn", UID:"6d579e1a-3b27-4c1f-9175-42ac58490d42", APIVersion:"v1", ResourceVersion:"35763", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.97/23] from ovn-kubernetes 2025-08-13T20:29:30.797250476+00:00 stderr F 2025-08-13T20:29:30Z [verbose] ADD finished CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:8b:ac:a2:0e:cc\",\"name\":\"3fdb2c96a67c002\"},{\"mac\":\"0a:58:0a:d9:00:61\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d\"}],\"ips\":[{\"address\":\"10.217.0.97/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:30:02.419847222+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.420056688+00:00 stderr P ADD starting CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"" 2025-08-13T20:30:02.420093569+00:00 stderr F 2025-08-13T20:30:02.778196933+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.778271045+00:00 stderr P Add: 
openshift-operator-lifecycle-manager:collect-profiles-29251950-x8jjd:ad171c4b-8408-4370-8e86-502999788ddb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"61f39a784f23d0e","mac":"da:99:8d:d0:4e:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:62","sandbox":"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.98/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:30:02.778296465+00:00 stderr F 2025-08-13T20:30:02.779130789+00:00 stderr F I0813 20:30:02.779084 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251950-x8jjd", UID:"ad171c4b-8408-4370-8e86-502999788ddb", APIVersion:"v1", ResourceVersion:"35844", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.98/23] from ovn-kubernetes 2025-08-13T20:30:02.811261253+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.811543541+00:00 stderr P ADD finished CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"da:99:8d:d0:4e:9c\",\"name\":\"61f39a784f23d0e\"},{\"mac\":\"0a:58:0a:d9:00:62\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c\"}],\"ips\":[{\"address\":\"10.217.0.98/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:30:02.811577742+00:00 stderr F 2025-08-13T20:30:06.544162270+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 2025-08-13T20:30:06.544238552+00:00 stderr P DEL starting CNI request 
ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"" 2025-08-13T20:30:06.544263533+00:00 stderr F 2025-08-13T20:30:06.545051676+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 2025-08-13T20:30:06.545090917+00:00 stderr P Del: openshift-operator-lifecycle-manager:collect-profiles-29251950-x8jjd:ad171c4b-8408-4370-8e86-502999788ddb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:30:06.545115287+00:00 stderr F 2025-08-13T20:30:06.810139966+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 2025-08-13T20:30:06.810241489+00:00 stderr P DEL finished CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"", result: "", err: 2025-08-13T20:30:06.810291930+00:00 stderr F 2025-08-13T20:30:32.652453365+00:00 stderr P 2025-08-13T20:30:32Z [verbose] 2025-08-13T20:30:32.652542547+00:00 stderr P DEL starting CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"" 2025-08-13T20:30:32.652567668+00:00 stderr F 2025-08-13T20:30:32.654540215+00:00 stderr P 2025-08-13T20:30:32Z [verbose] 2025-08-13T20:30:32.654575346+00:00 stderr P Del: openshift-marketplace:redhat-operators-zdwjn:6d579e1a-3b27-4c1f-9175-42ac58490d42:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:30:32.654599396+00:00 stderr F 2025-08-13T20:30:32.870968506+00:00 stderr F 2025-08-13T20:30:32Z [verbose] DEL finished CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"", result: "", err: 2025-08-13T20:37:48.650104847+00:00 stderr F 2025-08-13T20:37:48Z [verbose] ADD starting CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"" 2025-08-13T20:37:48.861010088+00:00 stderr F 2025-08-13T20:37:48Z [verbose] Add: 
openshift-marketplace:redhat-marketplace-nkzlk:afc02c17-9714-426d-aafa-ee58c673ab0c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"316cb50fa85ce61","mac":"26:aa:67:d6:f6:bb"},{"name":"eth0","mac":"0a:58:0a:d9:00:63","sandbox":"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.99/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:37:48.862828720+00:00 stderr F I0813 20:37:48.861954 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-nkzlk", UID:"afc02c17-9714-426d-aafa-ee58c673ab0c", APIVersion:"v1", ResourceVersion:"36823", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.99/23] from ovn-kubernetes 2025-08-13T20:37:48.903530504+00:00 stderr F 2025-08-13T20:37:48Z [verbose] ADD finished CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:aa:67:d6:f6:bb\",\"name\":\"316cb50fa85ce61\"},{\"mac\":\"0a:58:0a:d9:00:63\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44\"}],\"ips\":[{\"address\":\"10.217.0.99/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:38:08.940938186+00:00 stderr F 2025-08-13T20:38:08Z [verbose] DEL starting CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"" 2025-08-13T20:38:08.941644556+00:00 stderr F 2025-08-13T20:38:08Z [verbose] Del: openshift-marketplace:redhat-marketplace-nkzlk:afc02c17-9714-426d-aafa-ee58c673ab0c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:38:09.132340454+00:00 stderr F 2025-08-13T20:38:09Z [verbose] DEL finished CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"", result: "", err: 2025-08-13T20:38:36.497725067+00:00 stderr F 2025-08-13T20:38:36Z [verbose] ADD starting CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"" 2025-08-13T20:38:36.719000537+00:00 stderr F 2025-08-13T20:38:36Z [verbose] Add: openshift-marketplace:certified-operators-4kmbv:847e60dc-7a0a-4115-a7e1-356476e319e7:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"48a72e1ed96b8c0","mac":"ba:71:ca:85:15:a1"},{"name":"eth0","mac":"0a:58:0a:d9:00:64","sandbox":"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.100/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:38:36.720045557+00:00 stderr F I0813 20:38:36.719870 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-4kmbv", UID:"847e60dc-7a0a-4115-a7e1-356476e319e7", APIVersion:"v1", ResourceVersion:"36922", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.100/23] from ovn-kubernetes 2025-08-13T20:38:36.806263303+00:00 stderr P 2025-08-13T20:38:36Z [verbose] 2025-08-13T20:38:36.806353615+00:00 stderr P ADD finished CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ba:71:ca:85:15:a1\",\"name\":\"48a72e1ed96b8c0\"},{\"mac\":\"0a:58:0a:d9:00:64\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18\"}],\"ips\":[{\"address\":\"10.217.0.100/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:38:36.806379876+00:00 stderr F 2025-08-13T20:38:57.274709039+00:00 stderr F 2025-08-13T20:38:57Z [verbose] DEL starting CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"" 2025-08-13T20:38:57.275727299+00:00 stderr F 2025-08-13T20:38:57Z [verbose] Del: openshift-marketplace:certified-operators-4kmbv:847e60dc-7a0a-4115-a7e1-356476e319e7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:38:57.457966003+00:00 stderr F 2025-08-13T20:38:57Z [verbose] DEL finished CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"", result: "", err: 2025-08-13T20:41:21.899190984+00:00 stderr P 2025-08-13T20:41:21Z [verbose] 2025-08-13T20:41:21.899407450+00:00 stderr P ADD starting CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"" 2025-08-13T20:41:21.899451962+00:00 stderr F 2025-08-13T20:41:22.168813287+00:00 stderr F 2025-08-13T20:41:22Z [verbose] Add: 
openshift-marketplace:redhat-operators-k2tgr:58e4f786-ee2a-45c4-83a4-523611d1eccd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b07b3fcd02d69d1","mac":"6e:99:82:6e:a6:ca"},{"name":"eth0","mac":"0a:58:0a:d9:00:65","sandbox":"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.101/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:41:22.170555268+00:00 stderr F I0813 20:41:22.170374 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-k2tgr", UID:"58e4f786-ee2a-45c4-83a4-523611d1eccd", APIVersion:"v1", ResourceVersion:"37281", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.101/23] from ovn-kubernetes 2025-08-13T20:41:22.217286985+00:00 stderr F 2025-08-13T20:41:22Z [verbose] ADD finished CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6e:99:82:6e:a6:ca\",\"name\":\"b07b3fcd02d69d1\"},{\"mac\":\"0a:58:0a:d9:00:65\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167\"}],\"ips\":[{\"address\":\"10.217.0.101/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:42:13.827600468+00:00 stderr F 2025-08-13T20:42:13Z [verbose] DEL starting CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"" 2025-08-13T20:42:13.830067749+00:00 stderr F 2025-08-13T20:42:13Z [verbose] Del: openshift-marketplace:redhat-operators-k2tgr:58e4f786-ee2a-45c4-83a4-523611d1eccd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:42:14.179240226+00:00 stderr F 2025-08-13T20:42:14Z [verbose] DEL finished CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"", result: "", err: 2025-08-13T20:42:27.029073457+00:00 stderr P 2025-08-13T20:42:27Z [verbose] 2025-08-13T20:42:27.029266502+00:00 stderr P ADD starting CNI request ContainerID:"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a" Netns:"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 2025-08-13T20:42:27.029306213+00:00 stderr F 2025-08-13T20:42:27.856884263+00:00 stderr F 2025-08-13T20:42:27Z [verbose] Add: 
openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b4ce7c1e13297d1","mac":"be:61:47:79:5b:8f"},{"name":"eth0","mac":"0a:58:0a:d9:00:66","sandbox":"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.102/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:42:27.863480073+00:00 stderr F I0813 20:42:27.859848 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-sdddl", UID:"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760", APIVersion:"v1", ResourceVersion:"37479", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.102/23] from ovn-kubernetes 2025-08-13T20:42:27.896369491+00:00 stderr F 2025-08-13T20:42:27Z [verbose] ADD finished CNI request ContainerID:"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a" Netns:"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:61:47:79:5b:8f\",\"name\":\"b4ce7c1e13297d1\"},{\"mac\":\"0a:58:0a:d9:00:66\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310\"}],\"ips\":[{\"address\":\"10.217.0.102/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:42:44.710914328+00:00 stderr P 2025-08-13T20:42:44Z [verbose] 2025-08-13T20:42:44.712022420+00:00 stderr F caught terminated, stopping... 2025-08-13T20:42:44.713590275+00:00 stderr F 2025-08-13T20:42:44Z [verbose] Stopped monitoring, closing channel ... 
2025-08-13T20:42:44.718194688+00:00 stderr F 2025-08-13T20:42:44Z [verbose] ConfigWatcher done 2025-08-13T20:42:44.718194688+00:00 stderr F 2025-08-13T20:42:44Z [verbose] Delete old config @ /host/etc/cni/net.d/00-multus.conf 2025-08-13T20:42:44.718472186+00:00 stderr F 2025-08-13T20:42:44Z [verbose] multus daemon is exited
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log
2025-08-13T19:55:29.808124912+00:00 stdout F 2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 2025-08-13T19:55:29.859028584+00:00 stdout F 2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/ 2025-08-13T19:55:29.894323461+00:00 stderr F 2025-08-13T19:55:29Z [verbose] multus-daemon started 2025-08-13T19:55:29.894323461+00:00 stderr F 2025-08-13T19:55:29Z [verbose] Readiness Indicator file check 2025-08-13T19:56:14.896151827+00:00 stderr F 2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log
2026-01-20T10:48:11.911746848+00:00 stdout F 2026-01-20T10:48:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6564c080-e00b-4499-9df5-cc0880ad6630 2026-01-20T10:48:11.955463610+00:00 stdout F 2026-01-20T10:48:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6564c080-e00b-4499-9df5-cc0880ad6630 to /host/opt/cni/bin/ 2026-01-20T10:48:11.997460346+00:00 stderr F 2026-01-20T10:48:11Z [verbose] multus-daemon started 2026-01-20T10:48:11.997460346+00:00 stderr F 2026-01-20T10:48:11Z [verbose] Readiness Indicator file check 2026-01-20T10:48:35.001259660+00:00 stderr F 2026-01-20T10:48:34Z [verbose] Readiness Indicator file check done! 2026-01-20T10:48:35.004333026+00:00 stderr F I0120 10:48:35.002898    7526 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2026-01-20T10:48:35.004605724+00:00 stderr F 2026-01-20T10:48:35Z [verbose] Waiting for certificate 2026-01-20T10:48:36.005054260+00:00 stderr F I0120 10:48:36.004922    7526 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2026-01-20T10:48:36.005421630+00:00 stderr F 2026-01-20T10:48:36Z [verbose] Certificate found! 
2026-01-20T10:48:36.006220192+00:00 stderr F 2026-01-20T10:48:36Z [verbose] server configured with chroot: /hostroot 2026-01-20T10:48:36.006220192+00:00 stderr F 2026-01-20T10:48:36Z [verbose] Filtering pod watch for node "crc" 2026-01-20T10:48:36.110131591+00:00 stderr F 2026-01-20T10:48:36Z [verbose] API readiness check 2026-01-20T10:48:36.110131591+00:00 stderr F 2026-01-20T10:48:36Z [verbose] API readiness check done! 2026-01-20T10:48:36.110131591+00:00 stderr F 2026-01-20T10:48:36Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"} 2026-01-20T10:48:36.110131591+00:00 stderr F 2026-01-20T10:48:36Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf 2026-01-20T10:49:32.155946109+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"e9bfdf7ad0e854ea0676bdf904cd105d27afd839f122e2eff908516df128b256" Netns:"/var/run/netns/9b63ed5a-2a2a-47d0-9508-3515a4741d3e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=e9bfdf7ad0e854ea0676bdf904cd105d27afd839f122e2eff908516df128b256;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"" 2026-01-20T10:49:32.161972642+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"6f76866c5589c459017fbf7f921bbc924c4127e0201c72b514907d71804b1ed0" Netns:"/var/run/netns/182c39cd-b692-4fc2-bb71-00350177c000" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=6f76866c5589c459017fbf7f921bbc924c4127e0201c72b514907d71804b1ed0;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2026-01-20T10:49:32.179368902+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"3a8739569015671c6f9f01a8119a17783644b5faec5224fefcb5cdce7737fc06" Netns:"/var/run/netns/d6967b26-5c5f-4d32-9d0c-69c7fad645f5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=3a8739569015671c6f9f01a8119a17783644b5faec5224fefcb5cdce7737fc06;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2026-01-20T10:49:32.181849787+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"71f6858fe8b34b08a087aa4bdf6f19e06c8503b8e48ea7c9cf03725e928b0788" Netns:"/var/run/netns/2654c5b8-2eb5-43dd-8fd8-a288eb0f2efb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=71f6858fe8b34b08a087aa4bdf6f19e06c8503b8e48ea7c9cf03725e928b0788;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2026-01-20T10:49:32.200012551+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"24c018c62b0bc3fe31bdd54a1245e4278882bad91cf3ea80963f577537ef89a2" Netns:"/var/run/netns/0b4184a9-64e0-48cf-96c3-1f5f5e408954" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=24c018c62b0bc3fe31bdd54a1245e4278882bad91cf3ea80963f577537ef89a2;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2026-01-20T10:49:32.251648833+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request 
ContainerID:"8ea598627749117dc8fac77d1c0db6f442df826d23959e250097a84a4e542fb5" Netns:"/var/run/netns/8af24913-1155-48f0-b430-30c659e2d890" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=8ea598627749117dc8fac77d1c0db6f442df826d23959e250097a84a4e542fb5;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2026-01-20T10:49:32.270515548+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"1272c455fd97422ccb0dd4a1670d048b6d2cf50dcd0023daef52bb3d8fd49295" Netns:"/var/run/netns/e3402cbb-3776-4e1a-b505-c8c78765502c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=1272c455fd97422ccb0dd4a1670d048b6d2cf50dcd0023daef52bb3d8fd49295;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2026-01-20T10:49:32.296529151+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"81bc64aec27985047638b6f3725c38dd891cd1728982908d5c683ea56504b1db" Netns:"/var/run/netns/43813cbd-b762-4c33-8ee5-14d12e7196f5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=81bc64aec27985047638b6f3725c38dd891cd1728982908d5c683ea56504b1db;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2026-01-20T10:49:32.296529151+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"cf684e9d3913688e7820e8102fb3a95320285a422a529251dcd046ada13804b1" Netns:"/var/run/netns/6211b826-fd4d-4bdb-b6da-56b8582ec607" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=cf684e9d3913688e7820e8102fb3a95320285a422a529251dcd046ada13804b1;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 
2026-01-20T10:49:32.308567637+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"6c311a84b5597af2f190de8d51b343bdca077dc8fd07fe3a32e430fbfc2fc69e" Netns:"/var/run/netns/6bc7ae3f-8e0b-4879-a9fd-90125b86259b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=6c311a84b5597af2f190de8d51b343bdca077dc8fd07fe3a32e430fbfc2fc69e;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2026-01-20T10:49:32.326261577+00:00 stderr F 2026-01-20T10:49:32Z [verbose] Add: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"6f76866c5589c45","mac":"f2:a7:69:13:71:1a"},{"name":"eth0","mac":"0a:58:0a:d9:00:08","sandbox":"/var/run/netns/182c39cd-b692-4fc2-bb71-00350177c000"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.8/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:32.327157113+00:00 stderr F I0120 10:49:32.326630 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-etcd-operator", Name:"etcd-operator-768d5b5d86-722mg", UID:"0b5c38ff-1fa8-4219-994d-15776acd4a4d", APIVersion:"v1", ResourceVersion:"38030", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.8/23] from ovn-kubernetes 2026-01-20T10:49:32.334376734+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"14d5763f6e24322a1465ef73cf43b66985b5ee8d4d0bcbad81d831ddf725f0f9" Netns:"/var/run/netns/f3ea1add-848b-47fc-8dfe-d8aace505c15" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=14d5763f6e24322a1465ef73cf43b66985b5ee8d4d0bcbad81d831ddf725f0f9;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2026-01-20T10:49:32.346115491+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD finished 
CNI request ContainerID:"6f76866c5589c459017fbf7f921bbc924c4127e0201c72b514907d71804b1ed0" Netns:"/var/run/netns/182c39cd-b692-4fc2-bb71-00350177c000" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=6f76866c5589c459017fbf7f921bbc924c4127e0201c72b514907d71804b1ed0;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:a7:69:13:71:1a\",\"name\":\"6f76866c5589c45\"},{\"mac\":\"0a:58:0a:d9:00:08\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/182c39cd-b692-4fc2-bb71-00350177c000\"}],\"ips\":[{\"address\":\"10.217.0.8/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:32.377013552+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"35d926e5c8594024062db6e1d898a4b570b652442f9644531861d98f8c7ee8ca" Netns:"/var/run/netns/9148ca88-4087-4d3a-ae28-368fcf6522af" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=35d926e5c8594024062db6e1d898a4b570b652442f9644531861d98f8c7ee8ca;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2026-01-20T10:49:32.441725933+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"e95e48e211a52d904a4f9f56a709f1aa84d4db578f6828050c44b37bd8889836" Netns:"/var/run/netns/bf109501-3d99-4761-8527-b428be885a7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=e95e48e211a52d904a4f9f56a709f1aa84d4db578f6828050c44b37bd8889836;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2026-01-20T10:49:32.459817244+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"1b67ef467f01c9dc938c60eb0bf81d8e4d950721465ae6e9ef3d84b6cf9ee3d8" 
Netns:"/var/run/netns/1bf8cc82-bf40-427c-8e87-779ee7ffec55" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=1b67ef467f01c9dc938c60eb0bf81d8e4d950721465ae6e9ef3d84b6cf9ee3d8;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 2026-01-20T10:49:32.481537156+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"05dc8f8fd04adcc92b768cf40248cfd351acd6c19f2be7e81050bc05cf9813bd" Netns:"/var/run/netns/ae5cf0c5-4e12-4417-a40b-6157bb079e1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=05dc8f8fd04adcc92b768cf40248cfd351acd6c19f2be7e81050bc05cf9813bd;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2026-01-20T10:49:32.485846787+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"c37c0f7f9c3caeac2f0f421e74752cc1973142d946599a07d92d249026284d34" Netns:"/var/run/netns/def6f8fc-722f-4e84-93c8-8e2fa8320ff9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=c37c0f7f9c3caeac2f0f421e74752cc1973142d946599a07d92d249026284d34;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2026-01-20T10:49:32.485846787+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"cb38236eaf8b650fcac261ec8c19468d6eaced6b5e3eb4812d1a64fe308fe8a0" Netns:"/var/run/netns/4fbd4254-ea76-4c65-9bbd-77af8e1c45f2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=cb38236eaf8b650fcac261ec8c19468d6eaced6b5e3eb4812d1a64fe308fe8a0;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2026-01-20T10:49:32.497977657+00:00 stderr F 2026-01-20T10:49:32Z 
[verbose] ADD starting CNI request ContainerID:"2cf7b8831320ab3b2bf6ae8117aef184a21c07b6471e80418ee4500c54f3fad6" Netns:"/var/run/netns/0350d956-6e45-4c91-a1aa-70f651bc3165" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=2cf7b8831320ab3b2bf6ae8117aef184a21c07b6471e80418ee4500c54f3fad6;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2026-01-20T10:49:32.527127564+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"20ab1b715d7b6c41e864c6f170d21d6a77de6775a6494cc52facc82695cc867a" Netns:"/var/run/netns/15fcc251-2b4d-4cf2-bf83-5c1ae3d1d8b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=20ab1b715d7b6c41e864c6f170d21d6a77de6775a6494cc52facc82695cc867a;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"" 2026-01-20T10:49:32.536370556+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"cac7e1a6cb9d4b94b39d540b0a123a36ff2a52f7240cc2c03bd825e650b13a4b" Netns:"/var/run/netns/4eeb4a72-61f4-4db2-bdc3-3f06cd20d036" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=cac7e1a6cb9d4b94b39d540b0a123a36ff2a52f7240cc2c03bd825e650b13a4b;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 2026-01-20T10:49:32.565149893+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"6e13b5b12625f2c121063f4f8e2dc03f99ee6b08120ac99ab62d3049b441197b" Netns:"/var/run/netns/e2cf0fca-072e-44c4-8104-37fa09956164" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=6e13b5b12625f2c121063f4f8e2dc03f99ee6b08120ac99ab62d3049b441197b;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" 
Path:"" 2026-01-20T10:49:32.568878126+00:00 stderr F 2026-01-20T10:49:32Z [verbose] Add: openshift-dns:dns-default-gbw49:13045510-8717-4a71-ade4-be95a76440a7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"e9bfdf7ad0e854e","mac":"82:72:e7:17:74:dd"},{"name":"eth0","mac":"0a:58:0a:d9:00:1f","sandbox":"/var/run/netns/9b63ed5a-2a2a-47d0-9508-3515a4741d3e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.31/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:32.569181465+00:00 stderr F I0120 10:49:32.569149 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-gbw49", UID:"13045510-8717-4a71-ade4-be95a76440a7", APIVersion:"v1", ResourceVersion:"38053", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.31/23] from ovn-kubernetes 2026-01-20T10:49:32.569376271+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"b9fd178cb9bef15c61889b5a37a29730f32c86e12823401ab5ad5dbc03498c6e" Netns:"/var/run/netns/e2d575d7-9708-488d-8455-8d67c4ba0a4f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=b9fd178cb9bef15c61889b5a37a29730f32c86e12823401ab5ad5dbc03498c6e;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2026-01-20T10:49:32.571602969+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"63060f2e5e81735a13cf50a826be14e5ae642aae237240835572b44f94cc026f" Netns:"/var/run/netns/60b48ca2-ab0f-4c8f-ab6f-232d9fbbc5a2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=63060f2e5e81735a13cf50a826be14e5ae642aae237240835572b44f94cc026f;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2026-01-20T10:49:32.574511527+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request 
ContainerID:"7d2c4481dd898c58d65f0d3fcd46a36c9a7529a4e056ae4c41d68df0bc82d4d1" Netns:"/var/run/netns/fc66b5be-c8af-4f10-8f73-84d566f05b3d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=7d2c4481dd898c58d65f0d3fcd46a36c9a7529a4e056ae4c41d68df0bc82d4d1;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2026-01-20T10:49:32.576643443+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"033a0e312725abb60e66ae41d294b5e59466881540f3e07b7910d59f42981410" Netns:"/var/run/netns/fab25025-7cc0-4601-a941-3d21c4a51db0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=033a0e312725abb60e66ae41d294b5e59466881540f3e07b7910d59f42981410;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2026-01-20T10:49:32.583069908+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD finished CNI request ContainerID:"e9bfdf7ad0e854ea0676bdf904cd105d27afd839f122e2eff908516df128b256" Netns:"/var/run/netns/9b63ed5a-2a2a-47d0-9508-3515a4741d3e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=e9bfdf7ad0e854ea0676bdf904cd105d27afd839f122e2eff908516df128b256;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"82:72:e7:17:74:dd\",\"name\":\"e9bfdf7ad0e854e\"},{\"mac\":\"0a:58:0a:d9:00:1f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9b63ed5a-2a2a-47d0-9508-3515a4741d3e\"}],\"ips\":[{\"address\":\"10.217.0.31/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:32.588977528+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"157518a2fbcdf41441384a5aa6f72952fe3c2c121c81583bef75a581d87f05c8" 
Netns:"/var/run/netns/4325550d-2409-4a96-8324-10e599004834" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=157518a2fbcdf41441384a5aa6f72952fe3c2c121c81583bef75a581d87f05c8;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"" 2026-01-20T10:49:32.594754565+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"3bebb50a1166a73fd7b542a977594df4a0aaf79383c52922f0f3a6c8dbb51ab6" Netns:"/var/run/netns/46c26261-bbfc-455b-8f19-c653b8119f30" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=3bebb50a1166a73fd7b542a977594df4a0aaf79383c52922f0f3a6c8dbb51ab6;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"" 2026-01-20T10:49:32.600394417+00:00 stderr P 2026-01-20T10:49:32Z [verbose] 2026-01-20T10:49:32.600430768+00:00 stderr P ADD starting CNI request ContainerID:"f5a1d658ee8d94f24351fdebdeaa5f08ad7ed91b3be46d98ce39ac9e191d4a6a" Netns:"/var/run/netns/75dbe8fa-70ad-4c14-ae3e-2f587fbd524d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=f5a1d658ee8d94f24351fdebdeaa5f08ad7ed91b3be46d98ce39ac9e191d4a6a;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"" 2026-01-20T10:49:32.600450268+00:00 stderr F 2026-01-20T10:49:32.744292249+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"07d15cd3389df14ba4c9fc246f14021164672e87e4b4ecb407cfdd0ac94e6acb" Netns:"/var/run/netns/fdf53b20-c671-41fa-8096-d45883fab661" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=07d15cd3389df14ba4c9fc246f14021164672e87e4b4ecb407cfdd0ac94e6acb;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 
2026-01-20T10:49:32.753404157+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"6b56e04f51f76d31ffffcb7df9cbcbee1036c5156cd1dba474c959cd416cd0c3" Netns:"/var/run/netns/e26f94cf-65d0-4d3d-b44c-5c37b77fe837" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=6b56e04f51f76d31ffffcb7df9cbcbee1036c5156cd1dba474c959cd416cd0c3;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2026-01-20T10:49:32.758587295+00:00 stderr P 2026-01-20T10:49:32Z [verbose] 2026-01-20T10:49:32.758641746+00:00 stderr P ADD starting CNI request ContainerID:"20a4a76da066ba0e6174defee9a4750e0695845ded888d2c84f66d5851357cfb" Netns:"/var/run/netns/d883b9ee-af5e-46f6-a16e-33489971f781" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=20a4a76da066ba0e6174defee9a4750e0695845ded888d2c84f66d5851357cfb;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2026-01-20T10:49:32.758661787+00:00 stderr F 2026-01-20T10:49:32.763421012+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"351b90c7b77c23bb31e087f766759c49e139e83963f9f54d6c259cbb8ef2ab92" Netns:"/var/run/netns/bcf1de44-ac52-4d02-a07a-47e5f10b9ae4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=351b90c7b77c23bb31e087f766759c49e139e83963f9f54d6c259cbb8ef2ab92;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 2026-01-20T10:49:32.776265403+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"4132ae12a9d721af0ec27f865825a8e753c61ea0013dcd45d0fc61145643b094" Netns:"/var/run/netns/64e8c073-3e1a-4ef5-8999-4b63ed280c5f" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=4132ae12a9d721af0ec27f865825a8e753c61ea0013dcd45d0fc61145643b094;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2026-01-20T10:49:32.926374815+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"4fdab1420f407743cc8ac662254758af9071357f2aa3b4d55150aa3592daafc0" Netns:"/var/run/netns/a1f9219d-bb56-4308-9cc8-b7c4bc100ea1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=4fdab1420f407743cc8ac662254758af9071357f2aa3b4d55150aa3592daafc0;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2026-01-20T10:49:32.960310009+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"a8b18abf4689d10f3ed0100a7e3b06ad6325b157c2ed815d7efa869e36652f20" Netns:"/var/run/netns/ebd3bd16-c53b-49a4-be78-a266ba4f0921" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=a8b18abf4689d10f3ed0100a7e3b06ad6325b157c2ed815d7efa869e36652f20;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 2026-01-20T10:49:32.986419134+00:00 stderr F 2026-01-20T10:49:32Z [verbose] ADD starting CNI request ContainerID:"19bf0574cec6c2fcffb7413ec6a0f6c184c9588a6d44000ad64d498b177a9d29" Netns:"/var/run/netns/b9e425fb-7e3b-49a0-a1be-63ef7ceb7b6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=19bf0574cec6c2fcffb7413ec6a0f6c184c9588a6d44000ad64d498b177a9d29;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2026-01-20T10:49:33.171365937+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request 
ContainerID:"be3a1d8e2700ba2157ff969219afddc1fd55597aaf83f46c4ff8f913e78a4d9e" Netns:"/var/run/netns/516f2f98-4388-4819-84e1-d2228e7db8a2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=be3a1d8e2700ba2157ff969219afddc1fd55597aaf83f46c4ff8f913e78a4d9e;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"" 2026-01-20T10:49:33.186656264+00:00 stderr P 2026-01-20T10:49:33Z [verbose] 2026-01-20T10:49:33.186745256+00:00 stderr P ADD starting CNI request ContainerID:"29fa79ece7bc4aa42a04b8e27dbcddb57b0b5ea610e528ae05ed041db9421d74" Netns:"/var/run/netns/c40887c4-6ced-45f5-8497-1dbc4d35241e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=29fa79ece7bc4aa42a04b8e27dbcddb57b0b5ea610e528ae05ed041db9421d74;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"" 2026-01-20T10:49:33.186772437+00:00 stderr F 2026-01-20T10:49:33.193627385+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"71f6858fe8b34b0","mac":"2e:a9:c5:72:75:99"},{"name":"eth0","mac":"0a:58:0a:d9:00:0c","sandbox":"/var/run/netns/2654c5b8-2eb5-43dd-8fd8-a288eb0f2efb"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.12/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.193936705+00:00 stderr F I0120 10:49:33.193888 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7", UID:"71af81a9-7d43-49b2-9287-c375900aa905", APIVersion:"v1", ResourceVersion:"38070", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.12/23] from ovn-kubernetes 2026-01-20T10:49:33.200345061+00:00 stderr F 
2026-01-20T10:49:33Z [verbose] Add: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3a8739569015671","mac":"a6:e9:b5:f1:5d:41"},{"name":"eth0","mac":"0a:58:0a:d9:00:3e","sandbox":"/var/run/netns/d6967b26-5c5f-4d32-9d0c-69c7fad645f5"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.62/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.200750403+00:00 stderr F I0120 10:49:33.200668 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5dbbc74dc9-cp5cd", UID:"e9127708-ccfd-4891-8a3a-f0cacb77e0f4", APIVersion:"v1", ResourceVersion:"38075", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.62/23] from ovn-kubernetes 2026-01-20T10:49:33.204661692+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request ContainerID:"26aaf844c5160ed714327b2727c1d365375f0f252f5c0136588433cc4b50105b" Netns:"/var/run/netns/1f773903-cbc5-4949-b66a-00ea704aea9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=26aaf844c5160ed714327b2727c1d365375f0f252f5c0136588433cc4b50105b;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"" 2026-01-20T10:49:33.204661692+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cf684e9d3913688","mac":"06:73:59:32:80:d7"},{"name":"eth0","mac":"0a:58:0a:d9:00:12","sandbox":"/var/run/netns/6211b826-fd4d-4bdb-b6da-56b8582ec607"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.18/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.204661692+00:00 stderr F I0120 10:49:33.204054 7526 event.go:364] 
Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns-operator", Name:"dns-operator-75f687757b-nz2xb", UID:"10603adc-d495-423c-9459-4caa405960bb", APIVersion:"v1", ResourceVersion:"38022", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.18/23] from ovn-kubernetes 2026-01-20T10:49:33.208519850+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"71f6858fe8b34b08a087aa4bdf6f19e06c8503b8e48ea7c9cf03725e928b0788" Netns:"/var/run/netns/2654c5b8-2eb5-43dd-8fd8-a288eb0f2efb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=71f6858fe8b34b08a087aa4bdf6f19e06c8503b8e48ea7c9cf03725e928b0788;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:a9:c5:72:75:99\",\"name\":\"71f6858fe8b34b0\"},{\"mac\":\"0a:58:0a:d9:00:0c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2654c5b8-2eb5-43dd-8fd8-a288eb0f2efb\"}],\"ips\":[{\"address\":\"10.217.0.12/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.208519850+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"24c018c62b0bc3f","mac":"aa:09:9f:9d:54:16"},{"name":"eth0","mac":"0a:58:0a:d9:00:0b","sandbox":"/var/run/netns/0b4184a9-64e0-48cf-96c3-1f5f5e408954"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.11/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.208918492+00:00 stderr F I0120 10:49:33.208749 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"catalog-operator-857456c46-7f5wf", UID:"8a5ae51d-d173-4531-8975-f164c975ce1f", APIVersion:"v1", ResourceVersion:"38018", 
FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.11/23] from ovn-kubernetes 2026-01-20T10:49:33.216771481+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"3a8739569015671c6f9f01a8119a17783644b5faec5224fefcb5cdce7737fc06" Netns:"/var/run/netns/d6967b26-5c5f-4d32-9d0c-69c7fad645f5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=3a8739569015671c6f9f01a8119a17783644b5faec5224fefcb5cdce7737fc06;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a6:e9:b5:f1:5d:41\",\"name\":\"3a8739569015671\"},{\"mac\":\"0a:58:0a:d9:00:3e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d6967b26-5c5f-4d32-9d0c-69c7fad645f5\"}],\"ips\":[{\"address\":\"10.217.0.62/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.218464262+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"cf684e9d3913688e7820e8102fb3a95320285a422a529251dcd046ada13804b1" Netns:"/var/run/netns/6211b826-fd4d-4bdb-b6da-56b8582ec607" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=cf684e9d3913688e7820e8102fb3a95320285a422a529251dcd046ada13804b1;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:73:59:32:80:d7\",\"name\":\"cf684e9d3913688\"},{\"mac\":\"0a:58:0a:d9:00:12\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6211b826-fd4d-4bdb-b6da-56b8582ec607\"}],\"ips\":[{\"address\":\"10.217.0.18/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.227851109+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"24c018c62b0bc3fe31bdd54a1245e4278882bad91cf3ea80963f577537ef89a2" 
Netns:"/var/run/netns/0b4184a9-64e0-48cf-96c3-1f5f5e408954" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=24c018c62b0bc3fe31bdd54a1245e4278882bad91cf3ea80963f577537ef89a2;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:09:9f:9d:54:16\",\"name\":\"24c018c62b0bc3f\"},{\"mac\":\"0a:58:0a:d9:00:0b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0b4184a9-64e0-48cf-96c3-1f5f5e408954\"}],\"ips\":[{\"address\":\"10.217.0.11/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.324485181+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request ContainerID:"896f1d1c66b49ee6c0953e78a523b98ad3a86e177837cd70e3d39a63cc66859b" Netns:"/var/run/netns/9296f533-4afc-4c60-9b3b-e8f01b8f54a7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=896f1d1c66b49ee6c0953e78a523b98ad3a86e177837cd70e3d39a63cc66859b;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2026-01-20T10:49:33.363495980+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request ContainerID:"5f52bddb9a859c7f9d3ffd24bdc4abe1daea6e58c8e74397015e4fca56e02fed" Netns:"/var/run/netns/c6e95119-67e8-4297-94e7-81bc3b9239e4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=5f52bddb9a859c7f9d3ffd24bdc4abe1daea6e58c8e74397015e4fca56e02fed;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2026-01-20T10:49:33.399848147+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request ContainerID:"f600c407dd70d660077bd6ebfdcd7911fc2b2034dc37f08620eb35fdc05ef09c" Netns:"/var/run/netns/9819bdca-30e9-4166-88d6-6bcf2ab43cee" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=f600c407dd70d660077bd6ebfdcd7911fc2b2034dc37f08620eb35fdc05ef09c;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"" 2026-01-20T10:49:33.400237899+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request ContainerID:"d558d59974b2a9b3db9e27ee455a8739e2efdbf2974fd844df32834055fc90ee" Netns:"/var/run/netns/c2fd837d-1d81-4f88-bc39-5526b22b89ff" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=d558d59974b2a9b3db9e27ee455a8739e2efdbf2974fd844df32834055fc90ee;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2026-01-20T10:49:33.429407757+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request ContainerID:"b8204918f90a9ca753a24fddaaebfedda53be5597097876a148cf997da19104c" Netns:"/var/run/netns/9fadf4f6-8c20-4278-82e0-9f6a9d88c69b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=b8204918f90a9ca753a24fddaaebfedda53be5597097876a148cf997da19104c;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"" 2026-01-20T10:49:33.451423928+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request ContainerID:"d9aeaa1aa1d02c7e8201fbb13a3ee252fd99aa6b0819f3318aaa2bd88982712e" Netns:"/var/run/netns/37f39fee-efca-4dfe-9830-963fd18845a8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=d9aeaa1aa1d02c7e8201fbb13a3ee252fd99aa6b0819f3318aaa2bd88982712e;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2026-01-20T10:49:33.497018417+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD starting CNI request 
ContainerID:"75910c2f6810535f83faf9be6f15ebe1bfc09ceb577e00a2485aed199f3d50e9" Netns:"/var/run/netns/827afc23-7520-4bf1-bb8b-b3bcfa516a63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=75910c2f6810535f83faf9be6f15ebe1bfc09ceb577e00a2485aed199f3d50e9;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"" 2026-01-20T10:49:33.586845983+00:00 stderr P 2026-01-20T10:49:33Z [verbose] 2026-01-20T10:49:33.586910955+00:00 stderr P Add: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"81bc64aec279850","mac":"7e:6c:78:ca:0a:54"},{"name":"eth0","mac":"0a:58:0a:d9:00:42","sandbox":"/var/run/netns/43813cbd-b762-4c33-8ee5-14d12e7196f5"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.66/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.586934796+00:00 stderr F 2026-01-20T10:49:33.587396020+00:00 stderr F I0120 10:49:33.587308 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"downloads-65476884b9-9wcvx", UID:"6268b7fe-8910-4505-b404-6f1df638105c", APIVersion:"v1", ResourceVersion:"38039", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.66/23] from ovn-kubernetes 2026-01-20T10:49:33.597792017+00:00 stderr P 2026-01-20T10:49:33Z [verbose] 2026-01-20T10:49:33.597865159+00:00 stderr P ADD finished CNI request ContainerID:"81bc64aec27985047638b6f3725c38dd891cd1728982908d5c683ea56504b1db" Netns:"/var/run/netns/43813cbd-b762-4c33-8ee5-14d12e7196f5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=81bc64aec27985047638b6f3725c38dd891cd1728982908d5c683ea56504b1db;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7e:6c:78:ca:0a:54\",\"name\":\"81bc64aec279850\"},{\"mac\":\"0a:58:0a:d9:00:42\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/43813cbd-b762-4c33-8ee5-14d12e7196f5\"}],\"ips\":[{\"address\":\"10.217.0.66/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.597891320+00:00 stderr F 2026-01-20T10:49:33.605657915+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"1272c455fd97422","mac":"d6:78:b9:5f:39:5a"},{"name":"eth0","mac":"0a:58:0a:d9:00:17","sandbox":"/var/run/netns/e3402cbb-3776-4e1a-b505-c8c78765502c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.23/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.605657915+00:00 stderr F I0120 10:49:33.604579 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-config-operator", Name:"openshift-config-operator-77658b5b66-dq5sc", UID:"530553aa-0a1d-423e-8a22-f5eb4bdbb883", APIVersion:"v1", ResourceVersion:"38016", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.23/23] from ovn-kubernetes 2026-01-20T10:49:33.609881025+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"6c311a84b5597af","mac":"6a:4f:93:dd:d0:ef"},{"name":"eth0","mac":"0a:58:0a:d9:00:2d","sandbox":"/var/run/netns/6bc7ae3f-8e0b-4879-a9fd-90125b86259b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.45/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.609881025+00:00 stderr F I0120 10:49:33.608999 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-operator", 
Name:"ingress-operator-7d46d5bb6d-rrg6t", UID:"7d51f445-054a-4e4f-a67b-a828f5a32511", APIVersion:"v1", ResourceVersion:"38012", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.45/23] from ovn-kubernetes 2026-01-20T10:49:33.609881025+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8ea598627749117","mac":"86:ca:e1:ca:63:fe"},{"name":"eth0","mac":"0a:58:0a:d9:00:16","sandbox":"/var/run/netns/8af24913-1155-48f0-b430-30c659e2d890"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.22/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.609920916+00:00 stderr F I0120 10:49:33.609885 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator-7769bd8d7d-q5cvv", UID:"b54e8941-2fc4-432a-9e51-39684df9089e", APIVersion:"v1", ResourceVersion:"38023", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.22/23] from ovn-kubernetes 2026-01-20T10:49:33.620726145+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"1272c455fd97422ccb0dd4a1670d048b6d2cf50dcd0023daef52bb3d8fd49295" Netns:"/var/run/netns/e3402cbb-3776-4e1a-b505-c8c78765502c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=1272c455fd97422ccb0dd4a1670d048b6d2cf50dcd0023daef52bb3d8fd49295;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:78:b9:5f:39:5a\",\"name\":\"1272c455fd97422\"},{\"mac\":\"0a:58:0a:d9:00:17\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e3402cbb-3776-4e1a-b505-c8c78765502c\"}],\"ips\":[{\"address\":\"10.217.0.23/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.625231612+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"6c311a84b5597af2f190de8d51b343bdca077dc8fd07fe3a32e430fbfc2fc69e" Netns:"/var/run/netns/6bc7ae3f-8e0b-4879-a9fd-90125b86259b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=6c311a84b5597af2f190de8d51b343bdca077dc8fd07fe3a32e430fbfc2fc69e;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6a:4f:93:dd:d0:ef\",\"name\":\"6c311a84b5597af\"},{\"mac\":\"0a:58:0a:d9:00:2d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6bc7ae3f-8e0b-4879-a9fd-90125b86259b\"}],\"ips\":[{\"address\":\"10.217.0.45/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.626674836+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"8ea598627749117dc8fac77d1c0db6f442df826d23959e250097a84a4e542fb5" Netns:"/var/run/netns/8af24913-1155-48f0-b430-30c659e2d890" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=8ea598627749117dc8fac77d1c0db6f442df826d23959e250097a84a4e542fb5;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"86:ca:e1:ca:63:fe\",\"name\":\"8ea598627749117\"},{\"mac\":\"0a:58:0a:d9:00:16\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8af24913-1155-48f0-b430-30c659e2d890\"}],\"ips\":[{\"address\":\"10.217.0.22/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.879761305+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"20ab1b715d7b6c4","mac":"1a:b1:68:9c:0c:40"},{"name":"eth0","mac":"0a:58:0a:d9:00:0f","sandbox":"/var/run/netns/15fcc251-2b4d-4cf2-bf83-5c1ae3d1d8b0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.15/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.879761305+00:00 stderr F I0120 10:49:33.879473 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-6f6cb54958-rbddb", UID:"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf", APIVersion:"v1", ResourceVersion:"38217", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.15/23] from ovn-kubernetes 2026-01-20T10:49:33.888563053+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"1b67ef467f01c9d","mac":"36:ef:40:1b:3e:9d"},{"name":"eth0","mac":"0a:58:0a:d9:00:13","sandbox":"/var/run/netns/1bf8cc82-bf40-427c-8e87-779ee7ffec55"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.19/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.888717027+00:00 stderr F I0120 10:49:33.888686 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication-operator", 
Name:"authentication-operator-7cc7ff75d5-g9qv8", UID:"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e", APIVersion:"v1", ResourceVersion:"38226", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.19/23] from ovn-kubernetes 2026-01-20T10:49:33.894913887+00:00 stderr F 2026-01-20T10:49:33Z [verbose] Add: hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"14d5763f6e24322","mac":"c2:35:b3:db:80:75"},{"name":"eth0","mac":"0a:58:0a:d9:00:31","sandbox":"/var/run/netns/f3ea1add-848b-47fc-8dfe-d8aace505c15"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.49/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:33.895275378+00:00 stderr F I0120 10:49:33.895037 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"hostpath-provisioner", Name:"csi-hostpathplugin-hvm8g", UID:"12e733dd-0939-4f1b-9cbb-13897e093787", APIVersion:"v1", ResourceVersion:"38068", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.49/23] from ovn-kubernetes 2026-01-20T10:49:33.896206456+00:00 stderr P 2026-01-20T10:49:33Z [verbose] 2026-01-20T10:49:33.896231316+00:00 stderr F ADD finished CNI request ContainerID:"20ab1b715d7b6c41e864c6f170d21d6a77de6775a6494cc52facc82695cc867a" Netns:"/var/run/netns/15fcc251-2b4d-4cf2-bf83-5c1ae3d1d8b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=20ab1b715d7b6c41e864c6f170d21d6a77de6775a6494cc52facc82695cc867a;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:b1:68:9c:0c:40\",\"name\":\"20ab1b715d7b6c4\"},{\"mac\":\"0a:58:0a:d9:00:0f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/15fcc251-2b4d-4cf2-bf83-5c1ae3d1d8b0\"}],\"ips\":[{\"address\":\"10.217.0.15/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.906234762+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"1b67ef467f01c9dc938c60eb0bf81d8e4d950721465ae6e9ef3d84b6cf9ee3d8" Netns:"/var/run/netns/1bf8cc82-bf40-427c-8e87-779ee7ffec55" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=1b67ef467f01c9dc938c60eb0bf81d8e4d950721465ae6e9ef3d84b6cf9ee3d8;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"36:ef:40:1b:3e:9d\",\"name\":\"1b67ef467f01c9d\"},{\"mac\":\"0a:58:0a:d9:00:13\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1bf8cc82-bf40-427c-8e87-779ee7ffec55\"}],\"ips\":[{\"address\":\"10.217.0.19/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:33.913181813+00:00 stderr F 2026-01-20T10:49:33Z [verbose] ADD finished CNI request ContainerID:"14d5763f6e24322a1465ef73cf43b66985b5ee8d4d0bcbad81d831ddf725f0f9" Netns:"/var/run/netns/f3ea1add-848b-47fc-8dfe-d8aace505c15" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=14d5763f6e24322a1465ef73cf43b66985b5ee8d4d0bcbad81d831ddf725f0f9;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c2:35:b3:db:80:75\",\"name\":\"14d5763f6e24322\"},{\"mac\":\"0a:58:0a:d9:00:31\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f3ea1add-848b-47fc-8dfe-d8aace505c15\"}],\"ips\":[{\"address\":\"10.217.0.49/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.049243537+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"20a4a76da066ba0","mac":"9e:1f:c0:e2:53:36"},{"name":"eth0","mac":"0a:58:0a:d9:00:18","sandbox":"/var/run/netns/d883b9ee-af5e-46f6-a16e-33489971f781"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.24/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.049443103+00:00 stderr F I0120 10:49:34.049386 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"package-server-manager-84d578d794-jw7r2", UID:"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be", APIVersion:"v1", ResourceVersion:"38143", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.24/23] from ovn-kubernetes 2026-01-20T10:49:34.055247200+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"05dc8f8fd04adcc","mac":"82:1d:40:8b:6c:7e"},{"name":"eth0","mac":"0a:58:0a:d9:00:0d","sandbox":"/var/run/netns/ae5cf0c5-4e12-4417-a40b-6157bb079e1a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.13/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.055247200+00:00 stderr F I0120 10:49:34.053726 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-f9xdt", 
UID:"3482be94-0cdb-4e2a-889b-e5fac59fdbf5", APIVersion:"v1", ResourceVersion:"38073", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.13/23] from ovn-kubernetes 2026-01-20T10:49:34.063156021+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"20a4a76da066ba0e6174defee9a4750e0695845ded888d2c84f66d5851357cfb" Netns:"/var/run/netns/d883b9ee-af5e-46f6-a16e-33489971f781" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=20a4a76da066ba0e6174defee9a4750e0695845ded888d2c84f66d5851357cfb;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:1f:c0:e2:53:36\",\"name\":\"20a4a76da066ba0\"},{\"mac\":\"0a:58:0a:d9:00:18\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d883b9ee-af5e-46f6-a16e-33489971f781\"}],\"ips\":[{\"address\":\"10.217.0.24/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.076135176+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cac7e1a6cb9d4b9","mac":"ce:c0:c6:59:95:17"},{"name":"eth0","mac":"0a:58:0a:d9:00:03","sandbox":"/var/run/netns/4eeb4a72-61f4-4db2-bdc3-3f06cd20d036"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.3/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.076448335+00:00 stderr F I0120 10:49:34.076373 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-qdfr4", UID:"a702c6d2-4dde-4077-ab8c-0f8df804bf7a", APIVersion:"v1", ResourceVersion:"38057", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.3/23] from ovn-kubernetes 2026-01-20T10:49:34.078644683+00:00 stderr P 2026-01-20T10:49:34Z 
[verbose] 2026-01-20T10:49:34.078664993+00:00 stderr F ADD finished CNI request ContainerID:"05dc8f8fd04adcc92b768cf40248cfd351acd6c19f2be7e81050bc05cf9813bd" Netns:"/var/run/netns/ae5cf0c5-4e12-4417-a40b-6157bb079e1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=05dc8f8fd04adcc92b768cf40248cfd351acd6c19f2be7e81050bc05cf9813bd;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"82:1d:40:8b:6c:7e\",\"name\":\"05dc8f8fd04adcc\"},{\"mac\":\"0a:58:0a:d9:00:0d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ae5cf0c5-4e12-4417-a40b-6157bb079e1a\"}],\"ips\":[{\"address\":\"10.217.0.13/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.080698645+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c37c0f7f9c3caea","mac":"da:1b:7a:9b:be:46"},{"name":"eth0","mac":"0a:58:0a:d9:00:15","sandbox":"/var/run/netns/def6f8fc-722f-4e84-93c8-8e2fa8320ff9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.21/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.080943053+00:00 stderr F I0120 10:49:34.080888 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-operator-76788bff89-wkjgm", UID:"120b38dc-8236-4fa6-a452-642b8ad738ee", APIVersion:"v1", ResourceVersion:"38046", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.21/23] from ovn-kubernetes 2026-01-20T10:49:34.083770869+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-machine-api:control-plane-machine-set-operator-649bd778b4-tt5tw:45a8038e-e7f2-4d93-a6f5-7753aa54e63f:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"f5a1d658ee8d94f","mac":"9a:71:a8:dd:f9:3d"},{"name":"eth0","mac":"0a:58:0a:d9:00:14","sandbox":"/var/run/netns/75dbe8fa-70ad-4c14-ae3e-2f587fbd524d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.20/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.083770869+00:00 stderr F I0120 10:49:34.083638 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"control-plane-machine-set-operator-649bd778b4-tt5tw", UID:"45a8038e-e7f2-4d93-a6f5-7753aa54e63f", APIVersion:"v1", ResourceVersion:"38066", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.20/23] from ovn-kubernetes 2026-01-20T10:49:34.087693428+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2cf7b8831320ab3","mac":"ba:db:c2:dc:92:1e"},{"name":"eth0","mac":"0a:58:0a:d9:00:32","sandbox":"/var/run/netns/0350d956-6e45-4c91-a1aa-70f651bc3165"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.50/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.087949407+00:00 stderr F I0120 10:49:34.087916 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-f4jkp", UID:"4092a9f8-5acc-4932-9e90-ef962eeb301a", APIVersion:"v1", ResourceVersion:"38043", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.50/23] from ovn-kubernetes 2026-01-20T10:49:34.100816058+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"6e13b5b12625f2c","mac":"5a:53:e3:6b:7f:f4"},{"name":"eth0","mac":"0a:58:0a:d9:00:04","sandbox":"/var/run/netns/e2cf0fca-072e-44c4-8104-37fa09956164"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.4/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.100816058+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"e95e48e211a52d9","mac":"02:98:3e:0f:af:67"},{"name":"eth0","mac":"0a:58:0a:d9:00:40","sandbox":"/var/run/netns/bf109501-3d99-4761-8527-b428be885a7d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.64/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.100816058+00:00 stderr F I0120 10:49:34.097357 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-v54bt", UID:"34a48baf-1bee-4921-8bb2-9b7320e76f79", APIVersion:"v1", ResourceVersion:"38047", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.4/23] from ovn-kubernetes 2026-01-20T10:49:34.100816058+00:00 stderr F I0120 10:49:34.097372 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-source-5c5478f8c-vqvt7", UID:"d0f40333-c860-4c04-8058-a0bf572dcf12", APIVersion:"v1", ResourceVersion:"38032", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.64/23] from ovn-kubernetes 2026-01-20T10:49:34.101256411+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.101295652+00:00 stderr P ADD finished CNI request ContainerID:"cac7e1a6cb9d4b94b39d540b0a123a36ff2a52f7240cc2c03bd825e650b13a4b" Netns:"/var/run/netns/4eeb4a72-61f4-4db2-bdc3-3f06cd20d036" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=cac7e1a6cb9d4b94b39d540b0a123a36ff2a52f7240cc2c03bd825e650b13a4b;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ce:c0:c6:59:95:17\",\"name\":\"cac7e1a6cb9d4b9\"},{\"mac\":\"0a:58:0a:d9:00:03\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4eeb4a72-61f4-4db2-bdc3-3f06cd20d036\"}],\"ips\":[{\"address\":\"10.217.0.3/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.101315093+00:00 stderr F 2026-01-20T10:49:34.103807529+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"c37c0f7f9c3caeac2f0f421e74752cc1973142d946599a07d92d249026284d34" Netns:"/var/run/netns/def6f8fc-722f-4e84-93c8-8e2fa8320ff9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=c37c0f7f9c3caeac2f0f421e74752cc1973142d946599a07d92d249026284d34;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"da:1b:7a:9b:be:46\",\"name\":\"c37c0f7f9c3caea\"},{\"mac\":\"0a:58:0a:d9:00:15\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/def6f8fc-722f-4e84-93c8-8e2fa8320ff9\"}],\"ips\":[{\"address\":\"10.217.0.21/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.105303195+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"35d926e5c859402","mac":"e6:0b:b1:eb:48:a5"},{"name":"eth0","mac":"0a:58:0a:d9:00:3f","sandbox":"/var/run/netns/9148ca88-4087-4d3a-ae28-368fcf6522af"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.63/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.105303195+00:00 stderr F I0120 10:49:34.105196 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-controller-6df6df6b6b-58shh", UID:"297ab9b6-2186-4d5b-a952-2bfd59af63c4", APIVersion:"v1", ResourceVersion:"38048", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.63/23] from ovn-kubernetes 2026-01-20T10:49:34.107177321+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.107215153+00:00 stderr P ADD finished CNI request ContainerID:"f5a1d658ee8d94f24351fdebdeaa5f08ad7ed91b3be46d98ce39ac9e191d4a6a" Netns:"/var/run/netns/75dbe8fa-70ad-4c14-ae3e-2f587fbd524d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=f5a1d658ee8d94f24351fdebdeaa5f08ad7ed91b3be46d98ce39ac9e191d4a6a;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:71:a8:dd:f9:3d\",\"name\":\"f5a1d658ee8d94f\"},{\"mac\":\"0a:58:0a:d9:00:14\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/75dbe8fa-70ad-4c14-ae3e-2f587fbd524d\"}],\"ips\":[{\"address\":\"10.217.0.20/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.107234003+00:00 stderr F 2026-01-20T10:49:34.118128075+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"e95e48e211a52d904a4f9f56a709f1aa84d4db578f6828050c44b37bd8889836" Netns:"/var/run/netns/bf109501-3d99-4761-8527-b428be885a7d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=e95e48e211a52d904a4f9f56a709f1aa84d4db578f6828050c44b37bd8889836;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"02:98:3e:0f:af:67\",\"name\":\"e95e48e211a52d9\"},{\"mac\":\"0a:58:0a:d9:00:40\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bf109501-3d99-4761-8527-b428be885a7d\"}],\"ips\":[{\"address\":\"10.217.0.64/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.118128075+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"6e13b5b12625f2c121063f4f8e2dc03f99ee6b08120ac99ab62d3049b441197b" Netns:"/var/run/netns/e2cf0fca-072e-44c4-8104-37fa09956164" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=6e13b5b12625f2c121063f4f8e2dc03f99ee6b08120ac99ab62d3049b441197b;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:53:e3:6b:7f:f4\",\"name\":\"6e13b5b12625f2c\"},{\"mac\":\"0a:58:0a:d9:00:04\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e2cf0fca-072e-44c4-8104-37fa09956164\"}],\"ips\":[{\"address\":\"10.217.0.4/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.118271889+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.118298470+00:00 stderr P ADD finished CNI request ContainerID:"2cf7b8831320ab3b2bf6ae8117aef184a21c07b6471e80418ee4500c54f3fad6" Netns:"/var/run/netns/0350d956-6e45-4c91-a1aa-70f651bc3165" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=2cf7b8831320ab3b2bf6ae8117aef184a21c07b6471e80418ee4500c54f3fad6;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" 
Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ba:db:c2:dc:92:1e\",\"name\":\"2cf7b8831320ab3\"},{\"mac\":\"0a:58:0a:d9:00:32\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0350d956-6e45-4c91-a1aa-70f651bc3165\"}],\"ips\":[{\"address\":\"10.217.0.50/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.118318031+00:00 stderr F 2026-01-20T10:49:34.125658195+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.125718377+00:00 stderr P ADD finished CNI request ContainerID:"35d926e5c8594024062db6e1d898a4b570b652442f9644531861d98f8c7ee8ca" Netns:"/var/run/netns/9148ca88-4087-4d3a-ae28-368fcf6522af" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=35d926e5c8594024062db6e1d898a4b570b652442f9644531861d98f8c7ee8ca;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"e6:0b:b1:eb:48:a5\",\"name\":\"35d926e5c859402\"},{\"mac\":\"0a:58:0a:d9:00:3f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9148ca88-4087-4d3a-ae28-368fcf6522af\"}],\"ips\":[{\"address\":\"10.217.0.63/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.125758278+00:00 stderr F 2026-01-20T10:49:34.127193332+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.127232633+00:00 stderr P Add: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cb38236eaf8b650","mac":"aa:53:ea:34:78:ff"},{"name":"eth0","mac":"0a:58:0a:d9:00:3d","sandbox":"/var/run/netns/4fbd4254-ea76-4c65-9bbd-77af8e1c45f2"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.61/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.127254684+00:00 stderr F 2026-01-20T10:49:34.130541324+00:00 
stderr F I0120 10:49:34.128342 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-conversion-webhook-595f9969b-l6z49", UID:"59748b9b-c309-4712-aa85-bb38d71c4915", APIVersion:"v1", ResourceVersion:"38042", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.61/23] from ovn-kubernetes 2026-01-20T10:49:34.144215470+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.144264642+00:00 stderr P Add: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"63060f2e5e81735","mac":"2e:b7:3c:9e:67:6e"},{"name":"eth0","mac":"0a:58:0a:d9:00:0e","sandbox":"/var/run/netns/60b48ca2-ab0f-4c8f-ab6f-232d9fbbc5a2"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.14/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.144284912+00:00 stderr F 2026-01-20T10:49:34.144601082+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"cb38236eaf8b650fcac261ec8c19468d6eaced6b5e3eb4812d1a64fe308fe8a0" Netns:"/var/run/netns/4fbd4254-ea76-4c65-9bbd-77af8e1c45f2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=cb38236eaf8b650fcac261ec8c19468d6eaced6b5e3eb4812d1a64fe308fe8a0;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:53:ea:34:78:ff\",\"name\":\"cb38236eaf8b650\"},{\"mac\":\"0a:58:0a:d9:00:3d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4fbd4254-ea76-4c65-9bbd-77af8e1c45f2\"}],\"ips\":[{\"address\":\"10.217.0.61/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.144718926+00:00 stderr F I0120 10:49:34.144681 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-operator-lifecycle-manager", Name:"olm-operator-6d8474f75f-x54mh", UID:"c085412c-b875-46c9-ae3e-e6b0d8067091", APIVersion:"v1", ResourceVersion:"38124", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.14/23] from ovn-kubernetes 2026-01-20T10:49:34.153684688+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"157518a2fbcdf41","mac":"da:0c:13:4b:26:79"},{"name":"eth0","mac":"0a:58:0a:d9:00:05","sandbox":"/var/run/netns/4325550d-2409-4a96-8324-10e599004834"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.5/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.153684688+00:00 stderr F I0120 10:49:34.153301 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"machine-api-operator-788b7c6b6c-ctdmb", UID:"4f8aa612-9da0-4a2b-911e-6a1764a4e74e", APIVersion:"v1", ResourceVersion:"38021", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.5/23] from ovn-kubernetes 2026-01-20T10:49:34.158470725+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"63060f2e5e81735a13cf50a826be14e5ae642aae237240835572b44f94cc026f" Netns:"/var/run/netns/60b48ca2-ab0f-4c8f-ab6f-232d9fbbc5a2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=63060f2e5e81735a13cf50a826be14e5ae642aae237240835572b44f94cc026f;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:b7:3c:9e:67:6e\",\"name\":\"63060f2e5e81735\"},{\"mac\":\"0a:58:0a:d9:00:0e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/60b48ca2-ab0f-4c8f-ab6f-232d9fbbc5a2\"}],\"ips\":[{\"address\":\"10.217.0.14/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.171788119+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"157518a2fbcdf41441384a5aa6f72952fe3c2c121c81583bef75a581d87f05c8" Netns:"/var/run/netns/4325550d-2409-4a96-8324-10e599004834" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=157518a2fbcdf41441384a5aa6f72952fe3c2c121c81583bef75a581d87f05c8;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"da:0c:13:4b:26:79\",\"name\":\"157518a2fbcdf41\"},{\"mac\":\"0a:58:0a:d9:00:05\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4325550d-2409-4a96-8324-10e599004834\"}],\"ips\":[{\"address\":\"10.217.0.5/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.269684251+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.269757324+00:00 stderr P Add: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4132ae12a9d721a","mac":"fa:16:bb:9a:a4:cc"},{"name":"eth0","mac":"0a:58:0a:d9:00:10","sandbox":"/var/run/netns/64e8c073-3e1a-4ef5-8999-4b63ed280c5f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.16/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.269776934+00:00 stderr F 2026-01-20T10:49:34.278098028+00:00 stderr F I0120 10:49:34.277980 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator-686c6c748c-qbnnr", UID:"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7", APIVersion:"v1", ResourceVersion:"38221", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.16/23] from ovn-kubernetes 2026-01-20T10:49:34.279751648+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-console:console-644bb77b49-5x5xk:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"be3a1d8e2700ba2","mac":"12:ac:87:b2:ad:51"},{"name":"eth0","mac":"0a:58:0a:d9:00:49","sandbox":"/var/run/netns/516f2f98-4388-4819-84e1-d2228e7db8a2"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.73/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.282693328+00:00 stderr F I0120 10:49:34.282433 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-644bb77b49-5x5xk", UID:"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1", APIVersion:"v1", ResourceVersion:"38025", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.73/23] from ovn-kubernetes 2026-01-20T10:49:34.294394764+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.294442736+00:00 stderr P Add: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7d2c4481dd898c5","mac":"2a:27:da:64:90:c3"},{"name":"eth0","mac":"0a:58:0a:d9:00:07","sandbox":"/var/run/netns/fc66b5be-c8af-4f10-8f73-84d566f05b3d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.7/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.294462286+00:00 stderr F 2026-01-20T10:49:34.294686073+00:00 stderr F I0120 10:49:34.294660 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver-operator", 
Name:"kube-apiserver-operator-78d54458c4-sc8h7", UID:"ed024e5d-8fc2-4c22-803d-73f3c9795f19", APIVersion:"v1", ResourceVersion:"38214", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.7/23] from ovn-kubernetes 2026-01-20T10:49:34.306445982+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"4132ae12a9d721af0ec27f865825a8e753c61ea0013dcd45d0fc61145643b094" Netns:"/var/run/netns/64e8c073-3e1a-4ef5-8999-4b63ed280c5f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=4132ae12a9d721af0ec27f865825a8e753c61ea0013dcd45d0fc61145643b094;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"fa:16:bb:9a:a4:cc\",\"name\":\"4132ae12a9d721a\"},{\"mac\":\"0a:58:0a:d9:00:10\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/64e8c073-3e1a-4ef5-8999-4b63ed280c5f\"}],\"ips\":[{\"address\":\"10.217.0.16/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.315172108+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"07d15cd3389df14","mac":"0e:f6:89:0b:01:17"},{"name":"eth0","mac":"0a:58:0a:d9:00:47","sandbox":"/var/run/netns/fdf53b20-c671-41fa-8096-d45883fab661"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.71/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.315172108+00:00 stderr F I0120 10:49:34.314742 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-2vhcn", UID:"0b5d722a-1123-4935-9740-52a08d018bc9", APIVersion:"v1", ResourceVersion:"38029", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.71/23] from 
ovn-kubernetes 2026-01-20T10:49:34.319769177+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"6b56e04f51f76d3","mac":"56:de:50:bf:19:1b"},{"name":"eth0","mac":"0a:58:0a:d9:00:2b","sandbox":"/var/run/netns/e26f94cf-65d0-4d3d-b44c-5c37b77fe837"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.43/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.319900791+00:00 stderr F I0120 10:49:34.319869 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-8464bcc55b-sjnqz", UID:"bd556935-a077-45df-ba3f-d42c39326ccd", APIVersion:"v1", ResourceVersion:"38058", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.43/23] from ovn-kubernetes 2026-01-20T10:49:34.320779888+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"be3a1d8e2700ba2157ff969219afddc1fd55597aaf83f46c4ff8f913e78a4d9e" Netns:"/var/run/netns/516f2f98-4388-4819-84e1-d2228e7db8a2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=be3a1d8e2700ba2157ff969219afddc1fd55597aaf83f46c4ff8f913e78a4d9e;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"12:ac:87:b2:ad:51\",\"name\":\"be3a1d8e2700ba2\"},{\"mac\":\"0a:58:0a:d9:00:49\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/516f2f98-4388-4819-84e1-d2228e7db8a2\"}],\"ips\":[{\"address\":\"10.217.0.73/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.320973094+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"351b90c7b77c23b","mac":"f2:17:fa:81:db:a3"},{"name":"eth0","mac":"0a:58:0a:d9:00:06","sandbox":"/var/run/netns/bcf1de44-ac52-4d02-a07a-47e5f10b9ae4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.6/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.321195961+00:00 stderr F I0120 10:49:34.321163 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-7c88c4c865-kn67m", UID:"43ae1c37-047b-4ee2-9fee-41e337dd4ac8", APIVersion:"v1", ResourceVersion:"38020", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.6/23] from ovn-kubernetes 2026-01-20T10:49:34.330536025+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.330590957+00:00 stderr P Add: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3bebb50a1166a73","mac":"3a:7e:65:08:84:00"},{"name":"eth0","mac":"0a:58:0a:d9:00:0a","sandbox":"/var/run/netns/46c26261-bbfc-455b-8f19-c653b8119f30"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.10/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.330611327+00:00 stderr F 2026-01-20T10:49:34.330934527+00:00 stderr F I0120 10:49:34.330844 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-546b4f8984-pwccz", UID:"6d67253e-2acd-4bc1-8185-793587da4f17", APIVersion:"v1", ResourceVersion:"38051", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.10/23] from ovn-kubernetes 2026-01-20T10:49:34.334107554+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"033a0e312725abb","mac":"6a:2d:a8:f5:d0:24"},{"name":"eth0","mac":"0a:58:0a:d9:00:27","sandbox":"/var/run/netns/fab25025-7cc0-4601-a941-3d21c4a51db0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.39/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.334107554+00:00 stderr F I0120 10:49:34.333192 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"5bacb25d-97b6-4491-8fb4-99feae1d802a", APIVersion:"v1", ResourceVersion:"38161", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.39/23] from ovn-kubernetes 2026-01-20T10:49:34.337103745+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4fdab1420f40774","mac":"2e:90:76:a5:43:e9"},{"name":"eth0","mac":"0a:58:0a:d9:00:33","sandbox":"/var/run/netns/a1f9219d-bb56-4308-9cc8-b7c4bc100ea1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.51/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.337323222+00:00 stderr F I0120 10:49:34.337272 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-8s8pc", UID:"c782cf62-a827-4677-b3c2-6f82c5f09cbb", APIVersion:"v1", ResourceVersion:"38090", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.51/23] from ovn-kubernetes 2026-01-20T10:49:34.353095192+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"7d2c4481dd898c58d65f0d3fcd46a36c9a7529a4e056ae4c41d68df0bc82d4d1" Netns:"/var/run/netns/fc66b5be-c8af-4f10-8f73-84d566f05b3d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=7d2c4481dd898c58d65f0d3fcd46a36c9a7529a4e056ae4c41d68df0bc82d4d1;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:27:da:64:90:c3\",\"name\":\"7d2c4481dd898c5\"},{\"mac\":\"0a:58:0a:d9:00:07\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fc66b5be-c8af-4f10-8f73-84d566f05b3d\"}],\"ips\":[{\"address\":\"10.217.0.7/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.353095192+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"5f52bddb9a859c7","mac":"b6:7d:f0:17:6b:93"},{"name":"eth0","mac":"0a:58:0a:d9:00:19","sandbox":"/var/run/netns/c6e95119-67e8-4297-94e7-81bc3b9239e4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.25/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.353095192+00:00 stderr F I0120 10:49:34.343053 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator", Name:"migrator-f7c6d88df-q2fnv", UID:"cf1a8966-f594-490a-9fbb-eec5bafd13d3", APIVersion:"v1", ResourceVersion:"38052", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.25/23] from ovn-kubernetes 2026-01-20T10:49:34.355964940+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"351b90c7b77c23bb31e087f766759c49e139e83963f9f54d6c259cbb8ef2ab92" Netns:"/var/run/netns/bcf1de44-ac52-4d02-a07a-47e5f10b9ae4" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=351b90c7b77c23bb31e087f766759c49e139e83963f9f54d6c259cbb8ef2ab92;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:17:fa:81:db:a3\",\"name\":\"351b90c7b77c23b\"},{\"mac\":\"0a:58:0a:d9:00:06\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bcf1de44-ac52-4d02-a07a-47e5f10b9ae4\"}],\"ips\":[{\"address\":\"10.217.0.6/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.360727695+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"6b56e04f51f76d31ffffcb7df9cbcbee1036c5156cd1dba474c959cd416cd0c3" Netns:"/var/run/netns/e26f94cf-65d0-4d3d-b44c-5c37b77fe837" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=6b56e04f51f76d31ffffcb7df9cbcbee1036c5156cd1dba474c959cd416cd0c3;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"56:de:50:bf:19:1b\",\"name\":\"6b56e04f51f76d3\"},{\"mac\":\"0a:58:0a:d9:00:2b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e26f94cf-65d0-4d3d-b44c-5c37b77fe837\"}],\"ips\":[{\"address\":\"10.217.0.43/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.360727695+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"3bebb50a1166a73fd7b542a977594df4a0aaf79383c52922f0f3a6c8dbb51ab6" Netns:"/var/run/netns/46c26261-bbfc-455b-8f19-c653b8119f30" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=3bebb50a1166a73fd7b542a977594df4a0aaf79383c52922f0f3a6c8dbb51ab6;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" 
Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"3a:7e:65:08:84:00\",\"name\":\"3bebb50a1166a73\"},{\"mac\":\"0a:58:0a:d9:00:0a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/46c26261-bbfc-455b-8f19-c653b8119f30\"}],\"ips\":[{\"address\":\"10.217.0.10/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.364127708+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"07d15cd3389df14ba4c9fc246f14021164672e87e4b4ecb407cfdd0ac94e6acb" Netns:"/var/run/netns/fdf53b20-c671-41fa-8096-d45883fab661" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=07d15cd3389df14ba4c9fc246f14021164672e87e4b4ecb407cfdd0ac94e6acb;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"0e:f6:89:0b:01:17\",\"name\":\"07d15cd3389df14\"},{\"mac\":\"0a:58:0a:d9:00:47\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fdf53b20-c671-41fa-8096-d45883fab661\"}],\"ips\":[{\"address\":\"10.217.0.71/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.364342565+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"5f52bddb9a859c7f9d3ffd24bdc4abe1daea6e58c8e74397015e4fca56e02fed" Netns:"/var/run/netns/c6e95119-67e8-4297-94e7-81bc3b9239e4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=5f52bddb9a859c7f9d3ffd24bdc4abe1daea6e58c8e74397015e4fca56e02fed;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b6:7d:f0:17:6b:93\",\"name\":\"5f52bddb9a859c7\"},{\"mac\":\"0a:58:0a:d9:00:19\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c6e95119-67e8-4297-94e7-81bc3b9239e4\"}],\"ips\":[{\"address\":\"10.217.0.25/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.365392657+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"4fdab1420f407743cc8ac662254758af9071357f2aa3b4d55150aa3592daafc0" Netns:"/var/run/netns/a1f9219d-bb56-4308-9cc8-b7c4bc100ea1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=4fdab1420f407743cc8ac662254758af9071357f2aa3b4d55150aa3592daafc0;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:90:76:a5:43:e9\",\"name\":\"4fdab1420f40774\"},{\"mac\":\"0a:58:0a:d9:00:33\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a1f9219d-bb56-4308-9cc8-b7c4bc100ea1\"}],\"ips\":[{\"address\":\"10.217.0.51/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.376524956+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d558d59974b2a9b","mac":"46:e4:fa:17:92:00"},{"name":"eth0","mac":"0a:58:0a:d9:00:09","sandbox":"/var/run/netns/c2fd837d-1d81-4f88-bc39-5526b22b89ff"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.9/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.376585798+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"033a0e312725abb60e66ae41d294b5e59466881540f3e07b7910d59f42981410" Netns:"/var/run/netns/fab25025-7cc0-4601-a941-3d21c4a51db0" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=033a0e312725abb60e66ae41d294b5e59466881540f3e07b7910d59f42981410;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6a:2d:a8:f5:d0:24\",\"name\":\"033a0e312725abb\"},{\"mac\":\"0a:58:0a:d9:00:27\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fab25025-7cc0-4601-a941-3d21c4a51db0\"}],\"ips\":[{\"address\":\"10.217.0.39/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.376585798+00:00 stderr F I0120 10:49:34.376553 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-7978d7d7f6-2nt8z", UID:"0f394926-bdb9-425c-b36e-264d7fd34550", APIVersion:"v1", ResourceVersion:"38074", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.9/23] from ovn-kubernetes 2026-01-20T10:49:34.378440464+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.380044553+00:00 stderr P Add: openshift-controller-manager:controller-manager-778975cc4f-x5vcf:1a3e81c3-c292-4130-9436-f94062c91efd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"26aaf844c5160ed","mac":"5a:a2:3e:87:06:ee"},{"name":"eth0","mac":"0a:58:0a:d9:00:57","sandbox":"/var/run/netns/1f773903-cbc5-4949-b66a-00ea704aea9c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.87/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.381404174+00:00 stderr F 2026-01-20T10:49:34.381634751+00:00 stderr F I0120 10:49:34.381604 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-778975cc4f-x5vcf", UID:"1a3e81c3-c292-4130-9436-f94062c91efd", APIVersion:"v1", ResourceVersion:"38112", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.87/23] 
from ovn-kubernetes 2026-01-20T10:49:34.381663572+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.381686723+00:00 stderr P Add: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"19bf0574cec6c2f","mac":"82:25:e2:d7:50:2f"},{"name":"eth0","mac":"0a:58:0a:d9:00:30","sandbox":"/var/run/netns/b9e425fb-7e3b-49a0-a1be-63ef7ceb7b6a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.48/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.381705303+00:00 stderr F 2026-01-20T10:49:34.381769635+00:00 stderr F I0120 10:49:34.381754 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-8jhz6", UID:"3f4dca86-e6ee-4ec9-8324-86aff960225e", APIVersion:"v1", ResourceVersion:"38028", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.48/23] from ovn-kubernetes 2026-01-20T10:49:34.381793866+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.381814717+00:00 stderr P Add: openshift-authentication:oauth-openshift-74fc7c67cc-xqf8b:01feb2e0-a0f4-4573-8335-34e364e0ef40:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b8204918f90a9ca","mac":"ca:06:87:d1:06:2a"},{"name":"eth0","mac":"0a:58:0a:d9:00:48","sandbox":"/var/run/netns/9fadf4f6-8c20-4278-82e0-9f6a9d88c69b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.72/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.381834167+00:00 stderr F 2026-01-20T10:49:34.381897039+00:00 stderr F I0120 10:49:34.381882 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-74fc7c67cc-xqf8b", UID:"01feb2e0-a0f4-4573-8335-34e364e0ef40", APIVersion:"v1", ResourceVersion:"38202", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.72/23] from ovn-kubernetes 
2026-01-20T10:49:34.387185840+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-apiserver:apiserver-7fc54b8dd7-d2bhp:41e8708a-e40d-4d28-846b-c52eda4d1755:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"75910c2f6810535","mac":"3e:61:62:c4:de:b8"},{"name":"eth0","mac":"0a:58:0a:d9:00:52","sandbox":"/var/run/netns/827afc23-7520-4bf1-bb8b-b3bcfa516a63"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.82/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.387386556+00:00 stderr F I0120 10:49:34.387240 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", UID:"41e8708a-e40d-4d28-846b-c52eda4d1755", APIVersion:"v1", ResourceVersion:"38050", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.82/23] from ovn-kubernetes 2026-01-20T10:49:34.393891905+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.393930156+00:00 stderr P Add: openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b9fd178cb9bef15","mac":"26:21:e9:51:85:28"},{"name":"eth0","mac":"0a:58:0a:d9:00:20","sandbox":"/var/run/netns/e2d575d7-9708-488d-8455-8d67c4ba0a4f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.32/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.393949417+00:00 stderr F 2026-01-20T10:49:34.394126212+00:00 stderr F I0120 10:49:34.394096 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"multus-admission-controller-6c7c885997-4hbbc", UID:"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0", APIVersion:"v1", ResourceVersion:"38067", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.32/23] from ovn-kubernetes 2026-01-20T10:49:34.401894809+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: 
openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a8b18abf4689d10","mac":"96:5f:dd:78:39:f9"},{"name":"eth0","mac":"0a:58:0a:d9:00:66","sandbox":"/var/run/netns/ebd3bd16-c53b-49a4-be78-a266ba4f0921"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.102/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.402002152+00:00 stderr F I0120 10:49:34.401964 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-sdddl", UID:"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760", APIVersion:"v1", ResourceVersion:"38063", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.102/23] from ovn-kubernetes 2026-01-20T10:49:34.402537058+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d9aeaa1aa1d02c7","mac":"16:bd:13:74:c9:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:2e","sandbox":"/var/run/netns/37f39fee-efca-4dfe-9830-963fd18845a8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.46/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.402635431+00:00 stderr F I0120 10:49:34.402600 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-cluster-samples-operator", Name:"cluster-samples-operator-bc474d5d6-wshwg", UID:"f728c15e-d8de-4a9a-a3ea-fdcead95cb91", APIVersion:"v1", ResourceVersion:"38233", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.46/23] from ovn-kubernetes 2026-01-20T10:49:34.405349414+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"896f1d1c66b49ee","mac":"a6:58:91:87:22:fd"},{"name":"eth0","mac":"0a:58:0a:d9:00:2f","sandbox":"/var/run/netns/9296f533-4afc-4c60-9b3b-e8f01b8f54a7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.47/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.405422706+00:00 stderr F I0120 10:49:34.405396 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-7287f", UID:"887d596e-c519-4bfa-af90-3edd9e1b2f0f", APIVersion:"v1", ResourceVersion:"38027", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.47/23] from ovn-kubernetes 2026-01-20T10:49:34.406799068+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-kk8kg:e4a7de23-6134-4044-902a-0900dc04a501:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"29fa79ece7bc4aa","mac":"ae:1a:9a:e3:46:fe"},{"name":"eth0","mac":"0a:58:0a:d9:00:28","sandbox":"/var/run/netns/c40887c4-6ced-45f5-8497-1dbc4d35241e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.40/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.407103387+00:00 stderr F I0120 10:49:34.406849 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-kk8kg", UID:"e4a7de23-6134-4044-902a-0900dc04a501", APIVersion:"v1", ResourceVersion:"38173", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.40/23] from ovn-kubernetes 2026-01-20T10:49:34.419468404+00:00 stderr F 2026-01-20T10:49:34Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-776b8b7477-sfpvs:21d29937-debd-4407-b2b1-d1053cb0f342:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"f600c407dd70d66","mac":"8a:84:b3:a8:64:84"},{"name":"eth0","mac":"0a:58:0a:d9:00:58","sandbox":"/var/run/netns/9819bdca-30e9-4166-88d6-6bcf2ab43cee"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.88/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:49:34.419468404+00:00 stderr F I0120 10:49:34.418289 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-776b8b7477-sfpvs", UID:"21d29937-debd-4407-b2b1-d1053cb0f342", APIVersion:"v1", ResourceVersion:"38019", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.88/23] from ovn-kubernetes 2026-01-20T10:49:34.500238495+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"d558d59974b2a9b3db9e27ee455a8739e2efdbf2974fd844df32834055fc90ee" Netns:"/var/run/netns/c2fd837d-1d81-4f88-bc39-5526b22b89ff" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=d558d59974b2a9b3db9e27ee455a8739e2efdbf2974fd844df32834055fc90ee;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"46:e4:fa:17:92:00\",\"name\":\"d558d59974b2a9b\"},{\"mac\":\"0a:58:0a:d9:00:09\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c2fd837d-1d81-4f88-bc39-5526b22b89ff\"}],\"ips\":[{\"address\":\"10.217.0.9/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.671409668+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.671462680+00:00 stderr P ADD finished CNI request ContainerID:"26aaf844c5160ed714327b2727c1d365375f0f252f5c0136588433cc4b50105b" Netns:"/var/run/netns/1f773903-cbc5-4949-b66a-00ea704aea9c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=26aaf844c5160ed714327b2727c1d365375f0f252f5c0136588433cc4b50105b;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:a2:3e:87:06:ee\",\"name\":\"26aaf844c5160ed\"},{\"mac\":\"0a:58:0a:d9:00:57\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1f773903-cbc5-4949-b66a-00ea704aea9c\"}],\"ips\":[{\"address\":\"10.217.0.87/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.671482130+00:00 stderr F 2026-01-20T10:49:34.690726787+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"19bf0574cec6c2fcffb7413ec6a0f6c184c9588a6d44000ad64d498b177a9d29" Netns:"/var/run/netns/b9e425fb-7e3b-49a0-a1be-63ef7ceb7b6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=19bf0574cec6c2fcffb7413ec6a0f6c184c9588a6d44000ad64d498b177a9d29;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"82:25:e2:d7:50:2f\",\"name\":\"19bf0574cec6c2f\"},{\"mac\":\"0a:58:0a:d9:00:30\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b9e425fb-7e3b-49a0-a1be-63ef7ceb7b6a\"}],\"ips\":[{\"address\":\"10.217.0.48/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.715000365+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"b8204918f90a9ca753a24fddaaebfedda53be5597097876a148cf997da19104c" Netns:"/var/run/netns/9fadf4f6-8c20-4278-82e0-9f6a9d88c69b" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=b8204918f90a9ca753a24fddaaebfedda53be5597097876a148cf997da19104c;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ca:06:87:d1:06:2a\",\"name\":\"b8204918f90a9ca\"},{\"mac\":\"0a:58:0a:d9:00:48\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9fadf4f6-8c20-4278-82e0-9f6a9d88c69b\"}],\"ips\":[{\"address\":\"10.217.0.72/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.733382316+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"75910c2f6810535f83faf9be6f15ebe1bfc09ceb577e00a2485aed199f3d50e9" Netns:"/var/run/netns/827afc23-7520-4bf1-bb8b-b3bcfa516a63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=75910c2f6810535f83faf9be6f15ebe1bfc09ceb577e00a2485aed199f3d50e9;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"3e:61:62:c4:de:b8\",\"name\":\"75910c2f6810535\"},{\"mac\":\"0a:58:0a:d9:00:52\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/827afc23-7520-4bf1-bb8b-b3bcfa516a63\"}],\"ips\":[{\"address\":\"10.217.0.82/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.753608102+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"b9fd178cb9bef15c61889b5a37a29730f32c86e12823401ab5ad5dbc03498c6e" Netns:"/var/run/netns/e2d575d7-9708-488d-8455-8d67c4ba0a4f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=b9fd178cb9bef15c61889b5a37a29730f32c86e12823401ab5ad5dbc03498c6e;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:21:e9:51:85:28\",\"name\":\"b9fd178cb9bef15\"},{\"mac\":\"0a:58:0a:d9:00:20\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e2d575d7-9708-488d-8455-8d67c4ba0a4f\"}],\"ips\":[{\"address\":\"10.217.0.32/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.774015743+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"a8b18abf4689d10f3ed0100a7e3b06ad6325b157c2ed815d7efa869e36652f20" Netns:"/var/run/netns/ebd3bd16-c53b-49a4-be78-a266ba4f0921" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=a8b18abf4689d10f3ed0100a7e3b06ad6325b157c2ed815d7efa869e36652f20;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"96:5f:dd:78:39:f9\",\"name\":\"a8b18abf4689d10\"},{\"mac\":\"0a:58:0a:d9:00:66\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ebd3bd16-c53b-49a4-be78-a266ba4f0921\"}],\"ips\":[{\"address\":\"10.217.0.102/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.798035005+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"d9aeaa1aa1d02c7e8201fbb13a3ee252fd99aa6b0819f3318aaa2bd88982712e" Netns:"/var/run/netns/37f39fee-efca-4dfe-9830-963fd18845a8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=d9aeaa1aa1d02c7e8201fbb13a3ee252fd99aa6b0819f3318aaa2bd88982712e;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:bd:13:74:c9:cd\",\"name\":\"d9aeaa1aa1d02c7\"},{\"mac\":\"0a:58:0a:d9:00:2e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/37f39fee-efca-4dfe-9830-963fd18845a8\"}],\"ips\":[{\"address\":\"10.217.0.46/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.830836594+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.830889286+00:00 stderr P ADD finished CNI request ContainerID:"896f1d1c66b49ee6c0953e78a523b98ad3a86e177837cd70e3d39a63cc66859b" Netns:"/var/run/netns/9296f533-4afc-4c60-9b3b-e8f01b8f54a7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=896f1d1c66b49ee6c0953e78a523b98ad3a86e177837cd70e3d39a63cc66859b;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a6:58:91:87:22:fd\",\"name\":\"896f1d1c66b49ee\"},{\"mac\":\"0a:58:0a:d9:00:2f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9296f533-4afc-4c60-9b3b-e8f01b8f54a7\"}],\"ips\":[{\"address\":\"10.217.0.47/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.830909267+00:00 stderr F 2026-01-20T10:49:34.848977337+00:00 stderr F 2026-01-20T10:49:34Z [verbose] ADD finished CNI request ContainerID:"29fa79ece7bc4aa42a04b8e27dbcddb57b0b5ea610e528ae05ed041db9421d74" Netns:"/var/run/netns/c40887c4-6ced-45f5-8497-1dbc4d35241e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=29fa79ece7bc4aa42a04b8e27dbcddb57b0b5ea610e528ae05ed041db9421d74;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:1a:9a:e3:46:fe\",\"name\":\"29fa79ece7bc4aa\"},{\"mac\":\"0a:58:0a:d9:00:28\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c40887c4-6ced-45f5-8497-1dbc4d35241e\"}],\"ips\":[{\"address\":\"10.217.0.40/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.870565744+00:00 stderr P 2026-01-20T10:49:34Z [verbose] 2026-01-20T10:49:34.870635806+00:00 stderr P ADD finished CNI request ContainerID:"f600c407dd70d660077bd6ebfdcd7911fc2b2034dc37f08620eb35fdc05ef09c" Netns:"/var/run/netns/9819bdca-30e9-4166-88d6-6bcf2ab43cee" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=f600c407dd70d660077bd6ebfdcd7911fc2b2034dc37f08620eb35fdc05ef09c;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8a:84:b3:a8:64:84\",\"name\":\"f600c407dd70d66\"},{\"mac\":\"0a:58:0a:d9:00:58\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9819bdca-30e9-4166-88d6-6bcf2ab43cee\"}],\"ips\":[{\"address\":\"10.217.0.88/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:49:34.870659597+00:00 stderr F 2026-01-20T10:50:43.399239913+00:00 stderr F 2026-01-20T10:50:43Z [verbose] ADD starting CNI request ContainerID:"68d5a1b6636ff831e01f691c455c250ac722be00e5dd3e188fc22e9c578bd5a0" Netns:"/var/run/netns/5327f177-ffe9-42f1-9337-6e68c966a2eb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29481765-pbh8m;K8S_POD_INFRA_CONTAINER_ID=68d5a1b6636ff831e01f691c455c250ac722be00e5dd3e188fc22e9c578bd5a0;K8S_POD_UID=835aa241-cfd8-4527-953a-e91ac4516103" Path:"" 2026-01-20T10:50:43.404410403+00:00 stderr F 2026-01-20T10:50:43Z [verbose] ADD starting CNI request ContainerID:"be3253472106e7907941fee986eca843184e51149494c9b4f616a2ab02b8d643" 
Netns:"/var/run/netns/597dad0b-3626-4561-a9a9-0ecf10e40053" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-ndkkp;K8S_POD_INFRA_CONTAINER_ID=be3253472106e7907941fee986eca843184e51149494c9b4f616a2ab02b8d643;K8S_POD_UID=44a58473-946f-4c48-ab45-4df1b9a8bfe5" Path:"" 2026-01-20T10:50:43.405392031+00:00 stderr F 2026-01-20T10:50:43Z [verbose] ADD starting CNI request ContainerID:"391323720bca4f992f0df71d24082cd8fa5c589fd1d6eeb29ec1e108083db641" Netns:"/var/run/netns/d41c8d27-c77e-4761-94c3-88885c88b7de" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-tshnz;K8S_POD_INFRA_CONTAINER_ID=391323720bca4f992f0df71d24082cd8fa5c589fd1d6eeb29ec1e108083db641;K8S_POD_UID=b4d73f51-7528-4901-90fc-5563b2d16f4e" Path:"" 2026-01-20T10:50:43.427990173+00:00 stderr F 2026-01-20T10:50:43Z [verbose] ADD starting CNI request ContainerID:"58454f011526c2562f0c0edd673604ef46affaea55dc1997f5a85fe6f743d4a4" Netns:"/var/run/netns/a4dfe238-61fc-45a0-971d-7ce5743540a8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-qr2m4;K8S_POD_INFRA_CONTAINER_ID=58454f011526c2562f0c0edd673604ef46affaea55dc1997f5a85fe6f743d4a4;K8S_POD_UID=1cbd03fe-f63a-464e-a875-236a7f312e54" Path:"" 2026-01-20T10:50:43.743648946+00:00 stderr F 2026-01-20T10:50:43Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29481765-pbh8m:835aa241-cfd8-4527-953a-e91ac4516103:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"68d5a1b6636ff83","mac":"5a:21:e7:8c:07:57"},{"name":"eth0","mac":"0a:58:0a:d9:00:1b","sandbox":"/var/run/netns/5327f177-ffe9-42f1-9337-6e68c966a2eb"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.27/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:50:43.743648946+00:00 stderr F I0120 10:50:43.742719 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29481765-pbh8m", UID:"835aa241-cfd8-4527-953a-e91ac4516103", APIVersion:"v1", ResourceVersion:"40989", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.27/23] from ovn-kubernetes 2026-01-20T10:50:43.753462709+00:00 stderr F 2026-01-20T10:50:43Z [verbose] ADD finished CNI request ContainerID:"68d5a1b6636ff831e01f691c455c250ac722be00e5dd3e188fc22e9c578bd5a0" Netns:"/var/run/netns/5327f177-ffe9-42f1-9337-6e68c966a2eb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29481765-pbh8m;K8S_POD_INFRA_CONTAINER_ID=68d5a1b6636ff831e01f691c455c250ac722be00e5dd3e188fc22e9c578bd5a0;K8S_POD_UID=835aa241-cfd8-4527-953a-e91ac4516103" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:21:e7:8c:07:57\",\"name\":\"68d5a1b6636ff83\"},{\"mac\":\"0a:58:0a:d9:00:1b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5327f177-ffe9-42f1-9337-6e68c966a2eb\"}],\"ips\":[{\"address\":\"10.217.0.27/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:50:43.829351016+00:00 stderr F 2026-01-20T10:50:43Z [verbose] Add: openshift-marketplace:redhat-marketplace-qr2m4:1cbd03fe-f63a-464e-a875-236a7f312e54:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"58454f011526c25","mac":"ba:a5:6e:c5:bd:a1"},{"name":"eth0","mac":"0a:58:0a:d9:00:1a","sandbox":"/var/run/netns/a4dfe238-61fc-45a0-971d-7ce5743540a8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.26/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:50:43.829351016+00:00 stderr F I0120 10:50:43.828258 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-qr2m4", UID:"1cbd03fe-f63a-464e-a875-236a7f312e54", APIVersion:"v1", ResourceVersion:"40988", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.26/23] 
from ovn-kubernetes 2026-01-20T10:50:43.830462447+00:00 stderr F 2026-01-20T10:50:43Z [verbose] Add: openshift-marketplace:redhat-operators-ndkkp:44a58473-946f-4c48-ab45-4df1b9a8bfe5:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"be3253472106e79","mac":"aa:0e:e2:07:73:93"},{"name":"eth0","mac":"0a:58:0a:d9:00:11","sandbox":"/var/run/netns/597dad0b-3626-4561-a9a9-0ecf10e40053"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.17/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:50:43.830634302+00:00 stderr F I0120 10:50:43.830607 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-ndkkp", UID:"44a58473-946f-4c48-ab45-4df1b9a8bfe5", APIVersion:"v1", ResourceVersion:"40993", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.17/23] from ovn-kubernetes 2026-01-20T10:50:43.853245493+00:00 stderr F 2026-01-20T10:50:43Z [verbose] ADD finished CNI request ContainerID:"be3253472106e7907941fee986eca843184e51149494c9b4f616a2ab02b8d643" Netns:"/var/run/netns/597dad0b-3626-4561-a9a9-0ecf10e40053" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-ndkkp;K8S_POD_INFRA_CONTAINER_ID=be3253472106e7907941fee986eca843184e51149494c9b4f616a2ab02b8d643;K8S_POD_UID=44a58473-946f-4c48-ab45-4df1b9a8bfe5" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:0e:e2:07:73:93\",\"name\":\"be3253472106e79\"},{\"mac\":\"0a:58:0a:d9:00:11\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/597dad0b-3626-4561-a9a9-0ecf10e40053\"}],\"ips\":[{\"address\":\"10.217.0.17/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:50:43.853245493+00:00 stderr F 2026-01-20T10:50:43Z [verbose] Add: openshift-marketplace:certified-operators-tshnz:b4d73f51-7528-4901-90fc-5563b2d16f4e:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"391323720bca4f9","mac":"16:de:36:7e:9d:70"},{"name":"eth0","mac":"0a:58:0a:d9:00:1c","sandbox":"/var/run/netns/d41c8d27-c77e-4761-94c3-88885c88b7de"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.28/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:50:43.853245493+00:00 stderr F I0120 10:50:43.851867 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-tshnz", UID:"b4d73f51-7528-4901-90fc-5563b2d16f4e", APIVersion:"v1", ResourceVersion:"40991", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.28/23] from ovn-kubernetes 2026-01-20T10:50:43.854113029+00:00 stderr F 2026-01-20T10:50:43Z [verbose] ADD finished CNI request ContainerID:"58454f011526c2562f0c0edd673604ef46affaea55dc1997f5a85fe6f743d4a4" Netns:"/var/run/netns/a4dfe238-61fc-45a0-971d-7ce5743540a8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-qr2m4;K8S_POD_INFRA_CONTAINER_ID=58454f011526c2562f0c0edd673604ef46affaea55dc1997f5a85fe6f743d4a4;K8S_POD_UID=1cbd03fe-f63a-464e-a875-236a7f312e54" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ba:a5:6e:c5:bd:a1\",\"name\":\"58454f011526c25\"},{\"mac\":\"0a:58:0a:d9:00:1a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a4dfe238-61fc-45a0-971d-7ce5743540a8\"}],\"ips\":[{\"address\":\"10.217.0.26/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:50:43.878226043+00:00 stderr F 2026-01-20T10:50:43Z [verbose] ADD finished CNI request ContainerID:"391323720bca4f992f0df71d24082cd8fa5c589fd1d6eeb29ec1e108083db641" Netns:"/var/run/netns/d41c8d27-c77e-4761-94c3-88885c88b7de" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-tshnz;K8S_POD_INFRA_CONTAINER_ID=391323720bca4f992f0df71d24082cd8fa5c589fd1d6eeb29ec1e108083db641;K8S_POD_UID=b4d73f51-7528-4901-90fc-5563b2d16f4e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:de:36:7e:9d:70\",\"name\":\"391323720bca4f9\"},{\"mac\":\"0a:58:0a:d9:00:1c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d41c8d27-c77e-4761-94c3-88885c88b7de\"}],\"ips\":[{\"address\":\"10.217.0.28/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:50:48.883759378+00:00 stderr F 2026-01-20T10:50:48Z [verbose] DEL starting CNI request ContainerID:"68d5a1b6636ff831e01f691c455c250ac722be00e5dd3e188fc22e9c578bd5a0" Netns:"/var/run/netns/5327f177-ffe9-42f1-9337-6e68c966a2eb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29481765-pbh8m;K8S_POD_INFRA_CONTAINER_ID=68d5a1b6636ff831e01f691c455c250ac722be00e5dd3e188fc22e9c578bd5a0;K8S_POD_UID=835aa241-cfd8-4527-953a-e91ac4516103" Path:"" 2026-01-20T10:50:48.884171630+00:00 stderr F 2026-01-20T10:50:48Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29481765-pbh8m:835aa241-cfd8-4527-953a-e91ac4516103:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:49.023758102+00:00 stderr F 2026-01-20T10:50:49Z [verbose] DEL finished CNI request ContainerID:"68d5a1b6636ff831e01f691c455c250ac722be00e5dd3e188fc22e9c578bd5a0" Netns:"/var/run/netns/5327f177-ffe9-42f1-9337-6e68c966a2eb" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29481765-pbh8m;K8S_POD_INFRA_CONTAINER_ID=68d5a1b6636ff831e01f691c455c250ac722be00e5dd3e188fc22e9c578bd5a0;K8S_POD_UID=835aa241-cfd8-4527-953a-e91ac4516103" Path:"", result: "", err: 2026-01-20T10:50:57.405198205+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL starting CNI request ContainerID:"391323720bca4f992f0df71d24082cd8fa5c589fd1d6eeb29ec1e108083db641" Netns:"/var/run/netns/d41c8d27-c77e-4761-94c3-88885c88b7de" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-tshnz;K8S_POD_INFRA_CONTAINER_ID=391323720bca4f992f0df71d24082cd8fa5c589fd1d6eeb29ec1e108083db641;K8S_POD_UID=b4d73f51-7528-4901-90fc-5563b2d16f4e" Path:"" 2026-01-20T10:50:57.405880604+00:00 stderr F 2026-01-20T10:50:57Z [verbose] Del: openshift-marketplace:certified-operators-tshnz:b4d73f51-7528-4901-90fc-5563b2d16f4e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:57.448608775+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL starting CNI request ContainerID:"19bf0574cec6c2fcffb7413ec6a0f6c184c9588a6d44000ad64d498b177a9d29" Netns:"/var/run/netns/b9e425fb-7e3b-49a0-a1be-63ef7ceb7b6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=19bf0574cec6c2fcffb7413ec6a0f6c184c9588a6d44000ad64d498b177a9d29;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2026-01-20T10:50:57.448899983+00:00 stderr F 2026-01-20T10:50:57Z [verbose] Del: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:57.482025788+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL starting CNI request ContainerID:"a8b18abf4689d10f3ed0100a7e3b06ad6325b157c2ed815d7efa869e36652f20" Netns:"/var/run/netns/ebd3bd16-c53b-49a4-be78-a266ba4f0921" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=a8b18abf4689d10f3ed0100a7e3b06ad6325b157c2ed815d7efa869e36652f20;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 2026-01-20T10:50:57.482025788+00:00 stderr F 2026-01-20T10:50:57Z [verbose] Del: openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:57.482937215+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL starting CNI request ContainerID:"2cf7b8831320ab3b2bf6ae8117aef184a21c07b6471e80418ee4500c54f3fad6" Netns:"/var/run/netns/0350d956-6e45-4c91-a1aa-70f651bc3165" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=2cf7b8831320ab3b2bf6ae8117aef184a21c07b6471e80418ee4500c54f3fad6;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2026-01-20T10:50:57.483187522+00:00 stderr F 2026-01-20T10:50:57Z [verbose] Del: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:57.521157025+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL starting CNI request ContainerID:"896f1d1c66b49ee6c0953e78a523b98ad3a86e177837cd70e3d39a63cc66859b" Netns:"/var/run/netns/9296f533-4afc-4c60-9b3b-e8f01b8f54a7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=896f1d1c66b49ee6c0953e78a523b98ad3a86e177837cd70e3d39a63cc66859b;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2026-01-20T10:50:57.521157025+00:00 stderr F 2026-01-20T10:50:57Z [verbose] Del: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:57.564558546+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL starting CNI request ContainerID:"4fdab1420f407743cc8ac662254758af9071357f2aa3b4d55150aa3592daafc0" Netns:"/var/run/netns/a1f9219d-bb56-4308-9cc8-b7c4bc100ea1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=4fdab1420f407743cc8ac662254758af9071357f2aa3b4d55150aa3592daafc0;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2026-01-20T10:50:57.564707340+00:00 stderr F 2026-01-20T10:50:57Z [verbose] Del: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:57.571149116+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL finished CNI request ContainerID:"391323720bca4f992f0df71d24082cd8fa5c589fd1d6eeb29ec1e108083db641" Netns:"/var/run/netns/d41c8d27-c77e-4761-94c3-88885c88b7de" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-tshnz;K8S_POD_INFRA_CONTAINER_ID=391323720bca4f992f0df71d24082cd8fa5c589fd1d6eeb29ec1e108083db641;K8S_POD_UID=b4d73f51-7528-4901-90fc-5563b2d16f4e" Path:"", result: "", err: 2026-01-20T10:50:57.625706968+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL finished CNI request ContainerID:"19bf0574cec6c2fcffb7413ec6a0f6c184c9588a6d44000ad64d498b177a9d29" Netns:"/var/run/netns/b9e425fb-7e3b-49a0-a1be-63ef7ceb7b6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=19bf0574cec6c2fcffb7413ec6a0f6c184c9588a6d44000ad64d498b177a9d29;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "", err: 2026-01-20T10:50:57.747557428+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL finished CNI request ContainerID:"a8b18abf4689d10f3ed0100a7e3b06ad6325b157c2ed815d7efa869e36652f20" Netns:"/var/run/netns/ebd3bd16-c53b-49a4-be78-a266ba4f0921" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=a8b18abf4689d10f3ed0100a7e3b06ad6325b157c2ed815d7efa869e36652f20;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "", err: 2026-01-20T10:50:57.759578754+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL finished CNI request 
ContainerID:"896f1d1c66b49ee6c0953e78a523b98ad3a86e177837cd70e3d39a63cc66859b" Netns:"/var/run/netns/9296f533-4afc-4c60-9b3b-e8f01b8f54a7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=896f1d1c66b49ee6c0953e78a523b98ad3a86e177837cd70e3d39a63cc66859b;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "", err: 2026-01-20T10:50:57.760072340+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL finished CNI request ContainerID:"4fdab1420f407743cc8ac662254758af9071357f2aa3b4d55150aa3592daafc0" Netns:"/var/run/netns/a1f9219d-bb56-4308-9cc8-b7c4bc100ea1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=4fdab1420f407743cc8ac662254758af9071357f2aa3b4d55150aa3592daafc0;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "", err: 2026-01-20T10:50:57.774182216+00:00 stderr F 2026-01-20T10:50:57Z [verbose] DEL finished CNI request ContainerID:"2cf7b8831320ab3b2bf6ae8117aef184a21c07b6471e80418ee4500c54f3fad6" Netns:"/var/run/netns/0350d956-6e45-4c91-a1aa-70f651bc3165" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=2cf7b8831320ab3b2bf6ae8117aef184a21c07b6471e80418ee4500c54f3fad6;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: "", err: 2026-01-20T10:50:57.881137707+00:00 stderr F 2026-01-20T10:50:57Z [verbose] ADD starting CNI request ContainerID:"845fef91125384273b20c459af9e7c9681286ec366dc321f47733c88771e5eac" Netns:"/var/run/netns/9843b5cd-d78f-489e-aaae-fca9f379d02b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-nc8zc;K8S_POD_INFRA_CONTAINER_ID=845fef91125384273b20c459af9e7c9681286ec366dc321f47733c88771e5eac;K8S_POD_UID=5adb4a31-5991-4381-a1ea-f1b095a071ea" Path:"" 
2026-01-20T10:50:58.043314240+00:00 stderr F 2026-01-20T10:50:58Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-nc8zc:5adb4a31-5991-4381-a1ea-f1b095a071ea:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"845fef911253842","mac":"62:ce:41:69:cd:23"},{"name":"eth0","mac":"0a:58:0a:d9:00:1d","sandbox":"/var/run/netns/9843b5cd-d78f-489e-aaae-fca9f379d02b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.29/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:50:58.043478515+00:00 stderr F I0120 10:50:58.043435 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-nc8zc", UID:"5adb4a31-5991-4381-a1ea-f1b095a071ea", APIVersion:"v1", ResourceVersion:"41248", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.29/23] from ovn-kubernetes 2026-01-20T10:50:58.059721633+00:00 stderr F 2026-01-20T10:50:58Z [verbose] ADD finished CNI request ContainerID:"845fef91125384273b20c459af9e7c9681286ec366dc321f47733c88771e5eac" Netns:"/var/run/netns/9843b5cd-d78f-489e-aaae-fca9f379d02b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-nc8zc;K8S_POD_INFRA_CONTAINER_ID=845fef91125384273b20c459af9e7c9681286ec366dc321f47733c88771e5eac;K8S_POD_UID=5adb4a31-5991-4381-a1ea-f1b095a071ea" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:ce:41:69:cd:23\",\"name\":\"845fef911253842\"},{\"mac\":\"0a:58:0a:d9:00:1d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9843b5cd-d78f-489e-aaae-fca9f379d02b\"}],\"ips\":[{\"address\":\"10.217.0.29/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:50:58.192974442+00:00 stderr F 2026-01-20T10:50:58Z [verbose] DEL starting CNI request ContainerID:"be3253472106e7907941fee986eca843184e51149494c9b4f616a2ab02b8d643" Netns:"/var/run/netns/597dad0b-3626-4561-a9a9-0ecf10e40053" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-ndkkp;K8S_POD_INFRA_CONTAINER_ID=be3253472106e7907941fee986eca843184e51149494c9b4f616a2ab02b8d643;K8S_POD_UID=44a58473-946f-4c48-ab45-4df1b9a8bfe5" Path:"" 2026-01-20T10:50:58.192974442+00:00 stderr F 2026-01-20T10:50:58Z [verbose] Del: openshift-marketplace:redhat-operators-ndkkp:44a58473-946f-4c48-ab45-4df1b9a8bfe5:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:58.327711734+00:00 stderr F 2026-01-20T10:50:58Z [verbose] DEL finished CNI request ContainerID:"be3253472106e7907941fee986eca843184e51149494c9b4f616a2ab02b8d643" Netns:"/var/run/netns/597dad0b-3626-4561-a9a9-0ecf10e40053" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-ndkkp;K8S_POD_INFRA_CONTAINER_ID=be3253472106e7907941fee986eca843184e51149494c9b4f616a2ab02b8d643;K8S_POD_UID=44a58473-946f-4c48-ab45-4df1b9a8bfe5" Path:"", result: "", err: 2026-01-20T10:50:58.510846351+00:00 stderr F 2026-01-20T10:50:58Z [verbose] DEL starting CNI request ContainerID:"05dc8f8fd04adcc92b768cf40248cfd351acd6c19f2be7e81050bc05cf9813bd" Netns:"/var/run/netns/ae5cf0c5-4e12-4417-a40b-6157bb079e1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=05dc8f8fd04adcc92b768cf40248cfd351acd6c19f2be7e81050bc05cf9813bd;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2026-01-20T10:50:58.510919723+00:00 stderr F 2026-01-20T10:50:58Z [verbose] Del: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:58.655146268+00:00 stderr F 2026-01-20T10:50:58Z [verbose] DEL finished CNI request ContainerID:"05dc8f8fd04adcc92b768cf40248cfd351acd6c19f2be7e81050bc05cf9813bd" Netns:"/var/run/netns/ae5cf0c5-4e12-4417-a40b-6157bb079e1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=05dc8f8fd04adcc92b768cf40248cfd351acd6c19f2be7e81050bc05cf9813bd;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "", err: 2026-01-20T10:50:59.279113105+00:00 stderr P 2026-01-20T10:50:59Z [verbose] 2026-01-20T10:50:59.279170897+00:00 stderr P DEL starting CNI request ContainerID:"58454f011526c2562f0c0edd673604ef46affaea55dc1997f5a85fe6f743d4a4" Netns:"/var/run/netns/a4dfe238-61fc-45a0-971d-7ce5743540a8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-qr2m4;K8S_POD_INFRA_CONTAINER_ID=58454f011526c2562f0c0edd673604ef46affaea55dc1997f5a85fe6f743d4a4;K8S_POD_UID=1cbd03fe-f63a-464e-a875-236a7f312e54" Path:"" 2026-01-20T10:50:59.279194058+00:00 stderr F 2026-01-20T10:50:59.279939609+00:00 stderr P 2026-01-20T10:50:59Z [verbose] 2026-01-20T10:50:59.279978990+00:00 stderr P Del: openshift-marketplace:redhat-marketplace-qr2m4:1cbd03fe-f63a-464e-a875-236a7f312e54:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:50:59.280011681+00:00 stderr F 2026-01-20T10:50:59.451405729+00:00 stderr P 2026-01-20T10:50:59Z [verbose] 
2026-01-20T10:50:59.451464631+00:00 stderr P DEL finished CNI request ContainerID:"58454f011526c2562f0c0edd673604ef46affaea55dc1997f5a85fe6f743d4a4" Netns:"/var/run/netns/a4dfe238-61fc-45a0-971d-7ce5743540a8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-qr2m4;K8S_POD_INFRA_CONTAINER_ID=58454f011526c2562f0c0edd673604ef46affaea55dc1997f5a85fe6f743d4a4;K8S_POD_UID=1cbd03fe-f63a-464e-a875-236a7f312e54" Path:"", result: "", err: 2026-01-20T10:50:59.451484901+00:00 stderr F 2026-01-20T10:51:04.730825489+00:00 stderr P 2026-01-20T10:51:04Z [verbose] 2026-01-20T10:51:04.730891280+00:00 stderr P ADD starting CNI request ContainerID:"ec3e84f9949af89c014ae8e757f0b8a412e1dcfb64a82cd458942322173e8288" Netns:"/var/run/netns/42d1a550-3d87-4d40-b0c9-874cf76be70e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-2mx7j;K8S_POD_INFRA_CONTAINER_ID=ec3e84f9949af89c014ae8e757f0b8a412e1dcfb64a82cd458942322173e8288;K8S_POD_UID=df6e4f33-df74-4326-b096-9d3e45a8c55a" Path:"" 2026-01-20T10:51:04.730914631+00:00 stderr F 2026-01-20T10:51:04.870676638+00:00 stderr P 2026-01-20T10:51:04Z [verbose] 2026-01-20T10:51:04.870718319+00:00 stderr P Add: openshift-marketplace:redhat-marketplace-2mx7j:df6e4f33-df74-4326-b096-9d3e45a8c55a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ec3e84f9949af89","mac":"9a:45:c3:52:86:bf"},{"name":"eth0","mac":"0a:58:0a:d9:00:1e","sandbox":"/var/run/netns/42d1a550-3d87-4d40-b0c9-874cf76be70e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.30/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:51:04.870737830+00:00 stderr F 2026-01-20T10:51:04.871039038+00:00 stderr F I0120 10:51:04.871015 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-2mx7j", UID:"df6e4f33-df74-4326-b096-9d3e45a8c55a", APIVersion:"v1", ResourceVersion:"41330", 
FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.30/23] from ovn-kubernetes 2026-01-20T10:51:04.885419473+00:00 stderr F 2026-01-20T10:51:04Z [verbose] ADD finished CNI request ContainerID:"ec3e84f9949af89c014ae8e757f0b8a412e1dcfb64a82cd458942322173e8288" Netns:"/var/run/netns/42d1a550-3d87-4d40-b0c9-874cf76be70e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-2mx7j;K8S_POD_INFRA_CONTAINER_ID=ec3e84f9949af89c014ae8e757f0b8a412e1dcfb64a82cd458942322173e8288;K8S_POD_UID=df6e4f33-df74-4326-b096-9d3e45a8c55a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:45:c3:52:86:bf\",\"name\":\"ec3e84f9949af89\"},{\"mac\":\"0a:58:0a:d9:00:1e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/42d1a550-3d87-4d40-b0c9-874cf76be70e\"}],\"ips\":[{\"address\":\"10.217.0.30/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:51:05.272618798+00:00 stderr F 2026-01-20T10:51:05Z [verbose] ADD starting CNI request ContainerID:"fb18bf62522e5c121a08e1748f456fd0a1e535ddc234acf9eb07e3798d79d9ce" Netns:"/var/run/netns/b9d10e35-d217-4443-9b27-ee972fbe8129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-mpjb7;K8S_POD_INFRA_CONTAINER_ID=fb18bf62522e5c121a08e1748f456fd0a1e535ddc234acf9eb07e3798d79d9ce;K8S_POD_UID=1d5b65e7-a4c3-495a-a5b0-72caab7218fd" Path:"" 2026-01-20T10:51:05.590964741+00:00 stderr F 2026-01-20T10:51:05Z [verbose] Add: openshift-marketplace:certified-operators-mpjb7:1d5b65e7-a4c3-495a-a5b0-72caab7218fd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fb18bf62522e5c1","mac":"92:5a:49:91:78:d4"},{"name":"eth0","mac":"0a:58:0a:d9:00:21","sandbox":"/var/run/netns/b9d10e35-d217-4443-9b27-ee972fbe8129"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.33/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:51:05.591216408+00:00 stderr F I0120 
10:51:05.591183 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-mpjb7", UID:"1d5b65e7-a4c3-495a-a5b0-72caab7218fd", APIVersion:"v1", ResourceVersion:"41338", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.33/23] from ovn-kubernetes 2026-01-20T10:51:05.606734215+00:00 stderr P 2026-01-20T10:51:05Z [verbose] 2026-01-20T10:51:05.606810317+00:00 stderr P ADD finished CNI request ContainerID:"fb18bf62522e5c121a08e1748f456fd0a1e535ddc234acf9eb07e3798d79d9ce" Netns:"/var/run/netns/b9d10e35-d217-4443-9b27-ee972fbe8129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-mpjb7;K8S_POD_INFRA_CONTAINER_ID=fb18bf62522e5c121a08e1748f456fd0a1e535ddc234acf9eb07e3798d79d9ce;K8S_POD_UID=1d5b65e7-a4c3-495a-a5b0-72caab7218fd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:5a:49:91:78:d4\",\"name\":\"fb18bf62522e5c1\"},{\"mac\":\"0a:58:0a:d9:00:21\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b9d10e35-d217-4443-9b27-ee972fbe8129\"}],\"ips\":[{\"address\":\"10.217.0.33/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:51:05.606869409+00:00 stderr F 2026-01-20T10:51:06.271691033+00:00 stderr P 2026-01-20T10:51:06Z [verbose] 2026-01-20T10:51:06.273819895+00:00 stderr P ADD starting CNI request ContainerID:"569e9604055896a1e00f45e7ef1dec380f716970bf89dd82ed39e0e68fc138be" Netns:"/var/run/netns/b6d349ed-1306-4680-a33b-0c8b78e0c49e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-2nxg8;K8S_POD_INFRA_CONTAINER_ID=569e9604055896a1e00f45e7ef1dec380f716970bf89dd82ed39e0e68fc138be;K8S_POD_UID=afcd1056-dc0e-4c35-93bd-1c388cd2028e" Path:"" 2026-01-20T10:51:06.273911107+00:00 stderr F 2026-01-20T10:51:06.399753573+00:00 stderr P 2026-01-20T10:51:06Z [verbose] 2026-01-20T10:51:06.399857146+00:00 stderr P Add: 
openshift-marketplace:redhat-operators-2nxg8:afcd1056-dc0e-4c35-93bd-1c388cd2028e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"569e9604055896a","mac":"9e:e8:d2:22:54:68"},{"name":"eth0","mac":"0a:58:0a:d9:00:22","sandbox":"/var/run/netns/b6d349ed-1306-4680-a33b-0c8b78e0c49e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.34/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:51:06.399924418+00:00 stderr F 2026-01-20T10:51:06.400209336+00:00 stderr F I0120 10:51:06.400152 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-2nxg8", UID:"afcd1056-dc0e-4c35-93bd-1c388cd2028e", APIVersion:"v1", ResourceVersion:"41358", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.34/23] from ovn-kubernetes 2026-01-20T10:51:06.419176993+00:00 stderr P 2026-01-20T10:51:06Z [verbose] 2026-01-20T10:51:06.419304867+00:00 stderr P ADD finished CNI request ContainerID:"569e9604055896a1e00f45e7ef1dec380f716970bf89dd82ed39e0e68fc138be" Netns:"/var/run/netns/b6d349ed-1306-4680-a33b-0c8b78e0c49e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-2nxg8;K8S_POD_INFRA_CONTAINER_ID=569e9604055896a1e00f45e7ef1dec380f716970bf89dd82ed39e0e68fc138be;K8S_POD_UID=afcd1056-dc0e-4c35-93bd-1c388cd2028e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:e8:d2:22:54:68\",\"name\":\"569e9604055896a\"},{\"mac\":\"0a:58:0a:d9:00:22\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b6d349ed-1306-4680-a33b-0c8b78e0c49e\"}],\"ips\":[{\"address\":\"10.217.0.34/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:51:06.419402909+00:00 stderr F 2026-01-20T10:51:06.937141885+00:00 stderr F 2026-01-20T10:51:06Z [verbose] ADD starting CNI request ContainerID:"df5798e69cf9702c0916d9b28805a3b3dc3341cec1f6d3c178089d83281db6ff" 
Netns:"/var/run/netns/e724087f-1cd3-4086-9a48-06d95bb6f057" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6m4w2;K8S_POD_INFRA_CONTAINER_ID=df5798e69cf9702c0916d9b28805a3b3dc3341cec1f6d3c178089d83281db6ff;K8S_POD_UID=bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd" Path:"" 2026-01-20T10:51:07.080875237+00:00 stderr F 2026-01-20T10:51:07Z [verbose] Add: openshift-marketplace:community-operators-6m4w2:bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"df5798e69cf9702","mac":"be:02:d7:8b:0b:6e"},{"name":"eth0","mac":"0a:58:0a:d9:00:23","sandbox":"/var/run/netns/e724087f-1cd3-4086-9a48-06d95bb6f057"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.35/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:51:07.081207977+00:00 stderr F I0120 10:51:07.081170 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-6m4w2", UID:"bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd", APIVersion:"v1", ResourceVersion:"41374", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.35/23] from ovn-kubernetes 2026-01-20T10:51:07.095957642+00:00 stderr F 2026-01-20T10:51:07Z [verbose] ADD finished CNI request ContainerID:"df5798e69cf9702c0916d9b28805a3b3dc3341cec1f6d3c178089d83281db6ff" Netns:"/var/run/netns/e724087f-1cd3-4086-9a48-06d95bb6f057" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6m4w2;K8S_POD_INFRA_CONTAINER_ID=df5798e69cf9702c0916d9b28805a3b3dc3341cec1f6d3c178089d83281db6ff;K8S_POD_UID=bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:02:d7:8b:0b:6e\",\"name\":\"df5798e69cf9702\"},{\"mac\":\"0a:58:0a:d9:00:23\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e724087f-1cd3-4086-9a48-06d95bb6f057\"}],\"ips\":[{\"address\":\"10.217.0.35/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:51:07.301413622+00:00 stderr F 2026-01-20T10:51:07Z [verbose] ADD starting CNI request ContainerID:"aba0d03018940b4a37056c64c42fa71ddc006a813066cf3a5095780f837664eb" Netns:"/var/run/netns/fe021144-d0fb-4e1c-81a5-c4462707c2f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-9k9jk;K8S_POD_INFRA_CONTAINER_ID=aba0d03018940b4a37056c64c42fa71ddc006a813066cf3a5095780f837664eb;K8S_POD_UID=635e7b8e-733b-4da3-a204-7d4cca87de4a" Path:"" 2026-01-20T10:51:07.414038226+00:00 stderr F 2026-01-20T10:51:07Z [verbose] Add: openshift-marketplace:community-operators-9k9jk:635e7b8e-733b-4da3-a204-7d4cca87de4a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"aba0d03018940b4","mac":"f6:e4:47:e6:f2:a8"},{"name":"eth0","mac":"0a:58:0a:d9:00:24","sandbox":"/var/run/netns/fe021144-d0fb-4e1c-81a5-c4462707c2f9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.36/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:51:07.414250682+00:00 stderr F I0120 10:51:07.414212 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-9k9jk", UID:"635e7b8e-733b-4da3-a204-7d4cca87de4a", APIVersion:"v1", ResourceVersion:"41387", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.36/23] from ovn-kubernetes 2026-01-20T10:51:07.428845413+00:00 stderr F 2026-01-20T10:51:07Z [verbose] ADD finished CNI request ContainerID:"aba0d03018940b4a37056c64c42fa71ddc006a813066cf3a5095780f837664eb" Netns:"/var/run/netns/fe021144-d0fb-4e1c-81a5-c4462707c2f9" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-9k9jk;K8S_POD_INFRA_CONTAINER_ID=aba0d03018940b4a37056c64c42fa71ddc006a813066cf3a5095780f837664eb;K8S_POD_UID=635e7b8e-733b-4da3-a204-7d4cca87de4a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f6:e4:47:e6:f2:a8\",\"name\":\"aba0d03018940b4\"},{\"mac\":\"0a:58:0a:d9:00:24\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fe021144-d0fb-4e1c-81a5-c4462707c2f9\"}],\"ips\":[{\"address\":\"10.217.0.36/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:51:36.007330418+00:00 stderr F 2026-01-20T10:51:36Z [verbose] DEL starting CNI request ContainerID:"aba0d03018940b4a37056c64c42fa71ddc006a813066cf3a5095780f837664eb" Netns:"/var/run/netns/fe021144-d0fb-4e1c-81a5-c4462707c2f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-9k9jk;K8S_POD_INFRA_CONTAINER_ID=aba0d03018940b4a37056c64c42fa71ddc006a813066cf3a5095780f837664eb;K8S_POD_UID=635e7b8e-733b-4da3-a204-7d4cca87de4a" Path:"" 2026-01-20T10:51:36.007958865+00:00 stderr F 2026-01-20T10:51:36Z [verbose] Del: openshift-marketplace:community-operators-9k9jk:635e7b8e-733b-4da3-a204-7d4cca87de4a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:51:36.144039190+00:00 stderr P 2026-01-20T10:51:36Z [verbose] 2026-01-20T10:51:36.144111982+00:00 stderr P ADD starting CNI request ContainerID:"d98475354f5ec52e01a2da76492feb1b1acfbde59354acf86f58cc7c9bf15045" Netns:"/var/run/netns/8353cbac-f2fc-428e-8dd3-284990d1f112" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=d98475354f5ec52e01a2da76492feb1b1acfbde59354acf86f58cc7c9bf15045;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2026-01-20T10:51:36.144133993+00:00 stderr F 2026-01-20T10:51:36.169143521+00:00 stderr F 2026-01-20T10:51:36Z [verbose] DEL finished CNI request ContainerID:"aba0d03018940b4a37056c64c42fa71ddc006a813066cf3a5095780f837664eb" Netns:"/var/run/netns/fe021144-d0fb-4e1c-81a5-c4462707c2f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-9k9jk;K8S_POD_INFRA_CONTAINER_ID=aba0d03018940b4a37056c64c42fa71ddc006a813066cf3a5095780f837664eb;K8S_POD_UID=635e7b8e-733b-4da3-a204-7d4cca87de4a" Path:"", result: "", err: 2026-01-20T10:51:36.481549403+00:00 stderr F 2026-01-20T10:51:36Z [verbose] Add: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d98475354f5ec52","mac":"b2:92:88:e8:a8:62"},{"name":"eth0","mac":"0a:58:0a:d9:00:3b","sandbox":"/var/run/netns/8353cbac-f2fc-428e-8dd3-284990d1f112"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.59/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:51:36.481880532+00:00 stderr F I0120 10:51:36.481846 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75779c45fd-v2j2v", UID:"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319", APIVersion:"v1", ResourceVersion:"38038", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.59/23] from ovn-kubernetes 2026-01-20T10:51:36.496431934+00:00 stderr P 2026-01-20T10:51:36Z [verbose] 2026-01-20T10:51:36.496487996+00:00 stderr P ADD finished CNI request ContainerID:"d98475354f5ec52e01a2da76492feb1b1acfbde59354acf86f58cc7c9bf15045" 
Netns:"/var/run/netns/8353cbac-f2fc-428e-8dd3-284990d1f112" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=d98475354f5ec52e01a2da76492feb1b1acfbde59354acf86f58cc7c9bf15045;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:92:88:e8:a8:62\",\"name\":\"d98475354f5ec52\"},{\"mac\":\"0a:58:0a:d9:00:3b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8353cbac-f2fc-428e-8dd3-284990d1f112\"}],\"ips\":[{\"address\":\"10.217.0.59/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:51:36.496515986+00:00 stderr F 2026-01-20T10:55:16.514532132+00:00 stderr F 2026-01-20T10:55:16Z [verbose] ADD starting CNI request ContainerID:"1c7e13b0eafeb38110eed3744611ba6e85b64e41c64530d9031cd74012b31da6" Netns:"/var/run/netns/76034788-2617-41a9-873e-793883ee2e58" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75b7bb6564-ln84v;K8S_POD_INFRA_CONTAINER_ID=1c7e13b0eafeb38110eed3744611ba6e85b64e41c64530d9031cd74012b31da6;K8S_POD_UID=7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76" Path:"" 2026-01-20T10:55:16.764493425+00:00 stderr F 2026-01-20T10:55:16Z [verbose] Add: openshift-image-registry:image-registry-75b7bb6564-ln84v:7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"1c7e13b0eafeb38","mac":"4e:5b:8d:af:42:40"},{"name":"eth0","mac":"0a:58:0a:d9:00:25","sandbox":"/var/run/netns/76034788-2617-41a9-873e-793883ee2e58"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.37/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:55:16.764911707+00:00 stderr F I0120 10:55:16.764856 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75b7bb6564-ln84v", UID:"7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76", APIVersion:"v1", 
ResourceVersion:"42107", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.37/23] from ovn-kubernetes 2026-01-20T10:55:16.780533213+00:00 stderr F 2026-01-20T10:55:16Z [verbose] ADD finished CNI request ContainerID:"1c7e13b0eafeb38110eed3744611ba6e85b64e41c64530d9031cd74012b31da6" Netns:"/var/run/netns/76034788-2617-41a9-873e-793883ee2e58" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75b7bb6564-ln84v;K8S_POD_INFRA_CONTAINER_ID=1c7e13b0eafeb38110eed3744611ba6e85b64e41c64530d9031cd74012b31da6;K8S_POD_UID=7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:5b:8d:af:42:40\",\"name\":\"1c7e13b0eafeb38\"},{\"mac\":\"0a:58:0a:d9:00:25\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/76034788-2617-41a9-873e-793883ee2e58\"}],\"ips\":[{\"address\":\"10.217.0.37/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:56:01.759817678+00:00 stderr F 2026-01-20T10:56:01Z [verbose] DEL starting CNI request ContainerID:"d98475354f5ec52e01a2da76492feb1b1acfbde59354acf86f58cc7c9bf15045" Netns:"/var/run/netns/8353cbac-f2fc-428e-8dd3-284990d1f112" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=d98475354f5ec52e01a2da76492feb1b1acfbde59354acf86f58cc7c9bf15045;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2026-01-20T10:56:01.760725563+00:00 stderr F 2026-01-20T10:56:01Z [verbose] Del: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:56:01.886011063+00:00 stderr F 
2026-01-20T10:56:01Z [verbose] DEL finished CNI request ContainerID:"d98475354f5ec52e01a2da76492feb1b1acfbde59354acf86f58cc7c9bf15045" Netns:"/var/run/netns/8353cbac-f2fc-428e-8dd3-284990d1f112" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=d98475354f5ec52e01a2da76492feb1b1acfbde59354acf86f58cc7c9bf15045;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "", err: 2026-01-20T10:56:27.862505350+00:00 stderr F 2026-01-20T10:56:27Z [verbose] ADD starting CNI request ContainerID:"ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b" Netns:"/var/run/netns/1b9e1b55-ee80-4bb3-9510-c7b843edf65b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b;K8S_POD_UID=9387c79a-cd5b-4d24-a558-6dbbdd89fe1e" Path:"" 2026-01-20T10:56:28.004918271+00:00 stderr F 2026-01-20T10:56:28Z [verbose] Add: openshift-kube-apiserver:installer-13-crc:9387c79a-cd5b-4d24-a558-6dbbdd89fe1e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ae5d379adacec87","mac":"8e:52:63:1a:27:42"},{"name":"eth0","mac":"0a:58:0a:d9:00:26","sandbox":"/var/run/netns/1b9e1b55-ee80-4bb3-9510-c7b843edf65b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.38/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:56:28.005120677+00:00 stderr F I0120 10:56:28.005046 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-13-crc", UID:"9387c79a-cd5b-4d24-a558-6dbbdd89fe1e", APIVersion:"v1", ResourceVersion:"42605", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.38/23] from ovn-kubernetes 2026-01-20T10:56:28.017380216+00:00 stderr F 2026-01-20T10:56:28Z [verbose] ADD finished CNI request 
ContainerID:"ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b" Netns:"/var/run/netns/1b9e1b55-ee80-4bb3-9510-c7b843edf65b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b;K8S_POD_UID=9387c79a-cd5b-4d24-a558-6dbbdd89fe1e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:52:63:1a:27:42\",\"name\":\"ae5d379adacec87\"},{\"mac\":\"0a:58:0a:d9:00:26\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1b9e1b55-ee80-4bb3-9510-c7b843edf65b\"}],\"ips\":[{\"address\":\"10.217.0.38/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:56:34.899145502+00:00 stderr F 2026-01-20T10:56:34Z [verbose] ADD starting CNI request ContainerID:"dbfc8b06838dce400885a5284cb57e14b736e03b2baf2a0df745fa97a4ae8d28" Netns:"/var/run/netns/9340e907-b6e9-40d5-a4a7-8508391f9768" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=cert-manager;K8S_POD_NAME=cert-manager-cainjector-676dd9bd64-mggnx;K8S_POD_INFRA_CONTAINER_ID=dbfc8b06838dce400885a5284cb57e14b736e03b2baf2a0df745fa97a4ae8d28;K8S_POD_UID=c229f43c-9d3a-4848-a9da-d997459f440b" Path:"" 2026-01-20T10:56:34.939632071+00:00 stderr P 2026-01-20T10:56:34Z [verbose] 2026-01-20T10:56:34.939716433+00:00 stderr P ADD starting CNI request ContainerID:"b29d12b36fd4c3794883845bb29b5ed3571e4d3f9c54f60b3b04ed50901e64ea" Netns:"/var/run/netns/b7e9574d-b5ec-4bc3-be7a-33d3c5a65157" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=cert-manager;K8S_POD_NAME=cert-manager-758df9885c-cq6zm;K8S_POD_INFRA_CONTAINER_ID=b29d12b36fd4c3794883845bb29b5ed3571e4d3f9c54f60b3b04ed50901e64ea;K8S_POD_UID=f12a256b-7128-4680-8f54-8e40a3e56300" Path:"" 2026-01-20T10:56:34.939745054+00:00 stderr F 2026-01-20T10:56:34.985123516+00:00 stderr F 2026-01-20T10:56:34Z [verbose] ADD starting CNI request 
ContainerID:"a6cb3677490438d470419b2cce1db09c65e3078c7ffc12c845acb4b35e72fc8d" Netns:"/var/run/netns/f4a23a84-d008-42a9-9895-80af60bd251a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=cert-manager;K8S_POD_NAME=cert-manager-webhook-855f577f79-7bdxq;K8S_POD_INFRA_CONTAINER_ID=a6cb3677490438d470419b2cce1db09c65e3078c7ffc12c845acb4b35e72fc8d;K8S_POD_UID=e7870154-de6e-4216-81fb-b87e7502c412" Path:"" 2026-01-20T10:56:35.202409482+00:00 stderr F 2026-01-20T10:56:35Z [verbose] Add: cert-manager:cert-manager-cainjector-676dd9bd64-mggnx:c229f43c-9d3a-4848-a9da-d997459f440b:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"dbfc8b06838dce4","mac":"e6:ec:c1:1c:9f:e7"},{"name":"eth0","mac":"0a:58:0a:d9:00:29","sandbox":"/var/run/netns/9340e907-b6e9-40d5-a4a7-8508391f9768"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.41/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:56:35.203308355+00:00 stderr F I0120 10:56:35.203263 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cert-manager", Name:"cert-manager-cainjector-676dd9bd64-mggnx", UID:"c229f43c-9d3a-4848-a9da-d997459f440b", APIVersion:"v1", ResourceVersion:"42938", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.41/23] from ovn-kubernetes 2026-01-20T10:56:35.220734065+00:00 stderr F 2026-01-20T10:56:35Z [verbose] ADD finished CNI request ContainerID:"dbfc8b06838dce400885a5284cb57e14b736e03b2baf2a0df745fa97a4ae8d28" Netns:"/var/run/netns/9340e907-b6e9-40d5-a4a7-8508391f9768" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=cert-manager;K8S_POD_NAME=cert-manager-cainjector-676dd9bd64-mggnx;K8S_POD_INFRA_CONTAINER_ID=dbfc8b06838dce400885a5284cb57e14b736e03b2baf2a0df745fa97a4ae8d28;K8S_POD_UID=c229f43c-9d3a-4848-a9da-d997459f440b" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"e6:ec:c1:1c:9f:e7\",\"name\":\"dbfc8b06838dce4\"},{\"mac\":\"0a:58:0a:d9:00:29\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9340e907-b6e9-40d5-a4a7-8508391f9768\"}],\"ips\":[{\"address\":\"10.217.0.41/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:56:35.250794123+00:00 stderr F 2026-01-20T10:56:35Z [verbose] Add: cert-manager:cert-manager-webhook-855f577f79-7bdxq:e7870154-de6e-4216-81fb-b87e7502c412:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a6cb3677490438d","mac":"aa:58:ec:ab:17:08"},{"name":"eth0","mac":"0a:58:0a:d9:00:2c","sandbox":"/var/run/netns/f4a23a84-d008-42a9-9895-80af60bd251a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.44/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:56:35.251020649+00:00 stderr F I0120 10:56:35.250989 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cert-manager", Name:"cert-manager-webhook-855f577f79-7bdxq", UID:"e7870154-de6e-4216-81fb-b87e7502c412", APIVersion:"v1", ResourceVersion:"42942", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.44/23] from ovn-kubernetes 2026-01-20T10:56:35.252371786+00:00 stderr P 2026-01-20T10:56:35Z [verbose] 2026-01-20T10:56:35.252402517+00:00 stderr P Add: cert-manager:cert-manager-758df9885c-cq6zm:f12a256b-7128-4680-8f54-8e40a3e56300:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b29d12b36fd4c37","mac":"26:a3:1f:9e:d2:5d"},{"name":"eth0","mac":"0a:58:0a:d9:00:2a","sandbox":"/var/run/netns/b7e9574d-b5ec-4bc3-be7a-33d3c5a65157"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.42/23","gateway":"10.217.0.1"}],"dns":{}} 2026-01-20T10:56:35.252421887+00:00 stderr F 2026-01-20T10:56:35.253206338+00:00 stderr F I0120 10:56:35.252617 7526 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cert-manager", Name:"cert-manager-758df9885c-cq6zm", 
UID:"f12a256b-7128-4680-8f54-8e40a3e56300", APIVersion:"v1", ResourceVersion:"42939", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.42/23] from ovn-kubernetes 2026-01-20T10:56:35.264182934+00:00 stderr F 2026-01-20T10:56:35Z [verbose] ADD finished CNI request ContainerID:"a6cb3677490438d470419b2cce1db09c65e3078c7ffc12c845acb4b35e72fc8d" Netns:"/var/run/netns/f4a23a84-d008-42a9-9895-80af60bd251a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=cert-manager;K8S_POD_NAME=cert-manager-webhook-855f577f79-7bdxq;K8S_POD_INFRA_CONTAINER_ID=a6cb3677490438d470419b2cce1db09c65e3078c7ffc12c845acb4b35e72fc8d;K8S_POD_UID=e7870154-de6e-4216-81fb-b87e7502c412" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:58:ec:ab:17:08\",\"name\":\"a6cb3677490438d\"},{\"mac\":\"0a:58:0a:d9:00:2c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f4a23a84-d008-42a9-9895-80af60bd251a\"}],\"ips\":[{\"address\":\"10.217.0.44/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:56:35.266609539+00:00 stderr F 2026-01-20T10:56:35Z [verbose] ADD finished CNI request ContainerID:"b29d12b36fd4c3794883845bb29b5ed3571e4d3f9c54f60b3b04ed50901e64ea" Netns:"/var/run/netns/b7e9574d-b5ec-4bc3-be7a-33d3c5a65157" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=cert-manager;K8S_POD_NAME=cert-manager-758df9885c-cq6zm;K8S_POD_INFRA_CONTAINER_ID=b29d12b36fd4c3794883845bb29b5ed3571e4d3f9c54f60b3b04ed50901e64ea;K8S_POD_UID=f12a256b-7128-4680-8f54-8e40a3e56300" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:a3:1f:9e:d2:5d\",\"name\":\"b29d12b36fd4c37\"},{\"mac\":\"0a:58:0a:d9:00:2a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b7e9574d-b5ec-4bc3-be7a-33d3c5a65157\"}],\"ips\":[{\"address\":\"10.217.0.42/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2026-01-20T10:56:43.407126383+00:00 stderr F 2026-01-20T10:56:43Z [verbose] readiness indicator file is gone. 
restart multus-daemon ././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000726415133657716033260 0ustar zuulzuul2026-01-20T10:56:59.167198640+00:00 stdout F 2026-01-20T10:56:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_664c21ea-ed01-4805-ac25-f0430b584782 2026-01-20T10:56:59.207895076+00:00 stdout F 2026-01-20T10:56:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_664c21ea-ed01-4805-ac25-f0430b584782 to /host/opt/cni/bin/ 2026-01-20T10:56:59.231167312+00:00 stderr F 2026-01-20T10:56:59Z [verbose] multus-daemon started 2026-01-20T10:56:59.231167312+00:00 stderr F 2026-01-20T10:56:59Z [verbose] Readiness Indicator file check 2026-01-20T10:56:59.231167312+00:00 stderr F 2026-01-20T10:56:59Z [verbose] Readiness Indicator file check done! 2026-01-20T10:56:59.232055405+00:00 stderr F I0120 10:56:59.232016 30586 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2026-01-20T10:56:59.232292881+00:00 stderr F 2026-01-20T10:56:59Z [verbose] Waiting for certificate 2026-01-20T10:57:00.233316843+00:00 stderr F I0120 10:57:00.233242 30586 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2026-01-20T10:57:00.233518568+00:00 stderr F 2026-01-20T10:57:00Z [verbose] Certificate found! 
2026-01-20T10:57:00.234393002+00:00 stderr F 2026-01-20T10:57:00Z [verbose] server configured with chroot: /hostroot 2026-01-20T10:57:00.234393002+00:00 stderr F 2026-01-20T10:57:00Z [verbose] Filtering pod watch for node "crc" 2026-01-20T10:57:00.335104606+00:00 stderr F 2026-01-20T10:57:00Z [verbose] API readiness check 2026-01-20T10:57:00.336228355+00:00 stderr F 2026-01-20T10:57:00Z [verbose] API readiness check done! 2026-01-20T10:57:00.336516102+00:00 stderr F 2026-01-20T10:57:00Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"} 2026-01-20T10:57:00.336761949+00:00 stderr F 2026-01-20T10:57:00Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf 2026-01-20T10:57:09.907433807+00:00 stderr F 2026-01-20T10:57:09Z [verbose] DEL starting CNI request ContainerID:"ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b" Netns:"/var/run/netns/1b9e1b55-ee80-4bb3-9510-c7b843edf65b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b;K8S_POD_UID=9387c79a-cd5b-4d24-a558-6dbbdd89fe1e" Path:"" 2026-01-20T10:57:09.915199032+00:00 stderr F 2026-01-20T10:57:09Z [verbose] Del: openshift-kube-apiserver:installer-13-crc:9387c79a-cd5b-4d24-a558-6dbbdd89fe1e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2026-01-20T10:57:10.216182122+00:00 
stderr F 2026-01-20T10:57:10Z [verbose] DEL finished CNI request ContainerID:"ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b" Netns:"/var/run/netns/1b9e1b55-ee80-4bb3-9510-c7b843edf65b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b;K8S_POD_UID=9387c79a-cd5b-4d24-a558-6dbbdd89fe1e" Path:"", result: "", err: ././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015133657716033015 5ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015133657741033013 5ustar zuulzuul././@LongLink0000644000000000000000000000032600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000227415133657716033024 0ustar zuulzuul2026-01-20T10:49:37.053390502+00:00 stderr F W0120 10:49:37.052958 1 deprecated.go:66] 2026-01-20T10:49:37.053390502+00:00 stderr F ==== Removed Flag Warning 
====================== 2026-01-20T10:49:37.053390502+00:00 stderr F 2026-01-20T10:49:37.053390502+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2026-01-20T10:49:37.053390502+00:00 stderr F 2026-01-20T10:49:37.053390502+00:00 stderr F =============================================== 2026-01-20T10:49:37.053390502+00:00 stderr F 2026-01-20T10:49:37.053390502+00:00 stderr F I0120 10:49:37.053335 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2026-01-20T10:49:37.054082223+00:00 stderr F I0120 10:49:37.053932 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:49:37.054082223+00:00 stderr F I0120 10:49:37.053971 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:49:37.054611230+00:00 stderr F I0120 10:49:37.054568 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2026-01-20T10:49:37.055741214+00:00 stderr F I0120 10:49:37.054953 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 ././@LongLink0000644000000000000000000000032600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000250115133657716033015 0ustar zuulzuul2025-08-13T19:59:13.861232435+00:00 stderr F W0813 19:59:13.860364 1 deprecated.go:66] 2025-08-13T19:59:13.861232435+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:13.861232435+00:00 stderr F 2025-08-13T19:59:13.861232435+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:59:13.861232435+00:00 stderr F 2025-08-13T19:59:13.861232435+00:00 stderr F =============================================== 2025-08-13T19:59:13.861232435+00:00 stderr F 2025-08-13T19:59:13.862362848+00:00 stderr F I0813 19:59:13.862339 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:59:14.022979775+00:00 stderr F I0813 19:59:14.022157 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:14.022979775+00:00 stderr F I0813 19:59:14.022382 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:14.025228869+00:00 stderr F I0813 19:59:14.025138 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-08-13T19:59:14.133310880+00:00 stderr F I0813 19:59:14.131998 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 2025-08-13T20:42:43.317911807+00:00 stderr F I0813 20:42:43.317675 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000033300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015133657741033013 5ustar zuulzuul././@LongLink0000644000000000000000000000034000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000023352315133657716033027 0ustar zuulzuul2026-01-20T10:49:35.723406781+00:00 stderr F I0120 10:49:35.722784 1 start.go:62] Version: 
v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2026-01-20T10:49:35.726543956+00:00 stderr F I0120 10:49:35.726482 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:35.788150793+00:00 stderr F I0120 10:49:35.788088 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config-controller... 2026-01-20T10:54:48.056044518+00:00 stderr F I0120 10:54:48.055207 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config-controller 2026-01-20T10:54:48.075507977+00:00 stderr F I0120 10:54:48.075439 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:54:48.075620370+00:00 stderr F I0120 10:54:48.075568 1 metrics.go:100] Registering Prometheus metrics 2026-01-20T10:54:48.075722193+00:00 stderr F I0120 10:54:48.075685 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2026-01-20T10:54:48.097205846+00:00 stderr F I0120 10:54:48.097116 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.099289632+00:00 stderr F I0120 10:54:48.097500 1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.099289632+00:00 stderr F W0120 10:54:48.097648 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2026-01-20T10:54:48.099289632+00:00 stderr F E0120 10:54:48.097771 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2026-01-20T10:54:48.099289632+00:00 stderr F I0120 10:54:48.097845 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.099289632+00:00 stderr F I0120 10:54:48.098002 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.099289632+00:00 stderr F I0120 10:54:48.098008 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.099289632+00:00 stderr F I0120 10:54:48.098214 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.099289632+00:00 stderr F I0120 10:54:48.098815 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2026-01-20T10:54:48.099432645+00:00 stderr F I0120 10:54:48.099400 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.099611270+00:00 stderr F I0120 10:54:48.099551 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2026-01-20T10:54:48.103139044+00:00 stderr F I0120 10:54:48.102757 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:48.103404421+00:00 
stderr F I0120 10:54:48.103241 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", 
"SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:54:48.103504294+00:00 stderr F I0120 10:54:48.103464 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:48.103538484+00:00 stderr F I0120 10:54:48.103503 1 machine_set_boot_image_controller.go:151] "FeatureGates changed" enabled=["AdminNetworkPolicy","AlibabaPlatform","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CloudDualStackNodeIPs","ClusterAPIInstallAWS","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallVSphere","DisableKubeletCloudCredentialProviders","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","HardwareSpeed","KMSv1","MetricsServer","NetworkDiagnosticsConfig","NetworkLiveMigration","PrivateHostedZoneAWS","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereStaticIPs"] 
disabled=["AutomatedEtcdBackup","CSIDriverSharedResource","ChunkSizeMiB","ClusterAPIInstall","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallPowerVS","DNSNameResolver","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MixedCPUsAllocation","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereMultiVCenters","ValidatingAdmissionPolicy","VolumeGroupSnapshot"] 2026-01-20T10:54:48.103911065+00:00 stderr F I0120 10:54:48.103794 1 start.go:107] FeatureGates initialized: enabled=[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs] disabled=[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver 
DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:54:48.104213633+00:00 stderr F I0120 10:54:48.104145 1 drain_controller.go:168] Starting MachineConfigController-DrainController 2026-01-20T10:54:48.107362037+00:00 stderr F I0120 10:54:48.107169 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2026-01-20T10:54:48.110560872+00:00 stderr F I0120 10:54:48.108885 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2026-01-20T10:54:48.110560872+00:00 stderr F I0120 10:54:48.109197 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2026-01-20T10:54:48.110560872+00:00 stderr F I0120 10:54:48.109357 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2026-01-20T10:54:48.110560872+00:00 stderr F I0120 10:54:48.109568 1 reflector.go:351] Caches populated for 
*v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.116114840+00:00 stderr F I0120 10:54:48.115808 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:48.118599507+00:00 stderr F I0120 10:54:48.116775 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2026-01-20T10:54:48.150193408+00:00 stderr F I0120 10:54:48.148281 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:48.152253224+00:00 stderr F I0120 10:54:48.151237 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2026-01-20T10:54:48.153132297+00:00 stderr F I0120 10:54:48.153043 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2026-01-20T10:54:48.186527007+00:00 stderr F I0120 10:54:48.186468 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:48.205668777+00:00 stderr F I0120 10:54:48.205608 1 node_controller.go:247] Starting MachineConfigController-NodeController 2026-01-20T10:54:48.205668777+00:00 stderr F I0120 10:54:48.205659 1 template_controller.go:227] Starting MachineConfigController-TemplateController 2026-01-20T10:54:48.205695118+00:00 stderr F I0120 10:54:48.205656 1 kubelet_config_controller.go:193] Starting MachineConfigController-KubeletConfigController 2026-01-20T10:54:48.206003296+00:00 stderr F I0120 10:54:48.205964 1 render_controller.go:127] Starting MachineConfigController-RenderController 2026-01-20T10:54:48.206016267+00:00 stderr F I0120 10:54:48.206003 1 machine_set_boot_image_controller.go:181] Starting 
MachineConfigController-MachineSetBootImageController 2026-01-20T10:54:48.206050247+00:00 stderr F I0120 10:54:48.206017 1 container_runtime_config_controller.go:242] Starting MachineConfigController-ContainerRuntimeConfigController 2026-01-20T10:54:48.408336710+00:00 stderr F I0120 10:54:48.408252 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master 2026-01-20T10:54:48.450224106+00:00 stderr F I0120 10:54:48.450127 1 container_runtime_config_controller.go:923] Applied ImageConfig cluster on MachineConfigPool master 2026-01-20T10:54:48.470098666+00:00 stderr F I0120 10:54:48.469986 1 kubelet_config_nodes.go:156] Applied Node configuration 97-master-generated-kubelet on MachineConfigPool master 2026-01-20T10:54:48.613981942+00:00 stderr F I0120 10:54:48.613903 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 2026-01-20T10:54:48.638265329+00:00 stderr F I0120 10:54:48.638209 1 container_runtime_config_controller.go:923] Applied ImageConfig cluster on MachineConfigPool worker 2026-01-20T10:54:49.012148206+00:00 stderr F I0120 10:54:49.012051 1 kubelet_config_nodes.go:156] Applied Node configuration 97-worker-generated-kubelet on MachineConfigPool worker 2026-01-20T10:54:49.030645949+00:00 stderr F W0120 10:54:49.030575 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2026-01-20T10:54:49.030772812+00:00 stderr F E0120 10:54:49.030755 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2026-01-20T10:54:49.615097879+00:00 stderr F I0120 
10:54:49.614986 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master 2026-01-20T10:54:50.213233594+00:00 stderr F I0120 10:54:50.212648 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 2026-01-20T10:54:51.861428419+00:00 stderr F W0120 10:54:51.861331 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2026-01-20T10:54:51.861428419+00:00 stderr F E0120 10:54:51.861394 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2026-01-20T10:54:53.109157419+00:00 stderr F E0120 10:54:53.109028 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.109298873+00:00 stderr F I0120 10:54:53.109159 1 node_controller.go:988] Pool master is paused and will not update. 
2026-01-20T10:54:53.109485168+00:00 stderr F E0120 10:54:53.109407 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.117503732+00:00 stderr F I0120 10:54:53.117400 1 status.go:266] Degraded Machine: crc and Degraded Reason: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:54:53.117503732+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.121406807+00:00 stderr F I0120 10:54:53.121305 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.121560961+00:00 stderr F I0120 10:54:53.121486 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.126773400+00:00 stderr F E0120 10:54:53.126694 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.126839061+00:00 stderr F E0120 
10:54:53.126799 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.134540206+00:00 stderr F E0120 10:54:53.134462 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:54:53.134540206+00:00 stderr F I0120 10:54:53.134500 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.140652999+00:00 stderr F I0120 10:54:53.140582 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.145471957+00:00 stderr F E0120 10:54:53.145405 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.151900199+00:00 stderr F E0120 10:54:53.151784 1 render_controller.go:443] Error syncing Generated MCFG: could not 
generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.159139782+00:00 stderr F I0120 10:54:53.159053 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.168134421+00:00 stderr F I0120 10:54:53.167995 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.179582747+00:00 stderr F E0120 10:54:53.179492 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.189889782+00:00 stderr F E0120 10:54:53.189242 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.193415585+00:00 stderr F I0120 10:54:53.191914 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate 
rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.200686320+00:00 stderr F I0120 10:54:53.200609 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.233369841+00:00 stderr F E0120 10:54:53.233314 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.242211957+00:00 stderr F E0120 10:54:53.242121 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.246213993+00:00 stderr F I0120 10:54:53.246122 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.252825830+00:00 stderr F I0120 10:54:53.252729 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered 
MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.327346876+00:00 stderr F E0120 10:54:53.327274 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.333987373+00:00 stderr F E0120 10:54:53.333954 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:53.340806645+00:00 stderr F I0120 10:54:53.340761 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.501578420+00:00 stderr F E0120 10:54:53.501490 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:54:53.523704990+00:00 stderr F I0120 10:54:53.523611 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: 
could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:53.684629260+00:00 stderr F E0120 10:54:53.684534 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:53.722853509+00:00 stderr F I0120 10:54:53.722775 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:53.923138218+00:00 stderr F I0120 10:54:53.922965 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:54.043834515+00:00 stderr F E0120 10:54:54.043748 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:54.118122175+00:00 stderr F I0120 10:54:54.118029 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:54.243954389+00:00 stderr F E0120 10:54:54.243815 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:54.323960102+00:00 stderr F I0120 10:54:54.323725 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:54.760590462+00:00 stderr F E0120 10:54:54.758459 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:54.768811311+00:00 stderr F I0120 10:54:54.768702 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:54.964625411+00:00 stderr F E0120 10:54:54.964543 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:54.974452592+00:00 stderr F I0120 10:54:54.974358 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:55.334103039+00:00 stderr F W0120 10:54:55.334019 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:54:55.334103039+00:00 stderr F E0120 10:54:55.334085 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:54:56.049040258+00:00 stderr F E0120 10:54:56.048980 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:56.060849412+00:00 stderr F I0120 10:54:56.060798 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:56.255710306+00:00 stderr F E0120 10:54:56.255582 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:56.272128124+00:00 stderr F I0120 10:54:56.271224 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:58.133117354+00:00 stderr F I0120 10:54:58.132711 1 status.go:266] Degraded Machine: crc and Degraded Reason: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2026-01-20T10:54:58.133117354+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:58.621829591+00:00 stderr F E0120 10:54:58.621763 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:58.630885192+00:00 stderr F I0120 10:54:58.630837 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:54:58.832838845+00:00 stderr F E0120 10:54:58.832741 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:54:58.844262520+00:00 stderr F I0120 10:54:58.843852 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:55:03.752666534+00:00 stderr F E0120 10:55:03.752539 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:55:03.760849762+00:00 stderr F I0120 10:55:03.760751 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:55:03.964602043+00:00 stderr F E0120 10:55:03.964499 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:55:03.975979846+00:00 stderr F I0120 10:55:03.975919 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:55:04.076121256+00:00 stderr F W0120 10:55:04.075996 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:55:04.076121256+00:00 stderr F E0120 10:55:04.076082 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:55:14.003640520+00:00 stderr F E0120 10:55:14.003511 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:55:14.026323654+00:00 stderr F I0120 10:55:14.026177 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:55:14.217281465+00:00 stderr F E0120 10:55:14.217163 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:55:14.227417075+00:00 stderr F I0120 10:55:14.227252 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:55:25.616427862+00:00 stderr F W0120 10:55:25.616287 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:55:25.616551535+00:00 stderr F E0120 10:55:25.616526 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:55:34.510915204+00:00 stderr F E0120 10:55:34.510846 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:55:34.519329779+00:00 stderr F I0120 10:55:34.519280 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:55:34.708183773+00:00 stderr F E0120 10:55:34.708090 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:55:34.717984674+00:00 stderr F I0120 10:55:34.717917 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:56:03.627771606+00:00 stderr F W0120 10:56:03.627610 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:56:03.627771606+00:00 stderr F E0120 10:56:03.627675 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:56:15.483381835+00:00 stderr F E0120 10:56:15.483301 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:56:15.495750878+00:00 stderr F I0120 10:56:15.495707 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:56:15.678260469+00:00 stderr F E0120 10:56:15.678182 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:56:15.691003832+00:00 stderr F I0120 10:56:15.690921 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:56:26.929115617+00:00 stderr F I0120 10:56:26.928887 1 container_runtime_config_controller.go:923] Applied ImageConfig cluster on MachineConfigPool master
2026-01-20T10:56:27.038286104+00:00 stderr F I0120 10:56:27.038137 1 container_runtime_config_controller.go:923] Applied ImageConfig cluster on MachineConfigPool worker
2026-01-20T10:56:31.931782444+00:00 stderr F E0120 10:56:31.931018 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:56:31.944626579+00:00 stderr F E0120 10:56:31.944547 1 render_controller.go:385] could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:56:31.944626579+00:00 stderr F I0120 10:56:31.944577 1 render_controller.go:386] Dropping machineconfigpool "master" out of the queue: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:56:32.039349068+00:00 stderr F E0120 10:56:32.039245 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:56:32.048648988+00:00 stderr F E0120 10:56:32.048596 1 render_controller.go:385] could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:56:32.048648988+00:00 stderr F I0120 10:56:32.048626 1 render_controller.go:386] Dropping machineconfigpool "worker" out of the queue: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:56:51.891668365+00:00 stderr F W0120 10:56:51.891572 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:56:51.891668365+00:00 stderr F E0120 10:56:51.891621 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2026-01-20T10:57:23.512943008+00:00 stderr F W0120 10:57:23.512366 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:23.512943008+00:00 stderr F E0120 10:57:23.512913 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:31.945860181+00:00 stderr F E0120 10:57:31.945292 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:31.947553915+00:00 stderr F E0120 10:57:31.947484 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:31.947553915+00:00 stderr F I0120 10:57:31.947509 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:31.953295076+00:00 stderr F E0120 10:57:31.953212 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:31.954606572+00:00 stderr F E0120 10:57:31.954542 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:31.954606572+00:00 stderr F I0120 10:57:31.954565 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:31.965046117+00:00 stderr F E0120 10:57:31.964982 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:31.966004993+00:00 stderr F E0120 10:57:31.965927 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:31.966004993+00:00 stderr F I0120 10:57:31.965977 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:31.986528555+00:00 stderr F E0120 10:57:31.986420 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:31.987791859+00:00 stderr F E0120 10:57:31.987696 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:31.987791859+00:00 stderr F I0120 10:57:31.987753 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:32.028295540+00:00 stderr F E0120 10:57:32.028206 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:32.029705858+00:00 stderr F E0120 10:57:32.029639 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.029705858+00:00 stderr F I0120 10:57:32.029683 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:32.049265475+00:00 stderr F E0120 10:57:32.049185 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.050596810+00:00 stderr F E0120 10:57:32.050533 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.050596810+00:00 stderr F I0120 10:57:32.050574 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.056108216+00:00 stderr F E0120 10:57:32.056008 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.057105782+00:00 stderr F E0120 10:57:32.057063 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.057105782+00:00 stderr F I0120 10:57:32.057098 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.067641071+00:00 stderr F E0120 10:57:32.067573 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.068675188+00:00 stderr F E0120 10:57:32.068620 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.068675188+00:00 stderr F I0120 10:57:32.068650 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.089217452+00:00 stderr F E0120 10:57:32.089109 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.090630758+00:00 stderr F E0120 10:57:32.090542 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.090630758+00:00 stderr F I0120 10:57:32.090596 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.110165805+00:00 stderr F E0120 10:57:32.109955 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:32.112860346+00:00 stderr F E0120 10:57:32.112764 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.112860346+00:00 stderr F I0120 10:57:32.112830 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:32.131546361+00:00 stderr F E0120 10:57:32.131461 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.148368116+00:00 stderr F E0120 10:57:32.148273 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.148368116+00:00 stderr F I0120 10:57:32.148324 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.228787973+00:00 stderr F E0120 10:57:32.228711 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.274109881+00:00 stderr F E0120 10:57:32.274026 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:32.347630135+00:00 stderr F E0120 10:57:32.347527 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.347630135+00:00 stderr F I0120 10:57:32.347582 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.508355175+00:00 stderr F E0120 10:57:32.508255 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.547419688+00:00 stderr F E0120 10:57:32.547325 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.547540681+00:00 stderr F I0120 10:57:32.547499 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:32.748139207+00:00 stderr F E0120 10:57:32.747778 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.748139207+00:00 stderr F I0120 10:57:32.747821 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:32.868299875+00:00 stderr F E0120 10:57:32.868220 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:32.973889287+00:00 stderr F E0120 10:57:32.973785 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:32.973889287+00:00 stderr F I0120 10:57:32.973828 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:33.068389456+00:00 stderr F E0120 10:57:33.068313 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:33.147973891+00:00 stderr F E0120 10:57:33.147842 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:33.147973891+00:00 stderr F I0120 10:57:33.147921 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2026-01-20T10:57:33.614898478+00:00 stderr F E0120 10:57:33.614775 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2026-01-20T10:57:33.616673895+00:00 stderr F E0120 10:57:33.616580 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:33.616673895+00:00 stderr F I0120 10:57:33.616638 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered
MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:33.788475888+00:00 stderr F E0120 10:57:33.788299 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:33.790856222+00:00 stderr F E0120 10:57:33.790727 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.790856222+00:00 stderr F I0120 10:57:33.790774 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:34.898245308+00:00 stderr F E0120 10:57:34.897607 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:34.899550452+00:00 stderr F E0120 10:57:34.899481 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2026-01-20T10:57:34.899550452+00:00 stderr F I0120 10:57:34.899534 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:35.071475748+00:00 stderr F E0120 10:57:35.071409 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:35.072554867+00:00 stderr F E0120 10:57:35.072482 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.072554867+00:00 stderr F I0120 10:57:35.072547 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:37.461525214+00:00 stderr F E0120 10:57:37.461439 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:37.463261569+00:00 stderr F E0120 10:57:37.463179 1 render_controller.go:465] 
Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.463601698+00:00 stderr F I0120 10:57:37.463258 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:37.633168513+00:00 stderr F E0120 10:57:37.632941 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:37.635794273+00:00 stderr F E0120 10:57:37.635667 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.635794273+00:00 stderr F I0120 10:57:37.635735 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:42.584932484+00:00 stderr F E0120 10:57:42.584273 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: 
machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:42.586190238+00:00 stderr F E0120 10:57:42.586082 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.586190238+00:00 stderr F I0120 10:57:42.586125 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:42.757018315+00:00 stderr F E0120 10:57:42.756930 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:42.758436402+00:00 stderr F E0120 10:57:42.758354 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.758436402+00:00 stderr F I0120 10:57:42.758377 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:48.088983990+00:00 stderr F E0120 10:57:48.088315 1 leaderelection.go:332] error 
retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:52.827077471+00:00 stderr F E0120 10:57:52.826395 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:52.845221761+00:00 stderr F I0120 10:57:52.845152 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:52.998668499+00:00 stderr F E0120 10:57:52.998593 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:57:53.006742123+00:00 stderr F I0120 10:57:53.006657 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:58:10.811300708+00:00 stderr F W0120 10:58:10.811179 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2026-01-20T10:58:10.811300708+00:00 stderr F E0120 10:58:10.811212 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2026-01-20T10:58:13.327166932+00:00 stderr F E0120 10:58:13.327001 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:58:13.341537367+00:00 stderr F I0120 10:58:13.341451 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:58:13.487028822+00:00 stderr F E0120 10:58:13.486931 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2026-01-20T10:58:13.495530488+00:00 stderr F I0120 10:58:13.495469 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current 
MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log

2025-08-13T19:59:09.332601413+00:00 stderr F I0813 19:59:09.332217 1 start.go:62] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:09.334155298+00:00 stderr F I0813 19:59:09.333384 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:09.904644890+00:00 stderr F I0813 19:59:09.904226 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config-controller...
2025-08-13T19:59:09.976994272+00:00 stderr F I0813 19:59:09.954145 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config-controller 2025-08-13T19:59:10.501277606+00:00 stderr F I0813 19:59:10.501175 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:10.506563697+00:00 stderr F I0813 19:59:10.502810 1 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:59:10.506684680+00:00 stderr F I0813 19:59:10.506662 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.495076 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.542454 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.563449 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.565377 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.565897 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F E0813 19:59:11.578688 1 node_controller.go:505] getting scheduler config failed: cluster scheduler couldn't be found 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.579413 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 
2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.607228 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.625149 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.626272 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.626618 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.205253159+00:00 stderr F I0813 19:59:12.202935 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.206409072+00:00 stderr F I0813 19:59:12.205694 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.359926668+00:00 stderr F W0813 19:59:12.353915 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:12.359926668+00:00 stderr F E0813 19:59:12.354497 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 
2025-08-13T19:59:12.360216916+00:00 stderr F I0813 19:59:12.360107 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", 
"ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:12.361595616+00:00 stderr F I0813 19:59:12.361563 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.421095472+00:00 stderr F I0813 19:59:12.305966 1 machine_set_boot_image_controller.go:151] "FeatureGates changed" enabled=["AdminNetworkPolicy","AlibabaPlatform","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CloudDualStackNodeIPs","ClusterAPIInstallAWS","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallVSphere","DisableKubeletCloudCredentialProviders","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","HardwareSpeed","KMSv1","MetricsServer","NetworkDiagnosticsConfig","NetworkLiveMigration","PrivateHostedZoneAWS","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereStaticIPs"] 
disabled=["AutomatedEtcdBackup","CSIDriverSharedResource","ChunkSizeMiB","ClusterAPIInstall","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallPowerVS","DNSNameResolver","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MixedCPUsAllocation","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereMultiVCenters","ValidatingAdmissionPolicy","VolumeGroupSnapshot"] 2025-08-13T19:59:12.767226169+00:00 stderr F I0813 19:59:12.765698 1 start.go:107] FeatureGates initialized: enabled=[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs] disabled=[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver 
DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:12.767226169+00:00 stderr F I0813 19:59:12.765975 1 drain_controller.go:168] Starting MachineConfigController-DrainController 2025-08-13T19:59:12.768964998+00:00 stderr F I0813 19:59:12.767517 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:12.776213485+00:00 stderr F I0813 19:59:12.776110 1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.779706024+00:00 stderr F I0813 19:59:12.779669 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.782413791+00:00 stderr F I0813 19:59:12.781746 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:12.903315078+00:00 stderr F I0813 19:59:12.897736 1 kubelet_config_controller.go:193] Starting 
MachineConfigController-KubeletConfigController
2025-08-13T19:59:12.903315078+00:00 stderr F I0813 19:59:12.900105       1 container_runtime_config_controller.go:242] Starting MachineConfigController-ContainerRuntimeConfigController
2025-08-13T19:59:12.965361687+00:00 stderr F I0813 19:59:12.965178       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:12.973180490+00:00 stderr F I0813 19:59:12.973073       1 machine_set_boot_image_controller.go:181] Starting MachineConfigController-MachineSetBootImageController
2025-08-13T19:59:13.861069870+00:00 stderr F W0813 19:59:13.858186       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:13.861069870+00:00 stderr F E0813 19:59:13.858311       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:14.076055558+00:00 stderr F I0813 19:59:14.074825       1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:14.076055558+00:00 stderr F I0813 19:59:14.075481       1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change
2025-08-13T19:59:14.105357153+00:00 stderr F I0813 19:59:14.083405       1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T19:59:14.138707644+00:00 stderr F I0813 19:59:14.137033       1 template_controller.go:227] Starting MachineConfigController-TemplateController
2025-08-13T19:59:14.160541337+00:00 stderr F I0813 19:59:14.160424       1 node_controller.go:247] Starting MachineConfigController-NodeController
2025-08-13T19:59:14.160688981+00:00 stderr F I0813 19:59:14.160594       1 render_controller.go:127] Starting MachineConfigController-RenderController
2025-08-13T19:59:16.807137172+00:00 stderr F W0813 19:59:16.784255       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:16.807137172+00:00 stderr F E0813 19:59:16.784949       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:18.197085633+00:00 stderr F I0813 19:59:18.195574       1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:20.562090150+00:00 stderr F I0813 19:59:20.538752       1 trace.go:236] Trace[1112042174]: "DeltaFIFO Pop Process" ID:openshift-ingress-canary/ingress-canary-2vhcn,Depth:47,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:20.437) (total time: 100ms):
2025-08-13T19:59:20.562090150+00:00 stderr F Trace[1112042174]: [100.102354ms] [100.102354ms] END
2025-08-13T19:59:21.122969158+00:00 stderr F W0813 19:59:21.122736       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:21.123692448+00:00 stderr F E0813 19:59:21.123566       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:26.419192658+00:00 stderr F I0813 19:59:26.399961       1 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] }
2025-08-13T19:59:26.424919171+00:00 stderr F I0813 19:59:26.422230       1 reconcile.go:151] SSH Keys reconcilable
2025-08-13T19:59:27.283901637+00:00 stderr F I0813 19:59:27.271959       1 render_controller.go:530] Generated machineconfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a from 9 configs: [{MachineConfig  00-master  machineconfiguration.openshift.io/v1  } {MachineConfig  01-master-container-runtime  machineconfiguration.openshift.io/v1  } {MachineConfig  01-master-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  97-master-generated-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  98-master-generated-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-generated-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-ssh  machineconfiguration.openshift.io/v1  } {MachineConfig  99-node-sizing-for-crc  machineconfiguration.openshift.io/v1  } {MachineConfig  99-openshift-machineconfig-master-dummy-networks  machineconfiguration.openshift.io/v1  }]
2025-08-13T19:59:27.601928073+00:00 stderr F I0813 19:59:27.599531       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23974", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-ef556ead28ddfad01c34ac56c7adfb5a successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:59:27.752237497+00:00 stderr F E0813 19:59:27.750714       1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:28.048144682+00:00 stderr F E0813 19:59:28.047461       1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:28.055639386+00:00 stderr F I0813 19:59:28.050728       1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:28.466367593+00:00 stderr F I0813 19:59:28.465479       1 kubelet_config_nodes.go:156] Applied Node configuration 97-master-generated-kubelet on MachineConfigPool master
2025-08-13T19:59:28.846212220+00:00 stderr F I0813 19:59:28.843613       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master
2025-08-13T19:59:31.536257221+00:00 stderr F W0813 19:59:31.497234       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:31.559062731+00:00 stderr F E0813 19:59:31.558862       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:33.466238635+00:00 stderr F I0813 19:59:33.448482       1 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] }
2025-08-13T19:59:33.466238635+00:00 stderr F I0813 19:59:33.449191       1 reconcile.go:151] SSH Keys reconcilable
2025-08-13T19:59:33.714022388+00:00 stderr F I0813 19:59:33.549553       1 render_controller.go:556] Pool master: now targeting: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T19:59:33.720678308+00:00 stderr F I0813 19:59:33.720638       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:33.727381299+00:00 stderr F I0813 19:59:33.727341       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:33.889329125+00:00 stderr F I0813 19:59:33.888263       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:33.910104118+00:00 stderr F I0813 19:59:33.908677       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:33.952057924+00:00 stderr F I0813 19:59:33.950705       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:34.036304985+00:00 stderr F I0813 19:59:34.032993       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:34.193674731+00:00 stderr F I0813 19:59:34.193315       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:34.553558660+00:00 stderr F I0813 19:59:34.542021       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:35.184505775+00:00 stderr F I0813 19:59:35.184412       1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:36.486676493+00:00 stderr F I0813 19:59:36.474371       1 kubelet_config_nodes.go:156] Applied Node configuration 97-worker-generated-kubelet on MachineConfigPool worker
2025-08-13T19:59:36.732022357+00:00 stderr F I0813 19:59:36.731222       1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:37.038352279+00:00 stderr F I0813 19:59:37.038159       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker
2025-08-13T19:59:39.230871006+00:00 stderr F I0813 19:59:39.230169       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master
2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.292203       1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.296699       1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1
2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.296717       1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1
2025-08-13T19:59:39.522953082+00:00 stderr F I0813 19:59:39.521362       1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T19:59:39.774453491+00:00 stderr F I0813 19:59:39.774353       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28168", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc0007ca988)
2025-08-13T19:59:39.829324796+00:00 stderr F I0813 19:59:39.827563       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:39.929983155+00:00 stderr F I0813 19:59:39.929679       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.030141060+00:00 stderr F I0813 19:59:40.025318       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.059067385+00:00 stderr F I0813 19:59:40.057122       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.108063251+00:00 stderr F I0813 19:59:40.105291       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.289229585+00:00 stderr F I0813 19:59:40.288704       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.317947454+00:00 stderr F I0813 19:59:40.317581       1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T19:59:40.320859347+00:00 stderr F I0813 19:59:40.318019       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28255", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T19:59:40.469715670+00:00 stderr F I0813 19:59:40.467188       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.837558036+00:00 stderr F I0813 19:59:40.835718       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:43.090480505+00:00 stderr F I0813 19:59:43.087270       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker
2025-08-13T19:59:43.295528870+00:00 stderr F I0813 19:59:43.294255       1 render_controller.go:530] Generated machineconfig rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff from 7 configs: [{MachineConfig  00-worker  machineconfiguration.openshift.io/v1  } {MachineConfig  01-worker-container-runtime  machineconfiguration.openshift.io/v1  } {MachineConfig  01-worker-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  97-worker-generated-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  98-worker-generated-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  99-worker-generated-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-worker-ssh  machineconfiguration.openshift.io/v1  }]
2025-08-13T19:59:43.296176938+00:00 stderr F I0813 19:59:43.296135       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23055", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:59:43.792422874+00:00 stderr F I0813 19:59:43.733757       1 render_controller.go:556] Pool worker: now targeting: rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff
2025-08-13T19:59:46.371377217+00:00 stderr F I0813 19:59:46.370946       1 node_controller.go:1210] No nodes available for updates
2025-08-13T19:59:46.804124423+00:00 stderr F I0813 19:59:46.801706       1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T19:59:47.150328001+00:00 stderr F I0813 19:59:47.150101       1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working
2025-08-13T19:59:47.151249728+00:00 stderr F I0813 19:59:47.150912       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28384", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working
2025-08-13T19:59:47.225183345+00:00 stderr F I0813 19:59:47.224970       1 render_controller.go:530] Generated machineconfig rendered-master-11405dc064e9fc83a779a06d1cd665b3 from 9 configs: [{MachineConfig  00-master  machineconfiguration.openshift.io/v1  } {MachineConfig  01-master-container-runtime  machineconfiguration.openshift.io/v1  } {MachineConfig  01-master-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  97-master-generated-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  98-master-generated-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-generated-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-ssh  machineconfiguration.openshift.io/v1  } {MachineConfig  99-node-sizing-for-crc  machineconfiguration.openshift.io/v1  } {MachineConfig  99-openshift-machineconfig-master-dummy-networks  machineconfiguration.openshift.io/v1  }]
2025-08-13T19:59:47.252541415+00:00 stderr F I0813 19:59:47.252418       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28255", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-11405dc064e9fc83a779a06d1cd665b3 successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:59:47.534896974+00:00 stderr F E0813 19:59:47.531644       1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:47.930438080+00:00 stderr F E0813 19:59:47.923294       1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:47.930438080+00:00 stderr F I0813 19:59:47.923353       1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:49.297673365+00:00 stderr F I0813 19:59:49.248434       1 status.go:249] Pool worker: All nodes are updated with MachineConfig rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff
2025-08-13T19:59:50.209966579+00:00 stderr F E0813 19:59:50.197069       1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:50.613584565+00:00 stderr F E0813 19:59:50.613296       1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:50.613675228+00:00 stderr F I0813 19:59:50.613659       1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:51.892659457+00:00 stderr F I0813 19:59:51.890142       1 node_controller.go:1210] No nodes available for updates
2025-08-13T19:59:52.768347259+00:00 stderr F W0813 19:59:52.767605       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:52.768347259+00:00 stderr F E0813 19:59:52.768293       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:53.254425664+00:00 stderr F I0813 19:59:53.253448       1 render_controller.go:556] Pool master: now targeting: rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T19:59:58.604698936+00:00 stderr F I0813 19:59:58.547385       1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T19:59:58.641913876+00:00 stderr F I0813 19:59:58.565737       1 node_controller.go:1210] No nodes available for updates
2025-08-13T19:59:59.211441021+00:00 stderr F E0813 19:59:59.210702       1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:59.249989550+00:00 stderr F E0813 19:59:59.249943       1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:59.250047982+00:00 stderr F I0813 19:59:59.250034       1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:00:03.749065482+00:00 stderr F I0813 20:00:03.745946       1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:00:15.506909632+00:00 stderr F I0813 20:00:15.505103       1 drain_controller.go:182] node crc: uncordoning
2025-08-13T20:00:15.506909632+00:00 stderr F I0813 20:00:15.505950       1 drain_controller.go:182] node crc: initiating uncordon (currently schedulable: true)
2025-08-13T20:00:16.676519712+00:00 stderr F I0813 20:00:16.673529       1 drain_controller.go:182] node crc: uncordon succeeded (currently schedulable: true)
2025-08-13T20:00:16.676519712+00:00 stderr F I0813 20:00:16.674017       1 drain_controller.go:182] node crc: operation successful; applying completion annotation
2025-08-13T20:00:22.034286821+00:00 stderr F I0813 20:00:22.033417       1 node_controller.go:576] Pool master: node crc: Completed update to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:00:23.939924262+00:00 stderr F W0813 20:00:23.936455       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:00:23.939924262+00:00 stderr F E0813 20:00:23.937099       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:00:27.063247291+00:00 stderr F I0813 20:00:27.050625       1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1
2025-08-13T20:00:27.063247291+00:00 stderr F I0813 20:00:27.061253       1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1
2025-08-13T20:00:27.236879842+00:00 stderr F I0813 20:00:27.236013       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc000ee6f08)
2025-08-13T20:00:27.281201456+00:00 stderr F I0813 20:00:27.280951       1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:00:27.281765232+00:00 stderr F I0813 20:00:27.281728       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:00:29.407765243+00:00 stderr F I0813 20:00:29.407432       1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working
2025-08-13T20:00:29.423561903+00:00 stderr F I0813 20:00:29.423510       1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working
2025-08-13T20:00:33.181499996+00:00 stderr F I0813 20:00:33.165414       1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:00:33.516675434+00:00 stderr F I0813 20:00:33.511690       1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T20:00:38.620300518+00:00 stderr F I0813 20:00:38.619649       1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:00:54.455970291+00:00 stderr F W0813 20:00:54.454636       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:00:54.462198839+00:00 stderr F E0813 20:00:54.455773       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:01:11.763012024+00:00 stderr F I0813 20:01:11.761647       1 drain_controller.go:182] node crc: uncordoning
2025-08-13T20:01:11.763012024+00:00 stderr F I0813 20:01:11.762824       1 drain_controller.go:182] node crc: initiating uncordon (currently schedulable: true)
2025-08-13T20:01:14.404614056+00:00 stderr F I0813 20:01:14.404272       1 drain_controller.go:182] node crc: uncordon succeeded (currently schedulable: true)
2025-08-13T20:01:14.404702519+00:00 stderr F I0813 20:01:14.404687       1 drain_controller.go:182] node crc: operation successful; applying completion annotation
2025-08-13T20:01:38.032189301+00:00 stderr F W0813 20:01:38.031293       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:01:38.032189301+00:00 stderr F E0813 20:01:38.032036       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:01:51.239914643+00:00 stderr F I0813 20:01:51.236916       1 node_controller.go:576] Pool master: node crc: Completed update to rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:01:56.264214234+00:00 stderr F I0813 20:01:56.261042       1 status.go:249] Pool master: All nodes are updated with MachineConfig rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:02:19.147200121+00:00 stderr F W0813 20:02:19.146314       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:02:19.147482209+00:00 stderr F E0813 20:02:19.147454       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:03:13.621956248+00:00 stderr F W0813 20:03:13.621082       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:13.621956248+00:00 stderr F E0813 20:03:13.621837       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:21.110874138+00:00 stderr F E0813 20:03:21.110202       1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:02.755041202+00:00 stderr F W0813 20:04:02.748980       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:02.755041202+00:00 stderr F E0813 20:04:02.749582       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:21.134592986+00:00 stderr F E0813 20:04:21.133978       1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:45.331437244+00:00 stderr F W0813 20:04:45.323019       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:45.331437244+00:00 stderr F E0813 20:04:45.330911       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:21.114757787+00:00 stderr F E0813 20:05:21.113708       1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:39.906343176+00:00 stderr F W0813 20:05:39.905388       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:05:39.906343176+00:00 stderr F E0813 20:05:39.906129       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:06:06.591223742+00:00 stderr F I0813 20:06:06.589573       1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:14.707209002+00:00 stderr F I0813 20:06:14.706548       1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:14.866933636+00:00 stderr F I0813 20:06:14.866097       1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:15.643188785+00:00 stderr F W0813 20:06:15.642535       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:06:15.643350520+00:00 stderr F E0813 20:06:15.643214       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:06:18.021051777+00:00 stderr F I0813 20:06:18.019988       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:19.164475020+00:00 stderr F I0813 20:06:19.164291       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:19.611652556+00:00 stderr F I0813 20:06:19.611552       1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T20:06:23.554076800+00:00 stderr F I0813 20:06:23.550996       1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T20:06:25.825147654+00:00 stderr F I0813 20:06:25.824990       1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:27.136463934+00:00 stderr F I0813 20:06:27.136396       1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-08-13T20:06:29.797014992+00:00 stderr F I0813 20:06:29.796187       1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:30.531452663+00:00 stderr F I0813 20:06:30.531307       1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:30.535016035+00:00 stderr F I0813 20:06:30.534117       1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change
2025-08-13T20:06:35.440386944+00:00 stderr F I0813 20:06:35.439627       1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:36.198413388+00:00 stderr F I0813 20:06:36.197860       1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-08-13T20:06:36.616589747+00:00 stderr F I0813 20:06:36.615422       1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T20:06:39.422978139+00:00 stderr F I0813 20:06:39.422199       1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:42.778065952+00:00 stderr F I0813 20:06:42.776328       1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:44.194632006+00:00 stderr F I0813 20:06:44.194259       1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125
2025-08-13T20:06:44.650823986+00:00 stderr F I0813 20:06:44.650712       1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:44.911028646+00:00 stderr F I0813 20:06:44.910966       1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T20:06:50.084892825+00:00 stderr F I0813 20:06:50.084114       1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T20:06:54.079645208+00:00 stderr F I0813 20:06:54.078126       1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:07:10.877700932+00:00 stderr F W0813 20:07:10.876997       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:07:10.877700932+00:00 stderr F E0813 20:07:10.877546       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:07:42.503931311+00:00 stderr F W0813 20:07:42.500600       1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:07:42.503931311+00:00 stderr F E0813 20:07:42.501278       1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:08:33.357863726+00:00 stderr F W0813
20:08:33.356766 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.358155465+00:00 stderr F E0813 20:08:33.358117 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:29.553640806+00:00 stderr F W0813 20:09:29.552854 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:09:29.553640806+00:00 stderr F E0813 20:09:29.553366 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:09:33.937122765+00:00 stderr F I0813 20:09:33.935964 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:34.571283297+00:00 stderr F I0813 20:09:34.568451 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:35.089614588+00:00 stderr F I0813 20:09:35.089312 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:35.660374652+00:00 stderr F I0813 20:09:35.660267 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.614850148+00:00 stderr F I0813 20:09:36.612327 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:37.349693196+00:00 stderr F I0813 20:09:37.349540 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:37.349889271+00:00 stderr F I0813 20:09:37.349752 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-08-13T20:09:38.306224350+00:00 stderr F I0813 20:09:38.306099 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:40.359632383+00:00 stderr F E0813 20:09:40.356692 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.368343192+00:00 stderr F E0813 20:09:40.367830 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.372648726+00:00 stderr F E0813 20:09:40.371734 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 
2025-08-13T20:09:40.372648726+00:00 stderr F I0813 20:09:40.372136 1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.380648125+00:00 stderr F E0813 20:09:40.380522 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.380648125+00:00 stderr F I0813 20:09:40.380577 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:45.551874839+00:00 stderr F I0813 20:09:45.549011 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:46.279887832+00:00 stderr F I0813 20:09:46.277766 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:46.575994511+00:00 stderr F I0813 20:09:46.575528 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:47.411594738+00:00 stderr F I0813 20:09:47.410700 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:47.414546083+00:00 stderr F I0813 20:09:47.412227 1 reflector.go:351] Caches populated for *v1.Node from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:50.334154551+00:00 stderr F I0813 20:09:50.333387 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:52.122596887+00:00 stderr F I0813 20:09:52.122510 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:03.968061577+00:00 stderr F W0813 20:10:03.967024 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:10:03.968061577+00:00 stderr F E0813 20:10:03.967728 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:10:07.599688058+00:00 stderr F I0813 20:10:07.599277 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:10:08.776015395+00:00 stderr F I0813 20:10:08.775892 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:09.470276829+00:00 stderr F I0813 20:10:09.468847 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:10.215148165+00:00 stderr F I0813 20:10:10.214593 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from 
github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:10:19.692741613+00:00 stderr F I0813 20:10:19.692077 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:23.228125756+00:00 stderr F I0813 20:10:23.227339 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.929125358+00:00 stderr F I0813 20:10:27.928636 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:11:02.478100283+00:00 stderr F W0813 20:11:02.477126 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:02.478100283+00:00 stderr F E0813 20:11:02.477947 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:33.421838594+00:00 stderr F W0813 20:11:33.420285 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:33.421838594+00:00 stderr F E0813 20:11:33.421743 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to 
list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:27.036588905+00:00 stderr F W0813 20:12:27.036047 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:27.036588905+00:00 stderr F E0813 20:12:27.036497 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:58.297409024+00:00 stderr F W0813 20:12:58.296634 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:58.297409024+00:00 stderr F E0813 20:12:58.297348 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:13:28.588911106+00:00 stderr F W0813 20:13:28.588395 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:13:28.589057570+00:00 stderr F E0813 20:13:28.589040 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:14:20.404079860+00:00 stderr F W0813 20:14:20.402852 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:14:20.404079860+00:00 stderr F E0813 20:14:20.403448 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:15:15.591592395+00:00 stderr F W0813 20:15:15.590886 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:15:15.591592395+00:00 stderr F E0813 20:15:15.591532 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:16:10.081905945+00:00 stderr F W0813 20:16:10.081372 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:16:10.081905945+00:00 stderr F E0813 20:16:10.081875 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:04.217098347+00:00 stderr F W0813 20:17:04.216341 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:04.217098347+00:00 stderr F E0813 20:17:04.217026 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:56.486311514+00:00 stderr F W0813 20:17:56.485625 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:56.486311514+00:00 stderr F E0813 20:17:56.486247 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:18:54.917419329+00:00 stderr F W0813 20:18:54.916815 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: 
failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:18:54.917419329+00:00 stderr F E0813 20:18:54.917299 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:19:48.919583120+00:00 stderr F W0813 20:19:48.918738 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:19:48.919583120+00:00 stderr F E0813 20:19:48.919471 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:20:36.034141950+00:00 stderr F W0813 20:20:36.033271 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:20:36.034141950+00:00 stderr F E0813 20:20:36.034025 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:14.926291207+00:00 stderr F W0813 20:21:14.925592 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:14.926291207+00:00 stderr F E0813 20:21:14.926203 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:47.846378734+00:00 stderr F W0813 20:21:47.845509 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:47.846378734+00:00 stderr F E0813 20:21:47.846321 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:22:35.519597564+00:00 stderr F W0813 20:22:35.518565 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:22:35.519597564+00:00 stderr F E0813 20:22:35.519565 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:23:17.253591317+00:00 stderr F W0813 20:23:17.252247 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:23:17.253839864+00:00 stderr F E0813 20:23:17.253555 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:12.130840552+00:00 stderr F W0813 20:24:12.130174 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:12.130947525+00:00 stderr F E0813 20:24:12.130717 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:46.986507470+00:00 stderr F W0813 20:24:46.985612 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:46.986507470+00:00 stderr F E0813 20:24:46.986375 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: 
failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:25:38.363311548+00:00 stderr F W0813 20:25:38.362318 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:25:38.363311548+00:00 stderr F E0813 20:25:38.363059 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:26:32.842179333+00:00 stderr F W0813 20:26:32.841247 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:26:32.842179333+00:00 stderr F E0813 20:26:32.842122 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:27:26.684468834+00:00 stderr F W0813 20:27:26.683694 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:27:26.684468834+00:00 stderr F E0813 20:27:26.684369 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:28:04.854152355+00:00 stderr F W0813 20:28:04.853294 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:28:04.854152355+00:00 stderr F E0813 20:28:04.854021 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:00.352371193+00:00 stderr F W0813 20:29:00.351581 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:00.352371193+00:00 stderr F E0813 20:29:00.352355 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:39.887393446+00:00 stderr F W0813 20:29:39.886355 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:39.887393446+00:00 stderr F E0813 20:29:39.887065 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:30:30.924838473+00:00 stderr F W0813 20:30:30.924209 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:30:30.924838473+00:00 stderr F E0813 20:30:30.924735 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:07.703564453+00:00 stderr F W0813 20:31:07.702500 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:07.703702457+00:00 stderr F E0813 20:31:07.703684 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:41.915769202+00:00 stderr F W0813 20:31:41.915124 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: 
failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:31:41.915769202+00:00 stderr F E0813 20:31:41.915654 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:32:17.032996847+00:00 stderr F W0813 20:32:17.032355 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:32:17.032996847+00:00 stderr F E0813 20:32:17.032943 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:33:01.622681426+00:00 stderr F W0813 20:33:01.622034 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:33:01.622930893+00:00 stderr F E0813 20:33:01.622891 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:33:47.072377624+00:00 stderr F W0813 20:33:47.071605 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:33:47.072377624+00:00 stderr F E0813 20:33:47.072320 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:34:23.300561987+00:00 stderr F W0813 20:34:23.299956 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:34:23.300664670+00:00 stderr F E0813 20:34:23.300551 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:34:59.364951638+00:00 stderr F W0813 20:34:59.364285 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:34:59.364951638+00:00 stderr F E0813 20:34:59.364742 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:35:31.472926985+00:00 stderr F W0813 20:35:31.472191 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:35:31.472926985+00:00 stderr F E0813 20:35:31.472873 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:36:30.556270120+00:00 stderr F W0813 20:36:30.555625 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:36:30.556270120+00:00 stderr F E0813 20:36:30.556227 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:37:08.151012647+00:00 stderr F W0813 20:37:08.150338 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:37:08.151169502+00:00 stderr F E0813 20:37:08.151108 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:37:48.345633999+00:00 stderr F W0813 20:37:48.344753 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:37:48.345633999+00:00 stderr F E0813 20:37:48.345434 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:38:23.450721151+00:00 stderr F W0813 20:38:23.449862 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:38:23.450721151+00:00 stderr F E0813 20:38:23.450450 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:39:00.580320141+00:00 stderr F W0813 20:39:00.579684 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:39:00.580320141+00:00 stderr F E0813 20:39:00.580237 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:39:40.650243669+00:00 stderr F W0813 20:39:40.649332 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:39:40.650328611+00:00 stderr F E0813 20:39:40.650253 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:40:40.538996386+00:00 stderr F W0813 20:40:40.538258 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:40:40.538996386+00:00 stderr F E0813 20:40:40.538909 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:41:14.181708488+00:00 stderr F W0813 20:41:14.180081 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:41:14.181708488+00:00 stderr F E0813 20:41:14.181138 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:42:10.626970184+00:00 stderr F I0813 20:42:10.622903 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change
2025-08-13T20:42:12.654457226+00:00 stderr F W0813 20:42:12.654097 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:42:12.654563459+00:00 stderr F E0813 20:42:12.654543 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:42:16.008754910+00:00 stderr F I0813 20:42:16.007899 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.014437714+00:00 stderr F I0813 20:42:16.014402 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.025124922+00:00 stderr F I0813 20:42:16.025012 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.045668374+00:00 stderr F I0813 20:42:16.045548 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.060137731+00:00 stderr F I0813 20:42:16.060088 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.066087593+00:00 stderr F I0813 20:42:16.065977 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.076672748+00:00 stderr F I0813 20:42:16.076547 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.086157632+00:00 stderr F I0813 20:42:16.086122 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.098042974+00:00 stderr F I0813 20:42:16.097929 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.138759688+00:00 stderr F I0813 20:42:16.138645 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.168434424+00:00 stderr F I0813 20:42:16.167938 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.219057253+00:00 stderr F I0813 20:42:16.218971 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.328636572+00:00 stderr F I0813 20:42:16.328586 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.379500419+00:00 stderr F I0813 20:42:16.379362 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T20:42:16.822158331+00:00 stderr F I0813 20:42:16.821429 1 render_controller.go:556] Pool master: now targeting: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:42:16.827337360+00:00 stderr F I0813 20:42:16.827259 1 render_controller.go:556] Pool worker: now targeting: rendered-worker-83accf81260e29bcce65a184dd980479
2025-08-13T20:42:21.836593786+00:00 stderr F I0813 20:42:21.835700 1 status.go:249] Pool worker: All nodes are updated with MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479
2025-08-13T20:42:21.863504552+00:00 stderr F I0813 20:42:21.863159 1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1
2025-08-13T20:42:21.863504552+00:00 stderr F I0813 20:42:21.863202 1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1
2025-08-13T20:42:21.894567958+00:00 stderr F I0813 20:42:21.894471 1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T20:42:21.955022110+00:00 stderr F I0813 20:42:21.950945 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37435", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc000f1f1c8)
2025-08-13T20:42:21.990988767+00:00 stderr F I0813 20:42:21.990888 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:42:21.993939322+00:00 stderr F I0813 20:42:21.991069 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37453", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:42:22.105384245+00:00 stderr F E0813 20:42:22.105190 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:42:22.106994052+00:00 stderr F E0813 20:42:22.106943 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:42:22.112701226+00:00 stderr F E0813 20:42:22.112589 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:42:22.112701226+00:00 stderr F I0813 20:42:22.112625 1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:42:22.112978044+00:00 stderr F E0813 20:42:22.112887 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:42:22.112978044+00:00 stderr F I0813 20:42:22.112922 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:42:24.125361272+00:00 stderr F I0813 20:42:24.124848 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working
2025-08-13T20:42:24.127582136+00:00 stderr F I0813 20:42:24.125590 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37453", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working
2025-08-13T20:42:26.958697318+00:00 stderr F I0813 20:42:26.957735 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:27.009661437+00:00 stderr F I0813 20:42:27.009377 1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T20:42:31.999703171+00:00 stderr F I0813 20:42:31.999076 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:31.999847395+00:00 stderr F E0813 20:42:31.999080 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.014363134+00:00 stderr F I0813 20:42:32.014258 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.020685466+00:00 stderr F E0813 20:42:32.020594 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.032041254+00:00 stderr F I0813 20:42:32.030072 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.040847727+00:00 stderr F E0813 20:42:32.040665 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.050491705+00:00 stderr F I0813 20:42:32.050420 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.071996205+00:00 stderr F E0813 20:42:32.071826 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.080745538+00:00 stderr F I0813 20:42:32.080664 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.121400500+00:00 stderr F E0813 20:42:32.121311 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.131085009+00:00 stderr F I0813 20:42:32.130997 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.211753795+00:00 stderr F E0813 20:42:32.211476 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.232496443+00:00 stderr F I0813 20:42:32.232403 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.394884324+00:00 stderr F E0813 20:42:32.393039 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.404468440+00:00 stderr F I0813 20:42:32.404355 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.725402322+00:00 stderr F E0813 20:42:32.725263 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.734254078+00:00 stderr F I0813 20:42:32.734154 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:33.201087597+00:00 stderr F E0813 20:42:33.200596 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.216832791+00:00 stderr F I0813 20:42:33.216170 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.222384611+00:00 stderr F E0813 20:42:33.222356 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.233886922+00:00 stderr F I0813 20:42:33.233775 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.244505438+00:00 stderr F E0813 20:42:33.244416 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.256767982+00:00 stderr F I0813 20:42:33.256685 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.279054565+00:00 stderr F E0813 20:42:33.277531 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.293007427+00:00 stderr F I0813 20:42:33.292724 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.335658456+00:00 stderr F E0813 20:42:33.335315 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.351373300+00:00 stderr F I0813 20:42:33.351312 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.376716570+00:00 stderr F E0813 20:42:33.376532 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:33.402590646+00:00 stderr F I0813 20:42:33.401612 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:33.432120678+00:00 stderr F E0813 20:42:33.432061 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.444271948+00:00 stderr F I0813 20:42:33.444176 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.606433643+00:00 stderr F E0813 20:42:33.605536 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.615295959+00:00 stderr F I0813 20:42:33.615145 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.936213291+00:00 stderr F E0813 20:42:33.936081 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.947201958+00:00 stderr F I0813 20:42:33.946321 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:34.588471676+00:00 stderr F E0813 20:42:34.588375 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:34.605863277+00:00 stderr F I0813 20:42:34.597025 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:34.682711883+00:00 stderr F E0813 20:42:34.682433 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:34.705688565+00:00 stderr F I0813 20:42:34.704968 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:35.886755436+00:00 stderr F E0813 20:42:35.886347 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:35.896417474+00:00 stderr F I0813 20:42:35.896291 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:36.323923739+00:00 stderr F I0813 20:42:36.323531 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.324596938+00:00 stderr F I0813 20:42:36.324510 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.324995299+00:00 stderr F I0813 20:42:36.324940 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.323763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.345954 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346192 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346305 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346364 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346501 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346557 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.323714 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324345 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324439 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324456 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346847 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346907 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346965 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.347027 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.347181 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382925350+00:00 stderr F I0813 20:42:36.382895 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:37.016337511+00:00 stderr F I0813 20:42:37.015940 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.017895636+00:00 stderr F I0813 20:42:37.017753 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.023579580+00:00 stderr F I0813 20:42:37.023551 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.024699342+00:00 stderr F I0813 20:42:37.024673 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.035967567+00:00 stderr F I0813 20:42:37.035916 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.037004967+00:00 stderr F I0813 20:42:37.036970 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.058903918+00:00 stderr F I0813 20:42:37.058743 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.059901537+00:00 stderr F I0813 20:42:37.059873 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.101688622+00:00 stderr F I0813 20:42:37.101598 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.104434691+00:00 stderr F I0813 20:42:37.104407 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.217067408+00:00 stderr F I0813 20:42:37.217009 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.267748759+00:00 stderr F E0813 20:42:37.267687 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:37.268837191+00:00 stderr F E0813 20:42:37.268747 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.268893922+00:00 stderr F I0813 20:42:37.268879 1 render_controller.go:380] Error syncing 
machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:37.421141582+00:00 stderr F I0813 20:42:37.421048 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.620343355+00:00 stderr F I0813 20:42:37.618982 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.834513760+00:00 stderr F I0813 20:42:37.827334 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.149293695+00:00 stderr F I0813 20:42:38.149172 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:38.222857066+00:00 stderr F I0813 20:42:38.217434 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.459902060+00:00 stderr F E0813 20:42:38.457113 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:38.459902060+00:00 stderr F E0813 
20:42:38.457837 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.459902060+00:00 stderr F I0813 20:42:38.457859 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:38.620535611+00:00 stderr F I0813 20:42:38.620441 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.018395911+00:00 stderr F I0813 20:42:39.018295 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.217161502+00:00 stderr F I0813 20:42:39.216939 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:39.617432801+00:00 stderr F I0813 20:42:39.617015 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.816411778+00:00 stderr F I0813 20:42:39.816312 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.216590945+00:00 stderr F I0813 20:42:40.216482 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.616889276+00:00 stderr F I0813 20:42:40.616739 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.018724301+00:00 stderr F I0813 20:42:41.018146 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.217121801+00:00 stderr F I0813 20:42:41.216984 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:41.616830045+00:00 stderr F I0813 20:42:41.616691 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.818520409+00:00 stderr F I0813 20:42:41.818366 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.217537563+00:00 
stderr F I0813 20:42:42.217150 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.389906863+00:00 stderr F E0813 20:42:42.389755 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:42.390755547+00:00 stderr F E0813 20:42:42.390682 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.390837650+00:00 stderr F I0813 20:42:42.390765 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:42.861080747+00:00 stderr F I0813 20:42:42.860980 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.579242811+00:00 stderr F E0813 20:42:43.579116 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool 
worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:43.579945061+00:00 stderr F E0813 20:42:43.579869 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.579945061+00:00 stderr F I0813 20:42:43.579916 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:44.146971519+00:00 stderr F I0813 20:42:44.146875 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.178582740+00:00 stderr F I0813 20:42:44.178448 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:44.180025342+00:00 stderr F I0813 20:42:44.179930 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.527609583+00:00 stderr F I0813 20:42:44.527249 1 helpers.go:93] Shutting down due to: terminated 2025-08-13T20:42:44.527643104+00:00 stderr F I0813 20:42:44.527618 1 helpers.go:96] Context cancelled 2025-08-13T20:42:44.527938452+00:00 stderr F I0813 20:42:44.527875 1 machine_set_boot_image_controller.go:189] Shutting down 
MachineConfigController-MachineSetBootImageController 2025-08-13T20:42:44.529565509+00:00 stderr F I0813 20:42:44.529496 1 render_controller.go:135] Shutting down MachineConfigController-RenderController 2025-08-13T20:42:44.529694943+00:00 stderr F I0813 20:42:44.529633 1 node_controller.go:255] Shutting down MachineConfigController-NodeController 2025-08-13T20:42:44.529712833+00:00 stderr F I0813 20:42:44.529701 1 template_controller.go:235] Shutting down MachineConfigController-TemplateController 2025-08-13T20:42:44.529954020+00:00 stderr F E0813 20:42:44.529858 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.529954020+00:00 stderr F I0813 20:42:44.529904 1 start.go:146] Stopped leading. Terminating.

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.log

2025-08-13T20:11:02.865431268+00:00 stderr F I0813 20:11:02.864881 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:11:02.865957823+00:00 stderr F I0813 20:11:02.865900 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:11:02.869571566+00:00 stderr F I0813 20:11:02.869444 1 observer_polling.go:159] Starting file observer 2025-08-13T20:11:02.871168652+00:00 stderr F I0813 20:11:02.871084 1 builder.go:299] route-controller-manager version 4.16.0-202406131906.p0.g3112b45.assembly.stream.el9-3112b45-3112b458983c6fca6f77d5a945fb0026186dace6 2025-08-13T20:11:02.873305093+00:00 stderr F I0813 20:11:02.872602 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:11:03.231962636+00:00 stderr F I0813 20:11:03.230544 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:11:03.241153190+00:00 stderr F I0813 20:11:03.241104 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:11:03.241245003+00:00 stderr F I0813 20:11:03.241227 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:11:03.241307574+00:00 stderr F I0813 20:11:03.241291 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:11:03.241343445+00:00 stderr F
I0813 20:11:03.241331 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:11:03.248281334+00:00 stderr F I0813 20:11:03.248242 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:11:03.248348536+00:00 stderr F W0813 20:11:03.248335 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:11:03.248401628+00:00 stderr F W0813 20:11:03.248388 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:11:03.248685216+00:00 stderr F I0813 20:11:03.248665 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:11:03.252619539+00:00 stderr F I0813 20:11:03.252579 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.252521796 +0000 UTC))" 2025-08-13T20:11:03.252898317+00:00 stderr F I0813 20:11:03.252834 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:11:03.253358230+00:00 stderr F I0813 20:11:03.253304 1 leaderelection.go:250] attempting to acquire leader lease openshift-route-controller-manager/openshift-route-controllers... 
2025-08-13T20:11:03.253598927+00:00 stderr F I0813 20:11:03.253574 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.253525135 +0000 UTC))" 2025-08-13T20:11:03.253748771+00:00 stderr F I0813 20:11:03.253659 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:11:03.253748771+00:00 stderr F I0813 20:11:03.253740 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:11:03.253970027+00:00 stderr F I0813 20:11:03.253765 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:11:03.253970027+00:00 stderr F I0813 20:11:03.253821 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:11:03.254063190+00:00 stderr F I0813 20:11:03.253670 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:11:03.254248195+00:00 stderr F I0813 20:11:03.254181 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:11:03.254374039+00:00 stderr F I0813 20:11:03.254319 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.253603 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.254609 1 shared_informer.go:311] 
Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.254372 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:11:03.256473759+00:00 stderr F I0813 20:11:03.256445 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.257269042+00:00 stderr F I0813 20:11:03.257244 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.263174311+00:00 stderr F I0813 20:11:03.263124 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.266734733+00:00 stderr F I0813 20:11:03.266693 1 leaderelection.go:260] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers 2025-08-13T20:11:03.268959017+00:00 stderr F I0813 20:11:03.268857 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", UID:"2ba9fc4c-f1d7-4b43-b8a4-0a6afbf10f5f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' route-controller-manager-776b8b7477-sfpvs_f8c2bc95-1e3b-4dd4-b71e-62fdd54204d3 became leader 2025-08-13T20:11:03.285414649+00:00 stderr F I0813 20:11:03.273726 1 controller_manager.go:36] Starting "openshift.io/ingress-to-route" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297503 1 ingress.go:262] ingress-to-route metrics registered with prometheus 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297553 1 controller_manager.go:46] Started "openshift.io/ingress-to-route" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297567 1 controller_manager.go:36] Starting "openshift.io/ingress-ip" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297574 1 
controller_manager.go:46] Started "openshift.io/ingress-ip" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297580 1 controller_manager.go:48] Started Route Controllers 2025-08-13T20:11:03.298100563+00:00 stderr F I0813 20:11:03.298077 1 ingress.go:313] Starting controller 2025-08-13T20:11:03.313641928+00:00 stderr F I0813 20:11:03.307439 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.313641928+00:00 stderr F I0813 20:11:03.308068 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.332055726+00:00 stderr F I0813 20:11:03.331962 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.342666120+00:00 stderr F I0813 20:11:03.342509 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.355204820+00:00 stderr F I0813 20:11:03.355148 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:11:03.355473718+00:00 stderr F I0813 20:11:03.355452 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:11:03.355534719+00:00 stderr F I0813 20:11:03.355520 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:11:03.356633191+00:00 stderr F I0813 20:11:03.356544 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:11:03.356434885 +0000 
UTC))" 2025-08-13T20:11:03.356708193+00:00 stderr F I0813 20:11:03.356692 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:11:03.356671482 +0000 UTC))" 2025-08-13T20:11:03.356771135+00:00 stderr F I0813 20:11:03.356754 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.356729844 +0000 UTC))" 2025-08-13T20:11:03.356883988+00:00 stderr F I0813 20:11:03.356868 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.356843687 +0000 UTC))" 2025-08-13T20:11:03.356991411+00:00 stderr F I0813 20:11:03.356971 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:11:03.356904939 +0000 UTC))" 2025-08-13T20:11:03.357045013+00:00 stderr F I0813 20:11:03.357032 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357015692 +0000 UTC))" 2025-08-13T20:11:03.357212707+00:00 stderr F I0813 20:11:03.357191 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357064243 +0000 UTC))" 2025-08-13T20:11:03.357282179+00:00 stderr F I0813 20:11:03.357268 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357250468 +0000 UTC))" 2025-08-13T20:11:03.357327551+00:00 stderr F I0813 20:11:03.357315 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC 
(now=2025-08-13 20:11:03.35730128 +0000 UTC))" 2025-08-13T20:11:03.357505346+00:00 stderr F I0813 20:11:03.357488 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:11:03.357469695 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.359232 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.359196934 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.359974 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.359899514 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360194 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 
20:11:03.360175212 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360217 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:11:03.360204343 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360276 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.360222524 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360296 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.360283565 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360314 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-08-13 20:11:03.360302636 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360332 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360319096 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360368 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360336977 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360390 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360377138 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360410 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 
19:59:54 +0000 UTC (now=2025-08-13 20:11:03.360399279 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360430 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:11:03.360416149 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360447 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.36043704 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360751 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.360726238 +0000 UTC))" 2025-08-13T20:11:03.363035544+00:00 stderr F I0813 20:11:03.362972 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 
UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.362947502 +0000 UTC))" 2025-08-13T20:11:03.771896587+00:00 stderr F I0813 20:11:03.771480 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:42:35.517719936+00:00 stderr F I0813 20:42:35.515539 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:35.521577918+00:00 stderr F I0813 20:42:35.521486 1 ingress.go:325] Shutting down controller 2025-08-13T20:42:35.522140354+00:00 stderr F I0813 20:42:35.520517 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:35.538315390+00:00 stderr F I0813 20:42:35.538116 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:35.538315390+00:00 stderr F I0813 20:42:35.538286 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:35.538345431+00:00 stderr F I0813 20:42:35.538320 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:35.538396633+00:00 stderr F I0813 20:42:35.538343 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:35.538396633+00:00 stderr F I0813 20:42:35.538362 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:35.539195936+00:00 stderr F I0813 20:42:35.539088 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:35.539195936+00:00 stderr F I0813 20:42:35.539178 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:35.539902056+00:00 stderr F I0813 20:42:35.539840 1 configmap_cafile_content.go:223] "Shutting down controller" 
name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:35.539902056+00:00 stderr F I0813 20:42:35.539891 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:35.540137253+00:00 stderr F I0813 20:42:35.540075 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:35.540467262+00:00 stderr F I0813 20:42:35.540410 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:35.541122221+00:00 stderr F I0813 20:42:35.541044 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:35.541445950+00:00 stderr F I0813 20:42:35.541373 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:42:35.541445950+00:00 stderr F I0813 20:42:35.541415 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:35.543525750+00:00 stderr F I0813 20:42:35.543421 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:35.544800087+00:00 stderr F I0813 20:42:35.544735 1 builder.go:330] server exited 2025-08-13T20:42:35.552278363+00:00 stderr F W0813 20:42:35.552143 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log
2026-01-20T10:49:37.303268542+00:00 stderr F I0120 10:49:37.302578 1
cmd.go:241] Using service-serving-cert provided certificates 2026-01-20T10:49:37.304272244+00:00 stderr F I0120 10:49:37.304245 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:37.311265256+00:00 stderr F I0120 10:49:37.311183 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:37.313215296+00:00 stderr F I0120 10:49:37.313193 1 builder.go:299] route-controller-manager version 4.16.0-202406131906.p0.g3112b45.assembly.stream.el9-3112b45-3112b458983c6fca6f77d5a945fb0026186dace6 2026-01-20T10:49:37.315402473+00:00 stderr F I0120 10:49:37.315379 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:37.754304281+00:00 stderr F I0120 10:49:37.754005 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:49:37.762088638+00:00 stderr F I0120 10:49:37.761849 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:37.762088638+00:00 stderr F I0120 10:49:37.761869 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:49:37.762088638+00:00 stderr F I0120 10:49:37.761918 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:49:37.762088638+00:00 stderr F I0120 10:49:37.761923 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2026-01-20T10:49:37.776688423+00:00 stderr F I0120 10:49:37.774209 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:37.776688423+00:00 stderr F W0120 10:49:37.774234 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2026-01-20T10:49:37.776688423+00:00 stderr F W0120 10:49:37.774240 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:37.776688423+00:00 stderr F I0120 10:49:37.774487 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:49:37.785083768+00:00 stderr F I0120 10:49:37.784767 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:37.786789151+00:00 stderr F I0120 10:49:37.786767 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2026-01-20 10:49:37.786722309 +0000 UTC))" 2026-01-20T10:49:37.787097190+00:00 stderr F I0120 10:49:37.787083 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:49:37.787039428 +0000 UTC))" 2026-01-20T10:49:37.787138671+00:00 stderr F I0120 10:49:37.787128 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:37.787182573+00:00 stderr F I0120 10:49:37.787171 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:49:37.787227454+00:00 stderr F I0120 10:49:37.787218 1 requestheader_controller.go:169] Starting 
RequestHeaderAuthRequestController 2026-01-20T10:49:37.787280305+00:00 stderr F I0120 10:49:37.787271 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:37.787334567+00:00 stderr F I0120 10:49:37.787322 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:37.787458741+00:00 stderr F I0120 10:49:37.787446 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:37.787637566+00:00 stderr F I0120 10:49:37.787624 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:37.787664207+00:00 stderr F I0120 10:49:37.787655 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.787711208+00:00 stderr F I0120 10:49:37.787700 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:37.787734529+00:00 stderr F I0120 10:49:37.787725 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.787840762+00:00 stderr F I0120 10:49:37.787809 1 leaderelection.go:250] attempting to acquire leader lease openshift-route-controller-manager/openshift-route-controllers... 
2026-01-20T10:49:37.793781283+00:00 stderr F I0120 10:49:37.793747 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:49:37.794247208+00:00 stderr F I0120 10:49:37.794227 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:49:37.794423903+00:00 stderr F I0120 10:49:37.794401 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:49:37.800975813+00:00 stderr F I0120 10:49:37.800585 1 leaderelection.go:260] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers 2026-01-20T10:49:37.803875121+00:00 stderr F I0120 10:49:37.803141 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", UID:"2ba9fc4c-f1d7-4b43-b8a4-0a6afbf10f5f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"40225", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' route-controller-manager-776b8b7477-sfpvs_dad3fd0c-efa2-4638-8888-49e33744eee3 became leader 2026-01-20T10:49:37.803875121+00:00 stderr F I0120 10:49:37.803240 1 controller_manager.go:36] Starting "openshift.io/ingress-ip" 2026-01-20T10:49:37.803875121+00:00 stderr F I0120 10:49:37.803250 1 controller_manager.go:46] Started "openshift.io/ingress-ip" 2026-01-20T10:49:37.803875121+00:00 stderr F I0120 10:49:37.803256 1 controller_manager.go:36] Starting "openshift.io/ingress-to-route" 2026-01-20T10:49:37.807032947+00:00 stderr F I0120 10:49:37.807005 1 ingress.go:262] ingress-to-route metrics registered with prometheus 2026-01-20T10:49:37.807032947+00:00 stderr F I0120 10:49:37.807026 1 controller_manager.go:46] Started "openshift.io/ingress-to-route" 2026-01-20T10:49:37.807049508+00:00 stderr F I0120 10:49:37.807034 1 controller_manager.go:48] Started Route Controllers 
2026-01-20T10:49:37.810885974+00:00 stderr F I0120 10:49:37.807972 1 ingress.go:313] Starting controller 2026-01-20T10:49:37.812018659+00:00 stderr F W0120 10:49:37.811974 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:49:37.816103683+00:00 stderr F I0120 10:49:37.814316 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:49:37.816103683+00:00 stderr F I0120 10:49:37.815429 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:49:37.824102387+00:00 stderr F E0120 10:49:37.823751 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:49:37.825080397+00:00 stderr F I0120 10:49:37.824802 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:49:37.887434236+00:00 stderr F I0120 10:49:37.887348 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.887711 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888138 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:37.888108476 +0000 UTC))" 
2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888162 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:37.888149478 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888179 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:37.888166858 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888216 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:37.888183139 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888232 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.8882217 
+0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888246 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.88823585 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888260 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.888250121 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888274 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.888263841 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888297 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 
10:49:37.888278862 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888313 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:37.888303822 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888593 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2026-01-20 10:49:37.888578311 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.888833 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:49:37.888822088 +0000 UTC))" 2026-01-20T10:49:37.891094537+00:00 stderr F I0120 10:49:37.889873 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891533 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:37.89151551 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891556 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:37.891546031 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891571 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:37.891560951 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891586 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:37.891576152 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891607 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.891591152 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891621 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.891611703 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891642 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.891627683 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891657 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.891647054 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891671 1 tlsconfig.go:178] "Loaded 
client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:37.891661354 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891695 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:37.891677565 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891711 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.891700246 +0000 UTC))" 2026-01-20T10:49:37.895081509+00:00 stderr F I0120 10:49:37.891966 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2026-01-20 10:49:37.891952813 +0000 
UTC))" 2026-01-20T10:49:37.909094236+00:00 stderr F I0120 10:49:37.905157 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:49:37.90495861 +0000 UTC))" 2026-01-20T10:49:38.097120463+00:00 stderr F I0120 10:49:38.096831 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:49:39.325099776+00:00 stderr F W0120 10:49:39.321153 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:49:39.325099776+00:00 stderr F E0120 10:49:39.321615 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:49:41.321349701+00:00 stderr F W0120 10:49:41.320890 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:49:41.321349701+00:00 stderr F E0120 10:49:41.321337 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:49:44.909107671+00:00 stderr F I0120 10:49:44.906341 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.095948 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.095903857 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096534 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.096521624 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096552 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.096538744 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096567 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.096556695 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096583 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096571845 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096597 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096587726 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096611 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096601306 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096627 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096616766 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096642 1 tlsconfig.go:178] "Loaded 
client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.096631067 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096659 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.096649427 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096675 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.096663368 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096690 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096679148 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.096980 1 tlsconfig.go:200] 
"Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2026-01-20 10:56:07.096964356 +0000 UTC))" 2026-01-20T10:56:07.100464281+00:00 stderr F I0120 10:56:07.097249 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:56:07.097236814 +0000 UTC))" 2026-01-20T10:57:37.920599714+00:00 stderr F E0120 10:57:37.919796 1 leaderelection.go:332] error retrieving resource lock openshift-route-controller-manager/openshift-route-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-route-controller-manager/leases/openshift-route-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000755000175000017500000000000015133657716033103 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000755000175000017500000000000015133657742033102 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000644000175000017500000002643515133657716033117 0ustar zuulzuul2026-01-20T10:49:34.244120763+00:00 stderr F I0120 10:49:34.240197 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:49:34.332880167+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="FeatureGates initializedknownFeatures[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages 
MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]" 2026-01-20T10:49:34.332880167+00:00 stderr F I0120 10:49:34.328885 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-dns-operator", Name:"dns-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", 
"GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO controller-runtime.metrics Starting metrics server 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:60000", "secure": false} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DaemonSet"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Service"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z 
INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Node"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting Controller {"controller": "dns_controller"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DNS"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DaemonSet"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2026-01-20T10:49:34.332880167+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting Controller {"controller": "status_controller"} 2026-01-20T10:49:34.651871893+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting workers {"controller": "status_controller", "worker count": 1} 2026-01-20T10:49:34.752993133+00:00 stderr F 2026-01-20T10:49:34Z INFO Starting workers {"controller": "dns_controller", "worker count": 1} 2026-01-20T10:49:34.753288192+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="reconciling request: /default" 2026-01-20T10:49:34.958785881+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP 
address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 34, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 34, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 34, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2026-01-20T10:49:34.962105473+00:00 stderr F time="2026-01-20T10:49:34Z" level=info msg="reconciling request: /default" 2026-01-20T10:49:44.106635019+00:00 stderr F time="2026-01-20T10:49:44Z" level=info msg="reconciling request: /default" 2026-01-20T10:49:53.116676389+00:00 stderr F time="2026-01-20T10:49:53Z" level=info msg="reconciling request: /default" 2026-01-20T10:49:53.159282177+00:00 stderr F time="2026-01-20T10:49:53Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", 
Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 34, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 34, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 34, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 53, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 53, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2026, time.January, 20, 10, 49, 53, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 
2026-01-20T10:49:53.160533825+00:00 stderr F time="2026-01-20T10:49:53Z" level=info msg="reconciling request: /default" 2026-01-20T10:57:34.447578929+00:00 stderr F time="2026-01-20T10:57:34Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.log
2025-08-13T19:59:13.692541717+00:00 stderr F I0813 19:59:13.684769 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:15.984962045+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="FeatureGates initializedknownFeatures[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages
MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]" 2025-08-13T19:59:15.992337545+00:00 stderr F 2025-08-13T19:59:15Z INFO controller-runtime.metrics Starting metrics server 2025-08-13T19:59:16.542397145+00:00 stderr F 2025-08-13T19:59:16Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:60000", "secure": false} 2025-08-13T19:59:16.562715444+00:00 stderr F I0813 19:59:16.552380 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-dns-operator", Name:"dns-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", 
"ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DNS"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DaemonSet"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting Controller {"controller": "status_controller"} 2025-08-13T19:59:16.776197580+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-08-13T19:59:16.789697915+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting 
EventSource {"controller": "dns_controller", "source": "kind source: *v1.DaemonSet"} 2025-08-13T19:59:16.789933952+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Service"} 2025-08-13T19:59:16.789988983+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T19:59:16.790040805+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T19:59:16.790110607+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Node"} 2025-08-13T19:59:16.790152098+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting Controller {"controller": "dns_controller"} 2025-08-13T19:59:26.606910839+00:00 stderr F 2025-08-13T19:59:26Z INFO Starting workers {"controller": "dns_controller", "worker count": 1} 2025-08-13T19:59:26.606910839+00:00 stderr F 2025-08-13T19:59:26Z INFO Starting workers {"controller": "status_controller", "worker count": 1} 2025-08-13T19:59:26.711345946+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:31.665157205+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsDesired\", Message:\"No DNS pods are desired; this could mean all nodes are tainted or unschedulable.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available node-resolver pods, want 1.\"}, 
v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-08-13T19:59:31.941140121+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:32.963913386+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:41.020328726+00:00 stderr F time="2025-08-13T19:59:41Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:43.077599078+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", 
Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 
2025-08-13T19:59:43.159758230+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="reconciling request: /default"
2025-08-13T20:00:14.997050813+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="reconciling request: /default"
2025-08-13T20:00:19.742230217+00:00 stderr F time="2025-08-13T20:00:19Z" level=info msg="reconciling request: /default"
2025-08-13T20:00:47.670975966+00:00 stderr F time="2025-08-13T20:00:47Z" level=info msg="reconciling request: /default"
2025-08-13T20:02:29.340606718+00:00 stderr F time="2025-08-13T20:02:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused"
2025-08-13T20:03:29.346119497+00:00 stderr F time="2025-08-13T20:03:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused"
2025-08-13T20:04:29.348626550+00:00 stderr F time="2025-08-13T20:04:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused"
2025-08-13T20:06:11.240719884+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="reconciling request: /default"
2025-08-13T20:06:26.959054664+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="reconciling request: /default"
2025-08-13T20:06:27.922351049+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="reconciling request: /default"
2025-08-13T20:06:28.002841484+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="reconciling request: /default"
2025-08-13T20:06:54.997534705+00:00 stderr F time="2025-08-13T20:06:54Z" level=info msg="reconciling request: /default"
2025-08-13T20:06:55.533701098+00:00 stderr F time="2025-08-13T20:06:55Z" level=info msg="reconciling request: /default"
2025-08-13T20:08:29.702753933+00:00 stderr F time="2025-08-13T20:08:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused"
2025-08-13T20:09:29.337380577+00:00 stderr F time="2025-08-13T20:09:29Z" level=info msg="reconciling request: /default"
2025-08-13T20:09:38.974292084+00:00 stderr F time="2025-08-13T20:09:38Z" level=info msg="reconciling request: /default"
2025-08-13T20:09:41.903293700+00:00 stderr F time="2025-08-13T20:09:41Z" level=info msg="reconciling request: /default"
2025-08-13T20:09:41.984477328+00:00 stderr F time="2025-08-13T20:09:41Z" level=info msg="reconciling request: /default"
2025-08-13T20:09:44.310670152+00:00 stderr F time="2025-08-13T20:09:44Z" level=info msg="reconciling request: /default"
2025-08-13T20:09:52.517276043+00:00 stderr F time="2025-08-13T20:09:52Z" level=info msg="reconciling request: /default"
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.log <==
2026-01-20T10:49:34.862240861+00:00 stderr F W0120 10:49:34.861893 1 deprecated.go:66]
2026-01-20T10:49:34.862240861+00:00 stderr F ==== Removed Flag Warning ======================
2026-01-20T10:49:34.862240861+00:00 stderr F
2026-01-20T10:49:34.862240861+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2026-01-20T10:49:34.862240861+00:00 stderr F
2026-01-20T10:49:34.862240861+00:00 stderr F ===============================================
2026-01-20T10:49:34.862240861+00:00 stderr F
2026-01-20T10:49:34.864043246+00:00 stderr F I0120 10:49:34.862641 1 kube-rbac-proxy.go:233] Valid token audiences:
2026-01-20T10:49:34.864043246+00:00 stderr F I0120 10:49:34.862685 1 kube-rbac-proxy.go:347] Reading certificate files
2026-01-20T10:49:34.864043246+00:00 stderr F I0120 10:49:34.863154 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393
2026-01-20T10:49:34.864043246+00:00 stderr F I0120 10:49:34.863731 1 kube-rbac-proxy.go:402] Listening securely on :9393
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.log <==
2025-08-13T19:59:32.028488101+00:00 stderr F W0813 19:59:32.022135 1 deprecated.go:66]
2025-08-13T19:59:32.028488101+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:59:32.028488101+00:00 stderr F
2025-08-13T19:59:32.028488101+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:32.028488101+00:00 stderr F
2025-08-13T19:59:32.028488101+00:00 stderr F ===============================================
2025-08-13T19:59:32.028488101+00:00 stderr F
2025-08-13T19:59:32.085928918+00:00 stderr F I0813 19:59:32.083706 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:59:32.085928918+00:00 stderr F I0813 19:59:32.083991 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:59:32.147723120+00:00 stderr F I0813 19:59:32.147658 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393
2025-08-13T19:59:32.152233128+00:00 stderr F I0813 19:59:32.152208 1 kube-rbac-proxy.go:402] Listening securely on :9393
2025-08-13T20:42:42.635290547+00:00 stderr F I0813 20:42:42.634461 1 kube-rbac-proxy.go:493] received interrupt, shutting down
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.log <==
2026-01-20T10:49:34.817469107+00:00 stderr F I0120 10:49:34.815859 1 cmd.go:240] Using service-serving-cert provided certificates
2026-01-20T10:49:34.817469107+00:00 stderr F I0120 10:49:34.815965 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2026-01-20T10:49:34.817469107+00:00 stderr F I0120 10:49:34.817327 1 observer_polling.go:159] Starting file observer
2026-01-20T10:49:34.943473855+00:00 stderr F I0120 10:49:34.943377 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25
2026-01-20T10:49:35.725209156+00:00 stderr F I0120 10:49:35.724011 1 secure_serving.go:57] Forcing use of http/1.1 only
2026-01-20T10:49:35.725209156+00:00 stderr F W0120 10:49:35.724508 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2026-01-20T10:49:35.725209156+00:00 stderr F W0120 10:49:35.724515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.728734 1 secure_serving.go:213] Serving securely on [::]:8443
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.728956 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.728983 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.729133 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.729174 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.729183 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.729196 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.729201 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.729200 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.730325 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology
2026-01-20T10:49:35.731513578+00:00 stderr F I0120 10:49:35.730618 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock...
2026-01-20T10:49:35.829203524+00:00 stderr F I0120 10:49:35.829142 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2026-01-20T10:49:35.830348459+00:00 stderr F I0120 10:49:35.830295 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2026-01-20T10:49:35.830367050+00:00 stderr F I0120 10:49:35.830350 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2026-01-20T10:55:54.387198864+00:00 stderr F I0120 10:55:54.386550 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock
2026-01-20T10:55:54.387266966+00:00 stderr F I0120 10:55:54.387086 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42324", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_9d696994-047f-4577-b878-08619f8b204a became leader
2026-01-20T10:55:54.388533690+00:00 stderr F I0120 10:55:54.388490 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2026-01-20T10:55:54.391644034+00:00 stderr F I0120 10:55:54.391546 1 starter.go:88] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2026-01-20T10:55:54.391644034+00:00 stderr F I0120 10:55:54.391589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2026-01-20T10:55:54.414393856+00:00 stderr F I0120 10:55:54.414311 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController
2026-01-20T10:55:54.414738815+00:00 stderr F I0120 10:55:54.414692 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2026-01-20T10:55:54.414903980+00:00 stderr F I0120 10:55:54.414865 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2026-01-20T10:55:54.415071004+00:00 stderr F I0120 10:55:54.415023 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager
2026-01-20T10:55:54.415088454+00:00 stderr F I0120 10:55:54.415053 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2026-01-20T10:55:54.415103945+00:00 stderr F I0120 10:55:54.415096 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2026-01-20T10:55:54.415132266+00:00 stderr F I0120 10:55:54.415114 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController
2026-01-20T10:55:54.415140696+00:00 stderr F I0120 10:55:54.415132 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile
2026-01-20T10:55:54.415994679+00:00 stderr F I0120 10:55:54.415953 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2026-01-20T10:55:54.417440398+00:00 stderr F I0120 10:55:54.417130 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources
2026-01-20T10:55:54.417440398+00:00 stderr F I0120 10:55:54.417164 1 base_controller.go:67] Waiting for caches to sync for NodeController
2026-01-20T10:55:54.417497609+00:00 stderr F I0120 10:55:54.417469 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2026-01-20T10:55:54.419004030+00:00 stderr F I0120 10:55:54.418451 1 base_controller.go:67] Waiting for caches to sync for PruneController
2026-01-20T10:55:54.420291294+00:00 stderr F I0120 10:55:54.419284 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2026-01-20T10:55:54.420291294+00:00 stderr F I0120 10:55:54.419329 1 base_controller.go:67] Waiting for caches to sync for GuardController
2026-01-20T10:55:54.421444956+00:00 stderr F I0120 10:55:54.421404 1 base_controller.go:67] Waiting for caches to sync for InstallerController
2026-01-20T10:55:54.422250548+00:00 stderr F I0120 10:55:54.421481 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2026-01-20T10:55:54.422250548+00:00 stderr F I0120 10:55:54.421512 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2026-01-20T10:55:54.422250548+00:00 stderr F I0120 10:55:54.421565 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2026-01-20T10:55:54.436648015+00:00 stderr F I0120 10:55:54.436514 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2026-01-20T10:55:54.515767483+00:00 stderr F I0120 10:55:54.515661 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager
2026-01-20T10:55:54.515767483+00:00 stderr F I0120 10:55:54.515734 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ...
2026-01-20T10:55:54.519619477+00:00 stderr F I0120 10:55:54.519368 1 base_controller.go:73] Caches are synced for LoggingSyncer
2026-01-20T10:55:54.519619477+00:00 stderr F I0120 10:55:54.519400 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2026-01-20T10:55:54.519619477+00:00 stderr F I0120 10:55:54.519450 1 base_controller.go:73] Caches are synced for PruneController
2026-01-20T10:55:54.519619477+00:00 stderr F I0120 10:55:54.519477 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2026-01-20T10:55:54.521758844+00:00 stderr F E0120 10:55:54.521703 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:54.521866518+00:00 stderr F I0120 10:55:54.521832 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2026-01-20T10:55:54.521866518+00:00 stderr F I0120 10:55:54.521852 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2026-01-20T10:55:54.528426144+00:00 stderr F E0120 10:55:54.528375 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:54.537581320+00:00 stderr F I0120 10:55:54.537528 1 base_controller.go:73] Caches are synced for BackingResourceController
2026-01-20T10:55:54.537581320+00:00 stderr F I0120 10:55:54.537561 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ...
2026-01-20T10:55:54.540356285+00:00 stderr F E0120 10:55:54.540298 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:54.562847430+00:00 stderr F E0120 10:55:54.562749 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:54.603965677+00:00 stderr F E0120 10:55:54.603877 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:54.613271477+00:00 stderr F I0120 10:55:54.613225 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:54.685345076+00:00 stderr F E0120 10:55:54.685258 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:54.814047419+00:00 stderr F I0120 10:55:54.813971 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:54.815694493+00:00 stderr F I0120 10:55:54.815615 1 base_controller.go:73] Caches are synced for CertRotationController
2026-01-20T10:55:54.815694493+00:00 stderr F I0120 10:55:54.815638 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2026-01-20T10:55:54.817357238+00:00 stderr F I0120 10:55:54.817237 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TargetUpdateRequired' "csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: past its refresh time 2025-06-27 13:05:19 +0000 UTC
2026-01-20T10:55:54.847241402+00:00 stderr F E0120 10:55:54.846783 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:55.022565569+00:00 stderr F I0120 10:55:55.021986 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:55.168252039+00:00 stderr F E0120 10:55:55.168178 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:55.213796565+00:00 stderr F I0120 10:55:55.213687 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:55.412192392+00:00 stderr F I0120 10:55:55.411972 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:55.417394862+00:00 stderr F I0120 10:55:55.417347 1 base_controller.go:73] Caches are synced for NodeController
2026-01-20T10:55:55.417394862+00:00 stderr F I0120 10:55:55.417365 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2026-01-20T10:55:55.610312662+00:00 stderr F I0120 10:55:55.610197 1 request.go:697] Waited for 1.190142451s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0
2026-01-20T10:55:55.614351312+00:00 stderr F I0120 10:55:55.614282 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:55.809492151+00:00 stderr F E0120 10:55:55.809411 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:55.813966322+00:00 stderr F I0120 10:55:55.813894 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:55.819851470+00:00 stderr F I0120 10:55:55.819755 1 base_controller.go:73] Caches are synced for GuardController
2026-01-20T10:55:55.819851470+00:00 stderr F I0120 10:55:55.819788 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2026-01-20T10:55:55.822083191+00:00 stderr F I0120 10:55:55.822007 1 base_controller.go:73] Caches are synced for StaticPodStateController
2026-01-20T10:55:55.822083191+00:00 stderr F I0120 10:55:55.822028 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2026-01-20T10:55:55.822159023+00:00 stderr F I0120 10:55:55.822103 1 base_controller.go:73] Caches are synced for InstallerStateController
2026-01-20T10:55:55.822159023+00:00 stderr F I0120 10:55:55.822131 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2026-01-20T10:55:55.822176113+00:00 stderr F I0120 10:55:55.822164 1 base_controller.go:73] Caches are synced for InstallerController
2026-01-20T10:55:55.822188734+00:00 stderr F I0120 10:55:55.822174 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2026-01-20T10:55:55.823304593+00:00 stderr F I0120 10:55:55.823228 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11
2026-01-20T10:55:55.842106769+00:00 stderr F E0120 10:55:55.841978 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]
2026-01-20T10:55:55.843881796+00:00 stderr F E0120 10:55:55.843801 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]
2026-01-20T10:55:55.843944018+00:00 stderr F I0120 10:55:55.843881 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11
2026-01-20T10:55:55.844475523+00:00 stderr F E0120 10:55:55.844403 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found
2026-01-20T10:55:55.845101440+00:00 stderr F I0120 10:55:55.845029 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2026-01-20T10:55:55.848103701+00:00 stderr F I0120 10:55:55.847935 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11
2026-01-20T10:55:55.848203953+00:00 stderr F E0120 10:55:55.848161 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]
2026-01-20T10:55:55.855550071+00:00 stderr F I0120 10:55:55.855426 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]"
2026-01-20T10:55:55.869646450+00:00 stderr F I0120 10:55:55.869562 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11
2026-01-20T10:55:55.869835906+00:00 stderr F E0120 10:55:55.869775 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]
2026-01-20T10:55:55.911419295+00:00 stderr F E0120 10:55:55.911348 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]
2026-01-20T10:55:55.911419295+00:00 stderr F I0120 10:55:55.911376 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11
2026-01-20T10:55:55.992812374+00:00 stderr F I0120 10:55:55.992629 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11
2026-01-20T10:55:55.992994799+00:00 stderr F E0120 10:55:55.992923 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]
2026-01-20T10:55:56.023553992+00:00 stderr F I0120 10:55:56.023488 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:56.154650278+00:00 stderr F I0120 10:55:56.154510 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11
2026-01-20T10:55:56.154650278+00:00 stderr F E0120 10:55:56.154569 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]
2026-01-20T10:55:56.225470934+00:00 stderr F I0120 10:55:56.225390 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:55:56.315311141+00:00 stderr F I0120 10:55:56.315214 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile
2026-01-20T10:55:56.315311141+00:00 stderr F I0120 10:55:56.315264 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ...
2026-01-20T10:55:56.316832182+00:00 stderr F I0120 10:55:56.316788 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2026-01-20T10:55:56.316849282+00:00 stderr F I0120 10:55:56.316829 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2026-01-20T10:55:56.318036165+00:00 stderr F I0120 10:55:56.317979 1 base_controller.go:73] Caches are synced for RevisionController
2026-01-20T10:55:56.318036165+00:00 stderr F I0120 10:55:56.318001 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2026-01-20T10:55:56.318119377+00:00 stderr F I0120 10:55:56.318080 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources
2026-01-20T10:55:56.318119377+00:00 stderr F I0120 10:55:56.318100 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ...
2026-01-20T10:55:56.456051518+00:00 stderr F I0120 10:55:56.455947 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:55:56.515349653+00:00 stderr F I0120 10:55:56.515241 1 base_controller.go:73] Caches are synced for SATokenSignerController 2026-01-20T10:55:56.515349653+00:00 stderr F I0120 10:55:56.515282 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ... 2026-01-20T10:55:56.515607630+00:00 stderr F I0120 10:55:56.515552 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController 2026-01-20T10:55:56.515694912+00:00 stderr F I0120 10:55:56.515665 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ... 2026-01-20T10:55:56.612211189+00:00 stderr F I0120 10:55:56.612150 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:55:56.615462447+00:00 stderr F I0120 10:55:56.615356 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:55:56.615462447+00:00 stderr F I0120 10:55:56.615394 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2026-01-20T10:55:56.615462447+00:00 stderr F I0120 10:55:56.615404 1 base_controller.go:73] Caches are synced for TargetConfigController 2026-01-20T10:55:56.615462447+00:00 stderr F I0120 10:55:56.615442 1 base_controller.go:73] Caches are synced for ConfigObserver 2026-01-20T10:55:56.615462447+00:00 stderr F I0120 10:55:56.615452 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2026-01-20T10:55:56.615529138+00:00 stderr F I0120 10:55:56.615442 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2026-01-20T10:55:56.809967849+00:00 stderr F I0120 10:55:56.809810 1 request.go:697] Waited for 2.271430003s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa 2026-01-20T10:55:57.017335769+00:00 stderr F I0120 10:55:57.017251 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer -n openshift-kube-controller-manager-operator because it changed 2026-01-20T10:55:57.810576521+00:00 stderr F I0120 10:55:57.810206 1 request.go:697] Waited for 1.333584401s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2026-01-20T10:55:58.810686480+00:00 stderr F I0120 10:55:58.810579 1 request.go:697] Waited for 1.595166309s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2026-01-20T10:55:59.239725775+00:00 stderr F I0120 10:55:59.239624 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:55:59.252291702+00:00 stderr F I0120 10:55:59.252189 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11]" to "NodeControllerDegraded: All master nodes are ready" 2026-01-20T10:55:59.414356613+00:00 stderr F I0120 10:55:59.414235 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints 2026-01-20T10:56:00.410706530+00:00 stderr F I0120 10:56:00.410571 1 request.go:697] Waited for 1.170414501s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-11-crc 2026-01-20T10:56:02.618125842+00:00 stderr F I0120 10:56:02.616126 1 core.go:359] 
ConfigMap "openshift-kube-controller-manager-operator/csr-signer-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"}} 2026-01-20T10:56:02.618125842+00:00 stderr F I0120 10:56:02.616768 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator: 2026-01-20T10:56:02.618125842+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.096252 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.096187905 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099766 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.0997195 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099803 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.099780691 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099826 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.099810582 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099848 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099834063 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099868 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099855013 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099887 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099873254 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099922 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099894994 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099944 1 tlsconfig.go:178] "Loaded 
client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.099929205 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099966 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.099953336 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.099990 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.099974917 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.100012 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099995217 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.100485 1 tlsconfig.go:200] 
"Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2026-01-20 10:56:07.1004591 +0000 UTC))" 2026-01-20T10:56:07.101047886+00:00 stderr F I0120 10:56:07.100900 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906175\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906175\" (2026-01-20 09:49:34 +0000 UTC to 2027-01-20 09:49:34 +0000 UTC (now=2026-01-20 10:56:07.100878952 +0000 UTC))" 2026-01-20T10:56:07.415543567+00:00 stderr F I0120 10:56:07.415439 1 core.go:359] ConfigMap "openshift-config-managed/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:13Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2026-01-20T10:56:07Z"}],"resourceVersion":null,"uid":"4aabbce1-72f4-478a-b382-9ed7c988ad76"}} 2026-01-20T10:56:07.416290347+00:00 stderr F I0120 10:56:07.416250 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-ca -n openshift-config-managed: Operation cannot be fulfilled on configmaps "csr-controller-ca": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:56:07.434650491+00:00 stderr F I0120 10:56:07.434564 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:56:07.448592107+00:00 stderr F I0120 10:56:07.444352 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to 
"NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" 2026-01-20T10:56:07.452617495+00:00 stderr F I0120 10:56:07.452554 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:56:07.463934099+00:00 stderr F I0120 10:56:07.463821 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2026-01-20T10:56:08.610565430+00:00 stderr F I0120 
10:56:08.610493 1 request.go:697] Waited for 1.174513342s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa 2026-01-20T10:56:09.217510070+00:00 stderr P I0120 10:56:09.217359 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGq 2026-01-20T10:56:09.217625823+00:00 stderr F 
jsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFc
mj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2026-01-20T10:56:07Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2026-01-20T10:56:09.217841569+00:00 stderr F I0120 10:56:09.217751 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2026-01-20T10:56:09.217841569+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:56:09.813146606+00:00 stderr F I0120 10:56:09.810414 1 request.go:697] Waited for 1.381624263s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-infra
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.log
2025-08-13T20:05:36.514581427+00:00 stderr F I0813 20:05:36.511530 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:05:36.514581427+00:00 stderr F I0813 20:05:36.511831 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:05:36.536300489+00:00 stderr F I0813 20:05:36.536068 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:36.665065366+00:00 stderr F I0813 20:05:36.664937 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T20:05:37.295386946+00:00 stderr F I0813 20:05:37.294764 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:37.295478189+00:00 stderr F W0813 20:05:37.295460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.295518470+00:00 stderr F W0813 20:05:37.295504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.331330466+00:00 stderr F I0813 20:05:37.326144 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:37.331574223+00:00 stderr F I0813 20:05:37.331533 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-08-13T20:05:37.333384024+00:00 stderr F I0813 20:05:37.332718 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:37.333384024+00:00 stderr F I0813 20:05:37.333071 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:37.333653102+00:00 stderr F I0813 20:05:37.333580 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock... 
2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.353574 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.353645 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354854 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354923 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354994 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.355011 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.387949927+00:00 stderr F I0813 20:05:37.387768 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock 2025-08-13T20:05:37.402530115+00:00 stderr F I0813 20:05:37.390150 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31745", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_e17706f2-94c3-4c11-bacc-b114096fd37e became leader 2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.404425 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 
2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.423394 1 starter.go:88] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.428270 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 
2025-08-13T20:05:37.466191818+00:00 stderr F I0813 20:05:37.464311 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.466191818+00:00 stderr F I0813 20:05:37.464679 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:37.473592920+00:00 stderr F I0813 20:05:37.469592 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.690758209+00:00 stderr F I0813 20:05:37.690347 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController 2025-08-13T20:05:37.691722566+00:00 stderr F I0813 20:05:37.691605 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:05:37.694602549+00:00 stderr F I0813 20:05:37.693336 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T20:05:37.696375289+00:00 stderr F I0813 20:05:37.696346 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:05:37.696446501+00:00 stderr F I0813 20:05:37.696430 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController 2025-08-13T20:05:37.697198463+00:00 stderr F I0813 20:05:37.697115 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:05:37.697220463+00:00 stderr F I0813 20:05:37.697193 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:05:37.697648446+00:00 stderr F I0813 20:05:37.697575 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T20:05:37.697648446+00:00 stderr F I0813 20:05:37.697631 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager 2025-08-13T20:05:37.698286684+00:00 stderr F I0813 20:05:37.698231 1 base_controller.go:67] Waiting for caches 
to sync for MissingStaticPodController 2025-08-13T20:05:37.698597563+00:00 stderr F I0813 20:05:37.698542 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:05:37.711605395+00:00 stderr F I0813 20:05:37.711545 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:05:37.711720759+00:00 stderr F I0813 20:05:37.711704 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:05:37.713356496+00:00 stderr F I0813 20:05:37.713232 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:05:37.713356496+00:00 stderr F I0813 20:05:37.713305 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:05:37.713579652+00:00 stderr F I0813 20:05:37.713554 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:05:37.713664314+00:00 stderr F I0813 20:05:37.713647 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:05:37.713836589+00:00 stderr F I0813 20:05:37.713760 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:37.728300953+00:00 stderr F I0813 20:05:37.728166 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:05:37.735956003+00:00 stderr F I0813 20:05:37.734446 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T20:05:37.941763616+00:00 stderr F I0813 20:05:37.940941 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:05:37.941763616+00:00 stderr F I0813 20:05:37.941104 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 
2025-08-13T20:05:37.945175514+00:00 stderr F I0813 20:05:37.943577 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:05:37.945175514+00:00 stderr F I0813 20:05:37.943624 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.961306 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.961486 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.963073 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.963085 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985229 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985459 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985485 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 
2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985494 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10 2025-08-13T20:05:37.989486673+00:00 stderr F E0813 20:05:37.989079 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-10" not found 2025-08-13T20:05:37.990072570+00:00 stderr F I0813 20:05:37.989540 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001002 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager 2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001048 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ... 2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001931 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001956 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2025-08-13T20:05:38.009507136+00:00 stderr F I0813 20:05:38.009211 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:05:38.009507136+00:00 stderr F I0813 20:05:38.009304 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 
2025-08-13T20:05:38.009552308+00:00 stderr F I0813 20:05:38.009494 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:05:38.009552308+00:00 stderr F I0813 20:05:38.009515 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.022638 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.022678 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.023510 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:38.054908236+00:00 stderr F E0813 20:05:38.054015 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10] 2025-08-13T20:05:38.071474651+00:00 stderr F I0813 20:05:38.071276 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 
10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:38.108738358+00:00 stderr F I0813 20:05:38.107067 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]" 2025-08-13T20:05:38.231418341+00:00 stderr F I0813 20:05:38.231010 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:38.434263850+00:00 stderr F I0813 20:05:38.433834 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:38.499090686+00:00 stderr F I0813 20:05:38.499028 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T20:05:38.499168258+00:00 stderr F 
I0813 20:05:38.499150 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 2025-08-13T20:05:38.621559283+00:00 stderr F I0813 20:05:38.621492 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:38.629225383+00:00 stderr F I0813 20:05:38.629182 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:05:38.629297075+00:00 stderr F I0813 20:05:38.629281 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T20:05:38.713959039+00:00 stderr F I0813 20:05:38.713849 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:05:38.714049952+00:00 stderr F I0813 20:05:38.714034 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:05:38.815560509+00:00 stderr F I0813 20:05:38.815178 1 request.go:697] Waited for 1.117060289s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?limit=500&resourceVersion=0 2025-08-13T20:05:38.829328833+00:00 stderr F I0813 20:05:38.829250 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.048347936+00:00 stderr F I0813 20:05:39.047959 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.255540009+00:00 stderr F I0813 20:05:39.255468 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.290947193+00:00 stderr F I0813 20:05:39.290729 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController 2025-08-13T20:05:39.291035846+00:00 stderr F I0813 20:05:39.291017 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ... 
2025-08-13T20:05:39.419138504+00:00 stderr F I0813 20:05:39.418571 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.498210308+00:00 stderr F I0813 20:05:39.498044 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:05:39.498299021+00:00 stderr F I0813 20:05:39.498283 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:05:39.620257543+00:00 stderr F I0813 20:05:39.620126 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723880 1 base_controller.go:73] Caches are synced for SATokenSignerController 2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723933 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ... 2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723982 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723988 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:05:39.817011458+00:00 stderr F I0813 20:05:39.816753 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.893035685+00:00 stderr F I0813 20:05:39.892925 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:05:39.893035685+00:00 stderr F I0813 20:05:39.892974 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T20:05:39.898318626+00:00 stderr F I0813 20:05:39.898279 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:05:39.898378448+00:00 stderr F I0813 20:05:39.898363 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:05:40.013757722+00:00 stderr F I0813 20:05:40.013689 1 request.go:697] Waited for 2.027614104s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:05:41.215111744+00:00 stderr F I0813 20:05:41.213553 1 request.go:697] Waited for 1.489350039s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dopenshift-kube-apiserver 2025-08-13T20:05:42.218226789+00:00 stderr F I0813 20:05:42.218018 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' installer errors: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.218226789+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.218226789+00:00 stderr F W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.218226789+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.220484124+00:00 stderr F I0813 20:05:42.220398 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:05:42.220484124+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:05:42.220484124+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:05:42.220484124+00:00 stderr F TargetRevision: (int32) 10, 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedTime: (*v1.Time)(0xc000a3d830)(2025-08-13 20:05:42.217747775 +0000 UTC m=+6.736788489), 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedCount: (int) 1, 
2025-08-13T20:05:42.220484124+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:05:42.220484124+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: 
connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:05:42.220484124+00:00 stderr F } 2025-08-13T20:05:42.220484124+00:00 stderr F } 2025-08-13T20:05:42.220484124+00:00 stderr F because installer pod failed: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.220484124+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.220484124+00:00 stderr F W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 
20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.220484124+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.299764664+00:00 stderr F I0813 20:05:42.295707 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 
20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 
10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:42.326058137+00:00 stderr F I0813 20:05:42.323599 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get 
secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " 2025-08-13T20:05:42.373419313+00:00 stderr F I0813 20:05:42.373161 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All 
master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection 
refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:42.404745160+00:00 stderr F I0813 20:05:42.404616 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get 
secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " 2025-08-13T20:05:42.413767369+00:00 stderr F I0813 20:05:42.413731 1 request.go:697] Waited for 1.327177765s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-infra 2025-08-13T20:05:42.629149796+00:00 stderr F I0813 20:05:42.629032 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints 2025-08-13T20:05:43.613509565+00:00 stderr F I0813 20:05:43.613161 1 request.go:697] Waited for 1.193070645s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager 2025-08-13T20:05:57.065456713+00:00 stderr F I0813 20:05:57.063355 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-10-retry-1-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:05:57.088850643+00:00 stderr F I0813 
20:05:57.085745 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:05:57.131914866+00:00 stderr F I0813 20:05:57.131456 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:05:57.218625389+00:00 stderr F I0813 20:05:57.212321 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:05:58.028420369+00:00 stderr F I0813 20:05:58.027854 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:05:58.807715594+00:00 stderr F I0813 20:05:58.807067 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:06:00.112517718+00:00 stderr F I0813 20:06:00.112357 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:06:33.497496611+00:00 stderr F I0813 20:06:33.496837 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:06:35.691201926+00:00 stderr F I0813 20:06:35.688448 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because waiting for static pod of revision 10, found 8
2025-08-13T20:06:36.706043922+00:00 stderr F I0813 20:06:36.703456 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because waiting for static pod of revision 10, found 8
2025-08-13T20:06:50.660622631+00:00 stderr F I0813 20:06:50.658985 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:06:50.850225138+00:00 stderr F I0813 20:06:50.848552 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:06:51.121329900+00:00 stderr F I0813 20:06:51.119648 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:06:55.484035044+00:00 stderr F I0813 20:06:55.481643 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:06:55.563495791+00:00 stderr F I0813 20:06:55.563359 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:06:55.655427417+00:00 stderr F I0813 20:06:55.655320 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:06:55.658088293+00:00 stderr F I0813 20:06:55.658034 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "
2025-08-13T20:06:55.665956629+00:00 stderr F I0813 20:06:55.661252 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:06:55.683381938+00:00 stderr F E0813 20:06:55.683327 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:56.936619590+00:00 stderr F I0813 20:06:56.930386 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:06:57.091031747+00:00 stderr F I0813 20:06:57.090927 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: "
2025-08-13T20:06:57.980483799+00:00 stderr F I0813 20:06:57.979433 1 request.go:697] Waited for 1.042308714s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-10-crc
2025-08-13T20:06:59.201359691+00:00 stderr F I0813 20:06:59.201295 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:07:01.121543755+00:00 stderr F I0813 20:07:01.121137 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:07:04.595151506+00:00 stderr F I0813 20:07:04.533289 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) {
2025-08-13T20:07:04.595151506+00:00 stderr F  NodeName: (string) (len=3) "crc",
2025-08-13T20:07:04.595151506+00:00 stderr F  CurrentRevision: (int32) 10,
2025-08-13T20:07:04.595151506+00:00 stderr F  TargetRevision: (int32) 0,
2025-08-13T20:07:04.595151506+00:00 stderr F  LastFailedRevision: (int32) 10,
2025-08-13T20:07:04.595151506+00:00 stderr F  LastFailedTime: (*v1.Time)(0xc00264f218)(2025-08-13 20:05:42 +0000 UTC),
2025-08-13T20:07:04.595151506+00:00 stderr F  LastFailedReason: (string) (len=15) "InstallerFailed",
2025-08-13T20:07:04.595151506+00:00 stderr F  LastFailedCount: (int) 1,
2025-08-13T20:07:04.595151506+00:00 stderr F  LastFallbackCount: (int) 0,
2025-08-13T20:07:04.595151506+00:00 stderr F  LastFailedRevisionErrors: ([]string) (len=1 cap=1) {
2025-08-13T20:07:04.595151506+00:00 stderr F  (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n"
2025-08-13T20:07:04.595151506+00:00 stderr F  }
2025-08-13T20:07:04.595151506+00:00 stderr F }
2025-08-13T20:07:04.595151506+00:00 stderr F because static pod is ready
2025-08-13T20:07:04.677555779+00:00 stderr F I0813 20:07:04.677143 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 10 because static pod is ready
2025-08-13T20:07:04.721704214+00:00 stderr F I0813 20:07:04.720531 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:04Z","message":"NodeInstallerProgressing: 1 node is at revision 10","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:04.783723732+00:00 stderr F I0813 20:07:04.783645 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 10"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10"
2025-08-13T20:07:05.988628468+00:00 stderr F I0813 20:07:05.985923 1 request.go:697] Waited for 1.015351241s due to client-side throttling, not priority and fairness, request: DELETE:https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider
2025-08-13T20:07:15.750061857+00:00 stderr F I0813 20:07:15.749143 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 11 triggered by "required secret/localhost-recovery-client-token has changed"
2025-08-13T20:07:15.830481992+00:00 stderr F I0813 20:07:15.825103 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:15.864035724+00:00 stderr F I0813 20:07:15.863695 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:15.880677681+00:00 stderr F I0813 20:07:15.880518 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:15.896993799+00:00 stderr F I0813 20:07:15.896746 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:16.533054716+00:00 stderr F I0813 20:07:16.531142 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:16.966016579+00:00 stderr F I0813 20:07:16.965723 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:17.163532201+00:00 stderr F I0813 20:07:17.163424 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:17.730583569+00:00 stderr F I0813 20:07:17.730517 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:18.745910990+00:00 stderr F I0813 20:07:18.743413 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:19.932212122+00:00 stderr F I0813 20:07:19.894036 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:20.024030635+00:00 stderr F I0813 20:07:20.023839 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-11 -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:20.190952711+00:00 stderr F I0813 20:07:20.190769 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 11 triggered by "required secret/localhost-recovery-client-token has changed"
2025-08-13T20:07:20.342014862+00:00 stderr F I0813 20:07:20.341946 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 11 created because required secret/localhost-recovery-client-token has changed
2025-08-13T20:07:21.518012327+00:00 stderr F I0813 20:07:21.517052 1 request.go:697] Waited for 1.17130022s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config
2025-08-13T20:07:21.914562077+00:00 stderr F I0813 20:07:21.914362 1 installer_controller.go:524] node crc with revision 10 is the oldest and needs new revision 11
2025-08-13T20:07:21.914562077+00:00 stderr F I0813 20:07:21.914465 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) {
2025-08-13T20:07:21.914562077+00:00 stderr F  NodeName: (string) (len=3) "crc",
2025-08-13T20:07:21.914562077+00:00 stderr F  CurrentRevision: (int32) 10,
2025-08-13T20:07:21.914562077+00:00 stderr F TargetRevision: (int32) 11, 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedTime: (*v1.Time)(0xc002d33830)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:21.914562077+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:21.914562077+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:07:21.914562077+00:00 stderr F } 2025-08-13T20:07:21.914562077+00:00 stderr F } 2025-08-13T20:07:21.957209780+00:00 stderr F I0813 20:07:21.956243 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:21Z","message":"NodeInstallerProgressing: 1 node is at revision 10; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:21.971364466+00:00 stderr F I0813 20:07:21.959131 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 10 to 11 because node crc with revision 10 is the oldest 2025-08-13T20:07:22.018179948+00:00 stderr F I0813 20:07:22.017362 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 10; 0 nodes have achieved new revision 11"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11" 2025-08-13T20:07:22.198346004+00:00 stderr F I0813 20:07:22.177477 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-11-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:23.112166394+00:00 stderr F I0813 20:07:23.109244 1 request.go:697] Waited for 1.121977728s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc 2025-08-13T20:07:24.113915855+00:00 stderr F I0813 20:07:24.112993 1 request.go:697] Waited for 1.361823165s due to client-side throttling, not 
priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:07:24.340358286+00:00 stderr F I0813 20:07:24.340133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-11-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:25.328965391+00:00 stderr F I0813 20:07:25.328424 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:27.325769971+00:00 stderr F I0813 20:07:27.321708 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:29.319020529+00:00 stderr F I0813 20:07:29.318273 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:32.013351707+00:00 stderr F I0813 20:07:32.012561 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:01.280911734+00:00 stderr F I0813 20:08:01.278043 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:02.776377240+00:00 stderr F I0813 20:08:02.774488 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because waiting for static pod of revision 11, found 10 2025-08-13T20:08:03.237190373+00:00 stderr F I0813 20:08:03.236041 1 installer_controller.go:512] "crc" is in transition to 11, but has not made 
progress because waiting for static pod of revision 11, found 10 2025-08-13T20:08:12.346025060+00:00 stderr F I0813 20:08:12.336670 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:12.393195983+00:00 stderr F I0813 20:08:12.392436 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:14.389113567+00:00 stderr F I0813 20:08:14.382351 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:22.396199897+00:00 stderr F I0813 20:08:22.394876 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:22.452822330+00:00 stderr F I0813 20:08:22.451881 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:23.367716581+00:00 stderr F I0813 20:08:23.367578 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:24.170857908+00:00 stderr F E0813 20:08:24.170609 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.371187832+00:00 stderr F I0813 20:08:24.369377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.375219228+00:00 stderr F E0813 20:08:24.374420 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:24.394850951+00:00 stderr F E0813 20:08:24.394715 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.568198841+00:00 stderr F I0813 20:08:24.567660 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.571906687+00:00 stderr F E0813 20:08:24.571270 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.765553469+00:00 stderr F I0813 20:08:24.765486 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.771100708+00:00 stderr F E0813 20:08:24.771014 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.971852124+00:00 stderr F I0813 20:08:24.969335 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' 
reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.978861485+00:00 stderr F E0813 20:08:24.978496 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.165866856+00:00 stderr F I0813 20:08:25.165412 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.169325765+00:00 stderr F E0813 20:08:25.166448 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.364084589+00:00 stderr F I0813 20:08:25.363947 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.366759675+00:00 stderr F E0813 20:08:25.366692 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.704643453+00:00 stderr F I0813 20:08:25.698772 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.714601128+00:00 stderr F E0813 20:08:25.714305 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.369521486+00:00 stderr F I0813 20:08:26.360516 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.398639191+00:00 stderr F E0813 20:08:26.362744 1 base_controller.go:268] InstallerController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.685364933+00:00 stderr F I0813 20:08:27.685138 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.688204264+00:00 stderr F E0813 20:08:27.687213 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.431541927+00:00 stderr F E0813 20:08:29.429165 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:30.267922087+00:00 stderr F I0813 20:08:30.266521 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.268515264+00:00 stderr F E0813 20:08:30.268373 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.396384583+00:00 stderr F I0813 20:08:35.395552 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.398464212+00:00 stderr F E0813 20:08:35.398026 1 
base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.708406420+00:00 stderr F E0813 20:08:37.707561 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager-operator/leases/kube-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.977995409+00:00 stderr F E0813 20:08:37.977379 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.991073454+00:00 stderr F E0813 20:08:37.991014 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 
10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.994263745+00:00 stderr F E0813 20:08:37.994197 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.995028277+00:00 stderr F E0813 20:08:37.994974 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.002764669+00:00 stderr F E0813 20:08:38.002720 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.003259743+00:00 stderr F E0813 20:08:38.003178 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.006769884+00:00 stderr F E0813 20:08:38.006664 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: 
connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.015736081+00:00 stderr F E0813 20:08:38.015641 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.018489110+00:00 stderr F E0813 20:08:38.018422 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.036316851+00:00 stderr F E0813 20:08:38.036184 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.169417247+00:00 stderr F E0813 20:08:38.169315 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.371619185+00:00 stderr F E0813 20:08:38.371482 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.583145309+00:00 stderr F E0813 20:08:38.581142 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.770923113+00:00 stderr F E0813 20:08:38.770624 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.972205464+00:00 stderr F E0813 20:08:38.972095 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.378703148+00:00 stderr F E0813 20:08:39.378322 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:39.434393324+00:00 stderr F E0813 20:08:39.434225 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:39.569598961+00:00 stderr F E0813 20:08:39.569421 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": 
dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.772648372+00:00 stderr F E0813 20:08:39.772541 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.898155131+00:00 stderr F E0813 20:08:39.898073 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.923172408+00:00 stderr F E0813 20:08:39.922158 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.937491379+00:00 stderr F E0813 20:08:39.937365 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.965853612+00:00 stderr F E0813 20:08:39.963385 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.007860736+00:00 stderr F E0813 20:08:40.006971 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.092879614+00:00 stderr F E0813 20:08:40.092336 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.177913742+00:00 stderr F E0813 20:08:40.177511 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:40.256260358+00:00 stderr F E0813 20:08:40.255973 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.377703030+00:00 stderr F E0813 20:08:40.377602 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.586885368+00:00 stderr F E0813 20:08:40.585655 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.586885368+00:00 stderr F E0813 20:08:40.586491 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:40.997595883+00:00 stderr F E0813 20:08:40.997086 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.173972980+00:00 stderr F E0813 20:08:41.173275 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.236970396+00:00 stderr F E0813 20:08:41.232354 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.572877047+00:00 stderr F E0813 20:08:41.572494 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.004873103+00:00 stderr F E0813 20:08:42.002107 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.370864736+00:00 stderr F E0813 20:08:42.369130 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.520686582+00:00 stderr F E0813 20:08:42.520583 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.583390490+00:00 stderr F E0813 20:08:42.582572 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.782116257+00:00 stderr F E0813 20:08:42.781255 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial 
tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.575448852+00:00 stderr F E0813 20:08:43.575363 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.969587302+00:00 stderr F E0813 20:08:43.969477 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.373945716+00:00 stderr F E0813 20:08:44.373709 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.979102146+00:00 stderr F E0813 20:08:44.978928 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.105530441+00:00 stderr F E0813 20:08:45.101170 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.644941877+00:00 stderr F I0813 20:08:45.643771 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.647885141+00:00 stderr F E0813 20:08:45.646113 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.376183812+00:00 stderr F E0813 20:08:46.375706 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.533064029+00:00 stderr F E0813 20:08:46.532345 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.940860211+00:00 stderr F E0813 20:08:46.939076 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.384872593+00:00 stderr F E0813 20:08:48.382040 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.437137672+00:00 stderr F E0813 20:08:49.437038 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:50.225616298+00:00 stderr F E0813 20:08:50.225290 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.576501139+00:00 stderr F E0813 20:08:51.576436 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:51.656540804+00:00 stderr F E0813 20:08:51.656359 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:51.776989467+00:00 stderr F E0813 20:08:51.775149 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:52.064855021+00:00 stderr F E0813 20:08:52.064721 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:54.984195111+00:00 stderr F E0813 20:08:54.983672 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:58.174975433+00:00 stderr F E0813 20:08:58.174827 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete 
"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:59.441281750+00:00 stderr F E0813 20:08:59.440725 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:09:00.467884753+00:00 stderr F E0813 20:09:00.467455 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:01.433467986+00:00 stderr F E0813 20:09:01.433310 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:09:06.140281097+00:00 stderr F I0813 20:09:06.139717 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) {
2025-08-13T20:09:06.140281097+00:00 stderr F NodeName: (string) (len=3) "crc",
2025-08-13T20:09:06.140281097+00:00 stderr F CurrentRevision: (int32) 11,
2025-08-13T20:09:06.140281097+00:00 stderr F TargetRevision: (int32) 0,
2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedRevision: (int32) 10,
2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00231f1a0)(2025-08-13 20:05:42 +0000 UTC),
2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed",
2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedCount: (int) 1,
2025-08-13T20:09:06.140281097+00:00 stderr F LastFallbackCount: (int) 0,
2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) {
2025-08-13T20:09:06.140281097+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n"
2025-08-13T20:09:06.140281097+00:00 stderr F }
2025-08-13T20:09:06.140281097+00:00 stderr F }
2025-08-13T20:09:06.140281097+00:00 stderr F because static pod is ready
2025-08-13T20:09:06.171341418+00:00 stderr F I0813 20:09:06.170651 1 helpers.go:260] lister was stale at resourceVersion=32529, live get showed resourceVersion=32975
2025-08-13T20:09:06.328351489+00:00 stderr F I0813 20:09:06.328133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 10 to 11 because static pod is ready
2025-08-13T20:09:32.791415227+00:00 stderr F I0813 20:09:32.790410 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:35.842145974+00:00 stderr F I0813 20:09:35.841480 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:36.566471571+00:00 stderr F I0813 20:09:36.566334 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:36.867218033+00:00 stderr F I0813 20:09:36.867072 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:39.031180185+00:00 stderr F I0813 20:09:39.030762 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:39.077560615+00:00 stderr F I0813 20:09:39.075582 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.669969650+00:00 stderr F I0813 20:09:39.669137 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.868331488+00:00 stderr F I0813 20:09:41.867858 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:41.868331488+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:41.868331488+00:00 stderr F CurrentRevision: (int32) 11, 2025-08-13T20:09:41.868331488+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00167baa0)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:41.868331488+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 
recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:09:41.868331488+00:00 stderr F } 2025-08-13T20:09:41.868331488+00:00 stderr F } 2025-08-13T20:09:41.868331488+00:00 stderr F because static pod is ready 2025-08-13T20:09:41.901710485+00:00 stderr F I0813 20:09:41.901606 1 helpers.go:260] lister was stale at resourceVersion=32529, live get showed resourceVersion=32995 2025-08-13T20:09:43.065986166+00:00 stderr F I0813 20:09:43.065471 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.755607148+00:00 stderr F I0813 20:09:43.753551 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.828929360+00:00 stderr F I0813 20:09:43.828769 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.868881685+00:00 stderr F I0813 20:09:43.866849 1 request.go:697] Waited for 1.008177855s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?resourceVersion=32682 2025-08-13T20:09:43.874975750+00:00 stderr F I0813 20:09:43.874058 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:44.267842224+00:00 stderr F I0813 20:09:44.266984 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:44.267842224+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:44.267842224+00:00 stderr F CurrentRevision: (int32) 11, 2025-08-13T20:09:44.267842224+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00184d668)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:44.267842224+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 
20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:09:44.267842224+00:00 stderr F } 2025-08-13T20:09:44.267842224+00:00 stderr F } 2025-08-13T20:09:44.267842224+00:00 stderr F because static pod is ready 2025-08-13T20:09:44.305548305+00:00 stderr F I0813 20:09:44.302039 1 helpers.go:260] lister was stale at 
resourceVersion=32529, live get showed resourceVersion=32995 2025-08-13T20:09:44.830307690+00:00 stderr F I0813 20:09:44.830050 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.070993701+00:00 stderr F I0813 20:09:45.070496 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.882929610+00:00 stderr F I0813 20:09:45.880547 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.698310448+00:00 stderr F I0813 20:09:46.698204 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.920826358+00:00 stderr F I0813 20:09:46.920723 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.441742102+00:00 stderr F I0813 20:09:47.441227 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.488483362+00:00 stderr F I0813 20:09:47.488294 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.865952065+00:00 stderr F I0813 20:09:47.865749 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.867515881+00:00 stderr F I0813 20:09:48.867436 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.465855626+00:00 stderr F I0813 20:09:49.465342 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.469199783+00:00 stderr F I0813 20:09:50.468636 1 reflector.go:351] Caches populated for *v1.Secret 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.880873876+00:00 stderr F I0813 20:09:50.878988 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.891951154+00:00 stderr F I0813 20:09:50.889425 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" 
(string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:50.912624487+00:00 stderr F I0813 20:09:50.911375 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ",Progressing changed 
from True to False ("NodeInstallerProgressing: 1 node is at revision 11"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11" 2025-08-13T20:09:51.070035449+00:00 stderr F I0813 20:09:51.069679 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.062438492+00:00 stderr F I0813 20:09:52.062144 1 request.go:697] Waited for 1.166969798s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:09:52.074969781+00:00 stderr F I0813 20:09:52.073887 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.087851291+00:00 stderr F E0813 20:09:52.086954 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.089358154+00:00 stderr F I0813 20:09:52.088079 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.100062871+00:00 stderr F E0813 20:09:52.099285 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.107549096+00:00 stderr F I0813 20:09:52.106752 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All 
master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.112456946+00:00 stderr F E0813 20:09:52.112377 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has 
been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.116400119+00:00 stderr F I0813 20:09:52.116182 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-08-13T20:09:52.133078288+00:00 stderr F E0813 20:09:52.133003 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.136254809+00:00 stderr F I0813 20:09:52.136195 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: 
connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 
11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:52.167870265+00:00 stderr F E0813 20:09:52.167244 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:52.177925303+00:00 stderr F I0813 20:09:52.175512 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:52.194191010+00:00 stderr F E0813 20:09:52.194117 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:52.356891154+00:00 stderr F I0813 20:09:52.356749 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:52.385455173+00:00 stderr F E0813 20:09:52.382429 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:52.704555382+00:00 stderr F I0813 20:09:52.704250 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All
master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:52.715332131+00:00 stderr F E0813 20:09:52.714021 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has
been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:53.119107698+00:00 stderr F I0813 20:09:53.117989 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:53.356032781+00:00 stderr F I0813 20:09:53.355849 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded:
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" 
(string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:53Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:53.364569826+00:00 stderr F E0813 20:09:53.363148 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:53.481039765+00:00 stderr F I0813 20:09:53.480338 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:53.799756183+00:00 stderr F I0813 20:09:53.799653 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:53Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:53.836003352+00:00 stderr F E0813 20:09:53.833661 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io 
"kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.645177441+00:00 stderr F I0813 20:09:54.644492 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.660597223+00:00 stderr F E0813 20:09:54.660244 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.862986776+00:00 stderr F I0813 20:09:54.862846 1 request.go:697] Waited for 1.062232365s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:09:54.870893733+00:00 stderr F I0813 20:09:54.868932 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are 
ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.878888712+00:00 stderr F E0813 20:09:54.877734 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.878888712+00:00 stderr F I0813 20:09:54.878416 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.889590369+00:00 stderr F E0813 
20:09:54.889437 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:59.784361326+00:00 stderr F I0813 20:09:59.782708 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:59Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:59.796398381+00:00 stderr F E0813 20:09:59.794549 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:01.485628003+00:00 stderr F I0813 20:10:01.485296 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:01.486631552+00:00 stderr F I0813 20:10:01.486593 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: 
All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:01.512229046+00:00 stderr F I0813 20:10:01.512052 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:10:05.576588314+00:00 stderr F I0813 20:10:05.574320 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.788647521+00:00 stderr F I0813 20:10:16.786138 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:17.632072662+00:00 stderr F I0813 20:10:17.631636 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.410729797+00:00 stderr F I0813 20:10:18.410459 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.615626252+00:00 stderr F I0813 20:10:18.615513 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.806435403+00:00 stderr F I0813 20:10:18.806372 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:20.493105591+00:00 stderr F I0813 20:10:20.492429 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:24.807862058+00:00 stderr F I0813 20:10:24.807400 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:31.619188005+00:00 stderr F I0813 20:10:31.618223 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:32.803599894+00:00 stderr F I0813 20:10:32.803378 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:35.876587429+00:00 stderr F I0813 20:10:35.875698 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.381175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.382267 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.383047 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.383551 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.396116500+00:00 stderr F I0813 20:42:36.375892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.439697676+00:00 stderr F I0813 20:42:36.438509 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.473221493+00:00 stderr F I0813 20:42:36.472939 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517723086+00:00 stderr F I0813 20:42:36.515446 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.525477209+00:00 stderr F I0813 20:42:36.524990 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.525649584+00:00 stderr F I0813 20:42:36.525597 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.526007115+00:00 stderr F I0813 20:42:36.525975 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526113798+00:00 stderr F I0813 20:42:36.526057 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526590702+00:00 stderr F I0813 20:42:36.526528 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526860159+00:00 stderr F I0813 20:42:36.526836 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.527274371+00:00 stderr F I0813 20:42:36.527249 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.527652012+00:00 stderr F I0813 20:42:36.527630 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.527982892+00:00 stderr F I0813 20:42:36.527959 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.528339942+00:00 stderr F I0813 20:42:36.528315 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.528704013+00:00 stderr F I0813 20:42:36.528683 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.529639970+00:00 stderr F I0813 20:42:36.529613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.530079102+00:00 stderr F I0813 20:42:36.530057 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543610662+00:00 stderr F I0813 20:42:36.543448 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.551664234+00:00 stderr F I0813 20:42:36.551599 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.552394896+00:00 stderr F I0813 20:42:36.552291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.552666603+00:00 stderr F I0813 20:42:36.552644 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553034024+00:00 stderr F I0813 20:42:36.553007 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553347353+00:00 stderr F I0813 20:42:36.553324 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553991832+00:00 stderr F I0813 20:42:36.553964 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554282780+00:00 stderr F I0813 20:42:36.554257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.564312 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565334 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565351 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565544 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566095551+00:00 stderr F I0813 20:42:36.565899 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566119391+00:00 stderr F I0813 20:42:36.566108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566487422+00:00 stderr F I0813 20:42:36.566257 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566502902+00:00 stderr F I0813 20:42:36.566491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566909994+00:00 stderr F I0813 20:42:36.566684 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:38.016213258+00:00 stderr F E0813 20:42:38.010560 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.018502994+00:00 stderr F E0813 20:42:38.018396 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.022073757+00:00 stderr F E0813 20:42:38.021141 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.029345557+00:00 stderr F E0813 20:42:38.029269 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.034276839+00:00 stderr F E0813 20:42:38.033926 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.036483353+00:00 stderr F E0813 20:42:38.035737 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.040625892+00:00 stderr F E0813 20:42:38.039940 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.047424938+00:00 stderr F E0813 20:42:38.047061 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.051877666+00:00 stderr F E0813 20:42:38.051510 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.060824984+00:00 stderr F E0813 20:42:38.060661 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.199407930+00:00 stderr F E0813 20:42:38.199318 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.405874742+00:00 stderr F E0813 20:42:38.403054 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.599883536+00:00 
stderr F E0813 20:42:38.599536 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.802826186+00:00 stderr F E0813 20:42:38.801003 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.858378818+00:00 stderr F I0813 20:42:38.858182 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:38.860434177+00:00 stderr F I0813 20:42:38.860371 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:38.860434177+00:00 stderr F I0813 20:42:38.860425 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:42:38.860457888+00:00 stderr F I0813 20:42:38.860446 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:38.860457888+00:00 stderr F I0813 20:42:38.860453 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:38.860520100+00:00 stderr F I0813 20:42:38.860468 1 base_controller.go:172] Shutting down SATokenSignerController ... 2025-08-13T20:42:38.860520100+00:00 stderr F I0813 20:42:38.860504 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:38.860533350+00:00 stderr F I0813 20:42:38.860520 1 base_controller.go:172] Shutting down GarbageCollectorWatcherController ... 2025-08-13T20:42:38.860543201+00:00 stderr F I0813 20:42:38.860535 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:42:38.860583682+00:00 stderr F I0813 20:42:38.860550 1 base_controller.go:172] Shutting down GuardController ... 
2025-08-13T20:42:38.860583682+00:00 stderr F I0813 20:42:38.860567 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:42:38.860930672+00:00 stderr F I0813 20:42:38.860883 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:38.860930672+00:00 stderr F I0813 20:42:38.860922 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:42:38.860950012+00:00 stderr F I0813 20:42:38.860936 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:38.860959843+00:00 stderr F I0813 20:42:38.860949 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:42:38.860969773+00:00 stderr F I0813 20:42:38.860962 1 base_controller.go:172] Shutting down StatusSyncer_kube-controller-manager ... 2025-08-13T20:42:38.860979963+00:00 stderr F I0813 20:42:38.860967 1 base_controller.go:150] All StatusSyncer_kube-controller-manager post start hooks have been terminated 2025-08-13T20:42:38.860989933+00:00 stderr F I0813 20:42:38.860979 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:42:38.861177759+00:00 stderr F I0813 20:42:38.861129 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:42:38.861292472+00:00 stderr F I0813 20:42:38.861164 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:38.861891139+00:00 stderr F E0813 20:42:38.861726 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager-operator/leases/kube-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.863436384+00:00 stderr F E0813 20:42:38.861921 1 base_controller.go:268] InstallerStateController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.861973 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.861984 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.862005 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:42:38.863436384+00:00 stderr F W0813 20:42:38.862143 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log
2025-08-13T19:59:24.033453753+00:00 stderr F I0813 19:59:24.032400 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T19:59:24.033453753+00:00 stderr F I0813 19:59:24.033256 1 leaderelection.go:121] The leader election gives 4 retries
and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is 26s. 2025-08-13T19:59:24.045613349+00:00 stderr F I0813 19:59:24.045312 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:28.618292083+00:00 stderr F I0813 19:59:28.581671 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T19:59:41.628019168+00:00 stderr F I0813 19:59:41.627091 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:41.688536273+00:00 stderr F I0813 19:59:41.686658 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-08-13T19:59:41.688536273+00:00 stderr F I0813 19:59:41.687289 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock... 2025-08-13T19:59:41.783485650+00:00 stderr F W0813 19:59:41.728767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:41.783485650+00:00 stderr F W0813 19:59:41.783334 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:41.912309252+00:00 stderr F I0813 19:59:41.912083 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:41.922961415+00:00 stderr F I0813 19:59:41.920266 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:41.922961415+00:00 stderr F I0813 19:59:41.921423 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923402 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923470 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923610 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923627 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923644 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923650 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:41.967324630+00:00 stderr F I0813 19:59:41.964723 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock 2025-08-13T19:59:42.010618034+00:00 stderr F I0813 19:59:42.008445 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", 
Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28303", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_222cdc59-c33e-47e3-9961-9d9799c1f827 became leader 2025-08-13T19:59:42.023767319+00:00 stderr F I0813 19:59:42.023455 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:42.023767319+00:00 stderr F I0813 19:59:42.023517 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:42.023966385+00:00 stderr F I0813 19:59:42.023881 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:42.023966385+00:00 stderr F I0813 19:59:42.023913 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.023982015+00:00 stderr F E0813 19:59:42.023960 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.023997666+00:00 stderr F E0813 19:59:42.023989 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.051606542+00:00 stderr F E0813 19:59:42.047922 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.057444769+00:00 stderr F E0813 19:59:42.056639 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.059437936+00:00 stderr F E0813 19:59:42.058443 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.067436374+00:00 stderr F E0813 19:59:42.067065 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.078942392+00:00 stderr F E0813 19:59:42.078701 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.096151052+00:00 stderr F E0813 19:59:42.094004 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.096151052+00:00 stderr F I0813 19:59:42.094275 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", 
"HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:42.096151052+00:00 stderr F I0813 19:59:42.095053 1 starter.go:88] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders 
DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:42.135419622+00:00 stderr F E0813 19:59:42.131231 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.149967616+00:00 stderr F E0813 19:59:42.149745 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.216747230+00:00 stderr F E0813 19:59:42.212138 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.328639559+00:00 stderr F E0813 
19:59:42.328373 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.380154118+00:00 stderr F E0813 19:59:42.379397 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.490361409+00:00 stderr F E0813 19:59:42.489984 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.734689153+00:00 stderr F E0813 19:59:42.730261 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.813911061+00:00 stderr F E0813 19:59:42.813173 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.093876912+00:00 stderr F I0813 19:59:43.093228 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController 2025-08-13T19:59:43.207267524+00:00 stderr F I0813 19:59:43.193372 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController 2025-08-13T19:59:43.271215437+00:00 stderr F I0813 19:59:43.269044 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T19:59:44.345763787+00:00 stderr F I0813 19:59:44.345462 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T19:59:44.346128778+00:00 stderr F I0813 19:59:44.345751 1 base_controller.go:67] Waiting for caches to sync for RevisionController 
2025-08-13T19:59:44.397758109+00:00 stderr F I0813 19:59:44.397026 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T19:59:44.406665463+00:00 stderr F I0813 19:59:44.406579 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:44.408134075+00:00 stderr F I0813 19:59:44.407390 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454354 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.342036 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454585 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454640 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454668 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454686 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455379 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455402 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455416 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455430 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T19:59:44.457951755+00:00 stderr F E0813 19:59:44.456040 1 configmap_cafile_content.go:243] 
kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.457951755+00:00 stderr F E0813 19:59:44.456098 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.605027117+00:00 stderr F I0813 19:59:44.604706 1 request.go:697] Waited for 1.261836099s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-infra/configmaps?limit=500&resourceVersion=0 2025-08-13T19:59:45.701060661+00:00 stderr F I0813 19:59:45.683986 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T19:59:45.777649654+00:00 stderr F E0813 19:59:45.777571 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.214107444+00:00 stderr F I0813 19:59:46.213425 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:49.333946649+00:00 stderr F E0813 19:59:49.333240 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.333946649+00:00 stderr F E0813 19:59:49.333693 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658599 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 
19:59:49.658650 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658701 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658709 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T19:59:49.686514198+00:00 stderr F I0813 19:59:49.685489 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SignerUpdateRequired' "csr-signer-signer" in "openshift-kube-controller-manager-operator" requires a new signing cert/key pair: past its refresh time 2025-06-27 13:05:19 +0000 UTC 2025-08-13T19:59:52.221081929+00:00 stderr F E0813 19:59:52.216537 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:52.947399193+00:00 stderr F E0813 19:59:52.942757 1 base_controller.go:268] InstallerStateController reconciliation failed: kubecontrollermanagers.operator.openshift.io "cluster" not found 2025-08-13T19:59:53.014988420+00:00 stderr F I0813 19:59:53.014830 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:53.014988420+00:00 stderr F I0813 19:59:53.014901 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T19:59:53.247120117+00:00 stderr F I0813 19:59:53.000502 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:53.290763200+00:00 stderr F I0813 19:59:53.290133 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 
2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.246981 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.326055 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.246985 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.326119 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T19:59:53.343642967+00:00 stderr F I0813 19:59:53.246989 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:53.343642967+00:00 stderr F I0813 19:59:53.343520 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:53.348723092+00:00 stderr F E0813 19:59:53.347167 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.348723092+00:00 stderr F I0813 19:59:53.246992 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:53.348723092+00:00 stderr F I0813 19:59:53.347291 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:53.349069832+00:00 stderr F I0813 19:59:53.248700 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:53.349069832+00:00 stderr F I0813 19:59:53.349039 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T19:59:53.368211478+00:00 stderr F E0813 19:59:53.365245 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.370976496+00:00 stderr F I0813 19:59:53.370553 1 trace.go:236] Trace[1198132947]: "DeltaFIFO Pop Process" ID:openshift-infra/build-config-change-controller,Depth:24,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:53.014) (total time: 355ms): 2025-08-13T19:59:53.370976496+00:00 stderr F Trace[1198132947]: [355.508683ms] [355.508683ms] END 2025-08-13T19:59:53.379894541+00:00 stderr F E0813 19:59:53.379715 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.383234916+00:00 stderr F I0813 19:59:53.380322 1 trace.go:236] Trace[1066495986]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.193) (total time: 10186ms): 2025-08-13T19:59:53.383234916+00:00 stderr F Trace[1066495986]: ---"Objects listed" error: 10186ms (19:59:53.380) 2025-08-13T19:59:53.383234916+00:00 stderr F Trace[1066495986]: [10.186242865s] [10.186242865s] END 2025-08-13T19:59:53.383234916+00:00 stderr F I0813 19:59:53.380358 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.416024670+00:00 stderr F E0813 19:59:53.414698 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.420198000+00:00 stderr F I0813 19:59:53.420157 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.422652869+00:00 stderr F I0813 19:59:53.422216 1 reflector.go:351] Caches populated for *v1.ClusterRole from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.423601336+00:00 stderr F I0813 19:59:53.423567 1 trace.go:236] Trace[1991635445]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.194) (total time: 10228ms): 2025-08-13T19:59:53.423601336+00:00 stderr F Trace[1991635445]: ---"Objects listed" error: 10228ms (19:59:53.423) 2025-08-13T19:59:53.423601336+00:00 stderr F Trace[1991635445]: [10.228950532s] [10.228950532s] END 2025-08-13T19:59:53.423669918+00:00 stderr F I0813 19:59:53.423649 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.449483304+00:00 stderr F I0813 19:59:53.246974 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager 2025-08-13T19:59:53.449483304+00:00 stderr F I0813 19:59:53.446162 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ... 
2025-08-13T19:59:53.463072922+00:00 stderr F I0813 19:59:53.463014 1 trace.go:236] Trace[1514590009]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.150) (total time: 10312ms): 2025-08-13T19:59:53.463072922+00:00 stderr F Trace[1514590009]: ---"Objects listed" error: 10301ms (19:59:53.452) 2025-08-13T19:59:53.463072922+00:00 stderr F Trace[1514590009]: [10.312669109s] [10.312669109s] END 2025-08-13T19:59:53.463172424+00:00 stderr F I0813 19:59:53.463153 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.525856751+00:00 stderr F E0813 19:59:53.525272 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.527213780+00:00 stderr F E0813 19:59:53.525952 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.544828 1 trace.go:236] Trace[1007991698]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.194) (total time: 10345ms): 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1007991698]: ---"Objects listed" error: 10345ms (19:59:53.539) 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1007991698]: [10.345534185s] [10.345534185s] END 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.544882 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.545145 1 trace.go:236] Trace[1260161130]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.098) (total time: 10446ms): 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1260161130]: ---"Objects listed" 
error: 10446ms (19:59:53.545) 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1260161130]: [10.446396791s] [10.446396791s] END 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.545153 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.572000517+00:00 stderr F I0813 19:59:53.569230 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-08-13T19:59:53.572000517+00:00 stderr F I0813 19:59:53.569288 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2025-08-13T19:59:53.575611490+00:00 stderr F I0813 19:59:53.575519 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8 2025-08-13T19:59:53.633472279+00:00 stderr F I0813 19:59:53.633259 1 trace.go:236] Trace[1243592164]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.099) (total time: 10533ms): 2025-08-13T19:59:53.633472279+00:00 stderr F Trace[1243592164]: ---"Objects listed" error: 10533ms (19:59:53.633) 2025-08-13T19:59:53.633472279+00:00 stderr F Trace[1243592164]: [10.533904655s] [10.533904655s] END 2025-08-13T19:59:53.633472279+00:00 stderr F I0813 19:59:53.633362 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.675137747+00:00 stderr F I0813 19:59:53.671142 1 status_controller.go:218] clusteroperator/kube-controller-manager 
diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.762284671+00:00 stderr F I0813 19:59:53.730591 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:53.762284671+00:00 stderr F I0813 19:59:53.760573 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:53.788307503+00:00 stderr F I0813 19:59:53.783223 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T19:59:53.788307503+00:00 stderr F I0813 19:59:53.783290 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T19:59:53.792922194+00:00 stderr F I0813 19:59:53.791421 1 trace.go:236] Trace[896251159]: "DeltaFIFO Pop Process" ID:openshift-config-managed/dashboard-k8s-resources-cluster,Depth:34,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:53.645) (total time: 145ms): 2025-08-13T19:59:53.792922194+00:00 stderr F Trace[896251159]: [145.698363ms] [145.698363ms] END 2025-08-13T19:59:53.804206756+00:00 stderr F I0813 19:59:53.803143 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T19:59:53.804206756+00:00 stderr F I0813 19:59:53.803216 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819120 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819168 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819235 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819240 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829044 1 base_controller.go:73] Caches are synced for SATokenSignerController 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829253 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ... 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829501 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829521 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ... 
2025-08-13T19:59:53.899573244+00:00 stderr F I0813 19:59:53.859656 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:53.899573244+00:00 stderr F I0813 19:59:53.859744 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:53.963060424+00:00 stderr F I0813 19:59:53.944658 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:53.963060424+00:00 stderr F I0813 19:59:53.950284 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:54.249369305+00:00 stderr F I0813 19:59:54.247394 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.289571031+00:00 stderr F I0813 19:59:54.251263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeControllerDegraded: 
The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:54.310357104+00:00 stderr F I0813 19:59:54.305679 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints 2025-08-13T19:59:54.337892059+00:00 stderr F E0813 19:59:54.337627 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:54.346882635+00:00 stderr F I0813 19:59:54.341597 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.358706072+00:00 stderr F E0813 19:59:54.358653 1 base_controller.go:268] InstallerController reconciliation failed: missing required 
resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8] 2025-08-13T19:59:54.402212842+00:00 stderr F I0813 19:59:54.401970 1 core.go:359] ConfigMap "openshift-kube-controller-manager/service-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/description":{},"f:openshift.io/owning-component":{}}}},"manager":"service-ca-operator","operation":"Update","time":"2025-08-13T19:59:39Z"}],"resourceVersion":null,"uid":"8a52a0ef-1908-47be-bed5-31ee169c99a3"}} 2025-08-13T19:59:54.403505199+00:00 stderr F I0813 19:59:54.403422 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' 
reason: 'ConfigMapUpdated' Updated ConfigMap/service-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:54.403505199+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:54.410230861+00:00 stderr F I0813 19:59:54.410168 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.411327932+00:00 stderr F I0813 19:59:54.411251 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:54.421904543+00:00 stderr F I0813 
19:59:54.418693 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 9 triggered by "required configmap/service-ca has changed" 2025-08-13T19:59:54.513584627+00:00 stderr F I0813 19:59:54.513520 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.514087421+00:00 stderr F I0813 19:59:54.514057 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 
'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]" 2025-08-13T19:59:54.535333137+00:00 stderr P I0813 19:59:54.535256 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/i 2025-08-13T19:59:54.535405889+00:00 stderr F 
SVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pv
kPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:59:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T19:59:54.535596484+00:00 stderr F I0813 19:59:54.535557 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:54.535596484+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:54.619393933+00:00 stderr F I0813 19:59:54.619247 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer-signer -n openshift-kube-controller-manager-operator because it changed 2025-08-13T19:59:54.619484395+00:00 stderr F I0813 19:59:54.619464 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CABundleUpdateRequired' "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a 
new cert 2025-08-13T19:59:54.619513726+00:00 stderr F E0813 19:59:54.619406 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:54.754106093+00:00 stderr F I0813 19:59:54.753338 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.805605461+00:00 stderr F I0813 19:59:54.805538 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: 
cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:55.108944468+00:00 stderr F I0813 19:59:55.107484 1 core.go:359] ConfigMap "openshift-kube-controller-manager/aggregator-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIV+a/r/KBVSQwDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2FnZ3JlZ2F0b3It\nY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX2FnZ3JlZ2F0b3ItY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwz1oeDcXqAniG+VxAzEbZbeheswm\nibqk0LwWbA9YAD2aJCC2U0gbXouz0u1dzDnEuwzslM0OFq2kW+1RmEB1drVBkCMV\ny/gKGmRafqGt31/rDe81XneOBzrUC/rNVDZq7rx4wsZ8YzYkPhj1frvlCCWyOdyB\n+nWF+ZZQHLXeSuHuVGnfGqmckiQf/R8ITZp/vniyeOED0w8B9ZdfVHNYJksR/Vn2\ngslU8a/mluPzSCyD10aHnX5c75yTzW4TBQvytjkEpDR5LBoRmHiuL64999DtWonq\niX7TdcoQY1LuHyilaXIp0TazmkRb3ycHAY/RQ3xumj9I25D8eLCwWvI8GwIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\nWtUaz8JmZMUc/fPnQTR0L7R9wakwHwYDVR0jBBgwFoAUWtUaz8JmZMUc/fPnQTR0\nL7R9wakwDQYJKoZIhvcNAQELBQADggEBAECt0YM/4XvtPuVY5pY2aAXRuthsw2zQ\nnXVvR5jDsfpaNvMXQWaMjM1B+giNKhDDeLmcF7GlWmFfccqBPicgFhUgQANE3ALN\ngq2Wttd641M6i4B3UuRewNySj1sc12wfgAcwaRcecDTCsZo5yuF90z4mXpZs7MWh\nKCxYPpAtLqi17IF1tJVz/03L+6WD5kUProTELtY7/KBJYV/GONMG+KAMBjg1ikMK\njA0HQiCZiWDuW1ZdAwuvh6oRNWoQy6w9Wksard/AnfXUFBwNgULMp56+tOOPHxtm\nu3XYTN0dPJXsimSk4KfS0by8waS7ocoXa3LgQxb/6h0ympDbcWtgD0w=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:00Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}},"f:labels":{".":{},"f:auth.openshift.io/managed-certificate-type":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:33Z"}],"resourceVersion":null,"uid":"1d0d5c4a-d5a2-488a-94e2-bf622b67cadf"}} 2025-08-13T19:59:55.116031370+00:00 stderr F I0813 19:59:55.115686 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:55.116031370+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:55.688617272+00:00 stderr F I0813 19:59:55.688549 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-signer-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n"}} 2025-08-13T19:59:55.704932237+00:00 stderr F I0813 19:59:55.690301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator: Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.714899181+00:00 stderr F I0813 
19:59:55.708597 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:55.762149678+00:00 stderr F I0813 19:59:55.761861 1 request.go:697] Waited for 1.076004342s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T19:59:56.762951177+00:00 stderr F I0813 19:59:56.762427 1 request.go:697] Waited for 1.055958541s due to client-side throttling, not priority and fairness, request: POST:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps 2025-08-13T19:59:57.074899158+00:00 stderr F I0813 19:59:57.074757 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:57.349627799+00:00 stderr F I0813 19:59:57.331149 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try 
again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.538299877+00:00 stderr F E0813 19:59:57.537920 1 base_controller.go:268] CertRotationController reconciliation failed: Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:57.612464112+00:00 stderr F I0813 19:59:57.611932 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RotationError' Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:57.729409045+00:00 stderr F I0813 19:59:57.729313 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master 
nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T19:59:58.238010793+00:00 stderr F I0813 19:59:58.236969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:58.340670809+00:00 stderr F I0813 19:59:58.333712 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:58.369221253+00:00 stderr F I0813 19:59:58.368719 1 request.go:697] Waited for 1.037197985s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa 
2025-08-13T19:59:58.427858545+00:00 stderr F I0813 19:59:58.422906 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:59.248931550+00:00 stderr F I0813 19:59:59.226352 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:59.574132010+00:00 stderr F I0813 19:59:59.566388 1 request.go:697] Waited for 1.156719703s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:00:00.643511052+00:00 stderr F I0813 20:00:00.637169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-9 -n 
openshift-kube-controller-manager because it was missing 2025-08-13T20:00:01.507455149+00:00 stderr F I0813 20:00:01.478751 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:02.034100931+00:00 stderr F I0813 20:00:02.023659 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:02.034100931+00:00 stderr F I0813 20:00:02.024573 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator: Operation cannot be fulfilled on configmaps "csr-controller-ca": the object has been modified; please apply your changes to the latest 
version and try again 2025-08-13T20:00:02.209018759+00:00 stderr F I0813 20:00:02.206166 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:02.885065476+00:00 stderr F I0813 20:00:02.884103 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:03.224935337+00:00 stderr F I0813 20:00:03.222157 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:03.729095862+00:00 stderr F I0813 20:00:03.725248 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:04.211702912+00:00 stderr F I0813 20:00:04.208754 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:04.786158933+00:00 stderr F I0813 20:00:04.781212 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 9 triggered by "required configmap/service-ca has changed" 2025-08-13T20:00:04.870048975+00:00 stderr F I0813 20:00:04.863754 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 9 created because required configmap/service-ca has changed 2025-08-13T20:00:04.884154757+00:00 stderr F W0813 20:00:04.883346 1 staticpod.go:38] revision 9 is unexpectedly already the latest available revision. This is a possible race! 
2025-08-13T20:00:04.891111825+00:00 stderr F E0813 20:00:04.889270 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 9 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769114 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.769040049 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769378 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.769361598 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769400 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.769386709 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769418 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.769407239 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769435 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.7694241 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769452 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.76944052 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769468 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769456861 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769484 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769472801 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769512 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.769496222 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769535 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769523243 +0000 UTC))" 2025-08-13T20:00:05.790969514+00:00 stderr F I0813 20:00:05.790398 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:05.790355257 +0000 UTC))" 
2025-08-13T20:00:05.790969514+00:00 stderr F I0813 20:00:05.790724 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:00:05.790702476 +0000 UTC))" 2025-08-13T20:00:05.965070208+00:00 stderr F I0813 20:00:05.964149 1 request.go:697] Waited for 1.102447146s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-9-crc 2025-08-13T20:00:06.586405825+00:00 stderr F I0813 20:00:06.585096 1 installer_controller.go:524] node crc with revision 8 is the oldest and needs new revision 9 2025-08-13T20:00:06.586405825+00:00 stderr F I0813 20:00:06.585750 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:06.586405825+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:06.586405825+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:06.586405825+00:00 stderr F TargetRevision: (int32) 9, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedRevision: (int32) 8, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0020ea588)(2024-06-27 13:18:10 +0000 UTC), 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:06.586405825+00:00 stderr F (string) (len=2059) "installer: ry-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) 
(len=8 cap=8) {\n (string) (len=27) \"kube-controller-manager-pod\",\n (string) (len=6) \"config\",\n (string) (len=32) \"cluster-policy-controller-config\",\n (string) (len=29) \"controller-manager-kubeconfig\",\n (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=10) \"service-ca\",\n (string) (len=15) \"recycler-config\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"cloud-config\"\n },\n CertSecretNames: ([]string) (len=2 cap=2) {\n (string) (len=39) \"kube-controller-manager-client-cert-key\",\n (string) (len=10) \"csr-signer\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0627 13:15:36.458960 1 cmd.go:409] Getting controller reference for node crc\nI0627 13:15:36.472798 1 cmd.go:422] Waiting for installer revisions to settle for node crc\nI0627 13:15:36.476730 1 cmd.go:514] Waiting additional period after revisions have settled for node crc\nI0627 13:16:06.477243 1 cmd.go:520] Getting installer pods for node crc\nF0627 13:16:06.480777 1 cmd.go:105] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:00:06.586405825+00:00 stderr F } 
2025-08-13T20:00:06.586405825+00:00 stderr F } 2025-08-13T20:00:06.687925820+00:00 stderr F I0813 20:00:06.683522 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 8 to 9 because node crc with revision 8 is the oldest 2025-08-13T20:00:06.720211090+00:00 stderr F I0813 20:00:06.711726 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:06.760673034+00:00 stderr F I0813 20:00:06.760569 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True 
("NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" 2025-08-13T20:00:06.836714792+00:00 stderr F I0813 20:00:06.834742 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-9-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:07.767166432+00:00 stderr F I0813 20:00:07.764212 1 request.go:697] Waited for 1.027872228s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:00:08.456826607+00:00 stderr P I0813 20:00:08.456025 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQU 2025-08-13T20:00:08.456932090+00:00 stderr F 
AA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi
4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:06Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:00:08.456932090+00:00 stderr F I0813 20:00:08.456716 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T20:00:08.456932090+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.966021067+00:00 stderr F I0813 20:00:08.962975 1 request.go:697] Waited for 1.36289333s due to client-side throttling, not priority and fairness, request: POST:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods 2025-08-13T20:00:09.447679141+00:00 stderr F I0813 20:00:09.447065 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-9-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:10.144016496+00:00 stderr F I0813 20:00:10.130744 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:10.628131900+00:00 stderr F I0813 20:00:10.557295 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new 
revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:10.855083031+00:00 stderr F I0813 20:00:10.854579 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:00:10.862919385+00:00 stderr F I0813 20:00:10.861255 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at 
revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:11.266415079+00:00 stderr F E0813 20:00:11.265447 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:11.937577357+00:00 stderr F I0813 20:00:11.937279 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:11.973540762+00:00 stderr F I0813 20:00:11.970921 1 request.go:697] Waited for 1.066126049s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config 2025-08-13T20:00:12.912112395+00:00 stderr F I0813 20:00:12.886263 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:16.706302621+00:00 stderr F I0813 20:00:16.705335 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:17.450477091+00:00 stderr F I0813 20:00:17.450091 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is 
not finished, but in Pending phase 2025-08-13T20:00:19.774153637+00:00 stderr F I0813 20:00:19.769354 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:19.935640232+00:00 stderr F I0813 20:00:19.935570 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:00:19.998082432+00:00 stderr F I0813 20:00:19.998012 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress 
because installer is not finished, but in Pending phase 2025-08-13T20:00:22.011667596+00:00 stderr F I0813 20:00:22.006462 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:23.446037179+00:00 stderr F I0813 20:00:23.444714 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:24.916564830+00:00 stderr F I0813 20:00:24.914283 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:26.792434459+00:00 stderr F I0813 20:00:26.784906 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:27.694565663+00:00 stderr F I0813 20:00:27.690687 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 10 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:27.970950544+00:00 stderr F I0813 20:00:27.969903 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:29.012869353+00:00 stderr F I0813 20:00:29.011822 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-10 -n openshift-kube-controller-manager 
because it was missing 2025-08-13T20:00:29.746921794+00:00 stderr F I0813 20:00:29.744278 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:30.213415615+00:00 stderr F I0813 20:00:30.212313 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:30.800346431+00:00 stderr F I0813 20:00:30.785484 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:31.266697908+00:00 stderr F I0813 20:00:31.250574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:31.794188219+00:00 stderr F I0813 20:00:31.793698 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.017819839+00:00 stderr F I0813 20:00:33.017439 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.184819941+00:00 stderr F I0813 20:00:33.179616 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.622176352+00:00 stderr F I0813 20:00:33.622117 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:34.201159381+00:00 stderr F I0813 20:00:34.200690 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", 
Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:34.883284441+00:00 stderr F I0813 20:00:34.881509 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:35.514679035+00:00 stderr F I0813 20:00:35.513692 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 10 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:35.960155817+00:00 stderr F I0813 20:00:35.958416 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 10 created because optional secret/serving-cert has changed 2025-08-13T20:00:36.965344168+00:00 stderr F I0813 20:00:36.955393 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:36.965344168+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:36.965344168+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:36.965344168+00:00 stderr F TargetRevision: (int32) 10, 
2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedRevision: (int32) 8, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001ff4780)(2024-06-27 13:18:10 +0000 UTC), 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:36.965344168+00:00 stderr F (string) (len=2059) "installer: ry-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\n (string) (len=27) \"kube-controller-manager-pod\",\n (string) (len=6) \"config\",\n (string) (len=32) \"cluster-policy-controller-config\",\n (string) (len=29) \"controller-manager-kubeconfig\",\n (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=10) \"service-ca\",\n (string) (len=15) \"recycler-config\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"cloud-config\"\n },\n CertSecretNames: ([]string) (len=2 cap=2) {\n (string) (len=39) \"kube-controller-manager-client-cert-key\",\n (string) (len=10) \"csr-signer\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 
2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0627 13:15:36.458960 1 cmd.go:409] Getting controller reference for node crc\nI0627 13:15:36.472798 1 cmd.go:422] Waiting for installer revisions to settle for node crc\nI0627 13:15:36.476730 1 cmd.go:514] Waiting additional period after revisions have settled for node crc\nI0627 13:16:06.477243 1 cmd.go:520] Getting installer pods for node crc\nF0627 13:16:06.480777 1 cmd.go:105] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:00:36.965344168+00:00 stderr F } 2025-08-13T20:00:36.965344168+00:00 stderr F } 2025-08-13T20:00:36.965344168+00:00 stderr F because new revision pending 2025-08-13T20:00:37.063979591+00:00 stderr F I0813 20:00:37.049572 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.084022262+00:00 stderr F I0813 20:00:37.082999 1 status_controller.go:218] 
clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.084022262+00:00 stderr F I0813 20:00:37.083556 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10" 2025-08-13T20:00:37.123864098+00:00 stderr F E0813 20:00:37.120427 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on 
clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:37.318318893+00:00 stderr F I0813 20:00:37.316069 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-10-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:38.190099651+00:00 stderr F I0813 20:00:38.189546 1 request.go:697] Waited for 1.147826339s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa 2025-08-13T20:00:39.723621538+00:00 stderr F I0813 20:00:39.722118 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-10-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:40.578344999+00:00 stderr F I0813 20:00:40.573345 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:45.563774882+00:00 stderr F I0813 20:00:45.553082 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:46.701158604+00:00 stderr F I0813 20:00:46.694673 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 
2025-08-13T20:00:47.187761398+00:00 stderr F I0813 20:00:47.184743 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:56.229393959+00:00 stderr P I0813 20:00:56.223445 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxM 2025-08-13T20:00:56.229760529+00:00 stderr F 
TQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6
lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:00:56.239003423+00:00 stderr F I0813 20:00:56.231111 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T20:00:56.239003423+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:00.010151258+00:00 stderr F I0813 20:00:59.999206 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.999078302 +0000 UTC))" 2025-08-13T20:01:00.012078983+00:00 stderr F I0813 20:01:00.011490 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.008929353 +0000 UTC))" 2025-08-13T20:01:00.013182774+00:00 stderr F I0813 20:01:00.013038 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.012049852 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013881 1 tlsconfig.go:178] "Loaded client CA" 
index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.013857453 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013939 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.013921695 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013963 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.013946756 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.014016 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014000307 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.014037 1 
tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014023008 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014103 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.014042849 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014137 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.014118021 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014159 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014147682 +0000 UTC))" 2025-08-13T20:01:00.031031783+00:00 stderr F I0813 20:01:00.030593 1 tlsconfig.go:200] "Loaded 
serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:01:00.030551769 +0000 UTC))" 2025-08-13T20:01:00.031031783+00:00 stderr F I0813 20:01:00.031018 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:01:00.030988972 +0000 UTC))" 2025-08-13T20:01:03.266043461+00:00 stderr P I0813 20:01:03.265053 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxM 2025-08-13T20:01:03.266368450+00:00 stderr F 
TQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6
lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:01:03.266368450+00:00 stderr F I0813 20:01:03.266008 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/client-ca -n openshift-kube-controller-manager: Operation cannot be fulfilled on configmaps "client-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:05.228984283+00:00 stderr F I0813 20:01:05.179740 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:06.133168995+00:00 stderr F I0813 20:01:06.125619 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:01:07.023349078+00:00 stderr F I0813 20:01:06.999583 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:07.023349078+00:00 stderr F I0813 20:01:07.002684 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:07.263262658+00:00 stderr F I0813 20:01:07.263101 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:01:07.839947862+00:00 stderr F I0813 20:01:07.743713 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:22.554548162+00:00 stderr F I0813 20:01:22.551707 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:23.629342647+00:00 stderr F I0813 20:01:23.629273 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:24.026047149+00:00 stderr F I0813 20:01:24.025682 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:26.338292640+00:00 stderr F I0813 20:01:26.337632 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.340007288+00:00 stderr F I0813 20:01:26.338395 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:26.340007288+00:00 stderr F I0813 20:01:26.339666 1 dynamic_serving_content.go:113] "Loaded a new 
cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.350195569+00:00 stderr F I0813 20:01:26.349343 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:26.348725367 +0000 UTC))" 2025-08-13T20:01:26.351715732+00:00 stderr F I0813 20:01:26.350769 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:26.350746725 +0000 UTC))" 2025-08-13T20:01:26.351715732+00:00 stderr F I0813 20:01:26.351583 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:26.351257289 +0000 UTC))" 2025-08-13T20:01:26.352549576+00:00 stderr F I0813 20:01:26.351755 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 
2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:26.351730833 +0000 UTC))" 2025-08-13T20:01:26.352549576+00:00 stderr F I0813 20:01:26.352233 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352196976 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352607 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352525505 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352745 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352634238 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352866 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 
19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352755722 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352956 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:26.352939397 +0000 UTC))" 2025-08-13T20:01:26.353728070+00:00 stderr F I0813 20:01:26.352981 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:26.352967808 +0000 UTC))" 2025-08-13T20:01:26.353728070+00:00 stderr F I0813 20:01:26.353666 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.353648137 +0000 UTC))" 2025-08-13T20:01:26.359825203+00:00 stderr F I0813 20:01:26.359621 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] 
validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-08-13 20:01:26.359510245 +0000 UTC))" 2025-08-13T20:01:26.361272905+00:00 stderr F I0813 20:01:26.360170 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:01:26.360149493 +0000 UTC))" 2025-08-13T20:01:29.161403988+00:00 stderr F I0813 20:01:29.161063 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="b677063b32bdea5d0626c22feb3e33cd81922e655466381de19492c7e38d993f", new="048a1a430ed1deee9ccf052d51f855e6e8f2fee2d531560f57befbbfad23cd9d") 2025-08-13T20:01:29.161679096+00:00 stderr F W0813 20:01:29.161655 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:29.161911722+00:00 stderr F I0813 20:01:29.161888 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="b10e6fe375e1e85006c2f2c6e133de2da450246bee57a078c8f61fda3a384f1b", new="4311dcc9562700df28c73989dc84c69933e91c13e7ee5e5588d7570593276c97") 2025-08-13T20:01:29.163290022+00:00 stderr F I0813 20:01:29.163222 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:29.163387054+00:00 stderr F I0813 20:01:29.163330 1 base_controller.go:172] Shutting down GarbageCollectorWatcherController ... 
2025-08-13T20:01:29.163417905+00:00 stderr F I0813 20:01:29.163352 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:29.163535239+00:00 stderr F I0813 20:01:29.163516 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:29.164066004+00:00 stderr F I0813 20:01:29.164005 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:29.164111315+00:00 stderr F I0813 20:01:29.164014 1 base_controller.go:172] Shutting down SATokenSignerController ... 2025-08-13T20:01:29.164139656+00:00 stderr F I0813 20:01:29.164101 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164165 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164221 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164237 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:01:29.164692262+00:00 stderr F I0813 20:01:29.164607 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:29.164961759+00:00 stderr F I0813 20:01:29.164820 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:01:29.165095503+00:00 stderr F I0813 20:01:29.165074 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:29.165144755+00:00 stderr F I0813 20:01:29.165131 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:29.165466574+00:00 stderr F I0813 20:01:29.165378 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:29.165558206+00:00 stderr F I0813 20:01:29.165531 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:29.165771282+00:00 stderr F I0813 20:01:29.165666 1 base_controller.go:114] Shutting down worker of GarbageCollectorWatcherController controller ... 2025-08-13T20:01:29.165917257+00:00 stderr F I0813 20:01:29.165864 1 base_controller.go:104] All GarbageCollectorWatcherController workers have been terminated 2025-08-13T20:01:29.166484343+00:00 stderr F E0813 20:01:29.166374 1 base_controller.go:268] TargetConfigController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166426 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166437 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166451 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166463 1 base_controller.go:114] Shutting down worker of SATokenSignerController controller ... 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166469 1 base_controller.go:104] All SATokenSignerController workers have been terminated 2025-08-13T20:01:29.166537154+00:00 stderr F I0813 20:01:29.166496 1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ... 
2025-08-13T20:01:29.166537154+00:00 stderr F E0813 20:01:29.166497 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": context canceled, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:01:29.166537154+00:00 stderr F I0813 20:01:29.166502 1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated 2025-08-13T20:01:29.166556025+00:00 stderr F I0813 20:01:29.166528 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:01:29.166571335+00:00 stderr F I0813 20:01:29.166560 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:01:29.166586586+00:00 stderr F I0813 20:01:29.166579 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166587 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166586 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166593 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166606 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166687 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166704 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:29.166735550+00:00 stderr F I0813 20:01:29.166721 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 2025-08-13T20:01:29.166735550+00:00 stderr F I0813 20:01:29.166728 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:29.166747810+00:00 stderr F I0813 20:01:29.166733 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:29.167183443+00:00 stderr F I0813 20:01:29.167058 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 
2025-08-13T20:01:29.167183443+00:00 stderr F I0813 20:01:29.167087 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:29.167311166+00:00 stderr F I0813 20:01:29.167260 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:29.167311166+00:00 stderr F I0813 20:01:29.167303 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-controller-manager controller ... 2025-08-13T20:01:29.167327837+00:00 stderr F I0813 20:01:29.167315 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:29.167327837+00:00 stderr F I0813 20:01:29.167322 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167289 1 base_controller.go:172] Shutting down StatusSyncer_kube-controller-manager ... 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167331 1 base_controller.go:150] All StatusSyncer_kube-controller-manager post start hooks have been terminated 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167336 1 base_controller.go:104] All StatusSyncer_kube-controller-manager workers have been terminated 2025-08-13T20:01:29.167354207+00:00 stderr F I0813 20:01:29.167336 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:29.167366448+00:00 stderr F I0813 20:01:29.167351 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167363 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167369 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167374 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:29.167391069+00:00 stderr F I0813 20:01:29.167381 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167393 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167393 1 builder.go:329] server exited 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167394 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:29.167416179+00:00 stderr F I0813 20:01:29.167408 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:29.167428310+00:00 stderr F I0813 20:01:29.167413 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:29.167428310+00:00 stderr F I0813 20:01:29.167418 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:29.167441000+00:00 stderr F I0813 20:01:29.167429 1 base_controller.go:114] Shutting down worker of KubeControllerManagerStaticResources controller ... 2025-08-13T20:01:29.167441000+00:00 stderr F I0813 20:01:29.167437 1 base_controller.go:104] All KubeControllerManagerStaticResources workers have been terminated 2025-08-13T20:01:29.167456530+00:00 stderr F I0813 20:01:29.167447 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:29.167498632+00:00 stderr F I0813 20:01:29.167478 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 
2025-08-13T20:01:29.167556643+00:00 stderr F I0813 20:01:29.167541 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:29.167690917+00:00 stderr F I0813 20:01:29.167656 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:29.167728938+00:00 stderr F I0813 20:01:29.167658 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:01:29.167832991+00:00 stderr F I0813 20:01:29.167726 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:29.167897793+00:00 stderr F I0813 20:01:29.167583 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:29.167976565+00:00 stderr F I0813 20:01:29.167962 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:01:29.168040417+00:00 stderr F I0813 20:01:29.168007 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:29.168077568+00:00 stderr F I0813 20:01:29.168065 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:29.168108789+00:00 stderr F I0813 20:01:29.168097 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:29.168180191+00:00 stderr F I0813 20:01:29.168140 1 base_controller.go:114] Shutting down worker of PruneController controller ... 
2025-08-13T20:01:29.168180191+00:00 stderr F I0813 20:01:29.168170 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:01:29.168193111+00:00 stderr F I0813 20:01:29.168177 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:29.168238023+00:00 stderr F E0813 20:01:29.168082 1 base_controller.go:268] StaticPodStateController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:29.168337335+00:00 stderr F I0813 20:01:29.168320 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:29.168393597+00:00 stderr F I0813 20:01:29.168376 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:29.168426248+00:00 stderr F I0813 20:01:29.168414 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:29.168454419+00:00 stderr F I0813 20:01:29.168227 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:29.168508740+00:00 stderr F I0813 20:01:29.168495 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:29.168556802+00:00 stderr F I0813 20:01:29.168247 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:29.168592053+00:00 stderr F I0813 20:01:29.168580 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:29.169355535+00:00 stderr F I0813 20:01:29.169299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 10 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc": context canceled 2025-08-13T20:01:29.169656083+00:00 stderr F E0813 20:01:29.169600 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": context canceled 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170025 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170055 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170064 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:29.175724656+00:00 stderr F E0813 20:01:29.174684 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc": context canceled 2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174870 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 
2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174922 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174962 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:01:30.877494640+00:00 stderr F W0813 20:01:30.875423 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015133657715033070 5ustar zuulzuul././@LongLink0000644000000000000000000000031200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015133657734033071 5ustar zuulzuul././@LongLink0000644000000000000000000000031700000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000000134015133657715033070 0ustar zuulzuul2025-08-13T20:15:02.232689507+00:00 stderr F time="2025-08-13T20:15:02Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/olm-operator-heap-hqmzq" 2025-08-13T20:15:02.544616319+00:00 stderr F time="2025-08-13T20:15:02Z" level=info msg="Successfully 
created configMap openshift-operator-lifecycle-manager/catalog-operator-heap-bk2n8" 2025-08-13T20:15:02.561204695+00:00 stderr F time="2025-08-13T20:15:02Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/catalog-operator-heap-pc9j9" 2025-08-13T20:15:02.585948214+00:00 stderr F time="2025-08-13T20:15:02Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/olm-operator-heap-7svh2" ././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015133657715033071 5ustar zuulzuul././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015133657734033072 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000004641115133657715033101 0ustar zuulzuul2025-08-13T20:01:08.774138269+00:00 stderr F I0813 20:01:08.766484 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc0005f2640 cert-dir:0xc0005f2820 cert-secrets:0xc0005f25a0 configmaps:0xc0005f2140 namespace:0xc0002d9f40 optional-cert-configmaps:0xc0005f2780 
optional-configmaps:0xc0005f2280 optional-secrets:0xc0005f21e0 pod:0xc0005f2000 pod-manifest-dir:0xc0005f23c0 resource-dir:0xc0005f2320 revision:0xc0002d9ea0 secrets:0xc0005f20a0 v:0xc0005f3220] [0xc0005f3220 0xc0002d9ea0 0xc0002d9f40 0xc0005f2000 0xc0005f2320 0xc0005f23c0 0xc0005f2140 0xc0005f2280 0xc0005f20a0 0xc0005f21e0 0xc0005f2820 0xc0005f2640 0xc0005f2780 0xc0005f25a0] [] map[cert-configmaps:0xc0005f2640 cert-dir:0xc0005f2820 cert-secrets:0xc0005f25a0 configmaps:0xc0005f2140 help:0xc0005f35e0 kubeconfig:0xc0002d9e00 log-flush-frequency:0xc0005f3180 namespace:0xc0002d9f40 optional-cert-configmaps:0xc0005f2780 optional-cert-secrets:0xc0005f26e0 optional-configmaps:0xc0005f2280 optional-secrets:0xc0005f21e0 pod:0xc0005f2000 pod-manifest-dir:0xc0005f23c0 pod-manifests-lock-file:0xc0005f2500 resource-dir:0xc0005f2320 revision:0xc0002d9ea0 secrets:0xc0005f20a0 timeout-duration:0xc0005f2460 v:0xc0005f3220 vmodule:0xc0005f32c0] [0xc0002d9e00 0xc0002d9ea0 0xc0002d9f40 0xc0005f2000 0xc0005f20a0 0xc0005f2140 0xc0005f21e0 0xc0005f2280 0xc0005f2320 0xc0005f23c0 0xc0005f2460 0xc0005f2500 0xc0005f25a0 0xc0005f2640 0xc0005f26e0 0xc0005f2780 0xc0005f2820 0xc0005f3180 0xc0005f3220 0xc0005f32c0 0xc0005f35e0] [0xc0005f2640 0xc0005f2820 0xc0005f25a0 0xc0005f2140 0xc0005f35e0 0xc0002d9e00 0xc0005f3180 0xc0002d9f40 0xc0005f2780 0xc0005f26e0 0xc0005f2280 0xc0005f21e0 0xc0005f2000 0xc0005f23c0 0xc0005f2500 0xc0005f2320 0xc0002d9ea0 0xc0005f20a0 0xc0005f2460 0xc0005f3220 0xc0005f32c0] map[104:0xc0005f35e0 118:0xc0005f3220] [] -1 0 0xc0009ffb30 true 0x73b100 []} 2025-08-13T20:01:08.774138269+00:00 stderr F I0813 20:01:08.770143 1 cmd.go:92] (*installerpod.InstallOptions)(0xc0000dc820)({ 2025-08-13T20:01:08.774138269+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:01:08.774138269+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:01:08.774138269+00:00 stderr F Revision: (string) (len=2) "10", 2025-08-13T20:01:08.774138269+00:00 stderr F NodeName: (string) "", 
2025-08-13T20:01:08.774138269+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager", 2025-08-13T20:01:08.774138269+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:01:08.774138269+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=27) "service-account-private-key", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=32) "cluster-policy-controller-config", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=10) "service-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=15) "recycler-config" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=12) "cloud-config" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F 
(string) (len=39) "kube-controller-manager-client-cert-key", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=10) "csr-signer" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:01:08.774138269+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=9) "client-ca" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", 2025-08-13T20:01:08.774138269+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.774138269+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:01:08.774138269+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:01:08.774138269+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:01:08.774138269+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:01:08.774138269+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:01:08.774138269+00:00 stderr F }) 2025-08-13T20:01:08.830058183+00:00 stderr F I0813 20:01:08.828941 1 cmd.go:409] Getting controller reference for node crc 2025-08-13T20:01:09.501531129+00:00 stderr F I0813 20:01:09.497182 1 cmd.go:422] Waiting for installer revisions to settle for node crc 2025-08-13T20:01:09.701953794+00:00 stderr F I0813 20:01:09.687538 1 cmd.go:502] Pod container: installer state for node crc is not terminated, waiting 
2025-08-13T20:01:20.207197869+00:00 stderr F I0813 20:01:20.203486 1 cmd.go:502] Pod container: installer state for node crc is not terminated, waiting 2025-08-13T20:01:31.558920221+00:00 stderr F I0813 20:01:31.543103 1 cmd.go:514] Waiting additional period after revisions have settled for node crc 2025-08-13T20:02:01.544989289+00:00 stderr F I0813 20:02:01.544661 1 cmd.go:520] Getting installer pods for node crc 2025-08-13T20:02:03.699434270+00:00 stderr F I0813 20:02:03.699248 1 cmd.go:538] Latest installer revision for node crc is: 10 2025-08-13T20:02:03.699434270+00:00 stderr F I0813 20:02:03.699332 1 cmd.go:427] Querying kubelet version for node crc 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744093 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744255 1 cmd.go:289] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744967 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744985 1 cmd.go:225] Getting secrets ... 2025-08-13T20:02:11.197718897+00:00 stderr F I0813 20:02:11.192487 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-10 2025-08-13T20:02:15.107310794+00:00 stderr F I0813 20:02:15.095719 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-10 2025-08-13T20:02:17.830040326+00:00 stderr F I0813 20:02:17.813908 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-10 2025-08-13T20:02:17.830040326+00:00 stderr F I0813 20:02:17.818969 1 cmd.go:238] Getting config maps ... 
2025-08-13T20:02:20.454584586+00:00 stderr F I0813 20:02:20.454385 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-10 2025-08-13T20:02:21.108877962+00:00 stderr F I0813 20:02:21.107944 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-10 2025-08-13T20:02:21.667184459+00:00 stderr F I0813 20:02:21.667055 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-10 2025-08-13T20:02:23.567610131+00:00 stderr F I0813 20:02:23.567409 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-10 2025-08-13T20:02:24.259894000+00:00 stderr F I0813 20:02:24.250233 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-10 2025-08-13T20:02:25.669894114+00:00 stderr F I0813 20:02:25.666308 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-10 2025-08-13T20:02:25.939952998+00:00 stderr F I0813 20:02:25.939182 1 copy.go:60] Got configMap openshift-kube-controller-manager/service-ca-10 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.164324 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-10 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174281 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-10: configmaps "cloud-config-10" not found 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174311 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174970 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/ca.crt" ... 
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.175600 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177183 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177519 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177704 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177760 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.pub" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177939 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.key" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178068 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178119 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.crt" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178198 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.key" ... 
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178285 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178454 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config/config.yaml" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178607 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178729 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config/config.yaml" ... 2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187037 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig" ... 2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187129 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig/kubeconfig" ... 2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187265 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187323 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187513 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod" ... 
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187632 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187875 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/pod.yaml" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188023 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/version" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188169 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188321 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config/recycler-pod.yaml" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188411 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188474 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca/ca-bundle.crt" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188555 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188662 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca/ca-bundle.crt" ... 
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188758 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188770 1 cmd.go:225] Getting secrets ... 2025-08-13T20:02:26.880541580+00:00 stderr F I0813 20:02:26.880477 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer 2025-08-13T20:02:31.905193168+00:00 stderr F I0813 20:02:31.903646 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.920573976+00:00 stderr F I0813 20:02:31.917909 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.972818077+00:00 stderr F I0813 20:02:31.972687 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.242241153+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.244102906+00:00 stderr F W0813 20:02:32.243835 1 
recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.244195099+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e/installer/0.log <==
2026-01-20T10:56:29.608868505+00:00 stderr F I0120 10:56:29.606271 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc0007b1860 cert-dir:0xc0007b1a40 cert-secrets:0xc0007b17c0 configmaps:0xc0007b1360 namespace:0xc0007b1180 optional-cert-configmaps:0xc0007b19a0 optional-cert-secrets:0xc0007b1900 optional-configmaps:0xc0007b14a0 optional-secrets:0xc0007b1400 pod:0xc0007b1220 pod-manifest-dir:0xc0007b15e0 resource-dir:0xc0007b1540 revision:0xc0007b10e0 secrets:0xc0007b12c0 v:0xc0007e4e60] [0xc0007e4e60 0xc0007b10e0 0xc0007b1180 0xc0007b1220 0xc0007b1540 0xc0007b15e0 0xc0007b1360 0xc0007b14a0 0xc0007b12c0 0xc0007b1400 0xc0007b1a40 0xc0007b1860 0xc0007b19a0 0xc0007b17c0 0xc0007b1900] [] map[cert-configmaps:0xc0007b1860 cert-dir:0xc0007b1a40 cert-secrets:0xc0007b17c0 configmaps:0xc0007b1360 help:0xc0007e5220 kubeconfig:0xc0007b1040 log-flush-frequency:0xc0007e4dc0 namespace:0xc0007b1180 optional-cert-configmaps:0xc0007b19a0 optional-cert-secrets:0xc0007b1900 optional-configmaps:0xc0007b14a0 optional-secrets:0xc0007b1400 pod:0xc0007b1220 pod-manifest-dir:0xc0007b15e0 pod-manifests-lock-file:0xc0007b1720 resource-dir:0xc0007b1540 revision:0xc0007b10e0 secrets:0xc0007b12c0 timeout-duration:0xc0007b1680 v:0xc0007e4e60 
vmodule:0xc0007e4f00] [0xc0007b1040 0xc0007b10e0 0xc0007b1180 0xc0007b1220 0xc0007b12c0 0xc0007b1360 0xc0007b1400 0xc0007b14a0 0xc0007b1540 0xc0007b15e0 0xc0007b1680 0xc0007b1720 0xc0007b17c0 0xc0007b1860 0xc0007b1900 0xc0007b19a0 0xc0007b1a40 0xc0007e4dc0 0xc0007e4e60 0xc0007e4f00 0xc0007e5220] [0xc0007b1860 0xc0007b1a40 0xc0007b17c0 0xc0007b1360 0xc0007e5220 0xc0007b1040 0xc0007e4dc0 0xc0007b1180 0xc0007b19a0 0xc0007b1900 0xc0007b14a0 0xc0007b1400 0xc0007b1220 0xc0007b15e0 0xc0007b1720 0xc0007b1540 0xc0007b10e0 0xc0007b12c0 0xc0007b1680 0xc0007e4e60 0xc0007e4f00] map[104:0xc0007e5220 118:0xc0007e4e60] [] -1 0 0xc0003b8c00 true 0xa51380 []} 2026-01-20T10:56:29.608868505+00:00 stderr F I0120 10:56:29.606673 1 cmd.go:93] (*installerpod.InstallOptions)(0xc0007ba340)({ 2026-01-20T10:56:29.608868505+00:00 stderr F KubeConfig: (string) "", 2026-01-20T10:56:29.608868505+00:00 stderr F KubeClient: (kubernetes.Interface) , 2026-01-20T10:56:29.608868505+00:00 stderr F Revision: (string) (len=2) "13", 2026-01-20T10:56:29.608868505+00:00 stderr F NodeName: (string) "", 2026-01-20T10:56:29.608868505+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver", 2026-01-20T10:56:29.608868505+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod", 2026-01-20T10:56:29.608868505+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) { 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=11) "etcd-client", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=34) "localhost-recovery-serving-certkey", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2026-01-20T10:56:29.608868505+00:00 stderr F }, 2026-01-20T10:56:29.608868505+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) { 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=17) "encryption-config", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "webhook-authenticator" 
2026-01-20T10:56:29.608868505+00:00 stderr F }, 2026-01-20T10:56:29.608868505+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=18) "kube-apiserver-pod", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=6) "config", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=15) "etcd-serving-ca", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=18) "kubelet-serving-ca", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=22) "sa-token-signing-certs", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies" 2026-01-20T10:56:29.608868505+00:00 stderr F }, 2026-01-20T10:56:29.608868505+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) { 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=14) "oauth-metadata", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=12) "cloud-config", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca" 2026-01-20T10:56:29.608868505+00:00 stderr F }, 2026-01-20T10:56:29.608868505+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) { 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=17) "aggregator-client", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=31) "service-network-serving-certkey", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=33) "bound-service-account-signing-key", 
2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=14) "kubelet-client", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=16) "node-kubeconfigs" 2026-01-20T10:56:29.608868505+00:00 stderr F }, 2026-01-20T10:56:29.608868505+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) { 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=17) "user-serving-cert", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-000", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-001", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-002", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-003", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-004", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-005", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-006", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-007", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-008", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=21) "user-serving-cert-009" 2026-01-20T10:56:29.608868505+00:00 stderr F }, 2026-01-20T10:56:29.608868505+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=9) "client-ca", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig", 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig" 
2026-01-20T10:56:29.608868505+00:00 stderr F }, 2026-01-20T10:56:29.608868505+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2026-01-20T10:56:29.608868505+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2026-01-20T10:56:29.608868505+00:00 stderr F }, 2026-01-20T10:56:29.608868505+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", 2026-01-20T10:56:29.608868505+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2026-01-20T10:56:29.608868505+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2026-01-20T10:56:29.608868505+00:00 stderr F Timeout: (time.Duration) 2m0s, 2026-01-20T10:56:29.608868505+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2026-01-20T10:56:29.608868505+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2026-01-20T10:56:29.608868505+00:00 stderr F KubeletVersion: (string) "" 2026-01-20T10:56:29.608868505+00:00 stderr F }) 2026-01-20T10:56:29.609704998+00:00 stderr F I0120 10:56:29.609641 1 cmd.go:410] Getting controller reference for node crc 2026-01-20T10:56:29.625990587+00:00 stderr F I0120 10:56:29.625857 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2026-01-20T10:56:29.698556208+00:00 stderr F I0120 10:56:29.698434 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2026-01-20T10:56:59.698798347+00:00 stderr F I0120 10:56:59.698702 1 cmd.go:521] Getting installer pods for node crc 2026-01-20T10:56:59.704945981+00:00 stderr F I0120 10:56:59.704869 1 cmd.go:539] Latest installer revision for node crc is: 13 2026-01-20T10:56:59.704945981+00:00 stderr F I0120 10:56:59.704897 1 cmd.go:428] Querying kubelet version for node crc 2026-01-20T10:56:59.707896929+00:00 stderr F I0120 10:56:59.707833 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2026-01-20T10:56:59.707896929+00:00 stderr F I0120 10:56:59.707852 1 
cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13" ... 2026-01-20T10:56:59.708458503+00:00 stderr F I0120 10:56:59.708407 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13" ... 2026-01-20T10:56:59.708458503+00:00 stderr F I0120 10:56:59.708427 1 cmd.go:226] Getting secrets ... 2026-01-20T10:56:59.710497827+00:00 stderr F I0120 10:56:59.710433 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-13 2026-01-20T10:56:59.712495580+00:00 stderr F I0120 10:56:59.712440 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-13 2026-01-20T10:56:59.714203815+00:00 stderr F I0120 10:56:59.714153 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-13 2026-01-20T10:56:59.717087881+00:00 stderr F I0120 10:56:59.715829 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-13: secrets "encryption-config-13" not found 2026-01-20T10:56:59.717364209+00:00 stderr F I0120 10:56:59.717301 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-13 2026-01-20T10:56:59.717364209+00:00 stderr F I0120 10:56:59.717338 1 cmd.go:239] Getting config maps ... 
2026-01-20T10:56:59.718673214+00:00 stderr F I0120 10:56:59.718614 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-13 2026-01-20T10:56:59.720262555+00:00 stderr F I0120 10:56:59.720181 1 copy.go:60] Got configMap openshift-kube-apiserver/config-13 2026-01-20T10:56:59.721830147+00:00 stderr F I0120 10:56:59.721736 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-13 2026-01-20T10:56:59.904612991+00:00 stderr F I0120 10:56:59.904225 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-13 2026-01-20T10:57:00.103125020+00:00 stderr F I0120 10:57:00.103022 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-13 2026-01-20T10:57:00.303031607+00:00 stderr F I0120 10:57:00.302927 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-13 2026-01-20T10:57:00.502690237+00:00 stderr F I0120 10:57:00.502605 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-13 2026-01-20T10:57:00.703448746+00:00 stderr F I0120 10:57:00.703365 1 copy.go:60] Got configMap openshift-kube-apiserver/sa-token-signing-certs-13 2026-01-20T10:57:00.902940302+00:00 stderr F I0120 10:57:00.902778 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-13: configmaps "cloud-config-13" not found 2026-01-20T10:57:01.103564068+00:00 stderr F I0120 10:57:01.103474 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-13 2026-01-20T10:57:01.303309640+00:00 stderr F I0120 10:57:01.303231 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-13 2026-01-20T10:57:01.303309640+00:00 stderr F I0120 10:57:01.303266 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client" ... 
2026-01-20T10:57:01.303744641+00:00 stderr F I0120 10:57:01.303720 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client/tls.crt" ...
2026-01-20T10:57:01.303906045+00:00 stderr F I0120 10:57:01.303885 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client/tls.key" ...
2026-01-20T10:57:01.304110642+00:00 stderr F I0120 10:57:01.303999 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token" ...
2026-01-20T10:57:01.304127202+00:00 stderr F I0120 10:57:01.304111 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/ca.crt" ...
2026-01-20T10:57:01.304244225+00:00 stderr F I0120 10:57:01.304214 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/namespace" ...
2026-01-20T10:57:01.304320187+00:00 stderr F I0120 10:57:01.304293 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/service-ca.crt" ...
2026-01-20T10:57:01.304394029+00:00 stderr F I0120 10:57:01.304375 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/token" ...
2026-01-20T10:57:01.304470951+00:00 stderr F I0120 10:57:01.304453 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey" ...
2026-01-20T10:57:01.304520852+00:00 stderr F I0120 10:57:01.304504 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey/tls.crt" ...
2026-01-20T10:57:01.304602635+00:00 stderr F I0120 10:57:01.304583 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey/tls.key" ...
2026-01-20T10:57:01.304678807+00:00 stderr F I0120 10:57:01.304661 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/webhook-authenticator" ...
2026-01-20T10:57:01.304726378+00:00 stderr F I0120 10:57:01.304709 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/webhook-authenticator/kubeConfig" ...
2026-01-20T10:57:01.304818540+00:00 stderr F I0120 10:57:01.304797 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/bound-sa-token-signing-certs" ...
2026-01-20T10:57:01.304910753+00:00 stderr F I0120 10:57:01.304892 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ...
2026-01-20T10:57:01.304997035+00:00 stderr F I0120 10:57:01.304980 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/config" ...
2026-01-20T10:57:01.305048806+00:00 stderr F I0120 10:57:01.305033 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/config/config.yaml" ...
2026-01-20T10:57:01.305176840+00:00 stderr F I0120 10:57:01.305159 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/etcd-serving-ca" ...
2026-01-20T10:57:01.305239491+00:00 stderr F I0120 10:57:01.305214 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/etcd-serving-ca/ca-bundle.crt" ...
2026-01-20T10:57:01.305312413+00:00 stderr F I0120 10:57:01.305294 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-audit-policies" ...
2026-01-20T10:57:01.305361604+00:00 stderr F I0120 10:57:01.305345 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-audit-policies/policy.yaml" ...
2026-01-20T10:57:01.305447707+00:00 stderr F I0120 10:57:01.305426 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-cert-syncer-kubeconfig" ...
2026-01-20T10:57:01.305499268+00:00 stderr F I0120 10:57:01.305479 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ...
2026-01-20T10:57:01.305580800+00:00 stderr F I0120 10:57:01.305560 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod" ...
2026-01-20T10:57:01.305628161+00:00 stderr F I0120 10:57:01.305608 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/forceRedeploymentReason" ...
2026-01-20T10:57:01.305696263+00:00 stderr F I0120 10:57:01.305678 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ...
2026-01-20T10:57:01.305779615+00:00 stderr F I0120 10:57:01.305758 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/pod.yaml" ...
2026-01-20T10:57:01.305863048+00:00 stderr F I0120 10:57:01.305839 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/version" ...
2026-01-20T10:57:01.305939870+00:00 stderr F I0120 10:57:01.305919 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kubelet-serving-ca" ...
2026-01-20T10:57:01.306032992+00:00 stderr F I0120 10:57:01.306012 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kubelet-serving-ca/ca-bundle.crt" ...
2026-01-20T10:57:01.306168285+00:00 stderr F I0120 10:57:01.306147 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs" ...
2026-01-20T10:57:01.306242457+00:00 stderr F I0120 10:57:01.306221 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-002.pub" ...
2026-01-20T10:57:01.306326110+00:00 stderr F I0120 10:57:01.306305 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-003.pub" ...
2026-01-20T10:57:01.306419522+00:00 stderr F I0120 10:57:01.306384 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-001.pub" ...
2026-01-20T10:57:01.306511514+00:00 stderr F I0120 10:57:01.306493 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-server-ca" ...
2026-01-20T10:57:01.306565606+00:00 stderr F I0120 10:57:01.306550 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ...
2026-01-20T10:57:01.306651468+00:00 stderr F I0120 10:57:01.306636 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/oauth-metadata" ...
2026-01-20T10:57:01.306705109+00:00 stderr F I0120 10:57:01.306689 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/oauth-metadata/oauthMetadata" ...
2026-01-20T10:57:01.307111041+00:00 stderr F I0120 10:57:01.307081 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ...
2026-01-20T10:57:01.307111041+00:00 stderr F I0120 10:57:01.307101 1 cmd.go:226] Getting secrets ...
2026-01-20T10:57:01.501647825+00:00 stderr F I0120 10:57:01.501573 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client
2026-01-20T10:57:01.702830676+00:00 stderr F I0120 10:57:01.702727 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key
2026-01-20T10:57:01.905139256+00:00 stderr F I0120 10:57:01.904991 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key
2026-01-20T10:57:02.103404569+00:00 stderr F I0120 10:57:02.103275 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key
2026-01-20T10:57:02.303778297+00:00 stderr F I0120 10:57:02.303663 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey
2026-01-20T10:57:02.501982279+00:00 stderr F I0120 10:57:02.501914 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey
2026-01-20T10:57:02.703746684+00:00 stderr F I0120 10:57:02.703683 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client
2026-01-20T10:57:02.903651042+00:00 stderr F I0120 10:57:02.903550 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey
2026-01-20T10:57:03.103144437+00:00 stderr F I0120 10:57:03.103002 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs
2026-01-20T10:57:03.303855654+00:00 stderr F I0120 10:57:03.303754 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey
2026-01-20T10:57:03.502835756+00:00 stderr F I0120 10:57:03.502768 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found
2026-01-20T10:57:03.704177410+00:00 stderr F I0120 10:57:03.704084 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found
2026-01-20T10:57:03.903556493+00:00 stderr F I0120 10:57:03.903474 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found
2026-01-20T10:57:04.102575806+00:00 stderr F I0120 10:57:04.102501 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found
2026-01-20T10:57:04.301775285+00:00 stderr F I0120 10:57:04.301710 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found
2026-01-20T10:57:04.502435950+00:00 stderr F I0120 10:57:04.502373 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found
2026-01-20T10:57:04.703342523+00:00 stderr F I0120 10:57:04.703283 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found
2026-01-20T10:57:04.903400504+00:00 stderr F I0120 10:57:04.903318 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found
2026-01-20T10:57:05.103610159+00:00 stderr F I0120 10:57:05.103394 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found
2026-01-20T10:57:05.301967295+00:00 stderr F I0120 10:57:05.301827 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found
2026-01-20T10:57:05.503249787+00:00 stderr F I0120 10:57:05.503151 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found
2026-01-20T10:57:05.503249787+00:00 stderr F I0120 10:57:05.503201 1 cmd.go:239] Getting config maps ...
2026-01-20T10:57:05.705004913+00:00 stderr F I0120 10:57:05.704897 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca
2026-01-20T10:57:05.903919104+00:00 stderr F I0120 10:57:05.903773 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig
2026-01-20T10:57:06.111120532+00:00 stderr F I0120 10:57:06.110293 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca
2026-01-20T10:57:06.305329649+00:00 stderr F I0120 10:57:06.302844 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig
2026-01-20T10:57:06.518086735+00:00 stderr F I0120 10:57:06.517986 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle
2026-01-20T10:57:06.604734956+00:00 stderr F I0120 10:57:06.604656 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ...
2026-01-20T10:57:06.605142407+00:00 stderr F I0120 10:57:06.605102 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ...
2026-01-20T10:57:06.606257066+00:00 stderr F I0120 10:57:06.606218 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ...
2026-01-20T10:57:06.606547004+00:00 stderr F I0120 10:57:06.606490 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ...
2026-01-20T10:57:06.606555904+00:00 stderr F I0120 10:57:06.606545 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ...
2026-01-20T10:57:06.606789340+00:00 stderr F I0120 10:57:06.606751 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ...
2026-01-20T10:57:06.606939694+00:00 stderr F I0120 10:57:06.606918 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ...
2026-01-20T10:57:06.606974845+00:00 stderr F I0120 10:57:06.606942 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ...
2026-01-20T10:57:06.607183211+00:00 stderr F I0120 10:57:06.607161 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ...
2026-01-20T10:57:06.607336395+00:00 stderr F I0120 10:57:06.607302 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ...
2026-01-20T10:57:06.607336395+00:00 stderr F I0120 10:57:06.607324 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ...
2026-01-20T10:57:06.607478489+00:00 stderr F I0120 10:57:06.607449 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ...
2026-01-20T10:57:06.607619883+00:00 stderr F I0120 10:57:06.607595 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ...
2026-01-20T10:57:06.697701245+00:00 stderr F I0120 10:57:06.697561 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ...
2026-01-20T10:57:06.698048094+00:00 stderr F I0120 10:57:06.697994 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ...
2026-01-20T10:57:06.698533216+00:00 stderr F I0120 10:57:06.698451 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ...
2026-01-20T10:57:06.698533216+00:00 stderr F I0120 10:57:06.698487 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ...
2026-01-20T10:57:06.699750239+00:00 stderr F I0120 10:57:06.699674 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ...
2026-01-20T10:57:06.699881103+00:00 stderr F I0120 10:57:06.699825 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ...
2026-01-20T10:57:06.699881103+00:00 stderr F I0120 10:57:06.699841 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ...
2026-01-20T10:57:06.699989145+00:00 stderr F I0120 10:57:06.699937 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ...
2026-01-20T10:57:06.700120089+00:00 stderr F I0120 10:57:06.700048 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ...
2026-01-20T10:57:06.700120089+00:00 stderr F I0120 10:57:06.700083 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ...
2026-01-20T10:57:06.700278133+00:00 stderr F I0120 10:57:06.700229 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ...
2026-01-20T10:57:06.700447167+00:00 stderr F I0120 10:57:06.700342 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ...
2026-01-20T10:57:06.700447167+00:00 stderr F I0120 10:57:06.700356 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ...
2026-01-20T10:57:06.700535950+00:00 stderr F I0120 10:57:06.700500 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ...
2026-01-20T10:57:06.700662023+00:00 stderr F I0120 10:57:06.700607 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ...
2026-01-20T10:57:06.700809607+00:00 stderr F I0120 10:57:06.700770 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ...
2026-01-20T10:57:06.700920310+00:00 stderr F I0120 10:57:06.700883 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ...
2026-01-20T10:57:06.700920310+00:00 stderr F I0120 10:57:06.700898 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ...
2026-01-20T10:57:06.701054163+00:00 stderr F I0120 10:57:06.701016 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ...
2026-01-20T10:57:06.701176106+00:00 stderr F I0120 10:57:06.701148 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ...
2026-01-20T10:57:06.701176106+00:00 stderr F I0120 10:57:06.701167 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ...
2026-01-20T10:57:06.701309080+00:00 stderr F I0120 10:57:06.701282 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ...
2026-01-20T10:57:06.701309080+00:00 stderr F I0120 10:57:06.701298 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ...
2026-01-20T10:57:06.701431113+00:00 stderr F I0120 10:57:06.701403 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ...
2026-01-20T10:57:06.701431113+00:00 stderr F I0120 10:57:06.701424 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ...
2026-01-20T10:57:06.701924366+00:00 stderr F I0120 10:57:06.701864 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ...
2026-01-20T10:57:06.701924366+00:00 stderr F I0120 10:57:06.701881 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ...
2026-01-20T10:57:06.702021958+00:00 stderr F I0120 10:57:06.701981 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ...
2026-01-20T10:57:06.702129711+00:00 stderr F I0120 10:57:06.702095 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2026-01-20T10:57:06.702478481+00:00 stderr F I0120 10:57:06.702376 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-13 -n openshift-kube-apiserver 2026-01-20T10:57:06.706715493+00:00 stderr F I0120 10:57:06.706600 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2026-01-20T10:57:06.706715493+00:00 stderr F I0120 10:57:06.706622 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key 2026-01-20T10:57:06.706715493+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"13"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=13","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{
"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2026-01-20T10:57:06.709726632+00:00 stderr F I0120 10:57:06.709648 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/kube-apiserver-startup-monitor-pod.yaml" ... 2026-01-20T10:57:06.709758973+00:00 stderr F I0120 10:57:06.709745 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ... 
2026-01-20T10:57:06.709758973+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"13"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=13","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"termi
nationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2026-01-20T10:57:06.709915407+00:00 stderr F I0120 10:57:06.709869 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key 2026-01-20T10:57:06.709915407+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"13"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. 
we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. 
If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"13"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"
name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50M 2026-01-20T10:57:06.709950018+00:00 stderr F 
i"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2026-01-20T10:57:06.710342218+00:00 stderr F I0120 10:57:06.710297 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/kube-apiserver-pod.yaml" ... 2026-01-20T10:57:06.710506833+00:00 stderr F I0120 10:57:06.710464 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 
2026-01-20T10:57:06.710506833+00:00 stderr F I0120 10:57:06.710475 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2026-01-20T10:57:06.710506833+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"13"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. 
we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. 
If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"13"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"
name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-re 2026-01-20T10:57:06.710545464+00:00 stderr F 
adyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.log
2025-08-13T20:00:57.601024272+00:00 stdout F Copying system trust bundle 2025-08-13T20:00:59.150240997+00:00 stderr F W0813 20:00:59.149367 1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file or directory 2025-08-13T20:00:59.151697528+00:00 stderr F I0813 20:00:59.151463 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:59.152227024+00:00 stderr F I0813 20:00:59.151494 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:59.152227024+00:00 stderr F I0813 20:00:59.152190 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s.
Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:59.152914523+00:00 stderr F I0813 20:00:59.152864 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:06.958417626+00:00 stderr F I0813 20:01:06.957144 1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161- 2025-08-13T20:01:07.010158681+00:00 stderr F I0813 20:01:07.005014 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:18.831567916+00:00 stderr F I0813 20:01:18.825876 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:01:20.315622631+00:00 stderr F I0813 20:01:20.315483 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:01:20.315622631+00:00 stderr F I0813 20:01:20.315570 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:01:20.315747505+00:00 stderr F I0813 20:01:20.315695 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:01:20.315767645+00:00 stderr F I0813 20:01:20.315756 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:01:20.445957847+00:00 stderr F I0813 20:01:20.445641 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.445957847+00:00 stderr F W0813 20:01:20.445701 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.445957847+00:00 stderr F W0813 20:01:20.445725 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:01:20.446168933+00:00 stderr F I0813 20:01:20.446115 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.946501 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:20.946457579 +0000 UTC))" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966538 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:20.96648868 +0000 UTC))" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966572 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966627 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.950537 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952678 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.967330 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 
2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952729 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.967928 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952749 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.968432 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.970361 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:21.000124869+00:00 stderr F I0813 20:01:20.991088 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:01:21.000124869+00:00 stderr F I0813 20:01:20.993434 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.058740 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:20.983738 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.064586 1 leaderelection.go:250] attempting to acquire leader lease 
openshift-authentication-operator/cluster-authentication-operator-lock... 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072051 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072104 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072211 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072385 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.072358079 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072675 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:21.072659357 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077533 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:21.077465674 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077748 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:21.077726392 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077922 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:21.077760273 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077956 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.077934858 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077978 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] 
issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.077963238 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078004 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.077988539 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078026 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.07801058 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078047 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.07803163 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078074 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.078052561 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114507 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:21.078080662 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114660 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:21.114568952 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114731 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.114714266 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.115201 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] 
validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:21.11517397 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.115454 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:21.115438927 +0000 UTC))" 2025-08-13T20:01:21.634012614+00:00 stderr F I0813 20:01:21.633686 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock 2025-08-13T20:01:21.663876685+00:00 stderr F I0813 20:01:21.636694 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_dfde8735-ce87-4a11-8bd2-f4c4c0ff21d4 became leader 2025-08-13T20:01:23.664939632+00:00 stderr F I0813 20:01:23.659023 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:01:23.684375367+00:00 stderr F W0813 20:01:23.679522 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:23.684375367+00:00 stderr F E0813 20:01:23.679651 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list 
*v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680257 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680766 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680951 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681079 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681094 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681104 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681115 1 base_controller.go:67] Waiting for caches to sync for MetadataController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681127 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681137 1 base_controller.go:67] Waiting for caches to sync for PayloadConfig 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681147 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681157 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681414 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources 2025-08-13T20:01:23.684375367+00:00 stderr F 
I0813 20:01:23.681433 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681486 1 base_controller.go:67] Waiting for caches to sync for OAuthServerRouteEndpointAccessibleController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681500 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681515 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681529 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681542 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681555 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681566 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681576 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681586 1 base_controller.go:67] Waiting for caches to sync for IngressStateController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681597 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681608 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681621 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681657 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681673 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681686 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681700 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681713 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681732 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682142 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682173 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682410 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682436 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682449 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682467 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682483 1 base_controller.go:67] Waiting for caches to sync for 
EncryptionStateController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682506 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682680 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T20:01:23.690124541+00:00 stderr F I0813 20:01:23.687756 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.692861649+00:00 stderr F I0813 20:01:23.691127 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.692861649+00:00 stderr F I0813 20:01:23.691418 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.694466604+00:00 stderr F I0813 20:01:23.693678 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.719885519+00:00 stderr F I0813 20:01:23.718499 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.728182826+00:00 stderr F I0813 20:01:23.727762 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.736134953+00:00 stderr F I0813 20:01:23.729167 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.736630577+00:00 stderr F I0813 20:01:23.736571 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.736904285+00:00 stderr F I0813 20:01:23.732416 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 
2025-08-13T20:01:23.759032195+00:00 stderr F I0813 20:01:23.755954 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.759032195+00:00 stderr F I0813 20:01:23.757448 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:01:23.761861286+00:00 stderr F I0813 20:01:23.759595 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:01:23.761861286+00:00 stderr F I0813 20:01:23.761255 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.779152519+00:00 stderr F I0813 20:01:23.779041 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:01:23.779292603+00:00 stderr F I0813 20:01:23.779245 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:01:23.781823315+00:00 stderr F I0813 20:01:23.781386 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.781877527+00:00 stderr F I0813 20:01:23.781832 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.782127914+00:00 stderr F I0813 20:01:23.782105 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T20:01:23.782259668+00:00 stderr F I0813 20:01:23.782154 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2025-08-13T20:01:23.782313839+00:00 stderr F I0813 20:01:23.782182 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:01:23.782469254+00:00 stderr F I0813 20:01:23.782380 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:23.782514255+00:00 stderr F I0813 20:01:23.782192 1 base_controller.go:73] Caches are synced for IngressStateController 2025-08-13T20:01:23.782621858+00:00 stderr F I0813 20:01:23.782587 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ... 2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782206 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782859 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782254 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782883 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-08-13T20:01:23.783595756+00:00 stderr F I0813 20:01:23.783568 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication 2025-08-13T20:01:23.783641957+00:00 stderr F I0813 20:01:23.783627 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ... 2025-08-13T20:01:23.783698149+00:00 stderr F I0813 20:01:23.783683 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:23.783728940+00:00 stderr F I0813 20:01:23.783717 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:01:23.786337854+00:00 stderr F I0813 20:01:23.786310 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:23.799062797+00:00 stderr F I0813 20:01:23.795492 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.807875158+00:00 stderr F I0813 20:01:23.807598 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.809104123+00:00 stderr F I0813 20:01:23.807656 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.812135370+00:00 stderr F I0813 20:01:23.811597 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.813392686+00:00 stderr F I0813 20:01:23.813334 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.828136846+00:00 stderr F I0813 20:01:23.827440 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.838033328+00:00 stderr F I0813 20:01:23.831671 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.840524729+00:00 stderr F I0813 20:01:23.840457 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.845056688+00:00 stderr F I0813 20:01:23.840856 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.845056688+00:00 stderr F I0813 20:01:23.841557 1 reflector.go:351] 
Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.871267616+00:00 stderr F I0813 20:01:23.871128 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.897636398+00:00 stderr F I0813 20:01:23.894465 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:01:23.921866379+00:00 stderr F I0813 20:01:23.916084 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.921866379+00:00 stderr F I0813 20:01:23.917222 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.971640038+00:00 stderr F I0813 20:01:23.965615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances 
are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994463 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointAccessibleController 2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994560 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ... 2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994599 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994605 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-08-13T20:01:23.997192686+00:00 stderr F E0813 20:01:23.996014 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.006004578+00:00 stderr F E0813 20:01:24.002904 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.017182026+00:00 stderr F E0813 20:01:24.015431 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.040476041+00:00 stderr F E0813 20:01:24.036712 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.041134619+00:00 stderr F I0813 20:01:24.041104 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 
2025-08-13T20:01:24.084437394+00:00 stderr F E0813 20:01:24.084350 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084581 1 base_controller.go:73] Caches are synced for ServiceCAController 2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084621 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ... 2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084653 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources 2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084661 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ... 2025-08-13T20:01:24.097243379+00:00 stderr F I0813 20:01:24.097197 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.176132239+00:00 stderr F E0813 20:01:24.170080 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.278356773+00:00 stderr F I0813 20:01:24.277114 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.337191441+00:00 stderr F E0813 20:01:24.336831 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.466957101+00:00 stderr F I0813 20:01:24.466645 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.658996427+00:00 stderr F E0813 20:01:24.657273 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.667258213+00:00 stderr F 
I0813 20:01:24.667094 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.863943911+00:00 stderr F I0813 20:01:24.863327 1 request.go:697] Waited for 1.190001802s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/default/configmaps?limit=500&resourceVersion=0 2025-08-13T20:01:24.867625346+00:00 stderr F I0813 20:01:24.866515 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.963424247+00:00 stderr F W0813 20:01:24.961338 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:24.963424247+00:00 stderr F E0813 20:01:24.961657 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:25.067905877+00:00 stderr F I0813 20:01:25.067464 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.266332485+00:00 stderr F I0813 20:01:25.266070 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.298593314+00:00 stderr F E0813 20:01:25.298511 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:25.707237687+00:00 stderr F I0813 20:01:25.706312 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.784144260+00:00 stderr F I0813 20:01:25.783887 1 base_controller.go:73] Caches are synced for TrustDistributionController 
2025-08-13T20:01:25.786075505+00:00 stderr F I0813 20:01:25.785996 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ... 2025-08-13T20:01:25.865314244+00:00 stderr F I0813 20:01:25.864253 1 request.go:697] Waited for 2.190413918s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/default/endpoints?limit=500&resourceVersion=0 2025-08-13T20:01:25.893330563+00:00 stderr F I0813 20:01:25.893261 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.950398310+00:00 stderr F I0813 20:01:25.949985 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983342 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983382 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983443 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController 2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983452 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ... 2025-08-13T20:01:26.069915768+00:00 stderr F I0813 20:01:26.069377 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.270666692+00:00 stderr F I0813 20:01:26.270583 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.284813315+00:00 stderr F I0813 20:01:26.284594 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T20:01:26.285052591+00:00 stderr F I0813 20:01:26.284942 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-08-13T20:01:26.287715797+00:00 stderr F I0813 20:01:26.285505 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:01:26.470182960+00:00 stderr F I0813 20:01:26.469441 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.579690483+00:00 stderr F E0813 20:01:26.579588 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:26.659031375+00:00 stderr F W0813 20:01:26.658260 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:26.659031375+00:00 stderr F E0813 20:01:26.658424 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:26.669964707+00:00 stderr F I0813 20:01:26.668072 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.868958311+00:00 stderr F I0813 20:01:26.867414 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.882898748+00:00 stderr F I0813 20:01:26.882653 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T20:01:26.883909807+00:00 stderr F I0813 20:01:26.882904 1 
base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:01:26.885030489+00:00 stderr F I0813 20:01:26.883574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:01:27.066296428+00:00 stderr F I0813 20:01:27.064957 1 request.go:697] Waited for 3.390505706s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets?limit=500&resourceVersion=0 2025-08-13T20:01:27.075250893+00:00 stderr F I0813 20:01:27.073760 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082391 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082547 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ... 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082610 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082617 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.083898 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.083918 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.084145 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.084156 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:01:27.128292386+00:00 stderr F E0813 20:01:27.125655 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:27.128292386+00:00 stderr F I0813 20:01:27.126925 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any 
node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:27.269064480+00:00 stderr F I0813 20:01:27.268271 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282300 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282363 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282563 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282573 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282861 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282871 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 
2025-08-13T20:01:27.484936365+00:00 stderr F I0813 20:01:27.481954 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.668396366+00:00 stderr F I0813 20:01:27.667422 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.686198444+00:00 stderr F I0813 20:01:27.685345 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:01:27.686198444+00:00 stderr F I0813 20:01:27.685392 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:01:27.868074850+00:00 stderr F I0813 20:01:27.868018 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.882122220+00:00 stderr F I0813 20:01:27.882055 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T20:01:27.882207453+00:00 stderr F I0813 20:01:27.882190 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T20:01:27.882269575+00:00 stderr F I0813 20:01:27.882162 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController 2025-08-13T20:01:27.882308516+00:00 stderr F I0813 20:01:27.882296 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ... 2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882680 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882709 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882727 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882732 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 
2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882748 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882752 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T20:01:27.882991775+00:00 stderr F I0813 20:01:27.882962 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T20:01:27.883093208+00:00 stderr F I0813 20:01:27.883071 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T20:01:28.265288876+00:00 stderr F I0813 20:01:28.264249 1 request.go:697] Waited for 4.479476827s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver 2025-08-13T20:01:29.140591084+00:00 stderr F E0813 20:01:29.140530 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:29.276565671+00:00 stderr F I0813 20:01:29.264871 1 request.go:697] Waited for 2.978228822s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:01:29.382012428+00:00 stderr F I0813 20:01:29.380383 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF" to 
"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused" 2025-08-13T20:01:32.212365833+00:00 stderr F W0813 20:01:32.211005 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:32.212365833+00:00 stderr F E0813 20:01:32.211528 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:35.315058375+00:00 stderr F E0813 20:01:35.309483 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.317909616+00:00 stderr F E0813 20:01:35.317744 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:35.318626706+00:00 stderr F I0813 20:01:35.318523 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication 
()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:35.330132065+00:00 stderr F E0813 20:01:35.320317 1 base_controller.go:268] 
OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.330132065+00:00 stderr F E0813 20:01:35.328652 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.330814334+00:00 stderr F E0813 20:01:35.330725 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.351273227+00:00 stderr F E0813 20:01:35.351143 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.434510491+00:00 stderr F E0813 20:01:35.434401 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.597888489+00:00 stderr F E0813 20:01:35.597740 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.921267900+00:00 stderr F E0813 20:01:35.921136 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:36.565020306+00:00 stderr F E0813 20:01:36.564724 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:37.852888508+00:00 stderr F 
E0813 20:01:37.850910 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:39.382332008+00:00 stderr F E0813 20:01:39.382228 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:40.230678778+00:00 stderr F I0813 20:01:40.230622 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 
10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:01:40.415179749+00:00 stderr F E0813 20:01:40.415094 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:42.978231701+00:00 stderr F I0813 20:01:42.977340 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authentication-integrated-oauth -n openshift-config because it changed 2025-08-13T20:01:44.713770138+00:00 stderr F W0813 20:01:44.713667 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:44.713770138+00:00 stderr F E0813 20:01:44.713725 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:45.538965267+00:00 stderr F E0813 20:01:45.538896 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:53.763712966+00:00 stderr F E0813 20:01:53.763200 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:53.775870473+00:00 stderr F E0813 20:01:53.775696 1 base_controller.go:268] 
OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:55.783664842+00:00 stderr F E0813 20:01:55.783488 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:04.290913305+00:00 stderr F W0813 20:02:04.290126 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:04.290913305+00:00 stderr F E0813 20:02:04.290747 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:20.380357169+00:00 stderr F E0813 20:02:20.343460 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:02:23.764121197+00:00 stderr F E0813 20:02:23.762438 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:02:23.769153661+00:00 stderr F E0813 20:02:23.768893 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:26.933185062+00:00 stderr F E0813 20:02:26.932990 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:26.948695884+00:00 stderr F E0813 20:02:26.948445 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:26.989922850+00:00 stderr F E0813 20:02:26.989511 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.004154426+00:00 stderr F E0813 20:02:27.003469 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.022583992+00:00 stderr F E0813 20:02:27.022435 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.043768966+00:00 stderr F E0813 20:02:27.043682 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:27.329108505+00:00 stderr F E0813 20:02:27.328969 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.529163642+00:00 stderr F E0813 20:02:27.528948 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:27.928094343+00:00 stderr F E0813 20:02:27.928008 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": 
dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.128399067+00:00 stderr F E0813 20:02:28.128107 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.327478386+00:00 stderr F E0813 20:02:28.327428 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection 
refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.731456270+00:00 stderr F E0813 20:02:28.729698 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.930338964+00:00 stderr F E0813 20:02:28.930232 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.127728155+00:00 stderr F E0813 20:02:29.127603 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.642929832+00:00 stderr F E0813 20:02:29.641740 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.759355994+00:00 stderr F E0813 20:02:29.755106 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.273063908+00:00 stderr F E0813 20:02:30.272704 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.931073889+00:00 stderr F E0813 20:02:30.930430 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.031080811+00:00 stderr F E0813 20:02:31.031014 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.896913252+00:00 stderr F E0813 20:02:31.896737 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.901731269+00:00 stderr F E0813 20:02:31.901144 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903485989+00:00 stderr F W0813 20:02:31.903367 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903485989+00:00 stderr F E0813 20:02:31.903426 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903617233+00:00 stderr F E0813 20:02:31.903577 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917903270+00:00 stderr F W0813 20:02:31.917771 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.918132147+00:00 stderr F E0813 20:02:31.918076 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.922458890+00:00 stderr F E0813 20:02:31.922432 1 
base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.934531055+00:00 stderr F E0813 20:02:31.934350 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.936878452+00:00 stderr F E0813 20:02:31.936742 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.127017396+00:00 stderr F E0813 20:02:32.126971 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.255019067+00:00 stderr F E0813 20:02:32.254935 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.528008705+00:00 stderr F E0813 20:02:32.527930 1 base_controller.go:268] 
APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.653532296+00:00 stderr F E0813 20:02:32.653441 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.727059043+00:00 stderr F W0813 20:02:32.726958 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.727059043+00:00 stderr F E0813 20:02:32.727027 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.932082382+00:00 stderr F E0813 20:02:32.931836 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.127423844+00:00 stderr F E0813 20:02:33.127363 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.253611194+00:00 stderr F E0813 20:02:33.253178 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:02:33.328117050+00:00 stderr F E0813 20:02:33.327994 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.932450330+00:00 stderr F W0813 20:02:33.930969 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.932450330+00:00 stderr F E0813 20:02:33.931612 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.052659389+00:00 stderr F E0813 20:02:34.052584 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.136916972+00:00 stderr F E0813 20:02:34.132246 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.328227679+00:00 stderr F E0813 20:02:34.328114 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.534071921+00:00 stderr F E0813 20:02:34.533970 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:34.687623582+00:00 stderr F E0813 20:02:34.687573 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.736053963+00:00 stderr F E0813 20:02:34.735985 1 
base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:35.127705846+00:00 stderr F E0813 20:02:35.127553 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.328358600+00:00 stderr F W0813 20:02:35.328198 1 base_controller.go:232] Updating status of 
"WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.328358600+00:00 stderr F E0813 20:02:35.328266 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.453077748+00:00 stderr F E0813 20:02:35.453020 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.527173802+00:00 stderr F W0813 20:02:35.527113 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.527251724+00:00 stderr F E0813 20:02:35.527236 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.727884567+00:00 stderr F E0813 20:02:35.727750 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:35.927503792+00:00 stderr F W0813 20:02:35.927397 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.927503792+00:00 stderr F E0813 20:02:35.927461 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.327550184+00:00 stderr F W0813 20:02:36.327428 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.327550184+00:00 stderr F E0813 20:02:36.327499 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.528281810+00:00 stderr F W0813 20:02:36.528120 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.528281810+00:00 stderr F E0813 20:02:36.528206 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.727435112+00:00 stderr F E0813 20:02:36.727322 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:36.747421802+00:00 stderr F E0813 20:02:36.747322 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:36.927103428+00:00 stderr F W0813 20:02:36.926945 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.927103428+00:00 stderr F E0813 20:02:36.927013 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.052703641+00:00 stderr F E0813 20:02:37.052619 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.130507020+00:00 stderr F 
E0813 20:02:37.130384 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.328582881+00:00 stderr F E0813 20:02:37.328481 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.727091989+00:00 stderr F W0813 20:02:37.726976 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.727091989+00:00 stderr F E0813 20:02:37.727046 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.927703161+00:00 stderr F E0813 20:02:37.927482 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.127932173+00:00 stderr F E0813 20:02:38.127747 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:38.328096443+00:00 stderr F W0813 20:02:38.328010 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.328096443+00:00 stderr F E0813 20:02:38.328066 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.527090450+00:00 stderr F W0813 20:02:38.526951 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.527090450+00:00 stderr F E0813 20:02:38.527022 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.728004041+00:00 stderr F E0813 20:02:38.727762 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.928185062+00:00 stderr F E0813 20:02:38.928079 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): 
Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:39.128406164+00:00 stderr F W0813 20:02:39.128242 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.128406164+00:00 stderr F E0813 20:02:39.128350 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.175635611+00:00 stderr F E0813 20:02:39.175271 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:39.527495399+00:00 stderr F E0813 20:02:39.527376 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:39.727723831+00:00 stderr F E0813 20:02:39.727575 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.855182367+00:00 stderr F E0813 20:02:39.855037 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.928222110+00:00 stderr F W0813 20:02:39.928035 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.928222110+00:00 stderr F E0813 20:02:39.928115 1 
base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.330568398+00:00 stderr F W0813 20:02:40.330149 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.330568398+00:00 stderr F E0813 20:02:40.330514 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.530181012+00:00 stderr F E0813 20:02:40.530063 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.728542011+00:00 stderr F E0813 20:02:40.728404 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: 
connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:40.928131315+00:00 stderr F E0813 20:02:40.927956 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.127443081+00:00 stderr F E0813 20:02:41.127355 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.528347618+00:00 stderr F W0813 20:02:41.528216 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.528347618+00:00 stderr F E0813 20:02:41.528295 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get 
"https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.728027864+00:00 stderr F E0813 20:02:41.727880 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:41.927577757+00:00 stderr F E0813 20:02:41.927448 1 
base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.128612642+00:00 stderr F W0813 20:02:42.128336 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.128612642+00:00 stderr F E0813 20:02:42.128407 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.327284090+00:00 stderr F E0813 20:02:42.327148 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.528119720+00:00 stderr F E0813 20:02:42.527951 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.728413734+00:00 stderr F E0813 20:02:42.728286 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.927731520+00:00 stderr F W0813 20:02:42.927610 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.927731520+00:00 stderr F E0813 20:02:42.927699 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.129365182+00:00 stderr F E0813 20:02:43.129188 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.527767438+00:00 stderr F W0813 20:02:43.527659 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.527767438+00:00 stderr F E0813 20:02:43.527742 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.727519107+00:00 stderr F E0813 20:02:43.727459 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.159466789+00:00 stderr F E0813 20:02:44.159362 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:44.372410054+00:00 stderr F E0813 20:02:44.372313 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.984436533+00:00 stderr F E0813 20:02:44.984282 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.300437367+00:00 stderr F E0813 20:02:45.300330 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:45.491012544+00:00 stderr F W0813 20:02:45.490924 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.491012544+00:00 stderr F E0813 20:02:45.490980 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.660187220+00:00 stderr F E0813 20:02:45.660074 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.093562493+00:00 stderr F W0813 20:02:46.093390 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:46.093562493+00:00 stderr F E0813 20:02:46.093459 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.296879373+00:00 stderr F E0813 20:02:46.296743 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.896875699+00:00 stderr F E0813 20:02:46.896731 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.229642569+00:00 stderr F E0813 20:02:48.229165 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.434714093+00:00 stderr F E0813 20:02:50.434581 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:50.614304156+00:00 stderr F W0813 20:02:50.614129 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.614304156+00:00 stderr F E0813 20:02:50.614192 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.774613249+00:00 stderr F E0813 20:02:50.774502 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.172401237+00:00 stderr F E0813 20:02:51.172180 1 
base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229745813+00:00 stderr F W0813 20:02:51.229630 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229745813+00:00 stderr F E0813 20:02:51.229699 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.156068209+00:00 stderr F W0813 20:02:52.155874 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.156068209+00:00 stderr F E0813 20:02:52.156054 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.765568843+00:00 stderr F E0813 20:02:53.765410 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:02:53.767631182+00:00 stderr F E0813 20:02:53.767560 1 
base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.769331031+00:00 stderr F E0813 20:02:53.769283 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770287828+00:00 stderr F W0813 20:02:53.770226 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770287828+00:00 stderr F E0813 20:02:53.770276 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770340499+00:00 stderr F E0813 20:02:53.770318 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770487224+00:00 stderr F E0813 20:02:53.770415 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:53.773539391+00:00 stderr F E0813 20:02:53.773464 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.778548844+00:00 stderr F E0813 20:02:53.778440 1 
base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:53.785736249+00:00 stderr F E0813 20:02:53.785659 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:53.811054111+00:00 stderr F E0813 20:02:53.810964 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.967387341+00:00 stderr F E0813 20:02:53.967258 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:54.167566501+00:00 stderr F E0813 20:02:54.167444 1 
base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.367052942+00:00 stderr F E0813 20:02:54.366960 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.567292725+00:00 stderr F E0813 20:02:54.567165 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:54.766946050+00:00 stderr F E0813 20:02:54.766873 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.089055399+00:00 stderr F E0813 20:02:55.088918 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.227706785+00:00 stderr F E0813 20:02:55.227612 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.731500816+00:00 stderr F E0813 20:02:55.731367 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:56.297721488+00:00 stderr F E0813 20:02:56.297603 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:56.897877649+00:00 stderr F E0813 20:02:56.897404 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:57.014861806+00:00 stderr F E0813 20:02:57.014663 1 base_controller.go:268] RevisionController reconciliation failed: Get 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:58.477131781+00:00 stderr F E0813 20:02:58.474671 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:59.579520009+00:00 stderr F E0813 20:02:59.579401 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:00.687168456+00:00 stderr F E0813 20:03:00.687070 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:00.857913247+00:00 stderr F W0813 20:03:00.857746 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:00.857913247+00:00 stderr F E0813 20:03:00.857868 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:01.476541395+00:00 stderr F W0813 20:03:01.476415 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:01.476594157+00:00 stderr F E0813 20:03:01.476530 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.493944529+00:00 stderr F E0813 20:03:02.493680 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:02.506707073+00:00 stderr F E0813 20:03:02.506415 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.522068951+00:00 stderr F E0813 20:03:02.521926 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.552519840+00:00 stderr F E0813 20:03:02.552368 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.597944556+00:00 stderr F E0813 20:03:02.597678 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.685393231+00:00 stderr F E0813 20:03:02.685317 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.850047908+00:00 stderr F E0813 20:03:02.849866 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.174533113+00:00 stderr F E0813 20:03:03.174424 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:03.818949566+00:00 stderr F E0813 20:03:03.818761 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.701402750+00:00 stderr F E0813 20:03:04.701197 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.102472402+00:00 stderr F E0813 20:03:05.102371 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:06.297269146+00:00 stderr F E0813 20:03:06.297219 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:06.899501755+00:00 stderr F E0813 20:03:06.899331 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:07.674637578+00:00 stderr F E0813 20:03:07.674518 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.657603490+00:00 stderr F E0813 20:03:11.656881 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:12.014409079+00:00 stderr F E0813 20:03:12.014274 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.021263144+00:00 stderr F E0813 20:03:12.021168 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.033699899+00:00 stderr F E0813 20:03:12.033630 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.055958634+00:00 stderr F E0813 20:03:12.055824 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.102658386+00:00 stderr F E0813 20:03:12.102579 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.166654942+00:00 stderr F E0813 20:03:12.166523 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:03:12.186324323+00:00 stderr F E0813 20:03:12.186242 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:12.348351775+00:00 stderr F E0813 20:03:12.348236 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.670494515+00:00 stderr F E0813 20:03:12.670386 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.806152205+00:00 stderr F E0813 20:03:12.806023 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.313283152+00:00 stderr F E0813 20:03:13.313092 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:14.596306612+00:00 stderr F E0813 20:03:14.596171 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:14.944364571+00:00 stderr F E0813 20:03:14.944263 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:15.711166666+00:00 stderr F E0813 20:03:15.711008 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:16.299980244+00:00 stderr F E0813 20:03:16.299884 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:16.899382965+00:00 stderr F E0813 20:03:16.899257 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:17.159255479+00:00 stderr F E0813 20:03:17.159063 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.488493338+00:00 stderr F W0813 20:03:18.488412 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.488601241+00:00 stderr F E0813 20:03:18.488582 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.618289821+00:00 stderr F E0813 20:03:18.618122 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.625185267+00:00 stderr F E0813 20:03:18.625110 1 base_controller.go:268] 
IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.637349034+00:00 stderr F E0813 20:03:18.637274 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.658940010+00:00 stderr F E0813 20:03:18.658767 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.701117413+00:00 stderr F E0813 20:03:18.701022 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.784162483+00:00 stderr F E0813 20:03:18.784045 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.947229385+00:00 stderr F E0813 20:03:18.947181 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.269028345+00:00 stderr F E0813 20:03:19.268920 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.911894545+00:00 stderr F E0813 
20:03:19.911560 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.194807962+00:00 stderr F E0813 20:03:21.194399 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.342962569+00:00 stderr F W0813 20:03:21.342902 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.343047731+00:00 stderr F E0813 20:03:21.343032 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.282460580+00:00 stderr F E0813 20:03:22.282076 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.052871018+00:00 stderr F E0813 20:03:23.052646 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.760202456+00:00 stderr F E0813 20:03:23.759999 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:23.769813371+00:00 stderr F E0813 20:03:23.769669 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:03:23.779965870+00:00 stderr F E0813 20:03:23.779894 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:03:23.781059171+00:00 stderr F E0813 20:03:23.781003 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.784121259+00:00 stderr F E0813 20:03:23.784096 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.785240851+00:00 stderr F E0813 20:03:23.785206 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.788247946+00:00 stderr F W0813 20:03:23.788197 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.788319458+00:00 stderr F E0813 20:03:23.788302 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.797432929+00:00 stderr F E0813 20:03:23.797349 1 
base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.806918239+00:00 stderr F E0813 20:03:23.806727 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:24.163956525+00:00 stderr F E0813 20:03:24.163896 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:24.494039410+00:00 stderr F E0813 20:03:24.493690 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.764908027+00:00 stderr F E0813 20:03:24.764638 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:26.299993099+00:00 stderr F E0813 20:03:26.299749 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.899464241+00:00 stderr F E0813 20:03:26.899334 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.286445800+00:00 stderr F E0813 20:03:27.286342 1 
base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.890913401+00:00 stderr F E0813 20:03:28.890567 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.740079510+00:00 stderr F E0813 20:03:31.739954 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.525712363+00:00 stderr F E0813 20:03:32.525538 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.426605317+00:00 stderr F E0813 20:03:35.426247 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.645656476+00:00 stderr F E0813 20:03:35.645505 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:36.299152909+00:00 stderr F E0813 20:03:36.299090 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.901969136+00:00 stderr F E0813 20:03:36.901541 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.133112993+00:00 stderr F E0813 20:03:39.133000 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:41.659148194+00:00 stderr F E0813 20:03:41.658446 1 
base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:42.442147039+00:00 stderr F W0813 20:03:42.442027 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.442147039+00:00 stderr F E0813 20:03:42.442093 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.483134259+00:00 stderr F W0813 20:03:42.482557 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.483134259+00:00 stderr F E0813 20:03:42.482661 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.536431116+00:00 stderr F E0813 20:03:43.536332 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.300290470+00:00 stderr F E0813 20:03:46.299877 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.908462350+00:00 stderr F E0813 20:03:46.905407 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.394999873+00:00 stderr F W0813 20:03:49.393645 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get 
"https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.394999873+00:00 stderr F E0813 20:03:49.393889 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.025811794+00:00 stderr F E0813 20:03:53.023946 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.776505229+00:00 stderr F E0813 20:03:53.772389 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.776505229+00:00 stderr F E0813 20:03:53.773188 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.779145 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.786216 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: 
connection refused 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.786304 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.795836431+00:00 stderr F W0813 20:03:53.795634 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.795836431+00:00 stderr F E0813 20:03:53.795691 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.828464271+00:00 stderr F E0813 20:03:53.816611 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:54.151538628+00:00 stderr F E0813 20:03:54.150645 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:54.345151752+00:00 stderr F E0813 20:03:54.344983 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:03:56.302523861+00:00 stderr F E0813 20:03:56.300941 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.673452972+00:00 stderr F E0813 20:03:56.673295 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.903368320+00:00 stderr F E0813 20:03:56.903319 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.620550084+00:00 stderr F E0813 20:03:59.618633 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.312954040+00:00 stderr F W0813 20:04:02.309330 1 base_controller.go:232] Updating status 
of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.312954040+00:00 stderr F E0813 20:04:02.309618 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.302755927+00:00 stderr F E0813 20:04:06.301888 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.903333330+00:00 stderr F E0813 20:04:06.903206 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:16.305217907+00:00 stderr F E0813 20:04:16.304262 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:16.913825279+00:00 stderr F E0813 20:04:16.913395 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.850512924+00:00 stderr F W0813 20:04:19.850001 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get 
"https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.850605137+00:00 stderr F E0813 20:04:19.850588 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.413072252+00:00 stderr F E0813 20:04:20.412450 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.769465263+00:00 stderr F E0813 20:04:23.769303 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.770306377+00:00 stderr F E0813 20:04:23.770234 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:04:23.771539322+00:00 stderr F E0813 20:04:23.771375 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:04:23.773421256+00:00 stderr F E0813 20:04:23.773370 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:23.776546526+00:00 stderr F E0813 20:04:23.776490 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.783513345+00:00 stderr F W0813 20:04:23.783400 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.783513345+00:00 stderr F E0813 20:04:23.783473 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.798514465+00:00 stderr F E0813 20:04:23.798408 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.822333627+00:00 stderr F E0813 20:04:23.822215 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.972993131+00:00 stderr F E0813 20:04:23.972892 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:24.508599069+00:00 stderr F E0813 20:04:24.496475 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.579112658+00:00 stderr F E0813 20:04:24.579001 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:24.771043864+00:00 stderr F E0813 20:04:24.770921 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:26.307646646+00:00 stderr F E0813 20:04:26.307192 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:26.906563348+00:00 stderr F E0813 20:04:26.906472 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:27.287944838+00:00 stderr F E0813 20:04:27.287721 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:33.582276875+00:00 stderr F E0813 20:04:33.581698 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:33.987219352+00:00 stderr F E0813 20:04:33.987048 1 base_controller.go:268] 
TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.312691544+00:00 stderr F E0813 20:04:36.312200 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.917266547+00:00 stderr F E0813 20:04:36.916570 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:40.583078430+00:00 stderr F E0813 20:04:40.582122 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.022061001+00:00 stderr F E0813 20:04:41.021936 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:46.326883580+00:00 stderr F E0813 20:04:46.323295 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:46.909530754+00:00 stderr F E0813 20:04:46.909405 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.181715267+00:00 stderr F W0813 
20:04:50.181102 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.181883722+00:00 stderr F E0813 20:04:50.181729 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.767715196+00:00 stderr F E0813 20:04:53.767349 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:04:53.770173056+00:00 stderr F E0813 20:04:53.770124 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:04:53.770377622+00:00 stderr F E0813 20:04:53.770329 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.771906916+00:00 stderr F E0813 20:04:53.771834 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.772924025+00:00 stderr F E0813 20:04:53.772878 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.774461939+00:00 stderr F W0813 20:04:53.774407 1 base_controller.go:232] Updating status of 
"RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.774461939+00:00 stderr F E0813 20:04:53.774452 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.785646849+00:00 stderr F E0813 20:04:53.785579 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:53.866119903+00:00 stderr F E0813 20:04:53.866006 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:56.306290470+00:00 
stderr F E0813 20:04:56.306165 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:56.910579185+00:00 stderr F E0813 20:04:56.910461 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:57.350171663+00:00 stderr F E0813 20:04:57.349594 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:58.237879773+00:00 stderr F E0813 20:04:58.237672 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:00.340124593+00:00 stderr F E0813 20:05:00.338045 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:04.189001221+00:00 stderr F E0813 20:05:04.184949 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:06.315115545+00:00 stderr F E0813 20:05:06.313999 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:06.915299503+00:00 stderr F E0813 20:05:06.915171 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.568591731+00:00 stderr F E0813 20:05:09.568101 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:05:10.970612210+00:00 stderr F W0813 20:05:10.970159 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:10.970612210+00:00 stderr F E0813 20:05:10.970545 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:13.451698458+00:00 stderr F E0813 20:05:13.451599 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.913312373+00:00 stderr F W0813 20:05:14.908382 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.913312373+00:00 stderr F E0813 20:05:14.913234 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get 
"https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.310306737+00:00 stderr F E0813 20:05:16.310246 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.912380388+00:00 stderr F E0813 20:05:16.911955 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:20.615698997+00:00 stderr F E0813 20:05:20.615061 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:05:23.777205299+00:00 stderr F E0813 20:05:23.776533 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:23.982701903+00:00 stderr F I0813 20:05:23.982325 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31176 2025-08-13T20:05:53.780925638+00:00 stderr F E0813 20:05:53.773995 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:53.846642450+00:00 stderr F I0813 20:05:53.846554 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177 2025-08-13T20:05:53.884143743+00:00 stderr F I0813 20:05:53.884055 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177 
2025-08-13T20:05:57.150880939+00:00 stderr F I0813 20:05:57.150016 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:05:57.184283916+00:00 stderr F I0813 20:05:57.184150 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController 2025-08-13T20:05:57.184283916+00:00 stderr F I0813 20:05:57.184200 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185208 1 base_controller.go:73] Caches are synced for MetadataController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185241 1 base_controller.go:110] Starting #1 worker of MetadataController controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185271 1 base_controller.go:73] Caches are synced for PayloadConfig 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185277 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185295 1 base_controller.go:73] Caches are synced for WellKnownReadyController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185301 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185320 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185326 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185342 1 base_controller.go:73] Caches are synced for ProxyConfigController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185348 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ... 
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185365 1 base_controller.go:73] Caches are synced for CustomRouteController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185371 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ... 2025-08-13T20:05:57.191000838+00:00 stderr F I0813 20:05:57.190088 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:05:57.191000838+00:00 stderr F I0813 20:05:57.190136 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2025-08-13T20:05:57.259119429+00:00 stderr F I0813 20:05:57.258912 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177 2025-08-13T20:05:57.291520217+00:00 stderr F I0813 20:05:57.291376 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177 2025-08-13T20:05:57.467956799+00:00 stderr F I0813 20:05:57.458971 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31898 2025-08-13T20:05:57.602885093+00:00 stderr F I0813 20:05:57.602057 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31898 2025-08-13T20:05:58.111040365+00:00 stderr F I0813 20:05:58.110278 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.115503 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.116373 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.116996 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation 
failed: oauth service endpoints are not ready 2025-08-13T20:05:58.593472469+00:00 stderr F I0813 20:05:58.593120 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31941 2025-08-13T20:05:59.392556071+00:00 stderr F I0813 20:05:59.392139 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31941 2025-08-13T20:05:59.994268782+00:00 stderr F I0813 20:05:59.994170 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31955 2025-08-13T20:06:00.393736241+00:00 stderr F I0813 20:06:00.393620 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31955 2025-08-13T20:06:00.951252697+00:00 stderr F I0813 20:06:00.951148 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:01.398687769+00:00 stderr F I0813 20:06:01.398630 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964 2025-08-13T20:06:01.592664314+00:00 stderr F I0813 20:06:01.592543 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964 2025-08-13T20:06:01.994168851+00:00 stderr F I0813 20:06:01.994108 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964 2025-08-13T20:06:02.218265058+00:00 stderr F E0813 20:06:02.218209 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:02.379615469+00:00 stderr F I0813 20:06:02.379483 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:02.798964217+00:00 stderr F E0813 20:06:02.798852 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:03.280820726+00:00 stderr F I0813 20:06:03.280603 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:03.395710406+00:00 stderr F I0813 20:06:03.395587 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:03.590517874+00:00 stderr F I0813 20:06:03.590118 1 request.go:697] Waited for 1.06685142s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:06:03.793120646+00:00 stderr F I0813 20:06:03.792978 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:04.195684024+00:00 stderr F I0813 20:06:04.195558 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:04.585346633+00:00 
stderr F I0813 20:06:04.585242 1 request.go:697] Waited for 1.188418642s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:06:04.808918075+00:00 stderr F I0813 20:06:04.807539 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:05.201243302+00:00 stderr F I0813 20:06:05.201114 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:05.594123029+00:00 stderr F I0813 20:06:05.593970 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:05.796711801+00:00 stderr F I0813 20:06:05.796568 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:06.395088896+00:00 stderr F I0813 20:06:06.394551 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983 2025-08-13T20:06:06.395088896+00:00 stderr F E0813 20:06:06.394938 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 
13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:06.601353492+00:00 stderr F I0813 20:06:06.601259 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983 2025-08-13T20:06:06.793587567+00:00 stderr F I0813 20:06:06.793457 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983 2025-08-13T20:06:07.595002947+00:00 stderr F I0813 20:06:07.594288 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:08.200963119+00:00 stderr F I0813 20:06:08.199025 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:08.394736918+00:00 stderr F I0813 20:06:08.394611 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:08.395164010+00:00 stderr F E0813 20:06:08.395101 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:08.797241104+00:00 stderr F I0813 20:06:08.797136 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:08.802477464+00:00 stderr F I0813 20:06:08.802395 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:09.161467163+00:00 stderr F I0813 20:06:09.160116 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:09.214572484+00:00 stderr F I0813 20:06:09.213712 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:09.214572484+00:00 stderr F E0813 20:06:09.213961 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} 
(check kube-apiserver that it deploys correctly) 2025-08-13T20:06:09.684253444+00:00 stderr F I0813 20:06:09.684155 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:09.995109275+00:00 stderr F I0813 20:06:09.994982 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:09.995424974+00:00 stderr F E0813 20:06:09.995296 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:10.173123513+00:00 stderr F I0813 20:06:10.172991 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:10.914646817+00:00 stderr F I0813 20:06:10.912960 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:10.922015297+00:00 stderr F I0813 20:06:10.921943 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed 
resourceVersion=31990 2025-08-13T20:06:11.096769872+00:00 stderr F I0813 20:06:11.096666 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:11.811653774+00:00 stderr F I0813 20:06:11.811553 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:11.812051776+00:00 stderr F E0813 20:06:11.811976 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:11.992449141+00:00 stderr F I0813 20:06:11.992313 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:12.193235221+00:00 stderr F I0813 20:06:12.193177 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:12.794165150+00:00 stderr F I0813 20:06:12.794056 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 
2025-08-13T20:06:13.251686431+00:00 stderr F I0813 20:06:13.251568 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:13.444129302+00:00 stderr F I0813 20:06:13.442353 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:13.444129302+00:00 stderr F E0813 20:06:13.442536 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:13.994283917+00:00 stderr F I0813 20:06:13.994167 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:14.122405486+00:00 stderr F I0813 20:06:14.122004 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.187278824+00:00 stderr F I0813 20:06:14.187176 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 
2025-08-13T20:06:14.195468358+00:00 stderr F I0813 20:06:14.195350 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:14.533009894+00:00 stderr F I0813 20:06:14.532219 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.808445681+00:00 stderr F I0813 20:06:14.807930 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:15.003177808+00:00 stderr F I0813 20:06:15.002420 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:15.003177808+00:00 stderr F E0813 20:06:15.002886 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:15.794070796+00:00 stderr F I0813 20:06:15.793986 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:15.979400003+00:00 stderr 
F I0813 20:06:15.979279 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:15.997270335+00:00 stderr F I0813 20:06:15.997103 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:16.399074080+00:00 stderr F I0813 20:06:16.398917 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:16.399115161+00:00 stderr F E0813 20:06:16.399084 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:16.464236376+00:00 stderr F I0813 20:06:16.464109 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:06:16.794003089+00:00 stderr F I0813 20:06:16.793881 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 
2025-08-13T20:06:17.197062101+00:00 stderr F I0813 20:06:17.196924 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.197212995+00:00 stderr F E0813 20:06:17.197139 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:17.722846068+00:00 stderr F I0813 20:06:17.722676 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.993669173+00:00 stderr F I0813 20:06:17.993511 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.994296241+00:00 stderr F E0813 20:06:17.994159 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", 
SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:18.438699257+00:00 stderr F I0813 20:06:18.438641 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:06:19.248248059+00:00 stderr F I0813 20:06:19.248041 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.343039114+00:00 stderr F I0813 20:06:19.342098 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:19.584885919+00:00 stderr F I0813 20:06:19.584401 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:19.598600272+00:00 stderr F E0813 20:06:19.598516 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.614885068+00:00 stderr F E0813 20:06:19.614706 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get 
routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.635157919+00:00 stderr F E0813 20:06:19.635096 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.674930438+00:00 stderr F E0813 20:06:19.664489 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.748246817+00:00 stderr F I0813 20:06:19.746043 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:19.749856013+00:00 stderr F I0813 20:06:19.749722 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:19.803189150+00:00 stderr F E0813 20:06:19.794665 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.828946597+00:00 stderr F I0813 20:06:19.828433 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:19.880045051+00:00 stderr F E0813 20:06:19.879148 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:20.043557963+00:00 stderr F E0813 20:06:20.042559 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:20.366935763+00:00 stderr F E0813 20:06:20.366018 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 
2025-08-13T20:06:21.009144944+00:00 stderr F E0813 20:06:21.009081 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:21.609717192+00:00 stderr F I0813 20:06:21.609270 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.234066591+00:00 stderr F I0813 20:06:22.233754 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.287365577+00:00 stderr F I0813 20:06:22.287227 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:22.294139981+00:00 stderr F E0813 20:06:22.294044 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:23.078702238+00:00 stderr F I0813 20:06:23.078452 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:23.152022367+00:00 stderr F I0813 20:06:23.151737 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:23.191397335+00:00 stderr F I0813 20:06:23.191229 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:23.191462137+00:00 stderr F E0813 20:06:23.191446 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 
3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:24.745077965+00:00 stderr F I0813 20:06:24.744379 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:24.783302970+00:00 stderr F I0813 20:06:24.783153 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:24.834289230+00:00 stderr F I0813 20:06:24.833331 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:24.847144218+00:00 stderr F E0813 20:06:24.847017 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.851570105+00:00 stderr F E0813 20:06:24.850085 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.857206446+00:00 stderr F E0813 20:06:24.857177 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.913851938+00:00 stderr F I0813 20:06:24.913718 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 
2025-08-13T20:06:24.951681371+00:00 stderr F I0813 20:06:24.951569 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:25.040631169+00:00 stderr F I0813 20:06:25.040492 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:25.314961334+00:00 stderr F I0813 20:06:25.312493 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:27.048497405+00:00 stderr F I0813 20:06:27.048413 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.215903255+00:00 stderr F I0813 20:06:28.215234 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.256019734+00:00 stderr F I0813 20:06:28.255448 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:28.292744106+00:00 stderr F I0813 20:06:28.292625 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:28.293143647+00:00 stderr F E0813 20:06:28.292841 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:28.414768040+00:00 stderr F I0813 20:06:28.414654 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.646161106+00:00 stderr F I0813 20:06:28.646046 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:29.686390835+00:00 stderr F I0813 20:06:29.685560 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:31.822085495+00:00 stderr F I0813 20:06:31.821356 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:33.153242831+00:00 stderr F I0813 20:06:33.153123 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:33.520317245+00:00 stderr F I0813 20:06:33.519759 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:33.575179558+00:00 stderr F I0813 20:06:33.573144 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:33.575179558+00:00 stderr F E0813 20:06:33.573340 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", 
ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:33.576475315+00:00 stderr F I0813 20:06:33.576423 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:06:36.250033578+00:00 stderr F I0813 20:06:36.248480 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:36.270607538+00:00 stderr F I0813 20:06:36.268604 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:36.377326287+00:00 stderr F I0813 20:06:36.373692 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:36.458074342+00:00 stderr F I0813 20:06:36.457999 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:36.461352066+00:00 stderr F I0813 20:06:36.461272 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:36.967812537+00:00 stderr F I0813 20:06:36.967687 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:37.718851319+00:00 
stderr F I0813 20:06:37.715596 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:39.611042031+00:00 stderr F I0813 20:06:39.610212 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:40.082855298+00:00 stderr F I0813 20:06:40.082102 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:40.161935945+00:00 stderr F I0813 20:06:40.160274 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=32119 2025-08-13T20:06:40.192331217+00:00 stderr F I0813 20:06:40.191960 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=32119 2025-08-13T20:06:40.192331217+00:00 stderr F E0813 20:06:40.192162 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 
2025-08-13T20:06:42.073213563+00:00 stderr F I0813 20:06:42.073075 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:42.353942972+00:00 stderr F I0813 20:06:42.353658 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:42.369517168+00:00 stderr F I0813 20:06:42.369422 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:42.424848485+00:00 stderr F I0813 20:06:42.422844 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from True to False 
("WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"),Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, 
time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" 2025-08-13T20:06:43.369853199+00:00 stderr F I0813 20:06:43.363073 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:43.560472214+00:00 stderr F I0813 20:06:43.559423 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:44.436295555+00:00 stderr F I0813 20:06:44.431154 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:44.744640726+00:00 stderr F I0813 20:06:44.743828 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:45.613246239+00:00 stderr F I0813 20:06:45.612552 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:45.923279647+00:00 stderr F I0813 20:06:45.922384 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available changed from False to True ("All is well") 2025-08-13T20:06:45.926984984+00:00 stderr F I0813 20:06:45.926915 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:47.027851337+00:00 stderr F I0813 20:06:47.027547 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:48.495591258+00:00 stderr F I0813 20:06:48.493729 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" 2025-08-13T20:06:48.541989388+00:00 stderr F I0813 20:06:48.521692 1 helpers.go:184] lister was stale at resourceVersion=32195, live get showed resourceVersion=32202 2025-08-13T20:06:49.220022478+00:00 stderr F E0813 20:06:49.218626 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:49.263620898+00:00 stderr F I0813 20:06:49.259652 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:49.878502717+00:00 stderr F I0813 20:06:49.872820 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)" 2025-08-13T20:06:50.549607578+00:00 stderr F I0813 20:06:50.549511 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:50.963101364+00:00 stderr F I0813 20:06:50.962722 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:52.650859433+00:00 stderr F I0813 20:06:52.649631 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:53.395922365+00:00 stderr F I0813 20:06:53.395406 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:55.018081174+00:00 stderr F I0813 20:06:55.008754 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:00.290906550+00:00 stderr F I0813 20:07:00.269440 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:04.240339803+00:00 stderr F I0813 20:07:04.239152 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:05.713414188+00:00 stderr F I0813 20:07:05.712626 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:05.720681146+00:00 stderr F E0813 20:07:05.720598 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io 
oauth-openshift) 2025-08-13T20:07:05.732849615+00:00 stderr F E0813 20:07:05.732208 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:06.622944324+00:00 stderr F I0813 20:07:06.620393 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:06.814317541+00:00 stderr F E0813 20:07:06.814180 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:09.126943667+00:00 stderr F I0813 20:07:09.122061 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:14.841465356+00:00 stderr F E0813 20:07:14.835520 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:30.191761342+00:00 stderr F E0813 20:07:30.188131 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:58.905403376+00:00 stderr F I0813 20:07:58.896695 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:07:59.008467061+00:00 stderr F I0813 20:07:59.008386 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:59.042475026+00:00 stderr F I0813 20:07:59.042305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)" to "All is well" 2025-08-13T20:08:24.045096453+00:00 stderr F E0813 20:08:24.042992 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.050072765+00:00 stderr F W0813 20:08:24.047200 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.051353532+00:00 stderr F E0813 20:08:24.047264 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", 
ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.061981107+00:00 stderr F E0813 20:08:24.060696 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.065391385+00:00 stderr F W0813 20:08:24.065222 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.065391385+00:00 stderr F E0813 20:08:24.065301 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.081278270+00:00 stderr F E0813 20:08:24.081184 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.090175325+00:00 stderr F W0813 20:08:24.090064 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.090175325+00:00 stderr F E0813 20:08:24.090129 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} 
(check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.115317196+00:00 stderr F E0813 20:08:24.115227 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.117149219+00:00 stderr F W0813 20:08:24.117072 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.117149219+00:00 stderr F E0813 20:08:24.117140 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.164695612+00:00 stderr F E0813 20:08:24.162356 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:24.168014307+00:00 stderr F W0813 20:08:24.165195 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.168014307+00:00 stderr F E0813 20:08:24.166593 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.256644158+00:00 stderr F E0813 20:08:24.256294 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.371038608+00:00 stderr F W0813 20:08:24.370757 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.371038608+00:00 
stderr F E0813 20:08:24.370980 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.574927424+00:00 stderr F E0813 20:08:24.572225 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:24.749282603+00:00 stderr F E0813 20:08:24.749180 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.771673885+00:00 stderr F E0813 20:08:24.771274 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.970065703+00:00 stderr F W0813 20:08:24.969294 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.970065703+00:00 stderr F E0813 20:08:24.969386 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: 
&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:25.168003797+00:00 stderr F E0813 20:08:25.166709 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.367454255+00:00 stderr F E0813 20:08:25.367064 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.568685445+00:00 stderr F E0813 20:08:25.566403 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.769318827+00:00 stderr F W0813 20:08:25.767739 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.769318827+00:00 stderr F E0813 20:08:25.767873 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check 
kube-apiserver that it deploys correctly) 2025-08-13T20:08:25.970122625+00:00 stderr F E0813 20:08:25.968364 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:26.353991311+00:00 stderr F E0813 20:08:26.353083 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:26.378685579+00:00 stderr F E0813 20:08:26.378502 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:26.569700015+00:00 stderr F E0813 20:08:26.569648 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.769699879+00:00 stderr F E0813 20:08:26.767454 1 
wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.967452419+00:00 stderr F E0813 20:08:26.966467 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.165713454+00:00 stderr F E0813 20:08:27.165504 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:27.290641866+00:00 stderr F E0813 20:08:27.290541 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.303245497+00:00 stderr F E0813 20:08:27.303084 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.369988360+00:00 stderr F W0813 20:08:27.369166 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.369988360+00:00 stderr F E0813 20:08:27.369257 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:27.412205051+00:00 stderr F E0813 20:08:27.411140 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.582706679+00:00 stderr F E0813 20:08:27.582457 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.768069004+00:00 stderr F E0813 20:08:27.768009 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.812603321+00:00 stderr F E0813 20:08:27.812334 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.968471070+00:00 stderr F E0813 20:08:27.968174 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.169348299+00:00 stderr F E0813 20:08:28.168588 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.372013970+00:00 stderr F E0813 
20:08:28.371098 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.413937132+00:00 stderr F E0813 20:08:28.412224 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.567555536+00:00 stderr F E0813 20:08:28.566500 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.768137066+00:00 stderr F E0813 20:08:28.767597 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.812500518+00:00 stderr F E0813 20:08:28.812421 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.966491243+00:00 stderr F E0813 20:08:28.966373 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:29.166768005+00:00 stderr F E0813 20:08:29.166653 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.211285252+00:00 stderr F E0813 20:08:29.211232 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.368611422+00:00 stderr F E0813 20:08:29.367387 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.568887134+00:00 stderr F E0813 20:08:29.568833 1 
base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.766863351+00:00 stderr F W0813 20:08:29.766470 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.766863351+00:00 stderr F E0813 20:08:29.766526 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:29.812843879+00:00 stderr F E0813 20:08:29.812659 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.968834921+00:00 stderr F E0813 20:08:29.968403 
1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.169648219+00:00 stderr F E0813 20:08:30.169492 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:30.366019758+00:00 stderr F E0813 20:08:30.365931 1 base_controller.go:268] auditPolicyController 
reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.455198815+00:00 stderr F E0813 20:08:30.455144 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.568202395+00:00 stderr F E0813 20:08:30.566868 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.688641188+00:00 stderr F E0813 20:08:30.688524 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.813959052+00:00 stderr F E0813 20:08:30.812976 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.012709651+00:00 stderr F E0813 20:08:31.011994 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.213289311+00:00 stderr F E0813 20:08:31.213022 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.413338887+00:00 stderr F E0813 
20:08:31.413275 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.812176271+00:00 stderr F E0813 20:08:31.812067 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.015182451+00:00 stderr F E0813 20:08:32.015085 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.217369507+00:00 stderr F E0813 20:08:32.216942 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:32.330155181+00:00 stderr F E0813 20:08:32.330030 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.331628133+00:00 stderr F W0813 20:08:32.331548 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.331628133+00:00 stderr F E0813 20:08:32.331595 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:32.412567704+00:00 stderr F E0813 20:08:32.412458 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.501086511+00:00 stderr F E0813 20:08:32.500995 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.612261799+00:00 stderr F E0813 20:08:32.612160 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.813102527+00:00 stderr F E0813 20:08:32.813006 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.011336201+00:00 stderr F E0813 20:08:33.011169 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.300630165+00:00 stderr F E0813 20:08:33.300571 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.411134734+00:00 stderr F E0813 20:08:33.411019 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.016948782+00:00 stderr F E0813 20:08:34.016646 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:34.211524341+00:00 stderr F E0813 20:08:34.211467 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.979539251+00:00 stderr F E0813 20:08:34.978848 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.068993066+00:00 stderr F E0813 20:08:35.068533 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.494494035+00:00 stderr F E0813 20:08:35.494334 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.867394596+00:00 stderr F E0813 20:08:35.867290 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.331954435+00:00 stderr F E0813 20:08:36.330279 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.663480730+00:00 stderr F E0813 20:08:36.663423 1 base_controller.go:268] 
OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.937287141+00:00 stderr F E0813 20:08:36.937181 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.461235673+00:00 stderr F E0813 20:08:37.461092 1 wellknown_ready_controller.go:113] Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.465001621+00:00 stderr F W0813 20:08:37.464585 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.465001621+00:00 stderr F E0813 20:08:37.464647 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:38.057275622+00:00 stderr F E0813 20:08:38.056945 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.121127314+00:00 stderr F E0813 20:08:40.120651 1 base_controller.go:268] 
NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.202952520+00:00 stderr F E0813 20:08:40.202463 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.997972094+00:00 stderr F E0813 20:08:40.997415 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.873606559+00:00 stderr F E0813 20:08:41.873176 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: 
connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.367147970+00:00 stderr F E0813 20:08:42.366964 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.388495 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.402416291+00:00 stderr F W0813 20:08:42.398454 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.399290 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.398772 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.413402036+00:00 stderr F E0813 20:08:42.413346 1 base_controller.go:268] 
RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.421702924+00:00 stderr F E0813 20:08:42.421659 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:08:42.582036081+00:00 stderr F E0813 20:08:42.581967 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.767459767+00:00 stderr F E0813 20:08:42.767286 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.169258266+00:00 stderr F E0813 20:08:43.168730 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.568299387+00:00 stderr F W0813 20:08:43.568007 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.568299387+00:00 stderr F E0813 20:08:43.568082 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.590563165+00:00 stderr F E0813 20:08:43.590408 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.770945797+00:00 stderr F E0813 20:08:43.770870 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.967106021+00:00 stderr F E0813 20:08:43.966929 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.169006020+00:00 stderr F E0813 20:08:44.167976 1 base_controller.go:268] 
OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.574914518+00:00 stderr F E0813 20:08:44.574749 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:44.969279875+00:00 stderr F E0813 20:08:44.969224 1 base_controller.go:268] RevisionController reconciliation failed: Get 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.367978046+00:00 stderr F E0813 20:08:45.367418 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.578657656+00:00 stderr F E0813 20:08:45.570487 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.768164160+00:00 stderr F W0813 20:08:45.767463 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.768164160+00:00 stderr F E0813 20:08:45.767521 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.968257076+00:00 stderr F E0813 20:08:45.968187 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.174493719+00:00 stderr F E0813 20:08:46.173555 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.567953009+00:00 stderr F E0813 20:08:46.567318 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.967325470+00:00 stderr F E0813 20:08:46.967193 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.367953876+00:00 stderr F I0813 20:08:47.367503 1 request.go:697] Waited for 1.037123464s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:47.376156851+00:00 stderr F E0813 20:08:47.374092 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.569507115+00:00 stderr F E0813 20:08:47.569113 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.780139534+00:00 stderr F E0813 20:08:47.779160 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:47.969995228+00:00 stderr F W0813 20:08:47.969926 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.970092600+00:00 stderr F E0813 20:08:47.970078 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.173554544+00:00 stderr F E0813 20:08:48.173499 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.369102530+00:00 stderr F I0813 20:08:48.369006 1 request.go:697] Waited for 1.198369338s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:48.371757767+00:00 stderr F E0813 20:08:48.370471 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.573633485+00:00 stderr F E0813 20:08:48.573276 1 base_controller.go:268] RevisionController reconciliation failed: Get 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.969884065+00:00 stderr F E0813 20:08:48.969666 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.168740957+00:00 stderr F E0813 20:08:49.168625 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 
10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.566419959+00:00 stderr F I0813 20:08:49.566331 1 request.go:697] Waited for 1.114601407s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2025-08-13T20:08:49.768094771+00:00 stderr F E0813 20:08:49.767501 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.968180258+00:00 stderr F W0813 20:08:49.968050 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.968180258+00:00 stderr F E0813 20:08:49.968111 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:50.172106924+00:00 stderr F E0813 20:08:50.171987 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.364823499+00:00 stderr F E0813 20:08:50.364571 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.367040633+00:00 stderr F E0813 20:08:50.366962 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:50.566413309+00:00 stderr F I0813 20:08:50.566355 1 request.go:697] Waited for 1.197345059s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:50.569850187+00:00 stderr F W0813 20:08:50.569463 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.569850187+00:00 stderr F E0813 20:08:50.569519 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.769344097+00:00 stderr F E0813 20:08:50.769278 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.369874175+00:00 stderr F E0813 20:08:51.367442 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: 
connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:51.769272376+00:00 stderr F E0813 20:08:51.769161 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.968970392+00:00 stderr F W0813 20:08:51.968851 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.968970392+00:00 stderr F E0813 20:08:51.968945 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.170285694+00:00 stderr F E0813 20:08:52.169633 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.369885726+00:00 stderr F E0813 20:08:52.369128 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.769490233+00:00 stderr F E0813 20:08:52.767737 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): 
Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.168273877+00:00 stderr F E0813 20:08:53.168091 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.370563587+00:00 stderr F E0813 20:08:53.370429 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.768486325+00:00 stderr F W0813 20:08:53.768341 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.768540916+00:00 stderr F E0813 20:08:53.768487 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.834438036+00:00 stderr F E0813 20:08:53.833611 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.168090622+00:00 stderr F E0813 20:08:54.167988 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.368919430+00:00 stderr F E0813 
20:08:54.368570 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.574085232+00:00 stderr F E0813 20:08:54.573232 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused] 2025-08-13T20:08:55.169695249+00:00 stderr F E0813 20:08:55.169436 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.370972760+00:00 stderr F W0813 20:08:55.370366 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.370972760+00:00 stderr F E0813 20:08:55.370465 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.768937660+00:00 stderr F E0813 20:08:55.768823 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.967508383+00:00 stderr F E0813 20:08:55.967410 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.368312545+00:00 stderr F W0813 20:08:56.368198 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.368312545+00:00 stderr F E0813 20:08:56.368250 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.568381761+00:00 stderr F E0813 20:08:56.568273 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.167729875+00:00 stderr F W0813 20:08:57.167605 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.167729875+00:00 stderr F E0813 20:08:57.167705 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.368017587+00:00 stderr F E0813 20:08:57.367766 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.570202373+00:00 stderr F E0813 20:08:57.570141 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.767974264+00:00 stderr F E0813 20:08:57.767746 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:58.367627167+00:00 stderr F E0813 20:08:58.367062 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.571766060+00:00 stderr F W0813 20:08:58.569885 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.571766060+00:00 stderr F E0813 20:08:58.570083 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.967660021+00:00 stderr F E0813 20:08:58.967524 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.293345769+00:00 stderr F E0813 20:09:00.292919 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.687833059+00:00 stderr F E0813 20:09:00.687656 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.931614558+00:00 stderr F E0813 20:09:00.931484 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:10.590964230+00:00 stderr F I0813 20:09:10.589417 1 helpers.go:184] lister was stale at resourceVersion=32766, live get showed resourceVersion=33015 2025-08-13T20:09:10.638054530+00:00 stderr F E0813 20:09:10.637972 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:28.814591308+00:00 stderr F I0813 20:09:28.813369 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:29.604256818+00:00 stderr F I0813 20:09:29.603837 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:30.178241024+00:00 stderr F I0813 20:09:30.178185 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:31.361714086+00:00 stderr F I0813 20:09:31.361450 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:32.362193281+00:00 stderr F I0813 20:09:32.361343 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:33.781685288+00:00 stderr F I0813 20:09:33.781545 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:34.400346116+00:00 stderr F I0813 20:09:34.399530 1 helpers.go:184] lister was 
stale at resourceVersion=32766, live get showed resourceVersion=33023 2025-08-13T20:09:34.437505131+00:00 stderr F I0813 20:09:34.437206 1 helpers.go:184] lister was stale at resourceVersion=32766, live get showed resourceVersion=33023 2025-08-13T20:09:34.437505131+00:00 stderr F E0813 20:09:34.437374 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:35.160181941+00:00 stderr F I0813 20:09:35.159682 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:35.160444489+00:00 stderr F I0813 20:09:35.160387 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:35Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", 
Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:35.182949394+00:00 stderr F I0813 20:09:35.178589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), 
Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") 2025-08-13T20:09:36.183181381+00:00 stderr F I0813 20:09:36.182351 1 request.go:697] Waited for 1.014135186s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:09:37.383550266+00:00 stderr F I0813 20:09:37.383452 1 request.go:697] Waited for 1.267698556s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca 2025-08-13T20:09:37.849265269+00:00 stderr F I0813 20:09:37.849146 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 
2025-08-13T20:09:38.593693882+00:00 stderr F I0813 20:09:38.593476 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, 
CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.600523548+00:00 stderr F E0813 20:09:38.600397 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.606486579+00:00 stderr F I0813 20:09:38.606344 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, 
time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it 
deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.613476549+00:00 stderr F E0813 20:09:38.613406 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.625349050+00:00 stderr F I0813 20:09:38.625280 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.633824643+00:00 stderr F E0813 20:09:38.633739 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your 
changes to the latest version and try again 2025-08-13T20:09:38.655101473+00:00 stderr F I0813 20:09:38.655010 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:38.656756420+00:00 stderr F I0813 20:09:38.656525 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.664366438+00:00 stderr F E0813 20:09:38.664093 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.706020173+00:00 stderr F I0813 20:09:38.705672 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", 
Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, 
time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.727943461+00:00 stderr F E0813 20:09:38.727275 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.728329742+00:00 stderr F I0813 20:09:38.728299 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:09:38.808550282+00:00 stderr F I0813 20:09:38.808400 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.817439457+00:00 stderr F E0813 20:09:38.817411 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.979365520+00:00 stderr F I0813 20:09:38.979309 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.993256238+00:00 stderr F E0813 20:09:38.993201 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.995624496+00:00 stderr F I0813 20:09:38.995515 1 reflector.go:351] Caches populated for *v1.Secret from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.074880778+00:00 stderr F I0813 20:09:39.072858 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.315513077+00:00 stderr F I0813 20:09:39.315405 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:39.321931551+00:00 stderr F E0813 20:09:39.321866 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:39.859586766+00:00 stderr F I0813 20:09:39.859233 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.965092921+00:00 stderr F I0813 20:09:39.964876 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api 
server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:39.971887716+00:00 stderr F E0813 20:09:39.971853 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.590280895+00:00 stderr F I0813 20:09:40.588307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:40.721889528+00:00 stderr F I0813 20:09:40.719649 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:41.256895097+00:00 stderr F I0813 20:09:41.254858 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", 
Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), 
Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.272325010+00:00 stderr F E0813 20:09:41.270282 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:42.003101422+00:00 stderr F I0813 20:09:42.001555 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:42.762881406+00:00 stderr F I0813 20:09:42.760248 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:42.762881406+00:00 stderr F E0813 20:09:42.761447 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", 
Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:42.768262710+00:00 stderr F I0813 20:09:42.767079 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:43.836652932+00:00 stderr F I0813 20:09:43.836540 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:43Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:43.858078695+00:00 stderr F E0813 20:09:43.853310 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:44.286400066+00:00 stderr F I0813 20:09:44.286234 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:44.439010921+00:00 stderr F I0813 20:09:44.438763 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:09:44.663102456+00:00 stderr F I0813 20:09:44.662930 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:45.580610793+00:00 stderr F I0813 20:09:45.579381 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:46.812063900+00:00 stderr F I0813 20:09:46.811882 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:46.854568888+00:00 stderr F I0813 20:09:46.854333 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", 
FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:46Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:46.863504424+00:00 stderr F E0813 20:09:46.861719 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: 
Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:47.264677707+00:00 stderr F I0813 20:09:47.264553 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:47.288824629+00:00 stderr F I0813 20:09:47.288681 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:47.736836453+00:00 stderr F I0813 20:09:47.736530 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.148161416+00:00 stderr F I0813 20:09:48.145409 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.975455756+00:00 stderr F I0813 20:09:48.974995 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", 
FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:48Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:48.984041552+00:00 stderr F E0813 20:09:48.983644 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: 
Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:49.615860927+00:00 stderr F I0813 20:09:49.615132 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:49Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:49.623829645+00:00 stderr F E0813 20:09:49.621235 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:49.813302098+00:00 stderr F I0813 20:09:49.813240 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:51.188335851+00:00 stderr F I0813 20:09:51.187817 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:51.601003312+00:00 stderr F E0813 20:09:51.600623 1 base_controller.go:268] WellKnownReadyController 
reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:52.643865962+00:00 stderr F I0813 20:09:52.641709 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:52.686666690+00:00 stderr F I0813 20:09:52.685997 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.692047854+00:00 stderr F E0813 20:09:52.691981 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:53.054038612+00:00 stderr F I0813 20:09:53.053871 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:53.563500969+00:00 stderr F I0813 20:09:53.563367 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.364638189+00:00 stderr F I0813 20:09:54.361472 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:56.583068233+00:00 stderr F I0813 20:09:56.582319 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:57.815065265+00:00 stderr F I0813 20:09:57.814990 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:57.861014273+00:00 stderr F I0813 20:09:57.860957 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, 
CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:57.884623490+00:00 stderr F E0813 20:09:57.884536 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:59.212995395+00:00 stderr F I0813 20:09:59.212118 1 request.go:697] Waited for 1.156067834s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift 2025-08-13T20:10:00.017574593+00:00 stderr F I0813 20:10:00.016494 1 reflector.go:351] Caches 
populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:00.412400763+00:00 stderr F I0813 20:10:00.412021 1 request.go:697] Waited for 1.193054236s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift 2025-08-13T20:10:01.039028689+00:00 stderr F I0813 20:10:01.038959 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:01.414870635+00:00 stderr F I0813 20:10:01.414421 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:01.612929483+00:00 stderr F I0813 20:10:01.611468 1 request.go:697] Waited for 1.187769875s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication 2025-08-13T20:10:05.994944689+00:00 stderr F I0813 20:10:05.994001 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:08.314978607+00:00 stderr F I0813 20:10:08.314538 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:14.120859465+00:00 stderr F I0813 20:10:14.120610 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:20.593659503+00:00 stderr F I0813 20:10:20.593198 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:23.869260428+00:00 stderr F I0813 20:10:23.868864 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:25.514277572+00:00 stderr F I0813 20:10:25.514185 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 
2025-08-13T20:10:27.472026502+00:00 stderr F I0813 20:10:27.470139 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:10:30.920531234+00:00 stderr F I0813 20:10:30.917262 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:32.111634214+00:00 stderr F I0813 20:10:32.110131 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:32.326100383+00:00 stderr F I0813 20:10:32.324274 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:33.337597044+00:00 stderr F I0813 20:10:33.337494 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:34.581236090+00:00 stderr F I0813 20:10:34.580885 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:38.555504355+00:00 stderr F I0813 20:10:38.554902 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:38.978677758+00:00 stderr F I0813 20:10:38.978615 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:38.979649666+00:00 stderr F I0813 20:10:38.979578 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:38Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:38.996976132+00:00 stderr F I0813 20:10:38.995651 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:38Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:38.996976132+00:00 stderr F I0813 20:10:38.996248 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", 
UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well",Available changed from False to True ("All is well") 2025-08-13T20:10:39.002536212+00:00 stderr F E0813 20:10:39.002389 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:39.865847024+00:00 stderr F I0813 20:10:39.865735 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:48.375629945+00:00 stderr F I0813 20:10:48.373525 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:29:36.584846903+00:00 stderr F I0813 20:29:36.583938 1 request.go:697] Waited for 1.136256551s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift 2025-08-13T20:42:36.642834533+00:00 stderr F E0813 20:42:36.641909 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.655063095+00:00 stderr F E0813 20:42:36.654734 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.836902268+00:00 stderr F E0813 20:42:36.836436 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.036862743+00:00 stderr F E0813 20:42:37.036597 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.160458276+00:00 stderr F E0813 20:42:37.159852 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.172532134+00:00 stderr F E0813 20:42:37.172300 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.190632706+00:00 stderr F E0813 20:42:37.190565 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.216130321+00:00 stderr F E0813 20:42:37.216044 1 base_controller.go:268] 
APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.235806149+00:00 stderr F E0813 20:42:37.235648 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.258745950+00:00 stderr F E0813 20:42:37.258690 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.447022458+00:00 stderr F E0813 20:42:37.444536 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.558196643+00:00 stderr F E0813 20:42:37.558098 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.644990536+00:00 stderr F E0813 20:42:37.643264 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.842356836+00:00 stderr F E0813 20:42:37.842114 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.851507320+00:00 
stderr F E0813 20:42:37.850143 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.180712041+00:00 stderr F E0813 20:42:38.179735 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.380054948+00:00 stderr F E0813 20:42:38.379961 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.580030223+00:00 stderr F E0813 20:42:38.579936 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.980053556+00:00 stderr F E0813 20:42:38.979878 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.180444863+00:00 stderr F E0813 20:42:39.179891 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.579247880+00:00 stderr F E0813 20:42:39.578761 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.780079240+00:00 stderr F E0813 20:42:39.780027 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.979863310+00:00 stderr F E0813 20:42:39.979715 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.379862572+00:00 stderr F E0813 20:42:40.379683 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.979263443+00:00 stderr F E0813 20:42:40.979080 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.017829305+00:00 stderr F I0813 20:42:41.017697 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.019967157+00:00 stderr F I0813 20:42:41.019734 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:41.020199643+00:00 stderr F I0813 20:42:41.020132 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:42:41.020199643+00:00 stderr F I0813 20:42:41.020172 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:41.020349598+00:00 stderr F I0813 20:42:41.020265 1 base_controller.go:172] Shutting down OAuthClientsController ... 
2025-08-13T20:42:41.020397429+00:00 stderr F I0813 20:42:41.020364 1 base_controller.go:172] Shutting down CustomRouteController ... 2025-08-13T20:42:41.020412520+00:00 stderr F I0813 20:42:41.020402 1 base_controller.go:172] Shutting down ProxyConfigController ... 2025-08-13T20:42:41.020630246+00:00 stderr F I0813 20:42:41.020566 1 base_controller.go:172] Shutting down OAuthServerRouteEndpointAccessibleController ... 2025-08-13T20:42:41.020630246+00:00 stderr F I0813 20:42:41.020614 1 base_controller.go:172] Shutting down WellKnownReadyController ... 2025-08-13T20:42:41.020989066+00:00 stderr F I0813 20:42:41.020682 1 base_controller.go:172] Shutting down PayloadConfig ... 2025-08-13T20:42:41.021255244+00:00 stderr F I0813 20:42:41.020046 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.021317106+00:00 stderr F I0813 20:42:41.020354 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.021331676+00:00 stderr F I0813 20:42:41.021315 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:41.021392778+00:00 stderr F I0813 20:42:41.021341 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:41.021580743+00:00 stderr F E0813 20:42:41.021519 1 base_controller.go:268] PayloadConfig reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.021734888+00:00 stderr F I0813 20:42:41.021576 1 base_controller.go:114] Shutting down worker of PayloadConfig controller ... 2025-08-13T20:42:41.021734888+00:00 stderr F I0813 20:42:41.021726 1 base_controller.go:172] Shutting down MetadataController ... 2025-08-13T20:42:41.021749498+00:00 stderr F I0813 20:42:41.021742 1 base_controller.go:172] Shutting down OAuthServerWorkloadController ... 
2025-08-13T20:42:41.021897042+00:00 stderr F I0813 20:42:41.021758 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:42:41.021897042+00:00 stderr F I0813 20:42:41.021869 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:42:41.021916773+00:00 stderr F I0813 20:42:41.021886 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:42:41.021916773+00:00 stderr F I0813 20:42:41.021910 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:42:41.021938713+00:00 stderr F I0813 20:42:41.021924 1 base_controller.go:172] Shutting down IngressNodesAvailableController ... 2025-08-13T20:42:41.021951094+00:00 stderr F I0813 20:42:41.021937 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:42:41.021962624+00:00 stderr F I0813 20:42:41.021951 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:41.021974175+00:00 stderr F I0813 20:42:41.021964 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-oauth-apiserver ... 2025-08-13T20:42:41.021985925+00:00 stderr F I0813 20:42:41.021978 1 base_controller.go:172] Shutting down OAuthAPIServerControllerWorkloadController ... 2025-08-13T20:42:41.022060777+00:00 stderr F E0813 20:42:41.022021 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022116439+00:00 stderr F I0813 20:42:41.022083 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:42:41.022128659+00:00 stderr F I0813 20:42:41.022119 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:41.022144009+00:00 stderr F I0813 20:42:41.022135 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:42:41.022156230+00:00 stderr F I0813 20:42:41.022149 1 base_controller.go:172] Shutting down OpenShiftAuthenticatorCertRequester ... 2025-08-13T20:42:41.022168380+00:00 stderr F I0813 20:42:41.022161 1 base_controller.go:172] Shutting down WebhookAuthenticatorController ... 2025-08-13T20:42:41.022181600+00:00 stderr F I0813 20:42:41.022175 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:42:41.022246472+00:00 stderr F I0813 20:42:41.022188 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:42:41.022351045+00:00 stderr F E0813 20:42:41.022301 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.022351045+00:00 stderr F I0813 20:42:41.022344 1 base_controller.go:172] Shutting down RouterCertsDomainValidationController ... 2025-08-13T20:42:41.022732486+00:00 stderr F W0813 20:42:41.022662 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022732486+00:00 stderr F E0813 20:42:41.022709 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022735 1 base_controller.go:114] Shutting down worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022743 1 base_controller.go:104] All OAuthAPIServerControllerWorkloadController workers have been terminated 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022756 1 base_controller.go:172] Shutting down ConfigObserver ... 
2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022769 1 base_controller.go:172] Shutting down TrustDistributionController ... 2025-08-13T20:42:41.022853040+00:00 stderr F I0813 20:42:41.022838 1 base_controller.go:172] Shutting down OpenshiftAuthenticationStaticResources ... 2025-08-13T20:42:41.022863260+00:00 stderr F I0813 20:42:41.022856 1 base_controller.go:172] Shutting down ServiceCAController ... 2025-08-13T20:42:41.022886701+00:00 stderr F I0813 20:42:41.022871 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointsEndpointAccessibleController ... 2025-08-13T20:42:41.022902241+00:00 stderr F I0813 20:42:41.022888 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointAccessibleController ... 2025-08-13T20:42:41.022912292+00:00 stderr F I0813 20:42:41.022905 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:41.022953973+00:00 stderr F I0813 20:42:41.022920 1 base_controller.go:172] Shutting down StatusSyncer_authentication ... 2025-08-13T20:42:41.022953973+00:00 stderr F I0813 20:42:41.022946 1 base_controller.go:150] All StatusSyncer_authentication post start hooks have been terminated 2025-08-13T20:42:41.022970083+00:00 stderr F I0813 20:42:41.022962 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:42:41.022983014+00:00 stderr F I0813 20:42:41.022976 1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_OpenShiftAuthenticator ... 2025-08-13T20:42:41.022995354+00:00 stderr F I0813 20:42:41.022988 1 base_controller.go:172] Shutting down IngressStateController ... 2025-08-13T20:42:41.023007654+00:00 stderr F I0813 20:42:41.023000 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 
2025-08-13T20:42:41.023017265+00:00 stderr F I0813 20:42:41.023008 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:41.023026905+00:00 stderr F I0813 20:42:41.023018 1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ... 2025-08-13T20:42:41.023036665+00:00 stderr F I0813 20:42:41.023026 1 base_controller.go:104] All OAuthClientsController workers have been terminated 2025-08-13T20:42:41.023046405+00:00 stderr F I0813 20:42:41.023035 1 base_controller.go:114] Shutting down worker of CustomRouteController controller ... 2025-08-13T20:42:41.023046405+00:00 stderr F I0813 20:42:41.023041 1 base_controller.go:104] All CustomRouteController workers have been terminated 2025-08-13T20:42:41.023059016+00:00 stderr F I0813 20:42:41.023051 1 base_controller.go:114] Shutting down worker of ProxyConfigController controller ... 2025-08-13T20:42:41.023101647+00:00 stderr F I0813 20:42:41.023071 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:42:41.023113937+00:00 stderr F I0813 20:42:41.023100 1 base_controller.go:114] Shutting down worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:42:41.023113937+00:00 stderr F I0813 20:42:41.023108 1 base_controller.go:104] All OAuthServerRouteEndpointAccessibleController workers have been terminated 2025-08-13T20:42:41.023123998+00:00 stderr F I0813 20:42:41.023117 1 base_controller.go:114] Shutting down worker of WellKnownReadyController controller ... 2025-08-13T20:42:41.023134698+00:00 stderr F I0813 20:42:41.023124 1 base_controller.go:104] All WellKnownReadyController workers have been terminated 2025-08-13T20:42:41.023146558+00:00 stderr F I0813 20:42:41.023135 1 base_controller.go:114] Shutting down worker of MetadataController controller ... 
2025-08-13T20:42:41.023146558+00:00 stderr F I0813 20:42:41.023141 1 base_controller.go:104] All MetadataController workers have been terminated 2025-08-13T20:42:41.023159119+00:00 stderr F I0813 20:42:41.023150 1 base_controller.go:114] Shutting down worker of OAuthServerWorkloadController controller ... 2025-08-13T20:42:41.023171099+00:00 stderr F I0813 20:42:41.023157 1 base_controller.go:104] All OAuthServerWorkloadController workers have been terminated 2025-08-13T20:42:41.023171099+00:00 stderr F I0813 20:42:41.023165 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:42:41.023183889+00:00 stderr F I0813 20:42:41.023172 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:42:41.023183889+00:00 stderr F I0813 20:42:41.023180 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023185 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023193 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023197 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:42:41.023213090+00:00 stderr F I0813 20:42:41.023203 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:42:41.023213090+00:00 stderr F I0813 20:42:41.023209 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:42:41.023255831+00:00 stderr F I0813 20:42:41.023216 1 base_controller.go:114] Shutting down worker of IngressNodesAvailableController controller ... 
2025-08-13T20:42:41.023268572+00:00 stderr F I0813 20:42:41.023223 1 base_controller.go:104] All IngressNodesAvailableController workers have been terminated 2025-08-13T20:42:41.023280872+00:00 stderr F I0813 20:42:41.023272 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:42:41.023290522+00:00 stderr F I0813 20:42:41.023281 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:42:41.023302743+00:00 stderr F I0813 20:42:41.023294 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:41.023312523+00:00 stderr F I0813 20:42:41.023302 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:41.023322243+00:00 stderr F I0813 20:42:41.023311 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2025-08-13T20:42:41.023322243+00:00 stderr F I0813 20:42:41.023318 1 base_controller.go:104] All NamespaceFinalizerController_openshift-oauth-apiserver workers have been terminated 2025-08-13T20:42:41.023368995+00:00 stderr F I0813 20:42:41.023330 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 2025-08-13T20:42:41.023393075+00:00 stderr F I0813 20:42:41.023378 1 base_controller.go:114] Shutting down worker of RouterCertsDomainValidationController controller ... 2025-08-13T20:42:41.023403416+00:00 stderr F I0813 20:42:41.023390 1 base_controller.go:104] All RouterCertsDomainValidationController workers have been terminated 2025-08-13T20:42:41.023413076+00:00 stderr F I0813 20:42:41.023400 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:42:41.023413076+00:00 stderr F I0813 20:42:41.023409 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:41.023423086+00:00 stderr F I0813 20:42:41.023417 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:42:41.023432877+00:00 stderr F I0813 20:42:41.023424 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:42:41.023442607+00:00 stderr F I0813 20:42:41.023431 1 base_controller.go:114] Shutting down worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T20:42:41.023442607+00:00 stderr F I0813 20:42:41.023438 1 base_controller.go:104] All OpenShiftAuthenticatorCertRequester workers have been terminated 2025-08-13T20:42:41.023452487+00:00 stderr F I0813 20:42:41.023445 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorController controller ... 2025-08-13T20:42:41.023462267+00:00 stderr F I0813 20:42:41.023451 1 base_controller.go:104] All WebhookAuthenticatorController workers have been terminated 2025-08-13T20:42:41.023462267+00:00 stderr F I0813 20:42:41.023458 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:42:41.023477468+00:00 stderr F I0813 20:42:41.023468 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:42:41.023477468+00:00 stderr F I0813 20:42:41.023474 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:41.023487358+00:00 stderr F I0813 20:42:41.023481 1 base_controller.go:114] Shutting down worker of TrustDistributionController controller ... 
2025-08-13T20:42:41.023497168+00:00 stderr F I0813 20:42:41.023486 1 base_controller.go:104] All TrustDistributionController workers have been terminated 2025-08-13T20:42:41.023497168+00:00 stderr F I0813 20:42:41.023492 1 base_controller.go:114] Shutting down worker of OpenshiftAuthenticationStaticResources controller ... 2025-08-13T20:42:41.023507299+00:00 stderr F I0813 20:42:41.023498 1 base_controller.go:104] All OpenshiftAuthenticationStaticResources workers have been terminated 2025-08-13T20:42:41.023516869+00:00 stderr F I0813 20:42:41.023505 1 base_controller.go:114] Shutting down worker of ServiceCAController controller ... 2025-08-13T20:42:41.023516869+00:00 stderr F I0813 20:42:41.023510 1 base_controller.go:104] All ServiceCAController workers have been terminated 2025-08-13T20:42:41.023526879+00:00 stderr F I0813 20:42:41.023516 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-08-13T20:42:41.023526879+00:00 stderr F I0813 20:42:41.023521 1 base_controller.go:104] All OAuthServerServiceEndpointsEndpointAccessibleController workers have been terminated 2025-08-13T20:42:41.023536920+00:00 stderr F I0813 20:42:41.023528 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointAccessibleController controller ... 2025-08-13T20:42:41.023536920+00:00 stderr F I0813 20:42:41.023533 1 base_controller.go:104] All OAuthServerServiceEndpointAccessibleController workers have been terminated 2025-08-13T20:42:41.023546910+00:00 stderr F I0813 20:42:41.023539 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:42:41.023556760+00:00 stderr F I0813 20:42:41.023544 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:42:41.023556760+00:00 stderr F I0813 20:42:41.023551 1 base_controller.go:114] Shutting down worker of StatusSyncer_authentication controller ... 2025-08-13T20:42:41.023566880+00:00 stderr F I0813 20:42:41.023557 1 base_controller.go:104] All StatusSyncer_authentication workers have been terminated 2025-08-13T20:42:41.023576481+00:00 stderr F I0813 20:42:41.023564 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:42:41.023576481+00:00 stderr F I0813 20:42:41.023570 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:42:41.023586671+00:00 stderr F I0813 20:42:41.023576 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 2025-08-13T20:42:41.023586671+00:00 stderr F I0813 20:42:41.023582 1 base_controller.go:104] All WebhookAuthenticatorCertApprover_OpenShiftAuthenticator workers have been terminated 2025-08-13T20:42:41.023596761+00:00 stderr F I0813 20:42:41.023588 1 base_controller.go:114] Shutting down worker of IngressStateController controller ... 
2025-08-13T20:42:41.023596761+00:00 stderr F I0813 20:42:41.023593 1 base_controller.go:104] All IngressStateController workers have been terminated 2025-08-13T20:42:41.023606772+00:00 stderr F I0813 20:42:41.023598 1 base_controller.go:104] All ProxyConfigController workers have been terminated 2025-08-13T20:42:41.023606772+00:00 stderr F I0813 20:42:41.023603 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:42:41.028196864+00:00 stderr F I0813 20:42:41.028132 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:42:41.028196864+00:00 stderr F I0813 20:42:41.028167 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:42:41.028868273+00:00 stderr F I0813 20:42:41.028687 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029112 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029153 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029161 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029690 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029707 1 base_controller.go:104] All PayloadConfig workers have been terminated 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029748 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029898 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.030945 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.030972 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:41.031627873+00:00 stderr F I0813 20:42:41.031516 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:41.032096826+00:00 stderr F I0813 20:42:41.032020 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:41.032726745+00:00 stderr F I0813 20:42:41.032663 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:41.033029423+00:00 stderr F I0813 20:42:41.032918 1 builder.go:329] server exited 2025-08-13T20:42:41.033148777+00:00 stderr F I0813 20:42:41.033087 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:41.033428035+00:00 stderr F I0813 20:42:41.033141 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:41.033592650+00:00 
stderr F I0813 20:42:41.033525 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:41.033738504+00:00 stderr F I0813 20:42:41.033702 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:41.034633180+00:00 stderr F E0813 20:42:41.034423 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.035723531+00:00 stderr F W0813 20:42:41.035649 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.log
2025-08-13T19:59:06.260034229+00:00 stdout F Copying system trust bundle 2025-08-13T19:59:22.590102599+00:00 stderr F W0813 19:59:22.588695 1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file or directory 2025-08-13T19:59:22.640260319+00:00 stderr F I0813 19:59:22.638079 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:22.834208448+00:00 stderr F I0813 19:59:22.833239 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T19:59:22.848578007+00:00 stderr F I0813 19:59:22.834682 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew.
The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:22.881456264+00:00 stderr F I0813 19:59:22.881027 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:25.607941583+00:00 stderr F I0813 19:59:25.589342 1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161- 2025-08-13T19:59:25.893253336+00:00 stderr F I0813 19:59:25.891501 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:33.373549473+00:00 stderr F I0813 19:59:33.372947 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:33.668078119+00:00 stderr F I0813 19:59:33.631417 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:33.672148474+00:00 stderr F I0813 19:59:33.672098 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:33.691128516+00:00 stderr F I0813 19:59:33.690001 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T19:59:33.691409884+00:00 stderr F I0813 19:59:33.691263 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T19:59:34.000225697+00:00 stderr F I0813 19:59:33.999685 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T19:59:34.000225697+00:00 stderr F I0813 19:59:33.999969 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:34.000225697+00:00 stderr F W0813 19:59:33.999985 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:34.000225697+00:00 stderr F W0813 19:59:33.999991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:34.141756031+00:00 stderr F I0813 19:59:34.141160 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:34.141756031+00:00 stderr F I0813 19:59:34.141651 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:34.172037764+00:00 stderr F I0813 19:59:34.171578 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.172037764+00:00 stderr F I0813 19:59:34.171693 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:34.175457882+00:00 stderr F I0813 19:59:34.173635 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.254901 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:34.254658649 +0000 UTC))" 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255474 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:34.255444882 +0000 UTC))" 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255505 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255662 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255684 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:34.362964547+00:00 stderr F I0813 19:59:34.362912 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.411506 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.412464 1 leaderelection.go:250] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock... 
2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.230255 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:34.593040415+00:00 stderr F I0813 19:59:34.468477 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:34.631076069+00:00 stderr F I0813 19:59:34.631009 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:34.630966396 +0000 UTC))" 2025-08-13T19:59:34.631178092+00:00 stderr F I0813 19:59:34.631155 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:34.631138831 +0000 UTC))" 2025-08-13T19:59:34.631231254+00:00 stderr F I0813 19:59:34.631216 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:34.631199473 +0000 UTC))" 2025-08-13T19:59:34.682004001+00:00 stderr F I0813 19:59:34.681926 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:34.631250714 +0000 UTC))" 2025-08-13T19:59:34.685440619+00:00 stderr F I0813 19:59:34.685412 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685362857 +0000 UTC))" 2025-08-13T19:59:34.685525151+00:00 stderr F I0813 19:59:34.685506 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.68548534 +0000 UTC))" 2025-08-13T19:59:34.685711317+00:00 stderr F I0813 19:59:34.685608 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685587803 +0000 UTC))" 2025-08-13T19:59:34.685770398+00:00 stderr F I0813 19:59:34.685751 1 tlsconfig.go:178] "Loaded 
client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685735487 +0000 UTC))" 2025-08-13T19:59:34.693335204+00:00 stderr F I0813 19:59:34.693308 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:34.693280732 +0000 UTC))" 2025-08-13T19:59:34.709294179+00:00 stderr F I0813 19:59:34.709228 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:34.709179716 +0000 UTC))" 2025-08-13T19:59:34.975225909+00:00 stderr F I0813 19:59:34.975151 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T19:59:35.045131102+00:00 stderr F I0813 19:59:35.045053 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.045484022+00:00 stderr F E0813 19:59:35.045451 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for 
CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.045567465+00:00 stderr F E0813 19:59:35.045546 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.046629675+00:00 stderr F I0813 19:59:35.046602 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock 2025-08-13T19:59:35.076158477+00:00 stderr F E0813 19:59:35.069426 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.076494866+00:00 stderr F E0813 19:59:35.076466 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.076594539+00:00 stderr F I0813 19:59:35.069664 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28178", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_6afa03f3-b0bc-43af-aa46-6f8b0362eaa6 became leader 2025-08-13T19:59:35.125263296+00:00 stderr F E0813 19:59:35.110643 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.130622869+00:00 stderr F E0813 19:59:35.130220 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.172244315+00:00 stderr F E0813 19:59:35.172183 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.172356189+00:00 stderr F E0813 19:59:35.172339 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.214627814+00:00 stderr F E0813 19:59:35.214570 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.214826349+00:00 stderr F E0813 19:59:35.214763 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.302565860+00:00 stderr F E0813 19:59:35.302503 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.302658613+00:00 stderr F E0813 19:59:35.302645 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.475498699+00:00 stderr F E0813 19:59:35.475207 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.475601372+00:00 stderr F 
E0813 19:59:35.475584 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.153495055+00:00 stderr F I0813 19:59:36.144494 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:36.153495055+00:00 stderr F E0813 19:59:36.147098 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.153495055+00:00 stderr F E0813 19:59:36.147158 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.156144171+00:00 stderr F I0813 19:59:36.155212 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T19:59:36.270210032+00:00 stderr F I0813 19:59:36.270060 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:36.796701340+00:00 stderr F E0813 19:59:36.795870 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.796701340+00:00 stderr F E0813 19:59:36.796317 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:38.147242088+00:00 stderr F E0813 19:59:38.147176 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:38.156567364+00:00 stderr F E0813 19:59:38.156543 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:40.736104134+00:00 stderr F E0813 19:59:40.720145 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:40.736104134+00:00 stderr F E0813 19:59:40.725657 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.370973406+00:00 stderr F I0813 19:59:42.328014 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:42.370973406+00:00 stderr F I0813 19:59:42.357328 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:42.624090121+00:00 stderr F I0813 19:59:42.581157 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:42.624090121+00:00 stderr F W0813 19:59:42.581689 1 reflector.go:539] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:42.624090121+00:00 stderr F E0813 19:59:42.581720 1 reflector.go:147] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:42.636923546+00:00 stderr F 
W0813 19:59:42.636451 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:42.636923546+00:00 stderr F E0813 19:59:42.636577 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:42.636923546+00:00 stderr F I0813 19:59:42.636629 1 base_controller.go:67] Waiting for caches to sync for OAuthServerRouteEndpointAccessibleController 2025-08-13T19:59:43.927570097+00:00 stderr F I0813 19:59:43.924584 1 request.go:697] Waited for 1.626886194s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/nodes?limit=500&resourceVersion=0 2025-08-13T19:59:43.983752688+00:00 stderr F I0813 19:59:43.977926 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.029193893+00:00 stderr F I0813 19:59:42.347523 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261391 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261449 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261463 1 base_controller.go:67] Waiting for caches to sync for MetadataController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261476 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261487 1 base_controller.go:67] Waiting for caches to sync 
for PayloadConfig 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261499 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261508 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController 2025-08-13T19:59:44.285028176+00:00 stderr F I0813 19:59:44.284967 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.313641902+00:00 stderr F I0813 19:59:44.308901 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController 2025-08-13T19:59:44.344395498+00:00 stderr F I0813 19:59:44.330525 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.357344547+00:00 stderr F I0813 19:59:44.357293 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.379923801+00:00 stderr F I0813 19:59:44.364104 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.380647502+00:00 stderr F I0813 19:59:44.380612 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.381414253+00:00 stderr F I0813 19:59:44.381391 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.382467663+00:00 stderr F I0813 19:59:44.382438 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.419523020+00:00 stderr F I0813 19:59:44.419457 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.420504798+00:00 stderr F I0813 19:59:44.420473 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 
2025-08-13T19:59:44.510409980+00:00 stderr F I0813 19:59:44.510343 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:44.540482718+00:00 stderr F I0813 19:59:44.538576 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.561860267+00:00 stderr F I0813 19:59:44.548935 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.561860267+00:00 stderr F I0813 19:59:44.550243 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.695884888+00:00 stderr F I0813 19:59:44.670212 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T19:59:44.695884888+00:00 stderr F I0813 19:59:44.670327 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2025-08-13T19:59:44.704042670+00:00 stderr F I0813 19:59:44.697017 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.721011894+00:00 stderr F I0813 19:59:44.720828 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.725898 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.755682 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.770326 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.770412 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T19:59:44.885173573+00:00 stderr F I0813 19:59:44.881585 1 trace.go:236] Trace[193441872]: "DeltaFIFO Pop Process" ID:default,Depth:63,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.293) (total time: 510ms): 2025-08-13T19:59:44.885173573+00:00 stderr F Trace[193441872]: [510.729819ms] [510.729819ms] END 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.548496 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.549220 1 request.go:697] Waited for 3.178596966s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps?limit=500&resourceVersion=0 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.549388 1 trace.go:236] Trace[1372478155]: "DeltaFIFO Pop Process" ID:csr-5phj9,Depth:13,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.756) (total time: 793ms): 2025-08-13T19:59:45.549647464+00:00 stderr F Trace[1372478155]: [793.1724ms] [793.1724ms] END 2025-08-13T19:59:45.554639617+00:00 stderr F I0813 19:59:45.554418 1 trace.go:236] Trace[1425268458]: "DeltaFIFO Pop Process" ID:kube-node-lease,Depth:61,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.935) (total time: 614ms): 2025-08-13T19:59:45.554639617+00:00 stderr F Trace[1425268458]: [614.132736ms] [614.132736ms] END 2025-08-13T19:59:45.798581691+00:00 stderr F I0813 19:59:45.793346 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:45.807958558+00:00 stderr F I0813 19:59:45.803328 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:45.814158995+00:00 stderr F I0813 19:59:45.814111 1 reflector.go:351] Caches populated for *v1.APIService from 
k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T19:59:45.832557879+00:00 stderr F I0813 19:59:45.832188 1 trace.go:236] Trace[3213281]: "DeltaFIFO Pop Process" ID:openshift-apiserver,Depth:57,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.554) (total time: 277ms): 2025-08-13T19:59:45.832557879+00:00 stderr F Trace[3213281]: [277.296235ms] [277.296235ms] END 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834510 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834641 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834710 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T19:59:45.834761962+00:00 stderr F I0813 19:59:45.834730 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController 2025-08-13T19:59:45.834761962+00:00 stderr F I0813 19:59:45.834753 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController 2025-08-13T19:59:45.846495966+00:00 stderr F I0813 19:59:45.834770 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController 2025-08-13T19:59:45.846495966+00:00 stderr F I0813 19:59:45.834881 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:45.945129288+00:00 stderr F E0813 19:59:45.935665 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.100202093+00:00 stderr F E0813 19:59:47.090732 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.202653813+00:00 stderr F I0813 19:59:47.200413 1 trace.go:236] Trace[1441772173]: "DeltaFIFO Pop Process" ID:openshift-etcd,Depth:37,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.835) (total time: 1364ms): 2025-08-13T19:59:47.202653813+00:00 stderr F Trace[1441772173]: [1.364812394s] [1.364812394s] END 2025-08-13T19:59:47.270733784+00:00 stderr F I0813 19:59:47.270612 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:47.317994751+00:00 stderr F I0813 19:59:47.307542 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:47.899932630+00:00 stderr F I0813 19:59:47.627693 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683396 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683427 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103115 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103128 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683454 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103149 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103165 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683522 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683546 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.105971 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.105982 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.719088 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.719577 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.738197 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.707766 1 request.go:697] Waited for 5.044216967s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces?limit=500&resourceVersion=0 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762473 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762492 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762535 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107562 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107570 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.809319 1 base_controller.go:67] Waiting for caches to sync for IngressStateController
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107630 1 base_controller.go:73] Caches are synced for IngressStateController
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107637 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ...
2025-08-13T19:59:48.110215605+00:00 stderr F I0813 19:59:48.109487 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController
2025-08-13T19:59:48.110238695+00:00 stderr F I0813 19:59:47.809389 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:48.110291597+00:00 stderr F I0813 19:59:48.110276 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver
2025-08-13T19:59:48.110433441+00:00 stderr F I0813 19:59:48.110414 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController
2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.110459 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.110468 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver
2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.109504 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-08-13T19:59:48.110533554+00:00 stderr F I0813 19:59:48.099900 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointAccessibleController
2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110532 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ...
2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.101422 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController
2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110559 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ...
2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.101513 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController
2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110578 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ...
2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.102613 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication
2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110597 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication
2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110602 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ...
2025-08-13T19:59:48.110675808+00:00 stderr F I0813 19:59:48.110655 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController
2025-08-13T19:59:48.110725059+00:00 stderr F I0813 19:59:48.110712 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController
2025-08-13T19:59:48.110767590+00:00 stderr F I0813 19:59:48.110755 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController
2025-08-13T19:59:48.111398048+00:00 stderr F I0813 19:59:48.110480 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController
2025-08-13T19:59:48.111567353+00:00 stderr F E0813 19:59:48.111464 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T19:59:48.112250462+00:00 stderr F I0813 19:59:48.112225 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:48.112636423+00:00 stderr F I0813 19:59:48.112552 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::IngressStateEndpoints_NonReadyEndpoints::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:48.113512278+00:00 stderr F I0813 19:59:48.113444 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources
2025-08-13T19:59:48.113564680+00:00 stderr F I0813 19:59:48.113544 1 base_controller.go:73] Caches are synced for APIServerStaticResources
2025-08-13T19:59:48.113602291+00:00 stderr F I0813 19:59:48.113589 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ...
2025-08-13T19:59:48.221219729+00:00 stderr F E0813 19:59:48.219539 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T19:59:48.945052413+00:00 stderr F E0813 19:59:48.944337 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T19:59:48.969533311+00:00 stderr F E0813 19:59:48.969473 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T19:59:49.011041914+00:00 stderr F E0813 19:59:49.010539 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T19:59:49.016328685+00:00 stderr F I0813 19:59:49.016267 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132
2025-08-13T19:59:49.052701932+00:00 stderr F I0813 19:59:49.052634 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:51.339436357+00:00 stderr F I0813 19:59:51.338201 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.913025 1 trace.go:236] Trace[1463818210]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 10542ms):
2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1463818210]: ---"Objects listed" error: 10542ms (19:59:52.912)
2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1463818210]: [10.542314785s] [10.542314785s] END
2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.913746 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.914537 1 trace.go:236] Trace[1770587185]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10721ms):
2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1770587185]: ---"Objects listed" error: 10721ms (19:59:52.914)
2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1770587185]: [10.721248335s] [10.721248335s] END
2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.914546 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:52.915093852+00:00 stderr F W0813 19:59:52.914686 1 reflector.go:539] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)
2025-08-13T19:59:52.915093852+00:00 stderr F E0813 19:59:52.914706 1 reflector.go:147] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)
2025-08-13T19:59:52.990222764+00:00 stderr F I0813 19:59:52.990163 1 trace.go:236] Trace[770957654]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10796ms):
2025-08-13T19:59:52.990222764+00:00 stderr F Trace[770957654]: ---"Objects listed" error: 10796ms (19:59:52.990)
2025-08-13T19:59:52.990222764+00:00 stderr F Trace[770957654]: [10.796764708s] [10.796764708s] END
2025-08-13T19:59:52.990287066+00:00 stderr F I0813 19:59:52.990272 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:52.990656876+00:00 stderr F I0813 19:59:52.990634 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:52.991002756+00:00 stderr F I0813 19:59:52.990974 1 trace.go:236] Trace[300001418]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10797ms):
2025-08-13T19:59:52.991002756+00:00 stderr F Trace[300001418]: ---"Objects listed" error: 10797ms (19:59:52.990)
2025-08-13T19:59:52.991002756+00:00 stderr F Trace[300001418]: [10.79755446s] [10.79755446s] END
2025-08-13T19:59:52.991061928+00:00 stderr F I0813 19:59:52.991044 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.003764270+00:00 stderr F I0813 19:59:52.993586 1 trace.go:236] Trace[979236584]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.663) (total time: 10329ms):
2025-08-13T19:59:53.003764270+00:00 stderr F Trace[979236584]: ---"Objects listed" error: 10329ms (19:59:52.993)
2025-08-13T19:59:53.003764270+00:00 stderr F Trace[979236584]: [10.329986423s] [10.329986423s] END
2025-08-13T19:59:53.003764270+00:00 stderr F I0813 19:59:52.993626 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.004158351+00:00 stderr F I0813 19:59:53.004124 1 trace.go:236] Trace[86867999]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 10633ms):
2025-08-13T19:59:53.004158351+00:00 stderr F Trace[86867999]: ---"Objects listed" error: 10633ms (19:59:53.004)
2025-08-13T19:59:53.004158351+00:00 stderr F Trace[86867999]: [10.633566795s] [10.633566795s] END
2025-08-13T19:59:53.004207972+00:00 stderr F I0813 19:59:53.004192 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.004401748+00:00 stderr F I0813 19:59:53.004378 1 trace.go:236] Trace[328557283]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10811ms):
2025-08-13T19:59:53.004401748+00:00 stderr F Trace[328557283]: ---"Objects listed" error: 10811ms (19:59:53.004)
2025-08-13T19:59:53.004401748+00:00 stderr F Trace[328557283]: [10.811036734s] [10.811036734s] END
2025-08-13T19:59:53.004457219+00:00 stderr F I0813 19:59:53.004443 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.005326104+00:00 stderr F W0813 19:59:53.005292 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:53.005426517+00:00 stderr F E0813 19:59:53.005405 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:53.006176029+00:00 stderr F I0813 19:59:53.006146 1 trace.go:236] Trace[970424958]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10812ms):
2025-08-13T19:59:53.006176029+00:00 stderr F Trace[970424958]: ---"Objects listed" error: 10812ms (19:59:53.006)
2025-08-13T19:59:53.006176029+00:00 stderr F Trace[970424958]: [10.812689222s] [10.812689222s] END
2025-08-13T19:59:53.006235890+00:00 stderr F I0813 19:59:53.006217 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.007204038+00:00 stderr F I0813 19:59:53.007172 1 trace.go:236] Trace[1873105387]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 10636ms):
2025-08-13T19:59:53.007204038+00:00 stderr F Trace[1873105387]: ---"Objects listed" error: 10636ms (19:59:53.007)
2025-08-13T19:59:53.007204038+00:00 stderr F Trace[1873105387]: [10.636448808s] [10.636448808s] END
2025-08-13T19:59:53.007264810+00:00 stderr F I0813 19:59:53.007246 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.007769564+00:00 stderr F I0813 19:59:53.007746 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver
2025-08-13T19:59:53.028289059+00:00 stderr F I0813 19:59:53.028255 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ...
2025-08-13T19:59:53.109228056+00:00 stderr F I0813 19:59:53.109117 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver
2025-08-13T19:59:53.109352030+00:00 stderr F I0813 19:59:53.109330 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ...
2025-08-13T19:59:53.111984805+00:00 stderr F I0813 19:59:53.111953 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:53.111911542 +0000 UTC))"
2025-08-13T19:59:53.308283359+00:00 stderr F I0813 19:59:53.269501 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:53.269156764 +0000 UTC))"
2025-08-13T19:59:53.362028301+00:00 stderr F I0813 19:59:53.154295 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling
2025-08-13T19:59:53.362443493+00:00 stderr F I0813 19:59:53.154593 1 trace.go:236] Trace[1108983851]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.297) (total time: 10856ms):
2025-08-13T19:59:53.362443493+00:00 stderr F Trace[1108983851]: ---"Objects listed" error: 10856ms (19:59:53.154)
2025-08-13T19:59:53.362443493+00:00 stderr F Trace[1108983851]: [10.856865691s] [10.856865691s] END
2025-08-13T19:59:53.362521245+00:00 stderr F I0813 19:59:53.156190 1 trace.go:236] Trace[133450871]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.663) (total time: 10492ms):
2025-08-13T19:59:53.362521245+00:00 stderr F Trace[133450871]: ---"Objects listed" error: 10492ms (19:59:53.156)
2025-08-13T19:59:53.362521245+00:00 stderr F Trace[133450871]: [10.492752323s] [10.492752323s] END
2025-08-13T19:59:53.439980033+00:00 stderr F I0813 19:59:53.258277 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:53.444960535+00:00 stderr F I0813 19:59:53.270449 1 trace.go:236] Trace[1116695782]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 11076ms):
2025-08-13T19:59:53.444960535+00:00 stderr F Trace[1116695782]: ---"Objects listed" error: 11076ms (19:59:53.270)
2025-08-13T19:59:53.444960535+00:00 stderr F Trace[1116695782]: [11.076712457s] [11.076712457s] END
2025-08-13T19:59:53.445165581+00:00 stderr F I0813 19:59:53.310605 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready")
2025-08-13T19:59:53.445221433+00:00 stderr F I0813 19:59:53.385293 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.385214752 +0000 UTC))"
2025-08-13T19:59:53.445250794+00:00 stderr F I0813 19:59:53.385330 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.445344716+00:00 stderr F I0813 19:59:53.439913 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::IngressStateEndpoints_NonReadyEndpoints::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:53.445501421+00:00 stderr F I0813 19:59:53.441443 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.445530422+00:00 stderr F I0813 19:59:53.443214 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T19:59:53.445640545+00:00 stderr F I0813 19:59:53.443260 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController
2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443279 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester
2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443295 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController
2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443330 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController
2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.479112 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.479188 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.47914662 +0000 UTC))"
2025-08-13T19:59:53.504193694+00:00 stderr F I0813 19:59:53.504129 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T19:59:53.505556923+00:00 stderr F I0813 19:59:53.504744 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ...
2025-08-13T19:59:53.505706487+00:00 stderr F I0813 19:59:53.504752 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ...
2025-08-13T19:59:53.506406407+00:00 stderr F I0813 19:59:53.504758 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ...
2025-08-13T19:59:53.548425305+00:00 stderr F I0813 19:59:53.504769 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ...
2025-08-13T19:59:53.548966790+00:00 stderr F I0813 19:59:53.528316 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.493298023 +0000 UTC))"
2025-08-13T19:59:53.549686591+00:00 stderr F I0813 19:59:53.549596 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.549489755 +0000 UTC))"
2025-08-13T19:59:53.549759463+00:00 stderr F I0813 19:59:53.528426 1 base_controller.go:73] Caches are synced for EncryptionKeyController
2025-08-13T19:59:53.550024050+00:00 stderr F I0813 19:59:53.528443 1 base_controller.go:73] Caches are synced for EncryptionConditionController
2025-08-13T19:59:53.550170024+00:00 stderr F I0813 19:59:53.528457 1 base_controller.go:73] Caches are synced for EncryptionStateController
2025-08-13T19:59:53.550360770+00:00 stderr F I0813 19:59:53.528468 1 base_controller.go:73] Caches are synced for EncryptionPruneController
2025-08-13T19:59:53.550653758+00:00 stderr F I0813 19:59:53.528481 1 base_controller.go:73] Caches are synced for EncryptionMigrationController
2025-08-13T19:59:53.550965407+00:00 stderr F I0813 19:59:53.549918 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.549892966 +0000 UTC))"
2025-08-13T19:59:53.551064420+00:00 stderr F I0813 19:59:53.550109 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ...
2025-08-13T19:59:53.551256955+00:00 stderr F I0813 19:59:53.551240 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ...
2025-08-13T19:59:53.588022383+00:00 stderr F I0813 19:59:53.550308 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ...
2025-08-13T19:59:53.588918889+00:00 stderr F I0813 19:59:53.550351 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ...
2025-08-13T19:59:53.589305100+00:00 stderr F I0813 19:59:53.550935 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ...
2025-08-13T19:59:53.589770713+00:00 stderr F I0813 19:59:53.551449 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.55140693 +0000 UTC))"
2025-08-13T19:59:53.611528333+00:00 stderr F I0813 19:59:53.611498 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.61140994 +0000 UTC))"
2025-08-13T19:59:53.617472703+00:00 stderr F I0813 19:59:53.617445 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:53.617419221 +0000 UTC))"
2025-08-13T19:59:53.631119492+00:00 stderr F I0813 19:59:53.631088 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:53.63106044 +0000 UTC))"
2025-08-13T19:59:54.191673681+00:00 stderr F I0813 19:59:53.836206 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:54.191763743+00:00 stderr F I0813 19:59:53.836250 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NoValidCertificateFound' No valid client certificate for OpenShiftAuthenticatorCertRequester is found: part of the certificate is expired: sub: CN=system:serviceaccount:openshift-oauth-apiserver:openshift-authenticator, notAfter: 2025-06-27 13:10:04 +0000 UTC
2025-08-13T19:59:54.191935458+00:00 stderr F I0813 19:59:53.836465 1 trace.go:236] Trace[1025006076]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 11466ms):
2025-08-13T19:59:54.191935458+00:00 stderr F Trace[1025006076]: ---"Objects listed" error: 11466ms (19:59:53.836)
2025-08-13T19:59:54.191935458+00:00 stderr F Trace[1025006076]: [11.466047985s] [11.466047985s] END
2025-08-13T19:59:54.192130534+00:00 stderr F I0813 19:59:54.192101 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:54.371965880+00:00 stderr F I0813 19:59:54.371650 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:54.504917340+00:00 stderr F I0813 19:59:54.448435 1 trace.go:236] Trace[570889891]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 12077ms):
2025-08-13T19:59:54.504917340+00:00 stderr F Trace[570889891]: ---"Objects listed" error: 12077ms (19:59:54.447)
2025-08-13T19:59:54.504917340+00:00 stderr F Trace[570889891]: [12.077815774s] [12.077815774s] END
2025-08-13T19:59:54.504917340+00:00 stderr F I0813 19:59:54.448476 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:54.640223627+00:00 stderr F I0813 19:59:54.638295 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T19:59:54.640223627+00:00 stderr F I0813 19:59:54.638380 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.806600 1 trace.go:236] Trace[478681942]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.475) (total time: 12331ms):
2025-08-13T19:59:54.809634356+00:00 stderr F Trace[478681942]: ---"Objects listed" error: 12331ms (19:59:54.806)
2025-08-13T19:59:54.809634356+00:00 stderr F Trace[478681942]: [12.331477334s] [12.331477334s] END
2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807076 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807439 1 trace.go:236] Trace[1033546446]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.191) (total time: 12615ms):
2025-08-13T19:59:54.809634356+00:00 stderr F Trace[1033546446]: ---"Objects listed" error: 12615ms (19:59:54.807)
2025-08-13T19:59:54.809634356+00:00 stderr F Trace[1033546446]: [12.61581104s] [12.61581104s] END
2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807446 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.821143 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the
server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.822735 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CSRApproval' The CSR "system:openshift:openshift-authenticator-dk965" has 
been approved 2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.824230 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post 
oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T19:59:54.824485799+00:00 stderr F I0813 19:59:54.824461 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:54.824525890+00:00 stderr F I0813 19:59:54.824512 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:54.824587032+00:00 stderr F I0813 19:59:54.824573 1 base_controller.go:73] Caches are synced for TrustDistributionController 2025-08-13T19:59:54.824618303+00:00 stderr F I0813 19:59:54.824606 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ... 2025-08-13T19:59:54.826882207+00:00 stderr F I0813 19:59:54.826753 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CSRCreated' A csr "system:openshift:openshift-authenticator-dk965" is created for OpenShiftAuthenticatorCertRequester 2025-08-13T19:59:54.835257986+00:00 stderr F I0813 19:59:54.835113 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:54.835257986+00:00 stderr F I0813 19:59:54.835149 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T19:59:54.849867433+00:00 stderr F I0813 19:59:54.849738 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources 2025-08-13T19:59:54.849977596+00:00 stderr F I0813 19:59:54.849960 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ... 2025-08-13T19:59:54.862338968+00:00 stderr F I0813 19:59:54.862251 1 base_controller.go:73] Caches are synced for ServiceCAController 2025-08-13T19:59:54.862479722+00:00 stderr F I0813 19:59:54.862465 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ... 2025-08-13T19:59:54.862529714+00:00 stderr F I0813 19:59:54.862439 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController 2025-08-13T19:59:54.862562194+00:00 stderr F I0813 19:59:54.862550 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ... 2025-08-13T19:59:55.068942147+00:00 stderr F E0813 19:59:55.009727 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.068942147+00:00 stderr F I0813 19:59:55.064916 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.598418321+00:00 stderr F E0813 19:59:55.594431 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.618398260+00:00 stderr F I0813 19:59:55.609513 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.635942630+00:00 stderr F I0813 19:59:55.635868 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125 2025-08-13T19:59:56.577146750+00:00 stderr F W0813 19:59:56.576170 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:56.577146750+00:00 stderr F E0813 19:59:56.576603 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:56.803007408+00:00 stderr F I0813 19:59:56.784530 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ClientCertificateCreated' A new client certificate for OpenShiftAuthenticatorCertRequester is available 2025-08-13T19:59:57.869365875+00:00 stderr F I0813 19:59:57.817721 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": 
the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.869365875+00:00 stderr F I0813 19:59:57.836769 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 
requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:58.532150488+00:00 stderr F I0813 19:59:58.491747 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to 
handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:58.555246356+00:00 stderr F I0813 19:59:58.554051 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:59.026001885+00:00 stderr F E0813 19:59:59.023617 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:59.026001885+00:00 stderr F I0813 19:59:59.025256 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:59.026001885+00:00 stderr F I0813 19:59:59.025744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authentication-integrated-oauth -n openshift-config because it changed 2025-08-13T19:59:59.191602576+00:00 stderr F I0813 19:59:59.191358 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the 
server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:00.988583578+00:00 stderr F W0813 20:00:00.985560 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:00.988583578+00:00 stderr F E0813 20:00:00.988021 1 reflector.go:147] 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:05.766076904+00:00 stderr F I0813 20:00:05.756472 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.756441289 +0000 UTC))" 2025-08-13T20:00:05.766246049+00:00 stderr F I0813 20:00:05.766224 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.766140316 +0000 UTC))" 2025-08-13T20:00:05.766725483+00:00 stderr F I0813 20:00:05.766707 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.766684052 +0000 UTC))" 2025-08-13T20:00:05.766829396+00:00 stderr F I0813 20:00:05.766764 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.766748793 +0000 UTC))" 2025-08-13T20:00:05.766933289+00:00 stderr F I0813 20:00:05.766915 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.766896128 +0000 UTC))" 2025-08-13T20:00:05.766987800+00:00 stderr F I0813 20:00:05.766971 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.766954729 +0000 UTC))" 2025-08-13T20:00:05.767042702+00:00 stderr F I0813 20:00:05.767029 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767011351 +0000 UTC))" 2025-08-13T20:00:05.767096963+00:00 stderr F I0813 20:00:05.767081 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767063202 +0000 UTC))" 2025-08-13T20:00:05.767142385+00:00 stderr F I0813 20:00:05.767130 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.767116244 +0000 UTC))" 2025-08-13T20:00:05.767211757+00:00 stderr F I0813 20:00:05.767185 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767163085 +0000 UTC))" 2025-08-13T20:00:05.767562677+00:00 stderr F I0813 20:00:05.767544 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.767525566 +0000 UTC))" 
2025-08-13T20:00:05.768005659+00:00 stderr F I0813 20:00:05.767984 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 20:00:05.767966438 +0000 UTC))" 2025-08-13T20:00:12.004702051+00:00 stderr F I0813 20:00:11.995725 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:00:12.036581620+00:00 stderr F I0813 20:00:12.012869 1 base_controller.go:73] Caches are synced for WellKnownReadyController 2025-08-13T20:00:12.036581620+00:00 stderr F I0813 20:00:12.012914 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.044553 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.044598 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045238 1 base_controller.go:73] Caches are synced for CustomRouteController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045269 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045301 1 base_controller.go:73] Caches are synced for ProxyConfigController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045307 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ... 
2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067044 1 base_controller.go:73] Caches are synced for PayloadConfig 2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067095 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ... 2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067134 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController 2025-08-13T20:00:12.067210793+00:00 stderr F I0813 20:00:12.067139 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ... 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067449 1 base_controller.go:73] Caches are synced for MetadataController 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067480 1 base_controller.go:110] Starting #1 worker of MetadataController controller ... 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067502 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067506 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 
2025-08-13T20:00:12.429934526+00:00 stderr F I0813 20:00:12.428685 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: {"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"XyQxByO8dMbdAXGOFdhjWrxg0HELpVdxvAUV_2DuovP3EK6S_sdSNGYO1dWowvrD71Ii-BaK1iul4_iTDM-yaQ","operator.openshift.io/spec-hash":"ad10eacae3023cd0d2ee52348ecb0eeffb28a18838276096bbd0a036b94e4744"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"XyQxByO8dMbdAXGOFdhjWrxg0HELpVdxvAUV_2DuovP3EK6S_sdSNGYO1dWowvrD71Ii-BaK1iul4_iTDM-yaQ"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log 
\\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-config-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/confi
g/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"se
cretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:12.975490032+00:00 stderr F I0813 20:00:12.974998 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:13.015944106+00:00 stderr F I0813 20:00:13.015377 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.406758249+00:00 stderr F I0813 20:00:13.405089 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.415712555+00:00 stderr F I0813 20:00:13.414686 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:13.606415462+00:00 stderr F I0813 20:00:13.606353 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.609902272+00:00 stderr F E0813 20:00:13.606943 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.718737195+00:00 stderr F I0813 20:00:13.608178 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client 
\"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:13.718737195+00:00 stderr F I0813 20:00:13.718658 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.") 2025-08-13T20:00:13.843928115+00:00 stderr F E0813 20:00:13.843736 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.863371922+00:00 stderr F E0813 20:00:14.862982 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 
192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.908161409+00:00 stderr F E0813 20:00:14.904196 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.004283370+00:00 stderr F E0813 20:00:15.004205 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.173046952+00:00 stderr F E0813 20:00:15.156544 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.182964455+00:00 stderr F I0813 20:00:15.182923 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: 
deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:15.253002892+00:00 stderr F E0813 20:00:15.252132 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.388949248+00:00 stderr F E0813 20:00:15.386277 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.476292329+00:00 stderr F E0813 20:00:15.461625 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.572213484+00:00 stderr F I0813 20:00:15.568489 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: 
{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"UeLoNQgNyi4ek--YG05tkJpgr5YeX9hH-xOiyjIBYuZg66HSrnnna0O-xw2d6c90LgOJuApblDmeGo40yQBZ1g","operator.openshift.io/spec-hash":"f6f3b5299b2d9845581bd943317b8c67a8bf91da11360bafa61bc66ec3070d31"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"UeLoNQgNyi4ek--YG05tkJpgr5YeX9hH-xOiyjIBYuZg66HSrnnna0O-xw2d6c90LgOJuApblDmeGo40yQBZ1g"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log 
\\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-config-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/confi
g/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"se
cretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:16.069873294+00:00 stderr F E0813 20:00:16.069421 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.719182359+00:00 stderr F I0813 20:00:16.718490 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:16.724515991+00:00 stderr F I0813 20:00:16.721913 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:17.149870289+00:00 stderr F E0813 20:00:17.149315 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.165937637+00:00 stderr F I0813 20:00:17.165762 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.202066678+00:00 stderr F E0813 20:00:17.201641 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.497314967+00:00 stderr F I0813 20:00:17.497116 1 helpers.go:184] lister was stale at resourceVersion=29206, live get showed resourceVersion=29240 2025-08-13T20:00:17.500207359+00:00 stderr F I0813 20:00:17.500067 1 helpers.go:184] lister was stale at resourceVersion=29206, live get showed resourceVersion=29240 2025-08-13T20:00:17.534337442+00:00 stderr F I0813 20:00:17.533562 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 
4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.536315839+00:00 stderr F I0813 20:00:17.535193 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:17.668454376+00:00 stderr F E0813 20:00:17.667141 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.738651368+00:00 stderr F E0813 20:00:17.738582 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:17.765923386+00:00 stderr F E0813 20:00:17.765691 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.772682838+00:00 stderr F I0813 20:00:17.772240 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: 
deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.984901610+00:00 stderr F E0813 20:00:17.957458 1 base_controller.go:268] CustomRouteController reconciliation failed: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.998317872+00:00 stderr F I0813 20:00:17.996657 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not 
ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.027750481+00:00 stderr F I0813 20:00:18.001198 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is 
currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.005592 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.005978 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.016061 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.073065473+00:00 stderr F E0813 20:00:18.067757 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.225214142+00:00 stderr F E0813 20:00:18.223388 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:18.267422925+00:00 stderr F E0813 20:00:18.254323 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.267422925+00:00 stderr F I0813 20:00:18.254421 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.267422925+00:00 stderr F E0813 20:00:18.254661 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth 
service endpoints are not ready 2025-08-13T20:00:18.314470386+00:00 stderr F E0813 20:00:18.314079 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.510731402+00:00 stderr F E0813 20:00:18.439370 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.580644456+00:00 stderr F I0813 20:00:18.519024 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." 
to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:18.622977703+00:00 stderr F E0813 20:00:18.616420 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.055428173+00:00 stderr F E0813 20:00:19.029603 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.235422316+00:00 stderr F E0813 20:00:19.234686 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.242731974+00:00 stderr F I0813 20:00:19.242602 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service 
endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest generation","reason":"OAuthServerDeployment_PodsUpdating","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:19.300572264+00:00 stderr F E0813 20:00:19.300451 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.532105785+00:00 stderr F I0813 20:00:19.529928 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: 
deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest generation" 2025-08-13T20:00:19.538820187+00:00 stderr F E0813 20:00:19.537609 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.610996055+00:00 stderr F I0813 20:00:19.610297 1 request.go:697] Waited for 1.007136378s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:00:20.135667296+00:00 stderr F E0813 20:00:20.135228 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:20.144572859+00:00 stderr F I0813 20:00:20.142549 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest 
generation","reason":"OAuthServerDeployment_PodsUpdating","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:20.233140455+00:00 stderr F E0813 20:00:20.233035 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:20.295036100+00:00 stderr F I0813 20:00:20.286171 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service 
endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:21.188543427+00:00 stderr F I0813 20:00:21.184308 1 request.go:697] Waited for 1.122772645s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets/router-certs 2025-08-13T20:00:21.862728060+00:00 stderr F E0813 20:00:21.862242 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:22.463267694+00:00 stderr F E0813 20:00:22.463207 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:22.535823483+00:00 stderr F I0813 20:00:22.530093 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not 
ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:22.617541683+00:00 stderr F E0813 20:00:22.617487 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:22.687107217+00:00 stderr F I0813 20:00:22.685318 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") 2025-08-13T20:00:24.386693571+00:00 stderr F I0813 20:00:24.386152 1 request.go:697] Waited for 1.085380808s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:00:32.112004482+00:00 stderr F E0813 20:00:32.111258 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:35.710360804+00:00 stderr F I0813 20:00:35.669757 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/v4-0-config-user-idp-0-file-data -n openshift-authentication because it changed 2025-08-13T20:00:36.365259357+00:00 stderr F I0813 20:00:36.292967 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: 
{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"uomocd2xQvK4ihb6gzUgAMabCuvz4ifs3T98UV1yGZo0R1LHJARY4B-40ZukHyVSzZ-3pIoV4sdQo49M3ieZtA","operator.openshift.io/spec-hash":"797989bfafe87f49a19e3bfa11bf6d778cd3f9343ed99e2bad962d75542e95e1"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"uomocd2xQvK4ihb6gzUgAMabCuvz4ifs3T98UV1yGZo0R1LHJARY4B-40ZukHyVSzZ-3pIoV4sdQo49M3ieZtA"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log 
\\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-config-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/confi
g/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"se
cretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:36.400968906+00:00 stderr F E0813 20:00:36.333924 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:36.400968906+00:00 stderr F I0813 20:00:36.339919 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.580487184+00:00 stderr F I0813 20:00:36.555590 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.580487184+00:00 
stderr F I0813 20:00:36.556938 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF" 2025-08-13T20:00:36.652128637+00:00 stderr F I0813 20:00:36.651830 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:36.909954379+00:00 stderr F E0813 20:00:36.908986 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:36.912915663+00:00 stderr F E0813 20:00:36.910144 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:36.973100329+00:00 stderr F E0813 20:00:36.972602 1 base_controller.go:268] 
OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:36.986419359+00:00 stderr F I0813 20:00:36.983106 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.998673178+00:00 stderr F E0813 20:00:36.998162 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: 
Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:37.078261178+00:00 stderr F I0813 20:00:37.077288 1 helpers.go:184] lister was stale at resourceVersion=29830, live get showed resourceVersion=29845 2025-08-13T20:00:37.097267870+00:00 stderr F I0813 20:00:37.094501 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:37.469467033+00:00 stderr F E0813 20:00:37.465356 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:37.502009441+00:00 stderr F E0813 20:00:37.501224 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not 
ready 2025-08-13T20:00:37.509666649+00:00 stderr F I0813 20:00:37.508157 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.562600368+00:00 stderr F E0813 20:00:37.555952 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation 
failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:38.067596878+00:00 stderr F I0813 20:00:38.067325 1 request.go:697] Waited for 1.078996896s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/etcd-client 2025-08-13T20:00:38.324205215+00:00 stderr F I0813 20:00:38.324036 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.324214 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.324779 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325447 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:38.325396489 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325475 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 
12:35:04 +0000 UTC (now=2025-08-13 20:00:38.325461081 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325495 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:38.325482021 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325511 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:38.325500432 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325540 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325528573 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325556 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] 
issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325545743 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325573 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325560843 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325589 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325577764 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325607 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:38.325597164 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325639 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325615085 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325998 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:00:38.325975315 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.326321 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 20:00:38.326261023 +0000 UTC))" 2025-08-13T20:00:38.560458541+00:00 stderr F I0813 20:00:38.537772 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.") 2025-08-13T20:00:40.670992890+00:00 stderr F I0813 20:00:40.669437 1 request.go:697] Waited for 1.047166697s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/trusted-ca-bundle 2025-08-13T20:00:42.162988253+00:00 stderr F E0813 20:00:42.160729 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:42.637888014+00:00 stderr F E0813 20:00:42.637580 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:42.882033666+00:00 stderr F I0813 20:00:42.881970 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="9480932818b7cb1ebdca51bb28c5cce888164a71652918cc5344387b939314ae", new="fd200c56eea686a995edc75b6728718041d30aab916df9707d4d155e1d8cd60c") 2025-08-13T20:00:42.883879189+00:00 stderr F W0813 20:00:42.883854 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:00:42.884015782+00:00 stderr F I0813 20:00:42.883993 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="ae319b0e78a8985818f6dc7d0863a2c62f9b44ec34c1b60c870a0648a26e2f87", new="c29c68c5a2ab71f55f3d17abcfc7421d482cea9bbfd2fc3fccb363858ff20314") 2025-08-13T20:00:42.897758034+00:00 stderr F I0813 20:00:42.884538 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:00:42.897927709+00:00 stderr F I0813 20:00:42.888281 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:00:42.898027102+00:00 stderr F I0813 20:00:42.898004 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:00:42.898078293+00:00 stderr F I0813 20:00:42.898066 1 genericapiserver.go:603] "[graceful-termination] shutdown event" 
name="NotAcceptingNewRequest" 2025-08-13T20:00:42.898159456+00:00 stderr F I0813 20:00:42.888544 1 base_controller.go:172] Shutting down OAuthClientsController ... 2025-08-13T20:00:42.898188427+00:00 stderr F I0813 20:00:42.888571 1 base_controller.go:172] Shutting down MetadataController ... 2025-08-13T20:00:42.898216907+00:00 stderr F I0813 20:00:42.889263 1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ... 2025-08-13T20:00:42.898262839+00:00 stderr F I0813 20:00:42.898249 1 base_controller.go:104] All OAuthClientsController workers have been terminated 2025-08-13T20:00:42.898314980+00:00 stderr F I0813 20:00:42.898301 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:00:42.898352641+00:00 stderr F I0813 20:00:42.898341 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:00:42.898391672+00:00 stderr F I0813 20:00:42.898379 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:00:42.898450134+00:00 stderr F I0813 20:00:42.898437 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:00:42.898633219+00:00 stderr F I0813 20:00:42.898588 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:00:42.898738682+00:00 stderr F I0813 20:00:42.898666 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:00:42.898823935+00:00 stderr F I0813 20:00:42.898768 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:00:42.899380681+00:00 stderr F I0813 20:00:42.899277 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:42.899380681+00:00 stderr F I0813 20:00:42.899361 1 
requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:00:42.899434772+00:00 stderr F I0813 20:00:42.899395 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:42.899447402+00:00 stderr F I0813 20:00:42.899431 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:42.899701310+00:00 stderr F I0813 20:00:42.889374 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:00:42.899701310+00:00 stderr F E0813 20:00:42.890466 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": context canceled, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:00:42.899701310+00:00 stderr F I0813 20:00:42.899692 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.899701 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.890485 1 base_controller.go:172] Shutting down StatusSyncer_authentication ... 
2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.899713 1 base_controller.go:150] All StatusSyncer_authentication post start hooks have been terminated 2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890499 1 base_controller.go:172] Shutting down IngressNodesAvailableController ... 2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890510 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointsEndpointAccessibleController ... 2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890521 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointAccessibleController ... 2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890545 1 base_controller.go:172] Shutting down IngressStateController ... 2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890556 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890581 1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_OpenShiftAuthenticator ... 2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890590 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890607 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890637 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890660 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890689 1 base_controller.go:114] Shutting down worker of StatusSyncer_authentication controller ... 
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.899762 1 base_controller.go:104] All StatusSyncer_authentication workers have been terminated 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890695 1 base_controller.go:114] Shutting down worker of IngressNodesAvailableController controller ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.899776 1 base_controller.go:104] All IngressNodesAvailableController workers have been terminated 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890700 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904143 1 base_controller.go:104] All OAuthServerServiceEndpointsEndpointAccessibleController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890705 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointAccessibleController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904179 1 base_controller.go:104] All OAuthServerServiceEndpointAccessibleController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890711 1 base_controller.go:114] Shutting down worker of IngressStateController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904193 1 base_controller.go:104] All IngressStateController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890716 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904205 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890722 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904215 1 base_controller.go:104] All WebhookAuthenticatorCertApprover_OpenShiftAuthenticator workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890728 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904227 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890734 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904236 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890740 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904247 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890745 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904370 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890763 1 base_controller.go:172] Shutting down PayloadConfig ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.893541 1 base_controller.go:172] Shutting down ServiceCAController ... 
2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.894204 1 base_controller.go:268] ServiceCAController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904445 1 base_controller.go:114] Shutting down worker of ServiceCAController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904454 1 base_controller.go:104] All ServiceCAController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894222 1 base_controller.go:172] Shutting down ProxyConfigController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894237 1 base_controller.go:172] Shutting down CustomRouteController ... 2025-08-13T20:00:42.908890762+00:00 stderr F W0813 20:00:42.894272 1 base_controller.go:232] Updating status of "CustomRouteController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.904492 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894289 1 base_controller.go:172] Shutting down OAuthServerRouteEndpointAccessibleController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894307 1 base_controller.go:172] Shutting down WellKnownReadyController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894326 1 base_controller.go:172] Shutting down RouterCertsDomainValidationController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894549 1 base_controller.go:172] Shutting down OAuthServerWorkloadController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894655 1 base_controller.go:172] Shutting down EncryptionPruneController ... 
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894672 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894684 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894697 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894709 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894722 1 base_controller.go:172] Shutting down OAuthAPIServerControllerWorkloadController ... 2025-08-13T20:00:42.908890762+00:00 stderr F W0813 20:00:42.895173 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.905507 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905536 1 base_controller.go:114] Shutting down worker of RouterCertsDomainValidationController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905645 1 base_controller.go:114] Shutting down worker of CustomRouteController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905661 1 base_controller.go:104] All CustomRouteController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895222 1 base_controller.go:114] Shutting down worker of ProxyConfigController controller ... 
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905743 1 base_controller.go:104] All ProxyConfigController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905750 1 base_controller.go:104] All RouterCertsDomainValidationController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895239 1 base_controller.go:114] Shutting down worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905759 1 base_controller.go:104] All OAuthServerRouteEndpointAccessibleController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895248 1 base_controller.go:114] Shutting down worker of WellKnownReadyController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905932 1 base_controller.go:104] All WellKnownReadyController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895268 1 base_controller.go:172] Shutting down OpenshiftAuthenticationStaticResources ... 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.895405 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895421 1 base_controller.go:172] Shutting down OpenShiftAuthenticatorCertRequester ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895432 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895444 1 base_controller.go:172] Shutting down WebhookAuthenticatorController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895457 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895474 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.895518 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: the operator is shutting down, skipping updating conditions, err = failed to reconcile enabled APIs: Get "https://10.217.4.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.user.openshift.io": context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895534 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-oauth-apiserver ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895570 1 base_controller.go:114] Shutting down worker of MetadataController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906156 1 base_controller.go:104] All MetadataController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895579 1 base_controller.go:114] Shutting down worker of OAuthServerWorkloadController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906174 1 base_controller.go:104] All OAuthServerWorkloadController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895588 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906189 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895593 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906412 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895600 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr P I0813 20:00:42.906432 2025-08-13T20:00:42.908997645+00:00 stderr F 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895606 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906446 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895611 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906566 1 base_controller.go:114] Shutting down worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906573 1 base_controller.go:104] All OAuthAPIServerControllerWorkloadController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895623 1 base_controller.go:114] Shutting down worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906585 1 base_controller.go:104] All OpenShiftAuthenticatorCertRequester workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906662 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895630 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906683 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895636 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906694 1 base_controller.go:104] All WebhookAuthenticatorController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895642 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906752 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895651 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906762 1 base_controller.go:104] All NamespaceFinalizerController_openshift-oauth-apiserver workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895674 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895681 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906868 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895696 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907061 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907071 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895715 1 base_controller.go:172] Shutting down TrustDistributionController ... 2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.895746 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": context canceled 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907095 1 base_controller.go:114] Shutting down worker of TrustDistributionController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907100 1 base_controller.go:104] All TrustDistributionController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.896184 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": context canceled, "oauth-openshift/oauth-service.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-openshift/trust_distribution_role.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.896215 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.896228 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907345 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.897470 1 base_controller.go:268] PayloadConfig reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.900227 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907647 1 builder.go:329] server exited 2025-08-13T20:00:42.909276393+00:00 stderr F I0813 20:00:42.909246 1 base_controller.go:114] Shutting down worker of PayloadConfig controller ... 2025-08-13T20:00:42.909319824+00:00 stderr F I0813 20:00:42.909307 1 base_controller.go:104] All PayloadConfig workers have been terminated 2025-08-13T20:00:42.909401276+00:00 stderr F I0813 20:00:42.909339 1 base_controller.go:114] Shutting down worker of OpenshiftAuthenticationStaticResources controller ... 2025-08-13T20:00:42.909828228+00:00 stderr F I0813 20:00:42.909456 1 base_controller.go:104] All OpenshiftAuthenticationStaticResources workers have been terminated 2025-08-13T20:00:42.948115590+00:00 stderr F I0813 20:00:42.947976 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 
2025-08-13T20:00:42.948115590+00:00 stderr F I0813 20:00:42.948043 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:00:45.606345136+00:00 stderr F W0813 20:00:45.602018 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000644000175000017500000063265515133657716033173 0ustar zuulzuul2026-01-20T10:49:34.287208556+00:00 stdout F Copying system trust bundle 2026-01-20T10:49:35.241849303+00:00 stderr F W0120 10:49:35.241074 1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file or directory 2026-01-20T10:49:35.243338829+00:00 stderr F I0120 10:49:35.243294 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:35.252342493+00:00 stderr F I0120 10:49:35.250573 1 cmd.go:240] Using service-serving-cert provided certificates 2026-01-20T10:49:35.252342493+00:00 stderr F I0120 10:49:35.250637 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2026-01-20T10:49:35.256119208+00:00 stderr F I0120 10:49:35.254979 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:35.340127227+00:00 stderr F I0120 10:49:35.337340 1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161- 2026-01-20T10:49:35.340127227+00:00 stderr F I0120 10:49:35.338489 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:36.213641274+00:00 stderr F I0120 10:49:36.211767 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:49:36.223260477+00:00 stderr F I0120 10:49:36.223225 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:36.223260477+00:00 stderr F I0120 10:49:36.223245 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:49:36.223281138+00:00 stderr F I0120 10:49:36.223272 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:49:36.223281138+00:00 stderr F I0120 10:49:36.223277 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2026-01-20T10:49:36.232110147+00:00 stderr F I0120 10:49:36.231828 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:36.232110147+00:00 stderr F W0120 10:49:36.231849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:36.232110147+00:00 stderr F W0120 10:49:36.231855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2026-01-20T10:49:36.232110147+00:00 stderr F I0120 10:49:36.232091 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:49:36.239008796+00:00 stderr F I0120 10:49:36.238391 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:49:36.238331276 +0000 UTC))" 2026-01-20T10:49:36.239008796+00:00 stderr F I0120 10:49:36.238690 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906176\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906175\" (2026-01-20 09:49:35 +0000 UTC to 2027-01-20 09:49:35 +0000 UTC (now=2026-01-20 10:49:36.238658586 +0000 UTC))" 2026-01-20T10:49:36.239008796+00:00 stderr F I0120 10:49:36.238707 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:36.239008796+00:00 stderr F I0120 10:49:36.238726 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:49:36.239008796+00:00 stderr F I0120 10:49:36.238745 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:36.239008796+00:00 stderr F I0120 10:49:36.238757 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:36.241909554+00:00 stderr F I0120 10:49:36.240102 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 
2026-01-20T10:49:36.241909554+00:00 stderr F I0120 10:49:36.240441 1 leaderelection.go:250] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock... 2026-01-20T10:49:36.241909554+00:00 stderr F I0120 10:49:36.240738 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:36.241909554+00:00 stderr F I0120 10:49:36.240852 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:36.241909554+00:00 stderr F I0120 10:49:36.240865 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:36.241909554+00:00 stderr F I0120 10:49:36.241041 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:36.241909554+00:00 stderr F I0120 10:49:36.241588 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:36.241909554+00:00 stderr F I0120 10:49:36.241595 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:36.246626349+00:00 stderr F I0120 10:49:36.244942 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2026-01-20T10:49:36.246626349+00:00 stderr F I0120 10:49:36.245603 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2026-01-20T10:49:36.246626349+00:00 stderr F I0120 10:49:36.245697 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2026-01-20T10:49:36.351203823+00:00 stderr F I0120 10:49:36.351138 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:36.355093542+00:00 stderr F I0120 10:49:36.354187 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:36.389326915+00:00 stderr F I0120 10:49:36.354140 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:36.389683576+00:00 stderr F I0120 10:49:36.389659 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:36.389626044 +0000 UTC))" 2026-01-20T10:49:36.389733647+00:00 stderr F I0120 10:49:36.389716 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:36.389674515 +0000 UTC))" 2026-01-20T10:49:36.389758868+00:00 stderr F I0120 10:49:36.389742 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 
+0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:36.389725207 +0000 UTC))" 2026-01-20T10:49:36.389775088+00:00 stderr F I0120 10:49:36.389762 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:36.389751608 +0000 UTC))" 2026-01-20T10:49:36.389812849+00:00 stderr F I0120 10:49:36.389797 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:36.389785789 +0000 UTC))" 2026-01-20T10:49:36.389834120+00:00 stderr F I0120 10:49:36.389818 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:36.389805759 +0000 UTC))" 2026-01-20T10:49:36.389842800+00:00 stderr F I0120 10:49:36.389838 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" 
(2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:36.38982749 +0000 UTC))" 2026-01-20T10:49:36.389897042+00:00 stderr F I0120 10:49:36.389881 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:36.38984225 +0000 UTC))" 2026-01-20T10:49:36.389919863+00:00 stderr F I0120 10:49:36.389904 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:36.389892252 +0000 UTC))" 2026-01-20T10:49:36.389930713+00:00 stderr F I0120 10:49:36.389924 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:36.389914112 +0000 UTC))" 2026-01-20T10:49:36.390376276+00:00 stderr F I0120 10:49:36.390346 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] 
issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:49:36.390327855 +0000 UTC))" 2026-01-20T10:49:36.390777829+00:00 stderr F I0120 10:49:36.390753 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906176\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906175\" (2026-01-20 09:49:35 +0000 UTC to 2027-01-20 09:49:35 +0000 UTC (now=2026-01-20 10:49:36.390736238 +0000 UTC))" 2026-01-20T10:49:36.390988356+00:00 stderr F I0120 10:49:36.390946 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:36.390931894 +0000 UTC))" 2026-01-20T10:49:36.390988356+00:00 stderr F I0120 10:49:36.390968 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:36.390959225 +0000 UTC))" 2026-01-20T10:49:36.391003156+00:00 stderr F I0120 10:49:36.390984 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC 
(now=2026-01-20 10:49:36.390973225 +0000 UTC))" 2026-01-20T10:49:36.391012357+00:00 stderr F I0120 10:49:36.391004 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:36.390990726 +0000 UTC))" 2026-01-20T10:49:36.391239623+00:00 stderr F I0120 10:49:36.391209 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:36.391008876 +0000 UTC))" 2026-01-20T10:49:36.391254784+00:00 stderr F I0120 10:49:36.391240 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:36.391224113 +0000 UTC))" 2026-01-20T10:49:36.391262384+00:00 stderr F I0120 10:49:36.391256 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 
2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:36.391245734 +0000 UTC))" 2026-01-20T10:49:36.391281505+00:00 stderr F I0120 10:49:36.391274 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:36.391260944 +0000 UTC))" 2026-01-20T10:49:36.395091080+00:00 stderr F I0120 10:49:36.391292 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:36.391278475 +0000 UTC))" 2026-01-20T10:49:36.395091080+00:00 stderr F I0120 10:49:36.394360 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:36.394273696 +0000 UTC))" 2026-01-20T10:49:36.395091080+00:00 stderr F I0120 10:49:36.394408 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 
UTC (now=2026-01-20 10:49:36.394386269 +0000 UTC))" 2026-01-20T10:49:36.404748705+00:00 stderr F I0120 10:49:36.403489 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:49:36.403414834 +0000 UTC))" 2026-01-20T10:49:36.404748705+00:00 stderr F I0120 10:49:36.403821 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906176\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906175\" (2026-01-20 09:49:35 +0000 UTC to 2027-01-20 09:49:35 +0000 UTC (now=2026-01-20 10:49:36.403795005 +0000 UTC))" 2026-01-20T10:55:39.601972097+00:00 stderr F I0120 10:55:39.600999 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock 2026-01-20T10:55:39.602052719+00:00 stderr F I0120 10:55:39.601139 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42254", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_254519a5-7cb8-432b-bf54-b46044e86f0d became leader 2026-01-20T10:55:39.705603230+00:00 stderr F I0120 10:55:39.705512 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:55:39.705913058+00:00 stderr F I0120 10:55:39.705877 1 base_controller.go:67] Waiting for caches to sync for 
OAuthServerRouteEndpointAccessibleController 2026-01-20T10:55:39.706192125+00:00 stderr F I0120 10:55:39.705874 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController 2026-01-20T10:55:39.706290298+00:00 stderr F I0120 10:55:39.706253 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController 2026-01-20T10:55:39.706370560+00:00 stderr F I0120 10:55:39.706310 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController 2026-01-20T10:55:39.706370560+00:00 stderr F I0120 10:55:39.706353 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController 2026-01-20T10:55:39.706408191+00:00 stderr F I0120 10:55:39.706380 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController 2026-01-20T10:55:39.706447712+00:00 stderr F I0120 10:55:39.706427 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController 2026-01-20T10:55:39.706497023+00:00 stderr F I0120 10:55:39.706472 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController 2026-01-20T10:55:39.706535304+00:00 stderr F I0120 10:55:39.706515 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2026-01-20T10:55:39.706592596+00:00 stderr F I0120 10:55:39.706562 1 base_controller.go:67] Waiting for caches to sync for IngressStateController 2026-01-20T10:55:39.706645027+00:00 stderr F I0120 10:55:39.706610 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester 2026-01-20T10:55:39.706671448+00:00 stderr F I0120 10:55:39.706654 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2026-01-20T10:55:39.706714079+00:00 stderr F I0120 10:55:39.706693 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController 2026-01-20T10:55:39.706755270+00:00 stderr F I0120 10:55:39.706729 1 base_controller.go:67] Waiting for 
caches to sync for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2026-01-20T10:55:39.707373536+00:00 stderr F I0120 10:55:39.707332 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController 2026-01-20T10:55:39.707433508+00:00 stderr F I0120 10:55:39.707392 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:55:39.707487769+00:00 stderr F I0120 10:55:39.707442 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2026-01-20T10:55:39.707618383+00:00 stderr F I0120 10:55:39.707566 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2026-01-20T10:55:39.707665094+00:00 stderr F I0120 10:55:39.707641 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2026-01-20T10:55:39.708229920+00:00 stderr F I0120 10:55:39.708198 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708613 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708658 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708681 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708555 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708701 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708710 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708752 1 
base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708716 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.708802 1 base_controller.go:67] Waiting for caches to sync for MetadataController 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.707590 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2026-01-20T10:55:39.710710515+00:00 stderr F I0120 10:55:39.709220 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.711445 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.712014 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.712499 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.713345 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.713386 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.713409 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.713442 1 base_controller.go:67] Waiting for caches to sync for PayloadConfig 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.713453 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.713468 1 
base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2026-01-20T10:55:39.714273381+00:00 stderr F I0120 10:55:39.713479 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources 2026-01-20T10:55:39.715403130+00:00 stderr F I0120 10:55:39.715353 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.715542114+00:00 stderr F I0120 10:55:39.715496 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.715901834+00:00 stderr F I0120 10:55:39.715840 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:39.716262133+00:00 stderr F I0120 10:55:39.716212 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:39.716336655+00:00 stderr F I0120 10:55:39.716282 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:39.717164908+00:00 stderr F I0120 10:55:39.717119 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.724449801+00:00 stderr F I0120 10:55:39.724324 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.728280894+00:00 stderr F I0120 10:55:39.728197 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.730244066+00:00 stderr F I0120 10:55:39.730208 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.730946525+00:00 stderr F I0120 10:55:39.730310 1 reflector.go:351] Caches populated for *v1.Console from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:39.731217702+00:00 stderr F I0120 10:55:39.730362 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125 2026-01-20T10:55:39.731217702+00:00 stderr F I0120 10:55:39.730610 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.731776167+00:00 stderr F I0120 10:55:39.731634 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2026-01-20T10:55:39.731776167+00:00 stderr F I0120 10:55:39.731703 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:55:39.731923331+00:00 stderr F I0120 10:55:39.731860 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.732085385+00:00 stderr F I0120 10:55:39.732047 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.733528404+00:00 stderr F I0120 10:55:39.733502 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2026-01-20T10:55:39.733813692+00:00 stderr F I0120 10:55:39.733778 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.733906534+00:00 stderr F I0120 10:55:39.733881 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:39.734606842+00:00 stderr F I0120 10:55:39.734590 1 reflector.go:351] Caches populated for *v1.Authentication from 
github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2026-01-20T10:55:39.734830008+00:00 stderr F I0120 10:55:39.734799 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.737084099+00:00 stderr F I0120 10:55:39.737023 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2026-01-20T10:55:39.737279264+00:00 stderr F I0120 10:55:39.737237 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.737664764+00:00 stderr F I0120 10:55:39.737620 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:39.739543414+00:00 stderr F I0120 10:55:39.739514 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:39.765791534+00:00 stderr F I0120 10:55:39.765707 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:39.805714188+00:00 stderr F I0120 10:55:39.805634 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:55:39.805714188+00:00 stderr F I0120 10:55:39.805667 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2026-01-20T10:55:39.806893090+00:00 stderr F I0120 10:55:39.806856 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2026-01-20T10:55:39.806893090+00:00 stderr F I0120 10:55:39.806875 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 
2026-01-20T10:55:39.806933511+00:00 stderr F I0120 10:55:39.806918 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2026-01-20T10:55:39.806943721+00:00 stderr F I0120 10:55:39.806932 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2026-01-20T10:55:39.806955391+00:00 stderr F I0120 10:55:39.806949 1 base_controller.go:73] Caches are synced for IngressStateController 2026-01-20T10:55:39.806963112+00:00 stderr F I0120 10:55:39.806955 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ... 2026-01-20T10:55:39.807750432+00:00 stderr F I0120 10:55:39.807724 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2026-01-20T10:55:39.807750432+00:00 stderr F I0120 10:55:39.807738 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2026-01-20T10:55:39.809311964+00:00 stderr F I0120 10:55:39.809281 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication 2026-01-20T10:55:39.809311964+00:00 stderr F I0120 10:55:39.809299 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ... 2026-01-20T10:55:39.809371206+00:00 stderr F I0120 10:55:39.809326 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2026-01-20T10:55:39.809371206+00:00 stderr F I0120 10:55:39.809352 1 base_controller.go:73] Caches are synced for MetadataController 2026-01-20T10:55:39.809371206+00:00 stderr F I0120 10:55:39.809361 1 base_controller.go:110] Starting #1 worker of MetadataController controller ... 2026-01-20T10:55:39.809386536+00:00 stderr F I0120 10:55:39.809360 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 
2026-01-20T10:55:39.810914086+00:00 stderr F I0120 10:55:39.810892 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:39.813758932+00:00 stderr F I0120 10:55:39.813706 1 base_controller.go:73] Caches are synced for ManagementStateController 2026-01-20T10:55:39.813758932+00:00 stderr F I0120 10:55:39.813744 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2026-01-20T10:55:39.813780083+00:00 stderr F I0120 10:55:39.813761 1 base_controller.go:73] Caches are synced for OAuthClientsController 2026-01-20T10:55:39.813780083+00:00 stderr F I0120 10:55:39.813772 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2026-01-20T10:55:39.887719105+00:00 stderr F I0120 10:55:39.887632 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:40.010960350+00:00 stderr F I0120 10:55:40.010865 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:40.089089702+00:00 stderr F I0120 10:55:40.088984 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:40.310483364+00:00 stderr F I0120 10:55:40.310416 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:40.407045208+00:00 stderr F I0120 10:55:40.406956 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController 2026-01-20T10:55:40.407045208+00:00 stderr F I0120 10:55:40.406990 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ... 
2026-01-20T10:55:40.489133456+00:00 stderr F I0120 10:55:40.489043 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:40.507465775+00:00 stderr F I0120 10:55:40.507358 1 base_controller.go:73] Caches are synced for TrustDistributionController 2026-01-20T10:55:40.507465775+00:00 stderr F I0120 10:55:40.507416 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ... 2026-01-20T10:55:40.514130672+00:00 stderr F I0120 10:55:40.514042 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController 2026-01-20T10:55:40.514130672+00:00 stderr F I0120 10:55:40.514101 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ... 2026-01-20T10:55:40.687370550+00:00 stderr F I0120 10:55:40.687286 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:40.885288497+00:00 stderr F I0120 10:55:40.885129 1 request.go:697] Waited for 1.178566508s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/default/endpoints?limit=500&resourceVersion=0 2026-01-20T10:55:40.887951318+00:00 stderr F I0120 10:55:40.887804 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:41.087539298+00:00 stderr F I0120 10:55:41.087386 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:41.107042878+00:00 stderr F I0120 10:55:41.106973 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController 2026-01-20T10:55:41.107264293+00:00 stderr F I0120 10:55:41.107224 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ... 
2026-01-20T10:55:41.287703163+00:00 stderr F I0120 10:55:41.287617 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:41.493862989+00:00 stderr F I0120 10:55:41.493748 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:41.508086208+00:00 stderr F I0120 10:55:41.508016 1 base_controller.go:73] Caches are synced for ProxyConfigController 2026-01-20T10:55:41.508114449+00:00 stderr F I0120 10:55:41.508100 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ... 2026-01-20T10:55:41.687741827+00:00 stderr F I0120 10:55:41.687641 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:41.885414287+00:00 stderr F I0120 10:55:41.885295 1 request.go:697] Waited for 2.178148744s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps?limit=500&resourceVersion=0 2026-01-20T10:55:41.891270043+00:00 stderr F I0120 10:55:41.891205 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:42.098971319+00:00 stderr F I0120 10:55:42.098861 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:42.107442995+00:00 stderr F I0120 10:55:42.107378 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointAccessibleController 2026-01-20T10:55:42.107442995+00:00 stderr F I0120 10:55:42.107396 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ... 
2026-01-20T10:55:42.107525007+00:00 stderr F I0120 10:55:42.107466 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController 2026-01-20T10:55:42.107525007+00:00 stderr F I0120 10:55:42.107509 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2026-01-20T10:55:42.114694479+00:00 stderr F I0120 10:55:42.114611 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController 2026-01-20T10:55:42.114694479+00:00 stderr F I0120 10:55:42.114624 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ... 2026-01-20T10:55:42.114873714+00:00 stderr F I0120 10:55:42.114816 1 base_controller.go:73] Caches are synced for PayloadConfig 2026-01-20T10:55:42.114873714+00:00 stderr F I0120 10:55:42.114828 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources 2026-01-20T10:55:42.114873714+00:00 stderr F I0120 10:55:42.114843 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ... 2026-01-20T10:55:42.114873714+00:00 stderr F I0120 10:55:42.114830 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ... 2026-01-20T10:55:42.114960106+00:00 stderr F I0120 10:55:42.114893 1 base_controller.go:73] Caches are synced for ServiceCAController 2026-01-20T10:55:42.114960106+00:00 stderr F I0120 10:55:42.114947 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ... 2026-01-20T10:55:42.288034469+00:00 stderr F I0120 10:55:42.287917 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:42.306733067+00:00 stderr F I0120 10:55:42.306641 1 base_controller.go:73] Caches are synced for WellKnownReadyController 2026-01-20T10:55:42.306733067+00:00 stderr F I0120 10:55:42.306686 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ... 
2026-01-20T10:55:42.494673477+00:00 stderr F I0120 10:55:42.494573 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:42.506767620+00:00 stderr F I0120 10:55:42.506702 1 base_controller.go:73] Caches are synced for CustomRouteController 2026-01-20T10:55:42.506767620+00:00 stderr F I0120 10:55:42.506722 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ... 2026-01-20T10:55:42.510358796+00:00 stderr F I0120 10:55:42.510277 1 base_controller.go:73] Caches are synced for ConfigObserver 2026-01-20T10:55:42.510358796+00:00 stderr F I0120 10:55:42.510324 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2026-01-20T10:55:42.688754212+00:00 stderr F I0120 10:55:42.688646 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:42.888481425+00:00 stderr F I0120 10:55:42.888386 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:42.908495289+00:00 stderr F I0120 10:55:42.908437 1 base_controller.go:73] Caches are synced for auditPolicyController 2026-01-20T10:55:42.908535640+00:00 stderr F I0120 10:55:42.908522 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2026-01-20T10:55:42.908735455+00:00 stderr F I0120 10:55:42.908669 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2026-01-20T10:55:43.085315352+00:00 stderr F I0120 10:55:43.085201 1 request.go:697] Waited for 3.376462566s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces?limit=500&resourceVersion=0 2026-01-20T10:55:43.090300475+00:00 stderr F I0120 10:55:43.090240 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:43.287227875+00:00 stderr F I0120 10:55:43.287145 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:43.488983513+00:00 stderr F I0120 10:55:43.488879 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:43.687412673+00:00 stderr F I0120 10:55:43.687331 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:43.709969884+00:00 stderr F I0120 10:55:43.709899 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2026-01-20T10:55:43.709969884+00:00 stderr F I0120 10:55:43.709924 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 
2026-01-20T10:55:43.710003715+00:00 stderr F I0120 10:55:43.709981 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2026-01-20T10:55:43.889112580+00:00 stderr F I0120 10:55:43.888970 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:43.907164390+00:00 stderr F I0120 10:55:43.906972 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController 2026-01-20T10:55:43.907164390+00:00 stderr F I0120 10:55:43.907015 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ... 2026-01-20T10:55:43.907164390+00:00 stderr F I0120 10:55:43.907049 1 base_controller.go:73] Caches are synced for ConfigObserver 2026-01-20T10:55:43.907164390+00:00 stderr F I0120 10:55:43.907122 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2026-01-20T10:55:43.907700046+00:00 stderr F I0120 10:55:43.907591 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:55:43.907700046+00:00 stderr F I0120 10:55:43.907657 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2026-01-20T10:55:43.907957622+00:00 stderr F I0120 10:55:43.907886 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester 2026-01-20T10:55:43.907957622+00:00 stderr F I0120 10:55:43.907912 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ... 
2026-01-20T10:55:43.909728589+00:00 stderr F I0120 10:55:43.909654 1 base_controller.go:73] Caches are synced for RevisionController 2026-01-20T10:55:43.909728589+00:00 stderr F I0120 10:55:43.909700 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2026-01-20T10:55:44.088189196+00:00 stderr F I0120 10:55:44.088124 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:44.107837050+00:00 stderr F I0120 10:55:44.107751 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2026-01-20T10:55:44.107837050+00:00 stderr F I0120 10:55:44.107787 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController 2026-01-20T10:55:44.107837050+00:00 stderr F I0120 10:55:44.107801 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ... 2026-01-20T10:55:44.107837050+00:00 stderr F I0120 10:55:44.107788 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2026-01-20T10:55:44.109137445+00:00 stderr F I0120 10:55:44.109094 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2026-01-20T10:55:44.109137445+00:00 stderr F I0120 10:55:44.109113 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2026-01-20T10:55:44.109137445+00:00 stderr F I0120 10:55:44.109124 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2026-01-20T10:55:44.109137445+00:00 stderr F I0120 10:55:44.109131 1 base_controller.go:73] Caches are synced for EncryptionStateController 2026-01-20T10:55:44.109178556+00:00 stderr F I0120 10:55:44.109135 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2026-01-20T10:55:44.109178556+00:00 stderr F I0120 10:55:44.109136 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 
2026-01-20T10:55:44.109449684+00:00 stderr F I0120 10:55:44.109424 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2026-01-20T10:55:44.109449684+00:00 stderr F I0120 10:55:44.109438 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2026-01-20T10:55:44.109465414+00:00 stderr F I0120 10:55:44.109454 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver 2026-01-20T10:55:44.109475734+00:00 stderr F I0120 10:55:44.109462 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2026-01-20T10:55:44.109499745+00:00 stderr F I0120 10:55:44.109477 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2026-01-20T10:55:44.109499745+00:00 stderr F I0120 10:55:44.109483 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2026-01-20T10:55:44.288609860+00:00 stderr F I0120 10:55:44.288509 1 request.go:697] Waited for 4.481398492s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift 2026-01-20T10:55:45.485029681+00:00 stderr F I0120 10:55:45.484559 1 request.go:697] Waited for 3.369289294s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca 2026-01-20T10:55:46.485565172+00:00 stderr F I0120 10:55:46.485044 1 request.go:697] Waited for 2.37537017s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver 2026-01-20T10:55:47.684559084+00:00 stderr F I0120 10:55:47.684450 1 request.go:697] Waited for 1.814790177s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107046 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.106974995 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107695 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.107682314 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107715 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.107701125 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107732 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 
13:05:20 +0000 UTC (now=2026-01-20 10:56:07.107721345 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107752 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107739196 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107770 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107757666 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107786 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107774277 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107801 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 
UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107791097 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107816 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.107806207 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107835 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.107825668 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107855 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.107841428 +0000 UTC))" 2026-01-20T10:56:07.107922121+00:00 stderr F I0120 10:56:07.107870 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 
+0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107859779 +0000 UTC))" 2026-01-20T10:56:07.108222239+00:00 stderr F I0120 10:56:07.108185 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:56:07.108170967 +0000 UTC))" 2026-01-20T10:56:07.108490967+00:00 stderr F I0120 10:56:07.108462 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906176\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906175\" (2026-01-20 09:49:35 +0000 UTC to 2027-01-20 09:49:35 +0000 UTC (now=2026-01-20 10:56:07.108427035 +0000 UTC))" 2026-01-20T10:57:07.874612198+00:00 stderr F I0120 10:57:07.872163 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:07Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, 
CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:07.883322839+00:00 stderr F E0120 10:57:07.883261 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.884435108+00:00 stderr F E0120 10:57:07.878832 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.889897542+00:00 stderr F I0120 10:57:07.889858 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:07Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:07.895620633+00:00 stderr F E0120 10:57:07.895543 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.895620633+00:00 stderr F E0120 10:57:07.895595 1 base_controller.go:268] RevisionController reconciliation failed: Get 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.895863411+00:00 stderr F E0120 10:57:07.895713 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.896992200+00:00 stderr F E0120 10:57:07.896953 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.909535981+00:00 stderr F I0120 10:57:07.909453 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:07Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:07.910416025+00:00 stderr F W0120 10:57:07.910049 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.910416025+00:00 stderr F E0120 10:57:07.910126 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.914353389+00:00 stderr F E0120 10:57:07.914307 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.935924240+00:00 stderr F I0120 10:57:07.935847 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:07Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:07.939522725+00:00 stderr F E0120 10:57:07.938631 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:07.980249762+00:00 stderr F I0120 10:57:07.980170 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:07Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:07.981568197+00:00 stderr F E0120 10:57:07.981517 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:08.062651601+00:00 stderr F I0120 10:57:08.062575 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:08Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:08.064125350+00:00 stderr F E0120 10:57:08.064020 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:08.225007125+00:00 stderr F I0120 10:57:08.224939 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:08Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), 
Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:08.227587213+00:00 stderr F E0120 10:57:08.227548 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:08.233413787+00:00 stderr F E0120 10:57:08.233373 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:08.434655809+00:00 stderr F E0120 10:57:08.434581 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:08.548893240+00:00 stderr F I0120 10:57:08.548768 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:08Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:08.550768809+00:00 stderr F E0120 10:57:08.550657 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:08.633871277+00:00 stderr F E0120 10:57:08.633758 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:09.033334551+00:00 stderr F I0120 10:57:09.033224 1 request.go:697] Waited for 1.112921082s due to client-side throttling, not priority and fairness, request: 
PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:09.036011252+00:00 stderr F E0120 10:57:09.035940 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:09.192355756+00:00 stderr F I0120 10:57:09.192245 1 status_controller.go:218] 
clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:09Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:09.194367289+00:00 stderr F E0120 10:57:09.194299 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:09.234538121+00:00 stderr F E0120 10:57:09.234411 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:09.434617983+00:00 stderr F E0120 10:57:09.434451 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:09.635012552+00:00 stderr F E0120 10:57:09.634859 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:09.737930544+00:00 stderr F I0120 10:57:09.737816 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:09Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver 
that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:09.741169209+00:00 stderr F E0120 10:57:09.741112 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:10.053177641+00:00 stderr F I0120 10:57:10.052631 1 request.go:697] Waited for 1.216538101s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:10.054046114+00:00 stderr F W0120 10:57:10.053997 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:10.054046114+00:00 stderr F E0120 10:57:10.054041 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:10.235507123+00:00 stderr F E0120 10:57:10.235151 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:10.476351382+00:00 stderr F I0120 10:57:10.476232 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:10Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:10.481535809+00:00 stderr F E0120 10:57:10.481441 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:10.634925145+00:00 stderr F E0120 10:57:10.634819 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:10.833254000+00:00 stderr F E0120 10:57:10.833163 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:11.034777119+00:00 stderr F E0120 10:57:11.034702 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:11.232428647+00:00 stderr F I0120 10:57:11.232366 1 request.go:697] Waited for 1.105582608s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:11.233077954+00:00 stderr F E0120 10:57:11.233024 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:11.634441658+00:00 stderr F E0120 10:57:11.634361 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:11.835279949+00:00 stderr F E0120 10:57:11.835168 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:12.034450486+00:00 stderr F E0120 10:57:12.034353 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:12.232460733+00:00 stderr F I0120 10:57:12.232378 1 request.go:697] Waited for 1.158484737s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:12.234275970+00:00 stderr F E0120 10:57:12.234213 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:12.435601645+00:00 stderr F W0120 10:57:12.435460 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:12.435601645+00:00 stderr F E0120 10:57:12.435558 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:13.033142827+00:00 stderr F E0120 10:57:13.033024 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:13.232906300+00:00 stderr F I0120 10:57:13.232847 1 request.go:697] Waited for 1.159921185s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:13.233651869+00:00 stderr F E0120 10:57:13.233634 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:13.434018149+00:00 stderr F E0120 10:57:13.433946 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:13.633986447+00:00 stderr F E0120 10:57:13.633889 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:14.033752979+00:00 stderr F E0120 10:57:14.033683 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:14.234101237+00:00 stderr F W0120 10:57:14.233467 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:14.234101237+00:00 stderr F E0120 10:57:14.234035 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 
3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:14.433161321+00:00 stderr F I0120 10:57:14.433057 1 request.go:697] Waited for 1.3998691s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2026-01-20T10:57:14.434693332+00:00 stderr F E0120 10:57:14.434599 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:14.634418443+00:00 stderr F E0120 10:57:14.634337 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:14.834160595+00:00 stderr F E0120 10:57:14.834089 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.033922059+00:00 stderr F E0120 10:57:15.033849 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.433278489+00:00 stderr F I0120 10:57:15.433170 1 request.go:697] Waited for 1.707723331s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:15.435017846+00:00 stderr F E0120 10:57:15.434952 1 base_controller.go:268] APIServiceController_openshift-apiserver 
reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.602969127+00:00 stderr F I0120 10:57:15.602820 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:15Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:15.604742903+00:00 stderr F E0120 10:57:15.604641 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.634683255+00:00 stderr F W0120 10:57:15.634580 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.634683255+00:00 stderr F E0120 10:57:15.634628 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.834431248+00:00 stderr F E0120 10:57:15.834350 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:16.034749415+00:00 stderr F W0120 10:57:16.034636 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.034749415+00:00 stderr F E0120 10:57:16.034713 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", 
APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:16.233340207+00:00 stderr F E0120 10:57:16.233253 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.433393717+00:00 stderr F I0120 10:57:16.433295 1 request.go:697] Waited for 1.962596431s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2026-01-20T10:57:16.633512920+00:00 stderr F E0120 10:57:16.633430 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.834559486+00:00 stderr F E0120 10:57:16.834479 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:17.033580169+00:00 stderr F E0120 10:57:17.033496 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:17.234543554+00:00 stderr F E0120 10:57:17.234436 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:17.434782510+00:00 stderr F E0120 10:57:17.434690 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:17.632950720+00:00 stderr F I0120 10:57:17.632873 1 request.go:697] Waited for 1.960856855s due to client-side throttling, not priority and fairness, request: 
PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:17.833725410+00:00 stderr F W0120 10:57:17.833613 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:17.833725410+00:00 stderr F E0120 10:57:17.833681 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:18.033416450+00:00 stderr F E0120 10:57:18.033353 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:18.233856952+00:00 stderr F E0120 10:57:18.233789 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:18.433511931+00:00 stderr F E0120 10:57:18.433453 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:18.633144700+00:00 stderr F I0120 10:57:18.633045 1 request.go:697] Waited for 1.996619971s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 
2026-01-20T10:57:18.634584149+00:00 stderr F E0120 10:57:18.634536 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:19.071462372+00:00 stderr F E0120 10:57:19.071390 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:19.234183545+00:00 stderr F E0120 10:57:19.233741 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:19.433524817+00:00 stderr F E0120 10:57:19.433442 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:19.633724912+00:00 stderr F W0120 10:57:19.633590 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:19.633724912+00:00 stderr F E0120 10:57:19.633635 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:19.832892209+00:00 stderr F I0120 10:57:19.832801 1 request.go:697] Waited for 1.977847455s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:19.833923456+00:00 stderr F W0120 10:57:19.833870 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:19.834021158+00:00 stderr F E0120 10:57:19.833995 1 base_controller.go:268] WellKnownReadyController 
reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:20.033806471+00:00 stderr F E0120 10:57:20.033727 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:20.233810541+00:00 stderr F E0120 10:57:20.233465 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:20.634397555+00:00 stderr F E0120 10:57:20.634278 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:20.833409878+00:00 stderr F I0120 10:57:20.833338 1 request.go:697] Waited for 1.998488541s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:20.834395333+00:00 stderr F E0120 10:57:20.834356 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:21.034076104+00:00 stderr F E0120 10:57:21.033981 1 base_controller.go:268] APIServerStaticResources 
reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:21.234529225+00:00 stderr F E0120 10:57:21.234455 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2026-01-20T10:57:21.434537684+00:00 stderr F E0120 10:57:21.434468 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:21.833778273+00:00 stderr F W0120 10:57:21.833699 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:21.833778273+00:00 stderr F E0120 10:57:21.833742 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:22.032698583+00:00 stderr F I0120 10:57:22.032612 1 request.go:697] Waited for 1.674285067s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 
2026-01-20T10:57:22.033922615+00:00 stderr F E0120 10:57:22.033858 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:22.234576802+00:00 stderr F E0120 10:57:22.234473 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:22.434011936+00:00 stderr F E0120 10:57:22.433946 1 
base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:22.634449657+00:00 stderr F E0120 10:57:22.634388 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:23.038177603+00:00 stderr F E0120 10:57:23.037280 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:23.232711617+00:00 stderr F I0120 10:57:23.232313 1 request.go:697] Waited for 1.954723183s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:23.233702334+00:00 stderr F E0120 10:57:23.233641 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:23.437787762+00:00 stderr F E0120 10:57:23.435399 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:23.635536901+00:00 stderr F W0120 10:57:23.634342 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:23.635536901+00:00 stderr F E0120 10:57:23.634397 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:23.833942508+00:00 stderr F W0120 10:57:23.833875 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:23.834036940+00:00 stderr F E0120 10:57:23.834023 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:24.041957328+00:00 stderr F E0120 10:57:24.040690 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:24.232854557+00:00 stderr F I0120 10:57:24.232778 1 request.go:697] Waited for 1.997019792s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2026-01-20T10:57:24.433737109+00:00 stderr F E0120 10:57:24.433630 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.634330604+00:00 stderr F E0120 10:57:24.634132 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.833795559+00:00 stderr F E0120 10:57:24.833645 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:25.039117039+00:00 stderr F E0120 10:57:25.037249 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.233020126+00:00 stderr F I0120 10:57:25.232945 1 request.go:697] 
Waited for 1.715669751s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:25.234494476+00:00 stderr F E0120 10:57:25.234445 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.634646138+00:00 stderr F W0120 10:57:25.634543 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.634646138+00:00 stderr F E0120 10:57:25.634609 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:25.833890407+00:00 stderr F E0120 10:57:25.833804 1 base_controller.go:268] 
OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:25.845767641+00:00 stderr F I0120 10:57:25.845698 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:25Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:25.846570962+00:00 stderr F E0120 10:57:25.846523 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.034209784+00:00 stderr F E0120 10:57:26.034126 1 base_controller.go:268] PayloadConfig reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.233891686+00:00 stderr F E0120 10:57:26.233813 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.433007651+00:00 stderr F I0120 10:57:26.432597 1 request.go:697] Waited for 1.798298527s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2026-01-20T10:57:26.639420969+00:00 stderr F E0120 10:57:26.639340 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:26.834119408+00:00 stderr F E0120 10:57:26.833690 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:27.034379034+00:00 stderr F E0120 10:57:27.034287 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:27.234345523+00:00 stderr F W0120 10:57:27.234265 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:27.234345523+00:00 stderr F E0120 10:57:27.234319 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:27.433778117+00:00 stderr F I0120 10:57:27.433690 1 request.go:697] Waited for 1.797889956s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 
2026-01-20T10:57:27.435768040+00:00 stderr F W0120 10:57:27.435656 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:27.435768040+00:00 stderr F E0120 10:57:27.435710 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:27.633460357+00:00 stderr F E0120 10:57:27.633410 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:28.033254290+00:00 stderr F E0120 10:57:28.033178 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:28.234357619+00:00 stderr F E0120 10:57:28.234295 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:28.433709420+00:00 stderr F E0120 10:57:28.433632 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:28.632827226+00:00 stderr F I0120 10:57:28.632756 1 request.go:697] Waited for 1.796477589s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:28.633932365+00:00 stderr F E0120 10:57:28.633881 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.033980244+00:00 stderr F E0120 10:57:29.033877 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.238767960+00:00 stderr F W0120 10:57:29.238593 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.238767960+00:00 stderr F E0120 10:57:29.238743 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:29.434372073+00:00 stderr F E0120 10:57:29.434311 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: 
["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:29.634414624+00:00 stderr F E0120 10:57:29.634295 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.832920512+00:00 stderr F I0120 10:57:29.832370 1 request.go:697] Waited for 1.597921067s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2026-01-20T10:57:30.035632394+00:00 stderr F E0120 10:57:30.035551 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:30.234025971+00:00 stderr F W0120 10:57:30.233917 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:30.234025971+00:00 stderr F E0120 10:57:30.233981 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:30.434457321+00:00 stderr F E0120 10:57:30.434373 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:30.633971337+00:00 stderr F W0120 10:57:30.633823 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:30.633971337+00:00 stderr F E0120 10:57:30.633912 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", 
Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:30.832917658+00:00 stderr F I0120 10:57:30.832797 1 request.go:697] Waited for 1.196346727s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2026-01-20T10:57:31.033701698+00:00 stderr F E0120 10:57:31.033630 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:31.234176450+00:00 stderr F E0120 10:57:31.234090 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:31.434814155+00:00 stderr F E0120 10:57:31.434707 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:31.833707185+00:00 stderr F E0120 10:57:31.833611 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:32.033381605+00:00 stderr F I0120 10:57:32.033305 1 request.go:697] Waited for 1.397779105s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:32.034255778+00:00 stderr F W0120 10:57:32.034189 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.034283218+00:00 stderr F E0120 10:57:32.034244 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", 
UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:32.233894767+00:00 stderr F E0120 10:57:32.233816 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.634356027+00:00 stderr F E0120 10:57:32.634279 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.834757567+00:00 stderr F W0120 10:57:32.834671 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.834757567+00:00 stderr F E0120 10:57:32.834727 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.233815800+00:00 stderr F E0120 10:57:33.233730 1 
base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.434379175+00:00 stderr F E0120 10:57:33.434279 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.634486856+00:00 stderr F E0120 10:57:33.634413 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.033159160+00:00 stderr F I0120 10:57:34.033050 1 request.go:697] Waited for 1.115593253s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2026-01-20T10:57:34.034450324+00:00 stderr F E0120 10:57:34.034390 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.234511825+00:00 stderr F E0120 10:57:34.233872 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.435355876+00:00 stderr F W0120 10:57:34.435222 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.435355876+00:00 stderr F E0120 10:57:34.435282 1 base_controller.go:268] 
WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:34.834828400+00:00 stderr F E0120 10:57:34.834516 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.033715760+00:00 stderr F W0120 10:57:35.033618 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.033715760+00:00 stderr F E0120 10:57:35.033679 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.434633612+00:00 stderr F E0120 
10:57:35.434556 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.633827070+00:00 stderr F E0120 10:57:35.633745 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.034585908+00:00 stderr F E0120 10:57:36.034489 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.234142815+00:00 stderr F E0120 10:57:36.234036 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.435032568+00:00 stderr F W0120 10:57:36.434935 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.435032568+00:00 stderr F E0120 10:57:36.434986 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.034366947+00:00 stderr F E0120 10:57:37.034273 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2026-01-20T10:57:37.234919491+00:00 stderr F E0120 10:57:37.234693 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.434362735+00:00 stderr F E0120 10:57:37.434238 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.834221510+00:00 stderr F W0120 10:57:37.834093 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.834221510+00:00 stderr F E0120 10:57:37.834147 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.602002694+00:00 stderr F E0120 10:57:38.601896 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.621970168+00:00 stderr F E0120 10:57:39.621883 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.737499523+00:00 stderr F I0120 10:57:39.737427 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:39.751604676+00:00 stderr F E0120 10:57:39.751534 1 base_controller.go:268] RevisionController reconciliation failed: Get 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.755998222+00:00 stderr F E0120 10:57:39.755936 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.756460904+00:00 stderr F E0120 10:57:39.756161 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.756723971+00:00 stderr F E0120 10:57:39.756673 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.757753618+00:00 stderr F W0120 10:57:39.757675 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.757920003+00:00 stderr F E0120 10:57:39.757895 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.812702322+00:00 stderr F I0120 10:57:39.812583 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:39.814318644+00:00 stderr F E0120 10:57:39.814260 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.834325714+00:00 stderr F E0120 10:57:39.833881 1 base_controller.go:268] PayloadConfig reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.034174819+00:00 stderr F E0120 10:57:40.034097 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:40.433861399+00:00 stderr F E0120 
10:57:40.433718 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:40.633941130+00:00 stderr F W0120 10:57:40.633847 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.633941130+00:00 stderr F E0120 10:57:40.633906 1 base_controller.go:268] 
RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.834826502+00:00 stderr F E0120 10:57:40.834530 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2026-01-20T10:57:41.164292845+00:00 stderr F E0120 10:57:41.163245 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:41.358279735+00:00 stderr F E0120 10:57:41.358201 1 base_controller.go:268] ServiceCAController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.150109325+00:00 stderr F E0120 10:57:42.150006 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:42.541542727+00:00 stderr F E0120 10:57:42.541451 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:42.917643453+00:00 stderr F E0120 10:57:42.917553 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:43.727908920+00:00 stderr F E0120 10:57:43.727817 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.111024672+00:00 stderr F E0120 10:57:44.110955 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.117179715+00:00 stderr F E0120 10:57:44.117137 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.162992077+00:00 stderr F E0120 10:57:44.162869 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.184103265+00:00 stderr F E0120 10:57:44.184032 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.226118176+00:00 stderr F E0120 10:57:44.226041 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.307579460+00:00 stderr F E0120 10:57:44.307481 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.468985789+00:00 stderr F E0120 10:57:44.468890 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation 
failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.683138442+00:00 stderr F W0120 10:57:44.682606 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.683138442+00:00 stderr F E0120 10:57:44.682664 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:57:44.791099987+00:00 stderr F E0120 10:57:44.791026 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.077484680+00:00 stderr F E0120 10:57:45.077439 1 base_controller.go:268] APIServiceController_openshift-apiserver 
reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.432247283+00:00 stderr F E0120 10:57:45.432178 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.328698599+00:00 stderr F I0120 10:57:46.327846 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-20T10:57:46Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"43257\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:\"\"}}}, 
Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:57:46.329974153+00:00 stderr F E0120 10:57:46.329934 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.713799453+00:00 stderr F E0120 10:57:46.713749 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.479857232+00:00 stderr F E0120 10:57:47.479390 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:48.084114111+00:00 stderr F W0120 10:57:48.083813 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:48.084114111+00:00 stderr F E0120 10:57:48.084091 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:58:05.191654698+00:00 stderr F E0120 10:58:05.191034 1 base_controller.go:268] 
WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"43257", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003406d50), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2026-01-20T10:58:18.777643626+00:00 stderr F I0120 10:58:18.776999 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:58:20.137891057+00:00 stderr F I0120 10:58:20.137597 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 ././@LongLink0000644000000000000000000000025500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000755000175000017500000000000015133657716033061 5ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000755000175000017500000000000015133657737033064 5ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000644000175000017500000000007515133657716033065 0ustar zuulzuul2026-01-20T10:49:35.102113057+00:00 stdout F serving on 8080 ././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000644000175000017500000000007515133657716033065 0ustar zuulzuul2025-08-13T19:59:14.989395384+00:00 stdout F serving on 8080 ././@LongLink0000644000000000000000000000024400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657716033115 5ustar zuulzuul././@LongLink0000644000000000000000000000026400000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-content/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657741033113 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-content/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015133657716033105 0ustar zuulzuul././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/registry-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657741033113 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/registry-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000067015133657716033122 0ustar zuulzuul2026-01-20T10:51:08.721600859+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2026-01-20T10:51:13.004487146+00:00 stderr F time="2026-01-20T10:51:13Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2026-01-20T10:51:13.004487146+00:00 stderr F time="2026-01-20T10:51:13Z" 
level=info msg="stopped caching cpu profile data" address="localhost:6060" ././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-utilities/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657741033113 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-utilities/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015133657716033105 0ustar zuulzuul././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000755000175000017500000000000015133657715033156 5ustar zuulzuul././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000755000175000017500000000000015133657735033160 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000644000175000017500000014462315133657715033172 0ustar zuulzuul2025-08-13T20:04:44.888728027+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:04:44.893972447+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:04:54.878027590+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:04:54.878081642+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:04.898170868+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:04.898170868+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:14.877953171+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:14.878994381+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:24.878938698+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:24.901763522+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:34.880028638+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:34.888279825+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:44.877050427+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:44.877050427+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:54.877406116+00:00 stderr F ::ffff:10.217.0.2 
- - [13/Aug/2025 20:05:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:54.877406116+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:04.873914236+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:04.897182743+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:14.875633185+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:14.876301114+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:24.876177779+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:24.887274867+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:34.901938306+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:34.901938306+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:44.884100374+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:44.884670630+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:54.887548271+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:54.901070549+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:04.877456080+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:04.878290154+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:14.876924493+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:14.876924493+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:14] "GET / HTTP/1.1" 200 - 
2025-08-13T20:07:24.876599141+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:24.877872537+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:34.877159516+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:34.878752791+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:44.876859645+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:44.876859645+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:54.884906584+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:54.889264949+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:04.874438713+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:04.875134483+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:14.878531079+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:14.879438245+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:24.875006197+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:24.877290643+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:34.875727365+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:34.875946571+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:44.875280310+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:44.875334141+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:08:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:54.896377233+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:54.896377233+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:04.874617457+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:04.875455401+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:14.876610033+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:14.876989734+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:24.875273435+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:24.875411979+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:34.876292742+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:34.876361464+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:44.877966247+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:44.877966247+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:54.875029411+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:54.875664350+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:04.891257056+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:04.891257056+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:14.875624635+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:14] "GET / HTTP/1.1" 200 - 
2025-08-13T20:10:14.878898049+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:24.875490887+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:24.876027833+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:34.874839007+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:34.875504136+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:44.875599527+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:44.876278696+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:54.874877994+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:54.875300776+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:04.885986769+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:04.888866112+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:14.877410392+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:14.877525665+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:24.875386473+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:24.875896337+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:34.876020156+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:34.876020156+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:44.875280875+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:11:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:44.876194671+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:54.875339943+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:54.875717254+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:04.874858908+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:04.874971102+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:14.874866656+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:14.875305319+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:24.875899935+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:24.875899935+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:34.874569125+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:34.875109830+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:44.875420674+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:44.876627619+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:54.873643782+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:54.875057172+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:04.874175265+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:04.874244207+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:04] "GET / HTTP/1.1" 200 - 
2025-08-13T20:13:14.874158743+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:14.874721559+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:24.874623755+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:24.874623755+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:34.874381965+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:34.874458417+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:44.875748023+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:44.876144354+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:54.873665730+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:54.874572326+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:04.875581986+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:04.875676059+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:14.875640415+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:14.876189791+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:24.875375544+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:24.875468727+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:34.875174045+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:34.875298628+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:14:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:44.874482803+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:44.874548545+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:54.880318548+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:54.881359348+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:04.875109967+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:04.875320773+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:14.876096772+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:14.876096772+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:24.876631268+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:24.877218265+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:34.874602682+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:34.874857909+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:44.873905452+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:44.874024645+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:54.874181191+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:54.874637124+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:04.875879751+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:04] "GET / HTTP/1.1" 200 - 
2025-08-13T20:16:04.876344854+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:14.874633168+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:14.875580235+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:24.874567479+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:24.874650501+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:34.874614263+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:34.874796578+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:44.875339293+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:44.875495158+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:54.875721814+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:54.875840738+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:04.879383970+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:04.879619367+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:14.873847532+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:14.875212991+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:24.875065068+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:24.875166851+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:34.873994679+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:17:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:34.874061241+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:44.877205541+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:44.877711606+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:54.875900034+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:54.876359368+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:04.875607908+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:04.875664629+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:14.876385011+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:14.876385011+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:24.874482290+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:24.874548892+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:34.874673825+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:34.874673825+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:44.875253153+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:44.875954123+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:54.874702969+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:54.875022458+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:54] "GET / HTTP/1.1" 200 - 
2025-08-13T20:19:04.875556776+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:04.875935727+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:14.874408274+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:14.874909528+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:24.875276633+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:24.875577012+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:34.874678069+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:34.874678069+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:44.875894925+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:44.876254865+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:54.876675080+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:54.885165523+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:04.874873544+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:04.875042369+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:14.875571982+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:14.875866700+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:24.874903896+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:24.875017079+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:20:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:34.874503889+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:34.874503889+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:44.875141889+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:44.875279513+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:54.874700765+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:54.875555170+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:04.878385424+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:04.878545968+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:14.875189197+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:14.875267969+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:24.874953647+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:24.875516293+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:34.876191494+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:34.876191494+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:44.874696495+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:44.875341083+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:54.875379996+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:54] "GET / HTTP/1.1" 200 - 
2025-08-13T20:21:54.875706686+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:04.876192689+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:04.877191887+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:14.874716719+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:14.875201063+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:24.875537137+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:24.875597658+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:34.875666620+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:34.876148754+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:44.875079835+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:44.875693912+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:54.876059296+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:54.876180939+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:04.874739301+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:04.875832353+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:14.877123410+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:14.877583663+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:24.876410912+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:23:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:24.876478694+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:34.877141128+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:34.877141128+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:44.875017840+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:44.875167945+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:54.875209594+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:54.875276616+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:04.876443140+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:04.876628735+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:14.875599025+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:14.875701158+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:24.875469477+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:24.875730965+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:34.874444668+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:34.874518460+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:44.876177127+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:44.876644111+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:44] "GET / HTTP/1.1" 200 - 
2025-08-13T20:24:54.875238157+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:54.875323130+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:04.874918895+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:04.875500371+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:14.875561522+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:14.875691016+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:24.876888822+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:24.877013085+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:34.874693585+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:34.874869520+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:44.874894170+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:44.875096215+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:54.877111803+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:54.877111803+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:04.875430383+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:04.875612748+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:14.875736880+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:14.875937006+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:26:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:24.876522633+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:24.877168101+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:34.873941987+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:34.874279987+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:44.875512929+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:44.875607191+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:54.882751573+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:54.882959629+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:04.874727566+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:04.875175359+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:14.873916413+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:14.876865648+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:24.879849482+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:24.880450389+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:34.874746425+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:34.875950389+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:44.875592499+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:44] "GET / HTTP/1.1" 200 - 
2025-08-13T20:27:44.876326890+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:54.875055852+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:54.875132294+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:04.876076826+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:04.876146048+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:14.874762562+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:14.874980408+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:24.880971247+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:24.880971247+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:34.874637832+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:34.874726785+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:44.875949525+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:44.876455410+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:54.878058891+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:54.878058891+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:04.873956788+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:04.874317879+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:14.875684022+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:29:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:14.875974681+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:24.876114270+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:24.876246984+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:34.875232380+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:34.877091683+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:44.876884172+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:44.877306124+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:54.875609259+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:54.876537436+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:04.874713649+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:04.877918661+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:14.876076045+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:14.876296501+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:24.877316725+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:24.879651342+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:34.878995877+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:34.880163371+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:34] "GET / HTTP/1.1" 200 - 
2025-08-13T20:30:44.875007381+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:44.875131234+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:54.876424911+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:54.876424911+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:04.876680163+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:04.877402134+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:14.874521638+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:14.874675732+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:24.874946315+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:24.874946315+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:34.875186878+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:34.875271910+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:44.877499258+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:44.877911550+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:54.873995226+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:54.873995226+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:04.874318060+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:04.874364502+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:32:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:14.874591153+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:14.874696446+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:24.876263998+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:24.876531425+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:34.874191296+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:34.874268209+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:44.876020945+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:44.876730045+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:54.876964326+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:54.877179202+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:04.876386117+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:04.876494020+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:14.875852398+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:14.875927800+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:24.874691828+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:24.874925175+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:34.874654405+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:34] "GET / HTTP/1.1" 200 - 
2025-08-13T20:33:34.875039966+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:44.877827131+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:44.878375227+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:54.874361527+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:54.875222872+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:04.875028260+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:04.875152244+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:14.873858946+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:14.873997380+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:24.877743674+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:24.878407944+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:34.875057881+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:34.875226936+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:44.876473507+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:44.876709633+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:54.874539769+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:54.874539769+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:04.874885285+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:35:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:04.874979877+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:14.875463835+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:14.875639401+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:24.873876052+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:24.874397867+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:34.876251115+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:34.876354308+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:44.875533638+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:44.875860217+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:54.873841977+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:54.873909189+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:04.876650874+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:04.876740907+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:14.877361679+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:14.877850293+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:24.874589473+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:24.875094478+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:24] "GET / HTTP/1.1" 200 - 
2025-08-13T20:36:34.874493960+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:34.874741507+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:44.874764953+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:44.875077672+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:54.874251540+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:54.874731414+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:04.873286331+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:04.874243149+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:14.875478292+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:14.875529104+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:24.875402662+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:24.875873855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:34.873890123+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:34.874092298+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:44.876540775+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:44.876999478+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:54.875221698+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:54.875293580+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:37:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:04.876490367+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:04.877680001+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:14.874964371+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:14.875125636+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:24.874243342+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:24.875336493+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:34.876535549+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:34.876732334+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:44.874252855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:44.876123839+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:54.873657386+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:54.874943103+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:04.875230903+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:04.875357456+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:14.874139573+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:14.875194144+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:24.876877913+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:24] "GET / HTTP/1.1" 200 - 
2025-08-13T20:39:24.878520320+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:34.876636135+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:34.881854865+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:44.876229275+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:44.876543124+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:54.878921304+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:54.879038727+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:04.875610138+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:04.876332729+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:14.874282691+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:14.874338962+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:24.875280032+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:24.875704484+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:34.873937883+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:34.874442447+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:44.875692694+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:44.876143877+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:54.876345164+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:40:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:54.876605542+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:04.875234483+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:04.875344667+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:14.874629855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:14.874690807+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:24.875425519+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:24.876146620+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:34.879277603+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:34.879277603+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:44.874017071+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:44.874453514+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:54.876080042+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:54.876712460+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:04.878643778+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:04.880964525+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:14.885862486+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:14.885862486+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:14] "GET / HTTP/1.1" 200 - 
2025-08-13T20:42:24.877525607+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:24.878525616+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:34.873933626+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:34.875990815+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:39.843300253+00:00 stdout F serving from /tmp/tmp2jv0ugaq
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.log
2026-01-20T10:49:52.182616769+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:49:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:49:52.182766943+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:49:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:02.188024567+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:02.188115469+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:12.179317765+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:12.179695226+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:22.179135572+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:22.179216984+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:32.178776432+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:32] "GET / HTTP/1.1" 200 - 
2026-01-20T10:50:32.178821764+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:42.178830912+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:42.178946135+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:52.178394752+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:50:52.178394752+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:50:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:02.178733358+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:02.178809911+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:12.178346274+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:12.178408595+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:22.177972350+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:22.178075923+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:32.178572925+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:32.178627926+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:42.177527875+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:42.180342045+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:52.178904845+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:51:52.179506132+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:51:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:02.178571414+00:00 stderr F ::ffff:10.217.0.2 - - 
[20/Jan/2026 10:52:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:02.178648647+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:12.179482977+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:12.179666582+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:22.179807146+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:22.179934709+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:32.179114507+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:32.179198549+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:42.179165625+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:42.179246787+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:52.179694501+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:52:52.179778994+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:52:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:02.178216567+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:02.178396882+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:12.180574152+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:12.180737086+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:22.179994592+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:22.180091424+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:22] "GET / HTTP/1.1" 200 - 
2026-01-20T10:53:32.180142628+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:32.180469447+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:42.182257033+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:42.182821557+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:52.180043032+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:53:52.180161185+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:53:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:02.179641470+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:02.179641470+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:12.179517807+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:12.179517807+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:22.179428655+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:22.179527327+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:32.179964042+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:32.180061124+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:42.180406772+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:42.180516275+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:52.180395212+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:54:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:54:52.180548066+00:00 stderr F ::ffff:10.217.0.2 - - 
[20/Jan/2026 10:54:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:02.179799585+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:02.179920248+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:12.178277431+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:12.178346092+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:22.179536704+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:22.179658817+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:32.180090721+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:32.180221684+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:42.179871956+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:42.179961498+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:52.180916383+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:55:52.181120049+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:55:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:02.180533967+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:02.180533967+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:12.179095183+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:12.179169685+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:22.178724526+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:22] "GET / HTTP/1.1" 200 - 
2026-01-20T10:56:22.178790368+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:32.179694464+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:32.180096815+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:42.186376398+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:42.186429690+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:52.179868418+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:56:52.179986312+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:56:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:02.179488441+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:02.179567223+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:12.180186420+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:12.180577680+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:22.178817657+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:22.178926160+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:22] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:32.179322564+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:32.179392876+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:32] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:42.179285517+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:42.179708758+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:42] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:52.178917200+00:00 stderr F ::ffff:10.217.0.2 - - 
[20/Jan/2026 10:57:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:57:52.178980362+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:57:52] "GET / HTTP/1.1" 200 - 2026-01-20T10:58:02.180762290+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:58:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:58:02.180762290+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:58:02] "GET / HTTP/1.1" 200 - 2026-01-20T10:58:12.178637749+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:58:12] "GET / HTTP/1.1" 200 - 2026-01-20T10:58:12.178729021+00:00 stderr F ::ffff:10.217.0.2 - - [20/Jan/2026 10:58:12] "GET / HTTP/1.1" 200 -
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.log
2025-08-13T20:04:14.907116744+00:00 stdout F serving from /tmp/tmpsrrtesg5
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log
2026-01-20T10:49:36.552825845+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="log level info" 2026-01-20T10:49:36.552825845+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="TLS keys set, using https for metrics" 2026-01-20T10:49:36.689261740+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=pods" 2026-01-20T10:49:36.689261740+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="skipping irrelevant gvr" gvr="batch/v1, Resource=jobs" 2026-01-20T10:49:36.689261740+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=serviceaccounts" 2026-01-20T10:49:36.689261740+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=configmaps" 2026-01-20T10:49:36.689261740+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=services" 2026-01-20T10:49:36.689261740+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=roles" 2026-01-20T10:49:36.689261740+00:00 stderr F time="2026-01-20T10:49:36Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=rolebindings" 2026-01-20T10:49:37.015231149+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="detected ability to filter informers" canFilter=true 2026-01-20T10:49:37.025217504+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="OpenShift Proxy API 
available - setting up watch for Proxy type" 2026-01-20T10:49:37.025217504+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="OpenShift Proxy query will be used to fetch cluster proxy configuration" 2026-01-20T10:49:37.025258755+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="[CSV NS Plug-in] setting up csv namespace plug-in for namespaces: []" 2026-01-20T10:49:37.025258755+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="[CSV NS Plug-in] registering namespace informer" 2026-01-20T10:49:37.025280466+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="[CSV NS Plug-in] setting up namespace: " 2026-01-20T10:49:37.025412170+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="[CSV NS Plug-in] registered csv queue informer for: " 2026-01-20T10:49:37.025412170+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="[CSV NS Plug-in] finished setting up csv namespace labeler plugin" 2026-01-20T10:49:37.027340019+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2026-01-20T10:49:37.027340019+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="operator ready" 2026-01-20T10:49:37.027340019+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="starting informers..." 2026-01-20T10:49:37.027340019+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="informers started" 2026-01-20T10:49:37.027340019+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="waiting for caches to sync..." 2026-01-20T10:49:37.127323293+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="starting workers..." 
2026-01-20T10:49:37.127548391+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="Initializing cluster operator monitor for package server" 2026-01-20T10:49:37.127548391+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="monitoring the following components [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2026-01-20T10:49:37.129771788+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="starting clusteroperator monitor loop" monitor=clusteroperator 2026-01-20T10:49:37.130074997+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v2.OperatorCondition"} 2026-01-20T10:49:37.130074997+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Operator"} 2026-01-20T10:49:37.130102228+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Role"} 2026-01-20T10:49:37.130102228+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Deployment"} 2026-01-20T10:49:37.130102228+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.RoleBinding"} 2026-01-20T10:49:37.130102228+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Namespace"} 2026-01-20T10:49:37.130111998+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Deployment"} 2026-01-20T10:49:37.130111998+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.CustomResourceDefinition"} 2026-01-20T10:49:37.130143659+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2026-01-20T10:49:37.130143659+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2026-01-20T10:49:37.130152190+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2026-01-20T10:49:37.130250464+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.APIService"} 2026-01-20T10:49:37.130250464+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: 
*v1alpha1.Subscription"} 2026-01-20T10:49:37.130269104+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.InstallPlan"} 2026-01-20T10:49:37.130269104+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2026-01-20T10:49:37.130269104+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v2.OperatorCondition"} 2026-01-20T10:49:37.130269104+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130277974+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130277974+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130290235+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130290235+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130290235+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130298565+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130298565+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting Controller","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2026-01-20T10:49:37.130408058+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting Controller","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2026-01-20T10:49:37.130584434+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.Subscription"} 2026-01-20T10:49:37.130584434+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2026-01-20T10:49:37.130584434+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.InstallPlan"} 2026-01-20T10:49:37.130584434+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting Controller","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2026-01-20T10:49:37.130633415+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2026-01-20T10:49:37.130633415+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Deployment"} 2026-01-20T10:49:37.130633415+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Namespace"} 2026-01-20T10:49:37.130633415+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Service"} 2026-01-20T10:49:37.130657746+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.CustomResourceDefinition"} 2026-01-20T10:49:37.130657746+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.APIService"} 2026-01-20T10:49:37.130657746+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting 
EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.Subscription"} 2026-01-20T10:49:37.130657746+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2026-01-20T10:49:37.130657746+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130657746+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130667786+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130667786+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130676956+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130684897+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130684897+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2026-01-20T10:49:37.130692457+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2026-01-20T10:49:37.131421699+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1.ClusterOperator"} 2026-01-20T10:49:37.131421699+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"channel source: 0xc000a73b40"} 2026-01-20T10:49:37.131421699+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2026-01-20T10:49:37.131421699+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting Controller","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2026-01-20T10:49:37.177364468+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="ClusterOperator api is present" monitor=clusteroperator 2026-01-20T10:49:37.177364468+00:00 stderr F 
time="2026-01-20T10:49:37Z" level=info msg="initializing clusteroperator resource(s) for [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2026-01-20T10:49:37.177364468+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="initialized cluster resource - operator-lifecycle-manager-packageserver" monitor=clusteroperator 2026-01-20T10:49:37.230898518+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting workers","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","worker count":1} 2026-01-20T10:49:37.231206599+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting workers","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","worker count":1} 2026-01-20T10:49:37.231979602+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2026-01-20T10:49:37.232365643+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting workers","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","worker count":1} 2026-01-20T10:49:37.233271171+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2026-01-20T10:49:37.274103305+00:00 stderr F {"level":"info","ts":"2026-01-20T10:49:37Z","msg":"Starting workers","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","worker count":1} 2026-01-20T10:49:37.402119084+00:00 stderr F time="2026-01-20T10:49:37Z" level=warning msg="unhealthy component: apiServices not installed" csv=packageserver id=9TCSa namespace=openshift-operator-lifecycle-manager 
phase=Succeeded strategy=deployment 2026-01-20T10:49:37.410154118+00:00 stderr F I0120 10:49:37.404109 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28612", FieldPath:""}): type: 'Warning' reason: 'ComponentUnhealthy' apiServices not installed 2026-01-20T10:49:37.458625875+00:00 stderr F time="2026-01-20T10:49:37Z" level=warning msg="unhealthy component: apiServices not installed" csv=packageserver id=3PXIN namespace=openshift-operator-lifecycle-manager phase=Succeeded strategy=deployment 2026-01-20T10:49:37.458625875+00:00 stderr F I0120 10:49:37.454901 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28612", FieldPath:""}): type: 'Warning' reason: 'ComponentUnhealthy' apiServices not installed 2026-01-20T10:49:37.466121453+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=abV5B namespace=openshift-operator-lifecycle-manager phase=Succeeded 2026-01-20T10:49:37.466121453+00:00 stderr F E0120 10:49:37.461400 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:49:37.499636254+00:00 stderr F time="2026-01-20T10:49:37Z" 
level=warning msg="needs reinstall: apiServices not installed" csv=packageserver id=vBbHi namespace=openshift-operator-lifecycle-manager phase=Failed strategy=deployment 2026-01-20T10:49:37.499636254+00:00 stderr F I0120 10:49:37.498722 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40206", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' apiServices not installed 2026-01-20T10:49:37.777621232+00:00 stderr F time="2026-01-20T10:49:37Z" level=warning msg="needs reinstall: apiServices not installed" csv=packageserver id=+ORND namespace=openshift-operator-lifecycle-manager phase=Failed strategy=deployment 2026-01-20T10:49:37.778216090+00:00 stderr F I0120 10:49:37.778162 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40206", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' apiServices not installed 2026-01-20T10:49:37.786267665+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=dFCnP namespace=openshift-operator-lifecycle-manager phase=Failed 2026-01-20T10:49:37.786321547+00:00 stderr F E0120 10:49:37.786286 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your 
changes to the latest version and try again 2026-01-20T10:49:37.936108598+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="scheduling ClusterServiceVersion for install" csv=packageserver id=2Csho namespace=openshift-operator-lifecycle-manager phase=Pending 2026-01-20T10:49:37.936409557+00:00 stderr F I0120 10:49:37.936361 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40221", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install 2026-01-20T10:49:38.077132285+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="scheduling ClusterServiceVersion for install" csv=packageserver id=0hbaS namespace=openshift-operator-lifecycle-manager phase=Pending 2026-01-20T10:49:38.078105714+00:00 stderr F I0120 10:49:38.077360 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40221", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install 2026-01-20T10:49:38.090827331+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=IG0Yi namespace=openshift-operator-lifecycle-manager phase=Pending 2026-01-20T10:49:38.090827331+00:00 stderr F E0120 10:49:38.090410 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be 
fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:49:38.114233844+00:00 stderr F time="2026-01-20T10:49:38Z" level=warning msg="reusing existing cert packageserver-service-cert" 2026-01-20T10:49:38.401141733+00:00 stderr F I0120 10:49:38.396709 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40230", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy 2026-01-20T10:49:38.437309515+00:00 stderr F time="2026-01-20T10:49:38Z" level=warning msg="reusing existing cert packageserver-service-cert" 2026-01-20T10:49:38.591111510+00:00 stderr F I0120 10:49:38.588643 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40230", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy 2026-01-20T10:49:38.611388757+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=XcLKI namespace=openshift-operator-lifecycle-manager phase=InstallReady 2026-01-20T10:49:38.611388757+00:00 stderr F E0120 10:49:38.604907 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on 
clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:49:38.638164253+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="install strategy successful" csv=packageserver id=w3NYx namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2026-01-20T10:49:38.647100725+00:00 stderr F I0120 10:49:38.645405 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40284", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2026-01-20T10:49:38.703442281+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="install strategy successful" csv=packageserver id=36bqE namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2026-01-20T10:49:38.704028299+00:00 stderr F I0120 10:49:38.703981 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40284", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2026-01-20T10:49:38.716597062+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=AJ+e3 namespace=openshift-operator-lifecycle-manager phase=Installing 2026-01-20T10:49:38.716597062+00:00 stderr F E0120 10:49:38.713184 1 queueinformer_operator.go:319] sync {"update" 
"openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:49:38.735133626+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="install strategy successful" csv=packageserver id=ot9er namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2026-01-20T10:49:38.789103211+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="install strategy successful" csv=packageserver id=x2vom namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2026-01-20T10:49:38.789103211+00:00 stderr F I0120 10:49:38.788162 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40349", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' install strategy completed with no errors 2026-01-20T10:50:45.367847580+00:00 stderr F 2026/01/20 10:50:45 http: TLS handshake error from 10.217.0.27:58070: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "Red Hat, Inc.")
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.log
2025-08-13T19:59:23.408499878+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="log level info" 2025-08-13T19:59:23.408499878+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="TLS keys set, using https for metrics" 2025-08-13T19:59:26.390932373+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=configmaps" 2025-08-13T19:59:26.390932373+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="batch/v1, Resource=jobs" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=rolebindings" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=services" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=pods" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=roles" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=serviceaccounts" 2025-08-13T19:59:28.078977601+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="detected ability to filter informers" canFilter=true 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="OpenShift Proxy API available - setting up watch for Proxy type" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="OpenShift Proxy query will be used to fetch cluster proxy configuration" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] setting up csv namespace plug-in for namespaces: []" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info 
msg="[CSV NS Plug-in] registering namespace informer" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] setting up namespace: " 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] registered csv queue informer for: " 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] finished setting up csv namespace labeler plugin" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="operator ready" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="starting informers..." 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="informers started" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="waiting for caches to sync..." 2025-08-13T19:59:32.924188753+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="starting workers..." 
2025-08-13T19:59:33.031704498+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="Initializing cluster operator monitor for package server" 2025-08-13T19:59:33.032017877+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="monitoring the following components [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-08-13T19:59:33.055910848+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="starting clusteroperator monitor loop" monitor=clusteroperator 2025-08-13T19:59:33.246755698+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Role"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.RoleBinding"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Operator"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Namespace"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.CustomResourceDefinition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.APIService"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.InstallPlan"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.322918839+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T19:59:33.382939450+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ClusterOperator api is present" monitor=clusteroperator 2025-08-13T19:59:33.382939450+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="initializing clusteroperator resource(s) for [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-08-13T19:59:33.542897060+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="initialized cluster resource - operator-lifecycle-manager-packageserver" monitor=clusteroperator 2025-08-13T19:59:33.553977566+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: 
*v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.554066839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.554105840+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Namespace"} 2025-08-13T19:59:33.554139551+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Service"} 2025-08-13T19:59:33.554170521+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.CustomResourceDefinition"} 2025-08-13T19:59:33.554200772+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.APIService"} 2025-08-13T19:59:33.554544782+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:33.554595594+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 
2025-08-13T19:59:33.555013716+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555085628+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555128149+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555159860+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555770787+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556103227+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556141448+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: 
*v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556177709+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.InstallPlan"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting Controller","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T19:59:36.144826478+00:00 stderr F 2025/08/13 19:59:36 http: TLS handshake error from 10.217.0.2:45622: EOF 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1.ClusterOperator"} 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"channel source: 0xc000b88840"} 2025-08-13T19:59:36.432375625+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting Controller","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T19:59:36.572124379+00:00 stderr F 2025/08/13 19:59:36 http: TLS handshake error from 10.217.0.2:45620: EOF 2025-08-13T19:59:36.941610091+00:00 stderr F time="2025-08-13T19:59:36Z" level=warning msg="install timed out" csv=packageserver id=89lYL namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:36.979210383+00:00 stderr F I0813 19:59:36.967910 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"23959", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout 2025-08-13T19:59:37.168694394+00:00 stderr F I0813 19:59:37.167075 1 trace.go:236] Trace[437002520]: "DeltaFIFO Pop Process" ID:openshift-machine-api/master-user-data,Depth:124,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:36.927) (total time: 224ms): 2025-08-13T19:59:37.168694394+00:00 stderr F Trace[437002520]: [224.761967ms] [224.761967ms] END 2025-08-13T19:59:39.160935043+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","worker count":1} 2025-08-13T19:59:39.161732036+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting 
workers","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","worker count":1} 2025-08-13T19:59:39.162600960+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-08-13T19:59:39.163362922+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","worker count":1} 2025-08-13T19:59:39.167356936+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-08-13T19:59:39.181857399+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","worker count":1} 2025-08-13T19:59:39.409155649+00:00 stderr F time="2025-08-13T19:59:39Z" level=warning msg="install timed out" csv=packageserver id=wFTJL namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:39.491326631+00:00 stderr F I0813 19:59:39.461032 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"23959", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout 2025-08-13T19:59:39.678121055+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest 
version and try again" csv=packageserver id=YtP4v namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:39.678121055+00:00 stderr F E0813 19:59:39.677187 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:40.946415519+00:00 stderr F time="2025-08-13T19:59:40Z" level=warning msg="needs reinstall: apiServices not installed" csv=packageserver id=Ot2sV namespace=openshift-operator-lifecycle-manager phase=Failed strategy=deployment 2025-08-13T19:59:40.956756573+00:00 stderr F I0813 19:59:40.956497 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28220", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' apiServices not installed 2025-08-13T19:59:41.432936177+00:00 stderr F time="2025-08-13T19:59:41Z" level=info msg="scheduling ClusterServiceVersion for install" csv=packageserver id=GN3+B namespace=openshift-operator-lifecycle-manager phase=Pending 2025-08-13T19:59:41.432936177+00:00 stderr F I0813 19:59:41.421229 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28286", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install 2025-08-13T19:59:42.403819742+00:00 stderr F time="2025-08-13T19:59:42Z" level=warning msg="reusing existing cert packageserver-service-cert" 2025-08-13T19:59:46.244202112+00:00 
stderr F I0813 19:59:46.239490 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28295", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy 2025-08-13T19:59:47.823165362+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="install strategy successful" csv=packageserver id=6ssjB namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:47.824931572+00:00 stderr F I0813 19:59:47.823956 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28383", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2025-08-13T19:59:48.876738185+00:00 stderr F time="2025-08-13T19:59:48Z" level=info msg="install strategy successful" csv=packageserver id=pY0V5 namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:48.880552904+00:00 stderr F I0813 19:59:48.877424 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28383", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2025-08-13T19:59:48.965878547+00:00 stderr F time="2025-08-13T19:59:48Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" 
csv=packageserver id=k/B5q namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:48.976565881+00:00 stderr F E0813 19:59:48.973051 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:49.634998790+00:00 stderr F time="2025-08-13T19:59:49Z" level=info msg="install strategy successful" csv=packageserver id=HgOMt namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:50.622862930+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="install strategy successful" csv=packageserver id=9BQnX namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:51.329144563+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="install strategy successful" csv=packageserver id=dbNQV namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.031285079+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=rj7n7 namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.332868995+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=JU2gq namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.533734821+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=I3wSK namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.820222188+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" 
csv=packageserver id=nCH5J namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:53.054214688+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="could not query for GVK in api discovery" err="the server is currently unable to handle the request" group=packages.operators.coreos.com kind=PackageManifest version=v1 2025-08-13T19:59:53.110286586+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="could not update install status" csv=packageserver error="the server is currently unable to handle the request" id=nhmlX namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:53.110286586+00:00 stderr F E0813 19:59:53.107826 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: the server is currently unable to handle the request 2025-08-13T19:59:53.290519303+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="install strategy successful" csv=packageserver id=+yVoI namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:53.297101411+00:00 stderr F I0813 19:59:53.296413 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28423", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' install strategy completed with no errors 2025-08-13T20:03:41.629323813+00:00 stderr F time="2025-08-13T20:03:41Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:41.629323813+00:00 stderr F time="2025-08-13T20:03:41Z" level=error msg="status update error - failed to ensure initial clusteroperator 
name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:56.983016152+00:00 stderr F time="2025-08-13T20:03:56Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:56.984258098+00:00 stderr F time="2025-08-13T20:03:56Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.043288057+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.043288057+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.071758619+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 
10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.071758619+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.102757923+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.admin-2SOrzhaSHllEqB6Becsc9Z2BniBuXZxdBrPmIq\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.103560566+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.edit-brB9auo7mhdQtycRdrSZm5XlKKbUjCe698FPlD\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.105920163+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.view-9QCGFNcofBHQ2DeWEf2qFa4NWqTOGskUedO4Tz\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.117917436+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.118091211+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:02.371881741+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:02.371920852+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:04:02.610171729+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:04:02.610171729+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:05.519098082+00:00 stderr F time="2025-08-13T20:04:05Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 
2025-08-13T20:04:05.519098082+00:00 stderr F time="2025-08-13T20:04:05Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:05:07.886185865+00:00 stderr F time="2025-08-13T20:05:07Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:05:07.897930811+00:00 stderr F time="2025-08-13T20:05:07Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:42:43.424379186+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="exiting from clusteroperator monitor loop" monitor=clusteroperator 2025-08-13T20:42:43.425568160+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for non leader election runnables"} 2025-08-13T20:42:43.432031667+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for leader election runnables"} 2025-08-13T20:42:43.433208311+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T20:42:43.433208311+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to 
finish","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433258182+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers 
finished","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for caches"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for webhooks"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for HTTP servers"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Wait completed, proceeding to shutdown the manager"} ././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015133657716033245 5ustar zuulzuul././@LongLink0000644000000000000000000000030200000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015133657741033243 5ustar 
zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000222515133657716033250 0ustar zuulzuul2025-08-13T19:59:25.749930980+00:00 stderr F W0813 19:59:25.749386 1 deprecated.go:66] 2025-08-13T19:59:25.749930980+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.749930980+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.749930980+00:00 stderr F =============================================== 2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.750743644+00:00 stderr F I0813 19:59:25.750677 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:25.792617047+00:00 stderr F I0813 19:59:25.791677 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:25.796481088+00:00 stderr F I0813 19:59:25.794679 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-08-13T19:59:25.796481088+00:00 stderr F I0813 19:59:25.795529 1 kube-rbac-proxy.go:402] Listening securely on :8443 2025-08-13T20:42:42.434768166+00:00 stderr F I0813 20:42:42.434158 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000030700000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000202015133657716033241 0ustar zuulzuul2026-01-20T10:49:37.597324419+00:00 stderr F W0120 10:49:37.597037 1 deprecated.go:66] 2026-01-20T10:49:37.597324419+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:49:37.597324419+00:00 stderr F 2026-01-20T10:49:37.597324419+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2026-01-20T10:49:37.597324419+00:00 stderr F 2026-01-20T10:49:37.597324419+00:00 stderr F =============================================== 2026-01-20T10:49:37.597324419+00:00 stderr F 2026-01-20T10:49:37.598034662+00:00 stderr F I0120 10:49:37.597703 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:49:37.598034662+00:00 stderr F I0120 10:49:37.597752 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:49:37.601260270+00:00 stderr F I0120 10:49:37.601207 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2026-01-20T10:49:37.601664672+00:00 stderr F I0120 10:49:37.601638 1 kube-rbac-proxy.go:402] Listening securely on :8443 ././@LongLink0000644000000000000000000000031600000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015133657741033243 5ustar zuulzuul././@LongLink0000644000000000000000000000032300000000000011601 Lustar 
--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.log ---
2026-01-20T10:49:37.196383967+00:00 stderr F I0120 10:49:37.195743 1 main.go:57] starting net-attach-def-admission-controller webhook server
2026-01-20T10:49:37.198643136+00:00 stderr F W0120 10:49:37.198593 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2026-01-20T10:49:37.221247305+00:00 stderr F W0120 10:49:37.220937 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2026-01-20T10:49:37.223450842+00:00 stderr F I0120 10:49:37.222552 1 localmetrics.go:51] UPdating net-attach-def metrics for any with value 0
2026-01-20T10:49:37.223450842+00:00 stderr F I0120 10:49:37.222585 1 localmetrics.go:51] UPdating net-attach-def metrics for sriov with value 0
2026-01-20T10:49:37.223450842+00:00 stderr F I0120 10:49:37.222592 1 localmetrics.go:51] UPdating net-attach-def metrics for ib-sriov with value 0
2026-01-20T10:49:37.223450842+00:00 stderr F I0120 10:49:37.223176 1 controller.go:202] Starting net-attach-def-admission-controller
2026-01-20T10:49:37.326119518+00:00 stderr F I0120 10:49:37.324473 1 controller.go:211] net-attach-def-admission-controller synced and ready
--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.log ---
2025-08-13T19:59:12.195411848+00:00 stderr F I0813 19:59:12.184348 1 main.go:57] starting net-attach-def-admission-controller webhook server
2025-08-13T19:59:13.341524580+00:00 stderr F W0813 19:59:13.330254 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:59:13.777938261+00:00 stderr F W0813 19:59:13.777869 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:59:13.864351914+00:00 stderr F I0813 19:59:13.863190 1 localmetrics.go:51] UPdating net-attach-def metrics for any with value 0
2025-08-13T19:59:14.306827216+00:00 stderr F I0813 19:59:14.228559 1 localmetrics.go:51] UPdating net-attach-def metrics for sriov with value 0
2025-08-13T19:59:14.307369132+00:00 stderr F I0813 19:59:14.307339 1 localmetrics.go:51] UPdating net-attach-def metrics for ib-sriov with value 0
2025-08-13T19:59:14.409334398+00:00 stderr F I0813 19:59:14.382647 1 controller.go:202] Starting net-attach-def-admission-controller
2025-08-13T19:59:17.037691044+00:00 stderr F I0813 19:59:17.032208 1 controller.go:211] net-attach-def-admission-controller synced and ready
2025-08-13T20:01:48.651575909+00:00 stderr F I0813 20:01:48.650318 1 tlsutil.go:43] cetificate reloaded
--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.log ---
2026-01-20T10:47:06.344562684+00:00 stderr F W0120 10:47:06.344385 1 deprecated.go:66]
2026-01-20T10:47:06.344562684+00:00 stderr F ==== Removed Flag Warning ======================
2026-01-20T10:47:06.344562684+00:00 stderr F
2026-01-20T10:47:06.344562684+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2026-01-20T10:47:06.344562684+00:00 stderr F
2026-01-20T10:47:06.344562684+00:00 stderr F ===============================================
2026-01-20T10:47:06.344562684+00:00 stderr F
2026-01-20T10:47:06.344562684+00:00 stderr F I0120 10:47:06.344498 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg
2026-01-20T10:47:06.348376959+00:00 stderr F I0120 10:47:06.348345 1 kube-rbac-proxy.go:233] Valid token audiences:
2026-01-20T10:47:06.348638932+00:00 stderr F I0120 10:47:06.348614 1 kube-rbac-proxy.go:347] Reading certificate files
2026-01-20T10:47:06.348843655+00:00 stderr F I0120 10:47:06.348812 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637
2026-01-20T10:47:06.348909425+00:00 stderr F I0120 10:47:06.348768 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt"
2026-01-20T10:47:06.349519322+00:00 stderr F I0120 10:47:06.349322 1 kube-rbac-proxy.go:402] Listening securely on :9637
2026-01-20T10:54:20.526416240+00:00 stderr F I0120 10:54:20.526281 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2026-01-20T10:54:48.251209481+00:00 stderr F I0120 10:54:48.250366 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2026-01-20T10:54:49.179306761+00:00 stderr F I0120 10:54:49.179184 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.log ---
2026-01-20T10:42:03.938079092+00:00 stderr F W0120 10:42:03.937893 1 deprecated.go:66]
2026-01-20T10:42:03.938079092+00:00 stderr F ==== Removed Flag Warning ======================
2026-01-20T10:42:03.938079092+00:00 stderr F
2026-01-20T10:42:03.938079092+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2026-01-20T10:42:03.938079092+00:00 stderr F
2026-01-20T10:42:03.938079092+00:00 stderr F ===============================================
2026-01-20T10:42:03.938079092+00:00 stderr F
2026-01-20T10:42:03.938245957+00:00 stderr F I0120 10:42:03.938233 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg
2026-01-20T10:42:03.940680018+00:00 stderr F I0120 10:42:03.940660 1 kube-rbac-proxy.go:233] Valid token audiences:
2026-01-20T10:42:03.941013198+00:00 stderr F I0120 10:42:03.940996 1 kube-rbac-proxy.go:347] Reading certificate files
2026-01-20T10:42:03.941174743+00:00 stderr F I0120 10:42:03.941128 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt"
2026-01-20T10:42:03.941538824+00:00 stderr F I0120 10:42:03.941503 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637
2026-01-20T10:42:03.942087880+00:00 stderr F I0120 10:42:03.942056 1 kube-rbac-proxy.go:402] Listening securely on :9637
2026-01-20T10:46:12.261924685+00:00 stderr F I0120 10:46:12.261778 1 kube-rbac-proxy.go:493] received interrupt, shutting down
--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.log ---
2025-08-13T19:44:01.126546932+00:00 stderr F W0813 19:44:01.123942 1 deprecated.go:66]
2025-08-13T19:44:01.126546932+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:44:01.126546932+00:00 stderr F
2025-08-13T19:44:01.126546932+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:44:01.126546932+00:00 stderr F
2025-08-13T19:44:01.126546932+00:00 stderr F ===============================================
2025-08-13T19:44:01.126546932+00:00 stderr F
2025-08-13T19:44:01.126546932+00:00 stderr F I0813 19:44:01.124657 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg
2025-08-13T19:44:01.175293808+00:00 stderr F I0813 19:44:01.175109 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:44:01.179226024+00:00 stderr F I0813 19:44:01.176759 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:44:01.182533911+00:00 stderr F I0813 19:44:01.181865 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt"
2025-08-13T19:44:01.182613100+00:00 stderr F I0813 19:44:01.182582 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637
2025-08-13T19:44:01.184817475+00:00 stderr F I0813 19:44:01.184752 1 kube-rbac-proxy.go:402] Listening securely on :9637
2025-08-13T19:51:58.235350662+00:00 stderr F I0813 19:51:58.235115 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T19:55:11.101868577+00:00 stderr F I0813 19:55:11.101578 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T19:59:31.671386093+00:00 stderr F I0813 19:59:31.671100 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T19:59:36.085180598+00:00 stderr F I0813 19:59:36.068810 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T19:59:41.058153354+00:00 stderr F I0813 19:59:41.058038 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:06:06.964137311+00:00 stderr F I0813 20:06:06.954550 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:06:14.717482026+00:00 stderr F I0813 20:06:14.717284 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:06:47.017157131+00:00 stderr F I0813 20:06:47.016195 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:06:50.034528031+00:00 stderr F I0813 20:06:50.034310 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:06:51.358262373+00:00 stderr F I0813 20:06:51.347263 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:42:13.287019643+00:00 stderr F I0813 20:42:13.282260 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:42:13.985706076+00:00 stderr F I0813 20:42:13.985582 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:42:16.592484969+00:00 stderr F I0813 20:42:16.592362 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"
2025-08-13T20:42:47.532180885+00:00 stderr F I0813 20:42:47.532038 1 kube-rbac-proxy.go:493] received interrupt, shutting down
--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.log ---
2025-08-13T19:43:57.536596541+00:00 stdout P Waiting for kubelet key and certificate to be available

--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.log ---
2026-01-20T10:47:05.903874819+00:00 stdout P Waiting for kubelet key and certificate to be available

--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.log ---
2026-01-20T10:42:02.269918733+00:00 stdout P Waiting for kubelet key and certificate to be available
--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller/1.log ---
2026-01-20T10:57:54.452552958+00:00 stderr F I0120 10:57:54.452354 1 controller.go:74] "starting cert-manager controller" logger="cert-manager.controller" version="v1.19.2" git_commit="6e38ee57a338a1f27bb724ddb5933f4b8e23e567" go_version="go1.25.5" platform="linux/amd64"
2026-01-20T10:57:54.452552958+00:00 stderr F I0120 10:57:54.452487 1 controller.go:290] "configured acme dns01 nameservers" logger="cert-manager.controller.build-context" nameservers=["10.217.4.10:53"]
2026-01-20T10:57:54.453134213+00:00 stderr F I0120 10:57:54.453092 1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
2026-01-20T10:57:54.453134213+00:00 stderr F I0120 10:57:54.453120 1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
2026-01-20T10:57:54.453134213+00:00 stderr F I0120 10:57:54.453127 1 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
2026-01-20T10:57:54.453153593+00:00 stderr F I0120 10:57:54.453132 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
2026-01-20T10:57:54.453153593+00:00 stderr F I0120 10:57:54.453139 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
2026-01-20T10:57:54.457251431+00:00 stderr F I0120 10:57:54.457205 1 controller.go:89] "enabled controllers: [certificaterequests-approver certificaterequests-issuer-acme certificaterequests-issuer-ca certificaterequests-issuer-selfsigned certificaterequests-issuer-vault certificaterequests-issuer-venafi certificates-issuing certificates-key-manager certificates-metrics certificates-readiness certificates-request-manager certificates-revision-manager certificates-trigger challenges clusterissuers ingress-shim issuers orders]" logger="cert-manager.controller"
2026-01-20T10:57:54.457251431+00:00 stderr F I0120 10:57:54.457235 1 controller.go:437] "serving insecurely as tls certificate data not provided" logger="cert-manager.controller"
2026-01-20T10:57:54.457251431+00:00 stderr F I0120 10:57:54.457245 1 controller.go:102] "listening for insecure connections" logger="cert-manager.controller" address="0.0.0.0:9402"
2026-01-20T10:57:54.457933180+00:00 stderr F I0120 10:57:54.457882 1 controller.go:129] "starting metrics server" logger="cert-manager.controller" address="[::]:9402"
2026-01-20T10:57:54.457933180+00:00 stderr F I0120 10:57:54.457919 1 controller.go:182] "starting leader election" logger="cert-manager.controller"
2026-01-20T10:57:54.458030683+00:00 stderr F I0120 10:57:54.457980 1 controller.go:175] "starting healthz server" logger="cert-manager.controller" address="[::]:9403"
2026-01-20T10:57:54.460481918+00:00 stderr F I0120 10:57:54.460449 1 leaderelection.go:257] attempting to acquire leader lease kube-system/cert-manager-controller...
2026-01-20T10:57:54.471251382+00:00 stderr F I0120 10:57:54.471197 1 leaderelection.go:271] successfully acquired lease kube-system/cert-manager-controller
2026-01-20T10:57:54.478196166+00:00 stderr F I0120 10:57:54.478116 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="ingress-shim"
2026-01-20T10:57:54.480991920+00:00 stderr F I0120 10:57:54.480944 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-approver"
2026-01-20T10:57:54.484281877+00:00 stderr F I0120 10:57:54.484215 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-selfsigned"
2026-01-20T10:57:54.484281877+00:00 stderr F I0120 10:57:54.484246 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-venafi"
2026-01-20T10:57:54.484316747+00:00 stderr F I0120 10:57:54.484253 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-metrics"
2026-01-20T10:57:54.487796129+00:00 stderr F I0120 10:57:54.487744 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="challenges"
2026-01-20T10:57:54.491151989+00:00 stderr F I0120 10:57:54.491108 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-trigger"
2026-01-20T10:57:54.497070655+00:00 stderr F I0120 10:57:54.496991 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-acme"
2026-01-20T10:57:54.497164978+00:00 stderr F I0120 10:57:54.497114 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-vault"
2026-01-20T10:57:54.499743266+00:00 stderr F I0120 10:57:54.499689 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="clusterissuers"
2026-01-20T10:57:54.501784639+00:00 stderr F I0120 10:57:54.501731 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-issuing"
2026-01-20T10:57:54.503760322+00:00 stderr F I0120 10:57:54.503705 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-key-manager"
2026-01-20T10:57:54.505957720+00:00 stderr F I0120 10:57:54.505915 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="orders"
2026-01-20T10:57:54.507962093+00:00 stderr F I0120 10:57:54.507912 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-acme"
2026-01-20T10:57:54.510559341+00:00 stderr F I0120 10:57:54.510509 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-venafi"
2026-01-20T10:57:54.512849672+00:00 stderr F I0120 10:57:54.512807 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-readiness"
2026-01-20T10:57:54.516598102+00:00 stderr F I0120 10:57:54.516533 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-request-manager"
2026-01-20T10:57:54.519433957+00:00 stderr F I0120 10:57:54.518760 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-revision-manager"
2026-01-20T10:57:54.520623108+00:00 stderr F I0120 10:57:54.520576 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-ca"
2026-01-20T10:57:54.522278872+00:00 stderr F I0120 10:57:54.522233 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-ca"
2026-01-20T10:57:54.522278872+00:00 stderr F I0120 10:57:54.522254 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-vault"
2026-01-20T10:57:54.522377564+00:00 stderr F I0120 10:57:54.522343 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-selfsigned"
2026-01-20T10:57:54.524032118+00:00 stderr F I0120 10:57:54.523993 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="gateway-shim"
2026-01-20T10:57:54.524306115+00:00 stderr F I0120 10:57:54.524257 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="issuers"
2026-01-20T10:57:54.529374659+00:00 stderr F I0120 10:57:54.529051 1 reflector.go:436] "Caches populated" type="*v1.PartialObjectMetadata" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.529614675+00:00 stderr F I0120 10:57:54.529575 1 reflector.go:436] "Caches populated" type="*v1.PartialObjectMetadata" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.529962624+00:00 stderr F I0120 10:57:54.529899 1 reflector.go:436] "Caches populated" type="*v1.Issuer" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.530125309+00:00 stderr F I0120 10:57:54.530087 1 reflector.go:436] "Caches populated" type="*v1.Order" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.530480129+00:00 stderr F I0120 10:57:54.530419 1 reflector.go:436] "Caches populated" type="*v1.Challenge" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.533051866+00:00 stderr F I0120 10:57:54.530778 1 reflector.go:436] "Caches populated" type="*v1.CertificateRequest" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.533051866+00:00 stderr F I0120 10:57:54.530968 1 reflector.go:436] "Caches populated" type="*v1.Secret" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.533051866+00:00 stderr F I0120 10:57:54.531148 1 reflector.go:436] "Caches populated" type="*v1.Ingress" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.533051866+00:00 stderr F I0120 10:57:54.531479 1 reflector.go:436] "Caches populated" type="*v1.Certificate" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.543476792+00:00 stderr F I0120 10:57:54.543408 1 reflector.go:436] "Caches populated" type="*v1.PartialObjectMetadata" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:57:54.544184700+00:00 stderr F I0120 10:57:54.544161 1 reflector.go:436] "Caches populated" type="*v1.ClusterIssuer" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"

--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller/0.log ---
2026-01-20T10:56:43.177169286+00:00 stderr F I0120 10:56:43.176898 1 controller.go:74] "starting cert-manager controller" logger="cert-manager.controller" version="v1.19.2" git_commit="6e38ee57a338a1f27bb724ddb5933f4b8e23e567" go_version="go1.25.5" platform="linux/amd64"
2026-01-20T10:56:43.177169286+00:00 stderr F I0120 10:56:43.177034 1 controller.go:290] "configured acme dns01 nameservers" logger="cert-manager.controller.build-context" nameservers=["10.217.4.10:53"]
2026-01-20T10:56:43.177531896+00:00 stderr F I0120 10:56:43.177478 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
2026-01-20T10:56:43.177531896+00:00 stderr F I0120 10:56:43.177501 1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
2026-01-20T10:56:43.177531896+00:00 stderr F I0120 10:56:43.177507 1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
2026-01-20T10:56:43.177531896+00:00 stderr F I0120 10:56:43.177513 1 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
2026-01-20T10:56:43.177531896+00:00 stderr F I0120 10:56:43.177517 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
2026-01-20T10:56:43.181158923+00:00 stderr F I0120 10:56:43.180913 1 controller.go:89] "enabled controllers: [certificaterequests-approver certificaterequests-issuer-acme certificaterequests-issuer-ca certificaterequests-issuer-selfsigned certificaterequests-issuer-vault certificaterequests-issuer-venafi certificates-issuing certificates-key-manager certificates-metrics certificates-readiness certificates-request-manager certificates-revision-manager certificates-trigger challenges clusterissuers ingress-shim issuers orders]" logger="cert-manager.controller"
2026-01-20T10:56:43.181158923+00:00 stderr F I0120 10:56:43.180937 1 controller.go:437] "serving insecurely as tls certificate data not provided" logger="cert-manager.controller"
2026-01-20T10:56:43.181158923+00:00 stderr F I0120 10:56:43.180946 1 controller.go:102] "listening for insecure connections" logger="cert-manager.controller" address="0.0.0.0:9402"
2026-01-20T10:56:43.181595835+00:00 stderr F I0120 10:56:43.181535 1 controller.go:129] "starting metrics server" logger="cert-manager.controller" address="[::]:9402"
2026-01-20T10:56:43.181614265+00:00 stderr F I0120 10:56:43.181598 1 controller.go:182] "starting leader election" logger="cert-manager.controller"
2026-01-20T10:56:43.181875753+00:00 stderr F I0120 10:56:43.181698 1 controller.go:175] "starting healthz server" logger="cert-manager.controller" address="[::]:9403"
2026-01-20T10:56:43.183449015+00:00 stderr F I0120 10:56:43.183400 1 leaderelection.go:257] attempting to acquire leader lease kube-system/cert-manager-controller...
2026-01-20T10:56:43.201998595+00:00 stderr F I0120 10:56:43.201930 1 leaderelection.go:271] successfully acquired lease kube-system/cert-manager-controller
2026-01-20T10:56:43.208857829+00:00 stderr F I0120 10:56:43.208799 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-acme"
2026-01-20T10:56:43.213549035+00:00 stderr F I0120 10:56:43.213422 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-selfsigned"
2026-01-20T10:56:43.216015552+00:00 stderr F I0120 10:56:43.215942 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-acme"
2026-01-20T10:56:43.216015552+00:00 stderr F I0120 10:56:43.215976 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-selfsigned"
2026-01-20T10:56:43.216015552+00:00 stderr F I0120 10:56:43.215986 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-vault"
2026-01-20T10:56:43.216015552+00:00 stderr F I0120 10:56:43.215995 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="gateway-shim"
2026-01-20T10:56:43.216123825+00:00 stderr F I0120 10:56:43.216086 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-venafi"
2026-01-20T10:56:43.218352505+00:00 stderr F I0120 10:56:43.218273 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-approver"
2026-01-20T10:56:43.220528633+00:00 stderr F I0120 10:56:43.220457 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="orders"
2026-01-20T10:56:43.222482745+00:00 stderr F I0120 10:56:43.222200 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-venafi"
2026-01-20T10:56:43.222482745+00:00 stderr F I0120 10:56:43.222351 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-vault"
2026-01-20T10:56:43.247209561+00:00 stderr F I0120 10:56:43.247085 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="clusterissuers"
2026-01-20T10:56:43.250317524+00:00 stderr F I0120 10:56:43.249983 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="challenges"
2026-01-20T10:56:43.253450229+00:00 stderr F I0120 10:56:43.253381 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-metrics"
2026-01-20T10:56:43.255042542+00:00 stderr F I0120 10:56:43.255012 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-request-manager"
2026-01-20T10:56:43.256773498+00:00 stderr F I0120 10:56:43.256697 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-revision-manager"
2026-01-20T10:56:43.258687360+00:00 stderr F I0120 10:56:43.258367 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-trigger"
2026-01-20T10:56:43.263410567+00:00 stderr F I0120 10:56:43.263329 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-key-manager"
2026-01-20T10:56:43.265962906+00:00 stderr F I0120 10:56:43.265550 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-readiness"
2026-01-20T10:56:43.270660861+00:00 stderr F I0120 10:56:43.270601 1 controller.go:229] "skipping disabled controller" logger="cert-manager.controller" controller="certificatesigningrequests-issuer-ca"
2026-01-20T10:56:43.270829646+00:00 stderr F I0120 10:56:43.270757 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificaterequests-issuer-ca"
2026-01-20T10:56:43.273019265+00:00 stderr F I0120 10:56:43.272946 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="issuers"
2026-01-20T10:56:43.276959131+00:00 stderr F I0120 10:56:43.274890 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="ingress-shim"
2026-01-20T10:56:43.279646003+00:00 stderr F I0120 10:56:43.278301 1 controller.go:246] "starting controller" logger="cert-manager.controller" controller="certificates-issuing"
2026-01-20T10:56:43.282095319+00:00 stderr F I0120 10:56:43.280233 1 reflector.go:436] "Caches populated" type="*v1.PartialObjectMetadata" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:56:43.282095319+00:00 stderr F I0120 10:56:43.280675 1 reflector.go:436] "Caches populated" type="*v1.Order" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:56:43.282095319+00:00 stderr F I0120 10:56:43.280805 1 reflector.go:436] "Caches populated" type="*v1.CertificateRequest" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:56:43.282095319+00:00 stderr F I0120 10:56:43.281076 1 reflector.go:436] "Caches populated" type="*v1.Challenge" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:56:43.282095319+00:00 stderr F I0120 10:56:43.281212 1 reflector.go:436] "Caches populated" type="*v1.Ingress" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:56:43.282095319+00:00 stderr F I0120 10:56:43.281278 1 reflector.go:436] "Caches populated" type="*v1.PartialObjectMetadata" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:56:43.282095319+00:00 stderr F I0120 10:56:43.281609 1 reflector.go:436] "Caches populated" type="*v1.Certificate"
reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:43.287136895+00:00 stderr F I0120 10:56:43.282330 1 reflector.go:436] "Caches populated" type="*v1.Secret" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:43.299222540+00:00 stderr F I0120 10:56:43.298807 1 reflector.go:436] "Caches populated" type="*v1.Issuer" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:43.350196162+00:00 stderr F I0120 10:56:43.349189 1 reflector.go:436] "Caches populated" type="*v1.ClusterIssuer" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:43.351243700+00:00 stderr F I0120 10:56:43.350325 1 reflector.go:436] "Caches populated" type="*v1.PartialObjectMetadata" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:57:13.221876178+00:00 stderr F E0120 10:57:13.221789 1 leaderelection.go:441] Failed to update lock optimistically: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-controller": dial tcp 10.217.4.1:443: connect: connection refused, falling back to slow path 2026-01-20T10:57:13.222638158+00:00 stderr F E0120 10:57:13.222584 1 leaderelection.go:448] error retrieving resource lock kube-system/cert-manager-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-controller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:28.222639799+00:00 stderr F E0120 10:57:28.222564 1 leaderelection.go:441] Failed to update lock optimistically: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-controller": dial tcp 10.217.4.1:443: connect: connection refused, falling back to slow path 2026-01-20T10:57:28.223242445+00:00 stderr F E0120 10:57:28.223188 1 leaderelection.go:448] error retrieving resource lock kube-system/cert-manager-controller: Get 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-controller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:43.222533576+00:00 stderr F E0120 10:57:43.222198 1 leaderelection.go:441] Failed to update lock optimistically: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-controller": dial tcp 10.217.4.1:443: connect: connection refused, falling back to slow path 2026-01-20T10:57:43.223248795+00:00 stderr F E0120 10:57:43.222835 1 leaderelection.go:448] error retrieving resource lock kube-system/cert-manager-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-controller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:53.222970161+00:00 stderr F I0120 10:57:53.221834 1 leaderelection.go:297] failed to renew lease kube-system/cert-manager-controller: context deadline exceeded 2026-01-20T10:57:53.231364493+00:00 stderr F I0120 10:57:53.231303 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificates-revision-manager" 2026-01-20T10:57:53.231400994+00:00 stderr F I0120 10:57:53.231342 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificaterequests-approver" 2026-01-20T10:57:53.231400994+00:00 stderr F I0120 10:57:53.231377 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificates-issuing" 2026-01-20T10:57:53.231432785+00:00 stderr F I0120 10:57:53.231420 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificates-readiness" 2026-01-20T10:57:53.231449115+00:00 stderr F I0120 10:57:53.231434 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificaterequests-issuer-venafi" 
2026-01-20T10:57:53.231462085+00:00 stderr F I0120 10:57:53.231446 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.issuers" 2026-01-20T10:57:53.231505817+00:00 stderr F I0120 10:57:53.231470 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.clusterissuers" 2026-01-20T10:57:53.231505817+00:00 stderr F I0120 10:57:53.231400 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificaterequests-issuer-acme" 2026-01-20T10:57:53.231552108+00:00 stderr F I0120 10:57:53.231511 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.orders" 2026-01-20T10:57:53.231552108+00:00 stderr F I0120 10:57:53.231543 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificaterequests-issuer-ca" 2026-01-20T10:57:53.231567048+00:00 stderr F I0120 10:57:53.231531 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificates-key-manager" 2026-01-20T10:57:53.231579988+00:00 stderr F I0120 10:57:53.231559 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificaterequests-issuer-selfsigned" 2026-01-20T10:57:53.231579988+00:00 stderr F I0120 10:57:53.231544 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificaterequests-issuer-vault" 2026-01-20T10:57:53.231593739+00:00 stderr F I0120 10:57:53.231555 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificates-request-manager" 2026-01-20T10:57:53.231593739+00:00 stderr F I0120 10:57:53.231572 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.challenges" 
2026-01-20T10:57:53.231607239+00:00 stderr F I0120 10:57:53.231517 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificates-trigger" 2026-01-20T10:57:53.231607239+00:00 stderr F I0120 10:57:53.231593 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.certificates-metrics" 2026-01-20T10:57:53.231620580+00:00 stderr F I0120 10:57:53.231604 1 controller.go:122] "shutting down queue as workqueue signaled shutdown" logger="cert-manager.controller.ingress-shim" 2026-01-20T10:57:53.231756983+00:00 stderr F E0120 10:57:53.231689 1 main.go:40] "error executing command" err="error starting controller: leader election lost" logger="cert-manager" ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015133657715033070 5ustar zuulzuul././@LongLink0000644000000000000000000000031200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103/collect-profiles/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015133657734033071 5ustar zuulzuul././@LongLink0000644000000000000000000000031700000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103/collect-profiles/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000000134315133657715033073 0ustar zuulzuul2026-01-20T10:50:44.142820947+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="error verifying provided cert and key: certificate has expired" 2026-01-20T10:50:44.142972071+00:00 stderr F time="2026-01-20T10:50:44Z" level=info msg="generating a new cert and key" 2026-01-20T10:50:45.368033865+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="error retrieving pprof profile: Get \"https://olm-operator-metrics:8443/debug/pprof/heap\": remote error: tls: unknown certificate authority" 2026-01-20T10:50:45.385419776+00:00 stderr F time="2026-01-20T10:50:45Z" level=info msg="error retrieving pprof profile: Get \"https://catalog-operator-metrics:8443/debug/pprof/heap\": remote error: tls: unknown certificate authority" ././@LongLink0000644000000000000000000000024600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000755000175000017500000000000015133657715033121 5ustar zuulzuul././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000755000175000017500000000000015133657735033123 5ustar 
zuulzuul././@LongLink0000644000000000000000000000027400000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000644000175000017500000000017015133657715033121 0ustar zuulzuul2026-01-20T10:47:35.697885306+00:00 stdout F Tue Jan 20 10:47:35 UTC 2026 2026-01-20T10:51:02.297545131+00:00 stdout F ././@LongLink0000644000000000000000000000027400000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000644000175000017500000000057515133657715033132 0ustar zuulzuul2025-08-13T19:51:17.827657214+00:00 stdout F Wed Aug 13 19:51:17 UTC 2025 2025-08-13T19:51:34.627632491+00:00 stderr F time="2025-08-13T19:51:34Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded" 2025-08-13T19:51:35.128202382+00:00 stdout F ././@LongLink0000644000000000000000000000024000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015133657716033105 5ustar zuulzuul././@LongLink0000644000000000000000000000027400000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-ovn-metrics/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015133657736033107 5ustar zuulzuul././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-ovn-metrics/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000001104015133657716033103 0ustar zuulzuul2026-01-20T10:56:45.565686630+00:00 stderr F ++ K8S_NODE= 2026-01-20T10:56:45.565686630+00:00 stderr F ++ [[ -n '' ]] 2026-01-20T10:56:45.565686630+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2026-01-20T10:56:45.565686630+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2026-01-20T10:56:45.565686630+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.565686630+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2026-01-20T10:56:45.565686630+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2026-01-20T10:56:45.565840465+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2026-01-20T10:56:45.565840465+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2026-01-20T10:56:45.565840465+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2026-01-20T10:56:45.565840465+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2026-01-20T10:56:45.565840465+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2026-01-20T10:56:45.566694888+00:00 stderr F + start-rbac-proxy-node ovn-metrics 9105 29105 /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.566694888+00:00 
stderr F + local detail=ovn-metrics 2026-01-20T10:56:45.566694888+00:00 stderr F + local listen_port=9105 2026-01-20T10:56:45.566711358+00:00 stderr F + local upstream_port=29105 2026-01-20T10:56:45.566711358+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2026-01-20T10:56:45.566711358+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.566711358+00:00 stderr F + [[ 5 -ne 5 ]] 2026-01-20T10:56:45.567468828+00:00 stderr F ++ date -Iseconds 2026-01-20T10:56:45.572630027+00:00 stderr F + echo '2026-01-20T10:56:45+00:00 INFO: waiting for ovn-metrics certs to be mounted' 2026-01-20T10:56:45.572689739+00:00 stdout F 2026-01-20T10:56:45+00:00 INFO: waiting for ovn-metrics certs to be mounted 2026-01-20T10:56:45.572700079+00:00 stderr F + wait-for-certs ovn-metrics /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.572741270+00:00 stderr F + local detail=ovn-metrics 2026-01-20T10:56:45.572741270+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2026-01-20T10:56:45.572749321+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.572756481+00:00 stderr F + [[ 3 -ne 3 ]] 2026-01-20T10:56:45.572763661+00:00 stderr F + retries=0 2026-01-20T10:56:45.573497830+00:00 stderr F ++ date +%s 2026-01-20T10:56:45.576039219+00:00 stderr F + TS=1768906605 2026-01-20T10:56:45.576039219+00:00 stderr F + WARN_TS=1768907805 2026-01-20T10:56:45.576039219+00:00 stderr F + HAS_LOGGED_INFO=0 2026-01-20T10:56:45.576058839+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.key ]] 2026-01-20T10:56:45.576178653+00:00 stderr F + [[ ! 
-f /etc/pki/tls/metrics-cert/tls.crt ]] 2026-01-20T10:56:45.576661635+00:00 stderr F ++ date -Iseconds 2026-01-20T10:56:45.578732731+00:00 stdout F 2026-01-20T10:56:45+00:00 INFO: ovn-metrics certs mounted, starting kube-rbac-proxy 2026-01-20T10:56:45.578747522+00:00 stderr F + echo '2026-01-20T10:56:45+00:00 INFO: ovn-metrics certs mounted, starting kube-rbac-proxy' 2026-01-20T10:56:45.578747522+00:00 stderr F + exec /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-address=:9105 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --upstream=http://127.0.0.1:29105/ --tls-private-key-file=/etc/pki/tls/metrics-cert/tls.key --tls-cert-file=/etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.613673392+00:00 stderr F W0120 10:56:45.613533 29689 deprecated.go:66] 2026-01-20T10:56:45.613673392+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:56:45.613673392+00:00 stderr F 2026-01-20T10:56:45.613673392+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2026-01-20T10:56:45.613673392+00:00 stderr F 2026-01-20T10:56:45.613673392+00:00 stderr F =============================================== 2026-01-20T10:56:45.613673392+00:00 stderr F 2026-01-20T10:56:45.614162835+00:00 stderr F I0120 10:56:45.614089 29689 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:56:45.614162835+00:00 stderr F I0120 10:56:45.614137 29689 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:56:45.614672329+00:00 stderr F I0120 10:56:45.614632 29689 kube-rbac-proxy.go:395] Starting TCP socket on :9105 2026-01-20T10:56:45.615011208+00:00 stderr F I0120 10:56:45.614978 29689 kube-rbac-proxy.go:402] Listening securely on :9105 ././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-node/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015133657736033107 5ustar zuulzuul././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-node/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000001111015133657716033101 0ustar zuulzuul2026-01-20T10:56:45.405801288+00:00 stderr F ++ K8S_NODE= 2026-01-20T10:56:45.405801288+00:00 stderr F ++ [[ -n '' ]] 2026-01-20T10:56:45.405801288+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2026-01-20T10:56:45.405801288+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2026-01-20T10:56:45.405801288+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.405801288+00:00 stderr F ++ 
vswitch_dbsock=/var/run/openvswitch/db.sock 2026-01-20T10:56:45.405801288+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2026-01-20T10:56:45.405801288+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2026-01-20T10:56:45.405801288+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2026-01-20T10:56:45.405801288+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2026-01-20T10:56:45.405801288+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2026-01-20T10:56:45.405801288+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2026-01-20T10:56:45.406701273+00:00 stderr F + start-rbac-proxy-node ovn-node-metrics 9103 29103 /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.406701273+00:00 stderr F + local detail=ovn-node-metrics 2026-01-20T10:56:45.406701273+00:00 stderr F + local listen_port=9103 2026-01-20T10:56:45.406718534+00:00 stderr F + local upstream_port=29103 2026-01-20T10:56:45.406718534+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2026-01-20T10:56:45.406718534+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.406718534+00:00 stderr F + [[ 5 -ne 5 ]] 2026-01-20T10:56:45.407290559+00:00 stderr F ++ date -Iseconds 2026-01-20T10:56:45.409722714+00:00 stderr F + echo '2026-01-20T10:56:45+00:00 INFO: waiting for ovn-node-metrics certs to be mounted' 2026-01-20T10:56:45.409777416+00:00 stdout F 2026-01-20T10:56:45+00:00 INFO: waiting for ovn-node-metrics certs to be mounted 2026-01-20T10:56:45.409782736+00:00 stderr F + wait-for-certs ovn-node-metrics /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.409782736+00:00 stderr F + local detail=ovn-node-metrics 2026-01-20T10:56:45.409782736+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2026-01-20T10:56:45.409808817+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.409808817+00:00 stderr F + [[ 3 -ne 3 ]] 
2026-01-20T10:56:45.409808817+00:00 stderr F + retries=0 2026-01-20T10:56:45.410375792+00:00 stderr F ++ date +%s 2026-01-20T10:56:45.413418664+00:00 stderr F + TS=1768906605 2026-01-20T10:56:45.413418664+00:00 stderr F + WARN_TS=1768907805 2026-01-20T10:56:45.413418664+00:00 stderr F + HAS_LOGGED_INFO=0 2026-01-20T10:56:45.413418664+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.key ]] 2026-01-20T10:56:45.413418664+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.crt ]] 2026-01-20T10:56:45.413608859+00:00 stderr F ++ date -Iseconds 2026-01-20T10:56:45.415787887+00:00 stderr F + echo '2026-01-20T10:56:45+00:00 INFO: ovn-node-metrics certs mounted, starting kube-rbac-proxy' 2026-01-20T10:56:45.415806118+00:00 stdout F 2026-01-20T10:56:45+00:00 INFO: ovn-node-metrics certs mounted, starting kube-rbac-proxy 2026-01-20T10:56:45.415838579+00:00 stderr F + exec /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-address=:9103 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --upstream=http://127.0.0.1:29103/ --tls-private-key-file=/etc/pki/tls/metrics-cert/tls.key --tls-cert-file=/etc/pki/tls/metrics-cert/tls.crt 2026-01-20T10:56:45.451930790+00:00 stderr F W0120 10:56:45.451804 29641 deprecated.go:66] 2026-01-20T10:56:45.451930790+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:56:45.451930790+00:00 stderr F 2026-01-20T10:56:45.451930790+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2026-01-20T10:56:45.451930790+00:00 stderr F 2026-01-20T10:56:45.451930790+00:00 stderr F =============================================== 2026-01-20T10:56:45.451930790+00:00 stderr F 2026-01-20T10:56:45.452362701+00:00 stderr F I0120 10:56:45.452340 29641 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:56:45.452398392+00:00 stderr F I0120 10:56:45.452382 29641 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:56:45.452871545+00:00 stderr F I0120 10:56:45.452829 29641 kube-rbac-proxy.go:395] Starting TCP socket on :9103 2026-01-20T10:56:45.453813791+00:00 stderr F I0120 10:56:45.453786 29641 kube-rbac-proxy.go:402] Listening securely on :9103 ././@LongLink0000644000000000000000000000026000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-acl-logging/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015133657736033107 5ustar zuulzuul././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-acl-logging/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000001204615133657716033112 0ustar zuulzuul2026-01-20T10:56:45.254479157+00:00 stderr F ++ K8S_NODE= 2026-01-20T10:56:45.254479157+00:00 stderr F ++ [[ -n '' ]] 2026-01-20T10:56:45.254479157+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2026-01-20T10:56:45.254479157+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2026-01-20T10:56:45.254479157+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.254479157+00:00 stderr F ++ 
vswitch_dbsock=/var/run/openvswitch/db.sock 2026-01-20T10:56:45.254479157+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2026-01-20T10:56:45.254766814+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2026-01-20T10:56:45.254766814+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2026-01-20T10:56:45.254766814+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2026-01-20T10:56:45.254766814+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2026-01-20T10:56:45.254766814+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2026-01-20T10:56:45.255571407+00:00 stderr F + start-audit-log-rotation 2026-01-20T10:56:45.255589977+00:00 stderr F + MAXFILESIZE=50000000 2026-01-20T10:56:45.255589977+00:00 stderr F + MAXLOGFILES=5 2026-01-20T10:56:45.256129652+00:00 stderr F ++ dirname /var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.260282943+00:00 stderr F + LOGDIR=/var/log/ovn 2026-01-20T10:56:45.260282943+00:00 stderr F + local retries=0 2026-01-20T10:56:45.260282943+00:00 stderr F + [[ 30 -gt 0 ]] 2026-01-20T10:56:45.260282943+00:00 stderr F + (( retries += 1 )) 2026-01-20T10:56:45.260660574+00:00 stderr F ++ cat /var/run/ovn/ovn-controller.pid 2026-01-20T10:56:45.265185325+00:00 stderr F + CONTROLLERPID=29449 2026-01-20T10:56:45.265185325+00:00 stderr F + [[ -n 29449 ]] 2026-01-20T10:56:45.265185325+00:00 stderr F + break 2026-01-20T10:56:45.265185325+00:00 stderr F + [[ -z 29449 ]] 2026-01-20T10:56:45.265385430+00:00 stderr F + true 2026-01-20T10:56:45.265457032+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2026-01-20T10:56:45.265457032+00:00 stderr F + tail -F /var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.266408619+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.266641725+00:00 stderr F ++ tr -s '\t' ' ' 2026-01-20T10:56:45.266814519+00:00 stderr F ++ cut '-d ' -f1 2026-01-20T10:56:45.269336897+00:00 stderr F + file_size=0 2026-01-20T10:56:45.269336897+00:00 stderr F + '[' 0 -gt 50000000 ']' 
2026-01-20T10:56:45.270119238+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.270269752+00:00 stderr F ++ wc -l 2026-01-20T10:56:45.275042951+00:00 stderr F + num_files=1 2026-01-20T10:56:45.275042951+00:00 stderr F + '[' 1 -gt 5 ']' 2026-01-20T10:56:45.275042951+00:00 stderr F + sleep 30 2026-01-20T10:57:15.278018674+00:00 stderr F + true 2026-01-20T10:57:15.278018674+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2026-01-20T10:57:15.280025006+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2026-01-20T10:57:15.280653763+00:00 stderr F ++ tr -s '\t' ' ' 2026-01-20T10:57:15.280653763+00:00 stderr F ++ cut '-d ' -f1 2026-01-20T10:57:15.285804060+00:00 stderr F + file_size=0 2026-01-20T10:57:15.285838221+00:00 stderr F + '[' 0 -gt 50000000 ']' 2026-01-20T10:57:15.287292309+00:00 stderr F ++ wc -l 2026-01-20T10:57:15.287460563+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2026-01-20T10:57:15.292157927+00:00 stderr F + num_files=1 2026-01-20T10:57:15.292218769+00:00 stderr F + '[' 1 -gt 5 ']' 2026-01-20T10:57:15.292218769+00:00 stderr F + sleep 30 2026-01-20T10:57:45.295672461+00:00 stderr F + true 2026-01-20T10:57:45.295672461+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2026-01-20T10:57:45.297598832+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2026-01-20T10:57:45.297976462+00:00 stderr F ++ tr -s '\t' ' ' 2026-01-20T10:57:45.298276950+00:00 stderr F ++ cut '-d ' -f1 2026-01-20T10:57:45.302972764+00:00 stderr F + file_size=0 2026-01-20T10:57:45.302972764+00:00 stderr F + '[' 0 -gt 50000000 ']' 2026-01-20T10:57:45.304126654+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2026-01-20T10:57:45.304270918+00:00 stderr F ++ wc -l 2026-01-20T10:57:45.307636467+00:00 stderr F + num_files=1 2026-01-20T10:57:45.307636467+00:00 stderr F + '[' 1 -gt 5 ']' 2026-01-20T10:57:45.307636467+00:00 stderr F + sleep 30 2026-01-20T10:58:15.311519526+00:00 stderr F + true 2026-01-20T10:58:15.311519526+00:00 
stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2026-01-20T10:58:15.312298716+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2026-01-20T10:58:15.312583543+00:00 stderr F ++ cut '-d ' -f1 2026-01-20T10:58:15.312639784+00:00 stderr F ++ tr -s '\t' ' ' 2026-01-20T10:58:15.318827251+00:00 stderr F + file_size=0 2026-01-20T10:58:15.318827251+00:00 stderr F + '[' 0 -gt 50000000 ']' 2026-01-20T10:58:15.321219872+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2026-01-20T10:58:15.321219872+00:00 stderr F ++ wc -l 2026-01-20T10:58:15.324948227+00:00 stderr F + num_files=1 2026-01-20T10:58:15.324948227+00:00 stderr F + '[' 1 -gt 5 ']' 2026-01-20T10:58:15.324948227+00:00 stderr F + sleep 30 ././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovnkube-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015133657737033110 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovnkube-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000736756715133657716033146 0ustar zuulzuul2026-01-20T10:56:52.546472542+00:00 stderr F + . 
/ovnkube-lib/ovnkube-lib.sh 2026-01-20T10:56:52.546835881+00:00 stderr F ++ set -x 2026-01-20T10:56:52.546835881+00:00 stderr F ++ K8S_NODE=crc 2026-01-20T10:56:52.546835881+00:00 stderr F ++ [[ -n crc ]] 2026-01-20T10:56:52.546835881+00:00 stderr F ++ [[ -f /env/crc ]] 2026-01-20T10:56:52.546835881+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2026-01-20T10:56:52.546835881+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2026-01-20T10:56:52.546835881+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2026-01-20T10:56:52.546835881+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2026-01-20T10:56:52.546835881+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2026-01-20T10:56:52.546835881+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2026-01-20T10:56:52.546835881+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2026-01-20T10:56:52.546835881+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2026-01-20T10:56:52.546835881+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2026-01-20T10:56:52.546835881+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2026-01-20T10:56:52.548077105+00:00 stderr F + start-ovnkube-node 4 29103 29105 2026-01-20T10:56:52.548148327+00:00 stderr F + local log_level=4 2026-01-20T10:56:52.548148327+00:00 stderr F + local metrics_port=29103 2026-01-20T10:56:52.548160308+00:00 stderr F + local ovn_metrics_port=29105 2026-01-20T10:56:52.548160308+00:00 stderr F + [[ 3 -ne 3 ]] 2026-01-20T10:56:52.548171448+00:00 stderr F + cni-bin-copy 2026-01-20T10:56:52.548223649+00:00 stderr F + . 
/host/etc/os-release 2026-01-20T10:56:52.548601509+00:00 stderr F ++ NAME='Red Hat Enterprise Linux CoreOS' 2026-01-20T10:56:52.548601509+00:00 stderr F ++ ID=rhcos 2026-01-20T10:56:52.548601509+00:00 stderr F ++ ID_LIKE='rhel fedora' 2026-01-20T10:56:52.548601509+00:00 stderr F ++ VERSION=416.94.202406172220-0 2026-01-20T10:56:52.548614850+00:00 stderr F ++ VERSION_ID=4.16 2026-01-20T10:56:52.548614850+00:00 stderr F ++ VARIANT=CoreOS 2026-01-20T10:56:52.548623660+00:00 stderr F ++ VARIANT_ID=coreos 2026-01-20T10:56:52.548632180+00:00 stderr F ++ PLATFORM_ID=platform:el9 2026-01-20T10:56:52.548640530+00:00 stderr F ++ PRETTY_NAME='Red Hat Enterprise Linux CoreOS 416.94.202406172220-0' 2026-01-20T10:56:52.548649791+00:00 stderr F ++ ANSI_COLOR='0;31' 2026-01-20T10:56:52.548649791+00:00 stderr F ++ CPE_NAME=cpe:/o:redhat:enterprise_linux:9::baseos::coreos 2026-01-20T10:56:52.548658931+00:00 stderr F ++ HOME_URL=https://www.redhat.com/ 2026-01-20T10:56:52.548667641+00:00 stderr F ++ DOCUMENTATION_URL=https://docs.okd.io/latest/welcome/index.html 2026-01-20T10:56:52.548676331+00:00 stderr F ++ BUG_REPORT_URL=https://access.redhat.com/labs/rhir/ 2026-01-20T10:56:52.548684881+00:00 stderr F ++ REDHAT_BUGZILLA_PRODUCT='OpenShift Container Platform' 2026-01-20T10:56:52.548693062+00:00 stderr F ++ REDHAT_BUGZILLA_PRODUCT_VERSION=4.16 2026-01-20T10:56:52.548701632+00:00 stderr F ++ REDHAT_SUPPORT_PRODUCT='OpenShift Container Platform' 2026-01-20T10:56:52.548701632+00:00 stderr F ++ REDHAT_SUPPORT_PRODUCT_VERSION=4.16 2026-01-20T10:56:52.548710532+00:00 stderr F ++ OPENSHIFT_VERSION=4.16 2026-01-20T10:56:52.548719492+00:00 stderr F ++ RHEL_VERSION=9.4 2026-01-20T10:56:52.548727793+00:00 stderr F ++ OSTREE_VERSION=416.94.202406172220-0 2026-01-20T10:56:52.548736193+00:00 stderr F + rhelmajor= 2026-01-20T10:56:52.548736193+00:00 stderr F + case "${ID}" in 2026-01-20T10:56:52.550087920+00:00 stderr F ++ echo cpe:/o:redhat:enterprise_linux:9::baseos::coreos 
2026-01-20T10:56:52.551221940+00:00 stderr F ++ cut -f 5 -d : 2026-01-20T10:56:52.556009929+00:00 stderr F + RHEL_VERSION=9 2026-01-20T10:56:52.556878132+00:00 stderr F ++ echo 9 2026-01-20T10:56:52.557213041+00:00 stderr F ++ sed -E 's/([0-9]+)\.{1}[0-9]+(\.[0-9]+)?/\1/' 2026-01-20T10:56:52.560984853+00:00 stderr F + rhelmajor=9 2026-01-20T10:56:52.560984853+00:00 stderr F + sourcedir=/usr/libexec/cni/ 2026-01-20T10:56:52.561003233+00:00 stderr F + case "${rhelmajor}" in 2026-01-20T10:56:52.561003233+00:00 stderr F + sourcedir=/usr/libexec/cni/rhel9 2026-01-20T10:56:52.561024234+00:00 stderr F + cp -f /usr/libexec/cni/rhel9/ovn-k8s-cni-overlay /cni-bin-dir/ 2026-01-20T10:56:52.638711994+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2026-01-20T10:56:52.640994215+00:00 stderr F + echo 'I0120 10:56:52.640536553 - disable conntrack on geneve port' 2026-01-20T10:56:52.641019236+00:00 stdout F I0120 10:56:52.640536553 - disable conntrack on geneve port 2026-01-20T10:56:52.641027916+00:00 stderr F + iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK 2026-01-20T10:56:52.646058712+00:00 stderr F + iptables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK 2026-01-20T10:56:52.649076473+00:00 stderr F + ip6tables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK 2026-01-20T10:56:52.652782942+00:00 stderr F + ip6tables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK 2026-01-20T10:56:52.657260843+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2026-01-20T10:56:52.659121693+00:00 stdout F I0120 10:56:52.658729792 - starting ovnkube-node 2026-01-20T10:56:52.659133884+00:00 stderr F + echo 'I0120 10:56:52.658729792 - starting ovnkube-node' 2026-01-20T10:56:52.659133884+00:00 stderr F + '[' local == shared ']' 2026-01-20T10:56:52.659144414+00:00 stderr F + '[' local == local ']' 2026-01-20T10:56:52.659151614+00:00 stderr F + gateway_mode_flags='--gateway-mode local --gateway-interface br-ex' 2026-01-20T10:56:52.659158764+00:00 stderr F + export_network_flows_flags= 
2026-01-20T10:56:52.659185345+00:00 stderr F + [[ -n '' ]] 2026-01-20T10:56:52.659185345+00:00 stderr F + [[ -n '' ]] 2026-01-20T10:56:52.659193755+00:00 stderr F + [[ -n '' ]] 2026-01-20T10:56:52.659200865+00:00 stderr F + [[ -n '' ]] 2026-01-20T10:56:52.659200865+00:00 stderr F + [[ -n '' ]] 2026-01-20T10:56:52.659208366+00:00 stderr F + [[ -n '' ]] 2026-01-20T10:56:52.659208366+00:00 stderr F + gw_interface_flag= 2026-01-20T10:56:52.659215886+00:00 stderr F + '[' -d /sys/class/net/br-ex1 ']' 2026-01-20T10:56:52.659262817+00:00 stderr F + node_mgmt_port_netdev_flags= 2026-01-20T10:56:52.659262817+00:00 stderr F + [[ -n '' ]] 2026-01-20T10:56:52.659262817+00:00 stderr F + [[ -n '' ]] 2026-01-20T10:56:52.659272727+00:00 stderr F + multi_network_enabled_flag= 2026-01-20T10:56:52.659272727+00:00 stderr F + [[ true == \t\r\u\e ]] 2026-01-20T10:56:52.659280138+00:00 stderr F + multi_network_enabled_flag=--enable-multi-network 2026-01-20T10:56:52.659280138+00:00 stderr F + multi_network_policy_enabled_flag= 2026-01-20T10:56:52.659287568+00:00 stderr F + [[ false == \t\r\u\e ]] 2026-01-20T10:56:52.659294488+00:00 stderr F + admin_network_policy_enabled_flag= 2026-01-20T10:56:52.659301708+00:00 stderr F + [[ true == \t\r\u\e ]] 2026-01-20T10:56:52.659301708+00:00 stderr F + admin_network_policy_enabled_flag=--enable-admin-network-policy 2026-01-20T10:56:52.659309668+00:00 stderr F + dns_name_resolver_enabled_flag= 2026-01-20T10:56:52.659316819+00:00 stderr F + [[ false == \t\r\u\e ]] 2026-01-20T10:56:52.659316819+00:00 stderr F + ip_forwarding_flag= 2026-01-20T10:56:52.659324099+00:00 stderr F + '[' Global == Global ']' 2026-01-20T10:56:52.659330999+00:00 stderr F + sysctl -w net.ipv4.ip_forward=1 2026-01-20T10:56:52.660944362+00:00 stdout F net.ipv4.ip_forward = 1 2026-01-20T10:56:52.661196048+00:00 stderr F + sysctl -w net.ipv6.conf.all.forwarding=1 2026-01-20T10:56:52.670518630+00:00 stdout F net.ipv6.conf.all.forwarding = 1 2026-01-20T10:56:52.671178477+00:00 stderr F 
+ NETWORK_NODE_IDENTITY_ENABLE= 2026-01-20T10:56:52.671178477+00:00 stderr F + [[ true == \t\r\u\e ]] 2026-01-20T10:56:52.671178477+00:00 stderr F + NETWORK_NODE_IDENTITY_ENABLE=' 2026-01-20T10:56:52.671178477+00:00 stderr F --bootstrap-kubeconfig=/var/lib/kubelet/kubeconfig 2026-01-20T10:56:52.671178477+00:00 stderr F --cert-dir=/etc/ovn/ovnkube-node-certs 2026-01-20T10:56:52.671178477+00:00 stderr F --cert-duration=24h 2026-01-20T10:56:52.671178477+00:00 stderr F ' 2026-01-20T10:56:52.671178477+00:00 stderr F + ovn_v4_join_subnet_opt= 2026-01-20T10:56:52.671206368+00:00 stderr F + [[ '' != '' ]] 2026-01-20T10:56:52.671206368+00:00 stderr F + ovn_v6_join_subnet_opt= 2026-01-20T10:56:52.671206368+00:00 stderr F + [[ '' != '' ]] 2026-01-20T10:56:52.671300521+00:00 stderr F + exec /usr/bin/ovnkube --init-ovnkube-controller crc --init-node crc --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 --inactivity-probe=180000 --gateway-mode local --gateway-interface br-ex --metrics-bind-address 127.0.0.1:29103 --ovn-metrics-bind-address 127.0.0.1:29105 --metrics-enable-pprof --metrics-enable-config-duration --export-ovs-metrics --disable-snat-multiple-gws --enable-multi-network --enable-admin-network-policy --enable-multicast --zone crc --enable-interconnect --acl-logging-rate-limit 20 --bootstrap-kubeconfig=/var/lib/kubelet/kubeconfig --cert-dir=/etc/ovn/ovnkube-node-certs --cert-duration=24h 2026-01-20T10:56:52.696811607+00:00 stderr F I0120 10:56:52.696704 30089 config.go:2178] Parsed config file /run/ovnkube-config/ovnkube.conf 2026-01-20T10:56:52.696862598+00:00 stderr F I0120 10:56:52.696772 30089 config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 
LFlowCacheLimitKb:1048576 RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:local Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true 
DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2026-01-20T10:56:52.698859012+00:00 stderr F I0120 10:56:52.698824 30089 certificate_store.go:130] Loading cert/key pair from "/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem". 2026-01-20T10:56:52.699163160+00:00 stderr F I0120 10:56:52.699131 30089 certificate_store.go:130] Loading cert/key pair from "/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem". 
2026-01-20T10:56:52.699267103+00:00 stderr F I0120 10:56:52.699243 30089 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled 2026-01-20T10:56:52.699274713+00:00 stderr F I0120 10:56:52.699268 30089 kube.go:358] Waiting for certificate 2026-01-20T10:56:52.699322355+00:00 stderr F I0120 10:56:52.699296 30089 kube.go:365] Certificate found 2026-01-20T10:56:52.699403527+00:00 stderr F I0120 10:56:52.699331 30089 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Certificate expiration is 2026-01-21 10:43:32 +0000 UTC, rotation deadline is 2026-01-21 08:16:09.24299082 +0000 UTC 2026-01-20T10:56:52.699403527+00:00 stderr F I0120 10:56:52.699393 30089 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Waiting 21h19m16.543601404s for next certificate rotation 2026-01-20T10:56:52.699918050+00:00 stderr F I0120 10:56:52.699888 30089 cert_rotation.go:137] Starting client certificate rotation controller 2026-01-20T10:56:52.700947738+00:00 stderr F I0120 10:56:52.700871 30089 metrics.go:532] Starting metrics server at address "127.0.0.1:29103" 2026-01-20T10:56:52.704732850+00:00 stderr F I0120 10:56:52.704572 30089 libovsdb.go:62] Client for OVN_Northbound using log verbosity 4 with lumberjack &lumberjack.Logger{Filename:"/var/log/ovnkube/libovsdb.log", MaxSize:100, MaxAge:0, MaxBackups:5, LocalTime:false, Compress:true, size:0, file:(*os.File)(nil), mu:sync.Mutex{state:0, sema:0x0}, millCh:(chan bool)(nil), startMill:sync.Once{done:0x0, m:sync.Mutex{state:0, sema:0x0}}} 2026-01-20T10:56:52.704814982+00:00 stderr F I0120 10:56:52.704675 30089 node_network_controller_manager.go:98] Starting the node network controller manager, Mode: full 2026-01-20T10:56:52.704915545+00:00 stderr F I0120 10:56:52.704874 30089 factory.go:405] Starting watch factory 2026-01-20T10:56:52.705048059+00:00 stderr F I0120 10:56:52.705010 30089 reflector.go:289] Starting reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705048059+00:00 stderr F I0120 10:56:52.705021 30089 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705048059+00:00 stderr F I0120 10:56:52.705033 30089 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705048059+00:00 stderr F I0120 10:56:52.705041 30089 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705076959+00:00 stderr F I0120 10:56:52.705047 30089 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705076959+00:00 stderr F I0120 10:56:52.705045 30089 reflector.go:289] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705090580+00:00 stderr F I0120 10:56:52.705074 30089 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705120401+00:00 stderr F I0120 10:56:52.705021 30089 reflector.go:289] Starting reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705155351+00:00 stderr F I0120 10:56:52.705142 30089 reflector.go:325] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705206663+00:00 stderr F I0120 10:56:52.705032 30089 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705206663+00:00 stderr F I0120 10:56:52.705199 30089 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.705262084+00:00 stderr F I0120 10:56:52.705032 30089 reflector.go:325] Listing and watching *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.713025023+00:00 stderr F I0120 
10:56:52.712963 30089 metrics.go:532] Starting metrics server at address "127.0.0.1:29105" 2026-01-20T10:56:52.718366337+00:00 stderr F I0120 10:56:52.715977 30089 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.720205036+00:00 stderr F I0120 10:56:52.720164 30089 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.725650683+00:00 stderr F I0120 10:56:52.724952 30089 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.725650683+00:00 stderr F I0120 10:56:52.725088 30089 ovn_db.go:374] Found OVN DB Pod running on this node. Registering OVN DB Metrics 2026-01-20T10:56:52.726195118+00:00 stderr F I0120 10:56:52.726056 30089 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.726771673+00:00 stderr F I0120 10:56:52.726656 30089 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.732983470+00:00 stderr F I0120 10:56:52.732149 30089 ovn_db.go:329] /var/run/openvswitch/ovnnb_db.sock getting info failed: stat /var/run/openvswitch/ovnnb_db.sock: no such file or directory 2026-01-20T10:56:52.732983470+00:00 stderr F I0120 10:56:52.732199 30089 ovn_db.go:326] ovnnb_db.sock found at /var/run/ovn/ 2026-01-20T10:56:52.753477761+00:00 stderr F I0120 10:56:52.753410 30089 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:56:52.770329765+00:00 stderr F I0120 10:56:52.770229 30089 libovsdb.go:62] Client for OVN_Southbound using log verbosity 4 with lumberjack &lumberjack.Logger{Filename:"/var/log/ovnkube/libovsdb.log", MaxSize:100, MaxAge:0, MaxBackups:5, LocalTime:false, Compress:true, size:0, file:(*os.File)(nil), mu:sync.Mutex{state:0, sema:0x0}, millCh:(chan bool)(nil), 
startMill:sync.Once{done:0x0, m:sync.Mutex{state:0, sema:0x0}}} 2026-01-20T10:56:52.791253358+00:00 stderr F I0120 10:56:52.791168 30089 ovn_db.go:421] Found db is standalone, don't register db_cluster metrics 2026-01-20T10:56:52.793928380+00:00 stderr F I0120 10:56:52.793883 30089 network_controller_manager.go:300] Starting the ovnkube controller 2026-01-20T10:56:52.793928380+00:00 stderr F I0120 10:56:52.793911 30089 network_controller_manager.go:305] Waiting up to 5m0s for NBDB zone to match: crc 2026-01-20T10:56:52.794011263+00:00 stderr F I0120 10:56:52.793985 30089 network_controller_manager.go:325] NBDB zone sync took: 60.522µs 2026-01-20T10:56:52.794011263+00:00 stderr F I0120 10:56:52.794003 30089 factory.go:405] Starting watch factory 2026-01-20T10:56:52.794224828+00:00 stderr F I0120 10:56:52.794193 30089 reflector.go:289] Starting reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:56:52.794224828+00:00 stderr F I0120 10:56:52.794214 30089 reflector.go:325] Listing and watching *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:56:52.794336711+00:00 stderr F I0120 10:56:52.794318 30089 reflector.go:289] Starting reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:56:52.794374102+00:00 stderr F I0120 10:56:52.794364 30089 reflector.go:325] Listing and watching *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:56:52.796694885+00:00 stderr F I0120 10:56:52.796655 30089 reflector.go:351] Caches populated for *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:56:52.797015153+00:00 stderr F I0120 
10:56:52.796993 30089 reflector.go:351] Caches populated for *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:56:52.805764839+00:00 stderr F I0120 10:56:52.805696 30089 reflector.go:289] Starting reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.805764839+00:00 stderr F I0120 10:56:52.805721 30089 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.808052110+00:00 stderr F I0120 10:56:52.808009 30089 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.895292747+00:00 stderr F I0120 10:56:52.895236 30089 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.895292747+00:00 stderr F I0120 10:56:52.895264 30089 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.897073496+00:00 stderr F I0120 10:56:52.897045 30089 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.906206701+00:00 stderr F I0120 10:56:52.906177 30089 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.906206701+00:00 stderr F 
I0120 10:56:52.906191 30089 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.908137342+00:00 stderr F I0120 10:56:52.908098 30089 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.995936175+00:00 stderr F I0120 10:56:52.995866 30089 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.995936175+00:00 stderr F I0120 10:56:52.995890 30089 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:52.997390414+00:00 stderr F I0120 10:56:52.997352 30089 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:53.006563371+00:00 stderr F I0120 10:56:53.006501 30089 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:53.006563371+00:00 stderr F I0120 10:56:53.006520 30089 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:53.008175234+00:00 stderr F I0120 10:56:53.008142 30089 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:56:53.096682166+00:00 stderr F I0120 10:56:53.096589 30089 network_controller_manager.go:336] Waiting up to 5m0s for a node to have "crc" zone 2026-01-20T10:56:53.096682166+00:00 stderr F I0120 10:56:53.096647 30089 network_controller_manager.go:359] Waiting for node in zone sync took: 17.931µs 2026-01-20T10:56:53.106269583+00:00 stderr F I0120 10:56:53.106204 30089 network_controller_manager.go:220] SCTP support detected in OVN 2026-01-20T10:56:53.106883640+00:00 stderr F I0120 10:56:53.106825 30089 default_node_network_controller.go:133] Enable node proxy healthz server on 0.0.0.0:10256 2026-01-20T10:56:53.107082085+00:00 stderr F I0120 10:56:53.107041 30089 default_node_network_controller.go:684] Starting the default node network controller 2026-01-20T10:56:53.107126186+00:00 stderr F I0120 10:56:53.107106 30089 ovs.go:159] Exec(10): /usr/bin/ovs-vsctl --timeout=15 --no-heading --data=bare --format=csv --columns name list interface 2026-01-20T10:56:53.109589783+00:00 stderr F I0120 10:56:53.109562 30089 ovnkube_controller.go:1230] Config duration recorder: updated measurement rate to approx 1 in every 10 requests 2026-01-20T10:56:53.109794218+00:00 stderr F I0120 10:56:53.109774 30089 services_controller.go:60] Creating event broadcaster 2026-01-20T10:56:53.109999684+00:00 stderr F I0120 10:56:53.109984 30089 default_network_controller.go:328] Starting the default network controller 2026-01-20T10:56:53.110138738+00:00 stderr F I0120 10:56:53.110125 30089 address_set_sync.go:394] SyncAddressSets found 0 stale address sets, 0 of them were ignored 2026-01-20T10:56:53.112851851+00:00 stderr F I0120 10:56:53.112819 30089 acl_sync.go:167] Updating Tier of existing ACLs... 
2026-01-20T10:56:53.112919013+00:00 stderr F I0120 10:56:53.112902 30089 acl_sync.go:192] Updating tier's of all ACLs in cluster took 13.41µs 2026-01-20T10:56:53.113158249+00:00 stderr F I0120 10:56:53.113138 30089 port_group_sync.go:309] SyncPortGroups found 0 stale port groups 2026-01-20T10:56:53.113483118+00:00 stderr F I0120 10:56:53.113432 30089 ovs.go:162] Exec(10): stdout: "71f6858fe8b34b0\n5f52bddb9a859c7\n7d2c4481dd898c5\n6f76866c5589c45\novn-k8s-mp0\n1c7e13b0eafeb38\n26aaf844c5160ed\nfb18bf62522e5c1\ncb38236eaf8b650\nb8204918f90a9ca\n20a4a76da066ba0\n29fa79ece7bc4aa\nae5d379adacec87\n81bc64aec279850\n845fef911253842\ncf684e9d3913688\n157518a2fbcdf41\n24c018c62b0bc3f\ncac7e1a6cb9d4b9\n033a0e312725abb\nbe3a1d8e2700ba2\n4132ae12a9d721a\n3bebb50a1166a73\npatch-br-int-to-br-ex_crc\nf600c407dd70d66\ne9bfdf7ad0e854e\nbr-int\n351b90c7b77c23b\n07d15cd3389df14\nens3\nb29d12b36fd4c37\nd9aeaa1aa1d02c7\nbr-ex\n1272c455fd97422\n1b67ef467f01c9d\npatch-br-ex_crc-to-br-int\nf5a1d658ee8d94f\n3a8739569015671\n6e13b5b12625f2c\n63060f2e5e81735\n75910c2f6810535\n8ea598627749117\nc37c0f7f9c3caea\n20ab1b715d7b6c4\nb9fd178cb9bef15\na6cb3677490438d\ne95e48e211a52d9\n6c311a84b5597af\ndbfc8b06838dce4\n14d5763f6e24322\n35d926e5c859402\ndf5798e69cf9702\n569e9604055896a\nec3e84f9949af89\n6b56e04f51f76d3\nd558d59974b2a9b\n" 2026-01-20T10:56:53.113483118+00:00 stderr F I0120 10:56:53.113460 30089 ovs.go:163] Exec(10): stderr: "" 2026-01-20T10:56:53.113567210+00:00 stderr F I0120 10:56:53.113535 30089 ovs.go:159] Exec(11): /usr/bin/ovs-ofctl dump-flows br-ex 2026-01-20T10:56:53.114740621+00:00 stderr F I0120 10:56:53.114681 30089 ovs.go:162] Exec(9): stdout: "" 2026-01-20T10:56:53.114740621+00:00 stderr F I0120 10:56:53.114701 30089 ovs.go:163] Exec(9): stderr: "" 2026-01-20T10:56:53.114740621+00:00 stderr F I0120 10:56:53.114710 30089 node_network_controller_manager.go:251] CheckForStaleOVSInternalPorts took 7.79718ms 2026-01-20T10:56:53.114740621+00:00 stderr F I0120 10:56:53.114726 
30089 ovs.go:159] Exec(12): /usr/bin/ovs-vsctl --timeout=15 --columns=name,external_ids --data=bare --no-headings --format=csv find Interface external_ids:sandbox!="" external_ids:vf-netdev-name!="" 2026-01-20T10:56:53.115260725+00:00 stderr F I0120 10:56:53.115232 30089 default_network_controller.go:364] Existing number of nodes: 1 2026-01-20T10:56:53.115305136+00:00 stderr F I0120 10:56:53.115293 30089 ovs.go:159] Exec(13): /usr/bin/ovn-nbctl --timeout=15 --columns=_uuid list Load_Balancer_Group 2026-01-20T10:56:53.119712875+00:00 stderr F I0120 10:56:53.119575 30089 ovs.go:162] Exec(11): stdout: "NXST_FLOW reply (xid=0x4):\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=500,ip,in_port=2,nw_src=38.102.83.220,nw_dst=169.254.169.2 actions=ct(commit,table=4,zone=64001,nat(dst=38.102.83.220))\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=500,ip,in_port=2,nw_src=38.102.83.220,nw_dst=172.17.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=500,ip,in_port=2,nw_src=38.102.83.220,nw_dst=172.18.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=500,ip,in_port=2,nw_src=38.102.83.220,nw_dst=172.19.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=500,ip,in_port=2,nw_src=38.102.83.220,nw_dst=192.168.122.10 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=1811, n_bytes=189425, idle_age=3, priority=500,ip,in_port=2,nw_src=38.102.83.220,nw_dst=192.168.126.11 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=1520, n_bytes=4156359, idle_age=3, priority=500,ip,in_port=LOCAL,nw_dst=169.254.169.1 actions=ct(table=5,zone=64002,nat)\n 
cookie=0xdeff105, duration=498.584s, table=0, n_packets=1872, n_bytes=198736, idle_age=1, priority=500,ip,in_port=LOCAL,nw_dst=10.217.4.0/23 actions=ct(commit,table=2,zone=64001,nat(src=169.254.169.2))\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=105,ip,in_port=2,nw_dst=10.217.4.0/23 actions=drop\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=1564, n_bytes=4172692, idle_age=1, priority=500,ip,in_port=2,nw_src=10.217.4.0/23,nw_dst=169.254.169.2 actions=ct(table=3,zone=64001,nat)\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=205,udp,in_port=1,dl_dst=fa:16:3e:0d:e7:11,tp_dst=6081 actions=LOCAL\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=200,udp,in_port=1,tp_dst=6081 actions=NORMAL\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=200,udp,in_port=LOCAL,tp_dst=6081 actions=output:1\n cookie=0xdeff105, duration=10.418s, table=0, n_packets=0, n_bytes=0, idle_age=10, priority=110,ip,in_port=1,nw_frag=yes actions=ct(table=0,zone=64004)\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=109,ip,in_port=2,dl_src=fa:16:3e:0d:e7:11,nw_src=10.217.0.0/23 actions=ct(commit,zone=64000,exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=105,pkt_mark=0x3f0,ip,in_port=2,dl_src=fa:16:3e:0d:e7:11 actions=ct(commit,zone=64000,nat(src=38.102.83.220),exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=0, n_bytes=0, idle_age=498, priority=104,ip,in_port=2,nw_src=10.217.0.0/22 actions=drop\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=7301, n_bytes=1110156, idle_age=13, priority=100,ip,in_port=2,dl_src=fa:16:3e:0d:e7:11 actions=ct(commit,zone=64000,exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n 
cookie=0xdeff105, duration=498.584s, table=0, n_packets=45951, n_bytes=4369849, idle_age=0, priority=100,ip,in_port=LOCAL actions=ct(commit,zone=64000,exec(load:0x2->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=158251, n_bytes=2150467978, idle_age=0, priority=50,ip,in_port=1 actions=ct(table=1,zone=64000,nat)\n cookie=0xdeff105, duration=499.393s, table=0, n_packets=33, n_bytes=1764, idle_age=7, priority=10,in_port=2,dl_src=fa:16:3e:0d:e7:11 actions=NORMAL\n cookie=0xdeff105, duration=498.584s, table=0, n_packets=38, n_bytes=2114, idle_age=7, priority=10,in_port=1,dl_dst=fa:16:3e:0d:e7:11 actions=output:2,LOCAL\n cookie=0xdeff105, duration=499.393s, table=0, n_packets=0, n_bytes=0, idle_age=499, priority=9,in_port=2 actions=drop\n cookie=0x0, duration=590.502s, table=0, n_packets=59673, n_bytes=6445129, idle_age=0, priority=0 actions=NORMAL\n cookie=0xdeff105, duration=498.584s, table=1, n_packets=6282, n_bytes=4990278, idle_age=13, priority=100,ct_state=+est+trk,ct_mark=0x1,ip actions=output:2\n cookie=0xdeff105, duration=498.584s, table=1, n_packets=151811, n_bytes=2145450031, idle_age=0, priority=100,ct_state=+est+trk,ct_mark=0x2,ip actions=LOCAL\n cookie=0xdeff105, duration=498.584s, table=1, n_packets=0, n_bytes=0, idle_age=498, priority=100,ct_state=+rel+trk,ct_mark=0x1,ip actions=output:2\n cookie=0xdeff105, duration=498.584s, table=1, n_packets=0, n_bytes=0, idle_age=498, priority=100,ct_state=+rel+trk,ct_mark=0x2,ip actions=LOCAL\n cookie=0xdeff105, duration=498.584s, table=1, n_packets=0, n_bytes=0, idle_age=498, priority=15,ip,nw_dst=10.217.0.0/22 actions=output:2\n cookie=0xdeff105, duration=498.584s, table=1, n_packets=0, n_bytes=0, idle_age=498, priority=13,udp,in_port=1,tp_dst=3784 actions=output:2,LOCAL\n cookie=0xdeff105, duration=498.584s, table=1, n_packets=158, n_bytes=27669, idle_age=4, priority=10,dl_dst=fa:16:3e:0d:e7:11 actions=LOCAL\n cookie=0xdeff105, duration=498.584s, table=1, n_packets=0, 
n_bytes=0, idle_age=498, priority=0 actions=NORMAL\n cookie=0xdeff105, duration=498.584s, table=2, n_packets=3392, n_bytes=4355095, idle_age=1, actions=mod_dl_dst:fa:16:3e:0d:e7:11,output:2\n cookie=0xdeff105, duration=498.584s, table=3, n_packets=3375, n_bytes=4362117, idle_age=1, actions=move:NXM_OF_ETH_DST[]->NXM_OF_ETH_SRC[],mod_dl_dst:fa:16:3e:0d:e7:11,LOCAL\n cookie=0xdeff105, duration=498.584s, table=4, n_packets=1811, n_bytes=189425, idle_age=3, ip actions=ct(commit,table=3,zone=64002,nat(src=169.254.169.1))\n cookie=0xdeff105, duration=498.584s, table=5, n_packets=1520, n_bytes=4156359, idle_age=3, ip actions=ct(commit,table=2,zone=64001,nat)\n" 2026-01-20T10:56:53.119712875+00:00 stderr F I0120 10:56:53.119693 30089 ovs.go:163] Exec(11): stderr: "" 2026-01-20T10:56:53.120294771+00:00 stderr F I0120 10:56:53.120265 30089 ovs.go:162] Exec(13): stdout: "_uuid : 59dab9f1-0389-4181-919a-0167ad60c25f\n\n_uuid : 02fd96d5-5e32-4855-b83b-d257ecda4e0c\n\n_uuid : dc1aa63b-0376-4ccb-99e2-10794dfff422\n" 2026-01-20T10:56:53.120331722+00:00 stderr F I0120 10:56:53.120321 30089 ovs.go:163] Exec(13): stderr: "" 2026-01-20T10:56:53.120425124+00:00 stderr F I0120 10:56:53.120393 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {59dab9f1-0389-4181-919a-0167ad60c25f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.120462455+00:00 stderr F I0120 10:56:53.120443 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {59dab9f1-0389-4181-919a-0167ad60c25f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.120890547+00:00 stderr F I0120 10:56:53.120828 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer_Group 
Row:map[name:clusterSwitchLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dc1aa63b-0376-4ccb-99e2-10794dfff422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.120890547+00:00 stderr F I0120 10:56:53.120871 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterSwitchLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dc1aa63b-0376-4ccb-99e2-10794dfff422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121287328+00:00 stderr F I0120 10:56:53.121258 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterRouterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {02fd96d5-5e32-4855-b83b-d257ecda4e0c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121301919+00:00 stderr F I0120 10:56:53.121280 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterRouterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {02fd96d5-5e32-4855-b83b-d257ecda4e0c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121575 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {406a913f-23d5-4a8b-a150-6a090ee355a5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121624 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{5d8bcfad-8c42-42c9-84cc-e00150241d43}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121659 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e55fbb04-3e0a-43a9-a5cb-7bdd67b39a16}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121690 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {efb4b620-b538-4498-8e1f-b2aa252ca816}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121717 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d755c81-e306-481c-a62f-0d10e2f5d48f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121748 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {437a223f-c2a4-49be-a4fb-d226930b652c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121776 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {75e12f21-bf66-4829-a27e-d4ed58399124}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121803 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6bfa24b-ac8f-435a-8587-008597dbe58d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121862523+00:00 stderr F I0120 10:56:53.121834 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed88a175-3a45-4a73-9645-9da77151d550}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.121927095+00:00 stderr F I0120 10:56:53.121876 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Copp Row:map[meters:{GoMap:map[arp:arp-rate-limiter arp-resolve:arp-resolve-rate-limiter bfd:bfd-rate-limiter event-elb:event-elb-rate-limiter icmp4-error:icmp4-error-rate-limiter icmp6-error:icmp6-error-rate-limiter reject:reject-rate-limiter svc-monitor:svc-monitor-rate-limiter tcp-reset:tcp-reset-rate-limiter]} name:ovnkube-default] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8af88be7-33a2-4869-a73a-80bcee31d385}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.122004137+00:00 stderr F I0120 10:56:53.121917 30089 transact.go:42] Configuring OVN: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {406a913f-23d5-4a8b-a150-6a090ee355a5}] Until: Durable: Comment: Lock: UUID: 
UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5d8bcfad-8c42-42c9-84cc-e00150241d43}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e55fbb04-3e0a-43a9-a5cb-7bdd67b39a16}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {efb4b620-b538-4498-8e1f-b2aa252ca816}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d755c81-e306-481c-a62f-0d10e2f5d48f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {437a223f-c2a4-49be-a4fb-d226930b652c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {75e12f21-bf66-4829-a27e-d4ed58399124}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6bfa24b-ac8f-435a-8587-008597dbe58d}] Until: Durable: Comment: Lock: UUID: UUIDName:} 
{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed88a175-3a45-4a73-9645-9da77151d550}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Copp Row:map[meters:{GoMap:map[arp:arp-rate-limiter arp-resolve:arp-resolve-rate-limiter bfd:bfd-rate-limiter event-elb:event-elb-rate-limiter icmp4-error:icmp4-error-rate-limiter icmp6-error:icmp6-error-rate-limiter reject:reject-rate-limiter svc-monitor:svc-monitor-rate-limiter tcp-reset:tcp-reset-rate-limiter]} name:ovnkube-default] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8af88be7-33a2-4869-a73a-80bcee31d385}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.122492650+00:00 stderr F I0120 10:56:53.122469 30089 ovs.go:159] Exec(14): /usr/bin/ovn-sbctl --timeout=15 --no-leader-only get SB_Global . options:name 2026-01-20T10:56:53.122742507+00:00 stderr F I0120 10:56:53.122692 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[k8s-cluster-router:yes]} options:{GoMap:map[mcast_relay:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.122742507+00:00 stderr F I0120 10:56:53.122725 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[k8s-cluster-router:yes]} options:{GoMap:map[mcast_relay:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.123282471+00:00 stderr F I0120 10:56:53.123222 30089 model_client.go:381] 
Update operations generated as: [{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:MulticastCluster:DefaultDeny:Egress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:DefaultDeny]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1011 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f22c876a-c982-47fe-aa27-179517e8761d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.123387074+00:00 stderr F I0120 10:56:53.123349 30089 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:MulticastCluster:DefaultDeny:Ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:DefaultDeny]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1011 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {684df200-e39c-4451-bf48-c08eac5c3524}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.123438065+00:00 stderr F I0120 10:56:53.123405 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:f22c876a-c982-47fe-aa27-179517e8761d} {GoUUID:684df200-e39c-4451-bf48-c08eac5c3524}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.123528228+00:00 
stderr F I0120 10:56:53.123441 30089 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:MulticastCluster:DefaultDeny:Egress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:DefaultDeny]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1011 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f22c876a-c982-47fe-aa27-179517e8761d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:MulticastCluster:DefaultDeny:Ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:DefaultDeny]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1011 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {684df200-e39c-4451-bf48-c08eac5c3524}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:f22c876a-c982-47fe-aa27-179517e8761d} {GoUUID:684df200-e39c-4451-bf48-c08eac5c3524}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.124150905+00:00 stderr F I0120 10:56:53.124101 30089 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[direction:Egress 
k8s.ovn.org/id:default-network-controller:MulticastCluster:AllowInterNode:Egress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:AllowInterNode]} log:false match:inport == @a4743249366342378346 && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1012 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8e398b3-e322-4290-baa5-01fafd781e3f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.124270448+00:00 stderr F I0120 10:56:53.124231 30089 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:MulticastCluster:AllowInterNode:Ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:AllowInterNode]} log:false match:outport == @a4743249366342378346 && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1012 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1df6f93-2c80-4ef0-a5d0-488e4a7fd80a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.124320960+00:00 stderr F I0120 10:56:53.124291 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b8e398b3-e322-4290-baa5-01fafd781e3f} {GoUUID:f1df6f93-2c80-4ef0-a5d0-488e4a7fd80a}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.124362521+00:00 stderr F I0120 10:56:53.124316 30089 transact.go:42] 
Configuring OVN: [{Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:MulticastCluster:AllowInterNode:Egress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:AllowInterNode]} log:false match:inport == @a4743249366342378346 && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1012 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8e398b3-e322-4290-baa5-01fafd781e3f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:MulticastCluster:AllowInterNode:Ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:AllowInterNode]} log:false match:outport == @a4743249366342378346 && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1012 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1df6f93-2c80-4ef0-a5d0-488e4a7fd80a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b8e398b3-e322-4290-baa5-01fafd781e3f} {GoUUID:f1df6f93-2c80-4ef0-a5d0-488e4a7fd80a}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.124838473+00:00 stderr F I0120 10:56:53.124814 30089 ovs.go:162] Exec(12): stdout: "" 2026-01-20T10:56:53.124838473+00:00 stderr F I0120 10:56:53.124825 30089 ovs.go:163] Exec(12): stderr: "" 
2026-01-20T10:56:53.124904665+00:00 stderr F I0120 10:56:53.124857 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[name:join] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.124904665+00:00 stderr F I0120 10:56:53.124883 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[name:join] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.125215543+00:00 stderr F I0120 10:56:53.125175 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:01 networks:{GoSet:[100.64.0.1/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e6a93f86-9b7d-430d-b5fd-372b961737ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.125257205+00:00 stderr F I0120 10:56:53.125232 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e6a93f86-9b7d-430d-b5fd-372b961737ff}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.125284545+00:00 stderr F I0120 10:56:53.125254 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:01 networks:{GoSet:[100.64.0.1/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e6a93f86-9b7d-430d-b5fd-372b961737ff}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e6a93f86-9b7d-430d-b5fd-372b961737ff}]}}] Timeout: 
Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.125826320+00:00 stderr F I0120 10:56:53.125761 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-ovn_cluster_router]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {45b73aea-7414-4894-b242-a14caec8c198}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.125858151+00:00 stderr F I0120 10:56:53.125832 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:45b73aea-7414-4894-b242-a14caec8c198}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.125912342+00:00 stderr F I0120 10:56:53.125851 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-ovn_cluster_router]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {45b73aea-7414-4894-b242-a14caec8c198}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:45b73aea-7414-4894-b242-a14caec8c198}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.126205770+00:00 stderr F I0120 10:56:53.126183 30089 default_network_controller.go:419] Cleaning External Gateway ECMP routes 2026-01-20T10:56:53.126402225+00:00 stderr F I0120 10:56:53.126381 30089 repair.go:33] 
Syncing exgw routes took 183.195µs 2026-01-20T10:56:53.126413465+00:00 stderr F I0120 10:56:53.126407 30089 default_network_controller.go:430] Starting all the Watchers... 2026-01-20T10:56:53.126950570+00:00 stderr F I0120 10:56:53.126926 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-service-ca 2026-01-20T10:56:53.126993872+00:00 stderr F I0120 10:56:53.126963 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-host-network 2026-01-20T10:56:53.126993872+00:00 stderr F I0120 10:56:53.126978 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-apiserver-operator 2026-01-20T10:56:53.126993872+00:00 stderr F I0120 10:56:53.126982 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-image-registry 2026-01-20T10:56:53.127007822+00:00 stderr F I0120 10:56:53.127002 30089 namespace.go:100] [openshift-image-registry] adding namespace 2026-01-20T10:56:53.127007822+00:00 stderr F I0120 10:56:53.127004 30089 namespace.go:100] [openshift-apiserver-operator] adding namespace 2026-01-20T10:56:53.127018672+00:00 stderr F I0120 10:56:53.127011 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-scheduler 2026-01-20T10:56:53.127026422+00:00 stderr F I0120 10:56:53.127018 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-cloud-platform-infra 2026-01-20T10:56:53.127034593+00:00 stderr F I0120 10:56:53.127025 30089 namespace.go:100] [openshift-kube-scheduler] adding namespace 2026-01-20T10:56:53.127034593+00:00 stderr F I0120 10:56:53.127030 30089 namespace.go:100] [openshift-cloud-platform-infra] adding namespace 2026-01-20T10:56:53.127049693+00:00 stderr F I0120 10:56:53.127036 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-oauth-apiserver 2026-01-20T10:56:53.127049693+00:00 stderr F I0120 10:56:53.127040 30089 obj_retry.go:502] Add event received for *v1.Namespace openstack 2026-01-20T10:56:53.127058453+00:00 stderr F I0120 
10:56:53.127052 30089 namespace.go:100] [openstack] adding namespace 2026-01-20T10:56:53.127083374+00:00 stderr F I0120 10:56:53.127002 30089 namespace.go:100] [openshift-host-network] adding namespace 2026-01-20T10:56:53.127095744+00:00 stderr F I0120 10:56:53.126937 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-machine-config-operator 2026-01-20T10:56:53.127095744+00:00 stderr F I0120 10:56:53.127082 30089 namespace.go:100] [openshift-oauth-apiserver] adding namespace 2026-01-20T10:56:53.127095744+00:00 stderr F I0120 10:56:53.126945 30089 obj_retry.go:502] Add event received for *v1.Namespace kube-system 2026-01-20T10:56:53.127124755+00:00 stderr F I0120 10:56:53.127105 30089 namespace.go:100] [kube-system] adding namespace 2026-01-20T10:56:53.127124755+00:00 stderr F I0120 10:56:53.126955 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-service-ca-operator 2026-01-20T10:56:53.127162096+00:00 stderr F I0120 10:56:53.127119 30089 namespace.go:100] [openshift-machine-config-operator] adding namespace 2026-01-20T10:56:53.127162096+00:00 stderr F I0120 10:56:53.127121 30089 namespace.go:100] [openshift-service-ca-operator] adding namespace 2026-01-20T10:56:53.127162096+00:00 stderr F I0120 10:56:53.126970 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-operators 2026-01-20T10:56:53.127162096+00:00 stderr F I0120 10:56:53.127129 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.37 10.217.0.22]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-image-registry:v4 k8s.ovn.org/name:openshift-image-registry k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.127187227+00:00 stderr F I0120 10:56:53.127168 30089 address_set.go:304] New(3fbe9c32-c447-4e1c-9b34-6fac7dd25149/default-network-controller:Namespace:openshift-image-registry:v4/a65811733811199347) with [10.217.0.37 10.217.0.22] 2026-01-20T10:56:53.127212487+00:00 stderr F I0120 10:56:53.127183 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.37 10.217.0.22]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-image-registry:v4 k8s.ovn.org/name:openshift-image-registry k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.127255229+00:00 stderr F I0120 10:56:53.127241 30089 namespace.go:100] [openshift-service-ca] adding namespace 2026-01-20T10:56:53.127348821+00:00 stderr F I0120 10:56:53.127172 30089 namespace.go:100] [openshift-operators] adding namespace 2026-01-20T10:56:53.127348821+00:00 stderr F I0120 10:56:53.126931 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-scheduler-operator 2026-01-20T10:56:53.127348821+00:00 stderr F I0120 10:56:53.127340 30089 namespace.go:100] [openshift-kube-scheduler-operator] adding namespace 2026-01-20T10:56:53.127348821+00:00 stderr F I0120 10:56:53.126953 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-authentication 2026-01-20T10:56:53.127370082+00:00 stderr F I0120 10:56:53.127355 30089 namespace.go:100] [openshift-authentication] adding namespace 2026-01-20T10:56:53.127370082+00:00 stderr F I0120 10:56:53.126963 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kni-infra 2026-01-20T10:56:53.127377142+00:00 stderr F I0120 10:56:53.127368 30089 namespace.go:100] [openshift-kni-infra] adding namespace 
2026-01-20T10:56:53.127540496+00:00 stderr F I0120 10:56:53.127518 30089 namespace.go:104] [openshift-image-registry] adding namespace took 506.793µs
2026-01-20T10:56:53.127569177+00:00 stderr F I0120 10:56:53.127537 30089 obj_retry.go:541] Creating *v1.Namespace openshift-image-registry took: 540.295µs
2026-01-20T10:56:53.127600958+00:00 stderr F I0120 10:56:53.127582 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-node
2026-01-20T10:56:53.127600958+00:00 stderr F I0120 10:56:53.127598 30089 namespace.go:100] [openshift-node] adding namespace
2026-01-20T10:56:53.127700890+00:00 stderr F I0120 10:56:53.127659 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-node:v4 k8s.ovn.org/name:openshift-node k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11c9190d-7255-419a-b417-2c71ee754070}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.127700890+00:00 stderr F I0120 10:56:53.127695 30089 address_set.go:304] New(11c9190d-7255-419a-b417-2c71ee754070/default-network-controller:Namespace:openshift-node:v4/a10320713570038180226) with []
2026-01-20T10:56:53.127731801+00:00 stderr F I0120 10:56:53.127702 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-node:v4 k8s.ovn.org/name:openshift-node k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11c9190d-7255-419a-b417-2c71ee754070}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.128007799+00:00 stderr F I0120 10:56:53.127993 30089 namespace.go:104] [openshift-node] adding namespace took 389.57µs
2026-01-20T10:56:53.128007799+00:00 stderr F I0120 10:56:53.128004 30089 obj_retry.go:541] Creating *v1.Namespace openshift-node took: 408.19µs
2026-01-20T10:56:53.128017379+00:00 stderr F I0120 10:56:53.128013 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-dns-operator
2026-01-20T10:56:53.128036199+00:00 stderr F I0120 10:56:53.128018 30089 namespace.go:100] [openshift-dns-operator] adding namespace
2026-01-20T10:56:53.128156982+00:00 stderr F I0120 10:56:53.128113 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.18]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns-operator:v4 k8s.ovn.org/name:openshift-dns-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {84d4b59b-775b-4d4e-a869-e623863cffd4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.128169833+00:00 stderr F I0120 10:56:53.128155 30089 address_set.go:304] New(84d4b59b-775b-4d4e-a869-e623863cffd4/default-network-controller:Namespace:openshift-dns-operator:v4/a12081638711291249560) with [10.217.0.18]
2026-01-20T10:56:53.128189623+00:00 stderr F I0120 10:56:53.128163 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.18]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns-operator:v4 k8s.ovn.org/name:openshift-dns-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {84d4b59b-775b-4d4e-a869-e623863cffd4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.128563153+00:00 stderr F I0120 10:56:53.128541 30089 namespace.go:104] [openshift-dns-operator] adding namespace took 517.094µs
2026-01-20T10:56:53.128563153+00:00 stderr F I0120 10:56:53.128552 30089 obj_retry.go:541] Creating *v1.Namespace openshift-dns-operator took: 533.144µs
2026-01-20T10:56:53.128563153+00:00 stderr F I0120 10:56:53.128560 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-cloud-network-config-controller
2026-01-20T10:56:53.128599804+00:00 stderr F I0120 10:56:53.128565 30089 namespace.go:100] [openshift-cloud-network-config-controller] adding namespace
2026-01-20T10:56:53.128692177+00:00 stderr F I0120 10:56:53.128544 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.6]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-apiserver-operator:v4 k8s.ovn.org/name:openshift-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {09051ac2-1c77-41ab-bd2c-be4e2e86af73}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.128723238+00:00 stderr F I0120 10:56:53.128712 30089 address_set.go:304] New(09051ac2-1c77-41ab-bd2c-be4e2e86af73/default-network-controller:Namespace:openshift-apiserver-operator:v4/a17733727332347776420) with [10.217.0.6]
2026-01-20T10:56:53.128759719+00:00 stderr F I0120 10:56:53.128737 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.6]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-apiserver-operator:v4 k8s.ovn.org/name:openshift-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {09051ac2-1c77-41ab-bd2c-be4e2e86af73}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.129183500+00:00 stderr F I0120 10:56:53.129168 30089 namespace.go:104] [openshift-apiserver-operator] adding namespace took 2.157407ms
2026-01-20T10:56:53.129226181+00:00 stderr F I0120 10:56:53.129215 30089 obj_retry.go:541] Creating *v1.Namespace openshift-apiserver-operator took: 2.22543ms
2026-01-20T10:56:53.129259942+00:00 stderr F I0120 10:56:53.129248 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-network-node-identity
2026-01-20T10:56:53.129290053+00:00 stderr F I0120 10:56:53.129279 30089 namespace.go:100] [openshift-network-node-identity] adding namespace
2026-01-20T10:56:53.129312293+00:00 stderr F I0120 10:56:53.129171 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-scheduler:v4 k8s.ovn.org/name:openshift-kube-scheduler k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.129342824+00:00 stderr F I0120 10:56:53.129332 30089 address_set.go:304] New(87985ed0-37ee-4fd2-af4b-1c4bf264af83/default-network-controller:Namespace:openshift-kube-scheduler:v4/a15634036902741400949) with []
2026-01-20T10:56:53.129392795+00:00 stderr F I0120 10:56:53.129359 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-scheduler:v4 k8s.ovn.org/name:openshift-kube-scheduler k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.129772476+00:00 stderr F I0120 10:56:53.129741 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cloud-platform-infra:v4 k8s.ovn.org/name:openshift-cloud-platform-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {bbc24026-ee3c-4098-b3f2-e27016b964c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.129772476+00:00 stderr F I0120 10:56:53.129765 30089 address_set.go:304] New(bbc24026-ee3c-4098-b3f2-e27016b964c4/default-network-controller:Namespace:openshift-cloud-platform-infra:v4/a755693067844839690) with []
2026-01-20T10:56:53.129785017+00:00 stderr F I0120 10:56:53.129770 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cloud-platform-infra:v4 k8s.ovn.org/name:openshift-cloud-platform-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {bbc24026-ee3c-4098-b3f2-e27016b964c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.129809837+00:00 stderr F I0120 10:56:53.129795 30089 namespace.go:104] [openshift-kube-scheduler] adding namespace took 2.727784ms
2026-01-20T10:56:53.129834608+00:00 stderr F I0120 10:56:53.129825 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kube-scheduler took: 2.804336ms
2026-01-20T10:56:53.129859129+00:00 stderr F I0120 10:56:53.129850 30089 obj_retry.go:502] Add event received for *v1.Namespace default
2026-01-20T10:56:53.129881829+00:00 stderr F I0120 10:56:53.129873 30089 namespace.go:100] [default] adding namespace
2026-01-20T10:56:53.130100395+00:00 stderr F I0120 10:56:53.130087 30089 namespace.go:104] [openshift-cloud-platform-infra] adding namespace took 3.051622ms
2026-01-20T10:56:53.130130296+00:00 stderr F I0120 10:56:53.130120 30089 obj_retry.go:541] Creating *v1.Namespace openshift-cloud-platform-infra took: 3.095324ms
2026-01-20T10:56:53.130165757+00:00 stderr F I0120 10:56:53.130156 30089 obj_retry.go:502] Add event received for *v1.Namespace cert-manager
2026-01-20T10:56:53.130196408+00:00 stderr F I0120 10:56:53.130187 30089 namespace.go:100] [cert-manager] adding namespace
2026-01-20T10:56:53.130227159+00:00 stderr F I0120 10:56:53.130145 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.2 100.64.0.2]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-host-network:v4 k8s.ovn.org/name:openshift-host-network k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {bf04507a-34b0-4813-98e8-8d9d1837a096}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.130254509+00:00 stderr F I0120 10:56:53.130244 30089 address_set.go:304] New(bf04507a-34b0-4813-98e8-8d9d1837a096/default-network-controller:Namespace:openshift-host-network:v4/a6910206611978007605) with [10.217.0.2 100.64.0.2]
2026-01-20T10:56:53.130301390+00:00 stderr F I0120 10:56:53.130274 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.2 100.64.0.2]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-host-network:v4 k8s.ovn.org/name:openshift-host-network k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {bf04507a-34b0-4813-98e8-8d9d1837a096}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.130627299+00:00 stderr F I0120 10:56:53.130611 30089 namespace.go:104] [openshift-host-network] adding namespace took 3.525685ms
2026-01-20T10:56:53.130659870+00:00 stderr F I0120 10:56:53.130650 30089 obj_retry.go:541] Creating *v1.Namespace openshift-host-network took: 3.664459ms
2026-01-20T10:56:53.130688861+00:00 stderr F I0120 10:56:53.130609 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.39]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-oauth-apiserver:v4 k8s.ovn.org/name:openshift-oauth-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4dac2b20-e105-4916-96b0-5690b4c79e49}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.130698521+00:00 stderr F I0120 10:56:53.130683 30089 address_set.go:304] New(4dac2b20-e105-4916-96b0-5690b4c79e49/default-network-controller:Namespace:openshift-oauth-apiserver:v4/a18232515746603522929) with [10.217.0.39]
2026-01-20T10:56:53.130722042+00:00 stderr F I0120 10:56:53.130697 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.39]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-oauth-apiserver:v4 k8s.ovn.org/name:openshift-oauth-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4dac2b20-e105-4916-96b0-5690b4c79e49}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.130745782+00:00 stderr F I0120 10:56:53.130735 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-cluster-samples-operator
2026-01-20T10:56:53.130769093+00:00 stderr F I0120 10:56:53.130760 30089 namespace.go:100] [openshift-cluster-samples-operator] adding namespace
2026-01-20T10:56:53.131024180+00:00 stderr F I0120 10:56:53.130987 30089 ovs.go:162] Exec(14): stdout: "crc\n"
2026-01-20T10:56:53.131024180+00:00 stderr F I0120 10:56:53.131012 30089 ovs.go:163] Exec(14): stderr: ""
2026-01-20T10:56:53.131111342+00:00 stderr F I0120 10:56:53.131088 30089 config.go:1590] Exec: /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-remote="unix:/var/run/ovn/ovnsb_db.sock"
2026-01-20T10:56:53.131111342+00:00 stderr F I0120 10:56:53.131099 30089 namespace.go:104] [openshift-oauth-apiserver] adding namespace took 4.000647ms
2026-01-20T10:56:53.131127642+00:00 stderr F I0120 10:56:53.131114 30089 obj_retry.go:541] Creating *v1.Namespace openshift-oauth-apiserver took: 4.067319ms
2026-01-20T10:56:53.131135943+00:00 stderr F I0120 10:56:53.131104 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openstack:v4 k8s.ovn.org/name:openstack k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {294cabdd-9d3c-4b30-86d3-18fb8b3d9119}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.131135943+00:00 stderr F I0120 10:56:53.131125 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-etcd
2026-01-20T10:56:53.131161413+00:00 stderr F I0120 10:56:53.131135 30089 address_set.go:304] New(294cabdd-9d3c-4b30-86d3-18fb8b3d9119/default-network-controller:Namespace:openstack:v4/a15556675108942259965) with []
2026-01-20T10:56:53.131161413+00:00 stderr F I0120 10:56:53.131139 30089 namespace.go:100] [openshift-etcd] adding namespace
2026-01-20T10:56:53.131161413+00:00 stderr F I0120 10:56:53.131144 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openstack:v4 k8s.ovn.org/name:openstack k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {294cabdd-9d3c-4b30-86d3-18fb8b3d9119}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.131504282+00:00 stderr F I0120 10:56:53.131472 30089 namespace.go:104] [openstack] adding namespace took 4.411738ms
2026-01-20T10:56:53.131504282+00:00 stderr F I0120 10:56:53.131487 30089 obj_retry.go:541] Creating *v1.Namespace openstack took: 4.438949ms
2026-01-20T10:56:53.131504282+00:00 stderr F I0120 10:56:53.131497 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-cluster-version
2026-01-20T10:56:53.131520093+00:00 stderr F I0120 10:56:53.131504 30089 namespace.go:100] [openshift-cluster-version] adding namespace
2026-01-20T10:56:53.131631886+00:00 stderr F I0120 10:56:53.131601 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-system:v4 k8s.ovn.org/name:kube-system k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03aae892-0ac3-4b9f-a966-da2d596a1ccc}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.131662377+00:00 stderr F I0120 10:56:53.131651 30089 address_set.go:304] New(03aae892-0ac3-4b9f-a966-da2d596a1ccc/default-network-controller:Namespace:kube-system:v4/a8746611765617041202) with []
2026-01-20T10:56:53.131695877+00:00 stderr F I0120 10:56:53.131676 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-system:v4 k8s.ovn.org/name:kube-system k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03aae892-0ac3-4b9f-a966-da2d596a1ccc}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.132055317+00:00 stderr F I0120 10:56:53.132035 30089 namespace.go:104] [kube-system] adding namespace took 4.921241ms
2026-01-20T10:56:53.132139539+00:00 stderr F I0120 10:56:53.132042 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.21 10.217.0.63]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-machine-config-operator:v4 k8s.ovn.org/name:openshift-machine-config-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9fea9088-07c8-413d-8aa0-608e3c74298e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.132139539+00:00 stderr F I0120 10:56:53.132133 30089 address_set.go:304] New(9fea9088-07c8-413d-8aa0-608e3c74298e/default-network-controller:Namespace:openshift-machine-config-operator:v4/a1512537150246498877) with [10.217.0.21 10.217.0.63]
2026-01-20T10:56:53.132171920+00:00 stderr F I0120 10:56:53.132142 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.21 10.217.0.63]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-machine-config-operator:v4 k8s.ovn.org/name:openshift-machine-config-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9fea9088-07c8-413d-8aa0-608e3c74298e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.132273533+00:00 stderr F I0120 10:56:53.132118 30089 obj_retry.go:541] Creating *v1.Namespace kube-system took: 5.017115ms
2026-01-20T10:56:53.132273533+00:00 stderr F I0120 10:56:53.132266 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-storage-version-migrator-operator
2026-01-20T10:56:53.132286283+00:00 stderr F I0120 10:56:53.132274 30089 namespace.go:100] [openshift-kube-storage-version-migrator-operator] adding namespace
2026-01-20T10:56:53.132542121+00:00 stderr F I0120 10:56:53.132525 30089 namespace.go:104] [openshift-machine-config-operator] adding namespace took 5.376504ms
2026-01-20T10:56:53.132571802+00:00 stderr F I0120 10:56:53.132561 30089 obj_retry.go:541] Creating *v1.Namespace openshift-machine-config-operator took: 5.468557ms
2026-01-20T10:56:53.132606823+00:00 stderr F I0120 10:56:53.132565 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.10]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-service-ca-operator:v4 k8s.ovn.org/name:openshift-service-ca-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9ac8de8-d679-44f7-bc42-650c48e57dec}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.132615143+00:00 stderr F I0120 10:56:53.132598 30089 address_set.go:304] New(d9ac8de8-d679-44f7-bc42-650c48e57dec/default-network-controller:Namespace:openshift-service-ca-operator:v4/a9531058592630863333) with [10.217.0.10]
2026-01-20T10:56:53.132639774+00:00 stderr F I0120 10:56:53.132610 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.10]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-service-ca-operator:v4 k8s.ovn.org/name:openshift-service-ca-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9ac8de8-d679-44f7-bc42-650c48e57dec}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.132659154+00:00 stderr F I0120 10:56:53.132588 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-cluster-storage-operator
2026-01-20T10:56:53.132682935+00:00 stderr F I0120 10:56:53.132674 30089 namespace.go:100] [openshift-cluster-storage-operator] adding namespace
2026-01-20T10:56:53.133032884+00:00 stderr F I0120 10:56:53.132999 30089 namespace.go:104] [openshift-service-ca-operator] adding namespace took 5.847207ms
2026-01-20T10:56:53.133032884+00:00 stderr F I0120 10:56:53.133013 30089 obj_retry.go:541] Creating *v1.Namespace openshift-service-ca-operator took: 5.894428ms
2026-01-20T10:56:53.133032884+00:00 stderr F I0120 10:56:53.133023 30089 obj_retry.go:502] Add event received for *v1.Namespace cert-manager-operator
2026-01-20T10:56:53.133032884+00:00 stderr F I0120 10:56:53.133008 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.40]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-service-ca:v4 k8s.ovn.org/name:openshift-service-ca k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7c559651-6ac2-45b6-acf9-3f03f7401a2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.133054625+00:00 stderr F I0120 10:56:53.133031 30089 namespace.go:100] [cert-manager-operator] adding namespace
2026-01-20T10:56:53.133054625+00:00 stderr F I0120 10:56:53.133036 30089 address_set.go:304] New(7c559651-6ac2-45b6-acf9-3f03f7401a2d/default-network-controller:Namespace:openshift-service-ca:v4/a15543462790031426324) with [10.217.0.40]
2026-01-20T10:56:53.133090506+00:00 stderr F I0120 10:56:53.133044 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.40]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-service-ca:v4 k8s.ovn.org/name:openshift-service-ca k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7c559651-6ac2-45b6-acf9-3f03f7401a2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.133430375+00:00 stderr F I0120 10:56:53.133407 30089 namespace.go:104] [openshift-service-ca] adding namespace took 6.130765ms
2026-01-20T10:56:53.133430375+00:00 stderr F I0120 10:56:53.133405 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-operators:v4 k8s.ovn.org/name:openshift-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0d92907b-b2f1-4209-bfd7-581d0f2dbd4d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.133430375+00:00 stderr F I0120 10:56:53.133422 30089 obj_retry.go:541] Creating *v1.Namespace openshift-service-ca took: 6.444573ms
2026-01-20T10:56:53.133459405+00:00 stderr F I0120 10:56:53.133427 30089 address_set.go:304] New(0d92907b-b2f1-4209-bfd7-581d0f2dbd4d/default-network-controller:Namespace:openshift-operators:v4/a17780485792851514981) with []
2026-01-20T10:56:53.133459405+00:00 stderr F I0120 10:56:53.133432 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-nutanix-infra
2026-01-20T10:56:53.133459405+00:00 stderr F I0120 10:56:53.133441 30089 namespace.go:100] [openshift-nutanix-infra] adding namespace
2026-01-20T10:56:53.133459405+00:00 stderr F I0120 10:56:53.133433 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-operators:v4 k8s.ovn.org/name:openshift-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0d92907b-b2f1-4209-bfd7-581d0f2dbd4d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.133777724+00:00 stderr F I0120 10:56:53.133746 30089 namespace.go:104] [openshift-operators] adding namespace took 6.446973ms
2026-01-20T10:56:53.133777724+00:00 stderr F I0120 10:56:53.133758 30089 obj_retry.go:541] Creating *v1.Namespace openshift-operators took: 6.604797ms
2026-01-20T10:56:53.133777724+00:00 stderr F I0120 10:56:53.133765 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-user-workload-monitoring
2026-01-20T10:56:53.133777724+00:00 stderr F I0120 10:56:53.133772 30089 namespace.go:100] [openshift-user-workload-monitoring] adding namespace
2026-01-20T10:56:53.133835025+00:00 stderr F I0120 10:56:53.133806 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.12]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-scheduler-operator:v4 k8s.ovn.org/name:openshift-kube-scheduler-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a2bbab4a-fc2a-420d-89e8-71905aa18031}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.133877606+00:00 stderr F I0120 10:56:53.133867 30089 address_set.go:304] New(a2bbab4a-fc2a-420d-89e8-71905aa18031/default-network-controller:Namespace:openshift-kube-scheduler-operator:v4/a8446891589965341694) with [10.217.0.12]
2026-01-20T10:56:53.133922698+00:00 stderr F I0120 10:56:53.133893 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.12]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-scheduler-operator:v4 k8s.ovn.org/name:openshift-kube-scheduler-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a2bbab4a-fc2a-420d-89e8-71905aa18031}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.134295647+00:00 stderr F I0120 10:56:53.134277 30089 namespace.go:104] [openshift-kube-scheduler-operator] adding namespace took 6.929436ms
2026-01-20T10:56:53.134339399+00:00 stderr F I0120 10:56:53.134326 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kube-scheduler-operator took: 7.006748ms
2026-01-20T10:56:53.134374700+00:00 stderr F I0120 10:56:53.134365 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-config
2026-01-20T10:56:53.134398970+00:00 stderr F I0120 10:56:53.134390 30089 namespace.go:100] [openshift-config] adding namespace
2026-01-20T10:56:53.134420001+00:00 stderr F I0120 10:56:53.134307 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.72]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-authentication:v4 k8s.ovn.org/name:openshift-authentication k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {131500e4-dd5f-4136-9e7a-0724c2d30917}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.134469582+00:00 stderr F I0120 10:56:53.134450 30089 address_set.go:304] New(131500e4-dd5f-4136-9e7a-0724c2d30917/default-network-controller:Namespace:openshift-authentication:v4/a5821095395710037482) with [10.217.0.72]
2026-01-20T10:56:53.134516193+00:00 stderr F I0120 10:56:53.134490 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.72]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-authentication:v4 k8s.ovn.org/name:openshift-authentication k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {131500e4-dd5f-4136-9e7a-0724c2d30917}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.134937665+00:00 stderr F I0120 10:56:53.134916 30089 namespace.go:104] [openshift-authentication] adding namespace took 7.546173ms
2026-01-20T10:56:53.134986476+00:00 stderr F I0120 10:56:53.134972 30089 obj_retry.go:541] Creating *v1.Namespace openshift-authentication took: 7.621554ms
2026-01-20T10:56:53.135017857+00:00 stderr F I0120 10:56:53.135007 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-ingress
2026-01-20T10:56:53.135043057+00:00 stderr F I0120 10:56:53.135033 30089 namespace.go:100] [openshift-ingress] adding namespace
2026-01-20T10:56:53.135079048+00:00 stderr F I0120 10:56:53.134917 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kni-infra:v4 k8s.ovn.org/name:openshift-kni-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {982e3095-ca6c-4b53-837f-82c794c14593}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.135116579+00:00 stderr F I0120 10:56:53.135106 30089 address_set.go:304] New(982e3095-ca6c-4b53-837f-82c794c14593/default-network-controller:Namespace:openshift-kni-infra:v4/a12641306622762432907) with []
2026-01-20T10:56:53.135158280+00:00 stderr F I0120 10:56:53.135131 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kni-infra:v4 k8s.ovn.org/name:openshift-kni-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {982e3095-ca6c-4b53-837f-82c794c14593}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.135533681+00:00 stderr F I0120 10:56:53.135497 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cloud-network-config-controller:v4 k8s.ovn.org/name:openshift-cloud-network-config-controller k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {37961f7a-b7f0-46c5-ad41-987d6afce443}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.135533681+00:00 stderr F I0120 10:56:53.135528 30089 address_set.go:304] New(37961f7a-b7f0-46c5-ad41-987d6afce443/default-network-controller:Namespace:openshift-cloud-network-config-controller:v4/a6429873968864053860) with []
2026-01-20T10:56:53.135594443+00:00 stderr F I0120 10:56:53.135512 30089 namespace.go:104] [openshift-kni-infra] adding namespace took 8.138459ms
2026-01-20T10:56:53.135626174+00:00 stderr F I0120 10:56:53.135615 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kni-infra took: 8.251452ms
2026-01-20T10:56:53.135651194+00:00 stderr F I0120 10:56:53.135534 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cloud-network-config-controller:v4 k8s.ovn.org/name:openshift-cloud-network-config-controller k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {37961f7a-b7f0-46c5-ad41-987d6afce443}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.135673385+00:00 stderr F I0120 10:56:53.135663 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-infra
2026-01-20T10:56:53.135696566+00:00 stderr F I0120 10:56:53.135688 30089 namespace.go:100] [openshift-infra] adding namespace
2026-01-20T10:56:53.136251950+00:00 stderr F I0120 10:56:53.136207 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-node-identity:v4 k8s.ovn.org/name:openshift-network-node-identity k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {abc91a62-9c79-443c-8e3b-15e8d82538db}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.136251950+00:00 stderr F I0120 10:56:53.136234 30089 address_set.go:304] New(abc91a62-9c79-443c-8e3b-15e8d82538db/default-network-controller:Namespace:openshift-network-node-identity:v4/a6647208685787594228) with []
2026-01-20T10:56:53.136251950+00:00 stderr F I0120 10:56:53.136234 30089 namespace.go:104] [openshift-cloud-network-config-controller] adding namespace took 7.662477ms
2026-01-20T10:56:53.136275001+00:00 stderr F I0120 10:56:53.136248 30089 obj_retry.go:541] Creating *v1.Namespace openshift-cloud-network-config-controller took: 7.679717ms
2026-01-20T10:56:53.136275001+00:00 stderr F I0120 10:56:53.136240 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-node-identity:v4 k8s.ovn.org/name:openshift-network-node-identity k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {abc91a62-9c79-443c-8e3b-15e8d82538db}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.136275001+00:00 stderr F I0120 10:56:53.136259 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-config-managed
2026-01-20T10:56:53.136275001+00:00 stderr F I0120 10:56:53.136267 30089 namespace.go:100] [openshift-config-managed] adding namespace
2026-01-20T10:56:53.136670912+00:00 stderr F I0120 10:56:53.136625 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:default:v4 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be1084dd-a9ab-4193-9794-c4853eea1791}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.136681932+00:00 stderr F I0120 10:56:53.136666 30089 address_set.go:304] New(be1084dd-a9ab-4193-9794-c4853eea1791/default-network-controller:Namespace:default:v4/a4322231855293774466) with []
2026-01-20T10:56:53.136691812+00:00 stderr F I0120 10:56:53.136674 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:default:v4 k8s.ovn.org/name:default
k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be1084dd-a9ab-4193-9794-c4853eea1791}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.136753674+00:00 stderr F I0120 10:56:53.136651 30089 namespace.go:104] [openshift-network-node-identity] adding namespace took 7.345208ms 2026-01-20T10:56:53.136753674+00:00 stderr F I0120 10:56:53.136739 30089 obj_retry.go:541] Creating *v1.Namespace openshift-network-node-identity took: 7.460531ms 2026-01-20T10:56:53.136753674+00:00 stderr F I0120 10:56:53.136749 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-storage-version-migrator 2026-01-20T10:56:53.136765594+00:00 stderr F I0120 10:56:53.136755 30089 namespace.go:100] [openshift-kube-storage-version-migrator] adding namespace 2026-01-20T10:56:53.137202016+00:00 stderr F I0120 10:56:53.137158 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.42 10.217.0.41 10.217.0.44]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:cert-manager:v4 k8s.ovn.org/name:cert-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ee2bf14a-990b-4aad-a8b9-a7680a56903b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.137202016+00:00 stderr F I0120 10:56:53.137185 30089 address_set.go:304] New(ee2bf14a-990b-4aad-a8b9-a7680a56903b/default-network-controller:Namespace:cert-manager:v4/a11436057671369489943) with [10.217.0.42 10.217.0.41 10.217.0.44] 2026-01-20T10:56:53.137244777+00:00 stderr F I0120 10:56:53.137193 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.42 10.217.0.41 10.217.0.44]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:cert-manager:v4 k8s.ovn.org/name:cert-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ee2bf14a-990b-4aad-a8b9-a7680a56903b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.137276288+00:00 stderr F I0120 10:56:53.137179 30089 namespace.go:104] [default] adding namespace took 7.281905ms 2026-01-20T10:56:53.137310859+00:00 stderr F I0120 10:56:53.137298 30089 obj_retry.go:541] Creating *v1.Namespace default took: 7.420979ms 2026-01-20T10:56:53.137345159+00:00 stderr F I0120 10:56:53.137333 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-network-operator 2026-01-20T10:56:53.137386711+00:00 stderr F I0120 10:56:53.137377 30089 namespace.go:100] [openshift-network-operator] adding namespace 2026-01-20T10:56:53.137526034+00:00 stderr F I0120 10:56:53.137512 30089 namespace.go:104] [cert-manager] adding namespace took 7.291856ms 2026-01-20T10:56:53.137554905+00:00 stderr F I0120 10:56:53.137518 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.46]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-samples-operator:v4 k8s.ovn.org/name:openshift-cluster-samples-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {de4bd34a-a221-4ac1-8c2d-5ac98cadd20e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.137573096+00:00 stderr F I0120 10:56:53.137552 30089 address_set.go:304] New(de4bd34a-a221-4ac1-8c2d-5ac98cadd20e/default-network-controller:Namespace:openshift-cluster-samples-operator:v4/a3083655245828550199) with [10.217.0.46] 2026-01-20T10:56:53.137581646+00:00 stderr F I0120 10:56:53.137562 
30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.46]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-samples-operator:v4 k8s.ovn.org/name:openshift-cluster-samples-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {de4bd34a-a221-4ac1-8c2d-5ac98cadd20e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.137601426+00:00 stderr F I0120 10:56:53.137543 30089 obj_retry.go:541] Creating *v1.Namespace cert-manager took: 7.354938ms 2026-01-20T10:56:53.137626797+00:00 stderr F I0120 10:56:53.137617 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-cluster-machine-approver 2026-01-20T10:56:53.137658788+00:00 stderr F I0120 10:56:53.137649 30089 namespace.go:100] [openshift-cluster-machine-approver] adding namespace 2026-01-20T10:56:53.137679628+00:00 stderr F I0120 10:56:53.137628 30089 ovs.go:159] Exec(15): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . 
external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=192.168.126.11 external_ids:ovn-remote-probe-interval=180000 external_ids:ovn-openflow-probe-interval=180 other_config:bundle-idle-timeout=180 external_ids:hostname="crc" external_ids:ovn-is-interconn=true external_ids:ovn-monitor-all=true external_ids:ovn-ofctrl-wait-before-clear=0 external_ids:ovn-enable-lflow-cache=true external_ids:ovn-set-local-ip="true" external_ids:ovn-memlimit-lflow-cache-kb=1048576 2026-01-20T10:56:53.137954046+00:00 stderr F I0120 10:56:53.137914 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd:v4 k8s.ovn.org/name:openshift-etcd k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ac960e5b-7bfc-4c54-ade3-c1a9dd66917b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.137954046+00:00 stderr F I0120 10:56:53.137938 30089 namespace.go:104] [openshift-cluster-samples-operator] adding namespace took 7.154372ms 2026-01-20T10:56:53.137954046+00:00 stderr F I0120 10:56:53.137947 30089 address_set.go:304] New(ac960e5b-7bfc-4c54-ade3-c1a9dd66917b/default-network-controller:Namespace:openshift-etcd:v4/a1263951348256964356) with [] 2026-01-20T10:56:53.137971287+00:00 stderr F I0120 10:56:53.137952 30089 obj_retry.go:541] Creating *v1.Namespace openshift-cluster-samples-operator took: 7.189172ms 2026-01-20T10:56:53.137971287+00:00 stderr F I0120 10:56:53.137961 30089 obj_retry.go:502] Add event received for *v1.Namespace kube-public 2026-01-20T10:56:53.137971287+00:00 stderr F I0120 10:56:53.137966 30089 namespace.go:100] [kube-public] adding namespace 2026-01-20T10:56:53.138004348+00:00 stderr F I0120 10:56:53.137956 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set 
Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd:v4 k8s.ovn.org/name:openshift-etcd k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ac960e5b-7bfc-4c54-ade3-c1a9dd66917b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.138298276+00:00 stderr F I0120 10:56:53.138260 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-version:v4 k8s.ovn.org/name:openshift-cluster-version k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2355a6e5-ae30-44ea-bc37-ea95e8937a37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.138298276+00:00 stderr F I0120 10:56:53.138286 30089 address_set.go:304] New(2355a6e5-ae30-44ea-bc37-ea95e8937a37/default-network-controller:Namespace:openshift-cluster-version:v4/a8029920972938375443) with [] 2026-01-20T10:56:53.138312036+00:00 stderr F I0120 10:56:53.138300 30089 namespace.go:104] [openshift-etcd] adding namespace took 7.151273ms 2026-01-20T10:56:53.138312036+00:00 stderr F I0120 10:56:53.138292 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-version:v4 k8s.ovn.org/name:openshift-cluster-version k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2355a6e5-ae30-44ea-bc37-ea95e8937a37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.138319446+00:00 
stderr F I0120 10:56:53.138311 30089 obj_retry.go:541] Creating *v1.Namespace openshift-etcd took: 7.171363ms 2026-01-20T10:56:53.138326516+00:00 stderr F I0120 10:56:53.138320 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-network-diagnostics 2026-01-20T10:56:53.138333547+00:00 stderr F I0120 10:56:53.138326 30089 namespace.go:100] [openshift-network-diagnostics] adding namespace 2026-01-20T10:56:53.138582923+00:00 stderr F I0120 10:56:53.138565 30089 namespace.go:104] [openshift-cluster-version] adding namespace took 7.054041ms 2026-01-20T10:56:53.138582923+00:00 stderr F I0120 10:56:53.138577 30089 obj_retry.go:541] Creating *v1.Namespace openshift-cluster-version took: 7.072301ms 2026-01-20T10:56:53.138607694+00:00 stderr F I0120 10:56:53.138565 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.16]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-storage-version-migrator-operator:v4 k8s.ovn.org/name:openshift-kube-storage-version-migrator-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9bd2029-4bc5-40b5-aded-872bff118867}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.138607694+00:00 stderr F I0120 10:56:53.138587 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-authentication-operator 2026-01-20T10:56:53.138607694+00:00 stderr F I0120 10:56:53.138600 30089 namespace.go:100] [openshift-authentication-operator] adding namespace 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 10:56:53.138597 30089 address_set.go:304] New(d9bd2029-4bc5-40b5-aded-872bff118867/default-network-controller:Namespace:openshift-kube-storage-version-migrator-operator:v4/a11291866915865594395) with [10.217.0.16] 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 
10:56:53.138805 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.16]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-storage-version-migrator-operator:v4 k8s.ovn.org/name:openshift-kube-storage-version-migrator-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9bd2029-4bc5-40b5-aded-872bff118867}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 10:56:53.139160 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-storage-operator:v4 k8s.ovn.org/name:openshift-cluster-storage-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e456889c-a1a5-4ca0-b08b-79801583741a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 10:56:53.139179 30089 namespace.go:104] [openshift-kube-storage-version-migrator-operator] adding namespace took 6.898416ms 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 10:56:53.139185 30089 address_set.go:304] New(e456889c-a1a5-4ca0-b08b-79801583741a/default-network-controller:Namespace:openshift-cluster-storage-operator:v4/a13337366700695359377) with [] 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 10:56:53.139192 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kube-storage-version-migrator-operator took: 6.917586ms 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 10:56:53.139200 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-marketplace 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 
10:56:53.139194 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-storage-operator:v4 k8s.ovn.org/name:openshift-cluster-storage-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e456889c-a1a5-4ca0-b08b-79801583741a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.139313863+00:00 stderr F I0120 10:56:53.139205 30089 namespace.go:100] [openshift-marketplace] adding namespace 2026-01-20T10:56:53.139511888+00:00 stderr F I0120 10:56:53.139468 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:cert-manager-operator:v4 k8s.ovn.org/name:cert-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7b7b5b7c-a5cd-4172-b9a5-97177e6fa4bc}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.139511888+00:00 stderr F I0120 10:56:53.139492 30089 address_set.go:304] New(7b7b5b7c-a5cd-4172-b9a5-97177e6fa4bc/default-network-controller:Namespace:cert-manager-operator:v4/a15422348996377343428) with [] 2026-01-20T10:56:53.139511888+00:00 stderr F I0120 10:56:53.139497 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:cert-manager-operator:v4 k8s.ovn.org/name:cert-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{7b7b5b7c-a5cd-4172-b9a5-97177e6fa4bc}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.139554089+00:00 stderr F I0120 10:56:53.139525 30089 namespace.go:104] [openshift-cluster-storage-operator] adding namespace took 6.825473ms 2026-01-20T10:56:53.139554089+00:00 stderr F I0120 10:56:53.139539 30089 obj_retry.go:541] Creating *v1.Namespace openshift-cluster-storage-operator took: 6.864715ms 2026-01-20T10:56:53.139554089+00:00 stderr F I0120 10:56:53.139549 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-config-operator 2026-01-20T10:56:53.139592620+00:00 stderr F I0120 10:56:53.139554 30089 namespace.go:100] [openshift-config-operator] adding namespace 2026-01-20T10:56:53.139860997+00:00 stderr F I0120 10:56:53.139824 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-nutanix-infra:v4 k8s.ovn.org/name:openshift-nutanix-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9be864df-f271-4d7b-a8a0-e53c648721e4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.139860997+00:00 stderr F I0120 10:56:53.139843 30089 namespace.go:104] [cert-manager-operator] adding namespace took 6.804013ms 2026-01-20T10:56:53.139860997+00:00 stderr F I0120 10:56:53.139851 30089 address_set.go:304] New(9be864df-f271-4d7b-a8a0-e53c648721e4/default-network-controller:Namespace:openshift-nutanix-infra:v4/a10781256116209244644) with [] 2026-01-20T10:56:53.139860997+00:00 stderr F I0120 10:56:53.139854 30089 obj_retry.go:541] Creating *v1.Namespace cert-manager-operator took: 6.824013ms 2026-01-20T10:56:53.139877518+00:00 stderr F I0120 10:56:53.139862 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-vsphere-infra 
2026-01-20T10:56:53.139877518+00:00 stderr F I0120 10:56:53.139858 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-nutanix-infra:v4 k8s.ovn.org/name:openshift-nutanix-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9be864df-f271-4d7b-a8a0-e53c648721e4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.139877518+00:00 stderr F I0120 10:56:53.139867 30089 namespace.go:100] [openshift-vsphere-infra] adding namespace 2026-01-20T10:56:53.140308649+00:00 stderr F I0120 10:56:53.140268 30089 namespace.go:104] [openshift-nutanix-infra] adding namespace took 6.820183ms 2026-01-20T10:56:53.140308649+00:00 stderr F I0120 10:56:53.140294 30089 obj_retry.go:541] Creating *v1.Namespace openshift-nutanix-infra took: 6.853334ms 2026-01-20T10:56:53.140308649+00:00 stderr F I0120 10:56:53.140302 30089 obj_retry.go:502] Add event received for *v1.Namespace hostpath-provisioner 2026-01-20T10:56:53.140326440+00:00 stderr F I0120 10:56:53.140308 30089 namespace.go:100] [hostpath-provisioner] adding namespace 2026-01-20T10:56:53.140326440+00:00 stderr F I0120 10:56:53.140254 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-user-workload-monitoring:v4 k8s.ovn.org/name:openshift-user-workload-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c7b7eb85-7c38-4a2f-b21b-66f16447149e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.140335420+00:00 stderr F I0120 10:56:53.140324 30089 address_set.go:304] 
New(c7b7eb85-7c38-4a2f-b21b-66f16447149e/default-network-controller:Namespace:openshift-user-workload-monitoring:v4/a17884403498503024866) with [] 2026-01-20T10:56:53.140371681+00:00 stderr F I0120 10:56:53.140330 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-user-workload-monitoring:v4 k8s.ovn.org/name:openshift-user-workload-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c7b7eb85-7c38-4a2f-b21b-66f16447149e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.141007079+00:00 stderr F I0120 10:56:53.140959 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config:v4 k8s.ovn.org/name:openshift-config k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {44ff8a33-26ed-440b-98f7-fcd58e35aa55}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.141007079+00:00 stderr F I0120 10:56:53.140987 30089 address_set.go:304] New(44ff8a33-26ed-440b-98f7-fcd58e35aa55/default-network-controller:Namespace:openshift-config:v4/a14322580666718461836) with [] 2026-01-20T10:56:53.141007079+00:00 stderr F I0120 10:56:53.140987 30089 namespace.go:104] [openshift-user-workload-monitoring] adding namespace took 7.208764ms 2026-01-20T10:56:53.141007079+00:00 stderr F I0120 10:56:53.141000 30089 obj_retry.go:541] Creating *v1.Namespace openshift-user-workload-monitoring took: 7.227565ms 2026-01-20T10:56:53.141022339+00:00 stderr F I0120 10:56:53.140995 30089 transact.go:42] Configuring OVN: [{Op:update 
Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config:v4 k8s.ovn.org/name:openshift-config k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {44ff8a33-26ed-440b-98f7-fcd58e35aa55}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.141022339+00:00 stderr F I0120 10:56:53.141010 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-controller-manager 2026-01-20T10:56:53.141022339+00:00 stderr F I0120 10:56:53.141016 30089 namespace.go:100] [openshift-controller-manager] adding namespace 2026-01-20T10:56:53.141303586+00:00 stderr F I0120 10:56:53.141261 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress:v4 k8s.ovn.org/name:openshift-ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {25d45f75-d6c3-4910-9c2c-f40890ced90d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.141303586+00:00 stderr F I0120 10:56:53.141289 30089 address_set.go:304] New(25d45f75-d6c3-4910-9c2c-f40890ced90d/default-network-controller:Namespace:openshift-ingress:v4/a9185810757115582127) with [] 2026-01-20T10:56:53.141303586+00:00 stderr F I0120 10:56:53.141289 30089 namespace.go:104] [openshift-config] adding namespace took 6.876506ms 2026-01-20T10:56:53.141303586+00:00 stderr F I0120 10:56:53.141298 30089 obj_retry.go:541] Creating *v1.Namespace openshift-config took: 6.907776ms 2026-01-20T10:56:53.141319647+00:00 stderr F I0120 10:56:53.141306 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-multus 
2026-01-20T10:56:53.141319647+00:00 stderr F I0120 10:56:53.141294 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress:v4 k8s.ovn.org/name:openshift-ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {25d45f75-d6c3-4910-9c2c-f40890ced90d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.141319647+00:00 stderr F I0120 10:56:53.141312 30089 namespace.go:100] [openshift-multus] adding namespace 2026-01-20T10:56:53.141784419+00:00 stderr F I0120 10:56:53.141745 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-infra:v4 k8s.ovn.org/name:openshift-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {79e4a569-af81-44a1-811a-aa1f3875d414}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.141784419+00:00 stderr F I0120 10:56:53.141771 30089 namespace.go:104] [openshift-ingress] adding namespace took 6.713821ms 2026-01-20T10:56:53.141784419+00:00 stderr F I0120 10:56:53.141770 30089 address_set.go:304] New(79e4a569-af81-44a1-811a-aa1f3875d414/default-network-controller:Namespace:openshift-infra:v4/a4190772658089390776) with [] 2026-01-20T10:56:53.141797940+00:00 stderr F I0120 10:56:53.141781 30089 obj_retry.go:541] Creating *v1.Namespace openshift-ingress took: 6.747112ms 2026-01-20T10:56:53.141797940+00:00 stderr F I0120 10:56:53.141790 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-monitoring 2026-01-20T10:56:53.141805230+00:00 stderr F I0120 10:56:53.141797 30089 
namespace.go:100] [openshift-monitoring] adding namespace 2026-01-20T10:56:53.141805230+00:00 stderr F I0120 10:56:53.141785 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-infra:v4 k8s.ovn.org/name:openshift-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {79e4a569-af81-44a1-811a-aa1f3875d414}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142099998+00:00 stderr F I0120 10:56:53.142078 30089 namespace.go:104] [openshift-infra] adding namespace took 6.3435ms 2026-01-20T10:56:53.142120378+00:00 stderr F I0120 10:56:53.142096 30089 obj_retry.go:541] Creating *v1.Namespace openshift-infra took: 6.405872ms 2026-01-20T10:56:53.142120378+00:00 stderr F I0120 10:56:53.142083 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config-managed:v4 k8s.ovn.org/name:openshift-config-managed k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d90c324-b001-4abe-81c1-4b4c84f38fda}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142120378+00:00 stderr F I0120 10:56:53.142109 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-apiserver 2026-01-20T10:56:53.142120378+00:00 stderr F I0120 10:56:53.142110 30089 address_set.go:304] New(3d90c324-b001-4abe-81c1-4b4c84f38fda/default-network-controller:Namespace:openshift-config-managed:v4/a6117206921658593480) with [] 2026-01-20T10:56:53.142129978+00:00 stderr F I0120 10:56:53.142117 30089 namespace.go:100] [openshift-apiserver] adding 
namespace 2026-01-20T10:56:53.142129978+00:00 stderr F I0120 10:56:53.142117 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config-managed:v4 k8s.ovn.org/name:openshift-config-managed k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d90c324-b001-4abe-81c1-4b4c84f38fda}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142387955+00:00 stderr F I0120 10:56:53.142354 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.25]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-storage-version-migrator:v4 k8s.ovn.org/name:openshift-kube-storage-version-migrator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c015a739-e4ae-4995-96b0-bc406fefecac}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142397136+00:00 stderr F I0120 10:56:53.142387 30089 address_set.go:304] New(c015a739-e4ae-4995-96b0-bc406fefecac/default-network-controller:Namespace:openshift-kube-storage-version-migrator:v4/a16978912863426934758) with [10.217.0.25] 2026-01-20T10:56:53.142397136+00:00 stderr F I0120 10:56:53.142388 30089 namespace.go:104] [openshift-config-managed] adding namespace took 6.116384ms 2026-01-20T10:56:53.142404816+00:00 stderr F I0120 10:56:53.142395 30089 obj_retry.go:541] Creating *v1.Namespace openshift-config-managed took: 6.129094ms 2026-01-20T10:56:53.142404816+00:00 stderr F I0120 10:56:53.142392 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.25]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-storage-version-migrator:v4 k8s.ovn.org/name:openshift-kube-storage-version-migrator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c015a739-e4ae-4995-96b0-bc406fefecac}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142689193+00:00 stderr F I0120 10:56:53.142658 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-operator:v4 k8s.ovn.org/name:openshift-network-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04111b86-db80-412d-9e2f-97094cf524e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142689193+00:00 stderr F I0120 10:56:53.142681 30089 address_set.go:304] New(04111b86-db80-412d-9e2f-97094cf524e1/default-network-controller:Namespace:openshift-network-operator:v4/a17843891307737330665) with [] 2026-01-20T10:56:53.142716984+00:00 stderr F I0120 10:56:53.142686 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-operator:v4 k8s.ovn.org/name:openshift-network-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04111b86-db80-412d-9e2f-97094cf524e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142716984+00:00 stderr F I0120 10:56:53.142709 30089 namespace.go:104] [openshift-kube-storage-version-migrator] adding namespace took 5.94601ms 
2026-01-20T10:56:53.142726534+00:00 stderr F I0120 10:56:53.142721 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kube-storage-version-migrator took: 5.96478ms 2026-01-20T10:56:53.142757185+00:00 stderr F I0120 10:56:53.142740 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-ingress-canary 2026-01-20T10:56:53.142757185+00:00 stderr F I0120 10:56:53.142753 30089 namespace.go:100] [openshift-ingress-canary] adding namespace 2026-01-20T10:56:53.142977101+00:00 stderr F I0120 10:56:53.142942 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-machine-approver:v4 k8s.ovn.org/name:openshift-cluster-machine-approver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9f13b0bb-074d-4fa3-ac40-66aa5570fde6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142977101+00:00 stderr F I0120 10:56:53.142965 30089 namespace.go:104] [openshift-network-operator] adding namespace took 5.56396ms 2026-01-20T10:56:53.142977101+00:00 stderr F I0120 10:56:53.142968 30089 address_set.go:304] New(9f13b0bb-074d-4fa3-ac40-66aa5570fde6/default-network-controller:Namespace:openshift-cluster-machine-approver:v4/a8065968527448962190) with [] 2026-01-20T10:56:53.142977101+00:00 stderr F I0120 10:56:53.142973 30089 obj_retry.go:541] Creating *v1.Namespace openshift-network-operator took: 5.596251ms 2026-01-20T10:56:53.142986181+00:00 stderr F I0120 10:56:53.142980 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-ingress-operator 2026-01-20T10:56:53.142993341+00:00 stderr F I0120 10:56:53.142976 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-machine-approver:v4 k8s.ovn.org/name:openshift-cluster-machine-approver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9f13b0bb-074d-4fa3-ac40-66aa5570fde6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.142993341+00:00 stderr F I0120 10:56:53.142986 30089 namespace.go:100] [openshift-ingress-operator] adding namespace 2026-01-20T10:56:53.143300540+00:00 stderr F I0120 10:56:53.143265 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-public:v4 k8s.ovn.org/name:kube-public k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ed2cddb-7639-4bce-a4c1-426cb86feb33}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.143300540+00:00 stderr F I0120 10:56:53.143293 30089 address_set.go:304] New(3ed2cddb-7639-4bce-a4c1-426cb86feb33/default-network-controller:Namespace:kube-public:v4/a8590749387396730558) with [] 2026-01-20T10:56:53.143317120+00:00 stderr F I0120 10:56:53.143305 30089 namespace.go:104] [openshift-cluster-machine-approver] adding namespace took 5.630702ms 2026-01-20T10:56:53.143317120+00:00 stderr F I0120 10:56:53.143314 30089 obj_retry.go:541] Creating *v1.Namespace openshift-cluster-machine-approver took: 5.664503ms 2026-01-20T10:56:53.143324650+00:00 stderr F I0120 10:56:53.143299 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-public:v4 k8s.ovn.org/name:kube-public k8s.ovn.org/owner-controller:default-network-controller 
k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ed2cddb-7639-4bce-a4c1-426cb86feb33}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.143331340+00:00 stderr F I0120 10:56:53.143323 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-openstack-infra 2026-01-20T10:56:53.143331340+00:00 stderr F I0120 10:56:53.143328 30089 namespace.go:100] [openshift-openstack-infra] adding namespace 2026-01-20T10:56:53.143638389+00:00 stderr F I0120 10:56:53.143610 30089 namespace.go:104] [kube-public] adding namespace took 5.638392ms 2026-01-20T10:56:53.143638389+00:00 stderr F I0120 10:56:53.143608 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.4 10.217.0.64]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-diagnostics:v4 k8s.ovn.org/name:openshift-network-diagnostics k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8218c39d-9f44-4ee1-a2d7-cc0399885a25}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.143638389+00:00 stderr F I0120 10:56:53.143622 30089 obj_retry.go:541] Creating *v1.Namespace kube-public took: 5.655073ms 2026-01-20T10:56:53.143638389+00:00 stderr F I0120 10:56:53.143628 30089 address_set.go:304] New(8218c39d-9f44-4ee1-a2d7-cc0399885a25/default-network-controller:Namespace:openshift-network-diagnostics:v4/a1966919964212966539) with [10.217.0.4 10.217.0.64] 2026-01-20T10:56:53.143638389+00:00 stderr F I0120 10:56:53.143631 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-etcd-operator 2026-01-20T10:56:53.143649580+00:00 stderr F I0120 10:56:53.143637 30089 namespace.go:100] [openshift-etcd-operator] adding namespace 2026-01-20T10:56:53.143656830+00:00 stderr F I0120 
10:56:53.143633 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.4 10.217.0.64]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-diagnostics:v4 k8s.ovn.org/name:openshift-network-diagnostics k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8218c39d-9f44-4ee1-a2d7-cc0399885a25}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.143948628+00:00 stderr F I0120 10:56:53.143911 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.19]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-authentication-operator:v4 k8s.ovn.org/name:openshift-authentication-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a8db0187-0f5e-41f3-8b4b-4d2c4d0d1e2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.143948628+00:00 stderr F I0120 10:56:53.143935 30089 namespace.go:104] [openshift-network-diagnostics] adding namespace took 5.60419ms 2026-01-20T10:56:53.143948628+00:00 stderr F I0120 10:56:53.143939 30089 address_set.go:304] New(a8db0187-0f5e-41f3-8b4b-4d2c4d0d1e2d/default-network-controller:Namespace:openshift-authentication-operator:v4/a11592754075545683359) with [10.217.0.19] 2026-01-20T10:56:53.143948628+00:00 stderr F I0120 10:56:53.143943 30089 obj_retry.go:541] Creating *v1.Namespace openshift-network-diagnostics took: 5.616241ms 2026-01-20T10:56:53.143962278+00:00 stderr F I0120 10:56:53.143946 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.19]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openshift-authentication-operator:v4 k8s.ovn.org/name:openshift-authentication-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a8db0187-0f5e-41f3-8b4b-4d2c4d0d1e2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.144286837+00:00 stderr F I0120 10:56:53.144245 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.34 10.217.0.33 10.217.0.35 10.217.0.29 10.217.0.30]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-marketplace:v4 k8s.ovn.org/name:openshift-marketplace k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0e02ce6b-c33a-409a-bb6c-ed39be6a658f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.144286837+00:00 stderr F I0120 10:56:53.144274 30089 address_set.go:304] New(0e02ce6b-c33a-409a-bb6c-ed39be6a658f/default-network-controller:Namespace:openshift-marketplace:v4/a13245376580307887587) with [10.217.0.33 10.217.0.35 10.217.0.29 10.217.0.30 10.217.0.34] 2026-01-20T10:56:53.144299587+00:00 stderr F I0120 10:56:53.144284 30089 namespace.go:104] [openshift-authentication-operator] adding namespace took 5.486037ms 2026-01-20T10:56:53.144299587+00:00 stderr F I0120 10:56:53.144281 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.34 10.217.0.33 10.217.0.35 10.217.0.29 10.217.0.30]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-marketplace:v4 k8s.ovn.org/name:openshift-marketplace k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {0e02ce6b-c33a-409a-bb6c-ed39be6a658f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.144299587+00:00 stderr F I0120 10:56:53.144294 30089 obj_retry.go:541] Creating *v1.Namespace openshift-authentication-operator took: 5.693523ms 2026-01-20T10:56:53.144321058+00:00 stderr F I0120 10:56:53.144307 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift 2026-01-20T10:56:53.144321058+00:00 stderr F I0120 10:56:53.144315 30089 namespace.go:100] [openshift] adding namespace 2026-01-20T10:56:53.144581445+00:00 stderr F I0120 10:56:53.144544 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.23]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config-operator:v4 k8s.ovn.org/name:openshift-config-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9f40c027-345f-41f2-9721-c96b848490c2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.144581445+00:00 stderr F I0120 10:56:53.144567 30089 address_set.go:304] New(9f40c027-345f-41f2-9721-c96b848490c2/default-network-controller:Namespace:openshift-config-operator:v4/a15513656991472936797) with [10.217.0.23] 2026-01-20T10:56:53.144593005+00:00 stderr F I0120 10:56:53.144572 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.23]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config-operator:v4 k8s.ovn.org/name:openshift-config-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9f40c027-345f-41f2-9721-c96b848490c2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.144601715+00:00 stderr F I0120 10:56:53.144592 30089 namespace.go:104] [openshift-marketplace] adding namespace took 5.376035ms 2026-01-20T10:56:53.144610345+00:00 stderr F I0120 10:56:53.144603 30089 obj_retry.go:541] Creating *v1.Namespace openshift-marketplace took: 5.396965ms 2026-01-20T10:56:53.144618906+00:00 stderr F I0120 10:56:53.144612 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-console-operator 2026-01-20T10:56:53.144625956+00:00 stderr F I0120 10:56:53.144619 30089 namespace.go:100] [openshift-console-operator] adding namespace 2026-01-20T10:56:53.144753389+00:00 stderr F I0120 10:56:53.144737 30089 ovs.go:162] Exec(15): stdout: "" 2026-01-20T10:56:53.144790920+00:00 stderr F I0120 10:56:53.144781 30089 ovs.go:163] Exec(15): stderr: "" 2026-01-20T10:56:53.144823911+00:00 stderr F I0120 10:56:53.144814 30089 ovs.go:159] Exec(16): /usr/bin/ovs-vsctl --timeout=15 -- clear bridge br-int netflow -- clear bridge br-int sflow -- clear bridge br-int ipfix 2026-01-20T10:56:53.144881662+00:00 stderr F I0120 10:56:53.144805 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-vsphere-infra:v4 k8s.ovn.org/name:openshift-vsphere-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11282001-8d66-4c1f-84de-c3454bcd789a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.144907363+00:00 stderr F I0120 10:56:53.144885 30089 address_set.go:304] New(11282001-8d66-4c1f-84de-c3454bcd789a/default-network-controller:Namespace:openshift-vsphere-infra:v4/a15940016096332404286) with [] 2026-01-20T10:56:53.144948204+00:00 stderr F I0120 10:56:53.144903 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set 
Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-vsphere-infra:v4 k8s.ovn.org/name:openshift-vsphere-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11282001-8d66-4c1f-84de-c3454bcd789a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.144955764+00:00 stderr F I0120 10:56:53.144823 30089 namespace.go:104] [openshift-config-operator] adding namespace took 5.265362ms 2026-01-20T10:56:53.144962405+00:00 stderr F I0120 10:56:53.144953 30089 obj_retry.go:541] Creating *v1.Namespace openshift-config-operator took: 5.396955ms 2026-01-20T10:56:53.144968945+00:00 stderr F I0120 10:56:53.144964 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-controller-manager 2026-01-20T10:56:53.144983955+00:00 stderr F I0120 10:56:53.144969 30089 namespace.go:100] [openshift-kube-controller-manager] adding namespace 2026-01-20T10:56:53.145305374+00:00 stderr F I0120 10:56:53.145270 30089 namespace.go:104] [openshift-vsphere-infra] adding namespace took 5.389425ms 2026-01-20T10:56:53.145305374+00:00 stderr F I0120 10:56:53.145298 30089 obj_retry.go:541] Creating *v1.Namespace openshift-vsphere-infra took: 5.427486ms 2026-01-20T10:56:53.145331454+00:00 stderr F I0120 10:56:53.145310 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-console-user-settings 2026-01-20T10:56:53.145331454+00:00 stderr F I0120 10:56:53.145318 30089 namespace.go:100] [openshift-console-user-settings] adding namespace 2026-01-20T10:56:53.145331454+00:00 stderr F I0120 10:56:53.145250 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.49]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:hostpath-provisioner:v4 
k8s.ovn.org/name:hostpath-provisioner k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {62f9e339-06be-4a23-84c9-22e50c244827}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.145340155+00:00 stderr F I0120 10:56:53.145331 30089 address_set.go:304] New(62f9e339-06be-4a23-84c9-22e50c244827/default-network-controller:Namespace:hostpath-provisioner:v4/a9227386411571076591) with [10.217.0.49] 2026-01-20T10:56:53.145381696+00:00 stderr F I0120 10:56:53.145340 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.49]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:hostpath-provisioner:v4 k8s.ovn.org/name:hostpath-provisioner k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {62f9e339-06be-4a23-84c9-22e50c244827}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.145719675+00:00 stderr F I0120 10:56:53.145688 30089 namespace.go:104] [hostpath-provisioner] adding namespace took 5.371445ms 2026-01-20T10:56:53.145719675+00:00 stderr F I0120 10:56:53.145706 30089 obj_retry.go:541] Creating *v1.Namespace hostpath-provisioner took: 5.395505ms 2026-01-20T10:56:53.145728635+00:00 stderr F I0120 10:56:53.145715 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.145735275+00:00 stderr F I0120 10:56:53.145709 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.87]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-controller-manager:v4 k8s.ovn.org/name:openshift-controller-manager k8s.ovn.org/owner-controller:default-network-controller 
k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1d06d0e2-5650-4996-848c-4f667cd28a66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.145742275+00:00 stderr F I0120 10:56:53.145733 30089 namespace.go:100] [openshift-operator-lifecycle-manager] adding namespace 2026-01-20T10:56:53.145748816+00:00 stderr F I0120 10:56:53.145739 30089 address_set.go:304] New(1d06d0e2-5650-4996-848c-4f667cd28a66/default-network-controller:Namespace:openshift-controller-manager:v4/a10467312518402121836) with [10.217.0.87] 2026-01-20T10:56:53.145769516+00:00 stderr F I0120 10:56:53.145746 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.87]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-controller-manager:v4 k8s.ovn.org/name:openshift-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1d06d0e2-5650-4996-848c-4f667cd28a66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.146119575+00:00 stderr F I0120 10:56:53.146087 30089 namespace.go:104] [openshift-controller-manager] adding namespace took 5.066025ms 2026-01-20T10:56:53.146119575+00:00 stderr F I0120 10:56:53.146074 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.32 10.217.0.3]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-multus:v4 k8s.ovn.org/name:openshift-multus k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a3dfe91c-3443-4cbb-84c2-f35c94b3c72b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.146119575+00:00 stderr F I0120 
10:56:53.146098 30089 obj_retry.go:541] Creating *v1.Namespace openshift-controller-manager took: 5.080666ms 2026-01-20T10:56:53.146119575+00:00 stderr F I0120 10:56:53.146104 30089 address_set.go:304] New(a3dfe91c-3443-4cbb-84c2-f35c94b3c72b/default-network-controller:Namespace:openshift-multus:v4/a13687770890520536676) with [10.217.0.32 10.217.0.3] 2026-01-20T10:56:53.146119575+00:00 stderr F I0120 10:56:53.146108 30089 obj_retry.go:502] Add event received for *v1.Namespace kube-node-lease 2026-01-20T10:56:53.146119575+00:00 stderr F I0120 10:56:53.146114 30089 namespace.go:100] [kube-node-lease] adding namespace 2026-01-20T10:56:53.146139616+00:00 stderr F I0120 10:56:53.146111 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.32 10.217.0.3]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-multus:v4 k8s.ovn.org/name:openshift-multus k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a3dfe91c-3443-4cbb-84c2-f35c94b3c72b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.146536957+00:00 stderr F I0120 10:56:53.146499 30089 namespace.go:104] [openshift-multus] adding namespace took 5.181689ms 2026-01-20T10:56:53.146536957+00:00 stderr F I0120 10:56:53.146514 30089 obj_retry.go:541] Creating *v1.Namespace openshift-multus took: 5.20008ms 2026-01-20T10:56:53.146536957+00:00 stderr F I0120 10:56:53.146523 30089 obj_retry.go:502] Add event received for *v1.Namespace openstack-operators 2026-01-20T10:56:53.146536957+00:00 stderr F I0120 10:56:53.146529 30089 namespace.go:100] [openstack-operators] adding namespace 2026-01-20T10:56:53.146536957+00:00 stderr F I0120 10:56:53.146499 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openshift-monitoring:v4 k8s.ovn.org/name:openshift-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {019ff9cb-74b2-458c-bba7-3cd8f5e2b524}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.146549678+00:00 stderr F I0120 10:56:53.146541 30089 address_set.go:304] New(019ff9cb-74b2-458c-bba7-3cd8f5e2b524/default-network-controller:Namespace:openshift-monitoring:v4/a5151710470485437164) with [] 2026-01-20T10:56:53.146583759+00:00 stderr F I0120 10:56:53.146548 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-monitoring:v4 k8s.ovn.org/name:openshift-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {019ff9cb-74b2-458c-bba7-3cd8f5e2b524}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.146961609+00:00 stderr F I0120 10:56:53.146934 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.82]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-apiserver:v4 k8s.ovn.org/name:openshift-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e1b35e7-ab39-4122-9471-98e00d6d2a4f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.146969729+00:00 stderr F I0120 10:56:53.146958 30089 address_set.go:304] New(3e1b35e7-ab39-4122-9471-98e00d6d2a4f/default-network-controller:Namespace:openshift-apiserver:v4/a12374569603079029239) with [10.217.0.82] 
2026-01-20T10:56:53.146993190+00:00 stderr F I0120 10:56:53.146972 30089 namespace.go:104] [openshift-monitoring] adding namespace took 5.157079ms 2026-01-20T10:56:53.146993190+00:00 stderr F I0120 10:56:53.146968 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.82]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-apiserver:v4 k8s.ovn.org/name:openshift-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e1b35e7-ab39-4122-9471-98e00d6d2a4f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.147000870+00:00 stderr F I0120 10:56:53.146991 30089 obj_retry.go:541] Creating *v1.Namespace openshift-monitoring took: 5.191469ms 2026-01-20T10:56:53.147007940+00:00 stderr F I0120 10:56:53.147002 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-controller-manager-operator 2026-01-20T10:56:53.147015030+00:00 stderr F I0120 10:56:53.147009 30089 namespace.go:100] [openshift-kube-controller-manager-operator] adding namespace 2026-01-20T10:56:53.147377370+00:00 stderr F I0120 10:56:53.147338 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.71]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress-canary:v4 k8s.ovn.org/name:openshift-ingress-canary k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7ddc544b-99f6-40be-9f0d-23611845c014}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.147377370+00:00 stderr F I0120 10:56:53.147365 30089 namespace.go:104] [openshift-apiserver] adding namespace took 5.238751ms 2026-01-20T10:56:53.147377370+00:00 stderr F 
I0120 10:56:53.147368 30089 address_set.go:304] New(7ddc544b-99f6-40be-9f0d-23611845c014/default-network-controller:Namespace:openshift-ingress-canary:v4/a17074529903361539284) with [10.217.0.71] 2026-01-20T10:56:53.147392580+00:00 stderr F I0120 10:56:53.147377 30089 obj_retry.go:541] Creating *v1.Namespace openshift-apiserver took: 5.258652ms 2026-01-20T10:56:53.147392580+00:00 stderr F I0120 10:56:53.147385 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-machine-api 2026-01-20T10:56:53.147392580+00:00 stderr F I0120 10:56:53.147378 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.71]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress-canary:v4 k8s.ovn.org/name:openshift-ingress-canary k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7ddc544b-99f6-40be-9f0d-23611845c014}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.147403730+00:00 stderr F I0120 10:56:53.147391 30089 namespace.go:100] [openshift-machine-api] adding namespace 2026-01-20T10:56:53.147808871+00:00 stderr F I0120 10:56:53.147764 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.45]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress-operator:v4 k8s.ovn.org/name:openshift-ingress-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3047e9b3-92bb-427f-ab3e-a5f9bc8ce704}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.147808871+00:00 stderr F I0120 10:56:53.147786 30089 namespace.go:104] [openshift-ingress-canary] adding namespace took 5.027126ms 
2026-01-20T10:56:53.147808871+00:00 stderr F I0120 10:56:53.147800 30089 obj_retry.go:541] Creating *v1.Namespace openshift-ingress-canary took: 5.046476ms 2026-01-20T10:56:53.147808871+00:00 stderr F I0120 10:56:53.147801 30089 address_set.go:304] New(3047e9b3-92bb-427f-ab3e-a5f9bc8ce704/default-network-controller:Namespace:openshift-ingress-operator:v4/a12824364980436020060) with [10.217.0.45] 2026-01-20T10:56:53.147819812+00:00 stderr F I0120 10:56:53.147808 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-apiserver 2026-01-20T10:56:53.147819812+00:00 stderr F I0120 10:56:53.147813 30089 namespace.go:100] [openshift-kube-apiserver] adding namespace 2026-01-20T10:56:53.147827322+00:00 stderr F I0120 10:56:53.147809 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.45]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress-operator:v4 k8s.ovn.org/name:openshift-ingress-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3047e9b3-92bb-427f-ab3e-a5f9bc8ce704}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.148169051+00:00 stderr F I0120 10:56:53.148127 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-openstack-infra:v4 k8s.ovn.org/name:openshift-openstack-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df03648f-a2a6-48c0-8e83-58b35476b3fb}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.148169051+00:00 stderr F I0120 10:56:53.148161 30089 address_set.go:304] 
New(df03648f-a2a6-48c0-8e83-58b35476b3fb/default-network-controller:Namespace:openshift-openstack-infra:v4/a16831300222096547883) with [] 2026-01-20T10:56:53.148193311+00:00 stderr F I0120 10:56:53.148159 30089 namespace.go:104] [openshift-ingress-operator] adding namespace took 5.163519ms 2026-01-20T10:56:53.148193311+00:00 stderr F I0120 10:56:53.148166 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-openstack-infra:v4 k8s.ovn.org/name:openshift-openstack-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df03648f-a2a6-48c0-8e83-58b35476b3fb}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.148193311+00:00 stderr F I0120 10:56:53.148175 30089 obj_retry.go:541] Creating *v1.Namespace openshift-ingress-operator took: 5.18696ms 2026-01-20T10:56:53.148479039+00:00 stderr F I0120 10:56:53.148439 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.8]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd-operator:v4 k8s.ovn.org/name:openshift-etcd-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1ee15a1-bd6d-495f-8ae4-fb5816533020}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.148479039+00:00 stderr F I0120 10:56:53.148466 30089 address_set.go:304] New(f1ee15a1-bd6d-495f-8ae4-fb5816533020/default-network-controller:Namespace:openshift-etcd-operator:v4/a14710618839743769589) with [10.217.0.8] 2026-01-20T10:56:53.148479039+00:00 stderr F I0120 10:56:53.148469 30089 namespace.go:104] [openshift-openstack-infra] adding 
namespace took 5.135409ms 2026-01-20T10:56:53.148488679+00:00 stderr F I0120 10:56:53.148478 30089 obj_retry.go:541] Creating *v1.Namespace openshift-openstack-infra took: 5.148799ms 2026-01-20T10:56:53.148488679+00:00 stderr F I0120 10:56:53.148485 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-dns 2026-01-20T10:56:53.148495889+00:00 stderr F I0120 10:56:53.148471 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.8]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd-operator:v4 k8s.ovn.org/name:openshift-etcd-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1ee15a1-bd6d-495f-8ae4-fb5816533020}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.148495889+00:00 stderr F I0120 10:56:53.148491 30089 namespace.go:100] [openshift-dns] adding namespace 2026-01-20T10:56:53.148844999+00:00 stderr F I0120 10:56:53.148794 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift:v4 k8s.ovn.org/name:openshift k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f040846e-f3f2-4016-b78c-e138f8ebd3e3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.148844999+00:00 stderr F I0120 10:56:53.148826 30089 namespace.go:104] [openshift-etcd-operator] adding namespace took 5.178448ms 2026-01-20T10:56:53.148844999+00:00 stderr F I0120 10:56:53.148833 30089 address_set.go:304] New(f040846e-f3f2-4016-b78c-e138f8ebd3e3/default-network-controller:Namespace:openshift:v4/a8611152139381270309) with [] 2026-01-20T10:56:53.148844999+00:00 
stderr F I0120 10:56:53.148838 30089 obj_retry.go:541] Creating *v1.Namespace openshift-etcd-operator took: 5.199119ms 2026-01-20T10:56:53.148878710+00:00 stderr F I0120 10:56:53.148840 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift:v4 k8s.ovn.org/name:openshift k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f040846e-f3f2-4016-b78c-e138f8ebd3e3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.149153417+00:00 stderr F I0120 10:56:53.149128 30089 namespace.go:104] [openshift] adding namespace took 4.805479ms 2026-01-20T10:56:53.149153417+00:00 stderr F I0120 10:56:53.149144 30089 obj_retry.go:541] Creating *v1.Namespace openshift took: 4.82701ms 2026-01-20T10:56:53.149169307+00:00 stderr F I0120 10:56:53.149141 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.62 10.217.0.61]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-operator:v4 k8s.ovn.org/name:openshift-console-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b98e5acb-dfcd-45a0-8119-e0592c97b5c3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.149169307+00:00 stderr F I0120 10:56:53.149163 30089 address_set.go:304] New(b98e5acb-dfcd-45a0-8119-e0592c97b5c3/default-network-controller:Namespace:openshift-console-operator:v4/a16211398687523592942) with [10.217.0.61 10.217.0.62] 2026-01-20T10:56:53.149198599+00:00 stderr F I0120 10:56:53.149173 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.62 
10.217.0.61]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-operator:v4 k8s.ovn.org/name:openshift-console-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b98e5acb-dfcd-45a0-8119-e0592c97b5c3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.149497267+00:00 stderr F I0120 10:56:53.149463 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager:v4 k8s.ovn.org/name:openshift-kube-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.149497267+00:00 stderr F I0120 10:56:53.149489 30089 address_set.go:304] New(7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) with [] 2026-01-20T10:56:53.149497267+00:00 stderr F I0120 10:56:53.149489 30089 namespace.go:104] [openshift-console-operator] adding namespace took 4.864381ms 2026-01-20T10:56:53.149508567+00:00 stderr F I0120 10:56:53.149500 30089 obj_retry.go:541] Creating *v1.Namespace openshift-console-operator took: 4.880602ms 2026-01-20T10:56:53.149515598+00:00 stderr F I0120 10:56:53.149495 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager:v4 k8s.ovn.org/name:openshift-kube-controller-manager k8s.ovn.org/owner-controller:default-network-controller 
k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.149515598+00:00 stderr F I0120 10:56:53.149508 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-ovn-kubernetes 2026-01-20T10:56:53.149522568+00:00 stderr F I0120 10:56:53.149515 30089 namespace.go:100] [openshift-ovn-kubernetes] adding namespace 2026-01-20T10:56:53.149846086+00:00 stderr F I0120 10:56:53.149803 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-user-settings:v4 k8s.ovn.org/name:openshift-console-user-settings k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1993aac-30de-48d4-bdd3-a3299b9538ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.149846086+00:00 stderr F I0120 10:56:53.149834 30089 address_set.go:304] New(f1993aac-30de-48d4-bdd3-a3299b9538ff/default-network-controller:Namespace:openshift-console-user-settings:v4/a17174782576849527835) with [] 2026-01-20T10:56:53.149874867+00:00 stderr F I0120 10:56:53.149840 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-user-settings:v4 k8s.ovn.org/name:openshift-console-user-settings k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1993aac-30de-48d4-bdd3-a3299b9538ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.149874867+00:00 stderr F I0120 10:56:53.149861 30089 
namespace.go:104] [openshift-kube-controller-manager] adding namespace took 4.884552ms 2026-01-20T10:56:53.149882577+00:00 stderr F I0120 10:56:53.149874 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kube-controller-manager took: 4.902732ms 2026-01-20T10:56:53.150207006+00:00 stderr F I0120 10:56:53.150167 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.11 10.217.0.14 10.217.0.24 10.217.0.43]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4 k8s.ovn.org/name:openshift-operator-lifecycle-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.150207006+00:00 stderr F I0120 10:56:53.150194 30089 namespace.go:104] [openshift-console-user-settings] adding namespace took 4.869251ms 2026-01-20T10:56:53.150207006+00:00 stderr F I0120 10:56:53.150196 30089 address_set.go:304] New(369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) with [10.217.0.14 10.217.0.24 10.217.0.43 10.217.0.11] 2026-01-20T10:56:53.150221566+00:00 stderr F I0120 10:56:53.150206 30089 obj_retry.go:541] Creating *v1.Namespace openshift-console-user-settings took: 4.887162ms 2026-01-20T10:56:53.150221566+00:00 stderr F I0120 10:56:53.150216 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-route-controller-manager 2026-01-20T10:56:53.150229067+00:00 stderr F I0120 10:56:53.150208 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.11 10.217.0.14 10.217.0.24 10.217.0.43]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4 k8s.ovn.org/name:openshift-operator-lifecycle-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.150229067+00:00 stderr F I0120 10:56:53.150223 30089 namespace.go:100] [openshift-route-controller-manager] adding namespace 2026-01-20T10:56:53.150510354+00:00 stderr F I0120 10:56:53.150489 30089 namespace.go:104] [openshift-operator-lifecycle-manager] adding namespace took 4.747968ms 2026-01-20T10:56:53.150510354+00:00 stderr F I0120 10:56:53.150501 30089 obj_retry.go:541] Creating *v1.Namespace openshift-operator-lifecycle-manager took: 4.767719ms 2026-01-20T10:56:53.150510354+00:00 stderr F I0120 10:56:53.150489 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-node-lease:v4 k8s.ovn.org/name:kube-node-lease k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {16ef921a-9758-4ef9-9c0a-3ab866f42178}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.150523354+00:00 stderr F I0120 10:56:53.150509 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-ovirt-infra 2026-01-20T10:56:53.150523354+00:00 stderr F I0120 10:56:53.150512 30089 address_set.go:304] New(16ef921a-9758-4ef9-9c0a-3ab866f42178/default-network-controller:Namespace:kube-node-lease:v4/a8945957557890443212) with [] 2026-01-20T10:56:53.150523354+00:00 stderr F I0120 10:56:53.150516 30089 namespace.go:100] [openshift-ovirt-infra] adding namespace 2026-01-20T10:56:53.150556325+00:00 stderr F 
I0120 10:56:53.150520 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-node-lease:v4 k8s.ovn.org/name:kube-node-lease k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {16ef921a-9758-4ef9-9c0a-3ab866f42178}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.150884464+00:00 stderr F I0120 10:56:53.150843 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openstack-operators:v4 k8s.ovn.org/name:openstack-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2b2840b-ff18-4adf-ae8d-d304c977d1a0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.150884464+00:00 stderr F I0120 10:56:53.150863 30089 namespace.go:104] [kube-node-lease] adding namespace took 4.742888ms 2026-01-20T10:56:53.150884464+00:00 stderr F I0120 10:56:53.150873 30089 address_set.go:304] New(c2b2840b-ff18-4adf-ae8d-d304c977d1a0/default-network-controller:Namespace:openstack-operators:v4/a14495394860088409165) with [] 2026-01-20T10:56:53.150884464+00:00 stderr F I0120 10:56:53.150880 30089 obj_retry.go:541] Creating *v1.Namespace kube-node-lease took: 4.762649ms 2026-01-20T10:56:53.150893604+00:00 stderr F I0120 10:56:53.150888 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-apiserver-operator 2026-01-20T10:56:53.150900934+00:00 stderr F I0120 10:56:53.150880 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openstack-operators:v4 k8s.ovn.org/name:openstack-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2b2840b-ff18-4adf-ae8d-d304c977d1a0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.150900934+00:00 stderr F I0120 10:56:53.150895 30089 namespace.go:100] [openshift-kube-apiserver-operator] adding namespace 2026-01-20T10:56:53.151393587+00:00 stderr F I0120 10:56:53.151357 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.15]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager-operator:v4 k8s.ovn.org/name:openshift-kube-controller-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {adba65cb-d07c-402b-8874-1746b8bdeaaf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.151393587+00:00 stderr F I0120 10:56:53.151382 30089 address_set.go:304] New(adba65cb-d07c-402b-8874-1746b8bdeaaf/default-network-controller:Namespace:openshift-kube-controller-manager-operator:v4/a13990978431870169537) with [10.217.0.15] 2026-01-20T10:56:53.151393587+00:00 stderr F I0120 10:56:53.151384 30089 namespace.go:104] [openstack-operators] adding namespace took 4.84821ms 2026-01-20T10:56:53.151413938+00:00 stderr F I0120 10:56:53.151395 30089 obj_retry.go:541] Creating *v1.Namespace openstack-operators took: 4.86462ms 2026-01-20T10:56:53.151413938+00:00 stderr F I0120 10:56:53.151387 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.15]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager-operator:v4 k8s.ovn.org/name:openshift-kube-controller-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {adba65cb-d07c-402b-8874-1746b8bdeaaf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.151413938+00:00 stderr F I0120 10:56:53.151404 30089 obj_retry.go:502] Add event received for *v1.Namespace openshift-controller-manager-operator 2026-01-20T10:56:53.151421958+00:00 stderr F I0120 10:56:53.151411 30089 namespace.go:100] [openshift-controller-manager-operator] adding namespace 2026-01-20T10:56:53.151757847+00:00 stderr F I0120 10:56:53.151733 30089 namespace.go:104] [openshift-kube-controller-manager-operator] adding namespace took 4.718066ms 2026-01-20T10:56:53.151757847+00:00 stderr F I0120 10:56:53.151724 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.5 10.217.0.20]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-machine-api:v4 k8s.ovn.org/name:openshift-machine-api k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9b532877-1f57-4cef-a907-ee481e0632b3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.151757847+00:00 stderr F I0120 10:56:53.151746 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kube-controller-manager-operator took: 4.736477ms 2026-01-20T10:56:53.151757847+00:00 stderr F I0120 10:56:53.151751 30089 address_set.go:304] New(9b532877-1f57-4cef-a907-ee481e0632b3/default-network-controller:Namespace:openshift-machine-api:v4/a8146979490545162082) with [10.217.0.20 10.217.0.5] 2026-01-20T10:56:53.151768257+00:00 stderr F I0120 10:56:53.151757 30089 
obj_retry.go:502] Add event received for *v1.Namespace openshift-console 2026-01-20T10:56:53.151768257+00:00 stderr F I0120 10:56:53.151763 30089 namespace.go:100] [openshift-console] adding namespace 2026-01-20T10:56:53.151775768+00:00 stderr F I0120 10:56:53.151758 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.5 10.217.0.20]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-machine-api:v4 k8s.ovn.org/name:openshift-machine-api k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9b532877-1f57-4cef-a907-ee481e0632b3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.152129658+00:00 stderr F I0120 10:56:53.152097 30089 namespace.go:104] [openshift-machine-api] adding namespace took 4.698537ms 2026-01-20T10:56:53.152129658+00:00 stderr F I0120 10:56:53.152112 30089 obj_retry.go:541] Creating *v1.Namespace openshift-machine-api took: 4.720837ms 2026-01-20T10:56:53.152217270+00:00 stderr F I0120 10:56:53.152167 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.38]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver:v4 k8s.ovn.org/name:openshift-kube-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.152217270+00:00 stderr F I0120 10:56:53.152205 30089 address_set.go:304] New(6276d4aa-26f2-4006-a763-72ea13795238/default-network-controller:Namespace:openshift-kube-apiserver:v4/a4531626005796422843) with [10.217.0.38] 2026-01-20T10:56:53.152375454+00:00 stderr F I0120 
10:56:53.152332 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.38]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver:v4 k8s.ovn.org/name:openshift-kube-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.152792546+00:00 stderr F I0120 10:56:53.152754 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.31]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns:v4 k8s.ovn.org/name:openshift-dns k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9adb0821-8866-4e7b-91bc-3de62bd5d8d5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.152792546+00:00 stderr F I0120 10:56:53.152785 30089 address_set.go:304] New(9adb0821-8866-4e7b-91bc-3de62bd5d8d5/default-network-controller:Namespace:openshift-dns:v4/a11732331429224425771) with [10.217.0.31] 2026-01-20T10:56:53.152816266+00:00 stderr F I0120 10:56:53.152791 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.31]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns:v4 k8s.ovn.org/name:openshift-dns k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9adb0821-8866-4e7b-91bc-3de62bd5d8d5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.152893528+00:00 stderr F I0120 10:56:53.152881 30089 
namespace.go:104] [openshift-kube-apiserver] adding namespace took 5.063757ms 2026-01-20T10:56:53.152900308+00:00 stderr F I0120 10:56:53.152892 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kube-apiserver took: 5.076277ms 2026-01-20T10:56:53.153636308+00:00 stderr F I0120 10:56:53.153597 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovn-kubernetes:v4 k8s.ovn.org/name:openshift-ovn-kubernetes k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {26eb6b8a-5781-42f2-86d8-29346cd69f7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.153636308+00:00 stderr F I0120 10:56:53.153614 30089 namespace.go:104] [openshift-dns] adding namespace took 5.116098ms 2026-01-20T10:56:53.153636308+00:00 stderr F I0120 10:56:53.153626 30089 address_set.go:304] New(26eb6b8a-5781-42f2-86d8-29346cd69f7c/default-network-controller:Namespace:openshift-ovn-kubernetes:v4/a1398255725986493602) with [] 2026-01-20T10:56:53.153636308+00:00 stderr F I0120 10:56:53.153630 30089 obj_retry.go:541] Creating *v1.Namespace openshift-dns took: 5.137999ms 2026-01-20T10:56:53.153656609+00:00 stderr F I0120 10:56:53.153634 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovn-kubernetes:v4 k8s.ovn.org/name:openshift-ovn-kubernetes k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {26eb6b8a-5781-42f2-86d8-29346cd69f7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.153710740+00:00 stderr F I0120 10:56:53.153682 30089 
ovs.go:162] Exec(16): stdout: "" 2026-01-20T10:56:53.153710740+00:00 stderr F I0120 10:56:53.153694 30089 ovs.go:163] Exec(16): stderr: "" 2026-01-20T10:56:53.153916785+00:00 stderr F I0120 10:56:53.153884 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.88]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-route-controller-manager:v4 k8s.ovn.org/name:openshift-route-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ff17fce-ddcb-48ee-8395-5fd88de6ce7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.153969147+00:00 stderr F I0120 10:56:53.153956 30089 address_set.go:304] New(1ff17fce-ddcb-48ee-8395-5fd88de6ce7a/default-network-controller:Namespace:openshift-route-controller-manager:v4/a5513313330862551964) with [10.217.0.88] 2026-01-20T10:56:53.154006728+00:00 stderr F I0120 10:56:53.153985 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.88]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-route-controller-manager:v4 k8s.ovn.org/name:openshift-route-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ff17fce-ddcb-48ee-8395-5fd88de6ce7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.154047469+00:00 stderr F I0120 10:56:53.153901 30089 namespace.go:104] [openshift-ovn-kubernetes] adding namespace took 4.379367ms 2026-01-20T10:56:53.154047469+00:00 stderr F I0120 10:56:53.154038 30089 obj_retry.go:541] Creating *v1.Namespace openshift-ovn-kubernetes took: 4.521721ms 2026-01-20T10:56:53.154350177+00:00 stderr F I0120 10:56:53.154317 
30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovirt-infra:v4 k8s.ovn.org/name:openshift-ovirt-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b874879e-dad8-4f9f-bcbb-02dc1ff04309}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.154350177+00:00 stderr F I0120 10:56:53.154345 30089 address_set.go:304] New(b874879e-dad8-4f9f-bcbb-02dc1ff04309/default-network-controller:Namespace:openshift-ovirt-infra:v4/a5172224344187132617) with [] 2026-01-20T10:56:53.154376918+00:00 stderr F I0120 10:56:53.154350 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovirt-infra:v4 k8s.ovn.org/name:openshift-ovirt-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b874879e-dad8-4f9f-bcbb-02dc1ff04309}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.154406818+00:00 stderr F I0120 10:56:53.154335 30089 namespace.go:104] [openshift-route-controller-manager] adding namespace took 4.10593ms 2026-01-20T10:56:53.154432939+00:00 stderr F I0120 10:56:53.154422 30089 obj_retry.go:541] Creating *v1.Namespace openshift-route-controller-manager took: 4.197473ms 2026-01-20T10:56:53.154700316+00:00 stderr F I0120 10:56:53.154659 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.7]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver-operator:v4 
k8s.ovn.org/name:openshift-kube-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c61fceab-5551-4b36-b095-0df4dd86c2d2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.154700316+00:00 stderr F I0120 10:56:53.154677 30089 namespace.go:104] [openshift-ovirt-infra] adding namespace took 4.153571ms 2026-01-20T10:56:53.154700316+00:00 stderr F I0120 10:56:53.154687 30089 address_set.go:304] New(c61fceab-5551-4b36-b095-0df4dd86c2d2/default-network-controller:Namespace:openshift-kube-apiserver-operator:v4/a11465645704438275080) with [10.217.0.7] 2026-01-20T10:56:53.154700316+00:00 stderr F I0120 10:56:53.154691 30089 obj_retry.go:541] Creating *v1.Namespace openshift-ovirt-infra took: 4.173852ms 2026-01-20T10:56:53.154714347+00:00 stderr F I0120 10:56:53.154693 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.7]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver-operator:v4 k8s.ovn.org/name:openshift-kube-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c61fceab-5551-4b36-b095-0df4dd86c2d2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.155110348+00:00 stderr F I0120 10:56:53.155080 30089 namespace.go:104] [openshift-kube-apiserver-operator] adding namespace took 4.178483ms 2026-01-20T10:56:53.155110348+00:00 stderr F I0120 10:56:53.155104 30089 obj_retry.go:541] Creating *v1.Namespace openshift-kube-apiserver-operator took: 4.207794ms 2026-01-20T10:56:53.155125648+00:00 stderr F I0120 10:56:53.155051 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.9]} 
external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-controller-manager-operator:v4 k8s.ovn.org/name:openshift-controller-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d0dffec9-15a9-4a4a-82c9-04e99f7b8a5e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.155125648+00:00 stderr F I0120 10:56:53.155120 30089 address_set.go:304] New(d0dffec9-15a9-4a4a-82c9-04e99f7b8a5e/default-network-controller:Namespace:openshift-controller-manager-operator:v4/a14938231737766799037) with [10.217.0.9] 2026-01-20T10:56:53.155165780+00:00 stderr F I0120 10:56:53.155127 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.9]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-controller-manager-operator:v4 k8s.ovn.org/name:openshift-controller-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d0dffec9-15a9-4a4a-82c9-04e99f7b8a5e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.155487658+00:00 stderr F I0120 10:56:53.155460 30089 namespace.go:104] [openshift-controller-manager-operator] adding namespace took 4.042899ms 2026-01-20T10:56:53.155487658+00:00 stderr F I0120 10:56:53.155475 30089 obj_retry.go:541] Creating *v1.Namespace openshift-controller-manager-operator took: 4.06324ms 2026-01-20T10:56:53.155513969+00:00 stderr F I0120 10:56:53.155451 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.73 10.217.0.66]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console:v4 k8s.ovn.org/name:openshift-console 
k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9af07d61-d216-4338-9242-d93a3d664841}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.155542980+00:00 stderr F I0120 10:56:53.155530 30089 address_set.go:304] New(9af07d61-d216-4338-9242-d93a3d664841/default-network-controller:Namespace:openshift-console:v4/a11622011068173273797) with [10.217.0.73 10.217.0.66] 2026-01-20T10:56:53.155580811+00:00 stderr F I0120 10:56:53.155559 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.73 10.217.0.66]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console:v4 k8s.ovn.org/name:openshift-console k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9af07d61-d216-4338-9242-d93a3d664841}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.155870538+00:00 stderr F I0120 10:56:53.155853 30089 namespace.go:104] [openshift-console] adding namespace took 4.085951ms 2026-01-20T10:56:53.155870538+00:00 stderr F I0120 10:56:53.155864 30089 obj_retry.go:541] Creating *v1.Namespace openshift-console took: 4.100921ms 2026-01-20T10:56:53.155882769+00:00 stderr F I0120 10:56:53.155877 30089 factory.go:988] Added *v1.Namespace event handler 1 2026-01-20T10:56:53.156144006+00:00 stderr F I0120 10:56:53.156114 30089 obj_retry.go:502] Add event received for *v1.Node crc 2026-01-20T10:56:53.156161316+00:00 stderr F I0120 10:56:53.156143 30089 master.go:627] Adding or Updating Node "crc" 2026-01-20T10:56:53.156341451+00:00 stderr F I0120 10:56:53.156314 30089 default_node_network_controller.go:778] Node crc ready for ovn initialization with subnet 10.217.0.0/23 2026-01-20T10:56:53.156341451+00:00 stderr F I0120 10:56:53.156313 30089 
model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:dc1aa63b-0376-4ccb-99e2-10794dfff422}]} other_config:{GoMap:map[exclude_ips:10.217.0.2 mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.156353301+00:00 stderr F I0120 10:56:53.156336 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:dc1aa63b-0376-4ccb-99e2-10794dfff422}]} other_config:{GoMap:map[exclude_ips:10.217.0.2 mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.156398522+00:00 stderr F I0120 10:56:53.156386 30089 ovs.go:159] Exec(17): /usr/bin/ovs-vsctl --timeout=15 --no-headings --data bare --format csv --columns type,name find Interface name=ovn-k8s-mp0 2026-01-20T10:56:53.156902046+00:00 stderr F I0120 10:56:53.156852 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[arp_proxy:0a:58:a9:fe:01:01 169.254.1.1 fe80::1 10.217.0.0/22 router-port:rtos-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4867092-ff2b-4983-87ca-ac317e39f546}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.156913376+00:00 stderr F I0120 10:56:53.156897 30089 model_client.go:397] Mutate operations generated as: 
[{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.156944327+00:00 stderr F I0120 10:56:53.156912 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[arp_proxy:0a:58:a9:fe:01:01 169.254.1.1 fe80::1 10.217.0.0/22 router-port:rtos-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4867092-ff2b-4983-87ca-ac317e39f546}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.158643283+00:00 stderr F I0120 10:56:53.158601 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.158643283+00:00 stderr F I0120 10:56:53.158623 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.159008263+00:00 stderr F I0120 10:56:53.158974 30089 model_client.go:381] Update 
operations generated as: [{Op:update Table:Gateway_Chassis Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa name:rtos-crc-017e52b0-97d3-4d7d-aae4-9b216aa025aa priority:1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {964f1415-aaef-4b48-af02-711e20a49a83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.159104585+00:00 stderr F I0120 10:56:53.159046 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:964f1415-aaef-4b48-af02-711e20a49a83}]} mac:0a:58:0a:d9:00:01 networks:{GoSet:[10.217.0.1/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {21ca7e68-1853-4731-afc6-84ae6ce74fe0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.159117205+00:00 stderr F I0120 10:56:53.159098 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:21ca7e68-1853-4731-afc6-84ae6ce74fe0}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.159142136+00:00 stderr F I0120 10:56:53.159110 30089 transact.go:42] Configuring OVN: [{Op:update Table:Gateway_Chassis Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa name:rtos-crc-017e52b0-97d3-4d7d-aae4-9b216aa025aa priority:1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {964f1415-aaef-4b48-af02-711e20a49a83}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:964f1415-aaef-4b48-af02-711e20a49a83}]} mac:0a:58:0a:d9:00:01 networks:{GoSet:[10.217.0.1/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {21ca7e68-1853-4731-afc6-84ae6ce74fe0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:21ca7e68-1853-4731-afc6-84ae6ce74fe0}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.159663710+00:00 stderr F I0120 10:56:53.159585 30089 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.217.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:crc:10.217.0.2 k8s.ovn.org/name:crc k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.217.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cb395e67-c62e-4261-95eb-0e35bc8ae4a2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.159728022+00:00 stderr F I0120 10:56:53.159707 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:cb395e67-c62e-4261-95eb-0e35bc8ae4a2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.159779323+00:00 stderr F I0120 10:56:53.159746 30089 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.217.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:crc:10.217.0.2 k8s.ovn.org/name:crc k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.217.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column 
_uuid == {cb395e67-c62e-4261-95eb-0e35bc8ae4a2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:cb395e67-c62e-4261-95eb-0e35bc8ae4a2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.160187534+00:00 stderr F I0120 10:56:53.160139 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.217.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2f5651de-4460-49ba-9c3a-86c0e7b031d0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.160203834+00:00 stderr F I0120 10:56:53.160179 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2f5651de-4460-49ba-9c3a-86c0e7b031d0}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.160216925+00:00 stderr F I0120 10:56:53.160192 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.217.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2f5651de-4460-49ba-9c3a-86c0e7b031d0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2f5651de-4460-49ba-9c3a-86c0e7b031d0}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.160765390+00:00 stderr F I0120 10:56:53.160735 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[b6:dc:d9:26:03:d4 10.217.0.2]} options:{GoMap:map[]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df0c3d3a-8474-4c1f-ac97-c1820ebf2712}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.160823082+00:00 stderr F I0120 10:56:53.160804 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.160865113+00:00 stderr F I0120 10:56:53.160840 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[b6:dc:d9:26:03:d4 10.217.0.2]} options:{GoMap:map[]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df0c3d3a-8474-4c1f-ac97-c1820ebf2712}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.161183281+00:00 stderr F I0120 10:56:53.161140 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.161183281+00:00 stderr F I0120 10:56:53.161162 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Port_Group Row:map[] 
Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.161446558+00:00 stderr F I0120 10:56:53.161429 30089 switch.go:50] Hybridoverlay port does not exist for node crc 2026-01-20T10:56:53.161492800+00:00 stderr F I0120 10:56:53.161470 30089 switch.go:59] haveMP true haveHO false ManagementPortAddress 10.217.0.2/23 HybridOverlayAddressOA 10.217.0.3/23 2026-01-20T10:56:53.161600462+00:00 stderr F I0120 10:56:53.161572 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.161641993+00:00 stderr F I0120 10:56:53.161621 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.162033314+00:00 stderr F I0120 10:56:53.162010 30089 hybrid.go:140] Removing node crc hybrid overlay port 2026-01-20T10:56:53.162617649+00:00 stderr F I0120 10:56:53.162531 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.220 physical_ips:38.102.83.220]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} 
{GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.162662891+00:00 stderr F I0120 10:56:53.162609 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.220 physical_ips:38.102.83.220]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.164092650+00:00 stderr F I0120 10:56:53.164035 30089 ovs.go:162] Exec(17): stdout: "internal,ovn-k8s-mp0\n" 2026-01-20T10:56:53.164148951+00:00 stderr F I0120 10:56:53.164054 30089 ovs.go:163] Exec(17): stderr: "" 2026-01-20T10:56:53.164148951+00:00 stderr F I0120 10:56:53.164139 30089 ovs.go:159] Exec(18): /usr/bin/ovs-vsctl --timeout=15 --no-headings --data bare --format csv --columns type,name find Interface name=ovn-k8s-mp0_0 2026-01-20T10:56:53.165000104+00:00 stderr F I0120 10:56:53.164934 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column 
_uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.165013004+00:00 stderr F I0120 10:56:53.164992 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.165079156+00:00 stderr F I0120 10:56:53.165011 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.165436785+00:00 stderr F I0120 10:56:53.165385 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.165466366+00:00 stderr F I0120 10:56:53.165440 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column 
_uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.165475666+00:00 stderr F I0120 10:56:53.165457 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.166010021+00:00 stderr F I0120 10:56:53.165931 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.166150405+00:00 stderr F I0120 10:56:53.166033 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.166189096+00:00 stderr F I0120 10:56:53.166140 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.166730781+00:00 stderr F I0120 10:56:53.166692 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:0d:e7:11 networks:{GoSet:[38.102.83.220/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.166807673+00:00 stderr F I0120 10:56:53.166780 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.166852994+00:00 stderr F I0120 10:56:53.166824 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:0d:e7:11 networks:{GoSet:[38.102.83.220/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.167474060+00:00 stderr F I0120 10:56:53.167407 30089 model_client.go:381] Update operations generated as: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.167704956+00:00 stderr F I0120 10:56:53.167652 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:0d:e7:11]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.167756978+00:00 stderr F I0120 10:56:53.167724 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.167829520+00:00 stderr F I0120 10:56:53.167757 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:0d:e7:11]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} 
type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.168386275+00:00 stderr F I0120 10:56:53.168356 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.168436606+00:00 stderr F I0120 10:56:53.168415 30089 transact.go:42] Configuring OVN: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.168917660+00:00 stderr F I0120 10:56:53.168859 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.168975331+00:00 stderr F I0120 10:56:53.168932 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] 
Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.169005392+00:00 stderr F I0120 10:56:53.168965 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.169454574+00:00 stderr F I0120 10:56:53.169427 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.169518946+00:00 stderr F I0120 10:56:53.169498 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.169559537+00:00 stderr F I0120 10:56:53.169535 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes 
Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.169958847+00:00 stderr F I0120 10:56:53.169901 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.170016419+00:00 stderr F I0120 10:56:53.169982 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.170055770+00:00 stderr F I0120 10:56:53.170012 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.170964524+00:00 stderr F I0120 10:56:53.170934 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.171023956+00:00 stderr F I0120 10:56:53.171005 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.171172960+00:00 stderr F I0120 10:56:53.171041 30089 transact.go:42] Configuring OVN: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.171742616+00:00 stderr F I0120 10:56:53.171717 30089 ovs.go:162] Exec(18): stdout: "" 2026-01-20T10:56:53.171742616+00:00 stderr F I0120 10:56:53.171732 30089 ovs.go:163] Exec(18): stderr: "" 2026-01-20T10:56:53.171884449+00:00 stderr F I0120 10:56:53.171867 30089 ovs.go:159] Exec(19): /usr/bin/ovs-vsctl --timeout=15 -- --if-exists del-port br-int k8s-crc -- --may-exist add-port br-int ovn-k8s-mp0 -- set interface ovn-k8s-mp0 type=internal mtu_request=1400 external-ids:iface-id=k8s-crc 2026-01-20T10:56:53.187294704+00:00 stderr F I0120 10:56:53.187222 30089 ovs.go:162] Exec(19): stdout: "" 2026-01-20T10:56:53.187294704+00:00 stderr F I0120 10:56:53.187252 30089 ovs.go:163] Exec(19): stderr: "" 2026-01-20T10:56:53.187294704+00:00 stderr F I0120 10:56:53.187269 30089 ovs.go:159] Exec(20): /usr/bin/ovs-vsctl --timeout=15 
--if-exists get interface ovn-k8s-mp0 mac_in_use 2026-01-20T10:56:53.191327873+00:00 stderr F I0120 10:56:53.191261 30089 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0739e536-1258-4533-951f-bde9a4f368fa}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.191366424+00:00 stderr F I0120 10:56:53.191332 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:0739e536-1258-4533-951f-bde9a4f368fa}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.191401905+00:00 stderr F I0120 10:56:53.191358 30089 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0739e536-1258-4533-951f-bde9a4f368fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:0739e536-1258-4533-951f-bde9a4f368fa}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.192288398+00:00 stderr F I0120 10:56:53.192241 30089 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {132ccc76-da9c-417c-9a5a-81de827507ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.192303709+00:00 stderr F I0120 10:56:53.192285 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete 
Value:{GoSet:[{GoUUID:132ccc76-da9c-417c-9a5a-81de827507ff}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.192324719+00:00 stderr F I0120 10:56:53.192301 30089 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {132ccc76-da9c-417c-9a5a-81de827507ff}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:132ccc76-da9c-417c-9a5a-81de827507ff}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.192931555+00:00 stderr F I0120 10:56:53.192883 30089 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4481e80-e6bc-4540-ba27-c21bdf17ac1c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.192974276+00:00 stderr F I0120 10:56:53.192935 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:c4481e80-e6bc-4540-ba27-c21bdf17ac1c}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.192983177+00:00 stderr F I0120 10:56:53.192958 30089 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4481e80-e6bc-4540-ba27-c21bdf17ac1c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete 
Value:{GoSet:[{GoUUID:c4481e80-e6bc-4540-ba27-c21bdf17ac1c}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.193432248+00:00 stderr F I0120 10:56:53.193392 30089 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79d917-5663-48ca-928f-d0734a603a69}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.193455849+00:00 stderr F I0120 10:56:53.193425 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:9a79d917-5663-48ca-928f-d0734a603a69}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.193464289+00:00 stderr F I0120 10:56:53.193441 30089 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79d917-5663-48ca-928f-d0734a603a69}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:9a79d917-5663-48ca-928f-d0734a603a69}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.193868470+00:00 stderr F I0120 10:56:53.193838 30089 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd1ff0d5-52b0-404b-9e8c-f07a4db4b42b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.193899661+00:00 stderr F I0120 10:56:53.193870 30089 model_client.go:397] Mutate 
operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:fd1ff0d5-52b0-404b-9e8c-f07a4db4b42b}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.193909021+00:00 stderr F I0120 10:56:53.193890 30089 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd1ff0d5-52b0-404b-9e8c-f07a4db4b42b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:fd1ff0d5-52b0-404b-9e8c-f07a4db4b42b}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.196878362+00:00 stderr F I0120 10:56:53.196839 30089 ovs.go:162] Exec(20): stdout: "\"b6:dc:d9:26:03:d4\"\n" 2026-01-20T10:56:53.196878362+00:00 stderr F I0120 10:56:53.196857 30089 ovs.go:163] Exec(20): stderr: "" 2026-01-20T10:56:53.196878362+00:00 stderr F I0120 10:56:53.196872 30089 ovs.go:159] Exec(21): /usr/bin/ovs-vsctl --timeout=15 set interface ovn-k8s-mp0 mac=b6\:dc\:d9\:26\:03\:d4 2026-01-20T10:56:53.206755807+00:00 stderr F I0120 10:56:53.206667 30089 ovs.go:162] Exec(21): stdout: "" 2026-01-20T10:56:53.206755807+00:00 stderr F I0120 10:56:53.206725 30089 ovs.go:163] Exec(21): stderr: "" 2026-01-20T10:56:53.209694676+00:00 stderr F I0120 10:56:53.209648 30089 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.0.0/23 Src: 10.217.0.2 Gw: Flags: [] Table: 254 Realm: 0}" 2026-01-20T10:56:53.209694676+00:00 stderr F I0120 10:56:53.209670 30089 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.1.255/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 
Realm: 0}" 2026-01-20T10:56:53.209694676+00:00 stderr F I0120 10:56:53.209678 30089 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.0.2/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2026-01-20T10:56:53.211644799+00:00 stderr F I0120 10:56:53.211565 30089 base_network_controller.go:458] When adding node crc for network default, found 83 pods to add to retryPods 2026-01-20T10:56:53.211644799+00:00 stderr F I0120 10:56:53.211601 30089 base_network_controller.go:464] Adding pod cert-manager/cert-manager-758df9885c-cq6zm to retryPods for network default 2026-01-20T10:56:53.211644799+00:00 stderr F I0120 10:56:53.211615 30089 base_network_controller.go:464] Adding pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx to retryPods for network default 2026-01-20T10:56:53.211644799+00:00 stderr F I0120 10:56:53.211623 30089 base_network_controller.go:464] Adding pod cert-manager/cert-manager-webhook-855f577f79-7bdxq to retryPods for network default 2026-01-20T10:56:53.211644799+00:00 stderr F I0120 10:56:53.211630 30089 base_network_controller.go:464] Adding pod hostpath-provisioner/csi-hostpathplugin-hvm8g to retryPods for network default 2026-01-20T10:56:53.211644799+00:00 stderr F I0120 10:56:53.211639 30089 base_network_controller.go:464] Adding pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m to retryPods for network default 2026-01-20T10:56:53.211687490+00:00 stderr F I0120 10:56:53.211646 30089 base_network_controller.go:464] Adding pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp to retryPods for network default 2026-01-20T10:56:53.211687490+00:00 stderr F I0120 10:56:53.211653 30089 base_network_controller.go:464] Adding pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 to retryPods for network default 2026-01-20T10:56:53.211696600+00:00 stderr F I0120 10:56:53.211687 30089 base_network_controller.go:464] Adding pod 
openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b to retryPods for network default 2026-01-20T10:56:53.211704101+00:00 stderr F I0120 10:56:53.211697 30089 base_network_controller.go:464] Adding pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 to retryPods for network default 2026-01-20T10:56:53.211713011+00:00 stderr F I0120 10:56:53.211707 30089 base_network_controller.go:464] Adding pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg to retryPods for network default 2026-01-20T10:56:53.211720191+00:00 stderr F I0120 10:56:53.211714 30089 base_network_controller.go:464] Adding pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 to retryPods for network default 2026-01-20T10:56:53.211727271+00:00 stderr F I0120 10:56:53.211722 30089 base_network_controller.go:464] Adding pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc to retryPods for network default 2026-01-20T10:56:53.211747752+00:00 stderr F I0120 10:56:53.211729 30089 base_network_controller.go:464] Adding pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 to retryPods for network default 2026-01-20T10:56:53.211747752+00:00 stderr F I0120 10:56:53.211744 30089 base_network_controller.go:464] Adding pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd to retryPods for network default 2026-01-20T10:56:53.211757782+00:00 stderr F I0120 10:56:53.211752 30089 base_network_controller.go:464] Adding pod openshift-console/console-644bb77b49-5x5xk to retryPods for network default 2026-01-20T10:56:53.211766582+00:00 stderr F I0120 10:56:53.211760 30089 base_network_controller.go:464] Adding pod openshift-console/downloads-65476884b9-9wcvx to retryPods for network default 2026-01-20T10:56:53.211800213+00:00 stderr F I0120 10:56:53.211772 30089 base_network_controller.go:464] Adding pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z to 
retryPods for network default 2026-01-20T10:56:53.211800213+00:00 stderr F I0120 10:56:53.211786 30089 base_network_controller.go:464] Adding pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf to retryPods for network default 2026-01-20T10:56:53.211800213+00:00 stderr F I0120 10:56:53.211794 30089 base_network_controller.go:464] Adding pod openshift-dns-operator/dns-operator-75f687757b-nz2xb to retryPods for network default 2026-01-20T10:56:53.211815034+00:00 stderr F I0120 10:56:53.211804 30089 base_network_controller.go:464] Adding pod openshift-dns/dns-default-gbw49 to retryPods for network default 2026-01-20T10:56:53.211815034+00:00 stderr F I0120 10:56:53.211811 30089 base_network_controller.go:464] Adding pod openshift-dns/node-resolver-dn27q to retryPods for network default 2026-01-20T10:56:53.211824084+00:00 stderr F I0120 10:56:53.211818 30089 base_network_controller.go:464] Adding pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg to retryPods for network default 2026-01-20T10:56:53.211831124+00:00 stderr F I0120 10:56:53.211826 30089 base_network_controller.go:464] Adding pod openshift-etcd/etcd-crc to retryPods for network default 2026-01-20T10:56:53.211855315+00:00 stderr F I0120 10:56:53.211837 30089 base_network_controller.go:464] Adding pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv to retryPods for network default 2026-01-20T10:56:53.211855315+00:00 stderr F I0120 10:56:53.211846 30089 base_network_controller.go:464] Adding pod openshift-image-registry/image-registry-75b7bb6564-ln84v to retryPods for network default 2026-01-20T10:56:53.211862925+00:00 stderr F I0120 10:56:53.211856 30089 base_network_controller.go:464] Adding pod openshift-image-registry/node-ca-l92hr to retryPods for network default 2026-01-20T10:56:53.211872925+00:00 stderr F I0120 10:56:53.211866 30089 base_network_controller.go:464] Adding pod openshift-ingress-canary/ingress-canary-2vhcn to retryPods for network default 
2026-01-20T10:56:53.211881725+00:00 stderr F I0120 10:56:53.211874 30089 base_network_controller.go:464] Adding pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t to retryPods for network default 2026-01-20T10:56:53.211890196+00:00 stderr F I0120 10:56:53.211881 30089 base_network_controller.go:464] Adding pod openshift-ingress/router-default-5c9bf7bc58-6jctv to retryPods for network default 2026-01-20T10:56:53.211910216+00:00 stderr F I0120 10:56:53.211888 30089 base_network_controller.go:464] Adding pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 to retryPods for network default 2026-01-20T10:56:53.211910216+00:00 stderr F I0120 10:56:53.211899 30089 base_network_controller.go:464] Adding pod openshift-kube-apiserver/installer-13-crc to retryPods for network default 2026-01-20T10:56:53.211919666+00:00 stderr F I0120 10:56:53.211911 30089 base_network_controller.go:464] Adding pod openshift-kube-apiserver/kube-apiserver-crc to retryPods for network default 2026-01-20T10:56:53.211928507+00:00 stderr F I0120 10:56:53.211918 30089 base_network_controller.go:464] Adding pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb to retryPods for network default 2026-01-20T10:56:53.211937347+00:00 stderr F I0120 10:56:53.211930 30089 base_network_controller.go:464] Adding pod openshift-kube-controller-manager/kube-controller-manager-crc to retryPods for network default 2026-01-20T10:56:53.211948017+00:00 stderr F I0120 10:56:53.211942 30089 base_network_controller.go:464] Adding pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 to retryPods for network default 2026-01-20T10:56:53.211981018+00:00 stderr F I0120 10:56:53.211954 30089 base_network_controller.go:464] Adding pod openshift-kube-scheduler/openshift-kube-scheduler-crc to retryPods for network default 2026-01-20T10:56:53.211981018+00:00 stderr F I0120 10:56:53.211973 30089 
base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr to retryPods for network default 2026-01-20T10:56:53.211991648+00:00 stderr F I0120 10:56:53.211981 30089 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv to retryPods for network default 2026-01-20T10:56:53.211991648+00:00 stderr F I0120 10:56:53.211987 30089 base_network_controller.go:464] Adding pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw to retryPods for network default 2026-01-20T10:56:53.212005999+00:00 stderr F I0120 10:56:53.211995 30089 base_network_controller.go:464] Adding pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb to retryPods for network default 2026-01-20T10:56:53.212014759+00:00 stderr F I0120 10:56:53.212007 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc to retryPods for network default 2026-01-20T10:56:53.212022719+00:00 stderr F I0120 10:56:53.212014 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh to retryPods for network default 2026-01-20T10:56:53.212029719+00:00 stderr F I0120 10:56:53.212021 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-daemon-zpnhg to retryPods for network default 2026-01-20T10:56:53.212038249+00:00 stderr F I0120 10:56:53.212032 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm to retryPods for network default 2026-01-20T10:56:53.212045140+00:00 stderr F I0120 10:56:53.212039 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-server-v65wr to retryPods for network default 2026-01-20T10:56:53.212052150+00:00 stderr F I0120 10:56:53.212046 30089 
base_network_controller.go:464] Adding pod openshift-marketplace/certified-operators-mpjb7 to retryPods for network default 2026-01-20T10:56:53.212076220+00:00 stderr F I0120 10:56:53.212053 30089 base_network_controller.go:464] Adding pod openshift-marketplace/community-operators-6m4w2 to retryPods for network default 2026-01-20T10:56:53.212088851+00:00 stderr F I0120 10:56:53.212082 30089 base_network_controller.go:464] Adding pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc to retryPods for network default 2026-01-20T10:56:53.212097601+00:00 stderr F I0120 10:56:53.212090 30089 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-marketplace-2mx7j to retryPods for network default 2026-01-20T10:56:53.212126322+00:00 stderr F I0120 10:56:53.212103 30089 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-operators-2nxg8 to retryPods for network default 2026-01-20T10:56:53.212126322+00:00 stderr F I0120 10:56:53.212122 30089 base_network_controller.go:464] Adding pod openshift-multus/multus-additional-cni-plugins-bzj2p to retryPods for network default 2026-01-20T10:56:53.212135662+00:00 stderr F I0120 10:56:53.212130 30089 base_network_controller.go:464] Adding pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc to retryPods for network default 2026-01-20T10:56:53.212142702+00:00 stderr F I0120 10:56:53.212138 30089 base_network_controller.go:464] Adding pod openshift-multus/multus-q88th to retryPods for network default 2026-01-20T10:56:53.212151232+00:00 stderr F I0120 10:56:53.212145 30089 base_network_controller.go:464] Adding pod openshift-multus/network-metrics-daemon-qdfr4 to retryPods for network default 2026-01-20T10:56:53.212161833+00:00 stderr F I0120 10:56:53.212156 30089 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 to retryPods for network default 2026-01-20T10:56:53.212170513+00:00 stderr F I0120 10:56:53.212163 30089 
base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-target-v54bt to retryPods for network default 2026-01-20T10:56:53.212179273+00:00 stderr F I0120 10:56:53.212170 30089 base_network_controller.go:464] Adding pod openshift-network-node-identity/network-node-identity-7xghp to retryPods for network default 2026-01-20T10:56:53.212187953+00:00 stderr F I0120 10:56:53.212180 30089 base_network_controller.go:464] Adding pod openshift-network-operator/iptables-alerter-wwpnd to retryPods for network default 2026-01-20T10:56:53.212201654+00:00 stderr F I0120 10:56:53.212188 30089 base_network_controller.go:464] Adding pod openshift-network-operator/network-operator-767c585db5-zd56b to retryPods for network default 2026-01-20T10:56:53.212201654+00:00 stderr F I0120 10:56:53.212196 30089 base_network_controller.go:464] Adding pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd to retryPods for network default 2026-01-20T10:56:53.212209254+00:00 stderr F I0120 10:56:53.212202 30089 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf to retryPods for network default 2026-01-20T10:56:53.212245905+00:00 stderr F I0120 10:56:53.212215 30089 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh to retryPods for network default 2026-01-20T10:56:53.212245905+00:00 stderr F I0120 10:56:53.212227 30089 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 to retryPods for network default 2026-01-20T10:56:53.212245905+00:00 stderr F I0120 10:56:53.212234 30089 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz to retryPods for network default 2026-01-20T10:56:53.212245905+00:00 stderr F I0120 10:56:53.212242 30089 base_network_controller.go:464] Adding pod 
openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b to retryPods for network default 2026-01-20T10:56:53.212286896+00:00 stderr F I0120 10:56:53.212256 30089 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-node-sdkgg to retryPods for network default 2026-01-20T10:56:53.212286896+00:00 stderr F I0120 10:56:53.212271 30089 base_network_controller.go:464] Adding pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs to retryPods for network default 2026-01-20T10:56:53.212286896+00:00 stderr F I0120 10:56:53.212278 30089 base_network_controller.go:464] Adding pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz to retryPods for network default 2026-01-20T10:56:53.212298186+00:00 stderr F I0120 10:56:53.212288 30089 base_network_controller.go:464] Adding pod openshift-service-ca/service-ca-666f99b6f-kk8kg to retryPods for network default 2026-01-20T10:56:53.212306767+00:00 stderr F I0120 10:56:53.212297 30089 obj_retry.go:233] Iterate retry objects requested (resource *v1.Pod) 2026-01-20T10:56:53.212758959+00:00 stderr F I0120 10:56:53.212684 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Encap Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa ip:192.168.126.11 options:{GoMap:map[csum:true]} type:geneve] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4ffeb60a-fdfe-4f88-8208-1eba752e78d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.212800270+00:00 stderr F I0120 10:56:53.212768 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Chassis Row:map[encaps:{GoSet:[{GoUUID:4ffeb60a-fdfe-4f88-8208-1eba752e78d6}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.212836661+00:00 stderr F I0120 10:56:53.212804 30089 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Chassis Row:map[] Rows:[] Columns:[] Mutations:[{Column:other_config Mutator:delete Value:{GoSet:[is-remote]}} {Column:other_config Mutator:insert Value:{GoMap:map[is-remote:false]}}] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.212880022+00:00 stderr F I0120 10:56:53.212828 30089 transact.go:42] Configuring OVN: [{Op:update Table:Encap Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa ip:192.168.126.11 options:{GoMap:map[csum:true]} type:geneve] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4ffeb60a-fdfe-4f88-8208-1eba752e78d6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Chassis Row:map[encaps:{GoSet:[{GoUUID:4ffeb60a-fdfe-4f88-8208-1eba752e78d6}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Chassis Row:map[] Rows:[] Columns:[] Mutations:[{Column:other_config Mutator:delete Value:{GoSet:[is-remote]}} {Column:other_config Mutator:insert Value:{GoMap:map[is-remote:false]}}] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.213510569+00:00 stderr F I0120 10:56:53.213479 30089 zone_ic_handler.go:220] Creating interconnect resources for local zone node crc for the network default 2026-01-20T10:56:53.213616222+00:00 stderr F I0120 10:56:53.213565 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:58:00:02 name:rtots-crc networks:{GoSet:[100.88.0.2/16]} options:{GoMap:map[mcast_flood:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e084d304-085a-4130-a762-c61b2dfff5af}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.213661244+00:00 stderr F 
I0120 10:56:53.213633 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e084d304-085a-4130-a762-c61b2dfff5af}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.213677194+00:00 stderr F I0120 10:56:53.213655 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:58:00:02 name:rtots-crc networks:{GoSet:[100.88.0.2/16]} options:{GoMap:map[mcast_flood:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e084d304-085a-4130-a762-c61b2dfff5af}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e084d304-085a-4130-a762-c61b2dfff5af}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.218657427+00:00 stderr F I0120 10:56:53.218577 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[requested-tnl-key:2 router-port:rtots-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff13d99b-65b5-4b3b-bc27-534de830144b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.218698068+00:00 stderr F I0120 10:56:53.218668 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ff13d99b-65b5-4b3b-bc27-534de830144b}]}}] Timeout: Where:[where column _uuid == {d7c11e7c-4a6b-41fe-87a6-5c70659238bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.218724289+00:00 stderr F I0120 10:56:53.218690 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[requested-tnl-key:2 router-port:rtots-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff13d99b-65b5-4b3b-bc27-534de830144b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ff13d99b-65b5-4b3b-bc27-534de830144b}]}}] Timeout: Where:[where column _uuid == {d7c11e7c-4a6b-41fe-87a6-5c70659238bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.219525501+00:00 stderr F I0120 10:56:53.219481 30089 obj_retry.go:541] Creating *v1.Node crc took: 63.346135ms 2026-01-20T10:56:53.219543492+00:00 stderr F I0120 10:56:53.219525 30089 factory.go:988] Added *v1.Node event handler 2 2026-01-20T10:56:53.219571562+00:00 stderr F I0120 10:56:53.219549 30089 ovn.go:449] Starting OVN Service Controller: Using Endpoint Slices 2026-01-20T10:56:53.219750967+00:00 stderr F I0120 10:56:53.219719 30089 services_controller.go:168] Starting controller ovn-lb-controller 2026-01-20T10:56:53.219762918+00:00 stderr F I0120 10:56:53.219753 30089 services_controller.go:176] Waiting for node tracker handler to sync 2026-01-20T10:56:53.219770838+00:00 stderr F I0120 10:56:53.219765 30089 shared_informer.go:311] Waiting for caches to sync for node-tracker-controller 2026-01-20T10:56:53.219815429+00:00 stderr F I0120 10:56:53.219794 30089 node_tracker.go:204] Processing possible switch / router updates for node crc 2026-01-20T10:56:53.219920252+00:00 stderr F I0120 10:56:53.219902 30089 node_tracker.go:165] Node crc switch + router changed, syncing services 2026-01-20T10:56:53.219930832+00:00 stderr F I0120 10:56:53.219919 30089 services_controller.go:519] Full service sync 
requested 2026-01-20T10:56:53.219964323+00:00 stderr F I0120 10:56:53.219948 30089 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.2/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2026-01-20T10:56:53.219973653+00:00 stderr F I0120 10:56:53.219964 30089 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.1.255/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2026-01-20T10:56:53.219981883+00:00 stderr F I0120 10:56:53.219974 30089 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.0/23 Src: 10.217.0.2 Gw: Flags: [] Table: 254 Realm: 0}" 2026-01-20T10:56:53.219990104+00:00 stderr F I0120 10:56:53.219985 30089 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 0 Realm: 0} 2026-01-20T10:56:53.220454546+00:00 stderr F W0120 10:56:53.220412 30089 base_network_controller_pods.go:88] Already allocated IPs: 10.217.0.55 for pod: openshift-kube-controller-manager_revision-pruner-8-crc in phase: 0xc000a20328 on switch: crc 2026-01-20T10:56:53.220472766+00:00 stderr F I0120 10:56:53.220466 30089 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0} 2026-01-20T10:56:53.220482227+00:00 stderr F I0120 10:56:53.220477 30089 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 0 Realm: 0} 2026-01-20T10:56:53.220557669+00:00 stderr F I0120 10:56:53.220517 30089 ovs.go:159] Exec(22): /usr/sbin/sysctl -w net.ipv4.conf.ovn-k8s-mp0.forwarding=1 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.220795 30089 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0} 2026-01-20T10:56:53.223765905+00:00 stderr F 
I0120 10:56:53.220810 30089 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0}" 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.220820 30089 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0}" 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.221999 30089 ovs.go:162] Exec(22): stdout: "net.ipv4.conf.ovn-k8s-mp0.forwarding = 1\n" 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222010 30089 ovs.go:163] Exec(22): stderr: "" 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222544 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222576 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222598 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in node crc 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222638 30089 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] creating logical port openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr for pod on switch crc 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222750 30089 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 
10:56:53.222772 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222784 30089 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222805 30089 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd): port not found in cache or already marked for removal 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222811 30089 pods.go:151] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2026-01-20T10:56:53.223765905+00:00 stderr F W0120 10:56:53.222898 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd. 
Using logical switch crc port uuid and addrs [10.217.0.98/23] 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222919 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222972 30089 address_set.go:613] (369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.98] from address set 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.222978 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223006 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.98]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223047 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] 
Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223212 30089 default_network_controller.go:655] Recording add event on pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223233 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223246 30089 ovn.go:134] Ensuring zone local for Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in node crc 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223290 30089 base_network_controller_pods.go:476] [default/openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] creating logical port openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m for pod on switch crc 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223458 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.98]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223509 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 
10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223559 30089 default_network_controller.go:655] Recording add event on pod openshift-image-registry/image-registry-75b7bb6564-ln84v 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223571 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/image-registry-75b7bb6564-ln84v 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223565 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223581 30089 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/image-registry-75b7bb6564-ln84v in node crc 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223602 30089 base_network_controller_pods.go:476] [default/openshift-image-registry/image-registry-75b7bb6564-ln84v] creating logical port openshift-image-registry_image-registry-75b7bb6564-ln84v for pod on switch crc 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223662 30089 default_network_controller.go:655] Recording add event on pod openshift-network-node-identity/network-node-identity-7xghp 2026-01-20T10:56:53.223765905+00:00 stderr F I0120 10:56:53.223696 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2026-01-20T10:56:53.223765905+00:00 stderr P I0120 10:56:53.223671 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] 
Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durabl 2026-01-20T10:56:53.223827166+00:00 stderr F e: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223827166+00:00 stderr F I0120 10:56:53.223722 30089 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2026-01-20T10:56:53.223827166+00:00 stderr F I0120 10:56:53.223732 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2026-01-20T10:56:53.223827166+00:00 stderr F I0120 10:56:53.223742 30089 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in node crc 2026-01-20T10:56:53.223827166+00:00 stderr F I0120 10:56:53.223772 30089 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] creating logical port openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf for pod on switch crc 2026-01-20T10:56:53.223924149+00:00 stderr F I0120 10:56:53.223853 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} options:{GoMap:map[iface-id-ver:7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.223935249+00:00 stderr F I0120 10:56:53.223911 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224011341+00:00 stderr F I0120 10:56:53.223966 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224011341+00:00 stderr F I0120 10:56:53.223979 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224099584+00:00 stderr F I0120 10:56:53.224029 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224170845+00:00 stderr F I0120 10:56:53.224113 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.224298249+00:00 stderr F I0120 10:56:53.224239 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224311219+00:00 stderr F I0120 10:56:53.224294 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224419762+00:00 stderr F I0120 10:56:53.224366 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.37 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5b16eb4b-47a3-4eb4-8e75-fbe7791a8316}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224472684+00:00 stderr F I0120 10:56:53.224416 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5b16eb4b-47a3-4eb4-8e75-fbe7791a8316}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224472684+00:00 stderr F I0120 10:56:53.224386 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 
10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224552466+00:00 stderr F I0120 10:56:53.224454 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} options:{GoMap:map[iface-id-ver:7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] 
Timeout: Where:[where column _uuid == {d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.37 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5b16eb4b-47a3-4eb4-8e75-fbe7791a8316}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5b16eb4b-47a3-4eb4-8e75-fbe7791a8316}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224673379+00:00 stderr F I0120 10:56:53.224637 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2026-01-20T10:56:53.224673379+00:00 stderr F I0120 10:56:53.224652 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2026-01-20T10:56:53.224673379+00:00 stderr F I0120 10:56:53.224662 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in node crc 2026-01-20T10:56:53.224688719+00:00 stderr F I0120 
10:56:53.224683 30089 base_network_controller_pods.go:476] [default/openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] creating logical port openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb for pod on switch crc 2026-01-20T10:56:53.224881115+00:00 stderr F I0120 10:56:53.224837 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-11-crc 2026-01-20T10:56:53.224881115+00:00 stderr F I0120 10:56:53.224857 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-11-crc 2026-01-20T10:56:53.224881115+00:00 stderr F I0120 10:56:53.224866 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-11-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.224894586+00:00 stderr F I0120 10:56:53.224882 30089 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-11-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.224894586+00:00 stderr F I0120 10:56:53.224888 30089 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-11-crc 2026-01-20T10:56:53.224914506+00:00 stderr F I0120 10:56:53.224883 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.224969098+00:00 stderr F I0120 10:56:53.224927 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.225736428+00:00 stderr F I0120 10:56:53.224976 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.225805100+00:00 stderr F I0120 10:56:53.223710 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-7xghp in node crc 2026-01-20T10:56:53.225861271+00:00 stderr F I0120 10:56:53.225817 30089 obj_retry.go:541] Creating *v1.Pod openshift-network-node-identity/network-node-identity-7xghp took: 2.109647ms 2026-01-20T10:56:53.225861271+00:00 stderr F I0120 10:56:53.225830 30089 default_network_controller.go:699] Recording success event on pod openshift-network-node-identity/network-node-identity-7xghp 2026-01-20T10:56:53.225861271+00:00 stderr F I0120 10:56:53.225839 30089 default_network_controller.go:655] Recording add event on pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2026-01-20T10:56:53.225861271+00:00 stderr F I0120 10:56:53.225849 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2026-01-20T10:56:53.225886742+00:00 stderr F I0120 10:56:53.225172 30089 default_network_controller.go:655] Recording add event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2026-01-20T10:56:53.225886742+00:00 stderr F I0120 10:56:53.225882 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 
2026-01-20T10:56:53.225894862+00:00 stderr F I0120 10:56:53.225227 30089 default_network_controller.go:655] Recording add event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2026-01-20T10:56:53.225902333+00:00 stderr F I0120 10:56:53.225895 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2026-01-20T10:56:53.225902333+00:00 stderr F I0120 10:56:53.225241 30089 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m 2026-01-20T10:56:53.225910303+00:00 stderr F I0120 10:56:53.225903 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m 2026-01-20T10:56:53.225918353+00:00 stderr F I0120 10:56:53.225909 30089 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.225918353+00:00 stderr F I0120 10:56:53.225254 30089 default_network_controller.go:655] Recording add event on pod openshift-dns/dns-default-gbw49 2026-01-20T10:56:53.225926533+00:00 stderr F I0120 10:56:53.225921 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-dns/dns-default-gbw49 2026-01-20T10:56:53.225926533+00:00 stderr F I0120 10:56:53.223519 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.225983375+00:00 stderr F I0120 10:56:53.225953 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.226123838+00:00 stderr F I0120 10:56:53.225981 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.226310053+00:00 stderr F I0120 10:56:53.225303 30089 default_network_controller.go:655] Recording 
add event on pod openshift-marketplace/redhat-marketplace-2mx7j 2026-01-20T10:56:53.226310053+00:00 stderr F I0120 10:56:53.226267 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/redhat-marketplace-2mx7j 2026-01-20T10:56:53.226310053+00:00 stderr F I0120 10:56:53.225306 30089 default_network_controller.go:655] Recording add event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2026-01-20T10:56:53.226310053+00:00 stderr F I0120 10:56:53.226283 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2026-01-20T10:56:53.226310053+00:00 stderr F I0120 10:56:53.225312 30089 port_cache.go:96] port-cache(openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m): added port &{name:openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m uuid:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26 logicalSwitch:crc ips:[0xc0008c8ab0] mac:[10 88 10 217 0 6] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.6/23] and MAC: 0a:58:0a:d9:00:06 2026-01-20T10:56:53.226310053+00:00 stderr F I0120 10:56:53.225324 30089 default_network_controller.go:655] Recording add event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2026-01-20T10:56:53.226310053+00:00 stderr F W0120 10:56:53.225453 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-11-crc. 
Using logical switch crc port uuid and addrs [10.217.0.85/23] 2026-01-20T10:56:53.226331194+00:00 stderr F I0120 10:56:53.226319 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2026-01-20T10:56:53.226437517+00:00 stderr F I0120 10:56:53.226399 30089 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.85] from address set 2026-01-20T10:56:53.226437517+00:00 stderr F I0120 10:56:53.225620 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.226437517+00:00 stderr F I0120 10:56:53.226416 30089 port_cache.go:96] port-cache(openshift-image-registry_image-registry-75b7bb6564-ln84v): added port &{name:openshift-image-registry_image-registry-75b7bb6564-ln84v uuid:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae logicalSwitch:crc ips:[0xc000cebd70] mac:[10 88 10 217 0 37] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.37/23] and MAC: 0a:58:0a:d9:00:25 2026-01-20T10:56:53.226451717+00:00 stderr F I0120 10:56:53.226434 30089 pods.go:220] [openshift-image-registry/image-registry-75b7bb6564-ln84v] addLogicalPort took 2.837357ms, libovsdb time 1.873131ms 2026-01-20T10:56:53.226451717+00:00 stderr F I0120 10:56:53.226442 30089 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/image-registry-75b7bb6564-ln84v took: 2.862037ms 2026-01-20T10:56:53.226451717+00:00 stderr F I0120 10:56:53.225671 30089 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd, ips: 10.217.0.98 2026-01-20T10:56:53.226451717+00:00 stderr F I0120 10:56:53.226434 
30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.85]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.226542119+00:00 stderr F I0120 10:56:53.226473 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.85]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.226542119+00:00 stderr F I0120 10:56:53.226465 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.226542119+00:00 stderr F I0120 10:56:53.226402 30089 pods.go:220] [openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] addLogicalPort took 3.012931ms, libovsdb time 918.146µs 2026-01-20T10:56:53.226542119+00:00 stderr F I0120 10:56:53.226509 30089 obj_retry.go:541] Creating *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m took: 3.262147ms 2026-01-20T10:56:53.226609751+00:00 stderr F I0120 10:56:53.226501 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.227109684+00:00 stderr F I0120 10:56:53.227008 30089 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr): added port &{name:openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr uuid:2d98188d-6d49-48e7-8956-57a5c46efe26 logicalSwitch:crc ips:[0xc0009b4330] mac:[10 88 10 217 0 16] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.16/23] and MAC: 0a:58:0a:d9:00:10 2026-01-20T10:56:53.227109684+00:00 stderr F I0120 10:56:53.227034 30089 pods.go:220] [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] addLogicalPort took 4.412158ms, libovsdb time 
1.015286ms 2026-01-20T10:56:53.227109684+00:00 stderr F I0120 10:56:53.227045 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr took: 4.45537ms 2026-01-20T10:56:53.227428633+00:00 stderr F I0120 10:56:53.227366 30089 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf): added port &{name:openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf uuid:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c logicalSwitch:crc ips:[0xc000e89bc0] mac:[10 88 10 217 0 11] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.11/23] and MAC: 0a:58:0a:d9:00:0b 2026-01-20T10:56:53.227428633+00:00 stderr F I0120 10:56:53.227387 30089 pods.go:220] [openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] addLogicalPort took 3.625657ms, libovsdb time 858.463µs 2026-01-20T10:56:53.227428633+00:00 stderr F I0120 10:56:53.227396 30089 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf took: 3.655768ms 2026-01-20T10:56:53.228943234+00:00 stderr F I0120 10:56:53.228922 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in node crc 2026-01-20T10:56:53.228988775+00:00 stderr F I0120 10:56:53.228975 30089 ovn.go:134] Ensuring zone local for Pod openshift-service-ca/service-ca-666f99b6f-kk8kg in node crc 2026-01-20T10:56:53.229030277+00:00 stderr F I0120 10:56:53.229016 30089 base_network_controller_pods.go:476] [default/openshift-service-ca/service-ca-666f99b6f-kk8kg] creating logical port openshift-service-ca_service-ca-666f99b6f-kk8kg for pod on switch crc 2026-01-20T10:56:53.229130759+00:00 stderr F I0120 10:56:53.229102 30089 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in node crc 2026-01-20T10:56:53.229149850+00:00 stderr F I0120 10:56:53.229130 30089 
obj_retry.go:541] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b took: 37.451µs 2026-01-20T10:56:53.229149850+00:00 stderr F I0120 10:56:53.229143 30089 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2026-01-20T10:56:53.229167250+00:00 stderr F I0120 10:56:53.229158 30089 default_network_controller.go:655] Recording add event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2026-01-20T10:56:53.229202901+00:00 stderr F I0120 10:56:53.229174 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2026-01-20T10:56:53.229202901+00:00 stderr F I0120 10:56:53.229187 30089 ovn.go:134] Ensuring zone local for Pod openshift-dns/dns-default-gbw49 in node crc 2026-01-20T10:56:53.229202901+00:00 stderr F I0120 10:56:53.229193 30089 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in node crc 2026-01-20T10:56:53.229243872+00:00 stderr F I0120 10:56:53.229226 30089 base_network_controller_pods.go:476] [default/openshift-dns/dns-default-gbw49] creating logical port openshift-dns_dns-default-gbw49 for pod on switch crc 2026-01-20T10:56:53.229243872+00:00 stderr F I0120 10:56:53.229232 30089 base_network_controller_pods.go:476] [default/openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] creating logical port openshift-console-operator_console-operator-5dbbc74dc9-cp5cd for pod on switch crc 2026-01-20T10:56:53.229381976+00:00 stderr F I0120 10:56:53.229334 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229416127+00:00 stderr F I0120 10:56:53.229390 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229425127+00:00 stderr F I0120 10:56:53.229413 30089 ovn.go:134] Ensuring zone local for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in node crc 2026-01-20T10:56:53.229468968+00:00 stderr F I0120 10:56:53.229451 30089 base_network_controller_pods.go:476] [default/openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] creating logical port openshift-etcd-operator_etcd-operator-768d5b5d86-722mg for pod on switch crc 2026-01-20T10:56:53.229468968+00:00 stderr F I0120 10:56:53.229444 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229496959+00:00 stderr F I0120 10:56:53.229464 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229512379+00:00 
stderr F I0120 10:56:53.229491 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229571871+00:00 stderr F I0120 10:56:53.229538 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-11-crc, ips: 10.217.0.85 2026-01-20T10:56:53.229571871+00:00 stderr F I0120 10:56:53.229551 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229602052+00:00 stderr F I0120 10:56:53.229549 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229602052+00:00 stderr F I0120 10:56:53.229578 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2026-01-20T10:56:53.229626252+00:00 stderr F I0120 10:56:53.229606 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 
2026-01-20T10:56:53.229626252+00:00 stderr F I0120 10:56:53.229619 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in node crc 2026-01-20T10:56:53.229667443+00:00 stderr F I0120 10:56:53.229640 30089 base_network_controller_pods.go:476] [default/openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] creating logical port openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 for pod on switch crc 2026-01-20T10:56:53.229750686+00:00 stderr F I0120 10:56:53.229705 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229761946+00:00 stderr F I0120 10:56:53.229743 30089 ovn.go:134] Ensuring zone local for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in node crc 2026-01-20T10:56:53.229785567+00:00 stderr F I0120 10:56:53.229760 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229832758+00:00 stderr F I0120 10:56:53.229798 30089 base_network_controller_pods.go:476] [default/openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] creating logical port openshift-apiserver_apiserver-7fc54b8dd7-d2bhp for pod on switch crc 2026-01-20T10:56:53.229832758+00:00 stderr F I0120 
10:56:53.229818 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229924530+00:00 stderr F I0120 10:56:53.229851 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229955671+00:00 stderr F I0120 10:56:53.229914 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229955671+00:00 stderr F I0120 10:56:53.229918 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.229968101+00:00 stderr F I0120 10:56:53.229404 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-marketplace-2mx7j in node crc 2026-01-20T10:56:53.230011593+00:00 stderr F I0120 10:56:53.229975 30089 
model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230011593+00:00 stderr F I0120 10:56:53.229986 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-marketplace-2mx7j] creating logical port openshift-marketplace_redhat-marketplace-2mx7j for pod on switch crc 2026-01-20T10:56:53.230023353+00:00 stderr F I0120 10:56:53.229997 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230094925+00:00 stderr F I0120 10:56:53.230001 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230178727+00:00 stderr F I0120 10:56:53.230111 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230216548+00:00 stderr F I0120 10:56:53.230190 30089 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] creating logical port openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 for pod on switch crc 2026-01-20T10:56:53.230242109+00:00 stderr F I0120 10:56:53.230218 30089 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m): port not found in cache or already marked for removal 2026-01-20T10:56:53.230242109+00:00 stderr F I0120 10:56:53.230238 30089 pods.go:151] Deleting pod: 
openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m 2026-01-20T10:56:53.230358892+00:00 stderr F I0120 10:56:53.229872 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230358892+00:00 stderr F I0120 10:56:53.230289 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:df6e4f33-df74-4326-b096-9d3e45a8c55a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {34f81bc6-9eab-493e-85aa-3c1b2544e7d2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230358892+00:00 stderr F W0120 10:56:53.230331 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m. 
Using logical switch crc port uuid and addrs [10.217.0.27/23] 2026-01-20T10:56:53.230358892+00:00 stderr F I0120 10:56:53.230333 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230358892+00:00 stderr F I0120 10:56:53.230335 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:34f81bc6-9eab-493e-85aa-3c1b2544e7d2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230434765+00:00 stderr F I0120 10:56:53.230385 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:34f81bc6-9eab-493e-85aa-3c1b2544e7d2}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230434765+00:00 stderr F I0120 10:56:53.230353 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230434765+00:00 stderr F I0120 10:56:53.230396 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230499857+00:00 stderr F I0120 10:56:53.230452 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230520047+00:00 stderr F I0120 10:56:53.230502 30089 address_set.go:613] 
(369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.27] from address set 2026-01-20T10:56:53.230581069+00:00 stderr F I0120 10:56:53.230483 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230581069+00:00 stderr F I0120 10:56:53.229627 30089 model_client.go:397] Mutate operations generated 
as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230581069+00:00 stderr F I0120 10:56:53.230543 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.27]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230651631+00:00 stderr F I0120 10:56:53.230595 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.27]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230651631+00:00 stderr F I0120 10:56:53.230611 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230725783+00:00 stderr F I0120 10:56:53.230686 30089 default_network_controller.go:699] Recording success event on pod openshift-image-registry/image-registry-75b7bb6564-ln84v 2026-01-20T10:56:53.230725783+00:00 stderr F I0120 10:56:53.230708 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-12-crc 2026-01-20T10:56:53.230736623+00:00 stderr F I0120 10:56:53.230722 30089 obj_retry.go:502] Add event received for *v1.Pod 
openshift-kube-apiserver/installer-12-crc 2026-01-20T10:56:53.230744293+00:00 stderr F I0120 10:56:53.230733 30089 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-12-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.230800935+00:00 stderr F I0120 10:56:53.230436 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230800935+00:00 stderr F I0120 10:56:53.230765 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-9-crc 2026-01-20T10:56:53.230800935+00:00 stderr F I0120 10:56:53.230776 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/installer-9-crc 2026-01-20T10:56:53.230800935+00:00 stderr F I0120 10:56:53.230785 30089 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-9-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.230819205+00:00 stderr F I0120 10:56:53.230798 30089 default_network_controller.go:699] Recording success event on pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2026-01-20T10:56:53.230819205+00:00 stderr F I0120 10:56:53.230789 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7c218153-0e49-4e95-93a2-1f77234301b9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230819205+00:00 stderr F I0120 10:56:53.230810 30089 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-additional-cni-plugins-bzj2p 2026-01-20T10:56:53.230828755+00:00 stderr F I0120 10:56:53.230820 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2026-01-20T10:56:53.230883577+00:00 stderr F I0120 10:56:53.230836 30089 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in node crc 2026-01-20T10:56:53.230883577+00:00 stderr F I0120 10:56:53.230183 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230883577+00:00 stderr F I0120 10:56:53.230850 30089 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p took: 20.221µs 2026-01-20T10:56:53.230883577+00:00 stderr F I0120 10:56:53.230859 30089 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-additional-cni-plugins-bzj2p 
2026-01-20T10:56:53.230883577+00:00 stderr F I0120 10:56:53.230868 30089 default_network_controller.go:655] Recording add event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2026-01-20T10:56:53.230883577+00:00 stderr F I0120 10:56:53.230862 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.230896227+00:00 stderr F I0120 10:56:53.230888 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2026-01-20T10:56:53.230950958+00:00 stderr F I0120 10:56:53.230890 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231028411+00:00 stderr F I0120 10:56:53.230969 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231039331+00:00 stderr F I0120 10:56:53.231022 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: 
Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231048091+00:00 stderr F I0120 10:56:53.230832 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7c218153-0e49-4e95-93a2-1f77234301b9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231176214+00:00 stderr F I0120 10:56:53.231070 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:df6e4f33-df74-4326-b096-9d3e45a8c55a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {34f81bc6-9eab-493e-85aa-3c1b2544e7d2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:34f81bc6-9eab-493e-85aa-3c1b2544e7d2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:34f81bc6-9eab-493e-85aa-3c1b2544e7d2}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7c218153-0e49-4e95-93a2-1f77234301b9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7c218153-0e49-4e95-93a2-1f77234301b9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231176214+00:00 stderr F I0120 10:56:53.230793 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231176214+00:00 stderr F I0120 10:56:53.231041 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] 
Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231252997+00:00 stderr F I0120 10:56:53.231201 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231252997+00:00 stderr F I0120 10:56:53.230901 30089 default_network_controller.go:655] Recording add event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2026-01-20T10:56:53.231266647+00:00 stderr F I0120 10:56:53.231253 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2026-01-20T10:56:53.231276007+00:00 stderr F I0120 10:56:53.231265 30089 ovn.go:134] Ensuring zone local for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in node crc 2026-01-20T10:56:53.231337779+00:00 stderr F I0120 10:56:53.231283 30089 base_network_controller_pods.go:476] [default/openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] creating logical port openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz for pod on switch crc 2026-01-20T10:56:53.231521024+00:00 stderr F I0120 10:56:53.230879 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2026-01-20T10:56:53.231521024+00:00 stderr F I0120 10:56:53.231494 30089 ovn.go:134] Ensuring zone local for Pod 
openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in node crc 2026-01-20T10:56:53.231521024+00:00 stderr F I0120 10:56:53.231479 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231521024+00:00 stderr F I0120 10:56:53.231512 30089 base_network_controller_pods.go:476] [default/openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] creating logical port openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg for pod on switch crc 2026-01-20T10:56:53.231546644+00:00 stderr F I0120 10:56:53.231528 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231634627+00:00 stderr F I0120 10:56:53.231581 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231647197+00:00 stderr F I0120 10:56:53.231630 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231814671+00:00 stderr F I0120 10:56:53.231673 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231814671+00:00 stderr F I0120 10:56:53.231680 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.64 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231814671+00:00 stderr F I0120 10:56:53.231797 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231888253+00:00 stderr F I0120 10:56:53.231845 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.231888253+00:00 stderr F I0120 10:56:53.230752 30089 port_cache.go:122] port-cache(openshift-kube-apiserver_installer-12-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.231888253+00:00 stderr F I0120 10:56:53.231872 30089 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-12-crc 2026-01-20T10:56:53.231986376+00:00 stderr F W0120 10:56:53.231944 30089 
base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-apiserver/installer-12-crc. Using logical switch crc port uuid and addrs [10.217.0.86/23] 2026-01-20T10:56:53.232050428+00:00 stderr F I0120 10:56:53.231585 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232050428+00:00 stderr F I0120 10:56:53.232006 30089 address_set.go:613] (6276d4aa-26f2-4006-a763-72ea13795238/default-network-controller:Namespace:openshift-kube-apiserver:v4/a4531626005796422843) deleting addresses [10.217.0.86] from address set 2026-01-20T10:56:53.232050428+00:00 stderr F I0120 10:56:53.230904 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232103729+00:00 stderr F I0120 10:56:53.232039 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.86]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232118670+00:00 stderr F I0120 10:56:53.232026 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232137290+00:00 stderr F I0120 10:56:53.232111 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232199762+00:00 stderr F I0120 10:56:53.232133 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses 
Mutator:delete Value:{GoSet:[10.217.0.86]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232199762+00:00 stderr F I0120 10:56:53.232141 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232211592+00:00 stderr F I0120 10:56:53.232163 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate 
Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232220342+00:00 stderr F I0120 10:56:53.232200 30089 port_cache.go:96] port-cache(openshift-dns_dns-default-gbw49): added port &{name:openshift-dns_dns-default-gbw49 uuid:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213 logicalSwitch:crc ips:[0xc000873590] mac:[10 88 10 217 0 31] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.31/23] and MAC: 0a:58:0a:d9:00:1f 2026-01-20T10:56:53.232228792+00:00 stderr F I0120 10:56:53.232218 30089 pods.go:220] [openshift-dns/dns-default-gbw49] addLogicalPort took 3.005401ms, libovsdb time 1.765698ms 2026-01-20T10:56:53.232242023+00:00 stderr F I0120 10:56:53.232228 30089 obj_retry.go:541] Creating *v1.Pod openshift-dns/dns-default-gbw49 took: 3.046502ms 2026-01-20T10:56:53.232242023+00:00 stderr F I0120 10:56:53.232234 30089 default_network_controller.go:699] Recording success event on pod openshift-dns/dns-default-gbw49 2026-01-20T10:56:53.232332425+00:00 stderr F I0120 10:56:53.232278 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232332425+00:00 stderr F I0120 10:56:53.232320 30089 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2026-01-20T10:56:53.232397207+00:00 stderr F I0120 10:56:53.232362 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2026-01-20T10:56:53.232397207+00:00 
stderr F I0120 10:56:53.232364 30089 port_cache.go:122] port-cache(openshift-kube-apiserver_installer-9-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.232397207+00:00 stderr F I0120 10:56:53.232377 30089 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-9-crc 2026-01-20T10:56:53.232397207+00:00 stderr F I0120 10:56:53.232392 30089 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in node crc 2026-01-20T10:56:53.232458669+00:00 stderr F I0120 10:56:53.232410 30089 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] creating logical port openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz for pod on switch crc 2026-01-20T10:56:53.232458669+00:00 stderr F W0120 10:56:53.232421 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-apiserver/installer-9-crc. Using logical switch crc port uuid and addrs [10.217.0.55/23] 2026-01-20T10:56:53.232522560+00:00 stderr F I0120 10:56:53.232473 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232522560+00:00 stderr F I0120 10:56:53.232495 30089 address_set.go:613] (6276d4aa-26f2-4006-a763-72ea13795238/default-network-controller:Namespace:openshift-kube-apiserver:v4/a4531626005796422843) deleting addresses [10.217.0.55] from address set 2026-01-20T10:56:53.232536251+00:00 stderr F I0120 10:56:53.232515 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete 
Value:{GoSet:[10.217.0.55]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232536251+00:00 stderr F I0120 10:56:53.232498 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232604572+00:00 stderr F I0120 10:56:53.232543 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.55]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232604572+00:00 stderr F I0120 10:56:53.232531 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232616923+00:00 stderr F I0120 10:56:53.232589 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232616923+00:00 stderr F I0120 10:56:53.230908 30089 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2026-01-20T10:56:53.232626393+00:00 stderr F I0120 10:56:53.232596 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232626393+00:00 stderr F I0120 10:56:53.232618 30089 default_network_controller.go:655] Recording add event on pod 
openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2026-01-20T10:56:53.232635253+00:00 stderr F I0120 10:56:53.232629 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2026-01-20T10:56:53.232643873+00:00 stderr F I0120 10:56:53.232636 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in node crc 2026-01-20T10:56:53.232654914+00:00 stderr F I0120 10:56:53.232647 30089 base_network_controller_pods.go:476] [default/openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] creating logical port openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb for pod on switch crc 2026-01-20T10:56:53.232667154+00:00 stderr F I0120 10:56:53.232646 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232675434+00:00 stderr F I0120 10:56:53.232613 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] 
Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232749126+00:00 stderr F I0120 10:56:53.232701 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232830298+00:00 stderr F I0120 10:56:53.232775 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232830298+00:00 stderr F I0120 10:56:53.232809 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.232897840+00:00 stderr F I0120 10:56:53.232852 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233091955+00:00 stderr F I0120 10:56:53.233019 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233110026+00:00 stderr F I0120 10:56:53.233081 30089 port_cache.go:96] port-cache(openshift-service-ca_service-ca-666f99b6f-kk8kg): added port &{name:openshift-service-ca_service-ca-666f99b6f-kk8kg uuid:9409cb25-8c46-46db-98ab-5eafe9669ef8 logicalSwitch:crc ips:[0xc0009286c0] mac:[10 88 10 217 0 40] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.40/23] and MAC: 0a:58:0a:d9:00:28 2026-01-20T10:56:53.233110026+00:00 stderr F I0120 10:56:53.233086 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233110026+00:00 stderr F I0120 10:56:53.233099 30089 pods.go:220] 
[openshift-service-ca/service-ca-666f99b6f-kk8kg] addLogicalPort took 4.10294ms, libovsdb time 2.703882ms 2026-01-20T10:56:53.233120066+00:00 stderr F I0120 10:56:53.233109 30089 obj_retry.go:541] Creating *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg took: 4.135971ms 2026-01-20T10:56:53.233120066+00:00 stderr F I0120 10:56:53.233116 30089 default_network_controller.go:699] Recording success event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2026-01-20T10:56:53.233130766+00:00 stderr F I0120 10:56:53.233126 30089 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-q88th 2026-01-20T10:56:53.233140937+00:00 stderr F I0120 10:56:53.233134 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-q88th 2026-01-20T10:56:53.233149967+00:00 stderr F I0120 10:56:53.233144 30089 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-q88th in node crc 2026-01-20T10:56:53.233149967+00:00 stderr F I0120 10:56:53.233136 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233158577+00:00 stderr F I0120 10:56:53.233108 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233230630+00:00 stderr F I0120 10:56:53.233164 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233230630+00:00 stderr F I0120 10:56:53.233187 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: 
UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233242950+00:00 stderr F I0120 10:56:53.233149 30089 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-q88th took: 7.59µs 2026-01-20T10:56:53.233242950+00:00 stderr F I0120 10:56:53.233235 30089 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-q88th 2026-01-20T10:56:53.233251721+00:00 stderr F I0120 10:56:53.233244 30089 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2026-01-20T10:56:53.233260121+00:00 stderr F I0120 10:56:53.233254 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2026-01-20T10:56:53.233268451+00:00 stderr F I0120 10:56:53.233262 30089 ovn.go:134] Ensuring zone local for Pod 
openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in node crc 2026-01-20T10:56:53.233325733+00:00 stderr F I0120 10:56:53.233283 30089 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] creating logical port openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh for pod on switch crc 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233472 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233525 30089 port_cache.go:96] port-cache(openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7): added port &{name:openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 uuid:6e77fb5d-c04f-467c-9883-8cb59d819d86 logicalSwitch:crc ips:[0xc000b7d3b0] mac:[10 88 10 217 0 12] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.12/23] and MAC: 0a:58:0a:d9:00:0c 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233537 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233550 30089 pods.go:220] 
[openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] addLogicalPort took 3.924586ms, libovsdb time 3.033032ms 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233558 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 took: 3.942457ms 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233563 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233570 30089 default_network_controller.go:655] Recording add event on pod openshift-network-operator/iptables-alerter-wwpnd 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233577 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233583 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-wwpnd in node crc 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233596 30089 obj_retry.go:541] Creating *v1.Pod openshift-network-operator/iptables-alerter-wwpnd took: 13.671µs 2026-01-20T10:56:53.233665932+00:00 stderr F I0120 10:56:53.233588 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233759484+00:00 stderr F I0120 10:56:53.233600 30089 default_network_controller.go:699] Recording success event on pod openshift-network-operator/iptables-alerter-wwpnd 2026-01-20T10:56:53.233759484+00:00 stderr F I0120 10:56:53.233723 30089 default_network_controller.go:655] 
Recording add event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2026-01-20T10:56:53.233759484+00:00 stderr F I0120 10:56:53.233730 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2026-01-20T10:56:53.233759484+00:00 stderr F I0120 10:56:53.233737 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in node crc 2026-01-20T10:56:53.233759484+00:00 stderr F I0120 10:56:53.233747 30089 base_network_controller_pods.go:476] [default/openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] creating logical port openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 for pod on switch crc 2026-01-20T10:56:53.233924638+00:00 stderr F I0120 10:56:53.233866 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233924638+00:00 stderr F I0120 10:56:53.233900 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.233990770+00:00 stderr F I0120 10:56:53.233934 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports 
Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234002410+00:00 stderr F I0120 10:56:53.233975 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234088213+00:00 stderr F I0120 10:56:53.234020 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234218506+00:00 stderr F I0120 10:56:53.234126 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234239957+00:00 stderr F I0120 10:56:53.234204 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234301508+00:00 stderr F I0120 10:56:53.234238 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234315069+00:00 stderr F I0120 10:56:53.234259 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234414471+00:00 stderr F I0120 10:56:53.234368 30089 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m, ips: 10.217.0.27 2026-01-20T10:56:53.234414471+00:00 stderr F I0120 10:56:53.234388 30089 default_network_controller.go:655] Recording add event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2026-01-20T10:56:53.234414471+00:00 stderr F I0120 
10:56:53.234404 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2026-01-20T10:56:53.234428022+00:00 stderr F I0120 10:56:53.234411 30089 ovn.go:134] Ensuring zone local for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in node crc 2026-01-20T10:56:53.234437032+00:00 stderr F I0120 10:56:53.234426 30089 base_network_controller_pods.go:476] [default/openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] creating logical port openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd for pod on switch crc 2026-01-20T10:56:53.234607757+00:00 stderr F I0120 10:56:53.234542 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234607757+00:00 stderr F I0120 10:56:53.234584 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234622417+00:00 stderr F I0120 10:56:53.234598 30089 port_cache.go:96] port-cache(openshift-marketplace_redhat-marketplace-2mx7j): added port &{name:openshift-marketplace_redhat-marketplace-2mx7j uuid:34f81bc6-9eab-493e-85aa-3c1b2544e7d2 logicalSwitch:crc ips:[0xc0011ab8c0] mac:[10 88 10 217 0 30] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.30/23] and MAC: 0a:58:0a:d9:00:1e 2026-01-20T10:56:53.234622417+00:00 stderr F 
I0120 10:56:53.234612 30089 pods.go:220] [openshift-marketplace/redhat-marketplace-2mx7j] addLogicalPort took 4.632955ms, libovsdb time 3.536225ms 2026-01-20T10:56:53.234631887+00:00 stderr F I0120 10:56:53.234620 30089 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/redhat-marketplace-2mx7j took: 5.222651ms 2026-01-20T10:56:53.234631887+00:00 stderr F I0120 10:56:53.234626 30089 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-marketplace-2mx7j 2026-01-20T10:56:53.234631887+00:00 stderr F I0120 10:56:53.234620 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234642537+00:00 stderr F I0120 10:56:53.234631 30089 default_network_controller.go:655] Recording add event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2026-01-20T10:56:53.234642537+00:00 stderr F I0120 10:56:53.234639 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2026-01-20T10:56:53.234653188+00:00 stderr F I0120 10:56:53.234646 30089 ovn.go:134] Ensuring zone local for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in node crc 2026-01-20T10:56:53.234680489+00:00 stderr F I0120 10:56:53.234651 30089 obj_retry.go:541] Creating *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv took: 6.16µs 2026-01-20T10:56:53.234680489+00:00 stderr F I0120 10:56:53.234655 30089 default_network_controller.go:699] Recording success event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2026-01-20T10:56:53.234680489+00:00 stderr F I0120 10:56:53.234661 30089 default_network_controller.go:655] Recording add event on pod 
openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2026-01-20T10:56:53.234680489+00:00 stderr F I0120 10:56:53.234668 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2026-01-20T10:56:53.234680489+00:00 stderr F I0120 10:56:53.234673 30089 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in node crc 2026-01-20T10:56:53.234691059+00:00 stderr F I0120 10:56:53.234684 30089 base_network_controller_pods.go:476] [default/openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] creating logical port openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv for pod on switch crc 2026-01-20T10:56:53.234826222+00:00 stderr F I0120 10:56:53.234781 30089 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress/router-default-5c9bf7bc58-6jctv. OVN-Kubernetes controller took 2.1751e-05 seconds. No OVN measurement. 
2026-01-20T10:56:53.234826222+00:00 stderr F I0120 10:56:53.234804 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234888694+00:00 stderr F I0120 10:56:53.234836 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234888694+00:00 stderr F I0120 10:56:53.234878 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234900164+00:00 stderr F I0120 10:56:53.234877 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234958936+00:00 stderr F I0120 10:56:53.234905 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.234970536+00:00 stderr F I0120 10:56:53.234925 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235146181+00:00 stderr F I0120 10:56:53.235084 30089 
model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235215593+00:00 stderr F I0120 10:56:53.235148 30089 port_cache.go:96] port-cache(openshift-console-operator_console-operator-5dbbc74dc9-cp5cd): added port &{name:openshift-console-operator_console-operator-5dbbc74dc9-cp5cd uuid:6af06372-81fc-4451-8678-6253ce70f317 logicalSwitch:crc ips:[0xc001310630] mac:[10 88 10 217 0 62] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.62/23] and MAC: 0a:58:0a:d9:00:3e 2026-01-20T10:56:53.235215593+00:00 stderr F I0120 10:56:53.235166 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235215593+00:00 stderr F I0120 10:56:53.235188 30089 pods.go:220] [openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] addLogicalPort took 5.971081ms, libovsdb time 4.09512ms 2026-01-20T10:56:53.235215593+00:00 stderr F I0120 10:56:53.235137 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235215593+00:00 stderr F I0120 10:56:53.235198 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235315735+00:00 stderr F I0120 10:56:53.235234 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg 
Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235315735+00:00 stderr F I0120 10:56:53.235201 30089 obj_retry.go:541] Creating *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd took: 6.010861ms 2026-01-20T10:56:53.235334076+00:00 stderr F I0120 10:56:53.235313 30089 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2026-01-20T10:56:53.235334076+00:00 stderr F I0120 10:56:53.235324 30089 default_network_controller.go:655] Recording add event on pod openshift-console/downloads-65476884b9-9wcvx 2026-01-20T10:56:53.235344956+00:00 stderr F I0120 10:56:53.235338 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-console/downloads-65476884b9-9wcvx 2026-01-20T10:56:53.235355316+00:00 stderr F I0120 10:56:53.235349 30089 ovn.go:134] Ensuring zone local for Pod openshift-console/downloads-65476884b9-9wcvx in node crc 2026-01-20T10:56:53.235412368+00:00 stderr F I0120 10:56:53.235376 30089 base_network_controller_pods.go:476] [default/openshift-console/downloads-65476884b9-9wcvx] creating logical port openshift-console_downloads-65476884b9-9wcvx for pod on switch crc 2026-01-20T10:56:53.235643974+00:00 stderr F I0120 10:56:53.235583 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235657114+00:00 stderr F I0120 10:56:53.235632 30089 model_client.go:397] Mutate operations generated as: 
[{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235772948+00:00 stderr F I0120 10:56:53.235723 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.235915871+00:00 stderr F I0120 10:56:53.235867 30089 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7): added port &{name:openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 uuid:3644fddd-ceae-4a64-8b00-dadf73515945 logicalSwitch:crc ips:[0xc000dbbef0] mac:[10 88 10 217 0 64] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.64/23] and MAC: 0a:58:0a:d9:00:40 2026-01-20T10:56:53.235915871+00:00 stderr F I0120 10:56:53.235887 30089 pods.go:220] [openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] addLogicalPort took 5.710514ms, libovsdb time 4.178962ms 2026-01-20T10:56:53.235915871+00:00 stderr F I0120 10:56:53.235895 30089 obj_retry.go:541] Creating *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 took: 6.983518ms 2026-01-20T10:56:53.235915871+00:00 stderr F I0120 10:56:53.235902 30089 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2026-01-20T10:56:53.235915871+00:00 stderr F I0120 10:56:53.235912 30089 default_network_controller.go:655] Recording add event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2026-01-20T10:56:53.235937252+00:00 
stderr F I0120 10:56:53.235921 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2026-01-20T10:56:53.235937252+00:00 stderr F I0120 10:56:53.235930 30089 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in node crc 2026-01-20T10:56:53.235993263+00:00 stderr F I0120 10:56:53.235953 30089 base_network_controller_pods.go:476] [default/openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] creating logical port openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 for pod on switch crc 2026-01-20T10:56:53.236183989+00:00 stderr F I0120 10:56:53.236126 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.236200870+00:00 stderr F I0120 10:56:53.236174 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.236261482+00:00 stderr F I0120 10:56:53.236221 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] 
Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.236442056+00:00 stderr F I0120 10:56:53.236389 30089 port_cache.go:96] port-cache(openshift-etcd-operator_etcd-operator-768d5b5d86-722mg): added port &{name:openshift-etcd-operator_etcd-operator-768d5b5d86-722mg uuid:e834ded8-9d5b-46e7-b962-1ee96928bab4 logicalSwitch:crc ips:[0xc000966930] mac:[10 88 10 217 0 8] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.8/23] and MAC: 0a:58:0a:d9:00:08 2026-01-20T10:56:53.236442056+00:00 stderr F I0120 10:56:53.236409 30089 pods.go:220] [openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] addLogicalPort took 6.974678ms, libovsdb time 4.356808ms 2026-01-20T10:56:53.236442056+00:00 stderr F I0120 10:56:53.236419 30089 obj_retry.go:541] Creating *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg took: 7.004199ms 2026-01-20T10:56:53.236442056+00:00 stderr F I0120 10:56:53.236425 30089 default_network_controller.go:699] Recording success event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2026-01-20T10:56:53.236442056+00:00 stderr F I0120 10:56:53.236431 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-10-crc 2026-01-20T10:56:53.236456697+00:00 stderr F I0120 10:56:53.236439 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-10-crc 2026-01-20T10:56:53.236456697+00:00 stderr F I0120 10:56:53.236446 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.236472757+00:00 stderr F I0120 10:56:53.236457 30089 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-10-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.236472757+00:00 stderr F I0120 10:56:53.236461 30089 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-10-crc 2026-01-20T10:56:53.236528719+00:00 stderr F W0120 10:56:53.236490 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-10-crc. Using logical switch crc port uuid and addrs [10.217.0.69/23] 2026-01-20T10:56:53.236581780+00:00 stderr F I0120 10:56:53.236542 30089 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.69] from address set 2026-01-20T10:56:53.236581780+00:00 stderr F I0120 10:56:53.236545 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.236581780+00:00 stderr F I0120 10:56:53.236566 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.69]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.236638412+00:00 stderr F I0120 10:56:53.236587 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.236638412+00:00 stderr F I0120 10:56:53.236610 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.69]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.236799556+00:00 stderr F I0120 10:56:53.236610 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.61 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.237128465+00:00 stderr F I0120 10:56:53.237053 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.237128465+00:00 stderr F I0120 10:56:53.237111 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.237148555+00:00 stderr F I0120 10:56:53.237120 30089 port_cache.go:96] port-cache(openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg): added port &{name:openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg uuid:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4 logicalSwitch:crc ips:[0xc0012e5aa0] mac:[10 88 10 217 0 46] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.46/23] and MAC: 0a:58:0a:d9:00:2e 2026-01-20T10:56:53.237148555+00:00 stderr F I0120 10:56:53.237138 30089 pods.go:220] [openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] addLogicalPort took 5.634332ms, libovsdb time 
4.946523ms 2026-01-20T10:56:53.237157875+00:00 stderr F I0120 10:56:53.237147 30089 obj_retry.go:541] Creating *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg took: 5.655942ms 2026-01-20T10:56:53.237157875+00:00 stderr F I0120 10:56:53.237154 30089 default_network_controller.go:699] Recording success event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2026-01-20T10:56:53.237169286+00:00 stderr F I0120 10:56:53.237162 30089 default_network_controller.go:655] Recording add event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.237169286+00:00 stderr F I0120 10:56:53.237126 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: 
Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.237184606+00:00 stderr F I0120 10:56:53.237173 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.237184606+00:00 stderr F I0120 10:56:53.237182 30089 ovn.go:134] Ensuring zone local for Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in node crc 2026-01-20T10:56:53.240284419+00:00 stderr F I0120 10:56:53.240220 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-12-crc, ips: 10.217.0.86 2026-01-20T10:56:53.243320171+00:00 stderr F I0120 10:56:53.235248 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: 
Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.243358222+00:00 stderr F I0120 10:56:53.243336 30089 default_network_controller.go:655] Recording add event on pod openshift-dns/node-resolver-dn27q 2026-01-20T10:56:53.243358222+00:00 stderr F I0120 10:56:53.243325 30089 base_network_controller_pods.go:476] [default/openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] creating logical port openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs for pod on switch crc 2026-01-20T10:56:53.243389643+00:00 stderr F I0120 10:56:53.243374 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-dns/node-resolver-dn27q 2026-01-20T10:56:53.243445964+00:00 stderr F I0120 10:56:53.243424 30089 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dn27q in node crc 2026-01-20T10:56:53.243445964+00:00 stderr F I0120 10:56:53.243436 30089 obj_retry.go:541] Creating *v1.Pod openshift-dns/node-resolver-dn27q took: 47.941µs 2026-01-20T10:56:53.243463845+00:00 stderr F I0120 10:56:53.243448 30089 default_network_controller.go:699] Recording success event on pod openshift-dns/node-resolver-dn27q 2026-01-20T10:56:53.243463845+00:00 stderr F I0120 10:56:53.243458 30089 default_network_controller.go:655] Recording add event on pod openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.243502516+00:00 stderr F I0120 10:56:53.243488 30089 
obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.243511766+00:00 stderr F I0120 10:56:53.243506 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/community-operators-6m4w2 in node crc 2026-01-20T10:56:53.243545457+00:00 stderr F I0120 10:56:53.243529 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/community-operators-6m4w2] creating logical port openshift-marketplace_community-operators-6m4w2 for pod on switch crc 2026-01-20T10:56:53.243975568+00:00 stderr F I0120 10:56:53.243931 30089 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-dns/node-resolver-dn27q. OVN-Kubernetes controller took 9.6502e-05 seconds. No OVN measurement. 2026-01-20T10:56:53.243975568+00:00 stderr F I0120 10:56:53.243897 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {de883ce4-2f69-42d2-a4b3-8165b5ceb500}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.243990949+00:00 stderr F I0120 10:56:53.243964 30089 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz): added port &{name:openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz uuid:69155615-9d93-4b72-bddd-739a6e731251 logicalSwitch:crc ips:[0xc000c11320] mac:[10 88 10 217 0 43] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.43/23] and MAC: 0a:58:0a:d9:00:2b 2026-01-20T10:56:53.244026680+00:00 stderr F I0120 10:56:53.244007 30089 pods.go:220] [openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] addLogicalPort took 11.606312ms, libovsdb time 
10.846142ms 2026-01-20T10:56:53.244037430+00:00 stderr F I0120 10:56:53.244026 30089 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz took: 11.647724ms 2026-01-20T10:56:53.244037430+00:00 stderr F I0120 10:56:53.244016 30089 port_cache.go:96] port-cache(openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz): added port &{name:openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz uuid:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56 logicalSwitch:crc ips:[0xc0013955c0] mac:[10 88 10 217 0 10] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.10/23] and MAC: 0a:58:0a:d9:00:0a 2026-01-20T10:56:53.244115072+00:00 stderr F I0120 10:56:53.244057 30089 pods.go:220] [openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] addLogicalPort took 12.770893ms, libovsdb time 11.508019ms 2026-01-20T10:56:53.244115072+00:00 stderr F I0120 10:56:53.244090 30089 port_cache.go:96] port-cache(openshift-apiserver_apiserver-7fc54b8dd7-d2bhp): added port &{name:openshift-apiserver_apiserver-7fc54b8dd7-d2bhp uuid:005abe2f-f66d-42f8-945c-fbc80f820ed4 logicalSwitch:crc ips:[0xc001311080] mac:[10 88 10 217 0 82] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.82/23] and MAC: 0a:58:0a:d9:00:52 2026-01-20T10:56:53.244129622+00:00 stderr F I0120 10:56:53.244111 30089 pods.go:220] [openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] addLogicalPort took 14.323125ms, libovsdb time 11.441977ms 2026-01-20T10:56:53.244129622+00:00 stderr F I0120 10:56:53.244122 30089 obj_retry.go:541] Creating *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp took: 14.384287ms 2026-01-20T10:56:53.244144193+00:00 stderr F I0120 10:56:53.244129 30089 default_network_controller.go:699] Recording success event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2026-01-20T10:56:53.244144193+00:00 stderr F I0120 10:56:53.244033 30089 default_network_controller.go:699] Recording success event on pod 
openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2026-01-20T10:56:53.244152773+00:00 stderr F I0120 10:56:53.244139 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-8-crc 2026-01-20T10:56:53.244160853+00:00 stderr F I0120 10:56:53.244154 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-8-crc 2026-01-20T10:56:53.244169654+00:00 stderr F I0120 10:56:53.244156 30089 default_network_controller.go:655] Recording add event on pod openshift-etcd/etcd-crc 2026-01-20T10:56:53.244178104+00:00 stderr F I0120 10:56:53.244165 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.244178104+00:00 stderr F I0120 10:56:53.244123 30089 port_cache.go:96] port-cache(openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb): added port &{name:openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb uuid:805e2f41-6cb8-4ccf-9939-37cfb4fa5509 logicalSwitch:crc ips:[0xc0013d0f00] mac:[10 88 10 217 0 5] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.5/23] and MAC: 0a:58:0a:d9:00:05 2026-01-20T10:56:53.244208975+00:00 stderr F I0120 10:56:53.244183 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-etcd/etcd-crc 2026-01-20T10:56:53.244219365+00:00 stderr F I0120 10:56:53.244112 30089 obj_retry.go:541] Creating *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz took: 12.843315ms 2026-01-20T10:56:53.244219365+00:00 stderr F I0120 10:56:53.244208 30089 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc 2026-01-20T10:56:53.244219365+00:00 stderr F I0120 10:56:53.244214 30089 default_network_controller.go:699] Recording success event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2026-01-20T10:56:53.244229665+00:00 
stderr F I0120 10:56:53.244217 30089 obj_retry.go:541] Creating *v1.Pod openshift-etcd/etcd-crc took: 16.431µs 2026-01-20T10:56:53.244229665+00:00 stderr F I0120 10:56:53.244224 30089 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2026-01-20T10:56:53.244238655+00:00 stderr F I0120 10:56:53.244227 30089 default_network_controller.go:699] Recording success event on pod openshift-etcd/etcd-crc 2026-01-20T10:56:53.244238655+00:00 stderr F I0120 10:56:53.244233 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2026-01-20T10:56:53.244248096+00:00 stderr F I0120 10:56:53.244236 30089 default_network_controller.go:655] Recording add event on pod openshift-marketplace/redhat-operators-2nxg8 2026-01-20T10:56:53.244256956+00:00 stderr F I0120 10:56:53.244245 30089 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc in node crc 2026-01-20T10:56:53.244256956+00:00 stderr F I0120 10:56:53.244248 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/redhat-operators-2nxg8 2026-01-20T10:56:53.244293857+00:00 stderr F I0120 10:56:53.244266 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-operators-2nxg8 in node crc 2026-01-20T10:56:53.244319547+00:00 stderr F I0120 10:56:53.244287 30089 port_cache.go:96] port-cache(openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd): added port &{name:openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd uuid:8b4158c3-d859-42e6-8259-b16ce1cbd284 logicalSwitch:crc ips:[0xc0014f2c90] mac:[10 88 10 217 0 39] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.39/23] and MAC: 0a:58:0a:d9:00:27 2026-01-20T10:56:53.244333448+00:00 stderr F I0120 10:56:53.244317 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-operators-2nxg8] creating logical port 
openshift-marketplace_redhat-operators-2nxg8 for pod on switch crc 2026-01-20T10:56:53.244333448+00:00 stderr F I0120 10:56:53.244324 30089 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh): added port &{name:openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh uuid:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749 logicalSwitch:crc ips:[0xc001364120] mac:[10 88 10 217 0 14] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.14/23] and MAC: 0a:58:0a:d9:00:0e 2026-01-20T10:56:53.244357528+00:00 stderr F I0120 10:56:53.244335 30089 pods.go:220] [openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] addLogicalPort took 11.066337ms, libovsdb time 10.06277ms 2026-01-20T10:56:53.244357528+00:00 stderr F I0120 10:56:53.244352 30089 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh took: 11.090187ms 2026-01-20T10:56:53.244384189+00:00 stderr F I0120 10:56:53.244370 30089 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2026-01-20T10:56:53.244393799+00:00 stderr F I0120 10:56:53.244382 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-10-crc 2026-01-20T10:56:53.244393799+00:00 stderr F I0120 10:56:53.244377 30089 port_cache.go:96] port-cache(openshift-console_downloads-65476884b9-9wcvx): added port &{name:openshift-console_downloads-65476884b9-9wcvx uuid:745a40f7-2acc-4e2b-a087-861e0ea97ffe logicalSwitch:crc ips:[0xc0016a8840] mac:[10 88 10 217 0 66] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.66/23] and MAC: 0a:58:0a:d9:00:42 2026-01-20T10:56:53.244418991+00:00 stderr F I0120 10:56:53.244396 30089 port_cache.go:96] port-cache(openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7): added port &{name:openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 
uuid:f5ecfd58-e886-4b2c-9939-022e7f14b7a7 logicalSwitch:crc ips:[0xc001582de0] mac:[10 88 10 217 0 7] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.7/23] and MAC: 0a:58:0a:d9:00:07 2026-01-20T10:56:53.244418991+00:00 stderr F I0120 10:56:53.244414 30089 pods.go:220] [openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] addLogicalPort took 10.672146ms, libovsdb time 9.942237ms 2026-01-20T10:56:53.244430181+00:00 stderr F I0120 10:56:53.244423 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 took: 10.684858ms 2026-01-20T10:56:53.244438452+00:00 stderr F I0120 10:56:53.244389 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-10-crc 2026-01-20T10:56:53.244448812+00:00 stderr F I0120 10:56:53.244440 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.244448812+00:00 stderr F I0120 10:56:53.244433 30089 port_cache.go:96] port-cache(openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv): added port &{name:openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv uuid:82630d91-1647-4c0c-aa84-8f820bcf919e logicalSwitch:crc ips:[0xc0011f1da0] mac:[10 88 10 217 0 22] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.22/23] and MAC: 0a:58:0a:d9:00:16 2026-01-20T10:56:53.244459282+00:00 stderr F I0120 10:56:53.244451 30089 pods.go:220] [openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] addLogicalPort took 9.772714ms, libovsdb time 9.019493ms 2026-01-20T10:56:53.244467432+00:00 stderr F I0120 10:56:53.244459 30089 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-10-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.244480323+00:00 stderr F I0120 10:56:53.244466 30089 
pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-10-crc 2026-01-20T10:56:53.244480323+00:00 stderr F I0120 10:56:53.244468 30089 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-8-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.244480323+00:00 stderr F I0120 10:56:53.244473 30089 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-8-crc 2026-01-20T10:56:53.244545994+00:00 stderr F W0120 10:56:53.244526 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-10-crc. Using logical switch crc port uuid and addrs [10.217.0.68/23] 2026-01-20T10:56:53.244556965+00:00 stderr F I0120 10:56:53.244538 30089 port_cache.go:96] port-cache(openshift-console-operator_console-conversion-webhook-595f9969b-l6z49): added port &{name:openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 uuid:6056bee0-572a-4de7-bb24-40ca6a66be30 logicalSwitch:crc ips:[0xc00080f020] mac:[10 88 10 217 0 61] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.61/23] and MAC: 0a:58:0a:d9:00:3d 2026-01-20T10:56:53.244556965+00:00 stderr F I0120 10:56:53.244551 30089 pods.go:220] [openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] addLogicalPort took 8.615553ms, libovsdb time 7.828131ms 2026-01-20T10:56:53.244565935+00:00 stderr F I0120 10:56:53.244558 30089 obj_retry.go:541] Creating *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 took: 8.629733ms 2026-01-20T10:56:53.244574415+00:00 stderr F I0120 10:56:53.244564 30089 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2026-01-20T10:56:53.244584235+00:00 stderr F I0120 10:56:53.244577 30089 default_network_controller.go:655] Recording add event on pod openshift-console/console-644bb77b49-5x5xk 
2026-01-20T10:56:53.244593296+00:00 stderr F I0120 10:56:53.244182 30089 pods.go:220] [openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] addLogicalPort took 11.539161ms, libovsdb time 10.810731ms 2026-01-20T10:56:53.244603486+00:00 stderr F I0120 10:56:53.244270 30089 base_network_controller_pods.go:476] [default/openshift-multus/multus-admission-controller-6c7c885997-4hbbc] creating logical port openshift-multus_multus-admission-controller-6c7c885997-4hbbc for pod on switch crc 2026-01-20T10:56:53.244614006+00:00 stderr F I0120 10:56:53.244604 30089 obj_retry.go:541] Creating *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb took: 11.961563ms 2026-01-20T10:56:53.244624807+00:00 stderr F I0120 10:56:53.244618 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2026-01-20T10:56:53.244635387+00:00 stderr F I0120 10:56:53.244626 30089 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.68] from address set 2026-01-20T10:56:53.244644397+00:00 stderr F I0120 10:56:53.244634 30089 default_network_controller.go:655] Recording add event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2026-01-20T10:56:53.244675218+00:00 stderr F I0120 10:56:53.244650 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2026-01-20T10:56:53.244708919+00:00 stderr F I0120 10:56:53.244676 30089 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in node crc 2026-01-20T10:56:53.244708919+00:00 stderr F I0120 10:56:53.244396 30089 pods.go:220] [openshift-console/downloads-65476884b9-9wcvx] addLogicalPort took 9.040143ms, libovsdb time 
7.242035ms 2026-01-20T10:56:53.244708919+00:00 stderr F I0120 10:56:53.244589 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-console/console-644bb77b49-5x5xk 2026-01-20T10:56:53.244726439+00:00 stderr F I0120 10:56:53.244706 30089 obj_retry.go:541] Creating *v1.Pod openshift-console/downloads-65476884b9-9wcvx took: 9.359433ms 2026-01-20T10:56:53.244726439+00:00 stderr F I0120 10:56:53.244713 30089 default_network_controller.go:699] Recording success event on pod openshift-console/downloads-65476884b9-9wcvx 2026-01-20T10:56:53.244726439+00:00 stderr F W0120 10:56:53.244526 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-8-crc. Using logical switch crc port uuid and addrs [10.217.0.55/23] 2026-01-20T10:56:53.244736219+00:00 stderr F I0120 10:56:53.244723 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-9-crc 2026-01-20T10:56:53.244745400+00:00 stderr F I0120 10:56:53.244733 30089 base_network_controller_pods.go:999] Completed pod openshift-kube-controller-manager/revision-pruner-8-crc was already released for nad default before startup 2026-01-20T10:56:53.244745400+00:00 stderr F I0120 10:56:53.244741 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-9-crc 2026-01-20T10:56:53.244774241+00:00 stderr F I0120 10:56:53.244752 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.244774241+00:00 stderr F I0120 10:56:53.244714 30089 base_network_controller_pods.go:476] [default/openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] creating logical port openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z for pod on switch crc 2026-01-20T10:56:53.244838432+00:00 stderr F I0120 10:56:53.244763 30089 port_cache.go:96] port-cache(openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb): added port &{name:openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb uuid:be2fa59f-4cec-4742-a4bd-dcd0913d1422 logicalSwitch:crc ips:[0xc000e219b0] mac:[10 88 10 217 0 15] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.15/23] and MAC: 0a:58:0a:d9:00:0f 2026-01-20T10:56:53.244838432+00:00 stderr F I0120 10:56:53.244795 30089 pods.go:220] [openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] addLogicalPort took 20.117352ms, libovsdb time 9.504216ms 2026-01-20T10:56:53.244838432+00:00 stderr F I0120 10:56:53.244805 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb took: 20.144063ms 2026-01-20T10:56:53.244838432+00:00 stderr F I0120 10:56:53.244800 30089 default_network_controller.go:655] Recording add event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2026-01-20T10:56:53.244838432+00:00 stderr F I0120 10:56:53.244814 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2026-01-20T10:56:53.244838432+00:00 stderr F I0120 10:56:53.244822 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2026-01-20T10:56:53.244838432+00:00 stderr F I0120 
10:56:53.244669 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.68]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.244858343+00:00 stderr F I0120 10:56:53.244845 30089 ovn.go:134] Ensuring zone local for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in node crc 2026-01-20T10:56:53.244890854+00:00 stderr F I0120 10:56:53.244867 30089 base_network_controller_pods.go:476] [default/openshift-dns-operator/dns-operator-75f687757b-nz2xb] creating logical port openshift-dns-operator_dns-operator-75f687757b-nz2xb for pod on switch crc 2026-01-20T10:56:53.244890854+00:00 stderr F I0120 10:56:53.244848 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.244901214+00:00 stderr F I0120 10:56:53.244873 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.68]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.244969756+00:00 stderr F I0120 10:56:53.244934 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] 
Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245073938+00:00 stderr F I0120 10:56:53.244996 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245091409+00:00 stderr F I0120 10:56:53.245027 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245150520+00:00 stderr F I0120 10:56:53.245091 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245230123+00:00 stderr F I0120 10:56:53.245177 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245412887+00:00 stderr F I0120 10:56:53.245349 
30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} options:{GoMap:map[iface-id-ver:afcd1056-dc0e-4c35-93bd-1c388cd2028e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {398e0e4f-2c60-4f66-97a2-6180b1379809}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245477159+00:00 stderr F I0120 10:56:53.245443 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:398e0e4f-2c60-4f66-97a2-6180b1379809}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245477159+00:00 stderr F I0120 10:56:53.245397 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245553731+00:00 stderr F I0120 10:56:53.245523 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:398e0e4f-2c60-4f66-97a2-6180b1379809}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245564151+00:00 stderr F I0120 10:56:53.245533 30089 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245630783+00:00 stderr F I0120 10:56:53.245609 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-10-crc, ips: 10.217.0.69 2026-01-20T10:56:53.245641044+00:00 stderr F I0120 10:56:53.245634 30089 default_network_controller.go:655] Recording add event on pod openshift-multus/network-metrics-daemon-qdfr4 2026-01-20T10:56:53.245651404+00:00 stderr F I0120 10:56:53.245645 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2026-01-20T10:56:53.245840989+00:00 stderr F I0120 10:56:53.245827 30089 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qdfr4 in node crc 2026-01-20T10:56:53.245868850+00:00 stderr F I0120 10:56:53.245851 30089 base_network_controller_pods.go:476] [default/openshift-multus/network-metrics-daemon-qdfr4] creating logical port openshift-multus_network-metrics-daemon-qdfr4 for pod on switch crc 2026-01-20T10:56:53.245892330+00:00 stderr F I0120 10:56:53.245848 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.245960092+00:00 stderr F I0120 10:56:53.245916 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where 
column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246019814+00:00 stderr F I0120 10:56:53.245993 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246124986+00:00 stderr F I0120 10:56:53.246082 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246140127+00:00 stderr F I0120 10:56:53.246101 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246189688+00:00 stderr F I0120 10:56:53.246156 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246213219+00:00 stderr F I0120 10:56:53.246174 30089 model_client.go:397] 
Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246290421+00:00 stderr F I0120 10:56:53.246257 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246300641+00:00 stderr F I0120 10:56:53.246188 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246315101+00:00 stderr F I0120 10:56:53.246283 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.34 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b0e16f9a-41a4-49a7-abee-d1893e65d16b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246374393+00:00 stderr F I0120 10:56:53.246343 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:b0e16f9a-41a4-49a7-abee-d1893e65d16b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246538737+00:00 stderr F I0120 10:56:53.246431 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} options:{GoMap:map[iface-id-ver:afcd1056-dc0e-4c35-93bd-1c388cd2028e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {398e0e4f-2c60-4f66-97a2-6180b1379809}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:398e0e4f-2c60-4f66-97a2-6180b1379809}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:398e0e4f-2c60-4f66-97a2-6180b1379809}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.34 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b0e16f9a-41a4-49a7-abee-d1893e65d16b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:b0e16f9a-41a4-49a7-abee-d1893e65d16b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246572178+00:00 stderr F I0120 10:56:53.246552 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-9-crc, ips: 10.217.0.55 2026-01-20T10:56:53.246572178+00:00 stderr F I0120 10:56:53.246078 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.246593249+00:00 stderr F I0120 10:56:53.246570 30089 default_network_controller.go:655] Recording add event on pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc 2026-01-20T10:56:53.246601759+00:00 stderr F I0120 10:56:53.246591 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc 2026-01-20T10:56:53.246610049+00:00 stderr F I0120 10:56:53.246601 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc in node crc 2026-01-20T10:56:53.246672411+00:00 stderr F I0120 10:56:53.246616 30089 base_network_controller_pods.go:476] 
[default/openshift-marketplace/marketplace-operator-8b455464d-nc8zc] creating logical port openshift-marketplace_marketplace-operator-8b455464d-nc8zc for pod on switch crc 2026-01-20T10:56:53.246737443+00:00 stderr F I0120 10:56:53.244459 30089 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv took: 9.784694ms 2026-01-20T10:56:53.246737443+00:00 stderr F I0120 10:56:53.246716 30089 default_network_controller.go:699] Recording success event on pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2026-01-20T10:56:53.246748843+00:00 stderr F I0120 10:56:53.246739 30089 default_network_controller.go:655] Recording add event on pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2026-01-20T10:56:53.246757413+00:00 stderr F I0120 10:56:53.246751 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2026-01-20T10:56:53.246782524+00:00 stderr F I0120 10:56:53.246762 30089 ovn.go:134] Ensuring zone local for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in node crc 2026-01-20T10:56:53.246791604+00:00 stderr F I0120 10:56:53.246783 30089 base_network_controller_pods.go:476] [default/openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] creating logical port openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b for pod on switch crc 2026-01-20T10:56:53.246987689+00:00 stderr F I0120 10:56:53.246896 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247004770+00:00 stderr F I0120 10:56:53.246919 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT 
Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247019720+00:00 stderr F I0120 10:56:53.246985 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247100372+00:00 stderr F I0120 10:56:53.244429 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2026-01-20T10:56:53.247100372+00:00 stderr F I0120 10:56:53.247054 30089 default_network_controller.go:655] Recording add event on pod openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.247100372+00:00 stderr F I0120 10:56:53.247080 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.247100372+00:00 stderr F I0120 10:56:53.247094 30089 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-l92hr in node crc 2026-01-20T10:56:53.247117883+00:00 stderr F I0120 10:56:53.247102 30089 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/node-ca-l92hr took: 10.9µs 2026-01-20T10:56:53.247117883+00:00 stderr F I0120 10:56:53.247111 30089 default_network_controller.go:699] Recording success event on pod openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.247127413+00:00 stderr F I0120 10:56:53.247118 30089 default_network_controller.go:655] Recording add event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2026-01-20T10:56:53.247137533+00:00 stderr F I0120 
10:56:53.247015 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247147164+00:00 stderr F I0120 10:56:53.247102 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247160354+00:00 stderr F I0120 10:56:53.244666 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:de883ce4-2f69-42d2-a4b3-8165b5ceb500}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247253947+00:00 stderr F I0120 10:56:53.247175 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247253947+00:00 stderr F I0120 10:56:53.247208 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:de883ce4-2f69-42d2-a4b3-8165b5ceb500}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247324489+00:00 stderr F I0120 10:56:53.247285 30089 gateway_init.go:324] Initializing Gateway Functionality 2026-01-20T10:56:53.247324489+00:00 stderr F I0120 10:56:53.247279 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247324489+00:00 stderr F I0120 10:56:53.247281 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1d 10.217.0.29]} options:{GoMap:map[iface-id-ver:5adb4a31-5991-4381-a1ea-f1b095a071ea requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1d 10.217.0.29]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8d0a19e8-81fc-4e62-a546-49e373193af2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247390761+00:00 stderr F I0120 10:56:53.247159 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:} 
{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247452693+00:00 stderr F I0120 10:56:53.247390 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247452693+00:00 stderr F I0120 10:56:53.247131 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2026-01-20T10:56:53.247452693+00:00 stderr F I0120 10:56:53.247388 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8d0a19e8-81fc-4e62-a546-49e373193af2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247464873+00:00 stderr F I0120 10:56:53.247446 30089 ovn.go:134] Ensuring zone local for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in node crc 2026-01-20T10:56:53.247473373+00:00 stderr F I0120 10:56:53.247461 30089 obj_retry.go:541] Creating *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 took: 17.611µs 2026-01-20T10:56:53.247473373+00:00 stderr F I0120 10:56:53.247469 30089 default_network_controller.go:699] Recording success event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2026-01-20T10:56:53.247575656+00:00 stderr F I0120 10:56:53.247525 30089 model_client.go:397] 
Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8d0a19e8-81fc-4e62-a546-49e373193af2}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247682319+00:00 stderr F I0120 10:56:53.247614 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247772221+00:00 stderr F I0120 10:56:53.247686 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e60566c7-a4e3-4fd4-b079-e4c3376e8b73}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247772221+00:00 stderr F I0120 10:56:53.247754 30089 port_cache.go:96] port-cache(openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z): added port &{name:openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z uuid:f5604df7-c1b9-4360-a570-e22fbf62c520 logicalSwitch:crc ips:[0xc00173dcb0] mac:[10 88 10 217 0 9] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.9/23] and MAC: 0a:58:0a:d9:00:09 2026-01-20T10:56:53.247784541+00:00 stderr F I0120 10:56:53.247772 30089 pods.go:220] [openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] addLogicalPort took 3.068492ms, 
libovsdb time 1.334966ms 2026-01-20T10:56:53.247784541+00:00 stderr F I0120 10:56:53.247781 30089 obj_retry.go:541] Creating *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z took: 3.110683ms 2026-01-20T10:56:53.247802912+00:00 stderr F I0120 10:56:53.247787 30089 default_network_controller.go:699] Recording success event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2026-01-20T10:56:53.247802912+00:00 stderr F I0120 10:56:53.247782 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e60566c7-a4e3-4fd4-b079-e4c3376e8b73}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247811442+00:00 stderr F I0120 10:56:53.247803 30089 default_network_controller.go:655] Recording add event on pod openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.247819932+00:00 stderr F I0120 10:56:53.247813 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.247828363+00:00 stderr F I0120 10:56:53.247822 30089 ovn.go:134] Ensuring zone local for Pod openshift-ingress-canary/ingress-canary-2vhcn in node crc 2026-01-20T10:56:53.247889194+00:00 stderr F I0120 10:56:53.247836 30089 base_network_controller_pods.go:476] [default/openshift-ingress-canary/ingress-canary-2vhcn] creating logical port openshift-ingress-canary_ingress-canary-2vhcn for pod on switch crc 2026-01-20T10:56:53.247889194+00:00 stderr F I0120 10:56:53.247805 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {de883ce4-2f69-42d2-a4b3-8165b5ceb500}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:de883ce4-2f69-42d2-a4b3-8165b5ceb500}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:de883ce4-2f69-42d2-a4b3-8165b5ceb500}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e60566c7-a4e3-4fd4-b079-e4c3376e8b73}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e60566c7-a4e3-4fd4-b079-e4c3376e8b73}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.247977587+00:00 stderr F I0120 10:56:53.244825 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2026-01-20T10:56:53.247990277+00:00 stderr F I0120 10:56:53.247962 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2026-01-20T10:56:53.248001147+00:00 stderr F I0120 10:56:53.247993 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc 2026-01-20T10:56:53.248015258+00:00 stderr F I0120 
10:56:53.248004 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc took: 16.19µs 2026-01-20T10:56:53.248023878+00:00 stderr F I0120 10:56:53.248013 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2026-01-20T10:56:53.248034708+00:00 stderr F I0120 10:56:53.248026 30089 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2026-01-20T10:56:53.248045588+00:00 stderr F I0120 10:56:53.248038 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2026-01-20T10:56:53.248122080+00:00 stderr F I0120 10:56:53.248052 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-zpnhg in node crc 2026-01-20T10:56:53.248122080+00:00 stderr F I0120 10:56:53.248091 30089 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg took: 40.031µs 2026-01-20T10:56:53.248122080+00:00 stderr F I0120 10:56:53.248107 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2026-01-20T10:56:53.248140201+00:00 stderr F I0120 10:56:53.248119 30089 default_network_controller.go:655] Recording add event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2026-01-20T10:56:53.248140201+00:00 stderr F I0120 10:56:53.248131 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2026-01-20T10:56:53.248149791+00:00 stderr F I0120 10:56:53.248142 30089 ovn.go:134] Ensuring zone local for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in node crc 2026-01-20T10:56:53.248158051+00:00 stderr F I0120 10:56:53.248112 30089 model_client.go:381] Update operations generated as: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248219413+00:00 stderr F I0120 10:56:53.248164 30089 base_network_controller_pods.go:476] [default/openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] creating logical port openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t for pod on switch crc 2026-01-20T10:56:53.248219413+00:00 stderr F I0120 10:56:53.247695 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248230913+00:00 stderr F I0120 10:56:53.248211 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-10-crc, ips: 10.217.0.68 2026-01-20T10:56:53.248239233+00:00 stderr F I0120 10:56:53.248228 30089 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-9-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.248239233+00:00 stderr F I0120 10:56:53.248235 30089 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-9-crc 2026-01-20T10:56:53.248295775+00:00 stderr F I0120 10:56:53.248238 30089 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.248295775+00:00 stderr F I0120 10:56:53.248267 30089 obj_retry.go:502] Add 
event received for *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.248295775+00:00 stderr F I0120 10:56:53.248282 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in node crc 2026-01-20T10:56:53.248314055+00:00 stderr F W0120 10:56:53.248301 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-9-crc. Using logical switch crc port uuid and addrs [10.217.0.52/23] 2026-01-20T10:56:53.248314055+00:00 stderr F I0120 10:56:53.248309 30089 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] creating logical port openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm for pod on switch crc 2026-01-20T10:56:53.248411888+00:00 stderr F I0120 10:56:53.248326 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.29 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cf213cb5-56f2-46e3-839e-1ce83cbf70ce}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248411888+00:00 stderr F I0120 10:56:53.248343 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248411888+00:00 stderr F I0120 10:56:53.248382 30089 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.52] from address set 
2026-01-20T10:56:53.248482310+00:00 stderr F I0120 10:56:53.248427 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.52]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248482310+00:00 stderr F I0120 10:56:53.244316 30089 pods.go:220] [openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] addLogicalPort took 9.898156ms, libovsdb time 9.212398ms 2026-01-20T10:56:53.248482310+00:00 stderr F I0120 10:56:53.248452 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:cf213cb5-56f2-46e3-839e-1ce83cbf70ce}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248495990+00:00 stderr F I0120 10:56:53.248476 30089 obj_retry.go:541] Creating *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd took: 14.063519ms 2026-01-20T10:56:53.248495990+00:00 stderr F I0120 10:56:53.248486 30089 default_network_controller.go:699] Recording success event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2026-01-20T10:56:53.248505411+00:00 stderr F I0120 10:56:53.248494 30089 default_network_controller.go:655] Recording add event on pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2026-01-20T10:56:53.248513611+00:00 stderr F I0120 10:56:53.248488 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.52]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248527841+00:00 stderr F I0120 
10:56:53.248511 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2026-01-20T10:56:53.248527841+00:00 stderr F I0120 10:56:53.248520 30089 ovn.go:134] Ensuring zone local for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in node crc 2026-01-20T10:56:53.248588163+00:00 stderr F I0120 10:56:53.248347 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248588163+00:00 stderr F I0120 10:56:53.248515 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248644954+00:00 stderr F I0120 10:56:53.248589 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248644954+00:00 stderr F I0120 10:56:53.244717 30089 ovn.go:134] Ensuring zone local for Pod openshift-console/console-644bb77b49-5x5xk in node crc 2026-01-20T10:56:53.248644954+00:00 stderr F I0120 10:56:53.248632 30089 
base_network_controller_pods.go:476] [default/openshift-console/console-644bb77b49-5x5xk] creating logical port openshift-console_console-644bb77b49-5x5xk for pod on switch crc 2026-01-20T10:56:53.248656865+00:00 stderr F I0120 10:56:53.248596 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248665965+00:00 stderr F I0120 10:56:53.248631 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248719056+00:00 stderr F I0120 10:56:53.248543 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1d 10.217.0.29]} options:{GoMap:map[iface-id-ver:5adb4a31-5991-4381-a1ea-f1b095a071ea requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1d 10.217.0.29]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8d0a19e8-81fc-4e62-a546-49e373193af2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8d0a19e8-81fc-4e62-a546-49e373193af2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate 
Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8d0a19e8-81fc-4e62-a546-49e373193af2}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.29 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cf213cb5-56f2-46e3-839e-1ce83cbf70ce}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:cf213cb5-56f2-46e3-839e-1ce83cbf70ce}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248719056+00:00 stderr F I0120 10:56:53.248622 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports 
Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248734107+00:00 stderr F I0120 10:56:53.248193 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248786678+00:00 stderr F I0120 10:56:53.248719 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248786678+00:00 stderr F I0120 10:56:53.248743 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.248848240+00:00 stderr F I0120 10:56:53.248791 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.249013764+00:00 stderr F I0120 10:56:53.248540 30089 base_network_controller_pods.go:476] [default/openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] creating logical port openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 for pod on switch crc 2026-01-20T10:56:53.249279331+00:00 stderr F I0120 10:56:53.249211 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.249297352+00:00 stderr F I0120 10:56:53.249273 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.249382964+00:00 stderr F I0120 10:56:53.249272 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} 
options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.249486127+00:00 stderr F I0120 10:56:53.249409 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.249561039+00:00 stderr F I0120 10:56:53.249368 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.249572989+00:00 stderr F I0120 10:56:53.249542 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.249669121+00:00 stderr F I0120 10:56:53.249615 30089 port_cache.go:96] port-cache(openshift-marketplace_redhat-operators-2nxg8): added port &{name:openshift-marketplace_redhat-operators-2nxg8 uuid:398e0e4f-2c60-4f66-97a2-6180b1379809 logicalSwitch:crc ips:[0xc00140d7a0] mac:[10 88 10 217 0 34] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.34/23] and MAC: 0a:58:0a:d9:00:22 
2026-01-20T10:56:53.249669121+00:00 stderr F I0120 10:56:53.249654 30089 pods.go:220] [openshift-marketplace/redhat-operators-2nxg8] addLogicalPort took 5.349834ms, libovsdb time 3.173266ms 2026-01-20T10:56:53.249688622+00:00 stderr F I0120 10:56:53.249665 30089 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/redhat-operators-2nxg8 took: 5.405105ms 2026-01-20T10:56:53.249688622+00:00 stderr F I0120 10:56:53.249673 30089 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-operators-2nxg8 2026-01-20T10:56:53.249688622+00:00 stderr F I0120 10:56:53.249683 30089 default_network_controller.go:655] Recording add event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.249776204+00:00 stderr F I0120 10:56:53.249729 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.249776204+00:00 stderr F I0120 10:56:53.249750 30089 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in node crc 2026-01-20T10:56:53.249788915+00:00 stderr F I0120 10:56:53.249772 30089 base_network_controller_pods.go:476] [default/openshift-controller-manager/controller-manager-778975cc4f-x5vcf] creating logical port openshift-controller-manager_controller-manager-778975cc4f-x5vcf for pod on switch crc 2026-01-20T10:56:53.249902468+00:00 stderr F I0120 10:56:53.249815 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.249966209+00:00 stderr F I0120 10:56:53.249874 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT 
Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250023202+00:00 stderr F I0120 10:56:53.249963 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250097174+00:00 stderr F I0120 10:56:53.250018 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250173146+00:00 stderr F I0120 10:56:53.250008 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250241908+00:00 stderr F I0120 10:56:53.250142 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250255208+00:00 stderr F I0120 10:56:53.250079 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250264528+00:00 stderr F I0120 10:56:53.250236 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250357461+00:00 stderr F I0120 10:56:53.250031 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250357461+00:00 stderr F I0120 10:56:53.250329 30089 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250739961+00:00 stderr F I0120 10:56:53.250635 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250865284+00:00 stderr F I0120 10:56:53.250789 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250881975+00:00 stderr F I0120 10:56:53.250853 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250945476+00:00 stderr F I0120 10:56:53.250848 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.250957967+00:00 stderr F I0120 10:56:53.250939 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.251227894+00:00 stderr F I0120 10:56:53.250966 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 
10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.251498911+00:00 stderr F I0120 10:56:53.251438 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.251549322+00:00 stderr F I0120 10:56:53.251496 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.253768482+00:00 stderr F I0120 10:56:53.250379 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.253856405+00:00 stderr F I0120 10:56:53.250159 30089 gateway_localnet.go:181] Node local addresses initialized to: map[10.217.0.2:{10.217.0.0 fffffe00} 127.0.0.1:{127.0.0.0 ff000000} 169.254.169.2:{169.254.169.0 fffffff8} 172.17.0.5:{172.17.0.0 ffffff00} 172.18.0.5:{172.18.0.0 ffffff00} 172.19.0.5:{172.19.0.0 ffffff00} 192.168.122.10:{192.168.122.0 ffffff00} 192.168.126.11:{192.168.126.0 ffffff00} 38.102.83.220:{38.102.83.0 ffffff00} ::1:{::1 ffffffffffffffffffffffffffffffff} fe80::10ac:87ff:feb2:ad51:{fe80:: ffffffffffffffff0000000000000000} fe80::14bd:13ff:fe74:c9cd:{fe80:: ffffffffffffffff0000000000000000} fe80::18b1:68ff:fe9c:c40:{fe80:: ffffffffffffffff0000000000000000} fe80::2421:e9ff:fe51:8528:{fe80:: ffffffffffffffff0000000000000000} fe80::24a3:1fff:fe9e:d25d:{fe80:: ffffffffffffffff0000000000000000} fe80::2827:daff:fe64:90c3:{fe80:: ffffffffffffffff0000000000000000} fe80::2ca9:c5ff:fe72:7599:{fe80:: ffffffffffffffff0000000000000000} fe80::2cb7:3cff:fe9e:676e:{fe80:: ffffffffffffffff0000000000000000} fe80::34ef:40ff:fe1b:3e9d:{fe80:: ffffffffffffffff0000000000000000} fe80::387e:65ff:fe08:8400:{fe80:: ffffffffffffffff0000000000000000} fe80::3c61:62ff:fec4:deb8:{fe80:: ffffffffffffffff0000000000000000} fe80::44e4:faff:fe17:9200:{fe80:: ffffffffffffffff0000000000000000} fe80::473:59ff:fe32:80d7:{fe80:: ffffffffffffffff0000000000000000} 
fe80::4c5b:8dff:feaf:4240:{fe80:: ffffffffffffffff0000000000000000} fe80::54de:50ff:febf:191b:{fe80:: ffffffffffffffff0000000000000000} fe80::5853:e3ff:fe6b:7ff4:{fe80:: ffffffffffffffff0000000000000000} fe80::58a2:3eff:fe87:6ee:{fe80:: ffffffffffffffff0000000000000000} fe80::60ce:41ff:fe69:cd23:{fe80:: ffffffffffffffff0000000000000000} fe80::682d:a8ff:fef5:d024:{fe80:: ffffffffffffffff0000000000000000} fe80::684f:93ff:fedd:d0ef:{fe80:: ffffffffffffffff0000000000000000} fe80::7c6c:78ff:feca:a54:{fe80:: ffffffffffffffff0000000000000000} fe80::8072:e7ff:fe17:74dd:{fe80:: ffffffffffffffff0000000000000000} fe80::84ca:e1ff:feca:63fe:{fe80:: ffffffffffffffff0000000000000000} fe80::8884:b3ff:fea8:6484:{fe80:: ffffffffffffffff0000000000000000} fe80::8c52:63ff:fe1a:2742:{fe80:: ffffffffffffffff0000000000000000} fe80::905a:49ff:fe91:78d4:{fe80:: ffffffffffffffff0000000000000000} fe80::9845:c3ff:fe52:86bf:{fe80:: ffffffffffffffff0000000000000000} fe80::9871:a8ff:fedd:f93d:{fe80:: ffffffffffffffff0000000000000000} fe80::98:3eff:fe0f:af67:{fe80:: ffffffffffffffff0000000000000000} fe80::9c1f:c0ff:fee2:5336:{fe80:: ffffffffffffffff0000000000000000} fe80::9ce8:d2ff:fe22:5468:{fe80:: ffffffffffffffff0000000000000000} fe80::a4e9:b5ff:fef1:5d41:{fe80:: ffffffffffffffff0000000000000000} fe80::a809:9fff:fe9d:5416:{fe80:: ffffffffffffffff0000000000000000} fe80::a818:e682:b7c3:1665:{fe80:: ffffffffffffffff0000000000000000} fe80::a853:eaff:fe34:78ff:{fe80:: ffffffffffffffff0000000000000000} fe80::a858:ecff:feab:1708:{fe80:: ffffffffffffffff0000000000000000} fe80::ac1a:9aff:fee3:46fe:{fe80:: ffffffffffffffff0000000000000000} fe80::b47d:f0ff:fe17:6b93:{fe80:: ffffffffffffffff0000000000000000} fe80::b4dc:d9ff:fe26:3d4:{fe80:: ffffffffffffffff0000000000000000} fe80::bc02:d7ff:fe8b:b6e:{fe80:: ffffffffffffffff0000000000000000} fe80::c035:b3ff:fedb:8075:{fe80:: ffffffffffffffff0000000000000000} fe80::c806:87ff:fed1:62a:{fe80:: ffffffffffffffff0000000000000000} fe80::ccc0:c6ff:fe59:9517:{fe80:: 
ffffffffffffffff0000000000000000} fe80::cf6:89ff:fe0b:117:{fe80:: ffffffffffffffff0000000000000000} fe80::d478:b9ff:fe5f:395a:{fe80:: ffffffffffffffff0000000000000000} fe80::d80c:13ff:fe4b:2679:{fe80:: ffffffffffffffff0000000000000000} fe80::d81b:7aff:fe9b:be46:{fe80:: ffffffffffffffff0000000000000000} fe80::e40b:b1ff:feeb:48a5:{fe80:: ffffffffffffffff0000000000000000} fe80::e4ec:c1ff:fe1c:9fe7:{fe80:: ffffffffffffffff0000000000000000} fe80::f017:faff:fe81:dba3:{fe80:: ffffffffffffffff0000000000000000} fe80::f0a7:69ff:fe13:711a:{fe80:: ffffffffffffffff0000000000000000} fe80::f816:bbff:fe9a:a4cc:{fe80:: ffffffffffffffff0000000000000000}] 2026-01-20T10:56:53.253856405+00:00 stderr F I0120 10:56:53.253847 30089 ovs.go:159] Exec(23): /usr/bin/ovs-vsctl --timeout=15 port-to-br br-ex 2026-01-20T10:56:53.254207134+00:00 stderr F I0120 10:56:53.254099 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.87 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.254232695+00:00 stderr F I0120 10:56:53.254151 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] 
Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.254691447+00:00 stderr F I0120 10:56:53.254636 30089 port_cache.go:96] port-cache(openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs): added port &{name:openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs uuid:c2174bce-e1da-468b-aa60-b9409f80c104 logicalSwitch:crc ips:[0xc00140d140] mac:[10 88 10 217 0 88] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.88/23] and MAC: 0a:58:0a:d9:00:58 2026-01-20T10:56:53.254691447+00:00 stderr F I0120 10:56:53.254663 30089 pods.go:220] [openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] addLogicalPort took 11.398917ms, libovsdb time 8.54623ms 2026-01-20T10:56:53.254691447+00:00 stderr F I0120 10:56:53.254674 30089 obj_retry.go:541] Creating *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs took: 17.49123ms 2026-01-20T10:56:53.254691447+00:00 stderr F I0120 10:56:53.254681 30089 default_network_controller.go:699] Recording success event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.254716697+00:00 stderr F I0120 10:56:53.254691 30089 default_network_controller.go:655] Recording add event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.254716697+00:00 stderr F I0120 10:56:53.254700 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.254716697+00:00 stderr F I0120 10:56:53.254710 30089 ovn.go:134] 
Ensuring zone local for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in node crc 2026-01-20T10:56:53.254726998+00:00 stderr F I0120 10:56:53.254716 30089 obj_retry.go:541] Creating *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 took: 9.32µs 2026-01-20T10:56:53.254726998+00:00 stderr F I0120 10:56:53.254721 30089 default_network_controller.go:699] Recording success event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.254799900+00:00 stderr F I0120 10:56:53.254752 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.254968254+00:00 stderr F I0120 10:56:53.254908 30089 port_cache.go:96] port-cache(openshift-dns-operator_dns-operator-75f687757b-nz2xb): added port &{name:openshift-dns-operator_dns-operator-75f687757b-nz2xb uuid:b212e2c2-3d4e-4898-aede-c926b74813f0 logicalSwitch:crc ips:[0xc0018141e0] mac:[10 88 10 217 0 18] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.18/23] and MAC: 0a:58:0a:d9:00:12 2026-01-20T10:56:53.254968254+00:00 stderr F I0120 10:56:53.254932 30089 pods.go:220] [openshift-dns-operator/dns-operator-75f687757b-nz2xb] addLogicalPort took 10.07466ms, libovsdb time 7.745198ms 2026-01-20T10:56:53.254968254+00:00 stderr F I0120 10:56:53.254941 30089 obj_retry.go:541] Creating *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb took: 10.100611ms 2026-01-20T10:56:53.254968254+00:00 stderr F I0120 10:56:53.254948 30089 default_network_controller.go:699] Recording success event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2026-01-20T10:56:53.254968254+00:00 stderr F I0120 10:56:53.254957 30089 
default_network_controller.go:655] Recording add event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.254968254+00:00 stderr F I0120 10:56:53.254964 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.254985025+00:00 stderr F I0120 10:56:53.254972 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-767c585db5-zd56b in node crc 2026-01-20T10:56:53.254985025+00:00 stderr F I0120 10:56:53.254978 30089 obj_retry.go:541] Creating *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b took: 7.23µs 2026-01-20T10:56:53.254985025+00:00 stderr F I0120 10:56:53.254983 30089 default_network_controller.go:699] Recording success event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.254997015+00:00 stderr F I0120 10:56:53.254988 30089 default_network_controller.go:655] Recording add event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2026-01-20T10:56:53.255005735+00:00 stderr F I0120 10:56:53.254997 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2026-01-20T10:56:53.255013975+00:00 stderr F I0120 10:56:53.255004 30089 ovn.go:134] Ensuring zone local for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in node crc 2026-01-20T10:56:53.255027766+00:00 stderr F I0120 10:56:53.255019 30089 base_network_controller_pods.go:476] [default/openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] creating logical port openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc for pod on switch crc 2026-01-20T10:56:53.255134169+00:00 stderr F I0120 10:56:53.255044 30089 port_cache.go:96] port-cache(openshift-multus_network-metrics-daemon-qdfr4): added port &{name:openshift-multus_network-metrics-daemon-qdfr4 
uuid:3564ddfd-a311-4df3-b5d0-1e76294b4ab0 logicalSwitch:crc ips:[0xc0018833e0] mac:[10 88 10 217 0 3] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.3/23] and MAC: 0a:58:0a:d9:00:03 2026-01-20T10:56:53.255134169+00:00 stderr F I0120 10:56:53.255091 30089 pods.go:220] [openshift-multus/network-metrics-daemon-qdfr4] addLogicalPort took 9.248178ms, libovsdb time 8.025246ms 2026-01-20T10:56:53.255134169+00:00 stderr F I0120 10:56:53.255116 30089 obj_retry.go:541] Creating *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 took: 9.446454ms 2026-01-20T10:56:53.255134169+00:00 stderr F I0120 10:56:53.255122 30089 default_network_controller.go:699] Recording success event on pod openshift-multus/network-metrics-daemon-qdfr4 2026-01-20T10:56:53.255134169+00:00 stderr F I0120 10:56:53.255130 30089 default_network_controller.go:655] Recording add event on pod cert-manager/cert-manager-758df9885c-cq6zm 2026-01-20T10:56:53.255153399+00:00 stderr F I0120 10:56:53.255138 30089 obj_retry.go:502] Add event received for *v1.Pod cert-manager/cert-manager-758df9885c-cq6zm 2026-01-20T10:56:53.255153399+00:00 stderr F I0120 10:56:53.255146 30089 ovn.go:134] Ensuring zone local for Pod cert-manager/cert-manager-758df9885c-cq6zm in node crc 2026-01-20T10:56:53.255212961+00:00 stderr F I0120 10:56:53.255161 30089 base_network_controller_pods.go:476] [default/cert-manager/cert-manager-758df9885c-cq6zm] creating logical port cert-manager_cert-manager-758df9885c-cq6zm for pod on switch crc 2026-01-20T10:56:53.255401676+00:00 stderr F I0120 10:56:53.255341 30089 port_cache.go:96] port-cache(openshift-multus_multus-admission-controller-6c7c885997-4hbbc): added port &{name:openshift-multus_multus-admission-controller-6c7c885997-4hbbc uuid:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6 logicalSwitch:crc ips:[0xc001623620] mac:[10 88 10 217 0 32] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.32/23] and MAC: 0a:58:0a:d9:00:20 2026-01-20T10:56:53.255401676+00:00 stderr F I0120 10:56:53.255377 30089 
pods.go:220] [openshift-multus/multus-admission-controller-6c7c885997-4hbbc] addLogicalPort took 11.122209ms, libovsdb time 5.329124ms 2026-01-20T10:56:53.255401676+00:00 stderr F I0120 10:56:53.255388 30089 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc took: 11.14476ms 2026-01-20T10:56:53.255401676+00:00 stderr F I0120 10:56:53.255397 30089 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2026-01-20T10:56:53.255417446+00:00 stderr F I0120 10:56:53.255408 30089 default_network_controller.go:655] Recording add event on pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2026-01-20T10:56:53.255426146+00:00 stderr F I0120 10:56:53.255418 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2026-01-20T10:56:53.255426146+00:00 stderr F I0120 10:56:53.255400 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} options:{GoMap:map[iface-id-ver:f12a256b-7128-4680-8f54-8e40a3e56300 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c58d2e89-322d-46d7-9147-b9e591118d62}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.255445177+00:00 stderr F I0120 10:56:53.255428 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in node crc 2026-01-20T10:56:53.255456037+00:00 stderr F I0120 10:56:53.255448 30089 base_network_controller_pods.go:476] [default/openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] creating logical port openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw for pod on switch crc 
2026-01-20T10:56:53.255515119+00:00 stderr F I0120 10:56:53.255464 30089 port_cache.go:96] port-cache(openshift-marketplace_marketplace-operator-8b455464d-nc8zc): added port &{name:openshift-marketplace_marketplace-operator-8b455464d-nc8zc uuid:8d0a19e8-81fc-4e62-a546-49e373193af2 logicalSwitch:crc ips:[0xc001667fb0] mac:[10 88 10 217 0 29] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.29/23] and MAC: 0a:58:0a:d9:00:1d 2026-01-20T10:56:53.255515119+00:00 stderr F I0120 10:56:53.255483 30089 pods.go:220] [openshift-marketplace/marketplace-operator-8b455464d-nc8zc] addLogicalPort took 8.874149ms, libovsdb time 6.792813ms 2026-01-20T10:56:53.255515119+00:00 stderr F I0120 10:56:53.255493 30089 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc took: 8.893419ms 2026-01-20T10:56:53.255515119+00:00 stderr F I0120 10:56:53.255497 30089 default_network_controller.go:699] Recording success event on pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc 2026-01-20T10:56:53.255515119+00:00 stderr F I0120 10:56:53.255505 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/installer-7-crc 2026-01-20T10:56:53.255528599+00:00 stderr F I0120 10:56:53.255513 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/installer-7-crc 2026-01-20T10:56:53.255528599+00:00 stderr F I0120 10:56:53.255521 30089 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-7-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.255586101+00:00 stderr F I0120 10:56:53.255540 30089 port_cache.go:122] port-cache(openshift-kube-scheduler_installer-7-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.255586101+00:00 stderr F I0120 10:56:53.255553 30089 pods.go:151] Deleting pod: openshift-kube-scheduler/installer-7-crc 2026-01-20T10:56:53.255586101+00:00 stderr F I0120 10:56:53.255570 30089 port_cache.go:96] port-cache(openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8): added port &{name:openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 uuid:99ef3a4b-7858-4c9b-90db-217867afe36a logicalSwitch:crc ips:[0xc001223200] mac:[10 88 10 217 0 19] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.19/23] and MAC: 0a:58:0a:d9:00:13 2026-01-20T10:56:53.255597941+00:00 stderr F W0120 10:56:53.255589 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-scheduler/installer-7-crc. 
Using logical switch crc port uuid and addrs [10.217.0.67/23] 2026-01-20T10:56:53.255597941+00:00 stderr F I0120 10:56:53.255590 30089 pods.go:220] [openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] addLogicalPort took 7.06336ms, libovsdb time 4.595133ms 2026-01-20T10:56:53.255608701+00:00 stderr F I0120 10:56:53.255602 30089 obj_retry.go:541] Creating *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 took: 7.08181ms 2026-01-20T10:56:53.255617431+00:00 stderr F I0120 10:56:53.255609 30089 default_network_controller.go:699] Recording success event on pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2026-01-20T10:56:53.255617431+00:00 stderr F I0120 10:56:53.255586 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c58d2e89-322d-46d7-9147-b9e591118d62}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.255631343+00:00 stderr F I0120 10:56:53.255618 30089 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2026-01-20T10:56:53.255639633+00:00 stderr F I0120 10:56:53.255634 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2026-01-20T10:56:53.255649873+00:00 stderr F I0120 10:56:53.255643 30089 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in node crc 2026-01-20T10:56:53.255713695+00:00 stderr F I0120 10:56:53.255659 30089 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] creating logical port 
openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 for pod on switch crc 2026-01-20T10:56:53.255713695+00:00 stderr F I0120 10:56:53.255673 30089 address_set.go:613] (87985ed0-37ee-4fd2-af4b-1c4bf264af83/default-network-controller:Namespace:openshift-kube-scheduler:v4/a15634036902741400949) deleting addresses [10.217.0.67] from address set 2026-01-20T10:56:53.255713695+00:00 stderr F I0120 10:56:53.255697 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.67]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.255810877+00:00 stderr F I0120 10:56:53.255765 30089 port_cache.go:96] port-cache(openshift-console_console-644bb77b49-5x5xk): added port &{name:openshift-console_console-644bb77b49-5x5xk uuid:9a79516e-7a72-4d42-b0ab-87a99aa064f3 logicalSwitch:crc ips:[0xc001860b70] mac:[10 88 10 217 0 73] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.73/23] and MAC: 0a:58:0a:d9:00:49 2026-01-20T10:56:53.255810877+00:00 stderr F I0120 10:56:53.255787 30089 pods.go:220] [openshift-console/console-644bb77b49-5x5xk] addLogicalPort took 7.165473ms, libovsdb time 4.912192ms 2026-01-20T10:56:53.255810877+00:00 stderr F I0120 10:56:53.255798 30089 obj_retry.go:541] Creating *v1.Pod openshift-console/console-644bb77b49-5x5xk took: 11.082378ms 2026-01-20T10:56:53.255810877+00:00 stderr F I0120 10:56:53.255804 30089 default_network_controller.go:699] Recording success event on pod openshift-console/console-644bb77b49-5x5xk 2026-01-20T10:56:53.255824178+00:00 stderr F I0120 10:56:53.255794 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.67]}}] Timeout: Where:[where column _uuid == 
{87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.255824178+00:00 stderr F I0120 10:56:53.255813 30089 default_network_controller.go:655] Recording add event on pod cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.255824178+00:00 stderr F I0120 10:56:53.255809 30089 port_cache.go:96] port-cache(openshift-ingress-canary_ingress-canary-2vhcn): added port &{name:openshift-ingress-canary_ingress-canary-2vhcn uuid:7a350d82-7987-4ce6-ae41-dd930411ca29 logicalSwitch:crc ips:[0xc0018bb560] mac:[10 88 10 217 0 71] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.71/23] and MAC: 0a:58:0a:d9:00:47 2026-01-20T10:56:53.255833498+00:00 stderr F I0120 10:56:53.255823 30089 obj_retry.go:502] Add event received for *v1.Pod cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.255843468+00:00 stderr F I0120 10:56:53.255833 30089 ovn.go:134] Ensuring zone local for Pod cert-manager/cert-manager-webhook-855f577f79-7bdxq in node crc 2026-01-20T10:56:53.255904840+00:00 stderr F I0120 10:56:53.255843 30089 port_cache.go:96] port-cache(openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b): added port &{name:openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b uuid:3e86699a-fa52-4a81-9386-60d37f3fa10c logicalSwitch:crc ips:[0xc001901c50] mac:[10 88 10 217 0 72] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.72/23] and MAC: 0a:58:0a:d9:00:48 2026-01-20T10:56:53.255904840+00:00 stderr F I0120 10:56:53.255851 30089 base_network_controller_pods.go:476] [default/cert-manager/cert-manager-webhook-855f577f79-7bdxq] creating logical port cert-manager_cert-manager-webhook-855f577f79-7bdxq for pod on switch crc 2026-01-20T10:56:53.255904840+00:00 stderr F I0120 10:56:53.255889 30089 port_cache.go:96] port-cache(openshift-marketplace_community-operators-6m4w2): added port &{name:openshift-marketplace_community-operators-6m4w2 uuid:de883ce4-2f69-42d2-a4b3-8165b5ceb500 logicalSwitch:crc 
ips:[0xc0016222d0] mac:[10 88 10 217 0 35] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.35/23] and MAC: 0a:58:0a:d9:00:23 2026-01-20T10:56:53.255904840+00:00 stderr F I0120 10:56:53.255899 30089 pods.go:220] [openshift-marketplace/community-operators-6m4w2] addLogicalPort took 12.385314ms, libovsdb time 7.541022ms 2026-01-20T10:56:53.255923600+00:00 stderr F I0120 10:56:53.255905 30089 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/community-operators-6m4w2 took: 12.406654ms 2026-01-20T10:56:53.255923600+00:00 stderr F I0120 10:56:53.255910 30089 default_network_controller.go:699] Recording success event on pod openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.255923600+00:00 stderr F I0120 10:56:53.255917 30089 default_network_controller.go:655] Recording add event on pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx 2026-01-20T10:56:53.255933241+00:00 stderr F I0120 10:56:53.255926 30089 obj_retry.go:502] Add event received for *v1.Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx 2026-01-20T10:56:53.255941761+00:00 stderr F I0120 10:56:53.255933 30089 ovn.go:134] Ensuring zone local for Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx in node crc 2026-01-20T10:56:53.255950391+00:00 stderr F I0120 10:56:53.255943 30089 base_network_controller_pods.go:476] [default/cert-manager/cert-manager-cainjector-676dd9bd64-mggnx] creating logical port cert-manager_cert-manager-cainjector-676dd9bd64-mggnx for pod on switch crc 2026-01-20T10:56:53.256008563+00:00 stderr F I0120 10:56:53.255959 30089 port_cache.go:96] port-cache(openshift-controller-manager_controller-manager-778975cc4f-x5vcf): added port &{name:openshift-controller-manager_controller-manager-778975cc4f-x5vcf uuid:eda38bc9-7da5-4a6b-818c-4e1e8f85426d logicalSwitch:crc ips:[0xc0018d6ff0] mac:[10 88 10 217 0 87] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.87/23] and MAC: 0a:58:0a:d9:00:57 2026-01-20T10:56:53.256008563+00:00 stderr F I0120 10:56:53.255973 
30089 pods.go:220] [openshift-controller-manager/controller-manager-778975cc4f-x5vcf] addLogicalPort took 6.215588ms, libovsdb time 1.823219ms 2026-01-20T10:56:53.256008563+00:00 stderr F I0120 10:56:53.255980 30089 obj_retry.go:541] Creating *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf took: 6.231349ms 2026-01-20T10:56:53.256008563+00:00 stderr F I0120 10:56:53.255985 30089 default_network_controller.go:699] Recording success event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.256008563+00:00 stderr F I0120 10:56:53.255992 30089 default_network_controller.go:655] Recording add event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2026-01-20T10:56:53.256008563+00:00 stderr F I0120 10:56:53.255998 30089 obj_retry.go:502] Add event received for *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2026-01-20T10:56:53.256008563+00:00 stderr F I0120 10:56:53.256004 30089 ovn.go:134] Ensuring zone local for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in node crc 2026-01-20T10:56:53.256021733+00:00 stderr F I0120 10:56:53.256012 30089 base_network_controller_pods.go:476] [default/hostpath-provisioner/csi-hostpathplugin-hvm8g] creating logical port hostpath-provisioner_csi-hostpathplugin-hvm8g for pod on switch crc 2026-01-20T10:56:53.256107215+00:00 stderr F I0120 10:56:53.255823 30089 pods.go:220] [openshift-ingress-canary/ingress-canary-2vhcn] addLogicalPort took 7.994485ms, libovsdb time 5.724974ms 2026-01-20T10:56:53.256107215+00:00 stderr F I0120 10:56:53.256054 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2c 10.217.0.44]} options:{GoMap:map[iface-id-ver:e7870154-de6e-4216-81fb-b87e7502c412 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2c 10.217.0.44]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{2062ef9d-1b69-4e9a-ae97-128ed1f07896}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256107215+00:00 stderr F I0120 10:56:53.256098 30089 obj_retry.go:541] Creating *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn took: 8.269953ms 2026-01-20T10:56:53.256126876+00:00 stderr F I0120 10:56:53.256119 30089 default_network_controller.go:699] Recording success event on pod openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.256135366+00:00 stderr F I0120 10:56:53.256130 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-11-crc 2026-01-20T10:56:53.256143636+00:00 stderr F I0120 10:56:53.256127 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2062ef9d-1b69-4e9a-ae97-128ed1f07896}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256151547+00:00 stderr F I0120 10:56:53.256142 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-11-crc 2026-01-20T10:56:53.256210198+00:00 stderr F I0120 10:56:53.256154 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.256210198+00:00 stderr F I0120 10:56:53.256169 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256210198+00:00 stderr F I0120 10:56:53.256182 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2062ef9d-1b69-4e9a-ae97-128ed1f07896}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256210198+00:00 stderr F I0120 10:56:53.256199 30089 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-11-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.256210198+00:00 stderr F I0120 10:56:53.256205 30089 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-11-crc 2026-01-20T10:56:53.256225159+00:00 stderr F I0120 10:56:53.256211 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256293290+00:00 stderr F I0120 10:56:53.256249 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256293290+00:00 stderr F W0120 10:56:53.256260 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-11-crc. Using logical switch crc port uuid and addrs [10.217.0.83/23] 2026-01-20T10:56:53.256356152+00:00 stderr F I0120 10:56:53.256297 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256408113+00:00 stderr F I0120 10:56:53.256368 30089 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.83] from address set 2026-01-20T10:56:53.256463655+00:00 stderr F I0120 10:56:53.256409 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.83]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256463655+00:00 stderr F I0120 10:56:53.256430 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c58d2e89-322d-46d7-9147-b9e591118d62}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.256474995+00:00 stderr F I0120 10:56:53.256458 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.83]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256579988+00:00 stderr F I0120 10:56:53.256515 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256579988+00:00 stderr F I0120 10:56:53.256537 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256579988+00:00 stderr F I0120 10:56:53.255860 30089 pods.go:220] [openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] addLogicalPort took 9.091466ms, libovsdb time 6.72929ms 2026-01-20T10:56:53.256579988+00:00 stderr F I0120 10:56:53.256539 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.44 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fc1e926e-4c6b-4b78-8d31-0fe859db2dd0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256579988+00:00 stderr F 
I0120 10:56:53.256562 30089 obj_retry.go:541] Creating *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b took: 9.800924ms 2026-01-20T10:56:53.256600868+00:00 stderr F I0120 10:56:53.256577 30089 default_network_controller.go:699] Recording success event on pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2026-01-20T10:56:53.256600868+00:00 stderr F I0120 10:56:53.256587 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.256600868+00:00 stderr F I0120 10:56:53.256583 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256610339+00:00 stderr F I0120 10:56:53.256587 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fc1e926e-4c6b-4b78-8d31-0fe859db2dd0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256610339+00:00 stderr F I0120 10:56:53.256598 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.256619039+00:00 stderr F I0120 10:56:53.256612 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/installer-13-crc in node crc 2026-01-20T10:56:53.256681381+00:00 stderr F I0120 10:56:53.256626 30089 base_network_controller_pods.go:476] [default/openshift-kube-apiserver/installer-13-crc] creating logical port openshift-kube-apiserver_installer-13-crc for pod on switch crc 2026-01-20T10:56:53.256681381+00:00 stderr F I0120 10:56:53.256606 30089 
transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2c 10.217.0.44]} options:{GoMap:map[iface-id-ver:e7870154-de6e-4216-81fb-b87e7502c412 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2c 10.217.0.44]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2062ef9d-1b69-4e9a-ae97-128ed1f07896}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2062ef9d-1b69-4e9a-ae97-128ed1f07896}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2062ef9d-1b69-4e9a-ae97-128ed1f07896}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.44 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fc1e926e-4c6b-4b78-8d31-0fe859db2dd0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fc1e926e-4c6b-4b78-8d31-0fe859db2dd0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256681381+00:00 stderr F I0120 10:56:53.256644 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256796474+00:00 stderr F I0120 10:56:53.256736 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256796474+00:00 stderr F I0120 10:56:53.256776 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256857105+00:00 stderr F I0120 10:56:53.256813 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256857105+00:00 stderr F I0120 10:56:53.256821 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} options:{GoMap:map[iface-id-ver:9387c79a-cd5b-4d24-a558-6dbbdd89fe1e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8a3b108-0209-4107-a8a7-c85848e5a053}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.256915657+00:00 stderr F I0120 10:56:53.256872 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8a3b108-0209-4107-a8a7-c85848e5a053}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.256977119+00:00 stderr F I0120 10:56:53.256933 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8a3b108-0209-4107-a8a7-c85848e5a053}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257034380+00:00 stderr F I0120 10:56:53.256971 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257034380+00:00 stderr F I0120 10:56:53.256993 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.42 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {728ddbde-04b2-49cf-8404-6ed04db916a3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257034380+00:00 stderr F I0120 10:56:53.257021 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257132543+00:00 stderr F I0120 10:56:53.257080 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257132543+00:00 stderr F I0120 10:56:53.256358 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257149983+00:00 stderr F I0120 10:56:53.257109 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:728ddbde-04b2-49cf-8404-6ed04db916a3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257235425+00:00 stderr F I0120 10:56:53.257142 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: 
UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257292287+00:00 stderr F I0120 10:56:53.257176 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} options:{GoMap:map[iface-id-ver:f12a256b-7128-4680-8f54-8e40a3e56300 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c58d2e89-322d-46d7-9147-b9e591118d62}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c58d2e89-322d-46d7-9147-b9e591118d62}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c58d2e89-322d-46d7-9147-b9e591118d62}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.42 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {728ddbde-04b2-49cf-8404-6ed04db916a3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:728ddbde-04b2-49cf-8404-6ed04db916a3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257417350+00:00 stderr F I0120 10:56:53.257371 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-9-crc, ips: 10.217.0.52 2026-01-20T10:56:53.257417350+00:00 stderr F I0120 10:56:53.256590 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257417350+00:00 stderr F I0120 10:56:53.257390 30089 default_network_controller.go:655] Recording add event on pod openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.257417350+00:00 stderr F I0120 10:56:53.257377 30089 model_client.go:381] Update operations 
generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.38 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8b3c4de-4c65-4766-8e56-37c95584c479}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257417350+00:00 stderr F I0120 10:56:53.257399 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.257417350+00:00 stderr F I0120 10:56:53.257394 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257436541+00:00 stderr F I0120 10:56:53.257392 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:} 
{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257451711+00:00 stderr F I0120 10:56:53.257428 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257451711+00:00 stderr F I0120 10:56:53.257425 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:b8b3c4de-4c65-4766-8e56-37c95584c479}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257538033+00:00 stderr F I0120 10:56:53.257118 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257550654+00:00 stderr F I0120 10:56:53.257470 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} options:{GoMap:map[iface-id-ver:9387c79a-cd5b-4d24-a558-6dbbdd89fe1e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8a3b108-0209-4107-a8a7-c85848e5a053}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8a3b108-0209-4107-a8a7-c85848e5a053}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8a3b108-0209-4107-a8a7-c85848e5a053}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.38 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8b3c4de-4c65-4766-8e56-37c95584c479}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:b8b3c4de-4c65-4766-8e56-37c95584c479}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257550654+00:00 stderr F I0120 10:56:53.257529 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257565254+00:00 stderr F I0120 10:56:53.257538 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:c229f43c-9d3a-4848-a9da-d997459f440b requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6bd84c4c-190f-4483-8240-543e175b9038}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257622706+00:00 stderr F I0120 10:56:53.256817 30089 port_cache.go:96] port-cache(openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t): added port &{name:openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t uuid:710ea152-1844-44ad-b1a6-805ec9a3700e logicalSwitch:crc ips:[0xc001833e90] mac:[10 88 10 217 0 45] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.45/23] and MAC: 0a:58:0a:d9:00:2d 2026-01-20T10:56:53.257622706+00:00 stderr F I0120 10:56:53.257575 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6bd84c4c-190f-4483-8240-543e175b9038}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257622706+00:00 stderr F I0120 10:56:53.257459 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257639176+00:00 stderr F I0120 10:56:53.257617 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6bd84c4c-190f-4483-8240-543e175b9038}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257698068+00:00 stderr F I0120 
10:56:53.257407 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/certified-operators-mpjb7 in node crc 2026-01-20T10:56:53.257698068+00:00 stderr F I0120 10:56:53.257674 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/certified-operators-mpjb7] creating logical port openshift-marketplace_certified-operators-mpjb7 for pod on switch crc 2026-01-20T10:56:53.257698068+00:00 stderr F I0120 10:56:53.257668 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257757809+00:00 stderr F I0120 10:56:53.257547 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.20 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257757809+00:00 stderr F I0120 10:56:53.257719 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257837521+00:00 stderr F I0120 10:56:53.257584 30089 pods.go:220] [openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] addLogicalPort took 9.432054ms, libovsdb time 3.066122ms 2026-01-20T10:56:53.257837521+00:00 stderr F I0120 10:56:53.257813 30089 obj_retry.go:541] Creating *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t took: 9.67258ms 2026-01-20T10:56:53.257837521+00:00 stderr F I0120 10:56:53.257819 30089 default_network_controller.go:699] Recording success event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2026-01-20T10:56:53.257837521+00:00 stderr F I0120 10:56:53.257807 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:1d5b65e7-a4c3-495a-a5b0-72caab7218fd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{740dc30b-dced-4651-920f-33387935c67c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257837521+00:00 stderr F I0120 10:56:53.257827 30089 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.257856362+00:00 stderr F I0120 10:56:53.257836 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.257856362+00:00 stderr F I0120 10:56:53.257843 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc 2026-01-20T10:56:53.257856362+00:00 stderr F I0120 10:56:53.257848 30089 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc took: 6.861µs 2026-01-20T10:56:53.257856362+00:00 stderr F I0120 10:56:53.257841 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:740dc30b-dced-4651-920f-33387935c67c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257866732+00:00 stderr F I0120 10:56:53.257852 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.257942194+00:00 stderr F I0120 10:56:53.257880 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:740dc30b-dced-4651-920f-33387935c67c}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257942194+00:00 stderr F I0120 10:56:53.257771 30089 transact.go:42] Configuring OVN: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.257959195+00:00 stderr F I0120 10:56:53.257936 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 
logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6b80fb2-3ad1-43c9-82f1-a1264c3144f4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.258018726+00:00 stderr F I0120 10:56:53.257968 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6b80fb2-3ad1-43c9-82f1-a1264c3144f4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.258030687+00:00 stderr F I0120 10:56:53.257986 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:c229f43c-9d3a-4848-a9da-d997459f440b requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6bd84c4c-190f-4483-8240-543e175b9038}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6bd84c4c-190f-4483-8240-543e175b9038}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6bd84c4c-190f-4483-8240-543e175b9038}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6b80fb2-3ad1-43c9-82f1-a1264c3144f4}] 
Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6b80fb2-3ad1-43c9-82f1-a1264c3144f4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.258202281+00:00 stderr F I0120 10:56:53.258146 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4c834564-7798-4750-8152-583ce8856c99}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.258202281+00:00 stderr F I0120 10:56:53.258180 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:4c834564-7798-4750-8152-583ce8856c99}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.258284033+00:00 stderr F I0120 10:56:53.258193 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:1d5b65e7-a4c3-495a-a5b0-72caab7218fd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {740dc30b-dced-4651-920f-33387935c67c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:740dc30b-dced-4651-920f-33387935c67c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:740dc30b-dced-4651-920f-33387935c67c}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4c834564-7798-4750-8152-583ce8856c99}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:4c834564-7798-4750-8152-583ce8856c99}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.258786238+00:00 stderr F I0120 10:56:53.258722 30089 port_cache.go:96] port-cache(cert-manager_cert-manager-webhook-855f577f79-7bdxq): added port &{name:cert-manager_cert-manager-webhook-855f577f79-7bdxq uuid:2062ef9d-1b69-4e9a-ae97-128ed1f07896 logicalSwitch:crc ips:[0xc001d09890] mac:[10 88 10 217 0 44] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.44/23] and MAC: 0a:58:0a:d9:00:2c 2026-01-20T10:56:53.258786238+00:00 stderr F I0120 10:56:53.258743 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-scheduler/installer-7-crc, ips: 10.217.0.67 2026-01-20T10:56:53.258786238+00:00 stderr F I0120 10:56:53.258754 30089 pods.go:220] [cert-manager/cert-manager-webhook-855f577f79-7bdxq] addLogicalPort took 2.904938ms, libovsdb time 2.102026ms 2026-01-20T10:56:53.258786238+00:00 stderr F I0120 10:56:53.258766 30089 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2026-01-20T10:56:53.258786238+00:00 stderr F I0120 10:56:53.258772 30089 obj_retry.go:541] Creating *v1.Pod 
cert-manager/cert-manager-webhook-855f577f79-7bdxq took: 2.935859ms 2026-01-20T10:56:53.258786238+00:00 stderr F I0120 10:56:53.258781 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2026-01-20T10:56:53.258803708+00:00 stderr F I0120 10:56:53.258784 30089 default_network_controller.go:699] Recording success event on pod cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.258803708+00:00 stderr F I0120 10:56:53.258792 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in node crc 2026-01-20T10:56:53.258815038+00:00 stderr F I0120 10:56:53.258805 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-11-crc, ips: 10.217.0.83 2026-01-20T10:56:53.258833119+00:00 stderr F I0120 10:56:53.258818 30089 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] creating logical port openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh for pod on switch crc 2026-01-20T10:56:53.258987803+00:00 stderr F I0120 10:56:53.258936 30089 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm): added port &{name:openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm uuid:ad3d5728-34ed-421c-a749-1d7a957800a8 logicalSwitch:crc ips:[0xc001a89a70] mac:[10 88 10 217 0 21] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.21/23] and MAC: 0a:58:0a:d9:00:15 2026-01-20T10:56:53.258987803+00:00 stderr F I0120 10:56:53.258953 30089 pods.go:220] [openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] addLogicalPort took 10.663107ms, libovsdb time 1.792028ms 2026-01-20T10:56:53.258987803+00:00 stderr F I0120 10:56:53.258963 30089 obj_retry.go:541] Creating *v1.Pod 
openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm took: 10.683948ms 2026-01-20T10:56:53.258987803+00:00 stderr F I0120 10:56:53.258968 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.259051595+00:00 stderr F I0120 10:56:53.258994 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.259114616+00:00 stderr F I0120 10:56:53.259050 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.259188938+00:00 stderr F I0120 10:56:53.259138 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.259481906+00:00 stderr F I0120 10:56:53.259438 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.259537337+00:00 stderr F I0120 10:56:53.259497 30089 port_cache.go:96] port-cache(cert-manager_cert-manager-758df9885c-cq6zm): added port &{name:cert-manager_cert-manager-758df9885c-cq6zm uuid:c58d2e89-322d-46d7-9147-b9e591118d62 logicalSwitch:crc ips:[0xc00187f530] mac:[10 88 10 217 0 42] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.42/23] and MAC: 0a:58:0a:d9:00:2a 2026-01-20T10:56:53.259537337+00:00 stderr F I0120 10:56:53.259510 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.259606279+00:00 stderr F I0120 10:56:53.259530 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] 
Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.259606279+00:00 stderr F I0120 10:56:53.259519 30089 pods.go:220] [cert-manager/cert-manager-758df9885c-cq6zm] addLogicalPort took 4.365228ms, libovsdb time 2.316242ms 2026-01-20T10:56:53.259617510+00:00 stderr F I0120 10:56:53.259605 30089 obj_retry.go:541] Creating *v1.Pod cert-manager/cert-manager-758df9885c-cq6zm took: 4.45886ms 2026-01-20T10:56:53.259617510+00:00 stderr F I0120 10:56:53.259611 30089 default_network_controller.go:699] Recording success event on pod cert-manager/cert-manager-758df9885c-cq6zm 2026-01-20T10:56:53.259626570+00:00 stderr F I0120 10:56:53.259618 30089 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.259636200+00:00 stderr F I0120 10:56:53.259625 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.259636200+00:00 stderr F I0120 10:56:53.259633 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-server-v65wr in node crc 2026-01-20T10:56:53.259645520+00:00 stderr F I0120 10:56:53.259638 30089 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr took: 6.99µs 2026-01-20T10:56:53.259645520+00:00 stderr F I0120 10:56:53.259642 30089 
default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.259654661+00:00 stderr F I0120 10:56:53.259646 30089 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2026-01-20T10:56:53.259654661+00:00 stderr F I0120 10:56:53.259652 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2026-01-20T10:56:53.259682151+00:00 stderr F I0120 10:56:53.259657 30089 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.259682151+00:00 stderr F I0120 10:56:53.259666 30089 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j): port not found in cache or already marked for removal 2026-01-20T10:56:53.259682151+00:00 stderr F I0120 10:56:53.259670 30089 pods.go:151] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2026-01-20T10:56:53.259753293+00:00 stderr F W0120 10:56:53.259699 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j. 
Using logical switch crc port uuid and addrs [10.217.0.89/23] 2026-01-20T10:56:53.259786354+00:00 stderr F I0120 10:56:53.259764 30089 address_set.go:613] (369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.89] from address set 2026-01-20T10:56:53.259822235+00:00 stderr F I0120 10:56:53.259792 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.89]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.259855466+00:00 stderr F I0120 10:56:53.259827 30089 port_cache.go:96] port-cache(openshift-kube-apiserver_installer-13-crc): added port &{name:openshift-kube-apiserver_installer-13-crc uuid:d8a3b108-0209-4107-a8a7-c85848e5a053 logicalSwitch:crc ips:[0xc000daec30] mac:[10 88 10 217 0 38] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.38/23] and MAC: 0a:58:0a:d9:00:26 2026-01-20T10:56:53.259855466+00:00 stderr F I0120 10:56:53.259845 30089 pods.go:220] [openshift-kube-apiserver/installer-13-crc] addLogicalPort took 3.226977ms, libovsdb time 2.351233ms 2026-01-20T10:56:53.259866366+00:00 stderr F I0120 10:56:53.259852 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-apiserver/installer-13-crc took: 3.242667ms 2026-01-20T10:56:53.259866366+00:00 stderr F I0120 10:56:53.259858 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.259866366+00:00 stderr F I0120 10:56:53.259828 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.89]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: 
Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.259975099+00:00 stderr F I0120 10:56:53.259937 30089 port_cache.go:96] port-cache(hostpath-provisioner_csi-hostpathplugin-hvm8g): added port &{name:hostpath-provisioner_csi-hostpathplugin-hvm8g uuid:52259988-af2b-4ee5-bbfe-801c4ebeb0ae logicalSwitch:crc ips:[0xc001bb1e00] mac:[10 88 10 217 0 49] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.49/23] and MAC: 0a:58:0a:d9:00:31 2026-01-20T10:56:53.259975099+00:00 stderr F I0120 10:56:53.259953 30089 pods.go:220] [hostpath-provisioner/csi-hostpathplugin-hvm8g] addLogicalPort took 3.944535ms, libovsdb time 2.542379ms 2026-01-20T10:56:53.259975099+00:00 stderr F I0120 10:56:53.259959 30089 obj_retry.go:541] Creating *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g took: 3.956006ms 2026-01-20T10:56:53.259975099+00:00 stderr F I0120 10:56:53.259964 30089 default_network_controller.go:699] Recording success event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2026-01-20T10:56:53.260922694+00:00 stderr F I0120 10:56:53.260875 30089 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2): added port &{name:openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 uuid:f24db1f4-18a4-418a-9c99-1d94ebfba0da logicalSwitch:crc ips:[0xc001d09410] mac:[10 88 10 217 0 24] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.24/23] and MAC: 0a:58:0a:d9:00:18 2026-01-20T10:56:53.260922694+00:00 stderr F I0120 10:56:53.260895 30089 pods.go:220] [openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] addLogicalPort took 5.246651ms, libovsdb time 3.409632ms 2026-01-20T10:56:53.260922694+00:00 stderr F I0120 10:56:53.260902 30089 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 took: 5.260601ms 2026-01-20T10:56:53.260922694+00:00 stderr F I0120 10:56:53.260906 30089 default_network_controller.go:699] Recording success event on pod 
openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2026-01-20T10:56:53.260922694+00:00 stderr F I0120 10:56:53.260914 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.260951485+00:00 stderr F I0120 10:56:53.260920 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.260951485+00:00 stderr F I0120 10:56:53.260920 30089 port_cache.go:96] port-cache(openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc): added port &{name:openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc uuid:d2f291e9-b4fe-47a7-a644-298254d226c5 logicalSwitch:crc ips:[0xc001d08c90] mac:[10 88 10 217 0 23] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.23/23] and MAC: 0a:58:0a:d9:00:17 2026-01-20T10:56:53.260951485+00:00 stderr F I0120 10:56:53.260927 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc 2026-01-20T10:56:53.260951485+00:00 stderr F I0120 10:56:53.260932 30089 pods.go:220] [openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] addLogicalPort took 5.922029ms, libovsdb time 3.141434ms 2026-01-20T10:56:53.260951485+00:00 stderr F I0120 10:56:53.260933 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-apiserver/kube-apiserver-crc took: 8.061µs 2026-01-20T10:56:53.260951485+00:00 stderr F I0120 10:56:53.260939 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.260951485+00:00 stderr F I0120 10:56:53.260940 30089 obj_retry.go:541] Creating *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc took: 5.9357ms 2026-01-20T10:56:53.260951485+00:00 stderr F I0120 10:56:53.260946 30089 default_network_controller.go:699] Recording success event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 
2026-01-20T10:56:53.261014297+00:00 stderr F I0120 10:56:53.260972 30089 port_cache.go:96] port-cache(openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw): added port &{name:openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw uuid:f8e99409-b28a-4d27-a8e5-267ea6a801cf logicalSwitch:crc ips:[0xc000a40b70] mac:[10 88 10 217 0 20] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.20/23] and MAC: 0a:58:0a:d9:00:14 2026-01-20T10:56:53.261014297+00:00 stderr F I0120 10:56:53.260995 30089 pods.go:220] [openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] addLogicalPort took 5.55908ms, libovsdb time 3.420811ms 2026-01-20T10:56:53.261014297+00:00 stderr F I0120 10:56:53.261002 30089 obj_retry.go:541] Creating *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw took: 5.57621ms 2026-01-20T10:56:53.261014297+00:00 stderr F I0120 10:56:53.261006 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2026-01-20T10:56:53.261027917+00:00 stderr F I0120 10:56:53.261013 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/installer-8-crc 2026-01-20T10:56:53.261027917+00:00 stderr F I0120 10:56:53.261020 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/installer-8-crc 2026-01-20T10:56:53.261037237+00:00 stderr F I0120 10:56:53.261029 30089 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-8-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.261050888+00:00 stderr F I0120 10:56:53.261039 30089 port_cache.go:122] port-cache(openshift-kube-scheduler_installer-8-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.261050888+00:00 stderr F I0120 10:56:53.261046 30089 pods.go:151] Deleting pod: openshift-kube-scheduler/installer-8-crc 2026-01-20T10:56:53.261143100+00:00 stderr F W0120 10:56:53.261109 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-scheduler/installer-8-crc. Using logical switch crc port uuid and addrs [10.217.0.84/23] 2026-01-20T10:56:53.261161461+00:00 stderr F I0120 10:56:53.261153 30089 address_set.go:613] (87985ed0-37ee-4fd2-af4b-1c4bf264af83/default-network-controller:Namespace:openshift-kube-scheduler:v4/a15634036902741400949) deleting addresses [10.217.0.84] from address set 2026-01-20T10:56:53.261212652+00:00 stderr F I0120 10:56:53.261180 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.84]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.261247374+00:00 stderr F I0120 10:56:53.261218 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.84]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.261273815+00:00 stderr F I0120 10:56:53.261076 30089 port_cache.go:96] port-cache(cert-manager_cert-manager-cainjector-676dd9bd64-mggnx): added port &{name:cert-manager_cert-manager-cainjector-676dd9bd64-mggnx uuid:6bd84c4c-190f-4483-8240-543e175b9038 logicalSwitch:crc ips:[0xc0008c8f00] mac:[10 88 10 217 0 41] 
expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.41/23] and MAC: 0a:58:0a:d9:00:29 2026-01-20T10:56:53.261283375+00:00 stderr F I0120 10:56:53.261269 30089 pods.go:220] [cert-manager/cert-manager-cainjector-676dd9bd64-mggnx] addLogicalPort took 5.330293ms, libovsdb time 2.98332ms 2026-01-20T10:56:53.261283375+00:00 stderr F I0120 10:56:53.261279 30089 obj_retry.go:541] Creating *v1.Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx took: 5.346554ms 2026-01-20T10:56:53.261292525+00:00 stderr F I0120 10:56:53.261285 30089 default_network_controller.go:699] Recording success event on pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx 2026-01-20T10:56:53.261300965+00:00 stderr F I0120 10:56:53.261271 30089 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh): added port &{name:openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh uuid:9aafbb57-c78d-409c-9ff4-1561d4387b2d logicalSwitch:crc ips:[0xc000da0e10] mac:[10 88 10 217 0 63] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.63/23] and MAC: 0a:58:0a:d9:00:3f 2026-01-20T10:56:53.261309356+00:00 stderr F I0120 10:56:53.261300 30089 pods.go:220] [openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] addLogicalPort took 2.497307ms, libovsdb time 1.736087ms 2026-01-20T10:56:53.261309356+00:00 stderr F I0120 10:56:53.261305 30089 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh took: 2.516327ms 2026-01-20T10:56:53.261318206+00:00 stderr F I0120 10:56:53.261309 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2026-01-20T10:56:53.261318206+00:00 stderr F I0120 10:56:53.261315 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.261331906+00:00 stderr F 
I0120 10:56:53.261322 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.261340806+00:00 stderr F I0120 10:56:53.261329 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc 2026-01-20T10:56:53.261340806+00:00 stderr F I0120 10:56:53.261328 30089 port_cache.go:96] port-cache(openshift-marketplace_certified-operators-mpjb7): added port &{name:openshift-marketplace_certified-operators-mpjb7 uuid:740dc30b-dced-4651-920f-33387935c67c logicalSwitch:crc ips:[0xc000da7b00] mac:[10 88 10 217 0 33] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.33/23] and MAC: 0a:58:0a:d9:00:21 2026-01-20T10:56:53.261350627+00:00 stderr F I0120 10:56:53.261341 30089 pods.go:220] [openshift-marketplace/certified-operators-mpjb7] addLogicalPort took 3.673839ms, libovsdb time 2.900938ms 2026-01-20T10:56:53.261359427+00:00 stderr F I0120 10:56:53.261348 30089 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/certified-operators-mpjb7 took: 3.941956ms 2026-01-20T10:56:53.261359427+00:00 stderr F I0120 10:56:53.261353 30089 default_network_controller.go:699] Recording success event on pod openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.261368837+00:00 stderr F I0120 10:56:53.261359 30089 default_network_controller.go:655] Recording add event on pod openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.261377597+00:00 stderr F I0120 10:56:53.261366 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.261377597+00:00 stderr F I0120 10:56:53.261374 30089 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg in node crc 2026-01-20T10:56:53.261386558+00:00 stderr F I0120 10:56:53.261378 30089 obj_retry.go:541] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg took: 6.44µs 2026-01-20T10:56:53.261386558+00:00 
stderr F I0120 10:56:53.261382 30089 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.261395288+00:00 stderr F I0120 10:56:53.261334 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc took: 7.05µs 2026-01-20T10:56:53.261457879+00:00 stderr F I0120 10:56:53.261394 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.261457879+00:00 stderr F I0120 10:56:53.261401 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-10-retry-1-crc 2026-01-20T10:56:53.261457879+00:00 stderr F I0120 10:56:53.261406 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-10-retry-1-crc 2026-01-20T10:56:53.261457879+00:00 stderr F I0120 10:56:53.261411 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.261457879+00:00 stderr F I0120 10:56:53.261419 30089 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-10-retry-1-crc): port not found in cache or already marked for removal 2026-01-20T10:56:53.261457879+00:00 stderr F I0120 10:56:53.261423 30089 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-10-retry-1-crc 2026-01-20T10:56:53.261502761+00:00 stderr F W0120 10:56:53.261455 30089 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-10-retry-1-crc. 
Using logical switch crc port uuid and addrs [10.217.0.76/23] 2026-01-20T10:56:53.261502761+00:00 stderr F I0120 10:56:53.261484 30089 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.76] from address set 2026-01-20T10:56:53.261519321+00:00 stderr F I0120 10:56:53.261502 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.76]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.261559762+00:00 stderr F I0120 10:56:53.261529 30089 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.76]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.261819519+00:00 stderr F I0120 10:56:53.261702 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-scheduler/installer-8-crc, ips: 10.217.0.84 2026-01-20T10:56:53.261819519+00:00 stderr F I0120 10:56:53.261721 30089 default_network_controller.go:655] Recording add event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2026-01-20T10:56:53.261819519+00:00 stderr F I0120 10:56:53.261729 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2026-01-20T10:56:53.261819519+00:00 stderr F I0120 10:56:53.261735 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in node crc 2026-01-20T10:56:53.261819519+00:00 stderr F I0120 10:56:53.261752 30089 base_network_controller_pods.go:476] 
[default/openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] creating logical port openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv for pod on switch crc 2026-01-20T10:56:53.262079766+00:00 stderr F I0120 10:56:53.262016 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.262099326+00:00 stderr F I0120 10:56:53.262056 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.262161708+00:00 stderr F I0120 10:56:53.262113 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.262406705+00:00 stderr F I0120 10:56:53.262365 30089 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j, ips: 10.217.0.89 2026-01-20T10:56:53.262703142+00:00 stderr F I0120 10:56:53.262668 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-10-retry-1-crc, ips: 10.217.0.76 
2026-01-20T10:56:53.262847286+00:00 stderr F I0120 10:56:53.262808 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.262865047+00:00 stderr F I0120 10:56:53.262846 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.262916818+00:00 stderr F I0120 10:56:53.262860 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 
logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.263342059+00:00 stderr F I0120 10:56:53.263299 30089 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv): added port &{name:openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv uuid:2a5717ea-0a50-4ebb-b087-90e637274a33 logicalSwitch:crc ips:[0xc0011ab6b0] mac:[10 88 10 217 0 25] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.25/23] and MAC: 0a:58:0a:d9:00:19 2026-01-20T10:56:53.263342059+00:00 stderr F I0120 10:56:53.263334 30089 pods.go:220] [openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] addLogicalPort took 1.592762ms, libovsdb time 425.341µs 2026-01-20T10:56:53.263357350+00:00 stderr F I0120 10:56:53.263346 30089 obj_retry.go:541] Creating *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv took: 1.609052ms 2026-01-20T10:56:53.263364780+00:00 stderr F I0120 10:56:53.263354 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2026-01-20T10:56:53.263373510+00:00 stderr F I0120 10:56:53.263368 30089 default_network_controller.go:655] Recording add event on pod openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.263406811+00:00 stderr F I0120 10:56:53.263382 30089 obj_retry.go:502] Add event received for *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.263406811+00:00 stderr F I0120 10:56:53.263402 
30089 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-v54bt in node crc 2026-01-20T10:56:53.263461433+00:00 stderr F I0120 10:56:53.263437 30089 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-target-v54bt] creating logical port openshift-network-diagnostics_network-check-target-v54bt for pod on switch crc 2026-01-20T10:56:53.263682209+00:00 stderr F I0120 10:56:53.263636 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.263713059+00:00 stderr F I0120 10:56:53.263686 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.263805672+00:00 stderr F I0120 10:56:53.263769 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.264245834+00:00 stderr F I0120 10:56:53.264202 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.4 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.264372468+00:00 stderr F I0120 10:56:53.264253 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.264372468+00:00 stderr F I0120 10:56:53.264278 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: 
Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.264784179+00:00 stderr F I0120 10:56:53.264751 30089 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-v54bt): added port &{name:openshift-network-diagnostics_network-check-target-v54bt uuid:c0f95133-023f-4bbd-8719-e29d2cfbb32d logicalSwitch:crc ips:[0xc0012e51d0] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04 2026-01-20T10:56:53.264784179+00:00 stderr F I0120 10:56:53.264772 30089 pods.go:220] [openshift-network-diagnostics/network-check-target-v54bt] addLogicalPort took 1.362447ms, libovsdb time 464.663µs 2026-01-20T10:56:53.264784179+00:00 stderr F I0120 10:56:53.264780 30089 obj_retry.go:541] Creating *v1.Pod openshift-network-diagnostics/network-check-target-v54bt took: 1.383558ms 2026-01-20T10:56:53.264795249+00:00 stderr F I0120 10:56:53.264787 30089 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.264815490+00:00 stderr F I0120 10:56:53.264801 30089 factory.go:988] Added *v1.Pod event handler 3 2026-01-20T10:56:53.264852561+00:00 stderr F I0120 10:56:53.264837 30089 admin_network_policy_controller.go:124] Setting up event handlers for Admin Network Policy 2026-01-20T10:56:53.264908292+00:00 stderr F I0120 10:56:53.264890 30089 admin_network_policy_controller.go:142] Setting up event handlers for Baseline Admin Network Policy 2026-01-20T10:56:53.264935943+00:00 stderr F I0120 10:56:53.264925 30089 admin_network_policy_controller.go:159] Setting up event handlers for Namespaces in Admin Network Policy controller 2026-01-20T10:56:53.264971424+00:00 stderr 
F I0120 10:56:53.264948 30089 obj_retry.go:427] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod 2026-01-20T10:56:53.265015495+00:00 stderr F I0120 10:56:53.265002 30089 admin_network_policy_controller.go:175] Setting up event handlers for Pods in Admin Network Policy controller 2026-01-20T10:56:53.265040606+00:00 stderr F I0120 10:56:53.265024 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-operators 2026-01-20T10:56:53.265079817+00:00 stderr F I0120 10:56:53.265048 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-user-workload-monitoring 2026-01-20T10:56:53.265110857+00:00 stderr F I0120 10:56:53.265095 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-authentication 2026-01-20T10:56:53.265110857+00:00 stderr F I0120 10:56:53.265100 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/redhat-operators-2nxg8 2026-01-20T10:56:53.265120558+00:00 stderr F I0120 10:56:53.265114 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-machine-config-operator 2026-01-20T10:56:53.265129068+00:00 stderr F I0120 10:56:53.265120 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-oauth-apiserver 2026-01-20T10:56:53.265129068+00:00 stderr F I0120 10:56:53.265119 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.265129068+00:00 stderr F I0120 10:56:53.265126 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller cert-manager-operator 2026-01-20T10:56:53.265138798+00:00 stderr F I0120 10:56:53.265130 30089 
admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.265138798+00:00 stderr F I0120 10:56:53.265114 30089 admin_network_policy_controller.go:191] Setting up event handlers for Nodes in Admin Network Policy controller 2026-01-20T10:56:53.265138798+00:00 stderr F I0120 10:56:53.265132 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-node-identity 2026-01-20T10:56:53.265153109+00:00 stderr F I0120 10:56:53.265141 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-nutanix-infra 2026-01-20T10:56:53.265153109+00:00 stderr F I0120 10:56:53.265146 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress 2026-01-20T10:56:53.265161759+00:00 stderr F I0120 10:56:53.265155 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-storage-version-migrator-operator 2026-01-20T10:56:53.265169949+00:00 stderr F I0120 10:56:53.265160 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2026-01-20T10:56:53.265169949+00:00 stderr F I0120 10:56:53.265163 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller hostpath-provisioner 2026-01-20T10:56:53.265178419+00:00 stderr F I0120 10:56:53.265171 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config 2026-01-20T10:56:53.265178419+00:00 stderr F I0120 10:56:53.265174 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console/console-644bb77b49-5x5xk 2026-01-20T10:56:53.265186739+00:00 stderr F I0120 10:56:53.265178 30089 
admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-infra 2026-01-20T10:56:53.265195140+00:00 stderr F I0120 10:56:53.265185 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2026-01-20T10:56:53.265203290+00:00 stderr F I0120 10:56:53.265193 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.265215960+00:00 stderr F I0120 10:56:53.265208 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-10-crc 2026-01-20T10:56:53.265247061+00:00 stderr F I0120 10:56:53.265187 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-node 2026-01-20T10:56:53.265247061+00:00 stderr F I0120 10:56:53.265232 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/installer-8-crc 2026-01-20T10:56:53.265257181+00:00 stderr F I0120 10:56:53.265246 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.265257181+00:00 stderr F I0120 10:56:53.265246 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller default 2026-01-20T10:56:53.265265652+00:00 stderr F I0120 10:56:53.265259 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-version 2026-01-20T10:56:53.265273982+00:00 stderr F I0120 10:56:53.265254 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.265273982+00:00 stderr 
F I0120 10:56:53.265266 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-multus 2026-01-20T10:56:53.265282842+00:00 stderr F I0120 10:56:53.265272 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.265282842+00:00 stderr F I0120 10:56:53.265274 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config-operator 2026-01-20T10:56:53.265296292+00:00 stderr F I0120 10:56:53.265282 30089 admin_network_policy_controller.go:549] Adding Node in Admin Network Policy controller crc 2026-01-20T10:56:53.265296292+00:00 stderr F I0120 10:56:53.265291 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-etcd 2026-01-20T10:56:53.265304963+00:00 stderr F I0120 10:56:53.265298 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.265313803+00:00 stderr F I0120 10:56:53.265303 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller cert-manager 2026-01-20T10:56:53.265313803+00:00 stderr F I0120 10:56:53.265310 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-samples-operator 2026-01-20T10:56:53.265323773+00:00 stderr F I0120 10:56:53.265312 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2026-01-20T10:56:53.265323773+00:00 stderr F I0120 10:56:53.265317 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-storage-operator 2026-01-20T10:56:53.265333713+00:00 stderr F I0120 10:56:53.265318 30089 admin_network_policy_controller.go:489] Adding Pod 
in Admin Network Policy controller openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.265333713+00:00 stderr F I0120 10:56:53.265324 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openstack-operators 2026-01-20T10:56:53.265333713+00:00 stderr F I0120 10:56:53.265326 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.265343634+00:00 stderr F I0120 10:56:53.265334 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-apiserver 2026-01-20T10:56:53.265343634+00:00 stderr F I0120 10:56:53.265339 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2026-01-20T10:56:53.265352584+00:00 stderr F I0120 10:56:53.265341 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-controller-manager 2026-01-20T10:56:53.265384515+00:00 stderr F I0120 10:56:53.265362 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-vsphere-infra 2026-01-20T10:56:53.265384515+00:00 stderr F I0120 10:56:53.265370 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller cert-manager/cert-manager-cainjector-676dd9bd64-mggnx 2026-01-20T10:56:53.265384515+00:00 stderr F I0120 10:56:53.265379 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-dns-operator 2026-01-20T10:56:53.265395195+00:00 stderr F I0120 10:56:53.265384 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.265395195+00:00 stderr F I0120 10:56:53.265387 30089 admin_network_policy_controller.go:443] 
Adding Namespace in Admin Network Policy controller openshift-kube-storage-version-migrator 2026-01-20T10:56:53.265404275+00:00 stderr F I0120 10:56:53.265392 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.265404275+00:00 stderr F I0120 10:56:53.265395 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-marketplace 2026-01-20T10:56:53.265404275+00:00 stderr F I0120 10:56:53.265400 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.265418026+00:00 stderr F I0120 10:56:53.265402 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller kube-public 2026-01-20T10:56:53.265418026+00:00 stderr F I0120 10:56:53.265410 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.265418026+00:00 stderr F I0120 10:56:53.265410 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-monitoring 2026-01-20T10:56:53.265428176+00:00 stderr F I0120 10:56:53.265416 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.265428176+00:00 stderr F I0120 10:56:53.265420 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-controller-manager-operator 2026-01-20T10:56:53.265428176+00:00 stderr F I0120 10:56:53.265423 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.265438336+00:00 stderr F 
I0120 10:56:53.265428 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-machine-api 2026-01-20T10:56:53.265447796+00:00 stderr F I0120 10:56:53.265436 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-diagnostics 2026-01-20T10:56:53.265447796+00:00 stderr F I0120 10:56:53.265443 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-operator-lifecycle-manager 2026-01-20T10:56:53.265457327+00:00 stderr F I0120 10:56:53.265449 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console-operator 2026-01-20T10:56:53.265466097+00:00 stderr F I0120 10:56:53.265456 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console-user-settings 2026-01-20T10:56:53.265466097+00:00 stderr F I0120 10:56:53.265455 30089 admin_network_policy_controller.go:218] Starting controller default-network-controller 2026-01-20T10:56:53.265476547+00:00 stderr F I0120 10:56:53.265469 30089 admin_network_policy_controller.go:221] Waiting for informer caches to sync 2026-01-20T10:56:53.265506358+00:00 stderr F I0120 10:56:53.265490 30089 shared_informer.go:311] Waiting for caches to sync for default-network-controller 2026-01-20T10:56:53.265506358+00:00 stderr F I0120 10:56:53.265503 30089 shared_informer.go:318] Caches are synced for default-network-controller 2026-01-20T10:56:53.265515598+00:00 stderr F I0120 10:56:53.265508 30089 admin_network_policy_controller.go:228] Repairing Admin Network Policies 2026-01-20T10:56:53.265515598+00:00 stderr F I0120 10:56:53.265463 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-controller-manager 2026-01-20T10:56:53.265539989+00:00 stderr F I0120 10:56:53.265523 30089 admin_network_policy_controller.go:443] Adding 
Namespace in Admin Network Policy controller openshift-cluster-machine-approver 2026-01-20T10:56:53.265539989+00:00 stderr F I0120 10:56:53.265536 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console 2026-01-20T10:56:53.265548859+00:00 stderr F I0120 10:56:53.265541 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-etcd-operator 2026-01-20T10:56:53.265561429+00:00 stderr F I0120 10:56:53.265547 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress-canary 2026-01-20T10:56:53.265561429+00:00 stderr F I0120 10:56:53.265553 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-operator 2026-01-20T10:56:53.265561429+00:00 stderr F I0120 10:56:53.265559 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller kube-node-lease 2026-01-20T10:56:53.265570980+00:00 stderr F I0120 10:56:53.265564 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-authentication-operator 2026-01-20T10:56:53.265579140+00:00 stderr F I0120 10:56:53.265570 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cloud-network-config-controller 2026-01-20T10:56:53.265579140+00:00 stderr F I0120 10:56:53.265575 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ovirt-infra 2026-01-20T10:56:53.265586850+00:00 stderr F I0120 10:56:53.265581 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift 2026-01-20T10:56:53.265595210+00:00 stderr F I0120 10:56:53.265587 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config-managed 
2026-01-20T10:56:53.265605891+00:00 stderr F I0120 10:56:53.265430 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-11-crc 2026-01-20T10:56:53.265605891+00:00 stderr F I0120 10:56:53.265601 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-openstack-infra 2026-01-20T10:56:53.265617171+00:00 stderr F I0120 10:56:53.265610 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-route-controller-manager 2026-01-20T10:56:53.265625441+00:00 stderr F I0120 10:56:53.265614 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2026-01-20T10:56:53.265625441+00:00 stderr F I0120 10:56:53.265618 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress-operator 2026-01-20T10:56:53.265634061+00:00 stderr F I0120 10:56:53.265626 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-apiserver-operator 2026-01-20T10:56:53.265642572+00:00 stderr F I0120 10:56:53.265632 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.265642572+00:00 stderr F I0120 10:56:53.265634 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ovn-kubernetes 2026-01-20T10:56:53.265642572+00:00 stderr F I0120 10:56:53.265638 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.265652452+00:00 stderr F I0120 10:56:53.265606 30089 model_client.go:381] Update operations generated as: [{Op:update Table:ACL 
Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b099dc38-13d2-468d-a119-69434d19ed27}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.265652452+00:00 stderr F I0120 10:56:53.265644 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller hostpath-provisioner/csi-hostpathplugin-hvm8g
2026-01-20T10:56:53.265666702+00:00 stderr F I0120 10:56:53.265653 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9
2026-01-20T10:56:53.265666702+00:00 stderr F I0120 10:56:53.265658 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc
2026-01-20T10:56:53.265666702+00:00 stderr F I0120 10:56:53.265642 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-controller-manager-operator
2026-01-20T10:56:53.265692343+00:00 stderr F I0120 10:56:53.265676 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-dns
2026-01-20T10:56:53.265692343+00:00 stderr F I0120 10:56:53.265688 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-apiserver
2026-01-20T10:56:53.265701793+00:00 stderr F I0120 10:56:53.265695 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-image-registry
2026-01-20T10:56:53.265701793+00:00 stderr F I0120 10:56:53.265662 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-10-retry-1-crc
2026-01-20T10:56:53.265710653+00:00 stderr F I0120 10:56:53.265702 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kni-infra
2026-01-20T10:56:53.265710653+00:00 stderr F I0120 10:56:53.265701 30089 repair.go:29] Repairing admin network policies took 186.335µs
2026-01-20T10:56:53.265720674+00:00 stderr F I0120 10:56:53.265709 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-scheduler-operator
2026-01-20T10:56:53.265729834+00:00 stderr F I0120 10:56:53.265718 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-service-ca
2026-01-20T10:56:53.265729834+00:00 stderr F I0120 10:56:53.265724 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openstack
2026-01-20T10:56:53.265739994+00:00 stderr F I0120 10:56:53.265731 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller kube-system
2026-01-20T10:56:53.265749004+00:00 stderr F I0120 10:56:53.265737 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-apiserver-operator
2026-01-20T10:56:53.265749004+00:00 stderr F I0120 10:56:53.265745 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-host-network
2026-01-20T10:56:53.265760725+00:00 stderr F I0120 10:56:53.265753 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cloud-platform-infra
2026-01-20T10:56:53.265769875+00:00 stderr F I0120 10:56:53.265760 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-scheduler
2026-01-20T10:56:53.265769875+00:00 stderr F I0120 10:56:53.265766 30089 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-service-ca-operator
2026-01-20T10:56:53.265778415+00:00 stderr F I0120 10:56:53.265745 30089 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab13cfbf-8c22-4847-bf8f-29854d97f49b}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.265816106+00:00 stderr F I0120 10:56:53.265709 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf
2026-01-20T10:56:53.265826166+00:00 stderr F I0120 10:56:53.265813 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/image-registry-75b7bb6564-ln84v
2026-01-20T10:56:53.265826166+00:00 stderr F I0120 10:56:53.265803 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b099dc38-13d2-468d-a119-69434d19ed27} {GoUUID:ab13cfbf-8c22-4847-bf8f-29854d97f49b}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.265826166+00:00 stderr F I0120 10:56:53.265822 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb
2026-01-20T10:56:53.265836217+00:00 stderr F I0120 10:56:53.265823 30089 repair.go:104] Repairing baseline admin network policies took 106.593µs
2026-01-20T10:56:53.265836217+00:00 stderr F I0120 10:56:53.265831 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr
2026-01-20T10:56:53.265836217+00:00 stderr F I0120 10:56:53.265832 30089 admin_network_policy_controller.go:241] Starting Admin Network Policy workers
2026-01-20T10:56:53.265845677+00:00 stderr F I0120 10:56:53.265839 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-node-identity/network-node-identity-7xghp
2026-01-20T10:56:53.265845677+00:00 stderr F I0120 10:56:53.265841 30089 admin_network_policy_controller.go:252] Starting Baseline Admin Network Policy workers
2026-01-20T10:56:53.265854537+00:00 stderr F I0120 10:56:53.265844 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz
2026-01-20T10:56:53.265854537+00:00 stderr F I0120 10:56:53.265849 30089 admin_network_policy_controller.go:263] Starting Namespace Admin Network Policy workers
2026-01-20T10:56:53.265854537+00:00 stderr F I0120 10:56:53.265851 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-apiserver/apiserver-7fc54b8dd7-d2bhp
2026-01-20T10:56:53.265863827+00:00 stderr F I0120 10:56:53.265858 30089 admin_network_policy_controller.go:274] Starting Pod Admin Network Policy workers
2026-01-20T10:56:53.265872268+00:00 stderr F I0120 10:56:53.265866 30089 admin_network_policy_controller.go:285] Starting Node Admin Network Policy workers
2026-01-20T10:56:53.265880618+00:00 stderr F I0120 10:56:53.265871 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operators in Admin Network Policy controller
2026-01-20T10:56:53.265880618+00:00 stderr F I0120 10:56:53.265823 30089 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b099dc38-13d2-468d-a119-69434d19ed27}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab13cfbf-8c22-4847-bf8f-29854d97f49b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b099dc38-13d2-468d-a119-69434d19ed27} {GoUUID:ab13cfbf-8c22-4847-bf8f-29854d97f49b}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.265893678+00:00 stderr F I0120 10:56:53.265858 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns/dns-default-gbw49
2026-01-20T10:56:53.265893678+00:00 stderr F I0120 10:56:53.265889 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-11-crc
2026-01-20T10:56:53.265902518+00:00 stderr F I0120 10:56:53.265880 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operators Admin Network Policy controller: took 9.73µs
2026-01-20T10:56:53.265902518+00:00 stderr F I0120 10:56:53.265897 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/redhat-marketplace-2mx7j
2026-01-20T10:56:53.265911289+00:00 stderr F I0120 10:56:53.265903 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m
2026-01-20T10:56:53.265919709+00:00 stderr F I0120 10:56:53.265911 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-etcd-operator/etcd-operator-768d5b5d86-722mg
2026-01-20T10:56:53.265919709+00:00 stderr F I0120 10:56:53.265916 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m
2026-01-20T10:56:53.265928479+00:00 stderr F I0120 10:56:53.265923 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b
2026-01-20T10:56:53.265937089+00:00 stderr F I0120 10:56:53.265929 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-9-crc
2026-01-20T10:56:53.265945180+00:00 stderr F I0120 10:56:53.265936 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-10-crc
2026-01-20T10:56:53.265953580+00:00 stderr F I0120 10:56:53.265936 30089 admin_network_policy_node.go:55] Processing sync for Node crc in Admin Network Policy controller
2026-01-20T10:56:53.265962400+00:00 stderr F I0120 10:56:53.265951 30089 admin_network_policy_node.go:58] Finished syncing Node crc Admin Network Policy controller: took 17.041µs
2026-01-20T10:56:53.265962400+00:00 stderr F I0120 10:56:53.265943 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd
2026-01-20T10:56:53.265975000+00:00 stderr F I0120 10:56:53.265967 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-service-ca/service-ca-666f99b6f-kk8kg
2026-01-20T10:56:53.265997661+00:00 stderr F I0120 10:56:53.265979 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress/router-default-5c9bf7bc58-6jctv
2026-01-20T10:56:53.265997661+00:00 stderr F I0120 10:56:53.265986 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb
2026-01-20T10:56:53.265997661+00:00 stderr F I0120 10:56:53.265994 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-additional-cni-plugins-bzj2p
2026-01-20T10:56:53.266009071+00:00 stderr F I0120 10:56:53.266000 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7
2026-01-20T10:56:53.266017881+00:00 stderr F I0120 10:56:53.266006 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-admission-controller-6c7c885997-4hbbc
2026-01-20T10:56:53.266017881+00:00 stderr F I0120 10:56:53.266012 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-12-crc
2026-01-20T10:56:53.266027302+00:00 stderr F I0120 10:56:53.266017 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-user-workload-monitoring in Admin Network Policy controller
2026-01-20T10:56:53.266027302+00:00 stderr F I0120 10:56:53.266019 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-8-crc
2026-01-20T10:56:53.266037062+00:00 stderr F I0120 10:56:53.266027 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/openshift-kube-scheduler-crc
2026-01-20T10:56:53.266037062+00:00 stderr F I0120 10:56:53.266028 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-user-workload-monitoring Admin Network Policy controller: took 13.031µs
2026-01-20T10:56:53.266037062+00:00 stderr F I0120 10:56:53.266033 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/marketplace-operator-8b455464d-nc8zc
2026-01-20T10:56:53.266047762+00:00 stderr F I0120 10:56:53.266038 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console-operator/console-operator-5dbbc74dc9-cp5cd
2026-01-20T10:56:53.266057013+00:00 stderr F I0120 10:56:53.266043 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-authentication in Admin Network Policy controller
2026-01-20T10:56:53.266057013+00:00 stderr F I0120 10:56:53.266044 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7
2026-01-20T10:56:53.266057013+00:00 stderr F I0120 10:56:53.266051 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication Admin Network Policy controller: took 7.95µs
2026-01-20T10:56:53.266105034+00:00 stderr F I0120 10:56:53.266076 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-machine-config-operator in Admin Network Policy controller
2026-01-20T10:56:53.266105034+00:00 stderr F I0120 10:56:53.266085 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-machine-config-operator Admin Network Policy controller: took 6.33µs
2026-01-20T10:56:53.266105034+00:00 stderr F I0120 10:56:53.266054 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-q88th
2026-01-20T10:56:53.266105034+00:00 stderr F I0120 10:56:53.266096 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/redhat-operators-2nxg8 in Admin Network Policy controller
2026-01-20T10:56:53.266126404+00:00 stderr F I0120 10:56:53.266103 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/network-metrics-daemon-qdfr4
2026-01-20T10:56:53.266126404+00:00 stderr F I0120 10:56:53.266112 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/installer-7-crc
2026-01-20T10:56:53.266126404+00:00 stderr F I0120 10:56:53.266111 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/redhat-operators-2nxg8 Admin Network Policy controller: took 17.931µs
2026-01-20T10:56:53.266126404+00:00 stderr F I0120 10:56:53.266122 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-daemon-zpnhg
2026-01-20T10:56:53.266136725+00:00 stderr F I0120 10:56:53.266127 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-operator/network-operator-767c585db5-zd56b in Admin Network Policy controller
2026-01-20T10:56:53.266136725+00:00 stderr F I0120 10:56:53.266132 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd
2026-01-20T10:56:53.266145295+00:00 stderr F I0120 10:56:53.266134 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-operator/network-operator-767c585db5-zd56b Admin Network Policy controller: took 6.991µs
2026-01-20T10:56:53.266161805+00:00 stderr F I0120 10:56:53.266148 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz
2026-01-20T10:56:53.266161805+00:00 stderr F I0120 10:56:53.266152 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in Admin Network Policy controller
2026-01-20T10:56:53.266161805+00:00 stderr F I0120 10:56:53.266157 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8
2026-01-20T10:56:53.266171386+00:00 stderr F I0120 10:56:53.266164 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs Admin Network Policy controller: took 13.31µs
2026-01-20T10:56:53.266178496+00:00 stderr F I0120 10:56:53.266167 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv
2026-01-20T10:56:53.266178496+00:00 stderr F I0120 10:56:53.266174 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in Admin Network Policy controller
2026-01-20T10:56:53.266185836+00:00 stderr F I0120 10:56:53.266177 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw
2026-01-20T10:56:53.266185836+00:00 stderr F I0120 10:56:53.266181 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b Admin Network Policy controller: took 6.97µs
2026-01-20T10:56:53.266194066+00:00 stderr F I0120 10:56:53.266184 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-operator/iptables-alerter-wwpnd
2026-01-20T10:56:53.266194066+00:00 stderr F I0120 10:56:53.266190 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console/console-644bb77b49-5x5xk in Admin Network Policy controller
2026-01-20T10:56:53.266207016+00:00 stderr F I0120 10:56:53.266193 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z
2026-01-20T10:56:53.266207016+00:00 stderr F I0120 10:56:53.266197 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console/console-644bb77b49-5x5xk Admin Network Policy controller: took 6.8µs
2026-01-20T10:56:53.266207016+00:00 stderr F I0120 10:56:53.266201 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns-operator/dns-operator-75f687757b-nz2xb
2026-01-20T10:56:53.266217727+00:00 stderr F I0120 10:56:53.266206 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in Admin Network Policy controller
2026-01-20T10:56:53.266217727+00:00 stderr F I0120 10:56:53.266209 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns/node-resolver-dn27q
2026-01-20T10:56:53.266217727+00:00 stderr F I0120 10:56:53.266213 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh Admin Network Policy controller: took 7.061µs
2026-01-20T10:56:53.266227537+00:00 stderr F I0120 10:56:53.266218 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-etcd/etcd-crc
2026-01-20T10:56:53.266227537+00:00 stderr F I0120 10:56:53.266223 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-server-v65wr in Admin Network Policy controller
2026-01-20T10:56:53.266236667+00:00 stderr F I0120 10:56:53.266230 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-server-v65wr Admin Network Policy controller: took 7.77µs
2026-01-20T10:56:53.266244497+00:00 stderr F I0120 10:56:53.266238 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-10-crc in Admin Network Policy controller
2026-01-20T10:56:53.266251618+00:00 stderr F I0120 10:56:53.266245 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-10-crc Admin Network Policy controller: took 6.72µs
2026-01-20T10:56:53.266258698+00:00 stderr F I0120 10:56:53.266253 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/installer-8-crc in Admin Network Policy controller
2026-01-20T10:56:53.266265748+00:00 stderr F I0120 10:56:53.266259 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/installer-8-crc Admin Network Policy controller: took 6.19µs
2026-01-20T10:56:53.266272618+00:00 stderr F I0120 10:56:53.266266 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/community-operators-6m4w2 in Admin Network Policy controller
2026-01-20T10:56:53.266279928+00:00 stderr F I0120 10:56:53.266273 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/community-operators-6m4w2 Admin Network Policy controller: took 6.45µs
2026-01-20T10:56:53.266279928+00:00 stderr F I0120 10:56:53.266225 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller cert-manager/cert-manager-758df9885c-cq6zm
2026-01-20T10:56:53.266288399+00:00 stderr F I0120 10:56:53.266280 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in Admin Network Policy controller
2026-01-20T10:56:53.266297199+00:00 stderr F I0120 10:56:53.266286 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg
2026-01-20T10:56:53.266297199+00:00 stderr F I0120 10:56:53.266288 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf Admin Network Policy controller: took 7.551µs
2026-01-20T10:56:53.266310479+00:00 stderr F I0120 10:56:53.266296 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console-operator/console-conversion-webhook-595f9969b-l6z49
2026-01-20T10:56:53.266310479+00:00 stderr F I0120 10:56:53.266303 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console/downloads-65476884b9-9wcvx
2026-01-20T10:56:53.266319619+00:00 stderr F I0120 10:56:53.266309 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7
2026-01-20T10:56:53.266319619+00:00 stderr F I0120 10:56:53.266297 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/node-ca-l92hr in Admin Network Policy controller
2026-01-20T10:56:53.266392671+00:00 stderr F I0120 10:56:53.266328 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/node-ca-l92hr Admin Network Policy controller: took 26.591µs
2026-01-20T10:56:53.266392671+00:00 stderr F I0120 10:56:53.266347 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress-canary/ingress-canary-2vhcn in Admin Network Policy controller
2026-01-20T10:56:53.266392671+00:00 stderr F I0120 10:56:53.266314 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-9-crc
2026-01-20T10:56:53.266392671+00:00 stderr F I0120 10:56:53.266352 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress-canary/ingress-canary-2vhcn Admin Network Policy controller: took 7.08µs
2026-01-20T10:56:53.266392671+00:00 stderr F I0120 10:56:53.266373 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh
2026-01-20T10:56:53.266392671+00:00 stderr F I0120 10:56:53.266375 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in Admin Network Policy controller
2026-01-20T10:56:53.266406962+00:00 stderr F I0120 10:56:53.266389 30089 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2
2026-01-20T10:56:53.266406962+00:00 stderr F I0120 10:56:53.266392 30089 factory.go:988] Added *v1.NetworkPolicy event handler 4
2026-01-20T10:56:53.266406962+00:00 stderr F I0120 10:56:53.266394 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t Admin Network Policy controller: took 14.16µs
2026-01-20T10:56:53.266416212+00:00 stderr F I0120 10:56:53.266411 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/kube-apiserver-crc in Admin Network Policy controller
2026-01-20T10:56:53.266424322+00:00 stderr F I0120 10:56:53.266417 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/kube-apiserver-crc Admin Network Policy controller: took 6.84µs
2026-01-20T10:56:53.266432412+00:00 stderr F I0120 10:56:53.266424 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/certified-operators-mpjb7 in Admin Network Policy controller
2026-01-20T10:56:53.266432412+00:00 stderr F I0120 10:56:53.266429 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/certified-operators-mpjb7 Admin Network Policy controller: took 5.1µs
2026-01-20T10:56:53.266440813+00:00 stderr F I0120 10:56:53.266435 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j in Admin Network Policy controller
2026-01-20T10:56:53.266451873+00:00 stderr F I0120 10:56:53.266442 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j Admin Network Policy controller: took 6.54µs
2026-01-20T10:56:53.266451873+00:00 stderr F I0120 10:56:53.266448 30089 admin_network_policy_pod.go:56] Processing sync for Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx in Admin Network Policy controller
2026-01-20T10:56:53.266459533+00:00 stderr F I0120 10:56:53.266453 30089 admin_network_policy_pod.go:59] Finished syncing Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx Admin Network Policy controller: took 5.57µs
2026-01-20T10:56:53.266467523+00:00 stderr F I0120 10:56:53.266459 30089 admin_network_policy_pod.go:56] Processing sync for Pod cert-manager/cert-manager-webhook-855f577f79-7bdxq in Admin Network Policy controller
2026-01-20T10:56:53.266467523+00:00 stderr F I0120 10:56:53.266464 30089 admin_network_policy_pod.go:59] Finished syncing Pod cert-manager/cert-manager-webhook-855f577f79-7bdxq Admin Network Policy controller: took 5.25µs
2026-01-20T10:56:53.266476034+00:00 stderr F I0120 10:56:53.266470 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in Admin Network Policy controller
2026-01-20T10:56:53.266484194+00:00 stderr F I0120 10:56:53.266476 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 Admin Network Policy controller: took 5.181µs
2026-01-20T10:56:53.266492994+00:00 stderr F I0120 10:56:53.266484 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in Admin Network Policy controller
2026-01-20T10:56:53.266492994+00:00 stderr F I0120 10:56:53.266485 30089 egressip.go:1052] syncStaleEgressReroutePolicy will remove stale nexthops: []
2026-01-20T10:56:53.266518675+00:00 stderr F I0120 10:56:53.266503 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc Admin Network Policy controller: took 19.56µs
2026-01-20T10:56:53.266526915+00:00 stderr F I0120 10:56:53.266515 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-oauth-apiserver in Admin Network Policy controller
2026-01-20T10:56:53.266534525+00:00 stderr F I0120 10:56:53.266524 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-oauth-apiserver Admin Network Policy controller: took 11.42µs
2026-01-20T10:56:53.266542605+00:00 stderr F I0120 10:56:53.266536 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace cert-manager-operator in Admin Network Policy controller
2026-01-20T10:56:53.266550656+00:00 stderr F I0120 10:56:53.266540 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace cert-manager-operator Admin Network Policy controller: took 4.21µs
2026-01-20T10:56:53.266560626+00:00 stderr F I0120 10:56:53.266554 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-node-identity in Admin Network Policy controller
2026-01-20T10:56:53.266568536+00:00 stderr F I0120 10:56:53.266559 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-node-identity Admin Network Policy controller: took 12.401µs
2026-01-20T10:56:53.266568536+00:00 stderr F I0120 10:56:53.266565 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-nutanix-infra in Admin Network Policy controller
2026-01-20T10:56:53.266576006+00:00 stderr F I0120 10:56:53.266569 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-nutanix-infra Admin Network Policy controller: took 4.22µs
2026-01-20T10:56:53.266582626+00:00 stderr F I0120 10:56:53.266575 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress in Admin Network Policy controller
2026-01-20T10:56:53.266582626+00:00 stderr F I0120 10:56:53.266579 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ingress Admin Network Policy controller: took 4.15µs
2026-01-20T10:56:53.266593037+00:00 stderr F I0120 10:56:53.266585 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-storage-version-migrator-operator in Admin Network Policy controller
2026-01-20T10:56:53.266593037+00:00 stderr F I0120 10:56:53.266590 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-storage-version-migrator-operator Admin Network Policy controller: took 4.14µs
2026-01-20T10:56:53.266599917+00:00 stderr F I0120 10:56:53.266595 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace hostpath-provisioner in Admin Network Policy controller
2026-01-20T10:56:53.266606457+00:00 stderr F I0120 10:56:53.266600 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace hostpath-provisioner Admin Network Policy controller: took 4.06µs
2026-01-20T10:56:53.266612987+00:00 stderr F I0120 10:56:53.266605 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config in Admin Network Policy controller
2026-01-20T10:56:53.266619517+00:00 stderr F I0120 10:56:53.266612 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 3.88µs
2026-01-20T10:56:53.266626068+00:00 stderr F I0120 10:56:53.266618 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-infra in Admin Network Policy controller
2026-01-20T10:56:53.266626068+00:00 stderr F I0120 10:56:53.266622 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-infra Admin Network Policy controller: took 3.81µs
2026-01-20T10:56:53.266632838+00:00 stderr F I0120 10:56:53.266627 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-node in Admin Network Policy controller
2026-01-20T10:56:53.266639348+00:00 stderr F I0120 10:56:53.266632 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-node Admin Network Policy controller: took 4.24µs
2026-01-20T10:56:53.266645898+00:00 stderr F I0120 10:56:53.266639 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace default in Admin Network Policy controller
2026-01-20T10:56:53.266645898+00:00 stderr F I0120 10:56:53.266642 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace default Admin Network Policy controller: took 3.41µs
2026-01-20T10:56:53.266674819+00:00 stderr F I0120 10:56:53.266654 30089 address_set.go:304] New(b27c19cc-d9d0-4d57-a5a8-06fcff438e8a/default-network-controller:EgressIP:egressip-served-pods:v4/a4548040316634674295) with []
2026-01-20T10:56:53.266681809+00:00 stderr F I0120 10:56:53.266675 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-version in Admin Network Policy controller
2026-01-20T10:56:53.266688369+00:00 stderr F I0120 10:56:53.266681 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-version Admin Network Policy controller: took 32.941µs
2026-01-20T10:56:53.266706210+00:00 stderr F I0120 10:56:53.266693 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-multus in Admin Network Policy controller
2026-01-20T10:56:53.266706210+00:00 stderr F I0120 10:56:53.266702 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-multus Admin Network Policy controller: took 9.671µs
2026-01-20T10:56:53.266726440+00:00 stderr F I0120 10:56:53.266711 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in Admin Network Policy controller
2026-01-20T10:56:53.266726440+00:00 stderr F I0120 10:56:53.266705 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b27c19cc-d9d0-4d57-a5a8-06fcff438e8a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.266736501+00:00 stderr F I0120 10:56:53.266723 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm Admin Network Policy controller: took 14.98µs
2026-01-20T10:56:53.266743641+00:00 stderr F I0120 10:56:53.266726 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b27c19cc-d9d0-4d57-a5a8-06fcff438e8a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.266797692+00:00 stderr F I0120 10:56:53.266766 30089 ovs.go:162] Exec(23): stdout: ""
2026-01-20T10:56:53.266797692+00:00 stderr F I0120 10:56:53.266788 30089 ovs.go:163] Exec(23): stderr: "ovs-vsctl: no port named br-ex\n"
2026-01-20T10:56:53.266805312+00:00 stderr F I0120 10:56:53.266796 30089 ovs.go:165] Exec(23): err: exit status 1
2026-01-20T10:56:53.266826273+00:00 stderr F I0120 10:56:53.266738 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/installer-13-crc in Admin Network Policy controller
2026-01-20T10:56:53.266826273+00:00 stderr F I0120 10:56:53.266821 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-13-crc Admin Network Policy controller: took 82.762µs
2026-01-20T10:56:53.266834844+00:00 stderr F I0120 10:56:53.266830 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/kube-controller-manager-crc in Admin Network Policy controller
2026-01-20T10:56:53.266841404+00:00 stderr F I0120 10:56:53.266835 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/kube-controller-manager-crc Admin Network Policy controller: took 5.861µs
2026-01-20T10:56:53.266866665+00:00 stderr F I0120 10:56:53.266843 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller
2026-01-20T10:56:53.266866665+00:00 stderr F I0120 10:56:53.266859 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 16.201µs
2026-01-20T10:56:53.266888756+00:00 stderr F I0120 10:56:53.266872 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-etcd in Admin Network Policy controller
2026-01-20T10:56:53.266888756+00:00 stderr F I0120 10:56:53.266882 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-etcd Admin Network Policy controller: took 9.99µs
2026-01-20T10:56:53.266896456+00:00 stderr F I0120 10:56:53.266889 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace cert-manager in Admin Network Policy controller
2026-01-20T10:56:53.266903386+00:00 stderr F I0120 10:56:53.266894 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace cert-manager Admin Network Policy controller: took 4.95µs
2026-01-20T10:56:53.266910546+00:00 stderr F I0120 10:56:53.266902 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-samples-operator in Admin Network Policy controller
2026-01-20T10:56:53.266910546+00:00 stderr F I0120 10:56:53.266907 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-samples-operator Admin Network Policy controller: took 4.73µs
2026-01-20T10:56:53.266919456+00:00 stderr F I0120 10:56:53.266913 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-storage-operator in Admin Network Policy controller
2026-01-20T10:56:53.266926347+00:00 stderr F I0120 10:56:53.266919 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 5.45µs
2026-01-20T10:56:53.266936547+00:00 stderr F I0120 10:56:53.266926 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openstack-operators in Admin Network Policy controller
2026-01-20T10:56:53.266936547+00:00 stderr F I0120 10:56:53.266931 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openstack-operators Admin Network Policy controller: took 4.84µs
2026-01-20T10:56:53.266943747+00:00 stderr F I0120 10:56:53.266937 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-apiserver in Admin Network Policy controller
2026-01-20T10:56:53.266950637+00:00 stderr F I0120 10:56:53.266943 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace
openshift-apiserver Admin Network Policy controller: took 4.69µs 2026-01-20T10:56:53.266950637+00:00 stderr F I0120 10:56:53.266946 30089 helper_linux.go:92] Provided gateway interface "br-ex", found as index: 11 2026-01-20T10:56:53.266957827+00:00 stderr F I0120 10:56:53.266950 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-controller-manager in Admin Network Policy controller 2026-01-20T10:56:53.266964728+00:00 stderr F I0120 10:56:53.266956 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-controller-manager Admin Network Policy controller: took 5.89µs 2026-01-20T10:56:53.266972308+00:00 stderr F I0120 10:56:53.266963 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller 2026-01-20T10:56:53.266979608+00:00 stderr F I0120 10:56:53.266969 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 5.49µs 2026-01-20T10:56:53.266979608+00:00 stderr F I0120 10:56:53.266976 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-dns-operator in Admin Network Policy controller 2026-01-20T10:56:53.266988508+00:00 stderr F I0120 10:56:53.266983 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-dns-operator Admin Network Policy controller: took 4.59µs 2026-01-20T10:56:53.266995398+00:00 stderr F I0120 10:56:53.266990 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-storage-version-migrator in Admin Network Policy controller 2026-01-20T10:56:53.267002279+00:00 stderr F I0120 10:56:53.266995 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-storage-version-migrator Admin Network Policy controller: took 4.68µs 2026-01-20T10:56:53.267009929+00:00 stderr F I0120 10:56:53.267001 30089 
admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-marketplace in Admin Network Policy controller 2026-01-20T10:56:53.267009929+00:00 stderr F I0120 10:56:53.267006 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-marketplace Admin Network Policy controller: took 4.65µs 2026-01-20T10:56:53.267017319+00:00 stderr F I0120 10:56:53.267012 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-public in Admin Network Policy controller 2026-01-20T10:56:53.267024259+00:00 stderr F I0120 10:56:53.267017 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-public Admin Network Policy controller: took 4.85µs 2026-01-20T10:56:53.267031889+00:00 stderr F I0120 10:56:53.267024 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-monitoring in Admin Network Policy controller 2026-01-20T10:56:53.267038790+00:00 stderr F I0120 10:56:53.267029 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-monitoring Admin Network Policy controller: took 4.77µs 2026-01-20T10:56:53.267150803+00:00 stderr F I0120 10:56:53.267123 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-controller-manager-operator in Admin Network Policy controller 2026-01-20T10:56:53.267150803+00:00 stderr F I0120 10:56:53.267137 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-controller-manager-operator Admin Network Policy controller: took 13.76µs 2026-01-20T10:56:53.267150803+00:00 stderr F I0120 10:56:53.267139 30089 helper_linux.go:117] Found default gateway interface br-ex 38.102.83.1 2026-01-20T10:56:53.267150803+00:00 stderr F I0120 10:56:53.267144 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-machine-api in Admin Network Policy controller 2026-01-20T10:56:53.267168203+00:00 stderr F I0120 10:56:53.267151 30089 
admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-machine-api Admin Network Policy controller: took 6.601µs 2026-01-20T10:56:53.267168203+00:00 stderr F I0120 10:56:53.267158 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-diagnostics in Admin Network Policy controller 2026-01-20T10:56:53.267168203+00:00 stderr F I0120 10:56:53.267163 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-diagnostics Admin Network Policy controller: took 5µs 2026-01-20T10:56:53.267176723+00:00 stderr F I0120 10:56:53.267169 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller 2026-01-20T10:56:53.267184463+00:00 stderr F I0120 10:56:53.267173 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 4.75µs 2026-01-20T10:56:53.267184463+00:00 stderr F I0120 10:56:53.267181 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console-operator in Admin Network Policy controller 2026-01-20T10:56:53.267191974+00:00 stderr F I0120 10:56:53.267186 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console-operator Admin Network Policy controller: took 4.81µs 2026-01-20T10:56:53.267199034+00:00 stderr F I0120 10:56:53.267193 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console-user-settings in Admin Network Policy controller 2026-01-20T10:56:53.267206074+00:00 stderr F I0120 10:56:53.267198 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console-user-settings Admin Network Policy controller: took 4.62µs 2026-01-20T10:56:53.267213694+00:00 stderr F I0120 10:56:53.267205 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-controller-manager in Admin 
Network Policy controller 2026-01-20T10:56:53.267221284+00:00 stderr F I0120 10:56:53.267210 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-controller-manager Admin Network Policy controller: took 5.17µs 2026-01-20T10:56:53.267228255+00:00 stderr F I0120 10:56:53.267214 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-scheduler-operator 2026-01-20T10:56:53.267264286+00:00 stderr F I0120 10:56:53.267244 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-scheduler-operator took: 17.46µs 2026-01-20T10:56:53.267264286+00:00 stderr F I0120 10:56:53.267258 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-version 2026-01-20T10:56:53.267264286+00:00 stderr F I0120 10:56:53.267260 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-service-ca 2026-01-20T10:56:53.267272466+00:00 stderr F I0120 10:56:53.267266 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-version took: 2.58µs 2026-01-20T10:56:53.267279476+00:00 stderr F I0120 10:56:53.267272 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-storage-version-migrator 2026-01-20T10:56:53.267286476+00:00 stderr F I0120 10:56:53.267281 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-storage-version-migrator took: 3.91µs 2026-01-20T10:56:53.267296536+00:00 stderr F I0120 10:56:53.267285 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-operator 2026-01-20T10:56:53.267296536+00:00 stderr F I0120 10:56:53.267290 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-service-ca took: 20.87µs 2026-01-20T10:56:53.267303987+00:00 stderr F I0120 10:56:53.267248 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-apiserver-operator 2026-01-20T10:56:53.267303987+00:00 stderr F I0120 
10:56:53.267299 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-multus 2026-01-20T10:56:53.267329467+00:00 stderr F I0120 10:56:53.267311 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-apiserver-operator took: 9.61µs 2026-01-20T10:56:53.267329467+00:00 stderr F I0120 10:56:53.267318 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openstack 2026-01-20T10:56:53.267329467+00:00 stderr F I0120 10:56:53.267325 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress 2026-01-20T10:56:53.267355758+00:00 stderr F I0120 10:56:53.267337 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openstack took: 9.84µs 2026-01-20T10:56:53.267355758+00:00 stderr F I0120 10:56:53.267348 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cloud-platform-infra 2026-01-20T10:56:53.267364608+00:00 stderr F I0120 10:56:53.267359 30089 gateway_init.go:370] Preparing Local Gateway 2026-01-20T10:56:53.267371348+00:00 stderr F I0120 10:56:53.267363 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-authentication 2026-01-20T10:56:53.267371348+00:00 stderr F I0120 10:56:53.267365 30089 gateway_localnet.go:26] Creating new local gateway 2026-01-20T10:56:53.267378609+00:00 stderr F I0120 10:56:53.267370 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cloud-platform-infra took: 9.38µs 2026-01-20T10:56:53.267385749+00:00 stderr F I0120 10:56:53.267378 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-machine-config-operator 2026-01-20T10:56:53.267392419+00:00 stderr F I0120 10:56:53.267384 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-authentication took: 12.031µs 2026-01-20T10:56:53.267392419+00:00 stderr F I0120 10:56:53.267386 30089 iptables.go:108] Creating table: filter chain: FORWARD 2026-01-20T10:56:53.267399829+00:00 stderr 
F I0120 10:56:53.267390 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-machine-config-operator took: 3.13µs 2026-01-20T10:56:53.267399829+00:00 stderr F I0120 10:56:53.267393 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-oauth-apiserver 2026-01-20T10:56:53.267407839+00:00 stderr F I0120 10:56:53.267398 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config-operator 2026-01-20T10:56:53.267407839+00:00 stderr F I0120 10:56:53.267403 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-oauth-apiserver took: 2.4µs 2026-01-20T10:56:53.267416670+00:00 stderr F I0120 10:56:53.267409 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config-operator took: 2.21µs 2026-01-20T10:56:53.267423660+00:00 stderr F I0120 10:56:53.267414 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-node 2026-01-20T10:56:53.267423660+00:00 stderr F I0120 10:56:53.267418 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console 2026-01-20T10:56:53.267430930+00:00 stderr F I0120 10:56:53.267423 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-node took: 1.42µs 2026-01-20T10:56:53.267438080+00:00 stderr F I0120 10:56:53.267428 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console took: 2.39µs 2026-01-20T10:56:53.267438080+00:00 stderr F I0120 10:56:53.267431 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.267454061+00:00 stderr F I0120 10:56:53.267442 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-service-ca-operator 2026-01-20T10:56:53.267454061+00:00 stderr F I0120 10:56:53.267293 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-operator took: 2.54µs 2026-01-20T10:56:53.267461591+00:00 stderr F I0120 
10:56:53.267352 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-user-workload-monitoring 2026-01-20T10:56:53.267468871+00:00 stderr F I0120 10:56:53.267459 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-service-ca-operator took: 7.85µs 2026-01-20T10:56:53.267468871+00:00 stderr F I0120 10:56:53.267462 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-user-workload-monitoring took: 2.49µs 2026-01-20T10:56:53.267476251+00:00 stderr F I0120 10:56:53.267466 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-node-identity 2026-01-20T10:56:53.267476251+00:00 stderr F I0120 10:56:53.267469 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-monitoring 2026-01-20T10:56:53.267483531+00:00 stderr F I0120 10:56:53.267476 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-node-identity took: 2.63µs 2026-01-20T10:56:53.267490592+00:00 stderr F I0120 10:56:53.267481 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-monitoring took: 3.52µs 2026-01-20T10:56:53.267490592+00:00 stderr F I0120 10:56:53.267444 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-operator-lifecycle-manager took: 3.4µs 2026-01-20T10:56:53.267505802+00:00 stderr F I0120 10:56:53.267490 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-diagnostics 2026-01-20T10:56:53.267505802+00:00 stderr F I0120 10:56:53.267496 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-controller-manager-operator 2026-01-20T10:56:53.267505802+00:00 stderr F I0120 10:56:53.267501 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-diagnostics took: 3.51µs 2026-01-20T10:56:53.267515372+00:00 stderr F I0120 10:56:53.267508 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ovirt-infra 
2026-01-20T10:56:53.267515372+00:00 stderr F I0120 10:56:53.267511 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-controller-manager-operator took: 7.55µs 2026-01-20T10:56:53.267522992+00:00 stderr F I0120 10:56:53.267518 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-operators 2026-01-20T10:56:53.267529703+00:00 stderr F I0120 10:56:53.267483 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace cert-manager 2026-01-20T10:56:53.267529703+00:00 stderr F I0120 10:56:53.267522 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-image-registry 2026-01-20T10:56:53.267538613+00:00 stderr F I0120 10:56:53.267532 30089 obj_retry.go:541] Creating *factory.egressIPNamespace cert-manager took: 3.66µs 2026-01-20T10:56:53.267546223+00:00 stderr F I0120 10:56:53.267218 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-machine-approver in Admin Network Policy controller 2026-01-20T10:56:53.267553283+00:00 stderr F I0120 10:56:53.267542 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-etcd-operator 2026-01-20T10:56:53.267561764+00:00 stderr F I0120 10:56:53.267547 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-host-network 2026-01-20T10:56:53.267561764+00:00 stderr F I0120 10:56:53.267554 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-etcd-operator took: 2.66µs 2026-01-20T10:56:53.267561764+00:00 stderr F I0120 10:56:53.267556 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-image-registry took: 21.61µs 2026-01-20T10:56:53.267574314+00:00 stderr F I0120 10:56:53.267562 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config-managed 2026-01-20T10:56:53.267574314+00:00 stderr F I0120 10:56:53.267561 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace 
openshift-kube-scheduler 2026-01-20T10:56:53.267574314+00:00 stderr F I0120 10:56:53.267563 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-apiserver 2026-01-20T10:56:53.267583884+00:00 stderr F I0120 10:56:53.267572 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-host-network took: 11.461µs 2026-01-20T10:56:53.267583884+00:00 stderr F I0120 10:56:53.267311 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-multus took: 3.9µs 2026-01-20T10:56:53.267583884+00:00 stderr F I0120 10:56:53.267578 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-system 2026-01-20T10:56:53.267593434+00:00 stderr F I0120 10:56:53.267584 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config 2026-01-20T10:56:53.267593434+00:00 stderr F I0120 10:56:53.267585 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-apiserver took: 8.5µs 2026-01-20T10:56:53.267602325+00:00 stderr F I0120 10:56:53.267590 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-scheduler took: 15.52µs 2026-01-20T10:56:53.267602325+00:00 stderr F I0120 10:56:53.267594 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config took: 3.09µs 2026-01-20T10:56:53.267602325+00:00 stderr F I0120 10:56:53.267338 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress took: 4.321µs 2026-01-20T10:56:53.267611755+00:00 stderr F I0120 10:56:53.267597 30089 obj_retry.go:541] Creating *factory.egressIPNamespace kube-system took: 9.13µs 2026-01-20T10:56:53.267611755+00:00 stderr F I0120 10:56:53.267602 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-marketplace 2026-01-20T10:56:53.267620415+00:00 stderr F I0120 10:56:53.267608 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace cert-manager-operator 2026-01-20T10:56:53.267620415+00:00 stderr F I0120 
10:56:53.267613 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-marketplace took: 2.88µs 2026-01-20T10:56:53.267629395+00:00 stderr F I0120 10:56:53.267609 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-dns 2026-01-20T10:56:53.267629395+00:00 stderr F I0120 10:56:53.267620 30089 obj_retry.go:541] Creating *factory.egressIPNamespace cert-manager-operator took: 3.1µs 2026-01-20T10:56:53.267629395+00:00 stderr F I0120 10:56:53.267573 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config-managed took: 2.78µs 2026-01-20T10:56:53.267638796+00:00 stderr F I0120 10:56:53.267629 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-vsphere-infra 2026-01-20T10:56:53.267638796+00:00 stderr F I0120 10:56:53.267620 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console-operator 2026-01-20T10:56:53.267647216+00:00 stderr F I0120 10:56:53.267638 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-vsphere-infra took: 2.72µs 2026-01-20T10:56:53.267647216+00:00 stderr F I0120 10:56:53.267644 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-authentication-operator 2026-01-20T10:56:53.267655706+00:00 stderr F I0120 10:56:53.267645 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console-operator took: 3.01µs 2026-01-20T10:56:53.267655706+00:00 stderr F I0120 10:56:53.267647 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-dns took: 17.481µs 2026-01-20T10:56:53.267655706+00:00 stderr F I0120 10:56:53.267652 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-authentication-operator took: 3.89µs 2026-01-20T10:56:53.267668586+00:00 stderr F I0120 10:56:53.267654 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-apiserver-operator 2026-01-20T10:56:53.267668586+00:00 stderr F I0120 
10:56:53.267658 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-infra 2026-01-20T10:56:53.267677137+00:00 stderr F I0120 10:56:53.267545 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-machine-approver Admin Network Policy controller: took 326.239µs 2026-01-20T10:56:53.267677137+00:00 stderr F I0120 10:56:53.267667 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-infra took: 1.98µs 2026-01-20T10:56:53.267685767+00:00 stderr F I0120 10:56:53.267677 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-storage-operator 2026-01-20T10:56:53.267685767+00:00 stderr F I0120 10:56:53.267680 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console in Admin Network Policy controller 2026-01-20T10:56:53.267694417+00:00 stderr F I0120 10:56:53.267686 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console Admin Network Policy controller: took 6.07µs 2026-01-20T10:56:53.267694417+00:00 stderr F I0120 10:56:53.267687 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-storage-operator took: 3.33µs 2026-01-20T10:56:53.267703377+00:00 stderr F I0120 10:56:53.267694 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-etcd-operator in Admin Network Policy controller 2026-01-20T10:56:53.267703377+00:00 stderr F I0120 10:56:53.267694 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-apiserver-operator took: 29.631µs 2026-01-20T10:56:53.267712278+00:00 stderr F I0120 10:56:53.267518 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ovirt-infra took: 2.79µs 2026-01-20T10:56:53.267712278+00:00 stderr F I0120 10:56:53.267592 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace hostpath-provisioner 2026-01-20T10:56:53.267811330+00:00 stderr F I0120 10:56:53.267534 
30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kni-infra 2026-01-20T10:56:53.267904613+00:00 stderr F I0120 10:56:53.267882 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kni-infra took: 11.48µs 2026-01-20T10:56:53.267904613+00:00 stderr F I0120 10:56:53.267897 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace default 2026-01-20T10:56:53.267912813+00:00 stderr F I0120 10:56:53.267907 30089 obj_retry.go:541] Creating *factory.egressIPNamespace default took: 4.06µs 2026-01-20T10:56:53.267919823+00:00 stderr F I0120 10:56:53.267913 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-dns-operator 2026-01-20T10:56:53.267926843+00:00 stderr F I0120 10:56:53.267921 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-dns-operator took: 2.11µs 2026-01-20T10:56:53.267933893+00:00 stderr F I0120 10:56:53.267925 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress-canary 2026-01-20T10:56:53.267941494+00:00 stderr F I0120 10:56:53.267932 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress-canary took: 1.81µs 2026-01-20T10:56:53.267941494+00:00 stderr F I0120 10:56:53.267567 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-storage-version-migrator-operator 2026-01-20T10:56:53.267967834+00:00 stderr F I0120 10:56:53.267950 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-storage-version-migrator-operator took: 4.17µs 2026-01-20T10:56:53.267967834+00:00 stderr F I0120 10:56:53.267961 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openstack-operators 2026-01-20T10:56:53.267982425+00:00 stderr F I0120 10:56:53.267969 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openstack-operators took: 3.41µs 2026-01-20T10:56:53.267982425+00:00 stderr F I0120 10:56:53.267975 30089 
obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-machine-approver 2026-01-20T10:56:53.267989805+00:00 stderr F I0120 10:56:53.267981 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-machine-approver took: 2.36µs 2026-01-20T10:56:53.267989805+00:00 stderr F I0120 10:56:53.267886 30089 obj_retry.go:541] Creating *factory.egressIPNamespace hostpath-provisioner took: 4.67µs 2026-01-20T10:56:53.267997205+00:00 stderr F I0120 10:56:53.267991 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-controller-manager 2026-01-20T10:56:53.268004315+00:00 stderr F I0120 10:56:53.267999 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-controller-manager took: 2.79µs 2026-01-20T10:56:53.268011165+00:00 stderr F I0120 10:56:53.268003 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-machine-api 2026-01-20T10:56:53.268018226+00:00 stderr F I0120 10:56:53.268010 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-machine-api took: 2.53µs 2026-01-20T10:56:53.268018226+00:00 stderr F I0120 10:56:53.268015 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift 2026-01-20T10:56:53.268027076+00:00 stderr F I0120 10:56:53.268021 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift took: 980ns 2026-01-20T10:56:53.268027076+00:00 stderr F I0120 10:56:53.267598 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-nutanix-infra 2026-01-20T10:56:53.268035736+00:00 stderr F I0120 10:56:53.268031 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-nutanix-infra took: 1.98µs 2026-01-20T10:56:53.268042736+00:00 stderr F I0120 10:56:53.268035 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-samples-operator 2026-01-20T10:56:53.268049746+00:00 stderr F I0120 10:56:53.268041 30089 
obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-samples-operator took: 1.87µs 2026-01-20T10:56:53.268049746+00:00 stderr F I0120 10:56:53.268046 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-node-lease 2026-01-20T10:56:53.268057097+00:00 stderr F I0120 10:56:53.268052 30089 obj_retry.go:541] Creating *factory.egressIPNamespace kube-node-lease took: 1.431µs 2026-01-20T10:56:53.268083027+00:00 stderr F I0120 10:56:53.268056 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-openstack-infra 2026-01-20T10:56:53.268083027+00:00 stderr F I0120 10:56:53.268076 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-openstack-infra took: 2.34µs 2026-01-20T10:56:53.268096748+00:00 stderr F I0120 10:56:53.267601 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-etcd 2026-01-20T10:56:53.268096748+00:00 stderr F I0120 10:56:53.268089 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-etcd took: 2.9µs 2026-01-20T10:56:53.268105448+00:00 stderr F I0120 10:56:53.268094 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-controller-manager-operator 2026-01-20T10:56:53.268105448+00:00 stderr F I0120 10:56:53.268100 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-controller-manager-operator took: 1.59µs 2026-01-20T10:56:53.268113828+00:00 stderr F I0120 10:56:53.268105 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-route-controller-manager 2026-01-20T10:56:53.268113828+00:00 stderr F I0120 10:56:53.268111 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-route-controller-manager took: 1.67µs 2026-01-20T10:56:53.268124588+00:00 stderr F I0120 10:56:53.267621 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-public 2026-01-20T10:56:53.268124588+00:00 stderr F I0120 10:56:53.268121 
30089 obj_retry.go:541] Creating *factory.egressIPNamespace kube-public took: 2.38µs 2026-01-20T10:56:53.268131959+00:00 stderr F I0120 10:56:53.268126 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console-user-settings 2026-01-20T10:56:53.268138989+00:00 stderr F I0120 10:56:53.268132 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console-user-settings took: 1.15µs 2026-01-20T10:56:53.268138989+00:00 stderr F I0120 10:56:53.268136 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ovn-kubernetes 2026-01-20T10:56:53.268148309+00:00 stderr F I0120 10:56:53.268142 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ovn-kubernetes took: 2.29µs 2026-01-20T10:56:53.268155159+00:00 stderr F I0120 10:56:53.267695 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cloud-network-config-controller 2026-01-20T10:56:53.268162159+00:00 stderr F I0120 10:56:53.268153 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cloud-network-config-controller took: 2.21µs 2026-01-20T10:56:53.268162159+00:00 stderr F I0120 10:56:53.268158 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress-operator 2026-01-20T10:56:53.268170300+00:00 stderr F I0120 10:56:53.268163 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress-operator took: 1.32µs 2026-01-20T10:56:53.268170300+00:00 stderr F I0120 10:56:53.267705 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-etcd-operator Admin Network Policy controller: took 10.93µs 2026-01-20T10:56:53.268197960+00:00 stderr F I0120 10:56:53.268178 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress-canary in Admin Network Policy controller 2026-01-20T10:56:53.268197960+00:00 stderr F I0120 10:56:53.268187 30089 admin_network_policy_namespace.go:56] Finished syncing 
Namespace openshift-ingress-canary Admin Network Policy controller: took 9.19µs 2026-01-20T10:56:53.268197960+00:00 stderr F I0120 10:56:53.268194 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-operator in Admin Network Policy controller 2026-01-20T10:56:53.268207151+00:00 stderr F I0120 10:56:53.268198 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-operator Admin Network Policy controller: took 4.87µs 2026-01-20T10:56:53.268207151+00:00 stderr F I0120 10:56:53.267526 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-operators took: 1.621µs 2026-01-20T10:56:53.268237041+00:00 stderr F I0120 10:56:53.268217 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-apiserver 2026-01-20T10:56:53.268244482+00:00 stderr F I0120 10:56:53.268238 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-apiserver took: 5.99µs 2026-01-20T10:56:53.268251562+00:00 stderr F I0120 10:56:53.268245 30089 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-controller-manager 2026-01-20T10:56:53.268260162+00:00 stderr F I0120 10:56:53.268253 30089 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-controller-manager took: 2.04µs 2026-01-20T10:56:53.268283053+00:00 stderr F I0120 10:56:53.268266 30089 factory.go:988] Added *v1.Namespace event handler 5 2026-01-20T10:56:53.268289993+00:00 stderr F I0120 10:56:53.268203 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-node-lease in Admin Network Policy controller 2026-01-20T10:56:53.268298103+00:00 stderr F I0120 10:56:53.268292 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-node-lease Admin Network Policy controller: took 86.972µs 2026-01-20T10:56:53.268307163+00:00 stderr F I0120 10:56:53.268299 30089 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-kube-controller-manager/revision-pruner-11-crc in Admin Network Policy controller 2026-01-20T10:56:53.268307163+00:00 stderr F I0120 10:56:53.268305 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-11-crc Admin Network Policy controller: took 5.87µs 2026-01-20T10:56:53.268326484+00:00 stderr F I0120 10:56:53.268313 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in Admin Network Policy controller 2026-01-20T10:56:53.268326484+00:00 stderr F I0120 10:56:53.268320 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv Admin Network Policy controller: took 7.251µs 2026-01-20T10:56:53.268333504+00:00 stderr F I0120 10:56:53.268325 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-diagnostics/network-check-target-v54bt in Admin Network Policy controller 2026-01-20T10:56:53.268333504+00:00 stderr F I0120 10:56:53.268329 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-diagnostics/network-check-target-v54bt Admin Network Policy controller: took 4.19µs 2026-01-20T10:56:53.268340384+00:00 stderr F I0120 10:56:53.268333 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg in Admin Network Policy controller 2026-01-20T10:56:53.268346974+00:00 stderr F I0120 10:56:53.268340 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg Admin Network Policy controller: took 3.96µs 2026-01-20T10:56:53.268346974+00:00 stderr F I0120 10:56:53.268344 30089 admin_network_policy_pod.go:56] Processing sync for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in Admin Network Policy controller 2026-01-20T10:56:53.268353795+00:00 stderr F I0120 10:56:53.268348 30089 admin_network_policy_pod.go:59] Finished syncing Pod 
hostpath-provisioner/csi-hostpathplugin-hvm8g Admin Network Policy controller: took 3.88µs 2026-01-20T10:56:53.268360355+00:00 stderr F I0120 10:56:53.268353 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in Admin Network Policy controller 2026-01-20T10:56:53.268360355+00:00 stderr F I0120 10:56:53.268356 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 Admin Network Policy controller: took 4.171µs 2026-01-20T10:56:53.268367595+00:00 stderr F I0120 10:56:53.268362 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in Admin Network Policy controller 2026-01-20T10:56:53.268367595+00:00 stderr F I0120 10:56:53.268361 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2026-01-20T10:56:53.268374435+00:00 stderr F I0120 10:56:53.268366 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc Admin Network Policy controller: took 4.8µs 2026-01-20T10:56:53.268374435+00:00 stderr F I0120 10:56:53.268372 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-10-retry-1-crc in Admin Network Policy controller 2026-01-20T10:56:53.268381245+00:00 stderr F I0120 10:56:53.268376 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-10-retry-1-crc Admin Network Policy controller: took 4.33µs 2026-01-20T10:56:53.268388195+00:00 stderr F I0120 10:56:53.268382 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz took: 10.17µs 2026-01-20T10:56:53.268397566+00:00 stderr F I0120 10:56:53.268383 30089 admin_network_policy_pod.go:56] 
Processing sync for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in Admin Network Policy controller 2026-01-20T10:56:53.268397566+00:00 stderr F I0120 10:56:53.268394 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf Admin Network Policy controller: took 11.151µs 2026-01-20T10:56:53.268428797+00:00 stderr F I0120 10:56:53.268405 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/installer-7-crc 2026-01-20T10:56:53.268428797+00:00 stderr F I0120 10:56:53.268410 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-daemon-zpnhg 2026-01-20T10:56:53.268428797+00:00 stderr F I0120 10:56:53.268411 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2026-01-20T10:56:53.268439917+00:00 stderr F I0120 10:56:53.268428 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2026-01-20T10:56:53.268439917+00:00 stderr F I0120 10:56:53.268426 30089 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-7-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.268449227+00:00 stderr F I0120 10:56:53.268441 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-daemon-zpnhg took: 12.881µs 2026-01-20T10:56:53.268457057+00:00 stderr F I0120 10:56:53.268444 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 took: 8.76µs 2026-01-20T10:56:53.268457057+00:00 stderr F I0120 10:56:53.268411 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/image-registry-75b7bb6564-ln84v in Admin Network Policy controller 2026-01-20T10:56:53.268465577+00:00 stderr F I0120 10:56:53.268444 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-operator/iptables-alerter-wwpnd 2026-01-20T10:56:53.268465577+00:00 stderr F I0120 10:56:53.268460 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/image-registry-75b7bb6564-ln84v Admin Network Policy controller: took 48.411µs 2026-01-20T10:56:53.268476098+00:00 stderr F I0120 10:56:53.268451 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd took: 21.301µs 2026-01-20T10:56:53.268484558+00:00 stderr F I0120 10:56:53.268471 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in Admin Network Policy controller 2026-01-20T10:56:53.268484558+00:00 stderr F I0120 10:56:53.268479 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-operator/iptables-alerter-wwpnd took: 14.251µs 2026-01-20T10:56:53.268492918+00:00 stderr F I0120 10:56:53.268393 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2026-01-20T10:56:53.268492918+00:00 stderr F I0120 10:56:53.268485 
30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb Admin Network Policy controller: took 12.03µs 2026-01-20T10:56:53.268503358+00:00 stderr F I0120 10:56:53.268498 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in Admin Network Policy controller 2026-01-20T10:56:53.268511649+00:00 stderr F I0120 10:56:53.268505 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw took: 11.64µs 2026-01-20T10:56:53.268546400+00:00 stderr F I0120 10:56:53.268526 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2026-01-20T10:56:53.268546400+00:00 stderr F I0120 10:56:53.268533 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2026-01-20T10:56:53.268546400+00:00 stderr F I0120 10:56:53.268539 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2026-01-20T10:56:53.268556000+00:00 stderr F I0120 10:56:53.268540 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns/node-resolver-dn27q 2026-01-20T10:56:53.268556000+00:00 stderr F I0120 10:56:53.268551 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2026-01-20T10:56:53.268564130+00:00 stderr F I0120 10:56:53.268551 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv took: 6.211µs 2026-01-20T10:56:53.268564130+00:00 stderr F I0120 10:56:53.268553 30089 obj_retry.go:541] Creating 
*factory.egressIPPod openshift-dns-operator/dns-operator-75f687757b-nz2xb took: 3.58µs 2026-01-20T10:56:53.268564130+00:00 stderr F I0120 10:56:53.268560 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-dns/node-resolver-dn27q took: 6.78µs 2026-01-20T10:56:53.268571860+00:00 stderr F I0120 10:56:53.268563 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console/downloads-65476884b9-9wcvx 2026-01-20T10:56:53.268578440+00:00 stderr F I0120 10:56:53.268544 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z took: 6.19µs 2026-01-20T10:56:53.268585241+00:00 stderr F I0120 10:56:53.268577 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-console/downloads-65476884b9-9wcvx took: 2.8µs 2026-01-20T10:56:53.268585241+00:00 stderr F I0120 10:56:53.268579 30089 obj_retry.go:502] Add event received for *factory.egressIPPod cert-manager/cert-manager-758df9885c-cq6zm 2026-01-20T10:56:53.268605301+00:00 stderr F I0120 10:56:53.268593 30089 obj_retry.go:541] Creating *factory.egressIPPod cert-manager/cert-manager-758df9885c-cq6zm took: 4.09µs 2026-01-20T10:56:53.268605301+00:00 stderr F I0120 10:56:53.268569 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-etcd/etcd-crc 2026-01-20T10:56:53.268630912+00:00 stderr F I0120 10:56:53.268618 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-etcd/etcd-crc took: 11.26µs 2026-01-20T10:56:53.268637612+00:00 stderr F I0120 10:56:53.268630 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.268658543+00:00 stderr F I0120 10:56:53.268642 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/redhat-operators-2nxg8 2026-01-20T10:56:53.268665993+00:00 stderr F I0120 10:56:53.268655 30089 obj_retry.go:502] Add event received 
for *factory.egressIPPod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2026-01-20T10:56:53.268665993+00:00 stderr F I0120 10:56:53.268661 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/redhat-operators-2nxg8 took: 7.401µs 2026-01-20T10:56:53.268673003+00:00 stderr F I0120 10:56:53.268665 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2026-01-20T10:56:53.268681833+00:00 stderr F I0120 10:56:53.268672 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2026-01-20T10:56:53.268681833+00:00 stderr F I0120 10:56:53.268676 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b took: 10.93µs 2026-01-20T10:56:53.268694954+00:00 stderr F I0120 10:56:53.268677 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg took: 3.21µs 2026-01-20T10:56:53.268694954+00:00 stderr F I0120 10:56:53.268681 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console/console-644bb77b49-5x5xk 2026-01-20T10:56:53.268694954+00:00 stderr F I0120 10:56:53.268686 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh took: 8.51µs 2026-01-20T10:56:53.268694954+00:00 stderr F I0120 10:56:53.268688 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.268706104+00:00 stderr F I0120 10:56:53.268694 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.268714924+00:00 stderr F I0120 10:56:53.268702 30089 obj_retry.go:541] Creating 
*factory.egressIPPod openshift-network-operator/network-operator-767c585db5-zd56b took: 2.24µs 2026-01-20T10:56:53.268714924+00:00 stderr F I0120 10:56:53.268704 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-console/console-644bb77b49-5x5xk took: 12.271µs 2026-01-20T10:56:53.268714924+00:00 stderr F I0120 10:56:53.268656 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2026-01-20T10:56:53.268727064+00:00 stderr F I0120 10:56:53.268720 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 took: 3.14µs 2026-01-20T10:56:53.268734335+00:00 stderr F I0120 10:56:53.268727 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-9-crc 2026-01-20T10:56:53.268741315+00:00 stderr F I0120 10:56:53.268732 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.268749795+00:00 stderr F I0120 10:56:53.268740 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.268749795+00:00 stderr F I0120 10:56:53.268746 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/community-operators-6m4w2 took: 1.62µs 2026-01-20T10:56:53.268778256+00:00 stderr F I0120 10:56:53.268752 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.268778256+00:00 stderr F I0120 10:56:53.268758 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/kube-rbac-proxy-crio-crc took: 1.37µs 2026-01-20T10:56:53.268778256+00:00 stderr F I0120 10:56:53.268702 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs took: 2.62µs 2026-01-20T10:56:53.268778256+00:00 stderr F I0120 10:56:53.268768 30089 obj_retry.go:502] Add event received for *factory.egressIPPod cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.268778256+00:00 stderr F I0120 10:56:53.268771 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-10-crc 2026-01-20T10:56:53.268790486+00:00 stderr F I0120 10:56:53.268775 30089 obj_retry.go:541] Creating *factory.egressIPPod cert-manager/cert-manager-webhook-855f577f79-7bdxq took: 2.36µs 2026-01-20T10:56:53.268790486+00:00 stderr F I0120 10:56:53.268773 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/installer-8-crc 2026-01-20T10:56:53.268790486+00:00 stderr F I0120 10:56:53.268779 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2026-01-20T10:56:53.268804206+00:00 stderr F 
I0120 10:56:53.268785 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.268804206+00:00 stderr F I0120 10:56:53.268789 30089 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.268804206+00:00 stderr F I0120 10:56:53.268793 30089 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.268804206+00:00 stderr F I0120 10:56:53.268790 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.268814137+00:00 stderr F I0120 10:56:53.268804 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.268814137+00:00 stderr F I0120 10:56:53.268807 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2026-01-20T10:56:53.268814137+00:00 stderr F I0120 10:56:53.268807 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-ingress-canary/ingress-canary-2vhcn took: 1.86µs 2026-01-20T10:56:53.268823617+00:00 stderr F I0120 10:56:53.268814 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-controller-manager/controller-manager-778975cc4f-x5vcf took: 2.62µs 2026-01-20T10:56:53.268823617+00:00 stderr F I0120 10:56:53.268817 30089 obj_retry.go:502] Add event received for *factory.egressIPPod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx 2026-01-20T10:56:53.268832467+00:00 stderr F I0120 10:56:53.268817 30089 obj_retry.go:541] Creating *factory.egressIPPod 
openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t took: 3.14µs 2026-01-20T10:56:53.268832467+00:00 stderr F I0120 10:56:53.268815 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.268832467+00:00 stderr F I0120 10:56:53.268691 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2026-01-20T10:56:53.268844398+00:00 stderr F I0120 10:56:53.268836 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 took: 2.58µs 2026-01-20T10:56:53.268844398+00:00 stderr F I0120 10:56:53.268837 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/certified-operators-mpjb7 took: 4.47µs 2026-01-20T10:56:53.268854058+00:00 stderr F I0120 10:56:53.268843 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.268854058+00:00 stderr F I0120 10:56:53.268825 30089 obj_retry.go:541] Creating *factory.egressIPPod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx took: 1.52µs 2026-01-20T10:56:53.268863048+00:00 stderr F I0120 10:56:53.268851 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver/kube-apiserver-crc took: 1.48µs 2026-01-20T10:56:53.268863048+00:00 stderr F I0120 10:56:53.268707 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-server-v65wr took: 70.952µs 2026-01-20T10:56:53.268873818+00:00 stderr F I0120 10:56:53.268863 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.268883109+00:00 stderr F I0120 10:56:53.268875 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-image-registry/node-ca-l92hr took: 2.84µs 2026-01-20T10:56:53.268883109+00:00 stderr F 
I0120 10:56:53.268807 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.268926200+00:00 stderr F I0120 10:56:53.268890 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 took: 3.53µs 2026-01-20T10:56:53.268926200+00:00 stderr F I0120 10:56:53.268504 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr Admin Network Policy controller: took 6.72µs 2026-01-20T10:56:53.268926200+00:00 stderr F I0120 10:56:53.268565 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 took: 7.05µs 2026-01-20T10:56:53.268926200+00:00 stderr F I0120 10:56:53.268911 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-node-identity/network-node-identity-7xghp in Admin Network Policy controller 2026-01-20T10:56:53.268926200+00:00 stderr F I0120 10:56:53.268913 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2026-01-20T10:56:53.268926200+00:00 stderr F I0120 10:56:53.268918 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-node-identity/network-node-identity-7xghp Admin Network Policy controller: took 9.76µs 2026-01-20T10:56:53.268936420+00:00 stderr F I0120 10:56:53.268924 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh took: 3.151µs 2026-01-20T10:56:53.268936420+00:00 stderr F I0120 10:56:53.268925 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in Admin Network Policy controller 2026-01-20T10:56:53.268943940+00:00 stderr F I0120 10:56:53.268938 
30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz Admin Network Policy controller: took 11.6µs 2026-01-20T10:56:53.268990361+00:00 stderr F I0120 10:56:53.268953 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in Admin Network Policy controller 2026-01-20T10:56:53.268990361+00:00 stderr F I0120 10:56:53.268964 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp Admin Network Policy controller: took 13.551µs 2026-01-20T10:56:53.268990361+00:00 stderr F I0120 10:56:53.268972 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns/dns-default-gbw49 in Admin Network Policy controller 2026-01-20T10:56:53.268990361+00:00 stderr F I0120 10:56:53.268978 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns/dns-default-gbw49 Admin Network Policy controller: took 6.26µs 2026-01-20T10:56:53.268990361+00:00 stderr F I0120 10:56:53.268986 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-11-crc in Admin Network Policy controller 2026-01-20T10:56:53.268999452+00:00 stderr F I0120 10:56:53.268991 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-11-crc Admin Network Policy controller: took 6.08µs 2026-01-20T10:56:53.269006562+00:00 stderr F I0120 10:56:53.268998 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/redhat-marketplace-2mx7j in Admin Network Policy controller 2026-01-20T10:56:53.269016432+00:00 stderr F I0120 10:56:53.269004 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/redhat-marketplace-2mx7j Admin Network Policy controller: took 6.25µs 2026-01-20T10:56:53.269016432+00:00 stderr F I0120 10:56:53.269012 30089 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in Admin Network Policy controller 2026-01-20T10:56:53.269023802+00:00 stderr F I0120 10:56:53.269018 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m Admin Network Policy controller: took 6.31µs 2026-01-20T10:56:53.269031922+00:00 stderr F I0120 10:56:53.269026 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-authentication-operator in Admin Network Policy controller 2026-01-20T10:56:53.269038513+00:00 stderr F I0120 10:56:53.269033 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 7.92µs 2026-01-20T10:56:53.269046613+00:00 stderr F I0120 10:56:53.269042 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cloud-network-config-controller in Admin Network Policy controller 2026-01-20T10:56:53.269053183+00:00 stderr F I0120 10:56:53.269046 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cloud-network-config-controller Admin Network Policy controller: took 4.06µs 2026-01-20T10:56:53.269053183+00:00 stderr F I0120 10:56:53.269050 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ovirt-infra in Admin Network Policy controller 2026-01-20T10:56:53.269090114+00:00 stderr F I0120 10:56:53.269053 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ovirt-infra Admin Network Policy controller: took 3.12µs 2026-01-20T10:56:53.269090114+00:00 stderr F I0120 10:56:53.269078 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift in Admin Network Policy controller 2026-01-20T10:56:53.269090114+00:00 stderr F I0120 10:56:53.269083 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift Admin Network Policy controller: 
took 4.96µs 2026-01-20T10:56:53.269102064+00:00 stderr F I0120 10:56:53.269089 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller 2026-01-20T10:56:53.269102064+00:00 stderr F I0120 10:56:53.269093 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 4.22µs 2026-01-20T10:56:53.269109355+00:00 stderr F I0120 10:56:53.269103 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in Admin Network Policy controller 2026-01-20T10:56:53.269116385+00:00 stderr F I0120 10:56:53.269111 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg Admin Network Policy controller: took 12.131µs 2026-01-20T10:56:53.269122925+00:00 stderr F I0120 10:56:53.269109 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.269129735+00:00 stderr F I0120 10:56:53.269117 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.269129735+00:00 stderr F I0120 10:56:53.269126 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm took: 3.58µs 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269134 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-controller-manager/kube-controller-manager-crc took: 4.05µs 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269142 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269135 30089 obj_retry.go:502] Add event received for 
*factory.egressIPPod openshift-kube-controller-manager/revision-pruner-11-crc 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269152 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv took: 2.31µs 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269156 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269122 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m in Admin Network Policy controller 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269168 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269169 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m Admin Network Policy controller: took 46.801µs 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269181 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver/installer-13-crc took: 3.3µs 2026-01-20T10:56:53.269194777+00:00 stderr F I0120 10:56:53.269184 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in Admin Network Policy controller 2026-01-20T10:56:53.269210097+00:00 stderr F I0120 10:56:53.269133 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2026-01-20T10:56:53.269210097+00:00 stderr F I0120 10:56:53.269195 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b Admin 
Network Policy controller: took 10.931µs 2026-01-20T10:56:53.269210097+00:00 stderr F I0120 10:56:53.269202 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc took: 2.84µs 2026-01-20T10:56:53.269217837+00:00 stderr F I0120 10:56:53.269208 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/installer-9-crc in Admin Network Policy controller 2026-01-20T10:56:53.269217837+00:00 stderr F I0120 10:56:53.269208 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-10-retry-1-crc 2026-01-20T10:56:53.269225158+00:00 stderr F I0120 10:56:53.269215 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-9-crc Admin Network Policy controller: took 8.04µs 2026-01-20T10:56:53.269225158+00:00 stderr F I0120 10:56:53.269218 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.269232378+00:00 stderr F I0120 10:56:53.269224 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-10-crc in Admin Network Policy controller 2026-01-20T10:56:53.269239288+00:00 stderr F I0120 10:56:53.269231 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-10-crc Admin Network Policy controller: took 7.66µs 2026-01-20T10:56:53.269246178+00:00 stderr F I0120 10:56:53.269238 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd in Admin Network Policy controller 2026-01-20T10:56:53.269255638+00:00 stderr F I0120 10:56:53.269244 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd Admin Network Policy controller: took 6.47µs 2026-01-20T10:56:53.269255638+00:00 stderr F I0120 10:56:53.269252 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-service-ca/service-ca-666f99b6f-kk8kg in Admin Network Policy controller 2026-01-20T10:56:53.269264449+00:00 stderr F I0120 10:56:53.269258 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-service-ca/service-ca-666f99b6f-kk8kg Admin Network Policy controller: took 6.3µs 2026-01-20T10:56:53.269271369+00:00 stderr F I0120 10:56:53.269265 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in Admin Network Policy controller 2026-01-20T10:56:53.269278299+00:00 stderr F I0120 10:56:53.269270 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress/router-default-5c9bf7bc58-6jctv Admin Network Policy controller: took 5.85µs 2026-01-20T10:56:53.269285339+00:00 stderr F I0120 10:56:53.269277 30089 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in Admin Network Policy controller 2026-01-20T10:56:53.269292659+00:00 stderr F I0120 10:56:53.269282 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb Admin Network Policy controller: took 5.72µs 2026-01-20T10:56:53.269299170+00:00 stderr F I0120 10:56:53.269292 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-openstack-infra in Admin Network Policy controller 2026-01-20T10:56:53.269305740+00:00 stderr F I0120 10:56:53.269300 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-openstack-infra Admin Network Policy controller: took 8.871µs 2026-01-20T10:56:53.269324680+00:00 stderr F I0120 10:56:53.269315 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-route-controller-manager in Admin Network Policy controller 2026-01-20T10:56:53.269324680+00:00 stderr F I0120 10:56:53.269319 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-route-controller-manager Admin Network Policy controller: took 3.98µs 2026-01-20T10:56:53.269331730+00:00 stderr F I0120 10:56:53.269323 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress-operator in Admin Network Policy controller 2026-01-20T10:56:53.269331730+00:00 stderr F I0120 10:56:53.269328 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ingress-operator Admin Network Policy controller: took 4.21µs 2026-01-20T10:56:53.269338601+00:00 stderr F I0120 10:56:53.269334 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-apiserver-operator in Admin Network Policy controller 2026-01-20T10:56:53.269345111+00:00 stderr F I0120 10:56:53.269338 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-apiserver-operator Admin Network Policy controller: took 
4.331µs 2026-01-20T10:56:53.269351661+00:00 stderr F I0120 10:56:53.269344 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ovn-kubernetes in Admin Network Policy controller 2026-01-20T10:56:53.269351661+00:00 stderr F I0120 10:56:53.269348 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ovn-kubernetes Admin Network Policy controller: took 4.04µs 2026-01-20T10:56:53.269358461+00:00 stderr F I0120 10:56:53.269354 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-controller-manager-operator in Admin Network Policy controller 2026-01-20T10:56:53.269364991+00:00 stderr F I0120 10:56:53.269358 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-controller-manager-operator Admin Network Policy controller: took 3.91µs 2026-01-20T10:56:53.269373932+00:00 stderr F I0120 10:56:53.269363 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-dns in Admin Network Policy controller 2026-01-20T10:56:53.269373932+00:00 stderr F I0120 10:56:53.269367 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-dns Admin Network Policy controller: took 4µs 2026-01-20T10:56:53.269380852+00:00 stderr F I0120 10:56:53.269373 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-apiserver in Admin Network Policy controller 2026-01-20T10:56:53.269380852+00:00 stderr F I0120 10:56:53.269377 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-apiserver Admin Network Policy controller: took 4.06µs 2026-01-20T10:56:53.269387712+00:00 stderr F I0120 10:56:53.269383 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-image-registry in Admin Network Policy controller 2026-01-20T10:56:53.269394262+00:00 stderr F I0120 10:56:53.269387 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace 
openshift-image-registry Admin Network Policy controller: took 3.87µs 2026-01-20T10:56:53.269400832+00:00 stderr F I0120 10:56:53.269392 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kni-infra in Admin Network Policy controller 2026-01-20T10:56:53.269400832+00:00 stderr F I0120 10:56:53.269396 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kni-infra Admin Network Policy controller: took 3.87µs 2026-01-20T10:56:53.269409713+00:00 stderr F I0120 10:56:53.269402 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in Admin Network Policy controller 2026-01-20T10:56:53.269418313+00:00 stderr F I0120 10:56:53.269412 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-additional-cni-plugins-bzj2p Admin Network Policy controller: took 10.041µs 2026-01-20T10:56:53.269444873+00:00 stderr F I0120 10:56:53.269423 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in Admin Network Policy controller 2026-01-20T10:56:53.269444873+00:00 stderr F I0120 10:56:53.269434 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 Admin Network Policy controller: took 11.36µs 2026-01-20T10:56:53.269444873+00:00 stderr F I0120 10:56:53.269441 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc in Admin Network Policy controller 2026-01-20T10:56:53.269455504+00:00 stderr F I0120 10:56:53.269446 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc Admin Network Policy controller: took 5.53µs 2026-01-20T10:56:53.269464244+00:00 stderr F I0120 10:56:53.269453 30089 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-kube-apiserver/installer-12-crc in Admin Network Policy controller 2026-01-20T10:56:53.269464244+00:00 stderr F I0120 10:56:53.269460 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-12-crc Admin Network Policy controller: took 6.73µs 2026-01-20T10:56:53.269473154+00:00 stderr F I0120 10:56:53.269466 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-8-crc in Admin Network Policy controller 2026-01-20T10:56:53.269481164+00:00 stderr F I0120 10:56:53.269473 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-8-crc Admin Network Policy controller: took 6.18µs 2026-01-20T10:56:53.269489435+00:00 stderr F I0120 10:56:53.269479 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in Admin Network Policy controller 2026-01-20T10:56:53.269489435+00:00 stderr F I0120 10:56:53.269485 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/openshift-kube-scheduler-crc Admin Network Policy controller: took 5.89µs 2026-01-20T10:56:53.269499575+00:00 stderr F I0120 10:56:53.269493 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc in Admin Network Policy controller 2026-01-20T10:56:53.269506665+00:00 stderr F I0120 10:56:53.269499 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc Admin Network Policy controller: took 6.05µs 2026-01-20T10:56:53.269526626+00:00 stderr F I0120 10:56:53.269506 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in Admin Network Policy controller 2026-01-20T10:56:53.269526626+00:00 stderr F I0120 10:56:53.269512 30089 admin_network_policy_pod.go:59] Finished syncing Pod 
openshift-console-operator/console-operator-5dbbc74dc9-cp5cd Admin Network Policy controller: took 6.62µs 2026-01-20T10:56:53.269526626+00:00 stderr F I0120 10:56:53.269519 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in Admin Network Policy controller 2026-01-20T10:56:53.269535706+00:00 stderr F I0120 10:56:53.269524 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 Admin Network Policy controller: took 5.951µs 2026-01-20T10:56:53.269535706+00:00 stderr F I0120 10:56:53.269531 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-q88th in Admin Network Policy controller 2026-01-20T10:56:53.269544436+00:00 stderr F I0120 10:56:53.269537 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-q88th Admin Network Policy controller: took 5.9µs 2026-01-20T10:56:53.269552766+00:00 stderr F I0120 10:56:53.269544 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/network-metrics-daemon-qdfr4 in Admin Network Policy controller 2026-01-20T10:56:53.269561297+00:00 stderr F I0120 10:56:53.269549 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/network-metrics-daemon-qdfr4 Admin Network Policy controller: took 5.47µs 2026-01-20T10:56:53.269591047+00:00 stderr F I0120 10:56:53.269568 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/installer-7-crc in Admin Network Policy controller 2026-01-20T10:56:53.269591047+00:00 stderr F I0120 10:56:53.269579 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/installer-7-crc Admin Network Policy controller: took 22.091µs 2026-01-20T10:56:53.269591047+00:00 stderr F I0120 10:56:53.269586 30089 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-machine-config-operator/machine-config-daemon-zpnhg in Admin Network Policy controller 2026-01-20T10:56:53.269599788+00:00 stderr F I0120 10:56:53.269591 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-daemon-zpnhg Admin Network Policy controller: took 6µs 2026-01-20T10:56:53.269606748+00:00 stderr F I0120 10:56:53.269599 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in Admin Network Policy controller 2026-01-20T10:56:53.269614588+00:00 stderr F I0120 10:56:53.269604 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd Admin Network Policy controller: took 6.141µs 2026-01-20T10:56:53.269621848+00:00 stderr F I0120 10:56:53.269611 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in Admin Network Policy controller 2026-01-20T10:56:53.269621848+00:00 stderr F I0120 10:56:53.269618 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz Admin Network Policy controller: took 6.8µs 2026-01-20T10:56:53.269631859+00:00 stderr F I0120 10:56:53.269624 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in Admin Network Policy controller 2026-01-20T10:56:53.269638780+00:00 stderr F I0120 10:56:53.269630 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 Admin Network Policy controller: took 5.77µs 2026-01-20T10:56:53.269646500+00:00 stderr F I0120 10:56:53.269637 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in Admin Network Policy controller 2026-01-20T10:56:53.269646500+00:00 
stderr F I0120 10:56:53.269642 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv Admin Network Policy controller: took 5.99µs 2026-01-20T10:56:53.269661990+00:00 stderr F I0120 10:56:53.269649 30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in Admin Network Policy controller 2026-01-20T10:56:53.269661990+00:00 stderr F I0120 10:56:53.269655 30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw Admin Network Policy controller: took 5.79µs 2026-01-20T10:56:53.269704911+00:00 stderr F I0120 10:56:53.269684 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-scheduler-operator in Admin Network Policy controller 2026-01-20T10:56:53.269713192+00:00 stderr F I0120 10:56:53.269703 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.269713192+00:00 stderr F I0120 10:56:53.269707 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-scheduler-operator Admin Network Policy controller: took 40.991µs 2026-01-20T10:56:53.269720502+00:00 stderr F I0120 10:56:53.269715 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-diagnostics/network-check-target-v54bt took: 2.1µs 2026-01-20T10:56:53.269753863+00:00 stderr F I0120 10:56:53.269724 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-service-ca in Admin Network Policy controller 2026-01-20T10:56:53.269753863+00:00 stderr F I0120 10:56:53.269734 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-11-crc 2026-01-20T10:56:53.269753863+00:00 stderr F I0120 10:56:53.269734 30089 obj_retry.go:502] Add event received for 
*factory.egressIPPod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2026-01-20T10:56:53.269753863+00:00 stderr F I0120 10:56:53.269743 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.269762503+00:00 stderr F I0120 10:56:53.269752 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/redhat-marketplace-2mx7j 2026-01-20T10:56:53.269769133+00:00 stderr F I0120 10:56:53.269760 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2026-01-20T10:56:53.269769133+00:00 stderr F I0120 10:56:53.269760 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2026-01-20T10:56:53.269776213+00:00 stderr F I0120 10:56:53.269768 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2026-01-20T10:56:53.269776213+00:00 stderr F I0120 10:56:53.269771 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m took: 2.67µs 2026-01-20T10:56:53.269786064+00:00 stderr F I0120 10:56:53.269778 30089 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.269792704+00:00 stderr F I0120 10:56:53.269783 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-9-crc 2026-01-20T10:56:53.269792704+00:00 stderr F I0120 10:56:53.269785 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz took: 6.4µs 2026-01-20T10:56:53.269799674+00:00 stderr F I0120 10:56:53.269787 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-additional-cni-plugins-bzj2p 2026-01-20T10:56:53.269799674+00:00 stderr F I0120 10:56:53.269792 30089 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-9-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2026-01-20T10:56:53.269806944+00:00 stderr F I0120 10:56:53.269799 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2026-01-20T10:56:53.269806944+00:00 stderr F I0120 10:56:53.269801 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-additional-cni-plugins-bzj2p took: 4.21µs 2026-01-20T10:56:53.269814344+00:00 stderr F I0120 10:56:53.269805 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-10-crc 2026-01-20T10:56:53.269814344+00:00 stderr F I0120 10:56:53.269805 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2026-01-20T10:56:53.269821164+00:00 stderr F I0120 10:56:53.269813 30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2026-01-20T10:56:53.269821164+00:00 stderr F I0120 10:56:53.269815 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg took: 6.16µs 2026-01-20T10:56:53.269828555+00:00 stderr F I0120 10:56:53.269812 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2026-01-20T10:56:53.269828555+00:00 stderr F I0120 10:56:53.269823 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 took: 4.84µs 2026-01-20T10:56:53.269836415+00:00 stderr F I0120 10:56:53.269830 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb took: 2.37µs 2026-01-20T10:56:53.269844435+00:00 stderr F I0120 10:56:53.269833 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress/router-default-5c9bf7bc58-6jctv 2026-01-20T10:56:53.269844435+00:00 stderr F I0120 10:56:53.269838 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/openshift-kube-scheduler-crc 2026-01-20T10:56:53.269853185+00:00 stderr F I0120 10:56:53.269844 30089 obj_retry.go:502] Add event received for *factory.egressIPPod hostpath-provisioner/csi-hostpathplugin-hvm8g 2026-01-20T10:56:53.269853185+00:00 stderr F I0120 10:56:53.269849 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-scheduler/openshift-kube-scheduler-crc took: 2.67µs 2026-01-20T10:56:53.269861886+00:00 stderr F I0120 10:56:53.269845 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-ingress/router-default-5c9bf7bc58-6jctv took: 4.06µs 2026-01-20T10:56:53.269861886+00:00 stderr F I0120 10:56:53.269856 30089 obj_retry.go:541] Creating *factory.egressIPPod hostpath-provisioner/csi-hostpathplugin-hvm8g took: 3.03µs 2026-01-20T10:56:53.269878406+00:00 stderr F I0120 10:56:53.269737 
30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-service-ca Admin Network Policy controller: took 13.09µs 2026-01-20T10:56:53.269878406+00:00 stderr F I0120 10:56:53.269863 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2026-01-20T10:56:53.269878406+00:00 stderr F I0120 10:56:53.269865 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2026-01-20T10:56:53.269878406+00:00 stderr F I0120 10:56:53.269872 30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openstack in Admin Network Policy controller 2026-01-20T10:56:53.269878406+00:00 stderr F I0120 10:56:53.269761 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/redhat-marketplace-2mx7j took: 1.91µs 2026-01-20T10:56:53.269889356+00:00 stderr F I0120 10:56:53.269877 30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openstack Admin Network Policy controller: took 5.51µs 2026-01-20T10:56:53.269889356+00:00 stderr F I0120 10:56:53.269882 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf took: 5.11µs 2026-01-20T10:56:53.269898397+00:00 stderr F I0120 10:56:53.269884 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-service-ca/service-ca-666f99b6f-kk8kg 2026-01-20T10:56:53.269898397+00:00 stderr F I0120 10:56:53.269752 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr took: 3.311µs 2026-01-20T10:56:53.269907557+00:00 stderr F I0120 10:56:53.269895 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2026-01-20T10:56:53.269907557+00:00 stderr F I0120 
10:56:53.269892 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/marketplace-operator-8b455464d-nc8zc 2026-01-20T10:56:53.269907557+00:00 stderr F I0120 10:56:53.269902 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns/dns-default-gbw49 2026-01-20T10:56:53.269916497+00:00 stderr F I0120 10:56:53.269901 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2026-01-20T10:56:53.269916497+00:00 stderr F I0120 10:56:53.269911 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/marketplace-operator-8b455464d-nc8zc took: 4.05µs 2026-01-20T10:56:53.269925127+00:00 stderr F I0120 10:56:53.269912 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-dns/dns-default-gbw49 took: 3.48µs 2026-01-20T10:56:53.269925127+00:00 stderr F I0120 10:56:53.269897 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-service-ca/service-ca-666f99b6f-kk8kg took: 2.28µs 2026-01-20T10:56:53.269925127+00:00 stderr F I0120 10:56:53.269920 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 took: 5.07µs 2026-01-20T10:56:53.269934737+00:00 stderr F I0120 10:56:53.269906 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb took: 2.65µs 2026-01-20T10:56:53.269934737+00:00 stderr F I0120 10:56:53.269924 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.269945958+00:00 stderr F I0120 10:56:53.269724 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/image-registry-75b7bb6564-ln84v 2026-01-20T10:56:53.269945958+00:00 stderr F I0120 10:56:53.269943 30089 obj_retry.go:502] Add event received for *factory.egressIPPod 
openshift-network-node-identity/network-node-identity-7xghp 2026-01-20T10:56:53.269954358+00:00 stderr F I0120 10:56:53.269944 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-image-registry/image-registry-75b7bb6564-ln84v took: 3.31µs 2026-01-20T10:56:53.269954358+00:00 stderr F I0120 10:56:53.269948 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-node-sdkgg took: 7.16µs 2026-01-20T10:56:53.269961938+00:00 stderr F I0120 10:56:53.269953 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m 2026-01-20T10:56:53.269996189+00:00 stderr F I0120 10:56:53.269960 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2026-01-20T10:56:53.269996189+00:00 stderr F I0120 10:56:53.269972 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b took: 2.46µs 2026-01-20T10:56:53.269996189+00:00 stderr F I0120 10:56:53.269975 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-q88th 2026-01-20T10:56:53.269996189+00:00 stderr F I0120 10:56:53.269977 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp took: 3.19µs 2026-01-20T10:56:53.269996189+00:00 stderr F I0120 10:56:53.269950 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2026-01-20T10:56:53.270006969+00:00 stderr F I0120 10:56:53.269994 30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2026-01-20T10:56:53.270006969+00:00 stderr F I0120 10:56:53.270002 30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-admission-controller-6c7c885997-4hbbc took: 5.13µs 
2026-01-20T10:56:53.270015730+00:00 stderr F I0120 10:56:53.270003   30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/network-metrics-daemon-qdfr4
2026-01-20T10:56:53.270015730+00:00 stderr F I0120 10:56:53.270011   30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-12-crc
2026-01-20T10:56:53.270024790+00:00 stderr F I0120 10:56:53.270006   30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd took: 2.47µs
2026-01-20T10:56:53.270032660+00:00 stderr F I0120 10:56:53.270025   30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/network-metrics-daemon-qdfr4 took: 7.51µs
2026-01-20T10:56:53.270032660+00:00 stderr F I0120 10:56:53.269884   30089 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-system in Admin Network Policy controller
2026-01-20T10:56:53.270040940+00:00 stderr F I0120 10:56:53.270026   30089 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-12-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it
2026-01-20T10:56:53.270048060+00:00 stderr F I0120 10:56:53.270031   30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-8-crc
2026-01-20T10:56:53.270055731+00:00 stderr F I0120 10:56:53.270049   30089 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it
2026-01-20T10:56:53.270101122+00:00 stderr F I0120 10:56:53.269961   30089 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it
2026-01-20T10:56:53.270157453+00:00 stderr F I0120 10:56:53.269953   30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-node-identity/network-node-identity-7xghp took: 2.74µs
2026-01-20T10:56:53.270187204+00:00 stderr F I0120 10:56:53.269988   30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-q88th took: 2.74µs
2026-01-20T10:56:53.270245176+00:00 stderr F I0120 10:56:53.270212   30089 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7
2026-01-20T10:56:53.270281547+00:00 stderr F I0120 10:56:53.270270   30089 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 took: 4.23µs
2026-01-20T10:56:53.270322508+00:00 stderr F I0120 10:56:53.270036   30089 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-system Admin Network Policy controller: took 151.514µs
2026-01-20T10:56:53.270333468+00:00 stderr F I0120 10:56:53.270326   30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-apiserver-operator in Admin Network Policy controller
2026-01-20T10:56:53.270342048+00:00 stderr F I0120 10:56:53.270333   30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-apiserver-operator Admin Network Policy controller: took 9.42µs
2026-01-20T10:56:53.270350619+00:00 stderr F I0120 10:56:53.270341   30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-host-network in Admin Network Policy controller
2026-01-20T10:56:53.270350619+00:00 stderr F I0120 10:56:53.270346   30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-host-network Admin Network Policy controller: took 5.22µs
2026-01-20T10:56:53.270360849+00:00 stderr F I0120 10:56:53.270353   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-operator/iptables-alerter-wwpnd in Admin Network Policy controller
2026-01-20T10:56:53.270371019+00:00 stderr F I0120 10:56:53.270364   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-operator/iptables-alerter-wwpnd Admin Network Policy controller: took 12.431µs
2026-01-20T10:56:53.270381099+00:00 stderr F I0120 10:56:53.270375   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in Admin Network Policy controller
2026-01-20T10:56:53.270389170+00:00 stderr F I0120 10:56:53.270379   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z Admin Network Policy controller: took 4.85µs
2026-01-20T10:56:53.270397290+00:00 stderr F I0120 10:56:53.270389   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in Admin Network Policy controller
2026-01-20T10:56:53.270397290+00:00 stderr F I0120 10:56:53.270393   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb Admin Network Policy controller: took 9.331µs
2026-01-20T10:56:53.270406230+00:00 stderr F I0120 10:56:53.270399   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns/node-resolver-dn27q in Admin Network Policy controller
2026-01-20T10:56:53.270406230+00:00 stderr F I0120 10:56:53.270369   30089 iptables.go:110] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.270434841+00:00 stderr F I0120 10:56:53.270403   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns/node-resolver-dn27q Admin Network Policy controller: took 4.18µs
2026-01-20T10:56:53.270434841+00:00 stderr F I0120 10:56:53.270429   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-etcd/etcd-crc in Admin Network Policy controller
2026-01-20T10:56:53.270446501+00:00 stderr F I0120 10:56:53.270435   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-etcd/etcd-crc Admin Network Policy controller: took 5.94µs
2026-01-20T10:56:53.270446501+00:00 stderr F I0120 10:56:53.270440   30089 admin_network_policy_pod.go:56] Processing sync for Pod cert-manager/cert-manager-758df9885c-cq6zm in Admin Network Policy controller
2026-01-20T10:56:53.270453411+00:00 stderr F I0120 10:56:53.270445   30089 admin_network_policy_pod.go:59] Finished syncing Pod cert-manager/cert-manager-758df9885c-cq6zm Admin Network Policy controller: took 5.55µs
2026-01-20T10:56:53.270453411+00:00 stderr F I0120 10:56:53.270450   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in Admin Network Policy controller
2026-01-20T10:56:53.270460291+00:00 stderr F I0120 10:56:53.270455   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg Admin Network Policy controller: took 4.65µs
2026-01-20T10:56:53.270483252+00:00 stderr F I0120 10:56:53.270470   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in Admin Network Policy controller
2026-01-20T10:56:53.270483252+00:00 stderr F I0120 10:56:53.270477   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 Admin Network Policy controller: took 17.791µs
2026-01-20T10:56:53.270492352+00:00 stderr F I0120 10:56:53.270482   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console/downloads-65476884b9-9wcvx in Admin Network Policy controller
2026-01-20T10:56:53.270492352+00:00 stderr F I0120 10:56:53.270486   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console/downloads-65476884b9-9wcvx Admin Network Policy controller: took 3.8µs
2026-01-20T10:56:53.270500683+00:00 stderr F I0120 10:56:53.270490   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in Admin Network Policy controller
2026-01-20T10:56:53.270500683+00:00 stderr F I0120 10:56:53.270495   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 Admin Network Policy controller: took 4.46µs
2026-01-20T10:56:53.270509003+00:00 stderr F I0120 10:56:53.270499   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-9-crc in Admin Network Policy controller
2026-01-20T10:56:53.270509003+00:00 stderr F I0120 10:56:53.270503   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-9-crc Admin Network Policy controller: took 4.251µs
2026-01-20T10:56:53.270517643+00:00 stderr F I0120 10:56:53.270508   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in Admin Network Policy controller
2026-01-20T10:56:53.270517643+00:00 stderr F I0120 10:56:53.270512   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh Admin Network Policy controller: took 3.92µs
2026-01-20T10:56:53.270526253+00:00 stderr F I0120 10:56:53.270516   30089 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in Admin Network Policy controller
2026-01-20T10:56:53.270526253+00:00 stderr F I0120 10:56:53.270520   30089 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 Admin Network Policy controller: took 4.1µs
2026-01-20T10:56:53.270537914+00:00 stderr F I0120 10:56:53.270525   30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cloud-platform-infra in Admin Network Policy controller
2026-01-20T10:56:53.270537914+00:00 stderr F I0120 10:56:53.270529   30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cloud-platform-infra Admin Network Policy controller: took 3.59µs
2026-01-20T10:56:53.270537914+00:00 stderr F I0120 10:56:53.270533   30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-scheduler in Admin Network Policy controller
2026-01-20T10:56:53.270547354+00:00 stderr F I0120 10:56:53.270537   30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-scheduler Admin Network Policy controller: took 3.41µs
2026-01-20T10:56:53.270547354+00:00 stderr F I0120 10:56:53.270542   30089 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-service-ca-operator in Admin Network Policy controller
2026-01-20T10:56:53.270547354+00:00 stderr F I0120 10:56:53.270545   30089 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-service-ca-operator Admin Network Policy controller: took 3.02µs
2026-01-20T10:56:53.270572704+00:00 stderr F I0120 10:56:53.270420   30089 factory.go:988] Added *v1.Pod event handler 6
2026-01-20T10:56:53.270705508+00:00 stderr F I0120 10:56:53.270673   30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 10.217.0.0/22 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.270776220+00:00 stderr F I0120 10:56:53.270753   30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.270818021+00:00 stderr F I0120 10:56:53.270793   30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 10.217.0.0/22 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.271237662+00:00 stderr F I0120 10:56:53.271209   30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {56cc3414-2f3a-494a-bb5b-fac89ab63a76}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.271262563+00:00 stderr F I0120 10:56:53.271243   30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:56cc3414-2f3a-494a-bb5b-fac89ab63a76}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.271271243+00:00 stderr F I0120 10:56:53.271255   30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {56cc3414-2f3a-494a-bb5b-fac89ab63a76}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:56cc3414-2f3a-494a-bb5b-fac89ab63a76}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.271542590+00:00 stderr F I0120 10:56:53.271514   30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[ip-family:ip k8s.ovn.org/id:default-network-controller:EgressIP:102:EIP-No-Reroute-reply-traffic:ip k8s.ovn.org/name:EIP-No-Reroute-reply-traffic k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressIP priority:102]} match:pkt.mark == 42 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {725b51e4-bb6e-431f-9362-bdc8357c6b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.271608922+00:00 stderr F I0120 10:56:53.271588   30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:725b51e4-bb6e-431f-9362-bdc8357c6b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.271655703+00:00 stderr F I0120 10:56:53.271626   30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[ip-family:ip k8s.ovn.org/id:default-network-controller:EgressIP:102:EIP-No-Reroute-reply-traffic:ip k8s.ovn.org/name:EIP-No-Reroute-reply-traffic k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressIP priority:102]} match:pkt.mark == 42 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {725b51e4-bb6e-431f-9362-bdc8357c6b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:725b51e4-bb6e-431f-9362-bdc8357c6b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.271968041+00:00 stderr F I0120 10:56:53.271949   30089 address_set.go:304] New(89261db8-e0fa-461c-a902-652c89cbed51/default-network-controller:EgressIP:node-ips:v4/a14918748166599097711) with []
2026-01-20T10:56:53.271978842+00:00 stderr F I0120 10:56:53.271970   30089 address_set.go:304] New(b27c19cc-d9d0-4d57-a5a8-06fcff438e8a/default-network-controller:EgressIP:egressip-served-pods:v4/a4548040316634674295) with []
2026-01-20T10:56:53.271996502+00:00 stderr F I0120 10:56:53.271984   30089 address_set.go:304] New(6ad154d5-3275-4190-8d21-ff202885643c/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []
2026-01-20T10:56:53.272055534+00:00 stderr F I0120 10:56:53.272031   30089 obj_retry.go:502] Add event received for *factory.egressNode crc
2026-01-20T10:56:53.272260719+00:00 stderr F I0120 10:56:53.272231   30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[192.168.122.10 192.168.126.11 38.102.83.220 172.17.0.5 172.18.0.5 172.19.0.5]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {89261db8-e0fa-461c-a902-652c89cbed51}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.272272089+00:00 stderr F I0120 10:56:53.272252   30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[192.168.122.10 192.168.126.11 38.102.83.220 172.17.0.5 172.18.0.5 172.19.0.5]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {89261db8-e0fa-461c-a902-652c89cbed51}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.272630930+00:00 stderr F I0120 10:56:53.272582   30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:(ip4.src == $a4548040316634674295 || ip4.src == $a13607449821398607916) && ip4.dst == $a14918748166599097711 options:{GoMap:map[pkt_mark:1008]} priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6eea91c7-d03c-41d8-906d-a701fa21697a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.272644920+00:00 stderr F I0120 10:56:53.272624   30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:6eea91c7-d03c-41d8-906d-a701fa21697a}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.272674471+00:00 stderr F I0120 10:56:53.272640   30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:(ip4.src == $a4548040316634674295 || ip4.src == $a13607449821398607916) && ip4.dst == $a14918748166599097711 options:{GoMap:map[pkt_mark:1008]} priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6eea91c7-d03c-41d8-906d-a701fa21697a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:6eea91c7-d03c-41d8-906d-a701fa21697a}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.272933138+00:00 stderr F I0120 10:56:53.272897   30089 egressip.go:1280] Egress node: crc about to be initialized
2026-01-20T10:56:53.272999570+00:00 stderr F I0120 10:56:53.272969   30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f05c0207-10b6-48ad-8eef-da8e7afca1a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.273019600+00:00 stderr F I0120 10:56:53.273000   30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:f05c0207-10b6-48ad-8eef-da8e7afca1a7}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.273040241+00:00 stderr F I0120 10:56:53.273013   30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f05c0207-10b6-48ad-8eef-da8e7afca1a7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:f05c0207-10b6-48ad-8eef-da8e7afca1a7}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.273305518+00:00 stderr F I0120 10:56:53.273269   30089 obj_retry.go:541] Creating *factory.egressNode crc took: 1.218804ms
2026-01-20T10:56:53.273305518+00:00 stderr F I0120 10:56:53.273286   30089 factory.go:988] Added *v1.Node event handler 7
2026-01-20T10:56:53.273323918+00:00 stderr F I0120 10:56:53.273304   30089 factory.go:988] Added *v1.EgressIP event handler 8
2026-01-20T10:56:53.273469872+00:00 stderr F I0120 10:56:53.273423   30089 iptables.go:121] Adding rule in table: filter, chain: FORWARD with args: "-o ovn-k8s-mp0 -j ACCEPT" for protocol: 0
2026-01-20T10:56:53.273482043+00:00 stderr F I0120 10:56:53.273465   30089 factory.go:988] Added *v1.EgressFirewall event handler 9
2026-01-20T10:56:53.273540194+00:00 stderr F I0120 10:56:53.273519   30089 obj_retry.go:502] Add event received for *factory.egressFwNode crc
2026-01-20T10:56:53.273589115+00:00 stderr F I0120 10:56:53.273572   30089 obj_retry.go:541] Creating *factory.egressFwNode crc took: 40.881µs
2026-01-20T10:56:53.273597476+00:00 stderr F I0120 10:56:53.273587   30089 factory.go:988] Added *v1.Node event handler 10
2026-01-20T10:56:53.273605516+00:00 stderr F I0120 10:56:53.273598   30089 egressqos.go:193] Setting up event handlers for EgressQoS
2026-01-20T10:56:53.273757740+00:00 stderr F I0120 10:56:53.273742   30089 egressqos.go:245] Starting EgressQoS Controller
2026-01-20T10:56:53.273788561+00:00 stderr F I0120 10:56:53.273778   30089 shared_informer.go:311] Waiting for caches to sync for egressqosnodes
2026-01-20T10:56:53.273812071+00:00 stderr F I0120 10:56:53.273803   30089 shared_informer.go:318] Caches are synced for egressqosnodes
2026-01-20T10:56:53.273834722+00:00 stderr F I0120 10:56:53.273826   30089 shared_informer.go:311] Waiting for caches to sync for egressqospods
2026-01-20T10:56:53.273856073+00:00 stderr F I0120 10:56:53.273848   30089 shared_informer.go:318] Caches are synced for egressqospods
2026-01-20T10:56:53.273879993+00:00 stderr F I0120 10:56:53.273871   30089 shared_informer.go:311] Waiting for caches to sync for egressqos
2026-01-20T10:56:53.273901834+00:00 stderr F I0120 10:56:53.273893   30089 shared_informer.go:318] Caches are synced for egressqos
2026-01-20T10:56:53.273923264+00:00 stderr F I0120 10:56:53.273915   30089 egressqos.go:259] Repairing EgressQoSes
2026-01-20T10:56:53.273944485+00:00 stderr F I0120 10:56:53.273936   30089 egressqos.go:400] Starting repairing loop for egressqos
2026-01-20T10:56:53.274022047+00:00 stderr F I0120 10:56:53.274011   30089 egressqos.go:402] Finished repairing loop for egressqos: 74.822µs
2026-01-20T10:56:53.274076828+00:00 stderr F I0120 10:56:53.274051   30089 egressservice_zone.go:129] Setting up event handlers for Egress Services
2026-01-20T10:56:53.274157181+00:00 stderr F I0120 10:56:53.274132   30089 egressqos.go:1008] Processing sync for EgressQoS node crc
2026-01-20T10:56:53.274271354+00:00 stderr F I0120 10:56:53.274259   30089 egressservice_zone.go:205] Starting Egress Services Controller
2026-01-20T10:56:53.274296574+00:00 stderr F I0120 10:56:53.274288   30089 shared_informer.go:311] Waiting for caches to sync for egressservices
2026-01-20T10:56:53.274320295+00:00 stderr F I0120 10:56:53.274311   30089 shared_informer.go:318] Caches are synced for egressservices
2026-01-20T10:56:53.274342946+00:00 stderr F I0120 10:56:53.274334   30089 shared_informer.go:311] Waiting for caches to sync for egressservices_services
2026-01-20T10:56:53.274364626+00:00 stderr F I0120 10:56:53.274356   30089 shared_informer.go:318] Caches are synced for egressservices_services
2026-01-20T10:56:53.274398577+00:00 stderr F I0120 10:56:53.274301   30089 egressservice_zone_endpointslice.go:80] Ignoring updating default/kubernetes for endpointslice default/kubernetes as it is not a known egress service
2026-01-20T10:56:53.274407387+00:00 stderr F I0120 10:56:53.274378   30089 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices
2026-01-20T10:56:53.274407387+00:00 stderr F I0120 10:56:53.274401   30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-config-operator/metrics for endpointslice openshift-config-operator/metrics-tw775 as it is not a known egress service
2026-01-20T10:56:53.274427708+00:00 stderr F I0120 10:56:53.274410   30089 shared_informer.go:318] Caches are synced for egressservices_endpointslices
2026-01-20T10:56:53.274427708+00:00 stderr F I0120 10:56:53.274414   30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-storage-version-migrator-operator/metrics for endpointslice openshift-kube-storage-version-migrator-operator/metrics-zbxs7 as it is not a known egress service
2026-01-20T10:56:53.274439868+00:00 stderr F I0120 10:56:53.274426   30089 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes
2026-01-20T10:56:53.274439868+00:00 stderr F I0120 10:56:53.274428   30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/redhat-operators for endpointslice openshift-marketplace/redhat-operators-47g6l as it is not a known egress service
2026-01-20T10:56:53.274439868+00:00 stderr F I0120 10:56:53.274432   30089 shared_informer.go:318] Caches are synced for egressservices_nodes
2026-01-20T10:56:53.274447928+00:00 stderr F I0120 10:56:53.274438   30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator-machine-webhook for endpointslice openshift-machine-api/machine-api-operator-machine-webhook-xj4tp as it is not a known egress service
2026-01-20T10:56:53.274447928+00:00 stderr F I0120 10:56:53.274439   30089 egressservice_zone.go:223] Repairing Egress Services
2026-01-20T10:56:53.274455419+00:00 stderr F I0120 10:56:53.274446   30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-service-ca-operator/metrics for endpointslice openshift-service-ca-operator/metrics-wrkrj as it is not a known egress service
2026-01-20T10:56:53.274587742+00:00 stderr F I0120 10:56:53.274147   30089 egressqos.go:1023] EgressQoS crc node retrieved from lister: &Node{ObjectMeta:{crc c83c88d3-f34d-4083-a59d-1c50f90f89b8 42611 0 2024-06-26 12:44:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:crc kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node-role.kubernetes.io/worker: node.openshift.io/os_id:rhcos topology.hostpath.csi/node:crc] map[csi.volume.kubernetes.io/nodeid:{"kubevirt.io.hostpath-provisioner":"crc"} k8s.ovn.org/host-cidrs:["172.17.0.5/24","172.18.0.5/24","172.19.0.5/24","192.168.122.10/24","192.168.126.11/24","38.102.83.220/24"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"shared","interface-id":"br-ex_crc","mac-address":"fa:16:3e:0d:e7:11","ip-addresses":["38.102.83.220/24"],"ip-address":"38.102.83.220/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/network-ids:{"default":"0"} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-mgmt-port-mac-address:b6:dc:d9:26:03:d4 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.220/24"} k8s.ovn.org/node-subnets:{"default":["10.217.0.0/23"]} k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"} k8s.ovn.org/remote-zone-migrated:crc k8s.ovn.org/zone-name:crc machineconfiguration.openshift.io/controlPlaneTopology:SingleReplica machineconfiguration.openshift.io/currentConfig:rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/desiredConfig:rendered-master-ef556ead28ddfad01c34ac56c7adfb5a machineconfiguration.openshift.io/desiredDrain:uncordon-rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/lastAppliedDrain:uncordon-rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/lastObservedServerCAAnnotation:false machineconfiguration.openshift.io/lastSyncedControllerConfigResourceVersion:41986 machineconfiguration.openshift.io/post-config-action: machineconfiguration.openshift.io/reason:missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2026-01-20T10:56:53.274587742+00:00 stderr P machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found machineconfiguration.openshift.io/state:Degraded volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{12 0} {} 12 DecimalSI},ephemeral-storage: {{85294297088 0} {} 83295212Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{33654132736 0} {} 32865364Ki BinarySI},pods: {{250 0} {} 250 DecimalSI},},Allocatable:ResourceList{cpu: {{11800 -3} {} 11800m DecimalSI},ephemeral-storage: {{76397865653 0} {} 76397865653 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{33182273536 0} {} 32404564Ki BinarySI},pods: {{250 0} {} 250 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2026-01-20 10:56:28 +0000 UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2026-01-20 10:56:28 +0000 UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2026-01-20 10:56:28 +0000 UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2026-01-20 10:56:28 +0000 UTC,LastTransitionTime:2026-01-20 10:48:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.126.11,},NodeAddress{Type:Hostname,Address:crc,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1bd596843fb445da20eca66471ddf66,SystemUUID:a519b063-d122-47cf-ae3b-7548803df408,BootID:b3bdf423-1340-4a9d-8f31-b9e9c5c3e8af,KernelVersion:5.14.0-427.22.1.el9_4.x86_64,OSImage:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0,ContainerRuntimeVersion:cri-o://1.29.5-5.rhaos4.16.git7032128.el9,KubeletVersion:v1.29.5+29c95f3,KubeProxyVersion:v1.29.5+29c95f3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:138329cd9b6add8177f4d5df00da30f2b1bacc11b18c809a739fb2baf98aad91 registry.redhat.io/redhat/redhat-operator-index@sha256:c2d429a85d55f4a3fec8906f1f92341186f3211225e3ea20ef04b91f0adae73d registry.redhat.io/redhat/redhat-operator-index:v4.16],SizeBytes:4017254108,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:98affcb112cdd069bfed8c7b5f300a1252e5b67d78a89515108331d589de6390],SizeBytes:3571551444,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f],SizeBytes:2572133253,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6],SizeBytes:2121001615,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:10f7e60897961c51162a70cc2f119b5dcd1f7657d27e19a8655387e5b6285ae7 registry.redhat.io/redhat/community-operator-index@sha256:a83f6e8642a40645dfbe8d783cf00d0f441e20c107ab1da99511d3e93d2c4a47 registry.redhat.io/redhat/community-operator-index:v4.16],SizeBytes:1956709612,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:6baf614a7bf44076cbfcf4d04b52541e8eaaf096cfaeb88b1ccccfb8d8b9d50a],SizeBytes:1858356929,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:c23978d64c4a9bc9fb79ae5eac557412c101befe8f94129a9ad468318b2f69e5 registry.redhat.io/redhat/redhat-marketplace-index@sha256:c92d2950f4881269a460eec5f117f807eab79a88090f688a9fa4117a045560a7 registry.redhat.io/redhat/redhat-marketplace-index:v4.16],SizeBytes:1584436458,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:5b97fb6ab9eed07d2db3deae6f28723f6409b8953bfb0447eb322dfb34e62c26 registry.redhat.io/redhat/certified-operator-index@sha256:8ad07d4cc799bd6d9c4b193307abf12b8e5778053746817fb832961b3aa2152b registry.redhat.io/redhat/certified-operator-index:v4.16],SizeBytes:1553514218,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:c465a1b8318dfd33120b2692dd1f6aae7db4b6e69f50720cb1391d55fadec562],SizeBytes:1487797447,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:55a23334bbbe439e5a7e305ed720325ac4192d078b06fe8be277fe1eff62c533],SizeBytes:1458020021,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b],SizeBytes:1374511543,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009],SizeBytes:1346691049,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca],SizeBytes:1222078702,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f],SizeBytes:1116811194,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842],SizeBytes:1067242914,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9b
2026-01-20T10:56:53.274605783+00:00 stderr F b0ccf6512d],SizeBytes:993487271,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a],SizeBytes:874809222,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2],SizeBytes:829474731,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251],SizeBytes:826261505,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8],SizeBytes:823328808,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2],SizeBytes:775169417,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648],SizeBytes:685289316,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73],SizeBytes:677900529,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae],SizeBytes:654603911,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc],SizeBytes:596693555,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78],SizeBytes:568208801,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce],SizeBytes:562097717,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403],SizeBytes:541135334,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d],SizeBytes:539461335,},ContainerImage{Names:[registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8 registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9 registry.redhat.io/openshift4/ose-csi-external-provisioner:latest],SizeBytes:520763795,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3],SizeBytes:507363664,},ContainerImage{Names:[quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69],SizeBytes:503433479,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3],SizeBytes:503286020,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce],SizeBytes:502054492,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b],SizeBytes:501535327,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75],SizeBytes:501474997,},ContainerImage{Names:[quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f]
,SizeBytes:499981426,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72],SizeBytes:498615097,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791],SizeBytes:498403671,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611],SizeBytes:497554071,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f],SizeBytes:497168817,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa],SizeBytes:497128745,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc],SizeBytes:496236158,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d],SizeBytes:495929820,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15],SizeBytes:494198000,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730],SizeBytes:493495521,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208],SizeBytes:492229908,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8],SizeBytes:488729683,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99],SizeBytes:487322445,},Container
Image{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0],SizeBytes:484252300,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} 2026-01-20T10:56:53.274605783+00:00 stderr F I0120 10:56:53.274596 30089 egressqos.go:1011] Finished syncing EgressQoS node crc : 467.992µs 2026-01-20T10:56:53.274691115+00:00 stderr F I0120 10:56:53.274661 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6ad154d5-3275-4190-8d21-ff202885643c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.274716085+00:00 stderr F I0120 10:56:53.274687 30089 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6ad154d5-3275-4190-8d21-ff202885643c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.274989463+00:00 stderr F I0120 10:56:53.274955 30089 master_controller.go:86] Starting Admin Policy Based Route Controller 2026-01-20T10:56:53.274989463+00:00 stderr F I0120 10:56:53.274967 30089 external_controller.go:276] Starting Admin Policy Based Route Controller 2026-01-20T10:56:53.275013273+00:00 stderr F I0120 10:56:53.274985 30089 egressservice_zone_endpointslice.go:80] Ignoring updating cert-manager/cert-manager-cainjector for endpointslice cert-manager/cert-manager-cainjector-fdtx5 as it is not a known egress service 2026-01-20T10:56:53.275013273+00:00 stderr F I0120 10:56:53.274999 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-cluster-version/cluster-version-operator for endpointslice openshift-cluster-version/cluster-version-operator-qt7zf as it is not a known egress service 2026-01-20T10:56:53.275022844+00:00 stderr F I0120 10:56:53.275011 30089 egressservice_zone_node.go:110] Processing 
sync for Egress Service node crc 2026-01-20T10:56:53.275031304+00:00 stderr F I0120 10:56:53.275021 30089 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 12.85µs 2026-01-20T10:56:53.275042104+00:00 stderr F I0120 10:56:53.275035 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/webhook for endpointslice openshift-console-operator/webhook-b7j7h as it is not a known egress service 2026-01-20T10:56:53.275051024+00:00 stderr F I0120 10:56:53.275042 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/downloads for endpointslice openshift-console/downloads-zsr67 as it is not a known egress service 2026-01-20T10:56:53.275058845+00:00 stderr F I0120 10:56:53.275050 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-controller-manager/kube-controller-manager for endpointslice openshift-kube-controller-manager/kube-controller-manager-fcp2k as it is not a known egress service 2026-01-20T10:56:53.275088885+00:00 stderr F I0120 10:56:53.275071 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-scheduler/scheduler for endpointslice openshift-kube-scheduler/scheduler-4wbzh as it is not a known egress service 2026-01-20T10:56:53.275140697+00:00 stderr F I0120 10:56:53.275115 30089 egressservice_zone_endpointslice.go:80] Ignoring updating cert-manager/cert-manager-webhook for endpointslice cert-manager/cert-manager-webhook-7k8gk as it is not a known egress service 2026-01-20T10:56:53.275140697+00:00 stderr F I0120 10:56:53.275125 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-controller-manager-operator/metrics for endpointslice openshift-controller-manager-operator/metrics-psf8p as it is not a known egress service 2026-01-20T10:56:53.275140697+00:00 stderr F I0120 10:56:53.275131 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/packageserver-service for 
endpointslice openshift-operator-lifecycle-manager/packageserver-service-tlm8t as it is not a known egress service 2026-01-20T10:56:53.275154047+00:00 stderr F I0120 10:56:53.275137 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver/api for endpointslice openshift-apiserver/api-7hq6z as it is not a known egress service 2026-01-20T10:56:53.275154047+00:00 stderr F I0120 10:56:53.275145 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/control-plane-machine-set-operator for endpointslice openshift-machine-api/control-plane-machine-set-operator-nmjkn as it is not a known egress service 2026-01-20T10:56:53.275154047+00:00 stderr F I0120 10:56:53.275150 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/community-operators for endpointslice openshift-marketplace/community-operators-h826k as it is not a known egress service 2026-01-20T10:56:53.275168577+00:00 stderr F I0120 10:56:53.275157 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-ingress-canary/ingress-canary for endpointslice openshift-ingress-canary/ingress-canary-rhnd4 as it is not a known egress service 2026-01-20T10:56:53.275168577+00:00 stderr F I0120 10:56:53.275162 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-oauth-apiserver/api for endpointslice openshift-oauth-apiserver/api-2pj4d as it is not a known egress service 2026-01-20T10:56:53.275178048+00:00 stderr F I0120 10:56:53.275167 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-wg4kr as it is not a known egress service 2026-01-20T10:56:53.275178048+00:00 stderr F I0120 10:56:53.275172 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/catalog-operator-metrics for endpointslice openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 as it is not a known 
egress service 2026-01-20T10:56:53.275187868+00:00 stderr F I0120 10:56:53.275178 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-authentication-operator/metrics for endpointslice openshift-authentication-operator/metrics-dp499 as it is not a known egress service 2026-01-20T10:56:53.275187868+00:00 stderr F I0120 10:56:53.275184 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/marketplace-operator-metrics for endpointslice openshift-marketplace/marketplace-operator-metrics-fcwkk as it is not a known egress service 2026-01-20T10:56:53.275196998+00:00 stderr F I0120 10:56:53.275190 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-multus/multus-admission-controller for endpointslice openshift-multus/multus-admission-controller-s6h4d as it is not a known egress service 2026-01-20T10:56:53.275205958+00:00 stderr F I0120 10:56:53.275196 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-network-diagnostics/network-check-target for endpointslice openshift-network-diagnostics/network-check-target-kqkjk as it is not a known egress service 2026-01-20T10:56:53.275205958+00:00 stderr F I0120 10:56:53.275201 30089 egressservice_zone_endpointslice.go:80] Ignoring updating cert-manager/cert-manager for endpointslice cert-manager/cert-manager-lrf7j as it is not a known egress service 2026-01-20T10:56:53.275214079+00:00 stderr F I0120 10:56:53.275208 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver/check-endpoints for endpointslice openshift-apiserver/check-endpoints-sbfp5 as it is not a known egress service 2026-01-20T10:56:53.275221359+00:00 stderr F I0120 10:56:53.275213 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-rdlxc as it is not a known egress service 2026-01-20T10:56:53.275221359+00:00 stderr F I0120 10:56:53.275218 30089 
egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-cxk8j as it is not a known egress service 2026-01-20T10:56:53.275228709+00:00 stderr F I0120 10:56:53.275223 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-controller-manager-operator/metrics for endpointslice openshift-kube-controller-manager-operator/metrics-cz5rv as it is not a known egress service 2026-01-20T10:56:53.275237090+00:00 stderr F I0120 10:56:53.275229 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-daemon for endpointslice openshift-machine-config-operator/machine-config-daemon-2nvnz as it is not a known egress service 2026-01-20T10:56:53.275246241+00:00 stderr F I0120 10:56:53.275234 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-authentication/oauth-openshift for endpointslice openshift-authentication/oauth-openshift-6gdxk as it is not a known egress service 2026-01-20T10:56:53.275246241+00:00 stderr F I0120 10:56:53.275242 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-ingress/router-internal-default for endpointslice openshift-ingress/router-internal-default-29hv8 as it is not a known egress service 2026-01-20T10:56:53.275262821+00:00 stderr F I0120 10:56:53.275248 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator for endpointslice openshift-machine-api/machine-api-operator-2js9r as it is not a known egress service 2026-01-20T10:56:53.275262821+00:00 stderr F I0120 10:56:53.275253 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-controller for endpointslice openshift-machine-config-operator/machine-config-controller-7t8hc as it is not a known egress service 2026-01-20T10:56:53.275262821+00:00 stderr F I0120 10:56:53.275259 30089 
egressservice_zone_endpointslice.go:80] Ignoring updating openshift-controller-manager/controller-manager for endpointslice openshift-controller-manager/controller-manager-kxmft as it is not a known egress service 2026-01-20T10:56:53.275273411+00:00 stderr F I0120 10:56:53.275265 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-apiserver-operator/metrics for endpointslice openshift-kube-apiserver-operator/metrics-kbv55 as it is not a known egress service 2026-01-20T10:56:53.275273411+00:00 stderr F I0120 10:56:53.275270 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator-webhook for endpointslice openshift-machine-api/machine-api-operator-webhook-x4gjx as it is not a known egress service 2026-01-20T10:56:53.275282651+00:00 stderr F I0120 10:56:53.275276 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-p8xmw as it is not a known egress service 2026-01-20T10:56:53.275291202+00:00 stderr F I0120 10:56:53.275281 30089 default_network_controller.go:572] Completing all the Watchers took 148.863006ms 2026-01-20T10:56:53.275291202+00:00 stderr F I0120 10:56:53.275284 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-route-controller-manager/route-controller-manager for endpointslice openshift-route-controller-manager/route-controller-manager-64jvm as it is not a known egress service 2026-01-20T10:56:53.275298652+00:00 stderr F I0120 10:56:53.275293 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd-operator/metrics for endpointslice openshift-etcd-operator/metrics-z62zm as it is not a known egress service 2026-01-20T10:56:53.275306182+00:00 stderr F I0120 10:56:53.275297 30089 default_network_controller.go:576] Starting unidling controllers 2026-01-20T10:56:53.275306182+00:00 stderr F I0120 
10:56:53.275299 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-apiserver/apiserver for endpointslice openshift-kube-apiserver/apiserver-5mvtf as it is not a known egress service 2026-01-20T10:56:53.275315512+00:00 stderr F I0120 10:56:53.275305 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns/dns-default for endpointslice openshift-dns/dns-default-lctlx as it is not a known egress service 2026-01-20T10:56:53.275315512+00:00 stderr F I0120 10:56:53.275311 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-8wmzv as it is not a known egress service 2026-01-20T10:56:53.275324533+00:00 stderr F I0120 10:56:53.275316 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/cluster-autoscaler-operator for endpointslice openshift-machine-api/cluster-autoscaler-operator-r4g5l as it is not a known egress service 2026-01-20T10:56:53.275337863+00:00 stderr F I0120 10:56:53.275323 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/package-server-manager-metrics for endpointslice openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p as it is not a known egress service 2026-01-20T10:56:53.275337863+00:00 stderr F I0120 10:56:53.275329 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver-operator/metrics for endpointslice openshift-apiserver-operator/metrics-sgtfh as it is not a known egress service 2026-01-20T10:56:53.275337863+00:00 stderr F I0120 10:56:53.275334 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-image-registry/image-registry for endpointslice openshift-image-registry/image-registry-cfsrx as it is not a known egress service 2026-01-20T10:56:53.275374214+00:00 stderr F I0120 10:56:53.275346 30089 egressservice_zone_endpointslice.go:80] Ignoring updating 
openshift-machine-api/machine-api-controllers for endpointslice openshift-machine-api/machine-api-controllers-j9jjt as it is not a known egress service 2026-01-20T10:56:53.275374214+00:00 stderr F I0120 10:56:53.275346 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console/console-644bb77b49-5x5xk 2026-01-20T10:56:53.275374214+00:00 stderr F I0120 10:56:53.275357 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/certified-operators for endpointslice openshift-marketplace/certified-operators-bw9bv as it is not a known egress service 2026-01-20T10:56:53.275374214+00:00 stderr F I0120 10:56:53.275364 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2026-01-20T10:56:53.275387464+00:00 stderr F I0120 10:56:53.275374 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.275387464+00:00 stderr F I0120 10:56:53.275380 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/redhat-operators-2nxg8 2026-01-20T10:56:53.275387464+00:00 stderr F I0120 10:56:53.275382 30089 unidle.go:45] Registering OVN SB ControllerEvent handler 2026-01-20T10:56:53.275397465+00:00 stderr F I0120 10:56:53.275386 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.275397465+00:00 stderr F I0120 10:56:53.275393 30089 unidle.go:62] Populating Initial ContollerEvent events 2026-01-20T10:56:53.275437116+00:00 stderr F I0120 10:56:53.275413 30089 unidle.go:78] Setting up event handlers for services 2026-01-20T10:56:53.275569179+00:00 stderr F I0120 10:56:53.275365 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/olm-operator-metrics for 
endpointslice openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 as it is not a known egress service 2026-01-20T10:56:53.275569179+00:00 stderr F I0120 10:56:53.275393 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.275569179+00:00 stderr F I0120 10:56:53.275562 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-ingress-operator/metrics for endpointslice openshift-ingress-operator/metrics-cd48g as it is not a known egress service 2026-01-20T10:56:53.275582019+00:00 stderr F I0120 10:56:53.275573 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-scheduler-operator/metrics for endpointslice openshift-kube-scheduler-operator/metrics-zk8d6 as it is not a known egress service 2026-01-20T10:56:53.275591060+00:00 stderr F I0120 10:56:53.275581 30089 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/redhat-marketplace for endpointslice openshift-marketplace/redhat-marketplace-8k279 as it is not a known egress service 2026-01-20T10:56:53.275591060+00:00 stderr F I0120 10:56:53.275580 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2026-01-20T10:56:53.275605760+00:00 stderr F I0120 10:56:53.275592 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.275605760+00:00 stderr F I0120 10:56:53.275601 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.275615500+00:00 stderr F I0120 10:56:53.275609 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2026-01-20T10:56:53.275624641+00:00 stderr F I0120 10:56:53.275616 30089 
external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-10-crc 2026-01-20T10:56:53.275634161+00:00 stderr F I0120 10:56:53.275624 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/installer-8-crc 2026-01-20T10:56:53.275643151+00:00 stderr F I0120 10:56:53.275631 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.275643151+00:00 stderr F I0120 10:56:53.275638 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.275652401+00:00 stderr F I0120 10:56:53.275645 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.275661572+00:00 stderr F I0120 10:56:53.275654 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2026-01-20T10:56:53.275671212+00:00 stderr F I0120 10:56:53.275661 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.275680182+00:00 stderr F I0120 10:56:53.275667 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.275680182+00:00 stderr F I0120 10:56:53.275676 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.275689302+00:00 stderr F I0120 10:56:53.275682 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.275698833+00:00 stderr F I0120 10:56:53.275688 30089 external_controller_pod.go:21] APB 
queuing policies: map[] for pod: cert-manager/cert-manager-cainjector-676dd9bd64-mggnx 2026-01-20T10:56:53.275698833+00:00 stderr F I0120 10:56:53.275695 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.275707903+00:00 stderr F I0120 10:56:53.275701 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-11-crc 2026-01-20T10:56:53.275716213+00:00 stderr F I0120 10:56:53.275708 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2026-01-20T10:56:53.275716213+00:00 stderr F I0120 10:56:53.275324 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-image-registry 2026-01-20T10:56:53.275725273+00:00 stderr F I0120 10:56:53.275715 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.275733513+00:00 stderr F I0120 10:56:53.275723 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.275733513+00:00 stderr F I0120 10:56:53.275723 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kni-infra 2026-01-20T10:56:53.275746284+00:00 stderr F I0120 10:56:53.275731 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2026-01-20T10:56:53.275746284+00:00 stderr F I0120 10:56:53.275734 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-scheduler-operator 2026-01-20T10:56:53.275746284+00:00 stderr F I0120 10:56:53.275739 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: 
openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2026-01-20T10:56:53.275746284+00:00 stderr F I0120 10:56:53.275741 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-service-ca 2026-01-20T10:56:53.275778455+00:00 stderr F I0120 10:56:53.275766 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openstack 2026-01-20T10:56:53.275778455+00:00 stderr F I0120 10:56:53.275772 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-system 2026-01-20T10:56:53.275778455+00:00 stderr F I0120 10:56:53.275773 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-10-retry-1-crc 2026-01-20T10:56:53.275787965+00:00 stderr F I0120 10:56:53.275776 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver-operator 2026-01-20T10:56:53.275787965+00:00 stderr F I0120 10:56:53.275783 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.275796895+00:00 stderr F I0120 10:56:53.275784 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-host-network 2026-01-20T10:56:53.275796895+00:00 stderr F I0120 10:56:53.275791 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.275805875+00:00 stderr F I0120 10:56:53.275793 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cloud-platform-infra 2026-01-20T10:56:53.275805875+00:00 stderr F I0120 10:56:53.275800 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: hostpath-provisioner/csi-hostpathplugin-hvm8g 2026-01-20T10:56:53.275815366+00:00 stderr F I0120 10:56:53.275802 30089 external_controller_namespace.go:16] APB 
queuing policies: map[] for namespace: openshift-kube-scheduler 2026-01-20T10:56:53.275815366+00:00 stderr F I0120 10:56:53.275808 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2026-01-20T10:56:53.275815366+00:00 stderr F I0120 10:56:53.275809 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-service-ca-operator 2026-01-20T10:56:53.275824696+00:00 stderr F I0120 10:56:53.275817 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2026-01-20T10:56:53.275824696+00:00 stderr F I0120 10:56:53.275818 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-operators 2026-01-20T10:56:53.275833446+00:00 stderr F I0120 10:56:53.275826 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-user-workload-monitoring 2026-01-20T10:56:53.275833446+00:00 stderr F I0120 10:56:53.275826 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-node-identity/network-node-identity-7xghp 2026-01-20T10:56:53.275843636+00:00 stderr F I0120 10:56:53.275831 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-authentication 2026-01-20T10:56:53.275843636+00:00 stderr F I0120 10:56:53.275837 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2026-01-20T10:56:53.275850927+00:00 stderr F I0120 10:56:53.275838 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-machine-config-operator 2026-01-20T10:56:53.275850927+00:00 stderr F I0120 10:56:53.275845 30089 external_controller_pod.go:21] APB queuing policies: map[] 
for pod: openshift-image-registry/image-registry-75b7bb6564-ln84v 2026-01-20T10:56:53.275858197+00:00 stderr F I0120 10:56:53.275847 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-oauth-apiserver 2026-01-20T10:56:53.275858197+00:00 stderr F I0120 10:56:53.275854 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns/dns-default-gbw49 2026-01-20T10:56:53.275865037+00:00 stderr F I0120 10:56:53.275855 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: cert-manager-operator 2026-01-20T10:56:53.275871707+00:00 stderr F I0120 10:56:53.275862 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-11-crc 2026-01-20T10:56:53.275871707+00:00 stderr F I0120 10:56:53.275864 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-node-identity 2026-01-20T10:56:53.275878737+00:00 stderr F I0120 10:56:53.275871 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/redhat-marketplace-2mx7j 2026-01-20T10:56:53.275878737+00:00 stderr F I0120 10:56:53.275873 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-nutanix-infra 2026-01-20T10:56:53.275885797+00:00 stderr F I0120 10:56:53.275879 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2026-01-20T10:56:53.275885797+00:00 stderr F I0120 10:56:53.275881 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress 2026-01-20T10:56:53.275892968+00:00 stderr F I0120 10:56:53.275887 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2026-01-20T10:56:53.275900028+00:00 stderr F I0120 10:56:53.275888 30089 
external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-storage-version-migrator-operator 2026-01-20T10:56:53.275900028+00:00 stderr F I0120 10:56:53.275895 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2026-01-20T10:56:53.275907328+00:00 stderr F I0120 10:56:53.275899 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: hostpath-provisioner 2026-01-20T10:56:53.275907328+00:00 stderr F I0120 10:56:53.275903 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m 2026-01-20T10:56:53.275914058+00:00 stderr F I0120 10:56:53.275905 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config 2026-01-20T10:56:53.275920748+00:00 stderr F I0120 10:56:53.275911 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2026-01-20T10:56:53.275920748+00:00 stderr F I0120 10:56:53.275913 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-infra 2026-01-20T10:56:53.275927809+00:00 stderr F I0120 10:56:53.275919 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2026-01-20T10:56:53.275927809+00:00 stderr F I0120 10:56:53.275922 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-node 2026-01-20T10:56:53.275937029+00:00 stderr F I0120 10:56:53.275928 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-10-crc 2026-01-20T10:56:53.275937029+00:00 stderr F I0120 10:56:53.275929 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: default 
2026-01-20T10:56:53.275944079+00:00 stderr F I0120 10:56:53.275936 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2026-01-20T10:56:53.275944079+00:00 stderr F I0120 10:56:53.275938 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-version 2026-01-20T10:56:53.275951069+00:00 stderr F I0120 10:56:53.275944 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-service-ca/service-ca-666f99b6f-kk8kg 2026-01-20T10:56:53.275951069+00:00 stderr F I0120 10:56:53.275945 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-multus 2026-01-20T10:56:53.275958159+00:00 stderr F I0120 10:56:53.275952 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-9-crc 2026-01-20T10:56:53.275965200+00:00 stderr F I0120 10:56:53.275955 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config-operator 2026-01-20T10:56:53.275965200+00:00 stderr F I0120 10:56:53.275961 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2026-01-20T10:56:53.275971950+00:00 stderr F I0120 10:56:53.275962 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd 2026-01-20T10:56:53.275978620+00:00 stderr F I0120 10:56:53.275969 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-additional-cni-plugins-bzj2p 2026-01-20T10:56:53.275978620+00:00 stderr F I0120 10:56:53.275970 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: cert-manager 2026-01-20T10:56:53.275985630+00:00 stderr F I0120 10:56:53.275977 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: 
openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2026-01-20T10:56:53.275985630+00:00 stderr F I0120 10:56:53.275978 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-samples-operator 2026-01-20T10:56:53.275992670+00:00 stderr F I0120 10:56:53.275985 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress/router-default-5c9bf7bc58-6jctv 2026-01-20T10:56:53.275992670+00:00 stderr F I0120 10:56:53.275987 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-storage-operator 2026-01-20T10:56:53.275999790+00:00 stderr F I0120 10:56:53.275994 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-8-crc 2026-01-20T10:56:53.276006821+00:00 stderr F I0120 10:56:53.275995 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openstack-operators 2026-01-20T10:56:53.276006821+00:00 stderr F I0120 10:56:53.276002 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/openshift-kube-scheduler-crc 2026-01-20T10:56:53.276014151+00:00 stderr F I0120 10:56:53.276004 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver 2026-01-20T10:56:53.276020761+00:00 stderr F I0120 10:56:53.276010 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/marketplace-operator-8b455464d-nc8zc 2026-01-20T10:56:53.276020761+00:00 stderr F I0120 10:56:53.276012 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-controller-manager 2026-01-20T10:56:53.276029951+00:00 stderr F I0120 10:56:53.276019 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2026-01-20T10:56:53.276029951+00:00 
stderr F I0120 10:56:53.276021 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-vsphere-infra 2026-01-20T10:56:53.276036871+00:00 stderr F I0120 10:56:53.276028 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-12-crc 2026-01-20T10:56:53.276036871+00:00 stderr F I0120 10:56:53.276030 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-dns-operator 2026-01-20T10:56:53.276043872+00:00 stderr F I0120 10:56:53.276036 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2026-01-20T10:56:53.276043872+00:00 stderr F I0120 10:56:53.276037 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-storage-version-migrator 2026-01-20T10:56:53.276050902+00:00 stderr F I0120 10:56:53.276044 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-q88th 2026-01-20T10:56:53.276050902+00:00 stderr F I0120 10:56:53.276046 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-marketplace 2026-01-20T10:56:53.276058572+00:00 stderr F I0120 10:56:53.276052 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/network-metrics-daemon-qdfr4 2026-01-20T10:56:53.276107513+00:00 stderr F I0120 10:56:53.276079 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2026-01-20T10:56:53.276107513+00:00 stderr F I0120 10:56:53.276098 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-daemon-zpnhg 2026-01-20T10:56:53.276119754+00:00 stderr F I0120 10:56:53.276105 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: 
openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2026-01-20T10:56:53.276119754+00:00 stderr F I0120 10:56:53.276112 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2026-01-20T10:56:53.276127014+00:00 stderr F I0120 10:56:53.276118 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/installer-7-crc 2026-01-20T10:56:53.276134114+00:00 stderr F I0120 10:56:53.276121 30089 network_attach_def_controller.go:134] Starting network-controller-manager NAD controller 2026-01-20T10:56:53.276156025+00:00 stderr F I0120 10:56:53.276126 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2026-01-20T10:56:53.276164795+00:00 stderr F I0120 10:56:53.276159 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2026-01-20T10:56:53.276171395+00:00 stderr F I0120 10:56:53.276165 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-operator/iptables-alerter-wwpnd 2026-01-20T10:56:53.276178405+00:00 stderr F I0120 10:56:53.276170 30089 shared_informer.go:311] Waiting for caches to sync for network-controller-manager 2026-01-20T10:56:53.276203236+00:00 stderr F I0120 10:56:53.276170 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2026-01-20T10:56:53.276230067+00:00 stderr F I0120 10:56:53.276209 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2026-01-20T10:56:53.276230067+00:00 stderr F I0120 10:56:53.276222 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: 
openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2026-01-20T10:56:53.276238307+00:00 stderr F I0120 10:56:53.276228 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console/downloads-65476884b9-9wcvx 2026-01-20T10:56:53.276238307+00:00 stderr F I0120 10:56:53.276234 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2026-01-20T10:56:53.276245627+00:00 stderr F I0120 10:56:53.276239 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns-operator/dns-operator-75f687757b-nz2xb 2026-01-20T10:56:53.276252587+00:00 stderr F I0120 10:56:53.276245 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns/node-resolver-dn27q 2026-01-20T10:56:53.276261137+00:00 stderr F I0120 10:56:53.276255 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-etcd/etcd-crc 2026-01-20T10:56:53.276287078+00:00 stderr F I0120 10:56:53.276265 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: cert-manager/cert-manager-758df9885c-cq6zm 2026-01-20T10:56:53.276287078+00:00 stderr F I0120 10:56:53.276283 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-9-crc 2026-01-20T10:56:53.276315849+00:00 stderr F I0120 10:56:53.276292 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2026-01-20T10:56:53.276315849+00:00 stderr F I0120 10:56:53.276053 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-public 2026-01-20T10:56:53.276315849+00:00 stderr F I0120 10:56:53.276306 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 
2026-01-20T10:56:53.276326119+00:00 stderr F I0120 10:56:53.276320 30089 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2026-01-20T10:56:53.276355350+00:00 stderr F I0120 10:56:53.276332 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-monitoring 2026-01-20T10:56:53.276363170+00:00 stderr F I0120 10:56:53.276353 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-controller-manager-operator 2026-01-20T10:56:53.276370420+00:00 stderr F I0120 10:56:53.276362 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-machine-api 2026-01-20T10:56:53.276377511+00:00 stderr F I0120 10:56:53.276369 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-diagnostics 2026-01-20T10:56:53.276384601+00:00 stderr F I0120 10:56:53.276376 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-operator-lifecycle-manager 2026-01-20T10:56:53.276391691+00:00 stderr F I0120 10:56:53.276382 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console-operator 2026-01-20T10:56:53.276398501+00:00 stderr F I0120 10:56:53.276389 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console-user-settings 2026-01-20T10:56:53.276398501+00:00 stderr F I0120 10:56:53.276389 30089 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:56:53.276409531+00:00 stderr F I0120 10:56:53.276397 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-controller-manager 2026-01-20T10:56:53.276409531+00:00 stderr F 
I0120 10:56:53.276399 30089 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:56:53.276409531+00:00 stderr F I0120 10:56:53.276405 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-machine-approver 2026-01-20T10:56:53.276418952+00:00 stderr F I0120 10:56:53.276413 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console 2026-01-20T10:56:53.276426042+00:00 stderr F I0120 10:56:53.276419 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd-operator 2026-01-20T10:56:53.276433102+00:00 stderr F I0120 10:56:53.276425 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress-canary 2026-01-20T10:56:53.276440172+00:00 stderr F I0120 10:56:53.276432 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-operator 2026-01-20T10:56:53.276447792+00:00 stderr F I0120 10:56:53.276438 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-node-lease 2026-01-20T10:56:53.276455063+00:00 stderr F I0120 10:56:53.276444 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-authentication-operator 2026-01-20T10:56:53.276455063+00:00 stderr F I0120 10:56:53.276451 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cloud-network-config-controller 2026-01-20T10:56:53.276464143+00:00 stderr F I0120 10:56:53.276458 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ovirt-infra 2026-01-20T10:56:53.276471203+00:00 stderr F I0120 10:56:53.276464 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: 
openshift 2026-01-20T10:56:53.276478263+00:00 stderr F I0120 10:56:53.276470 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config-managed 2026-01-20T10:56:53.276485313+00:00 stderr F I0120 10:56:53.276476 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-openstack-infra 2026-01-20T10:56:53.276492954+00:00 stderr F I0120 10:56:53.276483 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-route-controller-manager 2026-01-20T10:56:53.276492954+00:00 stderr F I0120 10:56:53.276489 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress-operator 2026-01-20T10:56:53.276501854+00:00 stderr F I0120 10:56:53.276496 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-apiserver-operator 2026-01-20T10:56:53.276508954+00:00 stderr F I0120 10:56:53.276503 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ovn-kubernetes 2026-01-20T10:56:53.276516034+00:00 stderr F I0120 10:56:53.276509 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-controller-manager-operator 2026-01-20T10:56:53.276523134+00:00 stderr F I0120 10:56:53.276515 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-dns 2026-01-20T10:56:53.276530255+00:00 stderr F I0120 10:56:53.276522 30089 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-apiserver 2026-01-20T10:56:53.278143388+00:00 stderr F I0120 10:56:53.278119 30089 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:56:53.279286939+00:00 stderr F I0120 10:56:53.279165 30089 iptables.go:121] 
Adding rule in table: filter, chain: FORWARD with args: "-i ovn-k8s-mp0 -j ACCEPT" for protocol: 0 2026-01-20T10:56:53.282044823+00:00 stderr F I0120 10:56:53.282010 30089 iptables.go:108] Creating table: filter chain: INPUT 2026-01-20T10:56:53.283850222+00:00 stderr F I0120 10:56:53.283818 30089 iptables.go:110] Chain: "INPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N INPUT --wait]: exit status 1: iptables: Chain already exists. 2026-01-20T10:56:53.286328278+00:00 stderr F I0120 10:56:53.286300 30089 iptables.go:121] Adding rule in table: filter, chain: INPUT with args: "-i ovn-k8s-mp0 -m comment --comment from OVN to localhost -j ACCEPT" for protocol: 0 2026-01-20T10:56:53.288455415+00:00 stderr F I0120 10:56:53.288426 30089 iptables.go:108] Creating table: nat chain: POSTROUTING 2026-01-20T10:56:53.290243914+00:00 stderr F I0120 10:56:53.290214 30089 iptables.go:110] Chain: "POSTROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N POSTROUTING --wait]: exit status 1: iptables: Chain already exists. 
2026-01-20T10:56:53.307583130+00:00 stderr F I0120 10:56:53.302953 30089 iptables.go:121] Adding rule in table: nat, chain: POSTROUTING with args: "-s 169.254.169.1 -j MASQUERADE" for protocol: 0 2026-01-20T10:56:53.308006201+00:00 stderr F I0120 10:56:53.307975 30089 iptables.go:121] Adding rule in table: nat, chain: POSTROUTING with args: "-s 10.217.0.0/23 -j MASQUERADE" for protocol: 0 2026-01-20T10:56:53.310027656+00:00 stderr F I0120 10:56:53.309988 30089 ovs.go:159] Exec(24): /usr/bin/ovs-vsctl --timeout=15 port-to-br br-ex 2026-01-20T10:56:53.317640831+00:00 stderr F I0120 10:56:53.317614 30089 ovs.go:162] Exec(24): stdout: "" 2026-01-20T10:56:53.317640831+00:00 stderr F I0120 10:56:53.317629 30089 ovs.go:163] Exec(24): stderr: "ovs-vsctl: no port named br-ex\n" 2026-01-20T10:56:53.317640831+00:00 stderr F I0120 10:56:53.317635 30089 ovs.go:165] Exec(24): err: exit status 1 2026-01-20T10:56:53.317659261+00:00 stderr F I0120 10:56:53.317645 30089 ovs.go:159] Exec(25): /usr/bin/ovs-vsctl --timeout=15 br-exists br-ex 2026-01-20T10:56:53.320881648+00:00 stderr F I0120 10:56:53.320852 30089 shared_informer.go:318] Caches are synced for node-tracker-controller 2026-01-20T10:56:53.320881648+00:00 stderr F I0120 10:56:53.320870 30089 services_controller.go:184] Setting up event handlers for services 2026-01-20T10:56:53.320978691+00:00 stderr F I0120 10:56:53.320956 30089 services_controller.go:551] Adding service openshift-dns-operator/metrics 2026-01-20T10:56:53.320978691+00:00 stderr F I0120 10:56:53.320961 30089 services_controller.go:194] Setting up event handlers for endpoint slices 2026-01-20T10:56:53.320987091+00:00 stderr F I0120 10:56:53.320979 30089 services_controller.go:551] Adding service openshift-image-registry/image-registry 2026-01-20T10:56:53.321004551+00:00 stderr F I0120 10:56:53.320992 30089 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-daemon 2026-01-20T10:56:53.321011431+00:00 stderr F I0120 
10:56:53.321006 30089 services_controller.go:551] Adding service cert-manager/cert-manager-webhook 2026-01-20T10:56:53.321045672+00:00 stderr F I0120 10:56:53.321028 30089 services_controller.go:551] Adding service openshift-marketplace/redhat-operators 2026-01-20T10:56:53.321045672+00:00 stderr F I0120 10:56:53.321041 30089 services_controller.go:551] Adding service openshift-kube-controller-manager/kube-controller-manager 2026-01-20T10:56:53.321053183+00:00 stderr F I0120 10:56:53.321048 30089 services_controller.go:551] Adding service openshift-kube-scheduler/scheduler 2026-01-20T10:56:53.321080433+00:00 stderr F I0120 10:56:53.321076 30089 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator 2026-01-20T10:56:53.321122014+00:00 stderr F I0120 10:56:53.321109 30089 services_controller.go:204] Waiting for service and endpoint handlers to sync 2026-01-20T10:56:53.321129615+00:00 stderr F I0120 10:56:53.321113 30089 services_controller.go:551] Adding service openshift-marketplace/certified-operators 2026-01-20T10:56:53.321158905+00:00 stderr F I0120 10:56:53.321122 30089 shared_informer.go:311] Waiting for caches to sync for ovn-lb-controller 2026-01-20T10:56:53.321167276+00:00 stderr F I0120 10:56:53.321147 30089 services_controller.go:551] Adding service openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2026-01-20T10:56:53.321278329+00:00 stderr F I0120 10:56:53.321252 30089 services_controller.go:551] Adding service openshift-controller-manager/controller-manager 2026-01-20T10:56:53.321278329+00:00 stderr F I0120 10:56:53.321270 30089 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-source 2026-01-20T10:56:53.321305119+00:00 stderr F I0120 10:56:53.321284 30089 services_controller.go:551] Adding service cert-manager/cert-manager 2026-01-20T10:56:53.321305119+00:00 stderr F I0120 10:56:53.321299 30089 services_controller.go:551] Adding service 
openshift-cluster-machine-approver/machine-approver 2026-01-20T10:56:53.321406452+00:00 stderr F I0120 10:56:53.321371 30089 services_controller.go:551] Adding service openshift-ingress-canary/ingress-canary 2026-01-20T10:56:53.321406452+00:00 stderr F I0120 10:56:53.321398 30089 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-controller 2026-01-20T10:56:53.321432443+00:00 stderr F I0120 10:56:53.321417 30089 services_controller.go:551] Adding service openshift-authentication/oauth-openshift 2026-01-20T10:56:53.321432443+00:00 stderr F I0120 10:56:53.321428 30089 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-machine-webhook 2026-01-20T10:56:53.321440083+00:00 stderr F I0120 10:56:53.321435 30089 services_controller.go:551] Adding service openshift-service-ca-operator/metrics 2026-01-20T10:56:53.321447243+00:00 stderr F I0120 10:56:53.321440 30089 services_controller.go:551] Adding service openshift-kube-scheduler-operator/metrics 2026-01-20T10:56:53.321454393+00:00 stderr F I0120 10:56:53.321446 30089 services_controller.go:551] Adding service openshift-monitoring/cluster-monitoring-operator 2026-01-20T10:56:53.321454393+00:00 stderr F I0120 10:56:53.321450 30089 services_controller.go:551] Adding service openshift-multus/multus-admission-controller 2026-01-20T10:56:53.321461863+00:00 stderr F I0120 10:56:53.321456 30089 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/olm-operator-metrics 2026-01-20T10:56:53.321468914+00:00 stderr F I0120 10:56:53.321461 30089 services_controller.go:551] Adding service openshift-route-controller-manager/route-controller-manager 2026-01-20T10:56:53.321468914+00:00 stderr F I0120 10:56:53.321466 30089 services_controller.go:551] Adding service openshift-cluster-samples-operator/metrics 2026-01-20T10:56:53.321476224+00:00 stderr F I0120 10:56:53.321470 30089 services_controller.go:551] Adding service 
openshift-controller-manager-operator/metrics 2026-01-20T10:56:53.321499284+00:00 stderr F I0120 10:56:53.321486 30089 services_controller.go:551] Adding service openshift-kube-apiserver-operator/metrics 2026-01-20T10:56:53.321514715+00:00 stderr F I0120 10:56:53.321504 30089 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/catalog-operator-metrics 2026-01-20T10:56:53.321521335+00:00 stderr F I0120 10:56:53.321513 30089 services_controller.go:551] Adding service openshift-console-operator/webhook 2026-01-20T10:56:53.321529485+00:00 stderr F I0120 10:56:53.321524 30089 services_controller.go:551] Adding service openshift-console/downloads 2026-01-20T10:56:53.321567916+00:00 stderr F I0120 10:56:53.321531 30089 services_controller.go:551] Adding service openshift-etcd-operator/metrics 2026-01-20T10:56:53.321567916+00:00 stderr F I0120 10:56:53.321541 30089 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-webhook 2026-01-20T10:56:53.321567916+00:00 stderr F I0120 10:56:53.321548 30089 services_controller.go:551] Adding service openshift-config-operator/metrics 2026-01-20T10:56:53.321567916+00:00 stderr F I0120 10:56:53.321558 30089 services_controller.go:551] Adding service openshift-kube-storage-version-migrator-operator/metrics 2026-01-20T10:56:53.321567916+00:00 stderr F I0120 10:56:53.321564 30089 services_controller.go:551] Adding service openshift-kube-controller-manager-operator/metrics 2026-01-20T10:56:53.321579037+00:00 stderr F I0120 10:56:53.321574 30089 services_controller.go:551] Adding service default/kubernetes 2026-01-20T10:56:53.321585677+00:00 stderr F I0120 10:56:53.321580 30089 services_controller.go:551] Adding service openshift-cluster-version/cluster-version-operator 2026-01-20T10:56:53.321592267+00:00 stderr F I0120 10:56:53.321585 30089 services_controller.go:551] Adding service openshift-kube-apiserver/apiserver 2026-01-20T10:56:53.321600427+00:00 stderr F I0120 
10:56:53.321596 30089 services_controller.go:551] Adding service openshift-machine-api/cluster-autoscaler-operator 2026-01-20T10:56:53.321607027+00:00 stderr F I0120 10:56:53.321601 30089 services_controller.go:551] Adding service cert-manager/cert-manager-cainjector 2026-01-20T10:56:53.321613607+00:00 stderr F I0120 10:56:53.321606 30089 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/packageserver-service 2026-01-20T10:56:53.321613607+00:00 stderr F I0120 10:56:53.321611 30089 services_controller.go:551] Adding service openshift-ovn-kubernetes/ovn-kubernetes-node 2026-01-20T10:56:53.321622048+00:00 stderr F I0120 10:56:53.321617 30089 services_controller.go:551] Adding service openshift-console-operator/metrics 2026-01-20T10:56:53.321628638+00:00 stderr F I0120 10:56:53.321621 30089 services_controller.go:551] Adding service openshift-etcd/etcd 2026-01-20T10:56:53.321628638+00:00 stderr F I0120 10:56:53.321626 30089 services_controller.go:551] Adding service openshift-image-registry/image-registry-operator 2026-01-20T10:56:53.321637028+00:00 stderr F I0120 10:56:53.321632 30089 services_controller.go:551] Adding service openshift-ingress/router-internal-default 2026-01-20T10:56:53.321643658+00:00 stderr F I0120 10:56:53.321638 30089 services_controller.go:551] Adding service openshift-machine-api/control-plane-machine-set-operator 2026-01-20T10:56:53.321650248+00:00 stderr F I0120 10:56:53.321642 30089 services_controller.go:551] Adding service openshift-oauth-apiserver/api 2026-01-20T10:56:53.321650248+00:00 stderr F I0120 10:56:53.321647 30089 services_controller.go:551] Adding service openshift-authentication-operator/metrics 2026-01-20T10:56:53.321690270+00:00 stderr F I0120 10:56:53.321658 30089 services_controller.go:551] Adding service openshift-apiserver-operator/metrics 2026-01-20T10:56:53.321690270+00:00 stderr F I0120 10:56:53.321666 30089 services_controller.go:551] Adding service openshift-apiserver/api 
2026-01-20T10:56:53.321690270+00:00 stderr F I0120 10:56:53.321675 30089 services_controller.go:551] Adding service openshift-dns/dns-default 2026-01-20T10:56:53.321690270+00:00 stderr F I0120 10:56:53.321681 30089 services_controller.go:551] Adding service openshift-machine-api/machine-api-controllers 2026-01-20T10:56:53.321690270+00:00 stderr F I0120 10:56:53.321685 30089 services_controller.go:551] Adding service openshift-marketplace/marketplace-operator-metrics 2026-01-20T10:56:53.321699160+00:00 stderr F I0120 10:56:53.321691 30089 services_controller.go:551] Adding service openshift-marketplace/redhat-marketplace 2026-01-20T10:56:53.321699160+00:00 stderr F I0120 10:56:53.321696 30089 services_controller.go:551] Adding service default/openshift 2026-01-20T10:56:53.321718080+00:00 stderr F I0120 10:56:53.321710 30089 services_controller.go:551] Adding service openshift-marketplace/community-operators 2026-01-20T10:56:53.321718080+00:00 stderr F I0120 10:56:53.321716 30089 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-target 2026-01-20T10:56:53.321725000+00:00 stderr F I0120 10:56:53.321721 30089 services_controller.go:551] Adding service openshift-network-operator/metrics 2026-01-20T10:56:53.321731581+00:00 stderr F I0120 10:56:53.321726 30089 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/package-server-manager-metrics 2026-01-20T10:56:53.321738161+00:00 stderr F I0120 10:56:53.321732 30089 services_controller.go:551] Adding service openshift-apiserver/check-endpoints 2026-01-20T10:56:53.321744751+00:00 stderr F I0120 10:56:53.321737 30089 services_controller.go:551] Adding service openshift-ingress-operator/metrics 2026-01-20T10:56:53.321744751+00:00 stderr F I0120 10:56:53.321741 30089 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-operator 2026-01-20T10:56:53.321751621+00:00 stderr F I0120 10:56:53.321746 30089 services_controller.go:551] 
Adding service openshift-multus/network-metrics-service 2026-01-20T10:56:53.321758181+00:00 stderr F I0120 10:56:53.321752 30089 services_controller.go:551] Adding service openshift-console/console 2026-01-20T10:56:53.324767332+00:00 stderr F I0120 10:56:53.324732 30089 ovs.go:162] Exec(25): stdout: "" 2026-01-20T10:56:53.324767332+00:00 stderr F I0120 10:56:53.324743 30089 ovs.go:163] Exec(25): stderr: "" 2026-01-20T10:56:53.324767332+00:00 stderr F I0120 10:56:53.324752 30089 ovs.go:159] Exec(26): /usr/bin/ovs-vsctl --timeout=15 list-ports br-ex 2026-01-20T10:56:53.331242826+00:00 stderr F I0120 10:56:53.331202 30089 ovs.go:162] Exec(26): stdout: "ens3\npatch-br-ex_crc-to-br-int\n" 2026-01-20T10:56:53.331242826+00:00 stderr F I0120 10:56:53.331219 30089 ovs.go:163] Exec(26): stderr: "" 2026-01-20T10:56:53.331242826+00:00 stderr F I0120 10:56:53.331229 30089 ovs.go:159] Exec(27): /usr/bin/ovs-vsctl --timeout=15 get Port ens3 Interfaces 2026-01-20T10:56:53.337219608+00:00 stderr F I0120 10:56:53.337193 30089 ovs.go:162] Exec(27): stdout: "[8e15ed8f-6469-48a0-a50f-2c970b3114f9]\n" 2026-01-20T10:56:53.337219608+00:00 stderr F I0120 10:56:53.337208 30089 ovs.go:163] Exec(27): stderr: "" 2026-01-20T10:56:53.337236728+00:00 stderr F I0120 10:56:53.337219 30089 ovs.go:159] Exec(28): /usr/bin/ovs-vsctl --timeout=15 get Port patch-br-ex_crc-to-br-int Interfaces 2026-01-20T10:56:53.343270800+00:00 stderr F I0120 10:56:53.343235 30089 ovs.go:162] Exec(28): stdout: "[da1397f3-f073-4f8c-ace3-7b1d0235f35d]\n" 2026-01-20T10:56:53.343270800+00:00 stderr F I0120 10:56:53.343251 30089 ovs.go:163] Exec(28): stderr: "" 2026-01-20T10:56:53.343270800+00:00 stderr F I0120 10:56:53.343259 30089 ovs.go:159] Exec(29): /usr/bin/ovs-vsctl --timeout=15 get Interface 8e15ed8f-6469-48a0-a50f-2c970b3114f9 Type 2026-01-20T10:56:53.349978321+00:00 stderr F I0120 10:56:53.349919 30089 ovs.go:162] Exec(29): stdout: "system\n" 2026-01-20T10:56:53.349978321+00:00 stderr F I0120 10:56:53.349934 30089 
ovs.go:163] Exec(29): stderr: "" 2026-01-20T10:56:53.349978321+00:00 stderr F I0120 10:56:53.349943 30089 ovs.go:159] Exec(30): /usr/bin/ovs-vsctl --timeout=15 get Interface da1397f3-f073-4f8c-ace3-7b1d0235f35d Type 2026-01-20T10:56:53.355529770+00:00 stderr F I0120 10:56:53.355489 30089 ovs.go:162] Exec(30): stdout: "patch\n" 2026-01-20T10:56:53.355529770+00:00 stderr F I0120 10:56:53.355503 30089 ovs.go:163] Exec(30): stderr: "" 2026-01-20T10:56:53.355529770+00:00 stderr F I0120 10:56:53.355512 30089 ovs.go:159] Exec(31): /usr/bin/ovs-vsctl --timeout=15 get interface ens3 ofport 2026-01-20T10:56:53.360778231+00:00 stderr F I0120 10:56:53.360745 30089 ovs.go:162] Exec(31): stdout: "1\n" 2026-01-20T10:56:53.360778231+00:00 stderr F I0120 10:56:53.360760 30089 ovs.go:163] Exec(31): stderr: "" 2026-01-20T10:56:53.360778231+00:00 stderr F I0120 10:56:53.360768 30089 ovs.go:159] Exec(32): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface br-ex mac_in_use 2026-01-20T10:56:53.367184853+00:00 stderr F I0120 10:56:53.367147 30089 ovs.go:162] Exec(32): stdout: "\"fa:16:3e:0d:e7:11\"\n" 2026-01-20T10:56:53.367184853+00:00 stderr F I0120 10:56:53.367160 30089 ovs.go:163] Exec(32): stderr: "" 2026-01-20T10:56:53.367184853+00:00 stderr F I0120 10:56:53.367169 30089 ovs.go:159] Exec(33): /usr/sbin/sysctl -w net.ipv4.conf.br-ex.forwarding=1 2026-01-20T10:56:53.368682004+00:00 stderr F I0120 10:56:53.368644 30089 ovs.go:162] Exec(33): stdout: "net.ipv4.conf.br-ex.forwarding = 1\n" 2026-01-20T10:56:53.368682004+00:00 stderr F I0120 10:56:53.368657 30089 ovs.go:163] Exec(33): stderr: "" 2026-01-20T10:56:53.368682004+00:00 stderr F I0120 10:56:53.368666 30089 ovs.go:159] Exec(34): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . 
external_ids:ovn-bridge-mappings 2026-01-20T10:56:53.374015178+00:00 stderr F I0120 10:56:53.373986 30089 ovs.go:162] Exec(34): stdout: "\"physnet:br-ex\"\n" 2026-01-20T10:56:53.374015178+00:00 stderr F I0120 10:56:53.373999 30089 ovs.go:163] Exec(34): stderr: "" 2026-01-20T10:56:53.374015178+00:00 stderr F I0120 10:56:53.374008 30089 ovs.go:159] Exec(35): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-bridge-mappings=physnet:br-ex 2026-01-20T10:56:53.377225394+00:00 stderr F I0120 10:56:53.377189 30089 shared_informer.go:318] Caches are synced for network-controller-manager 2026-01-20T10:56:53.377225394+00:00 stderr F I0120 10:56:53.377207 30089 network_attach_def_controller.go:182] Starting repairing loop for network-controller-manager 2026-01-20T10:56:53.377338417+00:00 stderr F I0120 10:56:53.377305 30089 network_attach_def_controller.go:184] Finished repairing loop for network-controller-manager: 99.163µs err: 2026-01-20T10:56:53.377338417+00:00 stderr F I0120 10:56:53.377321 30089 network_attach_def_controller.go:153] Starting workers for network-controller-manager NAD controller 2026-01-20T10:56:53.379382482+00:00 stderr F I0120 10:56:53.379345 30089 ovs.go:162] Exec(35): stdout: "" 2026-01-20T10:56:53.379382482+00:00 stderr F I0120 10:56:53.379356 30089 ovs.go:163] Exec(35): stderr: "" 2026-01-20T10:56:53.379382482+00:00 stderr F I0120 10:56:53.379375 30089 ovs.go:159] Exec(36): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . 
external_ids:system-id 2026-01-20T10:56:53.384296404+00:00 stderr F I0120 10:56:53.384265 30089 ovs.go:162] Exec(36): stdout: "\"017e52b0-97d3-4d7d-aae4-9b216aa025aa\"\n" 2026-01-20T10:56:53.384296404+00:00 stderr F I0120 10:56:53.384280 30089 ovs.go:163] Exec(36): stderr: "" 2026-01-20T10:56:53.384296404+00:00 stderr F I0120 10:56:53.384290 30089 ovs.go:159] Exec(37): /usr/bin/ovs-appctl --timeout=15 dpif/show-dp-features br-ex 2026-01-20T10:56:53.387782228+00:00 stderr F I0120 10:56:53.387736 30089 ovs.go:162] Exec(37): stdout: "Masked set action: Yes\nTunnel push pop: No\nUfid: Yes\nTruncate action: Yes\nClone action: Yes\nSample nesting: 10\nConntrack eventmask: Yes\nConntrack clear: Yes\nMax dp_hash algorithm: 1\nCheck pkt length action: Yes\nConntrack timeout policy: Yes\nExplicit Drop action: No\nOptimized Balance TCP mode: No\nConntrack all-zero IP SNAT: Yes\nMPLS Label add: Yes\nMax VLAN headers: 2\nMax MPLS depth: 3\nRecirc: Yes\nCT state: Yes\nCT zone: Yes\nCT mark: Yes\nCT label: Yes\nCT state NAT: Yes\nCT orig tuple: Yes\nCT orig tuple for IPv6: Yes\nIPv6 ND Extension: No\n" 2026-01-20T10:56:53.387782228+00:00 stderr F I0120 10:56:53.387758 30089 ovs.go:163] Exec(37): stderr: "" 2026-01-20T10:56:53.387848300+00:00 stderr F I0120 10:56:53.387828 30089 ovs.go:159] Exec(38): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . 
other_config:hw-offload 2026-01-20T10:56:53.392590997+00:00 stderr F I0120 10:56:53.392562 30089 ovs.go:162] Exec(38): stdout: "\n" 2026-01-20T10:56:53.392590997+00:00 stderr F I0120 10:56:53.392575 30089 ovs.go:163] Exec(38): stderr: "" 2026-01-20T10:56:53.393094331+00:00 stderr F I0120 10:56:53.393054 30089 iptables.go:146] Deleting rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22623 -j REJECT" for protocol: 0 2026-01-20T10:56:53.395907807+00:00 stderr F I0120 10:56:53.395886 30089 iptables.go:146] Deleting rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22624 -j REJECT" for protocol: 0 2026-01-20T10:56:53.398881837+00:00 stderr F I0120 10:56:53.398849 30089 iptables.go:146] Deleting rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22623 -j REJECT" for protocol: 0 2026-01-20T10:56:53.401517918+00:00 stderr F I0120 10:56:53.401486 30089 iptables.go:146] Deleting rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22624 -j REJECT" for protocol: 0 2026-01-20T10:56:53.404212530+00:00 stderr F I0120 10:56:53.404077 30089 iptables.go:108] Creating table: filter chain: FORWARD 2026-01-20T10:56:53.406158512+00:00 stderr F I0120 10:56:53.406034 30089 iptables.go:110] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists. 2026-01-20T10:56:53.411208008+00:00 stderr F I0120 10:56:53.411144 30089 iptables.go:108] Creating table: filter chain: OUTPUT 2026-01-20T10:56:53.413172191+00:00 stderr F I0120 10:56:53.413130 30089 iptables.go:110] Chain: "OUTPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. 
2026-01-20T10:56:53.417780035+00:00 stderr F I0120 10:56:53.417716 30089 gateway_localnet.go:152] Local Gateway Creation Complete 2026-01-20T10:56:53.417926158+00:00 stderr F I0120 10:56:53.417894 30089 default_node_network_controller.go:1301] MTU (1500) of network interface eth10 is big enough to deal with Geneve header overhead (sum 1458). 2026-01-20T10:56:53.417926158+00:00 stderr F I0120 10:56:53.417911 30089 kube.go:128] Setting annotations map[k8s.ovn.org/gateway-mtu-support: k8s.ovn.org/l3-gateway-config:{"default":{"mode":"local","interface-id":"br-ex_crc","mac-address":"fa:16:3e:0d:e7:11","ip-addresses":["38.102.83.220/24"],"ip-address":"38.102.83.220/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-mgmt-port-mac-address:b6:dc:d9:26:03:d4 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.220/24"} k8s.ovn.org/zone-name:crc] on node crc 2026-01-20T10:56:53.422105861+00:00 stderr F I0120 10:56:53.422004 30089 shared_informer.go:318] Caches are synced for ovn-lb-controller 2026-01-20T10:56:53.422105861+00:00 stderr F I0120 10:56:53.422026 30089 repair.go:57] Starting repairing loop for services 2026-01-20T10:56:53.422617695+00:00 stderr F I0120 10:56:53.422570 30089 repair.go:128] Deleted 0 stale service LBs 2026-01-20T10:56:53.422617695+00:00 stderr F I0120 10:56:53.422603 30089 repair.go:134] Deleted 0 stale Chassis Template Vars 2026-01-20T10:56:53.422633495+00:00 stderr F I0120 10:56:53.422620 30089 repair.go:59] Finished repairing loop for services: 596.406µs 2026-01-20T10:56:53.422900093+00:00 stderr F I0120 10:56:53.422868 30089 services_controller.go:314] Controller cache of 56 load balancers initialized for 55 services 2026-01-20T10:56:53.422900093+00:00 stderr F I0120 10:56:53.422886 30089 services_controller.go:225] Starting workers 2026-01-20T10:56:53.422955144+00:00 stderr F I0120 10:56:53.422928 30089 
services_controller.go:332] Processing sync for service openshift-dns-operator/metrics 2026-01-20T10:56:53.423180880+00:00 stderr F I0120 10:56:53.422940 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-dns-operator c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2 4375 0 2024-06-26 12:39:11 +0000 UTC map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159347 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: dns-operator,},ClusterIP:10.217.4.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.423180880+00:00 stderr F I0120 10:56:53.423131 30089 lb_config.go:1016] Cluster endpoints for openshift-dns-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.18] []}] 2026-01-20T10:56:53.423180880+00:00 stderr F I0120 10:56:53.423163 30089 services_controller.go:413] Built service openshift-dns-operator/metrics LB cluster-wide configs 
[]services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.52"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.18"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.423211931+00:00 stderr F I0120 10:56:53.423182 30089 services_controller.go:414] Built service openshift-dns-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.423211931+00:00 stderr F I0120 10:56:53.423190 30089 services_controller.go:415] Built service openshift-dns-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.423273592+00:00 stderr F I0120 10:56:53.423220 30089 services_controller.go:421] Built service openshift-dns-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.52", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.18", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.423273592+00:00 stderr F I0120 10:56:53.423259 30089 services_controller.go:422] Built service openshift-dns-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.423273592+00:00 stderr F I0120 10:56:53.423267 30089 services_controller.go:423] Built service openshift-dns-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.423287163+00:00 stderr F I0120 10:56:53.423273 30089 
services_controller.go:424] Service openshift-dns-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.423329284+00:00 stderr F I0120 10:56:53.423288 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"404c8c52-bd98-412a-93df-61640fc7575c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.52", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.18", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.423491958+00:00 stderr F I0120 10:56:53.423393 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.52:9393:10.217.0.18:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {404c8c52-bd98-412a-93df-61640fc7575c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.423491958+00:00 stderr F I0120 10:56:53.423444 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.52:9393:10.217.0.18:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {404c8c52-bd98-412a-93df-61640fc7575c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.423805338+00:00 stderr F I0120 10:56:53.423667 30089 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry 2026-01-20T10:56:53.423805338+00:00 stderr F I0120 10:56:53.423678 30089 services_controller.go:397] Service image-registry retrieved from lister: &Service{ObjectMeta:{image-registry openshift-image-registry 7b12735e-9db4-4c6e-99f6-b2626c4e9f08 17962 0 2024-06-27 13:18:52 +0000 UTC map[docker-registry:default] map[imageregistry.operator.openshift.io/checksum:sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d service.alpha.openshift.io/serving-cert-secret-name:image-registry-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.423805338+00:00 stderr F I0120 10:56:53.423739 30089 lb_config.go:1016] Cluster endpoints for openshift-image-registry/image-registry are: map[TCP/5000-tcp:{5000 [10.217.0.37] []}] 2026-01-20T10:56:53.423805338+00:00 stderr F I0120 10:56:53.423757 30089 services_controller.go:413] Built service openshift-image-registry/image-registry LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.41"}, protocol:"TCP", inport:5000, clusterEndpoints:services.lbEndpoints{Port:5000, V4IPs:[]string{"10.217.0.37"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.423805338+00:00 stderr F I0120 10:56:53.423768 30089 services_controller.go:414] Built service openshift-image-registry/image-registry LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.423805338+00:00 stderr F I0120 10:56:53.423773 30089 services_controller.go:415] Built service openshift-image-registry/image-registry LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.423805338+00:00 stderr F I0120 10:56:53.423784 30089 services_controller.go:421] Built service openshift-image-registry/image-registry cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.41", Port:5000, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.37", Port:5000, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.423805338+00:00 stderr F I0120 10:56:53.423800 30089 services_controller.go:422] Built service openshift-image-registry/image-registry per-node LB []services.LB{} 2026-01-20T10:56:53.423829528+00:00 stderr F I0120 10:56:53.423806 30089 services_controller.go:423] Built service openshift-image-registry/image-registry template LB []services.LB{} 2026-01-20T10:56:53.423829528+00:00 stderr F I0120 10:56:53.423812 30089 services_controller.go:424] Service openshift-image-registry/image-registry has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.423890200+00:00 stderr F I0120 10:56:53.423822 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"1636f317-82cf-4afb-adb2-1388c1aee17c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: 
[]services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.41", Port:5000, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.37", Port:5000, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.424034224+00:00 stderr F I0120 10:56:53.423909 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:5000:10.217.0.37:5000]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1636f317-82cf-4afb-adb2-1388c1aee17c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.424034224+00:00 stderr F I0120 10:56:53.423936 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:5000:10.217.0.37:5000]}] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1636f317-82cf-4afb-adb2-1388c1aee17c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.424139956+00:00 stderr F I0120 10:56:53.424106 30089 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-daemon 2026-01-20T10:56:53.424172677+00:00 stderr F I0120 10:56:53.424123 30089 services_controller.go:397] Service machine-config-daemon retrieved from lister: &Service{ObjectMeta:{machine-config-daemon openshift-machine-config-operator bddcb8c2-0f2d-4efa-a0ec-3e0648c24386 4880 0 2024-06-26 12:39:15 +0000 UTC map[k8s-app:machine-config-daemon] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fafd7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},ServicePort{Name:health,Protocol:TCP,Port:8798,TargetPort:{0 8798 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: 
machine-config-daemon,},ClusterIP:10.217.5.82,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.82],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.424215228+00:00 stderr F I0120 10:56:53.424182 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-daemon are: map[TCP/health:{8798 [192.168.126.11] []} TCP/metrics:{9001 [192.168.126.11] []}] 2026-01-20T10:56:53.424215228+00:00 stderr F I0120 10:56:53.424206 30089 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-daemon LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.424274320+00:00 stderr F I0120 10:56:53.424212 30089 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-daemon LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:8798, clusterEndpoints:services.lbEndpoints{Port:8798, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.424274320+00:00 stderr F I0120 10:56:53.424254 30089 services_controller.go:415] Built service 
openshift-machine-config-operator/machine-config-daemon LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.424290180+00:00 stderr F I0120 10:56:53.424279 30089 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-daemon cluster-wide LB []services.LB{} 2026-01-20T10:56:53.424327261+00:00 stderr F I0120 10:56:53.424286 30089 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-daemon per-node LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9001, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:8798, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:8798, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.424327261+00:00 stderr F I0120 10:56:53.424311 30089 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-daemon template LB []services.LB{} 2026-01-20T10:56:53.424327261+00:00 stderr F I0120 10:56:53.424318 30089 services_controller.go:424] Service openshift-machine-config-operator/machine-config-daemon has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.424383623+00:00 stderr F 
I0120 10:56:53.424329 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"03279af5-9ea3-4256-ad64-2f0188f56e36", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9001, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:8798, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:8798, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.424451985+00:00 stderr F I0120 10:56:53.424414 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} 
name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.82:8798:192.168.126.11:8798 10.217.5.82:9001:192.168.126.11:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03279af5-9ea3-4256-ad64-2f0188f56e36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.424466225+00:00 stderr F I0120 10:56:53.424442 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.82:8798:192.168.126.11:8798 10.217.5.82:9001:192.168.126.11:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03279af5-9ea3-4256-ad64-2f0188f56e36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.424611839+00:00 stderr F I0120 10:56:53.424565 30089 services_controller.go:332] Processing sync for service cert-manager/cert-manager-webhook 2026-01-20T10:56:53.424652760+00:00 stderr F I0120 10:56:53.424576 30089 services_controller.go:397] Service cert-manager-webhook retrieved from lister: &Service{ObjectMeta:{cert-manager-webhook cert-manager d1133a70-2dbf-40a7-a6bb-813db182c8f1 42888 0 2026-01-20 10:56:34 +0000 UTC map[app:webhook app.kubernetes.io/component:webhook app.kubernetes.io/instance:cert-manager app.kubernetes.io/name:webhook app.kubernetes.io/version:v1.19.2] map[] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9402,TargetPort:{1 0 http-metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: webhook,app.kubernetes.io/instance: cert-manager,app.kubernetes.io/name: webhook,},ClusterIP:10.217.5.163,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.163],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.424663870+00:00 stderr F I0120 10:56:53.424643 30089 lb_config.go:1016] Cluster endpoints for cert-manager/cert-manager-webhook are: map[TCP/https:{10250 [10.217.0.44] []} TCP/metrics:{9402 [10.217.0.44] []}] 2026-01-20T10:56:53.424675341+00:00 stderr F I0120 10:56:53.424663 30089 services_controller.go:413] Built service cert-manager/cert-manager-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.163"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10250, V4IPs:[]string{"10.217.0.44"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.163"}, protocol:"TCP", inport:9402, clusterEndpoints:services.lbEndpoints{Port:9402, V4IPs:[]string{"10.217.0.44"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.424688951+00:00 stderr F I0120 10:56:53.424674 
30089 services_controller.go:414] Built service cert-manager/cert-manager-webhook LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.424688951+00:00 stderr F I0120 10:56:53.424679 30089 services_controller.go:415] Built service cert-manager/cert-manager-webhook LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.424744882+00:00 stderr F I0120 10:56:53.424694 30089 services_controller.go:421] Built service cert-manager/cert-manager-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_cert-manager/cert-manager-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.163", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.44", Port:10250, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.163", Port:9402, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.44", Port:9402, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.424744882+00:00 stderr F I0120 10:56:53.424724 30089 services_controller.go:422] Built service cert-manager/cert-manager-webhook per-node LB []services.LB{} 2026-01-20T10:56:53.424744882+00:00 stderr F I0120 10:56:53.424734 30089 services_controller.go:423] Built service cert-manager/cert-manager-webhook template LB []services.LB{} 2026-01-20T10:56:53.424765613+00:00 stderr F I0120 10:56:53.424742 30089 services_controller.go:424] Service cert-manager/cert-manager-webhook has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 
(template) load balancers 2026-01-20T10:56:53.424809154+00:00 stderr F I0120 10:56:53.424774 30089 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-operators 2026-01-20T10:56:53.424809154+00:00 stderr F I0120 10:56:53.424756 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_cert-manager/cert-manager-webhook_TCP_cluster", UUID:"e594b84b-f427-4997-808d-a66bd2fba108", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_cert-manager/cert-manager-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.163", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.44", Port:10250, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.163", Port:9402, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.44", Port:9402, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.424897497+00:00 stderr F I0120 10:56:53.424788 30089 services_controller.go:397] Service redhat-operators retrieved from lister: &Service{ObjectMeta:{redhat-operators openshift-marketplace 
adccbaa4-8d5b-4985-9a89-66271ea4bf4e 6530 0 2024-06-26 12:47:51 +0000 UTC map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7 0xc0002fbe1d 0xc0002fbe1e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-operators,olm.managed: true,},ClusterIP:10.217.5.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.424868 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:cert-manager/cert-manager-webhook]} name:Service_cert-manager/cert-manager-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.163:443:10.217.0.44:10250 10.217.5.163:9402:10.217.0.44:9402]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e594b84b-f427-4997-808d-a66bd2fba108}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.424903 30089 lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-operators are: map[TCP/grpc:{50051 [10.217.0.34] []}] 
2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.424925 30089 services_controller.go:413] Built service openshift-marketplace/redhat-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.52"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.34"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.424951 30089 services_controller.go:414] Built service openshift-marketplace/redhat-operators LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.424957 30089 services_controller.go:415] Built service openshift-marketplace/redhat-operators LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.424924 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:cert-manager/cert-manager-webhook]} name:Service_cert-manager/cert-manager-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.163:443:10.217.0.44:10250 10.217.5.163:9402:10.217.0.44:9402]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e594b84b-f427-4997-808d-a66bd2fba108}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.424978 30089 services_controller.go:421] Built service openshift-marketplace/redhat-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.52", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.34", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.424996 30089 services_controller.go:422] Built service openshift-marketplace/redhat-operators per-node LB []services.LB{} 2026-01-20T10:56:53.425033050+00:00 stderr F I0120 10:56:53.425002 30089 services_controller.go:423] Built service openshift-marketplace/redhat-operators template LB []services.LB{} 2026-01-20T10:56:53.425076171+00:00 stderr F I0120 10:56:53.425037 30089 services_controller.go:424] Service openshift-marketplace/redhat-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.425130793+00:00 stderr F I0120 10:56:53.425051 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"dbd78453-346b-4a67-b084-f31f95f15b67", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.52", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.34", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.425388940+00:00 stderr F I0120 10:56:53.425222 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.52:50051:10.217.0.34:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dbd78453-346b-4a67-b084-f31f95f15b67}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.425388940+00:00 stderr F I0120 10:56:53.425250 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.52:50051:10.217.0.34:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dbd78453-346b-4a67-b084-f31f95f15b67}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426166 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426199 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-dns-operator : 3.260678ms 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426216 30089 services_controller.go:332] Processing sync for service openshift-kube-controller-manager/kube-controller-manager 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426222 30089 services_controller.go:397] Service kube-controller-manager retrieved from lister: &Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 419fdf14-5d8d-4271-b9e7-729de80d8cd2 5235 0 2024-06-26 12:47:11 +0000 UTC map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10257 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{kube-controller-manager: 
true,},ClusterIP:10.217.4.112,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.112],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426279 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426294 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager/kube-controller-manager are: map[TCP/https:{10257 [192.168.126.11] []}] 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426307 30089 services_controller.go:413] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426309 30089 services_controller.go:336] Finished syncing service image-registry on namespace openshift-image-registry : 2.642531ms 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426319 30089 services_controller.go:332] Processing sync for service openshift-kube-scheduler/scheduler 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426314 30089 services_controller.go:414] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.112"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10257, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426326 30089 services_controller.go:415] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426346 30089 services_controller.go:421] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB []services.LB{} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426354 30089 services_controller.go:422] Built service openshift-kube-controller-manager/kube-controller-manager per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.112", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10257, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426394 30089 services_controller.go:423] Built service openshift-kube-controller-manager/kube-controller-manager template LB []services.LB{} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426404 30089 services_controller.go:424] Service openshift-kube-controller-manager/kube-controller-manager has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 
10:56:53.426430 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426449 30089 services_controller.go:336] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator : 2.342222ms 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426465 30089 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426324 30089 services_controller.go:397] Service scheduler retrieved from lister: &Service{ObjectMeta:{scheduler openshift-kube-scheduler a839a554-406d-4df8-b3ae-b533cb3e24bc 4695 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:kube-scheduler] map[operator.openshift.io/spec-hash:f185087b7610499b49263c17685abe7f251a50c890808284a072687bf6d73275 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10259 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{scheduler: true,},ClusterIP:10.217.5.218,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.218],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.431720140+00:00 stderr F I0120 10:56:53.426431 
30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"b8952058-0880-4949-ae72-093072a0d7c5", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.112", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10257, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.431720140+00:00 stderr P I0120 10:56:53.426484 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler/schedule 2026-01-20T10:56:53.431792232+00:00 stderr F r are: map[TCP/https:{10259 [192.168.126.11] []}] 2026-01-20T10:56:53.431792232+00:00 stderr F I0120 10:56:53.426494 30089 services_controller.go:413] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.431792232+00:00 stderr F I0120 10:56:53.426499 30089 services_controller.go:414] Built 
service openshift-kube-scheduler/scheduler LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.218"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10259, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.431792232+00:00 stderr F I0120 10:56:53.426522 30089 services_controller.go:415] Built service openshift-kube-scheduler/scheduler LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.431792232+00:00 stderr F I0120 10:56:53.426541 30089 services_controller.go:421] Built service openshift-kube-scheduler/scheduler cluster-wide LB []services.LB{} 2026-01-20T10:56:53.431792232+00:00 stderr F I0120 10:56:53.426472 30089 services_controller.go:397] Service machine-api-operator retrieved from lister: &Service{ObjectMeta:{machine-api-operator openshift-machine-api ef047d6e-c72f-4309-a95e-08fb0ed08662 4792 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:machine-api-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fa7fb }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: 
machine-api-operator,},ClusterIP:10.217.5.127,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.127],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.431792232+00:00 stderr F I0120 10:56:53.427032 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator are: map[TCP/https:{8443 [10.217.0.5] []}] 2026-01-20T10:56:53.432590774+00:00 stderr F I0120 10:56:53.432534 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"} 2026-01-20T10:56:53.432590774+00:00 stderr F I0120 10:56:53.432581 30089 services_controller.go:336] Finished syncing service redhat-operators on namespace openshift-marketplace : 7.80695ms 2026-01-20T10:56:53.432623545+00:00 stderr F I0120 10:56:53.432607 30089 services_controller.go:332] Processing sync for service openshift-ingress-operator/metrics 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.427049 30089 services_controller.go:413] Built service openshift-machine-api/machine-api-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.127"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.5"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.433907 30089 services_controller.go:414] Built service openshift-machine-api/machine-api-operator LB per-node configs 
[]services.lbConfig(nil) 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.433919 30089 services_controller.go:415] Built service openshift-machine-api/machine-api-operator LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.433885 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.112:443:192.168.126.11:10257]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8952058-0880-4949-ae72-093072a0d7c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.433952 30089 services_controller.go:421] Built service openshift-machine-api/machine-api-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.127", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.5", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.433994 30089 
services_controller.go:422] Built service openshift-machine-api/machine-api-operator per-node LB []services.LB{} 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.434001 30089 services_controller.go:423] Built service openshift-machine-api/machine-api-operator template LB []services.LB{} 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.434008 30089 services_controller.go:424] Service openshift-machine-api/machine-api-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.433971 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.112:443:192.168.126.11:10257]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8952058-0880-4949-ae72-093072a0d7c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.434027 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"5e50823a-046a-4005-92b9-c7d2d651ffe9", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.127", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.5", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.434243 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.127:8443:10.217.0.5:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e50823a-046a-4005-92b9-c7d2d651ffe9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.434294 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.127:8443:10.217.0.5:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e50823a-046a-4005-92b9-c7d2d651ffe9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.426547 30089 services_controller.go:422] Built service openshift-kube-scheduler/scheduler per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.218", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10259, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.434544 30089 services_controller.go:423] Built service openshift-kube-scheduler/scheduler template LB []services.LB{} 2026-01-20T10:56:53.436182450+00:00 stderr F I0120 10:56:53.434551 30089 services_controller.go:424] Service openshift-kube-scheduler/scheduler has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.436182450+00:00 stderr P I0120 10:56:53.434570 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"915e622c-d89a-4906-831f-8daeda55c910", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMa 2026-01-20T10:56:53.436236752+00:00 stderr F p{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.218", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10259, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.434676 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.218:443:192.168.126.11:10259]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {915e622c-d89a-4906-831f-8daeda55c910}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.434710 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.218:443:192.168.126.11:10259]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {915e622c-d89a-4906-831f-8daeda55c910}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.434769 30089 default_node_network_controller.go:974] Waiting for gateway and management port readiness... 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.434965 30089 ovs.go:159] Exec(39): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.29449.ctl connection-status 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435187 30089 obj_retry.go:555] Update event received for resource *v1.Node, old object is equal to new: false 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435207 30089 obj_retry.go:607] Update event received for *v1.Node crc 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435314 30089 master.go:627] Adding or Updating Node "crc" 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435341 30089 hybrid.go:140] Removing node crc hybrid overlay port 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435556 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.220 physical_ips:38.102.83.220]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip 
mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435587 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.220 physical_ips:38.102.83.220]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435781 30089 node_tracker.go:204] Processing possible switch / router updates for node crc 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435825 30089 node_tracker.go:165] Node crc switch + router changed, syncing services 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435836 30089 services_controller.go:519] Full service sync requested 2026-01-20T10:56:53.436236752+00:00 stderr F I0120 10:56:53.435869 30089 ovs.go:159] Exec(40): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ovn-k8s-mp0 ofport 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.432619 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-ingress-operator 1e390522-c38e-4189-86b8-ad75c61e3844 6514 0 2024-06-26 12:47:50 +0000 UTC map[name:ingress-operator] map[capability.openshift.io/name:Ingress include.release.openshift.io/ibm-cloud-managed:true 
include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0001598d7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.4.255,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.255],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438231 30089 lb_config.go:1016] Cluster endpoints for openshift-ingress-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.45] []}] 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438257 30089 services_controller.go:413] Built service openshift-ingress-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.255"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.45"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438284 30089 services_controller.go:414] Built service 
openshift-ingress-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438291 30089 services_controller.go:415] Built service openshift-ingress-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438318 30089 services_controller.go:421] Built service openshift-ingress-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.255", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.45", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438361 30089 services_controller.go:422] Built service openshift-ingress-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438370 30089 services_controller.go:423] Built service openshift-ingress-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438379 30089 services_controller.go:424] Service openshift-ingress-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438403 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", 
UUID:"3ce3d754-52d1-4b39-a0be-cc15ffb5a10c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.255", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.45", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.438702998+00:00 stderr F I0120 10:56:53.438587 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.255:9393:10.217.0.45:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ce3d754-52d1-4b39-a0be-cc15ffb5a10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.438762850+00:00 stderr F I0120 10:56:53.438652 30089 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.255:9393:10.217.0.45:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ce3d754-52d1-4b39-a0be-cc15ffb5a10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.438837 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-webhook"} 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.438863 30089 services_controller.go:336] Finished syncing service cert-manager-webhook on namespace cert-manager : 14.294764ms 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.438971 30089 services_controller.go:332] Processing sync for service openshift-marketplace/certified-operators 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.439802 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"} 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.439824 30089 services_controller.go:336] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager : 13.607567ms 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.439841 30089 services_controller.go:332] Processing sync for service openshift-kube-scheduler-operator/metrics 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.439895 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"} 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.439918 30089 services_controller.go:336] Finished syncing service machine-api-operator on namespace openshift-machine-api : 13.445722ms 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.439929 30089 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-marketplace 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.439970 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"} 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.440013 30089 services_controller.go:336] Finished syncing service scheduler on namespace openshift-kube-scheduler : 13.692639ms 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.440029 30089 services_controller.go:332] Processing sync for service default/kubernetes 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.440256 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.440329 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.440661311+00:00 stderr F I0120 10:56:53.440355 30089 transact.go:42] Configuring OVN: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.445924022+00:00 stderr F I0120 10:56:53.445848 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"} 2026-01-20T10:56:53.445924022+00:00 stderr F I0120 10:56:53.445892 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-ingress-operator : 13.283157ms 2026-01-20T10:56:53.445957353+00:00 stderr F I0120 10:56:53.445920 30089 services_controller.go:332] Processing sync for service openshift-config-operator/metrics 2026-01-20T10:56:53.445984493+00:00 stderr F I0120 10:56:53.445967 30089 services_controller.go:551] Adding service openshift-config-operator/metrics 2026-01-20T10:56:53.445984493+00:00 stderr F I0120 10:56:53.445980 30089 services_controller.go:551] Adding service openshift-console/downloads 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446215 30089 services_controller.go:551] Adding service openshift-etcd-operator/metrics 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446233 30089 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-webhook 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446243 30089 services_controller.go:551] Adding service openshift-kube-controller-manager-operator/metrics 
2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446200 30089 services_controller.go:397] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 057366b7-69c1-49f0-88ca-660c8863cae8 249 0 2024-06-26 12:38:03 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446250 30089 services_controller.go:551] Adding service openshift-kube-storage-version-migrator-operator/metrics 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446258 30089 services_controller.go:551] Adding service cert-manager/cert-manager-cainjector 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446266 30089 services_controller.go:551] Adding service default/kubernetes 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446274 30089 services_controller.go:551] Adding service openshift-cluster-version/cluster-version-operator 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446280 30089 services_controller.go:551] Adding service openshift-kube-apiserver/apiserver 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446277 30089 lb_config.go:1016] Cluster endpoints for default/kubernetes are: map[TCP/https:{6443 [192.168.126.11] []}] 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 
10:56:53.446288 30089 services_controller.go:551] Adding service openshift-machine-api/cluster-autoscaler-operator 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446295 30089 services_controller.go:413] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446312 30089 services_controller.go:414] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.1"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446326 30089 services_controller.go:415] Built service default/kubernetes LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446358 30089 services_controller.go:421] Built service default/kubernetes cluster-wide LB []services.LB{} 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446172 30089 services_controller.go:397] Service redhat-marketplace retrieved from lister: &Service{ObjectMeta:{redhat-marketplace openshift-marketplace 73712edb-385d-4bf0-9c03-b6c570b1a22f 6434 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true olm.service-spec-hash:aUeLNNcZzVZO2rcaZ5Kc8V3jffO0Ss4T6qX6V5] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-marketplace 6f259421-4edb-49d8-a6ce-aa41dfc64264 0xc0002fbd5d 0xc0002fbd5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-marketplace,olm.managed: 
true,},ClusterIP:10.217.4.65,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.65],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446364 30089 services_controller.go:422] Built service default/kubernetes per-node LB []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.1", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446408 30089 services_controller.go:423] Built service default/kubernetes template LB []services.LB{} 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446416 30089 services_controller.go:424] Service default/kubernetes has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446296 30089 services_controller.go:551] Adding service openshift-console-operator/metrics 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446425 30089 
lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-marketplace are: map[TCP/grpc:{50051 [10.217.0.30] []}] 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446452 30089 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/packageserver-service 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446432 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"64f8efea-8a64-40da-b2c5-6e8d4a7a1c68", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.1", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446450 30089 services_controller.go:413] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.65"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, 
V4IPs:[]string{"10.217.0.30"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446052 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446473 30089 services_controller.go:414] Built service openshift-marketplace/redhat-marketplace LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.450295880+00:00 stderr F I0120 10:56:53.446487 30089 services_controller.go:415] Built service openshift-marketplace/redhat-marketplace LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446519 30089 services_controller.go:421] Built service openshift-marketplace/redhat-marketplace cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.65", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446465 30089 services_controller.go:551] 
Adding service openshift-ovn-kubernetes/ovn-kubernetes-node 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446555 30089 services_controller.go:422] Built service openshift-marketplace/redhat-marketplace per-node LB []services.LB{} 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446565 30089 services_controller.go:423] Built service openshift-marketplace/redhat-marketplace template LB []services.LB{} 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446580 30089 services_controller.go:551] Adding service openshift-oauth-apiserver/api 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446580 30089 services_controller.go:424] Service openshift-marketplace/redhat-marketplace has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446549 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446551 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64f8efea-8a64-40da-b2c5-6e8d4a7a1c68}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446606 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64f8efea-8a64-40da-b2c5-6e8d4a7a1c68}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446602 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446605 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"026ccb94-9135-4ac8-9f1b-4f7198943ac3", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.65", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446590 30089 services_controller.go:551] Adding service openshift-authentication-operator/metrics 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.446760 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.65:50051:10.217.0.30:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {026ccb94-9135-4ac8-9f1b-4f7198943ac3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450371322+00:00 stderr F I0120 10:56:53.445995 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-config-operator f04ada1b-55ad-45a3-9231-6d1ff7242fa0 4291 0 2024-06-26 12:39:24 +0000 UTC 
map[app:openshift-config-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:config-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000158e2f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-config-operator,},ClusterIP:10.217.5.120,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.120],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450371322+00:00 stderr P I0120 10:56:53.446814 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protoc 2026-01-20T10:56:53.450417733+00:00 stderr F ol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.65:50051:10.217.0.30:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column 
_uuid == {026ccb94-9135-4ac8-9f1b-4f7198943ac3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446833 30089 lb_config.go:1016] Cluster endpoints for openshift-config-operator/metrics are: map[TCP/https:{8443 [10.217.0.23] []}] 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446853 30089 services_controller.go:413] Built service openshift-config-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.120"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.23"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446866 30089 services_controller.go:414] Built service openshift-config-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446873 30089 services_controller.go:415] Built service openshift-config-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446135 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 080e1aaf-7269-495b-ab74-593efe4192ec 4661 0 2024-06-26 12:39:09 +0000 UTC map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion 
version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159cfb }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.108,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.108],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446950 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler-operator/metrics are: map[TCP/https:{8443 [10.217.0.12] []}] 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446972 30089 services_controller.go:551] Adding service openshift-etcd/etcd 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446975 30089 services_controller.go:413] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.108"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.12"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446990 30089 services_controller.go:551] Adding service openshift-image-registry/image-registry-operator 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446998 30089 services_controller.go:414] Built service openshift-kube-scheduler-operator/metrics LB per-node configs []services.lbConfig(nil) 
2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447002 30089 services_controller.go:551] Adding service openshift-ingress/router-internal-default 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447011 30089 services_controller.go:415] Built service openshift-kube-scheduler-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447012 30089 services_controller.go:551] Adding service openshift-machine-api/control-plane-machine-set-operator 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447037 30089 services_controller.go:551] Adding service openshift-marketplace/marketplace-operator-metrics 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447044 30089 services_controller.go:551] Adding service openshift-marketplace/redhat-marketplace 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447053 30089 services_controller.go:551] Adding service default/openshift 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.446098 30089 services_controller.go:397] Service certified-operators retrieved from lister: &Service{ObjectMeta:{certified-operators openshift-marketplace 97052848-7332-4254-8854-60d45bb91123 6358 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:7FOCZ3GVMQ1pwQKJahWmE09uJDRx6ab8xxcEYE] map[] [{operators.coreos.com/v1alpha1 CatalogSource certified-operators 16d5fe82-aef0-4700-8b13-e78e71d2a10d 0xc0002fb5ed 0xc0002fb5ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.249,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.249],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447105 30089 lb_config.go:1016] Cluster endpoints for openshift-marketplace/certified-operators are: map[TCP/grpc:{50051 [10.217.0.33] []}] 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447038 30089 services_controller.go:421] Built service openshift-kube-scheduler-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.108", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.12", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447119 30089 services_controller.go:413] Built service openshift-marketplace/certified-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.249"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.33"}, V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447131 30089 services_controller.go:414] Built service openshift-marketplace/certified-operators LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447137 30089 services_controller.go:415] Built service openshift-marketplace/certified-operators LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447133 30089 services_controller.go:422] Built service openshift-kube-scheduler-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.450417733+00:00 stderr F I0120 10:56:53.447151 30089 services_controller.go:423] Built service openshift-kube-scheduler-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.450417733+00:00 stderr P I0120 10:56:53.447148 30089 services_controller.go:421] Built service openshift-marketplace/certified-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:true, 2026-01-20T10:56:53.450456844+00:00 stderr F EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.249", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.33", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447165 30089 services_controller.go:424] Service openshift-kube-scheduler-operator/metrics has 1 
cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447194 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"9037868a-bf59-4e20-8fc8-16e697f234f6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.108", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.12", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447027 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"} 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447238 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"} 
2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447227 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447255 30089 services_controller.go:336] Finished syncing service redhat-marketplace on namespace openshift-marketplace : 7.326217ms 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447277 30089 services_controller.go:332] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447290 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447168 30089 services_controller.go:422] Built service openshift-marketplace/certified-operators per-node LB []services.LB{} 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447286 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-storage-version-migrator-operator 3e887cd0-b481-460c-b943-d944dc64df2f 4706 0 2024-06-26 12:39:17 +0000 UTC map[app:kube-storage-version-migrator-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159e87 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-storage-version-migrator-operator,},ClusterIP:10.217.4.151,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.151],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447349 30089 services_controller.go:423] Built service openshift-marketplace/certified-operators template LB []services.LB{} 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447363 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-storage-version-migrator-operator/metrics are: map[TCP/https:{8443 [10.217.0.16] []}] 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447366 30089 services_controller.go:424] Service openshift-marketplace/certified-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447376 30089 services_controller.go:413] Built service openshift-kube-storage-version-migrator-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.151"}, protocol:"TCP", inport:443, 
clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.16"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447356 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.108:443:10.217.0.12:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447253 30089 services_controller.go:336] Finished syncing service kubernetes on namespace default : 7.222265ms 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447403 30089 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-machine-webhook 2026-01-20T10:56:53.450456844+00:00 stderr F I0120 10:56:53.447400 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.108:443:10.217.0.12:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450456844+00:00 stderr P I0120 10:56:53.447408 30089 services_controller.go:397] Service machine-api-operator-machine-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-machine-webhook openshift-machine-api 7dd2300f-f67e-4eb3-a3fa-1f22c230305a 4821 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-api-operator-machine-webhook] map[capability.opensh 2026-01-20T10:56:53.450492955+00:00 stderr F ift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-machine-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fa95b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 machine-webhook},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.4.242,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.242],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447410 30089 services_controller.go:443] Services do not match, existing lbs: 
[]services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"46ded4bb-21d9-4d3a-a886-ac7e004b5ce4", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.249", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.33", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447459 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-machine-webhook are: map[] 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447391 30089 services_controller.go:414] Built service openshift-kube-storage-version-migrator-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447577 30089 services_controller.go:415] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447315 30089 transact.go:42] 
Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447576 30089 services_controller.go:413] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.242"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447605 30089 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447582 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/certified-operators]} name:Service_openshift-marketplace/certified-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.249:50051:10.217.0.33:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46ded4bb-21d9-4d3a-a886-ac7e004b5ce4}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447600 30089 services_controller.go:421] Built service openshift-kube-storage-version-migrator-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.151", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.16", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447630 30089 services_controller.go:422] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447641 30089 services_controller.go:423] Built service openshift-kube-storage-version-migrator-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447629 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/certified-operators]} name:Service_openshift-marketplace/certified-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.249:50051:10.217.0.33:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{46ded4bb-21d9-4d3a-a886-ac7e004b5ce4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447619 30089 services_controller.go:415] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450492955+00:00 stderr F I0120 10:56:53.447674 30089 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-machine-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.242", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450492955+00:00 stderr P I0120 10:56:53.446892 30089 services_controller.go:421] Built service openshift-config-operator/metrics cluster-wide LB 2026-01-20T10:56:53.450551117+00:00 stderr F []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.120", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.23", Port:8443, Template:(*services.Template)(nil)}}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447697 30089 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-machine-webhook per-node LB []services.LB{} 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447706 30089 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-machine-webhook template LB []services.LB{} 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447703 30089 services_controller.go:422] Built service openshift-config-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447716 30089 services_controller.go:424] Service openshift-machine-api/machine-api-operator-machine-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447719 30089 services_controller.go:423] Built service openshift-config-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447730 30089 services_controller.go:424] Service openshift-config-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447733 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"733bd827-e0e7-4901-9b0d-4f2fcb21be04", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.242", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447752 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"8687c189-4f68-4a92-accd-596f2b18fbfd", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.120", Port:443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.23", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447791 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"} 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447804 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-scheduler-operator : 7.964395ms 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447075 30089 services_controller.go:551] Adding service openshift-apiserver-operator/metrics 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447816 30089 services_controller.go:332] Processing sync for service openshift-service-ca-operator/metrics 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447650 30089 services_controller.go:424] Service openshift-kube-storage-version-migrator-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447825 30089 services_controller.go:551] Adding service openshift-apiserver/api 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447836 30089 services_controller.go:551] Adding service openshift-dns/dns-default 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447842 30089 services_controller.go:551] Adding service openshift-machine-api/machine-api-controllers 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447848 30089 services_controller.go:551] Adding service openshift-apiserver/check-endpoints 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447853 30089 services_controller.go:551] Adding 
service openshift-marketplace/community-operators 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447860 30089 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-target 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447868 30089 services_controller.go:551] Adding service openshift-network-operator/metrics 2026-01-20T10:56:53.450551117+00:00 stderr F I0120 10:56:53.447824 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-service-ca-operator 030283b3-acfe-40ed-811c-d9f7f79607f6 5225 0 2024-06-26 12:39:07 +0000 UTC map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000a6e6ff }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: service-ca-operator,},ClusterIP:10.217.5.165,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.165],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450551117+00:00 stderr F 
I0120 10:56:53.447874 30089 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/package-server-manager-metrics 2026-01-20T10:56:53.450551117+00:00 stderr P I0120 10:56:53.447855 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none rej 2026-01-20T10:56:53.450599548+00:00 stderr F ect:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.242:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {733bd827-e0e7-4901-9b0d-4f2fcb21be04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447885 30089 services_controller.go:551] Adding service openshift-console/console 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447888 30089 lb_config.go:1016] Cluster endpoints for openshift-service-ca-operator/metrics are: map[TCP/https:{8443 [10.217.0.10] []}] 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447893 30089 services_controller.go:551] Adding service openshift-ingress-operator/metrics 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447903 30089 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-operator 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447851 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.151", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.16", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447883 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.120:443:10.217.0.23:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8687c189-4f68-4a92-accd-596f2b18fbfd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447927 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.120:443:10.217.0.23:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8687c189-4f68-4a92-accd-596f2b18fbfd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447925 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.242:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {733bd827-e0e7-4901-9b0d-4f2fcb21be04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447911 30089 services_controller.go:551] Adding service openshift-multus/network-metrics-service 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448125 30089 services_controller.go:551] Adding service cert-manager/cert-manager-webhook 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448136 30089 services_controller.go:551] Adding service openshift-dns-operator/metrics 
2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448143 30089 services_controller.go:551] Adding service openshift-image-registry/image-registry 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448153 30089 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-daemon 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448162 30089 services_controller.go:551] Adding service openshift-marketplace/redhat-operators 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448169 30089 services_controller.go:551] Adding service openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448176 30089 services_controller.go:551] Adding service openshift-controller-manager/controller-manager 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.447900 30089 services_controller.go:413] Built service openshift-service-ca-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.165"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.10"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448182 30089 services_controller.go:551] Adding service openshift-kube-controller-manager/kube-controller-manager 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448192 30089 services_controller.go:551] Adding service openshift-kube-scheduler/scheduler 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448191 30089 services_controller.go:414] Built service openshift-service-ca-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448198 30089 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator 
2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448203 30089 services_controller.go:415] Built service openshift-service-ca-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448048 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.151:443:10.217.0.16:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450599548+00:00 stderr F I0120 10:56:53.448215 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.151:443:10.217.0.16:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450599548+00:00 stderr P I0120 10:56:53.448232 30089 services_controller.go:421] Built service openshift-service-ca-operator/metrics cluster-wide LB []services.LB{services.LB{N 2026-01-20T10:56:53.450637459+00:00 stderr F 
ame:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.165", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.10", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.448241 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:0d:e7:11 networks:{GoSet:[38.102.83.220/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450361 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450394 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:0d:e7:11 networks:{GoSet:[38.102.83.220/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: 
UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.448205 30089 services_controller.go:551] Adding service openshift-marketplace/certified-operators 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450454 30089 services_controller.go:551] Adding service cert-manager/cert-manager 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450462 30089 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-source 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450468 30089 services_controller.go:551] Adding service openshift-authentication/oauth-openshift 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450473 30089 services_controller.go:551] Adding service openshift-cluster-machine-approver/machine-approver 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450480 30089 services_controller.go:551] Adding service openshift-ingress-canary/ingress-canary 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450486 30089 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-controller 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450497 30089 services_controller.go:551] Adding service openshift-kube-scheduler-operator/metrics 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.448260 30089 services_controller.go:422] Built service openshift-service-ca-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450504 30089 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-machine-webhook 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 
10:56:53.450517 30089 services_controller.go:551] Adding service openshift-service-ca-operator/metrics 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450520 30089 services_controller.go:423] Built service openshift-service-ca-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.448278 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"} 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450535 30089 services_controller.go:424] Service openshift-service-ca-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450524 30089 services_controller.go:551] Adding service openshift-cluster-samples-operator/metrics 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450549 30089 services_controller.go:551] Adding service openshift-monitoring/cluster-monitoring-operator 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450555 30089 services_controller.go:551] Adding service openshift-multus/multus-admission-controller 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450560 30089 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/olm-operator-metrics 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450566 30089 services_controller.go:551] Adding service openshift-route-controller-manager/route-controller-manager 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450572 30089 services_controller.go:551] Adding service openshift-console-operator/webhook 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450578 30089 services_controller.go:551] Adding service openshift-controller-manager-operator/metrics 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450584 30089 
services_controller.go:551] Adding service openshift-kube-apiserver-operator/metrics 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450589 30089 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/catalog-operator-metrics 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.448748 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"} 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450607 30089 services_controller.go:336] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api : 3.201056ms 2026-01-20T10:56:53.450637459+00:00 stderr F I0120 10:56:53.450624 30089 services_controller.go:332] Processing sync for service cert-manager/cert-manager-cainjector 2026-01-20T10:56:53.450670540+00:00 stderr F I0120 10:56:53.450574 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"aa7bf248-62bd-463a-bce9-eaabc90e6138", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.165", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.10", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.450765402+00:00 stderr F I0120 10:56:53.450640 30089 services_controller.go:397] Service cert-manager-cainjector retrieved from lister: &Service{ObjectMeta:{cert-manager-cainjector cert-manager 17915584-2494-42c4-a8a9-acf2692b2235 42879 0 2026-01-20 10:56:34 +0000 UTC map[app:cainjector app.kubernetes.io/component:cainjector app.kubernetes.io/instance:cert-manager app.kubernetes.io/name:cainjector app.kubernetes.io/version:v1.19.2] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http-metrics,Protocol:TCP,Port:9402,TargetPort:{0 9402 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: cainjector,app.kubernetes.io/instance: cert-manager,app.kubernetes.io/name: cainjector,},ClusterIP:10.217.5.147,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.147],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450765402+00:00 stderr F I0120 10:56:53.450536 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-config-operator : 4.615724ms 2026-01-20T10:56:53.450803373+00:00 stderr F I0120 10:56:53.450732 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.165:443:10.217.0.10:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {aa7bf248-62bd-463a-bce9-eaabc90e6138}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450803373+00:00 stderr F I0120 10:56:53.450773 30089 services_controller.go:332] Processing sync for service openshift-cluster-version/cluster-version-operator 2026-01-20T10:56:53.450803373+00:00 stderr F I0120 10:56:53.450774 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.165:443:10.217.0.10:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {aa7bf248-62bd-463a-bce9-eaabc90e6138}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.450830954+00:00 stderr F I0120 10:56:53.448302 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"} 2026-01-20T10:56:53.450848204+00:00 stderr F I0120 10:56:53.450782 30089 services_controller.go:397] Service cluster-version-operator retrieved from lister: &Service{ObjectMeta:{cluster-version-operator openshift-cluster-version b85c5397-4189-4029-b181-4e339da207b7 6237 0 2024-06-26 12:38:51 +0000 UTC 
map[k8s-app:cluster-version-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true kubernetes.io/description:Expose cluster-version operator metrics to other in-cluster consumers. Access requires a prometheus-k8s RoleBinding in this namespace. service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:cluster-version-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000158d77 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9099,TargetPort:{0 9099 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: cluster-version-operator,},ClusterIP:10.217.5.47,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.47],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450848204+00:00 stderr F I0120 10:56:53.450835 30089 services_controller.go:336] Finished syncing service certified-operators on namespace openshift-marketplace : 11.861809ms 2026-01-20T10:56:53.450865155+00:00 stderr F I0120 10:56:53.450846 30089 lb_config.go:1016] Cluster endpoints for cert-manager/cert-manager-cainjector are: map[TCP/http-metrics:{9402 [10.217.0.41] []}] 2026-01-20T10:56:53.450865155+00:00 stderr F I0120 10:56:53.450854 30089 services_controller.go:332] Processing sync for service openshift-console-operator/webhook 
2026-01-20T10:56:53.450878365+00:00 stderr F I0120 10:56:53.450863 30089 lb_config.go:1016] Cluster endpoints for openshift-cluster-version/cluster-version-operator are: map[TCP/metrics:{9099 [192.168.126.11] []}] 2026-01-20T10:56:53.450891086+00:00 stderr F I0120 10:56:53.450878 30089 services_controller.go:413] Built service openshift-cluster-version/cluster-version-operator LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.450891086+00:00 stderr F I0120 10:56:53.450883 30089 services_controller.go:414] Built service openshift-cluster-version/cluster-version-operator LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.47"}, protocol:"TCP", inport:9099, clusterEndpoints:services.lbEndpoints{Port:9099, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450904826+00:00 stderr F I0120 10:56:53.450878 30089 services_controller.go:413] Built service cert-manager/cert-manager-cainjector LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.147"}, protocol:"TCP", inport:9402, clusterEndpoints:services.lbEndpoints{Port:9402, V4IPs:[]string{"10.217.0.41"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.450904826+00:00 stderr F I0120 10:56:53.450893 30089 services_controller.go:415] Built service openshift-cluster-version/cluster-version-operator LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450904826+00:00 stderr F I0120 10:56:53.450899 30089 services_controller.go:414] Built service cert-manager/cert-manager-cainjector LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.450924897+00:00 stderr F I0120 10:56:53.450908 30089 services_controller.go:415] Built service 
cert-manager/cert-manager-cainjector LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.450924897+00:00 stderr F I0120 10:56:53.450917 30089 services_controller.go:421] Built service openshift-cluster-version/cluster-version-operator cluster-wide LB []services.LB{} 2026-01-20T10:56:53.450971018+00:00 stderr F I0120 10:56:53.450923 30089 services_controller.go:422] Built service openshift-cluster-version/cluster-version-operator per-node LB []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.47", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9099, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.450971018+00:00 stderr F I0120 10:56:53.450875 30089 services_controller.go:397] Service webhook retrieved from lister: &Service{ObjectMeta:{webhook openshift-console-operator 0bec6a60-3529-4fdb-81de-718ea6c4dae4 9610 0 2024-06-26 12:53:34 +0000 UTC map[name:console-conversion-webhook] map[capability.openshift.io/name:Console include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:webhook-serving-cert 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000158fd7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:9443,TargetPort:{0 9443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: console-conversion-webhook,},ClusterIP:10.217.5.84,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.84],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.450971018+00:00 stderr F I0120 10:56:53.450949 30089 services_controller.go:423] Built service openshift-cluster-version/cluster-version-operator template LB []services.LB{} 2026-01-20T10:56:53.450971018+00:00 stderr F I0120 10:56:53.450957 30089 services_controller.go:424] Service openshift-cluster-version/cluster-version-operator has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.450994428+00:00 stderr F I0120 10:56:53.450967 30089 lb_config.go:1016] Cluster endpoints for openshift-console-operator/webhook are: map[TCP/webhook:{9443 [10.217.0.61] []}] 2026-01-20T10:56:53.450994428+00:00 stderr F I0120 10:56:53.450985 30089 services_controller.go:413] Built service openshift-console-operator/webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.84"}, protocol:"TCP", inport:9443, clusterEndpoints:services.lbEndpoints{Port:9443, V4IPs:[]string{"10.217.0.61"}, V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.451006849+00:00 stderr F I0120 10:56:53.450972 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"516e9abb-2a1b-4eba-859b-c827d44fb86e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.47", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9099, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.451006849+00:00 stderr F I0120 10:56:53.450997 30089 services_controller.go:414] Built service openshift-console-operator/webhook LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.451019379+00:00 stderr F I0120 10:56:53.451006 30089 services_controller.go:415] Built service openshift-console-operator/webhook LB template configs []services.lbConfig(nil) 
2026-01-20T10:56:53.451078931+00:00 stderr F I0120 10:56:53.451003 30089 services_controller.go:421] Built service cert-manager/cert-manager-cainjector cluster-wide LB []services.LB{services.LB{Name:"Service_cert-manager/cert-manager-cainjector_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-cainjector"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.147", Port:9402, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.41", Port:9402, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.451130212+00:00 stderr F I0120 10:56:53.451084 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.47:9099:192.168.126.11:9099]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516e9abb-2a1b-4eba-859b-c827d44fb86e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.451130212+00:00 stderr F I0120 10:56:53.451115 30089 services_controller.go:422] Built service cert-manager/cert-manager-cainjector per-node LB []services.LB{} 2026-01-20T10:56:53.451149612+00:00 stderr F I0120 10:56:53.451128 30089 services_controller.go:423] Built service cert-manager/cert-manager-cainjector template LB 
[]services.LB{} 2026-01-20T10:56:53.451149612+00:00 stderr F I0120 10:56:53.451120 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.47:9099:192.168.126.11:9099]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516e9abb-2a1b-4eba-859b-c827d44fb86e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.451149612+00:00 stderr F I0120 10:56:53.451138 30089 services_controller.go:424] Service cert-manager/cert-manager-cainjector has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.451206934+00:00 stderr F I0120 10:56:53.451021 30089 services_controller.go:421] Built service openshift-console-operator/webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.84", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.61", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.451206934+00:00 stderr F I0120 10:56:53.451158 30089 
services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_cert-manager/cert-manager-cainjector_TCP_cluster", UUID:"fe521e4f-43fc-4430-8cab-34cedb6b703d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-cainjector"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_cert-manager/cert-manager-cainjector_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-cainjector"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.147", Port:9402, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.41", Port:9402, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.451206934+00:00 stderr F I0120 10:56:53.451195 30089 services_controller.go:422] Built service openshift-console-operator/webhook per-node LB []services.LB{} 2026-01-20T10:56:53.451239875+00:00 stderr F I0120 10:56:53.451205 30089 services_controller.go:423] Built service openshift-console-operator/webhook template LB []services.LB{} 2026-01-20T10:56:53.451239875+00:00 stderr F I0120 10:56:53.451211 30089 services_controller.go:424] Service openshift-console-operator/webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.451277586+00:00 stderr F I0120 
10:56:53.451233 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"a25f22fe-ae1f-47da-afc5-e1fde93bc930", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.84", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.61", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.451363168+00:00 stderr F I0120 10:56:53.451325 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/webhook]} name:Service_openshift-console-operator/webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.84:9443:10.217.0.61:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{a25f22fe-ae1f-47da-afc5-e1fde93bc930}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.451376819+00:00 stderr F I0120 10:56:53.451350 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/webhook]} name:Service_openshift-console-operator/webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.84:9443:10.217.0.61:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a25f22fe-ae1f-47da-afc5-e1fde93bc930}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.451426200+00:00 stderr F I0120 10:56:53.451369 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:cert-manager/cert-manager-cainjector]} name:Service_cert-manager/cert-manager-cainjector_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.147:9402:10.217.0.41:9402]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe521e4f-43fc-4430-8cab-34cedb6b703d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.451444920+00:00 stderr F I0120 10:56:53.451417 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:cert-manager/cert-manager-cainjector]} name:Service_cert-manager/cert-manager-cainjector_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.147:9402:10.217.0.41:9402]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe521e4f-43fc-4430-8cab-34cedb6b703d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.452055127+00:00 stderr F I0120 10:56:53.451975 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"} 2026-01-20T10:56:53.452055127+00:00 stderr F I0120 10:56:53.452035 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator : 4.756078ms 2026-01-20T10:56:53.452099609+00:00 stderr F I0120 10:56:53.452072 30089 services_controller.go:332] Processing sync for service openshift-console/downloads 2026-01-20T10:56:53.452263973+00:00 stderr F I0120 10:56:53.452083 30089 services_controller.go:397] Service downloads retrieved from lister: &Service{ObjectMeta:{downloads openshift-console d6818508-d113-4821-84c8-94f59cfa13cb 9742 0 2024-06-26 12:53:44 +0000 UTC map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.196,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.196],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.453727352+00:00 stderr F I0120 10:56:53.453631 30089 
lb_config.go:1016] Cluster endpoints for openshift-console/downloads are: map[TCP/http:{8080 [10.217.0.66] []}] 2026-01-20T10:56:53.453727352+00:00 stderr F I0120 10:56:53.453685 30089 services_controller.go:413] Built service openshift-console/downloads LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.196"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.66"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.453727352+00:00 stderr F I0120 10:56:53.453708 30089 services_controller.go:414] Built service openshift-console/downloads LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.453727352+00:00 stderr F I0120 10:56:53.453718 30089 services_controller.go:415] Built service openshift-console/downloads LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.453819424+00:00 stderr F I0120 10:56:53.453740 30089 services_controller.go:421] Built service openshift-console/downloads cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.196", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.66", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.453819424+00:00 stderr F I0120 10:56:53.453772 30089 services_controller.go:422] Built service openshift-console/downloads per-node LB []services.LB{} 
2026-01-20T10:56:53.453819424+00:00 stderr F I0120 10:56:53.453780 30089 services_controller.go:423] Built service openshift-console/downloads template LB []services.LB{} 2026-01-20T10:56:53.453819424+00:00 stderr F I0120 10:56:53.453788 30089 services_controller.go:424] Service openshift-console/downloads has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.453846025+00:00 stderr F I0120 10:56:53.453805 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"66ce05b8-462b-4fd9-b81f-bcc75a439997", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.196", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.66", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454205 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.196:80:10.217.0.66:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {66ce05b8-462b-4fd9-b81f-bcc75a439997}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454254 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.196:80:10.217.0.66:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {66ce05b8-462b-4fd9-b81f-bcc75a439997}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.452349 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454447 30089 services_controller.go:336] Finished syncing service webhook on namespace openshift-console-operator : 3.581856ms 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454475 30089 services_controller.go:332] Processing sync for service openshift-controller-manager-operator/metrics 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.452381 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454552 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-service-ca-operator : 6.729182ms 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454578 30089 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454485 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-controller-manager-operator 2f6bb711-85a4-408c-913a-54f006dcf2e9 4322 0 2024-06-26 12:39:07 +0000 UTC map[app:openshift-controller-manager-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:openshift-controller-manager-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0001591cb }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
openshift-controller-manager-operator,},ClusterIP:10.217.5.152,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.152],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454632 30089 lb_config.go:1016] Cluster endpoints for openshift-controller-manager-operator/metrics are: map[TCP/https:{8443 [10.217.0.9] []}] 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454647 30089 services_controller.go:413] Built service openshift-controller-manager-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.152"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.9"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454666 30089 services_controller.go:414] Built service openshift-controller-manager-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454676 30089 services_controller.go:415] Built service openshift-controller-manager-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454698 30089 services_controller.go:421] Built service openshift-controller-manager-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.152", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.9", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454725 30089 services_controller.go:422] Built service openshift-controller-manager-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454733 30089 services_controller.go:423] Built service openshift-controller-manager-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454588 30089 services_controller.go:397] Service packageserver-service retrieved from lister: &Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager 8099635d-a821-489e-8b18-cae3e83f00b2 6451 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver 0beab272-7637-4d44-b3aa-502dcafbc929 0xc000a6e48d 0xc000a6e48e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5443,Protocol:TCP,Port:5443,TargetPort:{0 5443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
packageserver,},ClusterIP:10.217.4.230,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.230],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.454995387+00:00 stderr F I0120 10:56:53.454742 30089 services_controller.go:424] Service openshift-controller-manager-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.454995387+00:00 stderr P I0120 10:56:53.454762 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"cd325bf7-5a1f-48df-b966-4cb50de55e08", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Se 2026-01-20T10:56:53.455050558+00:00 stderr F rvice", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.152", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.9", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.454784 30089 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/packageserver-service are: map[TCP/5443:{5443 [10.217.0.43] []}] 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.454917 30089 services_controller.go:413] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.230"}, protocol:"TCP", inport:5443, clusterEndpoints:services.lbEndpoints{Port:5443, V4IPs:[]string{"10.217.0.43"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.454898 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.152:443:10.217.0.9:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cd325bf7-5a1f-48df-b966-4cb50de55e08}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.454951 30089 services_controller.go:414] Built service openshift-operator-lifecycle-manager/packageserver-service LB 
per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.454961 30089 services_controller.go:415] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.454942 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.152:443:10.217.0.9:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cd325bf7-5a1f-48df-b966-4cb50de55e08}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.454978 30089 services_controller.go:421] Built service openshift-operator-lifecycle-manager/packageserver-service cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.230", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.43", Port:5443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.455009 30089 
services_controller.go:422] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB []services.LB{} 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.455019 30089 services_controller.go:423] Built service openshift-operator-lifecycle-manager/packageserver-service template LB []services.LB{} 2026-01-20T10:56:53.455050558+00:00 stderr F I0120 10:56:53.455027 30089 services_controller.go:424] Service openshift-operator-lifecycle-manager/packageserver-service has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.455135580+00:00 stderr F I0120 10:56:53.455043 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"68eacba8-7e00-494f-a0e9-a0d7f4ab5c77", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.230", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.43", Port:5443, Template:(*services.Template)(nil)}}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.455297615+00:00 stderr F I0120 10:56:53.455184 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.230:5443:10.217.0.43:5443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {68eacba8-7e00-494f-a0e9-a0d7f4ab5c77}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455297615+00:00 stderr F I0120 10:56:53.455216 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.230:5443:10.217.0.43:5443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {68eacba8-7e00-494f-a0e9-a0d7f4ab5c77}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455297615+00:00 stderr F I0120 10:56:53.452436 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-cainjector"} 2026-01-20T10:56:53.455318525+00:00 stderr F I0120 10:56:53.455296 30089 services_controller.go:336] Finished syncing service cert-manager-cainjector 
on namespace cert-manager : 4.668206ms 2026-01-20T10:56:53.455332186+00:00 stderr F I0120 10:56:53.455318 30089 services_controller.go:332] Processing sync for service openshift-apiserver/api 2026-01-20T10:56:53.455403027+00:00 stderr F I0120 10:56:53.452480 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455403027+00:00 stderr F I0120 10:56:53.455326 30089 services_controller.go:397] Service api retrieved from lister: &Service{ObjectMeta:{api openshift-apiserver fb5bd66d-5e82-4bcc-8126-39324a92dccc 5229 0 2024-06-26 12:47:09 +0000 UTC map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.69,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.69],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 
2026-01-20T10:56:53.455429678+00:00 stderr F I0120 10:56:53.455414 30089 lb_config.go:1016] Cluster endpoints for openshift-apiserver/api are: map[TCP/https:{8443 [10.217.0.82] []}] 2026-01-20T10:56:53.455443348+00:00 stderr F I0120 10:56:53.455429 30089 services_controller.go:413] Built service openshift-apiserver/api LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.69"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.82"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.455456389+00:00 stderr F I0120 10:56:53.455441 30089 services_controller.go:414] Built service openshift-apiserver/api LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.455456389+00:00 stderr F I0120 10:56:53.455448 30089 services_controller.go:415] Built service openshift-apiserver/api LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.455482810+00:00 stderr F I0120 10:56:53.452408 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"} 2026-01-20T10:56:53.455496520+00:00 stderr F I0120 10:56:53.455484 30089 services_controller.go:336] Finished syncing service cluster-version-operator on namespace openshift-cluster-version : 4.708327ms 2026-01-20T10:56:53.455496520+00:00 stderr F I0120 10:56:53.455474 30089 services_controller.go:421] Built service openshift-apiserver/api cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.69", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.455510780+00:00 stderr F I0120 10:56:53.455499 30089 services_controller.go:422] Built service openshift-apiserver/api per-node LB []services.LB{} 2026-01-20T10:56:53.455510780+00:00 stderr F I0120 10:56:53.455503 30089 services_controller.go:332] Processing sync for service openshift-machine-api/control-plane-machine-set-operator 2026-01-20T10:56:53.455523961+00:00 stderr F I0120 10:56:53.455508 30089 services_controller.go:423] Built service openshift-apiserver/api template LB []services.LB{} 2026-01-20T10:56:53.455523961+00:00 stderr F I0120 10:56:53.455517 30089 services_controller.go:424] Service openshift-apiserver/api has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.455575502+00:00 stderr F I0120 10:56:53.455522 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:0d:e7:11]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455575502+00:00 stderr F I0120 10:56:53.455532 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.69", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.455592692+00:00 stderr F I0120 10:56:53.455512 30089 services_controller.go:397] Service control-plane-machine-set-operator retrieved from lister: &Service{ObjectMeta:{control-plane-machine-set-operator openshift-machine-api 7c42fd7c-0955-49c7-819c-4685e0681272 4749 0 2024-06-26 12:39:09 +0000 UTC map[k8s-app:control-plane-machine-set-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:control-plane-machine-set-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159ff7 }] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:9443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: control-plane-machine-set-operator,},ClusterIP:10.217.5.136,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.136],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.455609573+00:00 stderr F I0120 10:56:53.455586 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455627953+00:00 stderr F I0120 10:56:53.455609 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-api/control-plane-machine-set-operator are: map[TCP/https:{9443 [10.217.0.20] []}] 2026-01-20T10:56:53.455643614+00:00 stderr F I0120 10:56:53.455628 30089 services_controller.go:413] Built service openshift-machine-api/control-plane-machine-set-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.136"}, protocol:"TCP", inport:9443, clusterEndpoints:services.lbEndpoints{Port:9443, V4IPs:[]string{"10.217.0.20"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.455657354+00:00 stderr F I0120 10:56:53.455642 30089 
services_controller.go:414] Built service openshift-machine-api/control-plane-machine-set-operator LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.455657354+00:00 stderr F I0120 10:56:53.455609 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:0d:e7:11]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455657354+00:00 stderr F I0120 10:56:53.455650 30089 services_controller.go:415] Built service openshift-machine-api/control-plane-machine-set-operator LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.455672835+00:00 stderr F I0120 10:56:53.455641 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.69:443:10.217.0.82:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455731566+00:00 stderr F I0120 10:56:53.455669 30089 services_controller.go:421] Built service openshift-machine-api/control-plane-machine-set-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.136", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.20", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.455731566+00:00 stderr F I0120 10:56:53.455722 30089 services_controller.go:422] Built service openshift-machine-api/control-plane-machine-set-operator per-node LB []services.LB{} 2026-01-20T10:56:53.455770887+00:00 stderr F I0120 10:56:53.455737 30089 services_controller.go:423] Built service openshift-machine-api/control-plane-machine-set-operator template LB []services.LB{} 2026-01-20T10:56:53.455770887+00:00 stderr F I0120 10:56:53.455750 30089 services_controller.go:424] Service openshift-machine-api/control-plane-machine-set-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.455818128+00:00 stderr F I0120 10:56:53.455676 30089 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.69:443:10.217.0.82:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455818128+00:00 stderr F I0120 10:56:53.455775 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"0549e0a0-7df6-4da0-b4ff-6834505eba14", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.136", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.20", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.455886840+00:00 stderr F I0120 10:56:53.452419 30089 ovs.go:162] Exec(39): stdout: "connected\n" 2026-01-20T10:56:53.455886840+00:00 stderr F I0120 10:56:53.455874 30089 ovs.go:163] Exec(39): stderr: "" 2026-01-20T10:56:53.455918791+00:00 stderr F I0120 10:56:53.455886 30089 default_node_network_controller.go:385] Node connection status = connected 2026-01-20T10:56:53.455918791+00:00 stderr F I0120 10:56:53.455901 30089 ovs.go:159] Exec(41): /usr/bin/ovs-vsctl --timeout=15 -- br-exists br-int 2026-01-20T10:56:53.455960302+00:00 stderr F I0120 10:56:53.455902 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.136:9443:10.217.0.20:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0549e0a0-7df6-4da0-b4ff-6834505eba14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.455984393+00:00 stderr F I0120 10:56:53.455950 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.136:9443:10.217.0.20:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{0549e0a0-7df6-4da0-b4ff-6834505eba14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.463291598+00:00 stderr F I0120 10:56:53.463218 30089 ovs.go:162] Exec(40): stdout: "1\n" 2026-01-20T10:56:53.463291598+00:00 stderr F I0120 10:56:53.463254 30089 ovs.go:163] Exec(40): stderr: "" 2026-01-20T10:56:53.463291598+00:00 stderr F I0120 10:56:53.463272 30089 ovs.go:159] Exec(42): /usr/bin/ovs-ofctl --no-stats --no-names dump-flows br-int table=65,out_port=1 2026-01-20T10:56:53.463429172+00:00 stderr F I0120 10:56:53.463378 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"} 2026-01-20T10:56:53.463429172+00:00 stderr F I0120 10:56:53.463414 30089 services_controller.go:336] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api : 7.908161ms 2026-01-20T10:56:53.463446002+00:00 stderr F I0120 10:56:53.463437 30089 services_controller.go:332] Processing sync for service openshift-marketplace/community-operators 2026-01-20T10:56:53.463496054+00:00 stderr F I0120 10:56:53.463456 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"} 2026-01-20T10:56:53.463496054+00:00 stderr F I0120 10:56:53.463478 30089 services_controller.go:336] Finished syncing service api on namespace openshift-apiserver : 8.160008ms 2026-01-20T10:56:53.463512204+00:00 stderr F I0120 10:56:53.463492 30089 services_controller.go:332] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2026-01-20T10:56:53.463512204+00:00 stderr F I0120 10:56:53.463501 30089 services_controller.go:336] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes : 9.31µs 2026-01-20T10:56:53.463527125+00:00 stderr F I0120 10:56:53.463510 30089 services_controller.go:332] Processing sync for 
service openshift-ingress-canary/ingress-canary 2026-01-20T10:56:53.463541135+00:00 stderr F I0120 10:56:53.463447 30089 services_controller.go:397] Service community-operators retrieved from lister: &Service{ObjectMeta:{community-operators openshift-marketplace daa5c70d-2f05-4c99-b062-49370cf4b7bd 6377 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:9y90X0LnOAvWXlE7PZKqH0sBNEP83PNwaUfqVB] map[] [{operators.coreos.com/v1alpha1 CatalogSource community-operators e583c58d-4569-4cab-9192-62c813516208 0xc0002fb89d 0xc0002fb89e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: community-operators,olm.managed: true,},ClusterIP:10.217.4.229,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.229],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.463595866+00:00 stderr F I0120 10:56:53.463560 30089 lb_config.go:1016] Cluster endpoints for openshift-marketplace/community-operators are: map[TCP/grpc:{50051 [10.217.0.35] []}] 2026-01-20T10:56:53.463595866+00:00 stderr F I0120 10:56:53.463517 30089 services_controller.go:397] Service ingress-canary retrieved from lister: &Service{ObjectMeta:{ingress-canary openshift-ingress-canary cd641ce4-6a02-4a0c-9222-6ab30b234450 10172 0 2024-06-26 12:54:01 +0000 UTC map[ingress.openshift.io/canary:canary_controller] map[] [{apps/v1 daemonset ingress-canary b5512a08-cd29-46f9-9661-4c860338b2ca 0xc0001597f7 }] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:8080-tcp,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},ServicePort{Name:8888-tcp,Protocol:TCP,Port:8888,TargetPort:{0 8888 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscanary.operator.openshift.io/daemonset-ingresscanary: canary_controller,},ClusterIP:10.217.4.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.463595866+00:00 stderr F I0120 10:56:53.463578 30089 services_controller.go:413] Built service openshift-marketplace/community-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.229"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.35"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.463595866+00:00 stderr F I0120 10:56:53.463590 30089 services_controller.go:414] Built service openshift-marketplace/community-operators LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.463614377+00:00 stderr F I0120 10:56:53.463595 30089 services_controller.go:415] Built service openshift-marketplace/community-operators LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.463628187+00:00 stderr F I0120 10:56:53.463591 30089 lb_config.go:1016] Cluster endpoints for openshift-ingress-canary/ingress-canary are: map[TCP/8080-tcp:{8080 [10.217.0.71] []} TCP/8888-tcp:{8888 
[10.217.0.71] []}] 2026-01-20T10:56:53.463641998+00:00 stderr F I0120 10:56:53.463616 30089 services_controller.go:421] Built service openshift-marketplace/community-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.229", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.35", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.463641998+00:00 stderr F I0120 10:56:53.463629 30089 services_controller.go:413] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.204"}, protocol:"TCP", inport:8080, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.71"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.204"}, protocol:"TCP", inport:8888, clusterEndpoints:services.lbEndpoints{Port:8888, V4IPs:[]string{"10.217.0.71"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.463661288+00:00 stderr F I0120 10:56:53.463638 30089 services_controller.go:422] Built service openshift-marketplace/community-operators per-node LB []services.LB{} 2026-01-20T10:56:53.463661288+00:00 stderr F I0120 10:56:53.463643 30089 
services_controller.go:414] Built service openshift-ingress-canary/ingress-canary LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.463661288+00:00 stderr F I0120 10:56:53.463646 30089 services_controller.go:423] Built service openshift-marketplace/community-operators template LB []services.LB{} 2026-01-20T10:56:53.463661288+00:00 stderr F I0120 10:56:53.463650 30089 services_controller.go:415] Built service openshift-ingress-canary/ingress-canary LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.463661288+00:00 stderr F I0120 10:56:53.463653 30089 services_controller.go:424] Service openshift-marketplace/community-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.463713059+00:00 stderr F I0120 10:56:53.463666 30089 services_controller.go:421] Built service openshift-ingress-canary/ingress-canary cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8080, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8080, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8888, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.463713059+00:00 stderr F I0120 10:56:53.463668 30089 services_controller.go:443] Services do 
not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"acd71d69-4870-4695-b8eb-935626516f5d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.229", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.35", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.463713059+00:00 stderr F I0120 10:56:53.463697 30089 services_controller.go:422] Built service openshift-ingress-canary/ingress-canary per-node LB []services.LB{} 2026-01-20T10:56:53.463713059+00:00 stderr F I0120 10:56:53.463705 30089 services_controller.go:423] Built service openshift-ingress-canary/ingress-canary template LB []services.LB{} 2026-01-20T10:56:53.463736870+00:00 stderr F I0120 10:56:53.463713 30089 services_controller.go:424] Service openshift-ingress-canary/ingress-canary has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.463788471+00:00 stderr F I0120 
10:56:53.463728 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"6fe909bf-0efe-41d7-93bd-ab2cc0acd4db", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8080, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8080, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8888, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.463831942+00:00 stderr F I0120 10:56:53.463779 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.229:50051:10.217.0.35:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {acd71d69-4870-4695-b8eb-935626516f5d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.463847293+00:00 stderr F I0120 10:56:53.463817 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.229:50051:10.217.0.35:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {acd71d69-4870-4695-b8eb-935626516f5d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.463903224+00:00 stderr F I0120 10:56:53.463850 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.204:8080:10.217.0.71:8080 10.217.4.204:8888:10.217.0.71:8888]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fe909bf-0efe-41d7-93bd-ab2cc0acd4db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464033928+00:00 stderr F I0120 10:56:53.463969 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-console/downloads"} 2026-01-20T10:56:53.464033928+00:00 stderr F I0120 10:56:53.463986 30089 services_controller.go:336] Finished syncing service downloads on namespace openshift-console : 11.914468ms 2026-01-20T10:56:53.464033928+00:00 stderr F I0120 10:56:53.463996 30089 services_controller.go:332] Processing sync for service openshift-oauth-apiserver/api 2026-01-20T10:56:53.464121210+00:00 stderr F I0120 10:56:53.464003 30089 services_controller.go:397] Service api retrieved from lister: &Service{ObjectMeta:{api openshift-oauth-apiserver 8ccd218c-b483-42f1-81ef-8a1e9a05f574 5246 0 2024-06-26 12:47:12 +0000 UTC map[app:openshift-oauth-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.114,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.114],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.464144791+00:00 stderr F I0120 10:56:53.464115 30089 lb_config.go:1016] Cluster endpoints for openshift-oauth-apiserver/api are: map[TCP/https:{8443 [10.217.0.39] []}] 
2026-01-20T10:56:53.464144791+00:00 stderr F I0120 10:56:53.464129 30089 services_controller.go:413] Built service openshift-oauth-apiserver/api LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.114"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.39"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.464159471+00:00 stderr F I0120 10:56:53.464141 30089 services_controller.go:414] Built service openshift-oauth-apiserver/api LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.464159471+00:00 stderr F I0120 10:56:53.464149 30089 services_controller.go:415] Built service openshift-oauth-apiserver/api LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.464215452+00:00 stderr F I0120 10:56:53.464163 30089 services_controller.go:421] Built service openshift-oauth-apiserver/api cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.114", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.39", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.464215452+00:00 stderr F I0120 10:56:53.464191 30089 services_controller.go:422] Built service openshift-oauth-apiserver/api per-node LB []services.LB{} 2026-01-20T10:56:53.464215452+00:00 stderr F I0120 10:56:53.464199 30089 
services_controller.go:423] Built service openshift-oauth-apiserver/api template LB []services.LB{} 2026-01-20T10:56:53.464215452+00:00 stderr F I0120 10:56:53.464205 30089 services_controller.go:424] Service openshift-oauth-apiserver/api has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.464267784+00:00 stderr F I0120 10:56:53.464216 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"7595c030-5437-4d83-9952-897bbf081592", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.114", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.39", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.464366596+00:00 stderr F I0120 10:56:53.464315 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} 
name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.114:443:10.217.0.39:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7595c030-5437-4d83-9952-897bbf081592}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464383797+00:00 stderr F I0120 10:56:53.464354 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.114:443:10.217.0.39:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7595c030-5437-4d83-9952-897bbf081592}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464383797+00:00 stderr F I0120 10:56:53.464367 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"} 2026-01-20T10:56:53.464398267+00:00 stderr F I0120 10:56:53.464357 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"} 2026-01-20T10:56:53.464416288+00:00 stderr F I0120 10:56:53.464397 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-controller-manager-operator : 9.919805ms 2026-01-20T10:56:53.464416288+00:00 stderr F I0120 10:56:53.464396 30089 services_controller.go:336] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager : 9.811852ms 2026-01-20T10:56:53.464432798+00:00 stderr F I0120 10:56:53.464425 30089 services_controller.go:332] Processing sync for service openshift-console/console 2026-01-20T10:56:53.464444898+00:00 stderr F I0120 10:56:53.464435 30089 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/catalog-operator-metrics 2026-01-20T10:56:53.464509830+00:00 stderr F I0120 10:56:53.464440 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464509830+00:00 stderr F I0120 10:56:53.464491 30089 transact.go:42] Configuring OVN: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464577372+00:00 stderr F I0120 10:56:53.464435 30089 services_controller.go:397] Service console retrieved from lister: &Service{ObjectMeta:{console openshift-console 5b0bdd1d-b81c-479c-9a03-f3ff2b5db014 9795 0 2024-06-26 12:53:44 +0000 UTC map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: ui,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.464593322+00:00 stderr F I0120 10:56:53.464446 30089 services_controller.go:397] Service catalog-operator-metrics retrieved from lister: &Service{ObjectMeta:{catalog-operator-metrics openshift-operator-lifecycle-manager 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72 5067 0 2024-06-26 12:39:23 +0000 UTC map[app:catalog-operator] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000a6e267 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
catalog-operator,},ClusterIP:10.217.5.17,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.17],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.464612193+00:00 stderr F I0120 10:56:53.464589 30089 lb_config.go:1016] Cluster endpoints for openshift-console/console are: map[TCP/https:{8443 [10.217.0.73] []}] 2026-01-20T10:56:53.464625653+00:00 stderr F I0120 10:56:53.464317 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"} 2026-01-20T10:56:53.464638263+00:00 stderr F I0120 10:56:53.464615 30089 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/catalog-operator-metrics are: map[TCP/https-metrics:{8443 [10.217.0.11] []}] 2026-01-20T10:56:53.464651264+00:00 stderr F I0120 10:56:53.464639 30089 services_controller.go:336] Finished syncing service community-operators on namespace openshift-marketplace : 1.186201ms 2026-01-20T10:56:53.464664744+00:00 stderr F I0120 10:56:53.464604 30089 services_controller.go:413] Built service openshift-console/console LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.140"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.73"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.464664744+00:00 stderr F I0120 10:56:53.464641 30089 services_controller.go:413] Built service 
openshift-operator-lifecycle-manager/catalog-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.17"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.464664744+00:00 stderr F I0120 10:56:53.464656 30089 services_controller.go:414] Built service openshift-console/console LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.464680295+00:00 stderr F I0120 10:56:53.464661 30089 services_controller.go:332] Processing sync for service openshift-authentication-operator/metrics 2026-01-20T10:56:53.464680295+00:00 stderr F I0120 10:56:53.464661 30089 services_controller.go:414] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.464680295+00:00 stderr F I0120 10:56:53.464663 30089 services_controller.go:415] Built service openshift-console/console LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.464680295+00:00 stderr F I0120 10:56:53.464673 30089 services_controller.go:415] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.464740746+00:00 stderr F I0120 10:56:53.464691 30089 services_controller.go:421] Built service openshift-console/console cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.140", Port:443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.73", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.464740746+00:00 stderr F I0120 10:56:53.464697 30089 services_controller.go:421] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.17", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.11", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.464740746+00:00 stderr F I0120 10:56:53.464721 30089 services_controller.go:422] Built service openshift-console/console per-node LB []services.LB{} 2026-01-20T10:56:53.464740746+00:00 stderr F I0120 10:56:53.464731 30089 services_controller.go:422] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics per-node LB []services.LB{} 2026-01-20T10:56:53.464740746+00:00 stderr F I0120 10:56:53.464733 30089 services_controller.go:423] Built service openshift-console/console template LB []services.LB{} 2026-01-20T10:56:53.464764967+00:00 stderr F I0120 10:56:53.464741 30089 services_controller.go:423] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics template LB []services.LB{} 2026-01-20T10:56:53.464764967+00:00 
stderr F I0120 10:56:53.464744 30089 services_controller.go:424] Service openshift-console/console has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.464764967+00:00 stderr F I0120 10:56:53.464685 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-authentication-operator 20ebd9ba-71d4-4753-8707-d87939791a19 4335 0 2024-06-26 12:39:09 +0000 UTC map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000158ad7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.51,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.51],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.464764967+00:00 stderr F I0120 10:56:53.464750 30089 services_controller.go:424] Service openshift-operator-lifecycle-manager/catalog-operator-metrics has 1 cluster-wide, 0 per-node configs, 0 template 
configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.464788297+00:00 stderr F I0120 10:56:53.464760 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"1b2be37e-8144-48c9-927b-da6e21dae8a9", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.140", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.73", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.464802038+00:00 stderr F I0120 10:56:53.464780 30089 lb_config.go:1016] Cluster endpoints for openshift-authentication-operator/metrics are: map[TCP/https:{8443 [10.217.0.19] []}] 2026-01-20T10:56:53.464815028+00:00 stderr F I0120 10:56:53.464798 30089 services_controller.go:413] Built service openshift-authentication-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.51"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.19"}, 
V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.464815028+00:00 stderr F I0120 10:56:53.464777 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"1a7d5584-355c-4545-8ea4-6c97fee9c8d6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.17", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.11", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.464828638+00:00 stderr F I0120 10:56:53.464813 30089 services_controller.go:414] Built service openshift-authentication-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.464828638+00:00 stderr F I0120 10:56:53.464821 30089 services_controller.go:415] Built service 
openshift-authentication-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.464886690+00:00 stderr F I0120 10:56:53.464839 30089 services_controller.go:421] Built service openshift-authentication-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.51", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.19", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.464886690+00:00 stderr F I0120 10:56:53.464871 30089 services_controller.go:422] Built service openshift-authentication-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.464908041+00:00 stderr F I0120 10:56:53.464884 30089 services_controller.go:423] Built service openshift-authentication-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.464908041+00:00 stderr F I0120 10:56:53.464893 30089 services_controller.go:424] Service openshift-authentication-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.464922431+00:00 stderr F I0120 10:56:53.464883 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:10.217.0.73:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b2be37e-8144-48c9-927b-da6e21dae8a9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464964372+00:00 stderr F I0120 10:56:53.464922 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:10.217.0.73:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b2be37e-8144-48c9-927b-da6e21dae8a9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464964372+00:00 stderr F I0120 10:56:53.464915 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"43d1a806-6d56-4f19-9c53-1ce78b0d24a1", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:true, 
EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.51", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.19", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.464964372+00:00 stderr F I0120 10:56:53.464915 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.17:8443:10.217.0.11:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a7d5584-355c-4545-8ea4-6c97fee9c8d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464985853+00:00 stderr F I0120 10:56:53.464938 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.464998673+00:00 stderr F I0120 10:56:53.464962 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.17:8443:10.217.0.11:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a7d5584-355c-4545-8ea4-6c97fee9c8d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465014623+00:00 stderr F I0120 10:56:53.464992 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465054234+00:00 stderr F I0120 10:56:53.465016 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465098945+00:00 stderr F I0120 10:56:53.465041 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.51:443:10.217.0.19:8443]}] Rows:[] Columns:[] Mutations:[] 
Timeout: Where:[where column _uuid == {43d1a806-6d56-4f19-9c53-1ce78b0d24a1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465129446+00:00 stderr F I0120 10:56:53.465095 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"} 2026-01-20T10:56:53.465129446+00:00 stderr F I0120 10:56:53.465106 30089 services_controller.go:336] Finished syncing service api on namespace openshift-oauth-apiserver : 1.109589ms 2026-01-20T10:56:53.465150517+00:00 stderr F I0120 10:56:53.465118 30089 services_controller.go:332] Processing sync for service openshift-marketplace/marketplace-operator-metrics 2026-01-20T10:56:53.465150517+00:00 stderr F I0120 10:56:53.465099 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.51:443:10.217.0.19:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {43d1a806-6d56-4f19-9c53-1ce78b0d24a1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465186448+00:00 stderr F I0120 10:56:53.465124 30089 services_controller.go:397] Service marketplace-operator-metrics retrieved from lister: &Service{ObjectMeta:{marketplace-operator-metrics openshift-marketplace 1bfd7637-f88e-403e-8d75-c71b380fc127 4909 0 2024-06-26 12:39:13 +0000 UTC map[name:marketplace-operator] map[capability.openshift.io/name:marketplace include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true 
include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:marketplace-operator-metrics service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fbc67 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.19,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.19],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.465200938+00:00 stderr F I0120 10:56:53.465189 30089 lb_config.go:1016] Cluster endpoints for openshift-marketplace/marketplace-operator-metrics are: map[TCP/https-metrics:{8081 [10.217.0.29] []} TCP/metrics:{8383 [10.217.0.29] []}] 2026-01-20T10:56:53.465213888+00:00 stderr F I0120 10:56:53.465200 30089 services_controller.go:413] Built service openshift-marketplace/marketplace-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.19"}, protocol:"TCP", inport:8383, clusterEndpoints:services.lbEndpoints{Port:8383, V4IPs:[]string{"10.217.0.29"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.19"}, protocol:"TCP", inport:8081, clusterEndpoints:services.lbEndpoints{Port:8081, V4IPs:[]string{"10.217.0.29"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.465226819+00:00 stderr F I0120 10:56:53.465210 30089 services_controller.go:414] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.465226819+00:00 stderr F I0120 10:56:53.465221 30089 services_controller.go:415] Built service openshift-marketplace/marketplace-operator-metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.465252710+00:00 stderr F I0120 10:56:53.465233 30089 services_controller.go:421] Built service openshift-marketplace/marketplace-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.29", Port:8383, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.29", Port:8081, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.465265680+00:00 stderr F I0120 10:56:53.465252 30089 services_controller.go:422] Built service 
openshift-marketplace/marketplace-operator-metrics per-node LB []services.LB{} 2026-01-20T10:56:53.465265680+00:00 stderr F I0120 10:56:53.465259 30089 services_controller.go:423] Built service openshift-marketplace/marketplace-operator-metrics template LB []services.LB{} 2026-01-20T10:56:53.465278670+00:00 stderr F I0120 10:56:53.465265 30089 services_controller.go:424] Service openshift-marketplace/marketplace-operator-metrics has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.465320461+00:00 stderr F I0120 10:56:53.465276 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"fd4c9213-5ce0-448d-a75e-3598556830dc", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.29", Port:8383, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8081, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{services.Addr{IP:"10.217.0.29", Port:8081, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.465383583+00:00 stderr F I0120 10:56:53.465354 30089 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.142339268 seconds. No OVN measurement. 2026-01-20T10:56:53.465432754+00:00 stderr F I0120 10:56:53.465359 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.19:8081:10.217.0.29:8081 10.217.5.19:8383:10.217.0.29:8383]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd4c9213-5ce0-448d-a75e-3598556830dc}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465531047+00:00 stderr F I0120 10:56:53.465467 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.19:8081:10.217.0.29:8081 10.217.5.19:8383:10.217.0.29:8383]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd4c9213-5ce0-448d-a75e-3598556830dc}] 
Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465629739+00:00 stderr F I0120 10:56:53.465594 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465645810+00:00 stderr F I0120 10:56:53.465625 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465658550+00:00 stderr F I0120 10:56:53.465638 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465734163+00:00 stderr F I0120 10:56:53.465513 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"} 2026-01-20T10:56:53.465784154+00:00 stderr F I0120 10:56:53.465765 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-authentication-operator : 1.09984ms 2026-01-20T10:56:53.465838136+00:00 stderr F I0120 10:56:53.465821 30089 services_controller.go:332] 
Processing sync for service openshift-controller-manager/controller-manager 2026-01-20T10:56:53.465941608+00:00 stderr F I0120 10:56:53.463889 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.204:8080:10.217.0.71:8080 10.217.4.204:8888:10.217.0.71:8888]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fe909bf-0efe-41d7-93bd-ab2cc0acd4db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.465970019+00:00 stderr F I0120 10:56:53.465533 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"} 2026-01-20T10:56:53.466012710+00:00 stderr F I0120 10:56:53.465984 30089 services_controller.go:336] Finished syncing service console on namespace openshift-console : 1.548461ms 2026-01-20T10:56:53.466028111+00:00 stderr F I0120 10:56:53.466012 30089 services_controller.go:332] Processing sync for service openshift-kube-controller-manager-operator/metrics 2026-01-20T10:56:53.466093002+00:00 stderr F I0120 10:56:53.466037 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.466167754+00:00 stderr F I0120 10:56:53.466020 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-controller-manager-operator 136038f9-f376-4b0b-8c75-a42240d176cc 
4549 0 2024-06-26 12:39:14 +0000 UTC map[app:kube-controller-manager-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-controller-manager-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159baf }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-controller-manager-operator,},ClusterIP:10.217.4.79,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.79],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.466186685+00:00 stderr F I0120 10:56:53.466168 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager-operator/metrics are: map[TCP/https:{8443 [10.217.0.15] []}] 2026-01-20T10:56:53.466199915+00:00 stderr F I0120 10:56:53.466186 30089 services_controller.go:413] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.79"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.15"}, 
V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.466213256+00:00 stderr F I0120 10:56:53.466198 30089 services_controller.go:414] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.466213256+00:00 stderr F I0120 10:56:53.466204 30089 services_controller.go:415] Built service openshift-kube-controller-manager-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.466264927+00:00 stderr F I0120 10:56:53.466222 30089 services_controller.go:421] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.79", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.15", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.466264927+00:00 stderr F I0120 10:56:53.466251 30089 services_controller.go:422] Built service openshift-kube-controller-manager-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.466264927+00:00 stderr F I0120 10:56:53.466261 30089 services_controller.go:423] Built service openshift-kube-controller-manager-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.466282037+00:00 stderr F I0120 10:56:53.466269 30089 services_controller.go:424] Service 
openshift-kube-controller-manager-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.466344399+00:00 stderr F I0120 10:56:53.466286 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"a30efd8e-3079-4ca5-a3fd-b660bc1089ea", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.79", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.15", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.466381910+00:00 stderr F I0120 10:56:53.465870 30089 services_controller.go:397] Service controller-manager retrieved from lister: &Service{ObjectMeta:{controller-manager openshift-controller-manager 2222c363-21dc-4d99-b2be-80dc3cdf8209 4361 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:openshift-controller-manager] 
map[operator.openshift.io/spec-hash:b3b96749ab82e4de02ef6aa9f0e168108d09315e18d73931c12251d267378e74 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{controller-manager: true,},ClusterIP:10.217.4.104,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.104],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.466472942+00:00 stderr F I0120 10:56:53.466413 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.79:443:10.217.0.15:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a30efd8e-3079-4ca5-a3fd-b660bc1089ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.466489493+00:00 stderr F I0120 10:56:53.466458 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.79:443:10.217.0.15:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a30efd8e-3079-4ca5-a3fd-b660bc1089ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.466525854+00:00 stderr F I0120 10:56:53.466438 30089 lb_config.go:1016] Cluster endpoints for openshift-controller-manager/controller-manager are: map[TCP/https:{8443 [10.217.0.87] []}] 2026-01-20T10:56:53.466589555+00:00 stderr F I0120 10:56:53.466561 30089 services_controller.go:413] Built service openshift-controller-manager/controller-manager LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.104"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.87"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.466640887+00:00 stderr F I0120 10:56:53.465984 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"} 2026-01-20T10:56:53.466640887+00:00 stderr F I0120 10:56:53.466628 30089 services_controller.go:336] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace : 1.50783ms 2026-01-20T10:56:53.466656567+00:00 stderr F I0120 10:56:53.466641 30089 services_controller.go:332] Processing sync for service openshift-multus/multus-admission-controller 2026-01-20T10:56:53.466697418+00:00 stderr F I0120 10:56:53.466676 30089 
services_controller.go:414] Built service openshift-controller-manager/controller-manager LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.466744969+00:00 stderr F I0120 10:56:53.466728 30089 services_controller.go:415] Built service openshift-controller-manager/controller-manager LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.466944585+00:00 stderr F I0120 10:56:53.466893 30089 services_controller.go:421] Built service openshift-controller-manager/controller-manager cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.104", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.87", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.467009846+00:00 stderr F I0120 10:56:53.466989 30089 services_controller.go:422] Built service openshift-controller-manager/controller-manager per-node LB []services.LB{} 2026-01-20T10:56:53.467081638+00:00 stderr F I0120 10:56:53.467042 30089 services_controller.go:423] Built service openshift-controller-manager/controller-manager template LB []services.LB{} 2026-01-20T10:56:53.467145020+00:00 stderr F I0120 10:56:53.467125 30089 services_controller.go:424] Service openshift-controller-manager/controller-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.467251423+00:00 stderr F I0120 10:56:53.467188 30089 
services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"b3787e47-e57b-411b-a351-f5dcf926f4a7", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.104", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.87", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.467459308+00:00 stderr F I0120 10:56:53.467399 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.104:443:10.217.0.87:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {b3787e47-e57b-411b-a351-f5dcf926f4a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.467545870+00:00 stderr F I0120 10:56:53.467495 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.104:443:10.217.0.87:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3787e47-e57b-411b-a351-f5dcf926f4a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.467762766+00:00 stderr F I0120 10:56:53.466109 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.467845568+00:00 stderr F I0120 10:56:53.467798 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.467972902+00:00 stderr F I0120 10:56:53.467927 30089 loadbalancer.go:304] 
Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"} 2026-01-20T10:56:53.467972902+00:00 stderr F I0120 10:56:53.467954 30089 services_controller.go:336] Finished syncing service controller-manager on namespace openshift-controller-manager : 2.134106ms 2026-01-20T10:56:53.467989332+00:00 stderr F I0120 10:56:53.467971 30089 services_controller.go:332] Processing sync for service openshift-network-diagnostics/network-check-target 2026-01-20T10:56:53.468135506+00:00 stderr F I0120 10:56:53.466998 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"} 2026-01-20T10:56:53.468198847+00:00 stderr F I0120 10:56:53.468179 30089 services_controller.go:336] Finished syncing service ingress-canary on namespace openshift-ingress-canary : 4.662463ms 2026-01-20T10:56:53.468236338+00:00 stderr F I0120 10:56:53.466829 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"} 2026-01-20T10:56:53.468274729+00:00 stderr F I0120 10:56:53.466649 30089 services_controller.go:397] Service multus-admission-controller retrieved from lister: &Service{ObjectMeta:{multus-admission-controller openshift-multus 35568373-18ec-4ba2-8d18-12de10aa5a3f 5005 0 2024-06-26 12:45:47 +0000 UTC map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{operator.openshift.io/v1 Network cluster 5ca11404-f665-4aa0-85cf-da2f3e9c86ad 0xc000c06227 0xc000c06228}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: multus-admission-controller,},ClusterIP:10.217.4.247,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.247],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.468321591+00:00 stderr F I0120 10:56:53.465546 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"} 2026-01-20T10:56:53.468357422+00:00 stderr F I0120 10:56:53.467982 30089 services_controller.go:397] Service network-check-target retrieved from lister: &Service{ObjectMeta:{network-check-target openshift-network-diagnostics 151fdab6-cca2-4880-a96c-48e605cc8d3d 2803 0 2024-06-26 12:45:59 +0000 UTC map[] map[] [{operator.openshift.io/v1 Network cluster 5ca11404-f665-4aa0-85cf-da2f3e9c86ad 0xc000a6e057 0xc000a6e058}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
network-check-target,},ClusterIP:10.217.5.248,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.248],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.468440744+00:00 stderr F I0120 10:56:53.468415 30089 lb_config.go:1016] Cluster endpoints for openshift-network-diagnostics/network-check-target are: map[TCP/:{8080 [10.217.0.4] []}] 2026-01-20T10:56:53.468503365+00:00 stderr F I0120 10:56:53.468476 30089 services_controller.go:413] Built service openshift-network-diagnostics/network-check-target LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.248"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.4"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.468549488+00:00 stderr F I0120 10:56:53.468533 30089 services_controller.go:414] Built service openshift-network-diagnostics/network-check-target LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.468595909+00:00 stderr F I0120 10:56:53.468579 30089 services_controller.go:415] Built service openshift-network-diagnostics/network-check-target LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.468681031+00:00 stderr F I0120 10:56:53.468639 30089 services_controller.go:421] Built service openshift-network-diagnostics/network-check-target cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.248", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.4", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.468730462+00:00 stderr F I0120 10:56:53.468713 30089 services_controller.go:422] Built service openshift-network-diagnostics/network-check-target per-node LB []services.LB{} 2026-01-20T10:56:53.468778054+00:00 stderr F I0120 10:56:53.468761 30089 services_controller.go:423] Built service openshift-network-diagnostics/network-check-target template LB []services.LB{} 2026-01-20T10:56:53.468828275+00:00 stderr F I0120 10:56:53.468811 30089 services_controller.go:424] Service openshift-network-diagnostics/network-check-target has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.468945558+00:00 stderr F I0120 10:56:53.468876 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"64185ba6-b0f4-4c6d-b401-fb03d791f35d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: 
[]services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.248", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.4", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.469207825+00:00 stderr F I0120 10:56:53.469127 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.248:80:10.217.0.4:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64185ba6-b0f4-4c6d-b401-fb03d791f35d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469319568+00:00 stderr F I0120 10:56:53.469263 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469367199+00:00 stderr F I0120 10:56:53.469326 30089 model_client.go:397] Mutate 
operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469383209+00:00 stderr F I0120 10:56:53.469353 30089 transact.go:42] Configuring OVN: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469428010+00:00 stderr F I0120 10:56:53.469261 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.248:80:10.217.0.4:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64185ba6-b0f4-4c6d-b401-fb03d791f35d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469481992+00:00 stderr F I0120 10:56:53.469457 30089 services_controller.go:332] Processing sync for service cert-manager/cert-manager 2026-01-20T10:56:53.469499882+00:00 stderr F I0120 10:56:53.469483 30089 lb_config.go:1016] Cluster endpoints for 
openshift-multus/multus-admission-controller are: map[TCP/metrics:{8443 [10.217.0.32] []} TCP/webhook:{6443 [10.217.0.32] []}] 2026-01-20T10:56:53.469547884+00:00 stderr F I0120 10:56:53.469508 30089 services_controller.go:413] Built service openshift-multus/multus-admission-controller LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.247"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"10.217.0.32"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.247"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.32"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.469547884+00:00 stderr F I0120 10:56:53.469530 30089 services_controller.go:414] Built service openshift-multus/multus-admission-controller LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.469547884+00:00 stderr F I0120 10:56:53.469471 30089 services_controller.go:397] Service cert-manager retrieved from lister: &Service{ObjectMeta:{cert-manager cert-manager 3e6e73db-5a03-46ff-9ebd-6afc528fb218 42884 0 2026-01-20 10:56:34 +0000 UTC map[app:cert-manager app.kubernetes.io/component:controller app.kubernetes.io/instance:cert-manager app.kubernetes.io/name:cert-manager app.kubernetes.io/version:v1.19.2] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:tcp-prometheus-servicemonitor,Protocol:TCP,Port:9402,TargetPort:{1 0 http-metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: cert-manager,app.kubernetes.io/name: 
cert-manager,},ClusterIP:10.217.5.219,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.219],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.469547884+00:00 stderr F I0120 10:56:53.469541 30089 services_controller.go:415] Built service openshift-multus/multus-admission-controller LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.469569604+00:00 stderr F I0120 10:56:53.469557 30089 lb_config.go:1016] Cluster endpoints for cert-manager/cert-manager are: map[TCP/tcp-prometheus-servicemonitor:{9402 [10.217.0.42] []}] 2026-01-20T10:56:53.469582894+00:00 stderr F I0120 10:56:53.469569 30089 services_controller.go:413] Built service cert-manager/cert-manager LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.219"}, protocol:"TCP", inport:9402, clusterEndpoints:services.lbEndpoints{Port:9402, V4IPs:[]string{"10.217.0.42"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.469582894+00:00 stderr F I0120 10:56:53.469579 30089 services_controller.go:414] Built service cert-manager/cert-manager LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.469601845+00:00 stderr F I0120 10:56:53.469583 30089 services_controller.go:415] Built service cert-manager/cert-manager LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.469614575+00:00 stderr F I0120 10:56:53.469594 30089 services_controller.go:421] Built service cert-manager/cert-manager cluster-wide LB 
[]services.LB{services.LB{Name:"Service_cert-manager/cert-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.219", Port:9402, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.42", Port:9402, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.469614575+00:00 stderr F I0120 10:56:53.469611 30089 services_controller.go:422] Built service cert-manager/cert-manager per-node LB []services.LB{} 2026-01-20T10:56:53.469628546+00:00 stderr F I0120 10:56:53.469616 30089 services_controller.go:423] Built service cert-manager/cert-manager template LB []services.LB{} 2026-01-20T10:56:53.469628546+00:00 stderr F I0120 10:56:53.469571 30089 services_controller.go:421] Built service openshift-multus/multus-admission-controller cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:6443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:8443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.469628546+00:00 stderr F I0120 10:56:53.469623 30089 services_controller.go:424] Service cert-manager/cert-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.469644056+00:00 stderr F I0120 10:56:53.469628 30089 services_controller.go:422] Built service openshift-multus/multus-admission-controller per-node LB []services.LB{} 2026-01-20T10:56:53.469644056+00:00 stderr F I0120 10:56:53.469640 30089 services_controller.go:423] Built service openshift-multus/multus-admission-controller template LB []services.LB{} 2026-01-20T10:56:53.469658046+00:00 stderr F I0120 10:56:53.469648 30089 services_controller.go:424] Service openshift-multus/multus-admission-controller has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.469670427+00:00 stderr F I0120 10:56:53.469637 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_cert-manager/cert-manager_TCP_cluster", UUID:"08b19f46-a8f8-496c-9d21-410cf6e473e6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_cert-manager/cert-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager"}, Opts:services.LBOpts{Reject:true, 
EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.219", Port:9402, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.42", Port:9402, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.469730508+00:00 stderr F I0120 10:56:53.469666 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"8f8bf377-ba57-45b8-b726-f3b9236cc1ab", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:6443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:8443, Template:(*services.Template)(nil)}}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.469772209+00:00 stderr F I0120 10:56:53.469732 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:cert-manager/cert-manager]} name:Service_cert-manager/cert-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:9402:10.217.0.42:9402]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {08b19f46-a8f8-496c-9d21-410cf6e473e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469786830+00:00 stderr F I0120 10:56:53.469761 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:cert-manager/cert-manager]} name:Service_cert-manager/cert-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.219:9402:10.217.0.42:9402]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {08b19f46-a8f8-496c-9d21-410cf6e473e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469856762+00:00 stderr F I0120 10:56:53.469830 30089 services_controller.go:336] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager : 5.397573ms 2026-01-20T10:56:53.469856762+00:00 stderr F I0120 10:56:53.469804 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} 
name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.247:443:10.217.0.32:6443 10.217.4.247:8443:10.217.0.32:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f8bf377-ba57-45b8-b726-f3b9236cc1ab}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469856762+00:00 stderr F I0120 10:56:53.469843 30089 services_controller.go:332] Processing sync for service openshift-apiserver/check-endpoints 2026-01-20T10:56:53.469959734+00:00 stderr F I0120 10:56:53.469896 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.247:443:10.217.0.32:6443 10.217.4.247:8443:10.217.0.32:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f8bf377-ba57-45b8-b726-f3b9236cc1ab}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.469959734+00:00 stderr F I0120 10:56:53.469850 30089 services_controller.go:397] Service check-endpoints retrieved from lister: &Service{ObjectMeta:{check-endpoints openshift-apiserver 435aa879-8965-459a-9b2a-dfd8f8924b3a 5567 0 2024-06-26 12:47:30 +0000 UTC map[prometheus:openshift-apiserver-check-endpoints] 
map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000158a17 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:check-endpoints,Protocol:TCP,Port:17698,TargetPort:{0 17698 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.23,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.23],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.469979915+00:00 stderr F I0120 10:56:53.469969 30089 lb_config.go:1016] Cluster endpoints for openshift-apiserver/check-endpoints are: map[TCP/check-endpoints:{17698 [10.217.0.82] []}] 2026-01-20T10:56:53.469998735+00:00 stderr F I0120 10:56:53.469979 30089 services_controller.go:413] Built service openshift-apiserver/check-endpoints LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.23"}, protocol:"TCP", inport:17698, clusterEndpoints:services.lbEndpoints{Port:17698, V4IPs:[]string{"10.217.0.82"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.470018476+00:00 stderr F I0120 10:56:53.470001 30089 services_controller.go:414] Built service openshift-apiserver/check-endpoints LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.470018476+00:00 stderr F I0120 
10:56:53.470006 30089 services_controller.go:415] Built service openshift-apiserver/check-endpoints LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.470084288+00:00 stderr F I0120 10:56:53.470018 30089 services_controller.go:421] Built service openshift-apiserver/check-endpoints cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.23", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:17698, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.470084288+00:00 stderr F I0120 10:56:53.469440 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator : 3.423701ms 2026-01-20T10:56:53.470084288+00:00 stderr F I0120 10:56:53.470039 30089 services_controller.go:422] Built service openshift-apiserver/check-endpoints per-node LB []services.LB{} 2026-01-20T10:56:53.470084288+00:00 stderr F I0120 10:56:53.470049 30089 services_controller.go:423] Built service openshift-apiserver/check-endpoints template LB []services.LB{} 2026-01-20T10:56:53.470084288+00:00 stderr F I0120 10:56:53.470056 30089 services_controller.go:424] Service openshift-apiserver/check-endpoints has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.470084288+00:00 stderr F I0120 10:56:53.470077 30089 services_controller.go:332] Processing sync for service 
openshift-network-diagnostics/network-check-source 2026-01-20T10:56:53.470112578+00:00 stderr F I0120 10:56:53.470090 30089 services_controller.go:336] Finished syncing service network-check-source on namespace openshift-network-diagnostics : 31.281µs 2026-01-20T10:56:53.470112578+00:00 stderr F I0120 10:56:53.470101 30089 services_controller.go:332] Processing sync for service openshift-console-operator/metrics 2026-01-20T10:56:53.470126549+00:00 stderr F I0120 10:56:53.470087 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"9644365b-5102-49b0-be5c-9e41d7162eca", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.23", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:17698, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.470154769+00:00 stderr F I0120 10:56:53.470134 30089 ovs.go:162] Exec(41): stdout: "" 2026-01-20T10:56:53.470154769+00:00 stderr F I0120 10:56:53.470146 30089 
ovs.go:163] Exec(41): stderr: "" 2026-01-20T10:56:53.470168840+00:00 stderr F I0120 10:56:53.470158 30089 ovs.go:159] Exec(43): /usr/bin/ovs-ofctl dump-aggregate br-int 2026-01-20T10:56:53.470230551+00:00 stderr F I0120 10:56:53.470109 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-console-operator 793d323e-de30-470a-af76-520af7b2dad8 9604 0 2024-06-26 12:53:34 +0000 UTC map[name:console-operator] map[capability.openshift.io/name:Console include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000158f17 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: console-operator,},ClusterIP:10.217.5.211,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.211],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.470246052+00:00 stderr F I0120 10:56:53.470210 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.23:17698:10.217.0.82:17698]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9644365b-5102-49b0-be5c-9e41d7162eca}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.470246052+00:00 stderr F I0120 10:56:53.470233 30089 lb_config.go:1016] Cluster endpoints for openshift-console-operator/metrics are: map[TCP/https:{8443 [10.217.0.62] []}] 2026-01-20T10:56:53.470263252+00:00 stderr F I0120 10:56:53.470240 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.23:17698:10.217.0.82:17698]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9644365b-5102-49b0-be5c-9e41d7162eca}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.470277023+00:00 stderr F I0120 10:56:53.470251 30089 services_controller.go:413] Built service openshift-console-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.211"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.62"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.470277023+00:00 stderr F I0120 10:56:53.470269 30089 services_controller.go:414] 
Built service openshift-console-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.470296743+00:00 stderr F I0120 10:56:53.470276 30089 services_controller.go:415] Built service openshift-console-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.470354915+00:00 stderr F I0120 10:56:53.470303 30089 services_controller.go:421] Built service openshift-console-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.211", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.62", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.470354915+00:00 stderr F I0120 10:56:53.470335 30089 services_controller.go:422] Built service openshift-console-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.470354915+00:00 stderr F I0120 10:56:53.470344 30089 services_controller.go:423] Built service openshift-console-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.470372275+00:00 stderr F I0120 10:56:53.470352 30089 services_controller.go:424] Service openshift-console-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.470425936+00:00 stderr F I0120 10:56:53.470368 30089 services_controller.go:443] Services do not match, existing lbs: 
[]services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.211", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.62", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.470665703+00:00 stderr F I0120 10:56:53.470493 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.211:443:10.217.0.62:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.470665703+00:00 stderr F I0120 10:56:53.470535 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.211:443:10.217.0.62:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471798 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager"} 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471833 30089 services_controller.go:336] Finished syncing service cert-manager on namespace cert-manager : 2.375983ms 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471849 30089 services_controller.go:332] Processing sync for service openshift-authentication/oauth-openshift 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471856 30089 services_controller.go:397] Service oauth-openshift retrieved from lister: &Service{ObjectMeta:{oauth-openshift openshift-authentication 64190ecd-229c-482a-966a-b5649b5042ed 5248 0 2024-06-26 12:47:15 +0000 UTC map[app:oauth-openshift] map[operator.openshift.io/spec-hash:d9e6d53076d47ab2d123d8b1ba8ec6543488d973dcc4e02349493cd1c33bce83 service.alpha.openshift.io/serving-cert-secret-name:v4-0-config-system-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: oauth-openshift,},ClusterIP:10.217.4.40,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.40],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471929 30089 lb_config.go:1016] Cluster endpoints for openshift-authentication/oauth-openshift are: map[TCP/https:{6443 [10.217.0.72] []}] 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471944 30089 services_controller.go:413] Built service openshift-authentication/oauth-openshift LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.40"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"10.217.0.72"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471956 30089 services_controller.go:414] Built service openshift-authentication/oauth-openshift LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471962 30089 services_controller.go:415] Built service openshift-authentication/oauth-openshift LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.471980 30089 services_controller.go:421] Built service openshift-authentication/oauth-openshift cluster-wide LB 
[]services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.40", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.72", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472002 30089 services_controller.go:422] Built service openshift-authentication/oauth-openshift per-node LB []services.LB{} 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472011 30089 services_controller.go:423] Built service openshift-authentication/oauth-openshift template LB []services.LB{} 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472043 30089 services_controller.go:424] Service openshift-authentication/oauth-openshift has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472074 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"01ea2a7a-b4a0-46ba-a9ec-611e94b854e1", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.40", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.72", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472196 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:10.217.0.72:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01ea2a7a-b4a0-46ba-a9ec-611e94b854e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472245 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:10.217.0.72:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01ea2a7a-b4a0-46ba-a9ec-611e94b854e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472447 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"} 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472465 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-console-operator : 2.363223ms 2026-01-20T10:56:53.472517202+00:00 stderr F I0120 10:56:53.472479 30089 services_controller.go:332] Processing sync for service openshift-ingress/router-internal-default 2026-01-20T10:56:53.472576404+00:00 stderr F I0120 10:56:53.472486 30089 services_controller.go:397] Service router-internal-default retrieved from lister: &Service{ObjectMeta:{router-internal-default openshift-ingress 3ded9605-ced3-4583-97b6-f93264b463a7 7398 0 2024-06-26 12:48:38 +0000 UTC map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{apps/v1 Deployment router-default 9ae4d312-7fc4-4344-ab7a-669da95f56bf 0xc00015997e }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: 
default,},ClusterIP:10.217.4.220,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.220],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.472628055+00:00 stderr F I0120 10:56:53.472588 30089 lb_config.go:1016] Cluster endpoints for openshift-ingress/router-internal-default are: map[TCP/http:{80 [192.168.126.11] []} TCP/https:{443 [192.168.126.11] []} TCP/metrics:{1936 [192.168.126.11] []}] 2026-01-20T10:56:53.472628055+00:00 stderr F I0120 10:56:53.472619 30089 services_controller.go:413] Built service openshift-ingress/router-internal-default LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.472646785+00:00 stderr F I0120 10:56:53.472625 30089 services_controller.go:414] Built service openshift-ingress/router-internal-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:80, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:1936, clusterEndpoints:services.lbEndpoints{Port:1936, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.472659806+00:00 stderr F I0120 10:56:53.472643 30089 services_controller.go:415] Built service openshift-ingress/router-internal-default LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.472681666+00:00 stderr F I0120 10:56:53.472674 30089 services_controller.go:421] Built service openshift-ingress/router-internal-default cluster-wide LB []services.LB{} 2026-01-20T10:56:53.472729008+00:00 stderr F I0120 10:56:53.472681 30089 services_controller.go:422] Built service openshift-ingress/router-internal-default per-node LB []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:80, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:1936, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.472729008+00:00 stderr F I0120 10:56:53.472717 30089 services_controller.go:423] Built service 
openshift-ingress/router-internal-default template LB []services.LB{} 2026-01-20T10:56:53.472729008+00:00 stderr F I0120 10:56:53.472725 30089 services_controller.go:424] Service openshift-ingress/router-internal-default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.472786279+00:00 stderr F I0120 10:56:53.472736 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"a76db1b6-b60a-43ed-8934-a7defee207e6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:80, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:1936, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:1936, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.472887342+00:00 stderr F I0120 10:56:53.472841 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.220:1936:192.168.126.11:1936 10.217.4.220:443:192.168.126.11:443 10.217.4.220:80:192.168.126.11:80]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a76db1b6-b60a-43ed-8934-a7defee207e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.472910082+00:00 stderr F I0120 10:56:53.472874 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.220:1936:192.168.126.11:1936 10.217.4.220:443:192.168.126.11:443 10.217.4.220:80:192.168.126.11:80]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a76db1b6-b60a-43ed-8934-a7defee207e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.472910082+00:00 stderr F I0120 10:56:53.472888 30089 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"} 2026-01-20T10:56:53.472924783+00:00 stderr F I0120 10:56:53.472906 30089 services_controller.go:336] Finished syncing service oauth-openshift on namespace openshift-authentication : 1.056847ms 2026-01-20T10:56:53.472924783+00:00 stderr F I0120 10:56:53.472921 30089 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-controller 2026-01-20T10:56:53.473009375+00:00 stderr F I0120 10:56:53.472927 30089 services_controller.go:397] Service machine-config-controller retrieved from lister: &Service{ObjectMeta:{machine-config-controller openshift-machine-config-operator 3ff83f1a-4058-4b9e-a4fd-83f51836c82e 4847 0 2024-06-26 12:39:14 +0000 UTC map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fad5b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: 
machine-config-controller,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.473009375+00:00 stderr F I0120 10:56:53.472977 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"} 2026-01-20T10:56:53.473009375+00:00 stderr F I0120 10:56:53.472996 30089 services_controller.go:336] Finished syncing service network-check-target on namespace openshift-network-diagnostics : 5.025903ms 2026-01-20T10:56:53.473033386+00:00 stderr F I0120 10:56:53.473005 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-controller are: map[TCP/metrics:{9001 [10.217.0.63] []}] 2026-01-20T10:56:53.473033386+00:00 stderr F I0120 10:56:53.473022 30089 services_controller.go:332] Processing sync for service openshift-route-controller-manager/route-controller-manager 2026-01-20T10:56:53.473033386+00:00 stderr F I0120 10:56:53.473018 30089 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.214"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"10.217.0.63"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.473049276+00:00 stderr F I0120 10:56:53.473032 30089 
services_controller.go:414] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.473049276+00:00 stderr F I0120 10:56:53.473038 30089 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-controller LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.473092847+00:00 stderr F I0120 10:56:53.473030 30089 services_controller.go:397] Service route-controller-manager retrieved from lister: &Service{ObjectMeta:{route-controller-manager openshift-route-controller-manager 105b901a-a54d-4dae-b7b6-99e83d48166f 5156 0 2024-06-26 12:47:06 +0000 UTC map[prometheus:route-controller-manager] map[operator.openshift.io/spec-hash:a480352ea60c2dcd2b3870bf0c3650528ef9b51aaa3fe6baa1e3711da18fffa3 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{route-controller-manager: true,},ClusterIP:10.217.5.173,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.173],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.473092847+00:00 stderr F I0120 10:56:53.473054 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"} 2026-01-20T10:56:53.473114958+00:00 stderr F I0120 10:56:53.473096 30089 services_controller.go:336] Finished syncing service check-endpoints on namespace openshift-apiserver : 3.250776ms 2026-01-20T10:56:53.473114958+00:00 stderr F I0120 10:56:53.473099 30089 lb_config.go:1016] Cluster endpoints for openshift-route-controller-manager/route-controller-manager are: map[TCP/https:{8443 [10.217.0.88] []}] 2026-01-20T10:56:53.473129678+00:00 stderr F I0120 10:56:53.473051 30089 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.214", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.63", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.473129678+00:00 stderr F I0120 10:56:53.473115 30089 services_controller.go:332] Processing sync for service openshift-kube-apiserver-operator/metrics 2026-01-20T10:56:53.473129678+00:00 stderr F I0120 10:56:53.473111 30089 services_controller.go:413] Built service openshift-route-controller-manager/route-controller-manager LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.173"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.88"}, V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.473129678+00:00 stderr F I0120 10:56:53.473120 30089 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-controller per-node LB []services.LB{} 2026-01-20T10:56:53.473151079+00:00 stderr F I0120 10:56:53.473127 30089 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-controller template LB []services.LB{} 2026-01-20T10:56:53.473151079+00:00 stderr F I0120 10:56:53.473124 30089 services_controller.go:414] Built service openshift-route-controller-manager/route-controller-manager LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.473151079+00:00 stderr F I0120 10:56:53.473134 30089 services_controller.go:424] Service openshift-machine-config-operator/machine-config-controller has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.473151079+00:00 stderr F I0120 10:56:53.473136 30089 services_controller.go:415] Built service openshift-route-controller-manager/route-controller-manager LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.473199220+00:00 stderr F I0120 10:56:53.473152 30089 services_controller.go:421] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.173", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{services.Addr{IP:"10.217.0.88", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.473199220+00:00 stderr F I0120 10:56:53.473181 30089 services_controller.go:422] Built service openshift-route-controller-manager/route-controller-manager per-node LB []services.LB{} 2026-01-20T10:56:53.473199220+00:00 stderr F I0120 10:56:53.473123 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-apiserver-operator ed79a864-3d59-456e-8a6c-724ec68e6d1b 4515 0 2024-06-26 12:39:27 +0000 UTC map[app:kube-apiserver-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159a7b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
kube-apiserver-operator,},ClusterIP:10.217.5.31,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.31],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.473199220+00:00 stderr F I0120 10:56:53.473188 30089 services_controller.go:423] Built service openshift-route-controller-manager/route-controller-manager template LB []services.LB{} 2026-01-20T10:56:53.473222530+00:00 stderr F I0120 10:56:53.473199 30089 services_controller.go:424] Service openshift-route-controller-manager/route-controller-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.473222530+00:00 stderr F I0120 10:56:53.473210 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-apiserver-operator/metrics are: map[TCP/https:{8443 [10.217.0.7] []}] 2026-01-20T10:56:53.473240141+00:00 stderr F I0120 10:56:53.473226 30089 services_controller.go:413] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.31"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.7"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.473240141+00:00 stderr F I0120 10:56:53.473212 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", 
UUID:"5536fef9-19d0-4375-b45b-e06dafe12061", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.173", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.88", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.473254761+00:00 stderr F I0120 10:56:53.473239 30089 services_controller.go:414] Built service openshift-kube-apiserver-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.473254761+00:00 stderr F I0120 10:56:53.473247 30089 services_controller.go:415] Built service openshift-kube-apiserver-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.473302253+00:00 stderr F I0120 10:56:53.473148 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"14fa73a9-9675-4207-8711-28031fe0d8db", Protocol:"tcp", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.214", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.63", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.473302253+00:00 stderr F I0120 10:56:53.473263 30089 services_controller.go:421] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.31", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.7", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.473302253+00:00 stderr F I0120 10:56:53.473288 30089 services_controller.go:422] Built service openshift-kube-apiserver-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.473302253+00:00 stderr F I0120 10:56:53.473297 30089 services_controller.go:423] Built service openshift-kube-apiserver-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.473325793+00:00 stderr F I0120 10:56:53.473307 30089 services_controller.go:424] Service openshift-kube-apiserver-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.473325793+00:00 stderr F I0120 10:56:53.473302 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.173:443:10.217.0.88:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5536fef9-19d0-4375-b45b-e06dafe12061}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.473367354+00:00 stderr F I0120 10:56:53.473327 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.173:443:10.217.0.88:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5536fef9-19d0-4375-b45b-e06dafe12061}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.473367354+00:00 stderr F I0120 10:56:53.473326 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"11ea2791-06de-4f67-9dea-91c73a312b37", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.31", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.7", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.473388495+00:00 stderr F I0120 10:56:53.473350 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} 
name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.214:9001:10.217.0.63:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {14fa73a9-9675-4207-8711-28031fe0d8db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.473402895+00:00 stderr F I0120 10:56:53.473375 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.214:9001:10.217.0.63:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {14fa73a9-9675-4207-8711-28031fe0d8db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.473454527+00:00 stderr F I0120 10:56:53.472982 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"} 2026-01-20T10:56:53.473518048+00:00 stderr F I0120 10:56:53.473462 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.31:443:10.217.0.7:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11ea2791-06de-4f67-9dea-91c73a312b37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.473518048+00:00 stderr F I0120 10:56:53.473488 30089 ovs.go:162] Exec(42): stdout: " cookie=0xaad46e3b, table=65, priority=100,reg15=0x2,metadata=0x3 actions=output:1\n" 2026-01-20T10:56:53.473518048+00:00 stderr F I0120 10:56:53.473501 30089 ovs.go:163] Exec(42): stderr: "" 2026-01-20T10:56:53.473518048+00:00 stderr F I0120 10:56:53.473510 30089 management-port.go:161] Management port ovn-k8s-mp0 is ready 2026-01-20T10:56:53.473542929+00:00 stderr F I0120 10:56:53.473505 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.31:443:10.217.0.7:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11ea2791-06de-4f67-9dea-91c73a312b37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.473581990+00:00 stderr F I0120 10:56:53.473489 30089 services_controller.go:336] Finished syncing service multus-admission-controller on namespace openshift-multus : 6.84181ms 2026-01-20T10:56:53.473640491+00:00 stderr F I0120 10:56:53.473622 30089 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-webhook 2026-01-20T10:56:53.473739014+00:00 stderr F I0120 10:56:53.473700 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"} 
2026-01-20T10:56:53.473739014+00:00 stderr F I0120 10:56:53.473724 30089 services_controller.go:336] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager : 700.328µs 2026-01-20T10:56:53.473756564+00:00 stderr F I0120 10:56:53.473741 30089 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-operator 2026-01-20T10:56:53.473818656+00:00 stderr F I0120 10:56:53.473770 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"} 2026-01-20T10:56:53.473818656+00:00 stderr F I0120 10:56:53.473750 30089 services_controller.go:397] Service machine-config-operator retrieved from lister: &Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 355a1056-7d77-4a52-a1f5-8eb39c13574e 4891 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fb39b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: 
machine-config-operator,},ClusterIP:10.217.5.4,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.4],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.473818656+00:00 stderr F I0120 10:56:53.473806 30089 services_controller.go:336] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator : 882.713µs 2026-01-20T10:56:53.473843257+00:00 stderr F I0120 10:56:53.473818 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-operator are: map[TCP/metrics:{9001 [10.217.0.21] []}] 2026-01-20T10:56:53.473843257+00:00 stderr F I0120 10:56:53.473832 30089 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.4"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"10.217.0.21"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.473857857+00:00 stderr F I0120 10:56:53.473841 30089 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-operator LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.473857857+00:00 stderr F I0120 10:56:53.473848 30089 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-operator LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.473871247+00:00 stderr F I0120 10:56:53.473851 30089 
loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"} 2026-01-20T10:56:53.473871247+00:00 stderr F I0120 10:56:53.473866 30089 services_controller.go:336] Finished syncing service router-internal-default on namespace openshift-ingress : 1.387286ms 2026-01-20T10:56:53.473885098+00:00 stderr F I0120 10:56:53.473876 30089 services_controller.go:332] Processing sync for service openshift-kube-apiserver/apiserver 2026-01-20T10:56:53.473885098+00:00 stderr F I0120 10:56:53.473859 30089 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.4", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.21", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.473899208+00:00 stderr F I0120 10:56:53.473885 30089 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-operator per-node LB []services.LB{} 2026-01-20T10:56:53.473899208+00:00 stderr F I0120 10:56:53.473892 30089 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-operator template LB []services.LB{} 2026-01-20T10:56:53.473912528+00:00 stderr F I0120 10:56:53.473899 30089 services_controller.go:424] Service 
openshift-machine-config-operator/machine-config-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.473955410+00:00 stderr F I0120 10:56:53.473885 30089 services_controller.go:397] Service apiserver retrieved from lister: &Service{ObjectMeta:{apiserver openshift-kube-apiserver 44a33f79-7e24-4f1b-bc46-f52dfcec13b8 3793 0 2024-06-26 12:47:04 +0000 UTC map[] map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.86,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.86],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.473955410+00:00 stderr F I0120 10:56:53.473914 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"f2bd9885-097c-4b3e-916f-b4598e49252e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: 
[]services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.4", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.21", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.473955410+00:00 stderr F I0120 10:56:53.473948 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-apiserver/apiserver are: map[TCP/https:{6443 [192.168.126.11] []}] 2026-01-20T10:56:53.473977330+00:00 stderr F I0120 10:56:53.473959 30089 services_controller.go:413] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.473977330+00:00 stderr F I0120 10:56:53.473964 30089 services_controller.go:414] Built service openshift-kube-apiserver/apiserver LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.86"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.473977330+00:00 stderr F I0120 10:56:53.473973 30089 services_controller.go:415] Built service openshift-kube-apiserver/apiserver LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.473995081+00:00 stderr F I0120 10:56:53.473989 30089 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{} 2026-01-20T10:56:53.474034922+00:00 stderr F I0120 10:56:53.473995 30089 services_controller.go:422] Built service openshift-kube-apiserver/apiserver per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.86", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.474034922+00:00 stderr F I0120 10:56:53.474016 30089 services_controller.go:423] Built service openshift-kube-apiserver/apiserver template LB []services.LB{} 2026-01-20T10:56:53.474034922+00:00 stderr F I0120 10:56:53.474003 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.4:9001:10.217.0.21:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f2bd9885-097c-4b3e-916f-b4598e49252e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.474034922+00:00 stderr F I0120 10:56:53.473822 30089 services_controller.go:332] Processing sync for service 
openshift-etcd-operator/metrics 2026-01-20T10:56:53.474084763+00:00 stderr F I0120 10:56:53.474035 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.4:9001:10.217.0.21:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f2bd9885-097c-4b3e-916f-b4598e49252e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.474105493+00:00 stderr F I0120 10:56:53.474035 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-etcd-operator 470dd1a6-5645-4282-97e4-ebd3fef4caae 4531 0 2024-06-26 12:39:06 +0000 UTC map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159507 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
etcd-operator,},ClusterIP:10.217.4.182,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.182],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.474105493+00:00 stderr F I0120 10:56:53.474099 30089 lb_config.go:1016] Cluster endpoints for openshift-etcd-operator/metrics are: map[TCP/https:{8443 [10.217.0.8] []}] 2026-01-20T10:56:53.474119444+00:00 stderr F I0120 10:56:53.474109 30089 services_controller.go:413] Built service openshift-etcd-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.182"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.8"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.474139725+00:00 stderr F I0120 10:56:53.474117 30089 services_controller.go:414] Built service openshift-etcd-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.474139725+00:00 stderr F I0120 10:56:53.474122 30089 services_controller.go:415] Built service openshift-etcd-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.474139725+00:00 stderr F I0120 10:56:53.474023 30089 services_controller.go:424] Service openshift-kube-apiserver/apiserver has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.474155026+00:00 stderr F I0120 10:56:53.474133 30089 services_controller.go:421] Built service 
openshift-etcd-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.182", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.8", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.474155026+00:00 stderr F I0120 10:56:53.474151 30089 services_controller.go:422] Built service openshift-etcd-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.474168696+00:00 stderr F I0120 10:56:53.474157 30089 services_controller.go:423] Built service openshift-etcd-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.474168696+00:00 stderr F I0120 10:56:53.474163 30089 services_controller.go:424] Service openshift-etcd-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.474187957+00:00 stderr F I0120 10:56:53.474149 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"274b346b-9aa9-401d-b752-d4168e631034", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, 
Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.86", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.474278589+00:00 stderr F I0120 10:56:53.474175 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"85c70b85-2b80-443a-b268-ffc8695f018e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.182", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.8", 
Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.474334270+00:00 stderr F I0120 10:56:53.473675 30089 services_controller.go:397] Service machine-api-operator-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-webhook openshift-machine-api 128263d4-d278-44f6-9ae4-9e9ecc572513 4862 0 2024-06-26 12:39:14 +0000 UTC map[k8s-app:machine-api-operator-webhook] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fac1b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.44,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.474387382+00:00 stderr F I0120 10:56:53.474330 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.86:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {274b346b-9aa9-401d-b752-d4168e631034}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.474387382+00:00 stderr F I0120 10:56:53.474357 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:443:10.217.0.8:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {85c70b85-2b80-443a-b268-ffc8695f018e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.474409342+00:00 stderr F I0120 10:56:53.474375 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.86:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {274b346b-9aa9-401d-b752-d4168e631034}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.474409342+00:00 stderr F I0120 10:56:53.474385 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:443:10.217.0.8:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {85c70b85-2b80-443a-b268-ffc8695f018e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.474472834+00:00 stderr F I0120 10:56:53.474450 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-webhook are: map[] 2026-01-20T10:56:53.474535846+00:00 stderr F I0120 10:56:53.474509 30089 services_controller.go:413] Built service openshift-machine-api/machine-api-operator-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.44"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.474584777+00:00 stderr F I0120 10:56:53.474566 30089 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.474630768+00:00 stderr F I0120 10:56:53.474614 30089 services_controller.go:415] Built service openshift-machine-api/machine-api-operator-webhook LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.474742911+00:00 stderr F I0120 10:56:53.474706 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"} 2026-01-20T10:56:53.474742911+00:00 stderr F I0120 10:56:53.474726 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-apiserver-operator : 1.611043ms 2026-01-20T10:56:53.474742911+00:00 stderr F I0120 10:56:53.474738 30089 services_controller.go:332] Processing sync for service openshift-dns/dns-default 2026-01-20T10:56:53.474810293+00:00 stderr F I0120 10:56:53.474674 30089 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.44", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.474874865+00:00 stderr F I0120 10:56:53.474843 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"} 2026-01-20T10:56:53.474874865+00:00 stderr F I0120 10:56:53.474856 30089 services_controller.go:336] Finished syncing service apiserver on namespace openshift-kube-apiserver : 980.517µs 2026-01-20T10:56:53.474874865+00:00 stderr F I0120 10:56:53.474864 30089 services_controller.go:332] Processing sync for service openshift-etcd/etcd 2026-01-20T10:56:53.474874865+00:00 stderr F I0120 10:56:53.474857 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-etcd-operator/metrics"} 2026-01-20T10:56:53.474892995+00:00 stderr F I0120 10:56:53.474874 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-etcd-operator : 1.050028ms 2026-01-20T10:56:53.474983427+00:00 stderr F I0120 10:56:53.474914 30089 services_controller.go:332] Processing sync for service openshift-machine-api/cluster-autoscaler-operator 2026-01-20T10:56:53.474983427+00:00 stderr F I0120 10:56:53.474869 30089 services_controller.go:397] Service etcd retrieved from lister: &Service{ObjectMeta:{etcd openshift-etcd 09198b54-ff7d-4bc0-9111-00e2f443a981 4485 0 2024-06-26 12:38:46 +0000 UTC map[k8s-app:etcd] map[operator.openshift.io/spec-hash:0685cfaa0976bfb7ba58513629369c20bf05f4fba36949e982bdb43af328f0e1 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.137,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.137],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.474983427+00:00 stderr F I0120 10:56:53.474948 30089 lb_config.go:1016] Cluster 
endpoints for openshift-etcd/etcd are: map[TCP/etcd:{2379 [192.168.126.11] []} TCP/etcd-metrics:{9979 [192.168.126.11] []}] 2026-01-20T10:56:53.474983427+00:00 stderr F I0120 10:56:53.474971 30089 services_controller.go:413] Built service openshift-etcd/etcd LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.475001238+00:00 stderr F I0120 10:56:53.474976 30089 services_controller.go:414] Built service openshift-etcd/etcd LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.137"}, protocol:"TCP", inport:2379, clusterEndpoints:services.lbEndpoints{Port:2379, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.137"}, protocol:"TCP", inport:9979, clusterEndpoints:services.lbEndpoints{Port:9979, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.475001238+00:00 stderr F I0120 10:56:53.474988 30089 services_controller.go:415] Built service openshift-etcd/etcd LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.475019418+00:00 stderr F I0120 10:56:53.475009 30089 services_controller.go:421] Built service openshift-etcd/etcd cluster-wide LB []services.LB{} 2026-01-20T10:56:53.475034659+00:00 stderr F I0120 10:56:53.475015 30089 services_controller.go:422] Built service openshift-etcd/etcd per-node LB []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:2379, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9979, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.475047319+00:00 stderr F I0120 10:56:53.475033 30089 services_controller.go:423] Built service openshift-etcd/etcd template LB []services.LB{} 2026-01-20T10:56:53.475047319+00:00 stderr F I0120 10:56:53.475041 30089 services_controller.go:424] Service openshift-etcd/etcd has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.475142022+00:00 stderr F I0120 10:56:53.474744 30089 services_controller.go:397] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns 9c0247d8-5697-41ea-812e-582bb93c9b4d 5259 0 2024-06-26 12:47:19 +0000 UTC map[dns.operator.openshift.io/owning-dns:default] map[service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{operator.openshift.io/v1 DNS default 8e7b8280-016f-4ceb-a792-fc5be2494468 0xc000159417 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 
metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.475266705+00:00 stderr F I0120 10:56:53.475235 30089 lb_config.go:1016] Cluster endpoints for openshift-dns/dns-default are: map[TCP/dns-tcp:{5353 [10.217.0.31] []} TCP/metrics:{9154 [10.217.0.31] []} UDP/dns:{5353 [10.217.0.31] []}] 2026-01-20T10:56:53.475325306+00:00 stderr F I0120 10:56:53.475308 30089 services_controller.go:413] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.475403408+00:00 stderr F I0120 10:56:53.474914 30089 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-webhook per-node LB []services.LB{} 2026-01-20T10:56:53.475403408+00:00 stderr F I0120 10:56:53.475388 30089 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-webhook template LB []services.LB{} 2026-01-20T10:56:53.475425729+00:00 stderr F I0120 10:56:53.475398 30089 services_controller.go:424] Service openshift-machine-api/machine-api-operator-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.475460260+00:00 stderr F I0120 10:56:53.475353 30089 services_controller.go:414] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"UDP", inport:53, 
clusterEndpoints:services.lbEndpoints{Port:5353, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"TCP", inport:53, clusterEndpoints:services.lbEndpoints{Port:5353, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"TCP", inport:9154, clusterEndpoints:services.lbEndpoints{Port:9154, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.475506831+00:00 stderr F I0120 10:56:53.475489 30089 services_controller.go:415] Built service openshift-dns/dns-default LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.475584413+00:00 stderr F I0120 10:56:53.475569 30089 services_controller.go:421] Built service openshift-dns/dns-default cluster-wide LB []services.LB{} 2026-01-20T10:56:53.475658485+00:00 stderr F I0120 10:56:53.475609 30089 services_controller.go:422] Built service openshift-dns/dns-default per-node LB []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:9154, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:9154, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.475716857+00:00 stderr F I0120 10:56:53.475698 30089 services_controller.go:423] Built service openshift-dns/dns-default template LB []services.LB{} 2026-01-20T10:56:53.475779708+00:00 stderr F I0120 10:56:53.474808 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"} 2026-01-20T10:56:53.475793399+00:00 stderr F I0120 10:56:53.475781 30089 services_controller.go:336] Finished syncing service machine-config-operator on namespace openshift-machine-config-operator : 2.037394ms 2026-01-20T10:56:53.475813419+00:00 stderr F I0120 10:56:53.475795 30089 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics 2026-01-20T10:56:53.475842730+00:00 stderr F I0120 10:56:53.475749 30089 services_controller.go:424] Service openshift-dns/dns-default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 
0 (template) load balancers 2026-01-20T10:56:53.475888701+00:00 stderr F I0120 10:56:53.475807 30089 services_controller.go:397] Service package-server-manager-metrics retrieved from lister: &Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager ae547e8e-2a0a-43b3-8358-80f1e40dfde9 5119 0 2024-06-26 12:39:24 +0000 UTC map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000a6e3cf }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8443,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: package-server-manager,},ClusterIP:10.217.4.147,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.147],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.475902841+00:00 stderr F I0120 10:56:53.475891 30089 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/package-server-manager-metrics are: map[TCP/metrics:{8443 [10.217.0.24] []}] 2026-01-20T10:56:53.475913612+00:00 stderr F I0120 10:56:53.475902 30089 
services_controller.go:413] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.147"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.24"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.475924332+00:00 stderr F I0120 10:56:53.475912 30089 services_controller.go:414] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.475924332+00:00 stderr F I0120 10:56:53.475917 30089 services_controller.go:415] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.475965793+00:00 stderr F I0120 10:56:53.475930 30089 services_controller.go:421] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.147", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.24", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.475965793+00:00 stderr F I0120 10:56:53.475953 30089 services_controller.go:422] Built 
service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB []services.LB{} 2026-01-20T10:56:53.475965793+00:00 stderr F I0120 10:56:53.475960 30089 services_controller.go:423] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics template LB []services.LB{} 2026-01-20T10:56:53.475984284+00:00 stderr F I0120 10:56:53.475967 30089 services_controller.go:424] Service openshift-operator-lifecycle-manager/package-server-manager-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.476047435+00:00 stderr F I0120 10:56:53.475981 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.147", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.24", Port:8443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.476165458+00:00 stderr F I0120 10:56:53.476119 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.147:8443:10.217.0.24:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.476251531+00:00 stderr F I0120 10:56:53.475947 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"0638bd05-8826-488d-bed7-e92d2c002903", Protocol:"udp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}, services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"19e6099f-93ac-4c53-b159-d6e39047458d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:9154, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.476346973+00:00 stderr F I0120 10:56:53.475051 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", 
UUID:"7b275cee-1db5-46de-8569-dce67abda430", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:2379, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9979, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.476475916+00:00 stderr F I0120 10:56:53.476422 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.137:2379:192.168.126.11:2379 10.217.5.137:9979:192.168.126.11:9979]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{7b275cee-1db5-46de-8569-dce67abda430}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.476495237+00:00 stderr F I0120 10:56:53.475416 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"61827598-f37d-413c-8229-0ed852809fb6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.44", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.476495237+00:00 stderr F I0120 10:56:53.476467 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.137:2379:192.168.126.11:2379 
10.217.5.137:9979:192.168.126.11:9979]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7b275cee-1db5-46de-8569-dce67abda430}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.476592229+00:00 stderr F I0120 10:56:53.476545 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353 10.217.4.10:9154:10.217.0.31:9154]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {19e6099f-93ac-4c53-b159-d6e39047458d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.476637771+00:00 stderr F I0120 10:56:53.476190 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.147:8443:10.217.0.24:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.476750513+00:00 stderr F I0120 
10:56:53.476571 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.44:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61827598-f37d-413c-8229-0ed852809fb6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.476928378+00:00 stderr F I0120 10:56:53.476723 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0638bd05-8826-488d-bed7-e92d2c002903}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.477034222+00:00 stderr F I0120 10:56:53.476780 30089 services_controller.go:397] Service cluster-autoscaler-operator retrieved from lister: &Service{ObjectMeta:{cluster-autoscaler-operator openshift-machine-api c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062 4713 0 2024-06-26 12:39:18 +0000 UTC map[k8s-app:cluster-autoscaler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:cluster-autoscaler-operator-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159f1b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9192,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: cluster-autoscaler-operator,},ClusterIP:10.217.5.83,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.83],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.477124694+00:00 stderr F I0120 10:56:53.476987 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353 10.217.4.10:9154:10.217.0.31:9154]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {19e6099f-93ac-4c53-b159-d6e39047458d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0638bd05-8826-488d-bed7-e92d2c002903}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.477189806+00:00 stderr F I0120 10:56:53.477153 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"} 2026-01-20T10:56:53.477189806+00:00 stderr F I0120 10:56:53.477175 30089 services_controller.go:336] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager : 1.379436ms 2026-01-20T10:56:53.477202386+00:00 stderr F I0120 10:56:53.477189 30089 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics 2026-01-20T10:56:53.477272018+00:00 stderr F I0120 10:56:53.477196 30089 services_controller.go:397] Service olm-operator-metrics retrieved from lister: &Service{ObjectMeta:{olm-operator-metrics openshift-operator-lifecycle-manager f54a9b6f-c334-4276-9ca3-b290325fd276 5100 0 2024-06-26 12:39:23 +0000 UTC map[app:olm-operator] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:olm-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000a6e31f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: olm-operator,},ClusterIP:10.217.5.220,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.220],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.477284818+00:00 stderr F I0120 10:56:53.476778 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.44:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61827598-f37d-413c-8229-0ed852809fb6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.477284818+00:00 stderr F I0120 10:56:53.477273 30089 lb_config.go:1016] Cluster endpoints for 
openshift-operator-lifecycle-manager/olm-operator-metrics are: map[TCP/https-metrics:{8443 [10.217.0.14] []}] 2026-01-20T10:56:53.477300849+00:00 stderr F I0120 10:56:53.477286 30089 services_controller.go:413] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.220"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.14"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.477313709+00:00 stderr F I0120 10:56:53.477299 30089 services_controller.go:414] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.477313709+00:00 stderr F I0120 10:56:53.477305 30089 services_controller.go:415] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.477366781+00:00 stderr F I0120 10:56:53.477319 30089 services_controller.go:421] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.220", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.14", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.477366781+00:00 stderr F I0120 10:56:53.476917 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"} 2026-01-20T10:56:53.477379991+00:00 stderr F I0120 10:56:53.477364 30089 services_controller.go:336] Finished syncing service etcd on namespace openshift-etcd : 2.488226ms 2026-01-20T10:56:53.477379991+00:00 stderr F I0120 10:56:53.477366 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-api/cluster-autoscaler-operator are: map[] 2026-01-20T10:56:53.477393571+00:00 stderr F I0120 10:56:53.477379 30089 services_controller.go:413] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.83"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.83"}, protocol:"TCP", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.477406912+00:00 stderr F I0120 10:56:53.477397 30089 services_controller.go:414] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.477406912+00:00 stderr F I0120 10:56:53.477345 30089 services_controller.go:422] Built service openshift-operator-lifecycle-manager/olm-operator-metrics per-node LB []services.LB{} 2026-01-20T10:56:53.477417852+00:00 stderr F I0120 10:56:53.477411 30089 services_controller.go:423] Built service openshift-operator-lifecycle-manager/olm-operator-metrics template LB []services.LB{} 
2026-01-20T10:56:53.477428312+00:00 stderr F I0120 10:56:53.477420 30089 services_controller.go:424] Service openshift-operator-lifecycle-manager/olm-operator-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.477442763+00:00 stderr F I0120 10:56:53.477428 30089 services_controller.go:415] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.477485994+00:00 stderr F I0120 10:56:53.477446 30089 services_controller.go:421] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.477497914+00:00 stderr F I0120 10:56:53.477400 30089 services_controller.go:332] Processing sync for service openshift-apiserver-operator/metrics 2026-01-20T10:56:53.477497914+00:00 stderr F I0120 10:56:53.477453 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"ed8cbb68-e50d-4e49-8495-eff895089054", Protocol:"tcp", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.220", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.14", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.477561016+00:00 stderr F I0120 10:56:53.477496 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-apiserver-operator 4c2fba48-c67e-4420-9529-0bb456da4341 4348 0 2024-06-26 12:39:11 +0000 UTC map[app:openshift-apiserver-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:openshift-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] 
[{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0001588a7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-apiserver-operator,},ClusterIP:10.217.5.125,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.125],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.477581906+00:00 stderr F I0120 10:56:53.477558 30089 lb_config.go:1016] Cluster endpoints for openshift-apiserver-operator/metrics are: map[TCP/https:{8443 [10.217.0.6] []}] 2026-01-20T10:56:53.477581906+00:00 stderr F I0120 10:56:53.477569 30089 services_controller.go:413] Built service openshift-apiserver-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.125"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.6"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.477593266+00:00 stderr F I0120 10:56:53.477578 30089 services_controller.go:414] Built service openshift-apiserver-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.477593266+00:00 stderr F I0120 10:56:53.477585 30089 services_controller.go:415] Built service openshift-apiserver-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.477593266+00:00 stderr F I0120 10:56:53.477567 30089 model_client.go:381] 
Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.220:8443:10.217.0.14:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed8cbb68-e50d-4e49-8495-eff895089054}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.477639568+00:00 stderr F I0120 10:56:53.477596 30089 services_controller.go:421] Built service openshift-apiserver-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.125", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.6", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.477639568+00:00 stderr F I0120 10:56:53.477598 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none 
reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.220:8443:10.217.0.14:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed8cbb68-e50d-4e49-8495-eff895089054}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.477639568+00:00 stderr F I0120 10:56:53.477620 30089 services_controller.go:422] Built service openshift-apiserver-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.477639568+00:00 stderr F I0120 10:56:53.477630 30089 services_controller.go:423] Built service openshift-apiserver-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.477656758+00:00 stderr F I0120 10:56:53.477637 30089 services_controller.go:424] Service openshift-apiserver-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.477701889+00:00 stderr F I0120 10:56:53.477653 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"6c6bde70-7352-441c-a61d-299eaf2273c0", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.125", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.6", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.477809502+00:00 stderr F I0120 10:56:53.477761 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.125:443:10.217.0.6:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6c6bde70-7352-441c-a61d-299eaf2273c0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.477824372+00:00 stderr F I0120 10:56:53.477798 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.125:443:10.217.0.6:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6c6bde70-7352-441c-a61d-299eaf2273c0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.477907925+00:00 stderr F I0120 10:56:53.477872 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"} 2026-01-20T10:56:53.477907925+00:00 stderr F I0120 10:56:53.477469 30089 services_controller.go:422] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB []services.LB{} 2026-01-20T10:56:53.477907925+00:00 stderr F I0120 10:56:53.477897 30089 services_controller.go:336] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api : 4.277483ms 2026-01-20T10:56:53.477907925+00:00 stderr F I0120 10:56:53.477902 30089 services_controller.go:423] Built service openshift-machine-api/cluster-autoscaler-operator template LB []services.LB{} 2026-01-20T10:56:53.477942776+00:00 stderr F I0120 10:56:53.477909 30089 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-controllers 2026-01-20T10:56:53.477942776+00:00 stderr F I0120 10:56:53.477911 30089 services_controller.go:424] Service openshift-machine-api/cluster-autoscaler-operator has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.478017047+00:00 stderr F I0120 10:56:53.477916 30089 services_controller.go:397] Service machine-api-controllers retrieved from lister: &Service{ObjectMeta:{machine-api-controllers openshift-machine-api 6a75af62-23dd-4080-8ef6-00c8bb47e103 4782 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version 
a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fa25b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.5.185,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.185],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.478030238+00:00 stderr F I0120 10:56:53.478020 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-controllers are: map[] 2026-01-20T10:56:53.478102360+00:00 stderr F I0120 10:56:53.478034 30089 services_controller.go:413] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, 
services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8444, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.478102360+00:00 stderr F I0120 10:56:53.478050 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"} 2026-01-20T10:56:53.478102360+00:00 stderr F I0120 10:56:53.478073 30089 services_controller.go:414] Built service openshift-machine-api/machine-api-controllers LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.478102360+00:00 stderr F I0120 10:56:53.478081 30089 services_controller.go:415] Built service openshift-machine-api/machine-api-controllers LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.478102360+00:00 stderr F I0120 10:56:53.478095 30089 services_controller.go:336] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager : 885.623µs 2026-01-20T10:56:53.478129500+00:00 stderr F I0120 10:56:53.478107 30089 services_controller.go:332] Processing sync for service openshift-cluster-machine-approver/machine-approver 2026-01-20T10:56:53.478129500+00:00 stderr F I0120 10:56:53.478115 30089 services_controller.go:336] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver : 7.73µs 2026-01-20T10:56:53.478129500+00:00 stderr F I0120 10:56:53.478104 30089 services_controller.go:421] Built service openshift-machine-api/machine-api-controllers cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, 
Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.478129500+00:00 stderr F I0120 10:56:53.478125 30089 services_controller.go:332] Processing sync for service openshift-monitoring/cluster-monitoring-operator 2026-01-20T10:56:53.478142051+00:00 stderr F I0120 10:56:53.478129 30089 services_controller.go:422] Built service openshift-machine-api/machine-api-controllers per-node LB []services.LB{} 2026-01-20T10:56:53.478142051+00:00 stderr F I0120 10:56:53.478138 30089 services_controller.go:423] Built service openshift-machine-api/machine-api-controllers template LB []services.LB{} 2026-01-20T10:56:53.478155331+00:00 stderr F I0120 10:56:53.478147 30089 services_controller.go:424] Service openshift-machine-api/machine-api-controllers has 3 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.478189592+00:00 stderr F I0120 10:56:53.478147 30089 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-cluster-machine-approver/machine-approver. OVN-Kubernetes controller took 0.156815621 seconds. No OVN measurement. 
2026-01-20T10:56:53.478203442+00:00 stderr F I0120 10:56:53.478167 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"9345b326-0288-485f-8374-532f033762a6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.478237993+00:00 stderr F I0120 10:56:53.478210 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"} 2026-01-20T10:56:53.478237993+00:00 stderr F I0120 10:56:53.478226 30089 services_controller.go:336] Finished syncing service dns-default on namespace openshift-dns 
: 3.487132ms 2026-01-20T10:56:53.478249674+00:00 stderr F I0120 10:56:53.478237 30089 services_controller.go:332] Processing sync for service openshift-cluster-samples-operator/metrics 2026-01-20T10:56:53.478249674+00:00 stderr F I0120 10:56:53.478243 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-cluster-samples-operator : 6.61µs 2026-01-20T10:56:53.478260844+00:00 stderr F I0120 10:56:53.478251 30089 services_controller.go:332] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node 2026-01-20T10:56:53.478271484+00:00 stderr F I0120 10:56:53.478259 30089 services_controller.go:336] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes : 7.7µs 2026-01-20T10:56:53.478271484+00:00 stderr F I0120 10:56:53.478266 30089 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry-operator 2026-01-20T10:56:53.478282214+00:00 stderr F I0120 10:56:53.478273 30089 services_controller.go:336] Finished syncing service image-registry-operator on namespace openshift-image-registry : 6.31µs 2026-01-20T10:56:53.478293225+00:00 stderr F I0120 10:56:53.478131 30089 services_controller.go:336] Finished syncing service cluster-monitoring-operator on namespace openshift-monitoring : 5.54µs 2026-01-20T10:56:53.478303545+00:00 stderr F I0120 10:56:53.478288 30089 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.156648236 seconds. No OVN measurement. 
2026-01-20T10:56:53.478303545+00:00 stderr F I0120 10:56:53.478297 30089 services_controller.go:332] Processing sync for service openshift-network-operator/metrics 2026-01-20T10:56:53.478315665+00:00 stderr F I0120 10:56:53.478305 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-network-operator : 9.62µs 2026-01-20T10:56:53.478315665+00:00 stderr F I0120 10:56:53.478312 30089 services_controller.go:332] Processing sync for service openshift-multus/network-metrics-service 2026-01-20T10:56:53.478326776+00:00 stderr F I0120 10:56:53.478293 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.185:8441: 10.217.5.185:8442: 10.217.5.185:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9345b326-0288-485f-8374-532f033762a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.478340866+00:00 stderr F I0120 10:56:53.478323 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"} 2026-01-20T10:56:53.478340866+00:00 stderr F I0120 10:56:53.478333 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-apiserver-operator : 953.405µs 2026-01-20T10:56:53.478376417+00:00 stderr F I0120 10:56:53.478328 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} 
name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.185:8441: 10.217.5.185:8442: 10.217.5.185:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9345b326-0288-485f-8374-532f033762a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.478417428+00:00 stderr F I0120 10:56:53.477986 30089 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 
2026-01-20T10:56:53.478452039+00:00 stderr F I0120 10:56:53.478318 30089 services_controller.go:336] Finished syncing service network-metrics-service on namespace openshift-multus : 5.92µs 2026-01-20T10:56:53.478452039+00:00 stderr F I0120 10:56:53.478447 30089 services_controller.go:332] Processing sync for service default/kubernetes 2026-01-20T10:56:53.478565962+00:00 stderr F I0120 10:56:53.478356 30089 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-marketplace 2026-01-20T10:56:53.478565962+00:00 stderr F I0120 10:56:53.478455 30089 services_controller.go:397] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 057366b7-69c1-49f0-88ca-660c8863cae8 249 0 2024-06-26 12:38:03 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.478598913+00:00 stderr F I0120 10:56:53.478569 30089 lb_config.go:1016] Cluster endpoints for default/kubernetes are: map[TCP/https:{6443 [192.168.126.11] []}] 2026-01-20T10:56:53.478598913+00:00 stderr F I0120 10:56:53.478583 30089 services_controller.go:413] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.478598913+00:00 stderr F I0120 10:56:53.478590 30089 services_controller.go:414] Built service default/kubernetes LB 
per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.1"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.478669125+00:00 stderr F I0120 10:56:53.478602 30089 services_controller.go:415] Built service default/kubernetes LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.478669125+00:00 stderr F I0120 10:56:53.478625 30089 services_controller.go:421] Built service default/kubernetes cluster-wide LB []services.LB{} 2026-01-20T10:56:53.478669125+00:00 stderr F I0120 10:56:53.478556 30089 services_controller.go:397] Service redhat-marketplace retrieved from lister: &Service{ObjectMeta:{redhat-marketplace openshift-marketplace 73712edb-385d-4bf0-9c03-b6c570b1a22f 6434 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true olm.service-spec-hash:aUeLNNcZzVZO2rcaZ5Kc8V3jffO0Ss4T6qX6V5] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-marketplace 6f259421-4edb-49d8-a6ce-aa41dfc64264 0xc0002fbd5d 0xc0002fbd5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-marketplace,olm.managed: true,},ClusterIP:10.217.4.65,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.65],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.478669125+00:00 stderr F I0120 10:56:53.478633 
30089 services_controller.go:422] Built service default/kubernetes per-node LB []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.1", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.478669125+00:00 stderr F I0120 10:56:53.478654 30089 services_controller.go:423] Built service default/kubernetes template LB []services.LB{} 2026-01-20T10:56:53.478669125+00:00 stderr F I0120 10:56:53.478281 30089 services_controller.go:332] Processing sync for service default/openshift 2026-01-20T10:56:53.478669125+00:00 stderr F I0120 10:56:53.478663 30089 services_controller.go:424] Service default/kubernetes has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.478684775+00:00 stderr F I0120 10:56:53.478672 30089 services_controller.go:336] Finished syncing service openshift on namespace default : 384.41µs 2026-01-20T10:56:53.478699455+00:00 stderr F I0120 10:56:53.478683 30089 services_controller.go:441] Skipping no-op change for service default/kubernetes 2026-01-20T10:56:53.478699455+00:00 stderr F I0120 10:56:53.478683 30089 services_controller.go:332] Processing sync for service openshift-ingress-operator/metrics 2026-01-20T10:56:53.478699455+00:00 stderr F I0120 10:56:53.478688 30089 services_controller.go:336] Finished syncing service kubernetes on namespace default : 242.726µs 
2026-01-20T10:56:53.478711096+00:00 stderr F I0120 10:56:53.478698 30089 services_controller.go:332] Processing sync for service cert-manager/cert-manager-webhook 2026-01-20T10:56:53.478766957+00:00 stderr F I0120 10:56:53.478692 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-ingress-operator 1e390522-c38e-4189-86b8-ad75c61e3844 6514 0 2024-06-26 12:47:50 +0000 UTC map[name:ingress-operator] map[capability.openshift.io/name:Ingress include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0001598d7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.4.255,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.255],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.478778727+00:00 stderr F I0120 10:56:53.478765 30089 lb_config.go:1016] Cluster endpoints for openshift-ingress-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.45] []}] 2026-01-20T10:56:53.478816388+00:00 stderr F I0120 10:56:53.478781 
30089 services_controller.go:413] Built service openshift-ingress-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.255"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.45"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.478816388+00:00 stderr F I0120 10:56:53.478808 30089 services_controller.go:414] Built service openshift-ingress-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.478828229+00:00 stderr F I0120 10:56:53.478815 30089 services_controller.go:415] Built service openshift-ingress-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.478863130+00:00 stderr F I0120 10:56:53.478826 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"} 2026-01-20T10:56:53.478863130+00:00 stderr F I0120 10:56:53.478837 30089 services_controller.go:421] Built service openshift-ingress-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.255", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.45", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.478880050+00:00 stderr F I0120 10:56:53.478864 
30089 services_controller.go:422] Built service openshift-ingress-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.478880050+00:00 stderr F I0120 10:56:53.478853 30089 services_controller.go:336] Finished syncing service machine-api-controllers on namespace openshift-machine-api : 942.894µs 2026-01-20T10:56:53.478880050+00:00 stderr F I0120 10:56:53.478873 30089 services_controller.go:423] Built service openshift-ingress-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.478891720+00:00 stderr F I0120 10:56:53.478882 30089 services_controller.go:424] Service openshift-ingress-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.478902101+00:00 stderr F I0120 10:56:53.478893 30089 services_controller.go:332] Processing sync for service openshift-dns-operator/metrics 2026-01-20T10:56:53.478912731+00:00 stderr F I0120 10:56:53.478864 30089 ovs.go:162] Exec(43): stdout: "NXST_AGGREGATE reply (xid=0x4): packet_count=156696 byte_count=84468718 flow_count=3057\n" 2026-01-20T10:56:53.478912731+00:00 stderr F I0120 10:56:53.478905 30089 services_controller.go:441] Skipping no-op change for service openshift-ingress-operator/metrics 2026-01-20T10:56:53.478923471+00:00 stderr F I0120 10:56:53.478912 30089 ovs.go:163] Exec(43): stderr: "" 2026-01-20T10:56:53.478923471+00:00 stderr F I0120 10:56:53.478914 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-ingress-operator : 228.886µs 2026-01-20T10:56:53.478936632+00:00 stderr F I0120 10:56:53.478928 30089 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry 2026-01-20T10:56:53.478949442+00:00 stderr F I0120 10:56:53.478940 30089 ovs.go:159] Exec(44): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface patch-br-ex_crc-to-br-int ofport 2026-01-20T10:56:53.478993193+00:00 stderr F I0120 10:56:53.478903 30089 
services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-dns-operator c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2 4375 0 2024-06-26 12:39:11 +0000 UTC map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159347 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: dns-operator,},ClusterIP:10.217.4.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.478993193+00:00 stderr F I0120 10:56:53.478936 30089 services_controller.go:397] Service image-registry retrieved from lister: &Service{ObjectMeta:{image-registry openshift-image-registry 7b12735e-9db4-4c6e-99f6-b2626c4e9f08 17962 0 2024-06-27 13:18:52 +0000 UTC map[docker-registry:default] map[imageregistry.operator.openshift.io/checksum:sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d service.alpha.openshift.io/serving-cert-secret-name:image-registry-tls 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.479010043+00:00 stderr F I0120 10:56:53.478705 30089 services_controller.go:397] Service cert-manager-webhook retrieved from lister: &Service{ObjectMeta:{cert-manager-webhook cert-manager d1133a70-2dbf-40a7-a6bb-813db182c8f1 42888 0 2026-01-20 10:56:34 +0000 UTC map[app:webhook app.kubernetes.io/component:webhook app.kubernetes.io/instance:cert-manager app.kubernetes.io/name:webhook app.kubernetes.io/version:v1.19.2] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9402,TargetPort:{1 0 http-metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: webhook,app.kubernetes.io/instance: cert-manager,app.kubernetes.io/name: 
webhook,},ClusterIP:10.217.5.163,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.163],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.479010043+00:00 stderr F I0120 10:56:53.478996 30089 lb_config.go:1016] Cluster endpoints for openshift-dns-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.18] []}] 2026-01-20T10:56:53.479021004+00:00 stderr F I0120 10:56:53.479004 30089 lb_config.go:1016] Cluster endpoints for openshift-image-registry/image-registry are: map[TCP/5000-tcp:{5000 [10.217.0.37] []}] 2026-01-20T10:56:53.479031664+00:00 stderr F I0120 10:56:53.479012 30089 services_controller.go:413] Built service openshift-dns-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.52"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.18"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.479031664+00:00 stderr F I0120 10:56:53.479018 30089 services_controller.go:413] Built service openshift-image-registry/image-registry LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.41"}, protocol:"TCP", inport:5000, clusterEndpoints:services.lbEndpoints{Port:5000, V4IPs:[]string{"10.217.0.37"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.479031664+00:00 stderr F I0120 10:56:53.479026 30089 
services_controller.go:414] Built service openshift-dns-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.479046844+00:00 stderr F I0120 10:56:53.479030 30089 services_controller.go:414] Built service openshift-image-registry/image-registry LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.479046844+00:00 stderr F I0120 10:56:53.479033 30089 services_controller.go:415] Built service openshift-dns-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.479046844+00:00 stderr F I0120 10:56:53.479038 30089 services_controller.go:415] Built service openshift-image-registry/image-registry LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.479113166+00:00 stderr F I0120 10:56:53.479014 30089 lb_config.go:1016] Cluster endpoints for cert-manager/cert-manager-webhook are: map[TCP/https:{10250 [10.217.0.44] []} TCP/metrics:{9402 [10.217.0.44] []}] 2026-01-20T10:56:53.479171208+00:00 stderr F I0120 10:56:53.479054 30089 services_controller.go:421] Built service openshift-image-registry/image-registry cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.41", Port:5000, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.37", Port:5000, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.479171208+00:00 stderr F I0120 10:56:53.479162 30089 services_controller.go:422] Built service openshift-image-registry/image-registry per-node LB 
[]services.LB{} 2026-01-20T10:56:53.479183708+00:00 stderr F I0120 10:56:53.479171 30089 services_controller.go:423] Built service openshift-image-registry/image-registry template LB []services.LB{} 2026-01-20T10:56:53.479183708+00:00 stderr F I0120 10:56:53.479178 30089 services_controller.go:424] Service openshift-image-registry/image-registry has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.479195048+00:00 stderr F I0120 10:56:53.478976 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.83:443: 10.217.5.83:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.479195048+00:00 stderr F I0120 10:56:53.479157 30089 services_controller.go:413] Built service cert-manager/cert-manager-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.163"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10250, V4IPs:[]string{"10.217.0.44"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.163"}, protocol:"TCP", inport:9402, clusterEndpoints:services.lbEndpoints{Port:9402, V4IPs:[]string{"10.217.0.44"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, 
internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.479206139+00:00 stderr F I0120 10:56:53.479198 30089 services_controller.go:441] Skipping no-op change for service openshift-image-registry/image-registry 2026-01-20T10:56:53.479223299+00:00 stderr F I0120 10:56:53.479201 30089 services_controller.go:414] Built service cert-manager/cert-manager-webhook LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.479223299+00:00 stderr F I0120 10:56:53.479052 30089 services_controller.go:421] Built service openshift-dns-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.52", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.18", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.479223299+00:00 stderr F I0120 10:56:53.479217 30089 services_controller.go:415] Built service cert-manager/cert-manager-webhook LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.479235789+00:00 stderr F I0120 10:56:53.479222 30089 services_controller.go:422] Built service openshift-dns-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.479235789+00:00 stderr F I0120 10:56:53.479232 30089 services_controller.go:423] Built service openshift-dns-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.479248970+00:00 stderr F I0120 10:56:53.479237 30089 lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-marketplace are: map[TCP/grpc:{50051 
[10.217.0.30] []}] 2026-01-20T10:56:53.479288541+00:00 stderr F I0120 10:56:53.479255 30089 services_controller.go:413] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.65"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.30"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.479327932+00:00 stderr F I0120 10:56:53.479260 30089 services_controller.go:421] Built service cert-manager/cert-manager-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_cert-manager/cert-manager-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"cert-manager/cert-manager-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.163", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.44", Port:10250, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.163", Port:9402, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.44", Port:9402, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.479327932+00:00 stderr F I0120 10:56:53.479321 30089 services_controller.go:422] Built service cert-manager/cert-manager-webhook per-node LB []services.LB{} 2026-01-20T10:56:53.479340082+00:00 stderr F I0120 10:56:53.479331 30089 services_controller.go:423] Built service cert-manager/cert-manager-webhook template LB []services.LB{} 2026-01-20T10:56:53.479352982+00:00 
stderr F I0120 10:56:53.479343 30089 services_controller.go:424] Service cert-manager/cert-manager-webhook has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.479402744+00:00 stderr F I0120 10:56:53.479378 30089 services_controller.go:441] Skipping no-op change for service cert-manager/cert-manager-webhook 2026-01-20T10:56:53.479402744+00:00 stderr F I0120 10:56:53.479394 30089 services_controller.go:336] Finished syncing service cert-manager-webhook on namespace cert-manager : 692.808µs 2026-01-20T10:56:53.479415534+00:00 stderr F I0120 10:56:53.479263 30089 services_controller.go:424] Service openshift-dns-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.479447455+00:00 stderr F I0120 10:56:53.479423 30089 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-daemon 2026-01-20T10:56:53.479529307+00:00 stderr F I0120 10:56:53.479301 30089 services_controller.go:414] Built service openshift-marketplace/redhat-marketplace LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.479529307+00:00 stderr F I0120 10:56:53.479521 30089 services_controller.go:415] Built service openshift-marketplace/redhat-marketplace LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.479589849+00:00 stderr F I0120 10:56:53.479537 30089 services_controller.go:421] Built service openshift-marketplace/redhat-marketplace cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.65", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.479589849+00:00 stderr F I0120 10:56:53.479444 30089 services_controller.go:397] Service machine-config-daemon retrieved from lister: &Service{ObjectMeta:{machine-config-daemon openshift-machine-config-operator bddcb8c2-0f2d-4efa-a0ec-3e0648c24386 4880 0 2024-06-26 12:39:15 +0000 UTC map[k8s-app:machine-config-daemon] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fafd7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},ServicePort{Name:health,Protocol:TCP,Port:8798,TargetPort:{0 8798 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: 
machine-config-daemon,},ClusterIP:10.217.5.82,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.82],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.479589849+00:00 stderr F I0120 10:56:53.479571 30089 services_controller.go:422] Built service openshift-marketplace/redhat-marketplace per-node LB []services.LB{} 2026-01-20T10:56:53.479589849+00:00 stderr F I0120 10:56:53.479582 30089 services_controller.go:423] Built service openshift-marketplace/redhat-marketplace template LB []services.LB{} 2026-01-20T10:56:53.479608329+00:00 stderr F I0120 10:56:53.479591 30089 services_controller.go:424] Service openshift-marketplace/redhat-marketplace has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.479618839+00:00 stderr F I0120 10:56:53.479611 30089 services_controller.go:441] Skipping no-op change for service openshift-marketplace/redhat-marketplace 2026-01-20T10:56:53.479629200+00:00 stderr F I0120 10:56:53.479618 30089 services_controller.go:336] Finished syncing service redhat-marketplace on namespace openshift-marketplace : 1.275143ms 2026-01-20T10:56:53.479666101+00:00 stderr F I0120 10:56:53.479425 30089 services_controller.go:441] Skipping no-op change for service openshift-dns-operator/metrics 2026-01-20T10:56:53.481551241+00:00 stderr F I0120 10:56:53.481492 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-daemon are: map[TCP/health:{8798 [192.168.126.11] []} TCP/metrics:{9001 [192.168.126.11] []}] 
2026-01-20T10:56:53.481551241+00:00 stderr F I0120 10:56:53.481544 30089 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-daemon LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.481610842+00:00 stderr F I0120 10:56:53.481554 30089 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-daemon LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:8798, clusterEndpoints:services.lbEndpoints{Port:8798, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.481610842+00:00 stderr F I0120 10:56:53.481595 30089 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-daemon LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.481675264+00:00 stderr F I0120 10:56:53.481648 30089 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-daemon cluster-wide LB []services.LB{} 2026-01-20T10:56:53.481720215+00:00 stderr F I0120 10:56:53.481663 30089 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-daemon per-node LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9001, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:8798, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:8798, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.481720215+00:00 stderr F I0120 10:56:53.481708 30089 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-daemon template LB []services.LB{} 2026-01-20T10:56:53.481741286+00:00 stderr F I0120 10:56:53.481719 30089 services_controller.go:424] Service openshift-machine-config-operator/machine-config-daemon has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.481782177+00:00 stderr F I0120 10:56:53.481757 30089 services_controller.go:441] Skipping no-op change for service openshift-machine-config-operator/machine-config-daemon 2026-01-20T10:56:53.481782177+00:00 stderr F I0120 10:56:53.481768 30089 services_controller.go:336] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator : 2.345412ms 2026-01-20T10:56:53.481796037+00:00 stderr F I0120 10:56:53.481785 30089 services_controller.go:332] Processing sync for service openshift-kube-controller-manager/kube-controller-manager 2026-01-20T10:56:53.481906340+00:00 stderr F I0120 10:56:53.481795 30089 services_controller.go:397] Service kube-controller-manager retrieved from lister: &Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 419fdf14-5d8d-4271-b9e7-729de80d8cd2 5235 0 2024-06-26 
12:47:11 +0000 UTC map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10257 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{kube-controller-manager: true,},ClusterIP:10.217.4.112,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.112],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.482024093+00:00 stderr F I0120 10:56:53.479205 30089 services_controller.go:336] Finished syncing service image-registry on namespace openshift-image-registry : 276.817µs 2026-01-20T10:56:53.482303000+00:00 stderr F I0120 10:56:53.482240 30089 services_controller.go:332] Processing sync for service openshift-kube-scheduler/scheduler 2026-01-20T10:56:53.482426044+00:00 stderr F I0120 10:56:53.479281 30089 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.83:443: 10.217.5.83:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.482566448+00:00 stderr F I0120 10:56:53.481162 30089 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-operators 2026-01-20T10:56:53.482670341+00:00 stderr F I0120 10:56:53.482557 30089 services_controller.go:397] Service redhat-operators retrieved from lister: &Service{ObjectMeta:{redhat-operators openshift-marketplace adccbaa4-8d5b-4985-9a89-66271ea4bf4e 6530 0 2024-06-26 12:47:51 +0000 UTC map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7 0xc0002fbe1d 0xc0002fbe1e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-operators,olm.managed: true,},ClusterIP:10.217.5.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.482698912+00:00 stderr F I0120 10:56:53.482681 30089 lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-operators are: map[TCP/grpc:{50051 [10.217.0.34] []}] 2026-01-20T10:56:53.482743053+00:00 stderr F I0120 10:56:53.482699 30089 services_controller.go:413] Built service openshift-marketplace/redhat-operators LB cluster-wide configs 
[]services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.52"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.34"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.482743053+00:00 stderr F I0120 10:56:53.482724 30089 services_controller.go:414] Built service openshift-marketplace/redhat-operators LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.482743053+00:00 stderr F I0120 10:56:53.482732 30089 services_controller.go:415] Built service openshift-marketplace/redhat-operators LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.482802324+00:00 stderr F I0120 10:56:53.482749 30089 services_controller.go:421] Built service openshift-marketplace/redhat-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.52", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.34", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.482802324+00:00 stderr F I0120 10:56:53.482788 30089 services_controller.go:422] Built service openshift-marketplace/redhat-operators per-node LB []services.LB{} 2026-01-20T10:56:53.482802324+00:00 stderr F I0120 10:56:53.482797 30089 services_controller.go:423] Built service openshift-marketplace/redhat-operators template LB []services.LB{} 
2026-01-20T10:56:53.482820045+00:00 stderr F I0120 10:56:53.482806 30089 services_controller.go:424] Service openshift-marketplace/redhat-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.482887217+00:00 stderr F I0120 10:56:53.482832 30089 services_controller.go:441] Skipping no-op change for service openshift-marketplace/redhat-operators 2026-01-20T10:56:53.482887217+00:00 stderr F I0120 10:56:53.482839 30089 services_controller.go:336] Finished syncing service redhat-operators on namespace openshift-marketplace : 1.684375ms 2026-01-20T10:56:53.482887217+00:00 stderr F I0120 10:56:53.482856 30089 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator 2026-01-20T10:56:53.482958058+00:00 stderr F I0120 10:56:53.482862 30089 services_controller.go:397] Service machine-api-operator retrieved from lister: &Service{ObjectMeta:{machine-api-operator openshift-machine-api ef047d6e-c72f-4309-a95e-08fb0ed08662 4792 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:machine-api-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fa7fb }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: 
machine-api-operator,},ClusterIP:10.217.5.127,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.127],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.483141743+00:00 stderr F I0120 10:56:53.482963 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator are: map[TCP/https:{8443 [10.217.0.5] []}] 2026-01-20T10:56:53.483196045+00:00 stderr F I0120 10:56:53.483154 30089 services_controller.go:413] Built service openshift-machine-api/machine-api-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.127"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.5"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.483196045+00:00 stderr F I0120 10:56:53.483181 30089 services_controller.go:414] Built service openshift-machine-api/machine-api-operator LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.483196045+00:00 stderr F I0120 10:56:53.483189 30089 services_controller.go:415] Built service openshift-machine-api/machine-api-operator LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.483270747+00:00 stderr F I0120 10:56:53.483220 30089 services_controller.go:421] Built service openshift-machine-api/machine-api-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.127", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.5", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.483270747+00:00 stderr F I0120 10:56:53.483256 30089 services_controller.go:422] Built service openshift-machine-api/machine-api-operator per-node LB []services.LB{} 2026-01-20T10:56:53.483270747+00:00 stderr F I0120 10:56:53.483265 30089 services_controller.go:423] Built service openshift-machine-api/machine-api-operator template LB []services.LB{} 2026-01-20T10:56:53.483288597+00:00 stderr F I0120 10:56:53.483279 30089 services_controller.go:424] Service openshift-machine-api/machine-api-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.483309508+00:00 stderr F I0120 10:56:53.483300 30089 services_controller.go:441] Skipping no-op change for service openshift-machine-api/machine-api-operator 2026-01-20T10:56:53.483323428+00:00 stderr F I0120 10:56:53.483305 30089 services_controller.go:336] Finished syncing service machine-api-operator on namespace openshift-machine-api : 448.921µs 2026-01-20T10:56:53.483323428+00:00 stderr F I0120 10:56:53.483319 30089 services_controller.go:332] Processing sync for service openshift-kube-scheduler-operator/metrics 2026-01-20T10:56:53.483423191+00:00 stderr F I0120 10:56:53.483327 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 
080e1aaf-7269-495b-ab74-593efe4192ec 4661 0 2024-06-26 12:39:09 +0000 UTC map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000159cfb }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.108,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.108],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.483439881+00:00 stderr F I0120 10:56:53.483416 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler-operator/metrics are: map[TCP/https:{8443 [10.217.0.12] []}] 2026-01-20T10:56:53.483439881+00:00 stderr F I0120 10:56:53.482351 30089 services_controller.go:397] Service scheduler retrieved from lister: &Service{ObjectMeta:{scheduler openshift-kube-scheduler a839a554-406d-4df8-b3ae-b533cb3e24bc 4695 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:kube-scheduler] 
map[operator.openshift.io/spec-hash:f185087b7610499b49263c17685abe7f251a50c890808284a072687bf6d73275 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10259 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{scheduler: true,},ClusterIP:10.217.5.218,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.218],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.483454321+00:00 stderr F I0120 10:56:53.483430 30089 services_controller.go:413] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.108"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.12"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.483454321+00:00 stderr F I0120 10:56:53.483449 30089 services_controller.go:414] Built service openshift-kube-scheduler-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.483472432+00:00 stderr F I0120 10:56:53.483457 30089 services_controller.go:415] Built service openshift-kube-scheduler-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.483485452+00:00 
stderr F I0120 10:56:53.483467 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler/scheduler are: map[TCP/https:{10259 [192.168.126.11] []}] 2026-01-20T10:56:53.483498293+00:00 stderr F I0120 10:56:53.483486 30089 services_controller.go:413] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 10:56:53.483478 30089 services_controller.go:421] Built service openshift-kube-scheduler-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.108", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.12", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 10:56:53.483494 30089 services_controller.go:414] Built service openshift-kube-scheduler/scheduler LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.218"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10259, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 10:56:53.483509 30089 services_controller.go:422] Built service openshift-kube-scheduler-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 
10:56:53.483512 30089 services_controller.go:415] Built service openshift-kube-scheduler/scheduler LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 10:56:53.483526 30089 services_controller.go:423] Built service openshift-kube-scheduler-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 10:56:53.483536 30089 services_controller.go:424] Service openshift-kube-scheduler-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 10:56:53.483547 30089 services_controller.go:421] Built service openshift-kube-scheduler/scheduler cluster-wide LB []services.LB{} 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 10:56:53.483556 30089 services_controller.go:441] Skipping no-op change for service openshift-kube-scheduler-operator/metrics 2026-01-20T10:56:53.483568004+00:00 stderr F I0120 10:56:53.483562 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-scheduler-operator : 242.806µs 2026-01-20T10:56:53.483587915+00:00 stderr F I0120 10:56:53.483573 30089 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-machine-webhook 2026-01-20T10:56:53.483602205+00:00 stderr F I0120 10:56:53.483558 30089 services_controller.go:422] Built service openshift-kube-scheduler/scheduler per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.218", Port:443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10259, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.483602205+00:00 stderr F I0120 10:56:53.483596 30089 services_controller.go:423] Built service openshift-kube-scheduler/scheduler template LB []services.LB{} 2026-01-20T10:56:53.483620836+00:00 stderr F I0120 10:56:53.483607 30089 services_controller.go:424] Service openshift-kube-scheduler/scheduler has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.483668137+00:00 stderr F I0120 10:56:53.483638 30089 services_controller.go:441] Skipping no-op change for service openshift-kube-scheduler/scheduler 2026-01-20T10:56:53.483668137+00:00 stderr F I0120 10:56:53.483580 30089 services_controller.go:397] Service machine-api-operator-machine-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-machine-webhook openshift-machine-api 7dd2300f-f67e-4eb3-a3fa-1f22c230305a 4821 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-api-operator-machine-webhook] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-machine-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002fa95b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 
machine-webhook},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.4.242,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.242],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.483668137+00:00 stderr F I0120 10:56:53.483651 30089 services_controller.go:336] Finished syncing service scheduler on namespace openshift-kube-scheduler : 1.418588ms 2026-01-20T10:56:53.483668137+00:00 stderr F I0120 10:56:53.483662 30089 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-machine-webhook are: map[] 2026-01-20T10:56:53.483685427+00:00 stderr F I0120 10:56:53.483671 30089 services_controller.go:332] Processing sync for service openshift-config-operator/metrics 2026-01-20T10:56:53.483698878+00:00 stderr F I0120 10:56:53.483672 30089 services_controller.go:413] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.242"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.483698878+00:00 stderr F I0120 10:56:53.483690 30089 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.483717178+00:00 stderr F I0120 10:56:53.483698 30089 services_controller.go:415] Built 
service openshift-machine-api/machine-api-operator-machine-webhook LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.483759619+00:00 stderr F I0120 10:56:53.483712 30089 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-machine-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.242", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.483759619+00:00 stderr F I0120 10:56:53.483745 30089 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-machine-webhook per-node LB []services.LB{} 2026-01-20T10:56:53.483759619+00:00 stderr F I0120 10:56:53.483754 30089 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-machine-webhook template LB []services.LB{} 2026-01-20T10:56:53.483777220+00:00 stderr F I0120 10:56:53.483767 30089 services_controller.go:424] Service openshift-machine-api/machine-api-operator-machine-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.483777220+00:00 stderr F I0120 10:56:53.483682 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-config-operator f04ada1b-55ad-45a3-9231-6d1ff7242fa0 4291 0 2024-06-26 12:39:24 +0000 UTC map[app:openshift-config-operator] 
map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:config-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000158e2f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-config-operator,},ClusterIP:10.217.5.120,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.120],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.483794280+00:00 stderr F I0120 10:56:53.483785 30089 services_controller.go:441] Skipping no-op change for service openshift-machine-api/machine-api-operator-machine-webhook 2026-01-20T10:56:53.483807551+00:00 stderr F I0120 10:56:53.483792 30089 services_controller.go:336] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api : 219.186µs 2026-01-20T10:56:53.483807551+00:00 stderr F I0120 10:56:53.483792 30089 lb_config.go:1016] Cluster endpoints for openshift-config-operator/metrics are: map[TCP/https:{8443 [10.217.0.23] []}] 2026-01-20T10:56:53.483826581+00:00 stderr F I0120 10:56:53.483809 30089 
services_controller.go:332] Processing sync for service openshift-marketplace/certified-operators 2026-01-20T10:56:53.483839831+00:00 stderr F I0120 10:56:53.483809 30089 services_controller.go:413] Built service openshift-config-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.120"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.23"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.483839831+00:00 stderr F I0120 10:56:53.483832 30089 services_controller.go:414] Built service openshift-config-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.483854032+00:00 stderr F I0120 10:56:53.483841 30089 services_controller.go:415] Built service openshift-config-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.483902883+00:00 stderr F I0120 10:56:53.483818 30089 services_controller.go:397] Service certified-operators retrieved from lister: &Service{ObjectMeta:{certified-operators openshift-marketplace 97052848-7332-4254-8854-60d45bb91123 6358 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:7FOCZ3GVMQ1pwQKJahWmE09uJDRx6ab8xxcEYE] map[] [{operators.coreos.com/v1alpha1 CatalogSource certified-operators 16d5fe82-aef0-4700-8b13-e78e71d2a10d 0xc0002fb5ed 0xc0002fb5ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: 
true,},ClusterIP:10.217.5.249,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.249],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.483918653+00:00 stderr F I0120 10:56:53.483861 30089 services_controller.go:421] Built service openshift-config-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.120", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.23", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.483918653+00:00 stderr F I0120 10:56:53.483902 30089 lb_config.go:1016] Cluster endpoints for openshift-marketplace/certified-operators are: map[TCP/grpc:{50051 [10.217.0.33] []}] 2026-01-20T10:56:53.483936074+00:00 stderr F I0120 10:56:53.483918 30089 services_controller.go:422] Built service openshift-config-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.483936074+00:00 stderr F I0120 10:56:53.483921 30089 services_controller.go:413] Built service openshift-marketplace/certified-operators LB cluster-wide configs 
[]services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.249"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.33"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.483956194+00:00 stderr F I0120 10:56:53.483936 30089 services_controller.go:423] Built service openshift-config-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.483956194+00:00 stderr F I0120 10:56:53.483937 30089 services_controller.go:414] Built service openshift-marketplace/certified-operators LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.483956194+00:00 stderr F I0120 10:56:53.483947 30089 services_controller.go:424] Service openshift-config-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.483971435+00:00 stderr F I0120 10:56:53.483947 30089 services_controller.go:415] Built service openshift-marketplace/certified-operators LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.483984905+00:00 stderr F I0120 10:56:53.483972 30089 services_controller.go:441] Skipping no-op change for service openshift-config-operator/metrics 2026-01-20T10:56:53.483984905+00:00 stderr F I0120 10:56:53.483979 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-config-operator : 309.278µs 2026-01-20T10:56:53.484001496+00:00 stderr F I0120 10:56:53.483969 30089 services_controller.go:421] Built service openshift-marketplace/certified-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:true, 
EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.249", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.33", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.484014596+00:00 stderr F I0120 10:56:53.484002 30089 services_controller.go:422] Built service openshift-marketplace/certified-operators per-node LB []services.LB{} 2026-01-20T10:56:53.484014596+00:00 stderr F I0120 10:56:53.484011 30089 services_controller.go:423] Built service openshift-marketplace/certified-operators template LB []services.LB{} 2026-01-20T10:56:53.484028326+00:00 stderr F I0120 10:56:53.484019 30089 services_controller.go:424] Service openshift-marketplace/certified-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.484150069+00:00 stderr F I0120 10:56:53.484118 30089 services_controller.go:441] Skipping no-op change for service openshift-marketplace/certified-operators 2026-01-20T10:56:53.484150069+00:00 stderr F I0120 10:56:53.484132 30089 services_controller.go:336] Finished syncing service certified-operators on namespace openshift-marketplace : 322.128µs 2026-01-20T10:56:53.484208501+00:00 stderr F I0120 10:56:53.482041 30089 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager/kube-controller-manager are: map[TCP/https:{10257 [192.168.126.11] []}] 2026-01-20T10:56:53.484208501+00:00 stderr F I0120 10:56:53.484197 30089 services_controller.go:413] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs []services.lbConfig(nil) 2026-01-20T10:56:53.484224521+00:00 stderr F I0120 10:56:53.484205 30089 services_controller.go:414] 
Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.112"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10257, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.484244042+00:00 stderr F I0120 10:56:53.484218 30089 services_controller.go:415] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.484257812+00:00 stderr F I0120 10:56:53.484249 30089 services_controller.go:421] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB []services.LB{} 2026-01-20T10:56:53.484317644+00:00 stderr F I0120 10:56:53.484257 30089 services_controller.go:422] Built service openshift-kube-controller-manager/kube-controller-manager per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.112", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10257, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2026-01-20T10:56:53.484317644+00:00 stderr F I0120 10:56:53.484292 30089 services_controller.go:423] Built service openshift-kube-controller-manager/kube-controller-manager template LB []services.LB{} 
2026-01-20T10:56:53.484317644+00:00 stderr F I0120 10:56:53.484302 30089 services_controller.go:424] Service openshift-kube-controller-manager/kube-controller-manager has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.484335134+00:00 stderr F I0120 10:56:53.484327 30089 services_controller.go:441] Skipping no-op change for service openshift-kube-controller-manager/kube-controller-manager 2026-01-20T10:56:53.484348795+00:00 stderr F I0120 10:56:53.484333 30089 services_controller.go:336] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager : 2.546827ms 2026-01-20T10:56:53.485377793+00:00 stderr F I0120 10:56:53.485347 30089 services_controller.go:332] Processing sync for service openshift-service-ca-operator/metrics 2026-01-20T10:56:53.485505706+00:00 stderr F I0120 10:56:53.485362 30089 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-service-ca-operator 030283b3-acfe-40ed-811c-d9f7f79607f6 5225 0 2024-06-26 12:39:07 +0000 UTC map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000a6e6ff }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
service-ca-operator,},ClusterIP:10.217.5.165,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.165],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2026-01-20T10:56:53.485530647+00:00 stderr F I0120 10:56:53.485497 30089 lb_config.go:1016] Cluster endpoints for openshift-service-ca-operator/metrics are: map[TCP/https:{8443 [10.217.0.10] []}] 2026-01-20T10:56:53.485530647+00:00 stderr F I0120 10:56:53.485513 30089 services_controller.go:413] Built service openshift-service-ca-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.165"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.10"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2026-01-20T10:56:53.485642829+00:00 stderr F I0120 10:56:53.485617 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-dns-operator : 2.496276ms 2026-01-20T10:56:53.485987658+00:00 stderr F I0120 10:56:53.485891 30089 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"} 2026-01-20T10:56:53.485987658+00:00 stderr F I0120 10:56:53.485919 30089 services_controller.go:336] Finished syncing service cluster-autoscaler-operator on namespace openshift-machine-api : 11.003721ms 2026-01-20T10:56:53.485987658+00:00 stderr F I0120 10:56:53.485532 30089 services_controller.go:414] Built service 
openshift-service-ca-operator/metrics LB per-node configs []services.lbConfig(nil) 2026-01-20T10:56:53.485987658+00:00 stderr F I0120 10:56:53.485966 30089 services_controller.go:415] Built service openshift-service-ca-operator/metrics LB template configs []services.lbConfig(nil) 2026-01-20T10:56:53.486084911+00:00 stderr F I0120 10:56:53.486003 30089 services_controller.go:421] Built service openshift-service-ca-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.165", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.10", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2026-01-20T10:56:53.486084911+00:00 stderr F I0120 10:56:53.486050 30089 services_controller.go:422] Built service openshift-service-ca-operator/metrics per-node LB []services.LB{} 2026-01-20T10:56:53.486084911+00:00 stderr F I0120 10:56:53.486075 30089 services_controller.go:423] Built service openshift-service-ca-operator/metrics template LB []services.LB{} 2026-01-20T10:56:53.486108892+00:00 stderr F I0120 10:56:53.486084 30089 services_controller.go:424] Service openshift-service-ca-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2026-01-20T10:56:53.486121372+00:00 stderr F I0120 10:56:53.486108 30089 services_controller.go:441] Skipping no-op change for service openshift-service-ca-operator/metrics 
2026-01-20T10:56:53.486121372+00:00 stderr F I0120 10:56:53.486113 30089 services_controller.go:336] Finished syncing service metrics on namespace openshift-service-ca-operator : 771.16µs 2026-01-20T10:56:53.491446133+00:00 stderr F I0120 10:56:53.491381 30089 ovs.go:162] Exec(44): stdout: "2\n" 2026-01-20T10:56:53.491446133+00:00 stderr F I0120 10:56:53.491418 30089 ovs.go:163] Exec(44): stderr: "" 2026-01-20T10:56:53.491446133+00:00 stderr F I0120 10:56:53.491431 30089 gateway.go:365] Gateway is ready 2026-01-20T10:56:53.491494364+00:00 stderr F I0120 10:56:53.491448 30089 gateway_localnet.go:78] Creating Local Gateway Openflow Manager 2026-01-20T10:56:53.491494364+00:00 stderr F I0120 10:56:53.491468 30089 ovs.go:159] Exec(45): /usr/bin/ovs-vsctl --timeout=15 get Interface patch-br-ex_crc-to-br-int ofport 2026-01-20T10:56:53.500886122+00:00 stderr F I0120 10:56:53.500822 30089 ovs.go:162] Exec(45): stdout: "2\n" 2026-01-20T10:56:53.500886122+00:00 stderr F I0120 10:56:53.500841 30089 ovs.go:163] Exec(45): stderr: "" 2026-01-20T10:56:53.500886122+00:00 stderr F I0120 10:56:53.500852 30089 ovs.go:159] Exec(46): /usr/bin/ovs-vsctl --timeout=15 get interface ens3 ofport 2026-01-20T10:56:53.508779341+00:00 stderr F I0120 10:56:53.508746 30089 ovs.go:162] Exec(46): stdout: "1\n" 2026-01-20T10:56:53.508838103+00:00 stderr F I0120 10:56:53.508823 30089 ovs.go:163] Exec(46): stderr: "" 2026-01-20T10:56:53.516866055+00:00 stderr F I0120 10:56:53.516824 30089 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 127.0.0.1/8 lo 2026-01-20T10:56:53.516897316+00:00 stderr F I0120 10:56:53.516869 30089 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 10.217.0.2/23 ovn-k8s-mp0 2026-01-20T10:56:53.516897316+00:00 stderr F I0120 10:56:53.516890 30089 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 169.254.169.2/29 br-ex 2026-01-20T10:56:53.516911786+00:00 stderr F I0120 10:56:53.516902 30089 
node_ip_handler_linux.go:228] Node primary address changed to 192.168.126.11. Updating OVN encap IP. 2026-01-20T10:56:53.516945827+00:00 stderr F I0120 10:56:53.516921 30089 ovs.go:159] Exec(47): /usr/bin/ovs-vsctl --timeout=15 get Open_vSwitch . external_ids:ovn-encap-ip 2026-01-20T10:56:53.522789972+00:00 stderr F I0120 10:56:53.522744 30089 base_network_controller.go:458] When adding node crc for network default, found 83 pods to add to retryPods 2026-01-20T10:56:53.522789972+00:00 stderr F I0120 10:56:53.522769 30089 base_network_controller.go:464] Adding pod cert-manager/cert-manager-758df9885c-cq6zm to retryPods for network default 2026-01-20T10:56:53.522789972+00:00 stderr F I0120 10:56:53.522780 30089 base_network_controller.go:464] Adding pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx to retryPods for network default 2026-01-20T10:56:53.522789972+00:00 stderr F I0120 10:56:53.522786 30089 base_network_controller.go:464] Adding pod cert-manager/cert-manager-webhook-855f577f79-7bdxq to retryPods for network default 2026-01-20T10:56:53.522816522+00:00 stderr F I0120 10:56:53.522791 30089 base_network_controller.go:464] Adding pod hostpath-provisioner/csi-hostpathplugin-hvm8g to retryPods for network default 2026-01-20T10:56:53.522816522+00:00 stderr F I0120 10:56:53.522797 30089 base_network_controller.go:464] Adding pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m to retryPods for network default 2026-01-20T10:56:53.522816522+00:00 stderr F I0120 10:56:53.522802 30089 base_network_controller.go:464] Adding pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp to retryPods for network default 2026-01-20T10:56:53.522816522+00:00 stderr F I0120 10:56:53.522808 30089 base_network_controller.go:464] Adding pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 to retryPods for network default 2026-01-20T10:56:53.522816522+00:00 stderr F I0120 10:56:53.522813 30089 base_network_controller.go:464] Adding 
pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b to retryPods for network default 2026-01-20T10:56:53.522830603+00:00 stderr F I0120 10:56:53.522819 30089 base_network_controller.go:464] Adding pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 to retryPods for network default 2026-01-20T10:56:53.522830603+00:00 stderr F I0120 10:56:53.522824 30089 base_network_controller.go:464] Adding pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg to retryPods for network default 2026-01-20T10:56:53.522852083+00:00 stderr F I0120 10:56:53.522830 30089 base_network_controller.go:464] Adding pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 to retryPods for network default 2026-01-20T10:56:53.522852083+00:00 stderr F I0120 10:56:53.522836 30089 base_network_controller.go:464] Adding pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc to retryPods for network default 2026-01-20T10:56:53.522852083+00:00 stderr F I0120 10:56:53.522842 30089 base_network_controller.go:464] Adding pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 to retryPods for network default 2026-01-20T10:56:53.522852083+00:00 stderr F I0120 10:56:53.522847 30089 base_network_controller.go:464] Adding pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd to retryPods for network default 2026-01-20T10:56:53.522864294+00:00 stderr F I0120 10:56:53.522852 30089 base_network_controller.go:464] Adding pod openshift-console/console-644bb77b49-5x5xk to retryPods for network default 2026-01-20T10:56:53.522864294+00:00 stderr F I0120 10:56:53.522858 30089 base_network_controller.go:464] Adding pod openshift-console/downloads-65476884b9-9wcvx to retryPods for network default 2026-01-20T10:56:53.522875834+00:00 stderr F I0120 10:56:53.522863 30089 base_network_controller.go:464] Adding pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z to 
retryPods for network default 2026-01-20T10:56:53.522875834+00:00 stderr F I0120 10:56:53.522868 30089 base_network_controller.go:464] Adding pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf to retryPods for network default 2026-01-20T10:56:53.522887034+00:00 stderr F I0120 10:56:53.522873 30089 base_network_controller.go:464] Adding pod openshift-dns-operator/dns-operator-75f687757b-nz2xb to retryPods for network default 2026-01-20T10:56:53.522887034+00:00 stderr F I0120 10:56:53.522880 30089 base_network_controller.go:464] Adding pod openshift-dns/dns-default-gbw49 to retryPods for network default 2026-01-20T10:56:53.522898524+00:00 stderr F I0120 10:56:53.522886 30089 base_network_controller.go:464] Adding pod openshift-dns/node-resolver-dn27q to retryPods for network default 2026-01-20T10:56:53.522898524+00:00 stderr F I0120 10:56:53.522891 30089 base_network_controller.go:464] Adding pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg to retryPods for network default 2026-01-20T10:56:53.522909965+00:00 stderr F I0120 10:56:53.522895 30089 base_network_controller.go:464] Adding pod openshift-etcd/etcd-crc to retryPods for network default 2026-01-20T10:56:53.522909965+00:00 stderr F I0120 10:56:53.522902 30089 base_network_controller.go:464] Adding pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv to retryPods for network default 2026-01-20T10:56:53.522920975+00:00 stderr F I0120 10:56:53.522907 30089 base_network_controller.go:464] Adding pod openshift-image-registry/image-registry-75b7bb6564-ln84v to retryPods for network default 2026-01-20T10:56:53.522920975+00:00 stderr F I0120 10:56:53.522913 30089 base_network_controller.go:464] Adding pod openshift-image-registry/node-ca-l92hr to retryPods for network default 2026-01-20T10:56:53.522920975+00:00 stderr F I0120 10:56:53.522918 30089 base_network_controller.go:464] Adding pod openshift-ingress-canary/ingress-canary-2vhcn to retryPods for network default 
2026-01-20T10:56:53.522932735+00:00 stderr F I0120 10:56:53.522922 30089 base_network_controller.go:464] Adding pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t to retryPods for network default 2026-01-20T10:56:53.522932735+00:00 stderr F I0120 10:56:53.522928 30089 base_network_controller.go:464] Adding pod openshift-ingress/router-default-5c9bf7bc58-6jctv to retryPods for network default 2026-01-20T10:56:53.522943806+00:00 stderr F I0120 10:56:53.522933 30089 base_network_controller.go:464] Adding pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 to retryPods for network default 2026-01-20T10:56:53.522943806+00:00 stderr F I0120 10:56:53.522939 30089 base_network_controller.go:464] Adding pod openshift-kube-apiserver/installer-13-crc to retryPods for network default 2026-01-20T10:56:53.522963286+00:00 stderr F I0120 10:56:53.522944 30089 base_network_controller.go:464] Adding pod openshift-kube-apiserver/kube-apiserver-crc to retryPods for network default 2026-01-20T10:56:53.522963286+00:00 stderr F I0120 10:56:53.522950 30089 base_network_controller.go:464] Adding pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb to retryPods for network default 2026-01-20T10:56:53.522963286+00:00 stderr F I0120 10:56:53.522958 30089 base_network_controller.go:464] Adding pod openshift-kube-controller-manager/kube-controller-manager-crc to retryPods for network default 2026-01-20T10:56:53.522975146+00:00 stderr F I0120 10:56:53.522964 30089 base_network_controller.go:464] Adding pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 to retryPods for network default 2026-01-20T10:56:53.522975146+00:00 stderr F I0120 10:56:53.522970 30089 base_network_controller.go:464] Adding pod openshift-kube-scheduler/openshift-kube-scheduler-crc to retryPods for network default 2026-01-20T10:56:53.522986157+00:00 stderr F I0120 10:56:53.522975 30089 
base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr to retryPods for network default 2026-01-20T10:56:53.522986157+00:00 stderr F I0120 10:56:53.522980 30089 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv to retryPods for network default 2026-01-20T10:56:53.522997017+00:00 stderr F I0120 10:56:53.522985 30089 base_network_controller.go:464] Adding pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw to retryPods for network default 2026-01-20T10:56:53.522997017+00:00 stderr F I0120 10:56:53.522990 30089 base_network_controller.go:464] Adding pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb to retryPods for network default 2026-01-20T10:56:53.523007847+00:00 stderr F I0120 10:56:53.522996 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc to retryPods for network default 2026-01-20T10:56:53.523007847+00:00 stderr F I0120 10:56:53.523001 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh to retryPods for network default 2026-01-20T10:56:53.523019088+00:00 stderr F I0120 10:56:53.523006 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-daemon-zpnhg to retryPods for network default 2026-01-20T10:56:53.523019088+00:00 stderr F I0120 10:56:53.523011 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm to retryPods for network default 2026-01-20T10:56:53.523030518+00:00 stderr F I0120 10:56:53.523016 30089 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-server-v65wr to retryPods for network default 2026-01-20T10:56:53.523030518+00:00 stderr F I0120 10:56:53.523023 30089 
base_network_controller.go:464] Adding pod openshift-marketplace/certified-operators-mpjb7 to retryPods for network default 2026-01-20T10:56:53.523041858+00:00 stderr F I0120 10:56:53.523028 30089 base_network_controller.go:464] Adding pod openshift-marketplace/community-operators-6m4w2 to retryPods for network default 2026-01-20T10:56:53.523041858+00:00 stderr F I0120 10:56:53.523034 30089 base_network_controller.go:464] Adding pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc to retryPods for network default 2026-01-20T10:56:53.523052828+00:00 stderr F I0120 10:56:53.523039 30089 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-marketplace-2mx7j to retryPods for network default 2026-01-20T10:56:53.523052828+00:00 stderr F I0120 10:56:53.523045 30089 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-operators-2nxg8 to retryPods for network default 2026-01-20T10:56:53.523052828+00:00 stderr F I0120 10:56:53.523049 30089 base_network_controller.go:464] Adding pod openshift-multus/multus-additional-cni-plugins-bzj2p to retryPods for network default 2026-01-20T10:56:53.523087019+00:00 stderr F I0120 10:56:53.523054 30089 base_network_controller.go:464] Adding pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc to retryPods for network default 2026-01-20T10:56:53.523162251+00:00 stderr F I0120 10:56:53.523126 30089 base_network_controller.go:464] Adding pod openshift-multus/multus-q88th to retryPods for network default 2026-01-20T10:56:53.523162251+00:00 stderr F I0120 10:56:53.523139 30089 base_network_controller.go:464] Adding pod openshift-multus/network-metrics-daemon-qdfr4 to retryPods for network default 2026-01-20T10:56:53.523162251+00:00 stderr F I0120 10:56:53.523146 30089 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 to retryPods for network default 2026-01-20T10:56:53.523162251+00:00 stderr F I0120 10:56:53.523151 30089 
base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-target-v54bt to retryPods for network default 2026-01-20T10:56:53.523162251+00:00 stderr F I0120 10:56:53.523156 30089 base_network_controller.go:464] Adding pod openshift-network-node-identity/network-node-identity-7xghp to retryPods for network default 2026-01-20T10:56:53.523180992+00:00 stderr F I0120 10:56:53.523161 30089 base_network_controller.go:464] Adding pod openshift-network-operator/iptables-alerter-wwpnd to retryPods for network default 2026-01-20T10:56:53.523180992+00:00 stderr F I0120 10:56:53.523166 30089 base_network_controller.go:464] Adding pod openshift-network-operator/network-operator-767c585db5-zd56b to retryPods for network default 2026-01-20T10:56:53.523180992+00:00 stderr F I0120 10:56:53.523171 30089 base_network_controller.go:464] Adding pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd to retryPods for network default 2026-01-20T10:56:53.523180992+00:00 stderr F I0120 10:56:53.523176 30089 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf to retryPods for network default 2026-01-20T10:56:53.523193882+00:00 stderr F I0120 10:56:53.523182 30089 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh to retryPods for network default 2026-01-20T10:56:53.523204742+00:00 stderr F I0120 10:56:53.523197 30089 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 to retryPods for network default 2026-01-20T10:56:53.523215673+00:00 stderr F I0120 10:56:53.523202 30089 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz to retryPods for network default 2026-01-20T10:56:53.523215673+00:00 stderr F I0120 10:56:53.523208 30089 base_network_controller.go:464] Adding pod 
openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b to retryPods for network default 2026-01-20T10:56:53.523226773+00:00 stderr F I0120 10:56:53.523213 30089 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-node-sdkgg to retryPods for network default 2026-01-20T10:56:53.523226773+00:00 stderr F I0120 10:56:53.523218 30089 base_network_controller.go:464] Adding pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs to retryPods for network default 2026-01-20T10:56:53.523226773+00:00 stderr F I0120 10:56:53.523224 30089 base_network_controller.go:464] Adding pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz to retryPods for network default 2026-01-20T10:56:53.523248514+00:00 stderr F I0120 10:56:53.523228 30089 base_network_controller.go:464] Adding pod openshift-service-ca/service-ca-666f99b6f-kk8kg to retryPods for network default 2026-01-20T10:56:53.523248514+00:00 stderr F I0120 10:56:53.523242 30089 obj_retry.go:233] Iterate retry objects requested (resource *v1.Pod) 2026-01-20T10:56:53.523287455+00:00 stderr F I0120 10:56:53.523261 30089 obj_retry.go:555] Update event received for resource *factory.egressNode, old object is equal to new: false 2026-01-20T10:56:53.523287455+00:00 stderr F I0120 10:56:53.523277 30089 obj_retry.go:607] Update event received for *factory.egressNode crc 2026-01-20T10:56:53.523400478+00:00 stderr F I0120 10:56:53.523353 30089 obj_retry.go:555] Update event received for resource *factory.egressFwNode, old object is equal to new: true 2026-01-20T10:56:53.523400478+00:00 stderr F I0120 10:56:53.523373 30089 obj_retry.go:427] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod 2026-01-20T10:56:53.523484180+00:00 stderr F I0120 10:56:53.523380 30089 obj_retry.go:402] Going to retry *v1.Pod resource setup for 69 objects: [openshift-kube-scheduler/openshift-kube-scheduler-crc 
openshift-machine-config-operator/machine-config-server-v65wr openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz cert-manager/cert-manager-758df9885c-cq6zm hostpath-provisioner/csi-hostpathplugin-hvm8g openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 openshift-network-diagnostics/network-check-target-v54bt cert-manager/cert-manager-cainjector-676dd9bd64-mggnx openshift-console/downloads-65476884b9-9wcvx openshift-kube-controller-manager/kube-controller-manager-crc openshift-console/console-644bb77b49-5x5xk openshift-multus/multus-additional-cni-plugins-bzj2p openshift-network-node-identity/network-node-identity-7xghp openshift-network-operator/iptables-alerter-wwpnd openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf openshift-apiserver/apiserver-7fc54b8dd7-d2bhp openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv openshift-marketplace/community-operators-6m4w2 openshift-service-ca/service-ca-666f99b6f-kk8kg openshift-ingress-canary/ingress-canary-2vhcn openshift-marketplace/redhat-marketplace-2mx7j openshift-marketplace/marketplace-operator-8b455464d-nc8zc openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw openshift-multus/multus-admission-controller-6c7c885997-4hbbc openshift-multus/multus-q88th openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 
cert-manager/cert-manager-webhook-855f577f79-7bdxq openshift-dns/node-resolver-dn27q openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb openshift-multus/network-metrics-daemon-qdfr4 openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 openshift-kube-apiserver/installer-13-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc openshift-image-registry/image-registry-75b7bb6564-ln84v openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh openshift-machine-config-operator/machine-config-daemon-zpnhg openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 openshift-kube-apiserver/kube-apiserver-crc openshift-image-registry/node-ca-l92hr openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv openshift-marketplace/redhat-operators-2nxg8 openshift-network-operator/network-operator-767c585db5-zd56b openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 openshift-controller-manager/controller-manager-778975cc4f-x5vcf openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm openshift-marketplace/certified-operators-mpjb7 openshift-ovn-kubernetes/ovnkube-node-sdkgg openshift-console-operator/console-operator-5dbbc74dc9-cp5cd openshift-etcd/etcd-crc openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb openshift-ingress/router-default-5c9bf7bc58-6jctv openshift-dns-operator/dns-operator-75f687757b-nz2xb 
openshift-dns/dns-default-gbw49 openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] 2026-01-20T10:56:53.523538801+00:00 stderr F I0120 10:56:53.523509 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2026-01-20T10:56:53.523538801+00:00 stderr F I0120 10:56:53.523529 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2026-01-20T10:56:53.523551401+00:00 stderr F I0120 10:56:53.523542 30089 ovn.go:134] Ensuring zone local for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in node crc 2026-01-20T10:56:53.523561812+00:00 stderr F I0120 10:56:53.523545 30089 obj_retry.go:411] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources 2026-01-20T10:56:53.523561812+00:00 stderr F I0120 10:56:53.523549 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 after 0 failed attempt(s) 2026-01-20T10:56:53.523572942+00:00 stderr F I0120 10:56:53.523541 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.523572942+00:00 stderr F I0120 10:56:53.523562 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2026-01-20T10:56:53.523585262+00:00 stderr F I0120 10:56:53.523568 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2026-01-20T10:56:53.523585262+00:00 stderr F I0120 10:56:53.523561 30089 default_network_controller.go:699] Recording success event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2026-01-20T10:56:53.523585262+00:00 stderr F I0120 10:56:53.523580 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 
2026-01-20T10:56:53.523597813+00:00 stderr F I0120 10:56:53.523583 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc 2026-01-20T10:56:53.523597813+00:00 stderr F I0120 10:56:53.523585 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2026-01-20T10:56:53.523597813+00:00 stderr F I0120 10:56:53.523592 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2026-01-20T10:56:53.523634514+00:00 stderr F I0120 10:56:53.523598 30089 ovn.go:134] Ensuring zone local for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in node crc 2026-01-20T10:56:53.523634514+00:00 stderr F I0120 10:56:53.523615 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.523634514+00:00 stderr F I0120 10:56:53.523627 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.523634514+00:00 stderr F I0120 10:56:53.523629 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2026-01-20T10:56:53.523651724+00:00 stderr F I0120 10:56:53.523628 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2026-01-20T10:56:53.523651724+00:00 stderr F I0120 10:56:53.523637 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2026-01-20T10:56:53.523651724+00:00 stderr F I0120 10:56:53.523613 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2026-01-20T10:56:53.523664114+00:00 stderr F I0120 10:56:53.523648 30089 ovn.go:134] Ensuring zone local for Pod 
openshift-multus/multus-admission-controller-6c7c885997-4hbbc in node crc 2026-01-20T10:56:53.523664114+00:00 stderr F I0120 10:56:53.523602 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2026-01-20T10:56:53.523675165+00:00 stderr F I0120 10:56:53.523661 30089 ovn.go:134] Ensuring zone local for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in node crc 2026-01-20T10:56:53.523675165+00:00 stderr F I0120 10:56:53.523661 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.523675165+00:00 stderr F I0120 10:56:53.523664 30089 base_network_controller_pods.go:476] [default/openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] creating logical port openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t for pod on switch crc 2026-01-20T10:56:53.523675165+00:00 stderr F I0120 10:56:53.523671 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.523687085+00:00 stderr F I0120 10:56:53.523679 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc 2026-01-20T10:56:53.523697685+00:00 stderr F I0120 10:56:53.523685 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s) 2026-01-20T10:56:53.523697685+00:00 stderr F I0120 10:56:53.523693 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc 2026-01-20T10:56:53.523709656+00:00 stderr F I0120 10:56:53.523692 30089 base_network_controller_pods.go:476] [default/openshift-multus/multus-admission-controller-6c7c885997-4hbbc] creating logical port openshift-multus_multus-admission-controller-6c7c885997-4hbbc for pod on switch crc 2026-01-20T10:56:53.523709656+00:00 stderr F I0120 10:56:53.523575 30089 obj_retry.go:358] 
Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.523709656+00:00 stderr F I0120 10:56:53.523703 30089 base_network_controller_pods.go:476] [default/openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] creating logical port openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 for pod on switch crc 2026-01-20T10:56:53.523721506+00:00 stderr F I0120 10:56:53.523710 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-server-v65wr in node crc 2026-01-20T10:56:53.523721506+00:00 stderr F I0120 10:56:53.523716 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr after 0 failed attempt(s) 2026-01-20T10:56:53.523732426+00:00 stderr F I0120 10:56:53.523721 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-server-v65wr 2026-01-20T10:56:53.523732426+00:00 stderr F I0120 10:56:53.523595 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2026-01-20T10:56:53.523747327+00:00 stderr F I0120 10:56:53.523736 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc 2026-01-20T10:56:53.523747327+00:00 stderr F I0120 10:56:53.523743 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s) 2026-01-20T10:56:53.523758047+00:00 stderr F I0120 10:56:53.523748 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2026-01-20T10:56:53.523758047+00:00 stderr F I0120 10:56:53.523603 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2026-01-20T10:56:53.523768797+00:00 stderr F I0120 10:56:53.523760 30089 
obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2026-01-20T10:56:53.523813728+00:00 stderr F I0120 10:56:53.523786 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-zpnhg in node crc 2026-01-20T10:56:53.523813728+00:00 stderr F I0120 10:56:53.523794 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-q88th 2026-01-20T10:56:53.523813728+00:00 stderr F I0120 10:56:53.523798 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg after 0 failed attempt(s) 2026-01-20T10:56:53.523813728+00:00 stderr F I0120 10:56:53.523802 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-q88th 2026-01-20T10:56:53.523813728+00:00 stderr F I0120 10:56:53.523805 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2026-01-20T10:56:53.523813728+00:00 stderr F I0120 10:56:53.523809 30089 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-q88th in node crc 2026-01-20T10:56:53.523829369+00:00 stderr F I0120 10:56:53.523541 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.523829369+00:00 stderr F I0120 10:56:53.523817 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-q88th after 0 failed attempt(s) 2026-01-20T10:56:53.523829369+00:00 stderr F I0120 10:56:53.523822 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.523829369+00:00 stderr F I0120 10:56:53.523825 30089 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-q88th 2026-01-20T10:56:53.523841589+00:00 stderr F I0120 10:56:53.523830 30089 ovn.go:134] Ensuring zone local for Pod openshift-ingress-canary/ingress-canary-2vhcn in node crc 
2026-01-20T10:56:53.523852759+00:00 stderr F I0120 10:56:53.523839 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2026-01-20T10:56:53.523852759+00:00 stderr F I0120 10:56:53.523845 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2026-01-20T10:56:53.523863830+00:00 stderr F I0120 10:56:53.523850 30089 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in node crc 2026-01-20T10:56:53.523863830+00:00 stderr F I0120 10:56:53.523858 30089 base_network_controller_pods.go:476] [default/openshift-ingress-canary/ingress-canary-2vhcn] creating logical port openshift-ingress-canary_ingress-canary-2vhcn for pod on switch crc 2026-01-20T10:56:53.523874780+00:00 stderr F I0120 10:56:53.523866 30089 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] creating logical port openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz for pod on switch crc 2026-01-20T10:56:53.523885260+00:00 stderr F I0120 10:56:53.523867 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.523899390+00:00 stderr F I0120 10:56:53.523883 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.523899390+00:00 stderr F I0120 10:56:53.523892 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc 2026-01-20T10:56:53.523910211+00:00 stderr F I0120 10:56:53.523898 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s) 2026-01-20T10:56:53.523910211+00:00 stderr F I0120 10:56:53.523905 30089 default_network_controller.go:699] Recording success event 
on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2026-01-20T10:56:53.523921201+00:00 stderr F I0120 10:56:53.523900 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.523921201+00:00 stderr F I0120 10:56:53.523915 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2026-01-20T10:56:53.523921201+00:00 stderr F I0120 10:56:53.523916 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.523933731+00:00 stderr F I0120 10:56:53.523920 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2026-01-20T10:56:53.523933731+00:00 stderr F I0120 10:56:53.523925 30089 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in node crc 2026-01-20T10:56:53.523933731+00:00 stderr F I0120 10:56:53.523926 30089 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg in node crc 2026-01-20T10:56:53.523933731+00:00 stderr F I0120 10:56:53.523930 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b after 0 failed attempt(s) 2026-01-20T10:56:53.523945982+00:00 stderr F I0120 10:56:53.523934 30089 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2026-01-20T10:56:53.523956232+00:00 stderr F I0120 10:56:53.523943 30089 obj_retry.go:296] Retry object setup: *v1.Pod cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.523956232+00:00 stderr F I0120 10:56:53.523948 30089 obj_retry.go:358] Adding new object: *v1.Pod cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.523966812+00:00 stderr F I0120 10:56:53.523954 30089 ovn.go:134] Ensuring zone local for Pod 
cert-manager/cert-manager-webhook-855f577f79-7bdxq in node crc 2026-01-20T10:56:53.523977013+00:00 stderr F I0120 10:56:53.523934 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sdkgg after 0 failed attempt(s) 2026-01-20T10:56:53.523988273+00:00 stderr F I0120 10:56:53.523976 30089 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sdkgg 2026-01-20T10:56:53.523988273+00:00 stderr F I0120 10:56:53.523895 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.523998913+00:00 stderr F I0120 10:56:53.523985 30089 base_network_controller_pods.go:476] [default/cert-manager/cert-manager-webhook-855f577f79-7bdxq] creating logical port cert-manager_cert-manager-webhook-855f577f79-7bdxq for pod on switch crc 2026-01-20T10:56:53.524074855+00:00 stderr F I0120 10:56:53.524012 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524117186+00:00 stderr F I0120 10:56:53.524027 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524133777+00:00 stderr F I0120 10:56:53.524018 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524133777+00:00 stderr F I0120 10:56:53.524082 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524187518+00:00 stderr F I0120 10:56:53.524150 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524187518+00:00 stderr F I0120 10:56:53.524157 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == 
{4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524187518+00:00 stderr F I0120 10:56:53.524163 30089 ovs.go:162] Exec(47): stdout: "\"192.168.126.11\"\n" 2026-01-20T10:56:53.524201038+00:00 stderr F I0120 10:56:53.524176 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524201038+00:00 stderr F I0120 10:56:53.524182 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.524201038+00:00 stderr F I0120 10:56:53.524188 30089 ovs.go:163] Exec(47): stderr: "" 2026-01-20T10:56:53.524239789+00:00 stderr F I0120 10:56:53.524198 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.524239789+00:00 stderr F I0120 10:56:53.524206 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524239789+00:00 stderr F I0120 10:56:53.524232 30089 ovn.go:134] Ensuring zone local for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in node crc 2026-01-20T10:56:53.524257200+00:00 stderr F I0120 10:56:53.524242 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 after 0 failed attempt(s) 2026-01-20T10:56:53.524257200+00:00 stderr F I0120 10:56:53.524241 
30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2026-01-20T10:56:53.524257200+00:00 stderr F I0120 10:56:53.523602 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2026-01-20T10:56:53.524257200+00:00 stderr F I0120 10:56:53.524253 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2026-01-20T10:56:53.524269990+00:00 stderr F I0120 10:56:53.524259 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2026-01-20T10:56:53.524269990+00:00 stderr F I0120 10:56:53.524262 30089 ovn.go:134] Ensuring zone local for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in node crc 2026-01-20T10:56:53.524269990+00:00 stderr F I0120 10:56:53.524264 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2026-01-20T10:56:53.524281540+00:00 stderr F I0120 10:56:53.524270 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2026-01-20T10:56:53.524281540+00:00 stderr F I0120 10:56:53.524276 30089 ovn.go:134] Ensuring zone local for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in node crc 2026-01-20T10:56:53.524292261+00:00 stderr F I0120 10:56:53.524277 30089 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qdfr4 in node crc 2026-01-20T10:56:53.524302661+00:00 stderr F I0120 10:56:53.524292 30089 base_network_controller_pods.go:476] [default/openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] creating logical port openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b for pod on switch crc 2026-01-20T10:56:53.524313501+00:00 stderr F I0120 10:56:53.524306 30089 base_network_controller_pods.go:476] [default/openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] creating logical port 
openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd for pod on switch crc 2026-01-20T10:56:53.524323862+00:00 stderr F I0120 10:56:53.524308 30089 base_network_controller_pods.go:476] [default/openshift-multus/network-metrics-daemon-qdfr4] creating logical port openshift-multus_network-metrics-daemon-qdfr4 for pod on switch crc 2026-01-20T10:56:53.524396113+00:00 stderr F I0120 10:56:53.524361 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2026-01-20T10:56:53.524438305+00:00 stderr F I0120 10:56:53.524424 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2026-01-20T10:56:53.524483486+00:00 stderr F I0120 10:56:53.524468 30089 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in node crc 2026-01-20T10:56:53.524766364+00:00 stderr F I0120 10:56:53.524711 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2c 10.217.0.44]} options:{GoMap:map[iface-id-ver:e7870154-de6e-4216-81fb-b87e7502c412 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2c 10.217.0.44]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2062ef9d-1b69-4e9a-ae97-128ed1f07896}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524766364+00:00 stderr F I0120 10:56:53.524729 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524800465+00:00 stderr F I0120 10:56:53.523593 30089 obj_retry.go:358] Adding new object: *v1.Pod 
openshift-marketplace/marketplace-operator-8b455464d-nc8zc 2026-01-20T10:56:53.524800465+00:00 stderr F I0120 10:56:53.524774 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc in node crc 2026-01-20T10:56:53.524800465+00:00 stderr F I0120 10:56:53.524769 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524813405+00:00 stderr F I0120 10:56:53.524799 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/marketplace-operator-8b455464d-nc8zc] creating logical port openshift-marketplace_marketplace-operator-8b455464d-nc8zc for pod on switch crc 2026-01-20T10:56:53.524855696+00:00 stderr F I0120 10:56:53.524820 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524855696+00:00 stderr F I0120 10:56:53.523573 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/redhat-marketplace-2mx7j 2026-01-20T10:56:53.524870067+00:00 stderr F I0120 10:56:53.524860 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/redhat-marketplace-2mx7j 2026-01-20T10:56:53.524880537+00:00 stderr F I0120 10:56:53.524869 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-marketplace-2mx7j in 
node crc 2026-01-20T10:56:53.524913278+00:00 stderr F I0120 10:56:53.524879 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.524954809+00:00 stderr F I0120 10:56:53.523642 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2026-01-20T10:56:53.524954809+00:00 stderr F I0120 10:56:53.524948 30089 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in node crc 2026-01-20T10:56:53.525006000+00:00 stderr F I0120 10:56:53.524979 30089 base_network_controller_pods.go:476] [default/openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] creating logical port openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z for pod on switch crc 2026-01-20T10:56:53.525006000+00:00 stderr F I0120 10:56:53.524974 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1d 10.217.0.29]} options:{GoMap:map[iface-id-ver:5adb4a31-5991-4381-a1ea-f1b095a071ea requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1d 10.217.0.29]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8d0a19e8-81fc-4e62-a546-49e373193af2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.525052152+00:00 stderr F I0120 10:56:53.524780 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.525101513+00:00 stderr F I0120 10:56:53.525033 30089 base_network_controller_pods.go:476] [default/openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] creating logical port openshift-console-operator_console-operator-5dbbc74dc9-cp5cd for pod on switch crc 2026-01-20T10:56:53.525143214+00:00 stderr F I0120 10:56:53.524889 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-marketplace-2mx7j] creating logical port openshift-marketplace_redhat-marketplace-2mx7j for pod on switch crc 2026-01-20T10:56:53.526727275+00:00 stderr F I0120 10:56:53.526644 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526794617+00:00 stderr F I0120 10:56:53.526752 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526893610+00:00 stderr F I0120 10:56:53.525689 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] 
Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526893610+00:00 stderr F I0120 10:56:53.525080 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2062ef9d-1b69-4e9a-ae97-128ed1f07896}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526907270+00:00 stderr F I0120 10:56:53.523572 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2026-01-20T10:56:53.526907270+00:00 stderr F I0120 10:56:53.523651 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2026-01-20T10:56:53.526907270+00:00 stderr F I0120 10:56:53.523636 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in node crc 2026-01-20T10:56:53.526907270+00:00 stderr F I0120 10:56:53.524203 30089 node_ip_handler_linux.go:482] Will not update encap IP 192.168.126.11 - it is already configured 2026-01-20T10:56:53.526926941+00:00 stderr F I0120 10:56:53.524502 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2026-01-20T10:56:53.526926941+00:00 stderr F I0120 10:56:53.524502 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526926941+00:00 stderr F I0120 10:56:53.524527 30089 obj_retry.go:296] Retry object 
setup: *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.526926941+00:00 stderr F I0120 10:56:53.524522 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-etcd/etcd-crc 2026-01-20T10:56:53.526926941+00:00 stderr F I0120 10:56:53.524538 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2026-01-20T10:56:53.526926941+00:00 stderr F I0120 10:56:53.524517 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526941041+00:00 stderr F I0120 10:56:53.524517 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526941041+00:00 stderr F I0120 10:56:53.524249 30089 default_network_controller.go:699] Recording success event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2026-01-20T10:56:53.526953451+00:00 stderr F I0120 10:56:53.524544 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2026-01-20T10:56:53.526953451+00:00 stderr F I0120 
10:56:53.524551 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2026-01-20T10:56:53.526953451+00:00 stderr F I0120 10:56:53.524555 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns/node-resolver-dn27q 2026-01-20T10:56:53.526965322+00:00 stderr F I0120 10:56:53.524522 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526976032+00:00 stderr F I0120 10:56:53.524560 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/image-registry-75b7bb6564-ln84v 2026-01-20T10:56:53.526987082+00:00 stderr F I0120 10:56:53.524554 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.526987082+00:00 stderr F I0120 10:56:53.524559 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/redhat-operators-2nxg8 2026-01-20T10:56:53.526987082+00:00 stderr F I0120 10:56:53.524569 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2026-01-20T10:56:53.527002232+00:00 stderr F I0120 10:56:53.524562 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.527002232+00:00 stderr F I0120 10:56:53.524570 
30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2026-01-20T10:56:53.527002232+00:00 stderr F I0120 10:56:53.523651 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2026-01-20T10:56:53.527002232+00:00 stderr F I0120 10:56:53.524568 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2026-01-20T10:56:53.527002232+00:00 stderr F I0120 10:56:53.524582 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2026-01-20T10:56:53.527002232+00:00 stderr F I0120 10:56:53.524581 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2026-01-20T10:56:53.527015803+00:00 stderr F I0120 10:56:53.523615 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.527015803+00:00 stderr F I0120 10:56:53.524582 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2026-01-20T10:56:53.527015803+00:00 stderr F I0120 10:56:53.524585 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2026-01-20T10:56:53.527015803+00:00 stderr F I0120 10:56:53.524591 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2026-01-20T10:56:53.527015803+00:00 stderr F I0120 10:56:53.524589 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns/dns-default-gbw49 2026-01-20T10:56:53.527028283+00:00 stderr F I0120 10:56:53.524594 30089 obj_retry.go:296] Retry object setup: *v1.Pod 
openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.527028283+00:00 stderr F I0120 10:56:53.524591 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2026-01-20T10:56:53.527028283+00:00 stderr F I0120 10:56:53.524601 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2026-01-20T10:56:53.527040023+00:00 stderr F I0120 10:56:53.524602 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2026-01-20T10:56:53.527040023+00:00 stderr F I0120 10:56:53.524600 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.527040023+00:00 stderr F I0120 10:56:53.524607 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.527040023+00:00 stderr F I0120 10:56:53.524615 30089 obj_retry.go:296] Retry object setup: *v1.Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx 2026-01-20T10:56:53.527040023+00:00 stderr F I0120 10:56:53.524618 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2026-01-20T10:56:53.527053034+00:00 stderr F I0120 10:56:53.524618 30089 obj_retry.go:296] Retry object setup: *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2026-01-20T10:56:53.527053034+00:00 stderr F I0120 10:56:53.524613 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.527053034+00:00 stderr F I0120 10:56:53.524630 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2026-01-20T10:56:53.527053034+00:00 stderr F I0120 10:56:53.524641 30089 obj_retry.go:296] Retry object setup: *v1.Pod cert-manager/cert-manager-758df9885c-cq6zm 
2026-01-20T10:56:53.527053034+00:00 stderr F I0120 10:56:53.524636 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2026-01-20T10:56:53.527138696+00:00 stderr F I0120 10:56:53.524637 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2026-01-20T10:56:53.527157887+00:00 stderr F I0120 10:56:53.524652 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2026-01-20T10:56:53.527169277+00:00 stderr F I0120 10:56:53.524668 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2026-01-20T10:56:53.527169277+00:00 stderr F I0120 10:56:53.524676 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console/downloads-65476884b9-9wcvx 2026-01-20T10:56:53.527169277+00:00 stderr F I0120 10:56:53.524684 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2026-01-20T10:56:53.527169277+00:00 stderr F I0120 10:56:53.524693 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.527181657+00:00 stderr F I0120 10:56:53.524702 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2026-01-20T10:56:53.527181657+00:00 stderr F I0120 10:56:53.524710 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console/console-644bb77b49-5x5xk 2026-01-20T10:56:53.527181657+00:00 stderr F I0120 10:56:53.525020 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8d0a19e8-81fc-4e62-a546-49e373193af2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.527193527+00:00 stderr F I0120 10:56:53.525050 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.527207648+00:00 stderr F I0120 10:56:53.527176 30089 node_ip_handler_linux.go:441] Node address changed to map[172.17.0.5/24:{} 172.18.0.5/24:{} 172.19.0.5/24:{} 192.168.122.10/24:{} 192.168.126.11/24:{} 38.102.83.220/24:{}]. Updating annotations. 
2026-01-20T10:56:53.527244139+00:00 stderr F I0120 10:56:53.527193 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.527333621+00:00 stderr F I0120 10:56:53.527291 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.527372883+00:00 stderr F I0120 10:56:53.526858 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.527500566+00:00 stderr F I0120 10:56:53.527461 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529094018+00:00 stderr F I0120 10:56:53.526905 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529183470+00:00 stderr F I0120 10:56:53.529145 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529320914+00:00 stderr F I0120 10:56:53.529264 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529379476+00:00 stderr F I0120 10:56:53.529346 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529456048+00:00 stderr F I0120 10:56:53.529414 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529468558+00:00 stderr F I0120 10:56:53.526947 30089 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.529501249+00:00 stderr F I0120 10:56:53.529475 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.529501249+00:00 stderr F I0120 10:56:53.529489 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-767c585db5-zd56b in node crc 2026-01-20T10:56:53.529501249+00:00 stderr F I0120 10:56:53.529494 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b after 0 failed attempt(s) 2026-01-20T10:56:53.529514399+00:00 stderr F I0120 10:56:53.529499 30089 default_network_controller.go:699] Recording success event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2026-01-20T10:56:53.529660223+00:00 stderr F I0120 10:56:53.529629 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2026-01-20T10:56:53.529660223+00:00 stderr F I0120 10:56:53.529615 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529673863+00:00 stderr F I0120 10:56:53.526943 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: 
UUID: UUIDName:}] 2026-01-20T10:56:53.529714034+00:00 stderr F I0120 10:56:53.529681 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529790816+00:00 stderr F I0120 10:56:53.529676 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] 
Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529790816+00:00 stderr F I0120 10:56:53.529711 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.529927370+00:00 stderr F I0120 10:56:53.526978 30089 obj_retry.go:358] Adding new object: *v1.Pod 
openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2026-01-20T10:56:53.529927370+00:00 stderr F I0120 10:56:53.529912 30089 ovn.go:134] Ensuring zone local for Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in node crc 2026-01-20T10:56:53.529964931+00:00 stderr F I0120 10:56:53.529940 30089 base_network_controller_pods.go:476] [default/openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] creating logical port openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg for pod on switch crc 2026-01-20T10:56:53.530020572+00:00 stderr F I0120 10:56:53.529965 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530020572+00:00 stderr F I0120 10:56:53.526973 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-etcd/etcd-crc 2026-01-20T10:56:53.530020572+00:00 stderr F I0120 10:56:53.527080 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2062ef9d-1b69-4e9a-ae97-128ed1f07896}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530020572+00:00 stderr F I0120 10:56:53.527123 30089 ovn.go:134] Ensuring zone local for Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in node crc 2026-01-20T10:56:53.530040583+00:00 stderr F I0120 10:56:53.527133 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in node crc 2026-01-20T10:56:53.530040583+00:00 
stderr F I0120 10:56:53.527162 30089 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] creating logical port openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm for pod on switch crc 2026-01-20T10:56:53.530040583+00:00 stderr F I0120 10:56:53.527619 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2026-01-20T10:56:53.530040583+00:00 stderr F I0120 10:56:53.527628 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2026-01-20T10:56:53.530040583+00:00 stderr F I0120 10:56:53.527634 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns/node-resolver-dn27q 2026-01-20T10:56:53.530040583+00:00 stderr F I0120 10:56:53.527660 30089 kube.go:128] Setting annotations map[k8s.ovn.org/host-cidrs:["172.17.0.5/24","172.18.0.5/24","172.19.0.5/24","192.168.122.10/24","192.168.126.11/24","38.102.83.220/24"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"local","interface-id":"br-ex_crc","mac-address":"fa:16:3e:0d:e7:11","ip-addresses":["38.102.83.220/24"],"ip-address":"38.102.83.220/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.220/24"}] on node crc 2026-01-20T10:56:53.530040583+00:00 stderr F I0120 10:56:53.527732 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530054773+00:00 stderr F I0120 10:56:53.527832 30089 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530054773+00:00 stderr F I0120 10:56:53.527862 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-image-registry/image-registry-75b7bb6564-ln84v 2026-01-20T10:56:53.530054773+00:00 stderr F I0120 10:56:53.527872 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2026-01-20T10:56:53.530092074+00:00 stderr F I0120 10:56:53.527854 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:df6e4f33-df74-4326-b096-9d3e45a8c55a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {34f81bc6-9eab-493e-85aa-3c1b2544e7d2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530092074+00:00 stderr F I0120 10:56:53.527907 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2026-01-20T10:56:53.530092074+00:00 stderr F I0120 10:56:53.527990 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.530092074+00:00 stderr F I0120 10:56:53.528008 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.530092074+00:00 stderr F I0120 10:56:53.528003 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530092074+00:00 stderr F I0120 10:56:53.528142 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.530109895+00:00 stderr F I0120 10:56:53.528128 30089 port_cache.go:96] port-cache(openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t): added port &{name:openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t uuid:710ea152-1844-44ad-b1a6-805ec9a3700e logicalSwitch:crc ips:[0xc00099a510] mac:[10 88 10 217 0 45] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.45/23] and MAC: 0a:58:0a:d9:00:2d 2026-01-20T10:56:53.530109895+00:00 stderr F I0120 10:56:53.528168 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.530109895+00:00 stderr F I0120 10:56:53.528181 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2026-01-20T10:56:53.530109895+00:00 stderr F I0120 10:56:53.528170 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530109895+00:00 stderr F I0120 10:56:53.528234 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/redhat-operators-2nxg8 2026-01-20T10:56:53.530109895+00:00 stderr F I0120 10:56:53.528246 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2026-01-20T10:56:53.530123145+00:00 stderr F I0120 
10:56:53.528301 30089 obj_retry.go:358] Adding new object: *v1.Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx 2026-01-20T10:56:53.530123145+00:00 stderr F I0120 10:56:53.528299 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "7d51f445-054a-4e4f-a67b-a828f5a32511" 2026-01-20T10:56:53.530123145+00:00 stderr F I0120 10:56:53.528312 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2026-01-20T10:56:53.530123145+00:00 stderr F I0120 10:56:53.528353 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2026-01-20T10:56:53.530123145+00:00 stderr F I0120 10:56:53.528354 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2026-01-20T10:56:53.530135905+00:00 stderr F I0120 10:56:53.528405 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2026-01-20T10:56:53.530135905+00:00 stderr F I0120 10:56:53.528463 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2026-01-20T10:56:53.530135905+00:00 stderr F I0120 10:56:53.528479 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns/dns-default-gbw49 2026-01-20T10:56:53.530135905+00:00 stderr F I0120 10:56:53.528484 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2026-01-20T10:56:53.530151856+00:00 stderr F I0120 10:56:53.528533 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.530151856+00:00 stderr F I0120 10:56:53.528562 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.530151856+00:00 stderr F I0120 10:56:53.528566 30089 
obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2026-01-20T10:56:53.530151856+00:00 stderr F I0120 10:56:53.528600 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2026-01-20T10:56:53.530164977+00:00 stderr F I0120 10:56:53.528616 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2026-01-20T10:56:53.530164977+00:00 stderr F I0120 10:56:53.528635 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2026-01-20T10:56:53.530164977+00:00 stderr F I0120 10:56:53.528693 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.530164977+00:00 stderr F I0120 10:56:53.528705 30089 obj_retry.go:358] Adding new object: *v1.Pod cert-manager/cert-manager-758df9885c-cq6zm 2026-01-20T10:56:53.530164977+00:00 stderr F I0120 10:56:53.528700 30089 obj_retry.go:358] Adding new object: *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2026-01-20T10:56:53.530178147+00:00 stderr F I0120 10:56:53.528730 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2026-01-20T10:56:53.530178147+00:00 stderr F I0120 10:56:53.528786 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.530178147+00:00 stderr F I0120 10:56:53.528820 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2026-01-20T10:56:53.530178147+00:00 stderr F I0120 10:56:53.528822 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2026-01-20T10:56:53.530178147+00:00 stderr F I0120 10:56:53.528827 30089 obj_retry.go:358] Adding new 
object: *v1.Pod openshift-console/console-644bb77b49-5x5xk 2026-01-20T10:56:53.530204168+00:00 stderr F I0120 10:56:53.528832 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8d0a19e8-81fc-4e62-a546-49e373193af2}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530204168+00:00 stderr F I0120 10:56:53.528898 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2026-01-20T10:56:53.530204168+00:00 stderr F I0120 10:56:53.528902 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2026-01-20T10:56:53.530204168+00:00 stderr F I0120 10:56:53.528895 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2026-01-20T10:56:53.530204168+00:00 stderr F I0120 10:56:53.528933 30089 obj_retry.go:358] Adding new object: *v1.Pod openshift-console/downloads-65476884b9-9wcvx 2026-01-20T10:56:53.530216898+00:00 stderr F I0120 10:56:53.530080 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530265520+00:00 stderr F I0120 10:56:53.530204 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530325971+00:00 stderr F I0120 10:56:53.530219 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530364082+00:00 stderr F I0120 10:56:53.527019 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] 
Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530439754+00:00 stderr F I0120 10:56:53.530415 30089 ovn.go:134] Ensuring zone local for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in node crc 2026-01-20T10:56:53.530439754+00:00 stderr F I0120 10:56:53.530411 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530452465+00:00 stderr F I0120 10:56:53.530441 30089 base_network_controller_pods.go:476] [default/openshift-dns-operator/dns-operator-75f687757b-nz2xb] creating logical port openshift-dns-operator_dns-operator-75f687757b-nz2xb for pod on switch crc 2026-01-20T10:56:53.530636139+00:00 stderr F I0120 10:56:53.530286 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530688401+00:00 stderr F I0120 10:56:53.530633 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid 
== {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530727952+00:00 stderr F I0120 10:56:53.530687 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530727952+00:00 stderr F I0120 10:56:53.530707 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530821554+00:00 stderr F I0120 10:56:53.530782 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530821554+00:00 stderr F I0120 10:56:53.530789 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.44 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fc1e926e-4c6b-4b78-8d31-0fe859db2dd0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.530918497+00:00 stderr F I0120 10:56:53.530871 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:fc1e926e-4c6b-4b78-8d31-0fe859db2dd0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531029710+00:00 stderr F I0120 10:56:53.530906 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2c 10.217.0.44]} options:{GoMap:map[iface-id-ver:e7870154-de6e-4216-81fb-b87e7502c412 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2c 10.217.0.44]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2062ef9d-1b69-4e9a-ae97-128ed1f07896}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2062ef9d-1b69-4e9a-ae97-128ed1f07896}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2062ef9d-1b69-4e9a-ae97-128ed1f07896}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.44 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fc1e926e-4c6b-4b78-8d31-0fe859db2dd0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fc1e926e-4c6b-4b78-8d31-0fe859db2dd0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531131842+00:00 stderr F I0120 10:56:53.531074 30089 model_client.go:381] Update operations 
generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531229215+00:00 stderr F I0120 10:56:53.531173 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531303627+00:00 stderr F I0120 10:56:53.531259 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531351178+00:00 stderr F I0120 10:56:53.531335 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-7xghp in node crc 2026-01-20T10:56:53.531395129+00:00 stderr F I0120 10:56:53.531347 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531395129+00:00 stderr F I0120 10:56:53.531296 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a 
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531413140+00:00 stderr F I0120 10:56:53.531404 30089 base_network_controller_pods.go:476] [default/openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] creating logical port openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw for pod on switch crc 2026-01-20T10:56:53.531423870+00:00 stderr F I0120 10:56:53.531393 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531434640+00:00 stderr F I0120 10:56:53.531414 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.531475411+00:00 stderr F I0120 10:56:53.531269 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.532169339+00:00 stderr F I0120 10:56:53.532033 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.29 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cf213cb5-56f2-46e3-839e-1ce83cbf70ce}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.532169339+00:00 stderr F I0120 10:56:53.532088 30089 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in node crc 2026-01-20T10:56:53.532169339+00:00 stderr F I0120 10:56:53.532152 30089 base_network_controller_pods.go:476] [default/openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] creating logical port openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv for pod on switch crc 2026-01-20T10:56:53.532251751+00:00 stderr F I0120 10:56:53.532205 30089 model_client.go:397] Mutate 
operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:cf213cb5-56f2-46e3-839e-1ce83cbf70ce}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.532251751+00:00 stderr F I0120 10:56:53.531375 30089 base_network_controller_pods.go:476] [default/openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] creating logical port openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m for pod on switch crc 2026-01-20T10:56:53.532357484+00:00 stderr F I0120 10:56:53.532242 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1d 10.217.0.29]} options:{GoMap:map[iface-id-ver:5adb4a31-5991-4381-a1ea-f1b095a071ea requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1d 10.217.0.29]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8d0a19e8-81fc-4e62-a546-49e373193af2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8d0a19e8-81fc-4e62-a546-49e373193af2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8d0a19e8-81fc-4e62-a546-49e373193af2}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.29 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cf213cb5-56f2-46e3-839e-1ce83cbf70ce}] Until: Durable: 
Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:cf213cb5-56f2-46e3-839e-1ce83cbf70ce}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.532588820+00:00 stderr F I0120 10:56:53.531449 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.532588820+00:00 stderr F I0120 10:56:53.531373 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-7xghp after 0 failed attempt(s) 2026-01-20T10:56:53.532605541+00:00 stderr F I0120 10:56:53.532590 30089 default_network_controller.go:699] Recording success event on pod openshift-network-node-identity/network-node-identity-7xghp 2026-01-20T10:56:53.532605541+00:00 stderr F I0120 10:56:53.531503 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:34f81bc6-9eab-493e-85aa-3c1b2544e7d2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.532754135+00:00 stderr F I0120 10:56:53.532572 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where 
column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.532991382+00:00 stderr F I0120 10:56:53.532933 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.533150296+00:00 stderr F I0120 10:56:53.533032 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.533350741+00:00 stderr F I0120 10:56:53.533009 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.533433033+00:00 
stderr F I0120 10:56:53.533313 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.533480744+00:00 stderr F I0120 10:56:53.533444 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.533529156+00:00 stderr F I0120 10:56:53.533488 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.533529156+00:00 stderr F I0120 10:56:53.533505 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.533703660+00:00 stderr F I0120 10:56:53.531437 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} 
options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.533797193+00:00 stderr F I0120 10:56:53.531525 30089 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in node crc 2026-01-20T10:56:53.533862384+00:00 stderr F I0120 10:56:53.533839 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p after 0 failed attempt(s) 2026-01-20T10:56:53.533922286+00:00 stderr F I0120 10:56:53.533893 30089 default_network_controller.go:699] Recording success event on 
pod openshift-multus/multus-additional-cni-plugins-bzj2p 2026-01-20T10:56:53.533968647+00:00 stderr F I0120 10:56:53.531526 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc 2026-01-20T10:56:53.534027719+00:00 stderr F I0120 10:56:53.534005 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s) 2026-01-20T10:56:53.534125661+00:00 stderr F I0120 10:56:53.531532 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/installer-13-crc in node crc 2026-01-20T10:56:53.534173643+00:00 stderr F I0120 10:56:53.531534 30089 ovn.go:134] Ensuring zone local for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in node crc 2026-01-20T10:56:53.534219304+00:00 stderr F I0120 10:56:53.531541 30089 ovn.go:134] Ensuring zone local for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in node crc 2026-01-20T10:56:53.534257085+00:00 stderr F I0120 10:56:53.531541 30089 ovn.go:134] Ensuring zone local for Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in node crc 2026-01-20T10:56:53.534291516+00:00 stderr F I0120 10:56:53.531555 30089 ovn.go:134] Ensuring zone local for Pod openshift-console/console-644bb77b49-5x5xk in node crc 2026-01-20T10:56:53.534321586+00:00 stderr F I0120 10:56:53.531557 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.534926852+00:00 stderr F I0120 10:56:53.531627 30089 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in node crc 2026-01-20T10:56:53.534960093+00:00 stderr F I0120 10:56:53.531634 30089 pods.go:220] [openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] addLogicalPort took 7.996651ms, libovsdb time 3.033131ms 2026-01-20T10:56:53.534993084+00:00 stderr F I0120 10:56:53.531642 30089 port_cache.go:96] port-cache(openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z): added port &{name:openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z uuid:f5604df7-c1b9-4360-a570-e22fbf62c520 logicalSwitch:crc ips:[0xc001861020] mac:[10 88 10 217 0 9] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.9/23] and MAC: 0a:58:0a:d9:00:09 2026-01-20T10:56:53.535022725+00:00 stderr F I0120 
10:56:53.531659 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-v54bt in node crc 2026-01-20T10:56:53.535052826+00:00 stderr F I0120 10:56:53.531664 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in node crc 2026-01-20T10:56:53.535111107+00:00 stderr F I0120 10:56:53.531668 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.535158158+00:00 stderr F I0120 10:56:53.530126 30089 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc 2026-01-20T10:56:53.535189139+00:00 stderr F I0120 10:56:53.531699 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.535218510+00:00 stderr F I0120 10:56:53.531716 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-operators-2nxg8 in node crc 2026-01-20T10:56:53.535257221+00:00 stderr F I0120 10:56:53.531721 30089 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in node crc 2026-01-20T10:56:53.535292192+00:00 stderr F I0120 10:56:53.531732 30089 ovn.go:134] Ensuring zone local for Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx in 
node crc 2026-01-20T10:56:53.535338063+00:00 stderr F I0120 10:56:53.531746 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0f394926-bdb9-425c-b36e-264d7fd34550" 2026-01-20T10:56:53.535375724+00:00 stderr F I0120 10:56:53.531753 30089 ovn.go:134] Ensuring zone local for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in node crc 2026-01-20T10:56:53.535418805+00:00 stderr F I0120 10:56:53.531759 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in node crc 2026-01-20T10:56:53.535458306+00:00 stderr F I0120 10:56:53.531764 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in node crc 2026-01-20T10:56:53.535506327+00:00 stderr F I0120 10:56:53.531770 30089 ovn.go:134] Ensuring zone local for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in node crc 2026-01-20T10:56:53.535564829+00:00 stderr F I0120 10:56:53.535524 30089 base_network_controller_pods.go:476] [default/openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] creating logical port openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 for pod on switch crc 2026-01-20T10:56:53.535663941+00:00 stderr F I0120 10:56:53.531784 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in node crc 2026-01-20T10:56:53.535680722+00:00 stderr F I0120 10:56:53.535665 30089 base_network_controller_pods.go:476] [default/openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] creating logical port openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb for pod on switch crc 2026-01-20T10:56:53.535715383+00:00 stderr F I0120 10:56:53.535633 30089 base_network_controller_pods.go:476] 
[default/openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] creating logical port openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz for pod on switch crc 2026-01-20T10:56:53.535865358+00:00 stderr F I0120 10:56:53.535817 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.535865358+00:00 stderr F I0120 10:56:53.531790 30089 ovn.go:134] Ensuring zone local for Pod openshift-dns/dns-default-gbw49 in node crc 2026-01-20T10:56:53.535896719+00:00 stderr F I0120 10:56:53.535861 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.535909449+00:00 stderr F I0120 10:56:53.535897 30089 base_network_controller_pods.go:476] [default/openshift-dns/dns-default-gbw49] creating logical port openshift-dns_dns-default-gbw49 for pod on switch crc 2026-01-20T10:56:53.535922699+00:00 stderr F I0120 10:56:53.535902 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2026-01-20T10:56:53.535966770+00:00 stderr F I0120 10:56:53.531804 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/certified-operators-mpjb7 in node crc 2026-01-20T10:56:53.536021252+00:00 stderr F I0120 10:56:53.535992 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/certified-operators-mpjb7] creating logical port openshift-marketplace_certified-operators-mpjb7 for pod on switch crc 2026-01-20T10:56:53.536036762+00:00 stderr F I0120 10:56:53.531804 30089 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in node crc 2026-01-20T10:56:53.536053373+00:00 stderr F I0120 10:56:53.536044 30089 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] creating logical port openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh for pod on switch crc 2026-01-20T10:56:53.536288209+00:00 stderr F I0120 10:56:53.536232 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:1d5b65e7-a4c3-495a-a5b0-72caab7218fd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {740dc30b-dced-4651-920f-33387935c67c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536288209+00:00 stderr F I0120 10:56:53.531809 30089 ovn.go:134] Ensuring zone local for Pod cert-manager/cert-manager-758df9885c-cq6zm in node crc 2026-01-20T10:56:53.536313669+00:00 stderr F I0120 10:56:53.536274 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536313669+00:00 stderr F I0120 10:56:53.536279 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:740dc30b-dced-4651-920f-33387935c67c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536313669+00:00 stderr F I0120 10:56:53.536270 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536336350+00:00 stderr F I0120 10:56:53.536289 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536336350+00:00 stderr F I0120 10:56:53.536306 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536336350+00:00 stderr F I0120 10:56:53.531811 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in node crc 2026-01-20T10:56:53.536351050+00:00 stderr F I0120 10:56:53.536327 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:740dc30b-dced-4651-920f-33387935c67c}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536364061+00:00 stderr F I0120 10:56:53.536343 30089 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] creating logical port openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr for pod on switch crc 2026-01-20T10:56:53.536409202+00:00 stderr F I0120 10:56:53.536324 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536409202+00:00 stderr F I0120 10:56:53.536362 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536409202+00:00 stderr F I0120 10:56:53.536375 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536442643+00:00 stderr F I0120 10:56:53.536425 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: 
UUID: UUIDName:}] 2026-01-20T10:56:53.536489274+00:00 stderr F I0120 10:56:53.536447 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536556796+00:00 stderr F I0120 10:56:53.536314 30089 base_network_controller_pods.go:476] [default/cert-manager/cert-manager-758df9885c-cq6zm] creating logical port cert-manager_cert-manager-758df9885c-cq6zm for pod on switch crc 2026-01-20T10:56:53.536626298+00:00 stderr F I0120 10:56:53.536573 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536666699+00:00 stderr F I0120 10:56:53.536629 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536684919+00:00 stderr F I0120 10:56:53.536657 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{4c834564-7798-4750-8152-583ce8856c99}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536726500+00:00 stderr F I0120 10:56:53.536688 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536726500+00:00 stderr F I0120 10:56:53.531813 30089 ovn.go:134] Ensuring zone local for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in node crc 2026-01-20T10:56:53.536726500+00:00 stderr F I0120 10:56:53.536701 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:4c834564-7798-4750-8152-583ce8856c99}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536750241+00:00 stderr F I0120 10:56:53.536726 30089 base_network_controller_pods.go:476] [default/hostpath-provisioner/csi-hostpathplugin-hvm8g] creating logical port hostpath-provisioner_csi-hostpathplugin-hvm8g for pod on switch crc 2026-01-20T10:56:53.536790742+00:00 stderr F I0120 10:56:53.536723 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:1d5b65e7-a4c3-495a-a5b0-72caab7218fd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {740dc30b-dced-4651-920f-33387935c67c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:740dc30b-dced-4651-920f-33387935c67c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:740dc30b-dced-4651-920f-33387935c67c}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4c834564-7798-4750-8152-583ce8856c99}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:4c834564-7798-4750-8152-583ce8856c99}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536836023+00:00 stderr F I0120 10:56:53.536785 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536889004+00:00 stderr F I0120 10:56:53.536851 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: 
Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536904275+00:00 stderr F I0120 10:56:53.536886 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536944016+00:00 stderr F I0120 10:56:53.536837 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536944016+00:00 stderr F I0120 10:56:53.536921 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.536987837+00:00 stderr F I0120 10:56:53.531818 30089 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in node crc 2026-01-20T10:56:53.537044419+00:00 stderr F I0120 10:56:53.536692 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} options:{GoMap:map[iface-id-ver:f12a256b-7128-4680-8f54-8e40a3e56300 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c58d2e89-322d-46d7-9147-b9e591118d62}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537119890+00:00 stderr F I0120 10:56:53.537074 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c58d2e89-322d-46d7-9147-b9e591118d62}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537184772+00:00 stderr F I0120 10:56:53.537149 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c58d2e89-322d-46d7-9147-b9e591118d62}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537184772+00:00 stderr F I0120 10:56:53.537149 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537184772+00:00 stderr F I0120 10:56:53.537146 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537232053+00:00 stderr F I0120 10:56:53.537194 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: 
Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537232053+00:00 stderr F I0120 10:56:53.537209 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537254154+00:00 stderr F I0120 10:56:53.537214 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537254154+00:00 stderr F I0120 10:56:53.537240 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537268144+00:00 stderr F I0120 10:56:53.531839 30089 ovn.go:134] Ensuring zone local for Pod openshift-console/downloads-65476884b9-9wcvx in node crc 2026-01-20T10:56:53.537282315+00:00 stderr F I0120 10:56:53.537221 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537296345+00:00 stderr F I0120 10:56:53.537280 30089 base_network_controller_pods.go:476] [default/openshift-console/downloads-65476884b9-9wcvx] creating logical port openshift-console_downloads-65476884b9-9wcvx for pod on switch crc 2026-01-20T10:56:53.537309815+00:00 stderr F I0120 10:56:53.537257 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: 
Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537356357+00:00 stderr F I0120 10:56:53.537247 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: 
UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537435999+00:00 stderr F I0120 10:56:53.537413 30089 base_network_controller_pods.go:476] [default/openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] creating logical port openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 for pod on switch crc 2026-01-20T10:56:53.537541241+00:00 stderr F I0120 10:56:53.537490 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.42 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {728ddbde-04b2-49cf-8404-6ed04db916a3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537559462+00:00 stderr F I0120 10:56:53.537537 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:728ddbde-04b2-49cf-8404-6ed04db916a3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2026-01-20T10:56:53.537572912+00:00 stderr F I0120 10:56:53.537423 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537623554+00:00 stderr F I0120 10:56:53.537555 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} options:{GoMap:map[iface-id-ver:f12a256b-7128-4680-8f54-8e40a3e56300 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c58d2e89-322d-46d7-9147-b9e591118d62}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c58d2e89-322d-46d7-9147-b9e591118d62}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c58d2e89-322d-46d7-9147-b9e591118d62}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.42 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {728ddbde-04b2-49cf-8404-6ed04db916a3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate 
Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:728ddbde-04b2-49cf-8404-6ed04db916a3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537623554+00:00 stderr F I0120 10:56:53.537593 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537692785+00:00 stderr F I0120 10:56:53.531850 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-wwpnd in node crc 2026-01-20T10:56:53.537708066+00:00 stderr F I0120 10:56:53.537687 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-wwpnd after 0 failed attempt(s) 2026-01-20T10:56:53.537708066+00:00 stderr F I0120 10:56:53.537699 30089 default_network_controller.go:699] Recording success event on pod openshift-network-operator/iptables-alerter-wwpnd 2026-01-20T10:56:53.537720916+00:00 stderr F I0120 10:56:53.531852 30089 ovn.go:134] Ensuring zone local for Pod openshift-service-ca/service-ca-666f99b6f-kk8kg in node crc 2026-01-20T10:56:53.537772837+00:00 stderr F I0120 10:56:53.537744 30089 base_network_controller_pods.go:476] [default/openshift-service-ca/service-ca-666f99b6f-kk8kg] creating logical port openshift-service-ca_service-ca-666f99b6f-kk8kg for pod on switch crc 2026-01-20T10:56:53.537772837+00:00 stderr F I0120 10:56:53.531877 30089 ovn.go:134] Ensuring zone local for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in node crc 2026-01-20T10:56:53.537792758+00:00 stderr F I0120 10:56:53.537783 30089 base_network_controller_pods.go:476] 
[default/openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] creating logical port openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc for pod on switch crc 2026-01-20T10:56:53.537863940+00:00 stderr F I0120 10:56:53.531885 30089 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in node crc 2026-01-20T10:56:53.537887230+00:00 stderr F I0120 10:56:53.537861 30089 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] creating logical port openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv for pod on switch crc 2026-01-20T10:56:53.537955292+00:00 stderr F I0120 10:56:53.531898 30089 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dn27q in node crc 2026-01-20T10:56:53.537955292+00:00 stderr F I0120 10:56:53.537940 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns/node-resolver-dn27q after 0 failed attempt(s) 2026-01-20T10:56:53.537955292+00:00 stderr F I0120 10:56:53.537923 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537955292+00:00 stderr F I0120 10:56:53.537949 30089 default_network_controller.go:699] Recording success event on pod openshift-dns/node-resolver-dn27q 2026-01-20T10:56:53.537973843+00:00 stderr F I0120 10:56:53.531981 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} 
options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.537987413+00:00 stderr F I0120 10:56:53.537972 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538027344+00:00 stderr F I0120 10:56:53.537989 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538027344+00:00 stderr F I0120 10:56:53.538009 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538096606+00:00 stderr F I0120 10:56:53.538032 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538096606+00:00 stderr F I0120 10:56:53.538050 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538127107+00:00 stderr F I0120 10:56:53.538039 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538127107+00:00 stderr F I0120 10:56:53.538099 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538177198+00:00 stderr F I0120 10:56:53.538144 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538177198+00:00 
stderr F I0120 10:56:53.538141 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538263250+00:00 stderr F I0120 10:56:53.538219 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538361093+00:00 stderr F I0120 10:56:53.538315 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538378053+00:00 stderr F I0120 10:56:53.538356 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538429645+00:00 stderr F I0120 10:56:53.538371 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} 
type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538456625+00:00 stderr F I0120 10:56:53.538435 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538470726+00:00 stderr F I0120 10:56:53.538448 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538514137+00:00 stderr F I0120 10:56:53.538473 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538514137+00:00 stderr F I0120 10:56:53.538490 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538559278+00:00 stderr F I0120 10:56:53.538496 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where 
column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538580640+00:00 stderr F I0120 10:56:53.538509 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538673502+00:00 stderr F I0120 10:56:53.531998 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538774375+00:00 stderr F I0120 10:56:53.530117 30089 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in node crc 2026-01-20T10:56:53.538774375+00:00 stderr F I0120 10:56:53.538734 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538793045+00:00 stderr F I0120 10:56:53.538779 30089 base_network_controller_pods.go:476] [default/openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] creating logical port openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb for pod on switch crc 2026-01-20T10:56:53.538846976+00:00 stderr F I0120 10:56:53.538802 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid 
== {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538846976+00:00 stderr F I0120 10:56:53.532017 30089 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/image-registry-75b7bb6564-ln84v in node crc 2026-01-20T10:56:53.538870027+00:00 stderr F I0120 10:56:53.538858 30089 base_network_controller_pods.go:476] [default/openshift-image-registry/image-registry-75b7bb6564-ln84v] creating logical port openshift-image-registry_image-registry-75b7bb6564-ln84v for pod on switch crc 2026-01-20T10:56:53.538919078+00:00 stderr F I0120 10:56:53.538731 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538972810+00:00 stderr F I0120 10:56:53.538931 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538989980+00:00 stderr F I0120 10:56:53.538970 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.538989980+00:00 stderr F I0120 
10:56:53.538965 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.539049132+00:00 stderr F I0120 10:56:53.539008 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.539049132+00:00 stderr F I0120 10:56:53.538987 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: 
Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.538839 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: 
UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539097 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.534114 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:34f81bc6-9eab-493e-85aa-3c1b2544e7d2}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539015 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} options:{GoMap:map[iface-id-ver:7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539340 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}]}}] Timeout: 
Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539393 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539463 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539508 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539525 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.534168 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.534168 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539632 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr F I0120 10:56:53.539671 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7c218153-0e49-4e95-93a2-1f77234301b9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.542966115+00:00 stderr P I0120 10:56:53.539656 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: L 2026-01-20T10:56:53.543013396+00:00 stderr F ock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543013396+00:00 stderr F I0120 10:56:53.539714 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.37 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5b16eb4b-47a3-4eb4-8e75-fbe7791a8316}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543013396+00:00 stderr F I0120 10:56:53.539736 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7c218153-0e49-4e95-93a2-1f77234301b9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543013396+00:00 stderr F I0120 10:56:53.539745 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5b16eb4b-47a3-4eb4-8e75-fbe7791a8316}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543013396+00:00 stderr F I0120 10:56:53.539762 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} options:{GoMap:map[iface-id-ver:7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}] Until: Durable: Comment: 
Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.37 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5b16eb4b-47a3-4eb4-8e75-fbe7791a8316}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5b16eb4b-47a3-4eb4-8e75-fbe7791a8316}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543013396+00:00 stderr F I0120 10:56:53.539762 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:df6e4f33-df74-4326-b096-9d3e45a8c55a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {34f81bc6-9eab-493e-85aa-3c1b2544e7d2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:34f81bc6-9eab-493e-85aa-3c1b2544e7d2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: 
UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:34f81bc6-9eab-493e-85aa-3c1b2544e7d2}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7c218153-0e49-4e95-93a2-1f77234301b9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7c218153-0e49-4e95-93a2-1f77234301b9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543013396+00:00 stderr F I0120 10:56:53.534189 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543013396+00:00 stderr F I0120 10:56:53.539930 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543013396+00:00 stderr F I0120 10:56:53.534211 30089 base_network_controller_pods.go:476] [default/openshift-kube-apiserver/installer-13-crc] creating logical port openshift-kube-apiserver_installer-13-crc for pod on switch crc 2026-01-20T10:56:53.543013396+00:00 stderr P I0120 
10:56:53.539953 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] 2026-01-20T10:56:53.543038547+00:00 stderr F Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.534230 30089 base_network_controller_pods.go:476] [default/openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] creating logical port openshift-apiserver_apiserver-7fc54b8dd7-d2bhp for pod on switch crc 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 
10:56:53.534286 30089 base_network_controller_pods.go:476] [default/openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] creating logical port openshift-etcd-operator_etcd-operator-768d5b5d86-722mg for pod on switch crc 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540277 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} options:{GoMap:map[iface-id-ver:9387c79a-cd5b-4d24-a558-6dbbdd89fe1e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8a3b108-0209-4107-a8a7-c85848e5a053}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540322 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8a3b108-0209-4107-a8a7-c85848e5a053}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540373 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8a3b108-0209-4107-a8a7-c85848e5a053}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.534302 30089 base_network_controller_pods.go:476] [default/openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] creating logical port openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs for pod on switch crc 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540419 
30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540437 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540470 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540493 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540519 30089 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540556 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540562 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540601 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540638 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: 
Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540760 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.38 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8b3c4de-4c65-4766-8e56-37c95584c479}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr F I0120 10:56:53.540800 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:b8b3c4de-4c65-4766-8e56-37c95584c479}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543038547+00:00 stderr P I0120 10:56:53.540826 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} options:{GoMap:map[iface-id-ver:9387c79a-cd5b-4d24-a558-6dbbdd89fe1e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8a3b108-0209-4107-a8a7-c85848e5a053}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8a3b108-0209-4107-a8a7-c85848e5a053}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{ 2026-01-20T10:56:53.543089038+00:00 stderr F GoSet:[{GoUUID:d8a3b108-0209-4107-a8a7-c85848e5a053}]}}] Timeout: 
Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.38 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8b3c4de-4c65-4766-8e56-37c95584c479}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:b8b3c4de-4c65-4766-8e56-37c95584c479}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.540876 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.540912 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.540940 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.540977 
30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.540963 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.540997 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.534794 30089 base_network_controller_pods.go:476] [default/openshift-console/console-644bb77b49-5x5xk] creating logical port openshift-console_console-644bb77b49-5x5xk for pod on switch crc 
2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.541144 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.541216 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.541235 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr F I0120 10:56:53.541271 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543089038+00:00 stderr P I0120 10:56:53.541245 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 
10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where colum 2026-01-20T10:56:53.543118069+00:00 stderr F n _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.534994 30089 base_network_controller_pods.go:476] [default/openshift-controller-manager/controller-manager-778975cc4f-x5vcf] creating logical port openshift-controller-manager_controller-manager-778975cc4f-x5vcf for pod on switch crc 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.535001 30089 obj_retry.go:379] Retry successful for *v1.Pod 
openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t after 0 failed attempt(s) 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541427 30089 default_network_controller.go:699] Recording success event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.535025 30089 pods.go:220] [openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] addLogicalPort took 10.053616ms, libovsdb time 688.399µs 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541444 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z after 0 failed attempt(s) 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541454 30089 default_network_controller.go:699] Recording success event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.535033 30089 port_cache.go:96] port-cache(cert-manager_cert-manager-webhook-855f577f79-7bdxq): added port &{name:cert-manager_cert-manager-webhook-855f577f79-7bdxq uuid:2062ef9d-1b69-4e9a-ae97-128ed1f07896 logicalSwitch:crc ips:[0xc00072d170] mac:[10 88 10 217 0 44] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.44/23] and MAC: 0a:58:0a:d9:00:2c 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541472 30089 pods.go:220] [cert-manager/cert-manager-webhook-855f577f79-7bdxq] addLogicalPort took 17.495494ms, libovsdb time 868.813µs 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541472 30089 port_cache.go:96] port-cache(openshift-etcd-operator_etcd-operator-768d5b5d86-722mg): added port &{name:openshift-etcd-operator_etcd-operator-768d5b5d86-722mg uuid:e834ded8-9d5b-46e7-b962-1ee96928bab4 logicalSwitch:crc ips:[0xc001395350] mac:[10 88 10 217 0 8] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.8/23] and MAC: 
0a:58:0a:d9:00:08 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.535109 30089 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-target-v54bt] creating logical port openshift-network-diagnostics_network-check-target-v54bt for pod on switch crc 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541491 30089 port_cache.go:96] port-cache(openshift-ingress-canary_ingress-canary-2vhcn): added port &{name:openshift-ingress-canary_ingress-canary-2vhcn uuid:7a350d82-7987-4ce6-ae41-dd930411ca29 logicalSwitch:crc ips:[0xc0013e9fb0] mac:[10 88 10 217 0 71] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.71/23] and MAC: 0a:58:0a:d9:00:47 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541508 30089 port_cache.go:96] port-cache(openshift-console-operator_console-operator-5dbbc74dc9-cp5cd): added port &{name:openshift-console-operator_console-operator-5dbbc74dc9-cp5cd uuid:6af06372-81fc-4451-8678-6253ce70f317 logicalSwitch:crc ips:[0xc001a16d20] mac:[10 88 10 217 0 62] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.62/23] and MAC: 0a:58:0a:d9:00:3e 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541505 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541520 30089 port_cache.go:96] port-cache(openshift-multus_network-metrics-daemon-qdfr4): added port &{name:openshift-multus_network-metrics-daemon-qdfr4 uuid:3564ddfd-a311-4df3-b5d0-1e76294b4ab0 logicalSwitch:crc ips:[0xc0011c8390] mac:[10 88 10 217 0 3] 
expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.3/23] and MAC: 0a:58:0a:d9:00:03 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541536 30089 port_cache.go:96] port-cache(openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg): added port &{name:openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg uuid:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4 logicalSwitch:crc ips:[0xc001333560] mac:[10 88 10 217 0 46] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.46/23] and MAC: 0a:58:0a:d9:00:2e 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541539 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541548 30089 port_cache.go:96] port-cache(openshift-marketplace_marketplace-operator-8b455464d-nc8zc): added port &{name:openshift-marketplace_marketplace-operator-8b455464d-nc8zc uuid:8d0a19e8-81fc-4e62-a546-49e373193af2 logicalSwitch:crc ips:[0xc001582b40] mac:[10 88 10 217 0 29] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.29/23] and MAC: 0a:58:0a:d9:00:1d 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541560 30089 port_cache.go:96] port-cache(openshift-dns-operator_dns-operator-75f687757b-nz2xb): added port &{name:openshift-dns-operator_dns-operator-75f687757b-nz2xb uuid:b212e2c2-3d4e-4898-aede-c926b74813f0 logicalSwitch:crc ips:[0xc0016e7500] mac:[10 88 10 217 0 18] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.18/23] and MAC: 0a:58:0a:d9:00:12 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541570 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541481 30089 obj_retry.go:379] Retry successful for *v1.Pod cert-manager/cert-manager-webhook-855f577f79-7bdxq after 0 failed attempt(s) 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.541588 30089 default_network_controller.go:699] Recording success event on pod cert-manager/cert-manager-webhook-855f577f79-7bdxq 2026-01-20T10:56:53.543118069+00:00 stderr F I0120 10:56:53.535188 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543118069+00:00 stderr P I0120 10:56:53.541312 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]} 2026-01-20T10:56:53.543151530+00:00 stderr F }] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.541638 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.541668 30089 
model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.541711 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.541760 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.541854 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.541888 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.541904 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.542056 30089 model_client.go:381] Update operations 
generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.542130 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.541571 30089 port_cache.go:96] port-cache(openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8): added port &{name:openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 uuid:99ef3a4b-7858-4c9b-90db-217867afe36a logicalSwitch:crc ips:[0xc000d2bf20] mac:[10 88 10 217 0 19] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.19/23] and MAC: 0a:58:0a:d9:00:13 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.542197 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.542153 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] 
Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.542228 30089 pods.go:220] [openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] addLogicalPort took 18.541711ms, libovsdb time 1.871349ms 2026-01-20T10:56:53.543151530+00:00 stderr F I0120 10:56:53.542238 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 after 0 failed attempt(s) 2026-01-20T10:56:53.543151530+00:00 stderr P I0120 10:56:53.542242 30089 default_network_controller.go:699] Recording success event on pod openshift-au 2026-01-20T10:56:53.543177731+00:00 stderr F thentication-operator/authentication-operator-7cc7ff75d5-g9qv8 
2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542229 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542217 30089 port_cache.go:96] port-cache(openshift-multus_multus-admission-controller-6c7c885997-4hbbc): added port &{name:openshift-multus_multus-admission-controller-6c7c885997-4hbbc uuid:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6 logicalSwitch:crc ips:[0xc001832330] mac:[10 88 10 217 0 32] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.32/23] and MAC: 0a:58:0a:d9:00:20 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542259 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542270 30089 port_cache.go:96] port-cache(hostpath-provisioner_csi-hostpathplugin-hvm8g): added port &{name:hostpath-provisioner_csi-hostpathplugin-hvm8g uuid:52259988-af2b-4ee5-bbfe-801c4ebeb0ae logicalSwitch:crc ips:[0xc0014f2ff0] mac:[10 88 10 217 0 49] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.49/23] and MAC: 0a:58:0a:d9:00:31 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542281 30089 port_cache.go:96] port-cache(openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb): added port &{name:openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb 
uuid:be2fa59f-4cec-4742-a4bd-dcd0913d1422 logicalSwitch:crc ips:[0xc000dffa10] mac:[10 88 10 217 0 15] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.15/23] and MAC: 0a:58:0a:d9:00:0f 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542297 30089 port_cache.go:96] port-cache(openshift-marketplace_certified-operators-mpjb7): added port &{name:openshift-marketplace_certified-operators-mpjb7 uuid:740dc30b-dced-4651-920f-33387935c67c logicalSwitch:crc ips:[0xc00187fc50] mac:[10 88 10 217 0 33] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.33/23] and MAC: 0a:58:0a:d9:00:21 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542306 30089 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr): added port &{name:openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr uuid:2d98188d-6d49-48e7-8956-57a5c46efe26 logicalSwitch:crc ips:[0xc001666360] mac:[10 88 10 217 0 16] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.16/23] and MAC: 0a:58:0a:d9:00:10 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542294 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542314 30089 port_cache.go:96] port-cache(openshift-dns_dns-default-gbw49): added port &{name:openshift-dns_dns-default-gbw49 uuid:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213 logicalSwitch:crc ips:[0xc0008c8b70] mac:[10 88 10 217 0 31] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.31/23] and MAC: 0a:58:0a:d9:00:1f 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.535126 30089 
base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] creating logical port openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh for pod on switch crc 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542280 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542340 
30089 pods.go:220] [hostpath-provisioner/csi-hostpathplugin-hvm8g] addLogicalPort took 5.618699ms, libovsdb time 412.241µs 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542347 30089 obj_retry.go:379] Retry successful for *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g after 0 failed attempt(s) 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542351 30089 default_network_controller.go:699] Recording success event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542356 30089 pods.go:220] [openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] addLogicalPort took 6.700168ms, libovsdb time 1.352155ms 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542361 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb after 0 failed attempt(s) 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542364 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542370 30089 pods.go:220] [openshift-marketplace/certified-operators-mpjb7] addLogicalPort took 6.396828ms, libovsdb time 962.365µs 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542376 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/certified-operators-mpjb7 after 0 failed attempt(s) 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542379 30089 default_network_controller.go:699] Recording success event on pod openshift-marketplace/certified-operators-mpjb7 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542385 30089 pods.go:220] [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] addLogicalPort took 
6.04854ms, libovsdb time 473.252µs 2026-01-20T10:56:53.543177731+00:00 stderr F I0120 10:56:53.542391 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr after 0 failed attempt(s) 2026-01-20T10:56:53.543177731+00:00 stderr P I0120 10:56:53.542323 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa06 2026-01-20T10:56:53.543209091+00:00 stderr F 4f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: 
Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.542399 30089 pods.go:220] [openshift-dns/dns-default-gbw49] addLogicalPort took 6.517822ms, libovsdb time 557.464µs 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.542411 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns/dns-default-gbw49 after 0 failed attempt(s) 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.542415 30089 default_network_controller.go:699] Recording success event on pod openshift-dns/dns-default-gbw49 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.542335 30089 pods.go:220] [openshift-multus/multus-admission-controller-6c7c885997-4hbbc] addLogicalPort took 18.662993ms, libovsdb time 3.627166ms 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543003 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc after 0 failed attempt(s) 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543017 30089 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.542395 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.535385 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "e7870154-de6e-4216-81fb-b87e7502c412" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543109 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0b5d722a-1123-4935-9740-52a08d018bc9" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543121 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "e9127708-ccfd-4891-8a3a-f0cacb77e0f4" 
2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543134 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "a702c6d2-4dde-4077-ab8c-0f8df804bf7a" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543138 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "f728c15e-d8de-4a9a-a3ea-fdcead95cb91" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543143 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "5adb4a31-5991-4381-a1ea-f1b095a071ea" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543147 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "10603adc-d495-423c-9459-4caa405960bb" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543154 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543159 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543165 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "12e733dd-0939-4f1b-9cbb-13897e093787" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543170 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543176 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "1d5b65e7-a4c3-495a-a5b0-72caab7218fd" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543182 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" 2026-01-20T10:56:53.543209091+00:00 
stderr F I0120 10:56:53.543188 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "13045510-8717-4a71-ade4-be95a76440a7" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543193 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "f12a256b-7128-4680-8f54-8e40a3e56300" 2026-01-20T10:56:53.543209091+00:00 stderr F I0120 10:56:53.543199 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "530553aa-0a1d-423e-8a22-f5eb4bdbb883" 2026-01-20T10:56:53.543242122+00:00 stderr F I0120 10:56:53.535425 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv after 0 failed attempt(s) 2026-01-20T10:56:53.543242122+00:00 stderr F I0120 10:56:53.543205 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "120b38dc-8236-4fa6-a452-642b8ad738ee" 2026-01-20T10:56:53.543242122+00:00 stderr F I0120 10:56:53.543218 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "cf1a8966-f594-490a-9fbb-eec5bafd13d3" 2026-01-20T10:56:53.543242122+00:00 stderr F I0120 10:56:53.543218 30089 default_network_controller.go:699] Recording success event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2026-01-20T10:56:53.543242122+00:00 stderr F I0120 10:56:53.543224 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "e4a7de23-6134-4044-902a-0900dc04a501" 2026-01-20T10:56:53.543242122+00:00 stderr F I0120 10:56:53.543232 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "01feb2e0-a0f4-4573-8335-34e364e0ef40" 2026-01-20T10:56:53.543262273+00:00 stderr F I0120 10:56:53.543238 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "4f8aa612-9da0-4a2b-911e-6a1764a4e74e" 2026-01-20T10:56:53.543262273+00:00 stderr F I0120 10:56:53.543227 30089 
model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543262273+00:00 stderr F I0120 10:56:53.535500 30089 base_network_controller_pods.go:476] [default/openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] creating logical port openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 for pod on switch crc 2026-01-20T10:56:53.543360395+00:00 stderr F I0120 10:56:53.543303 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543414907+00:00 stderr F I0120 10:56:53.535707 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543469918+00:00 stderr F I0120 10:56:53.543433 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543487519+00:00 stderr F I0120 10:56:53.543457 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543543540+00:00 stderr F I0120 10:56:53.531791 30089 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-l92hr in node crc 2026-01-20T10:56:53.543543540+00:00 stderr F I0120 10:56:53.543509 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543564981+00:00 stderr F I0120 10:56:53.543537 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/node-ca-l92hr after 0 failed attempt(s) 2026-01-20T10:56:53.543564981+00:00 stderr F I0120 10:56:53.543551 30089 default_network_controller.go:699] Recording success event on pod openshift-image-registry/node-ca-l92hr 2026-01-20T10:56:53.543651653+00:00 stderr F I0120 10:56:53.543248 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "43ae1c37-047b-4ee2-9fee-41e337dd4ac8" 2026-01-20T10:56:53.543651653+00:00 stderr F I0120 10:56:53.543633 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76" 
2026-01-20T10:56:53.543651653+00:00 stderr F I0120 10:56:53.543641 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "b54e8941-2fc4-432a-9e51-39684df9089e" 2026-01-20T10:56:53.543651653+00:00 stderr F I0120 10:56:53.543647 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "df6e4f33-df74-4326-b096-9d3e45a8c55a" 2026-01-20T10:56:53.543856798+00:00 stderr F I0120 10:56:53.543652 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "9387c79a-cd5b-4d24-a558-6dbbdd89fe1e" 2026-01-20T10:56:53.543856798+00:00 stderr F I0120 10:56:53.543656 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "21d29937-debd-4407-b2b1-d1053cb0f342" 2026-01-20T10:56:53.543856798+00:00 stderr F I0120 10:56:53.543661 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0b5c38ff-1fa8-4219-994d-15776acd4a4d" 2026-01-20T10:56:53.543856798+00:00 stderr F I0120 10:56:53.543665 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "1a3e81c3-c292-4130-9436-f94062c91efd" 2026-01-20T10:56:53.543856798+00:00 stderr F I0120 10:56:53.543669 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "41e8708a-e40d-4d28-846b-c52eda4d1755" 2026-01-20T10:56:53.543856798+00:00 stderr F I0120 10:56:53.543674 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "45a8038e-e7f2-4d93-a6f5-7753aa54e63f" 2026-01-20T10:56:53.543856798+00:00 stderr F I0120 10:56:53.543678 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "34a48baf-1bee-4921-8bb2-9b7320e76f79" 2026-01-20T10:56:53.543856798+00:00 stderr F I0120 10:56:53.543683 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" 2026-01-20T10:56:53.543856798+00:00 
stderr F I0120 10:56:53.536937 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.543878829+00:00 stderr F I0120 10:56:53.531819 30089 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/community-operators-6m4w2 in node crc 2026-01-20T10:56:53.543933140+00:00 stderr F I0120 10:56:53.543903 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/community-operators-6m4w2] 
creating logical port openshift-marketplace_community-operators-6m4w2 for pod on switch crc 2026-01-20T10:56:53.543933140+00:00 stderr F I0120 10:56:53.543910 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544104635+00:00 stderr F I0120 10:56:53.543966 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544104635+00:00 stderr F I0120 10:56:53.531835 30089 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in node crc 2026-01-20T10:56:53.544104635+00:00 stderr F I0120 10:56:53.544046 30089 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] creating logical port openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf for pod on switch crc 2026-01-20T10:56:53.544104635+00:00 stderr F I0120 10:56:53.543995 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] 
Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544237389+00:00 stderr F I0120 10:56:53.544178 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544258080+00:00 stderr F I0120 10:56:53.544229 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{de883ce4-2f69-42d2-a4b3-8165b5ceb500}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544258080+00:00 stderr F I0120 10:56:53.544239 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544317991+00:00 stderr F I0120 10:56:53.544275 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:de883ce4-2f69-42d2-a4b3-8165b5ceb500}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544336742+00:00 stderr F I0120 10:56:53.544276 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544350232+00:00 stderr F I0120 10:56:53.544264 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: 
Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544369032+00:00 stderr F I0120 10:56:53.544341 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:de883ce4-2f69-42d2-a4b3-8165b5ceb500}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544412144+00:00 stderr F I0120 10:56:53.544370 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2026-01-20T10:56:53.544412144+00:00 stderr F I0120 10:56:53.531824 30089 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in node crc 2026-01-20T10:56:53.544475855+00:00 stderr F I0120 10:56:53.544432 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544475855+00:00 stderr F I0120 10:56:53.544449 30089 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] creating logical port openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 for pod on switch crc 2026-01-20T10:56:53.544493886+00:00 stderr F I0120 10:56:53.544453 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544506876+00:00 stderr F I0120 10:56:53.537657 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544625569+00:00 stderr F I0120 10:56:53.544547 30089 model_client.go:397] Mutate 
operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544625569+00:00 stderr F I0120 10:56:53.544605 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "59748b9b-c309-4712-aa85-bb38d71c4915" 2026-01-20T10:56:53.544644810+00:00 stderr F I0120 10:56:53.544626 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c085412c-b875-46c9-ae3e-e6b0d8067091" 2026-01-20T10:56:53.544644810+00:00 stderr F I0120 10:56:53.531905 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544697001+00:00 stderr F I0120 10:56:53.544653 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544711991+00:00 stderr F I0120 10:56:53.544669 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544782173+00:00 stderr F I0120 10:56:53.544738 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544828414+00:00 stderr F I0120 10:56:53.544774 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 
logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e60566c7-a4e3-4fd4-b079-e4c3376e8b73}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544828414+00:00 stderr F I0120 10:56:53.542326 30089 port_cache.go:96] port-cache(cert-manager_cert-manager-758df9885c-cq6zm): added port &{name:cert-manager_cert-manager-758df9885c-cq6zm uuid:c58d2e89-322d-46d7-9147-b9e591118d62 logicalSwitch:crc ips:[0xc0014f28a0] mac:[10 88 10 217 0 42] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.42/23] and MAC: 0a:58:0a:d9:00:2a 2026-01-20T10:56:53.544828414+00:00 stderr F I0120 10:56:53.544815 30089 port_cache.go:96] port-cache(openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc): added port &{name:openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc uuid:d2f291e9-b4fe-47a7-a644-298254d226c5 logicalSwitch:crc ips:[0xc0009fd170] mac:[10 88 10 217 0 23] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.23/23] and MAC: 0a:58:0a:d9:00:17 2026-01-20T10:56:53.544851445+00:00 stderr F I0120 10:56:53.544810 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544851445+00:00 stderr F I0120 10:56:53.542470 30089 pods.go:220] [openshift-multus/network-metrics-daemon-qdfr4] addLogicalPort took 18.173101ms, libovsdb time 583.836µs 2026-01-20T10:56:53.544851445+00:00 stderr F I0120 10:56:53.544840 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 after 0 failed attempt(s) 2026-01-20T10:56:53.544851445+00:00 stderr F I0120 10:56:53.544830 30089 port_cache.go:96] 
port-cache(openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm): added port &{name:openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm uuid:ad3d5728-34ed-421c-a749-1d7a957800a8 logicalSwitch:crc ips:[0xc00187f4a0] mac:[10 88 10 217 0 21] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.21/23] and MAC: 0a:58:0a:d9:00:15 2026-01-20T10:56:53.544869556+00:00 stderr F I0120 10:56:53.544849 30089 default_network_controller.go:699] Recording success event on pod openshift-multus/network-metrics-daemon-qdfr4 2026-01-20T10:56:53.544869556+00:00 stderr F I0120 10:56:53.544795 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.544869556+00:00 stderr F I0120 10:56:53.542478 30089 pods.go:220] [openshift-ingress-canary/ingress-canary-2vhcn] addLogicalPort took 18.629443ms, libovsdb time 2.108966ms 2026-01-20T10:56:53.544869556+00:00 stderr F I0120 10:56:53.544857 30089 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv): added port &{name:openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv uuid:2a5717ea-0a50-4ebb-b087-90e637274a33 logicalSwitch:crc ips:[0xc001623ef0] mac:[10 88 10 217 0 25] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.25/23] and MAC: 0a:58:0a:d9:00:19 2026-01-20T10:56:53.544885396+00:00 stderr F I0120 10:56:53.542482 30089 pods.go:220] [openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] addLogicalPort took 17.547464ms, libovsdb time 1.579111ms 2026-01-20T10:56:53.544885396+00:00 stderr F I0120 10:56:53.535283 30089 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-operators-2nxg8] creating logical port 
openshift-marketplace_redhat-operators-2nxg8 for pod on switch crc 2026-01-20T10:56:53.544885396+00:00 stderr F I0120 10:56:53.544873 30089 port_cache.go:96] port-cache(openshift-service-ca_service-ca-666f99b6f-kk8kg): added port &{name:openshift-service-ca_service-ca-666f99b6f-kk8kg uuid:9409cb25-8c46-46db-98ab-5eafe9669ef8 logicalSwitch:crc ips:[0xc000a328a0] mac:[10 88 10 217 0 40] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.40/23] and MAC: 0a:58:0a:d9:00:28 2026-01-20T10:56:53.544928747+00:00 stderr F I0120 10:56:53.544889 30089 port_cache.go:96] port-cache(openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b): added port &{name:openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b uuid:3e86699a-fa52-4a81-9386-60d37f3fa10c logicalSwitch:crc ips:[0xc001332360] mac:[10 88 10 217 0 72] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.72/23] and MAC: 0a:58:0a:d9:00:48 2026-01-20T10:56:53.544928747+00:00 stderr F I0120 10:56:53.544906 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "71af81a9-7d43-49b2-9287-c375900aa905" 2026-01-20T10:56:53.544928747+00:00 stderr F I0120 10:56:53.544912 30089 port_cache.go:96] port-cache(openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb): added port &{name:openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb uuid:805e2f41-6cb8-4ccf-9939-37cfb4fa5509 logicalSwitch:crc ips:[0xc000aa9b30] mac:[10 88 10 217 0 5] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.5/23] and MAC: 0a:58:0a:d9:00:05 2026-01-20T10:56:53.544928747+00:00 stderr F I0120 10:56:53.544920 30089 pods.go:220] [openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] addLogicalPort took 17.766361ms, libovsdb time 492.383µs 2026-01-20T10:56:53.544950698+00:00 stderr F I0120 10:56:53.544932 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm after 0 failed attempt(s) 2026-01-20T10:56:53.545175304+00:00 stderr F 
I0120 10:56:53.545122 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2026-01-20T10:56:53.545175304+00:00 stderr F I0120 10:56:53.544864 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn after 0 failed attempt(s) 2026-01-20T10:56:53.545175304+00:00 stderr F I0120 10:56:53.545168 30089 default_network_controller.go:699] Recording success event on pod openshift-ingress-canary/ingress-canary-2vhcn 2026-01-20T10:56:53.545202144+00:00 stderr F I0120 10:56:53.544923 30089 port_cache.go:96] port-cache(openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m): added port &{name:openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m uuid:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26 logicalSwitch:crc ips:[0xc00041a390] mac:[10 88 10 217 0 6] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.6/23] and MAC: 0a:58:0a:d9:00:06 2026-01-20T10:56:53.545202144+00:00 stderr F I0120 10:56:53.545187 30089 port_cache.go:96] port-cache(openshift-image-registry_image-registry-75b7bb6564-ln84v): added port &{name:openshift-image-registry_image-registry-75b7bb6564-ln84v uuid:d8d4d22e-ea0d-4ce1-bb5d-4e60842262ae logicalSwitch:crc ips:[0xc000b7c480] mac:[10 88 10 217 0 37] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.37/23] and MAC: 0a:58:0a:d9:00:25 2026-01-20T10:56:53.545246075+00:00 stderr F I0120 10:56:53.545200 30089 port_cache.go:96] port-cache(openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv): added port &{name:openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv uuid:82630d91-1647-4c0c-aa84-8f820bcf919e logicalSwitch:crc ips:[0xc00081f8f0] mac:[10 88 10 217 0 22] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.22/23] and MAC: 0a:58:0a:d9:00:16 2026-01-20T10:56:53.545246075+00:00 stderr F I0120 10:56:53.544831 30089 model_client.go:397] Mutate operations generated 
as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e60566c7-a4e3-4fd4-b079-e4c3376e8b73}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545246075+00:00 stderr F I0120 10:56:53.545226 30089 port_cache.go:96] port-cache(openshift-marketplace_redhat-marketplace-2mx7j): added port &{name:openshift-marketplace_redhat-marketplace-2mx7j uuid:34f81bc6-9eab-493e-85aa-3c1b2544e7d2 logicalSwitch:crc ips:[0xc0005e5740] mac:[10 88 10 217 0 30] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.30/23] and MAC: 0a:58:0a:d9:00:1e 2026-01-20T10:56:53.545246075+00:00 stderr F I0120 10:56:53.545211 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} options:{GoMap:map[iface-id-ver:afcd1056-dc0e-4c35-93bd-1c388cd2028e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {398e0e4f-2c60-4f66-97a2-6180b1379809}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545269276+00:00 stderr F I0120 10:56:53.545235 30089 port_cache.go:96] port-cache(openshift-kube-apiserver_installer-13-crc): added port &{name:openshift-kube-apiserver_installer-13-crc uuid:d8a3b108-0209-4107-a8a7-c85848e5a053 logicalSwitch:crc ips:[0xc000872780] mac:[10 88 10 217 0 38] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.38/23] and MAC: 0a:58:0a:d9:00:26 2026-01-20T10:56:53.545269276+00:00 stderr F I0120 10:56:53.545252 30089 port_cache.go:96] port-cache(openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs): added port &{name:openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs uuid:c2174bce-e1da-468b-aa60-b9409f80c104 logicalSwitch:crc ips:[0xc000b7cf30] 
mac:[10 88 10 217 0 88] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.88/23] and MAC: 0a:58:0a:d9:00:58 2026-01-20T10:56:53.545283136+00:00 stderr F I0120 10:56:53.545267 30089 port_cache.go:96] port-cache(openshift-controller-manager_controller-manager-778975cc4f-x5vcf): added port &{name:openshift-controller-manager_controller-manager-778975cc4f-x5vcf uuid:eda38bc9-7da5-4a6b-818c-4e1e8f85426d logicalSwitch:crc ips:[0xc00160c660] mac:[10 88 10 217 0 87] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.87/23] and MAC: 0a:58:0a:d9:00:57 2026-01-20T10:56:53.545283136+00:00 stderr F I0120 10:56:53.545275 30089 port_cache.go:96] port-cache(openshift-apiserver_apiserver-7fc54b8dd7-d2bhp): added port &{name:openshift-apiserver_apiserver-7fc54b8dd7-d2bhp uuid:005abe2f-f66d-42f8-945c-fbc80f820ed4 logicalSwitch:crc ips:[0xc000967140] mac:[10 88 10 217 0 82] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.82/23] and MAC: 0a:58:0a:d9:00:52 2026-01-20T10:56:53.545297077+00:00 stderr F I0120 10:56:53.545268 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:398e0e4f-2c60-4f66-97a2-6180b1379809}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545297077+00:00 stderr F I0120 10:56:53.545286 30089 port_cache.go:96] port-cache(openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw): added port &{name:openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw uuid:f8e99409-b28a-4d27-a8e5-267ea6a801cf logicalSwitch:crc ips:[0xc00187ede0] mac:[10 88 10 217 0 20] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.20/23] and MAC: 0a:58:0a:d9:00:14 2026-01-20T10:56:53.545310747+00:00 stderr F I0120 10:56:53.545295 30089 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-v54bt): added port 
&{name:openshift-network-diagnostics_network-check-target-v54bt uuid:c0f95133-023f-4bbd-8719-e29d2cfbb32d logicalSwitch:crc ips:[0xc001340fc0] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04 2026-01-20T10:56:53.545310747+00:00 stderr F I0120 10:56:53.545236 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {de883ce4-2f69-42d2-a4b3-8165b5ceb500}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:de883ce4-2f69-42d2-a4b3-8165b5ceb500}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:de883ce4-2f69-42d2-a4b3-8165b5ceb500}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e60566c7-a4e3-4fd4-b079-e4c3376e8b73}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e60566c7-a4e3-4fd4-b079-e4c3376e8b73}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545330058+00:00 stderr F I0120 
10:56:53.545277 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545330058+00:00 stderr F I0120 10:56:53.545311 30089 pods.go:220] [openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] addLogicalPort took 13.947949ms, libovsdb time 360.459µs 2026-01-20T10:56:53.545330058+00:00 stderr F I0120 10:56:53.545306 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545345368+00:00 stderr F I0120 10:56:53.545321 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:398e0e4f-2c60-4f66-97a2-6180b1379809}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545345368+00:00 stderr F I0120 10:56:53.545332 30089 pods.go:220] [openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] addLogicalPort took 13.19913ms, libovsdb time 1.273974ms 2026-01-20T10:56:53.545359918+00:00 stderr F I0120 10:56:53.545342 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv after 0 failed attempt(s) 2026-01-20T10:56:53.545359918+00:00 stderr F I0120 10:56:53.545347 30089 default_network_controller.go:699] Recording success event on pod 
openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2026-01-20T10:56:53.545359918+00:00 stderr F I0120 10:56:53.545355 30089 pods.go:220] [openshift-marketplace/redhat-marketplace-2mx7j] addLogicalPort took 20.472201ms, libovsdb time 1.476349ms 2026-01-20T10:56:53.545374279+00:00 stderr F I0120 10:56:53.545364 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/redhat-marketplace-2mx7j after 0 failed attempt(s) 2026-01-20T10:56:53.545374279+00:00 stderr F I0120 10:56:53.545369 30089 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-marketplace-2mx7j 2026-01-20T10:56:53.545388229+00:00 stderr F I0120 10:56:53.545354 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545388229+00:00 stderr F I0120 10:56:53.545378 30089 pods.go:220] [openshift-kube-apiserver/installer-13-crc] addLogicalPort took 11.183026ms, libovsdb time 433.591µs 2026-01-20T10:56:53.545406390+00:00 stderr F I0120 10:56:53.545369 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545406390+00:00 stderr F I0120 10:56:53.545390 30089 pods.go:220] [openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] addLogicalPort took 11.091183ms, libovsdb time 382.53µs 2026-01-20T10:56:53.545406390+00:00 stderr F I0120 10:56:53.545399 30089 obj_retry.go:379] Retry successful for 
*v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs after 0 failed attempt(s) 2026-01-20T10:56:53.545421310+00:00 stderr F I0120 10:56:53.545404 30089 default_network_controller.go:699] Recording success event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2026-01-20T10:56:53.545421310+00:00 stderr F I0120 10:56:53.545413 30089 pods.go:220] [openshift-controller-manager/controller-manager-778975cc4f-x5vcf] addLogicalPort took 10.424656ms, libovsdb time 549.875µs 2026-01-20T10:56:53.545435390+00:00 stderr F I0120 10:56:53.545420 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf after 0 failed attempt(s) 2026-01-20T10:56:53.545435390+00:00 stderr F I0120 10:56:53.545424 30089 default_network_controller.go:699] Recording success event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2026-01-20T10:56:53.545435390+00:00 stderr F I0120 10:56:53.535202 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s) 2026-01-20T10:56:53.545450651+00:00 stderr F I0120 10:56:53.545434 30089 default_network_controller.go:699] Recording success event on pod openshift-etcd/etcd-crc 2026-01-20T10:56:53.545450651+00:00 stderr F I0120 10:56:53.535348 30089 base_network_controller_pods.go:476] [default/cert-manager/cert-manager-cainjector-676dd9bd64-mggnx] creating logical port cert-manager_cert-manager-cainjector-676dd9bd64-mggnx for pod on switch crc 2026-01-20T10:56:53.545450651+00:00 stderr F I0120 10:56:53.545389 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column 
_uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545507442+00:00 stderr F I0120 10:56:53.545407 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545523093+00:00 stderr F I0120 10:56:53.542507 30089 pods.go:220] [openshift-dns-operator/dns-operator-75f687757b-nz2xb] addLogicalPort took 12.072409ms, libovsdb time 2.328691ms 2026-01-20T10:56:53.545536943+00:00 stderr F I0120 10:56:53.545519 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb after 0 failed attempt(s) 2026-01-20T10:56:53.545536943+00:00 stderr F I0120 10:56:53.545526 30089 default_network_controller.go:699] Recording success event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2026-01-20T10:56:53.545536943+00:00 stderr F I0120 10:56:53.544810 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545551333+00:00 stderr F 
I0120 10:56:53.545536 30089 gateway_shared_intf.go:2029] Setting OVN Masquerade route with source: 192.168.126.11 2026-01-20T10:56:53.545811820+00:00 stderr F I0120 10:56:53.545580 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545811820+00:00 stderr F I0120 10:56:53.545614 30089 ovs.go:159] Exec(48): /usr/sbin/ip route replace table 7 10.217.4.0/23 via 10.217.0.1 dev ovn-k8s-mp0 2026-01-20T10:56:53.545873392+00:00 stderr F I0120 10:56:53.545386 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver/installer-13-crc after 0 failed attempt(s) 2026-01-20T10:56:53.545873392+00:00 stderr F I0120 10:56:53.545850 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/installer-13-crc 2026-01-20T10:56:53.545873392+00:00 stderr F I0120 10:56:53.545835 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:c229f43c-9d3a-4848-a9da-d997459f440b requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6bd84c4c-190f-4483-8240-543e175b9038}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545890642+00:00 stderr F I0120 10:56:53.544856 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545890642+00:00 stderr F I0120 10:56:53.545851 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.34 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b0e16f9a-41a4-49a7-abee-d1893e65d16b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545904523+00:00 stderr F I0120 10:56:53.545614 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545921213+00:00 stderr F I0120 10:56:53.545898 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6bd84c4c-190f-4483-8240-543e175b9038}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545984585+00:00 stderr F I0120 10:56:53.545927 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:b0e16f9a-41a4-49a7-abee-d1893e65d16b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545984585+00:00 stderr F I0120 10:56:53.545322 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m after 0 failed attempt(s) 2026-01-20T10:56:53.545984585+00:00 stderr F I0120 10:56:53.545885 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.545984585+00:00 stderr F I0120 10:56:53.543511 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546051886+00:00 stderr F I0120 10:56:53.545959 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} options:{GoMap:map[iface-id-ver:afcd1056-dc0e-4c35-93bd-1c388cd2028e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {398e0e4f-2c60-4f66-97a2-6180b1379809}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports 
Mutator:insert Value:{GoSet:[{GoUUID:398e0e4f-2c60-4f66-97a2-6180b1379809}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:398e0e4f-2c60-4f66-97a2-6180b1379809}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.34 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b0e16f9a-41a4-49a7-abee-d1893e65d16b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:b0e16f9a-41a4-49a7-abee-d1893e65d16b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546051886+00:00 stderr F I0120 10:56:53.546034 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546111498+00:00 stderr F I0120 10:56:53.542495 30089 pods.go:220] [openshift-marketplace/marketplace-operator-8b455464d-nc8zc] addLogicalPort took 17.704888ms, libovsdb time 2.65294ms 2026-01-20T10:56:53.546111498+00:00 stderr F I0120 10:56:53.546081 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc after 0 failed attempt(s) 2026-01-20T10:56:53.546111498+00:00 stderr F I0120 10:56:53.546089 30089 default_network_controller.go:699] 
Recording success event on pod openshift-marketplace/marketplace-operator-8b455464d-nc8zc 2026-01-20T10:56:53.546111498+00:00 stderr F I0120 10:56:53.535302 30089 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] creating logical port openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 for pod on switch crc 2026-01-20T10:56:53.546111498+00:00 stderr F I0120 10:56:53.544956 30089 pods.go:220] [openshift-service-ca/service-ca-666f99b6f-kk8kg] addLogicalPort took 7.219432ms, libovsdb time 998.066µs 2026-01-20T10:56:53.546134429+00:00 stderr F I0120 10:56:53.546106 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg after 0 failed attempt(s) 2026-01-20T10:56:53.546134429+00:00 stderr F I0120 10:56:53.546114 30089 default_network_controller.go:699] Recording success event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2026-01-20T10:56:53.546134429+00:00 stderr F I0120 10:56:53.544967 30089 pods.go:220] [openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] addLogicalPort took 6.197244ms, libovsdb time 454.032µs 2026-01-20T10:56:53.546134429+00:00 stderr F I0120 10:56:53.546127 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb after 0 failed attempt(s) 2026-01-20T10:56:53.546149449+00:00 stderr F I0120 10:56:53.546133 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2026-01-20T10:56:53.546149449+00:00 stderr F I0120 10:56:53.544979 30089 pods.go:220] [cert-manager/cert-manager-758df9885c-cq6zm] addLogicalPort took 8.685389ms, libovsdb time 1.077699ms 2026-01-20T10:56:53.546149449+00:00 stderr F I0120 10:56:53.542554 30089 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} 
options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546164289+00:00 stderr F I0120 10:56:53.546153 30089 obj_retry.go:379] Retry successful for *v1.Pod cert-manager/cert-manager-758df9885c-cq6zm after 0 failed attempt(s) 2026-01-20T10:56:53.546164289+00:00 stderr F I0120 10:56:53.546159 30089 default_network_controller.go:699] Recording success event on pod cert-manager/cert-manager-758df9885c-cq6zm 2026-01-20T10:56:53.546183580+00:00 stderr F I0120 10:56:53.542474 30089 pods.go:220] [openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] addLogicalPort took 8.194027ms, libovsdb time 467.403µs 2026-01-20T10:56:53.546183580+00:00 stderr F I0120 10:56:53.546173 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg after 0 failed attempt(s) 2026-01-20T10:56:53.546183580+00:00 stderr F I0120 10:56:53.544874 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd after 0 failed attempt(s) 2026-01-20T10:56:53.546201370+00:00 stderr F I0120 10:56:53.546190 30089 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2026-01-20T10:56:53.546201370+00:00 stderr F I0120 10:56:53.545022 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "bd556935-a077-45df-ba3f-d42c39326ccd" 2026-01-20T10:56:53.546214791+00:00 stderr F I0120 10:56:53.546180 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546214791+00:00 stderr F I0120 10:56:53.546205 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "d0f40333-c860-4c04-8058-a0bf572dcf12" 2026-01-20T10:56:53.546261752+00:00 stderr F I0120 10:56:53.545304 30089 port_cache.go:96] port-cache(openshift-console_console-644bb77b49-5x5xk): added port &{name:openshift-console_console-644bb77b49-5x5xk uuid:9a79516e-7a72-4d42-b0ab-87a99aa064f3 logicalSwitch:crc ips:[0xc00080f8f0] mac:[10 88 10 217 0 73] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.73/23] and MAC: 0a:58:0a:d9:00:49 2026-01-20T10:56:53.546261752+00:00 stderr F I0120 10:56:53.546237 30089 pods.go:220] [openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] addLogicalPort took 14.841482ms, libovsdb time 475.222µs 2026-01-20T10:56:53.546261752+00:00 stderr F I0120 10:56:53.546248 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw after 0 failed attempt(s) 2026-01-20T10:56:53.546261752+00:00 stderr F I0120 10:56:53.546254 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2026-01-20T10:56:53.546278662+00:00 stderr F I0120 10:56:53.546246 30089 port_cache.go:96] port-cache(openshift-console-operator_console-conversion-webhook-595f9969b-l6z49): added port &{name:openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 uuid:6056bee0-572a-4de7-bb24-40ca6a66be30 logicalSwitch:crc ips:[0xc00099a4b0] mac:[10 88 10 217 0 61] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.61/23] and MAC: 0a:58:0a:d9:00:3d 2026-01-20T10:56:53.546278662+00:00 stderr F I0120 10:56:53.546270 30089 pods.go:220] 
[openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] addLogicalPort took 12.041018ms, libovsdb time 1.219423ms 2026-01-20T10:56:53.546278662+00:00 stderr F I0120 10:56:53.546269 30089 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh): added port &{name:openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh uuid:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749 logicalSwitch:crc ips:[0xc000dd3aa0] mac:[10 88 10 217 0 14] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.14/23] and MAC: 0a:58:0a:d9:00:0e 2026-01-20T10:56:53.546293483+00:00 stderr F I0120 10:56:53.546269 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546293483+00:00 stderr F I0120 10:56:53.546279 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp after 0 failed attempt(s) 2026-01-20T10:56:53.546293483+00:00 stderr F I0120 10:56:53.546287 30089 default_network_controller.go:699] Recording success event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2026-01-20T10:56:53.546312683+00:00 stderr F I0120 10:56:53.546283 30089 port_cache.go:96] port-cache(openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7): added port &{name:openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 uuid:6e77fb5d-c04f-467c-9883-8cb59d819d86 logicalSwitch:crc ips:[0xc000dd2f00] mac:[10 88 10 217 0 12] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.12/23] and MAC: 0a:58:0a:d9:00:0c 2026-01-20T10:56:53.546325774+00:00 stderr F I0120 10:56:53.546306 30089 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz): added 
port &{name:openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz uuid:69155615-9d93-4b72-bddd-739a6e731251 logicalSwitch:crc ips:[0xc001832780] mac:[10 88 10 217 0 43] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.43/23] and MAC: 0a:58:0a:d9:00:2b 2026-01-20T10:56:53.546338684+00:00 stderr F I0120 10:56:53.546326 30089 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7): added port &{name:openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 uuid:3644fddd-ceae-4a64-8b00-dadf73515945 logicalSwitch:crc ips:[0xc000d777d0] mac:[10 88 10 217 0 64] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.64/23] and MAC: 0a:58:0a:d9:00:40 2026-01-20T10:56:53.546351714+00:00 stderr F I0120 10:56:53.546336 30089 pods.go:220] [openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] addLogicalPort took 1.91392ms, libovsdb time 642.137µs 2026-01-20T10:56:53.546351714+00:00 stderr F I0120 10:56:53.546339 30089 port_cache.go:96] port-cache(openshift-console_downloads-65476884b9-9wcvx): added port &{name:openshift-console_downloads-65476884b9-9wcvx uuid:745a40f7-2acc-4e2b-a087-861e0ea97ffe logicalSwitch:crc ips:[0xc001623590] mac:[10 88 10 217 0 66] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.66/23] and MAC: 0a:58:0a:d9:00:42 2026-01-20T10:56:53.546369285+00:00 stderr F I0120 10:56:53.546355 30089 pods.go:220] [openshift-console/downloads-65476884b9-9wcvx] addLogicalPort took 9.08315ms, libovsdb time 719.059µs 2026-01-20T10:56:53.546369285+00:00 stderr F I0120 10:56:53.546362 30089 pods.go:220] [openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] addLogicalPort took 8.976247ms, libovsdb time 603.096µs 2026-01-20T10:56:53.546383005+00:00 stderr F I0120 10:56:53.546366 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-console/downloads-65476884b9-9wcvx after 0 failed attempt(s) 2026-01-20T10:56:53.546383005+00:00 stderr F I0120 10:56:53.546353 30089 model_client.go:381] 
Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546383005+00:00 stderr F I0120 10:56:53.546347 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 after 0 failed attempt(s) 2026-01-20T10:56:53.546402326+00:00 stderr F I0120 10:56:53.546382 30089 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2026-01-20T10:56:53.546402326+00:00 stderr F I0120 10:56:53.546383 30089 pods.go:220] [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] addLogicalPort took 10.862747ms, libovsdb time 638.386µs 2026-01-20T10:56:53.546402326+00:00 stderr F I0120 10:56:53.546370 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 after 0 failed attempt(s) 2026-01-20T10:56:53.546402326+00:00 stderr F I0120 10:56:53.546392 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 after 0 failed attempt(s) 2026-01-20T10:56:53.546402326+00:00 stderr F I0120 10:56:53.546398 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2026-01-20T10:56:53.546417896+00:00 stderr F I0120 10:56:53.546394 30089 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 
2026-01-20T10:56:53.546417896+00:00 stderr F I0120 10:56:53.546399 30089 port_cache.go:96] port-cache(openshift-marketplace_community-operators-6m4w2): added port &{name:openshift-marketplace_community-operators-6m4w2 uuid:de883ce4-2f69-42d2-a4b3-8165b5ceb500 logicalSwitch:crc ips:[0xc0015073b0] mac:[10 88 10 217 0 35] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.35/23] and MAC: 0a:58:0a:d9:00:23 2026-01-20T10:56:53.546417896+00:00 stderr F I0120 10:56:53.546401 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546432526+00:00 stderr F I0120 10:56:53.546415 30089 pods.go:220] [openshift-marketplace/community-operators-6m4w2] addLogicalPort took 2.520357ms, libovsdb time 810.191µs 2026-01-20T10:56:53.546432526+00:00 stderr F I0120 10:56:53.546418 30089 pods.go:220] [openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] addLogicalPort took 22.551947ms, libovsdb time 13.111887ms 2026-01-20T10:56:53.546432526+00:00 stderr F I0120 10:56:53.546422 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/community-operators-6m4w2 after 0 failed attempt(s) 2026-01-20T10:56:53.546432526+00:00 stderr F I0120 10:56:53.546426 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz after 0 failed attempt(s) 2026-01-20T10:56:53.546447827+00:00 stderr F I0120 10:56:53.546432 30089 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2026-01-20T10:56:53.546447827+00:00 stderr F I0120 10:56:53.546428 30089 default_network_controller.go:699] Recording success event on pod 
openshift-marketplace/community-operators-6m4w2 2026-01-20T10:56:53.546447827+00:00 stderr F I0120 10:56:53.545327 30089 pods.go:220] [openshift-image-registry/image-registry-75b7bb6564-ln84v] addLogicalPort took 6.47564ms, libovsdb time 570.195µs 2026-01-20T10:56:53.546461837+00:00 stderr F I0120 10:56:53.546450 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/image-registry-75b7bb6564-ln84v after 0 failed attempt(s) 2026-01-20T10:56:53.546461837+00:00 stderr F I0120 10:56:53.546456 30089 default_network_controller.go:699] Recording success event on pod openshift-image-registry/image-registry-75b7bb6564-ln84v 2026-01-20T10:56:53.546478628+00:00 stderr F I0120 10:56:53.545954 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6bd84c4c-190f-4483-8240-543e175b9038}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546478628+00:00 stderr F I0120 10:56:53.546455 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546478628+00:00 stderr F I0120 10:56:53.544962 30089 pods.go:220] [openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] addLogicalPort took 20.681978ms, libovsdb time 859.653µs 2026-01-20T10:56:53.546492998+00:00 stderr F I0120 10:56:53.546479 30089 pods.go:220] [openshift-network-diagnostics/network-check-target-v54bt] addLogicalPort took 11.37642ms, libovsdb time 474.002µs 2026-01-20T10:56:53.546492998+00:00 stderr F I0120 10:56:53.546488 30089 obj_retry.go:379] Retry successful for 
*v1.Pod openshift-network-diagnostics/network-check-target-v54bt after 0 failed attempt(s) 2026-01-20T10:56:53.546507188+00:00 stderr F I0120 10:56:53.546486 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b after 0 failed attempt(s) 2026-01-20T10:56:53.546507188+00:00 stderr F I0120 10:56:53.546494 30089 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-target-v54bt 2026-01-20T10:56:53.546507188+00:00 stderr F I0120 10:56:53.546481 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2026-01-20T10:56:53.546520609+00:00 stderr F I0120 10:56:53.546372 30089 default_network_controller.go:699] Recording success event on pod openshift-console/downloads-65476884b9-9wcvx 2026-01-20T10:56:53.546520609+00:00 stderr F I0120 10:56:53.545077 30089 pods.go:220] [openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] addLogicalPort took 7.280173ms, libovsdb time 449.323µs 2026-01-20T10:56:53.546533779+00:00 stderr F I0120 10:56:53.546519 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc after 0 failed attempt(s) 2026-01-20T10:56:53.546546159+00:00 stderr F I0120 10:56:53.546533 30089 default_network_controller.go:699] Recording success event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2026-01-20T10:56:53.546546159+00:00 stderr F I0120 10:56:53.546532 30089 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 11 Dst: 169.254.169.1/32 Src: 192.168.126.11 Gw: Flags: [] Table: 0 Realm: 0} 2026-01-20T10:56:53.546546159+00:00 stderr F I0120 
10:56:53.546433 30089 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf): added port &{name:openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf uuid:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c logicalSwitch:crc ips:[0xc001441b00] mac:[10 88 10 217 0 11] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.11/23] and MAC: 0a:58:0a:d9:00:0b 2026-01-20T10:56:53.546560300+00:00 stderr F I0120 10:56:53.546550 30089 pods.go:220] [openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] addLogicalPort took 2.529207ms, libovsdb time 815.222µs 2026-01-20T10:56:53.546560300+00:00 stderr F I0120 10:56:53.546178 30089 default_network_controller.go:699] Recording success event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2026-01-20T10:56:53.546578450+00:00 stderr F I0120 10:56:53.546558 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf after 0 failed attempt(s) 2026-01-20T10:56:53.546578450+00:00 stderr F I0120 10:56:53.546567 30089 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2026-01-20T10:56:53.546578450+00:00 stderr F I0120 10:56:53.546226 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd" 2026-01-20T10:56:53.546578450+00:00 stderr F I0120 10:56:53.546376 30089 pods.go:220] [openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] addLogicalPort took 10.339363ms, libovsdb time 7.687893ms 2026-01-20T10:56:53.546632922+00:00 stderr F I0120 10:56:53.546589 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh after 0 failed attempt(s) 2026-01-20T10:56:53.546632922+00:00 stderr F I0120 10:56:53.546605 30089 default_network_controller.go:699] Recording success event on pod 
openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh
2026-01-20T10:56:53.546632922+00:00 stderr F I0120 10:56:53.546500 30089 default_network_controller.go:699] Recording success event on pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b
2026-01-20T10:56:53.546632922+00:00 stderr F I0120 10:56:53.546602 30089 port_cache.go:96] port-cache(openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz): added port &{name:openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz uuid:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56 logicalSwitch:crc ips:[0xc000d05c80] mac:[10 88 10 217 0 10] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.10/23] and MAC: 0a:58:0a:d9:00:0a
2026-01-20T10:56:53.546632922+00:00 stderr F I0120 10:56:53.546588 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "8a5ae51d-d173-4531-8975-f164c975ce1f"
2026-01-20T10:56:53.546649462+00:00 stderr F I0120 10:56:53.546630 30089 pods.go:220] [openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] addLogicalPort took 11.020461ms, libovsdb time 586.585µs
2026-01-20T10:56:53.546649462+00:00 stderr F I0120 10:56:53.546635 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "6268b7fe-8910-4505-b404-6f1df638105c"
2026-01-20T10:56:53.546649462+00:00 stderr F I0120 10:56:53.546642 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz after 0 failed attempt(s)
2026-01-20T10:56:53.546663522+00:00 stderr F I0120 10:56:53.546644 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "6d67253e-2acd-4bc1-8185-793587da4f17"
2026-01-20T10:56:53.546663522+00:00 stderr F I0120 10:56:53.546649 30089 default_network_controller.go:699] Recording success event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz
2026-01-20T10:56:53.546663522+00:00 stderr F I0120 10:56:53.542498 30089 pods.go:220] [openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] addLogicalPort took 12.568193ms, libovsdb time 3.440841ms
2026-01-20T10:56:53.546663522+00:00 stderr F I0120 10:56:53.544943 30089 pods.go:220] [openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] addLogicalPort took 7.087627ms, libovsdb time 523.065µs
2026-01-20T10:56:53.546677993+00:00 stderr F I0120 10:56:53.546663 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv after 0 failed attempt(s)
2026-01-20T10:56:53.546677993+00:00 stderr F I0120 10:56:53.546650 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.546696413+00:00 stderr F I0120 10:56:53.546680 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg after 0 failed attempt(s)
2026-01-20T10:56:53.546696413+00:00 stderr F I0120 10:56:53.546688 30089 default_network_controller.go:699] Recording success event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg
2026-01-20T10:56:53.546710264+00:00 stderr F I0120 10:56:53.535214 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.546710264+00:00 stderr F I0120 10:56:53.546694 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.546723794+00:00 stderr F I0120 10:56:53.546691 30089 port_cache.go:96] port-cache(openshift-marketplace_redhat-operators-2nxg8): added port &{name:openshift-marketplace_redhat-operators-2nxg8 uuid:398e0e4f-2c60-4f66-97a2-6180b1379809 logicalSwitch:crc ips:[0xc001b1eb70] mac:[10 88 10 217 0 34] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.34/23] and MAC: 0a:58:0a:d9:00:22
2026-01-20T10:56:53.546723794+00:00 stderr F I0120 10:56:53.546714 30089 pods.go:220] [openshift-marketplace/redhat-operators-2nxg8] addLogicalPort took 11.438483ms, libovsdb time 727.499µs
2026-01-20T10:56:53.546736734+00:00 stderr F I0120 10:56:53.546722 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/redhat-operators-2nxg8 after 0 failed attempt(s)
2026-01-20T10:56:53.546736734+00:00 stderr F I0120 10:56:53.546728 30089 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-operators-2nxg8
2026-01-20T10:56:53.546749965+00:00 stderr F I0120 10:56:53.546738 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "afcd1056-dc0e-4c35-93bd-1c388cd2028e"
2026-01-20T10:56:53.546767235+00:00 stderr F I0120 10:56:53.546709 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.546865878+00:00 stderr F I0120 10:56:53.545960 30089 default_network_controller.go:699] Recording success event on pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m
2026-01-20T10:56:53.546865878+00:00 stderr F I0120 10:56:53.546669 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv
2026-01-20T10:56:53.546881288+00:00 stderr F I0120 10:56:53.546528 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.546975422+00:00 stderr F I0120 10:56:53.546924 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6b80fb2-3ad1-43c9-82f1-a1264c3144f4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.546975422+00:00 stderr F I0120 10:56:53.546950 30089 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 11 Dst: 169.254.169.1/32 Src: 192.168.126.11 Gw: Flags: [] Table: 254 Realm: 0}
2026-01-20T10:56:53.546975422+00:00 stderr F I0120 10:56:53.546354 30089 pods.go:220] [openshift-console/console-644bb77b49-5x5xk] addLogicalPort took 11.575476ms, libovsdb time 513.444µs
2026-01-20T10:56:53.546975422+00:00 stderr F I0120 10:56:53.546878 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.546999732+00:00 stderr F I0120 10:56:53.546972 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-console/console-644bb77b49-5x5xk after 0 failed attempt(s)
2026-01-20T10:56:53.546999732+00:00 stderr F I0120 10:56:53.546980 30089 default_network_controller.go:699] Recording success event on pod openshift-console/console-644bb77b49-5x5xk
2026-01-20T10:56:53.546999732+00:00 stderr F I0120 10:56:53.546979 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6b80fb2-3ad1-43c9-82f1-a1264c3144f4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.547102335+00:00 stderr F I0120 10:56:53.547001 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:c229f43c-9d3a-4848-a9da-d997459f440b requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6bd84c4c-190f-4483-8240-543e175b9038}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6bd84c4c-190f-4483-8240-543e175b9038}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6bd84c4c-190f-4483-8240-543e175b9038}]}}] Timeout: Where:[where column _uuid == {d886fd31-9d5f-4fce-925a-2065ea4614ce}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6b80fb2-3ad1-43c9-82f1-a1264c3144f4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6b80fb2-3ad1-43c9-82f1-a1264c3144f4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.547229878+00:00 stderr F I0120 10:56:53.546942 30089 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.547250489+00:00 stderr F I0120 10:56:53.547213 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "297ab9b6-2186-4d5b-a952-2bfd59af63c4"
2026-01-20T10:56:53.547299850+00:00 stderr F I0120 10:56:53.547247 30089 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.547334551+00:00 stderr F I0120 10:56:53.547195 30089 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh): added port &{name:openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh uuid:9aafbb57-c78d-409c-9ff4-1561d4387b2d logicalSwitch:crc ips:[0xc000d76d20] mac:[10 88 10 217 0 63] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.63/23] and MAC: 0a:58:0a:d9:00:3f
2026-01-20T10:56:53.547334551+00:00 stderr F I0120 10:56:53.547274 30089 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.220 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2026-01-20T10:56:53.547334551+00:00 stderr F I0120 10:56:53.547324 30089 pods.go:220] [openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] addLogicalPort took 12.198904ms, libovsdb time 471.993µs
2026-01-20T10:56:53.547349131+00:00 stderr F I0120 10:56:53.547339 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh after 0 failed attempt(s)
2026-01-20T10:56:53.547360862+00:00 stderr F I0120 10:56:53.547346 30089 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh
2026-01-20T10:56:53.547408743+00:00 stderr F I0120 10:56:53.547375 30089 port_cache.go:96] port-cache(openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd): added port &{name:openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd uuid:8b4158c3-d859-42e6-8259-b16ce1cbd284 logicalSwitch:crc ips:[0xc001582330] mac:[10 88 10 217 0 39] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.39/23] and MAC: 0a:58:0a:d9:00:27
2026-01-20T10:56:53.547408743+00:00 stderr F I0120 10:56:53.547394 30089 pods.go:220] [openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] addLogicalPort took 23.099441ms, libovsdb time 12.151182ms
2026-01-20T10:56:53.547470294+00:00 stderr F I0120 10:56:53.547445 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd after 0 failed attempt(s)
2026-01-20T10:56:53.547470294+00:00 stderr F I0120 10:56:53.547455 30089 default_network_controller.go:699] Recording success event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd
2026-01-20T10:56:53.547470294+00:00 stderr F I0120 10:56:53.547465 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "5bacb25d-97b6-4491-8fb4-99feae1d802a"
2026-01-20T10:56:53.547790593+00:00 stderr F I0120 10:56:53.547742 30089 port_cache.go:96] port-cache(openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7): added port &{name:openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 uuid:f5ecfd58-e886-4b2c-9939-022e7f14b7a7 logicalSwitch:crc ips:[0xc0015069f0] mac:[10 88 10 217 0 7] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.7/23] and MAC: 0a:58:0a:d9:00:07
2026-01-20T10:56:53.547790593+00:00 stderr F I0120 10:56:53.547766 30089 pods.go:220] [openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] addLogicalPort took 12.280545ms, libovsdb time 854.743µs
2026-01-20T10:56:53.547790593+00:00 stderr F I0120 10:56:53.547783 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 after 0 failed attempt(s)
2026-01-20T10:56:53.547803663+00:00 stderr F I0120 10:56:53.547794 30089 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7
2026-01-20T10:56:53.547813993+00:00 stderr F I0120 10:56:53.547806 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "ed024e5d-8fc2-4c22-803d-73f3c9795f19"
2026-01-20T10:56:53.547887135+00:00 stderr F I0120 10:56:53.547845 30089 port_cache.go:96] port-cache(cert-manager_cert-manager-cainjector-676dd9bd64-mggnx): added port &{name:cert-manager_cert-manager-cainjector-676dd9bd64-mggnx uuid:6bd84c4c-190f-4483-8240-543e175b9038 logicalSwitch:crc ips:[0xc001c16330] mac:[10 88 10 217 0 41] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.41/23] and MAC: 0a:58:0a:d9:00:29
2026-01-20T10:56:53.547887135+00:00 stderr F I0120 10:56:53.547868 30089 pods.go:220] [cert-manager/cert-manager-cainjector-676dd9bd64-mggnx] addLogicalPort took 12.520272ms, libovsdb time 838.292µs
2026-01-20T10:56:53.547887135+00:00 stderr F I0120 10:56:53.547875 30089 obj_retry.go:379] Retry successful for *v1.Pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx after 0 failed attempt(s)
2026-01-20T10:56:53.547887135+00:00 stderr F I0120 10:56:53.547879 30089 default_network_controller.go:699] Recording success event on pod cert-manager/cert-manager-cainjector-676dd9bd64-mggnx
2026-01-20T10:56:53.547900066+00:00 stderr F I0120 10:56:53.547891 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c229f43c-9d3a-4848-a9da-d997459f440b"
2026-01-20T10:56:53.547910486+00:00 stderr F I0120 10:56:53.547898 30089 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
2026-01-20T10:56:53.547910486+00:00 stderr F I0120 10:56:53.547899 30089 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2): added port &{name:openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 uuid:f24db1f4-18a4-418a-9c99-1d94ebfba0da logicalSwitch:crc ips:[0xc001aacf00] mac:[10 88 10 217 0 24] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.24/23] and MAC: 0a:58:0a:d9:00:18
2026-01-20T10:56:53.547926416+00:00 stderr F I0120 10:56:53.547912 30089 pods.go:220] [openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] addLogicalPort took 12.614814ms, libovsdb time 615.866µs
2026-01-20T10:56:53.547926416+00:00 stderr F I0120 10:56:53.547921 30089 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 after 0 failed attempt(s)
2026-01-20T10:56:53.547937167+00:00 stderr F I0120 10:56:53.547927 30089 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2
2026-01-20T10:56:53.547947437+00:00 stderr F I0120 10:56:53.547939 30089 obj_retry.go:413] Function iterateRetryResources for *v1.Pod ended (in 24.55662ms)
2026-01-20T10:56:53.548248625+00:00 stderr F I0120 10:56:53.548179 30089 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.4.0/23 Src: Gw: 10.217.0.1 Flags: [] Table: 7 Realm: 0}"
2026-01-20T10:56:53.548739298+00:00 stderr F I0120 10:56:53.548700 30089 ovs.go:162] Exec(48): stdout: ""
2026-01-20T10:56:53.548739298+00:00 stderr F I0120 10:56:53.548712 30089 ovs.go:163] Exec(48): stderr: ""
2026-01-20T10:56:53.548739298+00:00 stderr F I0120 10:56:53.548729 30089 gateway_shared_intf.go:1674] Successfully added route into custom routing table: 7
2026-01-20T10:56:53.548752588+00:00 stderr F I0120 10:56:53.548740 30089 ovs.go:159] Exec(49): /usr/sbin/ip -4 rule
2026-01-20T10:56:53.550790252+00:00 stderr F I0120 10:56:53.550742 30089 ovs.go:162] Exec(49): stdout: "0:\tfrom all lookup local\n30:\tfrom all fwmark 0x1745ec lookup 7\n32766:\tfrom all lookup main\n32767:\tfrom all lookup default\n"
2026-01-20T10:56:53.550790252+00:00 stderr F I0120 10:56:53.550763 30089 ovs.go:163] Exec(49): stderr: ""
2026-01-20T10:56:53.550790252+00:00 stderr F I0120 10:56:53.550773 30089 ovs.go:159] Exec(50): /usr/sbin/sysctl -w net.ipv4.conf.ovn-k8s-mp0.rp_filter=2
2026-01-20T10:56:53.551702276+00:00 stderr F I0120 10:56:53.551658 30089 ovs.go:162] Exec(50): stdout: "net.ipv4.conf.ovn-k8s-mp0.rp_filter = 2\n"
2026-01-20T10:56:53.551702276+00:00 stderr F I0120 10:56:53.551673 30089 ovs.go:163] Exec(50): stderr: ""
2026-01-20T10:56:53.551702276+00:00 stderr F I0120 10:56:53.551682 30089 ovs.go:159] Exec(51): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface patch-br-ex_crc-to-br-int ofport
2026-01-20T10:56:53.558037733+00:00 stderr F I0120 10:56:53.557991 30089 ovs.go:162] Exec(51): stdout: "2\n"
2026-01-20T10:56:53.558037733+00:00 stderr F I0120 10:56:53.558006 30089 ovs.go:163] Exec(51): stderr: ""
2026-01-20T10:56:53.558037733+00:00 stderr F I0120 10:56:53.558014 30089 ovs.go:159] Exec(52): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ens3 ofport
2026-01-20T10:56:53.563540349+00:00 stderr F I0120 10:56:53.563460 30089 ovs.go:162] Exec(52): stdout: "1\n"
2026-01-20T10:56:53.563540349+00:00 stderr F I0120 10:56:53.563519 30089 ovs.go:163] Exec(52): stderr: ""
2026-01-20T10:56:53.567199086+00:00 stderr F I0120 10:56:53.567123 30089 gateway_iptables.go:487] Chain: "OVN-KUBE-ITP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-ITP --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.569317691+00:00 stderr F I0120 10:56:53.569262 30089 gateway_iptables.go:487] Chain: "OVN-KUBE-ITP" in table: "mangle" already exists, skipping creation: running [/usr/sbin/iptables -t mangle -N OVN-KUBE-ITP --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.571530300+00:00 stderr F I0120 10:56:53.571476 30089 gateway_iptables.go:487] Chain: "OVN-KUBE-EGRESS-SVC" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-EGRESS-SVC --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.573567344+00:00 stderr F I0120 10:56:53.573500 30089 gateway_iptables.go:487] Chain: "OVN-KUBE-NODEPORT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-NODEPORT --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.575544477+00:00 stderr F I0120 10:56:53.575505 30089 gateway_iptables.go:487] Chain: "OVN-KUBE-EXTERNALIP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-EXTERNALIP --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.577388445+00:00 stderr F I0120 10:56:53.577344 30089 gateway_iptables.go:487] Chain: "OVN-KUBE-ETP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-ETP --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.577401225+00:00 stderr F I0120 10:56:53.577387 30089 iptables.go:108] Creating table: mangle chain: OUTPUT
2026-01-20T10:56:53.579260615+00:00 stderr F I0120 10:56:53.579213 30089 iptables.go:110] Chain: "OUTPUT" in table: "mangle" already exists, skipping creation: running [/usr/sbin/iptables -t mangle -N OUTPUT --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.581105724+00:00 stderr F I0120 10:56:53.581069 30089 iptables.go:108] Creating table: nat chain: OUTPUT
2026-01-20T10:56:53.582914021+00:00 stderr F I0120 10:56:53.582866 30089 iptables.go:110] Chain: "OUTPUT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OUTPUT --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.585433058+00:00 stderr F I0120 10:56:53.585400 30089 iptables.go:108] Creating table: nat chain: POSTROUTING
2026-01-20T10:56:53.588243432+00:00 stderr F I0120 10:56:53.588213 30089 iptables.go:110] Chain: "POSTROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N POSTROUTING --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.590607085+00:00 stderr F I0120 10:56:53.590578 30089 iptables.go:108] Creating table: nat chain: PREROUTING
2026-01-20T10:56:53.592517696+00:00 stderr F I0120 10:56:53.592485 30089 iptables.go:110] Chain: "PREROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N PREROUTING --wait]: exit status 1: iptables: Chain already exists.
2026-01-20T10:56:53.602474568+00:00 stderr F I0120 10:56:53.602443 30089 gateway_shared_intf.go:2124] Ensuring IP Neighbor entry for: 169.254.169.1
2026-01-20T10:56:53.602557741+00:00 stderr F I0120 10:56:53.602533 30089 gateway_shared_intf.go:2124] Ensuring IP Neighbor entry for: 169.254.169.4
2026-01-20T10:56:53.602645193+00:00 stderr F I0120 10:56:53.602591 30089 obj_retry_gateway.go:28] [newRetryFrameworkNodeWithParameters] g.watchFactory=&{10 0xc0000ff7a0 0xc0000ff810 0xc0000ff8f0 0xc0000ff960 0xc0000ff9d0 0xc0000ffa40 0xc00012e000 0xc0000ffab0 0xc0000ffb20 map[0x23d45a0:0xc0003a60e0 0x23d4ae0:0xc00026ee70 0x23d4d80:0xc0003a6000 0x23d5020:0xc0003a61c0 0x23d52c0:0xc0003a6230 0x23d5aa0:0xc0003a62a0 0x23f31a0:0xc00026eb60 0x23f4020:0xc00026ebd0 0x23f4760:0xc00026ed90 0x23f5980:0xc00026ed20 0x23f7680:0xc00026ee00 0x23f8160:0xc00026ec40] 0xc000152de0}
2026-01-20T10:56:53.602677054+00:00 stderr F I0120 10:56:53.602659 30089 gateway.go:143] Starting gateway service sync
2026-01-20T10:56:53.603277370+00:00 stderr F I0120 10:56:53.603252 30089 openflow_manager.go:85] Gateway OpenFlow sync requested
2026-01-20T10:56:53.603277370+00:00 stderr F I0120 10:56:53.603263 30089 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-ITP
2026-01-20T10:56:53.607723557+00:00 stderr F I0120 10:56:53.607676 30089 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-EGRESS-SVC
2026-01-20T10:56:53.610192263+00:00 stderr F I0120 10:56:53.610164 30089 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-NODEPORT
2026-01-20T10:56:53.613447149+00:00 stderr F I0120 10:56:53.613394 30089 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-EXTERNALIP
2026-01-20T10:56:53.616728985+00:00 stderr F I0120 10:56:53.616688 30089 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-ETP
2026-01-20T10:56:53.618888153+00:00 stderr F I0120 10:56:53.618845 30089 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-SNAT-MGMTPORT
2026-01-20T10:56:53.621260266+00:00 stderr F I0120 10:56:53.621235 30089 gateway_iptables.go:544] Recreating iptables rules for table: mangle, chain: OVN-KUBE-ITP
2026-01-20T10:56:53.623591127+00:00 stderr F I0120 10:56:53.623552 30089 gateway.go:160] Gateway service sync done. Time taken: 20.875812ms
2026-01-20T10:56:53.623608068+00:00 stderr F I0120 10:56:53.623598 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-samples-operator/metrics
2026-01-20T10:56:53.623637099+00:00 stderr F I0120 10:56:53.623618 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-samples-operator/metrics took: 4.1µs
2026-01-20T10:56:53.623637099+00:00 stderr F I0120 10:56:53.623631 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-monitoring/cluster-monitoring-operator
2026-01-20T10:56:53.623661919+00:00 stderr F I0120 10:56:53.623642 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-monitoring/cluster-monitoring-operator took: 820ns
2026-01-20T10:56:53.623661919+00:00 stderr F I0120 10:56:53.623653 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-multus/multus-admission-controller
2026-01-20T10:56:53.623669499+00:00 stderr F I0120 10:56:53.623661 30089 gateway_shared_intf.go:609] Adding service multus-admission-controller in namespace openshift-multus
2026-01-20T10:56:53.623729231+00:00 stderr F I0120 10:56:53.623706 30089 gateway_shared_intf.go:635] Updating already programmed rules for multus-admission-controller in namespace openshift-multus
2026-01-20T10:56:53.623729231+00:00 stderr F I0120 10:56:53.623725 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.623737271+00:00 stderr F I0120 10:56:53.623730 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-multus/multus-admission-controller took: 70.962µs
2026-01-20T10:56:53.623744391+00:00 stderr F I0120 10:56:53.623737 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics
2026-01-20T10:56:53.623751452+00:00 stderr F I0120 10:56:53.623744 30089 gateway_shared_intf.go:609] Adding service olm-operator-metrics in namespace openshift-operator-lifecycle-manager
2026-01-20T10:56:53.623782302+00:00 stderr F I0120 10:56:53.623766 30089 gateway_shared_intf.go:635] Updating already programmed rules for olm-operator-metrics in namespace openshift-operator-lifecycle-manager
2026-01-20T10:56:53.623782302+00:00 stderr F I0120 10:56:53.623776 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.623789863+00:00 stderr F I0120 10:56:53.623781 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics took: 37.481µs
2026-01-20T10:56:53.623797763+00:00 stderr F I0120 10:56:53.623788 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-route-controller-manager/route-controller-manager
2026-01-20T10:56:53.623797763+00:00 stderr F I0120 10:56:53.623794 30089 gateway_shared_intf.go:609] Adding service route-controller-manager in namespace openshift-route-controller-manager
2026-01-20T10:56:53.623827574+00:00 stderr F I0120 10:56:53.623811 30089 gateway_shared_intf.go:635] Updating already programmed rules for route-controller-manager in namespace openshift-route-controller-manager
2026-01-20T10:56:53.623827574+00:00 stderr F I0120 10:56:53.623821 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.623835274+00:00 stderr F I0120 10:56:53.623826 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-route-controller-manager/route-controller-manager took: 32.4µs
2026-01-20T10:56:53.623848734+00:00 stderr F I0120 10:56:53.623833 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console-operator/webhook
2026-01-20T10:56:53.623848734+00:00 stderr F I0120 10:56:53.623839 30089 gateway_shared_intf.go:609] Adding service webhook in namespace openshift-console-operator
2026-01-20T10:56:53.623871125+00:00 stderr F I0120 10:56:53.623855 30089 gateway_shared_intf.go:635] Updating already programmed rules for webhook in namespace openshift-console-operator
2026-01-20T10:56:53.623871125+00:00 stderr F I0120 10:56:53.623865 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.623878705+00:00 stderr F I0120 10:56:53.623871 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console-operator/webhook took: 31.051µs
2026-01-20T10:56:53.623886005+00:00 stderr F I0120 10:56:53.623877 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-controller-manager-operator/metrics
2026-01-20T10:56:53.623886005+00:00 stderr F I0120 10:56:53.623883 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-controller-manager-operator
2026-01-20T10:56:53.623927286+00:00 stderr F I0120 10:56:53.623911 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-controller-manager-operator
2026-01-20T10:56:53.623927286+00:00 stderr F I0120 10:56:53.623921 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.623935956+00:00 stderr F I0120 10:56:53.623926 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-controller-manager-operator/metrics took: 43.611µs
2026-01-20T10:56:53.623949407+00:00 stderr F I0120 10:56:53.623933 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-apiserver-operator/metrics
2026-01-20T10:56:53.623949407+00:00 stderr F I0120 10:56:53.623939 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-apiserver-operator
2026-01-20T10:56:53.623987528+00:00 stderr F I0120 10:56:53.623955 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-apiserver-operator
2026-01-20T10:56:53.623987528+00:00 stderr F I0120 10:56:53.623966 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.623987528+00:00 stderr F I0120 10:56:53.623971 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-apiserver-operator/metrics took: 31.521µs
2026-01-20T10:56:53.623987528+00:00 stderr F I0120 10:56:53.623977 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics
2026-01-20T10:56:53.623987528+00:00 stderr F I0120 10:56:53.623983 30089 gateway_shared_intf.go:609] Adding service catalog-operator-metrics in namespace openshift-operator-lifecycle-manager
2026-01-20T10:56:53.624014558+00:00 stderr F I0120 10:56:53.623998 30089 gateway_shared_intf.go:635] Updating already programmed rules for catalog-operator-metrics in namespace openshift-operator-lifecycle-manager
2026-01-20T10:56:53.624014558+00:00 stderr F I0120 10:56:53.624008 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.624022389+00:00 stderr F I0120 10:56:53.624015 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics took: 30.44µs
2026-01-20T10:56:53.624039689+00:00 stderr F I0120 10:56:53.624021 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-config-operator/metrics
2026-01-20T10:56:53.624039689+00:00 stderr F I0120 10:56:53.624027 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-config-operator
2026-01-20T10:56:53.624047139+00:00 stderr F I0120 10:56:53.624042 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-config-operator
2026-01-20T10:56:53.624066910+00:00 stderr F I0120 10:56:53.624048 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.624066910+00:00 stderr F I0120 10:56:53.624053 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-config-operator/metrics took: 25.8µs
2026-01-20T10:56:53.624125191+00:00 stderr F I0120 10:56:53.624080 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console/downloads
2026-01-20T10:56:53.624125191+00:00 stderr F I0120 10:56:53.624091 30089 gateway_shared_intf.go:609] Adding service downloads in namespace openshift-console
2026-01-20T10:56:53.624137072+00:00 stderr F I0120 10:56:53.624125 30089 gateway_shared_intf.go:635] Updating already programmed rules for downloads in namespace openshift-console
2026-01-20T10:56:53.624137072+00:00 stderr F I0120 10:56:53.624133 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.624146022+00:00 stderr F I0120 10:56:53.624139 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console/downloads took: 47.242µs
2026-01-20T10:56:53.624153092+00:00 stderr F I0120 10:56:53.624147 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-etcd-operator/metrics
2026-01-20T10:56:53.624160142+00:00 stderr F I0120 10:56:53.624155 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-etcd-operator
2026-01-20T10:56:53.624196163+00:00 stderr F I0120 10:56:53.624178 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-etcd-operator
2026-01-20T10:56:53.624196163+00:00 stderr F I0120 10:56:53.624192 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested
2026-01-20T10:56:53.624217854+00:00 stderr F I0120 10:56:53.624200 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-etcd-operator/metrics took: 44.741µs
2026-01-20T10:56:53.624217854+00:00 stderr F I0120 10:56:53.624214 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-operator-webhook 2026-01-20T10:56:53.624266075+00:00 stderr F I0120 10:56:53.624222 30089 gateway_shared_intf.go:609] Adding service machine-api-operator-webhook in namespace openshift-machine-api 2026-01-20T10:56:53.624302916+00:00 stderr F I0120 10:56:53.624285 30089 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-operator-webhook in namespace openshift-machine-api 2026-01-20T10:56:53.624302916+00:00 stderr F I0120 10:56:53.624297 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624310646+00:00 stderr F I0120 10:56:53.624303 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator-webhook took: 81.772µs 2026-01-20T10:56:53.624317726+00:00 stderr F I0120 10:56:53.624310 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-controller-manager-operator/metrics 2026-01-20T10:56:53.624324746+00:00 stderr F I0120 10:56:53.624316 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-controller-manager-operator 2026-01-20T10:56:53.624352317+00:00 stderr F I0120 10:56:53.624335 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-controller-manager-operator 2026-01-20T10:56:53.624352317+00:00 stderr F I0120 10:56:53.624348 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624361587+00:00 stderr F I0120 10:56:53.624355 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-controller-manager-operator/metrics took: 37.151µs 2026-01-20T10:56:53.624370088+00:00 stderr F I0120 10:56:53.624363 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway 
openshift-kube-storage-version-migrator-operator/metrics 2026-01-20T10:56:53.624380278+00:00 stderr F I0120 10:56:53.624373 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-storage-version-migrator-operator 2026-01-20T10:56:53.624412239+00:00 stderr F I0120 10:56:53.624394 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-storage-version-migrator-operator 2026-01-20T10:56:53.624412239+00:00 stderr F I0120 10:56:53.624407 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624419749+00:00 stderr F I0120 10:56:53.624414 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-storage-version-migrator-operator/metrics took: 43.251µs 2026-01-20T10:56:53.624426799+00:00 stderr F I0120 10:56:53.624420 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway cert-manager/cert-manager-cainjector 2026-01-20T10:56:53.624433789+00:00 stderr F I0120 10:56:53.624427 30089 gateway_shared_intf.go:609] Adding service cert-manager-cainjector in namespace cert-manager 2026-01-20T10:56:53.624460610+00:00 stderr F I0120 10:56:53.624445 30089 gateway_shared_intf.go:635] Updating already programmed rules for cert-manager-cainjector in namespace cert-manager 2026-01-20T10:56:53.624460610+00:00 stderr F I0120 10:56:53.624454 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624468300+00:00 stderr F I0120 10:56:53.624460 30089 obj_retry.go:541] Creating *factory.serviceForGateway cert-manager/cert-manager-cainjector took: 33.201µs 2026-01-20T10:56:53.624475430+00:00 stderr F I0120 10:56:53.624466 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway default/kubernetes 2026-01-20T10:56:53.624475430+00:00 stderr F I0120 10:56:53.624472 30089 gateway_shared_intf.go:609] Adding service kubernetes in namespace default 2026-01-20T10:56:53.624502901+00:00 stderr F I0120 
10:56:53.624487 30089 gateway_shared_intf.go:635] Updating already programmed rules for kubernetes in namespace default 2026-01-20T10:56:53.624502901+00:00 stderr F I0120 10:56:53.624496 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624511191+00:00 stderr F I0120 10:56:53.624502 30089 obj_retry.go:541] Creating *factory.serviceForGateway default/kubernetes took: 29.731µs 2026-01-20T10:56:53.624518322+00:00 stderr F I0120 10:56:53.624508 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-version/cluster-version-operator 2026-01-20T10:56:53.624518322+00:00 stderr F I0120 10:56:53.624515 30089 gateway_shared_intf.go:609] Adding service cluster-version-operator in namespace openshift-cluster-version 2026-01-20T10:56:53.624547202+00:00 stderr F I0120 10:56:53.624530 30089 gateway_shared_intf.go:635] Updating already programmed rules for cluster-version-operator in namespace openshift-cluster-version 2026-01-20T10:56:53.624547202+00:00 stderr F I0120 10:56:53.624540 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624556303+00:00 stderr F I0120 10:56:53.624546 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-version/cluster-version-operator took: 30.461µs 2026-01-20T10:56:53.624556303+00:00 stderr F I0120 10:56:53.624552 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-apiserver/apiserver 2026-01-20T10:56:53.624565253+00:00 stderr F I0120 10:56:53.624559 30089 gateway_shared_intf.go:609] Adding service apiserver in namespace openshift-kube-apiserver 2026-01-20T10:56:53.624595904+00:00 stderr F I0120 10:56:53.624577 30089 gateway_shared_intf.go:635] Updating already programmed rules for apiserver in namespace openshift-kube-apiserver 2026-01-20T10:56:53.624595904+00:00 stderr F I0120 10:56:53.624589 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 
2026-01-20T10:56:53.624603974+00:00 stderr F I0120 10:56:53.624596 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-apiserver/apiserver took: 35.99µs 2026-01-20T10:56:53.624613824+00:00 stderr F I0120 10:56:53.624604 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/cluster-autoscaler-operator 2026-01-20T10:56:53.624620904+00:00 stderr F I0120 10:56:53.624612 30089 gateway_shared_intf.go:609] Adding service cluster-autoscaler-operator in namespace openshift-machine-api 2026-01-20T10:56:53.624646625+00:00 stderr F I0120 10:56:53.624629 30089 gateway_shared_intf.go:635] Updating already programmed rules for cluster-autoscaler-operator in namespace openshift-machine-api 2026-01-20T10:56:53.624646625+00:00 stderr F I0120 10:56:53.624642 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624668685+00:00 stderr F I0120 10:56:53.624650 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/cluster-autoscaler-operator took: 36.481µs 2026-01-20T10:56:53.624668685+00:00 stderr F I0120 10:56:53.624664 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console-operator/metrics 2026-01-20T10:56:53.624677916+00:00 stderr F I0120 10:56:53.624672 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-console-operator 2026-01-20T10:56:53.624708286+00:00 stderr F I0120 10:56:53.624692 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-console-operator 2026-01-20T10:56:53.624708286+00:00 stderr F I0120 10:56:53.624702 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624715997+00:00 stderr F I0120 10:56:53.624707 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console-operator/metrics took: 36.541µs 2026-01-20T10:56:53.624723237+00:00 stderr F I0120 10:56:53.624714 30089 
obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/packageserver-service 2026-01-20T10:56:53.624723237+00:00 stderr F I0120 10:56:53.624720 30089 gateway_shared_intf.go:609] Adding service packageserver-service in namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.624750998+00:00 stderr F I0120 10:56:53.624734 30089 gateway_shared_intf.go:635] Updating already programmed rules for packageserver-service in namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.624750998+00:00 stderr F I0120 10:56:53.624745 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624759658+00:00 stderr F I0120 10:56:53.624750 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/packageserver-service took: 30.07µs 2026-01-20T10:56:53.624767298+00:00 stderr F I0120 10:56:53.624756 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-node 2026-01-20T10:56:53.624774528+00:00 stderr F I0120 10:56:53.624764 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-node took: 400ns 2026-01-20T10:56:53.624774528+00:00 stderr F I0120 10:56:53.624770 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-authentication-operator/metrics 2026-01-20T10:56:53.624781848+00:00 stderr F I0120 10:56:53.624776 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-authentication-operator 2026-01-20T10:56:53.624814099+00:00 stderr F I0120 10:56:53.624797 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-authentication-operator 2026-01-20T10:56:53.624814099+00:00 stderr F I0120 10:56:53.624807 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624822439+00:00 stderr F I0120 10:56:53.624813 30089 
obj_retry.go:541] Creating *factory.serviceForGateway openshift-authentication-operator/metrics took: 36.201µs 2026-01-20T10:56:53.624829590+00:00 stderr F I0120 10:56:53.624819 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-etcd/etcd 2026-01-20T10:56:53.624829590+00:00 stderr F I0120 10:56:53.624826 30089 gateway_shared_intf.go:609] Adding service etcd in namespace openshift-etcd 2026-01-20T10:56:53.624857010+00:00 stderr F I0120 10:56:53.624841 30089 gateway_shared_intf.go:635] Updating already programmed rules for etcd in namespace openshift-etcd 2026-01-20T10:56:53.624857010+00:00 stderr F I0120 10:56:53.624851 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624865331+00:00 stderr F I0120 10:56:53.624856 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-etcd/etcd took: 30.061µs 2026-01-20T10:56:53.624872731+00:00 stderr F I0120 10:56:53.624862 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-image-registry/image-registry-operator 2026-01-20T10:56:53.624872731+00:00 stderr F I0120 10:56:53.624869 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-image-registry/image-registry-operator took: 280ns 2026-01-20T10:56:53.624881611+00:00 stderr F I0120 10:56:53.624875 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ingress/router-internal-default 2026-01-20T10:56:53.624888631+00:00 stderr F I0120 10:56:53.624881 30089 gateway_shared_intf.go:609] Adding service router-internal-default in namespace openshift-ingress 2026-01-20T10:56:53.624913772+00:00 stderr F I0120 10:56:53.624898 30089 gateway_shared_intf.go:635] Updating already programmed rules for router-internal-default in namespace openshift-ingress 2026-01-20T10:56:53.624913772+00:00 stderr F I0120 10:56:53.624908 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624921482+00:00 stderr F 
I0120 10:56:53.624913 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress/router-internal-default took: 31.941µs 2026-01-20T10:56:53.624929052+00:00 stderr F I0120 10:56:53.624920 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/control-plane-machine-set-operator 2026-01-20T10:56:53.624929052+00:00 stderr F I0120 10:56:53.624926 30089 gateway_shared_intf.go:609] Adding service control-plane-machine-set-operator in namespace openshift-machine-api 2026-01-20T10:56:53.624960613+00:00 stderr F I0120 10:56:53.624945 30089 gateway_shared_intf.go:635] Updating already programmed rules for control-plane-machine-set-operator in namespace openshift-machine-api 2026-01-20T10:56:53.624960613+00:00 stderr F I0120 10:56:53.624954 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.624968083+00:00 stderr F I0120 10:56:53.624960 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/control-plane-machine-set-operator took: 33.891µs 2026-01-20T10:56:53.624975203+00:00 stderr F I0120 10:56:53.624966 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-oauth-apiserver/api 2026-01-20T10:56:53.624975203+00:00 stderr F I0120 10:56:53.624972 30089 gateway_shared_intf.go:609] Adding service api in namespace openshift-oauth-apiserver 2026-01-20T10:56:53.625003004+00:00 stderr F I0120 10:56:53.624987 30089 gateway_shared_intf.go:635] Updating already programmed rules for api in namespace openshift-oauth-apiserver 2026-01-20T10:56:53.625003004+00:00 stderr F I0120 10:56:53.624997 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625011674+00:00 stderr F I0120 10:56:53.625002 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-oauth-apiserver/api took: 30.071µs 2026-01-20T10:56:53.625018775+00:00 stderr F I0120 10:56:53.625008 30089 obj_retry.go:502] Add event received 
for *factory.serviceForGateway openshift-marketplace/redhat-marketplace 2026-01-20T10:56:53.625018775+00:00 stderr F I0120 10:56:53.625015 30089 gateway_shared_intf.go:609] Adding service redhat-marketplace in namespace openshift-marketplace 2026-01-20T10:56:53.625051475+00:00 stderr F I0120 10:56:53.625035 30089 gateway_shared_intf.go:635] Updating already programmed rules for redhat-marketplace in namespace openshift-marketplace 2026-01-20T10:56:53.625051475+00:00 stderr F I0120 10:56:53.625044 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625070796+00:00 stderr F I0120 10:56:53.625050 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/redhat-marketplace took: 34.801µs 2026-01-20T10:56:53.625070796+00:00 stderr F I0120 10:56:53.625056 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway default/openshift 2026-01-20T10:56:53.625112227+00:00 stderr F I0120 10:56:53.625095 30089 obj_retry.go:541] Creating *factory.serviceForGateway default/openshift took: 360ns 2026-01-20T10:56:53.625112227+00:00 stderr F I0120 10:56:53.625107 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver-operator/metrics 2026-01-20T10:56:53.625129967+00:00 stderr F I0120 10:56:53.625113 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-apiserver-operator 2026-01-20T10:56:53.625137348+00:00 stderr F I0120 10:56:53.625129 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-apiserver-operator 2026-01-20T10:56:53.625137348+00:00 stderr F I0120 10:56:53.625134 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625146228+00:00 stderr F I0120 10:56:53.625139 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver-operator/metrics took: 26.001µs 2026-01-20T10:56:53.625153268+00:00 stderr F I0120 10:56:53.625145 30089 
obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver/api 2026-01-20T10:56:53.625160768+00:00 stderr F I0120 10:56:53.625151 30089 gateway_shared_intf.go:609] Adding service api in namespace openshift-apiserver 2026-01-20T10:56:53.625185249+00:00 stderr F I0120 10:56:53.625165 30089 gateway_shared_intf.go:635] Updating already programmed rules for api in namespace openshift-apiserver 2026-01-20T10:56:53.625185249+00:00 stderr F I0120 10:56:53.625176 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625185249+00:00 stderr F I0120 10:56:53.625181 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver/api took: 29.601µs 2026-01-20T10:56:53.625194069+00:00 stderr F I0120 10:56:53.625187 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-dns/dns-default 2026-01-20T10:56:53.625201179+00:00 stderr F I0120 10:56:53.625195 30089 gateway_shared_intf.go:609] Adding service dns-default in namespace openshift-dns 2026-01-20T10:56:53.625255761+00:00 stderr F I0120 10:56:53.625224 30089 gateway_shared_intf.go:635] Updating already programmed rules for dns-default in namespace openshift-dns 2026-01-20T10:56:53.625255761+00:00 stderr F I0120 10:56:53.625246 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625263701+00:00 stderr F I0120 10:56:53.625254 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-dns/dns-default took: 58.572µs 2026-01-20T10:56:53.625270881+00:00 stderr F I0120 10:56:53.625265 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-controllers 2026-01-20T10:56:53.625279691+00:00 stderr F I0120 10:56:53.625274 30089 gateway_shared_intf.go:609] Adding service machine-api-controllers in namespace openshift-machine-api 2026-01-20T10:56:53.625305692+00:00 stderr F I0120 10:56:53.625289 30089 gateway_shared_intf.go:635] 
Updating already programmed rules for machine-api-controllers in namespace openshift-machine-api 2026-01-20T10:56:53.625305692+00:00 stderr F I0120 10:56:53.625301 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625313322+00:00 stderr F I0120 10:56:53.625306 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-controllers took: 33.531µs 2026-01-20T10:56:53.625326503+00:00 stderr F I0120 10:56:53.625313 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/marketplace-operator-metrics 2026-01-20T10:56:53.625326503+00:00 stderr F I0120 10:56:53.625319 30089 gateway_shared_intf.go:609] Adding service marketplace-operator-metrics in namespace openshift-marketplace 2026-01-20T10:56:53.625350343+00:00 stderr F I0120 10:56:53.625334 30089 gateway_shared_intf.go:635] Updating already programmed rules for marketplace-operator-metrics in namespace openshift-marketplace 2026-01-20T10:56:53.625350343+00:00 stderr F I0120 10:56:53.625344 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625358763+00:00 stderr F I0120 10:56:53.625350 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/marketplace-operator-metrics took: 30.771µs 2026-01-20T10:56:53.625365854+00:00 stderr F I0120 10:56:53.625356 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver/check-endpoints 2026-01-20T10:56:53.625365854+00:00 stderr F I0120 10:56:53.625362 30089 gateway_shared_intf.go:609] Adding service check-endpoints in namespace openshift-apiserver 2026-01-20T10:56:53.625391494+00:00 stderr F I0120 10:56:53.625375 30089 gateway_shared_intf.go:635] Updating already programmed rules for check-endpoints in namespace openshift-apiserver 2026-01-20T10:56:53.625391494+00:00 stderr F I0120 10:56:53.625385 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 
2026-01-20T10:56:53.625399514+00:00 stderr F I0120 10:56:53.625390 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver/check-endpoints took: 27.961µs 2026-01-20T10:56:53.625406756+00:00 stderr F I0120 10:56:53.625396 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/community-operators 2026-01-20T10:56:53.625406756+00:00 stderr F I0120 10:56:53.625403 30089 gateway_shared_intf.go:609] Adding service community-operators in namespace openshift-marketplace 2026-01-20T10:56:53.625435296+00:00 stderr F I0120 10:56:53.625419 30089 gateway_shared_intf.go:635] Updating already programmed rules for community-operators in namespace openshift-marketplace 2026-01-20T10:56:53.625435296+00:00 stderr F I0120 10:56:53.625429 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625443037+00:00 stderr F I0120 10:56:53.625435 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/community-operators took: 31.792µs 2026-01-20T10:56:53.625450197+00:00 stderr F I0120 10:56:53.625441 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-network-diagnostics/network-check-target 2026-01-20T10:56:53.625450197+00:00 stderr F I0120 10:56:53.625447 30089 gateway_shared_intf.go:609] Adding service network-check-target in namespace openshift-network-diagnostics 2026-01-20T10:56:53.625490288+00:00 stderr F I0120 10:56:53.625473 30089 gateway_shared_intf.go:635] Updating already programmed rules for network-check-target in namespace openshift-network-diagnostics 2026-01-20T10:56:53.625490288+00:00 stderr F I0120 10:56:53.625484 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625497898+00:00 stderr F I0120 10:56:53.625490 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-diagnostics/network-check-target took: 43.081µs 2026-01-20T10:56:53.625505578+00:00 stderr F 
I0120 10:56:53.625496 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-network-operator/metrics 2026-01-20T10:56:53.625513518+00:00 stderr F I0120 10:56:53.625502 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-operator/metrics took: 270ns 2026-01-20T10:56:53.625520619+00:00 stderr F I0120 10:56:53.625510 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics 2026-01-20T10:56:53.625520619+00:00 stderr F I0120 10:56:53.625517 30089 gateway_shared_intf.go:609] Adding service package-server-manager-metrics in namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.625547719+00:00 stderr F I0120 10:56:53.625532 30089 gateway_shared_intf.go:635] Updating already programmed rules for package-server-manager-metrics in namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.625547719+00:00 stderr F I0120 10:56:53.625541 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625556020+00:00 stderr F I0120 10:56:53.625547 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics took: 29.541µs 2026-01-20T10:56:53.625563180+00:00 stderr F I0120 10:56:53.625552 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console/console 2026-01-20T10:56:53.625563180+00:00 stderr F I0120 10:56:53.625559 30089 gateway_shared_intf.go:609] Adding service console in namespace openshift-console 2026-01-20T10:56:53.625588890+00:00 stderr F I0120 10:56:53.625573 30089 gateway_shared_intf.go:635] Updating already programmed rules for console in namespace openshift-console 2026-01-20T10:56:53.625588890+00:00 stderr F I0120 10:56:53.625582 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625597151+00:00 stderr F I0120 10:56:53.625588 30089 
obj_retry.go:541] Creating *factory.serviceForGateway openshift-console/console took: 28.32µs 2026-01-20T10:56:53.625611061+00:00 stderr F I0120 10:56:53.625594 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ingress-operator/metrics 2026-01-20T10:56:53.625611061+00:00 stderr F I0120 10:56:53.625603 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-ingress-operator 2026-01-20T10:56:53.625658672+00:00 stderr F I0120 10:56:53.625617 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-ingress-operator 2026-01-20T10:56:53.625658672+00:00 stderr F I0120 10:56:53.625628 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625658672+00:00 stderr F I0120 10:56:53.625633 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress-operator/metrics took: 32.691µs 2026-01-20T10:56:53.625658672+00:00 stderr F I0120 10:56:53.625639 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-operator 2026-01-20T10:56:53.625658672+00:00 stderr F I0120 10:56:53.625645 30089 gateway_shared_intf.go:609] Adding service machine-config-operator in namespace openshift-machine-config-operator 2026-01-20T10:56:53.625675463+00:00 stderr F I0120 10:56:53.625661 30089 gateway_shared_intf.go:635] Updating already programmed rules for machine-config-operator in namespace openshift-machine-config-operator 2026-01-20T10:56:53.625682993+00:00 stderr F I0120 10:56:53.625667 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625682993+00:00 stderr F I0120 10:56:53.625672 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-config-operator/machine-config-operator took: 27.361µs 2026-01-20T10:56:53.625682993+00:00 stderr F I0120 10:56:53.625678 30089 obj_retry.go:502] Add event received for 
*factory.serviceForGateway openshift-multus/network-metrics-service 2026-01-20T10:56:53.625690843+00:00 stderr F I0120 10:56:53.625684 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-multus/network-metrics-service took: 250ns 2026-01-20T10:56:53.625697923+00:00 stderr F I0120 10:56:53.625690 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway cert-manager/cert-manager-webhook 2026-01-20T10:56:53.625707433+00:00 stderr F I0120 10:56:53.625696 30089 gateway_shared_intf.go:609] Adding service cert-manager-webhook in namespace cert-manager 2026-01-20T10:56:53.625716074+00:00 stderr F I0120 10:56:53.625710 30089 gateway_shared_intf.go:635] Updating already programmed rules for cert-manager-webhook in namespace cert-manager 2026-01-20T10:56:53.625826937+00:00 stderr F I0120 10:56:53.625716 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625837557+00:00 stderr F I0120 10:56:53.625825 30089 obj_retry.go:541] Creating *factory.serviceForGateway cert-manager/cert-manager-webhook took: 122.713µs 2026-01-20T10:56:53.625859257+00:00 stderr F I0120 10:56:53.625841 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-dns-operator/metrics 2026-01-20T10:56:53.625866788+00:00 stderr F I0120 10:56:53.625857 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-dns-operator 2026-01-20T10:56:53.625895508+00:00 stderr F I0120 10:56:53.625879 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-dns-operator 2026-01-20T10:56:53.625895508+00:00 stderr F I0120 10:56:53.625891 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625903159+00:00 stderr F I0120 10:56:53.625896 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-dns-operator/metrics took: 41.291µs 2026-01-20T10:56:53.625910089+00:00 stderr F I0120 10:56:53.625903 30089 
obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-image-registry/image-registry 2026-01-20T10:56:53.625917129+00:00 stderr F I0120 10:56:53.625909 30089 gateway_shared_intf.go:609] Adding service image-registry in namespace openshift-image-registry 2026-01-20T10:56:53.625938980+00:00 stderr F I0120 10:56:53.625923 30089 gateway_shared_intf.go:635] Updating already programmed rules for image-registry in namespace openshift-image-registry 2026-01-20T10:56:53.625938980+00:00 stderr F I0120 10:56:53.625933 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625946710+00:00 stderr F I0120 10:56:53.625939 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-image-registry/image-registry took: 29.42µs 2026-01-20T10:56:53.625954500+00:00 stderr F I0120 10:56:53.625945 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-daemon 2026-01-20T10:56:53.625961550+00:00 stderr F I0120 10:56:53.625951 30089 gateway_shared_intf.go:609] Adding service machine-config-daemon in namespace openshift-machine-config-operator 2026-01-20T10:56:53.625987211+00:00 stderr F I0120 10:56:53.625970 30089 gateway_shared_intf.go:635] Updating already programmed rules for machine-config-daemon in namespace openshift-machine-config-operator 2026-01-20T10:56:53.625987211+00:00 stderr F I0120 10:56:53.625983 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.625998541+00:00 stderr F I0120 10:56:53.625990 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-config-operator/machine-config-daemon took: 38.221µs 2026-01-20T10:56:53.626005891+00:00 stderr F I0120 10:56:53.626000 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/redhat-operators 2026-01-20T10:56:53.626015912+00:00 stderr F I0120 10:56:53.626009 30089 gateway_shared_intf.go:609] Adding 
service redhat-operators in namespace openshift-marketplace 2026-01-20T10:56:53.626047562+00:00 stderr F I0120 10:56:53.626031 30089 gateway_shared_intf.go:635] Updating already programmed rules for redhat-operators in namespace openshift-marketplace 2026-01-20T10:56:53.626047562+00:00 stderr F I0120 10:56:53.626042 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626055323+00:00 stderr F I0120 10:56:53.626047 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/redhat-operators took: 40.571µs 2026-01-20T10:56:53.626077003+00:00 stderr F I0120 10:56:53.626054 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-controller-manager/controller-manager 2026-01-20T10:56:53.626088483+00:00 stderr F I0120 10:56:53.626082 30089 gateway_shared_intf.go:609] Adding service controller-manager in namespace openshift-controller-manager 2026-01-20T10:56:53.626114904+00:00 stderr F I0120 10:56:53.626098 30089 gateway_shared_intf.go:635] Updating already programmed rules for controller-manager in namespace openshift-controller-manager 2026-01-20T10:56:53.626114904+00:00 stderr F I0120 10:56:53.626108 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626123134+00:00 stderr F I0120 10:56:53.626114 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-controller-manager/controller-manager took: 53.961µs 2026-01-20T10:56:53.626130255+00:00 stderr F I0120 10:56:53.626120 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-controller-manager/kube-controller-manager 2026-01-20T10:56:53.626130255+00:00 stderr F I0120 10:56:53.626126 30089 gateway_shared_intf.go:609] Adding service kube-controller-manager in namespace openshift-kube-controller-manager 2026-01-20T10:56:53.626162245+00:00 stderr F I0120 10:56:53.626146 30089 gateway_shared_intf.go:635] Updating already programmed rules for 
kube-controller-manager in namespace openshift-kube-controller-manager 2026-01-20T10:56:53.626162245+00:00 stderr F I0120 10:56:53.626156 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626162 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-controller-manager/kube-controller-manager took: 34.901µs 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626168 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-scheduler/scheduler 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626173 30089 gateway_shared_intf.go:609] Adding service scheduler in namespace openshift-kube-scheduler 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626192 30089 gateway_shared_intf.go:635] Updating already programmed rules for scheduler in namespace openshift-kube-scheduler 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626197 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626202 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-scheduler/scheduler took: 28.48µs 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626208 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-operator 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626213 30089 gateway_shared_intf.go:609] Adding service machine-api-operator in namespace openshift-machine-api 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626228 30089 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-operator in namespace openshift-machine-api 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626233 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 
10:56:53.626238 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator took: 24.89µs 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626244 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/certified-operators 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626250 30089 gateway_shared_intf.go:609] Adding service certified-operators in namespace openshift-marketplace 2026-01-20T10:56:53.626266858+00:00 stderr F I0120 10:56:53.626263 30089 gateway_shared_intf.go:635] Updating already programmed rules for certified-operators in namespace openshift-marketplace 2026-01-20T10:56:53.626284289+00:00 stderr F I0120 10:56:53.626268 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626284289+00:00 stderr F I0120 10:56:53.626274 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/certified-operators took: 23.75µs 2026-01-20T10:56:53.626284289+00:00 stderr F I0120 10:56:53.626279 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2026-01-20T10:56:53.626292249+00:00 stderr F I0120 10:56:53.626286 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-control-plane took: 550ns 2026-01-20T10:56:53.626299289+00:00 stderr F I0120 10:56:53.626291 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway cert-manager/cert-manager 2026-01-20T10:56:53.626306309+00:00 stderr F I0120 10:56:53.626297 30089 gateway_shared_intf.go:609] Adding service cert-manager in namespace cert-manager 2026-01-20T10:56:53.626332230+00:00 stderr F I0120 10:56:53.626311 30089 gateway_shared_intf.go:635] Updating already programmed rules for cert-manager in namespace cert-manager 2026-01-20T10:56:53.626332230+00:00 stderr F I0120 10:56:53.626322 30089 openflow_manager.go:87] Gateway 
OpenFlow sync already requested 2026-01-20T10:56:53.626332230+00:00 stderr F I0120 10:56:53.626328 30089 obj_retry.go:541] Creating *factory.serviceForGateway cert-manager/cert-manager took: 30.141µs 2026-01-20T10:56:53.626340650+00:00 stderr F I0120 10:56:53.626335 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-network-diagnostics/network-check-source 2026-01-20T10:56:53.626347710+00:00 stderr F I0120 10:56:53.626342 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-diagnostics/network-check-source took: 470ns 2026-01-20T10:56:53.626354730+00:00 stderr F I0120 10:56:53.626347 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-authentication/oauth-openshift 2026-01-20T10:56:53.626361731+00:00 stderr F I0120 10:56:53.626353 30089 gateway_shared_intf.go:609] Adding service oauth-openshift in namespace openshift-authentication 2026-01-20T10:56:53.626385581+00:00 stderr F I0120 10:56:53.626366 30089 gateway_shared_intf.go:635] Updating already programmed rules for oauth-openshift in namespace openshift-authentication 2026-01-20T10:56:53.626385581+00:00 stderr F I0120 10:56:53.626376 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626385581+00:00 stderr F I0120 10:56:53.626382 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-authentication/oauth-openshift took: 28.691µs 2026-01-20T10:56:53.626393781+00:00 stderr F I0120 10:56:53.626388 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-machine-approver/machine-approver 2026-01-20T10:56:53.626400652+00:00 stderr F I0120 10:56:53.626394 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-machine-approver/machine-approver took: 420ns 2026-01-20T10:56:53.626408282+00:00 stderr F I0120 10:56:53.626399 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway 
openshift-ingress-canary/ingress-canary 2026-01-20T10:56:53.626408282+00:00 stderr F I0120 10:56:53.626405 30089 gateway_shared_intf.go:609] Adding service ingress-canary in namespace openshift-ingress-canary 2026-01-20T10:56:53.626440123+00:00 stderr F I0120 10:56:53.626420 30089 gateway_shared_intf.go:635] Updating already programmed rules for ingress-canary in namespace openshift-ingress-canary 2026-01-20T10:56:53.626440123+00:00 stderr F I0120 10:56:53.626430 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626440123+00:00 stderr F I0120 10:56:53.626435 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress-canary/ingress-canary took: 30.21µs 2026-01-20T10:56:53.626454493+00:00 stderr F I0120 10:56:53.626441 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-controller 2026-01-20T10:56:53.626454493+00:00 stderr F I0120 10:56:53.626447 30089 gateway_shared_intf.go:609] Adding service machine-config-controller in namespace openshift-machine-config-operator 2026-01-20T10:56:53.626483434+00:00 stderr F I0120 10:56:53.626461 30089 gateway_shared_intf.go:635] Updating already programmed rules for machine-config-controller in namespace openshift-machine-config-operator 2026-01-20T10:56:53.626483434+00:00 stderr F I0120 10:56:53.626471 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626483434+00:00 stderr F I0120 10:56:53.626477 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-config-operator/machine-config-controller took: 29.061µs 2026-01-20T10:56:53.626492484+00:00 stderr F I0120 10:56:53.626483 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-scheduler-operator/metrics 2026-01-20T10:56:53.626492484+00:00 stderr F I0120 10:56:53.626488 30089 gateway_shared_intf.go:609] Adding service metrics in namespace 
openshift-kube-scheduler-operator 2026-01-20T10:56:53.626518825+00:00 stderr F I0120 10:56:53.626502 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-scheduler-operator 2026-01-20T10:56:53.626518825+00:00 stderr F I0120 10:56:53.626512 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626526965+00:00 stderr F I0120 10:56:53.626517 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-scheduler-operator/metrics took: 28.561µs 2026-01-20T10:56:53.626526965+00:00 stderr F I0120 10:56:53.626523 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-operator-machine-webhook 2026-01-20T10:56:53.626535875+00:00 stderr F I0120 10:56:53.626530 30089 gateway_shared_intf.go:609] Adding service machine-api-operator-machine-webhook in namespace openshift-machine-api 2026-01-20T10:56:53.626562316+00:00 stderr F I0120 10:56:53.626546 30089 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-operator-machine-webhook in namespace openshift-machine-api 2026-01-20T10:56:53.626562316+00:00 stderr F I0120 10:56:53.626556 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626570716+00:00 stderr F I0120 10:56:53.626561 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator-machine-webhook took: 31.311µs 2026-01-20T10:56:53.626577816+00:00 stderr F I0120 10:56:53.626567 30089 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-service-ca-operator/metrics 2026-01-20T10:56:53.626577816+00:00 stderr F I0120 10:56:53.626574 30089 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-service-ca-operator 2026-01-20T10:56:53.626606097+00:00 stderr F I0120 10:56:53.626589 30089 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace 
openshift-service-ca-operator 2026-01-20T10:56:53.626606097+00:00 stderr F I0120 10:56:53.626599 30089 openflow_manager.go:87] Gateway OpenFlow sync already requested 2026-01-20T10:56:53.626613777+00:00 stderr F I0120 10:56:53.626607 30089 obj_retry.go:541] Creating *factory.serviceForGateway openshift-service-ca-operator/metrics took: 31.531µs 2026-01-20T10:56:53.626637708+00:00 stderr F I0120 10:56:53.626619 30089 factory.go:988] Added *v1.Service event handler 11 2026-01-20T10:56:53.626696459+00:00 stderr F I0120 10:56:53.626646 30089 obj_retry_gateway.go:28] [newRetryFrameworkNodeWithParameters] g.watchFactory=&{11 0xc0000ff7a0 0xc0000ff810 0xc0000ff8f0 0xc0000ff960 0xc0000ff9d0 0xc0000ffa40 0xc00012e000 0xc0000ffab0 0xc0000ffb20 map[0x23d45a0:0xc0003a60e0 0x23d4ae0:0xc00026ee70 0x23d4d80:0xc0003a6000 0x23d5020:0xc0003a61c0 0x23d52c0:0xc0003a6230 0x23d5aa0:0xc0003a62a0 0x23f31a0:0xc00026eb60 0x23f4020:0xc00026ebd0 0x23f4760:0xc00026ed90 0x23f5980:0xc00026ed20 0x23f7680:0xc00026ee00 0x23f8160:0xc00026ec40] 0xc000152de0} 2026-01-20T10:56:53.626747901+00:00 stderr F I0120 10:56:53.626726 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/community-operators-h826k 2026-01-20T10:56:53.626756401+00:00 stderr F I0120 10:56:53.626750 30089 gateway_shared_intf.go:842] Adding endpointslice community-operators-h826k in namespace openshift-marketplace 2026-01-20T10:56:53.626803132+00:00 stderr F I0120 10:56:53.626784 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/community-operators-h826k took: 38.121µs 2026-01-20T10:56:53.626810702+00:00 stderr F I0120 10:56:53.626799 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver/api-7hq6z 2026-01-20T10:56:53.626817602+00:00 stderr F I0120 10:56:53.626809 30089 gateway_shared_intf.go:842] Adding endpointslice api-7hq6z in namespace openshift-apiserver 2026-01-20T10:56:53.626839853+00:00 stderr F 
I0120 10:56:53.626824 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-apiserver/api-7hq6z took: 16.571µs 2026-01-20T10:56:53.626839853+00:00 stderr F I0120 10:56:53.626834 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/control-plane-machine-set-operator-nmjkn 2026-01-20T10:56:53.626849153+00:00 stderr F I0120 10:56:53.626843 30089 gateway_shared_intf.go:842] Adding endpointslice control-plane-machine-set-operator-nmjkn in namespace openshift-machine-api 2026-01-20T10:56:53.626874064+00:00 stderr F I0120 10:56:53.626858 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/control-plane-machine-set-operator-nmjkn took: 15.32µs 2026-01-20T10:56:53.626874064+00:00 stderr F I0120 10:56:53.626867 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress-canary/ingress-canary-rhnd4 2026-01-20T10:56:53.626881824+00:00 stderr F I0120 10:56:53.626874 30089 gateway_shared_intf.go:842] Adding endpointslice ingress-canary-rhnd4 in namespace openshift-ingress-canary 2026-01-20T10:56:53.626903915+00:00 stderr F I0120 10:56:53.626888 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress-canary/ingress-canary-rhnd4 took: 14.46µs 2026-01-20T10:56:53.626903915+00:00 stderr F I0120 10:56:53.626898 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-oauth-apiserver/api-2pj4d 2026-01-20T10:56:53.626911685+00:00 stderr F I0120 10:56:53.626904 30089 gateway_shared_intf.go:842] Adding endpointslice api-2pj4d in namespace openshift-oauth-apiserver 2026-01-20T10:56:53.626933255+00:00 stderr F I0120 10:56:53.626917 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-oauth-apiserver/api-2pj4d took: 13.82µs 2026-01-20T10:56:53.626933255+00:00 stderr F I0120 10:56:53.626927 30089 obj_retry.go:502] Add event received for 
*factory.endpointSliceForGateway openshift-console/console-wg4kr 2026-01-20T10:56:53.626940976+00:00 stderr F I0120 10:56:53.626933 30089 gateway_shared_intf.go:842] Adding endpointslice console-wg4kr in namespace openshift-console 2026-01-20T10:56:53.626964736+00:00 stderr F I0120 10:56:53.626949 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console/console-wg4kr took: 15.821µs 2026-01-20T10:56:53.626964736+00:00 stderr F I0120 10:56:53.626958 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 2026-01-20T10:56:53.626972527+00:00 stderr F I0120 10:56:53.626965 30089 gateway_shared_intf.go:842] Adding endpointslice catalog-operator-metrics-fqfm8 in namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.626997447+00:00 stderr F I0120 10:56:53.626981 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 took: 15.461µs 2026-01-20T10:56:53.626997447+00:00 stderr F I0120 10:56:53.626991 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-authentication-operator/metrics-dp499 2026-01-20T10:56:53.627005217+00:00 stderr F I0120 10:56:53.626998 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-dp499 in namespace openshift-authentication-operator 2026-01-20T10:56:53.627028798+00:00 stderr F I0120 10:56:53.627012 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-authentication-operator/metrics-dp499 took: 15.121µs 2026-01-20T10:56:53.627028798+00:00 stderr F I0120 10:56:53.627022 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/marketplace-operator-metrics-fcwkk 2026-01-20T10:56:53.627036288+00:00 stderr F I0120 10:56:53.627028 30089 gateway_shared_intf.go:842] Adding endpointslice marketplace-operator-metrics-fcwkk in namespace 
openshift-marketplace 2026-01-20T10:56:53.627074009+00:00 stderr F I0120 10:56:53.627044 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/marketplace-operator-metrics-fcwkk took: 15.95µs 2026-01-20T10:56:53.627074009+00:00 stderr F I0120 10:56:53.627054 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console-operator/metrics-rdlxc 2026-01-20T10:56:53.627100940+00:00 stderr F I0120 10:56:53.627082 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-rdlxc in namespace openshift-console-operator 2026-01-20T10:56:53.627108380+00:00 stderr F I0120 10:56:53.627101 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console-operator/metrics-rdlxc took: 41.501µs 2026-01-20T10:56:53.627115480+00:00 stderr F I0120 10:56:53.627108 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-dns-operator/metrics-cxk8j 2026-01-20T10:56:53.627122560+00:00 stderr F I0120 10:56:53.627114 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-cxk8j in namespace openshift-dns-operator 2026-01-20T10:56:53.627139781+00:00 stderr F I0120 10:56:53.627127 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-dns-operator/metrics-cxk8j took: 13.551µs 2026-01-20T10:56:53.627139781+00:00 stderr F I0120 10:56:53.627135 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-controller-manager-operator/metrics-cz5rv 2026-01-20T10:56:53.627147471+00:00 stderr F I0120 10:56:53.627141 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-cz5rv in namespace openshift-kube-controller-manager-operator 2026-01-20T10:56:53.627171512+00:00 stderr F I0120 10:56:53.627154 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-controller-manager-operator/metrics-cz5rv took: 13.66µs 2026-01-20T10:56:53.627171512+00:00 stderr F I0120 10:56:53.627165 30089 
obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-daemon-2nvnz 2026-01-20T10:56:53.627179042+00:00 stderr F I0120 10:56:53.627171 30089 gateway_shared_intf.go:842] Adding endpointslice machine-config-daemon-2nvnz in namespace openshift-machine-config-operator 2026-01-20T10:56:53.627206293+00:00 stderr F I0120 10:56:53.627190 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-daemon-2nvnz took: 17.82µs 2026-01-20T10:56:53.627206293+00:00 stderr F I0120 10:56:53.627200 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-multus/multus-admission-controller-s6h4d 2026-01-20T10:56:53.627213773+00:00 stderr F I0120 10:56:53.627207 30089 gateway_shared_intf.go:842] Adding endpointslice multus-admission-controller-s6h4d in namespace openshift-multus 2026-01-20T10:56:53.627227083+00:00 stderr F I0120 10:56:53.627221 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-multus/multus-admission-controller-s6h4d took: 14.44µs 2026-01-20T10:56:53.627234023+00:00 stderr F I0120 10:56:53.627227 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-network-diagnostics/network-check-target-kqkjk 2026-01-20T10:56:53.627241124+00:00 stderr F I0120 10:56:53.627233 30089 gateway_shared_intf.go:842] Adding endpointslice network-check-target-kqkjk in namespace openshift-network-diagnostics 2026-01-20T10:56:53.627268824+00:00 stderr F I0120 10:56:53.627251 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-network-diagnostics/network-check-target-kqkjk took: 16.781µs 2026-01-20T10:56:53.627268824+00:00 stderr F I0120 10:56:53.627264 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway cert-manager/cert-manager-lrf7j 2026-01-20T10:56:53.627279535+00:00 stderr F I0120 10:56:53.627273 30089 
gateway_shared_intf.go:842] Adding endpointslice cert-manager-lrf7j in namespace cert-manager 2026-01-20T10:56:53.627310845+00:00 stderr F I0120 10:56:53.627292 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway cert-manager/cert-manager-lrf7j took: 19.881µs 2026-01-20T10:56:53.627335816+00:00 stderr F I0120 10:56:53.627318 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver/check-endpoints-sbfp5 2026-01-20T10:56:53.627335816+00:00 stderr F I0120 10:56:53.627332 30089 gateway_shared_intf.go:842] Adding endpointslice check-endpoints-sbfp5 in namespace openshift-apiserver 2026-01-20T10:56:53.627368437+00:00 stderr F I0120 10:56:53.627350 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-apiserver/check-endpoints-sbfp5 took: 19.48µs 2026-01-20T10:56:53.627368437+00:00 stderr F I0120 10:56:53.627364 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-authentication/oauth-openshift-6gdxk 2026-01-20T10:56:53.627377687+00:00 stderr F I0120 10:56:53.627371 30089 gateway_shared_intf.go:842] Adding endpointslice oauth-openshift-6gdxk in namespace openshift-authentication 2026-01-20T10:56:53.627410448+00:00 stderr F I0120 10:56:53.627392 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-authentication/oauth-openshift-6gdxk took: 19.72µs 2026-01-20T10:56:53.627410448+00:00 stderr F I0120 10:56:53.627404 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress/router-internal-default-29hv8 2026-01-20T10:56:53.627419748+00:00 stderr F I0120 10:56:53.627413 30089 gateway_shared_intf.go:842] Adding endpointslice router-internal-default-29hv8 in namespace openshift-ingress 2026-01-20T10:56:53.627448809+00:00 stderr F I0120 10:56:53.627431 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress/router-internal-default-29hv8 took: 18.91µs 
2026-01-20T10:56:53.627448809+00:00 stderr F I0120 10:56:53.627444 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-2js9r 2026-01-20T10:56:53.627458359+00:00 stderr F I0120 10:56:53.627453 30089 gateway_shared_intf.go:842] Adding endpointslice machine-api-operator-2js9r in namespace openshift-machine-api 2026-01-20T10:56:53.627488610+00:00 stderr F I0120 10:56:53.627471 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-2js9r took: 19.38µs 2026-01-20T10:56:53.627488610+00:00 stderr F I0120 10:56:53.627484 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-controller-7t8hc 2026-01-20T10:56:53.627510921+00:00 stderr F I0120 10:56:53.627493 30089 gateway_shared_intf.go:842] Adding endpointslice machine-config-controller-7t8hc in namespace openshift-machine-config-operator 2026-01-20T10:56:53.627536341+00:00 stderr F I0120 10:56:53.627518 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-controller-7t8hc took: 25.741µs 2026-01-20T10:56:53.627536341+00:00 stderr F I0120 10:56:53.627529 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-webhook-x4gjx 2026-01-20T10:56:53.627559402+00:00 stderr F I0120 10:56:53.627538 30089 gateway_shared_intf.go:842] Adding endpointslice machine-api-operator-webhook-x4gjx in namespace openshift-machine-api 2026-01-20T10:56:53.627566752+00:00 stderr F I0120 10:56:53.627556 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-webhook-x4gjx took: 18.591µs 2026-01-20T10:56:53.627575552+00:00 stderr F I0120 10:56:53.627565 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway 
openshift-machine-config-operator/machine-config-operator-p8xmw 2026-01-20T10:56:53.627583172+00:00 stderr F I0120 10:56:53.627575 30089 gateway_shared_intf.go:842] Adding endpointslice machine-config-operator-p8xmw in namespace openshift-machine-config-operator 2026-01-20T10:56:53.627623384+00:00 stderr F I0120 10:56:53.627594 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-operator-p8xmw took: 19.631µs 2026-01-20T10:56:53.627623384+00:00 stderr F I0120 10:56:53.627607 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-route-controller-manager/route-controller-manager-64jvm 2026-01-20T10:56:53.627623384+00:00 stderr F I0120 10:56:53.627617 30089 gateway_shared_intf.go:842] Adding endpointslice route-controller-manager-64jvm in namespace openshift-route-controller-manager 2026-01-20T10:56:53.627654394+00:00 stderr F I0120 10:56:53.627636 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-route-controller-manager/route-controller-manager-64jvm took: 19.901µs 2026-01-20T10:56:53.627654394+00:00 stderr F I0120 10:56:53.627648 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-controller-manager/controller-manager-kxmft 2026-01-20T10:56:53.627662065+00:00 stderr F I0120 10:56:53.627655 30089 gateway_shared_intf.go:842] Adding endpointslice controller-manager-kxmft in namespace openshift-controller-manager 2026-01-20T10:56:53.627684145+00:00 stderr F I0120 10:56:53.627668 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-controller-manager/controller-manager-kxmft took: 13.691µs 2026-01-20T10:56:53.627684145+00:00 stderr F I0120 10:56:53.627678 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-apiserver-operator/metrics-kbv55 2026-01-20T10:56:53.627691695+00:00 stderr F I0120 10:56:53.627684 30089 gateway_shared_intf.go:842] Adding 
endpointslice metrics-kbv55 in namespace openshift-kube-apiserver-operator 2026-01-20T10:56:53.627715186+00:00 stderr F I0120 10:56:53.627698 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-apiserver-operator/metrics-kbv55 took: 14.61µs 2026-01-20T10:56:53.627715186+00:00 stderr F I0120 10:56:53.627708 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-etcd-operator/metrics-z62zm 2026-01-20T10:56:53.627722716+00:00 stderr F I0120 10:56:53.627715 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-z62zm in namespace openshift-etcd-operator 2026-01-20T10:56:53.627744127+00:00 stderr F I0120 10:56:53.627728 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-etcd-operator/metrics-z62zm took: 13.65µs 2026-01-20T10:56:53.627744127+00:00 stderr F I0120 10:56:53.627738 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-apiserver/apiserver-5mvtf 2026-01-20T10:56:53.627754127+00:00 stderr F I0120 10:56:53.627744 30089 gateway_shared_intf.go:842] Adding endpointslice apiserver-5mvtf in namespace openshift-kube-apiserver 2026-01-20T10:56:53.627764837+00:00 stderr F I0120 10:56:53.627757 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-apiserver/apiserver-5mvtf took: 13.43µs 2026-01-20T10:56:53.627788158+00:00 stderr F I0120 10:56:53.627770 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/cluster-autoscaler-operator-r4g5l 2026-01-20T10:56:53.627788158+00:00 stderr F I0120 10:56:53.627781 30089 gateway_shared_intf.go:842] Adding endpointslice cluster-autoscaler-operator-r4g5l in namespace openshift-machine-api 2026-01-20T10:56:53.627811218+00:00 stderr F I0120 10:56:53.627794 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/cluster-autoscaler-operator-r4g5l took: 13.4µs 2026-01-20T10:56:53.627811218+00:00 
stderr F I0120 10:56:53.627805 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p 2026-01-20T10:56:53.627818999+00:00 stderr F I0120 10:56:53.627812 30089 gateway_shared_intf.go:842] Adding endpointslice package-server-manager-metrics-mq66p in namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.627847639+00:00 stderr F I0120 10:56:53.627829 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p took: 17.611µs 2026-01-20T10:56:53.627847639+00:00 stderr F I0120 10:56:53.627843 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-dns/dns-default-lctlx 2026-01-20T10:56:53.627858450+00:00 stderr F I0120 10:56:53.627851 30089 gateway_shared_intf.go:842] Adding endpointslice dns-default-lctlx in namespace openshift-dns 2026-01-20T10:56:53.627886690+00:00 stderr F I0120 10:56:53.627869 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-dns/dns-default-lctlx took: 17.991µs 2026-01-20T10:56:53.627886690+00:00 stderr F I0120 10:56:53.627882 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-etcd/etcd-8wmzv 2026-01-20T10:56:53.627898971+00:00 stderr F I0120 10:56:53.627891 30089 gateway_shared_intf.go:842] Adding endpointslice etcd-8wmzv in namespace openshift-etcd 2026-01-20T10:56:53.627927921+00:00 stderr F I0120 10:56:53.627910 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-etcd/etcd-8wmzv took: 19.781µs 2026-01-20T10:56:53.627927921+00:00 stderr F I0120 10:56:53.627921 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-controllers-j9jjt 2026-01-20T10:56:53.627935762+00:00 stderr F I0120 10:56:53.627928 30089 gateway_shared_intf.go:842] Adding endpointslice 
machine-api-controllers-j9jjt in namespace openshift-machine-api 2026-01-20T10:56:53.627973173+00:00 stderr F I0120 10:56:53.627956 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-controllers-j9jjt took: 28.621µs 2026-01-20T10:56:53.627973173+00:00 stderr F I0120 10:56:53.627968 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/certified-operators-bw9bv 2026-01-20T10:56:53.627982133+00:00 stderr F I0120 10:56:53.627976 30089 gateway_shared_intf.go:842] Adding endpointslice certified-operators-bw9bv in namespace openshift-marketplace 2026-01-20T10:56:53.628012334+00:00 stderr F I0120 10:56:53.627994 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/certified-operators-bw9bv took: 18.06µs 2026-01-20T10:56:53.628012334+00:00 stderr F I0120 10:56:53.628008 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 2026-01-20T10:56:53.628023844+00:00 stderr F I0120 10:56:53.628016 30089 gateway_shared_intf.go:842] Adding endpointslice olm-operator-metrics-vql58 in namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.628082686+00:00 stderr F I0120 10:56:53.628037 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 took: 21.3µs 2026-01-20T10:56:53.628082686+00:00 stderr F I0120 10:56:53.628051 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver-operator/metrics-sgtfh 2026-01-20T10:56:53.628094096+00:00 stderr F I0120 10:56:53.628058 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-sgtfh in namespace openshift-apiserver-operator 2026-01-20T10:56:53.628121647+00:00 stderr F I0120 10:56:53.628104 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway 
openshift-apiserver-operator/metrics-sgtfh took: 45.501µs 2026-01-20T10:56:53.628121647+00:00 stderr F I0120 10:56:53.628115 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-image-registry/image-registry-cfsrx 2026-01-20T10:56:53.628129217+00:00 stderr F I0120 10:56:53.628121 30089 gateway_shared_intf.go:842] Adding endpointslice image-registry-cfsrx in namespace openshift-image-registry 2026-01-20T10:56:53.628151467+00:00 stderr F I0120 10:56:53.628134 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-image-registry/image-registry-cfsrx took: 13.231µs 2026-01-20T10:56:53.628151467+00:00 stderr F I0120 10:56:53.628144 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/redhat-marketplace-8k279 2026-01-20T10:56:53.628159058+00:00 stderr F I0120 10:56:53.628151 30089 gateway_shared_intf.go:842] Adding endpointslice redhat-marketplace-8k279 in namespace openshift-marketplace 2026-01-20T10:56:53.628180768+00:00 stderr F I0120 10:56:53.628165 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/redhat-marketplace-8k279 took: 13.851µs 2026-01-20T10:56:53.628180768+00:00 stderr F I0120 10:56:53.628174 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress-operator/metrics-cd48g 2026-01-20T10:56:53.628188508+00:00 stderr F I0120 10:56:53.628180 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-cd48g in namespace openshift-ingress-operator 2026-01-20T10:56:53.628211290+00:00 stderr F I0120 10:56:53.628195 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress-operator/metrics-cd48g took: 13.36µs 2026-01-20T10:56:53.628211290+00:00 stderr F I0120 10:56:53.628204 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-scheduler-operator/metrics-zk8d6 2026-01-20T10:56:53.628219020+00:00 stderr F I0120 
10:56:53.628211 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-zk8d6 in namespace openshift-kube-scheduler-operator 2026-01-20T10:56:53.628263981+00:00 stderr F I0120 10:56:53.628224 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-scheduler-operator/metrics-zk8d6 took: 13.29µs 2026-01-20T10:56:53.628263981+00:00 stderr F I0120 10:56:53.628251 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-storage-version-migrator-operator/metrics-zbxs7 2026-01-20T10:56:53.628263981+00:00 stderr F I0120 10:56:53.628260 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-zbxs7 in namespace openshift-kube-storage-version-migrator-operator 2026-01-20T10:56:53.628290702+00:00 stderr F I0120 10:56:53.628273 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-storage-version-migrator-operator/metrics-zbxs7 took: 14.54µs 2026-01-20T10:56:53.628290702+00:00 stderr F I0120 10:56:53.628283 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/redhat-operators-47g6l 2026-01-20T10:56:53.628298472+00:00 stderr F I0120 10:56:53.628290 30089 gateway_shared_intf.go:842] Adding endpointslice redhat-operators-47g6l in namespace openshift-marketplace 2026-01-20T10:56:53.628309012+00:00 stderr F I0120 10:56:53.628303 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/redhat-operators-47g6l took: 13.59µs 2026-01-20T10:56:53.628316153+00:00 stderr F I0120 10:56:53.628309 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway default/kubernetes 2026-01-20T10:56:53.628323233+00:00 stderr F I0120 10:56:53.628315 30089 gateway_shared_intf.go:842] Adding endpointslice kubernetes in namespace default 2026-01-20T10:56:53.628331873+00:00 stderr F I0120 10:56:53.628326 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway default/kubernetes took: 11.16µs 
2026-01-20T10:56:53.628338813+00:00 stderr F I0120 10:56:53.628331 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-config-operator/metrics-tw775 2026-01-20T10:56:53.628345883+00:00 stderr F I0120 10:56:53.628338 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-tw775 in namespace openshift-config-operator 2026-01-20T10:56:53.628369574+00:00 stderr F I0120 10:56:53.628352 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-config-operator/metrics-tw775 took: 14.84µs 2026-01-20T10:56:53.628369574+00:00 stderr F I0120 10:56:53.628362 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console-operator/webhook-b7j7h 2026-01-20T10:56:53.628377664+00:00 stderr F I0120 10:56:53.628369 30089 gateway_shared_intf.go:842] Adding endpointslice webhook-b7j7h in namespace openshift-console-operator 2026-01-20T10:56:53.628394975+00:00 stderr F I0120 10:56:53.628383 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console-operator/webhook-b7j7h took: 14.37µs 2026-01-20T10:56:53.628394975+00:00 stderr F I0120 10:56:53.628390 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console/downloads-zsr67 2026-01-20T10:56:53.628402635+00:00 stderr F I0120 10:56:53.628396 30089 gateway_shared_intf.go:842] Adding endpointslice downloads-zsr67 in namespace openshift-console 2026-01-20T10:56:53.628427686+00:00 stderr F I0120 10:56:53.628410 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console/downloads-zsr67 took: 14.33µs 2026-01-20T10:56:53.628427686+00:00 stderr F I0120 10:56:53.628421 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-controller-manager/kube-controller-manager-fcp2k 2026-01-20T10:56:53.628435376+00:00 stderr F I0120 10:56:53.628429 30089 gateway_shared_intf.go:842] Adding endpointslice kube-controller-manager-fcp2k in 
namespace openshift-kube-controller-manager 2026-01-20T10:56:53.628466957+00:00 stderr F I0120 10:56:53.628448 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-controller-manager/kube-controller-manager-fcp2k took: 19.561µs 2026-01-20T10:56:53.628466957+00:00 stderr F I0120 10:56:53.628462 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-scheduler/scheduler-4wbzh 2026-01-20T10:56:53.628489537+00:00 stderr F I0120 10:56:53.628471 30089 gateway_shared_intf.go:842] Adding endpointslice scheduler-4wbzh in namespace openshift-kube-scheduler 2026-01-20T10:56:53.628512068+00:00 stderr F I0120 10:56:53.628495 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-scheduler/scheduler-4wbzh took: 24.64µs 2026-01-20T10:56:53.628512068+00:00 stderr F I0120 10:56:53.628506 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-machine-webhook-xj4tp 2026-01-20T10:56:53.628519828+00:00 stderr F I0120 10:56:53.628512 30089 gateway_shared_intf.go:842] Adding endpointslice machine-api-operator-machine-webhook-xj4tp in namespace openshift-machine-api 2026-01-20T10:56:53.628530958+00:00 stderr F I0120 10:56:53.628525 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-machine-webhook-xj4tp took: 13.18µs 2026-01-20T10:56:53.628538068+00:00 stderr F I0120 10:56:53.628531 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-service-ca-operator/metrics-wrkrj 2026-01-20T10:56:53.628545119+00:00 stderr F I0120 10:56:53.628537 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-wrkrj in namespace openshift-service-ca-operator 2026-01-20T10:56:53.628567179+00:00 stderr F I0120 10:56:53.628550 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-service-ca-operator/metrics-wrkrj took: 12.931µs 
2026-01-20T10:56:53.628567179+00:00 stderr F I0120 10:56:53.628560 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway cert-manager/cert-manager-cainjector-fdtx5 2026-01-20T10:56:53.628574899+00:00 stderr F I0120 10:56:53.628567 30089 gateway_shared_intf.go:842] Adding endpointslice cert-manager-cainjector-fdtx5 in namespace cert-manager 2026-01-20T10:56:53.628599800+00:00 stderr F I0120 10:56:53.628583 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway cert-manager/cert-manager-cainjector-fdtx5 took: 15.18µs 2026-01-20T10:56:53.628599800+00:00 stderr F I0120 10:56:53.628593 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-cluster-version/cluster-version-operator-qt7zf 2026-01-20T10:56:53.628607540+00:00 stderr F I0120 10:56:53.628600 30089 gateway_shared_intf.go:842] Adding endpointslice cluster-version-operator-qt7zf in namespace openshift-cluster-version 2026-01-20T10:56:53.628635841+00:00 stderr F I0120 10:56:53.628619 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-cluster-version/cluster-version-operator-qt7zf took: 19.43µs 2026-01-20T10:56:53.628635841+00:00 stderr F I0120 10:56:53.628629 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/packageserver-service-tlm8t 2026-01-20T10:56:53.628643631+00:00 stderr F I0120 10:56:53.628636 30089 gateway_shared_intf.go:842] Adding endpointslice packageserver-service-tlm8t in namespace openshift-operator-lifecycle-manager 2026-01-20T10:56:53.628701733+00:00 stderr F I0120 10:56:53.628649 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/packageserver-service-tlm8t took: 13.81µs 2026-01-20T10:56:53.628701733+00:00 stderr F I0120 10:56:53.628659 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway cert-manager/cert-manager-webhook-7k8gk 
2026-01-20T10:56:53.628701733+00:00 stderr F I0120 10:56:53.628665 30089 gateway_shared_intf.go:842] Adding endpointslice cert-manager-webhook-7k8gk in namespace cert-manager 2026-01-20T10:56:53.628701733+00:00 stderr F I0120 10:56:53.628679 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway cert-manager/cert-manager-webhook-7k8gk took: 14.63µs 2026-01-20T10:56:53.628701733+00:00 stderr F I0120 10:56:53.628685 30089 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-controller-manager-operator/metrics-psf8p 2026-01-20T10:56:53.628701733+00:00 stderr F I0120 10:56:53.628692 30089 gateway_shared_intf.go:842] Adding endpointslice metrics-psf8p in namespace openshift-controller-manager-operator 2026-01-20T10:56:53.628712323+00:00 stderr F I0120 10:56:53.628704 30089 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-controller-manager-operator/metrics-psf8p took: 13.541µs 2026-01-20T10:56:53.628719413+00:00 stderr F I0120 10:56:53.628713 30089 factory.go:988] Added *v1.EndpointSlice event handler 12 2026-01-20T10:56:53.628778645+00:00 stderr F I0120 10:56:53.628757 30089 gateway.go:244] Spawning Conntrack Rule Check Thread 2026-01-20T10:56:53.628786825+00:00 stderr F I0120 10:56:53.628780 30089 default_node_network_controller.go:980] Gateway and management port readiness took 193.992792ms 2026-01-20T10:56:53.628959119+00:00 stderr F I0120 10:56:53.628930 30089 ovs.go:159] Exec(53): /usr/bin/ovs-ofctl -O OpenFlow13 --bundle replace-flows br-ex - 2026-01-20T10:56:53.629668028+00:00 stderr F I0120 10:56:53.629635 30089 ovs.go:159] Exec(54): /usr/bin/ovs-vsctl --timeout=15 --if-exists del-br br-ext 2026-01-20T10:56:53.629755470+00:00 stderr F I0120 10:56:53.629715 30089 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 11 Dst: 10.217.4.0/23 Src: 169.254.169.2 Gw: 169.254.169.4 Flags: [] Table: 0 Realm: 0} 2026-01-20T10:56:53.630299254+00:00 stderr F I0120 10:56:53.630273 30089 
route_manager.go:110] Route Manager: completed adding route: {Ifindex: 11 Dst: 10.217.4.0/23 Src: 169.254.169.2 Gw: 169.254.169.4 Flags: [] Table: 254 Realm: 0} 2026-01-20T10:56:53.633040327+00:00 stderr F I0120 10:56:53.633005 30089 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 127.0.0.1/8 lo 2026-01-20T10:56:53.633072328+00:00 stderr F I0120 10:56:53.633048 30089 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 10.217.0.2/23 ovn-k8s-mp0 2026-01-20T10:56:53.633099428+00:00 stderr F I0120 10:56:53.633081 30089 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 169.254.169.2/29 br-ex 2026-01-20T10:56:53.633134479+00:00 stderr F I0120 10:56:53.633113 30089 node_ip_handler_linux.go:141] Node IP manager is running 2026-01-20T10:56:53.637897376+00:00 stderr F I0120 10:56:53.637841 30089 ovs.go:162] Exec(54): stdout: "" 2026-01-20T10:56:53.637897376+00:00 stderr F I0120 10:56:53.637877 30089 ovs.go:163] Exec(54): stderr: "" 2026-01-20T10:56:53.637914506+00:00 stderr F I0120 10:56:53.637900 30089 ovs.go:159] Exec(55): /usr/bin/ovs-vsctl --timeout=15 --if-exists del-port br-int int 2026-01-20T10:56:53.647361465+00:00 stderr F I0120 10:56:53.647306 30089 ovs.go:162] Exec(55): stdout: "" 2026-01-20T10:56:53.647361465+00:00 stderr F I0120 10:56:53.647342 30089 ovs.go:163] Exec(55): stderr: "" 2026-01-20T10:56:53.647668583+00:00 stderr F I0120 10:56:53.647638 30089 healthcheck_node.go:124] "Starting node proxy healthz server" address="0.0.0.0:10256" 2026-01-20T10:56:53.648517027+00:00 stderr F I0120 10:56:53.648469 30089 egressservice_node.go:84] Setting up event handlers for Egress Services 2026-01-20T10:56:53.648748293+00:00 stderr F W0120 10:56:53.648710 30089 egressip_healthcheck.go:74] Health checking using insecure connection 2026-01-20T10:56:53.648788534+00:00 stderr F I0120 10:56:53.648763 30089 egressservice_node.go:172] Starting Egress Services Controller 2026-01-20T10:56:53.648818134+00:00 stderr 
F I0120 10:56:53.648795 30089 shared_informer.go:311] Waiting for caches to sync for egressservices 2026-01-20T10:56:53.648818134+00:00 stderr F I0120 10:56:53.648806 30089 egressip_healthcheck.go:107] Starting Egress IP Health Server on 10.217.0.2:9107 2026-01-20T10:56:53.648899647+00:00 stderr F I0120 10:56:53.648809 30089 shared_informer.go:318] Caches are synced for egressservices 2026-01-20T10:56:53.648973059+00:00 stderr F I0120 10:56:53.648947 30089 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2026-01-20T10:56:53.648973059+00:00 stderr F I0120 10:56:53.648965 30089 shared_informer.go:318] Caches are synced for egressservices_services 2026-01-20T10:56:53.648982189+00:00 stderr F I0120 10:56:53.648973 30089 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2026-01-20T10:56:53.648982189+00:00 stderr F I0120 10:56:53.648978 30089 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2026-01-20T10:56:53.648989979+00:00 stderr F I0120 10:56:53.648985 30089 egressservice_node.go:186] Repairing Egress Services 2026-01-20T10:56:53.661103589+00:00 stderr F I0120 10:56:53.656858 30089 iptables.go:108] Creating table: nat chain: OVN-KUBE-EGRESS-SVC 2026-01-20T10:56:53.661827969+00:00 stderr F W0120 10:56:53.661800 30089 management-port_linux.go:495] missing management port nat rule in chain OVN-KUBE-SNAT-MGMTPORT, adding it 2026-01-20T10:56:53.664862239+00:00 stderr F I0120 10:56:53.664815 30089 node_controller.go:43] Starting Admin Policy Based Route Node Controller 2026-01-20T10:56:53.664862239+00:00 stderr F I0120 10:56:53.664835 30089 external_controller.go:276] Starting Admin Policy Based Route Controller 2026-01-20T10:56:53.670113317+00:00 stderr F I0120 10:56:53.670078 30089 egressip.go:183] Starting Egress IP Controller 2026-01-20T10:56:53.670234141+00:00 stderr F I0120 10:56:53.670210 30089 shared_informer.go:311] Waiting for caches to sync for eippod 
2026-01-20T10:56:53.670234141+00:00 stderr F I0120 10:56:53.670221 30089 shared_informer.go:318] Caches are synced for eippod 2026-01-20T10:56:53.670234141+00:00 stderr F I0120 10:56:53.670231 30089 shared_informer.go:311] Waiting for caches to sync for eipeip 2026-01-20T10:56:53.670245031+00:00 stderr F I0120 10:56:53.670235 30089 shared_informer.go:318] Caches are synced for eipeip 2026-01-20T10:56:53.670245031+00:00 stderr F I0120 10:56:53.670242 30089 shared_informer.go:311] Waiting for caches to sync for eipnamespace 2026-01-20T10:56:53.670252522+00:00 stderr F I0120 10:56:53.670245 30089 shared_informer.go:318] Caches are synced for eipnamespace 2026-01-20T10:56:53.670358714+00:00 stderr F I0120 10:56:53.670320 30089 iptables_manager.go:96] IPTables manager: own chain: table nat, chain OVN-KUBE-EGRESS-IP-MULTI-NIC, protocol IPv4 2026-01-20T10:56:53.672347956+00:00 stderr F I0120 10:56:53.672312 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.675499860+00:00 stderr F I0120 10:56:53.675453 30089 iptables_manager.go:164] IPTables manager: ensure rule - table nat, chain POSTROUTING, protocol IPv4, rule: {[-j OVN-KUBE-EGRESS-IP-MULTI-NIC]} 2026-01-20T10:56:53.677489212+00:00 stderr F I0120 10:56:53.677446 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.683482771+00:00 stderr F I0120 10:56:53.683443 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.686826999+00:00 stderr F I0120 10:56:53.686782 30089 iptables_manager.go:164] IPTables manager: ensure rule - table mangle, chain PREROUTING, protocol IPv4, rule: {[-m mark --mark 0 -j CONNMARK --restore-mark]} 2026-01-20T10:56:53.688923345+00:00 stderr F I0120 10:56:53.688855 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.693793724+00:00 stderr F I0120 10:56:53.693758 30089 iptables.go:358] "Running" 
command="iptables-save" arguments=["-t","mangle"] 2026-01-20T10:56:53.698863688+00:00 stderr F I0120 10:56:53.698812 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.721300401+00:00 stderr F I0120 10:56:53.721223 30089 iptables_manager.go:164] IPTables manager: ensure rule - table mangle, chain PREROUTING, protocol IPv4, rule: {[-m mark --mark 1008 -j CONNMARK --save-mark]} 2026-01-20T10:56:53.724115176+00:00 stderr F I0120 10:56:53.723811 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.731472480+00:00 stderr F I0120 10:56:53.731412 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.738658120+00:00 stderr F I0120 10:56:53.738603 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","mangle"] 2026-01-20T10:56:53.787049230+00:00 stderr F I0120 10:56:53.786983 30089 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.790764059+00:00 stderr F I0120 10:56:53.790732 30089 iptables.go:358] "Running" command="ip6tables-save" arguments=["-t","nat"] 2026-01-20T10:56:53.793508151+00:00 stderr F I0120 10:56:53.793479 30089 link_network_manager.go:116] Link manager is running 2026-01-20T10:56:53.793508151+00:00 stderr F I0120 10:56:53.793503 30089 default_node_network_controller.go:1128] Default node network controller initialized and ready. 
2026-01-20T10:56:53.793569503+00:00 stderr F I0120 10:56:53.793536 30089 ovspinning_linux.go:39] OVS CPU affinity pinning disabled 2026-01-20T10:57:09.875898532+00:00 stderr F I0120 10:57:09.875532 30089 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 1 items received 2026-01-20T10:57:09.876860028+00:00 stderr F I0120 10:57:09.876180 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=43189&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.891749282+00:00 stderr F I0120 10:57:09.891679 30089 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Namespace total 0 items received 2026-01-20T10:57:09.892477551+00:00 stderr F I0120 10:57:09.892391 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=43161&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.912016717+00:00 stderr F I0120 10:57:09.911473 30089 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 0 items received 2026-01-20T10:57:09.912016717+00:00 stderr F I0120 10:57:09.911908 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=43159&timeout=9m53s&timeoutSeconds=593&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.919710891+00:00 stderr F I0120 10:57:09.919648 30089 reflector.go:800] 
k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 1 items received 2026-01-20T10:57:09.920449251+00:00 stderr F I0120 10:57:09.919956 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=43258&timeout=5m48s&timeoutSeconds=348&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.924048386+00:00 stderr F I0120 10:57:09.922761 30089 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 6 items received 2026-01-20T10:57:09.924048386+00:00 stderr F I0120 10:57:09.923003 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43251&timeout=9m20s&timeoutSeconds=560&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.928789671+00:00 stderr F I0120 10:57:09.927764 30089 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.NetworkPolicy total 0 items received 2026-01-20T10:57:09.928789671+00:00 stderr F I0120 10:57:09.927958 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=43159&timeout=5m35s&timeoutSeconds=335&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.952545640+00:00 stderr F I0120 10:57:09.952465 30089 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 0 items received 2026-01-20T10:57:09.953204897+00:00 stderr F I0120 10:57:09.952896 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=43159&timeout=9m30s&timeoutSeconds=570&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.954335347+00:00 stderr F I0120 10:57:09.953620 30089 reflector.go:800] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: Watch close - *v1alpha1.BaselineAdminNetworkPolicy total 0 items received 2026-01-20T10:57:09.955415016+00:00 stderr F I0120 10:57:09.954851 30089 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=43178&timeout=8m15s&timeoutSeconds=495&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.963136810+00:00 stderr F I0120 10:57:09.962846 30089 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 0 items received 2026-01-20T10:57:09.963550661+00:00 stderr F I0120 10:57:09.963513 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned 
Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=43181&timeout=8m0s&timeoutSeconds=480&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.975357483+00:00 stderr F I0120 10:57:09.975287 30089 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 0 items received 2026-01-20T10:57:09.975759663+00:00 stderr F I0120 10:57:09.975729 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=43178&timeout=8m47s&timeoutSeconds=527&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.989596609+00:00 stderr F I0120 10:57:09.989375 30089 reflector.go:800] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: Watch close - *v1alpha1.AdminNetworkPolicy total 0 items received 2026-01-20T10:57:09.989894237+00:00 stderr F I0120 10:57:09.989831 30089 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=43156&timeout=6m12s&timeoutSeconds=372&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:09.994200182+00:00 stderr F I0120 10:57:09.994038 30089 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 0 items received 
2026-01-20T10:57:09.995124496+00:00 stderr F I0120 10:57:09.994560 30089 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=43159&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.000580790+00:00 stderr F I0120 10:57:10.000461 30089 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 0 items received 2026-01-20T10:57:10.000956250+00:00 stderr F I0120 10:57:10.000923 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=43162&timeout=8m5s&timeoutSeconds=485&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.001180566+00:00 stderr F I0120 10:57:10.001124 30089 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 0 items received 2026-01-20T10:57:10.001499144+00:00 stderr F I0120 10:57:10.001462 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=43183&timeout=6m41s&timeoutSeconds=401&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - 
backing off 2026-01-20T10:57:10.010511082+00:00 stderr F I0120 10:57:10.010266 30089 cni.go:258] [openshift-kube-apiserver/installer-13-crc ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b network default NAD default] DEL starting CNI request [openshift-kube-apiserver/installer-13-crc ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b network default NAD default] 2026-01-20T10:57:10.197163419+00:00 stderr F I0120 10:57:10.197088 30089 cni.go:279] [openshift-kube-apiserver/installer-13-crc ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b network default NAD default] DEL finished CNI request [openshift-kube-apiserver/installer-13-crc ae5d379adacec876e9afd7eea1b6de92d435a2f11553ad2dc2539de36a462c2b network default NAD default], result "{\"dns\":{}}", err 2026-01-20T10:57:10.726270361+00:00 stderr F I0120 10:57:10.726144 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43251&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.771196509+00:00 stderr F I0120 10:57:10.771107 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=43159&timeout=9m11s&timeoutSeconds=551&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.876648878+00:00 stderr F I0120 10:57:10.876569 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=43181&timeout=7m23s&timeoutSeconds=443&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:10.968266800+00:00 stderr F I0120 10:57:10.968163 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=43159&timeout=9m24s&timeoutSeconds=564&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.006815660+00:00 stderr F I0120 10:57:11.006537 30089 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=43156&timeout=8m57s&timeoutSeconds=537&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.017262216+00:00 stderr F I0120 10:57:11.017187 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=43258&timeout=5m39s&timeoutSeconds=339&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.038823417+00:00 stderr F I0120 10:57:11.038750 30089 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get 
"https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=43178&timeout=9m46s&timeoutSeconds=586&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.082626125+00:00 stderr F I0120 10:57:11.082473 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=43162&timeout=8m31s&timeoutSeconds=511&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.106132117+00:00 stderr F I0120 10:57:11.106017 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=43189&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.117108576+00:00 stderr F I0120 10:57:11.117012 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=43161&timeout=9m48s&timeoutSeconds=588&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.225311318+00:00 stderr F I0120 10:57:11.225244 30089 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=43159&timeout=8m15s&timeoutSeconds=495&watch=true": dial tcp 38.102.83.220:6443: 
connect: connection refused - backing off 2026-01-20T10:57:11.317143467+00:00 stderr F I0120 10:57:11.315698 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=43159&timeout=9m6s&timeoutSeconds=546&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.489696860+00:00 stderr F I0120 10:57:11.489595 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=43178&timeout=8m9s&timeoutSeconds=489&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.524080570+00:00 stderr F I0120 10:57:11.523986 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=43183&timeout=8m43s&timeoutSeconds=523&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:12.983824163+00:00 stderr F I0120 10:57:12.983643 30089 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=43156&timeout=9m37s&timeoutSeconds=577&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 
2026-01-20T10:57:13.046340546+00:00 stderr F I0120 10:57:13.046246 30089 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=43159&timeout=9m59s&timeoutSeconds=599&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.049071868+00:00 stderr F I0120 10:57:13.048995 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=43159&timeout=5m41s&timeoutSeconds=341&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.083775015+00:00 stderr F I0120 10:57:13.083707 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=43181&timeout=9m19s&timeoutSeconds=559&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.216317891+00:00 stderr F I0120 10:57:13.216246 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=43159&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.310383138+00:00 stderr F I0120 
10:57:13.310297 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=43183&timeout=6m12s&timeoutSeconds=372&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.500525787+00:00 stderr F I0120 10:57:13.500377 30089 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=43178&timeout=7m57s&timeoutSeconds=477&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.592258793+00:00 stderr F I0120 10:57:13.592104 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=43159&timeout=8m12s&timeoutSeconds=492&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.801448655+00:00 stderr F I0120 10:57:13.801347 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43251&timeout=7m6s&timeoutSeconds=426&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.836037160+00:00 stderr F I0120 10:57:13.835929 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=43178&timeout=6m56s&timeoutSeconds=416&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.848925631+00:00 stderr F I0120 10:57:13.848548 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=43189&timeout=7m27s&timeoutSeconds=447&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:13.893263013+00:00 stderr F I0120 10:57:13.893188 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=43161&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:14.201735221+00:00 stderr F I0120 10:57:14.201659 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=43258&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:14.207287218+00:00 stderr F I0120 10:57:14.207240 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=43162&timeout=7m15s&timeoutSeconds=435&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - 
backing off 2026-01-20T10:57:16.876453324+00:00 stderr F I0120 10:57:16.876351 30089 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=43178&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:16.971848337+00:00 stderr F I0120 10:57:16.971789 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=43159&timeout=7m23s&timeoutSeconds=443&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:17.499503171+00:00 stderr F I0120 10:57:17.499397 30089 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=43159&timeout=9m40s&timeoutSeconds=580&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:17.517250990+00:00 stderr F I0120 10:57:17.517015 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=43159&timeout=6m16s&timeoutSeconds=376&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:18.150637380+00:00 stderr F I0120 10:57:18.150528 30089 reflector.go:425] 
k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43251&timeout=5m3s&timeoutSeconds=303&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:18.217783086+00:00 stderr F I0120 10:57:18.217682 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=43161&timeout=6m11s&timeoutSeconds=371&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:18.363803388+00:00 stderr F I0120 10:57:18.363681 30089 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=43156&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:18.579734748+00:00 stderr F I0120 10:57:18.579640 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=43258&timeout=5m2s&timeoutSeconds=302&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.143781225+00:00 stderr F I0120 10:57:19.143641 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=43181&timeout=8m50s&timeoutSeconds=530&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.156430419+00:00 stderr F I0120 10:57:19.156360 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=43183&timeout=6m11s&timeoutSeconds=371&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.160907828+00:00 stderr F I0120 10:57:19.160858 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=43159&timeout=6m19s&timeoutSeconds=379&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.235246223+00:00 stderr F I0120 10:57:19.235165 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=43162&timeout=9m23s&timeoutSeconds=563&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.717622121+00:00 stderr F I0120 10:57:19.717544 30089 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get 
"https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=43189&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.904403169+00:00 stderr F I0120 10:57:19.904321 30089 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=43178&timeout=8m54s&timeoutSeconds=534&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:25.641767526+00:00 stderr F I0120 10:57:25.641036 30089 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 43251 (43263) 2026-01-20T10:57:25.951554129+00:00 stderr F I0120 10:57:25.950875 30089 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace closed with: too old resource version: 43161 (43263) 2026-01-20T10:57:26.162626340+00:00 stderr F I0120 10:57:26.162560 30089 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 43162 (43268) 2026-01-20T10:57:26.664262856+00:00 stderr F I0120 10:57:26.664208 30089 reflector.go:449] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy closed with: too old resource version: 43178 (43291) 2026-01-20T10:57:27.565085699+00:00 stderr F I0120 10:57:27.565017 30089 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 43159 (43291) 
2026-01-20T10:57:27.787108801+00:00 stderr F I0120 10:57:27.787031 30089 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 43181 (43291) 2026-01-20T10:57:28.030759454+00:00 stderr F I0120 10:57:28.030688 30089 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 43183 (43291) 2026-01-20T10:57:28.984457345+00:00 stderr F I0120 10:57:28.984345 30089 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods?labelSelector=app%3Dovnkube-master&resourceVersion=0 2026-01-20T10:57:28.984527617+00:00 stderr F I0120 10:57:28.984348 30089 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=43159&timeout=9m11s&timeoutSeconds=551&watch=true 2026-01-20T10:57:28.987114565+00:00 stderr F I0120 10:57:28.986407 30089 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy closed with: too old resource version: 43159 (43263) 2026-01-20T10:57:29.925097440+00:00 stderr F I0120 10:57:29.925000 30089 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 43258 (43263) 2026-01-20T10:57:30.107397071+00:00 stderr F I0120 10:57:30.107321 30089 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=43159&timeout=7m37s&timeoutSeconds=457&watch=true 2026-01-20T10:57:30.109205639+00:00 stderr F I0120 10:57:30.109157 30089 
reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 43159 (43263) 2026-01-20T10:57:30.224834147+00:00 stderr F I0120 10:57:30.224759 30089 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 43159 (43291) 2026-01-20T10:57:30.625475522+00:00 stderr F I0120 10:57:30.625394 30089 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 43189 (43263) 2026-01-20T10:57:30.754834713+00:00 stderr F I0120 10:57:30.754779 30089 reflector.go:449] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy closed with: too old resource version: 43156 (43308) 2026-01-20T10:57:31.067869771+00:00 stderr F I0120 10:57:31.067778 30089 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 43178 (43268) 2026-01-20T10:57:38.612957934+00:00 stderr F I0120 10:57:38.612897 30089 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:38.640422020+00:00 stderr F I0120 10:57:38.640227 30089 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:38.642158326+00:00 stderr F I0120 10:57:38.642034 30089 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-12-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.642158326+00:00 stderr F I0120 10:57:38.642084 30089 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-12-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2026-01-20T10:57:38.642158326+00:00 stderr F I0120 10:57:38.642049 30089 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-13-crc of type *v1.Pod in terminal state (e.g. completed) during update event: will remove it 2026-01-20T10:57:38.642158326+00:00 stderr F I0120 10:57:38.642130 30089 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-13-crc 2026-01-20T10:57:38.642158326+00:00 stderr F I0120 10:57:38.642129 30089 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2026-01-20T10:57:38.642204257+00:00 stderr F I0120 10:57:38.642135 30089 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-9-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.642204257+00:00 stderr F I0120 10:57:38.642167 30089 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-9-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2026-01-20T10:57:38.642204257+00:00 stderr F I0120 10:57:38.642174 30089 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2026-01-20T10:57:38.642204257+00:00 stderr F I0120 10:57:38.642178 30089 handler.go:411] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2026-01-20T10:57:38.642243378+00:00 stderr F I0120 10:57:38.642202 30089 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2026-01-20T10:57:38.642243378+00:00 stderr F I0120 10:57:38.642204 30089 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2026-01-20T10:57:38.642243378+00:00 stderr F I0120 10:57:38.642212 30089 handler.go:411] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2026-01-20T10:57:38.642243378+00:00 stderr F I0120 10:57:38.642228 30089 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2026-01-20T10:57:38.643910063+00:00 stderr F I0120 10:57:38.643833 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.643910063+00:00 stderr F I0120 10:57:38.643881 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644302773+00:00 stderr F I0120 10:57:38.644261 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-11-crc of type *v1.Pod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644302773+00:00 stderr F I0120 10:57:38.644286 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644331534+00:00 stderr F I0120 10:57:38.644306 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644331534+00:00 stderr F I0120 10:57:38.644320 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644501248+00:00 stderr F I0120 10:57:38.644455 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644520029+00:00 stderr F I0120 10:57:38.644506 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644909659+00:00 stderr F I0120 10:57:38.644851 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644909659+00:00 stderr F I0120 10:57:38.644879 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644972010+00:00 stderr F I0120 10:57:38.644921 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.644972010+00:00 stderr F I0120 10:57:38.644964 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.645059993+00:00 stderr F I0120 10:57:38.645020 30089 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-7-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.645059993+00:00 stderr F I0120 10:57:38.645044 30089 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-7-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.645311489+00:00 stderr F I0120 10:57:38.645238 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.645398471+00:00 stderr F I0120 10:57:38.645371 30089 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.650531977+00:00 stderr F I0120 10:57:38.649913 30089 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *v1.Pod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2026-01-20T10:57:38.650531977+00:00 stderr F I0120 10:57:38.649943 30089 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.650531977+00:00 stderr F I0120 10:57:38.649934 30089 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.650531977+00:00 stderr F I0120 10:57:38.649971 30089 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.650531977+00:00 stderr F I0120 10:57:38.649974 30089 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.650531977+00:00 stderr F I0120 10:57:38.650002 30089 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.650580519+00:00 stderr F I0120 10:57:38.650544 30089 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-13-crc, ips: 10.217.0.38 2026-01-20T10:57:38.650580519+00:00 stderr F I0120 10:57:38.650572 30089 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-13-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during update event: will remove it 2026-01-20T10:57:38.650637460+00:00 stderr F I0120 10:57:38.650595 30089 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-8-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:38.650637460+00:00 stderr F I0120 10:57:38.650609 30089 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2026-01-20T10:57:41.850168563+00:00 stderr F I0120 10:57:41.850086 30089 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:41.851810477+00:00 stderr F I0120 10:57:41.851772 30089 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:45.951955826+00:00 stderr F I0120 10:57:45.951896 30089 reflector.go:325] Listing and watching *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:57:45.954393541+00:00 stderr F I0120 10:57:45.954376 30089 reflector.go:351] Caches populated for *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:57:46.851930907+00:00 stderr F I0120 10:57:46.851833 30089 reflector.go:325] Listing and watching *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:46.854530316+00:00 stderr F I0120 10:57:46.854500 30089 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:47.572744989+00:00 stderr F I0120 10:57:47.572696 30089 reflector.go:325] Listing 
and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:47.575137972+00:00 stderr F I0120 10:57:47.575116 30089 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:48.596362858+00:00 stderr F I0120 10:57:48.596247 30089 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:48.599444070+00:00 stderr F I0120 10:57:48.599400 30089 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:48.976356328+00:00 stderr F I0120 10:57:48.976267 30089 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:48.977747604+00:00 stderr F I0120 10:57:48.977679 30089 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:50.259141621+00:00 stderr F I0120 10:57:50.259051 30089 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:50.262114310+00:00 stderr F I0120 10:57:50.262083 30089 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:50.963483097+00:00 stderr F I0120 10:57:50.963395 30089 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:50.966388924+00:00 stderr F I0120 10:57:50.966333 30089 reflector.go:351] Caches populated 
for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:51.342862241+00:00 stderr F I0120 10:57:51.342754 30089 reflector.go:325] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:51.347491192+00:00 stderr F I0120 10:57:51.347437 30089 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:51.347715738+00:00 stderr F I0120 10:57:51.347651 30089 namespace.go:144] [cert-manager] updating namespace 2026-01-20T10:57:51.347715738+00:00 stderr F I0120 10:57:51.347673 30089 namespace.go:144] [cert-manager-operator] updating namespace 2026-01-20T10:57:51.347812281+00:00 stderr F I0120 10:57:51.347769 30089 namespace.go:144] [kube-node-lease] updating namespace 2026-01-20T10:57:51.347812281+00:00 stderr F I0120 10:57:51.347787 30089 namespace.go:144] [default] updating namespace 2026-01-20T10:57:51.347812281+00:00 stderr F I0120 10:57:51.347791 30089 namespace.go:144] [kube-system] updating namespace 2026-01-20T10:57:51.347812281+00:00 stderr F I0120 10:57:51.347806 30089 namespace.go:144] [kube-public] updating namespace 2026-01-20T10:57:51.347839892+00:00 stderr F I0120 10:57:51.347774 30089 namespace.go:144] [openshift] updating namespace 2026-01-20T10:57:51.347850532+00:00 stderr F I0120 10:57:51.347841 30089 namespace.go:144] [openshift-apiserver-operator] updating namespace 2026-01-20T10:57:51.347895363+00:00 stderr F I0120 10:57:51.347868 30089 namespace.go:144] [openshift-apiserver] updating namespace 2026-01-20T10:57:51.347909304+00:00 stderr F I0120 10:57:51.347872 30089 namespace.go:144] [hostpath-provisioner] updating namespace 2026-01-20T10:57:51.348052458+00:00 stderr F I0120 10:57:51.347992 30089 namespace.go:144] [openshift-authentication-operator] updating namespace 2026-01-20T10:57:51.348093529+00:00 stderr F I0120 
10:57:51.348047 30089 namespace.go:144] [openshift-cloud-network-config-controller] updating namespace 2026-01-20T10:57:51.348162961+00:00 stderr F I0120 10:57:51.348136 30089 namespace.go:144] [openshift-cluster-machine-approver] updating namespace 2026-01-20T10:57:51.348219162+00:00 stderr F I0120 10:57:51.348191 30089 namespace.go:144] [openshift-cloud-platform-infra] updating namespace 2026-01-20T10:57:51.348232973+00:00 stderr F I0120 10:57:51.348223 30089 namespace.go:144] [openshift-cluster-samples-operator] updating namespace 2026-01-20T10:57:51.348265104+00:00 stderr F I0120 10:57:51.348193 30089 namespace.go:144] [openshift-authentication] updating namespace 2026-01-20T10:57:51.348336296+00:00 stderr F I0120 10:57:51.348305 30089 namespace.go:144] [openshift-cluster-version] updating namespace 2026-01-20T10:57:51.348348546+00:00 stderr F I0120 10:57:51.348336 30089 namespace.go:144] [openshift-cluster-storage-operator] updating namespace 2026-01-20T10:57:51.348403327+00:00 stderr F I0120 10:57:51.348376 30089 namespace.go:144] [openshift-config] updating namespace 2026-01-20T10:57:51.348523570+00:00 stderr F I0120 10:57:51.348494 30089 namespace.go:144] [openshift-config-managed] updating namespace 2026-01-20T10:57:51.348586302+00:00 stderr F I0120 10:57:51.348560 30089 namespace.go:144] [openshift-config-operator] updating namespace 2026-01-20T10:57:51.348662424+00:00 stderr F I0120 10:57:51.348634 30089 namespace.go:144] [openshift-console] updating namespace 2026-01-20T10:57:51.348678564+00:00 stderr F I0120 10:57:51.348669 30089 namespace.go:144] [openshift-console-user-settings] updating namespace 2026-01-20T10:57:51.348724166+00:00 stderr F I0120 10:57:51.348708 30089 namespace.go:144] [openshift-console-operator] updating namespace 2026-01-20T10:57:51.348853669+00:00 stderr F I0120 10:57:51.348781 30089 namespace.go:144] [openshift-controller-manager] updating namespace 2026-01-20T10:57:51.348919121+00:00 stderr F I0120 10:57:51.348891 30089 
namespace.go:144] [openshift-controller-manager-operator] updating namespace 2026-01-20T10:57:51.348955492+00:00 stderr F I0120 10:57:51.348930 30089 namespace.go:144] [openshift-dns-operator] updating namespace 2026-01-20T10:57:51.348986673+00:00 stderr F I0120 10:57:51.348898 30089 namespace.go:144] [openshift-dns] updating namespace 2026-01-20T10:57:51.349031004+00:00 stderr F I0120 10:57:51.349007 30089 namespace.go:144] [openshift-etcd-operator] updating namespace 2026-01-20T10:57:51.349086505+00:00 stderr F I0120 10:57:51.349044 30089 namespace.go:144] [openshift-etcd] updating namespace 2026-01-20T10:57:51.349136436+00:00 stderr F I0120 10:57:51.349113 30089 namespace.go:144] [openshift-image-registry] updating namespace 2026-01-20T10:57:51.349179708+00:00 stderr F I0120 10:57:51.349164 30089 namespace.go:144] [openshift-host-network] updating namespace 2026-01-20T10:57:51.349234329+00:00 stderr F I0120 10:57:51.349165 30089 namespace.go:144] [openshift-infra] updating namespace 2026-01-20T10:57:51.349283760+00:00 stderr F I0120 10:57:51.349186 30089 namespace.go:144] [openshift-ingress] updating namespace 2026-01-20T10:57:51.349347102+00:00 stderr F I0120 10:57:51.349229 30089 namespace.go:144] [openshift-ingress-canary] updating namespace 2026-01-20T10:57:51.349406303+00:00 stderr F I0120 10:57:51.349270 30089 namespace.go:144] [openshift-kni-infra] updating namespace 2026-01-20T10:57:51.349496956+00:00 stderr F I0120 10:57:51.349278 30089 namespace.go:144] [openshift-ingress-operator] updating namespace 2026-01-20T10:57:51.349510176+00:00 stderr F I0120 10:57:51.349347 30089 namespace.go:144] [openshift-kube-controller-manager] updating namespace 2026-01-20T10:57:51.349551787+00:00 stderr F I0120 10:57:51.349346 30089 namespace.go:144] [openshift-kube-apiserver-operator] updating namespace 2026-01-20T10:57:51.349551787+00:00 stderr F I0120 10:57:51.349536 30089 namespace.go:144] [openshift-machine-config-operator] updating namespace 
2026-01-20T10:57:51.349588128+00:00 stderr F I0120 10:57:51.349562 30089 namespace.go:144] [openshift-kube-controller-manager-operator] updating namespace 2026-01-20T10:57:51.349599899+00:00 stderr F I0120 10:57:51.349407 30089 namespace.go:144] [openshift-kube-scheduler] updating namespace 2026-01-20T10:57:51.349635239+00:00 stderr F I0120 10:57:51.349612 30089 namespace.go:144] [openshift-multus] updating namespace 2026-01-20T10:57:51.349646900+00:00 stderr F I0120 10:57:51.349635 30089 namespace.go:144] [openshift-marketplace] updating namespace 2026-01-20T10:57:51.349657380+00:00 stderr F I0120 10:57:51.349615 30089 namespace.go:144] [openshift-kube-scheduler-operator] updating namespace 2026-01-20T10:57:51.349695791+00:00 stderr F I0120 10:57:51.349467 30089 namespace.go:144] [openshift-kube-storage-version-migrator] updating namespace 2026-01-20T10:57:51.349707071+00:00 stderr F I0120 10:57:51.349697 30089 namespace.go:144] [openshift-monitoring] updating namespace 2026-01-20T10:57:51.349739172+00:00 stderr F I0120 10:57:51.349526 30089 namespace.go:144] [openshift-kube-storage-version-migrator-operator] updating namespace 2026-01-20T10:57:51.349750302+00:00 stderr F I0120 10:57:51.349741 30089 namespace.go:144] [openshift-machine-api] updating namespace 2026-01-20T10:57:51.349781873+00:00 stderr F I0120 10:57:51.349464 30089 namespace.go:144] [openshift-kube-apiserver] updating namespace 2026-01-20T10:57:51.349860895+00:00 stderr F I0120 10:57:51.349836 30089 namespace.go:144] [openshift-network-diagnostics] updating namespace 2026-01-20T10:57:51.349909707+00:00 stderr F I0120 10:57:51.349887 30089 namespace.go:144] [openshift-network-operator] updating namespace 2026-01-20T10:57:51.349947208+00:00 stderr F I0120 10:57:51.349930 30089 namespace.go:144] [openshift-network-node-identity] updating namespace 2026-01-20T10:57:51.350005389+00:00 stderr F I0120 10:57:51.349980 30089 namespace.go:144] [openshift-oauth-apiserver] updating namespace 
2026-01-20T10:57:51.350033720+00:00 stderr F I0120 10:57:51.349934 30089 namespace.go:144] [openshift-node] updating namespace 2026-01-20T10:57:51.350111762+00:00 stderr F I0120 10:57:51.350086 30089 namespace.go:144] [openshift-openstack-infra] updating namespace 2026-01-20T10:57:51.350151713+00:00 stderr F I0120 10:57:51.350126 30089 namespace.go:144] [openshift-operators] updating namespace 2026-01-20T10:57:51.350184574+00:00 stderr F I0120 10:57:51.350160 30089 namespace.go:144] [openshift-ovirt-infra] updating namespace 2026-01-20T10:57:51.350229015+00:00 stderr F I0120 10:57:51.350206 30089 namespace.go:144] [openshift-ovn-kubernetes] updating namespace 2026-01-20T10:57:51.350262816+00:00 stderr F I0120 10:57:51.350248 30089 namespace.go:144] [openshift-nutanix-infra] updating namespace 2026-01-20T10:57:51.350322177+00:00 stderr F I0120 10:57:51.350297 30089 namespace.go:144] [openshift-user-workload-monitoring] updating namespace 2026-01-20T10:57:51.350367139+00:00 stderr F I0120 10:57:51.350343 30089 namespace.go:144] [openstack] updating namespace 2026-01-20T10:57:51.350402159+00:00 stderr F I0120 10:57:51.350038 30089 namespace.go:144] [openshift-operator-lifecycle-manager] updating namespace 2026-01-20T10:57:51.350498752+00:00 stderr F I0120 10:57:51.350468 30089 namespace.go:144] [openshift-route-controller-manager] updating namespace 2026-01-20T10:57:51.350498752+00:00 stderr F I0120 10:57:51.350268 30089 namespace.go:144] [openshift-service-ca-operator] updating namespace 2026-01-20T10:57:51.350517792+00:00 stderr F I0120 10:57:51.350508 30089 namespace.go:144] [openshift-vsphere-infra] updating namespace 2026-01-20T10:57:51.350545943+00:00 stderr F I0120 10:57:51.350347 30089 namespace.go:144] [openshift-service-ca] updating namespace 2026-01-20T10:57:51.350558103+00:00 stderr F I0120 10:57:51.350543 30089 namespace.go:144] [openstack-operators] updating namespace 2026-01-20T10:57:51.888240063+00:00 stderr F I0120 10:57:51.888159 30089 
reflector.go:325] Listing and watching *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:57:51.890157284+00:00 stderr F I0120 10:57:51.890103 30089 reflector.go:351] Caches populated for *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:57:52.422854901+00:00 stderr F I0120 10:57:52.422778 30089 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:52.424840584+00:00 stderr F I0120 10:57:52.424792 30089 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:57:52.425268635+00:00 stderr F I0120 10:57:52.425225 30089 master.go:627] Adding or Updating Node "crc" 2026-01-20T10:57:52.425286335+00:00 stderr F I0120 10:57:52.425278 30089 hybrid.go:140] Removing node crc hybrid overlay port 2026-01-20T10:57:53.316358580+00:00 stderr F I0120 10:57:53.315219 30089 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:57:53.316938775+00:00 stderr F I0120 10:57:53.316864 30089 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2026-01-20T10:57:54.500015003+00:00 stderr F I0120 10:57:54.499941 30089 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:57:54.501716797+00:00 stderr F I0120 10:57:54.501683 30089 reflector.go:351] Caches populated for *v1.EgressQoS from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2026-01-20T10:58:04.810398594+00:00 stderr F I0120 10:58:04.810294 30089 namespace.go:100] [openshift-must-gather-jdb4k] adding namespace 2026-01-20T10:58:04.812865906+00:00 stderr F I0120 10:58:04.812831 30089 namespace.go:104] [openshift-must-gather-jdb4k] adding namespace took 2.509813ms 2026-01-20T10:58:04.834465005+00:00 stderr F I0120 10:58:04.834398 30089 namespace.go:144] [openshift-must-gather-jdb4k] updating namespace 2026-01-20T10:58:05.765849542+00:00 stderr F I0120 10:58:05.765701 30089 namespace.go:144] [openshift-must-gather-jdb4k] updating namespace 2026-01-20T10:58:12.926755032+00:00 stderr F I0120 10:58:12.926346 30089 namespace.go:144] [openshift-must-gather-jdb4k] updating namespace 2026-01-20T10:58:12.938695586+00:00 stderr F I0120 10:58:12.938610 30089 namespace.go:278] [openshift-must-gather-jdb4k] deleting namespace
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-controller/0.log
2026-01-20T10:56:45.029776811+00:00 stderr F ++ K8S_NODE=crc 2026-01-20T10:56:45.029776811+00:00 stderr F ++ [[ -n crc ]] 2026-01-20T10:56:45.029776811+00:00 stderr F ++ [[ -f /env/crc ]] 
2026-01-20T10:56:45.029776811+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2026-01-20T10:56:45.029887384+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2026-01-20T10:56:45.029887384+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.029887384+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2026-01-20T10:56:45.029887384+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2026-01-20T10:56:45.029887384+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2026-01-20T10:56:45.029887384+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2026-01-20T10:56:45.029887384+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2026-01-20T10:56:45.029887384+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2026-01-20T10:56:45.029887384+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2026-01-20T10:56:45.030768669+00:00 stderr F + start-ovn-controller info 2026-01-20T10:56:45.030768669+00:00 stderr F + local log_level=info 2026-01-20T10:56:45.030786709+00:00 stderr F + [[ 1 -ne 1 ]] 2026-01-20T10:56:45.031221171+00:00 stderr F ++ date -Iseconds 2026-01-20T10:56:45.033598055+00:00 stderr F + echo '2026-01-20T10:56:45+00:00 - starting ovn-controller' 2026-01-20T10:56:45.033616025+00:00 stdout F 2026-01-20T10:56:45+00:00 - starting ovn-controller 2026-01-20T10:56:45.033651186+00:00 stderr F + exec ovn-controller unix:/var/run/openvswitch/db.sock -vfile:off --no-chdir --pidfile=/var/run/ovn/ovn-controller.pid --syslog-method=null --log-file=/var/log/ovn/acl-audit-log.log -vFACILITY:local0 -vconsole:info -vconsole:acl_log:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' -vsyslog:acl_log:info -vfile:acl_log:info 2026-01-20T10:56:45.037276773+00:00 stderr F 2026-01-20T10:56:45Z|00001|vlog|INFO|opened log file /var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.115803842+00:00 stderr F 
2026-01-20T10:56:45.114Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 2026-01-20T10:56:45.115803842+00:00 stderr F 2026-01-20T10:56:45.114Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2026-01-20T10:56:45.120717265+00:00 stderr F 2026-01-20T10:56:45.120Z|00004|main|INFO|OVN internal version is : [24.03.3-20.33.0-72.6] 2026-01-20T10:56:45.120717265+00:00 stderr F 2026-01-20T10:56:45.120Z|00005|main|INFO|OVS IDL reconnected, force recompute. 2026-01-20T10:56:45.120799087+00:00 stderr F 2026-01-20T10:56:45.120Z|00006|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2026-01-20T10:56:45.120799087+00:00 stderr F 2026-01-20T10:56:45.120Z|00007|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2026-01-20T10:56:45.120799087+00:00 stderr F 2026-01-20T10:56:45.120Z|00008|main|INFO|OVNSB IDL reconnected, force recompute. 2026-01-20T10:56:46.121555997+00:00 stderr F 2026-01-20T10:56:46.121Z|00009|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2026-01-20T10:56:46.121555997+00:00 stderr F 2026-01-20T10:56:46.121Z|00010|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2026-01-20T10:56:46.121555997+00:00 stderr F 2026-01-20T10:56:46.121Z|00011|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 2 seconds before reconnect 2026-01-20T10:56:48.122840631+00:00 stderr F 2026-01-20T10:56:48.122Z|00012|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 
2026-01-20T10:56:48.122840631+00:00 stderr F 2026-01-20T10:56:48.122Z|00013|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2026-01-20T10:56:48.122840631+00:00 stderr F 2026-01-20T10:56:48.122Z|00014|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 4 seconds before reconnect 2026-01-20T10:56:52.125464535+00:00 stderr F 2026-01-20T10:56:52.125Z|00015|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2026-01-20T10:56:52.125464535+00:00 stderr F 2026-01-20T10:56:52.125Z|00016|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected 2026-01-20T10:56:52.148824613+00:00 stderr F 2026-01-20T10:56:52.148Z|00017|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2026-01-20T10:56:52.148824613+00:00 stderr F 2026-01-20T10:56:52.148Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2026-01-20T10:56:52.149762799+00:00 stderr F 2026-01-20T10:56:52.149Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2026-01-20T10:56:52.149762799+00:00 stderr F 2026-01-20T10:56:52.149Z|00020|features|INFO|OVS Feature: ct_zero_snat, state: supported 2026-01-20T10:56:52.149762799+00:00 stderr F 2026-01-20T10:56:52.149Z|00021|features|INFO|OVS Feature: ct_flush, state: supported 2026-01-20T10:56:52.149762799+00:00 stderr F 2026-01-20T10:56:52.149Z|00022|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported 2026-01-20T10:56:52.149781369+00:00 stderr F 2026-01-20T10:56:52.149Z|00023|main|INFO|OVS feature set changed, force recompute. 2026-01-20T10:56:52.149781369+00:00 stderr F 2026-01-20T10:56:52.149Z|00024|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2026-01-20T10:56:52.149781369+00:00 stderr F 2026-01-20T10:56:52.149Z|00025|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
2026-01-20T10:56:52.150338544+00:00 stderr F 2026-01-20T10:56:52.150Z|00026|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2026-01-20T10:56:52.150411266+00:00 stderr F 2026-01-20T10:56:52.150Z|00027|main|INFO|OVS OpenFlow connection reconnected,force recompute. 2026-01-20T10:56:52.151486004+00:00 stderr F 2026-01-20T10:56:52.151Z|00028|main|INFO|OVS feature set changed, force recompute. 2026-01-20T10:56:52.177989858+00:00 stderr F 2026-01-20T10:56:52.177Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2026-01-20T10:56:52.177989858+00:00 stderr F 2026-01-20T10:56:52.177Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2026-01-20T10:56:52.177989858+00:00 stderr F 2026-01-20T10:56:52.177Z|00001|statctrl(ovn_statctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2026-01-20T10:56:52.177989858+00:00 stderr F 2026-01-20T10:56:52.177Z|00002|rconn(ovn_statctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
2026-01-20T10:56:52.177989858+00:00 stderr F 2026-01-20T10:56:52.177Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2026-01-20T10:56:52.177989858+00:00 stderr F 2026-01-20T10:56:52.177Z|00003|rconn(ovn_statctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2026-01-20T10:57:07.853506860+00:00 stderr F 2026-01-20T10:57:07.853Z|00029|memory|INFO|19072 kB peak resident set size after 22.8 seconds 2026-01-20T10:57:07.853506860+00:00 stderr F 2026-01-20T10:57:07.853Z|00030|memory|INFO|idl-cells-OVN_Southbound:12959 idl-cells-Open_vSwitch:3258 lflow-cache-entries-cache-expr:281 lflow-cache-entries-cache-matches:623 lflow-cache-size-KB:713 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:721 ofctrl_installed_flow_usage-KB:530 ofctrl_sb_flow_ref_usage-KB:285 2026-01-20T10:57:10.062144708+00:00 stderr F 2026-01-20T10:57:10.062Z|00031|binding|INFO|Releasing lport openshift-kube-apiserver_installer-13-crc from this chassis (sb_readonly=0) 2026-01-20T10:57:10.062144708+00:00 stderr F 2026-01-20T10:57:10.062Z|00032|if_status|WARN|Trying to release unknown interface openshift-kube-apiserver_installer-13-crc 2026-01-20T10:57:10.062144708+00:00 stderr F 2026-01-20T10:57:10.062Z|00033|binding|INFO|Setting lport openshift-kube-apiserver_installer-13-crc down in Southbound 2026-01-20T10:57:37.869905053+00:00 stderr F 2026-01-20T10:57:37.869Z|00034|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/sbdb/0.log
2026-01-20T10:56:50.172221992+00:00 stderr F + [[ -f /env/_master ]] 2026-01-20T10:56:50.172464808+00:00 stderr F + . /ovnkube-lib/ovnkube-lib.sh 2026-01-20T10:56:50.172539170+00:00 stderr F ++ set -x 2026-01-20T10:56:50.172580451+00:00 stderr F ++ K8S_NODE= 2026-01-20T10:56:50.172616842+00:00 stderr F ++ [[ -n '' ]] 2026-01-20T10:56:50.172641953+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2026-01-20T10:56:50.172665543+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2026-01-20T10:56:50.172688414+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2026-01-20T10:56:50.172711315+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2026-01-20T10:56:50.172733655+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2026-01-20T10:56:50.172756086+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2026-01-20T10:56:50.172778676+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2026-01-20T10:56:50.172801027+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2026-01-20T10:56:50.172823278+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2026-01-20T10:56:50.172845498+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2026-01-20T10:56:50.173885856+00:00 stderr F + trap quit-sbdb TERM INT 2026-01-20T10:56:50.173942787+00:00 stderr F + start-sbdb info 2026-01-20T10:56:50.173976918+00:00 stderr F + local log_level=info 2026-01-20T10:56:50.174007699+00:00 stderr F + [[ 1 -ne 1 ]] 2026-01-20T10:56:50.174303198+00:00 stderr F + wait 30010 2026-01-20T10:56:50.176888186+00:00 stderr F + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor 
--db-sb-sock=/var/run/ovn/ovnsb_db.sock '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb 2026-01-20T10:56:50.280073193+00:00 stderr F 2026-01-20T10:56:50.279Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-sb.log 2026-01-20T10:56:50.367576198+00:00 stderr F 2026-01-20T10:56:50.367Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.3.1 2026-01-20T10:57:00.376313505+00:00 stderr F 2026-01-20T10:57:00.376Z|00003|memory|INFO|17584 kB peak resident set size after 10.1 seconds 2026-01-20T10:57:00.376558901+00:00 stderr F 2026-01-20T10:57:00.376Z|00004|memory|INFO|atoms:16790 cells:14889 json-caches:2 monitors:5 n-weak-refs:278 sessions:3
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kubecfg-setup/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/nbdb/0.log
2026-01-20T10:56:47.822498351+00:00 stderr F + [[ -f /env/_master ]] 2026-01-20T10:56:47.822498351+00:00 stderr F + . /ovnkube-lib/ovnkube-lib.sh 2026-01-20T10:56:47.822673655+00:00 stderr F ++ set -x 2026-01-20T10:56:47.822673655+00:00 stderr F ++ K8S_NODE=crc 2026-01-20T10:56:47.822673655+00:00 stderr F ++ [[ -n crc ]] 2026-01-20T10:56:47.822673655+00:00 stderr F ++ [[ -f /env/crc ]] 2026-01-20T10:56:47.822673655+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2026-01-20T10:56:47.822684566+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2026-01-20T10:56:47.822692616+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2026-01-20T10:56:47.822700346+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2026-01-20T10:56:47.822700346+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2026-01-20T10:56:47.822707766+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2026-01-20T10:56:47.822714856+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2026-01-20T10:56:47.822722477+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2026-01-20T10:56:47.822729747+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2026-01-20T10:56:47.822729747+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2026-01-20T10:56:47.823962350+00:00 stderr F + trap quit-nbdb TERM INT 2026-01-20T10:56:47.823977381+00:00 stderr F + start-nbdb info 2026-01-20T10:56:47.823977381+00:00 stderr F + local log_level=info 2026-01-20T10:56:47.824013092+00:00 stderr F + [[ 1 -ne 1 ]] 2026-01-20T10:56:47.824370801+00:00 
stderr F + wait 29905 2026-01-20T10:56:47.824680350+00:00 stderr F + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor --db-nb-sock=/var/run/ovn/ovnnb_db.sock '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb 2026-01-20T10:56:47.966564516+00:00 stderr F 2026-01-20T10:56:47.966Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-nb.log 2026-01-20T10:56:48.004421845+00:00 stderr F 2026-01-20T10:56:48.004Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.3.1 2026-01-20T10:56:58.009467563+00:00 stderr F 2026-01-20T10:56:58.009Z|00003|memory|INFO|12928 kB peak resident set size after 10.0 seconds 2026-01-20T10:56:58.010026189+00:00 stderr F 2026-01-20T10:56:58.009Z|00004|memory|INFO|atoms:4869 cells:3543 json-caches:2 monitors:3 n-weak-refs:120 sessions:2
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/northd/0.log
2026-01-20T10:56:45.726518668+00:00 stderr F + [[ -f /env/_master ]] 2026-01-20T10:56:45.726518668+00:00 stderr F + . 
/ovnkube-lib/ovnkube-lib.sh 2026-01-20T10:56:45.726518668+00:00 stderr F ++ set -x 2026-01-20T10:56:45.726708093+00:00 stderr F ++ K8S_NODE= 2026-01-20T10:56:45.726708093+00:00 stderr F ++ [[ -n '' ]] 2026-01-20T10:56:45.726708093+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2026-01-20T10:56:45.726708093+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2026-01-20T10:56:45.726708093+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2026-01-20T10:56:45.726708093+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2026-01-20T10:56:45.726708093+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2026-01-20T10:56:45.726708093+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2026-01-20T10:56:45.726708093+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2026-01-20T10:56:45.726708093+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2026-01-20T10:56:45.726708093+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2026-01-20T10:56:45.726708093+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2026-01-20T10:56:45.727576546+00:00 stderr F + trap quit-ovn-northd TERM INT 2026-01-20T10:56:45.727576546+00:00 stderr F + start-ovn-northd info 2026-01-20T10:56:45.727576546+00:00 stderr F + local log_level=info 2026-01-20T10:56:45.727576546+00:00 stderr F + [[ 1 -ne 1 ]] 2026-01-20T10:56:45.728103750+00:00 stderr F ++ date -Iseconds 2026-01-20T10:56:45.730284129+00:00 stderr F + echo '2026-01-20T10:56:45+00:00 - starting ovn-northd' 2026-01-20T10:56:45.730304949+00:00 stdout F 2026-01-20T10:56:45+00:00 - starting ovn-northd 2026-01-20T10:56:45.730534075+00:00 stderr F + wait 29764 2026-01-20T10:56:45.730684839+00:00 stderr F + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 2026-01-20T10:56:45.736699421+00:00 stderr F 2026-01-20T10:56:45.736Z|00001|ovn_northd|INFO|OVN 
internal version is : [24.03.3-20.33.0-72.6] 2026-01-20T10:56:45.737259607+00:00 stderr F 2026-01-20T10:56:45.737Z|00002|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2026-01-20T10:56:45.737337659+00:00 stderr F 2026-01-20T10:56:45.737Z|00003|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connection attempt failed (No such file or directory) 2026-01-20T10:56:45.737459802+00:00 stderr F 2026-01-20T10:56:45.737Z|00004|ovn_northd|INFO|OVN NB IDL reconnected, force recompute. 2026-01-20T10:56:45.737542425+00:00 stderr F 2026-01-20T10:56:45.737Z|00005|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2026-01-20T10:56:45.737594796+00:00 stderr F 2026-01-20T10:56:45.737Z|00006|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2026-01-20T10:56:45.737646957+00:00 stderr F 2026-01-20T10:56:45.737Z|00007|ovn_northd|INFO|OVN SB IDL reconnected, force recompute. 2026-01-20T10:56:46.738255149+00:00 stderr F 2026-01-20T10:56:46.738Z|00008|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2026-01-20T10:56:46.738512166+00:00 stderr F 2026-01-20T10:56:46.738Z|00009|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connection attempt failed (No such file or directory) 2026-01-20T10:56:46.738631229+00:00 stderr F 2026-01-20T10:56:46.738Z|00010|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: waiting 2 seconds before reconnect 2026-01-20T10:56:46.738808044+00:00 stderr F 2026-01-20T10:56:46.738Z|00011|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 
2026-01-20T10:56:46.738961968+00:00 stderr F 2026-01-20T10:56:46.738Z|00012|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory)
2026-01-20T10:56:46.739256496+00:00 stderr F 2026-01-20T10:56:46.739Z|00013|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 2 seconds before reconnect
2026-01-20T10:56:48.740847600+00:00 stderr F 2026-01-20T10:56:48.740Z|00014|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting...
2026-01-20T10:56:48.740992673+00:00 stderr F 2026-01-20T10:56:48.740Z|00015|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connected
2026-01-20T10:56:48.741094766+00:00 stderr F 2026-01-20T10:56:48.741Z|00016|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting...
2026-01-20T10:56:48.741149188+00:00 stderr F 2026-01-20T10:56:48.741Z|00017|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory)
2026-01-20T10:56:48.741191819+00:00 stderr F 2026-01-20T10:56:48.741Z|00018|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 4 seconds before reconnect
2026-01-20T10:56:52.742646900+00:00 stderr F 2026-01-20T10:56:52.742Z|00019|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting...
2026-01-20T10:56:52.745302301+00:00 stderr F 2026-01-20T10:56:52.743Z|00020|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected
2026-01-20T10:56:52.745302301+00:00 stderr F 2026-01-20T10:56:52.743Z|00021|ovn_northd|INFO|ovn-northd lock acquired. This ovn-northd instance is now active.
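The reconnect lines above show the ovsdb client waiting 2 seconds, then 4 seconds, between failed attempts. A minimal sketch of that doubling backoff (the 8-second cap is an assumption based on OVS reconnect defaults, not something this log shows):

```python
import itertools

def backoff_delays(initial=2, maximum=8):
    """Yield reconnect wait times that double from `initial` up to `maximum`,
    mirroring the 2s -> 4s progression in the reconnect log above.
    The cap value is an assumption, not taken from this log."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, maximum)

# First attempts wait 2s, then 4s, as ovn-northd logged here:
print(list(itertools.islice(backoff_delays(), 4)))  # [2, 4, 8, 8]
```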
2026-01-20T10:56:53.157718998+00:00 stderr F 2026-01-20T10:56:53.157Z|00022|ipam|WARN|d6057acb-0f02-4ebe-8cea-b3228e61764c: Duplicate IP set: 10.217.0.2
2026-01-20T10:57:03.484928093+00:00 stderr F 2026-01-20T10:57:03.484Z|00023|memory|INFO|14336 kB peak resident set size after 17.7 seconds
2026-01-20T10:57:03.485375265+00:00 stderr F 2026-01-20T10:57:03.485Z|00024|memory|INFO|idl-cells-OVN_Northbound:2813 idl-cells-OVN_Southbound:12959
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.log
2025-08-13T19:50:44.179030078+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing
2025-08-13T19:50:44.237874850+00:00 stdout F image-registry.openshift-image-registry.svc..5000
2025-08-13T19:50:44.289116574+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000
2025-08-13T19:50:44.432193094+00:00
stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:50:44.488575685+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:50:44.596138679+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:51:44.793865665+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:51:44.800948797+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:51:44.807599916+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:51:44.816644794+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:51:44.820681709+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:51:44.824596800+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:52:44.840933394+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:52:44.848563401+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:52:44.857305630+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:52:44.866752939+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:52:44.873025757+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:52:44.877576937+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:53:44.895479215+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:53:44.904981736+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:53:44.916047412+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:53:44.927692085+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:53:44.931149313+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-08-13T19:53:44.935619241+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:54:44.950291420+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:54:44.959097171+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:54:44.972143633+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:54:44.985855865+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:54:44.989359554+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:54:44.993524073+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:55:45.010639720+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:55:45.019616857+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:55:45.025727891+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:55:45.034734568+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:55:45.038501826+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:55:45.042243723+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:56:45.054496589+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:56:45.061991153+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:56:45.072677698+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:56:45.095294644+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:56:45.102084498+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:56:45.109215231+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:57:45.130749729+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:57:45.137631605+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:57:45.144302516+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:57:45.154681432+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:57:45.158736538+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:57:45.163941517+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:58:45.177593190+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:58:45.184123356+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:58:45.190436746+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:58:45.202169050+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:58:45.205926076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:58:45.213955125+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:59:45.467363579+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:59:45.990733368+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:59:46.348749022+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:59:46.413734585+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:59:46.497041809+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:59:46.855909219+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:00:47.400675889+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:00:47.671347326+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-08-13T20:00:47.768266490+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:00:47.917191257+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:00:47.985897916+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:00:48.049021466+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:01:48.345872873+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:01:48.925360996+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:01:48.933652583+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:01:48.944651026+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:01:48.949217056+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:01:48.953745906+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:02:49.892068063+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:02:49.995503284+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:02:50.002994117+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:02:50.014342111+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:02:50.018820879+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:02:50.024269954+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:03:50.089607897+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:03:50.099923071+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:03:50.109966648+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:03:50.119533921+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:03:50.127032885+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:03:50.130664498+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:04:50.144878272+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:04:50.205034795+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:04:50.216842483+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:04:50.230380341+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:04:50.234838258+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:04:50.242270821+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:05:50.258590982+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:05:50.269693380+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:05:50.281888759+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:05:50.300749569+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:05:50.305997930+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:05:50.310677554+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:06:50.341001987+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:06:50.367363093+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:06:50.379169482+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:06:50.392109473+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:06:50.395078188+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-08-13T20:06:50.398663201+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:07:50.415535003+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:07:50.426037825+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:07:50.440053076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:07:50.457313681+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:07:50.476738488+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:07:50.481346290+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:08:50.505933265+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:08:50.511639988+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:08:50.519055261+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:08:50.532878397+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:08:50.536563493+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:08:50.542156553+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:09:50.561316124+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:09:50.588121533+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:09:50.647838885+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:09:50.702960175+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:09:50.714519997+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:09:50.719568922+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:10:50.739488450+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:10:50.747188040+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:10:50.754209522+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:10:50.767261916+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:10:50.772190487+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:10:50.776316666+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:11:50.794322108+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:11:50.803628205+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:11:50.813347114+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:11:50.825701918+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:11:50.830052603+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:11:50.834189241+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:12:50.848728773+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:12:50.854349454+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:12:50.863721563+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:12:50.870427995+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:12:50.874595945+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:12:50.878092745+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:13:50.889843871+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:13:50.897546822+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-08-13T20:13:50.903580425+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:13:50.913621003+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:13:50.917484983+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:13:50.922079615+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:14:50.934660344+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:14:50.946599046+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:14:50.955119770+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:14:50.965687243+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:14:50.970257264+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:14:50.974985180+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:15:50.990328799+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:15:50.998571764+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:15:51.007188290+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:15:51.021448067+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:15:51.026611565+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:15:51.031947957+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:16:51.046189254+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:16:51.054469721+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:16:51.064193108+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:16:51.075767989+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:16:51.079844505+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:16:51.083611393+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:17:51.095301291+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:17:51.106125390+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:17:51.122001684+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:17:51.132767541+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:17:51.140506572+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:17:51.148720447+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:18:51.164139785+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:18:51.171831135+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:18:51.184226349+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:18:51.204513368+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:18:51.215682157+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:18:51.223433189+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:19:51.242413375+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:19:51.255073497+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:19:51.264893077+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:19:51.276414546+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:19:51.280402130+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-08-13T20:19:51.287049410+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:20:51.299959742+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:20:51.305878951+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:20:51.312962754+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:20:51.323135154+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:20:51.326248163+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:20:51.330379541+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:21:51.343764978+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:21:51.352820006+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:21:51.361837164+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:21:51.372254532+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:21:51.376860923+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:21:51.381518287+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:22:51.398739807+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:22:51.406085316+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:22:51.412600773+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:22:51.421683252+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:22:51.425453870+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:22:51.429822705+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:23:51.445523366+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:23:51.453693890+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:23:51.460924166+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:23:51.471057946+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:23:51.478256292+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:23:51.481915147+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:24:51.495866769+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:24:51.503283901+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:24:51.512755712+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:24:51.527875515+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:24:51.532710953+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:24:51.536374068+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:25:51.549034970+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:25:51.557078810+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:25:51.565216283+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:25:51.575481146+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:25:51.580249903+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:25:51.584875085+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:26:51.598649328+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:26:51.605657798+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-08-13T20:26:51.612983138+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:26:51.622559972+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:26:51.627584345+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:26:51.631536758+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:27:51.647962447+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:27:51.658085056+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:27:51.665942891+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:27:51.677450850+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:27:51.681587858+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:27:51.685494920+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:28:51.706042488+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:28:51.720733561+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:28:51.729951246+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:28:51.755563072+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:28:51.768132063+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:28:51.772989693+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:29:51.793283737+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:29:51.806941069+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:29:51.816148324+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:29:51.830997841+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:29:51.837600431+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:29:51.842635045+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:30:51.859244339+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:30:51.871837001+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:30:51.879857372+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:30:51.892769683+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:30:51.898760945+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:30:51.902899894+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:31:51.915497901+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:31:51.926827037+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:31:51.934752705+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:31:51.946344638+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:31:51.950888159+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:31:51.954856063+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:32:51.967487272+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:32:51.973539756+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:32:51.981221237+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:32:51.991501942+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:32:51.995170888+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-08-13T20:32:51.999110381+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:33:52.015352083+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:33:52.028486171+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:33:52.036493631+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:33:52.053462279+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:33:52.062454957+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:33:52.069294024+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:34:52.087208945+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:34:52.094613198+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:34:52.102524996+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:34:52.111355859+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:34:52.122145500+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:34:52.126930647+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:35:52.148494694+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:35:52.156479084+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:35:52.169187349+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:35:52.182749539+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:35:52.188410682+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:35:52.194130316+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:36:52.212706927+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:36:52.225700141+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:36:52.236669598+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:36:52.248599752+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:36:52.252548165+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:36:52.257011264+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:37:52.271169813+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:37:52.282038636+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:37:52.291913851+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:37:52.315675506+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:37:52.324174341+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:37:52.329417192+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:38:52.345967174+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:38:52.353704657+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:38:52.364194029+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:38:52.374217218+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:38:52.378944234+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:38:52.382480376+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:39:52.408991716+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:39:52.419278352+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-08-13T20:39:52.432388350+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:39:52.445580601+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:39:52.451324546+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:39:52.456467214+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:40:52.475429215+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:40:52.484263909+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:40:52.492461906+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:40:52.503455673+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:40:52.507405917+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:40:52.511266108+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:41:52.525956568+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:41:52.538127589+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:41:52.556161139+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:41:52.577509234+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:41:52.582425076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:41:52.590598602+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:42:46.781617096+00:00 stdout F shutting down node-ca
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.log
2026-01-20T10:47:24.726856531+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:47:24.737861159+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:47:24.742774952+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:47:24.751720584+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:47:24.754826158+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:47:24.758766194+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:48:24.769459391+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:48:24.779272428+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:48:24.789684229+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:48:24.803226730+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:48:24.808486948+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:48:24.815416593+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:49:24.828885890+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:49:24.836592615+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:49:24.843940210+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:49:24.854020706+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:49:24.858102101+00:00
stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:49:24.862961689+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:50:24.874836612+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:50:24.879774792+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:50:24.884189297+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:50:24.893783388+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:50:24.898364218+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:50:24.903425793+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:51:24.915038229+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:51:24.920473606+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:51:24.934286544+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:51:24.935689524+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:51:24.939727861+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:51:24.943552731+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:52:24.962157942+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:52:24.970200830+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:52:24.980041289+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:52:24.990886406+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:52:24.996541836+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:52:25.002472184+00:00 stdout F image-registry.openshift-image-registry.svc:5000 
2026-01-20T10:53:25.015589775+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:53:25.025152601+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:53:25.031167987+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:53:25.041039362+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:53:25.044741377+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:53:25.048531946+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:54:25.057844445+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:54:25.068837978+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:54:25.076844381+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:54:25.088413260+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:54:25.092901259+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:54:25.096860305+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:55:25.109736475+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:55:25.115318794+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:55:25.120177163+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:55:25.130546250+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:55:25.133598781+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:55:25.139002905+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:56:25.152788124+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:56:25.158804596+00:00 stdout F 
image-registry.openshift-image-registry.svc..5000 2026-01-20T10:56:25.164364045+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:56:25.174606301+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:56:25.177667134+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:56:25.180907791+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2026-01-20T10:57:25.189805844+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:57:25.194845217+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2026-01-20T10:57:25.201206225+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2026-01-20T10:57:25.208899179+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2026-01-20T10:57:25.212338080+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2026-01-20T10:57:25.215223876+00:00 stdout F image-registry.openshift-image-registry.svc:5000
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.log
2025-08-13T20:05:59.835111434+00:00 stderr F I0813 20:05:59.834569 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc0007986e0 cert-dir:0xc0007988c0 cert-secrets:0xc000798640 configmaps:0xc0007981e0 namespace:0xc000798000 optional-cert-configmaps:0xc000798820 optional-configmaps:0xc000798320 optional-secrets:0xc000798280 pod:0xc0007980a0 pod-manifest-dir:0xc000798460 resource-dir:0xc0007983c0 revision:0xc0001f5f40 secrets:0xc000798140 v:0xc0007992c0] [0xc0007992c0 0xc0001f5f40 0xc000798000 0xc0007980a0 0xc0007983c0 0xc000798460 0xc0007981e0 0xc000798320 0xc000798140 0xc000798280 0xc0007988c0 0xc0007986e0 0xc000798820 0xc000798640] [] map[cert-configmaps:0xc0007986e0 cert-dir:0xc0007988c0 cert-secrets:0xc000798640 configmaps:0xc0007981e0 help:0xc000799680 kubeconfig:0xc0000f9f40 log-flush-frequency:0xc000799220 namespace:0xc000798000 optional-cert-configmaps:0xc000798820 optional-cert-secrets:0xc000798780 optional-configmaps:0xc000798320 optional-secrets:0xc000798280 pod:0xc0007980a0 pod-manifest-dir:0xc000798460 pod-manifests-lock-file:0xc0007985a0 resource-dir:0xc0007983c0 revision:0xc0001f5f40 secrets:0xc000798140 timeout-duration:0xc000798500 v:0xc0007992c0 vmodule:0xc000799360] [0xc0000f9f40 0xc0001f5f40 0xc000798000 0xc0007980a0 0xc000798140 0xc0007981e0 0xc000798280 0xc000798320 0xc0007983c0 0xc000798460 0xc000798500 0xc0007985a0 0xc000798640 0xc0007986e0 0xc000798780 0xc000798820 0xc0007988c0 0xc000799220 0xc0007992c0 0xc000799360 0xc000799680] [0xc0007986e0 0xc0007988c0 0xc000798640 0xc0007981e0 0xc000799680 0xc0000f9f40 0xc000799220 0xc000798000 0xc000798820 0xc000798780 0xc000798320
0xc000798280 0xc0007980a0 0xc000798460 0xc0007985a0 0xc0007983c0 0xc0001f5f40 0xc000798140 0xc000798500 0xc0007992c0 0xc000799360] map[104:0xc000799680 118:0xc0007992c0] [] -1 0 0xc0006fdb30 true 0x73b100 []} 2025-08-13T20:05:59.835111434+00:00 stderr F I0813 20:05:59.834918 1 cmd.go:92] (*installerpod.InstallOptions)(0xc000583d40)({ 2025-08-13T20:05:59.835111434+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:05:59.835111434+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:05:59.835111434+00:00 stderr F Revision: (string) (len=2) "10", 2025-08-13T20:05:59.835111434+00:00 stderr F NodeName: (string) "", 2025-08-13T20:05:59.835111434+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager", 2025-08-13T20:05:59.835111434+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:05:59.835111434+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=27) "service-account-private-key", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=32) "cluster-policy-controller-config", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig", 
2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=10) "service-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=15) "recycler-config" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=12) "cloud-config" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=39) "kube-controller-manager-client-cert-key", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=10) "csr-signer" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:05:59.835111434+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=9) "client-ca" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", 2025-08-13T20:05:59.835111434+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:05:59.835111434+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:05:59.835111434+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:05:59.835111434+00:00 stderr F StaticPodManifestsLockFile: (string) 
"", 2025-08-13T20:05:59.835111434+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:05:59.835111434+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:05:59.835111434+00:00 stderr F }) 2025-08-13T20:05:59.842751153+00:00 stderr F I0813 20:05:59.838103 1 cmd.go:409] Getting controller reference for node crc 2025-08-13T20:05:59.852998977+00:00 stderr F I0813 20:05:59.852958 1 cmd.go:422] Waiting for installer revisions to settle for node crc 2025-08-13T20:05:59.857587378+00:00 stderr F I0813 20:05:59.857188 1 cmd.go:514] Waiting additional period after revisions have settled for node crc 2025-08-13T20:06:29.871751453+00:00 stderr F I0813 20:06:29.871543 1 cmd.go:520] Getting installer pods for node crc 2025-08-13T20:06:29.927650613+00:00 stderr F I0813 20:06:29.912246 1 cmd.go:538] Latest installer revision for node crc is: 10 2025-08-13T20:06:29.927820478+00:00 stderr F I0813 20:06:29.927747 1 cmd.go:427] Querying kubelet version for node crc 2025-08-13T20:06:29.937145415+00:00 stderr F I0813 20:06:29.936965 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:06:29.937145415+00:00 stderr F I0813 20:06:29.937036 1 cmd.go:289] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:06:29.937458994+00:00 stderr F I0813 20:06:29.937385 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:06:29.937458994+00:00 stderr F I0813 20:06:29.937440 1 cmd.go:225] Getting secrets ... 
2025-08-13T20:06:29.948920812+00:00 stderr F I0813 20:06:29.948461 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-10 2025-08-13T20:06:29.953436762+00:00 stderr F I0813 20:06:29.952585 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-10 2025-08-13T20:06:29.958541818+00:00 stderr F I0813 20:06:29.958434 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-10 2025-08-13T20:06:29.958541818+00:00 stderr F I0813 20:06:29.958509 1 cmd.go:238] Getting config maps ... 2025-08-13T20:06:30.028237794+00:00 stderr F I0813 20:06:30.028116 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-10 2025-08-13T20:06:30.033609938+00:00 stderr F I0813 20:06:30.033515 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-10 2025-08-13T20:06:30.036895452+00:00 stderr F I0813 20:06:30.036684 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-10 2025-08-13T20:06:30.041016810+00:00 stderr F I0813 20:06:30.040969 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-10 2025-08-13T20:06:30.045113907+00:00 stderr F I0813 20:06:30.045031 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-10 2025-08-13T20:06:30.078006779+00:00 stderr F I0813 20:06:30.077171 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-10 2025-08-13T20:06:30.286083348+00:00 stderr F I0813 20:06:30.284316 1 copy.go:60] Got configMap openshift-kube-controller-manager/service-ca-10 2025-08-13T20:06:30.480644879+00:00 stderr F I0813 20:06:30.480576 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-10 2025-08-13T20:06:30.680035208+00:00 stderr F I0813 20:06:30.679933 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-10: configmaps "cloud-config-10" not found 
2025-08-13T20:06:30.680208983+00:00 stderr F I0813 20:06:30.680162 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token" ... 2025-08-13T20:06:30.680323866+00:00 stderr F I0813 20:06:30.680302 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:06:30.682192650+00:00 stderr F I0813 20:06:30.682158 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:06:30.682834408+00:00 stderr F I0813 20:06:30.682731 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:06:30.683144557+00:00 stderr F I0813 20:06:30.683114 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:06:30.684131515+00:00 stderr F I0813 20:06:30.684090 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key" ... 2025-08-13T20:06:30.684222138+00:00 stderr F I0813 20:06:30.684198 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.pub" ... 2025-08-13T20:06:30.685318589+00:00 stderr F I0813 20:06:30.685287 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.key" ... 
2025-08-13T20:06:30.685529345+00:00 stderr F I0813 20:06:30.685503 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert" ... 2025-08-13T20:06:30.685589647+00:00 stderr F I0813 20:06:30.685570 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.key" ... 2025-08-13T20:06:30.687402839+00:00 stderr F I0813 20:06:30.687362 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.crt" ... 2025-08-13T20:06:30.689492049+00:00 stderr F I0813 20:06:30.689451 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config" ... 2025-08-13T20:06:30.689641083+00:00 stderr F I0813 20:06:30.689564 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config/config.yaml" ... 2025-08-13T20:06:30.707133994+00:00 stderr F I0813 20:06:30.707021 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config" ... 2025-08-13T20:06:30.707330309+00:00 stderr F I0813 20:06:30.707300 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config/config.yaml" ... 2025-08-13T20:06:30.714536146+00:00 stderr F I0813 20:06:30.714440 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig" ... 2025-08-13T20:06:30.714651599+00:00 stderr F I0813 20:06:30.714628 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig/kubeconfig" ... 
2025-08-13T20:06:30.728982389+00:00 stderr F I0813 20:06:30.718252 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig" ... 2025-08-13T20:06:30.729157004+00:00 stderr F I0813 20:06:30.729126 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:06:30.738554784+00:00 stderr F I0813 20:06:30.738445 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod" ... 2025-08-13T20:06:30.738554784+00:00 stderr F I0813 20:06:30.738503 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ... 2025-08-13T20:06:30.740369666+00:00 stderr F I0813 20:06:30.740296 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/pod.yaml" ... 2025-08-13T20:06:30.740596192+00:00 stderr F I0813 20:06:30.740528 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/version" ... 2025-08-13T20:06:30.740945882+00:00 stderr F I0813 20:06:30.740732 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config" ... 2025-08-13T20:06:30.740945882+00:00 stderr F I0813 20:06:30.740861 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config/recycler-pod.yaml" ... 2025-08-13T20:06:30.742336842+00:00 stderr F I0813 20:06:30.742225 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca" ... 
2025-08-13T20:06:30.742336842+00:00 stderr F I0813 20:06:30.742294 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca/ca-bundle.crt" ... 2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.743521 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca" ... 2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.743564 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca/ca-bundle.crt" ... 2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.745655 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ... 2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.745674 1 cmd.go:225] Getting secrets ... 2025-08-13T20:06:30.879011066+00:00 stderr F I0813 20:06:30.877620 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer 2025-08-13T20:06:31.081470183+00:00 stderr F I0813 20:06:31.080423 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key 2025-08-13T20:06:31.081644418+00:00 stderr F I0813 20:06:31.081623 1 cmd.go:238] Getting config maps ... 2025-08-13T20:06:31.371338561+00:00 stderr F I0813 20:06:31.371276 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca 2025-08-13T20:06:31.897416895+00:00 stderr F I0813 20:06:31.897357 1 copy.go:60] Got configMap openshift-kube-controller-manager/client-ca 2025-08-13T20:06:31.955715866+00:00 stderr F I0813 20:06:31.955186 1 copy.go:60] Got configMap openshift-kube-controller-manager/trusted-ca-bundle 2025-08-13T20:06:31.956504899+00:00 stderr F I0813 20:06:31.956057 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer" ... 
2025-08-13T20:06:31.957035874+00:00 stderr F I0813 20:06:31.956553 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.crt" ... 2025-08-13T20:06:31.957035874+00:00 stderr F I0813 20:06:31.956928 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.key" ... 2025-08-13T20:06:31.957107226+00:00 stderr F I0813 20:06:31.957059 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key" ... 2025-08-13T20:06:31.957107226+00:00 stderr F I0813 20:06:31.957075 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.key" ... 2025-08-13T20:06:31.957278241+00:00 stderr F I0813 20:06:31.957185 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.crt" ... 2025-08-13T20:06:31.957642931+00:00 stderr F I0813 20:06:31.957345 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca" ... 2025-08-13T20:06:31.962102239+00:00 stderr F I0813 20:06:31.960192 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-08-13T20:06:31.962102239+00:00 stderr F I0813 20:06:31.960413 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca" ... 2025-08-13T20:06:32.028939486+00:00 stderr F I0813 20:06:32.028378 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca/ca-bundle.crt" ... 
2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.034442 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle" ... 2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.034579 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.035400 1 cmd.go:331] Getting pod configmaps/kube-controller-manager-pod-10 -n openshift-kube-controller-manager 2025-08-13T20:06:32.068991654+00:00 stderr F I0813 20:06:32.065289 1 cmd.go:347] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:06:32.068991654+00:00 stderr F I0813 20:06:32.065360 1 cmd.go:375] Writing a pod under "kube-controller-manager-pod.yaml" key 2025-08-13T20:06:32.068991654+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"10"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f 
/etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true 
--feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false 
--feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n 2025-08-13T20:06:32.069063786+00:00 stderr F \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","comma
nd":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
2025-08-13T20:06:32.130591450+00:00 stderr F I0813 20:06:32.130436 1 cmd.go:606] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/kube-controller-manager-pod.yaml" ...
2025-08-13T20:06:32.143163911+00:00 stderr F I0813 20:06:32.143032 1 cmd.go:613] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ...
2025-08-13T20:06:32.143220332+00:00 stderr F I0813 20:06:32.143086 1 cmd.go:617] Writing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ...
2025-08-13T20:06:32.143220332+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"10"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n 
--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false 
--feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true 
--feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 
--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy 2025-08-13T20:06:32.143262553+00:00 stderr F -controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","comma
nd":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.log
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142757 1 flags.go:64] FLAG: --accesstoken-inactivity-timeout="0s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142853 1 flags.go:64] FLAG: --admission-control-config-file=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142859 1 flags.go:64] FLAG: --advertise-address=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142863 1 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142869 1 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142873 1 flags.go:64] FLAG: --audit-log-batch-max-size="1"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142876 1 flags.go:64] FLAG: --audit-log-batch-max-wait="0s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142879 1 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142882 1 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142886 1 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142891 1 flags.go:64] FLAG: --audit-log-compress="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142893 1 flags.go:64] FLAG: --audit-log-format="json"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142896 1 flags.go:64] FLAG: --audit-log-maxage="0"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 
10:49:38.142899 1 flags.go:64] FLAG: --audit-log-maxbackup="10"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142902 1 flags.go:64] FLAG: --audit-log-maxsize="100"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142905 1 flags.go:64] FLAG: --audit-log-mode="blocking"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142908 1 flags.go:64] FLAG: --audit-log-path="/var/log/oauth-apiserver/audit.log"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142911 1 flags.go:64] FLAG: --audit-log-truncate-enabled="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142914 1 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142918 1 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142921 1 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142924 1 flags.go:64] FLAG: --audit-policy-file="/var/run/configmaps/audit/policy.yaml"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142927 1 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142930 1 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142934 1 flags.go:64] FLAG: --audit-webhook-batch-max-size="400"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142936 1 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142939 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142944 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142947 1 flags.go:64] FLAG: 
--audit-webhook-batch-throttle-qps="10"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142950 1 flags.go:64] FLAG: --audit-webhook-config-file=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142953 1 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142956 1 flags.go:64] FLAG: --audit-webhook-mode="batch"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142959 1 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142962 1 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142965 1 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142968 1 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142971 1 flags.go:64] FLAG: --authentication-kubeconfig=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142974 1 flags.go:64] FLAG: --authentication-skip-lookup="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142977 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142980 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142982 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142988 1 flags.go:64] FLAG: --authorization-kubeconfig=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142991 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142994 1 flags.go:64] FLAG: 
--authorization-webhook-cache-unauthorized-ttl="10s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.142997 1 flags.go:64] FLAG: --bind-address="0.0.0.0"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143000 1 flags.go:64] FLAG: --cert-dir="apiserver.local.config/certificates"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143003 1 flags.go:64] FLAG: --client-ca-file=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143007 1 flags.go:64] FLAG: --contention-profiling="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143010 1 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143014 1 flags.go:64] FLAG: --debug-socket-path=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143017 1 flags.go:64] FLAG: --default-watch-cache-size="100"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143020 1 flags.go:64] FLAG: --delete-collection-workers="1"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143022 1 flags.go:64] FLAG: --disable-admission-plugins="[]"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143027 1 flags.go:64] FLAG: --egress-selector-config-file=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143030 1 flags.go:64] FLAG: --enable-admission-plugins="[]"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143034 1 flags.go:64] FLAG: --enable-garbage-collector="true"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143040 1 flags.go:64] FLAG: --enable-priority-and-fairness="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143048 1 flags.go:64] FLAG: --encryption-provider-config=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143052 1 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143055 1 flags.go:64] FLAG: 
--etcd-cafile="/var/run/configmaps/etcd-serving-ca/ca-bundle.crt"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143071 1 flags.go:64] FLAG: --etcd-certfile="/var/run/secrets/etcd-client/tls.crt"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143075 1 flags.go:64] FLAG: --etcd-compaction-interval="5m0s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143078 1 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143081 1 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143084 1 flags.go:64] FLAG: --etcd-healthcheck-timeout="10s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143087 1 flags.go:64] FLAG: --etcd-keyfile="/var/run/secrets/etcd-client/tls.key"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143090 1 flags.go:64] FLAG: --etcd-prefix="openshift.io"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143094 1 flags.go:64] FLAG: --etcd-readycheck-timeout="2s"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143096 1 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379]"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143101 1 flags.go:64] FLAG: --etcd-servers-overrides="[]"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143105 1 flags.go:64] FLAG: --external-hostname=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143108 1 flags.go:64] FLAG: --feature-gates=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143117 1 flags.go:64] FLAG: --goaway-chance="0"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143122 1 flags.go:64] FLAG: --help="false"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143125 1 flags.go:64] FLAG: --http2-max-streams-per-connection="1000"
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143128 1 flags.go:64] FLAG: --kubeconfig=""
2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143131 1 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143134 1 flags.go:64] FLAG: --livez-grace-period="0s" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143137 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143141 1 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143144 1 flags.go:64] FLAG: --max-requests-inflight="400" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143147 1 flags.go:64] FLAG: --min-request-timeout="1800" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143150 1 flags.go:64] FLAG: --permit-address-sharing="false" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143153 1 flags.go:64] FLAG: --permit-port-sharing="false" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143157 1 flags.go:64] FLAG: --profiling="true" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143160 1 flags.go:64] FLAG: --request-timeout="1m0s" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143164 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143170 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143173 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143178 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143184 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143188 1 flags.go:64] FLAG: --secure-port="8443" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 
10:49:38.143192 1 flags.go:64] FLAG: --shutdown-delay-duration="15s" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143195 1 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2026-01-20T10:49:38.145794485+00:00 stderr F I0120 10:49:38.143198 1 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2026-01-20T10:49:38.145794485+00:00 stderr P I0120 10:49:38.1432 2026-01-20T10:49:38.145978351+00:00 stderr F 02 1 flags.go:64] FLAG: --storage-backend="" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143205 1 flags.go:64] FLAG: --storage-media-type="application/json" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143208 1 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143212 1 flags.go:64] FLAG: --tls-cert-file="/var/run/secrets/serving-cert/tls.crt" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143215 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143223 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143226 1 flags.go:64] FLAG: --tls-private-key-file="/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143229 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143234 1 flags.go:64] FLAG: --tracing-config-file="" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143237 1 flags.go:64] FLAG: --v="2" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143240 1 flags.go:64] FLAG: --vmodule="" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 
10:49:38.143243 1 flags.go:64] FLAG: --watch-cache="true" 2026-01-20T10:49:38.145978351+00:00 stderr F I0120 10:49:38.143246 1 flags.go:64] FLAG: --watch-cache-sizes="[]" 2026-01-20T10:49:38.238400156+00:00 stderr F I0120 10:49:38.238320 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:39.117787352+00:00 stderr F I0120 10:49:39.117735 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:49:39.120636739+00:00 stderr F I0120 10:49:39.120605 1 audit.go:340] Using audit backend: ignoreErrors 2026-01-20T10:49:39.142097063+00:00 stderr F I0120 10:49:39.141115 1 plugins.go:157] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook. 2026-01-20T10:49:39.142097063+00:00 stderr F I0120 10:49:39.141160 1 plugins.go:160] Loaded 2 validating admission controller(s) successfully in the following order: ValidatingAdmissionPolicy,ValidatingAdmissionWebhook. 
2026-01-20T10:49:39.145556378+00:00 stderr F I0120 10:49:39.145509 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:39.145556378+00:00 stderr F I0120 10:49:39.145546 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:49:39.145613780+00:00 stderr F I0120 10:49:39.145580 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:49:39.145613780+00:00 stderr F I0120 10:49:39.145599 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2026-01-20T10:49:39.169335842+00:00 stderr F I0120 10:49:39.169281 1 store.go:1579] "Monitoring resource count at path" resource="oauthclients.oauth.openshift.io" path="//oauth/clients" 2026-01-20T10:49:39.177310255+00:00 stderr F I0120 10:49:39.177271 1 store.go:1579] "Monitoring resource count at path" resource="oauthauthorizetokens.oauth.openshift.io" path="//oauth/authorizetokens" 2026-01-20T10:49:39.184096412+00:00 stderr F I0120 10:49:39.183834 1 store.go:1579] "Monitoring resource count at path" resource="oauthaccesstokens.oauth.openshift.io" path="//oauth/accesstokens" 2026-01-20T10:49:39.190477976+00:00 stderr F I0120 10:49:39.190446 1 store.go:1579] "Monitoring resource count at path" resource="oauthclientauthorizations.oauth.openshift.io" path="//oauth/clientauthorizations" 2026-01-20T10:49:39.200147270+00:00 stderr F I0120 10:49:39.199364 1 handler.go:275] Adding GroupVersion oauth.openshift.io v1 to ResourceManager 2026-01-20T10:49:39.200147270+00:00 stderr F I0120 10:49:39.199788 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:39.200147270+00:00 stderr F I0120 10:49:39.199797 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:49:39.200735489+00:00 stderr F I0120 10:49:39.200710 1 cacher.go:451] cacher (oauthauthorizetokens.oauth.openshift.io): initialized 2026-01-20T10:49:39.200748169+00:00 stderr F I0120 10:49:39.200736 1 reflector.go:351] Caches populated for 
*oauth.OAuthAuthorizeToken from storage/cacher.go:/oauth/authorizetokens 2026-01-20T10:49:39.200980976+00:00 stderr F I0120 10:49:39.200960 1 cacher.go:451] cacher (oauthclientauthorizations.oauth.openshift.io): initialized 2026-01-20T10:49:39.201017997+00:00 stderr F I0120 10:49:39.201008 1 reflector.go:351] Caches populated for *oauth.OAuthClientAuthorization from storage/cacher.go:/oauth/clientauthorizations 2026-01-20T10:49:39.201226114+00:00 stderr F I0120 10:49:39.201190 1 cacher.go:451] cacher (oauthaccesstokens.oauth.openshift.io): initialized 2026-01-20T10:49:39.201226114+00:00 stderr F I0120 10:49:39.201198 1 cacher.go:451] cacher (oauthclients.oauth.openshift.io): initialized 2026-01-20T10:49:39.201241404+00:00 stderr F I0120 10:49:39.201223 1 reflector.go:351] Caches populated for *oauth.OAuthAccessToken from storage/cacher.go:/oauth/accesstokens 2026-01-20T10:49:39.201241404+00:00 stderr F I0120 10:49:39.201231 1 reflector.go:351] Caches populated for *oauth.OAuthClient from storage/cacher.go:/oauth/clients 2026-01-20T10:49:39.222146210+00:00 stderr F I0120 10:49:39.221281 1 store.go:1579] "Monitoring resource count at path" resource="users.user.openshift.io" path="//users" 2026-01-20T10:49:39.226471093+00:00 stderr F I0120 10:49:39.226435 1 cacher.go:451] cacher (users.user.openshift.io): initialized 2026-01-20T10:49:39.226528654+00:00 stderr F I0120 10:49:39.226518 1 reflector.go:351] Caches populated for *user.User from storage/cacher.go:/users 2026-01-20T10:49:39.236575420+00:00 stderr F I0120 10:49:39.236521 1 store.go:1579] "Monitoring resource count at path" resource="identities.user.openshift.io" path="//useridentities" 2026-01-20T10:49:39.238349334+00:00 stderr F I0120 10:49:39.238301 1 cacher.go:451] cacher (identities.user.openshift.io): initialized 2026-01-20T10:49:39.238349334+00:00 stderr F I0120 10:49:39.238322 1 reflector.go:351] Caches populated for *user.Identity from storage/cacher.go:/useridentities 
2026-01-20T10:49:39.244628925+00:00 stderr F I0120 10:49:39.244582 1 store.go:1579] "Monitoring resource count at path" resource="groups.user.openshift.io" path="//groups" 2026-01-20T10:49:39.248273067+00:00 stderr F I0120 10:49:39.248234 1 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager 2026-01-20T10:49:39.248373480+00:00 stderr F I0120 10:49:39.248345 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:39.248373480+00:00 stderr F I0120 10:49:39.248365 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:49:39.249315638+00:00 stderr F I0120 10:49:39.249283 1 cacher.go:451] cacher (groups.user.openshift.io): initialized 2026-01-20T10:49:39.249330159+00:00 stderr F I0120 10:49:39.249318 1 reflector.go:351] Caches populated for *user.Group from storage/cacher.go:/groups 2026-01-20T10:49:39.409579939+00:00 stderr F I0120 10:49:39.409529 1 genericapiserver.go:560] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s" 2026-01-20T10:49:39.410010092+00:00 stderr F I0120 10:49:39.409843 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:49:39.415539891+00:00 stderr F I0120 10:49:39.414810 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:39.415539891+00:00 stderr F I0120 10:49:39.414846 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:39.415539891+00:00 stderr F I0120 10:49:39.414866 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:39.415539891+00:00 stderr F I0120 10:49:39.414887 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:39.415539891+00:00 stderr F I0120 
10:49:39.414899 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:39.415539891+00:00 stderr F I0120 10:49:39.414911 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:39.415539891+00:00 stderr F I0120 10:49:39.414922 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:39.415613163+00:00 stderr F I0120 10:49:39.415590 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2026-01-20 10:49:39.415559791 +0000 UTC))" 2026-01-20T10:49:39.415907402+00:00 stderr F I0120 10:49:39.415876 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906178\" (2026-01-20 09:49:38 +0000 UTC to 2027-01-20 09:49:38 +0000 UTC (now=2026-01-20 10:49:39.41585152 +0000 UTC))" 2026-01-20T10:49:39.418281744+00:00 stderr F I0120 10:49:39.415908 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:39.418281744+00:00 stderr F I0120 10:49:39.416041 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:49:39.418281744+00:00 stderr F I0120 10:49:39.415995 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:39.418281744+00:00 stderr F 
I0120 10:49:39.417395 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.422623407+00:00 stderr F I0120 10:49:39.422298 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.426147764+00:00 stderr F I0120 10:49:39.426114 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.426886776+00:00 stderr F I0120 10:49:39.426847 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.427524647+00:00 stderr F I0120 10:49:39.427475 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.459580193+00:00 stderr F I0120 10:49:39.459515 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.491785334+00:00 stderr F I0120 10:49:39.491713 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.491917838+00:00 stderr F I0120 10:49:39.491879 1 reflector.go:351] Caches populated for *v1.OAuthClient from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.515336531+00:00 stderr F I0120 10:49:39.515263 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:39.515336531+00:00 stderr F I0120 10:49:39.515281 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:39.515336531+00:00 stderr F I0120 10:49:39.515291 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2026-01-20T10:49:39.515805285+00:00 stderr F I0120 10:49:39.515768 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:39.515738983 +0000 UTC))" 2026-01-20T10:49:39.515805285+00:00 stderr F I0120 10:49:39.515800 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:39.515787424 +0000 UTC))" 2026-01-20T10:49:39.515842126+00:00 stderr F I0120 10:49:39.515819 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:39.515805865 +0000 UTC))" 2026-01-20T10:49:39.515853226+00:00 stderr F I0120 10:49:39.515839 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:39.515828496 +0000 UTC))" 2026-01-20T10:49:39.515861777+00:00 
stderr F I0120 10:49:39.515855 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.515844196 +0000 UTC))" 2026-01-20T10:49:39.515892618+00:00 stderr F I0120 10:49:39.515870 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.515860157 +0000 UTC))" 2026-01-20T10:49:39.515892618+00:00 stderr F I0120 10:49:39.515889 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.515878447 +0000 UTC))" 2026-01-20T10:49:39.515914138+00:00 stderr F I0120 10:49:39.515906 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.515895138 +0000 UTC))" 
2026-01-20T10:49:39.515945569+00:00 stderr F I0120 10:49:39.515923 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:39.515910798 +0000 UTC))" 2026-01-20T10:49:39.515945569+00:00 stderr F I0120 10:49:39.515942 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:39.515932709 +0000 UTC))" 2026-01-20T10:49:39.515981370+00:00 stderr F I0120 10:49:39.515959 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.515946769 +0000 UTC))" 2026-01-20T10:49:39.516306290+00:00 stderr F I0120 10:49:39.516277 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2026-01-20 10:49:39.516262989 
+0000 UTC))" 2026-01-20T10:49:39.516578278+00:00 stderr F I0120 10:49:39.516549 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906178\" (2026-01-20 09:49:38 +0000 UTC to 2027-01-20 09:49:38 +0000 UTC (now=2026-01-20 10:49:39.516535607 +0000 UTC))" 2026-01-20T10:56:07.099174355+00:00 stderr F I0120 10:56:07.099115 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.099045292 +0000 UTC))" 2026-01-20T10:56:07.099174355+00:00 stderr F I0120 10:56:07.099157 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.099140844 +0000 UTC))" 2026-01-20T10:56:07.099223547+00:00 stderr F I0120 10:56:07.099183 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.099164425 +0000 UTC))" 2026-01-20T10:56:07.099223547+00:00 stderr F I0120 10:56:07.099205 1 tlsconfig.go:178] "Loaded client CA" 
index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.099189836 +0000 UTC))" 2026-01-20T10:56:07.099235947+00:00 stderr F I0120 10:56:07.099229 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099212836 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.099249 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099234627 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.099294 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099261808 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.099316 1 
tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099299739 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.099338 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.099321639 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.099359 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.0993449 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.099384 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.09936553 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.099426 1 
tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099390401 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.099864 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2026-01-20 10:56:07.099841113 +0000 UTC))" 2026-01-20T10:56:07.101174599+00:00 stderr F I0120 10:56:07.100214 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906178\" (2026-01-20 09:49:38 +0000 UTC to 2027-01-20 09:49:38 +0000 UTC (now=2026-01-20 10:56:07.100193783 +0000 UTC))" 2026-01-20T10:57:24.159514027+00:00 stderr F E0120 10:57:24.159439 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.159564709+00:00 stderr F E0120 10:57:24.159525 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.196281089+00:00 stderr F E0120 10:57:24.196196 1 webhook.go:253] Failed to make 
webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.196281089+00:00 stderr F E0120 10:57:24.196255 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.102243609+00:00 stderr F E0120 10:57:25.101741 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.102243609+00:00 stderr F E0120 10:57:25.101818 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.106372077+00:00 stderr F E0120 10:57:25.105947 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.106372077+00:00 stderr F E0120 10:57:25.106025 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.144741812+00:00 stderr F E0120 10:57:25.144673 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.144767933+00:00 stderr F E0120 10:57:25.144743 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.146113788+00:00 stderr F E0120 10:57:25.146054 1 webhook.go:253] 
Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.146138369+00:00 stderr F E0120 10:57:25.146129 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.944367542+00:00 stderr F E0120 10:57:35.944264 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.944367542+00:00 stderr F E0120 10:57:35.944328 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.949327156+00:00 stderr F E0120 10:57:45.949239 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.949327156+00:00 stderr F E0120 10:57:45.949292 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log
2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.198944 1 flags.go:64] FLAG: --accesstoken-inactivity-timeout="0s" 
2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200040 1 flags.go:64] FLAG: --admission-control-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200051 1 flags.go:64] FLAG: --advertise-address="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200058 1 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200067 1 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200074 1 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200078 1 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200089 1 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200093 1 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200099 1 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200106 1 flags.go:64] FLAG: --audit-log-compress="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200110 1 flags.go:64] FLAG: --audit-log-format="json" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200114 1 flags.go:64] FLAG: --audit-log-maxage="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200118 1 flags.go:64] FLAG: --audit-log-maxbackup="10" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200122 1 flags.go:64] FLAG: --audit-log-maxsize="100" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200126 1 flags.go:64] FLAG: --audit-log-mode="blocking" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200130 1 flags.go:64] FLAG: --audit-log-path="/var/log/oauth-apiserver/audit.log" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 
19:59:27.200134 1 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200138 1 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200143 1 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200148 1 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200152 1 flags.go:64] FLAG: --audit-policy-file="/var/run/configmaps/audit/policy.yaml" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200156 1 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200160 1 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200168 1 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200174 1 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200178 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200186 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200191 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200196 1 flags.go:64] FLAG: --audit-webhook-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200199 1 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200203 1 flags.go:64] FLAG: --audit-webhook-mode="batch" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200207 1 flags.go:64] FLAG: 
--audit-webhook-truncate-enabled="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200211 1 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200215 1 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200219 1 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200223 1 flags.go:64] FLAG: --authentication-kubeconfig="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200227 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200231 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200234 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200238 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200246 1 flags.go:64] FLAG: --authorization-kubeconfig="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200250 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200254 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200257 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200263 1 flags.go:64] FLAG: --cert-dir="apiserver.local.config/certificates" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200268 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200275 1 flags.go:64] FLAG: --contention-profiling="false" 
2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200281 1 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200288 1 flags.go:64] FLAG: --debug-socket-path="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200293 1 flags.go:64] FLAG: --default-watch-cache-size="100" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200298 1 flags.go:64] FLAG: --delete-collection-workers="1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200303 1 flags.go:64] FLAG: --disable-admission-plugins="[]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200310 1 flags.go:64] FLAG: --egress-selector-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200315 1 flags.go:64] FLAG: --enable-admission-plugins="[]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200321 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200326 1 flags.go:64] FLAG: --enable-priority-and-fairness="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200332 1 flags.go:64] FLAG: --encryption-provider-config="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200339 1 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200345 1 flags.go:64] FLAG: --etcd-cafile="/var/run/configmaps/etcd-serving-ca/ca-bundle.crt" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200350 1 flags.go:64] FLAG: --etcd-certfile="/var/run/secrets/etcd-client/tls.crt" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200355 1 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200367 1 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200464 1 
flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200542 1 flags.go:64] FLAG: --etcd-healthcheck-timeout="10s" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.200547 1 flags.go:64] FLAG: --etcd-keyfile="/var/run/secrets/etcd-client/tls.key" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215297 1 flags.go:64] FLAG: --etcd-prefix="openshift.io" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215322 1 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215329 1 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379]" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215350 1 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215356 1 flags.go:64] FLAG: --external-hostname="" 2025-08-13T19:59:27.219097400+00:00 stderr F I0813 19:59:27.215361 1 flags.go:64] FLAG: --feature-gates="" 2025-08-13T19:59:27.219097400+00:00 stderr F I0813 19:59:27.219074 1 flags.go:64] FLAG: --goaway-chance="0" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219097 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219104 1 flags.go:64] FLAG: --http2-max-streams-per-connection="1000" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219110 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219121 1 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219223 1 flags.go:64] FLAG: --livez-grace-period="0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219259 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219268 1 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2025-08-13T19:59:27.221894980+00:00 stderr F 
I0813 19:59:27.219272 1 flags.go:64] FLAG: --max-requests-inflight="400" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219277 1 flags.go:64] FLAG: --min-request-timeout="1800" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219281 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219291 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219295 1 flags.go:64] FLAG: --profiling="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219300 1 flags.go:64] FLAG: --request-timeout="1m0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219313 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219324 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219329 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219335 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219341 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219346 1 flags.go:64] FLAG: --secure-port="8443" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219350 1 flags.go:64] FLAG: --shutdown-delay-duration="15s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219354 1 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219358 1 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219364 1 flags.go:64] FLAG: --storage-backend="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219370 1 flags.go:64] FLAG: 
--storage-media-type="application/json" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219375 1 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219380 1 flags.go:64] FLAG: --tls-cert-file="/var/run/secrets/serving-cert/tls.crt" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219385 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219395 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219402 1 flags.go:64] FLAG: --tls-private-key-file="/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219431 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219608 1 flags.go:64] FLAG: --tracing-config-file="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219614 1 flags.go:64] FLAG: --v="2" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219619 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219629 1 flags.go:64] FLAG: --watch-cache="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219637 1 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-08-13T19:59:28.503555713+00:00 stderr F I0813 19:59:28.503189 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:31.354631633+00:00 stderr F I0813 19:59:31.354516 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 
2025-08-13T19:59:31.413209243+00:00 stderr F I0813 19:59:31.412390 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T19:59:31.525990398+00:00 stderr F I0813 19:59:31.521558 1 plugins.go:157] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook. 2025-08-13T19:59:31.525990398+00:00 stderr F I0813 19:59:31.521671 1 plugins.go:160] Loaded 2 validating admission controller(s) successfully in the following order: ValidatingAdmissionPolicy,ValidatingAdmissionWebhook. 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533000 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533154 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533585 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533605 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T19:59:31.715911232+00:00 stderr F I0813 19:59:31.714652 1 store.go:1579] "Monitoring resource count at path" resource="oauthclients.oauth.openshift.io" path="//oauth/clients" 2025-08-13T19:59:31.790381324+00:00 stderr F I0813 19:59:31.789477 1 store.go:1579] "Monitoring resource count at path" resource="oauthauthorizetokens.oauth.openshift.io" path="//oauth/authorizetokens" 2025-08-13T19:59:31.811382352+00:00 stderr F I0813 19:59:31.811291 1 store.go:1579] "Monitoring resource count at path" resource="oauthaccesstokens.oauth.openshift.io" path="//oauth/accesstokens" 2025-08-13T19:59:31.844340722+00:00 stderr F I0813 19:59:31.844266 1 cacher.go:451] cacher (oauthaccesstokens.oauth.openshift.io): initialized 2025-08-13T19:59:31.845079293+00:00 stderr F I0813 19:59:31.845048 1 reflector.go:351] Caches populated for *oauth.OAuthAccessToken from storage/cacher.go:/oauth/accesstokens 
2025-08-13T19:59:31.851908797+00:00 stderr F I0813 19:59:31.851750 1 cacher.go:451] cacher (oauthclients.oauth.openshift.io): initialized 2025-08-13T19:59:31.857767484+00:00 stderr F I0813 19:59:31.853637 1 cacher.go:451] cacher (oauthauthorizetokens.oauth.openshift.io): initialized 2025-08-13T19:59:31.858423753+00:00 stderr F I0813 19:59:31.858378 1 reflector.go:351] Caches populated for *oauth.OAuthClient from storage/cacher.go:/oauth/clients 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.895619 1 reflector.go:351] Caches populated for *oauth.OAuthAuthorizeToken from storage/cacher.go:/oauth/authorizetokens 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.940494 1 store.go:1579] "Monitoring resource count at path" resource="oauthclientauthorizations.oauth.openshift.io" path="//oauth/clientauthorizations" 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.944551 1 cacher.go:451] cacher (oauthclientauthorizations.oauth.openshift.io): initialized 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.944618 1 reflector.go:351] Caches populated for *oauth.OAuthClientAuthorization from storage/cacher.go:/oauth/clientauthorizations 2025-08-13T19:59:32.129051887+00:00 stderr F I0813 19:59:32.128892 1 handler.go:275] Adding GroupVersion oauth.openshift.io v1 to ResourceManager 2025-08-13T19:59:32.146996099+00:00 stderr F I0813 19:59:32.141533 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:32.146996099+00:00 stderr F I0813 19:59:32.141612 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:32.268331218+00:00 stderr F I0813 19:59:32.268131 1 store.go:1579] "Monitoring resource count at path" resource="users.user.openshift.io" path="//users" 2025-08-13T19:59:32.280715111+00:00 stderr F I0813 19:59:32.280485 1 cacher.go:451] cacher (users.user.openshift.io): initialized 2025-08-13T19:59:32.280715111+00:00 stderr F I0813 19:59:32.280534 1 reflector.go:351] Caches populated for *user.User 
from storage/cacher.go:/users 2025-08-13T19:59:32.342159052+00:00 stderr F I0813 19:59:32.340089 1 store.go:1579] "Monitoring resource count at path" resource="identities.user.openshift.io" path="//useridentities" 2025-08-13T19:59:32.365422735+00:00 stderr F I0813 19:59:32.364535 1 store.go:1579] "Monitoring resource count at path" resource="groups.user.openshift.io" path="//groups" 2025-08-13T19:59:32.365656342+00:00 stderr F I0813 19:59:32.365546 1 cacher.go:451] cacher (identities.user.openshift.io): initialized 2025-08-13T19:59:32.366472145+00:00 stderr F I0813 19:59:32.366259 1 reflector.go:351] Caches populated for *user.Identity from storage/cacher.go:/useridentities 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.367976 1 cacher.go:451] cacher (groups.user.openshift.io): initialized 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.368038 1 reflector.go:351] Caches populated for *user.Group from storage/cacher.go:/groups 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369329 1 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369391 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369402 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:35.053460550+00:00 stderr F I0813 19:59:35.053277 1 genericapiserver.go:560] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s" 2025-08-13T19:59:35.054209721+00:00 stderr F I0813 19:59:35.054173 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T19:59:35.064731441+00:00 stderr F I0813 19:59:35.064541 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] 
validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:35.064180125 +0000 UTC))" 2025-08-13T19:59:35.071705980+00:00 stderr F I0813 19:59:35.071409 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:35.069769704 +0000 UTC))" 2025-08-13T19:59:35.074109038+00:00 stderr F I0813 19:59:35.073763 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:35.074109038+00:00 stderr F I0813 19:59:35.073924 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:35.074109038+00:00 stderr F I0813 19:59:35.074058 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:35.074169020+00:00 stderr F I0813 19:59:35.074129 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T19:59:35.074615162+00:00 stderr F I0813 19:59:35.074521 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075587 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075671 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075697 1 configmap_cafile_content.go:202] "Starting controller" 
name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075736 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.084971758+00:00 stderr F I0813 19:59:35.084607 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:35.090388342+00:00 stderr F I0813 19:59:35.090352 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.104283678+00:00 stderr F I0813 19:59:35.095711 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.261017356+00:00 stderr F I0813 19:59:35.258405 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.261017356+00:00 stderr F E0813 19:59:35.259591 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.261017356+00:00 stderr F E0813 19:59:35.259635 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.291381701+00:00 stderr F I0813 19:59:35.291261 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.314580743+00:00 stderr F I0813 19:59:35.310533 1 reflector.go:351] Caches populated for *v1.Namespace from 
k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.315930531+00:00 stderr F E0813 19:59:35.315769 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.316024594+00:00 stderr F E0813 19:59:35.316004 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.360429779+00:00 stderr F I0813 19:59:35.326507 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.374394 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375589 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:35.375543711 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375633 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:35.375602292 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375657 1 tlsconfig.go:178] "Loaded client 
CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:35.375639933 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375676 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:35.375662334 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375692 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375681064 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375710 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375696965 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 
19:59:35.375761 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375715045 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375922 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375902321 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.376298 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:35.376273311 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.376628 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:35.376610761 +0000 UTC))" 2025-08-13T19:59:35.398331219+00:00 stderr F I0813 19:59:35.398292 1 
reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T19:59:35.401314314+00:00 stderr F E0813 19:59:35.401282 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.413428499+00:00 stderr F E0813 19:59:35.413179 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.448248762+00:00 stderr F E0813 19:59:35.423880 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.448248762+00:00 stderr F E0813 19:59:35.435259 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.464321790+00:00 stderr F E0813 19:59:35.464255 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.498595977+00:00 stderr F E0813 19:59:35.481648 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.505466903+00:00 stderr F I0813 19:59:35.505434 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:35.562928661+00:00 stderr F E0813 19:59:35.545409 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.673230705+00:00 stderr F I0813 19:59:35.673175 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T19:59:35.675129009+00:00 stderr F E0813 19:59:35.675000 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.728270224+00:00 stderr F E0813 19:59:35.727924 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.837193959+00:00 stderr F E0813 19:59:35.837129 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.033313399+00:00 stderr F E0813 19:59:36.033147 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.035455731+00:00 stderr F E0813 19:59:36.035426 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.036464989+00:00 stderr F E0813 19:59:36.036407 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.037394686+00:00 stderr F E0813 19:59:36.037322 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.037973162+00:00 stderr F E0813 19:59:36.033144 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.048474692+00:00 stderr F E0813 19:59:36.048441 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.052049914+00:00 stderr F I0813 19:59:36.052021 1 reflector.go:351] Caches populated for *v1.OAuthClient from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T19:59:36.158320383+00:00 stderr F E0813 19:59:36.158265 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.376588465+00:00 stderr F E0813 19:59:36.376519 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.377338296+00:00 stderr F E0813 19:59:36.377303 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.377953704+00:00 stderr F E0813 19:59:36.377920 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.378338595+00:00 stderr F E0813 19:59:36.378310 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.379342223+00:00 stderr F E0813 19:59:36.378599 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.436415990+00:00 stderr F E0813 19:59:36.436345 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.437664406+00:00 stderr F E0813 19:59:36.437540 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.472188440+00:00 stderr F E0813 19:59:36.445173 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.472631222+00:00 stderr F E0813 19:59:36.445254 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.476919835+00:00 stderr F E0813 19:59:36.445319 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.477208573+00:00 stderr F E0813 19:59:36.447532 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.478147860+00:00 stderr F E0813 19:59:36.448082 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.478380826+00:00 stderr F E0813 19:59:36.448223 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.478628153+00:00 stderr F E0813 19:59:36.448301 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.479070036+00:00 stderr F E0813 19:59:36.448361 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.480056024+00:00 stderr F E0813 19:59:36.448428 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.480478096+00:00 stderr F E0813 19:59:36.448483 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.483198044+00:00 stderr F E0813 19:59:36.483108 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.483739059+00:00 stderr F E0813 19:59:36.483712 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.485178480+00:00 stderr F E0813 19:59:36.484054 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.485286323+00:00 stderr F E0813 19:59:36.484127 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.485370796+00:00 stderr F E0813 19:59:36.484190 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.485545511+00:00 stderr F E0813 19:59:36.484253 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.485641593+00:00 stderr F E0813 19:59:36.484421 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.485720866+00:00 stderr F E0813 19:59:36.484505 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.485938202+00:00 stderr F E0813 19:59:36.484602 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.486062785+00:00 stderr F E0813 19:59:36.484761 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.486180459+00:00 stderr F E0813 19:59:36.484900 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.486455607+00:00 stderr F E0813 19:59:36.484981 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.575577707+00:00 stderr F E0813 19:59:36.575523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.577465621+00:00 stderr F E0813 19:59:36.577436 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.578437578+00:00 stderr F E0813 19:59:36.578405 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.580095616+00:00 stderr F E0813 19:59:36.580072 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.597659256+00:00 stderr F E0813 19:59:36.580571 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.598071008+00:00 stderr F E0813 19:59:36.580626 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.598313015+00:00 stderr F E0813 19:59:36.580696 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.598541692+00:00 stderr F E0813 19:59:36.580749 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.602086093+00:00 stderr F E0813 19:59:36.595085 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.602660729+00:00 stderr F E0813 19:59:36.595307 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.606749556+00:00 stderr F E0813 19:59:36.606721 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.608390602+00:00 stderr F E0813 19:59:36.608366 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.663521664+00:00 stderr F E0813 19:59:36.663462 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.665381357+00:00 stderr F E0813 19:59:36.663881 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.665555102+00:00 stderr F E0813 19:59:36.665531 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.665762728+00:00 stderr F E0813 19:59:36.663938 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.679138919+00:00 stderr F E0813 19:59:36.664129 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.680098706+00:00 stderr F E0813 19:59:36.664191 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.680434846+00:00 stderr F E0813 19:59:36.664262 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.681091575+00:00 stderr F E0813 19:59:36.664327 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.688758883+00:00 stderr F E0813 19:59:36.688652 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.689161305+00:00 stderr F E0813 19:59:36.689135 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.689456673+00:00 stderr F E0813 19:59:36.689423 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.689703040+00:00 stderr F E0813 19:59:36.689677 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.690070311+00:00 stderr F E0813 19:59:36.690047 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.690914385+00:00 stderr F E0813 19:59:36.690884 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.691368538+00:00 stderr F E0813 19:59:36.691183 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.733156989+00:00 stderr F E0813 19:59:36.733102 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.756324479+00:00 stderr F E0813 19:59:36.756160 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.759497590+00:00 stderr F E0813 19:59:36.759459 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.760244381+00:00 stderr F E0813 19:59:36.760215 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.851534213+00:00 stderr F E0813 19:59:36.851139 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.865752729+00:00 stderr F E0813 19:59:36.852028 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.875198648+00:00 stderr F E0813 19:59:36.852119 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.875459345+00:00 stderr F E0813 19:59:36.852199 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.875731643+00:00 stderr F E0813 19:59:36.852256 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.876069383+00:00 stderr F E0813 19:59:36.852310 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.876304159+00:00 stderr F E0813 19:59:36.852393 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.876563787+00:00 stderr F E0813 19:59:36.852462 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.877315348+00:00 stderr F E0813 19:59:36.852523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.877532394+00:00 stderr F E0813 19:59:36.852576 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.877748271+00:00 stderr F E0813 19:59:36.870143 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.936192617+00:00 stderr F E0813 19:59:36.929752 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.937054831+00:00 stderr F E0813 19:59:36.932134 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.940238372+00:00 stderr F E0813 19:59:36.932220 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.940929152+00:00 stderr F E0813 19:59:36.932304 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.941176139+00:00 stderr F E0813 19:59:36.932386 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.458372 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.458449 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459342 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459485 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459713 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.716233812+00:00 stderr F E0813 19:59:37.716104 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.716739256+00:00 stderr F E0813 19:59:37.716716 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.717144728+00:00 stderr F E0813 19:59:37.717093 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.717200919+00:00 stderr F E0813 19:59:37.717185 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.717411685+00:00 stderr F E0813 19:59:37.717359 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.971477538+00:00 stderr F E0813 19:59:37.969981 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:38.120652770+00:00 stderr F E0813 19:59:38.120511 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.121609857+00:00 stderr F E0813 19:59:38.121584 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.122111541+00:00 stderr F E0813 19:59:38.122088 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.122400310+00:00 stderr F E0813 19:59:38.122369 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.122666387+00:00 stderr F E0813 19:59:38.122641 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.159218769+00:00 stderr F E0813 19:59:38.159141 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:39.318221776+00:00 stderr F E0813 19:59:39.317938 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.318472713+00:00 stderr F E0813 19:59:39.317948 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.319165733+00:00 stderr F E0813 19:59:39.319135 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.334884941+00:00 stderr F E0813 19:59:39.334715 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.335225161+00:00 stderr F E0813 19:59:39.335185 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.650348344+00:00 stderr F E0813 19:59:39.650258 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.650348344+00:00 stderr F E0813 19:59:39.650321 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.650915920+00:00 stderr F E0813 19:59:39.650751 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.651055594+00:00 stderr F E0813 19:59:39.651002 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.651371383+00:00 stderr F E0813 19:59:39.651350 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:40.533147208+00:00 stderr F E0813 19:59:40.533083 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:40.720975642+00:00 stderr F E0813 19:59:40.720818 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.924524750+00:00 stderr F E0813 19:59:41.924344 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:41.974490634+00:00 stderr F E0813 19:59:41.974428 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:41.983171752+00:00 stderr F E0813 19:59:41.982641 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:41.983171752+00:00 stderr F E0813 19:59:41.982901 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:41.983934714+00:00 stderr F E0813 19:59:41.983729 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.260972311+00:00 stderr F E0813 19:59:42.260295 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.261708322+00:00 stderr F E0813 19:59:42.261350 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.261708322+00:00 stderr F E0813 19:59:42.261654 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.261923968+00:00 stderr
F E0813 19:59:42.261891 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:42.262538365+00:00 stderr F E0813 19:59:42.262510 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:45.662298766+00:00 stderr F E0813 19:59:45.658946 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:45.847894196+00:00 stderr F E0813 19:59:45.845683 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.205464043+00:00 stderr F E0813 19:59:47.202257 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.242596192+00:00 stderr F E0813 19:59:47.242459 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.243157758+00:00 stderr F E0813 19:59:47.243104 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by 
unknown authority" 2025-08-13T19:59:47.243861318+00:00 stderr F E0813 19:59:47.243727 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.250236379+00:00 stderr F E0813 19:59:47.250172 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.656272564+00:00 stderr F E0813 19:59:47.656166 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.656481800+00:00 stderr F E0813 19:59:47.656440 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.656747918+00:00 stderr F E0813 19:59:47.656688 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.656765598+00:00 stderr F E0813 19:59:47.656740 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.662089500+00:00 stderr F E0813 19:59:47.661973 1 authentication.go:73] "Unable to authenticate the request" err="verifying 
certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.712139347+00:00 stderr F http2: server: error reading preface from client 10.217.0.2:56190: read tcp 10.217.0.39:8443->10.217.0.2:56190: read: connection reset by peer 2025-08-13T19:59:51.870432504+00:00 stderr F I0813 19:59:51.869647 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:51.881604152+00:00 stderr F I0813 19:59:51.881522 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.881392326 +0000 UTC))" 2025-08-13T19:59:51.881726736+00:00 stderr F I0813 19:59:51.881707 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.881681854 +0000 UTC))" 2025-08-13T19:59:51.881908551+00:00 stderr F I0813 19:59:51.881888 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.881749776 +0000 UTC))" 
2025-08-13T19:59:51.881968363+00:00 stderr F I0813 19:59:51.881953 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.881936802 +0000 UTC))" 2025-08-13T19:59:51.882024604+00:00 stderr F I0813 19:59:51.882012 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.881997093 +0000 UTC))" 2025-08-13T19:59:51.882072705+00:00 stderr F I0813 19:59:51.882060 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882043885 +0000 UTC))" 2025-08-13T19:59:51.882116687+00:00 stderr F I0813 19:59:51.882104 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 
19:59:51.882090576 +0000 UTC))" 2025-08-13T19:59:51.882160598+00:00 stderr F I0813 19:59:51.882148 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882134397 +0000 UTC))" 2025-08-13T19:59:51.882230560+00:00 stderr F I0813 19:59:51.882212 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882190329 +0000 UTC))" 2025-08-13T19:59:51.883008762+00:00 stderr F I0813 19:59:51.882979 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:51.88294675 +0000 UTC))" 2025-08-13T19:59:51.883424654+00:00 stderr F I0813 19:59:51.883398 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:51.883371553 +0000 UTC))" 
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773747 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.773664261 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773882 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.773861226 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773915 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.773890167 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773933 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.773921418 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 
stderr F I0813 20:00:05.773954 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773940988 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773970 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773959979 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774004 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77398973 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774020 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77400938 +0000 UTC))" 
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774038 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.774024961 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774059 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.774046851 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774377 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 20:00:05.77435506 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774709 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:00:05.77468649 +0000 UTC))" 
2025-08-13T20:00:56.698411453+00:00 stderr F I0813 20:00:56.678254 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:00:56.698411453+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:00:56.845173228+00:00 stderr F I0813 20:00:56.844884 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:56.845173228+00:00 stderr F I0813 20:00:56.845073 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:56.846576258+00:00 stderr F I0813 20:00:56.846323 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847425 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:56.847258947 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847863 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:56.847678039 +0000 UTC))" 
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847919 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:56.847876525 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847944 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:56.847925996 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847962 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.847951797 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847984 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC 
(now=2025-08-13 20:00:56.847969098 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848007 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.847993848 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848028 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.848013279 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848067 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:56.84805345 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848361 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 
+0000 UTC (now=2025-08-13 20:00:56.848342018 +0000 UTC))" 2025-08-13T20:00:56.848745390+00:00 stderr F I0813 20:00:56.848729 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:00:56.848690248 +0000 UTC))" 2025-08-13T20:00:56.853486745+00:00 stderr F I0813 20:00:56.850681 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:00:56.850582102 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.028636 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.028399108 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.028708 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.028692296 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 
20:01:00.028740 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.028714357 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029003 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.028910823 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029024 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029012746 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029080 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029056577 +0000 UTC))" 
2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029122 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029084858 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029261 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029130349 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029285 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.029270753 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029304 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.029293694 +0000 UTC))" 
2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029331 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029319794 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029748 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:01:00.029691875 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.030145 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:01:00.030123767 +0000 UTC))" 2025-08-13T20:01:05.265928136+00:00 stderr F I0813 20:01:05.264561 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:05.265928136+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:10.946272075+00:00 stderr F E0813 20:01:10.945945 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled 2025-08-13T20:01:10.946272075+00:00 stderr F E0813 20:01:10.946239 1 writers.go:122] 
apiserver was unable to write a JSON response: http: Handler timeout 2025-08-13T20:01:10.948325794+00:00 stderr F E0813 20:01:10.948042 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout 2025-08-13T20:01:10.948325794+00:00 stderr F E0813 20:01:10.948091 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout 2025-08-13T20:01:10.950701332+00:00 stderr F E0813 20:01:10.950364 1 timeout.go:142] post-timeout activity - time-elapsed: 3.955753ms, GET "/apis/oauth.openshift.io/v1/oauthclients/console" result: 2025-08-13T20:01:16.667006236+00:00 stderr F I0813 20:01:16.666703 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:16.667006236+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:31.303546439+00:00 stderr F I0813 20:01:31.302924 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:31.303546439+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:36.667417336+00:00 stderr F I0813 20:01:36.667273 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:36.667417336+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:44.989727596+00:00 stderr F I0813 20:01:44.989550 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:44.989727596+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:47.000469451+00:00 stderr F I0813 20:01:47.000415 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:47.000469451+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:54.660572879+00:00 stderr F I0813 20:01:54.660450 1 healthz.go:261] etcd check failed: healthz 
2025-08-13T20:01:54.660572879+00:00 stderr F [-]etcd failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:54.660946280+00:00 stderr F E0813 20:01:54.660916 1 timeout.go:142] post-timeout activity - time-elapsed: 5.657571ms, GET "/healthz" result: 2025-08-13T20:01:54.661201217+00:00 stderr F I0813 20:01:54.661031 1 healthz.go:261] etcd check failed: healthz 2025-08-13T20:01:54.661201217+00:00 stderr F [-]etcd failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:58.103602303+00:00 stderr F I0813 20:01:58.103359 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:58.103602303+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:02:03.279348222+00:00 stderr F I0813 20:02:03.279140 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:02:03.279348222+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:02:15.043214406+00:00 stderr F I0813 20:02:15.042674 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:02:15.043214406+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:04:04.348973941+00:00 stderr F E0813 20:04:04.345887 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.348973941+00:00 stderr F E0813 20:04:04.348915 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.416483897+00:00 stderr F E0813 20:04:04.416321 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:04:04.416483897+00:00 stderr F E0813 20:04:04.416456 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.085312727+00:00 stderr F E0813 20:04:05.085217 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.085312727+00:00 stderr F E0813 20:04:05.085293 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.104089753+00:00 stderr F E0813 20:04:05.103978 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.104149364+00:00 stderr F E0813 20:04:05.104081 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297203 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297288 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297651 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297674 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.360691791+00:00 stderr F E0813 20:04:38.359378 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.360691791+00:00 stderr F E0813 20:04:38.359546 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:39.335531786+00:00 stderr F E0813 20:04:39.335387 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:39.335588038+00:00 stderr F E0813 20:04:39.335522 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.029295599+00:00 stderr F E0813 20:04:41.029154 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.029350590+00:00 stderr F E0813 20:04:41.029302 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.275168065+00:00 stderr F E0813 20:04:47.274663 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.275290638+00:00 stderr F E0813 20:04:47.275243 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.036167873+00:00 stderr F E0813 20:04:59.033754 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.036167873+00:00 stderr F E0813 20:04:59.033999 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.093002537+00:00 stderr F E0813 20:05:05.092767 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.093002537+00:00 stderr F E0813 20:05:05.092979 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.308754306+00:00 stderr F E0813 20:05:05.308472 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.308754306+00:00 stderr F E0813 20:05:05.308717 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.326847394+00:00 stderr F E0813 20:05:05.326709 1 webhook.go:253] Failed to make webhook authorizer 
request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.326904556+00:00 stderr F E0813 20:05:05.326887 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.355996059+00:00 stderr F E0813 20:05:05.355847 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.355996059+00:00 stderr F E0813 20:05:05.355970 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:06.264399003+00:00 stderr F E0813 20:05:06.264250 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:06.265136094+00:00 stderr F E0813 20:05:06.264572 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.280454782+00:00 stderr F E0813 20:05:16.280262 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.280454782+00:00 stderr F E0813 20:05:16.280434 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:23.498213610+00:00 stderr F I0813 20:06:23.497964 1 reflector.go:351] Caches populated 
for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:28.010316348+00:00 stderr F I0813 20:06:28.010101 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:28.096337502+00:00 stderr F I0813 20:06:28.096272 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:48.196574406+00:00 stderr F I0813 20:06:48.196290 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:50.297769168+00:00 stderr F I0813 20:06:50.297496 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:05.311458663+00:00 stderr F I0813 20:07:05.310420 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:08:42.713645834+00:00 stderr F E0813 20:08:42.709166 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.725764152+00:00 stderr F E0813 20:08:42.725565 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.766877421+00:00 stderr F E0813 20:08:42.766703 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.766877421+00:00 stderr F E0813 20:08:42.766847 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:08:43.513741693+00:00 stderr F E0813 20:08:43.512988 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.513741693+00:00 stderr F E0813 20:08:43.513085 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.530236006+00:00 stderr F E0813 20:08:43.530011 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.530236006+00:00 stderr F E0813 20:08:43.530131 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.609963312+00:00 stderr F E0813 20:08:43.609275 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.609963312+00:00 stderr F E0813 20:08:43.609384 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.611957989+00:00 stderr F E0813 20:08:43.611516 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.611957989+00:00 stderr F E0813 20:08:43.611585 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570522845+00:00 stderr F E0813 20:08:48.570328 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570522845+00:00 stderr F E0813 20:08:48.570428 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.887212386+00:00 stderr F E0813 20:08:49.887128 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.887265298+00:00 stderr F E0813 20:08:49.887209 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.262668022+00:00 stderr F E0813 20:08:52.262575 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.263017712+00:00 stderr F E0813 20:08:52.262984 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.801090063+00:00 stderr F E0813 20:08:56.800852 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.801152225+00:00 stderr F E0813 20:08:56.801084 1 errors.go:77] Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:28.515678168+00:00 stderr F I0813 20:09:28.515417 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:46.544141888+00:00 stderr F I0813 20:09:46.540980 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:50.709580955+00:00 stderr F I0813 20:09:50.709457 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:54.399265802+00:00 stderr F I0813 20:09:54.397571 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:55.196098577+00:00 stderr F I0813 20:09:55.196024 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:14.687619245+00:00 stderr F I0813 20:10:14.687467 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:17:57.759289405+00:00 stderr F I0813 20:17:57.759058 1 trace.go:236] Trace[646062886]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8e440eb5-de6b-4d07-ad72-6fd790f8653a,client:10.217.0.62,api-group:oauth.openshift.io,api-version:v1,name:console,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients/console,user-agent:console/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:17:57.161) (total time: 588ms): 2025-08-13T20:17:57.759289405+00:00 stderr F Trace[646062886]: ---"About to write a response" 588ms (20:17:57.750) 2025-08-13T20:17:57.759289405+00:00 stderr F Trace[646062886]: [588.849764ms] 
[588.849764ms] END 2025-08-13T20:18:02.690719743+00:00 stderr F I0813 20:18:02.690551 1 trace.go:236] Trace[2103687003]: "Create etcd3" audit-id:f118d6a4-62e6-4076-a8b6-218f0a0d636e,key:/oauth/clients/openshift-browser-client,type:*oauth.OAuthClient,resource:oauthclients.oauth.openshift.io (13-Aug-2025 20:18:01.702) (total time: 988ms): 2025-08-13T20:18:02.690719743+00:00 stderr F Trace[2103687003]: ---"Txn call succeeded" 987ms (20:18:02.690) 2025-08-13T20:18:02.690719743+00:00 stderr F Trace[2103687003]: [988.013605ms] [988.013605ms] END 2025-08-13T20:18:02.951389737+00:00 stderr F I0813 20:18:02.950466 1 trace.go:236] Trace[353295370]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f118d6a4-62e6-4076-a8b6-218f0a0d636e,client:10.217.0.19,api-group:oauth.openshift.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:18:01.701) (total time: 1248ms): 2025-08-13T20:18:02.951389737+00:00 stderr F Trace[353295370]: ---"Write to database call failed" len:206,err:oauthclients.oauth.openshift.io "openshift-browser-client" already exists 1248ms (20:18:02.950) 2025-08-13T20:18:02.951389737+00:00 stderr F Trace[353295370]: [1.248658908s] [1.248658908s] END 2025-08-13T20:18:04.535705811+00:00 stderr F I0813 20:18:04.531464 1 trace.go:236] Trace[1861376822]: "Create etcd3" audit-id:523a0143-4800-4c01-b2ee-b100a17a7106,key:/oauth/clients/openshift-challenging-client,type:*oauth.OAuthClient,resource:oauthclients.oauth.openshift.io (13-Aug-2025 20:18:02.996) (total time: 1534ms): 2025-08-13T20:18:04.535705811+00:00 stderr F Trace[1861376822]: ---"Txn call succeeded" 1534ms (20:18:04.531) 2025-08-13T20:18:04.535705811+00:00 stderr F Trace[1861376822]: [1.534461529s] [1.534461529s] END 2025-08-13T20:18:06.381304026+00:00 stderr F I0813 20:18:06.380288 1 
trace.go:236] Trace[1300541338]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:523a0143-4800-4c01-b2ee-b100a17a7106,client:10.217.0.19,api-group:oauth.openshift.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:18:02.985) (total time: 3395ms): 2025-08-13T20:18:06.381304026+00:00 stderr F Trace[1300541338]: ---"Write to database call failed" len:167,err:oauthclients.oauth.openshift.io "openshift-challenging-client" already exists 3383ms (20:18:06.379) 2025-08-13T20:18:06.381304026+00:00 stderr F Trace[1300541338]: [3.395104984s] [3.395104984s] END 2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.326608 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.322968 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.324161 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.329882780+00:00 stderr F I0813 20:42:36.329763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.352735829+00:00 stderr F I0813 20:42:36.323150 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.352735829+00:00 stderr F I0813 20:42:36.324215 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:42.122575675+00:00 stderr F I0813 20:42:42.121383 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:42.122575675+00:00 stderr F I0813 20:42:42.121637 1 genericapiserver.go:689] 
"[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:42.122837923+00:00 stderr F I0813 20:42:42.122675 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks has completed 2025-08-13T20:42:42.123355858+00:00 stderr F I0813 20:42:42.122480 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving 2025-08-13T20:42:42.134867230+00:00 stderr F W0813 20:42:42.134581 1 genericapiserver.go:1060] failed to create event openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd.185b6e4e3e4981aa: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/events": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log
2025-08-13T20:11:02.960889364+00:00 stderr F I0813 20:11:02.960329 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:11:02.962080249+00:00 stderr F I0813 20:11:02.961753 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.16.0-202406131906.p0.g1432fe0.assembly.stream.el9-1432fe0) 2025-08-13T20:11:02.976121501+00:00 stderr F I0813 20:11:02.976055 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3deb112ca908d86a8b7f07feb4e0da8204aab510e2799d2dccdab5e5905d1b24" 2025-08-13T20:11:02.976121501+00:00 stderr F I0813 20:11:02.976091 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f69708d7711c9fdc19d1b60591f04a3061fe1f796853e2daea9edea688b2e086" 2025-08-13T20:11:02.976374458+00:00 stderr F I0813 20:11:02.976349 1 standalone_apiserver.go:105] Started health checks at 0.0.0.0:8443 2025-08-13T20:11:02.977037737+00:00 stderr F I0813 20:11:02.976985 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers... 
2025-08-13T20:11:03.001296893+00:00 stderr F I0813 20:11:03.000593 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager/openshift-master-controllers 2025-08-13T20:11:03.001400006+00:00 stderr F I0813 20:11:03.001133 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager", Name:"openshift-master-controllers", UID:"05722deb-fc4c-4763-8689-9fea6b1f7ec9", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' controller-manager-778975cc4f-x5vcf became leader 2025-08-13T20:11:03.004702171+00:00 stderr F I0813 20:11:03.004599 1 controller_manager.go:145] Starting "openshift.io/serviceaccount" 2025-08-13T20:11:03.004702171+00:00 stderr F I0813 20:11:03.004686 1 serviceaccount.go:16] openshift.io/serviceaccount: no managed names specified 2025-08-13T20:11:03.004732031+00:00 stderr F W0813 20:11:03.004699 1 controller_manager.go:152] Skipping "openshift.io/serviceaccount" 2025-08-13T20:11:03.004732031+00:00 stderr F I0813 20:11:03.004708 1 controller_manager.go:145] Starting "openshift.io/origin-namespace" 2025-08-13T20:11:03.012836514+00:00 stderr F I0813 20:11:03.012692 1 controller_manager.go:155] Started "openshift.io/origin-namespace" 2025-08-13T20:11:03.012836514+00:00 stderr F I0813 20:11:03.012731 1 controller_manager.go:145] Starting "openshift.io/image-import" 2025-08-13T20:11:03.017441236+00:00 stderr F I0813 20:11:03.017343 1 imagestream_controller.go:66] Starting image stream controller 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021134 1 controller_manager.go:155] Started "openshift.io/image-import" 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021241 1 controller_manager.go:145] Starting "openshift.io/templateinstancefinalizer" 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021350 1 scheduled_image_controller.go:68] Starting scheduled import controller 2025-08-13T20:11:03.028759010+00:00 
stderr F I0813 20:11:03.028660 1 controller_manager.go:155] Started "openshift.io/templateinstancefinalizer"
2025-08-13T20:11:03.028759010+00:00 stderr F I0813 20:11:03.028734 1 controller_manager.go:145] Starting "openshift.io/unidling"
2025-08-13T20:11:03.029112391+00:00 stderr F I0813 20:11:03.028885 1 templateinstance_finalizer.go:189] TemplateInstanceFinalizer controller waiting for cache sync
2025-08-13T20:11:03.037609334+00:00 stderr F I0813 20:11:03.037503 1 controller_manager.go:155] Started "openshift.io/unidling"
2025-08-13T20:11:03.038359756+00:00 stderr F I0813 20:11:03.038263 1 controller_manager.go:145] Starting "openshift.io/builder-serviceaccount"
2025-08-13T20:11:03.041640580+00:00 stderr F I0813 20:11:03.041595 1 controller_manager.go:155] Started "openshift.io/builder-serviceaccount"
2025-08-13T20:11:03.041756663+00:00 stderr F I0813 20:11:03.041706 1 controller_manager.go:145] Starting "openshift.io/deployer-serviceaccount"
2025-08-13T20:11:03.042480464+00:00 stderr F I0813 20:11:03.042400 1 serviceaccounts_controller.go:111] "Starting service account controller"
2025-08-13T20:11:03.042550836+00:00 stderr F I0813 20:11:03.042535 1 shared_informer.go:311] Waiting for caches to sync for service account
2025-08-13T20:11:03.045358456+00:00 stderr F I0813 20:11:03.045299 1 controller_manager.go:155] Started "openshift.io/deployer-serviceaccount"
2025-08-13T20:11:03.045358456+00:00 stderr F I0813 20:11:03.045337 1 controller_manager.go:145] Starting "openshift.io/deploymentconfig"
2025-08-13T20:11:03.045713826+00:00 stderr F I0813 20:11:03.045604 1 serviceaccounts_controller.go:111] "Starting service account controller"
2025-08-13T20:11:03.045736167+00:00 stderr F I0813 20:11:03.045724 1 shared_informer.go:311] Waiting for caches to sync for service account
2025-08-13T20:11:03.051008128+00:00 stderr F I0813 20:11:03.050886 1 controller_manager.go:155] Started "openshift.io/deploymentconfig"
2025-08-13T20:11:03.051008128+00:00 stderr F I0813 20:11:03.050944 1 controller_manager.go:145] Starting "openshift.io/templateinstance"
2025-08-13T20:11:03.051386649+00:00 stderr F I0813 20:11:03.051242 1 factory.go:78] Starting deploymentconfig controller
2025-08-13T20:11:03.065455903+00:00 stderr F I0813 20:11:03.065352 1 controller_manager.go:155] Started "openshift.io/templateinstance"
2025-08-13T20:11:03.065455903+00:00 stderr F I0813 20:11:03.065397 1 controller_manager.go:145] Starting "openshift.io/serviceaccount-pull-secrets"
2025-08-13T20:11:03.068100168+00:00 stderr F I0813 20:11:03.068064 1 controller_manager.go:155] Started "openshift.io/serviceaccount-pull-secrets"
2025-08-13T20:11:03.068212132+00:00 stderr F I0813 20:11:03.068186 1 controller_manager.go:145] Starting "openshift.io/deployer-rolebindings"
2025-08-13T20:11:03.068317465+00:00 stderr F I0813 20:11:03.068249 1 registry_urls_observation_controller.go:139] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_urls"
2025-08-13T20:11:03.068317465+00:00 stderr F I0813 20:11:03.068310 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_urls
2025-08-13T20:11:03.068393337+00:00 stderr F I0813 20:11:03.068341 1 keyid_observation_controller.go:164] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_kids"
2025-08-13T20:11:03.068393337+00:00 stderr F I0813 20:11:03.068373 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_kids
2025-08-13T20:11:03.068410747+00:00 stderr F I0813 20:11:03.068401 1 legacy_token_secret_controller.go:109] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret"
2025-08-13T20:11:03.068422458+00:00 stderr F I0813 20:11:03.068408 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret
2025-08-13T20:11:03.068434378+00:00 stderr F I0813 20:11:03.068426 1 image_pull_secret_controller.go:301] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret"
2025-08-13T20:11:03.068446038+00:00 stderr F I0813 20:11:03.068432 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_image-pull-secret
2025-08-13T20:11:03.068457779+00:00 stderr F I0813 20:11:03.068448 1 legacy_image_pull_secret_controller.go:131] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret"
2025-08-13T20:11:03.068457779+00:00 stderr F I0813 20:11:03.068454 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret
2025-08-13T20:11:03.068704496+00:00 stderr F I0813 20:11:03.068203 1 service_account_controller.go:336] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_service-account"
2025-08-13T20:11:03.068704496+00:00 stderr F I0813 20:11:03.068622 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_service-account
2025-08-13T20:11:03.076382426+00:00 stderr F I0813 20:11:03.076273 1 controller_manager.go:155] Started "openshift.io/deployer-rolebindings"
2025-08-13T20:11:03.076555351+00:00 stderr F I0813 20:11:03.076483 1 controller_manager.go:145] Starting "openshift.io/image-signature-import"
2025-08-13T20:11:03.077265761+00:00 stderr F I0813 20:11:03.076414 1 defaultrolebindings.go:154] Starting DeployerRoleBindingController
2025-08-13T20:11:03.077265761+00:00 stderr F I0813 20:11:03.077204 1 shared_informer.go:311] Waiting for caches to sync for DeployerRoleBindingController
2025-08-13T20:11:03.080546645+00:00 stderr F I0813 20:11:03.080439 1 controller_manager.go:155] Started "openshift.io/image-signature-import"
2025-08-13T20:11:03.080546645+00:00 stderr F I0813 20:11:03.080472 1 controller_manager.go:145] Starting "openshift.io/deployer"
2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094601 1 controller_manager.go:155] Started "openshift.io/deployer"
2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094726 1 controller_manager.go:145] Starting "openshift.io/image-trigger"
2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094683 1 factory.go:73] Starting deployer controller
2025-08-13T20:11:03.113845090+00:00 stderr F I0813 20:11:03.113658 1 controller_manager.go:155] Started "openshift.io/image-trigger"
2025-08-13T20:11:03.113946443+00:00 stderr F I0813 20:11:03.113871 1 image_trigger_controller.go:229] Starting trigger controller
2025-08-13T20:11:03.114174979+00:00 stderr F I0813 20:11:03.114046 1 controller_manager.go:145] Starting "openshift.io/image-puller-rolebindings"
2025-08-13T20:11:03.118686329+00:00 stderr F I0813 20:11:03.118620 1 controller_manager.go:155] Started "openshift.io/image-puller-rolebindings"
2025-08-13T20:11:03.118686329+00:00 stderr F W0813 20:11:03.118641 1 controller_manager.go:142] "openshift.io/default-rolebindings" is disabled
2025-08-13T20:11:03.118686329+00:00 stderr F I0813 20:11:03.118648 1 controller_manager.go:145] Starting "openshift.io/build"
2025-08-13T20:11:03.119373948+00:00 stderr F I0813 20:11:03.119263 1 defaultrolebindings.go:154] Starting ImagePullerRoleBindingController
2025-08-13T20:11:03.119373948+00:00 stderr F I0813 20:11:03.119301 1 shared_informer.go:311] Waiting for caches to sync for ImagePullerRoleBindingController
2025-08-13T20:11:03.126268746+00:00 stderr F I0813 20:11:03.126188 1 controller_manager.go:155] Started "openshift.io/build"
2025-08-13T20:11:03.126268746+00:00 stderr F I0813 20:11:03.126228 1 controller_manager.go:145] Starting "openshift.io/build-config-change"
2025-08-13T20:11:03.135650695+00:00 stderr F I0813 20:11:03.135601 1 controller_manager.go:155] Started "openshift.io/build-config-change"
2025-08-13T20:11:03.135730517+00:00 stderr F I0813 20:11:03.135712 1 controller_manager.go:145] Starting "openshift.io/builder-rolebindings"
2025-08-13T20:11:03.139568287+00:00 stderr F I0813 20:11:03.139490 1 controller_manager.go:155] Started "openshift.io/builder-rolebindings"
2025-08-13T20:11:03.139568287+00:00 stderr F I0813 20:11:03.139538 1 controller_manager.go:157] Started Origin Controllers
2025-08-13T20:11:03.139651340+00:00 stderr F I0813 20:11:03.139628 1 defaultrolebindings.go:154] Starting BuilderRoleBindingController
2025-08-13T20:11:03.139741142+00:00 stderr F I0813 20:11:03.139723 1 shared_informer.go:311] Waiting for caches to sync for BuilderRoleBindingController
2025-08-13T20:11:03.169260069+00:00 stderr F I0813 20:11:03.169193 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.173518971+00:00 stderr F I0813 20:11:03.173458 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.176474546+00:00 stderr F I0813 20:11:03.176362 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.177665040+00:00 stderr F I0813 20:11:03.177622 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.180083009+00:00 stderr F I0813 20:11:03.180003 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.181215371+00:00 stderr F I0813 20:11:03.181106 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.181507830+00:00 stderr F I0813 20:11:03.181444 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.182751745+00:00 stderr F I0813 20:11:03.182712 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.185134084+00:00 stderr F I0813 20:11:03.183022 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.190246390+00:00 stderr F I0813 20:11:03.190190 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.194325617+00:00 stderr F I0813 20:11:03.194225 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.198378524+00:00 stderr F W0813 20:11:03.198291 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:11:03.199765823+00:00 stderr F I0813 20:11:03.199544 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.202071779+00:00 stderr F I0813 20:11:03.201023 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.214545197+00:00 stderr F I0813 20:11:03.210094 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.221080394+00:00 stderr F I0813 20:11:03.220985 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.222953158+00:00 stderr F I0813 20:11:03.222883 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.223201495+00:00 stderr F I0813 20:11:03.223176 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.224708568+00:00 stderr F I0813 20:11:03.224663 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.227238901+00:00 stderr F I0813 20:11:03.227174 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.227449147+00:00 stderr F I0813 20:11:03.227385 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.231337349+00:00 stderr F I0813 20:11:03.231271 1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller
2025-08-13T20:11:03.234503929+00:00 stderr F I0813 20:11:03.234457 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.239160863+00:00 stderr F I0813 20:11:03.238714 1 buildconfig_controller.go:212] Starting buildconfig controller
2025-08-13T20:11:03.255859232+00:00 stderr F I0813 20:11:03.255749 1 factory.go:85] deploymentconfig controller caches are synced. Starting workers.
2025-08-13T20:11:03.258393844+00:00 stderr F I0813 20:11:03.258294 1 shared_informer.go:318] Caches are synced for service account
2025-08-13T20:11:03.269350588+00:00 stderr F I0813 20:11:03.269203 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.269546314+00:00 stderr F I0813 20:11:03.269467 1 shared_informer.go:318] Caches are synced for service account
2025-08-13T20:11:03.279697525+00:00 stderr F I0813 20:11:03.279610 1 templateinstance_controller.go:297] Starting TemplateInstance controller
2025-08-13T20:11:03.279697525+00:00 stderr F I0813 20:11:03.279670 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_urls
2025-08-13T20:11:03.279753797+00:00 stderr F I0813 20:11:03.279691 1 registry_urls_observation_controller.go:146] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_urls"
2025-08-13T20:11:03.280877749+00:00 stderr F I0813 20:11:03.280840 1 shared_informer.go:318] Caches are synced for DeployerRoleBindingController
2025-08-13T20:11:03.282687271+00:00 stderr F W0813 20:11:03.282655 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:11:03.286173031+00:00 stderr F I0813 20:11:03.286089 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.300431929+00:00 stderr F I0813 20:11:03.300296 1 factory.go:80] Deployer controller caches are synced. Starting workers.
2025-08-13T20:11:03.323603564+00:00 stderr F I0813 20:11:03.323502 1 shared_informer.go:318] Caches are synced for ImagePullerRoleBindingController
2025-08-13T20:11:03.366003709+00:00 stderr F I0813 20:11:03.365879 1 shared_informer.go:318] Caches are synced for BuilderRoleBindingController
2025-08-13T20:11:03.371257520+00:00 stderr F I0813 20:11:03.369556 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.392538160+00:00 stderr F I0813 20:11:03.392303 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.500976489+00:00 stderr F I0813 20:11:03.497362 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.514602170+00:00 stderr F I0813 20:11:03.513510 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.527231062+00:00 stderr F I0813 20:11:03.526879 1 build_controller.go:503] Starting build controller
2025-08-13T20:11:03.527231062+00:00 stderr F I0813 20:11:03.526940 1 build_controller.go:505] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000
2025-08-13T20:11:03.568669160+00:00 stderr F I0813 20:11:03.568575 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret
2025-08-13T20:11:03.568727282+00:00 stderr F I0813 20:11:03.568691 1 legacy_image_pull_secret_controller.go:138] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret"
2025-08-13T20:11:03.568727282+00:00 stderr F I0813 20:11:03.568720 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_service-account
2025-08-13T20:11:03.568827745+00:00 stderr F I0813 20:11:03.568734 1 service_account_controller.go:343] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_service-account"
2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.568993 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_kids
2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569026 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret
2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569044 1 legacy_token_secret_controller.go:116] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret"
2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569038 1 keyid_observation_controller.go:172] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_kids"
2025-08-13T20:11:03.569341419+00:00 stderr F I0813 20:11:03.569012 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_image-pull-secret
2025-08-13T20:11:03.569357520+00:00 stderr F I0813 20:11:03.569339 1 image_pull_secret_controller.go:327] Waiting for service account token signing cert to be observed
2025-08-13T20:11:03.569367700+00:00 stderr F I0813 20:11:03.569352 1 image_pull_secret_controller.go:313] Waiting for image registry urls to be observed
2025-08-13T20:11:03.569367700+00:00 stderr F I0813 20:11:03.569356 1 image_pull_secret_controller.go:330] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-08-13T20:11:03.569382131+00:00 stderr F I0813 20:11:03.569370 1 image_pull_secret_controller.go:317] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-08-13T20:11:03.569448183+00:00 stderr F I0813 20:11:03.569389 1 image_pull_secret_controller.go:374] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret"
2025-08-13T20:19:56.300590604+00:00 stderr F W0813 20:19:56.295341 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:20:24.817863086+00:00 stderr F I0813 20:20:24.810877 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:21:03.289154765+00:00 stderr F I0813 20:21:03.287183 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-08-13T20:21:03.519750035+00:00 stderr F I0813 20:21:03.519604 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-08-13T20:25:55.305263486+00:00 stderr F W0813 20:25:55.303637 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:29:57.085327979+00:00 stderr F I0813 20:29:57.083719 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:31:03.288080738+00:00 stderr F I0813 20:31:03.284011 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-08-13T20:31:03.518212103+00:00 stderr F I0813 20:31:03.518089 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-08-13T20:34:21.325502573+00:00 stderr F W0813 20:34:21.324495 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:41:03.284951366+00:00 stderr F I0813 20:41:03.282391 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-08-13T20:41:03.572943138+00:00 stderr F I0813 20:41:03.572884 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-08-13T20:42:35.632999370+00:00 stderr F I0813 20:42:35.631768 1 keyid_observation_controller.go:174] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_kids"
2025-08-13T20:42:35.634115802+00:00 stderr F I0813 20:42:35.633951 1 project_finalizer_controller.go:74] Shutting down
2025-08-13T20:42:35.634352309+00:00 stderr F I0813 20:42:35.634323 1 legacy_token_secret_controller.go:118] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret"
2025-08-13T20:42:35.634432091+00:00 stderr F I0813 20:42:35.634416 1 service_account_controller.go:345] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_service-account"
2025-08-13T20:42:35.634506093+00:00 stderr F I0813 20:42:35.634490 1 legacy_image_pull_secret_controller.go:140] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret"
2025-08-13T20:42:35.635735189+00:00 stderr F I0813 20:42:35.635662 1 serviceaccounts_controller.go:123] "Shutting down service account controller"
2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636859 1 scheduled_image_controller.go:81] Shutting down image stream controller
2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636904 1 imagestream_controller.go:81] Shutting down image stream controller
2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636963 1 image_trigger_controller.go:245] Shutting down trigger controller
2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638352 1 image_pull_secret_controller.go:376] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret"
2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638478 1 serviceaccounts_controller.go:123] "Shutting down service account controller"
2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638527 1 buildconfig_controller.go:219] Shutting down buildconfig controller
2025-08-13T20:42:35.638753276+00:00 stderr F I0813 20:42:35.638736 1 build_controller.go:521] Shutting down build controller
2025-08-13T20:42:35.638896900+00:00 stderr F I0813 20:42:35.638876 1 signature_import_controller.go:81] Shutting down
2025-08-13T20:42:35.638987433+00:00 stderr F I0813 20:42:35.638942 1 factory.go:88] Shutting down deployer controller
2025-08-13T20:42:35.639000763+00:00 stderr F I0813 20:42:35.638986 1 defaultrolebindings.go:166] Shutting down DeployerRoleBindingController
2025-08-13T20:42:35.639026874+00:00 stderr F I0813 20:42:35.634119 1 registry_urls_observation_controller.go:148] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_urls"
2025-08-13T20:42:35.639449976+00:00 stderr F I0813 20:42:35.638947 1 defaultrolebindings.go:166] Shutting down BuilderRoleBindingController
2025-08-13T20:42:35.639600830+00:00 stderr F I0813 20:42:35.639579 1 templateinstance_finalizer.go:201] Stopping TemplateInstanceFinalizer controller
2025-08-13T20:42:35.639742184+00:00 stderr F I0813 20:42:35.639722 1 defaultrolebindings.go:166] Shutting down ImagePullerRoleBindingController
2025-08-13T20:42:35.640471215+00:00 stderr F I0813 20:42:35.640011 1 factory.go:95] Shutting down deploymentconfig controller
2025-08-13T20:42:35.652143072+00:00 stderr F W0813 20:42:35.649302 1 controller_manager.go:107] Controller Manager received stop signal: leaderelection lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.log
2026-01-20T10:49:37.336279618+00:00 stderr F I0120 10:49:37.335224 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2026-01-20T10:49:37.367116887+00:00 stderr F I0120 10:49:37.363714 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.16.0-202406131906.p0.g1432fe0.assembly.stream.el9-1432fe0)
2026-01-20T10:49:37.373354557+00:00 stderr F I0120 10:49:37.372893 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3deb112ca908d86a8b7f07feb4e0da8204aab510e2799d2dccdab5e5905d1b24"
2026-01-20T10:49:37.373354557+00:00 stderr F I0120 10:49:37.373332 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f69708d7711c9fdc19d1b60591f04a3061fe1f796853e2daea9edea688b2e086"
2026-01-20T10:49:37.375176033+00:00 stderr F I0120 10:49:37.373303 1 standalone_apiserver.go:105] Started health checks at 0.0.0.0:8443
2026-01-20T10:49:37.377927017+00:00 stderr F I0120 10:49:37.377884 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...
2026-01-20T10:49:37.447457935+00:00 stderr F I0120 10:49:37.447397 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager/openshift-master-controllers
2026-01-20T10:49:37.448871257+00:00 stderr F I0120 10:49:37.448025 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager", Name:"openshift-master-controllers", UID:"05722deb-fc4c-4763-8689-9fea6b1f7ec9", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"40212", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' controller-manager-778975cc4f-x5vcf became leader
2026-01-20T10:49:37.465234266+00:00 stderr F I0120 10:49:37.465174 1 controller_manager.go:145] Starting "openshift.io/unidling"
2026-01-20T10:49:37.486806713+00:00 stderr F I0120 10:49:37.486757 1 controller_manager.go:155] Started "openshift.io/unidling"
2026-01-20T10:49:37.486806713+00:00 stderr F W0120 10:49:37.486774 1 controller_manager.go:142] "openshift.io/default-rolebindings" is disabled
2026-01-20T10:49:37.486806713+00:00 stderr F I0120 10:49:37.486780 1 controller_manager.go:145] Starting "openshift.io/build-config-change"
2026-01-20T10:49:37.502148480+00:00 stderr F I0120 10:49:37.499461 1 controller_manager.go:155] Started "openshift.io/build-config-change"
2026-01-20T10:49:37.502148480+00:00 stderr F I0120 10:49:37.499480 1 controller_manager.go:145] Starting "openshift.io/image-trigger"
2026-01-20T10:49:37.513800305+00:00 stderr F I0120 10:49:37.513741 1 controller_manager.go:155] Started "openshift.io/image-trigger"
2026-01-20T10:49:37.513800305+00:00 stderr F I0120 10:49:37.513767 1 controller_manager.go:145] Starting "openshift.io/image-signature-import"
2026-01-20T10:49:37.517555390+00:00 stderr F I0120 10:49:37.517525 1 image_trigger_controller.go:229] Starting trigger controller
2026-01-20T10:49:37.527321397+00:00 stderr F I0120 10:49:37.525557 1 controller_manager.go:155] Started "openshift.io/image-signature-import"
2026-01-20T10:49:37.527321397+00:00 stderr F I0120 10:49:37.525577 1 controller_manager.go:145] Starting "openshift.io/image-import"
2026-01-20T10:49:37.530735011+00:00 stderr F I0120 10:49:37.530704 1 imagestream_controller.go:66] Starting image stream controller
2026-01-20T10:49:37.540087846+00:00 stderr F I0120 10:49:37.539310 1 controller_manager.go:155] Started "openshift.io/image-import"
2026-01-20T10:49:37.540087846+00:00 stderr F I0120 10:49:37.539326 1 controller_manager.go:145] Starting "openshift.io/serviceaccount"
2026-01-20T10:49:37.540087846+00:00 stderr F I0120 10:49:37.539333 1 serviceaccount.go:16] openshift.io/serviceaccount: no managed names specified
2026-01-20T10:49:37.540087846+00:00 stderr F W0120 10:49:37.539338 1 controller_manager.go:152] Skipping "openshift.io/serviceaccount"
2026-01-20T10:49:37.540087846+00:00 stderr F I0120 10:49:37.539343 1 controller_manager.go:145] Starting "openshift.io/builder-rolebindings"
2026-01-20T10:49:37.540087846+00:00 stderr F I0120 10:49:37.539439 1 scheduled_image_controller.go:68] Starting scheduled import controller
2026-01-20T10:49:37.553091862+00:00 stderr F I0120 10:49:37.552680 1 controller_manager.go:155] Started "openshift.io/builder-rolebindings"
2026-01-20T10:49:37.553091862+00:00 stderr F I0120 10:49:37.552700 1 controller_manager.go:145] Starting "openshift.io/deploymentconfig"
2026-01-20T10:49:37.562304863+00:00 stderr F I0120 10:49:37.562097 1 defaultrolebindings.go:154] Starting BuilderRoleBindingController
2026-01-20T10:49:37.562304863+00:00 stderr F I0120 10:49:37.562114 1 shared_informer.go:311] Waiting for caches to sync for BuilderRoleBindingController
2026-01-20T10:49:37.570044219+00:00 stderr F I0120 10:49:37.569657 1 controller_manager.go:155] Started "openshift.io/deploymentconfig"
2026-01-20T10:49:37.570044219+00:00 stderr F I0120 10:49:37.569685 1 controller_manager.go:145] Starting "openshift.io/deployer-rolebindings"
2026-01-20T10:49:37.570044219+00:00 stderr F I0120 10:49:37.569922 1 factory.go:78] Starting deploymentconfig controller
2026-01-20T10:49:37.577437914+00:00 stderr F I0120 10:49:37.577405 1 controller_manager.go:155] Started "openshift.io/deployer-rolebindings"
2026-01-20T10:49:37.577437914+00:00 stderr F I0120 10:49:37.577426 1 controller_manager.go:145] Starting "openshift.io/origin-namespace"
2026-01-20T10:49:37.580108365+00:00 stderr F I0120 10:49:37.577692 1 defaultrolebindings.go:154] Starting DeployerRoleBindingController
2026-01-20T10:49:37.580108365+00:00 stderr F I0120 10:49:37.577703 1 shared_informer.go:311] Waiting for caches to sync for DeployerRoleBindingController
2026-01-20T10:49:37.585394956+00:00 stderr F I0120 10:49:37.585051 1 controller_manager.go:155] Started "openshift.io/origin-namespace"
2026-01-20T10:49:37.585394956+00:00 stderr F I0120 10:49:37.585089 1 controller_manager.go:145] Starting "openshift.io/builder-serviceaccount"
2026-01-20T10:49:37.592633537+00:00 stderr F I0120 10:49:37.591621 1 controller_manager.go:155] Started "openshift.io/builder-serviceaccount"
2026-01-20T10:49:37.592633537+00:00 stderr F I0120 10:49:37.591657 1 controller_manager.go:145] Starting "openshift.io/build"
2026-01-20T10:49:37.597570617+00:00 stderr F I0120 10:49:37.596095 1 serviceaccounts_controller.go:111] "Starting service account controller"
2026-01-20T10:49:37.597570617+00:00 stderr F I0120 10:49:37.596149 1 shared_informer.go:311] Waiting for caches to sync for service account
2026-01-20T10:49:37.598300280+00:00 stderr F I0120 10:49:37.597951 1 controller_manager.go:155] Started "openshift.io/build"
2026-01-20T10:49:37.598300280+00:00 stderr F I0120 10:49:37.597970 1 controller_manager.go:145] Starting "openshift.io/templateinstance"
2026-01-20T10:49:37.626153018+00:00 stderr F I0120 10:49:37.623491 1 controller_manager.go:155] Started "openshift.io/templateinstance"
2026-01-20T10:49:37.626153018+00:00 stderr F I0120 10:49:37.623511 1 controller_manager.go:145] Starting "openshift.io/templateinstancefinalizer"
2026-01-20T10:49:37.633189602+00:00 stderr F I0120 10:49:37.633132 1 controller_manager.go:155] Started "openshift.io/templateinstancefinalizer"
2026-01-20T10:49:37.633189602+00:00 stderr F I0120 10:49:37.633157 1 controller_manager.go:145] Starting "openshift.io/serviceaccount-pull-secrets"
2026-01-20T10:49:37.633397488+00:00 stderr F I0120 10:49:37.633366 1 templateinstance_finalizer.go:189] TemplateInstanceFinalizer controller waiting for cache sync
2026-01-20T10:49:37.640465844+00:00 stderr F I0120 10:49:37.640431 1 controller_manager.go:155] Started "openshift.io/serviceaccount-pull-secrets"
2026-01-20T10:49:37.640465844+00:00 stderr F I0120 10:49:37.640450 1 controller_manager.go:145] Starting "openshift.io/deployer-serviceaccount"
2026-01-20T10:49:37.640658520+00:00 stderr F I0120 10:49:37.640625 1 service_account_controller.go:336] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_service-account"
2026-01-20T10:49:37.640658520+00:00 stderr F I0120 10:49:37.640642 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_service-account
2026-01-20T10:49:37.640667860+00:00 stderr F I0120 10:49:37.640657 1 keyid_observation_controller.go:164] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_kids"
2026-01-20T10:49:37.640667860+00:00 stderr F I0120 10:49:37.640661 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_kids
2026-01-20T10:49:37.640691601+00:00 stderr F I0120 10:49:37.640678 1 registry_urls_observation_controller.go:139] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_urls"
2026-01-20T10:49:37.640691601+00:00 stderr F I0120 10:49:37.640688 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_urls
2026-01-20T10:49:37.640739992+00:00 stderr F I0120 10:49:37.640721 1 image_pull_secret_controller.go:301] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret"
2026-01-20T10:49:37.640739992+00:00 stderr F I0120 10:49:37.640730 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_image-pull-secret
2026-01-20T10:49:37.640748342+00:00 stderr F I0120 10:49:37.640742 1 legacy_image_pull_secret_controller.go:131] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret"
2026-01-20T10:49:37.640755563+00:00 stderr F I0120 10:49:37.640746 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret
2026-01-20T10:49:37.640789714+00:00 stderr F I0120 10:49:37.640762 1 legacy_token_secret_controller.go:109] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret"
2026-01-20T10:49:37.640797234+00:00 stderr F I0120 10:49:37.640788 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret
2026-01-20T10:49:37.654993066+00:00 stderr F I0120 10:49:37.653156 1 controller_manager.go:155] Started "openshift.io/deployer-serviceaccount"
2026-01-20T10:49:37.654993066+00:00 stderr F I0120 10:49:37.653176 1 controller_manager.go:145] Starting "openshift.io/deployer"
2026-01-20T10:49:37.654993066+00:00 stderr F I0120 10:49:37.653347 1 serviceaccounts_controller.go:111] "Starting service account controller"
2026-01-20T10:49:37.654993066+00:00 stderr F I0120 10:49:37.653355 1 shared_informer.go:311] Waiting for caches to sync for service account
2026-01-20T10:49:37.660107772+00:00 stderr F I0120 10:49:37.659126 1 controller_manager.go:155] Started "openshift.io/deployer"
2026-01-20T10:49:37.660107772+00:00 stderr F I0120 10:49:37.659141 1 controller_manager.go:145] Starting "openshift.io/image-puller-rolebindings"
2026-01-20T10:49:37.660107772+00:00 stderr F I0120 10:49:37.659265 1 factory.go:73] Starting deployer controller
2026-01-20T10:49:37.666444845+00:00 stderr F I0120 10:49:37.666410 1 controller_manager.go:155] Started "openshift.io/image-puller-rolebindings"
2026-01-20T10:49:37.666444845+00:00 stderr F I0120 10:49:37.666428 1 controller_manager.go:157] Started Origin Controllers
2026-01-20T10:49:37.666735434+00:00 stderr F I0120 10:49:37.666700 1 defaultrolebindings.go:154] Starting ImagePullerRoleBindingController
2026-01-20T10:49:37.666735434+00:00 stderr F I0120 10:49:37.666712 1 shared_informer.go:311] Waiting for caches to sync for ImagePullerRoleBindingController
2026-01-20T10:49:37.685303960+00:00 stderr F I0120 10:49:37.683970 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2026-01-20T10:49:37.688214538+00:00 stderr F I0120 10:49:37.688186 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2026-01-20T10:49:37.688775375+00:00 stderr F I0120 10:49:37.688755 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2026-01-20T10:49:37.688959661+00:00 stderr F W0120 10:49:37.688938 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)
2026-01-20T10:49:37.689025013+00:00 stderr F E0120 10:49:37.689011 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)
2026-01-20T10:49:37.689093985+00:00 stderr F I0120 10:49:37.689048 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2026-01-20T10:49:37.689466446+00:00 stderr F I0120 10:49:37.689432 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2026-01-20T10:49:37.689659782+00:00 stderr F W0120 10:49:37.689634 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io)
2026-01-20T10:49:37.689672032+00:00 stderr F E0120 10:49:37.689658 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io)
2026-01-20T10:49:37.689709143+00:00 stderr F I0120 10:49:37.689690 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2026-01-20T10:49:37.692392345+00:00 stderr F W0120 10:49:37.692351 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2026-01-20T10:49:37.692421196+00:00 stderr F E0120 10:49:37.692392 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2026-01-20T10:49:37.692432886+00:00 stderr F W0120 10:49:37.692425 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)
2026-01-20T10:49:37.692443897+00:00 stderr F E0120 10:49:37.692436 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)
2026-01-20T10:49:37.693830099+00:00 stderr F W0120 10:49:37.693788 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2026-01-20T10:49:37.693830099+00:00 stderr F E0120 10:49:37.693808 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2026-01-20T10:49:37.703111552+00:00 stderr F I0120 10:49:37.703071 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.703254846+00:00 stderr F I0120 10:49:37.703220 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.703719220+00:00 stderr F I0120 10:49:37.703698 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.704083931+00:00 stderr F I0120 10:49:37.704033 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.705155414+00:00 stderr F I0120 10:49:37.705113 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.713286942+00:00 stderr F I0120 10:49:37.713248 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.714186849+00:00 stderr F I0120 10:49:37.714158 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.714257081+00:00 stderr F I0120 10:49:37.714237 1 reflector.go:351] Caches populated for *v1.Proxy from 
k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.714413226+00:00 stderr F I0120 10:49:37.714391 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.716692366+00:00 stderr F W0120 10:49:37.716620 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2026-01-20T10:49:37.716749017+00:00 stderr F E0120 10:49:37.716721 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2026-01-20T10:49:37.719150651+00:00 stderr F I0120 10:49:37.719116 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.734109166+00:00 stderr F I0120 10:49:37.734008 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.744565855+00:00 stderr F I0120 10:49:37.744469 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_urls 2026-01-20T10:49:37.744565855+00:00 stderr F I0120 10:49:37.744510 1 registry_urls_observation_controller.go:146] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_urls" 2026-01-20T10:49:37.747617888+00:00 stderr F I0120 10:49:37.747502 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.756589241+00:00 stderr F I0120 10:49:37.756286 1 shared_informer.go:318] Caches are synced for service account 2026-01-20T10:49:37.764114030+00:00 stderr F I0120 10:49:37.762372 1 shared_informer.go:318] Caches are synced for BuilderRoleBindingController 2026-01-20T10:49:37.764114030+00:00 stderr F I0120 
10:49:37.763205 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.780150099+00:00 stderr F I0120 10:49:37.776368 1 shared_informer.go:318] Caches are synced for ImagePullerRoleBindingController 2026-01-20T10:49:37.783196372+00:00 stderr F I0120 10:49:37.783158 1 shared_informer.go:318] Caches are synced for DeployerRoleBindingController 2026-01-20T10:49:37.797604370+00:00 stderr F I0120 10:49:37.797532 1 shared_informer.go:318] Caches are synced for service account 2026-01-20T10:49:37.852913764+00:00 stderr F I0120 10:49:37.852834 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.861657101+00:00 stderr F I0120 10:49:37.861601 1 factory.go:80] Deployer controller caches are synced. Starting workers. 2026-01-20T10:49:37.869836060+00:00 stderr F I0120 10:49:37.869800 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.942204 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.942293 1 legacy_image_pull_secret_controller.go:138] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret" 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.942346 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_service-account 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.942380 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_kids 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.942396 1 service_account_controller.go:343] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_service-account" 
2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.942405 1 keyid_observation_controller.go:172] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_kids" 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943048 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_image-pull-secret 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943103 1 image_pull_secret_controller.go:327] Waiting for service account token signing cert to be observed 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943229 1 image_pull_secret_controller.go:330] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"] 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943229 1 image_pull_secret_controller.go:313] Waiting for image registry urls to be observed 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943257 1 image_pull_secret_controller.go:317] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"] 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943262 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943288 1 image_pull_secret_controller.go:374] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret" 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943290 1 legacy_token_secret_controller.go:116] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret" 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943693 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" 
reason="auth token needs to be refreshed" ns="default" name="builder-dockercfg-hn9nn" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-11 01:03:42.222533516 +0000 UTC" 2026-01-20T10:49:37.943867725+00:00 stderr F I0120 10:49:37.943722 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="builder-dockercfg-hn9nn" serviceaccount="builder" 2026-01-20T10:49:37.944041830+00:00 stderr F I0120 10:49:37.944000 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="deployer-dockercfg-rxncs" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-11 01:03:42.222410372 +0000 UTC" 2026-01-20T10:49:37.944104522+00:00 stderr F I0120 10:49:37.944081 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="deployer-dockercfg-rxncs" serviceaccount="deployer" 2026-01-20T10:49:37.944104522+00:00 stderr F I0120 10:49:37.944087 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="builder-dockercfg-68c6h" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-11 01:03:42.222378131 +0000 UTC" 2026-01-20T10:49:37.944118892+00:00 stderr F I0120 10:49:37.944102 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="builder-dockercfg-68c6h" serviceaccount="builder" 2026-01-20T10:49:37.944371160+00:00 stderr F I0120 10:49:37.944339 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="csi-hostpath-provisioner-sa-dockercfg-nqbbq" url="image-registry.openshift-image-registry.svc:5000" 
expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-11 01:03:42.222269868 +0000 UTC" 2026-01-20T10:49:37.944371160+00:00 stderr F I0120 10:49:37.944357 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="csi-hostpath-provisioner-sa-dockercfg-nqbbq" serviceaccount="csi-hostpath-provisioner-sa" 2026-01-20T10:49:37.945331699+00:00 stderr F I0120 10:49:37.945298 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="default-dockercfg-rwmqp" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-11 01:03:42.221892785 +0000 UTC" 2026-01-20T10:49:37.945331699+00:00 stderr F I0120 10:49:37.945322 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="default-dockercfg-rwmqp" serviceaccount="default" 2026-01-20T10:49:37.990197896+00:00 stderr F I0120 10:49:37.989742 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="csi-provisioner-dockercfg-m4vbf" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-11 01:03:42.204118603 +0000 UTC" 2026-01-20T10:49:37.990197896+00:00 stderr F I0120 10:49:37.989822 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="csi-provisioner-dockercfg-m4vbf" serviceaccount="csi-provisioner" 2026-01-20T10:49:37.990197896+00:00 stderr F I0120 10:49:37.990171 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="default-dockercfg-svxcm" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-11 01:03:42.203937574 +0000 UTC" 2026-01-20T10:49:37.990197896+00:00 stderr F I0120 10:49:37.990183 
1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="default-dockercfg-svxcm" serviceaccount="default" 2026-01-20T10:49:37.991158736+00:00 stderr F I0120 10:49:37.990490 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="deployer-dockercfg-xtrqb" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-11 01:03:43.603820634 +0000 UTC" 2026-01-20T10:49:37.991158736+00:00 stderr F I0120 10:49:37.990524 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="deployer-dockercfg-xtrqb" serviceaccount="deployer" 2026-01-20T10:49:37.998153748+00:00 stderr F I0120 10:49:37.998079 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="builder-dockercfg-fhvt9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-11 01:03:43.600793366 +0000 UTC" 2026-01-20T10:49:37.998153748+00:00 stderr F I0120 10:49:37.998136 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease" name="builder-dockercfg-fhvt9" serviceaccount="builder" 2026-01-20T10:49:38.001079017+00:00 stderr F I0120 10:49:38.000293 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="default-dockercfg-dp7cf" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-11 01:03:43.599889995 +0000 UTC" 2026-01-20T10:49:38.001079017+00:00 stderr F I0120 10:49:38.000313 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease" name="default-dockercfg-dp7cf" serviceaccount="default" 
2026-01-20T10:49:38.019161038+00:00 stderr F I0120 10:49:38.018824 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="deployer-dockercfg-l8zq8" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-11 01:03:43.592486585 +0000 UTC" 2026-01-20T10:49:38.019161038+00:00 stderr F I0120 10:49:38.018853 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease" name="deployer-dockercfg-l8zq8" serviceaccount="deployer" 2026-01-20T10:49:38.020148438+00:00 stderr F I0120 10:49:38.019275 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-public" name="builder-dockercfg-pq2fn" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-11 01:03:43.592296419 +0000 UTC" 2026-01-20T10:49:38.020148438+00:00 stderr F I0120 10:49:38.019318 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="builder-dockercfg-pq2fn" serviceaccount="builder" 2026-01-20T10:49:38.020148438+00:00 stderr F I0120 10:49:38.019780 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-public" name="default-dockercfg-mg7xn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-11 01:03:43.59209398 +0000 UTC" 2026-01-20T10:49:38.020148438+00:00 stderr F I0120 10:49:38.019791 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="default-dockercfg-mg7xn" serviceaccount="default" 2026-01-20T10:49:38.029237255+00:00 stderr F I0120 10:49:38.029200 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="kube-public" name="deployer-dockercfg-4blxw" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-11 01:03:43.588330851 +0000 UTC" 2026-01-20T10:49:38.029237255+00:00 stderr F I0120 10:49:38.029221 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="deployer-dockercfg-4blxw" serviceaccount="deployer" 2026-01-20T10:49:38.041207680+00:00 stderr F I0120 10:49:38.040263 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="attachdetach-controller-dockercfg-fdtjb" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-11 01:03:43.583905268 +0000 UTC" 2026-01-20T10:49:38.041207680+00:00 stderr F I0120 10:49:38.040291 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="attachdetach-controller-dockercfg-fdtjb" serviceaccount="attachdetach-controller" 2026-01-20T10:49:38.045392987+00:00 stderr F I0120 10:49:38.044289 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="builder-dockercfg-kkqp2" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.982293662 +0000 UTC" 2026-01-20T10:49:38.045392987+00:00 stderr F I0120 10:49:38.044316 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="builder-dockercfg-kkqp2" serviceaccount="builder" 2026-01-20T10:49:38.074987099+00:00 stderr F I0120 10:49:38.074927 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="certificate-controller-dockercfg-9v2kj" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:43 +0000 
UTC" refreshTime="2025-06-11 01:03:44.970045053 +0000 UTC" 2026-01-20T10:49:38.074987099+00:00 stderr F I0120 10:49:38.074957 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="certificate-controller-dockercfg-9v2kj" serviceaccount="certificate-controller" 2026-01-20T10:49:38.079097404+00:00 stderr F I0120 10:49:38.078458 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="clusterrole-aggregation-controller-dockercfg-2tcfh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.968632318 +0000 UTC" 2026-01-20T10:49:38.079097404+00:00 stderr F I0120 10:49:38.078488 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="clusterrole-aggregation-controller-dockercfg-2tcfh" serviceaccount="clusterrole-aggregation-controller" 2026-01-20T10:49:38.081677352+00:00 stderr F I0120 10:49:38.081642 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="cronjob-controller-dockercfg-g2sp5" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.967351235 +0000 UTC" 2026-01-20T10:49:38.081677352+00:00 stderr F I0120 10:49:38.081662 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="cronjob-controller-dockercfg-g2sp5" serviceaccount="cronjob-controller" 2026-01-20T10:49:38.089523181+00:00 stderr F I0120 10:49:38.087565 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="daemon-set-controller-dockercfg-pjkzz" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.964984243 +0000 UTC" 2026-01-20T10:49:38.089523181+00:00 
stderr F I0120 10:49:38.087584 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="daemon-set-controller-dockercfg-pjkzz" serviceaccount="daemon-set-controller" 2026-01-20T10:49:38.098371121+00:00 stderr F I0120 10:49:38.096947 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="default-dockercfg-q6b6n" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.961234804 +0000 UTC" 2026-01-20T10:49:38.098371121+00:00 stderr F I0120 10:49:38.096974 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="default-dockercfg-q6b6n" serviceaccount="default" 2026-01-20T10:49:38.105739156+00:00 stderr F I0120 10:49:38.105699 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="deployer-dockercfg-bscn9" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.957732851 +0000 UTC" 2026-01-20T10:49:38.105804678+00:00 stderr F I0120 10:49:38.105793 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="deployer-dockercfg-bscn9" serviceaccount="deployer" 2026-01-20T10:49:38.111223283+00:00 stderr F I0120 10:49:38.109669 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="deployment-controller-dockercfg-xwj9s" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.956145434 +0000 UTC" 2026-01-20T10:49:38.111223283+00:00 stderr F I0120 10:49:38.109695 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" 
name="deployment-controller-dockercfg-xwj9s" serviceaccount="deployment-controller" 2026-01-20T10:49:38.118052760+00:00 stderr F I0120 10:49:38.118011 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="disruption-controller-dockercfg-27hxh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.952812308 +0000 UTC" 2026-01-20T10:49:38.118052760+00:00 stderr F I0120 10:49:38.118044 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="disruption-controller-dockercfg-27hxh" serviceaccount="disruption-controller" 2026-01-20T10:49:38.118385140+00:00 stderr F I0120 10:49:38.118364 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpoint-controller-dockercfg-fnmd9" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.952666536 +0000 UTC" 2026-01-20T10:49:38.118439332+00:00 stderr F I0120 10:49:38.118413 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpoint-controller-dockercfg-fnmd9" serviceaccount="endpoint-controller" 2026-01-20T10:49:38.161480924+00:00 stderr F I0120 10:49:38.161436 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpointslice-controller-dockercfg-kvrd9" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.935439267 +0000 UTC" 2026-01-20T10:49:38.161560696+00:00 stderr F I0120 10:49:38.161549 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpointslice-controller-dockercfg-kvrd9" 
serviceaccount="endpointslice-controller" 2026-01-20T10:49:38.163104882+00:00 stderr F I0120 10:49:38.162776 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpointslicemirroring-controller-dockercfg-skzmn" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-11 01:03:44.934903071 +0000 UTC" 2026-01-20T10:49:38.163104882+00:00 stderr F I0120 10:49:38.162804 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpointslicemirroring-controller-dockercfg-skzmn" serviceaccount="endpointslicemirroring-controller" 2026-01-20T10:49:38.168089435+00:00 stderr F I0120 10:49:38.167118 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="ephemeral-volume-controller-dockercfg-jfqhh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:44 +0000 UTC" refreshTime="2025-06-11 01:03:46.33317863 +0000 UTC" 2026-01-20T10:49:38.168089435+00:00 stderr F I0120 10:49:38.167152 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="ephemeral-volume-controller-dockercfg-jfqhh" serviceaccount="ephemeral-volume-controller" 2026-01-20T10:49:38.175609084+00:00 stderr F I0120 10:49:38.175229 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="generic-garbage-collector-dockercfg-wqxkz" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-11 01:03:49.129924147 +0000 UTC" 2026-01-20T10:49:38.175609084+00:00 stderr F I0120 10:49:38.175258 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="generic-garbage-collector-dockercfg-wqxkz" 
serviceaccount="generic-garbage-collector" 2026-01-20T10:49:38.175609084+00:00 stderr F I0120 10:49:38.175528 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="expand-controller-dockercfg-ls7wp" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:44 +0000 UTC" refreshTime="2025-06-11 01:03:46.329795568 +0000 UTC" 2026-01-20T10:49:38.175609084+00:00 stderr F I0120 10:49:38.175540 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="expand-controller-dockercfg-ls7wp" serviceaccount="expand-controller" 2026-01-20T10:49:38.211043263+00:00 stderr F I0120 10:49:38.210376 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="horizontal-pod-autoscaler-dockercfg-5mlhd" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-11 01:03:49.115865091 +0000 UTC" 2026-01-20T10:49:38.211043263+00:00 stderr F I0120 10:49:38.210401 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="horizontal-pod-autoscaler-dockercfg-5mlhd" serviceaccount="horizontal-pod-autoscaler" 2026-01-20T10:49:38.221299355+00:00 stderr F I0120 10:49:38.220856 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="job-controller-dockercfg-wq5x7" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-11 01:03:49.111670816 +0000 UTC" 2026-01-20T10:49:38.221299355+00:00 stderr F I0120 10:49:38.220882 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="job-controller-dockercfg-wq5x7" serviceaccount="job-controller" 2026-01-20T10:49:38.232542518+00:00 stderr F I0120 10:49:38.232479 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="legacy-service-account-token-cleaner-dockercfg-qqxct" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-11 01:03:49.107023689 +0000 UTC" 2026-01-20T10:49:38.232542518+00:00 stderr F I0120 10:49:38.232509 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="legacy-service-account-token-cleaner-dockercfg-qqxct" serviceaccount="legacy-service-account-token-cleaner" 2026-01-20T10:49:38.236127917+00:00 stderr F I0120 10:49:38.235565 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="namespace-controller-dockercfg-5hkmr" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-11 01:03:49.105785744 +0000 UTC" 2026-01-20T10:49:38.236127917+00:00 stderr F I0120 10:49:38.235585 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="namespace-controller-dockercfg-5hkmr" serviceaccount="namespace-controller" 2026-01-20T10:49:38.249256067+00:00 stderr F I0120 10:49:38.248494 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="node-controller-dockercfg-r8598" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-06-11 01:03:50.500614203 +0000 UTC" 2026-01-20T10:49:38.249256067+00:00 stderr F I0120 10:49:38.248521 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="node-controller-dockercfg-r8598" serviceaccount="node-controller" 2026-01-20T10:49:38.271139894+00:00 stderr F I0120 10:49:38.271048 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth 
token needs to be refreshed" ns="kube-system" name="persistent-volume-binder-dockercfg-49lxl" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-06-11 01:03:50.491595628 +0000 UTC" 2026-01-20T10:49:38.271206816+00:00 stderr F I0120 10:49:38.271195 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="persistent-volume-binder-dockercfg-49lxl" serviceaccount="persistent-volume-binder" 2026-01-20T10:49:38.300330112+00:00 stderr F I0120 10:49:38.298763 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="pod-garbage-collector-dockercfg-9jzsm" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-06-11 01:03:50.48050909 +0000 UTC" 2026-01-20T10:49:38.300330112+00:00 stderr F I0120 10:49:38.298789 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pod-garbage-collector-dockercfg-9jzsm" serviceaccount="pod-garbage-collector" 2026-01-20T10:49:38.328808290+00:00 stderr F I0120 10:49:38.328553 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="pv-protection-controller-dockercfg-r2lrg" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-06-11 01:03:50.468597632 +0000 UTC" 2026-01-20T10:49:38.328808290+00:00 stderr F I0120 10:49:38.328579 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pv-protection-controller-dockercfg-r2lrg" serviceaccount="pv-protection-controller" 2026-01-20T10:49:38.328808290+00:00 stderr F I0120 10:49:38.328711 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" 
name="pvc-protection-controller-dockercfg-zqpk9" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-11 01:03:51.868530321 +0000 UTC" 2026-01-20T10:49:38.328808290+00:00 stderr F I0120 10:49:38.328733 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pvc-protection-controller-dockercfg-zqpk9" serviceaccount="pvc-protection-controller" 2026-01-20T10:49:38.329020346+00:00 stderr F I0120 10:49:38.328996 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="replicaset-controller-dockercfg-m7w7t" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-11 01:03:51.868407657 +0000 UTC" 2026-01-20T10:49:38.329020346+00:00 stderr F I0120 10:49:38.329013 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="replicaset-controller-dockercfg-m7w7t" serviceaccount="replicaset-controller" 2026-01-20T10:49:38.343453747+00:00 stderr F I0120 10:49:38.342951 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="replication-controller-dockercfg-zx22f" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-11 01:03:51.862833092 +0000 UTC" 2026-01-20T10:49:38.343453747+00:00 stderr F I0120 10:49:38.342977 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="replication-controller-dockercfg-zx22f" serviceaccount="replication-controller" 2026-01-20T10:49:38.355807542+00:00 stderr F I0120 10:49:38.355763 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" 
name="resourcequota-controller-dockercfg-f7clv" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-11 01:03:51.857707364 +0000 UTC" 2026-01-20T10:49:38.355878555+00:00 stderr F I0120 10:49:38.355867 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="resourcequota-controller-dockercfg-f7clv" serviceaccount="resourcequota-controller" 2026-01-20T10:49:38.381780513+00:00 stderr F I0120 10:49:38.381737 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="root-ca-cert-publisher-dockercfg-4z4hh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-11 01:03:51.847316755 +0000 UTC" 2026-01-20T10:49:38.381841815+00:00 stderr F I0120 10:49:38.381830 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="root-ca-cert-publisher-dockercfg-4z4hh" serviceaccount="root-ca-cert-publisher" 2026-01-20T10:49:38.382219546+00:00 stderr F I0120 10:49:38.382202 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-account-controller-dockercfg-wvw6s" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-11 01:03:51.847125121 +0000 UTC" 2026-01-20T10:49:38.382258718+00:00 stderr F I0120 10:49:38.382248 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-account-controller-dockercfg-wvw6s" serviceaccount="service-account-controller" 2026-01-20T10:49:38.382614319+00:00 stderr F I0120 10:49:38.382597 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-ca-cert-publisher-dockercfg-npjg7" 
url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-11 01:03:53.24696752 +0000 UTC" 2026-01-20T10:49:38.382649440+00:00 stderr F I0120 10:49:38.382639 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-ca-cert-publisher-dockercfg-npjg7" serviceaccount="service-ca-cert-publisher" 2026-01-20T10:49:38.406337922+00:00 stderr F I0120 10:49:38.406261 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-controller-dockercfg-4cv62" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-11 01:03:53.237510864 +0000 UTC" 2026-01-20T10:49:38.406337922+00:00 stderr F I0120 10:49:38.406288 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-controller-dockercfg-4cv62" serviceaccount="service-controller" 2026-01-20T10:49:38.407824857+00:00 stderr F I0120 10:49:38.407799 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="statefulset-controller-dockercfg-ndvv5" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-11 01:03:53.236887781 +0000 UTC" 2026-01-20T10:49:38.407824857+00:00 stderr F I0120 10:49:38.407818 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="statefulset-controller-dockercfg-ndvv5" serviceaccount="statefulset-controller" 2026-01-20T10:49:38.415616254+00:00 stderr F I0120 10:49:38.415575 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="ttl-after-finished-controller-dockercfg-7wg62" 
url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-11 01:03:53.233784915 +0000 UTC" 2026-01-20T10:49:38.415678736+00:00 stderr F I0120 10:49:38.415666 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="ttl-after-finished-controller-dockercfg-7wg62" serviceaccount="ttl-after-finished-controller" 2026-01-20T10:49:38.429397904+00:00 stderr F I0120 10:49:38.426508 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="builder-dockercfg-fcp4f" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-11 01:03:53.229410758 +0000 UTC" 2026-01-20T10:49:38.429397904+00:00 stderr F I0120 10:49:38.426537 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="builder-dockercfg-fcp4f" serviceaccount="builder" 2026-01-20T10:49:38.429397904+00:00 stderr F I0120 10:49:38.426841 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="default-dockercfg-qknsb" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-11 01:03:53.229269385 +0000 UTC" 2026-01-20T10:49:38.429397904+00:00 stderr F I0120 10:49:38.426852 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="default-dockercfg-qknsb" serviceaccount="default" 2026-01-20T10:49:38.444175265+00:00 stderr F I0120 10:49:38.444134 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="deployer-dockercfg-rk5zr" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" 
refreshTime="2025-06-11 01:03:53.22238698 +0000 UTC" 2026-01-20T10:49:38.444240646+00:00 stderr F I0120 10:49:38.444228 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="deployer-dockercfg-rk5zr" serviceaccount="deployer" 2026-01-20T10:49:38.463508382+00:00 stderr F I0120 10:49:38.463463 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="openshift-apiserver-operator-dockercfg-vw4hh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:50 +0000 UTC" refreshTime="2025-06-11 01:03:54.614627692 +0000 UTC" 2026-01-20T10:49:38.463574564+00:00 stderr F I0120 10:49:38.463563 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="openshift-apiserver-operator-dockercfg-vw4hh" serviceaccount="openshift-apiserver-operator" 2026-01-20T10:49:38.464637557+00:00 stderr F I0120 10:49:38.464619 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="builder-dockercfg-bsrrx" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-11 01:03:53.214158717 +0000 UTC" 2026-01-20T10:49:38.464687499+00:00 stderr F I0120 10:49:38.464677 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="builder-dockercfg-bsrrx" serviceaccount="builder" 2026-01-20T10:49:38.472116756+00:00 stderr F I0120 10:49:38.472076 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="default-dockercfg-hxncm" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:50 +0000 UTC" refreshTime="2025-06-11 01:03:54.611190731 +0000 UTC" 
2026-01-20T10:49:38.472187788+00:00 stderr F I0120 10:49:38.472176 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="default-dockercfg-hxncm" serviceaccount="default" 2026-01-20T10:49:38.493179207+00:00 stderr F I0120 10:49:38.493108 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="deployer-dockercfg-qkt4v" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-06-11 01:03:56.002777494 +0000 UTC" 2026-01-20T10:49:38.493179207+00:00 stderr F I0120 10:49:38.493136 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="deployer-dockercfg-qkt4v" serviceaccount="deployer" 2026-01-20T10:49:38.506563575+00:00 stderr F I0120 10:49:38.505263 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="openshift-apiserver-sa-dockercfg-r9fjc" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-06-11 01:03:55.997910483 +0000 UTC" 2026-01-20T10:49:38.506563575+00:00 stderr F I0120 10:49:38.505302 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="openshift-apiserver-sa-dockercfg-r9fjc" serviceaccount="openshift-apiserver-sa" 2026-01-20T10:49:38.506563575+00:00 stderr F I0120 10:49:38.505686 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="authentication-operator-dockercfg-7rvdq" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-06-11 01:03:55.997733913 +0000 UTC" 2026-01-20T10:49:38.506563575+00:00 stderr F I0120 10:49:38.505714 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="authentication-operator-dockercfg-7rvdq" serviceaccount="authentication-operator" 2026-01-20T10:49:38.506563575+00:00 stderr F I0120 10:49:38.506008 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="builder-dockercfg-gr58d" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-06-11 01:03:55.997602449 +0000 UTC" 2026-01-20T10:49:38.506563575+00:00 stderr F I0120 10:49:38.506019 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="builder-dockercfg-gr58d" serviceaccount="builder" 2026-01-20T10:49:38.509466273+00:00 stderr F I0120 10:49:38.506893 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="default-dockercfg-mpz9v" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-06-11 01:03:55.997250818 +0000 UTC" 2026-01-20T10:49:38.509466273+00:00 stderr F I0120 10:49:38.506920 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="default-dockercfg-mpz9v" serviceaccount="default" 2026-01-20T10:49:38.521188520+00:00 stderr F W0120 10:49:38.520218 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2026-01-20T10:49:38.521188520+00:00 stderr F E0120 10:49:38.520577 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is 
currently unable to handle the request (get templateinstances.template.openshift.io) 2026-01-20T10:49:38.533332729+00:00 stderr F I0120 10:49:38.533271 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="deployer-dockercfg-7xqgr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-11 01:05:05.986712089 +0000 UTC" 2026-01-20T10:49:38.533332729+00:00 stderr F I0120 10:49:38.533309 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="deployer-dockercfg-7xqgr" serviceaccount="deployer" 2026-01-20T10:49:38.550163092+00:00 stderr F I0120 10:49:38.549893 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="builder-dockercfg-wbrzn" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-11 01:05:05.980057366 +0000 UTC" 2026-01-20T10:49:38.550163092+00:00 stderr F I0120 10:49:38.549923 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="builder-dockercfg-wbrzn" serviceaccount="builder" 2026-01-20T10:49:38.552957417+00:00 stderr F I0120 10:49:38.550649 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="default-dockercfg-8smsw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-11 01:05:05.979746484 +0000 UTC" 2026-01-20T10:49:38.552957417+00:00 stderr F I0120 10:49:38.550666 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="default-dockercfg-8smsw" serviceaccount="default" 2026-01-20T10:49:38.555111113+00:00 stderr F I0120 10:49:38.554344 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="deployer-dockercfg-txlvt" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-11 01:05:05.978273804 +0000 UTC" 2026-01-20T10:49:38.555111113+00:00 stderr F I0120 10:49:38.554372 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="deployer-dockercfg-txlvt" serviceaccount="deployer" 2026-01-20T10:49:38.566133639+00:00 stderr F I0120 10:49:38.565005 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="oauth-openshift-dockercfg-6sd5l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-11 01:05:05.97401405 +0000 UTC" 2026-01-20T10:49:38.566133639+00:00 stderr F I0120 10:49:38.565038 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="oauth-openshift-dockercfg-6sd5l" serviceaccount="oauth-openshift" 2026-01-20T10:49:38.566133639+00:00 stderr F W0120 10:49:38.565789 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2026-01-20T10:49:38.566133639+00:00 stderr F E0120 10:49:38.565806 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2026-01-20T10:49:38.586020164+00:00 stderr F I0120 10:49:38.584387 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="builder-dockercfg-4stzg" 
url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-11 01:05:11.56626177 +0000 UTC" 2026-01-20T10:49:38.586020164+00:00 stderr F I0120 10:49:38.584417 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="builder-dockercfg-4stzg" serviceaccount="builder" 2026-01-20T10:49:38.586020164+00:00 stderr F I0120 10:49:38.585711 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="default-dockercfg-bswg4" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-11 01:05:11.565728042 +0000 UTC" 2026-01-20T10:49:38.586020164+00:00 stderr F I0120 10:49:38.585745 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="default-dockercfg-bswg4" serviceaccount="default" 2026-01-20T10:49:38.589125129+00:00 stderr F I0120 10:49:38.589052 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="deployer-dockercfg-95h82" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-11 01:05:11.564388413 +0000 UTC" 2026-01-20T10:49:38.589125129+00:00 stderr F I0120 10:49:38.589099 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="deployer-dockercfg-95h82" serviceaccount="deployer" 2026-01-20T10:49:38.590561913+00:00 stderr F I0120 10:49:38.589383 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-platform-infra" name="builder-dockercfg-88rrx" url="10.217.4.41:5000" 
expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-11 01:05:11.564266057 +0000 UTC" 2026-01-20T10:49:38.590561913+00:00 stderr F I0120 10:49:38.589425 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="builder-dockercfg-88rrx" serviceaccount="builder" 2026-01-20T10:49:38.592672748+00:00 stderr F I0120 10:49:38.592646 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-platform-infra" name="default-dockercfg-7xbdb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-11 01:05:11.562958961 +0000 UTC" 2026-01-20T10:49:38.592672748+00:00 stderr F I0120 10:49:38.592666 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="default-dockercfg-7xbdb" serviceaccount="default" 2026-01-20T10:49:38.628599321+00:00 stderr F I0120 10:49:38.627930 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-platform-infra" name="deployer-dockercfg-d4ldp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-11 01:05:12.948841484 +0000 UTC" 2026-01-20T10:49:38.628599321+00:00 stderr F I0120 10:49:38.627975 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="deployer-dockercfg-d4ldp" serviceaccount="deployer" 2026-01-20T10:49:38.628599321+00:00 stderr F I0120 10:49:38.628576 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="builder-dockercfg-dkg74" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-11 01:05:12.948575832 +0000 UTC" 2026-01-20T10:49:38.628599321+00:00 
stderr F I0120 10:49:38.628589 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="builder-dockercfg-dkg74" serviceaccount="builder" 2026-01-20T10:49:38.630337594+00:00 stderr F I0120 10:49:38.628859 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="default-dockercfg-89xjf" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-11 01:05:12.948461968 +0000 UTC" 2026-01-20T10:49:38.630337594+00:00 stderr F I0120 10:49:38.628876 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="default-dockercfg-89xjf" serviceaccount="default" 2026-01-20T10:49:38.630337594+00:00 stderr F W0120 10:49:38.630308 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2026-01-20T10:49:38.630337594+00:00 stderr F E0120 10:49:38.630324 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2026-01-20T10:49:38.631770079+00:00 stderr F I0120 10:49:38.630501 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="deployer-dockercfg-vb2qm" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-11 01:05:12.947805856 +0000 UTC" 2026-01-20T10:49:38.631770079+00:00 stderr F I0120 10:49:38.630517 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" 
name="deployer-dockercfg-vb2qm" serviceaccount="deployer" 2026-01-20T10:49:38.639174293+00:00 stderr F I0120 10:49:38.639117 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="machine-approver-sa-dockercfg-6nbmk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-11 01:05:12.944367128 +0000 UTC" 2026-01-20T10:49:38.639174293+00:00 stderr F I0120 10:49:38.639147 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="machine-approver-sa-dockercfg-6nbmk" serviceaccount="machine-approver-sa" 2026-01-20T10:49:38.669186438+00:00 stderr F I0120 10:49:38.669113 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="builder-dockercfg-bgnkz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-06-11 01:05:14.332387811 +0000 UTC" 2026-01-20T10:49:38.669186438+00:00 stderr F I0120 10:49:38.669158 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="builder-dockercfg-bgnkz" serviceaccount="builder" 2026-01-20T10:49:38.673969004+00:00 stderr F I0120 10:49:38.669660 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="cluster-samples-operator-dockercfg-q289q" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-06-11 01:05:14.332142727 +0000 UTC" 2026-01-20T10:49:38.673969004+00:00 stderr F I0120 10:49:38.669678 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="cluster-samples-operator-dockercfg-q289q" 
serviceaccount="cluster-samples-operator" 2026-01-20T10:49:38.673969004+00:00 stderr F I0120 10:49:38.670310 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="default-dockercfg-78cjw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-06-11 01:05:14.331882056 +0000 UTC" 2026-01-20T10:49:38.673969004+00:00 stderr F I0120 10:49:38.670324 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="default-dockercfg-78cjw" serviceaccount="default" 2026-01-20T10:49:38.673969004+00:00 stderr F I0120 10:49:38.670932 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="deployer-dockercfg-hx9zf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-06-11 01:05:14.33163914 +0000 UTC" 2026-01-20T10:49:38.673969004+00:00 stderr F I0120 10:49:38.670952 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="deployer-dockercfg-hx9zf" serviceaccount="deployer" 2026-01-20T10:49:38.675700536+00:00 stderr F I0120 10:49:38.675679 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="builder-dockercfg-l8dbc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-06-11 01:05:15.72973603 +0000 UTC" 2026-01-20T10:49:38.675766618+00:00 stderr F I0120 10:49:38.675754 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="builder-dockercfg-l8dbc" serviceaccount="builder" 2026-01-20T10:49:38.734206048+00:00 stderr F I0120 10:49:38.734160 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="default-dockercfg-l44fb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-06-11 01:05:15.706348729 +0000 UTC" 2026-01-20T10:49:38.734277660+00:00 stderr F I0120 10:49:38.734266 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="default-dockercfg-l44fb" serviceaccount="default" 2026-01-20T10:49:38.735962172+00:00 stderr F I0120 10:49:38.735943 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="deployer-dockercfg-g97l8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:49 +0000 UTC" refreshTime="2025-06-11 01:05:17.105629528 +0000 UTC" 2026-01-20T10:49:38.736030164+00:00 stderr F I0120 10:49:38.736013 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="deployer-dockercfg-g97l8" serviceaccount="deployer" 2026-01-20T10:49:38.741215092+00:00 stderr F I0120 10:49:38.741188 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-version" name="builder-dockercfg-l4k9s" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-06-11 01:05:15.703531115 +0000 UTC" 2026-01-20T10:49:38.741313365+00:00 stderr F I0120 10:49:38.741301 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="builder-dockercfg-l4k9s" serviceaccount="builder" 2026-01-20T10:49:38.752386432+00:00 stderr F I0120 10:49:38.752347 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-cluster-version" name="default-dockercfg-5wpfz" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:00:49 +0000 UTC" refreshTime="2025-06-11 01:05:17.099070912 +0000 UTC" 2026-01-20T10:49:38.752440614+00:00 stderr F I0120 10:49:38.752429 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="default-dockercfg-5wpfz" serviceaccount="default" 2026-01-20T10:49:38.772646069+00:00 stderr F I0120 10:49:38.772603 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-version" name="deployer-dockercfg-r7kd4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-06-11 01:05:18.490971505 +0000 UTC" 2026-01-20T10:49:38.772732892+00:00 stderr F I0120 10:49:38.772714 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="deployer-dockercfg-r7kd4" serviceaccount="deployer" 2026-01-20T10:49:38.856291447+00:00 stderr F I0120 10:49:38.855865 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="builder-dockercfg-nndcv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-06-11 01:05:18.457674655 +0000 UTC" 2026-01-20T10:49:38.856291447+00:00 stderr F I0120 10:49:38.856129 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" name="builder-dockercfg-nndcv" serviceaccount="builder" 2026-01-20T10:49:38.862782205+00:00 stderr F I0120 10:49:38.862723 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="default-dockercfg-5zsff" url="default-route-openshift-image-registry.apps-crc.testing" 
expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-06-11 01:05:18.454921403 +0000 UTC" 2026-01-20T10:49:38.862782205+00:00 stderr F I0120 10:49:38.862762 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" name="default-dockercfg-5zsff" serviceaccount="default" 2026-01-20T10:49:38.874484721+00:00 stderr F I0120 10:49:38.874435 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="deployer-dockercfg-v47lz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:56 +0000 UTC" refreshTime="2025-06-11 01:05:26.850242592 +0000 UTC" 2026-01-20T10:49:38.874576444+00:00 stderr F I0120 10:49:38.874559 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" name="deployer-dockercfg-v47lz" serviceaccount="deployer" 2026-01-20T10:49:38.888825228+00:00 stderr F I0120 10:49:38.888465 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="builder-dockercfg-lbblj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-06-11 01:05:18.444628161 +0000 UTC" 2026-01-20T10:49:38.888825228+00:00 stderr F I0120 10:49:38.888494 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="builder-dockercfg-lbblj" serviceaccount="builder" 2026-01-20T10:49:38.907839227+00:00 stderr F I0120 10:49:38.907780 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="default-dockercfg-rltwn" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-11 01:05:39.436901642 +0000 UTC" 2026-01-20T10:49:38.907839227+00:00 stderr 
F I0120 10:49:38.907807 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="default-dockercfg-rltwn" serviceaccount="default" 2026-01-20T10:49:38.988255197+00:00 stderr F I0120 10:49:38.988126 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="deployer-dockercfg-8tp68" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-11 01:05:39.404767787 +0000 UTC" 2026-01-20T10:49:38.988255197+00:00 stderr F I0120 10:49:38.988166 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="deployer-dockercfg-8tp68" serviceaccount="deployer" 2026-01-20T10:49:39.002992805+00:00 stderr F I0120 10:49:39.002285 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="openshift-config-operator-dockercfg-6jthd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-11 01:05:39.399103227 +0000 UTC" 2026-01-20T10:49:39.002992805+00:00 stderr F I0120 10:49:39.002321 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="openshift-config-operator-dockercfg-6jthd" serviceaccount="openshift-config-operator" 2026-01-20T10:49:39.006370648+00:00 stderr F W0120 10:49:39.006314 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2026-01-20T10:49:39.006370648+00:00 stderr F E0120 10:49:39.006345 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get 
imagestreams.image.openshift.io) 2026-01-20T10:49:39.008822533+00:00 stderr F I0120 10:49:39.008788 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config" name="builder-dockercfg-c75dg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-11 01:05:39.396509248 +0000 UTC" 2026-01-20T10:49:39.008837903+00:00 stderr F I0120 10:49:39.008818 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="builder-dockercfg-c75dg" serviceaccount="builder" 2026-01-20T10:49:39.028924215+00:00 stderr F I0120 10:49:39.028878 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config" name="default-dockercfg-hbnsp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-11 01:05:39.388462011 +0000 UTC" 2026-01-20T10:49:39.028983397+00:00 stderr F I0120 10:49:39.028971 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="default-dockercfg-hbnsp" serviceaccount="default" 2026-01-20T10:49:39.039372443+00:00 stderr F W0120 10:49:39.039314 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2026-01-20T10:49:39.039372443+00:00 stderr F E0120 10:49:39.039352 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2026-01-20T10:49:39.041903121+00:00 stderr F I0120 10:49:39.041873 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be 
refreshed" ns="openshift-config" name="deployer-dockercfg-q8mb8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-11 01:05:42.183264752 +0000 UTC" 2026-01-20T10:49:39.041961732+00:00 stderr F I0120 10:49:39.041947 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="deployer-dockercfg-q8mb8" serviceaccount="deployer" 2026-01-20T10:49:39.088672035+00:00 stderr F W0120 10:49:39.088621 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2026-01-20T10:49:39.088672035+00:00 stderr F E0120 10:49:39.088653 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2026-01-20T10:49:39.129664533+00:00 stderr F I0120 10:49:39.129599 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="builder-dockercfg-5h26t" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-11 01:05:42.14818032 +0000 UTC" 2026-01-20T10:49:39.129664533+00:00 stderr F I0120 10:49:39.129628 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="builder-dockercfg-5h26t" serviceaccount="builder" 2026-01-20T10:49:39.142568547+00:00 stderr F I0120 10:49:39.141899 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="console-operator-dockercfg-lwp4z" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-11 01:05:42.143256081 +0000 UTC" 
2026-01-20T10:49:39.142568547+00:00 stderr F I0120 10:49:39.141929 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="console-operator-dockercfg-lwp4z" serviceaccount="console-operator" 2026-01-20T10:49:39.151859770+00:00 stderr F I0120 10:49:39.151196 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="default-dockercfg-vgw7h" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-11 01:05:42.139538576 +0000 UTC" 2026-01-20T10:49:39.151859770+00:00 stderr F I0120 10:49:39.151223 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="default-dockercfg-vgw7h" serviceaccount="default" 2026-01-20T10:49:39.170292891+00:00 stderr F I0120 10:49:39.169925 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="deployer-dockercfg-cgf7g" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-11 01:05:42.132046688 +0000 UTC" 2026-01-20T10:49:39.170292891+00:00 stderr F I0120 10:49:39.169954 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="deployer-dockercfg-cgf7g" serviceaccount="deployer" 2026-01-20T10:49:39.182820793+00:00 stderr F I0120 10:49:39.182708 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="builder-dockercfg-s9kk5" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.526932032 +0000 UTC" 2026-01-20T10:49:39.182902385+00:00 stderr F I0120 
10:49:39.182890 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="builder-dockercfg-s9kk5" serviceaccount="builder" 2026-01-20T10:49:39.263631144+00:00 stderr F I0120 10:49:39.263571 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="default-dockercfg-mkcsd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-11 01:05:42.094586072 +0000 UTC" 2026-01-20T10:49:39.263631144+00:00 stderr F I0120 10:49:39.263598 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="default-dockercfg-mkcsd" serviceaccount="default" 2026-01-20T10:49:39.275387722+00:00 stderr F I0120 10:49:39.275289 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="deployer-dockercfg-s5pld" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-11 01:05:42.089906881 +0000 UTC" 2026-01-20T10:49:39.275387722+00:00 stderr F I0120 10:49:39.275321 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="deployer-dockercfg-s5pld" serviceaccount="deployer" 2026-01-20T10:49:39.298423044+00:00 stderr F I0120 10:49:39.298343 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="builder-dockercfg-nmnq6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-11 01:05:42.080677296 +0000 UTC" 2026-01-20T10:49:39.298423044+00:00 stderr F I0120 10:49:39.298372 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="builder-dockercfg-nmnq6" 
serviceaccount="builder" 2026-01-20T10:49:39.311367978+00:00 stderr F I0120 10:49:39.311255 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="console-dockercfg-ng44q" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.475512198 +0000 UTC" 2026-01-20T10:49:39.311367978+00:00 stderr F I0120 10:49:39.311283 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="console-dockercfg-ng44q" serviceaccount="console" 2026-01-20T10:49:39.338144284+00:00 stderr F I0120 10:49:39.338048 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="default-dockercfg-bv4gd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.464807008 +0000 UTC" 2026-01-20T10:49:39.338144284+00:00 stderr F I0120 10:49:39.338108 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="default-dockercfg-bv4gd" serviceaccount="default" 2026-01-20T10:49:39.401107081+00:00 stderr F I0120 10:49:39.397595 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="deployer-dockercfg-mpsf7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.44098095 +0000 UTC" 2026-01-20T10:49:39.401107081+00:00 stderr F I0120 10:49:39.397622 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="deployer-dockercfg-mpsf7" serviceaccount="deployer" 2026-01-20T10:49:39.418230853+00:00 stderr F I0120 10:49:39.418133 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" 
reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="builder-dockercfg-rn5hk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.432847943 +0000 UTC" 2026-01-20T10:49:39.418230853+00:00 stderr F I0120 10:49:39.418166 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="builder-dockercfg-rn5hk" serviceaccount="builder" 2026-01-20T10:49:39.435345804+00:00 stderr F I0120 10:49:39.435287 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="default-dockercfg-hmzqd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.425898387 +0000 UTC" 2026-01-20T10:49:39.435345804+00:00 stderr F I0120 10:49:39.435311 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="default-dockercfg-hmzqd" serviceaccount="default" 2026-01-20T10:49:39.453893839+00:00 stderr F I0120 10:49:39.453832 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="deployer-dockercfg-8mhlz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.418483693 +0000 UTC" 2026-01-20T10:49:39.453893839+00:00 stderr F I0120 10:49:39.453867 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="deployer-dockercfg-8mhlz" serviceaccount="deployer" 2026-01-20T10:49:39.462890633+00:00 stderr F I0120 10:49:39.462844 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" 
name="openshift-controller-manager-operator-dockercfg-zx7mb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.414878984 +0000 UTC" 2026-01-20T10:49:39.462890633+00:00 stderr F I0120 10:49:39.462877 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="openshift-controller-manager-operator-dockercfg-zx7mb" serviceaccount="openshift-controller-manager-operator" 2026-01-20T10:49:39.535353900+00:00 stderr F I0120 10:49:39.535054 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="builder-dockercfg-gmnbf" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.385992396 +0000 UTC" 2026-01-20T10:49:39.535353900+00:00 stderr F I0120 10:49:39.535102 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="builder-dockercfg-gmnbf" serviceaccount="builder" 2026-01-20T10:49:39.572329387+00:00 stderr F I0120 10:49:39.572270 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="default-dockercfg-vdmzk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-11 01:05:43.371104282 +0000 UTC" 2026-01-20T10:49:39.572329387+00:00 stderr F I0120 10:49:39.572296 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="default-dockercfg-vdmzk" serviceaccount="default" 2026-01-20T10:49:39.587102237+00:00 stderr F I0120 10:49:39.586779 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" 
name="deployer-dockercfg-q4jdx" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-11 01:05:44.76530247 +0000 UTC" 2026-01-20T10:49:39.587102237+00:00 stderr F I0120 10:49:39.586805 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="deployer-dockercfg-q4jdx" serviceaccount="deployer" 2026-01-20T10:49:39.590082268+00:00 stderr F I0120 10:49:39.589390 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="openshift-controller-manager-sa-dockercfg-58g82" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-11 01:05:44.764258958 +0000 UTC" 2026-01-20T10:49:39.590082268+00:00 stderr F I0120 10:49:39.589422 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="openshift-controller-manager-sa-dockercfg-58g82" serviceaccount="openshift-controller-manager-sa" 2026-01-20T10:49:39.598921667+00:00 stderr F I0120 10:49:39.598868 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="builder-dockercfg-pnlmc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-11 01:05:44.760466406 +0000 UTC" 2026-01-20T10:49:39.598921667+00:00 stderr F I0120 10:49:39.598897 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="builder-dockercfg-pnlmc" serviceaccount="builder" 2026-01-20T10:49:39.670140026+00:00 stderr F I0120 10:49:39.669581 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="default-dockercfg-zzdtv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" 
refreshTime="2025-06-11 01:05:44.732181916 +0000 UTC" 2026-01-20T10:49:39.670140026+00:00 stderr F I0120 10:49:39.669606 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="default-dockercfg-zzdtv" serviceaccount="default" 2026-01-20T10:49:39.694994513+00:00 stderr F I0120 10:49:39.694934 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="deployer-dockercfg-ft65g" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-11 01:05:44.722042176 +0000 UTC" 2026-01-20T10:49:39.694994513+00:00 stderr F I0120 10:49:39.694966 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="deployer-dockercfg-ft65g" serviceaccount="deployer" 2026-01-20T10:49:39.708797194+00:00 stderr F I0120 10:49:39.708745 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="dns-operator-dockercfg-wgzbx" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-06-11 01:05:46.116514203 +0000 UTC" 2026-01-20T10:49:39.708797194+00:00 stderr F I0120 10:49:39.708772 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="dns-operator-dockercfg-wgzbx" serviceaccount="dns-operator" 2026-01-20T10:49:39.715827737+00:00 stderr F I0120 10:49:39.715384 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="builder-dockercfg-hlsv2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-06-11 01:05:46.113908208 +0000 UTC" 2026-01-20T10:49:39.715827737+00:00 stderr F 
I0120 10:49:39.715413 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="builder-dockercfg-hlsv2" serviceaccount="builder" 2026-01-20T10:49:39.728155313+00:00 stderr F I0120 10:49:39.727610 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="default-dockercfg-4pr8h" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-06-11 01:05:46.108969269 +0000 UTC" 2026-01-20T10:49:39.728155313+00:00 stderr F I0120 10:49:39.727634 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="default-dockercfg-4pr8h" serviceaccount="default" 2026-01-20T10:49:39.809483710+00:00 stderr F I0120 10:49:39.809425 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="deployer-dockercfg-45hhc" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-06-11 01:05:46.076245016 +0000 UTC" 2026-01-20T10:49:39.809483710+00:00 stderr F I0120 10:49:39.809451 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="deployer-dockercfg-45hhc" serviceaccount="deployer" 2026-01-20T10:49:39.843263129+00:00 stderr F I0120 10:49:39.843212 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="dns-dockercfg-dff28" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:18 +0000 UTC" refreshTime="2025-06-11 01:05:57.262734593 +0000 UTC" 2026-01-20T10:49:39.843263129+00:00 stderr F I0120 10:49:39.843249 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="dns-dockercfg-dff28" serviceaccount="dns" 
2026-01-20T10:49:39.848586852+00:00 stderr F I0120 10:49:39.848543 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="node-resolver-dockercfg-5kr6x" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:18 +0000 UTC" refreshTime="2025-06-11 01:05:57.260594896 +0000 UTC" 2026-01-20T10:49:39.848586852+00:00 stderr F I0120 10:49:39.848567 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="node-resolver-dockercfg-5kr6x" serviceaccount="node-resolver" 2026-01-20T10:49:39.856042769+00:00 stderr F I0120 10:49:39.855992 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="builder-dockercfg-sf67n" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-06-11 01:06:00.057618061 +0000 UTC" 2026-01-20T10:49:39.856127062+00:00 stderr F I0120 10:49:39.856114 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="builder-dockercfg-sf67n" serviceaccount="builder" 2026-01-20T10:49:39.869336223+00:00 stderr F I0120 10:49:39.869291 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="default-dockercfg-xdg4w" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-06-11 01:06:00.052298432 +0000 UTC" 2026-01-20T10:49:39.869409335+00:00 stderr F I0120 10:49:39.869398 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="default-dockercfg-xdg4w" serviceaccount="default" 2026-01-20T10:49:39.945311417+00:00 stderr F I0120 10:49:39.945036 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be 
refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="deployer-dockercfg-zmpgs" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-06-11 01:06:00.021998996 +0000 UTC" 2026-01-20T10:49:39.945385859+00:00 stderr F I0120 10:49:39.945373 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="deployer-dockercfg-zmpgs" serviceaccount="deployer" 2026-01-20T10:49:39.976833978+00:00 stderr F I0120 10:49:39.976741 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="etcd-operator-dockercfg-hwzhz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-11 01:06:01.409316782 +0000 UTC" 2026-01-20T10:49:39.976833978+00:00 stderr F I0120 10:49:39.976800 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="etcd-operator-dockercfg-hwzhz" serviceaccount="etcd-operator" 2026-01-20T10:49:39.982520501+00:00 stderr F I0120 10:49:39.982483 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="builder-dockercfg-sqwsk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-11 01:06:01.407020184 +0000 UTC" 2026-01-20T10:49:39.982520501+00:00 stderr F I0120 10:49:39.982510 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="builder-dockercfg-sqwsk" serviceaccount="builder" 2026-01-20T10:49:39.997503477+00:00 stderr F I0120 10:49:39.997468 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="default-dockercfg-vd62w" url="10.217.4.41:5000" 
expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-06-11 01:06:00.001024497 +0000 UTC" 2026-01-20T10:49:39.997503477+00:00 stderr F I0120 10:49:39.997491 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="default-dockercfg-vd62w" serviceaccount="default" 2026-01-20T10:49:40.008939356+00:00 stderr F I0120 10:49:40.008909 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="deployer-dockercfg-p6hbm" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-11 01:06:01.396449638 +0000 UTC" 2026-01-20T10:49:40.008939356+00:00 stderr F I0120 10:49:40.008933 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="deployer-dockercfg-p6hbm" serviceaccount="deployer" 2026-01-20T10:49:40.075635127+00:00 stderr F I0120 10:49:40.075442 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="etcd-backup-sa-dockercfg-rd8b5" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-11 01:06:01.36983798 +0000 UTC" 2026-01-20T10:49:40.075635127+00:00 stderr F I0120 10:49:40.075472 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="etcd-backup-sa-dockercfg-rd8b5" serviceaccount="etcd-backup-sa" 2026-01-20T10:49:40.109720195+00:00 stderr F I0120 10:49:40.109270 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="etcd-sa-dockercfg-cgskw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-11 01:06:01.356304739 +0000 UTC" 2026-01-20T10:49:40.109720195+00:00 stderr F I0120 10:49:40.109297 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="etcd-sa-dockercfg-cgskw" serviceaccount="etcd-sa" 2026-01-20T10:49:40.115631445+00:00 stderr F I0120 10:49:40.115599 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="installer-sa-dockercfg-gxvhz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-11 01:06:01.353772835 +0000 UTC" 2026-01-20T10:49:40.115631445+00:00 stderr F I0120 10:49:40.115625 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="installer-sa-dockercfg-gxvhz" serviceaccount="installer-sa" 2026-01-20T10:49:40.140988748+00:00 stderr F I0120 10:49:40.140927 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="builder-dockercfg-h5pg5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-11 01:06:01.343644409 +0000 UTC" 2026-01-20T10:49:40.140988748+00:00 stderr F I0120 10:49:40.140956 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="builder-dockercfg-h5pg5" serviceaccount="builder" 2026-01-20T10:49:40.156526191+00:00 stderr F I0120 10:49:40.156469 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="default-dockercfg-swwqf" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-11 01:06:01.337425784 +0000 UTC" 2026-01-20T10:49:40.156526191+00:00 stderr F I0120 10:49:40.156496 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="default-dockercfg-swwqf" serviceaccount="default" 
2026-01-20T10:49:40.202800100+00:00 stderr F I0120 10:49:40.202730 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="deployer-dockercfg-ddh74" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-11 01:11:47.118925009 +0000 UTC" 2026-01-20T10:49:40.202800100+00:00 stderr F I0120 10:49:40.202764 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="deployer-dockercfg-ddh74" serviceaccount="deployer" 2026-01-20T10:49:40.249613986+00:00 stderr F I0120 10:49:40.247909 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="builder-dockercfg-2jkwc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-11 01:11:47.100857443 +0000 UTC" 2026-01-20T10:49:40.249613986+00:00 stderr F I0120 10:49:40.247944 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="builder-dockercfg-2jkwc" serviceaccount="builder" 2026-01-20T10:49:40.253177545+00:00 stderr F I0120 10:49:40.252497 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="cluster-image-registry-operator-dockercfg-ddjzq" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-11 01:11:47.099019014 +0000 UTC" 2026-01-20T10:49:40.253177545+00:00 stderr F I0120 10:49:40.252533 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="cluster-image-registry-operator-dockercfg-ddjzq" serviceaccount="cluster-image-registry-operator" 2026-01-20T10:49:40.261485868+00:00 stderr F I0120 10:49:40.261435 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="default-dockercfg-w58sb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-11 01:11:47.095439457 +0000 UTC" 2026-01-20T10:49:40.261485868+00:00 stderr F I0120 10:49:40.261459 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="default-dockercfg-w58sb" serviceaccount="default" 2026-01-20T10:49:40.291049409+00:00 stderr F I0120 10:49:40.290985 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="deployer-dockercfg-5sk9l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-11 01:11:47.083619813 +0000 UTC" 2026-01-20T10:49:40.291049409+00:00 stderr F I0120 10:49:40.291014 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="deployer-dockercfg-5sk9l" serviceaccount="deployer" 2026-01-20T10:49:40.349183410+00:00 stderr F I0120 10:49:40.347428 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="node-ca-dockercfg-mcgx9" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-11 01:11:48.461042001 +0000 UTC" 2026-01-20T10:49:40.349183410+00:00 stderr F I0120 10:49:40.347452 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="node-ca-dockercfg-mcgx9" serviceaccount="node-ca" 2026-01-20T10:49:40.383138104+00:00 stderr F I0120 10:49:40.381412 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-image-registry" name="pruner-dockercfg-nzhll" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-11 01:11:48.44744906 +0000 UTC" 2026-01-20T10:49:40.383138104+00:00 stderr F I0120 10:49:40.381441 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="pruner-dockercfg-nzhll" serviceaccount="pruner" 2026-01-20T10:49:40.388089494+00:00 stderr F I0120 10:49:40.387940 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="registry-dockercfg-q786x" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-11 01:11:48.444833488 +0000 UTC" 2026-01-20T10:49:40.388089494+00:00 stderr F I0120 10:49:40.387963 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="registry-dockercfg-q786x" serviceaccount="registry" 2026-01-20T10:49:40.414108357+00:00 stderr F I0120 10:49:40.413180 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="build-config-change-controller-dockercfg-x9cbn" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-11 01:11:48.43474042 +0000 UTC" 2026-01-20T10:49:40.414108357+00:00 stderr F I0120 10:49:40.413204 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="build-config-change-controller-dockercfg-x9cbn" serviceaccount="build-config-change-controller" 2026-01-20T10:49:40.428108194+00:00 stderr F I0120 10:49:40.427562 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="build-controller-dockercfg-6s44z" url="10.217.4.41:5000" 
expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-11 01:11:48.428987369 +0000 UTC" 2026-01-20T10:49:40.428108194+00:00 stderr F I0120 10:49:40.427587 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="build-controller-dockercfg-6s44z" serviceaccount="build-controller" 2026-01-20T10:49:40.469548506+00:00 stderr F I0120 10:49:40.469255 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="builder-dockercfg-ztkx9" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-11 01:11:48.412310889 +0000 UTC" 2026-01-20T10:49:40.469548506+00:00 stderr F I0120 10:49:40.469281 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="builder-dockercfg-ztkx9" serviceaccount="builder" 2026-01-20T10:49:40.514269138+00:00 stderr F I0120 10:49:40.514201 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="cluster-csr-approver-controller-dockercfg-4n58l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-11 01:11:48.394336706 +0000 UTC" 2026-01-20T10:49:40.514304789+00:00 stderr F I0120 10:49:40.514264 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="cluster-csr-approver-controller-dockercfg-4n58l" serviceaccount="cluster-csr-approver-controller" 2026-01-20T10:49:40.521961202+00:00 stderr F I0120 10:49:40.521902 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="cluster-quota-reconciliation-controller-dockercfg-6clv4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-11 
01:11:48.39125382 +0000 UTC" 2026-01-20T10:49:40.521961202+00:00 stderr F I0120 10:49:40.521940 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="cluster-quota-reconciliation-controller-dockercfg-6clv4" serviceaccount="cluster-quota-reconciliation-controller" 2026-01-20T10:49:40.553984818+00:00 stderr F I0120 10:49:40.553926 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="default-dockercfg-qcclx" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.778447145 +0000 UTC" 2026-01-20T10:49:40.553984818+00:00 stderr F I0120 10:49:40.553957 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="default-dockercfg-qcclx" serviceaccount="default" 2026-01-20T10:49:40.572790500+00:00 stderr F I0120 10:49:40.570282 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="default-rolebindings-controller-dockercfg-mjvl7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.771904327 +0000 UTC" 2026-01-20T10:49:40.572790500+00:00 stderr F I0120 10:49:40.570325 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="default-rolebindings-controller-dockercfg-mjvl7" serviceaccount="default-rolebindings-controller" 2026-01-20T10:49:40.617347527+00:00 stderr F I0120 10:49:40.617136 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="deployer-controller-dockercfg-nps8b" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.753158796 +0000 UTC" 
2026-01-20T10:49:40.617347527+00:00 stderr F I0120 10:49:40.617164 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deployer-controller-dockercfg-nps8b" serviceaccount="deployer-controller" 2026-01-20T10:49:40.650572709+00:00 stderr F I0120 10:49:40.650511 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="deployer-dockercfg-fjtnq" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.739811853 +0000 UTC" 2026-01-20T10:49:40.650572709+00:00 stderr F I0120 10:49:40.650542 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deployer-dockercfg-fjtnq" serviceaccount="deployer" 2026-01-20T10:49:40.676465898+00:00 stderr F I0120 10:49:40.676405 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="deploymentconfig-controller-dockercfg-7sjgp" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.729454022 +0000 UTC" 2026-01-20T10:49:40.676465898+00:00 stderr F I0120 10:49:40.676438 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deploymentconfig-controller-dockercfg-7sjgp" serviceaccount="deploymentconfig-controller" 2026-01-20T10:49:40.695235800+00:00 stderr F I0120 10:49:40.694729 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="image-import-controller-dockercfg-wtcck" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.722123406 +0000 UTC" 2026-01-20T10:49:40.695235800+00:00 stderr F I0120 
10:49:40.694761 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="image-import-controller-dockercfg-wtcck" serviceaccount="image-import-controller" 2026-01-20T10:49:40.708546955+00:00 stderr F I0120 10:49:40.708403 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="image-trigger-controller-dockercfg-75z9g" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.716653364 +0000 UTC" 2026-01-20T10:49:40.708546955+00:00 stderr F I0120 10:49:40.708432 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="image-trigger-controller-dockercfg-75z9g" serviceaccount="image-trigger-controller" 2026-01-20T10:49:40.710653700+00:00 stderr F W0120 10:49:40.710627 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2026-01-20T10:49:40.710668800+00:00 stderr F E0120 10:49:40.710653 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2026-01-20T10:49:40.746950376+00:00 stderr F I0120 10:49:40.743747 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="ingress-to-route-controller-dockercfg-486s5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.702517269 +0000 UTC" 2026-01-20T10:49:40.746950376+00:00 stderr F I0120 10:49:40.743780 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" 
name="ingress-to-route-controller-dockercfg-486s5" serviceaccount="ingress-to-route-controller" 2026-01-20T10:49:40.789686137+00:00 stderr F I0120 10:49:40.789628 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="namespace-security-allocation-controller-dockercfg-d9nzv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.684163642 +0000 UTC" 2026-01-20T10:49:40.789686137+00:00 stderr F I0120 10:49:40.789656 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="namespace-security-allocation-controller-dockercfg-d9nzv" serviceaccount="namespace-security-allocation-controller" 2026-01-20T10:49:40.815556595+00:00 stderr F I0120 10:49:40.815496 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="node-bootstrapper-dockercfg-mj85j" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.673815467 +0000 UTC" 2026-01-20T10:49:40.815556595+00:00 stderr F I0120 10:49:40.815523 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="node-bootstrapper-dockercfg-mj85j" serviceaccount="node-bootstrapper" 2026-01-20T10:49:40.840638599+00:00 stderr F I0120 10:49:40.837526 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="origin-namespace-controller-dockercfg-5s4zt" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.665004411 +0000 UTC" 2026-01-20T10:49:40.840638599+00:00 stderr F I0120 10:49:40.837554 1 image_pull_secret_controller.go:163] "Refreshing image 
pull secret" ns="openshift-infra" name="origin-namespace-controller-dockercfg-5s4zt" serviceaccount="origin-namespace-controller" 2026-01-20T10:49:40.855670596+00:00 stderr F I0120 10:49:40.855573 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="podsecurity-admission-label-syncer-controller-dockercfg-b5pxh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.657790907 +0000 UTC" 2026-01-20T10:49:40.855670596+00:00 stderr F I0120 10:49:40.855612 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="podsecurity-admission-label-syncer-controller-dockercfg-b5pxh" serviceaccount="podsecurity-admission-label-syncer-controller" 2026-01-20T10:49:40.868554759+00:00 stderr F I0120 10:49:40.868107 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="privileged-namespaces-psa-label-syncer-dockercfg-lm8jh" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.652791179 +0000 UTC" 2026-01-20T10:49:40.868554759+00:00 stderr F I0120 10:49:40.868158 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="privileged-namespaces-psa-label-syncer-dockercfg-lm8jh" serviceaccount="privileged-namespaces-psa-label-syncer" 2026-01-20T10:49:40.923439961+00:00 stderr F W0120 10:49:40.923373 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2026-01-20T10:49:40.923439961+00:00 stderr F E0120 10:49:40.923414 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: 
failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2026-01-20T10:49:40.928727511+00:00 stderr F I0120 10:49:40.928652 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="pv-recycler-controller-dockercfg-d76pz" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.628565113 +0000 UTC" 2026-01-20T10:49:40.928727511+00:00 stderr F I0120 10:49:40.928696 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="pv-recycler-controller-dockercfg-d76pz" serviceaccount="pv-recycler-controller" 2026-01-20T10:49:40.955307652+00:00 stderr F I0120 10:49:40.955238 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="resourcequota-controller-dockercfg-mlv87" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.617923268 +0000 UTC" 2026-01-20T10:49:40.955307652+00:00 stderr F I0120 10:49:40.955274 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="resourcequota-controller-dockercfg-mlv87" serviceaccount="resourcequota-controller" 2026-01-20T10:49:40.966587285+00:00 stderr F W0120 10:49:40.966520 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2026-01-20T10:49:40.966587285+00:00 stderr F E0120 10:49:40.966560 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2026-01-20T10:49:40.975596989+00:00 
stderr F I0120 10:49:40.975520 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="serviceaccount-controller-dockercfg-l8hfk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.609807113 +0000 UTC" 2026-01-20T10:49:40.975596989+00:00 stderr F I0120 10:49:40.975560 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="serviceaccount-controller-dockercfg-l8hfk" serviceaccount="serviceaccount-controller" 2026-01-20T10:49:40.988935216+00:00 stderr F I0120 10:49:40.988891 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="serviceaccount-pull-secrets-controller-dockercfg-hshqh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.604459603 +0000 UTC" 2026-01-20T10:49:40.988935216+00:00 stderr F I0120 10:49:40.988917 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="serviceaccount-pull-secrets-controller-dockercfg-hshqh" serviceaccount="serviceaccount-pull-secrets-controller" 2026-01-20T10:49:41.010012127+00:00 stderr F I0120 10:49:41.009923 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="template-instance-controller-dockercfg-f72bl" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.996107688 +0000 UTC" 2026-01-20T10:49:41.010012127+00:00 stderr F I0120 10:49:41.009974 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="template-instance-controller-dockercfg-f72bl" serviceaccount="template-instance-controller" 
2026-01-20T10:49:41.068714885+00:00 stderr F I0120 10:49:41.068632 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="template-instance-finalizer-controller-dockercfg-xwvr9" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-11 01:11:49.572570552 +0000 UTC" 2026-01-20T10:49:41.068714885+00:00 stderr F I0120 10:49:41.068666 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="template-instance-finalizer-controller-dockercfg-xwvr9" serviceaccount="template-instance-finalizer-controller" 2026-01-20T10:49:41.097840553+00:00 stderr F I0120 10:49:41.097758 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="unidling-controller-dockercfg-ndddq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.960910956 +0000 UTC" 2026-01-20T10:49:41.097840553+00:00 stderr F I0120 10:49:41.097805 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="unidling-controller-dockercfg-ndddq" serviceaccount="unidling-controller" 2026-01-20T10:49:41.114650555+00:00 stderr F I0120 10:49:41.114597 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-canary" name="builder-dockercfg-jjc4r" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.954175783 +0000 UTC" 2026-01-20T10:49:41.114650555+00:00 stderr F I0120 10:49:41.114625 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="builder-dockercfg-jjc4r" serviceaccount="builder" 2026-01-20T10:49:41.128928300+00:00 stderr F 
I0120 10:49:41.128876 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-canary" name="default-dockercfg-4clxc" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.948469697 +0000 UTC" 2026-01-20T10:49:41.128928300+00:00 stderr F I0120 10:49:41.128903 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="default-dockercfg-4clxc" serviceaccount="default" 2026-01-20T10:49:41.148715373+00:00 stderr F I0120 10:49:41.148664 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-canary" name="deployer-dockercfg-njf4l" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.94054862 +0000 UTC" 2026-01-20T10:49:41.148715373+00:00 stderr F I0120 10:49:41.148691 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="deployer-dockercfg-njf4l" serviceaccount="deployer" 2026-01-20T10:49:41.203318915+00:00 stderr F I0120 10:49:41.203232 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="builder-dockercfg-qnlh9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.918726179 +0000 UTC" 2026-01-20T10:49:41.203318915+00:00 stderr F I0120 10:49:41.203267 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="builder-dockercfg-qnlh9" serviceaccount="builder" 2026-01-20T10:49:41.234564208+00:00 stderr F I0120 10:49:41.234481 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to 
be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="default-dockercfg-dbsd9" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.906223998 +0000 UTC" 2026-01-20T10:49:41.234564208+00:00 stderr F I0120 10:49:41.234510 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="default-dockercfg-dbsd9" serviceaccount="default" 2026-01-20T10:49:41.246856652+00:00 stderr F W0120 10:49:41.246810 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2026-01-20T10:49:41.246856652+00:00 stderr F E0120 10:49:41.246840 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2026-01-20T10:49:41.288303824+00:00 stderr F I0120 10:49:41.287992 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="deployer-dockercfg-m9j7c" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.884819322 +0000 UTC" 2026-01-20T10:49:41.288388947+00:00 stderr F I0120 10:49:41.288373 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="deployer-dockercfg-m9j7c" serviceaccount="deployer" 2026-01-20T10:49:41.293306467+00:00 stderr F I0120 10:49:41.293248 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="ingress-operator-dockercfg-sxxwd" 
url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.882713482 +0000 UTC" 2026-01-20T10:49:41.293368929+00:00 stderr F I0120 10:49:41.293357 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="ingress-operator-dockercfg-sxxwd" serviceaccount="ingress-operator" 2026-01-20T10:49:41.293630247+00:00 stderr F I0120 10:49:41.293615 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="builder-dockercfg-dc6f6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.882560153 +0000 UTC" 2026-01-20T10:49:41.293663058+00:00 stderr F I0120 10:49:41.293653 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="builder-dockercfg-dc6f6" serviceaccount="builder" 2026-01-20T10:49:41.368613970+00:00 stderr F I0120 10:49:41.368560 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="default-dockercfg-dvqwl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.852592388 +0000 UTC" 2026-01-20T10:49:41.368693442+00:00 stderr F I0120 10:49:41.368677 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="default-dockercfg-dvqwl" serviceaccount="default" 2026-01-20T10:49:41.377972506+00:00 stderr F I0120 10:49:41.377929 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="deployer-dockercfg-6cpmp" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.848841619 
+0000 UTC" 2026-01-20T10:49:41.378038988+00:00 stderr F I0120 10:49:41.378027 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="deployer-dockercfg-6cpmp" serviceaccount="deployer" 2026-01-20T10:49:41.389124295+00:00 stderr F I0120 10:49:41.389052 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="router-dockercfg-n864z" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.844393711 +0000 UTC" 2026-01-20T10:49:41.389124295+00:00 stderr F I0120 10:49:41.389095 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="router-dockercfg-n864z" serviceaccount="router" 2026-01-20T10:49:41.403213874+00:00 stderr F I0120 10:49:41.403162 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="builder-dockercfg-pzdvv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.838748819 +0000 UTC" 2026-01-20T10:49:41.403213874+00:00 stderr F I0120 10:49:41.403188 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kni-infra" name="builder-dockercfg-pzdvv" serviceaccount="builder" 2026-01-20T10:49:41.415034154+00:00 stderr F I0120 10:49:41.414985 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="default-dockercfg-2zsnk" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.834018479 +0000 UTC" 2026-01-20T10:49:41.415113037+00:00 stderr F I0120 10:49:41.415101 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kni-infra" 
name="default-dockercfg-2zsnk" serviceaccount="default" 2026-01-20T10:49:41.501231910+00:00 stderr F I0120 10:49:41.501099 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="deployer-dockercfg-v52pl" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.799580081 +0000 UTC" 2026-01-20T10:49:41.501231910+00:00 stderr F I0120 10:49:41.501125 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kni-infra" name="deployer-dockercfg-v52pl" serviceaccount="deployer" 2026-01-20T10:49:41.516319849+00:00 stderr F I0120 10:49:41.516263 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="builder-dockercfg-2cs69" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.793509997 +0000 UTC" 2026-01-20T10:49:41.516319849+00:00 stderr F I0120 10:49:41.516293 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="builder-dockercfg-2cs69" serviceaccount="builder" 2026-01-20T10:49:41.528265423+00:00 stderr F I0120 10:49:41.528207 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="default-dockercfg-7dskq" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-11 01:11:50.788730936 +0000 UTC" 2026-01-20T10:49:41.528265423+00:00 stderr F I0120 10:49:41.528234 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="default-dockercfg-7dskq" serviceaccount="default" 
2026-01-20T10:49:41.536523624+00:00 stderr F I0120 10:49:41.536464 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="deployer-dockercfg-xjjdg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.185422943 +0000 UTC"
2026-01-20T10:49:41.536523624+00:00 stderr F I0120 10:49:41.536492 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="deployer-dockercfg-xjjdg" serviceaccount="deployer"
2026-01-20T10:49:41.550121179+00:00 stderr F I0120 10:49:41.550025 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="kube-apiserver-operator-dockercfg-n4bm9" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.180008018 +0000 UTC"
2026-01-20T10:49:41.550121179+00:00 stderr F I0120 10:49:41.550080 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="kube-apiserver-operator-dockercfg-n4bm9" serviceaccount="kube-apiserver-operator"
2026-01-20T10:49:41.591574442+00:00 stderr F W0120 10:49:41.591491 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io)
2026-01-20T10:49:41.591574442+00:00 stderr F E0120 10:49:41.591535 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io)
2026-01-20T10:49:41.635384436+00:00 stderr F I0120 10:49:41.635306 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="builder-dockercfg-rrcrf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.145901679 +0000 UTC"
2026-01-20T10:49:41.635384436+00:00 stderr F I0120 10:49:41.635344 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="builder-dockercfg-rrcrf" serviceaccount="builder"
2026-01-20T10:49:41.648947140+00:00 stderr F I0120 10:49:41.648893 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="default-dockercfg-dlw8f" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.140460961 +0000 UTC"
2026-01-20T10:49:41.648947140+00:00 stderr F I0120 10:49:41.648936 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="default-dockercfg-dlw8f" serviceaccount="default"
2026-01-20T10:49:41.682131620+00:00 stderr F I0120 10:49:41.676957 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="deployer-dockercfg-vr4tw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.129232823 +0000 UTC"
2026-01-20T10:49:41.682131620+00:00 stderr F I0120 10:49:41.676990 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="deployer-dockercfg-vr4tw" serviceaccount="deployer"
2026-01-20T10:49:41.691697541+00:00 stderr F I0120 10:49:41.691639 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="installer-sa-dockercfg-4kgh8" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.1233624 +0000 UTC"
2026-01-20T10:49:41.691697541+00:00 stderr F I0120 10:49:41.691673 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="installer-sa-dockercfg-4kgh8" serviceaccount="installer-sa"
2026-01-20T10:49:41.696604031+00:00 stderr F I0120 10:49:41.694857 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="localhost-recovery-client-dockercfg-qll5d" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.122069809 +0000 UTC"
2026-01-20T10:49:41.696604031+00:00 stderr F I0120 10:49:41.694881 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="localhost-recovery-client-dockercfg-qll5d" serviceaccount="localhost-recovery-client"
2026-01-20T10:49:41.725074568+00:00 stderr F W0120 10:49:41.724225 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)
2026-01-20T10:49:41.725074568+00:00 stderr F E0120 10:49:41.724267 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)
2026-01-20T10:49:41.782545058+00:00 stderr F I0120 10:49:41.782477 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="builder-dockercfg-lfl8l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.087024342 +0000 UTC"
2026-01-20T10:49:41.782545058+00:00 stderr F I0120 10:49:41.782515 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="builder-dockercfg-lfl8l" serviceaccount="builder"
2026-01-20T10:49:41.805049474+00:00 stderr F I0120 10:49:41.804994 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="default-dockercfg-ztmg5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.078015008 +0000 UTC"
2026-01-20T10:49:41.805049474+00:00 stderr F I0120 10:49:41.805023 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="default-dockercfg-ztmg5" serviceaccount="default"
2026-01-20T10:49:41.813405388+00:00 stderr F I0120 10:49:41.813359 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="deployer-dockercfg-qpmjq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.074674081 +0000 UTC"
2026-01-20T10:49:41.813481460+00:00 stderr F I0120 10:49:41.813469 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="deployer-dockercfg-qpmjq" serviceaccount="deployer"
2026-01-20T10:49:41.825821767+00:00 stderr F I0120 10:49:41.825763 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="kube-controller-manager-operator-dockercfg-mwmd7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.069721011 +0000 UTC"
2026-01-20T10:49:41.825821767+00:00 stderr F I0120 10:49:41.825793 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="kube-controller-manager-operator-dockercfg-mwmd7" serviceaccount="kube-controller-manager-operator"
2026-01-20T10:49:41.829325263+00:00 stderr F I0120 10:49:41.829281 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="builder-dockercfg-4xp92" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.068303523 +0000 UTC"
2026-01-20T10:49:41.829325263+00:00 stderr F I0120 10:49:41.829309 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="builder-dockercfg-4xp92" serviceaccount="builder"
2026-01-20T10:49:41.916592971+00:00 stderr F I0120 10:49:41.916534 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="default-dockercfg-8rtql" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.03340234 +0000 UTC"
2026-01-20T10:49:41.916592971+00:00 stderr F I0120 10:49:41.916568 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="default-dockercfg-8rtql" serviceaccount="default"
2026-01-20T10:49:41.934826197+00:00 stderr F I0120 10:49:41.934766 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="deployer-dockercfg-bnp5r" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.026107054 +0000 UTC"
2026-01-20T10:49:41.934826197+00:00 stderr F I0120 10:49:41.934794 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="deployer-dockercfg-bnp5r" serviceaccount="deployer"
2026-01-20T10:49:41.947886594+00:00 stderr F I0120 10:49:41.947834 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="installer-sa-dockercfg-dl9g2" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.020882207 +0000 UTC"
2026-01-20T10:49:41.947886594+00:00 stderr F I0120 10:49:41.947861 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="installer-sa-dockercfg-dl9g2" serviceaccount="installer-sa"
2026-01-20T10:49:41.955317461+00:00 stderr F I0120 10:49:41.955280 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="kube-controller-manager-sa-dockercfg-4jsp6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.017901588 +0000 UTC"
2026-01-20T10:49:41.955317461+00:00 stderr F I0120 10:49:41.955304 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="kube-controller-manager-sa-dockercfg-4jsp6" serviceaccount="kube-controller-manager-sa"
2026-01-20T10:49:41.961451648+00:00 stderr F I0120 10:49:41.961409 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="localhost-recovery-client-dockercfg-862fd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:52.01545071 +0000 UTC"
2026-01-20T10:49:41.961451648+00:00 stderr F I0120 10:49:41.961445 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="localhost-recovery-client-dockercfg-862fd" serviceaccount="localhost-recovery-client"
2026-01-20T10:49:42.059921457+00:00 stderr F I0120 10:49:42.059758 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="builder-dockercfg-2h8s9" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:51.976114027 +0000 UTC"
2026-01-20T10:49:42.059921457+00:00 stderr F I0120 10:49:42.059792 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="builder-dockercfg-2h8s9" serviceaccount="builder"
2026-01-20T10:49:42.078298247+00:00 stderr F I0120 10:49:42.078245 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="default-dockercfg-vmhb5" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:51.968719054 +0000 UTC"
2026-01-20T10:49:42.078298247+00:00 stderr F I0120 10:49:42.078278 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="default-dockercfg-vmhb5" serviceaccount="default"
2026-01-20T10:49:42.082163324+00:00 stderr F I0120 10:49:42.082129 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="deployer-dockercfg-ltvsw" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:51.967161275 +0000 UTC"
2026-01-20T10:49:42.082163324+00:00 stderr F I0120 10:49:42.082157 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="deployer-dockercfg-ltvsw" serviceaccount="deployer"
2026-01-20T10:49:42.091708166+00:00 stderr F I0120 10:49:42.091665 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="openshift-kube-scheduler-operator-dockercfg-w67m2" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:51.963351283 +0000 UTC"
2026-01-20T10:49:42.091708166+00:00 stderr F I0120 10:49:42.091691 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="openshift-kube-scheduler-operator-dockercfg-w67m2" serviceaccount="openshift-kube-scheduler-operator"
2026-01-20T10:49:42.103036481+00:00 stderr F I0120 10:49:42.102998 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="builder-dockercfg-wmtqz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:51.958817516 +0000 UTC"
2026-01-20T10:49:42.103107553+00:00 stderr F I0120 10:49:42.103096 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="builder-dockercfg-wmtqz" serviceaccount="builder"
2026-01-20T10:49:42.188477333+00:00 stderr F I0120 10:49:42.188420 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="default-dockercfg-m6rz5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:51.924645264 +0000 UTC"
2026-01-20T10:49:42.188477333+00:00 stderr F I0120 10:49:42.188456 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="default-dockercfg-m6rz5" serviceaccount="default"
2026-01-20T10:49:42.208627246+00:00 stderr F I0120 10:49:42.208579 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="deployer-dockercfg-mks8v" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-11 01:11:51.916580638 +0000 UTC"
2026-01-20T10:49:42.208627246+00:00 stderr F I0120 10:49:42.208603 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="deployer-dockercfg-mks8v" serviceaccount="deployer"
2026-01-20T10:49:42.215780925+00:00 stderr F I0120 10:49:42.215758 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="installer-sa-dockercfg-9ln8g" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.313705054 +0000 UTC"
2026-01-20T10:49:42.215826506+00:00 stderr F I0120 10:49:42.215815 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="installer-sa-dockercfg-9ln8g" serviceaccount="installer-sa"
2026-01-20T10:49:42.223206491+00:00 stderr F I0120 10:49:42.223155 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="localhost-recovery-client-dockercfg-b5dfm" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.310756529 +0000 UTC"
2026-01-20T10:49:42.223206491+00:00 stderr F I0120 10:49:42.223189 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="localhost-recovery-client-dockercfg-b5dfm" serviceaccount="localhost-recovery-client"
2026-01-20T10:49:42.242167178+00:00 stderr F I0120 10:49:42.242127 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="openshift-kube-scheduler-sa-dockercfg-d9dtc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.303164013 +0000 UTC"
2026-01-20T10:49:42.242227320+00:00 stderr F I0120 10:49:42.242214 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="openshift-kube-scheduler-sa-dockercfg-d9dtc" serviceaccount="openshift-kube-scheduler-sa"
2026-01-20T10:49:42.332177029+00:00 stderr F I0120 10:49:42.332113 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="builder-dockercfg-cp4s8" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.267177921 +0000 UTC"
2026-01-20T10:49:42.332177029+00:00 stderr F I0120 10:49:42.332152 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="builder-dockercfg-cp4s8" serviceaccount="builder"
2026-01-20T10:49:42.342686380+00:00 stderr F I0120 10:49:42.342625 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="default-dockercfg-47tpp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.262965308 +0000 UTC"
2026-01-20T10:49:42.342686380+00:00 stderr F I0120 10:49:42.342659 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="default-dockercfg-47tpp" serviceaccount="default"
2026-01-20T10:49:42.349040453+00:00 stderr F I0120 10:49:42.348995 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="deployer-dockercfg-vphfw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.260414896 +0000 UTC"
2026-01-20T10:49:42.349040453+00:00 stderr F I0120 10:49:42.349029 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="deployer-dockercfg-vphfw" serviceaccount="deployer"
2026-01-20T10:49:42.355396037+00:00 stderr F I0120 10:49:42.355361 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="kube-storage-version-migrator-operator-dockercfg-8l4fr" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.257872913 +0000 UTC"
2026-01-20T10:49:42.355410348+00:00 stderr F I0120 10:49:42.355393 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="kube-storage-version-migrator-operator-dockercfg-8l4fr" serviceaccount="kube-storage-version-migrator-operator"
2026-01-20T10:49:42.383794962+00:00 stderr F I0120 10:49:42.383738 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="builder-dockercfg-lvk8s" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.24652046 +0000 UTC"
2026-01-20T10:49:42.383794962+00:00 stderr F I0120 10:49:42.383768 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="builder-dockercfg-lvk8s" serviceaccount="builder"
2026-01-20T10:49:42.468363638+00:00 stderr F I0120 10:49:42.467891 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="default-dockercfg-4vhnw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.212855694 +0000 UTC"
2026-01-20T10:49:42.468363638+00:00 stderr F I0120 10:49:42.467917 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="default-dockercfg-4vhnw" serviceaccount="default"
2026-01-20T10:49:42.475499906+00:00 stderr F I0120 10:49:42.475403 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="deployer-dockercfg-tzgw6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.209854299 +0000 UTC"
2026-01-20T10:49:42.475499906+00:00 stderr F I0120 10:49:42.475430 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="deployer-dockercfg-tzgw6" serviceaccount="deployer"
2026-01-20T10:49:42.484144649+00:00 stderr F I0120 10:49:42.482503 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="kube-storage-version-migrator-sa-dockercfg-5v9xj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.207015988 +0000 UTC"
2026-01-20T10:49:42.484144649+00:00 stderr F I0120 10:49:42.482534 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="kube-storage-version-migrator-sa-dockercfg-5v9xj" serviceaccount="kube-storage-version-migrator-sa"
2026-01-20T10:49:42.605160595+00:00 stderr F I0120 10:49:42.603622 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="builder-dockercfg-t2jdt" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.158569185 +0000 UTC"
2026-01-20T10:49:42.605160595+00:00 stderr F I0120 10:49:42.603653 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="builder-dockercfg-t2jdt" serviceaccount="builder"
2026-01-20T10:49:42.607373623+00:00 stderr F I0120 10:49:42.605463 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="cluster-autoscaler-dockercfg-5ld89" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.157828571 +0000 UTC"
2026-01-20T10:49:42.607373623+00:00 stderr F I0120 10:49:42.605492 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="cluster-autoscaler-dockercfg-5ld89" serviceaccount="cluster-autoscaler"
2026-01-20T10:49:42.656034294+00:00 stderr F I0120 10:49:42.655958 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="cluster-autoscaler-operator-dockercfg-nlvfr" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.137630323 +0000 UTC"
2026-01-20T10:49:42.656034294+00:00 stderr F I0120 10:49:42.655989 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="cluster-autoscaler-operator-dockercfg-nlvfr" serviceaccount="cluster-autoscaler-operator"
2026-01-20T10:49:42.662123280+00:00 stderr F I0120 10:49:42.662080 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="control-plane-machine-set-operator-dockercfg-7wtgv" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.135189001 +0000 UTC"
2026-01-20T10:49:42.662123280+00:00 stderr F I0120 10:49:42.662107 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="control-plane-machine-set-operator-dockercfg-7wtgv" serviceaccount="control-plane-machine-set-operator"
2026-01-20T10:49:42.683012297+00:00 stderr F I0120 10:49:42.682914 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="default-dockercfg-t6f6m" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.126849003 +0000 UTC"
2026-01-20T10:49:42.683012297+00:00 stderr F I0120 10:49:42.682939 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="default-dockercfg-t6f6m" serviceaccount="default"
2026-01-20T10:49:42.684025377+00:00 stderr F I0120 10:49:42.683410 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="deployer-dockercfg-z5zkh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.126642512 +0000 UTC"
2026-01-20T10:49:42.684025377+00:00 stderr F I0120 10:49:42.683427 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="deployer-dockercfg-z5zkh" serviceaccount="deployer"
2026-01-20T10:49:42.689121012+00:00 stderr F I0120 10:49:42.689083 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-controllers-dockercfg-5gkdn" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.124381904 +0000 UTC"
2026-01-20T10:49:42.689121012+00:00 stderr F I0120 10:49:42.689112 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-controllers-dockercfg-5gkdn" serviceaccount="machine-api-controllers"
2026-01-20T10:49:42.788127838+00:00 stderr F I0120 10:49:42.788048 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-operator-dockercfg-q7fmc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.084795762 +0000 UTC"
2026-01-20T10:49:42.788127838+00:00 stderr F I0120 10:49:42.788098 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-operator-dockercfg-q7fmc" serviceaccount="machine-api-operator"
2026-01-20T10:49:42.801865936+00:00 stderr F I0120 10:49:42.801801 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-termination-handler-dockercfg-86pgj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.079296075 +0000 UTC"
2026-01-20T10:49:42.801865936+00:00 stderr F I0120 10:49:42.801832 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-termination-handler-dockercfg-86pgj" serviceaccount="machine-api-termination-handler"
2026-01-20T10:49:42.810701175+00:00 stderr F I0120 10:49:42.810651 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="builder-dockercfg-kjnvp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.075759875 +0000 UTC"
2026-01-20T10:49:42.810701175+00:00 stderr F I0120 10:49:42.810686 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="builder-dockercfg-kjnvp" serviceaccount="builder"
2026-01-20T10:49:42.814612235+00:00 stderr F I0120 10:49:42.814564 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="default-dockercfg-lh7f8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.074190423 +0000 UTC"
2026-01-20T10:49:42.814612235+00:00 stderr F I0120 10:49:42.814598 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="default-dockercfg-lh7f8" serviceaccount="default"
2026-01-20T10:49:42.829716355+00:00 stderr F I0120 10:49:42.829658 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="deployer-dockercfg-8qpnv" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:53.068152155 +0000 UTC"
2026-01-20T10:49:42.829716355+00:00 stderr F I0120 10:49:42.829688 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="deployer-dockercfg-8qpnv" serviceaccount="deployer"
2026-01-20T10:49:43.275647437+00:00 stderr F I0120 10:49:43.273227 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-controller-dockercfg-wtlbj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.890722439 +0000 UTC"
2026-01-20T10:49:43.275647437+00:00 stderr F I0120 10:49:43.273255 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-controller-dockercfg-wtlbj" serviceaccount="machine-config-controller"
2026-01-20T10:49:43.275647437+00:00 stderr F I0120 10:49:43.274640 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-daemon-dockercfg-rfwqs" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.890157965 +0000 UTC"
2026-01-20T10:49:43.275647437+00:00 stderr F I0120 10:49:43.274704 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-daemon-dockercfg-rfwqs" serviceaccount="machine-config-daemon"
2026-01-20T10:49:43.275647437+00:00 stderr F I0120 10:49:43.275212 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-operator-dockercfg-vhshz" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.889922614 +0000 UTC"
2026-01-20T10:49:43.275647437+00:00 stderr F I0120 10:49:43.275225 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-operator-dockercfg-vhshz" serviceaccount="machine-config-operator"
2026-01-20T10:49:43.275647437+00:00 stderr F I0120 10:49:43.275632 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-server-dockercfg-xm5kr" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.889756917 +0000 UTC"
2026-01-20T10:49:43.275700039+00:00 stderr F I0120 10:49:43.275649 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-server-dockercfg-xm5kr" serviceaccount="machine-config-server"
2026-01-20T10:49:43.278111533+00:00 stderr F I0120 10:49:43.278078 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-os-builder-dockercfg-p6ljl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.888785768 +0000 UTC"
2026-01-20T10:49:43.278111533+00:00 stderr F I0120 10:49:43.278105 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-os-builder-dockercfg-p6ljl" serviceaccount="machine-os-builder"
2026-01-20T10:49:43.292631235+00:00 stderr F I0120 10:49:43.292566 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-os-puller-dockercfg-bkkmv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.882986603 +0000 UTC"
2026-01-20T10:49:43.292631235+00:00 stderr F I0120 10:49:43.292607 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-os-puller-dockercfg-bkkmv" serviceaccount="machine-os-puller"
2026-01-20T10:49:43.293754989+00:00 stderr F I0120 10:49:43.293725 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="node-bootstrapper-dockercfg-8hvnd" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.882523148 +0000 UTC"
2026-01-20T10:49:43.293774340+00:00 stderr F I0120 10:49:43.293752 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="node-bootstrapper-dockercfg-8hvnd" serviceaccount="node-bootstrapper"
2026-01-20T10:49:43.296258006+00:00 stderr F I0120 10:49:43.295831 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="builder-dockercfg-w5l7k" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.881680504 +0000 UTC"
2026-01-20T10:49:43.296258006+00:00 stderr F I0120 10:49:43.295857 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="builder-dockercfg-w5l7k" serviceaccount="builder"
2026-01-20T10:49:43.303168936+00:00 stderr F I0120 10:49:43.301991 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="certified-operators-dockercfg-twmwc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.879218416 +0000 UTC"
2026-01-20T10:49:43.303168936+00:00 stderr F I0120 10:49:43.302136 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="certified-operators-dockercfg-twmwc" serviceaccount="certified-operators"
2026-01-20T10:49:43.303168936+00:00 stderr F I0120 10:49:43.302089 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="community-operators-dockercfg-sv888" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.879181211 +0000 UTC"
2026-01-20T10:49:43.303168936+00:00 stderr F I0120 10:49:43.302275 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="community-operators-dockercfg-sv888" serviceaccount="community-operators"
2026-01-20T10:49:43.329491497+00:00 stderr F I0120 10:49:43.329428 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="default-dockercfg-4w6pc" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.868252307 +0000 UTC"
2026-01-20T10:49:43.329491497+00:00 stderr F I0120 10:49:43.329455 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="default-dockercfg-4w6pc" serviceaccount="default"
2026-01-20T10:49:43.337927664+00:00 stderr F I0120 10:49:43.337883 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="deployer-dockercfg-wdpgc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.864860319 +0000
UTC" 2026-01-20T10:49:43.337927664+00:00 stderr F I0120 10:49:43.337913 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="deployer-dockercfg-wdpgc" serviceaccount="deployer" 2026-01-20T10:49:43.341765602+00:00 stderr F I0120 10:49:43.341729 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="marketplace-operator-dockercfg-b4zbk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.863321864 +0000 UTC" 2026-01-20T10:49:43.341765602+00:00 stderr F I0120 10:49:43.341755 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="marketplace-operator-dockercfg-b4zbk" serviceaccount="marketplace-operator" 2026-01-20T10:49:43.371190457+00:00 stderr F I0120 10:49:43.371123 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="redhat-marketplace-dockercfg-kpdvz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.851590479 +0000 UTC" 2026-01-20T10:49:43.371190457+00:00 stderr F I0120 10:49:43.371154 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="redhat-marketplace-dockercfg-kpdvz" serviceaccount="redhat-marketplace" 2026-01-20T10:49:43.381928315+00:00 stderr F I0120 10:49:43.381868 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="redhat-operators-dockercfg-dwn4s" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.847265931 +0000 UTC" 2026-01-20T10:49:43.381928315+00:00 stderr F I0120 10:49:43.381893 1 image_pull_secret_controller.go:163] 
"Refreshing image pull secret" ns="openshift-marketplace" name="redhat-operators-dockercfg-dwn4s" serviceaccount="redhat-operators" 2026-01-20T10:49:43.460816697+00:00 stderr F I0120 10:49:43.460763 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="builder-dockercfg-c82h4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.815708745 +0000 UTC" 2026-01-20T10:49:43.460816697+00:00 stderr F I0120 10:49:43.460788 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="builder-dockercfg-c82h4" serviceaccount="builder" 2026-01-20T10:49:43.468031978+00:00 stderr F I0120 10:49:43.467990 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="cluster-monitoring-operator-dockercfg-vg26t" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.812813185 +0000 UTC" 2026-01-20T10:49:43.468031978+00:00 stderr F I0120 10:49:43.468014 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="cluster-monitoring-operator-dockercfg-vg26t" serviceaccount="cluster-monitoring-operator" 2026-01-20T10:49:43.482679423+00:00 stderr F I0120 10:49:43.482618 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="default-dockercfg-vffxx" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.806966499 +0000 UTC" 2026-01-20T10:49:43.482679423+00:00 stderr F I0120 10:49:43.482650 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="default-dockercfg-vffxx" 
serviceaccount="default" 2026-01-20T10:49:43.515934087+00:00 stderr F I0120 10:49:43.515792 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="deployer-dockercfg-fzkn2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.793696723 +0000 UTC" 2026-01-20T10:49:43.515934087+00:00 stderr F I0120 10:49:43.515821 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="deployer-dockercfg-fzkn2" serviceaccount="deployer" 2026-01-20T10:49:43.522230008+00:00 stderr F I0120 10:49:43.520513 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="builder-dockercfg-wqmk7" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.191811833 +0000 UTC" 2026-01-20T10:49:43.522230008+00:00 stderr F I0120 10:49:43.520557 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="builder-dockercfg-wqmk7" serviceaccount="builder" 2026-01-20T10:49:43.605723121+00:00 stderr F I0120 10:49:43.605666 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="default-dockercfg-smth4" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.157748364 +0000 UTC" 2026-01-20T10:49:43.605723121+00:00 stderr F I0120 10:49:43.605694 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="default-dockercfg-smth4" serviceaccount="default" 2026-01-20T10:49:43.607950569+00:00 stderr F I0120 10:49:43.607920 1 image_pull_secret_controller.go:286] "Internal registry 
pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="deployer-dockercfg-lbcm2" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-11 01:11:52.756838536 +0000 UTC" 2026-01-20T10:49:43.607950569+00:00 stderr F I0120 10:49:43.607939 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="deployer-dockercfg-lbcm2" serviceaccount="deployer" 2026-01-20T10:49:43.616190330+00:00 stderr F I0120 10:49:43.616114 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="metrics-daemon-sa-dockercfg-22xbz" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.153569689 +0000 UTC" 2026-01-20T10:49:43.616190330+00:00 stderr F I0120 10:49:43.616144 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="metrics-daemon-sa-dockercfg-22xbz" serviceaccount="metrics-daemon-sa" 2026-01-20T10:49:43.643901285+00:00 stderr F I0120 10:49:43.643834 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="multus-ac-dockercfg-ltm2q" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.142480327 +0000 UTC" 2026-01-20T10:49:43.643901285+00:00 stderr F I0120 10:49:43.643863 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-ac-dockercfg-ltm2q" serviceaccount="multus-ac" 2026-01-20T10:49:43.654687923+00:00 stderr F I0120 10:49:43.654647 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" 
name="multus-ancillary-tools-dockercfg-6hnwp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.138155448 +0000 UTC" 2026-01-20T10:49:43.654687923+00:00 stderr F I0120 10:49:43.654676 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-ancillary-tools-dockercfg-6hnwp" serviceaccount="multus-ancillary-tools" 2026-01-20T10:49:43.735211055+00:00 stderr F I0120 10:49:43.735125 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="multus-dockercfg-2hmjh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.105963559 +0000 UTC" 2026-01-20T10:49:43.735211055+00:00 stderr F I0120 10:49:43.735155 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-dockercfg-2hmjh" serviceaccount="multus" 2026-01-20T10:49:43.748134619+00:00 stderr F I0120 10:49:43.748033 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="builder-dockercfg-v84zj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.100813638 +0000 UTC" 2026-01-20T10:49:43.748134619+00:00 stderr F I0120 10:49:43.748109 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="builder-dockercfg-v84zj" serviceaccount="builder" 2026-01-20T10:49:43.764136877+00:00 stderr F I0120 10:49:43.758884 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="default-dockercfg-l7mph" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" 
refreshTime="2025-06-11 01:11:54.096463109 +0000 UTC" 2026-01-20T10:49:43.764136877+00:00 stderr F I0120 10:49:43.758921 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="default-dockercfg-l7mph" serviceaccount="default" 2026-01-20T10:49:43.785907189+00:00 stderr F I0120 10:49:43.784172 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="deployer-dockercfg-xfj6f" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.086349541 +0000 UTC" 2026-01-20T10:49:43.785907189+00:00 stderr F I0120 10:49:43.784213 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="deployer-dockercfg-xfj6f" serviceaccount="deployer" 2026-01-20T10:49:43.795176832+00:00 stderr F I0120 10:49:43.795117 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="network-diagnostics-dockercfg-lpz5v" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.081970596 +0000 UTC" 2026-01-20T10:49:43.795260385+00:00 stderr F I0120 10:49:43.795247 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="network-diagnostics-dockercfg-lpz5v" serviceaccount="network-diagnostics" 2026-01-20T10:49:43.874021494+00:00 stderr F I0120 10:49:43.873972 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="builder-dockercfg-jflnh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 
01:11:54.050427075 +0000 UTC" 2026-01-20T10:49:43.874111787+00:00 stderr F I0120 10:49:43.874099 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="builder-dockercfg-jflnh" serviceaccount="builder" 2026-01-20T10:49:43.881617666+00:00 stderr F I0120 10:49:43.881594 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="default-dockercfg-75lxp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.047370422 +0000 UTC" 2026-01-20T10:49:43.881670017+00:00 stderr F I0120 10:49:43.881658 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="default-dockercfg-75lxp" serviceaccount="default" 2026-01-20T10:49:43.904681238+00:00 stderr F I0120 10:49:43.904584 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="deployer-dockercfg-rj2df" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.038183494 +0000 UTC" 2026-01-20T10:49:43.904681238+00:00 stderr F I0120 10:49:43.904624 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="deployer-dockercfg-rj2df" serviceaccount="deployer" 2026-01-20T10:49:43.918411046+00:00 stderr F I0120 10:49:43.918351 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="network-node-identity-dockercfg-q58sj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.032674011 +0000 UTC" 2026-01-20T10:49:43.918411046+00:00 stderr 
F I0120 10:49:43.918380 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="network-node-identity-dockercfg-q58sj" serviceaccount="network-node-identity" 2026-01-20T10:49:43.929770352+00:00 stderr F I0120 10:49:43.929687 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="builder-dockercfg-rm8cp" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:54.028145932 +0000 UTC" 2026-01-20T10:49:43.929770352+00:00 stderr F I0120 10:49:43.929735 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="builder-dockercfg-rm8cp" serviceaccount="builder" 2026-01-20T10:49:44.009506670+00:00 stderr F I0120 10:49:44.009418 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="cluster-network-operator-dockercfg-fknq9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.996254233 +0000 UTC" 2026-01-20T10:49:44.009506670+00:00 stderr F I0120 10:49:44.009465 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="cluster-network-operator-dockercfg-fknq9" serviceaccount="cluster-network-operator" 2026-01-20T10:49:44.017206025+00:00 stderr F I0120 10:49:44.015742 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="default-dockercfg-qbblb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.993712704 +0000 UTC" 2026-01-20T10:49:44.017206025+00:00 stderr F I0120 10:49:44.015768 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="default-dockercfg-qbblb" serviceaccount="default" 2026-01-20T10:49:44.042114414+00:00 stderr F I0120 10:49:44.040910 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="deployer-dockercfg-dxzsm" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.983652817 +0000 UTC" 2026-01-20T10:49:44.042114414+00:00 stderr F I0120 10:49:44.040940 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="deployer-dockercfg-dxzsm" serviceaccount="deployer" 2026-01-20T10:49:44.055619935+00:00 stderr F I0120 10:49:44.055529 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="iptables-alerter-dockercfg-m85pb" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.977811648 +0000 UTC" 2026-01-20T10:49:44.055619935+00:00 stderr F I0120 10:49:44.055578 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="iptables-alerter-dockercfg-m85pb" serviceaccount="iptables-alerter" 2026-01-20T10:49:44.067869508+00:00 stderr F I0120 10:49:44.067822 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="builder-dockercfg-x5dlr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.972888665 +0000 UTC" 2026-01-20T10:49:44.067944150+00:00 stderr F I0120 10:49:44.067932 1 image_pull_secret_controller.go:163] "Refreshing image 
pull secret" ns="openshift-node" name="builder-dockercfg-x5dlr" serviceaccount="builder" 2026-01-20T10:49:44.143151101+00:00 stderr F I0120 10:49:44.143102 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="default-dockercfg-rkcl2" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.942783196 +0000 UTC" 2026-01-20T10:49:44.143216853+00:00 stderr F I0120 10:49:44.143205 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-node" name="default-dockercfg-rkcl2" serviceaccount="default" 2026-01-20T10:49:44.156096716+00:00 stderr F I0120 10:49:44.156017 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="deployer-dockercfg-ng566" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.937609048 +0000 UTC" 2026-01-20T10:49:44.156096716+00:00 stderr F I0120 10:49:44.156050 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-node" name="deployer-dockercfg-ng566" serviceaccount="deployer" 2026-01-20T10:49:44.182362046+00:00 stderr F I0120 10:49:44.182314 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" name="builder-dockercfg-zj94t" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.927089878 +0000 UTC" 2026-01-20T10:49:44.182432918+00:00 stderr F I0120 10:49:44.182420 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="builder-dockercfg-zj94t" serviceaccount="builder" 
2026-01-20T10:49:44.194882897+00:00 stderr F I0120 10:49:44.194822 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" name="default-dockercfg-w25km" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-11 01:11:53.922085474 +0000 UTC" 2026-01-20T10:49:44.194882897+00:00 stderr F I0120 10:49:44.194855 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="default-dockercfg-w25km" serviceaccount="default" 2026-01-20T10:49:44.202004884+00:00 stderr F I0120 10:49:44.201957 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" name="deployer-dockercfg-dgzbc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.319223983 +0000 UTC" 2026-01-20T10:49:44.202004884+00:00 stderr F I0120 10:49:44.201983 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="deployer-dockercfg-dgzbc" serviceaccount="deployer" 2026-01-20T10:49:44.282349542+00:00 stderr F I0120 10:49:44.282293 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="builder-dockercfg-ssklj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.287097436 +0000 UTC" 2026-01-20T10:49:44.282349542+00:00 stderr F I0120 10:49:44.282320 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="builder-dockercfg-ssklj" serviceaccount="builder" 2026-01-20T10:49:44.296268836+00:00 stderr F I0120 10:49:44.296216 1 image_pull_secret_controller.go:286] "Internal registry pull 
secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="default-dockercfg-m5zsx" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.281530086 +0000 UTC" 2026-01-20T10:49:44.296268836+00:00 stderr F I0120 10:49:44.296247 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="default-dockercfg-m5zsx" serviceaccount="default" 2026-01-20T10:49:44.324192136+00:00 stderr F I0120 10:49:44.324037 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="deployer-dockercfg-s7krv" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.270403392 +0000 UTC" 2026-01-20T10:49:44.324192136+00:00 stderr F I0120 10:49:44.324171 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="deployer-dockercfg-s7krv" serviceaccount="deployer" 2026-01-20T10:49:44.330404105+00:00 stderr F I0120 10:49:44.330382 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="oauth-apiserver-sa-dockercfg-qvbzg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.267854886 +0000 UTC" 2026-01-20T10:49:44.330451147+00:00 stderr F I0120 10:49:44.330440 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="oauth-apiserver-sa-dockercfg-qvbzg" serviceaccount="oauth-apiserver-sa" 2026-01-20T10:49:44.343219255+00:00 stderr F I0120 10:49:44.343053 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to 
be refreshed" ns="openshift-openstack-infra" name="builder-dockercfg-74x9t" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.262796 +0000 UTC" 2026-01-20T10:49:44.343296547+00:00 stderr F I0120 10:49:44.343284 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-openstack-infra" name="builder-dockercfg-74x9t" serviceaccount="builder" 2026-01-20T10:49:44.421324904+00:00 stderr F I0120 10:49:44.421270 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-openstack-infra" name="default-dockercfg-n4wxs" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.231510072 +0000 UTC" 2026-01-20T10:49:44.421324904+00:00 stderr F I0120 10:49:44.421295 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-openstack-infra" name="default-dockercfg-n4wxs" serviceaccount="default" 2026-01-20T10:49:44.437098145+00:00 stderr F I0120 10:49:44.437029 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-openstack-infra" name="deployer-dockercfg-ddvf7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.225201575 +0000 UTC" 2026-01-20T10:49:44.437176007+00:00 stderr F I0120 10:49:44.437161 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-openstack-infra" name="deployer-dockercfg-ddvf7" serviceaccount="deployer" 2026-01-20T10:49:44.456353771+00:00 stderr F I0120 10:49:44.456307 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="builder-dockercfg-kvbw2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" 
refreshTime="2025-06-11 01:11:55.217491376 +0000 UTC" 2026-01-20T10:49:44.456419393+00:00 stderr F I0120 10:49:44.456408 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="builder-dockercfg-kvbw2" serviceaccount="builder" 2026-01-20T10:49:44.469051088+00:00 stderr F I0120 10:49:44.469003 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="collect-profiles-dockercfg-45g9d" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.212412958 +0000 UTC" 2026-01-20T10:49:44.469144200+00:00 stderr F I0120 10:49:44.469130 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="collect-profiles-dockercfg-45g9d" serviceaccount="collect-profiles" 2026-01-20T10:49:44.482164997+00:00 stderr F I0120 10:49:44.482127 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="default-dockercfg-sps9x" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.207161449 +0000 UTC" 2026-01-20T10:49:44.482229279+00:00 stderr F I0120 10:49:44.482217 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="default-dockercfg-sps9x" serviceaccount="default" 2026-01-20T10:49:44.561762202+00:00 stderr F I0120 10:49:44.561690 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="deployer-dockercfg-fm6b6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.17533784 
+0000 UTC" 2026-01-20T10:49:44.561762202+00:00 stderr F I0120 10:49:44.561721 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="deployer-dockercfg-fm6b6" serviceaccount="deployer" 2026-01-20T10:49:44.569504068+00:00 stderr F I0120 10:49:44.569480 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="olm-operator-serviceaccount-dockercfg-ncpbj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.172216109 +0000 UTC" 2026-01-20T10:49:44.569531158+00:00 stderr F I0120 10:49:44.569498 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="olm-operator-serviceaccount-dockercfg-ncpbj" serviceaccount="olm-operator-serviceaccount" 2026-01-20T10:49:44.595895061+00:00 stderr F I0120 10:49:44.595845 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="builder-dockercfg-bmp44" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.161675088 +0000 UTC" 2026-01-20T10:49:44.595895061+00:00 stderr F I0120 10:49:44.595868 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="builder-dockercfg-bmp44" serviceaccount="builder" 2026-01-20T10:49:44.608528786+00:00 stderr F I0120 10:49:44.608473 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="default-dockercfg-6cjkw" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.156622218 +0000 UTC" 2026-01-20T10:49:44.608528786+00:00 
stderr F I0120 10:49:44.608501 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="default-dockercfg-6cjkw" serviceaccount="default" 2026-01-20T10:49:44.614992893+00:00 stderr F I0120 10:49:44.614572 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="deployer-dockercfg-kwdj9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.154179764 +0000 UTC" 2026-01-20T10:49:44.614992893+00:00 stderr F I0120 10:49:44.614593 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="deployer-dockercfg-kwdj9" serviceaccount="deployer" 2026-01-20T10:49:44.656675843+00:00 stderr F I0120 10:49:44.656617 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:44.694730952+00:00 stderr F I0120 10:49:44.694682 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="builder-dockercfg-686g2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.122141328 +0000 UTC" 2026-01-20T10:49:44.694730952+00:00 stderr F I0120 10:49:44.694709 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="builder-dockercfg-686g2" serviceaccount="builder" 2026-01-20T10:49:44.702162608+00:00 stderr F I0120 10:49:44.701983 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="default-dockercfg-j8jz7" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.119234656 +0000 UTC" 
2026-01-20T10:49:44.702162608+00:00 stderr F I0120 10:49:44.702038 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="default-dockercfg-j8jz7" serviceaccount="default" 2026-01-20T10:49:44.729154640+00:00 stderr F I0120 10:49:44.729083 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="deployer-dockercfg-kmtk7" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.108392349 +0000 UTC" 2026-01-20T10:49:44.729154640+00:00 stderr F I0120 10:49:44.729114 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="deployer-dockercfg-kmtk7" serviceaccount="deployer" 2026-01-20T10:49:44.743667182+00:00 stderr F I0120 10:49:44.743397 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="builder-dockercfg-gvwbd" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.102660627 +0000 UTC" 2026-01-20T10:49:44.743667182+00:00 stderr F I0120 10:49:44.743436 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="builder-dockercfg-gvwbd" serviceaccount="builder" 2026-01-20T10:49:44.755244475+00:00 stderr F I0120 10:49:44.755175 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="default-dockercfg-gd6sg" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.097948456 +0000 UTC" 2026-01-20T10:49:44.755244475+00:00 stderr F 
I0120 10:49:44.755212 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="default-dockercfg-gd6sg" serviceaccount="default" 2026-01-20T10:49:44.829019873+00:00 stderr F I0120 10:49:44.828963 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="deployer-dockercfg-nhhff" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.06843068 +0000 UTC" 2026-01-20T10:49:44.829019873+00:00 stderr F I0120 10:49:44.828989 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="deployer-dockercfg-nhhff" serviceaccount="deployer" 2026-01-20T10:49:44.842633687+00:00 stderr F I0120 10:49:44.842236 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-control-plane-dockercfg-76h6h" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.063119043 +0000 UTC" 2026-01-20T10:49:44.842633687+00:00 stderr F I0120 10:49:44.842266 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-control-plane-dockercfg-76h6h" serviceaccount="ovn-kubernetes-control-plane" 2026-01-20T10:49:44.862003966+00:00 stderr F I0120 10:49:44.861937 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-node-dockercfg-jpwlq" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-11 01:11:55.055237958 +0000 UTC" 2026-01-20T10:49:44.862003966+00:00 stderr F I0120 10:49:44.861967 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-node-dockercfg-jpwlq" serviceaccount="ovn-kubernetes-node" 2026-01-20T10:49:44.874744615+00:00 stderr F I0120 10:49:44.874651 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="builder-dockercfg-6nwr2" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.450155507 +0000 UTC" 2026-01-20T10:49:44.874744615+00:00 stderr F I0120 10:49:44.874681 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="builder-dockercfg-6nwr2" serviceaccount="builder" 2026-01-20T10:49:44.889084542+00:00 stderr F I0120 10:49:44.889015 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="default-dockercfg-gd9rc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.44440986 +0000 UTC" 2026-01-20T10:49:44.889084542+00:00 stderr F I0120 10:49:44.889043 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="default-dockercfg-gd9rc" serviceaccount="default" 2026-01-20T10:49:44.963263482+00:00 stderr F I0120 10:49:44.963198 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="deployer-dockercfg-6wg24" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.414737284 +0000 UTC" 2026-01-20T10:49:44.963263482+00:00 stderr F I0120 10:49:44.963233 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="deployer-dockercfg-6wg24" serviceaccount="deployer" 2026-01-20T10:49:44.973949826+00:00 stderr F I0120 10:49:44.973903 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="route-controller-manager-sa-dockercfg-9r4gl" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.410448998 +0000 UTC" 2026-01-20T10:49:44.973949826+00:00 stderr F I0120 10:49:44.973927 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="route-controller-manager-sa-dockercfg-9r4gl" serviceaccount="route-controller-manager-sa" 2026-01-20T10:49:44.994909245+00:00 stderr F I0120 10:49:44.994849 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="builder-dockercfg-bdttl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.402077707 +0000 UTC" 2026-01-20T10:49:44.994909245+00:00 stderr F I0120 10:49:44.994884 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="builder-dockercfg-bdttl" serviceaccount="builder" 2026-01-20T10:49:45.015577804+00:00 stderr F I0120 10:49:45.015500 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="default-dockercfg-ptcxl" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.39381724 +0000 UTC" 2026-01-20T10:49:45.015577804+00:00 stderr F I0120 
10:49:45.015533 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="default-dockercfg-ptcxl" serviceaccount="default" 2026-01-20T10:49:45.057119360+00:00 stderr F I0120 10:49:45.057033 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="deployer-dockercfg-2k6vp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.377201173 +0000 UTC" 2026-01-20T10:49:45.057119360+00:00 stderr F I0120 10:49:45.057078 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="deployer-dockercfg-2k6vp" serviceaccount="deployer" 2026-01-20T10:49:45.114931841+00:00 stderr F I0120 10:49:45.114864 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="service-ca-operator-dockercfg-zrj8d" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.354083545 +0000 UTC" 2026-01-20T10:49:45.114931841+00:00 stderr F I0120 10:49:45.114896 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="service-ca-operator-dockercfg-zrj8d" serviceaccount="service-ca-operator" 2026-01-20T10:49:45.121939164+00:00 stderr F I0120 10:49:45.121887 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="builder-dockercfg-hklq6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.351259383 +0000 UTC" 2026-01-20T10:49:45.121939164+00:00 stderr F I0120 10:49:45.121927 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" 
name="builder-dockercfg-hklq6" serviceaccount="builder" 2026-01-20T10:49:45.129350380+00:00 stderr F I0120 10:49:45.129310 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="default-dockercfg-r6bvc" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.34829028 +0000 UTC" 2026-01-20T10:49:45.129350380+00:00 stderr F I0120 10:49:45.129338 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="default-dockercfg-r6bvc" serviceaccount="default" 2026-01-20T10:49:45.137327853+00:00 stderr F I0120 10:49:45.137283 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:45.148623857+00:00 stderr F I0120 10:49:45.148578 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="deployer-dockercfg-k7zp8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.340582974 +0000 UTC" 2026-01-20T10:49:45.148623857+00:00 stderr F I0120 10:49:45.148608 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="deployer-dockercfg-k7zp8" serviceaccount="deployer" 2026-01-20T10:49:45.168961917+00:00 stderr F I0120 10:49:45.168899 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="service-ca-dockercfg-79vsd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.332456474 +0000 UTC" 2026-01-20T10:49:45.168961917+00:00 stderr F I0120 10:49:45.168932 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" 
ns="openshift-service-ca" name="service-ca-dockercfg-79vsd" serviceaccount="service-ca" 2026-01-20T10:49:45.250579593+00:00 stderr F I0120 10:49:45.249923 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="builder-dockercfg-dktpk" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.300047679 +0000 UTC" 2026-01-20T10:49:45.250579593+00:00 stderr F I0120 10:49:45.249974 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" name="builder-dockercfg-dktpk" serviceaccount="builder" 2026-01-20T10:49:45.254457731+00:00 stderr F I0120 10:49:45.254391 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="default-dockercfg-qbbwv" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.29825664 +0000 UTC" 2026-01-20T10:49:45.254457731+00:00 stderr F I0120 10:49:45.254444 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" name="default-dockercfg-qbbwv" serviceaccount="default" 2026-01-20T10:49:45.268744936+00:00 stderr F I0120 10:49:45.268690 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="deployer-dockercfg-cxqvw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.292553083 +0000 UTC" 2026-01-20T10:49:45.268796248+00:00 stderr F I0120 10:49:45.268764 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" 
name="deployer-dockercfg-cxqvw" serviceaccount="deployer" 2026-01-20T10:49:45.282004260+00:00 stderr F I0120 10:49:45.281948 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="builder-dockercfg-d58tr" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.287242309 +0000 UTC" 2026-01-20T10:49:45.282004260+00:00 stderr F I0120 10:49:45.281995 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="builder-dockercfg-d58tr" serviceaccount="builder" 2026-01-20T10:49:45.309729494+00:00 stderr F I0120 10:49:45.309660 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="default-dockercfg-mvb5j" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.276176388 +0000 UTC" 2026-01-20T10:49:45.309729494+00:00 stderr F I0120 10:49:45.309710 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="default-dockercfg-mvb5j" serviceaccount="default" 2026-01-20T10:49:45.332937712+00:00 stderr F I0120 10:49:45.332871 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:45.334314023+00:00 stderr F I0120 10:49:45.334270 1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller 2026-01-20T10:49:45.380952354+00:00 stderr F I0120 10:49:45.380878 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="deployer-dockercfg-tlw89" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 
01:11:56.24766568 +0000 UTC" 2026-01-20T10:49:45.380952354+00:00 stderr F I0120 10:49:45.380913 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="deployer-dockercfg-tlw89" serviceaccount="deployer" 2026-01-20T10:49:45.394831557+00:00 stderr F I0120 10:49:45.394775 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="builder-dockercfg-7bl85" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.242179228 +0000 UTC" 2026-01-20T10:49:45.394831557+00:00 stderr F I0120 10:49:45.394802 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" name="builder-dockercfg-7bl85" serviceaccount="builder" 2026-01-20T10:49:45.402197931+00:00 stderr F I0120 10:49:45.402146 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="default-dockercfg-mqnf2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.239153268 +0000 UTC" 2026-01-20T10:49:45.402197931+00:00 stderr F I0120 10:49:45.402170 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" name="default-dockercfg-mqnf2" serviceaccount="default" 2026-01-20T10:49:45.422746517+00:00 stderr F I0120 10:49:45.422699 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="deployer-dockercfg-xhxvc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-11 01:11:56.230934246 +0000 UTC" 2026-01-20T10:49:45.422813979+00:00 stderr F I0120 10:49:45.422801 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" 
name="deployer-dockercfg-xhxvc" serviceaccount="deployer" 2026-01-20T10:49:45.424227972+00:00 stderr F I0120 10:49:45.424199 1 templateinstance_controller.go:297] Starting TemplateInstance controller 2026-01-20T10:49:47.203635071+00:00 stderr F I0120 10:49:47.203290 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:47.300502022+00:00 stderr F I0120 10:49:47.300360 1 buildconfig_controller.go:212] Starting buildconfig controller 2026-01-20T10:49:47.313727215+00:00 stderr F I0120 10:49:47.313650 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:47.400093245+00:00 stderr F I0120 10:49:47.399993 1 build_controller.go:503] Starting build controller 2026-01-20T10:49:47.400093245+00:00 stderr F I0120 10:49:47.400017 1 build_controller.go:505] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000 2026-01-20T10:49:47.455012478+00:00 stderr F W0120 10:49:47.454934 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2026-01-20T10:49:47.455320348+00:00 stderr F I0120 10:49:47.455286 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:49:47.459764073+00:00 stderr F W0120 10:49:47.459727 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2026-01-20T10:49:47.470517021+00:00 stderr F I0120 10:49:47.470464 1 factory.go:85] deploymentconfig controller caches are synced. Starting workers. 
2026-01-20T10:56:25.051191020+00:00 stderr F I0120 10:56:25.050552 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="deployer-dockercfg-l7zvr" expected=4 actual=0 2026-01-20T10:56:25.051191020+00:00 stderr F I0120 10:56:25.051155 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="deployer-dockercfg-l7zvr" serviceaccount="deployer" 2026-01-20T10:56:25.051444678+00:00 stderr F I0120 10:56:25.050607 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="default-dockercfg-jjlsb" expected=4 actual=0 2026-01-20T10:56:25.051444678+00:00 stderr F I0120 10:56:25.051434 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="default-dockercfg-jjlsb" serviceaccount="default" 2026-01-20T10:56:25.051576561+00:00 stderr F I0120 10:56:25.050607 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="builder-dockercfg-jvf2f" expected=4 actual=0 2026-01-20T10:56:25.051576561+00:00 stderr F I0120 10:56:25.051566 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="builder-dockercfg-jvf2f" serviceaccount="builder" 2026-01-20T10:56:25.763055203+00:00 stderr F I0120 10:56:25.762665 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" name="deployer-dockercfg-slcjc" expected=4 actual=0 2026-01-20T10:56:25.763055203+00:00 stderr F I0120 10:56:25.762695 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="deployer-dockercfg-slcjc" serviceaccount="deployer" 2026-01-20T10:56:25.767496602+00:00 stderr F I0120 10:56:25.766192 1 
image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" name="default-dockercfg-gmd2s" expected=4 actual=0 2026-01-20T10:56:25.767496602+00:00 stderr F I0120 10:56:25.766220 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="default-dockercfg-gmd2s" serviceaccount="default" 2026-01-20T10:56:25.769948738+00:00 stderr F I0120 10:56:25.769637 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" name="builder-dockercfg-7dgw2" expected=4 actual=0 2026-01-20T10:56:25.769948738+00:00 stderr F I0120 10:56:25.769677 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="builder-dockercfg-7dgw2" serviceaccount="builder" 2026-01-20T10:56:30.701009859+00:00 stderr F I0120 10:56:30.700202 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="cert-manager-operator" name="builder-dockercfg-l898c" expected=4 actual=0 2026-01-20T10:56:30.701009859+00:00 stderr F I0120 10:56:30.700967 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager-operator" name="builder-dockercfg-l898c" serviceaccount="builder" 2026-01-20T10:56:30.704258367+00:00 stderr F I0120 10:56:30.704067 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="cert-manager-operator" name="deployer-dockercfg-zgdp4" expected=4 actual=0 2026-01-20T10:56:30.704258367+00:00 stderr F I0120 10:56:30.704131 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager-operator" name="deployer-dockercfg-zgdp4" serviceaccount="deployer" 2026-01-20T10:56:30.704906055+00:00 stderr F I0120 10:56:30.704329 1 
image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="cert-manager-operator" name="default-dockercfg-vp8l6" expected=4 actual=0 2026-01-20T10:56:30.704906055+00:00 stderr F I0120 10:56:30.704364 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager-operator" name="default-dockercfg-vp8l6" serviceaccount="default" 2026-01-20T10:56:33.673268219+00:00 stderr F I0120 10:56:33.671240 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="cert-manager" name="default-dockercfg-x94s4" expected=4 actual=0 2026-01-20T10:56:33.673268219+00:00 stderr F I0120 10:56:33.671277 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager" name="default-dockercfg-x94s4" serviceaccount="default" 2026-01-20T10:56:33.677652588+00:00 stderr F I0120 10:56:33.676878 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="cert-manager" name="deployer-dockercfg-qfcbg" expected=4 actual=0 2026-01-20T10:56:33.677652588+00:00 stderr F I0120 10:56:33.676906 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager" name="deployer-dockercfg-qfcbg" serviceaccount="deployer" 2026-01-20T10:56:33.677652588+00:00 stderr F I0120 10:56:33.677057 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="cert-manager" name="builder-dockercfg-mqj77" expected=4 actual=0 2026-01-20T10:56:33.677652588+00:00 stderr F I0120 10:56:33.677099 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager" name="builder-dockercfg-mqj77" serviceaccount="builder" 2026-01-20T10:56:34.039126093+00:00 stderr F I0120 10:56:34.039020 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data 
does not contain the correct number of entries" ns="cert-manager" name="cert-manager-cainjector-dockercfg-td5b4" expected=4 actual=0 2026-01-20T10:56:34.039126093+00:00 stderr F I0120 10:56:34.039053 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager" name="cert-manager-cainjector-dockercfg-td5b4" serviceaccount="cert-manager-cainjector" 2026-01-20T10:56:34.045448813+00:00 stderr F I0120 10:56:34.045404 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="cert-manager" name="cert-manager-dockercfg-bfbbt" expected=4 actual=0 2026-01-20T10:56:34.045448813+00:00 stderr F I0120 10:56:34.045429 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager" name="cert-manager-dockercfg-bfbbt" serviceaccount="cert-manager" 2026-01-20T10:56:34.059554472+00:00 stderr F I0120 10:56:34.059320 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="cert-manager" name="cert-manager-webhook-dockercfg-qgv7c" expected=4 actual=0 2026-01-20T10:56:34.059554472+00:00 stderr F I0120 10:56:34.059351 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="cert-manager" name="cert-manager-webhook-dockercfg-qgv7c" serviceaccount="cert-manager-webhook" 2026-01-20T10:57:25.699877943+00:00 stderr F E0120 10:57:25.699238 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager/leases/openshift-master-controllers": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:58:11.337966996+00:00 stderr F W0120 10:58:11.337447 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2026-01-20T10:58:17.713437755+00:00 stderr F I0120 10:58:17.712859 1 
reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:58:20.558633794+00:00 stderr F I0120 10:58:20.558047 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2026-01-20T10:58:20.558633794+00:00 stderr F I0120 10:58:20.558610 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"] 2026-01-20T10:58:20.571820269+00:00 stderr F I0120 10:58:20.571744 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector/1.log
2026-01-20T10:57:53.503848768+00:00 stderr F I0120 10:57:53.503599 1 cainjector.go:46] "starting cert-manager ca-injector" logger="cert-manager.cainjector" version="v1.19.2" git_commit="6e38ee57a338a1f27bb724ddb5933f4b8e23e567" go_version="go1.25.5" platform="linux/amd64" 2026-01-20T10:57:53.506668383+00:00 stderr F I0120 10:57:53.506586 1 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true 2026-01-20T10:57:53.506668383+00:00 stderr F I0120 10:57:53.506615 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false 2026-01-20T10:57:53.506668383+00:00 stderr F I0120 10:57:53.506622 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false 2026-01-20T10:57:53.506668383+00:00 stderr F I0120 10:57:53.506629 1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false 2026-01-20T10:57:53.506668383+00:00 stderr F I0120 10:57:53.506635 1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false 2026-01-20T10:57:53.537234231+00:00 stderr F I0120 10:57:53.537132 1 setup.go:119] "Registering a reconciler for injectable" logger="cert-manager" kind="mutatingwebhookconfiguration" 2026-01-20T10:57:53.537494168+00:00 stderr F I0120 10:57:53.537441 1 setup.go:119] "Registering a reconciler for injectable" logger="cert-manager" kind="validatingwebhookconfiguration" 2026-01-20T10:57:53.537665223+00:00 stderr F I0120 10:57:53.537622 1 setup.go:119] "Registering a reconciler for injectable" logger="cert-manager" kind="apiservice" 2026-01-20T10:57:53.537846417+00:00 stderr F
I0120 10:57:53.537812 1 setup.go:119] "Registering a reconciler for injectable" logger="cert-manager" kind="customresourcedefinition" 2026-01-20T10:57:53.538082904+00:00 stderr F I0120 10:57:53.538030 1 server.go:208] "Starting metrics server" logger="cert-manager.controller-runtime.metrics" 2026-01-20T10:57:53.540088027+00:00 stderr F I0120 10:57:53.538764 1 server.go:247] "Serving metrics server" logger="cert-manager.controller-runtime.metrics" bindAddress="0.0.0.0:9402" secure=false 2026-01-20T10:57:53.540564389+00:00 stderr F I0120 10:57:53.540529 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.MutatingWebhookConfiguration" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:57:53.540739414+00:00 stderr F I0120 10:57:53.540701 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.ValidatingWebhookConfiguration" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:57:53.551652603+00:00 stderr F I0120 10:57:53.551378 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.APIService" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:57:53.861165998+00:00 stderr F I0120 10:57:53.860144 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.CustomResourceDefinition" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:57:53.939596393+00:00 stderr F I0120 10:57:53.939533 1 leaderelection.go:257] attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election... 
2026-01-20T10:57:53.946108914+00:00 stderr F I0120 10:57:53.946045 1 leaderelection.go:271] successfully acquired lease kube-system/cert-manager-cainjector-leader-election 2026-01-20T10:57:53.946313590+00:00 stderr F I0120 10:57:53.946140 1 recorder.go:104] "cert-manager-cainjector-676dd9bd64-mggnx_e03c4078-1411-4c1e-8021-c406a4ce2125 became leader" logger="cert-manager.events" type="Normal" object={"kind":"Lease","namespace":"kube-system","name":"cert-manager-cainjector-leader-election","uid":"a0925355-122d-4ec4-b049-890264931895","apiVersion":"coordination.k8s.io/v1","resourceVersion":"43383"} reason="LeaderElection" 2026-01-20T10:57:53.946572097+00:00 stderr F I0120 10:57:53.946517 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:57:53.946670079+00:00 stderr F I0120 10:57:53.946634 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" source="kind source: *v1.Certificate" 2026-01-20T10:57:53.946670079+00:00 stderr F I0120 10:57:53.946516 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:57:53.946681419+00:00 stderr F I0120 10:57:53.946511 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" source="kind source: *v1.MutatingWebhookConfiguration" 2026-01-20T10:57:53.948562859+00:00 stderr F I0120 10:57:53.947192 1 controller.go:353] "Starting 
EventSource" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:57:53.948562859+00:00 stderr F I0120 10:57:53.947314 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:57:53.948562859+00:00 stderr F I0120 10:57:53.947340 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" source="kind source: *v1.APIService" 2026-01-20T10:57:53.948562859+00:00 stderr F I0120 10:57:53.947237 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" source="kind source: *v1.ValidatingWebhookConfiguration" 2026-01-20T10:57:53.948562859+00:00 stderr F I0120 10:57:53.947753 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" source="kind source: *v1.Certificate" 2026-01-20T10:57:53.948562859+00:00 stderr F I0120 10:57:53.948077 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" source="kind source: *v1.Certificate" 2026-01-20T10:57:53.948562859+00:00 stderr F I0120 10:57:53.948320 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:57:53.949112774+00:00 stderr F I0120 10:57:53.947048 
1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:57:53.949865433+00:00 stderr F I0120 10:57:53.949813 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" source="kind source: *v1.Certificate" 2026-01-20T10:57:53.949911535+00:00 stderr F I0120 10:57:53.949888 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:57:53.950108400+00:00 stderr F I0120 10:57:53.950035 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:57:53.950125250+00:00 stderr F I0120 10:57:53.950073 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" source="kind source: *v1.CustomResourceDefinition" 2026-01-20T10:57:53.951289092+00:00 stderr F I0120 10:57:53.951244 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.Certificate" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:57:53.962138339+00:00 stderr F I0120 10:57:53.962056 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.PartialObjectMetadata" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:57:54.052343454+00:00 stderr F I0120 
10:57:54.052279 1 controller.go:286] "Starting Controller" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" 2026-01-20T10:57:54.052343454+00:00 stderr F I0120 10:57:54.052310 1 controller.go:289] "Starting workers" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" worker count=1 2026-01-20T10:57:54.054153582+00:00 stderr F I0120 10:57:54.054097 1 controller.go:286] "Starting Controller" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" 2026-01-20T10:57:54.054153582+00:00 stderr F I0120 10:57:54.054136 1 controller.go:289] "Starting workers" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" worker count=1 2026-01-20T10:57:54.054218354+00:00 stderr F I0120 10:57:54.054183 1 controller.go:286] "Starting Controller" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" 2026-01-20T10:57:54.054218354+00:00 stderr F I0120 10:57:54.054205 1 controller.go:289] "Starting workers" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" worker count=1 2026-01-20T10:57:54.054366418+00:00 stderr F I0120 10:57:54.054275 1 controller.go:286] "Starting Controller" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" 2026-01-20T10:57:54.054366418+00:00 stderr F I0120 10:57:54.054308 1 controller.go:289] "Starting workers" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" 
controllerKind="APIService" worker count=1 2026-01-20T10:57:54.059720749+00:00 stderr F I0120 10:57:54.059678 1 reconciler.go:141] "Updated object" logger="cert-manager" kind="mutatingwebhookconfiguration" kind="mutatingwebhookconfiguration" name="cert-manager-webhook" 2026-01-20T10:57:54.062861932+00:00 stderr F I0120 10:57:54.062818 1 reconciler.go:141] "Updated object" logger="cert-manager" kind="validatingwebhookconfiguration" kind="validatingwebhookconfiguration" name="cert-manager-webhook" 2026-01-20T10:57:54.067991577+00:00 stderr F I0120 10:57:54.067958 1 reconciler.go:141] "Updated object" logger="cert-manager" kind="validatingwebhookconfiguration" kind="validatingwebhookconfiguration" name="cert-manager-webhook" ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-mana0000644000175000017500000004012715133657715032772 0ustar zuulzuul2026-01-20T10:56:42.198298379+00:00 stderr F I0120 10:56:42.174791 1 cainjector.go:46] "starting cert-manager ca-injector" logger="cert-manager.cainjector" version="v1.19.2" git_commit="6e38ee57a338a1f27bb724ddb5933f4b8e23e567" go_version="go1.25.5" platform="linux/amd64" 2026-01-20T10:56:42.198298379+00:00 stderr F I0120 10:56:42.177324 1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false 2026-01-20T10:56:42.198298379+00:00 stderr F I0120 10:56:42.177340 1 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true 2026-01-20T10:56:42.198298379+00:00 stderr F I0120 10:56:42.177345 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false 2026-01-20T10:56:42.198298379+00:00 stderr F I0120 10:56:42.177350 1 envvar.go:172] 
"Feature gate default state" feature="WatchListClient" enabled=false 2026-01-20T10:56:42.198298379+00:00 stderr F I0120 10:56:42.177354 1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false 2026-01-20T10:56:42.217895916+00:00 stderr F I0120 10:56:42.217812 1 setup.go:119] "Registering a reconciler for injectable" logger="cert-manager" kind="mutatingwebhookconfiguration" 2026-01-20T10:56:42.218139023+00:00 stderr F I0120 10:56:42.218105 1 setup.go:119] "Registering a reconciler for injectable" logger="cert-manager" kind="validatingwebhookconfiguration" 2026-01-20T10:56:42.218225986+00:00 stderr F I0120 10:56:42.218192 1 setup.go:119] "Registering a reconciler for injectable" logger="cert-manager" kind="apiservice" 2026-01-20T10:56:42.218306218+00:00 stderr F I0120 10:56:42.218275 1 setup.go:119] "Registering a reconciler for injectable" logger="cert-manager" kind="customresourcedefinition" 2026-01-20T10:56:42.218474793+00:00 stderr F I0120 10:56:42.218441 1 server.go:208] "Starting metrics server" logger="cert-manager.controller-runtime.metrics" 2026-01-20T10:56:42.219223412+00:00 stderr F I0120 10:56:42.218945 1 server.go:247] "Serving metrics server" logger="cert-manager.controller-runtime.metrics" bindAddress="0.0.0.0:9402" secure=false 2026-01-20T10:56:42.222570932+00:00 stderr F I0120 10:56:42.221794 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.ValidatingWebhookConfiguration" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:42.222570932+00:00 stderr F I0120 10:56:42.222111 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.MutatingWebhookConfiguration" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:42.226755405+00:00 stderr F I0120 10:56:42.226705 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.APIService" 
reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:42.507633822+00:00 stderr F I0120 10:56:42.507571 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.CustomResourceDefinition" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:42.519664256+00:00 stderr F I0120 10:56:42.519584 1 leaderelection.go:257] attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election... 2026-01-20T10:56:42.529556692+00:00 stderr F I0120 10:56:42.529504 1 leaderelection.go:271] successfully acquired lease kube-system/cert-manager-cainjector-leader-election 2026-01-20T10:56:42.529838630+00:00 stderr F I0120 10:56:42.529736 1 recorder.go:104] "cert-manager-cainjector-676dd9bd64-mggnx_032b8b86-0d45-410a-9a37-97453c8c06f1 became leader" logger="cert-manager.events" type="Normal" object={"kind":"Lease","namespace":"kube-system","name":"cert-manager-cainjector-leader-election","uid":"a0925355-122d-4ec4-b049-890264931895","apiVersion":"coordination.k8s.io/v1","resourceVersion":"43058"} reason="LeaderElection" 2026-01-20T10:56:42.529956153+00:00 stderr F I0120 10:56:42.529933 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" source="kind source: *v1.Certificate" 2026-01-20T10:56:42.530011364+00:00 stderr F I0120 10:56:42.529940 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" source="kind source: *v1.MutatingWebhookConfiguration" 2026-01-20T10:56:42.530133508+00:00 stderr F I0120 10:56:42.529974 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" 
controllerKind="MutatingWebhookConfiguration" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:56:42.530274221+00:00 stderr F I0120 10:56:42.530245 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" source="kind source: *v1.Certificate" 2026-01-20T10:56:42.530286212+00:00 stderr F I0120 10:56:42.530273 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" source="kind source: *v1.Certificate" 2026-01-20T10:56:42.530326133+00:00 stderr F I0120 10:56:42.530280 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" source="kind source: *v1.APIService" 2026-01-20T10:56:42.530336223+00:00 stderr F I0120 10:56:42.530322 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:56:42.530373614+00:00 stderr F I0120 10:56:42.530322 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:56:42.530373614+00:00 stderr F I0120 10:56:42.530357 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" source="kind source: *v1.CustomResourceDefinition" 2026-01-20T10:56:42.530373614+00:00 stderr F I0120 10:56:42.530363 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="customresourcedefinition" 
controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:56:42.530386394+00:00 stderr F I0120 10:56:42.530374 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:56:42.530396865+00:00 stderr F I0120 10:56:42.530388 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:56:42.530407275+00:00 stderr F I0120 10:56:42.530351 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:56:42.530416025+00:00 stderr F I0120 10:56:42.530299 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" source="kind source: *v1.PartialObjectMetadata" 2026-01-20T10:56:42.530416025+00:00 stderr F I0120 10:56:42.530408 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" source="kind source: *v1.ValidatingWebhookConfiguration" 2026-01-20T10:56:42.530437096+00:00 stderr F I0120 10:56:42.530410 1 controller.go:353] "Starting EventSource" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" source="kind source: *v1.Certificate" 
2026-01-20T10:56:42.535700247+00:00 stderr F I0120 10:56:42.535587 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.Certificate" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:42.543727813+00:00 stderr F I0120 10:56:42.543667 1 reflector.go:436] "Caches populated" logger="cert-manager.controller-runtime.cache" type="*v1.PartialObjectMetadata" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" 2026-01-20T10:56:42.638020960+00:00 stderr F I0120 10:56:42.637934 1 controller.go:286] "Starting Controller" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" 2026-01-20T10:56:42.638330228+00:00 stderr F I0120 10:56:42.637976 1 controller.go:289] "Starting workers" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="ValidatingWebhookConfiguration" worker count=1 2026-01-20T10:56:42.638490693+00:00 stderr F I0120 10:56:42.638409 1 controller.go:286] "Starting Controller" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" 2026-01-20T10:56:42.638604056+00:00 stderr F I0120 10:56:42.638527 1 controller.go:289] "Starting workers" logger="cert-manager" controller="apiservice" controllerGroup="apiregistration.k8s.io" controllerKind="APIService" worker count=1 2026-01-20T10:56:42.639723616+00:00 stderr F I0120 10:56:42.639695 1 controller.go:286] "Starting Controller" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" 2026-01-20T10:56:42.639826489+00:00 stderr F I0120 10:56:42.639805 1 controller.go:289] "Starting workers" logger="cert-manager" controller="customresourcedefinition" controllerGroup="apiextensions.k8s.io" 
controllerKind="CustomResourceDefinition" worker count=1 2026-01-20T10:56:42.639931022+00:00 stderr F I0120 10:56:42.639911 1 controller.go:286] "Starting Controller" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" 2026-01-20T10:56:42.639984443+00:00 stderr F I0120 10:56:42.639963 1 controller.go:289] "Starting workers" logger="cert-manager" controller="mutatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" controllerKind="MutatingWebhookConfiguration" worker count=1 2026-01-20T10:56:42.664683578+00:00 stderr F I0120 10:56:42.657474 1 reconciler.go:141] "Updated object" logger="cert-manager" kind="validatingwebhookconfiguration" kind="validatingwebhookconfiguration" name="cert-manager-webhook" 2026-01-20T10:56:42.664683578+00:00 stderr F I0120 10:56:42.657513 1 reconciler.go:141] "Updated object" logger="cert-manager" kind="mutatingwebhookconfiguration" kind="mutatingwebhookconfiguration" name="cert-manager-webhook" 2026-01-20T10:56:42.664880773+00:00 stderr F E0120 10:56:42.664784 1 reconciler.go:137] "unable to update target object with new CA data" err="Operation cannot be fulfilled on validatingwebhookconfigurations.admissionregistration.k8s.io \"cert-manager-webhook\": the object has been modified; please apply your changes to the latest version and try again" logger="cert-manager" kind="validatingwebhookconfiguration" kind="validatingwebhookconfiguration" name="cert-manager-webhook" 2026-01-20T10:56:42.664911434+00:00 stderr F E0120 10:56:42.664858 1 controller.go:474] "Reconciler error" err="Operation cannot be fulfilled on validatingwebhookconfigurations.admissionregistration.k8s.io \"cert-manager-webhook\": the object has been modified; please apply your changes to the latest version and try again" logger="cert-manager" controller="validatingwebhookconfiguration" controllerGroup="admissionregistration.k8s.io" 
controllerKind="ValidatingWebhookConfiguration" ValidatingWebhookConfiguration="cert-manager-webhook" namespace="" name="cert-manager-webhook" reconcileID="5bf607c9-f2ac-475b-a2ba-3c8fc7afdf34" 2026-01-20T10:56:42.666873957+00:00 stderr F I0120 10:56:42.666823 1 reconciler.go:141] "Updated object" logger="cert-manager" kind="mutatingwebhookconfiguration" kind="mutatingwebhookconfiguration" name="cert-manager-webhook" 2026-01-20T10:56:42.671376167+00:00 stderr F I0120 10:56:42.671326 1 reconciler.go:141] "Updated object" logger="cert-manager" kind="validatingwebhookconfiguration" kind="validatingwebhookconfiguration" name="cert-manager-webhook" 2026-01-20T10:56:42.677943755+00:00 stderr F I0120 10:56:42.677895 1 reconciler.go:141] "Updated object" logger="cert-manager" kind="validatingwebhookconfiguration" kind="validatingwebhookconfiguration" name="cert-manager-webhook" 2026-01-20T10:57:12.548606683+00:00 stderr F E0120 10:57:12.547738 1 leaderelection.go:441] Failed to update lock optimistically: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-cainjector-leader-election?timeout=20s": dial tcp 10.217.4.1:443: connect: connection refused, falling back to slow path 2026-01-20T10:57:12.550249567+00:00 stderr F E0120 10:57:12.550202 1 leaderelection.go:448] error retrieving resource lock kube-system/cert-manager-cainjector-leader-election: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-cainjector-leader-election?timeout=20s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:27.548138061+00:00 stderr F E0120 10:57:27.547411 1 leaderelection.go:441] Failed to update lock optimistically: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-cainjector-leader-election?timeout=20s": dial tcp 10.217.4.1:443: connect: connection refused, falling back to slow path 2026-01-20T10:57:27.549007924+00:00 stderr F 
E0120 10:57:27.548958 1 leaderelection.go:448] error retrieving resource lock kube-system/cert-manager-cainjector-leader-election: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-cainjector-leader-election?timeout=20s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.549241141+00:00 stderr F E0120 10:57:42.548570 1 leaderelection.go:441] Failed to update lock optimistically: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-cainjector-leader-election?timeout=20s": dial tcp 10.217.4.1:443: connect: connection refused, falling back to slow path 2026-01-20T10:57:42.549887208+00:00 stderr F E0120 10:57:42.549774 1 leaderelection.go:448] error retrieving resource lock kube-system/cert-manager-cainjector-leader-election: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cert-manager-cainjector-leader-election?timeout=20s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:52.547334913+00:00 stderr F I0120 10:57:52.546680 1 leaderelection.go:297] failed to renew lease kube-system/cert-manager-cainjector-leader-election: context deadline exceeded 2026-01-20T10:57:52.563956933+00:00 stderr F E0120 10:57:52.563868 1 main.go:43] "error executing command" err="error running manager: leader election lost" logger="cert-manager" ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015133657715033062 5ustar zuulzuul././@LongLink0000644000000000000000000000024700000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015133657735033064 5ustar zuulzuul././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000016500515133657715033073 0ustar zuulzuul2025-08-13T20:00:30.228096014+00:00 stderr F I0813 20:00:30.192242 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc000a0b9a0 cert-dir:0xc000a0bb80 cert-secrets:0xc000a0b900 configmaps:0xc000a0b4a0 namespace:0xc000a0b2c0 optional-cert-configmaps:0xc000a0bae0 optional-cert-secrets:0xc000a0ba40 optional-configmaps:0xc000a0b5e0 optional-secrets:0xc000a0b540 pod:0xc000a0b360 pod-manifest-dir:0xc000a0b720 resource-dir:0xc000a0b680 revision:0xc000a0b220 secrets:0xc000a0b400 v:0xc000a20fa0] [0xc000a20fa0 0xc000a0b220 0xc000a0b2c0 0xc000a0b360 0xc000a0b680 0xc000a0b720 0xc000a0b4a0 0xc000a0b5e0 0xc000a0b400 0xc000a0b540 0xc000a0bb80 0xc000a0b9a0 0xc000a0bae0 0xc000a0b900 0xc000a0ba40] [] map[cert-configmaps:0xc000a0b9a0 cert-dir:0xc000a0bb80 cert-secrets:0xc000a0b900 configmaps:0xc000a0b4a0 help:0xc000a21360 kubeconfig:0xc000a0b180 log-flush-frequency:0xc000a20f00 namespace:0xc000a0b2c0 optional-cert-configmaps:0xc000a0bae0 optional-cert-secrets:0xc000a0ba40 optional-configmaps:0xc000a0b5e0 optional-secrets:0xc000a0b540 pod:0xc000a0b360 pod-manifest-dir:0xc000a0b720 pod-manifests-lock-file:0xc000a0b860 resource-dir:0xc000a0b680 revision:0xc000a0b220 secrets:0xc000a0b400 timeout-duration:0xc000a0b7c0 v:0xc000a20fa0 
vmodule:0xc000a21040] [0xc000a0b180 0xc000a0b220 0xc000a0b2c0 0xc000a0b360 0xc000a0b400 0xc000a0b4a0 0xc000a0b540 0xc000a0b5e0 0xc000a0b680 0xc000a0b720 0xc000a0b7c0 0xc000a0b860 0xc000a0b900 0xc000a0b9a0 0xc000a0ba40 0xc000a0bae0 0xc000a0bb80 0xc000a20f00 0xc000a20fa0 0xc000a21040 0xc000a21360] [0xc000a0b9a0 0xc000a0bb80 0xc000a0b900 0xc000a0b4a0 0xc000a21360 0xc000a0b180 0xc000a20f00 0xc000a0b2c0 0xc000a0bae0 0xc000a0ba40 0xc000a0b5e0 0xc000a0b540 0xc000a0b360 0xc000a0b720 0xc000a0b860 0xc000a0b680 0xc000a0b220 0xc000a0b400 0xc000a0b7c0 0xc000a20fa0 0xc000a21040] map[104:0xc000a21360 118:0xc000a20fa0] [] -1 0 0xc000a023c0 true 0xa51380 []}
2025-08-13T20:00:30.241164866+00:00 stderr F I0813 20:00:30.240993 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000a14340)({
2025-08-13T20:00:30.241164866+00:00 stderr F KubeConfig: (string) "",
2025-08-13T20:00:30.241164866+00:00 stderr F KubeClient: (kubernetes.Interface) ,
2025-08-13T20:00:30.241164866+00:00 stderr F Revision: (string) (len=1) "9",
2025-08-13T20:00:30.241164866+00:00 stderr F NodeName: (string) "",
2025-08-13T20:00:30.241164866+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver",
2025-08-13T20:00:30.241164866+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod",
2025-08-13T20:00:30.241164866+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) {
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=11) "etcd-client",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=34) "localhost-recovery-serving-certkey",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "localhost-recovery-client-token"
2025-08-13T20:00:30.241164866+00:00 stderr F },
2025-08-13T20:00:30.241164866+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) {
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "encryption-config",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "webhook-authenticator"
2025-08-13T20:00:30.241164866+00:00 stderr F },
2025-08-13T20:00:30.241164866+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=18) "kube-apiserver-pod",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=6) "config",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=15) "etcd-serving-ca",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=18) "kubelet-serving-ca",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=22) "sa-token-signing-certs",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies"
2025-08-13T20:00:30.241164866+00:00 stderr F },
2025-08-13T20:00:30.241164866+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) {
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=14) "oauth-metadata",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=12) "cloud-config",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca"
2025-08-13T20:00:30.241164866+00:00 stderr F },
2025-08-13T20:00:30.241164866+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) {
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "aggregator-client",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "service-network-serving-certkey",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=33) "bound-service-account-signing-key",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=14) "kubelet-client",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=16) "node-kubeconfigs"
2025-08-13T20:00:30.241164866+00:00 stderr F },
2025-08-13T20:00:30.241164866+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "user-serving-cert",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-000",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-001",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-002",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-003",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-004",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-005",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-006",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-007",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-008",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-009"
2025-08-13T20:00:30.241164866+00:00 stderr F },
2025-08-13T20:00:30.241164866+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=20) "aggregator-client-ca",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=9) "client-ca",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig",
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig"
2025-08-13T20:00:30.241164866+00:00 stderr F },
2025-08-13T20:00:30.241164866+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "trusted-ca-bundle"
2025-08-13T20:00:30.241164866+00:00 stderr F },
2025-08-13T20:00:30.241164866+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs",
2025-08-13T20:00:30.241164866+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
2025-08-13T20:00:30.241164866+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests",
2025-08-13T20:00:30.241164866+00:00 stderr F Timeout: (time.Duration) 2m0s,
2025-08-13T20:00:30.241164866+00:00 stderr F StaticPodManifestsLockFile: (string) "",
2025-08-13T20:00:30.241164866+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) ,
2025-08-13T20:00:30.241164866+00:00 stderr F KubeletVersion: (string) ""
2025-08-13T20:00:30.241164866+00:00 stderr F })
2025-08-13T20:00:30.264011288+00:00 stderr F I0813 20:00:30.263917 1 cmd.go:410] Getting controller reference for node crc
2025-08-13T20:00:30.302991629+00:00 stderr F I0813 20:00:30.302663 1 cmd.go:423] Waiting for installer revisions to settle for node crc
2025-08-13T20:00:30.336711361+00:00 stderr F I0813 20:00:30.311214 1 cmd.go:515] Waiting additional period after revisions have settled for node crc
2025-08-13T20:01:00.316616777+00:00 stderr F I0813 20:01:00.315386 1 cmd.go:521] Getting installer pods for node crc
2025-08-13T20:01:05.767036875+00:00 stderr F I0813 20:01:05.763385 1 cmd.go:539] Latest installer revision for node crc is: 9
2025-08-13T20:01:05.767036875+00:00 stderr F I0813 20:01:05.763700 1 cmd.go:428] Querying kubelet version for node crc
2025-08-13T20:01:06.853768102+00:00 stderr F I0813 20:01:06.853476 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc
2025-08-13T20:01:06.869727917+00:00 stderr F I0813 20:01:06.869633 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9" ...
2025-08-13T20:01:06.878019154+00:00 stderr F I0813 20:01:06.871148 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9" ...
2025-08-13T20:01:06.878019154+00:00 stderr F I0813 20:01:06.871194 1 cmd.go:226] Getting secrets ...
2025-08-13T20:01:07.408893731+00:00 stderr F I0813 20:01:07.408596 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-9
2025-08-13T20:01:08.339296910+00:00 stderr F I0813 20:01:08.339174 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-9
2025-08-13T20:01:08.436071690+00:00 stderr F I0813 20:01:08.429282 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-9
2025-08-13T20:01:08.510517961+00:00 stderr F I0813 20:01:08.510440 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-9: secrets "encryption-config-9" not found
2025-08-13T20:01:08.565021486+00:00 stderr F I0813 20:01:08.564954 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-9
2025-08-13T20:01:08.565151519+00:00 stderr F I0813 20:01:08.565130 1 cmd.go:239] Getting config maps ...
2025-08-13T20:01:08.623391170+00:00 stderr F I0813 20:01:08.623334 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-9
2025-08-13T20:01:08.683964707+00:00 stderr F I0813 20:01:08.683901 1 copy.go:60] Got configMap openshift-kube-apiserver/config-9
2025-08-13T20:01:08.808296833+00:00 stderr F I0813 20:01:08.808012 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-9
2025-08-13T20:01:09.196587924+00:00 stderr F I0813 20:01:09.196529 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-9
2025-08-13T20:01:09.493872661+00:00 stderr F I0813 20:01:09.486134 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-9
2025-08-13T20:01:09.605195335+00:00 stderr F I0813 20:01:09.594112 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-9
2025-08-13T20:01:09.927254178+00:00 stderr F I0813 20:01:09.919756 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-9
2025-08-13T20:01:10.003346238+00:00 stderr F I0813 20:01:10.003247 1 copy.go:60] Got configMap openshift-kube-apiserver/sa-token-signing-certs-9
2025-08-13T20:01:10.221272442+00:00 stderr F I0813 20:01:10.218353 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-9: configmaps "cloud-config-9" not found
2025-08-13T20:01:10.326975286+00:00 stderr F I0813 20:01:10.319482 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-9
2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.407728 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-9
2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.407859 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client" ...
2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.408445 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client/tls.crt" ...
2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409156 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client/tls.key" ...
2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409353 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token" ...
2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409407 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/ca.crt" ...
2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409621 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/namespace" ...
2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409896 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/service-ca.crt" ...
2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.410201 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/token" ...
2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410373 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey" ...
2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410522 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey/tls.crt" ...
2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410745 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey/tls.key" ...
2025-08-13T20:01:14.420639313+00:00 stderr F I0813 20:01:14.420544 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/webhook-authenticator" ...
2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.420869 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/webhook-authenticator/kubeConfig" ...
2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.421250 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/bound-sa-token-signing-certs" ...
2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.421475 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ...
2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.421654 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/config" ...
2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.421763 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/config/config.yaml" ...
2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.422214 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/etcd-serving-ca" ...
2025-08-13T20:01:14.426586313+00:00 stderr F I0813 20:01:14.422292 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/etcd-serving-ca/ca-bundle.crt" ...
2025-08-13T20:01:14.426586313+00:00 stderr F I0813 20:01:14.422534 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-audit-policies" ...
2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.446867 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-audit-policies/policy.yaml" ...
2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.447272 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-cert-syncer-kubeconfig" ...
2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.447334 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:01:14.447697575+00:00 stderr F I0813 20:01:14.447640 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod" ...
2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.447761 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ...
2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448156 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/pod.yaml" ...
2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448585 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/version" ...
2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448743 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/forceRedeploymentReason" ...
2025-08-13T20:01:14.481921090+00:00 stderr F I0813 20:01:14.481337 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kubelet-serving-ca" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.515599 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kubelet-serving-ca/ca-bundle.crt" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.516600 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.516819 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-001.pub" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.518506 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-002.pub" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.519250 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-003.pub" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.519910 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-server-ca" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.520220 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.520725 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/oauth-metadata" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.521051 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/oauth-metadata/oauthMetadata" ...
2025-08-13T20:01:14.536874107+00:00 stderr F I0813 20:01:14.535308 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ...
2025-08-13T20:01:14.536874107+00:00 stderr F I0813 20:01:14.535433 1 cmd.go:226] Getting secrets ...
2025-08-13T20:01:18.851315639+00:00 stderr F I0813 20:01:18.840609 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client
2025-08-13T20:01:20.264538334+00:00 stderr F I0813 20:01:20.257589 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key
2025-08-13T20:01:20.938573184+00:00 stderr F I0813 20:01:20.927406 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key
2025-08-13T20:01:21.314140603+00:00 stderr F I0813 20:01:21.313762 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key
2025-08-13T20:01:21.682394503+00:00 stderr F I0813 20:01:21.667018 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey
2025-08-13T20:01:21.964411525+00:00 stderr F I0813 20:01:21.964309 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey
2025-08-13T20:01:22.200873837+00:00 stderr F I0813 20:01:22.198399 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client
2025-08-13T20:01:22.848864233+00:00 stderr F I0813 20:01:22.842456 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey
2025-08-13T20:01:23.458539357+00:00 stderr F I0813 20:01:23.458246 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs
2025-08-13T20:01:23.877878804+00:00 stderr F I0813 20:01:23.866135 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey
2025-08-13T20:01:23.987633334+00:00 stderr F I0813 20:01:23.987242 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found
2025-08-13T20:01:27.081947854+00:00 stderr F I0813 20:01:27.081695 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found
2025-08-13T20:01:31.496984765+00:00 stderr F I0813 20:01:31.490588 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found
2025-08-13T20:01:31.589915885+00:00 stderr F I0813 20:01:31.563334 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found
2025-08-13T20:01:36.824979329+00:00 stderr F I0813 20:01:36.823316 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found
2025-08-13T20:01:50.825746244+00:00 stderr F I0813 20:01:50.825271 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-004?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2025-08-13T20:01:56.153885648+00:00 stderr F I0813 20:01:56.150536 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found
2025-08-13T20:01:59.443927570+00:00 stderr F I0813 20:01:59.433966 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found
2025-08-13T20:02:01.285415308+00:00 stderr F I0813 20:02:01.285274 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found
2025-08-13T20:02:03.459094407+00:00 stderr F I0813 20:02:03.457472 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found
2025-08-13T20:02:05.722268108+00:00 stderr F I0813 20:02:05.721390 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found
2025-08-13T20:02:09.365492892+00:00 stderr F I0813 20:02:09.362561 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found
2025-08-13T20:02:09.365492892+00:00 stderr F I0813 20:02:09.362663 1 cmd.go:239] Getting config maps ...
2025-08-13T20:02:13.044816027+00:00 stderr F I0813 20:02:13.044401 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca
2025-08-13T20:02:15.097919247+00:00 stderr F I0813 20:02:15.089376 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig
2025-08-13T20:02:16.180974263+00:00 stderr F I0813 20:02:16.179374 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca
2025-08-13T20:02:17.837210851+00:00 stderr F I0813 20:02:17.830667 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig
2025-08-13T20:02:20.534021793+00:00 stderr F I0813 20:02:20.531035 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle
2025-08-13T20:02:20.535981769+00:00 stderr F I0813 20:02:20.535950 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ...
2025-08-13T20:02:20.536941226+00:00 stderr F I0813 20:02:20.536680 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ...
2025-08-13T20:02:20.537939034+00:00 stderr F I0813 20:02:20.537889 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ...
2025-08-13T20:02:20.538769028+00:00 stderr F I0813 20:02:20.538690 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ...
2025-08-13T20:02:20.538769028+00:00 stderr F I0813 20:02:20.538732 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ...
2025-08-13T20:02:20.540256270+00:00 stderr F I0813 20:02:20.540170 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ...
2025-08-13T20:02:20.540469467+00:00 stderr F I0813 20:02:20.540399 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ...
2025-08-13T20:02:20.540469467+00:00 stderr F I0813 20:02:20.540461 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ...
2025-08-13T20:02:20.540669492+00:00 stderr F I0813 20:02:20.540589 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ...
2025-08-13T20:02:20.540817227+00:00 stderr F I0813 20:02:20.540735 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ...
2025-08-13T20:02:20.540834697+00:00 stderr F I0813 20:02:20.540814 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ...
2025-08-13T20:02:20.541009062+00:00 stderr F I0813 20:02:20.540964 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ...
2025-08-13T20:02:20.541183147+00:00 stderr F I0813 20:02:20.541137 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ...
2025-08-13T20:02:20.541195297+00:00 stderr F I0813 20:02:20.541179 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ...
2025-08-13T20:02:20.541486776+00:00 stderr F I0813 20:02:20.541436 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ...
2025-08-13T20:02:20.541641640+00:00 stderr F I0813 20:02:20.541607 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ...
2025-08-13T20:02:20.541641640+00:00 stderr F I0813 20:02:20.541625 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ...
2025-08-13T20:02:20.630024441+00:00 stderr F I0813 20:02:20.629923 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ...
2025-08-13T20:02:20.630249658+00:00 stderr F I0813 20:02:20.630212 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ...
2025-08-13T20:02:20.630264268+00:00 stderr F I0813 20:02:20.630246 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ...
2025-08-13T20:02:20.630597198+00:00 stderr F I0813 20:02:20.630538 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ...
2025-08-13T20:02:20.630886386+00:00 stderr F I0813 20:02:20.630822 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ...
2025-08-13T20:02:20.630886386+00:00 stderr F I0813 20:02:20.630870 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ...
2025-08-13T20:02:20.631072681+00:00 stderr F I0813 20:02:20.631039 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ...
2025-08-13T20:02:20.632426470+00:00 stderr F I0813 20:02:20.632312 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ...
2025-08-13T20:02:20.632426470+00:00 stderr F I0813 20:02:20.632404 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ...
2025-08-13T20:02:20.632697428+00:00 stderr F I0813 20:02:20.632617 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ...
2025-08-13T20:02:20.633043997+00:00 stderr F I0813 20:02:20.632913 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ...
2025-08-13T20:02:20.633158671+00:00 stderr F I0813 20:02:20.633103 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ...
2025-08-13T20:02:20.633451849+00:00 stderr F I0813 20:02:20.633357 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ...
2025-08-13T20:02:20.633451849+00:00 stderr F I0813 20:02:20.633396 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ...
2025-08-13T20:02:20.635386384+00:00 stderr F I0813 20:02:20.635321 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ...
2025-08-13T20:02:20.635597430+00:00 stderr F I0813 20:02:20.635523 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ...
2025-08-13T20:02:20.636249549+00:00 stderr F I0813 20:02:20.636167 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ...
2025-08-13T20:02:20.636457635+00:00 stderr F I0813 20:02:20.636391 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ...
2025-08-13T20:02:20.636457635+00:00 stderr F I0813 20:02:20.636410 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ...
2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.636551 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ...
2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.636573 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ...
2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.637108 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ...
2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.637231 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ...
2025-08-13T20:02:20.637574647+00:00 stderr F I0813 20:02:20.637490 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ...
2025-08-13T20:02:20.637733881+00:00 stderr F I0813 20:02:20.637640 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ...
2025-08-13T20:02:20.638261256+00:00 stderr F I0813 20:02:20.638139 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-9 -n openshift-kube-apiserver
2025-08-13T20:02:21.120887644+00:00 stderr F I0813 20:02:21.117049 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ...
2025-08-13T20:02:21.120887644+00:00 stderr F I0813 20:02:21.117113 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key
2025-08-13T20:02:21.120887644+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"9"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo 
\"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed 
(requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. 
This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"9"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileg
ed":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url= 
2025-08-13T20:02:21.120947706+00:00 stderr F https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.122047 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/kube-apiserver-pod.yaml" ... 2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.124085 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 
2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.124100 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2025-08-13T20:02:21.130930681+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"9"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. 
we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. 
If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"9"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"n
ame":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-a 2025-08-13T20:02:21.131093105+00:00 stderr F 
piserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.124736 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key 2025-08-13T20:02:21.131093105+00:00 stderr F 
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"9"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=9","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"
tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.125040 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.125152 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:02:21.131093105+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"9"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=9","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name
":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015133657715033062 5ustar zuulzuul././@LongLink0000644000000000000000000000027400000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015133657715033062 5ustar zuulzuul././@LongLink0000644000000000000000000000024600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_rout0000755000175000017500000000000015133657716033211 5ustar zuulzuul././@LongLink0000644000000000000000000000025500000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_rout0000755000175000017500000000000015133657737033214 5ustar zuulzuul././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_rout0000644000175000017500000014437215133657716033226 0ustar zuulzuul2026-01-20T10:47:24.855952312+00:00 stderr F I0120 10:47:24.855682 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2026-01-20T10:47:24.860820114+00:00 stderr F I0120 10:47:24.860799 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2026-01-20T10:47:24.871728289+00:00 stderr F I0120 10:47:24.871669 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2026-01-20T10:47:24.871779700+00:00 stderr F I0120 10:47:24.871749 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2026-01-20T10:47:24.872317295+00:00 stderr F I0120 10:47:24.872277 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2026-01-20T10:47:24.872367176+00:00 stderr F I0120 10:47:24.872346 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2026-01-20T10:47:25.261646322+00:00 
stderr F I0120 10:47:25.260941 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:25.261646322+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:25.261646322+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:26.258217519+00:00 stderr F I0120 10:47:26.258111 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:26.258217519+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:26.258217519+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:27.257643463+00:00 stderr F I0120 10:47:27.257532 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:27.257643463+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:27.257643463+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:28.258556098+00:00 stderr F I0120 10:47:28.258427 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:28.258556098+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:28.258556098+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:29.258141497+00:00 stderr F I0120 10:47:29.258022 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:29.258141497+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:29.258141497+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:30.258658211+00:00 stderr F I0120 10:47:30.258573 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:30.258658211+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:30.258658211+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:31.258610659+00:00 stderr F I0120 10:47:31.258462 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:31.258610659+00:00 stderr F 
[-]backend-http failed: backend reported failure 2026-01-20T10:47:31.258610659+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:32.257461308+00:00 stderr F I0120 10:47:32.257369 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:32.257461308+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:32.257461308+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:33.257837118+00:00 stderr F I0120 10:47:33.257706 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:33.257837118+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:33.257837118+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:34.258026913+00:00 stderr F I0120 10:47:34.257939 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:34.258026913+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:34.258026913+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:35.257557340+00:00 stderr F I0120 10:47:35.257426 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:35.257557340+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:35.257557340+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:36.257624721+00:00 stderr F I0120 10:47:36.257553 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:36.257624721+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:36.257624721+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:37.257764385+00:00 stderr F I0120 10:47:37.257688 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:37.257764385+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:37.257764385+00:00 stderr F [-]has-synced failed: Router not synced 
2026-01-20T10:47:38.258056703+00:00 stderr F I0120 10:47:38.257937 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:38.258056703+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:38.258056703+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:39.257538989+00:00 stderr F I0120 10:47:39.257458 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:39.257538989+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:39.257538989+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:40.258163926+00:00 stderr F I0120 10:47:40.258092 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:40.258163926+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:40.258163926+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:41.257276882+00:00 stderr F I0120 10:47:41.257184 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:41.257276882+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:41.257276882+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:42.258773402+00:00 stderr F I0120 10:47:42.258671 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:42.258773402+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:42.258773402+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:43.258448683+00:00 stderr F I0120 10:47:43.258364 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:43.258448683+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:43.258448683+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:44.258770141+00:00 stderr F I0120 10:47:44.258353 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2026-01-20T10:47:44.258770141+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:44.258770141+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:45.258609517+00:00 stderr F I0120 10:47:45.257746 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:45.258609517+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:45.258609517+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:46.257177168+00:00 stderr F I0120 10:47:46.257050 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:46.257177168+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:46.257177168+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:47.257892757+00:00 stderr F I0120 10:47:47.257836 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:47.257892757+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:47.257892757+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:48.258098213+00:00 stderr F I0120 10:47:48.257996 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:48.258098213+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:48.258098213+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:49.256667954+00:00 stderr F I0120 10:47:49.256625 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:49.256667954+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:49.256667954+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:50.257642551+00:00 stderr F I0120 10:47:50.257589 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:50.257642551+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:50.257642551+00:00 stderr F 
[-]has-synced failed: Router not synced 2026-01-20T10:47:51.257653880+00:00 stderr F I0120 10:47:51.257593 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:51.257653880+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:51.257653880+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:52.258452101+00:00 stderr F I0120 10:47:52.258383 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:52.258452101+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:52.258452101+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:53.257770423+00:00 stderr F I0120 10:47:53.257686 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:53.257770423+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:53.257770423+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:54.258346368+00:00 stderr F I0120 10:47:54.258277 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:54.258346368+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:54.258346368+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:54.871225330+00:00 stderr F W0120 10:47:54.870899 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.871298492+00:00 stderr F I0120 10:47:54.871220 1 trace.go:236] Trace[23715136]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (20-Jan-2026 10:47:24.868) (total time: 30002ms): 2026-01-20T10:47:54.871298492+00:00 stderr F Trace[23715136]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.217.4.1:443: i/o timeout 30002ms (10:47:54.870) 2026-01-20T10:47:54.871298492+00:00 stderr F Trace[23715136]: [30.002735718s] [30.002735718s] END 2026-01-20T10:47:54.871298492+00:00 stderr F E0120 10:47:54.871248 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.876697007+00:00 stderr F W0120 10:47:54.876596 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.876841612+00:00 stderr F I0120 10:47:54.876812 1 trace.go:236] Trace[1562564995]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (20-Jan-2026 10:47:24.873) (total time: 30003ms): 2026-01-20T10:47:54.876841612+00:00 stderr F Trace[1562564995]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (10:47:54.876) 2026-01-20T10:47:54.876841612+00:00 stderr F Trace[1562564995]: [30.003166941s] [30.003166941s] END 2026-01-20T10:47:54.876934534+00:00 stderr F E0120 10:47:54.876911 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.877781317+00:00 stderr F W0120 10:47:54.877701 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get 
"https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:54.877899290+00:00 stderr F I0120 10:47:54.877790 1 trace.go:236] Trace[2089740097]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (20-Jan-2026 10:47:24.873) (total time: 30004ms): 2026-01-20T10:47:54.877899290+00:00 stderr F Trace[2089740097]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30004ms (10:47:54.877) 2026-01-20T10:47:54.877899290+00:00 stderr F Trace[2089740097]: [30.004126546s] [30.004126546s] END 2026-01-20T10:47:54.877899290+00:00 stderr F E0120 10:47:54.877809 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:47:55.257288848+00:00 stderr F I0120 10:47:55.257230 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:55.257288848+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:55.257288848+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:55.727158533+00:00 stderr F W0120 10:47:55.727030 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2026-01-20T10:47:55.727328418+00:00 stderr F E0120 10:47:55.727293 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get 
"https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2026-01-20T10:47:56.258500091+00:00 stderr F I0120 10:47:56.258430 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:56.258500091+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:56.258500091+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:57.257275536+00:00 stderr F I0120 10:47:57.257221 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:57.257275536+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:57.257275536+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:58.257655435+00:00 stderr F I0120 10:47:58.257600 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:58.257655435+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:58.257655435+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:47:59.258517589+00:00 stderr F I0120 10:47:59.258428 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:47:59.258517589+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:47:59.258517589+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:00.258163869+00:00 stderr F I0120 10:48:00.258106 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:00.258163869+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:00.258163869+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:01.258599420+00:00 stderr F I0120 10:48:01.258532 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:01.258599420+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:01.258599420+00:00 stderr F [-]has-synced failed: Router 
not synced 2026-01-20T10:48:02.257838449+00:00 stderr F I0120 10:48:02.257786 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:02.257838449+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:02.257838449+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:03.258111376+00:00 stderr F I0120 10:48:03.258009 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:03.258111376+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:03.258111376+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:04.258557849+00:00 stderr F I0120 10:48:04.258480 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:04.258557849+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:04.258557849+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:05.258474245+00:00 stderr F I0120 10:48:05.257801 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:05.258474245+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:05.258474245+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:06.257652433+00:00 stderr F I0120 10:48:06.257605 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:06.257652433+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:06.257652433+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:07.258140305+00:00 stderr F I0120 10:48:07.258100 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:07.258140305+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:07.258140305+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:08.257584730+00:00 stderr F I0120 10:48:08.257522 1 healthz.go:261] backend-http,has-synced check failed: 
healthz 2026-01-20T10:48:08.257584730+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:08.257584730+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:09.259244905+00:00 stderr F I0120 10:48:09.258387 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:09.259244905+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:09.259244905+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:10.259536142+00:00 stderr F I0120 10:48:10.259427 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:10.259536142+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:10.259536142+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:11.257894728+00:00 stderr F I0120 10:48:11.257815 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:11.257894728+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:11.257894728+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:12.258500084+00:00 stderr F I0120 10:48:12.258434 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:12.258500084+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:12.258500084+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:13.257780344+00:00 stderr F I0120 10:48:13.257688 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:13.257780344+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:13.257780344+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:14.257903388+00:00 stderr F I0120 10:48:14.257815 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:14.257903388+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:14.257903388+00:00 
stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:15.258424719+00:00 stderr F I0120 10:48:15.258347 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:15.258424719+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:15.258424719+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:16.257594799+00:00 stderr F I0120 10:48:16.257522 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:16.257594799+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:16.257594799+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:17.258156198+00:00 stderr F I0120 10:48:17.258053 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:17.258156198+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:17.258156198+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:18.257944007+00:00 stderr F I0120 10:48:18.257811 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:18.257944007+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:18.257944007+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:19.259084782+00:00 stderr F I0120 10:48:19.257779 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:19.259084782+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:19.259084782+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:20.258100359+00:00 stderr F I0120 10:48:20.257955 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:20.258100359+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:20.258100359+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:21.258312479+00:00 stderr F I0120 10:48:21.258194 1 healthz.go:261] 
backend-http,has-synced check failed: healthz 2026-01-20T10:48:21.258312479+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:21.258312479+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:22.258317903+00:00 stderr F I0120 10:48:22.258213 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:22.258317903+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:22.258317903+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:23.257968218+00:00 stderr F I0120 10:48:23.257883 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:23.257968218+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:23.257968218+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:24.258474806+00:00 stderr F I0120 10:48:24.258386 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:24.258474806+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:24.258474806+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:25.259276172+00:00 stderr F I0120 10:48:25.259108 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:25.259276172+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:25.259276172+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:26.258528715+00:00 stderr F I0120 10:48:26.258432 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:26.258528715+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:26.258528715+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:26.288262220+00:00 stderr F W0120 10:48:26.288154 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get 
"https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.288317961+00:00 stderr F I0120 10:48:26.288297 1 trace.go:236] Trace[955803362]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (20-Jan-2026 10:47:56.286) (total time: 30001ms): 2026-01-20T10:48:26.288317961+00:00 stderr F Trace[955803362]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (10:48:26.288) 2026-01-20T10:48:26.288317961+00:00 stderr F Trace[955803362]: [30.001421533s] [30.001421533s] END 2026-01-20T10:48:26.288411284+00:00 stderr F E0120 10:48:26.288336 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.318619373+00:00 stderr F W0120 10:48:26.318518 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:26.318752126+00:00 stderr F I0120 10:48:26.318670 1 trace.go:236] Trace[707498021]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (20-Jan-2026 10:47:56.317) (total time: 30001ms): 2026-01-20T10:48:26.318752126+00:00 stderr F Trace[707498021]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (10:48:26.318) 2026-01-20T10:48:26.318752126+00:00 stderr F Trace[707498021]: [30.001499967s] [30.001499967s] END 2026-01-20T10:48:26.318752126+00:00 stderr F E0120 10:48:26.318738 1 
reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:27.257974994+00:00 stderr F I0120 10:48:27.257909 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:27.257974994+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:27.257974994+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:28.257778190+00:00 stderr F I0120 10:48:28.257034 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:28.257778190+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:28.257778190+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:28.782139443+00:00 stderr F W0120 10:48:28.781984 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:28.782251976+00:00 stderr F I0120 10:48:28.782137 1 trace.go:236] Trace[1615971149]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (20-Jan-2026 10:47:58.780) (total time: 30001ms): 2026-01-20T10:48:28.782251976+00:00 stderr F Trace[1615971149]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (10:48:28.781) 2026-01-20T10:48:28.782251976+00:00 stderr F Trace[1615971149]: [30.001042394s] [30.001042394s] END 2026-01-20T10:48:28.782251976+00:00 stderr F E0120 10:48:28.782188 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to 
watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2026-01-20T10:48:29.258283079+00:00 stderr F I0120 10:48:29.258171 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:29.258283079+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:29.258283079+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:30.258718885+00:00 stderr F I0120 10:48:30.258646 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:30.258718885+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:30.258718885+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:31.257957175+00:00 stderr F I0120 10:48:31.257840 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:31.257957175+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:31.257957175+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:32.257596389+00:00 stderr F I0120 10:48:32.257520 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:32.257596389+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:32.257596389+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:33.259008663+00:00 stderr F I0120 10:48:33.258913 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:33.259008663+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:33.259008663+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:34.259884642+00:00 stderr F I0120 10:48:34.259790 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:34.259884642+00:00 stderr F [-]backend-http failed: backend reported failure 
2026-01-20T10:48:34.259884642+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:34.980314921+00:00 stderr F W0120 10:48:34.980183 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:48:34.980314921+00:00 stderr F E0120 10:48:34.980254 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:48:35.002945057+00:00 stderr F I0120 10:48:35.002820 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2026-01-20T10:48:35.004899212+00:00 stderr F I0120 10:48:35.004843 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2026-01-20T10:48:35.258078434+00:00 stderr F I0120 10:48:35.257917 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:35.258078434+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:35.258078434+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:36.259218160+00:00 stderr F I0120 10:48:36.259053 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:36.259218160+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:36.259218160+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:37.258975368+00:00 stderr F I0120 10:48:37.258822 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:37.258975368+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:37.258975368+00:00 stderr F [-]has-synced failed: Router not synced 
2026-01-20T10:48:38.477980505+00:00 stderr F I0120 10:48:38.477905 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:38.477980505+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:38.477980505+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:38.841378914+00:00 stderr F W0120 10:48:38.841317 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:48:38.841378914+00:00 stderr F E0120 10:48:38.841352 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:48:39.258301157+00:00 stderr F I0120 10:48:39.258129 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:39.258301157+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:39.258301157+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:40.258257090+00:00 stderr F I0120 10:48:40.258176 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:40.258257090+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:40.258257090+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:41.258332695+00:00 stderr F I0120 10:48:41.258230 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:41.258332695+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:41.258332695+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:42.258289877+00:00 stderr F I0120 10:48:42.258210 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:42.258289877+00:00 stderr F 
[-]backend-http failed: backend reported failure 2026-01-20T10:48:42.258289877+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:43.257951971+00:00 stderr F I0120 10:48:43.257876 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:43.257951971+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:43.257951971+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:44.258507109+00:00 stderr F I0120 10:48:44.258429 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:44.258507109+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:44.258507109+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:45.257586758+00:00 stderr F I0120 10:48:45.257476 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:45.257586758+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:45.257586758+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:46.258053085+00:00 stderr F I0120 10:48:46.257998 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:46.258053085+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:46.258053085+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:46.464702561+00:00 stderr F W0120 10:48:46.464642 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:48:46.464870415+00:00 stderr F E0120 10:48:46.464842 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:48:47.258392829+00:00 stderr F I0120 
10:48:47.258220 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:47.258392829+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:47.258392829+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:48.259958456+00:00 stderr F I0120 10:48:48.259257 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:48.259958456+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:48.259958456+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:49.258290683+00:00 stderr F I0120 10:48:49.258208 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:49.258290683+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:49.258290683+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:50.259193331+00:00 stderr F I0120 10:48:50.259126 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:50.259193331+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:50.259193331+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:51.257326173+00:00 stderr F I0120 10:48:51.257283 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:51.257326173+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:51.257326173+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:52.258123630+00:00 stderr F I0120 10:48:52.257975 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:52.258123630+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:52.258123630+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:53.258703990+00:00 stderr F I0120 10:48:53.258629 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:53.258703990+00:00 stderr F 
[-]backend-http failed: backend reported failure 2026-01-20T10:48:53.258703990+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:54.257913553+00:00 stderr F I0120 10:48:54.257842 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:54.257913553+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:54.257913553+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:55.258039570+00:00 stderr F I0120 10:48:55.257921 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:55.258039570+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:55.258039570+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:56.258169938+00:00 stderr F I0120 10:48:56.258132 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:56.258169938+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:56.258169938+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:57.257320877+00:00 stderr F I0120 10:48:57.257279 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:57.257320877+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:57.257320877+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:58.258110055+00:00 stderr F I0120 10:48:58.258041 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:58.258110055+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:58.258110055+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:48:59.257534533+00:00 stderr F I0120 10:48:59.257466 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:48:59.257534533+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:48:59.257534533+00:00 stderr F [-]has-synced failed: Router not synced 
2026-01-20T10:49:00.257899967+00:00 stderr F I0120 10:49:00.257840 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:00.257899967+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:00.257899967+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:01.258522669+00:00 stderr F I0120 10:49:01.258460 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:01.258522669+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:01.258522669+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:02.258134221+00:00 stderr F I0120 10:49:02.257982 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:02.258134221+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:02.258134221+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:03.260446809+00:00 stderr F I0120 10:49:03.260322 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:03.260446809+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:03.260446809+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:04.264930399+00:00 stderr F I0120 10:49:04.264857 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:04.264930399+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:04.264930399+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:05.022884462+00:00 stderr F W0120 10:49:05.022803 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:49:05.023029266+00:00 stderr F E0120 10:49:05.023014 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch 
*v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2026-01-20T10:49:05.259035236+00:00 stderr F I0120 10:49:05.258967 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:05.259035236+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:05.259035236+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:06.258612177+00:00 stderr F I0120 10:49:06.258536 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:06.258612177+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:06.258612177+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:07.257325154+00:00 stderr F I0120 10:49:07.257264 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:07.257325154+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:07.257325154+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:08.257464252+00:00 stderr F I0120 10:49:08.257422 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:08.257464252+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:08.257464252+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:09.257583229+00:00 stderr F I0120 10:49:09.257502 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:09.257583229+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:09.257583229+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:10.259166686+00:00 stderr F I0120 10:49:10.259051 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:10.259166686+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:10.259166686+00:00 stderr F [-]has-synced failed: Router not synced 
2026-01-20T10:49:11.259321483+00:00 stderr F I0120 10:49:11.259233 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:11.259321483+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:11.259321483+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:12.258168085+00:00 stderr F I0120 10:49:12.258124 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:12.258168085+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:12.258168085+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:13.258691403+00:00 stderr F I0120 10:49:13.258609 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:13.258691403+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:13.258691403+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:14.258786650+00:00 stderr F I0120 10:49:14.258617 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:14.258786650+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:14.258786650+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:15.259326739+00:00 stderr F I0120 10:49:15.259267 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:15.259326739+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:15.259326739+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:16.258334595+00:00 stderr F I0120 10:49:16.258239 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:16.258334595+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:16.258334595+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:17.257914988+00:00 stderr F I0120 10:49:17.257799 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2026-01-20T10:49:17.257914988+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:17.257914988+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:18.258827287+00:00 stderr F I0120 10:49:18.258784 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:18.258827287+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:18.258827287+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:19.258959785+00:00 stderr F I0120 10:49:19.258895 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:19.258959785+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:19.258959785+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:20.258692156+00:00 stderr F I0120 10:49:20.258634 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:20.258692156+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:20.258692156+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:21.258994555+00:00 stderr F I0120 10:49:21.258911 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:21.258994555+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:21.258994555+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:22.257626893+00:00 stderr F I0120 10:49:22.257576 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:22.257626893+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:22.257626893+00:00 stderr F [-]has-synced failed: Router not synced 2026-01-20T10:49:23.258719965+00:00 stderr F I0120 10:49:23.258659 1 healthz.go:261] backend-http,has-synced check failed: healthz 2026-01-20T10:49:23.258719965+00:00 stderr F [-]backend-http failed: backend reported failure 2026-01-20T10:49:23.258719965+00:00 stderr F 
[-]has-synced failed: Router not synced
2026-01-20T10:49:24.257567208+00:00 stderr F I0120 10:49:24.257526 1 healthz.go:261] backend-http,has-synced check failed: healthz
2026-01-20T10:49:24.257567208+00:00 stderr F [-]backend-http failed: backend reported failure
2026-01-20T10:49:24.257567208+00:00 stderr F [-]has-synced failed: Router not synced
2026-01-20T10:49:24.270574014+00:00 stderr F E0120 10:49:24.270537 1 factory.go:130] failed to sync cache for *v1.Route shared informer
2026-01-20T10:49:24.271847704+00:00 stderr F I0120 10:49:24.271829 1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router"
2026-01-20T10:49:24.276229707+00:00 stderr F E0120 10:49:24.276187 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory
2026-01-20T10:49:24.383677639+00:00 stderr F I0120 10:49:24.383619 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2026-01-20T10:50:09.273664281+00:00 stderr F I0120 10:50:09.273592 1 template.go:846] "msg"="Instructing the template router to terminate" "logger"="router"
2026-01-20T10:50:09.294542547+00:00 stderr F I0120 10:50:09.294466 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Shutting down\n"
2026-01-20T10:50:09.294542547+00:00 stderr F I0120 10:50:09.294530 1 template.go:850] "msg"="Shutdown complete, exiting" "logger"="router"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log
2026-01-20T10:50:43.824136905+00:00 stderr F I0120
10:50:43.820235 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2026-01-20T10:50:43.826724550+00:00 stderr F I0120 10:50:43.826689 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2026-01-20T10:50:43.834009509+00:00 stderr F I0120 10:50:43.833799 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2026-01-20T10:50:43.835121102+00:00 stderr F I0120 10:50:43.834144 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2026-01-20T10:50:43.835121102+00:00 stderr F I0120 10:50:43.834846 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2026-01-20T10:50:43.835121102+00:00 stderr F I0120 10:50:43.834899 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2026-01-20T10:50:43.852720569+00:00 stderr F I0120 10:50:43.852647 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2026-01-20T10:50:43.858233917+00:00 stderr F I0120 10:50:43.854160 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2026-01-20T10:50:43.858233917+00:00 stderr F I0120 10:50:43.856042 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2026-01-20T10:50:43.940113866+00:00 stderr F E0120 10:50:43.939803 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory 2026-01-20T10:50:43.989110018+00:00 stderr F I0120 10:50:43.986797 1 
router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2026-01-20T10:51:56.173958428+00:00 stderr F I0120 10:51:56.173893 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2026-01-20T10:55:36.553858494+00:00 stderr F I0120 10:55:36.553674 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2026-01-20T10:55:41.538263953+00:00 stderr F I0120 10:55:41.538214 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2026-01-20T10:57:57.697636115+00:00 stderr F I0120 10:57:57.697502 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2026-01-20T10:57:57.701865567+00:00 stderr F I0120 10:57:57.701787 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log
2025-08-13T19:56:16.897628970+00:00 stderr F I0813 19:56:16.897259 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n"
2025-08-13T19:56:16.903853848+00:00 stderr F I0813
19:56:16.902128 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2025-08-13T19:56:16.906242926+00:00 stderr F I0813 19:56:16.906162 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2025-08-13T19:56:16.906321448+00:00 stderr F I0813 19:56:16.906281 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2025-08-13T19:56:16.906988387+00:00 stderr F I0813 19:56:16.906917 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2025-08-13T19:56:16.907098210+00:00 stderr F I0813 19:56:16.907056 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2025-08-13T19:56:17.432753830+00:00 stderr F I0813 19:56:17.432644 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:17.432753830+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:17.432753830+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:18.431893231+00:00 stderr F I0813 19:56:18.431619 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:18.431893231+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:18.431893231+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:19.432022349+00:00 stderr F I0813 19:56:19.431929 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:19.432022349+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:19.432022349+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:20.432639101+00:00 stderr F I0813 19:56:20.432527 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:20.432639101+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-08-13T19:56:20.432639101+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:21.432452011+00:00 stderr F I0813 19:56:21.432309 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:21.432452011+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:21.432452011+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:22.432538597+00:00 stderr F I0813 19:56:22.432355 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:22.432538597+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:22.432538597+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:23.432382067+00:00 stderr F I0813 19:56:23.432308 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:23.432382067+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:23.432382067+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:24.433473033+00:00 stderr F I0813 19:56:24.433330 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:24.433473033+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:24.433473033+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:25.433471328+00:00 stderr F I0813 19:56:25.431263 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:25.433471328+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:25.433471328+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:26.432059721+00:00 stderr F I0813 19:56:26.431979 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:26.432059721+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:26.432059721+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:27.434503986+00:00 stderr F I0813 
19:56:27.434363 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:27.434503986+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:27.434503986+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:28.432741230+00:00 stderr F I0813 19:56:28.432682 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:28.432741230+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:28.432741230+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:29.439648112+00:00 stderr F I0813 19:56:29.439264 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:29.439648112+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:29.439648112+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:30.435213880+00:00 stderr F I0813 19:56:30.433077 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:30.435213880+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:30.435213880+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:31.433340482+00:00 stderr F I0813 19:56:31.433017 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:31.433340482+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:31.433340482+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:32.433555683+00:00 stderr F I0813 19:56:32.433494 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:32.433555683+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:32.433555683+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:33.434362750+00:00 stderr F I0813 19:56:33.434286 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:33.434362750+00:00 stderr F 
[-]backend-http failed: backend reported failure 2025-08-13T19:56:33.434362750+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:34.432458420+00:00 stderr F I0813 19:56:34.432307 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:34.432458420+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:34.432458420+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:35.432746463+00:00 stderr F I0813 19:56:35.432248 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:35.432746463+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:35.432746463+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:36.433244751+00:00 stderr F I0813 19:56:36.433192 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:36.433244751+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:36.433244751+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:37.431705222+00:00 stderr F I0813 19:56:37.431613 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:37.431705222+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:37.431705222+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:38.432325045+00:00 stderr F I0813 19:56:38.432166 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:38.432325045+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:38.432325045+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:39.431535137+00:00 stderr F I0813 19:56:39.431441 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:39.431535137+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:39.431535137+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T19:56:40.431416609+00:00 stderr F I0813 19:56:40.431310 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:40.431416609+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:40.431416609+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:41.431708842+00:00 stderr F I0813 19:56:41.431601 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:41.431708842+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:41.431708842+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:42.433355784+00:00 stderr F I0813 19:56:42.433239 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:42.433355784+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:42.433355784+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:43.433293677+00:00 stderr F I0813 19:56:43.433204 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:43.433293677+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:43.433293677+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:44.437035918+00:00 stderr F I0813 19:56:44.436975 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:44.437035918+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:44.437035918+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:45.434593002+00:00 stderr F I0813 19:56:45.434106 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:45.434593002+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:45.434593002+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:46.433540697+00:00 stderr F I0813 19:56:46.433398 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2025-08-13T19:56:46.433540697+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:46.433540697+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:46.905667848+00:00 stderr F W0813 19:56:46.905508 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.905970527+00:00 stderr F I0813 19:56:46.905939 1 trace.go:236] Trace[1642337220]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:56:16.903) (total time: 30002ms): 2025-08-13T19:56:46.905970527+00:00 stderr F Trace[1642337220]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.905) 2025-08-13T19:56:46.905970527+00:00 stderr F Trace[1642337220]: [30.002025157s] [30.002025157s] END 2025-08-13T19:56:46.906068190+00:00 stderr F E0813 19:56:46.906047 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.908969293+00:00 stderr F W0813 19:56:46.908900 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909091246+00:00 stderr F I0813 19:56:46.909070 1 trace.go:236] Trace[436705070]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:16.907) (total time: 30001ms): 2025-08-13T19:56:46.909091246+00:00 stderr F 
Trace[436705070]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.908) 2025-08-13T19:56:46.909091246+00:00 stderr F Trace[436705070]: [30.001298066s] [30.001298066s] END 2025-08-13T19:56:46.909150168+00:00 stderr F E0813 19:56:46.909134 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909259201+00:00 stderr F W0813 19:56:46.909072 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909357404+00:00 stderr F I0813 19:56:46.909284 1 trace.go:236] Trace[2037442418]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:16.907) (total time: 30001ms): 2025-08-13T19:56:46.909357404+00:00 stderr F Trace[2037442418]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.908) 2025-08-13T19:56:46.909357404+00:00 stderr F Trace[2037442418]: [30.001572733s] [30.001572733s] END 2025-08-13T19:56:46.909357404+00:00 stderr F E0813 19:56:46.909336 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:47.432589043+00:00 stderr F I0813 
19:56:47.432529 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:47.432589043+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:47.432589043+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:48.432951947+00:00 stderr F I0813 19:56:48.432525 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:48.432951947+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:48.432951947+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:49.432066177+00:00 stderr F I0813 19:56:49.431941 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:49.432066177+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:49.432066177+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:50.432125873+00:00 stderr F I0813 19:56:50.432032 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:50.432125873+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:50.432125873+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:51.432001853+00:00 stderr F I0813 19:56:51.431906 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:51.432001853+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:51.432001853+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:52.432737878+00:00 stderr F I0813 19:56:52.432631 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:52.432737878+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:52.432737878+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:53.432868787+00:00 stderr F I0813 19:56:53.432673 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:53.432868787+00:00 stderr F 
[-]backend-http failed: backend reported failure 2025-08-13T19:56:53.432868787+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:54.433275973+00:00 stderr F I0813 19:56:54.432875 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:54.433275973+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:54.433275973+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:55.432271579+00:00 stderr F I0813 19:56:55.432202 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:55.432271579+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:55.432271579+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:56.432325345+00:00 stderr F I0813 19:56:56.432186 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:56.432325345+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:56.432325345+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:57.432645998+00:00 stderr F I0813 19:56:57.432569 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:57.432645998+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:57.432645998+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:58.432199589+00:00 stderr F I0813 19:56:58.432124 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:58.432199589+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:58.432199589+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:59.433020537+00:00 stderr F I0813 19:56:59.432864 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:59.433020537+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:59.433020537+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T19:57:00.433573017+00:00 stderr F I0813 19:57:00.433509 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:00.433573017+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:00.433573017+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:01.432562092+00:00 stderr F I0813 19:57:01.432418 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:01.432562092+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:01.432562092+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:02.433124814+00:00 stderr F I0813 19:57:02.433003 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:02.433124814+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:02.433124814+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:03.432258824+00:00 stderr F I0813 19:57:03.432162 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:03.432258824+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:03.432258824+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:04.432722113+00:00 stderr F I0813 19:57:04.432145 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:04.432722113+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:04.432722113+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:05.431260714+00:00 stderr F I0813 19:57:05.431205 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:05.431260714+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:05.431260714+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:06.432030181+00:00 stderr F I0813 19:57:06.431977 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2025-08-13T19:57:06.432030181+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:06.432030181+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:07.431464501+00:00 stderr F I0813 19:57:07.431406 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:07.431464501+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:07.431464501+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:08.432085384+00:00 stderr F I0813 19:57:08.431989 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:08.432085384+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:08.432085384+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:09.432053706+00:00 stderr F I0813 19:57:09.431951 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:09.432053706+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:09.432053706+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:10.432654007+00:00 stderr F I0813 19:57:10.432610 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:10.432654007+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:10.432654007+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:11.431640704+00:00 stderr F I0813 19:57:11.431588 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:11.431640704+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:11.431640704+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:12.434203290+00:00 stderr F I0813 19:57:12.434111 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:12.434203290+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:12.434203290+00:00 stderr F 
[-]has-synced failed: Router not synced 2025-08-13T19:57:13.433521056+00:00 stderr F I0813 19:57:13.433423 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:13.433521056+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:13.433521056+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:14.433498481+00:00 stderr F I0813 19:57:14.433149 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:14.433498481+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:14.433498481+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:15.432030014+00:00 stderr F I0813 19:57:15.431954 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:15.432030014+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:15.432030014+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:16.434512699+00:00 stderr F I0813 19:57:16.434364 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:16.434512699+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:16.434512699+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:17.433485565+00:00 stderr F I0813 19:57:17.432334 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:17.433485565+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:17.433485565+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:17.783031196+00:00 stderr F W0813 19:57:17.780368 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:17.783031196+00:00 stderr F I0813 19:57:17.780622 1 trace.go:236] 
Trace[1748179681]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:47.772) (total time: 30007ms): 2025-08-13T19:57:17.783031196+00:00 stderr F Trace[1748179681]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30007ms (19:57:17.780) 2025-08-13T19:57:17.783031196+00:00 stderr F Trace[1748179681]: [30.007586633s] [30.007586633s] END 2025-08-13T19:57:17.783031196+00:00 stderr F E0813 19:57:17.780688 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.038305706+00:00 stderr F W0813 19:57:18.037956 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.038305706+00:00 stderr F I0813 19:57:18.038270 1 trace.go:236] Trace[1900261663]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:48.036) (total time: 30001ms): 2025-08-13T19:57:18.038305706+00:00 stderr F Trace[1900261663]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:57:18.037) 2025-08-13T19:57:18.038305706+00:00 stderr F Trace[1900261663]: [30.001957303s] [30.001957303s] END 2025-08-13T19:57:18.038418509+00:00 stderr F E0813 19:57:18.038308 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to 
list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.135368408+00:00 stderr F W0813 19:57:18.135253 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.135478171+00:00 stderr F I0813 19:57:18.135462 1 trace.go:236] Trace[1715725877]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:56:48.134) (total time: 30001ms): 2025-08-13T19:57:18.135478171+00:00 stderr F Trace[1715725877]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:57:18.135) 2025-08-13T19:57:18.135478171+00:00 stderr F Trace[1715725877]: [30.001251833s] [30.001251833s] END 2025-08-13T19:57:18.135532372+00:00 stderr F E0813 19:57:18.135510 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.431943426+00:00 stderr F I0813 19:57:18.431887 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:18.431943426+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:18.431943426+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:19.431645322+00:00 stderr F I0813 19:57:19.431575 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:19.431645322+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:19.431645322+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T19:57:20.435646542+00:00 stderr F I0813 19:57:20.435520 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:20.435646542+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:20.435646542+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:21.431757945+00:00 stderr F I0813 19:57:21.431628 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:21.431757945+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:21.431757945+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:22.433349495+00:00 stderr F I0813 19:57:22.433199 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:22.433349495+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:22.433349495+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:23.431476856+00:00 stderr F I0813 19:57:23.431395 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:23.431476856+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:23.431476856+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:24.431854371+00:00 stderr F I0813 19:57:24.431497 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:24.431854371+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:24.431854371+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:25.433028709+00:00 stderr F I0813 19:57:25.432858 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:25.433028709+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:25.433028709+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:26.433458567+00:00 stderr F I0813 19:57:26.433214 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2025-08-13T19:57:26.433458567+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:26.433458567+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:27.433281236+00:00 stderr F I0813 19:57:27.433191 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:27.433281236+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:27.433281236+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:28.432026316+00:00 stderr F I0813 19:57:28.431902 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:28.432026316+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:28.432026316+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:29.433555235+00:00 stderr F I0813 19:57:29.433273 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:29.433555235+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:29.433555235+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:30.431537989+00:00 stderr F I0813 19:57:30.431465 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:30.431537989+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:30.431537989+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:31.431923854+00:00 stderr F I0813 19:57:31.431866 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:31.431923854+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:31.431923854+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:32.431204570+00:00 stderr F I0813 19:57:32.431059 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:32.431204570+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:32.431204570+00:00 stderr F 
[-]has-synced failed: Router not synced 2025-08-13T19:57:33.433366346+00:00 stderr F I0813 19:57:33.433269 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:33.433366346+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:33.433366346+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:34.432642709+00:00 stderr F I0813 19:57:34.432592 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:34.432642709+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:34.432642709+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:35.432859600+00:00 stderr F I0813 19:57:35.432686 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:35.432859600+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:35.432859600+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:36.437029824+00:00 stderr F I0813 19:57:36.436741 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:36.437029824+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:36.437029824+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:36.933443039+00:00 stderr F W0813 19:57:36.933286 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:36.933583323+00:00 stderr F I0813 19:57:36.933566 1 trace.go:236] Trace[2081447682]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:57:19.880) (total time: 17053ms): 2025-08-13T19:57:36.933583323+00:00 stderr F Trace[2081447682]: ---"Objects listed" error:the server is currently unable to handle the request (get routes.route.openshift.io) 17052ms 
(19:57:36.933) 2025-08-13T19:57:36.933583323+00:00 stderr F Trace[2081447682]: [17.053173756s] [17.053173756s] END 2025-08-13T19:57:36.933668565+00:00 stderr F E0813 19:57:36.933651 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:36.968356316+00:00 stderr F I0813 19:57:36.968289 1 trace.go:236] Trace[1148232741]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:57:19.880) (total time: 17087ms): 2025-08-13T19:57:36.968356316+00:00 stderr F Trace[1148232741]: ---"Objects listed" error: 17087ms (19:57:36.967) 2025-08-13T19:57:36.968356316+00:00 stderr F Trace[1148232741]: [17.087755934s] [17.087755934s] END 2025-08-13T19:57:36.968468509+00:00 stderr F I0813 19:57:36.968427 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-08-13T19:57:37.032534479+00:00 stderr F I0813 19:57:37.032393 1 trace.go:236] Trace[287589973]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:57:21.283) (total time: 15748ms): 2025-08-13T19:57:37.032534479+00:00 stderr F Trace[287589973]: ---"Objects listed" error: 15748ms (19:57:37.031) 2025-08-13T19:57:37.032534479+00:00 stderr F Trace[287589973]: [15.748753569s] [15.748753569s] END 2025-08-13T19:57:37.032609361+00:00 stderr F I0813 19:57:37.032593 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-08-13T19:57:37.434524076+00:00 stderr F I0813 19:57:37.434316 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:37.434524076+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-08-13T19:57:37.434524076+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:38.433052679+00:00 stderr F I0813 19:57:38.432706 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:38.433052679+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:38.433052679+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:39.435744381+00:00 stderr F I0813 19:57:39.435644 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:39.435744381+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:39.435744381+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:40.432219425+00:00 stderr F I0813 19:57:40.432159 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:40.432219425+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:40.432219425+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:41.766582397+00:00 stderr F I0813 19:57:41.766530 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:41.766582397+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:41.766582397+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:42.001205487+00:00 stderr F W0813 19:57:42.001067 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:42.001205487+00:00 stderr F E0813 19:57:42.001137 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:42.434103768+00:00 stderr F I0813 19:57:42.434003 1 healthz.go:261] 
backend-http,has-synced check failed: healthz 2025-08-13T19:57:42.434103768+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:42.434103768+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:43.432131466+00:00 stderr F I0813 19:57:43.432032 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:43.432131466+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:43.432131466+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:44.431976806+00:00 stderr F I0813 19:57:44.431854 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:44.431976806+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:44.431976806+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:45.432675090+00:00 stderr F I0813 19:57:45.432578 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:45.432675090+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:45.432675090+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:46.433509920+00:00 stderr F I0813 19:57:46.433334 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:46.433509920+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:46.433509920+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:47.432967569+00:00 stderr F I0813 19:57:47.432819 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:47.432967569+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:47.432967569+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:48.447547809+00:00 stderr F I0813 19:57:48.447211 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:48.447547809+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-08-13T19:57:48.447547809+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:49.433520123+00:00 stderr F I0813 19:57:49.433466 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:49.433520123+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:49.433520123+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:50.439672484+00:00 stderr F I0813 19:57:50.438318 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:50.439672484+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:50.439672484+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:50.477344599+00:00 stderr F W0813 19:57:50.477079 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:50.477344599+00:00 stderr F E0813 19:57:50.477160 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:51.433433181+00:00 stderr F I0813 19:57:51.433291 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:51.433433181+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:51.433433181+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:52.431983943+00:00 stderr F I0813 19:57:52.431923 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:52.431983943+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:52.431983943+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:53.432056270+00:00 stderr F I0813 19:57:53.431981 1 healthz.go:261] 
backend-http,has-synced check failed: healthz 2025-08-13T19:57:53.432056270+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:53.432056270+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:54.433287283+00:00 stderr F I0813 19:57:54.433181 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:54.433287283+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:54.433287283+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:55.434947235+00:00 stderr F I0813 19:57:55.434648 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:55.434947235+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:55.434947235+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:56.432574223+00:00 stderr F I0813 19:57:56.432093 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:56.432574223+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:56.432574223+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:57.433262348+00:00 stderr F I0813 19:57:57.433165 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:57.433262348+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:57.433262348+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:58.431917455+00:00 stderr F I0813 19:57:58.431696 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:58.431917455+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:58.431917455+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:59.432912138+00:00 stderr F I0813 19:57:59.432742 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:59.432912138+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-08-13T19:57:59.432912138+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:00.431893395+00:00 stderr F I0813 19:58:00.431689 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:00.431893395+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:00.431893395+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:01.434020642+00:00 stderr F I0813 19:58:01.433872 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:01.434020642+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:01.434020642+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:02.435928520+00:00 stderr F I0813 19:58:02.435742 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:02.435928520+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:02.435928520+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:03.433875287+00:00 stderr F I0813 19:58:03.433666 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:03.433875287+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:03.433875287+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:04.431119524+00:00 stderr F I0813 19:58:04.430898 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:04.431119524+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:04.431119524+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:05.432938093+00:00 stderr F I0813 19:58:05.432765 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:05.432938093+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:05.432938093+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:06.435314875+00:00 stderr F I0813 
19:58:06.434429 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:06.435314875+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:06.435314875+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:07.434281442+00:00 stderr F I0813 19:58:07.434059 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:07.434281442+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:07.434281442+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:08.433708781+00:00 stderr F I0813 19:58:08.433055 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:08.433708781+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:08.433708781+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:09.437898085+00:00 stderr F I0813 19:58:09.437671 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:09.437898085+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:09.437898085+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:10.432124936+00:00 stderr F I0813 19:58:10.431734 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:10.432124936+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:10.432124936+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:11.034272901+00:00 stderr F W0813 19:58:11.034023 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:58:11.034272901+00:00 stderr F E0813 19:58:11.034095 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is 
currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:58:11.432869003+00:00 stderr F I0813 19:58:11.432656 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:11.432869003+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:11.432869003+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:12.433933519+00:00 stderr F I0813 19:58:12.432599 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:12.433933519+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:12.433933519+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:13.431929237+00:00 stderr F I0813 19:58:13.431765 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:13.431929237+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:13.431929237+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:14.431767508+00:00 stderr F I0813 19:58:14.431622 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:14.431767508+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:14.431767508+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:15.434103371+00:00 stderr F I0813 19:58:15.434008 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:15.434103371+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:15.434103371+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:16.433465119+00:00 stderr F I0813 19:58:16.433332 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:16.433465119+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:16.433465119+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:16.457355050+00:00 stderr F E0813 19:58:16.457214 1 
factory.go:130] failed to sync cache for *v1.Route shared informer 2025-08-13T19:58:16.463553086+00:00 stderr F I0813 19:58:16.463474 1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router" 2025-08-13T19:58:16.466863901+00:00 stderr F E0813 19:58:16.466693 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory 2025-08-13T19:58:16.525526943+00:00 stderr F I0813 19:58:16.525381 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T19:59:01.471118640+00:00 stderr F I0813 19:59:01.470384 1 template.go:846] "msg"="Instructing the template router to terminate" "logger"="router" 2025-08-13T19:59:02.673417482+00:00 stderr F I0813 19:59:02.673063 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Shutting down\n" 2025-08-13T19:59:02.673417482+00:00 stderr F I0813 19:59:02.673291 1 template.go:850] "msg"="Shutdown complete, exiting" "logger"="router" 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.log
2025-08-13T19:59:11.304677918+00:00 stderr F I0813 19:59:11.297505 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2025-08-13T19:59:11.673366627+00:00 stderr F I0813 19:59:11.672883 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" 
"address"="0.0.0.0:1936" "logger"="metrics" 2025-08-13T19:59:11.769373724+00:00 stderr F I0813 19:59:11.769189 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2025-08-13T19:59:11.771932617+00:00 stderr F I0813 19:59:11.769613 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2025-08-13T19:59:11.777891327+00:00 stderr F I0813 19:59:11.773270 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2025-08-13T19:59:11.777891327+00:00 stderr F I0813 19:59:11.773414 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2025-08-13T19:59:11.886728879+00:00 stderr F W0813 19:59:11.886634 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:11.886728879+00:00 stderr F E0813 19:59:11.886705 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:12.672132258+00:00 stderr F I0813 19:59:12.671738 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:12.672132258+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:12.672132258+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:12.681237917+00:00 stderr F I0813 19:59:12.678724 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-08-13T19:59:12.851562443+00:00 stderr F I0813 19:59:12.851175 1 reflector.go:351] Caches populated for *v1.EndpointSlice from 
github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T19:59:13.380116350+00:00 stderr F W0813 19:59:13.379504       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:13.380116350+00:00 stderr F E0813 19:59:13.380097       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:13.459080341+00:00 stderr F I0813 19:59:13.456206       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:13.459080341+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:13.459080341+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:14.445501809+00:00 stderr F I0813 19:59:14.439664       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:14.445501809+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:14.445501809+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:15.468994886+00:00 stderr F I0813 19:59:15.459972       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:15.468994886+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:15.468994886+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:16.104399630+00:00 stderr F W0813 19:59:16.103133       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:16.104399630+00:00 stderr F E0813 19:59:16.103188       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:16.447703496+00:00 stderr F I0813 19:59:16.446220       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:16.447703496+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:16.447703496+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:17.449471752+00:00 stderr F I0813 19:59:17.447716       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:17.449471752+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:17.449471752+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:18.455398967+00:00 stderr F I0813 19:59:18.454926       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:18.455398967+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:18.455398967+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:19.478456090+00:00 stderr F I0813 19:59:19.478324       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:19.478456090+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:19.478456090+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:20.443736896+00:00 stderr F I0813 19:59:20.443497       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:20.443736896+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:20.443736896+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:21.154386143+00:00 stderr F W0813 19:59:21.149472       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:21.154386143+00:00 stderr F E0813 19:59:21.149615       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:21.441014694+00:00 stderr F I0813 19:59:21.438582       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:21.441014694+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:21.441014694+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:22.436881922+00:00 stderr F I0813 19:59:22.436077       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:22.436881922+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:22.436881922+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:23.444874115+00:00 stderr F I0813 19:59:23.444249       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:23.444874115+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:23.444874115+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:24.443249214+00:00 stderr F I0813 19:59:24.441639       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:24.443249214+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:24.443249214+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:25.434856409+00:00 stderr F I0813 19:59:25.433416       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:25.434856409+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:25.434856409+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:26.455137963+00:00 stderr F I0813 19:59:26.452756       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:26.455137963+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:26.455137963+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:27.442377615+00:00 stderr F I0813 19:59:27.441945       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:27.442377615+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:27.442377615+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:28.441913556+00:00 stderr F I0813 19:59:28.440770       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:28.441913556+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:28.441913556+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:29.432264236+00:00 stderr F I0813 19:59:29.432104       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:29.432264236+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:29.432264236+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:30.434530676+00:00 stderr F I0813 19:59:30.434323       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:30.434530676+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:30.434530676+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:31.436502367+00:00 stderr F I0813 19:59:31.436416       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:31.436502367+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:31.436502367+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:32.438185499+00:00 stderr F I0813 19:59:32.436917       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:32.438185499+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:32.438185499+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:32.780568869+00:00 stderr F W0813 19:59:32.779563       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:32.780568869+00:00 stderr F E0813 19:59:32.779636       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:33.442515279+00:00 stderr F I0813 19:59:33.439470       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:33.442515279+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:33.442515279+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:34.440001973+00:00 stderr F I0813 19:59:34.432664       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:34.440001973+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:34.440001973+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:35.437748283+00:00 stderr F I0813 19:59:35.437596       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:35.437748283+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:35.437748283+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:36.432709335+00:00 stderr F I0813 19:59:36.432555       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:36.432709335+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:36.432709335+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:37.441529821+00:00 stderr F I0813 19:59:37.441270       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:37.441529821+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:37.441529821+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:38.437191323+00:00 stderr F I0813 19:59:38.435413       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:38.437191323+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:38.437191323+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:39.443362104+00:00 stderr F I0813 19:59:39.443272       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:39.443362104+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:39.443362104+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:40.463715509+00:00 stderr F I0813 19:59:40.439513       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:40.463715509+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:40.463715509+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:41.444454315+00:00 stderr F I0813 19:59:41.444077       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:41.444454315+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:41.444454315+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:42.447673783+00:00 stderr F I0813 19:59:42.447382       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:42.447673783+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:42.447673783+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:43.439405871+00:00 stderr F I0813 19:59:43.439305       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:43.439405871+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:43.439405871+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:44.440923330+00:00 stderr F I0813 19:59:44.438757       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:44.440923330+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:44.440923330+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:45.438150496+00:00 stderr F I0813 19:59:45.436473       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:45.438150496+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:45.438150496+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:46.442893156+00:00 stderr F I0813 19:59:46.437085       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:46.442893156+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:46.442893156+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:47.574656328+00:00 stderr F I0813 19:59:47.573480       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:47.574656328+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:47.574656328+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:48.435955780+00:00 stderr F I0813 19:59:48.434292       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:48.435955780+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:48.435955780+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:49.444657025+00:00 stderr F I0813 19:59:49.441019       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:49.444657025+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:49.444657025+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:50.441014186+00:00 stderr F I0813 19:59:50.440726       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:50.441014186+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:50.441014186+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:51.438074809+00:00 stderr F I0813 19:59:51.437388       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:51.438074809+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:51.438074809+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:52.271503646+00:00 stderr F W0813 19:59:52.271401       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:52.271503646+00:00 stderr F E0813 19:59:52.271467       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:52.437958611+00:00 stderr F I0813 19:59:52.435764       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:52.437958611+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:52.437958611+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:53.437003279+00:00 stderr F I0813 19:59:53.436582       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:53.437003279+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:53.437003279+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:54.435994195+00:00 stderr F I0813 19:59:54.435870       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:54.435994195+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:54.435994195+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:55.447610142+00:00 stderr F I0813 19:59:55.444133       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:55.447610142+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:55.447610142+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:56.435445371+00:00 stderr F I0813 19:59:56.435342       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:56.435445371+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:56.435445371+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:57.437007470+00:00 stderr F I0813 19:59:57.436719       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:57.437007470+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:57.437007470+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:58.441676389+00:00 stderr F I0813 19:59:58.441568       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:58.441676389+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:58.441676389+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:59.441219781+00:00 stderr F I0813 19:59:59.435551       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:59.441219781+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:59.441219781+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:00.449216644+00:00 stderr F I0813 20:00:00.449017       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:00.449216644+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:00.449216644+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:01.435257071+00:00 stderr F I0813 20:00:01.435136       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:01.435257071+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:01.435257071+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:02.435075695+00:00 stderr F I0813 20:00:02.434217       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:02.435075695+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:02.435075695+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:03.435159681+00:00 stderr F I0813 20:00:03.435040       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:03.435159681+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:03.435159681+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:04.434374502+00:00 stderr F I0813 20:00:04.433405       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:04.434374502+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:04.434374502+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:05.435923430+00:00 stderr F I0813 20:00:05.434204       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:05.435923430+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:05.435923430+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:06.436268404+00:00 stderr F I0813 20:00:06.435113       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:06.436268404+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:06.436268404+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:07.444277557+00:00 stderr F I0813 20:00:07.444061       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:07.444277557+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:07.444277557+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:08.440445100+00:00 stderr F I0813 20:00:08.440318       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:08.440445100+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:08.440445100+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:09.436634906+00:00 stderr F I0813 20:00:09.436496       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:09.436634906+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:09.436634906+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:10.444501654+00:00 stderr F I0813 20:00:10.443430       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:10.444501654+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:10.444501654+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:11.447671037+00:00 stderr F I0813 20:00:11.442535       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:11.447671037+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:11.447671037+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:12.432586351+00:00 stderr F I0813 20:00:12.432076       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:12.432586351+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:12.432586351+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:13.443036294+00:00 stderr F I0813 20:00:13.441571       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:13.443036294+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:13.443036294+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:14.439365563+00:00 stderr F I0813 20:00:14.437341       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:14.439365563+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:14.439365563+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:15.439457808+00:00 stderr F I0813 20:00:15.439401       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:15.439457808+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:15.439457808+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:16.445945307+00:00 stderr F I0813 20:00:16.445316       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:16.445945307+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:16.445945307+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:17.448372851+00:00 stderr F I0813 20:00:17.448314       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:17.448372851+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:17.448372851+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:18.458895964+00:00 stderr F I0813 20:00:18.452920       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:18.458895964+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:18.458895964+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:19.443228891+00:00 stderr F I0813 20:00:19.439909       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:19.443228891+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:19.443228891+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:20.452264183+00:00 stderr F I0813 20:00:20.451613       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:20.452264183+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:20.452264183+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:21.444279109+00:00 stderr F I0813 20:00:21.441573       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:21.444279109+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:21.444279109+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:22.442027309+00:00 stderr F I0813 20:00:22.441223       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:22.442027309+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:22.442027309+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:23.442934930+00:00 stderr F I0813 20:00:23.441492       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:23.442934930+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:23.442934930+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:24.460454004+00:00 stderr F I0813 20:00:24.460307       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:24.460454004+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:24.460454004+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:25.448935401+00:00 stderr F I0813 20:00:25.441689       1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:25.448935401+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:25.448935401+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:26.031207313+00:00 stderr F I0813 20:00:26.031093       1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T20:00:26.092081849+00:00 stderr F E0813 20:00:26.090484       1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory
2025-08-13T20:00:26.742089514+00:00 stderr F I0813 20:00:26.741483       1 healthz.go:261] backend-http check failed: healthz
2025-08-13T20:00:26.742089514+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:27.296769290+00:00 stderr F I0813 20:00:27.296070       1 router.go:669] "msg"="router reloaded"  "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:00:31.257363322+00:00 stderr F I0813 20:00:31.255569       1 router.go:669] "msg"="router reloaded"  "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:00:41.640227457+00:00 stderr F I0813 20:00:41.636935       1 template.go:941] "msg"="reloaded metrics certificate" "cert"="/etc/pki/tls/metrics-certs/tls.crt" "key"="/etc/pki/tls/metrics-certs/tls.key" "logger"="router"
2025-08-13T20:01:09.895922795+00:00 stderr F I0813 20:01:09.894921       1 router.go:669] "msg"="router reloaded"  "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:01:24.804112015+00:00 stderr F W0813 20:01:24.803826       1 reflector.go:462] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 23; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
2025-08-13T20:02:17.794050969+00:00 stderr F W0813 20:02:17.793041       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:02:17.794050969+00:00 stderr F E0813 20:02:17.793139       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:03:19.527530230+00:00 stderr F W0813 20:03:19.527086       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T20:03:19.527530230+00:00 stderr F I0813 20:03:19.527406       1 trace.go:236] Trace[310903481]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 20:02:49.524) (total time: 30002ms):
2025-08-13T20:03:19.527530230+00:00 stderr F Trace[310903481]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 30002ms (20:03:19.527)
2025-08-13T20:03:19.527530230+00:00 stderr F Trace[310903481]: [30.002341853s] [30.002341853s] END
2025-08-13T20:03:19.527530230+00:00 stderr F E0813 20:03:19.527464       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T20:04:28.566676368+00:00 stderr F W0813 20:04:28.566243       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T20:04:28.566676368+00:00 stderr F I0813 20:04:28.566510       1 trace.go:236] Trace[1141895289]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 20:03:58.471) (total time: 30095ms):
2025-08-13T20:04:28.566676368+00:00 stderr F Trace[1141895289]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 30082ms (20:04:28.553)
2025-08-13T20:04:28.566676368+00:00 stderr F Trace[1141895289]: [30.095336028s] [30.095336028s] END
2025-08-13T20:04:28.566676368+00:00 stderr F E0813 20:04:28.566571       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T20:05:22.531648891+00:00 stderr F W0813 20:05:22.531122       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:05:22.533378770+00:00 stderr F E0813 20:05:22.533163       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:05:23.709339045+00:00 stderr F I0813 20:05:23.709111       1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
2025-08-13T20:05:23.943489560+00:00 stderr F I0813 20:05:23.943369       1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T20:05:24.175916606+00:00 stderr F I0813 20:05:24.175825       1 router.go:669] "msg"="router reloaded"  "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:05:29.171169261+00:00 stderr F I0813 20:05:29.171023       1 router.go:669] "msg"="router reloaded"  "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:05:40.055113516+00:00 stderr F I0813 20:05:40.055049       1 router.go:669] "msg"="router reloaded"  "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:05:45.053675044+00:00 stderr F I0813 20:05:45.050701       1 router.go:669] "msg"="router reloaded"  "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:06:04.550607868+00:00 stderr F W0813 20:06:04.550499       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:04.551886485+00:00 stderr F E0813 20:06:04.551731       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:53.870533142+00:00 stderr F W0813 20:06:53.870282       1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:53.870533142+00:00 stderr F E0813 20:06:53.870427       1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:07:40.667625093+00:00 stderr F I0813 20:07:40.666123       1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T20:09:05.305116521+00:00 stderr F I0813 20:09:05.304952       1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T20:09:05.342759711+00:00 stderr F I0813 20:09:05.342612       1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351003       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351200       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351247       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:42.965144927+00:00 stderr F I0813 20:42:42.964922       1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.log
2025-08-13T20:30:04.045714829+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/olm-operator-heap-48qq2"
2025-08-13T20:30:04.473513386+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/catalog-operator-heap-88mpx"
2025-08-13T20:30:04.492237284+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/olm-operator-heap-hqmzq"
2025-08-13T20:30:04.500235874+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/catalog-operator-heap-bk2n8"
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-m0000755000175000017500000000000015133657741033101 5ustar zuulzuul././@LongLink0000644000000000000000000000037400000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-m0000644000175000017500000050677315133657716033127 0ustar zuulzuul2026-01-20T10:49:36.488760324+00:00 stderr F I0120 10:49:36.487423 1 cmd.go:241] Using service-serving-cert provided certificates 2026-01-20T10:49:36.501143881+00:00 stderr F I0120 10:49:36.493323 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:36.501143881+00:00 stderr F I0120 10:49:36.493976 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:36.659789923+00:00 stderr F I0120 10:49:36.659729 1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b 2026-01-20T10:49:37.738894822+00:00 stderr F I0120 10:49:37.738406 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:37.738894822+00:00 stderr F W0120 10:49:37.738853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2026-01-20T10:49:37.738894822+00:00 stderr F W0120 10:49:37.738859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2026-01-20T10:49:37.748504424+00:00 stderr F I0120 10:49:37.747158 1 secure_serving.go:213] Serving securely on [::]:8443
2026-01-20T10:49:37.749623059+00:00 stderr F I0120 10:49:37.749425 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2026-01-20T10:49:37.750548387+00:00 stderr F I0120 10:49:37.750248 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2026-01-20T10:49:37.751407453+00:00 stderr F I0120 10:49:37.750984 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2026-01-20T10:49:37.751407453+00:00 stderr F I0120 10:49:37.751037 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2026-01-20T10:49:37.751760133+00:00 stderr F I0120 10:49:37.751670 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2026-01-20T10:49:37.751760133+00:00 stderr F I0120 10:49:37.751684 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2026-01-20T10:49:37.752578969+00:00 stderr F I0120 10:49:37.752198 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2026-01-20T10:49:37.753182157+00:00 stderr F I0120 10:49:37.752764 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2026-01-20T10:49:37.753182157+00:00 stderr F I0120 10:49:37.752934 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2026-01-20T10:49:37.753197547+00:00 stderr F I0120 10:49:37.753189 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock...
2026-01-20T10:49:37.851462520+00:00 stderr F I0120 10:49:37.851183 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2026-01-20T10:49:37.853880014+00:00 stderr F I0120 10:49:37.853828 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2026-01-20T10:49:37.858489824+00:00 stderr F I0120 10:49:37.857483 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2026-01-20T10:54:42.929905802+00:00 stderr F I0120 10:54:42.929313 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock
2026-01-20T10:54:42.930050045+00:00 stderr F I0120 10:54:42.929959 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41957", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_acbfa801-09c0-429c-a7ad-e1dfad2b5ef2 became leader
2026-01-20T10:54:42.933847696+00:00 stderr F I0120 10:54:42.933738 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2026-01-20T10:54:42.941438569+00:00 stderr F I0120 10:54:42.941350 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2026-01-20T10:54:42.941482000+00:00 stderr F I0120 10:54:42.941414 1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2026-01-20T10:54:43.057920633+00:00 stderr F I0120 10:54:43.057833 1 base_controller.go:67] Waiting for caches to sync for ImagePullSecretCleanupController
2026-01-20T10:54:43.058393956+00:00 stderr F I0120 10:54:43.058357 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2026-01-20T10:54:43.059326201+00:00 stderr F I0120 10:54:43.058704 1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources
2026-01-20T10:54:43.059326201+00:00 stderr F I0120 10:54:43.058886 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2026-01-20T10:54:43.059326201+00:00 stderr F I0120 10:54:43.058936 1 operator.go:145] Starting OpenShiftControllerManagerOperator
2026-01-20T10:54:43.059326201+00:00 stderr F I0120 10:54:43.058942 1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController
2026-01-20T10:54:43.059326201+00:00 stderr F I0120 10:54:43.059011 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2026-01-20T10:54:43.059486246+00:00 stderr F I0120 10:54:43.059434 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager
2026-01-20T10:54:43.158502475+00:00 stderr F I0120 10:54:43.158435 1 base_controller.go:73] Caches are synced for LoggingSyncer
2026-01-20T10:54:43.158502475+00:00 stderr F I0120 10:54:43.158467 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2026-01-20T10:54:43.159388799+00:00 stderr F I0120 10:54:43.159355 1 base_controller.go:73] Caches are synced for UserCAObservationController
2026-01-20T10:54:43.159388799+00:00 stderr F I0120 10:54:43.159361 1 base_controller.go:73] Caches are synced for ConfigObserver
2026-01-20T10:54:43.159409140+00:00 stderr F I0120 10:54:43.159396 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2026-01-20T10:54:43.159419470+00:00 stderr F I0120 10:54:43.159376 1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ...
2026-01-20T10:54:43.159509752+00:00 stderr F I0120 10:54:43.159479 1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources
2026-01-20T10:54:43.159559423+00:00 stderr F I0120 10:54:43.159515 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager
2026-01-20T10:54:43.159559423+00:00 stderr F I0120 10:54:43.159552 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ...
2026-01-20T10:54:43.159586704+00:00 stderr F I0120 10:54:43.159537 1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ...
2026-01-20T10:54:43.159623375+00:00 stderr F I0120 10:54:43.159494 1 base_controller.go:73] Caches are synced for ResourceSyncController
2026-01-20T10:54:43.159623375+00:00 stderr F I0120 10:54:43.159614 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2026-01-20T10:54:43.186572303+00:00 stderr F I0120 10:54:43.186485 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:54:43.258944192+00:00 stderr F I0120 10:54:43.258838 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController
2026-01-20T10:54:43.258944192+00:00 stderr F I0120 10:54:43.258864 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ...
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.102449 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.102409072 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.102978 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.102964178 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.102996 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.102983138 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103011 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.103000659 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103028 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.103015689 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103045 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.10303428 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103078 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.10304962 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103094 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.103083001 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103113 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.103098701 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103132 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.103120712 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103149 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.103137282 +0000 UTC))"
2026-01-20T10:56:07.103330128+00:00 stderr F I0120 10:56:07.103172 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.103159673 +0000 UTC))"
2026-01-20T10:56:07.104612982+00:00 stderr F I0120 10:56:07.103550 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:14 +0000 UTC to 2027-08-13 20:00:15 +0000 UTC (now=2026-01-20 10:56:07.103535763 +0000 UTC))"
2026-01-20T10:56:07.104612982+00:00 stderr F I0120 10:56:07.103810 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906176\" (2026-01-20 09:49:36 +0000 UTC to 2027-01-20 09:49:36 +0000 UTC (now=2026-01-20 10:56:07.1037957 +0000 UTC))"
2026-01-20T10:57:42.974088515+00:00 stderr F E0120 10:57:42.973527 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager-operator/openshift-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager-operator/leases/openshift-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:43.203801170+00:00 stderr P E0120 10:57:43.203713 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": 
dial tcp 10.217.4 2026-01-20T10:57:43.203881022+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:43.244013124+00:00 stderr P E0120 10:57:43.243940 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2026-01-20T10:57:43.244094856+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:43.506896837+00:00 stderr P E0120 10:57:43.506802 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2026-01-20T10:57:43.506984949+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:44.285802435+00:00 stderr P E0120 10:57:44.285716 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": 
dial tcp 10.217.4 2026-01-20T10:57:44.285878837+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:47.408124785+00:00 stderr P E0120 10:57:47.407162 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2026-01-20T10:57:47.408181977+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:48.186334255+00:00 stderr P E0120 10:57:48.185723 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2026-01-20T10:57:48.186622793+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:58:17.114339078+00:00 stderr F I0120 10:58:17.113622 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:20.135437625+00:00 stderr F I0120 10:58:20.134794 1 reflector.go:351] Caches populated for *v1.OpenShiftControllerManager from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:20.173967373+00:00 stderr P I0120 10:58:20.173802 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/i 2026-01-20T10:58:20.174142128+00:00 stderr F 
WkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9ds
n4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2026-01-20T10:56:07Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2026-01-20T10:58:20.174797345+00:00 stderr F I0120 10:58:20.174766 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2026-01-20T10:58:20.174797345+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:58:20.188166374+00:00 stderr P I0120 10:58:20.187778 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zG 2026-01-20T10:58:20.188212886+00:00 stderr F 
qjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwF
cmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2026-01-20T10:56:07Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2026-01-20T10:58:20.189929179+00:00 stderr F I0120 10:58:20.189833 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2026-01-20T10:58:20.189929179+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:58:20.195327966+00:00 stderr F I0120 10:58:20.195286 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"8836958424fce9ce20960e8d333e3500ae0dc7390469e6d589fff6461a2d7768"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"43541"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{
"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2026-01-20T10:58:20.212146643+00:00 stderr F I0120 10:58:20.211493 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2026-01-20T10:58:20.216259138+00:00 stderr F I0120 10:58:20.216217 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"dcf2a1c5b55e94b4fba96b13bf105652b0a90de0f499e9011686de54e163ad1c"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"43543"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"}
,{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2026-01-20T10:58:20.221422298+00:00 stderr F I0120 10:58:20.221392 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2026-01-20T10:58:20.237133208+00:00 stderr F I0120 10:58:20.236729 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-20T10:58:20Z","message":"Progressing: deployment/controller-manager: observed generation is 16, desired generation is 17.\nProgressing: deployment/route-controller-manager: observed generation is 14, desired generation is 15.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:58:20.245558092+00:00 stderr F I0120 10:58:20.245478 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 16, desired generation is 17.\nProgressing: deployment/route-controller-manager: observed generation is 14, desired generation is 15.") 2026-01-20T10:58:20.271890960+00:00 stderr F I0120 10:58:20.271816 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"MMJ18g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2026-01-20T10:58:20.272153057+00:00 stderr F I0120 10:58:20.272120 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 2026-01-20T10:58:20.272153057+00:00 stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap 2026-01-20T10:58:20.287560229+00:00 stderr F I0120 10:58:20.287509 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"MMJ18g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2026-01-20T10:58:20.287752794+00:00 stderr F I0120 10:58:20.287685 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2026-01-20T10:58:20.287752794+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2026-01-20T10:58:20.298970358+00:00 stderr F I0120 10:58:20.298902 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"ec1d7294633fd6b739ce74bf5f166d954554a75c6474daf908d3698b640da346"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"43552"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"
name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2026-01-20T10:58:20.305738380+00:00 stderr F I0120 10:58:20.305684 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2026-01-20T10:58:20.309201549+00:00 stderr F I0120 10:58:20.309024 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"f2f6fbd68f38859586116749e2784cca5578848de74b2d07288892d97426ea25"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"43554"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnEr
ror","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2026-01-20T10:58:20.319149211+00:00 stderr F I0120 10:58:20.318974 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2026-01-20T10:58:20.345330086+00:00 stderr F I0120 10:58:20.345192 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-20T10:58:20Z","message":"Progressing: deployment/controller-manager: observed generation is 16, desired generation is 18.\nProgressing: deployment/route-controller-manager: observed generation is 14, desired generation is 16.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:58:20.354597601+00:00 stderr F E0120 10:58:20.354538 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:58:20.368534235+00:00 stderr F I0120 10:58:20.368299 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-20T10:58:20Z","message":"Progressing: deployment/controller-manager: observed generation is 16, desired generation is 18.\nProgressing: deployment/route-controller-manager: observed generation is 14, desired generation is 16.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:58:20.374406525+00:00 stderr F E0120 10:58:20.373729 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:58:20.389245091+00:00 stderr F I0120 10:58:20.389155 1 
status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-20T10:58:20Z","message":"Progressing: deployment/controller-manager: observed generation is 16, desired generation is 18.\nProgressing: deployment/route-controller-manager: observed generation is 14, desired generation is 16.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:58:20.394847004+00:00 stderr F E0120 10:58:20.394084 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:58:20.441954650+00:00 stderr F I0120 10:58:20.441884 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"MMJ18g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2026-01-20T10:58:20.442096304+00:00 stderr F I0120 10:58:20.442046 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'ConfigMapUpdateFailed' Failed to update ConfigMap/config -n openshift-controller-manager: Operation cannot be fulfilled on configmaps "config": the object has been modified; please apply your changes to the latest version and try again 2026-01-20T10:58:20.536807910+00:00 stderr F I0120 10:58:20.536724 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-20T10:58:20Z","message":"Progressing: deployment/controller-manager: observed generation is 16, desired generation is 18.\nProgressing: deployment/route-controller-manager: observed generation is 14, desired generation is 16.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:58:20.542483304+00:00 stderr F E0120 10:58:20.542405 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.log

2025-08-13T20:05:39.477713001+00:00 stderr F I0813 20:05:39.477011 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:39.480944024+00:00 stderr F I0813 20:05:39.480813 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:39.482761686+00:00 stderr F I0813 20:05:39.482640 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:39.616820695+00:00 stderr F I0813 20:05:39.612634 1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b 2025-08-13T20:05:39.971388769+00:00 stderr F I0813 20:05:39.970377 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:39.971388769+00:00 stderr F W0813 20:05:39.970434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:39.971388769+00:00 stderr F W0813 20:05:39.970443 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T20:05:39.979965674+00:00 stderr F I0813 20:05:39.978309 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:39.986559793+00:00 stderr F I0813 20:05:39.986511 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock... 2025-08-13T20:05:39.991060022+00:00 stderr F I0813 20:05:39.991028 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:39.991487234+00:00 stderr F I0813 20:05:39.991445 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:39.991591387+00:00 stderr F I0813 20:05:39.991531 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:39.992067701+00:00 stderr F I0813 20:05:39.992036 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:39.993505212+00:00 stderr F I0813 20:05:39.993450 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:39.993673207+00:00 stderr F I0813 20:05:39.992697 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:39.993687587+00:00 stderr F I0813 20:05:39.993668 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:39.993845892+00:00 stderr F I0813 20:05:39.992951 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:39.993888643+00:00 stderr F I0813 20:05:39.993837 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:40.095002829+00:00 stderr F I0813 20:05:40.094054 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:40.095197894+00:00 stderr F I0813 20:05:40.095170 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:40.095346198+00:00 stderr F I0813 20:05:40.095286 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:08:48.315010930+00:00 stderr F E0813 20:08:48.305568 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager-operator/openshift-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager-operator/leases/openshift-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:10:59.203663912+00:00 stderr F I0813 20:10:59.202174 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock 2025-08-13T20:10:59.208330496+00:00 stderr F I0813 20:10:59.205651 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33251", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_97a869b3-b1e5-4caf-ad32-912e51043cee became leader 2025-08-13T20:10:59.232052376+00:00 stderr F I0813 20:10:59.231988 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:10:59.243593237+00:00 stderr F I0813 20:10:59.243456 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", 
"SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:10:59.244106501+00:00 stderr F I0813 20:10:59.243119 1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:10:59.298685346+00:00 stderr F I0813 20:10:59.298610 1 base_controller.go:67] Waiting for caches to sync 
for ImagePullSecretCleanupController 2025-08-13T20:10:59.309429584+00:00 stderr F I0813 20:10:59.309371 1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources 2025-08-13T20:10:59.309539297+00:00 stderr F I0813 20:10:59.309523 1 operator.go:145] Starting OpenShiftControllerManagerOperator 2025-08-13T20:10:59.310213146+00:00 stderr F I0813 20:10:59.310182 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:10:59.310296649+00:00 stderr F I0813 20:10:59.310279 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:10:59.310347910+00:00 stderr F I0813 20:10:59.310335 1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController 2025-08-13T20:10:59.310550696+00:00 stderr F I0813 20:10:59.310525 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager 2025-08-13T20:10:59.310699400+00:00 stderr F I0813 20:10:59.310676 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:10:59.411721487+00:00 stderr F I0813 20:10:59.411566 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:10:59.411721487+00:00 stderr F I0813 20:10:59.411639 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:10:59.411844520+00:00 stderr F I0813 20:10:59.411737 1 base_controller.go:73] Caches are synced for UserCAObservationController 2025-08-13T20:10:59.411844520+00:00 stderr F I0813 20:10:59.411745 1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ... 
2025-08-13T20:10:59.412006155+00:00 stderr F I0813 20:10:59.411946 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.418689607+00:00 stderr F I0813 20:10:59.418610 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.431131613+00:00 stderr F I0813 20:10:59.430873 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.442558351+00:00 stderr F I0813 20:10:59.442445 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.443436006+00:00 stderr F I0813 20:10:59.443371 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.448866902+00:00 stderr F I0813 20:10:59.448819 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.455659307+00:00 stderr F I0813 20:10:59.455614 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.459617080+00:00 stderr F I0813 20:10:59.458001 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.469865814+00:00 stderr F I0813 20:10:59.469573 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.510179590+00:00 stderr F I0813 20:10:59.510092 1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources 2025-08-13T20:10:59.510280923+00:00 stderr F I0813 20:10:59.510258 1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ... 
2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511488 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511529 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ... 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511581 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511589 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511616 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511629 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:10:59.552331418+00:00 stderr F I0813 20:10:59.552188 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.563880430+00:00 stderr F I0813 20:10:59.563622 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"f6a079a2c81073c36b72bd771673556c9f87406689988ef3e993138968845bcc"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"30489"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:10:59.587185468+00:00 stderr F I0813 20:10:59.585561 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", 
UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:10:59.596895606+00:00 stderr F I0813 20:10:59.596737 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"a8b1e7668d4b183445825c8d9d8daba3bc69d22add1b4a1e25e5081d7b9c2cd7"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"30534"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"
serving-cert"}}]}}}} 2025-08-13T20:10:59.601860628+00:00 stderr F I0813 20:10:59.601714 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController 2025-08-13T20:10:59.602059954+00:00 stderr F I0813 20:10:59.602030 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ... 2025-08-13T20:10:59.614457720+00:00 stderr F I0813 20:10:59.614278 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:10:59.650180144+00:00 stderr F I0813 20:10:59.650071 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:59.677105436+00:00 stderr F I0813 20:10:59.676700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14.",Available changed from False to True ("All is well") 2025-08-13T20:10:59.996206105+00:00 stderr F I0813 20:10:59.996116 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"Available: no route controller manager deployment pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:11:00.047997790+00:00 stderr F I0813 20:11:00.047867 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14." to "Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no route controller manager deployment pods available on any node.") 2025-08-13T20:11:00.317103545+00:00 stderr F I0813 20:11:00.316543 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"Available: no 
pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:11:00.340647020+00:00 stderr F I0813 20:11:00.339734 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no route controller manager deployment pods available on any node." to "Available: no pods available on any node." 
2025-08-13T20:11:19.410645645+00:00 stderr F I0813 20:11:19.409794 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:11:19.441531050+00:00 stderr F I0813 20:11:19.436405 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.420037 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430567 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430625 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430641 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430655 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430667 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430679 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430691 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469924708+00:00 stderr F I0813 20:42:36.430703 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.640999140+00:00 stderr F I0813 20:42:36.430724 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.641265608+00:00 stderr F I0813 20:42:36.430733 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.641460023+00:00 stderr F I0813 20:42:36.430743 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.642325258+00:00 stderr F I0813 20:42:36.430752 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.644339656+00:00 stderr F I0813 20:42:36.422097 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.644450870+00:00 stderr F I0813 20:42:36.430828 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430849 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430860 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430891 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430908 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430919 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430929 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430940 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645688405+00:00 stderr F I0813 20:42:36.430950 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645845310+00:00 stderr F I0813 20:42:36.430960 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646290823+00:00 stderr F I0813 20:42:36.430970 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646290823+00:00 stderr F I0813 20:42:36.430981 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646484298+00:00 stderr F I0813 20:42:36.430991 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646500289+00:00 stderr F I0813 20:42:36.431008 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646677554+00:00 stderr F I0813 20:42:36.431018 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646696514+00:00 stderr F I0813 20:42:36.431038 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646708345+00:00 stderr F I0813 20:42:36.431049 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.651420531+00:00 stderr F I0813 20:42:36.431059 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.651566525+00:00 stderr F I0813 20:42:36.431071 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.652165712+00:00 stderr F I0813 20:42:36.431081 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.652720058+00:00 stderr F I0813 20:42:36.431096 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.652883633+00:00 stderr F I0813 20:42:36.431111 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431122 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431132 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431147 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.655889539+00:00 stderr F I0813 20:42:36.431164 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.656338182+00:00 stderr F I0813 20:42:36.422033 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.657733093+00:00 stderr F I0813 20:42:36.422154 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.659246806+00:00 stderr F I0813 20:42:36.422194 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.659583316+00:00 stderr F I0813 20:42:36.422249 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.377512944+00:00 stderr F I0813 20:42:40.375202 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.377512944+00:00 stderr F I0813 20:42:40.376636 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.377736721+00:00 stderr F I0813 20:42:40.377656 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379633 1 base_controller.go:172] Shutting down OpenshiftControllerManagerStaticResources ... 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379656 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379685 1 base_controller.go:172] Shutting down ImagePullSecretCleanupController ... 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379689 1 base_controller.go:172] Shutting down StatusSyncer_openshift-controller-manager ... 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379697 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:40.379735989+00:00 stderr F I0813 20:42:40.379712 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 
2025-08-13T20:42:40.379735989+00:00 stderr F I0813 20:42:40.379719 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:40.380063278+00:00 stderr F I0813 20:42:40.379990 1 base_controller.go:172] Shutting down UserCAObservationController ... 2025-08-13T20:42:40.380163951+00:00 stderr F I0813 20:42:40.380116 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.380163951+00:00 stderr F I0813 20:42:40.380153 1 base_controller.go:114] Shutting down worker of ImagePullSecretCleanupController controller ... 2025-08-13T20:42:40.380180221+00:00 stderr F I0813 20:42:40.380162 1 base_controller.go:104] All ImagePullSecretCleanupController workers have been terminated 2025-08-13T20:42:40.380191982+00:00 stderr F I0813 20:42:40.380171 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:42:40.380206042+00:00 stderr F I0813 20:42:40.380196 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:40.380252273+00:00 stderr F I0813 20:42:40.380210 1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ... 2025-08-13T20:42:40.380378737+00:00 stderr F I0813 20:42:40.380219 1 base_controller.go:104] All UserCAObservationController workers have been terminated 2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.379696 1 base_controller.go:150] All StatusSyncer_openshift-controller-manager post start hooks have been terminated 2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.381115 1 base_controller.go:114] Shutting down worker of OpenshiftControllerManagerStaticResources controller ... 
2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.381125 1 base_controller.go:104] All OpenshiftControllerManagerStaticResources workers have been terminated 2025-08-13T20:42:40.381155830+00:00 stderr F I0813 20:42:40.381134 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ... 2025-08-13T20:42:40.381155830+00:00 stderr F I0813 20:42:40.381140 1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated 2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.380217 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.381248 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.381248 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.381904751+00:00 stderr F I0813 20:42:40.380269 1 operator.go:151] Shutting down OpenShiftControllerManagerOperator 2025-08-13T20:42:40.383730464+00:00 stderr F I0813 20:42:40.382525 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.383730464+00:00 stderr F W0813 20:42:40.382565 1 builder.go:131] graceful termination failed, controllers failed with error: stopped

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log

2025-08-13T19:59:22.501014350+00:00 stderr F I0813 19:59:22.498949 1 cmd.go:241] Using service-serving-cert 
provided certificates 2025-08-13T19:59:22.518155678+00:00 stderr F I0813 19:59:22.517397 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:22.740731243+00:00 stderr F I0813 19:59:22.705993 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:24.642256226+00:00 stderr F I0813 19:59:24.638870 1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b 2025-08-13T19:59:34.801442426+00:00 stderr F I0813 19:59:34.794211 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:34.801442426+00:00 stderr F W0813 19:59:34.799728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:34.801442426+00:00 stderr F W0813 19:59:34.799743 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:34.853910601+00:00 stderr F I0813 19:59:34.852730 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:34.930903836+00:00 stderr F I0813 19:59:34.924662 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock... 
2025-08-13T19:59:35.156718343+00:00 stderr F I0813 19:59:35.156653 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:35.274368056+00:00 stderr F I0813 19:59:35.237965 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:35.274496450+00:00 stderr F I0813 19:59:35.198935 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:35.274530241+00:00 stderr F I0813 19:59:35.199918 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:35.274556002+00:00 stderr F I0813 19:59:35.204331 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.274581893+00:00 stderr F I0813 19:59:35.253733 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:35.284174546+00:00 stderr F I0813 19:59:35.284131 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.352969556+00:00 stderr F I0813 19:59:35.296585 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.596705014+00:00 stderr F I0813 19:59:35.309268 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:35.605019651+00:00 stderr F I0813 19:59:35.584214 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:35.834367478+00:00 stderr F I0813 19:59:35.584244 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.843013895+00:00 
stderr F I0813 19:59:35.842575 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.843500579+00:00 stderr F E0813 19:59:35.843458 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.843619232+00:00 stderr F E0813 19:59:35.843597 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.852731992+00:00 stderr F E0813 19:59:35.852661 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.879277559+00:00 stderr F E0813 19:59:35.876410 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.879277559+00:00 stderr F E0813 19:59:35.876472 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.887279527+00:00 stderr F E0813 19:59:35.887219 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.904343823+00:00 stderr F E0813 19:59:35.904291 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.912429184+00:00 stderr F E0813 19:59:35.912348 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.959033952+00:00 stderr F E0813 19:59:35.958973 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.959146965+00:00 stderr F E0813 19:59:35.959131 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.040763242+00:00 stderr F E0813 19:59:36.040676 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.054607717+00:00 stderr F E0813 19:59:36.054573 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.232656712+00:00 stderr F E0813 19:59:36.232421 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.331167670+00:00 stderr F I0813 19:59:36.329754 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock 2025-08-13T19:59:36.344607243+00:00 stderr F I0813 19:59:36.344427 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", 
Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28188", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_9d027477-b136-47ef-a304-2dd35bc9cd4d became leader 2025-08-13T19:59:36.434342421+00:00 stderr F E0813 19:59:36.434276 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.557910153+00:00 stderr F E0813 19:59:36.555387 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.789066893+00:00 stderr F E0813 19:59:36.782173 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.792564442+00:00 stderr F I0813 19:59:36.792518 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:36.888852057+00:00 stderr F I0813 19:59:36.847317 1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal 
ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:36.995937550+00:00 stderr F I0813 19:59:36.848765 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", 
"VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:37.271882766+00:00 stderr F E0813 19:59:37.268698 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:37.436072326+00:00 stderr F E0813 19:59:37.422409 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:37.628636835+00:00 stderr F I0813 19:59:37.627170 1 base_controller.go:67] Waiting for caches to sync for ImagePullSecretCleanupController 2025-08-13T19:59:37.691184378+00:00 stderr F I0813 
19:59:37.674414 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:38.100083794+00:00 stderr F I0813 19:59:38.097705 1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController 2025-08-13T19:59:39.611923168+00:00 stderr F E0813 19:59:39.610157 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:39.611923168+00:00 stderr F E0813 19:59:39.610561 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.899051 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.899362 1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.899512 1 operator.go:145] Starting OpenShiftControllerManagerOperator 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.900209 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.901615 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager 2025-08-13T19:59:42.993629804+00:00 stderr F E0813 19:59:42.978620 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.993629804+00:00 stderr F E0813 19:59:42.993327 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.024014960+00:00 stderr F I0813 19:59:43.021103 1 trace.go:236] Trace[411137344]: "DeltaFIFO Pop Process" ID:default,Depth:63,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.802) (total time: 214ms): 2025-08-13T19:59:43.024014960+00:00 stderr F Trace[411137344]: [214.00944ms] [214.00944ms] END 2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.032975 1 base_controller.go:73] Caches are synced for UserCAObservationController 2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033125 1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ... 2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033291 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033300 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.107927 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.108215 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.109536 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.124359101+00:00 stderr F I0813 19:59:43.123596 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.219533754+00:00 stderr F I0813 19:59:43.217513 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.361143030+00:00 stderr F I0813 19:59:43.355828 1 reflector.go:351] 
Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.368662565+00:00 stderr F I0813 19:59:43.368109 1 trace.go:236] Trace[1825624301]: "DeltaFIFO Pop Process" ID:cluster-admin,Depth:188,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.255) (total time: 112ms): 2025-08-13T19:59:43.368662565+00:00 stderr F Trace[1825624301]: [112.148267ms] [112.148267ms] END 2025-08-13T19:59:43.477358833+00:00 stderr F I0813 19:59:43.476446 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.637191349+00:00 stderr F I0813 19:59:43.636024 1 trace.go:236] Trace[430823319]: "DeltaFIFO Pop Process" ID:multus-group,Depth:146,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.432) (total time: 202ms): 2025-08-13T19:59:43.637191349+00:00 stderr F Trace[430823319]: [202.973466ms] [202.973466ms] END 2025-08-13T19:59:45.230277591+00:00 stderr F I0813 19:59:45.131770 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:45.264485136+00:00 stderr F I0813 19:59:45.235273 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:46.966283855+00:00 stderr F I0813 19:59:46.929670 1 trace.go:236] Trace[1910610313]: "DeltaFIFO Pop Process" ID:aggregate-olm-edit,Depth:223,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.235) (total time: 1693ms): 2025-08-13T19:59:46.966283855+00:00 stderr F Trace[1910610313]: [1.693771071s] [1.693771071s] END 2025-08-13T19:59:47.040871171+00:00 stderr F I0813 19:59:46.973674 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:47.190433995+00:00 stderr F I0813 19:59:47.190257 1 trace.go:236] Trace[287803092]: "DeltaFIFO Pop Process" 
ID:net-attach-def-project,Depth:179,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:47.076) (total time: 113ms): 2025-08-13T19:59:47.190433995+00:00 stderr F Trace[287803092]: [113.420843ms] [113.420843ms] END 2025-08-13T19:59:47.267690217+00:00 stderr F I0813 19:59:47.079030 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:47.304922349+00:00 stderr F I0813 19:59:47.293440 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:47.340553874+00:00 stderr F I0813 19:59:47.340491 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:47.342163810+00:00 stderr F I0813 19:59:47.341746 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401080 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401137 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401095 1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401182 1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ... 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.409057 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.409075 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ... 
2025-08-13T19:59:48.127951600+00:00 stderr F E0813 19:59:48.115094 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:48.128310830+00:00 stderr F E0813 19:59:48.128272 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.115961985+00:00 stderr P I0813 19:59:49.109919 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ 2025-08-13T19:59:49.116028847+00:00 stderr F 
/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ
4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:49.116028847+00:00 stderr F I0813 19:59:49.111960 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T19:59:49.116028847+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:49.865857840+00:00 stderr P I0813 19:59:49.863221 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/ 2025-08-13T19:59:49.865993994+00:00 stderr F 
iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+p
vkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:49.899495549+00:00 stderr F I0813 19:59:49.899305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T19:59:49.899495549+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.414174951+00:00 stderr F I0813 19:59:50.410733 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"1accb1ac836aa33f8208a614d8657e2894b298fdc84501b2ced8f0aea7081a7e"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28451"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T19:59:50.764362733+00:00 stderr F I0813 19:59:50.763635 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", 
UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T19:59:51.005057045+00:00 stderr F I0813 19:59:51.002574 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"bd022aaa08f7a35114f39954475616ebb6669ca5cfd704016d15dc0be2736d06"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28474"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName
":"serving-cert"}}]}}}} 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.802982 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.802735174 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803080 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.803061153 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803107 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.803090464 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803124 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.803113735 +0000 UTC))" 
2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803143 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803131585 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803214 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803148676 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803326 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803221628 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803352 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
19:59:51.803334771 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803370 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803359372 +0000 UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.803714 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:51.803696981 +0000 UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.865124 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 19:59:51.865080941 +0000 UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.862192 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager 
because it changed 2025-08-13T19:59:51.908679914+00:00 stderr F I0813 19:59:51.908319 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.226393681+00:00 stderr F I0813 19:59:52.224708 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:52Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.309720466+00:00 stderr F I0813 19:59:52.307397 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: 
deployment/route-controller-manager: observed generation is 8, desired generation is 9.",Available changed from False to True ("All is well") 2025-08-13T19:59:52.632061214+00:00 stderr F I0813 19:59:52.630311 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:52Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.011012106+00:00 stderr F I0813 19:59:52.950889 1 trace.go:236] Trace[587110061]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:37.636) (total time: 15314ms): 2025-08-13T19:59:53.011012106+00:00 stderr F Trace[587110061]: ---"Objects listed" error: 15314ms (19:59:52.950) 2025-08-13T19:59:53.011012106+00:00 stderr F Trace[587110061]: [15.31460899s] [15.31460899s] END 2025-08-13T19:59:53.011402667+00:00 stderr F I0813 19:59:53.011356 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.012131518+00:00 stderr F E0813 19:59:52.962146 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on 
clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:53.033095616+00:00 stderr F I0813 19:59:53.033026 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController 2025-08-13T19:59:53.039407866+00:00 stderr F I0813 19:59:53.039368 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ... 2025-08-13T19:59:53.294618370+00:00 stderr F I0813 19:59:53.294426 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"MfAL0g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T19:59:53.309522275+00:00 stderr F I0813 19:59:53.295151 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 2025-08-13T19:59:53.309522275+00:00 stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap 2025-08-13T19:59:53.404664057+00:00 stderr F I0813 19:59:53.402032 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"MfAL0g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T19:59:53.404664057+00:00 stderr F I0813 19:59:53.402516 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T19:59:53.404664057+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T19:59:53.610534275+00:00 stderr F I0813 19:59:53.610241 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"9560e1ebfed279fa4b00d6622d2d0c7548ddaafda3bf7adb90c2675e98237adc"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"28609"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cer
t","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T19:59:53.839948774+00:00 stderr F I0813 19:59:53.792171 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T19:59:53.839948774+00:00 stderr F I0813 19:59:53.817658 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"55202ddf9b00b1660ea2cb5f3c4a6bcc3fe4ffc25c2e72e085bdaf7de2334698"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"28614"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPat
h":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T19:59:53.899455651+00:00 stderr F I0813 19:59:53.858690 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T19:59:54.035239521+00:00 stderr F I0813 19:59:54.035105 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.195940292+00:00 stderr F I0813 19:59:54.195299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9." to "Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") 2025-08-13T19:59:56.033043570+00:00 stderr F I0813 19:59:56.025824 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 
1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:56.508439492+00:00 stderr F I0813 19:59:56.501318 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" 2025-08-13T19:59:57.112488860+00:00 stderr F I0813 19:59:57.111042 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available 
replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.287965542+00:00 stderr F E0813 19:59:57.279216 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757458 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.757412947 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757647 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.757632283 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 
20:00:05.757667 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.757654564 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757719 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.757672245 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757753 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.757728736 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773048 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77295724 +0000 UTC))" 
2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773142 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773125205 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773167 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773149976 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.773176797 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773207 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773196747 
+0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773564 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.773546847 +0000 UTC))" 2025-08-13T20:00:05.829599936+00:00 stderr F I0813 20:00:05.828483 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:05.828419362 +0000 UTC))" 2025-08-13T20:00:08.126868879+00:00 stderr P I0813 20:00:08.115317 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IB 2025-08-13T20:00:08.126971252+00:00 stderr F 
DwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKM
tIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.143879924+00:00 stderr F I0813 20:00:08.143169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T20:00:08.143879924+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.734625589+00:00 stderr P I0813 20:00:08.729751 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQ 2025-08-13T20:00:08.734699641+00:00 stderr F 
UAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vI
i4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.735633667+00:00 stderr F I0813 20:00:08.735589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T20:00:08.735633667+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.868503696+00:00 stderr F I0813 20:00:08.868378 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"fc47afb69400ec93f9f5988897f8473ca4c4bdea046fab70a0e165e5b4cc33c1"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28980"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"n
ame":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:00:09.328665487+00:00 stderr F I0813 20:00:09.303747 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:00:09.761062066+00:00 stderr F I0813 20:00:09.759946 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"2997ec2c10eb5744ab5a4358602bc88f99f56641a9b7132faff9749e82773557"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28996"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":
"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:00:10.271157441+00:00 stderr F I0813 20:00:10.267309 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:00:10.956011969+00:00 stderr F I0813 20:00:10.955606 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:10Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:11.317181517+00:00 stderr F I0813 20:00:11.315079 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11.",Available changed from False to True ("All is well") 2025-08-13T20:00:20.807714778+00:00 stderr F I0813 20:00:20.797011 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"nkDd2A==","openshift-controller-manager.serving-cert.secret":"inPi3w=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:20.807714778+00:00 stderr F I0813 20:00:20.799219 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 2025-08-13T20:00:20.807714778+00:00 
stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap,data.openshift-controller-manager.serving-cert.secret 2025-08-13T20:00:22.570906954+00:00 stderr F I0813 20:00:22.561663 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"nkDd2A=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:22.570906954+00:00 stderr F I0813 20:00:22.563114 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:00:22.570906954+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T20:00:23.296960378+00:00 stderr F I0813 20:00:23.294715 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"6bcc007c303cd189832d7956655ea4a765a39149dbd0e31991e8b4f86ad92eeb"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"29406","configmaps/openshift-service-ca":"29219"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:00:23.454507270+00:00 stderr F I0813 20:00:23.453199 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:00:23.804053877+00:00 stderr F I0813 20:00:23.802669 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"d2cfbecabf7957093938fbbf20602fed627c8dcf88c7baa3039d1aa76706feb6"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"29435"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{
"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:00:24.288062459+00:00 stderr F I0813 20:00:24.286413 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:00:24.762012823+00:00 stderr F I0813 20:00:24.761237 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:24.887286025+00:00 stderr F I0813 20:00:24.885987 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11." to "Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") 2025-08-13T20:00:40.856666175+00:00 stderr F I0813 20:00:40.845673 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no route controller manager deployment pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-08-13T20:00:40.990130500+00:00 stderr F I0813 20:00:40.969350 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no pods available on any node." to "Available: no route controller manager deployment pods available on any node." 
2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049126 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.048880262 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049726 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.049707486 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049761 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049732806 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050027 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049766497 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 
stderr F I0813 20:01:00.050108 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050042495 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050128 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050114927 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050146 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050134378 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050166 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050152008 +0000 UTC))" 
2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050184 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.050172309 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050252 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.05019623 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050294 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050261171 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050992 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 
+0000 UTC (now=2025-08-13 20:01:00.050970852 +0000 UTC))" 2025-08-13T20:01:00.053948177+00:00 stderr F I0813 20:01:00.051276 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:01:00.05126006 +0000 UTC))" 2025-08-13T20:01:08.673277712+00:00 stderr P I0813 20:01:08.666614 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1Nj 2025-08-13T20:01:08.674489597+00:00 stderr F 
YwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u
6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:01:08.713898331+00:00 stderr F I0813 20:01:08.711258 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T20:01:08.713898331+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:09.637633520+00:00 stderr F I0813 20:01:09.637216 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.serving-cert.secret":"6wVDCg=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:09.654632895+00:00 stderr F I0813 20:01:09.637763 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:01:09.654632895+00:00 stderr F cause by changes in data.openshift-route-controller-manager.serving-cert.secret 2025-08-13T20:01:10.338074073+00:00 stderr P I0813 20:01:10.337043 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUx 2025-08-13T20:01:10.338280109+00:00 stderr F 
MTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf
6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:01:10.344929248+00:00 stderr F I0813 20:01:10.344119 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T20:01:10.344929248+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:14.401913999+00:00 stderr F I0813 20:01:14.394388 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"f04f287b763c8ca46f7254ec00d7f77f509cbf1bfd94b52bf4b4d93869c665c0"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"30255"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePer
iodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:01:17.951922354+00:00 stderr F I0813 20:01:17.951238 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:01:18.815583180+00:00 stderr F I0813 20:01:18.814730 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"b9a81854272e0c5424d4034a7ee3633ceff604e77368302720feb1f03d857755"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"30328","configmaps/config":"30299"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscala
tion":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:01:20.174150267+00:00 stderr F I0813 20:01:20.173618 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:01:21.295577804+00:00 stderr F I0813 20:01:21.295110 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no route controller manager deployment pods available on any 
node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:21.648275290+00:00 stderr F I0813 20:01:21.638715 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" 2025-08-13T20:01:23.289898379+00:00 stderr F I0813 20:01:23.275156 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"d_XbdQ=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:23.304622698+00:00 stderr F I0813 20:01:23.293916 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 
2025-08-13T20:01:23.304622698+00:00 stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap 2025-08-13T20:01:28.280990974+00:00 stderr F I0813 20:01:28.280362 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.281032975+00:00 stderr F I0813 20:01:28.280998 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:28.281407776+00:00 stderr F I0813 20:01:28.281358 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.282386073+00:00 stderr F I0813 20:01:28.282302 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:28.28226776 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282362 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:28.282335452 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282384 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.282369833 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282414 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.282402004 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 20:01:28.282431 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282420344 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 20:01:28.282451 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282438615 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 
20:01:28.282474 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282456705 +0000 UTC))" 2025-08-13T20:01:28.282513407+00:00 stderr F I0813 20:01:28.282495 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282482476 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282523 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:28.282501107 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282564 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:28.282548048 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282589 1 
tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282577829 +0000 UTC))" 2025-08-13T20:01:28.289939799+00:00 stderr F I0813 20:01:28.283343 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:14 +0000 UTC to 2027-08-13 20:00:15 +0000 UTC (now=2025-08-13 20:01:28.28332429 +0000 UTC))" 2025-08-13T20:01:28.289939799+00:00 stderr F I0813 20:01:28.284285 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:01:28.284143234 +0000 UTC))" 2025-08-13T20:01:31.519181458+00:00 stderr F I0813 20:01:31.515303 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"d_XbdQ=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:31.520220907+00:00 stderr F I0813 20:01:31.520148 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:01:31.520220907+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T20:01:32.710407505+00:00 stderr F I0813 20:01:32.709976 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="f4b72f648a02bf4d745720b461c43dc88e5b533156c427b7905f426178ca53a1", new="d241a06236d5f1f5f86885717c7d346103e02b5d1ed9dcf4c19f7f338250fbcb") 2025-08-13T20:01:32.711219518+00:00 stderr F W0813 20:01:32.710474 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.710576 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="9fa7e5fbef9e286ed42003219ce81736b0a30e8ce2f7dd520c0c149b834fa6a0", new="db6902c5c5fee4f9a52663b228002d42646911159d139a2d4d9110064da348fd") 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.710987 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.711074 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.711163 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:32.711678161+00:00 stderr F I0813 20:01:32.711622 1 base_controller.go:172] Shutting down StatusSyncer_openshift-controller-manager ... 2025-08-13T20:01:32.711678161+00:00 stderr F I0813 20:01:32.711623 1 base_controller.go:172] Shutting down OpenshiftControllerManagerStaticResources ... 
2025-08-13T20:01:32.711946829+00:00 stderr F I0813 20:01:32.711872 1 operator.go:151] Shutting down OpenShiftControllerManagerOperator 2025-08-13T20:01:32.712037742+00:00 stderr F I0813 20:01:32.711949 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:32.712098493+00:00 stderr F I0813 20:01:32.711995 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:32.713875504+00:00 stderr F I0813 20:01:32.712115 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:32.713875504+00:00 stderr F W0813 20:01:32.712173 1 builder.go:131] graceful termination failed, controllers failed with error: stopped home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.log 2025-08-13T20:01:08.525355365+00:00 stderr F I0813 20:01:08.494832 1 
cmd.go:92] &{ true {false} installer true map[cert-dir:0xc0005d0c80 cert-secrets:0xc0005d0a00 configmaps:0xc0005d05a0 namespace:0xc0005d03c0 optional-configmaps:0xc0005d06e0 optional-secrets:0xc0005d0640 pod:0xc0005d0460 pod-manifest-dir:0xc0005d0820 resource-dir:0xc0005d0780 revision:0xc0005d0320 secrets:0xc0005d0500 v:0xc0005d1680] [0xc0005d1680 0xc0005d0320 0xc0005d03c0 0xc0005d0460 0xc0005d0780 0xc0005d0820 0xc0005d05a0 0xc0005d06e0 0xc0005d0640 0xc0005d0500 0xc0005d0c80 0xc0005d0a00] [] map[cert-configmaps:0xc0005d0aa0 cert-dir:0xc0005d0c80 cert-secrets:0xc0005d0a00 configmaps:0xc0005d05a0 help:0xc0005d1a40 kubeconfig:0xc0005d0280 log-flush-frequency:0xc0005d15e0 namespace:0xc0005d03c0 optional-cert-configmaps:0xc0005d0be0 optional-cert-secrets:0xc0005d0b40 optional-configmaps:0xc0005d06e0 optional-secrets:0xc0005d0640 pod:0xc0005d0460 pod-manifest-dir:0xc0005d0820 pod-manifests-lock-file:0xc0005d0960 resource-dir:0xc0005d0780 revision:0xc0005d0320 secrets:0xc0005d0500 timeout-duration:0xc0005d08c0 v:0xc0005d1680 vmodule:0xc0005d1720] [0xc0005d0280 0xc0005d0320 0xc0005d03c0 0xc0005d0460 0xc0005d0500 0xc0005d05a0 0xc0005d0640 0xc0005d06e0 0xc0005d0780 0xc0005d0820 0xc0005d08c0 0xc0005d0960 0xc0005d0a00 0xc0005d0aa0 0xc0005d0b40 0xc0005d0be0 0xc0005d0c80 0xc0005d15e0 0xc0005d1680 0xc0005d1720 0xc0005d1a40] [0xc0005d0aa0 0xc0005d0c80 0xc0005d0a00 0xc0005d05a0 0xc0005d1a40 0xc0005d0280 0xc0005d15e0 0xc0005d03c0 0xc0005d0be0 0xc0005d0b40 0xc0005d06e0 0xc0005d0640 0xc0005d0460 0xc0005d0820 0xc0005d0960 0xc0005d0780 0xc0005d0320 0xc0005d0500 0xc0005d08c0 0xc0005d1680 0xc0005d1720] map[104:0xc0005d1a40 118:0xc0005d1680] [] -1 0 0xc000567560 true 0x215dc20 []} 2025-08-13T20:01:08.525355365+00:00 stderr F I0813 20:01:08.518481 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000581380)({ 2025-08-13T20:01:08.525355365+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:01:08.525355365+00:00 
stderr F Revision: (string) (len=1) "7", 2025-08-13T20:01:08.525355365+00:00 stderr F NodeName: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-scheduler", 2025-08-13T20:01:08.525355365+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:01:08.525355365+00:00 stderr F SecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=20) "scheduler-kubeconfig", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=16) "policy-configmap" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F CertSecretNames: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=30) "kube-scheduler-client-cert-key" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:01:08.525355365+00:00 stderr F CertConfigMapNamePrefixes: ([]string) , 
2025-08-13T20:01:08.525355365+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:01:08.525355365+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", 2025-08-13T20:01:08.525355365+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.525355365+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:01:08.525355365+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:01:08.525355365+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:01:08.525355365+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:01:08.525355365+00:00 stderr F }) 2025-08-13T20:01:08.558026046+00:00 stderr F I0813 20:01:08.545567 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:01:09.709344395+00:00 stderr F I0813 20:01:09.708316 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:01:10.010008738+00:00 stderr F I0813 20:01:10.008949 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:01:40.010064498+00:00 stderr F I0813 20:01:40.009280 1 cmd.go:521] Getting installer pods for node crc 2025-08-13T20:01:52.020591303+00:00 stderr F I0813 20:01:52.020372 1 cmd.go:539] Latest installer revision for node crc is: 7 2025-08-13T20:01:52.020591303+00:00 stderr F I0813 20:01:52.020447 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.130444 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.130517 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7" ... 
2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.131123 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7" ... 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.131137 1 cmd.go:226] Getting secrets ... 2025-08-13T20:01:59.477298721+00:00 stderr F I0813 20:01:59.477140 1 copy.go:32] Got secret openshift-kube-scheduler/localhost-recovery-client-token-7 2025-08-13T20:02:01.293670193+00:00 stderr F I0813 20:02:01.284574 1 copy.go:32] Got secret openshift-kube-scheduler/serving-cert-7 2025-08-13T20:02:01.293670193+00:00 stderr F I0813 20:02:01.284669 1 cmd.go:239] Getting config maps ... 2025-08-13T20:02:03.471152171+00:00 stderr F I0813 20:02:03.462088 1 copy.go:60] Got configMap openshift-kube-scheduler/config-7 2025-08-13T20:02:05.728719872+00:00 stderr F I0813 20:02:05.722475 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-cert-syncer-kubeconfig-7 2025-08-13T20:02:09.366901882+00:00 stderr F I0813 20:02:09.366325 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-pod-7 2025-08-13T20:02:13.052766894+00:00 stderr F I0813 20:02:13.050491 1 copy.go:60] Got configMap openshift-kube-scheduler/scheduler-kubeconfig-7 2025-08-13T20:02:15.100946713+00:00 stderr F I0813 20:02:15.096609 1 copy.go:60] Got configMap openshift-kube-scheduler/serviceaccount-ca-7 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.827442 1 copy.go:52] Failed to get config map openshift-kube-scheduler/policy-configmap-7: configmaps "policy-configmap-7" not found 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.827567 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.828621 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/ca.crt" ... 
2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829105 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829208 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829339 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829444 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829640 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert/tls.key" ... 2025-08-13T20:02:17.849720188+00:00 stderr F I0813 20:02:17.831747 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert/tls.crt" ... 2025-08-13T20:02:17.850252683+00:00 stderr F I0813 20:02:17.850183 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/config" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858457 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/config/config.yaml" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858690 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-cert-syncer-kubeconfig" ... 
2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858969 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.859105 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859387 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/forceRedeploymentReason" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859545 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/pod.yaml" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859682 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/version" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859864 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/scheduler-kubeconfig" ... 2025-08-13T20:02:17.860032522+00:00 stderr F I0813 20:02:17.859969 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/scheduler-kubeconfig/kubeconfig" ... 2025-08-13T20:02:17.860201287+00:00 stderr F I0813 20:02:17.860141 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/serviceaccount-ca" ... 2025-08-13T20:02:17.860311810+00:00 stderr F I0813 20:02:17.860253 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/serviceaccount-ca/ca-bundle.crt" ... 
2025-08-13T20:02:17.860493775+00:00 stderr F I0813 20:02:17.860418 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs" ... 2025-08-13T20:02:17.860493775+00:00 stderr F I0813 20:02:17.860480 1 cmd.go:226] Getting secrets ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474599 1 copy.go:32] Got secret openshift-kube-scheduler/kube-scheduler-client-cert-key 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474675 1 cmd.go:239] Getting config maps ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474688 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key" ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474734 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.crt" ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.475101 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key" ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.475229 1 cmd.go:332] Getting pod configmaps/kube-scheduler-pod-7 -n openshift-kube-scheduler 2025-08-13T20:02:21.118620340+00:00 stderr F I0813 20:02:21.118522 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 
2025-08-13T20:02:21.118620340+00:00 stderr F I0813 20:02:21.118597 1 cmd.go:376] Writing a pod under "kube-scheduler-pod.yaml" key 2025-08-13T20:02:21.118620340+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"7","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocatio
n=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/con
figmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.152955 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/kube-scheduler-pod.yaml" ... 
2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.153271 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.153282 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:02:21.176472970+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"7","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocatio
n=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/con
figmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log
2025-08-13T19:57:10.537128831+00:00 stderr F I0813 19:57:10.536908 22232 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:57:10.537343497+00:00 stderr F I0813 19:57:10.537327 22232 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2025-08-13T19:57:10.544102180+00:00 stderr F I0813 19:57:10.543965 22232 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2025-08-13T19:57:10.548261819+00:00 stderr F I0813 19:57:10.548151 22232 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2025-08-13T19:57:10.660138773+00:00 stderr F I0813 19:57:10.660004 22232 daemon.go:566] Invoking re-exec 
/run/bin/machine-config-daemon 2025-08-13T19:57:10.729900775+00:00 stderr F I0813 19:57:10.729682 22232 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:57:10.730192564+00:00 stderr F E0813 19:57:10.730121 22232 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret 2025-08-13T19:57:10.730305507+00:00 stderr F I0813 19:57:10.730252 22232 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2025-08-13T19:57:10.952575284+00:00 stderr F I0813 19:57:10.952524 22232 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2025-08-13T19:57:10.953974444+00:00 stderr F I0813 19:57:10.953931 22232 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-08-13T19:57:10.955283391+00:00 stderr F I0813 19:57:10.955215 22232 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:57:10.955349303+00:00 stderr F I0813 19:57:10.955309 22232 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:57:10.986460181+00:00 stderr F I0813 19:57:10.985143 22232 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:57:10.986460181+00:00 stderr F I0813 19:57:10.985317 22232 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.000562 22232 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", 
"AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.001140 22232 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity 
BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.001213 22232 update.go:2610] Starting to manage node: crc 2025-08-13T19:57:11.013211645+00:00 stderr F I0813 19:57:11.009509 22232 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-08-13T19:57:11.071088538+00:00 stderr F I0813 19:57:11.070943 22232 daemon.go:1727] State: idle 2025-08-13T19:57:11.071088538+00:00 stderr F Deployments: 2025-08-13T19:57:11.071088538+00:00 stderr F * 
ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:57:11.071088538+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:57:11.071088538+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2025-08-13T19:57:11.071088538+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-08-13T19:57:11.071088538+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071903531+00:00 stderr F I0813 19:57:11.071718 22232 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-08-13T19:57:11.071903531+00:00 stderr F { 2025-08-13T19:57:11.071903531+00:00 stderr F "container-image": { 2025-08-13T19:57:11.071903531+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-08-13T19:57:11.071903531+00:00 stderr F "image-labels": { 2025-08-13T19:57:11.071903531+00:00 stderr F "containers.bootc": "1", 2025-08-13T19:57:11.071903531+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-08-13T19:57:11.071903531+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-08-13T19:57:11.071903531+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-08-13T19:57:11.071903531+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.revision": 
"b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.bootable": "true", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-08-13T19:57:11.071903531+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-08-13T19:57:11.071903531+00:00 stderr F }, 2025-08-13T19:57:11.071903531+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-08-13T19:57:11.071903531+00:00 stderr F }, 2025-08-13T19:57:11.071903531+00:00 stderr F "osbuild-version": "114", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:57:11.071903531+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-08-13T19:57:11.071903531+00:00 stderr F "version": "416.94.202405291527-0" 2025-08-13T19:57:11.071903531+00:00 stderr F } 2025-08-13T19:57:11.072011204+00:00 stderr F I0813 19:57:11.071965 22232 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-08-13T19:57:11.072011204+00:00 stderr F I0813 19:57:11.071980 22232 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-08-13T19:57:11.084910483+00:00 stderr F I0813 19:57:11.084726 22232 daemon.go:1736] journalctl --list-boots: 2025-08-13T19:57:11.084910483+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST 
ENTRY 2025-08-13T19:57:11.084910483+00:00 stderr F -2 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F -1 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F 0 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 19:57:11 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F I0813 19:57:11.084819 22232 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.097905 22232 daemon.go:1751] systemd service state: OK 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.097949 22232 daemon.go:1327] Starting MachineConfigDaemon 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.098084 22232 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-08-13T19:57:11.990345587+00:00 stderr F I0813 19:57:11.990162 22232 daemon.go:647] Node crc is part of the control plane 2025-08-13T19:57:12.010350119+00:00 stderr F I0813 19:57:12.010192 22232 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1986687374 --cleanup 2025-08-13T19:57:12.013985292+00:00 stderr F [2025-08-13T19:57:12Z INFO nmstatectl] Nmstate version: 2.2.29 2025-08-13T19:57:12.014047174+00:00 stdout F 2025-08-13T19:57:12.014055814+00:00 stderr F [2025-08-13T19:57:12Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025258 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025316 22232 daemon.go:1680] Current+desired config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025328 22232 daemon.go:1695] state: Done 
2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025374 22232 update.go:2595] Running: rpm-ostree cleanup -r 2025-08-13T19:57:12.086057180+00:00 stdout F Deployments unchanged. 2025-08-13T19:57:12.096181789+00:00 stderr F I0813 19:57:12.096073 22232 daemon.go:2096] Validating against current config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:12.096716985+00:00 stderr F I0813 19:57:12.096505 22232 daemon.go:2008] SSH key location ("/home/core/.ssh/authorized_keys.d/ignition") up-to-date! 2025-08-13T19:57:12.367529387+00:00 stderr F W0813 19:57:12.367366 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T19:57:12.367529387+00:00 stderr F I0813 19:57:12.367431 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T19:57:12.438064461+00:00 stderr F I0813 19:57:12.437991 22232 update.go:2610] Validated on-disk state 2025-08-13T19:57:12.443242619+00:00 stderr F I0813 19:57:12.443192 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:22.467338806+00:00 stderr F I0813 19:57:22.467156 22232 update.go:2610] Update completed for config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 and node has been successfully uncordoned 2025-08-13T19:57:22.488642294+00:00 stderr F I0813 19:57:22.487546 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:22.505041773+00:00 stderr F I0813 19:57:22.504737 22232 config_drift_monitor.go:246] Config Drift Monitor started 2025-08-13T19:57:22.505041773+00:00 stderr F I0813 
19:57:22.504953 22232 daemon.go:735] Transitioned from state: -> Done 2025-08-13T19:58:11.115676832+00:00 stderr F I0813 19:58:11.115363 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 23037 2025-08-13T19:58:22.507690785+00:00 stderr F I0813 19:58:22.507325 22232 daemon.go:858] Starting health listener on 127.0.0.1:8798 2025-08-13T19:59:31.927707858+00:00 stderr F I0813 19:59:31.927502 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28133 2025-08-13T19:59:36.357925063+00:00 stderr F I0813 19:59:36.357581 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28190 2025-08-13T19:59:40.727962252+00:00 stderr F W0813 19:59:40.718365 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T19:59:40.727962252+00:00 stderr F I0813 19:59:40.718636 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T19:59:41.458408873+00:00 stderr F I0813 19:59:41.458316 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28276 2025-08-13T19:59:45.463438017+00:00 stderr F I0813 19:59:45.461866 22232 daemon.go:921] Preflight config drift check successful (took 5.102890289s) 2025-08-13T19:59:45.473484843+00:00 stderr F I0813 19:59:45.469582 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down 2025-08-13T19:59:46.475458464+00:00 stderr F I0813 19:59:46.475394 22232 update.go:2632] Adding SIGTERM protection 2025-08-13T19:59:46.535535217+00:00 stderr F I0813 19:59:46.535393 22232 update.go:1011] Checking Reconcilable for 
config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T19:59:47.009685103+00:00 stderr F I0813 19:59:47.009130 22232 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] } 2025-08-13T19:59:47.009685103+00:00 stderr F I0813 19:59:47.009225 22232 reconcile.go:151] SSH Keys reconcilable 2025-08-13T19:59:47.357071315+00:00 stderr F I0813 19:59:47.349732 22232 update.go:2610] Starting update from rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a: &{osUpdate:false kargs:false fips:false passwd:true files:false units:false kernelType:false extensions:false} 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928599 22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928671 22232 update.go:1135] Changes do not require drain, skipping. 
2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928751 22232 update.go:1824] Updating files
2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928761 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh"
2025-08-13T19:59:48.033316733+00:00 stderr F I0813 19:59:48.033244 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf"
2025-08-13T19:59:48.696336273+00:00 stderr F I0813 19:59:48.689565 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf"
2025-08-13T19:59:49.030039376+00:00 stderr F I0813 19:59:49.022894 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
2025-08-13T19:59:49.484090729+00:00 stderr F I0813 19:59:49.483886 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env"
2025-08-13T19:59:49.585269533+00:00 stderr F I0813 19:59:49.585015 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules"
2025-08-13T19:59:49.689499053+00:00 stderr F I0813 19:59:49.687103 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
2025-08-13T19:59:49.956253737+00:00 stderr F I0813 19:59:49.949356 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh"
2025-08-13T19:59:50.261551160+00:00 stderr F I0813 19:59:50.261371 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf"
2025-08-13T19:59:50.385114742+00:00 stderr F I0813 19:59:50.385051 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env"
2025-08-13T19:59:50.741647616+00:00 stderr F I0813 19:59:50.722828 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
2025-08-13T19:59:51.109315807+00:00 stderr F I0813 19:59:51.109245 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf"
2025-08-13T19:59:51.192529159+00:00 stderr F I0813 19:59:51.192456 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env"
2025-08-13T19:59:51.218742786+00:00 stderr F I0813 19:59:51.215769 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh"
2025-08-13T19:59:51.460395665+00:00 stderr F I0813 19:59:51.460311 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
2025-08-13T19:59:51.521491636+00:00 stderr F I0813 19:59:51.521209 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
2025-08-13T19:59:51.562914567+00:00 stderr F I0813 19:59:51.560100 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
2025-08-13T19:59:51.696166476+00:00 stderr F I0813 19:59:51.684283 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf"
2025-08-13T19:59:51.777634438+00:00 stderr F I0813 19:59:51.777471 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh"
2025-08-13T19:59:52.071225977+00:00 stderr F I0813 19:59:52.069309 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json"
2025-08-13T19:59:52.171214088+00:00 stderr F I0813 19:59:52.170641 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt"
2025-08-13T19:59:52.276324934+00:00 stderr F I0813 19:59:52.276183 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf"
2025-08-13T19:59:52.514677088+00:00 stderr F I0813 19:59:52.513265 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix"
2025-08-13T19:59:52.705937040+00:00 stderr F I0813 19:59:52.704355 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf"
2025-08-13T19:59:52.794668479+00:00 stderr F I0813 19:59:52.793886 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf"
2025-08-13T19:59:52.920257429+00:00 stderr F I0813 19:59:52.920192 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf"
2025-08-13T19:59:53.087749554+00:00 stderr F I0813 19:59:53.087687 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf"
2025-08-13T19:59:53.152336605+00:00 stderr F I0813 19:59:53.152274 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf"
2025-08-13T19:59:53.237315557+00:00 stderr F I0813 19:59:53.237175 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname"
2025-08-13T19:59:53.267968770+00:00 stderr F I0813 19:59:53.267902 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh"
2025-08-13T19:59:53.297734859+00:00 stderr F I0813 19:59:53.296534 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
2025-08-13T19:59:53.435545397+00:00 stderr F I0813 19:59:53.435193 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
2025-08-13T19:59:53.530363950+00:00 stderr F I0813 19:59:53.530307 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh"
2025-08-13T19:59:53.563872605+00:00 stderr F I0813 19:59:53.563294 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf"
2025-08-13T19:59:53.635688322+00:00 stderr F I0813 19:59:53.635414 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default"
2025-08-13T19:59:53.689104175+00:00 stderr F I0813 19:59:53.684418 22232 file_writers.go:233] Writing file "/etc/containers/policy.json"
2025-08-13T19:59:53.737416492+00:00 stderr F I0813 19:59:53.737245 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf"
2025-08-13T19:59:53.975887440+00:00 stderr F I0813 19:59:53.975753 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg"
2025-08-13T19:59:54.027015427+00:00 stderr F I0813 19:59:54.026269 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml"
2025-08-13T19:59:54.359685090+00:00 stderr F I0813 19:59:54.358593 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf"
2025-08-13T19:59:54.453024270+00:00 stderr F I0813 19:59:54.452862 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper"
2025-08-13T19:59:54.508676137+00:00 stderr F I0813 19:59:54.508492 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service"
2025-08-13T19:59:54.524150608+00:00 stderr F I0813 19:59:54.520664 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T19:59:54.552559448+00:00 stderr F I0813 19:59:54.552348 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T19:59:54.552559448+00:00 stderr F I0813 19:59:54.552424 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf"
2025-08-13T19:59:54.585310891+00:00 stderr F I0813 19:59:54.575814 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf"
2025-08-13T19:59:54.590633523+00:00 stderr F I0813 19:59:54.590146 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T19:59:57.032018376+00:00 stderr F I0813 19:59:57.026027 22232 update.go:2118] Preset systemd unit crio.service
2025-08-13T19:59:57.032018376+00:00 stderr F I0813 19:59:57.026074 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service"
2025-08-13T19:59:57.066021635+00:00 stderr F I0813 19:59:57.061688 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T19:59:57.207558150+00:00 stderr F I0813 19:59:57.201523 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
2025-08-13T19:59:57.207558150+00:00 stderr F )
2025-08-13T19:59:57.207558150+00:00 stderr F I0813 19:59:57.201627 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target"
2025-08-13T19:59:57.491331609+00:00 stderr F I0813 19:59:57.484415 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service"
2025-08-13T19:59:57.503493765+00:00 stderr F I0813 19:59:57.503151 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target"
2025-08-13T19:59:59.829564801+00:00 stderr F I0813 19:59:59.826592 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target
2025-08-13T19:59:59.829664244+00:00 stderr F I0813 19:59:59.829642 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T19:59:59.838453104+00:00 stderr F I0813 19:59:59.838391 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T19:59:59.838532937+00:00 stderr F I0813 19:59:59.838512 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T19:59:59.843176079+00:00 stderr F I0813 19:59:59.843066 22232 file_writers.go:293] Writing systemd unit "kubelet.service"
2025-08-13T19:59:59.857110866+00:00 stderr F I0813 19:59:59.857036 22232 file_writers.go:293] Writing systemd unit "kubens.service"
2025-08-13T19:59:59.871352322+00:00 stderr F I0813 19:59:59.870212 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service"
2025-08-13T19:59:59.894870933+00:00 stderr F I0813 19:59:59.894672 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service"
2025-08-13T19:59:59.910213540+00:00 stderr F I0813 19:59:59.909423 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service"
2025-08-13T19:59:59.917510698+00:00 stderr F I0813 19:59:59.917338 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service"
2025-08-13T19:59:59.931979601+00:00 stderr F I0813 19:59:59.930098 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service"
2025-08-13T19:59:59.933565056+00:00 stderr F I0813 19:59:59.933409 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
2025-08-13T20:00:02.559116822+00:00 stderr F I0813 20:00:02.552812 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service
2025-08-13T20:00:02.559116822+00:00 stderr F I0813 20:00:02.552900 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf"
2025-08-13T20:00:02.794046180+00:00 stderr F I0813 20:00:02.793485 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064320 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
2025-08-13T20:00:03.064705348+00:00 stderr F )
2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064467 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064495 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf"
2025-08-13T20:00:05.470235199+00:00 stderr F I0813 20:00:05.470067 22232 update.go:2118] Preset systemd unit rpm-ostreed.service
2025-08-13T20:00:05.470235199+00:00 stderr F I0813 20:00:05.470165 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service"
2025-08-13T20:00:05.512909075+00:00 stderr F I0813 20:00:05.511641 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:00:05.621250585+00:00 stderr F I0813 20:00:05.621138 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
2025-08-13T20:00:05.621250585+00:00 stderr F )
2025-08-13T20:00:05.621250585+00:00 stderr F I0813 20:00:05.621185 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service"
2025-08-13T20:00:05.630887549+00:00 stderr F I0813 20:00:05.630133 22232 file_writers.go:293] Writing systemd unit "dummy-network.service"
2025-08-13T20:00:07.534870020+00:00 stderr F I0813 20:00:07.525150 22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service]
2025-08-13T20:00:09.652570903+00:00 stderr F I0813 20:00:09.652173 22232 update.go:2107] Disabled systemd units [kubens.service]
2025-08-13T20:00:09.652648545+00:00 stderr F I0813 20:00:09.652607 22232 update.go:1887] Deleting stale data
2025-08-13T20:00:09.653243112+00:00 stderr F I0813 20:00:09.652960 22232 update.go:2293] updating the permission of the kubeconfig to: 0o600
2025-08-13T20:00:09.653243112+00:00 stderr F I0813 20:00:09.653074 22232 update.go:2316] updating SSH keys
2025-08-13T20:00:09.661636461+00:00 stderr F I0813 20:00:09.655549 22232 update.go:2217] Writing SSH keys to "/home/core/.ssh/authorized_keys.d/ignition"
2025-08-13T20:00:09.669466795+00:00 stderr F I0813 20:00:09.668341 22232 update.go:2259] Checking if absent users need to be disconfigured
2025-08-13T20:00:09.947245965+00:00 stderr F I0813 20:00:09.947112 22232 update.go:2284] Password has been configured
2025-08-13T20:00:10.034967376+00:00 stderr F I0813 20:00:10.029610 22232 update.go:2610] Node has Desired Config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, skipping reboot
2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081628 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful
2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081684 22232 daemon.go:1686] Current config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5
2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081695 22232 daemon.go:1687] Desired config: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081701 22232 daemon.go:1695] state: Done
2025-08-13T20:00:10.140755523+00:00 stderr F I0813 20:00:10.139823 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:00:20.784312371+00:00 stderr F I0813 20:00:20.784137 22232 update.go:2610] Update completed for config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a and node has been successfully uncordoned
2025-08-13T20:00:20.937038686+00:00 stderr F I0813 20:00:20.936759 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:00:21.054077223+00:00 stderr F I0813 20:00:21.053931 22232 config_drift_monitor.go:246] Config Drift Monitor started
2025-08-13T20:00:21.054077223+00:00 stderr F I0813 20:00:21.054062 22232 update.go:2640] Removing SIGTERM protection
2025-08-13T20:00:28.570968623+00:00 stderr F W0813 20:00:28.569716 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized
2025-08-13T20:00:28.570968623+00:00 stderr F I0813 20:00:28.569765 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs
2025-08-13T20:00:29.061416487+00:00 stderr F I0813 20:00:29.058613 22232 daemon.go:921] Preflight config drift check successful (took 838.048625ms)
2025-08-13T20:00:29.061416487+00:00 stderr F I0813 20:00:29.059008 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down
2025-08-13T20:00:29.153688818+00:00 stderr F I0813 20:00:29.153551 22232 update.go:2632] Adding SIGTERM protection
2025-08-13T20:00:29.590755131+00:00 stderr F I0813 20:00:29.584560 22232 update.go:1011] Checking Reconcilable for config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a to rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:00:29.828121039+00:00 stderr F I0813 20:00:29.826580 22232 update.go:2610] Starting update from rendered-master-ef556ead28ddfad01c34ac56c7adfb5a to rendered-master-11405dc064e9fc83a779a06d1cd665b3: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false extensions:false}
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858244 22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858329 22232 update.go:1135] Changes do not require drain, skipping.
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858389 22232 update.go:1824] Updating files
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858397 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh"
2025-08-13T20:00:29.928929103+00:00 stderr F I0813 20:00:29.927253 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf"
2025-08-13T20:00:29.956200881+00:00 stderr F I0813 20:00:29.953938 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf"
2025-08-13T20:00:29.986563087+00:00 stderr F I0813 20:00:29.986401 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
2025-08-13T20:00:30.003994594+00:00 stderr F I0813 20:00:30.002598 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env"
2025-08-13T20:00:30.058935210+00:00 stderr F I0813 20:00:30.058265 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules"
2025-08-13T20:00:30.138138529+00:00 stderr F I0813 20:00:30.137114 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
2025-08-13T20:00:30.180542928+00:00 stderr F I0813 20:00:30.180426 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh"
2025-08-13T20:00:30.205171880+00:00 stderr F I0813 20:00:30.205111 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf"
2025-08-13T20:00:30.320662783+00:00 stderr F I0813 20:00:30.318958 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env"
2025-08-13T20:00:30.446189862+00:00 stderr F I0813 20:00:30.446118 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
2025-08-13T20:00:30.712471235+00:00 stderr F I0813 20:00:30.712360 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf"
2025-08-13T20:00:30.749099069+00:00 stderr F I0813 20:00:30.749001 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env"
2025-08-13T20:00:30.793060323+00:00 stderr F I0813 20:00:30.792857 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh"
2025-08-13T20:00:30.822506863+00:00 stderr F I0813 20:00:30.822361 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
2025-08-13T20:00:30.855605566+00:00 stderr F I0813 20:00:30.855477 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
2025-08-13T20:00:30.893684752+00:00 stderr F I0813 20:00:30.893516 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
2025-08-13T20:00:30.930010118+00:00 stderr F I0813 20:00:30.929909 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf"
2025-08-13T20:00:30.949467253+00:00 stderr F I0813 20:00:30.949256 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh"
2025-08-13T20:00:30.975131595+00:00 stderr F I0813 20:00:30.975007 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json"
2025-08-13T20:00:31.026386506+00:00 stderr F I0813 20:00:31.023025 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt"
2025-08-13T20:00:31.054892399+00:00 stderr F I0813 20:00:31.053008 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf"
2025-08-13T20:00:31.236716744+00:00 stderr F I0813 20:00:31.236482 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix"
2025-08-13T20:00:31.287532863+00:00 stderr F I0813 20:00:31.287281 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf"
2025-08-13T20:00:31.367567625+00:00 stderr F I0813 20:00:31.367362 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf"
2025-08-13T20:00:31.458897719+00:00 stderr F I0813 20:00:31.458746 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf"
2025-08-13T20:00:31.519898518+00:00 stderr F I0813 20:00:31.518075 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf"
2025-08-13T20:00:31.640943030+00:00 stderr F I0813 20:00:31.639712 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf"
2025-08-13T20:00:31.995951702+00:00 stderr F I0813 20:00:31.995010 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname"
2025-08-13T20:00:32.941604476+00:00 stderr F I0813 20:00:32.941205 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh"
2025-08-13T20:00:33.099537179+00:00 stderr F I0813 20:00:33.098742 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
2025-08-13T20:00:33.259858511+00:00 stderr F I0813 20:00:33.259051 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
2025-08-13T20:00:33.358342259+00:00 stderr F I0813 20:00:33.357730 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh"
2025-08-13T20:00:33.429253221+00:00 stderr F I0813 20:00:33.421085 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf"
2025-08-13T20:00:33.718915470+00:00 stderr F I0813 20:00:33.714664 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default"
2025-08-13T20:00:33.950983087+00:00 stderr F I0813 20:00:33.949050 22232 file_writers.go:233] Writing file "/etc/containers/policy.json"
2025-08-13T20:00:34.153575824+00:00 stderr F I0813 20:00:34.152013 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf"
2025-08-13T20:00:34.337065366+00:00 stderr F I0813 20:00:34.335199 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg"
2025-08-13T20:00:34.391819897+00:00 stderr F I0813 20:00:34.391466 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml"
2025-08-13T20:00:34.773422499+00:00 stderr F I0813 20:00:34.769368 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf"
2025-08-13T20:00:34.913323098+00:00 stderr F I0813 20:00:34.912048 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper"
2025-08-13T20:00:34.991521507+00:00 stderr F I0813 20:00:34.989402 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service"
2025-08-13T20:00:35.019358491+00:00 stderr F I0813 20:00:35.019218 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:00:35.085055785+00:00 stderr F I0813 20:00:35.082897 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:35.085055785+00:00 stderr F I0813 20:00:35.083651 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf"
2025-08-13T20:00:35.103228133+00:00 stderr F I0813 20:00:35.101299 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf"
2025-08-13T20:00:35.157459499+00:00 stderr F I0813 20:00:35.157364 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:00:37.250278123+00:00 stderr F I0813 20:00:37.250022 22232 update.go:2118] Preset systemd unit crio.service
2025-08-13T20:00:37.250278123+00:00 stderr F I0813 20:00:37.250190 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service"
2025-08-13T20:00:37.253953048+00:00 stderr F I0813 20:00:37.253910 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:00:37.405302973+00:00 stderr F I0813 20:00:37.405192 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
2025-08-13T20:00:37.405302973+00:00 stderr F )
2025-08-13T20:00:37.405302973+00:00 stderr F I0813 20:00:37.405273 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target"
2025-08-13T20:00:37.420331222+00:00 stderr F I0813 20:00:37.417654 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service"
2025-08-13T20:00:37.423037109+00:00 stderr F I0813 20:00:37.422320 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target"
2025-08-13T20:00:39.911409521+00:00 stderr F I0813 20:00:39.911340 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target
2025-08-13T20:00:39.911522365+00:00 stderr F I0813 20:00:39.911506 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:00:39.919367808+00:00 stderr F I0813 20:00:39.919290 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:39.919452881+00:00 stderr F I0813 20:00:39.919431 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:00:39.927199332+00:00 stderr F I0813 20:00:39.927170 22232 file_writers.go:293] Writing systemd unit "kubelet.service"
2025-08-13T20:00:39.932952896+00:00 stderr F I0813 20:00:39.932926 22232 file_writers.go:293] Writing systemd unit "kubens.service"
2025-08-13T20:00:39.946758159+00:00 stderr F I0813 20:00:39.946698 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service"
2025-08-13T20:00:39.957654460+00:00 stderr F I0813 20:00:39.956581 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service"
2025-08-13T20:00:39.981065418+00:00 stderr F I0813 20:00:39.980511 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service"
2025-08-13T20:00:39.999995307+00:00 stderr F I0813 20:00:39.999741 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service"
2025-08-13T20:00:40.023938250+00:00 stderr F I0813 20:00:40.023650 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service"
2025-08-13T20:00:40.040043639+00:00 stderr F I0813 20:00:40.035400 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
2025-08-13T20:00:42.156462427+00:00 stderr F I0813 20:00:42.155441 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service
2025-08-13T20:00:42.156462427+00:00 stderr F I0813 20:00:42.155486 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf"
2025-08-13T20:00:42.566969882+00:00 stderr F I0813 20:00:42.561559 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670285 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
2025-08-13T20:00:42.671871473+00:00 stderr F )
2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670332 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670357 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf"
2025-08-13T20:00:46.439115612+00:00 stderr F I0813 20:00:46.435710 22232 update.go:2118] Preset systemd unit rpm-ostreed.service
2025-08-13T20:00:46.439115612+00:00 stderr F I0813 20:00:46.435881 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service"
2025-08-13T20:00:46.556598492+00:00 stderr F I0813 20:00:46.553101 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:00:46.749134532+00:00 stderr F I0813 20:00:46.744992 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
2025-08-13T20:00:46.749134532+00:00 stderr F )
2025-08-13T20:00:46.749134532+00:00 stderr F I0813 20:00:46.745047 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service"
2025-08-13T20:00:46.979914161+00:00 stderr F I0813 20:00:46.960227 22232 file_writers.go:293] Writing systemd unit "dummy-network.service"
2025-08-13T20:00:50.349518792+00:00 stderr F I0813 20:00:50.332944 22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service]
2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701092 22232 update.go:2107] Disabled systemd units [kubens.service]
2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701211 22232 update.go:1887] Deleting stale data
2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701287 22232 update.go:2293] updating the permission of the kubeconfig to: 0o600
2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701558 22232 update.go:2259] Checking if absent users need to be disconfigured
2025-08-13T20:01:04.903053940+00:00 stderr F I0813 20:01:04.902693 22232 update.go:2284] Password has been configured
2025-08-13T20:01:05.406232137+00:00 stderr F I0813 20:01:05.404157 22232 update.go:2610] Node has Desired Config rendered-master-11405dc064e9fc83a779a06d1cd665b3, skipping reboot
2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666377 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful
2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666613 22232 daemon.go:1686] Current config: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666626 22232 daemon.go:1687] Desired config: rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666638 22232 daemon.go:1695] state: Done
2025-08-13T20:01:05.722514235+00:00 stderr F I0813 20:01:05.721976 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:01:29.333812924+00:00 stderr F I0813 20:01:29.332455 22232 update.go:2610] Update completed for config rendered-master-11405dc064e9fc83a779a06d1cd665b3 and node has been successfully uncordoned
2025-08-13T20:01:31.494362400+00:00 stderr F I0813 20:01:31.488741 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:01:31.628122714+00:00 stderr F I0813 20:01:31.627395 22232 config_drift_monitor.go:246] Config Drift Monitor started
2025-08-13T20:01:31.628122714+00:00 stderr F I0813 20:01:31.627505 22232 update.go:2640] Removing SIGTERM protection
2025-08-13T20:05:15.716149813+00:00 stderr F I0813 20:05:15.715368 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28276
2025-08-13T20:06:07.027530456+00:00 stderr F I0813 20:06:07.027467 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 31988
2025-08-13T20:06:14.811137908+00:00 stderr F I0813 20:06:14.810074 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32019
2025-08-13T20:06:49.883441178+00:00 stderr F I0813 20:06:49.877623 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32205
2025-08-13T20:06:50.782933748+00:00 stderr F I0813 20:06:50.782658 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32210
2025-08-13T20:06:51.459098074+00:00 stderr F I0813 20:06:51.458152 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257
2025-08-13T20:09:06.622937465+00:00 stderr F I0813 20:09:06.622671 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257
2025-08-13T20:36:38.651731771+00:00 stderr F I0813 20:36:38.651488 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257
2025-08-13T20:42:13.301299505+00:00 stderr F I0813 20:42:13.300560 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37415
2025-08-13T20:42:14.048707112+00:00 stderr F I0813 20:42:14.026597 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37419
2025-08-13T20:42:16.625653205+00:00 stderr F I0813 20:42:16.625182 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37431
2025-08-13T20:42:23.469674098+00:00 stderr F I0813 20:42:23.469267 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs
2025-08-13T20:42:24.066938248+00:00 stderr F I0813 20:42:24.065572 22232 daemon.go:921] Preflight config drift check successful (took 1.190182413s)
2025-08-13T20:42:24.072380705+00:00 stderr F I0813 20:42:24.072300 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down
2025-08-13T20:42:24.105901431+00:00 stderr F I0813 20:42:24.104044 22232 update.go:2632] Adding SIGTERM protection
2025-08-13T20:42:24.149125037+00:00 stderr F I0813 20:42:24.149020 22232 update.go:1011] Checking Reconcilable for config rendered-master-11405dc064e9fc83a779a06d1cd665b3 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:42:24.198179062+00:00 stderr F I0813 20:42:24.198116 22232 update.go:2610] Starting update from rendered-master-11405dc064e9fc83a779a06d1cd665b3 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false extensions:false}
2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214755   22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes
2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214848   22232 update.go:1135] Changes do not require drain, skipping.
2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214909   22232 update.go:1824] Updating files
2025-08-13T20:42:24.214956025+00:00 stderr F I0813 20:42:24.214916   22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh"
2025-08-13T20:42:24.252122577+00:00 stderr F I0813 20:42:24.251988   22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf"
2025-08-13T20:42:24.263958918+00:00 stderr F I0813 20:42:24.263917   22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf"
2025-08-13T20:42:24.277697514+00:00 stderr F I0813 20:42:24.277629   22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
2025-08-13T20:42:24.290403600+00:00 stderr F I0813 20:42:24.290214   22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env"
2025-08-13T20:42:24.301490240+00:00 stderr F I0813 20:42:24.301378   22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules"
2025-08-13T20:42:24.311638183+00:00 stderr F I0813 20:42:24.311569   22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
2025-08-13T20:42:24.322919988+00:00 stderr F I0813 20:42:24.322846   22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh"
2025-08-13T20:42:24.336051967+00:00 stderr F I0813 20:42:24.335914   22232 file_writers.go:233] Writing file "/etc/containers/storage.conf"
2025-08-13T20:42:24.361308765+00:00 stderr F I0813 20:42:24.361173   22232 file_writers.go:233] Writing file "/etc/mco/proxy.env"
2025-08-13T20:42:24.382915808+00:00 stderr F I0813 20:42:24.382510   22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
2025-08-13T20:42:24.404915332+00:00 stderr F I0813 20:42:24.403312   22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf"
2025-08-13T20:42:24.431580261+00:00 stderr F I0813 20:42:24.430042   22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env"
2025-08-13T20:42:24.449065225+00:00 stderr F I0813 20:42:24.449000   22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh"
2025-08-13T20:42:24.467327221+00:00 stderr F I0813 20:42:24.467204   22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
2025-08-13T20:42:24.479612755+00:00 stderr F I0813 20:42:24.479511   22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
2025-08-13T20:42:24.497209363+00:00 stderr F I0813 20:42:24.494684   22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
2025-08-13T20:42:24.511488174+00:00 stderr F I0813 20:42:24.511369   22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf"
2025-08-13T20:42:24.525574921+00:00 stderr F I0813 20:42:24.525487   22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh"
2025-08-13T20:42:24.542909090+00:00 stderr F I0813 20:42:24.542836   22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json"
2025-08-13T20:42:24.555617787+00:00 stderr F I0813 20:42:24.555531   22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt"
2025-08-13T20:42:24.571710431+00:00 stderr F I0813 20:42:24.571625   22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf"
2025-08-13T20:42:24.597107223+00:00 stderr F I0813 20:42:24.596981   22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix"
2025-08-13T20:42:24.612088595+00:00 stderr F I0813 20:42:24.611990   22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf"
2025-08-13T20:42:24.626033247+00:00 stderr F I0813 20:42:24.625139   22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf"
2025-08-13T20:42:24.640163794+00:00 stderr F I0813 20:42:24.640072   22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf"
2025-08-13T20:42:24.654876238+00:00 stderr F I0813 20:42:24.654702   22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf"
2025-08-13T20:42:24.678018696+00:00 stderr F I0813 20:42:24.677958   22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf"
2025-08-13T20:42:24.691753702+00:00 stderr F I0813 20:42:24.691701   22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname"
2025-08-13T20:42:24.710099500+00:00 stderr F I0813 20:42:24.709966   22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh"
2025-08-13T20:42:24.722011874+00:00 stderr F I0813 20:42:24.721912   22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
2025-08-13T20:42:24.745032968+00:00 stderr F I0813 20:42:24.744917   22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
2025-08-13T20:42:24.755827509+00:00 stderr F I0813 20:42:24.755676   22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh"
2025-08-13T20:42:24.770988076+00:00 stderr F I0813 20:42:24.770880   22232 file_writers.go:233] Writing file "/etc/containers/registries.conf"
2025-08-13T20:42:24.790473008+00:00 stderr F I0813 20:42:24.789876   22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default"
2025-08-13T20:42:24.807441977+00:00 stderr F I0813 20:42:24.807336   22232 file_writers.go:233] Writing file "/etc/containers/policy.json"
2025-08-13T20:42:24.844291519+00:00 stderr F I0813 20:42:24.844124   22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf"
2025-08-13T20:42:24.855415970+00:00 stderr F I0813 20:42:24.855268   22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg"
2025-08-13T20:42:24.866124419+00:00 stderr F I0813 20:42:24.866022   22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml"
2025-08-13T20:42:24.881380209+00:00 stderr F I0813 20:42:24.879962   22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf"
2025-08-13T20:42:24.894569969+00:00 stderr F I0813 20:42:24.894399   22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper"
2025-08-13T20:42:24.908705326+00:00 stderr F I0813 20:42:24.908483   22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service"
2025-08-13T20:42:24.911642411+00:00 stderr F I0813 20:42:24.911520   22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:42:24.917033356+00:00 stderr F I0813 20:42:24.916871   22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:24.917033356+00:00 stderr F I0813 20:42:24.916973   22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf"
2025-08-13T20:42:24.919952361+00:00 stderr F I0813 20:42:24.919753   22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf"
2025-08-13T20:42:24.924153292+00:00 stderr F I0813 20:42:24.923975   22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:42:26.791141357+00:00 stderr F I0813 20:42:26.791027   22232 update.go:2118] Preset systemd unit crio.service
2025-08-13T20:42:26.791141357+00:00 stderr F I0813 20:42:26.791080   22232 file_writers.go:293] Writing systemd unit "disable-mglru.service"
2025-08-13T20:42:26.794389151+00:00 stderr F I0813 20:42:26.794326   22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:42:26.850068746+00:00 stderr F I0813 20:42:26.849861   22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
2025-08-13T20:42:26.850068746+00:00 stderr F )
2025-08-13T20:42:26.850068746+00:00 stderr F I0813 20:42:26.849932   22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target"
2025-08-13T20:42:26.853442923+00:00 stderr F I0813 20:42:26.853333   22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service"
2025-08-13T20:42:26.859037604+00:00 stderr F I0813 20:42:26.858955   22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target"
2025-08-13T20:42:28.576991194+00:00 stderr F I0813 20:42:28.575017   22232 update.go:2118] Preset systemd unit kubelet-dependencies.target
2025-08-13T20:42:28.576991194+00:00 stderr F I0813 20:42:28.575063   22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:42:28.580716541+00:00 stderr F I0813 20:42:28.580693   22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:28.580854265+00:00 stderr F I0813 20:42:28.580751   22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:42:28.584480740+00:00 stderr F I0813 20:42:28.584458   22232 file_writers.go:293] Writing systemd unit "kubelet.service"
2025-08-13T20:42:28.593538641+00:00 stderr F I0813 20:42:28.593516   22232 file_writers.go:293] Writing systemd unit "kubens.service"
2025-08-13T20:42:28.601360256+00:00 stderr F I0813 20:42:28.599059   22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service"
2025-08-13T20:42:28.602883280+00:00 stderr F I0813 20:42:28.602706   22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service"
2025-08-13T20:42:28.611321354+00:00 stderr F I0813 20:42:28.611279   22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service"
2025-08-13T20:42:28.614991489+00:00 stderr F I0813 20:42:28.614967   22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service"
2025-08-13T20:42:28.618508081+00:00 stderr F I0813 20:42:28.618438   22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service"
2025-08-13T20:42:28.621332872+00:00 stderr F I0813 20:42:28.621310   22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
2025-08-13T20:42:30.087391678+00:00 stderr F I0813 20:42:30.086209   22232 update.go:2118] Preset systemd unit ovs-vswitchd.service
2025-08-13T20:42:30.087510012+00:00 stderr F I0813 20:42:30.087491   22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf"
2025-08-13T20:42:30.093930477+00:00 stderr F I0813 20:42:30.093554   22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.114339   22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
2025-08-13T20:42:30.115671134+00:00 stderr F )
2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.115519   22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.115562   22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf"
2025-08-13T20:42:31.509335524+00:00 stderr F I0813 20:42:31.509270   22232 update.go:2118] Preset systemd unit rpm-ostreed.service
2025-08-13T20:42:31.509424616+00:00 stderr F I0813 20:42:31.509409   22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service"
2025-08-13T20:42:31.513537035+00:00 stderr F I0813 20:42:31.513011   22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:42:31.546347121+00:00 stderr F I0813 20:42:31.546288   22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
2025-08-13T20:42:31.546347121+00:00 stderr F )
2025-08-13T20:42:31.546415863+00:00 stderr F I0813 20:42:31.546400   22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service"
2025-08-13T20:42:31.552157308+00:00 stderr F I0813 20:42:31.552085   22232 file_writers.go:293] Writing systemd unit "dummy-network.service"
2025-08-13T20:42:33.062583914+00:00 stderr F I0813 20:42:33.061640   22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service]
2025-08-13T20:42:34.628042927+00:00 stderr F I0813 20:42:34.627976   22232 update.go:2107] Disabled systemd units [kubens.service]
2025-08-13T20:42:34.628143799+00:00 stderr F I0813 20:42:34.628129   22232 update.go:1887] Deleting stale data
2025-08-13T20:42:34.628275183+00:00 stderr F I0813 20:42:34.628255   22232 update.go:2293] updating the permission of the kubeconfig to: 0o600
2025-08-13T20:42:34.629196670+00:00 stderr F I0813 20:42:34.629177   22232 update.go:2259] Checking if absent users need to be disconfigured
2025-08-13T20:42:34.902454208+00:00 stderr F I0813 20:42:34.901737   22232 update.go:2284] Password has been configured
2025-08-13T20:42:34.910419238+00:00 stderr F I0813 20:42:34.910383   22232 update.go:2610] Node has Desired Config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, skipping reboot
2025-08-13T20:42:34.974693461+00:00 stderr F I0813 20:42:34.974482   22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful
2025-08-13T20:42:34.984181794+00:00 stderr F I0813 20:42:34.984085   22232 update.go:2259] Checking if absent users need to be disconfigured
2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064034   22232 update.go:2284] Password has been configured
2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064092   22232 update.go:1824] Updating files
2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064105   22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh"
2025-08-13T20:42:35.080327306+00:00 stderr F I0813 20:42:35.080203   22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf"
2025-08-13T20:42:35.092598230+00:00 stderr F I0813 20:42:35.092489   22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf"
2025-08-13T20:42:35.103657929+00:00 stderr F I0813 20:42:35.103567   22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
2025-08-13T20:42:35.120481844+00:00 stderr F I0813 20:42:35.120434   22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env"
2025-08-13T20:42:35.135935139+00:00 stderr F I0813 20:42:35.135849   22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules"
2025-08-13T20:42:35.149484510+00:00 stderr F I0813 20:42:35.149389   22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
2025-08-13T20:42:35.161465315+00:00 stderr F I0813 20:42:35.161379   22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh"
2025-08-13T20:42:35.182567244+00:00 stderr F I0813 20:42:35.182459   22232 file_writers.go:233] Writing file "/etc/containers/storage.conf"
2025-08-13T20:42:35.199287336+00:00 stderr F I0813 20:42:35.199095   22232 file_writers.go:233] Writing file "/etc/mco/proxy.env"
2025-08-13T20:42:35.213657570+00:00 stderr F I0813 20:42:35.213564   22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
2025-08-13T20:42:35.225871972+00:00 stderr F I0813 20:42:35.225709   22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf"
2025-08-13T20:42:35.250539514+00:00 stderr F I0813 20:42:35.250122   22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env"
2025-08-13T20:42:35.273525386+00:00 stderr F I0813 20:42:35.271163   22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh"
2025-08-13T20:42:35.301671208+00:00 stderr F I0813 20:42:35.301016   22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
2025-08-13T20:42:35.328518312+00:00 stderr F I0813 20:42:35.328357   22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
2025-08-13T20:42:35.352840333+00:00 stderr F I0813 20:42:35.350893   22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
2025-08-13T20:42:35.373667023+00:00 stderr F I0813 20:42:35.371918   22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf"
2025-08-13T20:42:35.404121861+00:00 stderr F I0813 20:42:35.403939   22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh"
2025-08-13T20:42:35.436752152+00:00 stderr F I0813 20:42:35.436632   22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json"
2025-08-13T20:42:35.453877856+00:00 stderr F I0813 20:42:35.453638   22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt"
2025-08-13T20:42:35.468862658+00:00 stderr F I0813 20:42:35.468286   22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf"
2025-08-13T20:42:35.491322875+00:00 stderr F I0813 20:42:35.490614   22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix"
2025-08-13T20:42:35.504189526+00:00 stderr F I0813 20:42:35.504055   22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf"
2025-08-13T20:42:35.520377993+00:00 stderr F I0813 20:42:35.518283   22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf"
2025-08-13T20:42:35.539960518+00:00 stderr F I0813 20:42:35.538639   22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf"
2025-08-13T20:42:35.556549106+00:00 stderr F I0813 20:42:35.556450   22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf"
2025-08-13T20:42:35.568008026+00:00 stderr F I0813 20:42:35.567730   22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf"
2025-08-13T20:42:35.577947273+00:00 stderr F I0813 20:42:35.577848   22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname"
2025-08-13T20:42:35.588734984+00:00 stderr F I0813 20:42:35.588631   22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh"
2025-08-13T20:42:35.601888393+00:00 stderr F I0813 20:42:35.601827   22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
2025-08-13T20:42:35.615974709+00:00 stderr F I0813 20:42:35.615912   22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
2025-08-13T20:42:35.629195690+00:00 stderr F I0813 20:42:35.629134   22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh"
2025-08-13T20:42:35.651689649+00:00 stderr F I0813 20:42:35.651629   22232 file_writers.go:233] Writing file "/etc/containers/registries.conf"
2025-08-13T20:42:35.669093641+00:00 stderr F I0813 20:42:35.669030   22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default"
2025-08-13T20:42:35.682464926+00:00 stderr F I0813 20:42:35.682366   22232 file_writers.go:233] Writing file "/etc/containers/policy.json"
2025-08-13T20:42:35.696846231+00:00 stderr F I0813 20:42:35.696688   22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf"
2025-08-13T20:42:35.709859166+00:00 stderr F I0813 20:42:35.708958   22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg"
2025-08-13T20:42:35.721989996+00:00 stderr F I0813 20:42:35.721926   22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml"
2025-08-13T20:42:35.736007530+00:00 stderr F I0813 20:42:35.735745   22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf"
2025-08-13T20:42:35.753092332+00:00 stderr F I0813 20:42:35.752857   22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper"
2025-08-13T20:42:35.780277376+00:00 stderr F I0813 20:42:35.779309   22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service"
2025-08-13T20:42:35.783961322+00:00 stderr F I0813 20:42:35.783555   22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:42:35.789884023+00:00 stderr F I0813 20:42:35.788276   22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:35.789884023+00:00 stderr F I0813 20:42:35.788335   22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf"
2025-08-13T20:42:35.791366746+00:00 stderr F I0813 20:42:35.791309   22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf"
2025-08-13T20:42:35.794091574+00:00 stderr F I0813 20:42:35.794017   22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:42:37.702915466+00:00 stderr F I0813 20:42:37.702382   22232 update.go:2118] Preset systemd unit crio.service
2025-08-13T20:42:37.702915466+00:00 stderr F I0813 20:42:37.702519   22232 file_writers.go:293] Writing systemd unit "disable-mglru.service"
2025-08-13T20:42:37.707136597+00:00 stderr F I0813 20:42:37.706349   22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:42:37.893965314+00:00 stderr F I0813 20:42:37.893585   22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
2025-08-13T20:42:37.893965314+00:00 stderr F )
2025-08-13T20:42:37.893965314+00:00 stderr F I0813 20:42:37.893629   22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target"
2025-08-13T20:42:37.897539277+00:00 stderr F I0813 20:42:37.897475   22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service"
2025-08-13T20:42:37.901289415+00:00 stderr F I0813 20:42:37.901217   22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target"
2025-08-13T20:42:39.316065013+00:00 stderr F I0813 20:42:39.315961   22232 update.go:2118] Preset systemd unit kubelet-dependencies.target
2025-08-13T20:42:39.316114875+00:00 stderr F I0813 20:42:39.316068   22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:42:39.321291884+00:00 stderr F I0813 20:42:39.320616   22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:39.321291884+00:00 stderr F I0813 20:42:39.320721   22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:42:39.324873447+00:00 stderr F I0813 20:42:39.323944   22232 file_writers.go:293] Writing systemd unit "kubelet.service"
2025-08-13T20:42:39.327420141+00:00 stderr F I0813 20:42:39.327371   22232 file_writers.go:293] Writing systemd unit "kubens.service"
2025-08-13T20:42:39.330551491+00:00 stderr F I0813 20:42:39.330511   22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service"
2025-08-13T20:42:39.334212687+00:00 stderr F I0813 20:42:39.334104   22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service"
2025-08-13T20:42:39.337603454+00:00 stderr F I0813 20:42:39.337539   22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service"
2025-08-13T20:42:39.339667454+00:00 stderr F I0813 20:42:39.339601   22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service"
2025-08-13T20:42:39.341844787+00:00 stderr F I0813 20:42:39.341732   22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service"
2025-08-13T20:42:39.344166914+00:00 stderr F I0813 20:42:39.344100   22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
2025-08-13T20:42:41.135897899+00:00 stderr F I0813 20:42:41.135709   22232 update.go:2118] Preset systemd unit ovs-vswitchd.service
2025-08-13T20:42:41.135897899+00:00 stderr F I0813 20:42:41.135751   22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf"
2025-08-13T20:42:41.139032159+00:00 stderr F I0813 20:42:41.138978   22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:41.405824051+00:00 stderr F W0813 20:42:41.405685   22232 daemon.go:1366] Got an error from auxiliary tools: kubelet health check has failed 1 times: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused
2025-08-13T20:42:42.159850680+00:00 stderr F I0813 20:42:42.159719   22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
2025-08-13T20:42:42.159850680+00:00 stderr F )
2025-08-13T20:42:42.159850680+00:00 stderr F I0813 20:42:42.159768   22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:42.159983064+00:00 stderr F I0813 20:42:42.159942   22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf"
2025-08-13T20:42:43.488849285+00:00 stderr F I0813 20:42:43.488711   22232 update.go:2118] Preset systemd unit rpm-ostreed.service
2025-08-13T20:42:43.488849285+00:00 stderr F I0813 20:42:43.488755   22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service"
2025-08-13T20:42:43.492146470+00:00 stderr F I0813 20:42:43.492068   22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:42:44.411025122+00:00 stderr F I0813 20:42:44.410920   22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
2025-08-13T20:42:44.411025122+00:00 stderr F )
2025-08-13T20:42:44.411025122+00:00 stderr F I0813 20:42:44.410979   22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service"
2025-08-13T20:42:44.415625774+00:00 stderr F I0813 20:42:44.415545   22232 file_writers.go:293] Writing systemd unit "dummy-network.service"
2025-08-13T20:42:44.951528605+00:00 stderr F I0813 20:42:44.951462   22232 daemon.go:1302] Got SIGTERM, but actively updating
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log
2025-08-13T19:54:10.556685779+00:00 stderr F I0813 19:54:10.556542   19129 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:54:10.557371878+00:00 stderr F I0813 19:54:10.557253   19129 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets
2025-08-13T19:54:10.560882359+00:00 stderr F I0813 19:54:10.560852   19129 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin
2025-08-13T19:54:10.564159432+00:00 stderr F I0813 19:54:10.564115   19129 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9
2025-08-13T19:54:10.671602010+00:00 stderr F I0813 19:54:10.671421   19129 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon
2025-08-13T19:54:10.760550110+00:00 stderr F I0813 19:54:10.759528   19129 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:54:10.761339942+00:00 stderr F E0813 19:54:10.761303   19129 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret
2025-08-13T19:54:10.761511917+00:00 stderr F I0813 19:54:10.761490   19129 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json
2025-08-13T19:54:10.973048117+00:00 stderr F I0813 19:54:10.972984   19129 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24
2025-08-13T19:54:10.975268441+00:00 stderr F I0813 19:54:10.975225   19129 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443
2025-08-13T19:54:10.976896327+00:00 stderr F I0813 19:54:10.976617   19129 metrics.go:100] Registering Prometheus metrics
2025-08-13T19:54:10.978138493+00:00 stderr F I0813 19:54:10.978067   19129 metrics.go:107] Starting metrics listener on 127.0.0.1:8797
2025-08-13T19:54:11.001263983+00:00 stderr F I0813 19:54:11.001140   19129 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:54:11.004521666+00:00 stderr F I0813 19:54:11.004417   19129 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig
2025-08-13T19:54:11.016089966+00:00 stderr F I0813 19:54:11.015988   19129 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:54:11.017386113+00:00 stderr F I0813 19:54:11.016256   19129 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:54:11.018157775+00:00 stderr F I0813 19:54:11.018022   19129 update.go:2610] Starting to manage node: crc
2025-08-13T19:54:11.022896291+00:00 stderr F I0813 19:54:11.022232   19129 rpm-ostree.go:308] Running captured: rpm-ostree status
2025-08-13T19:54:11.074973758+00:00 stderr F I0813 19:54:11.074899   19129 daemon.go:1727] State: idle
2025-08-13T19:54:11.074973758+00:00 stderr F Deployments:
2025-08-13T19:54:11.074973758+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883
2025-08-13T19:54:11.074973758+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883
2025-08-13T19:54:11.074973758+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z)
2025-08-13T19:54:11.074973758+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-08-13T19:54:11.074973758+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.075414900+00:00 stderr F I0813 19:54:11.075378 19129 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-08-13T19:54:11.075414900+00:00 stderr F { 2025-08-13T19:54:11.075414900+00:00 stderr F "container-image": { 2025-08-13T19:54:11.075414900+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-08-13T19:54:11.075414900+00:00 stderr F "image-labels": { 2025-08-13T19:54:11.075414900+00:00 stderr F "containers.bootc": "1", 2025-08-13T19:54:11.075414900+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-08-13T19:54:11.075414900+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-08-13T19:54:11.075414900+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-08-13T19:54:11.075414900+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.bootable": "true", 
2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-08-13T19:54:11.075414900+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-08-13T19:54:11.075414900+00:00 stderr F }, 2025-08-13T19:54:11.075414900+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-08-13T19:54:11.075414900+00:00 stderr F }, 2025-08-13T19:54:11.075414900+00:00 stderr F "osbuild-version": "114", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:54:11.075414900+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-08-13T19:54:11.075414900+00:00 stderr F "version": "416.94.202405291527-0" 2025-08-13T19:54:11.075414900+00:00 stderr F } 2025-08-13T19:54:11.075551544+00:00 stderr F I0813 19:54:11.075531 19129 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-08-13T19:54:11.075591915+00:00 stderr F I0813 19:54:11.075578 19129 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-08-13T19:54:11.085158798+00:00 stderr F I0813 19:54:11.085050 19129 daemon.go:1736] journalctl --list-boots: 2025-08-13T19:54:11.085158798+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2025-08-13T19:54:11.085158798+00:00 stderr F -2 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F -1 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F 0 
7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 19:54:11 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F I0813 19:54:11.085107 19129 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096023 19129 daemon.go:1751] systemd service state: OK 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096062 19129 daemon.go:1327] Starting MachineConfigDaemon 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096159 19129 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-08-13T19:54:12.018003324+00:00 stderr F I0813 19:54:12.017747 19129 daemon.go:647] Node crc is part of the control plane 2025-08-13T19:54:12.039676423+00:00 stderr F I0813 19:54:12.039574 19129 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs7331210 --cleanup 2025-08-13T19:54:12.044998325+00:00 stderr F [2025-08-13T19:54:12Z INFO nmstatectl] Nmstate version: 2.2.29 2025-08-13T19:54:12.045105768+00:00 stdout F 2025-08-13T19:54:12.045116079+00:00 stderr F [2025-08-13T19:54:12Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053225 19129 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053264 19129 daemon.go:1680] Current+desired config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053273 19129 daemon.go:1695] state: Done 2025-08-13T19:54:12.053318993+00:00 stderr F I0813 19:54:12.053307 19129 update.go:2595] Running: rpm-ostree cleanup -r 2025-08-13T19:54:12.115456337+00:00 stdout F Deployments unchanged. 
2025-08-13T19:54:12.126749860+00:00 stderr F I0813 19:54:12.126665 19129 daemon.go:2096] Validating against current config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:54:12.127261514+00:00 stderr F I0813 19:54:12.127202 19129 daemon.go:2008] SSH key location ("/home/core/.ssh/authorized_keys.d/ignition") up-to-date! 2025-08-13T19:54:12.425502300+00:00 stderr F W0813 19:54:12.425404 19129 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T19:54:12.425502300+00:00 stderr F I0813 19:54:12.425443 19129 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T19:54:12.499133842+00:00 stderr F I0813 19:54:12.498978 19129 update.go:2610] Validated on-disk state 2025-08-13T19:54:12.504581478+00:00 stderr F I0813 19:54:12.504490 19129 daemon.go:2198] Completing update to target MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:54:22.534493943+00:00 stderr F I0813 19:54:22.534332 19129 update.go:2610] Update completed for config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 and node has been successfully uncordoned 2025-08-13T19:54:22.551916871+00:00 stderr F I0813 19:54:22.551858 19129 daemon.go:2223] In desired state MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:54:22.567300330+00:00 stderr F I0813 19:54:22.567229 19129 config_drift_monitor.go:246] Config Drift Monitor started 2025-08-13T19:55:11.143020151+00:00 stderr F I0813 19:55:11.142956 19129 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 23037 2025-08-13T19:57:10.161621328+00:00 stderr F I0813 19:57:10.160577 19129 
daemon.go:1363] Shutting down MachineConfigDaemon home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log 2026-01-20T10:54:17.668380593+00:00 stderr F I0120 10:54:17.667986 24340 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2026-01-20T10:54:17.668380593+00:00 stderr F I0120 10:54:17.668273 24340 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2026-01-20T10:54:17.673343306+00:00 stderr F I0120 10:54:17.673255 24340 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2026-01-20T10:54:17.677028654+00:00 stderr F I0120 10:54:17.676969 24340 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2026-01-20T10:54:17.780706727+00:00 stderr F I0120 10:54:17.780621 24340 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2026-01-20T10:54:17.838114918+00:00 stderr F I0120 10:54:17.837988 24340 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2026-01-20T10:54:17.842592037+00:00 stderr F E0120 10:54:17.838434 24340 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret 2026-01-20T10:54:17.842592037+00:00 stderr F I0120 10:54:17.838617 24340 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2026-01-20T10:54:18.081815105+00:00 stderr F I0120 10:54:18.080576 24340 daemon.go:317] Booted osImageURL: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2026-01-20T10:54:18.083467708+00:00 stderr F I0120 10:54:18.083374 24340 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2026-01-20T10:54:18.084687091+00:00 stderr F I0120 10:54:18.084630 24340 metrics.go:100] Registering Prometheus metrics 2026-01-20T10:54:18.084761963+00:00 stderr F I0120 10:54:18.084723 24340 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2026-01-20T10:54:18.137963871+00:00 stderr F I0120 10:54:18.137844 24340 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:54:18.143038457+00:00 stderr F I0120 10:54:18.139374 24340 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2026-01-20T10:54:18.147821373+00:00 stderr F I0120 10:54:18.146156 24340 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages 
MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:54:18.147821373+00:00 stderr F I0120 10:54:18.146224 24340 update.go:2610] Starting to manage node: crc 2026-01-20T10:54:18.151108401+00:00 stderr F I0120 10:54:18.150973 24340 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", 
"DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:54:18.154240755+00:00 stderr F I0120 10:54:18.154144 24340 rpm-ostree.go:308] Running captured: rpm-ostree status 2026-01-20T10:54:18.198002431+00:00 stderr F I0120 10:54:18.197902 24340 daemon.go:1727] State: idle 2026-01-20T10:54:18.198002431+00:00 stderr F Deployments: 2026-01-20T10:54:18.198002431+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2026-01-20T10:54:18.198002431+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2026-01-20T10:54:18.198002431+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2026-01-20T10:54:18.198002431+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2026-01-20T10:54:18.198002431+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2026-01-20T10:54:18.198002431+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 
2026-01-20T10:54:18.198002431+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2026-01-20T10:54:18.198002431+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2026-01-20T10:54:18.198365731+00:00 stderr F I0120 10:54:18.198306 24340 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2026-01-20T10:54:18.198365731+00:00 stderr F { 2026-01-20T10:54:18.198365731+00:00 stderr F "container-image": { 2026-01-20T10:54:18.198365731+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2026-01-20T10:54:18.198365731+00:00 stderr F "image-labels": { 2026-01-20T10:54:18.198365731+00:00 stderr F "containers.bootc": "1", 2026-01-20T10:54:18.198365731+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2026-01-20T10:54:18.198365731+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2026-01-20T10:54:18.198365731+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2026-01-20T10:54:18.198365731+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2026-01-20T10:54:18.198365731+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2026-01-20T10:54:18.198365731+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2026-01-20T10:54:18.198365731+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2026-01-20T10:54:18.198365731+00:00 stderr F "ostree.bootable": "true", 2026-01-20T10:54:18.198365731+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2026-01-20T10:54:18.198365731+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2026-01-20T10:54:18.198365731+00:00 stderr F 
"ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2026-01-20T10:54:18.198365731+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2026-01-20T10:54:18.198365731+00:00 stderr F }, 2026-01-20T10:54:18.198365731+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2026-01-20T10:54:18.198365731+00:00 stderr F }, 2026-01-20T10:54:18.198365731+00:00 stderr F "osbuild-version": "114", 2026-01-20T10:54:18.198365731+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2026-01-20T10:54:18.198365731+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2026-01-20T10:54:18.198365731+00:00 stderr F "version": "416.94.202405291527-0" 2026-01-20T10:54:18.198365731+00:00 stderr F } 2026-01-20T10:54:18.198413822+00:00 stderr F I0120 10:54:18.198396 24340 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2026-01-20T10:54:18.198413822+00:00 stderr F I0120 10:54:18.198405 24340 rpm-ostree.go:308] Running captured: journalctl --list-boots 2026-01-20T10:54:18.206904718+00:00 stderr F I0120 10:54:18.206848 24340 daemon.go:1736] journalctl --list-boots: 2026-01-20T10:54:18.206904718+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2026-01-20T10:54:18.206904718+00:00 stderr F -4 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2026-01-20T10:54:18.206904718+00:00 stderr F -3 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2026-01-20T10:54:18.206904718+00:00 stderr F -2 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 20:42:52 UTC 2026-01-20T10:54:18.206904718+00:00 stderr F -1 0f41a1dcc88d4821a5e7f7104b70ba6f Tue 2026-01-20 10:41:40 UTC Tue 2026-01-20 10:46:28 UTC 2026-01-20T10:54:18.206904718+00:00 stderr F 0 b3bdf42313404a9d8f31b9e9c5c3e8af Tue 2026-01-20 10:46:34 
UTC Tue 2026-01-20 10:54:18 UTC 2026-01-20T10:54:18.206904718+00:00 stderr F I0120 10:54:18.206868 24340 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2026-01-20T10:54:18.218219490+00:00 stderr F I0120 10:54:18.218161 24340 daemon.go:1751] systemd service state: OK 2026-01-20T10:54:18.218219490+00:00 stderr F I0120 10:54:18.218190 24340 daemon.go:1327] Starting MachineConfigDaemon 2026-01-20T10:54:18.218273981+00:00 stderr F I0120 10:54:18.218256 24340 daemon.go:1334] Enabling Kubelet Healthz Monitor 2026-01-20T10:54:19.151452687+00:00 stderr F I0120 10:54:19.151362 24340 daemon.go:647] Node crc is part of the control plane 2026-01-20T10:54:19.203242378+00:00 stderr F I0120 10:54:19.203159 24340 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1208802695 --cleanup 2026-01-20T10:54:19.205789306+00:00 stderr F [2026-01-20T10:54:19Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:54:19.205865158+00:00 stdout F 2026-01-20T10:54:19.205874878+00:00 stderr F [2026-01-20T10:54:19Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:54:19.215012132+00:00 stderr F I0120 10:54:19.214950 24340 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:54:19.215041593+00:00 stderr F E0120 10:54:19.215018 24340 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:54:19.215041593+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:20.544564245+00:00 stderr F I0120 10:54:20.544257 24340 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 41880 2026-01-20T10:54:21.226130333+00:00 stderr F I0120 10:54:21.225975 24340 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl 
persist-nic-names --root / --kargs-out /tmp/nmstate-kargs2991607601 --cleanup 2026-01-20T10:54:21.227973672+00:00 stderr F [2026-01-20T10:54:21Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:54:21.227998993+00:00 stdout F 2026-01-20T10:54:21.228011053+00:00 stderr F [2026-01-20T10:54:21Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:54:21.235330798+00:00 stderr F I0120 10:54:21.234958 24340 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:54:21.235330798+00:00 stderr F E0120 10:54:21.235013 24340 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:54:21.235330798+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:29.246947420+00:00 stderr F I0120 10:54:29.246813 24340 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs2024267167 --cleanup 2026-01-20T10:54:29.250811344+00:00 stdout F 2026-01-20T10:54:29.250833374+00:00 stderr F [2026-01-20T10:54:29Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:54:29.250833374+00:00 stderr F [2026-01-20T10:54:29Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:54:29.257825350+00:00 stderr F I0120 10:54:29.257780 24340 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:54:29.257847791+00:00 stderr F E0120 10:54:29.257829 24340 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:54:29.257847791+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:45.270380342+00:00 stderr F I0120 10:54:45.270273 24340 daemon.go:1899] Running: 
/run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs2264716209 --cleanup 2026-01-20T10:54:45.274663536+00:00 stderr F [2026-01-20T10:54:45Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:54:45.274858871+00:00 stdout F 2026-01-20T10:54:45.274870492+00:00 stderr F [2026-01-20T10:54:45Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:54:45.284680553+00:00 stderr F I0120 10:54:45.284610 24340 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:54:45.284747235+00:00 stderr F E0120 10:54:45.284705 24340 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:54:45.284747235+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:54:48.281381816+00:00 stderr F I0120 10:54:48.278121 24340 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 41979 2026-01-20T10:54:49.205159511+00:00 stderr F I0120 10:54:49.202018 24340 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 41986 2026-01-20T10:55:17.306114494+00:00 stderr F I0120 10:55:17.305475 24340 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs3212365595 --cleanup 2026-01-20T10:55:17.309741490+00:00 stderr F [2026-01-20T10:55:17Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:55:17.309840973+00:00 stdout F 2026-01-20T10:55:17.309847693+00:00 stderr F [2026-01-20T10:55:17Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:55:17.318516194+00:00 stderr F I0120 10:55:17.318448 24340 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:55:17.318601797+00:00 stderr F E0120 10:55:17.318554 
24340 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:55:17.318601797+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:56:17.334565303+00:00 stderr F I0120 10:56:17.334464 24340 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs2950777166 --cleanup 2026-01-20T10:56:17.338747955+00:00 stderr F [2026-01-20T10:56:17Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:56:17.338814367+00:00 stdout F 2026-01-20T10:56:17.338824007+00:00 stderr F [2026-01-20T10:56:17Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:56:17.353699368+00:00 stderr F I0120 10:56:17.353611 24340 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:56:17.353769300+00:00 stderr F E0120 10:56:17.353690 24340 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:56:17.353769300+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:17.379903668+00:00 stderr F I0120 10:57:17.379743 24340 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1732164442 --cleanup 2026-01-20T10:57:17.384448909+00:00 stderr F [2026-01-20T10:57:17Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:57:17.384681225+00:00 stdout F 2026-01-20T10:57:17.384695665+00:00 stderr F [2026-01-20T10:57:17Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:57:17.393980480+00:00 stderr F I0120 10:57:17.393907 24340 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 
2026-01-20T10:57:17.394023512+00:00 stderr F E0120 10:57:17.393981 24340 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:57:17.394023512+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:17.397548955+00:00 stderr F E0120 10:57:17.397475 24340 writer.go:242] Error setting Degraded annotation for node crc: unable to update node "&Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}": Patch "https://api-int.crc.testing:6443/api/v1/nodes/crc": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:17.397548955+00:00 stderr F E0120 10:57:17.397506 24340 daemon.go:603] Could not update annotation: unable to update node "&Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}": Patch "https://api-int.crc.testing:6443/api/v1/nodes/crc": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:46.721566399+00:00 stderr F I0120 10:57:46.721224 24340 daemon.go:1363] Shutting down MachineConfigDaemon home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log 2026-01-20T10:57:47.088905093+00:00 stderr F I0120 10:57:47.088780 32694 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2026-01-20T10:57:47.089258352+00:00 stderr F I0120 10:57:47.089127 32694 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2026-01-20T10:57:47.092163809+00:00 stderr F I0120 10:57:47.092087 32694 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2026-01-20T10:57:47.095393474+00:00 stderr F I0120 10:57:47.095336 32694 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2026-01-20T10:57:47.202174959+00:00 stderr F I0120 10:57:47.202095 
32694 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2026-01-20T10:57:47.249941222+00:00 stderr F I0120 10:57:47.249339 32694 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2026-01-20T10:57:47.250253180+00:00 stderr F E0120 10:57:47.250217 32694 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret 2026-01-20T10:57:47.250359003+00:00 stderr F I0120 10:57:47.250328 32694 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2026-01-20T10:57:47.496798040+00:00 stderr F I0120 10:57:47.496715 32694 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2026-01-20T10:57:47.497921960+00:00 stderr F I0120 10:57:47.497858 32694 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2026-01-20T10:57:47.498947567+00:00 stderr F I0120 10:57:47.498917 32694 metrics.go:100] Registering Prometheus metrics 2026-01-20T10:57:47.499020879+00:00 stderr F I0120 10:57:47.499000 32694 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2026-01-20T10:57:47.511563881+00:00 stderr F I0120 10:57:47.511526 32694 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:57:47.513004279+00:00 stderr F I0120 10:57:47.512966 32694 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2026-01-20T10:57:47.524330669+00:00 stderr F I0120 10:57:47.522396 32694 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to 
featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:57:47.524330669+00:00 stderr F I0120 10:57:47.522478 32694 start.go:214] FeatureGates initialized: 
knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:57:47.524330669+00:00 stderr F I0120 10:57:47.522500 32694 update.go:2610] Starting to manage node: crc 2026-01-20T10:57:47.533519881+00:00 stderr F I0120 10:57:47.533428 32694 rpm-ostree.go:308] Running captured: rpm-ostree status 2026-01-20T10:57:47.578418169+00:00 stderr F I0120 10:57:47.578337 32694 daemon.go:1727] State: idle 2026-01-20T10:57:47.578418169+00:00 stderr F Deployments: 
2026-01-20T10:57:47.578418169+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2026-01-20T10:57:47.578418169+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2026-01-20T10:57:47.578418169+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2026-01-20T10:57:47.578418169+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2026-01-20T10:57:47.578418169+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2026-01-20T10:57:47.578418169+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2026-01-20T10:57:47.578418169+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2026-01-20T10:57:47.578418169+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2026-01-20T10:57:47.578692666+00:00 stderr F I0120 10:57:47.578651 32694 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2026-01-20T10:57:47.578692666+00:00 stderr F { 2026-01-20T10:57:47.578692666+00:00 stderr F "container-image": { 2026-01-20T10:57:47.578692666+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2026-01-20T10:57:47.578692666+00:00 stderr F "image-labels": { 2026-01-20T10:57:47.578692666+00:00 stderr F "containers.bootc": "1", 2026-01-20T10:57:47.578692666+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2026-01-20T10:57:47.578692666+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2026-01-20T10:57:47.578692666+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2026-01-20T10:57:47.578692666+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2026-01-20T10:57:47.578692666+00:00 
stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2026-01-20T10:57:47.578692666+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2026-01-20T10:57:47.578692666+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2026-01-20T10:57:47.578692666+00:00 stderr F "ostree.bootable": "true", 2026-01-20T10:57:47.578692666+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2026-01-20T10:57:47.578692666+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2026-01-20T10:57:47.578692666+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2026-01-20T10:57:47.578692666+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2026-01-20T10:57:47.578692666+00:00 stderr F }, 2026-01-20T10:57:47.578692666+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2026-01-20T10:57:47.578692666+00:00 stderr F }, 2026-01-20T10:57:47.578692666+00:00 stderr F "osbuild-version": "114", 2026-01-20T10:57:47.578692666+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2026-01-20T10:57:47.578692666+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2026-01-20T10:57:47.578692666+00:00 stderr F "version": "416.94.202405291527-0" 2026-01-20T10:57:47.578692666+00:00 stderr F } 2026-01-20T10:57:47.578760798+00:00 stderr F I0120 10:57:47.578738 32694 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2026-01-20T10:57:47.578760798+00:00 stderr F I0120 10:57:47.578750 32694 rpm-ostree.go:308] Running captured: journalctl --list-boots 2026-01-20T10:57:47.590195350+00:00 stderr F I0120 10:57:47.590142 32694 daemon.go:1736] journalctl --list-boots: 
2026-01-20T10:57:47.590195350+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2026-01-20T10:57:47.590195350+00:00 stderr F -4 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2026-01-20T10:57:47.590195350+00:00 stderr F -3 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2026-01-20T10:57:47.590195350+00:00 stderr F -2 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 20:42:52 UTC 2026-01-20T10:57:47.590195350+00:00 stderr F -1 0f41a1dcc88d4821a5e7f7104b70ba6f Tue 2026-01-20 10:41:40 UTC Tue 2026-01-20 10:46:28 UTC 2026-01-20T10:57:47.590195350+00:00 stderr F 0 b3bdf42313404a9d8f31b9e9c5c3e8af Tue 2026-01-20 10:46:34 UTC Tue 2026-01-20 10:57:47 UTC 2026-01-20T10:57:47.590195350+00:00 stderr F I0120 10:57:47.590179 32694 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2026-01-20T10:57:47.602535337+00:00 stderr F I0120 10:57:47.602474 32694 daemon.go:1751] systemd service state: OK 2026-01-20T10:57:47.602535337+00:00 stderr F I0120 10:57:47.602508 32694 daemon.go:1327] Starting MachineConfigDaemon 2026-01-20T10:57:47.602690101+00:00 stderr F I0120 10:57:47.602658 32694 daemon.go:1334] Enabling Kubelet Healthz Monitor 2026-01-20T10:57:48.524387985+00:00 stderr F I0120 10:57:48.524324 32694 daemon.go:647] Node crc is part of the control plane 2026-01-20T10:57:48.660086434+00:00 stderr F I0120 10:57:48.660000 32694 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1424797890 --cleanup 2026-01-20T10:57:48.663423642+00:00 stderr F [2026-01-20T10:57:48Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:57:48.663486503+00:00 stderr F [2026-01-20T10:57:48Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:57:48.663577636+00:00 stdout F 2026-01-20T10:57:48.670994802+00:00 stderr F I0120 10:57:48.670934 
32694 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:57:48.671019223+00:00 stderr F E0120 10:57:48.671000 32694 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:57:48.671019223+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:50.687165201+00:00 stderr F I0120 10:57:50.687013 32694 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs3450082080 --cleanup 2026-01-20T10:57:50.690010616+00:00 stdout F 2026-01-20T10:57:50.690031056+00:00 stderr F [2026-01-20T10:57:50Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:57:50.690031056+00:00 stderr F [2026-01-20T10:57:50Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:57:50.703906303+00:00 stderr F I0120 10:57:50.703856 32694 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:57:50.704199911+00:00 stderr F E0120 10:57:50.703925 32694 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:57:50.704199911+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:57:54.721034228+00:00 stderr F I0120 10:57:54.720959 32694 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs3092106676 --cleanup 2026-01-20T10:57:54.723527403+00:00 stderr F [2026-01-20T10:57:54Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:57:54.723642527+00:00 stdout F 2026-01-20T10:57:54.723649357+00:00 stderr F [2026-01-20T10:57:54Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 
2026-01-20T10:57:54.732816409+00:00 stderr F I0120 10:57:54.732424 32694 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:57:54.732871861+00:00 stderr F E0120 10:57:54.732841 32694 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:57:54.732871861+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:58:02.742725474+00:00 stderr F I0120 10:58:02.742606 32694 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs426200726 --cleanup 2026-01-20T10:58:02.745168096+00:00 stderr F [2026-01-20T10:58:02Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:58:02.745223378+00:00 stdout F 2026-01-20T10:58:02.745231798+00:00 stderr F [2026-01-20T10:58:02Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2026-01-20T10:58:02.752338249+00:00 stderr F I0120 10:58:02.752285 32694 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:58:02.752370480+00:00 stderr F E0120 10:58:02.752356 32694 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:58:02.752370480+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2026-01-20T10:58:18.773387878+00:00 stderr F I0120 10:58:18.772207 32694 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs16959998 --cleanup 2026-01-20T10:58:18.777251947+00:00 stdout F 2026-01-20T10:58:18.777311568+00:00 stderr F [2026-01-20T10:58:18Z INFO nmstatectl] Nmstate version: 2.2.29 2026-01-20T10:58:18.777311568+00:00 stderr F [2026-01-20T10:58:18Z INFO nmstatectl::persist_nic] /etc/systemd/network does 
not exist, no need to clean up 2026-01-20T10:58:18.785322312+00:00 stderr F I0120 10:58:18.785213 32694 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2026-01-20T10:58:18.785322312+00:00 stderr F E0120 10:58:18.785299 32694 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2026-01-20T10:58:18.785322312+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log
2026-01-20T10:47:24.861334478+00:00 stderr F W0120 10:47:24.860999 5616 deprecated.go:66] 2026-01-20T10:47:24.861334478+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:47:24.861334478+00:00 stderr F 2026-01-20T10:47:24.861334478+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2026-01-20T10:47:24.861334478+00:00 stderr F 2026-01-20T10:47:24.861334478+00:00 stderr F =============================================== 2026-01-20T10:47:24.861334478+00:00 stderr F 2026-01-20T10:47:24.861486702+00:00 stderr F I0120 10:47:24.861474 5616 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2026-01-20T10:47:24.862596021+00:00 stderr F I0120 10:47:24.862573 5616 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:47:24.862676364+00:00 stderr F I0120 10:47:24.862664 5616 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:47:24.865028657+00:00 stderr F I0120 10:47:24.863563 5616 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2026-01-20T10:47:24.865028657+00:00 stderr F I0120 10:47:24.863835 5616 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log
2025-08-13T19:50:45.604729576+00:00 stderr F W0813 19:50:45.602545 13767 deprecated.go:66] 2025-08-13T19:50:45.604729576+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:50:45.604729576+00:00 stderr F 2025-08-13T19:50:45.604729576+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:50:45.604729576+00:00 stderr F 2025-08-13T19:50:45.604729576+00:00 stderr F =============================================== 2025-08-13T19:50:45.604729576+00:00 stderr F 2025-08-13T19:50:45.604729576+00:00 stderr F I0813 19:50:45.602752 13767 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:50:45.618370216+00:00 stderr F I0813 19:50:45.617282 13767 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:50:45.618370216+00:00 stderr F I0813 19:50:45.617490 13767 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:50:45.649070553+00:00 stderr F I0813 19:50:45.647523 13767 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-08-13T19:50:45.662264789+00:00 stderr F I0813 19:50:45.660720 13767 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 2025-08-13T20:42:46.043998571+00:00 stderr F I0813 20:42:46.043741 13767 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.log
2026-01-20T10:49:36.102241911+00:00 stderr F I0120 10:49:36.099426 1 cmd.go:233] Using service-serving-cert provided certificates 2026-01-20T10:49:36.102241911+00:00 stderr F I0120 10:49:36.100287 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:36.104478688+00:00 stderr F I0120 10:49:36.104353 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:36.154445980+00:00 stderr F I0120 10:49:36.154152 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2026-01-20T10:49:36.155434861+00:00 stderr F I0120 10:49:36.155384 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:37.069620686+00:00 stderr F I0120 10:49:37.068845 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:49:37.075315550+00:00 stderr F I0120 10:49:37.074899 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:37.075315550+00:00 stderr F I0120 10:49:37.074922 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:49:37.075315550+00:00 stderr F I0120 10:49:37.074940 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:49:37.075315550+00:00 stderr F I0120 10:49:37.074946 1 maxinflight.go:120] "Set
denominator for mutating requests" limit=200 2026-01-20T10:49:37.090692468+00:00 stderr F I0120 10:49:37.090458 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:37.090692468+00:00 stderr F W0120 10:49:37.090479 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:37.090692468+00:00 stderr F W0120 10:49:37.090485 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:37.094722751+00:00 stderr F I0120 10:49:37.093403 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:49:37.098135015+00:00 stderr F I0120 10:49:37.097481 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:37.098135015+00:00 stderr F I0120 10:49:37.097750 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 
2026-01-20T10:49:37.104604392+00:00 stderr F I0120 10:49:37.104550 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:37.104604392+00:00 stderr F I0120 10:49:37.104566 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:37.104634903+00:00 stderr F I0120 10:49:37.104612 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:37.104634903+00:00 stderr F I0120 10:49:37.104623 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.104688364+00:00 stderr F I0120 10:49:37.104652 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:37.105671895+00:00 stderr F I0120 10:49:37.104792 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.105671895+00:00 stderr F I0120 10:49:37.104917 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:37.110529532+00:00 stderr F I0120 10:49:37.110142 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2026-01-20 10:49:37.110109499 +0000 UTC))" 2026-01-20T10:49:37.110529532+00:00 stderr F I0120 10:49:37.110436 1 
named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906176\" (2026-01-20 09:49:36 +0000 UTC to 2027-01-20 09:49:36 +0000 UTC (now=2026-01-20 10:49:37.110416469 +0000 UTC))" 2026-01-20T10:49:37.110529532+00:00 stderr F I0120 10:49:37.110461 1 secure_serving.go:210] Serving securely on [::]:8443 2026-01-20T10:49:37.110529532+00:00 stderr F I0120 10:49:37.110484 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:49:37.110529532+00:00 stderr F I0120 10:49:37.110499 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:37.204991009+00:00 stderr F I0120 10:49:37.204915 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.206610299+00:00 stderr F I0120 10:49:37.205205 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.205165165 +0000 UTC))" 2026-01-20T10:49:37.206610299+00:00 stderr F I0120 10:49:37.205585 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2026-01-20 
10:49:37.205562587 +0000 UTC))" 2026-01-20T10:49:37.206610299+00:00 stderr F I0120 10:49:37.205924 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906176\" (2026-01-20 09:49:36 +0000 UTC to 2027-01-20 09:49:36 +0000 UTC (now=2026-01-20 10:49:37.205904098 +0000 UTC))" 2026-01-20T10:49:37.206610299+00:00 stderr F I0120 10:49:37.206143 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.206610299+00:00 stderr F I0120 10:49:37.206196 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:37.206610299+00:00 stderr F I0120 10:49:37.206599 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:37.206570418 +0000 UTC))" 2026-01-20T10:49:37.206645200+00:00 stderr F I0120 10:49:37.206618 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:37.206607539 +0000 UTC))" 2026-01-20T10:49:37.206645200+00:00 stderr F I0120 10:49:37.206634 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:37.206623139 +0000 UTC))" 2026-01-20T10:49:37.206667161+00:00 stderr F I0120 10:49:37.206649 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:37.20663866 +0000 UTC))" 2026-01-20T10:49:37.206679791+00:00 stderr F I0120 10:49:37.206673 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.206661891 +0000 UTC))" 2026-01-20T10:49:37.206790044+00:00 stderr F I0120 10:49:37.206688 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.206678171 +0000 UTC))" 2026-01-20T10:49:37.206790044+00:00 stderr F I0120 10:49:37.206709 
1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.206698922 +0000 UTC))" 2026-01-20T10:49:37.206790044+00:00 stderr F I0120 10:49:37.206723 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.206713182 +0000 UTC))" 2026-01-20T10:49:37.206790044+00:00 stderr F I0120 10:49:37.206739 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:37.206728423 +0000 UTC))" 2026-01-20T10:49:37.206790044+00:00 stderr F I0120 10:49:37.206753 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:37.206743773 +0000 UTC))" 2026-01-20T10:49:37.206790044+00:00 stderr F I0120 10:49:37.206767 1 tlsconfig.go:178] "Loaded 
client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:37.206757464 +0000 UTC))" 2026-01-20T10:49:37.207134045+00:00 stderr F I0120 10:49:37.207092 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2026-01-20 10:49:37.207078883 +0000 UTC))" 2026-01-20T10:49:37.207363462+00:00 stderr F I0120 10:49:37.207340 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906176\" (2026-01-20 09:49:36 +0000 UTC to 2027-01-20 09:49:36 +0000 UTC (now=2026-01-20 10:49:37.207328041 +0000 UTC))" 2026-01-20T10:56:02.154159458+00:00 stderr F I0120 10:56:02.153517 1 leaderelection.go:260] successfully acquired lease openshift-service-ca-operator/service-ca-operator-lock 2026-01-20T10:56:02.154297752+00:00 stderr F I0120 10:56:02.153641 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42376", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 
service-ca-operator-546b4f8984-pwccz_9f75dfdb-e545-46d9-9d7f-225937af579b became leader 2026-01-20T10:56:02.157596940+00:00 stderr F I0120 10:56:02.157560 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:56:02.158311449+00:00 stderr F I0120 10:56:02.158234 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator 2026-01-20T10:56:02.158893436+00:00 stderr F I0120 10:56:02.158294 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:56:02.159190174+00:00 stderr F I0120 10:56:02.159142 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_service-ca 2026-01-20T10:56:02.258917546+00:00 stderr F I0120 10:56:02.258791 1 base_controller.go:73] Caches are synced for ServiceCAOperator 2026-01-20T10:56:02.258917546+00:00 stderr F I0120 10:56:02.258842 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ... 2026-01-20T10:56:02.260014587+00:00 stderr F I0120 10:56:02.259948 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca 2026-01-20T10:56:02.260014587+00:00 stderr F I0120 10:56:02.259980 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ... 2026-01-20T10:56:02.260275234+00:00 stderr F I0120 10:56:02.260224 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:56:02.260354346+00:00 stderr F I0120 10:56:02.260329 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2026-01-20T10:56:02.658735984+00:00 stderr F I0120 10:56:02.658632 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:56:02.658735984+00:00 stderr F I0120 10:56:02.658669 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107361 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.107309834 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107562 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.107545761 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107583 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.107567871 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107602 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.107588122 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 
stderr F I0120 10:56:07.107625 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107610322 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107644 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107630533 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107672 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107649873 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107692 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107677694 +0000 UTC))" 
2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107714 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.107697845 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107735 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.107722805 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107757 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.107742006 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.107787 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107772867 +0000 
UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.108250 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2026-01-20 10:56:07.108230149 +0000 UTC))" 2026-01-20T10:56:07.111364064+00:00 stderr F I0120 10:56:07.108589 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906176\" (2026-01-20 09:49:36 +0000 UTC to 2027-01-20 09:49:36 +0000 UTC (now=2026-01-20 10:56:07.108572469 +0000 UTC))"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.log
2025-08-13T20:05:46.695171711+00:00 stderr F I0813 20:05:46.693109 1 cmd.go:233] Using service-serving-cert provided certificates 2025-08-13T20:05:46.698320051+00:00 stderr F I0813 20:05:46.697106 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:05:46.701930994+00:00 stderr F I0813 20:05:46.700578 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:46.766204245+00:00 stderr F I0813 20:05:46.766110 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-08-13T20:05:46.768050687+00:00 stderr F I0813 20:05:46.768021 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:47.202189140+00:00 stderr F I0813 20:05:47.202074 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:05:47.209249832+00:00 stderr F I0813 20:05:47.209203 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:05:47.209327574+00:00 stderr F I0813 20:05:47.209313 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:05:47.209388826+00:00 stderr F I0813 20:05:47.209374 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:05:47.209432707+00:00 stderr F I0813 20:05:47.209417 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:05:47.234427463+00:00 stderr F I0813 20:05:47.234338 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:47.234427463+00:00 stderr F W0813 20:05:47.234389 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:47.234427463+00:00 stderr F W0813 20:05:47.234399 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:05:47.234479114+00:00 stderr F I0813 20:05:47.234405 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:05:47.239726024+00:00 stderr F I0813 20:05:47.239644 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:47.239726024+00:00 stderr F I0813 20:05:47.239690 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:47.239852348+00:00 stderr F I0813 20:05:47.239830 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:47.240207368+00:00 stderr F I0813 20:05:47.240155 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.240124436 +0000 UTC))" 2025-08-13T20:05:47.240207368+00:00 stderr F I0813 20:05:47.240178 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:47.240223789+00:00 stderr F I0813 20:05:47.240207 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240600 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115547\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" 
(2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.240573979 +0000 UTC))" 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240685 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240714 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:05:47.241303220+00:00 stderr F I0813 20:05:47.241198 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:47.242046351+00:00 stderr F I0813 20:05:47.241956 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:47.242115163+00:00 stderr F I0813 20:05:47.242060 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:47.242115163+00:00 stderr F I0813 20:05:47.242100 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:47.243732009+00:00 stderr F I0813 20:05:47.243709 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 
2025-08-13T20:05:47.269716273+00:00 stderr F I0813 20:05:47.269602 1 leaderelection.go:260] successfully acquired lease openshift-service-ca-operator/service-ca-operator-lock 2025-08-13T20:05:47.272188094+00:00 stderr F I0813 20:05:47.272007 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31850", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_cd77d967-43e4-48e5-af41-05935a4b105a became leader 2025-08-13T20:05:47.306324662+00:00 stderr F I0813 20:05:47.306185 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:05:47.328919019+00:00 stderr F I0813 20:05:47.328769 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator 2025-08-13T20:05:47.330111923+00:00 stderr F I0813 20:05:47.329537 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:47.330460283+00:00 stderr F I0813 20:05:47.330404 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:47.330460283+00:00 stderr F I0813 20:05:47.330431 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:05:47.336307180+00:00 stderr F I0813 20:05:47.336161 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_service-ca 2025-08-13T20:05:47.341560371+00:00 stderr F I0813 20:05:47.341414 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:47.342178988+00:00 stderr F I0813 20:05:47.342066 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.342018904 +0000 UTC))" 2025-08-13T20:05:47.345069291+00:00 stderr F I0813 20:05:47.344954 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:47.345166924+00:00 stderr F I0813 20:05:47.344954 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:47.345447712+00:00 stderr F I0813 20:05:47.345315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.345247906 +0000 UTC))" 2025-08-13T20:05:47.345686099+00:00 stderr F I0813 20:05:47.345621 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115547\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" (2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.345601186 +0000 UTC))" 2025-08-13T20:05:47.347929553+00:00 stderr F I0813 20:05:47.347827 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:05:47.347745868 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353588 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:05:47.348304684 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353744 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:05:47.353713179 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353765 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:05:47.3537514 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353925 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353820102 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353957 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353941695 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353974 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353962766 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354005 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353985766 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354030 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:05:47.354012267 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354054 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:05:47.354040398 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354182 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.35412156 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354568 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.354544052 +0000 UTC))" 2025-08-13T20:05:47.358087384+00:00 stderr F I0813 20:05:47.357824 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115547\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" (2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.357737924 +0000 UTC))" 2025-08-13T20:05:47.430618321+00:00 stderr F I0813 20:05:47.430480 1 base_controller.go:73] Caches are synced for ServiceCAOperator 2025-08-13T20:05:47.430668902+00:00 stderr F I0813 20:05:47.430627 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ... 2025-08-13T20:05:47.437831297+00:00 stderr F I0813 20:05:47.437741 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca 2025-08-13T20:05:47.437999972+00:00 stderr F I0813 20:05:47.437834 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ... 2025-08-13T20:05:47.807892884+00:00 stderr F I0813 20:05:47.807418 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:05:47.807892884+00:00 stderr F I0813 20:05:47.807476 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:08:49.261077294+00:00 stderr F E0813 20:08:49.260139 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca-operator/service-ca-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386102 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.385986 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386126 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386147 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.396431 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410947878+00:00 stderr F I0813 20:42:36.410909 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.430056448+00:00 stderr F I0813 20:42:36.429968 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.444700 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.444988 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.445099 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 
20:42:36.445265 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454712929+00:00 stderr F I0813 20:42:36.453927 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454983407+00:00 stderr F I0813 20:42:36.454916 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.507069389+00:00 stderr F I0813 20:42:36.503562 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516213932+00:00 stderr F I0813 20:42:36.515748 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531605046+00:00 stderr F I0813 20:42:36.516940 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531735290+00:00 stderr F I0813 20:42:36.516985 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538946848+00:00 stderr F I0813 20:42:36.519622 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538946848+00:00 stderr F I0813 20:42:36.519747 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.273148956+00:00 stderr F I0813 20:42:40.269755 1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.275012839+00:00 stderr F I0813 20:42:40.274951 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:40.275037130+00:00 stderr F I0813 20:42:40.275010 1 base_controller.go:172] Shutting down StatusSyncer_service-ca ... 
2025-08-13T20:42:40.275901815+00:00 stderr F I0813 20:42:40.275017 1 base_controller.go:150] All StatusSyncer_service-ca post start hooks have been terminated 2025-08-13T20:42:40.275901815+00:00 stderr F I0813 20:42:40.275890 1 base_controller.go:172] Shutting down ServiceCAOperator ... 2025-08-13T20:42:40.276134082+00:00 stderr F I0813 20:42:40.274141 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.281716873+00:00 stderr F I0813 20:42:40.281678 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:40.281880867+00:00 stderr F I0813 20:42:40.281865 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.281702 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.281741 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.282934 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.273874 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.281762 1 base_controller.go:114] Shutting down worker of ServiceCAOperator controller ... 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.283146 1 base_controller.go:104] All ServiceCAOperator workers have been terminated 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.281754 1 base_controller.go:114] Shutting down worker of StatusSyncer_service-ca controller ... 
2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.283157 1 base_controller.go:104] All StatusSyncer_service-ca workers have been terminated 2025-08-13T20:42:40.283275118+00:00 stderr F I0813 20:42:40.281769 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.283275118+00:00 stderr F I0813 20:42:40.283200 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.284330578+00:00 stderr F I0813 20:42:40.284206 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285683 1 secure_serving.go:255] Stopped listening on [::]:8443 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285725 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285838 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.285946314+00:00 stderr F I0813 20:42:40.285886 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.287334595+00:00 stderr F I0813 20:42:40.287275 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:40.287426957+00:00 stderr F I0813 20:42:40.287375 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.287506089+00:00 stderr F I0813 20:42:40.287457 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:40.287506089+00:00 stderr F I0813 20:42:40.287477 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running 
request(s) have drained 2025-08-13T20:42:40.287523390+00:00 stderr F I0813 20:42:40.287504 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:40.287700335+00:00 stderr F I0813 20:42:40.285932 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.288313053+00:00 stderr F I0813 20:42:40.288252 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.289258460+00:00 stderr F I0813 20:42:40.289175 1 builder.go:302] server exited 2025-08-13T20:42:40.290335971+00:00 stderr F E0813 20:42:40.290246 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.291335530+00:00 stderr F I0813 20:42:40.291154 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37355", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_cd77d967-43e4-48e5-af41-05935a4b105a stopped leading 2025-08-13T20:42:40.292880134+00:00 stderr F W0813 20:42:40.292702 1 leaderelection.go:85] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log

2025-08-13T19:59:15.609666557+00:00 stderr F I0813 19:59:15.581369 1 cmd.go:233] Using
service-serving-cert provided certificates 2025-08-13T19:59:15.609666557+00:00 stderr F I0813 19:59:15.596283 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:15.899439107+00:00 stderr F I0813 19:59:15.895681 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:17.925502651+00:00 stderr F I0813 19:59:17.918436 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-08-13T19:59:19.416925586+00:00 stderr F I0813 19:59:19.416426 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:27.199472670+00:00 stderr F I0813 19:59:27.164373 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359754 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359870 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359974 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359983 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T19:59:27.539080441+00:00 stderr F I0813 19:59:27.537724 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:27.539080441+00:00 stderr F W0813 19:59:27.538022 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:27.539080441+00:00 stderr F W0813 19:59:27.538031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:27.544388722+00:00 stderr F I0813 19:59:27.541588 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T19:59:27.754735019+00:00 stderr F I0813 19:59:27.754668 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:27.783809777+00:00 stderr F I0813 19:59:27.782997 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 2025-08-13T19:59:27.811830216+00:00 stderr F I0813 19:59:27.811703 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:27.811669361 +0000 UTC))" 2025-08-13T19:59:27.825500576+00:00 stderr F I0813 19:59:27.825462 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:27.825998880+00:00 stderr F I0813 19:59:27.825978 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:27.836639763+00:00 stderr F I0813 19:59:27.836563 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:27.837165378+00:00 stderr F I0813 19:59:27.837143 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:27.838887667+00:00 stderr F I0813 19:59:27.838569 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:27.838887667+00:00 stderr F I0813 19:59:27.838654 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:28.034964727+00:00 stderr F I0813 19:59:28.032713 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.053879 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:27.812142195 +0000 UTC))" 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054053 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054180 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054368 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:28.207045091+00:00 stderr F I0813 19:59:28.204297 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:28.327321909+00:00 stderr F I0813 19:59:28.291761 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:28.425916980+00:00 stderr 
F E0813 19:59:28.423919 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.425916980+00:00 stderr F E0813 19:59:28.424055 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.425916980+00:00 stderr F I0813 19:59:28.425035 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:28.425002914 +0000 UTC))" 2025-08-13T19:59:28.425916980+00:00 stderr F I0813 19:59:28.425070 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:28.425050155 +0000 UTC))" 2025-08-13T19:59:28.436945004+00:00 stderr F E0813 19:59:28.436254 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.508061931+00:00 stderr F E0813 19:59:28.507546 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.511589642+00:00 stderr F I0813 19:59:28.511486 1 
shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:28.523985505+00:00 stderr F E0813 19:59:28.522980 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.523985505+00:00 stderr F E0813 19:59:28.523131 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.533477246+00:00 stderr F I0813 19:59:28.533326 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:28.425082696 +0000 UTC))" 2025-08-13T19:59:28.533477246+00:00 stderr F I0813 19:59:28.533400 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:28.533377223 +0000 UTC))" 2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533661 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533417554 +0000 UTC))" 2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533718 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533696162 +0000 UTC))" 2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533754 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533733783 +0000 UTC))" 2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533870 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533761984 +0000 UTC))" 2025-08-13T19:59:28.534476044+00:00 stderr F I0813 19:59:28.534395 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" 
[serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:28.534367511 +0000 UTC))" 2025-08-13T19:59:28.562541554+00:00 stderr F E0813 19:59:28.559065 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.562541554+00:00 stderr F E0813 19:59:28.559198 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.600922078+00:00 stderr F E0813 19:59:28.600126 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.629967166+00:00 stderr F E0813 19:59:28.627224 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.687441405+00:00 stderr F E0813 19:59:28.686150 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.708043192+00:00 stderr F E0813 19:59:28.707397 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.756934746+00:00 stderr F I0813 19:59:28.756063 1 leaderelection.go:260] successfully acquired lease 
openshift-service-ca-operator/service-ca-operator-lock 2025-08-13T19:59:28.776867694+00:00 stderr F I0813 19:59:28.758535 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28107", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_c8767c2b-8c87-42c2-ac6c-176a996e5e2c became leader 2025-08-13T19:59:28.819010675+00:00 stderr F I0813 19:59:28.813946 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:28.534766172 +0000 UTC))" 2025-08-13T19:59:28.894882238+00:00 stderr F I0813 19:59:28.877043 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:28.910128372+00:00 stderr F E0813 19:59:28.901710 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.910128372+00:00 stderr F E0813 19:59:28.901828 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.951357188+00:00 stderr F I0813 19:59:28.951299 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator 2025-08-13T19:59:28.951456871+00:00 stderr F I0813 19:59:28.951439 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:28.951975935+00:00 stderr F I0813 19:59:28.951949 1 base_controller.go:67] Waiting for caches to sync for 
StatusSyncer_service-ca 2025-08-13T19:59:31.323743733+00:00 stderr F I0813 19:59:31.292566 1 request.go:697] Waited for 2.247873836s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0 2025-08-13T19:59:31.532284157+00:00 stderr F I0813 19:59:31.453284 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:31.532284157+00:00 stderr F I0813 19:59:31.532072 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:33.853457023+00:00 stderr F E0813 19:59:33.847947 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:33.853457023+00:00 stderr F E0813 19:59:33.848557 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.154580 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca 2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.154670 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ... 2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.155324 1 base_controller.go:73] Caches are synced for ServiceCAOperator 2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.155334 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ... 
2025-08-13T19:59:34.504016287+00:00 stderr F E0813 19:59:34.497677 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.504016287+00:00 stderr F E0813 19:59:34.497764 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.801623925+00:00 stderr F E0813 19:59:35.778348 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.825044193+00:00 stderr F E0813 19:59:35.824071 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.955154397+00:00 stderr F I0813 19:59:36.955035 1 rotate.go:99] Rotating service CA due to the CA being past the mid-point of its validity. 2025-08-13T19:59:38.336152503+00:00 stderr F I0813 19:59:38.333455 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:38.336152503+00:00 stderr F I0813 19:59:38.334696 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T19:59:38.364878932+00:00 stderr F E0813 19:59:38.364613 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:38.386488598+00:00 stderr F E0813 19:59:38.385301 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:39.234761047+00:00 stderr F I0813 19:59:39.234566 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/signing-key -n openshift-service-ca because it changed 2025-08-13T19:59:39.234761047+00:00 stderr F I0813 19:59:39.234628 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ServiceCARotated' Rotating service CA due to the CA being past the mid-point of its validity. 
2025-08-13T19:59:39.234761047+00:00 stderr F The previous CA will be trusted until 2026-09-12T19:59:38Z 2025-08-13T19:59:39.508502940+00:00 stderr F I0813 19:59:39.461379 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/service-ca -n openshift-config-managed: 2025-08-13T19:59:39.508502940+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:39.508502940+00:00 stderr F I0813 19:59:39.477989 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/signing-cabundle -n openshift-service-ca: 2025-08-13T19:59:39.508502940+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:39.756660394+00:00 stderr F I0813 19:59:39.756568 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/service-ca -n openshift-service-ca because it changed 2025-08-13T19:59:39.943240583+00:00 stderr F I0813 19:59:39.941911 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:15Z","message":"Progressing: \nProgressing: service-ca is 
updating","reason":"_ManagedDeploymentsAvailable","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}} 2025-08-13T19:59:40.032656072+00:00 stderr F I0813 19:59:40.032507 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing message changed from "Progressing: \nProgressing: service-ca does not have available replicas" to "Progressing: \nProgressing: service-ca is updating" 2025-08-13T19:59:40.563249286+00:00 stderr F I0813 19:59:40.559852 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/service-ca -n openshift-service-ca because it changed 2025-08-13T19:59:40.887673554+00:00 stderr F I0813 19:59:40.887078 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:15Z","message":"Progressing: \nProgressing: service-ca does not have available replicas","reason":"_ManagedDeploymentsAvailable","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All 
is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}} 2025-08-13T19:59:41.121658674+00:00 stderr F I0813 19:59:41.119997 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing message changed from "Progressing: \nProgressing: service-ca is updating" to "Progressing: \nProgressing: service-ca does not have available replicas" 2025-08-13T19:59:43.486698869+00:00 stderr F E0813 19:59:43.485454 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.520859453+00:00 stderr F E0813 19:59:43.515370 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.907020147+00:00 stderr F I0813 19:59:51.906115 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.905972337 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916595 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 
2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.906769359 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916660 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.916633141 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916891 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.916724353 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916936 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916913229 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916967 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" 
[] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.91694589 +0000 UTC))" 2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942353 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942279082 +0000 UTC))" 2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942455 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942436646 +0000 UTC))" 2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942475 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942462417 +0000 UTC))" 2025-08-13T19:59:51.945436062+00:00 stderr F I0813 19:59:51.945304 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] 
validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:51.945250866 +0000 UTC))" 2025-08-13T19:59:51.955585631+00:00 stderr F I0813 19:59:51.955361 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:51.95521261 +0000 UTC))" 2025-08-13T19:59:52.001613473+00:00 stderr F I0813 19:59:51.999483 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:53.136393990+00:00 stderr F I0813 19:59:53.135266 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Progressing: All service-ca-operator deployments updated","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}} 2025-08-13T19:59:53.166916290+00:00 stderr F I0813 19:59:53.166657 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing 
changed from True to False ("Progressing: All service-ca-operator deployments updated") 2025-08-13T20:00:05.796004788+00:00 stderr F I0813 20:00:05.775547 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.775474452 +0000 UTC))" 2025-08-13T20:00:05.796196303+00:00 stderr F I0813 20:00:05.796174 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.796110161 +0000 UTC))" 2025-08-13T20:00:05.796261825+00:00 stderr F I0813 20:00:05.796245 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.796221094 +0000 UTC))" 2025-08-13T20:00:05.796323487+00:00 stderr F I0813 20:00:05.796308 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC 
(now=2025-08-13 20:00:05.796285846 +0000 UTC))" 2025-08-13T20:00:05.796417939+00:00 stderr F I0813 20:00:05.796360 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796344797 +0000 UTC))" 2025-08-13T20:00:05.796482201+00:00 stderr F I0813 20:00:05.796468 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.79644076 +0000 UTC))" 2025-08-13T20:00:05.796551873+00:00 stderr F I0813 20:00:05.796533 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796501622 +0000 UTC))" 2025-08-13T20:00:05.796614985+00:00 stderr F I0813 20:00:05.796599 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 
19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796577774 +0000 UTC))" 2025-08-13T20:00:05.796687467+00:00 stderr F I0813 20:00:05.796674 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.796658416 +0000 UTC))" 2025-08-13T20:00:05.796747519+00:00 stderr F I0813 20:00:05.796732 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796708588 +0000 UTC))" 2025-08-13T20:00:05.797270254+00:00 stderr F I0813 20:00:05.797212 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 20:00:05.797190811 +0000 UTC))" 2025-08-13T20:00:05.797677035+00:00 stderr F I0813 20:00:05.797646 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 
+0000 UTC (now=2025-08-13 20:00:05.797621284 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.968609 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.968081498 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997119 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.997060114 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997182 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997132747 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997201 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 
20:00:59.997189298 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997284 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997211129 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997369 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997293401 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997459 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997377464 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997485 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 
UTC (now=2025-08-13 20:00:59.997469856 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997511 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.997495797 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997553 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.997524678 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997573 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997561949 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.998251 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 
UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 20:00:59.998212147 +0000 UTC))" 2025-08-13T20:01:00.024263490+00:00 stderr F I0813 20:01:00.015459 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 20:01:00.015395877 +0000 UTC))" 2025-08-13T20:01:31.257242288+00:00 stderr F I0813 20:01:31.255926 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:31.257353131+00:00 stderr F I0813 20:01:31.257289 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:31.258978588+00:00 stderr F I0813 20:01:31.258095 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259200 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:31.259092921 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259309 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:31.259262486 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259346 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:31.259318087 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259371 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:31.259352518 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259397 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259385379 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259415 1 tlsconfig.go:178] "Loaded 
client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.25940386 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259432 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.25942063 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259483 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259437051 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259500 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:31.259488302 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259542 1 
tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:31.259529983 +0000 UTC))" 2025-08-13T20:01:31.259630776+00:00 stderr F I0813 20:01:31.259608 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259554514 +0000 UTC))" 2025-08-13T20:01:31.260928813+00:00 stderr F I0813 20:01:31.260054 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:01:31.260027158 +0000 UTC))" 2025-08-13T20:01:31.260928813+00:00 stderr F I0813 20:01:31.260391 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 20:01:31.260373778 +0000 UTC))" 2025-08-13T20:01:36.007462308+00:00 stderr F I0813 20:01:36.007012 1 observer_polling.go:120] Observed file 
"/var/run/secrets/serving-cert/tls.crt" has been modified (old="f269835458ddba2137165862fff7fd217525c9e40a9c6f489e3967be4db60372", new="cfa284ff196c7f240b2c9d12ecb58ca59660cd15e39fec28f4ebbfa7f9c29fb3") 2025-08-13T20:01:36.007530580+00:00 stderr F W0813 20:01:36.007467 1 builder.go:132] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:36.007639663+00:00 stderr F I0813 20:01:36.007582 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="6f1dcb91ce339e53ed60d5e5e4ed5a83d1b7ad3e174dad355b404f5099a93f97", new="31318292ca2ee5278a39f1090b20b1f370fcbc25269641a3cee345c20dd36b58") 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008876 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008946 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008994 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009336 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009375 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009395 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009449 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009486 1 
object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011416 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011631 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011682 1 base_controller.go:172] Shutting down ServiceCAOperator ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011699 1 base_controller.go:172] Shutting down StatusSyncer_service-ca ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012644 1 base_controller.go:150] All StatusSyncer_service-ca post start hooks have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011709 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012354 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012677 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012363 1 base_controller.go:114] Shutting down worker of ServiceCAOperator controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012712 1 base_controller.go:104] All ServiceCAOperator workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012569 1 base_controller.go:114] Shutting down worker of StatusSyncer_service-ca controller ... 
2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012762 1 base_controller.go:104] All StatusSyncer_service-ca workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012813 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012888 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012916 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012923 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:36.013602943+00:00 stderr F I0813 20:01:36.013542 1 secure_serving.go:255] Stopped listening on [::]:8443 2025-08-13T20:01:36.013618823+00:00 stderr F I0813 20:01:36.013595 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:36.013618823+00:00 stderr F I0813 20:01:36.013613 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:36.014059546+00:00 stderr F I0813 20:01:36.014014 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:36.014082177+00:00 stderr F I0813 20:01:36.014063 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting 2025-08-13T20:01:36.014145768+00:00 stderr F I0813 20:01:36.014102 1 builder.go:302] server exited 2025-08-13T20:01:40.236433072+00:00 stderr F W0813 20:01:40.231558 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000024300000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657715033114 5ustar zuulzuul././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-utilities/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657734033115 5ustar zuulzuul././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-utilities/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015133657715033104 0ustar zuulzuul././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-content/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657734033115 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-content/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015133657715033104 0ustar zuulzuul././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/registry-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015133657734033115 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/registry-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000067015133657715033121 0ustar zuulzuul2026-01-20T10:51:08.703355624+00:00 stderr F time="2026-01-20T10:51:08Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2026-01-20T10:51:09.617641105+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2026-01-20T10:51:09.617641105+00:00 stderr F time="2026-01-20T10:51:09Z" level=info msg="stopped caching cpu profile data" address="localhost:6060" ././@LongLink0000644000000000000000000000024700000000000011606 Lustar 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.log:
2025-08-13T20:07:28.135939739+00:00 stderr F I0813 20:07:28.133266 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc000bb7c20 cert-dir:0xc000bb7e00 cert-secrets:0xc000bb7b80 configmaps:0xc000bb7720 namespace:0xc000bb7540 optional-cert-configmaps:0xc000bb7d60 optional-configmaps:0xc000bb7860 optional-secrets:0xc000bb77c0 pod:0xc000bb75e0 pod-manifest-dir:0xc000bb79a0 resource-dir:0xc000bb7900 revision:0xc000bb74a0 secrets:0xc000bb7680 v:0xc000bcc820] [0xc000bcc820 0xc000bb74a0 0xc000bb7540 0xc000bb75e0 0xc000bb7900 0xc000bb79a0 0xc000bb7720 0xc000bb7860 0xc000bb7680 0xc000bb77c0 0xc000bb7e00 0xc000bb7c20 0xc000bb7d60 0xc000bb7b80] [] map[cert-configmaps:0xc000bb7c20 cert-dir:0xc000bb7e00 cert-secrets:0xc000bb7b80 configmaps:0xc000bb7720 help:0xc000bccbe0 kubeconfig:0xc000bb7400
log-flush-frequency:0xc000bcc780 namespace:0xc000bb7540 optional-cert-configmaps:0xc000bb7d60 optional-cert-secrets:0xc000bb7cc0 optional-configmaps:0xc000bb7860 optional-secrets:0xc000bb77c0 pod:0xc000bb75e0 pod-manifest-dir:0xc000bb79a0 pod-manifests-lock-file:0xc000bb7ae0 resource-dir:0xc000bb7900 revision:0xc000bb74a0 secrets:0xc000bb7680 timeout-duration:0xc000bb7a40 v:0xc000bcc820 vmodule:0xc000bcc8c0] [0xc000bb7400 0xc000bb74a0 0xc000bb7540 0xc000bb75e0 0xc000bb7680 0xc000bb7720 0xc000bb77c0 0xc000bb7860 0xc000bb7900 0xc000bb79a0 0xc000bb7a40 0xc000bb7ae0 0xc000bb7b80 0xc000bb7c20 0xc000bb7cc0 0xc000bb7d60 0xc000bb7e00 0xc000bcc780 0xc000bcc820 0xc000bcc8c0 0xc000bccbe0] [0xc000bb7c20 0xc000bb7e00 0xc000bb7b80 0xc000bb7720 0xc000bccbe0 0xc000bb7400 0xc000bcc780 0xc000bb7540 0xc000bb7d60 0xc000bb7cc0 0xc000bb7860 0xc000bb77c0 0xc000bb75e0 0xc000bb79a0 0xc000bb7ae0 0xc000bb7900 0xc000bb74a0 0xc000bb7680 0xc000bb7a40 0xc000bcc820 0xc000bcc8c0] map[104:0xc000bccbe0 118:0xc000bcc820] [] -1 0 0xc000b7f560 true 0x73b100 []}
2025-08-13T20:07:28.135939739+00:00 stderr F I0813 20:07:28.133934 1 cmd.go:92] (*installerpod.InstallOptions)(0xc000b28680)({
2025-08-13T20:07:28.135939739+00:00 stderr F KubeConfig: (string) "",
2025-08-13T20:07:28.135939739+00:00 stderr F KubeClient: (kubernetes.Interface) ,
2025-08-13T20:07:28.135939739+00:00 stderr F Revision: (string) (len=2) "11",
2025-08-13T20:07:28.135939739+00:00 stderr F NodeName: (string) "",
2025-08-13T20:07:28.135939739+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager",
2025-08-13T20:07:28.135939739+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod",
2025-08-13T20:07:28.135939739+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=27) "service-account-private-key",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=31) "localhost-recovery-client-token"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=12) "serving-cert"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=27) "kube-controller-manager-pod",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=6) "config",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=32) "cluster-policy-controller-config",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=17) "serviceaccount-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=10) "service-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=15) "recycler-config"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=12) "cloud-config"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=39) "kube-controller-manager-client-cert-key",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=10) "csr-signer"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) ,
2025-08-13T20:07:28.135939739+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=20) "aggregator-client-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F
(string) (len=9) "client-ca"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=17) "trusted-ca-bundle"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs",
2025-08-13T20:07:28.135939739+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
2025-08-13T20:07:28.135939739+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests",
2025-08-13T20:07:28.135939739+00:00 stderr F Timeout: (time.Duration) 2m0s,
2025-08-13T20:07:28.135939739+00:00 stderr F StaticPodManifestsLockFile: (string) "",
2025-08-13T20:07:28.135939739+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) ,
2025-08-13T20:07:28.135939739+00:00 stderr F KubeletVersion: (string) ""
2025-08-13T20:07:28.135939739+00:00 stderr F })
2025-08-13T20:07:28.209260091+00:00 stderr F I0813 20:07:28.204641 1 cmd.go:409] Getting controller reference for node crc
2025-08-13T20:07:28.244244624+00:00 stderr F I0813 20:07:28.244110 1 cmd.go:422] Waiting for installer revisions to settle for node crc
2025-08-13T20:07:28.250320798+00:00 stderr F I0813 20:07:28.250209 1 cmd.go:514] Waiting additional period after revisions have settled for node crc
2025-08-13T20:07:58.255243085+00:00 stderr F I0813 20:07:58.255028 1 cmd.go:520] Getting installer pods for node crc
2025-08-13T20:07:58.270363959+00:00 stderr F I0813 20:07:58.270251 1 cmd.go:538] Latest installer revision for node crc is: 11
2025-08-13T20:07:58.270363959+00:00 stderr F I0813 20:07:58.270306 1 cmd.go:427] Querying kubelet version for node crc
2025-08-13T20:07:58.280990563+00:00 stderr F I0813 20:07:58.280531 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc
2025-08-13T20:07:58.280990563+00:00 stderr F I0813 20:07:58.280586 1 cmd.go:289] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11" ...
2025-08-13T20:07:58.281383995+00:00 stderr F I0813 20:07:58.281285 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11" ...
2025-08-13T20:07:58.281383995+00:00 stderr F I0813 20:07:58.281325 1 cmd.go:225] Getting secrets ...
2025-08-13T20:07:58.295838999+00:00 stderr F I0813 20:07:58.295612 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.303010 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.307771 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.327978 1 cmd.go:238] Getting config maps ...
2025-08-13T20:07:58.335728553+00:00 stderr F I0813 20:07:58.335667 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-11
2025-08-13T20:07:58.345595516+00:00 stderr F I0813 20:07:58.345471 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-11
2025-08-13T20:07:58.349395815+00:00 stderr F I0813 20:07:58.349363 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-11
2025-08-13T20:07:58.352842753+00:00 stderr F I0813 20:07:58.352817 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-11
2025-08-13T20:07:58.358223618+00:00 stderr F I0813 20:07:58.358192 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-11
2025-08-13T20:07:58.468715386+00:00 stderr F I0813 20:07:58.468660 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-11
2025-08-13T20:07:58.660710620+00:00 stderr F I0813 20:07:58.660603 1 copy.go:60] Got configMap openshift-kube-controller-manager/service-ca-11
2025-08-13T20:07:58.859350016+00:00 stderr F I0813 20:07:58.859288 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-11
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.062954 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-11: configmaps "cloud-config-11" not found
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.062997 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064041 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/token" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064218 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/ca.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064407 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/namespace" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064495 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/service-ca.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064655 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064713 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key/service-account.key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064861 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key/service-account.pub" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065012 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065094 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert/tls.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065186 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert/tls.key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065278 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/cluster-policy-controller-config" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065386 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/cluster-policy-controller-config/config.yaml" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067285 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/config" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067412 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/config/config.yaml" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067532 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/controller-manager-kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067588 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/controller-manager-kubeconfig/kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067680 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-cert-syncer-kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067727 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067873 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067959 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068127 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/pod.yaml" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068266 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/version" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068367 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/recycler-config" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068466 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/recycler-config/recycler-pod.yaml" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068558 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/service-ca" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068637 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/service-ca/ca-bundle.crt" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068727 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/serviceaccount-ca" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068848 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/serviceaccount-ca/ca-bundle.crt" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.069090 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.069107 1 cmd.go:225] Getting secrets ...
2025-08-13T20:07:59.267692263+00:00 stderr F I0813 20:07:59.267338 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer
2025-08-13T20:07:59.469969843+00:00 stderr F I0813 20:07:59.469055 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key
2025-08-13T20:07:59.469969843+00:00 stderr F I0813 20:07:59.469120 1 cmd.go:238] Getting config maps ...
2025-08-13T20:07:59.669069401+00:00 stderr F I0813 20:07:59.668869 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca
2025-08-13T20:07:59.871931127+00:00 stderr F I0813 20:07:59.866753 1 copy.go:60] Got configMap openshift-kube-controller-manager/client-ca
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.071345 1 copy.go:60] Got configMap openshift-kube-controller-manager/trusted-ca-bundle
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.071666 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer" ...
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.072018 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.crt" ...
2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.072324 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.key" ...
2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.080616 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key" ...
2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.080639 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.crt" ...
2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.081228 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.key" ...
2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.083488 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca" ...
2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.083523 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ...
2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.084704 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca" ...
2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.084977 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca/ca-bundle.crt" ...
2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.087658 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle" ...
2025-08-13T20:08:00.148623069+00:00 stderr F I0813 20:08:00.090499 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ...
2025-08-13T20:08:00.157142794+00:00 stderr F I0813 20:08:00.150176 1 cmd.go:331] Getting pod configmaps/kube-controller-manager-pod-11 -n openshift-kube-controller-manager 2025-08-13T20:08:00.281932152+00:00 stderr F I0813 20:08:00.263229 1 cmd.go:347] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:08:00.281932152+00:00 stderr F I0813 20:08:00.263292 1 cmd.go:375] Writing a pod under "kube-controller-manager-pod.yaml" key 2025-08-13T20:08:00.281932152+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"11"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export 
AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false 
--feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false 
--feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 
--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n 2025-08-13T20:08:00.282006684+00:00 stderr F \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","comma
nd":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:00.294647666+00:00 stderr F I0813 20:08:00.294562 1 cmd.go:606] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/kube-controller-manager-pod.yaml" ... 2025-08-13T20:08:00.298865277+00:00 stderr F I0813 20:08:00.295071 1 cmd.go:613] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ... 2025-08-13T20:08:00.298865277+00:00 stderr F I0813 20:08:00.295115 1 cmd.go:617] Writing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ... 
2025-08-13T20:08:00.298865277+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"11"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n 
--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false 
--feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true 
--feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 
--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"] 2025-08-13T20:08:00.298947980+00:00 stderr F ,"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","comma
nd":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.log <==
2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.865625 1 migrator.go:18] FLAG: --add_dir_header="false" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866033 1 migrator.go:18] FLAG: --alsologtostderr="true" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866037 1 migrator.go:18] FLAG: --kube-api-burst="1000" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866043 1 migrator.go:18] FLAG: --kube-api-qps="40" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866048 1 migrator.go:18] FLAG: --kubeconfig="" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866086 1 migrator.go:18] FLAG: --log_backtrace_at=":0" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866091 1 migrator.go:18] FLAG: --log_dir="" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866095 1 migrator.go:18] FLAG: --log_file="" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866098 1 migrator.go:18] FLAG: --log_file_max_size="1800" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866102 1 migrator.go:18] FLAG: --logtostderr="true" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866105 1 migrator.go:18] FLAG: --one_output="false" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866108 1 migrator.go:18] FLAG: --skip_headers="false" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866111 1 migrator.go:18] FLAG: --skip_log_headers="false" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866115 1 migrator.go:18] FLAG: --stderrthreshold="2" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120
10:49:35.866118 1 migrator.go:18] FLAG: --v="2" 2026-01-20T10:49:35.867848761+00:00 stderr F I0120 10:49:35.866121 1 migrator.go:18] FLAG: --vmodule=""
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.log <==
2025-08-13T19:59:09.272236653+00:00 stderr F I0813 19:59:09.269668 1 migrator.go:18] FLAG: --add_dir_header="false" 2025-08-13T19:59:09.315418924+00:00 stderr F I0813 19:59:09.315376 1 migrator.go:18] FLAG: --alsologtostderr="true" 2025-08-13T19:59:09.315482885+00:00 stderr F I0813 19:59:09.315453 1 migrator.go:18] FLAG: --kube-api-burst="1000" 2025-08-13T19:59:09.315541857+00:00 stderr F I0813 19:59:09.315512 1 migrator.go:18] FLAG: --kube-api-qps="40" 2025-08-13T19:59:09.315594069+00:00 stderr F I0813 19:59:09.315571 1 migrator.go:18] FLAG: --kubeconfig="" 2025-08-13T19:59:09.315638740+00:00 stderr F I0813 19:59:09.315617 1 migrator.go:18] FLAG: --log_backtrace_at=":0" 2025-08-13T19:59:09.315681631+00:00 stderr F I0813 19:59:09.315659 1 migrator.go:18] FLAG: --log_dir="" 2025-08-13T19:59:09.315724812+00:00 stderr F I0813 19:59:09.315710 1 migrator.go:18] FLAG: --log_file="" 2025-08-13T19:59:09.315765133+00:00 stderr F I0813 19:59:09.315747 1 migrator.go:18] FLAG: --log_file_max_size="1800" 2025-08-13T19:59:09.315911708+00:00 stderr F I0813 19:59:09.315890 1 migrator.go:18] FLAG: --logtostderr="true" 2025-08-13T19:59:09.316095953+00:00 stderr F I0813 19:59:09.315951 1 migrator.go:18] FLAG: --one_output="false" 2025-08-13T19:59:09.316153515+00:00 stderr F I0813 19:59:09.316130 1 migrator.go:18] FLAG: --skip_headers="false" 2025-08-13T19:59:09.316200426+00:00 stderr F I0813 19:59:09.316184 1 migrator.go:18] FLAG: --skip_log_headers="false" 2025-08-13T19:59:09.316252307+00:00 stderr F I0813 19:59:09.316234 1 migrator.go:18] FLAG: --stderrthreshold="2" 2025-08-13T19:59:09.316293679+00:00 stderr F I0813 19:59:09.316280 1 migrator.go:18] FLAG: --v="2" 2025-08-13T19:59:09.316350440+00:00 stderr F I0813 19:59:09.316311 1 migrator.go:18] FLAG: --vmodule="" 2025-08-13T20:42:36.480632977+00:00 stderr F I0813 20:42:36.473590 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log <==
2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246619 1 flags.go:64] FLAG: --add-dir-header="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120
10:49:36.246740 1 flags.go:64] FLAG: --allow-paths="[]" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246747 1 flags.go:64] FLAG: --alsologtostderr="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246751 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246754 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246758 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246761 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246765 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246768 1 flags.go:64] FLAG: --client-ca-file="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246771 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246774 1 flags.go:64] FLAG: --help="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246777 1 flags.go:64] FLAG: --http2-disable="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246780 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246784 1 flags.go:64] FLAG: --http2-max-size="262144" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246787 1 flags.go:64] FLAG: --ignore-paths="[]" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246804 1 flags.go:64] FLAG: --insecure-listen-address="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246807 1 flags.go:64] FLAG: --kube-api-burst="0" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246811 1 flags.go:64] FLAG: --kube-api-qps="0" 2026-01-20T10:49:36.249166196+00:00 stderr 
F I0120 10:49:36.246815 1 flags.go:64] FLAG: --kubeconfig="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246818 1 flags.go:64] FLAG: --log-backtrace-at="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246821 1 flags.go:64] FLAG: --log-dir="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246824 1 flags.go:64] FLAG: --log-file="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246827 1 flags.go:64] FLAG: --log-file-max-size="0" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246830 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246834 1 flags.go:64] FLAG: --logtostderr="true" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246837 1 flags.go:64] FLAG: --oidc-ca-file="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246840 1 flags.go:64] FLAG: --oidc-clientID="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246843 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246845 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246848 1 flags.go:64] FLAG: --oidc-issuer="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246852 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246863 1 flags.go:64] FLAG: --oidc-username-claim="email" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246866 1 flags.go:64] FLAG: --one-output="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246869 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246872 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:8443" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246875 1 flags.go:64] FLAG: --skip-headers="false" 2026-01-20T10:49:36.249166196+00:00 stderr F 
I0120 10:49:36.246878 1 flags.go:64] FLAG: --skip-log-headers="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246881 1 flags.go:64] FLAG: --stderrthreshold="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246884 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246887 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246897 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246900 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246903 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246908 1 flags.go:64] FLAG: --upstream="http://localhost:8080/" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246911 1 flags.go:64] FLAG: --upstream-ca-file="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246914 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246916 1 flags.go:64] FLAG: --upstream-client-key-file="" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246919 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246922 1 flags.go:64] FLAG: --v="3" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246925 1 flags.go:64] FLAG: --version="false" 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246929 1 flags.go:64] FLAG: --vmodule="" 2026-01-20T10:49:36.249166196+00:00 stderr F W0120 10:49:36.246936 1 
deprecated.go:66] 2026-01-20T10:49:36.249166196+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:49:36.249166196+00:00 stderr F 2026-01-20T10:49:36.249166196+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2026-01-20T10:49:36.249166196+00:00 stderr F 2026-01-20T10:49:36.249166196+00:00 stderr F =============================================== 2026-01-20T10:49:36.249166196+00:00 stderr F 2026-01-20T10:49:36.249166196+00:00 stderr F I0120 10:49:36.246947 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2026-01-20T10:49:36.250952061+00:00 stderr F I0120 10:49:36.249903 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:49:36.250952061+00:00 stderr F I0120 10:49:36.249979 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:49:36.256499520+00:00 stderr F I0120 10:49:36.256386 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443 2026-01-20T10:49:36.258700156+00:00 stderr F I0120 10:49:36.258675 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.log <==
2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.288377 1 flags.go:64] FLAG: --add-dir-header="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289195 1 flags.go:64] FLAG: --allow-paths="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289218 1 flags.go:64] FLAG: --alsologtostderr="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289224 1 flags.go:64] FLAG: --auth-header-fields-enabled="false"
2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289229 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289236 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289242 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289247 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289260 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289265 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289270 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289275 1 flags.go:64] FLAG: --http2-disable="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289281 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289288 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289293 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289306 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289311 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289318 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289416 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289423 1 flags.go:64] FLAG: --log-backtrace-at="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289429 1 flags.go:64] FLAG: --log-dir="" 
2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289434 1 flags.go:64] FLAG: --log-file="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289439 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289445 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289468 1 flags.go:64] FLAG: --logtostderr="true" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289474 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289479 1 flags.go:64] FLAG: --oidc-clientID="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289484 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289489 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289493 1 flags.go:64] FLAG: --oidc-issuer="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289498 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289519 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289523 1 flags.go:64] FLAG: --one-output="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289528 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289593 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:8443" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289600 1 flags.go:64] FLAG: --skip-headers="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289605 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289610 1 flags.go:64] FLAG: --stderrthreshold="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289614 1 flags.go:64] FLAG: 
--tls-cert-file="/etc/tls/private/tls.crt" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289619 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289642 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289648 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289659 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289668 1 flags.go:64] FLAG: --upstream="http://localhost:8080/" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289673 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289677 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289682 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-08-13T19:59:04.289699934+00:00 stderr F I0813 19:59:04.289686 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-08-13T19:59:04.289699934+00:00 stderr F I0813 19:59:04.289691 1 flags.go:64] FLAG: --v="3" 2025-08-13T19:59:04.289709695+00:00 stderr F I0813 19:59:04.289696 1 flags.go:64] FLAG: --version="false" 2025-08-13T19:59:04.289709695+00:00 stderr F I0813 19:59:04.289703 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:59:04.289906890+00:00 stderr F W0813 19:59:04.289737 1 deprecated.go:66] 2025-08-13T19:59:04.289906890+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:04.289906890+00:00 stderr F 2025-08-13T19:59:04.289906890+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:59:04.289906890+00:00 stderr F 2025-08-13T19:59:04.289906890+00:00 stderr F =============================================== 2025-08-13T19:59:04.289906890+00:00 stderr F 2025-08-13T19:59:04.289906890+00:00 stderr F I0813 19:59:04.289857 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:59:04.291096624+00:00 stderr F I0813 19:59:04.291044 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:04.291190227+00:00 stderr F I0813 19:59:04.291158 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:04.292125893+00:00 stderr F I0813 19:59:04.292006 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443 2025-08-13T19:59:04.292856924+00:00 stderr F I0813 19:59:04.292710 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443 2025-08-13T20:42:44.436262779+00:00 stderr F I0813 20:42:44.435618 1 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.log
2025-08-13T20:05:37.430602579+00:00 stderr F I0813 20:05:37.428176 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9
2025-08-13T20:05:37.431396461+00:00 stderr F I0813 20:05:37.431086 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:37.531196519+00:00 stderr F I0813 20:05:37.531136 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator... 2025-08-13T20:08:24.743848747+00:00 stderr F E0813 20:08:24.742720 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:12:03.283463562+00:00 stderr F I0813 20:12:03.282470 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator 2025-08-13T20:12:03.408514977+00:00 stderr F I0813 20:12:03.408278 1 operator.go:214] Starting Machine API Operator 2025-08-13T20:12:03.414883729+00:00 stderr F I0813 20:12:03.414653 1 reflector.go:289] Starting reflector *v1.DaemonSet (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.414883729+00:00 stderr F I0813 20:12:03.414725 1 reflector.go:325] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415272170+00:00 stderr F I0813 20:12:03.415179 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415292001+00:00 stderr F I0813 20:12:03.415267 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415292001+00:00 stderr F I0813 20:12:03.415270 1 reflector.go:289] Starting reflector *v1.Proxy (14m58.262499485s) from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415304351+00:00 stderr F I0813 20:12:03.415294 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415413054+00:00 stderr F I0813 20:12:03.415295 1 reflector.go:289] Starting reflector *v1.FeatureGate (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415431205+00:00 stderr F I0813 20:12:03.415408 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415558749+00:00 stderr F I0813 20:12:03.415230 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415558749+00:00 stderr F I0813 20:12:03.415521 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415901538+00:00 stderr F I0813 20:12:03.415833 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (17m0.773680079s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415901538+00:00 stderr F I0813 20:12:03.415866 1 reflector.go:325] Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416042702+00:00 stderr F I0813 20:12:03.416014 1 reflector.go:289] Starting reflector *v1.Deployment (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.416146305+00:00 stderr F I0813 20:12:03.416087 1 reflector.go:289] Starting reflector *v1.ClusterVersion (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 
2025-08-13T20:12:03.416183666+00:00 stderr F I0813 20:12:03.416143 1 reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416218927+00:00 stderr F I0813 20:12:03.416098 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.416357301+00:00 stderr F I0813 20:12:03.416207 1 reflector.go:289] Starting reflector *v1beta1.Machine (17m0.773680079s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416357301+00:00 stderr F I0813 20:12:03.416297 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416733772+00:00 stderr F I0813 20:12:03.416616 1 reflector.go:289] Starting reflector *v1.ClusterOperator (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416733772+00:00 stderr F I0813 20:12:03.416652 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.432283748+00:00 stderr F I0813 20:12:03.432183 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.432682600+00:00 stderr F I0813 20:12:03.432620 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.434528012+00:00 stderr F I0813 20:12:03.434460 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435013376+00:00 stderr F I0813 20:12:03.434965 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from 
github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.435326135+00:00 stderr F I0813 20:12:03.434887 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435607613+00:00 stderr F I0813 20:12:03.435557 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435745857+00:00 stderr F I0813 20:12:03.435723 1 reflector.go:351] Caches populated for *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.436151169+00:00 stderr F I0813 20:12:03.436098 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.442764949+00:00 stderr F I0813 20:12:03.442694 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.443384886+00:00 stderr F I0813 20:12:03.443356 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.508870364+00:00 stderr F I0813 20:12:03.508710 1 operator.go:226] Synced up caches 2025-08-13T20:12:03.509038179+00:00 stderr F I0813 20:12:03.509010 1 operator.go:231] Started feature gate accessor 2025-08-13T20:12:03.510557512+00:00 stderr F I0813 20:12:03.510481 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:12:03.510951624+00:00 stderr F I0813 20:12:03.510718 1 start.go:121] Synced up machine api informer caches 2025-08-13T20:12:03.511367516+00:00 stderr F I0813 20:12:03.511243 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 
2025-08-13T20:12:03.553660058+00:00 stderr F I0813 20:12:03.553520 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:12:03.564646243+00:00 stderr F I0813 20:12:03.564506 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:12:03.571000755+00:00 stderr F I0813 20:12:03.570848 1 status.go:99] Syncing status: available 2025-08-13T20:27:01.768104333+00:00 stderr F I0813 20:27:01.766359 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:27:01.777550173+00:00 stderr F I0813 20:27:01.777474 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:27:01.781471015+00:00 stderr F I0813 20:27:01.780930 1 status.go:99] Syncing status: available 2025-08-13T20:41:59.995734233+00:00 stderr F I0813 20:41:59.994921 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:42:00.004860526+00:00 stderr F I0813 20:42:00.003019 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:42:00.011051705+00:00 stderr F I0813 20:42:00.009869 1 status.go:99] Syncing status: available 2025-08-13T20:42:36.421768310+00:00 stderr F I0813 20:42:36.415200 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424103457+00:00 stderr F I0813 20:42:36.424040 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.415396 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.419956 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.443491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.415610 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421869 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421888 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421911 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.457922882+00:00 stderr F I0813 20:42:36.421929 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log
2025-08-13T19:59:30.422692088+00:00 stderr F I0813 19:59:30.324610 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9 2025-08-13T19:59:30.730222715+00:00 stderr F I0813 19:59:30.729311 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:32.264897550+00:00 stderr F I0813 19:59:32.241064 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator...
2025-08-13T19:59:32.786068206+00:00 stderr F I0813 19:59:32.785728 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator 2025-08-13T19:59:33.460151711+00:00 stderr F I0813 19:59:33.456613 1 reflector.go:289] Starting reflector *v1.DaemonSet (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.460264245+00:00 stderr F I0813 19:59:33.460239 1 reflector.go:325] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.467459410+00:00 stderr F I0813 19:59:33.466231 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.470166087+00:00 stderr F I0813 19:59:33.470142 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.470908838+00:00 stderr F I0813 19:59:33.470700 1 reflector.go:289] Starting reflector *v1.Deployment (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.471114444+00:00 stderr F I0813 19:59:33.471098 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.528297624+00:00 stderr F I0813 19:59:33.526236 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.528948693+00:00 stderr F I0813 19:59:33.528641 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.536238090+00:00 stderr F I0813 19:59:33.535890 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.546972206+00:00 stderr F I0813 19:59:33.536681 1 reflector.go:289] Starting reflector *v1beta1.Machine (11m3.181866381s) from 
github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.662896021+00:00 stderr F I0813 19:59:33.662732 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.677561929+00:00 stderr F I0813 19:59:33.552532 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (11m3.181866381s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.710037935+00:00 stderr F I0813 19:59:33.700993 1 reflector.go:325] Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.570180 1 operator.go:214] Starting Machine API Operator 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571397 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571579 1 reflector.go:289] Starting reflector *v1.Proxy (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571608 1 reflector.go:289] Starting reflector *v1.ClusterOperator (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571644 1 reflector.go:289] Starting reflector *v1.ClusterVersion (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.732953 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.734011 1 reflector.go:351] Caches populated for 
*v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.762266853+00:00 stderr F I0813 19:59:33.762202 1 reflector.go:289] Starting reflector *v1.FeatureGate (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.777463697+00:00 stderr F I0813 19:59:33.777395 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.813468923+00:00 stderr F I0813 19:59:33.813001 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.818254169+00:00 stderr F I0813 19:59:33.814688 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:34.934623872+00:00 stderr F I0813 19:59:34.934463 1 reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:34.982955680+00:00 stderr F I0813 19:59:34.978433 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.019614 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.033448 1 operator.go:226] Synced up caches 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.033478 1 operator.go:231] Started feature gate accessor 2025-08-13T19:59:35.037744732+00:00 stderr F I0813 19:59:35.034090 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:35.054888230+00:00 stderr F I0813 19:59:35.049334 1 reflector.go:351] 
Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.280303986+00:00 stderr F I0813 19:59:35.279731 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.305181 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", 
"MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.345068 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.345498 1 start.go:121] Synced up machine api informer caches 2025-08-13T19:59:35.863532660+00:00 stderr F I0813 19:59:35.854514 1 status.go:69] Syncing status: re-syncing 2025-08-13T19:59:35.998346093+00:00 stderr F I0813 19:59:35.927690 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T19:59:36.046953968+00:00 stderr F I0813 19:59:36.046744 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:36.141443632+00:00 stderr F I0813 19:59:36.141079 1 status.go:99] Syncing status: available 2025-08-13T19:59:36.367874066+00:00 stderr F I0813 19:59:36.366889 1 status.go:69] Syncing status: re-syncing 2025-08-13T19:59:36.407210128+00:00 stderr F I0813 19:59:36.405968 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T19:59:36.451908862+00:00 stderr F I0813 19:59:36.451686 1 status.go:99] Syncing status: available 2025-08-13T20:01:53.429105265+00:00 stderr F E0813 20:01:53.428030 1 leaderelection.go:369] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io 
"machine-api-operator": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:02:53.434100147+00:00 stderr F E0813 20:02:53.432992 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.444965011+00:00 stderr F E0813 20:03:53.443054 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.434373710+00:00 stderr F E0813 20:04:53.434088 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:34.054634372+00:00 stderr F I0813 20:05:34.050754 1 leaderelection.go:285] failed to renew lease openshift-machine-api/machine-api-operator: timed out waiting for the condition 2025-08-13T20:05:34.152035191+00:00 stderr F E0813 20:05:34.147127 1 leaderelection.go:308] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "machine-api-operator": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:05:34.165941200+00:00 stderr F F0813 20:05:34.165368 1 start.go:104] Leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log
2026-01-20T10:49:37.408116847+00:00 stderr F I0120 10:49:37.406494 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9 2026-01-20T10:49:37.408116847+00:00 stderr F I0120 10:49:37.407380 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:37.521102338+00:00 stderr F I0120 10:49:37.519443 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator... 2026-01-20T10:54:34.757725915+00:00 stderr F I0120 10:54:34.755690 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator 2026-01-20T10:54:34.770328382+00:00 stderr F I0120 10:54:34.769475 1 operator.go:214] Starting Machine API Operator 2026-01-20T10:54:34.770328382+00:00 stderr F I0120 10:54:34.769677 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (13m36.846470212s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.770366333+00:00 stderr F I0120 10:54:34.770317 1 reflector.go:289] Starting reflector *v1.FeatureGate (15m31.431202805s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.770366333+00:00 stderr F I0120 10:54:34.770336 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.770380063+00:00 stderr F I0120 10:54:34.770367 1 reflector.go:289]
Starting reflector *v1beta1.Machine (18m47.558923391s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2026-01-20T10:54:34.770380063+00:00 stderr F I0120 10:54:34.770376 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2026-01-20T10:54:34.771764180+00:00 stderr F I0120 10:54:34.770402 1 reflector.go:289] Starting reflector *v1.ClusterVersion (15m31.431202805s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.771764180+00:00 stderr F I0120 10:54:34.771745 1 reflector.go:289] Starting reflector *v1.Deployment (13m36.846470212s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.771783360+00:00 stderr F I0120 10:54:34.771756 1 reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.771783360+00:00 stderr F I0120 10:54:34.771763 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.771910363+00:00 stderr F I0120 10:54:34.771870 1 reflector.go:289] Starting reflector *v1.Proxy (15m31.431202805s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.771910363+00:00 stderr F I0120 10:54:34.771887 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.772211852+00:00 stderr F I0120 10:54:34.770338 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.772410528+00:00 stderr F I0120 10:54:34.772377 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (18m47.558923391s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 
2026-01-20T10:54:34.772410528+00:00 stderr F I0120 10:54:34.772394 1 reflector.go:325] Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2026-01-20T10:54:34.772457549+00:00 stderr F I0120 10:54:34.772439 1 reflector.go:289] Starting reflector *v1.ClusterOperator (15m31.431202805s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.772457549+00:00 stderr F I0120 10:54:34.772449 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.772566082+00:00 stderr F I0120 10:54:34.769813 1 reflector.go:289] Starting reflector *v1.DaemonSet (13m36.846470212s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.772996243+00:00 stderr F I0120 10:54:34.772563 1 reflector.go:325] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.777169514+00:00 stderr F I0120 10:54:34.777135 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2026-01-20T10:54:34.777477042+00:00 stderr F I0120 10:54:34.777408 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (13m36.846470212s) from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.777477042+00:00 stderr F I0120 10:54:34.777460 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.777549814+00:00 stderr F I0120 10:54:34.777495 1 reflector.go:351] Caches populated for *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2026-01-20T10:54:34.777740780+00:00 stderr F I0120 10:54:34.777705 1 reflector.go:351] Caches populated for *v1.FeatureGate from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.778033848+00:00 stderr F I0120 10:54:34.777974 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.782548637+00:00 stderr F I0120 10:54:34.782445 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.782895696+00:00 stderr F I0120 10:54:34.782836 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.786407021+00:00 stderr F I0120 10:54:34.786377 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.787117879+00:00 stderr F I0120 10:54:34.787054 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:34.787697905+00:00 stderr F I0120 10:54:34.787676 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.791503866+00:00 stderr F I0120 10:54:34.791463 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:54:34.871139799+00:00 stderr F I0120 10:54:34.870631 1 start.go:121] Synced up machine api informer caches 2026-01-20T10:54:34.876526713+00:00 stderr F I0120 10:54:34.876510 1 operator.go:226] Synced up caches 2026-01-20T10:54:34.876566834+00:00 stderr F I0120 10:54:34.876556 1 operator.go:231] Started feature gate accessor 2026-01-20T10:54:34.876622506+00:00 stderr F I0120 10:54:34.876595 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:54:34.876904793+00:00 stderr F I0120 10:54:34.876829 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", 
"SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:54:34.890795383+00:00 stderr F I0120 10:54:34.890701 1 status.go:69] Syncing status: re-syncing 2026-01-20T10:54:34.896734292+00:00 stderr F I0120 10:54:34.896688 1 sync.go:75] Provider is NoOp, skipping synchronisation 2026-01-20T10:54:34.899340141+00:00 stderr F I0120 10:54:34.899298 1 status.go:99] Syncing status: available 2026-01-20T10:57:34.790567999+00:00 stderr F E0120 10:57:34.789947 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000755000175000017500000000000015133657716033043 5ustar zuulzuul././@LongLink0000644000000000000000000000033600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000755000175000017500000000000015133657737033046 5ustar zuulzuul././@LongLink0000644000000000000000000000034300000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000644000175000017500000005547515133657716033065 0ustar zuulzuul2025-08-13T20:05:31.350875728+00:00 stdout F Overwriting root TLS certificate authority trust store 2025-08-13T20:05:34.560759116+00:00 stderr F I0813 20:05:34.538410 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:34.573181841+00:00 stderr F I0813 20:05:34.572394 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:35.152155431+00:00 stderr F I0813 20:05:35.151753 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:35.153203081+00:00 stderr F I0813 20:05:35.152517 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers... 
2025-08-13T20:05:35.236413874+00:00 stderr F I0813 20:05:35.234124 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2025-08-13T20:05:35.236594939+00:00 stderr F I0813 20:05:35.236561 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2025-08-13T20:05:35.236630760+00:00 stderr F I0813 20:05:35.236617 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2025-08-13T20:05:35.236661851+00:00 stderr F I0813 20:05:35.236649 1 main.go:35] Go OS/Arch: linux/amd64 2025-08-13T20:05:35.236703192+00:00 stderr F I0813 20:05:35.236680 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 2025-08-13T20:05:35.237208107+00:00 stderr F I0813 20:05:35.236940 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_54cb8f8a-f1fe-4304-818b-373b62828b0d became leader 2025-08-13T20:05:35.432352325+00:00 stderr F I0813 20:05:35.432268 1 metrics.go:88] Starting MetricsController 2025-08-13T20:05:35.433254951+00:00 stderr F I0813 20:05:35.433184 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435108 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435151 1 nodecadaemon.go:202] Starting NodeCADaemonController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.433210 1 imageconfig.go:86] Starting ImageConfigController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435274 1 azurestackcloud.go:172] Starting 
AzureStackCloudController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435290 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2025-08-13T20:05:35.444151743+00:00 stderr F I0813 20:05:35.441931 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900142 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900192 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900349 1 nodecadaemon.go:209] Started NodeCADaemonController 2025-08-13T20:05:35.977097965+00:00 stderr F W0813 20:05:35.881106 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:35.977097965+00:00 stderr F E0813 20:05:35.920501 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:35.977097965+00:00 stderr F I0813 20:05:35.924914 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.977212858+00:00 stderr F W0813 20:05:35.896004 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:35.977301521+00:00 stderr F E0813 20:05:35.977280 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list 
*v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:35.977617480+00:00 stderr F I0813 20:05:35.977441 1 azurestackcloud.go:179] Started AzureStackCloudController 2025-08-13T20:05:35.996186362+00:00 stderr F I0813 20:05:35.988230 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.013418735+00:00 stderr F I0813 20:05:36.013360 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.030196465+00:00 stderr F I0813 20:05:36.028088 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.035365954+00:00 stderr F I0813 20:05:36.035338 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2025-08-13T20:05:36.036283540+00:00 stderr F I0813 20:05:36.036260 1 azurepathfixcontroller.go:324] Started AzurePathFixController 2025-08-13T20:05:36.073980569+00:00 stderr F I0813 20:05:36.067391 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.133655429+00:00 stderr F I0813 20:05:36.133375 1 controllerimagepruner.go:386] Starting ImagePrunerController 2025-08-13T20:05:36.280345169+00:00 stderr F I0813 20:05:36.280188 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.333671476+00:00 stderr F I0813 20:05:36.333611 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2025-08-13T20:05:37.020230946+00:00 stderr F W0813 20:05:37.018982 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:37.020230946+00:00 stderr F E0813 20:05:37.019044 1 reflector.go:147] 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:37.281066526+00:00 stderr F W0813 20:05:37.280612 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:37.281066526+00:00 stderr F E0813 20:05:37.280700 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:39.504821258+00:00 stderr F W0813 20:05:39.504581 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:39.504821258+00:00 stderr F E0813 20:05:39.504637 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:40.202603800+00:00 stderr F W0813 20:05:40.202537 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:40.202693703+00:00 stderr F E0813 20:05:40.202679 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get 
imagestreams.image.openshift.io) 2025-08-13T20:05:45.347291943+00:00 stderr F W0813 20:05:45.346582 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:45.347291943+00:00 stderr F E0813 20:05:45.347206 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:45.904703685+00:00 stderr F W0813 20:05:45.904527 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:45.904703685+00:00 stderr F E0813 20:05:45.904584 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:52.005179407+00:00 stderr F I0813 20:05:52.004586 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:05:52.033375184+00:00 stderr F I0813 20:05:52.033311 1 metrics.go:94] Started MetricsController 2025-08-13T20:05:54.197518657+00:00 stderr F I0813 20:05:54.197085 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:05:54.236459062+00:00 stderr F I0813 20:05:54.236312 1 imageconfig.go:93] Started ImageConfigController 2025-08-13T20:05:54.236585266+00:00 stderr F I0813 20:05:54.236562 1 controller.go:452] Starting Controller 
2025-08-13T20:07:14.860751489+00:00 stderr F E0813 20:07:14.860066 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:14.864649781+00:00 stderr F E0813 20:07:14.864593 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:30.246382098+00:00 stderr F W0813 20:07:30.245491 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:30.246382098+00:00 stderr F E0813 20:07:30.246168 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:37.313181438+00:00 stderr F I0813 20:07:37.312479 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:08:10.069237273+00:00 stderr F I0813 20:08:10.067066 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:08:35.407215323+00:00 stderr F E0813 20:08:35.406443 1 leaderelection.go:332] error retrieving resource lock openshift-image-registry/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:09:29.484992848+00:00 stderr F I0813 20:09:29.484498 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:33.942276903+00:00 stderr F I0813 20:09:33.941268 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.313329223+00:00 stderr F I0813 20:09:36.312206 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:37.685525694+00:00 stderr F I0813 20:09:37.685404 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:39.680454581+00:00 stderr F I0813 20:09:39.678471 1 reflector.go:351] Caches populated for *v1.ImagePruner from github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T20:09:42.768944279+00:00 stderr F I0813 20:09:42.767471 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.115880377+00:00 stderr F I0813 20:09:43.115025 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:44.458981734+00:00 stderr F I0813 20:09:44.458753 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.363674635+00:00 stderr F I0813 20:09:48.363526 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:49.508340314+00:00 stderr F I0813 20:09:49.507835 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:52.284846749+00:00 stderr F I0813 20:09:52.284085 1 reflector.go:351] Caches populated for *v1.Infrastructure from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:57.366653799+00:00 stderr F I0813 20:09:57.364300 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:07.683255424+00:00 stderr F I0813 20:10:07.681191 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.606673474+00:00 stderr F I0813 20:10:15.605733 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.953871779+00:00 stderr F I0813 20:10:15.953646 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.627732939+00:00 stderr F I0813 20:10:29.627031 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:30.282694577+00:00 stderr F I0813 20:10:30.282122 1 reflector.go:351] Caches populated for *v1.Config from github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T20:10:33.630065369+00:00 stderr F I0813 20:10:33.628984 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:43.053059944+00:00 stderr F I0813 20:10:43.052423 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:43.982593184+00:00 stderr F I0813 20:10:43.981760 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:36.400650211+00:00 stderr F I0813 20:42:36.399905 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.404340367+00:00 stderr F I0813 20:42:36.404002 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415105747+00:00 stderr 
F I0813 20:42:36.408974 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415105747+00:00 stderr F I0813 20:42:36.409471 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.425743904+00:00 stderr F I0813 20:42:36.398202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.499699596+00:00 stderr F I0813 20:42:36.499303 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516513441+00:00 stderr F I0813 20:42:36.516428 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541373478+00:00 stderr F I0813 20:42:36.541150 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568582222+00:00 stderr F I0813 20:42:36.568013 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.584575623+00:00 stderr F I0813 20:42:36.584446 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.584976065+00:00 stderr F I0813 20:42:36.584877 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.585458739+00:00 stderr F I0813 20:42:36.585398 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586120668+00:00 stderr F I0813 20:42:36.586047 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586513629+00:00 stderr F I0813 20:42:36.586436 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586865359+00:00 stderr F I0813 20:42:36.585473 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.586865359+00:00 stderr F I0813 20:42:36.586739 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.587276561+00:00 stderr F I0813 20:42:36.587143 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.587560869+00:00 stderr F I0813 20:42:36.586722 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.588023573+00:00 stderr F I0813 20:42:36.587939 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.588713133+00:00 stderr F I0813 20:42:36.588590 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.588987971+00:00 stderr F I0813 20:42:36.588931 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.589666410+00:00 stderr F I0813 20:42:36.586706 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.715605201+00:00 stderr F E0813 20:42:36.715085 1 leaderelection.go:332] error retrieving resource lock openshift-image-registry/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.215010560+00:00 stderr F I0813 20:42:39.213109 1 main.go:52] Received SIGTERM or SIGINT signal, shutting down the operator. 2025-08-13T20:42:39.217512372+00:00 stderr F I0813 20:42:39.217454 1 controllerimagepruner.go:390] Shutting down ImagePrunerController ... 2025-08-13T20:42:39.217538183+00:00 stderr F I0813 20:42:39.217517 1 controller.go:456] Shutting down Controller ... 
2025-08-13T20:42:39.217561664+00:00 stderr F I0813 20:42:39.217544 1 imageconfig.go:95] Shutting down ImageConfigController 2025-08-13T20:42:39.217573774+00:00 stderr F I0813 20:42:39.217567 1 metrics.go:96] Shutting down MetricsController 2025-08-13T20:42:39.218483870+00:00 stderr F I0813 20:42:39.217577 1 imageregistrycertificates.go:216] Shutting down ImageRegistryCertificatesController 2025-08-13T20:42:39.218851141+00:00 stderr F I0813 20:42:39.218713 1 leaderelection.go:285] failed to renew lease openshift-image-registry/openshift-master-controllers: timed out waiting for the condition 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222866 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222930 1 azurepathfixcontroller.go:326] Shutting down AzurePathFixController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222943 1 clusteroperator.go:152] Shutting down ClusterOperatorStatusController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222953 1 azurestackcloud.go:181] Shutting down AzureStackCloudController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222962 1 nodecadaemon.go:211] Shutting down NodeCADaemonController 2025-08-13T20:42:39.224584866+00:00 stderr F I0813 20:42:39.223740 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:39.226353487+00:00 stderr F I0813 20:42:39.226310 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:39.229605441+00:00 stderr F E0813 20:42:39.229511 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.230585389+00:00 stderr F W0813 20:42:39.230513 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.log
2026-01-20T10:49:34.385632414+00:00 stdout F Overwriting root TLS certificate authority trust store 2026-01-20T10:49:35.346125050+00:00 stderr F I0120 10:49:35.343734 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:35.348215133+00:00 stderr F I0120 10:49:35.346269 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:35.412190972+00:00 stderr F I0120 10:49:35.411961 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:35.412789080+00:00 stderr F I0120 10:49:35.412741 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers...
2026-01-20T10:55:15.424617479+00:00 stderr F I0120 10:55:15.423917 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2026-01-20T10:55:15.424617479+00:00 stderr F I0120 10:55:15.424048 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42084", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_0e211d34-faf9-491b-828f-db4a261a6a98 became leader 2026-01-20T10:55:15.424662520+00:00 stderr F I0120 10:55:15.424640 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2026-01-20T10:55:15.424674270+00:00 stderr F I0120 10:55:15.424659 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2026-01-20T10:55:15.424674270+00:00 stderr F I0120 10:55:15.424667 1 main.go:35] Go OS/Arch: linux/amd64 2026-01-20T10:55:15.424716971+00:00 stderr F I0120 10:55:15.424675 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 
2026-01-20T10:55:15.440317298+00:00 stderr F I0120 10:55:15.440218 1 metrics.go:88] Starting MetricsController 2026-01-20T10:55:15.441356935+00:00 stderr F I0120 10:55:15.441290 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2026-01-20T10:55:15.441356935+00:00 stderr F I0120 10:55:15.441344 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2026-01-20T10:55:15.441395456+00:00 stderr F I0120 10:55:15.441361 1 nodecadaemon.go:202] Starting NodeCADaemonController 2026-01-20T10:55:15.441395456+00:00 stderr F I0120 10:55:15.441379 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2026-01-20T10:55:15.441412526+00:00 stderr F I0120 10:55:15.441392 1 imageconfig.go:86] Starting ImageConfigController 2026-01-20T10:55:15.441637472+00:00 stderr F I0120 10:55:15.441449 1 azurestackcloud.go:172] Starting AzureStackCloudController 2026-01-20T10:55:15.441637472+00:00 stderr F I0120 10:55:15.441456 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:55:15.615542908+00:00 stderr F I0120 10:55:15.615287 1 azurestackcloud.go:179] Started AzureStackCloudController 2026-01-20T10:55:15.615542908+00:00 stderr F I0120 10:55:15.615374 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2026-01-20T10:55:15.615542908+00:00 stderr F I0120 10:55:15.615419 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:55:15.615622710+00:00 stderr F I0120 10:55:15.615541 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2026-01-20T10:55:15.615622710+00:00 stderr F I0120 10:55:15.615443 1 imageconfig.go:93] Started ImageConfigController 2026-01-20T10:55:15.615622710+00:00 stderr F I0120 10:55:15.615443 1 nodecadaemon.go:209] Started NodeCADaemonController 2026-01-20T10:55:15.615622710+00:00 stderr F I0120 10:55:15.615333 1 azurepathfixcontroller.go:324] Started AzurePathFixController 2026-01-20T10:55:15.618043215+00:00 stderr F I0120 10:55:15.617976 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:15.627144017+00:00 stderr F I0120 10:55:15.626908 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2026-01-20T10:55:15.649394220+00:00 stderr F I0120 10:55:15.649271 1 metrics.go:94] Started MetricsController 2026-01-20T10:55:15.650054179+00:00 stderr F I0120 10:55:15.649954 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:15.677815049+00:00 stderr F I0120 10:55:15.677710 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:15.742215504+00:00 stderr F I0120 10:55:15.742134 1 controllerimagepruner.go:386] Starting ImagePrunerController 2026-01-20T10:55:15.742284657+00:00 stderr F I0120 10:55:15.742216 1 controller.go:452] Starting Controller 2026-01-20T10:55:15.742284657+00:00 stderr F I0120 10:55:15.742249 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2026-01-20T10:55:15.758991091+00:00 stderr F I0120 10:55:15.757975 1 generator.go:62] object *v1.Secret, Namespace=openshift-image-registry, Name=installation-pull-secrets updated: changed:data..dockerconfigjson={ -> }, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:134d2023417aa99dc70c099f12731fc3d94cb8fe5fef3d499d5c1ff70d124cfb" -> 
"sha256:cd9658903c20944eb60992db7d1167845b9660771a71716e1875bc10e5145610"}, changed:metadata.managedFields.0.time={"2025-08-13T20:00:23Z" -> "2026-01-20T10:55:15Z"}, changed:metadata.resourceVersion={"29461" -> "42088"} 2026-01-20T10:55:16.046687531+00:00 stderr F I0120 10:55:16.046158 1 apps.go:154] Deployment "openshift-image-registry/image-registry" changes: {"metadata":{"annotations":{"imageregistry.operator.openshift.io/checksum":"sha256:3616891ac97a04dff8f52e8fc01cee609bbfbe0247bfa3ef0f9ebbcc435b27f1","operator.openshift.io/spec-hash":"3abd68f3c2e68f9a4d2c85d68647a58f3da61ebcaeeecc1baedcf649ce0065c8"}},"spec":{"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"imageregistry.operator.openshift.io/dependencies-checksum":"sha256:53869d9c320e001c11f9e0c8b26efab68c1d93a6051736c231681cabec99482e"}},"spec":{"containers":[{"command":["/bin/sh","-c","mkdir -p /etc/pki/ca-trust/extracted/edk2 /etc/pki/ca-trust/extracted/java /etc/pki/ca-trust/extracted/openssl /etc/pki/ca-trust/extracted/pem \u0026\u0026 update-ca-trust extract \u0026\u0026 exec 
/usr/bin/dockerregistry"],"env":[{"name":"REGISTRY_STORAGE","value":"filesystem"},{"name":"REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY","value":"/registry"},{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"56a187e4d57194474f191fc1c74d365975dcd1d1b128301855ac1d25d0b497b3a0ba226d71cd71c0d3f1e86c45bc36460c308cfd8ce5ec72262061fe2bf42b78"},{"name":"REGISTRY_LOG_LEVEL","value":"info"},{"name":"REGISTRY_OPENSHIFT_QUOTA_ENABLED","value":"true"},{"name":"REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR","value":"inmemory"},{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_INTERVAL","value":"10s"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_THRESHOLD","value":"1"},{"name":"REGISTRY_OPENSHIFT_METRICS_ENABLED","value":"true"},{"name":"REGISTRY_OPENSHIFT_SERVER_ADDR","value":"image-registry.openshift-image-registry.svc:5000"},{"name":"REGISTRY_HTTP_TLS_CERTIFICATE","value":"/etc/secrets/tls.crt"},{"name":"REGISTRY_HTTP_TLS_KEY","value":"/etc/secrets/tls.key"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"registry","ports":[{"containerPort":5000,"protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":15,"timeoutSeconds":5},"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/registry","name":"registry-storage"},{"mountPath":"/etc/secrets","name":"registry-tls"},{"mountPath":"/etc/pki/ca-trust/extracted","name":"ca-trust-extracted"},{"mountPath":"/etc/pki/ca-trust/source/anchors","name":"registry-ce
rtificates"},{"mountPath":"/usr/share/pki/ca-trust-source","name":"trusted-ca"},{"mountPath":"/var/lib/kubelet/","name":"installation-pull-secrets"},{"mountPath":"/var/run/secrets/openshift/serviceaccount","name":"bound-sa-token","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"volumes":[{"name":"registry-storage","persistentVolumeClaim":{"claimName":"crc-image-registry-storage"}},{"name":"registry-tls","projected":{"sources":[{"secret":{"name":"image-registry-tls"}}]}},{"emptyDir":{},"name":"ca-trust-extracted"},{"configMap":{"name":"image-registry-certificates"},"name":"registry-certificates"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"anchors/ca-bundle.crt"}],"name":"trusted-ca","optional":true},"name":"trusted-ca"},{"name":"installation-pull-secrets","secret":{"items":[{"key":".dockerconfigjson","path":"config.json"}],"optional":true,"secretName":"installation-pull-secrets"}},{"name":"bound-sa-token","projected":{"sources":[{"serviceAccountToken":{"audience":"openshift","path":"token"}}]}}]}}}} 2026-01-20T10:55:16.061494016+00:00 stderr F I0120 10:55:16.061423 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator", UID:"485aecbc-d986-4290-a12b-2be6eccbc76c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/image-registry -n openshift-image-registry because it changed 2026-01-20T10:55:16.062198805+00:00 stderr F I0120 10:55:16.061955 1 generator.go:62] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6" -> "sha256:3616891ac97a04dff8f52e8fc01cee609bbfbe0247bfa3ef0f9ebbcc435b27f1"}, 
changed:metadata.annotations.operator.openshift.io/spec-hash={"2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da" -> "3abd68f3c2e68f9a4d2c85d68647a58f3da61ebcaeeecc1baedcf649ce0065c8"}, changed:metadata.generation={"4.000000" -> "5.000000"}, changed:metadata.managedFields.0.manager={"cluster-image-registry-operator" -> "kube-controller-manager"}, added:metadata.managedFields.0.subresource="status", changed:metadata.managedFields.0.time={"2025-08-13T20:00:24Z" -> "2026-01-20T10:51:56Z"}, changed:metadata.managedFields.1.manager={"kube-controller-manager" -> "cluster-image-registry-operator"}, removed:metadata.managedFields.1.subresource="status", changed:metadata.managedFields.1.time={"2026-01-20T10:51:56Z" -> "2026-01-20T10:55:16Z"}, changed:metadata.resourceVersion={"41561" -> "42091"}, changed:spec.template.metadata.annotations.imageregistry.operator.openshift.io/dependencies-checksum={"sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10" -> "sha256:53869d9c320e001c11f9e0c8b26efab68c1d93a6051736c231681cabec99482e"} 2026-01-20T10:55:16.063175091+00:00 stderr F I0120 10:55:16.063137 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-08-13T20:01:21Z" -> "2026-01-20T10:55:16Z"}, changed:status.conditions.0.message={"The registry is ready" -> "The deployment has not completed"}, changed:status.conditions.0.reason={"Ready" -> "DeploymentNotCompleted"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.1.message={"The registry is ready" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"Ready" -> "MinimumAvailability"}, changed:status.generations.1.lastGeneration={"4.000000" -> "5.000000"} 2026-01-20T10:55:16.084300364+00:00 stderr F I0120 10:55:16.083927 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: removed:apiVersion="config.openshift.io/v1", 
removed:kind="ClusterOperator", changed:metadata.managedFields.2.time={"2025-08-13T20:01:21Z" -> "2026-01-20T10:55:16Z"}, changed:metadata.resourceVersion={"30445" -> "42096"}, changed:status.conditions.0.message={"Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"Ready" -> "MinimumAvailability"}, changed:status.conditions.1.lastTransitionTime={"2025-08-13T20:01:21Z" -> "2026-01-20T10:55:16Z"}, changed:status.conditions.1.message={"Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"Ready" -> "DeploymentNotCompleted"}, changed:status.conditions.1.status={"False" -> "True"} 2026-01-20T10:55:36.546722173+00:00 stderr F I0120 10:55:36.546636 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2026-01-20T10:55:16Z" -> "2026-01-20T10:55:36Z"}, changed:status.conditions.0.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.0.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.0.status={"True" -> "False"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"} 2026-01-20T10:55:36.568221507+00:00 stderr F I0120 10:55:36.568158 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2026-01-20T10:55:16Z" -> "2026-01-20T10:55:36Z"}, 
changed:metadata.resourceVersion={"42096" -> "42236"}, changed:status.conditions.0.message={"Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"MinimumAvailability" -> "Ready"}, changed:status.conditions.1.lastTransitionTime={"2026-01-20T10:55:16Z" -> "2026-01-20T10:55:36Z"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.1.status={"True" -> "False"} 2026-01-20T10:57:15.441821745+00:00 stderr F E0120 10:57:15.441029 1 leaderelection.go:332] error retrieving resource lock openshift-image-registry/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:58:15.244344360+00:00 stderr F I0120 10:58:15.243554 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.log
2025-08-13T19:59:07.238527301+00:00 stdout F Overwriting root TLS certificate authority trust store 2025-08-13T19:59:32.468279537+00:00 stderr F I0813 19:59:32.410410 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:32.532267201+00:00 stderr F I0813 19:59:32.529815 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:34.559928351+00:00 stderr F I0813 19:59:34.555025 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:34.559928351+00:00 stderr F I0813 19:59:34.559332 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers... 
2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.698956 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699759 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699856 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699863 1 main.go:35] Go OS/Arch: linux/amd64 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699996 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.706495 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28177", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_6e4ef831-9ca6-4914-905d-7214cad92174 became leader 2025-08-13T19:59:36.902238389+00:00 stderr F I0813 19:59:36.899333 1 metrics.go:88] Starting MetricsController 2025-08-13T19:59:36.902238389+00:00 stderr F I0813 19:59:36.901727 1 nodecadaemon.go:202] Starting NodeCADaemonController 2025-08-13T19:59:37.182626381+00:00 stderr F I0813 19:59:37.158995 1 azurestackcloud.go:172] Starting AzureStackCloudController 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.189139 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.191135 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.191173 1 imageconfig.go:86] Starting 
ImageConfigController 2025-08-13T19:59:37.243681972+00:00 stderr F I0813 19:59:37.242191 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2025-08-13T19:59:37.247402978+00:00 stderr F I0813 19:59:37.247377 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2025-08-13T19:59:37.413710078+00:00 stderr F I0813 19:59:37.381885 1 azurestackcloud.go:179] Started AzureStackCloudController 2025-08-13T19:59:37.501316355+00:00 stderr F I0813 19:59:37.444165 1 nodecadaemon.go:209] Started NodeCADaemonController 2025-08-13T19:59:38.007381131+00:00 stderr F E0813 19:59:38.007004 1 azurestackcloud.go:76] AzureStackCloudController: unable to sync: config.imageregistry.operator.openshift.io "cluster" not found, requeuing 2025-08-13T19:59:39.197697421+00:00 stderr F I0813 19:59:39.197219 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:39.522447768+00:00 stderr F I0813 19:59:39.197982 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T19:59:39.591490816+00:00 stderr F I0813 19:59:39.577516 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.591490816+00:00 stderr F I0813 19:59:39.579410 1 reflector.go:351] Caches populated for *v1.ImagePruner from github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T19:59:39.591490816+00:00 stderr F W0813 19:59:39.579746 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:39.591490816+00:00 stderr F E0813 19:59:39.580001 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:39.652526036+00:00 stderr F W0813 19:59:39.652136 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:39.652526036+00:00 stderr F E0813 19:59:39.652274 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:39.781990576+00:00 stderr F I0813 19:59:39.757255 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:39.901159103+00:00 stderr F I0813 19:59:39.881095 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.901159103+00:00 stderr F 
I0813 19:59:39.888743 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.901159103+00:00 stderr F I0813 19:59:39.889591 1 azurepathfixcontroller.go:324] Started AzurePathFixController 2025-08-13T19:59:39.948356619+00:00 stderr F I0813 19:59:39.948219 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.983099779+00:00 stderr F I0813 19:59:39.982996 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2025-08-13T19:59:40.086146276+00:00 stderr F I0813 19:59:40.079998 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:40.148072042+00:00 stderr F I0813 19:59:40.148002 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:40.246921819+00:00 stderr F I0813 19:59:40.246315 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2025-08-13T19:59:40.820345945+00:00 stderr F W0813 19:59:40.797598 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:40.820481959+00:00 stderr F E0813 19:59:40.820460 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:41.074455408+00:00 stderr F W0813 19:59:41.074352 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:41.074455408+00:00 stderr F E0813 
19:59:41.074410 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:41.135983282+00:00 stderr F I0813 19:59:41.135814 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: removed:apiVersion="config.openshift.io/v1", removed:kind="ClusterOperator", changed:metadata.managedFields.2.time={"2024-06-27T13:34:18Z" -> "2025-08-13T19:59:40Z"}, changed:metadata.resourceVersion={"23930" -> "28282"}, changed:status.conditions.0.message={"Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca does not have available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"NoReplicasAvailable::NodeCADaemonNoAvailableReplicas" -> "NoReplicasAvailable"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deploying node pods" -> "Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted::NodeCADaemonUnavailable" -> "DeploymentNotCompleted"} 2025-08-13T19:59:41.141502780+00:00 stderr F I0813 19:59:41.136956 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:41.307344807+00:00 stderr F I0813 19:59:41.306542 1 controllerimagepruner.go:386] Starting ImagePrunerController 2025-08-13T19:59:42.817613347+00:00 stderr F W0813 19:59:42.814334 1 reflector.go:539] 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:42.817613347+00:00 stderr F E0813 19:59:42.815089 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:43.437966550+00:00 stderr F W0813 19:59:43.437445 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:43.437966550+00:00 stderr F E0813 19:59:43.437509 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:47.824966453+00:00 stderr F W0813 19:59:47.824230 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:47.824966453+00:00 stderr F E0813 19:59:47.824864 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:48.187447036+00:00 stderr F W0813 19:59:48.187101 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 
2025-08-13T19:59:48.187447036+00:00 stderr F E0813 19:59:48.187195 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:54.721302158+00:00 stderr F W0813 19:59:54.720490 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:54.721302158+00:00 stderr F E0813 19:59:54.721160 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:58.226936117+00:00 stderr F W0813 19:59:58.226058 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:58.226936117+00:00 stderr F E0813 19:59:58.226686 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:15.597920077+00:00 stderr F I0813 20:00:15.596690 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:00:15.602244620+00:00 stderr F I0813 20:00:15.602202 1 metrics.go:94] Started MetricsController 2025-08-13T20:00:17.585059728+00:00 stderr F I0813 20:00:17.568887 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-image-registry, Name=serviceca updated: 
removed:metadata.annotations.openshift.io/description="Configmap is added/updated with a data item containing the CA signing bundle that can be used to verify service-serving certificates", removed:metadata.annotations.openshift.io/owning-component="service-ca", changed:metadata.resourceVersion={"29256" -> "29269"} 2025-08-13T20:00:17.629035862+00:00 stderr F I0813 20:00:17.598084 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-image-registry, Name=image-registry-certificates updated: changed:data.image-registry.openshift-image-registry.svc..5000={"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:data.image-registry.openshift-image-registry.svc.cluster.local..5000={"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:f0bdbd4b4b483117b7890bf3f3d072b58bd5c5ac221322fb5c069a25e1e8cb66" -> "sha256:653bf3ca288d3df55507222afe41f33beb70f23514182de2cc295244a1d79403"}, changed:metadata.managedFields.0.time={"2024-06-27T13:18:57Z" -> "2025-08-13T20:00:17Z"}, changed:metadata.resourceVersion={"18030" -> "29265"} 2025-08-13T20:00:17.949627914+00:00 stderr F I0813 20:00:17.948689 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-config-managed, Name=image-registry-ca updated: changed:data.image-registry.openshift-image-registry.svc..5000={"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:data.image-registry.openshift-image-registry.svc.cluster.local..5000={"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:d537ad4b8475d1cab44f6a9af72391f500985389b6664e1dba8e546d76b40b02" -> "sha256:8a4b2012ebacc19d57e4aae623e3012d9b0e3cf1957da5588850ec19f97baa40"}, changed:metadata.managedFields.0.time={"2024-06-27T13:18:53Z" -> "2025-08-13T20:00:17Z"}, changed:metadata.resourceVersion={"17963" -> "29281"} 2025-08-13T20:00:22.798386460+00:00 stderr F I0813 20:00:22.779164 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:00:22.798386460+00:00 stderr F I0813 20:00:22.792146 1 imageconfig.go:93] Started ImageConfigController 2025-08-13T20:00:22.848609373+00:00 stderr F I0813 20:00:22.848251 1 
controller.go:452] Starting Controller 2025-08-13T20:00:23.585293700+00:00 stderr F I0813 20:00:23.584565 1 generator.go:62] object *v1.Secret, Namespace=openshift-image-registry, Name=installation-pull-secrets updated: changed:data..dockerconfigjson={ -> }, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:085fdb2709b57d501872b4e20b38e3618d21be40f24851b4fad2074469e1fa6d" -> "sha256:134d2023417aa99dc70c099f12731fc3d94cb8fe5fef3d499d5c1ff70d124cfb"}, changed:metadata.managedFields.0.time={"2024-06-27T13:34:15Z" -> "2025-08-13T20:00:23Z"}, changed:metadata.resourceVersion={"23543" -> "29461"} 2025-08-13T20:00:24.745952665+00:00 stderr F I0813 20:00:24.745340 1 apps.go:154] Deployment "openshift-image-registry/image-registry" changes: {"metadata":{"annotations":{"imageregistry.operator.openshift.io/checksum":"sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6","operator.openshift.io/spec-hash":"2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da"}},"spec":{"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"imageregistry.operator.openshift.io/dependencies-checksum":"sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10"}},"spec":{"containers":[{"command":["/bin/sh","-c","mkdir -p /etc/pki/ca-trust/extracted/edk2 /etc/pki/ca-trust/extracted/java /etc/pki/ca-trust/extracted/openssl /etc/pki/ca-trust/extracted/pem \u0026\u0026 update-ca-trust extract \u0026\u0026 exec 
/usr/bin/dockerregistry"],"env":[{"name":"REGISTRY_STORAGE","value":"filesystem"},{"name":"REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY","value":"/registry"},{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"56a187e4d57194474f191fc1c74d365975dcd1d1b128301855ac1d25d0b497b3a0ba226d71cd71c0d3f1e86c45bc36460c308cfd8ce5ec72262061fe2bf42b78"},{"name":"REGISTRY_LOG_LEVEL","value":"info"},{"name":"REGISTRY_OPENSHIFT_QUOTA_ENABLED","value":"true"},{"name":"REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR","value":"inmemory"},{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_INTERVAL","value":"10s"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_THRESHOLD","value":"1"},{"name":"REGISTRY_OPENSHIFT_METRICS_ENABLED","value":"true"},{"name":"REGISTRY_OPENSHIFT_SERVER_ADDR","value":"image-registry.openshift-image-registry.svc:5000"},{"name":"REGISTRY_HTTP_TLS_CERTIFICATE","value":"/etc/secrets/tls.crt"},{"name":"REGISTRY_HTTP_TLS_KEY","value":"/etc/secrets/tls.key"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"registry","ports":[{"containerPort":5000,"protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":15,"timeoutSeconds":5},"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/registry","name":"registry-storage"},{"mountPath":"/etc/secrets","name":"registry-tls"},{"mountPath":"/etc/pki/ca-trust/extracted","name":"ca-trust-extracted"},{"mountPath":"/etc/pki/ca-trust/source/anchors","name":"registry-ce
rtificates"},{"mountPath":"/usr/share/pki/ca-trust-source","name":"trusted-ca"},{"mountPath":"/var/lib/kubelet/","name":"installation-pull-secrets"},{"mountPath":"/var/run/secrets/openshift/serviceaccount","name":"bound-sa-token","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"volumes":[{"name":"registry-storage","persistentVolumeClaim":{"claimName":"crc-image-registry-storage"}},{"name":"registry-tls","projected":{"sources":[{"secret":{"name":"image-registry-tls"}}]}},{"emptyDir":{},"name":"ca-trust-extracted"},{"configMap":{"name":"image-registry-certificates"},"name":"registry-certificates"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"anchors/ca-bundle.crt"}],"name":"trusted-ca","optional":true},"name":"trusted-ca"},{"name":"installation-pull-secrets","secret":{"items":[{"key":".dockerconfigjson","path":"config.json"}],"optional":true,"secretName":"installation-pull-secrets"}},{"name":"bound-sa-token","projected":{"sources":[{"serviceAccountToken":{"audience":"openshift","path":"token"}}]}}]}}}} 2025-08-13T20:00:24.918966429+00:00 stderr F I0813 20:00:24.918342 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator", UID:"485aecbc-d986-4290-a12b-2be6eccbc76c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/image-registry -n openshift-image-registry because it changed 2025-08-13T20:00:24.975235403+00:00 stderr F I0813 20:00:24.965374 1 generator.go:62] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:f7ef6312b4aa9a5819f99115a43ad18318ade18e78d0d43d8f8db34ee8a97e8d" -> "sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6"}, 
changed:metadata.annotations.operator.openshift.io/spec-hash={"944e90126c7956be2484e645afe5c783cacf55a40f11cb132e07b25294ee50fa" -> "2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da"}, changed:metadata.generation={"3.000000" -> "4.000000"}, changed:metadata.managedFields.0.manager={"cluster-image-registry-operator" -> "kube-controller-manager"}, added:metadata.managedFields.0.subresource="status", changed:metadata.managedFields.0.time={"2024-06-27T13:34:15Z" -> "2025-08-13T19:49:58Z"}, changed:metadata.managedFields.1.manager={"kube-controller-manager" -> "cluster-image-registry-operator"}, removed:metadata.managedFields.1.subresource="status", changed:metadata.managedFields.1.time={"2025-08-13T19:49:58Z" -> "2025-08-13T20:00:24Z"}, changed:metadata.resourceVersion={"25235" -> "29506"}, changed:spec.template.metadata.annotations.imageregistry.operator.openshift.io/dependencies-checksum={"sha256:c4eb23334aa8e38243d604088f7d430c98cd061e527b1f39182877df0dc8680c" -> "sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10"} 2025-08-13T20:00:25.031080526+00:00 stderr F I0813 20:00:25.029193 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.2.lastTransitionTime={"2024-06-26T12:53:37Z" -> "2025-08-13T20:00:25Z"}, added:status.conditions.2.message="The deployment does not have available replicas", added:status.conditions.2.reason="Unavailable", changed:status.conditions.2.status={"False" -> "True"}, changed:status.generations.1.lastGeneration={"3.000000" -> "4.000000"} 2025-08-13T20:00:25.122118682+00:00 stderr F I0813 20:00:25.122061 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T19:59:40Z" -> "2025-08-13T20:00:25Z"}, changed:metadata.resourceVersion={"28282" -> "29540"}, changed:status.conditions.2.lastTransitionTime={"2024-06-26T12:53:37Z" -> "2025-08-13T20:00:25Z"}, 
added:status.conditions.2.message="Degraded: The deployment does not have available replicas", changed:status.conditions.2.reason={"AsExpected" -> "Unavailable"}, changed:status.conditions.2.status={"False" -> "True"} 2025-08-13T20:00:49.469631363+00:00 stderr F E0813 20:00:49.468721 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:49.493293998+00:00 stderr F E0813 20:00:49.489473 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:09.481028325+00:00 stderr F I0813 20:01:09.479275 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.1.lastTransitionTime={"2024-06-27T13:34:18Z" -> "2025-08-13T20:01:09Z"}, changed:status.conditions.1.message={"The deployment does not have available replicas" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.1.status={"False" -> "True"}, changed:status.conditions.2.lastTransitionTime={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, removed:status.conditions.2.message="The deployment does not have available replicas", removed:status.conditions.2.reason="Unavailable", changed:status.conditions.2.status={"True" -> "False"}, changed:status.readyReplicas={"0.000000" -> "1.000000"} 2025-08-13T20:01:10.204765482+00:00 stderr F I0813 20:01:10.204334 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, changed:metadata.resourceVersion={"29540" -> "30319"}, 
changed:status.conditions.0.lastTransitionTime={"2024-06-27T13:34:14Z" -> "2025-08-13T20:01:09Z"}, changed:status.conditions.0.message={"Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.2.lastTransitionTime={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, removed:status.conditions.2.message="Degraded: The deployment does not have available replicas", changed:status.conditions.2.reason={"Unavailable" -> "AsExpected"}, changed:status.conditions.2.status={"True" -> "False"} 2025-08-13T20:01:19.011059104+00:00 stderr F W0813 20:01:19.006176 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:19.011059104+00:00 stderr F E0813 20:01:19.006869 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:21.283886640+00:00 stderr F I0813 20:01:21.283335 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2024-06-27T13:34:15Z" -> "2025-08-13T20:01:21Z"}, changed:status.conditions.0.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.0.reason={"DeploymentNotCompleted" -> "Ready"}, 
changed:status.conditions.0.status={"True" -> "False"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"} 2025-08-13T20:01:21.697944527+00:00 stderr F I0813 20:01:21.697887 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T20:01:09Z" -> "2025-08-13T20:01:21Z"}, changed:metadata.resourceVersion={"30319" -> "30445"}, changed:status.conditions.0.message={"Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"MinimumAvailability" -> "Ready"}, changed:status.conditions.1.lastTransitionTime={"2024-06-27T13:34:14Z" -> "2025-08-13T20:01:21Z"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.1.status={"True" -> "False"} 2025-08-13T20:01:22.535888750+00:00 stderr F I0813 20:01:22.535112 1 observer_polling.go:120] Observed file "/etc/secrets/tls.crt" has been modified (old="1b4fefb1e2cc07f8a890596a1d5f574f6950f0d920567a2f2dc82a253d167d29", new="c608c7a9b6a1e5e93b05aeb838482ea7878695c06fd899cda56bfdd48b1f7613") 2025-08-13T20:01:22.536123607+00:00 stderr F W0813 20:01:22.536099 1 builder.go:154] Restart triggered because of file /etc/secrets/tls.crt was modified 2025-08-13T20:01:22.536476357+00:00 stderr F I0813 20:01:22.536316 1 observer_polling.go:120] Observed 
file "/etc/secrets/tls.key" has been modified (old="ce0633a34805c4dab1eb6f9f90254ed254d7f3cf0899dcbde466b988b655d7c9", new="2d36d9ce30dbba8b01ea274d2bf93b78a3e49924e61177e7a6e202a5e6fd5047") 2025-08-13T20:01:22.536712773+00:00 stderr F I0813 20:01:22.536659 1 main.go:54] Watched file changed, shutting down the operator. 2025-08-13T20:01:22.538703110+00:00 stderr F I0813 20:01:22.538674 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:22.540514022+00:00 stderr F I0813 20:01:22.538889 1 nodecadaemon.go:211] Shutting down NodeCADaemonController 2025-08-13T20:01:22.565977378+00:00 stderr F I0813 20:01:22.538933 1 azurestackcloud.go:181] Shutting down AzureStackCloudController 2025-08-13T20:01:22.566163403+00:00 stderr F I0813 20:01:22.539232 1 controller.go:456] Shutting down Controller ... 2025-08-13T20:01:22.566208224+00:00 stderr F I0813 20:01:22.539238 1 clusteroperator.go:152] Shutting down ClusterOperatorStatusController 2025-08-13T20:01:22.566243185+00:00 stderr F I0813 20:01:22.539258 1 imageconfig.go:95] Shutting down ImageConfigController 2025-08-13T20:01:22.566372099+00:00 stderr F I0813 20:01:22.539259 1 azurepathfixcontroller.go:326] Shutting down AzurePathFixController 2025-08-13T20:01:22.566426581+00:00 stderr F I0813 20:01:22.539290 1 metrics.go:96] Shutting down MetricsController 2025-08-13T20:01:22.566454101+00:00 stderr F I0813 20:01:22.539308 1 controllerimagepruner.go:390] Shutting down ImagePrunerController ... 2025-08-13T20:01:22.566491692+00:00 stderr F I0813 20:01:22.539334 1 imageregistrycertificates.go:216] Shutting down ImageRegistryCertificatesController 2025-08-13T20:01:22.566533594+00:00 stderr F I0813 20:01:22.539593 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:01:22.566898264+00:00 stderr F I0813 20:01:22.566827 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:23.271401421+00:00 stderr F W0813 20:01:23.271351 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.log
2025-08-13T20:07:26.749909171+00:00 stderr F I0813 20:07:26.747954 1 cmd.go:92] &{ true {false} installer true map[cert-dir:0xc0001be280 cert-secrets:0xc000871e00 configmaps:0xc0008719a0 namespace:0xc0008717c0 optional-configmaps:0xc000871ae0 optional-secrets:0xc000871a40 pod:0xc000871860 pod-manifest-dir:0xc000871c20 resource-dir:0xc000871b80 revision:0xc000871720 secrets:0xc000871900 v:0xc0001bf9a0] [0xc0001bf9a0 0xc000871720 0xc0008717c0 0xc000871860 0xc000871b80 0xc000871c20 0xc0008719a0 0xc000871ae0 0xc000871a40 
0xc000871900 0xc0001be280 0xc000871e00] [] map[cert-configmaps:0xc000871ea0 cert-dir:0xc0001be280 cert-secrets:0xc000871e00 configmaps:0xc0008719a0 help:0xc0001bfd60 kubeconfig:0xc000871680 log-flush-frequency:0xc0001bf900 namespace:0xc0008717c0 optional-cert-configmaps:0xc0001be000 optional-cert-secrets:0xc000871f40 optional-configmaps:0xc000871ae0 optional-secrets:0xc000871a40 pod:0xc000871860 pod-manifest-dir:0xc000871c20 pod-manifests-lock-file:0xc000871d60 resource-dir:0xc000871b80 revision:0xc000871720 secrets:0xc000871900 timeout-duration:0xc000871cc0 v:0xc0001bf9a0 vmodule:0xc0001bfa40] [0xc000871680 0xc000871720 0xc0008717c0 0xc000871860 0xc000871900 0xc0008719a0 0xc000871a40 0xc000871ae0 0xc000871b80 0xc000871c20 0xc000871cc0 0xc000871d60 0xc000871e00 0xc000871ea0 0xc000871f40 0xc0001be000 0xc0001be280 0xc0001bf900 0xc0001bf9a0 0xc0001bfa40 0xc0001bfd60] [0xc000871ea0 0xc0001be280 0xc000871e00 0xc0008719a0 0xc0001bfd60 0xc000871680 0xc0001bf900 0xc0008717c0 0xc0001be000 0xc000871f40 0xc000871ae0 0xc000871a40 0xc000871860 0xc000871c20 0xc000871d60 0xc000871b80 0xc000871720 0xc000871900 0xc000871cc0 0xc0001bf9a0 0xc0001bfa40] map[104:0xc0001bfd60 118:0xc0001bf9a0] [] -1 0 0xc0003b8f60 true 0x215dc20 []} 2025-08-13T20:07:26.749909171+00:00 stderr F I0813 20:07:26.748951 1 cmd.go:93] (*installerpod.InstallOptions)(0xc00053cd00)({ 2025-08-13T20:07:26.749909171+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:07:26.749909171+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:07:26.749909171+00:00 stderr F Revision: (string) (len=1) "8", 2025-08-13T20:07:26.749909171+00:00 stderr F NodeName: (string) "", 2025-08-13T20:07:26.749909171+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-scheduler", 2025-08-13T20:07:26.749909171+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:07:26.749909171+00:00 stderr F SecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:26.749909171+00:00 
stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=20) "scheduler-kubeconfig", 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=16) "policy-configmap" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F CertSecretNames: ([]string) (len=1 cap=1) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=30) "kube-scheduler-client-cert-key" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:07:26.749909171+00:00 stderr F CertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:07:26.749909171+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:07:26.749909171+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", 2025-08-13T20:07:26.749909171+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:07:26.749909171+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 
2025-08-13T20:07:26.749909171+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:07:26.749909171+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:07:26.749909171+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:07:26.749909171+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:07:26.749909171+00:00 stderr F }) 2025-08-13T20:07:26.759987490+00:00 stderr F I0813 20:07:26.756597 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:07:26.890296186+00:00 stderr F I0813 20:07:26.890185 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:07:26.906916322+00:00 stderr F I0813 20:07:26.900969 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:07:56.901688677+00:00 stderr F I0813 20:07:56.901382 1 cmd.go:521] Getting installer pods for node crc 2025-08-13T20:07:56.915142843+00:00 stderr F I0813 20:07:56.913745 1 cmd.go:539] Latest installer revision for node crc is: 8 2025-08-13T20:07:56.915142843+00:00 stderr F I0813 20:07:56.913829 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:07:56.918199111+00:00 stderr F I0813 20:07:56.918136 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:07:56.918225582+00:00 stderr F I0813 20:07:56.918198 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8" ... 2025-08-13T20:07:56.919940471+00:00 stderr F I0813 20:07:56.918696 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8" ... 2025-08-13T20:07:56.919940471+00:00 stderr F I0813 20:07:56.918734 1 cmd.go:226] Getting secrets ... 
2025-08-13T20:07:56.926031345+00:00 stderr F I0813 20:07:56.925940 1 copy.go:32] Got secret openshift-kube-scheduler/localhost-recovery-client-token-8 2025-08-13T20:07:56.931971486+00:00 stderr F I0813 20:07:56.930266 1 copy.go:32] Got secret openshift-kube-scheduler/serving-cert-8 2025-08-13T20:07:56.931971486+00:00 stderr F I0813 20:07:56.930330 1 cmd.go:239] Getting config maps ... 2025-08-13T20:07:56.940625444+00:00 stderr F I0813 20:07:56.940492 1 copy.go:60] Got configMap openshift-kube-scheduler/config-8 2025-08-13T20:07:56.951428843+00:00 stderr F I0813 20:07:56.946266 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-cert-syncer-kubeconfig-8 2025-08-13T20:07:56.951934968+00:00 stderr F I0813 20:07:56.951697 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-pod-8 2025-08-13T20:07:56.960256417+00:00 stderr F I0813 20:07:56.959273 1 copy.go:60] Got configMap openshift-kube-scheduler/scheduler-kubeconfig-8 2025-08-13T20:07:56.971857669+00:00 stderr F I0813 20:07:56.971645 1 copy.go:60] Got configMap openshift-kube-scheduler/serviceaccount-ca-8 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.979995 1 copy.go:52] Failed to get config map openshift-kube-scheduler/policy-configmap-8: configmaps "policy-configmap-8" not found 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980054 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980286 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980565 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/service-ca.crt" ... 
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980868 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981019 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981161 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981239 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert/tls.crt" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981328 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert/tls.key" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981428 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/config" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981513 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/config/config.yaml" ... 2025-08-13T20:07:56.981986470+00:00 stderr F I0813 20:07:56.981908 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-cert-syncer-kubeconfig" ... 2025-08-13T20:07:56.981986470+00:00 stderr F I0813 20:07:56.981980 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig" ... 
2025-08-13T20:07:56.982170095+00:00 stderr F I0813 20:07:56.982079 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod" ... 2025-08-13T20:07:56.982186575+00:00 stderr F I0813 20:07:56.982176 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/version" ... 2025-08-13T20:07:56.982357660+00:00 stderr F I0813 20:07:56.982274 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/forceRedeploymentReason" ... 2025-08-13T20:07:56.982442893+00:00 stderr F I0813 20:07:56.982391 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/pod.yaml" ... 2025-08-13T20:07:56.982599657+00:00 stderr F I0813 20:07:56.982513 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/scheduler-kubeconfig" ... 2025-08-13T20:07:56.982660939+00:00 stderr F I0813 20:07:56.982614 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/scheduler-kubeconfig/kubeconfig" ... 2025-08-13T20:07:56.983564815+00:00 stderr F I0813 20:07:56.983457 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/serviceaccount-ca" ... 2025-08-13T20:07:56.983733740+00:00 stderr F I0813 20:07:56.983647 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/serviceaccount-ca/ca-bundle.crt" ... 2025-08-13T20:07:56.986030016+00:00 stderr F I0813 20:07:56.984629 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs" ... 2025-08-13T20:07:56.986030016+00:00 stderr F I0813 20:07:56.984666 1 cmd.go:226] Getting secrets ... 
2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.109998 1 copy.go:32] Got secret openshift-kube-scheduler/kube-scheduler-client-cert-key 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110076 1 cmd.go:239] Getting config maps ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110090 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key" ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110125 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.crt" ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110429 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key" ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110567 1 cmd.go:332] Getting pod configmaps/kube-scheduler-pod-8 -n openshift-kube-scheduler 2025-08-13T20:07:57.307066400+00:00 stderr F I0813 20:07:57.307008 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 
2025-08-13T20:07:57.307175413+00:00 stderr F I0813 20:07:57.307143 1 cmd.go:376] Writing a pod under "kube-scheduler-pod.yaml" key 2025-08-13T20:07:57.307175413+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"8","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocatio
n=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/con
figmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332129 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml" ... 
2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332700 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332713 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:07:57.336503384+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"8","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocatio
n=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/con
figmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.log
2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.027560 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.027993 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.033219 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:21.804748812+00:00 stderr F I0813 20:01:21.804107 1 builder.go:299] console-operator version - 2025-08-13T20:01:23.971163904+00:00 stderr F I0813 20:01:23.969130 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:23.971163904+00:00 stderr F W0813 20:01:23.969729 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:23.971163904+00:00 stderr F W0813 20:01:23.969910 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.040286 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.040734 1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock... 
2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043251 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043292 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043447 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043467 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043492 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043500 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.044628 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:24.108125490+00:00 stderr F I0813 20:01:24.103247 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:24.108125490+00:00 stderr F I0813 20:01:24.103350 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.144496 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.144576 1 shared_informer.go:318] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.145110 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:31.459571568+00:00 stderr F I0813 20:01:31.458825 1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock 2025-08-13T20:01:31.509035438+00:00 stderr F I0813 20:01:31.508679 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30535", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_26c3518c-99b0-4182-9f74-a6df43fe8de3 became leader 2025-08-13T20:01:31.661391592+00:00 stderr F I0813 20:01:31.656860 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:01:31.683595846+00:00 stderr F I0813 20:01:31.679742 1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController 
MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:01:31.683595846+00:00 stderr F I0813 20:01:31.680373 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", 
"EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:01:36.909432847+00:00 stderr F I0813 20:01:36.897612 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:01:36.909432847+00:00 stderr F I0813 20:01:36.901422 1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController 2025-08-13T20:01:36.992913177+00:00 stderr F I0813 20:01:36.992598 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.992895 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.992994 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993019 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993031 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 
2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993048 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController
2025-08-13T20:01:36.993626828+00:00 stderr F I0813 20:01:36.993059 1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController
2025-08-13T20:01:36.993626828+00:00 stderr F I0813 20:01:36.993200 1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator
2025-08-13T20:01:36.993643118+00:00 stderr F I0813 20:01:36.993223 1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController
2025-08-13T20:01:36.993643118+00:00 stderr F I0813 20:01:36.993637 1 base_controller.go:73] Caches are synced for InformerWithSwitchController
2025-08-13T20:01:36.993733931+00:00 stderr F I0813 20:01:36.993692 1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ...
2025-08-13T20:01:36.994068770+00:00 stderr F I0813 20:01:36.993365 1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController
2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993446 1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController
2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993461 1 base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController
2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993471 1 base_controller.go:67] Waiting for caches to sync for HealthCheckController
2025-08-13T20:01:36.994104791+00:00 stderr F I0813 20:01:36.993479 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController
2025-08-13T20:01:36.994104791+00:00 stderr F I0813 20:01:36.993486 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController
2025-08-13T20:01:36.994115792+00:00 stderr F I0813 20:01:36.993494 1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController
2025-08-13T20:01:36.994126842+00:00 stderr F I0813 20:01:36.993498 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console
2025-08-13T20:01:36.994126842+00:00 stderr F I0813 20:01:36.993505 1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController
2025-08-13T20:01:36.994137602+00:00 stderr F I0813 20:01:36.993513 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController
2025-08-13T20:01:36.994971946+00:00 stderr F I0813 20:01:36.994439 1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController
2025-08-13T20:01:37.000550155+00:00 stderr F I0813 20:01:36.996683 1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController
2025-08-13T20:01:37.000550155+00:00 stderr F I0813 20:01:36.998209 1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ...
2025-08-13T20:01:37.000550155+00:00 stderr F E0813 20:01:36.999530 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found
2025-08-13T20:01:37.019403323+00:00 stderr F W0813 20:01:37.018234 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:37.019403323+00:00 stderr F E0813 20:01:37.018301 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:37.098673342+00:00 stderr F I0813 20:01:37.093008 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T20:01:37.098673342+00:00 stderr F I0813 20:01:37.098659 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.093245 1 base_controller.go:73] Caches are synced for ConsoleServiceController
2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.098683 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ...
2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.093279 1 base_controller.go:73] Caches are synced for ManagementStateController
2025-08-13T20:01:37.098721753+00:00 stderr F I0813 20:01:37.098709 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.093289 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.098766 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.093302 1 base_controller.go:73] Caches are synced for ConsoleServiceController
2025-08-13T20:01:37.098878058+00:00 stderr F I0813 20:01:37.098858 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ...
2025-08-13T20:01:37.110930882+00:00 stderr F I0813 20:01:37.110829 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController
2025-08-13T20:01:37.111337563+00:00 stderr F I0813 20:01:37.111260 1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController
2025-08-13T20:01:37.111337563+00:00 stderr F I0813 20:01:37.111296 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ...
2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111416 1 base_controller.go:73] Caches are synced for StatusSyncer_console
2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111304 1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ...
2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111447 1 base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ...
2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111329 1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController
2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111483 1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ...
2025-08-13T20:01:37.112073594+00:00 stderr F I0813 20:01:37.111337 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController
2025-08-13T20:01:37.112073594+00:00 stderr F I0813 20:01:37.112043 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ...
2025-08-13T20:01:37.112248559+00:00 stderr F I0813 20:01:37.111345 1 base_controller.go:73] Caches are synced for OAuthClientSecretController
2025-08-13T20:01:37.112297170+00:00 stderr F I0813 20:01:37.112281 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ...
2025-08-13T20:01:37.115871872+00:00 stderr F I0813 20:01:37.115684 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T20:01:37.115871872+00:00 stderr F I0813 20:01:37.115732 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T20:01:37.139159046+00:00 stderr F I0813 20:01:37.138982 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:37.157168150+00:00 stderr F I0813 20:01:37.156393 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:37.184583552+00:00 stderr F I0813 20:01:37.184437 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:37.193553177+00:00 stderr F I0813 20:01:37.193494 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T20:01:37.193633830+00:00 stderr F I0813 20:01:37.193619 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T20:01:37.198707084+00:00 stderr F I0813 20:01:37.198596 1 base_controller.go:73] Caches are synced for OIDCSetupController
2025-08-13T20:01:37.198707084+00:00 stderr F I0813 20:01:37.198656 1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ...
2025-08-13T20:01:38.553891696+00:00 stderr F W0813 20:01:38.553477 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:38.553891696+00:00 stderr F E0813 20:01:38.553542 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:40.636470078+00:00 stderr F W0813 20:01:40.635957 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:40.636470078+00:00 stderr F E0813 20:01:40.636399 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:46.988630633+00:00 stderr F W0813 20:01:46.987604 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:46.988681465+00:00 stderr F E0813 20:01:46.988648 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:51.227743396+00:00 stderr F I0813 20:01:51.224273 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:55.704917637+00:00 stderr F W0813 20:01:55.704384 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:55.704917637+00:00 stderr F E0813 20:01:55.704888 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:56.189227326+00:00 stderr F I0813 20:01:56.178481 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available"
2025-08-13T20:02:10.877413274+00:00 stderr F W0813 20:02:10.876711 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:02:10.877413274+00:00 stderr F E0813 20:02:10.877391 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:02:35.304614522+00:00 stderr F E0813 20:02:35.303652 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.001196281+00:00 stderr F E0813 20:02:37.001089 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.006719239+00:00 stderr F I0813 20:02:37.006649 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.006719239+00:00 stderr F   ObservedGeneration: 1,
2025-08-13T20:02:37.006719239+00:00 stderr F   Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.006719239+00:00 stderr F     ... // 43 identical elements
2025-08-13T20:02:37.006719239+00:00 stderr F     {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:37.006719239+00:00 stderr F     {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:37.006719239+00:00 stderr F     - {
2025-08-13T20:02:37.006719239+00:00 stderr F     -   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.006719239+00:00 stderr F     -   Status: "False",
2025-08-13T20:02:37.006719239+00:00 stderr F     -   LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:37.006719239+00:00 stderr F     - },
2025-08-13T20:02:37.006719239+00:00 stderr F     + {
2025-08-13T20:02:37.006719239+00:00 stderr F     +   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.006719239+00:00 stderr F     +   Status: "True",
2025-08-13T20:02:37.006719239+00:00 stderr F     +   LastTransitionTime: s"2025-08-13 20:02:37.001171681 +0000 UTC m=+76.206270985",
2025-08-13T20:02:37.006719239+00:00 stderr F     +   Reason: "FailedDelete",
2025-08-13T20:02:37.006719239+00:00 stderr F     +   Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:37.006719239+00:00 stderr F     + },
2025-08-13T20:02:37.006719239+00:00 stderr F     {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:37.006719239+00:00 stderr F     {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.006719239+00:00 stderr F     ... // 14 identical elements
2025-08-13T20:02:37.006719239+00:00 stderr F   },
2025-08-13T20:02:37.006719239+00:00 stderr F   Version: "",
2025-08-13T20:02:37.006719239+00:00 stderr F   ReadyReplicas: 0,
2025-08-13T20:02:37.006719239+00:00 stderr F   Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.006719239+00:00 stderr F }
2025-08-13T20:02:37.008377726+00:00 stderr F E0813 20:02:37.008348 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.015011335+00:00 stderr F E0813 20:02:37.014935 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.017540448+00:00 stderr F I0813 20:02:37.017287 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.017540448+00:00 stderr F   ObservedGeneration: 1,
2025-08-13T20:02:37.017540448+00:00 stderr F   Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.017540448+00:00 stderr F     ... // 43 identical elements
2025-08-13T20:02:37.017540448+00:00 stderr F     {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:37.017540448+00:00 stderr F     {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:37.017540448+00:00 stderr F     - {
2025-08-13T20:02:37.017540448+00:00 stderr F     -   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.017540448+00:00 stderr F     -   Status: "False",
2025-08-13T20:02:37.017540448+00:00 stderr F     -   LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:37.017540448+00:00 stderr F     - },
2025-08-13T20:02:37.017540448+00:00 stderr F     + {
2025-08-13T20:02:37.017540448+00:00 stderr F     +   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.017540448+00:00 stderr F     +   Status: "True",
2025-08-13T20:02:37.017540448+00:00 stderr F     +   LastTransitionTime: s"2025-08-13 20:02:37.014994335 +0000 UTC m=+76.220093699",
2025-08-13T20:02:37.017540448+00:00 stderr F     +   Reason: "FailedDelete",
2025-08-13T20:02:37.017540448+00:00 stderr F     +   Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:37.017540448+00:00 stderr F     + },
2025-08-13T20:02:37.017540448+00:00 stderr F     {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:37.017540448+00:00 stderr F     {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.017540448+00:00 stderr F     ... // 14 identical elements
2025-08-13T20:02:37.017540448+00:00 stderr F   },
2025-08-13T20:02:37.017540448+00:00 stderr F   Version: "",
2025-08-13T20:02:37.017540448+00:00 stderr F   ReadyReplicas: 0,
2025-08-13T20:02:37.017540448+00:00 stderr F   Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.017540448+00:00 stderr F }
2025-08-13T20:02:37.018674320+00:00 stderr F E0813 20:02:37.018650 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.030345693+00:00 stderr F E0813 20:02:37.030317 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.032925026+00:00 stderr F I0813 20:02:37.032590 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.032925026+00:00 stderr F   ObservedGeneration: 1,
2025-08-13T20:02:37.032925026+00:00 stderr F   Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.032925026+00:00 stderr F     ... // 43 identical elements
2025-08-13T20:02:37.032925026+00:00 stderr F     {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:37.032925026+00:00 stderr F     {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:37.032925026+00:00 stderr F     - {
2025-08-13T20:02:37.032925026+00:00 stderr F     -   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.032925026+00:00 stderr F     -   Status: "False",
2025-08-13T20:02:37.032925026+00:00 stderr F     -   LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:37.032925026+00:00 stderr F     - },
2025-08-13T20:02:37.032925026+00:00 stderr F     + {
2025-08-13T20:02:37.032925026+00:00 stderr F     +   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.032925026+00:00 stderr F     +   Status: "True",
2025-08-13T20:02:37.032925026+00:00 stderr F     +   LastTransitionTime: s"2025-08-13 20:02:37.030398664 +0000 UTC m=+76.235497818",
2025-08-13T20:02:37.032925026+00:00 stderr F     +   Reason: "FailedDelete",
2025-08-13T20:02:37.032925026+00:00 stderr F     +   Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:37.032925026+00:00 stderr F     + },
2025-08-13T20:02:37.032925026+00:00 stderr F     {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:37.032925026+00:00 stderr F     {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.032925026+00:00 stderr F     ... // 14 identical elements
2025-08-13T20:02:37.032925026+00:00 stderr F   },
2025-08-13T20:02:37.032925026+00:00 stderr F   Version: "",
2025-08-13T20:02:37.032925026+00:00 stderr F   ReadyReplicas: 0,
2025-08-13T20:02:37.032925026+00:00 stderr F   Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.032925026+00:00 stderr F }
2025-08-13T20:02:37.035612393+00:00 stderr F E0813 20:02:37.035519 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.056986483+00:00 stderr F E0813 20:02:37.056954 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.059095323+00:00 stderr F I0813 20:02:37.059069 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.059095323+00:00 stderr F   ObservedGeneration: 1,
2025-08-13T20:02:37.059095323+00:00 stderr F   Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.059095323+00:00 stderr F     ... // 43 identical elements
2025-08-13T20:02:37.059095323+00:00 stderr F     {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:37.059095323+00:00 stderr F     {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:37.059095323+00:00 stderr F     - {
2025-08-13T20:02:37.059095323+00:00 stderr F     -   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.059095323+00:00 stderr F     -   Status: "False",
2025-08-13T20:02:37.059095323+00:00 stderr F     -   LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:37.059095323+00:00 stderr F     - },
2025-08-13T20:02:37.059095323+00:00 stderr F     + {
2025-08-13T20:02:37.059095323+00:00 stderr F     +   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.059095323+00:00 stderr F     +   Status: "True",
2025-08-13T20:02:37.059095323+00:00 stderr F     +   LastTransitionTime: s"2025-08-13 20:02:37.057040674 +0000 UTC m=+76.262139848",
2025-08-13T20:02:37.059095323+00:00 stderr F     +   Reason: "FailedDelete",
2025-08-13T20:02:37.059095323+00:00 stderr F     +   Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:37.059095323+00:00 stderr F     + },
2025-08-13T20:02:37.059095323+00:00 stderr F     {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:37.059095323+00:00 stderr F     {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.059095323+00:00 stderr F     ... // 14 identical elements
2025-08-13T20:02:37.059095323+00:00 stderr F   },
2025-08-13T20:02:37.059095323+00:00 stderr F   Version: "",
2025-08-13T20:02:37.059095323+00:00 stderr F   ReadyReplicas: 0,
2025-08-13T20:02:37.059095323+00:00 stderr F   Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.059095323+00:00 stderr F }
2025-08-13T20:02:37.060330618+00:00 stderr F E0813 20:02:37.060220 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.103134429+00:00 stderr F E0813 20:02:37.103081 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.106226368+00:00 stderr F I0813 20:02:37.106197 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.106226368+00:00 stderr F   ObservedGeneration: 1,
2025-08-13T20:02:37.106226368+00:00 stderr F   Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.106226368+00:00 stderr F     ... // 43 identical elements
2025-08-13T20:02:37.106226368+00:00 stderr F     {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:37.106226368+00:00 stderr F     {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:37.106226368+00:00 stderr F     - {
2025-08-13T20:02:37.106226368+00:00 stderr F     -   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.106226368+00:00 stderr F     -   Status: "False",
2025-08-13T20:02:37.106226368+00:00 stderr F     -   LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:37.106226368+00:00 stderr F     - },
2025-08-13T20:02:37.106226368+00:00 stderr F     + {
2025-08-13T20:02:37.106226368+00:00 stderr F     +   Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.106226368+00:00 stderr F     +   Status: "True",
2025-08-13T20:02:37.106226368+00:00 stderr F     +   LastTransitionTime: s"2025-08-13 20:02:37.103221472 +0000 UTC m=+76.308320556",
2025-08-13T20:02:37.106226368+00:00 stderr F     +   Reason: "FailedDelete",
2025-08-13T20:02:37.106226368+00:00 stderr F     +   Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:37.106226368+00:00 stderr F     + },
2025-08-13T20:02:37.106226368+00:00 stderr F     {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:37.106226368+00:00 stderr F     {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.106226368+00:00 stderr F     ... // 14 identical elements
2025-08-13T20:02:37.106226368+00:00 stderr F   },
2025-08-13T20:02:37.106226368+00:00 stderr F   Version: "",
2025-08-13T20:02:37.106226368+00:00 stderr F   ReadyReplicas: 0,
2025-08-13T20:02:37.106226368+00:00 stderr F   Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.106226368+00:00 stderr F }
2025-08-13T20:02:37.107951347+00:00 stderr F E0813 20:02:37.107861 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.121601696+00:00 stderr F E0813 20:02:37.121368 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.124528880+00:00 stderr F I0813 20:02:37.124409 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.124528880+00:00 stderr F   ObservedGeneration: 1,
2025-08-13T20:02:37.124528880+00:00 stderr F   Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.124528880+00:00 stderr F     ... // 10 identical elements
2025-08-13T20:02:37.124528880+00:00 stderr F     {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:37.124528880+00:00 stderr F     {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.124528880+00:00 stderr F     - {
2025-08-13T20:02:37.124528880+00:00 stderr F     -   Type: "ServiceSyncDegraded",
2025-08-13T20:02:37.124528880+00:00 stderr F     -   Status: "False",
2025-08-13T20:02:37.124528880+00:00 stderr F     -   LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:37.124528880+00:00 stderr F     - },
2025-08-13T20:02:37.124528880+00:00 stderr F     + {
2025-08-13T20:02:37.124528880+00:00 stderr F     +   Type: "ServiceSyncDegraded",
2025-08-13T20:02:37.124528880+00:00 stderr F     +   Status: "True",
2025-08-13T20:02:37.124528880+00:00 stderr F     +   LastTransitionTime: s"2025-08-13 20:02:37.121440242 +0000 UTC m=+76.326539426",
2025-08-13T20:02:37.124528880+00:00 stderr F     +   Reason: "FailedApply",
2025-08-13T20:02:37.124528880+00:00 stderr F     +   Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:37.124528880+00:00 stderr F     + },
2025-08-13T20:02:37.124528880+00:00 stderr F     {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:37.124528880+00:00 stderr F     {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:37.124528880+00:00 stderr F     ... // 47 identical elements
2025-08-13T20:02:37.124528880+00:00 stderr F   },
2025-08-13T20:02:37.124528880+00:00 stderr F   Version: "",
2025-08-13T20:02:37.124528880+00:00 stderr F   ReadyReplicas: 0,
2025-08-13T20:02:37.124528880+00:00 stderr F   Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.124528880+00:00 stderr F }
2025-08-13T20:02:37.126343221+00:00 stderr F E0813 20:02:37.126256 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.126520716+00:00 stderr F E0813 20:02:37.126442 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.129926634+00:00 stderr F E0813 20:02:37.129762 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.130551841+00:00 stderr F E0813 20:02:37.130431 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.132014023+00:00 stderr F I0813 20:02:37.131960 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.132014023+00:00 stderr F   ObservedGeneration: 1,
2025-08-13T20:02:37.132014023+00:00 stderr F   Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.132014023+00:00 stderr F     ... // 2 identical elements
2025-08-13T20:02:37.132014023+00:00 stderr F     {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:37.132014023+00:00 stderr F     {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:37.132014023+00:00 stderr F     - {
2025-08-13T20:02:37.132014023+00:00 stderr F     -   Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:37.132014023+00:00 stderr F     -   Status: "False",
2025-08-13T20:02:37.132014023+00:00 stderr F     -   LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:37.132014023+00:00 stderr F     - },
2025-08-13T20:02:37.132014023+00:00 stderr F     + {
2025-08-13T20:02:37.132014023+00:00 stderr F     +   Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:37.132014023+00:00 stderr F     +   Status: "True",
2025-08-13T20:02:37.132014023+00:00 stderr F     +   LastTransitionTime: s"2025-08-13 20:02:37.126492286 +0000 UTC m=+76.331591490",
2025-08-13T20:02:37.132014023+00:00 stderr F     +   Reason: "FailedApply",
2025-08-13T20:02:37.132014023+00:00 stderr F     +   Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:37.132014023+00:00 stderr F     + },
2025-08-13T20:02:37.132014023+00:00 stderr F     {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:37.132014023+00:00 stderr F     {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:37.132014023+00:00 stderr F     ...
// 55 identical elements 2025-08-13T20:02:37.132014023+00:00 stderr F }, 2025-08-13T20:02:37.132014023+00:00 stderr F Version: "", 2025-08-13T20:02:37.132014023+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.132014023+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.132014023+00:00 stderr F } 2025-08-13T20:02:37.133736572+00:00 stderr F I0813 20:02:37.133640 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.133736572+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.133736572+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.133736572+00:00 stderr F ... // 19 identical elements 2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.133736572+00:00 stderr F - { 2025-08-13T20:02:37.133736572+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.133736572+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.133736572+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.133736572+00:00 stderr F - }, 2025-08-13T20:02:37.133736572+00:00 stderr F + { 2025-08-13T20:02:37.133736572+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.133736572+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.133736572+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.130450949 +0000 UTC m=+76.335550003", 2025-08-13T20:02:37.133736572+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.133736572+00:00 stderr F + Message: `Get 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:37.133736572+00:00 stderr F + }, 2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.133736572+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:37.133736572+00:00 stderr F }, 2025-08-13T20:02:37.133736572+00:00 stderr F Version: "", 2025-08-13T20:02:37.133736572+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.133736572+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.133736572+00:00 stderr F } 2025-08-13T20:02:37.134305229+00:00 stderr F E0813 20:02:37.134188 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.134900546+00:00 stderr F E0813 20:02:37.134697 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.135542744+00:00 stderr F I0813 20:02:37.135463 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.135542744+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.135542744+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.135542744+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.135542744+00:00 stderr F - { 2025-08-13T20:02:37.135542744+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.135542744+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.135542744+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.135542744+00:00 stderr F - }, 2025-08-13T20:02:37.135542744+00:00 stderr F + { 2025-08-13T20:02:37.135542744+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.135542744+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.135542744+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.126359392 +0000 UTC m=+76.331458536", 2025-08-13T20:02:37.135542744+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.135542744+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:37.135542744+00:00 stderr F + }, 2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.135542744+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:37.135542744+00:00 stderr F }, 2025-08-13T20:02:37.135542744+00:00 stderr F Version: "", 2025-08-13T20:02:37.135542744+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.135542744+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.135542744+00:00 stderr F } 2025-08-13T20:02:37.136386588+00:00 stderr F I0813 20:02:37.136322 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.136386588+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.136386588+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.136386588+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.136386588+00:00 stderr F - { 2025-08-13T20:02:37.136386588+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.136386588+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.136386588+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.136386588+00:00 stderr F - }, 2025-08-13T20:02:37.136386588+00:00 stderr F + { 2025-08-13T20:02:37.136386588+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.136386588+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.136386588+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.134232746 +0000 UTC m=+76.339331950", 2025-08-13T20:02:37.136386588+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.136386588+00:00 stderr 
F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:37.136386588+00:00 stderr F + }, 2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.136386588+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:37.136386588+00:00 stderr F }, 2025-08-13T20:02:37.136386588+00:00 stderr F Version: "", 2025-08-13T20:02:37.136386588+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.136386588+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.136386588+00:00 stderr F } 2025-08-13T20:02:37.136681286+00:00 stderr F E0813 20:02:37.136602 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.136954614+00:00 stderr F E0813 20:02:37.136764 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.137758157+00:00 stderr F E0813 20:02:37.137668 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.138021025+00:00 stderr F E0813 20:02:37.137945 1 base_controller.go:268] 
ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.138645062+00:00 stderr F I0813 20:02:37.138562 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.138645062+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.138645062+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.138645062+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.138645062+00:00 stderr F - { 2025-08-13T20:02:37.138645062+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.138645062+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.138645062+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.138645062+00:00 stderr F - }, 2025-08-13T20:02:37.138645062+00:00 stderr F + { 2025-08-13T20:02:37.138645062+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.138645062+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.138645062+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.136946624 +0000 UTC m=+76.342045848", 2025-08-13T20:02:37.138645062+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.138645062+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:37.138645062+00:00 stderr F + }, 2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", 
LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.138645062+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:37.138645062+00:00 stderr F }, 2025-08-13T20:02:37.138645062+00:00 stderr F Version: "", 2025-08-13T20:02:37.138645062+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.138645062+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.138645062+00:00 stderr F } 2025-08-13T20:02:37.143150611+00:00 stderr F E0813 20:02:37.143051 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.144031406+00:00 stderr F E0813 20:02:37.143962 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.144376696+00:00 stderr F E0813 20:02:37.144241 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.145581960+00:00 stderr F I0813 20:02:37.145476 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.145581960+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.145581960+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.145581960+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.145581960+00:00 stderr F - { 2025-08-13T20:02:37.145581960+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.145581960+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.145581960+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.145581960+00:00 stderr F - }, 2025-08-13T20:02:37.145581960+00:00 stderr F + { 2025-08-13T20:02:37.145581960+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.145581960+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.145581960+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.143086079 +0000 UTC m=+76.348185273", 2025-08-13T20:02:37.145581960+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.145581960+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:37.145581960+00:00 stderr F + }, 2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.145581960+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:37.145581960+00:00 stderr F }, 2025-08-13T20:02:37.145581960+00:00 stderr F Version: "", 2025-08-13T20:02:37.145581960+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.145581960+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.145581960+00:00 stderr F } 2025-08-13T20:02:37.146268710+00:00 stderr F I0813 20:02:37.146151 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.146268710+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.146268710+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.146268710+00:00 stderr F ... // 2 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:37.146268710+00:00 stderr F - { 2025-08-13T20:02:37.146268710+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.146268710+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.146268710+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:37.146268710+00:00 stderr F - }, 2025-08-13T20:02:37.146268710+00:00 stderr F + { 2025-08-13T20:02:37.146268710+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.146268710+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.146268710+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.144033216 +0000 UTC m=+76.349132530", 2025-08-13T20:02:37.146268710+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.146268710+00:00 stderr F + 
Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:37.146268710+00:00 stderr F + }, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F }, 2025-08-13T20:02:37.146268710+00:00 stderr F Version: "", 2025-08-13T20:02:37.146268710+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.146268710+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.146268710+00:00 stderr F } 2025-08-13T20:02:37.146268710+00:00 stderr F I0813 20:02:37.146196 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.146268710+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.146268710+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.146268710+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F - { 2025-08-13T20:02:37.146268710+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.146268710+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.146268710+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.146268710+00:00 stderr F - }, 2025-08-13T20:02:37.146268710+00:00 stderr F + { 2025-08-13T20:02:37.146268710+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.146268710+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.146268710+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.144273173 +0000 UTC m=+76.349372367", 2025-08-13T20:02:37.146268710+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.146268710+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:37.146268710+00:00 stderr F + }, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F }, 2025-08-13T20:02:37.146268710+00:00 stderr F Version: "", 2025-08-13T20:02:37.146268710+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.146268710+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.146268710+00:00 stderr F } 2025-08-13T20:02:37.147033832+00:00 stderr F E0813 20:02:37.146954 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.150736087+00:00 stderr F I0813 20:02:37.150114 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.150736087+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.150736087+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.150736087+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.150736087+00:00 stderr F - { 2025-08-13T20:02:37.150736087+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.150736087+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.150736087+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.150736087+00:00 stderr F - }, 2025-08-13T20:02:37.150736087+00:00 stderr F + { 2025-08-13T20:02:37.150736087+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.150736087+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.150736087+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.147029161 +0000 UTC m=+76.352128676", 2025-08-13T20:02:37.150736087+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.150736087+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:37.150736087+00:00 stderr F + }, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:37.150736087+00:00 stderr F }, 2025-08-13T20:02:37.150736087+00:00 stderr F Version: "", 2025-08-13T20:02:37.150736087+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.150736087+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.150736087+00:00 stderr F } 2025-08-13T20:02:37.189472952+00:00 stderr F E0813 20:02:37.189377 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.192793477+00:00 stderr F I0813 20:02:37.192709 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.192793477+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.192793477+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.192793477+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F - { 2025-08-13T20:02:37.192793477+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.192793477+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.192793477+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.192793477+00:00 stderr F - }, 2025-08-13T20:02:37.192793477+00:00 stderr F + { 2025-08-13T20:02:37.192793477+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.192793477+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.192793477+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.189460582 +0000 UTC m=+76.394559916", 2025-08-13T20:02:37.192793477+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.192793477+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.192793477+00:00 stderr F + }, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.192793477+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:37.192793477+00:00 stderr F }, 2025-08-13T20:02:37.192793477+00:00 stderr F Version: "", 2025-08-13T20:02:37.192793477+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.192793477+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.192793477+00:00 stderr F } 2025-08-13T20:02:37.211320526+00:00 stderr F E0813 20:02:37.211235 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.223465952+00:00 stderr F E0813 20:02:37.223395 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.225205742+00:00 stderr F I0813 20:02:37.225143 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.225205742+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.225205742+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.225205742+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.225205742+00:00 stderr F - { 2025-08-13T20:02:37.225205742+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.225205742+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.225205742+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.225205742+00:00 stderr F - }, 2025-08-13T20:02:37.225205742+00:00 stderr F + { 2025-08-13T20:02:37.225205742+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.225205742+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.225205742+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.223457082 +0000 UTC m=+76.428556296", 2025-08-13T20:02:37.225205742+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.225205742+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:37.225205742+00:00 stderr F + }, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:37.225205742+00:00 stderr F }, 2025-08-13T20:02:37.225205742+00:00 stderr F Version: "", 2025-08-13T20:02:37.225205742+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.225205742+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.225205742+00:00 stderr F } 2025-08-13T20:02:37.410190499+00:00 stderr F E0813 20:02:37.410003 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.422460029+00:00 stderr F E0813 20:02:37.421621 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.428228663+00:00 stderr F I0813 20:02:37.427543 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.428228663+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.428228663+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.428228663+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F - { 2025-08-13T20:02:37.428228663+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.428228663+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.428228663+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.428228663+00:00 stderr F - }, 2025-08-13T20:02:37.428228663+00:00 stderr F + { 2025-08-13T20:02:37.428228663+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.428228663+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.428228663+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.422604473 +0000 UTC m=+76.627703867", 2025-08-13T20:02:37.428228663+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.428228663+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:37.428228663+00:00 stderr F + }, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:37.428228663+00:00 stderr F }, 2025-08-13T20:02:37.428228663+00:00 stderr F Version: "", 2025-08-13T20:02:37.428228663+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.428228663+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.428228663+00:00 stderr F } 2025-08-13T20:02:37.610046740+00:00 stderr F E0813 20:02:37.609937 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.633734006+00:00 stderr F E0813 20:02:37.633628 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.635942429+00:00 stderr F I0813 20:02:37.635888 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.635942429+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.635942429+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.635942429+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:37.635942429+00:00 stderr F - { 2025-08-13T20:02:37.635942429+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.635942429+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.635942429+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:37.635942429+00:00 stderr F - }, 2025-08-13T20:02:37.635942429+00:00 stderr F + { 2025-08-13T20:02:37.635942429+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.635942429+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.635942429+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.633691365 +0000 UTC m=+76.838790539", 2025-08-13T20:02:37.635942429+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.635942429+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:37.635942429+00:00 stderr F + }, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:37.635942429+00:00 stderr F }, 2025-08-13T20:02:37.635942429+00:00 stderr F Version: "", 2025-08-13T20:02:37.635942429+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.635942429+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.635942429+00:00 stderr F } 2025-08-13T20:02:37.809628154+00:00 stderr F E0813 20:02:37.809489 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.821830842+00:00 stderr F E0813 20:02:37.821736 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.824464047+00:00 stderr F I0813 20:02:37.824383 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.824464047+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.824464047+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.824464047+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F - { 2025-08-13T20:02:37.824464047+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.824464047+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.824464047+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.824464047+00:00 stderr F - }, 2025-08-13T20:02:37.824464047+00:00 stderr F + { 2025-08-13T20:02:37.824464047+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.824464047+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.824464047+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.821865803 +0000 UTC m=+77.026965177", 2025-08-13T20:02:37.824464047+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.824464047+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:37.824464047+00:00 stderr F + }, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:37.824464047+00:00 stderr F }, 2025-08-13T20:02:37.824464047+00:00 stderr F Version: "", 2025-08-13T20:02:37.824464047+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.824464047+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.824464047+00:00 stderr F } 2025-08-13T20:02:38.009616618+00:00 stderr F E0813 20:02:38.009553 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.022606408+00:00 stderr F E0813 20:02:38.022533 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.024687358+00:00 stderr F I0813 20:02:38.024601 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.024687358+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.024687358+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.024687358+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.024687358+00:00 stderr F - { 2025-08-13T20:02:38.024687358+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.024687358+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.024687358+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:38.024687358+00:00 stderr F - }, 2025-08-13T20:02:38.024687358+00:00 stderr F + { 2025-08-13T20:02:38.024687358+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.024687358+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.024687358+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.022592148 +0000 UTC m=+77.227691292", 2025-08-13T20:02:38.024687358+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.024687358+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:38.024687358+00:00 stderr F + }, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:38.024687358+00:00 stderr F }, 2025-08-13T20:02:38.024687358+00:00 stderr F Version: "", 2025-08-13T20:02:38.024687358+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.024687358+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.024687358+00:00 stderr F } 2025-08-13T20:02:38.208207273+00:00 stderr F I0813 20:02:38.208072 1 request.go:697] Waited for 1.014815649s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:38.210816507+00:00 stderr F E0813 20:02:38.210699 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.374117816+00:00 stderr F E0813 20:02:38.373953 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.377282966+00:00 stderr F I0813 20:02:38.377135 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.377282966+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.377282966+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.377282966+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F - { 2025-08-13T20:02:38.377282966+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:38.377282966+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.377282966+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:38.377282966+00:00 stderr F - }, 2025-08-13T20:02:38.377282966+00:00 stderr F + { 2025-08-13T20:02:38.377282966+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:38.377282966+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.377282966+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.374052224 +0000 UTC m=+77.579151698", 2025-08-13T20:02:38.377282966+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:38.377282966+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:38.377282966+00:00 stderr F + }, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.377282966+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:38.377282966+00:00 stderr F }, 2025-08-13T20:02:38.377282966+00:00 stderr F Version: "", 2025-08-13T20:02:38.377282966+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.377282966+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.377282966+00:00 stderr F } 2025-08-13T20:02:38.409229868+00:00 stderr F E0813 20:02:38.409114 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.432512932+00:00 stderr F E0813 20:02:38.432459 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.434983522+00:00 stderr F I0813 20:02:38.434956 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.434983522+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.434983522+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.434983522+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.434983522+00:00 stderr F - { 2025-08-13T20:02:38.434983522+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.434983522+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.434983522+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:38.434983522+00:00 stderr F - }, 2025-08-13T20:02:38.434983522+00:00 stderr F + { 2025-08-13T20:02:38.434983522+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.434983522+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.434983522+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.432589994 +0000 UTC m=+77.637689208", 2025-08-13T20:02:38.434983522+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.434983522+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:38.434983522+00:00 stderr F + }, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:38.434983522+00:00 stderr F }, 2025-08-13T20:02:38.434983522+00:00 stderr F Version: "", 2025-08-13T20:02:38.434983522+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.434983522+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.434983522+00:00 stderr F } 2025-08-13T20:02:38.609067568+00:00 stderr F E0813 20:02:38.608938 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.631156048+00:00 stderr F E0813 20:02:38.631058 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.633299470+00:00 stderr F I0813 20:02:38.633058 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.633299470+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.633299470+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.633299470+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:38.633299470+00:00 stderr F - { 2025-08-13T20:02:38.633299470+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:38.633299470+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.633299470+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:38.633299470+00:00 stderr F - }, 2025-08-13T20:02:38.633299470+00:00 stderr F + { 2025-08-13T20:02:38.633299470+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:38.633299470+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.633299470+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.631231761 +0000 UTC m=+77.836330955", 2025-08-13T20:02:38.633299470+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.633299470+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:38.633299470+00:00 stderr F + }, 2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:38.633299470+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:38.633299470+00:00 stderr F }, 2025-08-13T20:02:38.633299470+00:00 stderr F Version: "", 2025-08-13T20:02:38.633299470+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.633299470+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.633299470+00:00 stderr F } 2025-08-13T20:02:38.809022192+00:00 stderr F E0813 20:02:38.808964 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.831928866+00:00 stderr F E0813 20:02:38.831696 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.833717297+00:00 stderr F I0813 20:02:38.833666 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.833717297+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.833717297+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.833717297+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:38.833717297+00:00 stderr F - { 2025-08-13T20:02:38.833717297+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:38.833717297+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.833717297+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:38.833717297+00:00 stderr F - }, 2025-08-13T20:02:38.833717297+00:00 stderr F + { 2025-08-13T20:02:38.833717297+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:38.833717297+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.833717297+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.831754401 +0000 UTC m=+78.036853625", 2025-08-13T20:02:38.833717297+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.833717297+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:38.833717297+00:00 stderr F + }, 2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:38.833717297+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:38.833717297+00:00 stderr F }, 2025-08-13T20:02:38.833717297+00:00 stderr F Version: "", 2025-08-13T20:02:38.833717297+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.833717297+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.833717297+00:00 stderr F } 2025-08-13T20:02:39.008508193+00:00 stderr F E0813 20:02:39.008405 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.033549498+00:00 stderr F E0813 20:02:39.033436 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.036017018+00:00 stderr F I0813 20:02:39.035943 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.036017018+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.036017018+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.036017018+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:39.036017018+00:00 stderr F - { 2025-08-13T20:02:39.036017018+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:39.036017018+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.036017018+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:39.036017018+00:00 stderr F - }, 2025-08-13T20:02:39.036017018+00:00 stderr F + { 2025-08-13T20:02:39.036017018+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:39.036017018+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.036017018+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.033546278 +0000 UTC m=+78.238645532", 2025-08-13T20:02:39.036017018+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:39.036017018+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:39.036017018+00:00 stderr F + }, 2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:39.036017018+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:39.036017018+00:00 stderr F }, 2025-08-13T20:02:39.036017018+00:00 stderr F Version: "", 2025-08-13T20:02:39.036017018+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.036017018+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.036017018+00:00 stderr F } 2025-08-13T20:02:39.209076115+00:00 stderr F E0813 20:02:39.209019 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.233264635+00:00 stderr F E0813 20:02:39.233185 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.234977154+00:00 stderr F I0813 20:02:39.234925 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.234977154+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.234977154+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.234977154+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:39.234977154+00:00 stderr F - { 2025-08-13T20:02:39.234977154+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:39.234977154+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.234977154+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:39.234977154+00:00 stderr F - }, 2025-08-13T20:02:39.234977154+00:00 stderr F + { 2025-08-13T20:02:39.234977154+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:39.234977154+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.234977154+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.233222854 +0000 UTC m=+78.438322008", 2025-08-13T20:02:39.234977154+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:39.234977154+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:39.234977154+00:00 stderr F + }, 2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:39.234977154+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:39.234977154+00:00 stderr F }, 2025-08-13T20:02:39.234977154+00:00 stderr F Version: "", 2025-08-13T20:02:39.234977154+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.234977154+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.234977154+00:00 stderr F } 2025-08-13T20:02:39.408274038+00:00 stderr F I0813 20:02:39.407918 1 request.go:697] Waited for 1.030436886s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:39.409755390+00:00 stderr F E0813 20:02:39.409699 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.609879939+00:00 stderr F E0813 20:02:39.609674 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.652287489+00:00 stderr F E0813 20:02:39.652176 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.654922334+00:00 stderr F I0813 20:02:39.654739 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.654922334+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.654922334+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.654922334+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:39.654922334+00:00 stderr F - { 2025-08-13T20:02:39.654922334+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:39.654922334+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.654922334+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:39.654922334+00:00 stderr F - }, 2025-08-13T20:02:39.654922334+00:00 stderr F + { 2025-08-13T20:02:39.654922334+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:39.654922334+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.654922334+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.652405342 +0000 UTC m=+78.857504516", 2025-08-13T20:02:39.654922334+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:39.654922334+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:39.654922334+00:00 stderr F + }, 2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:39.654922334+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:39.654922334+00:00 stderr F }, 2025-08-13T20:02:39.654922334+00:00 stderr F Version: "", 2025-08-13T20:02:39.654922334+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.654922334+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.654922334+00:00 stderr F } 2025-08-13T20:02:39.733002751+00:00 stderr F E0813 20:02:39.732689 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.736083259+00:00 stderr F I0813 20:02:39.735966 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.736083259+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.736083259+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.736083259+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:39.736083259+00:00 stderr F - { 2025-08-13T20:02:39.736083259+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:39.736083259+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.736083259+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:39.736083259+00:00 stderr F - }, 2025-08-13T20:02:39.736083259+00:00 stderr F + { 2025-08-13T20:02:39.736083259+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:39.736083259+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.736083259+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.732833536 +0000 UTC m=+78.937932970", 2025-08-13T20:02:39.736083259+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:39.736083259+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:39.736083259+00:00 stderr F + }, 2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:39.736083259+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:39.736083259+00:00 stderr F }, 2025-08-13T20:02:39.736083259+00:00 stderr F Version: "", 2025-08-13T20:02:39.736083259+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.736083259+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.736083259+00:00 stderr F } 2025-08-13T20:02:39.810677247+00:00 stderr F E0813 20:02:39.810556 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.855908507+00:00 stderr F E0813 20:02:39.855723 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.858237484+00:00 stderr F I0813 20:02:39.858171 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.858237484+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.858237484+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.858237484+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:39.858237484+00:00 stderr F - { 2025-08-13T20:02:39.858237484+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:39.858237484+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.858237484+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:39.858237484+00:00 stderr F - }, 2025-08-13T20:02:39.858237484+00:00 stderr F + { 2025-08-13T20:02:39.858237484+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:39.858237484+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.858237484+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.855873176 +0000 UTC m=+79.060972630", 2025-08-13T20:02:39.858237484+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:39.858237484+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:39.858237484+00:00 stderr F + }, 2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:39.858237484+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:39.858237484+00:00 stderr F }, 2025-08-13T20:02:39.858237484+00:00 stderr F Version: "", 2025-08-13T20:02:39.858237484+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.858237484+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.858237484+00:00 stderr F } 2025-08-13T20:02:40.011393083+00:00 stderr F E0813 20:02:40.010825 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.055184292+00:00 stderr F E0813 20:02:40.055065 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.057880279+00:00 stderr F I0813 20:02:40.057721 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:40.057880279+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:40.057880279+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:40.057880279+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:40.057880279+00:00 stderr F - { 2025-08-13T20:02:40.057880279+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:40.057880279+00:00 stderr F - Status: "False", 2025-08-13T20:02:40.057880279+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:40.057880279+00:00 stderr F - }, 2025-08-13T20:02:40.057880279+00:00 stderr F + { 2025-08-13T20:02:40.057880279+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:40.057880279+00:00 stderr F + Status: "True", 2025-08-13T20:02:40.057880279+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.055165632 +0000 UTC m=+79.260264806", 2025-08-13T20:02:40.057880279+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:40.057880279+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:40.057880279+00:00 stderr F + }, 2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:40.057880279+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:40.057880279+00:00 stderr F }, 2025-08-13T20:02:40.057880279+00:00 stderr F Version: "", 2025-08-13T20:02:40.057880279+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:40.057880279+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:40.057880279+00:00 stderr F } 2025-08-13T20:02:40.211032458+00:00 stderr F E0813 20:02:40.210726 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.254105447+00:00 stderr F E0813 20:02:40.253832 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.257162694+00:00 stderr F I0813 20:02:40.257024 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:40.257162694+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:40.257162694+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:40.257162694+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:40.257162694+00:00 stderr F - { 2025-08-13T20:02:40.257162694+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:40.257162694+00:00 stderr F - Status: "False", 2025-08-13T20:02:40.257162694+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:40.257162694+00:00 stderr F - }, 2025-08-13T20:02:40.257162694+00:00 stderr F + { 2025-08-13T20:02:40.257162694+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:40.257162694+00:00 stderr F + Status: "True", 2025-08-13T20:02:40.257162694+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.253945202 +0000 UTC m=+79.459044626", 2025-08-13T20:02:40.257162694+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:40.257162694+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:40.257162694+00:00 stderr F + }, 2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:40.257162694+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:40.257162694+00:00 stderr F }, 2025-08-13T20:02:40.257162694+00:00 stderr F Version: "", 2025-08-13T20:02:40.257162694+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:40.257162694+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:40.257162694+00:00 stderr F } 2025-08-13T20:02:40.408925773+00:00 stderr F I0813 20:02:40.408472 1 request.go:697] Waited for 1.17324116s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:40.410283412+00:00 stderr F E0813 20:02:40.410202 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.453893266+00:00 stderr F E0813 20:02:40.453701 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.459104355+00:00 stderr F I0813 20:02:40.458981 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:40.459104355+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:40.459104355+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:40.459104355+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:40.459104355+00:00 stderr F - { 2025-08-13T20:02:40.459104355+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:40.459104355+00:00 stderr F - Status: "False", 2025-08-13T20:02:40.459104355+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:40.459104355+00:00 stderr F - }, 2025-08-13T20:02:40.459104355+00:00 stderr F + { 2025-08-13T20:02:40.459104355+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:40.459104355+00:00 stderr F + Status: "True", 2025-08-13T20:02:40.459104355+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.45403173 +0000 UTC m=+79.659131874", 2025-08-13T20:02:40.459104355+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:40.459104355+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:40.459104355+00:00 stderr F + }, 2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:40.459104355+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:40.459104355+00:00 stderr F }, 2025-08-13T20:02:40.459104355+00:00 stderr F Version: "", 2025-08-13T20:02:40.459104355+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:40.459104355+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:40.459104355+00:00 stderr F } 2025-08-13T20:02:40.610141543+00:00 stderr F E0813 20:02:40.609707 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.693087400+00:00 stderr F E0813 20:02:40.693030 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.695862119+00:00 stderr F I0813 20:02:40.695752 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:40.695862119+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:40.695862119+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:40.695862119+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:40.695862119+00:00 stderr F - { 2025-08-13T20:02:40.695862119+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:40.695862119+00:00 stderr F - Status: "False", 2025-08-13T20:02:40.695862119+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:40.695862119+00:00 stderr F - }, 2025-08-13T20:02:40.695862119+00:00 stderr F + { 2025-08-13T20:02:40.695862119+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:40.695862119+00:00 stderr F + Status: "True", 2025-08-13T20:02:40.695862119+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.693200443 +0000 UTC m=+79.898299647", 2025-08-13T20:02:40.695862119+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:40.695862119+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:40.695862119+00:00 stderr F + }, 2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:40.695862119+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:40.695862119+00:00 stderr F }, 2025-08-13T20:02:40.695862119+00:00 stderr F Version: "", 2025-08-13T20:02:40.695862119+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:40.695862119+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:40.695862119+00:00 stderr F } 2025-08-13T20:02:40.809537742+00:00 stderr F E0813 20:02:40.809455 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.009569698+00:00 stderr F E0813 20:02:41.009458 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.092326639+00:00 stderr F E0813 20:02:41.092276 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.094510742+00:00 stderr F I0813 20:02:41.094483 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.094510742+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.094510742+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.094510742+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:41.094510742+00:00 stderr F - { 2025-08-13T20:02:41.094510742+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:41.094510742+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.094510742+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:41.094510742+00:00 stderr F - }, 2025-08-13T20:02:41.094510742+00:00 stderr F + { 2025-08-13T20:02:41.094510742+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:41.094510742+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.094510742+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.092412442 +0000 UTC m=+80.297511526", 2025-08-13T20:02:41.094510742+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.094510742+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:41.094510742+00:00 stderr F + }, 2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.094510742+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:41.094510742+00:00 stderr F }, 2025-08-13T20:02:41.094510742+00:00 stderr F Version: "", 2025-08-13T20:02:41.094510742+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.094510742+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.094510742+00:00 stderr F } 2025-08-13T20:02:41.209867223+00:00 stderr F E0813 20:02:41.209685 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.292589613+00:00 stderr F E0813 20:02:41.292529 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.295547067+00:00 stderr F I0813 20:02:41.295514 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.295547067+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.295547067+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.295547067+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:41.295547067+00:00 stderr F - { 2025-08-13T20:02:41.295547067+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:41.295547067+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.295547067+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:41.295547067+00:00 stderr F - }, 2025-08-13T20:02:41.295547067+00:00 stderr F + { 2025-08-13T20:02:41.295547067+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:41.295547067+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.295547067+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.292687976 +0000 UTC m=+80.497787250", 2025-08-13T20:02:41.295547067+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.295547067+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:41.295547067+00:00 stderr F + }, 2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:41.295547067+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:41.295547067+00:00 stderr F }, 2025-08-13T20:02:41.295547067+00:00 stderr F Version: "", 2025-08-13T20:02:41.295547067+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.295547067+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.295547067+00:00 stderr F } 2025-08-13T20:02:41.408646444+00:00 stderr F I0813 20:02:41.408524 1 request.go:697] Waited for 1.151170611s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:41.410157427+00:00 stderr F E0813 20:02:41.410122 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.452286258+00:00 stderr F E0813 20:02:41.452087 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.455177620+00:00 stderr F I0813 20:02:41.455081 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.455177620+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.455177620+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.455177620+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F - { 2025-08-13T20:02:41.455177620+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:41.455177620+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.455177620+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:41.455177620+00:00 stderr F - }, 2025-08-13T20:02:41.455177620+00:00 stderr F + { 2025-08-13T20:02:41.455177620+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:41.455177620+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.455177620+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.452167594 +0000 UTC m=+80.657266869", 2025-08-13T20:02:41.455177620+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:41.455177620+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:41.455177620+00:00 stderr F + }, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:41.455177620+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:41.455177620+00:00 stderr F }, 2025-08-13T20:02:41.455177620+00:00 stderr F Version: "", 2025-08-13T20:02:41.455177620+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.455177620+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.455177620+00:00 stderr F } 2025-08-13T20:02:41.492750232+00:00 stderr F E0813 20:02:41.492654 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.498085424+00:00 stderr F I0813 20:02:41.498014 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.498085424+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.498085424+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.498085424+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F - { 2025-08-13T20:02:41.498085424+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:41.498085424+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.498085424+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:41.498085424+00:00 stderr F - }, 2025-08-13T20:02:41.498085424+00:00 stderr F + { 2025-08-13T20:02:41.498085424+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:41.498085424+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.498085424+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.492727882 +0000 UTC m=+80.697827116", 2025-08-13T20:02:41.498085424+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.498085424+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:41.498085424+00:00 stderr F + }, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F ... 
// 19 identical elements 
2025-08-13T20:02:41.498085424+00:00 stderr F }, 
2025-08-13T20:02:41.498085424+00:00 stderr F Version: "", 
2025-08-13T20:02:41.498085424+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:41.498085424+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:41.498085424+00:00 stderr F } 
2025-08-13T20:02:41.609886224+00:00 stderr F E0813 20:02:41.609696 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:41.692903652+00:00 stderr F E0813 20:02:41.692665 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:41.695712942+00:00 stderr F I0813 20:02:41.695640 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:41.695712942+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:41.695712942+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:41.695712942+00:00 stderr F ... 
// 10 identical elements 
2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 
2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 
2025-08-13T20:02:41.695712942+00:00 stderr F - { 
2025-08-13T20:02:41.695712942+00:00 stderr F - Type: "ServiceSyncDegraded", 
2025-08-13T20:02:41.695712942+00:00 stderr F - Status: "False", 
2025-08-13T20:02:41.695712942+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 
2025-08-13T20:02:41.695712942+00:00 stderr F - }, 
2025-08-13T20:02:41.695712942+00:00 stderr F + { 
2025-08-13T20:02:41.695712942+00:00 stderr F + Type: "ServiceSyncDegraded", 
2025-08-13T20:02:41.695712942+00:00 stderr F + Status: "True", 
2025-08-13T20:02:41.695712942+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.692911722 +0000 UTC m=+80.898011126", 
2025-08-13T20:02:41.695712942+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:41.695712942+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 
2025-08-13T20:02:41.695712942+00:00 stderr F + }, 
2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 
2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 
2025-08-13T20:02:41.695712942+00:00 stderr F ... 
// 47 identical elements 
2025-08-13T20:02:41.695712942+00:00 stderr F }, 
2025-08-13T20:02:41.695712942+00:00 stderr F Version: "", 
2025-08-13T20:02:41.695712942+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:41.695712942+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:41.695712942+00:00 stderr F } 
2025-08-13T20:02:41.809092337+00:00 stderr F E0813 20:02:41.808992 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:41.971767138+00:00 stderr F E0813 20:02:41.971645 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:41.975059611+00:00 stderr F I0813 20:02:41.975003 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:41.975059611+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:41.975059611+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:41.975059611+00:00 stderr F ... 
// 10 identical elements 
2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 
2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 
2025-08-13T20:02:41.975059611+00:00 stderr F - { 
2025-08-13T20:02:41.975059611+00:00 stderr F - Type: "ServiceSyncDegraded", 
2025-08-13T20:02:41.975059611+00:00 stderr F - Status: "False", 
2025-08-13T20:02:41.975059611+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 
2025-08-13T20:02:41.975059611+00:00 stderr F - }, 
2025-08-13T20:02:41.975059611+00:00 stderr F + { 
2025-08-13T20:02:41.975059611+00:00 stderr F + Type: "ServiceSyncDegraded", 
2025-08-13T20:02:41.975059611+00:00 stderr F + Status: "True", 
2025-08-13T20:02:41.975059611+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.971699266 +0000 UTC m=+81.176798460", 
2025-08-13T20:02:41.975059611+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:41.975059611+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 
2025-08-13T20:02:41.975059611+00:00 stderr F + }, 
2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 
2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 
2025-08-13T20:02:41.975059611+00:00 stderr F ... 
// 47 identical elements 
2025-08-13T20:02:41.975059611+00:00 stderr F }, 
2025-08-13T20:02:41.975059611+00:00 stderr F Version: "", 
2025-08-13T20:02:41.975059611+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:41.975059611+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:41.975059611+00:00 stderr F } 
2025-08-13T20:02:42.009924016+00:00 stderr F E0813 20:02:42.009727 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.173314598+00:00 stderr F E0813 20:02:42.173193 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.175037337+00:00 stderr F I0813 20:02:42.174736 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:42.175037337+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:42.175037337+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:42.175037337+00:00 stderr F ... 
// 19 identical elements 
2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 
2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 
2025-08-13T20:02:42.175037337+00:00 stderr F - { 
2025-08-13T20:02:42.175037337+00:00 stderr F - Type: "PDBSyncDegraded", 
2025-08-13T20:02:42.175037337+00:00 stderr F - Status: "False", 
2025-08-13T20:02:42.175037337+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 
2025-08-13T20:02:42.175037337+00:00 stderr F - }, 
2025-08-13T20:02:42.175037337+00:00 stderr F + { 
2025-08-13T20:02:42.175037337+00:00 stderr F + Type: "PDBSyncDegraded", 
2025-08-13T20:02:42.175037337+00:00 stderr F + Status: "True", 
2025-08-13T20:02:42.175037337+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.173233336 +0000 UTC m=+81.378332610", 
2025-08-13T20:02:42.175037337+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:42.175037337+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 
2025-08-13T20:02:42.175037337+00:00 stderr F + }, 
2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 
2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 
2025-08-13T20:02:42.175037337+00:00 stderr F ... 
// 38 identical elements 
2025-08-13T20:02:42.175037337+00:00 stderr F }, 
2025-08-13T20:02:42.175037337+00:00 stderr F Version: "", 
2025-08-13T20:02:42.175037337+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:42.175037337+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:42.175037337+00:00 stderr F } 
2025-08-13T20:02:42.209069618+00:00 stderr F E0813 20:02:42.208979 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.373713945+00:00 stderr F E0813 20:02:42.373274 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.376973758+00:00 stderr F I0813 20:02:42.376690 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:42.376973758+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:42.376973758+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:42.376973758+00:00 stderr F ... 
// 2 identical elements 
2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 
2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 
2025-08-13T20:02:42.376973758+00:00 stderr F - { 
2025-08-13T20:02:42.376973758+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 
2025-08-13T20:02:42.376973758+00:00 stderr F - Status: "False", 
2025-08-13T20:02:42.376973758+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 
2025-08-13T20:02:42.376973758+00:00 stderr F - }, 
2025-08-13T20:02:42.376973758+00:00 stderr F + { 
2025-08-13T20:02:42.376973758+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 
2025-08-13T20:02:42.376973758+00:00 stderr F + Status: "True", 
2025-08-13T20:02:42.376973758+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.373340805 +0000 UTC m=+81.578439969", 
2025-08-13T20:02:42.376973758+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:42.376973758+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 
2025-08-13T20:02:42.376973758+00:00 stderr F + }, 
2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 
2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 
2025-08-13T20:02:42.376973758+00:00 stderr F ... 
// 55 identical elements 
2025-08-13T20:02:42.376973758+00:00 stderr F }, 
2025-08-13T20:02:42.376973758+00:00 stderr F Version: "", 
2025-08-13T20:02:42.376973758+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:42.376973758+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:42.376973758+00:00 stderr F } 
2025-08-13T20:02:42.410688000+00:00 stderr F E0813 20:02:42.410560 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.608154843+00:00 stderr F I0813 20:02:42.607562 1 request.go:697] Waited for 1.109243645s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 
2025-08-13T20:02:42.610078638+00:00 stderr F E0813 20:02:42.609974 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.772931454+00:00 stderr F E0813 20:02:42.772878 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.775012764+00:00 stderr F I0813 20:02:42.774987 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:42.775012764+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:42.775012764+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:42.775012764+00:00 stderr F ... 
// 19 identical elements 
2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 
2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 
2025-08-13T20:02:42.775012764+00:00 stderr F - { 
2025-08-13T20:02:42.775012764+00:00 stderr F - Type: "PDBSyncDegraded", 
2025-08-13T20:02:42.775012764+00:00 stderr F - Status: "False", 
2025-08-13T20:02:42.775012764+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 
2025-08-13T20:02:42.775012764+00:00 stderr F - }, 
2025-08-13T20:02:42.775012764+00:00 stderr F + { 
2025-08-13T20:02:42.775012764+00:00 stderr F + Type: "PDBSyncDegraded", 
2025-08-13T20:02:42.775012764+00:00 stderr F + Status: "True", 
2025-08-13T20:02:42.775012764+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.773033487 +0000 UTC m=+81.978132701", 
2025-08-13T20:02:42.775012764+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:42.775012764+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 
2025-08-13T20:02:42.775012764+00:00 stderr F + }, 
2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 
2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 
2025-08-13T20:02:42.775012764+00:00 stderr F ... 
// 38 identical elements 
2025-08-13T20:02:42.775012764+00:00 stderr F }, 
2025-08-13T20:02:42.775012764+00:00 stderr F Version: "", 
2025-08-13T20:02:42.775012764+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:42.775012764+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:42.775012764+00:00 stderr F } 
2025-08-13T20:02:42.809667162+00:00 stderr F E0813 20:02:42.809511 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.972950400+00:00 stderr F E0813 20:02:42.972830 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.983392638+00:00 stderr F I0813 20:02:42.983251 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:42.983392638+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:42.983392638+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:42.983392638+00:00 stderr F ... 
// 10 identical elements 
2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 
2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 
2025-08-13T20:02:42.983392638+00:00 stderr F - { 
2025-08-13T20:02:42.983392638+00:00 stderr F - Type: "ServiceSyncDegraded", 
2025-08-13T20:02:42.983392638+00:00 stderr F - Status: "False", 
2025-08-13T20:02:42.983392638+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 
2025-08-13T20:02:42.983392638+00:00 stderr F - }, 
2025-08-13T20:02:42.983392638+00:00 stderr F + { 
2025-08-13T20:02:42.983392638+00:00 stderr F + Type: "ServiceSyncDegraded", 
2025-08-13T20:02:42.983392638+00:00 stderr F + Status: "True", 
2025-08-13T20:02:42.983392638+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.972919719 +0000 UTC m=+82.178018933", 
2025-08-13T20:02:42.983392638+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:42.983392638+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 
2025-08-13T20:02:42.983392638+00:00 stderr F + }, 
2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 
2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 
2025-08-13T20:02:42.983392638+00:00 stderr F ... 
// 47 identical elements 
2025-08-13T20:02:42.983392638+00:00 stderr F }, 
2025-08-13T20:02:42.983392638+00:00 stderr F Version: "", 
2025-08-13T20:02:42.983392638+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:42.983392638+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:42.983392638+00:00 stderr F } 
2025-08-13T20:02:43.009577656+00:00 stderr F E0813 20:02:43.009483 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.211282169+00:00 stderr F E0813 20:02:43.210955 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.332547909+00:00 stderr F E0813 20:02:43.332470 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.335764841+00:00 stderr F I0813 20:02:43.335650 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:43.335764841+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:43.335764841+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:43.335764841+00:00 stderr F ... 
// 10 identical elements 
2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 
2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 
2025-08-13T20:02:43.335764841+00:00 stderr F - { 
2025-08-13T20:02:43.335764841+00:00 stderr F - Type: "ServiceSyncDegraded", 
2025-08-13T20:02:43.335764841+00:00 stderr F - Status: "False", 
2025-08-13T20:02:43.335764841+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 
2025-08-13T20:02:43.335764841+00:00 stderr F - }, 
2025-08-13T20:02:43.335764841+00:00 stderr F + { 
2025-08-13T20:02:43.335764841+00:00 stderr F + Type: "ServiceSyncDegraded", 
2025-08-13T20:02:43.335764841+00:00 stderr F + Status: "True", 
2025-08-13T20:02:43.335764841+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.332531919 +0000 UTC m=+82.537631143", 
2025-08-13T20:02:43.335764841+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:43.335764841+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 
2025-08-13T20:02:43.335764841+00:00 stderr F + }, 
2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 
2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 
2025-08-13T20:02:43.335764841+00:00 stderr F ... 
// 47 identical elements 
2025-08-13T20:02:43.335764841+00:00 stderr F }, 
2025-08-13T20:02:43.335764841+00:00 stderr F Version: "", 
2025-08-13T20:02:43.335764841+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:43.335764841+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:43.335764841+00:00 stderr F } 
2025-08-13T20:02:43.410632006+00:00 stderr F E0813 20:02:43.410544 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.533812061+00:00 stderr F E0813 20:02:43.533681 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.535882560+00:00 stderr F I0813 20:02:43.535707 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:43.535882560+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:43.535882560+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:43.535882560+00:00 stderr F ... 
// 19 identical elements 
2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 
2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 
2025-08-13T20:02:43.535882560+00:00 stderr F - { 
2025-08-13T20:02:43.535882560+00:00 stderr F - Type: "PDBSyncDegraded", 
2025-08-13T20:02:43.535882560+00:00 stderr F - Status: "False", 
2025-08-13T20:02:43.535882560+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 
2025-08-13T20:02:43.535882560+00:00 stderr F - }, 
2025-08-13T20:02:43.535882560+00:00 stderr F + { 
2025-08-13T20:02:43.535882560+00:00 stderr F + Type: "PDBSyncDegraded", 
2025-08-13T20:02:43.535882560+00:00 stderr F + Status: "True", 
2025-08-13T20:02:43.535882560+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.533750569 +0000 UTC m=+82.738849793", 
2025-08-13T20:02:43.535882560+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:43.535882560+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 
2025-08-13T20:02:43.535882560+00:00 stderr F + }, 
2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 
2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 
2025-08-13T20:02:43.535882560+00:00 stderr F ... 
// 38 identical elements 
2025-08-13T20:02:43.535882560+00:00 stderr F }, 
2025-08-13T20:02:43.535882560+00:00 stderr F Version: "", 
2025-08-13T20:02:43.535882560+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:43.535882560+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:43.535882560+00:00 stderr F } 
2025-08-13T20:02:43.613427862+00:00 stderr F E0813 20:02:43.613301 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.692654802+00:00 stderr F E0813 20:02:43.692552 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.694875885+00:00 stderr F I0813 20:02:43.694731 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:43.694875885+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:43.694875885+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:43.694875885+00:00 stderr F ... 
// 43 identical elements 
2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 
2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 
2025-08-13T20:02:43.694875885+00:00 stderr F - { 
2025-08-13T20:02:43.694875885+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 
2025-08-13T20:02:43.694875885+00:00 stderr F - Status: "False", 
2025-08-13T20:02:43.694875885+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 
2025-08-13T20:02:43.694875885+00:00 stderr F - }, 
2025-08-13T20:02:43.694875885+00:00 stderr F + { 
2025-08-13T20:02:43.694875885+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 
2025-08-13T20:02:43.694875885+00:00 stderr F + Status: "True", 
2025-08-13T20:02:43.694875885+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.692635021 +0000 UTC m=+82.897734245", 
2025-08-13T20:02:43.694875885+00:00 stderr F + Reason: "FailedDelete", 
2025-08-13T20:02:43.694875885+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 
2025-08-13T20:02:43.694875885+00:00 stderr F + }, 
2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 
2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 
2025-08-13T20:02:43.694875885+00:00 stderr F ... 
// 14 identical elements 
2025-08-13T20:02:43.694875885+00:00 stderr F }, 
2025-08-13T20:02:43.694875885+00:00 stderr F Version: "", 
2025-08-13T20:02:43.694875885+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:43.694875885+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:43.694875885+00:00 stderr F } 
2025-08-13T20:02:43.738092648+00:00 stderr F E0813 20:02:43.738038 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.740215249+00:00 stderr F I0813 20:02:43.740186 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:43.740215249+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:43.740215249+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:43.740215249+00:00 stderr F ... 
// 2 identical elements 
2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 
2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 
2025-08-13T20:02:43.740215249+00:00 stderr F - { 
2025-08-13T20:02:43.740215249+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 
2025-08-13T20:02:43.740215249+00:00 stderr F - Status: "False", 
2025-08-13T20:02:43.740215249+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 
2025-08-13T20:02:43.740215249+00:00 stderr F - }, 
2025-08-13T20:02:43.740215249+00:00 stderr F + { 
2025-08-13T20:02:43.740215249+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 
2025-08-13T20:02:43.740215249+00:00 stderr F + Status: "True", 
2025-08-13T20:02:43.740215249+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.738236632 +0000 UTC m=+82.943335826", 
2025-08-13T20:02:43.740215249+00:00 stderr F + Reason: "FailedApply", 
2025-08-13T20:02:43.740215249+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 
2025-08-13T20:02:43.740215249+00:00 stderr F + }, 
2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 
2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 
2025-08-13T20:02:43.740215249+00:00 stderr F ... 
// 55 identical elements 
2025-08-13T20:02:43.740215249+00:00 stderr F }, 
2025-08-13T20:02:43.740215249+00:00 stderr F Version: "", 
2025-08-13T20:02:43.740215249+00:00 stderr F ReadyReplicas: 0, 
2025-08-13T20:02:43.740215249+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:02:43.740215249+00:00 stderr F } 
2025-08-13T20:02:43.810450213+00:00 stderr F E0813 20:02:43.810300 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.937389464+00:00 stderr F E0813 20:02:43.937257 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.943501168+00:00 stderr F I0813 20:02:43.943424 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 
2025-08-13T20:02:43.943501168+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:43.943501168+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:02:43.943501168+00:00 stderr F ... 
// 19 identical elements
2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:43.943501168+00:00 stderr F - {
2025-08-13T20:02:43.943501168+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:43.943501168+00:00 stderr F - Status: "False",
2025-08-13T20:02:43.943501168+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:43.943501168+00:00 stderr F - },
2025-08-13T20:02:43.943501168+00:00 stderr F + {
2025-08-13T20:02:43.943501168+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:43.943501168+00:00 stderr F + Status: "True",
2025-08-13T20:02:43.943501168+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.937455325 +0000 UTC m=+83.142555230",
2025-08-13T20:02:43.943501168+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:43.943501168+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:43.943501168+00:00 stderr F + },
2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:43.943501168+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:43.943501168+00:00 stderr F },
2025-08-13T20:02:43.943501168+00:00 stderr F Version: "",
2025-08-13T20:02:43.943501168+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:43.943501168+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:43.943501168+00:00 stderr F }
2025-08-13T20:02:44.009754338+00:00 stderr F E0813 20:02:44.009698 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.133475947+00:00 stderr F E0813 20:02:44.133326 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.136048801+00:00 stderr F I0813 20:02:44.136012 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:44.136048801+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:44.136048801+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:44.136048801+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:44.136048801+00:00 stderr F - {
2025-08-13T20:02:44.136048801+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:44.136048801+00:00 stderr F - Status: "False",
2025-08-13T20:02:44.136048801+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:44.136048801+00:00 stderr F - },
2025-08-13T20:02:44.136048801+00:00 stderr F + {
2025-08-13T20:02:44.136048801+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:44.136048801+00:00 stderr F + Status: "True",
2025-08-13T20:02:44.136048801+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.133948111 +0000 UTC m=+83.339047395",
2025-08-13T20:02:44.136048801+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:44.136048801+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:44.136048801+00:00 stderr F + },
2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:44.136048801+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:44.136048801+00:00 stderr F },
2025-08-13T20:02:44.136048801+00:00 stderr F Version: "",
2025-08-13T20:02:44.136048801+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:44.136048801+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:44.136048801+00:00 stderr F }
2025-08-13T20:02:44.210293679+00:00 stderr F E0813 20:02:44.210220 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.410241533+00:00 stderr F E0813 20:02:44.410145 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.610735652+00:00 stderr F E0813 20:02:44.610572 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.653021509+00:00 stderr F E0813 20:02:44.652960 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.654728947+00:00 stderr F I0813 20:02:44.654702 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:44.654728947+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:44.654728947+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:44.654728947+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:44.654728947+00:00 stderr F - {
2025-08-13T20:02:44.654728947+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:44.654728947+00:00 stderr F - Status: "False",
2025-08-13T20:02:44.654728947+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:44.654728947+00:00 stderr F - },
2025-08-13T20:02:44.654728947+00:00 stderr F + {
2025-08-13T20:02:44.654728947+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:44.654728947+00:00 stderr F + Status: "True",
2025-08-13T20:02:44.654728947+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.653127182 +0000 UTC m=+83.858226276",
2025-08-13T20:02:44.654728947+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:44.654728947+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:44.654728947+00:00 stderr F + },
2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:44.654728947+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:44.654728947+00:00 stderr F },
2025-08-13T20:02:44.654728947+00:00 stderr F Version: "",
2025-08-13T20:02:44.654728947+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:44.654728947+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:44.654728947+00:00 stderr F }
2025-08-13T20:02:44.809875933+00:00 stderr F E0813 20:02:44.809720 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.852525840+00:00 stderr F E0813 20:02:44.852477 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.854682591+00:00 stderr F I0813 20:02:44.854656 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:44.854682591+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:44.854682591+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:44.854682591+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:44.854682591+00:00 stderr F - {
2025-08-13T20:02:44.854682591+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:44.854682591+00:00 stderr F - Status: "False",
2025-08-13T20:02:44.854682591+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:44.854682591+00:00 stderr F - },
2025-08-13T20:02:44.854682591+00:00 stderr F + {
2025-08-13T20:02:44.854682591+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:44.854682591+00:00 stderr F + Status: "True",
2025-08-13T20:02:44.854682591+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.852597902 +0000 UTC m=+84.057697016",
2025-08-13T20:02:44.854682591+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:44.854682591+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:44.854682591+00:00 stderr F + },
2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:44.854682591+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:44.854682591+00:00 stderr F },
2025-08-13T20:02:44.854682591+00:00 stderr F Version: "",
2025-08-13T20:02:44.854682591+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:44.854682591+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:44.854682591+00:00 stderr F }
2025-08-13T20:02:45.010542128+00:00 stderr F E0813 20:02:45.010409 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.209175334+00:00 stderr F E0813 20:02:45.208632 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.254010902+00:00 stderr F E0813 20:02:45.253914 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.256010770+00:00 stderr F I0813 20:02:45.255942 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:45.256010770+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:45.256010770+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:45.256010770+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:45.256010770+00:00 stderr F - {
2025-08-13T20:02:45.256010770+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:45.256010770+00:00 stderr F - Status: "False",
2025-08-13T20:02:45.256010770+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:45.256010770+00:00 stderr F - },
2025-08-13T20:02:45.256010770+00:00 stderr F + {
2025-08-13T20:02:45.256010770+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:45.256010770+00:00 stderr F + Status: "True",
2025-08-13T20:02:45.256010770+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.253957231 +0000 UTC m=+84.459056385",
2025-08-13T20:02:45.256010770+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:45.256010770+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:45.256010770+00:00 stderr F + },
2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:45.256010770+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:02:45.256010770+00:00 stderr F },
2025-08-13T20:02:45.256010770+00:00 stderr F Version: "",
2025-08-13T20:02:45.256010770+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:45.256010770+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:45.256010770+00:00 stderr F }
2025-08-13T20:02:45.409763546+00:00 stderr F E0813 20:02:45.409605 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.453226156+00:00 stderr F E0813 20:02:45.453104 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.456240132+00:00 stderr F I0813 20:02:45.456165 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:45.456240132+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:45.456240132+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:45.456240132+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:45.456240132+00:00 stderr F - {
2025-08-13T20:02:45.456240132+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:45.456240132+00:00 stderr F - Status: "False",
2025-08-13T20:02:45.456240132+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:45.456240132+00:00 stderr F - },
2025-08-13T20:02:45.456240132+00:00 stderr F + {
2025-08-13T20:02:45.456240132+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:45.456240132+00:00 stderr F + Status: "True",
2025-08-13T20:02:45.456240132+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.453183404 +0000 UTC m=+84.658282768",
2025-08-13T20:02:45.456240132+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:45.456240132+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:45.456240132+00:00 stderr F + },
2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:45.456240132+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:45.456240132+00:00 stderr F },
2025-08-13T20:02:45.456240132+00:00 stderr F Version: "",
2025-08-13T20:02:45.456240132+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:45.456240132+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:45.456240132+00:00 stderr F }
2025-08-13T20:02:45.610164553+00:00 stderr F E0813 20:02:45.609866 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.654445656+00:00 stderr F E0813 20:02:45.654335 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.658755559+00:00 stderr F I0813 20:02:45.658340 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:45.658755559+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:45.658755559+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:45.658755559+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:45.658755559+00:00 stderr F - {
2025-08-13T20:02:45.658755559+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:45.658755559+00:00 stderr F - Status: "False",
2025-08-13T20:02:45.658755559+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:45.658755559+00:00 stderr F - },
2025-08-13T20:02:45.658755559+00:00 stderr F + {
2025-08-13T20:02:45.658755559+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:45.658755559+00:00 stderr F + Status: "True",
2025-08-13T20:02:45.658755559+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.654403195 +0000 UTC m=+84.859502529",
2025-08-13T20:02:45.658755559+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:45.658755559+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:45.658755559+00:00 stderr F + },
2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:45.658755559+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:45.658755559+00:00 stderr F },
2025-08-13T20:02:45.658755559+00:00 stderr F Version: "",
2025-08-13T20:02:45.658755559+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:45.658755559+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:45.658755559+00:00 stderr F }
2025-08-13T20:02:45.812534816+00:00 stderr F E0813 20:02:45.812435 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.009827914+00:00 stderr F E0813 20:02:46.009709 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.491641599+00:00 stderr F E0813 20:02:46.491579 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.493878643+00:00 stderr F I0813 20:02:46.493768 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:46.493878643+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:46.493878643+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:46.493878643+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:46.493878643+00:00 stderr F - {
2025-08-13T20:02:46.493878643+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:46.493878643+00:00 stderr F - Status: "False",
2025-08-13T20:02:46.493878643+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:46.493878643+00:00 stderr F - },
2025-08-13T20:02:46.493878643+00:00 stderr F + {
2025-08-13T20:02:46.493878643+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:46.493878643+00:00 stderr F + Status: "True",
2025-08-13T20:02:46.493878643+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.491770803 +0000 UTC m=+85.696870017",
2025-08-13T20:02:46.493878643+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:46.493878643+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:46.493878643+00:00 stderr F + },
2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:46.493878643+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:46.493878643+00:00 stderr F },
2025-08-13T20:02:46.493878643+00:00 stderr F Version: "",
2025-08-13T20:02:46.493878643+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:46.493878643+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:46.493878643+00:00 stderr F }
2025-08-13T20:02:46.495978243+00:00 stderr F E0813 20:02:46.495218 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.692088328+00:00 stderr F E0813 20:02:46.692031 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.694663531+00:00 stderr F I0813 20:02:46.694637 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:46.694663531+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:46.694663531+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:46.694663531+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:46.694663531+00:00 stderr F - {
2025-08-13T20:02:46.694663531+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:46.694663531+00:00 stderr F - Status: "False",
2025-08-13T20:02:46.694663531+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:46.694663531+00:00 stderr F - },
2025-08-13T20:02:46.694663531+00:00 stderr F + {
2025-08-13T20:02:46.694663531+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:46.694663531+00:00 stderr F + Status: "True",
2025-08-13T20:02:46.694663531+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.692192771 +0000 UTC m=+85.897291995",
2025-08-13T20:02:46.694663531+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:46.694663531+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:46.694663531+00:00 stderr F + },
2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:46.694663531+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:46.694663531+00:00 stderr F },
2025-08-13T20:02:46.694663531+00:00 stderr F Version: "",
2025-08-13T20:02:46.694663531+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:46.694663531+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:46.694663531+00:00 stderr F }
2025-08-13T20:02:46.696121993+00:00 stderr F E0813 20:02:46.696027 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.893951606+00:00 stderr F E0813 20:02:46.893726 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.896709045+00:00 stderr F I0813 20:02:46.896600 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:46.896709045+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:46.896709045+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:46.896709045+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:46.896709045+00:00 stderr F - {
2025-08-13T20:02:46.896709045+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:46.896709045+00:00 stderr F - Status: "False",
2025-08-13T20:02:46.896709045+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:46.896709045+00:00 stderr F - },
2025-08-13T20:02:46.896709045+00:00 stderr F + {
2025-08-13T20:02:46.896709045+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:46.896709045+00:00 stderr F + Status: "True",
2025-08-13T20:02:46.896709045+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.893873064 +0000 UTC m=+86.098972518",
2025-08-13T20:02:46.896709045+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:46.896709045+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:46.896709045+00:00 stderr F + },
2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:46.896709045+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:02:46.896709045+00:00 stderr F },
2025-08-13T20:02:46.896709045+00:00 stderr F Version: "",
2025-08-13T20:02:46.896709045+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:46.896709045+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:46.896709045+00:00 stderr F }
2025-08-13T20:02:46.898198017+00:00 stderr F E0813 20:02:46.898159 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.972571309+00:00 stderr F E0813 20:02:46.972327 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.975397639+00:00 stderr F I0813 20:02:46.975340 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:46.975397639+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:46.975397639+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:46.975397639+00:00 stderr F ...
// 43 identical elements 2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:46.975397639+00:00 stderr F - { 2025-08-13T20:02:46.975397639+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:46.975397639+00:00 stderr F - Status: "False", 2025-08-13T20:02:46.975397639+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:46.975397639+00:00 stderr F - }, 2025-08-13T20:02:46.975397639+00:00 stderr F + { 2025-08-13T20:02:46.975397639+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:46.975397639+00:00 stderr F + Status: "True", 2025-08-13T20:02:46.975397639+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.972388134 +0000 UTC m=+86.177487448", 2025-08-13T20:02:46.975397639+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:46.975397639+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:46.975397639+00:00 stderr F + }, 2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:46.975397639+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:46.975397639+00:00 stderr F }, 2025-08-13T20:02:46.975397639+00:00 stderr F Version: "", 2025-08-13T20:02:46.975397639+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:46.975397639+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:46.975397639+00:00 stderr F } 2025-08-13T20:02:46.977408187+00:00 stderr F E0813 20:02:46.977378 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.094554669+00:00 stderr F E0813 20:02:47.094426 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.096831594+00:00 stderr F I0813 20:02:47.096699 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:47.096831594+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:47.096831594+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:47.096831594+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:47.096831594+00:00 stderr F - { 2025-08-13T20:02:47.096831594+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:47.096831594+00:00 stderr F - Status: "False", 2025-08-13T20:02:47.096831594+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:47.096831594+00:00 stderr F - }, 2025-08-13T20:02:47.096831594+00:00 stderr F + { 2025-08-13T20:02:47.096831594+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:47.096831594+00:00 stderr F + Status: "True", 2025-08-13T20:02:47.096831594+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:47.094501647 +0000 UTC m=+86.299600961", 2025-08-13T20:02:47.096831594+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:47.096831594+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:47.096831594+00:00 stderr F + }, 2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:47.096831594+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:47.096831594+00:00 stderr F }, 2025-08-13T20:02:47.096831594+00:00 stderr F Version: "", 2025-08-13T20:02:47.096831594+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:47.096831594+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:47.096831594+00:00 stderr F } 2025-08-13T20:02:47.098274195+00:00 stderr F E0813 20:02:47.098170 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.292954108+00:00 stderr F E0813 20:02:47.292596 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.295254294+00:00 stderr F I0813 20:02:47.295053 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:47.295254294+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:47.295254294+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:47.295254294+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:47.295254294+00:00 stderr F - { 2025-08-13T20:02:47.295254294+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:47.295254294+00:00 stderr F - Status: "False", 2025-08-13T20:02:47.295254294+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:47.295254294+00:00 stderr F - }, 2025-08-13T20:02:47.295254294+00:00 stderr F + { 2025-08-13T20:02:47.295254294+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:47.295254294+00:00 stderr F + Status: "True", 2025-08-13T20:02:47.295254294+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:47.29266478 +0000 UTC m=+86.497763994", 2025-08-13T20:02:47.295254294+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:47.295254294+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:47.295254294+00:00 stderr F + }, 2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:47.295254294+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:47.295254294+00:00 stderr F }, 2025-08-13T20:02:47.295254294+00:00 stderr F Version: "", 2025-08-13T20:02:47.295254294+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:47.295254294+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:47.295254294+00:00 stderr F } 2025-08-13T20:02:47.296515290+00:00 stderr F E0813 20:02:47.296303 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.058882924+00:00 stderr F E0813 20:02:49.058373 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.060875371+00:00 stderr F I0813 20:02:49.060752 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.060875371+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.060875371+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.060875371+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:49.060875371+00:00 stderr F - { 2025-08-13T20:02:49.060875371+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:49.060875371+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.060875371+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:49.060875371+00:00 stderr F - }, 2025-08-13T20:02:49.060875371+00:00 stderr F + { 2025-08-13T20:02:49.060875371+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:49.060875371+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.060875371+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.058833793 +0000 UTC m=+88.263933037", 2025-08-13T20:02:49.060875371+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.060875371+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:49.060875371+00:00 stderr F + }, 2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:49.060875371+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:49.060875371+00:00 stderr F }, 2025-08-13T20:02:49.060875371+00:00 stderr F Version: "", 2025-08-13T20:02:49.060875371+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.060875371+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.060875371+00:00 stderr F } 2025-08-13T20:02:49.061996513+00:00 stderr F E0813 20:02:49.061969 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.258492338+00:00 stderr F E0813 20:02:49.258437 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.261381521+00:00 stderr F I0813 20:02:49.261351 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.261381521+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.261381521+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.261381521+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:49.261381521+00:00 stderr F - { 2025-08-13T20:02:49.261381521+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:49.261381521+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.261381521+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:49.261381521+00:00 stderr F - }, 2025-08-13T20:02:49.261381521+00:00 stderr F + { 2025-08-13T20:02:49.261381521+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:49.261381521+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.261381521+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.258576731 +0000 UTC m=+88.463675915", 2025-08-13T20:02:49.261381521+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.261381521+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:49.261381521+00:00 stderr F + }, 2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:49.261381521+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:49.261381521+00:00 stderr F }, 2025-08-13T20:02:49.261381521+00:00 stderr F Version: "", 2025-08-13T20:02:49.261381521+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.261381521+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.261381521+00:00 stderr F } 2025-08-13T20:02:49.262738109+00:00 stderr F E0813 20:02:49.262649 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.463326122+00:00 stderr F E0813 20:02:49.463212 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.464921058+00:00 stderr F I0813 20:02:49.464835 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.464921058+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.464921058+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.464921058+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:49.464921058+00:00 stderr F - { 2025-08-13T20:02:49.464921058+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:49.464921058+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.464921058+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:49.464921058+00:00 stderr F - }, 2025-08-13T20:02:49.464921058+00:00 stderr F + { 2025-08-13T20:02:49.464921058+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:49.464921058+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.464921058+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.463280471 +0000 UTC m=+88.668379695", 2025-08-13T20:02:49.464921058+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.464921058+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:49.464921058+00:00 stderr F + }, 2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:49.464921058+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:49.464921058+00:00 stderr F }, 2025-08-13T20:02:49.464921058+00:00 stderr F Version: "", 2025-08-13T20:02:49.464921058+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.464921058+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.464921058+00:00 stderr F } 2025-08-13T20:02:49.466155773+00:00 stderr F E0813 20:02:49.466097 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.662081552+00:00 stderr F E0813 20:02:49.661961 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.665182730+00:00 stderr F I0813 20:02:49.664728 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.665182730+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.665182730+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.665182730+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:49.665182730+00:00 stderr F - { 2025-08-13T20:02:49.665182730+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:49.665182730+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.665182730+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:49.665182730+00:00 stderr F - }, 2025-08-13T20:02:49.665182730+00:00 stderr F + { 2025-08-13T20:02:49.665182730+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:49.665182730+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.665182730+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.662041651 +0000 UTC m=+88.867141105", 2025-08-13T20:02:49.665182730+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.665182730+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:49.665182730+00:00 stderr F + }, 2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:49.665182730+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:49.665182730+00:00 stderr F }, 2025-08-13T20:02:49.665182730+00:00 stderr F Version: "", 2025-08-13T20:02:49.665182730+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.665182730+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.665182730+00:00 stderr F } 2025-08-13T20:02:49.666707054+00:00 stderr F E0813 20:02:49.666621 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.859386661+00:00 stderr F E0813 20:02:49.859269 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.862996234+00:00 stderr F I0813 20:02:49.862758 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.862996234+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.862996234+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.862996234+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:49.862996234+00:00 stderr F - { 2025-08-13T20:02:49.862996234+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:49.862996234+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.862996234+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:49.862996234+00:00 stderr F - }, 2025-08-13T20:02:49.862996234+00:00 stderr F + { 2025-08-13T20:02:49.862996234+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:49.862996234+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.862996234+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.85935953 +0000 UTC m=+89.064458744", 2025-08-13T20:02:49.862996234+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.862996234+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:49.862996234+00:00 stderr F + }, 2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:49.862996234+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:49.862996234+00:00 stderr F }, 2025-08-13T20:02:49.862996234+00:00 stderr F Version: "", 2025-08-13T20:02:49.862996234+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.862996234+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.862996234+00:00 stderr F } 2025-08-13T20:02:49.864553388+00:00 stderr F E0813 20:02:49.864490 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.100349989+00:00 stderr F E0813 20:02:52.100206 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.102302825+00:00 stderr F I0813 20:02:52.102244 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:52.102302825+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:52.102302825+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:52.102302825+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F - { 2025-08-13T20:02:52.102302825+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:52.102302825+00:00 stderr F - Status: "False", 2025-08-13T20:02:52.102302825+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:52.102302825+00:00 stderr F - }, 2025-08-13T20:02:52.102302825+00:00 stderr F + { 2025-08-13T20:02:52.102302825+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:52.102302825+00:00 stderr F + Status: "True", 2025-08-13T20:02:52.102302825+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:52.100318508 +0000 UTC m=+91.305417732", 2025-08-13T20:02:52.102302825+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:52.102302825+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:52.102302825+00:00 stderr F + }, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:52.102302825+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:52.102302825+00:00 stderr F }, 2025-08-13T20:02:52.102302825+00:00 stderr F Version: "", 2025-08-13T20:02:52.102302825+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:52.102302825+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:52.102302825+00:00 stderr F } 2025-08-13T20:02:52.103716805+00:00 stderr F E0813 20:02:52.103639 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.922128069+00:00 stderr F W0813 20:02:53.921703 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.922128069+00:00 stderr F E0813 20:02:53.921904 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.185693288+00:00 stderr F E0813 20:02:54.185571 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.188333574+00:00 stderr F I0813 20:02:54.188235 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.188333574+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:54.188333574+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.188333574+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:54.188333574+00:00 stderr F - { 2025-08-13T20:02:54.188333574+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.188333574+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.188333574+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:54.188333574+00:00 stderr F - }, 2025-08-13T20:02:54.188333574+00:00 stderr F + { 2025-08-13T20:02:54.188333574+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.188333574+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.188333574+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.185654827 +0000 UTC m=+93.390754171", 2025-08-13T20:02:54.188333574+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.188333574+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:54.188333574+00:00 stderr F + }, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:54.188333574+00:00 stderr F }, 2025-08-13T20:02:54.188333574+00:00 stderr F Version: "", 2025-08-13T20:02:54.188333574+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.188333574+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.188333574+00:00 stderr F } 2025-08-13T20:02:54.190895287+00:00 stderr F E0813 20:02:54.190263 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.385528939+00:00 stderr F E0813 20:02:54.385395 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.389274516+00:00 stderr F I0813 20:02:54.389174 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.389274516+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.389274516+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.389274516+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F - { 2025-08-13T20:02:54.389274516+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:54.389274516+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.389274516+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:54.389274516+00:00 stderr F - }, 2025-08-13T20:02:54.389274516+00:00 stderr F + { 2025-08-13T20:02:54.389274516+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:54.389274516+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.389274516+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.385467807 +0000 UTC m=+93.590566981", 2025-08-13T20:02:54.389274516+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.389274516+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:54.389274516+00:00 stderr F + }, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:54.389274516+00:00 stderr F }, 2025-08-13T20:02:54.389274516+00:00 stderr F Version: "", 2025-08-13T20:02:54.389274516+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.389274516+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.389274516+00:00 stderr F } 2025-08-13T20:02:54.394194976+00:00 stderr F E0813 20:02:54.394097 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.590049464+00:00 stderr F E0813 20:02:54.589960 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.592184545+00:00 stderr F I0813 20:02:54.592117 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.592184545+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.592184545+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.592184545+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:54.592184545+00:00 stderr F - { 2025-08-13T20:02:54.592184545+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:54.592184545+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.592184545+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:54.592184545+00:00 stderr F - }, 2025-08-13T20:02:54.592184545+00:00 stderr F + { 2025-08-13T20:02:54.592184545+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:54.592184545+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.592184545+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.590033023 +0000 UTC m=+93.795132307", 2025-08-13T20:02:54.592184545+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.592184545+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:54.592184545+00:00 stderr F + }, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:54.592184545+00:00 stderr F }, 2025-08-13T20:02:54.592184545+00:00 stderr F Version: "", 2025-08-13T20:02:54.592184545+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.592184545+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.592184545+00:00 stderr F } 2025-08-13T20:02:54.593226934+00:00 stderr F E0813 20:02:54.593183 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.789140294+00:00 stderr F E0813 20:02:54.789091 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.791350607+00:00 stderr F I0813 20:02:54.791303 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.791350607+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.791350607+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.791350607+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F - { 2025-08-13T20:02:54.791350607+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:54.791350607+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.791350607+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:54.791350607+00:00 stderr F - }, 2025-08-13T20:02:54.791350607+00:00 stderr F + { 2025-08-13T20:02:54.791350607+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:54.791350607+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.791350607+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.789221796 +0000 UTC m=+93.994320850", 2025-08-13T20:02:54.791350607+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.791350607+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:54.791350607+00:00 stderr F + }, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:54.791350607+00:00 stderr F }, 2025-08-13T20:02:54.791350607+00:00 stderr F Version: "", 2025-08-13T20:02:54.791350607+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.791350607+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.791350607+00:00 stderr F } 2025-08-13T20:02:54.793879449+00:00 stderr F E0813 20:02:54.793257 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.987382889+00:00 stderr F E0813 20:02:54.987331 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.990219700+00:00 stderr F I0813 20:02:54.990194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.990219700+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.990219700+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.990219700+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:54.990219700+00:00 stderr F - { 2025-08-13T20:02:54.990219700+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.990219700+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.990219700+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:54.990219700+00:00 stderr F - }, 2025-08-13T20:02:54.990219700+00:00 stderr F + { 2025-08-13T20:02:54.990219700+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.990219700+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.990219700+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.987465371 +0000 UTC m=+94.192564445", 2025-08-13T20:02:54.990219700+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.990219700+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:54.990219700+00:00 stderr F + }, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:54.990219700+00:00 stderr F }, 2025-08-13T20:02:54.990219700+00:00 stderr F Version: "", 2025-08-13T20:02:54.990219700+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.990219700+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.990219700+00:00 stderr F } 2025-08-13T20:02:54.991403564+00:00 stderr F E0813 20:02:54.991380 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.347261235+00:00 stderr F E0813 20:03:02.346578 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.348916012+00:00 stderr F I0813 20:03:02.348863 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:02.348916012+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:02.348916012+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:02.348916012+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F - { 2025-08-13T20:03:02.348916012+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:02.348916012+00:00 stderr F - Status: "False", 2025-08-13T20:03:02.348916012+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:02.348916012+00:00 stderr F - }, 2025-08-13T20:03:02.348916012+00:00 stderr F + { 2025-08-13T20:03:02.348916012+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:02.348916012+00:00 stderr F + Status: "True", 2025-08-13T20:03:02.348916012+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:02.347210973 +0000 UTC m=+101.552310187", 2025-08-13T20:03:02.348916012+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:02.348916012+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:02.348916012+00:00 stderr F + }, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:02.348916012+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:03:02.348916012+00:00 stderr F }, 2025-08-13T20:03:02.348916012+00:00 stderr F Version: "", 2025-08-13T20:03:02.348916012+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:02.348916012+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:02.348916012+00:00 stderr F } 2025-08-13T20:03:02.350906699+00:00 stderr F E0813 20:03:02.350729 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.433210680+00:00 stderr F E0813 20:03:04.433095 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.435199707+00:00 stderr F I0813 20:03:04.435118 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.435199707+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.435199707+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.435199707+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:04.435199707+00:00 stderr F - { 2025-08-13T20:03:04.435199707+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:04.435199707+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.435199707+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:04.435199707+00:00 stderr F - }, 2025-08-13T20:03:04.435199707+00:00 stderr F + { 2025-08-13T20:03:04.435199707+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:04.435199707+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.435199707+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.433156558 +0000 UTC m=+103.638255912", 2025-08-13T20:03:04.435199707+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.435199707+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:04.435199707+00:00 stderr F + }, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:04.435199707+00:00 stderr F }, 2025-08-13T20:03:04.435199707+00:00 stderr F Version: "", 2025-08-13T20:03:04.435199707+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.435199707+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.435199707+00:00 stderr F } 2025-08-13T20:03:04.436316628+00:00 stderr F E0813 20:03:04.436239 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.637671522+00:00 stderr F E0813 20:03:04.637543 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.640534914+00:00 stderr F I0813 20:03:04.640422 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.640534914+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.640534914+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.640534914+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F - { 2025-08-13T20:03:04.640534914+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:04.640534914+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.640534914+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:04.640534914+00:00 stderr F - }, 2025-08-13T20:03:04.640534914+00:00 stderr F + { 2025-08-13T20:03:04.640534914+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:04.640534914+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.640534914+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.637684293 +0000 UTC m=+103.842783657", 2025-08-13T20:03:04.640534914+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.640534914+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:04.640534914+00:00 stderr F + }, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:04.640534914+00:00 stderr F }, 2025-08-13T20:03:04.640534914+00:00 stderr F Version: "", 2025-08-13T20:03:04.640534914+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.640534914+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.640534914+00:00 stderr F } 2025-08-13T20:03:04.642434758+00:00 stderr F E0813 20:03:04.642398 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.838517592+00:00 stderr F E0813 20:03:04.838410 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.840338214+00:00 stderr F I0813 20:03:04.840273 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.840338214+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.840338214+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.840338214+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:04.840338214+00:00 stderr F - { 2025-08-13T20:03:04.840338214+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:04.840338214+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.840338214+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:04.840338214+00:00 stderr F - }, 2025-08-13T20:03:04.840338214+00:00 stderr F + { 2025-08-13T20:03:04.840338214+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:04.840338214+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.840338214+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.838476711 +0000 UTC m=+104.043575975", 2025-08-13T20:03:04.840338214+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.840338214+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:04.840338214+00:00 stderr F + }, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:03:04.840338214+00:00 stderr F }, 2025-08-13T20:03:04.840338214+00:00 stderr F Version: "", 2025-08-13T20:03:04.840338214+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.840338214+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.840338214+00:00 stderr F } 2025-08-13T20:03:04.841729014+00:00 stderr F E0813 20:03:04.841647 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.036209562+00:00 stderr F E0813 20:03:05.036004 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.041749930+00:00 stderr F I0813 20:03:05.041664 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:05.041749930+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:05.041749930+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:05.041749930+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F - { 2025-08-13T20:03:05.041749930+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:05.041749930+00:00 stderr F - Status: "False", 2025-08-13T20:03:05.041749930+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:05.041749930+00:00 stderr F - }, 2025-08-13T20:03:05.041749930+00:00 stderr F + { 2025-08-13T20:03:05.041749930+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:05.041749930+00:00 stderr F + Status: "True", 2025-08-13T20:03:05.041749930+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:05.036119899 +0000 UTC m=+104.241219113", 2025-08-13T20:03:05.041749930+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:05.041749930+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:05.041749930+00:00 stderr F + }, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:05.041749930+00:00 stderr F }, 2025-08-13T20:03:05.041749930+00:00 stderr F Version: "", 2025-08-13T20:03:05.041749930+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:05.041749930+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:05.041749930+00:00 stderr F } 2025-08-13T20:03:05.042971324+00:00 stderr F E0813 20:03:05.042921 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.234516949+00:00 stderr F E0813 20:03:05.234399 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.237211016+00:00 stderr F I0813 20:03:05.237104 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:05.237211016+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:05.237211016+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:05.237211016+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:05.237211016+00:00 stderr F - { 2025-08-13T20:03:05.237211016+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:05.237211016+00:00 stderr F - Status: "False", 2025-08-13T20:03:05.237211016+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:05.237211016+00:00 stderr F - }, 2025-08-13T20:03:05.237211016+00:00 stderr F + { 2025-08-13T20:03:05.237211016+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:05.237211016+00:00 stderr F + Status: "True", 2025-08-13T20:03:05.237211016+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:05.234450737 +0000 UTC m=+104.439549901", 2025-08-13T20:03:05.237211016+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:05.237211016+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:05.237211016+00:00 stderr F + }, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:05.237211016+00:00 stderr F }, 2025-08-13T20:03:05.237211016+00:00 stderr F Version: "", 2025-08-13T20:03:05.237211016+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:05.237211016+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:05.237211016+00:00 stderr F } 2025-08-13T20:03:05.239046088+00:00 stderr F E0813 20:03:05.238768 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.836279379+00:00 stderr F E0813 20:03:22.835474 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.839112150+00:00 stderr F I0813 20:03:22.839011 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:22.839112150+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:22.839112150+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:22.839112150+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F - { 2025-08-13T20:03:22.839112150+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:22.839112150+00:00 stderr F - Status: "False", 2025-08-13T20:03:22.839112150+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:22.839112150+00:00 stderr F - }, 2025-08-13T20:03:22.839112150+00:00 stderr F + { 2025-08-13T20:03:22.839112150+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:22.839112150+00:00 stderr F + Status: "True", 2025-08-13T20:03:22.839112150+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:22.83629378 +0000 UTC m=+122.041393044", 2025-08-13T20:03:22.839112150+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:22.839112150+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:22.839112150+00:00 stderr F + }, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:22.839112150+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:03:22.839112150+00:00 stderr F }, 2025-08-13T20:03:22.839112150+00:00 stderr F Version: "", 2025-08-13T20:03:22.839112150+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:22.839112150+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:22.839112150+00:00 stderr F } 2025-08-13T20:03:22.841314113+00:00 stderr F E0813 20:03:22.841211 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.921173105+00:00 stderr F E0813 20:03:24.921029 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.923902473+00:00 stderr F I0813 20:03:24.923866 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:24.923902473+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:24.923902473+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:24.923902473+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:24.923902473+00:00 stderr F - { 2025-08-13T20:03:24.923902473+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:24.923902473+00:00 stderr F - Status: "False", 2025-08-13T20:03:24.923902473+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:24.923902473+00:00 stderr F - }, 2025-08-13T20:03:24.923902473+00:00 stderr F + { 2025-08-13T20:03:24.923902473+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:24.923902473+00:00 stderr F + Status: "True", 2025-08-13T20:03:24.923902473+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:24.921269628 +0000 UTC m=+124.126368962", 2025-08-13T20:03:24.923902473+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:24.923902473+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:24.923902473+00:00 stderr F + }, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:24.923902473+00:00 stderr F }, 2025-08-13T20:03:24.923902473+00:00 stderr F Version: "", 2025-08-13T20:03:24.923902473+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:24.923902473+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:24.923902473+00:00 stderr F } 2025-08-13T20:03:24.925448307+00:00 stderr F E0813 20:03:24.925394 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.126030619+00:00 stderr F E0813 20:03:25.125969 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.129151838+00:00 stderr F I0813 20:03:25.129119 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.129151838+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.129151838+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.129151838+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F - { 2025-08-13T20:03:25.129151838+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:25.129151838+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.129151838+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:25.129151838+00:00 stderr F - }, 2025-08-13T20:03:25.129151838+00:00 stderr F + { 2025-08-13T20:03:25.129151838+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:25.129151838+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.129151838+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.126180074 +0000 UTC m=+124.331279348", 2025-08-13T20:03:25.129151838+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.129151838+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:25.129151838+00:00 stderr F + }, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:25.129151838+00:00 stderr F }, 2025-08-13T20:03:25.129151838+00:00 stderr F Version: "", 2025-08-13T20:03:25.129151838+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.129151838+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.129151838+00:00 stderr F } 2025-08-13T20:03:25.130926699+00:00 stderr F E0813 20:03:25.130880 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.326744395+00:00 stderr F E0813 20:03:25.326646 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.329996568+00:00 stderr F I0813 20:03:25.329836 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.329996568+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.329996568+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.329996568+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:25.329996568+00:00 stderr F - { 2025-08-13T20:03:25.329996568+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:25.329996568+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.329996568+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:25.329996568+00:00 stderr F - }, 2025-08-13T20:03:25.329996568+00:00 stderr F + { 2025-08-13T20:03:25.329996568+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:25.329996568+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.329996568+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.326953871 +0000 UTC m=+124.532053195", 2025-08-13T20:03:25.329996568+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.329996568+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:25.329996568+00:00 stderr F + }, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:03:25.329996568+00:00 stderr F }, 2025-08-13T20:03:25.329996568+00:00 stderr F Version: "", 2025-08-13T20:03:25.329996568+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.329996568+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.329996568+00:00 stderr F } 2025-08-13T20:03:25.332283563+00:00 stderr F E0813 20:03:25.332211 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.525548166+00:00 stderr F E0813 20:03:25.525415 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.528057577+00:00 stderr F I0813 20:03:25.527970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.528057577+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.528057577+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.528057577+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F - { 2025-08-13T20:03:25.528057577+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:25.528057577+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.528057577+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:25.528057577+00:00 stderr F - }, 2025-08-13T20:03:25.528057577+00:00 stderr F + { 2025-08-13T20:03:25.528057577+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:25.528057577+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.528057577+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.525488714 +0000 UTC m=+124.730587908", 2025-08-13T20:03:25.528057577+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.528057577+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:25.528057577+00:00 stderr F + }, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:25.528057577+00:00 stderr F }, 2025-08-13T20:03:25.528057577+00:00 stderr F Version: "", 2025-08-13T20:03:25.528057577+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.528057577+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.528057577+00:00 stderr F } 2025-08-13T20:03:25.529081187+00:00 stderr F E0813 20:03:25.528986 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.722506365+00:00 stderr F E0813 20:03:25.722404 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.724716808+00:00 stderr F I0813 20:03:25.724619 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.724716808+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.724716808+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.724716808+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:25.724716808+00:00 stderr F - { 2025-08-13T20:03:25.724716808+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:25.724716808+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.724716808+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:25.724716808+00:00 stderr F - }, 2025-08-13T20:03:25.724716808+00:00 stderr F + { 2025-08-13T20:03:25.724716808+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:25.724716808+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.724716808+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.722476354 +0000 UTC m=+124.927575578", 2025-08-13T20:03:25.724716808+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.724716808+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:25.724716808+00:00 stderr F + }, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:25.724716808+00:00 stderr F }, 2025-08-13T20:03:25.724716808+00:00 stderr F Version: "", 2025-08-13T20:03:25.724716808+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.724716808+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.724716808+00:00 stderr F } 2025-08-13T20:03:25.726247031+00:00 stderr F E0813 20:03:25.726156 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.309131285+00:00 stderr F E0813 20:03:35.308076 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.002192285+00:00 stderr F E0813 20:03:37.002088 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.005224771+00:00 stderr F I0813 20:03:37.005161 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.005224771+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.005224771+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.005224771+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F - { 2025-08-13T20:03:37.005224771+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:37.005224771+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.005224771+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:37.005224771+00:00 stderr F - }, 2025-08-13T20:03:37.005224771+00:00 stderr F + { 2025-08-13T20:03:37.005224771+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:37.005224771+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.005224771+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.002153564 +0000 UTC m=+136.207252708", 2025-08-13T20:03:37.005224771+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:37.005224771+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:37.005224771+00:00 stderr F + }, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.005224771+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:03:37.005224771+00:00 stderr F }, 2025-08-13T20:03:37.005224771+00:00 stderr F Version: "", 2025-08-13T20:03:37.005224771+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.005224771+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.005224771+00:00 stderr F } 2025-08-13T20:03:37.006494927+00:00 stderr F E0813 20:03:37.006414 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.115226139+00:00 stderr F E0813 20:03:37.115175 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.117290808+00:00 stderr F I0813 20:03:37.117251 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.117290808+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.117290808+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.117290808+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F - { 2025-08-13T20:03:37.117290808+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:37.117290808+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.117290808+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:37.117290808+00:00 stderr F - }, 2025-08-13T20:03:37.117290808+00:00 stderr F + { 2025-08-13T20:03:37.117290808+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:37.117290808+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.117290808+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.115319432 +0000 UTC m=+136.320418566", 2025-08-13T20:03:37.117290808+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.117290808+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:37.117290808+00:00 stderr F + }, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:37.117290808+00:00 stderr F }, 2025-08-13T20:03:37.117290808+00:00 stderr F Version: "", 2025-08-13T20:03:37.117290808+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.117290808+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.117290808+00:00 stderr F } 2025-08-13T20:03:37.118891614+00:00 stderr F E0813 20:03:37.118757 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.119163772+00:00 stderr F E0813 20:03:37.117534 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.120903341+00:00 stderr F I0813 20:03:37.120758 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.120903341+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.120903341+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.120903341+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.120903341+00:00 stderr F - { 2025-08-13T20:03:37.120903341+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.120903341+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.120903341+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:37.120903341+00:00 stderr F - }, 2025-08-13T20:03:37.120903341+00:00 stderr F + { 2025-08-13T20:03:37.120903341+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.120903341+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.120903341+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.118886654 +0000 UTC m=+136.323986058", 2025-08-13T20:03:37.120903341+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.120903341+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:37.120903341+00:00 stderr F + }, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:37.120903341+00:00 stderr F }, 2025-08-13T20:03:37.120903341+00:00 stderr F Version: "", 2025-08-13T20:03:37.120903341+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.120903341+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.120903341+00:00 stderr F } 2025-08-13T20:03:37.121382915+00:00 stderr F I0813 20:03:37.121358 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.121382915+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.121382915+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.121382915+00:00 stderr F ... // 2 identical elements 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:37.121382915+00:00 stderr F - { 2025-08-13T20:03:37.121382915+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:37.121382915+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.121382915+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:37.121382915+00:00 stderr F - }, 2025-08-13T20:03:37.121382915+00:00 stderr F + { 2025-08-13T20:03:37.121382915+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:37.121382915+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.121382915+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.119211223 +0000 UTC m=+136.324310487", 2025-08-13T20:03:37.121382915+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.121382915+00:00 stderr F + 
Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:37.121382915+00:00 stderr F + }, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F ... // 55 identical elements 2025-08-13T20:03:37.121382915+00:00 stderr F }, 2025-08-13T20:03:37.121382915+00:00 stderr F Version: "", 2025-08-13T20:03:37.121382915+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.121382915+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.121382915+00:00 stderr F } 2025-08-13T20:03:37.121892799+00:00 stderr F E0813 20:03:37.121868 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.122761554+00:00 stderr F E0813 20:03:37.122662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.123134605+00:00 stderr F E0813 20:03:37.123101 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.128750475+00:00 stderr F E0813 20:03:37.128724 
1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.131482963+00:00 stderr F I0813 20:03:37.131423 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.131482963+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.131482963+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.131482963+00:00 stderr F ... // 19 identical elements 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F - { 2025-08-13T20:03:37.131482963+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:37.131482963+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.131482963+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:37.131482963+00:00 stderr F - }, 2025-08-13T20:03:37.131482963+00:00 stderr F + { 2025-08-13T20:03:37.131482963+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:37.131482963+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.131482963+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.128898429 +0000 UTC m=+136.333997793", 2025-08-13T20:03:37.131482963+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.131482963+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:37.131482963+00:00 stderr F + }, 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 
2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F ... // 38 identical elements 2025-08-13T20:03:37.131482963+00:00 stderr F }, 2025-08-13T20:03:37.131482963+00:00 stderr F Version: "", 2025-08-13T20:03:37.131482963+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.131482963+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.131482963+00:00 stderr F } 2025-08-13T20:03:37.132311937+00:00 stderr F E0813 20:03:37.132267 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.136277390+00:00 stderr F E0813 20:03:37.136211 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.136968619+00:00 stderr F I0813 20:03:37.136907 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.136968619+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.136968619+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.136968619+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.136968619+00:00 stderr F - { 2025-08-13T20:03:37.136968619+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.136968619+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.136968619+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:37.136968619+00:00 stderr F - }, 2025-08-13T20:03:37.136968619+00:00 stderr F + { 2025-08-13T20:03:37.136968619+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.136968619+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.136968619+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.132306286 +0000 UTC m=+136.337405491", 2025-08-13T20:03:37.136968619+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.136968619+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:37.136968619+00:00 stderr F + }, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:37.136968619+00:00 stderr F }, 2025-08-13T20:03:37.136968619+00:00 stderr F Version: "", 2025-08-13T20:03:37.136968619+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.136968619+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.136968619+00:00 stderr F } 2025-08-13T20:03:37.146010717+00:00 stderr F E0813 20:03:37.145622 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:50.832318355+00:00 stderr F W0813 20:03:50.831668 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:50.832318355+00:00 stderr F E0813 20:03:50.832276 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:03.811403786+00:00 stderr F E0813 20:04:03.807034 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:03.825416906+00:00 stderr F I0813 20:04:03.824582 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:03.825416906+00:00 stderr F 
ObservedGeneration: 1, 2025-08-13T20:04:03.825416906+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:03.825416906+00:00 stderr F ... // 43 identical elements 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F - { 2025-08-13T20:04:03.825416906+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:03.825416906+00:00 stderr F - Status: "False", 2025-08-13T20:04:03.825416906+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:04:03.825416906+00:00 stderr F - }, 2025-08-13T20:04:03.825416906+00:00 stderr F + { 2025-08-13T20:04:03.825416906+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:03.825416906+00:00 stderr F + Status: "True", 2025-08-13T20:04:03.825416906+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:03.807627468 +0000 UTC m=+163.012726792", 2025-08-13T20:04:03.825416906+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:04:03.825416906+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:04:03.825416906+00:00 stderr F + }, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:03.825416906+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:04:03.825416906+00:00 stderr F }, 2025-08-13T20:04:03.825416906+00:00 stderr F Version: "", 2025-08-13T20:04:03.825416906+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:03.825416906+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:03.825416906+00:00 stderr F } 2025-08-13T20:04:03.858934092+00:00 stderr F E0813 20:04:03.853628 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.896446866+00:00 stderr F E0813 20:04:05.893516 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.923424836+00:00 stderr F I0813 20:04:05.923124 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:05.923424836+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:05.923424836+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:05.923424836+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:05.923424836+00:00 stderr F - { 2025-08-13T20:04:05.923424836+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:05.923424836+00:00 stderr F - Status: "False", 2025-08-13T20:04:05.923424836+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:05.923424836+00:00 stderr F - }, 2025-08-13T20:04:05.923424836+00:00 stderr F + { 2025-08-13T20:04:05.923424836+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:05.923424836+00:00 stderr F + Status: "True", 2025-08-13T20:04:05.923424836+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:05.894241363 +0000 UTC m=+165.099340748", 2025-08-13T20:04:05.923424836+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:05.923424836+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:04:05.923424836+00:00 stderr F + }, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:04:05.923424836+00:00 stderr F }, 2025-08-13T20:04:05.923424836+00:00 stderr F Version: "", 2025-08-13T20:04:05.923424836+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:05.923424836+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:05.923424836+00:00 stderr F } 2025-08-13T20:04:05.937612201+00:00 stderr F E0813 20:04:05.936704 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.095120444+00:00 stderr F E0813 20:04:06.095053 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.098044487+00:00 stderr F I0813 20:04:06.098014 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.098044487+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.098044487+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.098044487+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F - { 2025-08-13T20:04:06.098044487+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:06.098044487+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.098044487+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:06.098044487+00:00 stderr F - }, 2025-08-13T20:04:06.098044487+00:00 stderr F + { 2025-08-13T20:04:06.098044487+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:06.098044487+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.098044487+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.095211036 +0000 UTC m=+165.300310130", 2025-08-13T20:04:06.098044487+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.098044487+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:04:06.098044487+00:00 stderr F + }, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:04:06.098044487+00:00 stderr F }, 2025-08-13T20:04:06.098044487+00:00 stderr F Version: "", 2025-08-13T20:04:06.098044487+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.098044487+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.098044487+00:00 stderr F } 2025-08-13T20:04:06.102716631+00:00 stderr F E0813 20:04:06.100421 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.296891860+00:00 stderr F E0813 20:04:06.296625 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.300179564+00:00 stderr F I0813 20:04:06.300020 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.300179564+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.300179564+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.300179564+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:04:06.300179564+00:00 stderr F - { 2025-08-13T20:04:06.300179564+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:06.300179564+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.300179564+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:04:06.300179564+00:00 stderr F - }, 2025-08-13T20:04:06.300179564+00:00 stderr F + { 2025-08-13T20:04:06.300179564+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:06.300179564+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.300179564+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.296729665 +0000 UTC m=+165.501828979", 2025-08-13T20:04:06.300179564+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.300179564+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:04:06.300179564+00:00 stderr F + }, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:04:06.300179564+00:00 stderr F }, 2025-08-13T20:04:06.300179564+00:00 stderr F Version: "", 2025-08-13T20:04:06.300179564+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.300179564+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.300179564+00:00 stderr F } 2025-08-13T20:04:06.301719637+00:00 stderr F E0813 20:04:06.301617 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.491970365+00:00 stderr F E0813 20:04:06.491887 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.494330812+00:00 stderr F I0813 20:04:06.494219 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.494330812+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.494330812+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.494330812+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F - { 2025-08-13T20:04:06.494330812+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:06.494330812+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.494330812+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:06.494330812+00:00 stderr F - }, 2025-08-13T20:04:06.494330812+00:00 stderr F + { 2025-08-13T20:04:06.494330812+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:06.494330812+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.494330812+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.491934914 +0000 UTC m=+165.697034178", 2025-08-13T20:04:06.494330812+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.494330812+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:04:06.494330812+00:00 stderr F + }, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:04:06.494330812+00:00 stderr F }, 2025-08-13T20:04:06.494330812+00:00 stderr F Version: "", 2025-08-13T20:04:06.494330812+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.494330812+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.494330812+00:00 stderr F } 2025-08-13T20:04:06.495895007+00:00 stderr F E0813 20:04:06.495730 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.689370306+00:00 stderr F E0813 20:04:06.689248 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.691285151+00:00 stderr F I0813 20:04:06.691211 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.691285151+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.691285151+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.691285151+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:06.691285151+00:00 stderr F - { 2025-08-13T20:04:06.691285151+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:06.691285151+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.691285151+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:06.691285151+00:00 stderr F - }, 2025-08-13T20:04:06.691285151+00:00 stderr F + { 2025-08-13T20:04:06.691285151+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:06.691285151+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.691285151+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.689319605 +0000 UTC m=+165.894418939", 2025-08-13T20:04:06.691285151+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.691285151+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:04:06.691285151+00:00 stderr F + }, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:04:06.691285151+00:00 stderr F }, 2025-08-13T20:04:06.691285151+00:00 stderr F Version: "", 2025-08-13T20:04:06.691285151+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.691285151+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.691285151+00:00 stderr F } 2025-08-13T20:04:06.692658550+00:00 stderr F E0813 20:04:06.692545 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:35.324389024+00:00 stderr F E0813 20:04:35.323337 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.015702996+00:00 stderr F E0813 20:04:37.015359 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.032889778+00:00 stderr F I0813 20:04:37.032444 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.032889778+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.032889778+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.032889778+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F - { 2025-08-13T20:04:37.032889778+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:37.032889778+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.032889778+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:04:37.032889778+00:00 stderr F - }, 2025-08-13T20:04:37.032889778+00:00 stderr F + { 2025-08-13T20:04:37.032889778+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:37.032889778+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.032889778+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.015561332 +0000 UTC m=+196.220660786", 2025-08-13T20:04:37.032889778+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:04:37.032889778+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:04:37.032889778+00:00 stderr F + }, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:37.032889778+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:04:37.032889778+00:00 stderr F }, 2025-08-13T20:04:37.032889778+00:00 stderr F Version: "", 2025-08-13T20:04:37.032889778+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.032889778+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.032889778+00:00 stderr F } 2025-08-13T20:04:37.040484775+00:00 stderr F E0813 20:04:37.040421 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.127144927+00:00 stderr F E0813 20:04:37.126951 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.130468242+00:00 stderr F I0813 20:04:37.130394 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.130468242+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.130468242+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.130468242+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:37.130468242+00:00 stderr F - { 2025-08-13T20:04:37.130468242+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:37.130468242+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.130468242+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:37.130468242+00:00 stderr F - }, 2025-08-13T20:04:37.130468242+00:00 stderr F + { 2025-08-13T20:04:37.130468242+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:37.130468242+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.130468242+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.127039864 +0000 UTC m=+196.332139128", 2025-08-13T20:04:37.130468242+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.130468242+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:04:37.130468242+00:00 stderr F + }, 2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:37.130468242+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:04:37.130468242+00:00 stderr F }, 2025-08-13T20:04:37.130468242+00:00 stderr F Version: "", 2025-08-13T20:04:37.130468242+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.130468242+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.130468242+00:00 stderr F } 2025-08-13T20:04:37.141613431+00:00 stderr F E0813 20:04:37.141491 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.141957491+00:00 stderr F E0813 20:04:37.141922 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.142348962+00:00 stderr F E0813 20:04:37.142323 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.142905478+00:00 stderr F E0813 20:04:37.142812 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.145037209+00:00 stderr F I0813 20:04:37.144970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.145037209+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.145037209+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.145037209+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:37.145037209+00:00 stderr F - { 2025-08-13T20:04:37.145037209+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:37.145037209+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.145037209+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:37.145037209+00:00 stderr F - }, 2025-08-13T20:04:37.145037209+00:00 stderr F + { 2025-08-13T20:04:37.145037209+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:37.145037209+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.145037209+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.142403284 +0000 UTC m=+196.347502578", 2025-08-13T20:04:37.145037209+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.145037209+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:04:37.145037209+00:00 stderr F + }, 2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:37.145037209+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:04:37.145037209+00:00 stderr F }, 2025-08-13T20:04:37.145037209+00:00 stderr F Version: "", 2025-08-13T20:04:37.145037209+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.145037209+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.145037209+00:00 stderr F } 2025-08-13T20:04:37.145271676+00:00 stderr F I0813 20:04:37.145211 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.145271676+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.145271676+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.145271676+00:00 stderr F ... // 2 identical elements 2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:04:37.145271676+00:00 stderr F - { 2025-08-13T20:04:37.145271676+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:37.145271676+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.145271676+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:04:37.145271676+00:00 stderr F - }, 2025-08-13T20:04:37.145271676+00:00 stderr F + { 2025-08-13T20:04:37.145271676+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:37.145271676+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.145271676+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.142885227 +0000 UTC m=+196.347984652", 2025-08-13T20:04:37.145271676+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.145271676+00:00 stderr F + 
Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:04:37.145271676+00:00 stderr F + }, 2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:04:37.145271676+00:00 stderr F ... // 55 identical elements 2025-08-13T20:04:37.145271676+00:00 stderr F }, 2025-08-13T20:04:37.145271676+00:00 stderr F Version: "", 2025-08-13T20:04:37.145271676+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.145271676+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.145271676+00:00 stderr F } 2025-08-13T20:04:37.145656917+00:00 stderr F I0813 20:04:37.145549 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.145656917+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.145656917+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.145656917+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:37.145656917+00:00 stderr F - { 2025-08-13T20:04:37.145656917+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:37.145656917+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.145656917+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:37.145656917+00:00 stderr F - }, 2025-08-13T20:04:37.145656917+00:00 stderr F + { 2025-08-13T20:04:37.145656917+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:37.145656917+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.145656917+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.141552419 +0000 UTC m=+196.346651603", 2025-08-13T20:04:37.145656917+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.145656917+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:04:37.145656917+00:00 stderr F + }, 2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:37.145656917+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:04:37.145656917+00:00 stderr F }, 2025-08-13T20:04:37.145656917+00:00 stderr F Version: "", 2025-08-13T20:04:37.145656917+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.145656917+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.145656917+00:00 stderr F } 2025-08-13T20:04:37.148239511+00:00 stderr F E0813 20:04:37.148127 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.148386505+00:00 stderr F E0813 20:04:37.148301 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.174762180+00:00 stderr F E0813 20:04:37.174421 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.174762180+00:00 stderr F E0813 20:04:37.174495 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.222108896+00:00 stderr F I0813 20:04:37.219913 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.222108896+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.222108896+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.222108896+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:37.222108896+00:00 stderr F - { 2025-08-13T20:04:37.222108896+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:37.222108896+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.222108896+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:37.222108896+00:00 stderr F - }, 2025-08-13T20:04:37.222108896+00:00 stderr F + { 2025-08-13T20:04:37.222108896+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:37.222108896+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.222108896+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.174489833 +0000 UTC m=+196.379589187", 2025-08-13T20:04:37.222108896+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.222108896+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:04:37.222108896+00:00 stderr F + }, 2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:37.222108896+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:04:37.222108896+00:00 stderr F }, 2025-08-13T20:04:37.222108896+00:00 stderr F Version: "", 2025-08-13T20:04:37.222108896+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.222108896+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.222108896+00:00 stderr F } 2025-08-13T20:04:37.240222785+00:00 stderr F E0813 20:04:37.239626 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:40.159436499+00:00 stderr F W0813 20:04:40.159270 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:40.159436499+00:00 stderr F E0813 20:04:40.159417 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.939682414+00:00 stderr F W0813 20:05:15.939006 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.939682414+00:00 stderr F E0813 20:05:15.939606 1 reflector.go:147] 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:58.894029135+00:00 stderr F I0813 20:05:58.893245 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:00.833503405+00:00 stderr F W0813 20:06:00.833311 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:00.833503405+00:00 stderr F E0813 20:06:00.833411 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:07.070995261+00:00 stderr F I0813 20:06:07.070544 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:09.506747891+00:00 stderr F I0813 20:06:09.505328 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:06:10.008389275+00:00 stderr F I0813 20:06:10.008146 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:11.858828575+00:00 stderr F I0813 20:06:11.858679 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.574120261+00:00 stderr F I0813 20:06:14.571912 1 reflector.go:351] Caches populated for *v1.FeatureGate from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:16.430994894+00:00 stderr F I0813 20:06:16.428932 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:06:16.503308295+00:00 stderr F I0813 20:06:16.502856 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.745989123+00:00 stderr F I0813 20:06:19.745756 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:21.584488059+00:00 stderr F I0813 20:06:21.583599 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:24.265860302+00:00 stderr F I0813 20:06:24.265660 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:27.103264534+00:00 stderr F I0813 20:06:27.103083 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:30.102166851+00:00 stderr F I0813 20:06:30.101372 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:30.696617163+00:00 stderr F I0813 20:06:30.692061 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:32.583329881+00:00 stderr F I0813 20:06:32.577091 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:34.743077822+00:00 stderr F I0813 20:06:34.741619 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:37.031253456+00:00 stderr F I0813 20:06:37.029264 1 
reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:38.867945805+00:00 stderr F I0813 20:06:38.867053 1 reflector.go:351] Caches populated for *v1.ConsoleCLIDownload from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:06:39.042915812+00:00 stderr F I0813 20:06:39.041057 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:39.826960871+00:00 stderr F I0813 20:06:39.826296 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:40.339728663+00:00 stderr F I0813 20:06:40.339674 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:44.448014411+00:00 stderr F I0813 20:06:44.446671 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:46.224728850+00:00 stderr F W0813 20:06:46.224243 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:46.224728850+00:00 stderr F E0813 20:06:46.224304 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:48.474750951+00:00 stderr F I0813 20:06:48.471933 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:48.536239163+00:00 stderr F I0813 20:06:48.536187 1 reflector.go:351] Caches populated for *v1.Console from 
github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:50.189428322+00:00 stderr F I0813 20:06:50.188710 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:51.116409329+00:00 stderr F I0813 20:06:51.116171 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:55.690126201+00:00 stderr F I0813 20:06:55.687737 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:57.820930254+00:00 stderr F I0813 20:06:57.814209 1 reflector.go:351] Caches populated for operators.coreos.com/v1, Resource=olmconfigs from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:07:04.256997701+00:00 stderr F I0813 20:07:04.256271 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:05.194607533+00:00 stderr F I0813 20:07:05.193920 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:29.199852862+00:00 stderr F W0813 20:07:29.198739 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:29.199852862+00:00 stderr F E0813 20:07:29.199674 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:08:11.536021817+00:00 stderr F I0813 20:08:11.534969 1 reflector.go:351] Caches populated for *v1.Route from 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:08:11.599255470+00:00 stderr F I0813 20:08:11.599129 1 base_controller.go:73] Caches are synced for HealthCheckController 2025-08-13T20:08:11.599615240+00:00 stderr F I0813 20:08:11.599512 1 base_controller.go:73] Caches are synced for ConsoleRouteController 2025-08-13T20:08:11.599682932+00:00 stderr F I0813 20:08:11.599608 1 base_controller.go:73] Caches are synced for DownloadsRouteController 2025-08-13T20:08:11.599682932+00:00 stderr F I0813 20:08:11.599612 1 base_controller.go:73] Caches are synced for ConsoleOperator 2025-08-13T20:08:11.600076293+00:00 stderr F I0813 20:08:11.599243 1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ... 2025-08-13T20:08:11.600330041+00:00 stderr F I0813 20:08:11.599912 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:08:11.600393192+00:00 stderr F I0813 20:08:11.600358 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2025-08-13T20:08:11.601047241+00:00 stderr F I0813 20:08:11.599595 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ... 2025-08-13T20:08:11.601182305+00:00 stderr F I0813 20:08:11.599678 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ... 2025-08-13T20:08:11.601873235+00:00 stderr F I0813 20:08:11.599678 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ... 2025-08-13T20:08:11.609975107+00:00 stderr F I0813 20:08:11.609743 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController 2025-08-13T20:08:11.609975107+00:00 stderr F I0813 20:08:11.609956 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ... 
2025-08-13T20:08:11.611633065+00:00 stderr F I0813 20:08:11.611351 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling 2025-08-13T20:08:11.647366239+00:00 stderr F I0813 20:08:11.647226 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.647366239+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.647366239+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.647366239+00:00 stderr F ... // 7 identical elements 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F - { 2025-08-13T20:08:11.647366239+00:00 stderr F - Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:08:11.647366239+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.647366239+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:07 +0000 UTC", 2025-08-13T20:08:11.647366239+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.647366239+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:08:11.647366239+00:00 stderr F - }, 2025-08-13T20:08:11.647366239+00:00 stderr F + { 2025-08-13T20:08:11.647366239+00:00 stderr F + Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:08:11.647366239+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.647366239+00:00 stderr F + 
LastTransitionTime: s"2025-08-13 20:08:11.642454678 +0000 UTC m=+410.847553882", 2025-08-13T20:08:11.647366239+00:00 stderr F + }, 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F - { 2025-08-13T20:08:11.647366239+00:00 stderr F - Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.647366239+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.647366239+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:07 +0000 UTC", 2025-08-13T20:08:11.647366239+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.647366239+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:08:11.647366239+00:00 stderr F - }, 2025-08-13T20:08:11.647366239+00:00 stderr F + { 2025-08-13T20:08:11.647366239+00:00 stderr F + Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.647366239+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.647366239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.642456158 +0000 UTC m=+410.847555162", 2025-08-13T20:08:11.647366239+00:00 stderr F + }, 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F ... 
// 48 identical elements 2025-08-13T20:08:11.647366239+00:00 stderr F }, 2025-08-13T20:08:11.647366239+00:00 stderr F Version: "", 2025-08-13T20:08:11.647366239+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:08:11.647366239+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.647366239+00:00 stderr F } 2025-08-13T20:08:11.647426461+00:00 stderr F I0813 20:08:11.647372 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.647426461+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.647426461+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.647426461+00:00 stderr F ... // 45 identical elements 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F - { 2025-08-13T20:08:11.647426461+00:00 stderr F - Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.647426461+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.647426461+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.647426461+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.647426461+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.647426461+00:00 stderr F - }, 2025-08-13T20:08:11.647426461+00:00 stderr F + { 2025-08-13T20:08:11.647426461+00:00 stderr F + Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.647426461+00:00 stderr F + Status: "False", 
2025-08-13T20:08:11.647426461+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.644182268 +0000 UTC m=+410.849281462", 2025-08-13T20:08:11.647426461+00:00 stderr F + }, 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F - { 2025-08-13T20:08:11.647426461+00:00 stderr F - Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.647426461+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.647426461+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.647426461+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.647426461+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.647426461+00:00 stderr F - }, 2025-08-13T20:08:11.647426461+00:00 stderr F + { 2025-08-13T20:08:11.647426461+00:00 stderr F + Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.647426461+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.647426461+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.644184158 +0000 UTC m=+410.849283172", 2025-08-13T20:08:11.647426461+00:00 stderr F + }, 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:11.647426461+00:00 stderr F }, 2025-08-13T20:08:11.647426461+00:00 stderr F Version: "", 2025-08-13T20:08:11.647426461+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:08:11.647426461+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.647426461+00:00 stderr F } 2025-08-13T20:08:11.684401991+00:00 stderr F I0813 20:08:11.684097 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.684401991+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.684401991+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.684401991+00:00 stderr F ... // 13 identical elements 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F - { 2025-08-13T20:08:11.684401991+00:00 stderr F - Type: "SyncLoopRefreshProgressing", 2025-08-13T20:08:11.684401991+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.684401991+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.684401991+00:00 stderr F - Reason: "InProgress", 2025-08-13T20:08:11.684401991+00:00 stderr F - Message: "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:08:11.684401991+00:00 stderr F - }, 2025-08-13T20:08:11.684401991+00:00 stderr F + { 2025-08-13T20:08:11.684401991+00:00 stderr F + Type: "SyncLoopRefreshProgressing", 2025-08-13T20:08:11.684401991+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.684401991+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.681641202 +0000 UTC 
m=+410.886740456", 2025-08-13T20:08:11.684401991+00:00 stderr F + }, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F ... // 6 identical elements 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "ServiceCASyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "TrustedCASyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F - { 2025-08-13T20:08:11.684401991+00:00 stderr F - Type: "DeploymentAvailable", 2025-08-13T20:08:11.684401991+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.684401991+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.684401991+00:00 stderr F - Reason: "InsufficientReplicas", 2025-08-13T20:08:11.684401991+00:00 stderr F - Message: "0 replicas available for console deployment", 2025-08-13T20:08:11.684401991+00:00 stderr F - }, 2025-08-13T20:08:11.684401991+00:00 stderr F + { 2025-08-13T20:08:11.684401991+00:00 stderr F + Type: "DeploymentAvailable", 2025-08-13T20:08:11.684401991+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.684401991+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.681643252 +0000 UTC m=+410.886742386", 2025-08-13T20:08:11.684401991+00:00 stderr F + }, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "TrustedCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthServingCertValidationProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 
+0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F ... // 33 identical elements 2025-08-13T20:08:11.684401991+00:00 stderr F }, 2025-08-13T20:08:11.684401991+00:00 stderr F Version: "", 2025-08-13T20:08:11.684401991+00:00 stderr F - ReadyReplicas: 0, 2025-08-13T20:08:11.684401991+00:00 stderr F + ReadyReplicas: 1, 2025-08-13T20:08:11.684401991+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.684401991+00:00 stderr F } 2025-08-13T20:08:11.726243231+00:00 stderr F I0813 20:08:11.724414 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.738741229+00:00 stderr F I0813 20:08:11.737607 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.738741229+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.738741229+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.738741229+00:00 stderr F ... // 45 identical elements 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F - { 2025-08-13T20:08:11.738741229+00:00 stderr F - Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.738741229+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.738741229+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.738741229+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.738741229+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.738741229+00:00 stderr F - }, 2025-08-13T20:08:11.738741229+00:00 stderr F + { 2025-08-13T20:08:11.738741229+00:00 stderr F + Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.738741229+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.738741229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.705518836 +0000 UTC m=+410.910617990", 2025-08-13T20:08:11.738741229+00:00 stderr F + }, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleCustomRouteSyncProgressing", Status: 
"False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F - { 2025-08-13T20:08:11.738741229+00:00 stderr F - Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.738741229+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.738741229+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.738741229+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.738741229+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.738741229+00:00 stderr F - }, 2025-08-13T20:08:11.738741229+00:00 stderr F + { 2025-08-13T20:08:11.738741229+00:00 stderr F + Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.738741229+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.738741229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.705520676 +0000 UTC m=+410.910619680", 2025-08-13T20:08:11.738741229+00:00 stderr F + }, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:11.738741229+00:00 stderr F }, 2025-08-13T20:08:11.738741229+00:00 stderr F Version: "", 2025-08-13T20:08:11.738741229+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:11.738741229+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.738741229+00:00 stderr F } 2025-08-13T20:08:11.765700862+00:00 stderr F I0813 20:08:11.764955 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" to "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request 
(delete routes.route.openshift.io console-custom)" 2025-08-13T20:08:11.787871757+00:00 stderr F I0813 20:08:11.787363 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.798038939+00:00 stderr F I0813 20:08:11.797846 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.798038939+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.798038939+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:08:11.798038939+00:00 stderr F - { 
2025-08-13T20:08:11.798038939+00:00 stderr F - Type: "RouteHealthDegraded", 2025-08-13T20:08:11.798038939+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.798038939+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.798038939+00:00 stderr F - Reason: "StatusError", 2025-08-13T20:08:11.798038939+00:00 stderr F - Message: "route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'", 2025-08-13T20:08:11.798038939+00:00 stderr F - }, 2025-08-13T20:08:11.798038939+00:00 stderr F + { 2025-08-13T20:08:11.798038939+00:00 stderr F + Type: "RouteHealthDegraded", 2025-08-13T20:08:11.798038939+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.798038939+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.795014552 +0000 UTC m=+411.000113566", 2025-08-13T20:08:11.798038939+00:00 stderr F + }, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:11.798038939+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F - { 2025-08-13T20:08:11.798038939+00:00 stderr F - Type: "RouteHealthAvailable", 2025-08-13T20:08:11.798038939+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.798038939+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.798038939+00:00 stderr F - Reason: "StatusError", 2025-08-13T20:08:11.798038939+00:00 stderr F - Message: "route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'", 2025-08-13T20:08:11.798038939+00:00 stderr F - }, 2025-08-13T20:08:11.798038939+00:00 stderr F + { 2025-08-13T20:08:11.798038939+00:00 stderr F + Type: "RouteHealthAvailable", 2025-08-13T20:08:11.798038939+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.798038939+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.795013432 +0000 UTC m=+411.000112516", 2025-08-13T20:08:11.798038939+00:00 stderr F + }, 2025-08-13T20:08:11.798038939+00:00 stderr F }, 2025-08-13T20:08:11.798038939+00:00 stderr F Version: "", 2025-08-13T20:08:11.798038939+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:11.798038939+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.798038939+00:00 stderr F } 2025-08-13T20:08:11.810129396+00:00 stderr F I0813 20:08:11.810025 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", 
UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" 2025-08-13T20:08:11.817676502+00:00 stderr F I0813 20:08:11.815021 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.829639345+00:00 stderr F E0813 20:08:11.828392 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 
2025-08-13T20:08:12.214932842+00:00 stderr F I0813 20:08:12.214692 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:12.240848715+00:00 stderr F I0813 20:08:12.240681 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well"),Upgradeable changed from False to True ("All is well") 2025-08-13T20:08:35.466638407+00:00 stderr F E0813 20:08:35.465262 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.009647895+00:00 stderr F E0813 20:08:37.009556 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:37.012931140+00:00 stderr F I0813 20:08:37.012550 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.012931140+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.012931140+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.012931140+00:00 stderr F ... // 43 identical elements 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F - { 2025-08-13T20:08:37.012931140+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.012931140+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.012931140+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.012931140+00:00 stderr F - }, 2025-08-13T20:08:37.012931140+00:00 stderr F + { 2025-08-13T20:08:37.012931140+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.012931140+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.012931140+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.009736018 +0000 UTC m=+436.214835452", 2025-08-13T20:08:37.012931140+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.012931140+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.012931140+00:00 stderr F + }, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 
UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:37.012931140+00:00 stderr F }, 2025-08-13T20:08:37.012931140+00:00 stderr F Version: "", 2025-08-13T20:08:37.012931140+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.012931140+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.012931140+00:00 stderr F } 2025-08-13T20:08:37.014415062+00:00 stderr F E0813 20:08:37.014389 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.024056229+00:00 stderr F E0813 20:08:37.024023 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.025952823+00:00 stderr F I0813 20:08:37.025888 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.025952823+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.025952823+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.025952823+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F - { 2025-08-13T20:08:37.025952823+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.025952823+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.025952823+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.025952823+00:00 stderr F - }, 2025-08-13T20:08:37.025952823+00:00 stderr F + { 2025-08-13T20:08:37.025952823+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.025952823+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.025952823+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.02412334 +0000 UTC m=+436.229222504", 2025-08-13T20:08:37.025952823+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.025952823+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.025952823+00:00 stderr F + }, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.025952823+00:00 stderr F }, 2025-08-13T20:08:37.025952823+00:00 stderr F Version: "", 2025-08-13T20:08:37.025952823+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.025952823+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.025952823+00:00 stderr F } 2025-08-13T20:08:37.027692853+00:00 stderr F E0813 20:08:37.027653 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.039249144+00:00 stderr F E0813 20:08:37.039190 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.041532590+00:00 stderr F I0813 20:08:37.041481 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.041532590+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.041532590+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.041532590+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F - { 2025-08-13T20:08:37.041532590+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.041532590+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.041532590+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.041532590+00:00 stderr F - }, 2025-08-13T20:08:37.041532590+00:00 stderr F + { 2025-08-13T20:08:37.041532590+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.041532590+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.041532590+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.039340387 +0000 UTC m=+436.244439561", 2025-08-13T20:08:37.041532590+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.041532590+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.041532590+00:00 stderr F + }, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.041532590+00:00 stderr F }, 2025-08-13T20:08:37.041532590+00:00 stderr F Version: "", 2025-08-13T20:08:37.041532590+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.041532590+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.041532590+00:00 stderr F } 2025-08-13T20:08:37.043141576+00:00 stderr F E0813 20:08:37.043110 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.068983347+00:00 stderr F E0813 20:08:37.068842 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.071530750+00:00 stderr F I0813 20:08:37.071381 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.071530750+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.071530750+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.071530750+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F - { 2025-08-13T20:08:37.071530750+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.071530750+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.071530750+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.071530750+00:00 stderr F - }, 2025-08-13T20:08:37.071530750+00:00 stderr F + { 2025-08-13T20:08:37.071530750+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.071530750+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.071530750+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.068923635 +0000 UTC m=+436.274022979", 2025-08-13T20:08:37.071530750+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.071530750+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.071530750+00:00 stderr F + }, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.071530750+00:00 stderr F }, 2025-08-13T20:08:37.071530750+00:00 stderr F Version: "", 2025-08-13T20:08:37.071530750+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.071530750+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.071530750+00:00 stderr F } 2025-08-13T20:08:37.074563647+00:00 stderr F E0813 20:08:37.074482 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.118723633+00:00 stderr F E0813 20:08:37.118547 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.120936056+00:00 stderr F I0813 20:08:37.120846 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.120936056+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.120936056+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.120936056+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F - { 2025-08-13T20:08:37.120936056+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.120936056+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.120936056+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.120936056+00:00 stderr F - }, 2025-08-13T20:08:37.120936056+00:00 stderr F + { 2025-08-13T20:08:37.120936056+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.120936056+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.120936056+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.118607149 +0000 UTC m=+436.323706353", 2025-08-13T20:08:37.120936056+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.120936056+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.120936056+00:00 stderr F + }, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.120936056+00:00 stderr F }, 2025-08-13T20:08:37.120936056+00:00 stderr F Version: "", 2025-08-13T20:08:37.120936056+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.120936056+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.120936056+00:00 stderr F } 2025-08-13T20:08:37.122380188+00:00 stderr F E0813 20:08:37.122297 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.126693301+00:00 stderr F E0813 20:08:37.126620 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.128860763+00:00 stderr F E0813 20:08:37.128828 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.129044779+00:00 stderr F I0813 20:08:37.128986 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.129044779+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.129044779+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.129044779+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.129044779+00:00 stderr F - { 2025-08-13T20:08:37.129044779+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.129044779+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.129044779+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.129044779+00:00 stderr F - }, 2025-08-13T20:08:37.129044779+00:00 stderr F + { 2025-08-13T20:08:37.129044779+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.129044779+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.129044779+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.126777714 +0000 UTC m=+436.331918699", 2025-08-13T20:08:37.129044779+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.129044779+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.129044779+00:00 stderr F + }, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:37.129044779+00:00 stderr F }, 2025-08-13T20:08:37.129044779+00:00 stderr F Version: "", 2025-08-13T20:08:37.129044779+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.129044779+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.129044779+00:00 stderr F } 2025-08-13T20:08:37.129133261+00:00 stderr F E0813 20:08:37.129080 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.130269294+00:00 stderr F E0813 20:08:37.130192 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.131451618+00:00 stderr F I0813 20:08:37.131424 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.131451618+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.131451618+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.131451618+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F - { 2025-08-13T20:08:37.131451618+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.131451618+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.131451618+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.131451618+00:00 stderr F - }, 2025-08-13T20:08:37.131451618+00:00 stderr F + { 2025-08-13T20:08:37.131451618+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.131451618+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.131451618+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.128947986 +0000 UTC m=+436.334047360", 2025-08-13T20:08:37.131451618+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.131451618+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.131451618+00:00 stderr F + }, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.131451618+00:00 stderr F }, 2025-08-13T20:08:37.131451618+00:00 stderr F Version: "", 2025-08-13T20:08:37.131451618+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.131451618+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.131451618+00:00 stderr F } 2025-08-13T20:08:37.132122847+00:00 stderr F E0813 20:08:37.131759 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.132520558+00:00 stderr F E0813 20:08:37.132497 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.134950188+00:00 stderr F E0813 20:08:37.134854 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.135505744+00:00 stderr F I0813 20:08:37.135447 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.135505744+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.135505744+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.135505744+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F - { 2025-08-13T20:08:37.135505744+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.135505744+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.135505744+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.135505744+00:00 stderr F - }, 2025-08-13T20:08:37.135505744+00:00 stderr F + { 2025-08-13T20:08:37.135505744+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.135505744+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.135505744+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.131959352 +0000 UTC m=+436.337599672", 2025-08-13T20:08:37.135505744+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.135505744+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.135505744+00:00 stderr F + }, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.135505744+00:00 stderr F }, 2025-08-13T20:08:37.135505744+00:00 stderr F Version: "", 2025-08-13T20:08:37.135505744+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.135505744+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.135505744+00:00 stderr F } 2025-08-13T20:08:37.136976046+00:00 stderr F E0813 20:08:37.136520 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.136976046+00:00 stderr F I0813 20:08:37.136883 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.136976046+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.136976046+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.136976046+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F - { 2025-08-13T20:08:37.136976046+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.136976046+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.136976046+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.136976046+00:00 stderr F - }, 2025-08-13T20:08:37.136976046+00:00 stderr F + { 2025-08-13T20:08:37.136976046+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.136976046+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.136976046+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.129233434 +0000 UTC m=+436.334332668", 2025-08-13T20:08:37.136976046+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.136976046+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.136976046+00:00 stderr F + }, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.136976046+00:00 stderr F }, 2025-08-13T20:08:37.136976046+00:00 stderr F Version: "", 2025-08-13T20:08:37.136976046+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.136976046+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.136976046+00:00 stderr F } 2025-08-13T20:08:37.137052298+00:00 stderr F I0813 20:08:37.137032 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.137052298+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.137052298+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.137052298+00:00 stderr F ... // 10 identical elements 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F - { 2025-08-13T20:08:37.137052298+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.137052298+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.137052298+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.137052298+00:00 stderr F - }, 2025-08-13T20:08:37.137052298+00:00 stderr F + { 2025-08-13T20:08:37.137052298+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.137052298+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.137052298+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.13256414 +0000 UTC m=+436.337663404", 2025-08-13T20:08:37.137052298+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.137052298+00:00 stderr F + Message: `Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:37.137052298+00:00 stderr F + }, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:37.137052298+00:00 stderr F }, 2025-08-13T20:08:37.137052298+00:00 stderr F Version: "", 2025-08-13T20:08:37.137052298+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.137052298+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.137052298+00:00 stderr F } 2025-08-13T20:08:37.138667285+00:00 stderr F E0813 20:08:37.138621 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.138667285+00:00 stderr F E0813 20:08:37.138645 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.138865790+00:00 stderr F E0813 20:08:37.138778 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.140730034+00:00 stderr F I0813 20:08:37.140654 1 helpers.go:201] Operator status changed: 
&v1.OperatorStatus{ 2025-08-13T20:08:37.140730034+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.140730034+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.140730034+00:00 stderr F ... // 2 identical elements 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.140730034+00:00 stderr F - { 2025-08-13T20:08:37.140730034+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.140730034+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.140730034+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.140730034+00:00 stderr F - }, 2025-08-13T20:08:37.140730034+00:00 stderr F + { 2025-08-13T20:08:37.140730034+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.140730034+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.140730034+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.138652834 +0000 UTC m=+436.343751888", 2025-08-13T20:08:37.140730034+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.140730034+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.140730034+00:00 stderr F + }, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:37.140730034+00:00 stderr F }, 2025-08-13T20:08:37.140730034+00:00 stderr F Version: "", 2025-08-13T20:08:37.140730034+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.140730034+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.140730034+00:00 stderr F } 2025-08-13T20:08:37.141507606+00:00 stderr F E0813 20:08:37.141454 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.145027597+00:00 stderr F E0813 20:08:37.144866 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.147193069+00:00 stderr F I0813 20:08:37.147113 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.147193069+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.147193069+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.147193069+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F - { 2025-08-13T20:08:37.147193069+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.147193069+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.147193069+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.147193069+00:00 stderr F - }, 2025-08-13T20:08:37.147193069+00:00 stderr F + { 2025-08-13T20:08:37.147193069+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.147193069+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.147193069+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.141502156 +0000 UTC m=+436.346601450", 2025-08-13T20:08:37.147193069+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.147193069+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.147193069+00:00 stderr F + }, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.147193069+00:00 stderr F }, 2025-08-13T20:08:37.147193069+00:00 stderr F Version: "", 2025-08-13T20:08:37.147193069+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.147193069+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.147193069+00:00 stderr F } 2025-08-13T20:08:37.147287632+00:00 stderr F E0813 20:08:37.147229 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.147414245+00:00 stderr F E0813 20:08:37.147333 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.149523346+00:00 stderr F I0813 20:08:37.149277 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.149523346+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.149523346+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.149523346+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F - { 2025-08-13T20:08:37.149523346+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149523346+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.149523346+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.149523346+00:00 stderr F - }, 2025-08-13T20:08:37.149523346+00:00 stderr F + { 2025-08-13T20:08:37.149523346+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149523346+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.149523346+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.144932974 +0000 UTC m=+436.350032228", 2025-08-13T20:08:37.149523346+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.149523346+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.149523346+00:00 stderr F + }, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.149523346+00:00 stderr F }, 2025-08-13T20:08:37.149523346+00:00 stderr F Version: "", 2025-08-13T20:08:37.149523346+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.149523346+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.149523346+00:00 stderr F } 2025-08-13T20:08:37.149849525+00:00 stderr F I0813 20:08:37.149707 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.149849525+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.149849525+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.149849525+00:00 stderr F ... // 19 identical elements 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F - { 2025-08-13T20:08:37.149849525+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149849525+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.149849525+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.149849525+00:00 stderr F - }, 2025-08-13T20:08:37.149849525+00:00 stderr F + { 2025-08-13T20:08:37.149849525+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149849525+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.149849525+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.147289752 +0000 UTC m=+436.352388986", 2025-08-13T20:08:37.149849525+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.149849525+00:00 stderr F + Message: `Get 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.149849525+00:00 stderr F + }, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:37.149849525+00:00 stderr F }, 2025-08-13T20:08:37.149849525+00:00 stderr F Version: "", 2025-08-13T20:08:37.149849525+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.149849525+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.149849525+00:00 stderr F } 2025-08-13T20:08:37.150974307+00:00 stderr F I0813 20:08:37.150709 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.150974307+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.150974307+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.150974307+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F - { 2025-08-13T20:08:37.150974307+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.150974307+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.150974307+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.150974307+00:00 stderr F - }, 2025-08-13T20:08:37.150974307+00:00 stderr F + { 2025-08-13T20:08:37.150974307+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.150974307+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.150974307+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.147427256 +0000 UTC m=+436.352526510", 2025-08-13T20:08:37.150974307+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.150974307+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:37.150974307+00:00 stderr F + }, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.150974307+00:00 stderr F }, 2025-08-13T20:08:37.150974307+00:00 stderr F Version: "", 2025-08-13T20:08:37.150974307+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.150974307+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.150974307+00:00 stderr F } 2025-08-13T20:08:37.205003486+00:00 stderr F E0813 20:08:37.204868 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.217414432+00:00 stderr F I0813 20:08:37.214036 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.217414432+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.217414432+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.217414432+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F - { 2025-08-13T20:08:37.217414432+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.217414432+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.217414432+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.217414432+00:00 stderr F - }, 2025-08-13T20:08:37.217414432+00:00 stderr F + { 2025-08-13T20:08:37.217414432+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.217414432+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.217414432+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.204941165 +0000 UTC m=+436.410040379", 2025-08-13T20:08:37.217414432+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.217414432+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.217414432+00:00 stderr F + }, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.217414432+00:00 stderr F }, 2025-08-13T20:08:37.217414432+00:00 stderr F Version: "", 2025-08-13T20:08:37.217414432+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.217414432+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.217414432+00:00 stderr F } 2025-08-13T20:08:37.224473165+00:00 stderr F E0813 20:08:37.224400 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.241217205+00:00 stderr F E0813 20:08:37.241152 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.243916362+00:00 stderr F I0813 20:08:37.243773 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.243916362+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.243916362+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.243916362+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.243916362+00:00 stderr F - { 2025-08-13T20:08:37.243916362+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.243916362+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.243916362+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.243916362+00:00 stderr F - }, 2025-08-13T20:08:37.243916362+00:00 stderr F + { 2025-08-13T20:08:37.243916362+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.243916362+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.243916362+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.241369539 +0000 UTC m=+436.446468853", 2025-08-13T20:08:37.243916362+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.243916362+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.243916362+00:00 stderr F + }, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:37.243916362+00:00 stderr F }, 2025-08-13T20:08:37.243916362+00:00 stderr F Version: "", 2025-08-13T20:08:37.243916362+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.243916362+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.243916362+00:00 stderr F } 2025-08-13T20:08:37.416690046+00:00 stderr F E0813 20:08:37.416629 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.431698516+00:00 stderr F E0813 20:08:37.431585 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.434256229+00:00 stderr F I0813 20:08:37.434128 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.434256229+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.434256229+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.434256229+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F - { 2025-08-13T20:08:37.434256229+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.434256229+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.434256229+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.434256229+00:00 stderr F - }, 2025-08-13T20:08:37.434256229+00:00 stderr F + { 2025-08-13T20:08:37.434256229+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.434256229+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.434256229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.431642004 +0000 UTC m=+436.636741178", 2025-08-13T20:08:37.434256229+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.434256229+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.434256229+00:00 stderr F + }, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.434256229+00:00 stderr F }, 2025-08-13T20:08:37.434256229+00:00 stderr F Version: "", 2025-08-13T20:08:37.434256229+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.434256229+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.434256229+00:00 stderr F } 2025-08-13T20:08:37.615607029+00:00 stderr F E0813 20:08:37.615445 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.629245430+00:00 stderr F E0813 20:08:37.629132 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.632642387+00:00 stderr F I0813 20:08:37.632497 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.632642387+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.632642387+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.632642387+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F - { 2025-08-13T20:08:37.632642387+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.632642387+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.632642387+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.632642387+00:00 stderr F - }, 2025-08-13T20:08:37.632642387+00:00 stderr F + { 2025-08-13T20:08:37.632642387+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.632642387+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.632642387+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.629412355 +0000 UTC m=+436.834511619", 2025-08-13T20:08:37.632642387+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.632642387+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.632642387+00:00 stderr F + }, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.632642387+00:00 stderr F }, 2025-08-13T20:08:37.632642387+00:00 stderr F Version: "", 2025-08-13T20:08:37.632642387+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.632642387+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.632642387+00:00 stderr F } 2025-08-13T20:08:37.817262301+00:00 stderr F E0813 20:08:37.816548 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.832922820+00:00 stderr F E0813 20:08:37.832497 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.838182340+00:00 stderr F I0813 20:08:37.835045 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.838182340+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.838182340+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.838182340+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F - { 2025-08-13T20:08:37.838182340+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.838182340+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.838182340+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.838182340+00:00 stderr F - }, 2025-08-13T20:08:37.838182340+00:00 stderr F + { 2025-08-13T20:08:37.838182340+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.838182340+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.838182340+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.83257012 +0000 UTC m=+437.037669464", 2025-08-13T20:08:37.838182340+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.838182340+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.838182340+00:00 stderr F + }, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.838182340+00:00 stderr F }, 2025-08-13T20:08:37.838182340+00:00 stderr F Version: "", 2025-08-13T20:08:37.838182340+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.838182340+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.838182340+00:00 stderr F } 2025-08-13T20:08:38.017333607+00:00 stderr F E0813 20:08:38.016268 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.028539988+00:00 stderr F E0813 20:08:38.028429 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.030472094+00:00 stderr F I0813 20:08:38.030194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.030472094+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.030472094+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.030472094+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F - { 2025-08-13T20:08:38.030472094+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.030472094+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.030472094+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:38.030472094+00:00 stderr F - }, 2025-08-13T20:08:38.030472094+00:00 stderr F + { 2025-08-13T20:08:38.030472094+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.030472094+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.030472094+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.028485107 +0000 UTC m=+437.233584351", 2025-08-13T20:08:38.030472094+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.030472094+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:38.030472094+00:00 stderr F + }, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:38.030472094+00:00 stderr F }, 2025-08-13T20:08:38.030472094+00:00 stderr F Version: "", 2025-08-13T20:08:38.030472094+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.030472094+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.030472094+00:00 stderr F } 2025-08-13T20:08:38.214811839+00:00 stderr F E0813 20:08:38.214667 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.377653358+00:00 stderr F E0813 20:08:38.377601 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.379842010+00:00 stderr F I0813 20:08:38.379755 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.379842010+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.379842010+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.379842010+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F - { 2025-08-13T20:08:38.379842010+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:38.379842010+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.379842010+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:38.379842010+00:00 stderr F - }, 2025-08-13T20:08:38.379842010+00:00 stderr F + { 2025-08-13T20:08:38.379842010+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:38.379842010+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.379842010+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.37774217 +0000 UTC m=+437.582841404", 2025-08-13T20:08:38.379842010+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:38.379842010+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:38.379842010+00:00 stderr F + }, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:38.379842010+00:00 stderr F }, 2025-08-13T20:08:38.379842010+00:00 stderr F Version: "", 2025-08-13T20:08:38.379842010+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.379842010+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.379842010+00:00 stderr F } 2025-08-13T20:08:38.420703232+00:00 stderr F I0813 20:08:38.420561 1 request.go:697] Waited for 1.16981886s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:38.422954186+00:00 stderr F E0813 20:08:38.422867 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.446282985+00:00 stderr F E0813 20:08:38.446243 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.448131668+00:00 stderr F I0813 20:08:38.448104 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.448131668+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.448131668+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.448131668+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:38.448131668+00:00 stderr F - { 2025-08-13T20:08:38.448131668+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:38.448131668+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.448131668+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:38.448131668+00:00 stderr F - }, 2025-08-13T20:08:38.448131668+00:00 stderr F + { 2025-08-13T20:08:38.448131668+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:38.448131668+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.448131668+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.446349037 +0000 UTC m=+437.651448241", 2025-08-13T20:08:38.448131668+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.448131668+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:38.448131668+00:00 stderr F + }, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:38.448131668+00:00 stderr F }, 2025-08-13T20:08:38.448131668+00:00 stderr F Version: "", 2025-08-13T20:08:38.448131668+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.448131668+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.448131668+00:00 stderr F } 2025-08-13T20:08:38.616531016+00:00 stderr F E0813 20:08:38.616131 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.641005418+00:00 stderr F E0813 20:08:38.640926 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.642727708+00:00 stderr F I0813 20:08:38.642666 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.642727708+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.642727708+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.642727708+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F - { 2025-08-13T20:08:38.642727708+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.642727708+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.642727708+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:38.642727708+00:00 stderr F - }, 2025-08-13T20:08:38.642727708+00:00 stderr F + { 2025-08-13T20:08:38.642727708+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.642727708+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.642727708+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.640990858 +0000 UTC m=+437.846090092", 2025-08-13T20:08:38.642727708+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.642727708+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:38.642727708+00:00 stderr F + }, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:38.642727708+00:00 stderr F }, 2025-08-13T20:08:38.642727708+00:00 stderr F Version: "", 2025-08-13T20:08:38.642727708+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.642727708+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.642727708+00:00 stderr F } 2025-08-13T20:08:38.815355887+00:00 stderr F E0813 20:08:38.815104 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.838994335+00:00 stderr F E0813 20:08:38.838944 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.841204708+00:00 stderr F I0813 20:08:38.841152 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.841204708+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.841204708+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.841204708+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F - { 2025-08-13T20:08:38.841204708+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:38.841204708+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.841204708+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:38.841204708+00:00 stderr F - }, 2025-08-13T20:08:38.841204708+00:00 stderr F + { 2025-08-13T20:08:38.841204708+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:38.841204708+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.841204708+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.839076657 +0000 UTC m=+438.044175821", 2025-08-13T20:08:38.841204708+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.841204708+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:38.841204708+00:00 stderr F + }, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:38.841204708+00:00 stderr F }, 2025-08-13T20:08:38.841204708+00:00 stderr F Version: "", 2025-08-13T20:08:38.841204708+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.841204708+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.841204708+00:00 stderr F } 2025-08-13T20:08:39.020182970+00:00 stderr F E0813 20:08:39.019603 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.043126507+00:00 stderr F E0813 20:08:39.043075 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.044739084+00:00 stderr F I0813 20:08:39.044699 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.044739084+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.044739084+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.044739084+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F - { 2025-08-13T20:08:39.044739084+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:39.044739084+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.044739084+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:39.044739084+00:00 stderr F - }, 2025-08-13T20:08:39.044739084+00:00 stderr F + { 2025-08-13T20:08:39.044739084+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:39.044739084+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.044739084+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.04320704 +0000 UTC m=+438.248306194", 2025-08-13T20:08:39.044739084+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.044739084+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:39.044739084+00:00 stderr F + }, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:39.044739084+00:00 stderr F }, 2025-08-13T20:08:39.044739084+00:00 stderr F Version: "", 2025-08-13T20:08:39.044739084+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.044739084+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.044739084+00:00 stderr F } 2025-08-13T20:08:39.216042325+00:00 stderr F E0813 20:08:39.215998 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.240292330+00:00 stderr F E0813 20:08:39.238950 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.241651919+00:00 stderr F I0813 20:08:39.241606 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.241651919+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.241651919+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.241651919+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:39.241651919+00:00 stderr F - { 2025-08-13T20:08:39.241651919+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:39.241651919+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.241651919+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:39.241651919+00:00 stderr F - }, 2025-08-13T20:08:39.241651919+00:00 stderr F + { 2025-08-13T20:08:39.241651919+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:39.241651919+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.241651919+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.239087206 +0000 UTC m=+438.444186720", 2025-08-13T20:08:39.241651919+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.241651919+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:39.241651919+00:00 stderr F + }, 2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:39.241651919+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:39.241651919+00:00 stderr F }, 2025-08-13T20:08:39.241651919+00:00 stderr F Version: "", 2025-08-13T20:08:39.241651919+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.241651919+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.241651919+00:00 stderr F } 2025-08-13T20:08:39.415781821+00:00 stderr F E0813 20:08:39.415305 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.614970782+00:00 stderr F I0813 20:08:39.613913 1 request.go:697] Waited for 1.165510665s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:39.615019093+00:00 stderr F E0813 20:08:39.614983 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.666494069+00:00 stderr F E0813 20:08:39.666435 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.672193102+00:00 stderr F I0813 20:08:39.671638 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.672193102+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.672193102+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.672193102+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:39.672193102+00:00 stderr F - { 2025-08-13T20:08:39.672193102+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:39.672193102+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.672193102+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:39.672193102+00:00 stderr F - }, 2025-08-13T20:08:39.672193102+00:00 stderr F + { 2025-08-13T20:08:39.672193102+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:39.672193102+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.672193102+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.666922771 +0000 UTC m=+438.872022185", 2025-08-13T20:08:39.672193102+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.672193102+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:39.672193102+00:00 stderr F + }, 2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:39.672193102+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:39.672193102+00:00 stderr F }, 2025-08-13T20:08:39.672193102+00:00 stderr F Version: "", 2025-08-13T20:08:39.672193102+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.672193102+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.672193102+00:00 stderr F } 2025-08-13T20:08:39.738606097+00:00 stderr F E0813 20:08:39.738379 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.740856061+00:00 stderr F I0813 20:08:39.740551 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.740856061+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.740856061+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.740856061+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:39.740856061+00:00 stderr F - { 2025-08-13T20:08:39.740856061+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:39.740856061+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.740856061+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:39.740856061+00:00 stderr F - }, 2025-08-13T20:08:39.740856061+00:00 stderr F + { 2025-08-13T20:08:39.740856061+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:39.740856061+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.740856061+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.738432722 +0000 UTC m=+438.943531886", 2025-08-13T20:08:39.740856061+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:39.740856061+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:39.740856061+00:00 stderr F + }, 2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:39.740856061+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:39.740856061+00:00 stderr F }, 2025-08-13T20:08:39.740856061+00:00 stderr F Version: "", 2025-08-13T20:08:39.740856061+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.740856061+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.740856061+00:00 stderr F } 2025-08-13T20:08:39.815139801+00:00 stderr F E0813 20:08:39.815043 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.861647434+00:00 stderr F E0813 20:08:39.861102 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.863310732+00:00 stderr F I0813 20:08:39.863196 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.863310732+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.863310732+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.863310732+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:39.863310732+00:00 stderr F - { 2025-08-13T20:08:39.863310732+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:39.863310732+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.863310732+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:39.863310732+00:00 stderr F - }, 2025-08-13T20:08:39.863310732+00:00 stderr F + { 2025-08-13T20:08:39.863310732+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:39.863310732+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.863310732+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.861184181 +0000 UTC m=+439.066283455", 2025-08-13T20:08:39.863310732+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.863310732+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:39.863310732+00:00 stderr F + }, 2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:39.863310732+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:39.863310732+00:00 stderr F }, 2025-08-13T20:08:39.863310732+00:00 stderr F Version: "", 2025-08-13T20:08:39.863310732+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.863310732+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.863310732+00:00 stderr F } 2025-08-13T20:08:40.023122134+00:00 stderr F E0813 20:08:40.022978 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.068376981+00:00 stderr F E0813 20:08:40.068301 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.070879393+00:00 stderr F I0813 20:08:40.070814 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:40.070879393+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:40.070879393+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:40.070879393+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:40.070879393+00:00 stderr F - { 2025-08-13T20:08:40.070879393+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:40.070879393+00:00 stderr F - Status: "False", 2025-08-13T20:08:40.070879393+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:40.070879393+00:00 stderr F - }, 2025-08-13T20:08:40.070879393+00:00 stderr F + { 2025-08-13T20:08:40.070879393+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:40.070879393+00:00 stderr F + Status: "True", 2025-08-13T20:08:40.070879393+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.068369611 +0000 UTC m=+439.273468915", 2025-08-13T20:08:40.070879393+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:40.070879393+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:40.070879393+00:00 stderr F + }, 2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:40.070879393+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:40.070879393+00:00 stderr F }, 2025-08-13T20:08:40.070879393+00:00 stderr F Version: "", 2025-08-13T20:08:40.070879393+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:40.070879393+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:40.070879393+00:00 stderr F } 2025-08-13T20:08:40.218648330+00:00 stderr F E0813 20:08:40.218518 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.262499887+00:00 stderr F E0813 20:08:40.262217 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.270760884+00:00 stderr F I0813 20:08:40.267420 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:40.270760884+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:40.270760884+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:40.270760884+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:40.270760884+00:00 stderr F - { 2025-08-13T20:08:40.270760884+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:40.270760884+00:00 stderr F - Status: "False", 2025-08-13T20:08:40.270760884+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:40.270760884+00:00 stderr F - }, 2025-08-13T20:08:40.270760884+00:00 stderr F + { 2025-08-13T20:08:40.270760884+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:40.270760884+00:00 stderr F + Status: "True", 2025-08-13T20:08:40.270760884+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.262340883 +0000 UTC m=+439.467440147", 2025-08-13T20:08:40.270760884+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:40.270760884+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:40.270760884+00:00 stderr F + }, 2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:40.270760884+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:40.270760884+00:00 stderr F }, 2025-08-13T20:08:40.270760884+00:00 stderr F Version: "", 2025-08-13T20:08:40.270760884+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:40.270760884+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:40.270760884+00:00 stderr F } 2025-08-13T20:08:40.416578585+00:00 stderr F E0813 20:08:40.416357 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.477135771+00:00 stderr F E0813 20:08:40.476206 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.519601119+00:00 stderr F I0813 20:08:40.518976 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:40.519601119+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:40.519601119+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:40.519601119+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:40.519601119+00:00 stderr F - { 2025-08-13T20:08:40.519601119+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:40.519601119+00:00 stderr F - Status: "False", 2025-08-13T20:08:40.519601119+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:40.519601119+00:00 stderr F - }, 2025-08-13T20:08:40.519601119+00:00 stderr F + { 2025-08-13T20:08:40.519601119+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:40.519601119+00:00 stderr F + Status: "True", 2025-08-13T20:08:40.519601119+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.476273256 +0000 UTC m=+439.681372500", 2025-08-13T20:08:40.519601119+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:40.519601119+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:40.519601119+00:00 stderr F + }, 2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:40.519601119+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:40.519601119+00:00 stderr F }, 2025-08-13T20:08:40.519601119+00:00 stderr F Version: "", 2025-08-13T20:08:40.519601119+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:40.519601119+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:40.519601119+00:00 stderr F } 2025-08-13T20:08:40.622538090+00:00 stderr F E0813 20:08:40.622405 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.705527799+00:00 stderr F E0813 20:08:40.705212 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.707585248+00:00 stderr F I0813 20:08:40.707517 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:40.707585248+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:40.707585248+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:40.707585248+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:40.707585248+00:00 stderr F - { 2025-08-13T20:08:40.707585248+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:40.707585248+00:00 stderr F - Status: "False", 2025-08-13T20:08:40.707585248+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:40.707585248+00:00 stderr F - }, 2025-08-13T20:08:40.707585248+00:00 stderr F + { 2025-08-13T20:08:40.707585248+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:40.707585248+00:00 stderr F + Status: "True", 2025-08-13T20:08:40.707585248+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.705510879 +0000 UTC m=+439.910610083", 2025-08-13T20:08:40.707585248+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:40.707585248+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:40.707585248+00:00 stderr F + }, 2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:40.707585248+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:40.707585248+00:00 stderr F }, 2025-08-13T20:08:40.707585248+00:00 stderr F Version: "", 2025-08-13T20:08:40.707585248+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:40.707585248+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:40.707585248+00:00 stderr F } 2025-08-13T20:08:40.814590936+00:00 stderr F I0813 20:08:40.814038 1 request.go:697] Waited for 1.073262681s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:40.830016859+00:00 stderr F E0813 20:08:40.826096 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.020581012+00:00 stderr F E0813 20:08:41.020441 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.106359572+00:00 stderr F E0813 20:08:41.106226 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.108849493+00:00 stderr F I0813 20:08:41.108071 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.108849493+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.108849493+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.108849493+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:41.108849493+00:00 stderr F - { 2025-08-13T20:08:41.108849493+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:41.108849493+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.108849493+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:41.108849493+00:00 stderr F - }, 2025-08-13T20:08:41.108849493+00:00 stderr F + { 2025-08-13T20:08:41.108849493+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:41.108849493+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.108849493+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.10630736 +0000 UTC m=+440.311406594", 2025-08-13T20:08:41.108849493+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.108849493+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:41.108849493+00:00 stderr F + }, 2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:41.108849493+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:41.108849493+00:00 stderr F }, 2025-08-13T20:08:41.108849493+00:00 stderr F Version: "", 2025-08-13T20:08:41.108849493+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.108849493+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.108849493+00:00 stderr F } 2025-08-13T20:08:41.220472763+00:00 stderr F E0813 20:08:41.219218 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.312987176+00:00 stderr F E0813 20:08:41.310744 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.317692401+00:00 stderr F I0813 20:08:41.317583 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.317692401+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.317692401+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.317692401+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:41.317692401+00:00 stderr F - { 2025-08-13T20:08:41.317692401+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:41.317692401+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.317692401+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:41.317692401+00:00 stderr F - }, 2025-08-13T20:08:41.317692401+00:00 stderr F + { 2025-08-13T20:08:41.317692401+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:41.317692401+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.317692401+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.310887966 +0000 UTC m=+440.516016311", 2025-08-13T20:08:41.317692401+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.317692401+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:41.317692401+00:00 stderr F + }, 2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:41.317692401+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:41.317692401+00:00 stderr F }, 2025-08-13T20:08:41.317692401+00:00 stderr F Version: "", 2025-08-13T20:08:41.317692401+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.317692401+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.317692401+00:00 stderr F } 2025-08-13T20:08:41.415911757+00:00 stderr F E0813 20:08:41.415712 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.470984576+00:00 stderr F E0813 20:08:41.470499 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.474142146+00:00 stderr F I0813 20:08:41.474075 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.474142146+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.474142146+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.474142146+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:41.474142146+00:00 stderr F - { 2025-08-13T20:08:41.474142146+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:41.474142146+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.474142146+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:41.474142146+00:00 stderr F - }, 2025-08-13T20:08:41.474142146+00:00 stderr F + { 2025-08-13T20:08:41.474142146+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:41.474142146+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.474142146+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.470928164 +0000 UTC m=+440.676027688", 2025-08-13T20:08:41.474142146+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:41.474142146+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:41.474142146+00:00 stderr F + }, 2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:41.474142146+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:41.474142146+00:00 stderr F }, 2025-08-13T20:08:41.474142146+00:00 stderr F Version: "", 2025-08-13T20:08:41.474142146+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.474142146+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.474142146+00:00 stderr F } 2025-08-13T20:08:41.499540955+00:00 stderr F E0813 20:08:41.499430 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.501244953+00:00 stderr F I0813 20:08:41.501116 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.501244953+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.501244953+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.501244953+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:41.501244953+00:00 stderr F - { 2025-08-13T20:08:41.501244953+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:41.501244953+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.501244953+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:41.501244953+00:00 stderr F - }, 2025-08-13T20:08:41.501244953+00:00 stderr F + { 2025-08-13T20:08:41.501244953+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:41.501244953+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.501244953+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.499509634 +0000 UTC m=+440.704608858", 2025-08-13T20:08:41.501244953+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.501244953+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:41.501244953+00:00 stderr F + }, 2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:41.501244953+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:41.501244953+00:00 stderr F }, 2025-08-13T20:08:41.501244953+00:00 stderr F Version: "", 2025-08-13T20:08:41.501244953+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.501244953+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.501244953+00:00 stderr F } 2025-08-13T20:08:41.617100685+00:00 stderr F E0813 20:08:41.617046 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.702115823+00:00 stderr F E0813 20:08:41.700869 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.703451601+00:00 stderr F I0813 20:08:41.703292 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.703451601+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.703451601+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.703451601+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:41.703451601+00:00 stderr F - { 2025-08-13T20:08:41.703451601+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:41.703451601+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.703451601+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:41.703451601+00:00 stderr F - }, 2025-08-13T20:08:41.703451601+00:00 stderr F + { 2025-08-13T20:08:41.703451601+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:41.703451601+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.703451601+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.700951129 +0000 UTC m=+440.906050523", 2025-08-13T20:08:41.703451601+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.703451601+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:41.703451601+00:00 stderr F + }, 2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:41.703451601+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:41.703451601+00:00 stderr F }, 2025-08-13T20:08:41.703451601+00:00 stderr F Version: "", 2025-08-13T20:08:41.703451601+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.703451601+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.703451601+00:00 stderr F } 2025-08-13T20:08:41.815672598+00:00 stderr F E0813 20:08:41.815214 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.988818273+00:00 stderr F E0813 20:08:41.988457 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.998696056+00:00 stderr F I0813 20:08:41.990208 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.998696056+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.998696056+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.998696056+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:41.998696056+00:00 stderr F - { 2025-08-13T20:08:41.998696056+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:41.998696056+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.998696056+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:41.998696056+00:00 stderr F - }, 2025-08-13T20:08:41.998696056+00:00 stderr F + { 2025-08-13T20:08:41.998696056+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:41.998696056+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.998696056+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.988499663 +0000 UTC m=+441.193598837", 2025-08-13T20:08:41.998696056+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.998696056+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:41.998696056+00:00 stderr F + }, 2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:41.998696056+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:41.998696056+00:00 stderr F }, 2025-08-13T20:08:41.998696056+00:00 stderr F Version: "", 2025-08-13T20:08:41.998696056+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.998696056+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.998696056+00:00 stderr F } 2025-08-13T20:08:42.021301974+00:00 stderr F E0813 20:08:42.018061 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.186702406+00:00 stderr F E0813 20:08:42.186597 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.191558915+00:00 stderr F I0813 20:08:42.190611 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:42.191558915+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:42.191558915+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:42.191558915+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:42.191558915+00:00 stderr F - { 2025-08-13T20:08:42.191558915+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:42.191558915+00:00 stderr F - Status: "False", 2025-08-13T20:08:42.191558915+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:42.191558915+00:00 stderr F - }, 2025-08-13T20:08:42.191558915+00:00 stderr F + { 2025-08-13T20:08:42.191558915+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:42.191558915+00:00 stderr F + Status: "True", 2025-08-13T20:08:42.191558915+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.186660165 +0000 UTC m=+441.391759379", 2025-08-13T20:08:42.191558915+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:42.191558915+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:42.191558915+00:00 stderr F + }, 2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:42.191558915+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:42.191558915+00:00 stderr F }, 2025-08-13T20:08:42.191558915+00:00 stderr F Version: "", 2025-08-13T20:08:42.191558915+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:42.191558915+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:42.191558915+00:00 stderr F } 2025-08-13T20:08:42.217340755+00:00 stderr F E0813 20:08:42.216746 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.387145983+00:00 stderr F E0813 20:08:42.386163 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.389334026+00:00 stderr F I0813 20:08:42.387949 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:42.389334026+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:42.389334026+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:42.389334026+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:42.389334026+00:00 stderr F - { 2025-08-13T20:08:42.389334026+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:42.389334026+00:00 stderr F - Status: "False", 2025-08-13T20:08:42.389334026+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:42.389334026+00:00 stderr F - }, 2025-08-13T20:08:42.389334026+00:00 stderr F + { 2025-08-13T20:08:42.389334026+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:42.389334026+00:00 stderr F + Status: "True", 2025-08-13T20:08:42.389334026+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.386224797 +0000 UTC m=+441.591324171", 2025-08-13T20:08:42.389334026+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:42.389334026+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:42.389334026+00:00 stderr F + }, 2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:42.389334026+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:42.389334026+00:00 stderr F }, 2025-08-13T20:08:42.389334026+00:00 stderr F Version: "", 2025-08-13T20:08:42.389334026+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:42.389334026+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:42.389334026+00:00 stderr F } 2025-08-13T20:08:42.420863810+00:00 stderr F E0813 20:08:42.418609 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.614367508+00:00 stderr F I0813 20:08:42.613673 1 request.go:697] Waited for 1.112255989s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:42.619195186+00:00 stderr F E0813 20:08:42.616075 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.777729212+00:00 stderr F E0813 20:08:42.777624 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.781402937+00:00 stderr F I0813 20:08:42.781368 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:42.781402937+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:42.781402937+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:42.781402937+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:42.781402937+00:00 stderr F - { 2025-08-13T20:08:42.781402937+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:42.781402937+00:00 stderr F - Status: "False", 2025-08-13T20:08:42.781402937+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:42.781402937+00:00 stderr F - }, 2025-08-13T20:08:42.781402937+00:00 stderr F + { 2025-08-13T20:08:42.781402937+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:42.781402937+00:00 stderr F + Status: "True", 2025-08-13T20:08:42.781402937+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.777866465 +0000 UTC m=+441.982965660", 2025-08-13T20:08:42.781402937+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:42.781402937+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:42.781402937+00:00 stderr F + }, 2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:42.781402937+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:42.781402937+00:00 stderr F }, 2025-08-13T20:08:42.781402937+00:00 stderr F Version: "", 2025-08-13T20:08:42.781402937+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:42.781402937+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:42.781402937+00:00 stderr F } 2025-08-13T20:08:42.820097516+00:00 stderr F E0813 20:08:42.817337 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.980056452+00:00 stderr F E0813 20:08:42.979975 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.984561011+00:00 stderr F I0813 20:08:42.984516 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:42.984561011+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:42.984561011+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:42.984561011+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:42.984561011+00:00 stderr F - { 2025-08-13T20:08:42.984561011+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:42.984561011+00:00 stderr F - Status: "False", 2025-08-13T20:08:42.984561011+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:42.984561011+00:00 stderr F - }, 2025-08-13T20:08:42.984561011+00:00 stderr F + { 2025-08-13T20:08:42.984561011+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:42.984561011+00:00 stderr F + Status: "True", 2025-08-13T20:08:42.984561011+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.980152504 +0000 UTC m=+442.185251748", 2025-08-13T20:08:42.984561011+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:42.984561011+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:42.984561011+00:00 stderr F + }, 2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:42.984561011+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:42.984561011+00:00 stderr F }, 2025-08-13T20:08:42.984561011+00:00 stderr F Version: "", 2025-08-13T20:08:42.984561011+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:42.984561011+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:42.984561011+00:00 stderr F } 2025-08-13T20:08:43.026203515+00:00 stderr F E0813 20:08:43.026107 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.219599940+00:00 stderr F E0813 20:08:43.219361 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.351032518+00:00 stderr F E0813 20:08:43.350753 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.358561944+00:00 stderr F I0813 20:08:43.357865 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.358561944+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.358561944+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.358561944+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:43.358561944+00:00 stderr F - { 2025-08-13T20:08:43.358561944+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:43.358561944+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.358561944+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:43.358561944+00:00 stderr F - }, 2025-08-13T20:08:43.358561944+00:00 stderr F + { 2025-08-13T20:08:43.358561944+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:43.358561944+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.358561944+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.350980336 +0000 UTC m=+442.556079610", 2025-08-13T20:08:43.358561944+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:43.358561944+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:43.358561944+00:00 stderr F + }, 2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:43.358561944+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:43.358561944+00:00 stderr F }, 2025-08-13T20:08:43.358561944+00:00 stderr F Version: "", 2025-08-13T20:08:43.358561944+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.358561944+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.358561944+00:00 stderr F } 2025-08-13T20:08:43.420359366+00:00 stderr F E0813 20:08:43.420292 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.544543156+00:00 stderr F E0813 20:08:43.541670 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.544543156+00:00 stderr F I0813 20:08:43.543481 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.544543156+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.544543156+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.544543156+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:43.544543156+00:00 stderr F - { 2025-08-13T20:08:43.544543156+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:43.544543156+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.544543156+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:43.544543156+00:00 stderr F - }, 2025-08-13T20:08:43.544543156+00:00 stderr F + { 2025-08-13T20:08:43.544543156+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:43.544543156+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.544543156+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.541760616 +0000 UTC m=+442.746859820", 2025-08-13T20:08:43.544543156+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:43.544543156+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:43.544543156+00:00 stderr F + }, 2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:43.544543156+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:43.544543156+00:00 stderr F }, 2025-08-13T20:08:43.544543156+00:00 stderr F Version: "", 2025-08-13T20:08:43.544543156+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.544543156+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.544543156+00:00 stderr F } 2025-08-13T20:08:43.621723299+00:00 stderr F E0813 20:08:43.617077 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.704040239+00:00 stderr F E0813 20:08:43.700450 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.707824157+00:00 stderr F I0813 20:08:43.707671 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.707824157+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.707824157+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.707824157+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:43.707824157+00:00 stderr F - { 2025-08-13T20:08:43.707824157+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:43.707824157+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.707824157+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:43.707824157+00:00 stderr F - }, 2025-08-13T20:08:43.707824157+00:00 stderr F + { 2025-08-13T20:08:43.707824157+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:43.707824157+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.707824157+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.700504517 +0000 UTC m=+442.905603752", 2025-08-13T20:08:43.707824157+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:43.707824157+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:43.707824157+00:00 stderr F + }, 2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:43.707824157+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:43.707824157+00:00 stderr F }, 2025-08-13T20:08:43.707824157+00:00 stderr F Version: "", 2025-08-13T20:08:43.707824157+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.707824157+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.707824157+00:00 stderr F } 2025-08-13T20:08:43.747565477+00:00 stderr F E0813 20:08:43.747500 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.754464205+00:00 stderr F I0813 20:08:43.749557 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.754464205+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.754464205+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.754464205+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:43.754464205+00:00 stderr F - { 2025-08-13T20:08:43.754464205+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:43.754464205+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.754464205+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:43.754464205+00:00 stderr F - }, 2025-08-13T20:08:43.754464205+00:00 stderr F + { 2025-08-13T20:08:43.754464205+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:43.754464205+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.754464205+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.74766571 +0000 UTC m=+442.952764914", 2025-08-13T20:08:43.754464205+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:43.754464205+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:43.754464205+00:00 stderr F + }, 2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:43.754464205+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:43.754464205+00:00 stderr F }, 2025-08-13T20:08:43.754464205+00:00 stderr F Version: "", 2025-08-13T20:08:43.754464205+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.754464205+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.754464205+00:00 stderr F } 2025-08-13T20:08:43.817070210+00:00 stderr F E0813 20:08:43.816973 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.954648214+00:00 stderr F E0813 20:08:43.954369 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.958497534+00:00 stderr F I0813 20:08:43.958002 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.958497534+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.958497534+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.958497534+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:43.958497534+00:00 stderr F - { 2025-08-13T20:08:43.958497534+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:43.958497534+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.958497534+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:43.958497534+00:00 stderr F - }, 2025-08-13T20:08:43.958497534+00:00 stderr F + { 2025-08-13T20:08:43.958497534+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:43.958497534+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.958497534+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.954440868 +0000 UTC m=+443.159540072", 2025-08-13T20:08:43.958497534+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:43.958497534+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:43.958497534+00:00 stderr F + }, 2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:43.958497534+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:43.958497534+00:00 stderr F }, 2025-08-13T20:08:43.958497534+00:00 stderr F Version: "", 2025-08-13T20:08:43.958497534+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.958497534+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.958497534+00:00 stderr F } 2025-08-13T20:08:44.016321952+00:00 stderr F E0813 20:08:44.015605 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.140135912+00:00 stderr F E0813 20:08:44.140038 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.142288394+00:00 stderr F I0813 20:08:44.142194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:44.142288394+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:44.142288394+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:44.142288394+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:44.142288394+00:00 stderr F - { 2025-08-13T20:08:44.142288394+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:44.142288394+00:00 stderr F - Status: "False", 2025-08-13T20:08:44.142288394+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:44.142288394+00:00 stderr F - }, 2025-08-13T20:08:44.142288394+00:00 stderr F + { 2025-08-13T20:08:44.142288394+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:44.142288394+00:00 stderr F + Status: "True", 2025-08-13T20:08:44.142288394+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.140098821 +0000 UTC m=+443.345198165", 2025-08-13T20:08:44.142288394+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:44.142288394+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:44.142288394+00:00 stderr F + }, 2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:44.142288394+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:44.142288394+00:00 stderr F }, 2025-08-13T20:08:44.142288394+00:00 stderr F Version: "", 2025-08-13T20:08:44.142288394+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:44.142288394+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:44.142288394+00:00 stderr F } 2025-08-13T20:08:44.217677275+00:00 stderr F E0813 20:08:44.216206 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.414622152+00:00 stderr F E0813 20:08:44.414500 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.627485295+00:00 stderr F E0813 20:08:44.625858 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.660118291+00:00 stderr F E0813 20:08:44.659512 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.664169017+00:00 stderr F I0813 20:08:44.661294 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:44.664169017+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:44.664169017+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:44.664169017+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:44.664169017+00:00 stderr F - { 2025-08-13T20:08:44.664169017+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:44.664169017+00:00 stderr F - Status: "False", 2025-08-13T20:08:44.664169017+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:44.664169017+00:00 stderr F - }, 2025-08-13T20:08:44.664169017+00:00 stderr F + { 2025-08-13T20:08:44.664169017+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:44.664169017+00:00 stderr F + Status: "True", 2025-08-13T20:08:44.664169017+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.659587885 +0000 UTC m=+443.864687109", 2025-08-13T20:08:44.664169017+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:44.664169017+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:44.664169017+00:00 stderr F + }, 2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:44.664169017+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:44.664169017+00:00 stderr F }, 2025-08-13T20:08:44.664169017+00:00 stderr F Version: "", 2025-08-13T20:08:44.664169017+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:44.664169017+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:44.664169017+00:00 stderr F } 2025-08-13T20:08:44.816568856+00:00 stderr F E0813 20:08:44.816386 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.858538260+00:00 stderr F E0813 20:08:44.858437 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.861991979+00:00 stderr F I0813 20:08:44.860457 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:44.861991979+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:44.861991979+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:44.861991979+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:44.861991979+00:00 stderr F - { 2025-08-13T20:08:44.861991979+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:44.861991979+00:00 stderr F - Status: "False", 2025-08-13T20:08:44.861991979+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:44.861991979+00:00 stderr F - }, 2025-08-13T20:08:44.861991979+00:00 stderr F + { 2025-08-13T20:08:44.861991979+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:44.861991979+00:00 stderr F + Status: "True", 2025-08-13T20:08:44.861991979+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.858510859 +0000 UTC m=+444.063610093", 2025-08-13T20:08:44.861991979+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:44.861991979+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:44.861991979+00:00 stderr F + }, 2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:44.861991979+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:44.861991979+00:00 stderr F }, 2025-08-13T20:08:44.861991979+00:00 stderr F Version: "", 2025-08-13T20:08:44.861991979+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:44.861991979+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:44.861991979+00:00 stderr F } 2025-08-13T20:08:45.015636224+00:00 stderr F E0813 20:08:45.015347 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.215994528+00:00 stderr F E0813 20:08:45.214927 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.268234216+00:00 stderr F E0813 20:08:45.268094 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.270711397+00:00 stderr F I0813 20:08:45.270626 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:45.270711397+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:45.270711397+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:45.270711397+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:45.270711397+00:00 stderr F - { 2025-08-13T20:08:45.270711397+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:45.270711397+00:00 stderr F - Status: "False", 2025-08-13T20:08:45.270711397+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:45.270711397+00:00 stderr F - }, 2025-08-13T20:08:45.270711397+00:00 stderr F + { 2025-08-13T20:08:45.270711397+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:45.270711397+00:00 stderr F + Status: "True", 2025-08-13T20:08:45.270711397+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.268154714 +0000 UTC m=+444.473253928", 2025-08-13T20:08:45.270711397+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:45.270711397+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:45.270711397+00:00 stderr F + }, 2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:45.270711397+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:45.270711397+00:00 stderr F }, 2025-08-13T20:08:45.270711397+00:00 stderr F Version: "", 2025-08-13T20:08:45.270711397+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:45.270711397+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:45.270711397+00:00 stderr F } 2025-08-13T20:08:45.422093407+00:00 stderr F E0813 20:08:45.414446 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.459687385+00:00 stderr F E0813 20:08:45.459567 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.462676851+00:00 stderr F I0813 20:08:45.462418 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:45.462676851+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:45.462676851+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:45.462676851+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:45.462676851+00:00 stderr F - { 2025-08-13T20:08:45.462676851+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:45.462676851+00:00 stderr F - Status: "False", 2025-08-13T20:08:45.462676851+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:45.462676851+00:00 stderr F - }, 2025-08-13T20:08:45.462676851+00:00 stderr F + { 2025-08-13T20:08:45.462676851+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:45.462676851+00:00 stderr F + Status: "True", 2025-08-13T20:08:45.462676851+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.459644254 +0000 UTC m=+444.664743638", 2025-08-13T20:08:45.462676851+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:45.462676851+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:45.462676851+00:00 stderr F + }, 2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:45.462676851+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:45.462676851+00:00 stderr F }, 2025-08-13T20:08:45.462676851+00:00 stderr F Version: "", 2025-08-13T20:08:45.462676851+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:45.462676851+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:45.462676851+00:00 stderr F } 2025-08-13T20:08:45.618344284+00:00 stderr F E0813 20:08:45.617682 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.657507677+00:00 stderr F E0813 20:08:45.657306 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.663505239+00:00 stderr F I0813 20:08:45.663430 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:45.663505239+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:45.663505239+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:45.663505239+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:45.663505239+00:00 stderr F - { 2025-08-13T20:08:45.663505239+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:45.663505239+00:00 stderr F - Status: "False", 2025-08-13T20:08:45.663505239+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:45.663505239+00:00 stderr F - }, 2025-08-13T20:08:45.663505239+00:00 stderr F + { 2025-08-13T20:08:45.663505239+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:45.663505239+00:00 stderr F + Status: "True", 2025-08-13T20:08:45.663505239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.657411114 +0000 UTC m=+444.862510488", 2025-08-13T20:08:45.663505239+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:45.663505239+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:45.663505239+00:00 stderr F + }, 2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:45.663505239+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:45.663505239+00:00 stderr F }, 2025-08-13T20:08:45.663505239+00:00 stderr F Version: "", 2025-08-13T20:08:45.663505239+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:45.663505239+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:45.663505239+00:00 stderr F } 2025-08-13T20:08:45.815375213+00:00 stderr F E0813 20:08:45.815224 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.015615404+00:00 stderr F E0813 20:08:46.015490 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.501005391+00:00 stderr F E0813 20:08:46.500924 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.502990298+00:00 stderr F I0813 20:08:46.502964 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:46.502990298+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:46.502990298+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:46.502990298+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:46.502990298+00:00 stderr F - { 2025-08-13T20:08:46.502990298+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:46.502990298+00:00 stderr F - Status: "False", 2025-08-13T20:08:46.502990298+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:46.502990298+00:00 stderr F - }, 2025-08-13T20:08:46.502990298+00:00 stderr F + { 2025-08-13T20:08:46.502990298+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:46.502990298+00:00 stderr F + Status: "True", 2025-08-13T20:08:46.502990298+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.501100054 +0000 UTC m=+445.706199228", 2025-08-13T20:08:46.502990298+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:46.502990298+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:46.502990298+00:00 stderr F + }, 2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:46.502990298+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:46.502990298+00:00 stderr F }, 2025-08-13T20:08:46.502990298+00:00 stderr F Version: "", 2025-08-13T20:08:46.502990298+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:46.502990298+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:46.502990298+00:00 stderr F } 2025-08-13T20:08:46.505478419+00:00 stderr F E0813 20:08:46.505412 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.706044639+00:00 stderr F E0813 20:08:46.705954 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.708050136+00:00 stderr F I0813 20:08:46.707877 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:46.708050136+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:46.708050136+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:46.708050136+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:46.708050136+00:00 stderr F - { 2025-08-13T20:08:46.708050136+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:46.708050136+00:00 stderr F - Status: "False", 2025-08-13T20:08:46.708050136+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:46.708050136+00:00 stderr F - }, 2025-08-13T20:08:46.708050136+00:00 stderr F + { 2025-08-13T20:08:46.708050136+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:46.708050136+00:00 stderr F + Status: "True", 2025-08-13T20:08:46.708050136+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.706148572 +0000 UTC m=+445.911247926", 2025-08-13T20:08:46.708050136+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:46.708050136+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:46.708050136+00:00 stderr F + }, 2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:46.708050136+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:46.708050136+00:00 stderr F }, 2025-08-13T20:08:46.708050136+00:00 stderr F Version: "", 2025-08-13T20:08:46.708050136+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:46.708050136+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:46.708050136+00:00 stderr F } 2025-08-13T20:08:46.709550969+00:00 stderr F E0813 20:08:46.709501 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.901179643+00:00 stderr F E0813 20:08:46.900695 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.906398613+00:00 stderr F I0813 20:08:46.906310 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:46.906398613+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:46.906398613+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:46.906398613+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:46.906398613+00:00 stderr F - { 2025-08-13T20:08:46.906398613+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:46.906398613+00:00 stderr F - Status: "False", 2025-08-13T20:08:46.906398613+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:46.906398613+00:00 stderr F - }, 2025-08-13T20:08:46.906398613+00:00 stderr F + { 2025-08-13T20:08:46.906398613+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:46.906398613+00:00 stderr F + Status: "True", 2025-08-13T20:08:46.906398613+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.901114832 +0000 UTC m=+446.106214156", 2025-08-13T20:08:46.906398613+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:46.906398613+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:46.906398613+00:00 stderr F + }, 2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:46.906398613+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:46.906398613+00:00 stderr F }, 2025-08-13T20:08:46.906398613+00:00 stderr F Version: "", 2025-08-13T20:08:46.906398613+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:46.906398613+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:46.906398613+00:00 stderr F } 2025-08-13T20:08:46.907769112+00:00 stderr F E0813 20:08:46.907662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.977342047+00:00 stderr F E0813 20:08:46.977226 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.979552040+00:00 stderr F I0813 20:08:46.979470 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:46.979552040+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:46.979552040+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:46.979552040+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:46.979552040+00:00 stderr F - { 2025-08-13T20:08:46.979552040+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:46.979552040+00:00 stderr F - Status: "False", 2025-08-13T20:08:46.979552040+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:46.979552040+00:00 stderr F - }, 2025-08-13T20:08:46.979552040+00:00 stderr F + { 2025-08-13T20:08:46.979552040+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:46.979552040+00:00 stderr F + Status: "True", 2025-08-13T20:08:46.979552040+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.977278655 +0000 UTC m=+446.182377829", 2025-08-13T20:08:46.979552040+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:46.979552040+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:46.979552040+00:00 stderr F + }, 2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:46.979552040+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:46.979552040+00:00 stderr F }, 2025-08-13T20:08:46.979552040+00:00 stderr F Version: "", 2025-08-13T20:08:46.979552040+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:46.979552040+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:46.979552040+00:00 stderr F } 2025-08-13T20:08:46.983096882+00:00 stderr F E0813 20:08:46.982938 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.098157441+00:00 stderr F E0813 20:08:47.098074 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.102136425+00:00 stderr F I0813 20:08:47.100739 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:47.102136425+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:47.102136425+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:47.102136425+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:47.102136425+00:00 stderr F - { 2025-08-13T20:08:47.102136425+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:47.102136425+00:00 stderr F - Status: "False", 2025-08-13T20:08:47.102136425+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:47.102136425+00:00 stderr F - }, 2025-08-13T20:08:47.102136425+00:00 stderr F + { 2025-08-13T20:08:47.102136425+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:47.102136425+00:00 stderr F + Status: "True", 2025-08-13T20:08:47.102136425+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:47.098144541 +0000 UTC m=+446.303243835", 2025-08-13T20:08:47.102136425+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:47.102136425+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:47.102136425+00:00 stderr F + }, 2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:47.102136425+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:47.102136425+00:00 stderr F }, 2025-08-13T20:08:47.102136425+00:00 stderr F Version: "", 2025-08-13T20:08:47.102136425+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:47.102136425+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:47.102136425+00:00 stderr F } 2025-08-13T20:08:47.105107300+00:00 stderr F E0813 20:08:47.105013 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.299446792+00:00 stderr F E0813 20:08:47.298300 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.314666088+00:00 stderr F I0813 20:08:47.299923 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:47.314666088+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:47.314666088+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:47.314666088+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:47.314666088+00:00 stderr F - { 2025-08-13T20:08:47.314666088+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:47.314666088+00:00 stderr F - Status: "False", 2025-08-13T20:08:47.314666088+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:47.314666088+00:00 stderr F - }, 2025-08-13T20:08:47.314666088+00:00 stderr F + { 2025-08-13T20:08:47.314666088+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:47.314666088+00:00 stderr F + Status: "True", 2025-08-13T20:08:47.314666088+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:47.298356561 +0000 UTC m=+446.503455745", 2025-08-13T20:08:47.314666088+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:47.314666088+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:47.314666088+00:00 stderr F + }, 2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:47.314666088+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:47.314666088+00:00 stderr F }, 2025-08-13T20:08:47.314666088+00:00 stderr F Version: "", 2025-08-13T20:08:47.314666088+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:47.314666088+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:47.314666088+00:00 stderr F } 2025-08-13T20:08:47.314666088+00:00 stderr F E0813 20:08:47.301488 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.073730023+00:00 stderr F E0813 20:08:49.073678 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.078610473+00:00 stderr F I0813 20:08:49.077425 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.078610473+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.078610473+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.078610473+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:49.078610473+00:00 stderr F - { 2025-08-13T20:08:49.078610473+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:49.078610473+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.078610473+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:49.078610473+00:00 stderr F - }, 2025-08-13T20:08:49.078610473+00:00 stderr F + { 2025-08-13T20:08:49.078610473+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:49.078610473+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.078610473+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.073921938 +0000 UTC m=+448.279021332", 2025-08-13T20:08:49.078610473+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.078610473+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:49.078610473+00:00 stderr F + }, 2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:49.078610473+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:49.078610473+00:00 stderr F }, 2025-08-13T20:08:49.078610473+00:00 stderr F Version: "", 2025-08-13T20:08:49.078610473+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.078610473+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.078610473+00:00 stderr F } 2025-08-13T20:08:49.082722501+00:00 stderr F E0813 20:08:49.082646 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.272062159+00:00 stderr F E0813 20:08:49.271885 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.275436466+00:00 stderr F I0813 20:08:49.275396 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.275436466+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.275436466+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.275436466+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:49.275436466+00:00 stderr F - { 2025-08-13T20:08:49.275436466+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:49.275436466+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.275436466+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:49.275436466+00:00 stderr F - }, 2025-08-13T20:08:49.275436466+00:00 stderr F + { 2025-08-13T20:08:49.275436466+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:49.275436466+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.275436466+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.271969677 +0000 UTC m=+448.477068891", 2025-08-13T20:08:49.275436466+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.275436466+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:49.275436466+00:00 stderr F + }, 2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:49.275436466+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:49.275436466+00:00 stderr F }, 2025-08-13T20:08:49.275436466+00:00 stderr F Version: "", 2025-08-13T20:08:49.275436466+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.275436466+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.275436466+00:00 stderr F } 2025-08-13T20:08:49.277923097+00:00 stderr F E0813 20:08:49.276996 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.470053486+00:00 stderr F E0813 20:08:49.469879 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.472027282+00:00 stderr F I0813 20:08:49.471955 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.472027282+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.472027282+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.472027282+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:49.472027282+00:00 stderr F - { 2025-08-13T20:08:49.472027282+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:49.472027282+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.472027282+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:49.472027282+00:00 stderr F - }, 2025-08-13T20:08:49.472027282+00:00 stderr F + { 2025-08-13T20:08:49.472027282+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:49.472027282+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.472027282+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.469974794 +0000 UTC m=+448.675074108", 2025-08-13T20:08:49.472027282+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.472027282+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:49.472027282+00:00 stderr F + }, 2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:49.472027282+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:49.472027282+00:00 stderr F }, 2025-08-13T20:08:49.472027282+00:00 stderr F Version: "", 2025-08-13T20:08:49.472027282+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.472027282+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.472027282+00:00 stderr F } 2025-08-13T20:08:49.473334110+00:00 stderr F E0813 20:08:49.473274 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.667066634+00:00 stderr F E0813 20:08:49.666953 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.672081948+00:00 stderr F I0813 20:08:49.671872 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.672081948+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.672081948+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.672081948+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:49.672081948+00:00 stderr F - { 2025-08-13T20:08:49.672081948+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:49.672081948+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.672081948+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:49.672081948+00:00 stderr F - }, 2025-08-13T20:08:49.672081948+00:00 stderr F + { 2025-08-13T20:08:49.672081948+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:49.672081948+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.672081948+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.667166027 +0000 UTC m=+448.872265331", 2025-08-13T20:08:49.672081948+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.672081948+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:49.672081948+00:00 stderr F + }, 2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:49.672081948+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:49.672081948+00:00 stderr F }, 2025-08-13T20:08:49.672081948+00:00 stderr F Version: "", 2025-08-13T20:08:49.672081948+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.672081948+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.672081948+00:00 stderr F } 2025-08-13T20:08:49.673729856+00:00 stderr F E0813 20:08:49.673622 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.863610400+00:00 stderr F E0813 20:08:49.863525 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.865616637+00:00 stderr F I0813 20:08:49.865399 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.865616637+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.865616637+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.865616637+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:49.865616637+00:00 stderr F - { 2025-08-13T20:08:49.865616637+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:49.865616637+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.865616637+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:49.865616637+00:00 stderr F - }, 2025-08-13T20:08:49.865616637+00:00 stderr F + { 2025-08-13T20:08:49.865616637+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:49.865616637+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.865616637+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.863575439 +0000 UTC m=+449.068674603", 2025-08-13T20:08:49.865616637+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.865616637+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:49.865616637+00:00 stderr F + }, 2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:49.865616637+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:49.865616637+00:00 stderr F }, 2025-08-13T20:08:49.865616637+00:00 stderr F Version: "", 2025-08-13T20:08:49.865616637+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.865616637+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.865616637+00:00 stderr F } 2025-08-13T20:08:49.867161861+00:00 stderr F E0813 20:08:49.867084 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.105158806+00:00 stderr F E0813 20:08:52.105106 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.107373990+00:00 stderr F I0813 20:08:52.107315 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:52.107373990+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:52.107373990+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:52.107373990+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:52.107373990+00:00 stderr F - { 2025-08-13T20:08:52.107373990+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:52.107373990+00:00 stderr F - Status: "False", 2025-08-13T20:08:52.107373990+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:52.107373990+00:00 stderr F - }, 2025-08-13T20:08:52.107373990+00:00 stderr F + { 2025-08-13T20:08:52.107373990+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:52.107373990+00:00 stderr F + Status: "True", 2025-08-13T20:08:52.107373990+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:52.105270139 +0000 UTC m=+451.310369213", 2025-08-13T20:08:52.107373990+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:52.107373990+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:52.107373990+00:00 stderr F + }, 2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:52.107373990+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:52.107373990+00:00 stderr F }, 2025-08-13T20:08:52.107373990+00:00 stderr F Version: "", 2025-08-13T20:08:52.107373990+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:52.107373990+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:52.107373990+00:00 stderr F } 2025-08-13T20:08:52.109545982+00:00 stderr F E0813 20:08:52.108731 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.207325077+00:00 stderr F E0813 20:08:54.206946 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.210384064+00:00 stderr F I0813 20:08:54.209528 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.210384064+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.210384064+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.210384064+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:54.210384064+00:00 stderr F - { 2025-08-13T20:08:54.210384064+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:54.210384064+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.210384064+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:54.210384064+00:00 stderr F - }, 2025-08-13T20:08:54.210384064+00:00 stderr F + { 2025-08-13T20:08:54.210384064+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:54.210384064+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.210384064+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.207307786 +0000 UTC m=+453.412407020", 2025-08-13T20:08:54.210384064+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.210384064+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:54.210384064+00:00 stderr F + }, 2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:54.210384064+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:54.210384064+00:00 stderr F }, 2025-08-13T20:08:54.210384064+00:00 stderr F Version: "", 2025-08-13T20:08:54.210384064+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.210384064+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.210384064+00:00 stderr F } 2025-08-13T20:08:54.212992689+00:00 stderr F E0813 20:08:54.211585 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.401921646+00:00 stderr F E0813 20:08:54.399433 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.403230154+00:00 stderr F I0813 20:08:54.401987 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.403230154+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.403230154+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.403230154+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F - { 2025-08-13T20:08:54.403230154+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.403230154+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.403230154+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:54.403230154+00:00 stderr F - }, 2025-08-13T20:08:54.403230154+00:00 stderr F + { 2025-08-13T20:08:54.403230154+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.403230154+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.403230154+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.399525887 +0000 UTC m=+453.604625211", 2025-08-13T20:08:54.403230154+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.403230154+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:54.403230154+00:00 stderr F + }, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:54.403230154+00:00 stderr F }, 2025-08-13T20:08:54.403230154+00:00 stderr F Version: "", 2025-08-13T20:08:54.403230154+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.403230154+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.403230154+00:00 stderr F } 2025-08-13T20:08:54.404868690+00:00 stderr F E0813 20:08:54.404648 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.598589065+00:00 stderr F E0813 20:08:54.598467 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.600546971+00:00 stderr F I0813 20:08:54.600449 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.600546971+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.600546971+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.600546971+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F - { 2025-08-13T20:08:54.600546971+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:54.600546971+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.600546971+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:54.600546971+00:00 stderr F - }, 2025-08-13T20:08:54.600546971+00:00 stderr F + { 2025-08-13T20:08:54.600546971+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:54.600546971+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.600546971+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.598694108 +0000 UTC m=+453.803793192", 2025-08-13T20:08:54.600546971+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.600546971+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:54.600546971+00:00 stderr F + }, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:54.600546971+00:00 stderr F }, 2025-08-13T20:08:54.600546971+00:00 stderr F Version: "", 2025-08-13T20:08:54.600546971+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.600546971+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.600546971+00:00 stderr F } 2025-08-13T20:08:54.606496561+00:00 stderr F E0813 20:08:54.605662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.803139279+00:00 stderr F E0813 20:08:54.803041 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.804947011+00:00 stderr F I0813 20:08:54.804771 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.804947011+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.804947011+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.804947011+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F - { 2025-08-13T20:08:54.804947011+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:54.804947011+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.804947011+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:54.804947011+00:00 stderr F - }, 2025-08-13T20:08:54.804947011+00:00 stderr F + { 2025-08-13T20:08:54.804947011+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:54.804947011+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.804947011+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.803096458 +0000 UTC m=+454.008195642", 2025-08-13T20:08:54.804947011+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.804947011+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:54.804947011+00:00 stderr F + }, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:54.804947011+00:00 stderr F }, 2025-08-13T20:08:54.804947011+00:00 stderr F Version: "", 2025-08-13T20:08:54.804947011+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.804947011+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.804947011+00:00 stderr F } 2025-08-13T20:08:54.812567070+00:00 stderr F E0813 20:08:54.812132 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.994513956+00:00 stderr F E0813 20:08:54.994469 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.998052418+00:00 stderr F I0813 20:08:54.998024 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.998052418+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.998052418+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.998052418+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F - { 2025-08-13T20:08:54.998052418+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.998052418+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.998052418+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:54.998052418+00:00 stderr F - }, 2025-08-13T20:08:54.998052418+00:00 stderr F + { 2025-08-13T20:08:54.998052418+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.998052418+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.998052418+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.994837066 +0000 UTC m=+454.199936400", 2025-08-13T20:08:54.998052418+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.998052418+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:54.998052418+00:00 stderr F + }, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:54.998052418+00:00 stderr F }, 2025-08-13T20:08:54.998052418+00:00 stderr F Version: "", 2025-08-13T20:08:54.998052418+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.998052418+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.998052418+00:00 stderr F } 2025-08-13T20:08:55.000666323+00:00 stderr F E0813 20:08:54.999518 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:29.132447831+00:00 stderr F I0813 20:09:29.131961 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:09:31.727617477+00:00 stderr F I0813 20:09:31.726940 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:33.466439120+00:00 stderr F I0813 20:09:33.466285 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:38.293980459+00:00 stderr F I0813 20:09:38.293268 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:38.438579575+00:00 stderr F I0813 20:09:38.438168 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:39.328846060+00:00 stderr F I0813 20:09:39.328717 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 
2025-08-13T20:09:41.223935102+00:00 stderr F I0813 20:09:41.221758 1 reflector.go:351] Caches populated for *v1.ConsoleCLIDownload from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:09:42.504541109+00:00 stderr F I0813 20:09:42.504138 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.362113407+00:00 stderr F I0813 20:09:43.361586 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.809874274+00:00 stderr F I0813 20:09:43.808495 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:44.589466095+00:00 stderr F I0813 20:09:44.589375 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:44.848149892+00:00 stderr F I0813 20:09:44.848068 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.227115527+00:00 stderr F I0813 20:09:45.227056 1 reflector.go:351] Caches populated for operators.coreos.com/v1, Resource=olmconfigs from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:09:45.377858239+00:00 stderr F I0813 20:09:45.376607 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:45.788857763+00:00 stderr F I0813 20:09:45.786291 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.932375290+00:00 stderr F I0813 20:09:48.931130 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 
2025-08-13T20:09:50.007225758+00:00 stderr F I0813 20:09:50.006990 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:51.632549997+00:00 stderr F I0813 20:09:51.628985 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:52.739357640+00:00 stderr F I0813 20:09:52.737293 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:53.232652774+00:00 stderr F I0813 20:09:53.232458 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:08.318075215+00:00 stderr F I0813 20:10:08.307610 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:09.116330181+00:00 stderr F I0813 20:10:09.115726 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:13.496759312+00:00 stderr F I0813 20:10:13.496314 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:10:14.844105511+00:00 stderr F I0813 20:10:14.842060 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:21.201057198+00:00 stderr F I0813 20:10:21.200251 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:26.969289168+00:00 stderr F I0813 20:10:26.960073 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.566027517+00:00 stderr F I0813 20:10:27.565375 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:28.128064702+00:00 stderr F I0813 20:10:28.127899 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.661418915+00:00 stderr F I0813 20:10:29.639889 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:38.203711119+00:00 stderr F I0813 20:10:38.202869 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.413613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.411381 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.417730 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.411546 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.420250 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.470292238+00:00 stderr F I0813 20:42:36.424386 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.479606457+00:00 stderr F I0813 20:42:36.478993 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.479606457+00:00 stderr F I0813 20:42:36.479403 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486942459+00:00 stderr F I0813 20:42:36.482209 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.487206916+00:00 
stderr F I0813 20:42:36.487174 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.488106742+00:00 stderr F I0813 20:42:36.488085 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.488608537+00:00 stderr F I0813 20:42:36.488588 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.489345088+00:00 stderr F I0813 20:42:36.489320 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.489868833+00:00 stderr F I0813 20:42:36.489841 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.490397378+00:00 stderr F I0813 20:42:36.490371 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.490919933+00:00 stderr F I0813 20:42:36.490891 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491293134+00:00 stderr F I0813 20:42:36.491263 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.498563 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.499140 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.499433 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.500829 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501329 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: 
unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501902 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.502180 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.502471 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.513653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516357957+00:00 stderr F I0813 20:42:36.516322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516766668+00:00 stderr F I0813 20:42:36.516740 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517208181+00:00 stderr F I0813 20:42:36.517182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517532220+00:00 stderr F I0813 20:42:36.517509 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.535073116+00:00 stderr F I0813 20:42:36.411567 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.724747085+00:00 stderr F E0813 20:42:36.724674 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.047531220+00:00 stderr F E0813 
20:42:37.047077 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.072970634+00:00 stderr F I0813 20:42:37.072881 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.072970634+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.072970634+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.072970634+00:00 stderr F ... // 43 identical elements 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F - { 2025-08-13T20:42:37.072970634+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.072970634+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.072970634+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.072970634+00:00 stderr F - }, 2025-08-13T20:42:37.072970634+00:00 stderr F + { 2025-08-13T20:42:37.072970634+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.072970634+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.072970634+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.047612663 +0000 UTC m=+2476.252711917", 2025-08-13T20:42:37.072970634+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.072970634+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.072970634+00:00 stderr F + }, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: 
"False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:37.072970634+00:00 stderr F }, 2025-08-13T20:42:37.072970634+00:00 stderr F Version: "", 2025-08-13T20:42:37.072970634+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.072970634+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.072970634+00:00 stderr F } 2025-08-13T20:42:37.080679256+00:00 stderr F E0813 20:42:37.080588 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.091382095+00:00 stderr F E0813 20:42:37.088612 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.093880137+00:00 stderr F I0813 20:42:37.091708 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.093880137+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.093880137+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.093880137+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F - { 2025-08-13T20:42:37.093880137+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.093880137+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.093880137+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.093880137+00:00 stderr F - }, 2025-08-13T20:42:37.093880137+00:00 stderr F + { 2025-08-13T20:42:37.093880137+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.093880137+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.093880137+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.088681547 +0000 UTC m=+2476.293780771", 2025-08-13T20:42:37.093880137+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.093880137+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.093880137+00:00 stderr F + }, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:37.093880137+00:00 stderr F }, 2025-08-13T20:42:37.093880137+00:00 stderr F Version: "", 2025-08-13T20:42:37.093880137+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.093880137+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.093880137+00:00 stderr F } 2025-08-13T20:42:37.093880137+00:00 stderr F E0813 20:42:37.093400 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.104514653+00:00 stderr F E0813 20:42:37.104469 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.107567551+00:00 stderr F I0813 20:42:37.107531 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.107567551+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.107567551+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.107567551+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.107567551+00:00 stderr F - { 2025-08-13T20:42:37.107567551+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.107567551+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.107567551+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.107567551+00:00 stderr F - }, 2025-08-13T20:42:37.107567551+00:00 stderr F + { 2025-08-13T20:42:37.107567551+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.107567551+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.107567551+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.104589836 +0000 UTC m=+2476.309689100", 2025-08-13T20:42:37.107567551+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.107567551+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.107567551+00:00 stderr F + }, 2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.107567551+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:37.107567551+00:00 stderr F }, 2025-08-13T20:42:37.107567551+00:00 stderr F Version: "", 2025-08-13T20:42:37.107567551+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.107567551+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.107567551+00:00 stderr F } 2025-08-13T20:42:37.108659843+00:00 stderr F E0813 20:42:37.108634 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.130076070+00:00 stderr F E0813 20:42:37.130008 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.134904380+00:00 stderr F I0813 20:42:37.134145 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.134904380+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.134904380+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.134904380+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.134904380+00:00 stderr F - { 2025-08-13T20:42:37.134904380+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.134904380+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.134904380+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.134904380+00:00 stderr F - }, 2025-08-13T20:42:37.134904380+00:00 stderr F + { 2025-08-13T20:42:37.134904380+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.134904380+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.134904380+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.130170743 +0000 UTC m=+2476.335269987", 2025-08-13T20:42:37.134904380+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.134904380+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.134904380+00:00 stderr F + }, 2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.134904380+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:37.134904380+00:00 stderr F }, 2025-08-13T20:42:37.134904380+00:00 stderr F Version: "", 2025-08-13T20:42:37.134904380+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.134904380+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.134904380+00:00 stderr F } 2025-08-13T20:42:37.136082974+00:00 stderr F E0813 20:42:37.136006 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.152401314+00:00 stderr F E0813 20:42:37.152276 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.153696011+00:00 stderr F E0813 20:42:37.153636 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.154406892+00:00 stderr F I0813 20:42:37.154344 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.154406892+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.154406892+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.154406892+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.154406892+00:00 stderr F - { 2025-08-13T20:42:37.154406892+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.154406892+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.154406892+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.154406892+00:00 stderr F - }, 2025-08-13T20:42:37.154406892+00:00 stderr F + { 2025-08-13T20:42:37.154406892+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.154406892+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.154406892+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.152329982 +0000 UTC m=+2476.357429176", 2025-08-13T20:42:37.154406892+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.154406892+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.154406892+00:00 stderr F + }, 2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.154406892+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:42:37.154406892+00:00 stderr F }, 2025-08-13T20:42:37.154406892+00:00 stderr F Version: "", 2025-08-13T20:42:37.154406892+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.154406892+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.154406892+00:00 stderr F } 2025-08-13T20:42:37.154951978+00:00 stderr F E0813 20:42:37.154902 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.156953415+00:00 stderr F E0813 20:42:37.156911 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.157254174+00:00 stderr F I0813 20:42:37.157149 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.157254174+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.157254174+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.157254174+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.157254174+00:00 stderr F - { 2025-08-13T20:42:37.157254174+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.157254174+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.157254174+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.157254174+00:00 stderr F - }, 2025-08-13T20:42:37.157254174+00:00 stderr F + { 2025-08-13T20:42:37.157254174+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.157254174+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.157254174+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.153704092 +0000 UTC m=+2476.358803346", 2025-08-13T20:42:37.157254174+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.157254174+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:37.157254174+00:00 stderr F + }, 2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.157254174+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:37.157254174+00:00 stderr F }, 2025-08-13T20:42:37.157254174+00:00 stderr F Version: "", 2025-08-13T20:42:37.157254174+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.157254174+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.157254174+00:00 stderr F } 2025-08-13T20:42:37.157768069+00:00 stderr F E0813 20:42:37.157694 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.158715096+00:00 stderr F I0813 20:42:37.158674 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.158715096+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.158715096+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.158715096+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.158715096+00:00 stderr F - { 2025-08-13T20:42:37.158715096+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.158715096+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.158715096+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.158715096+00:00 stderr F - }, 2025-08-13T20:42:37.158715096+00:00 stderr F + { 2025-08-13T20:42:37.158715096+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.158715096+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.158715096+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.156949085 +0000 UTC m=+2476.362048299", 2025-08-13T20:42:37.158715096+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.158715096+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.158715096+00:00 stderr F + }, 2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.158715096+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:37.158715096+00:00 stderr F }, 2025-08-13T20:42:37.158715096+00:00 stderr F Version: "", 2025-08-13T20:42:37.158715096+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.158715096+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.158715096+00:00 stderr F } 2025-08-13T20:42:37.161049123+00:00 stderr F E0813 20:42:37.160953 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.162561037+00:00 stderr F E0813 20:42:37.162500 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.168692684+00:00 stderr F E0813 20:42:37.166942 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.170063273+00:00 stderr F I0813 20:42:37.169966 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.170063273+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.170063273+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.170063273+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.170063273+00:00 stderr F - { 2025-08-13T20:42:37.170063273+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.170063273+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.170063273+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.170063273+00:00 stderr F - }, 2025-08-13T20:42:37.170063273+00:00 stderr F + { 2025-08-13T20:42:37.170063273+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.170063273+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.170063273+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.162545296 +0000 UTC m=+2476.367644531", 2025-08-13T20:42:37.170063273+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.170063273+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.170063273+00:00 stderr F + }, 2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.170063273+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:42:37.170063273+00:00 stderr F }, 2025-08-13T20:42:37.170063273+00:00 stderr F Version: "", 2025-08-13T20:42:37.170063273+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.170063273+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.170063273+00:00 stderr F } 2025-08-13T20:42:37.170813665+00:00 stderr F E0813 20:42:37.170710 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.171744182+00:00 stderr F E0813 20:42:37.171719 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.174842371+00:00 stderr F I0813 20:42:37.174733 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.174842371+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.174842371+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.174842371+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.174842371+00:00 stderr F - { 2025-08-13T20:42:37.174842371+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.174842371+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.174842371+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.174842371+00:00 stderr F - }, 2025-08-13T20:42:37.174842371+00:00 stderr F + { 2025-08-13T20:42:37.174842371+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.174842371+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.174842371+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.169093895 +0000 UTC m=+2476.374193069", 2025-08-13T20:42:37.174842371+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.174842371+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.174842371+00:00 stderr F + }, 2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.174842371+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:37.174842371+00:00 stderr F }, 2025-08-13T20:42:37.174842371+00:00 stderr F Version: "", 2025-08-13T20:42:37.174842371+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.174842371+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.174842371+00:00 stderr F } 2025-08-13T20:42:37.175020506+00:00 stderr F E0813 20:42:37.174961 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.175419478+00:00 stderr F E0813 20:42:37.175365 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.175579562+00:00 stderr F I0813 20:42:37.175559 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.175579562+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.175579562+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.175579562+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.175579562+00:00 stderr F - { 2025-08-13T20:42:37.175579562+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.175579562+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.175579562+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.175579562+00:00 stderr F - }, 2025-08-13T20:42:37.175579562+00:00 stderr F + { 2025-08-13T20:42:37.175579562+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.175579562+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.175579562+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.172106842 +0000 UTC m=+2476.377206116", 2025-08-13T20:42:37.175579562+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.175579562+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:37.175579562+00:00 stderr F + }, 2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.175579562+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:37.175579562+00:00 stderr F }, 2025-08-13T20:42:37.175579562+00:00 stderr F Version: "", 2025-08-13T20:42:37.175579562+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.175579562+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.175579562+00:00 stderr F } 2025-08-13T20:42:37.177169998+00:00 stderr F E0813 20:42:37.177145 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.178307351+00:00 stderr F E0813 20:42:37.178282 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.181095601+00:00 stderr F I0813 20:42:37.181070 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.181095601+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.181095601+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.181095601+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.181095601+00:00 stderr F - { 2025-08-13T20:42:37.181095601+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.181095601+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.181095601+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.181095601+00:00 stderr F - }, 2025-08-13T20:42:37.181095601+00:00 stderr F + { 2025-08-13T20:42:37.181095601+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.181095601+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.181095601+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.178357482 +0000 UTC m=+2476.383456656", 2025-08-13T20:42:37.181095601+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.181095601+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.181095601+00:00 stderr F + }, 2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.181095601+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:37.181095601+00:00 stderr F }, 2025-08-13T20:42:37.181095601+00:00 stderr F Version: "", 2025-08-13T20:42:37.181095601+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.181095601+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.181095601+00:00 stderr F } 2025-08-13T20:42:37.181326208+00:00 stderr F E0813 20:42:37.181273 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.182698968+00:00 stderr F E0813 20:42:37.182673 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.184335625+00:00 stderr F E0813 20:42:37.183135 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.193748266+00:00 stderr F I0813 20:42:37.193682 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.193748266+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.193748266+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.193748266+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.193748266+00:00 stderr F - { 2025-08-13T20:42:37.193748266+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.193748266+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.193748266+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.193748266+00:00 stderr F - }, 2025-08-13T20:42:37.193748266+00:00 stderr F + { 2025-08-13T20:42:37.193748266+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.193748266+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.193748266+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.181330718 +0000 UTC m=+2476.386429932", 2025-08-13T20:42:37.193748266+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.193748266+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:37.193748266+00:00 stderr F + }, 2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.193748266+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:37.193748266+00:00 stderr F }, 2025-08-13T20:42:37.193748266+00:00 stderr F Version: "", 2025-08-13T20:42:37.193748266+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.193748266+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.193748266+00:00 stderr F } 2025-08-13T20:42:37.196264659+00:00 stderr F E0813 20:42:37.195842 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.197475353+00:00 stderr F I0813 20:42:37.197447 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.197475353+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.197475353+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.197475353+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.197475353+00:00 stderr F - { 2025-08-13T20:42:37.197475353+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.197475353+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.197475353+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.197475353+00:00 stderr F - }, 2025-08-13T20:42:37.197475353+00:00 stderr F + { 2025-08-13T20:42:37.197475353+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.197475353+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.197475353+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.182755359 +0000 UTC m=+2476.387854593", 2025-08-13T20:42:37.197475353+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.197475353+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.197475353+00:00 stderr F + }, 2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.197475353+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:42:37.197475353+00:00 stderr F }, 2025-08-13T20:42:37.197475353+00:00 stderr F Version: "", 2025-08-13T20:42:37.197475353+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.197475353+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.197475353+00:00 stderr F } 2025-08-13T20:42:37.197866805+00:00 stderr F I0813 20:42:37.197734 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.197866805+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.197866805+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.197866805+00:00 stderr F ... // 19 identical elements 2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.197866805+00:00 stderr F - { 2025-08-13T20:42:37.197866805+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.197866805+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.197866805+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.197866805+00:00 stderr F - }, 2025-08-13T20:42:37.197866805+00:00 stderr F + { 2025-08-13T20:42:37.197866805+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.197866805+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.197866805+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.195891338 +0000 UTC m=+2476.400990662", 2025-08-13T20:42:37.197866805+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.197866805+00:00 stderr F + Message: `Get 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.197866805+00:00 stderr F + }, 2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.197866805+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:37.197866805+00:00 stderr F }, 2025-08-13T20:42:37.197866805+00:00 stderr F Version: "", 2025-08-13T20:42:37.197866805+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.197866805+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.197866805+00:00 stderr F } 2025-08-13T20:42:37.201089658+00:00 stderr F I0813 20:42:37.199658 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.201089658+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.201089658+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F - { 2025-08-13T20:42:37.201089658+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.201089658+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.201089658+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.201089658+00:00 stderr F - }, 2025-08-13T20:42:37.201089658+00:00 stderr F + { 2025-08-13T20:42:37.201089658+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.201089658+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.201089658+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.175048387 +0000 UTC m=+2476.380147631", 2025-08-13T20:42:37.201089658+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.201089658+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:37.201089658+00:00 stderr F + }, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F }, 2025-08-13T20:42:37.201089658+00:00 stderr F Version: "", 2025-08-13T20:42:37.201089658+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.201089658+00:00 stderr F } 2025-08-13T20:42:37.201089658+00:00 stderr F I0813 20:42:37.200041 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.201089658+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.201089658+00:00 stderr F ... // 10 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F - { 2025-08-13T20:42:37.201089658+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.201089658+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.201089658+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.201089658+00:00 stderr F - }, 2025-08-13T20:42:37.201089658+00:00 stderr F + { 2025-08-13T20:42:37.201089658+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.201089658+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.201089658+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.183194532 +0000 UTC m=+2476.388293796", 2025-08-13T20:42:37.201089658+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.201089658+00:00 stderr F + Message: `Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:37.201089658+00:00 stderr F + }, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F }, 2025-08-13T20:42:37.201089658+00:00 stderr F Version: "", 2025-08-13T20:42:37.201089658+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.201089658+00:00 stderr F } 2025-08-13T20:42:37.278750157+00:00 stderr F E0813 20:42:37.278081 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.361374069+00:00 stderr F E0813 20:42:37.360021 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.362399578+00:00 stderr F I0813 20:42:37.361872 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.362399578+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.362399578+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.362399578+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F - { 2025-08-13T20:42:37.362399578+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.362399578+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.362399578+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.362399578+00:00 stderr F - }, 2025-08-13T20:42:37.362399578+00:00 stderr F + { 2025-08-13T20:42:37.362399578+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.362399578+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.362399578+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.360093372 +0000 UTC m=+2476.565192556", 2025-08-13T20:42:37.362399578+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.362399578+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.362399578+00:00 stderr F + }, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:37.362399578+00:00 stderr F }, 2025-08-13T20:42:37.362399578+00:00 stderr F Version: "", 2025-08-13T20:42:37.362399578+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.362399578+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.362399578+00:00 stderr F } 2025-08-13T20:42:37.480029010+00:00 stderr F E0813 20:42:37.478833 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.499478650+00:00 stderr F E0813 20:42:37.499402 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.508584733+00:00 stderr F I0813 20:42:37.508555 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.508584733+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.508584733+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.508584733+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F - { 2025-08-13T20:42:37.508584733+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.508584733+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.508584733+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.508584733+00:00 stderr F - }, 2025-08-13T20:42:37.508584733+00:00 stderr F + { 2025-08-13T20:42:37.508584733+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.508584733+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.508584733+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.499967515 +0000 UTC m=+2476.705066729", 2025-08-13T20:42:37.508584733+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.508584733+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:37.508584733+00:00 stderr F + }, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:37.508584733+00:00 stderr F }, 2025-08-13T20:42:37.508584733+00:00 stderr F Version: "", 2025-08-13T20:42:37.508584733+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.508584733+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.508584733+00:00 stderr F } 2025-08-13T20:42:37.678252325+00:00 stderr F E0813 20:42:37.678114 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.709321850+00:00 stderr F E0813 20:42:37.709216 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.714349285+00:00 stderr F I0813 20:42:37.714323 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.714349285+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.714349285+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.714349285+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.714349285+00:00 stderr F - { 2025-08-13T20:42:37.714349285+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.714349285+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.714349285+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.714349285+00:00 stderr F - }, 2025-08-13T20:42:37.714349285+00:00 stderr F + { 2025-08-13T20:42:37.714349285+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.714349285+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.714349285+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.709431683 +0000 UTC m=+2476.914531117", 2025-08-13T20:42:37.714349285+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.714349285+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.714349285+00:00 stderr F + }, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:42:37.714349285+00:00 stderr F }, 2025-08-13T20:42:37.714349285+00:00 stderr F Version: "", 2025-08-13T20:42:37.714349285+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.714349285+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.714349285+00:00 stderr F } 2025-08-13T20:42:37.880558617+00:00 stderr F E0813 20:42:37.880503 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.902973793+00:00 stderr F E0813 20:42:37.902903 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.905525307+00:00 stderr F I0813 20:42:37.905036 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.905525307+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.905525307+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.905525307+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F - { 2025-08-13T20:42:37.905525307+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.905525307+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.905525307+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.905525307+00:00 stderr F - }, 2025-08-13T20:42:37.905525307+00:00 stderr F + { 2025-08-13T20:42:37.905525307+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.905525307+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.905525307+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.902964833 +0000 UTC m=+2477.108064057", 2025-08-13T20:42:37.905525307+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.905525307+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.905525307+00:00 stderr F + }, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:37.905525307+00:00 stderr F }, 2025-08-13T20:42:37.905525307+00:00 stderr F Version: "", 2025-08-13T20:42:37.905525307+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.905525307+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.905525307+00:00 stderr F } 2025-08-13T20:42:38.079416610+00:00 stderr F E0813 20:42:38.079292 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.087560055+00:00 stderr F E0813 20:42:38.087532 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.095856864+00:00 stderr F I0813 20:42:38.095829 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.095856864+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.095856864+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.095856864+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F - { 2025-08-13T20:42:38.095856864+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:38.095856864+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.095856864+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:38.095856864+00:00 stderr F - }, 2025-08-13T20:42:38.095856864+00:00 stderr F + { 2025-08-13T20:42:38.095856864+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:38.095856864+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.095856864+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.087688979 +0000 UTC m=+2477.292788183", 2025-08-13T20:42:38.095856864+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.095856864+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:38.095856864+00:00 stderr F + }, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:38.095856864+00:00 stderr F }, 2025-08-13T20:42:38.095856864+00:00 stderr F Version: "", 2025-08-13T20:42:38.095856864+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.095856864+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.095856864+00:00 stderr F } 2025-08-13T20:42:38.281930229+00:00 stderr F I0813 20:42:38.281864 1 request.go:697] Waited for 1.077706361s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:42:38.284905095+00:00 stderr F E0813 20:42:38.284725 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.297294822+00:00 stderr F E0813 20:42:38.297210 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.299001191+00:00 stderr F I0813 20:42:38.298970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.299001191+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.299001191+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.299001191+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F - { 2025-08-13T20:42:38.299001191+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.299001191+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.299001191+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:38.299001191+00:00 stderr F - }, 2025-08-13T20:42:38.299001191+00:00 stderr F + { 2025-08-13T20:42:38.299001191+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.299001191+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.299001191+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.297415085 +0000 UTC m=+2477.502514279", 2025-08-13T20:42:38.299001191+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.299001191+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:38.299001191+00:00 stderr F + }, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:38.299001191+00:00 stderr F }, 2025-08-13T20:42:38.299001191+00:00 stderr F Version: "", 2025-08-13T20:42:38.299001191+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.299001191+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.299001191+00:00 stderr F } 2025-08-13T20:42:38.481760610+00:00 stderr F E0813 20:42:38.481054 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.645268924+00:00 stderr F E0813 20:42:38.643682 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.647826998+00:00 stderr F I0813 20:42:38.645476 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.647826998+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.647826998+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.647826998+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F - { 2025-08-13T20:42:38.647826998+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:38.647826998+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.647826998+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:38.647826998+00:00 stderr F - }, 2025-08-13T20:42:38.647826998+00:00 stderr F + { 2025-08-13T20:42:38.647826998+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:38.647826998+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.647826998+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.64373663 +0000 UTC m=+2477.848835834", 2025-08-13T20:42:38.647826998+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:38.647826998+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:38.647826998+00:00 stderr F + }, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:38.647826998+00:00 stderr F }, 2025-08-13T20:42:38.647826998+00:00 stderr F Version: "", 2025-08-13T20:42:38.647826998+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.647826998+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.647826998+00:00 stderr F } 2025-08-13T20:42:38.679048798+00:00 stderr F E0813 20:42:38.677606 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.699143977+00:00 stderr F E0813 20:42:38.699048 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.702053281+00:00 stderr F I0813 20:42:38.700674 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.702053281+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.702053281+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.702053281+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F - { 2025-08-13T20:42:38.702053281+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.702053281+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.702053281+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:38.702053281+00:00 stderr F - }, 2025-08-13T20:42:38.702053281+00:00 stderr F + { 2025-08-13T20:42:38.702053281+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.702053281+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.702053281+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.699106996 +0000 UTC m=+2477.904206200", 2025-08-13T20:42:38.702053281+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.702053281+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:38.702053281+00:00 stderr F + }, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:38.702053281+00:00 stderr F }, 2025-08-13T20:42:38.702053281+00:00 stderr F Version: "", 2025-08-13T20:42:38.702053281+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.702053281+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.702053281+00:00 stderr F } 2025-08-13T20:42:38.877903141+00:00 stderr F E0813 20:42:38.877464 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.920318844+00:00 stderr F E0813 20:42:38.920260 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.922484606+00:00 stderr F I0813 20:42:38.922457 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.922484606+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.922484606+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.922484606+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:38.922484606+00:00 stderr F - { 2025-08-13T20:42:38.922484606+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:38.922484606+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.922484606+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:38.922484606+00:00 stderr F - }, 2025-08-13T20:42:38.922484606+00:00 stderr F + { 2025-08-13T20:42:38.922484606+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:38.922484606+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.922484606+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.920400506 +0000 UTC m=+2478.125499590", 2025-08-13T20:42:38.922484606+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.922484606+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:38.922484606+00:00 stderr F + }, 2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:38.922484606+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:42:38.922484606+00:00 stderr F }, 2025-08-13T20:42:38.922484606+00:00 stderr F Version: "", 2025-08-13T20:42:38.922484606+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.922484606+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.922484606+00:00 stderr F } 2025-08-13T20:42:39.078488314+00:00 stderr F E0813 20:42:39.077955 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.122207924+00:00 stderr F E0813 20:42:39.122164 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.123761269+00:00 stderr F I0813 20:42:39.123735 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:39.123761269+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:39.123761269+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:39.123761269+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:39.123761269+00:00 stderr F - { 2025-08-13T20:42:39.123761269+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:39.123761269+00:00 stderr F - Status: "False", 2025-08-13T20:42:39.123761269+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:39.123761269+00:00 stderr F - }, 2025-08-13T20:42:39.123761269+00:00 stderr F + { 2025-08-13T20:42:39.123761269+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:39.123761269+00:00 stderr F + Status: "True", 2025-08-13T20:42:39.123761269+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.122300367 +0000 UTC m=+2478.327399581", 2025-08-13T20:42:39.123761269+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:39.123761269+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:39.123761269+00:00 stderr F + }, 2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:39.123761269+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:39.123761269+00:00 stderr F }, 2025-08-13T20:42:39.123761269+00:00 stderr F Version: "", 2025-08-13T20:42:39.123761269+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:39.123761269+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:39.123761269+00:00 stderr F } 2025-08-13T20:42:39.279080997+00:00 stderr F E0813 20:42:39.278603 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.291431193+00:00 stderr F E0813 20:42:39.291116 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.295216782+00:00 stderr F I0813 20:42:39.294507 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:39.295216782+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:39.295216782+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:39.295216782+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:39.295216782+00:00 stderr F - { 2025-08-13T20:42:39.295216782+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:39.295216782+00:00 stderr F - Status: "False", 2025-08-13T20:42:39.295216782+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:39.295216782+00:00 stderr F - }, 2025-08-13T20:42:39.295216782+00:00 stderr F + { 2025-08-13T20:42:39.295216782+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:39.295216782+00:00 stderr F + Status: "True", 2025-08-13T20:42:39.295216782+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.291218057 +0000 UTC m=+2478.496317251", 2025-08-13T20:42:39.295216782+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:39.295216782+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:39.295216782+00:00 stderr F + }, 2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:39.295216782+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:39.295216782+00:00 stderr F }, 2025-08-13T20:42:39.295216782+00:00 stderr F Version: "", 2025-08-13T20:42:39.295216782+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:39.295216782+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:39.295216782+00:00 stderr F } 2025-08-13T20:42:39.478261130+00:00 stderr F I0813 20:42:39.477886 1 request.go:697] Waited for 1.178505017s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:42:39.479417903+00:00 stderr F E0813 20:42:39.479296 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.501115318+00:00 stderr F E0813 20:42:39.501043 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.503024893+00:00 stderr F I0813 20:42:39.502969 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:39.503024893+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:39.503024893+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:39.503024893+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:39.503024893+00:00 stderr F - { 2025-08-13T20:42:39.503024893+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:39.503024893+00:00 stderr F - Status: "False", 2025-08-13T20:42:39.503024893+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:39.503024893+00:00 stderr F - }, 2025-08-13T20:42:39.503024893+00:00 stderr F + { 2025-08-13T20:42:39.503024893+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:39.503024893+00:00 stderr F + Status: "True", 2025-08-13T20:42:39.503024893+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.501092018 +0000 UTC m=+2478.706191252", 2025-08-13T20:42:39.503024893+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:39.503024893+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:39.503024893+00:00 stderr F + }, 2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:39.503024893+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:39.503024893+00:00 stderr F }, 2025-08-13T20:42:39.503024893+00:00 stderr F Version: "", 2025-08-13T20:42:39.503024893+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:39.503024893+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:39.503024893+00:00 stderr F } 2025-08-13T20:42:39.678143251+00:00 stderr F E0813 20:42:39.678091 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.877598372+00:00 stderr F E0813 20:42:39.877538 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.919911252+00:00 stderr F E0813 20:42:39.919849 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.922002062+00:00 stderr F I0813 20:42:39.921973 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:39.922002062+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:39.922002062+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:39.922002062+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:39.922002062+00:00 stderr F - { 2025-08-13T20:42:39.922002062+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:39.922002062+00:00 stderr F - Status: "False", 2025-08-13T20:42:39.922002062+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:39.922002062+00:00 stderr F - }, 2025-08-13T20:42:39.922002062+00:00 stderr F + { 2025-08-13T20:42:39.922002062+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:39.922002062+00:00 stderr F + Status: "True", 2025-08-13T20:42:39.922002062+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.920004324 +0000 UTC m=+2479.125103398", 2025-08-13T20:42:39.922002062+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:39.922002062+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:39.922002062+00:00 stderr F + }, 2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:39.922002062+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:39.922002062+00:00 stderr F }, 2025-08-13T20:42:39.922002062+00:00 stderr F Version: "", 2025-08-13T20:42:39.922002062+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:39.922002062+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:39.922002062+00:00 stderr F } 2025-08-13T20:42:39.999384503+00:00 stderr F E0813 20:42:39.999332 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.001633568+00:00 stderr F I0813 20:42:40.001606 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.001633568+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.001633568+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.001633568+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:40.001633568+00:00 stderr F - { 2025-08-13T20:42:40.001633568+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:40.001633568+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.001633568+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:40.001633568+00:00 stderr F - }, 2025-08-13T20:42:40.001633568+00:00 stderr F + { 2025-08-13T20:42:40.001633568+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:40.001633568+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.001633568+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.999473585 +0000 UTC m=+2479.204572669", 2025-08-13T20:42:40.001633568+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:40.001633568+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:40.001633568+00:00 stderr F + }, 2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:40.001633568+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:40.001633568+00:00 stderr F }, 2025-08-13T20:42:40.001633568+00:00 stderr F Version: "", 2025-08-13T20:42:40.001633568+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.001633568+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.001633568+00:00 stderr F } 2025-08-13T20:42:40.077993219+00:00 stderr F E0813 20:42:40.077941 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.160942641+00:00 stderr F E0813 20:42:40.160874 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.163209416+00:00 stderr F I0813 20:42:40.163164 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.163209416+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.163209416+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.163209416+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:40.163209416+00:00 stderr F - { 2025-08-13T20:42:40.163209416+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:40.163209416+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.163209416+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:40.163209416+00:00 stderr F - }, 2025-08-13T20:42:40.163209416+00:00 stderr F + { 2025-08-13T20:42:40.163209416+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:40.163209416+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.163209416+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.16092296 +0000 UTC m=+2479.366022164", 2025-08-13T20:42:40.163209416+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:40.163209416+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:40.163209416+00:00 stderr F + }, 2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:40.163209416+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:42:40.163209416+00:00 stderr F }, 2025-08-13T20:42:40.163209416+00:00 stderr F Version: "", 2025-08-13T20:42:40.163209416+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.163209416+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.163209416+00:00 stderr F } 2025-08-13T20:42:40.278471909+00:00 stderr F E0813 20:42:40.278269 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.359349001+00:00 stderr F E0813 20:42:40.359265 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.361268266+00:00 stderr F I0813 20:42:40.361120 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.361268266+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.361268266+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.361268266+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:40.361268266+00:00 stderr F - { 2025-08-13T20:42:40.361268266+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:40.361268266+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.361268266+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:40.361268266+00:00 stderr F - }, 2025-08-13T20:42:40.361268266+00:00 stderr F + { 2025-08-13T20:42:40.361268266+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:40.361268266+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.361268266+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.35931355 +0000 UTC m=+2479.564412834", 2025-08-13T20:42:40.361268266+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:40.361268266+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:40.361268266+00:00 stderr F + }, 2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:40.361268266+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:40.361268266+00:00 stderr F }, 2025-08-13T20:42:40.361268266+00:00 stderr F Version: "", 2025-08-13T20:42:40.361268266+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.361268266+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.361268266+00:00 stderr F } 2025-08-13T20:42:40.478274389+00:00 stderr F E0813 20:42:40.478160 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.500108769+00:00 stderr F E0813 20:42:40.499999 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.502410005+00:00 stderr F I0813 20:42:40.502353 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.502410005+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.502410005+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.502410005+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:40.502410005+00:00 stderr F - { 2025-08-13T20:42:40.502410005+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:40.502410005+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.502410005+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:40.502410005+00:00 stderr F - }, 2025-08-13T20:42:40.502410005+00:00 stderr F + { 2025-08-13T20:42:40.502410005+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:40.502410005+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.502410005+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.500058417 +0000 UTC m=+2479.705157681", 2025-08-13T20:42:40.502410005+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:40.502410005+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:40.502410005+00:00 stderr F + }, 2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:40.502410005+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:40.502410005+00:00 stderr F }, 2025-08-13T20:42:40.502410005+00:00 stderr F Version: "", 2025-08-13T20:42:40.502410005+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.502410005+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.502410005+00:00 stderr F } 2025-08-13T20:42:40.677891154+00:00 stderr F I0813 20:42:40.677742 1 request.go:697] Waited for 1.174569142s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:42:40.678918274+00:00 stderr F E0813 20:42:40.678855 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.721323527+00:00 stderr F E0813 20:42:40.721165 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.724381145+00:00 stderr F I0813 20:42:40.724319 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.724381145+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.724381145+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.724381145+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:40.724381145+00:00 stderr F - { 2025-08-13T20:42:40.724381145+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:40.724381145+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.724381145+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:40.724381145+00:00 stderr F - }, 2025-08-13T20:42:40.724381145+00:00 stderr F + { 2025-08-13T20:42:40.724381145+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:40.724381145+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.724381145+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.721275235 +0000 UTC m=+2479.926374889", 2025-08-13T20:42:40.724381145+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:40.724381145+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:40.724381145+00:00 stderr F + }, 2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:40.724381145+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:40.724381145+00:00 stderr F }, 2025-08-13T20:42:40.724381145+00:00 stderr F Version: "", 2025-08-13T20:42:40.724381145+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.724381145+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.724381145+00:00 stderr F } 2025-08-13T20:42:40.799625384+00:00 stderr F I0813 20:42:40.798651 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.801046815+00:00 stderr F I0813 20:42:40.799884 1 leaderelection.go:285] failed to renew lease openshift-console-operator/console-operator-lock: timed out waiting for the condition 2025-08-13T20:42:40.801210510+00:00 stderr F I0813 20:42:40.801130 1 base_controller.go:172] Shutting down ConsoleCLIDownloadsController ... 2025-08-13T20:42:40.801210510+00:00 stderr F I0813 20:42:40.801202 1 base_controller.go:172] Shutting down HealthCheckController ... 2025-08-13T20:42:40.801294602+00:00 stderr F I0813 20:42:40.801250 1 base_controller.go:172] Shutting down ConsoleOperator ... 2025-08-13T20:42:40.801307573+00:00 stderr F I0813 20:42:40.801290 1 base_controller.go:172] Shutting down DownloadsRouteController ... 2025-08-13T20:42:40.801317543+00:00 stderr F I0813 20:42:40.801310 1 base_controller.go:172] Shutting down ConsoleRouteController ... 2025-08-13T20:42:40.801368394+00:00 stderr F I0813 20:42:40.801326 1 base_controller.go:172] Shutting down OAuthClientsController ... 2025-08-13T20:42:40.801368394+00:00 stderr F I0813 20:42:40.801361 1 base_controller.go:172] Shutting down OIDCSetupController ... 2025-08-13T20:42:40.801384975+00:00 stderr F I0813 20:42:40.801377 1 base_controller.go:172] Shutting down ResourceSyncController ... 
2025-08-13T20:42:40.801433016+00:00 stderr F I0813 20:42:40.801394 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:40.801433016+00:00 stderr F I0813 20:42:40.801424 1 base_controller.go:172] Shutting down OAuthClientSecretController ... 2025-08-13T20:42:40.801445507+00:00 stderr F I0813 20:42:40.801439 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 2025-08-13T20:42:40.801457967+00:00 stderr F I0813 20:42:40.801450 1 base_controller.go:172] Shutting down ConsoleDownloadsDeploymentSyncController ... 2025-08-13T20:42:40.801508098+00:00 stderr F I0813 20:42:40.801468 1 base_controller.go:172] Shutting down StatusSyncer_console ... 2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801569 1 base_controller.go:172] Shutting down CLIOIDCClientStatusController ... 2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801605 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801618 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:42:40.801650342+00:00 stderr F I0813 20:42:40.801630 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:40.801650342+00:00 stderr F I0813 20:42:40.801642 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:42:40.801660693+00:00 stderr F I0813 20:42:40.801654 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:42:40.801673243+00:00 stderr F I0813 20:42:40.801665 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.801683563+00:00 stderr F I0813 20:42:40.801676 1 base_controller.go:172] Shutting down ClusterUpgradeNotificationController ... 2025-08-13T20:42:40.801695694+00:00 stderr F I0813 20:42:40.801689 1 base_controller.go:172] Shutting down InformerWithSwitchController ... 
2025-08-13T20:42:40.801754715+00:00 stderr F I0813 20:42:40.801711 1 base_controller.go:114] Shutting down worker of CLIOIDCClientStatusController controller ... 2025-08-13T20:42:40.801754715+00:00 stderr F I0813 20:42:40.801736 1 base_controller.go:104] All CLIOIDCClientStatusController workers have been terminated 2025-08-13T20:42:40.801767756+00:00 stderr F I0813 20:42:40.801753 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:42:40.801767756+00:00 stderr F I0813 20:42:40.801760 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:42:40.801870469+00:00 stderr F I0813 20:42:40.801851 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:42:40.801870469+00:00 stderr F I0813 20:42:40.801860 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801868 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801874 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801881 1 base_controller.go:114] Shutting down worker of InformerWithSwitchController controller ... 2025-08-13T20:42:40.801899380+00:00 stderr F I0813 20:42:40.801886 1 base_controller.go:104] All InformerWithSwitchController workers have been terminated 2025-08-13T20:42:40.802006443+00:00 stderr F E0813 20:42:40.801959 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:40.802019993+00:00 stderr F I0813 20:42:40.802004 1 base_controller.go:114] Shutting down worker of PodDisruptionBudgetController controller ... 
2025-08-13T20:42:40.802019993+00:00 stderr F I0813 20:42:40.802013 1 base_controller.go:104] All PodDisruptionBudgetController workers have been terminated 2025-08-13T20:42:40.802182328+00:00 stderr F E0813 20:42:40.802113 1 base_controller.go:268] ConsoleServiceController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.801490 1 base_controller.go:150] All StatusSyncer_console post start hooks have been terminated 2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.802308 1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ... 2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.802328 1 base_controller.go:104] All OAuthClientsController workers have been terminated 2025-08-13T20:42:40.802351263+00:00 stderr F I0813 20:42:40.802338 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:40.802351263+00:00 stderr F I0813 20:42:40.802330 1 base_controller.go:114] Shutting down worker of OIDCSetupController controller ... 2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802367 1 base_controller.go:104] All OIDCSetupController workers have been terminated 2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802347 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802406 1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ... 2025-08-13T20:42:40.802441615+00:00 stderr F I0813 20:42:40.802433 1 base_controller.go:114] Shutting down worker of ConsoleCLIDownloadsController controller ... 
2025-08-13T20:42:40.802454836+00:00 stderr F I0813 20:42:40.802445 1 base_controller.go:104] All ConsoleCLIDownloadsController workers have been terminated 2025-08-13T20:42:40.802464726+00:00 stderr F I0813 20:42:40.802447 1 base_controller.go:104] All StatusSyncer_console workers have been terminated 2025-08-13T20:42:40.802464726+00:00 stderr F I0813 20:42:40.802396 1 base_controller.go:114] Shutting down worker of OAuthClientSecretController controller ... 2025-08-13T20:42:40.802474956+00:00 stderr F I0813 20:42:40.802462 1 base_controller.go:114] Shutting down worker of HealthCheckController controller ... 2025-08-13T20:42:40.802474956+00:00 stderr F I0813 20:42:40.802470 1 base_controller.go:104] All OAuthClientSecretController workers have been terminated 2025-08-13T20:42:40.802485237+00:00 stderr F I0813 20:42:40.802387 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:42:40.802485237+00:00 stderr F I0813 20:42:40.802477 1 base_controller.go:104] All HealthCheckController workers have been terminated 2025-08-13T20:42:40.802495347+00:00 stderr F I0813 20:42:40.802480 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:40.802505137+00:00 stderr F I0813 20:42:40.802494 1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ... 2025-08-13T20:42:40.802514787+00:00 stderr F I0813 20:42:40.802506 1 base_controller.go:104] All ConsoleOperator workers have been terminated 2025-08-13T20:42:40.802607070+00:00 stderr F I0813 20:42:40.802552 1 base_controller.go:114] Shutting down worker of DownloadsRouteController controller ... 
2025-08-13T20:42:40.802607070+00:00 stderr F E0813 20:42:40.802586       1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:40.802607070+00:00 stderr F I0813 20:42:40.802593       1 base_controller.go:104] All DownloadsRouteController workers have been terminated 2025-08-13T20:42:40.802622521+00:00 stderr F I0813 20:42:40.802610       1 base_controller.go:114] Shutting down worker of ConsoleRouteController controller ... 2025-08-13T20:42:40.802632471+00:00 stderr F I0813 20:42:40.802621       1 base_controller.go:104] All ConsoleRouteController workers have been terminated 2025-08-13T20:42:40.803512196+00:00 stderr F W0813 20:42:40.803457       1 builder.go:131] graceful termination failed, controllers failed with error: stopped

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log

2025-08-13T19:59:35.430117225+00:00 stderr F I0813 19:59:35.366203       1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T19:59:35.430117225+00:00 stderr F I0813 19:59:35.428254       1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:35.614651095+00:00 stderr F I0813 19:59:35.614467       1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:37.273990016+00:00 stderr F I0813 19:59:37.272267       1 builder.go:299] console-operator version -
2025-08-13T19:59:42.268464924+00:00 stderr F I0813 19:59:42.264480       1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:42.268464924+00:00 stderr F W0813 19:59:42.265251       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:42.268464924+00:00 stderr F W0813 19:59:42.265265       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:42.417642327+00:00 stderr F I0813 19:59:42.417146       1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:42.447585060+00:00 stderr F I0813 19:59:42.447491       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:42.460890169+00:00 stderr F I0813 19:59:42.447747       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.464867       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.470576       1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock...
2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.471496       1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:42.484648657+00:00 stderr F I0813 19:59:42.479569       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:42.485093609+00:00 stderr F I0813 19:59:42.485053       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:42.485156761+00:00 stderr F I0813 19:59:42.485136       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:42.494534208+00:00 stderr F I0813 19:59:42.493042       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.494534208+00:00 stderr F I0813 19:59:42.493284       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:42.624227255+00:00 stderr F I0813 19:59:42.570766       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:42.689951678+00:00 stderr F I0813 19:59:42.685458       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:42.742681721+00:00 stderr F I0813 19:59:42.742026       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:42.742923228+00:00 stderr F E0813 19:59:42.742750       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.742923228+00:00 stderr F E0813 19:59:42.742910       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.749667020+00:00 stderr F E0813 19:59:42.749548       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.749667020+00:00 stderr F E0813 19:59:42.749610       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.761053454+00:00 stderr F E0813 19:59:42.760935       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.762733642+00:00 stderr F E0813 19:59:42.762656       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.788828846+00:00 stderr F E0813 19:59:42.788226       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.788828846+00:00 stderr F E0813 19:59:42.788471       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.857348119+00:00 stderr F E0813 19:59:42.828627       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.857348119+00:00 stderr F E0813 19:59:42.837936       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.918957066+00:00 stderr F E0813 19:59:42.915022       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.926281904+00:00 stderr F E0813 19:59:42.926087       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.077938407+00:00 stderr F E0813 19:59:43.077073       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.096293421+00:00 stderr F E0813 19:59:43.093707       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.140896612+00:00 stderr F I0813 19:59:43.138759       1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock
2025-08-13T19:59:43.140896612+00:00 stderr F I0813 19:59:43.139101       1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28335", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_b845076a-9ad5-4a9c-bbd4-efec7e6dc1b0 became leader
2025-08-13T19:59:43.398872586+00:00 stderr F E0813 19:59:43.398742       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.414467350+00:00 stderr F E0813 19:59:43.414400       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.612315180+00:00 stderr F I0813 19:59:43.603718       1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:43.613618117+00:00 stderr F I0813 19:59:43.613557       1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:43.623642823+00:00 stderr F I0813 19:59:43.614377       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:45.601309287+00:00 stderr F E0813 19:59:45.600542       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.601434231+00:00 stderr F E0813 19:59:45.601415       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:46.883192447+00:00 stderr F E0813 19:59:46.883025       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:46.898282427+00:00 stderr F E0813 19:59:46.898118       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.387849482+00:00 stderr F I0813 19:59:47.387336       1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388575       1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController
2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388614       1 base_controller.go:73] Caches are synced for InformerWithSwitchController
2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388689       1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ...
2025-08-13T19:59:47.389757287+00:00 stderr F I0813 19:59:47.389689       1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T19:59:47.391083334+00:00 stderr F I0813 19:59:47.390899       1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console
2025-08-13T19:59:47.391309231+00:00 stderr F I0813 19:59:47.391175       1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:47.391326521+00:00 stderr F I0813 19:59:47.391312       1 base_controller.go:67] Waiting for caches to sync for ManagementStateController
2025-08-13T19:59:47.391346322+00:00 stderr F I0813 19:59:47.391331       1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T19:59:47.391359512+00:00 stderr F I0813 19:59:47.391352       1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController
2025-08-13T19:59:47.391435754+00:00 stderr F I0813 19:59:47.391367       1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391538       1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391745       1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391763       1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.392019       1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.392231       1 base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398725       1 base_controller.go:67] Waiting for caches to sync for HealthCheckController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398868       1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398891       1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398909       1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398926       1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398940       1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398953       1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398968       1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398973       1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController
2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398978       1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ...
2025-08-13T19:59:47.401900803+00:00 stderr F E0813 19:59:47.401812       1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found
2025-08-13T19:59:47.410046175+00:00 stderr F E0813 19:59:47.409978       1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found
2025-08-13T19:59:47.420544574+00:00 stderr F E0813 19:59:47.420492       1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found
2025-08-13T19:59:47.447700848+00:00 stderr F E0813 19:59:47.442074       1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found
2025-08-13T19:59:47.489532211+00:00 stderr F E0813 19:59:47.489400       1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found
2025-08-13T19:59:47.585230899+00:00 stderr F E0813 19:59:47.574356       1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found
2025-08-13T19:59:47.742574994+00:00 stderr F W0813 19:59:47.742493       1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:47.798358105+00:00 stderr F E0813 19:59:47.794599       1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found
2025-08-13T19:59:47.823880362+00:00 stderr F W0813 19:59:47.822633       1 reflector.go:539] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)
2025-08-13T19:59:47.830907522+00:00 stderr F E0813 19:59:47.830857       1 reflector.go:147] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)
2025-08-13T19:59:47.860633060+00:00 stderr F E0813 19:59:47.860553       1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.508335       1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController
2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509186       1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ...
2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509247       1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509255       1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T19:59:49.515921776+00:00 stderr F I0813 19:59:49.515608       1 base_controller.go:73] Caches are synced for ManagementStateController
2025-08-13T19:59:49.515921776+00:00 stderr F I0813 19:59:49.515678       1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-08-13T19:59:49.520307491+00:00 stderr F I0813 19:59:49.520186       1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.531943       1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.532167       1 base_controller.go:73] Caches are synced for ConsoleServiceController
2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.532184       1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ...
2025-08-13T19:59:49.535538356+00:00 stderr F I0813 19:59:49.533502       1 base_controller.go:73] Caches are synced for ConsoleServiceController
2025-08-13T19:59:49.535538356+00:00 stderr F I0813 19:59:49.533548       1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ...
2025-08-13T19:59:49.541890497+00:00 stderr F I0813 19:59:49.536144       1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T19:59:49.541890497+00:00 stderr F I0813 19:59:49.536295       1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T19:59:49.618398968+00:00 stderr F I0813 19:59:49.618299       1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController
2025-08-13T19:59:49.618398968+00:00 stderr F I0813 19:59:49.618337       1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ...
2025-08-13T19:59:49.646890379+00:00 stderr F I0813 19:59:49.644990       1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController
2025-08-13T19:59:49.646890379+00:00 stderr F I0813 19:59:49.645071       1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ...
2025-08-13T19:59:49.721671560+00:00 stderr F I0813 19:59:49.718432       1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:49.721671560+00:00 stderr F I0813 19:59:49.718955       1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:49.721671560+00:00 stderr F E0813 19:59:49.719549       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:49.721671560+00:00 stderr F E0813 19:59:49.719590       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:49.898184732+00:00 stderr F I0813 19:59:49.898121       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:49.940917540+00:00 stderr F I0813 19:59:49.928583       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:49.940917540+00:00 stderr F I0813 19:59:49.936265       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:49.942210487+00:00 stderr F I0813 19:59:49.942177       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:49.948889987+00:00 stderr F I0813 19:59:49.945321       1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:50.002587738+00:00 stderr F I0813 19:59:50.001186       1 base_controller.go:73] Caches are synced for StatusSyncer_console
2025-08-13T19:59:50.002587738+00:00 stderr F I0813 19:59:50.001291       1 base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ...
2025-08-13T19:59:50.398113313+00:00 stderr F I0813 19:59:50.330334       1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:50.402557329+00:00 stderr F I0813 19:59:50.402519       1 base_controller.go:73] Caches are synced for OIDCSetupController
2025-08-13T19:59:50.402662812+00:00 stderr F I0813 19:59:50.402640       1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ...
2025-08-13T19:59:50.402731154+00:00 stderr F I0813 19:59:50.402517       1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController
2025-08-13T19:59:50.402823577+00:00 stderr F I0813 19:59:50.402760       1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ...
2025-08-13T19:59:50.678549937+00:00 stderr F W0813 19:59:50.636559       1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:50.678549937+00:00 stderr F E0813 19:59:50.636746       1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:50.678549937+00:00 stderr F W0813 19:59:50.637078       1 reflector.go:539] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)
2025-08-13T19:59:50.678549937+00:00 stderr F E0813 19:59:50.637099       1 reflector.go:147] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is
currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:51.714595091+00:00 stderr F I0813 19:59:51.713173 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection 
refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.756014067+00:00 stderr F I0813 19:59:52.744048 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.766049034+00:00 stderr F I0813 19:59:52.744062 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.986403855+00:00 stderr F I0813 19:59:52.978266 1 base_controller.go:73] Caches are synced for OAuthClientSecretController 2025-08-13T19:59:52.986403855+00:00 stderr F I0813 19:59:52.978319 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ... 
2025-08-13T19:59:53.101033032+00:00 stderr F I0813 19:59:53.096560 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.118272194+00:00 stderr F I0813 19:59:53.106464 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded changed from False to True ("ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:47256->10.217.4.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:53.128385752+00:00 stderr F I0813 19:59:53.128276 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:53.174648771+00:00 stderr F E0813 19:59:53.145522 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:53.174648771+00:00 stderr F I0813 19:59:53.172768 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:53.216904325+00:00 stderr F E0813 19:59:53.208544 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:53.220559200+00:00 stderr F I0813 19:59:53.217599 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:53.221030863+00:00 stderr F I0813 19:59:53.220678 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T19:59:53.221165027+00:00 stderr F I0813 19:59:53.221102 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T19:59:53.229032401+00:00 stderr F I0813 19:59:53.222660 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:53.222200806 +0000 UTC))"
2025-08-13T19:59:53.240469716+00:00 stderr F I0813 19:59:53.240433 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:53.240379524 +0000 UTC))"
2025-08-13T19:59:53.240560859+00:00 stderr F I0813 19:59:53.240540 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.240514928 +0000 UTC))"
2025-08-13T19:59:53.241113115+00:00 stderr F I0813 19:59:53.241084 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.240698013 +0000 UTC))"
2025-08-13T19:59:53.241191047+00:00 stderr F I0813 19:59:53.241170 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241152076 +0000 UTC))"
2025-08-13T19:59:53.241241448+00:00 stderr F I0813 19:59:53.241228 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241212937 +0000 UTC))"
2025-08-13T19:59:53.241293460+00:00 stderr F I0813 19:59:53.241280 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241260189 +0000 UTC))"
2025-08-13T19:59:53.241341011+00:00 stderr F I0813 19:59:53.241325 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.24131169 +0000 UTC))"
2025-08-13T19:59:53.241386082+00:00 stderr F I0813 19:59:53.241373 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241359553 +0000 UTC))"
2025-08-13T19:59:53.243396370+00:00 stderr F I0813 19:59:53.243370 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 19:59:53.243349938 +0000 UTC))"
2025-08-13T19:59:53.244109660+00:00 stderr F I0813 19:59:53.244085 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:53.303105192+00:00 stderr F W0813 19:59:53.303040 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:53.303400070+00:00 stderr F E0813 19:59:53.303374 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:53.304328677+00:00 stderr F I0813 19:59:53.244764 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 19:59:53.24376803 +0000 UTC))"
2025-08-13T19:59:54.584047775+00:00 stderr F I0813 19:59:54.583331 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106
2025-08-13T19:59:59.114234360+00:00 stderr F W0813 19:59:59.112599 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:59.114711074+00:00 stderr F E0813 19:59:59.114643 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.736477 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.736397648 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.818989 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.818933081 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819019 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.819004153 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819043 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.819026214 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819062 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819050635 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819085 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819068345 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819112 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819093806 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819157 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819119837 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.819171098 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819220 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819203849 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819560 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 20:00:05.819537779 +0000 UTC))"
2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819995 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:05.81991988 +0000 UTC))"
2025-08-13T20:00:09.419606640+00:00 stderr F I0813 20:00:09.418744 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125
2025-08-13T20:00:10.061828092+00:00 stderr F I0813 20:00:10.056742 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092230 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092282 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ...
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092324 1 base_controller.go:73] Caches are synced for ConsoleRouteController
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092329 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ...
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092343 1 base_controller.go:73] Caches are synced for DownloadsRouteController
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092348 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ...
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092366 1 base_controller.go:73] Caches are synced for ConsoleOperator
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092733 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ...
2025-08-13T20:00:10.121956577+00:00 stderr F I0813 20:00:10.121143 1 base_controller.go:73] Caches are synced for OAuthClientsController
2025-08-13T20:00:10.121956577+00:00 stderr F I0813 20:00:10.121235 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ...
2025-08-13T20:00:10.151497129+00:00 stderr F I0813 20:00:10.151095 1 base_controller.go:73] Caches are synced for HealthCheckController
2025-08-13T20:00:10.151558671+00:00 stderr F I0813 20:00:10.151535 1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ...
2025-08-13T20:00:10.205152159+00:00 stderr F I0813 20:00:10.169180 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling
2025-08-13T20:00:11.778877921+00:00 stderr F I0813 20:00:11.778245 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-08-13T20:00:11.778877921+00:00 stderr F    ObservedGeneration: 1,
2025-08-13T20:00:11.778877921+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-08-13T20:00:11.778877921+00:00 stderr F    ... // 38 identical elements
2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:00:11.778877921+00:00 stderr F -  {
2025-08-13T20:00:11.778877921+00:00 stderr F -  Type: "OAuthClientSyncDegraded",
2025-08-13T20:00:11.778877921+00:00 stderr F -  Status: "True",
2025-08-13T20:00:11.778877921+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:00:11.778877921+00:00 stderr F -  Reason: "FailedRegister",
2025-08-13T20:00:11.778877921+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)",
2025-08-13T20:00:11.778877921+00:00 stderr F -  },
2025-08-13T20:00:11.778877921+00:00 stderr F +  {
2025-08-13T20:00:11.778877921+00:00 stderr F +  Type: "OAuthClientSyncDegraded",
2025-08-13T20:00:11.778877921+00:00 stderr F +  Status: "False",
2025-08-13T20:00:11.778877921+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.35249456 +0000 UTC m=+46.642382196",
2025-08-13T20:00:11.778877921+00:00 stderr F +  },
2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}},
2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:00:11.778877921+00:00 stderr F    ... // 19 identical elements
2025-08-13T20:00:11.778877921+00:00 stderr F    },
2025-08-13T20:00:11.778877921+00:00 stderr F    Version: "",
2025-08-13T20:00:11.778877921+00:00 stderr F    ReadyReplicas: 0,
2025-08-13T20:00:11.778877921+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:00:11.778877921+00:00 stderr F   }
2025-08-13T20:00:11.869674490+00:00 stderr F I0813 20:00:11.864551 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-08-13T20:00:11.869674490+00:00 stderr F    ObservedGeneration: 1,
2025-08-13T20:00:11.869674490+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-08-13T20:00:11.869674490+00:00 stderr F    ... // 45 identical elements
2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:00:11.869674490+00:00 stderr F -  {
2025-08-13T20:00:11.869674490+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded",
2025-08-13T20:00:11.869674490+00:00 stderr F -  Status: "True",
2025-08-13T20:00:11.869674490+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:00:11.869674490+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes",
2025-08-13T20:00:11.869674490+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",
2025-08-13T20:00:11.869674490+00:00 stderr F -  },
2025-08-13T20:00:11.869674490+00:00 stderr F +  {
2025-08-13T20:00:11.869674490+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded",
2025-08-13T20:00:11.869674490+00:00 stderr F +  Status: "False",
2025-08-13T20:00:11.869674490+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.776893212 +0000 UTC m=+47.066780948",
2025-08-13T20:00:11.869674490+00:00 stderr F +  },
2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}},
2025-08-13T20:00:11.869674490+00:00 stderr F -  {
2025-08-13T20:00:11.869674490+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable",
2025-08-13T20:00:11.869674490+00:00 stderr F -  Status: "False",
2025-08-13T20:00:11.869674490+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:00:11.869674490+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes",
2025-08-13T20:00:11.869674490+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",
2025-08-13T20:00:11.869674490+00:00 stderr F -  },
2025-08-13T20:00:11.869674490+00:00 stderr F +  {
2025-08-13T20:00:11.869674490+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable",
2025-08-13T20:00:11.869674490+00:00 stderr F +  Status: "True",
2025-08-13T20:00:11.869674490+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.776895672 +0000 UTC m=+47.066782988",
2025-08-13T20:00:11.869674490+00:00 stderr F +  },
2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}},
2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}},
2025-08-13T20:00:11.869674490+00:00 stderr F    ... // 10 identical elements
2025-08-13T20:00:11.869674490+00:00 stderr F    },
2025-08-13T20:00:11.869674490+00:00 stderr F    Version: "",
2025-08-13T20:00:11.869674490+00:00 stderr F    ReadyReplicas: 0,
2025-08-13T20:00:11.869674490+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:00:11.869674490+00:00 stderr F   }
2025-08-13T20:00:11.906421638+00:00 stderr F I0813 20:00:11.906049 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-08-13T20:00:11.906421638+00:00 stderr F    ObservedGeneration: 1,
2025-08-13T20:00:11.906421638+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...},
2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "True", LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, Reason: "FailedGet", ...},
2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:00:11.906421638+00:00 stderr F -  {
2025-08-13T20:00:11.906421638+00:00 stderr F -  Type: "OAuthClientsControllerDegraded",
2025-08-13T20:00:11.906421638+00:00 stderr F -  Status: "True",
2025-08-13T20:00:11.906421638+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:00:11.906421638+00:00 stderr F -  Reason: "SyncError",
2025-08-13T20:00:11.906421638+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)",
2025-08-13T20:00:11.906421638+00:00 stderr F -  },
2025-08-13T20:00:11.906421638+00:00 stderr F +  {
2025-08-13T20:00:11.906421638+00:00 stderr F +  Type: "OAuthClientsControllerDegraded",
2025-08-13T20:00:11.906421638+00:00 stderr F +  Status: "False",
2025-08-13T20:00:11.906421638+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:11.893081228 +0000 UTC m=+48.182968804",
2025-08-13T20:00:11.906421638+00:00 stderr F +  Reason: "AsExpected",
2025-08-13T20:00:11.906421638+00:00 stderr F +  },
2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:00:11.906421638+00:00 stderr F    ... // 56 identical elements
2025-08-13T20:00:11.906421638+00:00 stderr F    },
2025-08-13T20:00:11.906421638+00:00 stderr F    Version: "",
2025-08-13T20:00:11.906421638+00:00 stderr F    ReadyReplicas: 0,
2025-08-13T20:00:11.906421638+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:00:11.906421638+00:00 stderr F   }
2025-08-13T20:00:12.014816399+00:00 stderr F I0813 20:00:12.014451 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-08-13T20:00:12.014816399+00:00 stderr F    ObservedGeneration: 1,
2025-08-13T20:00:12.014816399+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-08-13T20:00:12.014816399+00:00 stderr F    ... // 7 identical elements
2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:00:12.014816399+00:00 stderr F -  {
2025-08-13T20:00:12.014816399+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded",
2025-08-13T20:00:12.014816399+00:00 stderr F -  Status: "True",
2025-08-13T20:00:12.014816399+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:00:12.014816399+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes",
2025-08-13T20:00:12.014816399+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)",
2025-08-13T20:00:12.014816399+00:00 stderr F -  },
2025-08-13T20:00:12.014816399+00:00 stderr F +  {
2025-08-13T20:00:12.014816399+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded",
2025-08-13T20:00:12.014816399+00:00 stderr F +  Status: "False",
2025-08-13T20:00:12.014816399+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.999440878 +0000 UTC m=+47.289328174",
2025-08-13T20:00:12.014816399+00:00 stderr F +  },
2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:00:12.014816399+00:00 stderr F -  {
2025-08-13T20:00:12.014816399+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable",
2025-08-13T20:00:12.014816399+00:00 stderr F -  Status: "False",
2025-08-13T20:00:12.014816399+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:00:12.014816399+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes",
2025-08-13T20:00:12.014816399+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)",
2025-08-13T20:00:12.014816399+00:00 stderr F -  },
2025-08-13T20:00:12.014816399+00:00 stderr F +  {
2025-08-13T20:00:12.014816399+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable",
2025-08-13T20:00:12.014816399+00:00 stderr F +  Status: "True",
2025-08-13T20:00:12.014816399+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.999434448 +0000 UTC m=+47.289325784",
2025-08-13T20:00:12.014816399+00:00 stderr F +  },
2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:00:12.014816399+00:00 stderr F    ... // 48 identical elements
2025-08-13T20:00:12.014816399+00:00 stderr F    },
2025-08-13T20:00:12.014816399+00:00 stderr F    Version: "",
2025-08-13T20:00:12.014816399+00:00 stderr F    ReadyReplicas: 0,
2025-08-13T20:00:12.014816399+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:00:12.014816399+00:00 stderr F   }
2025-08-13T20:00:12.096945711+00:00 stderr F I0813 20:00:12.094858 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-08-13T20:00:12.096945711+00:00 stderr F    ObservedGeneration: 1,
2025-08-13T20:00:12.096945711+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...},
2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "True", LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, Reason: "FailedGet", ...},
2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:00:12.096945711+00:00 stderr F -  {
2025-08-13T20:00:12.096945711+00:00 stderr F -  Type: "OAuthClientsControllerDegraded",
2025-08-13T20:00:12.096945711+00:00 stderr F -  Status: "True",
2025-08-13T20:00:12.096945711+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:00:12.096945711+00:00 stderr F -  Reason: "SyncError",
2025-08-13T20:00:12.096945711+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)",
2025-08-13T20:00:12.096945711+00:00 stderr F -  },
2025-08-13T20:00:12.096945711+00:00 stderr F +  {
2025-08-13T20:00:12.096945711+00:00 stderr F +  Type: "OAuthClientsControllerDegraded",
2025-08-13T20:00:12.096945711+00:00 stderr F +  Status: "False",
2025-08-13T20:00:12.096945711+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.082043956 +0000 UTC m=+48.371931642",
2025-08-13T20:00:12.096945711+00:00 stderr F +  Reason: "AsExpected",
2025-08-13T20:00:12.096945711+00:00 stderr F +  },
2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:00:12.096945711+00:00 stderr F    ... // 56 identical elements
2025-08-13T20:00:12.096945711+00:00 stderr F    },
2025-08-13T20:00:12.096945711+00:00 stderr F    Version: "",
2025-08-13T20:00:12.096945711+00:00 stderr F    ReadyReplicas: 0,
2025-08-13T20:00:12.096945711+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:00:12.096945711+00:00 stderr F   }
2025-08-13T20:00:12.279647800+00:00 stderr F I0813 20:00:12.275503 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route
(https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:12.279647800+00:00 stderr F I0813 20:00:12.278446 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.279647800+00:00 
stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.279647800+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.279647800+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F -  { 2025-08-13T20:00:12.279647800+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.279647800+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.279647800+00:00 stderr F -  }, 2025-08-13T20:00:12.279647800+00:00 stderr F +  { 2025-08-13T20:00:12.279647800+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.279647800+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.279647800+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.16985366 +0000 UTC m=+48.459741076", 2025-08-13T20:00:12.279647800+00:00 stderr F +  }, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F -  { 2025-08-13T20:00:12.279647800+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.279647800+00:00 stderr F -  LastTransitionTime: s"2024-06-27 
13:34:18 +0000 UTC", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.279647800+00:00 stderr F -  }, 2025-08-13T20:00:12.279647800+00:00 stderr F +  { 2025-08-13T20:00:12.279647800+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.279647800+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.279647800+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.16985055 +0000 UTC m=+48.459738236", 2025-08-13T20:00:12.279647800+00:00 stderr F +  }, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:00:12.279647800+00:00 stderr F    }, 2025-08-13T20:00:12.279647800+00:00 stderr F    Version: "", 2025-08-13T20:00:12.279647800+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.279647800+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.279647800+00:00 stderr F   } 2025-08-13T20:00:12.297038556+00:00 stderr F I0813 20:00:12.295727 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.297038556+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.297038556+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.297038556+00:00 stderr F    ... 
// 45 identical elements 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F -  { 2025-08-13T20:00:12.297038556+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.297038556+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:12.297038556+00:00 stderr F -  }, 2025-08-13T20:00:12.297038556+00:00 stderr F +  { 2025-08-13T20:00:12.297038556+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:12.297038556+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.297038556+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.286220938 +0000 UTC m=+48.576108394", 2025-08-13T20:00:12.297038556+00:00 stderr F +  }, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F -  { 2025-08-13T20:00:12.297038556+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.297038556+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Message: "the server is currently 
unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:12.297038556+00:00 stderr F -  }, 2025-08-13T20:00:12.297038556+00:00 stderr F +  { 2025-08-13T20:00:12.297038556+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.297038556+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.297038556+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.286222948 +0000 UTC m=+48.576110244", 2025-08-13T20:00:12.297038556+00:00 stderr F +  }, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F    ... // 10 identical elements 2025-08-13T20:00:12.297038556+00:00 stderr F    }, 2025-08-13T20:00:12.297038556+00:00 stderr F    Version: "", 2025-08-13T20:00:12.297038556+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.297038556+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.297038556+00:00 stderr F   } 2025-08-13T20:00:12.378818518+00:00 stderr F I0813 20:00:12.378663 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:12.536265268+00:00 stderr F E0813 20:00:12.534526 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:12.536265268+00:00 stderr F E0813 20:00:12.534603 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 
192.168.130.11:443: connect: connection refused 2025-08-13T20:00:12.545689446+00:00 stderr F I0813 20:00:12.539000 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.545689446+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.545689446+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:12.545689446+00:00 stderr F    { 2025-08-13T20:00:12.545689446+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:12.545689446+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.545689446+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.545689446+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.545689446+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.545689446+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.545689446+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":", 2025-08-13T20:00:12.545689446+00:00 stderr F    " ", 2025-08-13T20:00:12.545689446+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.545689446+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.545689446+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.545689446+00:00 stderr F    }, ""), 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 
2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:12.545689446+00:00 stderr F    ... // 55 identical elements 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.545689446+00:00 stderr F    { 2025-08-13T20:00:12.545689446+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:12.545689446+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.545689446+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.545689446+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.545689446+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.545689446+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.545689446+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":", 2025-08-13T20:00:12.545689446+00:00 stderr F    " ", 2025-08-13T20:00:12.545689446+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.545689446+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.545689446+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.545689446+00:00 stderr F    }, ""), 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    Version: "", 
2025-08-13T20:00:12.545689446+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.545689446+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.545689446+00:00 stderr F   } 2025-08-13T20:00:12.888645326+00:00 stderr F I0813 20:00:12.851613 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.888645326+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.888645326+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:12.888645326+00:00 stderr F    { 2025-08-13T20:00:12.888645326+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:12.888645326+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.888645326+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.888645326+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.888645326+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.888645326+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.888645326+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":", 2025-08-13T20:00:12.888645326+00:00 stderr F    " ", 2025-08-13T20:00:12.888645326+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.888645326+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.888645326+00:00 stderr 
F    ": connection refused", 2025-08-13T20:00:12.888645326+00:00 stderr F    }, ""), 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:12.888645326+00:00 stderr F    ... // 55 identical elements 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    { 2025-08-13T20:00:12.888645326+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:12.888645326+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.888645326+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.888645326+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.888645326+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.888645326+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.888645326+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":", 2025-08-13T20:00:12.888645326+00:00 stderr F    " ", 2025-08-13T20:00:12.888645326+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.888645326+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.888645326+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.888645326+00:00 stderr F    }, ""), 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    Version: "", 2025-08-13T20:00:12.888645326+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.888645326+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.888645326+00:00 stderr F   } 2025-08-13T20:00:12.890976112+00:00 stderr F I0813 20:00:12.865542 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-08-13T20:00:12.969377458+00:00 stderr F I0813 20:00:12.966556 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.969377458+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.969377458+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.969377458+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F -  { 2025-08-13T20:00:12.969377458+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.969377458+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.969377458+00:00 stderr F -  }, 2025-08-13T20:00:12.969377458+00:00 stderr F +  { 2025-08-13T20:00:12.969377458+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.969377458+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.969377458+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.962050399 +0000 UTC m=+49.251937695", 2025-08-13T20:00:12.969377458+00:00 stderr F +  }, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F -  { 2025-08-13T20:00:12.969377458+00:00 stderr F -  Type: 
"DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.969377458+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.969377458+00:00 stderr F -  }, 2025-08-13T20:00:12.969377458+00:00 stderr F +  { 2025-08-13T20:00:12.969377458+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.969377458+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.969377458+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.962047889 +0000 UTC m=+49.251935475", 2025-08-13T20:00:12.969377458+00:00 stderr F +  }, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    ... 
// 48 identical elements 2025-08-13T20:00:12.969377458+00:00 stderr F    }, 2025-08-13T20:00:12.969377458+00:00 stderr F    Version: "", 2025-08-13T20:00:12.969377458+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.969377458+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.969377458+00:00 stderr F   } 2025-08-13T20:00:13.033632120+00:00 stderr F E0813 20:00:13.029315 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:13.033632120+00:00 stderr F E0813 20:00:13.029355 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:13.073359003+00:00 stderr F I0813 20:00:13.072674 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to 
"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:13.091020806+00:00 stderr F E0813 20:00:13.090678 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.260640553+00:00 stderr F I0813 20:00:13.258400 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route 
(https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.344880425+00:00 stderr F I0813 20:00:13.344591 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial 
tcp 192.168.130.11:443: connect: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" to "DownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" 2025-08-13T20:00:13.484965749+00:00 stderr F I0813 20:00:13.483043 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:13.484965749+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:13.484965749+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:13.484965749+00:00 stderr F    ... 
// 7 identical elements 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F -  { 2025-08-13T20:00:13.484965749+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Status: "True", 2025-08-13T20:00:13.484965749+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:13.484965749+00:00 stderr F -  }, 2025-08-13T20:00:13.484965749+00:00 stderr F +  { 2025-08-13T20:00:13.484965749+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:13.484965749+00:00 stderr F +  Status: "False", 2025-08-13T20:00:13.484965749+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:13.440922653 +0000 UTC m=+49.730810019", 2025-08-13T20:00:13.484965749+00:00 stderr F +  }, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F -  { 2025-08-13T20:00:13.484965749+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Status: "False", 2025-08-13T20:00:13.484965749+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Message: "the server 
is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:13.484965749+00:00 stderr F -  }, 2025-08-13T20:00:13.484965749+00:00 stderr F +  { 2025-08-13T20:00:13.484965749+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:13.484965749+00:00 stderr F +  Status: "True", 2025-08-13T20:00:13.484965749+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:13.440919713 +0000 UTC m=+49.730807239", 2025-08-13T20:00:13.484965749+00:00 stderr F +  }, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:00:13.484965749+00:00 stderr F    }, 2025-08-13T20:00:13.484965749+00:00 stderr F    Version: "", 2025-08-13T20:00:13.484965749+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:13.484965749+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:13.484965749+00:00 stderr F   } 2025-08-13T20:00:13.587009919+00:00 stderr F E0813 20:00:13.586324 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.587009919+00:00 stderr F E0813 20:00:13.586656 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: 
connect: connection refused 2025-08-13T20:00:13.587743220+00:00 stderr F E0813 20:00:13.587122 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.752083226+00:00 stderr F I0813 20:00:13.751974 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:14.129461516+00:00 stderr F I0813 20:00:14.122744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused",Upgradeable changed from False to True ("All is well") 2025-08-13T20:00:14.175457698+00:00 stderr F E0813 20:00:14.175390 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.175553251+00:00 stderr F E0813 20:00:14.175539 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.175988813+00:00 stderr F E0813 20:00:14.175880 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.526271251+00:00 stderr F E0813 20:00:14.494881 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:14.526425155+00:00 stderr F E0813 20:00:14.526399 1 status.go:130] 
DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:14.584962424+00:00 stderr F I0813 20:00:14.581817 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:14Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742362 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742404 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get 
"https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742578 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.394056514+00:00 stderr F E0813 20:00:15.393954 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.394115755+00:00 stderr F E0813 20:00:15.394101 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.399979303+00:00 stderr F E0813 20:00:15.399954 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.741117730+00:00 stderr F E0813 20:00:15.740427 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.741117730+00:00 stderr F E0813 20:00:15.741014 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 
2025-08-13T20:00:15.750376894+00:00 stderr F E0813 20:00:15.750217 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005193 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005266 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005459 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200377335+00:00 stderr F E0813 20:00:16.200319 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200448337+00:00 stderr F E0813 20:00:16.200431 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200679204+00:00 stderr F E0813 20:00:16.200650 1 base_controller.go:268] HealthCheckController reconciliation 
failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.452760992+00:00 stderr F E0813 20:00:16.452659 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.452908646+00:00 stderr F E0813 20:00:16.452858 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.453274616+00:00 stderr F E0813 20:00:16.453131 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.705107057+00:00 stderr F E0813 20:00:16.703434 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.638125962+00:00 stderr F I0813 20:00:17.637402 1 apps.go:154] Deployment "openshift-console/console" changes: 
{"metadata":{"annotations":{"console.openshift.io/service-ca-config-version":"29218","operator.openshift.io/spec-hash":"b2372c4f2f3d3abb58592f8e229a7b3901addc8a288a978cd753c769ea967ca8"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"metadata":{"annotations":{"console.openshift.io/service-ca-config-version":"29218"}},"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/serv
ice-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:00:17.981299097+00:00 stderr F E0813 20:00:17.980371 1 status.go:130] SyncLoopRefreshProgressing InProgress changes made during sync updates, additional sync expected 2025-08-13T20:00:17.981299097+00:00 stderr F E0813 20:00:17.980549 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:17.984023174+00:00 stderr F I0813 20:00:17.983958 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:00:18.129528263+00:00 stderr F I0813 20:00:18.129456 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:18.129528263+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:18.129528263+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:18.129528263+00:00 stderr F    ... 
// 13 identical elements 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    { 2025-08-13T20:00:18.129528263+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:18.129528263+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:18.129528263+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:00:18.129528263+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:18.129528263+00:00 stderr F -  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:00:18.129528263+00:00 stderr F +  "changes made during sync updates, additional sync expected", 2025-08-13T20:00:18.129528263+00:00 stderr F    }, ""), 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    ... // 44 identical elements 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    Version: "", 2025-08-13T20:00:18.129528263+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:18.129528263+00:00 stderr F    Generations: []v1.GenerationStatus{ 2025-08-13T20:00:18.129528263+00:00 stderr F    {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, 2025-08-13T20:00:18.129528263+00:00 stderr F    { 2025-08-13T20:00:18.129528263+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:18.129528263+00:00 stderr F    Namespace: "openshift-console", 2025-08-13T20:00:18.129528263+00:00 stderr F    Name: "console", 2025-08-13T20:00:18.129528263+00:00 stderr F -  LastGeneration: 3, 2025-08-13T20:00:18.129528263+00:00 stderr F +  LastGeneration: 4, 2025-08-13T20:00:18.129528263+00:00 stderr F    Hash: "", 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F   } 2025-08-13T20:00:18.203709259+00:00 stderr F E0813 20:00:18.203654 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.203857193+00:00 stderr F E0813 20:00:18.203763 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.204171062+00:00 stderr F E0813 20:00:18.204143 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.594160241+00:00 stderr F I0813 20:00:18.560202 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.906190398+00:00 stderr F I0813 20:00:18.905262 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" 2025-08-13T20:00:19.181957811+00:00 stderr F I0813 20:00:19.180766 1 apps.go:154] Deployment "openshift-console/console" changes: 
{"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/service-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[
{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:00:19.273735058+00:00 stderr F E0813 20:00:19.273406 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.273735058+00:00 stderr F E0813 20:00:19.273483 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.273930014+00:00 stderr F E0813 20:00:19.273739 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.396090437+00:00 stderr F E0813 20:00:19.393079 1 status.go:130] SyncLoopRefreshProgressing InProgress changes made during sync updates, additional sync expected 2025-08-13T20:00:19.396090437+00:00 stderr F E0813 20:00:19.393134 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:19.396090437+00:00 stderr F I0813 20:00:19.394369 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", 
UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:00:19.941306583+00:00 stderr F E0813 20:00:19.928673 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:19.941306583+00:00 stderr F E0813 20:00:19.938591 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:20.063366034+00:00 stderr F I0813 20:00:20.058701 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:20.063366034+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:20.063366034+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:20.063366034+00:00 stderr F    ... // 13 identical elements 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    { 2025-08-13T20:00:20.063366034+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:20.063366034+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:20.063366034+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:00:20.063366034+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:20.063366034+00:00 stderr F -  "changes made during sync updates, additional sync expected", 2025-08-13T20:00:20.063366034+00:00 stderr F +  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:00:20.063366034+00:00 stderr F    }, ""), 2025-08-13T20:00:20.063366034+00:00 stderr F    }, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    ... // 44 identical elements 2025-08-13T20:00:20.063366034+00:00 stderr F    }, 2025-08-13T20:00:20.063366034+00:00 stderr F    Version: "", 2025-08-13T20:00:20.063366034+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:20.063366034+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:20.063366034+00:00 stderr F   } 2025-08-13T20:00:20.461970770+00:00 stderr F I0813 20:00:20.459046 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:20.641330564+00:00 stderr F I0813 20:00:20.641200 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" 2025-08-13T20:00:20.795446338+00:00 stderr F E0813 20:00:20.788140 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:20.795446338+00:00 stderr F E0813 20:00:20.788183 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.155039 1 
status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.155745 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.156032 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705293 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705709 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705972 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:22.124634068+00:00 stderr F E0813 20:00:22.117354 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 
2025-08-13T20:00:22.124634068+00:00 stderr F E0813 20:00:22.117397 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:27.342212836+00:00 stderr F E0813 20:00:27.338400 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:27.342212836+00:00 stderr F E0813 20:00:27.338884 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:29.403930083+00:00 stderr F E0813 20:00:29.403323 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:29.404013726+00:00 stderr F E0813 20:00:29.403997 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:32.183362086+00:00 stderr F E0813 20:00:32.174301 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:32.183362086+00:00 stderr F E0813 20:00:32.174954 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:32.187702610+00:00 stderr F I0813 20:00:32.187561 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:32.187702610+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:32.187702610+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:32.187702610+00:00 stderr F    { 2025-08-13T20:00:32.187702610+00:00 stderr F    Type: "RouteHealthDegraded", 2025-08-13T20:00:32.187702610+00:00 stderr F    Status: 
"True", 2025-08-13T20:00:32.187702610+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:32.187702610+00:00 stderr F -  Reason: "FailedGet", 2025-08-13T20:00:32.187702610+00:00 stderr F +  Reason: "StatusError", 2025-08-13T20:00:32.187702610+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:32.187702610+00:00 stderr F -  "failed to GET route (", 2025-08-13T20:00:32.187702610+00:00 stderr F +  "route not yet available, ", 2025-08-13T20:00:32.187702610+00:00 stderr F    "https://console-openshift-console.apps-crc.testing", 2025-08-13T20:00:32.187702610+00:00 stderr F -  `): Get "https://console-openshift-console.apps-crc.testing": dia`, 2025-08-13T20:00:32.187702610+00:00 stderr F -  "l tcp 192.168.130.11:443: connect: connection refused", 2025-08-13T20:00:32.187702610+00:00 stderr F +  " returns '503 Service Unavailable'", 2025-08-13T20:00:32.187702610+00:00 stderr F    }, ""), 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:32.187702610+00:00 stderr F    ... 
// 55 identical elements 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    { 2025-08-13T20:00:32.187702610+00:00 stderr F    Type: "RouteHealthAvailable", 2025-08-13T20:00:32.187702610+00:00 stderr F    Status: "False", 2025-08-13T20:00:32.187702610+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:32.187702610+00:00 stderr F -  Reason: "FailedGet", 2025-08-13T20:00:32.187702610+00:00 stderr F +  Reason: "StatusError", 2025-08-13T20:00:32.187702610+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:32.187702610+00:00 stderr F -  "failed to GET route (", 2025-08-13T20:00:32.187702610+00:00 stderr F +  "route not yet available, ", 2025-08-13T20:00:32.187702610+00:00 stderr F    "https://console-openshift-console.apps-crc.testing", 2025-08-13T20:00:32.187702610+00:00 stderr F -  `): Get "https://console-openshift-console.apps-crc.testing": dia`, 2025-08-13T20:00:32.187702610+00:00 stderr F -  "l tcp 192.168.130.11:443: connect: connection refused", 2025-08-13T20:00:32.187702610+00:00 stderr F +  " returns '503 Service Unavailable'", 2025-08-13T20:00:32.187702610+00:00 stderr F    }, ""), 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    Version: "", 2025-08-13T20:00:32.187702610+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:32.187702610+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:00:32.187702610+00:00 stderr F   } 2025-08-13T20:00:33.048627008+00:00 stderr F E0813 20:00:33.030179 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:33.426894334+00:00 stderr F I0813 20:00:33.413078 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:33.651450317+00:00 stderr F I0813 20:00:33.646207 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route 
(https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" 2025-08-13T20:00:33.658043525+00:00 stderr F E0813 20:00:33.640606 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:33.658668352+00:00 stderr F E0813 20:00:33.658630 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:33.870570415+00:00 stderr F I0813 20:00:33.870467 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, 
https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:33.996030952+00:00 stderr F E0813 20:00:33.993652 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:34.083989390+00:00 stderr F E0813 20:00:34.083684 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.083989390+00:00 stderr F E0813 20:00:34.083728 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.084042942+00:00 stderr F E0813 20:00:34.084025 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.166760700+00:00 stderr F E0813 20:00:34.165395 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:34.166760700+00:00 stderr F E0813 20:00:34.165741 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:35.974390653+00:00 stderr F E0813 20:00:35.963976 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 
2025-08-13T20:00:35.974390653+00:00 stderr F E0813 20:00:35.974218 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:36.148861788+00:00 stderr F E0813 20:00:36.148466 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:36.148861788+00:00 stderr F E0813 20:00:36.148707 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:36.149084144+00:00 stderr F E0813 20:00:36.148959 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.791228523+00:00 stderr F E0813 20:00:41.768136 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.791362577+00:00 stderr F E0813 20:00:41.791342 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.797892713+00:00 stderr F E0813 20:00:41.791605 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.961555 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 
20:00:59.955325084 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.962859 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.962733346 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.962930 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.962911841 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963081 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.963064525 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963120 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-08-13 20:00:59.963105756 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963142 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963126837 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963159 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963147707 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963195 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963164118 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963214 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 
19:59:54 +0000 UTC (now=2025-08-13 20:00:59.963201899 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963235 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.9632251 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963332 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963315972 +0000 UTC))" 2025-08-13T20:00:59.979997458+00:00 stderr F I0813 20:00:59.970434 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 20:00:59.964032113 +0000 UTC))" 2025-08-13T20:00:59.979997458+00:00 stderr F I0813 20:00:59.978952 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:59.978871596 +0000 
UTC))" 2025-08-13T20:01:06.989884093+00:00 stderr F E0813 20:01:06.988167 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:06.989884093+00:00 stderr F E0813 20:01:06.988728 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:06.993574719+00:00 stderr F E0813 20:01:06.988515 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:06.993646521+00:00 stderr F E0813 20:01:06.993631 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.015583326+00:00 stderr F I0813 20:01:07.015493 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.015583326+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.015583326+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.015583326+00:00 stderr F    ... 
// 45 identical elements 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F -  { 2025-08-13T20:01:07.015583326+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.015583326+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.015583326+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.015583326+00:00 stderr F -  }, 2025-08-13T20:01:07.015583326+00:00 stderr F +  { 2025-08-13T20:01:07.015583326+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.015583326+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.993883917 +0000 UTC m=+103.283771773", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.015583326+00:00 stderr F +  }, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F -  { 2025-08-13T20:01:07.015583326+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.015583326+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.015583326+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.015583326+00:00 stderr F -  }, 2025-08-13T20:01:07.015583326+00:00 stderr F +  { 2025-08-13T20:01:07.015583326+00:00 stderr F +  Type: 
"ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.015583326+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.993886447 +0000 UTC m=+103.283773743", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.015583326+00:00 stderr F +  }, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F    ... // 10 identical elements 2025-08-13T20:01:07.015583326+00:00 stderr F    }, 2025-08-13T20:01:07.015583326+00:00 stderr F    Version: "", 2025-08-13T20:01:07.015583326+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.015583326+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.015583326+00:00 stderr F   } 2025-08-13T20:01:07.078335875+00:00 stderr F I0813 20:01:07.065635 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.078335875+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.078335875+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.078335875+00:00 stderr F    ... 
// 7 identical elements 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F -  { 2025-08-13T20:01:07.078335875+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.078335875+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.078335875+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.078335875+00:00 stderr F -  }, 2025-08-13T20:01:07.078335875+00:00 stderr F +  { 2025-08-13T20:01:07.078335875+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.078335875+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.988753571 +0000 UTC m=+103.278640987", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.078335875+00:00 stderr F +  }, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F -  { 2025-08-13T20:01:07.078335875+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.078335875+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.078335875+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.078335875+00:00 stderr F -  }, 2025-08-13T20:01:07.078335875+00:00 stderr F +  { 2025-08-13T20:01:07.078335875+00:00 stderr F +  
Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.078335875+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.988755081 +0000 UTC m=+103.278642367", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.078335875+00:00 stderr F +  }, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:01:07.078335875+00:00 stderr F    }, 2025-08-13T20:01:07.078335875+00:00 stderr F    Version: "", 2025-08-13T20:01:07.078335875+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.078335875+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.078335875+00:00 stderr F   } 2025-08-13T20:01:07.264500694+00:00 stderr F E0813 20:01:07.264400 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.275896009+00:00 stderr F E0813 20:01:07.272629 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.275896009+00:00 stderr F E0813 20:01:07.272826 1 status.go:130] 
ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.278903904+00:00 stderr F I0813 20:01:07.276103 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.278903904+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.278903904+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.278903904+00:00 stderr F    ... // 45 identical elements 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F -  { 2025-08-13T20:01:07.278903904+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.278903904+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.278903904+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.278903904+00:00 stderr F -  }, 2025-08-13T20:01:07.278903904+00:00 stderr F +  { 2025-08-13T20:01:07.278903904+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.278903904+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.273043147 +0000 UTC m=+103.562930533", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.278903904+00:00 stderr F +  }, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: 
s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F -  { 2025-08-13T20:01:07.278903904+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.278903904+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.278903904+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.278903904+00:00 stderr F -  }, 2025-08-13T20:01:07.278903904+00:00 stderr F +  { 2025-08-13T20:01:07.278903904+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.278903904+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.273040577 +0000 UTC m=+103.562928533", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.278903904+00:00 stderr F +  }, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    ... 
// 10 identical elements 2025-08-13T20:01:07.278903904+00:00 stderr F    }, 2025-08-13T20:01:07.278903904+00:00 stderr F    Version: "", 2025-08-13T20:01:07.278903904+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.278903904+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.278903904+00:00 stderr F   } 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.538002 1 core.go:341] ConfigMap "openshift-console/console-config" changes: {"apiVersion":"v1","data":{"console-config.yaml":"apiVersion: console.openshift.io/v1\nauth:\n authType: openshift\n clientID: console\n clientSecretFile: /var/oauth-config/clientSecret\n oauthEndpointCAFile: /var/oauth-serving-cert/ca-bundle.crt\nclusterInfo:\n consoleBaseAddress: https://console-openshift-console.apps-crc.testing\n controlPlaneTopology: SingleReplica\n masterPublicURL: https://api.crc.testing:6443\n nodeArchitectures:\n - amd64\n nodeOperatingSystems:\n - linux\n releaseVersion: 4.16.0\ncustomization:\n branding: ocp\n documentationBaseURL: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.16/\nkind: ConsoleConfig\nproviders: {}\nservingInfo:\n bindAddress: https://[::]:8443\n certFile: /var/serving-cert/tls.crt\n keyFile: /var/serving-cert/tls.key\nsession: {}\ntelemetry:\n CLUSTER_ID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304\n SEGMENT_API_HOST: console.redhat.com/connections/api/v1\n SEGMENT_JS_HOST: console.redhat.com/connections/cdn\n SEGMENT_PUBLIC_API_KEY: BnuS1RP39EmLQjP21ko67oDjhbl9zpNU\n TELEMETER_CLIENT_DISABLED: \"true\"\n"},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.543698 1 helpers.go:184] lister was stale at resourceVersion=29730, live get showed 
resourceVersion=30184 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.544621 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.551336 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.587016780+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.587016780+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.587016780+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F -  { 2025-08-13T20:01:07.587016780+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.587016780+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.587016780+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.587016780+00:00 stderr F -  }, 2025-08-13T20:01:07.587016780+00:00 stderr F +  { 2025-08-13T20:01:07.587016780+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.587016780+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.544159288 +0000 UTC m=+103.834046734", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.587016780+00:00 stderr F +  }, 
2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F -  { 2025-08-13T20:01:07.587016780+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.587016780+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.587016780+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.587016780+00:00 stderr F -  }, 2025-08-13T20:01:07.587016780+00:00 stderr F +  { 2025-08-13T20:01:07.587016780+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.587016780+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.544161238 +0000 UTC m=+103.834048644", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.587016780+00:00 stderr F +  }, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    ... 
// 48 identical elements 2025-08-13T20:01:07.587016780+00:00 stderr F    }, 2025-08-13T20:01:07.587016780+00:00 stderr F    Version: "", 2025-08-13T20:01:07.587016780+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.587016780+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.587016780+00:00 stderr F   } 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.551749 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/console-config -n openshift-console: 2025-08-13T20:01:07.587016780+00:00 stderr F cause by changes in data.console-config.yaml 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555037 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555048 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555177 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.558442 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 
stderr F E0813 20:01:07.558451 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.558591 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646400 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646455 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646626 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696232 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696304 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696511 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.869646059+00:00 stderr F I0813 20:01:07.823208 1 
status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:07.881203309+00:00 stderr F E0813 20:01:07.880855 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.894914909+00:00 stderr F E0813 20:01:07.894527 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom) 2025-08-13T20:01:07.904815762+00:00 stderr F E0813 20:01:07.898203 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.904815762+00:00 stderr F E0813 20:01:07.884409 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.910492344+00:00 stderr F E0813 20:01:07.909147 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.910667659+00:00 stderr F E0813 20:01:07.910649 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.911087881+00:00 stderr F E0813 20:01:07.911062 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.916939968+00:00 stderr F E0813 20:01:07.916908 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.917004159+00:00 stderr F E0813 20:01:07.916989 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.917282507+00:00 stderr F E0813 20:01:07.917254 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 
2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084310 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084362 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084596 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110863267+00:00 stderr F I0813 20:01:08.110664 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable changed from True to False ("ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)") 2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118295 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 
2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118382 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118624 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.177380183+00:00 stderr F I0813 20:01:08.177300 1 apps.go:154] Deployment "openshift-console/console" changes: {"metadata":{"annotations":{"console.openshift.io/console-config-version":"30193","operator.openshift.io/spec-hash":"f1efe610e88a03d177d7da7c48414aef173b52463548255cc8d0eb8b0da2387b"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"metadata":{"annotations":{"console.openshift.io/console-config-version":"30193"}},"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m
","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/service-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:01:08.227915294+00:00 stderr F E0813 20:01:08.225041 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.262308 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.263064 1 status.go:130] ConsoleCustomRouteSyncDegraded 
FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.263077 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.263497 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.276528661+00:00 stderr F I0813 20:01:08.274511 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service 
Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:08.283445968+00:00 stderr F E0813 20:01:08.282072 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.324892079+00:00 stderr F E0813 20:01:08.324712 1 status.go:130] SyncLoopRefreshProgressing InProgress changes made during sync updates, additional sync expected 2025-08-13T20:01:08.324892079+00:00 stderr F E0813 20:01:08.324766 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:01:08.327935766+00:00 stderr F I0813 20:01:08.327887 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:01:08.350113509+00:00 stderr F I0813 20:01:08.347209 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: 
/var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:08.350113509+00:00 stderr F I0813 20:01:08.347910 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:08.409921724+00:00 stderr F I0813 20:01:08.405262 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:08.414879955+00:00 stderr F I0813 20:01:08.414034 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:08.41398487 +0000 UTC))" 2025-08-13T20:01:08.415876944+00:00 stderr F I0813 20:01:08.414953 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:08.414931167 +0000 UTC))" 2025-08-13T20:01:08.415969196+00:00 stderr F I0813 20:01:08.415945 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:08.415914925 
+0000 UTC))" 2025-08-13T20:01:08.416032318+00:00 stderr F I0813 20:01:08.416013 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:08.415991547 +0000 UTC))" 2025-08-13T20:01:08.416092010+00:00 stderr F I0813 20:01:08.416073 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416057379 +0000 UTC))" 2025-08-13T20:01:08.416142221+00:00 stderr F I0813 20:01:08.416129 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416112411 +0000 UTC))" 2025-08-13T20:01:08.416246064+00:00 stderr F I0813 20:01:08.416230 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC 
(now=2025-08-13 20:01:08.416210733 +0000 UTC))" 2025-08-13T20:01:08.416304896+00:00 stderr F I0813 20:01:08.416288 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416267875 +0000 UTC))" 2025-08-13T20:01:08.416354447+00:00 stderr F I0813 20:01:08.416342 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:08.416327387 +0000 UTC))" 2025-08-13T20:01:08.416457460+00:00 stderr F I0813 20:01:08.416437 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:08.416421569 +0000 UTC))" 2025-08-13T20:01:08.416518042+00:00 stderr F I0813 20:01:08.416497 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:01:08.416480951 +0000 UTC))" 2025-08-13T20:01:08.417026957+00:00 stderr F I0813 20:01:08.416992 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:12 +0000 UTC to 2027-08-13 20:00:13 +0000 UTC (now=2025-08-13 20:01:08.416966885 +0000 UTC))" 2025-08-13T20:01:08.420489225+00:00 stderr F I0813 20:01:08.420460 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:01:08.420425854 +0000 UTC))" 2025-08-13T20:01:08.738228045+00:00 stderr F E0813 20:01:08.733037 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.738228045+00:00 stderr F E0813 20:01:08.733079 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.752475751+00:00 stderr F E0813 20:01:08.746554 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.826232414+00:00 stderr F I0813 20:01:08.765352 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" 2025-08-13T20:01:08.826232414+00:00 stderr F I0813 20:01:08.817376 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:08.826232414+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:08.826232414+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:08.826232414+00:00 stderr F    ... 
// 13 identical elements 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    { 2025-08-13T20:01:08.826232414+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:01:08.826232414+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:01:08.826232414+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:01:08.826232414+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:01:08.826232414+00:00 stderr F -  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:01:08.826232414+00:00 stderr F +  "changes made during sync updates, additional sync expected", 2025-08-13T20:01:08.826232414+00:00 stderr F    }, ""), 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    ... // 44 identical elements 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    Version: "", 2025-08-13T20:01:08.826232414+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:08.826232414+00:00 stderr F    Generations: []v1.GenerationStatus{ 2025-08-13T20:01:08.826232414+00:00 stderr F    {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, 2025-08-13T20:01:08.826232414+00:00 stderr F    { 2025-08-13T20:01:08.826232414+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:01:08.826232414+00:00 stderr F    Namespace: "openshift-console", 2025-08-13T20:01:08.826232414+00:00 stderr F    Name: "console", 2025-08-13T20:01:08.826232414+00:00 stderr F -  LastGeneration: 4, 2025-08-13T20:01:08.826232414+00:00 stderr F +  LastGeneration: 5, 2025-08-13T20:01:08.826232414+00:00 stderr F    Hash: "", 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F   } 2025-08-13T20:01:09.226328592+00:00 stderr F E0813 20:01:09.174533 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.226500987+00:00 stderr F E0813 20:01:09.226463 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.250915843+00:00 stderr F E0813 20:01:09.250812 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.409214567+00:00 stderr F E0813 20:01:09.409153 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.409302429+00:00 stderr F E0813 20:01:09.409287 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.412097119+00:00 stderr F E0813 20:01:09.410035 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 
2025-08-13T20:01:09.450065452+00:00 stderr F E0813 20:01:09.449911 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.450065452+00:00 stderr F E0813 20:01:09.449991 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.450697530+00:00 stderr F E0813 20:01:09.450576 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.500530601+00:00 stderr F I0813 20:01:09.493874 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service 
Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614090 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614158 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614374 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.648479929+00:00 stderr F E0813 20:01:09.645963 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.648479929+00:00 stderr F E0813 20:01:09.646046 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.648479929+00:00 
stderr F E0813 20:01:09.646219 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.924185441+00:00 stderr F I0813 20:01:09.915994 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" 2025-08-13T20:01:09.958638953+00:00 stderr F E0813 20:01:09.958438 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:09.958638953+00:00 stderr F E0813 20:01:09.958476 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:09.958869540+00:00 stderr F E0813 20:01:09.958723 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199527 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199568 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 
2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199727 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199932 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199941 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.200059 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.235524539+00:00 stderr F E0813 20:01:10.235060 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:01:10.235524539+00:00 stderr F E0813 20:01:10.235387 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:01:10.443988903+00:00 stderr F I0813 20:01:10.426019 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:10.443988903+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:10.443988903+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:10.443988903+00:00 stderr F    ... 
// 13 identical elements 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    { 2025-08-13T20:01:10.443988903+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:01:10.443988903+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:01:10.443988903+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:01:10.443988903+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:01:10.443988903+00:00 stderr F -  "changes made during sync updates, additional sync expected", 2025-08-13T20:01:10.443988903+00:00 stderr F +  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:01:10.443988903+00:00 stderr F    }, ""), 2025-08-13T20:01:10.443988903+00:00 stderr F    }, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    ... 
// 44 identical elements 2025-08-13T20:01:10.443988903+00:00 stderr F    }, 2025-08-13T20:01:10.443988903+00:00 stderr F    Version: "", 2025-08-13T20:01:10.443988903+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:10.443988903+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:10.443988903+00:00 stderr F   } 2025-08-13T20:01:10.648973078+00:00 stderr F I0813 20:01:10.648051 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="986026bc94c265a214cb3459ff9cc01d5aa0eabbc41959f11d26b6222c432f4b", new="c8d612f3b74dc6507c61e4d04d4ecf5c547ff292af799c7a689fe7a15e5377e0") 2025-08-13T20:01:10.684128900+00:00 stderr F W0813 20:01:10.679640 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.680909 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="4b5d87903056afff0f59aa1059503707e0decf9c5ece89d2e759b1a6adbf089a", new="b9e8e76d9d6343210f883954e57c9ccdef1698a4fed96aca367288053d3b1f02") 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.683590 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.683741 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:10.684188392+00:00 stderr F I0813 20:01:10.684120 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:10.684188392+00:00 stderr F I0813 20:01:10.684129 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 
2025-08-13T20:01:10.684349227+00:00 stderr F I0813 20:01:10.684313 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 2025-08-13T20:01:10.684398598+00:00 stderr F I0813 20:01:10.684385 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:10.684434139+00:00 stderr F I0813 20:01:10.684408 1 base_controller.go:172] Shutting down ClusterUpgradeNotificationController ... 2025-08-13T20:01:10.684480740+00:00 stderr F I0813 20:01:10.684468 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:01:10.684521261+00:00 stderr F I0813 20:01:10.684509 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:01:10.684556742+00:00 stderr F I0813 20:01:10.684517 1 base_controller.go:172] Shutting down InformerWithSwitchController ... 2025-08-13T20:01:10.684587203+00:00 stderr F W0813 20:01:10.684548 1 builder.go:131] graceful termination failed, controllers failed with error: stopped 2025-08-13T20:01:10.685148869+00:00 stderr F I0813 20:01:10.684633 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log

2026-01-20T10:49:34.610807392+00:00 stderr F I0120 10:49:34.609530 1 cmd.go:241] Using service-serving-cert provided certificates 2026-01-20T10:49:34.610807392+00:00 stderr F I0120 10:49:34.610034 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s.
Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:34.612838014+00:00 stderr F I0120 10:49:34.610906 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:34.653706638+00:00 stderr F I0120 10:49:34.653409 1 builder.go:299] console-operator version - 2026-01-20T10:49:35.124330753+00:00 stderr F I0120 10:49:35.123891 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:35.124330753+00:00 stderr F W0120 10:49:35.124261 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:35.124330753+00:00 stderr F W0120 10:49:35.124267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:35.127639035+00:00 stderr F I0120 10:49:35.127339 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:35.129161551+00:00 stderr F I0120 10:49:35.127825 1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock... 
2026-01-20T10:49:35.129161551+00:00 stderr F I0120 10:49:35.128197 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:35.129161551+00:00 stderr F I0120 10:49:35.128212 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:35.129161551+00:00 stderr F I0120 10:49:35.128254 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:35.129161551+00:00 stderr F I0120 10:49:35.128262 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:35.129161551+00:00 stderr F I0120 10:49:35.128278 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:35.129161551+00:00 stderr F I0120 10:49:35.128283 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:35.129161551+00:00 stderr F I0120 10:49:35.129033 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:35.129329136+00:00 stderr F I0120 10:49:35.129231 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:35.129329136+00:00 stderr F I0120 10:49:35.129288 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:35.234726766+00:00 stderr F I0120 10:49:35.228638 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:35.234726766+00:00 stderr F I0120 10:49:35.228696 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 
2026-01-20T10:49:35.234726766+00:00 stderr F I0120 10:49:35.228774 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:54:50.098543866+00:00 stderr F I0120 10:54:50.097472 1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock 2026-01-20T10:54:50.098609198+00:00 stderr F I0120 10:54:50.097902 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41992", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_09445845-47a2-420c-a852-6d578a8c62cd became leader 2026-01-20T10:54:50.103607810+00:00 stderr F I0120 10:54:50.103517 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:54:50.108854731+00:00 stderr F I0120 10:54:50.108777 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, 
Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:54:50.109075036+00:00 stderr F I0120 10:54:50.108977 1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate 
GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:54:50.130271972+00:00 stderr F I0120 10:54:50.129371 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2026-01-20T10:54:50.130821236+00:00 stderr F I0120 10:54:50.130751 1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController 2026-01-20T10:54:50.130821236+00:00 stderr F I0120 10:54:50.130815 1 base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.134890 1 base_controller.go:67] Waiting for caches to sync for HealthCheckController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.134949 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.134966 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135215 1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController 
2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135241 1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135248 1 base_controller.go:73] Caches are synced for InformerWithSwitchController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135260 1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ... 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135265 1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135281 1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135297 1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135303 1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135310 1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ... 
2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.135350 1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator 2026-01-20T10:54:50.136177719+00:00 stderr F I0120 10:54:50.136033 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2026-01-20T10:54:50.146522485+00:00 stderr F I0120 10:54:50.146455 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2026-01-20T10:54:50.146522485+00:00 stderr F I0120 10:54:50.146495 1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController 2026-01-20T10:54:50.146522485+00:00 stderr F I0120 10:54:50.146507 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2026-01-20T10:54:50.146578897+00:00 stderr F I0120 10:54:50.146518 1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController 2026-01-20T10:54:50.146829373+00:00 stderr F I0120 10:54:50.146739 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:54:50.146829373+00:00 stderr F I0120 10:54:50.146805 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2026-01-20T10:54:50.149928386+00:00 stderr F E0120 10:54:50.147901 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2026-01-20T10:54:50.149928386+00:00 stderr F I0120 10:54:50.147915 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2026-01-20T10:54:50.149928386+00:00 stderr F I0120 10:54:50.149555 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:54:50.155351810+00:00 stderr F I0120 10:54:50.155283 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console 2026-01-20T10:54:50.159315116+00:00 stderr F E0120 10:54:50.159235 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io 
"cluster" not found 2026-01-20T10:54:50.230263628+00:00 stderr F I0120 10:54:50.230134 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2026-01-20T10:54:50.230263628+00:00 stderr F I0120 10:54:50.230170 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2026-01-20T10:54:50.231412698+00:00 stderr F I0120 10:54:50.231340 1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController 2026-01-20T10:54:50.231412698+00:00 stderr F I0120 10:54:50.231375 1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ... 2026-01-20T10:54:50.231444039+00:00 stderr F I0120 10:54:50.231349 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController 2026-01-20T10:54:50.231444039+00:00 stderr F I0120 10:54:50.231420 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ... 2026-01-20T10:54:50.235690342+00:00 stderr F I0120 10:54:50.235581 1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController 2026-01-20T10:54:50.235690342+00:00 stderr F I0120 10:54:50.235604 1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ... 2026-01-20T10:54:50.235690342+00:00 stderr F I0120 10:54:50.235630 1 base_controller.go:73] Caches are synced for OAuthClientSecretController 2026-01-20T10:54:50.235690342+00:00 stderr F I0120 10:54:50.235628 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2026-01-20T10:54:50.235690342+00:00 stderr F I0120 10:54:50.235648 1 base_controller.go:73] Caches are synced for OIDCSetupController 2026-01-20T10:54:50.235690342+00:00 stderr F I0120 10:54:50.235661 1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ... 2026-01-20T10:54:50.235690342+00:00 stderr F I0120 10:54:50.235649 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 
2026-01-20T10:54:50.235690342+00:00 stderr F I0120 10:54:50.235674 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2026-01-20T10:54:50.235742274+00:00 stderr F I0120 10:54:50.235692 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2026-01-20T10:54:50.235742274+00:00 stderr F I0120 10:54:50.235722 1 base_controller.go:73] Caches are synced for HealthCheckController 2026-01-20T10:54:50.235742274+00:00 stderr F I0120 10:54:50.235728 1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ... 2026-01-20T10:54:50.235759284+00:00 stderr F I0120 10:54:50.235636 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ... 2026-01-20T10:54:50.236475173+00:00 stderr F I0120 10:54:50.236425 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling 2026-01-20T10:54:50.246916102+00:00 stderr F I0120 10:54:50.246810 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2026-01-20T10:54:50.246916102+00:00 stderr F I0120 10:54:50.246847 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2026-01-20T10:54:50.246916102+00:00 stderr F I0120 10:54:50.246854 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2026-01-20T10:54:50.246916102+00:00 stderr F I0120 10:54:50.246887 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 
2026-01-20T10:54:50.246916102+00:00 stderr F I0120 10:54:50.246889 1 base_controller.go:73] Caches are synced for ConsoleRouteController 2026-01-20T10:54:50.247001424+00:00 stderr F I0120 10:54:50.246920 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ... 2026-01-20T10:54:50.247001424+00:00 stderr F I0120 10:54:50.246921 1 base_controller.go:73] Caches are synced for OAuthClientsController 2026-01-20T10:54:50.247001424+00:00 stderr F I0120 10:54:50.246962 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2026-01-20T10:54:50.247001424+00:00 stderr F I0120 10:54:50.246968 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:54:50.247001424+00:00 stderr F I0120 10:54:50.246980 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2026-01-20T10:54:50.247091316+00:00 stderr F I0120 10:54:50.247005 1 base_controller.go:73] Caches are synced for ManagementStateController 2026-01-20T10:54:50.247091316+00:00 stderr F I0120 10:54:50.247082 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2026-01-20T10:54:50.248335699+00:00 stderr F I0120 10:54:50.248282 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2026-01-20T10:54:50.248335699+00:00 stderr F I0120 10:54:50.248301 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2026-01-20T10:54:50.250444445+00:00 stderr F I0120 10:54:50.250374 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:54:50.250444445+00:00 stderr F I0120 10:54:50.250401 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2026-01-20T10:54:50.255409608+00:00 stderr F I0120 10:54:50.255035 1 base_controller.go:73] Caches are synced for DownloadsRouteController 2026-01-20T10:54:50.255409608+00:00 stderr F I0120 10:54:50.255072 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ... 2026-01-20T10:54:50.255449939+00:00 stderr F I0120 10:54:50.255439 1 base_controller.go:73] Caches are synced for StatusSyncer_console 2026-01-20T10:54:50.255463409+00:00 stderr F I0120 10:54:50.255449 1 base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ... 2026-01-20T10:54:50.333471728+00:00 stderr F I0120 10:54:50.333408 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:54:50.348239142+00:00 stderr F I0120 10:54:50.348189 1 base_controller.go:73] Caches are synced for ConsoleOperator 2026-01-20T10:54:50.348316794+00:00 stderr F I0120 10:54:50.348301 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ... 
2026-01-20T10:56:07.101340384+00:00 stderr F I0120 10:56:07.096842 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.096790951 +0000 UTC))" 2026-01-20T10:56:07.101340384+00:00 stderr F I0120 10:56:07.097029 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.097007007 +0000 UTC))" 2026-01-20T10:56:07.101340384+00:00 stderr F I0120 10:56:07.097054 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.097034968 +0000 UTC))" 2026-01-20T10:56:07.101340384+00:00 stderr F I0120 10:56:07.097096 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.097077439 +0000 UTC))" 2026-01-20T10:56:07.101340384+00:00 
stderr F I0120 10:56:07.097120 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.097103929 +0000 UTC))" 2026-01-20T10:56:07.101340384+00:00 stderr F I0120 10:56:07.097140 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.09712606 +0000 UTC))" 2026-01-20T10:56:07.101340384+00:00 stderr F I0120 10:56:07.097161 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.09714538 +0000 UTC))" 2026-01-20T10:56:07.101340384+00:00 stderr F I0120 10:56:07.097195 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.097165921 +0000 UTC))" 
2026-01-20T10:56:07.112108124+00:00 stderr F I0120 10:56:07.097217 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.097200442 +0000 UTC))" 2026-01-20T10:56:07.112144455+00:00 stderr F I0120 10:56:07.112133 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.112076463 +0000 UTC))" 2026-01-20T10:56:07.112192116+00:00 stderr F I0120 10:56:07.112166 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.112147305 +0000 UTC))" 2026-01-20T10:56:07.112252757+00:00 stderr F I0120 10:56:07.112194 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.112178855 +0000 
UTC))" 2026-01-20T10:56:07.113049279+00:00 stderr F I0120 10:56:07.112650 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:12 +0000 UTC to 2027-08-13 20:00:13 +0000 UTC (now=2026-01-20 10:56:07.112627587 +0000 UTC))" 2026-01-20T10:56:07.113049279+00:00 stderr F I0120 10:56:07.113014 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906175\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906174\" (2026-01-20 09:49:34 +0000 UTC to 2027-01-20 09:49:34 +0000 UTC (now=2026-01-20 10:56:07.112993037 +0000 UTC))" 2026-01-20T10:58:14.379088962+00:00 stderr F I0120 10:58:14.378263 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2026-01-20T10:58:15.734440348+00:00 stderr F I0120 10:58:15.733991 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:58:16.960302185+00:00 stderr F I0120 10:58:16.959912 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:58:19.227024110+00:00 stderr F I0120 10:58:19.226453 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:58:20.441586101+00:00 stderr F I0120 10:58:20.440847 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 ././@LongLink0000644000000000000000000000021000000000000011574 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015133657716033006 5ustar zuulzuul././@LongLink0000644000000000000000000000021500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015133660073032774 5ustar zuulzuul././@LongLink0000644000000000000000000000022200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000005637315133657716033026 0ustar zuulzuul2026-01-20T10:47:13.629692378+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:13.628177Z","logger":"etcd-client","caller":"v3@v3.5.13/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000de000/192.168.126.11:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:2379: connect: connection refused\""} 2026-01-20T10:47:13.629692378+00:00 stderr F Error: context deadline exceeded 2026-01-20T10:47:13.676924738+00:00 stderr F dataDir is present on crc 2026-01-20T10:47:15.679966289+00:00 stderr P failed to create etcd client, but the server is already initialized as member "crc" before, starting as etcd member: context deadline exceeded 2026-01-20T10:47:15.682554013+00:00 stdout P Waiting for ports 2379, 2380 and 9978 to 
be released. 2026-01-20T10:47:15.687248425+00:00 stderr F 2026-01-20T10:47:15.687248425+00:00 stderr F real 0m0.005s 2026-01-20T10:47:15.687248425+00:00 stderr F user 0m0.000s 2026-01-20T10:47:15.687248425+00:00 stderr F sys 0m0.005s 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_QUOTA_BACKEND_BYTES=8589934592 2026-01-20T10:47:15.690442938+00:00 stdout F ALL_ETCD_ENDPOINTS=https://192.168.126.11:2379 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_STATIC_POD_VERSION=3 2026-01-20T10:47:15.690442938+00:00 stdout F ETCDCTL_ENDPOINTS=https://192.168.126.11:2379 2026-01-20T10:47:15.690442938+00:00 stdout F ETCDCTL_KEY=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key 2026-01-20T10:47:15.690442938+00:00 stdout F ETCDCTL_API=3 2026-01-20T10:47:15.690442938+00:00 stdout F ETCDCTL_CACERT=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_HEARTBEAT_INTERVAL=100 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_NAME=crc 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_SOCKET_REUSE_ADDRESS=true 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION=200ms 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_EXPERIMENTAL_MAX_LEARNERS=1 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_DATA_DIR=/var/lib/etcd 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_ELECTION_TIMEOUT=1000 2026-01-20T10:47:15.690442938+00:00 stdout F ETCDCTL_CERT=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_INITIAL_CLUSTER_STATE=existing 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_INITIAL_CLUSTER= 2026-01-20T10:47:15.690442938+00:00 stdout F 
ETCD_CIPHER_SUITES=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL=5s 2026-01-20T10:47:15.690442938+00:00 stdout F ETCD_ENABLE_PPROF=true 2026-01-20T10:47:15.691051628+00:00 stderr F + exec nice -n -19 ionice -c2 -n0 etcd --logger=zap --log-level=info --experimental-initial-corrupt-check=true --snapshot-count=10000 --initial-advertise-peer-urls=https://192.168.126.11:2380 --cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt --key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key --trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt --client-cert-auth=true --peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt --peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key --peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt --peer-client-cert-auth=true --advertise-client-urls=https://192.168.126.11:2379 --listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0 --listen-peer-urls=https://0.0.0.0:2380 --metrics=extensive --listen-metrics-urls=https://0.0.0.0:9978 2026-01-20T10:47:15.727003532+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.726776Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_CIPHER_SUITES","variable-value":"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"} 
2026-01-20T10:47:15.727003532+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.726944Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_DATA_DIR","variable-value":"/var/lib/etcd"} 2026-01-20T10:47:15.727003532+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.726965Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ELECTION_TIMEOUT","variable-value":"1000"} 2026-01-20T10:47:15.727003532+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.72698Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ENABLE_PPROF","variable-value":"true"} 2026-01-20T10:47:15.727096785+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727013Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_MAX_LEARNERS","variable-value":"1"} 2026-01-20T10:47:15.727096785+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727031Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION","variable-value":"200ms"} 2026-01-20T10:47:15.727118066+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727091Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL","variable-value":"5s"} 2026-01-20T10:47:15.727134026+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727111Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_HEARTBEAT_INTERVAL","variable-value":"100"} 2026-01-20T10:47:15.727149587+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727128Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_INITIAL_CLUSTER_STATE","variable-value":"existing"} 
2026-01-20T10:47:15.727197278+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727158Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_NAME","variable-value":"crc"} 2026-01-20T10:47:15.727213359+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727196Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_QUOTA_BACKEND_BYTES","variable-value":"8589934592"} 2026-01-20T10:47:15.727232359+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727214Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_SOCKET_REUSE_ADDRESS","variable-value":"true"} 2026-01-20T10:47:15.727315182+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:15.727242Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3"} 2026-01-20T10:47:15.727315182+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:15.727282Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_STATIC_POD_VERSION=3"} 2026-01-20T10:47:15.727332553+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:15.727304Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_INITIAL_CLUSTER="} 2026-01-20T10:47:15.727770358+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:15.727395Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. 
This is not recommended for production."} 2026-01-20T10:47:15.727788908+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727738Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--logger=zap","--log-level=info","--experimental-initial-corrupt-check=true","--snapshot-count=10000","--initial-advertise-peer-urls=https://192.168.126.11:2380","--cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt","--key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key","--trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt","--client-cert-auth=true","--peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt","--peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key","--peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt","--peer-client-cert-auth=true","--advertise-client-urls=https://192.168.126.11:2379","--listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0","--listen-peer-urls=https://0.0.0.0:2380","--metrics=extensive","--listen-metrics-urls=https://0.0.0.0:9978"]} 2026-01-20T10:47:15.727922182+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727852Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"} 2026-01-20T10:47:15.727922182+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:15.727902Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. 
This is not recommended for production."} 2026-01-20T10:47:15.727939593+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727916Z","caller":"embed/etcd.go:121","msg":"configuring socket options","reuse-address":true,"reuse-port":false} 2026-01-20T10:47:15.727958494+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727935Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2380"]} 2026-01-20T10:47:15.728085678+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.727989Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2026-01-20T10:47:15.729658478+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.729585Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"]} 2026-01-20T10:47:15.729658478+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.729639Z","caller":"embed/etcd.go:620","msg":"pprof is enabled","path":"/debug/pprof"} 2026-01-20T10:47:15.730529467+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.730394Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.13","git-sha":"GitNotFound","go-version":"go1.21.9 (Red Hat 1.21.9-1.el9_4) 
X:strictfipsruntime","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":true,"name":"crc","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"existing","initial-cluster-token":"","quota-backend-bytes":8589934592,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s","max-learners":1} 2026-01-20T10:47:15.731411495+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:15.731332Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member/snap\" exist, but the permission is \"drwxr-xr-x\". 
The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2026-01-20T10:47:15.774222162+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:15.774098Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"42.60013ms"} 2026-01-20T10:47:16.115085033+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.114893Z","caller":"etcdserver/server.go:514","msg":"recovered v2 store from snapshot","snapshot-index":40004,"snapshot-size":"8.9 kB"} 2026-01-20T10:47:16.115085033+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.114988Z","caller":"etcdserver/server.go:527","msg":"recovered v3 backend from snapshot","backend-size-bytes":74416128,"backend-size":"74 MB","backend-size-in-use-bytes":44539904,"backend-size-in-use":"44 MB"} 2026-01-20T10:47:16.244007979+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.243817Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","commit-index":41800} 2026-01-20T10:47:16.244122763+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.24404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c switched to configuration voters=(15298667783517588556)"} 2026-01-20T10:47:16.244122763+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.244101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became follower at term 8"} 2026-01-20T10:47:16.244144483+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.244117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d44fc94b15474c4c [peers: [d44fc94b15474c4c], term: 8, commit: 41800, applied: 40004, lastindex: 41800, lastterm: 8]"} 2026-01-20T10:47:16.244307119+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.244246Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} 
2026-01-20T10:47:16.244307119+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.244267Z","caller":"membership/cluster.go:280","msg":"recovered/added member from store","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","recovered-remote-peer-id":"d44fc94b15474c4c","recovered-remote-peer-urls":["https://192.168.126.11:2380"],"recovered-remote-peer-is-learner":false} 2026-01-20T10:47:16.244307119+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.24428Z","caller":"membership/cluster.go:290","msg":"set cluster version from store","cluster-version":"3.5"} 2026-01-20T10:47:16.244389391+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:16.244348Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2026-01-20T10:47:16.244701641+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:16.244607Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} 2026-01-20T10:47:16.244760703+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.244694Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":36405} 2026-01-20T10:47:16.284104507+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.283918Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":38002} 2026-01-20T10:47:16.284490010+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.284424Z","caller":"etcdserver/quota.go:117","msg":"enabled backend quota","quota-name":"v3-applier","quota-size-bytes":8589934592,"quota-size":"8.6 GB"} 2026-01-20T10:47:16.285274936+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.285216Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption 
check","local-member-id":"d44fc94b15474c4c","timeout":"7s"} 2026-01-20T10:47:16.295716763+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.29553Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d44fc94b15474c4c"} 2026-01-20T10:47:16.295716763+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.295598Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"d44fc94b15474c4c","local-server-version":"3.5.13","cluster-id":"37a6ceb54a88a89a","cluster-version":"3.5"} 2026-01-20T10:47:16.296026384+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.295904Z","caller":"etcdserver/server.go:760","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d44fc94b15474c4c","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} 2026-01-20T10:47:16.296660505+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.296202Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} 2026-01-20T10:47:16.296690706+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.296651Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} 2026-01-20T10:47:16.296690706+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.29667Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} 2026-01-20T10:47:16.297977587+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.297926Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key, client-cert=, client-key=, trusted-ca = 
/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2026-01-20T10:47:16.298163313+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.298006Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"[::]:2380"} 2026-01-20T10:47:16.298163313+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.298126Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"[::]:2380"} 2026-01-20T10:47:16.298845776+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.298774Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d44fc94b15474c4c","initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"]} 2026-01-20T10:47:16.298886467+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:16.298816Z","caller":"embed/etcd.go:859","msg":"serving metrics","address":"https://0.0.0.0:9978"} 2026-01-20T10:47:17.045750128+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.045597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c is starting a new election at term 8"} 2026-01-20T10:47:17.045750128+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.045683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became pre-candidate at term 8"} 2026-01-20T10:47:17.045817740+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.045754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c 
received MsgPreVoteResp from d44fc94b15474c4c at term 8"} 2026-01-20T10:47:17.045817740+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.045774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became candidate at term 9"} 2026-01-20T10:47:17.045817740+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.045791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgVoteResp from d44fc94b15474c4c at term 9"} 2026-01-20T10:47:17.045817740+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.045807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became leader at term 9"} 2026-01-20T10:47:17.045843821+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.045821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d44fc94b15474c4c elected leader d44fc94b15474c4c at term 9"} 2026-01-20T10:47:17.046332677+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.046246Z","caller":"etcdserver/server.go:2119","msg":"published local member to cluster through raft","local-member-id":"d44fc94b15474c4c","local-member-attributes":"{Name:crc ClientURLs:[https://192.168.126.11:2379]}","request-path":"/0/members/d44fc94b15474c4c/attributes","cluster-id":"37a6ceb54a88a89a","publish-timeout":"7s"} 2026-01-20T10:47:17.046332677+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.04629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2026-01-20T10:47:17.046703259+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.046646Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2026-01-20T10:47:17.047392012+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.047302Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2026-01-20T10:47:17.047454974+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.047415Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 
2026-01-20T10:47:17.050577315+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.050469Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"[::]:2379"} 2026-01-20T10:47:17.051199515+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:17.050999Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.126.11:0"} home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.log 2026-01-20T10:42:11.299634433+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:11.297956Z","logger":"etcd-client","caller":"v3@v3.5.13/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00010c000/192.168.126.11:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:2379: connect: connection refused\""} 2026-01-20T10:42:11.299634433+00:00 stderr F Error: context deadline exceeded 2026-01-20T10:42:11.397362642+00:00 stderr F dataDir is present on crc 2026-01-20T10:42:13.399264465+00:00 stderr P failed to create etcd client, but the server is already initialized as member "crc" before, starting as etcd member: context deadline exceeded 2026-01-20T10:42:13.403076956+00:00 stdout P Waiting for ports 2379, 2380 and 9978 to be released. 
2026-01-20T10:42:13.411490031+00:00 stderr F 2026-01-20T10:42:13.411490031+00:00 stderr F real 0m0.009s 2026-01-20T10:42:13.411490031+00:00 stderr F user 0m0.001s 2026-01-20T10:42:13.411490031+00:00 stderr F sys 0m0.008s 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_QUOTA_BACKEND_BYTES=8589934592 2026-01-20T10:42:13.414928531+00:00 stdout F ALL_ETCD_ENDPOINTS=https://192.168.126.11:2379 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_STATIC_POD_VERSION=3 2026-01-20T10:42:13.414928531+00:00 stdout F ETCDCTL_ENDPOINTS=https://192.168.126.11:2379 2026-01-20T10:42:13.414928531+00:00 stdout F ETCDCTL_KEY=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key 2026-01-20T10:42:13.414928531+00:00 stdout F ETCDCTL_API=3 2026-01-20T10:42:13.414928531+00:00 stdout F ETCDCTL_CACERT=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_HEARTBEAT_INTERVAL=100 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_NAME=crc 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_SOCKET_REUSE_ADDRESS=true 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION=200ms 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_EXPERIMENTAL_MAX_LEARNERS=1 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_DATA_DIR=/var/lib/etcd 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_ELECTION_TIMEOUT=1000 2026-01-20T10:42:13.414928531+00:00 stdout F ETCDCTL_CERT=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_INITIAL_CLUSTER_STATE=existing 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_INITIAL_CLUSTER= 2026-01-20T10:42:13.414928531+00:00 stdout F 
ETCD_CIPHER_SUITES=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL=5s 2026-01-20T10:42:13.414928531+00:00 stdout F ETCD_ENABLE_PPROF=true 2026-01-20T10:42:13.415484358+00:00 stderr F + exec nice -n -19 ionice -c2 -n0 etcd --logger=zap --log-level=info --experimental-initial-corrupt-check=true --snapshot-count=10000 --initial-advertise-peer-urls=https://192.168.126.11:2380 --cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt --key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key --trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt --client-cert-auth=true --peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt --peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key --peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt --peer-client-cert-auth=true --advertise-client-urls=https://192.168.126.11:2379 --listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0 --listen-peer-urls=https://0.0.0.0:2380 --metrics=extensive --listen-metrics-urls=https://0.0.0.0:9978 2026-01-20T10:42:13.562324378+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562065Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_CIPHER_SUITES","variable-value":"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"} 
2026-01-20T10:42:13.562324378+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562277Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_DATA_DIR","variable-value":"/var/lib/etcd"} 2026-01-20T10:42:13.562324378+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562299Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ELECTION_TIMEOUT","variable-value":"1000"} 2026-01-20T10:42:13.562366629+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562314Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ENABLE_PPROF","variable-value":"true"} 2026-01-20T10:42:13.562377549+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562354Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_MAX_LEARNERS","variable-value":"1"} 2026-01-20T10:42:13.562418130+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562379Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION","variable-value":"200ms"} 2026-01-20T10:42:13.562418130+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562403Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL","variable-value":"5s"} 2026-01-20T10:42:13.562482982+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562428Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_HEARTBEAT_INTERVAL","variable-value":"100"} 2026-01-20T10:42:13.562482982+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562463Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_INITIAL_CLUSTER_STATE","variable-value":"existing"} 
2026-01-20T10:42:13.562556124+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562504Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_NAME","variable-value":"crc"} 2026-01-20T10:42:13.562605156+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562563Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_QUOTA_BACKEND_BYTES","variable-value":"8589934592"} 2026-01-20T10:42:13.562605156+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.562592Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_SOCKET_REUSE_ADDRESS","variable-value":"true"} 2026-01-20T10:42:13.562686878+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:13.562628Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3"} 2026-01-20T10:42:13.562686878+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:13.562663Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_STATIC_POD_VERSION=3"} 2026-01-20T10:42:13.562704099+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:13.562686Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_INITIAL_CLUSTER="} 2026-01-20T10:42:13.685678883+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:13.685419Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. 
This is not recommended for production."} 2026-01-20T10:42:13.685809097+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.685628Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--logger=zap","--log-level=info","--experimental-initial-corrupt-check=true","--snapshot-count=10000","--initial-advertise-peer-urls=https://192.168.126.11:2380","--cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt","--key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key","--trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt","--client-cert-auth=true","--peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt","--peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key","--peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt","--peer-client-cert-auth=true","--advertise-client-urls=https://192.168.126.11:2379","--listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0","--listen-peer-urls=https://0.0.0.0:2380","--metrics=extensive","--listen-metrics-urls=https://0.0.0.0:9978"]} 2026-01-20T10:42:13.686137737+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.68604Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"} 2026-01-20T10:42:13.686160207+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:13.686139Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. 
This is not recommended for production."} 2026-01-20T10:42:13.686239979+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.686163Z","caller":"embed/etcd.go:121","msg":"configuring socket options","reuse-address":true,"reuse-port":false} 2026-01-20T10:42:13.686259370+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.686203Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2380"]} 2026-01-20T10:42:13.686476246+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.686386Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2026-01-20T10:42:13.688576278+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.688466Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"]} 2026-01-20T10:42:13.688576278+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.688533Z","caller":"embed/etcd.go:620","msg":"pprof is enabled","path":"/debug/pprof"} 2026-01-20T10:42:13.700881696+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:13.700679Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.13","git-sha":"GitNotFound","go-version":"go1.21.9 (Red Hat 1.21.9-1.el9_4) 
X:strictfipsruntime","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":true,"name":"crc","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"existing","initial-cluster-token":"","quota-backend-bytes":8589934592,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s","max-learners":1} 2026-01-20T10:42:13.758905958+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:13.700869Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2026-01-20T10:42:13.759577107+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:13.759499Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member/snap\" exist, but the permission is \"drwxr-xr-x\". 
The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2026-01-20T10:42:14.005292510+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.00511Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"243.439656ms"} 2026-01-20T10:42:14.763978985+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.763623Z","caller":"etcdserver/server.go:514","msg":"recovered v2 store from snapshot","snapshot-index":40004,"snapshot-size":"8.9 kB"} 2026-01-20T10:42:14.763978985+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.763838Z","caller":"etcdserver/server.go:527","msg":"recovered v3 backend from snapshot","backend-size-bytes":74416128,"backend-size":"74 MB","backend-size-in-use-bytes":42295296,"backend-size-in-use":"42 MB"} 2026-01-20T10:42:14.857548002+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.857352Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","commit-index":41289} 2026-01-20T10:42:14.857649085+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.857519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c switched to configuration voters=(15298667783517588556)"} 2026-01-20T10:42:14.857649085+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.857561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became follower at term 7"} 2026-01-20T10:42:14.857649085+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.857576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d44fc94b15474c4c [peers: [d44fc94b15474c4c], term: 7, commit: 41289, applied: 40004, lastindex: 41289, lastterm: 7]"} 2026-01-20T10:42:14.857817470+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.857767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} 
2026-01-20T10:42:14.857817470+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.857796Z","caller":"membership/cluster.go:280","msg":"recovered/added member from store","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","recovered-remote-peer-id":"d44fc94b15474c4c","recovered-remote-peer-urls":["https://192.168.126.11:2380"],"recovered-remote-peer-is-learner":false} 2026-01-20T10:42:14.857817470+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.857808Z","caller":"membership/cluster.go:290","msg":"set cluster version from store","cluster-version":"3.5"} 2026-01-20T10:42:14.857905992+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:14.857873Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2026-01-20T10:42:14.862263620+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:14.86219Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} 2026-01-20T10:42:14.863449074+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.863356Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":36405} 2026-01-20T10:42:14.934415373+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.934222Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":37545} 2026-01-20T10:42:14.939536313+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.939443Z","caller":"etcdserver/quota.go:117","msg":"enabled backend quota","quota-name":"v3-applier","quota-size-bytes":8589934592,"quota-size":"8.6 GB"} 2026-01-20T10:42:14.945006422+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.944936Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption 
check","local-member-id":"d44fc94b15474c4c","timeout":"7s"} 2026-01-20T10:42:14.961902595+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.961786Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d44fc94b15474c4c"} 2026-01-20T10:42:14.961968217+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.961931Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"d44fc94b15474c4c","local-server-version":"3.5.13","cluster-id":"37a6ceb54a88a89a","cluster-version":"3.5"} 2026-01-20T10:42:14.963385887+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.963187Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} 2026-01-20T10:42:14.963403378+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.963382Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} 2026-01-20T10:42:14.963413328+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.9634Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} 2026-01-20T10:42:14.964428647+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.964206Z","caller":"etcdserver/server.go:760","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d44fc94b15474c4c","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} 2026-01-20T10:42:14.967211869+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.967155Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key, client-cert=, client-key=, trusted-ca = 
/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2026-01-20T10:42:14.967355173+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.967283Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"[::]:2380"} 2026-01-20T10:42:14.967387774+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.967372Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"[::]:2380"} 2026-01-20T10:42:14.969839605+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.969687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d44fc94b15474c4c","initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"]} 2026-01-20T10:42:14.970005390+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:14.96992Z","caller":"embed/etcd.go:859","msg":"serving metrics","address":"https://0.0.0.0:9978"} 2026-01-20T10:42:15.058433198+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.058193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c is starting a new election at term 7"} 2026-01-20T10:42:15.058505130+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.058395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became pre-candidate at term 7"} 2026-01-20T10:42:15.058664015+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.058588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c 
received MsgPreVoteResp from d44fc94b15474c4c at term 7"} 2026-01-20T10:42:15.060063746+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.05995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became candidate at term 8"} 2026-01-20T10:42:15.060077826+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.060055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgVoteResp from d44fc94b15474c4c at term 8"} 2026-01-20T10:42:15.060120417+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.060085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became leader at term 8"} 2026-01-20T10:42:15.060223060+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.060112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d44fc94b15474c4c elected leader d44fc94b15474c4c at term 8"} 2026-01-20T10:42:15.062984331+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.062913Z","caller":"etcdserver/server.go:2119","msg":"published local member to cluster through raft","local-member-id":"d44fc94b15474c4c","local-member-attributes":"{Name:crc ClientURLs:[https://192.168.126.11:2379]}","request-path":"/0/members/d44fc94b15474c4c/attributes","cluster-id":"37a6ceb54a88a89a","publish-timeout":"7s"} 2026-01-20T10:42:15.063006352+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.062973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2026-01-20T10:42:15.063047563+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.062954Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2026-01-20T10:42:15.063366472+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.06329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2026-01-20T10:42:15.063380772+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.06336Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 
2026-01-20T10:42:15.067157852+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.067062Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.126.11:0"} 2026-01-20T10:42:15.067563324+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:15.067487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"[::]:2379"} 2026-01-20T10:42:29.324276415+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:29.324013Z","caller":"traceutil/trace.go:171","msg":"trace[278281669] transaction","detail":"{read_only:false; response_revision:37572; number_of_response:1; }","duration":"168.902683ms","start":"2026-01-20T10:42:29.155077Z","end":"2026-01-20T10:42:29.32398Z","steps":["trace[278281669] 'process raft request' (duration: 168.606504ms)"],"step_count":1} 2026-01-20T10:43:39.467268135+00:00 stderr F {"level":"info","ts":"2026-01-20T10:43:39.467083Z","caller":"traceutil/trace.go:171","msg":"trace[1677972126] transaction","detail":"{read_only:false; response_revision:37689; number_of_response:1; }","duration":"144.360631ms","start":"2026-01-20T10:43:39.322693Z","end":"2026-01-20T10:43:39.467054Z","steps":["trace[1677972126] 'process raft request' (duration: 144.191685ms)"],"step_count":1} 2026-01-20T10:43:51.673214088+00:00 stderr F {"level":"info","ts":"2026-01-20T10:43:51.673098Z","caller":"traceutil/trace.go:171","msg":"trace[2035051642] transaction","detail":"{read_only:false; response_revision:37705; number_of_response:1; }","duration":"182.470711ms","start":"2026-01-20T10:43:51.490605Z","end":"2026-01-20T10:43:51.673075Z","steps":["trace[2035051642] 'process raft request' (duration: 182.336686ms)"],"step_count":1} 2026-01-20T10:43:51.678757003+00:00 stderr F {"level":"info","ts":"2026-01-20T10:43:51.678663Z","caller":"traceutil/trace.go:171","msg":"trace[1249702686] transaction","detail":"{read_only:false; response_revision:37706; number_of_response:1; 
}","duration":"163.448072ms","start":"2026-01-20T10:43:51.515198Z","end":"2026-01-20T10:43:51.678647Z","steps":["trace[1249702686] 'process raft request' (duration: 163.326108ms)"],"step_count":1} 2026-01-20T10:43:51.679058463+00:00 stderr F {"level":"info","ts":"2026-01-20T10:43:51.678976Z","caller":"traceutil/trace.go:171","msg":"trace[1776461845] transaction","detail":"{read_only:false; response_revision:37707; number_of_response:1; }","duration":"153.276205ms","start":"2026-01-20T10:43:51.525684Z","end":"2026-01-20T10:43:51.678961Z","steps":["trace[1776461845] 'process raft request' (duration: 152.925803ms)"],"step_count":1} 2026-01-20T10:43:51.679170937+00:00 stderr F {"level":"info","ts":"2026-01-20T10:43:51.679078Z","caller":"traceutil/trace.go:171","msg":"trace[1679587817] transaction","detail":"{read_only:false; response_revision:37708; number_of_response:1; }","duration":"143.709637ms","start":"2026-01-20T10:43:51.535344Z","end":"2026-01-20T10:43:51.679054Z","steps":["trace[1679587817] 'process raft request' (duration: 143.567062ms)"],"step_count":1} 2026-01-20T10:44:12.163131696+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:12.163008Z","caller":"traceutil/trace.go:171","msg":"trace[1674827645] linearizableReadLoop","detail":"{readStateIndex:41508; appliedIndex:41506; }","duration":"327.676235ms","start":"2026-01-20T10:44:11.835307Z","end":"2026-01-20T10:44:12.162983Z","steps":["trace[1674827645] 'read index received' (duration: 268.529316ms)","trace[1674827645] 'applied index is now lower than readState.Index' (duration: 59.145819ms)"],"step_count":2} 2026-01-20T10:44:12.163131696+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:12.163091Z","caller":"traceutil/trace.go:171","msg":"trace[323988076] transaction","detail":"{read_only:false; response_revision:37735; number_of_response:1; }","duration":"458.454911ms","start":"2026-01-20T10:44:11.704618Z","end":"2026-01-20T10:44:12.163073Z","steps":["trace[323988076] 'process raft request' 
(duration: 399.268911ms)","trace[323988076] 'compare' (duration: 58.631945ms)"],"step_count":2} 2026-01-20T10:44:12.163241329+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:12.163187Z","caller":"traceutil/trace.go:171","msg":"trace[1039613057] transaction","detail":"{read_only:false; response_revision:37736; number_of_response:1; }","duration":"457.774643ms","start":"2026-01-20T10:44:11.705387Z","end":"2026-01-20T10:44:12.163161Z","steps":["trace[1039613057] 'process raft request' (duration: 457.502156ms)"],"step_count":1} 2026-01-20T10:44:12.163364882+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:12.163261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.939181ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/ingresscontrollers.v1.operator.openshift.io\" ","response":"range_response_count:1 size:6225"} 2026-01-20T10:44:12.163417303+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:12.16337Z","caller":"traceutil/trace.go:171","msg":"trace[1320225830] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/ingresscontrollers.v1.operator.openshift.io; range_end:; response_count:1; response_revision:37736; }","duration":"328.061705ms","start":"2026-01-20T10:44:11.835293Z","end":"2026-01-20T10:44:12.163355Z","steps":["trace[1320225830] 'agreement among raft nodes before linearized reading' (duration: 327.816478ms)"],"step_count":1} 2026-01-20T10:44:12.163953478+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:12.16362Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:11.70459Z","time spent":"458.536893ms","remote":"[::1]:33760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5146,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2026-01-20T10:44:12.163953478+00:00 stderr F 
{"level":"warn","ts":"2026-01-20T10:44:12.163421Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:11.835239Z","time spent":"328.169637ms","remote":"192.168.126.11:44408","response type":"/etcdserverpb.KV/Range","request count":0,"request size":100,"response count":1,"response size":6248,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/ingresscontrollers.v1.operator.openshift.io\" "} 2026-01-20T10:44:12.163973308+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:12.163755Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:11.70537Z","time spent":"457.901056ms","remote":"192.168.126.11:44366","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5052,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2026-01-20T10:44:14.227628906+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.227432Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:12.711325Z","time spent":"1.516100232s","remote":"[::1]:51542","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2026-01-20T10:44:14.227628906+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.22758Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:12.329137Z","time spent":"1.898439214s","remote":"[::1]:51514","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2026-01-20T10:44:14.227934674+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.227833Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:12.70431Z","time spent":"1.523516014s","remote":"[::1]:51534","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request 
size":-1,"response count":-1,"response size":-1,"request content":""} 2026-01-20T10:44:14.230619797+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.229929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.898942447s","expected-duration":"200ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:18"} 2026-01-20T10:44:14.231030058+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:14.2309Z","caller":"traceutil/trace.go:171","msg":"trace[981732708] linearizableReadLoop","detail":"{readStateIndex:41509; appliedIndex:41508; }","duration":"1.680159775s","start":"2026-01-20T10:44:12.55072Z","end":"2026-01-20T10:44:14.23088Z","steps":["trace[981732708] 'read index received' (duration: 78.932µs)","trace[981732708] 'applied index is now lower than readState.Index' (duration: 1.680079123s)"],"step_count":2} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:14.231038Z","caller":"traceutil/trace.go:171","msg":"trace[946378609] transaction","detail":"{read_only:false; response_revision:37737; number_of_response:1; }","duration":"2.050972812s","start":"2026-01-20T10:44:12.180022Z","end":"2026-01-20T10:44:14.230994Z","steps":["trace[946378609] 'process raft request' (duration: 149.934469ms)","trace[946378609] 'compare' (duration: 1.897562869s)"],"step_count":2} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.231262Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:12.179995Z","time spent":"2.051147477s","remote":"192.168.126.11:44408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5050,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.231266Z","caller":"etcdserver/util.go:170","msg":"apply request took too 
long","took":"235.412374ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/replicasets/\" range_end:\"/kubernetes.io/replicasets0\" count_only:true ","response":"range_response_count:0 size:8"} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:14.231314Z","caller":"traceutil/trace.go:171","msg":"trace[1161409747] range","detail":"{range_begin:/kubernetes.io/replicasets/; range_end:/kubernetes.io/replicasets0; response_count:0; response_revision:37737; }","duration":"235.458195ms","start":"2026-01-20T10:44:13.99584Z","end":"2026-01-20T10:44:14.231299Z","steps":["trace[1161409747] 'agreement among raft nodes before linearized reading' (duration: 235.291721ms)"],"step_count":1} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.231312Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.201316189s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true ","response":"range_response_count:0 size:8"} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:14.231386Z","caller":"traceutil/trace.go:171","msg":"trace[723316670] range","detail":"{range_begin:/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions/; range_end:/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:37737; }","duration":"1.201401311s","start":"2026-01-20T10:44:13.029959Z","end":"2026-01-20T10:44:14.231361Z","steps":["trace[723316670] 'agreement among raft nodes before linearized reading' (duration: 1.201197886s)"],"step_count":1} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.23109Z","caller":"etcdserver/util.go:170","msg":"apply request took too 
long","took":"1.68036784s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/horizontalpodautoscalers/\" range_end:\"/kubernetes.io/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:6"} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.23144Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:13.029848Z","time spent":"1.201575387s","remote":"[::1]:33320","response type":"/etcdserverpb.KV/Range","request count":0,"request size":130,"response count":107,"response size":31,"request content":"key:\"/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true "} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.231387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.396712774s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/projecthelmchartrepositories.v1beta1.helm.openshift.io\" ","response":"range_response_count:1 size:4438"} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:14.231469Z","caller":"traceutil/trace.go:171","msg":"trace[1344682976] range","detail":"{range_begin:/kubernetes.io/horizontalpodautoscalers/; range_end:/kubernetes.io/horizontalpodautoscalers0; response_count:0; response_revision:37737; }","duration":"1.680736671s","start":"2026-01-20T10:44:12.550701Z","end":"2026-01-20T10:44:14.231438Z","steps":["trace[1344682976] 'agreement among raft nodes before linearized reading' (duration: 1.680332819s)"],"step_count":1} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:14.231538Z","caller":"traceutil/trace.go:171","msg":"trace[503540484] 
range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/projecthelmchartrepositories.v1beta1.helm.openshift.io; range_end:; response_count:1; response_revision:37737; }","duration":"1.39688047s","start":"2026-01-20T10:44:12.83461Z","end":"2026-01-20T10:44:14.23149Z","steps":["trace[503540484] 'agreement among raft nodes before linearized reading' (duration: 1.396502158s)"],"step_count":1} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.231538Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:12.550647Z","time spent":"1.680870115s","remote":"[::1]:33464","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/horizontalpodautoscalers/\" range_end:\"/kubernetes.io/horizontalpodautoscalers0\" count_only:true "} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.231586Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:12.834553Z","time spent":"1.397023574s","remote":"[::1]:33850","response type":"/etcdserverpb.KV/Range","request count":0,"request size":111,"response count":1,"response size":4461,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/projecthelmchartrepositories.v1beta1.helm.openshift.io\" "} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:14.231587Z","caller":"traceutil/trace.go:171","msg":"trace[1960484916] range","detail":"{range_begin:/kubernetes.io/apiregistration.k8s.io/apiservices/; range_end:/kubernetes.io/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:37737; }","duration":"196.675491ms","start":"2026-01-20T10:44:14.034877Z","end":"2026-01-20T10:44:14.231553Z","steps":["trace[1960484916] 'agreement among raft nodes before linearized reading' (duration: 
196.593959ms)"],"step_count":1} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.231416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"455.244434ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/flowschemas/\" range_end:\"/kubernetes.io/flowschemas0\" count_only:true ","response":"range_response_count:0 size:8"} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:14.231704Z","caller":"traceutil/trace.go:171","msg":"trace[925473937] range","detail":"{range_begin:/kubernetes.io/flowschemas/; range_end:/kubernetes.io/flowschemas0; response_count:0; response_revision:37737; }","duration":"455.561513ms","start":"2026-01-20T10:44:13.776133Z","end":"2026-01-20T10:44:14.231694Z","steps":["trace[925473937] 'agreement among raft nodes before linearized reading' (duration: 455.175372ms)"],"step_count":1} 2026-01-20T10:44:14.231853401+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:14.231762Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:13.776069Z","time spent":"455.680797ms","remote":"192.168.126.11:44286","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":25,"response size":31,"request content":"key:\"/kubernetes.io/flowschemas/\" range_end:\"/kubernetes.io/flowschemas0\" count_only:true "} 2026-01-20T10:44:21.828996587+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:21.828832Z","caller":"traceutil/trace.go:171","msg":"trace[821902851] linearizableReadLoop","detail":"{readStateIndex:41524; appliedIndex:41524; }","duration":"293.412672ms","start":"2026-01-20T10:44:21.535389Z","end":"2026-01-20T10:44:21.828802Z","steps":["trace[821902851] 'read index received' (duration: 293.403192ms)","trace[821902851] 'applied index is now lower than readState.Index' (duration: 7.4µs)"],"step_count":2} 2026-01-20T10:44:21.828996587+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:44:21.828837Z","caller":"traceutil/trace.go:171","msg":"trace[1150916004] transaction","detail":"{read_only:false; response_revision:37750; number_of_response:1; }","duration":"346.885076ms","start":"2026-01-20T10:44:21.481922Z","end":"2026-01-20T10:44:21.828807Z","steps":["trace[1150916004] 'process raft request' (duration: 346.676ms)"],"step_count":1} 2026-01-20T10:44:21.829194363+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:21.829067Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:21.4819Z","time spent":"347.060351ms","remote":"[::1]:33494","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":546,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2026-01-20T10:44:21.829214243+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:21.829129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.72431ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/certificatesigningrequests/\" range_end:\"/kubernetes.io/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:8"} 2026-01-20T10:44:21.829344967+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:21.829219Z","caller":"traceutil/trace.go:171","msg":"trace[845119955] range","detail":"{range_begin:/kubernetes.io/certificatesigningrequests/; range_end:/kubernetes.io/certificatesigningrequests0; response_count:0; response_revision:37750; }","duration":"293.828913ms","start":"2026-01-20T10:44:21.535375Z","end":"2026-01-20T10:44:21.829203Z","steps":["trace[845119955] 'agreement among raft nodes before linearized reading' (duration: 293.691179ms)"],"step_count":1} 2026-01-20T10:44:22.565886133+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:22.565715Z","caller":"traceutil/trace.go:171","msg":"trace[1722037559] linearizableReadLoop","detail":"{readStateIndex:41525; 
appliedIndex:41524; }","duration":"397.72737ms","start":"2026-01-20T10:44:22.16796Z","end":"2026-01-20T10:44:22.565687Z","steps":["trace[1722037559] 'read index received' (duration: 318.494134ms)","trace[1722037559] 'applied index is now lower than readState.Index' (duration: 79.231256ms)"],"step_count":2} 2026-01-20T10:44:22.566280824+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:22.566118Z","caller":"traceutil/trace.go:171","msg":"trace[1410293860] transaction","detail":"{read_only:false; response_revision:37751; number_of_response:1; }","duration":"811.825385ms","start":"2026-01-20T10:44:21.754264Z","end":"2026-01-20T10:44:22.566089Z","steps":["trace[1410293860] 'process raft request' (duration: 732.171327ms)","trace[1410293860] 'compare' (duration: 78.692051ms)"],"step_count":2} 2026-01-20T10:44:22.566396407+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:22.566292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"398.310845ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"} 2026-01-20T10:44:22.566462689+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:22.566401Z","caller":"traceutil/trace.go:171","msg":"trace[1011690833] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37751; }","duration":"398.433449ms","start":"2026-01-20T10:44:22.167941Z","end":"2026-01-20T10:44:22.566375Z","steps":["trace[1011690833] 'agreement among raft nodes before linearized reading' (duration: 398.248524ms)"],"step_count":1} 2026-01-20T10:44:22.566537551+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:22.56648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:22.167878Z","time spent":"398.588663ms","remote":"[::1]:33292","response type":"/etcdserverpb.KV/Range","request count":0,"request size":23,"response count":0,"response size":29,"request 
content":"key:\"/kubernetes.io/health\" "} 2026-01-20T10:44:22.566678215+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:22.56661Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:21.754231Z","time spent":"812.111373ms","remote":"[::1]:33760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5126,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2026-01-20T10:44:22.571224779+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:22.571072Z","caller":"traceutil/trace.go:171","msg":"trace[800008854] transaction","detail":"{read_only:false; response_revision:37752; number_of_response:1; }","duration":"389.971588ms","start":"2026-01-20T10:44:22.181078Z","end":"2026-01-20T10:44:22.57105Z","steps":["trace[800008854] 'process raft request' (duration: 389.729962ms)"],"step_count":1} 2026-01-20T10:44:22.571270430+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:22.571203Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:22.181042Z","time spent":"390.081151ms","remote":"192.168.126.11:44366","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5116,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2026-01-20T10:44:22.571270430+00:00 stderr F {"level":"info","ts":"2026-01-20T10:44:22.571225Z","caller":"traceutil/trace.go:171","msg":"trace[2024589652] transaction","detail":"{read_only:false; response_revision:37753; number_of_response:1; }","duration":"388.993502ms","start":"2026-01-20T10:44:22.182214Z","end":"2026-01-20T10:44:22.571207Z","steps":["trace[2024589652] 'process raft request' (duration: 388.709464ms)"],"step_count":1} 2026-01-20T10:44:22.571377373+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:44:22.571316Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:44:22.182199Z","time 
spent":"389.069554ms","remote":"[::1]:33760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5131,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2026-01-20T10:45:10.988233235+00:00 stderr F {"level":"info","ts":"2026-01-20T10:45:10.988074Z","caller":"traceutil/trace.go:171","msg":"trace[276476715] linearizableReadLoop","detail":"{readStateIndex:41611; appliedIndex:41610; }","duration":"164.87755ms","start":"2026-01-20T10:45:10.823161Z","end":"2026-01-20T10:45:10.988039Z","steps":["trace[276476715] 'read index received' (duration: 164.381895ms)","trace[276476715] 'applied index is now lower than readState.Index' (duration: 493.845µs)"],"step_count":2} 2026-01-20T10:45:10.988324878+00:00 stderr F {"level":"info","ts":"2026-01-20T10:45:10.988207Z","caller":"traceutil/trace.go:171","msg":"trace[181253532] transaction","detail":"{read_only:false; response_revision:37827; number_of_response:1; }","duration":"318.144278ms","start":"2026-01-20T10:45:10.670019Z","end":"2026-01-20T10:45:10.988163Z","steps":["trace[181253532] 'process raft request' (duration: 317.657224ms)"],"step_count":1} 2026-01-20T10:45:10.988459902+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:45:10.988408Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2026-01-20T10:45:10.669993Z","time spent":"318.306143ms","remote":"[::1]:33850","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":8767,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2026-01-20T10:45:10.988594386+00:00 stderr F {"level":"info","ts":"2026-01-20T10:45:10.9885Z","caller":"traceutil/trace.go:171","msg":"trace[1846440548] range","detail":"{range_begin:/kubernetes.io/quota.openshift.io/clusterresourcequotas/; range_end:/kubernetes.io/quota.openshift.io/clusterresourcequotas0; response_count:0; response_revision:37827; 
}","duration":"108.185239ms","start":"2026-01-20T10:45:10.880285Z","end":"2026-01-20T10:45:10.988471Z","steps":["trace[1846440548] 'agreement among raft nodes before linearized reading' (duration: 108.012914ms)"],"step_count":1} 2026-01-20T10:45:10.988594386+00:00 stderr F {"level":"info","ts":"2026-01-20T10:45:10.988537Z","caller":"traceutil/trace.go:171","msg":"trace[1500998922] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/runtimeclasses.v1.node.k8s.io; range_end:; response_count:1; response_revision:37827; }","duration":"165.365224ms","start":"2026-01-20T10:45:10.823156Z","end":"2026-01-20T10:45:10.988521Z","steps":["trace[1500998922] 'agreement among raft nodes before linearized reading' (duration: 165.064955ms)"],"step_count":1} 2026-01-20T10:45:11.235215919+00:00 stderr F {"level":"info","ts":"2026-01-20T10:45:11.235051Z","caller":"traceutil/trace.go:171","msg":"trace[1812874337] transaction","detail":"{read_only:false; response_revision:37828; number_of_response:1; }","duration":"229.043759ms","start":"2026-01-20T10:45:11.00597Z","end":"2026-01-20T10:45:11.235014Z","steps":["trace[1812874337] 'process raft request' (duration: 79.5586ms)","trace[1812874337] 'compare' (duration: 149.357335ms)"],"step_count":2} 2026-01-20T10:45:43.310107967+00:00 stderr F {"level":"info","ts":"2026-01-20T10:45:43.310003Z","caller":"traceutil/trace.go:171","msg":"trace[666981337] linearizableReadLoop","detail":"{readStateIndex:41676; appliedIndex:41675; }","duration":"139.565651ms","start":"2026-01-20T10:45:43.170415Z","end":"2026-01-20T10:45:43.309981Z","steps":["trace[666981337] 'read index received' (duration: 137.395753ms)","trace[666981337] 'applied index is now lower than readState.Index' (duration: 2.168808ms)"],"step_count":2} 2026-01-20T10:45:43.310153778+00:00 stderr F {"level":"info","ts":"2026-01-20T10:45:43.310115Z","caller":"traceutil/trace.go:171","msg":"trace[1465718862] 
range","detail":"{range_begin:/kubernetes.io/certificatesigningrequests/; range_end:/kubernetes.io/certificatesigningrequests0; response_count:0; response_revision:37885; }","duration":"139.695744ms","start":"2026-01-20T10:45:43.170409Z","end":"2026-01-20T10:45:43.310104Z","steps":["trace[1465718862] 'agreement among raft nodes before linearized reading' (duration: 139.660503ms)"],"step_count":1} 2026-01-20T10:46:12.239924062+00:00 stderr F {"level":"info","ts":"2026-01-20T10:46:12.239652Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"} 2026-01-20T10:46:12.239924062+00:00 stderr F {"level":"info","ts":"2026-01-20T10:46:12.239829Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"crc","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.126.11:2380"],"advertise-client-urls":["https://192.168.126.11:2379"]} 2026-01-20T10:46:12.240479307+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:46:12.239996Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp [::]:2379: use of closed network connection"} 2026-01-20T10:46:12.240479307+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:46:12.24028Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp [::]:2379: use of closed network connection"} 2026-01-20T10:46:12.316628655+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:46:12.316495Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept unix 192.168.126.11:0: use of closed network connection"} 2026-01-20T10:46:12.316685107+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:46:12.316617Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept unix 192.168.126.11:0: use of closed network connection"} 2026-01-20T10:46:12.317911360+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:46:12.317817Z","caller":"etcdserver/server.go:1522","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d44fc94b15474c4c","current-leader-member-id":"d44fc94b15474c4c"} 2026-01-20T10:46:12.354866289+00:00 stderr F {"level":"info","ts":"2026-01-20T10:46:12.354666Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"[::]:2380"} 2026-01-20T10:46:12.355024193+00:00 stderr F {"level":"info","ts":"2026-01-20T10:46:12.354976Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"[::]:2380"} 2026-01-20T10:46:12.355201508+00:00 stderr F {"level":"info","ts":"2026-01-20T10:46:12.355008Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"crc","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.126.11:2380"],"advertise-client-urls":["https://192.168.126.11:2379"]} ././@LongLink0000644000000000000000000000022500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/0.log.gzhome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000327347215133657716033032 0ustar zuulzuul‹Î_oi0.logì]{sÚH¶ÿûÞO¡âŸ™Ý± _ê–H¹j‡I\ë8¹¶3MR” V$F~ÌÔ~¬ûî'»ç´#ã¹[w“TbZ­£þ÷9Ý"Œ0Ç&®Mù5õÚB´)mIq]% ¤Mˆ•f}$ÖÖ‘¾Õ£F»qç'Q㨑¥ð3«"á¸Ì“ÿ€)£x8Ô LÓYзƒQ¨£ †42÷üo·¼é4)o%:Kºa”é$Г,NšÃ¸-̧C˜j®‡ÑЊÖ4ò“+Œnão:±~8Ò}\Ÿ u6{šŽú“è¥íV‹Ü~iŸjÒ¢kRé6)“MJÛŒ+nö³L'p79jà——LËüܶ‚¸¯­cëöû£0Òû@ë¾î[}0>ò3fVÏùQk*îù0ÄQ¤ƒ,Œ£±Õ‘‚Ì—F–øQ:‰“¬muð’uwè¬~èÃc‡m󃕫ŜðÒ=˜¦ºÿ¥ÑøçVJKRÊ<ÊÕªÀ;óÅfú>ƒ%æÐ-]`¯¤F=å)æqw•ZßÏü7ab…©5It Š`Áâ‚$¨ ÛŒƸðÄ2™…°­,†[50ÝBY[¹fY½ife7ÚJur«Í³üL룲„òðw¸ÙO­±÷`Bžß°zz'úâ'ªL0dóYOc®]1—0ÏY¬=†u}´~öCCžf¡ˆS ¥v»Äò£¾åyÊEh= RiÄÖ¬~•ÔqˆCV¹ü´ÙÀšÑ1iJÒ§Ý *•ä·þÄ[Ó‡´¸Sm¸…#©·Ì@ÐÈëÓ7Ýÿúôáú¤ûúäôï‹7Ý׿^w®Ž]Çõ<.=àÉùy×R?œ]\_ßdÙ]F…}í±Ø³÷'o;Ç¿Mý‡f·â‰ŽÒ›pÙ…xí¾¾mÅÁľMbƒöáÀßÒŸ9²ípå tÏ÷Òï{´ç¸:àÌó)]Ÿ¹•Š+?вçóPÂuÝë 
Ùùí¼’J®ÇsoVfIÉÏ¥:@*TRŠŽ|¨ì¸[@£¹,þž÷l£*‡SB7nòçLþEf0ùm×’ú’½ëZüè—.CU*Cl;5~ÂU¹ºüâ¶?×&äòk_Â`}’š0„]†$µO€Bû{­Ò)kØÞ'¦q›Èú~§qÙÕ…Ÿ@Èþd%cÖªûnDœ!Á1Çêë[ã¦jh­LêFy—©k?vIÉÊð­(÷"¯”£1Ú¯ä²óm§)˜‘Ø©¡J>ô‹®Ð6$Q°µ±ÔÈíå $µ|‹*Yúñìö:ñ©KÑwê²3]¡Ë8üŒ.ã%uoQät£HÄ“aë®±Áxí&:IÅ»°¿øyUÃ!ˆþ@ dɱ¬–B–ê9¯Ù¬j¤TÕ¤~#3à—çŽá—œ© ­Dר$ìQ`Š®eܸ#’,ðtÆ cdï¯ørãáñæãÝêùîãÃËëó½±µ¾Y}NáÓÄÒq©»Ò/á«î &%«½äm«Ê¿D¦ª,±'A.õ Ùáâë)æRÖFdŽ‚|•Úh– O»ƒýÅ=Ïaq—QéŒÛ6GÁ,ߨB‡ëÔl—uÉPܬ­Ùs\»Õš¸¸ð« c¢K™Âû>¤¬xÃmùnYùÕQ êã£v{¤¦¸Àèg]ˆDR5RjPH‘‚¯™§ IœØ¨¥æX£Ÿ0! õ„qňeD±D{„ž¸?ZÉKÝVJa‚ÞçÛ(âD"ZZosS¨ì;î8Ð8±¨³ g”J?göDü¾~º¼{ýöùðõîñÓíÃó§ÍÓÃC&ø˜°t ?y€@˜p*üàKp$ˆ¢’Ñ"ÖêQÍ_@]@ä¿´•_íÇUõ‚¨ñMXX$&þe{ùùL(@E¼$òãnŠÞëQ7E‰—Ÿ4 ÎnŠ˜s}A¹º™ã^Æ—¶p-ysÚÍÚ8…κ¢þÕÌoË‹äXÔ¾obg‚ù'…²}z€å§<(Øý¹¸_¯×|K«ÇߟÖë÷r`¸³¶€„“ W Áz>ŒCnrUÖ·gÆh5us½&(¥Ý\ðk¢ÂzÝy}ƒoR'>;zO§À·—o¯„˜#šã¶&~iðí»œ†ôÈâ8Q¸òC‚P‚‚Ôþ` lœá±Í¦óm ƒó¸@+kÇ¡Êêãa.OÊÜþ£d–1`ŠÑ’È5»<1Hr•S_.ÌÅÜQ¬tfÖÊuö8“c• ècð#@‹Ö²:ûbkO…9¦vÉû¦Ôoa°†GAš.¡OKgÝ­H”(ÓÇJÖ¢8¼.ÐæJLAˆO°6†©P[9£©¡€¦‹j/¹-XS*Έ)¨â5Å5B§Å5‚é´o5«ŸÅ&§±Ã`¥3—îêö¬.—œ¦`¦.‰î ºf¸D[;d°AE’킪Åàò=·%00Ìßä–±óè¢Çk¶,üüÍÜÅÕËÍã×Ïê=¾E:÷Õí6¸¡Õ÷—Oß?Nò†8,y=Uþ+WSŒL‘[Öµ’²Æ9bä `tJE@€i ÔþËuTmŽÖ _œBqË\2ƒÈÖã=AkíQϾ²ô•#u²b­sÅ’Í®å*nçI7-ÙŸ¬d,AR‚´É.‘}ùãõ‡BpÙòú%a›$.›fßÒÞ–×÷Ü#Ja¦‘zeé/É Ô[¾<²Ô3F › :ŸÉ¾Cl¬Ž¬ØÃ %Æž #K’eslî\ÚŒ:´‘dÑIu[‚ Æwu‘ÌmôRe× ŸÚµB§ãôê€jö࣠¸ËyûíY³À7<͸aK¾³7‚UãÃ%š [«&S, bÃv.9H­(X<‘Õµ*C)méխЛd\$¢…‰GE"Å\òmp°’2\U1ÞZޏ:ƒ%²º0©Ko3‡¡œÓöÖ6Ø ÀN§TáȦ$^$…ÆæÎÛ²U·Vo§:Íø¿z|ø¸µŠßíæ²¶"¡5­d½úãöé©*ÅÛ‹Þ‰q§k»ÀEÀC£R–svô€¨s ~Q¸DÏúÄQ'ªÚØ¥úy¶„btMc+PûÛy2c±E¤¿}~ýöòA¿ýýËóï«Ç»×ç‡õËjs£–ÖÓêÍý3ÔÉÅ©{庄1Ê…Â)²¿,ö.#d+”¤ùûÖ½Wóâå·ûç/ªUôŒ_žìôË«ÊÍñ¼ÐÏ'Æ4AèoòØ"Ä5Ša’f%RúšC­¬Î§¨¶æ(ÙYUß;ÏvSØ­¤›‚Y=qqðš;ïÐÞ°Cµî·%ë*9=’øÐÌ·TZÈ©ŸÕykš£|KL>^ìÝ<泿û“•°ÍùŽ™#¾#¬‘ïEõc©ú· «™AS ³Åùªž'ŠUê—¸*ƒ?Á©ÁŸ2óÖ¼ëÐbKiœ¿‰wS%/ñ—w…ï'÷òý0ãö¾ØÀt“¦Ã'ûšÌýèºècš$À5IA_#I¡"ªÑ<ƒø^6úáð¯«õóºJc'Ÿ•(a!,“(E ‰ÙóºÌaÉyK¾û²Òpÿ¿ÎN«)Éàêæ=¾-AX ƒuGÅš$——|ˆ úµL4‘•]ú‹*[:5|v¥ÀYF¾5ß‚p–qhPÓ ñŽßp‰¶ÁhAWO;½2Š »sA2ë±®—ÛiÒü).cûØ·ŠºZŠëñfýIÒ¢n÷÷îåÑ×w°óOÂ﫪Cp«fÄøu ®j¯$¥4»F·j:í*@›\’ìUW÷ëò Ä÷«îxôªï"'Ó}ß 5ǰtÝ4ª&ç8³g¸­ÁÁ‚õÕodö|ú ÔŽ,ÆÁx•g ‹ê«Ã¶kiÍ3ÐJ¼X‚«ÈQª–ÔkA Û<³íå,ëì¨/ŒaüÎÚh‚0®ž);·t²’vêP«ÆQ& 
.¬‘÷ÌB瀣\Yí†pT°ÀE&Héô"ƒã.Ä9]3G™|ú9ÊÂK>xQK›{ûï–nW(/Ÿ>í |7|«˜wšÆJiÌe¶9Ìt -™é\fÿÙd¨Š¶õŸ*Ÿ4¿-j²`ɶ„UîeÜÊ…,ƒµeÁWè8ú$t _·'å\äy”‚±Ð ] œûg Vh ŒÄ.¡yr¥(<—DöU³qÅ%§º­Qåz,b'×9t„a\$áQ‘ˆQòCmßV EÕJÃóãƒE²JWù KSøV¡æÚøÍTqåÕ¥°’5Šn±YÈzwÖ=möY9y+T@QhåEˆÂiT|³­”š#4¦Ûð ÔͲ½YI3'l’“ãwerÉBµÑ5&97OƒªÒá_VŒ÷$ÌÈ·«§?ÿng“V›ÐcòX Ì~\㤘-¾Phù`Ëý`éPùäÃ>²GˆB¢>M´Žì*ßÄ¥Ÿ±5½~p6©±@:$5!Ç¥Â(ª`ä¬RÐm–Ò ËØ«2I¥Rƒ}ë§ÆN¨•š~ Œ5‰WbpŽZ ¥Æ†K„@iÌþDu$ÙÉE¾nî³½vG+J½b—œ…ïjûÏNA½XŒëù¦K`ªë‘œ"’8×Ê8‰¥Œ×aˆéòÛ þT,^\fœž|¾ÏúþhEŒsul¡Ãœùp‘¼•h“à "•Z‰¶e•6§ÿµð;V¥KUå¹ì¹Ê3´r–cÏ0W”dwŽ.'Íûs(ƒú1ǘí¤?8LA‘,tõ’ÁAbl°B“o¾óAå'I8OMR@ÁWu<#Hµš#«³"²D©%B¿„'ù&‚ ¸ ˆ pœ8ÿ~óùƒþoçVà}?÷‹âÜo÷››×›—Oë}ŽÜXûwïÏ¥Èý¹ýƒ$…í ¾~ÿº„nþÄ¿‹z‘<û 7e³,7Ã-Ñ~þýîõëgÅ®Ïw›O7_Z=?~¿zY“¿¿»Á@§©ÄØœ¨ÛÈ ø¯â¸lð¿n‡fÀ½É,¨çî¡V«o—ð‘ÓjbUDU¶]ÊDýsåkªÒÄGŽ1^VÅsõáb f )§‰g)ûwVr¬ö—ãaAä`‰Æ‚Èh^I±‰ßŒ8öMb€U`õŠ=Yw÷f—®ô=p/CLøËÝöàÙ¸íà`%IELŽ AÉ;tV<­î™ÇIìNhõŽMÊN€ͶoÉŸiKØó=ÀkfÛüœlû_Jsy¥yN=Zw;æ.Š.2»U¨w@ýq«“M×+=Ó)ýèÙá…°¢{Zß*óQVÏ·›Íí´(j§œg”l±ct _¡ÀT“Ñã%ý³·.žÁ9}©²&ŽœU˜!Ȉ´ õÙW%JÎаmG'â ãýÖôJ%Á¦ûîÅ/­1FÁœbŒÄ"˜àõ¤ÞCIA©â'×”.ƒ8g™Ö— 61?.ý£%KÖþëRÎ5Ûý«¶n’t,¾þ ÎŽ‹U/³bTð›>çélÚI¾Ñ¬°Ÿd„M‚⦌%†¹s árbxK&È¿ì{'à l»v±¼¨»µ$BÐtAÅàžÊ; ?,6>¥àR:SÓ/4kÉP┄T Àí(p–«HÆÖ&®âQ£Ø9ÐU\êB@Œ‹vêÍÔŒ|è§f½Þ|||x~þòür7Ú¹³l‘Űٲ]ÀMØŒ5õÁ>xˆd­ó'bs!Ñ/@w!Å[½—¿äÉ¥ ©³÷ðiܼÆ|Ážˆ3 »íÚQ¤&!;1¤˜š0àø‰íü浑yÜ:€v;²:úÝ‹¯<0 w‹ÛÙM ÿ³Áè¬x$6ÂMâ!iþ)Fê™v‘C”+ 1*M7er_ w²Mˆžj¢þ‘£¸É1“rÍü0Ö:qaÔî¶æÒŽFÑ9a® s@‘9ZÂꮽUÙátö^ÕP“Ý-‰.ìT·<¦Ó²šµ”<´ÝŸ¾¼>ÜïžàbÙ:mvÚÙ”nRMïƒk …z/©f"ƒjÇŽdºVHú˦Z!Ý[¬5“E›„é”d”ÇðàÜX‡=)牅¢%ÍȦàA Ôä±ùÂòYÂ)›%´¾ÎÓEk‰æµÖBѳ SÌ~ `:+$`÷]£ÍÆ ( /¦ÏÄó•Ýú÷§%5ôþ‡SÀÑQhRPó– ƒš÷NÐÏàÅh<Ñ{¸Mx @úk´²(¾É÷E51WY4 Ù,jÀÛ@SÔQó«I  “…Õ€Q9ärbÊ'ä ŒÊÄc#!ËœèŸd~ð¿*Àä„@ì™ZŒÕ‰Q["-àš£µþ`vqöòXuæüËcßZÚ}a…›ÛÛ5ÃfV미Þì<0($‹;©û!Ÿ`žqÛV±¿Â„€ÆýN+¤-º —ÕjAÈU=‘ö1$¬z%!žTÒF™yv±ëÏ?öNºèwÏé»þ¬HÙ÷‹ƒÃôD×Ya:è-z°F[A—˜ìQ]iqå\’À5ëXm$ ¾±”6ÄÒdcÄ€eô¹²tJEÓ¨XäÌ ÁÉJÚV‘ï‚NéàõÜp3 …ö2x^n(F&\Ô:ÚRߟZGÆ?öðÖÞõÄ:R'yÆ Z‘Ÿ8Sç/½ú‹èÕ%MLðiþZ[…HË–_Ý?ß¼¨¾~U&u»¾Ý¿V¿KÿýGÅðÏQoèÝæ¡(Ü3}Á% æbìnDöÌ3šÁª“T@¯ØS÷mÎÆû=zøújæÚËêFOfÿÿú´™–ãCï—4L!Äš² ›VªÚä’ºÉWábõÝä{Pê ¥#“ÈÐ"уÊF"CCÑ?¶”Ôcð³ÚäçÒŒál²E¦nŒ”ÜXt_xk’É9Ù0ÀñNm{,r¢`\4 g-ˆ3ƒd˜ç—läð¬‚Qù, 
[ÅzOGNÚ]@µŠ™³5œÖ¸¸¾³µtL@WŒ=ùöš+gþpî+\ÃæöV$¥ÕëúÓŸ8íI½Ú§ç]Â6<÷AjÌ8š¢ãƦÏíÄœ¾¹²rÎøV  cðíçæè )WßÜY·™Iö™j%”6ô¶ÒÅÑ;ÆíË#ôæ.š¿WñÞʪƒì‰T­÷6~œcµªYRÑkbuò¸4«™ãÌË\è’ZFŽÏT‚GuþÿÆ6útó´¾û`Uß^N¹¾:I|®2Nûê”ï«lâó<3¢º†’Z˜Bšß/ŠÖ]ÖQIþ9¯5¾~þ¦Ê¡¥z·Â ±-ëÄlôÍbMØÜˆ‡è­4f¶oä®|›ñFëâ€HF¿ö2ÇÐêW%€¿<ŒcK^Ìë×=ýþÞïò··+YÛ« ÖíoŠ– ª¿¢k‚Þ>. ½FܘthÉ%J1Æ3Ð+³áz'ó?˜ýIh“‘ïm¾_”ˆ\û»„—š`Kdb±ÃUfÑ—OˆFŠ#±ïºDJ˜K—»?¸£ìì®áÉŠZðÇN'x*¼Ç[¢“% ⿹{¶å¸qå~eë¼äÅC}Ðû˜—Tª’oHÉ’ÖVKr$Y{üaù|YºIJÃ11$’³>YWmI£ Ñ÷{¯Â‡Šæ<² >@ÍúD>ŽªDɇÌÒi¢“Ì¢*Y¯išC•m$ËÍÒ?^¦`àªWÁ­ ûd˜þàˆUE¢lñŸ˜V v÷Jö ¿Š  j_Ž~±bÉv¬ä^_¼g64ÙƒŸgß”_”u¼Z û¢Ö÷!ž‰ž’­åÔx¥\ç— [ >’ê)ý¾BZ³XrÇ{¿.’Òç°ÏW÷ß,x,<ó‡Ã§kŠtÃ7áðçó—?«v+´ b´uÍ5ª‚`™€9b‚~ ÊÏ}WG¸m²TKí/±\:,Å+˸@¢«mÔ뎬Øì¼ka­†P<’™CckSæÖ#y[}$iRmt÷ö¹|æàbE›m§:· kê èMVa¡‚Õc"ÇjÀÈÚ½õóStk¸¹%†Ñr¥¨,(hí,<‹wv9Õ0ÍÜL¡«ZBj—:W=ذ»œÊ³š¡ Þfú)Õ×~ydøE¤ &C£…ÌÊ哳˜ôÙù„ÃÛ”lØUŸ"¹Ó]ªÃ3V™~Ì•÷ÿû?…¦«®Ú‹| õ„`ë]égÙyÛ"Ku„2„³ûÒR  0K!1NB²@I~«îñ6%.€…ù}ŸÙø;c]£¨-©Z:„;J¨)Ìd&ô‰ÖêwtÅú½ xž*”ª0ÌREÏ£Eï7+Qð Ž¡Wõs²DoxF¾O ÔŸIå6C6ÖìY h‰ÇèÖY}õÚáúêãñÇCá‘OáÃë×Oÿx­2 Œ†F2DÅjJ@P$C&ë¤{ȹ\4÷šM¬×.ÎåÒz5ïjC_;ÍŽ"\c8o¥÷jWi¢±6 Ì*bòsÍzµÓäÚÓþ²]…7¸M6V> Ô­Èù;c6³#•nàÙˆôˆš÷!u–W‡„†BuÎ ìÓ¬6PªPAÃOúÎç—§Û-šª¾ |yªÓ!+=”ÉŽ"Qpe7&à±Q‰³\ê²UPãÚÇXÛÿÐÁ?—Ðl’ÊOV2¡<¸ëhìlÕüÛ‹×7w%-*…§ì0U« …“üÜ¡jVbÛ*5o‚lÑòúÒv‡ ÈK3û!U&n'‡3™}¤`©ìAeÊŽÇÀ±:³¯ìlNiº4ó3îZTÕ×w±zÔñõÀgZåxãêç°ñ|­¿^øì(QÔK¦ Á¦Ä'ĺ8‘Œ3ÅQ2;ÎFëˆ>à,_ûvé_—r>Þà2ó„4É–ZŸG¬Ë#©=Ã¥áÂí¨ U‰¢-t6/f­WJË<0Rs+nL‘§É:þâÙYBÇ‹•ÌÒ‡b;þÂ"}”¹Ûl—tËÝÓžëDÏ7ßïf7 Œ?ð [i¡j^g)âÒÂË 0',² $Ë/È iŽ1 éȳ ù²Êt6pÂì±IMF gÚö¶ÃTò»Ð[Ñ)å\C„íæx±ã Ìñ®zh£ í ¯Ÿ®ÇÓ¸BX8kòðÁL-â°xrÖäÙgæ_mG€)ìL€Æèq”ç[5_â ¼í„ÌJœ°ÊØKkí‰í°GûF²)ˆ^/ØöxuýE…ú¡XüÓzϧOW×µÔþñãpýt÷xè8©¼É1p¢]-‚U³*8ªÆŒâÖÖë,]Up6˜Q:Q¯¨ 0C8YñуÑçç\áTÑÒˆÉ5I}ŠK‡dB¤ýC2™-À†²ìJœ\SF¶jkK‘œŠö^Øp€ÚæÆ)‘C1«%Þ¨¢6ßábMÀV½Ïä1ŸÁ¾ðçë/·7ßõÛÏþ¡žå3E^ÎÙ¼Äv~Ë$ËS#”$N¶ë!˜MÛ ´7 bn—µ[’Cɦ¿s¬ÛjÒÁÎn5yÅÿê~2—ÞiäÏ«»½Áoz³ß,àöï}´êG‡:ûƒ¾þòôã®ÕÞÏ ¢CÏV‡;ùOC`£øv „~à`ââñ{;é¬Õä9 x[2Ž=Öå(ߎpXS³â¼ 0JXLn\› Îàˆšì*%ÅÉÚ„™*)ÓÄH•þ® 1—`\f>©ZÇ ×OzU†G¬Š@z„€Æ¢\õõ« –Ú#â›{ ƒŠ>ï/9¿µÝñ®zN~;Änøߤpˆ÷Ÿ™© žeER]C•›ßP k:†½2‰[[õÞ%xò•ÙyÔaçá»FÚmÙ ˜¡P‡4+í#åû·Ž0Ûbe‰±D¿,†Â¶+mƒ‡´ƒ·íº.†}÷]}y|ÖçV`þDG݈—·¿Ïo^pÒ^fý¨LÉיּÕÓ“ÜRF¥^QW bÞ¬GÙÔ¬—½Jê/£*Α@0ßW™û–µ¨i°s¼, 
™/áÀ m°„ýVvl~’8Á4Ø!‚ sº1äôŽðÜ"°£Ò“ …+ J©i^ À¸wœ@áìÇ®@òÐ8g‰Ñ|“NâmÇomìü:º)CFOQåµ$¨ wG ƒŠo ÎsKUªÜgRå~œ*§¶ ÁƑͰº4ê0#OpzwUõJsC׎w)èÕIÜÌød¤Ëðˆu™ò¶†b(L”oFj!W¬C´qÂ!0®mÖ‘Ò~òÜ€šYäçI"`¢yšÈÉþÁ½ zuŒL1Dê&+Èœ‘íßWT¦nŠêI…УÚÓö‘~Õ5Ž”ÃÂ…k"º4çÒÚ\Dw^ªŽü* 3Ñ®Z·¥ï~ûiN9$"Šç™:L– —¬Ûpã tg¼/:Õ¢úrhûÅÏ}3Dóê›ëÇÇ'µÆ†Á¸Q#)¶–5rlõƒž ëÒ·zŽlH)7hõŒ+ZA-ýäˆÝú w)QUä“é#6yçS"ðE¦ á¬bÌi¢ ¶ÈG¨¦Ò‡¦Â°[ŠCÙ]ÚÌT?‡Š(ˆ/[–NEûÒT®)÷9Qg˜œØÜ·é'a«ŽŸˆªÄv­æ1÷øúÐ<>}þx« ôü|÷mÎ[Ë~f‡˜arfÅ«upMÀÈ«Õêeq‚ø¡M±¡8µaíȨš§z]o#ÍaFœ*H²»†wÞdì’z#’0;„ßïŠq©©ãR™ƒ¾ 1žg¢‰``žƒŠÃ~¹Á*-Ï­²ë4DB?O!ù€Œ¶è ×ÇÉyÆM)¤@`#ù½5®D””m³]€x&êiïö¯UA¿}Æ®q—Ñ*˜cܵ‚¯Ôé;ñžo_Ì—ù¤¢Ë7ýk'Ÿ\ ØW;«Ðª’ÕÎļ_Zs»5,·Rñ-%%ŠgU¼‹Ñùyóüp õ@§ã/-À{¶—oᬈ Ù™ Ķh)+ÁÙöÛIc‘§´ q³ƒ<ÙÕ̲½äF×Ĥ~7_¾>ËÞÓW¼½ZñìãCW÷øT—ˆŸ8o‡:®íÐ*ÛÚÅ’ ûn9ígÉC jÜŽoùxúëáþÊr˜‹”.»´«Îe©ñˆ“mKæM*覘`qUݬr«TJ$ð‘çoG2Ù%Ó>e7 !»…N6ÑÆ/ïvǪU8Nô¿È¸Ô”«bíì54úÈi>$úOfI#rvµÎ:ÐF+}"H—¶×RJ{ÛkÊœOí5ì §3~ 7-¹÷x™¾¿ÚšÈ™8OÞø£’hº#\¿Ì°t€K†uØ 럓Ôf»#ÐCMS*IPwRRU ’¸q R’‘d›Žq&ϧ!§/NU¢w—ÉŽ¢Üf¾I³&yÐOœ¬Œž±r¯5j”M ~»X@äÜ,ðº¹ÁlµJâ®Ë¾Ü~½?•2öÊõ%—§ÛoÏw*]îfÛ† OÙÁçéQh•U¼Š—CEé²0¢j'ZZ¹\ ô k»â¥6v >+CÈ‘—2âyV†ŸÛ%0ã†TË7ªR(4¤ºG‹„ã*)ÂöqlŽ©Qý&,¿BûÛÓ’ÕËë¯WÏ}]N?9Í „ß«rİ)_ÎÖã„ØIZÅÖ±jnD´<šsa§v kÜ¡,Ù å&ŽÈ8ËÄgæNa¶ëS{!u[1±­Øt~GÞµà§3ÄqúÑ®LÁj®.½Æ"ˆ”ùl-E΢Yl;2­”Õ¼;šÕÑÂ1š“­ë•6”M3o[εý˜¿ÞbTpêWœnzž±2?O¡,?óv/Žê¸Ud@©Feë­U©9^9Ë{\ü0´9ß¾^=Üb¼V›òæN‡ðUâ§šð_r’!3l\¢é4à;•%ž¥²ü<ª¨¶hOQ2”v»ð"y êC¬#”ši ^±u¿kÕs‹¿B1ï@ž§·Ávh„Q¨@atsr0\M©äþ²˜CÕð6‚^ŸJ%:œ 99b•œWÞm0B±œßаFÝ;;Xܰ𓘿»¿ú|{xÒw>¿<ýøxúë³Ê÷›ôéðÊ÷ôR%߽˒—’"ÅêÂ(³Ô…µÓ@ÚBÀëS[/“¶"ȹUBRá€EVæ`¿VÀ‹c.1™Ç+IHóx¥˜]î|¼Y‘€·Hqá,0²f;ÏêÉ“wX…¶þ«ü/&ï#£_;æ'a±ÿ\CR˜´·ôVI+çú·‹βãàf%xÓ§rÔ‹X„7ï<¯Á›¢½¦¬&2™=)‹Ëj®¾¾|™”¬nÁ#Ë9ªŠtÎË *Ea‹ãúÞP)¹´ùà®›αñ‚©Ð{Ã’zòjJ®bPÆŠRH“ˆÈ$¾Ê³Ü?`™Å›Š«9ÄÊðg©¨Ì`ÇË”8Ú¡!I'ŽöðˆUöF›7Pf~Ñïž5O¬¦ê’êý™¢`5o±¸(¸½Í²±OK%'tGx¡ª 2GœªæŠð˜d¼2z×°¤$3ÚI/â’©ù Vè/›µ*†·)Øá×Îk6Ustž±n4¡¨‘$a7Ø´RXÁ v„Û±V¿Ä&."…ñè ÄÞvñx¦G9ó©éÿh·±·ÿÿ7U¬/ã$õa”¥>d’Ô‡qÑÉ!›¥>ÏœA©„Ó*œpà]J4U'‚ÿßÀßÑÀ¦¹VÉÈ€5æ¼z:Þ)Wú_x¢LKFTëAU“³Ò| yá|673ÓF•˜)%ŸJ}‡îÙ¥˜ÖÉÏÒ®ò³3J¦SôO|‘õŠÿ${À{¬g@~bwØØ¦L£¿õ;íÛ_^V?´²©¼àóâ68ªª4‚¨Dl‹rM÷o¶’¬†|[>ˆóv-«G/aN´—‚@²E˯Ѭ \"YƒÅq<¬³‚Òö hV¼$tûZA×·O/wÜ]Û6ª»ÏmaSû¹zãó´ã¯w÷w/ʆ®ˆi¡ãZõ±€Ïó-8 H«øÖ¨Ú§dÝ=,­4›€íDmÙ`Ç, 
å<½7fU;¢ÌWõ³<í ò«”ÞÁµOD=/Ò"¦AŠq S[ò{gsÉàŒãŽÃTtò–᜘T›6 —+…ö0¡giÙbÓi“§Q³Òl<¨BA°¢{ Eêü…uYöÅÙ-°çy;¹·%€çyÛ.žu…7+É’€k‹§“8ÔðŒì:P9åCè>U_Š æE;¬Fv{D ®½w>ª¨±êÙš8c¦æA2©çgsæ<ÝЈÞÃMâ·»kÈÊîÁmæãŒ"VóØ/¹û;b]m›ŠAu(‹ÃŒ[‘I…q¯^l" œVó|q‚'Ù "š§ Že–&(—O\¬„åõ¡ÔH_š/ë¾9¨.‹´ k¡¦/Ùúç $¦•*ý*§Ÿ^~b òé&ÝÄ›Ãóõëk•ׯ˜•ä‘K¤ÎRD> 6€Ú)V}j0Îõ¥$£š]ÎdL-É´G¨‘{B2÷¯×fß© 2jOïÔÒê©ë—¶ó¡f­9ÕaCÅ‘ååsOjÙMuHƒ¢«öIÛ#T6Qb * +T“87ÊëkùÂBçd&«ïÛl0ÇIÕÔ]ÖI½ß¦$Œ ùvFøIìxÄ*Õ”SrÙ.ìI¾*ÌïÛ†ïñråÌYBÑ@‚2Bñ³„âóúƒën!žì±½šKÅâ©}6´–¬¸ ÙPS| -µAí´¡°ØùˆÑ ž  D4‹Ø|kÓðj%¦R£0ôqâÄf-«2ÂzĉÃ~4ß©^ùû«Mš3¾iU‰-]R†cÝÆwwÏ_” ûÑ úÓñ½v\÷Ëá'µó/´úF§^/&04|¹zþò·ß‘Õ¶²b?}ßõ÷§'+ñ¸ùt0Gþðé‡òùß~Wc]¥r Ñ,A¿ýç¿þmüÑ»‡Ã÷ç÷ÈùÀù÷µËwì N?·«·@7 ´|‚Ç(¡Áêcnÿü­RV# g >¹Ì ZU4X²#H$n¿Ê[eŠ•† áE§w©‹ž®û<Ùû¯[dvÊ€>IFÝ®ÆÐÂ@lÅðÚ±®“pªr¢Ë´¢ª`äÂL°HÂfŽM¹ =вó‹Pù—Žr”XŠ[Ší)•mÛY¦E⛬“8p­‘Õá« ªØMª y¥ÚWóS'ÆãŒá­—RçÚ¥©2Ìþâ”S»ƒ›b@À‡R´ÅªaËT¶öGUe‚b‹(P ÊGbûHœ ß17)Æ™¦’ö"ÑOVªô—…ìÊ¢Ám |$}ªè9tMÄrg¬+´"y¤Òø]w1äà¬"„¾áfiïšš)0ý“´¦ŠËìNãÐø(žC™ÍRæíð#¨¶ð¯ô¡-ƸH`Xäã*: UEŦ¸­ô¡¸†›ÕÚ,†þ8‹WNÿÇÞµnÉmãèW™“?û'-“ ˆKÞ`Ÿb]îÄ}âv{ºíLÿbÝ´/Ékkv<AÜRêe³,&´kóâ~"¢òÃ,M˜„Žè/Eဉ_äÛñâ¹hnV…øañ€e.•-ù—~ÙìƈlÛcÛ,)ËåHËH±YB¼ðϱP87g×YLÓ†³¿G šH&85Ü5”°½G—™wÏþQ9£âÙóÚøˆuÞ™-|Ô\ç7”h*a2›„g`]­½µˆ(³Mo½èoeÁnBœ• €2üÒéjêëŸE‰#/S_ ¢ÐÖs8"½r—›¶Ãé¸Ç—9_Ü,ÞŒJsÓŠÛ*å74ò;€ë)€üøååÔO<þ‡;οZ:Ï™?Ü}ù7üù¡3#ð>0#*ß›^Bæ$Ë 7þœ¸ dúG_îx=U–8/ßv¿ñw ¹·l¼sZeó(5x>Îæ¤“èrÈì᏷úïJøa7‚?ÿ§«!í´°DÜGgÁ§ˆÎU ‡YçI£JÅ7 í¶ÈPü£-)_1ÍÊM…ɾÁ²uu,@Ë'þ{\‡V‰aûN\½ªhßâšÓMµÙ’ž†8–‚ý`qį› àײ©ÒPÐc¢Œ@Aª欳zl‰R)`>Qm‹mqöÕàžo­ÇCgÕ C'§r(,³ » nŸk&¯rÐí¸Ö’£cC –Ü4¢ Gx—zËókfŠ@m9n.丗£êæ ;” fU–ÍãqžRÙýesùÙçt›Š$W´åyUƒŸ±·ÁqݪsÜ^ À1iØ R ¾«‚ ÑÚWk1‘’šá@ù¨‰@¬s"C¹#gtµŠ×eBUr”Z;Ýÿìä;ÏB«Þ³~ûYv¿ £ÆÿkáVOQŒ)¥U65¦ÜR0B‡xŒü?/ÞÒP4ÞžÄ*Vï ³šz•nD¶-.ûl ?x‘&çH‘ó*MFÔG\Fç$Xˆ¸|RÒá^ŸÝÊJÿz9îA´Šã·ß—<ó' ÿñ–Æ»¼D` êîåáãóƒÓ¶iÃíÅ)ë6FNXzsïd•¥§&LBS5 J ›ìj¿$ûâ í—4¯.™•L¿É ‰Z¸=oú…4¤YÓŸË{gGtÜÂô»ê8úi^dúí¼2ˆ#ºõ;…Ñ™éro‘_Y¼­ ¸ˆÌ~?‡›/"»| ]¹tû- Ñ5¹0+Ëiªk B.ô& )æíÁ\ÿ·€¤ .–³Å5æBnèÃÊ9FG¡¿FJÉ0;ï£/霵Ë"~¥9»lô*5wœè±@Š÷ÊgΰÈ*ƒW+J ûÔZéæy ¤Å€<«bxÛ×ãÅë±ñòôùþÝðë௻ݎ?ÐÝãï_v»ÓÛ§ÄzÏOÆYB^ùdÜð ãwâ„·|'nø¸âã09àtLD ¬mõÃø A±2ꀬ±i¥ŽB¡o }Ë@¦VIxé×.Âp¾Þwx¼j*Fw™¯C®´Ÿ0ÿ¹tƺŠq G€¯«.ƹ¶ó%)ß 
<1Zù.¹iñ±ì¿³ù¯§çßg1^&þèºLU®ë,;vu^¥³MKmÁ¡14/Ã$ð’Ôu+sR’ŠvAÅÂÞTc&Üôä"Ú̈`[?öÕn­«BŸÃ—9œJZgà54ìÊòv=G–^¹yϾº× ÌXò,SÍІ‰½ö‡‹ — £›Õ<ÙWEáÚHÃöšÐÿu<"Þ0b=’1]¢M'¤.Dì'η(!W.Bƨo²Ö•ŸíïF”ÑF{×õÓ.û×K’), [·úŽñþdKÓG¨[}Ç•(Õò= N0 17±ýâî…Qªp¾š–›<Þãû¯¯bwGÛݧñßýfFÃþïÝÇ÷/Ÿ><½þxç‚߀Va‰ca›d°´uvYIOsppÂî)8¬1xüŽHÔ0%ëà9ÊþÌZiÁÑZI¢H£@쀖!œ(ª @+§dÍ6WOɺ0ÝãÐ_Êü<”W.Že>oV3;åk»9aˆµlK]ð³ù¡•mýÀÐr'–q…›²M•ËlS *g~/g š @,©‘©ls¸j¾qw¼KÅŒ¬}S ’“¼Â¸;±ãͧ#×f›û‹™$†Ub€ !)D²lRW+o5Äa'*)ɼD„1ÏÊb)$ݬrð1ƒ?ÿ/RÞ-¸Ö”vßÂ/ݵ,ëa•Òí,½zz±_ßyï¿}ºÿþòüýó%vÅ1œywþw_ïŸ-I3ß½sD<;Ðó³»þŒ†‚ü^† öƒÍN/ã9ØÒ8/-å´ôDÐMð¥¨#·%ÕÒ䨥æÂãþ m öþø‰1´¸‚”. ©´*.»YTœuê(o0ÅÊþ²™¡Œ?¼Í¼3@ %BJÏ|Áøˆu™ÑÛ1SÖøXé ö’À–Lã*I°”¼e¤™4£ËZ€Õ[š9vý2uœ‹¤‘!ÏŠSq&stµ ‡@¡Ô%=“‹ñEœã´ËÌŠ~ä*¿S$“7]Ãoû¨†eôÖ`öõF•ŽËe‰WOú÷ý篟ÞÇîñá·³Y´&oQ€ÃwC2i•‰‰³&& %±×ø¹'²m‚–± ¯]8ÊŒ*JX%3‘›š³ÔÇðÌc.š÷_??ýx|õz÷*¦8ß»sùû›‰ŠZÊ)4/*†ÖÓIQ‰åɉ±6 ,ÀaCLV‰ $Ô×™mS5Ø·Z‚ÖðLG¡mËÇÊB¡Óì;¹æ…!Bœ·¥`sD–MÖ„{7Š À"™H –9­“ ip99)˜:´—Ç A\î¡øGÜÎ@P—=+H¾„2ñ¼,P yD2û®/Bm% vBK¼ Š`:oð|:ˆÂÁS¼ÞE7“ˆ § \‘Àä .¤˜ÀŒˆ² ;wJØ r…H°/ I½G'µ•“‡#·lf’9ÈÚŠ`õ:zËÑBÀ “5†þR€‰ÓuM?^KÈèfUådŸíŒ€T϶Œ¢ÑýR;ÛìÆ¶¹×ŒÝRB@â‹5!Ĉ³œB !à4§ü®P*ü.SQNÔ…à—~|ĺjrg¹¡ªÊ'‡k©…L²F 4´˜ss„–ÈãZåõaç:åMÔ9¼Ý´›Þ‹D„©ÅõÇ‹•w|µ* òÔ!a¹îsU{U!®âÛë¦Üºg€ŒZZOdÞï†|™=[º÷°{ÿrÿmÔ°Ýâ‹¡0ùîÖ,êMó6Û÷0W°¤”¿³‰+ö¹È µRá]#Ù"|i„À>1ìßX˜¾Z\+© Ã….mºVH$‡ñîžYNŠÄ©MÃ]Q¥˜t.3oÓI:Áä3˜¾ñ «L:Y áË'R­MïY˜B2³¾J  NÙWtšLË‹„÷~³½Ÿ P§v£?=šš?}Þݼ÷ N•Oú‡€ÝÌÅ—{ó<|û±ût¿ûÝŸ©z ÒcívÅi¤&Ë©(L:½øâ(ŽIçÄ1C(º…e±¯6ï”-²,I,Pˆëdª©†èñ¶0¯í„ìX'`çýÜ jø&à@OÅ•F§›Õ¼2ˆEwá|ÛÞèˆâ#ƒƒôZ¾žh‘ý@¶4AlZU„Á—ǯfvõö°èKésžÙ5ÅYf[ Æ7«Zé(m™c\¤£YÈþÐJ¾å»Ï²7ï¬æ[}0o±@ô€§FIuâ‰øÄ7,ßÓÕ*ƒyË3r€JÆ%0F[NÂ)·*Üþˆ ÚÒg¹oV‹„V2.§jÆôgô ªþV¾FZ¦¼fs‹Ù‹¥±ÑÕªj(ØwFæjÆaÈäÕxM­Œëh¨{IbHAš (\Åæ(6tÛðŒ‚y Ä:ô^ã“_4j±ct™Šé/‰}G{Áñ«¢m†ÐAåÆŠ %ÀrÆ@ŒH«5·v%··+O÷Òî%œ—âœD^Ü߬fB´S‚´±âNLˆŽ áÛMˆÉÈ… c„š¥Ò"ª#r€íæBL@Þ »n‘Š':wˆÿÜýv1 Ӳ†9÷0 RX<ïÑð3‹£ƒ &MÖˆ±ÛÍí'OÅ„(Sü “ú÷¾#.Õ"L ‹%h‚Ø9¤טzãWS­\ÜVJ„m0ŽÎ ÔR éÆEÀ:M*‘«JsîÁ»œJ¨±ÅX¨}¶¥úAÓÿd© À*Å£À{øroÈ·Å›$çùéóý ´Œ¦/ïö¥ü_Ž’öË©’ÿËãûÝ'cÃ]áùé…:éÁÀOÍ|¨ÐInZI"–{*‹.ݶ–v›©+°ïãÕ8ßK"ËfÕ•c9;j uu9kí»HÊ…¢˜¼„Ùª®ýo ®d6ST‡ —©ë™¬]‚¡œf:î~——…º eYEôiÝìj¨jP”)ÉRHÀEtjÓC-€)ø¢S aNsgqŒSϓ͊È#¢´¬c5A•¨Ó"µË ОeõG$ÅíÕ.¨Ó 
ÓMÕî«#¢¾x"ôÇÓçï÷sh(ÿýæ(š’KIX¥˜¹iÝjVV‚K·Ú\’r÷ä’Žµ“àEýôæ*eË÷çõSç“ú™C±ê1¢Íô_ù×Õk©7 x¯m^¤¥sâP¡¥H7H"£'† Ç7ÒÒ¥sޝWM-ò—€!\WKv,~X¥–DMÏ7Ú§¦Io<#úšv›ùÐh>´ï…­ÐÑ'ÃŽt,¿èÔâCMžÅjh'³ãOsÊíRaG ¶´ôpÿõiõ&S(¼2¥ ,_µŸ2Ï9‹=rÐiÎù]¹„£9ºLE›¦}kŽél¢o|ÄÊÅØÒäêg¿[˜¨×Hméë²;Fˆ·mè9 ÿmׯS2þv9q†YC•9Yã¨Ø>v¤Ùí:ä/ÊÀ*K¬†IªP¦UòBÐ2&j™9©ä%SÁ…×§°„«hü XÃUÐ}Ëë$W)”¸:ºXHD² #buUcח⛫Šû#¯Iþ‘þkÅ5Öi#ëþõÞôФÓònßÿs0އ¤q|ö³ýþ·ç}¤öb’}7àîÁhzŽÚjÑŽ&ŽìÜùÖÓwçM2XìªÜêðöG ¥¨…¢«I€¹ÉáÉ%Ê}È…´R–.Ä 3þÎQ“AI¦Šäû«RWjt—ŠÖ̯‹gÈã#Öù;õ)'ªõwýµ,JóòQ»&¸ mž“8û|}f¸yÁîbªõÊüK¯)áÅÀ?5³a^%$l)áH‰˜Jx+(×iµ<™'‰UZŽ2§å kzG*miøG[è"Bõ>Ë>-š>¬ÑTyݵq'Å@f¸ÜÙáWFØ÷YNbm¦¼é¦L 5ëšbL¡z{Ç6ã:‹Íd\ÅâÛˆ(¡×Üt˜²}ÓÑ0Î4ü˹ÒnÕ7(÷îi–^몀JbS‹šeQD”6©+½&ùD ¸ŽÞÕeá’…7ÙC´ƒçM¼åœ@³&>r±,<¢á6ÞU&²ä´ÈÆyÇÆ*`Ñå­m|vdöKŸ|ÕœòÐ-g)m»–©°ùù¶§XmÙÿZtU&R²DOVÉD ùº/틤o¾ôà¨Ä²pœ‚N˜r âºx¡ý¥ká°2‡j„$‡}0ï;gг¹h󟓌Ýß<a3GW«‚H _MõY<õÑ6“}e+ãú#BK7µ±ÛÒoáF‘\`\öE¦8³P­gЦ iÒïo.¹øDyºZ ã2w*„1.i`J«SK´KFËÅ|ܼ!hÒBФ…&ÖÈ]R_2Ï+Ž ó¼ŠPœÆݦ¢Þ¿JÒ€­òséŒUQ“t¡º ~«œ1·ÛÝþlYEÍܨÅ×ÖÙ%ª?|JíéË»áצBzIéeŒs¥ ÅYAB.ƒŸ²E—»}¶Ùrä6A-F°H€VIQƒ1‡,F}Ùb¯íl¼¶{y¾{yøí‹¯®xß®å$w(R[!'æguVN(g¦N¤ÚDL| "W·¯p'H)ÕšhõG@À¿Q[=-i«7;T…[Aû#òkÄ«êü&'szMU pá@%žš u,9Ë\…È=Z™ìÇÚ_¶h÷Æ·©è¬·¯²d s>/;ŒÎX‡ø¥ËPßZ¿• hË,Y vç°o—X™»ÔBùR’ÎÈbž— Ë<"ÌJ…h1]­æÒ>‹ü]kÍXã¦ÍØžq7@S /Άè–増ûÝ÷ç‡o?Ά¿Û?CüùÍ‘:Œ!¾_àý—ß>.›ÎÉ?ÝúôZë÷¤^ŠÛöêýW-Ô6ß1ý@ˆ&Âbúj|ƒ¥Ç‡/ýéîyWz%Òe¯Dç§^~ì:¿ýœwí=g#ÃLZ–ú[€&ýÂ’…;=š”²-†MŸïÐ9˜©Îç3ÏYw e«}¶h}Âù­¿¹ºµý±„,åKûcœJ%ÑÌ{væM»±KÚ— ™RÝŸ°Â•Y‹Þòkî3´+>öà1M;õ¤”Ö¿är-ˆ0u—æ"2 MÓ/¹û›—«É£«Õ¼ävb*¶Ÿ‹ø¹tHq¿D“kIY«#pã·¯ti†ôرé8ˆ {Lºšß¼€ßTY«ø=ý,´¿y,/I9]­†ßöY9B®/BoÆ8iñÐýúa6-_Í8YÀ8¶˜¸†q˜hžqå&úÑÕj°~3Æþg'‹}£¬b4,K°ˆ-*[Í6]À¶½ÀPÃ6¡<Ï6.ÁaŒnVɵ˜8äELËb‚ULÃÔ¢m¨œ,qHÐV±ÒBÅ*ñ’ýGÍs*Ùq–SŽâ£ÛÌW¬2w¤@ù¬Oæìˆu+Èç@L‹fÎÛ{æ÷ê«M(¿È1¤Õ%+ K8³å5 <½Øf¸9ky?Êñjµì¯íËì.)JXgwÛ&û'JY_j”XÍ7ï¾Ç þ2y+ã,ßr]jt³*¶…N—R7eÛ\®éGäx“µ&ð÷ª4~zzùöè½k©9æ?ÝR}ˆšTÀ¢c‹²n]zM»¦ÒÄ¢s%%™}Ú;WžM3©ü4"ÔF»²  ,ÐVÁdAº®¨Ú¯—€T®  Å5!µ5EŒû᯼‚·ÉCaU'i€j¼jŠž_ÌȃQ¬ ×0"É&àÒ)pR|k°ßn)$«C] ¤•Ë¡OÝÿöõ³ÿn÷ôøøý‹’coÄËfòÉ‚ñÈ<Ÿ69ÞRгò¹¸¼dD¡Mú!¨¶púÍÅ£m5GŘWÇdP“açmÄZK[(­i–«¹<€z¸Ùsw­Ëq·úURúmŽ»qén¸Rù•?ç1DŠ´ëfJQ•ó^çΓ`w¥r{·¯³’“Tªšœ™ÆhàCULfÅp‰±ÍÊ“úñ1™qM6ˆÉü¶vtS¼{³ VáæwùãÛ¦8,hŠþª›öƒ=‹U=$Ûé|k 
ÖC¯Y¶Ô$#QãÒàæ@8µ1[â_gJ𥠜­a«–5¢ÑÓƒ¨ÆÀÿ†Wîî8X7 1”µŒ'<,†¢Ã,Ìe¾rnou²*¤]Y¼‹ÒÄ6Ñìã5} ×1Åš²,¬9ß÷Xœ÷„»%uùÇ@ì«n†Uè™POÔ¢Âh·ïÀS–ï}#òôº…«ÛÃs¶YåKÃW‡åˆHOìuX0Û·¢Ú Û¬Ÿm±—kt펙ô¨eãhx ÕGb”B×ý§÷2w² X{~‘•šn§Ë¬´³fƒÛÕa*&F5âæŸÝ£¯16f»ßÕè®Ûçˆu™dMÐ\~ØGWÊG·Dq‘Ë—JDŠBAYäáõÑjRý,æ.4éot¢fˆq]#¹½må“4~}PÝÀàÒÊ}ªKËÚò˵¿­ª/4HWû“Z'~74ĸÔÓÐÁE@?Œp•ªÛ[‚æ:Á“£ ¶»Pd[ÊSVÅ4·$—X®Î4éÙ>¥yœýéxSR¢j+i~“KVÒÕp-»„a}²šL&jã É3ßùì!Ù4uŠ]t¡Å)¹]J€¢ }Œ”“ââ@CýïŽï:uóÚvŠ;.­™¢täïÙî¼þé5nøŽ”ë«1ÅLIÅ$DJå<†4ø€XÒ`rY^‘iJ‰É¶s» ×9Hó¨@< ª€‡*Û\UÕXß¹ø¯]ùMDç¦~*h_ÔO}†žŽÏäˆ<7—®Vù•Œ_U‰P]IX ¥$P¨í {‹¶"Î µ4tLò€Ò¤–ªÍú ‰†—®[VÄÚ ÇÓÊê®Rô†¡_FPÆ €{.2ö0Q~RÒ?­&ÐE·„˜bㆧ²ÒÓWj…7;q˜q©qI¿„¤†qTVH€ìèêd•|‹È±åªEßÍAÿBFü ‘lP³÷NðÀéªÛÄïÞ¾~|·¶ôï^ÿzó¤>Ac¯?mÞðÅnä6Ì49³Ž`Ï}qCšÄܽFad¤@ƒ{Äû‰8Íešð¨'IJ‚²·vñ¢†2å.KW›á1,bkB“‹~yL¡Ôƒ‘϶d, wžHu7À’la¹Ÿˆ8.^Á<žý·“ÕØ]ð6rØ”0¡d$GUoÛ…XOö_v<ηê.noÛvCr¡B-X)ñÁå‡2ŽG«šÕø+:W¿$©øoûäg¨è=ú–ÐÇšvÝ?½ÿ—·ãES3¬hÙ³ ÏT(ØY³óÕaÊ÷_bëµÐ=¿þZ?aèúËŠÍèò bà¬ìf&ý`× F¯‡?\بnQ±yÜCY}U5¤¢L`þz{u´ õYBp·gCUëgd »z ³³çÙ-¶ ƒ‡Øw%‚jjs3øË˜ìÓÏ¿ÜÝ<ð›îîn4+I7tË|#·éöðî!àýÜKì‹Á cIlä*Ô̤ï,IQhb~rçH ­¿úÑú? Z— 6ªNlr·1°Gh ×S´ öÎwÎTe|©þ“8M eNš{t}Âþ°yý_¦ìÔ/ÎÉ. 
_U®ž1äx—â‡êûŸ pR?Õ½ïdÿˆù^ä`A[:.[æÒ‡ûˆ%ßhõõ§,õÂÕ?h‹å®U|,+tbîé?åÈž5šûÚ_h‘k |u»œœ‚šïÄQ#(,÷X‰ …v¹=­)Ûòz$æ×`_­^3Ô£©î>MÅHqÌ"¤mÎödöþ`ÈŽLÁÖÌ6 Ï]÷ „!Ùöú£Ø¦sb">ypDLÄÖ€jÊ`ÛÆªAµû§ÏêÏmmEW‡xž ô4bñåpûÒÚ =4oóî¦Y_|ŸYMa¢S€RS¤,N¥¢WúeRŽšQcÕ¯fF€#.ê­ôCÚ ´õ¨[X?Lè;õ <Ý¿ùíõçÕÅ÷›»?þó[Ûˆ˜²÷¼VÚÒ¥« £Àf ÷•~³øâBåë?4^0\ ;Y‰dÈ’ïêfµ(I3êy y!eM#P¬€b‹`ü‹fð °§½áß(3ÁÚg¬mý¸ìž©iûA¹Àž2"@šæ&ÏèÐ<÷¨&‚8{˜wî/Qí —¢ZQf’5là›ä‚]Ò<=¨Þòm¸}skºnßÜñË•®mŸ³Ú©RúÛ ìÅŠ#Lš§<ÛµðsîÈ;CÆO<`vôWšú‰,ý¶:TmuЮ#¼«1;!AÉ›è¹}®µ:YÙ‰¶ ”É·âÞÙøŽTª” nû@Ü ©3fÖ#øx&רÀÌÎÉ¢Â_ÞßåççÿW#wÛb˜Y·â©Í¾ô¾weD4m^ËÒûÚs \vò¨"‘ú!ºö% M®FÄ;ï¿O:ò5£»yûøp÷çÝÛû¯èŽ?ߨãÝç52oïoIˆð/¢O0äØ÷t½i¤¢ZÀœ¦å( œ—·&$šÔ]™8*z’”½29RkRMHCfñ©ÉŠ–ûGÌ×Z±•ÀeÓ Íßî_¿ý\yr¤iG…\Rº,4MU“W¦‡á= R ´Â5×rÌåÎhsEýù+ÅÕÉjB16hI`j“ÿd£nˆm± '8 È­3®{¡íZ™œ³uªhlTe[çÑc /sCr«ÓÎ0uúÕ žCKÿ§M²Xç©“0¿ï"Þ´ïïÓëwÕIži‰Ùog.µÕT=c£n?eÄHC©®w®çV+‰)¹k¾Ôª¢ø…F¿:r¨¿ÙL‡‰S…!´Ò½~äWÄ)8ãRÔôÅšÉc›ö“KCM¿&iëœ[É|Úáglò¶Ø¨ÔáØMíðóU;:ü¾¯:+>&7ÔÀ¢Ì asÙ@ÂL=Æ¢±hÝR…ý’@4¯0£ù‘»BeæÅi[ý5S¡‰©­B3úþU¥F ¶—jFß©d£ñ\ò†,"°lÒ/炈ÿQQ°Ú–Ü`€~TÄ4Ø…ëHŸÃ°f¶r©ÅÆšìS©Ø²ãýaùÁ)Œï7"Miåbò%ßz>jÁ¤?ÊY¬õh󛂯pÉ'-;>ØhHö¦€5뜺ð8Ö-”GÐêC…†$;wÊO«¸SópÀHÜÆµè†Jö¿mlOF>­ŽÐYV>Íð0³!ÉïÝÜÆu¯×÷ßöÜCê=ر}RЏÛñÿÓòÅ/ß’–íß¿~ûì'E1Réj+ŠmúqëÞ&Gí³M?î|9M q!â¦G$;3…€Û¶@íËî/Jñû0î›;ý¹§~惓tžèÑ©¿sCî&pG b{H1´„·Ð©/üˆYG…èÄUy*ýÅ¢§ 9¼øI&„öÑ^¥¶¥Ãäñ"à[…IÛ;2Ø_”ž:2!æPJð&ϺùšO3O¨¿èkµçªßqSpI·þ4¨éÃìÊì1ƒÓÄîP]R*k¦Fuš©©F.38¦ Ù…Î/HŽ÷••Ÿ2Bì‹O1›UœAì§B€à¦«ôÀ=­ ÉÚ††ÓŠZØr&§ägvR!’RaTÝNŽÙûÕÑ* J¸kx†îùìYG ù(ä§²»dÅõ˜`k+®Ô§VÜøG–Åã5ïã^eR¢g†ùå|¢¡;øªw®²ˆ„àFïÝ«Þy>9˜cÀÈu”E j‰]Ÿ“Œ ™M ª ÑB—¢É2T4Y˜í˜_¦bg‚Å&êðòäúcÀ“Ö‚Š»ÆÙëú15Ä[YS"OÃ^¬vk ï$ ûrÍÓ*£¥áo;wvJøx°šq¤ÝÒ°8cšI»!oÄ·öFJEÆSod|ÐØ-¥+ GÆï1$\¶ÎëI\ÿæM”ÙƒÁ}Ÿ°fçíz•iHP#â|`[zBDÙvšçí»çù¦„é›ì_Üý¦êótÿñçGeÉcqÑ{ÛöXü^ÅÐ w‘zö’F4€miNiâÁ…Þ÷FÔ6ÁwºLÁÄ3R*—Çȼw*ºª” »ÇöQgÝêW“m· -¾l†‰ÜØ••ñ´ÿÁN ä?dïy<†¹ÍïXÕà õ]?š…ÚRHRÜ ·x >8ÿu7SlýÉ|JÝ ¢ªZáü»ÉÈ™j“€¤.•kcÁF«±dª½ó¹ÕR+šÌè¤Ð¯æÝ멦ºF zJžb[$"Æ8˜,îX·yÊzÞ"”=°ÆÉP‚”²ƒçKž«£ÕlžÒÏr‘C¢&¾é¹5O±ž~ pF=—àð‡š«xÿáýÓ‡ŸÛ7‹³ÕÍ‚ Õñ=—F^8iˆ·õTÅ‘f³ « Ç®yŒ+Ê5©{³ËR¿hF0Æ%q/WWÏ-€3C\¬Ó+ÉQÂÐUnÀ–'‰ã›ÿ¼û×iëηÐ|K†Àö½g*õ>eÊk¶Î{?‹|—*TZt‘¶qyn°·4ÁЃLgíý¨ ¸à6y¿jL vÍ"P$(Eˆ¡\AW-ÉE+‚Ívõ«5Ä¢”®m•1mb”SúŠŽrýÍ wÞ½û÷{ÕÃU5ùwùãÛ&ó›ˆøÕ¶”Ÿ¥ 
Ó¢¾=ò¦Éþ Þ!a½1™¿y|£š Ô7D<û»o?h"?CÚÔL’ë¸HJZ‡ó8œ•Õ™öP©Ææõ.ÚCÌÎý)3É&¤s­aTt †'‰R«\têÑ´ôÅŒ…ºœPö•¨Ü-Ùì”ê Í7úÌÈ`ô³cBOþÚ¾’pó€Yé¼oXy0{ïA\ꎚ;Ä«l˜ë;\\G†³ì–Ý–TpÔ;V·vÝ÷QRŽôu afi9ºì‚jk ©hvtx±Êw8kÈoW‡)w 1/ †4¹nZ?a¨IÈ;^¢†\»Æzw.›Ñ3$±k IÒpé·~k9,š—ìöº—„Âve"Bn6a}²ŠÊ¯!Ðx‘=ª÷O¹gd›]/ä)¸n{µÅúTŠCÜ–žÐÛ¤Ê0p¨Šk-³­µ9x/@ÅÅÖæ·³+eW'«`¶þ¦ÊVr¾vdr϶èX#ï¶`Ïý 1ˆgçVó-,Pßl|%•ø½v[­†qd÷jIRha¨=Ö|Œq!õ×JsÛå†ÃŒ£ÆA Äj'É—wª÷x´JÆù˜’ocœvÅ¡èÈ*{ಒþÇ]é^§/+â¬oD±.à‚÷mÍQ)ë}&$EöÕ!&vMæØæç‡¼(¦Ù¶z „Ÿ¼ à‚‘Å‡Aà‹ˆöRåÜî‘“ÄÀc_/†B5–0ú¦§Œ´+U¨/Š™*ÏF¢/é³)„~J)²³f—U®S1faFF#ºýº´Ÿ2Ê P]øÿûßꀙB¤Âˆ„Ð÷<#Gfµ$^SŽ`¨Ëe‰8l—»$yÜ®ã±jœ{2û MQ2;Í®xŒiÒ“¨èŠLèvJÕ<³ì¾.·(0-¥¢“¸,ßVG«äÇLM|SÝg?Ä7Ð1ئ~†P†ðõ-¦ŒáŽ‹CB¨³ÛPb8SÊÚí#I¦8à°Ø.kjˆÃPõ>j'ò]ǽ?i"ÈÌ8ž\ÕŽ¸©÷\\ŒTÚH§‡p‡ž¡ l¨çæV'«Qcö‹ý~H-lÓ¨Núèñ™@~WkýñVÉï„á%Û=._„2×YKÖ[éæ²5¬afh³mµ @ä›Ä‚ †Ä¨g^Õ%RyâýôýÓçÇ DW‚ñ+}1¼œI‚Á´hî„\6óàS¼¸¡ò@:ÈUÉV´™!úÕμm‚¡ŽAC¼!Á`ìù³‚9²àæÅ˜O_ßÞÿzÿfš€DYˆ’‡Šh]ÈE, çöfHH´ÛÜ”ÖKˆf ê±ö"1í¸Á -Q(º†6ÛgmD¿)‰îŸ~ùûÿüóŤ®ÆhÛ%v÷Â& ~¿ÿåïo~¡;º“ôš¾¾»½óÿxÞ:´ß ¯ÎRýz¨W+!“ôl·…7,»j|GQƒr·Â’™ýPIvæfŠJ¡‡Â6W£\¦¢¨Áj=Ù¥ä :býˆ1$ÂD‘Cý=¡ÉAbä8"è¨Å 4Z¢7Z{_=ýíX‘"c2[S Á¬T¬ŽV5¤™»éØPÚˆšøQðÝ›§ö™?N‰>,lsnòC ©X|~zÜ}IóLP ¯º¹PR}öT˜Ç$([,Ï׳ø ȱ I !W¥‰ 5F#bÌÕ.VTšÑv¹»÷´=Èk¾~Ç^|Ub?ýíáéÃ;µã7ïîß}xúó`Ñ?ë'¼è ѨÌPÙ¨Þhëi5Ôwé>á¦Ø1{Î…D§x”ÊûHøu,sóá©[6çd»á–ãs–±bဠ™“ØQæŠtZ£«öŵZ†QÉQs )Zr67Y´!»$ru²W¬ê(¨áà³k§õ3²Z‹5×%nQb²¤>â óÆJ¬Äwét= ±/¦çQ3¢O󠼆Çt¨FŒ†—„¡ð¼ñu+X'%ˆ‹Íèƒï;<È~ñ¬!ì.ÔØ#8t5ͨã°!j¢aKUÛ¦¦¢¼ €¯(À‘SÂ\,ÈïNÎÙ{–õÑjPçH?+Úžñgݤë‡dm•²egGà,Ã)i Û ÞrxÊæÆJ³¨Óé€dQ`¤«b§ªwû¾Ø©gkÓ§Æ+¸yð©—^»FP ’f"¨^zíE[5€G³eAº‚.°õõãª6e¼h¶b*›2ïjLYà˜º¾ž¬¦-Þ–[±aqÖÖ?vl} ¥!‹$ÛÎ+ÈH§ø™ÑÇ%(®»]9Er4gI×EsÖLõ{¢9ïÄ1yå·Ç!CCG-ISq¯‚Ú7¦E|Z§SÓC*Ù–°P9ß³ºj,[ž˜›È9¥b#A\$z¢g V*ÆÓq?W\f%T£CF-8œ¿ÁUcu]J°é¶ŸwÞï€ß¿HR^ëA>?Ý¿}}{ÿvŸÄ”` ž´Êêž•ä8z7¢Í cÇ]vpÎ3y¯nª ·PÿÄj é«ñUýiÛrð~×Õå˱ kÎ#P²( C®€´"ç„rpiòÑIK¬c#èKEeY²…T=—tI’ø0>æ¹!FÕ”{×ù]ŽQñr'ÔþäœEý_­:Hµ¢]ã$FŠqÄž'yq»ºIŠ^2AjZ8}Ï{Yâ£è&.@´÷\?4*;É y(dú–U(Ëq<­ú–K!®¨SÔÑÁ¸µ>˜¿È$mæqbÐ8çÿÙ»¶å»“ª )y]±'ú¡w7,ýkùús»òÊÇðKæË‹½×õíûi‹œN edŸŠ'– â 4ì¡-œìYúVЉþjõ*õaÒ(Hš"0ÅX ðºF³ÃúζÇ;éh·¯aðÏbÏŸ¢áEâ?RÇÀ&ˇÍà¿g7WêD)÷•è»é×| €ë¯ÊÏë{Uªñ}»²[¹›Yå Ç!ßMN8Od¹jË BnÕžìf«›…W×#'<¯æ) 7É ©!,4&'¹£]”PP}†ý7d 
ËTÔl'p°‰S›UèØe®u“dÛ5u2d²8¢ÞÖâ”SbÞÐÖÉ€RÊaÀ(pÎ¥²:BlC¨£ÜŽÞu`ò¤Z3¦¶r§¬¶2¯sÛ(¯r{IšƒÝE}³DQNºPNΨó&ë½Aô²;ª.HIbwÛö|A‡ Ј0BTIëj1â K…ÁÑPñ1§&ƒUî×Ó@3©X(YÐÒî12Ô*Û†½dîò€±ß ¢(ÈÈê;¢!(=ú=™Ç#œ·‚oÜ^9D¿úöI_ùÝ£·mJ¿^~ÿ|y÷A½„.+6>àDê–tIÓ:¼ÒáÖª­²O·²¨‚~t(¶´Ýkòg‘Pg–ÃZÈÇ—[­€ª#Æ8 “ÝÐv!M¨š›ÄÁÖ ±ÉV¥¼Ž]ûDšÃ d™¤¨‡ñÌ ,Ψ[ØVo¡ø5@,!‹ú•ØÏn=BJÆtÊ ¸”0Üñ‹{ä__qÀÐòð”“dCMëÜ6«`ZKÊ<6Ÿl߬¸ŸjgÔ¹MiÒ@¶°[¥ŠJ3ÕE:ŠJzIr¤aÝ_ËxgŒ!¼ó”C$ˆ?ZbO‘L=5Nú½U©ˆÕ©Ùež YÄIà}`[IúwôfÈGÄýgìåIÉß4Cvì…:TǬ±NÿéÃÍ—³oÖtvþYõʇ»Ë§ž´m9±@õœ˜ëÚ¯sO¾C æ\¶w’m»©>Hª 8%ààRÀ\ÚO­6üt+{¤À¬{(!û_¢Õ H†~‘Ð#bé2¥¥¨ñÂ<7¹ ˆNSanÚUe†Ò:[ò€õ¥ð¤y ˆ¶ Ì÷Ò®.Ψ†MT¦$jÀo/)ªïØM©g¨¡æ„9õAÕ¤ v|ªd¯A_¤íml28j´Šëü5Rë¡Ë‚V )r ö·kÞpÎn®öv§H˜ß¿å#J±'ÅbøÜ´y×Ý÷×ådQ¨>˨±»”5åØ|–©Úͳ¸«=¼,0çæú¨ûÕb¢”Ó’b‹ôb_¢)‚zxÃ^Á–m–dÛ~Ú|eý¸Üà+¦êNÚ%e®e–l¦Üqªþ¼þÝú7Äî+óÜÕn…j¯ôk·°­Â×@¹8T¶ØiÌ#Í1Oúq‰Wóƒ¡ZTXæáØÄLp瓪‹”BNÜýÞæ#ŽaÝÆNÙ2'½ÇÑçÆ^`X°ÕAªBË»IS(zðês;žkztA™‡múU€~(‹¬jm|%w³m>‚¥Ç+ Ô~¨’æ›{Y¨àI›|+Q¹²jþ”Wc•e®žèWE€ø£ù¦¡nOñs ¦á`B†&d*nyØ&°Ž,s <Ök¯”¹Ø–&VÙÂ6Ã\§@Cl+]¹Éê÷„®î‰ÂËd@©Ð-€Vi|Þ~a¥Žþ¿ ¦ Áhùd®ääˆ1ðZœ ‰¸$o6Àã`³ÉqD¬à×Õ¾~ÍdGôtu ¤,)ÛNå õ²Ð_B¥Ðo£Ç¶T»Õk“§ˆ¡aŸæ×[íî[ÐÒ¶O&°zþi5byÄpŸ§>ç€~û¤„%²F‰!µ$9ﯗæã$¿1ÄÝíå÷W ÝßNUÖ€IÆ÷ÝWîxx)öø†¡$H–WÝŽh罦>ß@ªï×6ЈçùBÁæNõ”ÉâJöHašÖÉA˜·¸1`‘DCQz@ñ­Ñ£ØûŽñÜ6’Õ‰ŽM(e+Øù4¥ÖV<jØ‚2Ï#Ó¤‘üÔù¯f´y®ÊþÃûWöÕ…œ¢Ø.¬²¼nÞ4óíìËi˜6£T]ŸŸ%¬oöÜxÚØöôJËÎ?¬á8ËK…ÞnÏãËHòbÏü¶Šz–\]ƒÅwÜœŒ¥¤°‡Ç›˜7ÀD·YôÿʪØ ”Iïk[/Ú|Ò¢MƒÚ¼Gù >©ºNÙûJöŠ9u©é¦ÀÝ#Êj%°US¹s Ps h"N©U%>:P”›Ö’«É…å½î±dR?[ôš>JUɽÈÃGDxSÝk÷J©ìBS’5Da¬.éFÞwI7FßrÉí«%ÿ2æüU9ÉbÓêCr’ž­ ØÃíB– õÆ7"¯#ñY\r=£ðÝnØ¡þÊcΕ¼Î8ë„1³ÑÕÏU X‘÷A=<¹ëCá¸èÛ`â&" PäùÞb.Ñ\ç*(òâòv0 öÕ6áʸÅ4H@Û1òä%„òƦÁ®ù°ûÄ4ÉÉv³®nÉä„)ïj"‚xüóèß>üg©žºTd0Îúõ|D,D ²lÈpÖÝ ¹2qÔ¿³ýÖEPÖ›™fº¡¾OdAš#m’pÊÁæ0O›™gT3¥)Ndk,Ø™)qîvÑwTz™}8 ‡Ý*b1ÛàÛ(»ý+ã 7À:ˆpÝJU&’v(©>_ýDš΀¢ê-o– 5 Y£‹Þ2Ô|„!,÷´™Mt!Åfÿ³º¦ß?¯ö ¼f§©6t ¡@hd<•Òñ¸‘áfo£>)µ ·§] Ì#ý¸·˜ Eh€·zôìøQ^}i­kJ^Ì~=»ýõòûÍ=ôo·—ŸÏžF ï>\œÿöÏÏãƒçÏ.òÅyîj"¡ X5§)ç’4C_y”u¹°›¬ j,nª«‰$M–7tw&¢%Ò“­ü†Ð/zD„.„c+ÜñðvÏD^ËÓ £IœÛLDÂÖ¹¨„C‘:Üñ#ižŽ³±‘ ð¾ÙÌ[`†¾iМ» 0RÎIUdì*G–ü²Y*][Ñü•œC›WhóM±Á+%¶ŽÚµ ÆÑn-`…žBS/Ïk ª>bpz[Êluà$Dî0ÚdýOª²†[CÉÝJjŸ )ÃÛR‘8‘4¥"RM*”¹– Á4ïaÄMIÿ;Æ6Œ}{•£Y§´$ùe“¦Æä¿ÜžÝiìwþý^·–cqüº?Éb"S¨ 
¬±;»T·…êË›ïo~pÕtªƒxìáÛTrK¡!éé‹1sÙ<5=Ã6Ü—Šü¡4~=üóÒ6ŒA!ú|¶Š<ž05å ‹´«òÀµ†–Å…ì§Â2%Ø KÕ‡C‡~i ’táꨄYÒ° ;Ú‰#ä¶ `5kЈÑlñék89¤9Í’A´Ó&Æi„Ž0Ž‚t0®5Wqxh:‘©2èÇjl”ÚlCµÊÔb…*8òa.¦…ÉåBò3-‹Í‹q?ÓôˆçPö‹¢Ö±;ÀØFs%òoWŸT;ªÆxw÷íìæîóõwýwV–ûb‘¡ràÕÝÖÑ/ç…>‚j:§óç?ttP?ÌêûŸm·>û‘‡ó~FƒÒ_û™‡ÛÒŸ õ+óÚêžT˜%ö䕊Ei0WDÆý †“2Cn§ 5—uA7Ê«]sKÒ\(àɶêmˆ2m“dQ7:0.3?c\UÔ#-xww¦‘×RÊŸ ä,´oùÍ™ÉñÍž¾yΞ[ ùÝ¿¾\2îÿôñ!ñ+ßÍ“µJw·gËÔ_Q@ý5Ê9o¶%MÂêf3…nm>"w­6 EF¥îR¦¡•· ™>eV ÙLwª¢2, sùl8I‰\ÄùFI#ªÔ õ¾ÑÃñrVW/I–­ñòùÝÕÅíÕユ§Ÿò§N¸TG®‘…[X)z¬”´ÂêÃ]…Z¨¼¸‹î¹HýRÈ!‚WC“¨F•Ïz=öáïÑÆÕ(Û’»­Érëäy¾ÈÛþ,WjÐÖë¨ìk$Ë•e@Nk!ñ‘XÊÕ—úDM;Yž`Ê çjËr‘÷∱á±½°Ù«¼w“¤®%dD*úêÈŒ*oq»X‰õþ%0¶…BciŠM¡À:JÙ‚4×Ä^™$"Âé²Å!õ-d< §Íî®%d¢™÷,yŸÛ<À/º?U½?Ñï%«Þ—ÏÎÏ J_Õ»2hzE4).µ"mµrĤxѵøx[{dPm.ÜrÛÙo!’¹´A(uv-Ž8îŸÞùU}[É,PvG¤õÙ~ "õ‰(_ýzÿíêû QþµüöÛÇEÇ/ñR?ýÅ ­ËÄ£îþ’ÅèO‚äD¢öÌí÷‰çF¾~”üTb¿g|8¢§Y¯X$ÂÇM¢·|Œ&Î@­ÖLSV½®Î°§*òÀ‚2Ogfœ0§rè²þ©rD¶S«Qö[DãT‰AšãgM`;7g/?Uš³­„ *leµ;Ûšh¿áÉáÌNþ¿ºü³ÔåkŠ‘õ#£Êá€b䨝dËêDc¿áDѸÌU·¸­Yï¤4£µšÅ‚2OZOß5k°èígä8T_Ô›ìäÚá)>?©Ê*QÅf”k‰Ýé9žH¡‘µ±íáV _aÚ‘îjçù‚0WzŽ'+ßza‰™l9§0bwRõpqîÉ CJÆyæ.ƒÇ8‘Uý,£Éª­6˜ÉÆzþü.WHõ¤BÑ[dž€Š`Ý@(‡#¸g@P5è_º2j(/3jÈ•È7N)ª§ÑHžŠJ‚µäÉTª,ˆñtŸ*ŸôáO•ÁpÒ„~0£ŠƒÆ³ØÝ„|£8q€Rôw•E;ô0ž¹WjoÝ…­MŒÏßŸŠ¯Íƒ©âà¼@MÁ¬F¸KÒÅZëÕóö&]l´Ôà€Ræ^”Óùžq£dx ©K…¿T!µöC ×IN-ëlvy =éH*Õ|ª'ZD¿)Óiÿúâ„! 
‚PÔ_+n 2‹ªszëÀ0åLÑ|ží¦ÎÝf7gç¿ê?"Ι„>žŸ3üp÷ïß~ûgWŽ6KUÌ2D—”­¢Žo3׊Åuí¡.pšÄH[ÔsŠýÓ‰êŽÀ„½D[3žaû>»›\Q4 Ãèa«Uë¨ÉÖ:øý‚2£ˆ<¡¨ÝqGdÖ !Šj·îˆl>‚þ›¼+jn#ÇÑ%5/÷²­ ‚y¸÷­º‡«½Ç»­+Kr_Û±gfý-YjK´Èf³;;µ©™ÊŒâ´šø U—nÚ¢Ib¹“éX£»ÔÏRW[¤cÊ9w·%Ë063<`¿Ôät˜ÁbJ|,·_qÌ1m$"ò*X.öÏ­ŒÀÕt ‹Èå»­±S‰‰%I-ݼèV²ß\®¸MÄY‚ÈÚ„CLv—V²{嵬w¦¸?´™âW“]®ï#¹ ”[‹wʱl Ö[‹ø#S3úž'žLÕÀ†‹]qX9ç%@ÎêUï¹8ãŠuá6•ÿ VVä‰Ã LaqµYÏ57RŵQ¨ ¤C"¦¤¦$fBŠ4u±¦b¿V—lÉ?.&HKÄä"€WSO˜ÈÒàWzìÍ¥¤XÍÌÀÕÌ€cr"woý4¨ßÞ>¾—»íõÇ«_ŸºOëŸ:ʱÇ9iED¼/³¢¬9Læ DÑd’›dè.DZ j\ZŒj‰uðš oðŒ7††8[–ÞöÞÙ*ô×®—©Û>1ÛþÕ‰ãÝâ·L]¬ýÔ À® é4µ¬Ý8ÉŒ1ïý“7' 5áñС2ŠYÙš™GMფ7JÀMঙÇG‹Éý@Gi³F/«{Xo·ëfF¤LVÉ+óFääCÎÑ~Õy¢xd#¾G¯¬€&öÅ=I`\wÿò ®ŠåŸ•fƒ¡*ÈŒæ<ÈäDÓJRks¹Á©_aÜ׎&uyX¬±inýÃj ŽkåQÆÛ×l`ÃgL»OeX™`ËÎk¸¦É•‰ÌŒ™ðžˆ"Þ:⃘¨³s¢ V¡§ê”µ €äyí`iEµÊ\L`Üf¥-€IŠ#Sq†§Ã$¢¼îXG°¯zX¿^‹y =¾ß;}8¸‡GàÿðíjóYBùGîžëhžRV Øœ —9áØp!w8ˆ1U¹z”R —w&…„q»÷BBk‘h‚…(G……xK\žÊ¡ÂP|ü#!ºŽMBŸ×éåÉc‡…'i++b‡ð+ÑB viµyš)½¨7Ñl¦Ex ÖÝÁGÝ÷ŸoëörÂÏ{T.¼ˆù½L”d-«÷ýÏœ è=ˆ¦Ån–·Ö¢x¢¥í‚ªÊ"œö×(?íÄÒ'mì㬎ƒ­:ŠYÞ&®©{^oï·í̃W’¢_.žÞ[G@æ¬uP²xz( &æV±Ÿ.¼¸yÄóÐòt žyšyì*e‰à›Òi£ ¯qMÔm¿ã㦙¹„ xíÀDí…‰ƒ@9I:X { ÊÔca•U;{©£¸(ˆ£ AOÌE>ÿy÷ð¥ÛÞ\}º½{|ºÙ<¾ùlóùzó¥ÛµÐuÏè×OÍLÄÄ•‘Üàò`ž¤ø°ŸÞ{ÉD$IHO5>©ÉV xtK;²5 ätZÜd<99CzŸ8V¢µq—×¾û¹¥ëA|‘ïñ˜7N¶…ÕÄõp?ùÁ,%Î×&Nbk×ßÐM±KDj7»uGôÆç]Øl‚˜ n±û¿·´pÿãò6cÍžô¢Í¤G¦åÖ(œ•ÄÎ.o45'Ù’ü“¢ë4£ÑI«’þu’‡?Þ¿þßž?oÚe9µžŒmA–ã-˜¼ÓqIûdšÔ¬ öŽgäÍ&YÖLK·Êƒ£…tï7ŽIM"=ÿCÁæß0a÷å–·vX•Q±ä\üµ´I,È­‘Š€¥µ«òn~e)ð놵ï¡CÍÚ“ñ8AFë¢Å9§†6¶Àÿù­d ¨Ý  Àæ·jµ\Þ̽ZbA-FFïÍŸp/;s~o¡Í=¤UËÙ½¸Ëyi/Uæä^>Š­bœ”{Ï툭+Õ_iO°‘`,Õ†$Î Sç< ö•rfhÅ/è¼­¼½s.§EY7&y¾ +9¢~ÀøW-âƒ'$ [P™ù]ðeu--ík<»üE’+غg~ã]B»¨Š@P ]å.Ìk7}·8XMÁsp+ÿîøuéÒà“nœV=%‹Øß¶iBÖ[ãÈÍ@§%ö’ŠÃœÞz_íþøþQÿê¼bw?ÚÞ{¾?ý`”ƒ¶‘à··5Ä`i’&NØÚ(BÂ&=†˜SJbøx¥šxU³zpçãÄÌŽ.ˆY¢K¦IÐç«|œÄO¬óòÆBÓ´Yì#›™’ØÇ›hó¸ê1y‡3UMð£”òèqTðã#D?Ñ0BÅ ½N˜0ö >»iêÔn“î4²õ®Ð浞œ‰=CƒäU_:xgÆR²°„ݱ„åνÉ$ûìþ÷8$>XÂÏ«›'$K{§±Ð_÷qÄ â›ÿ"Ÿ?=üqÓ£ô£È¨Û£k?Bè”'ÖWà”¿Ð)ãÝUöë¼ ”à¸~'„mMó¸2bFt¬IÏÄD ”h›}ÔŽD´\`ÛÑ^.Cۭܤg™—V èké*Ê‹ŒŠ—³^y}8žÓ” ô Ç]I÷+‚PF#ñ«}©?„þ÷‡ö?5äE ­¨Æ9'uÇ‘€Ö}ë€ð“½w£y=ë¾5Éá)f¢åßÚ:!Û¦Öœõν=¿ª:$öVp^<Õ~¼qSZo¸ ­wyÐ Mý&*Wú•ÃS;5‰j§½ÊÀxE3i¶Z –_Yu1Ùv©=^ö‹½=¢­95ÁžnIþ©;5I4ƒÆD;¶U€¸ÂpÙîÌÅ­ÝZM²àf°˜ü¡ i,ï%ªz]¦?|Æ´CÙË¡oŸ(=5if 
\!¡†´1úé'¥Å¼*+‘MÐS*Èš…dmy³HÕVVB»nXâ£èü«©aÃg¤çˆ¶µÃ¢w½¡¢“q­Ãª^úÎê^,'P"¬rbùÔ.¬ î—Эÿ‹»±YV¤šÆ2嵕0Ÿ§'r¥Ô2ÁØËй‰ÝD„³8ÓüOƒ¥•](.¼h8Þ73à¨w%…¯§WäpyÐCh:è -€<ý†|ܵÅêiT×ð~ó°IÌna–\|øÄG'gŸ=sæ-·¯æ]¥ü¼bñTs a=lÖ•6vßž~wSá$öœ¾=˜šs¸F@Õl\WÝ!u*{ƒì1˜ƒ­ð½|廳ژd«ˆ¦E‰•¾5F#ÎÂþfHûJjJ¤ª(egƒ)ç|ÛÛå‰c0§ŽøÜ/Ø3¿`•Z&åZœêtÀs`l¤ˆÚp?#Æî˜#Ë`Óì`s^Ȭ™úc•œ‰ír´™i Œ:éXÂÍ,Úà. ø>È"$càãjÛ` ˜™X0. î¤Ãn)ïð)Šž˜ùdNnÈ·„@p%¡1”ãßqËΩœÒôF˜µão¹°±®ùi\é/#Ì ŠX3I×épXýÓ4%AUM&FytTMŒ.fA]ªàw ¬& ªoítBǨŠçFU³ TuZ™ÅÈ1sðà›Â+SI„8`m€Y s*:œô“¶Âç¼äõ~Q|®lv…Ñ,«š£C õSÔ’áÈÞþ©ÞÀé$=7”Çi1ü†dÑ÷@`pZ¢ßà_§Î~ b¶ÄéÈÎIʺÄ@4%ðÌìýxž€oé˜ØÄI:fsRÚ¢½3âÎ<ÎÑ©Q}WO?6O?D}/$f#O\êEžá—i3#eÕdK˜ßRC˜õÚbê!d &ƒœ ‡uÚi*>Ф ÌŠÑZëFÀl‹-hÑÍ]O©£ì9³¢¨ÄW]‡1rÓ×"¼µÞ”‡Ã#ÁàMu€?IûfýÖˆJÆÊ6ˆvÑ ÷ñîëõÑk~Ðáv½Þ¿ÝÄns›í¸ ƒ¶^8ë­*0§@A¸ÉÑîHá5„_ÒSEÈñ . C¶‹]térõƒ ÚÀ¯v aˆq ü6دf‡_d1´ü’äý²§Pta”p·9æT2™9NŠ-DCòkΓˆ«û›ëßEöºãW_xçîÄ“Ý}Ýý#ÝÊÃooô+ûò÷^äüÊ!ŽBgÐáUTâ1VÒÚµZs‹¦cïÉ1ëÛNŒípZ͇Ñåë,Gt> ÔÉvC‰5Ái‰ì£vCÁ(œ&åàŸ¶…OZ—g©Ýµ&ž•ÇF‹Ê𗾊‹m#l :Ëö¤rxn‹!o)9êepˆ“@!b E¥X#+yÍHL–¼í¨ÍßgÑlÏ{%G‡™ T\ÙÎä3ãèS•ò4!y¢²Ãr–Á"kÈoù([«=[…•cÏnVÚˆ½i ›Ý^îo^nfºû¯W·×‡?y¼~ÒšËméqU¶Ò¥Á~Œ¶fä@4œãÑ”±“„ÖªBFíÃï²{TXŠÙ&'‘aj„Ô¢—]­ZBö0ªŠ<*ݤ=jcûd`Mš¼·~Î=*¢¾ÿ&²\mDÆwòÛ·÷W²Œ§=ìïN•߈<WýsŒ<§WÉ®©áéáÇu›’‚8˜´Ñ¡Š«É ŒÇ2º–‹~¸­«å> Ô£äãyŸm‚£,$ˆ²lúÚNIFh…Iœ§Ë—úæ€àW=meœ®ô|xï…^ŸÂŒ»Aòþ·j ìU_3ä78ï=ŒÞªo ¥UTÜk—bÄl_V$ƒ˜íËŠ>y-?@ƒ ¦/-ÑUŒaé æqî¾-³Çóò)]rdùå¾¥3anzM´»nÏ^Q(΃3|N¥QûK!‰VHB˜®öóÝãÓýÕÓçNÙÂú­¬f7ÝËÜý!ØÑ}~þÆŸÆa¥DÁ4+ZÔPÒ®F‰«ŠöæÀÁQÈÂ(+9m¶º)î;ÒÏ(-¢i€£ýk;‚BºÙ–ô3èlÍèÎaTW,y#žÒÍ}ïC¡O-àÈ>§Ñè0§V™Úßô ‹+–à!ð·‚ëwù]V£bî~^¯?Ë+vñ£NZw_é>ŽB`³ðž½zìc°¤Ã×ýL÷ðy)¶æÞzÄOÛ˜fÁec³="Ñô¤â£È³¾vÐr–Å9º¹‘Yw:ïºÒ%Ç='úu§EW=hš_Ä—aÈ|J¸ í#bòaå£á"âóÉ2Ÿ~Ð…­§-®×´í>ÑÓ¸Ù8žó8AIê*d†Á:ÇmÆò” ¯*«­ ÑÑòy6˜{Fª ¨¬rL…ËA5åÞ½1^”A"t˜›-FÄlΛzEŰ¿ ø'«’²Æ›P‹ÎuÈ1§’ÚóôI&ä3,Êùq“»²‡7èäM÷üýù)Œ‚f †gEæ†d°^›—§s­›Á³Z zŒ‡gDbžÁ¥®aâjϤ1|pqitœ›¨@¥ìΫ£tÅÑZ"·HÈÌE\-¸g¬å)È1§†kO‘EVF…íœÐœ,$;™Ö™»c/zÆ´ûõy¡ÜÕ¶)ç“tÔŒ=)ù…»õ2yO¹WïmOã:›xï) ðÎ'iõ2lðúÚÀ€K#¼£¹Ë_UÌÏ^Vl=Y¾€£ø×¦¸µEø¤ÿµ@4§qø¦a˲Aåíf­¹Ø<Þôû/#ö—[âÿsÄû8âß|—%«<5ˆ}‘¶Ö{{óÌ?Èx ¬÷Vt2SÖ%¿±. 
‰V£Ñ`ð7†A”³"'°lиhfvBðׇ"õƬ¹Ö§*”£/+Œý$1LS©ž?6gC¬Ÿ`9Äý¡&•ÓeÕZ¼œ¨½OËû]éÕCÞ?ªC=m~´3ö¾#’«¤ôäB଒C²™e$°&¶^ÞÚIº5ÍÖ銗ÛÛzkÈ‹ì°"óŸÈ¢l=%î—í¼€mZCWR²và ¢Ãtmr$£FM'å’èå=x^krIròïzÜì]ã»­o÷¯OûQ_ëãŸë‡ç¦Å!ã±è6<óÕ!HW­Ù4*ir"28j¤:+@pUû—"éF<±m ¸{Ø®^êÈÝÒÁ¢ÕU\àcQ°˜íŒˆÇ-ŸÛ]NÒh4d™rï Ù„y~DXƒpRÛÍŒ]q´è; %£ hÀBÁ¹rRÇG_V-šNÃKœ¤ÃŒgoçBM‚x£¿ õþ9Q¤-Qñ@ùÄÁ‡äœÓ ­F*.饳“BE DÔÒÌ‚ ÔŒ=‚n !Æù—¾xêÑvL¾”òÇš¡:|¸Iœ¾¬@Ù:ÆãÞŽâCcÒ%‡ó²ÁªýéÚkؘkvÞ/VÝ‹*ë•Þpïû~–±e‚/9ûXÂO–úFjTãAÏÃM‡ŽÅ;ëçÀƒB ù‰(°ôÞµ°ÿŸ÷iÖh6CØN´b”¤üëÇz÷¥¥“AÑììeKïd2“bªº:’Q#/£+zÌ”˜Sr$ð3šóú¯“\«f0lÄ`þfYJÒÝ„Nsú,ÈÁßd Šõé¥y#‘µâ‘À¹ôz>Dµ‡Æ‰Ë«ÍjûG[Ã&Œ’ú;n×›?^&©“<«o«å°é¼ªé&†Ùw¤Mk™`DdãµùúÒõÝA~†’üCƒ€ªøä¸¿ct®).î9|Mà+lä ®s.^wa^®6ÏϯëÕþ×€«I;D¬ñÉÝÅÍt]M† É ‰ãjHö‘Z+eìˆÞf•Ñy1ô;Ê/] ¨Bõ-­uý.¢beÔÁ-æ(#z¾Âº.¥> >íú¸»ÛÉã 2л3?Ÿ¦žÁ„TŸG^=j:ÙÄ–‡}ˆ\ËÓ=S†Í´U\¹æå^[%Ñ¢œº"$o(Ç«QW]?6)’j¢®®¡®Ð¼Õ^u—œÿæiù~fâøÃ—§WçîNTqÀQÍÌüe{Í4cU£¡%2v2­b‰t/dJ%¢-Í—Òz-‹>z—×kI‘²zHÉnÆ“øªÔäÑòÜT­ °Â±†QÉ“²Ä¿2áV2Cbß»Àù IWb œ¶ò#ÕdHò–µKéÆpˆ¦êØ‚±4‡–4¥ëýA‡¨àгvlY;ýñîîÛÓæ#:/ÛÕFÇÝúÏ:ÓæVÙ›„LŒ]0@yâUþÙ8>¦ëû#Ö F^,XoŒåÈ© =š( M‹{Ÿ¡Ádõ¬HX=¿l¶µtlé² u"+€‚ª :“ ƒuéÐð$”ªªŠî€#cnÈEKx•ÈPBhïÿíËY/õ´ýþúòõ~¿\ô„GYÇþuZå%‚½f<]ÍÕ¬WnzáÔh¡¼edòžn­…ð¡\ÞÆ­†Î$W.¸œéÎBµgîÄ;Š`fÔNJ¤~2—ñ:.ÇÇ¡×2~âqG¿Û=ÞoåyÛn¶ø¹Zþ?w×Öɱ›ÿÊÂÏQo±È"«†Ÿò ‚ /Ap •äµ`­¤èbÿûsÑôÌÔLUWW}x±ÞѨ/¼“E~ü}bÍ™“,kì¸!Û$räÕnP·ž±ŒTö 5Tá‚ZõPj`*‡r§@#¢4†É%NpézµõƒINÉKC3jYEºñÞÔ>ÄXÂQ4ÞG[Ç]æ½äûŽwäha¾>¦Z½r©ÉŸ¿8?HˆÊ{ß°< Iܲ)^yÔ˜pfûxŒµøhäâàT½âIþx+¤”N´hí½yÖfüjíãäÄ:|BE çã’ !Nv‘{—Ø?'ü _žo>ß?jÒws÷lÅ]ãœzŽ-ç¶ÞÑ’¸ ~z»ÿ~wšŒqíìKŸ^U_ß6Œð.nà_Tªßì ÿýå üÏRY‰#Ïöéígûéçþ:üË}þÏ¿?î|é§-½vŸ˜ãûá‹ÕÒFÛ|Í>Ú~o|-¥ÁêI7û—¿ü¨oúíîíË¿þÛ?:)~¬è~„\¥K½ø÷§Û=yðé§O¯ï7&6_~Ü<Óßžßß¾üØûÖ¿]?¼ßým!cÿé§Ÿ>ý¬º÷no¼½ñÊõ¿õOŸ~úá¤l‡¢Ä£)ù&8·AíYhX^ÞÇÃå…ëÏìÚ¡„@\¶C¼Y uÎ ÅÍwŽq×>^¦¼¼põPNÂÁ2òñ5fn/_Þ^8~«˜l…ì,û§˜H/ÚÎ -”œ˜Û|"ÎÖ»$ˆàfé]¢&EMì£wLÝ@ú4˜¬‚‹#õ¥$¡MvV¨oðEýM˜ÇMÜ¥Cç.®L!U¤£gÓ CmÎÑÂävˆ.g(Žâ ã:§¾d?Ѐäà8€ç@]¢«‰8þc#š1‡;Œ98qÀQÄâ1st1'ø©Úi‹Èý ×°64á`:ŽQÿ›î×Ü)Û KèÌvö—²¹ë‡{/îS~÷ÒöÍ*¢}{*kO› „I³B™ã õ~é`߈ÈéX õ…#’‡˜‹ö‘׸S´>ÐÅ¢ýó¨5§•Ý¿YÙ=†ú®%ÌŸ~ÏQŒ q/Æ?‚'À5®Ä9v"‡M¡Ù2I×´E´°+´ºmeer(‰š§;_´'œP|(Ù“@é„ßÿ @'˜D—Ü”òB \ò0Çä„ øÃÀ¯6Æ,Ù~4P¸7fúù7ºrÛu³ë3ñ XÛj½Ï´¦"Ózh³.i–¿â›RaÍýÔjc‡aÌI´k 
ÓÕ#ÎF£bÙý»ÄÅ0]é˜]à;&T0]<Ú>v˜¤®jLÂ,uŽKGJçuSÕ~„ œRsñD€¸gp«‚s"rÕÁysqгÖH–Ò¬Øo‰a9‚hã ~уÐB“ñGÜóÑ\<Éî&Uúv²—í®Áf71ÙiêtHÑÉÄêehM"Xš_®‡ˆx(Zvù”",‘F.KEvêxD›‚aO Q|WÁ¨±Ñí%K´ Ir™ò˜ê‚Æ=‰B©]ÈS×òX¨jrœêc£/}ЧB0Éž Fßÿ(Xx@²}›—8ƒxnTmžì¯ï~{yz.aU\Á0‡VŒ_·?½½¼ßMÀ ŠgXgfgÖ)†6oF@~úhW µÏôxÔº¾䨔›È‘c‘²ƒ'}{_<êØÎ†:øùz)›¢¨LêüÔ$ÛKœÕù™$,ßtæ]ZÛ–ý#ec”0…1áÐÕ‚ƒ‹Ç&L8ÂäóË›¼z½(v«h™Š%ï+ZkƒFáE'.ëïF$éQÒ¢4HLÉá$gkÖ¼Ìl¡¸3‘šÒ3äQæUÅ<|Y|ŽwD….C* pJ¹»B*,o<€b[$èÙ ½=âÑj¯Ž]ƒªŠy xž7û¸gúO1N<¡s³'àÒ¥K7X¥~Yô®|ä&lë™¶5ÊíÔ.[JMŽfUXCwNq²ç¬#P/‹¹bz jèÊÅCäPL5ÊÈõÈŒˆÑÃ`²4ab)S¼PÎÓ;~éÚ¡Ry]:¨ºc 1_:Œ¾«½ U‡?ªX<³7:§÷'ù‡”Ø»Yü;l¢ìÓáƒfé–®õ½¨¹x}Û‡4ßxœÕ®“ß`Ð~m/Ú¡î4ù \t0Ëb¤P® Á6ñä~ôF’u FM.VB[Ó(EËŠë…ØG¸^;êté>Ô‡ÖPt’a%Õ+˜U„—ƒÅ‹‹´rÌ ~)“$|,æ9Ýz¸«…_ca•³8¥ ×j!Nrv•d¤yœ…пœ–l|œ0ÈÂ]……RäÙRæ(/Ø|çêöþúÛãÓëÛýÍëÖõmZ>7‘7/7WoOWû?[?쀓Bd `Q.†Ek"/~Ñ„D å½òU-áøLrCìÍs +›ÉÙU¾Ÿß^Û/O+¦@íLªp¿!¶ °·ý$S'?%kO Ɔj\4C±^$!‹z·£a-ê¢ †n’“î"?-%dŠú«ñô†Ù¿°ùíV¶VWm°¿<¨ VH`vTqÄ…Uë¨Xʼn|W¬p4ìýò‘bH¹HQ,œOÁ5ÎÿfSôMëÏiwu'ï!±þÇ3²ÀÖl2Óù‹§¬?ü¼ Øÿh@è±ju;Á‹Öз Ç@Ùaè1‘%R7ÃK0xu¢œ*:à!¥’åU‚a.AQ¤S¼°æTÕ:äˆÔìÆfqX]ÂKÃh„GO,N`.Àb’J€EðQù唪.hÍQ,g+Úë§\7ìèÍ*í©| +¼Á lS¹CÁYlc[ €‚í¦Ù|ãZ¾…4x'1¤2ßô¹ùÆ@Y̆ݫU0NÌHH\_ýSî|Æý_‹2ØPpÈ­=ð§Ùmˆ”4‹ÝâZØIÙä³×ší@ÌíÎ *3È\¡ë«¦P–™”SÝѧJ€>6C` @?õ¡$w--Û4¢º0»Tã¾ß©l¼¿¾¼ïmK© 6w DݤˆÒPyLe) œ/KQ6ØÑ±‡ éCkÒÈ&É©CÁy%ù–uh Uq2ÐĨËâÌ„Íî[õƒ4Ys‚×DXá’ÏíâÑ¢‹1Aw‚“4I%ͨæBiÈ ÐQ8ÀÂ-øOš>h¨äZ]ô^ó7sVë}ÏÃï¿Ü)1¿>½¿©Uz¼×?7?·žüðKì E©òK¡hQ‚ Y‹²#f'·ÔÀ O’$}a–$yj)#õüšÖÀR+dWîç ãåîÆdën¢qHÁo…Î‹Š† eQñ˜¯ÿí¨ÕèHÛÉ„½^&+¶öÀE7OVZpþ âQ¶ Ю°Ih?>锪‰öA Ãùnæ’Qv‰ð–2‚§qL JÐ;öí¸ ë—K-ã`+ܦ"öÝ¿€$DÜ‚‹÷„”µ#jt’ý"N’ŒÎÑ, ¡ñ·T·öè‰ '–bt³Ü’É ‹Íåyß²ÌsõúÙ•‚kìÀvCðY;À’µ#Rõ°^Ÿ‡QxߌoÒfl´;øKÛäú¯°ó1Ù<þ—ùr«D?N83®îø‡»}¾w/oÓÀ¤’ÄMBr®¡L$ÎLoì°Ut»!Q_*ŒƒÈRÁ8(I³»svDë²ûŠr@‚cÓ0ºE‹aк¯ë’†!9ZÄ0(-ÆËK?|_W¥ÚàY4BƒE5Þ·¨¼†èQ•„tAƒ>"QOmNÖak´Ù{(j3äGôè¥Ï ȇ=W?¾I›F“M‘Á¥5šdWŸôÒý +ZvÝ ÆÖÕgž½dç:‹*7µØ"Z»­ôÙÈqšXÝ´ÜdƒfÄE-÷)¥ÀE-§”óÙ#ÂtRraÒkŽ•||6gë‘Kë8Çet\ l–Õñ¦1Œi^œÈ/ªèâZ†Ë18vHÌÙÉ•“ú]ØÂÓô=ž\Ù’/›ÀèÓIá#zÚóêã›´i¼>?_úD Á(Š>È`ôôL±U´¦®?¼y¸¿}úýñáéÚÊU·OWúÁÕÇ'WßåIz)=4aGG1hذ­‘hÝÔÞ„#%ÏTÓÒSáæB¶WcD¢.½+/–÷ƒùÑMZÔž)$¾´ÚcÂEÔÞ°/)^«èÌLÝûÃÛûëè W¿Æ×«Õ·&b ÈÄ(ÿß§ÙC°Ê?L¤Z2ycG2ö´ Bž1Õ5ûØt9~X†SÜK¨áú&Ë1…»Ø4ý—pp8ºI› ‰O 
ò¯=ÅŽX£Ü"‹"¬I˧‹C’ì²Ü·+aÝ^Ý\_}}¼}¸›d,¼÷’ä‡fÊ—Í’4"²ê÷ö8"V?“ ¢ê,-‚è¹~ðY:©8G·—!ŒïѦábÍpa 'äNúl3‰G\R÷C±ÿûþôv½×ܲ.O_½^Vÿµ;§:¿¸mòõ\ÍØl‹ä%ÍQà–M½`œŠ{0òg&Û§“½rþD‹¢*/B \crΠl(ŸEB‘¶“ɱ#sÙëQߣÅä¤<à…MNpK4(_ºÅ—Á¯Jj×7+2í÷¼¿üz÷öü |Q‰¾ýåz ¿¨­Ž ʃí ¡­ 9™dýz’ÁzèØÅ…'.¶ —];¢O'…G8í¥ïÑ ðììˆWxÍüQxÍ"†ÿŸøØV”»úz­7¾Q¨±ø•ƒ¼<^?LëEÀE‰ GrÀ~ÿaðrO³£§¡Š†ÄW™P±áÑÂ…6ψö •^/Žºéøm† =P¸t26#) •œx?):ÂkÍö/óJ}‚¦°M…UU"qÀÝrŸB7¥<Tž—é¨Ü%Ú®(e:X,®ÊW¦ˆ˜ê§»›´”à4r›ÐpÁúýä1̘èfG›*öDï˜ÓFoæ>ÕÎá§hMœ°(+dQ)ž—{qÊuàÞ¬QVÙæƒw{¥ôñ%²¸¤ÞàÍO¬X”ש —Téì gu:ð†¾èðF—Aksƒ’Œ¨\h%$_äoCt÷.eÜ @$RÚOqö®1 zU¤ÕŸ3\X‚kšØAÇœD“ËÙJªáˆqIΕò ™"BI( ðÑ«Õh}ðƸX ûÙq>´ lz‡ë]É3Çð¿A(rãDœ„"ã|Oo÷fUøß2X7¤Kó-7Ý(ÁÓ|´›jÜv[<çlë@™o1&¤"ßríæãÕjg£«B›Ö—K2.ib\Ô(2í^žÞßî‚mk‹ú<³*f¢©4kr#ÑC,ó9?Î0"DÈ[ŸšSô.tƒRÖÍJ ì=±îS}<øS'žô­~yz}û@Uµ®;eÓÕÛÓ¯ww§‰Ð©Z^}d-‘KĤªÌ½' Ä륮+QÁˆ\B&¡.ed‹`¹£NÍÔ§f ùÂ%‹Ûz™‚¦:šVÿ%w%¨X¤ã’ÆA³Yö\QòsÈE¹ˆ(¹Ó¼eºŒRʰê6›€Ëž4ЂfH…wmÇnšzMÏBÇ™»ç‡wý³›*~}Â9pÊV‹m÷r…I±mNŘ[©K¹˜{D¾.8ZúÔà|½ì|b×¼@nu ”¦ML¤º#>µãIæ38ØQûŠcZV&• ‹ú²œò'6»·©Ð×»Eýî> Êø³Îñ$œhÂy® äæ=“›K`Þ£sH æž*`=†¾U›]EÛjE*ÉA¶kl÷^G hMöö—*Œ/‘=¹ˆ#g÷þžætà€À³8Ò„®¤ŽÍ*Ý_7ÀD—ŠÊd_gCŠâ‚Lù}¢Ô饯ÅM9\”h4g'ƒDHŽ[’kcðàÃl3à'.ªµ]-I,ò5¸Tò ÑÖed3‡Ý«U.ÚèjÒˆîÂŒó¾éT88ƤƔg3ª'ƒOšAQB&’ãôͳ%ÑÝ›U #…ˆ—V8-Ù~¶k“ô-Ãw†™¶¡ò¸Jß|QßôÅsú6z³JucP!™ n‰Ðin9gáP"e|ÎIÄ`Û4d!¾Q†oêI˜¼+«›²äÿ¸»šæHnûW¾Ìe”M Ah;ú²sÙØqŒcvb/»‡T*¹µÖ×êÃnö¿/PUR¥ªXEf&™ÝãÛá®Vg%â$€ÀL)œ®ÜPÒn—V 8¯%=ÖæöçÔ3’!S7ƒB&mùÕc^˜¢nØ­”/D× ¿h8}ºÕãí7Ûèió®fDÖ%º‹Áàäzu, vÊ:8Ì •E‰gCw”djµpH&°z++PÛê­œveQ›Õ1ã!LR[S,ä„X|ádk+-€§ÄqÑ`¾0.2ÐñR›õÂ)U×[Y‰Ú\ì¼f0ß5×÷Ÿ‘t½,ç-”ˆº¸§¤HÙÇ“‘«5sx ÿ ‹×wâîËõ’ªn¦7u¿&"EÞÏor?y¾¾]þÆô‡NžÄÏ>oÔ§+Ø÷<.oïŸõÿuvf<ƒ`]èe OžÐ?ý°M–>\tÿþŸþñån›·~Å7üõüæeùÓfš·'«¡~gµCO\†è¢·xòéÓÉ•ÀËJ÷Ów‡7:©»›‚jvÓN0Y Æ¢4Õb1ªAÆA}¹` y—C5¥¤LoWVr»êƒzHTêŒœŽ¿d3ŸvG[À“ÞR'à 4)Ng€§hí×@§Ý&ÓߟÞÜ/~Ùš÷…e¼ºZÖ ±/ÑÃ2q2Äd÷jÄ8´ãzµŒüU3¹åxÆèE€/8§p4Çp Wëui… ccô6Ì 2Ž|s”‘CpH¢Œó¸Zre,‹(Bìe:kÏÐ8ÆŠpø5àfUœJNúƒ^h±Xð"`mì©òF} ²ß9x³¼¤CåQtÀq¿è€}¢"-v],¸èÖ“6az MÒ¸]LAópÄÎzëþŽƒ±÷ˆI5Œù”Uªµ äýGø#”irï0–8dMZYŸy™d]Ù ëÓM3½¥•å1vc4³½„õça­ÏÛÈÜ”(WçC—Ô#<ý.¼={ûÁ3M‘Ôéòf¹Ún«°ï´ñáþæzñ{ï9ÃX«<}×P[ÎÄÐD[ 8c=÷n(~ðON‰‚â Ãéåÿú§ÅÐn§hZú?Ùæc€/hDÌ4˜À´šGå—ôš;q3вë hI#c l|/rO{«_Ò×6‘höø_oì›ÇÿÖ4'{— 
Þ‰£¡Ô-"Wöÿ}³]w³ø³Ùûí^ÜoÈ#ú·Ç– £:±”ˆ›šÉ7€XZÂ#Ö&¯KÖRA:ÊÚã5˜ë•ÇdMAoiEù(– 9:Šs[gàÖÖÉâg×…ªï­Ó€6˜ìžÎDR‚U]<ÃÀ>o¢ÿºŽc>|¿TýÿõZ<òž±žîÓO—‚§ûæzš<¦4%`çÑÄ)çês.«°Y"_ßÒ@˜¥ê|y’vØ€‰çÇë#m£íÁ²SšÆÃ`‰°Z´£‡‘8Ÿ;ϰ¯·7§ˆpzòªìÈ[“VXØ!pZÃxÁ…öpêB"¥b°3ž"Æ£—ž ššQOä’¨ÇQ â §&Œ´Ôµ7Ô¨±S*¯¶ÿz‘pq}§†“ ’x—.z÷Á°“¨ØzS`ö£êX¤l´qCÙdGʬËÎprò„$Þ ìQ$ö.YרP(FÁmŠ7$¸­¡“PìÈ›MpïÜ}Õs'AѹÓy_ŒÀSð¡¥JC¨?"¬ëLŽñ9àýŸ­óЧ›Q¯wÀƒpÈØ¦¸ǵõGoÂ?Í ‚ÍTó²Ë­a\à=¬§l='l”'ú‘_ÕPÃ{¨qÊÙÀÑÜÞƒ¬kî=¬[ßš½÷²d´Ö˜ô­¥õ¡ª÷°JÜÃb÷ñíƒÝ¡ƒFÖݤÄ0шB%$gG#‚Ò2‚5s'*_[’:ñ6_EÉŽæ­0+%NÔèì Ên¤bn‘út‰àÛ¦>/—7÷¿ßN-–‡3ÑhM „)ê@¨} ××ñ;y,î¼Eoüñ:6‚º ëŠ2Z>šA‹™*n9àNêë@ë«J@Õ…h-ºyiÝýàôùfØœö`­¯†¼§Ep#’ޤµÑ;™y䪹PÙ&$Ñ7䯢÷þøü—µm*ÙS 'ª›pP­W C…æ¥Ç^KAö![µâ+9à~ƒƒU/á‹€:X(¿ª-ëCƒ$iàNίÆ5ÞøûÉ×E)—×ú]O™éo…OÑáo«ý°.¡x~|Y›¢÷†ˆvèõ=[98⑼9±¡ÿ+•y1`Þe’¶¯—À;æHJ “¼åßʱ ¾Ëk¯zÒçFøÐ¾‡Mg·$ºKdÉz vøž¦jžÕÚ²{[Œð_ˆí 9ìXr“ÏÈêó@Gå ‰!Òü‡çwùíõÿ ÌÆcÆK¼Ïy‹—|/Òp8/R­¸{¥{/!lA! â¦jý(0s’ûk+dÏQY`%D”|–d/ðóóÑ5¸»’ ÚbÓ1â²qt\ôn…§î—¸¾Ð–È÷þöuì¥<ünŽKÑÑÓÓÕõãò7Yêú/Üÿz×Ý?þ<È<Éòá³rV!yëô›e;ôMôÊ96ôZºž«›•àÚ .1%?ÍÆUžmІ¦'±a•¼up:anûå&ÇaU$NO¬èÓ 3åšvC¬Z½oKÂ+r¶ü]Jj;XjpŸ¹+A7®«Èýטe£L|-Z© G”ƒ¹ãàQ6SÄV’u~F .„ã3m×LV$ôDT‘eW+_yœ‘Æ­®múIvy‘EOè€mú K¶*sú!õ“@¢¥ZÁ7¸È×Ûâû[gwOýÑ_]b<ý|¿< ˜´Åã͸ºaxu°(šÖå:]–õ@ZÂ.W´µy”vyÆ,‘kªê¸'¸*(-à!0ÍÒÀØ¥M4»ª¢´¯ÑfŠbÕ¸9ÅÍa@õq`i ›Á]óÐ`Iv<ø*7_ƒ…T¯¤Ëu^6—Í',ØXo²œ3hÒùè­¸ªu¹]ŒsDäæ bÆâ:¨Ø©"ò²d‚4ç¢Æ«¦¤‹p!˜P^?Ü6~x\^~>÷Ñ@â/×Ö[º0Â[Š«ðšó“«mw7g½Œ[`à’òœ½‡¦ =Ãî ÇVU<w>âŠê³¢ç+ÙcÆ8Û†3Áà}0Îtª•}@ ñ–K(U)RÁî ô ª­€ªLù\sÚM0y{ü0¥cû.Bâd`åÇ#dÇP7_á÷ÇĈ|p{SRÙ Á‰ •Ú)?þÃ^jιI£‡¦ŽyE'=”/=*Ùñ¢/@ãQ5wtúÍ“]òýùËóç^ÑÝjË-n®e³ÖÂ_ËØ¹àb–½ØvIJ³|}„>54y+“³Üô¥M´ÃÒU¶Ã˜ÞcËà­ìœG™OÍÌ”×á%‰)Î'¦Ä”|šçM8•2SZ:l^\Žío,’]®#·OôøFÃJuépÊšV¹œÎ`´—h©Î {ÝÓ0‚¶ç7vºkiÝž?áp[ÿÌÐó/QS_ëa”³ä•ª5Ž*Ñ*Sµ³Žª?¢+`¶fù1ôYHõŽ“¾v+’gymË:dnH ¡A‰ÇÎa³Ûà¿®¯–‹ß7Ë·\ZövÈÓ S@~“â@®©]‘¾—/&çÁR ³>"ùã¡AbßÃ?t[rdSæ'ØôņSqxO¬5 Á+{­ErsC[,v‚sÆÅ¹+ªÅˆÏŸž_Ï/²c>Œ¡³Ûš1¹1´qÄ Šo¯PA}@LÕü³ê?ÊÖâÉYغ‚™¼D6ÉáÖ“I #Ô×6ÁÍl„Ñ4ɹÎ8~Cþ5ÛnÏŸEö§#‚eˆÄ-1š1Y}I`é¨E;Ã{qÕKh©©— žwŒŠÝ9›û†dáÄV4UòVcxùgn›´–8Fîd?ѼùòÓ˜j`õÙz‡ #KBÀ#ýÙ5LÒ¢UõHoòmqFZõ¼¤l–óeÉ„—ò•L"8›f:}“L/)¯my`Ïk‘ÎB/©æ°¶[š(_àå¥×çôöáËÿ l:Ñ"Ù–6ê`Ìý=òÌè¦ÏV.¿š~Ô±ˆ €Þ^iͲÌè\H,l…UÅ‘:eqìf7[n`¶¨Ã—ÐÇøuÊ¡ÑA^ßÉ~X}òt>Œ‚0Ʀö 
vD“˜®ŽIµªÔ®Þ5±l4h ¨Ä£„³Ù’‘bÊÁöÅTåšØt6pô³CÑ»–ª×ÄÎÙ8?Q‘Ÿf¦04uóÃ0;ÍåÅ}µWg±G$`˜!h9FS¥¡«HÈõLZow¡¤RŽŒŸi$Ó{2ªbÒòÚ`h`EZ©CLšbì:"$œ?ùºª“½ò,B_ýæT"ÃaqŒGêœrÏ0bÇЇ ä-ÔHÒ$¥T­šE´¯´C®`ª0áfæè1 d›žKÚIrÝ´J½s[à t"çuÉûûrÕ°c:@íKUy}Ñ|=’M5£Z*DFªŸà";‹ï^ï‘^eÊ鸇¦ aß.X ÆÓí•OMð H°¡ÎÏ€'Ù×ZOÙ›ññ|ÿËòNM}ùÛ鱃а|‹ã•‘EU0›·¡)Dùv‹ƒ‡ÕU`=Ü :«=­&wÚh• Z·âª¼¡SÞ9Š[År½§öÈ»éÀÚAÞ m>ǪIP7zEdnʇC׃CŠŽÖx3‰}\ hѱSÊ C_g4´–¸¯Åw¹/øu+Ðz‰‹åãóéó£èiRû8^'y¤v»%y…%)d€$êóÕÆEc=À–-„Mç&A¾iF6:§i>¶R«ر ‚<$7^ÅŽÉ6oš9ÛÄ!Õ”ÖÛr°CÕIBŒe#ßl…YÒ#%­o(£‹0#dǺ1ùYmX¨6ðV\¸`#¨Í›)ru'Éz +PšÉ[Ö®H”þœxÆâþöáüqùþ¯1uÑÛ* t4íŒ<‘"MÑt4cªØHé-ÿøælÎ/n–õþõþþaOçÿ!ð½ü·»Ëå—3°hD¿¢]//·ŸA‚I;KìóÚeˆ9Fk]k²‚¥·˜?éËž\ëK‰rËë_——ïÕc·HúÊ}÷ˆÍÊ6O¹~ßü›„ž¿-Ož?Ÿß¼É£[-~gó€ï|tȱê>ȹf‘Åú€Qgo“išƒ“(èZBqˆý™I粌çÍñäõBéxÛUñs¦M;t+]Çœ7¤ííYùz‚ÇÁC<ËE¤ïª\îÅ||>á+t/ª¾‚Éf}…øød·GO–5buymˆÀÊò:€àvj±jÇêk9›Äµ¶,Ù| v(V—8 CóùÌ“(ù¾ djºIÐ7ñâM=Ç?œ×¸¹¾½~–cªüúf<†£­EÌ‚süÛax àKüE଻p!}öYÉ[¨‹£8»·hÌý–rषJ‡bçööà-Þ!RË ‚®©™µ5¿+ü OîaÀÀi¢›‚<â˜cþªØ:ÈæhrY¿+±Q·ò)àÖ]uJÉ­änåWÒKv¬õÄSåV^^{5f¨*v—l7†ˆt¶Dê¹&^÷qáuŸƒÐñªY<«Xñ¨È1¯Ød#âva%×}àEmÁ7·ËÅÐÞåz€T2Ô­†¹“)T4SÒ:¢ÊÙ”¶·Ô¬oQØ1ËëÑ™š» ú1ˆ$Äѹo™’9yÎñÊÀñÚútÔ]x볨ê1Yí¹•O•“Žlgkg¿óÑ·†]äÔ½˜(J;ð>ÚªÕžE`+Þ{³òxph©Ñ°ÃtTçhB'o1ÌTúæÅæ© `šBpعX,¬.2 c|_:]ŽõÎ1²…8@QÆB 1Ÿÿ @ÉꢭԪœc䵫†0oT²ãöñ°„6Sä?&f«AãüÅEé­ƒ'-!‚F‹:ÒÀ[3õÔêMé©Õ™ÎÊwš||,ÑEþÔºa©ß5ãíÊJŽ­–”ø#òt+ýa˜•ÆØÄÙò Â:ÛË£õÁîÍu©@ ›bø_…Ew?Ÿ.†qq€a>h_V0ȾxT¯öP`¬Âãz\\5-; Tâh ÊZ¨˜UÊB·²©äg‰­‡AOÙQ`Á}:¹”Q!}jé–ÈØ1T¨A¾V¯Ÿ&'+¨4ì‹F{Cæë‚ ˜s*d’嫽••„}òV}¬ê4JÔ6¦±3 {ôy²ÚBiI¹1xte`Cðv¤Úz++QsÄä8ÌìëŒkìTŒ°ïŒ“3µ³œË19ÄŠNB q/±C9xäú©¿ ºâ«KŒ§Ÿ/â—ç]á"G;Ìg´x§ž÷Ð %ïœÄë­BøßéÓIc÷Dc˜•˜ƒsòïäk ôÅ@ðY‚PµPÆç.$›Í{K+@º•ÝkÓÇÜ@‡;I€@'ÒNDÁÆwÑ9ÀÐqE ófÀ¹±“¾Þcˆec¿·‡WH4®Æ~mcH ÙYÓ[ZÑ 7tÞq4C\‰`´õMQX;¦ëYNe†ASÆ“çŠqÚÄ‘Wœ9æ'+OÀé-­DqFžåm¤az‹Ñs˜¤77ªwίRˆ0YoP¬7ÓÉQ° Y%)«µtgBaEæf:‰*ü ‚Ð!#ÆøIjÛÄZq›È(îf²ÚJ»ÕEÍ!ùµè ɥYt$m“^ÇMÂ4½1…Q2à¼Å z+½É³ÑŠ#è çÅ^0«81Íòò¶´ry-½ù‹afÅ93¦EIÝ:’(ñrHµƒììÌ´ís‹u¿|E¶^7½¸d2P¬0uX7ëmXþ VtU í·ÿØçÈw)ò:pmbÆ}/ ÃZC‹a­¡½êµs›„^åtö´zy”ŸûÙÝê®láÕúcOíE®"®2ì4j‹ª ÀyßÖ5ùÓèLÄ3hlÎ*ןTÎFôi±±%èÍX׆9| h„RF i{õŠDª{ˆò P©NçÊÐz•lç ²ŽâA¼Ñ¨ÈW‡fÖ×¶ ö¨æÐ¥oušY‹‹ùàc<ˆ°ŠÏ jàQl’µ´Úê—Oÿ:ÔŸ¢òOêj³“ÂÚ,2£:;mœE k]Mª3¦ƒI圿î}š·ÆÉ:_Æ;ä®v”‰j°؆Õ€m 
é)±ÚiHÕ'Ž!dhHc(üq|ãèˆ2MT$‡kÌ|mÉG9mVw»A‘ÓWßzv´úð?g·/÷‹çÙÓòaq·˜¯ ÁQ÷ÔŒ¨ºhFk7ú}ÖÏU­@ð=u!*ª¨i Q{ßlíܤusgtŸ¨>¥uŽîCJFÏ4(ÑHõ…yºv~%Ô~ºH#ƒ§w”¸»ÛÙç—Çû‡yá(—í+v ë\'?ÕN·•»7µ”>&¯ØæI_²\€Ç{ÚÓ£‘ô9pé®-}òkð}ÑeîVóç NG¸€ÍÏvwe 54JõÃë·)R‘üeŠe ©ÕL/ÂkèÐ¥¥ÑйâÝŽ.-ÄQ>zÛxmqĺ`ý D#öÝ?Gš|Z-¿ÍŸ¿Î_Ö«—‡ùþÛÆn`UøB{ùoe‚жFPaK˜& ¸Ud!=Gc èd}O#ì­5æ`0ÉFBÄxèˆÂM,° ‰§]–²HäEiµyê*úkÄ)l7ÿíY8Qž¸~á­Ã÷²~^~nY¾¬îæ÷òpaèðBê^þè¯ËÕ/bcåƒßÏ?î¾Îï~ \¾Éá?=Ü>·hÞª,‘GÖ|¬¾µ ­@ 5Edâà*…뺙éLÄ7£7U˜´+NŽ¢e’7ª¶PfÔÑxuEÐ#ö0XöïÖ•3È™çŠs'(BòFùõùÃ|ÓÞ²™…™íRÒÛ<áÑsʰÉuÕV×4ë°,€ô´SîDèvåô0ǧQgÌQå“åR!z4&QµI=]¾ZŽ£Íµƒ9hãDËâf©ØOãÌå’ÖëÅÓö·.¿?ËÕ—"ñvdûÊ÷q¥4Óû×T{Qé} }”~ͤ6p ê<„m»Ã¸(¶£€#bµ[ùl ]mÅ6£¶ëLEc®WŽë0² »;EþêÔ­gL[q†VÏh°G)Z¯fëÅ—0BQ ImlW9Õ5ïÆ ‹f7¶™˜fS¯]G… ‹Ô)òCdÎa²¿P(3§#R5i©XšbžýµEVÎÐAdqpÞ{ã~¦úêjù?ò›ßqn¨¬ù‰‰»Š.øЈêoÈËnC2¶“aL†Ý {µÓ2 ñý…#š5ba{k – Uˆ RUN››ñUe6¯%<‹0ú;e‡3 Íø$ŒpqtrJè!’Ì´zå“¥Zð‰|¶õ¨¨ 1–Y;c&ÀvÉ#,Ö”k{Q{}:¬N®Ê€Âq¨-$âñ@õh9vDÕßmMžX¹õ‡ÿ^-¿}X<ξͿ-W?äßîç¿}x–;Fê°ƒ £aTÂCåSxÈ«ª±§…wi2•Í]%¤Ñ ŽœV·hD§nÑ+ˆN™¼,¥Ã¸HoÌÚ?*ŸŽÑ÷ËÒ¡Á“8P$ž¢t”ŸvkduTœ' õ²¿¾Y¯ÛÏó?ËUýi¹|:¹¿¾}žÿ1è¦ÔD*C`°˜ß¿ýLG4+¸@ØßæÜ”J‹$ŒƒÃ½æwácwJt5¿›/¾ÏïP´Ã0†QÆŽeò໓ힲX‹±ÿUž_ç«Ï_o?¼ÑcØþHxyPÛÓláuJ“¥4 亪1ƒÄ%7ƒz¬¤z^…L|îP“:§LD׫£† ]ï™TB½xšnLžwðƒc*i©‡!8‚0Å dÞá —6åYpâ?º6=yíx¡ /¬wj¾HÔ¹SV28“b2TTð¿¬$ÄÆèdþžš-8ɸ&òþê¬Tã"ÒhnÆI¢{NÏ7ãí‡`TÓÕ9gÙo§Ö.2„×q¬£Wš´àùf£!îêüPÅ¡÷Õ¡Åwëø8J^ŒyçJïTÙ0ƣʄŒ¦óeŽœN2)«XdtQM:ÎÄq'KÙ+s©W¾½œQhì­UÿOŠ#1F5H¨é2Ð+E›í–½_b!e̺HÕÄQ %Ð{mF[³‰Í[6ú¹Â¾]Z‰²ŸrœýÂëv,ºUåƒ3<«µJ³Ä.™rœ)ÙS¥ KÈWë0½zm–0ª"°rb:ëµ½FÃûß–ëùô–÷¨srjÈe™©6iVA›]Q«‰s,_-µ.s†¼„“Ž'±Š³UUg ëúF·H‡ÛÞÔÜ<Þ~“ —wÍÖó»—•ø&3ùÂå݆^S盢މÂ:¯Œ*MhhOA&ÂR4¦Q®‰{"Ÿ-žt‰ÑñJ‡(lB1OQ=A‚U0ùb„>ó°¶cƒPÔEerØÀêTŸc ”Ž9#J4áœ.~’DM*3‰ йºZ2»2°ÂÝÓË,|Ûâ7tkÇ3fØv¥çð ¤YÝ™RÜ+Ñš°Œ”eùì|–ŽQÈOhñÅdÖäþƒS$§ï„5üÖ0=e8ê·ºÍÞÊâŒtŽ]´–°'N#mbء҇ՆÑKŽKÁŸ—Ïî?Oñ ŠÉF×¢DË 0 Cò»žÃ6°5=Èy;^W'µ ìêtuÛˆnR!ÉÕ,ñÝåæ“-q1ºnO½&¸Ùµ&;p Ë¥%:´ª±›Â#D9uÊ! 
TTÐ|ÐWøuSa¸ùýÿù†Œw’ÿãЦ?¼È÷oùæ/GÑ_>~•­=„•«Ï¿=Þü>{ÛkÖxê·Û»¯r›»<@d׫Dz]¯uomzEÇå«^ëÞúyÏaKæv¾!Þy¹áQR^Y_¯Þä¦fFY;ÔF+WU•øÐhkioÈ#€kÃÂz'Ñ!MV¨¹k†Ñ‹ƒaŒòœq¯^ëË×Nqè¥ýÉ2*z¼bPtoX,C½B•GkÔÖ¤6dT>²AJ.Â%mòöˆ¡7[ëc’[¤þå??ýÇo§›£NGi‰óðtw”:Ye¢«£²£ôx{OˆýåA‹m½l¼Hé0>—K-ŒÏKß7ŠÌu(X‡æ¥/” üã9>6J‚&àIúÇr¶@hÓG%~xDͨsZ‰Oõ›Ýê“e ÐmId­fŸ÷GËQ@Œ.[Æ"ÃÑââ|Ê'‹3Bl§‡Ë¹K—QÂe¡§ñ”¾7/ÖÐ%ïÍ»¨?0:Zν9?x±fùýMÛ{WQû)†Ã¨#pí†C踹94rŽ”²îÐpü×Íþë ÚmÐÈ^ˆ¸ÂÌE´ÏïP™î›¹Â)Bl´çÀ~T¦&½;áý™^ªÍôNø€‹ö‰:œ¦¦ª*ê BÓÓT5˜«¦Ð "†Òa‹AkȦÕT¼aft´5%Ÿ%¡ Ú2ó"±ÛI÷f°Æ/CT¬¹A~rG‘x@ ]:?'ÑšºQxt²,Ób¢ÏÊL‹/²n’†²^÷ÖPBFФNä"4yfw1ubBÃÔ ¢U?‡+üé»™©×‡m»òlïë+RF5÷†“Ÿ0vˆÅSêá'¿á¢â¿Näl’âbâ0V‡n²ÖÊ-|¢âA”tQ]¤ÍZМÔZ > ‘º;VŽÊ xéâ »2§*uc*ËCw§ÊZí#N•Ù)94Ä*o[&y%Øã+¨ªûùÓÃòG¨×\Ò-”ÁÛ/ŸzSÆØ2õTùÚqÊW1•§|+ß{I ‘VâPóž&…ýͰÝáÑ™a Ù€¼M˜a-+Þzwýˆa7"ÿé~ùëãÃòö~=³HZfóÙÏü¯wß;elÔvR˜õÎW{ 456ÈzçEŽÆ 8Ã$Žlßå`”›£ó?ñªÒ£¶ëõÃr=Û 1ÞÏ3±¬Õn7Yßû¡ö¸§èhpF«¾[¤^'?ÿö²|¾=ÐâoD˲ Á÷­Sž¤®uui—•v`ž¥Àdä78Ûï:„p 5øGbN¼ãZà°m=§¿l‡mtÉ=&pq´76èr _­Ø:Ï%tþ©)Q:xïPŠþP"­pGJDu /mT˜DG&§Dêç ^ŠÍxÈÙ€—ÂW‹Kå›òR†A Ý[oHmÅñÀq âÁ?K´Þ8ä¬RêŸwL}亪Ӹ,+*ó\ßÃß8ûÆ0ÓÄÛwÝëç~;“rµ¿‡«çsåsMï ñ¿Ÿs–H[Cj;8lïS2 êU]}þÒN°‚4¡%²žêæÜ`ÍÐÖfÀ •N=Qªª‘9f•… DÜ)ÇÃó!i• Äa ÷Tia–å³µg¶e.^I4Ž»H¢ü­°ktOФ4ÊÖ‹Û~A­¨=IÉ@•†] )šI¢–Â'âB6«kPqÚ%&Mqá{£G#áC‘d eÂçD\a7TáË[Ò,ÑÂäªRö\pØÛe’+ E;L_l6xt²œ¹¬€’† ãb_ÖÀ$¥IýYTÑfP07©®”ëÍ’=ufõ‰3ë¼Álw¶@iŸ½@{¦Y=«{ä4í` [ÓÕÿ<ƒí»Z¾<ϧaEŸ{D™U\qÓ:ªd¥äjÔ´¹‡KT“Sž3æéDazJ[Bkã€o'ËT˜Zâd*ªÄ7¹6¦šy•°DiåÛà_w”‘f”èÖç„/ìÉeðM|^fOÚ&Yū윶M+C‘»î-Î&’W²›±‹‰j8ZÛ4Á„e“Qçg˜ÞÅ¢te Ó%¤å0mûïVí’Zr]Íò‘«°7·ÞWD¨‚ ¿zZ)D &OăWéÈQl^aD’F­\¯Ö%ãr‚3GSÆØ™+¨eØÎºœŒY“ ¸¦ñt¿an©c-§ÊXK0—­ UÀ¹KdÑpžôÄÔà5Â\ë£a®¸©Já™[lkS-äÝ¢V&û‹Ëg/Ò3^{ÛvX³éL<ÀA›M+`m^GÀÚ"›ÎŒ¬…4ÞO€aãäüi “ÆgICµid‰Ê<@„ëàB¸A±ëÜ ¥ën–pk ï·íqwË—û=¸rƒFî4áÓîƒø¢56B;Ïâ\c“%µ)jµêÑP‚s:Ã}°a8?å>HŒ…‹Q§‰ÿ |ìDè‹r-Þz´…R«ö½t€f» ìûŒgœA©íoŒ?®Ñ †WšÅmäiwˆîÐiñ×<û3PËûÝ/¬·–üóáš67±ueŸW/ó&×& XœÒ¿ãÕnõziÿŽò¢Þ”j´åcDê ci:ORÚl rÀ2$’ÖRÚ"TMäŒh×BiÛìJê–iÎÉxÛ!ãlh"ÕŽ®]£QYÝ®E4ïžÅÕM‰|Ʀ =wzrM{$¶>ü¯A©ë˜Ø—w0')=I콬ؿŒ²˜ ¦ˆ“b…_Û¯‰Ì‹”°8jW—y€CŠ:ˆzŠ|árÕÛûo‹ÇÝbï§åÃân!ÙüËa÷ã`†¶Ï)JÓb@uï*ÜÎT4*yBÑ ÎõÝY[DÖf…oÀÁX­!PÆMKxt`vOÁeï  
Xñu%Ü+vÝñfÜ´ç°`*·D¡ý×&‚rAmG}NéTÄÖä—N{k›~êCT|Íx)R"}nòž€løM=€²ÞeH5ÈÇ%ÛY,Ĥzt°œD¾á´-ßl!¶a–«·ÜBèJ9•[+a“E½[Ïp~+ù†(œ`´½Z"ÿØùÜ,^ŸŸ] ãËúbïåæb“õ|±÷\H·p# ­ÞëQvò¨#=¯lŠXVGP±Z  Ý ô7èìl³£ê§©/ðéÁn ¼>mV仿?~Ù”Ç_×Yðjg±þ=c XãXµ+s‰ë£87`` ß9/fˆŽÖgnøôð"ôÍÈK¤žÐ*gÜ TѶ2tjC€å‰ˆ$u/'"’¤Æš´—H%gn‚SþL nlÚfO¼&C"ÂÖ_Ûß9žÆé24áÐFÚs!ìïX/ÅBma9”m Ëq=ms–‰Ý¤­&ÙtÑöa°Y÷ÍBoÝ­0»²>HÙwºÝ<­ßó/òaAõúIy={ZßÎn?Ïfëw…ÓÆŸíôr4£&)vÏíºÀ¨)®õ·%d³¼“G‰PÁÈQèÉñª±y‹ÕitRìŠzÄh…†Sà•×ñ4:aD£Ë=« Æ'Ò™š&œ|ÖD:iÎWäíI¿‹vÖ¸>þ9‡4½\ËîŸEz˜ Øzb§1h[Ï^í )ݬe÷Ï–úUÜ,Yú•’“B,Š®·Q£‘† nƒ×Ö°š¯ aw“«'Ö*&5il|SŸ™LŽªec|#ð‘ è©A©þ Kñ=ZXÍö-rÇ?(je}–ø¬«^Åšy4oo¼nÔÌšE·–êÖ)­Ùæ¨[›l$+£îiÔHÙ:­Ñ][×"ëþºÖjÕµÿÇÝÕ47’ãØ¿âØK_FY €dMGŸö2»±»{܈ÙVU)Ú¶¼–\ÕýïTRZ¦D&“©êžÃLGùCÎxð:¤¹Ñ¨¡R®õ¡¨ÕÙ0°x]LÊÁ¦aa ½§Ü·\H¨ Ÿ(?Xöq±yYÞ­Ê4à´-UõDç­f w!3UK® Öa(ʸlÅŠ%N·íÒŠlƒód¯Í¶¯ ¡ã-$Ù6°3œ(CMu¹™ÊÚý¹Ü±­a…)ù•aм®, Øgä®*¯|øÇ1³ï`®ÓáܤD«”–#:Kbl¢´\`´v”‹••] çbÌÞ¿YŸ¼€ë[¨çê„O¯Ì¹n_ 4-禦ýª>‰#™©7Bjª°â‹r w×t#V ‰bRîÈ·²nà'.óÞ=è‰hÍórÿ‹Û•[ÿ9=ðº¯Ðm¿ÿy¹Þ장¦®›â”|ìlÍ‚èÀ/êngÄ–üLè/Ê>p6Ù«š©Q5ß-ÖŠ›AÅįÍÍW¨¯Ó¦É9 CÈ8Ä›²s,cgF;¤†»-•LÉØžâAÛÁ‰ybÆNØ«oÕjÕ×Í—•ªwm¦¿ Ëì¤N³C®ÐH"R ¹`+šoF™®5#hܯ©„’‘tì³…bÆTj¸g§&#éÞ ×fgŒ~ú1b̉1b°Í/zÄ;sÓ{¸%ìŒbÜ!ì<š6­ñ#¦±4pv›\`‚sˆÌ{½Ì!S ψVT*<£X1ªWC'¯n|É @Ÿ•e‹Ó“‘ŽëТ]W•HLpÃBôôjò âÉ%e“8.†.åpý›âãÀùëýr˜:"]¸ÎÙ:¿áñ´ë P/’€ÐBy£û´;ÞåT°:«»àx&ÆÜñŽÖ&ëlzÆh})PÁš·£·]ÄÉï"ÄÊ áèí:iwr83'Ò7½ƒ Wt~ä…ob×OË™S “ÃÎJhlù ÷ºó»}`z4áãüå·ÅæùAv™Øøññõi¹ùãp$¼à5ÎMJ¤+~cpì­áÊ Þá6kF®…xFçJªj¬‡<¹bH济j¢Êï,^›[ÉâôªüÞø„`)tÁpH³+г‰×¿nC<ô†·Ž&¤\o¦•„:k<„UJS].ð¤ ëMÍÍ®qÞ`@Û°„fŠªp4¶3†c,¹Çµm–P9&½Õ£=ª dÀ_›P½›þª@¯@„j·j£Dçµ­»ê‹M›’™Òrp2.Š}F-a„ øÓa§:qîLE8züº‡¶_ÛEÃþM¢òæQ–â~¾™ŸÌS xЀ~ÀÈ“{[N„©ûÜuÉ\¢ÌÆQ§·-@g² mËk¸¨+R¶´uø¹‚^Î.¦¸þÖÓ8Gã$ùØÈ6ø+ϯÑgß®ölW:[/?? 
¬atÂç .@ £ú´F¿0'KV~ÕbÍÏglÔòÞ•¢sÞ%f}¶L\¨2Y&Þ³H£Äl4ÌLC|]RÕ@3ŽLýô-æbg†äÅ+ë$”kʦ½æTÔ”£u–#Hõœ]Ì "Ž#Ó0E¤Î¢@ø/·–óÏ‹Ùòñyõ"¡Ä| £"Ö[½€Q#Wdg uH‡Æ·\'†jWih;À(|®:@;ý™‹´)U Þ³J“bC.ùൕ³ëFmDEÕÔªX™)­ÄìÉ]律]—²ouß• ‚³ëÚw5JIÉŸJˆ6!T9ÜÄkì^%¬J¹¢ l#‡QzH²<5S6$Veñ 2y{´#-N§Ì `÷ÄO Bh2} K“²/MPMÇÐ9&˜VØO"ŸÍâQb£Mv~mÿG§™NÝ„‰%ê©‘¯vV(¬ÅLüÆ€Üæ7Öå èl4!0¯8¹ùÎyœ,VîY¤IêDà,ç€u×f^k¦nÿ;ûD‡®” wùÜxJ×XéÏ6v[’Ĥ4ŒÜFzìŒJÛùköám‹wc¾ov7ŸÝ-^6³ç×Û‡åúËÀº9oí¤¬[Õ‚Z $Üw#{ðŠìÕRVÊiä[rŸéö.2°Ñ)îY§IŠBìp[õy]&¼BŽ0$¥¥œçwÉ ­Õ¶¾¯)ñ}=Ru#^1ALÉ·ŽÂ$ RµÝÿê­ò6u:«Ñ»0)Ù2Ö4‘¨²Adv¦­Îê{c5+V¶²M1.)V&Ìç‚%ÆÅd±òÑ2T¢„r‡k-3O¯…œhÌÓ•ŠÀ&7ÑY"—ë·€ˆa •Âjš¦dY<‘¯oDzDæ°ì>aSI­äì¤Öž@Q<-vQÄmiÔ1Ä' #¾ŠcØÉ¯Ü2‰â{“µ $T알ç‚@ÂxÈ6I¶©q=û4‰#μ¥+oýôq„á˜# å9-äAÛ y¸2íC ÆÚ‡iv˜n¿{¬›Íbƒ°%+þš|åi=¿Ûî¨7Û^ »Ë;}š?¬ç6¥ùûÍÓë£Xê×Õ§çé~wßEó[_480šœ–´¼¹M–%ô^í§ç—•,Ìz·G÷ ý>¦wL?`/F´ÓËE‡˜½»(›13L[ ‹6åÿüþô~KÂé–´.º’¬*&Í»ÕãóüeññgyçÏ‹ÍÇÿ½InS±rJàùÑU¬å-ß~º‹tkî ';öquD¹¹ùåfýz§@úøóþÑ~}~Ý|üyÂ'ø:x]üºuäØq7‹¹l+--Îù¿@rÒàÍ/¿Ü|’-ûªÖùåÕ8v¤×ÀõTd»Õ”ï‚CT±ŒÐ$f>u}›Ê#…’@™Bv‰÷ÉÛøž5ÅÉ>Äà‡\Æ7€Âöª ¨€¢Ñ§Ž/ Äe°ËÄC:ãÑ3RˆÈcC0¦-iäÒðÁ‰>ýäÚk"¾­ø@ß]Ÿ©øp±i‡81D4²â#U¦^ÉèÁDq¦¾8Ôä$£CˆC.™Ç!‘ƒ‘u%v9*ÙÈ„™•¼øþÂõÝÌëÛ•¤$*ÞBùþk²jÑÄŠU#Ðqäyt"9”&’eÕ$ \²hÞg-š¤SïÅ É[œÿw1Ú1-Ç~íªm?\M†È"{‚kr°î¾8Ûé¾ßImÏS9\¶e¼UGªKmXq_mu¨sk=^ÝWÞ%QA±a[&E…‰ELÖ?õÌÒ¨§e¼2&¸*ù¢ºý3^KŸUtÞºª‘UÉR ›dtÛ7lX‘8G`¯}ì°u5·Nä½l+<i®·Ë'5ƒº[¿ôã ÷`‘çùøü"pzXH€3{š? 䯯gÏëùìa~»xŸöé®å1ÅQð%ŽìVÏ1‹ë’b”Gë5Ž<µ8ß|m?–‘*)H¬gÔz“‰àݯ¨à%3$£øÔ¢÷Ñ! 
LÁ£g & ‰†É3Ç«£#T¤ÍØ’øóÀn¢!³—Žœ`w›ªÍ3 @ÃX Ès&+¡z†i„ õ•ƒ»6*bMU?“&aÓ¨ë,ÅÑ×äe±ZË *ûoÈÆ·#î¬GSPÖ«E›.ó¾ñ3G³5Á wC¸¾ç[ g0¢m,€rôNv©× ¢eGB(*4`óçÌ~|ë;Å©£…šàƒÄ{ÒbAøÐæcðQS­ Úƒ†BüËLêP_fv;—?,.¯æìf‹ßuOó‡–ÀSè™_-`¹¬À³Ûü1™ðï¿ðTä7ÂÕWÃKâ©“•ÕµHÓ.Š–Ÿw܉‡³§¥wóÍüaÕ»Kþ²˜?Ï~øOM£×¿®ˆ©0ËTr^%Ó³Y+ÀsmÀp¬Jü‹÷Ç<‰ÞÉíëòá~šLŒX²¡pxK^NL^ìÓÈ3Žr† ‰—Àè¶SŠ"q:õ§¬¸,z9q½Þ~|³ùíÃâ¿äÕþmµz~Weöß›ùfñ§ûÅï ¿QQ£åâþø5N\ðÅκà VRS­6“SÓw唿Ú{™Ÿôao–úP²Nw‹å×ÅýÛ…Š¾“wÓ™þ–ø„ý‹í?d¹|»yX}[¼Ül¾ÌŸnæè¶ï~šTéŒÿ(Ùs(WÒ(D[áÍxBDÆèÇ–ºâCˆ]¬Ô™©@ qѤ C{oVTcè;‰öAÑÿŒ½6ÇûÅŽÆ´ØÑFùØ1‹¹&E´ˆ2p£KHÞ ÃÞñEg1yh­))J•1Ë!:3+•=;Z§Ñ  E )/°ìµ±4ÀìYthyZ è`4ÄD '_Œë*f„L.Cßl²Hèøj%L^‡]oGtX8ñ?hÔ¹š…CŠQ…óÈvô“òŸ]ñáÛíö²¦¿‡’½l™³kîÀ'Ç9¬Òh/˾d{uH UxvÅ—°6­÷9Þ“ÅEKN'¡Zƒ%8ˆYNKaÊÃ ù«ã F’ÊçäÓÄ-)šÖ5fÇ<æ!cpî³õ¼dP¨Ãù¯Q|#Ÿ ¥KÏFkõb½‹×F ÛºŠÒ !l€ñEÆ'J'ÿnÙ§¦— Ãp©„^=q¨RŸätñDD—/öŠ.Ú\œZ+YíÕ³G#½OvPI²@ÁZí®ù‹6Võ;‰‹¦ò>ãZÖy FŒ— Jvz1wJXo"¦»–¯V’ÇrŽqȨ1#Žˆlͬ.´lƒùO$‡©ŠM×9ÍBs~±µ&S}¥ÖJ¦¡ûæhÂ讳!Xà!`Ðf\< «v± Ý3OÖÌúmqûeµú­ŠÜñ\¸N‚ÉãÂÆ˜%1œKâ¢g™VUàÁ9eX0T‰hŠ/.N¦œFk¶CwàåDEhÈæ+Å^i¡žAvƒóƒ »åÙPez`¬¯a‰ˆLVvþ —Tmµ”jKªÎ@ÏÉñD®zÖf¡gÓú9GÛ7Bž3Á€„<Êà(ä!V]k Ç!NR!³|”°s|…ÌpõF*:¥Ä,80YèÙ³O«~9ópH6”¼¦dM¶¶.Áå­‹¯Ó_÷°!žB ºŽD. 
pÆçQÂéÖž¡š$Í5í x<’uÀ~L<×<œÜMC"¯ëÅË4‚¡sll(à\Ìû6ž’'LÏ>MÐ!-ÎÍ2;T• c +^5'Œ•ØQ;¿bƒ6{±…,îzä\Nd> t:ó#Û¾(13™˜Inª©Ò*ßG[4QáÍÈÒ0 °ÁÀŽÆÁq&3ƒ*$±ƒÆÅWi¥–õºö¶> îP<„ìõ‰Sr×'jAJ%¿{&j‚yjÖã |XÇΚ1ø@6X5¬N¨XCøÑ9*0Òß•ê,)0Xq*À…l«‡€A/L ¤xfÂÝÁgÈ~{Ú @#\6 5‰3ö^Ž+ÿgš2’$† žZðÑr œ CzÖhB òÔ}y…uµnÔ2Ôª0ËGÈÑiùÜÔà¯î×½j†à!špÀ÷ùr#¯p#¯v£ÅùÿØW¶×IîÏþ›|}óòÇr;Ux-6šíofK±ùÉX=¶€ª},¿0SÉfŽü9ÚNNAõ|%Œ ÕRÔÛ0'wƒ-F'‡èõXqSNN·?¿¬neÇýï¿ô¶Ô _5ò«[ìín²6/¯‹’AËf7h™ë—ê2uí–*PÕ¨à@'ä6é¹ÞøÂíÿEë–Ò›…÷ÊŠ* Á˜½Q'Ì€—ºAùÌ‚ƒÅ~Ú>åÍ÷'[ß|zY=Þ,Ÿf‹ÇÕËûž”pà[RÓguÞ›ò)Z­ö3Øi¥åwÆÝ o¤åõ•Ù>9mYNn©(±d°«Åzò×$”4œ#-P?âwûî„%þb§ú å aÄ.@ïÃÉ€…oó‡ò?}ï`èðÞë‡Õ·›O÷óÍ\ˆŽtàÿŸ½«[rãÆÕ¯’Ú»s12 €‹}†ó Îx¼qÅÎäÌ8[›·?€Fci$¶Hv“š\l*••¥n䇀r6osÉ„ÅÅH 5ç×c½ÿÄšhQTrÒ­m#¡µmdç¼êeÝ5Pö2‹ë“í^V^¾¸?YZK•†§ ÍKÔ¤Íøk#Ø¡À ÇÏ^_u2üšÃKöï üî!(H~1¿8qƒæjctƒ¹Úe þCéòóÇo|}x>–ÿÈ‘wwÿôý|´¶ïÖÔ7^{Ø‹œNØÖßÌÑ~}½¡™ý"ÿüéŸ ø•‚ØYK ëñË~‚W%]83YÓ”*³=ª]â—- E©á…¯_ÝíWž‹ðu²²6ø Ž®MDÛS‰¯ÞÇÑkX¼6ïáåÆÁoŸ~{øþÇW¸OŸ~ýøæ£óíܹ•7<ödWS~ë-î¾T€Ð«}VòŠ»ämÇŒ¼9ÚÂæhË"‹¬1rÕê7µT¬žS¹—ÈN—Öfõ„-4cì°z ³t`Ã~ólØ09r.†÷ÕàK†à l3 |‹¸ë¬ëôÜzçΗÏ_>ýˆ2žï¼Eïó…fíCŽ•=A ó‚´Û®|ì‹èƒl³æ-žxLŒkŠä2§Hšd;&åvLòbž$ ˜¤†*¹†I‰Ëem'KkÃ$ˆÎÅÅÜIÙüÕ«7ÌuLÊxÖ§3“LŽ/Äç˜dn”³Û]Ç$ 2ãûn¹ Ã}zkêG9шÌOý±' „dP¢§þÜ«¨¤~ËŽ[PÉÎÇšü@‹p™1mF¥V^3UÝYØŽUîÝd«2H¯ålå©Lgt\Z*©Ú1tǾu ’Üaà-¨ÄçSGÆ£’ËñåBõ *¹&Ø4—äÛ Ó”ø½Áèx&/àG{óí:Ô4pÞ<éÄpÚ1‡d]ow(™Ë¥y3ĤÇG}¥48>‡ù,W!&j¹žÿ¸´FÇ ú„°bjŠk€˜Äi¾ãPÌáK|ˆÁï 1wÞ…`¶ÿû÷ýàï‡ÇÁOûK¼ñ…†BSû[L‡-ZÃÚÃíloF-nG­ êª~‘³x§:hAy&Á…µa–ùQ9Aî,6‘'ÚYùÌÖLðŠLŠD%Èr9—Þ-c5ûµwÄ®s:ºî«{"–1¶æUÞâØ¸˜nÍ»\C3³üÉàlšA^Ñ]…Äàm‰Û“OÔŽf’œ¼²žŽÕÜSÆ¡ùÉÂÓቲ“ÜvÀ™ä„q œÙSqz:ÜYJp&*Éœ¯ÁYâ`œQB½œ ¬{¹ý‘>»:-\ Ë*äê}êéšYÇ•ÕûÔ+hècŸ6ÝÏ ¬¨†TÈàÝ_yR1‘\&Xv ¬’êI'"sy*pd /Î;YYKΉaGìU]9§ºÚªpdq˜ 7ÀS¾¬IqE TÕ¢ŸÃàÈÛ¹ÿ§\§ˆ`zÏâ”ðƒ +Š!Ù“ |hߊ‰.ª—]$•ÊÍ±Z~ž­!Žßæ” Ž‹©Ot±—òþdûKO¦wœþĦ‘.„;T_vë”QÛ×ð~»Két´Ý)nåàè «£³?„ú¦@*SI—Öd‡x—S á¡v¨Aq´ŽPÙkv“Eè[±Qqhgfº:‚6íØ|}­ªh‘ kP›¿TNíã¹hÍd÷ÁÅÈxÙÊfKVðYfß®÷Œf¢Eñá̧‡?¾>þå=ÛíG ^„vz+{êL /»‰•ÏíS¬²% Ó¤ö—ˆ£“f7 ’™‰\‡¤ò ÑãÂZ)†ØŸ§Ø…HÌoB¤óš¬ˆd.\"’é-„cúöß"ûw‰c4Ò»Æ1æ:kÊi æÆÜ'$d§– 7»A¡Õ Šd‡œP'ÄzPCJAÍÉÊšPg?ÌF¹ uªj«£Q˜:!b. 
íìñ*i”±Y]”ÿ.YÝoïµàý@èUÊêꄬîÅSOožXxNV÷â©Wà#F/ÆØ²¯SÀ8œz&zy]J‰!Ï$Ÿùè¢ûúðÑÙÿÞž“[_L|ዉDËf" 7>Ú[­"ZÝσ5o£—1fY*kèÎöqóe¾K|6cjÉwQÍ!5Q9F>Ê`Ç•ïJØ^´Û„˜HrÞpÆì'âìAÏó•ÛBÕ¯ £jÑc5ÑŽ¤ƒ¡‚Ï/}ÖÌí|0×Ïø¢ÒÌÙˆ 7#`œ~o’¢jñÞÄû,4~»iUJLŸª”bÕÅe!Šä)…(KOS{‚0«ödéñW\‹èsˆ´¥ÙÉ„¹¦äW8X´lv²ÓNÎô~.Ù E뇅eëÌ^š-‘SzM(.›­¼Ã¤Ze™p³Pœet"’fË^;“…b=7ÇõýPE@ÒpÆÒ9Â54`§¤t˜%3É5´-±Mñ>~ÿXª¥¿N%Vûzhó/+|„C­^Ñ ÄÌø0÷/Wåz…’°*ÔfZÂÂÙÞo+Q õ³ &q®}˜\s©'ûDpŽöþ­¡cî°“­sÒ)Ã%?!bÚù^\H¡F‚‘i a0Aá-eQó’(nÒ<`é™vYœ±s&¦ß?Ù®Š„_ÿØ6Y¹¢„¬L´ ³aÍû³…x¡{ôʫ஀ó©maÛ­‚p:”É^a ŸJb ûöÍÁ¿ †ÁÜ? m‡‘gðəòeãƒ/YJ®Þ­ÃP¾ØÔ” ÀØ Ç#@aQâ)sܤa†áhë< >™d&Ú.í£´ã4᯶º_ŸöþÑ>ùˇ6™Êºr°L‹ˆ›B°¿7eàH•Æg¹ Ì{ 4S¥9ÞGéùÝ `,«´—Ô%Þ`ãT×d½™³b €óÒ¥´fÔCÉޙ߱ä:ºY ÆTµw*Xlã;Je€½ó}+n¥{èœBûþì+Ts†ËÆWjÜ‚ÁÕjî2 Ž>ºÁ`Q›1AÙvÂóŠê)>ª{.Üvs4*‹èæ7iª×‡&¨õCO¥,âQP#޼½´Ù·¬Øuä!x²aÓ‘ÂM–§@`ªT.z¶O>þ«êo^ÿòœñ& úªjÓ׊â$ð.êŒ{úUd|%^­xSëÛ‹0'®G±S•QÄdZ´ê§RÅÚk'fÔwÄã¦#‘gG±ìœs—Q¬-™c”JƒéP³®2جßk–wƒ:‘ð¶Ý§S\™Y-ºc!Àâä²-4r¶d2˜ÛSc"îEÕ’ÊÕ‰Ñ ªE–áÆ<å°Ë"逶7Ïf¼ô…{öÿwÏûòAÖ« ÁüÔû¢Csâ™Å~¾ÿõáÓŸ_Ëäà¯ó¿þäÝ×ÇûßúŸk [SNè¦5¥Y}Ьëö-ÐQ½ßU™´ÒæáPª‚t*ŽÏ;ߘ– ;Í™(wÁô€ƒh:މ9ÄN0§”.ùypûXS¿Bîç‰È²¬p³ZqSŠÊ`züÕ¿pØaŽ˜dr†ðåEÏø¿<*ÝñpÍç!ÚîßÌ1t ¤hÜtI×0zRÒ|)]‘ê‘Õ0ôõ œr}³¼ºˆWÑ—!ûÆŽr¿þÚÑËbºàW %o;Ó¯v\Îñ’šÄ—LÀzèð;Ç_Ÿ=´~¾OÆgX÷¤£zñ`Q•0€nË5ÈìÙÙI)åK2KO¯ª«.XÒ06¯› iÔm½mwKú„`¡aØf(uÂUšAãNP“Nm4ûöå÷&ÌÎ ÃÓ×Ë´n!à*æð¤‰2So«É‰Æ8S DHu.ÍÌù0©ìšƒPž÷v²æ!ÎüpÅ”»2°N„ij ¿ÎÌ;3pi§„s% Dcñ¤ÉÒ¡"µs§½=Ћúå7uùœékÌ 6ûÈbs*Þ=eîËy¡âúý_/T\íðˆAuYö˜6¡ãyÊ´-y.ŠÙ6qœ‘>?׸k,Þ {ÄØrUÏÑB©ñîD4Cn±låÔUÿå7Em:“¨²¯¬»œ}žÐß"dzüÏ—÷ÒwnݾdküÇjm4œRâÕžÙÜó  f…ñWå7ÎùÑZÈ-Îpý ¨Üe5Äó±}%ÅxëSKªÓ=ÈBÏG½€©Zq¢C«6…Û"ÃLJøUüXÖ4›¯-Û4x~äOX˜Ëg&I ðò¯:³¶‘ÿŒ½ê,xo‹z!ÜV3|6TfŒ/,;t ¾¹/¼”Wyz¸ÿëÞ?øÁTß“7×´¨D… q‹µE\Á‰i¡Ÿ…®9….q£ÔƹƲóÖ݆¡e (Õ º ,Ù ñmWÛ㹫eÝ,Tº)¿€Dó¡÷° Ï WwA1VF¢‡Þ¶.¹ˆQ¶AoX,©—œÜ4mJªãÚ5Ù¥l[nÁ:òlÉÓãÿëۇǯß^äv:¦¨m³Ä+âV2?w ÚRÈiÕ5g׳`{%iHEDãø~ÄÙ!¥!wk@†ZM;ØîÎåék?ä1„ñǰթ…»jCÈôqSmÅé%|.çÒ–Ó8'§”¹…ÃÐʽ܄©¨›¤áô/h0ƒ=/ó&6ÍŽ…OLfá¼tj&|–ÇL¸•µ5çtÁiò"£Õâ¯Ãi¦¸N ¢t«­”Ø0tµm"s…bŒ\E×l¾S);tÏpõÍl8©\‡Mš®.æ\ PtEqLIÊõzܾ)±¦fpÝ€ 35šD&€­·(fyRê\ßv´HSÁ6‡5Co,DqÃu®‹lÜZœiÔÆÚØà«p{軨´; hÞÚ~&Š]C뇜Ό7 ¬=Ü á-øÌg½ 7Ñ™ É::ÌÔ¨ˆNÀ[r 橌¤ïÿɬÓó‡—‘Y?ÿÛÏט$~¶eª|¾ûþx÷ôøç÷ÓM½™ä™h윷kèÍ$‚·!ö–;Ïè8¬všniq!¤ºk¬¡L}v߬ö! 
Ü«9Ü«#,øÆœIpzR’ŒíJiëÐ^K;Zf*<²NÉS0!Üž\úÓïÞôéáóÇ?¿öyÉY¦3„5CëÑGßÚvB ýF:#ólø-#"AÕ!6I•ÿND1(ÿ`p—»ÊS†9±˜¹ˆ±’ø5x‘ÜM46¹Û”0³7’<_œù™ÚC™@4…y3Ï¥üã«Å ÏÄ]þ¤œÓgþü©ïº,d¡©`z^ùÓ¦H€ 绉7‰nÐÚ6qÓœÅznÚÎpöDJƒpV9É­q– ßÀ— %œõÙÈšò¼ƒ«ÿBÓ%ZàÀ¬›Áb¢j…fÔ‡i´Ó¥áЗyS¯õ­`W–(xŸÇLÌâ˜3›e³ã†x° ’±¾ ä5cxµP!›+k+2ñó|Á‰PF`¬¿µýÞc%…éU`&g¾$Èõ%ǨzΕþêËRγ§0V!ËFgö ÌTd¦ ˆ*ºÃŒ„ñF)Ý_¾ü~9&T~~; äîþׇûßîì ÿñhÛöùÎÞ¿E'üæÔ™=øß>ðÍyÅÔ ÎiQîú¬î`™^õ’Ì4¸l] ë¾_é´¯ãºW‰Ï% ÐNä=Öý­SîBõ²ÜúÀ`>¡‚I9]6ÐøŠÍma) Ì46 ÜÖ8#qMxMTºžOe„ç².¼›O]“zÖ5€¬˜<ªÝ‰Œ÷ñ©}°Åù|#U‡šÐŠS'N¤2}s¹1üšáùð‹ð«kÌ pþŒß¿w=@£¢3òhÞ›ÄÛcëÿ9Z²Ã]æ:œM²Ü:>fÖpÀq"´>å!0Û ´qi ô‚†ÔÂVžMÀU×Âí"YΉ„†ä1pB†[C®N‡\  7]S9àùÈÍËù«ƒg¾5Ñ™-Ê·*æ)׿S˜uB‰Þ©<øP2òa_/Ò™©€0kíû«°6 BL‘Æ Ø|+¢‘ :ææÐPþ0jí.û• õD ƒ&AØ«³òm¡Õ¾O³Çiºœ19bxí÷¿Ù4ÍÔ–S P|‰35‰yFéƒî0Fˆ4GïŸÎ{V>ùסïÓA€w÷Oߟû›áµN¶hz>0º­“-¢—ó(uƒi‡œÆ•5ذ=Ð@¼!†êTJ—Y±í(”!u ºs¦%Á[*êüþµ ‚…ºÝ‘!êyzàõÎ-¦w¸s#ìè_ëÆ™zL¨Sà4gÔ¹ ×eÞŠÔúVPÚáT(Mk®ÐÈEh¥Í¹ŒFâhÔ‚£A2Wq4•bþ ‚Qoß›ÃhºŠ2qEMK(^Îü‡#IÎXkJ§I…ÉÜsŦþ¨:Ób4oäqýóÛ÷-ä? Ÿ÷]f¥°|—•c¢M'*š ìáv|Ï,÷ûÙuÙme\® É#Ãúb³÷y ¹û~|NÝtÝÂfàl/`†×±ë m¢Rk41ÁQ‰½øT2CR/aGaß õ?®ÈËg¼lSÛ™Ï?}~züöÓ/_¿ÿôé—·?co Î+QÊÃâîJó¦cM2ŸdS48¦ï%Ÿ“]äÆA‡šÌÜÄJ¤œã¨Üx`•ª,‘td]SËkÍNi]’v³½…$Ñs÷îÄÖoÇ’äjÂ!)VùžˆkHAîrŠÀòHNÒ $ˆUº€¤¶ç€Dt~YYâtIáêŽ!‡¼@1?¤mi‚n&ˆ!cYÅ£9Rˆ†XØOMÃ]@ÓFñ ·Îi¼|xòÁAw÷w÷O}ýÅ‹C,LpHªiCzÃ"ÆUÄÍÐæ4"¿Q‘×@øF ®µÀU-ZËî[_r™Öá‡pFxN;æ—æ'~úsÿòûÝ7C«§¿ì¿>=üç§ïöoÔ~S³ Åv4·õ™áMGýÜësÍÑvdÂ[$P)×q+åzÖ,ªS’„ ›ÔÉÀ{O¿æÄ¿G2ådR÷_úR%Ó²´•Ä܉MX­qûM†S&Pæ~¿‘Ð8t6­gö[¸:gÀTEg åImGq @goü{ éO²íý™@:C@ML[޳«Ùy¦»¨©€Î¦} .’æZÙÅØ·W}¸ ihr¼Øı'øhV-o™Í^C'[ÝŸ^DŸ‘ywRÁÜÀlÞ=(.¦FDˆ1d쇟¼¢LxW!=ÿŽ&@Fãw£'Ë9ľ"=šìÑ"ßU&‰€K¦J€xÎÒVT4?D©Î¾¢’,²ƒ²œÿ‚@ àÈ žˆ·¿Ý_>ö³þøZZ‹Ê@fÙa$`±•ÉQË$<&Fô6Œ€å“ÔY# ³n³·h0‹ÑdmrÞ4ƒÊcì®Pu†Ñï¾£¤Uåê@ZÝð} ͰvB[iâ^"žÕ1j¡£•>Ycœ;aO KÒ±®³x³ÈUo/]©iBY§ø\•0œ©„ñmòë:›`Õ/qcíƒz]BËîA‘˜Hh-k¬§ÚÈJtÑm‚iP† æÁ§o'4Ò’è'çAÞU¢Ï¿¤Íà («@s4Àì‰:ô^Whý SƦ+ÑÅÄ|5$}nKC¯'…Ër ¡¾Â± w™êâ¾Æ.(ѧaîÑ®êÉ€ ÆÈÝ]>þºy~¸Õ§ÿü¸¹þ~ùîWÃ`ÄØxï!’›×É9$ÛТNêÆkFŸ.¡¾µêžTŒîÐw3]¡‚>¶ÌÄBòVÿªý§‚×åýÓåÕŽúïDDù¯h¹ùvyû´Yb þýËý; m¿½YÚÈïc¶‚™|, Ïr• a®t%<Ý¥t8Øß·ªÏO/ÜÜ£À×ô¥È¡`ÅU&Àº¶ ƒ4Ô8Å Ãð¡‚k 昼<æÉ `žmà­Éô/ăSª=xv²¶N(FªØæ½#À.¶ 6µŠSÔôèJt*›”**‘z 
g›C~w’mâl²«ûíd%lC˜Ð¨™o$¢|€ÐÃ61-ÁoÈs7ߨ˜o2E!Ë®U¾ésdù¦aHÒ…>­ˆqú,ÿJãÐ š>Ưd.bœˆ#ãC7߸Bߨ0—è›êLŽoòÒwĶÙÁJÕMC¨J®ÑnK×°e»3qŒùç›·uùõvó?ʪÿÞn>ðï5bÙü#`¿8`ãÿþ%f1n6ׇß%¼K%ŠFö\Â("Éòɦqqv–¿ÅwÝŠ›«ÍÍo›ë÷ŒòÔ³}ß =Äþ`û§Ü<©oúû—Ûíï›Ç/Ïß/‘cÚý(514ƒPÑw'¢ºŽ–¥K ‚4´ÅfI½ô*¯3Åž©4n•ý*è“2ÁDœ•‰ÀÉ‘«‡ƒy¦6»àk”×kdo z¸æ¡e D!ÚÓ¹¾sÝä É—`î~Ê)¾yH‚îüh¥¨k¬vUŒC£tïR7ßT ĬWÿìf\(eœ “XõõCžqè³ê¦çNBðì`%l³>® ®âš:¦à¹‹kØÔ/µ €•ú%•`Ülv}\‰Ò ¦T£„ c\ê<•¨0@ÖÝõè’îîŒ\#r=úÚ /¡FX‚Š˜þÓ%,Á»–è ±¤¯Û B©‚5¨ê¾˜‚èSÔ¢†,[ƒ`rˆùëÁJܘIÝtvË™¦‚écÝ‘êÑG°k™Ed•~^_Xš¼á˜W=ö†Áà”5ŠzìÙås­12È\¿Ç³šTªu~˜¼;liŠ~Œãwîðü]î0x;Yd°\îÏÁ ¾SûÝŒ>ÂÒø™°6fz4ÊßsðÿÇ^á«Çë¶µÂKƒhÆðçØáOÀ)èû9ŸØÎu·yÔ]nŽÿ–›Bì Çš F#YCfî“Ûc1Óƒb&˜}h='9¶¯5¹½üòáö‡ÂËSfòDÁÆÌщ¬¢8°‡U+Œ1µÎOäX±ëü;HN,,/7€Šx¦êx<7õnXV Gî+í©0tb­*äñÐkòx’«çG+ÁÃÀSœrj«Â8X´‹qÊ{lêR7Þ¨ÁC6Ç,ˆø0ç&j/«ßQï‹e«Ù„ŸN·©¿Qe„s£¯žV‰„S±íY÷aWÐL ÍVÜšªg¶XîEÎu¤™Ä%ŠC‚ ´•öÅ('µ•%e|gt¢¬*ÞŠ1ÉéËÊBÏÜ-}„÷v…HÓM±Âѧ¬´ؾ×–üÔLþ ¶¥0-¨° akÆ÷.¦t3ÊEP/pŒ ‚ͪf€dëâŒ<#tSßšHzªÑM¯vÔb—!õVÐMŒ§‰Øuwæ)jûj§¦„‚ã¢z@‚¿ÝJ,ee³î¥ôýåRì2*õA¯ïŸ*GmØú–Dý-å®Q@=؆ÑtÙä‰)Ó%ùÇâY£I„Ø}ç C.Ÿ€löz£Þ„I]ev`€èxÓâÅ:ˆ“ ×Ö(,iÓ0 ¨˜ Œ¡ íVÙº"{ŠùÞøT:pvþÌœ#;”ù¸¼B éq2AßíÓ+¹~èQvŽÖåUÌÖ>oÝÜ7­Y'Â5ÁÛƒm™“äØ:0 kT "Ý8g5Ö‹g5®ôɪjºC~F¨!8qy)a¨RU‡ «ÌCÖ(ëSÏ;6ò­šó¹|¸Ùüñ¬’¯¦_ý‹ªnï^gà_ëÃïoöwï)å,Ô¹^â9ü´"c¼3+\P"MèTný"üÄŒSŸf?¾n!¾x¸½¼¯Ò µµ3£KÛÚ‡zôjÓ‡ © ß¸ÄœŠ ¡í‰9ë9[6§´Lî@?kHbŽ&çÑ‚ÔáiœŽ’âšvª5AiØð6Gša&@(ɹü틊ÆÂnÏWÊ Ê©×,àëÄF(ö¡=­ÉV}fqÎí4KT‘yÚßÔ]´ðò=il¦1ÁôijÓl ñqëDµ¦ÖPi\µ¦­xTÝØ’Ëë`²ˆeN”!Õš*¶.øP‡Ý*çÔwêÇE \­†rŽÏÞãòrpŒüZbuyñõÇýué"Œl9ž×(ÊXÛ¥Œ¡e q\2Ó7G­¦¥E ¦1X6‚Ej¾ÖV©—23ê ŠP5¬®j 7v™Æàx|‘3ntÇEž1¶ b(n2GÌ‹p²×ö8ƒO2ÖWz¸äœtÅݺsÉ_ðùã* ÀñûjûãÕlqÍȱä¾h5[\#W<—|„‡½ÈY‰=Ð}>3[¿Jì_ÙÏ<5#fŒ_ÉmëØ¶“¼e[ðÁÅ!nЄƒ™Æ¹eçy)Óé-3eÁU\²sF“!nÇ‘–jj¥ƒÑï·º”0À i=u~Hÿé)ïänïó½VUüòšY~Þ¾$—gŸ©/Ä‹•†yhm-ý…ñª&Ä5ÖkT«´Òu¤ͤ§+r¯]Þ½Þ7ß~ˆ¾Dâ^û v¹ô*55•Ì}‡Î¶? 
ÏÕ`žXßýªoúûÍ÷wÿõßÿù¦çâ„Ë:ÔhK?°‡k¸vÃG0ˆÄ¿½}óùëǃê»7¿½yx¼6õz÷ëÓ£ÿýÛã÷w¿þÄ'üóýÝãÍßw…àÍ®jýŽP ÂÖ%˜ž#y|óÛoo>©™?šp{»¥š?‘žLÆ‹“E 5]~ÖÐün{Gãçý\øCý$ÅS™ŠØËÛ /K»–™ \¹‰£óQuE )w¥8P»âJ±‘žÒ•cÈÐä2®LS$㎗¾sb¤Íï ×3wŽ^³>¸^Þ9^`±\}Çíìß¡ ~*tt¬ê# T݆†®gšz89ÄK’(=ÅÇ`ØE¯Š ƒ%˜4Ç«X/Ûk ÇbÀ‡ ³Ï/SÁü«eb¤uõqýƒÅG}gÀÿûßÊÚcZ r ønˆý {8dÍ#x†“€Z†IZˆŠ+Oª6¾}¶š¸sÌâ¡Þ¬" Мd Ó‹’ôúK²„ßÀ ª8}¾ðq‹twdçy;9Õâ¤c·J!qù´5—C„âi äj«7«âvÇ…-{¬À©:¶óׯ==z¿uÊgb„ãÛWÏÁ*•¯rŽn_œ—úq‚Ÿ‘úo’^1~ºæëÈ××Ï ¿7ü^nf§xCO²Jå"ãP*7ÉëØœVGôrò1ð°× Õw ,6ë–*î˜ä5,yÄ,:ÓêÍjîNÆÂáý¥E¿uÌ¿cé>ò:vêfýÓÌïI¯ÃS½Ntî^çhðÐ0ÉŒÿáé¿þ,Ämó9³žcåmÈ¥>åÇCìRöã·7›z0’´n‡ÆžÃ㬖r‚jâ¤A)³wçQ÷ï+”¯^¬Âa’Å ¸ú¢Å$]‚˜IчMähqèÐû³L3WóbŠð`}„MGþ*´ù«¾ßºòNˆšÝSßorFe½æ4Ÿr4ôaºø~a¦;|žô»îK\]Q~àÑp [ÞÖWìÂh°±Äöªz¥ ×Õô>)ÕàéÔvxÝneïøòñ=íÝ[¼µ3G ضå]¸xÅñŠŽ÷EH1R¹,|\4å8òqh=Êî¾×á¿Ï_žïMÈß^n~Õe“êÝPÀ×B-â)Ú‘úpæåÔ c“ -'`Á\Ú"ÂrÌRŠÏdÒbS] ˆ(m ±2º­9‰ù<_´äÁ[HP¨hïÁ,ŽZŸ7X ymç…ÍZSj-“þyX„I¢ü)ëL>ö&Wé ˆ±1Q5p Ç=Æ1}&y¹uËÒš…†ŠJ¾ÔGKi“!åXgBê‘¥Mv휇nmŸ]ê‡Ó&1çúHÐOPcضD´uƒWìÝEr0*˜öwmÙE;¯íÌ ?Zfa7ëàËÍÝý—¯ ½]ÃÊ‘PLŽ[æ2§.H7À$æ²ëÇÉR‚‹xè»N¢ARjAObÌyÀ31u@ãôÒÀ„²1“žÓMB9ãš(ŸÔMÒÕóÕª DtC“ §`1RµÂý½aö~ò¬êðO:ƒ2: º§WЕ}Ö.² >Â7·,°É¢x©XóŠU‹+š½6›I¥Ħ×vÙoŒ±âýp¾l“s†=}H)^…jnYpÛÎ-ë¥HýÇÙ3º)š‘e¾¸Â#bo¿kŠûpøÍ²¡e!ðP8=P*,ÅSO1 Q©öÄÔ M“þ9ÑÝ×pÍÆ"eRYL"é¦öÖêÀæ`Ãøá‡Ò·Xšô•Ýu&Jò¾3¨ºPb5¦.ƒ Ú$®(â Ž"·Œyõ0ækš@‚Ë@‡óbvÉ¥yô¬©díú¾´µJvÔ|1ehû˲‰þíºù3V±@ÛbÈ|Øì¶ö͆PÚÖÁ1 ÷‘4d˜i _›uOK‹Ìö´c“(küQøÑoï>ÙÖ9„§¹ÖPÀvö­ó†P ùÇð£}ëå6PuiØ8;l7f{¹‹ÝU¿‡ÿÜÿ.Ù³£ßíù·i®Ý_ìøýK²; À«åͽDZϿ~ùãaçG¾˜yÜduóðñ§ÓÎ2uûF>û›dûOßö É§Ì !¦ÁAȯVhO°'´\¶3‹C ‘×v·FWÙݪ @˜·#^=ú«Ñì~á!Û³4[YEskz+G°›9Uç`Uéíºñî^?ƱÁ½ÝyF0-™ìèqë2‚•ß¦É °ø:ÒÊ~ýür ʨ‘—aòÊoŸ÷­\Î+¿þJ'+Ma€²_PÉjxÊ.e¥¼®î¾•øwHàË÷:9 t|·p’ÜõòleøÓ,dÝ·SÑù3òs'¼ù‹Ž´zÄq•²Ë¨†ÑF5>Æó™uI}žƒrŸ;çZî3$·=ÉÂ^ùŒ×©n!ÉòxÃ|˜ÖR’,ƒË0gG!0B;Ì™´k ±£¤ÍWûaPÍ&µ¿OZÄ9s Õổ3ç'ì—V3ýÀ^ =²`½#V¡¸dÙ#œãÁ•äè2ý6'vàÈ–CWÀ€þý Gެÿ>ÿNða¶t{Kw<–p¤é}f )¬"éƒBfÌÐ2æ‹TQÔ†¶DŸž'úRòï4‹‹ôF»D¸Xž5ËZq\M9Ó'Á¢¡àcâ–œeúæÏX•éSóÄ(Íš©÷ÄLAÛùàöÆe0¬yÏç·¡Š>QÅ™%žÀš3´“ÉMúW’¨åNí݇.þã>kÿþðòõÞnÎðí&SrsžÉ¿ÉÂÛŽ\š`ãWiÄÇþd@š.ˆÄv‚l@t{·“ç²Éï/· ›µ] bÃ4ÌIJÀb(ÑFÔ(·¦«ÏÈçÀ›®ÔÌóEÜõvh•`×D(¹ –™Œ:\|¦wæ –xƒ=ì=µõ.ALé¶Õn|í 
48é´|ššï‰ÑÕšç?/­f½ˆJ}Ó|/t%î?F<º8)0HxGt=Òƒ\úƒ¥+úÿzí!j©ÍK—›wÆØ*éõBÚ½˜7éË9DN|kI³—3auÛôÚ†Ü[ƒ-iCÎ$±ü™Vc--ÀZIÜ"Pƒµ‡*Ëëj•\ä2[Y Ô&cSÃfÙjyð”ÚÝîàsZÖd§1ÍøåKlz]û‰rĬ™>ù]œ½~è9ª£•›üδìÛÆ6*a»a {]î:>G\þñßþùÓ/÷_¿}þûø|ÿøéo_>}üíááÒ1’n™ nÞzÏ|¯l`ãÔKgõPºÌÀ¿öÍ}”E+‹í»ñ{WwÙ£JÔîMlÁ¥ªåÕÓ;b5×}À ½Ób9¶í&åÙtÍ¥ç0Ϲù}iUÃÂB*~v„ûT¯tþ#QÉ䈔¡(6M,¡n0¿·¾Ý{~‡Ê»Íï@þaæwôB)Z’lJdǪ ­F©Pøq˜Ô¾‘Ë(Å$AÊ(å}6žŸ-­&òãä0 ÕS¥õB)¯£/•’éüR)i":ÝqÇ犃º~(ÑûgúCN0)`ìV8žù¶ùնר³`<ómý"i`€ô(„a}iŽ_7BhÿTÁÓ²SÄÙYuǕբ½UØmÓx´ñš‰Ô'ÇNC¸àIGŸˆoœ‚HÉá›Ý²þ#´gÎþÆ™‘U‰„³gΜCQDaPŽ™" ˜R ‘/1×ù¾üREÊ,½ržµ—LÑUZ£àÜŸ&‹æÄ#…÷§²;Êr ‰;Ôb´‹¾|(¯ ‡²zq·­Ç0ØŠëêÅ «¿Ü™„"c±¹‹qDð’£ò˜I§ÃµàÞŽ+-8÷{lÊàFŸûIÈpŽº{5¥`Œ¶e®ó5€½yërè0R±¤ý‰=l¿M($¸1þéü«´–›»Ï¦•eL¡1èPœe×TÑäÐb «é鮪[ÝE²©ÈxŸH›ÑE.â+Å<‰ÀQ*=ê.Òk{ZTåÖc#²í×&9g¨’Ò’-æ $ÛrÞ×q&…Ȱ‚—® Šn@õ›E ξ¢ó›"ëã·Ï_¿½|Øÿrsûñña7)`~€íhYa±[ƒ³5*í¸íY«¶=Ûo¿üzÿõù³aÞ‡ÙïÃ÷_¿<ܽ,TÅ‘4M*4¨gâÕ]­ÔºzÉ8„ ‚R!€—Ò©‡à²óÂg"êqê%’$u¼ñ©‡t8åBp¹æä”ͱÍ…'½TU°±S¿âÔ[C·?iKnŒÑθõÛ¿öØêµÿ÷ç3™§Šûß{ŠYtšïä=ʨKZÁ9¯€¸õþ^Ú>õJdò i@ ¡[0× xb·ÊÙ]ä¥Ôjð0$…Ë©•äÿ~[]ð ØÀj4æ?i[ÝÞFÐlK+rº>okMˆô ©SN×¼¹ôÈÁ÷PÕÑ;B5ñ Xh±q|4gº;r<yúçZ¥E‘©‹Ìãóé5›O·ðÄ9üÿKÌ‘à+ à‹1EŽÔ¾‹wCG‡7¡adä"â²`9Íûº3Nû°Ž¢éãï‹JºûÛrÑîtHbÎÔï–É,l›åÕ:Ǘªx·üGª•Ž8I£ójÑdÜô$Mï¾Ï“Ûj†;APYX¦ðó2pelÉ*r*©ßáú,#¡Žxj)ÂiÀ‹"6H>5èål#‡ü8’£èú otÉ2‡7»ªeÛ”‡S!<„§èkjM ³°E¶°jIÐW î hV`yç“k‰iÁ©‹Á¼S¡táóæA›9X@ÒÌa-ߦGá 7‹¹ÇÓ>Un¯ .¼M_o((¸}dçó]ÔMê™ÛBËܪj·È“OŒ,äã²îªùìL‡™{ؽ5h¢àÛÚ~šh0 &¢"YZ ù½>áÄžà|ˆq_ÌF ZÒid?ø;Ü~~þt ÓãÃí%6ýªéyÝPÇÄÏÔK…Ñh,Mždc.·NVc†¾›¯Xo5ì 8yUÀpR×Õ# H}`î—†ñS²oŸóüËÝ—‡ç¥qŠâOÍâ.oR†à›â³é4ª·iBVID½ö^R»ƒârãQt‡él×¶ƒy×þ»4:l=<)qÀ%€ÒÝ£mÚU[uÀÖ£0!%2‚mk}iŒ½ùñCû˜å þ§f¡Wl@ îxª[ab–Õqø9uÛ†fæ+«”g¤¨ç‹G Óžbìd΄ÒcÚ[›×M‹¸»lC–þ“wѹ ÑÆÇ÷ïÞ¹á?Q{m$޲d’e—= 4šZÆäŒrŽÌiÉÌhÝøž¾ŒR¶!£ë„Ì%”©UÄþìàȦK{¶úñY¤ìü×þŸ—9[×"¼õ‚:n‘&³G ®)•T/©n›ì'UۨP†ØÃPÍÓr–ïbé°é~‘ví!Ûâ+n0x( f¦’ÚŠ=(_½Q%á¾@ËU@›r4 KË0aœN-à0 nÐM¶)c1+¹«á†¸ðù2/Wu$ê†жð§r´Íi!vkYËJö+/tF 5.ñ$_Ác“gÈO¤< ¬K‘K*FÞ’my¢Ã‹\b܇ 'õ…ÎþÈ'ES×\¿Öåúãr^©.@2P×pˆÞº—x»@¢CoþñéÞ~èoOß¾¾Lw¿=Øw¿¤ÏžŸŸž>¿?…ŸN©Ÿ!ì‡À}ýòí¾ÇxhÐÄ[5¶\ʆ]ó•ltIÆk˜v6ÆŒìk@ÝíÃŽk '­™É­KÑ8¤F[ ²1¨Ãé<¬!•‹‘5[4žš0_º,BÝâ#w¶š v€’¨ëVÙ"|TÆ~€ÌÇü]VZ.NÚE_Üä e8rD ål âêÙÐÀžªªÊUŠ7`ÌOW>ʦSGzfX’é²)iø4Ê$g†,83Ûóà,ïAã$¸Þ¹ST·Ï}š÷ÝÒDÂɘa©Ö§ ¦_ 
5L¢C_ÓB ®\L'Ù꣰º4PÃdËÑÔ[±é¾·ýÅã¨e.—œEÓαH!ù‰Ð7Ò–JJÁúëÿ~aã4í%àt²³jŠlêê|ÕÿKwõ‰öµÝç¾î忺.‚Ö‘ŽlrĘ쨎¾œ½"À7áò魉͟Ðye÷Kœ‹¥ž>“WÈ{_Gô@a³f5\w°5 ‡á|®&fÕó|çNQ¶‚SvSâírSˆæ 1ãèy‰xî‚]ËtŽÃ‡‘Z¦Ð?ð 1NEÿ.—RûªÑÅ\’†aµ‚Bæt»Ú¦©zg1”I÷$`T.0e§¥Ó~ÓK#y¿9úR]„Îg¤3± ñRמ= <ãúâ¾>`Ý%SsuÆþ8káŠWÏþýjZc_dqkÚ¥CÁ5ú†Œ!+ÅH]kXOÄÔ RÍBª­+S™™æh\…Ô˜mÒ;ʤ¤ÚK§»‡[cjŒ£ó‰&eÚ»Ío1Õ–ŒÀE† ÞÌluSÀÉîNÅ«8¨P‚3Ö‚E¢ á}*ªÎAXæÍ’£‘€K -wðj&N{ù³ eØ ÍrÔI¬` RgÒ*¢1AÖÁ ¬Ûk³:[ã1 çÉ41ûxž`Ø)J_Ù¿´z*òºÞÕm€‘‘šPùˆÌùÕàF³S¼<}¾?¹HÛøüù›¡R©Ò¡â •m½¥2ª.B‹Ï #áå—ñEé^©¤ª회q2±nƒÊîµì´è!ï^Å×ÏÓÆ €ë¯ëÎmòKð‰F4œÚÞQÐðK>I%jçèoòAP,Ð$ÛbÈ÷-ÁâÎ%XÛÓÿ,Óú¢³A†°n8A ÇFùÉRßÿçxÌNٗ׌eã’Ah¨dÈXS€cСýÖ_ÌHîn_î°%Þþr'w,w ›®_ž¸SÖCÅÑ«-³$ŠÅ«Þw™«Va¼=‹b o ¤kîX‹µÊ¤”€5“P§¹|Xx!ÛÃ>ZîŠÀ»€ˆÞ/ϸrouµ’(5V"Ta%ù z&§^VBÞ‹ïj%hÝG̹é0Ùû{„ë~•ǾóX¥j«YP}X½ê`ª\ÐñÊÌ+XJ=häûæL\UÎĉ«'œëà5dtŒ0©øÔ•\#̧G˜Æ†éª¾™»U]7zß·9¶äý/{×ÖÛØ¤ÿŠÐû0/#™¬*^Ê› °Øvg‚Ìæa²,w;‘-$»»òß·x$[Ç%ò‘jw&Fé¶åsȪâWÖåZÈ7ÿÜÏ`Cû=´Br,0êDØT¨ã¬Út³=€ð ºhÍR‹D%ºbɲ x6¹ƒ0K ˆÐ)# 䛪sÎHÞôÝ,øD 8¡…NŽDlJwŽI’‰vxÞ¥ˆ@ÀˆQ¬‹l“RP‰Â6ÐS+4Pbe”¯:?Ïœ»„W_9l„FÓÅå7ßýéÒ;uDBÆÝ e÷㻩Às3áó¥Mr­­€èÁ·ƒÕ§ûËo&ó»‡ñbzùÃûéêòÿò§AæÄÇdߤ6¯ç“_DmÞ¼úåÝò£¼ún~ÝMÅ„òöåãd2]./¿Ùlí§‡ÇÕå7¥_ý4ž=NZ'ù¢Á·ßnät>†?¿¸9\å_ý­¼ìµ¼i¿ð®·Ì±FfÉõhy‚äYN¬|ç~9ž4²ø ÌDîÖÉ“7ãÙrzu¬ÀÑýãð§ùÍ‹Ç@*Ò&¬xŸ:‰E,‡eap‹Ö;7±•í­ýáa12·¡ ûwÇ0‹±­¼¥Æ¿øcì!›³–c­,Cl(ÔÄ(Sw:Ó†ü‘Ü‹À@tzï¶ï¥Arž7òŸî÷ÍU½g®êœ o|ÞïWãÞÕ”nKå[V‘i#VMï?Ü.Äð[®^ ¹¿ØtÞnî–Ÿô¨ñ†^ـ݆²z†ªÚ¤×X@ð²,ã;—÷%Y/óØÂ¾J ‚á@¢“šF’ò.©’l´p‹>%ʈeÕÎgÏc-v8ª4Ù™"=dËÖˆ’Jû3¶lÓJÌÈÊ:?§ú˜¨jF²éuðåiÖâ/R;¾šM±ýŸùüaÏœü«ÈÁô»ûëé§KRößÀo§×ÛïQô”b8}.ë”­7ÛìÕSü”>oæa±ƒÛ°(9ƒ“éíÓôzÇl´0B-IÚfcû›mžr»”CþQÓÇéb°ú0¾¼ÐcÔl~׺d±.ÅÛ.k^¦å@÷jX,ægÖ͈§Ý Èu' Ž¼Óž2°[9 ¤Th­Mt˜övkîDXV(¼F838kí©68 !ÒãA¶ÌˆÏÁOÎÉÎs4xôgðvÒ´[ƒTžÿ6´nrucéjÂß¾¹^î9!6×Í9è÷Ö–_ ÇØRgÇ ßkû¬E<,õ—lyD…˜ÐHü{Güå=‚¿;î„.óE#¶"ÀŸDÇ÷“élzÝŸî)UèÞ'²bÀ  é*®À­^{»Y‡Æ®6TÆyÒ ó>PdÝälWE¼l¹DÒa#š ç+Y©€2'34µˆP×9¯óeËÜ0 Z(iTÙZžX¡ä¾ÚP…-úè1¯ÊN*›V»0Ý¡n¦øá‹ÔÝYf;ÿ.Çe°“4Ú£~øð…òÏüý?øówþïËÁ?Bo˜ÿøk³’Á¿ùïåø\Ößm,ǿݟøþ¿µ,°š>.nWÓæ =./ƒ¬ÝOãyЬûr Ø1üÇà]cW?Ì儉/1™Í—ïíÀz¯OBR=±•ŸØåyšçGu@X§Q9JêðÆ:›Ôèc·­PÍ9ó–\'bzh[>üêÞÅ‘Uš¿ jtìbÑ 1JÞCeõˆïÇÖœ±‚él7ŽôÞµÙþ×:{ºWõV½ýiŸu»="ò2ABŒbÒ•…Ž·ÙÁ¹æ1e˜ºÊ±ZÇúŽÂœ‰Â\kÏ`n ‚9¾ΑÑ'6 å›Hê‘Ñ Ü—Ÿò¼Z„4¢ëád<¼z¼¿žM 
¡{ò§ùò]í(Uè…·¥ ²fœ³›ÇjUmí+„S¼Sô\oÆf@¨%Û§b]/žŸ©1Å{ïÐæô¢¶: ÖÖIµ¤£%ç/;. Í¢ëne÷Wåß@"_aDªYRu‹µ¢Ò¿súŸh¨žg},?‹XÜ…ÄÀ]Ë{)'(Í'b÷Á Ú“b¯K€F²…†ó¡N6íg’C8ZáüLœhSÌÖîKT×4‹M‹×’â}Æñsº² dÖ‘ÄaÇâç;>Þ™˜BѨ£×eí€ p˜¿(>ÕWûúw6¡ø+Þaqµûíí Î1ÕΙ qKØ:«CK;eñP÷ɲ—*î\nNÞ%À¯¿I¡Ä‘²št¨èé+”áŠéyŸÖ rÉÊ¿ÈåÇñíJ–5»b¼¾ÛdG= PÛ"þ£|µø|ÛØÆKaópÃöáíõ»ý*ÓüÂ0Hðü±É*iìä_£DyW–ÚþDhfà«&‚œ`£ŒÁSˆÀ¸sËóq<»?aß^™—}/góƒ›ëñj¼ü|?Ù:}éAÉ9u™OF,Ó±Ðz¥) Yâë§!Ë€º Û­e‚–xyâ.ú3ƒÖî„à…Gr´Hœ`vçtȼ‘@{Guõiö´ŠXT¼ÐÁw¿êät@ß}¢Ä‡–/u¬/cJÒÅ#¥\ÒΗRaÔ„RÎ™Ž—RÍ<¥ @ï«Åð§°|#xçüÈ*R*ÜJ‰i^íVJȶ“¨ürWÿ¢ýp¤½¹²\mÇvÂW»'Ou;u5ÖÓ:b“O|—Ê‚g¯½ü.×RY’y\û7’ ¾Ï˜\K $¯>Yùs¦òLj*2É0¡Õztãfß6–ÝÚX–ê'Ú‘'j{-Õ~FôZÊYýj•ξ–*Cèk[èÐ$x¿ ;pO \â^ ¸dGåÎÑäw8;gU ûY&gÄÀï\|:0銊—ìÔ#¸$p1%Ê×ûŽ69mm,¸šEñrÈfú,¥¸FЧ9­·è= OÕ7Ngëå´VZ)Å7$ œV8âOF˶[ËbœrŠEi[S’qªc¬©:Dþ)ª;œ8»ŽŠGÈú•#Ççˆà`>#ïpv{3|žÌ¦/S%ç³»í›Øû÷ÃÉt±:QI[DK38ÍþŒ ÀP8;‰-rr =@}¶åªD²ÅÖI›®NYN"‰ÁXfBkgy•ò†ά õ™µŠw4i:9Üèr/H´X¥Ž-&nHhžm†Ë±™Õº«Z[ËJ…“e‰÷àñÌ `3}½ªïà”1QçAÖż"aO_™Ëpå¦ÄoÈex^O;éT}—!K„me¶ ‹Ù0bM‚öl ÞRE«’"¬íd81ÉëEHÚ¼v’j]»ÉiŸw¶]W×#­¦Ï;ÅË 2×M¬iTšñû‰5k¥è‰ÎÈ}÷%@yÃÏár|÷0›.·PxèÏ0(Û¿ˆ»—†å"+jëô›ˆåXÝ#[WÍ}p¥XDn õ,[L¶!o3Ly«c&akgYQhY•Óa>Ö™MB[Ý$ dŒÄÖŒ #Õ»€~#Á»ÇÙêqyq7]-n'ËáõX6{?\®x ñ²¶³tNgŸ±Og‡Žµuº×ÙI*ŒÆ£LzûäAù´×ȹØÚZFȲ@â†á6ÇéS0" S©Ý×Cèhc9ÀŒZÓT±l¾„©Òä|/'¦×Bf!¯P-|膓ÅäÅ=cwå½_—¶Kº¿¾…$†ýéFHJRÓ#*·Gœ¨éPƒh|¯'š#OlT¼qÀ°)ë;ڷܪô’l5ÖJ¨µ—Œy'aŽ(kfå[wßíGœ4ïDvÑ3)¶eèpT¾ç(DQÈլؽmgÆÆq,í4΄¸Rͳ}j;Ѿ\çÉèÔk"œ£˜='" Î'ëáқܗ£Ç |¬>±E”m ›U {žù¢ªmî4[OÝØ1÷L ¡¦D;×’æžs¤$7ÖQNÁsèÀχ³ùä—ýaD^w¬m,½ž–1HÐ=s¹ôrŽFoKœ]}®ŠèòH«ÃœÓÎøx 0UtX¢…œa‰†³»€uÖ•UùX~4ŽÀ ³ˆÎUïûúBcs6žÏÂòân<ù jfx Eì1»#¸pUíêÓJGcë°óPÚÔ*e}¬Å1|2wÄ’U&i}‚hÉÓ–6%FV…eËã›3›T»ŒS‡‚ËQÀµÞÇêÍI„ÑVŸ¬( èØ=¶3$TååN'Œ2ø¨âY®ˆ¯ëJ©»C9Ûá‰PUƒ!¨ «®‡;'2¯¡:Âj6‘Êià=‡œçtg3´>¥ñŠ÷-=Š@iX4)¶þÜPê|•〆½9ãä§óåêa¼ú0|XÌ×2Ý\w:‚}]Èã3ôCƒý…¡;ºÕV÷ž‰‹º&K“ òÙüêз&œݧ´ÎíCUç‰C¤Ž }IHpâ´g`ªLBªQ1HmѤ¤Bug†TCTRuh©­é1öI‚ˇUqp°*®[Víf„ǬhåÏ:c*/òb­îÝ~=©S««í¬:´¿ÁP–Û X;‹~9p ';X“Iç8dc¥Ñ5Z:Ø"Lt «F1í¹ÑÕÕ@W%TóÌ Ñ5tYX>Œ'Ó¨ÁÒÍI4ª2”úÚX*¼ɋÚÁRZæhÇGMTÀª(–Ú,Æfƒi/mZP­êáø3ˆ±Ì]Çn—örè)'×j6”ÌÕEvéЩY_vï6ÞR¡zʪ)\Þžœ?ñ}·sm±sßÙC«q¨äy»Ùµ’F˜±b°ÀV6ƒù¢þ)¹,ÿÔ@þáOƒý¯ÝÛÍU\tˆ‚€kC¥Ö¥Õ ó™Pn=¼ìò…€—«ù/Óû ӭ柋ù㪠W¯ä×½ঢ¹[ Øþ,É_Û|ÅÜñšL癇©X40Kš |ÎÌ LçØX˱iѬTd­X‡® 
T—8þ~0ÚÄ'°I @LÄÑ5ÀóÒ8ÿ²«0”Ôd¶SUbF΄Fšëho%ÀÍE$ÿù8_3;M5·˜Râ‰*êÕ&¾„•–ÝI@ï Wçc@y¦®q‹>äo#zÚï)Õ)úáŒSJC2úArÌ\R)8í¢í¶ô-þeæØÍ~w»Óp±†aÛ³=÷ôêùÓýpû± ùç&×kqû°Q»êqŒG®ÎëO$?ÕÎ{¶Ñ°ˆœq ¥ªhˆY£ï_ùÆõÅAAñÚ°æÓ”@:Ff$KÆ«Γ'« Y/è2.’­ï”tN›õ.v9Ø"N«™‰º¥®±²O4ô|õbêfZÄìzS0 (Œ‡TÊšóNg…TsÚìó,R“£ìk¨deÂi©š¿%gC¼}ÿç¢O~°àEʧA–•êai‹Ùæ•üñ%Pö8µ ¹òaäL2Æm}ÀrtvZ‹0eBÜ"ÁZåy)tE*¸†‰ì+‡U/wÊ"¹LCë&W7–®&<üùç›ëeÇÔRo컪´‡ÚA« áñ 9"lñÚ-/5›¨Ó”[«â+BH€x)¤QŸVø™#ýmX9Þ¨E~’6,jµ¾!<±rb3gÚ´)cÅz'”ëT†]䜛ꬡ£G4:-œ >ÍYjÛ2WeM}‹ÛrQþ;!Ä„DpOb§FS%ª$Ž¡ƒs怷{¨Í¯‡òÁóÅ/›R–’>Zî–î¨?2ðÐõé~Ī1…OM%îL»’TE•F_g´NfÀ1ÄÑ·E©2èKÈ$J²CÍF‘ã Õ›´ •ÇB€ÆñôÕT}}^~ëzÚa¯üÖ^xQ‘µÞûH¬”8var]=$Þ«Î?`ºnËôSx®Óï€Ï«ªIT5Jƒ9âr5/i_L‹×¹ˆòíÍ/n΃°c9ŸM?ïf›££'=Ú|÷Õïv‹‘ÑU5'ê~}ÃpE"ì¨9+ÉzÁp‘ NŠÎhN¨=èdN¤>Ç›¾Ð¯LÄ(\B§MI}š#=}zit*Ü\3u”žòg± à‘‚ôe<±/\Zpt,ÒØ&]Á±¡>MÓ¹ 1ÄÚ—9¡ç¡Ó±p—CXkŠ´AʾV”e’ùüDÙŠ†CUÖSõH§ £’b¬·a$¯7¡0ŠF@˜³šP£lŽ×1Fj2›ÀÕˆ˜p¡ |ù"ù›ñãlÕ±FªZtÔk쟢pK£ëU7D*ôö[+g9=æÜªôÜ!&¢èÄÀš”‰ye}ð„άj©úPe ûXÏ4ÞÏ[Û›•?a;dOt…‚šÜ4ºFÎÈÛ|ˆfýEœe¨ ¶F¼Âym-ÀW¼îðؘÑ¢ËNl!¯ê+LoÓ8 -šßh¦=Ôò…éñ· ìRÍñá‡-*”é_'¬ôì½=sÀ¡ßpDÁ¶0_[ÛJ‡ÜÓT0ÜàBGöépéM?£ãRCÑK÷-ÝÊDœÑ5}îhƒ©Ó)¯iûnÍ—+ê<àÏ_>->LÓálú~<ù¼¹¡iŒ•ì‹ÛTÁO!¾ÔŸ™ÌÚQtf²B/ÿ™DQªßj¹tZ¨>1R£ÆèÄHdÐÁqïÂpQ8H²´âQ.6['žßQsŸd‹C ÕÖï !ÊyÓ› –o÷ÂË£J[u‹[ˆ¯*^Óq‹ vÓJÛFµv‹v…´ã.e܇ØÚ [¬®¸ð&¸ýž\‚ ðáeh… >8Šª ÀZÁͱïü¾tRä®G²ëÅÌ‘fºXù OÌ’-gjbrynîüî;¿;~/äyá„ØÊ£ß:¶óßÃÉuÿò‡§^¥a9¹ÿsŒ‡ØãÞYŒi€4”Ai’U¯²d ^)/>ž›Ýœ2(ÕGÄwT¬AgGK²ªï,>(Í«!Éna¨# uÓIn^hÚppÁG-µ™"ަ]Z„é(rN§>8GaÑAí=öï)„ «$ØòqMª}Kæ¡C÷ôƒùT„zAú€v __û“gÊÊÑ‚ÎÇèf×04Ê«Wª%…&WH·Mô—š3Lx’MÎÄÓk'SEœ¶¶þ ¼lͺÆïÄÞwîÅ{Yµ©Ÿ«h–h÷km=ýõqN³˜æj°_¤YÄþåy°±ï¸çíÍ×§”¹Júùã%IÙœÓlwª³~ýé‚Þ8»Þµ«¨—QZˆ|¢#AaKƒX{¡÷d_œàýeôÆ‚S-£7r–ïH†à{Z¶}#ùûðÝÃÉÉ­ŸØÉ™Þs²¤-§¡Žï× æWÇAì]ÿŽF˜©öú×Sô»íQàûï1@k_¤rµ'?9žãPÎ}_§áY]ð?¶4GÛõß³}ìµ×𬳢ouVîK.É6Ñ¢Báâ¹ qÿzxñXˆ˜+K=mc!y£§¸ö± &fÈð‡LŠbµ{É}ž™û ר´*>´ жcưsÐ#ÏŽÛk3'ÓêKéMâð‹Ë^=e(úk§ Räæ¦Ë„wÖ$·¯'{ˆ$êËxíÁ![!ËÉyF¼žVmFÌkõ†0®SÉÐû”wÚ±‹ãÃ}¡#»o94å¢y|O´Õ¸Ý ª9¸”hã„ïWoâÛ§Þ\—D^úç?0HÇ- €ˆ{Üü½(è Ð]”ò@OfšïŠ€…÷Ô4—=¸˜|x,¹ˆ>ycX;Nt0¢'1¿OËLzâ €N>ø¾“S]UZ} Ìi 
¾?ø£nØ¥)—}Ñÿ™}Ú^~ùôãdMJKOv"ßâXQ.˜Ù}¯Â‘Ia‰ÐEÊ0Êå®ë$‹l\|´Ù(j«Ž©ªjíÀ8xÇHböï'POŠJãZOòà6xã6A¦ô€ÙD½_œÌ8ã¹#u„ØØ”Ù81м_±}vbYIžq5=é¶1E{?ü064Yz‹Ò¼S]Ê}FB½žÿ’ÚÙ©§2<"Ò¾wø">"å¢ÌcytÀÇÉZ#¯Ž8x6ÖNÎá}œ9i á~ÎJªêžŽØN}ÁõGêÏŽØ)ÈaU쬟-v»ýòùñϹÃÅÍSÄœÒåsß«b>M§ NjÓ–:BLòaµÉ…¯’îÏfYÁ1TÁóž(ô"<“dG-$Õ URú,tÎ+–wGãÑ9d¨R“¢,xvïW¥ñ—ªvFva…†Ç03RÉÜ¿^Åmhjcùžþåóõ‹9Øý•ÁçÓu[0Lçƒaà ™`‰ì …4ú°sä-¸ñã«Ê§°cÇöxnÐÓ¼Br§í/ß>ˆZR/*m«oœm9¬‘$PWÇÊ‚Öòð1RÃAaB{Ô¨0ðÊI„cÊ·ÿ;Ÿ©UâP(nIȲ+†K"²—DÕoÍx7Ϊ€·™QŠiƒˆ.Wr$—.p›Ì7€ºµá= †Û$fÄ Ü&EqÕu”¨ŠFU!è¢ B F*•p©Š'r?@šö®¾<}}hKÖbж[ ?o`tÆ$kór뇽f"vG"-’*År¬KYBŽƒŒº@o²ë©"oeè%=[ÕD,èMzRVIÙÖÑqXxÒ;e{&Fê•Q o—:X‡Æ·vHmÿ0¹?Oóg•]VÇ. ÷¯½é·öåwÓ€Ú‹Üà›2·ø¼é}§Cáš¹eê-%vd‡àçó+­(é~Ÿ¬$Ä"ÀÛ Û¹bQ_ê;É!ü‘X»@¼-;#Æu!žF'`¤ý ˆˆˆP²Éd§}¹5|ÕØl%á9”K+ƒÑHK !ågÞvB߃…é1 ,?ºç4’›œ|M/Î¥gÇ–Ù6Êjñ©p7Ê¥S9·0- y£u2»èÙQ¹ð(Pé`ÊN¨9d§º7ï@Å­ü¢Èѯ{#ÍÖ½ù”ÔF.Lþt}¼!®Å´4ƒf™Ä?g™„àݤ0ñ!ü ”Ñ‘½¦jÛèNÝñ¬®ÎüþœC@ȵtQF Ñ|ßýµŠî&#òÄn7ÁèâLç¨>9U³];Gâé3Ó™Ôî%0kÒwÑ6*ü˜ÂèøÞÄ ™b‘iË!•äÅU³çò»åªñ0"‡®‰ ×Àðò»iJö¾§p,¾ÿéü|w÷ò3™+ Û¿Î€GCÑ8bK9 NˆÕT©÷(/Dß9¾c?'ÎVP%G»Âø‹“cœD-6˜˜ð|–Gé .¶yŽ‚²2G‘xµh„„}ä´Cº¡ÕŽñoCõÀ£OÅd﹤—š., Ÿ'”å®”œyIö™Îâef eOì ÆÒðĉÄÑA\±³½+ïÈF•õµšø2*s®žúML}ÆsˆûYå%= €atöŽOt¥é\©™ݶöª9*¯ÒÖ^$&é€viåǨÑDÚòèÝX-R·}PQ­`µ¾L.oÂËNŒ:H§«…­ ¯`µ¥j Þñ÷WGãY-l‘þ=«…ÐF<îçSŒæb¨rz¦nNžpa¤:ÀÊÞø”»ˆÀâÃ×»Ïs).X…_†Y M ˆ(žC€±8»X?œåÝ •}Φ·¨Îjȉ§ Ðòˆ1‘­ ´Aãp MA 5MQ´cl]ú ï¹G1R±¨ WÃ4úã÷dÌlÂX?c .´l]ˆ¡3afoLMZ©‘©Œ©v%àâÅUÉåÞvŽÄÑRmÕa*Ð]RLJ®&å ð¤'PùåË\ 9#è€1Ò!*°ß“’ BNÛãétfžÔ¹\}¸AÆ[ºW¿?úýã„>ÄÛ›_⽊¬z`n 4s£…`±ÛLÌ]A´½øßw¶&‚JxíEñùG£Ïáõ‘(»L§N«&ïte¸Ž4šþݤvf'émÇ^¼É0ÿêàð⪕àf¤òÙɬGdÚÑյ]'fô€c-CŒ!¸@D3ñ¸ïÈŒÊÐQˆEÈ„hFZ„LöÙÉCGÛí‚™˜Xü¬ÒÔ.~34MÎ2 i[¶SËû3ý ±këCgÌ,ŽË5„E¤nvCP'ê"IŒLú=š^_«–ŽD=ýžNmYmE iÎd„ÆèÓÅ"Ä^}ïDÝÒ8ðNÎKðÙl+—ÞÎ…¥ç35dIádÙ¥¾4¹ r˜Ôv€²(ƒŸÏMÎ^c¦¾4iŠXœ¬2›d­Ž¡tÖìpGÙÂ~´Jç—œsD^Â:©Å:ß£ËÆ³]4î>¦¾í›ëYYbs½ó…Á.xЖŒ¥jqŠs î¯Ó•kOîôí«J¯Ì0x‹Ó(Í/-d†i,”^Ûðn¨çi¦á ‚Q3ÀÆ‚Å9U eõ—=1@ô#<ÑÛUSÔű þ»…~KÒhïÿá>e;¥hŸ7¿Áfú¯ö¶ýàüPWL= Ý9”)ÀÁü®ýÅâëæ´f0ÅNÎÏ•B EJ>›ºÕs:Qu¹éš•SdÏ´®Ïbð0˜çÊļ×Nnº¦'EÀÂØtí[ˆXÅs•¢Ô9ù]Ðc ØñÔ2ðűjð±½?¤5²éˆÊ%D,Bp1Jñnu†›ãHV}Á–í$"¬‹h8"˜œ=æÁ8¨Ð–í»&Áë!jK»Â’È>¯f "!á‚êdN#Ϥ%ã\Dö*ÿófÁ×>oÿËLø?¿|‹öá›MlÿãávûÇϦ5o~›¢¾»ííá3÷¾•Á®Ù„ûÑk—âîôA…3]?üô&‘Í´ý“ï·Û'z \‘œi 
*-gXð¤v:ãÿüdŸ<<_ßLê;µ€Ýlø_®??oÏ6ýûO_ïÍþ÷ñ—·x9aü;»Ð ‹ýÑP¶ ‡û$ñE»PÎrFmíï_žo¶ÏÏ;ß»ÿ{ÅI õߨÅñwÜ<Þ¹~Ú¾ÿg$†v~–¾- ²KT;Â'“Ñ-¼æ³Ó|4Y;·h5Ý~0'³Sóù»ÁO?¿áìÏgpöçßž¿|Ú>¥j›×7¾V/>=þvW]nWH“W¨§ìŽvŠ· ˆcï!UàÌÍ£ŒiÛë%eb!Ôªã©äöè=f©â"ìÑ\fËCÄú< §‘‰ˆ´Ä€ØµkbdL¼öËÑ@‰ê0m<Ûéu´³š#8©Íþ®®6Ž e!^½Ý#mÑ‹M+Í‚¡ GGÊÞ&N¥¬¶=›ëiZì°³µÙªÀ¢$ +«Í.™-tv¼ƒÚñ.Kõ&nŽÞR“NJ ìgü\Ô›J¶rçhkUŠ“2/«+Î7][ƒíÏBô8'Ó™÷ît“ ‡hVs¼yªÐg¯°‡ÕoŒNÖŽIÀnÏMãŽ8Íþ¢ÅQ‰@µÃ%îX‡Táob'a¾Kô¶³*ƒ( Äµõ¶o"žëo&ò© oU1Gâ ®Bqù”Ñagµþ&‰ú|m½…ÐâobˆŽfgËýÍ×ú[LžäËœ…/ŒÅk„lJçhg5þqCÑô¶z`r:w¦²Ó< Z¬¶P­6´À|¬P[ÄâÜ6œmš{ÛY•ÚÂF=)­înH-9´«¦Þ‹o<#ê¼w5 ‰¾¬4Ì:ÛÛ¾j!2h Õ]Z ’‹¯nén]éjÞ”öÚH~ÙÓ´ìgäs~v´­*?sÃS]]gÔÂ<.ĉiP–$¹_ÆÑ„"Ö9Z9!ÌOiÛY¥«Ùïò¥q*Òt´DiìZê]ˆÓ¨‹…è(µèÑ®lAbYgÑùXDǨ³Õ~o«Ñ™ÓxçæD"â%¦»Â‚ŠñézÓ2Ñúbx¬ÕK9$‘Š2Ñæ¦û˜lø¶³ª„¤¦Jq°¶Ú”[¦YÆh2ô³îkyµU'$M@hZ‹T ûÓ¾wة֫Қlĵ-¶½ùI"MŽA—ç#©:=â,Zó®ÆÛ0:’ÞbÈêíxk5Šc±e1:Z[qH ùû®@lT\ª8­N${3ítj•Cÿ¨âBYq˜-‹=ÚYUéoö~m½‘¶ä‘!‚Æèh¹ÃÅGÞ"®r¸E½Q6¯u´³J³“&âwÐ[Ëà ’Z”MiüBƒþæƒçŠhÒü-7êíhgUþSYœ¬­7nêýöÌv7…ÅþÆÕád˜úÒbÅû¶í©Ô?vž/69ÚZâ(•W‘Õׂ“’¦+¥©ó‹Ý­:ìý&sù¾Í©|¸i̽Úí«ÊÙÜÆ.Q(¼²Î\ÃÕ-ØŸ¶CÀ-NGju|¢;ÐPÆHŒR<ÛØiÎ׎vV£60µÙÉ×>Û[Z¶\ð?¹ô°±x¢ÒóËãÓõÇíÕoÛ§i<ðýÝÇMM«|ÓºêÖÍÔ"n|õRƒÕ‹WÆ@Ù ö ã­¶lÌqíË Ç¦C P`LeéËã¯Û‡¤çíïWãyzüú’+8mj ÊY§ôŒˆ¯ˆÎP¨ô|•D›?åd×ÃrÒ²Ó‘Œk[Ž¶Ñ¡·³OØÍíË¢=@Ë[»Ýó«‰u3 ÔD…j.[¨E«ÐìcË‘\z…-Ú3… ]¢ÔG`_áOŒ¢ ‹"Â=Ãõwî÷Nã/M7w7ÓßžZ7—lGqð·Îj¨Ö’Ûàà@‚ªÔü]+Ë^Î<>Ûéùoðš~ùjÇL{uótsõòx•¹I\íGñ¼Nt˜äãù讨ä2¨áMSIYªÕÅùpðã*¦g.}]mw"b,¾j–PêX ½RdºÖµq& žéñDÙDK ’‹‰žÔØud=p+$yÁ9p?6â1 ƒ‚„E´oWé]r"E|­tPÍ~|œux¨„Ð.øŠÃƒZxíÊJš:Àä¯òR;ÙBP«"GŠOµºçßy7Ñæ ™.‘cbÏ1EÏIݧ¹‘ê`ÁÄ3­À8ßÔ9ÚUÞ¯~Ý{³¬\½½{™åƒDk–uÑ`Kc©]P€P¨ËCÚ‰xºy)G&S¤Í Šº¢>ây{óõ)…,)úÃ>}x6 Zœ8=X½þtA© uUÒ–LzLƒ!Ó‰1¨T¢A¬=]œR²¤ÊÅ÷E'̬G2ìäãèÌtm'0 Ò…¸± ¾;™jæL™—-êÀ'§2~çæÓÚ7J¬›s&»HÔÝ—©æ$s_|zþäÓÃ7Óª½‹«Ç¼{¢þ¾®éC"Rd‚¿F¶ô ^žç5à/ö=ØÇNK—z%Îh¡5 ætw¦ óªÿ#ïZzÉ‘ô_1ú2—Qƒ Fk}Ú˳ØÅÌ^YRU #?àG?ûß7(ÉVÚ¦D2“™Õí>4Z–ÓÉx#â‹n¥Úv÷¨ÿ¹ùtÿpw³yú¶y~|xÞ½i—¹ûùvqzÂqMÖbÿ­*ÃÍ®¹›Xn!?hkNÖ~·kî“F]†™BŠ1´SžCÍ–øxûT€"Ñ’³žC’«Ìúüiá:¢µE©êyj#œƒÖåë_v\4Ôñ–G$¡¤bâ5_Ï ¥ó€Í•L ˜9ذïi›Ì]Pé÷ßáÆ[I«²òxê è àñ‡§ö€õæËòy÷TYHµî‚‡cyÌ]øž«†–<’xS=ÑŠÍÒ–NsbÁLTùKòHÑXKd-l~zˆPu*œ‘—ÆéûÛ=Jº¿Ý{vÇ¡§óeO'MËžÁ–T=¹¦êÙ8\=Çn ƳØÜ$}ÍHúzî÷Ö×¼·,½&ÃëÇ_¾be[³‡á )°×ƒæšØ£N~†®æ$ 
[65{/Áù¢…(œ½yµÇÍØšáNkÔ¦¯é±*â BÖ±¥¿LÓ›k |¯mÊdÔhóMfQ!HSsmÊú•yÚ~å³–d:f[a˜ÊXÃË„èw®¥Å*ÏÝnóÛòf÷¸¼¹W‹=ÇOGlšÆ V»êóiÎ@gˆ¦½d='û÷»å“š¾Ï¿<,Ï¿^¯¯ë\§§0)¼±Ó0!ÌÉÜ(­Ÿ5Ü<=lW‹õR­ëíâð½*šØIß[žb.ÎvÁ!ÐÌTIA»í—Íê·Õnó:|}·»9ýxßFx«aóPÉ Î6Ý 1q,.Œ à ¸b5ýÆÇ¡ÑÁã¶‹¹‹«Ô‹ÀÆd«‚“;kzÄj7ÚŽU ©¦b&¢0Ò[6âéãF{ÀÍ{7rGdL¦»¹yÜ(Ecì #âÆ¡ä,§÷wJcJ¥£ç!Ͱ^ªóÎ6Ê(Ë≖Æ`ß³Z4êಠzJÅd¡§G¥&Ö@c ‹Žë¬J0àFYäFÜÁä¼³Òaïg™uåP”;úÕà¨Ü±<~>ÇW§9º0J÷= ÚðHαf›ãgä/űít}ßÙ*Eë|\YU÷éeu=ª´Òu XÕ«×F&xþ!ijLÔìYž÷7›–)(‚OÈ튄MîðíQ®ÙE#ìߺ¡Ø¸ü x‡­#o]Dd±6å#ЃµMï‹|„CnwTwù5)³y†ìÀIHÆÀÌÇ©reI޹LÎ]æœcdD8ªš«ÞЧ¹—ŒÐ0iCÆn³|Ü”àâû‘÷w»íê·þ7vlÕ5c çFÞõ†0²(ˆc#`]¥ïmDÁf×E s´&8\FÎjABrñb^5«)–­éÜö&ŽaŒ‚ÃdzËèI™rʃ—I‹ÌM=®HÑØù¨¨È7´iV‰ €+‰ ƒ°F û6ƒ¾Êîòz·ù» ïßîîî?ì*ø‡JÃæ¯·ëͯŸUæýå*Zùíf}úÌ'6“Ù8£k †,âÚ1›¹@‰‡åä Jï4Šo{µo¥ª¸ÚlÞ¬ßê"A‡„Öí‡ÿœzÄñhǧlU×Q÷õËæáêéÛòöê• Ýþôo/¦ BdR)™R¼I †a?<6rWEñâ³WžòFŽ‘ÇE™Ho=ë«dS… :/þœzÆêîæ~ù°ùÀkÖ´,¹ßbR^p%P@süV’â[>tBÁSÞY‹ FB–×!¹ºét°¢ý?¡å@Å sž'æëâè¶‹ 9£oÑ^úP^[d_R‰§ÝÀVwJ,ƒÀNc3(yå€Y<ØH4Ÿ„ý?Q¥É.ìÀ~^‘`gh˜ÉFÔôšLµLìk–«ãMAö¶ù§ØL8P­wT b5…Ï ÎDò9I›þWú´Ž¸^• ;·t¸!¹dõmÝHéØG»‘R‹} Ü^|DžEâ"q0¹õp‘^6-'‚4Y9¤¯­²…nnqÐ æ Æ=ƒ£\$Sí$yÚÜDÿq¸m~ùŸ5—„¸h'^)™ïîÐd#/.ù§¢ÊÅZˆ‹¾uÜòêæv#qUñ\Ò·Õø«~8øCáåÃûݳþÆcfØ7÷ëû)ÙÁ89d J2 ÙMõá•zMDG_Úx2s[´CÆþ4G Öøê5мӵÚñÞm$^J8â6 5-%éª0æ…ãXðý`X^ ÔD:â6‹8\*Ôc‘52üj9²åI)ƒFWèãûºðp÷ü”ô¦ç~P·½ÌÆøRX/lƒ¯ŽCZ t}öL`¢j{}¾«/€w—7œÈ™ŽLzk™` ³XæŽL(ð  |Çý!ëïl sÍ"ÙhkÄ9–1d$/0Á¦qò_IÖD`ôµ÷3qs Ì[¬ƒ#’.04Þ«°¹ØÖ—ÿm3ÆÔìƒeöã[ù`Ù‡ÁIÝNœHפ|oTÚqþH:À‘4qçœaœþuóë“Ú)}âc÷o9¸=õzw7*)û½Ík}¸ŠZüÂÅ­0]~«LÝ2âóUþ6ºpÀ dìÕ ‹Õ3ÿóº™ÿˆBèQ `Œ@b—gÖ „$^[ª-܇¾µa‚¹ÝGÒ,Kk‘þ•2õ°¹ßmWËÇ·ÕéõJ„‹¸ÝZi²¸Ù~=t{½6’”}mAB+Z1J³Ûµh¥Õ·‡ÏbÁçeŠ’½„'ª6ò-ûá™ ÁDÎEÔ·—´·ú~[—_"ø)Á‹ndo|.l•0¼!PKc.dÅI‰1·”M”XÉ…;'j42æÌbÅÏ®xþEÅãî¾_®[R$êHÿßuvÒ…a0Kº4hÞ¤ŠZÍlb R0G¸·‰6k]ºq"M#›Äb:õ?'±þf?€ýþoÝ׫˜x‚‹‹"€ßA¾Rßüúñ!…]°tÐ|d V7Ç´Õg†Ü‚¦¨ê4MfSÙ› ò+A5Ivα)꿳Ça ‹V‚“÷ƒ'6Úس©»l Aއ´DÄUKJ=h xv9ØiåDöA_Ð!ÎzÉ‹‡Kî蓨\±¾6 Bh*n!ÀÔhő̔Úõ˜ä Z1`Ó2¾2^\bü˜À~Bgï‡%ÖΊ;‘F¦IaÇ­Õ·%1XÒ~"”q`éNêדtÜZÀÑ‘y;£Ø{F˜Ö†Î‚ˆ¸šY¦ Ìô¤I\ç¸S»‡8ç=t›UuÛ®öâÓaŒöôc&ü% .níîÛ®îÚ:ç'Õ¹A‘ZÜMdx;.P«¤\³|-J yÁ‚HLÅÅåõ—“Ø^=*µÅô­½°œ9cv°‚Ø‹– 
Ž6Ë¡Ø,ÛÎ1Em¤ÞæÙjS!vïd%fÙ…ÃЙ¤ˆšþîn¹^\/õ¯TÞ–÷ÛEĸ]îšIï—íåW´ªä‘f”•¼ôz”>ñ›`uC'Á£™E/ºÀur†¼úÕ?Òv”3ò"Ôi®D\F{0.m0`J^zk!.¨!¦¹ÅÅòqá ‰ª?ÑrÆ#tÂ9ᜰÇöø–»ûoKè^?;Â}´ébëØ’$#ŸŸt÷Ɖ€M¤G_ÚÎïæ4 C À °P¨†›qáÞ¹(Èv± ÝØDÊg2JÀäæ×…š„AúÚ@f7.4ð!–ÈAm«m2ŸÀÆL[¥bMg —Ø ë%/ É>ù-švqn9P91 Á; Ô®çEǰ+¹Ç²È’Æ4ìÀ+ùåR½© l"<CÚ“ˆ q Öïl!Ñy‘`BðEÖÄes›‰QZeך¯Ï.ª<¬‰‘´Op„ÞKÄÍóîéùñÓÍæéAóãÅz¹Q .ßk'±<‹ù {‰°Ù0T¬Oµ£õ©ÒH$4½–ªÂS¼/2üð~4}Y;Én_d;ÿŠÇý‡Ÿ–ë›í¡ûñðÁbµÛ*{«e]‹1¡ýá,ñ•Yc.´•`)$€S­Ã(ÖJ1£P°5ûF†œb:8w+®¢›F ꑧѺ^|•fc4Á$б0afF z¹ }½pñO»ÇJt 8¯’YªçURÉ>¤W)^hXçÜhy‰PÍ41 Aô5 —Å»ˆ4ƒ$Šò‰(Mɘ8õ¥†¡F½ó*˜lÉ>Q£>¬þ%4u¦'ͦÜ(½Cï&ðu¢‡0ãì¾î°“êÓòéi¹úÅsõ­×\U¥… Îå ´PÃŒ!uØ` `Ày5±Ú¹ÂXèÆ`|J2ä/_¼7é•K¯¤iâ %ÞÀJ]}žb¿?„Q)a ”uÛiÐN0©F¦kýÇ‹½3eþÊezâÏ+e–øy¥$ƒ”2IJ_“ÞˆËôjç(]g”Rä(Í•¯”tIGÙ£M?»"Ї™•’O9e:AvÖÍ{!z\Ÿ[>)ÍoªË×K‡ÅúnõïÍÃêË×Åwëû*ý2~RõtaHɣ¯H«H×LU£œ°ó%á aE^U]òú¦G¦& S¦ãâ+RIo±QjÖG§ˆjglœš*ñ4zxJçëJtØšá«~ÍåÖÏÄ“'¯z'+šò†Þ€ö’DÔWìБ¥ 8@7²Ãeä(22-žçüˆñ<•–Ù‡·pž±uO^6v5F¶X„ëù?¿Þ~Dõ„÷¨ž`ðG`OóØ!…ëyäÝçõÌ_7OŸÿó¿þãj¨Â›»õ[46wõÓÕãó*Êßçoö¯ûç§Ï?Nó?/wÏ›»¬³W?ýtõEUù9ÿåïïMôToðÓÕOgÜ¡‡¸ïÚû9úÏCZô4ug4ˆúç«_^ï6Wëö·»»û&ïª&›¿Þ®7¿~vÑyüå*†7ÛÍúô™ùè´ÄtàâPMÖ¸YÕ­L¡/ž5 CÑ;ÌŸâË^mãK©i[m¶?oÖïlhøÈÑd¼±mýgv|ÌöQÞ/·ý²y¸zú¶¼½z%H·?ý{·CÆ:q… –8t"c$Á›A>/&ª2zŠ ÔçŽP³bóõ_kX)™“ o0Ý#u:ZÓs{| *æ¸ —u^q;ì¥Ó({"Œ *KŸÈû'OÉÜÉk²èl®Ã¡ì²¨8‰2EÌ¥Ò"‰Ê]µÚÅ÷k¬¸ïõ(̹ºý %íœ>ýø9"Q¾ôÑ­¾ÝÞ”&NËø`? 
/V#Së½ÿn ûý{#©ùñYŒ©t"-³¢Qx(ÉË  $tFêi’–ipÌaöð·Ê­)SZ<"Õ dÂôŒ¾…‰Ÿ{æ”…t»˜C?·ÃT_6Ø’¶+Qžßa!‰û1’r£Ì;?%U{,-›ªÉ•uÉüŦS'°—=ºÍ%v-ã }±’ë\W‘øÿi’¡®ÛËqÿíÓ•%/.È‚oZbn½ÏÚG^]ÞajFUœ|ŸºÒN¯­ úÎ*K]Ð$x6ÓÅ_28ï¦D³‰hKíШ¢“Y«²ÂvkÄE÷v³Dg{Ç.P¾I¨Å LDúe'6,~Úûö8'*Võ´FCG%Vkªùº5ï3MˆäÅ>NH6`ë.X˾]/JQydvÖý¢I ãŒÄÓ¨:B^=ÍâÕ„þÙ«é<Ê^ÙkbkO´°™¦ÕïŠn_œ)¿~™áz*”|Ÿ¶Î2ÿ®öù\­×µ¢!jW·;‰”FLdøvÕhHó¥×Òg|‘ †üMøŽ¶äè&|/ªF>Ø3sÀ¥}0ZÀ ;›tÂÁÙQ}Ïžöo«Rswœñ]ÕL}<3yË}[Ò e{¾¸_öÂæ%Þ¡Jz Ïú†Îf š+qÎhþô–NO¡ð@µ„y§»«%±ùì—V2=eÜ ¤NÕ7uªŠã0¡Nq ‰³^ƒã-Õ¼XÜ•ªXgÙà„¡ó6GÏWž&J-­1¾–©aÎd]µqÝ,ʼn™0¸‰ìˆdKl>3‡¢ÒÊѰmkͧPž½•¬ÚB’Qm´°’ ÊÁ úN5ŒìlÐi–FvŽÖÐú ©¯'§ñ¿e3[k\œùº„ò¸!ÑK‡œÖôÓTV3ZXQÞkD «´†bü,­Á$Ô<1ç„Âl'i}EÁ¾^eÔ¶cg:§7ÜÁ™9ÉýÒ ç"[X^q0¥ÒqÌ ·‰pOei3(Ëã¡yxlžn¯7?½~¶ýá‹ÍÃó:öÞ¯¯#¼Ã݃¾ëÕ¥~ñµž—·ÛÄûëå]U‰t{L!I3a˜e“A&A ÇhØÑ÷›…l§ŽV‡xÜÐα7Eáç}FHc…dß„}ZÍ4’¯:Å=:´lç82!tq*HÞ›®Nåm”êýÎÕwq7±ÑÍ­Ñ¢þ`Ü6÷·Ÿ_^b3|³—w_.íðöÙð[ˆ®$<9XR ¡¼k ‹S˜ª5—× ^éú µ¥“!/RdàÎæ,œ¬KYøH‚ 5è·¾©lg&1´XË>#Ï.Nšâh/ÒM±€'ú׫$ÑêGK+*N†Áû`qyÅšÂe[[g+ÎV(ŽÈƒáÅy粊s!ɺ_Y¡Þ82ÀÒ'ª ¢˧ A½‡•ïÅò±éo?E>–é0Œ»”ঌçÀ6$ùL˪Ù÷‚F¥ù¬!äÏ?°IÖݽ`šP|êKGþ°´9bäTn°Ø«ºäÊó~µÖ_.žÖq¨êæâö^?½XëvÝ<­ÿ¸¸®CßCt}Y²S¨°­qˆç­9ù53Ù¸_X|Éd¯ˆLÒdG²jÂ*¨om¬ðâ6뉺Ø,ÆQô®V{ÇïV—›Õ&ÓD6þQS7!‚}R™rê'.X`®å¬'²3¨ïä5×u]‘9db“…å‘D™#@¤ˆXØÙ×Áaˆ‘þŽ©|1ì‚“W‰›õÅæöó×ʘB_[<‚ñ)Ä'/†%2ˆÎ„+ÈË«Ý!©;‹.1K‹Y³d—´Ê‘lY¥DÆà¥óL†ˆ"ðGXrß-ÏÜáTΩ÷µAštÓm½‹ 3Ëtš™\TwìøÊ[\̳—%'È®G’haqñ¥5•t~i‹#çÛ[›Áy~åVû‘0¶^˜Ö+A¶¸«%NJ!]À` .‚²õ"µf·‡„‚XÙaÞDéåà|Z?ܸýzq¿ºXÿ¡ÿw³úýÓ¾Ááí‹<˜t󸛼Çò†Í‡wr†ß‚õx<üUO²(z­!D¸Љ|°³F–Ϻ˜S «{hÎÈ”*t d›3BÖ˜¯»ÿòênõºýÿõááñ¨;á?Uù«‰›ù#8Œ˜Ñãß®nöŸÁñ‘‹npòÖ[÷rôqóN6"ìWóOñmwf·^]¯n¿­nŽ&ÈŽÍùÝ3vkÛ=æv£îâz–ýcµþðôåòë‡7‰ Ûåx Ñ8C×ìj,;» ,›:…Éã ¯fºVŸt‰ïãÃûËõo«§Ç;UÌOëÕÍ—Ëw]„Mx¼l +VÎ×´wEMxTÚ˜$Mê{Ð"É‚©<ÈÛˆ¯–£âä w ˜"ìïv}ÖèÉú¤ÑïÅÕâŒ×ÓIœç@ï½ÂèK&œòF‹V÷-ªNyU#’ÐvtØ™™w'mÂ|c®vN©«§™2¦Â[Ö`N¥K€UCçòÉ7Y‚—ÿóNÁ¥"ÑÊJ&8Åj2âXxlóãg\?Ü?^®WG<‰†ÿ²´Q ÷†`§̪/Âì¹Ù¡ûýþõؤí‘IÛ“ŽA›IYõNwÖE^=}ü·ÿç%ÑüåóÓ}BlTuÿô±ÿcÏt…þ“ ^|uw_âÔÒýÃÍÞœhð÷á—›çë¸ý>þ¼{³_oVwú¿nO‘?÷y“_ôßOjÂÏqݯ_ÜûÿÒqÏ{×%ÐÅ„5ª]¢|¼ùI­è:5¢õ:çõÓýêi}{]¢dƒÃž'ˤ‘T§Þ?¦‚¯uÊEÕ¬(w‚}‚ªa¹sɧÌGbiQÞ‚Ùí¸MyM¸…-’ëÐuD8`Ä•ê ­¢¾Õí£²®UÆýåþuÛíþp¼ï¾}½Ø?áõ¯o?ÞuÞ\lg5/&™.;êjºSðσ±Ö¹Z<ËÞ’m×X¡ûLŒå<Ô4 rYKߥX¶>’c“Î Œ3sÞâ¶ CcE ÖwîÒß<Ü­ÞOKî>|¼{Ö-¼És 
æž`~œbSÀ)ìI‚ÀÞ[ úþ¬tÏ“ðfE;§ø·˜wB•k:å³V I©4_ +†BK[9ì&x¥†`biïá@Ûñ숌‚üþ÷O­8ÐÚX1O){v¨²éÇvBx­Žå¸YDÿƒùj1rÈà’=Æ#Iµ(ë[ë·Ë,c‡œ#`©p߆ǯ—÷º'.c–—íe¯ §½1}MÓOép´AŒ¦|µñt˜ÚµFqD!%›oi¤ v”íÊÌÉl¶Ó6«Hët¥ße‡ø§ïž3­¨ÃÇ ÆvÎÙZÎcäÐ(»G$9³=c úUˆwÔN7ÞÂ'ؽô&öÚ¾÷aª´Ç 7wÛ!Ì]ãÝè‡êX+BèÔ€…iý¸ÖЍ kÁUr²ien[U3˜¶cfGìsæöDî› št›!üt0qù|sûtñø 9ÃmeaÀꢺº\Â)ÄÚÖ£ âZÐYÖ¯•ÿÝî•°­šæj¹L–8ë~ R€«cA5á_ƒ&"ò¾¸¤û¥ÔšS @~éA’-?ê´9Nu.•Öx~ŽóēйÀNvô… 8e( Ézß`Èä½(›ÅFº/ܶ#&‘±’½Ù„Ýќכ$Z„FúÖЄ*Û<¡ê*Ûd–Þ¡‘Š™mºd²Öí‚ØSY(9pM»ÛÉ–ÄH6T¤¡eŽ¢£јö z*iX‹¾/yLE[G‚«hÔâÁ_¯î«ƒRž®•|x„fJ¿+F2]°ûA ²™;Ž›H,cAªê ¸lMMª&8Zú}iÏÂÞ öœŽLÆ!1b¤Ködƒ_&QMõÎ%€N„C “BGrRÑ Q´w³í¤ÃÝ5šÈõ¾ˆ©À“y!‚ÉöÛ•ûïûJ¸“ÓEš(pÒê5& 9«fä{âÀ¥¥×Ì3ÇÝL‰c†ÈÒ“uÌÄCIª…gÖ—FÖè¾ê¶!Ž_À,ƒ=È–:8f‰Ûo]1£¸6ÝÍÅPTXE:AQv'µLbÀÏ;±ø=°|dbÇï~ÿ âpqwûiuýÇõÝê­bûpw¿ÿãIÀÚB¦«>¨C‹—CÈú>Ø‹oµÉ:Aùé‚.8iRãzòšÝ[ß sñM<ÍjµQÛ$E Óî íì9GéN÷‘,špР qÝA×Âä Ãå †á­ûù»ÑGo.ïïV›=»ÅnR¤rÜýŒÏã ³L‘XË8i-¢ªš3FŸY;û ê!*°Ï˜åçí3¤êµ{5±Î0€ZVõÍS$˜šcdÈv±NCXš$:q‰¼§rµ¦š.šm8mšYéçM“Ì”ád‹Þú¾‰b¢ÈZZ&ˆpp%–É!ÛM&ø4:×›|ÚÙfà*Ûô–õf¥ˆä¤‹iŠ7„áÇèýÚž_?_\¯ÖOU¦éw™,üÓ„)×™ô› íÖü5X;ÔÁX´FJ s7%sÖ0wÀV‡Gæ^:ìR,‰óMí²$·±2Åk‹í–ZQ«U&âÍê|±ü '—P Hç||emÒ‹ïåÕ„ØÌd ,íÅûùF1C Ç ^tá”mË6M+}EMƒºb¨íËnRŒê©k–Ðó ª2ä¹k#áÝêró®zrˆ4¶—ëÉ©«õ‘î¸wSð}`£6[ËÇÓD|ÍÎó¸cb:Ypžƒ§ 'ŸœDK«Å®¯a8ëú ˜-zî@Q ëŽÂV¥¿þ?ò®¥7’#9ÿBŸX“¯x :íÍ0 †/¶±àœJ|‰äŒ¤ý_þþeŽèn²‹ìêά̬Ú¾¬VÍVue¼#2â‹3Mïî¾´Vê‘U_‘Fårš8Ñ‚¬}tì§ÆI-À‚•ë”|Èk±ðäôþˆf]´8ÚjÂck1F·Äf-Õb„çv£0Œ:‚==û¢ìù²úáÚÉT"\Rm±*m& èÓüÕmt릧hÝ]#”@±P^Qq:}Q©ËFÕSUûx첦%¼-k®ÙKüazúpwûËݧê‰ñ•æz ®B/‘ØpÑõÔË:uÓC“’ :X‰š²z˜$Mèáˆ*=ôPß$ἫŽU‘Zô„¨/3 àú1QïÙ·§¯£é-ý3Û"Âþî,ÍójÈ®j‹CpÝ¢Ú]:õÓB²6@)H>Q\Â,†)oz˜w†Ù¶D领0Z£*—«¡X'¦5T{µ€;ô2@Œ¸ìðéþË~/åÆÕùê×>¬ÅíåÏóúuhd–üyTª™òg6í¨[Äa’uÓN òÀ“þ–/†lw„@‚Éî¥-zh§m$§Ä³îzlÒ߀ó/Ö‚Å 8É8x^–^[fÖýõÙíå0½öòþîB¿þûÝïúå[}‰«ïWOž½<ÿu,š›ïœ^\}¹½{T}üðüÙê˧wßÎ/­YÀ–T¼þÛ‘øt#ù§5“®‘ÜÞžý<—³F@ ¦ CRTÛ b?Ú;æK¿0 Q D\’£Ë%Åb[k§Æ¶\è’§aWø1üýø7ÖÚ§ ÷xòùáîæäÓÝõÓÉŧ7–á¸èiÚié ]dRNšƒ¡Ò4LŠÎ¹ƒ«“ãt+Ðèhðµ^…–ÅÂÊR7³b\JÞcã ¹Šj·fº ãS3ê0¢ë1mGH”ç§Lø°:xœÜð6:Y Û@† Œv2‹mhÛ©‰m5—AVøàšù&¥|KÕáxtoBÉçù†“Qßèd%|3ãïœ@˜Ã74ɨŸÑ[?¢¦Jí5Î@¯ŸÇŠÕ‹/«¿Æµ°5`×ÊöˇÓë»ó_­ÉkOX•ŽnB*È¢jÀ©`„œP¨=O“ðÐ[ºõ€‡¶·„’æH éï K‹ÔPn±¼¦C»o¾ßžn¿úáõ¿>‹Ëé*¾ì')2Ø<&•˜””“ ù¦íÇ–V]$ÅúIŸ8KRL! 
É/P݆Aa¼xýo{L?µ >MɸÕZá!yˆ“ ÆTè$ ü,Cšö'M2€¡¦°Ç€s0·Kt:‘Üd|÷þþ/ËOŠŒ†t¬ªc–xøftMT?¹«nDµ2c¯MæÉ {lµìÞl•,\ †ýëÙ¹q@¨Ý&uÌC²l @ê@²Q ‰Lî Þ½‡ÈÙ; ÎÈ?zW>¿b›?µií5©A¢ ±[_ë˜ÛÆlab¹-¥•QŠì@ÊÚkõŽ“0 #ÊôðÃúÚ‘qVè—y7>Z ßôµ|”ÈÇf[¨â[ ä ךùÆ3øæ#XQ=‘ì¶¡ž^öþr²"¶©ÐŒ=á,¾)×Ä…&¾AÕ6\Ã(G×¼T5/UMI3PM@ "ÛÑ-Y¶%™^†ûr²®¥0è×ÍÒ6òV‹£®‘«é±Ššãhv˱™mÅ·d ƒõõŠäµ-jØŸe¹8Uæ¬ðRZ%)š ˜Á6 ¬ ¯í}Z=âí6†.½OUæ6¸°Ëõ>=Þ]_¾.8m>¼¿þ¦òã‡ÿøiêÖ<Á•õ0¹MƒÿOÕ¬*Ð0àªÅ‘¢æqæ³ä=\ÌÒ¶¸˜v €&c!¨&sAŒ*1ëLw ¿¯Gï‘i† gŰšò¢04é9¹œ E‰&ôÿ?¦ÖW„Ôø)µ¥DDUc뎄Óì¥c?nl}JmMd8äÕ6øMÃÔAµ¥é&•±ºlÁ‰CLˆiVTÅèAå¿ET¸Êæ‹Ó|˦֨ÊXX¹ÊV`Ly¶&àlPÅ@8 ý|°¢Â½ áØÑ,®‰þJýŠ«õ#j&€4 DBœ»âJ…zµnþöËp®Ò|÷¨ÿ¸y® nþx ·dóÕǦ½T]åßd¥ C^Õ™§We¿­Ë%¾uò³n{ôëÑ:ý›„F6­s…&xd”åA<Ÿñµû ©UµÇ’TL#½œ„HôÓ×}[u‘+ÄÏ+h ƨáj“„T¼*ådÕs#ôÕã÷û£öSb!8°FÜT !hÜž )±¦‡Tˆ 9¸(3ú“Sd› Ž®%²IK,¶LC DÞöòÅåç³o×O]°—W Ð Q|‹áŽLU -Z¾ 5Û@j‰Ö-†·½s®` ˆÅð°%8  JÁ鎯‰ºñi iÁœÜ;J°}èÒ ¡ÈáÓ¿~H[…㯞]øáìâæjH®?Pù»RþœžŸÍÇÇöª§ÆUÈM× ÆÀš Žè b¸ÇºÙ,Áúi&ØÜ† –d×ê²rš)aúpK.ƒ¿~ˆ« m–b&»›j)Š™f/€&EqHŽÅ„!½LìLšL4ŠÏ«ƒyÚ¯™’l V“fŠ«ÁØN¶²ÂÍî„­$Y7ådk[Œä¸¨»1å•“eúÆpK Ú©ò¬^ÆÓ·™œc=ChÒN \M%<èÁ±—@oÄï9({|Y(¸þò¼Yy&ùi/é}´ñœÍÔGx¬);²ú M«:ìyÎP«›N‚غÚäRÁ-’¼¹Þ ¥œãÉÂå i:íù‘˜f5’'[ö&Ñ5E²ôÚ¨JZ‡D<î*{ñm‚ôêßN ù>Áã)…p3O5Yã¢ýª‰û&ÕŒ5µe×TÆ}Ķm"s¨ÖMEÑ 6” šÍI˜²ÓTyD¡.ŠÏ€9^½3c“†ú"Z€ƒ„£§šÛæåÕjôÓ«›û»‡Ù)¦æ i¿N¢·ë¦&„Š8V•Äö›Ì^ƒ7Ný´PlÊBAðª2(!«†ÀS™åˆ*=ÔP%WÕ0¸YŽBò)´9ÊàX¹…<ˆÓÌ€ÞYQvßæÕ"ïWÒ,KòJê«}ÀÕ¢)–/ÓîûC¿4 )@,H@³¸.FÑ4 1"Y¸&8ID<¶ƒ[ Ø%»óZ6Ø}gðcsWòyeÈ¢¦À× 1¡•£úßW»ìoÊ ˆD…KæuÏgÏÞÉ ìˆò] Z4»v¥#Ž´„Aa³U‰-hYÑ´hsݰ£ËëËUÚ õèÅÞß]_ÿY‹¦¾o»u/#¸j7 ¬ºö*УŽ@ß~Šïkgá¢p‰Ù¤\c™^&øLÎ.ŠÏCHÂ)Wñc„%²C"&8zV¾O:ÖËsjJÚš3¢ê¡¦éZl°ÉK=²ôBºõ¬œE€çê™â¶äsvÓ$çˆH]– Ù®#WH¯~£ ŸäLåˆ è}J$±áVSc\$)‘±y©l® èôB]ÄåÃùç/§Äç¿üÞ­mÏ€tœ€äÛö¦Í§ýÒf„œî ‘jG®nOo.oîþÜÀ|<©0îâQ2ÆfDú.(Ž šä"aͲBÛ ¦á/„%cÝ×;’Œ¯Èß?w“ rƒá¼æå"9L•‹4m…F”ª‘ Ãb‰rt¹Øìfœ Œ¢d ¹=úVoŠQ뉼w¿ß^ß]<ž"$BæôINå÷óïÄÃ'ütyÂ¥»tŸå X¸y#øà#Œ¼\NÙ‹-¥ª¬…úÆ`-Äs¤B †–®5»’ˆÓä9hb4w‚d}¹¾3¬ùR·Ø|áÑ,Ê¡Ï9Ïqm²£Ú*PälÈon­ ÁTøˆ ÿ°zÉ“ç›!Búª~5S.AÖGµHM$5 <“óNØËrþfßFþç·Àò[¿ÈÄî/Q¨ÄÐ(«|VX„'6F´«²50PDfº1@ Ôp}‰A=P ’­&bÍ»Í7É#œ[·ŽHAÙAµ^rìä4 O;:ZŬ·Çc3N°bòÀÔ¢ÁåV!JÜE”8É*%8)`•ufY%09ý·=L#pýR†,7N-Çh LÎü÷•g›=D l`œf–)}`H¾Î\û޹4­ì^ÕS™å(¸É 
ã3%zA‘KˆP<:ʃ•]4ˆÕµ-{‡šZuÒ°G\¬³Pd4—X­í(€Û1çƒl\ŸuÊbÎ’71âΚÇf`üˆ6¬Ð(ÇXŒÚK ÐWD|dËpØQ3Ìo …n]„4®Î#g%»§ËŠÄf³ÔÛØ}{°'®¿3pH¯ÊŽãgœßÝÜŸ=\î2Ûñ]_f¾SXZ^·³|÷ç®n5²>¿¼·+fc·Ð »Ÿï”ÞO/t?yºº¹<@|·’(ûÒ‰Z};”²/©¥´|ШöÉñï?úÿüh`b<*ûŸ<ýyoý°½ó¸ÿ4üã¿}ø×?n·×'ÏVrû‰Ý üô1"Ëèa›¯¹ÑGëï%?~–aõª–}üËúîûã?ýó_O:6ìßÜ]¼YÕsòóÉã·sµÙ¼Îßî¿=}üKÇ_ý~výíòoëÆFôpòóÏ'ŸUQ¿Ù9Ÿså‚»þêÏ'?ÿ´Wœmsµ -â,ø§Ç0 šx'ïtE6ž‘¿üC…Ð$c½Ùÿ9³=.ïç+[¼‡pÕž+õ˜ÎIŒ®þ‚lÕêâËÝÈÄ„C§T_‚·âœÂéõt[ u2íµÁIb,2½s†D„±^Cív8Fçõ´ø+‡ŒƒÁ$˜r8jWv!ëpþe#žo\Ž{ër(í:¿ãp 0sÒçô·{™Šœ|jíàk”^£o©y¹]*Å—‰CÔÿñù<>¬¬vð89;9:YAÐâ¶{'ž¥Œü1§&e$Z\•ŒëØûµ2*#PÚTÞ_”ÑK<ê»UV¯)õ Y½ßÂÀ¶!±Wá` A7/lûõQXÈʸÙaaÛ¯ï½ÕÄUÕ]‹¼{‡ýgË`}‘ð<…ÒÔ†v¹YQÏêÜ>´8 ëÆ¨é^ðÎðô(öŸžØC´^á¡ ‡*èZ@、œÃQúO^5Œ(Ô!:\‰´æ¡å‡«w 7E‡ÞóÒå£sÚ-G$õÁ.ZEÎ!v ¡$LDŽÅAb›¥ØË\›:fic®ôŸ*c)Ú:´c#1¨µ{8{|zøvþ¤Þññ¹ø1wZ-Ô“¼ÀÞF¨é Ò|@S(ìÀ°‡Hݲoå>QÔ„/ÏCL‡{S×6ltK‘¹·½MÀT¾8h- !h¢Ð¤€éMùá¾Rv£}ý‹$ÕÃ4YëÕ„{ÚTtE©wr03ó.6{™È*Dm¿D«Î¢`8jûª„q{q§‚øx}u~Y;ÊCë)_`LQ*Œ©Ý~°U :§î£V·¨ÕäÁÂV) [7ÓvÍ*ò¾úˆ6=¢V}k\=̪iöPIò¼|ÔêH&£VÔ8ýÈA+J‰¥(ÕAk©mØÇÕàÂ!\¹®Ês–Æ0MÔHè]XÚ›«‡‡ä{µÉ%Wσ¼ÉÕO+¦0$gë=i)“;E¶ž¶weËl/fCZ•§©6Ó‘:Ù^ÆdVÅ ƒ–÷6yÛ›Ò¤íyÞ^ü6¦MäS×:™\‰ýMî>#±—«‘jâjˆKD¹qp ¬”wV¨ýôíêúb¦á=T¨Í2 ÀðJUÛJxœ_¾P»!Z?³«Â!©ŒÜŽHy«¦¯ñGêbvà ¨ÞbVÈÛEAE–6»êiªP‡èèHv÷=ÔgGbO}Ä` ¡ <õ(ÔÝè¢w¶ÄQàÝà Œ&ˆ•ÖW_nmd~& Kx€ƒw æ×Ç(ÀHA­-2°‡x½¬°I‹¦³Œ± ?#Ιaƒk˜jꑪƒFçÍÁ%ÅV8/&‹‰ï߈a=Êöº{W‰<¦£4SqQ3Ut(5½T-&cw“u²ƒ4qW.`ãH£þ(¬5Xå˜b=± ,îówfÀ›Ø%Ï«U‚=áW[\¦¬ªaïÛüyUu˜µªB“mo#jô0«~’Ï2«PÎÜdV“s¼¸Yuz¼ ³ªœb ìtÕ(wr¡Ì¾"b›}2{cýºâÕ#<ö_WŒlVQÁÝ’|³uóõ¿®oÎ\›ä|=òf5_ƒcÄÅ öš-Í“­›± †ÕF̘*D˜kùRºéåI[õ0¶&ÕÀÀ2ËØ†„,ؤ£oK} L )ãîåÙŠSlÕ˜}•„¾XÔ•€!Í-%ÔZ‰½\M QBW£øE,¯8B·èõºÛC?Þü‡›ºÍùÃÝí/wŸ‡ï~øtötþuì¨ÆPõ (0¼)Vij)Æh8w «šf=­®('–XÝD”µº)LÕoGêdtÖ3°3Œ®†ÙàÛÔ3!,ot7?²ctElx%Ó²‚ÇÇu¶ÅÖ·ÉPìc.xÏ$¾‰¹ª Ø^b2D±¥¢_¶ê½/œh$8À´ä|‹½ïk̵‹)ú+p¢Gæ~6ž$z¾ùÉÙxŸ-cÀ4Nüˆª]l<! `šcã!øàbKWšJàò6>B˜°ñÙ4÷Òe£cäµ7F€` »(ë #Æ4uQÖ?âùUGîýWõ g×§ß﮿ݼ C·€Ç¿|þíë× ŽHó0$Úß`Œ#¡.i6ŽDûìÇ’èc³b͆ÞP¶lâ©Íd+5Y. 
Üvn›ž6…Þ"¯Íd©%v•¤”Ûö.È&N[ ÅØÛÅ6ºÏ ª”Mb=ªIÃBÉ gPh ô¶ˆ5‚Á•Og,==ÏÁéÂâ6λªØÙÀðk:€i¢D‚è0ÎXß!$X§‘?êàðÒ/×8Èžª{Ê0žªÔÄ õveÛZ™·óë#œ(ñë åa M,rYÇžv먠n@<}*ׄ-d|,žÏnÔ4Å„ aS`ÍU/ Ôšª·Ž^¬¨5CóÒTk%0M²@8©pŒbØÀÂõÔ`Ëê©ûfù™åÔ)£«¶Eº“8˜à]¢ñÖ °µmÊ­¥³_î35€îu¼«¹n×O?õo' ÎINPŸ.ЊGO±ãn­ tµîÖຠAûÜÚ!Ò½0.À ±½Ñ ¸IÖGÁ×ù§dùo=t¯^sð§‘?däÓ,ƒu$¸’ Gž™³NÇÈîÄé8 £E !>4 ;XÚ'Á%(ìÛ±ÎúB|¤íã,¿ƒiÚãUÔd }-¿CôÕd‡1 0ƒ#LJéÉ~I€è¦ýx\¿KÛŸØIrRÐþ®ñðχ׿6÷w›?Kù4v?¼Úçé#©óêõyõñ{{ßcuPçêȉTÜ(Ævn2SV*hļ”úTÁ'ÖL³ôC<åBÖ,â4A _VLr•Ð"ùÚèoã¥MA÷E&µ¨3¾±I¡4“…Ϧ©f)J5£Õ¯ŸèzžîáàÃ`˜-Øå‹“»í«…ÍíëË_7ÿXGõ†L]ㄹnñÁPP 7©6^•V;ß_Ct…l‰ïO²@Θ"€‰¦‰ó¿c:\È™ðý‰¤DúƱ˜î ×à­é’:_‚äAMËÌ¢azjÓA{B0‡n±Ö_?0öï+ª;Úþ›8(¡"~Øì>lóÝËJ/Õjô}œ7}Ï:vMû»"®c;¥Öƒ‹c‚®rÛ°™; :â< ›Èižw§ƒhg½·Gqµp¨õ±=v~ivÒ{6Gå|hïÿèQGM©Km.à°ç¦.µ—¢æ )ϼ´CŽ®ú=á iåù¢ó[«ïþ­ò—}{T»Ùh°ðü¤JÛ·úË¿?ìI¨®Å.C>ö©ó™ÙUúÌ•2… -3Yoê{®”sKoƒqEÎ6ÂåYü Z´ y€gQG¥H›Žhä™#€¯3 ž» 2±¿kÁ']sÒÏOó…¢3ú È&öRÓ¹½0ju< Ö°íb<Ä;‚Þ½}[µÏ'Ü¥û/þx|SÌñüSf5²¿YUåM†5~ ùJp–x¬OÍg¥{…µD´sRãó`±Ä:ç¶!«t%•U‰¯à †°4Þ[{²£Þ%ñ^Ã{ÉPm° C÷ý³R—ƒ›‹ƒ›wC¸ƒE2}Û`ÒõŒ÷?\]:S³ê•`º @ü„T¹…È jó6“EÖ΅ד!´ãƒÎ¢4˜à³( ’L˜Ô¥Ã ~¨ UÝ2-n'šî¥O³¤ºe ÖÔàõnj» E¨(eãô,œè¨Ü8ÂÕgŠFó§ ¥îDE.oßn‚ȧ›NJ©(½ñqj§ù÷ÅÙëœåãNlW4$“ík´¸‡ós¢ËwÁ5š‘1.° Kc5wñ¤œw ³Zr(n#6:®nëéB£-RTz+ˆˆãé½™BQ)YLÅÚò.èÝi :Ç‘ ¦Å´b/~ÙÝýÎPÁzÀç,ÈAríQb§·ËÊâ +ýbEiJ:ÄâÈ.LZä«vgŽ´ÕƒZØ”9K‘®ÇZV±±EUýêžÖòGœºßFÿóùñíénó¸~xºÒ•;"?~éç݈N~ºF `ö´Ÿ§ f kiŸÚI±ÇÓÁ˜,æXðÍbñý‹G"kÅúØì¬suXÜâ sw,V9;9Çâ¨){:pú³fd–Ÿ*Öç/gõm‹!—”Ìœñó”¸;ã ;ÆôIéŽM憣ÄbKJjV<ÿ!ü/]Ÿ=Ñ%KÇ&Ù9‡¡×„Õ,Í.<ñ y6,³\T ’€á‘`Z¤Ùõ{O¡n±o‹û(Ý£Ž(æDçJT”Õ(œÓAÇÚÍMƒê%Ä™‰Ÿ tÔ%žò7ÁVGC× èËÝLJÍz{÷aëdà´ñÂ×W²ÙÈׯ·\W'}ÕâCµ¾ïXb²kMÿú²¾Ùjüþþåß·ëÕF?#nª^]»uÚ!ÓÓ*¢DÈ ¶4~2VOz5>èͬ¦'|žÁѸÈ›³šhÒÕé‘àZ˜M‡CP7ÝÂÂf-v7›qŸíy²n§)U”P.x¶Y»¢Fÿ˜ð-Ÿþê€ú=u‚&tÃŒFú}—Ô¬c2èñNek¢ÿ¨©$=£®H aË®€BíN›±È®4â×¼±[= ä¥dì]¶^‚àÓ<6G‰4» ƒ×ÿ3¼4«»Ûì15v«šrN‚ÏÔ®¡-¥3E1Lwú\èªÚ„èo‚^€Lr½9$ÞŽvíiýòçÝëG½ƒ7›»—ׇ¯úéïkF*ùϱk®§ðSýdð¾:¥?Yd-SFwKIʨ R­ÿ¤2÷#5I±"Ç`^ýSF‡låYÊH£õ³œQ‚sòtÑ[î–srNt‡í'+€J ÃæŸ4Ë/´ Šã"pt%A±£|Pl &=²£øšÅ!ƨxqŒú{d*gÉ X‚üÔÔe`@×xC5EŧQ¢4M¹Uiþ:Í»P’â¼ð,:4§G¯Ò½ 8·,Ô¿ø±ØºSÍöõåmóú¦ŠŸ4iÑ_‹¼ Ÿ*X i+j„Ä?©úW$¦f`­úg‹ò»%Ff&»¬ 2‰4?eÒ¬ã±E²¾Š=0{" ®¬@ûbš¸(Ò'ˆ±žŸžÞ¾?¼þÕ#Æj¢¢ÞÖ’ŒM,Ù]gùÂÌ ú_¶ Õ•×bî 
¬<©|$ÁëM€>¡òùÉo…²ñ‚ƒe,@YŠ l>²Dé:Ñ»„ ì—ˆ]e=÷nrS9“?•wšqY—ø&VNã}Oå†!q²W•~–½ÇÜfû2þ~e‡›º9¦+‡0‹eÇlÏ-ŒW$Ø.‡©ÇÆ[qE9Lg²mo|ª€?W“¦>µQâ…qY£ÚÞÔà$r˜ Ù1«äé{óEÏ„ІØÑ ÈLAusäk³C ml™6÷zøW{aÅðñë­n¿šÁ£Ísy²CÀ¬[FƧ2•#ñ4¸ý»§6´xèKödŸB+»-Öø_²«îÒ Í _¯‹„Ùp׫zÊÜS¶ÍN½$/\ÝØÑJ„--·¬/±ÜÈù»kSKvGâjd¸õ“\ÝÞ&wW(>ØŽÏ ·×ˆÖ¦¹ÖŶµÛE³BVŸgþš» ØÑSÃ:LØÓ7ªº¨î¹í7÷w·oúR7Ïúv÷ÏÛ×ÕËÝæYßì¯Õ~H  —…|WXvÄS–`#T/Á˜/¼v¥œ1®¤”`òûäÈíYAÏ`EÕ¤” g|WY“ôf¿5‘Ô7•ÉVMykÉ_å¦ä1¾¨Ú+b¤¶j££nAÛ±ÀY›X2eq& ÅL—%™¦]–P1[ô®|L÷éáûŽ«~ó²é¬/¶B²ù l—4žãu0¯¯+ÅøF%¹ÕK1òK*m¦ëi3Ì”ÎÉxˆ˜¹“Å"kf)õd0 ”ô¬;—]ØÇ6¤†mGjÒ².ú8Nÿ™m(ëÚdÔÖ㛢8a(£¢Ä©¡K¯GÖ{r§‡ ³ZV2’ ¼…%ÿ©“PÕPå4‡g±<Øøt]·Üß­_ïmëÈŠ²i1Láñ‘Z6ÔBíþõ¯âfÅvÊÔq-Ø®aô‹í,eô!YF?¾nàÜ=¶ ÞUµœ·¸7ú+z'› n¯ h=Ò…}JmW6Þªq¼¶=Õ#–©W· ùÌñù5›Z‰ó·ü…o½«œc¿BïÒý”éHBõ’è9bkå|Æ D €éEƒEÊ&ÎY\rf}$¤ Ï59kdiõ®;ˆÚÄÔN|á¸èÕ_ÚÝ–eKÊÕ©‚èe.H¤”*곫VG&Ïëè¯Ðƒi§,¼1Qv£z‡5K£zÿÉš•º=DÊ€|£KÐ[vÝuÚIÐ$CΑˆþ¾7&z4¶¿}}y~úíáûêIOñË_ú_·wÿþíU1ᤴ­O'‚¨8Ûê< Oiô§¸ˆY5=_¾­ß5T?hœµEê7”W?R²Í$‘IúßÙ&(÷”[éŸÜ„¢ {bAê‡_ÞoÛžôšûÊÁÕS@œ2ýïr™xÀƱâ¥OêIW¼£À6ØúFÈÌŠÕ«+Z“œù×_¶þ¸Y‹2w7Ýo¡ä(’ÉEv©þ±~¦ s En4?yÅÄîW m_Ρ|ÿ_€[ªª,p¨ªÂ„éÖQ0.htò¿µ7ž1'Ö\lÏ,6w±U¼>y±Gò›p±wWAï5øª‹H<ºYÛI{ZC`;€!²ø+˜|c§î¾ÉéöæéîEÿµz}‰Sd·§ÛjGÌߺê"´OÀ!šA],ü5SïÖùèDÞÞ}]¿=¾V¦ÝütÁ@¦œ™â¨9¶X•kn+|wUÿËܵ-·q#íWqå&7Ñht£ÑÎlÕ^lí^o¥dвY–D­vò?ýß (qDŒJUbÊúðõ}…¾üFàç°ä]|ô’l«õ5­qåÍ$ÃVÓXéd}h×zZ‘š5…*P Æ-T`)Íû÷{Ê%Åà@™ 1ˆï¨±¾šÙ¥Í$»ÞY×kß®|Ù±ÅA\¶Ô•.û&Urï[‰Slµl ;‡­ÖpûB+gml`ú1­Ÿ‡øûõÏÎ/6g·Û«Íj³ž8EL¨çCfõù0‹J[ ¦YÏg!éš!® J\4Îù½ïAÕ³¶×¦ë®tª\׊‘”oµl¥›à{׬FªâÛÍ2úÕƒ‚±7Ý÷jî„¢RUu4wN@‡“¼%ÁÂ,}7¶¦)!ŽX¶,S{ö3D^Íäz3ƒëÙò|<¢Ës¡v»&O?êáòJ/„Hu´DRMž#bÕ$™ki’Ò3 ‘0KéýÑDö6oÿýÒCÅö’÷2Êâ¨qxÚöp6|Z'Yb ³tr¿!gb¨Ȩq‘É;*ÈÕÎ…B‘Ý•µ»lRvwL›*Ë ƒš'ï'½úµ³”°ûJ7%«H^õUàÝX'N¾h;’»¨Yı3³f[ ÁI®:òŠ_s¸ ® °ªx[¡Ÿ&»øÞ'¶Ë‹­gGnÁA¨éËŒ[&È9î™Yzï‡í@Ø Ôjù2ö9—\,=¦X%‡à'ì¦yzöŒ3Õu›9fã3»–M8Iá^‹}2N9ä8Ék@˜fñš€:Ü®kÁ ÿ`pNRö~ÚÕ¿só–e@S¨ãXý‹Áb{(>A²f—>̃Aµ¤äRœÞß®ùD?N5®Tu%®ßŒÈ¤´…6zÓÞU‘ACrÛw°õ‹k9¬¶wëmüÏõïÛ«ë'³r¨µ¶«îùª¡ÖA}ÔPgj­k!‰š©šòÝÙ`8?j5ÄåX’U5o“í #zÔ(›¾¦‰q㔋5´Æ œL ªÙÖcLø¯æüæþ|µ£÷+Píù3jÏÇËó«ûõ)–ù?>Ü<^«„ü¹½|Ñ€Èá7‚4h¨c‚-ÉÛyÌù«·£¤üÕÑÑ~½½Û®Ö÷÷Oý){Ý?®¤­ñ0%Ií‚Ȭ$A…CÔ0©³‹gêžJÎ JOþj¬ÜþIêî×jOޯ噕ײ°]3cIq‹ MÆØw ü~1àiêΫ ñ¦ ’îëù, +AS«¡Æûu÷’ž_¬‰½ÝuÆ,­Î¶ƒ:›A‚T}ö¦¡®ö »ªfý 
¸;£ÿöëõh\]C\²…)°…`³(&rŒ(SãE5„úzˆuiäà )ÐGO2.'qšŠ <Û …^8E' »éd9/ˆƒ±™21Ðð.Y'6:YßJM˜Â6¥:‹1³Ø†žk†•{AñäþûÒf|þéjýoeÕ?·ÛÛ7üûÏÃùÃúQd?ªûðLJˆ¾›õÅá3“àÄE™Ä®„S–31~O¦T7’¬ñ^™b5·¨M}£Aí$b=3‰+ºÊK&›Äh$ g|M±Že“goç³?®‚kÒ¤ apjé ô-B]Ý­Þ·²Û,ð´ìñL¦™CkO5àP=¯Þ8WƒÐ¤RG<½y ¡Z)âNÔCǂʧÀžszèÓéóQèa|iW Ëë!õn>ŽT~"áëæc=2ªŠʬ u64Q´­›Ò|<ºrÔKdU çzo=¦c¡Û¶ß|>[­ï&nß¶âú‚kÍ‚;0¬:¨<ƒÙèš!VS€E¸`9ŸöQÂ¥Ç9(Ób­ Ö8µ’ [Óõ*ΑÄÞ‡¹É1ÅÕ± €Éå=X러ëmŠ­£ƒåk£аÁ/m-» £ø´aÂÒ†ÑmuëgÆ@ïÉT0¶‹mÔ¨ÝK×Ö7U´Û­JßýÕFUæ¹gkô—¦!t5…Ç{“Ë ‰Acè©™ ,iZZ¾˜¡"ˇ![€MUçÑÈðy¹4„±2&!”ã~ÈÜ$mgšB( 6"[>ÞµHí{òÐaar&îs'üyšûÕ—õÅ£æ÷«­žêËöþáeòësŒö°ýº¾™6:Eݾх«¹øw>°ñ¦¬E榘tÍWŪŽ¸¤Æì‡Ë¼‹·.•Ê9©ÚFÙFš2N·¢"bW°}"1HbŠœÀx §fh·lÑð8fïç&p&AE_Þ†n«h¸‡ž‰ O›G§l Óu‘k:¢0¦rE˜{m˜±§5éÛÊW”´ûv¹ÖœH:/ÉúÉm9·q©àÂX+Øß±u”Ê (—âð0ZxEL([ã ì|zê:Õ¨:Åcж-˜~¹;h~QÛo‚A€|3WÜ)P¢èÉNåaèy|kå0\Z×÷YÌæ à»Æ?*`÷›û¨ƒß¶W×=Þ~[]Æ÷ëËÕY܇r†ŸˆÎäSøtnuéÝåZÖÂÓªŠL0]ß¡„+â(pp,?Æù¹>¿ûº~¸½RMù}µ½¾~¼Ù<üý‚x7671òü×4\<^ÔQV|*ˆ ¿;¹¬«Zj[hÐä."²Y€ÜKðqëBïä)ÎxÛî_¥ª&ºN鯜&LX3ˆEâ /“KÀª5°¥[ "êW¹Õ.YºŸïxlnGjâVëk;2SÆlÈG Ú“] aÒi tÒi léQ3•xÔqâ\±K=Ïàwe*/.YN†KI à²ácQŽÊo/øyËÇd­6Y[eNøA¹©SÅä'>Ÿ¼â€á—Ž6–MÕÔ,ã+bëšÙØédlfv£‘©$›åÅeëòÙ„dkÚˆhÌnp™6»lºC´+É¥ÇqÀ8QÊì"²mjv¥¨ú“…sŠô[É©~ŵñ Ðbûî>»÷<ì{ξ¿Tr qb.è°±’)›Üâ}ñÞwû@“Ù-}mÒ¢]Zï÷›´š[o4Î[û$WÖw›Ë~{äJ•t5Yg0!–ù†»á2Dki˜ÉKpTb˜™³Évé6©…Zæ(Òq¼§[8_çÒUtì€sA}‡}UŸ7W ¯#<ìÖËPÉœÌ{j.¤æ¶ŒHÒ°ãöñB°8`Kÿš:Ï>é§Å‚>‹øi?.‚NŒž*N5 ¬S.©FYjÑ7’\qv±]}U±ºü|öíówþÒNåi‹,7d)›e2É×# 5Qy‚w—ÖxÒÅE‹Eoˆ?¬øç8*z,Ò7Áâ©¢£Ç£è—t-‹‚&R±¥'qåc™çòqVºŠ`D³F^3°K{qÞ×LqŠ3‘ÐÉôÑ›û»ÇÛxìOŸ×vÞ^|zWJÌ ç8ÇÜ8Éo3PsÁå¥$97xL¨H_›€mXêj¼»8wþi ÕïNX-rø‰²pÁR›r² SÓ ‚u5ã£5Ð5qN·˜‚r­Gm_Y¦z#Ú‹/Qw Ùö( ¸}r ÆÔÝ£¢‡Ÿ­îÿšÛ×tÉíŠt½ìV€´¼§9X…“Åîí²´4(ÉÒÎÛwbpàToåcàÍ:†Šœ. 
6€Ÿ/¤¿|¼T¡ :«6€ÍTWC¥fý—ónñåð5z&ú«ß·w_–n4æØ|Û<ü½ú²^}ë™w`¬qøÍzHÖǶC*;ˆ+Ø‘Êd0.tN»ÈÚ©ll½§55©}„=yS!äöŠãF©Š«ï·øíETPr^NœgG"”^ëôL©&b"*&´+n]VLjV$ê©YƒòXûTõH8è ø‡–ÂŽ|‘t@vıúæ)36¢P#épâvCôJG>º‰û¬:r÷¸²ë?‹óè&ûÈkzýÇ3µtÓ.Áü¥£r QÍbpýH öÖ¶¸üÏ’¬]Ù­ŠyÏEë/÷í\辶R/¹úfDž&õ?V#t d×MêÑæËh;ëfÆk|×ëIçþïœ]lÎ?ßhø³YÝÿþüÙî/Ÿ=9´q€ìÙÃöìõÏTÄ>¯&O@Ž«éÊÙà{ç”Tƒ(‘SRTTc¦}á;yjwcXÖ¸8«Tû]Ëÿ¾MÔ]¸«òŠ©Ù— !© $ð¼s)ÙÒOC4h¨ÄOÃl©‰¤úgG4kå¤y°´ËQ­X ‘“>²ßí‡M˜¦>šͪ `My÷l{éÇiÅMËÛ j1½ð˜ËØtÔ;0tÄe%BENTC¬…6ךöe"PÁF7c¼Í®µQÒ¥bämÍ`$ò´ô L/Ø·­á¯ÔÓb°‹ a,K¶1L꣬……ž …“ qÓOß%~åô\ÝßÝo>ßL½èN¼°J”Gøuu'°?fÝ{œr6Ô\oi€b=sèÊê¶+¾Vùañ¡d¤LÉB²3É‘2#Ú4)¾†ÁÅ 7“öoœ„I:ìÿ Mzdïßä %èì‚çèüMªùšw¬«ºÐöêFÄõ&—›Ï/ã)Ge&ÿ ןšÅ¿6¨âX«ádI™‰q¹²Z%&õ}D™u&úÚj=mXZßí~ÓtÛ¯ ƒá`¹«ÍVUZ—&ûGŸM2Û|W­$®™Œ†V¦šáJ‚5[ú©Rͤ-ˆŽP0»þ5v¬'7^ÈÓbég”eþ—ÖLîàMCCœôcù'X‰½¹QØ}R± ÛÛw&©6 ?¸Û@}™àÄÒÏ5©ëÀˆ‰­£þÍH à1ÖÔ”ç‹AkP÷Ù\cµ۬2â½uUA0›CRF$û‚FTj²‹Ó*-Zè)r5uƒKŽœ^]o®ÇR·k<;»SQ½¸ûû÷X£ðú£³{Çôƒi5t"}•CMn}pÜÌÈõTl§ÀNý2JØe—*GŠ&ûeF$k¢Á.V ù¥5˜ vQaŒò¾§ _onvZ“ëX]À“…œäÜœ»M:ÞÏІ¦8ÄiÀ`çñÔ\ÀɬïL=+òx(jÚÁÀ8B¦0¯,¡Ôõn†}1¾ÐÏ`Ÿ‹ªŸñäRØw O èÓ·Fg¦\µPSÙŸ®s½yH–›Cp~Sùö Ú®!+aÀR¾ê±IÝa?µwx¼ö¨HíÕúØA:UíG†¨Ùʼn¡Áj|!PâÆçnN”"œjع‘ã¬ï–Ue§àá{ßf+™)±¿U1u|¢u¨é} m·RÇ—kó‘ÕS-¥ªãßëâ VìcŽßqfZ–¦ˆW”V°ÆbV’ã…F¤j¢ÏqØo€I»5ôÝÄ“à,}ÝõÙHtuŠÇçÅÙïìa@\ÞH«¿L®¶N°Ò™?Åf/–ØÍ™"åÈøî°íös£PÛA…Œ–©e(ÚI¦9oSYþ E°qüœQúW³™È‹0¹ã/³™ð@=9fΑ²o/¼Œâüt@0˜›« ´éˆë@¡Þñµm˜0˜ 1ìgT£ê#|¨™.hÈaðB³Îþþä—ÿÛý`ÎŒÿV<¬ Ç„ù€Ü¡ÜVðH?Ÿìí¨…xèkVàœŒôˆHP/úÄ øµ"€>ê†~ps¾Ú‘ÿ•Œè1ÿŒ‰°—çW÷ë4õþøpóx­ôçöò%cõ1¡ö Ñ‚+à+ƒák<8¤fŠŽNöëíÝvµ¾¿bçÞH±MßÊP`;mqÌâ­ÖG0UL‡ColˆeÈ3Ùfc'^Û0Nýã‚QZªiœëxnL]îVÂ5å>á‹K˶TÚkt¬"` ½«÷æ—æ™75ʦö;—ÃÙL³˜fÈI ×(wóOž,ó>¬m¬@>ÁÉõÁ#ÄÛ\SÑ2USs ø¸ÿûâ ºZÿ[9õÏíöö ûþ£!ðú7ë¿>ª§æã=âf}qø ¨¨4±ª]ùD©ãš›gª‡…Ü©>:ͯñm?lâ[)£VëÍ·õÅkNqP &G26¯±?Úþ)›{uh¿¸Ú~_ß}xørþÿÔ]ÛrcÇ­ý•”_Î÷ôhþYÒØÊèIãÄ’÷hšì+Ç•*ÇIÃÐXxüLJ@–íé?ýz·E§ w¥FÓ$Œ2ð}TzkÉ–3¸Ú|3VêND¹?¡B4%€PÄRÛ¶<¿æbu²ª«— .46Ù¯$JÁ ©-QÏ›1@H¢î ‡½n¬NClN%TlÿFuk%æ8;xÊ7«NVåvý’4Êl)'&µ5ôiDoê/:ô–¼õ+köþ?Ã6•[… j}~º{,³Nõ ’Y~“[wYá4Rð¥HLõ“í…\)`Êîýз‘ȅчܷ8CȢѿo«IqQ7™˜j* 1qpl_Æ»|¦ #nÇ­[¨Îç cÏÖº9Ëi~› Ÿ»2·Íœü¦!“ÊáõËë_ú¥‡_?0óë»Óúµ£ÍŒæá‡Œq±*ͳAüâÝ¥á]¶p¾’àüÐâÁ]?Ðs·ùä):ŠÍ›×ŽÎsc”Ý5ócÈÐxÃ9¨I%E|à³ÊJ6“UÑ/ŽŒ`°KNÿj¤á`‚Uˆ$.TD«è\…^cv6tu´ªhu· 
ÆùKëmŸ\7F«Q#qï ë «õæF;qÞÈ•ÃÀüààêd•j‹)²\ÜÜ¡khÔ ’\’¤âÖ:g¹e¶ßí±ª.+Ü2Ò‘ùÑwIMñÊluˆ¿¯õx]5@c\ˆg© ýçnlsV²8›B¨©ôÚÝ[Äåéꢘ„kahÁ;À8TÁW(uÕ’‚:IµòÃn>µ¸ù@Û¥±E7Ÿ\”¢^9_LZ­ÒÏ»¤¡É€Yœ-`QûûU¸¤†ÕF j3_ƒ5Q•Þ—ÅÒ-kŠœSÛád5Zó²B«Ù­)PºˆVÀk’Éëþ³¬7f{,µ'µŠGnÖd¸¨þSžpåCBSWܶS%]ܪ¡çAY$ß°Ys‹YoI­jÌ:1õ³\²«“Ušµ†f,|qµù.þ$޼ŸkÖU÷³˜3ëykzVöS¦'Õ~ä!n%™IæLÎ¥/‹®dLÃRýÉØ¾¬zŸtý‹ý¯ïïnžþýxÿteuù›'ÛCºùøÊL€h[Óf¡ø¨p÷ù$l-¡Iø‡äÒøH©gm­ºT½ýxûŒ´¸ûHˆUî^|9ŠK˜µûÃÉ*Ý}¡.­6Š=Áw¤’¢~J—øž1Â\'š±_;ë£ç±§Î÷EâÓly!5öŠæ*Wî¢`Á\$¿Ú S·ÔÖ¥<1âzÍÛkw’xb@÷|ÿ]âõËí?î†pÔáBòAê® ,>’«+Î…+ñMº& 2¸4v¼ïÀWD¤æÚÏšáÕ`ÙàŠƒ¼3‘%!“TŒ°(zñÙq%”)ÞD?µ#nëÄž‚ˆØÇ»,è¢þÓGa¨·Îóíõ©W»—Û›;½gÞ¦uê±µ\zªòB ± ‹ù2à‡d&y ØTOKm]ÂEèããföá¿Ö*zØåzúZšý× _cÆÖ-UswÊ×XÀ¯j}èg:ÉîÅxatrÏ5†"L0)ùýÜäuøÃßïŸ~»ºß\_M»ÔHNêj¼—ž±\Õò![ëþÐ tè‡&!5~¢ >>S''5}L #„" 40Îö11ú±µ4_©‡¹ÅH2€Åñ46øcÛ.ßwÛLñn TE-7н ÈæT+M‰KôcGç¹ÉO0Zàì†B¡gþ)8â˜âØüÓ'žñOã„Dr øõfógøgøsF’‘ÁatSPè÷Ãü'1B!ŸÙ¤4iFΡÄЄAýg#ÐEÿ¤'ON1}'òõêûýÛ<|ðkÈVƒPƇ@~ÄJB“ºeƒº„|µƒäFº¦õx]ïEšÖykNkMw7ÆÃéû™PHšÿUì%P(Äb’"‚9$¬d1 Ñ6† ýþ„mÓú+ ö¼øF• rs_O»ŸøíûÝýÍÄXƒ5ªܾ¬—À.PašÖä ‹ŽMZ›ç]lpX}ôH@ß5óæ‘Cð⦬SÜÖñ¿¿Úò¡Û··í[óþòX-W|øvýz3/ Û¶z_.£jÎ¥æ/£Ï/S=Èéÿv{#¾Ýþõú¯/Oÿ¸{Ü<Ü><½üµgbyS}@mÄÚÈ"/ –žà"&pI„ü¸%üÏÆ¬óÀK M¥ ©ŒæìÎJ:]X€-=*_ÚC$r=÷G°u º˜ØÿLÀÄ™©ä2ç ½)¢SIq)I.c8¥Ì¾ä.As¿f_Zý‚!î%\4÷ÊÉÃõO>…®µ/^=†a _MáC¸8½Ò+8|ƒ÷¡„òù·±ÕÑj:Æl^Ž=¶pøÌQ\¤ŽÖOpLäϺØßã»ÕUþøðø¯Ôç½cFë~I.©ò”Š^€bÊ]å¹L©,ùÅÞê[ ‘‚gÍùyÉw%… u§QgöÕ“òÉ-IÿÚroÌ®THñìà!›ì¯NVeÊÎ69öMzÓÏäúÉý·ª?à ]tÎíÇT϶ĵ.âÿÙM¼Ê·oØ´͸~9ª½ÏøCÉUL(:\©OÆd—MMW™õvå·ú²ÀéÉRÈœôóëûjj¨‘•ß¡8Ú2Ù¢V…ó\퇓Uñ¼èýéS[–2Co¬PéÑ›S}CŒã•"iØ9ãlµfÅý Ñ•˜^¶'Ïêíp²ªaÖëÚy¤KëÍ÷,ºWñ¡ a‚žP½Î"Xßž„P¡µ÷«“ZóÙBýê`UZM3$¶E«Q¢Â5 hl³c—µS¡þm^øs÷¥‹-HMåÌDC?‰¥HÉpaÞB?¤1eÙW\‚þU-„Ùs°Øãy5XܽH špõr‡‹¥„r d­>ŠŠ…˜å*^­ÂˆôÆŒˆMz# bH#zƒÐóΪpÉá´æŒ‡«—o·oÏ÷ú뿼ÜÞüqõצ™5‹íJäŠFaÍa–Â`•g£àƒ|¦ôf°Ý³øÒèÀž5S5áW`qºÈxÊóËÝÓËÝÛ_ÛcMžPÉBH³U²]6™”^!EeóZKqmzq‘.¡Ä±kÇœ=Šè%Ù=íX6/ÿþº'ÚUI~urus±k–7 oï5‰j°±E@¢|}ó ­IÃ’A@ÜÜÛ¨øÊd$há ¯LICmýåþœ¯LM$ûÇèJžŸnšž›¼§#ïM¼8@`äÔ=½ºý1ô´^°8›,÷tÎM'„8íá‰4Xï\Åû„F{þä+ðNžy ¿•À¦<<)â`uÙlûÑp»ékÀ|Ò§Çý9æK‹Qïup.ó]OªýÄ3t@Ù¯ûoÜ\¿¾lôó¿½Ð 
…m¯Åî„ñjòᇌ7õÐïxãsvΣöá4ãMŠIäXQñ&•u(Ú.R~]÷A^“l—Þ¾©ÉxÅš;ÇŒ—Ò:<<.ŽÝ–,æBÆ»#ÛMo^ž¾¿5Zf¢–)Æ¥ƒC–)®g5º‘xêÏòˆaϼ+3-†³Krz•ØVTœÍŽV²˜atSënk±¹¤qÀHW•Àç5sSl.ÀSŠBçíªj¨ä¼Ú¼¶E·Î‡_ŽKŸÂPp›ú¶Ÿ{ãaŽ‘ÏYúúØL³´Lœª"Ù„E»Ô 3Ë«qÎ »T(;ƒ£&ÃŒz2 f¤tÃD¶Ú‹'¢‡ƒÝ0õÝÃóÓËÛæúªÉ £7Hp A† 2rÏÎWvâ"Æ8#Ù<*¨™†h­õ•)e’¢!FÊ• WR™dˆ6aæB“!Z¿êéŸí«ÅžbŒ¨y:ðCý?rÕè—÷V·wÞï!Õín™ï¿¾?½]½.úeûÿ~øáy<²D{®éÝÒØ¤TwVÑæ®V²›ÒÂ#KÐüªÑ…‹Õ¾Gò1z¨³t¬ëÝ‘Ïé¯W×·ÇI1šü¶µ;õÛ$†þöݯ=›‚÷ÖU$­%þ ñLóÖ¨aÍk1IÁ’·&—šV¢˜Ôxn[½ ÉæŒÓQ?ÆÍQôg±9ô5 <§ÍÕzþ×�^¿Ü=oß!Õã_Ý?ÿqå?˜9_—kÕГýë¡ÉFè„–TSa£1Q£s±¹Ùõ âœjÓÈœ êQ®hÑGvl7É¢mCjSQÐa+PP:˃,}Ø/¾¹xâGw¦‡¶ú ¸ãÆš‚õj +ö,, â°í G[%5Ó WÙaðÅLHñž+®Ä2É1¨†›j…D!ÉÐÛ¸}Ú†8ËÕ­ç4Ň+K–îo¯Œ&Tí쟓ßêÚª…'j*;—< ™$ùŽW [ÒÚµØ ^Õ¤5jŠ(àR):)V)OdqÇK´Ï¬aa“%r$†¡—2ÑH?žÅyûÆxNKüaÎIŠðç—§‡[õþß_7߸µnÏþ—nù—Mñ}­ië”$ÛdljÞDÔ+³y×$.>ºDXeœ§ù%¶òó’MBWšdž‰Iÿ×`žè¬`èeÄ<Ùó9ºPŒV øÿNÌúiÌ¡ñfÄ£ïheWØ#v¥–$%Íkç…«gš1D†TÁöhE TL![y]IdŠªßØ>S °ˆçB_—Bì4nðë½Ý¹Ê¼°”S W ÄR§‘Q¿f‹„Í(Ìë§Fa‰Õtn²D ÐÅîQ¡í¯ð©§ZŒ^o&éás#Œ?ñ¹i¶˜¥rÒÜ=ã`YŒütoÿî¬å‘?¦Ìè¶Ýp s®ÝÖ¿aˆÒͶպ=e«›K¬¥‹†`À{*ËÝðÀדºÉ¢'*cÂ:r *¹ìÞ¾Ãɪ(Jhqd,ô-Ö;Cm"=CŸ I£íB§a½AƒÞD†X£79Mθ?y–Œou´ÅY#ŽDð-nõt<¢8ñ]ŒªÞÈL08l38”ƒ“XÔ›xÎÎ̬NV©·R¨')™¥7ì¡üH€d˜Ò€ë)@ÞcYm*“DE½í‡~®!½Ÿ¬Fmè—Èà¨Imš ƒP›†Ð¡g :%Jà¤9EŠÌÎU ÉY­ù5–«B§Ó£2;úlè´’ÐÌd]5©~´v>‚ïáh‹¢.‹ü„k´š1 É(x€©Ê¬¥¤Ö›¡X«Ò¦!`À‹ë,ô¬®ß® SÇÈô@\M „6òˆŒU¾8PQiáÈ#÷ádµ¾8"¸´ÞbWÁØqÔØÁÁ¸±I‹Þ(¦šüS0b**.Æ#fÞOV«7Ð 1]\o=´@6|Âd<ö‘j^ Œ íY“´õ\e½qÌW?ŽVE¬Ç jNYÏŸ6Kq=d0Dý+ŒßmâØ5`*«M¯ë²±a̽«¬ÎU£³¨q»XÏ& ¸ìØyXÑVz”!‰ÃaÎX µJã`tÚj”&.M;x–™{u²ªôк³-7iRp†ÔÖ·AÈxÓc«®Ç ,¹G_¾ÛláÑi®ÂÝÉS6(Y­êr“E2Š4)ŽÉûþõh;Ý÷øH¤¤_aÞ›Ù~ûõêáùþöõ°ëØô%Œ)ójLÞ‡ßë¹lÆY3>HlÊ£‰u 6%ºпBm{8¡Ž‚½Ö ÄŽbÂ#Û á\ÃÃOSþ§Ú#~½W‰êÞ?]SÁ~Ì¿=ýôtÑïd1†K2"Å2ÆÜ;ðIŒí–?oæ8Èy ÆÐ8š)]dÑùžô–@àÒ>Ç“»_§ùžd´z·W#¢?M·]ž×e%›)³T´€æmÜd(6™ÆAÜLj6'|þËjh_ô‘Ë)‚¦ãPs;‘Oe€à1¢À½ˆ¦x¯i(Hð—Æ‡‡žU¯>Dãå³ÒDí¾y0pa#Я»RPŠÀðù¤q%›Iw GuáòÈè*áXQp®uù«ŠçNýž58´zñxùþcdó‰~ÿÇÿܼÞ?½nnÿó¦ú»½™“À–ÏŠ+Þ¥ƒÅÇT |7ÙA†3ÉŠî,© AÕ%bW1)jêú?œ1[zĵÌÍ„’8_U†â" ä\ÊJ$“îqêYÇ· ¼Br,Jí£$ôÄBnf³éç«æÔjÌ2/iñœ¶ï(å;ȇ2`ôfÈvH|Èl `ôS{ ¡0š–A ‡žà•Yc»Ðœ˜êŸï¯o—|oêóÓÍþa^¿ùñöúíîÏ»·¿®ÿ¸½þVz¼ÿÚö›7;ö‰Íõ˵eÑ?þ™¢â÷Û·ÍÀ›‘ž×lŪMk­É –o±ý®ÐŸäƒ&ePQ]h[jlÅÅØ%†=òâªiR+‹‰â«&2¾{Tµm¿âah¦‚}5þÉI9FÆ”} [Ég’ƒ²I¸48¹eïˆêã”=¹Ÿ¡ñžO¯6:þ 
þyóŸ™!OH‰]UõÊWXH’mª=iRõW=Å6„Øìo?¯ÆdŸüÇnN·X¬gÃß±n}Ю\|Œ¸¹i¦ üÒ-ð M=ÝY¤]²·ÛF“¬—Ñ,ûÛê^Œ'ª¢a!–«[)KÒ¹’Ç”y&Y„“$j²>`£ŒÕ0˜»–î:ýË‘üpç´T*hPz¥b¨ Ê¢ßÔ¬Ù$Û?t8Zí:UPw[N]w³âcÏ«hrå›çÐjö;~*0]ß¾h„þ×£þ{û»_±‘™ï"5£+†‡P¤gÛ7W›£‡è]ä&¸Ø¾Á‡â0èydõ6úGB~x‹«TOHx[§>·¢À€­}×\­jÙv<ßÖ“Ðêéa$<Ò,€Ï²6„!2ž•ê¯j©]ÛRðþhh”4zå€#V’RÏÖ?Íã%€þ6[ÿraÑvUŒ†35a‘íÀ-–Þyù-¹iLÚ@ Ô–Ù$°¼(¨Û1<w&^ÈRó.üøT#ù¨ Ã×ëëkú-m¾=^_7Y íbøå¨Ô5îÒ„ÀUq]#˜ÞÜ$%±jRARó2k©&ò\eŠ®»¨Ôò«³Vb™bм$rNš,ÑŒ] gà_²¹R?^L¯Ù´K2Ø.R¹T•ñB©Ñ›OÌç‡|¦'ì¤7Å«š‹Ù8Çq&95˜#ˆŽÎ¿ÖãÇwäý›îYò—‡«ë?Tö›Ý7·íö #Ëè¶¢LÒaB×2:Ïi<ºŠqMâšY[gÑW´ó8£¹(%ä'áW²™Ä×'˜ô¯k2Ê‘½2J"¾>Àø_âë;Þ©ÜÁ6ÜqÓ›ïLéÚòíÝÿ“wí¿‘Çù_!.¿šsýªî*BÄ@`v9ÃØ[òDÆ<’àCw ÿ=U³ËÝYnïösöN² 73ÝUýÕ£«¾"í¤ÈðåÝõÄÓÑ:XÀœºŽžÐøºéu"ô3¨ *¨Ø9`œ –Ã@ f5s™†ÒUŒmAí3¶yŠv^+̉IQ ϵO´øðZ]´`²˜4c›ãICNÛ¦OhclSvVs§óÛº¨SªªòxO¥I¢Oæ÷ýŠ ‘»4=(gµšf(ÕHnP£ ±Ý¯.·iJf³+«‹@ƒ]thÓ¬‰Ä„ŠÁøÐ+=òævóoÝô`@atõ9¤C$­çßîL½àÏ–¤"½Œ ® E‚¦ªr\4Z𗬣3‹)„Ï,êœóž¢3cÃr ¾u³®ŒV«¬’Z¿‹`0P6úÆ7@?ÂÕ <Ž8¡YÔÙu3Ö eä]ZÔ|T" ·HQ'b³²,&™¹ˆ¡$KB ‚gànÕùÞWÑ%cß”|h–[>ŧ”gCš>¢B è••›hnz²´,.¶ʱ³P"8OÔÒ#Ý=¾ŠxµÓ A5 .›çÊÃ@¨UY Û(òiÁ‘‹æ-'KË*Tc›ÂÈ<š/¸ÀoÇ`[窸J2ÆhÊáf¹aA¡÷è}†ØÂšêï˜Ø¼ŽlÚ®+Kh8Þôè"˜ 4Ú„†¡Ê‘Em»'“O¨ËxäÈe”‹_CI±™@q_f³´L˜T2TØ Ž=VþG“àÖÕn¥ôÎ36 µR]±—}Üì …‚7-7K“rs^GÙè·+Ë:pfp2¦âÔr{m§/“:ÅÁ»n>oÙLÈatÜÈäÀ¤ I” ÆÄŽÛdaY§Í (y ,“šÜ¡7ž¶ªêù )ÞKoW£Ø²™n=îºÃ=)6’’>nÑoº´¬ãFƒâè±&Ù‡÷-×Ò[gjG€V8jší›Ö%Ìrìº{ÈÀIŽÁBRp`âý†Û¥å2ËqlRB&£*5rÐ×$8O5öUFÚù}ó‰Ó¦@p„‚˜%¸ôóµo“•eÊ-HØPpJ2K µ %{B®&cJd ‡8®ýÀÙ‚Ô96 6Gn„‰T¸¬¹Øj¸‰Xô^U]kÊ8·×š!")£$«œÑ-·³äÓ‚2QâÂíZÒ·šÚË œ°"&þ]ä ·šŽ£ ¹¤ÌOR÷Ñ UºJ¦&hÎvê’+]ÆMw¶x.©6>…p²²¬Óëk]§>¼kçºxŠ’iàÍb bS´ÍȾ°Òrs!Åo—–%7;xð¡„g¼à jˆ Õ‡f/¿Yp®`ƒUìÖ…kilZp`£IêÉÒ2Í¥sXv\eXa€ÿ¦sÍYj'N8£UŽàÐL Îc¼=v»´ÌÈgŽN,8«ªæó8 H0×øóý£Ðúv¡oõ"¢X89Ž.£fR¬Š×êLö¨KM[xg;¹†˜ªÈS©À› uà§zZ^_]¾ðÛßO• ä/ª=1#L h)'Á¾‘N*Œ±gÎØlY'ª’‘í}#ÞTÍ6/Ï ¿Næ qLi¾…~Û;þçÝâö¼¦b[XÞÍy`mMï!»3Σž©õðí–õ‚óQ1¤x7£ô2蟜ì^ˆåw'ÛÓ£Z›?šÃnwú³¹®`î|6Í ¤öÉÎßätœBgy{Ãb9_.ʤŒÎšõDBM“§ ,¿®¡ÍFõ;†fÐc‹`Î1L†C2X!V$·Ý’N§À5äwÒ‡š±½–#Ie8,Õ‡•g½ÜR4•¥›Ë”Ô´CN­‚ Þ§µ#6Íg²?]†¿ò7;vÆN®¡¦ã*r MÀ™ˆÉn_é‚¥KyQÇBW 霟AC&3×R\ÁãöÅì÷dº(µäùü©íw€جI±•_W³ŸŒAà÷p@0‡ì¸A@ð-cÙȰkÈx­S$=YÍü9ÖÍ {3x^¯ÍhžóB—8˜¼yÑy³“ÝébÒQLº-Ê’tP 
m©ÊÅ“Y„*Ph™è>¥†Ý3ïoæªu¤Ž¢9ꌛUÝf)y‰,%21?p»o=М¿ÚIéœ\i°êžÊ£¹r¾º¼¸*ØA&ÔdUÌ)ëÓiVŠæÙ'›ÑEìÊž\œªÊ²9ôÒ,k|öÄæ?6¾ßÓû†ÙQµ )^”S·llrÒ«lœvDNv¦ »»€]™^x«¡eÂ/OÛªBKÄA8. õb#ø]Å௽»õ½ÍâòÓÍÝz¦ÈÃýíÍò†ÿþÏzqûp½ÐÃø“_†õŸËÀ­¿¡<¾§¡ÕÞæôvÙ¤‘á=ŽW˜m7±“ pTæš%¯µ©U±„À§ÇZTÍ^ëå]dãdxÄÇg?õS =hBe3.k ÷&­Qöúíîtš—Å€…%ýÖ¬B¿è›ÂIçÕ œWÁ.pÁv'd>¡­7XZ(‘ã+›ÚöQë*ÊìQè]¤Œ«5‘a³‰¢( RÕNi0d4Ú¤NØh s²²œ†7òj3ÞÓœöôº:" ï„©kñíÖè_¬2ؼï÷7ŒÿçËÇËóDZx§ æE=öOŠöÉN‰QÀéTw¨gu³W].ÇìÀÇ›ÑêÔšP“‹<ˆQ¼­'Üä¦*e:<›úY°&)Xˆ÷§mW–ÃGL€ `§°¿óŒ8!1á(0ä»âùÑ»ÐÕ¢×5ŒGÐ SõWWyyÿùîö~qùtîYÑ} sú¼üùKÙÄJvÞÍyÌB ANàåx‡Øg^åá­ê¾£"°å̱ÂR½öÌBÔ o÷¥ËPLÀ`BWèÍ8‹3ŒýáHRèI™¹.!™Ræó·Õªš'¨öI—ÝŽ/¨ühŒ#øgìcÇ\t ø Kî_“§œü9 ›íë2s¿ZX¹ \,rqŸµU~ù½Â÷Úž½þòÙÿÓúßþåâìÇ%Ï_Î~üóx–Ïþqê§Ç‡åÅÙêÃÚ­øÏ»Åã/?üû?Ÿ}dà¯~¾?ûüxó|ÅoY<¿<]œ½‘Ü߉sýxqÆË]žýãÙ»Ñçz¸|–@ry{ÿÄ«}wpd}=RÉ#ôîçÏ–×ó~ìv^^=H¾O…ÍxE)YÊFGΞo>]x X6I£=_:{bÕb—XZ¢WpöxõéþYñãÅ…þË…õŽpgÏ¿<ÈŸ¾ß"éÇá_ÿëý|¹Û‚ÚÙZæjûAõÖOšþÎúG“_z}ÎÝóê3ßý_ÙæÿšÔ'X£´jRïûO„uãè¼ Ñý ,HDÆÿ:*ÙV¶ÅÝòŠå|XÕã×p1-²6U³cµ³DÆéoÑ̽±]£ÚH ‚OÚ.°Þ»Þ35^“Mé`¼VÚξŒ+1^^£e›Î.¾¹ ˜ú½^E¥»ÐÏKfÅ Ÿ—E&úXkd üµòÕðÿ•¡£Pü¿"ãÓAýÛ)l=Œ×vðÂã‡_Ëø,oo6ñüûûËûsþÁùæ'] JróÓÅk­«A)Îv4(Ç7l7òÆfŒÒ–¦L_8 ‘¦l†×*N¯¾]w›±RR „'¶üb˜ÙfÈ>Û}›1.™øÉol†&3ŒcrŒ´¾€à­ïi<œÎv¤OþoØÇâº)–õ^÷'ns& Z¢ä¾æ`·ˆfñò|½æ4zzU†.@„›CÐS ›÷(ÉHß7¦wâ(‚‹|À:ïÒ‰iЯ7~Gœ¢Õ“õpúG­C•|üN 3}D‚Òsã·ìòªz×ç9QPk’µ-~«Áè00îÓ…\À«žàmC+xçÓB‰ýJ`Ù ÑX>ÜT«sòˆ`\ÿ‘ €È{Ÿ Ë;0|=–&\|÷‡ß_°ûÈr¬OA&:9:{y²ÇOW¬òµç+™³¨E¶It}öýÙó—»‹ïÖWÁß­Zø.þøo¿?‹šÿ, ~ϧûËÝËtyÕÓËRDñÝz}xy¾ø®é=?/n_®þ:^³ Õü’ïG…z‘µ¼¾eD߯÷|ÏOÞ5g«6RŒž,Í;nºÆGh[Aj`tä›Kœ²gÎHÁªá5.{“!^ÇlØjáÑËÕÉÊrJœdŒ)R 3T'À@Mzî CIéåž‘Z •·&z'¡ÁWßIè·ÆIØÙsÌ“Ó1û”8û„ë>Ó §ÁÈh;¡4x¼}þ|·BÚ›2jy÷2#SË»­ãëŽ"Ý„K¦fH§WÊ)­}ó°«Kp)x¸äÐ%qÉ€‹ÒamV–KÕ€Á Óʬé3â…Yƒ!¡ÞD²™…Yã{~¦UMpfß„¥³À™[ÑŽîٲœI½«ëg,‘ÀÙß±õ®ZI3 ÉÚš¾ö`‘}ûx>ëò!iä!Ð.H|ÒC‘ltxØdayžR@îÍPâ)uÚzy¡Ô´2íÓg­Ï¯Õ途”ï|0I©Žrl–'4¯Œ$ ]O¡eØx3²f{À»1s ­ü–¹c)tÀ¨~v!9…›ûÈ8z³\<]ÃÑst—Ëeøà#Ba™…(ãÄV0^”»´åo2ì³5„ìJs¨ÒîTA‰}öl =dØg•6Ï>Zõ»]W®yæÀS+[bž;`Úº­tVóìWƒ|öÍsP` N‘|¢o%ù´G¨‰ÏcÀ¥úgŸ¾|ŠSì¹Î’~:øò#°$¤®Jc,‘­É‹“2ƇͰd ÒO6¸ |F°ÇP)\"k¢#½7 ËÍŠ fòq)-´4.‘S~þ4D*õG9È•úDÜ@ÊR0'@¨y£ß¢˜‘ËÐ2›ý'Hg éfÿÀ¹ÑÐÕPü¡VÁ;×<"݆|4doHób!£TÑ&±pÝGý¶Pq»®<0°‚Sƒ!„ùë 
Fsê,“êã^šéˆá$7„¼ëwYsz^õöüáñ厮•páƒ_*¼¼ô‹åOoA¯Ðm›ás&Ç1ÎÙíÕ‚ØÛº´ƒ}s‚)åj¸»@n2ø­¨ãTê»aiÔq&•oçuGÇ]L– ;B+Égð”°Ã_³Ç†¼‹­L@mI…‡Œ}_18|ÿáåæör'0Z¸ü)†>!á‘WNAãu·@ðÈ+çux8Üq5—FFƒL­jg~‡Ïyp˜áñx $ø˜è ”éÒ2ÑÇ%=B'FæGã‹;=ìÚ9i¶Lø*0t±prV?ÿô¿¾ì…wZ¡oE¤’·ïd©t§’·ÏSj€J†”z-ݹ8E8åÙn£3>’Riœ².z½½]Y®“DN ¥N S–Nà$E1ÊÛXÇ1ÊPǪ§`èkDhoï|´?|უ×ðg¡µp¥i약zyYßyÙ 5ÿMg\­zÊ#4¨Ý;¹Ï‹Û÷üQJv6Jùt{ÿùìãåâyñôËÝrBÃ4>„=l=ÐOѸUM’Gª>_òö=æM››ÞØ ;N¦ÊÅn¨è>Y\ȇÞIãÚû?­*û§ì|ï˜GJ w~6ýµ, —NÂÀ5 yçæ%ØP÷on›[¢ÞD¤ÌÚ=<âuÄw"èà;yßÔ[Ò¨ûxKGÞWÖô"ê$S8ƯõšÆGª(.cóß\tàr›^ì Ý[|üV8zÔkrŽÎZ^/c­÷“•åÌ'’¯² P妖ÆWƒ2º~°ÔøˆZ_§¼AÏ[«Ô ¤%ÀåS®®‘"stžÜzÙÑúÛÉÂ2ïd­ö!*š¶¦-è/#¦f6Ų:^Úoè¶ù»/í·¿ÑÒþQÃCK}·ÑøˆšQš.p̤9¤m¶ºÀÖNåØÇé½Wë¶±FþÉÊ2 Å Ò¿ì8?i6š>ã@³#›f§ËRf³ÑøÅlÂL#"±‡>wð-leû-£ü„}M4GÈá×Êþ}{¶ïªµ8²ÐÖôµ§ 9óç ³ƒE›Æ,°`’˜…&æÃNV–éÃZOÄŽC‰;Ô|hv軚¹=²½` î õ¬K ¤¿~–mò-âi5GCËÎ+w:Z\˜©£eç•G G=ÀHMÐCu_A*· 4ߎ:Sâ/”Á )8ŸÄ‚øhÖíÒ2Á‡]#£CÈŸ´à’àÃPa~Ï'XsÀóG'¨ÍøšÏ~ÉÂõíÓÏ&v J=žÃï›Çá9ü¾£¨Ãxà½m@VÞP3«Çñáä(ķ׃…lÔ A+!:`I©TˆWîcaÓ¥e¡N` TÂ[‚:rOemêèÙã-qéA–„c|Ôçáíè:2[è S3hxH[<5{ åU5ï< @–ÕXa[’(ÔM µræ ýÚÀ¸= 5ˆéb çÀ¹tšÈÛø´ÏÍÒ2Ý~]´%$7s†š,GU‘ Ã+*Öm›GÙ‚sÒ\¼J’œ ½GHë0n86+Ë’Ö2r*˜¹9”+&Ãa½™Ý]uk~=wýs«Oš¨CøŠ‰º½VK××Ë}Ç•ý‘>™ºƒ/Ü©'Ö¾[ªîà ZŽŠŒ8³" ¸XÆ™… ûˆîÀLGý%uаkùtóÄ;¶øéj¹xX,ožo®žÞ_ß?=?,ž¯ÏÎVŠzõ8þæòü£yˆ¤ P‹kß»sEf‹u¹öµGUšM°·ÐdS¡¦?Úh™òªŒ‡šÙ¬â‘Ù1£Œ–δÿã4¸¤…Õ˜}.¥ÍjÒ3²ùA¨—v†¥î<¢iH¶¬q2–Lþuš¼±I¼¯™raÀ‘&rÍTPЧ…R´g(dfh“v¯|”ªqº´ÜÊ é£,q®t€¶¨<Ì<¦bµ‹ê2»¸oÚ´f.B¶ðÕÚDŸ–×W—/¼Ë¼»¼icƒfØéÃüÈÑÚÒ/ô,m¡E¯ß)FƦbdÑ2­Åúùy«ò'˜ÿºÞ@ˆß™ùÐÃñëú`\GMõÁø¯—HÚ¿O:‡/ðåo1ºPß%tä»=ɾWéÈyMBŒ‹.´€.Ÿ†þ¥ÐVèVHûàfœÆõøÿì]mo#Éqþ+ò!_Lm¿VU+—‚üÁNpN†qàRÜ[Á”´¥½ÝŸª™‘H‘=ìî™nêò²°Ï>IKÍÔËSÕÕUO±ÀŽdyRÝl_eEhoóW5· “ íÁ„)ɾ!+×Hªí/à^3|~ÍàJ³J‡—õ@g1L¼l/Ë:>¢SNa~ (qÍy§eMYt¢|ïÜ!¾º9¢•Ê/\(쌪Ÿ9O˜¥~<ÚUXe©ú•RViíœnóO;¥nïX–ËGF°ÝÓã÷—+ÃîW.¿ÍÅ»bøÍªÞ¹å`«tðf¯Å0®MT@jž654É^A…0°¼{ùuô;eEAn\&X‡³ZXŒ† -,¤À€ šÕG¿S3ÉåplqY®RÉ>YxkPß‹ªR’ TXR#LÚIŽËjžäú¡ùó$ÉgX“˜ªòUï.)+ËÕž°AMè,x´T³±MÚ‘0´ÃŽ‹FÈ,ëv_Øåå:ì¾]Š“-ooX7·Oe /‡;j,ñ6—Á`AZ-Þ%Í‘nøí„ƒ€Œõ¿Ù÷ý‚Œ‚⎣¦‘Ðxœt ¢=Z¥· ùæZóî#ðŽz»ÿŒ Ɉ7üÌéÝÇ«H*Ý} BªP3äåÄ$ÚR¼6VU¼/Íõ¦ªGSLèiÇSGSÐiCé{ÙO €{QÕ:›Zc½Ã çF†.°8;Ê:EüÊ‘¿X¡ª>ëbÅ•IV³GUhy5W…öõ?R_ðœÌ† §·6«ˆ[寳$U2Ôfµ] õ/ÍXKNöÇ)lyiÖñjäÌ4N+uÃLEù–‡ÐRÚoÂmà4 
þ]ZÁ"aÿ¾–/ô«¬úÃr\^:Lfͧ}›Ãç™^û>:î¾û²†SÜt9g¤’ÎNH%Ò._/‘$S3OD%å`ÈÈJ(\”7m/‡ZYbƒºd®·Š¯µoúWÞb4Ã@%|êt‰V*Ÿ74¤k”Í\½©÷NY$(M£øwWkL—šì-ÞZglºM’=)í×6Ú?»—P%îÆåËê5Û‡ieŒVú҉̾§mõ|sûTØÃîÛ&,í¹Ý *Oѳ6‡0F‚Ô*6SM1«1U[Mºó²×qõþº™§^4 \IJ!^9ð—<¡½œÄîVëÏ ]ìVB¾{pD;þƲ§Ÿ(<£q7P2ž0+ò ³÷¥##ÀHŒ*`aè«&ÃZq°7éCOd¸ú*xN±’“$,Ðø$É^bBaÿØÚp/ …){Éñ_js™…šSÝ2¾ƒ¾é“yý—=4ȸ{±ÁÒsæ8IgRh5ÌR¨¶‡ŒÎÐ!BRÐÙ\°è ÆOXµ²NlŽÙa±*€OVq&ƒš‚ɳ]Úƒ>Ù/ªVŒ¢`uF•Á†$;; 0J.p ¡Ju”•[E§‘`.Qfp.^g0²ÆE¯£¼¯Ú)‰Yý9¹ó©fÅ€QçÅ3û%Ñ@ƒ´XhÀÀ ÈÃêîjÍ^ó ÿs÷"µ¡®Ão!î¿ûÀRÜCÆœORüIEôÛžŸ791Zõ1š¦«2½IO˜µ£­BB;¼s¤ˆ×3Dõª$Ýf“”cÎ¥Òm$›ÞýÌŽ±}õ⬒m‹#9ö¥¢žLBòsmÉLY0gd±-è­ ½…Gþò€Cyv ù²]Ý﮾êÕöË畾ŠÙPµ^^WÁíîΛ¦Âô=éè´ÉÕèåå§¦Ž æ®Ènˆœ™WD# Æ‡qOe#‡váTUÑ«mIÕ¥uyÓÅÂû‡³QûŒE4¯&`}2r#ëŒ*ç烈7«íÓç¼#ùîdž­Rv^a˜D­¡rt°¥õ³þíÏ¢jI4ñ‰dÐØ9XUÂh–ŽÆ!Ú.ºÙÁ˜ÚÆTWªV£×jÖ„šUN·ÈÑçxÓ¶¹èîö¾¬õãºì²ÇÒ™qû2Õõûµ¼lq36(wwñ~­} ÜzóxØý6§KËŸQ#9ØY7VO¡8ör «4™¹%b«•MvŒ—¼ì,î /«L'Y†1¾ú!U>yjKè-¾”…d8©i,p)ËŸ5ÙK:©#×ãj÷ôø¼~z~Ü\ QéêÛòoÔWvm¶– dssÛýÚTUºüçݵž©]8>ì…lSØÁ+å9–^¶NPÆ™;× š˜SìííÕJ K¡¹Þ×Êò‹’>H·JÓ??¶ Þ,B@«phxÓ€ÂÊ'רE;‡f¶˜‡ß—{‡•ñI­Z2,*ÔÌýÍ4…ËÈõ»+‰é 4jöÓñK ±XºŸŽrÖq’ß뼨R7 Ѝñ“¶‘ãæG õÏÒ §>2pÂÈA…~‡ÛX­Äs[w-§KÔ)ÍC~­ä·š¤Œš—5€j^æzB—W@¶X[¥E7è]^9h>'Õè£4¦˜÷Ä5¬O§£—9gæ=þ-(ÔÂPA)[Ê ™÷XÌàÏóž(*8cBôœ þgt뼦ÓÖ>°yÃ7¡‰S×8ñ± 3‘u9rÑgÙ‰U0b>g¯Ù¨vÁÍ›ˆ´¾É¢àÓ jº0ë‹´¯íDê_¶Ïw›”ü|³3CJ-~KÅ»´„žŠysOEy&ŽŸÊq^´f[A-éw*Z„ Óea0_¦õ*›*áZLÜE…TÖÌc“9—ÆáZä¬b“DSBË”@ãºåa4•£vÜW¹÷ì̳TŽªi. 
ô^¸(¯ØËLã1!ñÑ×÷7Ú…ã¯gPù…ݼÑH§ü¤‚/!§(ªJ³Z1'[½.íd}X{¨Y@éYu0^Á}W•.¡Ï3¡lkO[a¿×SH}(ÙÚ1›Ú ØÑêqû°íNrñp0sÉõNsL”Ýg/ª*•~~l%}'%ÝŸ\æõ7ò1¢õùJä¬M¤ȯ‚ Á]¤&‹ŒžÞN3)ºŒêV8åçÑ ¹Ð`‰¨,'æ¢|3²P¸r×ýõU¶/ÑgäûÚXí8/¨WZ˜Gæ³c­Nfï â`.0Ï‘cMŒ&g”ÌÂè$ç Ñé¡Ô*a4ÉrŸ¢IX¯ùtigº¼2—€h…hr¼¦‹v´†¼ÖǾugVÏÅ“Qus– zVHöVQ ØJ‘µ¦å«_ö+^v´þåäÐpò¥"NÞϨÀ˜³}¨i”ö“ŽZüYZ6Y†Òiƒb«ˆÉóY—\‹Ê`a¼ÑILæ8JÑõ{ÕÁd Á~¦Læä’ÌÉk§Šç¼&«âŠ5€¶”d!æükè6?‹¸Ä<”MÄåÇ6àŠ·‚KbóE4"g…ÄåWvrš盕@™wØ–ŠVÕÝ–z cj….¥œuÁï Z -)§•vô›X–úñùv[Ê} g¨4“bÏ@Y x g|Œ@h¶uU=CàÓ-è4È2R&»±Xn1=LŒ%eC %Õàδoޱ:ÖmÓ+ŠtPök /™-`Wœ„ UÉáªEñ—Àhkàýxû§ä¬àÆ'(A+ŠŸƒ¦ÀîZަVVIZÙ]?~òX<TƵé¹*Ψ’À$M’pDt/‹:øÉfêø?%9jÒÒNŠZç¨,e¡§íôDÒžsÙ >¯«`æôdÌýGiƒ ~'­n25äÈŒjYƒ}|ؾ©t³w°6v¾<>Ümž>ožw2IPŸ4]Øèiõ¦Oë¥oOÛÒ#ž|jî²_0}Ê·ÖXL& `Ulòð@µ¶¡5E è|¿ë]ûm³Ö‡è !„hJ®îºÙ¼.ïߎï÷£úMvÞ´¡ñS ùDÀY¥æø[7ê6„fJU¨°·ÏÞK•!¯z¨ tw›bfÎŽÒã’0ì©:FÕéTAU±d­¨ˆÞ®‚_ºÖ+XÎŽ «¬)Ãgt—¥·ËÛ…Ô÷ MÜ…” czE0|òŸ‰· .œ5§ñšmÃáoâäfóiõ¼}ªv’”{kÑÓ´ûf#·Kàš]„ ²ª¶D‚MÁ{“žž°Î¨ôêôñ»æW¹TsûõÊ»’©tD‹@³¨¹Ù°lãäUä¬\dÌ-tÏc0Êzøßr!r c:%"KnV7<5AXGƺДÊ9%ÊiÌÆMwX¥’2e}¬ævìô•µöÅr§x+–>‰¨¨ª¶sˆJñeN‡©ƒ¨Žd‘¥-)Ôð>×QQ 3}LjÊY·•’ÀEÕc^¿$VCÔœ›eiòõzÖ@°p×· mð¬ëûõŠ€ô¿žžVÑî'VËÃ3[b÷©™ìÜiÅ™TbžƒR“æˆ:>ÈÂs¶àÏ0?dK}&!Û!ðŸ&„@e’„4ìH:™MÚK²!?¶"iŸÉÇõ ôì¬2¯´¢7'„€ FáešÀÛ(»W¡*-Öµ™ã~ `4j–ä¢q–e0FµÒyl†‰´¶ä~ëÁwå݇öƒ²²±¿FN93Ƴº ×è2=#ªšV4ÎQ½Œ[PÙ±ò“÷o h’JÁBìþm/:¸,dð¨Š:9ƒçÀ³.Òe&ª5.{Åe"¼ \`®ß‡¬ò°-žëÏôþ–:t (ÊRlÊø—láê>½ðn¡nžoŠÆÒÙqJòPê§t$±.ÈpÔÔUÚRÒªµ³7‡ZA W½TÛ“µáàGc¼ § °ÊSkí¼/V?w½pðÖ4V3Æ*ÃrWA¨.Ûà»~›•öN††¥¡­¥YHkL‹\5Ÿ”éGª/^¾¹ß}ŸZÊ9`ûµ´<ìÝt‰'–?BO!¨rÚÇ"¥jåá)U¬“¶r7é;7îÓÎà*itÃð¡LêTˆI+"À¬B³Âø>Ï6¿rÊÌX˜XˆÊa¸ÈèQV?`ºpFuP†tçíÓ= ŽY^ Lœ¶:õÁfçzíHÞy²œ%èT;JóÁƒGN{ðÊUº‘ø©!ÈŒuÏÍ_žµlSLÆ5îF1[Št#yÐ(›®ýùtÈÕÉ6Y3ÙÚägDGÊGôå€4ãÍÐã_h¼²{{ŒuitÔ°ébÝc”·5˜™ËQ‹‰QÇ5Í‘HŽcÓ5M²f”Dê«ý¹ÿ¢ì èUÙ¿®nŸ\ :‹Ÿhþp³ù¶Øëâ0Ïýýéñûm—ñîd•Ý ¿å팅NQñ?ȃcœèÌa)fñðÜ-Ìí²ß˜üh°Žó§É—ò6@‹–ø“µlëiX@JøÛÍÇÛ{ü݇¾uýjy×{“º~»]p¹þ¼YÿmÉéË;÷n9$«X“ˆ¼ÿ"‘OgúߨV÷ëÍvs3]9çsù>‡O¡Ø²¢Pzh'Ï·©ÇÉ!€íDèOUbüË] kÙÙšu'4þ¸(«Ö^,UÎbâDs»D*9¬3m‹Ö½”Ul ¿1Z²Nǧ¿°jTÒ sª*¼.Má¢PãòÃ}ÐvñòËÿü§Ÿþô‡?ýËõâ/k¶À¿.þòçî7/þŽ-öVòõ¢ÿÂ՗LJõf·ûûÕã÷ŸþퟟطÙNŸ¿>Þ>m:M?ï®E ÷ ¼l¯‹½®làëÅ?.Ä%îYŒl·»Åzû°“y¸’A@៙ÜÓ}ylm´è]´W¡ÛkQeiHíWò 
÷‚÷þ܉öÔ|—úØ~—úÔ|—§iÕ2zé5'‘$qögÌQÓb¡ŒQZrù­IoüçΑ®øÃï#y‘]<óóݯî6ìÕÝÂõ~30ûpî¬zñãâéÛýõ뇻/«ÇÍõl6¿lž®ÿø¯¿_DA¥–bÖ‹nÿ©+Mþ#|ü„?:{£?åÝÃÍþ!…Gt÷¼/¾þaÂÏ_žŸ®xǧüºÚ>o~îWçX³è’ømx ‹ì çYüòð]€×Çÿ‘ŸëmÖÖ¯†§éÎt>Cë>âxx^Vi¥¤è­ÿºè xÕ¡ôÛD=¢oCù´Úî6cõó‹ûç;–öÏŸ^ËÔ×ÑR«–­„RI–5ÂÙ,«ñ8çÿÁ›ýý§úìj0”·YªpÅçPNv·ÿ.ö!ƒ+Ÿf_òÇadêÁŒê Ÿ­í,ðt64Žd"ÿèrV #~ Ù5Y­¹ÿþíþ4z¯.'ùrÑàõÿàû|G`À;éŽWÓÝŽ?BûýÔ†xuåÞôrb³}úœSP¸á–Sçbóm½ÙÜŒT:iŽZÁO[üý¤Ñ> E“,-D„p¶ `®òÅ´O(1‡ó[æº7´âÃ"¯¯Pç*Pþ(vÎÌ2€<È‹z7Ë#‚³ÿ\¤L­„zO;kÞ–©Õ•Ñx¥®l¸f  U$Œ™[ ñʘŠðŠŒ3C¢É*’°N¿Í~]m?ðE1¤ü«bvÛ‡_ŸnVO«Ý÷ÿfïJÒÉyìUj×++9`bõzQè…9í/=•å¬á?}ƒ!¥–(‘Œ Ã5ô*?+mE$@àáéz,!âYà“…svsrh†Û·© 1Ø tY( e/±Øæ“[_Hný ¼ÂgCˆzš"UHF.ɼ×ñÐ Øm|-/ñÖ¿´6w˜8çô ýœ•‡¶³ôÖŽ˜ÒØÓOõ´6§4JË2[ËX*/ÀR¥v—V?4 •l¼>ÿ6ºÍ:ü‹‡aÂ.„ê¸hËw1N±j^ÙòUN±Ça-“G0ǧæ+úîtfÃTÀ˜5‹«ò0€6c ?) ‚ÑÈ QÌXu0M¨A±ÇâgM›L(¿‹·?àˆüìYÃòYÓI l6´±ßggM’ò<£‘•Íš5 L³¸[¢>ÚÇöŠvzxy;ñ¡71U3JÔÙc­‰‰ DH8¥Ñ?Br*‘b×^?±æâúõzë£_›b#!ãíáácfAæ¼ÈèäQ?¦0Öqò„h±hÉØî| v¡Ã#¾¤»ÚÈähæü.^ÀO ïmžÖ_vÿ^ÀÕó„ üÿøöc­\Å@À핹i½h§¼Àx±õ_¬ùs1æEÖŒDº¦,›ù#•Œ&¶ƒTg&0!w0Ò.uöà`¬ì`4Q‹ƒ,{0ê[J÷ƒ1x›< „\"(7u}?Ã;ÛÖ$^ß­o¾?DçcýÓÑJ0<ðÈ”cÍôÈ*ŸÿÁ ƒù^XåãÏx^M Š§4‰ôAUÍl¨ ÅP“ÃA7‚ËB…Lœaà´-Õ8” ߬ªô­„ÄÂÂHÅØ©Ô>>ÕF,ÎC†ºÄØÂ‹G·BmÖׯë”Ѷµ£îÅÝ‹jÙÛ¯¿Êã•¢’76„:TšòÌñ ¤S ESžÙ~OXcuS(æEφ.‡õëYŒÍ£ÙÕ'œEŸ$øŒÆU>„±yKôÉOšSn,+÷ìÝìY“šY³»ŒŒÌ´IÈ»ÃòðÒa4²ÒyÓ³Œ‹Õìl t?5 RêÒ¢âMÖ7i÷[v|³Àñ1¯ð½[yá0ïécßY^}¤Ì{zçÃ…§`Ät<§È”ã ×£6Û5:‹S;õC  ­ ¨¢,¬qå@‚ñDâ@â;ËSlñ~{{qTz¢r@¦þ·£ìrþw™Ð­ a‚³]ö°±gíhªg]ö¬3PÓdÅv_±â\pé£Õ0ZqK†ŽÄ~FèècSç}VäÁçûÈ.u<3~4å%ÆK=¸ùA¤)ïpf  !o.眶b'5cVŒA¯^üï{*ßåÕÃ:VîþÏóóËѱë{ÖCMïÏÞ‰ÿýSL%½_ß¼æLòÞž…Kn€£³Í:rª„q?šÿŠoûÓý®úøz}ÿÛúæãùJº=‰‘së?|Çnl»¯¹ßüôôüûOÏ¿¯_z»»|úéÝ"«aø‡Š ":b¡Ôùí&/…¼£!Nz4žÔ9T߸¯QRØéı2êµ²y½ØÜ}ŠUˆ—u]) PÇ] êÔÒ$‰"劤žv ¸:óMRAáäžgAë½ç’=or¤Dì é¢w[5éZ—¹‰­¤Š9wƒ= ?$žûf$Ú$ƒ‰Ø#ò¸¨(œ”uý54S®=úáÙkõx sƒtà²Çöu¼µfJ‹ ÌÖ ÌF$[£Šâ«P£,"Y“ìñ¸Yaˆ.¶á…!I—`ÿ€Çî!Ç(ï˜áR7NZ^EûO¾ŠÚ‹nhﮞ6/‰›h¢f7ѧùá"šlË‹èSìBÎMºÒ+ z̆!WCÞ]å0ŒËÂúÓÉ;ÍýÐJ¯ ô÷ÝÂ8„ÆöÇ! É«bCh\ú¦À4¼)`Àèóò|S“Œ{ >|øÔ?q„=l]= ªboèñ0AC.^;¥ì³3𨼄3º!ä PÀü%e”•MeSŒFV<Þ! 
cZx\÷\Š€.$s)b4$[“èRiH{¼RÞ€g{ü'û?ýpG6w ðñ•ÕÓž: õ˜ëhÚS{ƒ’›¢6A±2Æ6@¡ŠD<ãucØlP7BBýÙÕH‰PìGV†B‘ˆÄö§ £˜Þ0¤vD±)RÎl¢,¬€÷YÞ×@ Þƒ$»î¤#ï½%R'jUŒf=|ì‘Å6YéûcÓ”µSÇ]ŸMX´l bÖd8MœÒ¸¬4jí ,ÍvÍ~©Ã&tØ?j!$£Ö¬=UfüÏ„¦ãàÉo_ç»T±‚4‡¦Óï M'Ö¯‹ÙÎÇ¢ç_õøÂ* #<=§¯fãUáX,ø” ãüå&›þŒFVŒcjD¨ñô²ÓV€cÔ]Ð;šéŽ©wËn­4ŸV :¥ñÄ¡RÏÄTÕoò¡B”UˆNy‘îøE0¥ÝºÕÉÎ.Ö¢òZQĪo—U*õ$%‹_„ÉÌŸýÈŠðË/,°0~©WÚ[ˆP͘Ôs$Èì—3´p2ýÐ_`ø`×ôèfyœž<ôå@†g%Ë}wwÜ)Á%¥FýüþŠjÁØhËyv%±¥l‰¹Ž;é¿íVYòè(,Œ½»¢m­è“kNy^ì%¿ä?;økøq‡Œ•‡ß_à?©X4wè ÞbìÙ‘‡A§‚·è ^“2œCB!Ð\øâò"˜4̱–)_Á¹<|q²Âa?°"ø2,1ûÒ»eáËZÛ›ôD+šT`Ü+B±år)“Äþsj ÷ý—þü{¹»}\¿}øõOxY?Þ]Ý¿ÞÝ<Ýß§¸‘™]XXøäJaÀU†…>L²RÀ°tSWxü ïh}‹"D§0‚>øÇöm‚‚ë×&(íDïgêýD9ñù‰¨@µ©ú÷øPkÅê_£®eOÑÊ>ä+¦¨Çè„xï‰'U‚9®$ŒÕ…§¬¾ÜŠ£ø }<Ê55ìÏÆ¶ƒ›,Õß&_IèÄEÒ¬žö‡.=ãï˜UIaAÉzy,¬$lµÐM9Å›{’`f‡Ì¹<;Ü’>’ëÎ/ \‘³[ᳫÓìk?°òåVd‚²èÆõ¥£ïH¶nB´+±(M'»àDCê›S>Ÿ1ua¨Ó_Àà‘*ë?-Ðþÿâü±ëÑGÁO9úÔÿÖ©™ßЃKóÐÅÙUì‘c8’Ûe)E<âä1¸Yâ‰3«ÀFyçc¡»Ù º¤C“nXy„XJ´­`î_ñ>vrÖ(âÄe1Ú ‘¤+ Ý_¯ªs·Q«®«;$}1‡uì—´q'èÃÙ2Â¥Âņ@øýãåרTùëýæíõÏ/¼àë«$¢ÛÛ‹««Û×Ûwiù+±—‚­åÂç¿ÎXÍ,ñðVG€„)ß¼³Þ±vºbCÝöŸ¦ÓàŽN’-ʱ°p¾»| ‚“D$}í²·P†áµºˆÅâhÍ€;5Öa¿´¸Ê®ìx›äpóåq­tsñösnvx­º €3]`»ë0»€||I6L˜‡¨ Îqä¦Ýý¸H+Cy±SÄ2ê‰GO~J-(Gh!j!™S¹)Úb±Ä¾Û%X Y(.Y]1²U#(ö!„Ø:{Y(ìûo:µû=^bó –lþ¨:O(gÂIprr•ª«åæL.Xc{œ³$úv@=ÏÙÇû'ý®Í.»§ÂIÓOÓmš‡V°vBd?öIÓ­*µbd#34H? 
NòE°ºBªERå÷£!·HR~­NjWm±‡ú ùª™m»¶%±l"JÓîÇRÄb˜¨#vq×ibסy»ƒØ9È¿A®±@†))ÖÊ~DÁ&V«þ=ä‘u»\òr-6x¥~·3È*ɈòÈVMÚËÇ×¶ÆpXZ]ç§ÁÎŽœâÁJð‹°Ï¿•Lã05ž@×׬ÙõÈgp†½íI>ã[Þ_¯/¯‡¹[xÈø9¬¬‚_ò2ÝêèëÝ„Ð+ë‚«uü+ Õ’Âç=P …5’uò!):2K# b•œ±58Ûb'ôï–(Maã-D I'%ئr¸T„³Då¾ Hpj"0ÚY H®Ç(“FÛîÀ½üùËX÷·Kaøx¯Xy Ó-\Ÿh'iµ^,R«uïO[¥)ÕéÅØšs¤¶Y¬T@MjˆïmЄ”ÆU{@X¶Øc‡’Íh ‹DŠÝoý@£ÂLï?ì‰áŽn~”ãVí@µ°ë r‚¸Î-'œ]rdOˆim²Û(âòXËy„í:c¡ Ó- <9èÊôë·z?<ë¨îž7ãÇCÂÿÅÛó·ue3ø®§™ §qÖ'c5ùŸg»†Þ@lx,aÞ'ù° ™T>ûÈNm¼6±HLüÂ9Ûßp$)o@'jèçÒ¤*¶8üb‹°–™¨Â1˜='™xl#O!6÷»øf¯xøúõíþ6æI¬c¨J7Îî6_Ôh{ȨøC£8Ìû6{øíõûº¶Í¶¥+dó”`9{ÃÑׯÊO›v Æåv=‚mSå˜èj ÆY—uLŒ1wLØ'eÀ÷Æjã—(ß)3h¶£Áõ!Y1cî“©ú+¿?¿~ÛÊyÜßÄü¨·?¿$?­cZº`úÂ+ag'%f…”N>`¦2»­»ÂZ¯0*Rç‚ù®ØMvJ°Ö¯ll¯ØFMI7Y)é÷lÁzΣ7&Ó¬ÕŠu#ÅÀÂð-Ý{Ó©M8ÁºYøŒ2MÑbÖME¬Û8ß _ytœè#Uý6Ì[ÈZÓ5çî=YùcQ?ÞýáÎìk݆›Í¯Ïý³ßìê›lVÏ¿=­ž_¿Ö-Cß™ð]"¹bD„¶ÝÕ;ÍÄëúåAO¸MRZ(Ñè£Êî<õ<Ñà”n Jò•§âüEÛî¦%nRYõ"Zk êIÑ@ª½ÚÈRM}klƒ*Z"äô0x™mm–BínjÈ‘Äu>M¾Ñ•ú·¾dAP*Óvo—6I88\–"¡¡¾ †}·Ë…8ðªt(Ç´þ|`’1´¥H¾„"á®sVEjs†÷œeÛ!2 CTÆïä?9+l×e¢YVXÒè¿Ôá­õSv˜ÿ‚yWEÔwâú7;9c^’‘EÓ,vÝr_¾?<\lÿ·®ˆÅ9˜në‚ÓM1GÀž}¨jªÍNÂ9ÞYMù•¸€$yV]U—_ !)92F+ʤ®[ð´0eòÖwvhãŽK•7Å-”½%DÖijIlY W^æôY'~×Uáúóh0œæÑAú$.î)Wº°(¸#W3’®Ò4¢çìöHÃp–@}º…IØG¡È/—_k¿R‘ï¢Ö”,Ì¡d–GÛÅò¬ˆhÈôô[^ŸÖW÷Oñhú ýö4á¾¼¼>?®ßîÖß7ßdS§E¦+ Öå>å*Ç2Æ ”6 Ågh;fãbƒ,2b³}sHÙI6›ÑCòêæÝ2Mˆ¾µ B sÜC•ã2޲àjÉUÛ§á’à!u0/“‚Ö‹óù%‘nÓ=²J›5¡€æÉ ]˜ì"¶¿mW³êxx|ìÏGšœçJè‹~k+ÓyuËr×þò¶N¡C‰{WP'˜r?“úkýÓlÛjÇÇÅæÝYJô ä”¬*ß[²Á~/m-Õ¨E#{4fmwbÛ…9pÐzrÀíõõ5_QenÍq6f÷‚÷AôiÿvŠnY›–àa˜±3±àÌr0]²QÞ—_K>ã@½×|•œòyp“d¾îÈ,èÌÐÆú¥ñM¸¿»³)1v¿²âÓQ×4J£gGI˜&(ŽÒ,D§zN>†Þ}CIv á!:]$Ž­Ü–™gcuaV¬®$ÚÔsRÙöVºi6I¡k1=^]6½Ì™å38TÈŽMàN§¦TB,Š˜%^¬`؃„‚qÂf—ÆÛ«Êu«¼C›/[ÙðŸß úó Ž­7/z´Æøö÷×û·?/t´Ï[ñhÖÑTïºÎ‹wíSLÜ*^ Šx+™ì ä9-y˜ ûïé³™ÿn 8†µác.GM6VG~›Œ}@nGöiÀm· \ð5ܶÉölŸËâVz@3…Ç‹Vn“m*ú4ycˆ%ü}¥r‡å"ÆåZÊʘB¢ç7¬¤2IF–j°aYì**ï‚<.¼aá@×§5uÌì䘺Æy²VØæ:ƒé‰ÿïVÌm2Í>ô MâÄð—)½™U8e®º¢2Ф–&8|× ÆcÛ5$QºRÔ È’(ñ²  é’þ‘¡ÚÐ(qL‚â–Fe±Óûbs,<Îä†Ô„D©Š´Î-¯ŸT‘××/zN-‚ïXP¿,„ÎUj‡q·›§Í—AAMùü°›çëoºFo¿^à·Wú£ |€»‚/ò„Û˜ØaÖèŸq”5V»û˜¸X‘0›_×òæ„”ªß[¦ÉuL|iÖÙ…±¥wõ™ÎEJqi2FA½°DÖ,uáÑ×13"ñEˆÐs»I-b‚ÚÚ[ƒµB¥Íc­-1@w“/y P+ïïÔM×ÈrP@] °vi î½J£­$QÀMrÒ ¤m/®P„ŒXîýö¸oè:Õ(͘_9´hYü_AFéñòúNwÚ6.1ÜŸìÞé×ԹÉ>@W' “òf\Ý„~&˜µžo×—.0Ìp:\¡W/-ïA‡t’Í» 
Þ}¥bFï˜EáÈÚì2ƒÖ´&¥u¤eTkÄhWGÕ¦)Tfè­,QmŒ“‘â)j³¾ªóÔfæ\ŒÖ·ƒè)ž>¬Aež¨CÉæMES.žÛb\m&”Êí­,Imþ­”60Emù1‡Yž¾*5]=ïþ€£÷‘’$ôó-lÿÌ—È·ð3žv·›^…æèûÌÇÚH½Ö7¥³.³Þ ¤*{)±_­¬½_ý±×¡öy>¢Šhl&PÑ›ÆÙ¨máίä‰öÇý^H™ý^ÉßÜïòâÝ”Ýå•üÅc¨5ºÃSP+©éŽÔHJ•@­é×zŒdتUþa³kL¶ºPoWoei!³d¤µ;³#uÕév¼UÈ.ùq„l½R‹•š,ôWé?›³ùÖópsãYÄ>˜³;ÑÁ×è+Cù­΢%—• wVÔÎ…û`U› ôsÈçͨâc&N{¾DçtÚN äxPgúÏäo>è’Æ|ÿ™üÅÃþ“ƒdŽuÁÊù›–ÊÏÊZrˆìü°A¬N:{˜=ë²e¯B|z…%Ù¤ƒIâŽÁ~„´3â~kÑGc0‹v6MDãs²¼FÉ1ìpñkÉJÚHmÔ "8¡¿Òs²ÚÈ'ý޲׳2èg+”կܰœMðÞrCÙf`´vÚ´8¥ …´é§{X™äwOyõ×Èh|MK1àÖnîvl!­*nååR‚°²Ôâ*ßä™ïk~´·‹Í§õfs=`Ç“:nÇ›G Q,‘Âjá$ú4gÄŽ„0j eH©]›!6ÐÊ¿=ÈÑ’l»Bj^é/¡€îöp6•¼Ô‰P`+“Ëð—t³Ù§ø Áî­ ³_!±u&*;*;› |àT†UÄ/ BóUÄ0ª¼Ñ² —ÄÒ÷ {ÅÉQM!o|þtõ]rÐt·ò®¯‹#ùi`„þÕ”Àhüéýx܆Ãqâ½;9*ÿVŽ|BŒ•àF6'¨ó »õ4ÌsîXDéxÇÏb¬T"ÀXiBµ8¥A¡H ±e£¿©Õ®Õaèm1qÆJX:?ýסceÿÙŒ•‚4hÂDÆJ¿0l¸•É1>ŒŽ]q㣗‚¤@‚–¢bÚÞ­>òqâ´‡ áÏþL#Ê6Ïýüø²9‚YíÞ2ß͇¯Dã-®è›ÙâCÃÁæ|åÕ|x¦á«Nd}€”W¯Љb¨,pzz݃4QB~Ë[dürw+¢à=˾ 4ÿÚl™ &4~7~ ç\Ö¬ÝUçå¬N3®­¦¬;¾ÛpÈlE£gåNñ™Tx‚ÐP¦B´L30¨Y­8—u¶•™Óh&‘¬eà–]²KgNBÅØÆÕq+»v¤Ñ«‚T|½•%”ìäÒHÞ¸V4ÎßžÑAÊÃ_ó¸W ÇénŠ#5Š£JÈ:Ǻ2õs+}K¡…$•¿ñ*õ ç vÅÊs”úþ)ƒ€A[d¬„¬ÓJ[~@€Zú0„´@¬ó>¼lo¯#ö½ýLÌ3#â÷$xYÑ–E3%Ã(Eí4˜×‰læuòÊyîc³½DæðÕŽ9 ‹A§Ñ—A˜×nY+uM€y òÙ¡¨¬¨°öd–3ºÓKf~É œÅyZÕÆy§^BR*ÊË´ƒŠõšÕ2K±ÚÖ %*>5ÎÔœfú¸y¸Ý®Ù¢¢=©^í-´öVPóõ’`rœsÛß9!%š:´½´,‹ Ç"œŽóN²J©ZTgãvº#;áZÛ ®H8î/s)g…™b§ çŽj¼2‡¤¡@8ÎK&ÁÎ2vX-™Pà’Û)¿6h“kÚ5 ËÎÙ­ryà˜Vè rÆ)# çHÝ2Á5ƒ/7ŸX=Þ*ôäzxán’­>V®®ŽÚ…Ê8Q!ùD[[QoMKßL±óÿsw¹»½k5s Y&ý¶˜¦&ã`¾–\*ÑŒ„–5jPóG‰OÛÚ¥\gs‚Ù«‘‰Ô¡ä’ñu´ÅJµôT¤Ì¥iŠçtÞÝá„03—f+IɉÛ#vî¢ßi‡.o±ÑAÉ‘±ßÚµÆP|Ó´¢c¼Õ“[¼åm¥2Öè’»&nùÙ,Qå1¢þp¢¤Mýñ¶Ö(5~×eÑÐ8twëo±Ëu0gŽýt0¬]åçÒäiWRõëé…д?‡ä¹Ç9#tÙ$rIÚµB%+·>ÂÜleøxdí5φ͢µÐ2XÕÇÖû fOø¾z¹Þ>OÃg õ°¬%ø¸0ËÃ<.fôÙA>(óÚ |Ja1Ö8ã ÅËv1,ŒÄ¢%JA§ÚE‘d³ñzµHrŠSEÿ”u褩QìñS›5¶ ë{®7·»Ï^èCÆmüãµJ@Q$N³ØG‚!!ä´Щ ‡«A§RÌ, ‘ j£ùF'Œ06zRe˜\°'˜2u!_ÔU¾YyÊQ%Fáy‰ÏṠ1Ø!•ücÃ;÷qûôÜ>´£{e©Û®7O—¿ÂrççÁõ–I>S9’ßT•¿“5úA²ð¥®h+›ñ[ï^´FMåñ§«YJäYÊ9] @0vjåfî6.W¡áÃêÇŽ"¥d 0ŽmÐ-fO@¥2ÊI-&›ÛC‰9(W‚âWubš=9dÃnôä„evÑ‚$Ðqî7ë´²q/ŠaæÉ½TÊôвÙóì 4iKHtyE;ÞUpŽ4†Ó Ö:$-ÐB•MaJrAñ¾IN.ä8ña¢çSÏÔhmâp ØÍ'¬Ðr´]¦hÖÈn—©€ •­Ñ—ó”í°FŽÈÏR7zx{%n†î²v,17ö«yèl¤TëoŽ‘Ír¿Úª9è 5!Ù„VÆ 
©ñŒJ7/¾å}…ÒÓ8Æ3QR&À5m‚=‰•ÉE9Ãïz\³àŒÈ+ðtìTÅ\?(Ú»ÅçL7õ•ÞhÏ»Õýêc¿eýôxñ´ýxÏ?[¯&ź(ÔH`SEÂÑ5–æ\6㟱W"?±ÎS2ÓOÈÞ‹Ë—œý´ÎXJ+÷%³ŸíÍÉAUk0êœ,èÒ±î<’¸ÍÆ8–Ý”W&%‚‘˜b %&ª búÓÀk¬©æd‰Û˜ØÐúÚØòx—s¼0@±o"}qP‹ALHMW¬>ÂB—¨$ÚùêeG%m¶ŠnªÀPåáìrä‘.KADÎßR@=ˆK˜ŒQôªØ’)Ú-¹4 G¢q‘À–80‘Ðûºp$ÆÞù"–}*a#(RÃÑVl aaÛ&Œš[Šƒš&«þmU†XvË„‘¯èDŽKñ³Ë¦95²&o\ô£ãñ†ÅÿÍ‹ÑLM=I¥Ñt<)¡gÉáÛ‡¡‹Kë¢xHä{º´|˜ùb% l©%¾Áš“m}ìbKÅÇôD'€ÚA ¸º]]ÈŸ³«å›ùãM!Ô´ñG¬hYƒírm‰ô²}º^[‹ûc¼D+=ÅzÜoßž-–÷gÆø<áÆ$ªª±{ܤªVðš²Xl÷¦ˆë¨ 4e&P'í#Çq¨·c&P¤—ÄAÜO#¨à²¦"ØbsL(¡8ëÈ;p-~vÌVÃÛPƒ} 7Z Ú{iÇ·‚MM@l £ÍV×6®_"c+äXhè]‘#g“V…h°«! ·—þпÎ÷‹²~"qè»®BööëqùEÙJÞ•†›¦ú¬Ëâðå@XÒrb{x'°üê,8„)dàÚBΕ¶;"Mã! /’Ɉ‡ìx×§œ–dvpK­â!вGIŸ˜U$uÆ€mÇÙú޳ü›wÇQMѵe €Õ1æÍ/ŒÿC tše¨R w’v£çOšs|>äzñÎÌÙ:<Ùpwž €O÷CX¹*cL‹‡õà䱭уs}ºŠ˜1Ò ºŠ2{&!îmRÀ$Ò?¢ 9¨SQ—P’ÝÀßqêjÇ@1RK¯ Ó%>îD¨;:‰xˆ‰‘[$W“RjñÜR˜ù Ìé1—B¡3Ó¬C_DÒ(ˆcíµ®À¥p^‡{뎠ït#Q$ +zÞH›TqÊ—¯¯oU¦«‹õÅsùe³]îl¶²[ˆY¿¢Ð?´cJ6(Ë!‰m<)F}à¬|¨E¬Ë‡´Ã‡µIpԵ˱¬'$Ø!\¼·bÃ:a﷌ڭö™ŽGmZ¥7±©"Ö–hÅl½¥Xb`Ag`bÝ)'ê³ù¸ög÷b6Ò6J÷µíŠY‡;8“¶'øƒÚ Îb¨*]iUµ-µ1âÚÙ5TóË Ú£$’Â;3…:„'7¥™£y j¾YÈÙ!>c>:‹ã-É.isŸdÔ0†%8Ù\…ªb—è¦Dµ—è[à˜<ŒkÝ Î.¡ÏˆnF7Fî·$Ô“(E7¢TO%é­lx¼mÆŠ_Œ&¼d^ìÖ“cƒ ¶Ž-ƹI7޶ˆ Ù1éöúñúæj Àeø™.¢g«ýƒU¶Ë9ˆbdÄù—ˆð\šôäMAKz²¢§ÚÊY¹.-ÞÖÐ7é™å(”qíX{DJ_«ãMADM™G4mŠ5gz5EúRæ'2ÉÍ-ਟht¼u46wÉ‚ê–$Ú8ŠŒäŠF×¼‘;ªRpÞôÎÀ‰”=rjnUQÖp â{plµm7b¢–ö»KµìMéìêäzÔAíj+Õ°ÇàzÉY—PˆŠÙ€0Î4mæ<P[…ÿЋüàFP‰s/GpJë @𠈥ì^»Îûž¯]Ͻ’^A1U¬DÏ£ K$ÁØhpè7)í=Vï'¹´A]’xÊkŠŒÐæ´ªù/ÝùDÌÄà·d8ÐNúŽõ¦wz¶´ qˆzPÏ‘˜*¿áÂnw†(rÏ4ÀS® iNô¹ùÉ^p–^ ì¸*1'QúövV”æà]aæv[zGÒµÛ¢«ËÑÊ–ˆQÆì/f´ø’Ó]Oâhâ{ëFGµÄüZ6uæ}1ážÉ’QBÌ£˜1zS T ·3 uÌâx®:ïlp º¿NhA ˜pàGzõ!(¯Æ<š'NóŠkš¤UÝ’A›S­FüX2Í ÆyªSÎݹˆ9@êGqQ•=üè)ïΫZ}ʧ%žêÔjG@U8%;sBýÕÉ÷jjÁ}’ºÀe,u¬5Pƒ4u@£)4°ÉÚ–<±Ô‹ºä€;‘ª®qpÖö§2o“Tf,ëµtJ*3›GeŽò“hª‡´œlt_wä£ÄÒâ@1ì[U¡%eˆDx–ÇYzÄ0vÔ‡À˜féù"¥F¤!òÞÞPÉ8hP¬We‚ ÝÝH',’x»Úíà˜NÑ膔JÝŽÔÕªU gŒ®‘ »Cp€uIt`³i¤9 ½ã³}¥FôÐEØ‚UÈèb]¤­MþýáY@žU@OˆÎj^:kvÌ­ÝrÆVõzˆOÔ]ÛA®»dQ3Dd4D'dor/›½é”ªG‹¡ÿDIˆ)" m•—ˆ;(QȦîx°Y%äêjžû4=µHÔ¥O.°‡à^d½¢uÏܨÆCäI …,^‹'YÉË/V(o(Î)VŒ³"Û4é“<U+Д´”·8”Ü}XOåì’yL…$2ìNX­°/¥Z‘•à¨Ö¯Vãz# *·‹‰›Ó²†Îè9 ³±)GLÌâˆQŒJʾ1‹Éâ;žS Gs- Ä2Ÿ`cwŒ5™¸›‚¯a5zîzšvúk›16;òq=wú•ù õLÌ嶺¿—ÿ•½±YÊ+ön¾øMþeýÉæå*( °§ŸC›d¡ŸãÅ™”ÀžêÁ˜‹¶yË!æupd|’*x£þpHù?[rj“ÝSä#žÖÿ!ÂÞ”Ð*g—†ž 
äpLc¶†¦$e%-z¬á-¶ì=oÌhºÀ8‚6R„þô-« Ù+‹$¦Wé­H=-íg bdsñÙÙ‰tG¤ÓÇå*7¦zšÇŒ)›${ó¶(ZS–ØÖûÓS6Ô×\äœl=V?Ç ñ@UÅ|…ô‚--«ŒœùŽÊ³¶ì­}4¦gˆQ¨å›NÙP¡«í´4a¤?F VÚ€P$¨–V”QÛ?ÚgÉlåaD/«2uáÞ ±Ž0‰NŒâ¡;¢ð)„PŒJw´å>°ùš·aìÏ\ñ<8Ø„uŸcÀUyíÌÇõ3+ñF®™$—G¸ ®¨ª |DפŒZúœ6¢³às|N?–±yÙ$°ë“@šùœÆƒ/˜5³™˜*Ÿ“)ô÷9KŽ%E‹AÁƒR ‚¡„Ü2a—5kæ#Ôõ·<ÿ‡uèõR«Â†@˜P ÖBDùí“B0–5V‹ŠŒ:À(4§¦F·äЦ­ÚˆFž§è ëxlD2p‡Y"ËrLÏ{ôf9_-÷˜µ74‘:×÷n~;+ëXµï—‹;YÔ§í¸¹[ü^†ŠJþȉ$v¯ëâþ¼fC8´Ÿvõ W v39cpì%¹™V­¤­cõ;à Í(v x"ÒDèÑâ͵Ȯ̿dkŽH:D¨8Gø ÕdÔ!¡|¬Nuî §¥c(¢É` €àâ§µ*‰f²%‰V «F,Zt%÷‘¶–Ùš¸Náb†U±iIÇ20³Á“´¹æufy¹E+’˜©Py`c°ux¿>tz”xË»M…¾Ïàd‹ aãJQZ Ê¨Â_ޱ•à¨ww_ˆ‡}ý} V"ïJHëÄ6Ž’Nyn5²æ±Þ’L¯_¶¶3\feƒsXâ0¾{=!pLÕ÷ESÁ‰ù„õûòê 9s•Ö¢WÊñ*U£ Ý!öífT{·tdƒ1ŽOˆ±YûG!ØÛÓ¶Ô/±rV™xšÔ…éå‹%ÁÒ.Ì6Aw+çÎ= Î%{Ü9€¦=Gí>…$DØ–¸šØ}"!Í=Xùô8l†1`æÎ}–f3ʽc Ý.žrrþ܈"ù5ÓF*'g“ ýHéÍÔÆúÄ}Ñ P.. öYwuD 1Õ Í÷£÷ÕOs}Ñ[…(¼ø‡¼Ïãê«ÉI²‹Ä5Ê`Óš‚ϽâOì·+’x1Áüë»dÎ>d/H.x'‘a¬P†q`ó¼=7:µîŵÎÎÁ?˹ÿ&FsyùÃMXÀ0{”¼¿[Šsõ¼Z¡žÔ“Óog¯fÜ^þ°¸{÷~~¿¼üAöÍÛåÃåOÿù×YÒ7±Ýî¶uïÅi{!)ÁH ¯éìûÅLJsKáu\˜Åk–ç ¯öîîêéÍŒ¼Ùêq±X®V—?l–ýëûLJËNýZæ7Ë_‡Òš8¢Õ]™S@ž½z5{#‘Ý£ŠñÕ«ç‹õH9Þ§è-óÔuýˆ¸CF‰î(;F³ð¿ÌäƒÛÕ|1l¿gaªlµuîíÍüfµ<Ö+³ÚBq¶…ßQ‚Âq å9:3³Y¸Mò­>­,Ë@iQä ©Ä@E0–|•Š&öö”dU6¦x?äŽöøv@¹<ÛYª`Nbª²¦eö™v ”£25í[·ÌR0¦Ü.MûÖcÖHIÅT3rÿ\yßzPìw¼F˜Ù3 Xú¡ÀòåsðË5´¸_|v¬ß†7ìèMk¿ð»·z•N=D%Á0ÞLߣ ³F¦4´\"¦þCLCɺ6|q ŸdoZC_êþþî~Øv²ÿ¨Yì›åÕtYy'ò4)€òâªy‹ÓÎ2ô¼‰d¯3„‡6òѱ@84ÎŒDS*ÓÑÔ—u6é 6aàg¦Í‰ ÛF9Ëí”è QÔ#¥o'í YÔѦ´O,ódªRøÂÐ…õ³Ï?<ûßûùï?þý?.gÿ\ÈÖúeöÏ R˜ý ý2{+Ê»œ­?8ßøÐÿs;¿ÿôóýe°Ñ²îfﯖƒW—ºÀÛåEÌCs9“»˜ýëì»!Àx'ú½^Í7w+­v'VàÎ!xËÁ‡©ÝÃ#DÓkZO{ëlü²ÿýËš¿-¿Ï÷÷zß s‰ß¯ôÖ’7‘ûï}ÂŽ^É?êÑœ-ÿX,—W j£Õyk!cuvH3¬W÷ãÛ[µâ®o_1î~×…Ï›óõr1ÔülÝ¢®ÛïÅmV/Ï“ˆv~s¶þסAAŽ$ËÑC$YRx§ôâYb&ôyL_¼<»ÊNdù9"WÚ‰à~ú@¦B´­q.²—X“#Qu°mf°.FS©¸u¢u¬ŽE²'ýÑzÈzá!UÇÚZYF°®o%žYt˜›M¾ZîØèB•Úpgî=Î:gÁÔ—±¢ËW›vi´~TmresU&ý–­•å©M®bÙOJÔÆšÈ¡s¡L¶“ÀºûÍþxèp~À[ IÃjf’%‚9A4ÚþîY@ê¼,(mð Ûé—|qú¥Á+ÊÅ4Ûþç7ÖV„\jû‹BÁ6šßȬ†°ùÕµÏ.ŽãÕ£ˆùâùÿîgm€ÊòÇØnílp¶¾Þ1ö}Ƕ1Éíë|&åÈž2–ìÉGôõ½?>ûò5ÊxŠrôò•`6ÂØåK&IQö´°¬»W^ ¼G´w/¹(añ!çCoã#RL´yê’IäëûW`C _ÍæL˜"Û³E±…-šð"Û6 ]#5á=ŽÙ.6QQ©ªlㄸhA ØúxòmÅÂx¼G ŽÚ.Ž)@í•å/íƒÈX`¼FÕ6n¼Øô7^¤v;×ÁóàBî±B#w¢×|æ¦q,¶'zÍ£ÒYË“AgÖØ@Å—zw¨é8hàÞ…| Šj‡&Ž™Hv 
guÌD²I–¦­¥åÙH±ÚѺP’cï£7T¥8§@üZA[#«û\Κ¿¾YêœÜßîîÞïiP‡l–ÃÝ¥"Dš?Ï´{½¼úò˜T ½—øyŒáRq¨»QWœ7s2{ —O«ù^ßvv½™õ[,¯?,¯ž«ÊûÿãîÚzÉuô_ öe^N»EQ$¥³ï `X`Ÿ÷Áí¸'>Nr’ô fý’UîØ‰åº¨$O2˜iLO.å)}¼ˆü(ªÜÔ›³¿åž±_Ûþ1»§«»û߯nïß>^=߬ï®^$²ê–R-nt9Ž`jÕw·4c|^d.CªßE$+=D*ï^¨?Ü_Ÿa@{ÜÌ#åÂèwB"OB…Ól¯MV)åg…ûZŇûGx߸hÕä|r–;=‘Êq䚤î@Ê@“Ødm¬âäc=æIQŸØ<2Çq™ú¤¾k”V£ºýL©‘lwø¹ï_·¿Þ[4Ôîä<ÿ{YBÁeô²•Ô$?›*G lñ29¦ÖaÂ[œþÈ2áÇráZ9}„/!~Ô°W ”grûìëó$¿›z2ÛoÆöY¦ÇW‚°Ž8` mgIDUÁöoí$ËÃùÂÆ>°l£„’$°4J%îЫ=‘9VKÔß„=6¶?Lò·PžìùHU|!}m² <˜¥ mbZ†ÒÔöêÃΙÛ3‹¾ò†lË'Ý}èÎ ¯Ãô÷ žh]ÆÏX—ìùBýüH ¸p¶îà2"gɸÃvT#ɹ‹{W5´ ™Ð6úyäÏõÜ„³5Š'”Eõé@È™MÀLÞèóõ4Êä>ʘ ^˜.³O‹LlÛ®cA»(\|ãÅo¼hÿЋ) P]FçøéÄöOº K›p㥯£Ff³L#%›Å°(Îô‘±¿9 ¹rbÈ%5íé‚=ÛHá¹™3áð„JÂ5èÔÞÖ1m„z%mú³÷Ÿ5pÉ.6óÝ/Jp‰q€¶½Ù£xß×;^æ†î¥èáPþð‰eóå+‡/›ôéŸÿüzý4ëúÔDz7Î"Y„ö\V†!šu\–^ž"®zYgÝ Æ)ÔÔ\L´™R¶„ìE2uâ,VgTßg–1±‰¡nY6LBk/Û‰nÞœ—m÷G >\4éÌ“fãêqôRštž gÔ­M ý«ô-®éɵ»´¬…¸Ù®o»¶” ù∠Ò’‹¹ˆ 8?#RÒˆ#̽—ëW?ˆ…nÖu:³Uúàøuz×Ñ8†% †G«­s®û “K3À0uýÉ/éëØ¸Q©—r†6²Ó“úÕò´‘®îG‹³MçÎìYíøè–Å= Ík°B®%¤Í(‘}º¿Ýnnw×÷¿ßÝÞ¯¯»òØý—ßVÄNw.ƒsñ¼fôTóˆ XÂýf¼¾Öœ7wšWiÖ+_ò6ŠÕÌ÷Þ*&ÓXõ’ ¶¯ïy ·ÉU[#Š´S sà6@ti‰ï©«KÐoMÊ.W½¤zŠ"\¥½a*ð‚s“&ù¸ÙÞFàrVïì]raQÌcý4ÛÔ  ´+”¨ê›±ù"à%,À]ÐC!ÖÚÃónÓ뺦¼BpvSM£,ÒX«—Ês®éñj+€e¿Íا`IêúˆÐ8ékb¦\ÒW$ij÷¡$IªFéÞ×½wPI¤b¢ñî!¾«ÑçäfŒ>×÷ƒ‹©|rx÷¦ðq…`SŒ‡\ñ¨*{„]Ù}d!h¬AhÓrË… >´¢L-B”Åm~åœóŠç•óìS9¨0RðŠý­î®d ¾·å÷ˆoœ S†8^<\ÄÛ¦) :Aý ³gG+¨àí·z<Ñ ê^Í ×¼[pŒ«¢i`·3®uŠòIO4 ]&wùËAWçn°WŸº1§ólÂ{œgCj;šËA·{ÄÛMö¡,úñ–ìóºŒr!Dc™loèvCëŽ'' VjùÙ¨ÓlŒýøû¯µ½èÍ[ûü?ÝþúsI$†ò:¢î¿~ÜÉz¦‘Šwd÷øB6ÀÍ‹Ãðט"8IÖÃ~R÷_Ò2«›ƒ:F?gˆ`§Lï=ˆ/eÑ6¨±ò¥z)d-³Š/e[08‡Sùd{+@—g”úG8iŠð½˜Åg|)›!èÁ¿¥Þû8 ÁSW‹ß£«e+°:2~Ɉñ#¶$‘ô$åBÐGùÀBÐØÇë8±K…`p ¨Mª#D jà=¤:-w'ÇŽ¾£xFÿ,ÞCÄc˜p§T’áÐÈÅa«Löƒ8\|Я²åÏGK¨•âÑÅɤÝ»yOÈ/96¿©©YîåÌÙ ‡‹AGÿÍpôâµ®ã0nvå=š]YùÐ_ÅÅò=¦ˆØ´)8ûÌ\&€ÑÃå+\|ù”•“ÍåŠ÷Ö™í£Qªë °Dø€Á7¾Ê‘òÂw–"KÃC2—L l¯D!9Xpô^Ò’ÉÆa…lc$0ÂÌÉÆöéÞÙR(7ô6l×úDIÜbV«4yŽKp¬>nðÃÅÞºoÕ÷D¦aÀV¹ÑÆÇK›4ÇE_‹ˆÝä;Œî³;)ñ’ç‘°ñùW9ú˜2ÅÜÁe)¼¡²o;%”/7±ï8(?p’üñþáæë÷íóÛý¶ßo¾ìo®ïv»L½šOE#ú >ùx"n¢Ò™||®ª­ÛžÒ ÈYM\4©È%fq‘ýRh7š4Øõì¢ÉÂÊEˆ‰G¡ (;¶èhi“ I_KCAˆ2šD@?f4IÛºÙ½9Mºäˆ"]`†¨£ #’…žŸº/|îéùß‚ºrÐ9yøñ ;âE¸ròìsБÔéô‰ÑqqôÛ=‚Œ°rõÒåÙœŠ+^& t‹»GH =Gý‡A¸©Ó 
=¼¥PDpÌÔ·Ç+Îan¿B'ÙÁG/K¨Bo[È3Lm†¯u÷¢ì¥B†LØ``ô+#Ì!uok¦Q6¹JŒMi83—§ð»G0žÍ^ò÷vŸø¢§ÿxImn¶›oW›¶C¿wœË¿<])êê›<Ýì`HÕ ¤ «ƒn#ô«ûǯwv¬žo¶WõÐSõÛý7»°ø©¶/ÛÍú‡©øÕº¯vOw¿è1ݯ^Ÿw{¯û©ÿnw_ê¡ëɉH £qùŧÄêÊj¤_¾ø”(µåç·’³D ×ú\™FÄ.~Øt ¹•‘M+†Ò¤Yÿ~ÉVÿ²ÚYïbÌj…ä?\ÆÒ$§˜ãipdð¨ðg‹eew73W¡ ñ™R°øQJÁz]`ˆäÜ]xïÛ*#­Pø´jƒ­ëÕw ßsÊü‡R†QP8Y¤ ð­QIŸóYTòìöÃzÛdò.€NêÞ» q‰i@ãm­¤=ÍpÎ@bfç?I Ñ3]k,ׂ>¥~ฯùg‡ C÷§íæqû|ܰÞå>ìÖê§>~²¿«ûi³}|þ¤¡ÑÒŠi’ŒðûG)ȶT§,B¤™~™¬3½–Õ2‚ã1: ˆ0X!²—åš`Ž–\!#°ß›!Æ0qrµ£Fž³™˜Oïû%«—ìRÌûÂuÙ›ê¦ÊÏùLmžT“{‡Õ$+@VUKqwBÿN¸ä*WúSN$‰Ì»Jï>=iäÀ©´+°„‹ôlVhÿ}Šõ—Û­ÕbþçýýÃɵ•¹»Û®JóïêwRü÷+3»íõák.C³ IGÛÕ êY†ÏAc¿TÎAãa-¿Ø»^íöÕ¤›íî·íõkà³Ê]Ç»âÞßNŸ°_×þ!ºïTïª{ÝìÏ7뻫i¬º¥¿~ºm êZs†õÇŸÝj¹Ý#ˆRÖõ’F|)á; Q|ùî¡R}_¨Þ$gQf‘t©Ü¿ëhÂAe(9¨ n:E܈G±D®E,*2a;Œ%$C?ë>‚]È!Ä‘«P±ëkoœå§þÝXiÑ1çÆ}*gL‰1CÅ®K¶T(ÆËP)¦ICçSÄêTŠ¥@sVïI=„¸ ÞBxçÈ.çGCxÿŸV›»þÙ|µÿ×oü~ÿøíÓúùy½¹±cøéZ~·Ë½]ò°ÓíM DFó5î*õìƒ3Ö®8ÓÌÔÁÀ€í™ ˜Ì<Ÿµ º=A-Œz0úQÓ8; î Ó*–ÁΔÆUSû!§m¨Q„fß[†`w¹!ºd ë%o€BU†-@Wy*Ò{C¨³»¬ÛÛ-Ú%˜‘È¢pô—f¾ÿº]?ÿxÜþªÁؘZÆ~}™©ˆçuæÁæ°,0ª³XPž¢úqŠ x_ zÀ:ŒJy™=°}F!$³V6æÇ ‚ Vr¥WÇ’«bôµCŒ>ð‹€à8⢳޺¯“sŽ¥[²XuKž˜c]‹àü¤XQ.¶›³;!™\\´°Á §¸ÒØjä=LØØ'Ç5>›•òI!ùrÉOÀî€eØ­àäÔ©j”ò9H«^"'Y)kò cଈ#}¥Ò 8ëãòàü"š:#Luw6v8“‹âÓ¢#¸58ë/«½ÌÍ0UÑJDð©Ç2¤5Ú«?ã58œÕ*wÓ³iUƒ¸@«Î"4eyº[WMèËNÈž7삞—² &A'€*Ç’ÞQÇd£àiî´‹aÉTPÕ²`ò£™pP¿Êñ(€rv¦Ð‘êàgôhœ~~Ží 'M 5•¦ŠÙçðÓŠ\:3ä"V̓‡I¨ äy2jŽŸôsz5|–!dJMR£Bh;ïþR©Ëp@‹\(¹g´ë,òït.Œã#‡˜ÆðúìMáA •2¨‹)<ã¦Pß­#³]rÐèSs€dL˜ÈÀŠ 7~ º¿.Tj\‹‰|÷ÀÔ¢™T‹äA.=AíP¨¶þq½{ž‡˜Šˆçezà.BLtXBGâ*9IùioÄS 7{…'‰)Žâ&hP;êW‚O!Ë_ò"‹: «êVÆèÍͱ0áÐaãn¡^Ìr«ÞŒRì£7Ÿž†“Ð$.ž–9õgÕÇúÝoÁÛñÛu0“K¨& ÞA¦Ó {w_wK$ï~½Ó£³ÿ~WœrüÝ•ÝEά“ä¤\A€v?!~î ǽþ§QBt¶PëÁ³í-ýpŒÃEÈÑp÷@/à˜û$XŸ»²cbJsð™c· ŸÙscŽ“sÀ >cbª+òŽf KBW=_Z1ç”"E·Ì(«/í ºuìï54BõïëÍž¥Þ4îŸüZú¯~bìfqîãZ•5Œ*uÜøäK¦ÊªOi¶Ï=[e³µ°¤ì¡Û§ÖS★°yž2êÀû” p,Ù*‚\Rûà%̰5À"5&(ìåìsœ3ª©èÀ A)‰ÔM|†SSá@¼GØj¸gеȴižžnym—Ÿ¼yÚ=ÝivsTÃ>¦¤9jUY]Á® @ÁðNh3æqnCñ,ù˜”YÂ_fNl[j˜çhÌœaçiÌœ d;nŽZÅšØ[cäpak¢+ð­‰UZÄŒ1ÑC°ŒZ¾†®.“=x®\Uýž©åñ©…í}3€}.ðOÎ8uñßÓÓ¶‹þöõìúµ’ô’õÌ5„’¶„¨ç<¤FÙ¥aÖK%Ù¦ÁÈ!;êïËh*ÉhæsÈ~Vd×·Ÿ. 
숭IF瞙֭8)æRSÖ-À‹“îGƒ§X=¡4Ž -5ê´M•Arâ |ûÑ‹‡Û”ŸÝD±ñîZU³{þãóýowÈÏ/Ì‚dòMÓªuΤ²aŒs%V ƒû}Îá»Urq?îz„);ûX@5è€úýì9Â…a˜Z“pñ4Ÿ¿WU ‘éseR“ ÌäX‚-5Ê -ð20Å–Éz‹(tÇ­ÝnÖëÍîy7Þê“ù•wœtG.á 2"ò`I²¹u.9yµ愹$Ù±ß6ºkÆÚRÜ*ªK(£UƒÈ’»^=’P4¶×Vp £±´¬fR΂ö+î2FaØ)u»Éûò–š­ƒ 1¤¥æ5ñ’1Pr(.é%æýÌ ~óÓíýæÛ,Ÿ™¡-J§’ãr‰Æ6½Ôe._EÚ¶ŒÆl8:`C’ :Œ"öžiåíôƒ´*!v H|a:8h=MT‚1ë@c`çQW^…Æyr"cRe ‡žt)‚œS´DB·¬‰Ûª¿ê#tXùè25¥(~ümgUôºw6Çr¾¾{ú¬ŒeýãvfM¸¾x¹ÀÇQ˜€‹Ú ÉZH<Ìf&ž&¢j@Ûi^ÈM¶6>]VZ‚- ?¤Ðv¯bš´5Î_ãÑ{9gæHôkŽhXŸÏT@ÝÒði4qÆeàh©Crõ‰0@Ÿ¬Ž¢pKšÔ‡ûëîç§ÍÍöú‡.e$Âü]7 uýOÔEĦ°K%“‘ºÉÔ8—1cXª¹Ša‘ž ³ŸÌý~2¯ktä°vF‘™zèzƒÌG2«ÌÝk£xt—Ff ­sÈ,&9EænÍì¬Ù1Ì®j9å ý2ÈŒ4½Ç±-®´T;Shæ„1„žÍ°˜?o¿?ÜN “:þÑVew5`›¥äš“FŠnv5÷+ù £ôAx‹2Éý¾@¡‘a˘VŠâG«+ˆó\öG"©ƒÊöÞ)Æ £2Ǧ ½ ÷÷oQÙÖœ4hÉ–ÎyW5Á¾rå\Mph©à©þªG“ÈèÛ6¨?Ýßnßö_ÜÜî®ï¿»½__O ú›ô˜V7‚5p;•Ôg0JTG :ß§É}˜dtšÐâ½íèûÐá=—FÉF)åˤ‚¬ƒöúÖêþ%º4Ú§¶uÒ½˜£@íÙ‘#Åõ埪 ÷™+C\È6ú§ƒQíÁ|“«EV­ƒ`K"]¤Õ/ ¾ÿ¡eŸZb¼bB i”4ðss*{¡ @ø^bKK;Œ…Mß/^ª;F+íXQ8KlrC¥›B« vÃe!š´¾(Œ.FÎ^²žTK†POÎ_gÉi¢ýdŒ^ Mõš …®[”‰Õ—x¤'½{¸ý¡xÓ³d¬Ý¼i_V^ßtáÍ]Ã4ÐHÖ:c+¦“I’¬vËØo#;ù£i ÆÛ U¬˜Cé#¹Õq¤õ½u‡8'Fé}gyCG­ù%åi]³DEc †“D“JïR}‚“ÉpÒRÓ¾EþÚîŠõ(øýåÑEÛÎ×?žo¬Æ|³î'åln豙ɚÓ<±ÿ÷[\>ûˉ9œÐ¶(I P<ö/Vé&?#Öª• ¢*A|_ 6ˆÑ^r÷GR©U ÉQ˜Å=}fÌ;¸)¶¯OùJó©íÀÜ:µ©I!Æ…}áÐÑR“R‚ýÊyãÆ táyX{¡ý”ðÓçïëÇoÛç‡[=}³@Øÿ?{W÷ÛÈäÿaï!/k™¬b‘,_n€Ã.îà²dwq‹ ÐÈòŒ[òIö$ópÿû»e¹%QbíIp‚™ñG«YÅúÕw•3¾?Ý[ØÆˆ=j< 8ë œÊ°Ñꩲkul¨ÑMÆÅês:9ºÉbÌömÐ%®Ö×W¼Ã#ã*’/lú ™m¤%{Ç)Uw&†¬³vØV¦/87leU JòÒ¸2¶-#‚ò% ívTÛˆ;Ð,nÙõ°_mÖÏ]mZñËÒÚs-æÄºò #k±Ûն׆µê÷ÊeZ‡+=Ã}jñxгï¬õÚ\ã¬> ~ƒÃ¤AN§]cÛ F.‚Å£AÖ9u]‹Ë@ªG½”aš4;ï>/f—yÖ26(oSãŠcB2Š”¨cHV· )cÞfƒV™¬"ym4¶³I¥MHu¦í€"°‰þ.Ì<šÞ´š|¬Ú>n­ÒË2’ [·"/Ñþ‘êîê+¿¸òO^Ðå8äGØËšß\(y D¯•0ÕÐÛaUzd[MÝf¶²ÍÃrÛ>l8z—l¿É½ª( ôE(—Ag“»£ö>¦õ@=.PIèq?uÄœ^ÈewƒcNë/öÌ£ÈÃ{#±Y‘[_V‘×tFà ;ȵVOìaæ}Ücàþ8ÐSôBp™ük.[?½ïçb?®—!§ºëàlü`ÇÉ \¼]ŸÁ N¥È2õmæ¾Džœ¾·°ÜXdHûÞÊÙt…‹þlP#“ïÍ$¸~d v¦x#·¹€žxTÕàOôñå±â]ŽïH‚î}Ü)¹/É?϶„ -v†é(:®;{xtzf¢u«e ­ž1ªý]7(fìaGOZ‡ÍÕ]¡¸Å/ÒíÈ=Дwτʗ”)6xÕ)œ!=mа€Ÿ»lļ?ÙuzbwË!—cvjwæÙð‚É6¼WT4­(g¶jëãiżåZCfÃûmñªè¥pE¦))«"Ø7j^\­w»ÙšU4WÏ6³Û–5 énñùÓ:À)ÝcE9+6×}Cyo‚]4ÑU'].†ø³î2»©N/ñr*ÚNØ 
P=ÜgfÓ¸˜v½ÅìšÎ†c6z`•—_á8fç‘;“¹{|:”d©6PqI_%ÛTD~–rçÂÊ‚¹Îz+<¼8C»o6Káh¯“ÿºÕ@CQôծNJ¬°][Ì庎N½Œ¡¹.LbzÚaÒÐvÚF»Q¤Ê…ÄX‡5›â8¬`‡ÉX£íåeŠ&ïð;ßn/:jj¿L1 z”ä2 ‰¨0 eÆþܨqlÖ;˜öL`Š¢1ô‡X#bÃb(öÜŠ\+¢åNˆ;¬,§"Z„Óv0PtGÕ+yòä ÃUƒ©ÇÆ_(ž4:GCÂ)O@z”í(ºÕvãúÏtn %ù‰^2‚™=” ;¼Q½_®‚”l¯·ŸåK7{"ß¼óæH-WrÙ»!¯.‹¼Fõ€^&,>‡î_;€~YMa«ÀÚ¶ðn‘ÝE(FŽ Úo+›)̲x#Cq ÷§J18$McÚHÕ ÎÈŒû3_ïa2¡þCY^py·»%V‰×a¢!}ãi#¡ëS ;Æ‹"³éÖ¤<8±[†Å}ä#0W˜€ 5“6²˜˜ì¥p¦¶£ƒ¯ÔÊÌ5”!(ıÙÚÒ62:¶‘ñÏÕ™ ḅêò~Ær_ Q’ÏǬ2i``Ñ¿`‹Î9Ö¬Ÿå6þÏóúivØ‚p·™%¨—Y}AY=‹½¶o{PÆB׬^‚¦J:”é«ï“w”´ª=)0iðŽ˜4˃ÝP­Þ#ôC±û»Ž2m ÛqÊJÇ [ŽlÄKñ´É;ìÈæJóa†½«ÝÉ*±‘ð绽U§+âKt7ã”=…%BEó†·‹Çûõç‡jtà+ÙvÉØëýDÖN®•cPýéÝÄYÛ>“ü=:r¾óÈ£–$Êh]Æ›Äzn-èj“K¯+ŽNñ%G€;l™XÔJ{€Î!}\¼#ZS ¡åÌ,W¢]+F,ò¬ÅÒ­ŒjÑyØÚªî þgY¨­•‹4„…^±/ ì}ðTË®I¹]n7Ïá‘ïŸo?,2©¿„£)²§qÔkÓcrœßQ‹ç¯º/DéD©œxÊUQ¯ÑID5 é6@¯1†¨ ÂdTF:».C޼ö¢M}ôÚ•®¨tŽU¶Ugín!¨ÕøCŽ|<íçXY¥%ü@V–V3‹ Ìð£´ÍäO³ÕìÃb“ªênñ„RÛª’¬kÎÐg·a³š².KÜ!µ/„+Úz`u²\9c‘t²0@}²*ÃCtlƒ~™ERD¥P Ï!ø¥«2j:+í VYÒÅpm5oåTžüvÈSòBU¤•[!XŸÏ½Q7Én@ö€Å3»'ä Rç€~sTìØr ›#"Ï2öž¼·ç²«ÚíLApÀœ.‰Ö:T5'±ß@´$ºAÀLu ".Ì:üwår% x!ô™ÚƒÐ£¬Hóåšh¤ÌZÀŽØ¤’tJÞ‚2CN…}èuÙùó͉ót·üpUkÐÛëz$ÚÕ\”jÇ9è (|AŸA(¶§Fè<È£rôi”èÁr¦t· 'c¢=öTÉdcX>èpdD&G…Íñ@fŒÏãGB±Õ/V€(Ì<@ºU£·A¦£=:BBIŽ“‡S¯¬pB;,?i6ß횉 ‹Ëòð,~Ìç} M7˜Õ⭗䀳Gkas% ‡•E¶ îþì–3P\R¹±ê5bP0W#te’»?³¦`5†ò‘IX—\¤.ØÝ J¦Œc&Y»±¶ÅÖY²ñ ¬6FÀ 5ØäÕn­z0}‡õÝ!á,G­vÆ`ÅâF‰m¨è࿇媞°™wÓWèÍY…EZ[«yPÙ ÷ð À¢ ¿Ù9¢ÿJ†œ 9Â@#”HârŒÉLj˜®«My=s¦@Œ SG¸Ë>Û4Ç“bDJ¯LQÎs|Dˆpš5Ÿã—Á·kI÷Ö¶ÃÊq9Á$eúäÚÐPØjM¿É›Ý­ïŒÖW¢5É ¦ÄåÙ” ëê¨#1nP(yãåÝЩ…Úƒ/³`\XÆõ‡¢ (Œ¤á¦C´F„äýŻ̵)=l)‚¤Šl׉¨ýÜý¢°ÛgÆ1ÙôX‰˜ ä±bRp¬rÍ;YH"Ôˆà %ã¢5´€V›Ñ B I{cD)h=2´’ÂÒQ1&]´NAX%æì®ŒéÕuTSÐnª¦NÝ&ä­V0|*Þ`ðTÀ¦â¥º³<0_’µ¶H€Ôb€b5Îìx¹,ó\¹`ý…S§z¸—Æ9fcúÎO(£Ñ*œ7*ÇSF«PØ&‘ÕrÌ÷l$ÑZ5ÂiÓe.Zñ;^'ßd tŽvENYãÀœ±˜òîâj5@ÅÝgÈ·ÿs,DKdÂB@ëÊ䘘5gÞ>Ç´Ø„qŸ‹Û9¦4Ò˜ »å~Sù`CÙ>q™ÈÀ)ͲF˜C­µˆ Le>„€±…I å °waYŒÍ! 
ÆØòn)ÆfOÊ™µÒa}db®Ù߇{GŠ’Ìµ¾L†_Œ­áMÐwöüôQ˜±›°|½ÿ¾Ú»cwÅÞÝŠŸ®á°˜µË†½—)–z+ëV¹tPV¤;¼.ºs¶AŸLЋ¢,Ú‘‘×AéÔJ 2DÇ2£tŒaÆ…Þv)gã3 o% òU™1ìà½0<¾AÕjÅõ«‡ÅÓf9¿Ú.?¬:Ž™D±–J‚-*ÛlÅ· ÉdªYP)ëú÷PãêtËÅ2!¦N “’Q,~РI®õï&ô?è‘m[TŽKw ;âÉëÀ*Gˆ&>ø&/°ZÕj%³\©zJ²ËlBÎz«Ý[„ak}´Ÿñv-ë6 ²¦+ \÷ÚÃÜmR[ýé”OÃðÞ×b¥§U˜ÄSä3Ë•÷Tɨá½ÅÎÛdÅ]‘DÙ¥žd¢‹*äÌa)pÊdÍ;_¡ÕÈtMÝmÖÖ˜P’¡dÊ +*^oT3õ˜|Q_`ݧÝJ»ó#Ò³€-Q¯Št¥”gA–Á¶k'ÒeÅ_A1e¬m¿†’ðK…ßrÁ/†Ê.ÃÑóH«a§²Á3ð‹ZðŸíö¬oeÏj×a"w¼8ÏZV¢w±ÖZ[‰1ìÙƒ7EâÊe˜ß/…¦CèûS½îZ¯ûLa E¡–³ãnƒPyQVüPðm×£IG ¬‹V¼7È’ fÅ‹&Ó¥  båÊ"Ûò0+Ü:³è­/±ÓÙêŒx{„ ç8*Ü`cý ޲*…®zÏJN·zmL¶OõC§?û—/ïýˆOzú,ÿ8˜ÒÐmœ€÷ÜŸþ-p–ûLiÔ&l”Ž8Û“dy×*JƒÕ8l1 ä‹Úµ úd\Ž¡Ãââ6 OÎ(¹Ûe;Ý-?<Ì#‘ÒÝ%üÜÇã$°%%’Ø÷ï‰à÷ÏÔ’H9eP£WèZLÛ3 “¹hbîÑ H&ÔHa:¸W­‚ÒEìÌM•N‰Í£1aó@ÞQÔÜ2²×iàR0(ÉNÐeR&`Èn•EÔ§Íúþñ~¶ªOU•R‡¨×·ò㿬7?˯ä%–Ÿ–OŸçóŸJö¦frÊUîOÌ4Š/Ô[0}‚‹NA¨ßÖ¶;ÖggßåÑ}Ùy7pÔŸI˜€œT¥¾­rA·“»Íúaò~}ÿ4¹}ø ôS@Mîáp¡¤pë¡À-íltE‡0-ÄWzçDkæ— z ·"˦}Aþ„‰³Õ¥œõC†<Ú°¸ål©Ú'ü±þ[à¬ÜÓ=g™-Ÿä‚OäâO¾—ËþÍêvñëä•M(ÿ£|ýióyYúVtµ#ØÕRDRpÝ1º°êÁ“ ‰ŸŠ¢¡P/Üõs 'Uø#L•ýPõνTPù§$ú)3‹_ﱤ÷}&¸>PîO8Ku-•ë9[Í÷‹Û tdk<÷U>õ#¬îãT{-žª0¡£ò‰™àGÊ¢â‚qœ.çý Êëó. sNYÔG4u‚_ÏAYì. 
^Û^€êÝ€D–ô IU48XÑÙ3ŸšÕ5§äÔ‰q„”7ýØÆ´V­aþ²¡]“Ø¡å.…¾VþZãäå‡'ÿýïßÿ囿üçÍäŸa ø“þ­:áä_ü“—›Iý…éãf=_l·ÿXÍ6Ÿ¿ÿîO“;¹[OëÉ/›åÓ¢bÎóöfòR¶^M*|¸™È¥œOþm®ñJŽ,¬[n'óûõ6vôÚ(«i@“_õý›Ö:ÖhíDýÐ:ò…î·L¯t¨€ìOyÄñ¦ò_f÷×ò8·W´?÷ö~ýËäîvö4Û~^ÍþáÔ†‘\†ÎÏÒñàTÌM§zÛÅÕ#tÙ¡,B%2¡rí1N™ÈÐC¬NÊP‹<Ê·³ðž«`\ÿ­Âˆ·É£T„dm4ÚA¼p6'›€ÝTžKÆ=´6ä 7Q r–›¯¿ùó©ÌMB1Êjö°]rØ’b¦¯Î°ž¼›<ýººùz¾~xœm7_˵ù°xºùö¯žDU‘íp&cµÜ6Rƒv}îWÞXvnnßϯ܇Ÿ>~?ÞÛ¹š/nïðå5Ö·¯o©ä-·Ïó Œn¾Þ‘àÇÇç§›¯ßò?ÍîŸ?Vg`r¿˜mǼðäxòî]¥:Ÿyß½;´©/dfÃÝó¢zï_¨¡zTçêÉr!ÞûäRâ³J¿ÚÐr ~ÅÍÝì~»ø×Éê9äM\ßí#U]RKbÃ…q‚ÞÃåAˆ¤¦BZ¤‹¶õÁ]t»iãh_íLœÚ˜Þݬ£ÀHx-+j¹uâ¨úlðaæÀØ9q{r«€šŽxZºWY”¨_” Zìýý×Õ)îŸÀ¾SmLe¤£ ¹[ãÖãú6Þ±ÿÛ•uó÷wÖ¼ŸóÕO?ÝUÕ&0„Jü…nPÔïSÈ¢½?@—­üÓ¬ûnòîBñ½DÜgÍhØâ¢tÞ ¢Y3šzkåÈ—{7­ò‰>…CŸØ8X ¢)„ñØÊ)nÆy›Ù‰ÂáïUvŠ8·Žì¶âv¾˜]YŸ¿&¿:ìVüC›(ø×«@/êÎ̲›8?»ib“Áœ1¤­óƒÐĨåÈbÈ;¯Ç WsØun}LÀ{vI0AŽÎ¢ÛŸ«]¬Ú³`‰÷œ(’Ïvjš¢ÒhvïøX¬šÅ‡B‡è‚`Ëœ­m«Òiò~°TÂx¸xrþq¹Z„xîuãï{ ?`ç|7@éó™ 1;«û|äùP5;µ#¦mÿû, u6åûZA!:Þî/ôì‹cª©H“ùófеªf¯¶ZåMä„‘:º[ùf(ßš,~/·g êªWãP¹®Žu‹ÃiûÃ}óa ÌÄF­5„ÀϧõÏ¡bäåz¿_ÌgÏAŽ=YnW_=Mf»ÃËóî×ò±Wõw«‚MÀÞZ¹¨fxOdcg7Skµˆù€†íêºÄL~5UòŸO2ïŃ\™ª2[ò²š®¡z´`pÚ™ô,„ÈÑ/ÖdÖ<Ö=]Λê핽˜?®~|,¼Ð8BŽþúêÈë@ÛøCõn¤,;H°l~yGgˆ”ÞWgF@Åg:¶‘oÄætY»†ÖdþuW»KŽc_ebÿì¯[×¶$Ëš}…}…ýAÓC4\àöL¿ýJY$”3NÛ\&b:¦ƒ†¬JÙ>ú°tÎÂi\\£¬ý¹Ü’‰_±%3œd`âÝíˆÓ# ¥8z«“Wvâé§kú‰!®9Ëuœ;>âC(°MNDtéMeþÿ^àâ·Ûkë­üßûû‡³ÄÄÚä®§®Ë¿ƒfæîþfŽçæúêõg”Î |°ZÀºˆMÚ€]¯!Þô®rWÕ³—ùoû²»9µ‡^^ßüy}õÐtç=»Âì¢zþˆÓ›ž¢»ð‡.½.¿î÷ç^üøÛ«=ÓËLq%Ùx=½—¦³š4mÁs¿)`ðX“–æPZÊWÏö„BŽ©ò†TÚº\ܹ"×ìÍ6$¦ö­Šfö[›¨ì£½y ð£þ‡Ç&¦G3fœÜ´Öžá}¾‰Š;6Q‘ßTèÊ–íÏGO”°ß,âÑ`Þ ØÏ}¿ÿóÇDKøòƒó6*ÝŸYéæOžç¦ÚsÓͼ”¡N»2$là!8>b—º †­‰‡v@Š5€¤x(’]%”É/PgÏ^m+"ù`PƒHÀ£kB$K”t²£OyD‚i"ïKeäèW@ÓÝÅã×Ï·º5¾Ïþýµ†ôíöþòsd šëeÛ>xL!õ(šmûÜU\"ÍÊ<6áÒ.Q(Nê(CˆÍ¨Ä¨¤§_3XD%r~½ êøÞœ÷{³m äD$×`±ÆÃ± “ˆÂhLR+¦Å$ëív"㣤èü¯€¢·žë×{¹µÿ˜E¦ ©x+2íü3 "'í@µók¬âVçšp+ÑŽ16’ÿ¾¸Òvàr‰‘± \æì‹À•B.œš½ÙFàbÅ9qRƒ\¢'¸ ¹Òpä2Îñr¹”SJYÞ&æ¹ ¹Òg$xG–µ÷Ê'Ú—ãøôýDÊšÉêÕ!TåǽKå|ý=cåÇ­N&žÐ8z¾ö°Zø  I3âÈfÄa‰šÁqñÑŽJ¤@Å*cðYÕ½ù«m‚) n>½âßÉÞ´Á˜ùüøóº™­²y“˜lr)î¹=Ð#! 
§JJµ²©WH>ËvÞL‘l®éü=’×ø6PÉ 9kö-Þ—†D1Ë:ñf»mCÇ#¢¡B•›’˜æ›Î»€ì¦ÌÌ>ç¦ôAc#ÌV õDÄžÝBÞag]Ï_ƒ7K{ÁêüèšîÖ_è£zc?Ûýp8Ö·a¿[Ý[OÎ$çõΕ?êo7?ìÈéZ??ÞLqpiqê8Ê;x[¼„]—8^·”‰ku®v,ÆŠ»Ø±þƒŠTÊþ#h’PòšªdY‹fÖíã?؆ákª'š¹¦ê Œ¿þV;GÉùvõŸß³Àteõ:û¯ŠY‹›…¢gnó0'ý˜ÞF4ŒI¦‰>ÎÜæV>¤i3¡íÓ/<]?—Ö©æQüŠõ­Ä¦œHö¼“sÁÆ»j½JÕ¬ø“*ë7zÝ—ÂP“6„c}B1JÙ:ûÌ¢}<‰~íàM2²Â“ÄDÔÖ´‘F_ý™1W‡MƒÔcä[;=@_WbgWò•Àii‡  ¶©švˆ|(„÷pt˜˜34óæ¡ú¸—×ÏgJßoD~ÆïúmÎüñû·ËëÇçoÎa9È£éEwwS{*îÂ#OV!Áz¥Ü=ÖÚ¥²•îiGD.Á¶;$àb+ºlif›¨}ÜÅIjP{¢¶n+£ÇѼfæ£Û2©\k] Û¢®$öA{?$,.§ÑpHĪ©Ç@,DLh¸Äè{ÇuswñûõÕÍïúgw767·Án~ΨØÑÊmp{†Ü<ëò'ñ¼O·t‹é×Ä¿6Û½ܽë;†;êbÞ!« 8·e'|‡„kð}šW’&@ÀHÃñÝÈ™Îð]³zÓÉFå û^×à¿ -î &JmÍšÈa€¯`Ës½z]ðâcßU|úÃÉŠO߯®nïÿ²ƒûtøÓôžªä©c·¼Ö æÓ n²9)H¡VűÁj»äª³èͦ3 ·°4Årpž²š3uoÝÓÌn;%„}5räChjB2Ú¯þJ'‚±i^øähîáñþß7×{àòô—m[ú¯ÝKU>ÊäÂŽjj ÄŽB‡€íź•!Ú‹i[‚²ã–B>¥rËT/rÐC}*E¬kEÖœþÞÌ\Žõô­mRE êXGqД¤‘á1ÙÉÌá¼iÃ^YœLœ’]c2 [„çÁ7…dcPeqù=$ò©iù½óãÉù1’ótÇÌ)«±k صä·,Û/]Û"¿ÅUUd Á·­jJ|µ•ÝcðŸÎc¶¯øQ´äW`pƒ—˜’kg2kq„¶^vJŽÐ$mʳ9BöÎpö¶]<¡}m‡©†{Gý/Mg&ŒöƒfeŸóƒúÂ=~5³ÕEãÙd_®lKAp@ BÒÀ‹>_4{'¬aLQ¸ Ööÿ‘ÊÀcè ›ÝRv5Ùi4M˜Râ@¯8æ|ºÇ?#y{Û.‰»m4ÅY_Suµk8®Dõ$Ì̹/…èŇ쥚èÏ×Õ®¨¹–1-j¢.Ô´:qD¨æÃ5;UÀ•‘u•ÜÞÿëéòŸ×w¥Œgö›£ê&ª5[P1¦=´µ(‰"c%(έ·R'™›®5{Âiç:­ ÂAà WnFˆ!B–÷Õ=8p[Ù³& ŒÀÈÒÔ`5$í çÄ7Çw§á_žùFú^WÅÌmU –ÚH?dX\`bý_ôê¦%GM¸0 ‡È{ì½~º¼½xz*–›Þÿò0üèÛð7ú=ÃéŠ7 jñ÷ƒ W øƒÛ’rÝŠ‹Õit&]ᓌÎEÏ«Qºää¶©m¾Ï×`0O:6Qö08+73åŽ>dåúÊ(¦9ž'“Ô‚cçêto„XZdö㻦Ef7âb13¦0üãâNHÏ[ÉÀo¿8 ~YŒ®©(ÀžwÐX{‡NÃ3е^3ã­`ïÌrm¸«ûYw«/án0Z‹bðË>f[µfÖè¼¶1rªa}ä¢Ã¦ØˆÆ]33† î&˜„ª­ëOv¤ ÌHÞ;Ž£jî‹«LÉ'jºýcÂþÅ>õÚ ™æÈÚÃÔ´ö¨¸òôüø×!Ï­4ýÎÃãÏ×EXù¸a0‘Ø7©pÜ#¸é$»Ú«-íV¯ÃZœ]»-aÚ¦è0È=0Ç]8ö@¯z„è$;œýfÙáxºL´¢¦Q„Ô)4Ý1»±åÉÎ>ž»„é•I%ÊšK +¯÷ÓÆÞAù­¥-“<¤Ð6¢tçp/°ælè¿@Wïýíݱ›gjíÈѰT5ù"Êò½_ ÞBËßü.5æH6Þåuù–­Ø«éwÚ=i*]QŸ\qd#ù<·ûÌd]P_¿v”$RÃé¤çW6µå§€2õI4_É ¾åÄ­'úç}ûýÐGòÒ½Ql˜,®6 r[¹-a×MKAFÄõ\ݨ­ôà̰L2ìþéñçmÙ—–0jD/Q´½&X'ØÑ¦fmÒQ4¬eÏ(Ûz3£lè¶°Þ¶[ ¡0Õ1(FþÔø±Àþf¼>øî‘¦XsÉi~ ¤©" ‡w8]ë|„wOÎX ÈGêË“áºódü"ÄYÚ âblš¿dþkî«Grbœ)‰œXo¿Øng¬·Û£qr¼lnÀ¾iâN¼ßA¥'A])×j¶Z¨c¤_µ`øÖ¯Mâ jU„ˆ)¶Á79 ßfgç3ð­¯Ì‚Þåi–®]Ö>§&ÜTÿ•೸#Rò šŠß¦rÞݤƒæˆ’غ;˜nNkùtÿÓ¶õýíÍeyܵöq£ -"1`Ûd½|¤¸ß6YY³`/]œÄÊ2T;Œ•5hq¶KƒÉKaòvË#ø"Ù üW³vðÇ“¥OŒ6ìn"¾ÑG¸±%˜£‘ñœônZ¦`„IyÒ;ì;Ÿ“½Ní¢šð•pjq§õM1dõèäPÈ&„qhu~¢W^Ü>Ü_]ü|¾Ò—(»õ¥?ä Ô«Gß" 
ØÓÂŽ1Ú*Õ¶°/ÚuÅ,µñÉ®Ä"â{‰¾TìQ;ºäÏ ÕóÉèz óA\©é$C”јï…8æ0_3 mYÖoUCìZõI½ËïŸ)‹;@· C–SŠ#À<ú 9;¥ñ¥öu;iPÁ‡m0Ä5Eò&÷±GåÌzÌôð‡Õ÷\>Y¬…MŠRƒœ„jW18BNý{nƒ> ¬›41‹5 ̚ʅ¶À›ÇÞjgÌ‚pôˆâ(@¸oí†2¥hêqoÅ‚¥åõšQÓòÊ€ÍkÅnêÇ MnÉn†ß¥"'¹„íÞ®âI£bjoZ#¡e.I±«6ŸDŒu3¸zÉ±Íæ_×Ûlû6Ð×=Y4¬.Á¾c"Í“šBVæÍž]`ߎ‡*ÂC6‰G/-×túzþÙÙ‡ ìë+3ƒø|Ë `ßzKÀþ*c_˜÷N©)2÷ý«ör@´¤qhÑ^×îéþö\lÇ~¨vþc)åúßóˆL)5y tV˜H˜Äc=ámÁÒë´·3·8ƒãN‹à¹Py‡h¤ÔÅ:Œ‡Ü”ëÌr\Áô¥StúÐW@4 h:ê˜ú€x8h\F?ò¸?Ü_Í[^ì›OžâÛåãeU“›Mj.gÚE+o8œ´KOXsÔ”¤¶µbÍ0½zÛŽk 6W¢RršÔ…]ÕF”¹fFèÁ¥4}m}#HUAW$ËD›NZt£¹”ÌÐá\ùã´TŒ)ËB§¾Çu-t"mj€à·gÙ¥³¾¼pIœPDÊØ¶Ä£#Âó¶Dó &—9åo'û¶%ö×tÿ5Ùâ^$ÝuM{Aµè2'aI1Ž$ x¸¸;œºBÿþöGzùéÕ•ñÞ”ùqJ?*6&š”š ÝÁÁµE› I‰}-@ÑÒkóÿE37ÆÆÖ:ÅÞccJá4º¼ê´…³bŒo¦ëëù`±á÷í.;x1’€–Ón«ƒ‚cõJ€<\ðåîâ!ï6Ÿ.ožëä—¦ñ¬ås}Ó9 nG•4D ž îm)Y¨k¼ Îa‰}i ã|*M]Oû;G=úfnárò¢áIÍÙ Æ…ï›Î^+„z²s†³ù´Rbþ¥Ð÷njSÜl“Ä\©Ð²×X1®)õ €ýUPMÀ‰,?[õȨžüä©«`ÔT—­ïø¶hgT©qË Fî¡|ufžnjK. ”ŠtH6ŒTŒ_dÛjçæè¢ÓVÕFª@E¡·©ÉJ1tìýþd猘ôq¥<;ŸM]ûR×m¢) Ùmr£¹ƒ¿¸~ìuµçrºÉ¿ ±KŸ0A1çîøþôó·£ËÑ™{¤—_þv{óëË¿.o¯¿Ý]ü0j§ ËŸâúè’p –\|kRµ#¦ ¡É¹¥1«´aî¢u‰š˜ÿ§=ÏdbVåð{ƒ”-S.ú~[†^Ñ·Õ9¨i +n À“B}ÉtßÙGˆò¬©o‹§A#ÿ9À·´‘ÀS›è”ô0$ôHÞƒ íò­7œQW™Äỿ«»‘ĸ|!i<9ìcÛ²¤1•-c¼:I“ ÌNáØë¡˜à Îℸbñ<7uðß%7&Ñé ¤YYÖD=«[†^JœSú‹.êö+ùWðYNÁW{tr¯ÖE¥bRžA=¼†åM§ÏÇáWÁ.ž”3μ«®cà*‘,ǾwÂ9nñÌ0EnJÐq`y5ÅclŠ• =Q‡MFJ·Ð\Ý<=þœ"†ß~^ý~ý\ºÌýɨ!4.¢&Ü…=l¯š{qXßysnÏ•Ü(k̦DgÚ7Ó“R‰L"WdwUfµßç&êS"Óï­Ç$ºš ¦Fm#Åœ×Èl%b®H¦ïŒOBYg}9©¯¾C.Ói}‰‹+®€kkr0$øeT¯ÂHIN=2—·7W÷ÿúq{q5%%¹~š:rnçÒþ¥Ù€ÖÄ;àÚ>YPÈ âæ®°fÏ€šÑG P(X…ƒˆf\Å6f­r8>³]§ZO×8½¦WLv4µEaÑíÒ;Ú9Ã!r|gÓ³“Ï¥éön•ºHß§»`Þ¶^hCsñ#¢ïÄ!AŠ_ÍïÌnA÷ÕõÕÍôñfëßô‡þ úxñ¤.óòùçãõYçX%¸ÓòÅ´†)ºCÛJñ»É‘ž¾@ÿù³E+—]¨±xîàç2¶6dÃ5H.BdNeghÄ%g¨.*×·8·B'o˜œ…›5„Zè˜8µïé#h¼3´ä<ë “1zÅX˜ëï;éù•fPÃ&lŠƒEvtñ Oº&’pP¼ßuìJv¿ S©¼œßPÝPkå¦ûÞ,Ù§¸¡ßš]Hj€À§¨exâÁÅ 3³ç\q#1ØoÂÏŠ=m‹ŠºGÅmÚâ>Ý9M:ïšž„þD¨+lÜöÑT¹´oyc ¢º³.ßQœs—TäÛ£ýQ¿A§Vž¢Õ7„b÷Üù!j‚1q%ŒW[j•+ŠÎÇ`Zçë—äæ‹ÁY86n~¬8ÏìÒ”§/­ážø*L \¨©R0vôhe¤t†ÉÓ+' O°œõUž‰½™ïvaÂâ’’§Ø6"Цèý!Ù£#ý"ÒŒ‡ÛŸŠ3-Sš§'ŒÔ]nÙUHí|'á¨0¤#qÆ‹µwRg¼˜ºéúpÚsd¼w…ëCÅŸNÓ«`N˜cЛٯG€}<*!_s{ˆ1zl»KÂøáä÷°';gZ9ì£Ã$˜»=Ôc!©oµ9}âtøXäYÜ" ©mC àÜÓ¥õQ!†/!ø{{swó<=ÔÒ›ºs\÷ekt–Ð4AJw¨@F`ŸvÔ>÷¬[-dÚj2* Ôà$äŠ=특¦»™yú`µ~m²Žˆ 
¨&QÚŠ¢äý`¤6ߌÎãñ“õ´K®Ïƒ|èÚkÇÛä{y€|ï94,.(9qm½å„~Ô±µf™þÓåeNžkŸHÃËêÅÓ¨0 5M¦ù=½-ÎÇ>c¯6®VŠy5pc€­û £`´!¹2f£äèéfFë„ÙÀA¢`M[‘DßVÀ¦ APXR6¾Öw¦¤’W[¥m†¸«úzúu±q`³´t/h¬âÛš5eL i•QøãDFùú|Ïz(:Þ"d“+›< ¬¬é7EYÑPözâHÐXA>{bùòâùâöþ÷£ìÑÓž¡¼÷Oåž‹KWvÏ'MšZ-fM,(Eè1xüÁصCÅ,Ýä§;YÓ†Ò@“qûâ­Fô1¯Ýüj¿ŽzúÚ6Ys©Mü˜±íØËè‘&5sŒ™áé•)hÈë#MÐYÌÍõî§ÿu´¼3R`i’‹öHDGQ³i7ª—¥ÚËvíì6 HR•„è‹Íl1äZXæì5,)¦oj¸"cn©‰ÂðiIÈ\—WŠuó¼¹½ë6š*më\9…u-Üì :-Áum‹ÎiD‰&G”ðÓ)Ã6ë8¯ÿñ°è±´bpÜÄêŸ]ô&Ü·éÆ Ü4Úþ’é²íQcÆ ÐŽÙáË™Õúúµum(TEzŠ\jJçuÛ†vðÏÝžNï F-\¨î`r_ººóÙh³´T­?¿o÷ÀH¦u£5†”cZ§~^8æI‡\úÚL뿆Þ{q/P\ç.ï…„ÎèXE/v ñ…†tO ]æsVîדuܤ¦á¢÷04ÿ?u׺ÞÖ­cŸÈq!@œ¿ówæ!Åm<ãÛ霾ý€’b)2÷…›¤šöëå«•(°±€ÀZÆ(11ÑØýÜCö*àòC伺4íŠ~ÀÐâ!Ÿb‘ÉùÌj}Æ]yß"¬8á“$6)±$ƒ‡]ÝÈ”¸4ìʈ¢xÕí[·‡$îÞ×Ãɤ¿-‚Û¬iqé(PÉ^IìÑ*ƒbC[Ç%¥LOçqÍK;Jê_ü0–2»¤t¹äš<­ÏŽRfBAÓ]$Û3¥7ÍN'|û·r*ó¯spˆÆtþÎJlð'¢;6¬§(œô4 jljäe!¦!)*"_[MãÍøï~¿¿yxüöôüz³»«KÙl†xÐÐÍÎM©L¬8îÓü‘SR_÷D¦µ;öàùm·éò)g`vìu¯w®VËæxf†~9lÈÓAAa^(âm ¢eõ¾Š%—cH§¯Ü%‡ÍO*U‘Ë’?ªMÝhd†®ÑÌŒ¡”Å–uìt^‹„ûò5âª6•Úa·kÕauÎÓþ2/z ->e“†% àA%¡ÉÓ¡kêÖJ!®¨CI`9†cñ"ñÌ, QÌœ’j‚˜ò—m*EÀ„ž%’`°¾;z?¥sOwß_?gÁðÏOÙ°¯OÜVc–˨-¼E笈XÇô ‹‘£?Û+ÖðVô§K£Öl[™öç†sN,KÁ%Ý./ÅҞݹ…ºo~ÜYLköìò•™´ïEÆÕ¿”íJKÓÙSþ.ŠÄ,qzë5¤ý‹MPFSuVÞ\A&Ý£—ÝmÅKÃ[Á)6]»dÜõ,5¸ûþéáµ®¶ñCzÚâYû£ ˆãæ!*1hÌ©h¤žY!‹cØš¬‰œó¬e©;w²H§¤‰À 1¬Â]ÑÄÐÄÙgé ý{(’Ux€y’µ$ vˆú~üô¶ªÞm”^œ€ ¦¼æ•6š QC2D¿Þ=ºîòàëɆG¶:ØÔfl¬6«s³›þqKÅI¤šïÿ*asÎ,=’½ìã0h©¸Lù1.å§' ô‚É”YÏ*fTS®mcÓšš¿Äñ0 ËÅ7yé.TÆÉ„×çYÅõäjK1>é6t` ¡Ém8@ào%曹”F™øwÌ;'‹=?}½ÏH¯ÏOùÛ¼éˆM½p“r§;ù¸»Ñßÿ÷óç[Hñ£ìÂîþÑoô‰ª6mwÕ €= îÕ®Xn@'ªML¯aÞ^H}xàÏSÅGºÍ7[„j(.œlÙ«÷Ÿ:qŒd5XMW²Ú‚Þ† ~Í ï™âóWŽùÎ¥8eèX™0u X¬oá_ u&ÝŸ/CblrÿÂx¾Íä‡äï>rƒø8Ÿé½ÿøð5‡Ôâ|ïûß±RêìØöN€a»GV@;SÜ’;Aöƒ¥Võ£dŹ}‚ ߣµUõ~óâïm²h`\N­‹­ß3uÉ­ýûÇ&«Ê­;,`ØC1rÉ»f’–«ÿÑÃnÿ‡¼é²~˜øùÂÀsÝ"ÀLܪyÝÕæÁÁEO(Ô<@Ñ«žeÝ,íºè«%qy{ߎÏHÉóÝÐ|ÒdzZe›Ã˜™-Ö@ø+ˆ«¼½zÊUŽ©Ê~H®ÈR9à%3ÇgL¨˜šŽÏeËñ)¢æOì ‘Û-fíÚºJ^XèŠæU^’^ÿÃ¥BF"3ëÐö[c±ŽÇ&‚H‹Ç_ĔϿ¬2W:OöésþùÓ½,ëžžú/JÜÌRKwTd 4¿:Lš®FM¦µ 1ҵǩºþ°«Fhú 4•Žã {ô[ÿÝÿ÷ôüGæÏÚ¢ÊëµóP¦¸…”ÛR«^eëc¿®ì8z|f 9.΂º-¹¤\v2V/DÖèÿŠWFd_‘¸•ˈœ8_•Ì!­º?gŒ©{»h@FzšE‡´A@€8^-ònwÙv{7y{ñÿ7/ukÛÒPTæ„[T¼SÌéÆmÒ‘›ÌÖ ÀÁRV 1ÅÅ9&`-&ÈgFê„Ç–©QkôlºDi 8K"ßOE:.¯½»UèÛ!ZÃZÁ™×Š#½*Àc°—sSt¬Rïîù~fÜáß|nk7‡:ä0?ö|³»~­ègܜڲÊb`’êñ·Ít]1˜)0ð [nRÇþÏ Õ ƒI•kôÕûD+óx 
.È«%vÔ†}?JûfÂë$Ÿ¢9Ñ-FzVyHœÙÄíHÙpÍÍÓüñ÷î¿ñpýöäOh%Þ†±p«²e¼?@`¯­öØ;-˜¨#®ú UWä¶?ZŸ³¸ªåuÿ3ƒôÁUÊ ³jU¹­gר-{ßþ¨£qÕË,ޤh¤“ÍÜ•YøGÝ•e"ÌZ–H3yè8Ü{…®sUöòáÓ×—›“žAÝÁŒ-©4Mƒ0nb@‰)% ºý:ìÒ*A/Q Iu™ (>´Nç@± zg6èz‰$FªºòbÒàÀ×bÄÃwíC& +^"G5‰iéÊ«ëÖ=óªy¼È[®¼JÁ>é=¡©^J ÿ0?S·L9ox¼ºí±ë¼M òÇo$X’²z±€4®¤-sÓŽBžë ]ähߌ\-Gûfá* é6ú+†óo)+Ø-ö\Y´XïŸY­DïÃð=,* :eÆÒ¦ÝѼµ5¢÷vò~w4{JÔ<,*T$ü'ŠÐŽÃ˜©‡ É´Tm|*6Hñ–1SB»:ÒÿÜxyùqˆÖ¬ËôœB„ˆþ·Àu&žÛBLåÇ„zÑ­' Õo‘ŸòÝÛRÂL·ê@Dº„Æ ÌMõf”.‹üþäæ+ãP“/G„hд·âeº ^äw3³aa‘ßåé£-Hƒ÷•\žbœR#0Ï@¤G½”mÛ( 2bÕ7¡ÂA~…ɰÛñéÑ=¶Wcýäž×'ÙÄFþ‡ûÿ¸³^ö?Ø0,–E†§ÄŒ^¦60Ök1 ˜R¥ïQmÔŽˆC[lñ¤ayѰLÀ|f¾>xösT³È£ÿ“ššº‘G󮸑KY{/1xÞ&¿Ž«@’þ"@[àeÒç")IÛÎ `AÞ€æùÉHò†oOŸfn'ƒ!§—•ÛéNt·»ùÏ¿Kë’g¤™i†Ì#š°©×uÅŠ×Jèeõªa›é:òb‘~?õ`‘+oµ-67¢Jw—n'3õ!ÆÂÌ']³p˜.Ú ® lÅV$½ôJ#Ð5t­„Ö1^J£®Õ¬ú¤)w SSDÒ/²<‹u}}ÀS|{úTIöMhÛ-½˜DYžnß²›Gƒ<Ýæ¾Znžž»Y^KZÞÍ ¦Ké[JÊ;¯'StÚÍÊÍ(®á°¼ÙÛVîÚ ‚¡‡`P).½:Ra’‹¥×á‡àªíWÀˆ½ŽÁcøOº1ϰFúåØéÖ"HÈêáóçIˆã€Ú~ÀˇǻÝg¡ãqݦWÒ(^7j‚ÒȦ¨ò[¢€´ I«¬Õ ZýqÇuÜ›‡V_´`°­G=à h=3MdÝ?ÄÑsL¡dF¹ Y//z#k6³×³ïsÒ½£¢Iœ%ØŠ*¡ë¤”¬Û2ð|¿W«±aÒ«ØáF¯Ú˜u?щ_Žúxÿúü°{ñ,ÿæã÷¯Ÿ¾Üס,Ì%¬â ›¸ e%঄1å–zß„õ­º¦¯ISJ+xeƒ‚.c¬I9}}3L¯ôU%«ÁXQJM‚6v ± 7sàröjT¯SùÇu\ýrÖ"LºÒ=™´¥öWÐ üÄ£˜’h¹ˆõåþîå'ÁÛãuóãhòûü”¿ÉÔ 7þŸj ½§W´,«iÍR¬ˆ-#* Eñ´ïú”9•öõ½Çš:ÉûǬøGI¯mëKÞ½= ìiú¾þ~ì"®O,‚Ù@Ûk&ç’ÐIÞ‘I…R”ÚÞ¦ àm7òŠì-maC ”yŠõÈÝÉžbeÁ -VX‘¿IJ‹ù[JEùæ“m:‰•…Û™Z“¿Y"nZô· ÕTªa¨"ayÓ2ÿš—ÝçûOßýûl[„ÿù-QnôqÞx†*_º´¼³{•ä{Ð`cò½âØÜìÎmqÃÀ¤—’µ[û¨\„O5ÊEì´Õ8Š`f8OKÝÎ=ß"[,­Ù¯ gè‹)V¹åqã6,Ð8¸Ù™­|¸ä¸À‚ì'Á óS“^­÷«pR•ëœ(Sõ¢eËÜß‚Ò\áý…¨ÔâJ·R¼WF˜[1$±ùù<öâØOµÅ½,»P"Î;³X$9vNb¬°5 5lkÒ ™Óäš üòÍI$öæ¨ûöð}{úò°ûëüW|yÚýQÙšÎò8Ä`M£UGPØgš2ˆh#û@Ÿïï¾¼~î3¹œµ,0bh2%ió &ÃLÓû·(±ì;Æ_Ööo¥ðŠÄ0mqm"¢Î5mýÙ“8%ÍEüßµîî˜{®lƈ"ž(­Ø‚ÅÍÀ±t©tf·^@¸êN ó MSCuÈ"±gû|fìþà9íx£ 9“>ùù¬k²Ä™#‡$/ï5y€9@Å3¤‘çÿã]>ÕYÀ|ª~þKC¥ýyÚú¼_Ín{þØ ±¼ô›~Ú®ûçׇßò³ÿòðû×½<Ûþõ=±Îù«[h»Ô¦ÕÜAÊFM#›lHFJ™FðmÑËÓ—û‹žÃ¿}ùîgêRجx‡¶–éLI<›nª…cØ2víÇ­æ¡æT²hî™Û[7¶Ø@•ÂÒ¬‹×nJG®ÞÙ˜˳.gæëÓcóH hkÙKå6ÅýåèÙ€›Ò‘Gû²ÇšTAKúD™íµ¿w&Ÿ¯&Û(Ï-Kõlážaõž»¬p\ Utœ4VÌâÅ Þ¤5“Æ©Ü2;٢Ϥ±;5CC 0,=+€AGÏ»‘ Ê‘ù “xBW¹†‹«¶ß4õÚ~{›št^26„ÆÑˆ íñH‰Í 2ŠçgFzNAX>%uyŠ•!éŠø-rŽœY¡Ó« ¡öŠøÍ",šÆ¿à¸•?ò"=¿Å¬r§ma‹úJŬZD†ˆë9U—§¡&=舸…U@cég™&ó…_‚ëúìäfÒóWÿžO¿{@ýq¨˜ 
ÉQÝOÓô¹4h¡éó·àMœ!îVÉÏÇ(¦ë:“v»’ô+Êò.óSjÌ·þü)/V^E*Õ3ûõ¸’„=¡DÖš ‘`›˜"ÃÐë`g)h\µg‚›ß’ÕØy¼!¬Sóâ ¿H3é|2 Ò2Ú¢z©,ÕKÃÛQÍ8Åñ·v/üqÚ­g²­íFã´ùY²zht3mKóDŽU5TKwo²XÏÄ<”-ÊšñaY\ÿB.¾'ótìÿÈ€5äÑ46uÂÝ|… TNÅ TÏú™.„e"¬Ð5_ÖuR1|X[©Ø½&=ªâpÙ¶Ê<la( ßI˜Ò¨çû§ÿÏã_\ °}|øúðx÷¥|½vžt“Y øò&¡”9#¬ZĶŸ {Â1yú qÍ6î‘bŽ‹0ïå7‹u‚cR­ic8d µm±1óÐ8R‘O-#ñ~¥$§È}sc[Å “{ƒ«Ñ¸/~L:Y!F…&€¶0&NÀyí}h6¼{¾[¸(‚OÏ7y"क़¦‡4Ø2oGSVÌ—32«YÀ’«fCh´\O8ŽAWt•MÈ݈XT<³R'4Nò*nNjæä8›`xWÙ”Šh‘(ѱ¤ž°œV-dä[‰Š$¹2&= þ 7qÑ8§!É2å“"âàI­wâeÓrgYˆç¨ìûêèáχ׿vŸïwä“r?<ÿíËÝ×}cèýL.A”ièŽ^P“4]ÛG„-3@þ©"græú^ô-ÝêS¾²2[Æz=ùÎa},!ý¹Q;] æë\O7k žÙš >â`ª±ƒ•C‘ÔÚOQápA³@ºb=àªD‹IkÚÓWƤÉGB¨INYÓåÖ]¯³(K®ÿ ’ÇÃùöO¨ÃwO¿§-ŸÌ5íQÇD¼eçDš‰ ºj<«#DG€@žë-C4ËòøfT±âVÉ›e:a´ÿåߨ £³ŒhFp3›#0ÅcB7Z/WV3Äî÷†?”/ƒ·­Gûƒ«çˆŸ¹º 𢽥ÜVµÀbZ?!Ý•ÚrÒÙBM]ÙÆœ2Y›'#ÔJ3²è¡'É8B#Z$QK‹s#­Ø09³W’ÿ£‚[Ms\¤&¥‘D£±Aí89ùò¨_MåÑÅëÃ4Ø ÛIN&=lYù¹-)´mmÖ„€¤V›¸Ffã¹n3RE4’ââj¤ÿm1ž+ï §¯Ûg5Òϣı"œÕË{lš¿·½É‘«‘ÊRnî©ä§}q52/ w=@ê¼yâÈ™t2øÝ‹^:l"ïÏÜadA· q®!«éW…íé(!-^W¤pT9› ×<­Vdí?Y¤K¸¢¦´œ¬ W‚üX7…«§Æƒ‹073&(„«ûÉòJ -$æ}ãVV%æ¢T;¿¹–‘iÒ›1xMÞdI㳡 tƒ·9çM‰UïPQ®/”H|}¥ì”Q³¾–œW5eU*[¦ñÁñ"Ë}ˆnïµÕuì®IÐ@kò3Á ."»`‘£âÌB½BP³)¥YÛ1…+±¦ò5‡dF¤«v×”WNæ'ØÒ^«§ñšrnò,[©©Ç¢f[Ò8Íʬvêð'2­ššŸ˜´Zƒ(kÆÁ~h.ÎÆº£m)ÖÏ Ò+Ö5Ï%Õ] Ø,5M$ë‘˱î%WˆKS'xý,*ä—[ã&]K„mjké2A_éþ^„è¹È°}éZÞ¶ž—l^60¬ ´À´˜$ÔRpnÁ^ŒI«èà%äÔÄHãçð-° QÝ~œøÊûÒ²Ô(ôß—ÞBgXö¾[RæújbðÀ#Ø0ÔË—È!Æ«/ò}{úö®¶·‘GÿcîÃ~¹$’HQRv®Ã.p`ß0{‡û° ÜŽÓ›$ÎÆÎL÷ýú#«œ¸âÈ–ª$y²‹z0ÝŽS%QâCŠ"^½á$P:õûmØ yÝŒDy šÂËÒOýÀ+ÉÊ¥¨JµßPÐÇ\¾¤”K¢ï²Õ8!¿§?H2QîE²ÑNÇÙÕáÇ #nÓxl •¥‚ì=Ôù1XÎ&„?/U,Äp¼Ù1(lÎQ‰ÿu'¶+ܹÔié9uWt°¶>Ý}0îœ×Ò!Þe[W¨ÿ¹ËF¸üö»ß_Z ¢çJÈA-ñŒ)Ì>Ì:[ bõÓòòÛ›«K\à"ø9^Ì—×D^Ãu‡ËšóþS@:Nfþì-è¥ÎCçê9¬Ûtú•¯r ƒ¼ïñq%Sé´â kÇü~±¼]^M—ãq×=ÂÒ ç½ò| Åtjvdê¯O û†F–À*IŽ>~¬Ýr]23ýü0vË;˜@ 3ÓíGRò‘geúÕ%[¤Ô8ƒº—²ŠälÈ”Ùþ˜ýSêÜhw®Î!0ä…P•óØsì4q@G.‘|ùBù ³ç/Ïþûß¿ÿÓwúËÙßÄÔü0ûÛ_» ÎþÅÿ0ûÄ‹r9ë?8x\-–ëõÝÏ¿~ÿ—ßÍ®YmxcmV³_o6ËnežÖ—³çšÕý¬†ËïÈÅìßf²…ïyƼn7ëÙâvµæ ùMtFÚ™ =ØôMư¦>û=[Ós'1™ÐÀh92Z× õ(£ÅbðÁÐE’t{³»s–…¦·ä<ä,ÿˆP›»Øµ JgJQÿ8—Þ‹Qºøk·Iߪì™Þ×Ù³3ÄÙÛ(ÀYô.ààb€µ`¦[>~„™Ò×h‰G)«Í³N犾6€¼5{‡òz~»^:eÙßÎîŸîØ_ýquýr¶•Ü›V5Ò\ƘTâcµW„î¸I”‰Çø¯†ûÍ£zS¸]–ýV4<(e «D¶ÉãW£ D®D‡·XíÅ>rµÎSfg ÇÏU¤²Bìÿùåþ­½Ñ r¬FˆÏ;óGÆ?žó§åæòþýìÈÛ—{Ÿ¬ø|±¾¾xÎcî¿Ì†ónu5ÜÓ¨x×O 
8].B®øÜü¤‰¤:¹ÈS&«'âŽ:%Þ£¤É/Yë…a.¥Ù6Ù%™Œ(×Ï.8CÙ¦ž‰û"3Ú¼¯Ê8ˆ¦nMv[„Ï0¡Oò>— Šhn·HÚWܵ©'•Qzô\ìjõ1;³Ê 54*7ßLb—Èe«ßMëÄ„pŽ‚,ãÐ톷óÕºWd£…s'#a‡Ø6{ïU7Ò˜m“’¢™·Ub¾zºo_„õEp,ï‡î>¨w šõõRÑNU|LËÏYõñùêú£YÑ'A?ã諯Ÿ"u‘ÕíŠX'kÞñ–œG£‰ ®µô£Zvë?P]4–²Úc­êu­~Úä˜o±žHªOý—æ´U ¬èp_íõÂðúMÝ•¹ú4´HyÅ|=^€ó³1Ø^ö± /æuËjg¬­èyLÝ´ýÚÞ2izîúðýöîæâÁîßãî7÷ןj!—x^ÞÀ5A®— ˜k–Í8«£†ÞÁeõ,u–h³¤ýjÒ8ð\ðI½ #ù¬ÖµÃP¬>òøBgÙ:gQ³%€3t·kB‘ËÊéc‚j›×ïþ¸ë J´©l(º¨cûêZ*Þë䃮å|Œ¼z3íͳ"¦¯V´Ø Œ‰©Ùñš8¥·»o†«éêÁà1ï{9’úÌ‚¶%oÇb9«_XÓZ¶ªMEÿ7mjóv`Î8þ,›Ú&Ï3ÙÔ™[µ¡gDjš­if¥;‡³| æØ ,_x£ o÷ß³»Dæ¾q!vÍn®¤_èŸ_ÿû¨3wƒðw¾ºÂ*ú§âª A¯1Œw”-k8iîùò<<çÊA¨ÆF¶K¾€xÓ–M¼‰ǵã8«¥Õ±nSØÂ’ºM`]¥àF泩¦“dzí{‰ (5͆=À“]s6…äbΚÚbÀÅiœÚu+”uzwL_ññMà0¯Õ¨vÛ´*cZZ º4×÷ÌaSÀ>mÏÙí¯ö$èñþÓîË“)¬‹sĸrLзÐÖŒ¸jZ¿Ñ!ÎQ&³žU")E :)Âz¹¼ŸÉðODÔ©LØEª }Hš?Òæ—Å-ªýov&õ#,÷bße|‹‚ËNš‚ åH1«Ü è§>xØrû R@&Hmb©r@×G€Q ˜Öj)nÕŸ½ñ±ôšÃÈÌXÍ…H@mI¤À:žg¹|ƒŒî—¥ /_xsÛ»ò,*ãY åiþ »(æt é`yiº2›UƒgÁÿåÕßLà·×W»Ç}m¿½A]ÓpŒà¶«`gó[&¡‹LéTo™[מUâ´Ç`vS÷É*Ó²Ó”íYȨîosÆk˜Vû%Mi²áÃfS‹Žäûר9xŽr»Pä×F³Ý«ífPbV« (Mkjí` $†€¦²´¥oÌ÷…ð>î®î?–è±Ád¾ñßÈ—ù\mcó’C#ÖÛ×mzù)Nõ°ŸmBMàö®NŠQ·‹¿€¹ ß0PÍ  "×6Šœ#ÑÔ‹á`ÄK§Y—{Öì|¾›Œ0€{zzò€Ì5ëêM€¶#[Õo´•iÌk4hÄ6Ÿ–xLz/å“îê¸Yié~|½»ú²ËÓÞ4ÃÕ~ü¯ûoŸì‡¿ØCÜþyûøãúãîúÓô"žâ}«»Á{ÿE×Ë}c×6¾Øû-0 ÆÒ¥~&Wwõ-4'Ð]c#³cJU ë« 8]' œ ½´Ó‰€°WN‰š ÂiÅBSmÉèÓƒCvÑ«©)¦5°Ù.f–®‘7{²’âR¨šö%gWHc§ÚL“ßf…ŸûµµçX\{NQ(ªãЈ—i)©®Ï¿Ç˜Z4y³¢ÒsŠ$Þ¸HÍÉUµ BðC6 !³zôpÎrÝW£ù–¯ØÊo§ërwûùöñ=›h èð‘`xÕ¥r¦T—©eÔ-WÌyQß­z%çư&ä7ÖߤXUó€f<Ø3­Ú ëEI°½Ê'ÉõI¾!zóŽë’ojOçÛ@ Ã;¼48ÉQÓTО7áü[.`gìmáƒjM±Ó9¡gî@ ó±©05͘â¹M2Ös}¸¿Û½äDÇ/~½ûn0ö°Î±Ö>¡—g¹ç'¡)²œfÇnð,‘H ú´Þ±\•î²£¸*ÚVÇ/¨DÑ‚È3WŠ,?:̺~'ñuòü‚(Jã—ŠÇ}Ûˆ#caã=?vÙ⢉Ÿ‹[9`ß6ÞIðLh3{ ¨‹mæ°õ?8L™=ÏɹÑhÁâähv{L}AÎÐeš÷ÎÓÏdûL^4‚š ¯k"­üs“þPˆ /ú@ŸþÚþØöù{ÿún¾¹pÛðe ¸!¾š<.^¯äΔ-&K+LVªfÒv›®ŽJ6ÿpÝW!ε¼ŸÞµƒÅJÍÁ.f¨É£cÚÒ×ÖähpT’2¾õTB K¯’¥îÌйKÔ´j©o´G,gªÚfuÄ)!Ú*–tD£·»tö`G6zw­ú[e¹gÚ0§"hmÙt_`Ãè/Ó²)×ëƒReu}r–‹OÞ¶KP*—ˆÂP…lâ¼ú¦Î;-~tˆ‰9Sâ¹WTpÎéZgß9f€Éxº ²bc¶ ú"c§—Àjnàçá Ð^¶ýåXwQò¿×6?.¨) jÓ3Ù°Ž¼SUŠ›–`åD» ™‘k î²O\$mÅZß½vœWë2Q8W~7•UàÝ_{§*à hÞ4oð£g,%9û·îoÒirá?góßú.L*šo†¾r Öh<™S=9³Õ¡©=‡t†4¼žu¥è«U­/ÿyq}wkšÜ喙°¬ª±G;£ÛÕP€Õ6u’÷äD´y«e•è:6f/ÆW#Õˆ«pP³ƒî&Bê“¡ öq$U«ȱ25%¤Rpt¨Úœ{Í b°W¦•A Ïz»´¢a½]5f̪ث#hËF±‹‰¢³ 
8²ÑéËÕgSËÕõªå;ý`aQèX;ây.óâÆÚ£^\»•<Ø?žHë Ð U!-1¨ø‚1ª_Í Ò‘Í½Û“0ú@-ad¤fGlš`aOѵ0¾Áþ˯· ³ÁE\ZìeåL:èÅXŒ§¶£:å= [AaVµœ4ínOs€‡,o7o“ˆ\ø &a3NWßïì5»OÿLë†6·š"‰[ÐNÊÍ4UrЖX/ £:Ž£FdÔɳkaŠQÉðŒC™í"`çê÷&Âë4;#2E¨ZDœ˜Os²³'g˜òc&AYÉàP´)öŸ¢Q0sªOÍ«†é-ªgsLº#zIäÃƒŽŒFÿ×ý‡5˘~dTŒyUøë€Í6”['a,-PíøÈ½Àèò^Z-‘ã½Þþ:¿9f9¸%Ôýö®­9ŽÛXÿV^òŽÑ@ß ¸ü”ª”«ü”<Ÿ^V+"©”%çןÆî’r±;˜°äÉqÙå²$jÓ¾wmôÉBÅÐ@í®OíÁ鬙~ Þ›s]%{ÁùΉãDæ°ë(§OfbF?Q±óÚ´b‡G9ÔdktÀÎRZàu£ØDÔsŒ5¬©PÍÇœ}Ù¶>¶žÒed±(/N¿}gó8æîLJ‚ᚇºœ–,A_I-…:»ÿ¡žv­üßÍ]QóOeJ{Vq“Ùa¢ˆYp•'J5ÐÄëS}f-âm"¯ì{kbŸ×ÄëOŽ"J1SÂK;íÞ`¯ ,XY¤/:ò6t1“s.µªõÍY¬¾ñ“ ß?bR\Xôp{m»ýjwóÒ~s•^yppw˜ü·@„{*nÞÎVÌTÜK:ÀÙžñq ÝNɧKÂÔ¬Š¢ìü$B(;Ê͈ÚDÇ3¢OC{t\ÏN±ó”BÒâ’ÓñÆ'H5$ŸÕñ®- 3ø2t$ÒYЇGÖDoB0‚wNTu øðœ·Æ×„ñ_ëìRnHh^îš½tÔ÷yÑ’fBòÔ+{]@dž9k/ ¶Š¦ÝzS}wFSŽù}ÍODk4|¨ÂrTmN@ÒôÐìT®ÈhlòŠ$ñ¨)ëX–±frí3Ö…Ú$Çoœ¤yZÏ‹»¦×`iïÅ/CZ«¾ü¸|å’ ëÅÀ`p\¹urÉ!Fc„ pÍdÉxa“Óýd¯}i#7Í4º÷ª‹:·Ø®aÉ@>…HÍ,Ô1ûÁ›/å¦tL_o[^ö¬Í—ûl¾ôiˆGéXi«Ï1nZG õÀ12‚îà¤/#AõÂ9h'*G˜ƒ®ÛNðr,š5ÆyZ®îíã)i'Žf«±º×ïš^ßVeV’*MeªjÁдg1ºj`¶X¨§Øñz8ŠhRO¡v¬7ß$|þ°-ÅŽòçh)$HZÅ4eY‚ñ¦y"{­f—² hˆNƒÀ4ÛâáÆæË!§÷üi%Œ¢ÙŠyŒ‹æSˆ‡ó¢ñÕ¤öö%]ÿLÈÃN’¡W·oΨ¬s½Ì¬¤RÓ¬Jn†hۙ󘓺ÿáúìâ“…¿§›Þõ™ÙÃæêþäæöÛÉçÛo«»“‡Og7'OÖŸÿ ûÁëìÓc.òôû¯1€«¹ ¹.ðé쥩'A‰«}ºÒ”YSÊmzçOêkÞÔïEÜÔ‰_ߋѧøt>šÆ'ó5ã‹‹1~ÈV¿f8­sÑg1œ£R•ñ 1ö6žFÿLà ›qtÞÛû¯kVÊl¦w“ùG6õhÙÔ?õ” ì€9›|ÅäìPì¬r}öeg¶6³€hóçàæí¹3†,§þ´ Šäx#ÉöƒÙ?Y—2—`‹*µ)¾ãÝÈìZ—87Þ;œ2b‘ ×\9"Nƒ:í:êãÏÊ[°[>b²yD÷¼¸‘(f½ èÀï±UÒt T h/•w:!e‰bèÈPuÂ-§ò†…×tTîrMœ© #ûeE!&g~ ûïë¨Ì*vze( [aZ³gÛæGTm¢Ù-Øfa^FZ@#¸*ÍÎz„¹\ O͘9ˆ4µy’õÞÊÒ>(if³îJ´Ÿ'5jšm0«ÕÕ ßE·¹M}:£å£åWöÆÒ«‹Dß5ÀÍý,eoŠö€²WMûª”½„%ˆXÑ3‡ÀÈÕˆX³H×J}¯ïI ,¨ó&tµi¿\|¶rD¦À…vjä ^g©ï©KR «¨·úN{»3 +öÉ,è¦*0oƒ‰¤k¶ÖØÃb5—/ŠVûtI[5FÅBu/™+í%‹qX£ŸcA†Oö’)`biüi%½dQTõfȦº4¡Ž5Å~ã}è^ëWBÌÔú!¥~u‡½®õ£”a8—åwIáùÝut,¡/Kݧ~Ò«ø(ÄÞÍïvóò»M1ÊïšfÅmr†Y\»ø¦¯0T]|€ðØÑ a-öo“ÅÝ‹hÎü©ÌòÍ6¹ôlÂ[a1ì¼hGÔ“9Ð,‘{˜fÍ\Æt3"ñ¶µé ÝÑ(<ñý²ÛSÆjá3Ú±½È:"ša—’ß«Òüêý’Jsˆö¼úJ3C©C¡d^µ ù’`@p²A=d+Í£O+q(‡Ö#Îâ[0Õå¥J­öo¶JkDCÖÕ'xÔ¼¯Ý ßÐ ðžZ¬ÊƦ^5VAjðëÒXUvœq;•±§_;UÙqº!Å—Z%¯1˜šÔ2̇vHª|}|÷\aèþîôþê×›µÿ6cæWÐ^‰ª*‰}KØ„E2CÔ¶ý ½šÒ0c„RÞF?i’IÈZ£qZTœ,^öJóÌÕÄÕ(Kîžš’´U#ÿ‚ùI¬{jÆÔtª—cÑT¯²Ô¢´Oê…=ìŒ$d”ªó>„:TŒ¤gGêž“{OÝòs ölÃHÄçJMgÒ`ž²vèb‹î5ZvÞ8¯Ì¡$M(ÁOèÑÔ 9=:¢F= 
ƒiuf?CNß…’lJå86ÁƒÇM–/¿v”B_WCŸÄÎ(¾Edê(IÝ-’‰G•Ù~ÍÝoW«³‹­cýJï?‹âÕÉËÚÜŸµHˆ< ˜ÁŸÈ” aÒ±ÌåOFÄiBÓD¨#ˆMå± }‚âèfŸjæTUgOBñh?šˆbÁd?"NŽ^*rnbvôa…ƒýÎI8:×–À4Ûo…Èà빆Å\ ƒ‰@(a›„é"jv¸âùË Ù–°ˆœ›mKr•*v)#9¨žŠ‘“ÎìQJ’ÐìU§ùF.˸ѧN:; ÇgÜ"K»½£1V3N‹‡ƒˆ]m-`\æiÆaÖ}ZãÂ`NN„plwÓ÷Q×Ô¸¸›fމĶýÝ[½‹À¸$ÄРÕ{ºzØ‘¥foÛC™û OÅ™7K4ürkwôþ³¹Ñë\ôåÕýÅ­ýÅ߷혳 öÀ}9Ûs×IKZÞ¦Bþ¥¼üåúŽÇyá9•}Ô¡EAÌ”„îß™ýÁû‹÷smrao—׎Àmw¹ô½Þk—L‹ªEX]õä6‡b ZÝÿpÿ»ýÿõ‡ó¯WŸ/OïÍýxXýúûé¦}–’‡¾ôޱK£y’î¾Óž×·7WvÿbÄÅíÝê6µí_?¦q¶8¾ókìȧެõ¯®¯n®®Ï>ÏÓ‹*}9´GBÊi?ÉŸŽ¿¡öÅÙéù×›ËÏ«yy~DZ'½= tÃÓÁÁ¡k§ÜKë‡ËÕdz¯ŸF?4o´Yµ+}‡pCd†߃œþÖsyp[\;Âõcäûr§Ãª%©ëÊóx,θûú%=òüëå¯c/}úåòüýx?>tp‰S±Ù÷mÚmÙýt{ÿðå̈üåîv“ š[X"×—ÊäÚ×±wƒ'À¾ÍJkïÆüK{|¾Rlÿ7Ú=«x¢ˆ êè"QQÉIuÖß—fý,Hð!kÒâ‚bMÈ%!G_V’õ{›íäY9HŸ–$×L Fìƒ\“1Ÿ‚ti?ßÔREÄv=¯âé Ìf¬ö‡áÏÁÉžß?½$DÒ <½¿ÐO²Ûöª:0©ý‰ÆèIÎî|m¢ͯQÄ.T•=‰‚ ý§ÌÄiv~ŸMT]œõØ2oOX¶Dƒ¹.oŸk[ÌÈé–/Y2²ÃD¥¹}´³sÚ5І!m­‘‚J[4-Ì|}ï;OG¸óA(dî<j?î§bЦw>e Ä^óÝí×‡ì Ø¾?8eU¼¼ø(Ooðß_¾?":_†óð‘ÎÎBôÚU‚^÷9æØˆØ;Âb÷9•Ng4†%ˆvp‹´gÏ‹µëÊk•[ð$ƒ˜sQ0í [|C¹5;šÅy&Y‹Å»!b§ž7ÄÒà¾DGK†X": ›AÄnlu=0]MC “׃Ý!ÛSä] &EÀþâ‰ïÓ¥:V ÇjQÝQ=n:•D<UvqrÁ®±Qscx#>µÈy Üø±ÏëN­.•ß #ã“ ÕO$K±­òàK‚ ?Kÿü×5ˆ÷Ô‹è–à,'óeÿDš«K;µÛ•ÝpHt*ð}̳ 8©€BÖóÑ£IÑ5/ [ÿ öYÃàs ZiÞ­¨ÔF™´Äñ!Ärµ3g¡§<³,™¸¶Î|ÑN(–è¢é¾½,LÛ°‰ëD9,e5ƒ`–„C§zÖÌ!»fòïã€i6·y<ò´üûœüÈ×DüÝÁY€1Kü§îNø쾬ÔÈìsíéý«:ÓŠ7Ù¹¢VxòqÖ&»^ó¦{YŽ0VZ~^PÎôÎ…ç.¼¨Ÿú<¨Ü<Í ìbIK½ÙüÓš! 
7¢TÍCHUøYŠ1Øéª9ß]1˜å¢¬bX¯PŸš+ ØvYZh\[h3ð¼—Ç f²J(ÇEËMBt1Ĺٿ‰Ñã– º€Ñ|øI9×ò›PžèÐ( ET%éªæBC•¤ Åþ!s60.íDOí ÚÔÀ¢,>Ð /¦ßǸà!Pø.jdcUû—@—á.OŒ´·“bå“b#Ôã´kvOÕˆ-„ØŽ¦•ݬÔ\He•!¶ët„VzÐL©MÌbïY Ø6OT$ÅtƉR¬†<® øˆ…{Dö°;!­ˆj²ëâ¶Å'»ä«»?þü·þatñä«0é@£é&yùùÊèl”J®Ì³„ÃÉO'ßo>ü¸¼Ÿ;·Íc·¹X”ÔD‰Îõô·ë‹oÍÅ«Ëð1Pðç¯{¸kû·Ûkܳ±°gû§Ÿ^‡lqÏ–ßÓ)£c°€dѦí9 ÿó¤CÏÎ?¯þa÷ë—ÛÛ/;x!ÿ4­´úùærõýCÀüדtá¯V—Ï¿—i 8¸@§ê9iåMôÙÇ’îYˆýô5N§=¹J§2i¼X]ý¶º|Õ· <°˜‰‚dþ’{ÆöÛ¶¹º7£óÍDùÛêîäáÓÙÍÉE†õç¿|~ò`M]s;6Ç÷½ ‹šBŒ®1†KcJ>h˜¾À®ä^Ä|‘ÿùÓ c¼·`–D||q1ÆÙ*Ì× ç!¤ænÊðÅ‘»:úç¶*-®£Éú_ÛÙ%qo1»ô‡­ËÚº½VÍ;6qðUª,º«ù<8ñ "ªŸKW'ÐZöƒŸVeHä`R•mƒÈ×IÌѧ• _Ù±Lö‚ƒò¸§€q*ÉnK{Vó#L{Æ·/“´mU!-,71œáO]™B½ÍjÄî!bZ ¹'£ä¹uàoaö.¹Š§w[u gúñã%ÇpÙÚ ÌýHÛSÄ÷¡í—ìÉáÔA€Ülžoœ/J¯‰Ë†Jè…‹B¥©2y¢iv?ëˆh-:ßìÔÂQŽnfú{¾Ff—AYLŒRSlrÜηˆeÃ~Š fý¼Ž*Â;X[O[Â<Ö¯@‹…þ ¦kàÉCœ–ã´NP¦äØŸäÑ—¸ƒam*§%l[´bÛ""ÓSÕ~|Ò´e|S;nXÀ6u!Æi¶…lycôe%|³Sy‘8o‘,´«Ñ¯Þ ôÖ¯výy¿o-@š°é™YxKßñ çÙy“Î[<Šç]|ÇY¯oí;‚S XwSµ{p£¹X’MñÄ!¶„øöÁÍ$†ðó•qççþlåúF;‹Î3ºÂÕßaÏ P•¸ß¶cÎ]7©©ÌÓK¥s£ÅÎM‚ôˆ¥À¹ñÛiÔƒVÒC6J}Z‘wÂnž•D±. ñ!tBìœÁg¬dTMáW ņZ(u7¿#<Éùm¼¯0ßÍËéƒ-9ÿdcÃJŽºáLÎ?Ù~ð/r Buº5MAð¤ÞW¯¹(U~hÒ(¦3¦34Þ'lûIåHùÍÓ§•(?tƒ)˜³k71N«épJì¬ü)"'( D:*X¨ o:‚óß eþººYmnÀ4`(aÀÐygƒ†5 w„Ê+¦võ*ÅE²…N$Áû‡êf …z+¤®^S)0Ýl^¯¾~þ÷ïøeuýéüêîÓåÍÕÕ®…a/‹,Ì‚7¿H@-¶+ ^|Èš´¸á{ßð¤'2 •I¦YM5Éž~qi¹_#Êÿ=Äu&y;Äuïq]…¢_³È!¬½•92åÚQ|´މyª¶P0ÉøK¢òëÿþݸñ°{ûOw®ÿi¦Êpº[Å=Í^ÿ½9™4>\ ÿŸ¬¾K00dŠê8ªÐ^³¤¼ú®šu„fz~u“¤ä Ôô‰°iýaouõ©`–ò%: zÑ´€“*¶„ØÝy”à ã<ê ;JœDîÄI/™ Œ‚oí¼p …æ‘ÐÎû…>L ßÕU]^|íÐd:¢’ $=uÊõYR)[‚¾´-ó|3³B{)œjb¡*¸dê0"Áì¼ Ú“Â¥%Ü볋Oæm:o·'8 ^UÌœ/çNßLKnìµhz7„¹{<þ—½këm,GÎÅØ<äe¥&Y’ d ÁÉb1ƒEƒÀu&nÛ°ìé‡ùï©:’-Ù¢ÉsHõd‘tc ØG‡ÅâWV}5¢³­üd=|½–D£òN2&›¿öhQµ¯€:@¶ÊInq8tp©`)¢Ûùl®þ.æu3œµíø¯[FxÑ”ûgñ=näáwë­‡:6+b™Ÿ5Quf#ñ¢‚w0/²aãû§ëBªÉØi¹Ù)&`šòaY‡—óflæéMCï ÎZ9¬SRû¬ç =ãñh–ãäЦZkKÊCÀ,|³IÒˆµEŽCÏ(·y|{½:çyH@±¹càc Âa§Ù“HÛr“º"šñH¡Š›ôÌä¤&„@ÆÎË:{ä^6 Ü·Kd³Ë{×Ŷ-‡¼ùðe%qóÍâéQÓ(7‹õùT«Ö›'½0»¬²ÏˆÎtÝ‹@ío(Rஹ¡Ó^ˆ’—¼~¼®¼œ|´ÄvÞ%@˜RJä¼èQ=oo•¬š8ª ê¸çËÇ´Æfmdð©ª×±4™ÈAK‰×«FÓ¶Ò‰)SÀ[²$^p‹VöJ0kæM |‡ˆ¡„Ô)?çᤢʪ…3¥FÇ ¯éŒUMqâ ¹Y°»‡Uáóñ°6=N9­èD"Û4u¦¸Œ RÉ{k˜œ« Çé½=õ%{Ù½m¨sB?s2z™’/1z¾±Y•S…ßËÑ6΀.B$ŠÓ‡P½;í®yý’Øšj7ŠÙñSž% $Ðä–WÐÉèô„*¿ åϬý˜ÔÝàÅD:Äú 
vóÓ;èõ8ºsâB…HÖ{y.i³à|Ö\oùÓYó—íiÒe!ÀN>x¬ÒM4$¾Ò,3úwYpHT àÐè !$“æ–Ú:ûòfý3yB¿)ûDœWŸÍÞv©kpÁ{®ž×f«çõíÍb#ð´úüëb[eR×ÌáìtyØ6SÊÔØGœ©¾--QÓŒŽÄí¶d8„Ä‘ÙËsm¿NRF½ ¤Ñ•¨s`\ º·8}ž¡{oÁ`2Ž Øs) ¨ÛÎÎ+š€IXAé]ƒ=;ìÊ­íñümÉÃûg ÞÈoªá6xÛ ÿôø¼ªˆ÷BW ö4…¢ÛãŒ8•Ô¤›y+ÞÇ}L¶³p :w¨¢•}*Ë' >j²ø@^M¢C·4Q^¾-F—tðLšPÍTÚ\_WÞ¡%ªywK&„‚y„˜W&Ÿžn¸—^óî–(þBÍû6!¸þ52b"tƒ° OÛpå–_Ø@‰y‡H±¦Š¼S`ĢuZן|79¶³Ée¨”tψçk"˜ü‰&o8æÉevcìŽûù^—Vºgp©•\pîBo‡\ÅÈ Ö½Á¼9x?‰¾7Ã+Ÿ™uoŸìþ°­R<"ÖãéÄzG?¤SJ³ÜyGÏiœo¢ŠÑt Çdë³>G¯BÔ46ŒEô*`gr«ôˆkz¦€ºÓ숉»~ë—Þ¸Áã¿ë‡¦z€EN„çrbüFºgì`'ñf‰é³Aç)ÍöJçS;£–)Ú‚ÆPíý÷.O~µ¥1OLã{YZÉ|jc–>ê%Ä™sÊNÚ<›Ã>.½ ®·ýÛýÞ"×.Iök›w uÝ}6ôÌÎp<¥ŽÖ‹tõÂoZCQ•¸ZEÓXôÁ¸|…¬Õ3kÆÉ%¹4EÓ šV-f'j|æ3)ËëÞ‹Ã1ßܰOñ+*¢5m*Ê‘£³¶²å¤z¬u=îÅ<é÷@Z$'õá^´ss»+§™ˆ›õfË@¼k†¯‚]v§{{›À.N)‚yŠ›ˆ±Sž3/Åv¹Í°ŒN\Ⴊ)DŒ. ǘäÙ=Y“ä¦]b@&n b{Þ1б­&Ó·™{²¥òòùfý´x¸5\¯*œ©t˜þRg‘û»ÇÝœ”£‰Ükˆ­I̓àBKgÇöÔÿò“¼gqéŒ à³…iMƒîXÄF>L)LkFIÞqË­k“]Ê«ëZ¿¶­ýÛKPßüÃÃãý—ÕÓO«çÍB\®º '›žÎ™øÒSJ€ ”ŽÙùZç¬@:Í@V·›*èíðÙŒ&"${;dÑfÉ,£Eoì™aV°ÿ ù“¸ã•’èŒ\ÈÁ,5 á¨ÈÿÂPΦSxø{î"uBNí_êË¿òø‹„¨—×»À5Õò’øl±©#„ô®/ ò»ñjeÜ`Ä”£š;cªÌÚÂ,S(‚Y΂,SŠXå@>­0–‚;7Ærˆý1–¬Æ¢;5>Å6 m}S™¯ *›‡=÷ÔCÀuZ]gÏKx5Ø+ýg±Y¾«¼‹‡ØY§vJ É_;›•è½pZ"¨‹KÚÔÄ}ʦDPèÓ=/’h‚¡ò:ÁÅschèÏö(bJb(€V(Ÿ`{¤¦ìdËê׫@´äÄwÜ<2îét_¼CFúæ¥L»Âÿƒf|’÷=á“Þ33^ô(Î÷(ez+®–€ê1–Fþ˜uJEtÉÈÿ@6•QS gFTÚÕuŽü]L"j˜ñ+é\‚óW5Ó¶ªéznë.§ÕkÝ ¹'¼Î \¼È3ÿ º‰W1Ò'¾Z|ºþ|ã몜Ç®ˆ 1Nšúm‚‡àcƒM„Ú·E·B4%¸mLöÂ\ìÒÃÅ_%Ø·rUr˜ Ûu¥6„ݯųS=9²`d/Þ®=¬íÈc›jKÔÜ„móÎ$*û“Û…ÑóK}S\ú­î»«TåýàÇ6¦€–‘œñ½ÝîÍýíêaÝ~øpû,8·É!åžÐˆÿ/¿UFa'ó Àéb½—ž•î8Ÿ[V´³è!TÅ‚w%ülN9³áƒÏ^|M ‚¼µCWÇàÓâ˜s*^wSDß™^GmšÛÔép¡î–`.?Û™`æäþ ³Js=%Ä>‹þã«j_^Ý®¾Ýþûû‡£V¿DYVßÝݬþþÀê±U{±^Ýì?ƒã¦>‹š<ã‚Y½Ìv7f~¼¹ß'›úVóÏú¶k}+9©×«õ/«›·GU]9w†Šý?¦ž±[Ûî1ë`ÁW±„_WO?]Þ]¼Jd9,ÿíóCXº`‡®ª#$piMˆ‚1„p„úÃLø`áâå‡/þë_¿ÿówþ÷»dúñâo? §éâŸÂŸåì¼Ø~°Üµ7þõîòñ×ïÿòoCë¹,êéþâëãúi5ÀóæãÅ àýÝ…– =~¼໾ø—‹? 
ÍŸ÷"ƒëÛûàÞ+PÞ‹s+¶*– °'[r~ÿÞU>‹>G^õùëåúIÞêBPúBµ÷»Ýο Î¡óGùüéñ×õàÑl;¤X¬o”ô+z­œ“U1¢.´tÞâÞýóÀD?x7¿%e”OM~5L—AÐ!\üYÑ)ù8NBtÞ¥1Z5·”…:wÜjÞ/ˆ˜’ù~±ï”ÿy©/zwyw½úðÃpÆŽÍÛ∠c‘Hý/Ž#E²08¹°´è” 'L!><ÂÁÛÍøzyûAþêC¯[°€¼øtsùt¹ùõîz3Ø¥“3ΞŠìæô¢ªcœj"·€)‘u'êá™Mð¾° ^%唿7g/Åñs~Ô^nWn“ÝK+h‚^ Ä2ÆXèÛêwƒµZÒ9GóÀ„Þ×~*Ç-QÀ–¬ƒ œMO?.c¥,#ÁArg Á©¿­¡ÈŽð[Ú@W|m/ã5®.EÞ³å˜:¦œ6op@©ä.c³eß’íüé¾HäIÌŽ/ò?…¡/ˆ¡¼3WÀKEÒ ðÂQô:À ;ãð" 瘚vt°²"ta!D1Åè¢Û¦TŒ3ìš>‚û£ ;—Bæ-‘›uãÍG®Œâ¼fʨ¶`Ê|¹ÔÛ¯oÕÛõ$¸¶Ð¥äfu+%Æÿš?]Tòlew>¬vˆyœÖù |#¢Y¤£/fSõSì0 xxÈÁG4„ã½Û…Sª?ý`eeðA$p'^j|d·-:µ 3|¨Ù§àƒHÓšž3ðÑ’ªÏ{ü^ÊÞ=Ø{쯯>1^]ÇÅÏ?ºÙìܳZûÉÛÜQ™ú¾ {{ÂWq1ú*Öƒ«7 lxREX-dž5¶kPâVljDÛÇy~·Ë6œœÑò²®2¤Á ¿rì*&D ³ÆÛÞä*ŤŸ‚Ñ ÞÍ¡W(Ä€iJ¡ðdË©®š¦/s6Þ¡©&mú2£>St&ÎIé詚ÐpÀÆP0f6ŒAŒYð¢Ëâ˜3H˜Å±àR8v°²B “øÉÙªˆË"ÚY@»—¯¨Ñ%‘Ì"ðÔ¤|¥Òl€3@ZÏʹ·çC¬t¥z¾Û!à‰_T x=ßm ÿœeñyþÉÙÅ)diÈALƒÎKœpëR·¾é4y`ã  Í”iÂÞ.Ö&iÌö‹)¸ôÕKÐcpéûæ³.}ÑFíµ#æÒ[ßae޼ã0O„Ë(a¼Ó‰S³M¡«0…Ñi¯g(péC´y­ðÉQëû••úôÀÖWc l››‡¡ ‡…àͼœ±ñk޶Í`”05rfÛÜRÂݬÈÑmsÉ+©ƒ•m›¼•óòVuÛ¦eqŽ£u½/¦EŒ 2ôíFDì ؤßrTje¦Z;-E½ë»Ïò­›™éœÊ¯:p+¬%[˜›i¢\ÝÙ8Z}:Î i›Œë)¼ü{ ø«ç÷¼s´Gð_ýb‡CG<…n‰€êuŠ%ö"C³ÎšömùZvœÅ@±cÅûÁ€ˆS×ãu”¯Dþ´¨É2ðÌøcR“„æ@di¶×‰…^gðnéµ./Ǻç–"È{ébòýÊ Ü}+ÇhÊiõ†¯f£Sqfê^W'blb‚1%L&Ìê£/œ*Å.œÁ²wÖ¿+hùººúIPdqùüô“<{}­ xd9D;+šÉ_|`ˆ¨>c2ù‹G‘߃ŸUÊ!Áv@~özÿAÏJÈòN¤Ã¬ ýlkg+™51ÜUòh\›õþ¾ »bÁã½.hñåòîòóKmÚÁç׫ǧ…Ö¹Êÿï7gaM™V¨¼;fÚO©|%ŒØ9ëgÓçdTyR±ÚþDÆQ§Âò¶ "“q´Áe­¼O^³¦AËØðÖ0\®V8:ÔÕÏóÝ电ÓuW‚CÈw(…HyŤ{¸^Å‘—/!§=³æìf0ÔiÎPÏîk5§ &j¨ä±wùzùuëóYNÄ€ˆ¢ø`@meD Uæ…»¶‚މ*=m&u+«UÚ4Cé0›o0OZÉ%ÐõÏu;Ní¸Òf˜éyÛ‡î;>ã§vÜëŸé߀ØR0S†ŽåvºîÛþ•î–’™eÏl¬ñáôy\D"A‘›ÃŸW ¥73xT2<˜a„ƒŽQœ2ZÇéþHL5íæßßü‡Äô!.Àeµ<̤6£ºVNö{ï“¿ù§g©2#|Óî}ðˆYÿ€K†!”ßû«h6o† ÖGw¼Å›pŽ"Æ33}íÓ8ãÂèïßn|;FïOŸTÙ!pfÖI¥)´à–q¨.0Ô€ëk/ÖF—q™ùÕî”_M ¿ZõÉ[f.ýŽ\à, P²eäPfMk¥$3ä*ë)9Ô±G6-2QìšMKy9uýÀ#‡‘p–¤°ÐùêCUR>†h¬ ×g¥N‹‰~w\ÆÁTÁxÙº'oOê{ QöÍAƒÔÅ´\E Sõ86€ùB:kr½1ƒhRNÖÁÚ›@ª¼5Æà£«‚ÔçûÕXcS¹ Y2; δé4MZ@Ñ$1OqV;z^£æ¹·!´ÏDâRÀ³÷=à\¿~\?êÃF>¹ü¼ÊyKã¿ÜÍmŠ,’ ³ð9:š4Øxç° 3B±ã Ï3è"E%ŽuÙ~b6Ÿ:1=û÷UjmúEĸ(Ö »ÙX >ÎÒ“:Œ”@BNlÆåPF¶2ñnZDh|þNΖ£™Õö1ÙW´—T=Ñ`?Xå¨I:³UåšWÒ;ç¢Ü–¾2•‹Òí-GLÖt²išZŒ¡ñŒèsÛ¢“Jà@þ™UÈ–BŸÀ<èÊ‘õps³Þ<>?è#¯žo>¯NLçI|¶x¸¹* Ò³Fž”mcÖ€m+«0'Nö—óuí¥?U _úuôXÚe~:“È+y;+&f„0¥LAÇ 
U§LR",ïËä7Çjªžè¥€¡¼Õdk²5¤Œ.]CºO£©omÅ~X_e4ÉFâ¶5+í!´`(Ql¦%GZkOô%3à CC>æh~×ch¯Œÿ_ò®¥9’ä6ÿ• ]t1kò U„.>9BŽØ°Ï‡ìÑRËǘ]­½î&ÙìÎêʬʬYÉ=¢É)v‰>l¾Òõ—/ßm íá78\C+náZ;¯ˆ²ð‘ôiô±]ƒÔ³ÑGØÃ‹Õÿ¼<<_Î0.|ŠkPPˆXÔ¦N(4gspŒ‰Ù×D”JùŒK*ñ²"€rÀP2ëðº{ù¬ŸÂ;ìÉšâ71¶êá˜RMYŸìÎ'.šo&¢.Ã^ÉG"à÷”½º¼øòr}[7â¥gJþ0[Þ&›ü “õ^!ÔkLÒxRöMF-/O¶ ;¤’Ë“8m|)¿yó@ nO¼7¢_c~,B˺km£zÿÛ“ {{âÁ»ýfž3]>¡é| &D'¡ÕÔì Óerž -â éÀ9`鋨›—µsóè¯fÜIÍ%,c¯KŽçÜB‡`L ‘ AÆý*˜VIž§nØÅ¢´ÚO£5È’\¡Q^M$‰CE^ͪ>Á°ÈÌØwfd19+,Ê d’O ‹ nl¯í=3 ļŸKÀtbéõF1ô¨G Ͻ¦Ð=o”51Ùü]Åú´ÝP´[Rüùêåéùáî5£¹Ö‡ßßlW½ÞÜ_=Ý\?Þèë=n4ª¡pa…µÙê™FV†9¹¤@Ô˜±vâ¹DÛE±Â*´!ÂôÖB·'<9É œk ;^›kd²æŠÈ²2"cÄî[ `Ì]kƦrœ¨tboà‹^6÷C™®ºOíÛu“¸bàžQïö÷ž^‡?ÌÔ¾=<Ö2kè Ê4oÈY-BõRÛt9"V¨ºÕ*YanšG‹Õ§O²¹2ŽLB¿½~TµomíÖk]T¥ØûþÈÄìO'ì•Ñ'1"­¶s?àʰ´œFëŒe÷TXвBb’ ›˜Ødèžãi5ŠØëd(óúºªºÃP¼Ü ŽÝ}7öôÓËË:w(ÞSW‡ÈqÎöò= ¥¦œè§Âjç*õ(Ä@¾€r’a~Ÿu•‰òÄçï‚iâ+að °¶¯d‘î¾@BÆWÂ%‘øUjBTF:)>J#Rî<"ô¬AÿÁçýLÎÉà³N„£’·¸šêÒ{hÜmº~DÇs€)†ÞˆÁçˆ vsš$ñ2d͘O=‚`ð v_ëÓï ˜u©/ö¬–óør»9NcfσóÐh1ÌèCÖÌÀ $¥¢lÊvŒÁš§@¬"'A}®@r ·&  ¶Y72ÑÊ ||hÚ³e’ÅmyË÷÷_žRÜ\mÿÈÅë(ïk[ÓØÏ/n®~®[ê©+¾"ô¦Š¶“ïr]Ûz cD¡Üí·-LY¿P¥N95/TÕà{•¦¬Dlš“H”Ú:USƒi× 1ïFòp=ª¶k–c[Ûwx°;¦çóà¹ZÖˆ›€;oWBƒp¸gß® I¨ ¸'Aýbѧï¼/oÆþ/ïÂø°i‘˜4͹¦Šš*(s‡}­w;™úc7_iÊ&9¬UbQ²Pï"i“õŠ­}kÛ tïUgM!1ãÍv€| ˜Ÿ«LmÝyQ—ULU6𺪹óÈåÖœ|n½‡Ù£0áµb”¢Š†‹Õ›»J¡~L‡Þqta “¨>Â¥îRÃéàŸ“‘ÀJnžÒ"¹{ŸºÐ¦#i¡o˜òüøpûíöò~“'2Ýè¯ÿúðø³þò½~‰›_nž»úisõóìùâ^¶U¿Ï䡘¤ôLМžÖäÁµêã¨>z<ñßG‰‹©ß‰^;àÏS¿›ß™ ï;0Ûƒû¦¨FÔïˆbå*¢;ÛA‹6’ë#º¸ [Þ‚ˆ$kÖ϶#Co1Õûç¯ÑÕ‡Ÿ×WÍǯ}`RU¿}ÀY‹„¨$‚¸¥Å”zé5ÌÈ0ZçÈä%dH0ÙÞcÛñòùØ»¤ÚäcIBÂTc±ÑEF·Ìb¹wuUå%—™žX“Ú0A1&Mo!¹,1sR¶­žÕv•²“ÑlÌ.˜î{¯žRáS†ºMãËÁ9 £S5Úmr–ø]694‚¿!9¥«/_ ¾\ÉÅßþöõúi¿JÙm¾:¾^ÆMëeÎs¿ÄÁ>gidŸs.Øçl']tÉ/B&vnFG5G&Ä€a12¥bdr ÝŠM_i~`†s\ƒÿÕ,qyqÏWF†Y›À%¤Àè§{DÎÈïC_Çá•÷bð‰[·cAI±¸éòXœ â#LÁ7ü߇í(Õúû¶ÊØ7[è ]Ü]Þ_þõõ"àôs#¿ð®’ZDáõŒN„4GZäôiN“°wFlPÚM[UËqÞâiŒ¸=@Ä4"ª§žh×ÝI”òä^ï"k%±&<¶º*ÜàÀ –…)´'þdC8Þ¬nÄ[§¾”þgÜDŵàe&ª'jN—:£ÍeßÂD³Rje¦}‰!Æ8¥©µœ4AÆì¢É‘40Áí¡ èRUÄ?y LP:DîäœÙü) NÁ¸¯ZᬜOiÙ¨Ü(Œ+QSþù#[»GDßGa€ºnïÍ·”~¹hN4î+õ„ÝËÁ ³å`»ûMx®ŒÿXå`uÙì0-ñnâxV*qöÕÞ­êì¶sp0` Rƒ„ÉÛE=Æù6Ôw©4qp0D²z}•ƒóš”ÌçïÙ="v¿´!ÉŒ•ê+“þ ïÞ mï)•цª·/,©ÆæQ5´F©ejÄö7=¤!ˆG[ÑÓÅíy¸Ÿ>«~{¸±âÌõæëåËíóÁ/Öµò,tÜ *Bð¦e©÷½¦¿áôžÓòç QmŠënƒ“2ëñ"©‘¬2Ϩ¾!I¢EƶÑ!| 
ƒ•÷‹ýºÙÖÕãÇ­Z;±ïxýùñáÅFÞ®6Ï•6S8#ta¼(V£æéÂŽ[Ûn T½«¯JPíb=Á%ã0}»&€ùõïRi«„S«ªUªÅ™ùº¸º|o9Ì|v@_à+ ÃÅ·_¯®þ·nùCð<®Œ!F¿R}ô3 ÕXIl•ÇeÄsEجf£ÇF#?L3ÖVŽ™yPqºìÐüZÔlôkG£É­i¦ÑïÆŒ¸(dU°§î£ŽêBfÔ‘˜h"fMÚ²Ùq›]?—c ŒŒªš0éQ\6yFØ¥ÿ8Ú28ìߺx´Ìsw‰û*ͧ϶ŽeóTwe#cJÉy·Œ g\™ø`lœ@L³šËäÔ²J®v,©¤JÎÓUr•äæ‘…Ò¤ƒX†à˜ë7øe5OÝGËUÊDÙyòAÎßSc¢ÆDD¾n£:¿Êkêr4Õ¦ Ê"¦}D .¡/I$Àuï¡Ìœ¶Ÿíä]{ ÎJ>8RÝ2dÝ3LÔ!«-•CtD‹ï¢'„Õ2ž•¤"ˆñlt>L¢«d‡xÓ(œµkœP®šYAi‰9†=y]ßhvòNȼÃ×Ç3Xßô’¨ì B\p‰U€ y­‚õÇZ'Ý|S·"€Ì™ À)Ymláh¶Žfë«úß4i¬v©&ŒÕÞœ²“¯V0› ‰M£…C†¯ÉR|‰å¤.ÆTNñ¥_Ùh"Ó‚r’>bϾÕÑŠÅvŸ^@›)¡‡°hÀ£lÖV˜õ-YŽôzØ/L~/O»ö½%½³ÿèÁÔ.P*šÍö.ù¼ Û#'Òž”yÐô±óUyVÔ[¶]=Gj*›wþ·‡ëªø-1û3pà–`ºw3 #9ŠØb¦/+£Ya›„SO`ºgò~z¬VB”)OàÝn-ä1Õã»DZp¯`’ìJã60Æ`ç—˜Ÿ¨Þç&æ][ÓGÄWE)Ü«w\·}»l¶/©ŸZÖ½=Š£Ú4n^EÚ ®=šÐGQê:øp²˜d¬ûýu1É·‡Û›«ß£~¯ɸ6"Ø–èeÚ öm`Qh ðt÷ݶÞÜ«ÑoE.Õ+ S\"ô÷Ý JbýGÖmè°)ùü‘œZ¹8;)ùׄ‹õ„šM&;V›ÌñP½ ¥‹Û\ð±ÊÅ50Ãúº¸˜ã©‹Û*Šaª2Ñúž­¬»6E è3x0®Oa²ÈÂÍY'œÐi\¸t[Ü<¿ÔÊöÍct+NÚ~ˆ”MÛ~v±Ì¸ؾ}ë4`–*Û‘”Â"Û‡öý1Ð )ùµ/^oîô¤=M/áù‡}è@ ôT`Ó0k˜Is–$ÜÖî…{~ëë˜d‹¹Bs«ž§m§÷tãŒØfQž4iHÙÆ™iµHYÍ À.”ªl½–0+Û#\÷"¥Ê9dx'MS)%-Ó ®{?Ž Ùe*¦&0F&æ¤-zª2¤òaÇæÒSϾí¾È`ÓЕ0bŸÒ}X¬øô›~t÷Ûx•éG;•ߒ⺒¤‡žŽ¢²ê˜âr¦uñ¥×ùHF¡ $ÖüuWf=¤Áç™Öß^¬à2â š"å´Ò`Î:M¯šŽè‰ª™r›úf.Õ ÐËt³å¡S'<Å,ƒÙ»¸ZxUýÚÂv˺^5õµ÷ª&g:ç²WÀDY¯j /m‡ R‰WÅXÁÁÔ í»êC÷ûZL²÷µ@ÁíWšœ®QM¡a‡Ž¬±뤶ûqÚù}†äµ¶;òó×›Ç;ÑU6ï´þ>‡==G|û¯_f‹Ò+}?úsߨfν }L‹ƒWÜ«H¶Œœ‘Kzúéà&ëÒÞ¬$º±q\')ÑÚ+R÷#d-Yç°¹[²º ÐÀ;ZѪEoåöòíBcœ3ÓÝÕÝ’îÃþ³œ—²FÄç0À‘gmÓ‰è´&¸>iÏɲ–ZFlî‰ÖÎèCd1®•T].,”P»&ÄíR£T0;"ÑqšÎZ ×~ &=ˆzV͉ÂÚu¯ºD™®sÕÓ–µ>­ÃAAe¼ËÉ/$íÉš~OìÄȰS#ðè÷Xã;rf3«}ˆ]1–܌ʑFd)EWV*I²Û­cA11ÅɆp@É‚ñ»Øš€±…ãjW¸6t¯/(ä¦ «š’ wª[.¦¦s|RVHJܽtRz‚·´'§@’Ávwº¸úÎîÃÏ QOÏ¿Í`óö¨+,'Â9œ•ÖÚ¬VâZÌàd¥Ô q“³Ö Lˆë£x˜„ܔݫu(’˜«(#R kc.÷žV1sfvÚ%)!ðïqÇ#ȲœQ,¨RæEÊ}¢Pð'›ŠÞÁ7µ_¶ÍÉD´Öåø[‡ÁÅåõÝMA°€;£¬ìj°–¢Ì©2ˆ!,¸9=’H3`ekÖ) f‘ýUÑ9`¥øì]Mo9’ý+½ìIÕd#‚áÌe÷2À,°Ø=ïA.«ÛBÛ–!Ùݳûë7XU’ꃙI&ÉR7ÐÀ`fÚ-§2#Èß/"d Ïè«v)z¯±Vt¿ «üè£$eÌd–“ž$‘Ô-ú²]YCÉ¥Év/›¹å•‡¤cƒzUÜá¹ѳ]X_ïq§™§gfºCþûió«ßþ¿Ý¡ÍÖîÖÓýÏçÊ”BÀ¡€J°Ây 9:!©oóì*Ê~®¦e5Áx¸Ž =ÜÔâ’«á½Ê­§ãOLáÚHLax¯§‰Ùg2¼Ñ¥@4ø¥¬)¿AVö,B¥MŸÝae(€H+¸ †Díq…|·Ûƒ@Ïfúwý>ïo·¿|ÿzót[Ç@Ì0›×4@ØovXM?\%¥n¸ËvÓYÑi »‡¸ÅÆPfÉ9ÀGBé’Xpà´¥âÚ¸+ŽÆ³{p†Ý8l\féðÅèó[,âHµk¡ÊÁ`$¤ú˜j®L2пŸøÖ>ì[býnÇ™éZ=ød# Å]ÕÀ›HP äÌ& 
rŠKeÙ/;›”Kt]Ý8ÅsNñ±àºxÅvœyÙWg9pG8ÅèU3N1n"¸¨ÙÕ»¾ÁëûÂf¥»/\'CÛq†£©‘߆ängùž¶÷_kËkAâP8ôkà˜AD#wã¸;Q?˜6. ç`‘,@`¿sëc¤Ñɶ‡ÁµSÀ‚tz;3¥5Në¢Dl"4-®¨QQE-±Äv µ;¿ûC¡3):!”ëÒ¼¿ÊnûéÞ_·0CýÐJ™P\1Jìƒ3 ˆÍ´îçÂéç•ÆMÚ%¤TR33Ü[DL’!è‘$:Í”µz,¿1Ùóø¢Ùa ùÌ+Mf╉ܩlO†b {îêDÍCtÕ5eã¢Æ7ß8ôùvûÑnСÁ£GI¤NôÿY‡£ª°f\#KP±wèTZýF³ý&#‡`e¿/Ϋf›ŽD³¬Sj  Ñ_°H–À°lœ “ºä@þ×Õ\àè ‡av1S13µêެ/›°£ÛwFÑR8dçún)ºDÚDäÄ"aˆWÿÏvœ”(ÿòóͶ®$fŽ™®ý2ǰj''zL{¤G.`A\ý]ݤÝÀx犟‰G.p¾ §‹§kÇPT¯ ±A†#lZϘqtucѳuµ®[Ö2A€¶A<ŒÄZÔ1PkH¯Ðvðô£“íÌõº}òÂP˜%Zƒ³‰,Ì8ºæƒrYuÙ¥$›à}ÞH˜-q ¦Æ‚`”xmŒ5z<Èj¶Æe +¼K dÙ¼»þ6839µup0ð†c€q-yˆÑþÓÎÅÜq¸ iu\,™½wnqÞƒÏæû^¿¬ˆ<Ι$ºö ”ñô*`0d £Á‹.õièÈÅüÌQ;ç…éhé@tòï§ø¢Hù¢V½Å1ãŠp;KÔª—˜å†êàÆ!sV!A%(8³óüܧ?ìÊ )J(z÷'øN‚/9NMJìÛà× ¡=Ípû^ïíûOwÿe§÷_/â÷ÿ¶ãp÷÷/îþù •Ã_~H ~÷áõÏ\f—l4 ,²¼§]fBË9n¿Xüb–çå[þ5½ë÷éì&nïî½ûpz}°#›V„ãíÇ8|Øá)÷OvÕ3ÛôÛÝãß>Þ~ùáE›Ý·Ÿ-v×=;b,]ìNæ•íz4Ýê=h»Gö/@BŒóŸ¹Ù‹Lð‰Éýh¢º{|÷׿ÿû; *æ¤9û¯H4„ðÃw{ÁÄogWè´s&Õ^-·àñÛ?¿¼ûëZ–ÔÃ…¼ÙÞ¾†­™?»¡ÀïÃOæM†›¯¿m·ÿwÁ›}h"Lmx“£(žƒwÕa|¿7±Pþo§ŽÏÞÁÉ;7E|æöpkÈ…àÔ¢¨Ô´×˜žÄÂô¤:Þ$ÎÎÅýä)cÿ¼®c ôv_Κíš9ú´‚ü${sA½à1î<äp½Î6;8;Uˆ£ÝqnB´Ñyͽü/»w ÔÔawÚh&®Oý' öÄ ècR3ÌÝú›áØv¯ˆæ‰óÛNãl{¿=ògÙyót³ó«ßä8CO2ô«_¤Ö´ÛáT¯f¡Ö›v{DŒ«Z v•»ˆ~]Ù&\Xs¿ñd6ÒX²æä(ȼ1Oß*ùåD/ß²Â`´X(p$91åÇÏhŠa„uc˜³ã¾)5ùl‘l` Üpì~MgµKuW{b³‹ ]¼Ý¡PIýB‡Âã¡°ïvÙ•U¯Vàà)¦¹Á¤ƒ“SqôŒ¼ç-Æ#»ÆuÊd¿>5µÄàÔÔ^øþ²s$©E…EGäÞ¢\ý§Qìjÿeò"ØõjC½ kÖÿY„SW-5ÃUÀžO(ïuöÃ2î…lkÜÑ—à^z«D6ဠsë{½i4Ç¥ ÀHh8€Yðà³æÀ"äz‡2CÍÑ;tG°„g¡ªøÊPuÀ+aF Õ 6à•fÑMì0!6¡›¬[nê0¸@à›ÑËÑ ]""_7Ðýj£Yló}¿/VnöË4¸5àÍçSi·È2Ü@…9n‰+-ÚèBèX˜‡CßÊUP<ž¢+ú1(WñJÇ™9{£a(WñJs(ç=qt¡åºXàÚ¿f§bÄ«’X8õwY»Ïwßï·7O÷?©GAš–¸RD’»bp°†È|&ˆ›é,&Å´n…³î6";,È2`¤%“ä5KA|,“½.»SËÆ “ehË´Ë€ã Èù„‚ý~¢<PgLWÆg¡±Ïb&µ˜šYÚh’ÚárÈ‚<JíO D'2¯õ¥¥¸áŸxûž‚ÁåÍãûÞo|¤÷üþîýOÄüOïË0Ù؉×+n×ìûô‡ŒjIÙÞBسÈîjÝŽ£·Ø~9 ì`¾/j/û<ÇÛ‘p» {ºEN£dï€ £÷|ìå `OŠÚ1ÏÏrN8•"¼Ç¢¦(ßžôVØ4y*$-Sl³÷Œ<ÄR¤™AÅxU§;UÅMC÷ÛÝoùñ×pã&øf¿Yðæñá{ÊR×o» ÁËz=?ËàgÙñWziöÄëd×Ï=O'…È]qàå* 0gÝó#Auqs •Ò’ïq]•‡ƒ8dAÜypäáÞy,b› !¸ï¼.&5I‚ǦäDÄQ½¨àµÙ¦ …wÛÿݦ?8HÚ»JZ䈓j@çÍËhʈÀaÔ°ÖçÀjè=˜ÊE׆½£€—a˜IaX]¶ÝöHN|éhž¹:_Ø6Eq ££ÑY“3ø\ÕÒ4”å„ÉÑ•­ uÜF†T‡“šU A¨ †5 7°!Ò3šŠ|^±q»/·‰ï‚šw³¨ÝÛwbüøw©^÷{Ãí =ß\”nn2÷æRÓ7ÙÊͤBÌ& ´yr´Ûƒ-Ë`§EÔÏΡÙw‰ŠáFd]êDMó8kç^åÑ)Ü@b¶³ÂÎ…/i“S 19Q MS$²í…‘ú²[S»¸n'ìüíŸÔ JtÌMö 
1·gÞ»˜³g༲ÖŒA£?˜AKµÉ ù0¤ÆmšÜ÷â^e•ù.@~÷r®ßÝþlçúçÛow7ßn{?-|Bm3o´®9Þ/O!¶í:/YÏêwšL,j±^Ì®r!Ûcÿ*ŸNÅovj¬±vCÐdíÃóckß$[û6 oŽ \uº”mBßRVlB¯‰IÕZT©-»fº»˜`WÂnšÿ{úz»=ñ"¶Ÿ¾¸±Ÿùíáñ—çàø5~® )œ‡Ù&mkë"?¦­‹ÓÅ¡ÜeÎøŒV4áÚîÏö*¨¬)apnFò‚mÉ CRÊŠÌ„t_ãýý—d"2hrÆ ]'{•É«EÑm’gOƒðÆŽ½Èµ³ù¯'þþsÂìûÏ_¿Uo1Ц¥nÇݹ¶`Gܶ'ï‘ì#ÑAR¿´z¨zÞÀaïúøðëѤÕù¿¸ù´›%¯RñL2É>Xœ´ÝüÈQPc”7©i½^…Ó¼ùúð¡®”èf®C4”Eßv`xÇ¥?›ËöP0É\2;þ¡bu.eSš4£Ž(a»kaà…xÎWMœ”З8ÝŠž‘*<;íÇ’¾=~¿«èÎŒÓêvÜæO¡÷~ˆú¥¥ˆúæÕDkDM™Éਠ«ç6ð ˜UqC§D>ß&,ýüM9þQ×§=Ùœ+³çÐfÐ1 Ê^:C›¶ôÖÚn–4¤ºPÓħ™ÿ4´&DœÅÛëó”ÅáVÏÞ“TÚ"†‚Þüb’2féaŽdÓ©õDMÕ3¥éúÅ Ôt0Ö4(%ÖÏè‰j7g®É>uÍ^›YtÂ% 1>.ž õÙqâ#áôJ_£™t暣‘þ†k<aMqârƒß]sñBr¬ã¹`çE"œ c Îf«øG’éu.$1$IÕ¹@„ÛÎéš)O/\­1é–Êèiaû ûJô¼…q‡Íxóç…²e°#‰õ11i":RŒk”¶ã"+Z€„ÀŽ×£v2ÝØ×´ø”b)©ŒÒò‰È¹G2é…†|æÇ׈`ç>hÛÐ5Ô‚SŠÇÎc·žè¹Ô[Ç1ÇÂaÙUXØ»—^Ì:GâéÓ"Æ©wVŽ(ˆ®åpèyHXx8Ô«çꎊ²4Ô̲Tã`ª º(ËM†vµtÑ=Q„<åÀ« {™›´±ƒ«N¹bM=9ª²^ìŽ:1Ï=ÈXw"ÖÓ%ÔDñPô/¢Œ*å,б”:‘t༫raÍ2¶ôØ#†3)í˜'²•Œ˜2gà—:sº6¤J,éÌ¡u9ë2À“êh¡n‹‘D‘¶†ßÌ.P ¨±‡‡q’ 쪒ÅEx²°Ç I*ª‰¢[°ºXsÏÍÙ ÔvÏä -x”eLcƒãD‚yÁ**p@j\3‘ãžÒžw$^¤M{:¾2ˆYí±¹Óa¦úxý…Ö`àrAÓÙ¤VÑKlêß‹TÂhÛ«sC )¡r˜Ö™Q*t%‘¢»éA:L=.õ“Lj–‚gðMš%?|¼Ç"œ,ÚÚM¥à¢Ÿmx&°è²+'Œ‡"Ú¯ò¥voÛ1y8L·N¨É'ókµ¼g¡Ú9À“ÒøLô~Ro ÒÅü›°Ì hA`^tÌ|6{•G¯ð‹I}¨ ÑÁ%þCjŠˆ~|ø(K•²´Êê³n™‹q4+`eTKÏǤB3€wMžšÅðWð³=LøÙàtù»uÆ2¾Gn´é³Ò“EDs€Ú|o…+jNO>E"XB~TWú*R‹2#à:gF.‡ &µI˜þÚ´É㯧SÉ_ÏH|H^Ï\Ot××*8¤bµ®œšR+zšäD=3¾#ó$Œ’úŸda“t_F-"¡X>WßuR"¯kuÎü9X¿H~÷Ô…1Žd‡„…×ícÓÌ>¶˜Y)í6ƒ,lWÝï¦ ªô­Ù ÕÑÇ씵hÕŸìV=~DÓ>6N+ÈÅP¾¡«Ï1 ^‘Ú6¡‹’`óâ)\ÜÁN6hX< @‹G‚(·Žíõà wPà½˜Š“5ã¯È.cc·±p…ºjz ÜíŒcód{чKêt‹‚6.xPþÜŧ.Ûöá!ä2eÝ×}¬Kí©ÂNöz¸º-¿÷hy‡¹ÌêO¶wümÚìhʶRÓ)T×Ä 7h:~Ø¢ÿhÇÇNõöÓíÓr°|úÃm3C¼^ø¯Ä«šrÕ"'j›§Î„8“J;“`y2M.ŒG:Á>ÔÁ¢ñ  „²h<4H¾!÷E*ÒiéµÁ³YèÏÅé4uÞ€ž‰ÛîèàìËNÎA/Ûvš ŸÚŒ®“5åÕzƒÅ¤¶}LžA‹¶½ uÀФrg„î É'ˆw·Ÿ¾}ì2~¦;òhCˆ,õ^×4Š‘yµÀºˆ¤™o_·¬B2åS—z‡²Tmà´ U—ÒgYŽ>µKµ!1ûžŠDIÍ”š›. 
ŒÎd&!g)X“–D•!?¶ŽúÎ$Ü7Ÿ Ð —vRE½'mRÑaJ¢+¦E“1btžGº™æ£Ï8ôó†döïÎéâ őⴊÈù¾ ,®é©3¿Ábucö¼XgüÐy™^€.–ƒn:OÐ"æE·ÔET¦EÔ ùùÁ#™uÀÝôÚS•WJ‰lÅ7Ýé #ØZËIT‘ì9¥».wEœí§û´Æ'-ºÜýóù.ß ># f®°¡¬´i„œ ŽÒÁÏXÂtI^Ò3傾ý9šYÙ„—UZǾœp~,à¯V‚ÓŠqÞ ,”’•øÝï ­ëÎú̘öÍ´è°øEì&ŸËG‰«ËœH̃­‚nòÛBLÎòœjÒ˜k¼MJ2‡9p®øo¢íšGТÖLt™„¾¶cRÉ»]»maÅþÌ71à&¨¸Tb¹.ûÐ<ðfþF[’W§U#àÐIRs\³GÙ€‚•£Ã#l3vN–-ÉŒtfÄ1J_má2‡ElæüæÍ#ùôð«íµ}ôn_M,FgM°mWÏv*÷¸¸Á§â<¨Æ· –Ϫ_ÛÇm¢z?ã?«¶Ö^¼®)½$²O2¹†®ð¥¬zùIé$˜q¶ kñ:†˜6 /^ljêË‘`:\ÇôÚÁîdíuÔÖ ©±i£½FÉâ·û¹§ÓËÞoïî¿ÖñMû'o¸v´Ý6]“p"!Cxû˵•ÎsYô‹;0¦ž#†EºòQ]Á}Š”]ôúå} ìBr3+îÓ¢Þ—ïSÚ ?ºY5§f—¥N…¦FgžwÏ($·X:Ó¬ê˧ˆ²{ZgQ¤eR3I€Ñ™%sýüe¯RòÂÔ. Ÿõ* ê—Ü€¦^ÖWšÚÅû3“ ÇÄn×dôpt*Ñœ ¯—Ç;÷$ ‰¿n*‘©hŠS=j§TbÞªTãÉ Æè”Æfìïþi¢NvøióKÜß»&ŸMCßíP~°‡¹ßÏOnŸîŸ¾Ü~}úøðíµYÿi“œ¬ò‰$­WѲSæc¬¡ëDRWŸÅ%Ö®îU0ljKÜ«(‹½šgõ|•a'÷ Í0تq¯:\r„k¸W¤y÷*E–¼à^q×!LïŠÈo$†X“å‰7“ú§´”¸©4â Ô›‡SnŸf՞гŸ~|žÍÙÿp]õ•DÖ ¾ºWm‚)Õè×ñ`VI«'"Zȫˀ,àG?À–m“xM'@N»è£J [È Øx!Çã±ø=çÊŠ 4KrC®oón#;W9:] WÔjœ% }Ìp‡¾ø7n~Ù}Ñ®rÙì5#Ï$îÉN#¶A¯ÂªTb4_\Ôû0/Á^p¼;5€—§)0Š×eùP8¾Ì?¾ˆ«§×† >uÛUà13;i*¯œÚHr¾ì}H_Òæ€¥&)Rî‰ÇQÊðû7A,cȤ¢,ÔnK4³h:f·UG¿ Œþ`úë„ÑxÆ=ŽÑ«sMÚ™¤ä´—¡e´ìdçÁ‡šäŽ|¸ÙÞÞ¼ÿþåç»Ns0 .HcdyÕBêÄqÙƒ-uQZý&g0É ÌŽóщÌb5΄Y6íéô NR¡9©*Æ){ M€ϸúGD'ç˜cì“18¸jÙèyEàzP˜R':¤àÛ|›snô ËÁ%"9èßzc“÷õñÿÙ»–&7räüWsqøÐ% @BžÐi/a‡#l_›#µ§Š~Ì®ýëI²›ld5ãõF3R‹¬ÊLJÌDæ—_6gžúÚ¨:Ó°T'{xÒ_î* §pFì|hªæ£ú×àM.1€K3¥%#ì•—¬ÍF¼°§ ÀÆ]~œf'®HlO .Æš´5'Immjnan:96}c=Ëœ£»K´dKQuˆ4ªëž@ÇI{V°6§4KŒ "y¾L?v†ênwãñiùmµüõJøûƒn]»Y`ž.þ,ö8%Ɖ ›Þs]%²ŽEz5 qšÖo°*Ì(úfçaöåÓ)F0Vý ôµk†62&5°4;ƒbòY*zL¤p ’op⾋BŠxMC”)ýMÕàpR£šÚĶ'„3À­Ú»è6^õ÷}ÄÑõâµn—úž…ìÈ9êý=õÙ™Åxj Ûˆ [¶‹Ûý'LiLÆ¥àgºo;ߤÓëp·Æ$ŒÉ{gÙ @e¯{mØ9ºn{VŽ NC²n*ªºmK¶}´©}WÓhœ{ •`»ëþý@kðƒ-ÞÝÞ¦ÿκ5õwÝïÝÆ»ÔNišœ“ÛêÀinvXÞrãP>õÊ6¸PßRI¢Î›*/kœÔ¿ÑépÛÍÄþ÷¶ ÆÕA6ÝUÊÕÝâ~ñU}u¨²ŸºŠŒ]ÇëøK¸‚ûgùŸºÑWÕÆiE·Å?-g7yÁ)T X¨±nÖ_¸½öèq =Ø}rq”=‹|Ì[íÙád7!ö>VÑÒÚ‘)©‰jZ-Qf>ÙUÌ!C/bŠbÍä©*bß1+ï‹?º@S æAœ“ÊÇà›{õ“9šÌ Ü‚ó?ìÒ÷æ^­r-âTÍ›1¤3B×ø„ÚêdzÄö?aÅ Á…YùêrÊ«í~{ù2%ÎÙÿësÕUˆ‰&µ¦z±>¶Ô¡°òNЕŕwRn)°DñFNÉ2~¸D?:ÌL'8ß÷×ãtUïà(Á¼£âtU«IÜD… qætÊ®]n#©ó"ÞöK^âtý¿Ã,N#´µ®’esÇTÖ·sR©wŠ‹àñnd„]úÆV.MÏ}Òœ´ˆˆNÛH1½Ìrií…Õ`ýÅH1 vmãz%5&ÐiéŠEUM$$0*ñš|¸~`^ =oIãg¦ñKçÇò(†ÜxúÞÛwºtö¢É‹¯ið¢ì”o‚Ú„4û­³>&äéB 
aò2Bß”ºb-­üö'²c¾óðSšc@…–&§U±N‰¨ÙX€Atíû:È;îØ'NG§ ‚Êh»¦1âf¹&vRéÓ®©Ê‰"*¼™×ŒWMá2‹Ì¾›NåÌùvMÏê5.\–F³(pŠ˜ SØœ©ŒœP(xýGš.˜ÕøÓ”"µÚsИÉAó‚v)\ЈƒØüh^Ä*û#ŽjÂYR˜½7+ØÐnO…¶à´¦* ÖšÀ”ZôæA¦4ˆµPFa׬·T¡7kÒˆR 8ÑÝÈúæ1{¿÷jEŠÛ܃©Nqà<^ëÏݫžz¬}Óàÿtþh‹…’o*ë¼Fjµ áÌÁ Ï»õ­T¢Ý\›È–C¦’"PL.ŒóíN|=\[a½™¾ª„ÑEjrmœ£ƒ5+ûYóÊûÅšÄÂjü{Ó$›åêiq÷]Ï”·Q’ÊÕÕÞö×È‚14ùkœD¹ÆßÖV|ªÕµë%RI-6‘K£n!ë†{RéRàQÓÕϬk1 ç‚ø¦xR YŽX5Ö”Â)ÆÏv½Þ˧½wu‘/âéÈW Èa[äK.LYž¬?C·:ì˜Ôzž”I5‰%'å–}öœ‹’cʦ¤;u:*c=_ª\Ô Ðãg%Ù|n‚ø# Bw«çÇ›åSíö™WÇôN;&X#fSAðSŽK#âWðƒ¾•¡#YõsG0+HI£ˆŒ˜Æš=0wréäzþ"{c<…ħé×±mö¯œrH>J² ½©#/:_Ú‘~@vXŽ$UX:[iؼ÷†Yïp€áíÅ úñ¼ãÁ–TXìṀ±|¸û¾x\ü-ÏMxœªá”ªm-6MgY\„›Ô6«ù€\ÿùfÄ‹/·«SýþóÃÃ÷#¥ÿûóâyõçûëÕß4ÔPù§à7«ëÝïAöFŧà ÔŒkw„Ñ+¼½Ë?س~¸±gRå.W7¿­®µ«¹$xÒ¾v÷?bûbÛO¹yR—ÿ«žK]=~xþ¶¸ÿð&Žaýî‡f  ž6ÆSl¤§Äô'ë@Yî@£¾ õïFò­/ÎËôñ’õGMnEO^Ÿ­ÇŽÐé´Þk$=ÂL‚K©àšÒœj³ZâŒWÙšÜQ¸÷z.*CܰúŸ„¶¼VqWZàQœ›Ò>ÂhMLÍG!…>Ùµr§¢±•L2¦Ö5Çi®ãc÷f%g¡>æT¥6kMÛÔ§0¶ÛN¼4k *´&ÎzhŠ´<¦5q˜Ëw/Vª4"®Rš†Ø Ô¤4?Å×|ò¶G¨Yk\ª5‰9.ÐZ°Á Q­y—eQß½Y‰Úô©€ÉeC‡“jÓD†cômj›Ôa‡}!6«j R“€‚{Iu¶mbä)¤}µ•Ý"_hÚ¾ø¦fB&…AíE ê¤ÉöÄù±pU"ٱ尺ģaˆ6Ôƒ—ÛÃn²ÀFÉäœt›º;ô9øÿ«§E/±cMÓE‡-ûÑ@ÌÝuíDÔcôÎÄÕ¥+]Ì#M¦¥õ`•O©Ú>–«çs¥Ü«§›¯÷ýÐbãDy%:o Öª7n ’=Âwéd ¶7‘ü¥­a»§¶¶ &£ë58ÇŒnÆ8@‰¥$æ‘.Ðø fë…;ùô°{l/úe·™´¿AkrmБíKúøÞ2ž^C“ž&¡±x*2‰@ã&!>»£áM(½,BÍË_<¸ ˜°ˆ7‚:F§>_êêú¦#LfïT2ÌŸ4?j9‡=±t± 02y¾¸ML 8½±×Šu7§÷¡4½a@…Q_ Wy%`8¯×˜ò½‰o¯V’ßý>JÉS•â0`mÕ4ö4EqÉZš/Ðq³Í+ZoB¶ÊCÖõ[P²jq¼¤JÙ9¬=Auqo}j@ä:÷¦ Ú å<¥ÿ#DP$bGÍ Ã.…\g ËÛc?[°ÛT ‹wÖvù¬-0ç"Â=qt±…0¸dÖUe œbßh “˜˜0qEÞy{Ý· î;8éÚëž1ž(ƒ¤%7jAÕUbvd5úTa=v!¬'6Õ“·Ì•}ëÉN5ßšµÉö­Ây»N_Õê/·Ï{?TW,ùi²œÇ‹Å'Ì£D$ôȪ!•L·:°'£EL\ÐëCFè9V¶¦˜\T¾D2°™';¸«³@uð<…ðÉ–>‚ž=–/nÒ·W ~úx·X~Sy]µŒÕçJ~~ðÆ3^Ï…¨º…áàR–÷i'›.5?od±ªÜ†9ÎÃцéf%~²JÀ—›{æÓ†&eSø¸|º¹zº×ÜâÛû6o»lÞ—‘¦Üðy=bæDm†8iŸín,Hm)¶³(û¡<È6VTpÛ§9óXr¦b ÙížÜ:Á¼*^æ§Ô^HóÈ$õãÙY6ƒžXNFmZ„å>Œ_ßÎwôî$Ð Ë)x.ŽåiŽ 7H ž·E£ßÒŠ&g…ðI“2†€À˜ÆyàŽƒžþ˜ ¸ãvGÆYà.KZ½'®.mjë˜<^<>~ÊT…ÓÈxFB×)d5ŠÅ×UãrÍpb(AsWÐO§?”­´î¤Ò ÍY"§ŠÛxq^ôU=OGó˜â »øh°Å$©‚çãx[ÇwŸ~þóŸ>1¥˜Œõ ¡p D?¼èšq)ÀïÕg–-L܈ÿðùÃóßî?ý¼múô³Ã×Õó§ù×?}È—\~[½ï(z|xyÎ^œúƒ+¡ ~º _–Wñë}û6xá/aé~¹æè¯Wõ1ï®wOéô)Ÿ^–vaóéç­þòýåùÓÏ?ò[ܾ¬þ²á2aúp»Z<­ŽtI>|þüá…‚ïçÏïÏÅ-µÂO“Íw ÒÔ|ÉMÈ`¼ ÕQš6(–2ƒbr„DÁ[çóhg¯¢Œ§‘Î}WL¹¶ 
½—)˜“4€à¸?)¶ÿM“bkÏW`ö±|R¬“ÄI-fœˆBLÍ÷Á±ð>ØŒB£\„›Ð\“Gm‚(e[ÅÞ^¬¨I? >JÀwF±ÿÙéP›:LÒUÙg»yW6o¤÷ †õÊfz¥“;·óªhKÎüíþxGŽ?Ú-)ñxIŽ;Z’CÙí‚ÿø]èðûiN‹ý—W¬mÙâ)šwyÅh7î[øVA@‘Np"ö:N¦P"úH¢Ió®ÇrK BÌžBÁ'Ÿ C=†bÈ2OìI¤C’´>å‡â†®T³>Æ6˜=‰'´IN¿ÓSÓõ?%êÏìÂÁÚ¯ì–sN<ý¾z|ºy2)ÿöpûr·ZÞ.nîž>ªÀv¸Pø—¬sf­ØMÙòùñeU±€FÎ(Èzæ–”ØGLÚQŒÎ©yÛZT¿'„z&a=!Ñ–ü”£9³‡4>ßAw„ÚÓ„HYHÞ“RHV³—¤ÈWS&}|Ö(¹Á‰õ#`îËvŽø!Y5d;p~ðý€€·"ÜCN*=Xa Z›ë?Í –h´¦_|õ–nâæi=ÆN ;úÐVFT}M¨"z£*¡HH=F¾åÓ+üU;TïfŒãÅ@ëIò£X{"üÝF¬5C%f†*¬ÐÀ26amäÙ±½jãøöM…A$à‰â‚@ר—Š¢Þ¸m‹ýþ”þ¼kÉmƒM™eLÓ;F±Í8£\O{?]‡¤á ’ZŽ£HÑ‚¤Þó„>(Ž8÷-<ä52놮 èjU«qpµ8t¬¸ âËRîɧӄ˜w¾ê¦…l?2AKgƒ~„Ÿ»p«ß°­ t\)¶:ëFÏa+‘°ôÄÖX†­¡[›Ðá¤NÕpššèø°»àòÀ“ø‹‡©ï{ ê •õ=OŠšƒºvS©ÀdMÙ÷îll2HêBLôN<ÝPýšhŸdF]â8¤ê!”¥¨Ú“EµÁS$Ui`i*x>¸%™HmRH=‰ãX»V¸ì*LŸ¬-Z=vÿ“jL! 5ÍmÚº°3 *¤˜~—k—ד¶8†ÓQ+꿚¢V7eᆕþ/{×¶Ç‘\¡½ͺ䥒»¡;¡í‹ýìØš ÐH®ü_þ™3{@¦¦»º«z(r­XiX3•—“—ÊK ÙNµÅñ™zí Wm{R/h¥v‰pzƒ—,ôhÕzUÊÕ'ösŠ DëJ¨ÑY•N«C¯ Ê@o´r-ï'û)W!9³êB/óe—ƒÇQ.«u-ž/@æVæ¶cÅWÞ:g3ÇÇÛ+ý£_nï~W²ßèÚ|Þ<üÑÓý¾Ûqâãû‹›ëC^ Ž3lm½Tø¢(è¬û 9­»¥’Ðíà$Jšã…4:œDûüDþ!U¡=³„9³`Í%pÌÉW¡=%Xí2½Ñ–j0†#Ù`'M³Á>׃qˆò‚©¼âôPtL¢w*=uÏqW0 A:pÉûUçyùíZÿЛÛOJÇ›þ}ùÖ~¾ùøñööýÔkèÄŸ®+­ –%U«ªG½¸¬´‚}Œ>ÀìE+Sd©±˜¢qM±…ÚUÅ,\ qª@É Ù¥zºµ€|ËØZ bäG|ªIhê×o –y4N‰­uð‘!6hæ+é¿ RŒü§Gœ£j]cUI2‚—Uáè´S%mõ¼¼¿Û­{Ÿ7`R‰{œô˜@V!8.™9e8„6´~Âä4½š9æªü”\*š/'ó0)ò0ÏÄiÓÀ¬x$vX÷– àh$ Ó–þ•É2eßôQ‘ŠÊ”c"®˜;YÇø Î%¦T‡¶¼ Ør?®íd+ÐŽ6U¤»¶UáÛ+^^ß=lû­gÁ0baI "®…5Ž_°,Û iö>”¦dl‡Î© É܉tfœšQ¡4Í:Ñ¢5gâÈ2«Ð<„ªÒ& Nø ÀÙç|è¤a¹0™$§¦>´%ÉÕã+Æææpr”ßÑ ÔMRB\ÉEfµ&žNöùæâ~syn‹sæ6Fûò²Pª*üP_ò‘h{L¹æuò™$-±”\б(!ÁÓ Ø Ì9œ÷ótÿF`ª‘·“Y À˜$Å*0Ew0Å,˜²º˜NR5®¬Ý9À²—Æ}í>ʱÉ×Á!œ`²œº³™W×y•9yiþ\gk]—^ƒ„Ó÷/[½úÛµMðúu£zÀÀóƒQNçþƒç‡,<ÏZÁ£AoÛ‹]Gd¾ó ¼n çîs?›W9|™]0ÏRññEc€ê{RÕ=Â-Úmî4žNÒÏNØLQ§™Ñ"§ÎC,Y~jD‡Ê´Pv“ù3%ZØ,“Pðgeg0$ ©¦ ÃÆ¢Ãú»@"d¦Ž+£Ä»ÇHíÀf±o[—è‹lƒŸ‘”)Qù£ÌCç•U`ÉŽVK«7'>ù ŽçŽû~¡JOß›wÛv3rÝêOŽP£ÆîUÐùÒi(Ìu§ÐEò-æp¥T;õc”" i²ó1û9 J#ÕóÎrý‘ÀQ¬ƒQò¼>ŒÕ+Gð/ó(+M:¢XÖøbÝŽQ(8ÆHrúc©ëJÜmþj©Â‘O”yÊF$,¦êOýÔ‡Û߯oŸff²¿'’wâ±*‘B޼'²S¼"ôayeéZ¢nrœ¡îäPQ¥#åÞ„j„º Ž(Ír^)Påä´õÃw%sÖyUFIH/§>%\N?nçÌ=jGY‹ ‚u™‰¸ qJ«Vñe‹_•›Ë‹måŽs‡k§:Y¹@UV€(,¯Ñ¯Þ‹³+³gQªƆ."?NÊi€±äós6ž©ÒdU­Ô8ÌY¶l· dÙù€ìvXê Æ ÉO¹Ç ]S·¬…Q ¥b¼ Ç8jÅ›,U '”¡Kð*ê Õû&bñryìl-]¤’™ 8ƒ’”±>¸Yɺ 
KÑ+|Ìêgãœ|U+1ëÏd@Î8;;‰ñˆ³Ã.4Üj«êO°eâãíÕèÈǧ©º$—Hx)oèüó›«W‡pÔ ·Z¢æ³‡{E­EröB‰šÙ"Œ^c€TL´dO9íGÐ/P L¡˜RG"€RÒœ%4Uvëó^âÒóÅŠp)¨ƒØÏÊ}±FÚjá«€ OÑh«žT˜RÇÑyôY`êßКSLΟlû;›°s }ˆûƒ¿3ˆDqɲ›¹Ÿº·Ô4ðÂÝ6s?õqçHÝc`Y¼>¹?Â/1j­¯~‚PpÜØrãIJhû±Æ@h{ñìlÖÁÍ Pˆ}%aì½£Ÿ2GìáÅFõ0³Dá8«Y“XÇê´¤ïÌ`’sòŸOQÛÅ›÷×ÿ®üýõööãÓÿCaîú—›«ë¼Vʧ¿œY¶bs}õü3ʬ†.Ykîô¤D·kÇÙK)¦nó£}Û³}+åîåõæóõÕ‹i©³R{›¯÷ÌÞ½#vWÛ²¹×(÷ËÙûÛ/×wg¿]Üœ=¤ëoÿRë‘RvSŪ’¸P˜õŸõ®j½T“§±ˆ…zæ0)Ýn<¼ZÚÛ×Òkh_ê|ôŸÍ6ÄÕ1N–4…(Á9Âú…®\Ê8Ua˜nè‹§GÙÕÁÕJ§_K£> ¾”qÒ©-A€ÅŒëØ¥Õç1.¡Š˜†=¾šo©Ô×w®³RŸ0…ÃIù¦ÙhAÑöâ9}\¬€k+ P`™ÅµÄ^–7ýôG0®ûν¥"fÆ,(Ðd¦š¹4túw)¾•}~%ÿÍCùžŸË»ËN=—7té/ñ­K|qå_Ænžÿßæ ã”Îú…»¯QAO";¶^FѰl/PÈ;ó½° š ªB™¸h£þ9Ï,òð‚;ôðìg/x+VS¦"¤Ó¨ÞcœF–ü@¯Ám¦=<ûVNH#‹=oxF•‹¹S÷ÿ§Ð¿+’ƒÜBh?üÔHAýá°æsâû͇ÍCÿËSíøƒß\cØÇ–jvbJ ¤%*i“ÿBœÝ¥8$ßÈ`!íʇx¤¬R³øÇÖ)¥žTj…ͬR?Ó£Áë¢}퀋 û}FoK§ôõG€H\ayF²)µ¼nCš’ºß{ó®»TßÚì£EÁ»ÿ8¥²Eg¬°©Œ}£Ê¼=‡¸hÒ1±#µÍ3•¹Œä#j^Fï0×\6¥8 Ä(q,ÎÛ‘Øa~Bò ì¿åÙã7»?{{wûAÍóù½ÓÝ;Cý  ñ2ŒH] êTÇk=¾Ì±¶#z⦔ٓ$¼V"œ2i;±Çgj €k6%}]Ø9*Všâ±J(^t¢4©ørÔÙØ¶Æó$^¬¿xÿð[Y!טƒUFÇL¦%Û™m\»D›9‰É™ÛgÍ@Oã—õF?ž$£Óv´àíÞz¾ì[6+g‹q³—*ç|ñ~¹f*¢aôÚ¸©du™~UcŒøÇ}d‡ýªQ^«÷(M› B¨wvDSxú?›—/Ê"Z–ØÜ‘dÉt[ Mð¥:±)¥ï‡Ê6òühb³¿”Òd·b «kOÏå£+HlO¨å.{Fè?X0F²ª¸EŠõxDX±Æ±ÿŒÈ@™öÛÛˆñ)^\4=öoöEo.n.¯_YÞåÓýWh$d"bâ*^´à’t61€ùOï 4¥$¯@ÉÔ©!ÿýPr Ôõ†ƒ] ‰PbÆÓ¹Trˆó*ùhe"2‰óD°Û qç1çS .»À¥²o“º8ùÕ¡"r¡Š³)ñ¢µÎ^ý?ÿuY›:‰N(”°6…iÎb~#Èóe±6uü,ÖrLŠ1©þ ®Ðo·±ÑoN6ìj<–þª+CÌp‹™!¦˜›3Rʃiõ‚ÈKà<%JHk&bä÷ˆv ‚JŸyJ¨ J¨aLi!Äl5ð&‹´ÐŠF‚+œ0ÒR 9­¢… WÝŽýñns{g› Þ_Üß?-À9·‡•óKý½Ò¼ÝX°®®[bÊHm6PšÝ5]BžeM{y%Ã|‰ƒ‰iZÇ$_ ùL‹…–œ~K.Õ1ê4ôcç‘i©ŽõG¼\8Ù:~ÅŽõ3Üa ~Û~B2>ÅSe;NDZ¿Z½È«þŸÿ¦Äø:Q,uäÀ'ñTÁr¶ ¼9쩯׹h3QËqoç~ëwr½þë/ÿzP—“ žY÷m[T,ìÇn\¾ß(¥ã¬IèYGýÙÏgÿ¸yý×õ+œb¸æxEHékU8í}ƒA…Sx¤Â)߯púyßDl›·Òré7½ô¥–¼ˆm Ú,¤•;e!\gs3']qbô£Ñis¦|ÆÿùjEe±Þš<ݶ%á§Ü!ÙF†èúÕEõë Ñ&ɺ Ìžü!6i1¡d˜ˆØ°94Jü3—c~`•…%î0Fq2º½u\Lû#^–7©öa{šâˆ«Ešc{ÿý9/*fï_{<¿¿V¼:Û\½†K¸”to)^¼ o0gÀ¥¿ª„¥f¡?®SW†ÅRƒËÊa1S›™ËÒ%¯>f˜´j-(Ä1K°½lÌÆÑƒÛL—Ã&ß%3;¸ßð48¢ª™;aÒQj0ú›¥¨aF¨ÑDäö©1 ô5LPâŠ/¦‡)±Ã_i^»#TÑ^Å|ecmâ|v檽á{ÿ=LÞ‘“5âÆHU¸˜DMCŽ<ËÌG€­ïe'3ò[ü0i-0MT|šÎ—pç‘dK“ä‡ ? 
þ¼³f_]•)bD_íé±y›¿¾ìäe¸‘dc,81’óADÕ…äó# ±ã8 >]gÊÒõ¹¿ùüûÝÓ§z^ßÞ}~zÿNµúÙÿyõ+ý!é´ \½ÜÕ3QZ~va3…Z£´üì™þðÝu„dãÓ7ÝhðÌ”|Ò{¬:^^f¥OÀ>Ï>…󧣎­§´ÁLè' …±õPÁy›j|…<˦RÁB;'z«„Ø.;'°h‹ôî‡lóùB|².¶ì¢àŠäyÅå©PmŒC[ovrÞ;‡,<›…ö g"ºDßVäÑQ{cÍÀî¸õßosOÂQ“^Ÿ¢/7Eξ7â;wO>ªU®vCç&§›Ÿ~»ñÓýû_w«™z'‘(´KŠnyŸt™j¬”d/ôž¯Q’èR½-%(±ˆÞ!åBË…Øz¤ÌuÕΨþixwPåèý`ð61‡L:N·l9à+÷MVÍXFŠÜ½or¨œ;r›æ‰6y¢þ“àDÑÚûÞ‚\ÏŽ]»ÜËðìkyÌTÿ̰W·°¬”U7÷YoBB›ëS•Û#,°pÖ}^¦‡û¬Ëö)yâ5ìrXd‹:z`´û¬rÎ<ˆØ–Cê3:»c¨Bཻ]…À-À;S? ‘}³UõsåëÆ r2d+²ú›Þû»Ï?ÿòŸÿqú@áé§/ºÀ7÷w*ݹªîöÃ{•¸JêæËÓoÏZïúÇOO~üù—êü[óx82TÄ­$hì¿¢EÖ=¸ÕY»þ+ú‡®áÐøíÞáøüÅ6»9‘4"4ñFD¯G›ÈÕNAF~i…+å'ç@-Í%+µÛ9d«ê[«)Á°q”Ð [|$ËIkØà-L•¤´u^F:ˆGÞÛzÍò»Yb‡Ö‹4d2úšN¼+uOpAÊCæw‚̳àˆUÃe‹® Cß?ðÂ8qð†òZ#ü›÷Ím^ä#Õíúd¯I·ÌmñwkËGâGï’ÛbÐ7Œµ%çbPÄ_û¨Q-¢¦@‹\Æ„ÅIHˆ±Â„E°dÂloŽòïY =â,]µj›pöK³¶¿Iû|ôÃ-UÜ·QZ*L“#üJópò¦ÎÒ5ÁªˆÁ¼‚YuxµJýÏžaDŸ˜6)t€&¯À$ºß¶’¦Ì`Ù„YmWê°Ù_“˵Úû0”²\ß6SÑ¥Áéw£ þ=ó…M5MäÝd³ÃCí$„@Srsst3{çü WǪ2÷¢‹™¤5¸‰1Æã¤õN•R¹œ)ù¯[èÇ›·w¯M¶_¿Ë°ç^‡áÇæ¯` ä|:e^4?ß§`”«ùèpüÿÅ„CŽËFnWœ×9ñÆ%ž.”B-þùùUßPéí«wòçÛ§“0ï0u««\Á² íçû•ƒU®`]"©ê"_¶ôö Zßbé5Ìâ0¦“‡2$m“bPýã’ÙOzÛáb±Ð¼q¿Àsbö¿í¬†¤ÍVe]]‘+ìNøs<´²¿mRÜ]#[ ;Ftàá/ÔÜa©ß};Ÿû5ú;N~qÙâá$U´xôBj)mWÅÓfGƒªT3NQY(ø`Þtk?©r?äSóP¸ýñ¶ôCxˆÀäÔÍÕ±Bž½@Ûn”Æ2â $b*7Ç„K |Ê“ˆ= ¯hÏŠ` ÉÕLB»«VAµI­aø “É™äµmËÆLxݶï|å<)׿/b%œ;ø˜ôêÂ&k0È´ŸTXóœø€ÓŸު潟y;TÂûV5;šf¹&ÛO¤Ô`#Û[~PH}?Š¢FŽýòÖzŒj6r ©UXKP­2ÍRà/…Ö%o­Ëö6÷"ÔCµQШ³ï7)± †j“3dòÖºãä 2\—ëüjè?P»O†7¥ε3² vÄQ{7*cûàµYê ¨W‹›ÀV¸¡P…F quÖbà¬?k'ÆD EV!N|%%KŸ³ØmÖVí¬mH‚z£±Öp[h´?«bÆœ?«[Ž,ŠŠÙç½]ÁzK{Vڳlj½lrDAMÄPCù|ôã«&æ7ÏÇê}¾}æýs]e„ãv—¡ BKeÅHÞR[eD…úÅûzø¬ØEÅxYýÀÒCšJ,ÛæµI€´;K‚´@‚ߦ€<Ñ03øˆˆ.I¤‹ƒÏXã®Í—Uƒúµe•@pæ,ƒÆäÉm*d jë Gš…ˆ@ÃYÃîo>–ø}#z}óðêé³Uº¿}u{³Ž™0ž‡T»X=âHµ“k ÝE‚ž˜o¢ «V/\ïÍ@XÑ>‹1pUÅæ²Eò ¹ôàmœ¢$„˜–EôË9ƒ÷æáÃÓOoßµÒò¨§èWtÒEWL’6é´—ñô-j,9SI­®†”)dý×H]q9Ru/X“úJÖ±UÀ’;Ë –È¡"E„Ö³œ?nÀ(v&Ågá¤cFúòÓlß ~¿ûù—÷o¦[º¾¡woÞáí]®ùä¼$woÉ­;‚¸…Ý«#b„ÐÖi’2&13:Ì[IT.V$uæ/òVíöš}oZì¥btŠÞ‰ÈtÐI´üĦ6»ƒl¯´T‹Ž¶1u aÃŽÝ'`Ä8HK|Êž³ü{¥žôò­#}~Tö,6P™Ä¶h þÛ LÁ£·›2‚*õPZÝü½ ¢diÄã^/WÓï$çsl© Ñt™ð“zb5]õ¼4  ËØ¢‘@0º.ÓÄìrÓãP]´€Þ_™,µªÚ'ø“´Ááì±õâd›ºSCÂ\¢ P—ms'/Ôvò¦ÙMÇeeE*_ lµÇbg5œ‘ÉnÞŽq…2F`Ù¤Œaprg£—ÓŠ;;4B/ºßÒÛR׆Iâ¯Ð…¹*ÍsÒpÉë.WýØ¢·’ ®n­\õ[çº(ÕÁ нSh¿±!@rý—¹IAÿë 
ÙÎ!Œ£ÇéÖ`u4)-%hºŽÛ²¼ª Z¯êöu¬£Ì©ºÜ%#j~ÒPŒi“RG­°°%h•LКN 'LHŒÅ¡,sD°`8m¯ÙñŽ‹ÍT°#LQ0-/?°OA!DpëcÖ>·@bËÐׄbäuq»3…•Î{žÕÕN5ΔBwñNì§¿ŸÐ¢ER¼~ñ6xq«ª·»"ÉPhØ’Aå…¢\±äøT§ÒäŒß4”uâžú²NcÈfU¿í¶ƒJÏ^.&^]£e¸Ã¯RÎTlØ9yÅD83Æ xú+Ž{Š ð@y² M> ^ I¾¥[™DãY½üá‡åÊa®Þ¨èxÏHrsé2]é^´9j‰¥ìº`.OÖµéÚ K8t)`Ê€®”x”tn‰¨×y‡þƒºØÇÉô½ R·ëu0÷ï¦Ñ?(–:¸æ fÛJâ* ]ž«¼—hnÜ×Bd= Û‡Iغ® ÙÁ÷ŸrN@“U‡âPGÌ{?éY,3®‡õ8ߘˆ×e¨\”¡°Càüê Š qÇý_‰¡§lÆÔ›Iý6ƒŸ'çè†yL{M}%¤(EÐŒ)çè> ¬‹›«kdô›1ó¿—ú{î/Rôét²i…¶Küe¹Îïþæòù¿uG» @’oé…p $¬õºÖ¾^a&l´™”KS@¹Œ>ÛxöU"ÂÖì¼p€UQº > ž¡º²?UyÛ2¡ÌçÙv±Ð·]¬* 'õ‹k½º‘Ç(h ”ú¿Ém/ˆ=¦Þ©ä)Œ!º‰æ6ç¡Å‰½íTã¡ùŠ']Á,/äÕéý'ÍL9ׯ`‰q¼Ûå$dÝ.u­<ñu©É<ÂÛ^G¯PÜgM÷™ê>=JFŒWf¡ÿA9غ©¶ôòÄH'Iû‘Õƒìò…ZVøþ ŠÿÖ]öêWúCÒ*#,ÃH#l{-ÌEjº’þg-ÍF‹¼ºY\½ₚɚžS*;Èr= §‡Åµ‹ŒúÅte‹Kù M§B§0l'…ÂN EÖܹ鴊ÅHBàjnˆÜ¹Æ ç«ÅÒLNeŸHÞõ¯Õ ê¦Ù/ìß –Æuƒ z¦=(‡(qe ü e- â1òúŠøAËZ×8V¥—M߬à[2|jZ·}2Uw …))œC9U,¢¼Ø1´Éÿ.¶VÓ1aRü¥tÐIvð‘|Ç" Z=@]ËP/ØCÄ3ñcÆœ‘‹MÏ„MyûºV!T9]¡WèoØ|°y )Eá ‘ÛÒ>áz*‰E-½þöf€ µiÔB@.¥ €üê´ŸHÝ8c–¼ùyg5ƒæuU N T?gÚO'ãh:ýÄñlÎ@GÎ6?°àäHM@§ôI%â /ñ**‡H§gFc®r9K„#‰Ã®r9—M,ÓÒdchA6ðàƒµD§ÍЖj¡Ñx²hV]„6!ðR‚6ö)÷J°ÜZ ¶é²H5ÚØøê±­xpelcm*G—iaš\tÁÇ«bC|AØV~8‚5½‘c`­¼’%¢%糕W2Ì(¶¤pcô!†°ÌbµŸf`kÜ4⢛f(Ç.vVé§ ^® e!ŒG5<ÏÄ£¢P†ääñhTD|AVWYvD`ã ç¬n5 $‹èh’Õ­æš±8F°͘CS·;dnâsbÈð9É €‰CµñV›Y`S!§›%ÉNÁXì¦Lè¤ËŸDð(·üÄ&J§@¤›–¹½6a×ç&ÄÔBédó:…`³]ãj»¦Â¹ÊI—w;§l[ábk•N:Øë ®°låƒ+Z6ÖöãtÊL?±“8_.©~`?Ë&X׫½Ñ²5¾ žpe®tÄva«ˆW۪ƟÝd”*n´ôoð1cBL‘F¨|Õ‘é?¨¿}üJƒ¼ª2ÅF5…óB÷ˆÞÇ-øß”}5©ÑÍyŒU…)«äÔV‘ÂY"O.Õù¥ˆHE¹—½…P:¤ØªÕ åzVŸª QÖBqƒ;ovb§u¶å¤ÿ‡ÏšR¤ëZ†RÕx£úÇõu(«q Ž¢0Ô«tíŠm<MŽžBŽm.~8uñ™2}1ibLL%¦­Ÿ†þ²JÚf£ä}¹o»)»øóØqápÐÈâ ÛH[!LàŒ&¨ÞÃ7 ï¼Û`Ví.Õt÷™˜®@R ô/nïî©ã¦èÎ+¨9{>mQP#åh!{úÛvIeØ%^Ã.V-ࢪê²IIU Ï2ì÷Us%qÝÅÁL ç/d+b<¨~«ˆ¸JÿÐ mxÛ9dž{à<ˆþzKNOióÂN³:3`¤ØÇ /LÞÆ{´”[7l^6;ŸªÝ0]P/qj“ëO sgЉwçÁü×_‹iîž~×ÿóµbïóEªú+NÿÊ,èš=}þr·™¿v'Ù¨v6©4Qƒo6KÖùXŽš*ä¹ÔËaÖ“(žê®]pC,»kް¨»{Ê«cÝ}PÕ/;Rðt¿Fuµ5Ò6ÕMc;«f1û /ê|PI›èÜ>MnBùYƒTèÚΜx3—âõðãü± ¸·ûàq§;˸«>98öÙH Ç„x:ÜXÕ¿®ñŽ¦Ã½äéŸ=êäÕ‘MGCÿQ?,aJÆ2ïç_??|ùTЦݟf~Kâ¯0¿QZ’–ެÀ{Yg÷";op÷òÚbaç³WÛ¡e|9ÊŠÌYïøY=ºælÙ˜’ó«LlŠQÍò&LƒG5ÎrN!3šDÂ\+ˆé¢‰íKWLMìF8w°ì‚Ú„¬ì `ÖÃɱÞÓø‚º”pz·®KÏËÞCRÌÛ«ì¤Î‘˜|\=·E\½’v!|ˆ!•“Z$êÖA 
nÙq.ÿ¼Mf&½ÆÞfÀ®A[¶î_ÆMJ飌h„öd‚GÌL8‘1ì¥Ù£ŒGõ(ƒ‡³çjSD(m²¢H£¨ÃL‹å”!êÿ^w¼—‹WµjÚýjþ¯o?ßfÊE ´V=ùør WH›F©ž|û|å‡^D"Èm˜còüVO!Úp­‘VÿþÆÞr÷Â;¼Àë =ˆ.HXcû¸Í¶·•¯‹gä(kÉZ/H¥— ·ãà½OEÍC&¦˜åyAáöºjÅmf\eÃç¤ä¦÷>¦¬îö*æ2ÃíuË)8æsÚbßX‰ªê:bª¶Ü?shPÿË[RŠà〉Ò‹`«|ot”z n+H)_ßß=}~û¸3ùÇSùvùŸFÛQþ©¥`‘€CH1¥¥Qì*+¢„½â$s&-utšc»î->YXEI”½CªØÉj¼“µ“*FH‡Y[²‘Éœ¡ %¦è~¾6”—$Ñû¼‚‹«¶¡¼|í$ã`u³k·¡¼|í±dD¯ ~í»%=¾w\ÓowttƒShÏC Xf?ø1}3þóß ó7ùæ`—o2Ìô›ÃÓ~“Ýäkž =¶tßSÔ¿9´³K¦˜ÏÓLp°|  ¾•u®³ýª“Õ”‹Ù½…yäÅÇ7åìÓg,kXUÇF@`[._SNÛË<#õ/Œ4êI’`Õ xÛê}ñ­Æé=òùfûyq²ºVÇ­Sƒ \dÉ5Z'sLŒ3åŽblŠÍSÌ`@Lu°›~²×‰S‰uˆí¥Yw=òŸ1(>I KL˜}\»@RÅì2ÃlÉÑj¬ tŸèÜù§<©SP?e 3˜Q·#äp°˜º#Ä;Xu¯ˆ[{luCYâvøåÍõ寑< Ãvý”áÛªZÊé¬%RJgiP)Ê´”#tP/÷¨K N¿œA—¬P˜0 ‰0Iø(Ù*lÚÜ"› ®Å&©o£Ðþ9jE‡Œ{%€`¨È*«œ¥hï»Ô×cÄ^R=èkGIŒ3¢§Š}Ra°Á¥ V} ™×Û•Šmòç}tüDÌ$tµùœþvõ<•M…ãæ#JËÌõÝœ¯º¶X]³1ÄýXÙj$×ÏVmÚžJ/UØê ÕI[ ËtLÄÔÅTÍ£Hˆ2ËTK{¤ÆTÃê9xÝc‡dNêãAðçågË5ãføÙH°?[-fÕp•ѲÀ/†x1Ù¨.0¯Ê• YÆ?›>l9*ì¯_fNFÓØ²]à°Û4Më\Ž~.îÖ ©Äšò‰‹˜Ñ“CbwóûšÖö"é‘L¶=ë|àyîP L°.ÑÁVÎŽ3×ð~dÜI}Øk1–«’Ê*Öjˆ‡kê2 ®~`¾øÏoLrƒîÞäâIe’±|ÿëDsPÈZV™`û$ôO8PòHŒëÆ/™ð«ëÏßïž'?8ë´$8~ZŽmž—œ–ª®†WÞ…d÷‚ˆ­Ti§äÓí T•£$¾"O˜$øÒAéóôaô8'õ­ò0çœ,n…²ÙyïüÊç¤I 3ç¤.™˜’£#„@}ˆrU :Nó‰ÑJfT}èu™‹ü^¾%9híQQ GYÅ—`Û|ùô¸yºýòU?»¼èeåÞóàÄŠ{ŠVN,¾˜pðúq69ø*©Óé쥭Tg99$·èlõýÏVD7„<§Ù¨9Ýï·Ý8ÞÍoæ§<|ýe÷ï™ÁhøC³¸+Œ%´Œ eõS¡--J¨—é™Þ£±¢PÑô€l0rÑöP²F_ÅÑ£‡Ó6+Ì<_{˜^Œ+Ÿ¯Ö'k_lÉÉ©#t„y4bßø3ÔÅŸnnüYeúG5¨ð¢èÓ‹ƒU¶,ü 7¡£)iˆ7lOøvò@íº¨@Vi¡xÖÃ=è‹%+]†ž’_ϸFÊ @g/e—G0›œ«S`Ã6ýrÖ%‹>†´,°áõ›$ùÀFéÃéß<Ï5À+úçŠJ¸Ñ¬Þ $¾¥EÒFÑ9‡°x"»YhUw;Q§_°"!Ä®¦l)[>YYMKlE’èßÔ-M’o‰ÕÕJ„4§Iµ-ɇEæ×6gëè8Ìk°0¨³p™UW:,Î<õèÿ+1Ë•˜GLÏã Çè’ Þ÷ç±OÖÐ .Šÿ™èöbïPíçúêv| ýù¯Ÿ/žž¿_>«&‡ƒ?üm³}ؼ ^ÚU>qÀ·4壆`IãH¿2íÞlÙv«ýÑ­6rñBM—–8ð”ϼ ²Gí¾µM‰Â0Ç-íbîá MN™§QO¨ë‘§ÎSǽ¯Ê·³lÝ×UH÷šð&¯{cßqD$íºÞæˆõgVE7xëßøyzQU¾Oö m½¨ãErÓCñ j4UIZ²A¢`×ìF=]/œ¶}Â^±P‘? 
‚t§MŠ”§nÙ‹©Gþ Ê1 ƒ·ä.“/ÙîaݶO>?>Üøôp÷üáêÓ»€‡Ð¥‘g¸Tl¶²Ñ³£Õ/X#a:ÌC˜ÆÏË]×Ü:&²xöÖ,üÓp°yÉÁ-I›º¦ °Â6,NEpe*Â#ä1Ë´ÕQ‰žJ ){u3]ZM.ôsà] ÕþÙT„§!ûÈ3 ƒ‘3,P7Z‘FC…ºOØÉbmK­¶™Byò•i!–´!«ìÉÂjtM#ç(q¬÷»ûh-@Ó™¯/l£Î©­µ3­9Má0έ­QT𩨨à³%K“ÅTt6ú0Ä`,}oÞÉ3–u6êñö°9æÕsKqáFà† –ó$oÑçBó Õæk•Óz@aŮЀ/”wåvÅdeUöK‚b5αßjÓƒ eêG´n¼´üµzS„Ów}¹ý9©7ã-©-—£Àœ®¬Jo2¨ü¹µSË-èëJ1üÓÜׯŒý²úÙ Õ¡zñø1fã­‰°zÔ¨ék;.¥so•ÔrµOSPS˜{³ÿŽ9¤W9ò‘mQ¤nPÆcÊ2¦NÑih àæÁ|ò¬ñuX² Òûrä:À J‰lNö™³ìÏ×÷ßî.Ƈ­‘mÏm'XR”§€Á—v“ˆå@e"Ð.ÛIß}€Y^BÑÁ²í”šºÕŸ¢8»ûV‡ÔU`]LЧOò!æÛ_eÕ w@ŸV?9  ±+Y¬u£Œ&÷Ò&ÿiDNmŒÛ)3±Ô>;à ƒFØe~eÕS¤]…Ü1]Ž‹å,Æt5åøp|+‚a¾yÆ¢ø0 Mÿû?•Á¡- ™Gk²v”Lj=)ÞŠj`՚퉣(sýI®¦õô˶Ðáã+}|os§óÜyŸ¿©ÒAÙѹ–tjRœf·r/‘[Z§Œ…GEk÷2϶dá§cýq§”E뉺0…Z ¤CÝz´F›0ɰÄLwÓÊ9tf˜¥nk€ef¹ölÉ­ù؆Äi” :õé×k2¸t†jÍNÕØoÙ^£:óJ7;½Æ´ŽÓ7p¾vzcUºäÕ„B\` d™˜€dPhM‘Öe3¸|¼~®a'‘ëL²y¢vÙ•}j`7HH‰!Î&LlW“ß(NT¶*~(5~c8=Ži+ºm…Ú»h"›.œ@2˜q>u1J‚3ø2ÃOLQÁnóJ³S×*L©dš—0ƒõ  Ö4øÐ’4v)wq±¿h–Yç/R„…* p‚qK¥¢½îº Ò·û¥Õ8Œ¤™(œÛ Óêä#&GÌÌʲ[Z‡Žp`Ïq ÑŸÁO|G‰¨¸«ÏÚ'šß° W$Wüé“\m¿ÈóA‹~¦“ØãÞLà4"Q—XÙ?X#ÑhôžQ^îÿÁMåw6xAEýýn눫»p«ÒÖÚÌÉùU}G À±á€4öô>ñJõ µ¢ìæW¢å5Ôß+³ëD‡^JùHkÈÕNäÖï´í¯Nã™±Õ;SMÈÉz˜šÔr–FóT•„Ôã¶£ù,YSÑïçòö‰êiˆIâºÃä^ÉÊtó\¾eN¾¼QÃڨ̙üy8þ~ýéFßt^ó¸*Lk$ÕB`h¶â¤•À°Uxý~¶‹¨øƒ@1àõ²ì†{Iu‰øi à…ùÜÈ‘Vøƒ¸Ü0^À]ô±ƒÎO  ͧ7\‚yݪĬš1J;è#B ÷!8ëø˜bKɆ8:(Ùw8úQýÁ‘NÉNÓÀú„ §íÔvX–­{ºšrɆC4Òàw%“g,*Ùð‚ƒK4£hC1Æy¯~Ì¢bÓa ßì9±›‘þÉdyÜ1ô–Ì®Hƒ¤$>Vì ÑwEÊ—O–V‘þ±×Bô‘C=:«â F ÚÑÙñŽ´¿‡;> ‰Q$ž©Úæé îó O›m¹èfR/º¹ø¢‡é—†¢tñ(˜Ú¼Éäâò*%LÐVëÕZrÓ*¼6‡*g’npÈä*€š<œ&³Ü Rr]6{Iup§l‹GS™Ì2XŠQ".Ú%ÔÒéšÔ‹c‰¾Lò&¼¹¾¸{¾9©èØ«˜8(&ºŠŠ5(,b¯§,MÉd±=êaõ­â<ÎÓ´$uÒ"hæûC³zEæjp8ã4±-ãÓåÍõÕw]‰ºþªúñš*l./ç±[:>Â%¡W˜—PhºÇòÈÀâ– ;-¨~€«{@«| õŒ¡h‡’ïAŸH¥äÚ iÅïÊ+2þè`ˆz\D ]“!¾<‚Vð‘8ÄAœ_ׯ®¿Ý=ü0©Ow™þïßÝ_Ƽÿ`æ4?á?4Kÿ„E¾J?¶Ü,G´óÞñÜZäF‘5Ùf8Ì.Ù¶Nãò“-Ç£h€0ðñrÆWñI~Äî^>ì-·_Õõ»xü±‹AŸÕtßÚ"‹èîu>Ýù]A×lÒ«\ýa6iÔLÎf“ÄQ_¨›×cu6iÑ(r`ðäÒ%]¦xAŸ/>K¼üó[dÞBpjäiøáSSé¤Ú—°kë{‡—(>ä¢vBÞX”¸•BàNtŽ.6`6Tܯ¦|‰"djiK;öÇÜ#–Ý¡Ø. 
^‘¥0Ç…)&Cã¸×GxYÁ¤TƒêÑ‚Ðú¥ —»bÅžýY]úi§?=þi{Ï£@v]ã¹]üyÜ6Ùzì‹l“BƒmšOëÄEn«qx•âÔá)ˆ°:/›ñ‡Æ-b†CR6rŸ¸lä$9h"–yY{k¤2Ýn a¸ÈJÙ…•Ý“rfîVOQ¢KùzîêÍH<ôfЧÃÖ…Ùã;ÂÄQE«{" "Za*àT}§l/jT@ú~·w[W`.’Ê i¯¿!iãØ¨„`î[1±¾àª*‰¦&rì1CÄè  ‰‚Ùq²¾†Ñ^ÓG}Ït?—p‘m„µ³p*V)C¡‘Ï'á8víù÷®¦gÒqáeóRxû*n^ Å"ãÝÓÎj¦6ÚªØÃ‚joÅŽ£q;–ú'àèPÊÅHX¨3 ’̳æ¤í)Žs[<øò ­µ¹ï礬šºHYkváV0I/ kíÏÞv’°—Þcè¹ÚL|³ÎRÎ:øÕ»@ˆQ÷¶ZU9L’Ïx—†¼T&öä_‡í§‡÷w?<¼_z»C˜l1®œÙù@åeûý–7ùt&hób\-zí¡ÍÚä“)ãFê N2÷@mc>O„xfŒ›Aá«F@Ý‘Zðcò™Î“Æ–ê>œWªÅ#9‹«U§íåòúêÌð!#of’æÍ¬ÓŒ“ÅB¿'éx¸´U{ø˜÷]¶Sm/¥>íNà`?ÏÅ®¹‹}ËêQ¿Ž™ØTÿpIu,Æñ ã]ÖäöÑ›ÃÔ…Ø2fÑ.›ã÷öÝÏ­¯UCCNÕ“o~ùøþÝý/ÿ‰÷ïÀaJÀÀyÓÂ(ÄóPœU˜€âÀÅ…¸†LÇ “—Ù’Ö¾ &é·Ö9¸=ÐÓ»(Ã]ía·püç‡Ïo¾<úÌîg¦(7åo»¤ºtU©íu;‹8ÄîTÖV™ ÓNç¼#š ɶWHå‡ïoÒjZÐõL‰9ôYqe9D=E(œÎà¥Yé wÍ.Ú§w÷;3¯)f$v)fjiÑf§æÕ,q ými UIÅX§’º©’)–üê…hi¤ýQN{4ÒOÁØ¥‘éÙÂ#^{sP(åOU¤È²ž?Íáð†íB=`αŸ€~n«.›Ä»Ž5?óƒ9Vб|¬1!©ë¾S;ošêª;wœf#vûHU‘s‚Shá\e‹ÕÍoàÆ8èĵ:7…}#†ÜHE’3Ýj¨ãi;™ŠÝä ‘Œ€n[6ŠîK¦ ¸]Á¾¤6ÆCžØ}ßÊë€ÌC!ºò -IIó1$©äðëæŸMer¬Ë?óf…€s¯—Tf±ÛA hÆ,\ÝþÀ“z˜E€Í:㟤Ì2 '8ÏKÆFúÝ[߇»ŸL|µoîß¿3Ñ›öÜ}ùüöé~Àwýîó}øË?U?ñÖüÕÃ.Ù9ms×-žs#îÍÝõ[µÏ¸ì L%Zu ×g¾†2šæ1—MÛjޏPs¯ù½VxQ’&Ì«áÓ¼×ÈEÖó§Íl×E„É4âÙ8 å'ºjŽ\ƒ-f”"e:yúùf6EQîž­TYsä—ÂÎ Ô\ XOBŸ6JÓ ;«m·Â 3.oÅòg,|þ¯e3>Æv…í9Æ:°êÓ…_húðãcP,×U[HFëªc~‰ú¤?Œ×§•¤Q7öùܺ!ôH3–D9–t|ñRyoéÇGûŸŸ¿æiθÜãLB:Ô›xžT¯äLòQä¼·£ð`Ѷe¨l³|U926KcyDÄ“G)Ù²)†œ_<4J7O(7ÌÓ\(P2OlÖ|(‘@ÝRswªS†/77Ž>…¬±ƒZ{þ„¶¼ÙÆì5÷±Û µnhÒ ’Ê&ݤ©&FØPig@ËEší§­Uø¡)É”lIv¨lrZ^ À:TÖ~SâÑ*›)žêE.UÖNÂŒxqLMTeËž…K¯¨eow‘Ô…Š^ÓwL“Þî…-Ûò8ÄÃÚòv/lÅÁµH'‘ÿ=À15U«rdî>®¾SR&Tø29Ä-àƒXž/ð´³*܃IÅ‹_váÞÖ±Uà\ :‹±èªØA ¿ÝÈ t(KÊüŠàîë#·iì–Ù‘¦c°ìòW@% á0 ºüÕ5Â@sèB¡¬Ô¿ùT3' í†!©„!sˆ& MÒV)*O&^uÚx95¼ØY ùªÈÓ€{"¦„ä=]ç†À ÖC™#:…ÍÃÜݦ<· “í+Üæ ¦یf%J%ÄO;«±&»ø´ëØÌ«Wîòš1¦ƒ‡IS0XÍŠ£ß0ÔÞ'zåvP²W[Ž“ w ¥çªøoÍ’þ=~œÇl]˜Œ°Ï\ìü©…Œza'V°ÜƒÈ]—ëØª«²ØõU"Ä-^§œz(Ì/˜ëïíȸnO΢MV²|%`m}%°5ß&:>S챑†j ìv4‰z{w>þLJ7OÿÈ÷—ÿׄä²iK8K¡|-NÞ¨â¦{$œExËÌÆ(¥ m!œ!ÕkÑ')SÀ=fxófl#e|>÷r §{š$SJðçt×|¨V¦ÔÒQ—ÌÀ˜™_-§»8ÝÓäùí ¥„(hS)ËÝ Ù ¡tOÇ<·œ¾¬Rf„ƒð§2)Ð…§)£—I¿.ºpüUèÂeò¹"æMJóËðü‰s?óîib€>\–š Í0 Íòuéj‚) ¿.lj¦æ¯@niæ¼YQ-Ïû¶›íJ3dÀÛU.‡¦.¿ÑUj†4™­ªf·ò¥ˆÊ]7!Cs¼¿kA’~2N­Ì1¡Ó" +®b–Ík‘ryÇÓÖ*²(0E ˆ)›Ùy{9oqòÜíóÎ*Iš-çOØÆ7˜ÊZÌ$ç‰ð¿Rt_û3ÓŸšÅ¾­f)· 
od8¸õyHdzô'Œ!¬‚í¸©ŸæŠÊ‘'©Œè7°‹‹ÈÛ$êü)[™¿¤‘Á~Ç}°[`@1p ±vS5ìÊDYnžj2'9ÃÆ©ÚƱ».wVƒº¶ªr¢Ç–ÄczÚÉÇñ…Ó„‘Ókà [‡¿Áúáî~ò–’îíͤ¸§±»ð5éñóÁ`TJÒq0v¶’0k2 ›]K¯häÔ9—=bÈÄÜ~*›(gŸhé 2©Æt­Wõ o3éCMÁ£ oƒh´‹œ6@Ôn;—ÊÓ‚amÕA|$Å.qkZ& ‚‰ŽíB‡ƒ‡Y­ëß°+cÐcñG¨±»±âÊ ƒà…Ô†øS29%qíÉ“»ÖÄšýëù[zà#ÛjÁ¦òThœ£ëƹˆaâ™ø`ã(ó„fm`5ÓxÚ«[qŸ6SÓ8ç³!ÒeãÜâ}Ù °ëKòÿ[ÜŽºØò¨ä³-zÑ~¯:×NñC 7+f²{†º>Qô´óâ“Ðrk5s¶,T¯™Ù¥¾Q4PsáÞéì3ï¿¶;®*¢ùÐyësÂÏw¿T¼º>õß?~zóøîïìïÝßírÚ(IJ']u:›Æä诰÷ h€ôšLn†"N'rÎÞ*œÛ:Yl ZHjĤ2[´d ¬»4–ÍàSîÒØçÒ£„f)Ÿ9ü/'•Ù–3ç \~2ù ”b̓7ñÖW7OWÁ<‘Øwº€GŸ®W·^“¹ =w¶>÷UÎL ä×73á§üüP˜™ðЙ _v939=3áëÏÞ¨7‚0Ùßòù415šµùñ<©c/™ÃÊØ!H!B¸®Ã%  2eI›–GÁþÊ+–ç¼Y(zƒ‹ÝT¼xæ8¥¤)Ä‹aù®Òd¾ äXùVwV±í$Æá®¦Ÿ ™¬ŽùðøpÿéásUuß7[±Ï·L1ª„Òâ[‚}Pw“î—W›7‰EfTÄ\§Ó›*-TÌß,d3ÀôUc6o²6¦”z¸Ãá´Æ\˜|g[–h^9n ¾Íòò…FdÞܶÑ6|8PãÍoâ¦TˆlŽ~wB¦vxOÑœø´¤³Ðm}^ËyãT|½~ÚYÍ3g–‰À ±\˜àÅ7е%fY!Ó.Ã%æ{tøÜ~p¤ 󑤠Âvx)*½lÓyŠòšæÄ>¾›¡Þ_´MئçOµ—M™>Fý )°k‹X6“kCoÃEÜ 3`  ï‚h»äœÌ;ÏŒ„ì÷X`<‘$R<œHò™9z¼ûðã“Î÷—ÿ·ÀÇ<¢ gå/8¹pP·ÍÊïí"•¬»“[6Øî¤7Œ[Wðá ©1òÕBä[¨Â÷Tòñ"ëf×öA(*qÝìºþ•'¬?mf;ðMa’€óÛÕŸ ÿ~WÐëVFNµ$ƒÃ.ÁóÊ¥JGÌü y|o§'FÕclÑ)$DÞœÎIÕ‘±H峨YÍËç)3Z vé‰->RtÅlS@ÌCÏ»ÂåL‡T¶J*ðüûŠ é vœ¾(»ämiÑnÚ. 
)BH­mÓó'0À4é Œœ‘èÈlíñÃE0þµåöç»û·fQߨZ?}ô<ÿo4i¼Oªüæ§·ñßó‰œ@ð΢¿}=ƒ)·ŸÐºa96Ôøiôn¼û½ø@¡¶eþ¨èÓÉj‹zß}³kì$a)²j<‰pDg¡#³˜gR[í7LÃ9Ê~|s8U¿]vÚAE'{É7’ÇŽí¬›/ ¾µð`Œ¹qì`ΰ™þÔqìð•4u,°'š²„Àò‚]ÞÍ"ß…â>¹¬ý<6aÜ.§¶ÝŠ9¸ 9h_ëw³ ‡vŠS°›_†²l@¶I“ÊÓRžÄ5³'â °³‡(oNGC¶‰ù4¨ì²í˜ÐNpƒ^”RJC©SÕc莉Ë#Q¤tÔ8öá [kNŸ :ö¬a’ot»—Y}“ f(6þÇd;™ÿ¯w¾Ðwî¾÷œÌ—ÇëÃ~sK¾)„’o®ûM1–¼}ÙôžC×ad?¦Hsž‚¢†CÒËé¸ôò±ÎÏå“ 'Þ9âØÕ-Âüõ»9l&ý˜“ϳؙ8vÙûòß~å(Úw[]›ù!·t¡öD\¦îÌg¬Î|ÚOš aÈë^ˆ‡ML¸ÖósÞ¸–ÇÅÎjÞ ÙÃY;ƒ‹N‘å7ЉOÍ:9ã0Cmâs^1ú#­ö`¤Ïý;Ø`™ôQK‰OJQnT’èÀÄgÖ—H|þµ¿E¬½ª@ I4§vT5TŽ- ÅBЍÐöª˜ ¯ŠZR€™Oœ¶€%l©ï‹­O›Ù~U"o[ pQ˳üD×â݅Éá™Áðæ=ÀYªíx >`¼Oê'ˆÚZ ø (,R|üôñ‹éæ\Ub_øÏŸ~ЂßšOg[KCË —¾†Äuàïh[އ‹°€œ(·Î…k°ÀP|Y~ÞN_´y‰©2Çs:ölø]zÇŠ?K‹ƒâÁÂÑdÀÜ3K¡>/_5)Þ£êäÎqÈrè‰ëSý’Ù%>¾ wƒÜÄ÷$MŸuØŽñLz“"¬ ;‰Øu¡;na=°ˆ8"ª65ënHy‰Ö-"¾BxŒõï·-£ü¼ñÎS"›žŸ©¹žD8b¶Ÿ-Ûǰcí¤ŠÓÚ¢PéRyJGgw]ÎX–,\6ìÍ]3º_1^¡¼©,ż/;7o„9tÁÇ'1™N9Èq}†Ì¬\ )Ëf†ŒìÚm{p ERœ§UdÈ|U’%„=Úëcb´O{%oeÄœ'&Ó=Ò`ïšf\NqÈ·Ŭqn/¢=}šú“‚&ÉmS *„4*DòÃ@á¼i?ËÊWõKËïàß$2‚uÊVm´Ùc>E<…ºÐ~òè\3ùÞRŒ”“\6Ì9ž&!h|yš„žñgýN‘ ‹:¡g)Ëì±Ý·^:…ž¥¬e„7µa%P ËqJœR°`':¾}¸{ÿùí*ò…É¡½Õ2mq0˜Jf¶]èôI(R0,v;";äõ0…¤]ÈgqFŠØ•ýWlêäO@ rTbñœ>™S)ÇæU§9UÜ1¿pÓP*R1Î\lÄuQžØö©öºiæ6™/Ü\ä;"¤ceq²e’^J¶? Õ nTU¼Å¾–*¢Y’vR{kÐü‰s=ùX²½™×¾ÊylQ<¾Iµ.yÞΓrS;OÝ]´òHk+OÝoí+Ì©ºˆëFdþ ·Q¬Ëq=ušJµïħÃND1w¡Ž`<ظðO7êÂøñEä«ñª¿£†Ä?ÐëNB¡@ÇÕM"Ïiì‡ôªDû´&Æ|„Á”=s–õ¿ýÝŽÃNþóç÷ž-—ðæñÁ„ýãwï~üK¼÷9ÝÅ¿ Ýý-á%Sñöd5׬ë”3ç‰PÏ2->ßËÂ6þª¾qI?ÿ÷?»¸-?2IfбÝ|[¤¬ÒªrbçAì¶ßZýl rÛ~'€ ûí;§R=ØrkUïæ+˜ÚFˆÕÑXÅÁm+SáxeJPä°“H”@s1mi4Î[èL/QûáÇ_>šìß¿»hKÕáÿ§”¶ÜÉø:p)ƒÒ!í¯m·”ë®)(¥S Ð¶L ²…ÒÝ(–ö ˜:Q›l¢Ù@1õá•[ì¬ÄD4!Õc˜ý3R†ÙÁ:ÃØ;èn`˜¹”š6ze`8¡é%Èæ|]³¾~SßÍ-!ÙN¹a Yà˜dÚ cÃÖ±b ™!JˆqОMÖìp´mþMÄB5~.â–¤£V8_¸:Î÷¼Õ⨓Å^¶‹ñ9„‰ÍÝ—‹i¾ËOôQ[Oˆæoš›ÃÕIÛX$F ®k@ÒÐÅo!»„¹Ó–Aõ¶Ì‚T2Ï7m^ ÁŒ´y)ˆKôÈ‹UÚ2 ˜!í0fìÍ`Ä=ÆŒ#óÑÆLðVt‹vë4ö”ZWfÄ@ø¦î®½~6Ab:†Du÷–©6sü#Vݽ°U£gOÚ Nš—¨…ØRÄW·ãžw8î䉰 v†Uˆ›`w¦Lº"¹üº±J¬ó^U[ج˞c”.¬;³x긃Jë2ªýµê¸g‰zl?÷¨÷éÁü•û»Ç‡Þ†Ò’ó÷Ýȵ,°%Ánl¹–58 äƒBfH #:)Õn<ûû×ËQbÍ/5Oç×+àWÿÝÊŽ:4öQ·È/qOÊ[}ˆoK°y•¤!@KÖ3±®tÚ¬Ëôªð‰Vüù‚1Ó”rJPέ³žDZ®|ZÈlH¥-)Ás'¨>ºK©n£<É¥hî’ÉO0¾(WV.Ô Ó•¹3UÝÑhs,¾”? 
lþ·FÐB=ãcx½Ÿö6 œ§Ü¿¿{ôÇó€¸ü'Cmå,„‚{ë¹”™µf£:ÆÔ!eXG—ÝšoüU*#!8Ëå☸y%„-ÆvDðO`æ½®Ùpù÷Æ|éYo«hr"éQQ‹Bºß"æCÄ—Eíõ‹Ã6í¹g'·^•\œ©ô6¾×sn«&óç˜÷(oöÒ÷Ð¥¼)ýìäb޹`Î=ì' ¨=?c/{ôÕ0^úùP🻎8‚ÏÁ§œØä÷Aƒ0_‰S’ìó§/5xά·„!h˜Ó9šØép™áð—Œ{œ3¿wÉœmÄ-xOñ+ÙÁ¼„;:ÃzÃI!Ã|'Ÿ”§]øN(G—F¹œ¹Ì¨Y3G]bcÃ5»6U̯‘á †nÞ VávÖë“mI´ÛÇ8žv\69x.c¸»?¿äµOåO½20è+`,ÜìƒÝǪoêpŒæ²1öèp¤Ô¢Ã%:Óo©—¸ì£©“Ã3W¤ÔI7}4wœJêû$°1uŽ$É"ê=ê…@cú^Ï’Q§bBÝ)µQr2f¸:¡jBq6÷{8?aŽÜ<ë ±ËåŠÊz€ËE^ pžýâWÁGXûÃÝþVÆö©Àê Ø- (¢º›ãc¤Çy[þ®G*¼-±¤M¸N¹R/d6ÄÛ þ´"’÷x[ì÷%w=w0A߆Îß)°¦·Å G/…£d#žÝÉ ¬ia{Eα¼Ë5’ý@üZÐÔÔЕÛy7|°ü†úÉÂ(åËU¢.qÈBsÜ’9½9þ·ò<­ò<¬&‘YD>ÖÔžjµR„`°¡£ÃI@²“Hçáf†ÈlÎ 6¼ž¬uW=Æâ¸ßüÑAæš-ÜΈsBFܨÎ7873‚Ñ:>eßãáçô=Þ½·üù¿îáëõÝÕóï·ÿüÓ?\Ý}ýróøõòûÍM¢ 2›æ&È…Ÿ¼leÀØ£#ráÃ¥`Tްå†cx@>¥%@íÅäN÷ø´«<ºy‰Òœ=ÜžËW^¾ótõ|ö’u<¹Œ‘¾ø³çg|þ±êAŠà0Wæ91Ü¢-ªò¢[Ðbò¶×¨F2V½P©zJ8oZð—m˜¹qÞòŠ(XŸjU°#Z—´JÙ´ò2­ÐSâ¢Ó$ÅÛõ”P’}yäÈä9œ¤°‘‹Þ§{jß§: ÈA>£¸¾Ï#ò±˜‰´’zhK2uþ™WE˜«+*YbDycó @Ý›Š6Ÿ¼×†”°6U¾ˆàGÊ‹¨]\Þ˜€øùÞ plóCƒtë86ù{KÇMïw>pB“"ïºôð*t ú7ùz;Ša䤻#yã;ULê¶‘CŽU ##"QQú"(À†µCT#Žaqé ¢5"ÀY»Û‰ø… ŽéÁS•/»ƒ OèH`ìMåËbd( üZ”(­sê~dÀ¹ÑÖ¸?™i'ìSÛ‹ø„•/¢’8ž¾ôåo³ï4fßa\$/¦¾kÁE[—>½öê6ëžIs­i »&:#,oÓ‰½9›Ž´9_ʦ[œ&ÿLb&fß`á›%ÚŠCLFT1®Mg-Q hZд™QXhµ.kdìñè”Ü·Cu3Ù«+VjKb ¿T³2/Ôr«†(ZЖ‰ìW.×ÿH.ëÚ—§´†=X™ï$¡õ> +àDÒ³°"78ùúº g‡Ð¦¸T‚;À®q%¡p´Î% Îø8Ø•:‹AµgM)§ÀhRL2¶‰Æt-\°®È‡t\Úü8Àtð~81â«Û…o—0#¦êªÚjGŽsúzu~ûüµìÑ)Ü9R·ó5á6 u@äÚdæÍñ¾­ÀÕ Ë"„H9\Õ4cËY\³!…«‹ãö™„$Û¶b\ƒ«àƒxßmrx0® ·)‰ï'!YÖ0æéÊ^¦²øú=qÕAgX݉íAö`ˆ–¡‰=~@²ÕÖ½Œ:-éçL©Ó‚ ùœ†éP/+ (âÚ²Nþh‚QÄŠuux†‰±ßˆºWj×M¨{%u,;+F›…eØÇ,*£OŽ.ÚQ¯ («œxÑ~ (}j ÊíóƒAY©œšN§Íe½7”~Æç¾¦®¡’wü>#écÎÁÛÀäml» lÜàu(›·ã'•Þ?ì5öM´Úyñ*>_<=ž=Ýüã»6÷]7Œõ¬(ÀtöU1 cã<© Ô›ÈW—‰•pÏÆb41 àì²ÎWìHÕÁ=‘öKsk¼‡Ì†Ñá !sJA¸öf yü1¦kN–/ÂrÑ;+Ѽ=±DRcõœð͆ûC³Â)» îÿüTYÅÇó'QrÏ¿ 3õÍ<¡#WÎ uæ0Wt¨5YÞàLM­ Š»æÄ% ƒRfË(Ù °õ9Èο[ ;¶Å Ç,%s‘dë€ØºmcdÁ5ñepÚ³ÉÑÜ`ÀV2'âËzb¹±/£GçΖ5Ɖä¡{îl9”d´'±é!Dµw‡lðA•«Å¡-"u#QdFèöxuÿ$Ü}~x¼¿»zþzõ»Öwnò ŒzOÌ2®Õ=VÐfsªä*T/ ö‘xJ¥[Â)zßœµ–ò¯‡ ÑAܽçtÃêWêu÷YL|àV¡; 8†6;Í Fw¡³åý\¬™S‘#‡ãÅxìœé[ ãÝ? 
}^‹h0P“—&Tÿ>i ¨íýÆž½x¼zΑ|ûSm¨Îõ,(@õºÔ/ë FbX=ivC¶#ÐýB³|Ö;às(0¾=o&QÅç1ù ¹£Cã[¶í½p•Vásô,Ú½Iã»> #¬oωWÈ™ShM é~I¡o[Jtû¨Œ.'qŸ܎iÆ’8uì¼oP¼¤/Úö_ð?[·Axˆ^yûãüæYîæ'¹³Ÿ4™ó¿·‰/; …6æýǼÿ@~bõcrf¥@ë2Q^9yLš•Ë£dþ˜ O#ñ2ÛõÍ"ÉÔ“ACyk»sw^–0CçÎmˆ½ßDOøu±5mèÙøAV;AÚÿãÕÃíÍÅùÓÕs—Ôö½ò§H–Öåý÷ßѲÊ„õCéúïèPâ¿\(×кX©Õ%*¬?¹áÁ˜Ê¼ÿ˜Èûßï¹&&— v`!‡ã&Üæ¬.Õgq˜|ÚÐF¸²³M^þ¯‰%Úš#T ˜ˆ–Af§{lUÊ!9ùèH¡½2Ž 5$yM[3K4$ZG¹[¡“øvG+©óf‚(† –îEŒ;®ëæ%ÜØ1ó[:šýÀỂÀ¢s]ޏã´Õ r°²“óp/D|8¿jÀòïw¿}ûžRuqªë½Ÿe…›·ZÑõÞO“š+rý«šÄiž {í)âLÙøÅ»ûº”ò~¤‰ˆ«ú'é¾¼˜Ò•¦5ôªzòõ&a²ÄÉjêŠ/2YBÖdÑ‘µÉvI;âtˆ:©¥¥÷ØÅS+/ü满sâÍW9åXŒž˜Q^‘»>þ†¢vÝ@!®?­ˆC|bÇô-"mt5ñgäfchŽ·¨h–Y“r ÀŠ6¦¥ãiÑ›“LÆ[vG+±&çùõ6ÆUÖd–qy|y©+1„¤@z÷zf®LÂ(œ'NåêSœ–ñΊU3¦_Lñ†fdðûÆo舴X‹!4Ù2U#´½eMp¾ÙKæb\ã‰MS;ÌÈ"ɉl‹ƒ¡šÍ0!Ð TËs-‹jA[&Gµ€à>²ŸHƒC˜~ÜŠ®çäc´'5Ý×ì󽺀ÕÞ_Ê/æuˆÖw7K8CXf}7s ÊÐP´õÝ‘6K (jqÌ“ØÌd?Dæô–°ñEÀÖNÛ¯ÕçN{Ï®ž/Y£¦ÂvÖṳ̀ 4(uº””už´M \ÑIrŽ,fDÆŒ²úTòô‚n=æ­È®÷˜ìuPÃQm—ÆÕ¤f"è§s»ÃeKÛ!Nѳ [5²\ÝöóÛ˚ܬÀ0±(ZÄGó]ÙV€ÁÎÃ1Ù8‚1A‡«GM ¹¸—Cý¹üÛû‹oë½ü2R†<ÕÌü´,(åÁÔMl¤_/¸GKª1o³p‹AçÌdÓÇäÌ‘µ: ´éäbëyÀk‚SÝa‡ í2=áx¶Úâ'GtxèÅœµ¸¹î›y7åQ†­‰p’)eýöƒ k§%9½©î^Ý–£ðä/»þ­–¼°Žµ«×‡ˆHˆ/ò›|þ׫ۻ‹¯r…Ú÷OsMÎÕίÖï6Ä'Œ7C1™kº*©ô" ·qP€¢Ž²½ìg±hµh:<üG@öY$ß–ƒï5çØQ±Gµ‰ÈFŒ‘M<5’‹DFr!³ÁýþJÊ(¶¡ÏXŠâb@kK 8oBÿ¢ïzÌIÝœ‚EíMŒÕõFº„ЀŸœÎYÚfZÎø=«F7?¤ôöæîæù?QU€áúc—KʶžÇq|æ[_ƒãÁ2ˆÛ³Ç·T;bVoI¶ÃB“4;“¨Ë–+à1“kº: ÂøÜÉØ$ïèð››#—åéÓõãýݧ/÷·ÏŸ.¿¼+½7ifÏâî%jÌc_?gRï'Y)3ÐïͤÛϱŠ]ÍçÔ3¨\ ܇[Ö·ábÀ­üwÜ܉€®ºÖi^ÂÆþÔ s†éîÔ½éÄHÑ&Ö»nRâv<ég¯lD眎­§üq—p\/f ¿z.èZjÕ˜¶bõì#*kwðÙ—¸090ŒG£ºQQ„)DT6“Ϋ°ÅÍ,6œû¶IæÆ¦­nH¹__8ó‚ð¡Ziß7QÔ©H% |[O¹ŒÔ§yI½Ïõ¼”%Þ·ú©õÑÎZ^S'±O´øßVçaËd‰û[ã!ºÉ¶ÀÑs¬©¦ÓŽkºÕóïBã¯îÖî’øúd ¿„ {É××Wö*^íOBwkóœFìi‘íĆ?Í/{ûί¥>b³ëæžÉÅq¯rá Uäâ08ä"6fU8ç ³*0²XÓÀÙY6abóÒB÷ŽÞ<•,³8XAV…nÊhÏ€bõÜ ÍÄŠ­ž•Šn_?Ï| x¼f§0ïN1õoO¨e§ûÛ«ÏÛ?Ï‚÷_¾ÄøÅóþ|ÛJ±ÇKÑâÈ^¾‡AÓ5XfxbÞ Åp5n‘!ç›A¦8uk‘‹ŽCÖ 1Y 2`Ó•×»£•ÀŒƒ dðí$¾å"鱤ÑNÑŠ_Sܤ_<^ì‡4­[Ò<ºø²i2ÆÕÈ£k %Î÷Èq“¡A‚«²TÅÇCsk#‚`!‚D’Ê#gDÄßX—ƒàL²Â`q´‰d5}$^…!^ Àp†@ƒ1lõË QNÌ(ëk©Fo?ÂË.^rˆDì/Î~üÜ÷W+ñZ´ _9ý²[²7­È\³•ªƒ™½oÒsÑ0 N‹n â瑜>>Æ{ûëŠÓ¢‹x˜æ:ð„L  G[eb{ÁÇPt.¢S],1 ¹‚»Æû\,YǰrÖÁмɦMºÄ’eÛ(êã Më,Ò&)tÁŽ%+#'‚rdŠŽÇ’½qÞ.NÔš mie,¹òS¼ÍmüÄ#ãì}Ä‘¨zw®ôÚÚ»ooÁº—» 
»‹(\€¡ÁÖLoV•ÉV´ôÚ9Ÿ‡©Ò3…½b³ÙY«ƒ3O\æ't(¤=ËúáÇAF‘VÓä*ÆøÅjp‘âóÃýåÛþôß®žnE??^]~=ß5£:»ö¿}{XÅüì]]oÉŽý+~»/k¥Xü¨bî`žX,p ìîëÅ@±cl+ëÌÌ¿_RR,ÅêVuwUkb' à$RÔê&‹§HyIúŸ½HKª°Xd ™Æ)›ç-«v°kK!{N¢è*³†¥Ù‚¹®ÖÎ`šÀnòúµuL²7+:úW™cÖ43î²vŽ~öG΀# ‡sÛd¯ Á]›Ácƒ€w0ô)UÁ©K¹F©fÉøª©"QVzsçhMd.í÷µ¸ÎÌŸSÔ7(s€ª-M!NŠ.¢-É·r¹QŽ©˜„J[*ãc[™Â&SA|‘H‹‘~ÓäµÉc’ð1¥ºê…C«Z¡žÁiŽð-°$ó§Œw ö‚iK^Gñ€:±ÛÚ-› v%^÷$Ò¨tÀü:’œNl‚1¦ùK¶MÀ¥¶` Ð= sØHõ7P1 æÊ$ˆujÌ8‹/ì˜ø ")1#Q•Ì)ΑI!¢fÄù­Ë‘þ4Iß¾ë1î€1h¿¼Y8§:¨"%FÒ`‘c`(zu+¬€¡N·y–TIsÓÛs/J2·BSJìj ¯¡Ì{³¼o!—)C21V:{w2i“.qú‚ < azs|•Šèì”!™âa¬¶ÖTDތϡZ-¦ ³¸±Iâ©ÝØ•sïXÓÞ-Ÿ.¯G:®ÀtDâëñ UHšâ„šÄÐÔ -üÖN!µôT œç¬ì©†Àå”WÚðW¼$½“H#GU%h0 F ËaˆÒüŽê–ôÀOÕŸ‚Úžu{ )Õ¹§½Ð£C“Mfæuñ΢AÊ2ÿl¥üÖÌîÏt%_ýkÛê§=éÃe’<®>%¥~€åq¬Ú%§L.õþ¸4ºÞ»JlíÖVHtJ‘¢Çj^‘ùR¤…mûÁ4Óg5Ú,A ¶ã 59zK•‘ÎÝac2î®S1-t(Sʃ7„0vÓt°èU¯Ü®R/ÍR¨]Eâðæ’Ä¥Sæ2OÀÜ쵺ø*No-Ô æ¥6d*ã+S—'»'6žl/HB°æ±çªR@»ÄÜŽ,y™wW£ç˜MQÌßwB‚FÛfê¶É­ŸÐG9+x‰ÂŒ8jX=ÙZû¿§Õã²;ƒ±aꪯ2=“Jþ¨[eD€ UGŒÀsôXyßaN,oRK2/C*L ÃPʘb~-V¦ƒ›¢j!¿}T‚ÍÆÿ’[m'’6VA‰PÒ(Lmaƒyî+3vvXyÇŸ„NVøï©¢Šl‰e–*çTÒ<ñ"!;b|cÍùãòbùáÃÇ叿ý*`Ÿ^UCEô"@8¤Èœqt U•Üú¯ö+‚WÑ¥”cl©ó¬_JmÚ¾÷"urê©Ë÷i”y¢S2 á7GPk{´w§ÕLîµKÀ„‚0yš’ã+¡Kõáï!–@Ô"ÂR纉¬“/u_&­âS8KÙÑ«R~¼¥«œKmûæ.,öù´øãTú‹2 £h§¤4‰âÇÖ34ðá´ ©ý“s¨I*fô¤ÌLA»~žEÒ†Û?ðƒc""1ÉT™·ÍwYNaa±EÜLóž—Rrlëå6ãî¯/M×¾[}¾[/¹//Œã—Œý|EáØnuÒ\e‹$‹fœfc%ÖÊ49a:D€Û㦙É6ÞTìcÍ[ïž|˜æz1;ÞŽ2M$‚ª‚Ly錵Þ{7rÆÃ ukM¥‘n;“Û²ÄAséx8‰A6ô*B¨£/pòy˜Ý™"îàZóí ëÎÔVŸÃsÕ˜*°pk–óSˆë±=|j^µ›vñëÕå“{fϹ½âÿ8ß\n,›ô+…±jÞ¶]eÊÞš4‡ÌÉ, ´éŒ°§âr+Kð~”ÒÎ+ØI×¶/½FD xê½&ŒZõ:RÎ~(ëgÇÝ@Ir¤“ Cˆ©Û´Žº­Æôj^4Īø8¤)‡ˆ¢f^>Sîu +É ˆÌC¦•°ä ={2i5®$i2ä)puG†’ðãJ §ŒXÌ׎ò½Ÿ&$ûQ7Ã0Á,ãJB2'b:µ¯f/>¬n®vúxucöxÿç$g #ôK=¥ª1Ãlh¹û˜È[øbEµt¶Äì9ç2ÀºSòµJ—¯µ'œFç æge¥1MI˜2•á5Éì=[ÆË?ËħAñĸ yWHTç] B…^mfˆUMo$ ³DÄ^A¯˜ÿ¢9›?Î/—¶Lî¦á«ôó0$e•œ«ðUcžDŒkökRn8(§KVM¡•|¢2 €V*†±I!u“á>Ë¥Q‹(ŒÂVûzLTU[œtö¹Í.æ¨ÝðÊœ,x?i+< g³4™Ó‡ ½:•˜¸ªK|œgls"ûýæŠb²¨ÅôU…ÅY¦tƒa¾Ší©¯„§!SCòƒ«n³ŽkÞVÇšïdÒ*'Àl@9†ê&'ßn«€5'¤ä¤“3Lí‘msøÎG˜‚ÏR®›1F3a©Š¤8kNàÓêò «º›‡qœòòègÃ0ìÅ öb#Ø«`>»Vâ(L"ÈIѶJëØëžÜã2=Àf9j)F¦C!SqÔŽ‰4@7‡Î³Ìa³J0hƒÍjwCÍŒ\{¼ç )Mʦ)s¢Dºys6Åf¥ClÆCÞÜ #°y^XéÕ:f?»©*cËÚžI—!.ÄçÖèœulÏ2úzK´—·\ëöáÝòòöúîÓêæúâÏˇ«Ë«?lEß-oîWO~¥Ï°ø-?,VŸï«ûqÙ 
òlW¯rˆˆc¨q‚)ìæyìE‘c;ófh³"8°È1a±.Œ©Ø°§:YÐ÷„×¢Î,Á-ˆŒ©SûXªj±ÇË2s œË9fˆ]Q‘œn %SméОÄÛ.3ü$~6ˆéռ䕉ŽD³´ÿ˜a¤DQOJVx~q;²ŠŠ´_ ‰bUîãË\Æ‘‰å3å”›Q \Ã,³WG„ËÝ–ÙIË‹½={~™fÞI©ÑŒ`‹hÑ(ç:‡;Î>#X€¡+Ëì•ÞUNrˆ7tºe¬K.BŠÅúŒs¬é£f RÏÞÅ$³ö†4«ìÎG{^ß6°Ã¥~üxWéj(‚~ÅDó‰ªÒÑÍv‚?SHA-Ôªeº¨f+ Î OÇZt¦I#`©Ô›;é=É5@j7ìŽeáLÛºñs ¨2è(s7”˜œC:Dj×däSÎÇÔó1…¦ó`´€–^¥R„ª?»µçÍÌ®êæ^Þ¾µÈç"Vt™tbˆ9[äô ªˆ× ymà蕈X„Þ¤ûÄðY&- ×îÇ{iôjP‹h«¬0“ν›dÉ×Ðko F¸ý¾O }üŠseUvKÏ}¸à]éâœóÕ);á-™<(ÖaŒc[2PÓ¡Sß[\^cŠf8õ ƒ8yí‰7DÛî—÷OO¦žIc¢|~Æ‘+Î×ì‡ Sšj4ÙÆA[ÐôH©]Mp>ˆ%¾´ˆ‚¡ˆð¶Søån¸'’&g´¾hCôá<ÃwCÑ!U™`Œsï†Q¢vΊ²GŽÛißDzúáôÛ"Pàª]ñôªÓ¢² uˆJ8Ç̽`ѳÁͨwË[Ó€™Ö~@·Z>=þº‹ÜÆ!©í.ý¢æè›UHJ:%²@ó~i4×áù4ÄP”L‘7¥mG1ÔBâ"ˆRw6gOm@ÔÖ©y9QNö«*=ËÌ47ˆæÜÍÆolNuçݹ­/Šƒ°sŒ3:ÔîûÔ'¶JƒV©Oâ˜écÔðþ²ÚÃi¯‹û‹qšú“炞û®i•q÷kZŒ¶e]‹2VVíÀÔV‚æ@J` ™Š`*Ûqf/Ï0wri‚¥¾~É<ÜP*hßOZe‹(sC©I9¦.(µ¨!dÈ(§¨Ø“©QÍ`7ôêÒ;†B.Eq`EVñ™ºô TÞ\ß^?®/ê59#KýTú¥Ÿ“We¾åe?ï°ñ&†"f÷2W¥ß×k‡±¾(r@-’ï«­êDEŒMÜÅʽ'&kwí¸yLÐoîž„Pçð¼ìQmŸ;u1GìYôMŠcw \þ‚foJо€ïzêƒÊ¥šÂ¯/iß>c »ˆ 9ço¡æÚWÞÕǧ›‡«Çuù£=ÂÃ8Q[ÓuPÄ[ܲdŽtfí¦‚°9M3ám—ØZ¡®/4Q°ˆbÀXêU4B×̾}!µ¨ Î¼ðˆÙOÿ¥ëK6‹×ÖëÃÙÇûÕíÙ‡ÕÍãÙå‡e~¸ ΚºjGbÿBS¡*ô¶KÀÜ.²©+v`ºÂ)‰¼= z§A'_¤9z÷¡MZÉëûSªi$ɳxËÑé ’|~}»Ül‡‹ŽŒù80×HÓ•QsóÍÓ”ä®kœ£i&4?*À†Ù_öÆêHZÌþª$`Ý„ÙéLïK«Mö¢“ÜñªõƒP‚Ÿ ªw Fu[q‰íó·E;w{k«Ó'7}ÇLâIÛk6ÌExWJÍá½D=Šfr*p©!fbÔ 3à”hœ·°ò èvu¡vÀ¹)Á¾¿+-!7ñ4èDx ˆOF™n xoÕQŠùn˜{ŽSXÌÅv0b”>¯ù凛«ÿ²EÿÕêÓ×0b/þ·Gÿ~wyõÇ{4·þ~æÀõÕåî5<0r‹® r,DâëÊÉ#÷‡…N#ß{š¿ùÝž]û]™ _\]¾º| '_¤1äý8û«Klm{•ëÈßmgûýêþìñ×åÝÙ³@ë§qyYÄ”…GÄß-4@ À§›¶­¥Û'Û\ƒRTø’mþ©]5è¶tí/=÷NLÝûö÷ê)Û©N±R±KÙ÷Æ0Sà6R ­6ÿõªRsÜi.XœVÂÛy;C¹=ù5Øüý¶‘Ȱnxo@ÓFœ·v#çtÈqà,L‰å$©54(øææÁ×p™Såaæµk2Þ¨Cå´È‘dÛ3y¤Ë.Û?¼¦ÿÝúç¿v<®sx¹Î;NÇÎÁþwK O'f·æìå ª1»À÷®F‹"üóÌOÞ–k|éaýâ[æûË›‡«$EƒØ»§[[Ü¿¬>>oc¼/ù X ˜â¿+S¡÷ʞݯÌ~ØàêV1/<#»-ŸmôFSB댒5KÎÕz Cõ–ha7¢ e½±ê½AçïýG¢7»­£P:µâ"Nh !CÅl?ªõu„ÞØ‚Cô&BE½ÅÐÅŽº÷dÕÅ”0ÎÞ”l÷â*µ1O)ñI!Ûz©‡É˜†ª-㔌8ÄÜAŠjcê,ÏÙ=ٵ帠˜•ó©Õ&:ÁÚ$Øk9`µÖd Ö`¡‰ÍM)kÍÖ”Ͱ°Ck» 4„°È‘±©ÒŠ~"Rš™±a-EÑÃSwÓÙzfäv†o5cÃÿüqwè8‡ÚÕw3{ÎáÅêöÓòþêýOöÈÿ{õøþ?þó_φQ8øÍú×–®!j”ôá2I¶ÈàvuùÕâ:ûùìáéÂWÑûŸ¶7ö˧§Ç÷?ÍòýŸ—7OW¿¬óYølÝYûžI“©µ>è"ñÙÏ?Ÿ}43}r¡|¹­uŒ?Óý|ös¯‡Î1“ÖYƒ·OµÏÙBW4S5¶M"//¶áèNÄë–½ïŸzº¹9¸º¸¿z|Øë•òÊ!÷+C 
TµŸð”‰#`;­ˆ-ѱ•gmä7)Ã…]›®§´F²IE.e¸<ÊêŒÀöÄÕ"ÃeK=›#K:fc?ÖÉRg¸y.Ç„‹Ì$üæ8h\æ‰BJU2G€ù]‡Ðá:˜u¨ {zx⦭ÕyPÁ^FœSlÖ½ú¥¤‚P…¿8iR©3™%5µ`ƒœk[3†)‹ ÚՃýqûEÛ7Ž1äºÀ!S~ŽÊ‚rŠœŠ¨ìZ9Q;AyO„-jü®ýØAFr AB@p˜aV›ÝÈ:¯7ÕI›¹ú|w¾ûoÏ|™÷ןÏo®?ŒìCÕþòøØ/ MáK,²ÌЂZ·(²VÓzYä”u€ÃÄž‹,š&sì¤ Û‰§§®Ý6dÔÔ4­ )m¢ŒšÇò{n™™ö¦Hnü›óÛåů&¡s³üû•ÃBßçv‰Ëvî5D]`@’o@$2X-]H¾'°@nw½5Æ­ó­³V¹HžÈ‘RÎ4'wž½¾Û Oã²"»ó~ÜVo·Á*ÛLSp;IÌ$cA{˜xZ‚4±¯à! ]6ºînÖgI´ÂgÄG%Ó%hbÊ5&'q“Ë1æ“ûN·Ëûß®?ݘ‚ÞíýýüñÞWØåùÅr” Fs©?º-* l…ò„¹`N¸Ï&Žd-³–†é… C̲Xei¢K]ÓödÓÈ2=i™Îͨ¡Ê2 ÚÓÇÈ `²üöRM‘cMßÚZmSB¯<‹>¥á£;î²@[‘’NÑ ’:étöeÒ #Ùý0è¨&›Õí2Gn!ÉN¿?n^ÜÔ^¾»\>üúaµ¼¿<ÿ-ÿ?{×÷ÜHr›ÿ•_òbÍ6èzËuOyIURI%y‹].Š¢nekWQºóå¯@Ί#©Ééééáj¸Êw6%‘4ðáG¶—ªD÷OëÍö²Š{Õ.ÇÝ1óÔüÆEÆÆýì|ÅÍ¿­ƒN,®…¯œ&¿fV›\GÎïJkcž3:ÇcV«0'¹…¨Z-‡NsRtSš Ôq2h>;Ïu.Àç¡QEç!û¿£ƒµÍ!ÐJ>hIÀ˜R~šÄjG-ã+*‹¯ÈÏ ¯Jb9“· ¡š²˜×˜Ûqà kÃ&ÄÄCÉ,¹¦\¾` dßèò†…òY$MJØØéÉF'çöÁiñZO©K¼£¦¯÷×—÷«»‰8DYÔSÍb2¶4‘Ó¢µŽg‰µ‹Í‘9 ÊÐÒot:é~ ³UŽgá4 ŽU•!Ç3&sX`|V\çÿžü›˜!WñIE[è£ù½óÆŒ¸#„PpýÃäGou±g~÷ÄÑäúÇu`Còx^ÓÓïhôÜúüÝYÓŸ¶›‡9¤é²hNJŽ*FœƒQ {Œ~Îô7k¬Rg[Þb*`õK)ŒZ&¹¬eÄÓÂ0CÔðÓ¹ 3.Q0²c´ÍÌgVX\âv½û Cfxäõº–©´¬¹¢‡*lj Ù6i4®c ûÌ Žïd·4»À†d4¬%Ì3¢dÖ$ãŒÆµp±,ÐqÌÒyÍžã»Ø ¸Ì®Ö6öjv’øÀɽçM­Y» Kj2#t1Ñ(¤Š.Ëàù,›&F©jì5üäs¥ -2<¸ð.Zϲ®F©¹—±¡/Hî‡Y—³ÙvIR©FŸÜx<³ýÇY5 àvÔWç5Úôª–ߪý4!Iñ{×n?è #—7›Õã“êÒ$kM¼hWóáPAÛ¢ºË.Æ ¸ÇÅÖÎJ½fL)Ä’;Ïã}$œës8©U3±qó­ôߦYé±·V3’·«¿RàElÚKâpîIöuÞöñái½SÀªÒ/ œèdÊ qŠ5 ,údÛ¥Ẕ‘RKsU_‘ È\cË @Y®L¬ Ó¤*S$†9nUÜ"yÑvx‹ÃwÖþÞ«®œާ«ÑÈÒœuúX‘® 9OìÛðåM_;£¥N8Q*jcß÷¢’ô9£ˆª‰Íª–{à§Ø,û`+dfÙ,\Äf£Ãùµ1<ÜÞ<~Ú΃òA ñ QÓþÖ•1ö­‹ÇyǨÀŠ+l=%oãRMø´LOìm8-Ð7S,Z:@^Jö*ªëµhJ¹¼v ²&º .^ü¹ 1çäÝ7š®¿'^Ó&öÉ®ÆË"¢ÊôÝMÈäM0xÆ’†¥ Ž2$rÌóšdÒÈ…žÝi‘U¸õß½°4¬aN»UÁÇ%íPjvÍQˆªóŠo T”†ÂjeŒè, õEüûq<ÂpÙ%rµ­zû·©‚ÊD- É Æ¹!»Â€» F7F§.ôŽX~ÿàœ³üÁ“Dö­œ­P*½Ù·Æ`d®Ä3ŽM «h˜ ¶År¹z7ûÌfY쌗”KÜ»ÑW‘e‰g2i²Z;…dš¢õ Íшè¥fɼ‡¨(’"ÿáÔ±x‰£Íµªh£bcó<‘4Yï;'!NRˆÀˆÞÏRÍmjš°£Ó8f¢6<ÜßmŽ-Ü̾zùyóøp»Þ¶Ó ×yÅô͈pr±—R¶íºN½ÐbÕÒ0I/˜„d–ëÐжb›UÒ$ÙŠ^ïh™UN @‘W¯‰/*gYÒh¡ °Ëv ¼ËUf(f©BJ5ý¤^œÆ©?BA~Yp¡$ˆwI²Z‘4‰!¨ã¨éï$—¡©„y.ƒ=V¸Œ(žw«Pç:û/·Š*Žoj1xéÐP¼Ò¯öx¹Ö_¸ÿ|û?;‘µô"šP,q#q<À`ïsnd °FŽDÏÝCš¤-|šçH˜|¨ÑÉ€¡ó»µIEèQ =_É}xI#ôHX¦éƒsÆyèAU× ¢Øiüƒ8>žâîöf³þm}·ù¶9àÃñ-‡&'%Іø2®0’íÑȬ‚»è¦i ›ÞÆYSám:§ÙršZñ>¸’n­B¹ßê¿>èùûn34 
ÏE¿×/´Sè@8‰)ÍxòÂ1Û[;\Ño­OŒStFˆ4ᙳrªÙˆQCvž£¼º×ý¼ZRA]®¾~}¸ÿeÓT'ØzõŠ€ÄÃh£ê•c]ˆ¥‘N°­PÀsë„âdE[]¤Ù¡e±cXâhŽˆ'–’p¤ŸÞ<í]²À¹´JfôƒÜ4¥A#T™£B©ngBÍÄ'g3½'Y­û«ÝôÅ×§«»Ûõ‡«§Û»ë–UѬ–0T9 õ\Z§A(_=ˆ¢I•#vÉ3¯ìÞ5ŽšÙΪŒ cU•#Õ]ô2žîŸ¶ãì—•vñ&„.êS—Õ»ü¸&°Ï×7Âh¢ ¡Sß‘¯–«‚ñ5zÄ9ªÖŒ‰P ^ñG(x©‘Å’)«÷B£ ‘œÏSñ=‹¤4ØxYœT Wf2ORÍl.X󧤹ë·*×võù«þäjûA»H"v–ZbQ a5’äçs‚jIÄ.…4)’HêÝf6†ö]ÐÄ…Ó'¸m—ò—>|£¶ß—;õ|ïŸÖ›ëÍÍ­&¬;yÙ‹²mÆó™Uî¼Ó ²Ho \Žl¼=H®‰Úpgkz“Ÿ¤6ÆRÇn–Ú¬¨‡%‹ˆ&B½—ÃÄðB‘‡Š•Ô±(ñUJNòh¤ ÈOPõtQšY,¯q5FÄÉϯ[lõ¥‹yõB˨æ&¹èâx’šßÖ8L£(kD:©œÕ@/°&KU¶šú™IêAAvl–Ô±¬4]¹4õ¹ºýbrÚVñ¶Ã+ÙvQpDIè[÷ô›Þ®g15Áèbtä§\ÓLª¦0Ç‘è[Ô`GDuÁì&ëÈúasB7®ï×Ý<¬o~¾üïõúîoÍt©S£&€à8vw¢R˶n ÄÒ$sIqФ]Qã÷™Ï(êêBJȹ҈Ïit¨­K V³voÑWk èûÉÌî_t©t H¿­7Z‚‘:‡þ¸tÒ«ì<¿ íðd%ó@ªgÖF‹Á>:hâå˜æb m#8iEó¾0÷ÜÀM87¶.%.9·FÏMó|mÏVxp AÊ£ÀV'TsEžÔRl0ÀÏ>8(>8¯gAVÑÁÜúÙ?y¶£røhE§ï%6·væƒ#ªZºîUÅRÜ­žyp~ÂÁ1ö’ƒ‹'·¥ïŸó41ƒG+<¸”Ô#ã¹.Ty8Бüé9X]Ýmþ]ëŸï￾9Áÿ0:½úr½ùÛGD ê×lóvs}xM²g¥ÇèÍÿþ¬0ÒèYéWΞÕáiþÁ¾íÅ­}+=©õæö—ÍõË££ªLáE@óâú'ëßäv«Ó¯w÷¿n.?­¾\<Ë£Û=ü+EðæñŽC§4Ò!ŽL)¸Ú9¹ý[¼žto2BCç½âÑ¢ûËr¹®[ÍÌV_V¿e"sÒÀ|=‰šBˆ³Ìeò³C{‹š f 9~òž $1%2«Éf07§ªšö8_fÚ#0lú›rÕ±¡€ZQ v>ú}©ô÷¹Ï(Kgô+kJD&Yy Ac÷YVÎaÑiØþ¸(C4¤NÖ`5Æh¡N³å4,c £…WNh1oŽœ­0°¦«0ãlÕQúödÆúÖ.ĸ(­bß&¶¿šéßùåµð‹ßØ~xP(Ø<è÷°ŽÃÍÃ¥àj½¾U"·ñéj½Þİ¡ëë$Ž8Mc)'u®þ¨FÁ^"WËê Õ‘ÒÔn¬Åe[åÜÛhÏT-x‰F]‚ã2âTÈÙÅ–1–aµ}-[ ]Þ‹±S 6SŒsì™û¶Å±Úä¸ïÊÕöÈxW´;ÕvÃà›2•€µÙgÄb¸> ¸у&çæàú›ÓÈqgÕ$ŸÞ5ª¯¯‘®™®Ö쯓[­®Ò&’Kײ¼¹™ˆêŠTP£¨®á¨f¾#@ŒjZrn`o3`WmÛM[»Hð#ENt~Æp ÈBhwFX+'”9Û˜´FòKC»H¿û%´ëYO®çè}†öÿúøþô1èOâwAt";¢—Kþø5OÐDÓIš’°¢_m¸¡ª’[Ä·%·˜¹e ÔÙ§تF8|r«bÿ¬3ÖÁÃŒWÜ€¤KÖ Åô{ø³Jn¶`ÊÝ¢1‚-.™ÉX âÆöÖ±ŒY¬â–éåßþ¦’û¬¶¦GÑ?:#¦ÑÀ::n~Éxu¹Þüì-¸ª¥D3?š >œV]‘ 3E¶Ôy¯A:ŽÙ4ÅÁtÚ˜Mr1ÛErLƒ›©0¦P¾ j÷ÕÀã©7H{‹Wýˆ ÇTÌ>CöªÎ‘#űÆî9rÝ«ç µ±d8z¨>Š—0ÇØÁ×øÚDÖ³]Ñt¼}l_!|î/ìk„­L(u.©§q¿<Œ™:ä¯Ê‚iÑd_:jä'“,]Ý>ò¬,ŪD|´ Ñ;¬ÆÂËj}ÌŽ9–”ÄŒÓ/Œc)f[ˆ‡OVpWŒåY>M:6MGøÔÚä€î§ý½½Hûe/ñY ‹)ĘKšŒk°¨ öŸûò“á5& !¼Ee÷• r ¼¾ÿüuõ°ùø}П7ÿå_ÿñ¢î£ŸÉ\büå/k2¸þ|}PgžüÅOÛ§µ©ÍÇ?ôßéÏ_Ÿ?þ¡õGÿ²º{Úüy!«8vñÓO7jzOöÄß>x¨í?ú§‹ŸŽú)oC@äçø)5› @sy à.´ñZn·ÿ0ËSÖÌ{Ù”¦=Êøm°f­a lös–ZÀ9c)e„ô]ÚH?<¨ÐíÖhóð¸½ì 7ÓÖ»†ür”^êb¼³ÜB]Aê<¢êO­šG³’jVÍvb#±„³2Im4IÀþö5ÄÒÈUy5·qS† 
‚3Ê}Nq’“{µi$W—Ç1³»ÌŠ‘äûQÅÚzIq»¨ÅÓ¨Éç·xÅ‚£Iz–æ¤s¸(”ŽÑÈG¾ÛEáýý£&Ž;Qî‰À¶Ÿ^ÙÖë8V‰j²lrSË[¬¤šá*£]DG%·„8~u@„¹âËA0M. ب›†ªÁ3€ÌBÕ°üa¿Íî%¦ê)yÇÎqS9¸øfð(=IŽF> VØNÔ\\´Xòf—}õÃÝâ‹ÿש÷¹Šk/7«õêêêf5m!zpÇÏ E[ÿ0 aSßœ÷¶LNÿ1w«ß¹µÂ[o·ž6<^ðAk´¡’ÈO ¤ÔbôÉÊ€‘£LªHhb“f¶e8ˆÒÞNƒt6vŸâ’vúi³º{üTfp£åÃ`ËîÒ,s ^¨jJ[#ú¤Ç0ÑÜöÒn&Ôñ|ÄŽ)¢Œw3y‚àiÌn,ÈOn??n »1Mð!N²J€8+ÿsÛσ!‹†!°¬{û–} C„>|x% ®j¿|ýËÏa±ªHV`nØÅÞ,°7 Yޱ㬺'û^ûûd[uŽlI”y”ÙôÝŠM‘¬ÿbÕWM‰ècP£ê+ædÁ”ÄôÍ-¦s…º#r Ú˜çè§tnôÙÊ‘Ñ--¢r•³UæxV{TTWÙ‰kÓˆtþ>¿A³ÝU¹J¨•Çù×ä$¤,'i‘œD™ª8JX% Ra5š9Èc ðÃøŸ‘ÎÝ'­~~žêeÂã·/À5umD©I›Ê7,Ù¸d2øè…-™fÂ.иžÖÄr­‹£ÛY$Ë`!uš×ؘœº‚mIàäöæl#É@›þ˜ƒ$pô¿–Ê`§¸=|RC¥¬çà]›ÇÂj÷’ë¢ð9îfÆÚxÒI¦ñk(‡œW Ž?¹%ô>@“–Ué­))›£©‘ôÜFÊ9×´œrM‘¤ìég²b/ùü|ºÑ¥,âÏ:+ î»Ùg¨W”àC›zŽ ’s¨Wç\̦Õ N¨Wq”Í%©WJåomsÕÁQrJˆã6I¸¸š&¸ÈìÑ‘‹çõh˜cœoeK&Ï% 2íD¥”GÎ]Íí¯lsÉô@3*^BôÑ¥ ³ ¬ 4TM±cÂë€ÕbX-dZÜmî MW/q€Éw;,es£ÓL#«Qê‚9¯¸ƒg>^¢ Y-¨–³™rdµeø€±â)AÕtd‡í£Hb!º ¨—É)oKxö“,ÁÙQ€Ûƒ€»Ô;÷£6#{tî1òˆ]Ç -,!ù&B§Ta Ä&iêÇí8>©˜Ò®S·}‰ð{Œa’Ò‰²èLÛ“œXì—æt¦Ù¼âÀ⸅l¸)+˜‰™¢ ¶’Í—N粡—ê¥9žž?£32O’M²Ó¹F'+ ›H§*;°ŸE5oß-MTó5#žBJÀQ³¬I1ѰS‹‰ dMhŠhúÕ9¢mV$j¾޳hFìYBÍ6“SæÑŒBPeϳá"ø—‡õ÷S“-]Á>oD1:I%6tSþu’Ú˜óªF7²n~bp; è[Ø\ÕåHj%À…3FNû,Æ*xŽYbå §øƒäC¨í-2n¬Joú‡nÍö8´1ˆT e’sÄBhÆx‡%;ˆ©GyÛ›ÕºY’ARR¹% â]šflŒ=º£EøCwíÒ,×lo©Mƒ`UC>/;X°)úø£êr¦ÅwªRYÈ>éIÏ"ng)㢻ñ<‹5¢r§´±FˆUÆ…!:'óç=~¿Sý çïVz/V+úΛ¿œ¨}:õ—Ëéî¢*Žwtò!´¿fÈçðÞïqÊ3]”%¸Hj摊Óà*Ar j˜——ûK½8å «L~¾]_ŸCÉh¸bÓ Ëì/ଃ2º¡Eø;ôjÏŠk“ÌKΩ]]jKD<ÃÔ߯ô÷øWvíj­ÛSôSÄëUZ¯oÖ!¦õ•_ÿö›sî4ŽØ° û¡û»‡»—Oöpu1bRïBÉcnÿkC×hœü‡j¢ŸV =ÑãªX™R°iâ>Âl|«_A¨W½?w(qC¦Õâ‡_Ëhǹc'—(¤xR»xéD£ŸÓØ.%ò•£«^ćI–à @;ïã/)ð¤;V×ÞSa^z)%%çmêÈÅr¢ØÜ{NùòIX91ÓÓIÞña·êˆÍçE¯Í"U™c#µBÄPPÓ/Á´|½¼rGÕ¯ûµ5ƒ_µRêõ>O»#?qÉ×òU„oÂW ü¿]Çž×k¾ù °Z߬®+I•6OYxcóxURõÄ?mó†%j—²Ó@Ú¡qÎI±jã§8âZóaù„qzJH2åZûŽ•`§Œß@’|îotçev)yèÔ,«¦.õžû¯–çNM%Ây›©ú« rXhjÄ]8VÙ¿°iâ €™Ïš&e‚zÓô'Pm9Ž" ;£G‡®ºÖ–¡â >ˆs<Pèøƒ‚þˆÍv¼47óòîZïäîåçÇì§5‘;;ÌfvÔ—€4•9&{  §û«ôÙ‡Æí]-àYKêBpª±JO4Ð/°™—ÕÈ ¶D»ÜW÷õÿÆ1ñ;w¬tÓê\\]|_ÿqñŸù׋çõ7;°¹#W/·¦~¿zúx÷µW Öö$lK}t»ÿó—ãÿJ¼¦N*' ä:€¹a„˰IE”ª¾˜úÑdO‹³«ÏHuí~õ™~Ƈ¼ê•‘@¦*†5°H‚ñÔ3Øæ¬”kǦ`¬§ ¨uã o¼BSí™wx¯QdaMR¬è)‰-\}Õh"aÏ0»ÁòÇãõ1ʤº|~¹zZÕèØ2£c¨naHÈ"zK•óɪôá&!eq GWµÐS+Ä(¾4ÓÝïM<&õtÚ¥^˜H/XÜl>¹__=¯¨/ïW_Œy¼ é0œü"Å2É<›j¥ýgØíõ-Á;ºkôÓ/çä 
¨ÿÿK®CìGúþqSé)¬Û›“]¿=þ7O–EihÍ'–mN«¾žL±eüX°]FÞ‡?æ2ËlÖÙ”9¢«È†8ä‹Eè̘õ[&´é‘xÖc»„˜b×÷²B2HË”Pmܦ™’Q¥Š'6QWéá’¾Ú‘лŠè¼‹à6¥†§å‡8;ŒtGÁòcƒa,§pvÇ7…µKÁlf7Ú[¦2€”@~®CíÇMZ»,ÚE/.Sˆâ¬FÅ’‚ynŠôÝßà« P‚¯fæK+J¥:A]%Ri /eq%õç<¼Þþü³íÊ^ÓZ³¹Á ƒ$Þªp?cmŸLÝRà”l`˜ÇŠ8Û>²Ò#kq\«àŽ$=2à*µ(a;)¾òE/I„xÙPÿ–”CJèÕÀ˧ÓùÊét„l³.ÈUÌ$I](øÏnMšÜ¬f:Û£_2ßÈnz´¹ýÎÛÞsûËÜ.#n¢¸öX;¥~H‡©"›Uâ“óL-ô¢‡wñ¬Ãaáiµ™Ã¥ŸúlB9U—#Gã¤èb‡ ûÇwÐödúäW­‹{²Žž?búËÛ?¸ hŽö•rçêÆ~üÃð †'S{¸ÔWàÎþöi®=Àíœ)Â?rhš¢bœšos7óõ§f·ØŽJ°—ò¸bç‚+«SÂÆlYÏ„r=\óƒZ„8k^qŽní Fç̈cT `<=Æ%:ð]½:©VS¨÷Ñ×Á–×yP#%(;|óÚŸñ>µô€|P“ܦj,´±ÒZôŠªÍ\Tgµ¨N:››ç#m“«U˜‹ÞfKŠçð|sôô#Y{1Ò‚ kª5õÈ=£ã%üV3ºÅY BAYåýbv§jvãC  ŸO8¡Ä»õâÀÙéO7«á¶ž ˜S}ܽÛš&˪I¨ÄIaùv.ªå[ÂC+Ô4…ÓÛ¹67çìv÷ÉÕj§Çò$Liã@<µwpnx/-öl¦™M-[ìÇZo\©£š åañÊ\ÏeÆù˜Í„M®VÃ8=VRñ™Å¸}‚´ˆqûëꀬ…Õ£oÛŒ˜2›)«c‰:åU:ÆXf•ËmÀ›\¦¼ÑŒ´„7 ¿d>±h/"&½²¾ØiÖ{ikÞOõdV½— ã.”ðÑ,Æ]©~/½Hý€ ó(A!Fº¹xÈáîäfUï¥Xý]?ï½DËÆÅElSڵ䩃²{±M¡si …C  ¦±ZúE¦Ÿ_e¹½WàÒ‚Ú¦rn–5-¤*ò6”3ÍÍÐ4Ž6ÕªFÓ˜¨È4Ì)Úäb•Šà%Ìã%ÐGgÙ3ÙÀ4ðŽía °¹Ú.uª9'ªÆ7Øw’k MoVe–ºÁÕÍb«2ã²g!´(›÷N¯è×ßÿ¾«]´ÿ=BFàPs¨²ž°, œ×ß'Zuˆ Ú¡ÄÙÑ,AlO&n>ü¦àÄðëÏL>"5„aŒó”¥ÊÞ‘@.=!\'©ANÒÙ¥¦©€…@4_lîõ×7þ™¦Òtc3³¾ùáëÞ\õ“±”_bMÌÃÊ@Ë‘ßH>¡Oé°Ùé6ùìÒÑ´4SVõ$—f®_]œ Äy‡r!®Â'ä|SäiºÈÉYšrŽ\ ¨d'\"è[|íÄ(ÉÈŸdoNNl‹6¹ŠÅª’üv¾Å)9AŸ­ošPª‡˜paFól/1ßReêA;w¾=LÇEj¢¬$EÁåWr>ݶ"&%ê,VÇdU{ËXÝâ¦xµ¿ØÓÓ û•;¿¶²lêNMš+HYõ!š¤‹îÓÀìRœ§ûd½öËtëlÍëQˆ H¸#R7•’ÌòX‘-&µ("…(M㉄T ½ð|ßãú÷G•Ñ Ýz—j^Þ}Rß}½¿¼¾ºþpóy㦸²CÞåí&;ä‡@Ì¡"ì% Œ/ùö¶ õzÈŽÚ‰Ÿ—6è!;Û¹C3³="!:œmZœšAºý½ÿgïZ–ÛÈ•ì¯(z3›fH$òáqxu7w11›YÞ‰‰¢º-K IöíþûI”H‰ U”ìî‰è‡MJõÈN>™gñRï9màîX1'ŸJ2¢÷/ ’܉ÑN:µ`%z†Xò ¶š¦Xò<æD‰3k¤VU•×_,`iéK­u0ÊÂT±GëŒiG»TZEaqhªó÷½kBå<õ® ò9GfïÍJúÈ¢vk%èþªØ¿F¶.MÙ²Iyhi©ˆÚvæo¤˜á¼–.@íT—ãåÿ^°^CY‚7[CÙÿ›¼!&ï§–Ë\Cý¾µd•,Æ–!Ù¨õÞߜ߮º|óÿýÝ¥ýø¿ï~·¾Müß®Ÿþ\þ¶ZþžgÀ<=a Á]Y¯ÜvÊ5õs”Fä‰Èœe€ÌÕµwbòM Õ|d·T®ß›Ä\;nNc¯‘Ο‘ï©§£zÚØÈ1õÎ7jã)Û1ÇLW·½²&fç&&ͪ=xöd%VQNþE`òÈúâħÇ—m®/a'£OåD'·¥PiÝf@í¢x }s¥“)ôᆽ¹ÏÖPì½Zw¿~,Ô 0ú× v jŠ 1nâŸ×Ó÷íIƒÛrÖN{(ã/tÎ%7?«ºs~d€ôaýÇA:þþáú›…ë »ìb=½Êø‡Š±ï¸›GZ0b½¨8Y,ž(äÂg*˜±Œ'ã âITé0 ¾ã^æQ9d%„Ôtßi_f_{ Ù%×;òAºèƒ¹^½ûš³Ý¿û²©RFªÓäâ™÷5»Uf5ÌÏFÅÙ9Ówé“5Å÷0cjÖ´å&ä1A’çÊY 5¤Þˆ§š15}«yóÒŸtQç{ó´ì$ß(·E Sš)˜mžÛ”²ÇÖaWóáyñZM¶-Lš^lIC‘%õ,yUú÷ü1í‰[“Nõ,"­|æN0 
âQ•´ý™Ååõù¯·wO×ËÇÏŸ­x±i§Z,–‹§»Åëï6ôâù„lë ãÄcǵ NLÂj1ß`D¾;ú{GônG-SËJ„Äàì#J¿kªê¤ÏNˆ‡lAøN ÌDÚšÌÁ‡!ç}â,²”»7ÛZG\ªB‡¹û¤¨5ľ*ß¡*u1OÞ”Ï\bÔ ó.! ZŸÙ3ˆ[Ÿ©ü5mÕÍ=ëŹÝx¹zHÎÅb½ånÏo†±fcãuÛo¡ÇŒm2ÈT‹øSG|ê¨f—Ò‚¦è¤Ä,m›O™%EÈ7^ïd_Å.iræƒÃ™íRbwko—à0|Iz»9Â,å®žŠ¬ãæìù;·F§Q­år!h/ézvýâ%ûn«¦ȰlÌ©?y 4†Ìð;SF½¸(!yŒK õ–M+Ê%¼¾–ù à%¬ëà µ·>0k~xš9*",±C ¨?jT´E¶v«Fœ£&#»ý5Ðî9v)Ö…ÏýÝõípk„AZÆC¦añ£J£Æðã¤ìzÕR3.rBZPK›òuÒw®c*bÌ—H¼è †eÚàtðq^Ë$Î;ßÞ4)IÖ4ytî„]Ñ¢ûB¤"¼k¹~Þ6úÕ2R‰>ÏÃ_ÞH]®®Î¿ÞäXn"!lj§` «_$ðö›~D3uT15-”ýuÝÎmÆOZ*À\çéž**Ïq~;­{B’7¶ð0G1|—çJ$Â?–™:‰x-W…ØÉ‰ÐÖR½°LØ¢\žj®yþÓ°2njDWY‹Æ…±Ó¡M’C°à˜"ÅÅÄ«=I‘ŒõK;Íþu0xšTw–ÊPíáüÃý·Tòÿç2Ùª‘Øj»u¼ð °Uâ˜)[ÑÛà©ØZ&°j‰ÄÂÀПðN)ô"¬„|ÉçN:5òöÔä- RÍPecªÇÖ™“s&1xMØz|XuU}X.ªjàG#m9@Õ«¦&Ì)it€úSä)Æ•ཽØÇ›õè”±pKãEß·‚cFìz—¦Ãƒ4peߊ«š?k ‚Øã6¬9éÏ"pïÜ4=Ÿ…Ý §†?kíÔÇAô Uv%aëÚ±$gwx¸•^9™¬-uælp[–2àP×±ÍÁC^­@) ÷ ´Ùy õ½[bgÞmàwÃÚ›Õ¯çË?_æTmfÓϺ0kv>4QË"ãµÐ‡»I 2‚]#1šZvK$W ‚Ó2Q\ÿ=gÑt_Scvàëžœj 0q'ÎáŒB•­ ®}F!ž/×Ûö€Ùþø%Ù›Wç7«c;?üçÙí×/¦_î®^Ð;ÅÛ£ML3­\‚ê’€úx ,†Ë&Ü÷Þì?îî–«ÇÇ l×Ò›£Kt­pƒRê©’Y]˜²ß$6?ºDôœ9ºsCY½Ï]’/÷?ÜîvG‡LɈ;œg‘uA—w_îÏV?mHŠ?þ×ÿã,»ÿMêé sKÆõÁöfgðqAËx¥W1ÀÅ’m~¹»Ü-]wöùìñë2-Ÿ¶·þåþëÓÇOîðíüæëê—ÍDògë©#*k0kH©êHÎ>>»²mö5½Üçcè`ëÛ‰Ÿ²Ìx]¢Å²éØž¹|.Ë8„ÉiU;ïBpæÓòX­®/áPÈó¥áÇáÝX¼^hwñâ‚ùuV‹m*wˆÚDîæð<7ß¿ãQć»ô÷½2NÆ«%/‰—Ë‘3ŸÐCp£#Í%üÞUrH°ÁÙà ÙÕ2 i¡Hê‚-êmB=I.¼ÝÙ²¢=9ÕÈb­3ïÛ@åçÜ=ÞZÐëÛÅC܇?íO—«?Ξì2™”¦/–ÚÓõûb´(ÛOB€gh˜ÂL‘ÑZûä¶dK³å´ÄFAÎU=ÊèÁ¢£*Ž"@q¸H”x—Z bM:ÆŒ=¬ &L´ÈG鈹¾ý=¹Ô³•¯2'û·%‘¤Ÿ=ýqûñÓÐ~¤Ñ•†/½E—i&=ÁªN÷RÕçÙëuJ‡ó¥ÍNŸ_ææ÷å§ãK3pT™¶4i†| fhG0Åqî& E´ØŠCúˆ­<ªeÏL±~èý˜ÚL°ˆœ™0NíƒeWØkfÚ š1ôWå°g¾Ï\á¶.ü­¹Ú{µ‚FØõc™Ï¥¾´>²šââ(BS5›ÅÅLn`ÖBÅ™œS%Þ3£ð)Å‘†maÂiÅeÃÇýW+P\z,Jæ)NÐQœ”²À´õÄ‚´€2ÜÎdá˜üfè`áù:˜ßçeÌô‘Ïwš/Q.k{ žgÏc°m4©;ºÖ†ÐÚ5HPá|L»Vþ½këm#WÒ%Ø—}YÉd±.ä,pÞ° vÀc+‰ßÖrdýï[”d«-Q"ÙM¶3sf&@ÇV7«ÈuýÊ1ûú(Üp/§æåôßËw—O_WÏ·Šnj!ÜÝ}»¿yþùºUÖ‹GY|Ý1Wö’9|¼l½ƒG¿Åpß²ôß·%wæ1biýq ˜|gúÒ;“m­Ztgª9j îL{¢âuiw¦øX­8(pØÆn³ÇMcyC•wݧvüã=qJ±0$ú¼yC$4ÇU«ö'¾>ýåÏrÙ2vèiwfãWüCÏ÷ 1ð·ùšîÆO7ŸëФÑéûŸ<•=LÊîa@7¦HZ¡wê…¼„2/?Þ®þK¥÷G·Æ+Ò¬þã’ z?ýû‡¨À›Õõþk,tDùü>Ǩüý&ÝLøº˜/»  >­®V7ßW×-Iå!ô¶Vyÿ »…í>äfýáþá‡îË«§Ï_.ï?¼Šc¹YûqˆVÔ<>é­cäóG¶ÃHdµcɸÎD–1¬ô¶‚x÷ÅmõÙúBÏÜ>0æLÙ©5»VãÓ'ÖÅit“R9~L*‚Ã@feVºÃ\ÍÑ¥wÌ™0Ú‘ iâlQ.HÛ3"d!‚ 
Ýñ°—^‹ìÇA^Ψ‰Ê›I‰ÚYq]ME‹ISÑŠ"dFµ 4v[Ã%F#„:É™àæäFèeð¤À¡Ü{· A¥s{Z$w¾ °^~õ[‘~[??Ü©R7ü›×úáê¿Ço8Kí¹ÌSƒÖãi­ð4-NB#Úâ˜2®¾ŸmFAÊ÷§.KuF\óØ-‘ÖÙ A/—Ä…0j‹ !žŒ¾«.„ ?3~Jíö#zÓOD!C‚~BÕä/éÐ5¾ ¨ "¯ê]›ˆNm6Þë6ä pYX9¦ê‰ª•>s3tpf[äw=ç]J¤…L¹”Á4©ŠãVÊÇalv…5€4É^ÔÕ»º;Š«#ÐÙÈ£&ééLŽØÁüýÐ!¸Š!ª#Cieª Äb—ñÊÔ0þ¤Í÷Ýýcׇ«ú Æ¿êóǥ—nZݧbtçï»Ðȋ܇–à¿éןŸ~ÞllµJf±“Ôâæú_Žk¬¶ZýEÔü÷MKóÆ>L Á3èNñ4Aú®¯–.Ö¤%rz>,7*Hnh½î(¿¡ÿó2¾è}œrrCHßÖÇ[{q8_$Òz‹ãͽHÆÍOêÂYQ;ÁœÕ0nD C1- ØO®íRrzõ\Á«†²E9}&Æ {0øTMÎ`aYªøR¢ #[|'¨ÈÇ®Ÿ‰ZQ’£–^¬å±ÆMVÔ¨ ˜Ä„µíìŸszsRwýpiÅŠ#|K½å‘Ïaoö¡(E:ÎnôÀGÊž¯fðÛ¥ ƒ?CºP­úê2‚ÃŒ ë¹.+8ò±ƒÄ_0ƽIï½üíä=¢F²g?‘ü˜!¶~Óý8l©Ïë|É5’Ç# Éɰ¯++‚#ÃbÁ¢ABkøÉj6Ëèy,ÏSè:b¼ .÷…1oC ÆœåZ–ÏÃXð ‹²0ÌQ”µ)XoËõÖ?U—wÛ?ïÂ*i=ë÷—Ÿ7óŠ° ­Ã®ª‡  ««êYga* ±S`ŠÆN֪׌8¨|1P&Œ­ ’*oìŽèîP)À$[aK+D*rNQ‰*,§¬â 0Çwïqr4)ÌÑ%³U@rÉ(ˆw¾a•q–2yk–$6ÛË·ú>·'Ä8Iy!ƒ•x!ÞDBZæšÖ¶½eo© ¸seg”cbºZT‹žQw7cÈÐz?‹¡u"v÷š¸¸}P‘Çi ʼn]ÅÏUââùáëê~áûë^`èsxµ–ç»VÄ #ê³)’*OÅ_J='BKöÖbþŽÉ÷MÅuoGAÄýÂJ¸X—žÔ:ä¹á¤wþa+F:†“¨}ºšÄs’?+¿>ùÓ5|’O¼ ïKþ4x‹¡k]+¶`ß²m?‘Eý0òN·ˆ@ÇÆÞ»›ûM£ÜÕSåtsè á‡Ü4Å„ à݆©±ª9n ‡Qmo>™j  V¿Ë—ž°.‹îŒ'X^–Ü ím»íLÀ™¡Ÿ¨Ò›! ÈÝÝsôßêñöáçݼ$âÕß_&äþX}ü¢oZÇMät=w‡ÓJM'ñÎÖÎo,¼¦§ÕÙ`ÑœV—­¿Ô|Ú{•T«ãjb÷ÛìVº' \Œ¢§”b†{›z?ÓÚ²Á‹fíªà‹†ésËG‡\¿=õµ£Tèª/µV’êò$Æo;0ãÉl°©–¶ÂÏ6«S>y±Àw„lo¼5fÄ"7½§Ó—6ãpÅˤ>XÖÝV‹(>óižõÁÊ Ü]Yº8ÐÙÏìîz‹ØÙÝU1ÚpÜ¡+6'ý±¡—KsÄ̶ýüw—gšÿ¨±w¿]].,Lto§?}àÖÆRÈù¼WHm×d^”3±ïKúôýæjuyµ›X~Rô7÷zT6y²õe•¹ÍbºB7„1Ðí}ˆ…Pª)`FÈ«•…½ÙùÉ¡!vGnƒtgÁBªÖz(œv|kQ±2÷íàl÷Ü ëk&†|ê’ãÝ”™ÌçƒiJ%EC>9ÔÁŒ„‡žjEç;:ŒjÆ…°u‰úq¾ž¸ãöy¡}ÇÊžc§ŽÎO°+à"É «=¹îë‰\G‰¬]T#n RôÊ–©{Ýê,è"¦GíÔ$ª_ÛXoÝTÐý½òtvjD9ïÊjO­C]–£ò±ƒ*C«’ºÕ’‰˜¦ÞµÍ1 Aj˜]GƒÊÿÕ °j#ö34‚i €Ñ×üŒâ“9ü*ÀôãQØã:ˆÝ†m*ô²O ¯!tëfaÛ:û³Ð-°k8 ÝlÒÅ{Qµn}íˆÜUé'–¡÷D•ó®?í°äØ ³6á$!3ÿ,IÛÞN>‹=u+ØÅh&ÝäÌm4ê-ä˜ù芸aâªaè˜tÛµ0‘ÔT ¬£PE–lM¯÷éô«0Ú`ªîT 3Cjp½i*¢”OA*XïÐO P—BjjÈÕ1¤:Þ2ÖŽ·kSç¾£þÔê›V&qç÷„ÍÛoºËŸžnWoîã‘X_l¹¤~{ço/©·ßN5C¼ÜRUøJzÂk°vD§´ƒTjѵ©[D“ÉBAiT`²Ù°DH!H­M­…>ܱv^ŽE)½s÷A!5eÚFE©{(‰Ã°%* ›òÜ}{é©gè0 &R(ŠC«:ìnæL×ÙÊ~W ±¾Øý¡²ÜÍœ«3mÇî€Û²0¾€ÆY–0ÎÚ-’SC°Õ ÀXœ`wƒ Ïb­KW] DÒl@ä$ò3ƒ­ëßyËvÇnxDv*¨ÿ$ÀV]ŠÐvü‚)«”2õl•0ÐSx0u­–¢óHVèW“ß+ÉŽHž)ð s–Â/Q’¯úcDæ<û:™y,¤$–Ü щ~¤Ÿ9¤Èu·;UÌ$„C§k`”tHøÏRÚÛ@M‡¥Ø­ Ž 
÷EºrŽº«õÓb}óù~Ã]+õˆ]‘GøòAïTŒí•mx•VKäT¬3$ùöB6Ùê­H›•bCØË¦r’Q‡gŽœ†Ã.šȉ»õrFº|oì,3¼BQäTå9u„Wº©3NípÖ«ÓàÏaLþ^Q¦4ÆñæH·꨾‹5yJÁÂÑPÐcUÛÓ (úŒí˜.ÅH¨„ BU!VZu'Núã玠å?õùÁþót—5P׎)µ9@z½ÈŒÇùÛw·A‹Å.Xqqð÷×®¯ºfE‘ñZ(€T’1´ë†À­n&˜&¸–†©E°V­›…UJ¦ôRj„ª‘ØÈ0ÌŒ«fàô”ìUE9Ë™QN ¼M-T)²P™ Nm­€Œžî ðA ‹?¢©ÚWå`€m®Æñ²&xù5¨HTºÊƒä¶ ò#+n…‘ÁYãíÌyXmØ#Å¥÷Í dÝ_Æg±¾¬9`!lyÁE<˜ÝøÜ÷'ÿc]u“ꉕv•'¡뱉Z)À–0œÏE0›EYkÐ%«K÷Âj„³ØÌ޳Öp'ßr8³Ž87 ' —À«ú~¢ :8zêÒÅÍp9Ú(à;âò«¬Þ–2è—w?¸ë”xR5Ü\]®WÏëåw»Ô¬ëLWã v…ccºZ6úhA©²N[SŽcT)€0{ŸEa€tœu/£V(Œ<ËÌ( 3 °ø4 ‡³“(Ñ€õMãE`¬ÊSVᢧvÑwÂ`§[ñü¶ñ·ç/ú%•âó0¨rqâë•d/Ø×6Çr1äÛ˜ÆuòkŠÊÞIÀ’¹k»á=gQ™ÒݬiµBeaCfFers 2A•#}Ž¿ŽmÌ“mãzàè©_îÒѪªS Âýs†i6ÆTÄQGúdï~ Êhc „”ÌqdE‘¤§ŠÖÜ %ƒ~¢;‚À3jù­1H˜ ü‰êSIñ„™¢%€€=°ŽÕšP¸“÷‹Ïî ÿ:$b7^ây$ #êR­5Î:çØµ Âî¥Ô'ÙëÁäP@­9j”(/Ÿ ´Ò&Õ v1Õ\“- õ¯¦R1›dÒŸ=èdª©Z'ý© 0ÕMàV!×·XÐS›Ò¥±‰#ÿp–c‘:„>Ó'¬(D‰ë(ÓAH阜üâúáÇýíÃåõz¡Æ¢úP?†:v/²¡çMåG•¬9p¶¶ƒ¢^X--{+! )I 2fsƒ*8›,S{•L£éÆlÒ-óÞXç°ì)$;ÏÈzê9en,njâ³±z–[øã ¡ãaWa ³Tl;õ^<ü:;ÑwD*˜cò…T œL. ÖÜÈòT+=¹yÏ1¢éoy†]³À‘åIŽt𗇞Uw F21z½Œþ|&eÖ‰ŒtB†Õd´¿ÔK ¨³Ù<:rHÍ:,¹Ò‘¾²3~f¤ýça…péb#ý_H—S“J£‹§'V(?#«ó–"{Ãdu¡§îñAwÚ G¶.ïí¹'’àˆ$ÄèÆTg½+åÔ-u è Å‚€¤Í§·õ£R`¹J¬”Ø“| V²qê0À´CÈýg¢M£“è]S€9¨š¹ˆWTLMÕ\'I¿ÉOQ$[êR]¤{’MW8ð®/î.¯¾èYYlÙ«Æ¢¨EÇã…‡Q¶a„÷¬Î*{4ÞŽ'=# †ðGU 1yzë­§~²õ©Ä÷@mTwªÕÿjèEô>•içú‚ª˜™SªŠâÈË—&µM A©ˆ4vìaÍœü“ ÔK˜pRá÷H8ˆ§¥agM¸ëËôeôx{y¿Z¾TK½­{}|¸ÖoÿñðôU¿ù^_âæûÍóÏ«/««¯éÔ™J{#s<Ñ”!»ÙÙÇã·B¬#hDœ£zBèêÞ3èîè¢1åM<#Ö(üfgÆî Ù{fëÚÍ Ük§Á=³9ØÈ®nB-GBVài0%¡ó Â(fsÌ…ºÑ`{w¾;ÀQÓÂ}Å’Yµj(Qÿ$xyb£Yƒè=O2hÄHû4 Y:ˆ}3Ì ]_èÖ½:3;g\©˜/÷üå#£†X€È„Ï0rºV¹¬Zù›­ >…,î3Ç©¨9ÜWÁ%Û’iüñµÁsÕìÙ&Ç:tÔ«b–ŽÈSããøؾ¬.oŸ¿´±îš0Ài äøƒ,yã.±ø³gÆÔœÕ—8ØÁçÙ3C*§lLS ÓÜ/¶É‘Ñ·ÆÝ0ï‘q¡=šYz"w3¤PÏ»ïІÀFšÿˆÏøíùéÛªâ¨ù®G íˆ4\ðê4L¹žq~^d6ån¶€ä/.’@.Û"ɲ±¡ÂÍK;1æ>…½ó®\â׈ÜݬÔ=E¼¶:;zê—¸=Ê¢1qw÷.ÓÓ‡^»÷Ç‹«Ë:êsƒ}-ò2†-8TÐ4¤^,½V¾Äf»ðëI? 
ɆÅfs*IH²©½Šª$oÞÚ0ÚÙ £ÝØážl˜ljhâIB¸›cˆ¤/"i84Ž^…]µëÛnЙ%:×5póÕ¯—ßï—OŸ/Vz¾ÖëO7O«ºÜåwæ'§ÙÇ}[ŒŒi¶Õ²Ÿ“î“ùœh§˜Ñ›-EŠEy;EÝÑlË H²¯úUX- ;¾s¬ë›² 3d«mbzE\2¨‰}ÞŠàÚžlõŸ\:îo;°º#Ú%9«†üûXÛû(óÛ¿..¿]ßÂfG0””гÙF$Î@z]²$e/›&%ô¸Txfv5ˆÞâT*¦v/¡§°áA =.P(ÜÍʽW6ZOdcJç@ôTlé·aé"ÞÂüpûB µû½®L]_x 8ÊG‡¿S£Ñ¦oÔPUç^@\IOeIŸ£°’µéi4ATµ%ƒ¥¹ñ4p<µ„ÀÓ°ÄH:‹óâ)Õ•x¤©pš€“ŠDçô–Y»Ëÿ6ÇOq‘ çÝðsP‡sð…Ê9}ãÅŸGÔ—©FµÕ ‘JÝTh‰¨§EÖcEضõô#fV_21O#Œe\]u‹Ã (ýQK¢¬7‘~ónŽñ#e£ùµ×óèpJ£È S4жm´Œ¹nž‘úê /»¾ú²ºþ¦‹:ù‹u]é´ÅUø8»e ôª6GP‹!uóþŸ½«YŽäÆÑ¯Ò1—¹ŒÒ@€ ç 6b»ç µTv+¬ŸIm¯l_`ŸlÁTI•ªb‰L&³Ôê°O%¹” âñûav^áõ$ÉŠì‚wZÁ’J=x&HÌuáMDÕ‰%+RjžœÁÚo– !ƒ¬Ï’2Ã÷Oç$Þ½M¬Ñõm ®ª.¥f¶¬VËqì”C‹§ü¢SöV±Ë‰ŠGÀŸÞ ~*Þž=SëØßߥ·Ùÿ`E!¶C…M6{5ß(‹ž™€>Þp­äzÚcAÏ„lÖŠ‹Ù‡ÊDL ²;œa{@•Ýú´…Šù5*vPª!ðISꪖó!/öŽç5x/¹ÔË ¸@øáˆ°»XV¡ŠH”Óï‚ÛÎÏ~$Pa%¹\ð2‰dWwì^¹—•t¢AOl$…O`$½JÞH‚9 {Ü®kO`¿!v—óÒu‘ ¬Œñ=Ò°‰Êñ%Ïrô'óüÏ78²»XÉ訅ˆÉ9`7K¯tl躚Ö-\¯°¬[а7-«æ·©LÄÔË´J*Ùɉmk¤õM«óš7­ØE:ÉNè:·sov[±âÁò*; ÔG–ðN[ ®7¿ž_üy¶ý•³íïœ=Þý¶¹5‡s~;3T×4Á -D¬>¢¤븧 Fr=™·’Ã+r²J3&FŸùØÉ©óv¢ Ç9«T» O°ÐÊk~Í‹TäS'e¡ªû Ë ƒZ›±æ ûu¬±…&ÁŸf§Áç«Ûš‡ŸžäúóËÅös†Üu^_„U0i‰(Eôí[f‰¬«õMk¶j*bÅ%[&¿lv* ^æ—Y£œ8»Àëï= çÍm‡ÖWb”Br!šûܵ?ªú¼kÙ0ÛJ9Yô, Žœ,Ð*©WS{TuïW{8¿ùz½y˜4~ùÁ<3lšÖ~E;Œ+·dmÉû`>¢t­‡UK°§U6ßR,ä­X}¡Ô‘kÒtÙŒïN\–5«’|ZÖüÌßxRhÓá‡O¿ÜßÝ|ú|wýøéòónZ"@¹–3\Óà –‚Å-£Ím÷©Ã\ãi«kZgÛ:U×f™¢ÌQ# &Jkií=}ú t-¬ófò$†±ý 矯7ÿa`ø÷»»¯¯}øŸ¦›»½ÜüÏÏĈҥqµ¹|ù â!öÉ h «ö"dñ,¾µáùée!f»ñ'oó÷ô´Ÿ®ÒS´/6W¿o._ƒRÃ@LÇyÔä¾bûjÛo¹z0Ûñ‡Ý†lî?=~9¿ýô"a|û= L4Ók1?¾˜ÅÞ/R„¨ ŠÀhA°·ÿû¿>Ù·çãÙíÿÓØå/ç×›¼ég–~ºývcúï»_^.ÚŸ3通–”•‚IˆŠJ!×¹6y³¿½¿»Ø<<<][ø¿>µè%ŧJ1ýŠ‹»›¯ç÷›ƒ³öHNdÖYÛ…À.,9kt-VÉ©snn`÷ʽÝYÂçÄÂÅùΚ o6_6ßÎ~Ó&êªQ‘T&ÝÇ‘´ÊŒ@QcÐ…\Zm"Ÿ.„=´3K µÌUöh¬È‰¿ڕþƒ´‚•–‚PkÀ •¶3•o«EëÛíIÇÕ^œBÞ¼¼Y…!HO¢?ëØÄ‘’sKް…IÏ ¾Þ]¾ÑÁùõÅ›ÿã!½½¡$ûêàp gž¯/«ƒº(PRs‚%7 ý"± ·‡N¥Äê¡’nÚÂ^›– YLãüü ¦9&·‰ÛÓÛÌ}FÔ,§°–5Á¡eMàì²”©4z肺!&Vľºðf´7~…wn……_`p>¾=ýÃØ·³ýǬ”ŽYˆ|'^/üyð-=Îa$ëíÇa·/¦^ Lç‚ e§+˜C+%Zè‹Ý&"鱱ȞLîˆ§Æ ˆ_ƒÊŽüª\’ù­†éw^ üó¶Ík‡MdÖalYb¢åÖ¹Ó_säÔ…š*Q¾…±ŒÂ'R¬ýŽ‚L:¡0DçªÉÖ»¡VA! 
æo:^„¿Ÿ__]Ú÷ÝþúÇæóûï8²~Ún=Kçpvu™˜Zÿ|¥‰óÚ(àªÈl*={‹}•y6ÇöráõÃ+Ùýc¬º5‘‹1­§,ËöDR]ðjŽ«w"wÃ+ó:x%F…𞀽ùvýøíaBaU„r – Nãl iõ„$Ak Wè–Áô`íß‹h:A5ÕN É º $S o{' &—º°dU*QnL[&Õwˆ#×t^íԃƊ¯!Ïùr©ÈyÂÚqt‚ž DN =õ´ôÒ²&]ø²À<ŽJKù\8±X€ƒÈ1PE¨ !Íݤ2 %[„ŸÈ¤ Ý`.)Ñ©}R†°Æè‡Ä‹ÀeS³2Ç–‰½Ô<Bø ´óN«Ò8} ƒ 1wN$Ò‚~pì}N AZaWZz¢T_‚Gz%ì®ïîíÓ—>±ÝGm]«€È«br[MŸy3:Bs6¢“>M&u‚ë Sb $U8…búÆ„˜¥6œH©PSãÑ©ÊnüMÒÂZ‡ï™¿Ù.þz}~»Ù§¹øb'~¿ùz÷0VÏ7ÓäEúùÙø 3S;ü-Ì[Dn“¾Âµ”>H9-b¹¥E‚ëTI£H•sê¹ TŽÙ„‰”ºÕTÛž{ÞÍ¨èØ°½DGtÿf¬5r춘›JøåúîDwpsž„ïÞAA1ƒ\1 }9_`ÒÊ)ÃN=áí¡Í0Ьñ¿(ÊöKZ–¢ˆ¶˜ Ô;˜Ûüðrì õ[½%åÝ‘' ¿C?’oW‰‹c-µÔ¸`ÂcŸMÿî¤ÓE7FI™“I2DgvbA&IÐüoZŶƒZÕ.‡óÙsÙÇ 2¢.ÁPÐlzKç DNSÈm²‰§ç86æ0Ô˜ãR+}’TrIt2ÇP‘_ÞMþH™%™Ãaº¡žìLÌ öšº¹"n†|–ÈÄß¹óP+rµe1Qk_Á ~”ýÑ´žuvCA2ÝkÇì‰}¬òŸ‚+I“”|Κˆ¡bQ4L;ýU€µoq†ï9€M“ ‡:½,g£«ÉÛ”)IG?‚Ý=NÒéw,â$eÐ<§§×„–R&¡RDZÌE+¹SmÅ³ÝØeoÚœ=Wâ"LoNÙÄÉ«UŽójJ0£Y¢âàJw®zsû§›É›÷dÎ ïQòé––žˆTôˆ)nþÞ >‡dÐéôÙQ(÷ø²V.!,:Í!l"‘7qzhç6O @ðk¸o¥L‡Ë9ÆcŠ,Û雽õt©q³+o»T­§3ìÍXø9Ó¬yŠ+„.øwúNfô9Éùð²Qúé—çñÛp«ZÓ–2ž† ‰+¦c·ö1aõ2«I¢¯™X´øh»ŽëM³Jùõ;Éô(ØC‹ÍÈŸÚ¬ú=‡´‹_C2DIó k2Ǽ¼ë]<»9¿=ÿÕÞÂ@¸S¬Æ/quHÏës‚ÇU¡Üä!Ø8§³¼+=“$;#å؇ÞTÒ9ós°"ŽMûÖC÷^³‰‰{øSöØ*Ñ…“ûS ´¶Cerö|èPQ,rð[þ•ý}¿f”»:TxèQ…ÃM8æˆsµGujÛ³jd»†K–ÂN”zkÞMõÆ™ÍUL°ª)ŠM¤"Âì0÷„ÚŒgfJÔ‰–;Å~«ÔÈ1 /ßqµ“Nß,é²›GÞÛG7‚oiyU2ÙÍçÉn4H¶`´¼gf¬zéAÚ‘VËpÙE^¢–]ô]g0‘D§ÍÁþ<žÜEºÂX¥ »uÍs¾¤uxÃͬÕ;·¢Aæ´î¨‰ÿÎYD¡OÙïPFÝlp:wŒ±Ì»%½«=q;’uÈ}÷"&ØžÚsÔÓbÝvØ¥;öB@Âpú-[æï¶Vs|p¹ö€¥…‚9-ªC}öìK¨'ò A´zTbB¥YÚå8:A/F/'‡^ôë@„Tþª¸ådÞ´˜Ùƒ9Yi=ð÷_sÏßÐ%Rwÿi„9NDÒ ƒƒÄ°ƒÿš‡AóV!`V§žs=¢ÇÞÅ“z<è…þèˆÍ¿Õ,ÄB_:c*j~€2ù¨.„¦/(q;xü&d |ž4ýE&蚃ÞÓ,Ì–4¢³«Ò‰ Yà¼6;€cK§¨¹Ó ‰| Úù³ÅÆ™;¢Håk“cƒ‘tšÀb 0³W¥Ã^Ó‹’ ÿ…Á¬Ì[‡¸ÄÛ¦¢ø@h áÙ¿ B¤ÀE†ìÞž©L:ÑÑ‘i;ž…‰Ðd~«´ûŽÇpx=>l.î7ùÕÂc6þâúÊÎd^Q¤U[0kHæÒü®™¬Q7ðaÚcÇXÁ binbW¶v5‘Gž*SWG|jð¡×UÀÇ`±cüë ÌʼeÊP)ì?B(˜Îß?ƒ±ˆA”<w"é„AoOMó²7,iHyýÔôÕ‚ûgJ2¯À SË€¸ÈÓÇ¡r¨™³7nyÇßD!cÖ È¤ —yå{ÅÔ©¶…a ²8ôƒ×ÿ}—4«÷—M,«òFÛlé$*°ª‹¦œjþ“;ÅêNtýë‡D*ª\W)ßšs·æNPøU†íüg-ZÑ¥)‡°HG¢oPMŒ4‰|¤GÏÕ±fÒÇû¤RØœŸ}þv{y½éÖEq oAm9ÞvMh1»1ßµ“R~,ÒÁq<«'!±ÁÆe3¼ÔÒ!¤Ž4tñ ¯ÖÎð* €j¯\n‰O|æTžá…ìh÷äÕjfx1 £Òkúé—\ÜÝ|=¿ßìÍlã`ÖI‘æÌlÏ»ÜgOµB)ÚW“ËA»]à·ç7¦&ÞýžÎy-wÄ|ô^¶;Ùr]bsZë¥"¾á¹]GeÒo-M¢$ƒZ1ÆÅT‘ép¶Ñg'€.hiV£›•)*ž~Ù?ƸÂd=:¼Jàpú×ñþ>dÀ›Émð*ä"5$k=¡ ¯]š\È©_«[j 
ƒ…÷Š~/n—Lí·úì„Ò¥ÕGÍ‹»Â°¬äZ¨Éâs4Ï¡ÃDaRËoö·þÀb£‹n>o€ÔU"5A‘&úÞ’vЖ”kÍá‹€:q†.ò©•¹A9"¦!JàY†1IE_|ð[åMuØÖ?öÔa"’N´XiinÀßÙ¦Ô¼J_ §5hzú;{¬¶ÝÔìV½©‰´¥ÁÏ¢t‹Õ¡uä+éôìë’çõŠ…VÜ@R„Ü‘ÕÒ;QtjëKYþä&Ø· ‘C‹´ãh2µ+Ór¼  RŽ›Èç)Ç&2éa„í±žÚ³®1‡DŒ*ø#M&õ@_S‡'ˆ3}f³^Ýò»&—§þ–³‹ó~ø£!‚ TàO¢Tø@)_Ø{‘JüÙß f“OîI\Ç Bo Ó¯¼Þ‡ûÅÔÚÜ{ É=€1î€â$Ö¸Eæ”Qœæ“ö/2éäAJFÊ©Q`…-¦ÎÑsÐwHÞ{ü²„Ì^dÝ€$´”¿”ÌeFˆÔ% ÉȨ_ÚÐŽ^"QEbȧU2eüeÓ†tIÚS³;9üÖh1K1–¨%&ÿq©Ñí¯m«½YÖõC£P øÐ{ˆÐ|‡"êy÷qZ–!Uw•ï¾ÈÙF„€¼ÿ8TsýIùúóYöÔ‰DºÜ~lA ø'åÑl5lH r¸!-p¦Ÿ­/ÖÅØ5÷f«ØÓËf‰û¦oS±!ÍžÊCtüzCÚô;mH3½@ãÿýoe³ýøbÀÇf‹<~…®0,—¶½Ù³uNi¼ºÑ?»º¾|8 |ÏüNâsù4éñxÿmScœÝ“q–¿5 ¿…Z|#‡yRÁÿ—Ù«µ9yØa—·Ã˜YwI)}»€²§"või_Ê>v'2èa‡í±!˜çØaç"û%ð3Ï1¬2ëÂHò¬ƒ¹º5%?H ų\#»Ïñ(qìjXt "„†¶| æˆ;»ŸÖÙë²/±)‹Í˜œó^j®U´ÍÔ:–s‘vÒé4&QFúƒÈLc¸È6››Ù´84õ; ‡Çó«:ŒÞb¹è4Ô+%<º7ôåÍjfÍ[ŠàÆu¦3Ž-¦ñɰȠê<8IÆbJ}‚"Í^ÚÓ‡?Ý›=yx¼ÿóìbsÿ8·DŠÇŒÑt-.JS‰ÄÜr1J{´êe¥ÔÏnúÁÂ%Š5ãPti0f=š‰@ºØM?€X¿=#=‰E¤n-_‹Ñ«GŠ$k‡OšTXl3…×W7W¦RÎþëZ¤.}ñ‹PIÒr¥ôFP2£ÐZ<‹®`<ËmQ˜‘*Nbê å0ƒ˜´xë‘øp$Ýó,.›L…x(½#ϼHj1‰| Ix9Oà1?7…ÔÔ’Œu/[m–e!m£**FBWtr)jÎZO¤Óe®ÌžÌüÏÓ ³ÕaYøé×èådÛ"Ëw“þémŸ=JÊ/B$’G^Ô šT,{‡Ô’Î]«R?Á— çAó¼ñ;tIýðà9?ËQò¬)˜Z¤Ìm´6˜È¼e.òõæüáǶ&z–¤uvu™Š3þt÷ûíh¯Ÿ?èf˜Q5¨/f;Û·ÛìŸÄç%ßfÿ"Ÿ–9=¶Ý&Js´ƒAÁ#.±Ì k̺„! 
0ÅÓr)ï R —>Ûz‚ó:ÉÙG7J ½còK`ɈMÜ8f@D;9KÇeÕ-¦MõR"ŒP6ÖþŸ½«Û#Ùͯ"ìõª]Ud‘Ec±Wç&‚A›$XÈ’¼Ö¶IvÎæ½òy²=#©¥©žª®®žýÉÙs.Œñ¸§‹dñŸ=Ké:Æ…ªš¦K Iýá´(¨þ¿m[uƒl1ôâ,µéÓ©³ôÏÈÏ©æÔ›gü®™æ°ÅC"+ 5\ÅE„êv M"pÅ5´• E«õNg“KÏTéÒ¶à œ=Ñ¢ìn$uš„VÝC t|¢ ¬×°÷²NÉ¥-jבU–hUé,Rj˜Ð&¶ÍAä÷ž_ R”J¡&¿”"ï%¥Ü½œ£KzIv œëo¥·­ÐQÚEAáZDÁƒž[\à…m w·_Ue‘‚ÝùD+ŽTáʘ¨;Êåm_Êù‰òÈ÷t€(ƒ cÚ5âŸû¶RuöþîöÓÙ»ÛgWï^>yÐʹZàOp'A«öµª”Ý#B“ÒXл¸¶ä R[rõ–2¡”dÔCE¥É1 »ý¹9/Ï'«)¹ê[OõHvöÛ&êÈ…lç©%‘aÈÆ©[ŸÀò^Bp|ÐK8~öú"‹eÜ“”¹±J¼Ýá´?,fy59M¹—p|+µ.^\Ôé3VõÊPÛH¸;Udfr«¤¥¡•F,UÖQ*ð«eüë&/$+DQ]@©¢£ö´J¹‹ 1z¤®ô­Q­{5RØÞO1$]^¡Æm—KËÈ& 8Z¾S<¶=ËA—V*t™2S@ª$‚è8”ÚždÙå QºT™ÂË@«L¬tަ6%%PÇãÚD÷§‹ËJ‹7û2äù—úÉãßÜ_?œlÞ‘u Éác æ1aw,¨ £!åŠ!"õ}ëhv4.’ðQåj•Êj)…¤ `Ð!¿¯¹Þ¬<€ FIJ8¸´ï«<ª0³µ± Iº¬|ðÄ…Õ¥lÞ³©–Õ1I=ªÈ(~qþîÛáÓß¹¾¬é\˜ù¼›°»†ê HYZ¼u“•dNèÕ¥TF ‡êµk»W#O>Æ5ÂÒ3zÔ°âژѸW3zÎB‚Xæi@v@|>h|:ZEÐè=*ÓÈ¥Z0?Õ€L­[ºÆGhH6Û&ø ~Úýɸ'.=qï¿.nTîÎTÏ,Nü»}Œµ£úËüî÷úùÃݯ7c¦÷Þ÷8¿Q¢F•TV×R$=?9K«ZïèÃͧëۯƜ8f}óD@]5»©9r¶G¨ Ùˆ®‡Béõ•Ò«Nゃ”ŠÒ›Žº3ãÉÓLñþédÂK`‹?0Å«|½…Þµ ¯=âß¾ÁÝ—K6Ôƒ¸¼þbãçÆ9á'Î=– ”rO<3Q;BF?*ûÒ™Z&;ÔÈ1‘v¦áîúÓíÃõ¢Ës ¦„½«ŒJ8)œ=üúžöæù–}y7üý¿¾ù—¿~~.]œ=’îù«#|÷V#ž£ÑòûˈïÒÕŸ_…w¾zê‘®ò *uâxPõÖ?¨üÙ£N‚ª&t´Ê<¼Æœ­k‰Äè½Íè/«îgTÃ|¥?£ª»>Ìèä\q1A¬r9Šö%åL(Ó#í2*lDï`‰ù‰ªAœ¬Òsèeó H=g|"«›¿ ‚Ü`¢ny‹BXýó^8_é;w…e$§ìNhgNQ?‡un"4Œ Ee£:T«½ÄPí%zäñ\¼ÆÑÁÑ‘øÇ()769Y›H6®•â$r˜>"»ƒ L.»ýh>$HHÜ ´{#o}¹•ö>¹ ó^ØÏp0S?oFà %ÿçÝþªÍõîŸàÐÞ¼þàœ¯®â»wtu~w÷3= >Åwt ×—lAíkgÇ-st6z¥‰'”‚;k²¯’®1éÿ_xH³^ŒìæÊ ‰r¾?0" ‚.%üл™rÿæ±–½ûò¢N~ xžô™‘W‘ÞÇþ]ÛdpA)8õøÄ³Wþj£îÒ Ïê!pâx´¦…jÀ»Ft”]>àò9w$¦ëb©gÙj Í»G`À ’Ćórp]‹-Gb†ã$=0§° Ýnóæ†bë‹wml•Šæ\“yB³ w}í`½P´ìNKUwzû„»nR&ánxlMŸ!’Z»*xhWáЮÜà*ú­z™g¿D!·N¥3n]oQÓ˜é°1ãÊzMS©ÞÐq†È3úp‚›üPûÎkŸÖ€f…CZÖ?³ð禠‹;„þÞ‘Ö`±!Z×T£<í_]bÃÜELaÓ¦š×>Ôù¥±ƒïH°Ê@jÁÓ³¥îâÒ‘÷ÝéFÒ ‡#ÇRð1M $ew1»ŽarÚ–|ë —yó<8Xuk^ï-ï¯úÌÕoGŽI’KŸ²t ©§Á¯9Ú3=wigÙÃNåp]°EØ¿C ñI$m‹½="Y\\ŽÄ^eq~¿¬nÎÚYR¡)µ$.ôž²øÐˆÒ…н’•£üÄÄ^Š*Võ‡¢†%ήy{&Y k/”ýnYlÅ!¬›M1‘£5¬R™(3 Ÿp°™‚Ò|¾8êZAMÊ’%.ÈXöU%³ÜOÁ­ëqÇsÛêápâÒjŒ ²ö”÷AÊ%¦ãaí—4HlT¿Ÿãwr=­kam‚ #§w.¹~oha‡ÊÞŠÀà4N/+{¶é¿bö[²ƒˆŠô@ŸÖ—vhøK”}R¯g¥;­1ûÖ™">TöÖz„ÐùOk®«Vc’÷¡`§üY~Àa–—M­ê©Ð ¿ÿ>ì¤6ÙƒìôG߇MæÛSǵdNÖhVƒŠkÙ¹ ±ZÈßóB쌚5¹I¢E5ë½áp—ôlÚOÊlßx¦OEk¯íì…p‘¢Ef€U 
ãÖ~–k?‰ñBÑÚ‘AT¯ÅÓV,rC߇—a•­T³¬®+Fñc>$ðÁ‰ü&c³vìÑœ-ËqD øæy`Ý1°Îö‘—-x€¢J`ؾ xDUò% •ûýÆAýòg}‰›o7¿^~¸¾ü%ßS†ôüEש¼øUNÒ§¥Q'pôÖÙÚÖwÜ•}å¾å®¼[S¾ØÝ‘ä1R÷E‚+6&Ê. žò§ò p^Ò˜ÓQé¬PT¯{]¶@~‘ì`Šž8bˆ!eâ-ŒÎ÷-_¸Ð¹~ñçÑ“³â%£ë”_à– $­5K¡Û\Ùè׊ìƒD1Kˆjf•Ó̰ò„H}4PH!"E^ Äa´ÊU’×`h ¥ó.­üZs.8˜iEÆ®y}æš$B’3quþò,[ÕËDYÇÖ=N}WX­Âàcˆñom®lм*C$ú\EB‹'B~‡m®&N!<¦| ]®¡œ3 Ù]PÏëÒäʃa=󢔑ÄêT­ºÑÑáöM®~7føBQŸ]UÙyS=€OåõϳÑJö¶pѵ_hp±iä–M•Xg캽X»Ð( È=¶’Î_O•ͨŽg:~=íàafzöéh5[pUÍwòTýºð ÷ž2Ž%µ–´z{1ÆJÆM†DNŽŠzUT)J,1î±b÷šq“£ÕàãF‰vP·ßç’EÈU?bð Njr«ø]R·ú¶nâ6òS¦élìGs€‡‡)z®þ‘ß·°g9Ká B|íßõFÁmøýÉh¡_yÛK%…–‘ï$ß°»vŸºFECâÀ„I€Ê)@6ûù|² …do¥:] Ï©-Iˆm¨j‚Ô€Ãj¾¥j¾^ªÿ\Á·DÁ—ùæ(ï<­’q̺2®Â$€O››˜1 zd <(p©í\z®€ Oa>_}¹U"Þ¼yé—Oö½[±êüé‹ç÷ïÞ‰‡ÓÑ.Ù…öŸžNzzŒ‹=WüôÜÐg71OÛ‹y"1k¨œR)Ð¤Ž»HNá=-!~âg Ïs2î—ÉxóOž xóï®ï ó‹-ÛêY5«FÚv"òáND ®CÛ®¶­lpmÌ´ì(av«üä0åˆ^Ÿ”Bx±qúˆU;•ÔCB ‹vxÉn®’ƒÖ™Ckô À’¨Ú £Ä3WH…úE ŠRAó³OG«rÃâ`‰Ý…™˜ã*ìŸ 2×;˜1O4X#mœ ̤ŽÎ—?ÍÊš²¢:çÿIÿ3FØÝášþÜÔÉrì¶p²¦?wÌòÄ€à˜V ®ømö¤°Ó{§<ó'÷¿ê•ÿôæîZÕÿåÅýõäÑ}ÙØ‹¤vºWhziIÝpJWN—IÕTP³<~fQ GUâ5ázÔ‹vBbÌ¢t<¦Ë¢TÊ3Òå6&ÚÜŒXí1;V˜‚õkÌ´.Ä®¸ëTwÉÁ7ÏŽWiƒ /xt¾¡iÉ«‰µ¤ ¯Ï„òW.Fç<ÔdÔ0béŠÆý•׮Üäh•5q,N|£ÞÂíS {°¡_.ÚDå\ûPèˆF˜~?ýSΑ ›dv?6qãÔIÞ*“°û­}¸èô÷áÐ >h\æOÞúÔÚõä`S…î±amL¢MôhzZ×ï”sÏŒÉc(O# ¡HY÷ûœêŸ¡‡w¦/í’#<µaðä·6 Jä]&ð¥a06¥(fªï±ër]S u=N5íMÁ«AVG|ãö{:ëG?Dñún'lñ_¾(w™ µN°vNTèPhA¬öÖ«8¯kå_N½~zÖ ¨Ð³@»cGõ,„lÿÓ„R]4­Ê8 ó’ªv— ›£>!¯ƒ†o䉸´¨©®¯ÔÁn:«¥—¿MylÉeLý—Í`€séäÛK¯¶ HáÊ2"žÇ7§_Äåá¦08‡‹ =1º”]¥ütÚšÑ&²ÔséÔš1â¶Â™ýáp“ƒ*Í0ƒãëñŒãÛ‹=¯êø]Ö³ARtÄáÕ”û7ÊïË#m©mÁy Ûú• ±»ÄGÛÿÔVN©§U/?r PV–$)Ë)–SÌ¢†|(­ÊùWnÃìIà1B=&)ž“ã² ÊuÀO©Õ·Å^›/jÍêaÈ!l²2W’§¸ñj…×ç m}øýÍÏŸºepS¿ŒµÀ*q°…ÐÇ­^æZ¦W7žX#§$eîÅ•]3r1;P4!N'mÑ;n'Uá´Ç¶_¨Â‰Uû¥€[GýëÕM·d¯iH¨l ­±, ÞçàZ&´è¢ŸÓÀÂ"pjýìaƒ˜9‰Óˆ6í@Tºþ냊“ò¿¤¨©ÓxûI…åöëÝåõ•>üóÍÈØ ôn(ƒæ-Òìi[Íîcƒ÷n8{B\z}OKè~&A  `¬0 äË£ä1g&Tíbô­=$:¹"¡9@N ˜ÔÀmš"ëî z¢N™lcªK§¶‘ìÒ•_Ä0¢ã)± ¦]Üïï.Þ|ùv~w}ùë¥eJ؇-/&{×´QÇÅ„‹Û{ÚÖ³ø+¬W†jŠ¿>Péf*õ²ÀÄòtIõëk;‰õ[=YÍ’Æ2ˆ1´ ‡=ÂIK<‰½Oú_Þ»C¼+’L¢0 `€C%VÒ`Œ„£ùˆÝY)‡X19L ÞU‚þX€)ÞÕôëð®ZKÑÿþO%ÚU')ðT„ºQm ¹µ R‹à#Ö­(¡(IÔ¤†’H¨Üäêx““UWkèÉjÄùÄ—×cj)9‘Ûts¹r}=ã—›Ôº‰CF¤hu×*]°[!w”ñÙúí”4]ŠÖÉá<È©ƒÈ· ×ù(âÝò4×~~ÿ°;øc¢kïˆýùßü0v\®OXå¤#¸!øEdD2œ\Ä¢¥ðÿ½«YŽãFÒ¯¢ðe/Ã2ò 8&æ´—yŠ [â®#‰Z’ò¬l_`Ÿl3«[d‘n 
P(RvÌÅ–Zdu!‘ùáËDþ”{à= hH¢Ïܼ>æØ®Ñ;–€^ WSèýROñ/…Œ¶O+È­½Á!‹sàT³z$ØåËûê ‡"™[¬¬ïí­ÀÕºfßÚÉ%y˾¥—“•ïü,Kì%$=l OÙšžv'…àÕö5Zß«”ˆ ¶W By`ÏÓjèš¿U4KŸÍ•X>c_ã)¦dm®J7YoâØ‘r/Œëêâ¡øðÇ#èÿ~õþúîá¾Í ¼ 3µiJU“¸8`!‘`<Ï1FnNÖ2G‚‚1šÈÝÍPÜÉéEÜm[ùþãð'׉<Ï_:èÄ?¾y°%¼³¥½søûц¾•Œ.#.±Ïm“oæØË½ÉèꈊW7&óç3.¢‘¯¹Ե‹Vo¿Æñyæœì·‘¸;Æ1?B´ƒ ›÷.‘ k0R0йt˜ª>‹NŠäb¬ð°ÖXÊwY,¦Ž‹þR`;v )?Îî*™«¦]¥› æ¡‹Áó êÞ9&"=7Îbà%z…È''Ž›ûØpâÅß¿5êxÑ (­ë†Üû½‹ÆÈ¬ºº1rïמë‘<ë#‹ÄKw *=>I‹¢š®ªâ«Sý§\£çí öÐ/öÁ=5 )€,B:€D\’Õ(Êàš¨~abLHUÊÀT"– ±Œà•®¾ˆ˜×q1Ÿ#lsöŽ™i;ž.&æbsO[rBŒxnÈáÐ!ÒÔÓÓC-½==›à {/[ì;÷6°17È d¨_¨&½ø‹§U¦ÜnûqÆêôŒ45ŸL붯¥fK‘¸7°·f‰ˆëŒ_QI¶QË(ã{q›³kîyTôꑞo…q/YŽÿêñ³??¬«b¤ Ñ6eÙtmáý¿:¦M’qtHÊt£ØFЮ Œ1@¬ŸÐ ˜cÕJK·!  0ÒY­bÂUFj‡á¶Z÷h™e ßCdür«†¶‡„6£NÇ(n0¿ü¬UûL.ÒmNÈ;\gD™2ïä·Év?·7ŸÍ†çOî×akLéü& ‚ò&>${ÊQÄCɦaXÖ{‹à†ELVo!VG×T)@*7XHiDÌÔT;%£À¼]ÍJåM:‚Ü“ö¼*G“îY_Iž px¢Bi… ™d"Dmp¡sŽ•êßÃ)HÅži )i€š&ÊvàË*òl_ÜF£UxoÚˆžúÐNJœãÄ3‡#E¤M³ˆÄ¸Ý¶ˆû v[EM6/Ê/®SFœážu“ >÷÷Žžw»=¶±·Ÿ[üxüê›SÜ/ó8¦®†—H§0¢—ð9:Ÿ%ĉ1§¶\0¨ÐRŒO.e2\]oí¸Çö^X‡wK*²éRS^NqØáúËäN'€úN±ØÂáâ­&ä¡Ã6bhAYö••0Û g÷3ÆœQ·yF;d1“ýà›öʸùdÌëêæÓ—Û»‡µ½2|ÎX¿Ôp5JO5PO«5&8¶YÆ I ,±LB ™ëК„*o•Xl2¸ˈȒ)oBe\Ç[ÕykÞ­ñEIþÐj|\N¡ÕwÊ{¬†øº #‘Z 5R†mö"(”v4Oœ²Fî;ù#R>ßüï (¬*ð:s¤;Un~ô`9¼Þ±&H_!€ ø¡TyBo]š«T9iºÜ4ä°V*U³-S/P5üq?„ù’³¹š”_%&ÜB$(Æ8ŠH\Î5uáSvæÄMû—âã¼VÚ é­û§½ÈÈ™\}¹ýxóþæú~]®†Ô¿ šcêªó‚9ä:©]Þ(¼u]AMZíÝdl7j%áÿ ÈbYÜRRCú$Lk—ù éM›óø\ZNqbŸî«o£þðjuÝÃzDõ‚‰2™“²ÅD=¤‡õøDy°ƒa\¨úœ¸F嬊&¯ªQ:‡¸<ú :(ÇUžd3"®b¯M9¬ˆ«ÞM¼Nq‹QšríË‚r>Èðy\Å–ìiÄ/YÐIÈÒØuj£C!ŒˆX_‚‡³Ûj ;d܆µÃ‘6Úq'ٽݷ«ZxJB\]³ ¤ç]K"oî­›@ÖÈ@W m…·šÒüRR£ð5æè ÿs5*”§`®6Õñc¹oΓXà«+¯+p^…¯Â^» _Ç{)Q”£ùÌßYrÝÇ[[毷÷i}kKüýêýÇÛ·uvz¦ñsÓŽ4˜©`O“ç©E¦Ý3íÎJq˜ ‹WÀƘꎋ9ÌUÇ…¸Øâe)³6ìZrR\gÃv’GÜi?¦ˆ ‘’Ó7´á§óbéò…Hƒ(†H›ÀTvN^;(üá²õYu“/“?²Sz÷—±…£jj¯ æg79êÖp’ “{|UÛ ˆ8Ÿ,d˜ëjÔJr"Õºë‡i–¥L­¢áº)SH´ –ÕÎaÙÆ·‰ºòïP²ù”‘úF.3?ïþqýðå£=õÇÅŸŸ¨RHäÍób5_$O’±¢ã¢r°œ¤â=‰gD¹ ½6xìsÝ™­bư-®ãÞa ³ÁÃxíça ³bEÇþyçÓErzË“šÒETaÔ‘ÑÈÎîqâ6í1ñø0‡ùˆSðøáë§çùë‡3½¿»ùò°òš‡õLØAÜž9 ›ð6õ”Ÿ@Èr¦<¤ã׉ˆFá«o» ý«¾œ êqTìâºLjTg ÉX)ÀªÛsûvPÝFÅSÎ{,gŒ…Tgß©œàµ8I[SNA·Ý£aàÜVz/Jó!7meÞ}+CÔNÏJ&¦,•ÛÉÁ7mG%å8¨»F)l}vCÿGØäA1jÇmŸ‚dIxs_Ylì+›‚‘Cݨõ„¥ÇÜÜK˜ËJµ{‹•5´•5Ø0¥ÄU”•# 
n‹n0KÚ;!ÉÄxÉ?³CßJá¤'ÞÊö6mÍÓ²iç+tO{a‘KgÍžñéë盇ß]µû«/zÿÛ—ÓæiV6OëüÚEï4s*Ö7OëüÚ3½Ó8L˜ÌÞºS´à¸CýLÔ)#î=ß½yŽÀ:’n Ñ/ñ‹Ð|÷ŒÜH¶¨èw5m¡$3^”ê÷ PàRVúQbXl›´É¿”ÆôäþÝÞÝ~zwóùê“aäÝïÇ,ó£ñ/‚bÑëIˆrkPìð.ÑÇÂê&“ÝÃäü¾‘$î›duȘ¸»ýxý‹ÉÕ|ûï·>]ýr{ûàãš¾\¹wôô×/+GWš»ôC·ð¬/†ëCãºæO¥µ“+ûä5ÎM'²ÎÀª†x±/ÒQosi¢ìB6]VhTNFYg…Ò}<>÷¾¨2¡/²Ÿ‡#L!k€2“Ë8Ô“Rhj²@+2Ë»AàÜvR2„ïnšqØNÅñÝXu Éf~› à G\GkQÿ4kϼOñ» Xßw®[dàÕ4B=iŽ[¤\ÅV-¦g,äÓƒ­®¾µ¹sü0cL*{c«‰ •À¶d¤,Ç–{'hS݉®tÐ`Ï=Ý9ò1;œJÇ¥¹ÙGk¿N)VnÚRá!c ŽêŽ{*㳤eBå·qÏÅ_íļ¾ûé¯ÿ÷ÓŠoäw_í?ÿüéÚä:GÜï-}øëNû¿½{øŸÏ?ýµ¿yS¶ÀiÊ^uc÷þÖ/^¶ïǼ½}ë÷þ;ê9Ó8PŠ´'¡0÷¾'Nâ`'Pç ùX(¼—Óã_âÄIIë%,FIjÇ?$ÒâMäÓbê…÷HàœäÙ¾gÏØ6‚Ï ¢A|)êŽ;jèòÙs &qóe 5^–ÌŽ¹¥ÐøÂËù?Çu‡²Ãý¸°†»$ñÖ¼¸ÔŠå3Ž€ùr³yÒœ…eèf×Ï( { >RÑ/ð9 ¯1Ÿ&Ä·˜Oó¯#îü·'óBØ¡ù£ŸB~%HoóýéQÌ?=%'ÿtüù«÷÷wW¶Œ»ÛyZÅ㿯¬3Á=)†÷„î8X¼ Êì4,Ü%ȾP¹LŠÒvj 1V- ¥´ª…ØF¤4Û[«q›È¯Û@Œ»ÇÑÞ²Û°%gŸ°s¹¤×(§¡•±­ÒAD»!¥¼ã bÜ‘¶ «tLÏ` HÒô³À2ÏŽ@•â”{ô•JPÒEÁ~Œ.²/Jüs±²þ9¿¥¬Ðn¨öt2HRزm^»Ðá? wðºÑmàÖÙÝI'ðdMnà!×öͧ唼ÉÅÒZfw‹L! =s'—)zæ…š:Oyluì•YD»s³Ž*“÷w’9M§é®¶ÈÞyL’dc’VÐô‡IÒÒ€¯ž¤½}’û=u`È› LRO W°dÍ›¡LZ¡,¤I|¦oKšúÅ*”‰ëãKk² “@sYòổ“*l¤˜ö®qr9Æ!úNDŒˆTÁ$ÕQ_#oÔÄþfìîúï??,môÓÿóýI~‰ï#¿OôÁs,^BTXOƒ^aWáݽxq "î›=ñóP™·‰ìÚ{õ´^0Tã…œû¢î©u§—x b°NU‚ë!õX—\jv»\K=è&/(y]}öˆmÝnÑ|ó שHª!ç-À1ŒÏQ¤dØ“€Dÿ,u{mâ®[!uO{Fõq·Jò•íº¥¾ëHæJæªÕâãUÏ%³õ2ŽbÆ'qŒ¡ìÊ(ª¬á CL/íZë5Ë™²ž¶{ó%“;Õºè?{ÙÞ¼¶HŽ1nÚJÞal#k˜QÄü´Ó|$=«ðÏ4¡i“{œ õÄõbHY÷é¥ù(«QÈ:k‚*e¨{[lL²ð;þÈÉ}ó“\FDîí­5 §´XGX£¼îàŒ™˜ 9‰aÊæ‚"–î•™ñëO¦Ç5m&zÀ`ÏÔÈ©)ød‡â1+'Tý&­ç¢<Î!<Þf›;Ý“ðÊâ5Mº+´jU5»ðNqmýL¼F¡ë¬™kΦ‡…³¥r\µÄZ’ÒkÂÞàÕ±Uy÷à{ÖXHÈS©‘Vf ³y ڛ÷ÂÃÙ͘pËÆ&JãÇÅ‹ÀLþd}ŠÛ¤^GÙÄØ3¯fÌówا¸°®æKƆ$Úh~}õ² q1.°Êˆµ·ÃlæU;À™w¯À°S îlÉ*”Ó¦ÖôÖ/Ãl;†žJ9ÍI"ËnÈwן¯ÿùóÇQÆ Èæ›†õP}&ÅjNÃ!šÿ²<îQ(#ú ú#ÇW6Úñ˜S¦ À± ëíUýõ;œ¥ZÊMŠ¶Ñ¦Ò›$P U˜ÐÊ`ÿ(–:᯦aôÚ:;.üÍÞ|„#l,¦áñ?þvÿå×ë»ë«×ÿõó{ïyûõÕ'àÝ|¨xZaVpN °B¥ª±Ø7õI<ƒÐ‚5çæildÇŽP’î¾ó#0ŒÑD§˜SxC>n§î“6­ûÝÐÆÝé8È MC~èޡ˶;?âåÔ»¶Þn™SÒ,i(w_šè*‘žØ3±gH§7~¬eB©þ0ÙïWQåR"êBdÕñ®óœ•EÖY°9,7Ypâñ.5BžŒŠØõVèX^³‡ÔÑ[5ze³÷,þ㟿v¤NÁYL®j«éGU;r±y!±õæÉÇèµ7§™ßÄÔ1÷·ß;jØøRL žÚF°ï¸¯ósu Ãaž»uæË¯‹p+'É~a%aIÎØ—gj|‘‚`6‘¸Er£`ÛµÄþ!²Öa¢ávÕ0iš')¬ObêaÆöš!ÁŠ’­&¥h0MÜwxÔA®G÷ì9S³%S"ÆWé1¤¹mˆð¶|·UÐPÞWµœÝ£í7vFAzêW(Ì™G!t%•çB+—\>‘qt{í†ÃÒŽ+ÎgÕ T¬¾{ZMCZ¹½•!Ey^|·xƦ¼rpç—øÿþ·1­|V6\Á-Š 
sì™HrÚÜ÷rc!p˜RëHчuU´ÂVÅù‹‹¥µÓz Ôkð±™í,À ¼eã|°|×¥U ÑÞ¶ÿÖêi(óíýÕ/_o>~èõ¢rÁôÍ‹2ÿPˆöa 04©ëTV£8ï¬ Þᬎº1ÑåÄ­ƒÔ¸”²Ëе—Ö˜´½¡Ç0cÔ½{]Ì\ÇØ’³éÀë”D¶5HR¡Q™½e88·—˜Éèo±oL{蔹Ȭ°¬Í‚í{EQô^ µ2(œBLHUãÇ$\¤\O2TeÌxU –‚×n¦-Öï¹0ãË(&óÄw©ñݛޛˆoýŸ?4»ýpó0_D\¦AOÙæC¥Îo¡Êżu£7-È=™Ú¢jgGèl&S“úëoù×ËU0S↸W6¶êyQÙóZHq Ìv£1ˆ®Bd[mB|QM?$›iÊ „éOJÈÉÎôb“íbOÙºÿžæœèÂÈm™Sc7ukL1ã¡´à¢9"! ÁŒè^êL1§uöH€d›=ƽ“LΠ§möæJ :èŠôÏBΉ1n£Z4cdJ”ᕆ—•2I¯}ÜW!+ _5±1áM¸ÊÐÁ‰¼½˜.c3mòv¥ç{.ÿÏÞ³ìFr$÷+Ä\tk23"22èÅŒ½†/†á‹a,8œÖ’Öœe“#éàwDUuw±;»*+«z,^`‰ìNf¼À“Ь˜– 'CÉ)Ó6ÖxÒÓ[s"‡³T)JË{”™äÒ/zŠew:Ú V¿Ñ!åÓí}ÿùÏÀ\³`lBì3ô#lb5æµ^Ý*Rä#º³çúìíÁûÚ6ýq³~ºK׬Ü|ó»<¿(··\Þü’ì”*Aǘ­¬Œ1Lt*žØ)Ž z‡KÜãýY'}nŸž¿¼^}þtZ<ˆÐ—6PØ_ÙcÂÌ@çÔä·ØBäHÁ/-÷C.î|‘†ÑÉÔÜÓ–tÒ§˜FHGB’]á3­¨óE ——•U¤TW”,À¬¢[JÍê¥àAííÝž)Å'{eÔ)±F“ç¾sš…’çŽä\~b—zÚq”Z,3—ü,ì1óÃÅ÷˜*a3Ç^qž Y]Ã12[WewG-¦^Å ¶my.šiP#°ùMíe[ÞÜ«úwj"Ÿ•žßÔø¬‡?¥çfÙÇö½öÆ;ý_‘Ÿíx7©ÉƒºÃªi:*ÞÝ1UxÛ)xfuƒh¾žC‡™ŸI„S[î8¯ Äg6ƒu9&W6¼È jÝc=æ¦ ÐZ¦ ìbΗæ©ç©‚.8‹µÇ¤Ä;uÍÁZØ‚ãñi׿¼¯é¢«–ÎøèŽ3ã…Tfcy¦ã¤šò “œM| õzFý–ã•…Ý‚:Y±j(ÜÄ+æ©R`)× >ZCIJ“:!"GžÐ æÙqvií…þICÛ‚¨2¥àÛ¶™Êx¯;"Iß4:“ÆÁZDyöÔÎw#}GH=ü\æ}Úû4n¢HðS4W­¬~=Ò¼ÃXvˆë%¥Q7j7S9É1‰ ÖÒ\OrLIjjô1‚m:›½·íeóõËÆï¯ã?|ì”à fÐ[ã7µBB4IoµV4NoEWJ¹Pp€RÇÆ+µc9¹£çvØc¬5ûv­?yUÅ€9$ ‚ðû Ž· ‡V8îl÷…× ú(ªI1%yFŠªîI›ØkÙúñQ#‚7@Þ‰Üé}gž1Z+dš2®jçOža‹9k\ø(”<½YL¨ß ßYô.íq*»‡®wwGŒD$÷º3¢(3#J™èÄáne ¡fPTµ*ÈÓ9i<ÕUõtÖ#â¦ó©íQパá;´!ŽcµÿTv ©Ÿ–kýÞÕSaJ»ÚU»È³úm,•­‡#ªu‡¶S½Ò½j3 ÔÞLêUÕ ãzÕðä³zõ€ˆRµŠÈ¶Ä—«U ‰”F×}ˆ›D¹´Zõ©Ÿ_r¤VÅEŒ)NéÕu3ëe”‰Nôj+e³[ëÄ?CbŠÞKMgLµ)ÎîrU²Ì>YÞ‚¾äê%ý JÒäÛ«¯vçç·í•º¬JÁƒ6»º}ú|ÕÖ‘ýô—®è¿ÿÕ¼òÕeÿÐ*׋*Cêß{5Födõ_ïxJíOp ·ú”òÔ¤5eÂ%ÚÑR»[æ²EÒg¦Ÿlïî7Ÿß¾¬=7) чK w šJ;§Á0H•æ&#¯¢ä²Ó ë.0„"듪"`v\ÒO«ôi¤O©V.®ŒžId—`¬Ù°l1¸Ù{XO¹äÙÖASÇ`§þBqúpgM…×w›—ªü,“ø„6¦:1O3‰ºÜiÚ  å ÊOkôëµõ2ªÕ¿3—Ô ùQ±qUrà‘vQo¿^Íâzlaõþú8íg"“lqfÚÏ1«,g¥&…Ã~VäŠPSË9@[ï¼Ö<æ};ò¾ùúËÃÏ›»ßï¾lvË®¿ÞÞý¢ÿÒ±Ðj¡‰¦cMäÃ$Ç—+  m•Ñùzkk‚ïÍ15Û4HQ+Å8{”ãh—úß^žßÞ-‰9Ï:Õð3þ†[Ê…)QH%LØ'ƒG™bvbÆ€+qaÆ å\¨6;"ê¹PpuÛW$DnB¬K~H&ù‘ÙYÐè¥ p’–d½x2NK–³m'h v8¤†é}öãÝË68(X]D€òôÇ:|à±æE(’mfÔ0wFþ#“æp3ä;‚m(ô~Ú]%ÂH“<áû˜æçîÎÅÙË,Â9ÔT 
'†$îi´¡2mm!i+Ó~’~8»_g‡¥U´34>Ú2™ríœôïÛ4°‰iÛ®ƒ|úpôË·­ÒÅß¾Y«¦rJL{F±röí½B³Ë8|¾:|ÖŽëþãúè]éb²?Úí}F¤ûÛíý‡”¤‚n£vïÞ^^,YñùÓµåo®?ý®LöáF"J.~@d¼úçüpúÕ‡§ë·íþrÂ.<÷9³æí KâÜcæ±ÎpÐÊ#ÊŒ Ê+WO›_¯ZePu‚âS܇˜ÏDj³±ÎÓLØQ¨1ñm÷à‰‰Ï­Û"›TÚÚ» uÀêñË„»¦Àf“ÙC` Þ7޾õ &~xÂBï°ê^*7ñëðVÓó!$T݇ßÏÄClÔÉ’¦ðSq¤N<ßÄ'Ûpé0Ñœ§šk^SíKÑ϶ñÞۼEWìLÿþ׫w׎ÀdöÓ„³À¤4§|Îhˆ 5Ì»õ"ÛXÔ8Ǽ‹sÁû¸ˆ;¤ÊéVÆÔË ð’ö oô\s}wû±ÛT³ÿÝÍaÚýÍácëù‚±QOžb™îŸd ùBt­â Rƒê˜º¢|‘ — £¼£:¿dw„:dfœ­ ËZI>P#øâ’½ÔÕ›cÝ`ò8!¶òk·úø¹ÅmÀŠ ö’¥@°”jØD¢°ÔR­=BUDM K"˜"-&͘ñ¡\Š>NÐM3‡á|ôòüòd…#>¬ä6åó}¸ñ®ñ8 TÍfÙácE¡¥‚f£¬<.%ÛŒBKlœíu·(çÎä8€}áÀUTf M²Äi*$šõÏK ñf-Ѻ#W%3Ú ÞÂòX ›zTF5g]”Æ©fÒâq„n=ä”m:€VB8½–sì°¨¬ÙóIH!Vö¨"\„Õ Gˆ5ÂŽÁY§Ú2ºqñ ðDMâqikj7úžÍ>‡\·ó²²é­‹¼Ë, OX–X`¯J…ж?ï™ÀS@ÏK˜U]T ìvÁ_$«@)óšDÜDËöûiŽPrŸ__tÜC~ˆö²ÙÕ[¡¨»B³d7˜_)‹Èæ#Ö”Ä)`ê½ñÜb§_ï7JÏOÏo¯ÛæîéAÿ÷sûfümóòEÙÞæ“Ù·¾¾´e ·“ãŽêN,Þg!•_" # D©Ðù‚º0k¶X^#dµk«?ëÂ,³÷“ºŒãþˆ˜*œh‘ ŽeÍ¢˜–”çÂKCÉÅщ7=C &ž´2ˆY?n€’UФ4¨ ’/ålý‹¨Œ”jù¡=¢Bã0Ð0܇:oO½…)” Á†‡D‘ :‚ vôQÆèØB .W52fÚ[h/Ô5‘¡»0ÐY¥ï”iÚ_Sâb¢š7‘hl6·È IuX²’KEQ™à†IÁù¡FÖ(q×[«º!G340{J8êõM {8zZG±ÑP ýEwÎ0ï³$I s2ÈÁyõÄÂd/¾f\p´€ÆŠ0/H2Û‰5mm]œB›í'„}’ÜKÎ%«È ¶ã ËføëÕÑ^°–Ú#¨¯?7òr1*޽[˜íÒ £8KéïÑO8E TÔ˜ˆG¨ÚÁ)[I;¬ `NJ5•ÈÞUg ÏèÛ»ãl[™yêñ  ÐãÔKŽ¡.%½;"Ô ü×àõ›qqJ:ZÐRHlÛléG-i UD#÷ò¾Zçä9öZQJÚ7¤:¸ÌP¢é´‚ÆJuSwGx Åj'™Y¡«ŠVˆâI´BDRÅ&(J©»w¸3¤ê YÉ@3®ÙfÔˆðN0‡G,Ënº†4@Ë>M„ 2‚ÊoÅTARÄdz§»ýòðY!~úÛ¯›O÷ê^uæ¼ÇÂöcïHýrû´éw˜l7¯Mÿ¯Ç=ѳ­;;Ìr›w?Z±»ç6¦¸-„ll<@Ù*5=ª§È¦‹~gÅ¡¼^£8l€¢ªL\ú–ÕR°LáûhÓšAâ4]U~¦é Y…?­ð R46rï²ïÉxg†‘S¡~°ÚF"¦ÊÝÀ¸ÒlÍRþ¦mÙTÂXS›¸TSŒh[ …ÕŠU‰mŸ¿lö`P¥ºr+‡”)$rL~œÅ=1žß[s@a¶Älˆ£“á~O×›Çç—ß{»gÔ?ÎÇ:[(ÄI u•õEy«G®uN»#¸¦þ0±­!®)?¡†xhC‚ ^¢G²«ëCBEµGt#}H˜íCZédaÒW'H>Ó–ˆ”õ“Fó6ж¢î®jyoçÎPXêK(v©qÓ¸NP¨’†,aÌoè!÷ÙW…hE®qjlì8—É?)2Ë_k]ª"ÜZX¯áºõi˜9œŸ áxÒ#nŽÜiîÙ¹†˜FIÕÂáœwþ|EÞVÊÖy ¡)xºÕ[i,ìä}–rxƲ)yA-IJçÛ0pꢎ-¬y=êþ¨ƒl•×#ç úÈÃ…×ݽ¨?²}}î6Ý¡Ó8eûñ›oÚ¾¨ã‘”3ž’’„gñïm{Ã"9ï«^sÕN³_skqVã·'¦Œlk€-Ö†W$Û#yÎ=þò= ­ñ°äl]Ù*ÊýÍ‚hè–0G®šq­Y§MÐ.²®-Ë&X"Äi‚¬±?IÕpfB˲ã0X§ –ÕD±’©QÇ4Ùtè:7¯?BIªœ"öl•ËËȆʧ¥NQ`õ>¢¤ÑçÀ*’@çî!‚y§hZ‰S¤×R:K gaE²*ªºõ}û#jÆÀY[¹:–’-?â7j¶£¤I²¥dVcdkáÆüîædTiœÑ=ëý˜9"ŸãUy&‚²G î”LL“, 5Ùˆúš‰r¥ÕÇC…ÌŽNjõg™&Ao¨až›&° N'?A`6»ÀdMA«ƒ—F4ºé–sÿ˜;cÙs zèe&#ˆKè1B`¨ÒÖ¤FJ‹}qËÖj‚ ¤Þ6ÆL2Eˆ1¯¬÷•(k†& 
›£¬É¶2reäîˆT“éFO8®’ãì}æþ—ÉΣJ³žÓd”FR­ÁeJ#ð$$Èó×xÓÕK·ƒ¯©”{°}¶±j­="Æš >E›|êƒV©ÿäOÕr™Ç¤7t„üôۣϲƒò#7Àh½æ÷Å ïÎX–ÿð©Ñ›–å?zÀÔ'MœpÔ¼í³ˆDXZË¥º¼øi±QÉSK[ÀˆÂ“L!>»”÷YQ/?4"ŽB˜!¾–0ój¡-ºªEŒ|"r+Tà­1ØUäVdJ:68!¸³æ¡I¿SôVŒegn Q²Ê(±E")B)CXFÀ;pÕúÜŽp5ùl´\ž0,ãâ)¢áŽ£¶K`”¬Vn a܉kᦔó안±ÞÊÚ!PæPÚà´„jX•ߊŽÅö˜Ì-_é%´+ZП^ß¾¾ÞÞÝ_Þü|­HúÏÍÝkôfÊTHå@íS ÅúO§èŒ1_i=ÀÅâKIC¼„3-ZˆKAQP“zÑàÃÐOXçŽAƃ,-E¥qÒ±îh9žné€Í.lBSà)©@|JGŹƒ3FãÄâ…J£qŒÉ*öÓFàã'ò2¿­žqnéÊÏŠ «dx¼ Þ+ÑrZA8ÇNrRò02"f¬¬M`c•Vs´R«Å>«#¯^¡'¨Ú¾?‚/²j ÕkÈé×^\hÓ å°¸P)¾Ý–¯,Ô.µ²pò*Ãe…Ê1\V8y•ò5…Å<>®îÚ#qÕŒv5!.B¬³{Xd÷T7Áž¦ìžm¯CŠcÚªƒ³¯ ` ùÚü,+È<ª%œ±pY¡T’ò…»á 'ˆ³I¾äi;‘õûè„ÐÖ3ýO·ñŽïË.8Ávi·dè¢1»7%e®+ã8+Ä+Îfn­_CC‘mþnèæÿ½šò¬Õ…JTñ_ÕŸã™lOtvìg²„×ËLΪmr]eÚ¦ µmÜMз‚—Ùjªèü /sæaª²ÀIáŽþ¸Xyî0µ‚À§¹#„±:؃ðå[ÄZƒ=¢ý-† óØÃÞ¥a™n—÷ñè7xùjÍ`*®w›¯¯Ï/-ƒÈ¡t{§×µ¯{_½>þá']½ óà´^?ê>~x”âàéèœç²Å©‹¸Á;in!$8å›tØkõ~Öä†v@ñTÌK\³$}äEú?PM%<Ûòéaù >˜FŸÄs‰]ïKG5wÀü à´’²/ È„õãü. ’-ü²@Ë)ífZ §´Œà’jŠj™| $uã³F[$¿|¼5í6øÌv-»¯ºÏ:y?É<6um¼Ò³Ãâ™É/{4­òá´Jžeõ(¡,ÒóáhI×tÓ†ú¹f èî7·_^ï˳>6ŽçÕ«²8·HÚ UäóS’1ÊܾúQ™™IÁ£+ð•r~Rh€swhW»u´äùÝþxûnõöãÛ—×·íÇ»§õ¿þ~­è~SFÛÎÊÔ"žÅºÆ¾Ìq™ï¤"زA.Žꦇ—"j-O$jÛ½ß^ƒì‹ØG½†l…×)k8 ƹž¢ÌyäËG á"9¹´šT,s÷:ù^M*4œ–㌊*þÆÇdsºïoЖ¯©/cQj0òÌ\Û£¨‚¬áµMy¤ZжGOhÖoð×Þ|+Q•±÷DýõöÁ&ä])³^Ù+öOýðçC}û£þüõå÷‡Vónm’G¦ë¢“¢Œßú…k#ÿó[ë0¶Z8„ Þ¬¨:ìÑ#ÒNhç®­²ùà!†ÿfïZ—ã¸uô»œß«6/äǾ‹#kc•/òJNÎfŸ~ž‘¦GòIö$µW¥â’G=M€¸¤;«T;JÌSbv©¬­ÈiÐ5«t8y~kÕédI%{«!puqßJ—×fÝÞé~#ã¡YùLÙ‘Y¢;–ê/¼6ñ<.†Qãqƒ¦7÷uº€CÚº x…EÈ ›#–o°©ŒºóÌã]aâ4*A”¿Ê>üïîÓG¯»;|r“3ì#Å]mDˆ¼W\ÚŽÕ·•Vmþ0g,ŒÞ×,5c‚¢Y±0 º pˆí­ƒ:3ps $ïæÒ÷°@ëí\>¢o;€­¬ž0#ÐbO»4vŠáñÿ—IÃÄÎê-]¢í•ø;-HÊZTLó­An9fK¶ÿ9„תêY$ˆIµø ¸ÒmMª„÷îÚQ:J™Â”r*Ù¦íBŽ)D7Ò¶¢ÔØVŒ±Ú´nS{23ŒW§à&½—!í­|}üöøsþp©õmñɾ¾wZeEˆ ‰=ŠÖ'j¨í{ërä‘{[iô_þýôüåδÐÝã'½½?ÿüðôÇ÷ùf¾þ`”Öõž'`Õ‡¥nëÚàšÖõÇ›ü^ë.è3d3(OoѹîFp©¡ÒcËqœ°“í±íÛ•éuåÀy‰øÇÓ§ã}ÑWñ~üC/Çýç‡û/¥výžGohåÏ\93§â•s¢qq)v¶yô\ì¼ ûˆgoíÝD({5`PŸzLƒ^Úñ][¢%⮫Hõ ©IÕKö³h–ÝË8™QVJ·†àWH"oä>£ßi>#^hÚµà CÑçàXJ’ ùý z L ™T™PØ(™$R—drØÛW*§Ã&‡3h¬”Âq%Ò•*/Ín…Á]1#5ħÔS€Ø#öèZv«é¯9ë ÜÜ]½ð¨¯HýÒîz ¢C_–ù¤t¤PŒ»eWߨ1"ì¶ØÆ:e‹Ð¯BYèm׿Þa·¾c&“m™¾ä|Äbk‡Œmípƒ…~\Ęã³LÞÙÞ#{[ùãö×üÒiÁ*U9i©¥.@–µrý@½Rm›y"±íj¶Ù{HeÛ M½¿¬¨—Ó5z¥³™å3²³öäÅ6GoòÄTK;éa¶·]¶ ÿq^*eh*:Ë 
ÄOÂìÂð?ãŽV³(Ûx)¿¸J~+XÛIˆ,²)oHõbÂ~Õ.wT­«‚§?b:thÆÏÈs[¹5'Îòjj«*MMUˆ:Ÿð÷™¾é.ñ˰ÿýâ«èT8¦²"·ÊªâoŽ[½…–SVUüÉsÔTÉ9&ÊfY\1Ψ‰Š„˜=-F.¨F®®Ñí¯·.b—Ø9Ÿ‹ØC®}j´²*ì Y½W¬Œ•«ÔVB‡J‚ž(”njrC¡M„Þï7‘Î{‘¾úÌL ÒRr½™ï1F)Åå(5ó5. W È¬ —8e¡4 ½›s%ëÏ\Éæ²&äÀJ±„UéNÑ•°Ê–š+e-¥|!›"ä è8À(Ê?bÑ…,Ú-6Ñ ÑX`ð©kg†£cC›¬hp’(ÈõAõ4½oá]39*vdDLÿi,É£;ýõF¢ ßÒË—QlZ­Ð|5”1“A'ÖúØQ‰ ÄÔ{Ó…’›Uè>w*5#QÁX>—Ú)¤„—©ü1Wèó$§&9f{k;+i}ì×ÂWÙõý,ÐÎò>%X1Ûûã|S²‘U•>±¾*½TtU,†. ½8¾yÓåß>Ü?ÞtîÖ }ÁÍÈòGõ¢À“‡ÝÎYKÔjxu 5}u5—žɧêœSWF]€Rê˜ôæŸß€¼"MZ:æîþCÚ:îßlo·›÷ÝÔÇ-cr½öŠÓKÅ5ïçÌUeò>kö‹óz¸Àï:Y Õ™ì,T$3ç Q탱€¹F˜‘`›@EzkÇXM‘±5 4úÅÁ0Cç-‰9w7g+–JÃe^ÎÐ6mèÚ8‘ý=BVW‹¡î…eì)S½?ì§\ôøÏ1`0ƒžÕ-*ñü²–ý#¨ËÕŠ÷¶ý±óן“žeAž@'ˆ:_Þ{:ÎHhQº‘¦·™”~*¡†Q»OÉ+ª(*“b,e”)wWò$6Q»÷œCœ²ûàYÃ"ç; Uí[SƘ#`)– q9{uCj¬âäK ºÎ*ÒÂ:ß}s¸Ò›©‡9ºPÑ|“¹ŸUšuÙà‚ëPrY\®è²9^£§•6ñ×4sØŒ“NÙDö%]䯄¾³¿&CÌåµ]0‡tgý•gIKE\7ŸI+ÓQ¿-,‹tÄ.vh¼Dïú抦ÕÇÞ|¼Y=˜RÌN¥êŠâ¯@BžEÙÉ‘€Kè[\?Y;ü4Ó@´½¼˜ÒB?9ÏÐ9Q MFÿÑ mâŸA:c¨Ég04iJ‘4\¥‘¿mùü<ôT©taN¶c¿#µãÿo¤Ð¡zÊœYãŒl_ß:ŽŒ)&ÛŸ( q$‡òÑO²tD£Å6 %SfØB«+àèJfoamÅiâžÊô\$èÿ#ÉÐcà!§nètÞêIÕ¶yütÿðKåõóѧ{%ÄÚh¤}EÅk1Ó×ÞÝ´»û7›£A2û_n~}LÌ߼¹ÿÙðê—Šé^ÕúfʬØëâœj2E%fpó¹Ä…äå±?,Û“¿ {(¦&)”+Ç‚äèIF2iÃÝih#ª¶Ük ÂÏ(™&¯ªD0ƒÁ¦Þ!/2­÷Æ%¦”p*]Ô¹Ò„ž4 XË͘MsÚÀštuHõ”c­öŠHÚ;ÍŽã¹Ë( ©GŸïA¡¦çL‰Ï™í£‡ž q¸‘ž:"&õ‡»—7·bu‹˜ù•ƒ¨TÑv!êËmÊY¾…ÑÒªzļN-“Ð>±†R\âŸâ¨=g!­-è žé—«}•îþç…µûv¤ýCWqKï²ìd¶.f¯Ý@¢ð¶ðº–<@¾BKÞeþð£F;Ô0­ÑîòÓGís^ Ÿ:¥Iˆ2¹¯îò_½Ô-W´Î2Š‹ 3FÖBt@(úyD².ܪ0b¦€KÁ¯•A‹äÉŸs Âc‘´`ÛA¿V2s “¢73Gº ®´o–`/g<-9Kš üUcY«j"¼ŸJ[¹?Õ!zïéBÒ2ˆÁ.äàڙءÚdu(س5Ü|¸ÛN«-`š/ñ E€9#d‚AVš Ñ"ùš•R»ú²¤ýc F!¦¢®"Œ¢£ìD™'‘4¹R±×Nåü8)ÈÆÔK·¨rWŽgš¶‡Ñ$gÌ5ãÙ’Õ%JŒ«’–Õá)À„AµÓÀà¬6½ƒñeÚŒ]ðÔ6k„¯=;æH„Ó”^5ˆ2-BR?‡!•\"óm17æH:-4ºt#+/”Ý¥æiŸ„Ñ A³‹›”§õ!8^T/ûèa:Ä €Fgf ùbi†V5<{\6+&ãóg•GØ-»ô ×>…K„ktÓó–éýÃöþaûøyXì~:<¿t!{á›Ë²ñ‚ª"{ÐE€Kçp˜{RôS›/‰÷B²ç’l—äv›2( åÛï)`”é Kú“¸ZÜØ[Gž¦åv˜D£[æ×±ûí€ÉOs;D~m;Jp˜½ˆÀݧ|ë ,{¨åëÁÊyõ‹)fÙÅw!V2LYìÕ›ÝÓ¦hJÛÞ4q(êœGð¼Ô\ÄEŒfÂ2¡Ùb0efjÒ~FPMƒcuö™ÛËÁ± ¤ØIžŒ=WÃø$”F±qŒ”Æ€Lá’EÔøa¤îÁqºÔÊÇÊÑG*´’kÓÛÁª09zå…Íä á¼Fƒs‘iT€» «…‡‘ôÚ7 Ï…8õ‚AæKºM…çd¼œmžªŒÔâŠá¹xZ‚(û”у"ˆ*`,Ö)I¾|,‹F(š¸J˜ÒT˜r®è–Ý0HèÍØéIáwÍ´z€Ø{ï2ë–üÅS꽢뎮ۻxJ¥XèÎVcŒ?çÚ@;…ûÍĤB¬å”—…®R,"–qø$“F`*ž¢“) 
¯`©½ß2·–î!©IyOÛpâÖ¶§3@®Ìɧ¤õ?M$gŒ‹r]f¤}’]i–Ç!ùÑiWÜmn6GÒ_m>”]ܽ<V·›‡Çi³¼éÒ¦XzMƒ‡ÍÛ^†ìæä²“äÔ´‚À«VðU]Lƒw¹fí‘HZ%¿0㪸Nè¯ý:pŸVX,Îr&ûEßLi –NŽjRßç¾Ô{…®Š¥Ï·£—i很v_í¶Úmv»©Ç}‰¾+°²ÌaÅr*1,Öj¡µ„Y%'Î…Š2 2Ðr¾ÛáID­nRÑ‚`ÇWZÖþ7©‰}$‹´j; øXªÓjÚ¤rƒ“ˆ; *ÎiW18Š‹ÊôPßÑ-XÇUøíÕoxÒgžÊ‰ÕÄóXNóêU¨PTfÊyNmн!ElrçpF“™ÏHI)C²KE!–b-CdC¤"òë¡ôùGmýƒ; pšS_ýêê2pÚúMÎH§A¶mZk‡$¬ùZ†Ðv`†í$5?¡˜á{§3’&UË«j%mÂÁ­-râ®»Çahä³j‘ç~ö‰’’¦>®Ó.RÖiÅ.Â2'ýGQB$žÜð6Yv’ÉZX´›˜™‚0y,ï&N½+ã”9Û77’l‹Ý$y—• ›I¨`í}a“ÄL§óÓŠÑ3†Ô‹ö'÷ÇEÛÊ÷ˆZ=mæ¸Ä½Åö¢àítbÏŽoì}M@]fz0€s„ôMéû’ÂÓÀÐ"ÐZÄÊô[äÒŒãÕ6ÚÁ΄`Rƒr¯é^‚œ¨Q2a»B⸑£å¯~ô_ÈošÀ¯?FGaÆ­+ƒ’EêSÃøJjAS6x)Ç8‰ÝšJM­¶nÎ2g>-¬‚F@×ì‹x]ÏŠŽ±·g%1âi ÐC x\Üàwí(K$7l¦×ñ1+À×.¬ÕçÿÁ÷¯_½Ý<~xó·Ïô~óöõOÛ‡×wï¶ÛIœ57|Æ_Q°òÜIá3þð¶“žÈõzÌÜq‘‚;ÔŸt%F8"»Û„ØÛÃÓ*˜{î_nÀ&îY¨À,^„*!5¼ã"C*›)Þ ŠCÝM^˜RüU mÆîØîÁs¸î{SšÞÇcw\´ohá”ì›NòWUà`'Ò"TCÁ9}"IÔEúŒL=˜š9 þ&Ô+ãé×Mª‚øÂ—{Ý~VÆ 3F›ÛBÏ®d|™Ûõ’€›y)îwë‹ÄÀèŠp­.›${Yf`Ž`! N ahâàJ½g†˜”™sä…ö[ büåF3;t5müseךœ5èZ(ÕêÍf¹1ïšÍ£¦é©]hṞk)_¤,€-¢p°W;v~™‹sÎË÷+'ŸM…?-­†ò5¸µŠ‘k³àÖH=Ãì9²Ã#}ú~k2Ñøú‹ægÛòkÓùæáÇ?þéßNh4¼‡cd2€û_줹½û‘néVã ½ÿSð ÿú|W¶ÏüÑ:A^ô€á^uÎ`pI|ÀZlMÓ|3šq,:%ywaóaÔܳ]8) <W™äªH¤˜ÝÔI~s á¬>5²“E‡OíÏ6uH‚t=Û˜O˜6vë|³ÿÝ»ÝÃÆŽw¥gíc–ÝzÉ|V 6ɌҊlj€]-÷ W_ÕB_rÿ5˜aÑŠ–X@ë‹Ðž­VɱÅ–ôÒ¶‡dSÑÐú–Ѥ ™),¶ä(jžOäphŠìqq÷ã÷‡DyËÀàR!,,ÈÏÛ#gp$Ed%T]œ™EW[ ioKQùâ1 8(³lá ¹ûîÑÊ*2³ê׆NñY²~ôˆCvð¸"e…A±¾ Å‘ÓäEºž5¸ ’&ºiX¬l¨U¶ kGöº¡¬lñ^©¨ìC½âÉî§¥ÕTã:YVÛtjá×côÂÿÇÞÕ%Çq#髸Ë€üó>/û0W )y­0)*Dzfý°ÇÚ ìÉ6³»IVw£ (ÐæØžYUȾüAæ—©•ÈlÿjÊ—z†Da³Þj»£áä5 ÷%½5ƒˆ‹œ½‡…äKÏ^—VSCJ .«ƒP]E "Ή$×|c©À”Z8šë9÷–Òݨ6¬=nÌ—òE_ &ç к[vÌÖAÍVsØ4VŒæ9²ÎŸGÖô-MR ­ú½Qõ„Ànƒ¦õ«¸¥ô”™ƒZp’ͺ®nt KhÛH²®ÕQ…’²£uág˼ߖV£mý,UùjϘ'VEnÂæ›{DXhH¼~‘&¨ ÖiÊQ‡íBØ=Èÿ; A#5‘ ¨]ú–_ÄI±R¿ }sr~÷Äc!üëöþƒþc뎎^×ýtÿø¯~útû|ûôÛ×»#fbQøRrÈ_\€þ!d´âðî¾¥ !“ƒˆ´†k»mLPê_¸åéŠQžƒÐ¢§´_xÖí¬…í«¼°zW•(¼{587ì» È/Có{1B®kT—¬¥š‘-™çºBSLî÷áÙý÷=êa²Á÷óêRu7SÆ\~Ù¬¤C–˜ËïºTG­S‘š¹€wð‘ûÄä&Lè…ºwHªA¡~Õ´A`*ZûÜ›»û/º·TXF¿þ¶ ¼ ïù¿¿~üÛõ)‹Àß²(ׇêcû~-F”àZ¸Ð1ÙÓRä¦>Ëœöá„pž5˜œu[•l!Dµ…¸l m­!G;[L¹ÇO¬_#ó,þœ?aS’Ú¨juœÚk(¶4 ásÌOÛ3€µ8vooMx‰‹þîÂÙâž84:žŽÒ{[Y•$¬.2°„Ù®˜?#›•0X÷¢n•tÕvÙH…€~°[eÒÏ0_îôGÄ—ýÅÏ÷'7v#ݰèGðóÒ'Ѝêí^&¡%ÕIö‹#½³êírÁ9ª”–ƒs~—¤h|Ðg™ìg2éCÒ§¦Ø 
–êè½Ï!<%ïíof8„³$}/©#µŸl÷g/ßî¥O–$}–gvÑígi_»D÷¥îâÃéÜÜ?Þý²®Dd(¾rjšZµ»weï{•è.K¬ÖÚ¶¬¡Ð2ÖŠE‰@©ˆµ)O¼õ*Ÿ.X«Ÿ˜Iã“ëb­œðiôÆÚ½œ÷Åp'XkšbHÀ]\úÎźܡX· C5;bòëî.Æ¢}y"W5maÍ#F´¦wÒ¸“þµn“#¢kZÌ )õ6þøÆªuQ¿Í`½a¤M|C!\=ö±»ÚS+R©°§©dOAÿ4;Í÷M\½ì)#ƈ׵§pšbOCf„ÅnÉ1¦tž—(5f”µ›Ñfä©ßÓúæ>ÈŒ6T†ZU5|ßožžsÈ~‰O32ß/ƶðO?=ÚÕÝ‘Õ\—-Ò½9‹›ÊËCJèI|è6·ŽD¼Ê ©îZÉ£³t@L¾„ÎÞ9GPŒv$;#q&À>à¬^$¢¢à•Á9 ID 130’yz}~|ÒïVaf'¾ü}Ù5^ñ¤mlUùµªŒƒí¬—«ð±kgÂà$Ûıs’»ÚìäžáÕ%Õ{´ÒùÓS\ß§´ðu†$q¥„*ÞóZJ‰5ÇwÁ ¬9»ÛìŠbš¢ùPôú¸X¼±€•U†„âEž] ‹A±ºRkÌJi/U`„Ñ.¿ 3HdZ£œZžPá\çqîØ¹)òýX·‘ÛäôZ«÷Á á †Ï0nÚî¿~¹Ït¤f~f˜ÇÐñ±¥{*RtTø9‰AyN\Û@ÚToP¼VŽ€ŠW€1Ë7ý&‚.m_­ …+£4®]R)û$˜¶K'!ÑÒÔÞ¾tûNíÝ#õÊ>ö¯œž\rѺá%;·ßޏÆ,sóêF«AûôåyUÆ{¡@Ê-ó4}J¬ž’þESÁN”zå¼÷ÊO1ÔcÙûå=øžµ¿‰¤²î>[Ÿ—H®Œ¬,0ZMÎŽre¡É%˜ ›óê‹Sª ŽYY¥S‹Cbh!sÀD‘€V»iÍF÷»×Â]¤(T p½þ\E€›g˜É§S€ëõqê’]÷ˆ#øþ±‹úDSò  IýuØO·w;1Ý/½²/_uìnW\Ö…•ýèô‚º )Ÿ_,F‰Ít ] s ª2ùú+Å-w^±Û‹a+x”´k›N@/ Õ/Öýå¢/qfÚ”1(^Q!ä'aΤӃ3S±)"UOèvÚÓØã¾“sÈ?ï4¥Ž/¥wµ§¾yUœ úØ7CþH½žR!ôÉþ©ÛDÂ@ úóçÛûçŸëLâ52y*IhÈäa²KG‚•è¹_ý"®JÚ™Æbp\¼Y ¨°YlNEÌ2ùÏVÛÅï´¯J쯌†˜p°ócbö¹k^]2Š÷HÙÐ’¡ëJ€ÎW*o‡v¤zØÅ æ$Šóê% µÃ}ÐÑ ÔìÑþ'¾=>Þ—îžZ9¬¹‹r‡´ízHDpÍ"äƒàoö’+Ì?þóuUÇCœúh-<Í̬п6Ót šŽÂ6»j ±›QY²«ºù„‚‹v•}Ê8¿ ¸‹]Õ¯æ‘7—,ücåÙ¦³¶Ÿ‹oÓ5Ë)pg+lJÁÜÝ™ ÈÈm1WÔ Q?­kefšƒÂ–b«÷lfþgÆ×¡·ôL¤Èú»67Öá}Ç& 5j)bÄŠ2\Ä ðÁl“ÄL>êpÑ›s³ } »£}w9c‹7yÓ³âM Îó; ’¸½I¢Þ³É+4%—ônƒBõ·ùòõÀïAv¹¯™­%»L“EyÉ5ÏB²GïZ(±Ð=a3=3ÔRßs É³¨œŠYŒœ–§íî²SŒf+«JpJÖßk®Uj[Þ»»¯qtÂÕÄèÎgY˜"‚‘*ÉÃIˆðz¤F 8u:Ÿœ¥‰Ô¨áÍs:IáVR£†_"5J‹xG”¤}‡ëI1"ÏMI]™¸%}Î-¹½Íðt£ŠÛ¸Q[?ähߺíû¶õ;ÖQVmî’Õ53†¡)O£z±Û­nõ ’ÉÇ6­ÊVWpßRpÙê²ùà˜OP¼¬¬ÂêFu•Ed>aþŒ,!pRcíœãZÂNPf“Wä$}ÂÜÈœä{IÌç¿|µhìéÞûîãëÑûøb#>^º²¼y½²\—¬¼ÀáÖO-<܇²;Ðé±Ò¸›÷½Ócù¶‰á÷pþ2Pý ÔÂ)Ø•>l1EÒ‚SgSMÃæÙgÖ˜"L6éªÂÑ~®Ù’)—r¦h¶²*2\m¸Ur¾:¬P[xùk€ãIðñšñ_ÝHÃÞ V͉uÿ¥¨Uýæ9La˜ª~ñ.©Ùsp›‹,MüE>‚þÓÈÕήvÌB‘)Ñ ÙJäË^q¤ ôB¯«)“µs€ÉÈ~®êäž±‰®'cùŽaûÜa'ˆã†Ke£E3k;w:{¼M×5ò›Ž$2bŦÃeûÇ>—}“[—Ûo.°ó>­°ÅMSaAd¸}´ §Yû(ê?ˆË¶÷D‡]k°R¥­áàóÔ4ŽhP÷d“¸XÒ5 ~%À7w·ÞþsË ÁH#-HÐr{dÍoLaë…y•¼:°î k+/á/‚HEüÅlKÀL8}ð—¼>Mœ\1Æ_³ÏÑ¡«žÔ‡W¹-÷uø»á¶¼Fê“õ¿Ž aBÒOK×îX?é©8±]»ÞÕ›o÷_î¾|~Z×È\ ¹ì¡ex‘KÂÉC>ö5²ë…¾¶UHSï ÷ÿi"ÇeøeÙYH¯‚êÑ¥_ ÊÕá÷”î Çq 
“†Év;®?i@út÷óç‡ÛRMáì'GT¤wSE›©ÛïxNVo»=â¡færªÎaׂÔXU@æÃŠŠÔ.>•Å5Ћïd7¾{-,ÏÏÈBAûü€TשgÀ×¾èßÄr;lܵI”Á7å˜õæò耾;¼"$†k£¯Àhç×äìÏ«³lÉÑSбpäÑñèY@¶Ô ÷3×Tt§ä2J þ5y­ÒƦÉ@£ HïlòZ>Ñ€L …Ñ3 `ÕªE´Êò˜ÎdÒ'Õ€ êxµS )ÀèTƒÕ@eS Ⱥ/yVe=Æ‚kÔ‘°õOêk¼T+žýó4ÝéIy´=¼Ð4|»¿ýZËL}ñ÷G0”vÂ㈱É!êKlœnpYÔ£ .Ëy‹—l{MBR<-§(¬9°xCÜ»g3ßd×#GaGDR¤xe/9’ £c ‹Óy ËNSºböy6üÎÔÓ®7õôï8ö‚Á“¿a/$ Ð¿DVȨÒPðÿöøi!)±¬‡ÅßuuŽ7ìA_N<—UTú̾‰kÑ'Bæµ#—źòË2=x¨xÛOmÌ}1 ’Œ.¢Ðƒf"Å<ýâ«È:à»ï*öÕÍ_²ß꺻Ÿ~øéûãÃ?>Þ?ÿðéÇÿž¦è,™½¢ü'íhÞ£l‚q£+M[™aŸ¦oÀ䕸ÅBWn±”1pžK±y±Õc,JT?qÿ‹ kL‡AâïgNÒÕwßïÖÝ3£Æ‚àÕêIÜ„÷Š xï"3pW¼?V¯l‹m…h)„Xêœ*@=J–êq&˜.¨î'cä G=PG/©EuÇÛP ꜜ±o9ÕèüpLç”i½”É…àNïÄ^ªCbßê<¦ª«0!NPž‡•Œ*ƒ›B"á$´ôþ. eàâBrÈ#uãuk‹wÎúžÂÆ‚ÚÏ(no ¨}ïªÆÛºÍºhÔöð±…Ä8:AÂæ¶É´ð9Å…ó“M³IPÌ™• Kù£Ãbóù£ÙjÊmê©}DôGvfþˆM]vÐU©j-P·-yK2Ê/eÚ~ßÐæãÝÏŸï~¹Q{T·®IŠ/:gÒ/F•~j ˜ÔoµS¬Y¯³iósÂ5‰#á°|6M~!›8š ¨Ãá´|—PR3½îpÚukØr8u‡Éèt€ÊÙK&&…P)UçCçÕª!uÜÆ°%òšMžÔj;Ú ë#(õ/#Òh¤6ÜE»bØù~vÊùüño_>}Ä;¼KñbøQ€C.QÒEI²€« þŒ>‚S“÷ëRòÛå)“(— ÞÑDÀ^ è‹€÷‹TЇµ gáîm1åÄ6öýRõ˜VäNue1&Á°i#Ä–¡‘¢8ëßœ'ÇÚ<¹WÏS\ TÞ;£â®ˆ”»ÇŸ­¬† \0 ûz—tð$7vZÁAŒçES¦‡ÀzâuùÁý»"¹\ŒÑÏSàb¹,}ÉQN\Ò\–>är’ÜòzÚr lr@ÿ2 µDÁû´$y ö½Ä°EæâŒeâÉ¡sLïŒ4äÛã§•L!àpAü ÅŸú“@‘ÉAÏ×è¨:ÞôŸ¾ÖÔµ6d~sIDªŠ^–>·\'% ¤ÑºŒ§uÑÝÝ–¯8çÏ´ìÕ'–rn‚K¥\¢I/뾿 §Ç½Râ)x8!?zÉiÁå—¯7êþ|ÿíàò?ë7œ>Ôx¤š‘~¿ÚàC´ Ô˜ŽöUs.ãªîº˜üu9Iêhº’ƒŒ$%9؈˪ ¤ºÙäç$7 À«V(¦ÎåòGÝk`÷Ay$;h)d"hÀ÷înøÏqؼW¾ìƒP„aç³W|3‘tb•&ç’bñÏ_ÒÄh¬ ë€À}nbp0ˆMw™9^`nt=Ô^0E¿Í}chÁŠà#znlÏùÆÍï9Ǹ¾íýE, P‹Pöæ8¥XáÌeÛg^eÕBlv/©…8öåÞ^±Â/ _ƒ"; J«P9o‹P '¤¾×Æ2CÆP/¡ÛÇ\ FvSÜ> î²ïZÜ?!{y9_m‡-„ ŸÄžÓQ ôÑKš¬ÖaÕþ¡`] ›¬mÇÚëŽéüBÔ´A#¨“hÀMvFÝÃGHÄ]C6³1\p\/ªˆÄ´-}™bþ]f2¶¸èð=V”ÜØ¼ù¾»<]—× i5F“lBÛÓ¹Á• dLúai=YñuËÜð®o‘jn^K¬Ùe+MfÂêRibI<éW9zIXG1qX[Ø20ŽÀ»Ï{gMû‚!† ´90w¥‘­êE”áU+§HtI»ÑÐ&W.b MôÞ ØV‚Ëׯ»0i!n8üȦÁ'7‰DNåêQ‰/£Ÿ–pCÃSγw¿Ê¡Gùh°67N|¼Ù;öSp‰!Öb‡WØÎ%ô­Sãv`!xO#ï!­y¯+ˆÔÃöí­;öˆp Ã\;´$‚ŦM…M6æþ¬°)sQDc+Ô7•Ì«Ÿ°ÃÅʦýR!egŒ¼¬¥\×dß´+Ò’on°Þk±¥¾"Èn¼-ù¶@ÌŸbœ@%¢¹vœ¡Rƒ¬È‹¥ûµ†l‹Él5åPÌ;?1Ø ž£4æü›‚1Û¾¤®z}Js¿4¬ Üru3…ÁX+)Åx޵È4E)RžPÃô"Öþ§µ»ÿÿ»·s̽9óîo2åh7ç¨{“õî/JvѵqïUAÁwÇE¯Ö÷]“nõáöû/ŸŸ¿Ýëù ñÐï_¿<ÿö:Åþéæù§RZåš&=€í²¯D’†!b^X¼žÁÕÎi‹¼zù¢¶#4ØŽ E€•„n9µ±—]žK{&œ·þb#_0ÖÓKô:–§S?G $Ú5ŽR—LæŒÈuI°÷þfÉ-M>`+qj5@ŒÔ«øþ-ŒvO—Ó«+©›‰ëoà@¦èb¡É˜ÇÜíÝN^ 
Eç÷ººŸŸž_£©·P¡^êVµvB œ Õ«×Fï­¼úÜÆíŽa”Š&Žm>C9†Ìõ¹Ì$Ò£Tm¡“®®f;´ð HÂýÚ1RÛR7Oâä¬ÄÍW„šr¨¡[Þ&Ù¾õ¹¨zxB`ôˆ˜®í EHƒ=!;Žç×@v2¢žA‚¿Xöú©“aHUá¼îõ¡Ü¥g­ï·w?ë±¹ÙK÷Õ“üpòçoWPë.FRº¬kÏ aÛ¹ÂÑ9ï}OÕq„Æ‘,» j9 ®+V±"‰ÔÙéã 5+¹ldÙ5ñ¢y±&™Õ¼hÛH7WÌÒËXìE_ ƒP2²ìòÔú3QõpÆMɺei•‘ÝìÝètƒÉÙe¬¬iÊxäóùØÔ—­xß1P}ŒÂEÕ··~ïUKÿÏÞÕõÆq+Ù¿"ìË}‰Ç$‹Udyƒ¼ìÈÅ^ìî{0‘G‰dhä8ù÷[Õ3Ò´f8Mv7ÙŠ³ À°ãYEžú`Õ©úi^)Ò?“þ,˜q?üòðøKŽb诶 …(RV^;7áLbN âîû 1ò‹|*„…[^6œˆäX¡‚rÈè)ŸEmgR ä=yÕˆŒtî¬N£Q ]áf»ØÚÓ”ý¹ÿ¦ZR æèÇc¤ÖcÏÁ{ÄŒÄ%ñ%u`eŒG-µ\ä­Ÿ€¶Ôn…þ‰z} ”Õ%ê¸Æ´éFŸ7Ýÿ\ë:ï×÷×›÷úLýy÷6©µ4ÁÜ È„a¥ÃÇ æ(/2¡v£ÑDl¯ƒ¢¢9Sø#]߬áÚðìâi!ªW„Zo½ Õ@a_q“¨}U»:fæYÑ)v"ºOX3¡Ú=h[(2ÅÙåîTZŠuTRîMX]4È$ÐmÜP²@ñ¸³’QWÞ­">3x~“øFrÖ•ŽMƒŒ+æk­„9¾5µÓ^úx>ëJõú–cþ4ŽêÕÉǰóç_Ø¥Øu¥t\ƒŽÓ£éç6j²öu¨óe}÷^þÕÓ ¾œÖÝÝ׫›ë§õî÷ûë£YÔŽ:ÃgÑ^2‹6ÝVæW!:‹h"LÝ€~ž¨}ñ¼ ' 4ŒÜHmŠŠü.üzü.‘#k¹Ž3GÑBýX ›ø]aÁ‘´SRR§3ûØÏ$,ž²ˆ~O»ù\ÅSÖ0nlmÑváºí&Íb:åÎ3šiµÒ¨•>wÛ$4Ê‚‰ÞfݶÙ NŠÚo6¦ÇÖwS0¶ìJ\4èõÜÚÞ7æÍ­•,ÈÕE‹…þ]­£àƒ›V(áŠC<Û…î¼[yrÁXÎ -9&vÚïý…†ç­øóº,+‚ئçj%ÄsÒõò‘VÌqÃÄûúÓíæ7±Û]aÀê—¸Oy}Þ==lå<|ÿæ£|\œÏ®r@þ÷Ã÷ï «—‡ÏW ²‘dÙ5õ°qÜ$Ç=ÝE‘S,ÿM:p±¬ß«,\b³/i/M,µ=‹€#Ç"Lü±=¿‚åv,&þØKmǵ,Åš ùÉJå8–¤ª BL{þi…ÄåöŠàLÖaH•ØeWƒ£R×,?‡#-l§¨ñÛ'eÏ>„hˆÐG›nÆ2Uëä'•q‚‹›Wü†ÔÎT¶D š0|­ÙÌv(C±C‰VBó×XâÃ| A” 4Ž+p'•UI‚x½(£÷dn˜ƒYyýžÁºZìPpµÃWÛ§hÿ÷º“koÂ’S¡â›L…ú+¿2/¿’„:ÔÎzk”boêÐOÄp™‹å-ÐÆ0 ¡à€ ö¦WŠtŸ¸<"âk¸hÞ7óX>!ííœ÷·2Ïë™F¾7Ð Ð[‹Ì“ßô`BÛ€WÆÙóJ» /ñ2a¦Âù¯æé¡–V0Öïg@C+GN\šOü‡-ùð7ü1ÒÍæmŸM{«èÈh¡AÉGÑ)vè»SÈf“ ²º@4Û§çBŸVâëy2™$19ç“ÄÝÎÅ«Oò·V”$Ž …§‚`ý׃þG’ž=ÔɱÀÅsdª)ÜOV„,þ±«poŠn‚±èçî8¸á n¿sÇÉy!Ç­•Tùè²< ¾0ÛRÉ^xÓ¸jwqÎÇÿèÅ rïÀN&¾)O:ÛðǯÑù“›kzX‰V½ô‰Ãjt¯ÞzœÓT˜40´0ñjÇ.ßýÆûëÇëÑ*N'Z=ûxÏéo_ Þ÷ç/F˜Þ³ÿÄ”*E9 n*uwLTA$îW EGbB†íZ—‡ërvÍ[ˆIG減Æ8¹8Qþ±–_ù1½o̪‚ÐÀ„ÅIéï€ì 㬓àÜ”£`ÐÈÖÝÿ6áÕ˜'è,g­¤a‘ÉÄœ»ãý³/0x,œM‹ãΊÜ[‰"‘”c¼œÚ ˆ;iˆ¨o@TŒÆ%ТGL̹&¥é¶äì‰ÉLŸ4UlI¸^œŒçÎôw×ë™îÍøŸÖ³7Ö(ôV(ôÓ)^÷ç¨AÁ“{ aEËzÍÅì>ɽîû‘š‹‘£q{Ýý qÅ3‚ÓÓ%]€Æ0­<Ñx– k,wOt&$á»k¥§˜‹V½€‡ø×yø†ý·³ÚÆQT)UÖdÈx¯qã¼ÃÆ%ð¶û'ï²e0$QB:œ¥ª¼èŠ*Е“fÞù¶ÚkЭ¨M uÐð›0ãéäʹêf>ìÖ£€“Ð4ÅMÒ´¾›Rk=Tã<;R-üÜkŸu$n®dƒƒçar³ƒÄR󌎩ŸÝ¢Ù¸°4|jpƒcv²%úS‘V“ylÝ.©';A{Õ3]¾Ig`#¹åiä(@¶«$è6Ubƒ>{×M:5z—võÔîéÂÆúúÜÔfΊ.P‚C„àþ`´­I›¥€é2s6‹œ¶kçm§ª…{"©b´dÕ‘‘¯i´J΃›Ò8ÇZDR#ø;¹/õ¢?EÇâ[ä’wQ  bö$ Mv0õ„Q§r\öC–À/ì¿ kmKåÆI 
œ°¥ŒqrŸbrd’'ÿÿ€’µš=5NÒêuñ©(^/GÀL’ÖÔõ°¬ÁÍ çv½)2ãdÆàH‰`ÉGÃ.!›$Œ¡<Îúdqok…¯$1*ýæÒi4dßü•Ý~Âé+‰aØrÉ4Z,ìÀùÊGn·ëŸt¼ûO·»§nð\󒳟Ø$qd|$! r&DkåÁr󞺠s˜¥/ñ7̨à ÀÆØ³-I#£+ctD7œ7ªáÑgJ~óB-=' ¸ÙbÀ@˜¯þô„©êϾ|ªÄZ²lƒÀvaÃ@¡u± ÈY°T–h* !ÎéQ/uÈBµ'8k§NšM5Ì®E ˜}„Haá ðëç«Ý³—ÏbšJº7˜¢dgeì,Þdu–àê”: œóÆçsð‰Ã<ˆ˜#’?ûûçsVD²à|Õ™ö4õÓÛkÀì-ˆ!þøÂ}l® –{ nyáp}ßmÚµT Â+\§H®F$o¢«‚?‹¥žW¢ ¦ÇrÁDÎçWcrØòQu‚l±ÉèÀ‡¥¯Xh]£®#¢0I’¢i ñ5œ5+gÃʬ¢Òr`ž<û?öVçý?7š–ÿþv÷ôv=õ54ÂÖ5O{˜¸Ÿ‹tšöˆÚ•Î-:üÚã2ù>þ6T Z×"«aQ(Ñ[û¶íúSZv"ÔQ¶Ë9pÜÒv¡õv’+¨Ýà´{H<õl˜ª5©’3b–‰b¶Á-PÚS|E3&Ë–Fè—5ch±µ9{J…x²eGÕ@æßW…¶4å¢ãå®KÛ'¡]ô¤¶6Þ/]ڨ˗ÿ…¡à ÷NR!ul“cÄs`|;Û´»~¼ýô´'m}ÊijŸ€¦•°È“1(›GMûtQ=¥j'g ³l6¤ãƒ²6 0ÙíÛG%Ÿ#« Û(­ù$TÎh6J¶¶û¦ûŰn¿X‘?oÅd†Z¦©wû›j°Á{¥f׈l$ú³dä pÑ[?­Œš­¶]¾u —Èøl—,a ä³°çM¸P'ý¼ÛJI\BÖVÙ…aσoÜ&«rv`n”-ƒ`!aþ f…«&3œ›^è7|u›ê(¶p A™¡•aýmzñŽˆ=Vªq Œà›æÞqR{‘uLà MôËeU±1VNB@vÙÊ7MG䓇®ˆ³Ù8G¹Ô錕U+{¿[1ѶnI1ƒ¥$O^°Œ.†9ÑÅÏÔEŽ¢ ÁÖh,¹MUÙ˜Cί\ÀÃ,ô“ç#¦­#‰x^) ŠøÃ?¯ükd÷ì«i‰~ôÈpFßV+!åC7P×´,QÜ®Um»×ÙÈ‘I’Ë)’ bg&J  ˆŠÖ;»0ˆ2·FQ‘³çѤ²º£`—`"óeLd0¢}ðÝ®¡ÎÈÔg?ò:¥%‚gOmç£z9»>};4zߪ$µ+YðnúT™Z"&™§ &{&§;~`ä‘ÕÂÓý¹  šN .O×L&&áô(ž pÚ­Z¼Rr~Y8%k¡ùp\°tÞ‘Þm™½Ióz8¬ê’†"0õlyÌ<È©ÐÐRŸ>ÄÏvˆ.x}ªÇÌ …q!¾hжxbìŠÐÈ‹V8Øj̙ՂÛýÑ‚¤{!”p&¸¬ûúܤ|‚·=Uy ”U“ޤä…á¡qŠ4j^ãŽE,3ÙÍ¢Y¾ßÛ¤ ì„8ÑT·XzÏWÆ‹Ÿ¶ ¡÷ãæÓÝÃïÛ“ù6Ý$HN)M Uï¶·?F}~1²Ãg ¹ úÆ)Ã#,x4ÎͰÎ[-îH0`÷/0®?o°ƒŒ!Ùg×Rî–mˆMj¾H[n<¨¢“³÷ç%ºe+ÁÞþ&_F`ïê2n„²Ž,ç‡ -ÕK¡~u³]Efn ÂgC–_·‡,Þx/ýù»»‡ë_Æ‘¡Û¶ ˆ ÔVÙ;-ÅäªR¬ÒÝb"Îb´ø¹œ¯–/¥ˆgûB«€ÑݪZˆKctp¾1F«œý¹›¬[A·­2;¤¹ÈM&(OOTG“Ëúöbïh–¾Osúu’rt£Ã`[ΧxÓIÎîvw¿þ´ûùáIÕôø ûÚeH©Æ|JÙªºc²'ë|zü¼©RïœUf탙ÂMªEZr¼a,ÏÝ(ùàþ¼’j=– åLÅ6?6.›4=VI˜ÈªC°ã&g£³!ÃhÙØ耽 Ù±˜ƒ.‘ †X8™´Ôˆa™]KýÇE¦K„#¢Ç™$r‹„·£¤ÃÚ;üû¸éw¹QÛý¯Ò|·]ßK45.Ç24&"+ð|·8¡ˆ ¸á5ú5±\F3ÚN€K+d3ÚÚQžM¨„t;zO"u2ÚŽ¢ 4Œ)à®qÿllÝò¢bNTµuв¢&Ë‹úêÈ%¾úJùq@ÐR™Î×Óa"A„m»®¿Ç‡ÏOáÎû=AÿËTö¶­Ðƒm²‚Áj¿hʪkP} -{ô¥‡ óÕŸOTaĦ¦Í… ­šQ 5º‘d{#Ol-ó¦·1ZkÏæ¢0€ÇlKg8Œm81o=©T0o†x+ñÇvaó-2ZFh(š–$û„{¯2åš?dæ_þx÷þæîáËÁÔ¼Ó_ï®Þl×Rù£'[rMï,Lé* $7ÎÍ.¼z9eí`Ó*´‚¤òaŒóð=fNŽV9ŠªJNÄ1ck¸êE.9'0!/ÅÅÁe7v"k»ËW1Ìc­Ï†9‚»ç\j‚kO‚uÂOVg·-æ@ëù×*åtáŽè‰˜ÙTŠ T}-¶¦è¹ØÚˆåÝ;M-Q[¥ÇæïP‡¹ñ'ÏP°Bc¶ÉNƒÊ¹ÇXöødâôǧiqASÕ†&µZQœ:pÛ¥É×k/ÆùknN´[b‡ã„úXçl@1$\£}kPX5ë²"“±®¤.ËÇ|œÉ>„£`*UeE‘—޲ü%YÉŠ,ÖžcØ.:•’Êl¬#?¯µ+‹ mgh¬TÍO˜ó¼°æ 
؈ÇJÛ ³Nü&ruUZ4h”5:7¡ÙR›Èõk›#ÇU Ë@Û¯„®†ý£)“’1†Àß’ ®ÓW´ló/ ZP»@©H½½V0iÝ¢ÑDOKÛ4Û½rÁS`ê–ÙС"üþ!Ñ×MWAEÑÆúÉÅy@°¦éËËó„¸áÂÃÿ5¯ZŒ›ªÀ™ú/ŽÀF«wlíÇL;èn¥¢ß²¿0þ™LÂq:K û»ô×ųŠ&žª|PÃvBÀ§cXÙáèÄëó½(õ{¾sLawÉ#À|t$ÌšB¹)bÔž*ØB¼UŒb¿ÇÙ´ÊÇ8=ÑM¡À5â¤{O£/-z1«ñF*fE˜?QLJ“?Qîã:Jµq¤.Ûì‹;*©îuÒ@¢>Cï¾c¹¯! ¾jd‰ Ï ]<ïàr#:hç:MKÔX±:ÝÏ Aå(ų½FY÷dÅïyÊ x7=íÞÚ#ih2|€)nÑšJšè6 \®Çø¥#¤ÑqȲ& AÈí Ž“qô‹,ª~ɪ]°aÔÔ¾xÐ:ŽÞ‹9Ѱ«|ÊÊ£Í;ªú‹E¹D áa4âçŠ7Ô£|¡5÷‘u `×ÞËŒÅ!füà¿ÞtˆìÂê<)“5Ô}ÂÕõƯè¤ÇòPüUè-;—­|øöÿ à FG\!E0¯>ËõíC$¶yxw+RAizüˆÄö껫Î:þ ¦LàúÛ§§»ßɽx¸ÿxuûñƒ¿ö××þ†àGÁ8ûÝëp{ÏÔf§‹ØâuŸ8L%‘qâ#þÃ×?Þmþ[äöýÃç×P~óä6mþqÿqóÛÍð¿_©"o7¿¯™q%w1W5*7hUm¿ÙÔœÙþfþ¦‹½ºÕEÉ!»ÞÜþºùxb’¼ ޏ/ ý&õÃÖŸ¹Ý‰Íû"'ôËæñêéçõýÕ‹@VÝîOHÙäìFOH©¢S×ò"J8Õ*UKrƒ¢ÜN ~øAÔ1çáò{½oï»ÿþ§^º·¡íä)GBÂáY*ñÜ¢`„U¤ÁÇí[O ¹ÝªÇ ´û}§‰tIôJ/Èa.â¤Á§Ç±ûÐdpÈkqÕ«‘ótLOÊ"eQÓ9 žlªTðÊxb,ÍŒì¥$k 5ÇI8b#oÓŠBÛE†‡Ä¢ˆÀ®;<äÒÚDq ÄîFž®Mùpý×8ñØÄ-üOã>ývÿáÛë‡í§õãFÐõãO›§ÿü¯¿_Íï Ü>||…-€òóvŸ¯¯7»Ý‡o›ùáÓgq|çÿ°_×wŸ7?ìbGWß}wu#¨úY7õü“:8¬ñ³¾“Ï'q/Ä ¯â4ÝàÈ'è„ó¿Ð÷Y§TÈïÜïÖ×Ý;õ¿÷iÔ›õÝnsÉ2ˆÉ¸ÿ¬lý?<ܼ]5$ç\Æp&Çu,8(c-tßikÑÛØß>=>è¡Ú[‰ƒºO .ˆVˆÁí™Ý¾I}äp N^Vr»©ÜÛ–5‹Õ aî„ÆUÞô1Í1a´§ bæÿýíþܘœùØžmIrÂ'}ì¿ðk,~]D*­¤²6ÎAª8…k¼—ãhÃ|œ¢BœB²+oŒ76ãתÏJâæ*›ªëí¬¨tU`aD+Z5­ J+œ”‰‘‚q³õÅöE\Ü œµ/Q¾Ëy½QÒÂô¶V ¸nYâ}…b2¢"Åå-E4 Õ»óçìî!Zw¡!¨p€S™}°°Ã#N ‚ ã Âð×{€Â>/œpîÃhÓ0üSgÙ‚‚ÃiMhíÆÈ%O±(L á"–‘çe§9.ç½TcO<9ÏÎøINµõô} ŽS} j˼Î)Yç,ƒë&qZpÌLhf\_npcdÇA}×ÝpÓØW’ï,½­•ܽ³b¢Ç\øónžâ¦Ô ¢Ò–xãÂl½¹Ž z—×[ˆ>º¬Þ.ïõ¶Vè(E:Åpi½…ICú"3èƒÛlÅሠ‡Ö#çXDq v¸Žw¿sòÉqQÇ­^8egPj³ŠŠ+p"\äæ¹èƒM:,ñ¬¸™bGõ¼ ñœð&Dü£•¼{ºùYæt}³Þȵ'pêG˜q>D½UôýgÂK´·¯üŠ /N¥Ä~³Î-´w~YS©s5åà#.àü¦:‡Û;¿“ØÎ\^œéòNZEßÑ a¾£;iƒî­,Kþem½™ÐRJ:¹Û1Œñn6Õ ˜àsc+F`ÞRÎØzclÈÎ ëí¬ÌÖ†@芉S‹´VbjO–ÞÇ”†¬ÈèÜ‚ÏÁØ·À®Æ%ð§(gç¢\ãõöð‰çãaãå"'‡cÖôÁ5¾ƒŠdÿÇÞµ47r#é¿¢ðe.+6‰| ÇáËÎe"fb7v÷:áЃ²+‰Ú–ä¶ÿý&H¶D‘(¢PPvÌømu‹ªÊD~ù!Ÿìs6h÷Ùý§¢<=¢{P>â¦#6¼9[ùæ°Ú9?w—0Že«ÆUÉÏ»Sœ’ )ÿ ¸'Uú&‡|Pé3¹:\xð8"WG(RtÒ”+¹ó2#*}í“ØñûJßÝϘUé‹H‹`ü^¡¦öÀ¬×wo‰Ð~]±ÁN@Š|ÿá«Ñ×cjfíF÷ßM–ÿKŒqJd8 ¨ ËnôC™M+0åŒqÛɈ©bf„qûP¬2IYã~“O‹¶u{jtÌÞ×$ê£ i½Âë´FÝó½>€f¾¼ (Áߟd;:ŽÜŽN·£çÑa@£ŠÞ'‘ÌÒ( ÷Ø4–T0vÝ4¶0|YÝ-/ÍÉ™•<}ÚÔ}~à硌\Ý”}€é:(cn?e!d[W/„œ%·VÀ»>"$p9?´Yíx v£€dËhÞDÔfç—K{…CMÖ¯‰‘ u/ë—蕳™æ6€²¸Û¸ÑWtÜ&™ñc 
gƒD?µª]¤ÃžYD”®-T«»Û«ßöÿ_W_þפúêÏ®ïo¶_^ÿ¡Ûei¬FÕgõØóØÆMSù¦–ç…]%Š×iàÈô§:ñŸÅ™ v4c¾ØÌ•fdi õMà"ùMß$Úd?ƒ=tpHzRÐO™ßÞ¡¸$æ™aëº0F ÛJç¾hãM¿/têyFB ìÒ€6p›Ú6áštMé)ž¾uÛU,Ý{ö]ñ=LY†ÁK*äà]·òiÇÆÓH/ЉÈôÅ24‡lÕÏŽ0šðñtP}>12ï·¾wH’Ø1›$1=­«áŽ&*CÔ¶G-Äð¾bÊhûï©H=VåÚ·ø”­—ÃçûËKɱø„yd[»‚1Ë”˜ s\ÿë ñ¾¸pì1²žÃ¬7gÑ;-â7¸€\Äoæl’jG~m\ŒixA91‚Ûíàà0;zË^]pÙåUj¬¿íòª‘sfÁöip§ãqàÐǨW»:~Ì,›m-ÉÓ§í/êòŒD=¶u‚Gu$®Ý ›}15äÔ¦ÉèK­+&1Íí•ÝI+H&äèO ÉàAûC²@äè±/þã¶[ÓÙÊ1ˆ¶˜a“Cƒžê ¡ ¤ªhª8ꙸ¿xÜ/sÞ̽º8¿|y¸¾[VÎãØNÃ:bêQ qÚH°²ˆb©)^ɗ㑊5X&®|ßýŽ@Ú€©¦Æciˆ¾óeŸU¢gÄRÏRhºl‡FíCƒùJ(‡=UÉÚ%D¡šš€>GM/v—›ÞÇÕuÝ.ÕÊjŒÿ¬CR(€ü‘)=ö•|”Å:³½ÈÜ u÷ÚwUŒªG»EÐÈî›4A®¦*x_‡¹YVª„î#j£Ù…ØIIú”¯œ—¶é:Š£¶4D`m):*Ѿڣ.Í[ æåãXëÓÕ—ÛÇ秺ÍÇA4LwjíÒ5!@!’]…Û’Ö­„Bgš²Ë£#)«w!wýß‘FøŒN‚ÓN Ÿi{Pwøùa=F ÓïÓrVµó8µt´"­; ÐS•¾CY[Ò’—?àê¿Y,¼-Ô¶w¸x¹¾}®R ¾8êuBÓ†ŸjíqÄÝ©Rj ¦^Dì¿#ÆmÓóGÁÔK®OcW$ÐÔsŒäO ¦Ø ¸‰™0 ¦^C~?\@fBè ôT"¹>\]t$=atÓ½¼Óå²moÞ†ž_ûéÓÍÝêëvNìùã—ÛÕ—ÛçßÖ"舮Mctf°„8%îŠ)9¸y» ¶%X£ÙúQ`íŠÅiÝk6Xû&ÅF`öóPŸ­é`:0¦˜UGkÉ5®J€Q©/ðÌ·;äô<Ûî¥æH/›Àà ¶®¬Ñûº¼üÙdÎe:ÕI¨ö kŸZ׆Ç$]Á\Â@#3 µÚ°9Rt-áZ<ˆêˆ™ÖDP†kÉ®2Û‘S#´° ãëR±p ³V‘¥·ãþSŠc¶P!®çð¤Û7F5屺ùÛ7+`££Ž#uiŠö`l…âï )z=òåóÞ·Û;›é•+»£Ãte”Q90Oi«ÃèŒØÔŠ5‘_Ë.iêyÄôc¿mO9ÍiÞjŽI¿ «Q›´w†ôîÄÐ8vŸÕjbfȶI{ïU|a€ú¦1d•qýÒÒ¼_ºŒMÎk|fŒ¼é­éF˜‡¦sí„é/ªð—½ë‰¿i¨\=þF3p±š‘OS|5 ‚F,pš°BnçŽ4Z,‘A,œ` °¨Â#åñ5-²c9ޝ@îô9:5{çÇÚG=2….øiøhOG½ºw+ãó«:µ›ôR&Ñ)±â@^Y#öXÛû&ª†šFƨÁY™É—V›%±eƒ¿obi©U£QíÓB*³ë¿ÙœÍ-å0Õ]ˆpSU›æìx\/±hh»º÷=*tT©ìÏÐmÅNÕ4iLâCØéÑ)ÅŸŽÏ0®ä³Ú—Ï ¹ «X˜¢$CÒf|vžD[2`Å]ÔƒØ\,†$Ä\²nG~°0ÚI81\ yé?‰ÍIÌR`50FçÌÇß9<Šújê;\zª\(tóèÒj3ùð`ðûÚ–º.á(]¡Z‘¦lÍŠ °ž1OWKŽ©Ab §û[†Õg ÜvdÓ‡£CâP¯½Cÿå&§,mvÑ{“†zyT©y×8Ô{ˆÕª.to\c;ÙÍóÒp×½õàì7dáj`穬ÒßHõÓß—i3Ðßn ”{~° æ<“c=?TïyvPGW APˆyÈÖAèƒõŸND=úAMÔØôðáûÞ¬Ò"õŸ™¾'!z¬ÓC]—¨’NÙX’bŒÄ„ÚeO°ìÒ;)aå=ÎS†¤o|!ŠÛ·d,v*Iv,ÕŽXƒÁ×°šükU7û¸?vÙ6½‚—B‚%Ÿ¾¶(¸Æ‹ŽÃLO @ÇÛZC{’pÌ¥þÁ(ÎlHÜŸDÕÂ¥š >¦úô¾cد«÷×V¹Ñè"O—}ÑÚwRÿDLýš†ÑÕ3ÓëÅÕÊsJàEjÔC)N?×´hMÁ‚ü΋WÙ4p…ëcÒº?©+Œ¬’(>ò‚5uûö4ʇ‹{;WË#duMLäbO#ñŒÐ¼²#®­ð#žVF磽_š9]^ï…i-e©¶ÝDårtG Œn}LÓP”¦6W> S RaW©<_^ÚÛ_®^žŸW·öïÕMúúÊôg朆A§ï¾5SL§¢Òü€ÙÉž4fX>p˜Ð¦þ´º[î•l¾ø°z¾½9¹y܇Ì;F1‘ƒ ÊÇÈ}›]wôÙYÉáÔ›([œ£ôÔ.Mc¬9GÁ.ñ 3È;{E3-È݋ь,JOrðóòânlÛ[aiÑQ–L2‰’'¤Ø@œ‡Zç¿yý£î½Âl ‘öVæ*‹f“¦:øú&iP¶ÓxçuØM:iÀþ>¹ÝÄÞë;“œÝaïZSöÎ 
Ù•BÂØ´eÍcãBofÛS=Ô}„œ]-93¶C‚,À·¥ûB}µw§OjF€šÝOÓB"=õÊÒÁ],¢šÊàŸÊ]q„)Í€!­òë®È/œd_vW䊗…$ Í·ó½½n w•N§ûñ©Ý•8íí®LÎx˜YkJ¼bpWá_îÊ¡‹Ô…ÓBÅ­ëúÒ”Zýò°X}ùéÓr½sêÿVOÅU€ßÕg%ikßè¢[¤í]Ï‘YäÕÝÅSY1ïÿð‡é£ì¾ ˜'DØI Mì®ô^C§üH¸cèˆÏ»¥Ñ":Šcni(~s{8æöLŠ+1S“Kš‘ù ¯'özØ»Ér-f`Ì\ÒLQF)"næ4›„Æî¯?¼wÅœ´Êƒ\Ú,k#2{@{öPv$¬]Hð#¢æÞdZF¯Ù&Á7¡4€„ôÔ¤`¼çÔà;׌l䌇E˜é•ÕÙ½ŸóD8¶-bnŒ­iÄ’9ºˆ¤s”ÌÒ~Ü'a°{LtÒ•«]<Þ.5-¬÷K}«·ºzyz^Ý›æV/vF¯íÃn7Ù–M·¦ý¡û‹Ÿ–ÛR­ßßB2ÓK>õhÍ'Ç4,,ÎAfÖ8½ç Ô]}Ég'©¶Ê®§ÓeüˆaÜ.ÓL¾㬚#v;l€â²@ ëèû¿e~ÆæÜÛQ:»ù²º?»\Ý=Ÿ]_î%¿xa"È_à‡¡Ø+’ÇY0töIYN|AR7ú´k @ m[e\+be•hGÌÊé_ñA aNÕúˆ Ü7I`Š4o– б‘GHáÐ*ð÷‹ôœWËOÿmóòô1•£]¨£stAA¡¹Ïf2Tµë™†ñ>ûþÙPoùåó÷ýËgsxQŒ˜ócE#g/ö€©Ëlb³ÉìîÖ$e§6•`½A>œýpöüëÃçï¯V÷_–Ÿ¿·cóÓòùóßÿã/gY“2±½oë•{Y€Ò%_Ý\à›ËË”Œ¹_]¿=ˆ³yz¹º²ûØçï·oùããËóçï;?Å/w/Ë7A!¦³»åÅÓòP¢ÁõÃÙùó—$¤~xOX6|:| ÅXE”©´dýöõ´ *¡üã̾ððtqµ>\ïȉ¤ ù½¹¸{ZùóÙÃKªÿquóJÍ8èq˜Þ75íè§|ûÑôÇæÅ97hvçÍþôøe•Ά&lÏ^œGy.`O»t`÷3¶ðþÛ˜pjuÿëŽ̓΂ ½s"IøÙöSÛ_˜@ 8jÁÿüúpˆù/¹ÌðáM0d!ÿ_ˆU…XƒØÄNCð< ›”'løRÅÔ a66éHl2^„`œ€ŠW\›4[I´óf#°)= {à±§µÚ@S#ó”aèžy51Å9@™ôÊl·*ñy–©ã2¯ãP&zñ'€™‰%)ï0Å3j®Lü±; bîJÞaÅ·º¾!·ý±?œ"oþع9@Ä&Ž(9ö†¯F¾…..ï–ÿeàó·Õêñ‘Òfùׇë察×Mx>K”ÿvyýö5àEFàE襌=’Åž·—ùSzسÛôP†™îW·‰3\/o.^îžwþ`]“¼Ã6‡Â,›C†I™¬t#Š:qÔÚqùL‹}jÖhÉù ¤£ŒÖ—Œ–íåSX¯ÒhÒÚžH¼úZÃ(ìdÇ™Œ"`ÿ{‹ß.˜>¸·`ZÂÇp¤í9º?LÛóZœì%ʬ›$õXr‘ S0P?|’ÈåËíÝõŒ "fwGäo¤ü,td˜Òúì8’Úr— "‡2k˜i±0ŒEÀÄD<‹€I1; dG@mÓ‰ÈÌ5€i9GT8„Þ€ir–ÜZ¸¤)v wKëøZ¦wÇñ¾ñ<> ªTId¦”Ð>] ê¼ë:gbxtçûEMŸ6}ܯ¿]9÷—†¥oˆhžAõ˜›åLò!ã)x¿èÕÑ­´·•sŽH\œë,þ/ÎMا‡è„ƒL¥ÿ#ÄÔЩ7Šî¡|u·ïTtj1æ* vdÒÆ§Ĉ‚PáÓŠb rÊ æÆ0úðFøº¶£ÎôäHN0õ቟%ié=Hss ½äØxEÙ[ÕòÀm[18j€PÐÚ9Õ£ñvP@ÑxÔ,?eÊ`DŸÆ;€´‚Ð×CÞ8=9œ[P<œm…ÛE+äÀh¿9½gâYÔQ j{Ž4`Î(ä´Ðܶƒ›ÆÙµäS33 |P‰vweÁYJD„>T£¹?9éžçuU·?=TN´²§uÓ%=7qJI 0GN× œ½Êy_:Mq3¦`±ŽÀÍPÎ J®†fWpÓ s€*ܤ”#–Y&°Å.€˜ΨvÐÊe‚?=^b¬Ú×1Ææ•Ç>xvór§¨uB—¯u"uTòÐV‹£vcÙ}³Þé•’†ÃzŒÌóB™v°KÎ! 
ФSoPÉ®¢Ù]JS·>›pXø’FàÏJ3¤;û¤4ƒxNÉÇ;TŠk™•Eï‹J9+•¯¼Cz˜dxO£¬,‚sQC{Ô€Žx–e pw„5÷˜GXôш']£"£ü$ Ï[£2 "óÌû¢„öë‰i o×é¹¹]ŽëÀK©{søç „ŽÓi"µÇ:HF s Sj÷Slɹa·¢=>sH®ã[À¡:&öâJ·º°Ë":."uÄ\ÑÛ›¨Z æµg¶Ë6bU=®¦YaNë~°Uäí¦N= ûmëk%h]ø<’ÿn²¼Ëf© SÏA0oEÊÒb$vFF-£(‘ó#¢X,v×mÁþ²¹y4 " Ûϲuõ¡[›e}¨1[ЍbÏ -·m·Ò¨Ž[{cM“jp`P—iýLœSP„{ž˜HOÍŒ¶\ûÕùÝíÍòê·«»åùýÅCjZ6!?_Ü­~ªZx¸D@ƒ¹Vœu;Uœ´ !‚Ó(Ú€Ma+Nç&]ü¤Ì”\EFÉ-ýÜW dO .uùW± v©…e–ítÉŸxAûgS—ØÉv_/€ßjѶ7Äíñõ·Ÿ>ÝÜ­¾&(µûäyúõÓÕÏËû‹óáå/piGëø“‹á»ÉšaÖ4e„¦”Cj¤¨´êþÂmH¼ì¤iq{ž År!%Ê6 ìH² ñòâI˜¤Æà…ÏkÖQé8õ‹˜¦¾ÆrÖÛS6%ž±÷ ½Ñ;p@s O€»·y‡êÛ¹õWaWF8ö%='Ô!jn>bpXl.V³±Í‚×cfrQé7鵨üef¢êuü°»ÍÉI¦5ë¶}ït2V®%5:R˜oÄèÚÎ7ÒQ¹Ûñ•.>ƒçb‡š×/Ý”Z8LE (>L«…«.yoER]?¥sy"p@(÷ËhÌs…75€ôØ>õáQlAšÅº'²Œ‘…Üæ\#G€Ðgê<0º¶uq4*4cìq4\œær2¨{¶ãâæ¹ é¬}³1¿ „¾Ó¾*-ˆ1Ì<È>¦Wì.픂çf<¨ ´–1󢆲Ò$hbtY"´#¡VA jš¬È™£P˜“+¶×S9¢ì¤ÁÇùΟ¶9*U+ ޏƒ*%ŠÎr‚)ÑÒsL->ª!Í$3©AãñSJŒÌB\ræ~+¦±S¡Ë †‰µ„[ B€åÚ€ÔX‹p»m+ÚŸFõ&žFhË.R ØGà0 l)Äî`ëØÇ<ØF§ñøÍ#´]Ñ›˜—×­Ç5d1bHµèÓ0 9ƒ¶'ï®lºQëùÈO>lQ´DÓe^FZ³á)Åê‘" RõLœ 9µ‚×tcyÓwj»à"›õì!Ûøú&“øEëV¬ë5±BîNfMÌþ0®—R'b~¤n×c7®óD›¦v``P‹˜j›æ ì HèR¿êDˆ‚œº~u½âüÛˆºŠ*ætQ—!qÊ SP»¨² ÕSWFˆ§eÕªS~ÄÈÅPÚ¶9•ù b;²h4AŒR¶Ý³* 3IšetˆÝ±ÓäÌÙæÓÔÿ“wuÉ‘Ü8ú*s+› €pLÌÓ¾ÌÞaBVËc…[R‡¤¶§¶Ø“-Y-eU±’L&Sn{#<1ÝR5‹@üÀäK½¯”ºV«V¹¤,ÛªUÏo}†{Á«ƒGG!­Ï|Óè¡¥22%G06!c±3ˆÅjmHo¡nQÏáÕ€.†ˆÓYc>59;L±Ø6E!Á±øhMÅÀκŸþ÷ê‹»ÉqƒKÌb³ùDÖ>>|º»¿{¿X13ûX}e fF‚$´ùtPJrO’„‹#A È9zhtÛu´Þ¯Jg¸ŸNH-OÿìÍ 1Šq ™© +ø G¿:J‘¯Ž],ò)d5ÄÛÑ*æ)ضlÌÛ¤"j—Ô‹\|“/3Î:8Zç{G¡\wÞ÷ašë4Ž.{~ütÛä…å®,éÄæ‚”¯¬ÕŒÆkÙaÖ ›¾Çµm/T}g£ªDÆZ;<§%BËM¶Ø„úÝ`Æ ÎÍ8å´/ ƒ/ = a/0œ\båtVÊÙñÙajÌ8ª¨Ã‘Ÿ¯°Éˆ{¤=Vq=•÷!¨3†[„@ý’† ‚^3Ee°ÃWýÑý¯zâÇo¹«·]MïÛº 1fÅ,áÃKJIÌ—ÄLc³œ˜ÍÈÕÅÈÓÀàÕίPc«!“Û +y6½R=»JêÚùßs)9•…oÆ£¿L‘LÇ—e‚)È„RÍeŸAgdé!T9ALþ(ƘIÕðØàùé¾Z¡ ^±h ]¶|Ó8.C± ”„ÚŒQÈ#8 ê›!I*¹Œ#·Ãr(05æš’g‡)#uòUu9‰2ç÷|‰mÖm„ï,àZz.(Û„dóP6ÄÊ¡lúÑyAþ¢H¡“’H€ËbÝÍNVDDP®Iâãg†ùÙ‘ÊkïDCWf/¾NKø´+ÞDü˜å6²ÏB$?ʤãÀH5Þa’Ûýµ9Vã\ÅçÇ™öÌÀ¶Hë¶-¯>îÈf;ê—®ž×¶ü­ƲM¢E^¤°I:ãI§hÄÆ¦ƒSÅ*´g’ý@±7—É~}eñ¿èY^]¦¹S59æ—>yõéñæ×UÉ$ʧãë¸Sa(°åÙB}1  ·íÛã>msY1ã è•` ëü_4Fˆ9ÿdF½«yU‚LÕyûƒè€7]lr~o³£dÆsàUãV37;âSß:TZ‘·ßQ³ä9Î*×öÖ–Ú•wCž~m´Ù.i§·éó{¬¼ P Å‹ŒQo"-_d=xÜÛôìd^¥‡¨‘¤x4†|¾FÖ«Œ2€·Æ«Ô ³†Ô71›Z’“^]>2ÕÐHÆL ÎùkÀr]…¢Ö q9õ0]lýêü0¯šNGTuüp0_cS$©r 
‚%®’ðÍC§%M¥]ý7$ =võß.<|©¡½S©?={üZפšã¢ºµ¾Þ„~Ó ”֛ؔŸTîú¼.«—Û5 CŠrÀÖZô»l6]ñ:KÀl†ø•0Ü.Ûtd†z·Ë¤"„°å>ª`¥½Ý.¥ò™ÓeNNI›wºl[]«%ëðé€ê+~ÚÔÁEV’ Þ¥M¬ÄØ¿fXÁ ü1ãORL—³rÅV,šÛ¢k5Ô©:#v¾[Ãe5ñºé^•{Ù¬‹y—áÔ„ ù2ŽWRõP¾É[Ÿ2ó*ÝËhNñ¦ ËÀû‡¼‡&ÇÚ×@7pLï‹BêZ01q‡ÌUªã›Åqp~“ã%MS-‘++¾›‘ö¹ªNôƒÆŸ1REàd…WÅÛžfùJŠ YÝ3& QÖ\vI^6½v´Læ30y°=‡F•94y@Ý-W¼v¨s<‰öS²ýF³“Õ¼vDØ©Îõ«Ø&Þò|›âU9ñûÄ«zqRR¦ìéTé~|úuø¦æNù?ýüë] Û«~¡ð JŒ"ɦû'ÒTQ‰fB)­W¡lzå뫪s!sÔÅçTá¶!Š_8E²©Š95»ÍzÐÈV(pc}Þ–G,+諒7¥3„LØl¿ÒÛPðÛPú‚ˆA¦c&lAü^”SNJhpãT(ææîq p-u’l˜„£ÆJ^ɼygZãTÄ"ˆ YáÆã á0ãb‰åxXŸ²9³ÓTÞ+K¿¨ð}ÔÅ2¯Ô=@þª_•O­RÍ µ³gY5K°ky=PÑG]Ïñz\ð]hÚK¡›dEëqHe…®jiqNÃAüSžñ€-=ºPˆ“RK‹æU:‚:ŒÖ ]«Ó{iŽaoNâå|¹qÕf'_HWkˆß·Å®jæUôk0ÇwÓPyæ•• Ô^öbK8r¡á]ÊQ"¦Ø ɘÉ7œCº¨Ùˆbéþ› X*qùþëY1eûêÞS‘nÑàMó¨ïj¾Ä6LH´QØõ˜URPRJ™ÄÒÿùIùgÓ2"½CkÀóU*7—üª÷·/Ow7+ç.%'»^=ñ ÙctjipýUtj3áù+Œ†#ÁuW¸xƒ%›1œÑ¤Ç8zÝt äX]NÐM·dÌ~‹1m.‰•e †z)˜VpÕEö±ÈVâœbž¬‹“=ùêwDˆ `-µÃ«K@Ë+2z,ylzE¾¿þ|#ý 2ÿ<»Å/Oi~¼º¹n»Èœ­ã`LâJ¶X¬8<âb:g¢šÏ>%¿Ñ¥Îk¶Ù$ÍP;V½õ Fô‚·NE›–à“ÚŸ™Éü-ükú“1_iöÊü߯ÕýS¥¤zèoæ,ýóài|ókç†ôýùËÓ׻Ѥ>«B»:ˆýÕ’õ¨Óèªê?¸2ûñËøf2š×<x•ÜR;’ú þ¼Dðªz¬P$…VÈ¡q Ïôg¦ADÒ(R·Ó@—ˆþØtý~ýéƒþÏξžûYá¿ýüñúåúùëÃÍ|ô%ç=K¾¢¿|«/àØÚ~3-M€Ê‚œœ!ëo6¾©¶S)åÝ8—²¤‰™ ÇxN$?ïçõh5%7Æ@N‰)Uêáñ»É¦—ÒÉcŒû¦ñ&::Ì ¥PNø( Òòë×—N=¢wÀ¥;”Mè?#ÒïôçóJÑ LÁ:˜ºU_6C­‹@«ÁéV}׬:•;Qi7¬“ †CèTr{Ýi ß ¸«‚î£ ûUW·O?þýŸÿ•1-áo64Ïf™(em»W(¼ÒʪˆÞ¤À+í^þóðãß_%nâÆø×›ûà¿ô'ÆÏ©æáG.ÈLæ_¾I–åàüß+wÿqü0­Úy^²3Ês[ |Q}-«£ÞVØé¬Ï¹`!.š™h3L -:ü‡ƒŸ-ö};Y•™ ŽbRc/GÕ¾³E²°Vv-Æ 8¡òq½Ûç}«L&ú3úœyâ´hžô3=aS¥Æ:1-Z§ïRWì©Ô@‚r”€Á·•ý§ÌP‡tÞÂn5Jš¦p/9œÉ‹g(j‚ÃÔsÔä×ÓT¼ÃX7þ5ùmMñèÒ`O×é½%ñ;L6Lö¢I{N6|^~Ôžôå§ç›§»ÏSÚk¹W³¼À¶rÚõ PKdžJE\[á]Aë…¾ñ BW÷‹g}‡4¬LTr¼™(*© ³]!¯Äë2WQwíÇA¬Õ!lŸ+/'…Èý}#sȸzâà“K…–ôÍ¿S—õ`k”êÙQ*pC Á²-žâކ ߨz­Çx9×S£Ä”êu¾c³ ÎyjB1æ"c—áå¬C=Ù· Ev6l¯d$ôC¸8÷í@éü¨Æ7Rv1vÐGÿÎF‚ƒÛÛH™!7|×Åg+oÑõÅ?×Ù8|êhOÙÀþT08ddu®Ò;„ Ç}Qö™·ÙmŸ¿P¯DòdŸvÕïÔ2ü£‹Ñ 5zýkÈÕV2ï&b à¥wqú1,êk”l1Ï+izÔòئcbB|guM'@ýÓRFe^T;™jeXvêº:õ$uØýj·~­zØ‘±ÉyÜÁ-‡hp#vÔµV“üøÛÃðøôï·cÅÐÏwO·¿ë9J†oá_~Çžwr-ª úG¿|58Ïu|í%Ònó®M¢$°÷%ïšÁF­”´urY˜û¹ºx×¶k‡^ä}ÕurD;{×JfÊÝGêEĘõ®ƒPO--ÐÙ¹~?­r‘ý=±ÛÄ~HÔ߃BÆøuÆ-³dÝb; ¹Ö1´B÷‡ZjâÀæ[ʾ­t ö`%¶˜ˆQ>Å9* ºZ…-«))&öR€,6ÛŒª=\zÝ6˜m¯-ÏŸö†¢çMJâ4IÛ)ÌNTIü•0Z&‚©â1mºÇä¤e.Xàä£êÞï 
¤%V«ˆÕAÓëÉÊ•‹·%‹©9£IŸÀZÙ¯¹…$!Á6S}X÷‰¿‚5;ð>i‘# _­‰ã‹!T¿A^^c· ŒÕMvÛ 3¥ÃlÃN¥ Úó! $¯Ë…,Ð{[¤f²—|€PŠÔÈS”²&ÎÖ×ÌhØ%T³m;Nâ×(õ†¹y*Îa‰½ êŒÌD™HMO !$Âl¤¦Ad×<ˆ‡ýò €þ¹(*¯p›M@Úù±Um¯§˜{lM G:½Gê \@®æU½²ÃWú†—Ø(6ºSâ6Š#ÞÁ´GTMᎦýóã§»›¯Ãa؀ݎCèöÓõóí¨J?Þß=~=~ø®xÕšÖÜËôÙ[6ýâ} Ì}t^C>Y“·±dÁhãÇ6×ÀdW|r\r лCbbÉ55ØYŒÝ7wq lÛòš¤[âãÞ¯¸èDr…tzbp†Tœ5úÖÏÅØÙ5ø¾õבay)m™÷HçIôžÔ9Žïü¨£ZøéúY©vóòå©È¡ŠvzãíÂ:‚= $U“BL‚}Ë^ŽhüËíõ§—_ê^ÚJ–¹LÉ ËLMóhí) ¡l™3§_|?[e#a†Ÿ-%©:z%ê@ ̆ϳÓv±‘8vq\ñ~Ö‡×-sÕ` ×áAõTG-¸\5 jc jôjÒ!»v©ž\–|F¾>5¨Q…GE¾§èT(Ü„{»WæFMòyâ_¡“Já¤Õ»<E¢A:R×T9l¹ õ/ò¬dcËS»Žª„Áð'rUÂÑK>árÙ™:Ã}{IR͛Ȋуœ_uI.Ä$£l‘ G¾CòÒ£ S{¦M nàùóõÍjòͧÇ/¯>º~Q•y5Rw%ºŸøv‚í¬Æ²Ð’è½/8‡°vžl-‘úå-•÷lFkÚ‚KOJ/‡Yš7‚tI[Ú“•3Í`æ_Rë~ 0˜—è ¢Á´µn¸ãJî“nÍô4‰ã¬ž¶"9ƒÁ]öŸ#tM)rÕ  :À V)ì Ú&ÇZÐqB›ØÊZ[S”ï §QÜœFÐûÑëÞš½Òq ßR~޽ XÆÐ4“™/›Î'’Oªž§ÂtéhÓžiÑ¥œ›š™Ÿ¦ 4c»Ša*7ý!·Æ¶ù²†àb-â{7Ai OÔwAvv0{n ôò¢ún©B*<‡X” HÙ÷ÝÙÑ*€Èl[Á`‘jGÌÖ1®¤Æt‰°sýÓDÇÌÜ:;2¹oÓxRjÆ;3OzÁ«Á.*`^aAf†ìùúþó§ÛçSàK·ô²ùKgPfÞa8B3ûÇEÛÄÂ&ÉCêŸ)nÄås‡1±;…?zÄc_äþúé×Ûon•÷÷_î^¾¾:"ÏW/?ÿ†2¨ûÝü|£îõ5ÿì>®B;3ÊúÙqS4” ­®£íDÁ^Ñ‘‰h8¾hÔ§f¢’QÐxóÑÑ+½:DG¶mÖkhÑèruSÿ¢yò2 ¼„Ýæ»Ü¨—<½ x½Ó„“‡_ÝÜ>½¬ºª »*N´³É6)Ï MŒŽçƒþW°]Sg©êPã ê˜²£ ßU+c‹Wí=1­~£j»½”°ÝrÑ{¬ðÌÉ—=s½')Û¹ôFžJض4ª“ôÞJ8ÙÝs'—2ž»™£û6ImA ľoK¾F 0Vëvsp‰³Áš¶pölP×8Šú8ã/ß3¹ú:“p%Ô’ ûR™vÈ¿H²!ƒê•íܤÿ¤ üùeZô[!ìÎãX¶ßü ~nŸN“P;ù˶|KÀ˜ÑôÉg¿Êe·8Âî˜jF.Z0Ïf2JLE4dkߨÐ#ŒÐ]']xë"©âX08ñ´T¡åÒtËÀ™fÀè|Y8°–…ƒsƒvfä邎 æœX¥£«lÔhgÁ½»ƒç-xÝà"Q€û÷(pˆ\s´­ÀáÈ_äF6ñídøs×ý€D@ ×¥Dä ÅSËdw ®C’ø}O´$}ˆRc=•©¨ #gñcfdèa=M8Á`!WiÈ7 =ì®!#GʨH›ÞiÝi¥øà/«+“ÀAÜÂÁy‡‰]Þ 6u€ùR ËuËÓ‡¶µ|I;Êz4áIËW-¬W )¨ziËz,4¨(¶¥Åø.&/e?Ó§àŠ¹Œ„Þça¸^©ÐA—‡⪠¤Ç%ÄwV£Jg€ó.ã”úê˜|ÞÓ$î ‚™Ü·F`|ÞÛkŸÑ›ÕÀE¾&a¤m|e×ß mJ'¿{'îÉTîW£ôáúËÇ»u) V­ŠÖzTý„–ì1éHà{L-?¡O/¯Ô8®Î"²’‹©á Ñ½/ªÓ8îYjø=Ô© j¨Ç4ìuéRäÃaHJy‡K÷ž˜†#Á$Dܦèhï<„ ™t° ™2\€.<•`ß^Påþ¯+*kÚföUèNñ-}ˆlò'€¤´û+щ åÄ®‚e4å!)g4é’Ø ÝÆ¸ª$Óê1öĮ:ÖiïÄ®Òr‰Ý¨¿>L{^àÜÿ2eÆϧ†y?C;ýð`³b„¿#4pcɵ^«§'e¡ Öa~Ñp}ó‹þxúëaƒí™É/´ÚVñ«¨‘C8+½Ydô6Ëk_°ïfwSç*y–ä ¡ª¯¹ Îƒ ØÍzÄoíT§£¡›‹kŠ%»\Ú_;Œ15%ƒ6Ô´ôNëbß‘k¾®^oE×î{j¢œ0„ÁYN3BlööÆ%b“nQ§"z‚€m S)Ó0EÿÇÞµ-Çq#Ù_qì˼ K@yóÃüÁ~Mql…EÓ!Jžðßofu“¬f£(Š–c666ÖKI`!/™@æÉsöšˆfI½äªc+ä|½ÿ°Y(¶Æ,vSo˜Bü59žôK,±m07Ú1Ê*ÐÚ:oÌ‹%o2âž°ŸcŽÁiÌ ÏcáÃñOûáçHé¹OäµËòÒt. 
E#4Ç&# R5B¢âé²ä%ûl3Êš¯¯íÛr€ìô7ìÈ™6±‹]#®h»»¨ÀB?^,¨•&>õ©A­)_¿F;ì;J™•ü¸±†¦; .&²T(`»Ò, I*¨ÝœãóoÛ‡ÜMœ$xÃ÷{_“}z¸ýùþã§ŸíŸ=|úòÅäþk§Qi=ˆ›ÔXó=_{0<‹zó[„äwEѯ¥À+ʽù²€â³-jLÕÞëäjÎ|ÝÝ]ÔI‹Lï²üÇü•?<Ù‘âÓo7vÀ}ùó|5¤ûòh!@’ìo:+Pç0` $Ù÷åñ Üxžø–Õ2“Zb€a,ÞøYÌßíi$Çgl•zÈzQç®­v¿ûíÚ“kïÇÏß,û|ûéá„ÂÃ|s,ÊýóÃÝ—»7?º±ôî‹ý`Ýc§ªôk¤õßR<·1žÆ¹`"®½'ÄQºQÊUl‡LLRÅöœJUî ‰‘üÔQÝ¿2±­ÍkP=‘¥ý×=ónHÆ7ï¥)Eä]ýµz7òáµ-ªqÂýú÷ oÒlƒãsê©‹äK¬k[VzÔqmòT‡.¶„€³É:•ÔC@‘„© \¹ooèß:Ok“÷ …=ÅF ¾™ÓàrùóàÈŽ—ûÛåiôááÛÓ×›Ÿo¿þ²¶b>KÜÕw5w´& Ÿ³È:Fúºl†Å¦jg¢L\w2 ª>¦E†«W1ôœÄnŽSLïî\ºƒs™üî á»W^êL>üåUî–Ô²ÊÝÍ~Ú5g•Å×2¢ ðº¸ÆEÃf pC4lKT£aå‹Í…lºÂa™Ì‡C{Eà '„¸Q¸†‰2Ç] ÍlîOBæêæ`?¯/"o~þñÖßSÖ•†°«3Fî©ÎåœÈ^·n’Û¸ƒÑ,Ä cÃÁ³jÕ-c±äl!¤®£1xXd«k¼’È Ì7=@Ø ºÃL>°!æ‡÷§ï?¼Jöèšfñºe]w?¢D=L9$Ÿ:†¸ÿD<£œÌžAPnùѧyÔœŒ¨œâ½Ê¢ËÉxJÑi÷W9Ûÿ†MW·N2´÷ý¾‰• …ÙÁŠÍõ]zØÛ ³!ªl%é?óò²úÌiÃ|õØï·~Á =¯{ªA“‚Âæ·õÜú¶qÊŠ±î…¯7ü6Š‹µ<®C˜T‚ЊÇuR"É*"l‰Ø3‘l&ÏœBW…•ÆB…•žk*{…hCT‚q䊪l¯\ K›©X):ej ²,°Z®°©¾*ëJ1ù‡KV Ñ&u‹HÀ®ËZˆÛ½[½—ÃʸÅ}!ÕÜ×vÅPu±µ÷%$°ÀŠÐÔÀE•hK¶âøc)ª¤´zªó Áã²,îù‡?yüvr¥ðZEW8•^ÿ°/¾ÂsÓ@KÚ!Û!ßT©¢0Ù¢B‘2èEx#Já ä@ :š-‡&M$[šÛhû@I»ºPsˆêmõxKÐæñ%E­QÐÓ$)¾êñ‡Ç2ÛðËΞՂ=œÖÿ,¬p¤?[ÃdÁ¤CkØSÕ‚=ªö¨>ùûÿjbÔ¹NöönÞÚž¡âæó§ßßýy÷ùþåžññóÃ럮Ðn_g–dgGn°$ˆª–$¹hI Á@ûlTöÿ› „EG–‚JWl˜œÅ†æTE]椔´M—RÕå±õæ¼­æe7õà-wF3=……Å›¢CH“r’w†³óžÔ.0úhkB>éÉöPÈXœm¡j; ¸H<¹À  ‘¨ôÎ8`‰Mù(7ýŽñŸÌ?LOóÝ<ýi[}øpø??¾ÅŸMLJ~~¼ûÕ¹†/÷üs¼±ã'üë‡]<µìв l2{Oè>l™“À¾Eƒ¯ãK¶q7^H)~üùîþyú—Ç?>}l­ß½ÚgÙ¤’†Dz ˜f"|œ)zãîb¼€‡+gÖÙé$“ååmKHÕÓIsìJóðȬ°uÅ-ßb\wz pcqÿÓKŒoN/™Ôë2Ї— ž‰¤4º£r<Š\R3SܦfŸ=®QÑßBpßï —Ñ-þûd¬y{[Pº,~¶ ‹Òd¶C­£X%­n€ï”XßÈ9»Y0j6ä%WóF.½¶,Ä3ý«Åq ³äÀJ›\“a÷ü#ãùß²"rÖ ï•¥ÕCçÓAA©™P3o‰Kª•H¤¸IµwhJ7…9 “þE4¿¯s²OKÓ×uÖ s¿èëˆ+±ïZXCV”qt±—Ä5 nÝrýqÂ`@hm%oè’¶öÑl‡.ò´C¦¸Õ%yo°UŸÍ|¶¶c‹µŸé /ƒ­ÈТcjš œ(„”±×°á¢V ÐÑMZŸÃ8­˜§œs‚÷ o¯>z¾æ?ž^ÝÜýr÷ë9õïfÉO«€˜.jF †cÿhæy‰ÜÓPÌŒd¡Òn/ÈÍÂÓ³5|UçÊ< ªO)‚Xª=XˆnÄÔ.ÿê8®º—¨ÚMÝ£õóïÔ.æsœÎ'Ìvžò…{‰8žchâô¦Œ}±ð8D¹ l¦˜X6(-WÔâdšTh&2Þ‘ f%aè•BȻۯ·Ÿî»Á€ Ô¯¢*ŽÛÐEÈURØ›‡u…XGÆÝÇzàmb ‚  ›ˆcyã« GDÞÙ’þ0S$5:Šd‚7ù8ãÞ€îb.LªuEE‹âjS%ÁXd—¶v¿Ø~é¼3ÚÔѧäYLÝlpó)u¾JÁBÈ8dôÕ¢-åëýgs#/ôÙŒk.–)1Æ ¯É$HW‡³åVîYXf(ˆødó”[yìÓ2³;X6X…7Øö õ;ËÀû)º*Ÿ Pg¥nQœ| 9W™Ž|@—éCž´ôŒ¸ØK½ðI` f1ž´ .VØÖ‚èDd‚Y«ÌçmY<,Ùd𻦋¥(9ÑFÞÛo_1ßùt7‹áÃï_ 
ÌÈK±8Çbéœó_H+Wr˜Y‘4n¿«™Œ&4Ðj ¥3L“޳cPß³Ly_íu LÅúô´Iš ©œ%º2š ˜M9éJ‡„äÂUF€”u¥ 4ÄТ©ÑMx8ËÈ~éÈõßÂz£§tn$u®Iã U‹‚óì6jî™Jç:y®L³ƒ>ÄArœÒ( …äÚ~©è®€¤&¤žÀcf0¬§nºÖƵ†Ôñp0'CQü,°ªÎ‰ç0c~t—J~·¼_ïvÉTEôåÌŠždjÐ9xWŒŽËˆ&éÑ$£î—@dÁ0$ÁlH¼z#dö¯}‰$de~”ôd“()Þá¹ñ9N¿9xkZ"¤+ŒÜÉ–&@Ϫöœ<ó~†!Ÿ6tF­“ù*E¯®ù7wU$ž+J½*sÜœ]Ó§Ýe‘rÝiìLEpéòá¹çT ã€MÐROmôÓo‹þ}œæ±mŸ;$ÅÍ'=—°5¹…O $åó5ï–øJwK)°H1;ç¬Õpª >ˆT\cpN£µ¾ŠóíÏ÷«ÿPyþûfóðôÃÿTYýíÓÝê¸õ›á¯7¶×«»ÃgötÇ`§bM.WÌ4½ý»&'— ^æûxØ›u<”*çrµþ¼º;Ú  ‡"«¿vÚù—Ä#ú7럲ުöÿ®æè÷ÕãÍÓ‡ÛO7¯ôèv/¼A(tlàÿçTõaF)°Æ‡š¥SâƒC üÏýäÓöv¹cÝ1÷ŠVôý/·÷ÛÕ9ðö½ùô{0~Úüòjß'd±㙲"¡g²9‘°ý½P‡7ûþáq³\m·{(ïµûH(ÐuÞ[WÙ»ŸäÀLãÛq•TánO,&Ï-:4Ü~\==®w ìôêº ˆIu›%{[©@¾_Ùz‘ß’Ï€"-Ö±"SQâàÑ“«^´{µÏ!Pç•¶óÎÇ??r51Üúm%ñb9®/–ÙÙ滋P “ÞÖè¤Ã¸!Á³i6’¹„r­5J ÇÝ¡ÄV÷>ÇEEõû«¢“›°™ZìŠÂm /×TòV=Ц^HˆbiMB¢”‚UeïXr•Ûr»¾{\Ç=nï†âP÷S°%à>ÒÆ`Ú«iFƒ—…(RÙ$­û€Œßïyór°íÍ/›ê»©=û¸yüÒ{qO*hGá<²Œ‘ˆw\è'HŽ9_#9=Hš5'ÅÜêÞz¹£H_QüúÏÍ<c: †ÑæeÂÇU™ðqÊH²çþ@ ï÷VK U¹8è1‘!˜Qâ€ê¶kà2M*RÛê¯'8zôZ¯íjTV?+%N·ï¶_ô£ï_ÿþ`vÞ¿mìY,?¬–¿-”9›µÊÒbùx·ˆ°\Û´šàNHã°|Ȩ(ãˆòÒb“»±¬=%Û0¢ym',l˜",ˆ¦"Ìs¬'e'0W_ãýFôa³}RiXnô“/­Ã—iLG b‚˃ˆ÷ɨJUaˆUgÄ™ò¥Ò9£‘»G[!ÒAÐP¥Â¤Ä®Q=màºKä€àÔ¯4Ðos.t&øL…Õþ]“Üû¤€ôLñ S@ÃGLJ9Ã2µ8ÔJªL>0Ãä —f€Ìn¢ #)Óž¿(Â)ˆ¼YIHOåÀì64ÔÝË!ÿîf†•K`\6 nÖÅ¢ÉýÙ/V£ÿÇã\ÓaÖÍ>é´þ´þx{?®Ú@ùýÝlФ°¦MÁ`PTìØ¹Xí¨Xgm9¡¡NuAˆ 1;£¡QÆS9»!ÅZ¤ôÔVí †«Zß(/P1zžXÁ ™eÎtÑéG‹Ãˆ©VâbCL–FÚdÅŲ޳âb“ò2 X‹o<5x4xmqšÌY/ƒ1ß̦‚„ vK3 ¨¿Š¿()/}@‹‚Omâòƒ« טAAôM2ˆos@gP%$‚L\P†!ä%B¿H $"9OxH•"qÓàµEÂÕ,ƒTW.°±ëÌé ‡ÇõçõýêWõ$ãÎÛÛÅýíÏ«ûÅö˧eCáaŠ.¹+ ã%+;ê'eç@½&W‡Ô©S\ê†í¢Áý¦Úº»ý#älsÜg÷ÓþOQ~‚‘Wùùývý¤op£ovãþ¿õQóKÍÍ0ºø‹~þôøe½‹3¶J¢E,ÖJó¸C"86ú›:Ž mvW¤¨ß°ˆµ?›çÝ ³]Ì‘¦u1¡f¤öênÿ_“[—]w–¯M„øD"„N¤ÙƒP,åÆñ)ô5H—!ý»¦:Íï’O„x0]|†‰á#&%BÀÛX½^šÙ½•ì ÎO‚oÜÆÑhàÅÛÉyP˜ñÄê6 Ì 2‚É Ø$¾ _­ m_ñüÅýÏF6¦¿ônÖFí=­œ6j{’˜¯—kÔVÅ(©Hü¯?>Ö#žÌ µ D+)IÄdµ÷róñáöqõþ}é_WOïÿñoÿz“LÒ¼f鿇uÿÐ>¹¿ºÝêw|¶Ýr³yT»¶“‡î7‰Oùïïn>nîr®Îš½ùñfû¼Œâôþ‡þY?=ýg†œÃ™žéŸÓ‡–³[©JÅtxØ ¶CòiMÏ‘“FfZ UËÒ4¼ôÍËÒš*wçy™|àUEm‡ñë'Ip2€ Û'ØÙÑV#àÜÛ¶ÿMn—‚¡Ÿæ6ô@€ë/:êÊŸ«È4,;6þ+‘¤Ü÷Éa±Ë5¹Ë9ÜEFdÇzh[Ÿ ØfNDVª`Úú]ɨìOÍ©]‚¡ŸÉŽãX4Ъõtª~©uoRµ8ËFQ1dXiI·f£Åî[Ëâë3+¡»âÐjÇŽMÅÍÙRÿR¸:·:Ç_ÛMÆ°Š¿Ãø•.!¹ ÙÇýìóv¯Vg!€žÖ‰í €l¾û]c’꥟ìeg¨œ*šµ§Ê@´¤ ©FyÌÀœU4†¨™,—\fØw•Ÿä¨ºŒXÜlâ‹fzZ|æe$­Sˆ©§<ï5ö¼±u4ÖyV 
C"h)-·š…KaJ‘m°ªWÑçO¯u­…³3 0/¸¶‰×Ce¯7¼Jo¬]…)‚x¹Ø7µ»¹£<žÝóÕ*šxõD 8ª¯Í;ºDýõ¢ù{V¢Y3Èw7Óä! hã<¤2Ë5h¾8™—l'ÿ3QXg‘IL„B“@$"·J RÄžÁj2•üÜ:îxKZX²²Ñs9e¤ S×kIŠ‚²ð¦KbŒPÒc[á¢ÞOã„‚QZ÷æõÝ'öm£Ãƒfçv«þˆþ‡wïo®õ>?¾üÇ«Ïw×m¨†>¿±Žú—qþ„†I=Š¨Ö Ò}X§Æ%ŠRM“ ƒÄM\TM$1„ÞQòªy O`Pˆ¢jOÁ 9_WCÚ6džéŠ)1»Ió·_€q)bޝ¿¥;AÃRÄ~›pޱ„Þ³[ÇX9j™bc)N<wnicçœ_o­Ùd1hó4®3¯Nûê'=ðËp¯­œÏûÉ_6²67ׯ8Ñ¢ÑÆvRl˜‘U±Ã׃²‘մ§’‘}*ÿûÈÓccõ”Ió&i w¨"C²o`c)e&Df¾h±`c)µ±PUÒ×,¸ÚÆ®° [j9pêˆÆ$ɈÛC)[,qô®±ûáçú;Ú+~½¾‰=—yš0EáR>Œ"‹f²Mì òucþÚQ'‰-¶ jì§YÞ*[@ã½2±ª±Æôª³œånÂ3?¿zhEy‡º9R¡Ã¡… ¢ŸHd,’jG¹o“`Š9’Ø~’PÔÛEÚ=¬Ã{Û!ÇÄþµ56DÞØ{‹í׿Ӈx½2²gô¯2á)Uk9“„îϵ&cK. Ž7Ìz²I#Yò´¥a~Ú´œ¢}YþÞãs_ýrw¿|×ã<ê=۶͇3­P£L²„ް*¡×€76›ä1ôeŒg™‰ %€k3ÂÌå\Jrõª±:Œ±æP“ŠuôðÚÆX6îŠÚQõ´)ÊØ2U–ºiÐ 5Êœêxë{£Æ™‹ ù¼l€…̶ïšøûŠ“Ï‚æ·…É1ni“mÌ·%j`é]›ÇÉcWd,ó,>I°ˆŒ‚öâ@J–9@~§Í‚f¶ÙŽI,€øÊ¶9o×\ßÖýº°©Îf‹_ÚÃ6`òà¶UÆØƒ«R-Ñ©pxJIaѲœzY,꘳Vò²Ží[cOF¨ŸIÑþ ‚ˆ¼¶Š%ØvvGÖÝ ÂËø絫)¥3ƒ°qè l¨ª ûЀ7T©ãyîÍ‹K4Ö ýZkƒdØ…éKÁÐ!½ï%e—%Íͺ¡•W\”'= –j·vYŸo5=ܦ KЦ¤æ€Ó–ôÅ7V.h‘‰ˆ«qIç‹©«’¬„ÀP„ X““¸º§+{m„@l§bª Ü#î^ŠýPËñ»Þáf-v*" Ðàˆ54*Û|¿õ5ÎoüŽ·##Åì,‡Dä˜ÉIªæãðHcÂW@#Ýu¼H@Ÿ×Ë}ûøüþ—7_?üë}¾ùøþ·÷ï¯?ÝÞžàVv6áöÿåâhÐŒ7Úÿ‡Ï!ŒšTzoíº°F°ýñž›!Ù¶E,ˆßžÒ¥%(m¾Kx#1Â5Â{ìXï1!%@÷çÚ$cþè8×6áI†óUômi‰ŽûYTöj÷¸«xbsÿÍðî—©zaVõ2IO‚¼àyŽ£§$à}ÑÇ`1êØ柖Sž(6âiQm«éê7-ÙÉÀöp\#.@©§U)EMÕ9¦?×CtFZÔ¢MA%(J‹mž/‡$šwf[ Q·S+çÚ"–¢¸”ÄÈ8Ðt]”r`…æíãõmÛ@†?÷=J5£tmꈤa©æhc+43}FiÝÌñ„(©l£Õ(^Þˆ¾£c¶õ@Œ!V&¯:Üf¥ $ZÖ¨Ê+¢½¦Âð4ò¶½Ú]²ÿÇuû.å‡nF”u¥'¸2L?ûÏŠ¾­Ë°O„«F÷È餉ƒÁ’JMµ&H*é¤jwnÊ’#t2±žG$a½Nr@õ7 °_ôްkõ©uáµÂì]ß|þp÷ÍHuÆ>¯½ÌEFÑj³^´(Q€ §É¯,}&HǨŽÒ9(Mì‘Ý*öSþ­'õê >áj„òsaó†ó»›ûaþÚÛØ–úQ_ö×IÓ%L%a@Ê£,.ÈÓ# zL¯AºTÛž$qpŽc·[˜?q¼°¶¶ 1Ø˾*|ÈTá)›áx ó0ÛEÞ±!9úx!swÙÜ^ßßÎKÄ.‹Ïÿ¯†`1# ¤êåÕ#¤²( c¤’(ÌbÁ¨1BìÅ9ZoøkËöTÙDoïUŒ†e_þc÷:“¬}ˆê~ÕòÖ9š¢H äüÌ‚*ƒ*iÂꇨI"4~äþÑÝí$ö„ !˜N ¬~³Mµ‹/ô¸Imµ«±ú®‹±çkÊÆ‡«Õ,¾°Š¬øPê¡ñIÃNgŒëk—yþDrgŸÀ¾âïþ—1O?3ïßoo¿¨è½Q‘|cÑÝßö‘ÑÓûê²@óýù—ûo·s©æAeûjÿ«Ûkkb1&§ÿÅê'Ñóܨ¿pe/¼w3.ì\¶ÉÁOÖ G€ÔÕpðô‰}O‡®dB1¦¾IwáL¼{j…¢;¾ÜO»ã¦¦þüPóóeCV\—)‡»v(—<»ø"Ü]~cÝ ;jŸ¤*Ü) = A˜ÐÛ{ÈúÆ©4bÉÃ$>º‹Ö{¡pVN* gá–W«0bv¬`Û |­«bÜe+6_6\ p #Ÿ6ýÍœˆô„s¾‰šq`JâÈð&”ç°êhO‘ÍÿÌÝ£]ÝhÜòðð4ü3}õóHÐÝ×OÓÝý¯§ý)D©­?eÔ9–ñ]sïʨsäÚZžµ’ 
"k´!a?¯f}ò†Ï²õ.…›ßULçá¹§Á²w_î>*IïU›¯õãŸnw²45Ð_ý÷Ýý?Uº?én¿Þ~ù6wù?Lûbâço?Í â§·šÞG%DÚ˜kã§´‚†¨Î®ýX®½ÿ÷7o?|y_GÌý«Vì§dÙ'ê çˆ‚ãØ±ÌâE¶/¹•LX™& Ä,Uae R‘Èe›™TÞÚ©Õ“`S€1B¤z â"«V;ÏE‘ÊèÓEN»zN› ðÁ+ ‹œ¦ðTä»ÌiŸßôx¸íˆ­¬š‹8ïe(«+ì0…­CIU¨]È‹PÒ„£WL¹¡KY=TöJØûÖî?ye?~^"XOîÖIĆíÍó§"aº ¤a×ñž^7-p~2ÐÈ¡ëp *µ¸ex&<8Ç#æÄÞ­âÃ|èyŠànó>)Ŀᄐ]Vº?¾½ÿçÍ•ûw7…f£šO¸AZ‘YeoÊè;ð`9F µK·Š¾Þ˜ªˆ»ÆCÏb–œP9‹b¸z%Í貋y1á©§Á‘ZôUG‚ͱ’9à醻r†P*ö¸±à5Þñj“üÇÙœ- ‰º¾ž‚-1$ƒ×l´$·Ÿß~œö`ÓïWû°Fz}mŸ›‹OÕÅ_^eCìYËÚ~¸åst”ŠÏB¶¿mA»!ï•zìXׂ<Òˆ¤#ÄÂ!-È1Ø$ʦÿ‹ZâQ¥± ±çq]‹ò¶±B ]Ð’¬lPnÔð ´½ Ú»F§gqâà1u:t±¤‡•|&Öˆf;5EÂØ Òà8%‘U@,´Á0¨Á4¿¥J7 Å6ì½òî‡n‚W¨¦HGG˜˜›òÀ­¸Võ4U#y¯úGåWÖìΖëYPd„þÙ©½C|eõ³…*ãw…¦`å2`ÜøÕ©ê]ïóýÝÇ›/ïoî?ìö”I·šq(¦wÊ;+W}lRR[j©„Øá@ "GJ í¯!CI9J—gA24,êr òņ ¥jö¹ã@¶ª<Ú‰#xu]æ­+žFe:…ò°+2¹<”Ç0´Ö)U¥oŒ-¥ïáÆdKV''€ƒºÉ³s þsиGšäýÓQ bÇó÷†Æ«hÿ%y*[Ûh½LX4·ÙÙËE†@€ê¡A#}ÿÚÖö¸±AA3%Ì@€ºI}"ïG`N¬­¦qÿá ÜãØ¨?¤ñ–48œÄù$›Â,¿¿ùðq:z´ûðñÝ{•ªû›Ïw³W*Ö—*¿²®ØÄfýaêÈhCôI±yMB-Õ/TžjI¾ªÄD¢„rzð‹ÏSúØ›–]qDÿˆžšÙ6¦¼®5×ÊÖÏSÁÆ’üi·€1JlOöåç)â0¶ÄÇ*(R_mÔÿx{”ÍX˜Ø÷OÞ¹ŽG+%sd^=)…®vRÊ…)hp@%½W©Ç°Ÿ€?¯÷¶æ‚s „7«š1˜^Nž,?±ïn?(™ #åjÒ°%¯÷‹%(™À¡Z"hõ ­R[­º ï¶ú›õ> 3²ë݄˥[“]'@%ÙUÛ™…Í[pjÄs¨ŸMˆR1rJ¡gò‰RLêpÜêŸÆÂj£¤¾“#U0Öà‹Œ%ÊCw®V3¾éH½²m‡zmÆ¥žÎ {¢?cu÷œºÛ-_!ž°¬î);%¹ Û m‡¤á@C‹3ê%Qˆ¹_hôçvh» †GÓ3îªVòdÜU—Ño™ðå 4Ù8¿>æù®ž³†ûp™ŠqWýc)Qz î²øÂªaWâ)¸„êc½V@ zh• uÄ&Jo¥½õqhíZç9u‰„¡lòm[S¢H æ”{q³‹ïe I8¼Üõ½øF6µU^4šhbvdd·Nå÷ïU­¨úÀ ÖCù¼X]ÿãWºrW»ÚÝþ5ÿêþîñ‹!xÜÜw.£ÂÌž¸ù9Iã~¬µÇ¡(;û¾Ð“8ð@¨žA 1àÏ@† iE8¡ŸÔƒ„ q–mÌLnµQ ê80NιPìRÓM.†šìâu‹›U…Á¼×~G5ßÄPdß"ötÔ;Q×ãÿ?ͬI33`tÖïX¸B ½÷E1Œ¹w®Ÿ†„¾íH1Ó¸ÆúøÒl˜7Ý+ýŸóUg½Rô*¥ë ¨¢°A£´LÉ‘„ï¡OúAy¢\0•š‹¿o?|~ÿÖOo}±1¾ql€ó\šÂ­ñÑw½Ô‘g§že£Jz%)ûê%™°ÄĈ5† ÉQÉi—­—,è6¤QZ&à„‰[RšˆÞ‘O«txsw£A"eÜX{ Á«n˜®kõˆ) o˜n°*g9,P«8Üxp5b?iâ™ü¦]~ïîï>ýv÷NÚf­U»û¹Pa­÷ÙNóZ@ ›´Zëu”fœl9ˆ«xÃ@¡ôömT¤ü“™FgnMj¼ø&ã¬Râeq6·Î6,pjS `¿ì¼u#­s’*ëÌõýÕ«ÆY'T™p«8œÆ›bMxƒ njŠóì—odO}ìmkü4z»(ó<°Êô¦ØÓ<RD‹Ó[Ñ{š(5ÌÒªòE%†ÔC±:§ñP6 ^Pe„¥ÕccpMφz6 yÄUz(›ÇÁJgwŠÈc76ß÷ùoyN2tn0B¡U]”ú8¸Ù&œch‚(.®‹r9à&QnpBqÓ©Àµ­m£‚š?öó¡llÓ¾ƒ±qR03ÉŸ¢k#èj:¤f °"Ð ˆÅB¸µ~çætç¢D'MÖ7¡ó.®²¾ 6szŸ‰s•QÑc¤ìü`J44¼e_5?Häªîkq–µ18Z÷r•|߆֠0~o£Â]÷–B 
Pn¡ˆ*„AŠºîùÌ‚“gšŒh¢Ðc'[‹š”]"F\å™S¤´Ð©ÞÆ»yqÇæ(GUÛ‹ÖÕ^¤º†õƒ»à€Kä®P@ñOǤ?xŸ:1ŽŠ¥|Æw±¹).*_°¾€XT>–Ü4Å‚CGg[@$mº'¶u•îm^OR*3gæô½59=í²¾PN‹;ªÐQÖ»Ûg¤„h˶yE Ã6yô´¹& î€×O;U/«óNaa.ë'S…öB»7dµóù^UïN…£4x>±íé>úÃùaßžÝ8œo+¬Åv5Ó¸ši8y$‰PÁ4*D4bÓ:¹Ö®ÅͪØSr.½6Û¤k±x°‘Þd˜xMxÞgšð2>]šØÛÄ~™Sª]¥é»«Ï¿vnSÑ…çTé5¬‰p´\ïðUmxQ/£QMªoÍ"êµ;QBj…¾¾ùüáî›Å’»;ûç«çÝV‘ËÈOÎQ¨˜œW¹*Gí\Îp/‰3"§1uˆžZ‰öôÞˆ°gh}‡¦T›v=n"Ï5÷&ÙÍ!k0þ—¼kiŽãFÒEáË\†%<òäqo{ØËþ Š¢-†Ô’’<Þ_¿™Õ¯b7º€B¡ÛZ{&–©nHä™ùåäh•&Ùd:][·Ç&¼æË*£­˜ n#Eäw³uæ/N#¢MŽgð&G«õ¥ :©nÅÀ$Òèìµ^ܸÆ–‹ó™\›UާoCä ì.ÊËAJÉ7”…·9þswµ9¬ÏcŸœ¦l•mWDB,¯¬òtuÅñ@W¼–4äZÅ-˜T¹ ¸V‚©¶á/¦0 WѬà +"W˾NŽV!Á¶­`#éx‘—.nþ-b\âxNY´sÒÐ]ºnç绯¯ŠJFWìÐ 86[0ú_~»¹¿[V}æ$_}¶¡=ià´Nh4TŸ‘5ž›Ýhz”_F®&O– £‹i Á~.N\”ºsƒ¦'´éNƒÓÈ8,‘IÐ(X%“â/ ä¹ÑmtZpfjH½pþšÝ'ê=òÿçîåyô?E ¹ä£î¹—s#©{IFòý«¦ÕyÆ6‚tm”éÇÏJÖ-'}úôx_#¬]æ£xª.±lö`û¹d Ú‹‚¸Ø!¥ûK¤êfI•<9.‡Ë @#î’N$5ËÙq£{ºô0¤cZæE€VyZVàÔC ÄK‹kÈÖé±n,¢:G7ªìl‰@HeqÙ¤ç„L=˜qHÌ'XÄ$hÕîëÜñÔ þl\\f|¢9Æ2¦Œg5ó‘—^¬âlL Är~<%·-ÿšu±(Ù½§UK7MÀ׿LÕÕÈs\§LÚQ!º1 ÅÄ'*óŠÀ"¯H¶»uJ°.Éݵs¨K–Àm´Ïk‡Áµ5‹m—ˆ¯mÏç÷öf÷þá߯)ÁÞSÆw«ûoc¡×æIyt«6·ñ‹^§3;µyD ¥ñ=b·áAŒ;ýøÃ5ŸŒ›³ÁŒºmJû]ÿúøåñåƒÒýå^ÿùý“þéðY[nó7G™tê¢!ñ¦[úÃÝËý³+—Ôï~Ö›¿yÿîÆÈ›w¨dürË«.r'Ð%Þü×ürúÕÇ/7ß_ö+€OQ4ûœuX[è’4OGo@#Í£‰‡§\»åê7_~3)G«çˆïNN„·¬ÓAÕ3Ä6(ÂíølùÇv.¨Låj¢!Sf§{ùr÷õåÃÓ7ý»OOº„U@ª6¼y4ýñÔUxçîáþøCÛlÌÍFtnÁéfŽ>²[÷™¨Ÿ‰ç>³Óúw†bADm„ ’΀8PL½àƒMy¹ûaw Öñ¹Æ³ŸÙs±,Üš=+Ù+ö¬ÑñvÓ}ÿ·iïOO¿½L´Ãî>0·o¾EgÖ8Ë]7…åÛ%Â61ŸÛ÷ø¯ÃÖÅ4Üïwߌ9Õм±´çns†»<Ç4Xÿ§þüÛócØþbJem,Œ ¬æÎYNÑæ—p·ô 7–oQÿNŽ!|žJP]"4N[Ý-±Mñ, ‡D$ªInH« r”¶Ÿ¥ïB­å@‰ÅÍF¶z+ø1¥wÖ½ØÖg‘]'§)g‚mW î€Àk´ÐÉ+ÑBÃÀP‰Ú‘ ¥‘]Vp†]´®ŽƒÖÂýr²¼Tð{ ¹È휞¬"¬fw`ruyÝï¦$+”˜-q„¾Ö5¼§c—Ð.Bä«âJ }\ÂÏwϾ}ý¤œñöùáý‡»W?ºùòñ?}Åʺqk’œªv\ó£K ÷ïwÕxupÆ¥áÏãœXVjç¹@´‚þ%IÔ%ˆ¹ÅÏQ[ïØ¥~pœó4ëåôgøä•lEéDï€i^:~yh’ šà8e`f®-ŽG寽œ Y7ÕR¯Ñ8 2&…@©ô6Šîú°œºgé€ËYÖ ç®6Râa¤‡ØRÆÕ–ÿüñŒq°ù¡â‹¢Æ.ô’h‡˜…¡:P¤ñm¬j ¾ôZwÂÆþÂ")µÞþ¸Äq÷[eÛ"9U—”.‚6“¿Fëïõ‹×(I⼆¶“s’|Ûâáh5Ã)u[AHØ×_œÞ™J-¶_œ˜‡ÔR0'Á?(º¦tsô§é昹+°çATzÆe‹m%Âü]Ùac6µ89MÅÈhÐPÚéŽ_§›§k¬K7‹<¤êtó†Tç4"Èí— †˜)Šzì$ül Üg¹)óÈ÷Ü4¯À7ÜäOHÒ¥SI“Ç6ëŠÁо½_Å‘¨†À‘Êâêò_;mØæŠÆy±|­öêZ¾ÖSd²}nl]€Ê6ôt aˆV«Ñvm»%"5ès{>ÕxÂS‹:GH'ê3£w4lVçgŸ.Æsx5ªé|qòþ¬ÙÞ“ÃÔhsTs¼5<]b2q¡²v¨#` ˜:Hpë¼±3:Y2x“‘gñ˜e ?XË 
ˆÄþà>§“''«Á0=àÕ?_(¼"êŽ5…Çû%Ž„·k|¼'#žÆÇãEˆ–ùøX}ÔЯvÈKMB–i¶rh$oKUáþ¥?±*œûpëKSG3_=ð`¹zçôûÙRœÍƒJv`ŸVH78Ç-¥’H;ö˜ê·‡ùmqÏrªÀÃï¹lT̘ š@‰„Ù†¸ :¹gªZhªP âÆ‡ÕÝZR1D°¾Ÿ“ÔAP§š9U9œŠ<à³M‘*tÍçÆ‘¯ˆ×fàjÄìšjú¥Y®æ‰rù«vãU'W÷`syÊW\.59m«Ö]³º»©úªe¯¶S·À­W=.ášWÉ%ò©­ýuDÙ`Œ¼í7Ä•)Ãb Ä)„;èÇœg˜e‡ Áò«Šta¬ÂÒ"v`«ÞvkØÁ¥¦Ø#{US¾)ÈcwäQN°ÓB™ên‹7é¶ãOÀö‡©òêo3‹i7]bU²?ú\ä\`³|­R )6¼¹'5;¬?†e“>6piö'“ˆoOõ/gd?_]ù×Îlp™±ôVUÄ6Gs<…ô@ŸNCõU¬œü-c[2§ ·ÁÀn—@qЩ³™TÒó;55çS£³]"±Ãët6ÛAuUª¨Ì–oa…7íºÖÔn»Á]´µY™Xoxykóžèêp®"ºäÚɵ6OiuBä3}åÇ'k ½á©5åŸwKŽ-}“æM0CH FQ…û$‘eýÈù7j,E;ÂDtvJëá°ù“ÓTXE±æö›aEÿÌ­±2‘‡ä¤êí³##Äm?ÇBF° i ë2Dj°) XB±Ì*ñž‹\±}9æŠÉÑj°u[ê=ÇPõúÙóâÔ)jðjÉÆ†KZúªµÃjy™ö_<}ú¼ÉLΖr¾ZëålX%£*";}•ªpPÁ9U1![7GlŽFXÀ3Ñ¢]J±gt hyñΜ<¥òJY®¶IÚF1&OÁ•d iç¯U%[Êp8YM“4 ,A£µ×à°Äö‘ý¨z’à²5Fg{Š$&nÂÎÜ/¡›’ªÕ@#)ÿ×iSª¦yI¾¬±[fÉ%f¶qŠ?UUÈF\+võþSDÕ’EÁÔEæÓ(s`’´ìê. þ½^¿ö‘9Ëp>u“ªy]±«G³Û¾¤¾ªó€·2œb€?Iuîý°·¯¡Ò©PKù¢”[æñØQ/°Ÿ=G¬^ŠÔ¸SŽŠŠÁ^芊”²£=”i™É*n°ª!ákëQæþ­Qd@!¹hÒ×çǧçÇoŒgݰÌvýR«àÌ7{÷îÉl¹xWÉkJ-ÕêØÌˆ¥ KsÔ‰@çH[}æ„Ø8Šòì,Ö­ÙŽ¸…k›â”›«uÔ¶m\"\æ 9fX%Ê—÷ˆ”´›)¯k´DÆÑFž =LQBWØŒpêɉk}µ_t=•rŽ 0YÈ'kØ‘úéQyȳ\}HÏñôö»ïï— NU›í/kf%xKÖ;xý¿`ºÄˆû‘H½<(»|²QieÝ›HÎã|èå²…ì‚4xP¶I°É>´Díö7º¸ÚUªd¦bëµXC_,¨]ÔSí¢TµŽzæ…i­àç.“†@è6;Yã-¯J>z”d¶-W3¹ºpú d×éćhÃæ¤psTÈ>ÔNSNÕé*|^÷œM×XWÁÂqHì«Suv°èCJÀkØ :iy]Œú«™qýë}m †¬LŠ<$bÛÍÁ³#¬''«x½·]i¼]r±FÞLÍ¡ì¸D8ª>ï­‰G"fü_5;ƒÊ%o‘pŽ_Y|êך€Ž®€j:­´´„ÛÍøƒ·÷Ï÷'ै——Î.>iq\Q:»v®ýaÏ=hó\eâÒ@º²KÉK¼ÊÒYû»ÌåF'çI­¡®°[¥ õ²ÚºÃ·ø5¤†>mÞvF§ë=HEC1é-ßÑ¢C¢}äSµ»äi}B–°NëS¼´ÖW2Ç r‹ÙtVBnqÔõÕë[0†j÷»VüÏ^£mÞùU×(¡?–ÍÜæ’௕C7‚Áˆ V)Q‘lõÕ‚ ‰âŸ-‰î3³àýœº®TÔ¥àƒOe]*Ù‰‡SštP¦¶mLzÿKt)8²PÚઠëR#sæ)ÃîI(tùª²wƒäúx1‰z}kä[-ËËäƒ5†áÒ¯þá÷§ç_Ÿ>=Þ?0'_tAj(SÁ†0X 7sQÆQyŒ}IÆw5 Ç2>¡K—îLÝ69Z$䬼8ÇBΡÿ¨çÜ@麋ZÚ]©Àká²ÏìÝ”R¢¦f‰K$GÊ'BH«¦ ©µÊzÕÀÅäcc-ò,Á+ ’g©½FøïØ&»Uø 2…ŸYòs%÷ìaàM\Ôc°HøM{‰[%ü|i ¯t§hÞvä”Ôcõù²9J]G*x_eÚ$‡ÿLåsŽ#ÆÙv"«4Š´ òxѨ/¾L!Øܼ—@ôl ”^BŒ,EE‘$ãq L—´~çaõcz²†¡VE1.—€ŒGRø—¬Ë¬¢ü¼@n–pM8ÀG þç­ÌÌô™);X§ —³WœçÙ‚ Ó¦¬ñžÐ¦¥6S·é êeÐf†öd´-Ñ4h \Æ'ƒjÉEC&³˜ÃbÉíJCð)Rš¿9;ªÏå'‡©hapÎQzÕ54]bU*Úë‰É‡ÚÞ¢^<€©¥"ÁE¯jŠØ¯NE‡ÚF²È6äŠdÆŠ,ù±:“£Õà]Ú$/{Ÿ×Þ-„ÈÒÇUb 
g¨~Ü}z|g‡x÷AÍm}Íná«§ñYœá•Œ¢ˆIs¥¢(2Åbçäýõ@¶^—î:¨TŸÌ™‰W1Mð-3–ƒÞÚ.aûÇTúÊ,gž1B´x°†1’øglGÅBœ¦cx4ЫJƈ8a@lfŒÍÞ}Õô!S›/àåÔð§³·UPHP¸G=@˜C Øœs›ƒ9ö)û¶¡àY ü_{×ÒÜFޤÿŠ¢ÏÍ™ À3ѧ½ìa~BÇMÉ-†%Q+Jžîùõ›YU‹H ”¬ñìɲ È72¿<ò&KÔ¡G°«ésKÒÚq@(ŠÓQì+SëhÌF'k3B‰»d 3`н ´'Ë"ôý3$—£¥EAÃhèpd÷+ã»WãKQkÐí¦ÿ Ëo°RcT8"ò­6·Û’¨ZÛhÁPÞ5dˆ–¼u› µ ®Ð°–“®ÈÊŠ„FtsPìÌfig:û66’ÑÅœò Z eã]Sš²ê­á  ˜*Yõ~QµL=Ðæ§ªAÉ#y†\[Šä„ÛÂl°Ø¥kP¢2h´ÐfÈ (m’2L4ó<¡I!´»Îm…0Í^9:š{§.Ø£ñQ~Ñ ;Å~J*dfb¡u 1Å/^ùh£Ç„bMÇU‚à–¾³ÒöÆQ{­­tg‚GX´ža¿¾{¸å/ãï‚߿ο)f­±@EC3q7^)ilpmÐÕÆoþñ¸{>nA°ºÝ~¹Þüµ¹½þ0=†mþg¨³£ƒÔ¾Ê0δÙñI%¢­E½ÜC¸6-\õŒŽ5%u59¬w¤æÆy‚†Åò¤¾Æ ‰´$ï^iþqÒ dU|ÄÝ„-¼ÑÛó3·Í¬,P¿ÎQoçŒ"¢¶VèˆY†ù yQÁhhQƒP‚¶ÉþªFð3ÑˇGÄeFî TòYTdfÌGœˆvèNŽÚBbxÏÄ|o•;‚IÎüPƒ^â©ã´)^=¬7_ù‹ŠG¨6¬cY5RŽÖ.Í1ñÐð…hMŒ´‘¡&ßÓ÷²5¢`Lq*µ_B«ö:–e¬Њ°¨§?¶¥¼¾a†ÉˆÿxyüßY)Tqÿ<‹èçÅ´_ÂP|9x‚¼\6â%“PE‚hÞŽp&ðdÙ‘M¢ïŒ4žÍÑ Tƒè£Å„,%h›¼M,zþ½%ÏZZ²lx¤k¬Ë² èqÀsº3È4î Êê²ÄõÃóUÀ© ÄÖ¨æB’Y Õ:Úþ\‰nÙ1Zi…*ô¨7ªZÆYé©ñê£å¹#:Ô°¤¢5:UÑÁÔö‘•OèP¦Xþ@‘’ZN^9°6n†õâp…5f”e¶$ÍØ£Ø2d!™rbœ=i¬KÞ_p ÔͱvñQë79MF9§ñ¬Mœ÷úZhºF]=§ Úm0³¢sàhÜUq‡E*x°‚~_]Ñnj«e(–R"ÅPV'jìY|[ úB”F3S)Èô¸9ÊÁ–cªl©‚'/±(^tÓ\–x)æ;¶üíÑøöGÞ_nï˜)F¯f¿{OolØí¾éõíÃÍZÇÍøÈ°óGh’ùev‹!iR0Ö0%d 6â]³Çj³›û±‘ c™ä̹{†eÈjU]*.˜W!ƪ7ˆ·oÓž©pv¾ÐxðXQðä`9b2…?U N׈ÎÑÔ!{‡Šò­HP^VÜ5ëVýjæHÞ]k'5¾Ðˆ4‡ØXsˆLoq@i©uNÛ”ÔÊY£Úr˜´7ÐqHÌNát…*_«¥F¿7XWÒ+†JÆÏîÝdg!Ç‹¸ªÙ>½üâ'}ùp«ëÛë^·¬nw›¯«1Yõ6å¯É:EÖ#æÖZìú„6‹SjF¨MQ/ä@Î&%æ ãEÔ ¤ +¹’Å‘qL<¨N×ß_^Ÿ†®oöȸUà3¥µ EVMN–3°Š=ÞM,œ.5¬6‡Þæjб«ÂÜF婈~ ]R`å­L3»ÁtsQŸY¹2Ãïßjˆƒôú®6ûÇ+åÇÝ7ùO{í ˜8úK…(¡SuûÙŒoOTþµhÄlù Ù%°R¹o-2—>°×^•v!K@ID¢ûÈÜ:Sæ_øˆñ6*°$“lr„dŽYÏ‚ÞgÕQÀŸéiÒ_3©¡SÿרU.*“ÛvÒ Ñ3ëA  ”4¤1M“ÞV[›iX)r¬ uÊÖ3O¸ ‚³)žÀø´´ÉÉ2¬*îï æH/J³{1Û°„³ `<ÉüUÖ.:"79qÍžÖ~¿}\D÷Õï»Ý·ûn÷8¯ëŠb9äÝC†ôø’&`Ã1IÈByS¤+3­6¢ }ÇηK½Œ Z%4úh!Ì„N-0™Èu>É®=™„…\c•°µ(‚í@f²‘Á=|Q½ѳN?®zÛœÉë¬ÇUcUöëj µqêŽI£ñZÕÜ1)¾c±kƒÍ:ºc±Œl Œkó~žRA¿3HýQ\žHI¼­·&”ƒÖ'?l bo¨ Ä>ùY'@í¶<¢FÖ—¤˜Œò$3ƒ¨ÅsUú@,E©T‡†õnb¤S@‹õIZ(².ýMÈÐ"7ÄÛ©æ™e¢ú©ÙÅóP†%ˆŠ0©¤èùÊ(n äð[äÂè«÷„ Mx;…ÖŸÅ>ø`M•›2Î jShŽåìûãÆnöÛýýúa³{:(Ö2ÇøRu]WtR«³Ç,~j@;U2²Î°§èüüvøYôŸ '{‚øÙ…ü±¦Ÿñ .P:‚a¿ ]R7„í›?´E#¤µ™£œvFû*ïV^O–Ž`˜Ìúm£}§ŒµÛ¯0e@½-¦¬É½dB%¦ìQL'ùƒ²,TE?| T›šœöc‘ÑB¦cû°¾ëÆ'‰îÏ•DŒÃw¯®$¼N]Kòï3Ìý|þ*#áJ¼>’a1è0ÓH¤)}Æ2¤É\c 
Á2“®¢";Ï™öÇc/Ò5Ap`A•o @š×4iòÅŽb¿„‚esÐYÇ#EòY*E'ñ±Ó”1þµlô~}¿¹¾”ǘçý[˰z“ðXEò«·)­U4áñËÉË AÔĪËÐ~— Ÿ5j;Ãk?Rµ7}Aȧ¿ÿ÷}B`Ón tÒºë/žyƒ²:•í .SŠ¥@(„U_üvñôçý§¿ggŒîÖ¢ ÆôÊq ,’"Òv^Šèüꓜ±ÿ­Èáav²èü§þÆë›“a>Ÿ?ÍtNS\`0ò­+é¦×R¯áIÚ ^!2F"ct;êѧ\{^ÏA’އ•NO“QìÎK)ï­:ž£;]£nŽ®Ò¶ÆeãGc¿P+Õ¤jƒV:^•¯~AÎ-Jƒ³ |¡Ñë$_x…ª-¿8ÂQÁ?bŒé"ÑÒ"QÑJYeœ —z ^¸Æàè±äjAë/ô'ˆ´ †Nb> ø^ ¦Ý Gf,XùÂñi¯NZ&m_ÇöUHÓ¦…\J¬ò´(Xïr»¹^oÆ—Ì,”¢ÝíÝáÇÇ+4™ç;•%ÏR¨1L(¸ŽLwÖ³sáw“²ìEÂE}“àƒ‡ç›$m“UGǹ¿Ð­Åè_Þµg;•_åÔo´aÿ¦J˜Ñ/oj”‰˜’q#jì`ycj|Û‘¿Á5ù»„69yÕ­rºêªm¯·MSvp.hÖ<šgèd_Ða¤‰Uɤ¼Ö±Q·rÿì `RÝNORÝz…ߙФE^/8¶ÚecœI7®ÒÄÎO)CôK”´œcy”ÙK%A=E‚úð–CŽvŒ3)Ã)¯ÍÎWGU>Zþûr˜Œ9¤úw|Žû5²D݃C—].ÇBEHÁ•jâž2Á-€gÏ÷‡hûþš8‹æiÁ“–ŸÉ# VYLƒŠÊ¯ô2‚Ë‘_{:o¤˜iâ IZ(bÞ5’±4G£¾ÎP¥ˆC™i¶Žø/C¨MªAn³ž˜*Ç–ô½ÅÎëå…¡-§×›·Å^YœsqN; ¾JyJ7û"á.0ùÌ»OÙɃ̡2ïrý|µ™XпS<-*ÒâWÐ'*°=’®­ú¹Q"5Óœ|ùÞ*“ÂR Öø¤çCZÇ*Y'iò4­;Ëë¹ì·ißyc´è§Rì—P¦½÷‚R–ë8’ï!™W>£J«_Š ~^þ†%J¦Ùb@k ‚+CÌ Q™ü™Hã¡êØqµ6ÑGÌİߧû¿A£#l')è"–MÚÝ{‹›¶Ë–‚ TÕi{QG¼àìûâFbÈôsq#3åþäeDªê2 Úö¥$Ä0¹e ÀvWS‹<™/ùx}u³>d=÷«¯æéy*4( § o•òJWéÐÑlÏôa¼&t¨çâ5̦U+U*¬€†l€¤*#-I]j\,œ¦@—Ê.¥›ÒÌÓ¥V`{¡Jü¬µ ëR&ë8ÉḬŽLÌãäN¼v4…Þ¥TÈí(’ÿ“w‰‡š»dÄÜPè8¤5h>BþÝzsÃQ€ôÙ~fµ£»ñG6KÍJú™Kñ‚s]¥f_c"gB´J%»f¡ný,B¶sg¡ (¹”´;+Á9$u0b3kBµ"‡Övžp®CË Åc^Æã¼N·€CËtÕ;åŒuú}[õä(c¼oÞªŸ­Aâ÷ ±wÏLR®«„"EÐy_=urp˜¯;`»„iÁV)ˆÊÉmôlr´ŒL«lËÚ`ò5ãâR‚ÊÛ÷¯æý.à-1#Mrd§Ð&$UjWèɼC="“ÿ¸X$íIuÚãgºýù*ðÖ½.\TóŠmaRÝèHÇËñQyãoKª²ªäy™×´v¡ìy=Dž×Ã[åâ´Œ°uÓÊáüx”á¬& –79Lúy½ß”rðèy}²D]Å|G6ûm½?jO k”¹öÕ©ŽõŽàƒêŸëm=‹æb7¾(ÎEÃT!x‰|?ÚãºH¯œ ¥u–ôž„Hæ£eNš´(+ùæÁÏr£¤*)dý±´ë€`BÄuà›2Z€„ë@‡©'ŸEÑùl/¦BˆÝgèP¦_Z¯‹±.d   M ¿Xž° Ó…w‚#keÖURN½à¿œµ²ÃÁ£Õß““å`X:œÜáþ×È'pñ-K/¹¨ôš“·Í†Û–§Ö†%ôÒONB}|‹¦'÷'Õ¤„ñ4i&ÅOƒ°fv„µžÝŒ¼3˜:ý„¡$ïhd˜ nPÍ…Ù JI™V=•RPBÌšjæ7ÉÁÃÑr°×y[VÙìCq–]âò™áîp×@' þòøÃ¥i멺 Kgh½†»ˆþWsAŸðÞØš¶3C»w°%/h¯ {¼ó BÉgNì‚g^šmJ>ó¬}HqtÚ>K9hú;»9ò¼K½mð­(µœÊ¥MñŠBÊôÈHǘé™P­A 뀽`bÞÏ2MüG®Î§pE° Ú[¥øEÙDÔYÙDì˜ÄÒqŽdE­Íˆs¢À›“Ód¤½ê@:Z˜¬Q7n’8ŽâèÎÓ¬€‚`?TDи´Ad‘³o»d]Ôô}zßÿ?Y&ž,ûË"ö‰•©ºoôíËôX#²0‚Ç1ûok¿»åom7_{æìÑÜÇï–»¦`NÔB7ã”s¶Êzs±9pKù̧f+kÞsG½ÎH“„i˜´æ1%]k.ÛæíØYÖœ_£êb·ð„ˆžÎciË‘ –õ4ó=ÆsZžšªî¬êi03ʧ—Ñ)'.; ôûêP¡%‚5¶ Eƒâ8ª“A”™ "Ä®Ÿ|˜öÈ™‰ÔyµþÜ::4üp°ŒT!;YçHhúÒ’,,Œz5PQÛ7*÷`牳¦e²Áð*“+ç?¯¯Â¬2™lašP®ªÊ¤•v!_² *\}A[.Z–gžOè—Ð)tÎ&Õ 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.log
2026-01-20T10:42:02.268571200+00:00 stdout P Fixing etcd log permissions.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.log
2025-08-13T19:43:57.535975521+00:00 stdout P Fixing etcd log permissions.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.log
2026-01-20T10:47:05.875124804+00:00 stdout P Fixing etcd log permissions.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.log
2026-01-20T10:47:08.980038345+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.979636Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}
2026-01-20T10:47:08.981837173+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.981809Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}
2026-01-20T10:47:08.982368417+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.982342Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}
2026-01-20T10:47:08.982704746+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.982477Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"}
2026-01-20T10:47:08.982936482+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.982913Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc0001e4000/192.168.126.11:9978\""}
2026-01-20T10:47:08.982988904+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.982969Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc0001e4000 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"}
2026-01-20T10:47:08.983016874+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.983004Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""}
2026-01-20T10:47:08.983737433+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.983707Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"192.168.126.11:9978\",\n      \"ServerName\": \"192.168.126.11:9978\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"192.168.126.11:9978\",\n          \"ServerName\": \"192.168.126.11:9978\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": {\n    \"Config\": {\n      \"Config\": null,\n      \"LB\": \"round_robin\",\n      \"Methods\": {}\n    },\n    \"Err\": null\n  },\n  \"Attributes\": null\n} (service config updated; resolver returned new addresses)"}
2026-01-20T10:47:08.983854526+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.983778Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""}
2026-01-20T10:47:08.983997000+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.983977Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc000004f20 } }"}
2026-01-20T10:47:08.984042631+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.984021Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"}
2026-01-20T10:47:08.984295528+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.984273Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"}
2026-01-20T10:47:08.984343149+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.984329Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"}
2026-01-20T10:47:08.984416882+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.984335Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"}
2026-01-20T10:47:08.984425922+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.984412Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"}
2026-01-20T10:47:08.984524715+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.984491Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, CONNECTING"}
2026-01-20T10:47:08.985446559+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.985412Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:08.985523191+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:08.985438Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:08.985523191+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.985465Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:08.985523191+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.985492Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, TRANSIENT_FAILURE"}
2026-01-20T10:47:08.985523191+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.985505Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"}
2026-01-20T10:47:08.985849650+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.985814Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"}
2026-01-20T10:47:08.985918831+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.985898Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"}
2026-01-20T10:47:08.986786464+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.986753Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}
2026-01-20T10:47:08.986820635+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.986778Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"}
2026-01-20T10:47:08.986820635+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.986811Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
2026-01-20T10:47:08.986828785+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.98682Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
2026-01-20T10:47:08.988107120+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:08.988012Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server metrics URL serving"}
2026-01-20T10:47:09.985853108+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:09.985721Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:09.985853108+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:09.985791Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, IDLE"}
2026-01-20T10:47:09.985941660+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:09.985834Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"}
2026-01-20T10:47:09.985941660+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:09.985869Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"}
2026-01-20T10:47:09.986301750+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:09.986212Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:09.986301750+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:09.986279Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:09.986361261+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:09.986323Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:09.986441913+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:09.98638Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, CONNECTING"}
2026-01-20T10:47:09.986441913+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:09.986419Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, TRANSIENT_FAILURE"}
2026-01-20T10:47:11.829245859+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:11.829104Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:11.829245859+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:11.829175Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, IDLE"}
2026-01-20T10:47:11.829245859+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:11.829202Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"}
2026-01-20T10:47:11.829245859+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:11.829222Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"}
2026-01-20T10:47:11.829488927+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:11.82938Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, CONNECTING"}
2026-01-20T10:47:11.829488927+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:11.829441Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:11.829488927+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:11.829461Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:11.829488927+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:11.829475Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:11.829530339+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:11.829496Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, TRANSIENT_FAILURE"}
2026-01-20T10:47:14.856830887+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:14.856678Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""}
2026-01-20T10:47:14.856830887+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:14.856731Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, IDLE"}
2026-01-20T10:47:14.856830887+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:14.856766Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"}
2026-01-20T10:47:14.856830887+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:14.856792Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"}
2026-01-20T10:47:14.857007682+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:14.856916Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, CONNECTING"}
2026-01-20T10:47:14.857139247+00:00 stderr F
{"level":"info","ts":"2026-01-20T10:47:14.857055Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:47:14.857139247+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:47:14.857116Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:47:14.857162427+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:14.857136Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:47:14.857162427+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:14.857153Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, TRANSIENT_FAILURE"} 2026-01-20T10:47:18.741406857+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:18.74125Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:47:18.741406857+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:18.741308Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, IDLE"} 2026-01-20T10:47:18.741406857+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:47:18.741371Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2026-01-20T10:47:18.741542831+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:18.741397Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2026-01-20T10:47:18.742199859+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:18.741529Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, CONNECTING"} 2026-01-20T10:47:18.748324085+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:18.748232Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2026-01-20T10:47:18.748324085+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:18.748296Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415d10, READY"} 2026-01-20T10:47:18.748403957+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:18.748351Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2026-01-20T10:47:18.748403957+00:00 stderr F {"level":"info","ts":"2026-01-20T10:47:18.748381Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"} ././@LongLink0000644000000000000000000000023200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000004304615133657716033017 0ustar zuulzuul2026-01-20T10:42:06.559108761+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:42:06.558168Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2026-01-20T10:42:06.561505241+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.56145Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"} 2026-01-20T10:42:06.561986765+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.561951Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2026-01-20T10:42:06.562863270+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.562056Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"} 2026-01-20T10:42:06.562863270+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.562722Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc000148000/192.168.126.11:9978\""} 2026-01-20T10:42:06.562863270+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.562757Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc000148000 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"} 2026-01-20T10:42:06.562863270+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:42:06.562766Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""} 2026-01-20T10:42:06.564032125+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.563791Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Endpoints\": [\n {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Attributes\": null\n }\n ],\n \"ServiceConfig\": {\n \"Config\": {\n \"Config\": null,\n \"LB\": \"round_robin\",\n \"Methods\": {}\n },\n \"Err\": null\n },\n \"Attributes\": null\n} (service config updated; resolver returned new addresses)"} 2026-01-20T10:42:06.564032125+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.56384Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""} 2026-01-20T10:42:06.564077636+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.564034Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc00001ae00 } }"} 2026-01-20T10:42:06.564077636+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.564055Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"} 2026-01-20T10:42:06.564280612+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.564254Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"} 2026-01-20T10:42:06.564280612+00:00 
stderr F {"level":"info","ts":"2026-01-20T10:42:06.56427Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"} 2026-01-20T10:42:06.564427836+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.564326Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2026-01-20T10:42:06.564437487+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.564427Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2026-01-20T10:42:06.564623072+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.564553Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, CONNECTING"} 2026-01-20T10:42:06.565648811+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.565608Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"} 2026-01-20T10:42:06.567097184+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.567042Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:06.567113035+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:06.567096Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:06.567163406+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.567138Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:06.567201947+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.567179Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, TRANSIENT_FAILURE"} 2026-01-20T10:42:06.567240268+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.567218Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"} 2026-01-20T10:42:06.567282819+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.567218Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"} 2026-01-20T10:42:06.569118863+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.569008Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"} 2026-01-20T10:42:06.569118863+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.56909Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"} 2026-01-20T10:42:06.569151074+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.569114Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2026-01-20T10:42:06.569151074+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.569133Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 2026-01-20T10:42:06.569597337+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:06.569543Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server 
metrics URL serving"} 2026-01-20T10:42:07.569513333+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:07.569355Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:07.569513333+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:07.569433Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, IDLE"} 2026-01-20T10:42:07.569513333+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:07.569476Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2026-01-20T10:42:07.569651757+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:07.569523Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2026-01-20T10:42:07.569674748+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:07.569641Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, CONNECTING"} 2026-01-20T10:42:07.569993647+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:07.569881Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:07.569993647+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:07.56992Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:07.569993647+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:07.569937Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:07.569993647+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:07.569953Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, TRANSIENT_FAILURE"} 2026-01-20T10:42:09.250999007+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:09.250838Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:09.250999007+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:09.250936Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, IDLE"} 2026-01-20T10:42:09.251067929+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:09.250974Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2026-01-20T10:42:09.251067929+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:09.25101Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2026-01-20T10:42:09.251277515+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:09.251176Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, CONNECTING"} 2026-01-20T10:42:09.251646535+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:42:09.251589Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:09.251692597+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:09.251648Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:09.251765709+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:09.251716Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:09.251888782+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:09.251851Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, TRANSIENT_FAILURE"} 2026-01-20T10:42:12.213332396+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:12.213116Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:12.213332396+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:12.213251Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, IDLE"} 2026-01-20T10:42:12.213437039+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:42:12.213323Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2026-01-20T10:42:12.213437039+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:12.213396Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2026-01-20T10:42:12.213773069+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:12.213609Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, CONNECTING"} 2026-01-20T10:42:12.214033967+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:12.213946Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:12.214033967+00:00 stderr F {"level":"warn","ts":"2026-01-20T10:42:12.213997Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:12.214099999+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:12.21405Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:12.214126469+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:12.214104Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, TRANSIENT_FAILURE"} 2026-01-20T10:42:16.335038130+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:16.334461Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2026-01-20T10:42:16.335038130+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:16.334535Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, IDLE"} 2026-01-20T10:42:16.335038130+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:16.334576Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2026-01-20T10:42:16.335038130+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:16.334609Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2026-01-20T10:42:16.335038130+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:16.334778Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, CONNECTING"} 2026-01-20T10:42:16.347027669+00:00 stderr F 
{"level":"info","ts":"2026-01-20T10:42:16.346818Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2026-01-20T10:42:16.347027669+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:16.346899Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0000b7e90, READY"} 2026-01-20T10:42:16.347027669+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:16.346953Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2026-01-20T10:42:16.347027669+00:00 stderr F {"level":"info","ts":"2026-01-20T10:42:16.346971Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"} ././@LongLink0000644000000000000000000000023200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000005016115133657716033013 0ustar zuulzuul2025-08-13T19:44:07.409131412+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.407635Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-08-13T19:44:07.414073583+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.413544Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client 
requests","address":"127.0.0.1:9977"} 2025-08-13T19:44:07.415381827+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.415228Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-08-13T19:44:07.417425202+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.415867Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"} 2025-08-13T19:44:07.417943455+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417861Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc0000f4000/192.168.126.11:9978\""} 2025-08-13T19:44:07.417943455+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417922Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc0000f4000 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"} 2025-08-13T19:44:07.417967766+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417938Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""} 2025-08-13T19:44:07.419975659+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.419896Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Endpoints\": [\n {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": 
\"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Attributes\": null\n }\n ],\n \"ServiceConfig\": {\n \"Config\": {\n \"Config\": null,\n \"LB\": \"round_robin\",\n \"Methods\": {}\n },\n \"Err\": null\n },\n \"Attributes\": null\n} (service config updated; resolver returned new addresses)"} 2025-08-13T19:44:07.420416501+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.420235Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""} 2025-08-13T19:44:07.421162071+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421084Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc000409120 } }"} 2025-08-13T19:44:07.421162071+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421136Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"} 2025-08-13T19:44:07.421768917+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421518Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"} 2025-08-13T19:44:07.421768917+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421754Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"} 2025-08-13T19:44:07.421856239+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421629Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:07.421936281+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421864Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 
2025-08-13T19:44:07.422733853+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.422625Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:07.423965905+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.423923Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.423989806+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:07.423958Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.424074838+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424006Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.424092389+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424065Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:07.424092389+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424083Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"} 2025-08-13T19:44:07.427257203+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.426529Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"} 
2025-08-13T19:44:07.427644073+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.427503Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"} 2025-08-13T19:44:07.428726452+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.428478Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"} 2025-08-13T19:44:07.428914807+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.428851Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"} 2025-08-13T19:44:07.429300827+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.429224Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-08-13T19:44:07.429300827+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.429256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 2025-08-13T19:44:07.429875752+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.429724Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server metrics URL serving"} 2025-08-13T19:44:08.425312715+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.424962Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.425312715+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.425265Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:08.425740887+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.425532Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:08.425740887+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:08.425715Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:08.426240300+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426164Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:08.426464226+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426387Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:08.426516Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426558Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426594Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:10.295301275+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295099Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.295494500+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295467Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:10.295725296+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295638Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:10.295870770+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295831Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:10.296169798+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.296034Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:10.297898874+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:10.297841Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298034777+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:10.298004Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298270424+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.298237Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298428078+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.298396Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:12.743230101+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743099Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.743230101+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743181Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:12.743297363+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:12.743221Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:12.743297363+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743248Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:12.743509149+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743441Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:12.743921330+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.74386Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.743997642+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:12.74397Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.744110835+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.74408Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.744228798+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.744199Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632882Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632959Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632994Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633019Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633195Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:16.634001173+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:16.633257Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:16.633273Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633295Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633314Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054181Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054257Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:23.057118130+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:23.054296Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054322Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.055101Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079158Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079423Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, READY"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079478Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079495Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"} ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015133657743033006 5ustar 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/2.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/1.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/ (directory)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.log:
2026-01-20T10:47:09.336705091+00:00 stderr F I0120 10:47:09.336414 1 readyz.go:155] Listening on 0.0.0.0:9980
2026-01-20T10:47:21.049232990+00:00 stderr F I0120 10:47:21.049160 1 etcdcli_pool.go:70] creating a new cached client
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.log:
2026-01-20T10:42:06.982859573+00:00 stderr F I0120 10:42:06.982562 1 readyz.go:155] Listening on 0.0.0.0:9980
2026-01-20T10:42:18.851656967+00:00 stderr F I0120 10:42:18.851509 1 etcdcli_pool.go:70] creating a new cached client
2026-01-20T10:44:12.684413325+00:00 stderr F I0120 10:44:12.684362 1 etcdcli_pool.go:70] creating a new cached client
2026-01-20T10:46:12.244641657+00:00 stderr F 2026/01/20 10:46:12 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "localhost:2379", ServerName: "localhost:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp [::1]:2379: connect: connection refused"
2026-01-20T10:46:12.245861229+00:00 stderr F 2026/01/20 10:46:12 WARNING: [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "localhost:2379", ServerName: "localhost:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp [::1]:2379: connect: connection refused"
2026-01-20T10:46:12.291369226+00:00 stderr F I0120 10:46:12.291261 1 readyz.go:179] Received SIGTERM or SIGINT signal, shutting down readyz server.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.log:
2025-08-13T19:44:08.536660030+00:00 stderr F I0813 19:44:08.536156 1 readyz.go:155] Listening on 0.0.0.0:9980
2025-08-13T19:44:23.569024333+00:00 stderr F I0813 19:44:23.567456 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T20:01:39.363673157+00:00 stderr F I0813 20:01:39.363557 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T20:42:47.039150261+00:00 stderr F I0813 20:42:47.039006 1 readyz.go:179] Received SIGTERM or SIGINT signal, shutting down readyz server.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/ (directory)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/2.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/1.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/ (directory)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/ (directory)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.log:
zuulzuul2026-01-20T10:47:24.709581254+00:00 stderr F + [[ -f /env/_master ]] 2026-01-20T10:47:24.709581254+00:00 stderr F + ho_enable=--enable-hybrid-overlay 2026-01-20T10:47:24.710224172+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2026-01-20T10:47:24.714484217+00:00 stdout F I0120 10:47:24.712656767 - network-node-identity - start webhook 2026-01-20T10:47:24.714517038+00:00 stderr F + echo 'I0120 10:47:24.712656767 - network-node-identity - start webhook' 2026-01-20T10:47:24.714590110+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --webhook-cert-dir=/etc/webhook-cert --webhook-host=127.0.0.1 --webhook-port=9743 --enable-hybrid-overlay --enable-interconnect --disable-approver --extra-allowed-user=system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane --wait-for-kubernetes-api=200s --pod-admission-conditions=/var/run/ovnkube-identity-config/additional-pod-admission-cond.json --loglevel=2 2026-01-20T10:47:24.952512844+00:00 stderr F I0120 10:47:24.952393 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:2 port:9743 host:127.0.0.1 certDir:/etc/webhook-cert metricsAddress:0 leaseNamespace: enableInterconnect:true enableHybridOverlay:true disableWebhook:false disableApprover:true waitForKAPIDuration:200000000000 localKAPIPort:6443 extraAllowedUsers:{slice:[system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane] hasBeenSet:true} csrAcceptanceConditionFile: csrAcceptanceConditions:[] podAdmissionConditionFile:/var/run/ovnkube-identity-config/additional-pod-admission-cond.json podAdmissionConditions:[]} 2026-01-20T10:47:24.953421248+00:00 stderr F W0120 10:47:24.953352 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2026-01-20T10:47:24.964558759+00:00 stderr F I0120 10:47:24.964313 1 ovnkubeidentity.go:351] Waiting for caches to sync 2026-01-20T10:47:24.980449369+00:00 stderr F I0120 10:47:24.980356 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:47:25.065852848+00:00 stderr F I0120 10:47:25.065721 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2026-01-20T10:47:25.066524046+00:00 stderr F I0120 10:47:25.066135 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2026-01-20T10:47:25.066524046+00:00 stderr F I0120 10:47:25.066194 1 ovnkubeidentity.go:430] Starting the webhook server 2026-01-20T10:49:33.620860779+00:00 stderr F 2026/01/20 10:49:33 http: TLS handshake error from 127.0.0.1:36998: EOF 2026-01-20T10:49:33.624337505+00:00 stderr F 2026/01/20 10:49:33 http: TLS handshake error from 127.0.0.1:37004: read tcp 127.0.0.1:9743->127.0.0.1:37004: read: connection reset by peer 2026-01-20T10:49:34.104440388+00:00 stderr F 2026/01/20 10:49:34 http: TLS handshake error from 127.0.0.1:37020: EOF 2026-01-20T10:53:51.113656466+00:00 stderr F I0120 10:53:51.113559 1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.key\"" 2026-01-20T10:53:51.115402913+00:00 stderr F I0120 10:53:51.115203 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2026-01-20T10:53:51.115579908+00:00 stderr F I0120 10:53:51.115494 1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.crt\"" 2026-01-20T10:53:51.117123639+00:00 stderr F I0120 10:53:51.116993 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2026-01-20T10:56:34.537112002+00:00 stderr F 2026/01/20 10:56:34 http: TLS handshake error from 127.0.0.1:55066: read tcp 
127.0.0.1:9743->127.0.0.1:55066: read: connection reset by peer 2026-01-20T10:57:48.961633318+00:00 stderr F I0120 10:57:48.961585 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 ././@LongLink0000644000000000000000000000027500000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000644000175000017500000271462415133657715033123 0ustar zuulzuul2025-08-13T19:50:43.127891376+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T19:50:43.128098882+00:00 stderr F + ho_enable=--enable-hybrid-overlay 2025-08-13T19:50:43.129082180+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T19:50:43.184222956+00:00 stderr F + echo 'I0813 19:50:43.132736804 - network-node-identity - start webhook' 2025-08-13T19:50:43.184322418+00:00 stdout F I0813 19:50:43.132736804 - network-node-identity - start webhook 2025-08-13T19:50:43.184928436+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --webhook-cert-dir=/etc/webhook-cert --webhook-host=127.0.0.1 --webhook-port=9743 --enable-hybrid-overlay --enable-interconnect --disable-approver --extra-allowed-user=system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane --wait-for-kubernetes-api=200s --pod-admission-conditions=/var/run/ovnkube-identity-config/additional-pod-admission-cond.json --loglevel=2 2025-08-13T19:50:47.188251283+00:00 stderr F I0813 19:50:47.185908 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:2 port:9743 host:127.0.0.1 certDir:/etc/webhook-cert metricsAddress:0 leaseNamespace: enableInterconnect:true enableHybridOverlay:true disableWebhook:false disableApprover:true waitForKAPIDuration:200000000000 
localKAPIPort:6443 extraAllowedUsers:{slice:[system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane] hasBeenSet:true} csrAcceptanceConditionFile: csrAcceptanceConditions:[] podAdmissionConditionFile:/var/run/ovnkube-identity-config/additional-pod-admission-cond.json podAdmissionConditions:[]} 2025-08-13T19:50:47.188251283+00:00 stderr F W0813 19:50:47.188191 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:50:47.261974120+00:00 stderr F I0813 19:50:47.259425 1 ovnkubeidentity.go:351] Waiting for caches to sync 2025-08-13T19:50:47.517742120+00:00 stderr F I0813 19:50:47.517497 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:47.577168279+00:00 stderr F I0813 19:50:47.576751 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T19:50:47.590146560+00:00 stderr F I0813 19:50:47.587224 1 ovnkubeidentity.go:430] Starting the webhook server 2025-08-13T19:50:47.594226226+00:00 stderr F I0813 19:50:47.582558 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2025-08-13T19:50:47.745047647+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53894: remote error: tls: bad certificate 2025-08-13T19:50:47.793929724+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53908: remote error: tls: bad certificate 2025-08-13T19:50:47.868213187+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53924: remote error: tls: bad certificate 2025-08-13T19:50:47.964301333+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53936: remote error: tls: bad certificate 2025-08-13T19:50:48.029546398+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53952: remote error: tls: bad certificate 
2025-08-13T19:50:48.147259312+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53954: remote error: tls: bad certificate 2025-08-13T19:50:48.215592515+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53960: remote error: tls: bad certificate 2025-08-13T19:50:48.280282874+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53968: remote error: tls: bad certificate 2025-08-13T19:50:48.342619216+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53984: remote error: tls: bad certificate 2025-08-13T19:50:48.687006018+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53992: remote error: tls: bad certificate 2025-08-13T19:50:48.866582011+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:51718: remote error: tls: bad certificate 2025-08-13T19:50:48.912027150+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:51724: remote error: tls: bad certificate 2025-08-13T19:50:48.944431296+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:51736: remote error: tls: bad certificate 2025-08-13T19:50:49.378656735+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51750: remote error: tls: bad certificate 2025-08-13T19:50:49.710110938+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51754: remote error: tls: bad certificate 2025-08-13T19:50:49.824538099+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51762: remote error: tls: bad certificate 2025-08-13T19:50:49.900767758+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51766: remote error: tls: bad certificate 2025-08-13T19:50:49.986411945+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51780: remote error: tls: bad certificate 2025-08-13T19:50:50.647326135+00:00 stderr F 2025/08/13 19:50:50 http: TLS 
handshake error from 127.0.0.1:51790: remote error: tls: bad certificate 2025-08-13T19:50:51.312891567+00:00 stderr F 2025/08/13 19:50:51 http: TLS handshake error from 127.0.0.1:51806: remote error: tls: bad certificate 2025-08-13T19:50:52.077974724+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51822: remote error: tls: bad certificate 2025-08-13T19:50:52.149405405+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51826: remote error: tls: bad certificate 2025-08-13T19:50:52.196607595+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51832: remote error: tls: bad certificate 2025-08-13T19:50:52.244962827+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51846: remote error: tls: bad certificate 2025-08-13T19:50:52.314900675+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51862: remote error: tls: bad certificate 2025-08-13T19:50:52.379511692+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51864: remote error: tls: bad certificate 2025-08-13T19:50:52.434951447+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51872: remote error: tls: bad certificate 2025-08-13T19:50:52.503497206+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51882: remote error: tls: bad certificate 2025-08-13T19:50:52.594189118+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51894: remote error: tls: bad certificate 2025-08-13T19:50:52.645166325+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51902: remote error: tls: bad certificate 2025-08-13T19:50:52.700091544+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51906: remote error: tls: bad certificate 2025-08-13T19:50:52.759619795+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51916: remote error: tls: bad certificate 
2025-08-13T19:50:52.790890399+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51920: remote error: tls: bad certificate
2025-08-13T19:50:52.808982886+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51934: remote error: tls: bad certificate
2025-08-13T19:50:52.857934075+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51944: remote error: tls: bad certificate
2025-08-13T19:50:52.898973088+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51952: remote error: tls: bad certificate
2025-08-13T19:50:52.920450271+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51976: remote error: tls: bad certificate
2025-08-13T19:50:53.225985724+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:51992: remote error: tls: bad certificate
2025-08-13T19:50:53.241171268+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:51998: remote error: tls: bad certificate
2025-08-13T19:50:53.277023692+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:51964: remote error: tls: bad certificate
2025-08-13T19:50:53.294267605+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52008: remote error: tls: bad certificate
2025-08-13T19:50:53.327105514+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52024: remote error: tls: bad certificate
2025-08-13T19:50:53.330707707+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52038: remote error: tls: bad certificate
2025-08-13T19:50:53.362941238+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52044: remote error: tls: bad certificate
2025-08-13T19:50:53.377062762+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52056: remote error: tls: bad certificate
2025-08-13T19:50:53.412945867+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52064: remote error: tls: bad certificate
2025-08-13T19:50:53.441219756+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52068: remote error: tls: bad certificate
2025-08-13T19:50:53.451243732+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52076: remote error: tls: bad certificate
2025-08-13T19:50:53.478115140+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52078: remote error: tls: bad certificate
2025-08-13T19:50:53.520115450+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52090: remote error: tls: bad certificate
2025-08-13T19:50:53.605220803+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52092: remote error: tls: bad certificate
2025-08-13T19:50:53.639766180+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52106: remote error: tls: bad certificate
2025-08-13T19:50:53.704171171+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52120: remote error: tls: bad certificate
2025-08-13T19:50:53.753149931+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52136: remote error: tls: bad certificate
2025-08-13T19:50:53.821236107+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52152: remote error: tls: bad certificate
2025-08-13T19:50:53.864738370+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52158: remote error: tls: bad certificate
2025-08-13T19:50:53.962376541+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52168: remote error: tls: bad certificate
2025-08-13T19:50:53.974479196+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52174: remote error: tls: bad certificate
2025-08-13T19:50:54.024067274+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52186: remote error: tls: bad certificate
2025-08-13T19:50:54.030571270+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52178: remote error: tls: bad certificate
2025-08-13T19:50:54.079695624+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52202: remote error: tls: bad certificate
2025-08-13T19:50:54.114244271+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52204: remote error: tls: bad certificate
2025-08-13T19:50:54.160390340+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52210: remote error: tls: bad certificate
2025-08-13T19:50:54.251052751+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52212: remote error: tls: bad certificate
2025-08-13T19:50:54.313880267+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52228: remote error: tls: bad certificate
2025-08-13T19:50:54.570898253+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52240: remote error: tls: bad certificate
2025-08-13T19:50:54.755031685+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52252: remote error: tls: bad certificate
2025-08-13T19:50:54.912921418+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52262: remote error: tls: bad certificate
2025-08-13T19:50:55.003989171+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52272: remote error: tls: bad certificate
2025-08-13T19:50:55.046258119+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52288: remote error: tls: bad certificate
2025-08-13T19:50:55.117683540+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52294: remote error: tls: bad certificate
2025-08-13T19:50:55.312921210+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52310: remote error: tls: bad certificate
2025-08-13T19:50:55.382659673+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52316: remote error: tls: bad certificate
2025-08-13T19:50:55.451897042+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52326: remote error: tls: bad certificate
2025-08-13T19:50:55.533005970+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52334: remote error: tls: bad certificate
2025-08-13T19:50:55.849677271+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52348: remote error: tls: bad certificate
2025-08-13T19:50:55.917101448+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52364: remote error: tls: bad certificate
2025-08-13T19:50:56.051583702+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52366: remote error: tls: bad certificate
2025-08-13T19:50:56.185426907+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52378: remote error: tls: bad certificate
2025-08-13T19:50:56.239143822+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52384: remote error: tls: bad certificate
2025-08-13T19:50:56.289679247+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52386: remote error: tls: bad certificate
2025-08-13T19:50:56.334149327+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52390: remote error: tls: bad certificate
2025-08-13T19:50:56.375012175+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52394: remote error: tls: bad certificate
2025-08-13T19:50:56.439319143+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52410: remote error: tls: bad certificate
2025-08-13T19:50:56.645922828+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52426: remote error: tls: bad certificate
2025-08-13T19:50:56.699151349+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52432: remote error: tls: bad certificate
2025-08-13T19:50:56.994924873+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52440: remote error: tls: bad certificate
2025-08-13T19:50:57.237949208+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52454: remote error: tls: bad certificate
2025-08-13T19:50:57.305384326+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52456: remote error: tls: bad certificate
2025-08-13T19:50:57.386773162+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52472: remote error: tls: bad certificate
2025-08-13T19:50:57.465000898+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52488: remote error: tls: bad certificate
2025-08-13T19:50:57.496446756+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52496: remote error: tls: bad certificate
2025-08-13T19:50:57.522851991+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52506: remote error: tls: bad certificate
2025-08-13T19:50:57.553314542+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52522: remote error: tls: bad certificate
2025-08-13T19:50:57.598395340+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52524: remote error: tls: bad certificate
2025-08-13T19:50:57.665761646+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52528: remote error: tls: bad certificate
2025-08-13T19:50:57.732241096+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52540: remote error: tls: bad certificate
2025-08-13T19:50:57.800867447+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52550: remote error: tls: bad certificate
2025-08-13T19:50:57.842209629+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52560: remote error: tls: bad certificate
2025-08-13T19:50:57.889692366+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52576: remote error: tls: bad certificate
2025-08-13T19:50:57.934844296+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52588: remote error: tls: bad certificate
2025-08-13T19:50:58.005091664+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52604: remote error: tls: bad certificate
2025-08-13T19:50:58.042565145+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52620: remote error: tls: bad certificate
2025-08-13T19:50:58.213664785+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52632: remote error: tls: bad certificate
2025-08-13T19:50:58.300443125+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52634: remote error: tls: bad certificate
2025-08-13T19:50:58.342754055+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52646: remote error: tls: bad certificate
2025-08-13T19:50:58.397146519+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52662: remote error: tls: bad certificate
2025-08-13T19:50:58.424341976+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52674: remote error: tls: bad certificate
2025-08-13T19:50:58.462741084+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52686: remote error: tls: bad certificate
2025-08-13T19:50:58.499094283+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52700: remote error: tls: bad certificate
2025-08-13T19:50:58.532325083+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52702: remote error: tls: bad certificate
2025-08-13T19:50:58.559133629+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52704: remote error: tls: bad certificate
2025-08-13T19:50:58.598263777+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52714: remote error: tls: bad certificate
2025-08-13T19:50:58.618370812+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52722: remote error: tls: bad certificate
2025-08-13T19:50:58.651226101+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52726: remote error: tls: bad certificate
2025-08-13T19:50:58.682213447+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52730: remote error: tls: bad certificate
2025-08-13T19:50:58.707888900+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52746: remote error: tls: bad certificate
2025-08-13T19:50:58.742193181+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36542: remote error: tls: bad certificate
2025-08-13T19:50:58.767019280+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36550: remote error: tls: bad certificate
2025-08-13T19:50:58.793288891+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36562: remote error: tls: bad certificate
2025-08-13T19:50:58.820872670+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36570: remote error: tls: bad certificate
2025-08-13T19:50:58.871392534+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36586: remote error: tls: bad certificate
2025-08-13T19:50:58.904159000+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36594: remote error: tls: bad certificate
2025-08-13T19:50:58.940449017+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36608: remote error: tls: bad certificate
2025-08-13T19:50:58.970211598+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36616: remote error: tls: bad certificate
2025-08-13T19:50:59.017749157+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36624: remote error: tls: bad certificate
2025-08-13T19:50:59.071537814+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36626: remote error: tls: bad certificate
2025-08-13T19:50:59.123889600+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36634: remote error: tls: bad certificate
2025-08-13T19:50:59.173079226+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36640: remote error: tls: bad certificate
2025-08-13T19:50:59.227739088+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36650: remote error: tls: bad certificate
2025-08-13T19:50:59.281849325+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36664: remote error: tls: bad certificate
2025-08-13T19:50:59.317373600+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36668: remote error: tls: bad certificate
2025-08-13T19:50:59.348519680+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36674: remote error: tls: bad certificate
2025-08-13T19:50:59.406069255+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36682: remote error: tls: bad certificate
2025-08-13T19:50:59.445916724+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36694: remote error: tls: bad certificate
2025-08-13T19:50:59.470085825+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36700: remote error: tls: bad certificate
2025-08-13T19:50:59.497681314+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36706: remote error: tls: bad certificate
2025-08-13T19:50:59.530281755+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36710: remote error: tls: bad certificate
2025-08-13T19:50:59.599178285+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36716: remote error: tls: bad certificate
2025-08-13T19:50:59.649423641+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36732: remote error: tls: bad certificate
2025-08-13T19:50:59.684307848+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36740: remote error: tls: bad certificate
2025-08-13T19:50:59.718094433+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36756: remote error: tls: bad certificate
2025-08-13T19:50:59.754352070+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36768: remote error: tls: bad certificate
2025-08-13T19:50:59.800177539+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36776: remote error: tls: bad certificate
2025-08-13T19:50:59.847697058+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36780: remote error: tls: bad certificate
2025-08-13T19:50:59.890215563+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36788: remote error: tls: bad certificate
2025-08-13T19:50:59.934275701+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36790: remote error: tls: bad certificate
2025-08-13T19:50:59.991349152+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36800: remote error: tls: bad certificate
2025-08-13T19:51:00.049017011+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36802: remote error: tls: bad certificate
2025-08-13T19:51:00.132532968+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36818: remote error: tls: bad certificate
2025-08-13T19:51:00.189720412+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36822: remote error: tls: bad certificate
2025-08-13T19:51:00.223725634+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36830: remote error: tls: bad certificate
2025-08-13T19:51:00.261105912+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36838: remote error: tls: bad certificate
2025-08-13T19:51:00.300171249+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36842: remote error: tls: bad certificate
2025-08-13T19:51:00.316661440+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36856: remote error: tls: bad certificate
2025-08-13T19:51:00.357302802+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36858: remote error: tls: bad certificate
2025-08-13T19:51:00.386042854+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36874: remote error: tls: bad certificate
2025-08-13T19:51:00.420893710+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36888: remote error: tls: bad certificate
2025-08-13T19:51:00.454148510+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36898: remote error: tls: bad certificate
2025-08-13T19:51:00.504189320+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36914: remote error: tls: bad certificate
2025-08-13T19:51:00.550072572+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36926: remote error: tls: bad certificate
2025-08-13T19:51:00.574640284+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36938: remote error: tls: bad certificate
2025-08-13T19:51:00.617011395+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36940: remote error: tls: bad certificate
2025-08-13T19:51:00.650405229+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36950: remote error: tls: bad certificate
2025-08-13T19:51:00.686280005+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36966: remote error: tls: bad certificate
2025-08-13T19:51:00.725739813+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36982: remote error: tls: bad certificate
2025-08-13T19:51:00.773126677+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36990: remote error: tls: bad certificate
2025-08-13T19:51:00.829127177+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36992: remote error: tls: bad certificate
2025-08-13T19:51:00.853702340+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36994: remote error: tls: bad certificate
2025-08-13T19:51:00.897999775+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:37000: remote error: tls: bad certificate
2025-08-13T19:51:00.925969375+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:37012: remote error: tls: bad certificate
2025-08-13T19:51:00.956533788+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:37026: remote error: tls: bad certificate
2025-08-13T19:51:00.994699869+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:37042: remote error: tls: bad certificate
2025-08-13T19:51:01.033348614+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37050: remote error: tls: bad certificate
2025-08-13T19:51:01.060967123+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37066: remote error: tls: bad certificate
2025-08-13T19:51:01.100954416+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37068: remote error: tls: bad certificate
2025-08-13T19:51:01.144876961+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37076: remote error: tls: bad certificate
2025-08-13T19:51:01.175751554+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37090: remote error: tls: bad certificate
2025-08-13T19:51:01.207250064+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37104: remote error: tls: bad certificate
2025-08-13T19:51:01.247654899+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37114: remote error: tls: bad certificate
2025-08-13T19:51:01.276133483+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37118: remote error: tls: bad certificate
2025-08-13T19:51:01.306406678+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37130: remote error: tls: bad certificate
2025-08-13T19:51:01.339034530+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37138: remote error: tls: bad certificate
2025-08-13T19:51:01.360545975+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37148: remote error: tls: bad certificate
2025-08-13T19:51:01.395433632+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37154: remote error: tls: bad certificate
2025-08-13T19:51:01.511654254+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37164: remote error: tls: bad certificate
2025-08-13T19:51:01.516197744+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37176: remote error: tls: bad certificate
2025-08-13T19:51:01.612159086+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37184: remote error: tls: bad certificate
2025-08-13T19:51:01.653710183+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37192: remote error: tls: bad certificate
2025-08-13T19:51:01.710394323+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37200: remote error: tls: bad certificate
2025-08-13T19:51:01.746680500+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37210: remote error: tls: bad certificate
2025-08-13T19:51:01.803880064+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37214: remote error: tls: bad certificate
2025-08-13T19:51:01.839914504+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37230: remote error: tls: bad certificate
2025-08-13T19:51:01.870634442+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37234: remote error: tls: bad certificate
2025-08-13T19:51:01.949026672+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37236: remote error: tls: bad certificate
2025-08-13T19:51:01.998071504+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37244: remote error: tls: bad certificate
2025-08-13T19:51:02.030856891+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37246: remote error: tls: bad certificate
2025-08-13T19:51:02.066442158+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37252: remote error: tls: bad certificate
2025-08-13T19:51:02.105622638+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37268: remote error: tls: bad certificate
2025-08-13T19:51:02.146911037+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37280: remote error: tls: bad certificate
2025-08-13T19:51:02.177227054+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37286: remote error: tls: bad certificate
2025-08-13T19:51:02.195575838+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37292: remote error: tls: bad certificate
2025-08-13T19:51:02.227222943+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37304: remote error: tls: bad certificate
2025-08-13T19:51:02.254015388+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37310: remote error: tls: bad certificate
2025-08-13T19:51:02.282593005+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37322: remote error: tls: bad certificate
2025-08-13T19:51:02.313024875+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37330: remote error: tls: bad certificate
2025-08-13T19:51:02.339564593+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37342: remote error: tls: bad certificate
2025-08-13T19:51:02.372934127+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37350: remote error: tls: bad certificate
2025-08-13T19:51:02.391770735+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37358: remote error: tls: bad certificate
2025-08-13T19:51:02.465381079+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37366: remote error: tls: bad certificate
2025-08-13T19:51:02.492945957+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37378: remote error: tls: bad certificate
2025-08-13T19:51:02.522199363+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37394: remote error: tls: bad certificate
2025-08-13T19:51:02.562970878+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37400: remote error: tls: bad certificate
2025-08-13T19:51:02.611675860+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37408: remote error: tls: bad certificate
2025-08-13T19:51:02.646425693+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37416: remote error: tls: bad certificate
2025-08-13T19:51:02.675100923+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37420: remote error: tls: bad certificate
2025-08-13T19:51:03.107905973+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37426: remote error: tls: bad certificate
2025-08-13T19:51:03.165928801+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37434: remote error: tls: bad certificate
2025-08-13T19:51:03.216223679+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37442: remote error: tls: bad certificate
2025-08-13T19:51:03.248332746+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37448: remote error: tls: bad certificate
2025-08-13T19:51:03.313901890+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37452: remote error: tls: bad certificate
2025-08-13T19:51:03.355765376+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37468: remote error: tls: bad certificate
2025-08-13T19:51:03.383241102+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37470: remote error: tls: bad certificate
2025-08-13T19:51:03.406117275+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37482: remote error: tls: bad certificate
2025-08-13T19:51:03.462346822+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37498: remote error: tls: bad certificate
2025-08-13T19:51:03.512550436+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37510: remote error: tls: bad certificate
2025-08-13T19:51:03.556443761+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37514: remote error: tls: bad certificate
2025-08-13T19:51:03.592369398+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37516: remote error: tls: bad certificate
2025-08-13T19:51:03.629024485+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37518: remote error: tls: bad certificate
2025-08-13T19:51:03.692179381+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37524: remote error: tls: bad certificate
2025-08-13T19:51:03.730339411+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37538: remote error: tls: bad certificate
2025-08-13T19:51:03.860483790+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37546: remote error: tls: bad certificate
2025-08-13T19:51:03.886262596+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37548: remote error: tls: bad certificate
2025-08-13T19:51:03.922530403+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37554: remote error: tls: bad certificate
2025-08-13T19:51:03.963665218+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37566: remote error: tls: bad certificate
2025-08-13T19:51:03.969899456+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37570: remote error: tls: bad certificate
2025-08-13T19:51:03.998223616+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37572: remote error: tls: bad certificate
2025-08-13T19:51:04.014296155+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37586: remote error: tls: bad certificate
2025-08-13T19:51:04.035582264+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37602: remote error: tls: bad certificate
2025-08-13T19:51:04.067182027+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37606: remote error: tls: bad certificate
2025-08-13T19:51:04.088258739+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37622: remote error: tls: bad certificate
2025-08-13T19:51:04.096409982+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37628: remote error: tls: bad certificate
2025-08-13T19:51:04.142221921+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37638: remote error: tls: bad certificate
2025-08-13T19:51:04.172312281+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37644: remote error: tls: bad certificate
2025-08-13T19:51:04.212474329+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37658: remote error: tls: bad certificate
2025-08-13T19:51:04.263190889+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37668: remote error: tls: bad certificate
2025-08-13T19:51:04.299631800+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37682: remote error: tls: bad certificate
2025-08-13T19:51:04.387549273+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37690: remote error: tls: bad certificate
2025-08-13T19:51:04.424341654+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37698: remote error: tls: bad certificate
2025-08-13T19:51:04.497659110+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37704: remote error: tls: bad certificate
2025-08-13T19:51:04.610452053+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37716: remote error: tls: bad certificate
2025-08-13T19:51:04.640043379+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37720: remote error: tls: bad certificate
2025-08-13T19:51:04.668125171+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37732: remote error: tls: bad certificate
2025-08-13T19:51:04.746185803+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37734: remote error: tls: bad certificate
2025-08-13T19:51:04.946409465+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37748: remote error: tls: bad certificate
2025-08-13T19:51:05.034016969+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37754: remote error: tls: bad certificate
2025-08-13T19:51:05.061009981+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37766: remote error: tls: bad certificate
2025-08-13T19:51:05.091070990+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37770: remote error: tls: bad certificate
2025-08-13T19:51:05.131356681+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37776: remote error: tls: bad certificate
2025-08-13T19:51:05.205135240+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37780: remote error: tls: bad certificate
2025-08-13T19:51:05.255356405+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37794: remote error: tls: bad certificate
2025-08-13T19:51:05.387906103+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37810: remote error: tls: bad certificate
2025-08-13T19:51:05.443694848+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37816: remote error: tls: bad certificate
2025-08-13T19:51:05.519501824+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37824: remote error: tls: bad certificate
2025-08-13T19:51:05.543353656+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37834: remote error: tls: bad certificate
2025-08-13T19:51:05.620419769+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37848: remote error: tls: bad certificate
2025-08-13T19:51:05.643328973+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37864: remote error: tls: bad certificate
2025-08-13T19:51:05.746574714+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37876: remote error: tls: bad certificate
2025-08-13T19:51:05.817102220+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37888: remote error: tls: bad certificate
2025-08-13T19:51:05.962096764+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37898: remote error: tls: bad certificate
2025-08-13T19:51:06.000694597+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37910: remote error: tls: bad certificate
2025-08-13T19:51:06.087982192+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37920: remote error: tls: bad certificate
2025-08-13T19:51:06.151663462+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37926: remote error: tls: bad certificate
2025-08-13T19:51:07.520441000+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37936: remote error: tls: bad certificate
2025-08-13T19:51:07.584917952+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37942: remote error: tls: bad certificate
2025-08-13T19:51:07.637642999+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37950: remote error: tls: bad certificate 2025-08-13T19:51:07.664908569+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37956: remote error: tls: bad certificate 2025-08-13T19:51:07.688984427+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37960: remote error: tls: bad certificate 2025-08-13T19:51:07.736749202+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37976: remote error: tls: bad certificate 2025-08-13T19:51:07.798507217+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37990: remote error: tls: bad certificate 2025-08-13T19:51:07.826221579+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37994: remote error: tls: bad certificate 2025-08-13T19:51:07.859079428+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38000: remote error: tls: bad certificate 2025-08-13T19:51:07.896648992+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38008: remote error: tls: bad certificate 2025-08-13T19:51:07.936917273+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38018: remote error: tls: bad certificate 2025-08-13T19:51:07.967011883+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38030: remote error: tls: bad certificate 2025-08-13T19:51:08.022760096+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38032: remote error: tls: bad certificate 2025-08-13T19:51:08.058379924+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38046: remote error: tls: bad certificate 2025-08-13T19:51:08.094905998+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38048: remote error: tls: bad certificate 2025-08-13T19:51:08.119050568+00:00 stderr F 2025/08/13 19:51:08 http: TLS 
handshake error from 127.0.0.1:38058: remote error: tls: bad certificate 2025-08-13T19:51:08.146479512+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38066: remote error: tls: bad certificate 2025-08-13T19:51:08.177530120+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38070: remote error: tls: bad certificate 2025-08-13T19:51:08.201698570+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38078: remote error: tls: bad certificate 2025-08-13T19:51:08.231027769+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38094: remote error: tls: bad certificate 2025-08-13T19:51:08.259730369+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38102: remote error: tls: bad certificate 2025-08-13T19:51:08.289092248+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38110: remote error: tls: bad certificate 2025-08-13T19:51:08.309397008+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38114: remote error: tls: bad certificate 2025-08-13T19:51:08.936181992+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:55404: remote error: tls: bad certificate 2025-08-13T19:51:08.979532561+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:55406: remote error: tls: bad certificate 2025-08-13T19:51:09.008619833+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55422: remote error: tls: bad certificate 2025-08-13T19:51:09.045163547+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55424: remote error: tls: bad certificate 2025-08-13T19:51:09.072073746+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55434: remote error: tls: bad certificate 2025-08-13T19:51:09.111767841+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55442: remote error: tls: bad certificate 
2025-08-13T19:51:09.139935806+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55444: remote error: tls: bad certificate 2025-08-13T19:51:09.168216894+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55456: remote error: tls: bad certificate 2025-08-13T19:51:09.200169877+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55470: remote error: tls: bad certificate 2025-08-13T19:51:09.232715287+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:51:09.267215783+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55494: remote error: tls: bad certificate 2025-08-13T19:51:09.296091539+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55504: remote error: tls: bad certificate 2025-08-13T19:51:09.343610397+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55512: remote error: tls: bad certificate 2025-08-13T19:51:09.848109466+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55528: remote error: tls: bad certificate 2025-08-13T19:51:09.878282038+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55530: remote error: tls: bad certificate 2025-08-13T19:51:09.916370777+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55534: remote error: tls: bad certificate 2025-08-13T19:51:09.945860180+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate 2025-08-13T19:51:09.970298888+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55548: remote error: tls: bad certificate 2025-08-13T19:51:09.989120466+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55554: remote error: tls: bad certificate 2025-08-13T19:51:10.010124886+00:00 stderr F 2025/08/13 19:51:10 http: TLS 
handshake error from 127.0.0.1:55564: remote error: tls: bad certificate 2025-08-13T19:51:10.029567152+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 2025-08-13T19:51:10.082567337+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55582: remote error: tls: bad certificate 2025-08-13T19:51:10.200974171+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55590: remote error: tls: bad certificate 2025-08-13T19:51:10.269140019+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55596: remote error: tls: bad certificate 2025-08-13T19:51:10.526060142+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55600: remote error: tls: bad certificate 2025-08-13T19:51:10.565277933+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55614: remote error: tls: bad certificate 2025-08-13T19:51:10.621054507+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55624: remote error: tls: bad certificate 2025-08-13T19:51:10.680709671+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55636: remote error: tls: bad certificate 2025-08-13T19:51:10.708190187+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55650: remote error: tls: bad certificate 2025-08-13T19:51:10.765766342+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55658: remote error: tls: bad certificate 2025-08-13T19:51:10.813054613+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55670: remote error: tls: bad certificate 2025-08-13T19:51:10.851977196+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55672: remote error: tls: bad certificate 2025-08-13T19:51:10.909139640+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 
2025-08-13T19:51:11.112006178+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55678: remote error: tls: bad certificate 2025-08-13T19:51:11.134733157+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55682: remote error: tls: bad certificate 2025-08-13T19:51:11.179219989+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55692: remote error: tls: bad certificate 2025-08-13T19:51:11.199674053+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate 2025-08-13T19:51:11.236478535+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55710: remote error: tls: bad certificate 2025-08-13T19:51:11.258368291+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55724: remote error: tls: bad certificate 2025-08-13T19:51:11.277408425+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55740: remote error: tls: bad certificate 2025-08-13T19:51:11.310586903+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55750: remote error: tls: bad certificate 2025-08-13T19:51:11.337531274+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55762: remote error: tls: bad certificate 2025-08-13T19:51:11.359350087+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55764: remote error: tls: bad certificate 2025-08-13T19:51:11.384598969+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55778: remote error: tls: bad certificate 2025-08-13T19:51:11.405238439+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55794: remote error: tls: bad certificate 2025-08-13T19:51:11.441067473+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55804: remote error: tls: bad certificate 2025-08-13T19:51:11.466377236+00:00 stderr F 2025/08/13 19:51:11 http: TLS 
handshake error from 127.0.0.1:55816: remote error: tls: bad certificate 2025-08-13T19:51:11.488383965+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55832: remote error: tls: bad certificate 2025-08-13T19:51:11.513172993+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55842: remote error: tls: bad certificate 2025-08-13T19:51:11.541749410+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55856: remote error: tls: bad certificate 2025-08-13T19:51:11.569414231+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55858: remote error: tls: bad certificate 2025-08-13T19:51:11.583654368+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55872: remote error: tls: bad certificate 2025-08-13T19:51:11.604279097+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55888: remote error: tls: bad certificate 2025-08-13T19:51:11.623957230+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55890: remote error: tls: bad certificate 2025-08-13T19:51:11.645020082+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55902: remote error: tls: bad certificate 2025-08-13T19:51:11.662619235+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55914: remote error: tls: bad certificate 2025-08-13T19:51:11.677941633+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55922: remote error: tls: bad certificate 2025-08-13T19:51:11.696610586+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55924: remote error: tls: bad certificate 2025-08-13T19:51:11.720107488+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55936: remote error: tls: bad certificate 2025-08-13T19:51:11.740327816+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55952: remote error: tls: bad certificate 
2025-08-13T19:51:11.757151677+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55962: remote error: tls: bad certificate 2025-08-13T19:51:11.778065764+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55976: remote error: tls: bad certificate 2025-08-13T19:51:11.800046323+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55982: remote error: tls: bad certificate 2025-08-13T19:51:11.818430788+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55986: remote error: tls: bad certificate 2025-08-13T19:51:11.835852666+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55996: remote error: tls: bad certificate 2025-08-13T19:51:11.869709524+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56006: remote error: tls: bad certificate 2025-08-13T19:51:11.901452691+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56022: remote error: tls: bad certificate 2025-08-13T19:51:11.925191249+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56032: remote error: tls: bad certificate 2025-08-13T19:51:11.944592524+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56038: remote error: tls: bad certificate 2025-08-13T19:51:11.964132062+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56044: remote error: tls: bad certificate 2025-08-13T19:51:11.987549822+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56050: remote error: tls: bad certificate 2025-08-13T19:51:12.007037978+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56062: remote error: tls: bad certificate 2025-08-13T19:51:12.026225957+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56070: remote error: tls: bad certificate 2025-08-13T19:51:12.049928134+00:00 stderr F 2025/08/13 19:51:12 http: TLS 
handshake error from 127.0.0.1:56074: remote error: tls: bad certificate 2025-08-13T19:51:12.066472967+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56080: remote error: tls: bad certificate 2025-08-13T19:51:12.094082986+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56084: remote error: tls: bad certificate 2025-08-13T19:51:12.121739217+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56098: remote error: tls: bad certificate 2025-08-13T19:51:12.157597582+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56100: remote error: tls: bad certificate 2025-08-13T19:51:12.177716267+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56116: remote error: tls: bad certificate 2025-08-13T19:51:12.207557610+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56128: remote error: tls: bad certificate 2025-08-13T19:51:12.232195604+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56140: remote error: tls: bad certificate 2025-08-13T19:51:12.247984205+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56142: remote error: tls: bad certificate 2025-08-13T19:51:12.263884989+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56148: remote error: tls: bad certificate 2025-08-13T19:51:12.283077628+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56150: remote error: tls: bad certificate 2025-08-13T19:51:12.301991789+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56166: remote error: tls: bad certificate 2025-08-13T19:51:12.315766972+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56176: remote error: tls: bad certificate 2025-08-13T19:51:12.331283186+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56178: remote error: tls: bad certificate 
2025-08-13T19:51:12.349030813+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56182: remote error: tls: bad certificate 2025-08-13T19:51:12.366279616+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56186: remote error: tls: bad certificate 2025-08-13T19:51:12.391048224+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56188: remote error: tls: bad certificate 2025-08-13T19:51:12.407750971+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56196: remote error: tls: bad certificate 2025-08-13T19:51:12.424480839+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56198: remote error: tls: bad certificate 2025-08-13T19:51:12.441757163+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56200: remote error: tls: bad certificate 2025-08-13T19:51:12.460493379+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56216: remote error: tls: bad certificate 2025-08-13T19:51:12.487909002+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56218: remote error: tls: bad certificate 2025-08-13T19:51:12.503107247+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56222: remote error: tls: bad certificate 2025-08-13T19:51:12.519890386+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56232: remote error: tls: bad certificate 2025-08-13T19:51:12.541567386+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56248: remote error: tls: bad certificate 2025-08-13T19:51:12.559672193+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56250: remote error: tls: bad certificate 2025-08-13T19:51:12.574982911+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56252: remote error: tls: bad certificate 2025-08-13T19:51:12.599274365+00:00 stderr F 2025/08/13 19:51:12 http: TLS 
handshake error from 127.0.0.1:56258: remote error: tls: bad certificate 2025-08-13T19:51:12.622022115+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56262: remote error: tls: bad certificate 2025-08-13T19:51:12.643713105+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56268: remote error: tls: bad certificate 2025-08-13T19:51:12.660965638+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56284: remote error: tls: bad certificate 2025-08-13T19:51:12.676703558+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56296: remote error: tls: bad certificate 2025-08-13T19:51:12.695582158+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56308: remote error: tls: bad certificate 2025-08-13T19:51:12.712148481+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56312: remote error: tls: bad certificate 2025-08-13T19:51:12.729627371+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56328: remote error: tls: bad certificate 2025-08-13T19:51:12.746057320+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56340: remote error: tls: bad certificate 2025-08-13T19:51:12.769516241+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56348: remote error: tls: bad certificate 2025-08-13T19:51:12.795541405+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56358: remote error: tls: bad certificate 2025-08-13T19:51:12.811990965+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56368: remote error: tls: bad certificate 2025-08-13T19:51:12.829954018+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56382: remote error: tls: bad certificate 2025-08-13T19:51:12.846641755+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56398: remote error: tls: bad certificate 
2025-08-13T19:51:12.873191424+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56402: remote error: tls: bad certificate 2025-08-13T19:51:12.893746191+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56418: remote error: tls: bad certificate 2025-08-13T19:51:12.909586664+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56430: remote error: tls: bad certificate 2025-08-13T19:51:12.935738082+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56434: remote error: tls: bad certificate 2025-08-13T19:51:12.951402209+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56450: remote error: tls: bad certificate 2025-08-13T19:51:12.971053681+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56458: remote error: tls: bad certificate 2025-08-13T19:51:12.988277633+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56464: remote error: tls: bad certificate 2025-08-13T19:51:13.002164440+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56470: remote error: tls: bad certificate 2025-08-13T19:51:13.018524058+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56480: remote error: tls: bad certificate 2025-08-13T19:51:13.037715756+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56484: remote error: tls: bad certificate 2025-08-13T19:51:13.055989508+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56492: remote error: tls: bad certificate 2025-08-13T19:51:13.077931436+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56502: remote error: tls: bad certificate 2025-08-13T19:51:13.094364215+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56516: remote error: tls: bad certificate 2025-08-13T19:51:13.108995613+00:00 stderr F 2025/08/13 19:51:13 http: TLS 
handshake error from 127.0.0.1:56532: remote error: tls: bad certificate 2025-08-13T19:51:13.126513384+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56538: remote error: tls: bad certificate 2025-08-13T19:51:13.139469984+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56554: remote error: tls: bad certificate 2025-08-13T19:51:13.157605813+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56558: remote error: tls: bad certificate 2025-08-13T19:51:13.176663217+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56566: remote error: tls: bad certificate 2025-08-13T19:51:13.193339994+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56568: remote error: tls: bad certificate 2025-08-13T19:51:13.210146094+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56570: remote error: tls: bad certificate 2025-08-13T19:51:13.237200328+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56586: remote error: tls: bad certificate 2025-08-13T19:51:13.257216900+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56602: remote error: tls: bad certificate 2025-08-13T19:51:13.275069500+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56614: remote error: tls: bad certificate 2025-08-13T19:51:13.296548644+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56618: remote error: tls: bad certificate 2025-08-13T19:51:13.336249349+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56632: remote error: tls: bad certificate 2025-08-13T19:51:13.375628944+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56642: remote error: tls: bad certificate 2025-08-13T19:51:13.417585093+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56654: remote error: tls: bad certificate 
2025-08-13T19:51:13.460748847+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56662: remote error: tls: bad certificate 2025-08-13T19:51:13.496887470+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56672: remote error: tls: bad certificate 2025-08-13T19:51:13.538101458+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56674: remote error: tls: bad certificate 2025-08-13T19:51:13.577150764+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56676: remote error: tls: bad certificate 2025-08-13T19:51:13.614976155+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56692: remote error: tls: bad certificate 2025-08-13T19:51:13.654620938+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56704: remote error: tls: bad certificate 2025-08-13T19:51:13.696578717+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56720: remote error: tls: bad certificate 2025-08-13T19:51:13.736185909+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56726: remote error: tls: bad certificate 2025-08-13T19:51:13.776889442+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56738: remote error: tls: bad certificate 2025-08-13T19:51:13.825333577+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56750: remote error: tls: bad certificate 2025-08-13T19:51:13.878017402+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56754: remote error: tls: bad certificate 2025-08-13T19:51:13.918018146+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56762: remote error: tls: bad certificate 2025-08-13T19:51:13.940852788+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56778: remote error: tls: bad certificate 2025-08-13T19:51:13.975715405+00:00 stderr F 2025/08/13 19:51:13 http: TLS 
handshake error from 127.0.0.1:56784: remote error: tls: bad certificate 2025-08-13T19:51:14.017318674+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56788: remote error: tls: bad certificate 2025-08-13T19:51:14.055215897+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56796: remote error: tls: bad certificate 2025-08-13T19:51:14.104613509+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56808: remote error: tls: bad certificate 2025-08-13T19:51:14.136732097+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56814: remote error: tls: bad certificate 2025-08-13T19:51:14.173913969+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56822: remote error: tls: bad certificate 2025-08-13T19:51:14.220596933+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56832: remote error: tls: bad certificate 2025-08-13T19:51:14.255966324+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56846: remote error: tls: bad certificate 2025-08-13T19:51:14.301126084+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56858: remote error: tls: bad certificate 2025-08-13T19:51:14.330879975+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56862: remote error: tls: bad certificate 2025-08-13T19:51:14.337535445+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56872: remote error: tls: bad certificate 2025-08-13T19:51:14.356650221+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56882: remote error: tls: bad certificate 2025-08-13T19:51:14.385160856+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56894: remote error: tls: bad certificate 2025-08-13T19:51:14.386954837+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56906: remote error: tls: bad certificate 
2025-08-13T19:51:14.405027844+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56910: remote error: tls: bad certificate 2025-08-13T19:51:14.418855549+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56920: remote error: tls: bad certificate 2025-08-13T19:51:14.430440620+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56926: remote error: tls: bad certificate 2025-08-13T19:51:14.455565418+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56930: remote error: tls: bad certificate 2025-08-13T19:51:14.496387855+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56934: remote error: tls: bad certificate 2025-08-13T19:51:14.537382497+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56950: remote error: tls: bad certificate 2025-08-13T19:51:14.578863402+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56966: remote error: tls: bad certificate 2025-08-13T19:51:14.625207267+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56968: remote error: tls: bad certificate 2025-08-13T19:51:14.666023853+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56970: remote error: tls: bad certificate 2025-08-13T19:51:14.697620196+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56982: remote error: tls: bad certificate 2025-08-13T19:51:14.737162267+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56998: remote error: tls: bad certificate 2025-08-13T19:51:14.779391794+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57008: remote error: tls: bad certificate 2025-08-13T19:51:14.832110740+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57016: remote error: tls: bad certificate 2025-08-13T19:51:14.857645800+00:00 stderr F 2025/08/13 19:51:14 http: TLS 
handshake error from 127.0.0.1:57024: remote error: tls: bad certificate 2025-08-13T19:51:14.899008222+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57028: remote error: tls: bad certificate 2025-08-13T19:51:14.942346031+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57038: remote error: tls: bad certificate 2025-08-13T19:51:14.975859139+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57040: remote error: tls: bad certificate 2025-08-13T19:51:15.016744567+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57046: remote error: tls: bad certificate 2025-08-13T19:51:15.054408254+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57056: remote error: tls: bad certificate 2025-08-13T19:51:15.094640014+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57060: remote error: tls: bad certificate 2025-08-13T19:51:15.139407263+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57066: remote error: tls: bad certificate 2025-08-13T19:51:15.176716129+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57068: remote error: tls: bad certificate 2025-08-13T19:51:15.220768898+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57076: remote error: tls: bad certificate 2025-08-13T19:51:15.252941698+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57092: remote error: tls: bad certificate 2025-08-13T19:51:15.358891256+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57104: remote error: tls: bad certificate 2025-08-13T19:51:15.619594007+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57112: remote error: tls: bad certificate 2025-08-13T19:51:15.684559674+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57120: remote error: tls: bad certificate 
2025-08-13T19:51:15.848595482+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57122: remote error: tls: bad certificate 2025-08-13T19:51:15.893192097+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57130: remote error: tls: bad certificate 2025-08-13T19:51:16.000179064+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57140: remote error: tls: bad certificate 2025-08-13T19:51:16.049123203+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57144: remote error: tls: bad certificate 2025-08-13T19:51:16.079272745+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57146: remote error: tls: bad certificate 2025-08-13T19:51:16.288498265+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57162: remote error: tls: bad certificate 2025-08-13T19:51:16.361176402+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57172: remote error: tls: bad certificate 2025-08-13T19:51:16.380248147+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57180: remote error: tls: bad certificate 2025-08-13T19:51:16.402147563+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57182: remote error: tls: bad certificate 2025-08-13T19:51:16.419362805+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57190: remote error: tls: bad certificate 2025-08-13T19:51:16.437168534+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57194: remote error: tls: bad certificate 2025-08-13T19:51:16.470389923+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57204: remote error: tls: bad certificate 2025-08-13T19:51:16.492042662+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57218: remote error: tls: bad certificate 2025-08-13T19:51:16.512763464+00:00 stderr F 2025/08/13 19:51:16 http: TLS 
handshake error from 127.0.0.1:57220: remote error: tls: bad certificate 2025-08-13T19:51:16.532359734+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57224: remote error: tls: bad certificate 2025-08-13T19:51:16.552450229+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57230: remote error: tls: bad certificate 2025-08-13T19:51:16.572054709+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57238: remote error: tls: bad certificate 2025-08-13T19:51:16.594323935+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57242: remote error: tls: bad certificate 2025-08-13T19:51:16.637283303+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57254: remote error: tls: bad certificate 2025-08-13T19:51:16.652766666+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57256: remote error: tls: bad certificate 2025-08-13T19:51:16.669300648+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57260: remote error: tls: bad certificate 2025-08-13T19:51:16.700094658+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57262: remote error: tls: bad certificate 2025-08-13T19:51:16.724684431+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57268: remote error: tls: bad certificate 2025-08-13T19:51:16.744451996+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57272: remote error: tls: bad certificate 2025-08-13T19:51:16.758405375+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57280: remote error: tls: bad certificate 2025-08-13T19:51:16.773868067+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57290: remote error: tls: bad certificate 2025-08-13T19:51:16.795180356+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57296: remote error: tls: bad certificate 
2025-08-13T19:51:16.830660130+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57310: remote error: tls: bad certificate 2025-08-13T19:51:16.847641125+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57314: remote error: tls: bad certificate 2025-08-13T19:51:16.864753594+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57326: remote error: tls: bad certificate 2025-08-13T19:51:16.886215948+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57330: remote error: tls: bad certificate 2025-08-13T19:51:16.907208558+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate 2025-08-13T19:51:16.923319178+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57348: remote error: tls: bad certificate 2025-08-13T19:51:16.937390440+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57362: remote error: tls: bad certificate 2025-08-13T19:51:16.953968324+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57376: remote error: tls: bad certificate 2025-08-13T19:51:16.969876929+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57384: remote error: tls: bad certificate 2025-08-13T19:51:16.988606234+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57398: remote error: tls: bad certificate 2025-08-13T19:51:17.004299883+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57414: remote error: tls: bad certificate 2025-08-13T19:51:17.019194058+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57422: remote error: tls: bad certificate 2025-08-13T19:51:17.037623665+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57424: remote error: tls: bad certificate 2025-08-13T19:51:17.054081786+00:00 stderr F 2025/08/13 19:51:17 http: TLS 
handshake error from 127.0.0.1:57440: remote error: tls: bad certificate 2025-08-13T19:51:17.070039562+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate 2025-08-13T19:51:17.095944982+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57470: remote error: tls: bad certificate 2025-08-13T19:51:17.136189822+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57474: remote error: tls: bad certificate 2025-08-13T19:51:17.175875847+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate 2025-08-13T19:51:17.218141185+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57494: remote error: tls: bad certificate 2025-08-13T19:51:17.255710168+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57498: remote error: tls: bad certificate 2025-08-13T19:51:17.297227785+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57500: remote error: tls: bad certificate 2025-08-13T19:51:17.336637281+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57506: remote error: tls: bad certificate 2025-08-13T19:51:17.377241142+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57518: remote error: tls: bad certificate 2025-08-13T19:51:17.419906141+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57526: remote error: tls: bad certificate 2025-08-13T19:51:17.454640834+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57540: remote error: tls: bad certificate 2025-08-13T19:51:17.495319557+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57542: remote error: tls: bad certificate 2025-08-13T19:51:17.545379047+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57548: remote error: tls: bad certificate 
2025-08-13T19:51:17.574591122+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57556: remote error: tls: bad certificate 2025-08-13T19:51:17.617194900+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57564: remote error: tls: bad certificate 2025-08-13T19:51:17.658094319+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57578: remote error: tls: bad certificate 2025-08-13T19:51:17.696045924+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57588: remote error: tls: bad certificate 2025-08-13T19:51:17.736190631+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57598: remote error: tls: bad certificate 2025-08-13T19:51:17.775011651+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57610: remote error: tls: bad certificate 2025-08-13T19:51:17.816130585+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57618: remote error: tls: bad certificate 2025-08-13T19:51:17.857471686+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57622: remote error: tls: bad certificate 2025-08-13T19:51:17.893332051+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57634: remote error: tls: bad certificate 2025-08-13T19:51:17.937021800+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57646: remote error: tls: bad certificate 2025-08-13T19:51:17.974630585+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57658: remote error: tls: bad certificate 2025-08-13T19:51:18.018132358+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57660: remote error: tls: bad certificate 2025-08-13T19:51:18.055549548+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57674: remote error: tls: bad certificate 2025-08-13T19:51:18.098351461+00:00 stderr F 2025/08/13 19:51:18 http: TLS 
handshake error from 127.0.0.1:57676: remote error: tls: bad certificate 2025-08-13T19:51:18.140087574+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57690: remote error: tls: bad certificate 2025-08-13T19:51:18.175060213+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57702: remote error: tls: bad certificate 2025-08-13T19:51:18.218624428+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate 2025-08-13T19:51:18.258133277+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57734: remote error: tls: bad certificate 2025-08-13T19:51:18.296634998+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57740: remote error: tls: bad certificate 2025-08-13T19:51:18.337196447+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57748: remote error: tls: bad certificate 2025-08-13T19:51:18.378149598+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57764: remote error: tls: bad certificate 2025-08-13T19:51:18.415622969+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57774: remote error: tls: bad certificate 2025-08-13T19:51:18.458387461+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57790: remote error: tls: bad certificate 2025-08-13T19:51:18.495199003+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57798: remote error: tls: bad certificate 2025-08-13T19:51:18.539216271+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57800: remote error: tls: bad certificate 2025-08-13T19:51:18.577979439+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57804: remote error: tls: bad certificate 2025-08-13T19:51:18.614594906+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57808: remote error: tls: bad certificate 
2025-08-13T19:51:18.655243847+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57812: remote error: tls: bad certificate 2025-08-13T19:51:18.696197528+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57816: remote error: tls: bad certificate 2025-08-13T19:51:18.734933055+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48794: remote error: tls: bad certificate 2025-08-13T19:51:18.782486814+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48798: remote error: tls: bad certificate 2025-08-13T19:51:18.815491857+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48814: remote error: tls: bad certificate 2025-08-13T19:51:18.853877375+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48826: remote error: tls: bad certificate 2025-08-13T19:51:18.896680738+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48842: remote error: tls: bad certificate 2025-08-13T19:51:18.938149613+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48844: remote error: tls: bad certificate 2025-08-13T19:51:18.973016829+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48858: remote error: tls: bad certificate 2025-08-13T19:51:19.018100608+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48872: remote error: tls: bad certificate 2025-08-13T19:51:19.057138824+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48888: remote error: tls: bad certificate 2025-08-13T19:51:19.095294264+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48902: remote error: tls: bad certificate 2025-08-13T19:51:19.136037609+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48904: remote error: tls: bad certificate 2025-08-13T19:51:19.175518737+00:00 stderr F 2025/08/13 19:51:19 http: TLS 
handshake error from 127.0.0.1:48910: remote error: tls: bad certificate 2025-08-13T19:51:19.218533266+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48918: remote error: tls: bad certificate 2025-08-13T19:51:19.256003887+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48928: remote error: tls: bad certificate 2025-08-13T19:51:19.298466001+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48932: remote error: tls: bad certificate 2025-08-13T19:51:19.335958282+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48946: remote error: tls: bad certificate 2025-08-13T19:51:19.378441817+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48956: remote error: tls: bad certificate 2025-08-13T19:51:19.416966898+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48964: remote error: tls: bad certificate 2025-08-13T19:51:19.456156398+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48970: remote error: tls: bad certificate 2025-08-13T19:51:19.497225832+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48984: remote error: tls: bad certificate 2025-08-13T19:51:19.535277549+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48986: remote error: tls: bad certificate 2025-08-13T19:51:19.574767638+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48988: remote error: tls: bad certificate 2025-08-13T19:51:19.619140096+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49002: remote error: tls: bad certificate 2025-08-13T19:51:19.760504665+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49018: remote error: tls: bad certificate 2025-08-13T19:51:19.797073871+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49034: remote error: tls: bad certificate 
2025-08-13T19:51:19.819899484+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49046: remote error: tls: bad certificate 2025-08-13T19:51:19.845909868+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49054: remote error: tls: bad certificate 2025-08-13T19:51:19.870769268+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49056: remote error: tls: bad certificate 2025-08-13T19:51:19.890556244+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49070: remote error: tls: bad certificate 2025-08-13T19:51:19.907127727+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49076: remote error: tls: bad certificate 2025-08-13T19:51:19.935927590+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49082: remote error: tls: bad certificate 2025-08-13T19:51:19.975422599+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49092: remote error: tls: bad certificate 2025-08-13T19:51:20.014550047+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49106: remote error: tls: bad certificate 2025-08-13T19:51:20.054914621+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49122: remote error: tls: bad certificate 2025-08-13T19:51:20.095512841+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49124: remote error: tls: bad certificate 2025-08-13T19:51:20.136422211+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49130: remote error: tls: bad certificate 2025-08-13T19:51:20.179887973+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49138: remote error: tls: bad certificate 2025-08-13T19:51:20.216285713+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49144: remote error: tls: bad certificate 2025-08-13T19:51:20.256577895+00:00 stderr F 2025/08/13 19:51:20 http: TLS 
handshake error from 127.0.0.1:49154: remote error: tls: bad certificate 2025-08-13T19:51:20.294250642+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49166: remote error: tls: bad certificate 2025-08-13T19:51:20.338264850+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49176: remote error: tls: bad certificate 2025-08-13T19:51:20.378175010+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49192: remote error: tls: bad certificate 2025-08-13T19:51:20.420151640+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49208: remote error: tls: bad certificate 2025-08-13T19:51:20.455171891+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49224: remote error: tls: bad certificate 2025-08-13T19:51:20.497421358+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49230: remote error: tls: bad certificate 2025-08-13T19:51:20.539308466+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49232: remote error: tls: bad certificate 2025-08-13T19:51:20.576921541+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49242: remote error: tls: bad certificate 2025-08-13T19:51:20.620756943+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49248: remote error: tls: bad certificate 2025-08-13T19:51:20.655605999+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49256: remote error: tls: bad certificate 2025-08-13T19:51:20.701613924+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49262: remote error: tls: bad certificate 2025-08-13T19:51:20.737487670+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49268: remote error: tls: bad certificate 2025-08-13T19:51:20.775267579+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49272: remote error: tls: bad certificate 
2025-08-13T19:51:20.814435639+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49284: remote error: tls: bad certificate 2025-08-13T19:51:20.861732111+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:51:20.897545024+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49306: remote error: tls: bad certificate 2025-08-13T19:51:20.935412567+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49312: remote error: tls: bad certificate 2025-08-13T19:51:20.976219453+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49314: remote error: tls: bad certificate 2025-08-13T19:51:21.015857946+00:00 stderr F 2025/08/13 19:51:21 http: TLS handshake error from 127.0.0.1:49324: remote error: tls: bad certificate 2025-08-13T19:51:22.820721319+00:00 stderr F 2025/08/13 19:51:22 http: TLS handshake error from 127.0.0.1:49336: remote error: tls: bad certificate 2025-08-13T19:51:24.650938228+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49344: remote error: tls: bad certificate 2025-08-13T19:51:24.671006000+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 2025-08-13T19:51:24.693388507+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49364: remote error: tls: bad certificate 2025-08-13T19:51:24.713368767+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49370: remote error: tls: bad certificate 2025-08-13T19:51:24.735238740+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49382: remote error: tls: bad certificate 2025-08-13T19:51:25.229155691+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49384: remote error: tls: bad certificate 2025-08-13T19:51:25.246324780+00:00 stderr F 2025/08/13 19:51:25 http: TLS 
handshake error from 127.0.0.1:49398: remote error: tls: bad certificate 2025-08-13T19:51:25.263464138+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49406: remote error: tls: bad certificate 2025-08-13T19:51:25.277217700+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49418: remote error: tls: bad certificate 2025-08-13T19:51:25.293293068+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49422: remote error: tls: bad certificate 2025-08-13T19:51:25.310585241+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate 2025-08-13T19:51:25.325511446+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49426: remote error: tls: bad certificate 2025-08-13T19:51:25.341518872+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49432: remote error: tls: bad certificate 2025-08-13T19:51:25.364286111+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49440: remote error: tls: bad certificate 2025-08-13T19:51:25.381632405+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49450: remote error: tls: bad certificate 2025-08-13T19:51:25.398575458+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49462: remote error: tls: bad certificate 2025-08-13T19:51:25.414258465+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49464: remote error: tls: bad certificate 2025-08-13T19:51:25.430355163+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49466: remote error: tls: bad certificate 2025-08-13T19:51:25.454371238+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49478: remote error: tls: bad certificate 2025-08-13T19:51:25.471703371+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49482: remote error: tls: bad certificate 
2025-08-13T19:51:25.487219043+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49490: remote error: tls: bad certificate 2025-08-13T19:51:25.505701240+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49500: remote error: tls: bad certificate 2025-08-13T19:51:25.524707501+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49502: remote error: tls: bad certificate 2025-08-13T19:51:25.543520727+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49504: remote error: tls: bad certificate 2025-08-13T19:51:25.560540912+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49520: remote error: tls: bad certificate 2025-08-13T19:51:25.578019030+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49530: remote error: tls: bad certificate 2025-08-13T19:51:25.610920948+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49540: remote error: tls: bad certificate 2025-08-13T19:51:25.627284454+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49556: remote error: tls: bad certificate 2025-08-13T19:51:25.645934675+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49558: remote error: tls: bad certificate 2025-08-13T19:51:25.663733742+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49574: remote error: tls: bad certificate 2025-08-13T19:51:25.681895430+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49584: remote error: tls: bad certificate 2025-08-13T19:51:25.705085211+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49598: remote error: tls: bad certificate 2025-08-13T19:51:25.722750824+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49600: remote error: tls: bad certificate 2025-08-13T19:51:25.741721714+00:00 stderr F 2025/08/13 19:51:25 http: TLS 
handshake error from 127.0.0.1:49606: remote error: tls: bad certificate 2025-08-13T19:51:25.763099613+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49616: remote error: tls: bad certificate 2025-08-13T19:51:25.782352652+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49624: remote error: tls: bad certificate 2025-08-13T19:51:25.802023502+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49626: remote error: tls: bad certificate 2025-08-13T19:51:25.821910699+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49634: remote error: tls: bad certificate 2025-08-13T19:51:25.841640791+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49642: remote error: tls: bad certificate 2025-08-13T19:51:25.862193877+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49654: remote error: tls: bad certificate 2025-08-13T19:51:25.877658537+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49656: remote error: tls: bad certificate 2025-08-13T19:51:25.892992604+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49662: remote error: tls: bad certificate 2025-08-13T19:51:25.918943874+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49670: remote error: tls: bad certificate 2025-08-13T19:51:25.945947213+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49682: remote error: tls: bad certificate 2025-08-13T19:51:25.977615655+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49694: remote error: tls: bad certificate 2025-08-13T19:51:26.003864763+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49708: remote error: tls: bad certificate 2025-08-13T19:51:26.020383534+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49720: remote error: tls: bad certificate 
2025-08-13T19:51:26.038353276+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49726: remote error: tls: bad certificate 2025-08-13T19:51:26.058040357+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49732: remote error: tls: bad certificate 2025-08-13T19:51:26.081340790+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49734: remote error: tls: bad certificate 2025-08-13T19:51:26.097624354+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49744: remote error: tls: bad certificate 2025-08-13T19:51:26.113924419+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49758: remote error: tls: bad certificate 2025-08-13T19:51:26.129926265+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49764: remote error: tls: bad certificate 2025-08-13T19:51:26.144420338+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49772: remote error: tls: bad certificate 2025-08-13T19:51:26.167900957+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49776: remote error: tls: bad certificate 2025-08-13T19:51:26.181471043+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49780: remote error: tls: bad certificate 2025-08-13T19:51:26.195262196+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49796: remote error: tls: bad certificate 2025-08-13T19:51:26.213324421+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49802: remote error: tls: bad certificate 2025-08-13T19:51:26.230124529+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49810: remote error: tls: bad certificate 2025-08-13T19:51:26.246231448+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49816: remote error: tls: bad certificate 2025-08-13T19:51:26.269661956+00:00 stderr F 2025/08/13 19:51:26 http: TLS 
handshake error from 127.0.0.1:49820: remote error: tls: bad certificate 2025-08-13T19:51:26.286405443+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49824: remote error: tls: bad certificate 2025-08-13T19:51:26.303175101+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49836: remote error: tls: bad certificate 2025-08-13T19:51:26.318382014+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49838: remote error: tls: bad certificate 2025-08-13T19:51:26.335882373+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49846: remote error: tls: bad certificate 2025-08-13T19:51:26.350144109+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49860: remote error: tls: bad certificate 2025-08-13T19:51:26.365631970+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49874: remote error: tls: bad certificate 2025-08-13T19:51:26.383921891+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49882: remote error: tls: bad certificate 2025-08-13T19:51:26.400607027+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49896: remote error: tls: bad certificate 2025-08-13T19:51:26.419436783+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49906: remote error: tls: bad certificate 2025-08-13T19:51:26.438519207+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49918: remote error: tls: bad certificate 2025-08-13T19:51:26.454055969+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49920: remote error: tls: bad certificate 2025-08-13T19:51:34.888909375+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44154: remote error: tls: bad certificate 2025-08-13T19:51:34.911553090+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44164: remote error: tls: bad certificate 
2025-08-13T19:51:34.932418284+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44174: remote error: tls: bad certificate
2025-08-13T19:51:34.953989569+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44182: remote error: tls: bad certificate
2025-08-13T19:51:34.977100287+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44194: remote error: tls: bad certificate
2025-08-13T19:51:35.230890378+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44204: remote error: tls: bad certificate
2025-08-13T19:51:35.246219745+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44220: remote error: tls: bad certificate
2025-08-13T19:51:35.260329997+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44228: remote error: tls: bad certificate
2025-08-13T19:51:35.278439343+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44238: remote error: tls: bad certificate
2025-08-13T19:51:35.298312789+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44250: remote error: tls: bad certificate
2025-08-13T19:51:35.324349951+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44252: remote error: tls: bad certificate
2025-08-13T19:51:35.341884941+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44268: remote error: tls: bad certificate
2025-08-13T19:51:35.358668909+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44274: remote error: tls: bad certificate
2025-08-13T19:51:35.374070068+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44276: remote error: tls: bad certificate
2025-08-13T19:51:35.387741277+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44282: remote error: tls: bad certificate
2025-08-13T19:51:35.412084041+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44294: remote error: tls: bad certificate
2025-08-13T19:51:35.428288072+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44298: remote error: tls: bad certificate
2025-08-13T19:51:35.445766120+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44314: remote error: tls: bad certificate
2025-08-13T19:51:35.461766916+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44318: remote error: tls: bad certificate
2025-08-13T19:51:35.481981802+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44334: remote error: tls: bad certificate
2025-08-13T19:51:35.506534571+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44342: remote error: tls: bad certificate
2025-08-13T19:51:35.521755915+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44348: remote error: tls: bad certificate
2025-08-13T19:51:35.536405812+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44360: remote error: tls: bad certificate
2025-08-13T19:51:35.551632586+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44374: remote error: tls: bad certificate
2025-08-13T19:51:35.572519391+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44376: remote error: tls: bad certificate
2025-08-13T19:51:35.596243597+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44388: remote error: tls: bad certificate
2025-08-13T19:51:35.613958172+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44394: remote error: tls: bad certificate
2025-08-13T19:51:35.631295326+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44410: remote error: tls: bad certificate
2025-08-13T19:51:35.650620427+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44412: remote error: tls: bad certificate
2025-08-13T19:51:35.667202959+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44426: remote error: tls: bad certificate
2025-08-13T19:51:35.680952901+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44442: remote error: tls: bad certificate
2025-08-13T19:51:35.699039085+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44446: remote error: tls: bad certificate
2025-08-13T19:51:35.714517136+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44460: remote error: tls: bad certificate
2025-08-13T19:51:35.734411193+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44472: remote error: tls: bad certificate
2025-08-13T19:51:35.749755970+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44480: remote error: tls: bad certificate
2025-08-13T19:51:35.769064090+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44486: remote error: tls: bad certificate
2025-08-13T19:51:35.793466566+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44494: remote error: tls: bad certificate
2025-08-13T19:51:35.809748190+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44504: remote error: tls: bad certificate
2025-08-13T19:51:35.823169672+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44508: remote error: tls: bad certificate
2025-08-13T19:51:35.844928822+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44522: remote error: tls: bad certificate
2025-08-13T19:51:35.862579885+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44532: remote error: tls: bad certificate
2025-08-13T19:51:35.878615402+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44536: remote error: tls: bad certificate
2025-08-13T19:51:35.893603579+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44552: remote error: tls: bad certificate
2025-08-13T19:51:35.908233105+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44554: remote error: tls: bad certificate
2025-08-13T19:51:35.924323204+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44558: remote error: tls: bad certificate
2025-08-13T19:51:35.938981422+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44572: remote error: tls: bad certificate
2025-08-13T19:51:35.953170416+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44582: remote error: tls: bad certificate
2025-08-13T19:51:35.968663917+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44596: remote error: tls: bad certificate
2025-08-13T19:51:35.998586760+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44604: remote error: tls: bad certificate
2025-08-13T19:51:36.013735551+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44620: remote error: tls: bad certificate
2025-08-13T19:51:36.033660039+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate
2025-08-13T19:51:36.048400469+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44628: remote error: tls: bad certificate
2025-08-13T19:51:36.063665594+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44640: remote error: tls: bad certificate
2025-08-13T19:51:36.105005242+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44654: remote error: tls: bad certificate
2025-08-13T19:51:36.170414595+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44664: remote error: tls: bad certificate
2025-08-13T19:51:36.185508785+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44680: remote error: tls: bad certificate
2025-08-13T19:51:36.210122447+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44696: remote error: tls: bad certificate
2025-08-13T19:51:36.225264118+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44710: remote error: tls: bad certificate
2025-08-13T19:51:36.240088881+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate
2025-08-13T19:51:36.254623595+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44740: remote error: tls: bad certificate
2025-08-13T19:51:36.265593497+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44746: remote error: tls: bad certificate
2025-08-13T19:51:36.279867284+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44754: remote error: tls: bad certificate
2025-08-13T19:51:36.294984375+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44760: remote error: tls: bad certificate
2025-08-13T19:51:36.315271863+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44766: remote error: tls: bad certificate
2025-08-13T19:51:36.331644569+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44770: remote error: tls: bad certificate
2025-08-13T19:51:36.349393065+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44776: remote error: tls: bad certificate
2025-08-13T19:51:36.367290895+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44778: remote error: tls: bad certificate
2025-08-13T19:51:36.383337162+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44790: remote error: tls: bad certificate
2025-08-13T19:51:36.398226026+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44798: remote error: tls: bad certificate
2025-08-13T19:51:36.415058326+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44804: remote error: tls: bad certificate
2025-08-13T19:51:36.429761165+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44818: remote error: tls: bad certificate
2025-08-13T19:51:36.447405417+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate
2025-08-13T19:51:45.229423954+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58312: remote error: tls: bad certificate
2025-08-13T19:51:45.247171210+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58324: remote error: tls: bad certificate
2025-08-13T19:51:45.257149764+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58336: remote error: tls: bad certificate
2025-08-13T19:51:45.273425098+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58352: remote error: tls: bad certificate
2025-08-13T19:51:45.283935737+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58366: remote error: tls: bad certificate
2025-08-13T19:51:45.295551288+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58372: remote error: tls: bad certificate
2025-08-13T19:51:45.308979121+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58378: remote error: tls: bad certificate
2025-08-13T19:51:45.312430609+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58382: remote error: tls: bad certificate
2025-08-13T19:51:45.331969776+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58394: remote error: tls: bad certificate
2025-08-13T19:51:45.331969776+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58404: remote error: tls: bad certificate
2025-08-13T19:51:45.349488195+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58406: remote error: tls: bad certificate
2025-08-13T19:51:45.352435989+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58416: remote error: tls: bad certificate
2025-08-13T19:51:45.369491465+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58422: remote error: tls: bad certificate
2025-08-13T19:51:45.388581459+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58436: remote error: tls: bad certificate
2025-08-13T19:51:45.407101377+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58452: remote error: tls: bad certificate
2025-08-13T19:51:45.424728109+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58454: remote error: tls: bad certificate
2025-08-13T19:51:45.441388033+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58464: remote error: tls: bad certificate
2025-08-13T19:51:45.457253205+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58472: remote error: tls: bad certificate
2025-08-13T19:51:45.480961341+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58486: remote error: tls: bad certificate
2025-08-13T19:51:45.500220840+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58488: remote error: tls: bad certificate
2025-08-13T19:51:45.520414815+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58502: remote error: tls: bad certificate
2025-08-13T19:51:45.539251772+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58518: remote error: tls: bad certificate
2025-08-13T19:51:45.557504182+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58526: remote error: tls: bad certificate
2025-08-13T19:51:45.571286554+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58536: remote error: tls: bad certificate
2025-08-13T19:51:45.589704359+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58542: remote error: tls: bad certificate
2025-08-13T19:51:45.607948879+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58548: remote error: tls: bad certificate
2025-08-13T19:51:45.625083857+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58560: remote error: tls: bad certificate
2025-08-13T19:51:45.646020864+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58568: remote error: tls: bad certificate
2025-08-13T19:51:45.667046433+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58584: remote error: tls: bad certificate
2025-08-13T19:51:45.690323306+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58598: remote error: tls: bad certificate
2025-08-13T19:51:45.709131022+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58604: remote error: tls: bad certificate
2025-08-13T19:51:45.725559170+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58620: remote error: tls: bad certificate
2025-08-13T19:51:45.747176896+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58636: remote error: tls: bad certificate
2025-08-13T19:51:45.765553239+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58638: remote error: tls: bad certificate
2025-08-13T19:51:45.785663762+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58642: remote error: tls: bad certificate
2025-08-13T19:51:45.804608922+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58648: remote error: tls: bad certificate
2025-08-13T19:51:45.820687760+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58662: remote error: tls: bad certificate
2025-08-13T19:51:45.837835759+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58678: remote error: tls: bad certificate
2025-08-13T19:51:45.855400129+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58694: remote error: tls: bad certificate
2025-08-13T19:51:45.874041070+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58706: remote error: tls: bad certificate
2025-08-13T19:51:45.895304306+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58718: remote error: tls: bad certificate
2025-08-13T19:51:45.918644221+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58722: remote error: tls: bad certificate
2025-08-13T19:51:45.937194459+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58728: remote error: tls: bad certificate
2025-08-13T19:51:45.956282993+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58742: remote error: tls: bad certificate
2025-08-13T19:51:45.973995498+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58752: remote error: tls: bad certificate
2025-08-13T19:51:45.991052044+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58758: remote error: tls: bad certificate
2025-08-13T19:51:46.014019558+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58772: remote error: tls: bad certificate
2025-08-13T19:51:46.036255592+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58782: remote error: tls: bad certificate
2025-08-13T19:51:46.060187624+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58790: remote error: tls: bad certificate
2025-08-13T19:51:46.079926496+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58806: remote error: tls: bad certificate
2025-08-13T19:51:46.100487972+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58818: remote error: tls: bad certificate
2025-08-13T19:51:46.128986464+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58834: remote error: tls: bad certificate
2025-08-13T19:51:46.144252709+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58844: remote error: tls: bad certificate
2025-08-13T19:51:46.158254278+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58852: remote error: tls: bad certificate
2025-08-13T19:51:46.178462863+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58854: remote error: tls: bad certificate
2025-08-13T19:51:46.198764252+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58862: remote error: tls: bad certificate
2025-08-13T19:51:46.213966015+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58874: remote error: tls: bad certificate
2025-08-13T19:51:46.227996355+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58882: remote error: tls: bad certificate
2025-08-13T19:51:46.246208283+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58894: remote error: tls: bad certificate
2025-08-13T19:51:46.266552293+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58900: remote error: tls: bad certificate
2025-08-13T19:51:46.283127445+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58908: remote error: tls: bad certificate
2025-08-13T19:51:46.298365510+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58910: remote error: tls: bad certificate
2025-08-13T19:51:46.323312250+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58926: remote error: tls: bad certificate
2025-08-13T19:51:46.339574334+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58938: remote error: tls: bad certificate
2025-08-13T19:51:46.360373316+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58952: remote error: tls: bad certificate
2025-08-13T19:51:46.379384908+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58954: remote error: tls: bad certificate
2025-08-13T19:51:46.397547285+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58966: remote error: tls: bad certificate
2025-08-13T19:51:46.416198287+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58972: remote error: tls: bad certificate
2025-08-13T19:51:46.433426266+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58988: remote error: tls: bad certificate
2025-08-13T19:51:46.447897129+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:59002: remote error: tls: bad certificate
2025-08-13T19:51:46.468113175+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:59014: remote error: tls: bad certificate
2025-08-13T19:51:46.491347087+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:59022: remote error: tls: bad certificate
2025-08-13T19:51:49.076095659+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48056: remote error: tls: bad certificate
2025-08-13T19:51:49.107637277+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48070: remote error: tls: bad certificate
2025-08-13T19:51:49.128606465+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48082: remote error: tls: bad certificate
2025-08-13T19:51:49.149867760+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48094: remote error: tls: bad certificate
2025-08-13T19:51:49.168729248+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48110: remote error: tls: bad certificate
2025-08-13T19:51:49.185655820+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48112: remote error: tls: bad certificate
2025-08-13T19:51:49.212078173+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48120: remote error: tls: bad certificate
2025-08-13T19:51:49.232245838+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48128: remote error: tls: bad certificate
2025-08-13T19:51:49.250031674+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48138: remote error: tls: bad certificate
2025-08-13T19:51:49.273241696+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48146: remote error: tls: bad certificate
2025-08-13T19:51:49.291936728+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48158: remote error: tls: bad certificate
2025-08-13T19:51:49.308127519+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48160: remote error: tls: bad certificate
2025-08-13T19:51:49.327733988+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48166: remote error: tls: bad certificate
2025-08-13T19:51:49.337156937+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48170: remote error: tls: bad certificate
2025-08-13T19:51:49.354172371+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48178: remote error: tls: bad certificate
2025-08-13T19:51:49.380365958+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48192: remote error: tls: bad certificate
2025-08-13T19:51:49.411187506+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48204: remote error: tls: bad certificate
2025-08-13T19:51:49.440935043+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48212: remote error: tls: bad certificate
2025-08-13T19:51:49.460707737+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48222: remote error: tls: bad certificate
2025-08-13T19:51:49.491663629+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48238: remote error: tls: bad certificate
2025-08-13T19:51:49.509939839+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48248: remote error: tls: bad certificate
2025-08-13T19:51:49.537520105+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48256: remote error: tls: bad certificate
2025-08-13T19:51:49.560039917+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48272: remote error: tls: bad certificate
2025-08-13T19:51:49.575920549+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48288: remote error: tls: bad certificate
2025-08-13T19:51:49.600389346+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48300: remote error: tls: bad certificate
2025-08-13T19:51:49.618001058+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48302: remote error: tls: bad certificate
2025-08-13T19:51:49.644572915+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48310: remote error: tls: bad certificate
2025-08-13T19:51:49.672764988+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48324: remote error: tls: bad certificate
2025-08-13T19:51:49.692653795+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48330: remote error: tls: bad certificate
2025-08-13T19:51:49.716426202+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48334: remote error: tls: bad certificate
2025-08-13T19:51:49.760494018+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48344: remote error: tls: bad certificate
2025-08-13T19:51:49.806922041+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48350: remote error: tls: bad certificate
2025-08-13T19:51:49.835891726+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48362: remote error: tls: bad certificate
2025-08-13T19:51:49.865521470+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48374: remote error: tls: bad certificate
2025-08-13T19:51:49.892461968+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48378: remote error: tls: bad certificate
2025-08-13T19:51:49.935357250+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48394: remote error: tls: bad certificate
2025-08-13T19:51:49.982184734+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48410: remote error: tls: bad certificate
2025-08-13T19:51:50.011342255+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48424: remote error: tls: bad certificate
2025-08-13T19:51:50.037960573+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48438: remote error: tls: bad certificate
2025-08-13T19:51:50.058617801+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48454: remote error: tls: bad certificate
2025-08-13T19:51:50.105753704+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48468: remote error: tls: bad certificate
2025-08-13T19:51:50.135751038+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48470: remote error: tls: bad certificate
2025-08-13T19:51:50.158219968+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48472: remote error: tls: bad certificate
2025-08-13T19:51:50.201533853+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48476: remote error: tls: bad certificate
2025-08-13T19:51:50.244723343+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48490: remote error: tls: bad certificate
2025-08-13T19:51:50.315070827+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48502: remote error: tls: bad certificate
2025-08-13T19:51:50.376929440+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48508: remote error: tls: bad certificate
2025-08-13T19:51:50.555041034+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48514: remote error: tls: bad certificate
2025-08-13T19:51:50.609665641+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48524: remote error: tls: bad certificate
2025-08-13T19:51:50.672921733+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48532: remote error: tls: bad certificate
2025-08-13T19:51:50.769471844+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48544: remote error: tls: bad certificate
2025-08-13T19:51:50.831486710+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48560: remote error: tls: bad certificate
2025-08-13T19:51:50.863310337+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48566: remote error: tls: bad certificate
2025-08-13T19:51:50.908341840+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48580: remote error: tls: bad certificate
2025-08-13T19:51:50.924962964+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48596: remote error: tls: bad certificate
2025-08-13T19:51:50.948964588+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48606: remote error: tls: bad certificate
2025-08-13T19:51:50.968907536+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48614: remote error: tls: bad certificate
2025-08-13T19:51:51.001053502+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48626: remote error: tls: bad certificate
2025-08-13T19:51:51.019893938+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48638: remote error: tls: bad certificate
2025-08-13T19:51:51.046002942+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48644: remote error: tls: bad certificate
2025-08-13T19:51:51.076886692+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48646: remote error: tls: bad certificate
2025-08-13T19:51:51.109906753+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48662: remote error: tls: bad certificate
2025-08-13T19:51:51.133918747+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48674: remote error: tls: bad certificate
2025-08-13T19:51:51.154886284+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48684: remote error: tls: bad certificate
2025-08-13T19:51:51.185132866+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48698: remote error: tls: bad certificate
2025-08-13T19:51:51.225982780+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48712: remote error: tls: bad certificate
2025-08-13T19:51:51.248045369+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48718: remote error: tls: bad certificate
2025-08-13T19:51:51.275053928+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48728: remote error: tls: bad certificate
2025-08-13T19:51:51.307019519+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48732: remote error: tls: bad certificate
2025-08-13T19:51:51.330985142+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48746: remote error: tls: bad certificate
2025-08-13T19:51:51.360095031+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48760: remote error: tls: bad certificate
2025-08-13T19:51:51.386985437+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48772: remote error: tls: bad certificate
2025-08-13T19:51:51.414079099+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48780: remote error: tls: bad certificate
2025-08-13T19:51:51.443983831+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48792: remote error: tls: bad certificate
2025-08-13T19:51:51.468964643+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48802: remote error: tls: bad certificate
2025-08-13T19:51:51.498901096+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48808: remote error: tls: bad certificate
2025-08-13T19:51:51.510905168+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48814: remote error: tls: bad certificate
2025-08-13T19:51:51.523940479+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48822: remote error: tls: bad certificate
2025-08-13T19:51:51.547918962+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48824: remote error: tls: bad certificate
2025-08-13T19:51:51.585071241+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48830: remote error: tls: bad certificate
2025-08-13T19:51:51.604072822+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48832: remote error: tls: bad certificate
2025-08-13T19:51:51.632913834+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48840: remote error: tls: bad certificate
2025-08-13T19:51:51.652868552+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48848: remote error: tls: bad certificate
2025-08-13T19:51:51.671007739+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48864: remote error: tls: bad certificate
2025-08-13T19:51:51.688964061+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48876: remote error: tls: bad certificate
2025-08-13T19:51:51.706877871+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48878: remote error: tls: bad certificate
2025-08-13T19:51:51.722889297+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48884: remote error: tls: bad certificate
2025-08-13T19:51:51.741873858+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48900: remote error: tls: bad certificate
2025-08-13T19:51:51.762867906+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48908: remote error: tls: bad certificate
2025-08-13T19:51:51.779890261+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48922: remote error: tls: bad certificate
2025-08-13T19:51:51.803894675+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48934: remote error: tls: bad certificate
2025-08-13T19:51:51.827888129+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48938: remote error: tls: bad certificate
2025-08-13T19:51:51.848906338+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48948: remote error: tls: bad certificate
2025-08-13T19:51:51.875903357+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48950: remote error: tls: bad certificate
2025-08-13T19:51:51.895929788+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48952: remote error: tls: bad certificate
2025-08-13T19:51:51.917910464+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48956: remote error: tls: bad certificate
2025-08-13T19:51:51.938886121+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48968: remote error: tls: bad certificate
2025-08-13T19:51:51.959876029+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48974: remote error: tls: bad certificate
2025-08-13T19:51:51.977891333+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48980: remote error: tls: bad certificate
2025-08-13T19:51:51.995901926+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48982: remote error: tls: bad certificate
2025-08-13T19:51:52.012876879+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:48986: remote error: tls: bad certificate
2025-08-13T19:51:52.030907153+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:48992: remote error: tls: bad certificate
2025-08-13T19:51:52.057915593+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:48994: remote error: tls: bad certificate
2025-08-13T19:51:52.080271780+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49000: remote error: tls: bad certificate
2025-08-13T19:51:52.100929988+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49014: remote error: tls: bad certificate
2025-08-13T19:51:52.118900100+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49016: remote error: tls: bad certificate
2025-08-13T19:51:52.144946212+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49030: remote error: tls: bad certificate
2025-08-13T19:51:52.166445025+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49038: remote error: tls: bad certificate
2025-08-13T19:51:52.189448370+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49042: remote error: tls: bad certificate
2025-08-13T19:51:52.217337345+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49054: remote error: tls: bad certificate
2025-08-13T19:51:52.244161329+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49060: remote error: tls: bad certificate
2025-08-13T19:51:52.255997516+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49076: remote error: tls: bad certificate
2025-08-13T19:51:52.274023970+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49086: remote error: tls: bad certificate
2025-08-13T19:51:52.304564450+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49098: remote error: tls: bad certificate
2025-08-13T19:51:52.330130308+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49104: remote error: tls: bad certificate
2025-08-13T19:51:52.361858712+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49112: remote error: tls: bad certificate
2025-08-13T19:51:52.386456493+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49120: remote error: tls: bad certificate
2025-08-13T19:51:52.414108811+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49134: remote error: tls: bad certificate
2025-08-13T19:51:52.444031763+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49136: remote error: tls: bad certificate
2025-08-13T19:51:52.468897962+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49140: remote error: tls: bad certificate
2025-08-13T19:51:52.490647532+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49156: remote error: tls: bad certificate
2025-08-13T19:51:52.518408173+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49158: remote error: tls: bad certificate
2025-08-13T19:51:52.554922713+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49168: remote error: tls: bad certificate
2025-08-13T19:51:52.570868187+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49176: remote error: tls: bad certificate
2025-08-13T19:51:52.603080335+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49180: remote error: tls: bad certificate
2025-08-13T19:51:52.626078340+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49194: remote error: tls: bad certificate
2025-08-13T19:51:52.653371248+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49198: remote error: tls: bad certificate
2025-08-13T19:51:52.674299524+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49206: remote error: tls: bad certificate
2025-08-13T19:51:52.698756671+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49218: remote error: tls: bad certificate
2025-08-13T19:51:52.719594044+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49230: remote error: tls: bad certificate
2025-08-13T19:51:52.740359316+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49232: remote error: tls: bad certificate
2025-08-13T19:51:52.775948640+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49238: remote error: tls: bad certificate
2025-08-13T19:51:52.802281040+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49250: remote error: tls: bad certificate
2025-08-13T19:51:52.816648530+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49260: remote error: tls: bad certificate
2025-08-13T19:51:52.819327866+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49272: remote error: tls: bad certificate
2025-08-13T19:51:52.838937585+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49274: remote error: tls: bad certificate
2025-08-13T19:51:52.862078924+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49286: remote error: tls: bad certificate
2025-08-13T19:51:52.888863847+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49292: remote error: tls: bad certificate
2025-08-13T19:51:52.915021762+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate
2025-08-13T19:51:52.938728128+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49304: remote error: tls: bad certificate
2025-08-13T19:51:52.963520884+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49308: remote error: tls: bad certificate
2025-08-13T19:51:52.986878080+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49318: remote error: tls: bad certificate
2025-08-13T19:51:53.016400571+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49334: remote error: tls: bad certificate
2025-08-13T19:51:53.033557990+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49348: remote error: tls: bad certificate
2025-08-13T19:51:53.437623582+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49356: remote error: tls: bad certificate
2025-08-13T19:51:53.463060217+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate
2025-08-13T19:51:53.481366338+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49364: remote error: tls: bad certificate
2025-08-13T19:51:53.503377355+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49372: remote error: tls: bad certificate
2025-08-13T19:51:53.528933483+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49386: remote error: tls: bad certificate
2025-08-13T19:51:53.553239986+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49388: remote error: tls: bad certificate
2025-08-13T19:51:53.571636570+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49402: remote error: tls: bad certificate
2025-08-13T19:51:53.593067800+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49418: remote error: tls: bad certificate
2025-08-13T19:51:53.617272199+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49428: remote error: tls: bad certificate
2025-08-13T19:51:53.639766470+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49432: remote error: tls: bad certificate
2025-08-13T19:51:53.658890745+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49440: remote error: tls: bad certificate
2025-08-13T19:51:53.677946088+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49442: remote error: tls: bad certificate
2025-08-13T19:51:53.696568608+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49458: remote error: tls: bad certificate
2025-08-13T19:51:53.714454278+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49460: remote error: tls: bad certificate
2025-08-13T19:51:53.733860061+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49472: remote error: tls: bad certificate
2025-08-13T19:51:53.751093742+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49482: remote error: tls: bad certificate
2025-08-13T19:51:53.776317781+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49496: remote error: tls: bad certificate
2025-08-13T19:51:53.796904197+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49504: remote error: tls: bad certificate
2025-08-13T19:51:53.812332277+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49520: remote error: tls: bad certificate
2025-08-13T19:51:53.827761696+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49526: remote error: tls: bad certificate
2025-08-13T19:51:53.845674277+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49542: remote error: tls: bad certificate
2025-08-13T19:51:53.863381401+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49558: remote error: tls: bad certificate
2025-08-13T19:51:53.881640031+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49568: remote error: tls: bad certificate
2025-08-13T19:51:53.899249993+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49576: remote error: tls: bad certificate
2025-08-13T19:51:53.915200718+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49592: remote error: tls: bad certificate
2025-08-13T19:51:53.932169221+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49600: remote error: tls: bad certificate
2025-08-13T19:51:53.956525415+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49612: remote error: tls: bad certificate
2025-08-13T19:51:53.972322225+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49622: remote error: tls: bad certificate
2025-08-13T19:51:53.986572571+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49628: remote error: tls: bad certificate
2025-08-13T19:51:54.001901338+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49634: remote error: tls: bad certificate
2025-08-13T19:51:54.018089079+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49640: remote error: tls: bad certificate
2025-08-13T19:51:54.042053942+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49646: remote error: tls: bad certificate
2025-08-13T19:51:54.071468220+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49650: remote error: tls: bad certificate
2025-08-13T19:51:54.107311621+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49660: remote error: tls: bad certificate
2025-08-13T19:51:54.154198657+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49666: remote error: tls: bad certificate
2025-08-13T19:51:54.190040388+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49674: remote error: tls: bad certificate
2025-08-13T19:51:54.227873526+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49682: remote error: tls: bad certificate
2025-08-13T19:51:54.267584117+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49690: remote error: tls: bad certificate
2025-08-13T19:51:54.315009608+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49698: remote error: tls: bad certificate
2025-08-13T19:51:54.350597882+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49714: remote error: tls: bad certificate
2025-08-13T19:51:54.393021381+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49728: remote error: tls: bad certificate
2025-08-13T19:51:54.427350599+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49738: remote error: tls: bad certificate
2025-08-13T19:51:54.466697610+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49752: remote error: tls: bad certificate
2025-08-13T19:51:54.510339604+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49768: remote error: tls: bad certificate
2025-08-13T19:51:54.551517927+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49772: remote error: tls: bad certificate
2025-08-13T19:51:54.585918877+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49780: remote error: tls: bad certificate
2025-08-13T19:51:54.588643785+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49782: remote error: tls: bad certificate
2025-08-13T19:51:54.626564205+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49796: remote error: tls: bad certificate
2025-08-13T19:51:54.673626936+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49804: remote error: tls: bad certificate
2025-08-13T19:51:54.706184093+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49808: remote error: tls: bad certificate
2025-08-13T19:51:54.747370227+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49812: remote error: tls: bad certificate
2025-08-13T19:51:54.786201023+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49828: remote error: tls: bad certificate
2025-08-13T19:51:54.825520463+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49838: remote error: tls: bad certificate
2025-08-13T19:51:54.868887379+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49850: remote error: tls: bad certificate
2025-08-13T19:51:54.906038627+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49866: remote error: tls: bad certificate
2025-08-13T19:51:54.948265401+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49882: remote error: tls: bad certificate
2025-08-13T19:51:55.071308806+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49890: remote error: tls: bad certificate
2025-08-13T19:51:55.090315868+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49894: remote error: tls: bad certificate
2025-08-13T19:51:55.108698042+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49898: remote error: tls: bad certificate
2025-08-13T19:51:55.123356209+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49904: remote error: tls: bad certificate
2025-08-13T19:51:55.147871908+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49910: remote error: tls: bad certificate
2025-08-13T19:51:55.187059074+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49924: remote error: tls: bad certificate
2025-08-13T19:51:55.226522038+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49928: remote error: tls: bad certificate
2025-08-13T19:51:55.266248410+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49936: remote error: tls: bad certificate
2025-08-13T19:51:55.306411905+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49948: remote error: tls: bad certificate
2025-08-13T19:51:55.347991039+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49954: remote error: tls: bad certificate
2025-08-13T19:51:55.386904648+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49960: remote error: tls: bad certificate
2025-08-13T19:51:55.427086683+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49974: remote error: tls: bad certificate
2025-08-13T19:51:55.466236038+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49986: remote error: tls: bad certificate
2025-08-13T19:51:55.506382512+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49996: remote error: tls: bad certificate
2025-08-13T19:51:55.546525846+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50004: remote error: tls: bad certificate
2025-08-13T19:51:55.597896839+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50012: remote error: tls: bad certificate
2025-08-13T19:51:55.643475588+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50026: remote error: tls: bad certificate
2025-08-13T19:51:55.672675510+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50042: remote error: tls: bad certificate
2025-08-13T19:51:55.686856504+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50046: remote error: tls: bad certificate
2025-08-13T19:51:55.708117569+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50072: remote error: tls: bad certificate
2025-08-13T19:51:55.708869141+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50058: remote error: tls: bad certificate
2025-08-13T19:51:55.727301646+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50074: remote error: tls: bad certificate
2025-08-13T19:51:55.746007539+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50096: remote error: tls: bad certificate
2025-08-13T19:51:55.748355306+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50088: remote error: tls: bad certificate
2025-08-13T19:51:55.767081920+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50104: remote error: tls: bad certificate
2025-08-13T19:51:55.789621482+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50106: remote error: tls: bad certificate
2025-08-13T19:51:55.828289813+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50120: remote error: tls: bad certificate
2025-08-13T19:51:55.866932614+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50134: remote error: tls: bad certificate
2025-08-13T19:51:55.909077455+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50142: remote error: tls: bad certificate
2025-08-13T19:51:55.944401361+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50144: remote error: tls: bad certificate
2025-08-13T19:51:55.987899461+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50160: remote error: tls: bad certificate
2025-08-13T19:51:56.026151471+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50174: remote error: tls: bad certificate
2025-08-13T19:51:56.068135107+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50182: remote error: tls: bad certificate
2025-08-13T19:51:56.110857324+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50184: remote error: tls: bad certificate
2025-08-13T19:51:56.148175537+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50188: remote error: tls: bad certificate
2025-08-13T19:51:56.187425075+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50204: remote error: tls: bad certificate
2025-08-13T19:51:56.225651835+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50210: remote error: tls: bad certificate
2025-08-13T19:51:56.267155117+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50218: remote error: tls: bad certificate
2025-08-13T19:51:56.309512874+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50230: remote error: tls: bad certificate
2025-08-13T19:51:56.347987120+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50236: remote error: tls: bad certificate
2025-08-13T19:51:56.389556494+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50240: remote error: tls: bad certificate
2025-08-13T19:51:56.427295400+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50246: remote error: tls: bad certificate
2025-08-13T19:51:56.467683890+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50258: remote error: tls: bad certificate
2025-08-13T19:51:56.507263138+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50272: remote error: tls: bad certificate
2025-08-13T19:51:56.546551757+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50276: remote error: tls: bad certificate
2025-08-13T19:51:56.586389562+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50280: remote error: tls: bad certificate
2025-08-13T19:51:56.633498824+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50286: remote error: tls: bad certificate
2025-08-13T19:51:56.672197457+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50288: remote error: tls: bad certificate
2025-08-13T19:51:56.707403420+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50304: remote error: tls: bad certificate
2025-08-13T19:51:56.748165521+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50314: remote error: tls: bad certificate
2025-08-13T19:51:56.785644649+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50320: remote error: tls: bad certificate
2025-08-13T19:51:56.830688413+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50330: remote error: tls: bad certificate
2025-08-13T19:51:56.868089638+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50334: remote error: tls: bad certificate
2025-08-13T19:51:56.908528840+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50344: remote error: tls: bad certificate
2025-08-13T19:51:56.947165361+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50346: remote error: tls: bad certificate
2025-08-13T19:51:56.987654225+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50348: remote error: tls: bad certificate
2025-08-13T19:51:57.035987822+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50364: remote error: tls: bad certificate
2025-08-13T19:51:57.064526955+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50368: remote error: tls: bad certificate
2025-08-13T19:51:57.105877663+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50380: remote error: tls: bad certificate
2025-08-13T19:51:57.145515902+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50388: remote error: tls: bad certificate
2025-08-13T19:51:57.189225367+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50394: remote error: tls: bad certificate
2025-08-13T19:51:57.230210824+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50408: remote error: tls: bad certificate
2025-08-13T19:51:57.267122786+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50418: remote error: tls: bad certificate
2025-08-13T19:51:57.307740073+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50422: remote error: tls: bad certificate
2025-08-13T19:51:57.347906888+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50428: remote error: tls: bad certificate
2025-08-13T19:51:57.389106251+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50432: remote error: tls: bad certificate
2025-08-13T19:51:57.425764166+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50434: remote error: tls: bad certificate
2025-08-13T19:51:57.468734730+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50444: remote error: tls: bad certificate
2025-08-13T19:51:57.509384488+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50452: remote error: tls: bad certificate
2025-08-13T19:51:57.550016526+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50462: remote error: tls: bad certificate
2025-08-13T19:51:57.588688438+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50470: remote error: tls: bad certificate
2025-08-13T19:51:57.628283016+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50476: remote error: tls: bad certificate
2025-08-13T19:51:57.668227604+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50490: remote error: tls: bad certificate
2025-08-13T19:51:57.709942472+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50492: remote error: tls: bad certificate
2025-08-13T19:51:57.745881856+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50494: remote error: tls: bad certificate
2025-08-13T19:51:57.788856991+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50498: remote error: tls: bad certificate
2025-08-13T19:51:57.825960748+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50510: remote error: tls: bad certificate
2025-08-13T19:51:57.865308779+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50512: remote error: tls: bad certificate
2025-08-13T19:51:57.907450620+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50518: remote error: tls: bad certificate
2025-08-13T19:51:57.947535242+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50524: remote error: tls: bad certificate
2025-08-13T19:51:57.987173751+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50530: remote error: tls: bad certificate
2025-08-13T19:51:58.027241023+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50542: remote error: tls: bad certificate
2025-08-13T19:51:58.070232508+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50558: remote error: tls: bad certificate
2025-08-13T19:51:58.107928581+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50574: remote error: tls: bad certificate
2025-08-13T19:51:58.147928351+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50580: remote error: tls: bad certificate
2025-08-13T19:51:58.186857380+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50590: remote error: tls: bad certificate
2025-08-13T19:51:58.230539035+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50604: remote error: tls: bad certificate
2025-08-13T19:51:58.266508570+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50614: remote error: tls: bad certificate
2025-08-13T19:51:58.306916331+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50618: remote error: tls: bad certificate
2025-08-13T19:51:58.345992464+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50632: remote error: tls: bad certificate
2025-08-13T19:51:58.389033810+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50642: remote error: tls: bad certificate
2025-08-13T19:51:58.428723821+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50656: remote error: tls: bad certificate
2025-08-13T19:51:58.467154726+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50670: remote error: tls: bad certificate
2025-08-13T19:51:58.507931868+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50680: remote error: tls: bad certificate
2025-08-13T19:51:58.544065057+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50682: remote error: tls: bad certificate
2025-08-13T19:51:58.586625050+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50690: remote error: tls: bad certificate
2025-08-13T19:51:58.626725882+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50700: remote error: tls: bad certificate
2025-08-13T19:51:58.669401158+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50712: remote error: tls: bad certificate
2025-08-13T19:51:58.707880285+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50724: remote error: tls: bad certificate
2025-08-13T19:51:58.748734129+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:35984: remote error: tls: bad certificate
2025-08-13T19:51:58.792397833+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:35986: remote error: tls: bad certificate
2025-08-13T19:51:58.826091813+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:35994: remote error: tls: bad certificate
2025-08-13T19:51:58.872266098+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:36002: remote error: tls: bad certificate
2025-08-13T19:51:58.912637468+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:36008: remote error: tls: bad certificate
2025-08-13T19:51:58.948347306+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:36012: remote error: tls: bad certificate
2025-08-13T19:51:58.989556140+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:36016: remote error: tls: bad certificate
2025-08-13T19:51:59.026058560+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36018: remote error: tls: bad certificate
2025-08-13T19:51:59.066424630+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36030: remote error: tls: bad certificate
2025-08-13T19:51:59.105723980+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36042: remote error: tls: bad certificate
2025-08-13T19:51:59.149355533+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36054: remote error: tls: bad certificate
2025-08-13T19:51:59.187990094+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36066: remote error: tls: bad certificate
2025-08-13T19:51:59.231429931+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36076: remote error: tls: bad certificate
2025-08-13T19:51:59.271972936+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36086: remote error: tls: bad certificate
2025-08-13T19:51:59.308136197+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36096: remote error: tls: bad certificate
2025-08-13T19:51:59.346877050+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36098: remote error: tls: bad certificate
2025-08-13T19:51:59.387407615+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36110: remote error: tls: bad certificate
2025-08-13T19:51:59.430143873+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36116: remote error: tls: bad certificate
2025-08-13T19:51:59.468856036+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36130: remote error: tls: bad certificate
2025-08-13T19:51:59.511932223+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36140: remote error: tls: bad certificate
2025-08-13T19:51:59.546727344+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36156: remote error: tls: bad certificate
2025-08-13T19:51:59.588886105+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36168: remote error: tls: bad certificate
2025-08-13T19:51:59.635268037+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36172: remote error: tls: bad certificate
2025-08-13T19:51:59.669538453+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36184: remote error: tls: bad certificate
2025-08-13T19:51:59.707252578+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36188: remote error: tls: bad certificate
2025-08-13T19:51:59.745287591+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36194: remote error: tls: bad certificate
2025-08-13T19:51:59.788976006+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36200: remote error: tls: bad certificate
2025-08-13T19:51:59.828248605+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36204: remote error: tls: bad certificate
2025-08-13T19:51:59.866762582+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36208: remote error: tls: bad certificate
2025-08-13T19:51:59.909291384+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36214: remote error: tls: bad certificate
2025-08-13T19:51:59.947633607+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36224: remote error: tls: bad certificate
2025-08-13T19:51:59.987223464+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36236: remote error: tls: bad certificate
2025-08-13T19:52:00.027913324+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36242: remote error: tls: bad certificate
2025-08-13T19:52:00.067411429+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36246: remote error: tls: bad certificate
2025-08-13T19:52:00.111442214+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36260: remote error: tls: bad certificate
2025-08-13T19:52:00.157400113+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36276: remote error: tls: bad certificate
2025-08-13T19:52:00.190490616+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36292: remote error: tls: bad certificate
2025-08-13T19:52:00.229408055+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36294: remote error: tls: bad certificate
2025-08-13T19:52:01.167641625+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36298: remote error: tls: bad certificate
2025-08-13T19:52:01.184061993+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36308: remote error: tls: bad certificate
2025-08-13T19:52:01.201028956+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36318: remote error: tls: bad certificate
2025-08-13T19:52:01.219742989+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36324: remote error: tls: bad certificate
2025-08-13T19:52:01.236437835+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36326: remote error: tls: bad certificate
2025-08-13T19:52:01.251860544+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36332: remote error: tls: bad certificate
2025-08-13T19:52:05.236931622+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36340: remote error: tls: bad certificate
2025-08-13T19:52:05.266534105+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36356: remote error: tls: bad certificate 2025-08-13T19:52:05.287584185+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36368: remote error: tls: bad certificate 2025-08-13T19:52:05.307226524+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36374: remote error: tls: bad certificate 2025-08-13T19:52:05.324289521+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36390: remote error: tls: bad certificate 2025-08-13T19:52:05.342008606+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36392: remote error: tls: bad certificate 2025-08-13T19:52:05.360517743+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36400: remote error: tls: bad certificate 2025-08-13T19:52:05.377038244+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36404: remote error: tls: bad certificate 2025-08-13T19:52:05.397209168+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36418: remote error: tls: bad certificate 2025-08-13T19:52:05.415505810+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36424: remote error: tls: bad certificate 2025-08-13T19:52:05.432891705+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36428: remote error: tls: bad certificate 2025-08-13T19:52:05.451260688+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36442: remote error: tls: bad certificate 2025-08-13T19:52:05.469665253+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36444: remote error: tls: bad certificate 2025-08-13T19:52:05.490288090+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36450: remote error: tls: bad certificate 2025-08-13T19:52:05.509049285+00:00 stderr F 2025/08/13 19:52:05 http: TLS 
handshake error from 127.0.0.1:36456: remote error: tls: bad certificate 2025-08-13T19:52:05.532361669+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36462: remote error: tls: bad certificate 2025-08-13T19:52:05.552752400+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36468: remote error: tls: bad certificate 2025-08-13T19:52:05.570749933+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36482: remote error: tls: bad certificate 2025-08-13T19:52:05.584704090+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36494: remote error: tls: bad certificate 2025-08-13T19:52:05.603485405+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36506: remote error: tls: bad certificate 2025-08-13T19:52:05.620620393+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36518: remote error: tls: bad certificate 2025-08-13T19:52:05.635519638+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36532: remote error: tls: bad certificate 2025-08-13T19:52:05.653222512+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36548: remote error: tls: bad certificate 2025-08-13T19:52:05.673669555+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36564: remote error: tls: bad certificate 2025-08-13T19:52:05.698261476+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36570: remote error: tls: bad certificate 2025-08-13T19:52:05.716276199+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36578: remote error: tls: bad certificate 2025-08-13T19:52:05.735325551+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36580: remote error: tls: bad certificate 2025-08-13T19:52:05.755627170+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36584: remote error: tls: bad certificate 
2025-08-13T19:52:05.771925954+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36596: remote error: tls: bad certificate 2025-08-13T19:52:05.788884888+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36612: remote error: tls: bad certificate 2025-08-13T19:52:05.808722333+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36614: remote error: tls: bad certificate 2025-08-13T19:52:05.824987336+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36622: remote error: tls: bad certificate 2025-08-13T19:52:05.839405767+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36634: remote error: tls: bad certificate 2025-08-13T19:52:05.854456306+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36638: remote error: tls: bad certificate 2025-08-13T19:52:05.879326394+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36644: remote error: tls: bad certificate 2025-08-13T19:52:05.935344450+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36646: remote error: tls: bad certificate 2025-08-13T19:52:05.957058429+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36660: remote error: tls: bad certificate 2025-08-13T19:52:05.972047726+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36664: remote error: tls: bad certificate 2025-08-13T19:52:05.985517710+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36680: remote error: tls: bad certificate 2025-08-13T19:52:06.003663327+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36682: remote error: tls: bad certificate 2025-08-13T19:52:06.017726227+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36686: remote error: tls: bad certificate 2025-08-13T19:52:06.026959610+00:00 stderr F 2025/08/13 19:52:06 http: TLS 
handshake error from 127.0.0.1:36692: remote error: tls: bad certificate 2025-08-13T19:52:06.037721947+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36706: remote error: tls: bad certificate 2025-08-13T19:52:06.046892248+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36708: remote error: tls: bad certificate 2025-08-13T19:52:06.050854951+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36712: remote error: tls: bad certificate 2025-08-13T19:52:06.066339242+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36722: remote error: tls: bad certificate 2025-08-13T19:52:06.068868084+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36724: remote error: tls: bad certificate 2025-08-13T19:52:06.083759869+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36740: remote error: tls: bad certificate 2025-08-13T19:52:06.097086158+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36750: remote error: tls: bad certificate 2025-08-13T19:52:06.117000506+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36762: remote error: tls: bad certificate 2025-08-13T19:52:06.146716492+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36774: remote error: tls: bad certificate 2025-08-13T19:52:06.167216166+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36776: remote error: tls: bad certificate 2025-08-13T19:52:06.183553462+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36782: remote error: tls: bad certificate 2025-08-13T19:52:06.199196368+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36798: remote error: tls: bad certificate 2025-08-13T19:52:06.214675209+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36808: remote error: tls: bad certificate 
2025-08-13T19:52:06.234711359+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36814: remote error: tls: bad certificate 2025-08-13T19:52:06.251624711+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36816: remote error: tls: bad certificate 2025-08-13T19:52:06.268128402+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36826: remote error: tls: bad certificate 2025-08-13T19:52:06.287366830+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36840: remote error: tls: bad certificate 2025-08-13T19:52:06.303615793+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36844: remote error: tls: bad certificate 2025-08-13T19:52:06.317056476+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36846: remote error: tls: bad certificate 2025-08-13T19:52:06.340746921+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36852: remote error: tls: bad certificate 2025-08-13T19:52:06.360356229+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36862: remote error: tls: bad certificate 2025-08-13T19:52:06.379545686+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36866: remote error: tls: bad certificate 2025-08-13T19:52:06.399064262+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36876: remote error: tls: bad certificate 2025-08-13T19:52:06.413962877+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36888: remote error: tls: bad certificate 2025-08-13T19:52:06.429262262+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36892: remote error: tls: bad certificate 2025-08-13T19:52:06.447497802+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36902: remote error: tls: bad certificate 2025-08-13T19:52:06.462392416+00:00 stderr F 2025/08/13 19:52:06 http: TLS 
handshake error from 127.0.0.1:36916: remote error: tls: bad certificate 2025-08-13T19:52:06.479386651+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36918: remote error: tls: bad certificate 2025-08-13T19:52:06.494960344+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36920: remote error: tls: bad certificate 2025-08-13T19:52:06.509599981+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36926: remote error: tls: bad certificate 2025-08-13T19:52:09.232627872+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38862: remote error: tls: bad certificate 2025-08-13T19:52:09.252110557+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38870: remote error: tls: bad certificate 2025-08-13T19:52:09.266079425+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38886: remote error: tls: bad certificate 2025-08-13T19:52:09.280973830+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38894: remote error: tls: bad certificate 2025-08-13T19:52:09.303372578+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38904: remote error: tls: bad certificate 2025-08-13T19:52:09.318289233+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38918: remote error: tls: bad certificate 2025-08-13T19:52:09.344658414+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38928: remote error: tls: bad certificate 2025-08-13T19:52:09.363391458+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38942: remote error: tls: bad certificate 2025-08-13T19:52:09.383070928+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38958: remote error: tls: bad certificate 2025-08-13T19:52:09.401216235+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38972: remote error: tls: bad certificate 
2025-08-13T19:52:09.425712023+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38982: remote error: tls: bad certificate 2025-08-13T19:52:09.449089149+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38988: remote error: tls: bad certificate 2025-08-13T19:52:09.463050887+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38992: remote error: tls: bad certificate 2025-08-13T19:52:09.481665857+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38998: remote error: tls: bad certificate 2025-08-13T19:52:09.503006726+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39002: remote error: tls: bad certificate 2025-08-13T19:52:09.531838787+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39016: remote error: tls: bad certificate 2025-08-13T19:52:09.548955925+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39032: remote error: tls: bad certificate 2025-08-13T19:52:09.565971849+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39042: remote error: tls: bad certificate 2025-08-13T19:52:09.583457428+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39048: remote error: tls: bad certificate 2025-08-13T19:52:09.607585995+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39060: remote error: tls: bad certificate 2025-08-13T19:52:09.632363741+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39066: remote error: tls: bad certificate 2025-08-13T19:52:09.653141543+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39068: remote error: tls: bad certificate 2025-08-13T19:52:09.675610153+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39084: remote error: tls: bad certificate 2025-08-13T19:52:09.691690161+00:00 stderr F 2025/08/13 19:52:09 http: TLS 
handshake error from 127.0.0.1:39100: remote error: tls: bad certificate 2025-08-13T19:52:09.715331175+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39110: remote error: tls: bad certificate 2025-08-13T19:52:09.731424563+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39112: remote error: tls: bad certificate 2025-08-13T19:52:09.755659974+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39128: remote error: tls: bad certificate 2025-08-13T19:52:09.790123386+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39130: remote error: tls: bad certificate 2025-08-13T19:52:09.810916228+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39136: remote error: tls: bad certificate 2025-08-13T19:52:09.835099087+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39148: remote error: tls: bad certificate 2025-08-13T19:52:09.852948146+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39160: remote error: tls: bad certificate 2025-08-13T19:52:09.868696204+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39176: remote error: tls: bad certificate 2025-08-13T19:52:09.889639671+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39180: remote error: tls: bad certificate 2025-08-13T19:52:09.907500820+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39186: remote error: tls: bad certificate 2025-08-13T19:52:09.923646710+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39198: remote error: tls: bad certificate 2025-08-13T19:52:09.939259035+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39210: remote error: tls: bad certificate 2025-08-13T19:52:09.953406518+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39218: remote error: tls: bad certificate 
2025-08-13T19:52:09.977557946+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39234: remote error: tls: bad certificate 2025-08-13T19:52:10.413194018+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39238: remote error: tls: bad certificate 2025-08-13T19:52:10.434192256+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39242: remote error: tls: bad certificate 2025-08-13T19:52:10.458764126+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39256: remote error: tls: bad certificate 2025-08-13T19:52:10.491059726+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39268: remote error: tls: bad certificate 2025-08-13T19:52:10.510424878+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39278: remote error: tls: bad certificate 2025-08-13T19:52:10.530265113+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39290: remote error: tls: bad certificate 2025-08-13T19:52:10.556875341+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39294: remote error: tls: bad certificate 2025-08-13T19:52:10.576465279+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39308: remote error: tls: bad certificate 2025-08-13T19:52:10.596180591+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39324: remote error: tls: bad certificate 2025-08-13T19:52:10.611123507+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39330: remote error: tls: bad certificate 2025-08-13T19:52:10.627052181+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39336: remote error: tls: bad certificate 2025-08-13T19:52:10.647875534+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39342: remote error: tls: bad certificate 2025-08-13T19:52:10.669904852+00:00 stderr F 2025/08/13 19:52:10 http: TLS 
handshake error from 127.0.0.1:39344: remote error: tls: bad certificate 2025-08-13T19:52:10.690860479+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39346: remote error: tls: bad certificate 2025-08-13T19:52:10.708047158+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39362: remote error: tls: bad certificate 2025-08-13T19:52:10.727132012+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39372: remote error: tls: bad certificate 2025-08-13T19:52:10.743498018+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39378: remote error: tls: bad certificate 2025-08-13T19:52:10.762885421+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39380: remote error: tls: bad certificate 2025-08-13T19:52:10.777832007+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39384: remote error: tls: bad certificate 2025-08-13T19:52:10.797945690+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39400: remote error: tls: bad certificate 2025-08-13T19:52:10.814191273+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39410: remote error: tls: bad certificate 2025-08-13T19:52:10.823028084+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39416: remote error: tls: bad certificate 2025-08-13T19:52:10.831588288+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39418: remote error: tls: bad certificate 2025-08-13T19:52:10.852153644+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39424: remote error: tls: bad certificate 2025-08-13T19:52:10.871182816+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39430: remote error: tls: bad certificate 2025-08-13T19:52:10.886941185+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39446: remote error: tls: bad certificate 
2025-08-13T19:52:10.907397778+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39462: remote error: tls: bad certificate 2025-08-13T19:52:10.927762728+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39474: remote error: tls: bad certificate 2025-08-13T19:52:10.944996149+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39490: remote error: tls: bad certificate 2025-08-13T19:52:10.963029773+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39502: remote error: tls: bad certificate 2025-08-13T19:52:10.978043161+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39512: remote error: tls: bad certificate 2025-08-13T19:52:10.994343835+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39524: remote error: tls: bad certificate 2025-08-13T19:52:11.009925899+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39540: remote error: tls: bad certificate 2025-08-13T19:52:11.035461807+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39552: remote error: tls: bad certificate 2025-08-13T19:52:11.054002075+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39564: remote error: tls: bad certificate 2025-08-13T19:52:11.071367320+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39572: remote error: tls: bad certificate 2025-08-13T19:52:11.089543108+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39584: remote error: tls: bad certificate 2025-08-13T19:52:11.107528200+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39594: remote error: tls: bad certificate 2025-08-13T19:52:11.127368165+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39610: remote error: tls: bad certificate 2025-08-13T19:52:11.144221796+00:00 stderr F 2025/08/13 19:52:11 http: TLS 
handshake error from 127.0.0.1:39618: remote error: tls: bad certificate 2025-08-13T19:52:11.161006134+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39624: remote error: tls: bad certificate 2025-08-13T19:52:11.179588303+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39626: remote error: tls: bad certificate 2025-08-13T19:52:11.196653949+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39630: remote error: tls: bad certificate 2025-08-13T19:52:11.221924519+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39644: remote error: tls: bad certificate 2025-08-13T19:52:11.236727971+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39658: remote error: tls: bad certificate 2025-08-13T19:52:11.257653857+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39674: remote error: tls: bad certificate 2025-08-13T19:52:11.281644401+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39682: remote error: tls: bad certificate 2025-08-13T19:52:11.298472340+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39688: remote error: tls: bad certificate 2025-08-13T19:52:11.315032192+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39702: remote error: tls: bad certificate 2025-08-13T19:52:11.331735458+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39712: remote error: tls: bad certificate 2025-08-13T19:52:11.347877758+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39720: remote error: tls: bad certificate 2025-08-13T19:52:11.364930624+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39722: remote error: tls: bad certificate 2025-08-13T19:52:11.381492806+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39730: remote error: tls: bad certificate 
2025-08-13T19:52:11.396977417+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39746: remote error: tls: bad certificate 2025-08-13T19:52:11.411919512+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39748: remote error: tls: bad certificate 2025-08-13T19:52:11.425877970+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39752: remote error: tls: bad certificate 2025-08-13T19:52:11.442925426+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39768: remote error: tls: bad certificate 2025-08-13T19:52:11.457592704+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39780: remote error: tls: bad certificate 2025-08-13T19:52:11.474499765+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39794: remote error: tls: bad certificate 2025-08-13T19:52:11.489083600+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39798: remote error: tls: bad certificate 2025-08-13T19:52:11.504001685+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39812: remote error: tls: bad certificate 2025-08-13T19:52:11.527297039+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39824: remote error: tls: bad certificate 2025-08-13T19:52:11.544857089+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39838: remote error: tls: bad certificate 2025-08-13T19:52:11.561618747+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39842: remote error: tls: bad certificate 2025-08-13T19:52:11.578022934+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39856: remote error: tls: bad certificate 2025-08-13T19:52:11.592737703+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39862: remote error: tls: bad certificate 2025-08-13T19:52:11.605950840+00:00 stderr F 2025/08/13 19:52:11 http: TLS 
handshake error from 127.0.0.1:39864: remote error: tls: bad certificate 2025-08-13T19:52:11.620057401+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39874: remote error: tls: bad certificate 2025-08-13T19:52:11.632685761+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39884: remote error: tls: bad certificate 2025-08-13T19:52:11.644263651+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39900: remote error: tls: bad certificate 2025-08-13T19:52:11.657287642+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39906: remote error: tls: bad certificate 2025-08-13T19:52:11.669849710+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39920: remote error: tls: bad certificate 2025-08-13T19:52:11.684449876+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39932: remote error: tls: bad certificate 2025-08-13T19:52:11.700539075+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39946: remote error: tls: bad certificate 2025-08-13T19:52:11.717894039+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39948: remote error: tls: bad certificate 2025-08-13T19:52:11.741139591+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39958: remote error: tls: bad certificate 2025-08-13T19:52:11.783948171+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39972: remote error: tls: bad certificate 2025-08-13T19:52:11.825070713+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39984: remote error: tls: bad certificate 2025-08-13T19:52:11.862937661+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39986: remote error: tls: bad certificate 2025-08-13T19:52:11.902239011+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39996: remote error: tls: bad certificate 
2025-08-13T19:52:11.942315013+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:40006: remote error: tls: bad certificate 2025-08-13T19:52:11.983443985+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:40012: remote error: tls: bad certificate 2025-08-13T19:52:12.021448508+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40024: remote error: tls: bad certificate 2025-08-13T19:52:12.060711606+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40030: remote error: tls: bad certificate 2025-08-13T19:52:12.102512157+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40046: remote error: tls: bad certificate 2025-08-13T19:52:12.167097067+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40048: remote error: tls: bad certificate 2025-08-13T19:52:12.217665528+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40052: remote error: tls: bad certificate 2025-08-13T19:52:12.240474918+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40054: remote error: tls: bad certificate 2025-08-13T19:52:12.261388544+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40056: remote error: tls: bad certificate 2025-08-13T19:52:12.301742003+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40060: remote error: tls: bad certificate 2025-08-13T19:52:12.341023962+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40066: remote error: tls: bad certificate 2025-08-13T19:52:12.381319700+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40080: remote error: tls: bad certificate 2025-08-13T19:52:12.422914886+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40092: remote error: tls: bad certificate 2025-08-13T19:52:12.463697788+00:00 stderr F 2025/08/13 19:52:12 http: TLS 
handshake error from 127.0.0.1:40102: remote error: tls: bad certificate 2025-08-13T19:52:12.504071048+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40116: remote error: tls: bad certificate 2025-08-13T19:52:12.547703351+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40122: remote error: tls: bad certificate 2025-08-13T19:52:12.581839253+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40128: remote error: tls: bad certificate 2025-08-13T19:52:12.622860002+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40138: remote error: tls: bad certificate 2025-08-13T19:52:12.661340779+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40140: remote error: tls: bad certificate 2025-08-13T19:52:12.701079951+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40148: remote error: tls: bad certificate 2025-08-13T19:52:12.740683089+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40152: remote error: tls: bad certificate 2025-08-13T19:52:12.781331177+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40166: remote error: tls: bad certificate 2025-08-13T19:52:12.822731237+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40178: remote error: tls: bad certificate 2025-08-13T19:52:12.864116206+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40192: remote error: tls: bad certificate 2025-08-13T19:52:12.902372336+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40206: remote error: tls: bad certificate 2025-08-13T19:52:12.940494812+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40216: remote error: tls: bad certificate 2025-08-13T19:52:12.981758598+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40232: remote error: tls: bad certificate 
2025-08-13T19:52:13.031258298+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40248: remote error: tls: bad certificate 2025-08-13T19:52:13.066912604+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40262: remote error: tls: bad certificate 2025-08-13T19:52:13.107282584+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40268: remote error: tls: bad certificate 2025-08-13T19:52:13.142567149+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40280: remote error: tls: bad certificate 2025-08-13T19:52:13.183164186+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40288: remote error: tls: bad certificate 2025-08-13T19:52:13.223954458+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40300: remote error: tls: bad certificate 2025-08-13T19:52:13.264955716+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40308: remote error: tls: bad certificate 2025-08-13T19:52:13.309511016+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40320: remote error: tls: bad certificate 2025-08-13T19:52:13.348260370+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40330: remote error: tls: bad certificate 2025-08-13T19:52:13.382583788+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40344: remote error: tls: bad certificate 2025-08-13T19:52:13.422758932+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40350: remote error: tls: bad certificate 2025-08-13T19:52:13.464926023+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40366: remote error: tls: bad certificate 2025-08-13T19:52:13.503688118+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40374: remote error: tls: bad certificate 2025-08-13T19:52:13.545931881+00:00 stderr F 2025/08/13 19:52:13 http: TLS 
handshake error from 127.0.0.1:40386: remote error: tls: bad certificate 2025-08-13T19:52:13.582022830+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40398: remote error: tls: bad certificate 2025-08-13T19:52:13.621601367+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40406: remote error: tls: bad certificate 2025-08-13T19:52:13.664280403+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40422: remote error: tls: bad certificate 2025-08-13T19:52:13.703613304+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40436: remote error: tls: bad certificate 2025-08-13T19:52:13.744616232+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40440: remote error: tls: bad certificate 2025-08-13T19:52:13.785130986+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40454: remote error: tls: bad certificate 2025-08-13T19:52:13.822426749+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40456: remote error: tls: bad certificate 2025-08-13T19:52:13.861883983+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40470: remote error: tls: bad certificate 2025-08-13T19:52:13.910098867+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40472: remote error: tls: bad certificate 2025-08-13T19:52:13.939739281+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40488: remote error: tls: bad certificate 2025-08-13T19:52:13.983495428+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40490: remote error: tls: bad certificate 2025-08-13T19:52:14.021704327+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40494: remote error: tls: bad certificate 2025-08-13T19:52:14.063676833+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40504: remote error: tls: bad certificate 
2025-08-13T19:52:14.104156576+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40516: remote error: tls: bad certificate 2025-08-13T19:52:14.144366011+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40520: remote error: tls: bad certificate 2025-08-13T19:52:14.191540715+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40534: remote error: tls: bad certificate 2025-08-13T19:52:14.221933991+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40544: remote error: tls: bad certificate 2025-08-13T19:52:14.268723094+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40560: remote error: tls: bad certificate 2025-08-13T19:52:14.308665693+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40564: remote error: tls: bad certificate 2025-08-13T19:52:14.342111615+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40566: remote error: tls: bad certificate 2025-08-13T19:52:14.388072655+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40578: remote error: tls: bad certificate 2025-08-13T19:52:14.421615230+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40582: remote error: tls: bad certificate 2025-08-13T19:52:14.461890188+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40584: remote error: tls: bad certificate 2025-08-13T19:52:14.501881407+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40598: remote error: tls: bad certificate 2025-08-13T19:52:14.543302267+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40608: remote error: tls: bad certificate 2025-08-13T19:52:14.588453424+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40622: remote error: tls: bad certificate 2025-08-13T19:52:14.623759570+00:00 stderr F 2025/08/13 19:52:14 http: TLS 
handshake error from 127.0.0.1:40624: remote error: tls: bad certificate 2025-08-13T19:52:14.661446144+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40638: remote error: tls: bad certificate 2025-08-13T19:52:14.701385821+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40648: remote error: tls: bad certificate 2025-08-13T19:52:14.742138102+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40658: remote error: tls: bad certificate 2025-08-13T19:52:14.781102153+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40666: remote error: tls: bad certificate 2025-08-13T19:52:14.822133612+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40678: remote error: tls: bad certificate 2025-08-13T19:52:14.861476223+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40688: remote error: tls: bad certificate 2025-08-13T19:52:14.902266185+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40690: remote error: tls: bad certificate 2025-08-13T19:52:14.942502491+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40696: remote error: tls: bad certificate 2025-08-13T19:52:14.989922982+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40702: remote error: tls: bad certificate 2025-08-13T19:52:15.035294025+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40716: remote error: tls: bad certificate 2025-08-13T19:52:15.062418068+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40718: remote error: tls: bad certificate 2025-08-13T19:52:15.101994224+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40730: remote error: tls: bad certificate 2025-08-13T19:52:15.142621022+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40742: remote error: tls: bad certificate 
2025-08-13T19:52:15.194532501+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40756: remote error: tls: bad certificate 2025-08-13T19:52:15.224587177+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40758: remote error: tls: bad certificate 2025-08-13T19:52:15.266966404+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40772: remote error: tls: bad certificate 2025-08-13T19:52:15.307219001+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40784: remote error: tls: bad certificate 2025-08-13T19:52:15.346391457+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40786: remote error: tls: bad certificate 2025-08-13T19:52:15.388867288+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40788: remote error: tls: bad certificate 2025-08-13T19:52:15.426451058+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40798: remote error: tls: bad certificate 2025-08-13T19:52:15.462024732+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40800: remote error: tls: bad certificate 2025-08-13T19:52:15.504317767+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40808: remote error: tls: bad certificate 2025-08-13T19:52:15.541993750+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40824: remote error: tls: bad certificate 2025-08-13T19:52:15.587716863+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40830: remote error: tls: bad certificate 2025-08-13T19:52:15.620622270+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40840: remote error: tls: bad certificate 2025-08-13T19:52:15.662918125+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40844: remote error: tls: bad certificate 2025-08-13T19:52:15.700840836+00:00 stderr F 2025/08/13 19:52:15 http: TLS 
handshake error from 127.0.0.1:40850: remote error: tls: bad certificate 2025-08-13T19:52:15.743609805+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40866: remote error: tls: bad certificate 2025-08-13T19:52:15.788287597+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40876: remote error: tls: bad certificate 2025-08-13T19:52:15.821371540+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40892: remote error: tls: bad certificate 2025-08-13T19:52:15.862442170+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40908: remote error: tls: bad certificate 2025-08-13T19:52:15.901707219+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40910: remote error: tls: bad certificate 2025-08-13T19:52:15.944092526+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40920: remote error: tls: bad certificate 2025-08-13T19:52:15.989336395+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40936: remote error: tls: bad certificate 2025-08-13T19:52:16.021855652+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40946: remote error: tls: bad certificate 2025-08-13T19:52:16.063159029+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40950: remote error: tls: bad certificate 2025-08-13T19:52:16.104336372+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40952: remote error: tls: bad certificate 2025-08-13T19:52:16.143946810+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40968: remote error: tls: bad certificate 2025-08-13T19:52:16.185765572+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40972: remote error: tls: bad certificate 2025-08-13T19:52:16.224690171+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40988: remote error: tls: bad certificate 
2025-08-13T19:52:16.271144184+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40992: remote error: tls: bad certificate 2025-08-13T19:52:16.303604989+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41002: remote error: tls: bad certificate 2025-08-13T19:52:16.343666201+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41018: remote error: tls: bad certificate 2025-08-13T19:52:16.384617087+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41034: remote error: tls: bad certificate 2025-08-13T19:52:16.421874179+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41046: remote error: tls: bad certificate 2025-08-13T19:52:16.427717065+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41060: remote error: tls: bad certificate 2025-08-13T19:52:16.450360310+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41076: remote error: tls: bad certificate 2025-08-13T19:52:16.463030341+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41078: remote error: tls: bad certificate 2025-08-13T19:52:16.474139578+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41088: remote error: tls: bad certificate 2025-08-13T19:52:16.495102035+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41092: remote error: tls: bad certificate 2025-08-13T19:52:16.508591420+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41094: remote error: tls: bad certificate 2025-08-13T19:52:16.516873826+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41096: remote error: tls: bad certificate 2025-08-13T19:52:16.543182675+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41104: remote error: tls: bad certificate 2025-08-13T19:52:16.583160114+00:00 stderr F 2025/08/13 19:52:16 http: TLS 
handshake error from 127.0.0.1:41118: remote error: tls: bad certificate 2025-08-13T19:52:16.622651059+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41134: remote error: tls: bad certificate 2025-08-13T19:52:16.662400672+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41150: remote error: tls: bad certificate 2025-08-13T19:52:16.702245907+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41156: remote error: tls: bad certificate 2025-08-13T19:52:16.742052621+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41168: remote error: tls: bad certificate 2025-08-13T19:52:16.784443129+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41172: remote error: tls: bad certificate 2025-08-13T19:52:16.822364699+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41182: remote error: tls: bad certificate 2025-08-13T19:52:16.863114030+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41194: remote error: tls: bad certificate 2025-08-13T19:52:16.904113078+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41198: remote error: tls: bad certificate 2025-08-13T19:52:16.942268855+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41204: remote error: tls: bad certificate 2025-08-13T19:52:16.981872174+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41206: remote error: tls: bad certificate 2025-08-13T19:52:17.024432966+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41208: remote error: tls: bad certificate 2025-08-13T19:52:17.069047137+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41214: remote error: tls: bad certificate 2025-08-13T19:52:17.108159092+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41226: remote error: tls: bad certificate 
2025-08-13T19:52:17.151381943+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41230: remote error: tls: bad certificate 2025-08-13T19:52:17.189513860+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41240: remote error: tls: bad certificate 2025-08-13T19:52:17.228182411+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41248: remote error: tls: bad certificate 2025-08-13T19:52:17.266487893+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41264: remote error: tls: bad certificate 2025-08-13T19:52:17.307342997+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41272: remote error: tls: bad certificate 2025-08-13T19:52:17.346971366+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41276: remote error: tls: bad certificate 2025-08-13T19:52:17.383013183+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41278: remote error: tls: bad certificate 2025-08-13T19:52:17.425935346+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41282: remote error: tls: bad certificate 2025-08-13T19:52:17.464476354+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41286: remote error: tls: bad certificate 2025-08-13T19:52:17.502745114+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41296: remote error: tls: bad certificate 2025-08-13T19:52:17.544722150+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41308: remote error: tls: bad certificate 2025-08-13T19:52:17.584005599+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41316: remote error: tls: bad certificate 2025-08-13T19:52:17.629548597+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41318: remote error: tls: bad certificate 2025-08-13T19:52:17.665853351+00:00 stderr F 2025/08/13 19:52:17 http: TLS 
handshake error from 127.0.0.1:41334: remote error: tls: bad certificate 2025-08-13T19:52:17.703093872+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41348: remote error: tls: bad certificate 2025-08-13T19:52:17.744529713+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41354: remote error: tls: bad certificate 2025-08-13T19:52:17.785493290+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41364: remote error: tls: bad certificate 2025-08-13T19:52:17.823942645+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41380: remote error: tls: bad certificate 2025-08-13T19:52:17.871266064+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41384: remote error: tls: bad certificate 2025-08-13T19:52:17.904524381+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate 2025-08-13T19:52:17.947506846+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41408: remote error: tls: bad certificate 2025-08-13T19:52:22.825275536+00:00 stderr F 2025/08/13 19:52:22 http: TLS handshake error from 127.0.0.1:58334: remote error: tls: bad certificate 2025-08-13T19:52:25.223326149+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58338: remote error: tls: bad certificate 2025-08-13T19:52:25.240954791+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58350: remote error: tls: bad certificate 2025-08-13T19:52:25.296952207+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58364: remote error: tls: bad certificate 2025-08-13T19:52:25.338490320+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58368: remote error: tls: bad certificate 2025-08-13T19:52:25.356747301+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58384: remote error: tls: bad certificate 
2025-08-13T19:52:25.372677344+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58386: remote error: tls: bad certificate 2025-08-13T19:52:25.387531868+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58402: remote error: tls: bad certificate 2025-08-13T19:52:25.407517437+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58412: remote error: tls: bad certificate 2025-08-13T19:52:25.424520621+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58416: remote error: tls: bad certificate 2025-08-13T19:52:25.442419241+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58430: remote error: tls: bad certificate 2025-08-13T19:52:25.459381585+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58434: remote error: tls: bad certificate 2025-08-13T19:52:25.475724650+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58440: remote error: tls: bad certificate 2025-08-13T19:52:25.501438643+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58442: remote error: tls: bad certificate 2025-08-13T19:52:25.516681827+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58450: remote error: tls: bad certificate 2025-08-13T19:52:25.531246612+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58458: remote error: tls: bad certificate 2025-08-13T19:52:25.545003804+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58466: remote error: tls: bad certificate 2025-08-13T19:52:25.561746321+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58472: remote error: tls: bad certificate 2025-08-13T19:52:25.579395714+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58480: remote error: tls: bad certificate 2025-08-13T19:52:25.593453455+00:00 stderr F 2025/08/13 19:52:25 http: TLS 
handshake error from 127.0.0.1:58490: remote error: tls: bad certificate 2025-08-13T19:52:25.609185443+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58492: remote error: tls: bad certificate 2025-08-13T19:52:25.625933600+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58498: remote error: tls: bad certificate 2025-08-13T19:52:25.639068834+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58514: remote error: tls: bad certificate 2025-08-13T19:52:25.652001703+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58524: remote error: tls: bad certificate 2025-08-13T19:52:25.666618059+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58534: remote error: tls: bad certificate 2025-08-13T19:52:25.682511092+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58536: remote error: tls: bad certificate 2025-08-13T19:52:25.701233115+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58552: remote error: tls: bad certificate 2025-08-13T19:52:25.716989544+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58560: remote error: tls: bad certificate 2025-08-13T19:52:25.735908663+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58566: remote error: tls: bad certificate 2025-08-13T19:52:25.754454232+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58582: remote error: tls: bad certificate 2025-08-13T19:52:25.769279854+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58594: remote error: tls: bad certificate 2025-08-13T19:52:25.783194450+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58600: remote error: tls: bad certificate 2025-08-13T19:52:25.799534336+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58602: remote error: tls: bad certificate 
2025-08-13T19:52:25.818184086+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58608: remote error: tls: bad certificate 2025-08-13T19:52:25.843068765+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58618: remote error: tls: bad certificate 2025-08-13T19:52:25.858664330+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58632: remote error: tls: bad certificate 2025-08-13T19:52:25.876926030+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58642: remote error: tls: bad certificate 2025-08-13T19:52:25.900269615+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58644: remote error: tls: bad certificate 2025-08-13T19:52:25.916197429+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58654: remote error: tls: bad certificate 2025-08-13T19:52:25.933082790+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58666: remote error: tls: bad certificate 2025-08-13T19:52:25.950171327+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58678: remote error: tls: bad certificate 2025-08-13T19:52:25.968612212+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58694: remote error: tls: bad certificate 2025-08-13T19:52:25.985738720+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58706: remote error: tls: bad certificate 2025-08-13T19:52:26.003170677+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58708: remote error: tls: bad certificate 2025-08-13T19:52:26.027500480+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58716: remote error: tls: bad certificate 2025-08-13T19:52:26.054754387+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58718: remote error: tls: bad certificate 2025-08-13T19:52:26.069362183+00:00 stderr F 2025/08/13 19:52:26 http: TLS 
handshake error from 127.0.0.1:58730: remote error: tls: bad certificate 2025-08-13T19:52:26.085567544+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58740: remote error: tls: bad certificate 2025-08-13T19:52:26.098739770+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58742: remote error: tls: bad certificate 2025-08-13T19:52:26.115668642+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58756: remote error: tls: bad certificate 2025-08-13T19:52:26.135727873+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58764: remote error: tls: bad certificate 2025-08-13T19:52:26.149984610+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58766: remote error: tls: bad certificate 2025-08-13T19:52:26.167277312+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58776: remote error: tls: bad certificate 2025-08-13T19:52:26.181919249+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58786: remote error: tls: bad certificate 2025-08-13T19:52:26.198191933+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58798: remote error: tls: bad certificate 2025-08-13T19:52:26.216253928+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58804: remote error: tls: bad certificate 2025-08-13T19:52:26.234710754+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58816: remote error: tls: bad certificate 2025-08-13T19:52:26.255699392+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58826: remote error: tls: bad certificate 2025-08-13T19:52:26.275589568+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58832: remote error: tls: bad certificate 2025-08-13T19:52:26.293391465+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58836: remote error: tls: bad certificate 
2025-08-13T19:52:26.309984128+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58850: remote error: tls: bad certificate 2025-08-13T19:52:26.327238480+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58856: remote error: tls: bad certificate 2025-08-13T19:52:26.350983526+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58864: remote error: tls: bad certificate 2025-08-13T19:52:26.367071035+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58870: remote error: tls: bad certificate 2025-08-13T19:52:26.386034035+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58880: remote error: tls: bad certificate 2025-08-13T19:52:26.404960874+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58896: remote error: tls: bad certificate 2025-08-13T19:52:26.473646541+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58900: remote error: tls: bad certificate 2025-08-13T19:52:26.487644650+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58912: remote error: tls: bad certificate 2025-08-13T19:52:26.693322640+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58926: remote error: tls: bad certificate 2025-08-13T19:52:26.712385813+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58936: remote error: tls: bad certificate 2025-08-13T19:52:26.738643721+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58950: remote error: tls: bad certificate 2025-08-13T19:52:26.760013040+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58962: remote error: tls: bad certificate 2025-08-13T19:52:26.793966847+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58968: remote error: tls: bad certificate 2025-08-13T19:52:35.227220003+00:00 stderr F 2025/08/13 19:52:35 http: TLS 
handshake error from 127.0.0.1:59988: remote error: tls: bad certificate 2025-08-13T19:52:35.249591660+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:59992: remote error: tls: bad certificate 2025-08-13T19:52:35.268651212+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60004: remote error: tls: bad certificate 2025-08-13T19:52:35.292355107+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60018: remote error: tls: bad certificate 2025-08-13T19:52:35.314572919+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60030: remote error: tls: bad certificate 2025-08-13T19:52:35.338897801+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60046: remote error: tls: bad certificate 2025-08-13T19:52:35.361975338+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60048: remote error: tls: bad certificate 2025-08-13T19:52:35.380907637+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60058: remote error: tls: bad certificate 2025-08-13T19:52:35.404499658+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60070: remote error: tls: bad certificate 2025-08-13T19:52:35.451247219+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60082: remote error: tls: bad certificate 2025-08-13T19:52:35.479362279+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60096: remote error: tls: bad certificate 2025-08-13T19:52:35.501621333+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60106: remote error: tls: bad certificate 2025-08-13T19:52:35.523306260+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60114: remote error: tls: bad certificate 2025-08-13T19:52:35.540165600+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60118: remote error: tls: bad certificate 
2025-08-13T19:52:35.558233844+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60130: remote error: tls: bad certificate
2025-08-13T19:52:35.580755125+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60142: remote error: tls: bad certificate
2025-08-13T19:52:35.599640012+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60156: remote error: tls: bad certificate
2025-08-13T19:52:35.617128680+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60170: remote error: tls: bad certificate
2025-08-13T19:52:35.634223507+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60182: remote error: tls: bad certificate
2025-08-13T19:52:35.652244880+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60192: remote error: tls: bad certificate
2025-08-13T19:52:35.670488279+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60200: remote error: tls: bad certificate
2025-08-13T19:52:35.694291766+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60202: remote error: tls: bad certificate
2025-08-13T19:52:35.718700431+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60210: remote error: tls: bad certificate
2025-08-13T19:52:35.740032238+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60226: remote error: tls: bad certificate
2025-08-13T19:52:35.761107728+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60240: remote error: tls: bad certificate
2025-08-13T19:52:35.785249005+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60248: remote error: tls: bad certificate
2025-08-13T19:52:35.805376058+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60262: remote error: tls: bad certificate
2025-08-13T19:52:35.830883094+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60276: remote error: tls: bad certificate
2025-08-13T19:52:35.848616099+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60290: remote error: tls: bad certificate
2025-08-13T19:52:35.869591326+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60294: remote error: tls: bad certificate
2025-08-13T19:52:35.886356883+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60298: remote error: tls: bad certificate
2025-08-13T19:52:35.908531544+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60314: remote error: tls: bad certificate
2025-08-13T19:52:35.935897313+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60328: remote error: tls: bad certificate
2025-08-13T19:52:35.959551676+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60338: remote error: tls: bad certificate
2025-08-13T19:52:35.979486513+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60348: remote error: tls: bad certificate
2025-08-13T19:52:36.010430514+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60360: remote error: tls: bad certificate
2025-08-13T19:52:36.028902670+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60372: remote error: tls: bad certificate
2025-08-13T19:52:36.047386496+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60382: remote error: tls: bad certificate
2025-08-13T19:52:36.064362099+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60390: remote error: tls: bad certificate
2025-08-13T19:52:36.088390823+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60402: remote error: tls: bad certificate
2025-08-13T19:52:36.102733541+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60408: remote error: tls: bad certificate
2025-08-13T19:52:36.117908473+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60414: remote error: tls: bad certificate
2025-08-13T19:52:36.135445892+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60426: remote error: tls: bad certificate
2025-08-13T19:52:36.160731252+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60436: remote error: tls: bad certificate
2025-08-13T19:52:36.176887242+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60442: remote error: tls: bad certificate
2025-08-13T19:52:36.191049525+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60452: remote error: tls: bad certificate
2025-08-13T19:52:36.215687576+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60462: remote error: tls: bad certificate
2025-08-13T19:52:36.285396160+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60476: remote error: tls: bad certificate
2025-08-13T19:52:36.310347920+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60478: remote error: tls: bad certificate
2025-08-13T19:52:36.337593086+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60486: remote error: tls: bad certificate
2025-08-13T19:52:36.356948637+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60492: remote error: tls: bad certificate
2025-08-13T19:52:36.380633021+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60496: remote error: tls: bad certificate
2025-08-13T19:52:36.399318903+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60500: remote error: tls: bad certificate
2025-08-13T19:52:36.422595145+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60504: remote error: tls: bad certificate
2025-08-13T19:52:36.438301642+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60516: remote error: tls: bad certificate
2025-08-13T19:52:36.460266597+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60522: remote error: tls: bad certificate
2025-08-13T19:52:36.494429050+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60530: remote error: tls: bad certificate
2025-08-13T19:52:36.519513724+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60536: remote error: tls: bad certificate
2025-08-13T19:52:36.542643321+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60552: remote error: tls: bad certificate
2025-08-13T19:52:36.561230200+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60564: remote error: tls: bad certificate
2025-08-13T19:52:36.583634798+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60570: remote error: tls: bad certificate
2025-08-13T19:52:36.607348422+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60582: remote error: tls: bad certificate
2025-08-13T19:52:36.631427048+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60598: remote error: tls: bad certificate
2025-08-13T19:52:36.654232857+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60614: remote error: tls: bad certificate
2025-08-13T19:52:36.671708964+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60622: remote error: tls: bad certificate
2025-08-13T19:52:36.689869311+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60636: remote error: tls: bad certificate
2025-08-13T19:52:36.707753290+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60650: remote error: tls: bad certificate
2025-08-13T19:52:37.193249368+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60660: remote error: tls: bad certificate
2025-08-13T19:52:37.227900124+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60672: remote error: tls: bad certificate
2025-08-13T19:52:37.253324988+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60682: remote error: tls: bad certificate
2025-08-13T19:52:37.289531118+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60698: remote error: tls: bad certificate
2025-08-13T19:52:37.320870740+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60702: remote error: tls: bad certificate
2025-08-13T19:52:37.359422918+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60718: remote error: tls: bad certificate
2025-08-13T19:52:37.385714426+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60724: remote error: tls: bad certificate
2025-08-13T19:52:37.406349833+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60736: remote error: tls: bad certificate
2025-08-13T19:52:37.439943789+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60740: remote error: tls: bad certificate
2025-08-13T19:52:37.465069355+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60754: remote error: tls: bad certificate
2025-08-13T19:52:37.484755785+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60768: remote error: tls: bad certificate
2025-08-13T19:52:37.511383423+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60780: remote error: tls: bad certificate
2025-08-13T19:52:37.540501142+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60794: remote error: tls: bad certificate
2025-08-13T19:52:37.560563783+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60800: remote error: tls: bad certificate
2025-08-13T19:52:37.586075869+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60804: remote error: tls: bad certificate
2025-08-13T19:52:37.602590359+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60808: remote error: tls: bad certificate
2025-08-13T19:52:37.622554317+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60824: remote error: tls: bad certificate
2025-08-13T19:52:37.644898103+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60826: remote error: tls: bad certificate
2025-08-13T19:52:37.657870252+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60838: remote error: tls: bad certificate
2025-08-13T19:52:37.663043059+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60852: remote error: tls: bad certificate
2025-08-13T19:52:37.680747583+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60864: remote error: tls: bad certificate
2025-08-13T19:52:37.699417155+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60868: remote error: tls: bad certificate
2025-08-13T19:52:37.716896672+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60884: remote error: tls: bad certificate
2025-08-13T19:52:37.735016848+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60898: remote error: tls: bad certificate
2025-08-13T19:52:37.752608598+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60908: remote error: tls: bad certificate
2025-08-13T19:52:37.772235147+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60910: remote error: tls: bad certificate
2025-08-13T19:52:37.791632809+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60926: remote error: tls: bad certificate
2025-08-13T19:52:37.808453488+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60934: remote error: tls: bad certificate
2025-08-13T19:52:37.829219909+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60936: remote error: tls: bad certificate
2025-08-13T19:52:37.843633199+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60948: remote error: tls: bad certificate
2025-08-13T19:52:37.859916393+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60950: remote error: tls: bad certificate
2025-08-13T19:52:37.877126023+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60956: remote error: tls: bad certificate
2025-08-13T19:52:37.893380835+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60964: remote error: tls: bad certificate
2025-08-13T19:52:37.909553905+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60966: remote error: tls: bad certificate
2025-08-13T19:52:37.927916698+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60982: remote error: tls: bad certificate
2025-08-13T19:52:37.940262830+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60996: remote error: tls: bad certificate
2025-08-13T19:52:37.958360625+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32774: remote error: tls: bad certificate
2025-08-13T19:52:37.976464500+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32790: remote error: tls: bad certificate
2025-08-13T19:52:37.996353576+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32798: remote error: tls: bad certificate
2025-08-13T19:52:38.013785722+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32800: remote error: tls: bad certificate
2025-08-13T19:52:38.031618750+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32806: remote error: tls: bad certificate
2025-08-13T19:52:38.048144550+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32818: remote error: tls: bad certificate
2025-08-13T19:52:38.066642077+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32832: remote error: tls: bad certificate
2025-08-13T19:52:38.087573122+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32848: remote error: tls: bad certificate
2025-08-13T19:52:38.107236172+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32858: remote error: tls: bad certificate
2025-08-13T19:52:38.121904749+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32862: remote error: tls: bad certificate
2025-08-13T19:52:38.147047965+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32870: remote error: tls: bad certificate
2025-08-13T19:52:38.160669693+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32878: remote error: tls: bad certificate
2025-08-13T19:52:38.178071618+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32886: remote error: tls: bad certificate
2025-08-13T19:52:38.197938123+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32900: remote error: tls: bad certificate
2025-08-13T19:52:38.220079204+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32904: remote error: tls: bad certificate
2025-08-13T19:52:38.239529217+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32918: remote error: tls: bad certificate
2025-08-13T19:52:38.263114808+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32930: remote error: tls: bad certificate
2025-08-13T19:52:38.283980442+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32932: remote error: tls: bad certificate
2025-08-13T19:52:38.301495141+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32946: remote error: tls: bad certificate
2025-08-13T19:52:38.318188146+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32958: remote error: tls: bad certificate
2025-08-13T19:52:38.334608393+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32966: remote error: tls: bad certificate
2025-08-13T19:52:38.368160578+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32970: remote error: tls: bad certificate
2025-08-13T19:52:38.390010810+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32984: remote error: tls: bad certificate
2025-08-13T19:52:38.413753726+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32986: remote error: tls: bad certificate
2025-08-13T19:52:38.432204501+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33000: remote error: tls: bad certificate
2025-08-13T19:52:38.456990616+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33004: remote error: tls: bad certificate
2025-08-13T19:52:38.472303832+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33006: remote error: tls: bad certificate
2025-08-13T19:52:38.486705692+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33016: remote error: tls: bad certificate
2025-08-13T19:52:38.502739549+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33030: remote error: tls: bad certificate
2025-08-13T19:52:38.519135215+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33044: remote error: tls: bad certificate
2025-08-13T19:52:38.532735662+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33058: remote error: tls: bad certificate
2025-08-13T19:52:38.549972963+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33074: remote error: tls: bad certificate
2025-08-13T19:52:38.565916947+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33090: remote error: tls: bad certificate
2025-08-13T19:52:38.579729910+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33104: remote error: tls: bad certificate
2025-08-13T19:52:38.595466578+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33118: remote error: tls: bad certificate
2025-08-13T19:52:38.617000141+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33132: remote error: tls: bad certificate
2025-08-13T19:52:38.633136970+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33148: remote error: tls: bad certificate
2025-08-13T19:52:38.649141945+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33162: remote error: tls: bad certificate
2025-08-13T19:52:38.664753450+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33172: remote error: tls: bad certificate
2025-08-13T19:52:38.704004367+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33180: remote error: tls: bad certificate
2025-08-13T19:52:38.738288043+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49942: remote error: tls: bad certificate
2025-08-13T19:52:38.778373084+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49958: remote error: tls: bad certificate
2025-08-13T19:52:38.818572478+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49974: remote error: tls: bad certificate
2025-08-13T19:52:38.860590264+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49988: remote error: tls: bad certificate
2025-08-13T19:52:38.911176863+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50004: remote error: tls: bad certificate
2025-08-13T19:52:38.937672828+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50008: remote error: tls: bad certificate
2025-08-13T19:52:38.978188481+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50014: remote error: tls: bad certificate
2025-08-13T19:52:39.019203178+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50024: remote error: tls: bad certificate
2025-08-13T19:52:39.062579053+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50036: remote error: tls: bad certificate
2025-08-13T19:52:39.100899163+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50040: remote error: tls: bad certificate
2025-08-13T19:52:39.140337246+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50044: remote error: tls: bad certificate
2025-08-13T19:52:39.179467790+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50048: remote error: tls: bad certificate
2025-08-13T19:52:39.223054360+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50064: remote error: tls: bad certificate
2025-08-13T19:52:39.258021305+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50072: remote error: tls: bad certificate
2025-08-13T19:52:39.297485249+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50086: remote error: tls: bad certificate
2025-08-13T19:52:39.341536392+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50092: remote error: tls: bad certificate
2025-08-13T19:52:39.384896026+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50094: remote error: tls: bad certificate
2025-08-13T19:52:39.420532681+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50106: remote error: tls: bad certificate
2025-08-13T19:52:39.460883209+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50114: remote error: tls: bad certificate
2025-08-13T19:52:39.500563549+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50116: remote error: tls: bad certificate
2025-08-13T19:52:39.539577799+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50122: remote error: tls: bad certificate
2025-08-13T19:52:39.580488363+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50130: remote error: tls: bad certificate
2025-08-13T19:52:39.620439841+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50144: remote error: tls: bad certificate
2025-08-13T19:52:39.660903792+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50146: remote error: tls: bad certificate
2025-08-13T19:52:39.699542742+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50148: remote error: tls: bad certificate
2025-08-13T19:52:39.739991853+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50152: remote error: tls: bad certificate
2025-08-13T19:52:39.781197136+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50158: remote error: tls: bad certificate
2025-08-13T19:52:39.818201909+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50164: remote error: tls: bad certificate
2025-08-13T19:52:39.859011151+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50174: remote error: tls: bad certificate
2025-08-13T19:52:39.900138121+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50184: remote error: tls: bad certificate
2025-08-13T19:52:39.940674485+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50198: remote error: tls: bad certificate
2025-08-13T19:52:39.981173628+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50208: remote error: tls: bad certificate
2025-08-13T19:52:40.021504846+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50210: remote error: tls: bad certificate
2025-08-13T19:52:40.060499815+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50212: remote error: tls: bad certificate
2025-08-13T19:52:40.099227238+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50218: remote error: tls: bad certificate
2025-08-13T19:52:40.138207596+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50234: remote error: tls: bad certificate
2025-08-13T19:52:40.179364457+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50248: remote error: tls: bad certificate
2025-08-13T19:52:40.220187179+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50250: remote error: tls: bad certificate
2025-08-13T19:52:40.266923459+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50262: remote error: tls: bad certificate
2025-08-13T19:52:40.300463004+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50278: remote error: tls: bad certificate
2025-08-13T19:52:40.338439655+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50280: remote error: tls: bad certificate
2025-08-13T19:52:40.380567274+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50296: remote error: tls: bad certificate
2025-08-13T19:52:40.418242506+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50310: remote error: tls: bad certificate
2025-08-13T19:52:40.465546533+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50320: remote error: tls: bad certificate
2025-08-13T19:52:40.496763391+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50332: remote error: tls: bad certificate
2025-08-13T19:52:40.538459038+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50334: remote error: tls: bad certificate
2025-08-13T19:52:40.578203929+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50340: remote error: tls: bad certificate
2025-08-13T19:52:40.620458722+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50354: remote error: tls: bad certificate
2025-08-13T19:52:40.660864622+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50360: remote error: tls: bad certificate
2025-08-13T19:52:40.699334777+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50366: remote error: tls: bad certificate
2025-08-13T19:52:40.739588543+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50382: remote error: tls: bad certificate
2025-08-13T19:52:40.779523379+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50388: remote error: tls: bad certificate
2025-08-13T19:52:40.822485122+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50392: remote error: tls: bad certificate
2025-08-13T19:52:40.860665839+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50400: remote error: tls: bad certificate
2025-08-13T19:52:40.900133812+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50404: remote error: tls: bad certificate
2025-08-13T19:52:40.941152159+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50416: remote error: tls: bad certificate
2025-08-13T19:52:40.978907494+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50424: remote error: tls: bad certificate
2025-08-13T19:52:41.019480659+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50428: remote error: tls: bad certificate
2025-08-13T19:52:41.058350335+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50436: remote error: tls: bad certificate
2025-08-13T19:52:41.098664902+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50438: remote error: tls: bad certificate
2025-08-13T19:52:41.142159720+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50450: remote error: tls: bad certificate
2025-08-13T19:52:41.185477073+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50464: remote error: tls: bad certificate
2025-08-13T19:52:41.238757810+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50470: remote error: tls: bad certificate
2025-08-13T19:52:41.258080630+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50480: remote error: tls: bad certificate
2025-08-13T19:52:41.298741647+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50486: remote error: tls: bad certificate
2025-08-13T19:52:41.341063322+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50496: remote error: tls: bad certificate
2025-08-13T19:52:41.382123200+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50508: remote error: tls: bad certificate
2025-08-13T19:52:41.418920798+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50520: remote error: tls: bad certificate
2025-08-13T19:52:41.464397882+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50536: remote error: tls: bad certificate
2025-08-13T19:52:41.498985476+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50538: remote error: tls: bad certificate
2025-08-13T19:52:41.538845161+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50546: remote error: tls: bad certificate
2025-08-13T19:52:41.578564341+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50554: remote error: tls: bad certificate
2025-08-13T19:52:41.619309401+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50562: remote error: tls: bad certificate
2025-08-13T19:52:41.665264039+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50578: remote error: tls: bad certificate
2025-08-13T19:52:41.698925307+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50590: remote error: tls: bad certificate
2025-08-13T19:52:41.738637707+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50600: remote error: tls: bad certificate
2025-08-13T19:52:41.783739051+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50616: remote error: tls: bad certificate
2025-08-13T19:52:41.826310292+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50624: remote error: tls: bad certificate
2025-08-13T19:52:41.860895847+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50628: remote error: tls: bad certificate
2025-08-13T19:52:41.898189188+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50632: remote error: tls: bad certificate
2025-08-13T19:52:41.938887107+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50638: remote error: tls: bad certificate
2025-08-13T19:52:41.979832382+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50648: remote error: tls: bad certificate
2025-08-13T19:52:42.020386316+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50664: remote error: tls: bad certificate
2025-08-13T19:52:42.085024836+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50674: remote error: tls: bad certificate
2025-08-13T19:52:42.108896596+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50684: remote error: tls: bad certificate
2025-08-13T19:52:42.142605725+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50698: remote error: tls: bad certificate
2025-08-13T19:52:42.181068060+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50706: remote error: tls: bad certificate
2025-08-13T19:52:42.220756259+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50716: remote error: tls: bad certificate
2025-08-13T19:52:42.262327832+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50730: remote error: tls: bad certificate
2025-08-13T19:52:42.301191519+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50734: remote error: tls: bad certificate
2025-08-13T19:52:42.339704595+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50738: remote error: tls: bad certificate
2025-08-13T19:52:42.381139314+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50754: remote error: tls: bad certificate
2025-08-13T19:52:42.419863786+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50766: remote error: tls: bad certificate
2025-08-13T19:52:42.465001591+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50776: remote error: tls: bad certificate
2025-08-13T19:52:42.498351280+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50790: remote error: tls: bad certificate
2025-08-13T19:52:42.539761019+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50802: remote error: tls: bad certificate
2025-08-13T19:52:42.579370336+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50804: remote error: tls: bad certificate
2025-08-13T19:52:42.617901903+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50814: remote error: tls: bad certificate
2025-08-13T19:52:42.658504158+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50816: remote error: tls: bad certificate
2025-08-13T19:52:42.700273977+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50818: remote error: tls: bad certificate
2025-08-13T19:52:42.739716560+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50832: remote error: tls: bad certificate
2025-08-13T19:52:42.783097475+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50840: remote error: tls: bad certificate
2025-08-13T19:52:42.818829582+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50846: remote error: tls: bad certificate
2025-08-13T19:52:42.858970954+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50854: remote error: tls: bad certificate
2025-08-13T19:52:42.898699005+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50862: remote error: tls: bad certificate
2025-08-13T19:52:42.938124517+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50868: remote error: tls: bad certificate
2025-08-13T19:52:42.978704422+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50880: remote error: tls: bad certificate
2025-08-13T19:52:43.021602073+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50884: remote error: tls: bad certificate
2025-08-13T19:52:43.058596086+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50896: remote error: tls: bad certificate
2025-08-13T19:52:43.101869527+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50906: remote error: tls: bad certificate
2025-08-13T19:52:43.139448487+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50912: remote error: tls: bad certificate
2025-08-13T19:52:43.177199172+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50920: remote error: tls: bad certificate
2025-08-13T19:52:43.233126223+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50926: remote error: tls: bad certificate
2025-08-13T19:52:43.260025009+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50930: remote error: tls: bad certificate
2025-08-13T19:52:43.304552106+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50932: remote error: tls: bad certificate
2025-08-13T19:52:43.337638108+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50936: remote error: tls: bad certificate
2025-08-13T19:52:43.384202933+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50952: remote error: tls: bad certificate
2025-08-13T19:52:43.421442143+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50964: remote error: tls: bad certificate
2025-08-13T19:52:43.467144854+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50968: remote error: tls: bad certificate
2025-08-13T19:52:43.484164738+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50984: remote error: tls: bad certificate
2025-08-13T19:52:43.501910653+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50990: remote error: tls: bad certificate
2025-08-13T19:52:43.510504498+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50994: remote error: tls: bad certificate
2025-08-13T19:52:43.541136590+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50996: remote error: tls: bad certificate
2025-08-13T19:52:43.577963828+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51010: remote error: tls: bad certificate
2025-08-13T19:52:43.581344194+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51020: remote error: tls: bad certificate
2025-08-13T19:52:43.623530135+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51022: remote error: tls: bad certificate
2025-08-13T19:52:43.660688303+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51038: remote error: tls: bad certificate
2025-08-13T19:52:43.699336413+00:00 stderr F 2025/08/13 19:52:43 http: TLS
handshake error from 127.0.0.1:51042: remote error: tls: bad certificate 2025-08-13T19:52:43.739646039+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51058: remote error: tls: bad certificate 2025-08-13T19:52:43.781623654+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51060: remote error: tls: bad certificate 2025-08-13T19:52:43.819892993+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51066: remote error: tls: bad certificate 2025-08-13T19:52:43.851440761+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51076: remote error: tls: bad certificate 2025-08-13T19:52:43.859924972+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51086: remote error: tls: bad certificate 2025-08-13T19:52:43.899604992+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51094: remote error: tls: bad certificate 2025-08-13T19:52:43.944966503+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51098: remote error: tls: bad certificate 2025-08-13T19:52:45.225947302+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51112: remote error: tls: bad certificate 2025-08-13T19:52:45.240247699+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51122: remote error: tls: bad certificate 2025-08-13T19:52:45.256047379+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51130: remote error: tls: bad certificate 2025-08-13T19:52:45.271318753+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51146: remote error: tls: bad certificate 2025-08-13T19:52:45.296477029+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51162: remote error: tls: bad certificate 2025-08-13T19:52:45.313750891+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51176: remote error: tls: bad certificate 
2025-08-13T19:52:45.330859518+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51182: remote error: tls: bad certificate 2025-08-13T19:52:45.347684477+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51192: remote error: tls: bad certificate 2025-08-13T19:52:45.365724360+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51204: remote error: tls: bad certificate 2025-08-13T19:52:45.382989652+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51210: remote error: tls: bad certificate 2025-08-13T19:52:45.399849591+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51224: remote error: tls: bad certificate 2025-08-13T19:52:45.416096524+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51236: remote error: tls: bad certificate 2025-08-13T19:52:45.432255704+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51246: remote error: tls: bad certificate 2025-08-13T19:52:45.448523286+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51254: remote error: tls: bad certificate 2025-08-13T19:52:45.464452620+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51270: remote error: tls: bad certificate 2025-08-13T19:52:45.481724901+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51272: remote error: tls: bad certificate 2025-08-13T19:52:45.502720799+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51288: remote error: tls: bad certificate 2025-08-13T19:52:45.523698776+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51300: remote error: tls: bad certificate 2025-08-13T19:52:45.542021538+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51316: remote error: tls: bad certificate 2025-08-13T19:52:45.558209038+00:00 stderr F 2025/08/13 19:52:45 http: TLS 
handshake error from 127.0.0.1:51328: remote error: tls: bad certificate 2025-08-13T19:52:45.573304198+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51338: remote error: tls: bad certificate 2025-08-13T19:52:45.590321372+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51342: remote error: tls: bad certificate 2025-08-13T19:52:45.605003560+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51352: remote error: tls: bad certificate 2025-08-13T19:52:45.621551881+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51362: remote error: tls: bad certificate 2025-08-13T19:52:45.638193525+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51374: remote error: tls: bad certificate 2025-08-13T19:52:45.653460929+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51388: remote error: tls: bad certificate 2025-08-13T19:52:45.670676119+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51392: remote error: tls: bad certificate 2025-08-13T19:52:45.687166648+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51400: remote error: tls: bad certificate 2025-08-13T19:52:45.704924204+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51402: remote error: tls: bad certificate 2025-08-13T19:52:45.724972404+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51412: remote error: tls: bad certificate 2025-08-13T19:52:45.741738981+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51428: remote error: tls: bad certificate 2025-08-13T19:52:45.761212126+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51444: remote error: tls: bad certificate 2025-08-13T19:52:45.779491976+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51460: remote error: tls: bad certificate 
2025-08-13T19:52:45.796329015+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51472: remote error: tls: bad certificate 2025-08-13T19:52:45.811374093+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51474: remote error: tls: bad certificate 2025-08-13T19:52:45.826917786+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51480: remote error: tls: bad certificate 2025-08-13T19:52:45.844705732+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51486: remote error: tls: bad certificate 2025-08-13T19:52:45.868352425+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51496: remote error: tls: bad certificate 2025-08-13T19:52:45.889509207+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51504: remote error: tls: bad certificate 2025-08-13T19:52:45.907740935+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51506: remote error: tls: bad certificate 2025-08-13T19:52:45.924041009+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51516: remote error: tls: bad certificate 2025-08-13T19:52:45.938378668+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51530: remote error: tls: bad certificate 2025-08-13T19:52:45.957604385+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51532: remote error: tls: bad certificate 2025-08-13T19:52:45.974873586+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51542: remote error: tls: bad certificate 2025-08-13T19:52:45.995423291+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51546: remote error: tls: bad certificate 2025-08-13T19:52:46.012261460+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51560: remote error: tls: bad certificate 2025-08-13T19:52:46.028545314+00:00 stderr F 2025/08/13 19:52:46 http: TLS 
handshake error from 127.0.0.1:51576: remote error: tls: bad certificate 2025-08-13T19:52:46.052997680+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51582: remote error: tls: bad certificate 2025-08-13T19:52:46.068505791+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51596: remote error: tls: bad certificate 2025-08-13T19:52:46.083164058+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51608: remote error: tls: bad certificate 2025-08-13T19:52:46.097340252+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51622: remote error: tls: bad certificate 2025-08-13T19:52:46.116855027+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51636: remote error: tls: bad certificate 2025-08-13T19:52:46.134343505+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51648: remote error: tls: bad certificate 2025-08-13T19:52:46.149321121+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51654: remote error: tls: bad certificate 2025-08-13T19:52:46.164659358+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51670: remote error: tls: bad certificate 2025-08-13T19:52:46.182469634+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51674: remote error: tls: bad certificate 2025-08-13T19:52:46.219869249+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51684: remote error: tls: bad certificate 2025-08-13T19:52:46.260067963+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51696: remote error: tls: bad certificate 2025-08-13T19:52:46.302070268+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51712: remote error: tls: bad certificate 2025-08-13T19:52:46.339118923+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51726: remote error: tls: bad certificate 
2025-08-13T19:52:46.381685234+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51742: remote error: tls: bad certificate 2025-08-13T19:52:46.424425790+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51744: remote error: tls: bad certificate 2025-08-13T19:52:46.460672882+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51746: remote error: tls: bad certificate 2025-08-13T19:52:46.499684202+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51758: remote error: tls: bad certificate 2025-08-13T19:52:46.539385302+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51760: remote error: tls: bad certificate 2025-08-13T19:52:46.577085205+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51770: remote error: tls: bad certificate 2025-08-13T19:52:46.618899655+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51780: remote error: tls: bad certificate 2025-08-13T19:52:47.525561588+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51788: remote error: tls: bad certificate 2025-08-13T19:52:47.549911571+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51800: remote error: tls: bad certificate 2025-08-13T19:52:47.571483325+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51804: remote error: tls: bad certificate 2025-08-13T19:52:47.591991639+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51818: remote error: tls: bad certificate 2025-08-13T19:52:47.613573423+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51826: remote error: tls: bad certificate 2025-08-13T19:52:52.822261126+00:00 stderr F 2025/08/13 19:52:52 http: TLS handshake error from 127.0.0.1:41480: remote error: tls: bad certificate 2025-08-13T19:52:53.250892205+00:00 stderr F 2025/08/13 19:52:53 http: TLS 
handshake error from 127.0.0.1:41492: remote error: tls: bad certificate 2025-08-13T19:52:53.288911887+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41498: remote error: tls: bad certificate 2025-08-13T19:52:53.306545969+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41502: remote error: tls: bad certificate 2025-08-13T19:52:53.329355968+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41510: remote error: tls: bad certificate 2025-08-13T19:52:53.371528159+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41522: remote error: tls: bad certificate 2025-08-13T19:52:53.388894133+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41532: remote error: tls: bad certificate 2025-08-13T19:52:53.409691325+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41534: remote error: tls: bad certificate 2025-08-13T19:52:53.428690276+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41542: remote error: tls: bad certificate 2025-08-13T19:52:53.448461298+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41556: remote error: tls: bad certificate 2025-08-13T19:52:53.465886094+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41558: remote error: tls: bad certificate 2025-08-13T19:52:53.489946899+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41564: remote error: tls: bad certificate 2025-08-13T19:52:53.524639517+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41572: remote error: tls: bad certificate 2025-08-13T19:52:53.544525362+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41582: remote error: tls: bad certificate 2025-08-13T19:52:53.561312900+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41594: remote error: tls: bad certificate 
2025-08-13T19:52:53.595445612+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41600: remote error: tls: bad certificate 2025-08-13T19:52:53.621247736+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41614: remote error: tls: bad certificate 2025-08-13T19:52:53.642441509+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41624: remote error: tls: bad certificate 2025-08-13T19:52:53.658166437+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41638: remote error: tls: bad certificate 2025-08-13T19:52:53.676036176+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41640: remote error: tls: bad certificate 2025-08-13T19:52:53.701036027+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41646: remote error: tls: bad certificate 2025-08-13T19:52:53.720359397+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41652: remote error: tls: bad certificate 2025-08-13T19:52:53.744577396+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41660: remote error: tls: bad certificate 2025-08-13T19:52:53.763927677+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41674: remote error: tls: bad certificate 2025-08-13T19:52:53.782188737+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41686: remote error: tls: bad certificate 2025-08-13T19:52:53.799675384+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41700: remote error: tls: bad certificate 2025-08-13T19:52:53.823228715+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41704: remote error: tls: bad certificate 2025-08-13T19:52:53.874706880+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41716: remote error: tls: bad certificate 2025-08-13T19:52:53.907436572+00:00 stderr F 2025/08/13 19:52:53 http: TLS 
handshake error from 127.0.0.1:41726: remote error: tls: bad certificate 2025-08-13T19:52:53.929080088+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41738: remote error: tls: bad certificate 2025-08-13T19:52:53.951725902+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41754: remote error: tls: bad certificate 2025-08-13T19:52:53.982139838+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41766: remote error: tls: bad certificate 2025-08-13T19:52:54.005000498+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41772: remote error: tls: bad certificate 2025-08-13T19:52:54.032761258+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41776: remote error: tls: bad certificate 2025-08-13T19:52:54.055918458+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41780: remote error: tls: bad certificate 2025-08-13T19:52:54.080633471+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41782: remote error: tls: bad certificate 2025-08-13T19:52:54.100256039+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41784: remote error: tls: bad certificate 2025-08-13T19:52:54.121094663+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41794: remote error: tls: bad certificate 2025-08-13T19:52:54.143504460+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41800: remote error: tls: bad certificate 2025-08-13T19:52:54.175703647+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41808: remote error: tls: bad certificate 2025-08-13T19:52:54.201592554+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41810: remote error: tls: bad certificate 2025-08-13T19:52:54.247903882+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41826: remote error: tls: bad certificate 
2025-08-13T19:52:54.286569942+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41834: remote error: tls: bad certificate 2025-08-13T19:52:54.327975851+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41842: remote error: tls: bad certificate 2025-08-13T19:52:54.355567666+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41854: remote error: tls: bad certificate 2025-08-13T19:52:54.381564386+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41868: remote error: tls: bad certificate 2025-08-13T19:52:54.432900547+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41880: remote error: tls: bad certificate 2025-08-13T19:52:54.461137820+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41896: remote error: tls: bad certificate 2025-08-13T19:52:54.484679820+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41908: remote error: tls: bad certificate 2025-08-13T19:52:54.507568811+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41910: remote error: tls: bad certificate 2025-08-13T19:52:54.530932366+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41922: remote error: tls: bad certificate 2025-08-13T19:52:54.555939808+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41930: remote error: tls: bad certificate 2025-08-13T19:52:54.578209102+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41934: remote error: tls: bad certificate 2025-08-13T19:52:54.598142099+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41940: remote error: tls: bad certificate 2025-08-13T19:52:54.618382465+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41948: remote error: tls: bad certificate 2025-08-13T19:52:54.640995009+00:00 stderr F 2025/08/13 19:52:54 http: TLS 
handshake error from 127.0.0.1:41964: remote error: tls: bad certificate 2025-08-13T19:52:54.658542898+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41980: remote error: tls: bad certificate 2025-08-13T19:52:54.677479797+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41982: remote error: tls: bad certificate 2025-08-13T19:52:54.694943944+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41992: remote error: tls: bad certificate 2025-08-13T19:52:54.714329486+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41998: remote error: tls: bad certificate 2025-08-13T19:52:54.734549962+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42008: remote error: tls: bad certificate 2025-08-13T19:52:54.753707347+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42018: remote error: tls: bad certificate 2025-08-13T19:52:54.771011529+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42032: remote error: tls: bad certificate 2025-08-13T19:52:54.793527740+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42036: remote error: tls: bad certificate 2025-08-13T19:52:54.810987377+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42044: remote error: tls: bad certificate 2025-08-13T19:52:54.830154683+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42050: remote error: tls: bad certificate 2025-08-13T19:52:54.856083541+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42066: remote error: tls: bad certificate 2025-08-13T19:52:54.870171852+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42082: remote error: tls: bad certificate 2025-08-13T19:52:54.898071926+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42098: remote error: tls: bad certificate 
2025-08-13T19:52:54.914767671+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42114: remote error: tls: bad certificate 2025-08-13T19:52:54.936620233+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42118: remote error: tls: bad certificate 2025-08-13T19:52:54.953233386+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42120: remote error: tls: bad certificate 2025-08-13T19:52:54.985152344+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42132: remote error: tls: bad certificate 2025-08-13T19:52:55.002014864+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42148: remote error: tls: bad certificate 2025-08-13T19:52:55.054629911+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42156: remote error: tls: bad certificate 2025-08-13T19:52:55.071086060+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42162: remote error: tls: bad certificate 2025-08-13T19:52:55.092919321+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42170: remote error: tls: bad certificate 2025-08-13T19:52:55.118983953+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42186: remote error: tls: bad certificate 2025-08-13T19:52:55.133997631+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42192: remote error: tls: bad certificate 2025-08-13T19:52:55.151021575+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42208: remote error: tls: bad certificate 2025-08-13T19:52:55.170148129+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42212: remote error: tls: bad certificate 2025-08-13T19:52:55.242229321+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42224: remote error: tls: bad certificate 2025-08-13T19:52:55.273872862+00:00 stderr F 2025/08/13 19:52:55 http: TLS 
handshake error from 127.0.0.1:42238: remote error: tls: bad certificate 2025-08-13T19:52:55.291481883+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42252: remote error: tls: bad certificate 2025-08-13T19:52:55.308587490+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42260: remote error: tls: bad certificate 2025-08-13T19:52:55.323694750+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42276: remote error: tls: bad certificate 2025-08-13T19:52:55.339687145+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42282: remote error: tls: bad certificate 2025-08-13T19:52:55.358117329+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42294: remote error: tls: bad certificate 2025-08-13T19:52:55.375193855+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42306: remote error: tls: bad certificate 2025-08-13T19:52:55.394539286+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42310: remote error: tls: bad certificate 2025-08-13T19:52:55.412068405+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42316: remote error: tls: bad certificate 2025-08-13T19:52:55.427551085+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42322: remote error: tls: bad certificate 2025-08-13T19:52:55.443897061+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42332: remote error: tls: bad certificate 2025-08-13T19:52:55.461866062+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42334: remote error: tls: bad certificate 2025-08-13T19:52:55.479754801+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42348: remote error: tls: bad certificate 2025-08-13T19:52:55.495305974+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42358: remote error: tls: bad certificate 
2025-08-13T19:52:55.512859304+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42370: remote error: tls: bad certificate 2025-08-13T19:52:55.528691324+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42374: remote error: tls: bad certificate 2025-08-13T19:52:55.544188645+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42382: remote error: tls: bad certificate 2025-08-13T19:52:55.558555384+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42396: remote error: tls: bad certificate 2025-08-13T19:52:55.574377544+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42408: remote error: tls: bad certificate 2025-08-13T19:52:55.589147665+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42410: remote error: tls: bad certificate 2025-08-13T19:52:55.603646977+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42418: remote error: tls: bad certificate 2025-08-13T19:52:55.620698043+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42420: remote error: tls: bad certificate 2025-08-13T19:52:55.638494469+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42432: remote error: tls: bad certificate 2025-08-13T19:52:55.653157467+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42442: remote error: tls: bad certificate 2025-08-13T19:52:55.668553795+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42446: remote error: tls: bad certificate 2025-08-13T19:52:55.685881678+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42462: remote error: tls: bad certificate 2025-08-13T19:52:55.699141055+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42466: remote error: tls: bad certificate 2025-08-13T19:52:55.716705675+00:00 stderr F 2025/08/13 19:52:55 http: TLS 
handshake error from 127.0.0.1:42474: remote error: tls: bad certificate 2025-08-13T19:52:55.731095095+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42478: remote error: tls: bad certificate 2025-08-13T19:52:55.748325765+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42488: remote error: tls: bad certificate 2025-08-13T19:52:55.764921978+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42492: remote error: tls: bad certificate 2025-08-13T19:52:55.780853921+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42508: remote error: tls: bad certificate 2025-08-13T19:52:55.804416932+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42522: remote error: tls: bad certificate 2025-08-13T19:52:55.845388358+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42536: remote error: tls: bad certificate 2025-08-13T19:52:55.887732713+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42552: remote error: tls: bad certificate 2025-08-13T19:52:55.924672094+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42556: remote error: tls: bad certificate 2025-08-13T19:52:55.965096165+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42562: remote error: tls: bad certificate 2025-08-13T19:52:56.006509904+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42574: remote error: tls: bad certificate 2025-08-13T19:52:56.048707594+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42576: remote error: tls: bad certificate 2025-08-13T19:52:56.085449970+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42582: remote error: tls: bad certificate 2025-08-13T19:52:56.126100917+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42590: remote error: tls: bad certificate 
2025-08-13T19:52:56.164636584+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42606: remote error: tls: bad certificate
2025-08-13T19:52:56.214036610+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42608: remote error: tls: bad certificate
2025-08-13T19:52:56.248078759+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42622: remote error: tls: bad certificate
2025-08-13T19:52:56.284706301+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42624: remote error: tls: bad certificate
2025-08-13T19:52:56.334890570+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42636: remote error: tls: bad certificate
2025-08-13T19:52:56.364250135+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42646: remote error: tls: bad certificate
2025-08-13T19:52:56.407463465+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42654: remote error: tls: bad certificate
2025-08-13T19:52:56.445212330+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42670: remote error: tls: bad certificate
2025-08-13T19:52:56.485499106+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42676: remote error: tls: bad certificate
2025-08-13T19:52:56.524645670+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42692: remote error: tls: bad certificate
2025-08-13T19:52:56.565708889+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42700: remote error: tls: bad certificate
2025-08-13T19:52:56.602967379+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42702: remote error: tls: bad certificate
2025-08-13T19:52:56.642951327+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42714: remote error: tls: bad certificate
2025-08-13T19:52:56.684217562+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42720: remote error: tls: bad certificate
2025-08-13T19:52:56.724915080+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42722: remote error: tls: bad certificate
2025-08-13T19:52:56.766377500+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42730: remote error: tls: bad certificate
2025-08-13T19:52:56.806280176+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42738: remote error: tls: bad certificate
2025-08-13T19:52:56.848622651+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42742: remote error: tls: bad certificate
2025-08-13T19:52:56.887936530+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42756: remote error: tls: bad certificate
2025-08-13T19:52:56.925047886+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42760: remote error: tls: bad certificate
2025-08-13T19:52:56.966627020+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42776: remote error: tls: bad certificate
2025-08-13T19:52:57.004946620+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42782: remote error: tls: bad certificate
2025-08-13T19:52:57.046337938+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42784: remote error: tls: bad certificate
2025-08-13T19:52:57.083754583+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42796: remote error: tls: bad certificate
2025-08-13T19:52:57.123685920+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42800: remote error: tls: bad certificate
2025-08-13T19:52:57.165114619+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42810: remote error: tls: bad certificate
2025-08-13T19:52:57.217762127+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42826: remote error: tls: bad certificate
2025-08-13T19:52:57.245899898+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42834: remote error: tls: bad certificate
2025-08-13T19:52:57.286054221+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42838: remote error: tls: bad certificate
2025-08-13T19:52:57.322986532+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42842: remote error: tls: bad certificate
2025-08-13T19:52:57.366049608+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42852: remote error: tls: bad certificate
2025-08-13T19:52:57.404543844+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42864: remote error: tls: bad certificate
2025-08-13T19:52:57.443362368+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42874: remote error: tls: bad certificate
2025-08-13T19:52:57.484454008+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42886: remote error: tls: bad certificate
2025-08-13T19:52:57.523669134+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42890: remote error: tls: bad certificate
2025-08-13T19:52:57.563286882+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42904: remote error: tls: bad certificate
2025-08-13T19:52:57.617717321+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42914: remote error: tls: bad certificate
2025-08-13T19:52:57.646665185+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42930: remote error: tls: bad certificate
2025-08-13T19:52:57.692141989+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42932: remote error: tls: bad certificate
2025-08-13T19:52:57.723772439+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42940: remote error: tls: bad certificate
2025-08-13T19:52:57.764528949+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42956: remote error: tls: bad certificate
2025-08-13T19:52:57.813446482+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42962: remote error: tls: bad certificate
2025-08-13T19:52:57.840955855+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42964: remote error: tls: bad certificate
2025-08-13T19:52:57.848026926+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42976: remote error: tls: bad certificate
2025-08-13T19:52:57.864354691+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42978: remote error: tls: bad certificate
2025-08-13T19:52:57.889663811+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42986: remote error: tls: bad certificate
2025-08-13T19:52:57.889663811+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42992: remote error: tls: bad certificate
2025-08-13T19:52:57.908096476+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43006: remote error: tls: bad certificate
2025-08-13T19:52:57.928752624+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43014: remote error: tls: bad certificate
2025-08-13T19:52:57.931942654+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43018: remote error: tls: bad certificate
2025-08-13T19:52:57.972696274+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43022: remote error: tls: bad certificate
2025-08-13T19:52:58.004026516+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43036: remote error: tls: bad certificate
2025-08-13T19:52:58.077624340+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43040: remote error: tls: bad certificate
2025-08-13T19:52:58.099196944+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43048: remote error: tls: bad certificate
2025-08-13T19:52:58.144243276+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43056: remote error: tls: bad certificate
2025-08-13T19:52:58.164125772+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43062: remote error: tls: bad certificate
2025-08-13T19:52:58.206422075+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43070: remote error: tls: bad certificate
2025-08-13T19:52:58.246240849+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43078: remote error: tls: bad certificate
2025-08-13T19:52:58.288872202+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43084: remote error: tls: bad certificate
2025-08-13T19:52:58.341151290+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43100: remote error: tls: bad certificate
2025-08-13T19:52:58.365887304+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43106: remote error: tls: bad certificate
2025-08-13T19:52:58.409760853+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43118: remote error: tls: bad certificate
2025-08-13T19:52:58.448663510+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43132: remote error: tls: bad certificate
2025-08-13T19:52:58.486195918+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43140: remote error: tls: bad certificate
2025-08-13T19:52:58.526132395+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43154: remote error: tls: bad certificate
2025-08-13T19:52:58.565579728+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43158: remote error: tls: bad certificate
2025-08-13T19:52:58.612442612+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43170: remote error: tls: bad certificate
2025-08-13T19:52:58.647630263+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43186: remote error: tls: bad certificate
2025-08-13T19:52:58.688646890+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43188: remote error: tls: bad certificate
2025-08-13T19:52:58.725170550+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43196: remote error: tls: bad certificate
2025-08-13T19:52:58.766512877+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54328: remote error: tls: bad certificate
2025-08-13T19:52:58.805249169+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54332: remote error: tls: bad certificate
2025-08-13T19:52:58.845984689+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54340: remote error: tls: bad certificate
2025-08-13T19:52:58.885730500+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54350: remote error: tls: bad certificate
2025-08-13T19:52:58.926610643+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54362: remote error: tls: bad certificate
2025-08-13T19:52:58.967522538+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54376: remote error: tls: bad certificate
2025-08-13T19:52:59.010137541+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54386: remote error: tls: bad certificate
2025-08-13T19:52:59.045977381+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54398: remote error: tls: bad certificate
2025-08-13T19:52:59.084213709+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54408: remote error: tls: bad certificate
2025-08-13T19:52:59.123999671+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54422: remote error: tls: bad certificate
2025-08-13T19:52:59.166915983+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54438: remote error: tls: bad certificate
2025-08-13T19:52:59.203270908+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54452: remote error: tls: bad certificate
2025-08-13T19:52:59.248928677+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54462: remote error: tls: bad certificate
2025-08-13T19:52:59.286589269+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54466: remote error: tls: bad certificate
2025-08-13T19:53:05.229994755+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54472: remote error: tls: bad certificate
2025-08-13T19:53:05.246737951+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54488: remote error: tls: bad certificate
2025-08-13T19:53:05.263674723+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54504: remote error: tls: bad certificate
2025-08-13T19:53:05.285403972+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54516: remote error: tls: bad certificate
2025-08-13T19:53:05.301223082+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54520: remote error: tls: bad certificate
2025-08-13T19:53:05.322422625+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54536: remote error: tls: bad certificate
2025-08-13T19:53:05.342665881+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54552: remote error: tls: bad certificate
2025-08-13T19:53:05.362065353+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54558: remote error: tls: bad certificate
2025-08-13T19:53:05.382523416+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54560: remote error: tls: bad certificate
2025-08-13T19:53:05.398259064+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54572: remote error: tls: bad certificate
2025-08-13T19:53:05.418216132+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54578: remote error: tls: bad certificate
2025-08-13T19:53:05.435298288+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54594: remote error: tls: bad certificate
2025-08-13T19:53:05.453285800+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54606: remote error: tls: bad certificate
2025-08-13T19:53:05.471257461+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54618: remote error: tls: bad certificate
2025-08-13T19:53:05.498908108+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54626: remote error: tls: bad certificate
2025-08-13T19:53:05.520762580+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54640: remote error: tls: bad certificate
2025-08-13T19:53:05.539638158+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54642: remote error: tls: bad certificate
2025-08-13T19:53:05.556648482+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54656: remote error: tls: bad certificate
2025-08-13T19:53:05.574329645+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54664: remote error: tls: bad certificate
2025-08-13T19:53:05.592428570+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54678: remote error: tls: bad certificate
2025-08-13T19:53:05.608616891+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54688: remote error: tls: bad certificate
2025-08-13T19:53:05.625933634+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54698: remote error: tls: bad certificate
2025-08-13T19:53:05.641921509+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54710: remote error: tls: bad certificate
2025-08-13T19:53:05.658406988+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54720: remote error: tls: bad certificate
2025-08-13T19:53:05.683173693+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54724: remote error: tls: bad certificate
2025-08-13T19:53:05.703444870+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54738: remote error: tls: bad certificate
2025-08-13T19:53:05.724500489+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54750: remote error: tls: bad certificate
2025-08-13T19:53:05.742389318+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54754: remote error: tls: bad certificate
2025-08-13T19:53:05.758951460+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54762: remote error: tls: bad certificate
2025-08-13T19:53:05.790780435+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54774: remote error: tls: bad certificate
2025-08-13T19:53:05.807976765+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54780: remote error: tls: bad certificate
2025-08-13T19:53:05.823508947+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54782: remote error: tls: bad certificate
2025-08-13T19:53:05.841977013+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54794: remote error: tls: bad certificate
2025-08-13T19:53:05.863257048+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54798: remote error: tls: bad certificate
2025-08-13T19:53:05.878192283+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54804: remote error: tls: bad certificate
2025-08-13T19:53:05.895533517+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54810: remote error: tls: bad certificate
2025-08-13T19:53:05.921506496+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54812: remote error: tls: bad certificate
2025-08-13T19:53:05.941344901+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54816: remote error: tls: bad certificate
2025-08-13T19:53:05.956712448+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54824: remote error: tls: bad certificate
2025-08-13T19:53:05.974279778+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54834: remote error: tls: bad certificate
2025-08-13T19:53:05.989635535+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54838: remote error: tls: bad certificate
2025-08-13T19:53:06.007107003+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54846: remote error: tls: bad certificate
2025-08-13T19:53:06.025421104+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54854: remote error: tls: bad certificate
2025-08-13T19:53:06.048625044+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54860: remote error: tls: bad certificate
2025-08-13T19:53:06.071913707+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54874: remote error: tls: bad certificate
2025-08-13T19:53:06.089215219+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54890: remote error: tls: bad certificate
2025-08-13T19:53:06.105432961+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54904: remote error: tls: bad certificate
2025-08-13T19:53:06.125697978+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54918: remote error: tls: bad certificate
2025-08-13T19:53:06.144250206+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54924: remote error: tls: bad certificate
2025-08-13T19:53:06.174491027+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54930: remote error: tls: bad certificate
2025-08-13T19:53:06.191339976+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54936: remote error: tls: bad certificate
2025-08-13T19:53:06.206277961+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54944: remote error: tls: bad certificate
2025-08-13T19:53:06.223619465+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54956: remote error: tls: bad certificate
2025-08-13T19:53:06.245045025+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54964: remote error: tls: bad certificate
2025-08-13T19:53:06.259867487+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54970: remote error: tls: bad certificate
2025-08-13T19:53:06.277542920+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54978: remote error: tls: bad certificate
2025-08-13T19:53:06.295119290+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54986: remote error: tls: bad certificate
2025-08-13T19:53:06.312107523+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54992: remote error: tls: bad certificate
2025-08-13T19:53:06.327216773+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54998: remote error: tls: bad certificate
2025-08-13T19:53:06.343608990+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55012: remote error: tls: bad certificate
2025-08-13T19:53:06.361256012+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55020: remote error: tls: bad certificate
2025-08-13T19:53:06.381916410+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55036: remote error: tls: bad certificate
2025-08-13T19:53:06.396906137+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55044: remote error: tls: bad certificate
2025-08-13T19:53:06.413956552+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55048: remote error: tls: bad certificate
2025-08-13T19:53:06.428983080+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55052: remote error: tls: bad certificate
2025-08-13T19:53:06.444305856+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55066: remote error: tls: bad certificate
2025-08-13T19:53:06.461992179+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55074: remote error: tls: bad certificate
2025-08-13T19:53:08.124377283+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55090: remote error: tls: bad certificate
2025-08-13T19:53:08.144708872+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55098: remote error: tls: bad certificate
2025-08-13T19:53:08.169200089+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55100: remote error: tls: bad certificate
2025-08-13T19:53:08.191587256+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55108: remote error: tls: bad certificate
2025-08-13T19:53:08.215764564+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55122: remote error: tls: bad certificate
2025-08-13T19:53:15.230254308+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57320: remote error: tls: bad certificate
2025-08-13T19:53:15.254519958+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57336: remote error: tls: bad certificate
2025-08-13T19:53:15.273246881+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate
2025-08-13T19:53:15.290993626+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57344: remote error: tls: bad certificate
2025-08-13T19:53:15.308042922+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57360: remote error: tls: bad certificate
2025-08-13T19:53:15.323416099+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57376: remote error: tls: bad certificate
2025-08-13T19:53:15.348010609+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57378: remote error: tls: bad certificate
2025-08-13T19:53:15.364569510+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57382: remote error: tls: bad certificate
2025-08-13T19:53:15.383879610+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57388: remote error: tls: bad certificate
2025-08-13T19:53:15.403854139+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57404: remote error: tls: bad certificate
2025-08-13T19:53:15.421906672+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57406: remote error: tls: bad certificate
2025-08-13T19:53:15.441069268+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57422: remote error: tls: bad certificate
2025-08-13T19:53:15.472902974+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57438: remote error: tls: bad certificate
2025-08-13T19:53:15.488501388+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57442: remote error: tls: bad certificate
2025-08-13T19:53:15.502286220+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57454: remote error: tls: bad certificate
2025-08-13T19:53:15.519585602+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate
2025-08-13T19:53:15.544101080+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57458: remote error: tls: bad certificate
2025-08-13T19:53:15.561890296+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57462: remote error: tls: bad certificate
2025-08-13T19:53:15.579199579+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57464: remote error: tls: bad certificate
2025-08-13T19:53:15.599265390+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57468: remote error: tls: bad certificate
2025-08-13T19:53:15.617419847+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate
2025-08-13T19:53:15.633664659+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57486: remote error: tls: bad certificate
2025-08-13T19:53:15.648979215+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57502: remote error: tls: bad certificate
2025-08-13T19:53:15.668605694+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57504: remote error: tls: bad certificate
2025-08-13T19:53:15.685357471+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57516: remote error: tls: bad certificate
2025-08-13T19:53:15.702692944+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57530: remote error: tls: bad certificate
2025-08-13T19:53:15.720031608+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57532: remote error: tls: bad certificate
2025-08-13T19:53:15.738600986+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57534: remote error: tls: bad certificate
2025-08-13T19:53:15.761599861+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57536: remote error: tls: bad certificate
2025-08-13T19:53:15.776962568+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57546: remote error: tls: bad certificate
2025-08-13T19:53:15.791767109+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57558: remote error: tls: bad certificate
2025-08-13T19:53:15.809112343+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57568: remote error: tls: bad certificate
2025-08-13T19:53:15.825866270+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57574: remote error: tls: bad certificate
2025-08-13T19:53:15.841345910+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57580: remote error: tls: bad certificate
2025-08-13T19:53:15.855971407+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57584: remote error: tls: bad certificate
2025-08-13T19:53:15.871254862+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57590: remote error: tls: bad certificate
2025-08-13T19:53:15.888957645+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57596: remote error: tls: bad certificate
2025-08-13T19:53:15.907379430+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57608: remote error: tls: bad certificate
2025-08-13T19:53:15.925199656+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57620: remote error: tls: bad certificate
2025-08-13T19:53:15.942733185+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57626: remote error: tls: bad certificate
2025-08-13T19:53:15.960165881+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57630: remote error: tls: bad certificate
2025-08-13T19:53:15.976287160+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57636: remote error: tls: bad certificate
2025-08-13T19:53:15.994373955+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57644: remote error: tls: bad certificate
2025-08-13T19:53:16.010107503+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57648: remote error: tls: bad certificate
2025-08-13T19:53:16.024446411+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57654: remote error: tls: bad certificate
2025-08-13T19:53:16.040982671+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57670: remote error: tls: bad certificate
2025-08-13T19:53:16.056222665+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57686: remote error: tls: bad certificate
2025-08-13T19:53:16.073181838+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57690: remote error: tls: bad certificate
2025-08-13T19:53:16.091155159+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57696: remote error: tls: bad certificate
2025-08-13T19:53:16.107908546+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57710: remote error: tls: bad certificate
2025-08-13T19:53:16.130296923+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate
2025-08-13T19:53:16.147045200+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57728: remote error: tls: bad certificate
2025-08-13T19:53:16.161113440+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57736: remote error: tls: bad certificate
2025-08-13T19:53:16.179346959+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57744: remote error: tls: bad certificate
2025-08-13T19:53:16.194681446+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57750: remote error: tls: bad certificate
2025-08-13T19:53:16.214370226+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57754: remote error: tls: bad certificate
2025-08-13T19:53:16.230876996+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57766: remote error: tls: bad certificate
2025-08-13T19:53:16.247018645+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57768: remote error: tls: bad certificate
2025-08-13T19:53:16.264169344+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57772: remote error: tls: bad certificate
2025-08-13T19:53:16.280287652+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57776: remote error: tls: bad certificate
2025-08-13T19:53:16.300380054+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57792: remote error: tls: bad certificate
2025-08-13T19:53:16.315936757+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57808: remote error: tls: bad certificate
2025-08-13T19:53:16.334189337+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57816: remote error: tls: bad certificate
2025-08-13T19:53:16.353718072+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57818: remote error: tls: bad certificate
2025-08-13T19:53:16.373083804+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57824: remote error: tls: bad certificate
2025-08-13T19:53:16.399998900+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57838: remote error: tls: bad certificate
2025-08-13T19:53:16.418068144+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57844: remote error: tls: bad certificate
2025-08-13T19:53:18.646465778+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57856: remote error: tls: bad certificate
2025-08-13T19:53:18.680082655+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57872: remote error: tls: bad certificate
2025-08-13T19:53:18.706134956+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57882: remote error: tls: bad certificate
2025-08-13T19:53:18.728389240+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57884: remote error: tls: bad certificate
2025-08-13T19:53:18.749899032+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:34698: remote error: tls: bad certificate
2025-08-13T19:53:22.591021295+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34710: remote error: tls: bad certificate
2025-08-13T19:53:22.605655702+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34724: remote error: tls: bad certificate
2025-08-13T19:53:22.620566116+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34736: remote error: tls: bad certificate
2025-08-13T19:53:22.645705832+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34746: remote error: tls: bad certificate
2025-08-13T19:53:22.698604287+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34750: remote error: tls: bad certificate
2025-08-13T19:53:22.723939048+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34754: remote error: tls: bad certificate
2025-08-13T19:53:22.751387410+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34760: remote error: tls: bad certificate
2025-08-13T19:53:22.776338900+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34774: remote error: tls: bad certificate
2025-08-13T19:53:22.803865243+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34788: remote error: tls: bad certificate
2025-08-13T19:53:22.825047576+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34800: remote error: tls: bad certificate
2025-08-13T19:53:22.826663162+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34796: remote error: tls: bad certificate
2025-08-13T19:53:22.844406477+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34804: remote error: tls: bad certificate
2025-08-13T19:53:22.862061109+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34820: remote error: tls: bad certificate
2025-08-13T19:53:22.880748861+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34824: remote error: tls: bad certificate
2025-08-13T19:53:22.902638684+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34832: remote error: tls: bad certificate
2025-08-13T19:53:22.921644185+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34838: remote error: tls: bad certificate 2025-08-13T19:53:22.933935435+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34840: remote error: tls: bad certificate 2025-08-13T19:53:22.950113506+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34850: remote error: tls: bad certificate 2025-08-13T19:53:22.965595246+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34858: remote error: tls: bad certificate 2025-08-13T19:53:22.981860829+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34870: remote error: tls: bad certificate 2025-08-13T19:53:23.001581990+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34884: remote error: tls: bad certificate 2025-08-13T19:53:23.021316992+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34896: remote error: tls: bad certificate 2025-08-13T19:53:23.039603773+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34900: remote error: tls: bad certificate 2025-08-13T19:53:23.063196324+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34910: remote error: tls: bad certificate 2025-08-13T19:53:23.080436494+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34914: remote error: tls: bad certificate 2025-08-13T19:53:23.100300379+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34922: remote error: tls: bad certificate 2025-08-13T19:53:23.122367917+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34938: remote error: tls: bad certificate 2025-08-13T19:53:23.141321797+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34944: remote error: tls: bad certificate 2025-08-13T19:53:23.159916806+00:00 stderr F 2025/08/13 19:53:23 http: TLS 
handshake error from 127.0.0.1:34948: remote error: tls: bad certificate 2025-08-13T19:53:23.178090393+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34954: remote error: tls: bad certificate 2025-08-13T19:53:23.196761804+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34968: remote error: tls: bad certificate 2025-08-13T19:53:23.220670975+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34976: remote error: tls: bad certificate 2025-08-13T19:53:23.248884208+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34990: remote error: tls: bad certificate 2025-08-13T19:53:23.266045596+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34992: remote error: tls: bad certificate 2025-08-13T19:53:23.285069508+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35000: remote error: tls: bad certificate 2025-08-13T19:53:23.307900188+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35008: remote error: tls: bad certificate 2025-08-13T19:53:23.326323412+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35020: remote error: tls: bad certificate 2025-08-13T19:53:23.346357272+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35026: remote error: tls: bad certificate 2025-08-13T19:53:23.362327797+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35038: remote error: tls: bad certificate 2025-08-13T19:53:23.379161196+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35044: remote error: tls: bad certificate 2025-08-13T19:53:23.402203582+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35046: remote error: tls: bad certificate 2025-08-13T19:53:23.417482587+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35056: remote error: tls: bad certificate 
2025-08-13T19:53:23.447920933+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35058: remote error: tls: bad certificate 2025-08-13T19:53:23.473523032+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35074: remote error: tls: bad certificate 2025-08-13T19:53:23.500123999+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35084: remote error: tls: bad certificate 2025-08-13T19:53:23.518177983+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35098: remote error: tls: bad certificate 2025-08-13T19:53:23.536484004+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35112: remote error: tls: bad certificate 2025-08-13T19:53:23.563257856+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35122: remote error: tls: bad certificate 2025-08-13T19:53:23.585182160+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35130: remote error: tls: bad certificate 2025-08-13T19:53:23.605052625+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35134: remote error: tls: bad certificate 2025-08-13T19:53:23.631633512+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35148: remote error: tls: bad certificate 2025-08-13T19:53:23.653284638+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35156: remote error: tls: bad certificate 2025-08-13T19:53:23.674903663+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35162: remote error: tls: bad certificate 2025-08-13T19:53:23.693084581+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35168: remote error: tls: bad certificate 2025-08-13T19:53:23.716563189+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35180: remote error: tls: bad certificate 2025-08-13T19:53:23.734459658+00:00 stderr F 2025/08/13 19:53:23 http: TLS 
handshake error from 127.0.0.1:35184: remote error: tls: bad certificate 2025-08-13T19:53:23.753267944+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35186: remote error: tls: bad certificate 2025-08-13T19:53:23.771244175+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35188: remote error: tls: bad certificate 2025-08-13T19:53:23.786530830+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35194: remote error: tls: bad certificate 2025-08-13T19:53:23.808616299+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35210: remote error: tls: bad certificate 2025-08-13T19:53:23.833299832+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35218: remote error: tls: bad certificate 2025-08-13T19:53:23.850043338+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35228: remote error: tls: bad certificate 2025-08-13T19:53:23.865424936+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35242: remote error: tls: bad certificate 2025-08-13T19:53:23.885454916+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35250: remote error: tls: bad certificate 2025-08-13T19:53:23.903011546+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35260: remote error: tls: bad certificate 2025-08-13T19:53:23.921135541+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35262: remote error: tls: bad certificate 2025-08-13T19:53:23.933987027+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35268: remote error: tls: bad certificate 2025-08-13T19:53:23.939688169+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35282: remote error: tls: bad certificate 2025-08-13T19:53:23.961271644+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35290: remote error: tls: bad certificate 
2025-08-13T19:53:24.608612958+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35304: remote error: tls: bad certificate 2025-08-13T19:53:24.627416963+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35308: remote error: tls: bad certificate 2025-08-13T19:53:24.648132722+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35312: remote error: tls: bad certificate 2025-08-13T19:53:24.667632877+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35314: remote error: tls: bad certificate 2025-08-13T19:53:24.687181144+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35326: remote error: tls: bad certificate 2025-08-13T19:53:24.708126890+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35334: remote error: tls: bad certificate 2025-08-13T19:53:24.726086821+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35338: remote error: tls: bad certificate 2025-08-13T19:53:24.749867818+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35344: remote error: tls: bad certificate 2025-08-13T19:53:24.769401404+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35346: remote error: tls: bad certificate 2025-08-13T19:53:24.788411295+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35348: remote error: tls: bad certificate 2025-08-13T19:53:24.806189681+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35362: remote error: tls: bad certificate 2025-08-13T19:53:24.823518654+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35364: remote error: tls: bad certificate 2025-08-13T19:53:24.839507479+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35380: remote error: tls: bad certificate 2025-08-13T19:53:24.863265816+00:00 stderr F 2025/08/13 19:53:24 http: TLS 
handshake error from 127.0.0.1:35382: remote error: tls: bad certificate 2025-08-13T19:53:24.888878835+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35384: remote error: tls: bad certificate 2025-08-13T19:53:24.903199092+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35390: remote error: tls: bad certificate 2025-08-13T19:53:24.922199463+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35392: remote error: tls: bad certificate 2025-08-13T19:53:24.938519207+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35408: remote error: tls: bad certificate 2025-08-13T19:53:24.964687082+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35412: remote error: tls: bad certificate 2025-08-13T19:53:24.984069714+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35428: remote error: tls: bad certificate 2025-08-13T19:53:25.010507486+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35442: remote error: tls: bad certificate 2025-08-13T19:53:25.027988314+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35456: remote error: tls: bad certificate 2025-08-13T19:53:25.046068598+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35460: remote error: tls: bad certificate 2025-08-13T19:53:25.066701436+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35464: remote error: tls: bad certificate 2025-08-13T19:53:25.092472169+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35466: remote error: tls: bad certificate 2025-08-13T19:53:25.110581865+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35478: remote error: tls: bad certificate 2025-08-13T19:53:25.127249079+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35484: remote error: tls: bad certificate 
2025-08-13T19:53:25.144859750+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35494: remote error: tls: bad certificate 2025-08-13T19:53:25.168981347+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35510: remote error: tls: bad certificate 2025-08-13T19:53:25.187260547+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35522: remote error: tls: bad certificate 2025-08-13T19:53:25.203738546+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35536: remote error: tls: bad certificate 2025-08-13T19:53:25.233270696+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35544: remote error: tls: bad certificate 2025-08-13T19:53:25.250883778+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35558: remote error: tls: bad certificate 2025-08-13T19:53:25.266268146+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35560: remote error: tls: bad certificate 2025-08-13T19:53:25.281882170+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35570: remote error: tls: bad certificate 2025-08-13T19:53:25.301460557+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35580: remote error: tls: bad certificate 2025-08-13T19:53:25.319433589+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35582: remote error: tls: bad certificate 2025-08-13T19:53:25.339662705+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35590: remote error: tls: bad certificate 2025-08-13T19:53:25.362344160+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35592: remote error: tls: bad certificate 2025-08-13T19:53:25.381756173+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35608: remote error: tls: bad certificate 2025-08-13T19:53:25.404191501+00:00 stderr F 2025/08/13 19:53:25 http: TLS 
handshake error from 127.0.0.1:35610: remote error: tls: bad certificate 2025-08-13T19:53:25.420409283+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35612: remote error: tls: bad certificate 2025-08-13T19:53:25.434623817+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35626: remote error: tls: bad certificate 2025-08-13T19:53:25.454376730+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35634: remote error: tls: bad certificate 2025-08-13T19:53:25.472355041+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35636: remote error: tls: bad certificate 2025-08-13T19:53:25.492913216+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35650: remote error: tls: bad certificate 2025-08-13T19:53:25.507830691+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35654: remote error: tls: bad certificate 2025-08-13T19:53:25.523375113+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35668: remote error: tls: bad certificate 2025-08-13T19:53:25.538526555+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35676: remote error: tls: bad certificate 2025-08-13T19:53:25.555080726+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35682: remote error: tls: bad certificate 2025-08-13T19:53:25.568680863+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35698: remote error: tls: bad certificate 2025-08-13T19:53:25.585471281+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35710: remote error: tls: bad certificate 2025-08-13T19:53:25.611140111+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35712: remote error: tls: bad certificate 2025-08-13T19:53:25.626273972+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35722: remote error: tls: bad certificate 
2025-08-13T19:53:25.642274498+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35730: remote error: tls: bad certificate 2025-08-13T19:53:25.659359294+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35732: remote error: tls: bad certificate 2025-08-13T19:53:25.680177646+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35734: remote error: tls: bad certificate 2025-08-13T19:53:25.700996869+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35740: remote error: tls: bad certificate 2025-08-13T19:53:25.716110819+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35742: remote error: tls: bad certificate 2025-08-13T19:53:25.740514744+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35748: remote error: tls: bad certificate 2025-08-13T19:53:25.759281618+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35764: remote error: tls: bad certificate 2025-08-13T19:53:25.775606942+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35776: remote error: tls: bad certificate 2025-08-13T19:53:25.797337521+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35792: remote error: tls: bad certificate 2025-08-13T19:53:25.826253004+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35796: remote error: tls: bad certificate 2025-08-13T19:53:25.866070427+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35800: remote error: tls: bad certificate 2025-08-13T19:53:25.909065421+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35806: remote error: tls: bad certificate 2025-08-13T19:53:25.945585070+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35814: remote error: tls: bad certificate 2025-08-13T19:53:25.985547478+00:00 stderr F 2025/08/13 19:53:25 http: TLS 
handshake error from 127.0.0.1:35824: remote error: tls: bad certificate 2025-08-13T19:53:26.029852279+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35836: remote error: tls: bad certificate 2025-08-13T19:53:26.068288143+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35840: remote error: tls: bad certificate 2025-08-13T19:53:26.105680077+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35846: remote error: tls: bad certificate 2025-08-13T19:53:26.146863929+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35856: remote error: tls: bad certificate 2025-08-13T19:53:26.190884572+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35870: remote error: tls: bad certificate 2025-08-13T19:53:26.228761950+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35880: remote error: tls: bad certificate 2025-08-13T19:53:26.266262067+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35896: remote error: tls: bad certificate 2025-08-13T19:53:26.309062875+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35908: remote error: tls: bad certificate 2025-08-13T19:53:26.351186044+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35912: remote error: tls: bad certificate 2025-08-13T19:53:26.386908341+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35924: remote error: tls: bad certificate 2025-08-13T19:53:26.425640023+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35938: remote error: tls: bad certificate 2025-08-13T19:53:26.465864668+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35950: remote error: tls: bad certificate 2025-08-13T19:53:26.511654321+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35966: remote error: tls: bad certificate 
2025-08-13T19:53:26.550470566+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35974: remote error: tls: bad certificate 2025-08-13T19:53:26.590697781+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35984: remote error: tls: bad certificate 2025-08-13T19:53:26.626270743+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35988: remote error: tls: bad certificate 2025-08-13T19:53:26.666348433+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35994: remote error: tls: bad certificate 2025-08-13T19:53:26.704407896+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36008: remote error: tls: bad certificate 2025-08-13T19:53:26.744889409+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36014: remote error: tls: bad certificate 2025-08-13T19:53:26.785546426+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36016: remote error: tls: bad certificate 2025-08-13T19:53:26.823026432+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36032: remote error: tls: bad certificate 2025-08-13T19:53:26.864359949+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36040: remote error: tls: bad certificate 2025-08-13T19:53:26.937361086+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36042: remote error: tls: bad certificate 2025-08-13T19:53:26.985947649+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36050: remote error: tls: bad certificate 2025-08-13T19:53:27.005917928+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36052: remote error: tls: bad certificate 2025-08-13T19:53:27.023608801+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36062: remote error: tls: bad certificate 2025-08-13T19:53:27.064141945+00:00 stderr F 2025/08/13 19:53:27 http: TLS 
handshake error from 127.0.0.1:36066: remote error: tls: bad certificate 2025-08-13T19:53:27.107856529+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36078: remote error: tls: bad certificate 2025-08-13T19:53:27.147151868+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36090: remote error: tls: bad certificate 2025-08-13T19:53:27.186036994+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36106: remote error: tls: bad certificate 2025-08-13T19:53:27.225337063+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36120: remote error: tls: bad certificate 2025-08-13T19:53:27.267027029+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36122: remote error: tls: bad certificate 2025-08-13T19:53:27.305330799+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36126: remote error: tls: bad certificate 2025-08-13T19:53:27.344620998+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36134: remote error: tls: bad certificate 2025-08-13T19:53:27.387692233+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36144: remote error: tls: bad certificate 2025-08-13T19:53:27.428065963+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36148: remote error: tls: bad certificate 2025-08-13T19:53:27.465972601+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36154: remote error: tls: bad certificate 2025-08-13T19:53:27.505528007+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36158: remote error: tls: bad certificate 2025-08-13T19:53:27.545444804+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36172: remote error: tls: bad certificate 2025-08-13T19:53:27.583631270+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36182: remote error: tls: bad certificate 
2025-08-13T19:53:27.626555522+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36190: remote error: tls: bad certificate 2025-08-13T19:53:27.664970126+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36202: remote error: tls: bad certificate 2025-08-13T19:53:27.706243150+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36208: remote error: tls: bad certificate 2025-08-13T19:53:27.750895711+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36212: remote error: tls: bad certificate 2025-08-13T19:53:27.785310780+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36222: remote error: tls: bad certificate 2025-08-13T19:53:27.824358372+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36234: remote error: tls: bad certificate 2025-08-13T19:53:27.863650190+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36242: remote error: tls: bad certificate 2025-08-13T19:53:27.907160328+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36248: remote error: tls: bad certificate 2025-08-13T19:53:27.944426589+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36260: remote error: tls: bad certificate 2025-08-13T19:53:27.988175244+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36266: remote error: tls: bad certificate 2025-08-13T19:53:28.025438775+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36276: remote error: tls: bad certificate 2025-08-13T19:53:28.066518224+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36292: remote error: tls: bad certificate 2025-08-13T19:53:28.104514646+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36302: remote error: tls: bad certificate 2025-08-13T19:53:28.147359285+00:00 stderr F 2025/08/13 19:53:28 http: TLS 
handshake error from 127.0.0.1:36304: remote error: tls: bad certificate 2025-08-13T19:53:28.183104632+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36316: remote error: tls: bad certificate 2025-08-13T19:53:28.228401082+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36324: remote error: tls: bad certificate 2025-08-13T19:53:28.268228015+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36332: remote error: tls: bad certificate 2025-08-13T19:53:28.304863848+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36338: remote error: tls: bad certificate 2025-08-13T19:53:28.345934627+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36342: remote error: tls: bad certificate 2025-08-13T19:53:28.387258953+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36358: remote error: tls: bad certificate 2025-08-13T19:53:28.426484299+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36374: remote error: tls: bad certificate 2025-08-13T19:53:28.467051074+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36382: remote error: tls: bad certificate 2025-08-13T19:53:28.507511066+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36398: remote error: tls: bad certificate 2025-08-13T19:53:28.542378358+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36410: remote error: tls: bad certificate 2025-08-13T19:53:28.587055820+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36418: remote error: tls: bad certificate 2025-08-13T19:53:28.623354253+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36426: remote error: tls: bad certificate 2025-08-13T19:53:29.089904451+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55254: remote error: tls: bad certificate 
2025-08-13T19:53:29.108303885+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55260: remote error: tls: bad certificate 2025-08-13T19:53:29.126511273+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55276: remote error: tls: bad certificate 2025-08-13T19:53:29.143966440+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55288: remote error: tls: bad certificate 2025-08-13T19:53:29.161856669+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55294: remote error: tls: bad certificate 2025-08-13T19:53:30.648552861+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55300: remote error: tls: bad certificate 2025-08-13T19:53:30.670918988+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55302: remote error: tls: bad certificate 2025-08-13T19:53:30.690852175+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55318: remote error: tls: bad certificate 2025-08-13T19:53:30.706744047+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55324: remote error: tls: bad certificate 2025-08-13T19:53:30.730114503+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55330: remote error: tls: bad certificate 2025-08-13T19:53:30.777334826+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55332: remote error: tls: bad certificate 2025-08-13T19:53:30.797171361+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55336: remote error: tls: bad certificate 2025-08-13T19:53:30.815413220+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55344: remote error: tls: bad certificate 2025-08-13T19:53:30.830752917+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55356: remote error: tls: bad certificate 2025-08-13T19:53:30.848394119+00:00 stderr F 2025/08/13 19:53:30 http: TLS 
handshake error from 127.0.0.1:55362: remote error: tls: bad certificate 2025-08-13T19:53:30.865723282+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:53:30.882869460+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55376: remote error: tls: bad certificate 2025-08-13T19:53:30.900918944+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55388: remote error: tls: bad certificate 2025-08-13T19:53:30.918315269+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55392: remote error: tls: bad certificate 2025-08-13T19:53:30.936917549+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55408: remote error: tls: bad certificate 2025-08-13T19:53:30.955001983+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55420: remote error: tls: bad certificate 2025-08-13T19:53:30.973015396+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55434: remote error: tls: bad certificate 2025-08-13T19:53:30.991390179+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55436: remote error: tls: bad certificate 2025-08-13T19:53:31.007022344+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55450: remote error: tls: bad certificate 2025-08-13T19:53:31.022090353+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55466: remote error: tls: bad certificate 2025-08-13T19:53:31.042383720+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55474: remote error: tls: bad certificate 2025-08-13T19:53:31.061742501+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55484: remote error: tls: bad certificate 2025-08-13T19:53:31.079064354+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 
2025-08-13T19:53:31.099677331+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55488: remote error: tls: bad certificate
2025-08-13T19:53:31.124328733+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55502: remote error: tls: bad certificate
2025-08-13T19:53:31.142139209+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55512: remote error: tls: bad certificate
2025-08-13T19:53:31.161342126+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55524: remote error: tls: bad certificate
2025-08-13T19:53:31.181322705+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55534: remote error: tls: bad certificate
2025-08-13T19:53:31.204093203+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55538: remote error: tls: bad certificate
2025-08-13T19:53:31.226852321+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55552: remote error: tls: bad certificate
2025-08-13T19:53:31.244687408+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55558: remote error: tls: bad certificate
2025-08-13T19:53:31.263547545+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55566: remote error: tls: bad certificate
2025-08-13T19:53:31.285508470+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate
2025-08-13T19:53:31.300322562+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55586: remote error: tls: bad certificate
2025-08-13T19:53:31.315855434+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55600: remote error: tls: bad certificate
2025-08-13T19:53:31.337696365+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55616: remote error: tls: bad certificate
2025-08-13T19:53:31.361398320+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55622: remote error: tls: bad certificate
2025-08-13T19:53:31.382212752+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55636: remote error: tls: bad certificate
2025-08-13T19:53:31.398568328+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55638: remote error: tls: bad certificate
2025-08-13T19:53:31.413845023+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55646: remote error: tls: bad certificate
2025-08-13T19:53:31.429848348+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55650: remote error: tls: bad certificate
2025-08-13T19:53:31.444301550+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55652: remote error: tls: bad certificate
2025-08-13T19:53:31.461504869+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55668: remote error: tls: bad certificate
2025-08-13T19:53:31.476961669+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55684: remote error: tls: bad certificate
2025-08-13T19:53:31.493117119+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55694: remote error: tls: bad certificate
2025-08-13T19:53:31.514311192+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55696: remote error: tls: bad certificate
2025-08-13T19:53:31.533068836+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55712: remote error: tls: bad certificate
2025-08-13T19:53:31.548439704+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55718: remote error: tls: bad certificate
2025-08-13T19:53:31.565037486+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55734: remote error: tls: bad certificate
2025-08-13T19:53:31.580173317+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55750: remote error: tls: bad certificate
2025-08-13T19:53:31.604257492+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55760: remote error: tls: bad certificate
2025-08-13T19:53:31.622169452+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55772: remote error: tls: bad certificate
2025-08-13T19:53:31.640706120+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55780: remote error: tls: bad certificate
2025-08-13T19:53:31.668234673+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55790: remote error: tls: bad certificate
2025-08-13T19:53:31.686745880+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55804: remote error: tls: bad certificate
2025-08-13T19:53:31.702044525+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55808: remote error: tls: bad certificate
2025-08-13T19:53:31.715996002+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55814: remote error: tls: bad certificate
2025-08-13T19:53:31.731558275+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55820: remote error: tls: bad certificate
2025-08-13T19:53:31.755003593+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55824: remote error: tls: bad certificate
2025-08-13T19:53:31.775730363+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55836: remote error: tls: bad certificate
2025-08-13T19:53:31.793231691+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55838: remote error: tls: bad certificate
2025-08-13T19:53:31.818394737+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55854: remote error: tls: bad certificate
2025-08-13T19:53:31.836188003+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55858: remote error: tls: bad certificate
2025-08-13T19:53:31.858885919+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55870: remote error: tls: bad certificate
2025-08-13T19:53:31.875597495+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55872: remote error: tls: bad certificate
2025-08-13T19:53:31.892421774+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55888: remote error: tls: bad certificate
2025-08-13T19:53:31.908754679+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55890: remote error: tls: bad certificate
2025-08-13T19:53:35.224718462+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55896: remote error: tls: bad certificate
2025-08-13T19:53:35.244349553+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55906: remote error: tls: bad certificate
2025-08-13T19:53:35.261050260+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55920: remote error: tls: bad certificate
2025-08-13T19:53:35.279110096+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55926: remote error: tls: bad certificate
2025-08-13T19:53:35.303117371+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55938: remote error: tls: bad certificate
2025-08-13T19:53:35.321457375+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55952: remote error: tls: bad certificate
2025-08-13T19:53:35.337327778+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55958: remote error: tls: bad certificate
2025-08-13T19:53:35.355110416+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55968: remote error: tls: bad certificate
2025-08-13T19:53:35.368436766+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55978: remote error: tls: bad certificate
2025-08-13T19:53:35.389464507+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55994: remote error: tls: bad certificate
2025-08-13T19:53:35.408029257+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55996: remote error: tls: bad certificate
2025-08-13T19:53:35.425146375+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55998: remote error: tls: bad certificate
2025-08-13T19:53:35.441105791+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56000: remote error: tls: bad certificate
2025-08-13T19:53:35.457208691+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56004: remote error: tls: bad certificate
2025-08-13T19:53:35.473743393+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56010: remote error: tls: bad certificate
2025-08-13T19:53:35.498664745+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56020: remote error: tls: bad certificate
2025-08-13T19:53:35.513682983+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56034: remote error: tls: bad certificate
2025-08-13T19:53:35.526658354+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56040: remote error: tls: bad certificate
2025-08-13T19:53:35.549298110+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56046: remote error: tls: bad certificate
2025-08-13T19:53:35.567544191+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56052: remote error: tls: bad certificate
2025-08-13T19:53:35.584734132+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56066: remote error: tls: bad certificate
2025-08-13T19:53:35.600997377+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56068: remote error: tls: bad certificate
2025-08-13T19:53:35.618671231+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56074: remote error: tls: bad certificate
2025-08-13T19:53:35.634733770+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56076: remote error: tls: bad certificate
2025-08-13T19:53:35.654704980+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56088: remote error: tls: bad certificate
2025-08-13T19:53:35.669576245+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56090: remote error: tls: bad certificate
2025-08-13T19:53:35.687559288+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56102: remote error: tls: bad certificate
2025-08-13T19:53:35.705010066+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56112: remote error: tls: bad certificate
2025-08-13T19:53:35.722108005+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56118: remote error: tls: bad certificate
2025-08-13T19:53:35.743603298+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56128: remote error: tls: bad certificate
2025-08-13T19:53:35.766235015+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56138: remote error: tls: bad certificate
2025-08-13T19:53:35.786905845+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56146: remote error: tls: bad certificate
2025-08-13T19:53:35.802561162+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56154: remote error: tls: bad certificate
2025-08-13T19:53:35.818548608+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56168: remote error: tls: bad certificate
2025-08-13T19:53:35.838656143+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56172: remote error: tls: bad certificate
2025-08-13T19:53:35.857985005+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56186: remote error: tls: bad certificate
2025-08-13T19:53:35.875248337+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56198: remote error: tls: bad certificate
2025-08-13T19:53:35.891375408+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56212: remote error: tls: bad certificate
2025-08-13T19:53:35.906640524+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56216: remote error: tls: bad certificate
2025-08-13T19:53:35.928942981+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56232: remote error: tls: bad certificate
2025-08-13T19:53:35.947115659+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56244: remote error: tls: bad certificate
2025-08-13T19:53:35.968036007+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56254: remote error: tls: bad certificate
2025-08-13T19:53:35.987885954+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56270: remote error: tls: bad certificate
2025-08-13T19:53:36.005971000+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56284: remote error: tls: bad certificate
2025-08-13T19:53:36.063898344+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56288: remote error: tls: bad certificate
2025-08-13T19:53:36.100912221+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56302: remote error: tls: bad certificate
2025-08-13T19:53:36.113892492+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56308: remote error: tls: bad certificate
2025-08-13T19:53:36.133520782+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56312: remote error: tls: bad certificate
2025-08-13T19:53:36.152039901+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56318: remote error: tls: bad certificate
2025-08-13T19:53:36.172991059+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56334: remote error: tls: bad certificate
2025-08-13T19:53:36.193922207+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56336: remote error: tls: bad certificate
2025-08-13T19:53:36.214219296+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56350: remote error: tls: bad certificate
2025-08-13T19:53:36.236563664+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56360: remote error: tls: bad certificate
2025-08-13T19:53:36.254482196+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56362: remote error: tls: bad certificate
2025-08-13T19:53:36.275089304+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56370: remote error: tls: bad certificate
2025-08-13T19:53:36.290217346+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56378: remote error: tls: bad certificate
2025-08-13T19:53:36.308409306+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56388: remote error: tls: bad certificate
2025-08-13T19:53:36.327734598+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56398: remote error: tls: bad certificate
2025-08-13T19:53:36.349240702+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56402: remote error: tls: bad certificate
2025-08-13T19:53:36.370983732+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56410: remote error: tls: bad certificate
2025-08-13T19:53:36.390080348+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56426: remote error: tls: bad certificate
2025-08-13T19:53:36.407403552+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56442: remote error: tls: bad certificate
2025-08-13T19:53:36.424618184+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56450: remote error: tls: bad certificate
2025-08-13T19:53:36.464289917+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56464: remote error: tls: bad certificate
2025-08-13T19:53:36.482019983+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56472: remote error: tls: bad certificate
2025-08-13T19:53:36.499596495+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56484: remote error: tls: bad certificate
2025-08-13T19:53:36.518895376+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56498: remote error: tls: bad certificate
2025-08-13T19:53:39.228603146+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50212: remote error: tls: bad certificate
2025-08-13T19:53:39.242932135+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50224: remote error: tls: bad certificate
2025-08-13T19:53:39.258147480+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50226: remote error: tls: bad certificate
2025-08-13T19:53:39.273697894+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50240: remote error: tls: bad certificate
2025-08-13T19:53:39.297952886+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50244: remote error: tls: bad certificate
2025-08-13T19:53:39.314940781+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50256: remote error: tls: bad certificate
2025-08-13T19:53:39.330761093+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50268: remote error: tls: bad certificate
2025-08-13T19:53:39.345921536+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50274: remote error: tls: bad certificate
2025-08-13T19:53:39.366085762+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50280: remote error: tls: bad certificate
2025-08-13T19:53:39.381858312+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50288: remote error: tls: bad certificate
2025-08-13T19:53:39.399841156+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50290: remote error: tls: bad certificate
2025-08-13T19:53:39.414869175+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50298: remote error: tls: bad certificate
2025-08-13T19:53:39.434983749+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50312: remote error: tls: bad certificate
2025-08-13T19:53:39.441425853+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50324: remote error: tls: bad certificate
2025-08-13T19:53:39.454354192+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50332: remote error: tls: bad certificate
2025-08-13T19:53:39.460229950+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50348: remote error: tls: bad certificate
2025-08-13T19:53:39.473339114+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50362: remote error: tls: bad certificate
2025-08-13T19:53:39.485412799+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50372: remote error: tls: bad certificate
2025-08-13T19:53:39.488173648+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50376: remote error: tls: bad certificate
2025-08-13T19:53:39.506087959+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50382: remote error: tls: bad certificate
2025-08-13T19:53:39.508954771+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50396: remote error: tls: bad certificate
2025-08-13T19:53:39.523573769+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50400: remote error: tls: bad certificate
2025-08-13T19:53:39.535036366+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50410: remote error: tls: bad certificate
2025-08-13T19:53:39.540852022+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50416: remote error: tls: bad certificate
2025-08-13T19:53:39.556114958+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50418: remote error: tls: bad certificate
2025-08-13T19:53:39.570016065+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50430: remote error: tls: bad certificate
2025-08-13T19:53:39.591465867+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50436: remote error: tls: bad certificate
2025-08-13T19:53:39.609619156+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50446: remote error: tls: bad certificate
2025-08-13T19:53:39.624335206+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50454: remote error: tls: bad certificate
2025-08-13T19:53:39.646403826+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50458: remote error: tls: bad certificate
2025-08-13T19:53:39.662891037+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50464: remote error: tls: bad certificate
2025-08-13T19:53:39.679004807+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50472: remote error: tls: bad certificate
2025-08-13T19:53:39.693375467+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50488: remote error: tls: bad certificate
2025-08-13T19:53:39.710139846+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50492: remote error: tls: bad certificate
2025-08-13T19:53:39.731186017+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50506: remote error: tls: bad certificate
2025-08-13T19:53:39.745092314+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50514: remote error: tls: bad certificate
2025-08-13T19:53:39.759182706+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50526: remote error: tls: bad certificate
2025-08-13T19:53:39.785442716+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50536: remote error: tls: bad certificate
2025-08-13T19:53:39.813161948+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50550: remote error: tls: bad certificate
2025-08-13T19:53:39.838641005+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50562: remote error: tls: bad certificate
2025-08-13T19:53:39.864312688+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50572: remote error: tls: bad certificate
2025-08-13T19:53:39.880470369+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50584: remote error: tls: bad certificate
2025-08-13T19:53:39.896138737+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50598: remote error: tls: bad certificate
2025-08-13T19:53:39.912435242+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50606: remote error: tls: bad certificate
2025-08-13T19:53:39.932437133+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50612: remote error: tls: bad certificate
2025-08-13T19:53:39.949203182+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50622: remote error: tls: bad certificate
2025-08-13T19:53:39.966415514+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50636: remote error: tls: bad certificate
2025-08-13T19:53:39.981556116+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50648: remote error: tls: bad certificate
2025-08-13T19:53:39.997675256+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50656: remote error: tls: bad certificate
2025-08-13T19:53:40.011871321+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50670: remote error: tls: bad certificate
2025-08-13T19:53:40.034566529+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50686: remote error: tls: bad certificate
2025-08-13T19:53:40.049210498+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50698: remote error: tls: bad certificate
2025-08-13T19:53:40.064974488+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50702: remote error: tls: bad certificate
2025-08-13T19:53:40.079085701+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50712: remote error: tls: bad certificate
2025-08-13T19:53:40.096647742+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50728: remote error: tls: bad certificate
2025-08-13T19:53:40.113604686+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50732: remote error: tls: bad certificate
2025-08-13T19:53:40.131424605+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50740: remote error: tls: bad certificate
2025-08-13T19:53:40.147917386+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50744: remote error: tls: bad certificate
2025-08-13T19:53:40.167720461+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50758: remote error: tls: bad certificate
2025-08-13T19:53:40.181520045+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50762: remote error: tls: bad certificate
2025-08-13T19:53:40.195728551+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50770: remote error: tls: bad certificate
2025-08-13T19:53:40.212724847+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50784: remote error: tls: bad certificate
2025-08-13T19:53:40.232612634+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50800: remote error: tls: bad certificate
2025-08-13T19:53:40.256121036+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50806: remote error: tls: bad certificate
2025-08-13T19:53:40.280362708+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50822: remote error: tls: bad certificate
2025-08-13T19:53:40.298548077+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50836: remote error: tls: bad certificate
2025-08-13T19:53:40.317684724+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50846: remote error: tls: bad certificate
2025-08-13T19:53:40.334333439+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50850: remote error: tls: bad certificate
2025-08-13T19:53:40.348950486+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50866: remote error: tls: bad certificate
2025-08-13T19:53:40.370865132+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50876: remote error: tls: bad certificate
2025-08-13T19:53:40.386952351+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50890: remote error: tls: bad certificate
2025-08-13T19:53:40.404317317+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50896: remote error: tls: bad certificate
2025-08-13T19:53:40.682183542+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50906: remote error: tls: bad certificate
2025-08-13T19:53:40.830138346+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50912: remote error: tls: bad certificate
2025-08-13T19:53:40.852362851+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50916: remote error: tls: bad certificate
2025-08-13T19:53:40.867715729+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50924: remote error: tls: bad certificate
2025-08-13T19:53:40.893137955+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50936: remote error: tls: bad certificate
2025-08-13T19:53:40.908983197+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50950: remote error: tls: bad certificate
2025-08-13T19:53:40.924633474+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50964: remote error: tls: bad certificate
2025-08-13T19:53:40.940548709+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50970: remote error: tls: bad certificate
2025-08-13T19:53:40.956331789+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50984: remote error: tls: bad certificate
2025-08-13T19:53:40.970675268+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50996: remote error: tls: bad certificate
2025-08-13T19:53:40.984748710+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:51012: remote error: tls: bad certificate
2025-08-13T19:53:41.000306144+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51018: remote error: tls: bad certificate
2025-08-13T19:53:41.016050553+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51024: remote error: tls: bad certificate
2025-08-13T19:53:41.029655492+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51036: remote error: tls: bad certificate
2025-08-13T19:53:41.046054150+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51042: remote error: tls: bad certificate
2025-08-13T19:53:41.063399065+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51050: remote error: tls: bad certificate
2025-08-13T19:53:41.077921450+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51052: remote error: tls: bad certificate
2025-08-13T19:53:41.095677637+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51062: remote error: tls: bad certificate
2025-08-13T19:53:41.110105979+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51068: remote error: tls: bad certificate
2025-08-13T19:53:41.125640423+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51082: remote error: tls: bad certificate
2025-08-13T19:53:41.145631493+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51096: remote error: tls: bad certificate
2025-08-13T19:53:41.164570324+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51098: remote error: tls: bad certificate
2025-08-13T19:53:41.184034170+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51104: remote error: tls: bad certificate
2025-08-13T19:53:41.203053723+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51118: remote error: tls: bad certificate
2025-08-13T19:53:41.232431612+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51126: remote error: tls: bad certificate
2025-08-13T19:53:41.251340712+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51132: remote error: tls: bad certificate
2025-08-13T19:53:41.267706039+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51148: remote error: tls: bad certificate
2025-08-13T19:53:41.287701610+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51152: remote error: tls: bad certificate
2025-08-13T19:53:41.305005814+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51162: remote error: tls: bad certificate
2025-08-13T19:53:41.324337866+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51170: remote error: tls: bad certificate
2025-08-13T19:53:41.340895179+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51172: remote error: tls: bad certificate
2025-08-13T19:53:41.358911143+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51186: remote error: tls: bad certificate
2025-08-13T19:53:41.378574595+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51202: remote error: tls: bad certificate
2025-08-13T19:53:41.394209601+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51212: remote error: tls: bad certificate
2025-08-13T19:53:41.415108168+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51216: remote error: tls: bad certificate
2025-08-13T19:53:41.438500306+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51226: remote error: tls: bad certificate
2025-08-13T19:53:41.455054438+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51228: remote error: tls: bad certificate
2025-08-13T19:53:41.473008991+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51232: remote error: tls: bad certificate
2025-08-13T19:53:41.488947096+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51236: remote error: tls: bad certificate
2025-08-13T19:53:41.505689324+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51248: remote error: tls: bad certificate
2025-08-13T19:53:41.542899427+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51260: remote error: tls: bad certificate
2025-08-13T19:53:41.580967164+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51276: remote error: tls: bad certificate
2025-08-13T19:53:41.621004757+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51290: remote error: tls: bad certificate
2025-08-13T19:53:41.659405283+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51294: remote error: tls: bad certificate
2025-08-13T19:53:41.701855455+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51302: remote error: tls: bad certificate
2025-08-13T19:53:41.750594647+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51310: remote error: tls: bad certificate
2025-08-13T19:53:41.780458050+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51318: remote error: tls: bad certificate
2025-08-13T19:53:41.819269118+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51330: remote error: tls: bad certificate
2025-08-13T19:53:41.859475406+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51336: remote error: tls: bad certificate
2025-08-13T19:53:41.904050659+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51350: remote error: tls: bad certificate
2025-08-13T19:53:41.941043335+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51358: remote error: tls: bad certificate
2025-08-13T19:53:41.980527062+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51370: remote error: tls: bad certificate
2025-08-13T19:53:42.021474882+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51384: remote error: tls: bad certificate
2025-08-13T19:53:42.061280108+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51388: remote error: tls: bad certificate
2025-08-13T19:53:42.103641688+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51390: remote error: tls: bad certificate
2025-08-13T19:53:42.138520574+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51394: remote error: tls: bad certificate
2025-08-13T19:53:42.179604217+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51410: remote error: tls: bad certificate
2025-08-13T19:53:42.223700516+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51426: remote error: tls: bad certificate
2025-08-13T19:53:42.266453537+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51440: remote error: tls: bad certificate
2025-08-13T19:53:42.413671650+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51450: remote error: tls: bad certificate
2025-08-13T19:53:42.439923510+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51454: remote error: tls: bad certificate 2025-08-13T19:53:42.482879177+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51464: remote error: tls: bad certificate 2025-08-13T19:53:42.510876456+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51466: remote error: tls: bad certificate 2025-08-13T19:53:42.546915215+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51474: remote error: tls: bad certificate 2025-08-13T19:53:42.564044934+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51482: remote error: tls: bad certificate 2025-08-13T19:53:42.579032892+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51486: remote error: tls: bad certificate 2025-08-13T19:53:42.597240472+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51492: remote error: tls: bad certificate 2025-08-13T19:53:45.226381044+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51494: remote error: tls: bad certificate 2025-08-13T19:53:45.241320290+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51500: remote error: tls: bad certificate 2025-08-13T19:53:45.263977617+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51506: remote error: tls: bad certificate 2025-08-13T19:53:45.279577513+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51522: remote error: tls: bad certificate 2025-08-13T19:53:45.299569743+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51524: remote error: tls: bad certificate 2025-08-13T19:53:45.315553690+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51536: remote error: tls: bad certificate 2025-08-13T19:53:45.341116639+00:00 stderr F 2025/08/13 19:53:45 http: TLS 
handshake error from 127.0.0.1:51546: remote error: tls: bad certificate 2025-08-13T19:53:45.361326947+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51562: remote error: tls: bad certificate 2025-08-13T19:53:45.386511596+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51574: remote error: tls: bad certificate 2025-08-13T19:53:45.405372184+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51590: remote error: tls: bad certificate 2025-08-13T19:53:45.426649522+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51602: remote error: tls: bad certificate 2025-08-13T19:53:45.442340130+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51604: remote error: tls: bad certificate 2025-08-13T19:53:45.460097047+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51612: remote error: tls: bad certificate 2025-08-13T19:53:45.479731118+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51624: remote error: tls: bad certificate 2025-08-13T19:53:45.533474902+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51640: remote error: tls: bad certificate 2025-08-13T19:53:45.551358853+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51654: remote error: tls: bad certificate 2025-08-13T19:53:45.565606420+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51670: remote error: tls: bad certificate 2025-08-13T19:53:45.580892986+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51682: remote error: tls: bad certificate 2025-08-13T19:53:45.598843629+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51684: remote error: tls: bad certificate 2025-08-13T19:53:45.614548227+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51700: remote error: tls: bad certificate 
2025-08-13T19:53:45.631554293+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51712: remote error: tls: bad certificate 2025-08-13T19:53:45.646499990+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51724: remote error: tls: bad certificate 2025-08-13T19:53:45.664373710+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51740: remote error: tls: bad certificate 2025-08-13T19:53:45.679107241+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51748: remote error: tls: bad certificate 2025-08-13T19:53:45.701559222+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51764: remote error: tls: bad certificate 2025-08-13T19:53:45.720105771+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51772: remote error: tls: bad certificate 2025-08-13T19:53:45.737989852+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51780: remote error: tls: bad certificate 2025-08-13T19:53:45.756665025+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51796: remote error: tls: bad certificate 2025-08-13T19:53:45.775487953+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51806: remote error: tls: bad certificate 2025-08-13T19:53:45.795995138+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51808: remote error: tls: bad certificate 2025-08-13T19:53:45.815296229+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51816: remote error: tls: bad certificate 2025-08-13T19:53:45.837343949+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51818: remote error: tls: bad certificate 2025-08-13T19:53:45.854453197+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51822: remote error: tls: bad certificate 2025-08-13T19:53:45.872035029+00:00 stderr F 2025/08/13 19:53:45 http: TLS 
handshake error from 127.0.0.1:51824: remote error: tls: bad certificate 2025-08-13T19:53:45.889322273+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51826: remote error: tls: bad certificate 2025-08-13T19:53:45.910040195+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51828: remote error: tls: bad certificate 2025-08-13T19:53:45.935540523+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51842: remote error: tls: bad certificate 2025-08-13T19:53:45.954573896+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51856: remote error: tls: bad certificate 2025-08-13T19:53:45.972642602+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51858: remote error: tls: bad certificate 2025-08-13T19:53:45.992005025+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51860: remote error: tls: bad certificate 2025-08-13T19:53:46.008844656+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51868: remote error: tls: bad certificate 2025-08-13T19:53:46.024552044+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51870: remote error: tls: bad certificate 2025-08-13T19:53:46.040129379+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51882: remote error: tls: bad certificate 2025-08-13T19:53:46.054839769+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51894: remote error: tls: bad certificate 2025-08-13T19:53:46.069429396+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51898: remote error: tls: bad certificate 2025-08-13T19:53:46.084429934+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51914: remote error: tls: bad certificate 2025-08-13T19:53:46.106535985+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51928: remote error: tls: bad certificate 
2025-08-13T19:53:46.134769421+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51930: remote error: tls: bad certificate 2025-08-13T19:53:46.159466717+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51934: remote error: tls: bad certificate 2025-08-13T19:53:46.178661315+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51948: remote error: tls: bad certificate 2025-08-13T19:53:46.195898767+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51956: remote error: tls: bad certificate 2025-08-13T19:53:46.219998745+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51958: remote error: tls: bad certificate 2025-08-13T19:53:46.241092777+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51972: remote error: tls: bad certificate 2025-08-13T19:53:46.260916013+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51976: remote error: tls: bad certificate 2025-08-13T19:53:46.278870666+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51980: remote error: tls: bad certificate 2025-08-13T19:53:46.299576877+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51992: remote error: tls: bad certificate 2025-08-13T19:53:46.316179501+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51998: remote error: tls: bad certificate 2025-08-13T19:53:46.333503016+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52006: remote error: tls: bad certificate 2025-08-13T19:53:46.349693378+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52022: remote error: tls: bad certificate 2025-08-13T19:53:46.368871606+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52034: remote error: tls: bad certificate 2025-08-13T19:53:46.389220157+00:00 stderr F 2025/08/13 19:53:46 http: TLS 
handshake error from 127.0.0.1:52038: remote error: tls: bad certificate 2025-08-13T19:53:46.409970849+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52044: remote error: tls: bad certificate 2025-08-13T19:53:46.429712483+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52056: remote error: tls: bad certificate 2025-08-13T19:53:46.448750617+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52058: remote error: tls: bad certificate 2025-08-13T19:53:46.480049740+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52068: remote error: tls: bad certificate 2025-08-13T19:53:46.501206315+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52076: remote error: tls: bad certificate 2025-08-13T19:53:46.522698028+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52088: remote error: tls: bad certificate 2025-08-13T19:53:49.617164196+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44584: remote error: tls: bad certificate 2025-08-13T19:53:49.636610121+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44600: remote error: tls: bad certificate 2025-08-13T19:53:49.659212496+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44614: remote error: tls: bad certificate 2025-08-13T19:53:49.678895508+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate 2025-08-13T19:53:49.696625414+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44630: remote error: tls: bad certificate 2025-08-13T19:53:52.227095057+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44646: remote error: tls: bad certificate 2025-08-13T19:53:52.268926331+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44660: remote error: tls: bad certificate 
2025-08-13T19:53:52.294504561+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44676: remote error: tls: bad certificate 2025-08-13T19:53:52.314733569+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44684: remote error: tls: bad certificate 2025-08-13T19:53:52.333155155+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44686: remote error: tls: bad certificate 2025-08-13T19:53:52.353277979+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44702: remote error: tls: bad certificate 2025-08-13T19:53:52.380062434+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44710: remote error: tls: bad certificate 2025-08-13T19:53:52.414549829+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44718: remote error: tls: bad certificate 2025-08-13T19:53:52.440060697+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44720: remote error: tls: bad certificate 2025-08-13T19:53:52.457727922+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate 2025-08-13T19:53:52.472683809+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44734: remote error: tls: bad certificate 2025-08-13T19:53:52.496383826+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44750: remote error: tls: bad certificate 2025-08-13T19:53:52.517896320+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44758: remote error: tls: bad certificate 2025-08-13T19:53:52.541147764+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44768: remote error: tls: bad certificate 2025-08-13T19:53:52.557736047+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44778: remote error: tls: bad certificate 2025-08-13T19:53:52.574990300+00:00 stderr F 2025/08/13 19:53:52 http: TLS 
handshake error from 127.0.0.1:44792: remote error: tls: bad certificate 2025-08-13T19:53:52.603636518+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44806: remote error: tls: bad certificate 2025-08-13T19:53:52.619749288+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44812: remote error: tls: bad certificate 2025-08-13T19:53:52.635091136+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate 2025-08-13T19:53:52.660011058+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44830: remote error: tls: bad certificate 2025-08-13T19:53:52.680188254+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44836: remote error: tls: bad certificate 2025-08-13T19:53:52.696991764+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44838: remote error: tls: bad certificate 2025-08-13T19:53:52.711068896+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44854: remote error: tls: bad certificate 2025-08-13T19:53:52.729148802+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44866: remote error: tls: bad certificate 2025-08-13T19:53:52.745949452+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44868: remote error: tls: bad certificate 2025-08-13T19:53:52.762474943+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44872: remote error: tls: bad certificate 2025-08-13T19:53:52.778204163+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44888: remote error: tls: bad certificate 2025-08-13T19:53:52.796036292+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44898: remote error: tls: bad certificate 2025-08-13T19:53:52.818504463+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44902: remote error: tls: bad certificate 
2025-08-13T19:53:52.837562267+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44916: remote error: tls: bad certificate 2025-08-13T19:53:52.856105647+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44930: remote error: tls: bad certificate 2025-08-13T19:53:52.875528422+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44940: remote error: tls: bad certificate 2025-08-13T19:53:52.894423241+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44942: remote error: tls: bad certificate 2025-08-13T19:53:52.916570783+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44950: remote error: tls: bad certificate 2025-08-13T19:53:52.936099211+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44956: remote error: tls: bad certificate 2025-08-13T19:53:52.955720791+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44972: remote error: tls: bad certificate 2025-08-13T19:53:52.972533801+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44984: remote error: tls: bad certificate 2025-08-13T19:53:52.988563959+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44994: remote error: tls: bad certificate 2025-08-13T19:53:53.005485722+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45008: remote error: tls: bad certificate 2025-08-13T19:53:53.022024175+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45012: remote error: tls: bad certificate 2025-08-13T19:53:53.038862085+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45016: remote error: tls: bad certificate 2025-08-13T19:53:53.056149359+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45032: remote error: tls: bad certificate 2025-08-13T19:53:53.076240013+00:00 stderr F 2025/08/13 19:53:53 http: TLS 
handshake error from 127.0.0.1:45038: remote error: tls: bad certificate 2025-08-13T19:53:53.093720342+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45054: remote error: tls: bad certificate 2025-08-13T19:53:53.110699807+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45056: remote error: tls: bad certificate 2025-08-13T19:53:53.129115762+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45068: remote error: tls: bad certificate 2025-08-13T19:53:53.145554232+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45076: remote error: tls: bad certificate 2025-08-13T19:53:53.161638391+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45088: remote error: tls: bad certificate 2025-08-13T19:53:53.178665857+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45098: remote error: tls: bad certificate 2025-08-13T19:53:53.194976583+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45102: remote error: tls: bad certificate 2025-08-13T19:53:53.217332771+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45116: remote error: tls: bad certificate 2025-08-13T19:53:53.232177375+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45126: remote error: tls: bad certificate 2025-08-13T19:53:53.247967496+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45140: remote error: tls: bad certificate 2025-08-13T19:53:53.273479805+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45152: remote error: tls: bad certificate 2025-08-13T19:53:53.291633863+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45166: remote error: tls: bad certificate 2025-08-13T19:53:53.305135728+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45178: remote error: tls: bad certificate 
2025-08-13T19:53:53.322149134+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45192: remote error: tls: bad certificate 2025-08-13T19:53:53.338193382+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45194: remote error: tls: bad certificate 2025-08-13T19:53:53.357901645+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45202: remote error: tls: bad certificate 2025-08-13T19:53:53.380898872+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45214: remote error: tls: bad certificate 2025-08-13T19:53:53.399923025+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45228: remote error: tls: bad certificate 2025-08-13T19:53:53.418956578+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45240: remote error: tls: bad certificate 2025-08-13T19:53:53.438110095+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45248: remote error: tls: bad certificate 2025-08-13T19:53:53.456393227+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45252: remote error: tls: bad certificate 2025-08-13T19:53:53.473543627+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45254: remote error: tls: bad certificate 2025-08-13T19:53:53.493036634+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45270: remote error: tls: bad certificate 2025-08-13T19:53:53.512355815+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45276: remote error: tls: bad certificate 2025-08-13T19:53:55.236033843+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45290: remote error: tls: bad certificate 2025-08-13T19:53:55.252968866+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45298: remote error: tls: bad certificate 2025-08-13T19:53:55.270724493+00:00 stderr F 2025/08/13 19:53:55 http: TLS 
handshake error from 127.0.0.1:45310: remote error: tls: bad certificate 2025-08-13T19:53:55.287538742+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45326: remote error: tls: bad certificate 2025-08-13T19:53:55.302703325+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45338: remote error: tls: bad certificate 2025-08-13T19:53:55.321409719+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45352: remote error: tls: bad certificate 2025-08-13T19:53:55.338625911+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45356: remote error: tls: bad certificate 2025-08-13T19:53:55.358347094+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45358: remote error: tls: bad certificate 2025-08-13T19:53:55.373379503+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45370: remote error: tls: bad certificate 2025-08-13T19:53:55.390440810+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45382: remote error: tls: bad certificate 2025-08-13T19:53:55.436477805+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45392: remote error: tls: bad certificate 2025-08-13T19:53:55.465944856+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45396: remote error: tls: bad certificate 2025-08-13T19:53:55.491653850+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45400: remote error: tls: bad certificate 2025-08-13T19:53:55.511147616+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45408: remote error: tls: bad certificate 2025-08-13T19:53:55.526959548+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45410: remote error: tls: bad certificate 2025-08-13T19:53:55.544041555+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45416: remote error: tls: bad certificate 
2025-08-13T19:53:55.560329050+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45430: remote error: tls: bad certificate 2025-08-13T19:53:55.575998688+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45434: remote error: tls: bad certificate 2025-08-13T19:53:55.590063159+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45442: remote error: tls: bad certificate 2025-08-13T19:53:55.602979578+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45444: remote error: tls: bad certificate 2025-08-13T19:53:55.618206203+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45446: remote error: tls: bad certificate 2025-08-13T19:53:55.632362677+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45458: remote error: tls: bad certificate 2025-08-13T19:53:55.652016208+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45462: remote error: tls: bad certificate 2025-08-13T19:53:55.670677731+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45476: remote error: tls: bad certificate 2025-08-13T19:53:55.692228507+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45478: remote error: tls: bad certificate 2025-08-13T19:53:55.707612656+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45492: remote error: tls: bad certificate 2025-08-13T19:53:55.725362833+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45500: remote error: tls: bad certificate 2025-08-13T19:53:55.743701086+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45506: remote error: tls: bad certificate 2025-08-13T19:53:55.759030924+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45520: remote error: tls: bad certificate 2025-08-13T19:53:55.776724589+00:00 stderr F 2025/08/13 19:53:55 http: TLS 
handshake error from 127.0.0.1:45528: remote error: tls: bad certificate 2025-08-13T19:53:55.795650500+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45534: remote error: tls: bad certificate 2025-08-13T19:53:55.815980670+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45546: remote error: tls: bad certificate 2025-08-13T19:53:55.834473528+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45556: remote error: tls: bad certificate 2025-08-13T19:53:55.849135396+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45564: remote error: tls: bad certificate 2025-08-13T19:53:55.864593148+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45580: remote error: tls: bad certificate 2025-08-13T19:53:55.890759775+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45590: remote error: tls: bad certificate 2025-08-13T19:53:55.909515351+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45596: remote error: tls: bad certificate 2025-08-13T19:53:55.926053343+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45608: remote error: tls: bad certificate 2025-08-13T19:53:55.941849684+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45612: remote error: tls: bad certificate 2025-08-13T19:53:55.961326830+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45616: remote error: tls: bad certificate 2025-08-13T19:53:55.979036006+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45620: remote error: tls: bad certificate 2025-08-13T19:53:55.995602028+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45632: remote error: tls: bad certificate 2025-08-13T19:53:56.012747358+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45646: remote error: tls: bad certificate 
2025-08-13T19:53:56.030061782+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45648: remote error: tls: bad certificate 2025-08-13T19:53:56.046427580+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45656: remote error: tls: bad certificate 2025-08-13T19:53:56.069072727+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45672: remote error: tls: bad certificate 2025-08-13T19:53:56.085597829+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45678: remote error: tls: bad certificate 2025-08-13T19:53:56.101531444+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45690: remote error: tls: bad certificate 2025-08-13T19:53:56.118885179+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45700: remote error: tls: bad certificate 2025-08-13T19:53:56.133745703+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45702: remote error: tls: bad certificate 2025-08-13T19:53:56.158297445+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45712: remote error: tls: bad certificate 2025-08-13T19:53:56.179437368+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45724: remote error: tls: bad certificate 2025-08-13T19:53:56.194996012+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45726: remote error: tls: bad certificate 2025-08-13T19:53:56.215381685+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45740: remote error: tls: bad certificate 2025-08-13T19:53:56.232753340+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45744: remote error: tls: bad certificate 2025-08-13T19:53:56.247559913+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45754: remote error: tls: bad certificate 2025-08-13T19:53:56.261352467+00:00 stderr F 2025/08/13 19:53:56 http: TLS 
handshake error from 127.0.0.1:45770: remote error: tls: bad certificate 2025-08-13T19:53:56.280958337+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45784: remote error: tls: bad certificate 2025-08-13T19:53:56.298734144+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45786: remote error: tls: bad certificate 2025-08-13T19:53:56.314708520+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45802: remote error: tls: bad certificate 2025-08-13T19:53:56.330703157+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45804: remote error: tls: bad certificate 2025-08-13T19:53:56.347122716+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45814: remote error: tls: bad certificate 2025-08-13T19:53:56.372862041+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45828: remote error: tls: bad certificate 2025-08-13T19:53:56.393062768+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45838: remote error: tls: bad certificate 2025-08-13T19:53:56.414088778+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45848: remote error: tls: bad certificate 2025-08-13T19:53:56.431538687+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45864: remote error: tls: bad certificate 2025-08-13T19:53:56.448332786+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45872: remote error: tls: bad certificate 2025-08-13T19:53:59.994288411+00:00 stderr F 2025/08/13 19:53:59 http: TLS handshake error from 127.0.0.1:55278: remote error: tls: bad certificate 2025-08-13T19:54:00.014664192+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55280: remote error: tls: bad certificate 2025-08-13T19:54:00.034988413+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55296: remote error: tls: bad certificate 
2025-08-13T19:54:00.055701424+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55310: remote error: tls: bad certificate 2025-08-13T19:54:00.075603002+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55314: remote error: tls: bad certificate 2025-08-13T19:54:03.801344573+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55322: remote error: tls: bad certificate 2025-08-13T19:54:03.819261265+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55328: remote error: tls: bad certificate 2025-08-13T19:54:03.840228804+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55336: remote error: tls: bad certificate 2025-08-13T19:54:03.858451464+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55348: remote error: tls: bad certificate 2025-08-13T19:54:03.877581670+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55360: remote error: tls: bad certificate 2025-08-13T19:54:03.896873111+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:54:03.915740430+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55372: remote error: tls: bad certificate 2025-08-13T19:54:03.934873826+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55376: remote error: tls: bad certificate 2025-08-13T19:54:03.951560583+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55378: remote error: tls: bad certificate 2025-08-13T19:54:03.967153948+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55388: remote error: tls: bad certificate 2025-08-13T19:54:03.986569662+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55396: remote error: tls: bad certificate 2025-08-13T19:54:04.011360750+00:00 stderr F 2025/08/13 19:54:04 http: TLS 
handshake error from 127.0.0.1:55404: remote error: tls: bad certificate 2025-08-13T19:54:04.028307304+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55410: remote error: tls: bad certificate 2025-08-13T19:54:04.045060242+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55422: remote error: tls: bad certificate 2025-08-13T19:54:04.061246445+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55436: remote error: tls: bad certificate 2025-08-13T19:54:04.076940903+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55440: remote error: tls: bad certificate 2025-08-13T19:54:04.089880132+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55446: remote error: tls: bad certificate 2025-08-13T19:54:04.107173836+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55448: remote error: tls: bad certificate 2025-08-13T19:54:04.121222957+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55460: remote error: tls: bad certificate 2025-08-13T19:54:04.139381656+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55464: remote error: tls: bad certificate 2025-08-13T19:54:04.156322149+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55478: remote error: tls: bad certificate 2025-08-13T19:54:04.175953050+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:54:04.193653455+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55492: remote error: tls: bad certificate 2025-08-13T19:54:04.216049095+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55500: remote error: tls: bad certificate 2025-08-13T19:54:04.232299699+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55510: remote error: tls: bad certificate 
2025-08-13T19:54:04.249470209+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55514: remote error: tls: bad certificate 2025-08-13T19:54:04.266014841+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55516: remote error: tls: bad certificate 2025-08-13T19:54:04.283309595+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55522: remote error: tls: bad certificate 2025-08-13T19:54:04.298301583+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55534: remote error: tls: bad certificate 2025-08-13T19:54:04.314131095+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate 2025-08-13T19:54:04.332145940+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55548: remote error: tls: bad certificate 2025-08-13T19:54:04.349726382+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55556: remote error: tls: bad certificate 2025-08-13T19:54:04.366151321+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 2025-08-13T19:54:04.384687490+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55574: remote error: tls: bad certificate 2025-08-13T19:54:04.399412240+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55584: remote error: tls: bad certificate 2025-08-13T19:54:04.422123929+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55586: remote error: tls: bad certificate 2025-08-13T19:54:04.440418811+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55602: remote error: tls: bad certificate 2025-08-13T19:54:04.458900569+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55610: remote error: tls: bad certificate 2025-08-13T19:54:04.473187077+00:00 stderr F 2025/08/13 19:54:04 http: TLS 
handshake error from 127.0.0.1:55624: remote error: tls: bad certificate 2025-08-13T19:54:04.486955510+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55626: remote error: tls: bad certificate 2025-08-13T19:54:04.508549017+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55636: remote error: tls: bad certificate 2025-08-13T19:54:04.529548556+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55638: remote error: tls: bad certificate 2025-08-13T19:54:04.546664235+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55642: remote error: tls: bad certificate 2025-08-13T19:54:04.562135227+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55654: remote error: tls: bad certificate 2025-08-13T19:54:04.582879889+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55666: remote error: tls: bad certificate 2025-08-13T19:54:04.609154709+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:54:04.628435650+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate 2025-08-13T19:54:04.645911439+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate 2025-08-13T19:54:04.669073030+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55700: remote error: tls: bad certificate 2025-08-13T19:54:04.687314331+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55704: remote error: tls: bad certificate 2025-08-13T19:54:04.701906648+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55718: remote error: tls: bad certificate 2025-08-13T19:54:04.724113212+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55732: remote error: tls: bad certificate 
2025-08-13T19:54:04.744842514+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55736: remote error: tls: bad certificate 2025-08-13T19:54:04.761768077+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55748: remote error: tls: bad certificate 2025-08-13T19:54:04.791839526+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55758: remote error: tls: bad certificate 2025-08-13T19:54:04.809122739+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55766: remote error: tls: bad certificate 2025-08-13T19:54:04.824963751+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55774: remote error: tls: bad certificate 2025-08-13T19:54:04.842395249+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55790: remote error: tls: bad certificate 2025-08-13T19:54:04.862607836+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55792: remote error: tls: bad certificate 2025-08-13T19:54:04.883321408+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55800: remote error: tls: bad certificate 2025-08-13T19:54:04.910074012+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55810: remote error: tls: bad certificate 2025-08-13T19:54:04.934903461+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55812: remote error: tls: bad certificate 2025-08-13T19:54:04.956612600+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55828: remote error: tls: bad certificate 2025-08-13T19:54:04.984290721+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55840: remote error: tls: bad certificate 2025-08-13T19:54:05.004661692+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55852: remote error: tls: bad certificate 2025-08-13T19:54:05.029091190+00:00 stderr F 2025/08/13 19:54:05 http: TLS 
handshake error from 127.0.0.1:55854: remote error: tls: bad certificate 2025-08-13T19:54:05.068919787+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55866: remote error: tls: bad certificate 2025-08-13T19:54:05.231187881+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55868: remote error: tls: bad certificate 2025-08-13T19:54:05.253435246+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55872: remote error: tls: bad certificate 2025-08-13T19:54:05.283089523+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55874: remote error: tls: bad certificate 2025-08-13T19:54:05.297448933+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55884: remote error: tls: bad certificate 2025-08-13T19:54:05.313188692+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55894: remote error: tls: bad certificate 2025-08-13T19:54:05.339418061+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55898: remote error: tls: bad certificate 2025-08-13T19:54:05.360628287+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55910: remote error: tls: bad certificate 2025-08-13T19:54:05.377985413+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55918: remote error: tls: bad certificate 2025-08-13T19:54:05.395087051+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55928: remote error: tls: bad certificate 2025-08-13T19:54:05.411553941+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55944: remote error: tls: bad certificate 2025-08-13T19:54:05.429766171+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55958: remote error: tls: bad certificate 2025-08-13T19:54:05.449028661+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55966: remote error: tls: bad certificate 
2025-08-13T19:54:05.467946011+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55980: remote error: tls: bad certificate 2025-08-13T19:54:05.485629346+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55994: remote error: tls: bad certificate 2025-08-13T19:54:05.501872070+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56010: remote error: tls: bad certificate 2025-08-13T19:54:05.516638442+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56014: remote error: tls: bad certificate 2025-08-13T19:54:05.531454335+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56016: remote error: tls: bad certificate 2025-08-13T19:54:05.545691651+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56028: remote error: tls: bad certificate 2025-08-13T19:54:05.562223723+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56042: remote error: tls: bad certificate 2025-08-13T19:54:05.578884149+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56058: remote error: tls: bad certificate 2025-08-13T19:54:05.590260494+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56066: remote error: tls: bad certificate 2025-08-13T19:54:05.605460888+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56080: remote error: tls: bad certificate 2025-08-13T19:54:05.620248910+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56086: remote error: tls: bad certificate 2025-08-13T19:54:05.638069889+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56102: remote error: tls: bad certificate 2025-08-13T19:54:05.653660004+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56106: remote error: tls: bad certificate 2025-08-13T19:54:05.673205262+00:00 stderr F 2025/08/13 19:54:05 http: TLS 
handshake error from 127.0.0.1:56116: remote error: tls: bad certificate 2025-08-13T19:54:05.726669629+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56122: remote error: tls: bad certificate 2025-08-13T19:54:05.768411221+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56124: remote error: tls: bad certificate 2025-08-13T19:54:05.774735111+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56126: remote error: tls: bad certificate 2025-08-13T19:54:05.795135264+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56128: remote error: tls: bad certificate 2025-08-13T19:54:05.812221972+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56134: remote error: tls: bad certificate 2025-08-13T19:54:05.830135333+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56142: remote error: tls: bad certificate 2025-08-13T19:54:05.845655206+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56158: remote error: tls: bad certificate 2025-08-13T19:54:05.861531970+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56172: remote error: tls: bad certificate 2025-08-13T19:54:05.877125745+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56184: remote error: tls: bad certificate 2025-08-13T19:54:05.910217470+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56194: remote error: tls: bad certificate 2025-08-13T19:54:05.948091351+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56208: remote error: tls: bad certificate 2025-08-13T19:54:05.988656779+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56210: remote error: tls: bad certificate 2025-08-13T19:54:06.029546066+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56214: remote error: tls: bad certificate 
2025-08-13T19:54:06.076607450+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56226: remote error: tls: bad certificate 2025-08-13T19:54:06.107183223+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56238: remote error: tls: bad certificate 2025-08-13T19:54:06.146499386+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56252: remote error: tls: bad certificate 2025-08-13T19:54:06.194484836+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56266: remote error: tls: bad certificate 2025-08-13T19:54:06.228528168+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56282: remote error: tls: bad certificate 2025-08-13T19:54:06.271680090+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56292: remote error: tls: bad certificate 2025-08-13T19:54:06.309391997+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56298: remote error: tls: bad certificate 2025-08-13T19:54:06.350234173+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56310: remote error: tls: bad certificate 2025-08-13T19:54:06.386162899+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56314: remote error: tls: bad certificate 2025-08-13T19:54:06.428316272+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56326: remote error: tls: bad certificate 2025-08-13T19:54:06.476694274+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56334: remote error: tls: bad certificate 2025-08-13T19:54:06.509834350+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56346: remote error: tls: bad certificate 2025-08-13T19:54:06.553684632+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56358: remote error: tls: bad certificate 2025-08-13T19:54:06.591259765+00:00 stderr F 2025/08/13 19:54:06 http: TLS 
handshake error from 127.0.0.1:56374: remote error: tls: bad certificate 2025-08-13T19:54:06.630498745+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56380: remote error: tls: bad certificate 2025-08-13T19:54:06.666598066+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56388: remote error: tls: bad certificate 2025-08-13T19:54:06.705427025+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56398: remote error: tls: bad certificate 2025-08-13T19:54:06.748031691+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56406: remote error: tls: bad certificate 2025-08-13T19:54:06.789061583+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56416: remote error: tls: bad certificate 2025-08-13T19:54:06.828297713+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56426: remote error: tls: bad certificate 2025-08-13T19:54:06.869535760+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56440: remote error: tls: bad certificate 2025-08-13T19:54:06.906655370+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56452: remote error: tls: bad certificate 2025-08-13T19:54:06.947239849+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56464: remote error: tls: bad certificate 2025-08-13T19:54:06.990194796+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56474: remote error: tls: bad certificate 2025-08-13T19:54:07.028611323+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56480: remote error: tls: bad certificate 2025-08-13T19:54:07.072629619+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56496: remote error: tls: bad certificate 2025-08-13T19:54:07.110073149+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56498: remote error: tls: bad certificate 
2025-08-13T19:54:07.147519778+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56504: remote error: tls: bad certificate 2025-08-13T19:54:10.324143599+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53800: remote error: tls: bad certificate 2025-08-13T19:54:10.351052607+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53810: remote error: tls: bad certificate 2025-08-13T19:54:10.379028716+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53826: remote error: tls: bad certificate 2025-08-13T19:54:10.411750200+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53836: remote error: tls: bad certificate 2025-08-13T19:54:10.438112913+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53842: remote error: tls: bad certificate 2025-08-13T19:54:10.843878839+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53844: remote error: tls: bad certificate 2025-08-13T19:54:10.870279393+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53860: remote error: tls: bad certificate 2025-08-13T19:54:10.892950700+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53868: remote error: tls: bad certificate 2025-08-13T19:54:10.913163997+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53882: remote error: tls: bad certificate 2025-08-13T19:54:10.931237763+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53892: remote error: tls: bad certificate 2025-08-13T19:54:10.955366742+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53900: remote error: tls: bad certificate 2025-08-13T19:54:10.974190990+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53908: remote error: tls: bad certificate 2025-08-13T19:54:10.995896150+00:00 stderr F 2025/08/13 19:54:10 http: TLS 
handshake error from 127.0.0.1:53916: remote error: tls: bad certificate 2025-08-13T19:54:11.025141415+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53924: remote error: tls: bad certificate 2025-08-13T19:54:11.046967078+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53926: remote error: tls: bad certificate 2025-08-13T19:54:11.066142645+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53942: remote error: tls: bad certificate 2025-08-13T19:54:11.084764397+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53958: remote error: tls: bad certificate 2025-08-13T19:54:11.106043095+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53972: remote error: tls: bad certificate 2025-08-13T19:54:11.132503240+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53982: remote error: tls: bad certificate 2025-08-13T19:54:11.154251721+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53986: remote error: tls: bad certificate 2025-08-13T19:54:11.173139731+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53994: remote error: tls: bad certificate 2025-08-13T19:54:11.194903502+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54004: remote error: tls: bad certificate 2025-08-13T19:54:11.220897794+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54008: remote error: tls: bad certificate 2025-08-13T19:54:11.239054393+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54022: remote error: tls: bad certificate 2025-08-13T19:54:11.257567391+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54032: remote error: tls: bad certificate 2025-08-13T19:54:11.273608709+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54042: remote error: tls: bad certificate 
2025-08-13T19:54:11.289194604+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54048: remote error: tls: bad certificate 2025-08-13T19:54:11.305607913+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54064: remote error: tls: bad certificate 2025-08-13T19:54:11.321356573+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54068: remote error: tls: bad certificate 2025-08-13T19:54:11.347699525+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54076: remote error: tls: bad certificate 2025-08-13T19:54:11.366058569+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54086: remote error: tls: bad certificate 2025-08-13T19:54:11.385265668+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54094: remote error: tls: bad certificate 2025-08-13T19:54:11.408454650+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54106: remote error: tls: bad certificate 2025-08-13T19:54:11.428514993+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54110: remote error: tls: bad certificate 2025-08-13T19:54:11.449096430+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54114: remote error: tls: bad certificate 2025-08-13T19:54:11.464211522+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54130: remote error: tls: bad certificate 2025-08-13T19:54:11.481318800+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54134: remote error: tls: bad certificate 2025-08-13T19:54:11.499429527+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54148: remote error: tls: bad certificate 2025-08-13T19:54:11.516029891+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54164: remote error: tls: bad certificate 2025-08-13T19:54:11.534594972+00:00 stderr F 2025/08/13 19:54:11 http: TLS 
handshake error from 127.0.0.1:54176: remote error: tls: bad certificate 2025-08-13T19:54:11.549282941+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54190: remote error: tls: bad certificate 2025-08-13T19:54:11.567407278+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54194: remote error: tls: bad certificate 2025-08-13T19:54:11.587741259+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54208: remote error: tls: bad certificate 2025-08-13T19:54:11.606065312+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54222: remote error: tls: bad certificate 2025-08-13T19:54:11.622622815+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54236: remote error: tls: bad certificate 2025-08-13T19:54:11.639475956+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54240: remote error: tls: bad certificate 2025-08-13T19:54:11.653602620+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54250: remote error: tls: bad certificate 2025-08-13T19:54:11.669066781+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54256: remote error: tls: bad certificate 2025-08-13T19:54:11.688043553+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54268: remote error: tls: bad certificate 2025-08-13T19:54:11.704404330+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54278: remote error: tls: bad certificate 2025-08-13T19:54:11.719577643+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54284: remote error: tls: bad certificate 2025-08-13T19:54:11.734508770+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54290: remote error: tls: bad certificate 2025-08-13T19:54:11.750367353+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54294: remote error: tls: bad certificate 
2025-08-13T19:54:11.766342269+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54306: remote error: tls: bad certificate 2025-08-13T19:54:11.783522849+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54316: remote error: tls: bad certificate 2025-08-13T19:54:11.800200285+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54328: remote error: tls: bad certificate 2025-08-13T19:54:11.817072067+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54330: remote error: tls: bad certificate 2025-08-13T19:54:11.833085354+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54336: remote error: tls: bad certificate 2025-08-13T19:54:11.849252126+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54352: remote error: tls: bad certificate 2025-08-13T19:54:11.865088588+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54360: remote error: tls: bad certificate 2025-08-13T19:54:11.881589549+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54374: remote error: tls: bad certificate 2025-08-13T19:54:11.897978597+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54382: remote error: tls: bad certificate 2025-08-13T19:54:11.915336733+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54388: remote error: tls: bad certificate 2025-08-13T19:54:11.931051652+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54390: remote error: tls: bad certificate 2025-08-13T19:54:11.948364826+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54404: remote error: tls: bad certificate 2025-08-13T19:54:11.968452890+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54420: remote error: tls: bad certificate 2025-08-13T19:54:11.989436089+00:00 stderr F 2025/08/13 19:54:11 http: TLS 
handshake error from 127.0.0.1:54424: remote error: tls: bad certificate 2025-08-13T19:54:12.004551540+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54432: remote error: tls: bad certificate 2025-08-13T19:54:12.026195218+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54446: remote error: tls: bad certificate 2025-08-13T19:54:12.037979565+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54452: remote error: tls: bad certificate 2025-08-13T19:54:12.052939342+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54466: remote error: tls: bad certificate 2025-08-13T19:54:12.071769910+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54470: remote error: tls: bad certificate 2025-08-13T19:54:15.853328536+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54476: remote error: tls: bad certificate 2025-08-13T19:54:15.966140586+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54490: remote error: tls: bad certificate 2025-08-13T19:54:15.987193217+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54496: remote error: tls: bad certificate 2025-08-13T19:54:16.003514784+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54500: remote error: tls: bad certificate 2025-08-13T19:54:16.022766333+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54510: remote error: tls: bad certificate 2025-08-13T19:54:16.038674267+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54524: remote error: tls: bad certificate 2025-08-13T19:54:16.054370536+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54540: remote error: tls: bad certificate 2025-08-13T19:54:16.143197972+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54552: remote error: tls: bad certificate 
2025-08-13T19:54:16.158607582+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54566: remote error: tls: bad certificate 2025-08-13T19:54:16.175517075+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54574: remote error: tls: bad certificate 2025-08-13T19:54:16.191456580+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54580: remote error: tls: bad certificate 2025-08-13T19:54:16.205409408+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54592: remote error: tls: bad certificate 2025-08-13T19:54:16.223006201+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54602: remote error: tls: bad certificate 2025-08-13T19:54:16.239594454+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54608: remote error: tls: bad certificate 2025-08-13T19:54:16.258661369+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54622: remote error: tls: bad certificate 2025-08-13T19:54:16.276867389+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54628: remote error: tls: bad certificate 2025-08-13T19:54:16.294151662+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54644: remote error: tls: bad certificate 2025-08-13T19:54:16.312158906+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54654: remote error: tls: bad certificate 2025-08-13T19:54:16.329057329+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54670: remote error: tls: bad certificate 2025-08-13T19:54:16.347042602+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54682: remote error: tls: bad certificate 2025-08-13T19:54:16.367316011+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54686: remote error: tls: bad certificate 2025-08-13T19:54:16.386183990+00:00 stderr F 2025/08/13 19:54:16 http: TLS 
handshake error from 127.0.0.1:54688: remote error: tls: bad certificate 2025-08-13T19:54:16.402416114+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54702: remote error: tls: bad certificate 2025-08-13T19:54:16.417756592+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54716: remote error: tls: bad certificate 2025-08-13T19:54:16.433322186+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54724: remote error: tls: bad certificate 2025-08-13T19:54:16.457360412+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54740: remote error: tls: bad certificate 2025-08-13T19:54:16.479533656+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54754: remote error: tls: bad certificate 2025-08-13T19:54:16.499071644+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54768: remote error: tls: bad certificate 2025-08-13T19:54:16.524058997+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54784: remote error: tls: bad certificate 2025-08-13T19:54:16.540699042+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54790: remote error: tls: bad certificate 2025-08-13T19:54:16.560202489+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54806: remote error: tls: bad certificate 2025-08-13T19:54:16.576866315+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54814: remote error: tls: bad certificate 2025-08-13T19:54:16.594332203+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54822: remote error: tls: bad certificate 2025-08-13T19:54:16.609170517+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54828: remote error: tls: bad certificate 2025-08-13T19:54:16.626135522+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54830: remote error: tls: bad certificate 
2025-08-13T19:54:16.650720813+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54834: remote error: tls: bad certificate
2025-08-13T19:54:16.675662296+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54848: remote error: tls: bad certificate
2025-08-13T19:54:16.695085490+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54850: remote error: tls: bad certificate
2025-08-13T19:54:16.711583271+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54852: remote error: tls: bad certificate
2025-08-13T19:54:16.725719775+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54860: remote error: tls: bad certificate
2025-08-13T19:54:16.744172602+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54870: remote error: tls: bad certificate
2025-08-13T19:54:16.761632880+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54882: remote error: tls: bad certificate
2025-08-13T19:54:16.780635192+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54884: remote error: tls: bad certificate
2025-08-13T19:54:16.798684607+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54898: remote error: tls: bad certificate
2025-08-13T19:54:16.817738631+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54904: remote error: tls: bad certificate
2025-08-13T19:54:16.835916381+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54918: remote error: tls: bad certificate
2025-08-13T19:54:16.859974438+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54928: remote error: tls: bad certificate
2025-08-13T19:54:16.915907125+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54938: remote error: tls: bad certificate
2025-08-13T19:54:16.931963703+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54940: remote error: tls: bad certificate
2025-08-13T19:54:16.952153640+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54954: remote error: tls: bad certificate
2025-08-13T19:54:16.969020321+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54962: remote error: tls: bad certificate
2025-08-13T19:54:17.086860076+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54970: remote error: tls: bad certificate
2025-08-13T19:54:17.104701715+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54978: remote error: tls: bad certificate
2025-08-13T19:54:17.123251545+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54980: remote error: tls: bad certificate
2025-08-13T19:54:17.143195575+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54986: remote error: tls: bad certificate
2025-08-13T19:54:17.161661582+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54998: remote error: tls: bad certificate
2025-08-13T19:54:17.179342307+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55010: remote error: tls: bad certificate
2025-08-13T19:54:17.199657357+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55018: remote error: tls: bad certificate
2025-08-13T19:54:17.218228467+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55030: remote error: tls: bad certificate
2025-08-13T19:54:17.237230160+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55044: remote error: tls: bad certificate
2025-08-13T19:54:17.251893088+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55046: remote error: tls: bad certificate
2025-08-13T19:54:17.270647944+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55062: remote error: tls: bad certificate
2025-08-13T19:54:17.287085233+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55070: remote error: tls: bad certificate
2025-08-13T19:54:17.308525895+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55072: remote error: tls: bad certificate
2025-08-13T19:54:17.324706747+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55088: remote error: tls: bad certificate
2025-08-13T19:54:17.341763604+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55102: remote error: tls: bad certificate
2025-08-13T19:54:17.360314454+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55116: remote error: tls: bad certificate
2025-08-13T19:54:20.721525557+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53216: remote error: tls: bad certificate
2025-08-13T19:54:20.746320355+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53224: remote error: tls: bad certificate
2025-08-13T19:54:20.771123713+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53234: remote error: tls: bad certificate
2025-08-13T19:54:20.796913209+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53236: remote error: tls: bad certificate
2025-08-13T19:54:20.821871722+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53250: remote error: tls: bad certificate
2025-08-13T19:54:22.817745111+00:00 stderr F 2025/08/13 19:54:22 http: TLS handshake error from 127.0.0.1:53254: remote error: tls: bad certificate
2025-08-13T19:54:25.227300862+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53266: remote error: tls: bad certificate
2025-08-13T19:54:25.245855302+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53276: remote error: tls: bad certificate
2025-08-13T19:54:25.262362583+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53282: remote error: tls: bad certificate
2025-08-13T19:54:25.281349155+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53292: remote error: tls: bad certificate
2025-08-13T19:54:25.301179791+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53302: remote error: tls: bad certificate
2025-08-13T19:54:25.324967410+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53318: remote error: tls: bad certificate
2025-08-13T19:54:25.364442558+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53326: remote error: tls: bad certificate
2025-08-13T19:54:25.397158082+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53336: remote error: tls: bad certificate
2025-08-13T19:54:25.419758267+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53338: remote error: tls: bad certificate
2025-08-13T19:54:25.436679680+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53350: remote error: tls: bad certificate
2025-08-13T19:54:25.453617054+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53364: remote error: tls: bad certificate
2025-08-13T19:54:25.469684553+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53380: remote error: tls: bad certificate
2025-08-13T19:54:25.490530948+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53388: remote error: tls: bad certificate
2025-08-13T19:54:25.508708857+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53400: remote error: tls: bad certificate
2025-08-13T19:54:25.525215248+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53404: remote error: tls: bad certificate
2025-08-13T19:54:25.537947032+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53412: remote error: tls: bad certificate
2025-08-13T19:54:25.554302069+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53422: remote error: tls: bad certificate
2025-08-13T19:54:25.568329099+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53436: remote error: tls: bad certificate
2025-08-13T19:54:25.582991348+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53444: remote error: tls: bad certificate
2025-08-13T19:54:25.596693629+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53456: remote error: tls: bad certificate
2025-08-13T19:54:25.616365821+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53470: remote error: tls: bad certificate
2025-08-13T19:54:25.632041678+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53482: remote error: tls: bad certificate
2025-08-13T19:54:25.653220783+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53488: remote error: tls: bad certificate
2025-08-13T19:54:25.670881997+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53492: remote error: tls: bad certificate
2025-08-13T19:54:25.688280574+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53508: remote error: tls: bad certificate
2025-08-13T19:54:25.706843854+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53520: remote error: tls: bad certificate
2025-08-13T19:54:25.722669636+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53536: remote error: tls: bad certificate
2025-08-13T19:54:25.740935798+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53550: remote error: tls: bad certificate
2025-08-13T19:54:25.757637885+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53556: remote error: tls: bad certificate
2025-08-13T19:54:25.773219450+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53568: remote error: tls: bad certificate
2025-08-13T19:54:25.788331711+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53584: remote error: tls: bad certificate
2025-08-13T19:54:25.810117233+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53600: remote error: tls: bad certificate
2025-08-13T19:54:25.827526600+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53604: remote error: tls: bad certificate
2025-08-13T19:54:25.843408304+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53616: remote error: tls: bad certificate
2025-08-13T19:54:25.858583567+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53622: remote error: tls: bad certificate
2025-08-13T19:54:25.874072539+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53624: remote error: tls: bad certificate
2025-08-13T19:54:25.890998383+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53640: remote error: tls: bad certificate
2025-08-13T19:54:25.905653711+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53646: remote error: tls: bad certificate
2025-08-13T19:54:25.926635510+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53648: remote error: tls: bad certificate
2025-08-13T19:54:25.942071211+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53656: remote error: tls: bad certificate
2025-08-13T19:54:25.957762569+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53666: remote error: tls: bad certificate
2025-08-13T19:54:25.975213977+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53668: remote error: tls: bad certificate
2025-08-13T19:54:25.992678176+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53678: remote error: tls: bad certificate
2025-08-13T19:54:26.010514455+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53692: remote error: tls: bad certificate
2025-08-13T19:54:26.025077611+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53706: remote error: tls: bad certificate
2025-08-13T19:54:26.039340648+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53708: remote error: tls: bad certificate
2025-08-13T19:54:26.066560865+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53724: remote error: tls: bad certificate
2025-08-13T19:54:26.089577813+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53728: remote error: tls: bad certificate
2025-08-13T19:54:26.107120494+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53744: remote error: tls: bad certificate
2025-08-13T19:54:26.126047744+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53754: remote error: tls: bad certificate
2025-08-13T19:54:26.142558895+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53756: remote error: tls: bad certificate
2025-08-13T19:54:26.158527671+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53766: remote error: tls: bad certificate
2025-08-13T19:54:26.176040571+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53770: remote error: tls: bad certificate
2025-08-13T19:54:26.192314476+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53772: remote error: tls: bad certificate
2025-08-13T19:54:26.210127685+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53776: remote error: tls: bad certificate
2025-08-13T19:54:26.228091938+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53786: remote error: tls: bad certificate
2025-08-13T19:54:26.244063774+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53792: remote error: tls: bad certificate
2025-08-13T19:54:26.259651109+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53800: remote error: tls: bad certificate
2025-08-13T19:54:26.278102256+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53804: remote error: tls: bad certificate
2025-08-13T19:54:26.297222942+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53816: remote error: tls: bad certificate
2025-08-13T19:54:26.316242055+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53832: remote error: tls: bad certificate
2025-08-13T19:54:26.332867529+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53848: remote error: tls: bad certificate
2025-08-13T19:54:26.350388950+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53864: remote error: tls: bad certificate
2025-08-13T19:54:26.365764699+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53870: remote error: tls: bad certificate
2025-08-13T19:54:26.382657631+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53872: remote error: tls: bad certificate
2025-08-13T19:54:26.404027901+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53884: remote error: tls: bad certificate
2025-08-13T19:54:26.415088637+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53886: remote error: tls: bad certificate
2025-08-13T19:54:31.046433198+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41316: remote error: tls: bad certificate
2025-08-13T19:54:31.068376685+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41320: remote error: tls: bad certificate
2025-08-13T19:54:31.089971020+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41322: remote error: tls: bad certificate
2025-08-13T19:54:31.111292849+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41338: remote error: tls: bad certificate
2025-08-13T19:54:31.131290610+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41344: remote error: tls: bad certificate
2025-08-13T19:54:35.227171052+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41346: remote error: tls: bad certificate
2025-08-13T19:54:35.252922997+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41360: remote error: tls: bad certificate
2025-08-13T19:54:35.276163581+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41362: remote error: tls: bad certificate
2025-08-13T19:54:35.304159070+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41376: remote error: tls: bad certificate
2025-08-13T19:54:35.340938550+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41378: remote error: tls: bad certificate
2025-08-13T19:54:35.369984870+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41384: remote error: tls: bad certificate
2025-08-13T19:54:35.397360532+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41386: remote error: tls: bad certificate
2025-08-13T19:54:35.413134552+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate
2025-08-13T19:54:35.427249325+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41410: remote error: tls: bad certificate
2025-08-13T19:54:35.443855749+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41412: remote error: tls: bad certificate
2025-08-13T19:54:35.459520286+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41420: remote error: tls: bad certificate
2025-08-13T19:54:35.477164140+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41422: remote error: tls: bad certificate
2025-08-13T19:54:35.496001008+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41436: remote error: tls: bad certificate
2025-08-13T19:54:35.513180579+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41450: remote error: tls: bad certificate
2025-08-13T19:54:35.530507003+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41462: remote error: tls: bad certificate
2025-08-13T19:54:35.545536492+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41466: remote error: tls: bad certificate
2025-08-13T19:54:35.563984219+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41470: remote error: tls: bad certificate
2025-08-13T19:54:35.584946118+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41482: remote error: tls: bad certificate
2025-08-13T19:54:35.602649913+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41490: remote error: tls: bad certificate
2025-08-13T19:54:35.616981962+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41500: remote error: tls: bad certificate
2025-08-13T19:54:35.632596948+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41508: remote error: tls: bad certificate
2025-08-13T19:54:35.649045698+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41518: remote error: tls: bad certificate
2025-08-13T19:54:35.666645051+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41520: remote error: tls: bad certificate
2025-08-13T19:54:35.684569622+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41524: remote error: tls: bad certificate
2025-08-13T19:54:35.704045878+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41534: remote error: tls: bad certificate
2025-08-13T19:54:35.726147510+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41542: remote error: tls: bad certificate
2025-08-13T19:54:35.742761164+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41548: remote error: tls: bad certificate
2025-08-13T19:54:35.761563161+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41560: remote error: tls: bad certificate
2025-08-13T19:54:35.777918448+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41574: remote error: tls: bad certificate
2025-08-13T19:54:35.792654319+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41580: remote error: tls: bad certificate
2025-08-13T19:54:35.815639335+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41588: remote error: tls: bad certificate
2025-08-13T19:54:35.829742187+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41594: remote error: tls: bad certificate
2025-08-13T19:54:35.845423815+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41610: remote error: tls: bad certificate
2025-08-13T19:54:35.861880605+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41626: remote error: tls: bad certificate
2025-08-13T19:54:35.877927283+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41636: remote error: tls: bad certificate
2025-08-13T19:54:35.895002341+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41642: remote error: tls: bad certificate
2025-08-13T19:54:35.908226958+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41646: remote error: tls: bad certificate
2025-08-13T19:54:35.930885865+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41656: remote error: tls: bad certificate
2025-08-13T19:54:35.945323178+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41672: remote error: tls: bad certificate
2025-08-13T19:54:35.963176427+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41682: remote error: tls: bad certificate
2025-08-13T19:54:35.982345695+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41688: remote error: tls: bad certificate
2025-08-13T19:54:35.997435986+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41698: remote error: tls: bad certificate
2025-08-13T19:54:36.015206053+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41702: remote error: tls: bad certificate
2025-08-13T19:54:36.030573252+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41704: remote error: tls: bad certificate
2025-08-13T19:54:36.047594838+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41712: remote error: tls: bad certificate
2025-08-13T19:54:36.067366642+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41728: remote error: tls: bad certificate
2025-08-13T19:54:36.082614408+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41730: remote error: tls: bad certificate
2025-08-13T19:54:36.101377354+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41744: remote error: tls: bad certificate
2025-08-13T19:54:36.117705550+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41746: remote error: tls: bad certificate
2025-08-13T19:54:36.133877822+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41752: remote error: tls: bad certificate
2025-08-13T19:54:36.148239472+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41756: remote error: tls: bad certificate
2025-08-13T19:54:36.164068494+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41768: remote error: tls: bad certificate
2025-08-13T19:54:36.181239164+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41778: remote error: tls: bad certificate
2025-08-13T19:54:36.205109676+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41784: remote error: tls: bad certificate
2025-08-13T19:54:36.225618141+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41786: remote error: tls: bad certificate
2025-08-13T19:54:36.244268714+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41796: remote error: tls: bad certificate
2025-08-13T19:54:36.259244651+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41806: remote error: tls: bad certificate
2025-08-13T19:54:36.276200615+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41812: remote error: tls: bad certificate
2025-08-13T19:54:36.301162898+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41824: remote error: tls: bad certificate
2025-08-13T19:54:36.325165024+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41838: remote error: tls: bad certificate
2025-08-13T19:54:36.349232481+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41852: remote error: tls: bad certificate
2025-08-13T19:54:36.367540803+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41868: remote error: tls: bad certificate
2025-08-13T19:54:36.391947470+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41880: remote error: tls: bad certificate
2025-08-13T19:54:36.427244898+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41884: remote error: tls: bad certificate
2025-08-13T19:54:36.451516341+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41900: remote error: tls: bad certificate
2025-08-13T19:54:36.468951009+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41904: remote error: tls: bad certificate
2025-08-13T19:54:36.491550874+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41912: remote error: tls: bad certificate
2025-08-13T19:54:41.519071725+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50242: remote error: tls: bad certificate
2025-08-13T19:54:41.538992693+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50258: remote error: tls: bad certificate
2025-08-13T19:54:41.563178683+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50266: remote error: tls: bad certificate
2025-08-13T19:54:41.626025906+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50276: remote error: tls: bad certificate
2025-08-13T19:54:41.651701259+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50286: remote error: tls: bad certificate
2025-08-13T19:54:45.236328752+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50292: remote error: tls: bad certificate
2025-08-13T19:54:45.257041153+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50296: remote error: tls: bad certificate
2025-08-13T19:54:45.273909114+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50298: remote error: tls: bad certificate
2025-08-13T19:54:45.288248913+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50302: remote error: tls: bad certificate
2025-08-13T19:54:45.305045482+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50306: remote error: tls: bad certificate
2025-08-13T19:54:45.320968267+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50312: remote error: tls: bad certificate
2025-08-13T19:54:45.339462534+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50326: remote error: tls: bad certificate
2025-08-13T19:54:45.367084432+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50334: remote error: tls: bad certificate
2025-08-13T19:54:45.384769607+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50344: remote error: tls: bad certificate
2025-08-13T19:54:45.399928829+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50346: remote error: tls: bad certificate
2025-08-13T19:54:45.419268830+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50354: remote error: tls: bad certificate
2025-08-13T19:54:45.436260775+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50370: remote error: tls: bad certificate
2025-08-13T19:54:45.452531150+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50382: remote error: tls: bad certificate
2025-08-13T19:54:45.472156780+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50384: remote error: tls: bad certificate
2025-08-13T19:54:45.488235818+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50386: remote error: tls: bad certificate
2025-08-13T19:54:45.505031358+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50396: remote error: tls: bad certificate
2025-08-13T19:54:45.519045817+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50412: remote error: tls: bad certificate
2025-08-13T19:54:45.536148925+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50418: remote error: tls: bad certificate
2025-08-13T19:54:45.552672057+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50428: remote error: tls: bad certificate
2025-08-13T19:54:45.567490320+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50440: remote error: tls: bad certificate
2025-08-13T19:54:45.586848402+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50446: remote error: tls: bad certificate
2025-08-13T19:54:45.609313923+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50448: remote error: tls: bad certificate
2025-08-13T19:54:45.629726306+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50460: remote error: tls: bad certificate
2025-08-13T19:54:45.646340430+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50464: remote error: tls: bad certificate
2025-08-13T19:54:45.661195803+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50478: remote error: tls: bad certificate
2025-08-13T19:54:45.675670796+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50480: remote error: tls: bad certificate
2025-08-13T19:54:45.691987512+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50496: remote error: tls: bad certificate
2025-08-13T19:54:45.707129494+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50502: remote error: tls: bad certificate
2025-08-13T19:54:45.721192075+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50508: remote error: tls: bad certificate
2025-08-13T19:54:45.736476481+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50514: remote error: tls: bad certificate
2025-08-13T19:54:45.751938933+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50516: remote error: tls: bad certificate
2025-08-13T19:54:45.767759134+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50528: remote error: tls: bad certificate
2025-08-13T19:54:45.783994357+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50542: remote error: tls: bad certificate
2025-08-13T19:54:45.797493492+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50548: remote error: tls: bad certificate
2025-08-13T19:54:45.813383646+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50562: remote error: tls: bad certificate
2025-08-13T19:54:45.827280712+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50564: remote error: tls: bad certificate
2025-08-13T19:54:45.844484013+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50572: remote error: tls: bad certificate
2025-08-13T19:54:45.858049520+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50580: remote error: tls: bad certificate
2025-08-13T19:54:45.874998724+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50584: remote error: tls: bad certificate
2025-08-13T19:54:45.896306262+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50588: remote error: tls: bad certificate
2025-08-13T19:54:45.913262116+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50590: remote error: tls: bad certificate
2025-08-13T19:54:45.929432387+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50596: remote error: tls: bad certificate
2025-08-13T19:54:45.948160311+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50606: remote error: tls: bad certificate
2025-08-13T19:54:45.964073845+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50612: remote error: tls: bad certificate
2025-08-13T19:54:46.006916808+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50616: remote error: tls: bad certificate
2025-08-13T19:54:46.020915227+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50628: remote error: tls: bad certificate
2025-08-13T19:54:46.055171725+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50642: remote error: tls: bad certificate
2025-08-13T19:54:46.069256957+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50654: remote error: tls: bad certificate
2025-08-13T19:54:46.092022466+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50670: remote error: tls: bad certificate
2025-08-13T19:54:46.116726671+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50676: remote error: tls: bad certificate
2025-08-13T19:54:46.136084183+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50678: remote error: tls: bad certificate
2025-08-13T19:54:46.151573485+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50694: remote error: tls: bad certificate
2025-08-13T19:54:46.166209213+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50698: remote error: tls: bad certificate
2025-08-13T19:54:46.184561257+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50714: remote error: tls: bad certificate
2025-08-13T19:54:46.202747295+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50724: remote error: tls: bad certificate
2025-08-13T19:54:46.227492002+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50738: remote error: tls: bad certificate
2025-08-13T19:54:46.255444979+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50742: remote error: tls: bad certificate
2025-08-13T19:54:46.274494723+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50744: remote error: tls: bad certificate
2025-08-13T19:54:46.296945013+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50758: remote error: tls: bad certificate
2025-08-13T19:54:46.318310713+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50770: remote error: tls: bad certificate
2025-08-13T19:54:46.348319239+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50786: remote error: tls: bad certificate
2025-08-13T19:54:46.367607499+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50790: remote error: tls: bad certificate 2025-08-13T19:54:46.389904406+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50798: remote error: tls: bad certificate 2025-08-13T19:54:46.409265978+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50802: remote error: tls: bad certificate 2025-08-13T19:54:46.433139139+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50814: remote error: tls: bad certificate 2025-08-13T19:54:46.452182303+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50820: remote error: tls: bad certificate 2025-08-13T19:54:46.467259783+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50826: remote error: tls: bad certificate 2025-08-13T19:54:47.001052853+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50840: remote error: tls: bad certificate 2025-08-13T19:54:47.019595873+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50846: remote error: tls: bad certificate 2025-08-13T19:54:47.039075658+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50848: remote error: tls: bad certificate 2025-08-13T19:54:47.056672000+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50854: remote error: tls: bad certificate 2025-08-13T19:54:47.073365597+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50860: remote error: tls: bad certificate 2025-08-13T19:54:47.091295068+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50862: remote error: tls: bad certificate 2025-08-13T19:54:47.111718101+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50868: remote error: tls: bad certificate 2025-08-13T19:54:47.137031453+00:00 stderr F 2025/08/13 19:54:47 http: TLS 
handshake error from 127.0.0.1:50874: remote error: tls: bad certificate 2025-08-13T19:54:47.157653012+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50890: remote error: tls: bad certificate 2025-08-13T19:54:47.176022046+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50892: remote error: tls: bad certificate 2025-08-13T19:54:47.196487130+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50908: remote error: tls: bad certificate 2025-08-13T19:54:47.227302899+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50920: remote error: tls: bad certificate 2025-08-13T19:54:47.404909387+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50924: remote error: tls: bad certificate 2025-08-13T19:54:47.423644541+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50930: remote error: tls: bad certificate 2025-08-13T19:54:47.441867851+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50942: remote error: tls: bad certificate 2025-08-13T19:54:47.457562599+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50950: remote error: tls: bad certificate 2025-08-13T19:54:47.474138762+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50964: remote error: tls: bad certificate 2025-08-13T19:54:47.495737168+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50978: remote error: tls: bad certificate 2025-08-13T19:54:47.514976567+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50992: remote error: tls: bad certificate 2025-08-13T19:54:47.526215338+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51002: remote error: tls: bad certificate 2025-08-13T19:54:47.537221872+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51008: remote error: tls: bad certificate 
2025-08-13T19:54:47.554406162+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51012: remote error: tls: bad certificate 2025-08-13T19:54:47.575909946+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51016: remote error: tls: bad certificate 2025-08-13T19:54:47.599926841+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51024: remote error: tls: bad certificate 2025-08-13T19:54:47.616442193+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51030: remote error: tls: bad certificate 2025-08-13T19:54:47.632249434+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51032: remote error: tls: bad certificate 2025-08-13T19:54:47.648476656+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51046: remote error: tls: bad certificate 2025-08-13T19:54:47.663969029+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51062: remote error: tls: bad certificate 2025-08-13T19:54:47.679760609+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51064: remote error: tls: bad certificate 2025-08-13T19:54:47.697938208+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51078: remote error: tls: bad certificate 2025-08-13T19:54:47.711034242+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51092: remote error: tls: bad certificate 2025-08-13T19:54:47.729134198+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51100: remote error: tls: bad certificate 2025-08-13T19:54:47.743552369+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51104: remote error: tls: bad certificate 2025-08-13T19:54:47.762242143+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51116: remote error: tls: bad certificate 2025-08-13T19:54:47.782497591+00:00 stderr F 2025/08/13 19:54:47 http: TLS 
handshake error from 127.0.0.1:51122: remote error: tls: bad certificate 2025-08-13T19:54:47.799375712+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51132: remote error: tls: bad certificate 2025-08-13T19:54:47.817075917+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51136: remote error: tls: bad certificate 2025-08-13T19:54:47.838958022+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51146: remote error: tls: bad certificate 2025-08-13T19:54:47.855899665+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51152: remote error: tls: bad certificate 2025-08-13T19:54:47.876687658+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51166: remote error: tls: bad certificate 2025-08-13T19:54:47.898643715+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51180: remote error: tls: bad certificate 2025-08-13T19:54:47.921446705+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51190: remote error: tls: bad certificate 2025-08-13T19:54:47.935888657+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51198: remote error: tls: bad certificate 2025-08-13T19:54:47.960274003+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51214: remote error: tls: bad certificate 2025-08-13T19:54:47.978524504+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51230: remote error: tls: bad certificate 2025-08-13T19:54:48.005522014+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51244: remote error: tls: bad certificate 2025-08-13T19:54:48.046443822+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51260: remote error: tls: bad certificate 2025-08-13T19:54:48.064538418+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51262: remote error: tls: bad certificate 
2025-08-13T19:54:48.080613007+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51274: remote error: tls: bad certificate 2025-08-13T19:54:48.099628939+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51282: remote error: tls: bad certificate 2025-08-13T19:54:48.121064661+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51294: remote error: tls: bad certificate 2025-08-13T19:54:48.145690804+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51304: remote error: tls: bad certificate 2025-08-13T19:54:48.167332531+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51318: remote error: tls: bad certificate 2025-08-13T19:54:48.185262613+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51334: remote error: tls: bad certificate 2025-08-13T19:54:48.202229307+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51336: remote error: tls: bad certificate 2025-08-13T19:54:48.220639022+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51342: remote error: tls: bad certificate 2025-08-13T19:54:48.238681507+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51344: remote error: tls: bad certificate 2025-08-13T19:54:48.266258334+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51346: remote error: tls: bad certificate 2025-08-13T19:54:48.282623311+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51354: remote error: tls: bad certificate 2025-08-13T19:54:48.297842775+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51368: remote error: tls: bad certificate 2025-08-13T19:54:48.312705359+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51376: remote error: tls: bad certificate 2025-08-13T19:54:48.366947119+00:00 stderr F 2025/08/13 19:54:48 http: TLS 
handshake error from 127.0.0.1:51378: remote error: tls: bad certificate 2025-08-13T19:54:48.383752057+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51390: remote error: tls: bad certificate 2025-08-13T19:54:48.420479774+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51404: remote error: tls: bad certificate 2025-08-13T19:54:48.460301511+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51416: remote error: tls: bad certificate 2025-08-13T19:54:48.503072411+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51418: remote error: tls: bad certificate 2025-08-13T19:54:48.539468199+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51434: remote error: tls: bad certificate 2025-08-13T19:54:48.580702536+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51450: remote error: tls: bad certificate 2025-08-13T19:54:48.622874469+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51462: remote error: tls: bad certificate 2025-08-13T19:54:48.665528796+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51470: remote error: tls: bad certificate 2025-08-13T19:54:48.699564097+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51476: remote error: tls: bad certificate 2025-08-13T19:54:48.743502741+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37316: remote error: tls: bad certificate 2025-08-13T19:54:48.781037412+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37318: remote error: tls: bad certificate 2025-08-13T19:54:48.820597231+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37334: remote error: tls: bad certificate 2025-08-13T19:54:48.861330103+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37348: remote error: tls: bad certificate 
2025-08-13T19:54:48.900106410+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37354: remote error: tls: bad certificate 2025-08-13T19:54:48.941313065+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37366: remote error: tls: bad certificate 2025-08-13T19:54:48.982903331+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37368: remote error: tls: bad certificate 2025-08-13T19:54:49.025480786+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37378: remote error: tls: bad certificate 2025-08-13T19:54:49.061073002+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37384: remote error: tls: bad certificate 2025-08-13T19:54:49.101488645+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37390: remote error: tls: bad certificate 2025-08-13T19:54:49.139364575+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37400: remote error: tls: bad certificate 2025-08-13T19:54:49.181347173+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37414: remote error: tls: bad certificate 2025-08-13T19:54:49.228421736+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37418: remote error: tls: bad certificate 2025-08-13T19:54:49.264503406+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37420: remote error: tls: bad certificate 2025-08-13T19:54:49.303318783+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37426: remote error: tls: bad certificate 2025-08-13T19:54:49.341191304+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37434: remote error: tls: bad certificate 2025-08-13T19:54:49.385224931+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37442: remote error: tls: bad certificate 2025-08-13T19:54:49.421119105+00:00 stderr F 2025/08/13 19:54:49 http: TLS 
handshake error from 127.0.0.1:37448: remote error: tls: bad certificate 2025-08-13T19:54:49.463034371+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37456: remote error: tls: bad certificate 2025-08-13T19:54:49.507885881+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37462: remote error: tls: bad certificate 2025-08-13T19:54:49.541032956+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37464: remote error: tls: bad certificate 2025-08-13T19:54:49.594906904+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37466: remote error: tls: bad certificate 2025-08-13T19:54:49.622143321+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37472: remote error: tls: bad certificate 2025-08-13T19:54:49.666360792+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37482: remote error: tls: bad certificate 2025-08-13T19:54:49.699256001+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37488: remote error: tls: bad certificate 2025-08-13T19:54:49.742683830+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37492: remote error: tls: bad certificate 2025-08-13T19:54:49.787688754+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37506: remote error: tls: bad certificate 2025-08-13T19:54:49.827469789+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37518: remote error: tls: bad certificate 2025-08-13T19:54:49.863624121+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37534: remote error: tls: bad certificate 2025-08-13T19:54:49.901086060+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37544: remote error: tls: bad certificate 2025-08-13T19:54:49.940695440+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37548: remote error: tls: bad certificate 
2025-08-13T19:54:49.980859066+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37550: remote error: tls: bad certificate 2025-08-13T19:54:50.050907135+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37554: remote error: tls: bad certificate 2025-08-13T19:54:50.137901097+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37564: remote error: tls: bad certificate 2025-08-13T19:54:50.151417743+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37572: remote error: tls: bad certificate 2025-08-13T19:54:50.165392081+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37584: remote error: tls: bad certificate 2025-08-13T19:54:50.184057924+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37592: remote error: tls: bad certificate 2025-08-13T19:54:50.232257059+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37602: remote error: tls: bad certificate 2025-08-13T19:54:50.263172202+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37612: remote error: tls: bad certificate 2025-08-13T19:54:50.299941171+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37620: remote error: tls: bad certificate 2025-08-13T19:54:50.339362235+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37628: remote error: tls: bad certificate 2025-08-13T19:54:50.383579637+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37642: remote error: tls: bad certificate 2025-08-13T19:54:50.422077116+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37644: remote error: tls: bad certificate 2025-08-13T19:54:50.459365299+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37646: remote error: tls: bad certificate 2025-08-13T19:54:50.503114028+00:00 stderr F 2025/08/13 19:54:50 http: TLS 
handshake error from 127.0.0.1:37654: remote error: tls: bad certificate 2025-08-13T19:54:50.542227284+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37662: remote error: tls: bad certificate 2025-08-13T19:54:50.580443804+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37666: remote error: tls: bad certificate 2025-08-13T19:54:50.618663025+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37670: remote error: tls: bad certificate 2025-08-13T19:54:50.662091074+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37674: remote error: tls: bad certificate 2025-08-13T19:54:50.703324070+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37690: remote error: tls: bad certificate 2025-08-13T19:54:50.742994632+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37706: remote error: tls: bad certificate 2025-08-13T19:54:50.781174782+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37716: remote error: tls: bad certificate 2025-08-13T19:54:50.822514951+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37724: remote error: tls: bad certificate 2025-08-13T19:54:50.864965152+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37734: remote error: tls: bad certificate 2025-08-13T19:54:50.903082800+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37744: remote error: tls: bad certificate 2025-08-13T19:54:50.939730446+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37746: remote error: tls: bad certificate 2025-08-13T19:54:50.980327724+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37762: remote error: tls: bad certificate 2025-08-13T19:54:51.028358395+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37776: remote error: tls: bad certificate 
2025-08-13T19:54:51.063714993+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37786: remote error: tls: bad certificate 2025-08-13T19:54:51.099078432+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37798: remote error: tls: bad certificate 2025-08-13T19:54:51.138421105+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37810: remote error: tls: bad certificate 2025-08-13T19:54:51.184977953+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37822: remote error: tls: bad certificate 2025-08-13T19:54:51.224947554+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37824: remote error: tls: bad certificate 2025-08-13T19:54:51.259583092+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37838: remote error: tls: bad certificate 2025-08-13T19:54:51.301650752+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37840: remote error: tls: bad certificate 2025-08-13T19:54:51.344008331+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37850: remote error: tls: bad certificate 2025-08-13T19:54:51.381655405+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37854: remote error: tls: bad certificate 2025-08-13T19:54:51.422474030+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37870: remote error: tls: bad certificate 2025-08-13T19:54:51.461519404+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37886: remote error: tls: bad certificate 2025-08-13T19:54:51.502654738+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37900: remote error: tls: bad certificate 2025-08-13T19:54:51.542258418+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37916: remote error: tls: bad certificate 2025-08-13T19:54:51.594379955+00:00 stderr F 2025/08/13 19:54:51 http: TLS 
handshake error from 127.0.0.1:37932: remote error: tls: bad certificate 2025-08-13T19:54:51.622457366+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37948: remote error: tls: bad certificate 2025-08-13T19:54:51.660415729+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37960: remote error: tls: bad certificate 2025-08-13T19:54:51.703872139+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37974: remote error: tls: bad certificate 2025-08-13T19:54:51.741578755+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37988: remote error: tls: bad certificate 2025-08-13T19:54:51.784254163+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37992: remote error: tls: bad certificate 2025-08-13T19:54:51.819879569+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38000: remote error: tls: bad certificate 2025-08-13T19:54:51.859965063+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38010: remote error: tls: bad certificate 2025-08-13T19:54:51.878121091+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38026: remote error: tls: bad certificate 2025-08-13T19:54:51.902353582+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38034: remote error: tls: bad certificate 2025-08-13T19:54:51.906515861+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38036: remote error: tls: bad certificate 2025-08-13T19:54:51.927728697+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38052: remote error: tls: bad certificate 2025-08-13T19:54:51.942359554+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38058: remote error: tls: bad certificate 2025-08-13T19:54:51.948343675+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38066: remote error: tls: bad certificate 
2025-08-13T19:54:51.967201263+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38076: remote error: tls: bad certificate 2025-08-13T19:54:51.981193772+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38082: remote error: tls: bad certificate 2025-08-13T19:54:52.020268547+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38088: remote error: tls: bad certificate 2025-08-13T19:54:52.061667128+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38096: remote error: tls: bad certificate 2025-08-13T19:54:52.107280290+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38106: remote error: tls: bad certificate 2025-08-13T19:54:52.141223088+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38122: remote error: tls: bad certificate 2025-08-13T19:54:52.184956296+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38136: remote error: tls: bad certificate 2025-08-13T19:54:52.223043423+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38138: remote error: tls: bad certificate 2025-08-13T19:54:52.262898920+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38150: remote error: tls: bad certificate 2025-08-13T19:54:52.302521061+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38156: remote error: tls: bad certificate 2025-08-13T19:54:52.417076959+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38158: remote error: tls: bad certificate 2025-08-13T19:54:52.437246314+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38162: remote error: tls: bad certificate 2025-08-13T19:54:52.456128063+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38168: remote error: tls: bad certificate 2025-08-13T19:54:52.490100172+00:00 stderr F 2025/08/13 19:54:52 http: TLS 
handshake error from 127.0.0.1:38172: remote error: tls: bad certificate 2025-08-13T19:54:52.513515161+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38182: remote error: tls: bad certificate 2025-08-13T19:54:52.577266329+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38188: remote error: tls: bad certificate 2025-08-13T19:54:52.614880222+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38190: remote error: tls: bad certificate 2025-08-13T19:54:52.636364625+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38196: remote error: tls: bad certificate 2025-08-13T19:54:52.661284256+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38200: remote error: tls: bad certificate 2025-08-13T19:54:52.702261005+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38208: remote error: tls: bad certificate 2025-08-13T19:54:52.740693602+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38212: remote error: tls: bad certificate 2025-08-13T19:54:52.782375921+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38224: remote error: tls: bad certificate 2025-08-13T19:54:52.820380875+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38228: remote error: tls: bad certificate 2025-08-13T19:54:52.859233114+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38236: remote error: tls: bad certificate 2025-08-13T19:54:52.909319983+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38238: remote error: tls: bad certificate 2025-08-13T19:54:52.939708260+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38244: remote error: tls: bad certificate 2025-08-13T19:54:52.980378011+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38254: remote error: tls: bad certificate 
2025-08-13T19:54:53.020623659+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38270: remote error: tls: bad certificate 2025-08-13T19:54:53.062672749+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38276: remote error: tls: bad certificate 2025-08-13T19:54:53.102471034+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38290: remote error: tls: bad certificate 2025-08-13T19:54:53.140476859+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38296: remote error: tls: bad certificate 2025-08-13T19:54:53.179913034+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38308: remote error: tls: bad certificate 2025-08-13T19:54:53.222325944+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38312: remote error: tls: bad certificate 2025-08-13T19:54:53.258608689+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38314: remote error: tls: bad certificate 2025-08-13T19:54:53.298877468+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38328: remote error: tls: bad certificate 2025-08-13T19:54:53.347174317+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38338: remote error: tls: bad certificate 2025-08-13T19:54:53.397025109+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38346: remote error: tls: bad certificate 2025-08-13T19:54:53.444224336+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38350: remote error: tls: bad certificate 2025-08-13T19:54:53.497524887+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38366: remote error: tls: bad certificate 2025-08-13T19:54:53.533090441+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38378: remote error: tls: bad certificate 2025-08-13T19:54:53.553177204+00:00 stderr F 2025/08/13 19:54:53 http: TLS 
handshake error from 127.0.0.1:38386: remote error: tls: bad certificate 2025-08-13T19:54:53.582106060+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38392: remote error: tls: bad certificate 2025-08-13T19:54:53.674882677+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38400: remote error: tls: bad certificate 2025-08-13T19:54:53.702545147+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38410: remote error: tls: bad certificate 2025-08-13T19:54:53.724945836+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38416: remote error: tls: bad certificate 2025-08-13T19:54:53.765981567+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38422: remote error: tls: bad certificate 2025-08-13T19:54:53.787168921+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38426: remote error: tls: bad certificate 2025-08-13T19:54:53.819082872+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38430: remote error: tls: bad certificate 2025-08-13T19:54:53.861335407+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38446: remote error: tls: bad certificate 2025-08-13T19:54:53.948517495+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38454: remote error: tls: bad certificate 2025-08-13T19:54:53.966378695+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38466: remote error: tls: bad certificate 2025-08-13T19:54:55.227851349+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38480: remote error: tls: bad certificate 2025-08-13T19:54:55.242359483+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38484: remote error: tls: bad certificate 2025-08-13T19:54:55.257741052+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38486: remote error: tls: bad certificate 
2025-08-13T19:54:55.273574374+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38502: remote error: tls: bad certificate 2025-08-13T19:54:55.290432665+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38506: remote error: tls: bad certificate 2025-08-13T19:54:55.308278864+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38516: remote error: tls: bad certificate 2025-08-13T19:54:55.331014593+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38522: remote error: tls: bad certificate 2025-08-13T19:54:55.345660111+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38524: remote error: tls: bad certificate 2025-08-13T19:54:55.362567503+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38528: remote error: tls: bad certificate 2025-08-13T19:54:55.379289260+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38532: remote error: tls: bad certificate 2025-08-13T19:54:55.396632185+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38536: remote error: tls: bad certificate 2025-08-13T19:54:55.414524296+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38540: remote error: tls: bad certificate 2025-08-13T19:54:55.443186163+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38556: remote error: tls: bad certificate 2025-08-13T19:54:55.459181660+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38572: remote error: tls: bad certificate 2025-08-13T19:54:55.475361311+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38586: remote error: tls: bad certificate 2025-08-13T19:54:55.498190953+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38602: remote error: tls: bad certificate 2025-08-13T19:54:55.523388592+00:00 stderr F 2025/08/13 19:54:55 http: TLS 
handshake error from 127.0.0.1:38610: remote error: tls: bad certificate 2025-08-13T19:54:55.539278625+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38622: remote error: tls: bad certificate 2025-08-13T19:54:55.556160687+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38634: remote error: tls: bad certificate 2025-08-13T19:54:55.574049117+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38642: remote error: tls: bad certificate 2025-08-13T19:54:55.588104238+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38652: remote error: tls: bad certificate 2025-08-13T19:54:55.604112455+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38664: remote error: tls: bad certificate 2025-08-13T19:54:55.617740784+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38668: remote error: tls: bad certificate 2025-08-13T19:54:55.636686545+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38680: remote error: tls: bad certificate 2025-08-13T19:54:55.670200451+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38686: remote error: tls: bad certificate 2025-08-13T19:54:55.688522254+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38702: remote error: tls: bad certificate 2025-08-13T19:54:55.710979924+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38706: remote error: tls: bad certificate 2025-08-13T19:54:55.763993497+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38712: remote error: tls: bad certificate 2025-08-13T19:54:55.783688559+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38716: remote error: tls: bad certificate 2025-08-13T19:54:55.800447597+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38732: remote error: tls: bad certificate 
2025-08-13T19:54:55.818536713+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38738: remote error: tls: bad certificate 2025-08-13T19:54:55.837187016+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38752: remote error: tls: bad certificate 2025-08-13T19:54:55.853265814+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38764: remote error: tls: bad certificate 2025-08-13T19:54:55.870504146+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38772: remote error: tls: bad certificate 2025-08-13T19:54:55.889106467+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38788: remote error: tls: bad certificate 2025-08-13T19:54:55.909333034+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38796: remote error: tls: bad certificate 2025-08-13T19:54:55.926467863+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38806: remote error: tls: bad certificate 2025-08-13T19:54:55.945926278+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38810: remote error: tls: bad certificate 2025-08-13T19:54:55.973565937+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38816: remote error: tls: bad certificate 2025-08-13T19:54:55.999894898+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38826: remote error: tls: bad certificate 2025-08-13T19:54:56.016873703+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38838: remote error: tls: bad certificate 2025-08-13T19:54:56.035196045+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38848: remote error: tls: bad certificate 2025-08-13T19:54:56.051654435+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38856: remote error: tls: bad certificate 2025-08-13T19:54:56.067487627+00:00 stderr F 2025/08/13 19:54:56 http: TLS 
handshake error from 127.0.0.1:38862: remote error: tls: bad certificate 2025-08-13T19:54:56.088916998+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38878: remote error: tls: bad certificate 2025-08-13T19:54:56.111201194+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38884: remote error: tls: bad certificate 2025-08-13T19:54:56.127637483+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38894: remote error: tls: bad certificate 2025-08-13T19:54:56.142056813+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38906: remote error: tls: bad certificate 2025-08-13T19:54:56.166355907+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38912: remote error: tls: bad certificate 2025-08-13T19:54:56.190515486+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38922: remote error: tls: bad certificate 2025-08-13T19:54:56.207329826+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38924: remote error: tls: bad certificate 2025-08-13T19:54:56.224884987+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38938: remote error: tls: bad certificate 2025-08-13T19:54:56.248384287+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38948: remote error: tls: bad certificate 2025-08-13T19:54:56.266689340+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38958: remote error: tls: bad certificate 2025-08-13T19:54:56.282990775+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38964: remote error: tls: bad certificate 2025-08-13T19:54:56.298635591+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38968: remote error: tls: bad certificate 2025-08-13T19:54:56.315967746+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38972: remote error: tls: bad certificate 
2025-08-13T19:54:56.333487666+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38984: remote error: tls: bad certificate 2025-08-13T19:54:56.350389798+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38992: remote error: tls: bad certificate 2025-08-13T19:54:56.368765802+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38994: remote error: tls: bad certificate 2025-08-13T19:54:56.384156131+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38996: remote error: tls: bad certificate 2025-08-13T19:54:56.419667275+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39004: remote error: tls: bad certificate 2025-08-13T19:54:56.461618872+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39012: remote error: tls: bad certificate 2025-08-13T19:54:56.501147229+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39024: remote error: tls: bad certificate 2025-08-13T19:54:56.543489418+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39036: remote error: tls: bad certificate 2025-08-13T19:54:56.581312257+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39042: remote error: tls: bad certificate 2025-08-13T19:54:56.620140895+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39048: remote error: tls: bad certificate 2025-08-13T19:55:02.072369324+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49228: remote error: tls: bad certificate 2025-08-13T19:55:02.095266718+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49244: remote error: tls: bad certificate 2025-08-13T19:55:02.121954589+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49250: remote error: tls: bad certificate 2025-08-13T19:55:02.145891702+00:00 stderr F 2025/08/13 19:55:02 http: TLS 
handshake error from 127.0.0.1:49256: remote error: tls: bad certificate 2025-08-13T19:55:02.181856968+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49268: remote error: tls: bad certificate 2025-08-13T19:55:03.229036497+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49270: remote error: tls: bad certificate 2025-08-13T19:55:03.245168697+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49278: remote error: tls: bad certificate 2025-08-13T19:55:03.263196582+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49286: remote error: tls: bad certificate 2025-08-13T19:55:03.281314219+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:55:03.300451624+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49310: remote error: tls: bad certificate 2025-08-13T19:55:03.319897959+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49312: remote error: tls: bad certificate 2025-08-13T19:55:03.347261069+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49316: remote error: tls: bad certificate 2025-08-13T19:55:03.364128491+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49330: remote error: tls: bad certificate 2025-08-13T19:55:03.379416757+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49338: remote error: tls: bad certificate 2025-08-13T19:55:03.394573079+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49344: remote error: tls: bad certificate 2025-08-13T19:55:03.415680942+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 2025-08-13T19:55:03.431020299+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49366: remote error: tls: bad certificate 
2025-08-13T19:55:03.450436733+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49368: remote error: tls: bad certificate 2025-08-13T19:55:03.468695354+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49380: remote error: tls: bad certificate 2025-08-13T19:55:03.486017728+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49392: remote error: tls: bad certificate 2025-08-13T19:55:03.552110164+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49402: remote error: tls: bad certificate 2025-08-13T19:55:03.569330636+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49404: remote error: tls: bad certificate 2025-08-13T19:55:03.586444334+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49418: remote error: tls: bad certificate 2025-08-13T19:55:03.600984759+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate 2025-08-13T19:55:03.618684434+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49436: remote error: tls: bad certificate 2025-08-13T19:55:03.634714241+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49448: remote error: tls: bad certificate 2025-08-13T19:55:03.651106139+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49460: remote error: tls: bad certificate 2025-08-13T19:55:03.669949057+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49464: remote error: tls: bad certificate 2025-08-13T19:55:03.686266662+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49468: remote error: tls: bad certificate 2025-08-13T19:55:03.703419112+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49482: remote error: tls: bad certificate 2025-08-13T19:55:03.724175074+00:00 stderr F 2025/08/13 19:55:03 http: TLS 
handshake error from 127.0.0.1:49484: remote error: tls: bad certificate 2025-08-13T19:55:03.740313314+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49492: remote error: tls: bad certificate 2025-08-13T19:55:03.756300640+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49508: remote error: tls: bad certificate 2025-08-13T19:55:03.778144294+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49510: remote error: tls: bad certificate 2025-08-13T19:55:03.796837767+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49518: remote error: tls: bad certificate 2025-08-13T19:55:03.811734952+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49526: remote error: tls: bad certificate 2025-08-13T19:55:03.828271844+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49528: remote error: tls: bad certificate 2025-08-13T19:55:03.843378505+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49534: remote error: tls: bad certificate 2025-08-13T19:55:03.861767820+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49542: remote error: tls: bad certificate 2025-08-13T19:55:03.897177870+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49556: remote error: tls: bad certificate 2025-08-13T19:55:03.917193541+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49558: remote error: tls: bad certificate 2025-08-13T19:55:03.933047284+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49564: remote error: tls: bad certificate 2025-08-13T19:55:03.950366808+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49574: remote error: tls: bad certificate 2025-08-13T19:55:03.971187182+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49580: remote error: tls: bad certificate 
2025-08-13T19:55:03.987624301+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49590: remote error: tls: bad certificate 2025-08-13T19:55:04.003668679+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49600: remote error: tls: bad certificate 2025-08-13T19:55:04.019499970+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49610: remote error: tls: bad certificate 2025-08-13T19:55:04.035059394+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49626: remote error: tls: bad certificate 2025-08-13T19:55:04.050753602+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49638: remote error: tls: bad certificate 2025-08-13T19:55:04.067880621+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49640: remote error: tls: bad certificate 2025-08-13T19:55:04.097679831+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49650: remote error: tls: bad certificate 2025-08-13T19:55:04.114539012+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49654: remote error: tls: bad certificate 2025-08-13T19:55:04.129129608+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49670: remote error: tls: bad certificate 2025-08-13T19:55:04.142750187+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49682: remote error: tls: bad certificate 2025-08-13T19:55:04.160052891+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49686: remote error: tls: bad certificate 2025-08-13T19:55:04.176010726+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49690: remote error: tls: bad certificate 2025-08-13T19:55:04.190972713+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49694: remote error: tls: bad certificate 2025-08-13T19:55:04.206735953+00:00 stderr F 2025/08/13 19:55:04 http: TLS 
handshake error from 127.0.0.1:49702: remote error: tls: bad certificate 2025-08-13T19:55:04.226151637+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49706: remote error: tls: bad certificate 2025-08-13T19:55:04.240415524+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49722: remote error: tls: bad certificate 2025-08-13T19:55:04.254145705+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49724: remote error: tls: bad certificate 2025-08-13T19:55:04.270585565+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49730: remote error: tls: bad certificate 2025-08-13T19:55:04.292159220+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49738: remote error: tls: bad certificate 2025-08-13T19:55:04.309279679+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49740: remote error: tls: bad certificate 2025-08-13T19:55:04.326258183+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49752: remote error: tls: bad certificate 2025-08-13T19:55:04.344220726+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49766: remote error: tls: bad certificate 2025-08-13T19:55:04.362106736+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49772: remote error: tls: bad certificate 2025-08-13T19:55:04.378886265+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49776: remote error: tls: bad certificate 2025-08-13T19:55:04.395214081+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49780: remote error: tls: bad certificate 2025-08-13T19:55:04.412095992+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49786: remote error: tls: bad certificate 2025-08-13T19:55:04.428080138+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49798: remote error: tls: bad certificate 
2025-08-13T19:55:04.443888319+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49812: remote error: tls: bad certificate 2025-08-13T19:55:05.227723685+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49822: remote error: tls: bad certificate 2025-08-13T19:55:05.247336744+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49836: remote error: tls: bad certificate 2025-08-13T19:55:05.263668160+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49840: remote error: tls: bad certificate 2025-08-13T19:55:05.298717480+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49846: remote error: tls: bad certificate 2025-08-13T19:55:05.324918628+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49862: remote error: tls: bad certificate 2025-08-13T19:55:05.352543047+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49876: remote error: tls: bad certificate 2025-08-13T19:55:05.369881471+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49886: remote error: tls: bad certificate 2025-08-13T19:55:05.390610883+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49900: remote error: tls: bad certificate 2025-08-13T19:55:05.415905724+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49902: remote error: tls: bad certificate 2025-08-13T19:55:05.433602269+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49914: remote error: tls: bad certificate 2025-08-13T19:55:05.449245916+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49918: remote error: tls: bad certificate 2025-08-13T19:55:05.479003685+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49932: remote error: tls: bad certificate 2025-08-13T19:55:05.495592618+00:00 stderr F 2025/08/13 19:55:05 http: TLS 
handshake error from 127.0.0.1:49948: remote error: tls: bad certificate 2025-08-13T19:55:05.510164614+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49960: remote error: tls: bad certificate 2025-08-13T19:55:05.525165502+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49976: remote error: tls: bad certificate 2025-08-13T19:55:05.544950246+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49992: remote error: tls: bad certificate 2025-08-13T19:55:05.563106044+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50008: remote error: tls: bad certificate 2025-08-13T19:55:05.579723289+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50014: remote error: tls: bad certificate 2025-08-13T19:55:05.600977585+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50024: remote error: tls: bad certificate 2025-08-13T19:55:05.621222902+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50038: remote error: tls: bad certificate 2025-08-13T19:55:05.643762036+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50050: remote error: tls: bad certificate 2025-08-13T19:55:05.661943755+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50060: remote error: tls: bad certificate 2025-08-13T19:55:05.683520490+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50074: remote error: tls: bad certificate 2025-08-13T19:55:05.705586820+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50078: remote error: tls: bad certificate 2025-08-13T19:55:05.727709081+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50088: remote error: tls: bad certificate 2025-08-13T19:55:05.744994374+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50090: remote error: tls: bad certificate 
2025-08-13T19:55:05.763105891+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50096: remote error: tls: bad certificate 2025-08-13T19:55:05.780056955+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50104: remote error: tls: bad certificate 2025-08-13T19:55:05.798433359+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50114: remote error: tls: bad certificate 2025-08-13T19:55:05.820980692+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50128: remote error: tls: bad certificate 2025-08-13T19:55:05.844742590+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50142: remote error: tls: bad certificate 2025-08-13T19:55:05.859925023+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50150: remote error: tls: bad certificate 2025-08-13T19:55:05.877955148+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50160: remote error: tls: bad certificate 2025-08-13T19:55:05.896653292+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50166: remote error: tls: bad certificate 2025-08-13T19:55:05.915192051+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50170: remote error: tls: bad certificate 2025-08-13T19:55:05.932388981+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50174: remote error: tls: bad certificate 2025-08-13T19:55:05.950755436+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50176: remote error: tls: bad certificate 2025-08-13T19:55:05.967581306+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50180: remote error: tls: bad certificate 2025-08-13T19:55:05.986260369+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50196: remote error: tls: bad certificate 2025-08-13T19:55:06.001455402+00:00 stderr F 2025/08/13 19:55:06 http: TLS 
handshake error from 127.0.0.1:50202: remote error: tls: bad certificate 2025-08-13T19:55:06.025355184+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50212: remote error: tls: bad certificate 2025-08-13T19:55:06.042577586+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50222: remote error: tls: bad certificate 2025-08-13T19:55:06.056247496+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50236: remote error: tls: bad certificate 2025-08-13T19:55:06.072909791+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50248: remote error: tls: bad certificate 2025-08-13T19:55:06.092756498+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50264: remote error: tls: bad certificate 2025-08-13T19:55:06.109374162+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50270: remote error: tls: bad certificate 2025-08-13T19:55:06.123302560+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50284: remote error: tls: bad certificate 2025-08-13T19:55:06.139014788+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50286: remote error: tls: bad certificate 2025-08-13T19:55:06.155313473+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50302: remote error: tls: bad certificate 2025-08-13T19:55:06.169356584+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50314: remote error: tls: bad certificate 2025-08-13T19:55:06.182737056+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50316: remote error: tls: bad certificate 2025-08-13T19:55:06.197899328+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50326: remote error: tls: bad certificate 2025-08-13T19:55:06.215977304+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50330: remote error: tls: bad certificate 
2025-08-13T19:55:06.234231425+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50332: remote error: tls: bad certificate 2025-08-13T19:55:06.253554847+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50344: remote error: tls: bad certificate 2025-08-13T19:55:06.270428588+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50358: remote error: tls: bad certificate 2025-08-13T19:55:06.286947419+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50374: remote error: tls: bad certificate 2025-08-13T19:55:06.303024518+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50380: remote error: tls: bad certificate 2025-08-13T19:55:06.329432042+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50384: remote error: tls: bad certificate 2025-08-13T19:55:06.348380412+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50390: remote error: tls: bad certificate 2025-08-13T19:55:06.366329395+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50392: remote error: tls: bad certificate 2025-08-13T19:55:06.386994805+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50408: remote error: tls: bad certificate 2025-08-13T19:55:06.425499093+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50420: remote error: tls: bad certificate 2025-08-13T19:55:06.465500825+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50422: remote error: tls: bad certificate 2025-08-13T19:55:06.506290269+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50434: remote error: tls: bad certificate 2025-08-13T19:55:06.545177628+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50444: remote error: tls: bad certificate 2025-08-13T19:55:06.594091194+00:00 stderr F 2025/08/13 19:55:06 http: TLS 
handshake error from 127.0.0.1:50460: remote error: tls: bad certificate 2025-08-13T19:55:12.331142743+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46964: remote error: tls: bad certificate 2025-08-13T19:55:12.354480659+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46968: remote error: tls: bad certificate 2025-08-13T19:55:12.378760032+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46982: remote error: tls: bad certificate 2025-08-13T19:55:12.403076225+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46992: remote error: tls: bad certificate 2025-08-13T19:55:12.426234696+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46996: remote error: tls: bad certificate 2025-08-13T19:55:15.225173818+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47006: remote error: tls: bad certificate 2025-08-13T19:55:15.285497839+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47020: remote error: tls: bad certificate 2025-08-13T19:55:15.311115810+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47036: remote error: tls: bad certificate 2025-08-13T19:55:15.337115672+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47048: remote error: tls: bad certificate 2025-08-13T19:55:15.358567974+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47050: remote error: tls: bad certificate 2025-08-13T19:55:15.379595424+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47052: remote error: tls: bad certificate 2025-08-13T19:55:15.394880050+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47066: remote error: tls: bad certificate 2025-08-13T19:55:15.408566091+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47078: remote error: tls: bad certificate 
2025-08-13T19:55:15.423299021+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47092: remote error: tls: bad certificate
[... repeated identical "http: TLS handshake error from 127.0.0.1:<port>: remote error: tls: bad certificate" entries from 2025-08-13T19:55:15 through 2025-08-13T19:55:36 elided ...]
2025-08-13T19:55:36.258044717+00:00 stderr F 2025/08/13 19:55:36 http: TLS 
handshake error from 127.0.0.1:41364: remote error: tls: bad certificate 2025-08-13T19:55:36.286055726+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41376: remote error: tls: bad certificate 2025-08-13T19:55:36.300371135+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41382: remote error: tls: bad certificate 2025-08-13T19:55:36.313492559+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate 2025-08-13T19:55:36.328174548+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41404: remote error: tls: bad certificate 2025-08-13T19:55:36.345859293+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41410: remote error: tls: bad certificate 2025-08-13T19:55:36.365382020+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41412: remote error: tls: bad certificate 2025-08-13T19:55:36.380294025+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41426: remote error: tls: bad certificate 2025-08-13T19:55:36.396448136+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41440: remote error: tls: bad certificate 2025-08-13T19:55:36.411076403+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41446: remote error: tls: bad certificate 2025-08-13T19:55:36.425283589+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41454: remote error: tls: bad certificate 2025-08-13T19:55:36.439167365+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41466: remote error: tls: bad certificate 2025-08-13T19:55:36.456911581+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41478: remote error: tls: bad certificate 2025-08-13T19:55:36.477905750+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41486: remote error: tls: bad certificate 
2025-08-13T19:55:43.542359063+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39088: remote error: tls: bad certificate 2025-08-13T19:55:43.562344944+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39090: remote error: tls: bad certificate 2025-08-13T19:55:43.579504574+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39096: remote error: tls: bad certificate 2025-08-13T19:55:43.600220455+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39102: remote error: tls: bad certificate 2025-08-13T19:55:43.623744817+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39106: remote error: tls: bad certificate 2025-08-13T19:55:45.230706044+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39110: remote error: tls: bad certificate 2025-08-13T19:55:45.249200712+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39120: remote error: tls: bad certificate 2025-08-13T19:55:45.271050146+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39130: remote error: tls: bad certificate 2025-08-13T19:55:45.290850271+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39144: remote error: tls: bad certificate 2025-08-13T19:55:45.310144452+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39152: remote error: tls: bad certificate 2025-08-13T19:55:45.323583086+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39154: remote error: tls: bad certificate 2025-08-13T19:55:45.341274101+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39160: remote error: tls: bad certificate 2025-08-13T19:55:45.362244860+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39164: remote error: tls: bad certificate 2025-08-13T19:55:45.381071508+00:00 stderr F 2025/08/13 19:55:45 http: TLS 
handshake error from 127.0.0.1:39170: remote error: tls: bad certificate 2025-08-13T19:55:45.403018454+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39174: remote error: tls: bad certificate 2025-08-13T19:55:45.421341798+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39188: remote error: tls: bad certificate 2025-08-13T19:55:45.437453518+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39202: remote error: tls: bad certificate 2025-08-13T19:55:45.452191819+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39210: remote error: tls: bad certificate 2025-08-13T19:55:45.469058160+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39224: remote error: tls: bad certificate 2025-08-13T19:55:45.488306020+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39226: remote error: tls: bad certificate 2025-08-13T19:55:45.506644104+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39236: remote error: tls: bad certificate 2025-08-13T19:55:45.524887524+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39244: remote error: tls: bad certificate 2025-08-13T19:55:45.544297949+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39250: remote error: tls: bad certificate 2025-08-13T19:55:45.560512862+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39262: remote error: tls: bad certificate 2025-08-13T19:55:45.577144087+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39276: remote error: tls: bad certificate 2025-08-13T19:55:45.594452551+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39288: remote error: tls: bad certificate 2025-08-13T19:55:45.610481149+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39294: remote error: tls: bad certificate 
2025-08-13T19:55:45.625259311+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39306: remote error: tls: bad certificate 2025-08-13T19:55:45.642920215+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39322: remote error: tls: bad certificate 2025-08-13T19:55:45.660028343+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39324: remote error: tls: bad certificate 2025-08-13T19:55:45.676483103+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39334: remote error: tls: bad certificate 2025-08-13T19:55:45.692538492+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39344: remote error: tls: bad certificate 2025-08-13T19:55:45.711744340+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39356: remote error: tls: bad certificate 2025-08-13T19:55:45.728074646+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39366: remote error: tls: bad certificate 2025-08-13T19:55:45.743032844+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39378: remote error: tls: bad certificate 2025-08-13T19:55:45.759152424+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39384: remote error: tls: bad certificate 2025-08-13T19:55:45.775766568+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39396: remote error: tls: bad certificate 2025-08-13T19:55:45.796582783+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39406: remote error: tls: bad certificate 2025-08-13T19:55:45.818418716+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39412: remote error: tls: bad certificate 2025-08-13T19:55:45.836564444+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39426: remote error: tls: bad certificate 2025-08-13T19:55:45.854195358+00:00 stderr F 2025/08/13 19:55:45 http: TLS 
handshake error from 127.0.0.1:39438: remote error: tls: bad certificate 2025-08-13T19:55:45.879333456+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39454: remote error: tls: bad certificate 2025-08-13T19:55:45.899477641+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39468: remote error: tls: bad certificate 2025-08-13T19:55:45.918489234+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39484: remote error: tls: bad certificate 2025-08-13T19:55:45.945675230+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39500: remote error: tls: bad certificate 2025-08-13T19:55:45.968765109+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39504: remote error: tls: bad certificate 2025-08-13T19:55:45.987233797+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39510: remote error: tls: bad certificate 2025-08-13T19:55:46.008955997+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39514: remote error: tls: bad certificate 2025-08-13T19:55:46.027873827+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39530: remote error: tls: bad certificate 2025-08-13T19:55:46.052034017+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39546: remote error: tls: bad certificate 2025-08-13T19:55:46.077634698+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39554: remote error: tls: bad certificate 2025-08-13T19:55:46.099153273+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39560: remote error: tls: bad certificate 2025-08-13T19:55:46.116990972+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39572: remote error: tls: bad certificate 2025-08-13T19:55:46.135143360+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39576: remote error: tls: bad certificate 
2025-08-13T19:55:46.154638557+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39582: remote error: tls: bad certificate 2025-08-13T19:55:46.175216695+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39588: remote error: tls: bad certificate 2025-08-13T19:55:46.204208832+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39604: remote error: tls: bad certificate 2025-08-13T19:55:46.224925874+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39616: remote error: tls: bad certificate 2025-08-13T19:55:46.246351985+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39620: remote error: tls: bad certificate 2025-08-13T19:55:46.270031801+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39624: remote error: tls: bad certificate 2025-08-13T19:55:46.291949157+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39640: remote error: tls: bad certificate 2025-08-13T19:55:46.321080139+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39650: remote error: tls: bad certificate 2025-08-13T19:55:46.339222077+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39664: remote error: tls: bad certificate 2025-08-13T19:55:46.355744949+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39666: remote error: tls: bad certificate 2025-08-13T19:55:46.374868885+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39672: remote error: tls: bad certificate 2025-08-13T19:55:46.398128389+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39678: remote error: tls: bad certificate 2025-08-13T19:55:46.420764495+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39690: remote error: tls: bad certificate 2025-08-13T19:55:46.439880921+00:00 stderr F 2025/08/13 19:55:46 http: TLS 
handshake error from 127.0.0.1:39700: remote error: tls: bad certificate 2025-08-13T19:55:46.460968123+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39712: remote error: tls: bad certificate 2025-08-13T19:55:46.491667340+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39716: remote error: tls: bad certificate 2025-08-13T19:55:46.511737343+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39726: remote error: tls: bad certificate 2025-08-13T19:55:46.529936343+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39742: remote error: tls: bad certificate 2025-08-13T19:55:52.824556573+00:00 stderr F 2025/08/13 19:55:52 http: TLS handshake error from 127.0.0.1:44446: remote error: tls: bad certificate 2025-08-13T19:55:53.780167759+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44456: remote error: tls: bad certificate 2025-08-13T19:55:53.807762677+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44472: remote error: tls: bad certificate 2025-08-13T19:55:53.845223767+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44484: remote error: tls: bad certificate 2025-08-13T19:55:53.883531911+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44494: remote error: tls: bad certificate 2025-08-13T19:55:53.910349246+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44510: remote error: tls: bad certificate 2025-08-13T19:55:55.224732629+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44516: remote error: tls: bad certificate 2025-08-13T19:55:55.241277971+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44532: remote error: tls: bad certificate 2025-08-13T19:55:55.261690894+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44538: remote error: tls: bad certificate 
2025-08-13T19:55:55.280748038+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44550: remote error: tls: bad certificate 2025-08-13T19:55:55.299029930+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44552: remote error: tls: bad certificate 2025-08-13T19:55:55.316595082+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44564: remote error: tls: bad certificate 2025-08-13T19:55:55.331861858+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44578: remote error: tls: bad certificate 2025-08-13T19:55:55.358071616+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44584: remote error: tls: bad certificate 2025-08-13T19:55:55.375025030+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44586: remote error: tls: bad certificate 2025-08-13T19:55:55.395235878+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44594: remote error: tls: bad certificate 2025-08-13T19:55:55.419379097+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44596: remote error: tls: bad certificate 2025-08-13T19:55:55.439173882+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44604: remote error: tls: bad certificate 2025-08-13T19:55:55.455989842+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44608: remote error: tls: bad certificate 2025-08-13T19:55:55.469614712+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44616: remote error: tls: bad certificate 2025-08-13T19:55:55.496532970+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate 2025-08-13T19:55:55.520431953+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44642: remote error: tls: bad certificate 2025-08-13T19:55:55.536882282+00:00 stderr F 2025/08/13 19:55:55 http: TLS 
handshake error from 127.0.0.1:44650: remote error: tls: bad certificate 2025-08-13T19:55:55.560878418+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44660: remote error: tls: bad certificate 2025-08-13T19:55:55.575270509+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44662: remote error: tls: bad certificate 2025-08-13T19:55:55.591161202+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44672: remote error: tls: bad certificate 2025-08-13T19:55:55.612307966+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44682: remote error: tls: bad certificate 2025-08-13T19:55:55.631002370+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44692: remote error: tls: bad certificate 2025-08-13T19:55:55.648979183+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44696: remote error: tls: bad certificate 2025-08-13T19:55:55.666100582+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44700: remote error: tls: bad certificate 2025-08-13T19:55:55.682322165+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44706: remote error: tls: bad certificate 2025-08-13T19:55:55.699884547+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44718: remote error: tls: bad certificate 2025-08-13T19:55:55.715290757+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate 2025-08-13T19:55:55.733637551+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44732: remote error: tls: bad certificate 2025-08-13T19:55:55.750501372+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44746: remote error: tls: bad certificate 2025-08-13T19:55:55.764348138+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44760: remote error: tls: bad certificate 
2025-08-13T19:55:55.778900093+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44772: remote error: tls: bad certificate 2025-08-13T19:55:55.795024764+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44784: remote error: tls: bad certificate 2025-08-13T19:55:55.812279856+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44796: remote error: tls: bad certificate 2025-08-13T19:55:55.832994698+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44810: remote error: tls: bad certificate 2025-08-13T19:55:55.849993373+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44818: remote error: tls: bad certificate 2025-08-13T19:55:55.866183645+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate 2025-08-13T19:55:55.881169033+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44834: remote error: tls: bad certificate 2025-08-13T19:55:55.900964599+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44840: remote error: tls: bad certificate 2025-08-13T19:55:55.920877277+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44846: remote error: tls: bad certificate 2025-08-13T19:55:55.935490895+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44858: remote error: tls: bad certificate 2025-08-13T19:55:55.950882664+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44864: remote error: tls: bad certificate 2025-08-13T19:55:55.966917262+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44876: remote error: tls: bad certificate 2025-08-13T19:55:55.985651847+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44880: remote error: tls: bad certificate 2025-08-13T19:55:56.000949174+00:00 stderr F 2025/08/13 19:55:56 http: TLS 
handshake error from 127.0.0.1:44884: remote error: tls: bad certificate 2025-08-13T19:55:56.016273631+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44894: remote error: tls: bad certificate 2025-08-13T19:55:56.033253086+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44900: remote error: tls: bad certificate 2025-08-13T19:55:56.052562828+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44906: remote error: tls: bad certificate 2025-08-13T19:55:56.066392292+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44908: remote error: tls: bad certificate 2025-08-13T19:55:56.083670006+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44916: remote error: tls: bad certificate 2025-08-13T19:55:56.110284246+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44932: remote error: tls: bad certificate 2025-08-13T19:55:56.134524528+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44940: remote error: tls: bad certificate 2025-08-13T19:55:56.156586868+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44944: remote error: tls: bad certificate 2025-08-13T19:55:56.182486117+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44956: remote error: tls: bad certificate 2025-08-13T19:55:56.203323543+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44958: remote error: tls: bad certificate 2025-08-13T19:55:56.225880157+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44962: remote error: tls: bad certificate 2025-08-13T19:55:56.242304366+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44974: remote error: tls: bad certificate 2025-08-13T19:55:56.262851962+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44988: remote error: tls: bad certificate 
2025-08-13T19:55:56.283443650+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45000: remote error: tls: bad certificate 2025-08-13T19:55:56.302729671+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45016: remote error: tls: bad certificate 2025-08-13T19:55:56.320443107+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45030: remote error: tls: bad certificate 2025-08-13T19:55:56.338330158+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45038: remote error: tls: bad certificate 2025-08-13T19:55:56.353768378+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45040: remote error: tls: bad certificate 2025-08-13T19:55:56.367973094+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45052: remote error: tls: bad certificate 2025-08-13T19:55:56.384070984+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45058: remote error: tls: bad certificate 2025-08-13T19:55:56.406155034+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45060: remote error: tls: bad certificate 2025-08-13T19:55:56.423310914+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45064: remote error: tls: bad certificate 2025-08-13T19:55:56.439226739+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45074: remote error: tls: bad certificate 2025-08-13T19:56:04.212256863+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34962: remote error: tls: bad certificate 2025-08-13T19:56:04.232744598+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34966: remote error: tls: bad certificate 2025-08-13T19:56:04.251181244+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34972: remote error: tls: bad certificate 2025-08-13T19:56:04.274170991+00:00 stderr F 2025/08/13 19:56:04 http: TLS 
handshake error from 127.0.0.1:34988: remote error: tls: bad certificate 2025-08-13T19:56:04.295367696+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:35004: remote error: tls: bad certificate 2025-08-13T19:56:05.226453313+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35006: remote error: tls: bad certificate 2025-08-13T19:56:05.244457487+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35022: remote error: tls: bad certificate 2025-08-13T19:56:05.262510113+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35032: remote error: tls: bad certificate 2025-08-13T19:56:05.279905070+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35040: remote error: tls: bad certificate 2025-08-13T19:56:05.297311857+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35046: remote error: tls: bad certificate 2025-08-13T19:56:05.318033838+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35062: remote error: tls: bad certificate 2025-08-13T19:56:05.335469256+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35064: remote error: tls: bad certificate 2025-08-13T19:56:05.353410969+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35080: remote error: tls: bad certificate 2025-08-13T19:56:05.371305510+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35094: remote error: tls: bad certificate 2025-08-13T19:56:05.384066564+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35110: remote error: tls: bad certificate 2025-08-13T19:56:05.401367898+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35114: remote error: tls: bad certificate 2025-08-13T19:56:05.417630852+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35120: remote error: tls: bad certificate 
2025-08-13T19:56:05.433924208+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35134: remote error: tls: bad certificate 2025-08-13T19:56:05.458980173+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35144: remote error: tls: bad certificate 2025-08-13T19:56:05.477643286+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35154: remote error: tls: bad certificate 2025-08-13T19:56:05.493881660+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35156: remote error: tls: bad certificate 2025-08-13T19:56:05.509205347+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35162: remote error: tls: bad certificate 2025-08-13T19:56:05.526420229+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35176: remote error: tls: bad certificate 2025-08-13T19:56:05.549349164+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35184: remote error: tls: bad certificate 2025-08-13T19:56:05.563699803+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35200: remote error: tls: bad certificate 2025-08-13T19:56:05.579660349+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35212: remote error: tls: bad certificate 2025-08-13T19:56:05.595921573+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35226: remote error: tls: bad certificate 2025-08-13T19:56:05.611611262+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35240: remote error: tls: bad certificate 2025-08-13T19:56:05.630486300+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35250: remote error: tls: bad certificate 2025-08-13T19:56:05.646580120+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35262: remote error: tls: bad certificate 2025-08-13T19:56:05.662331250+00:00 stderr F 2025/08/13 19:56:05 http: TLS 
handshake error from 127.0.0.1:35278: remote error: tls: bad certificate 2025-08-13T19:56:05.688963440+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35286: remote error: tls: bad certificate 2025-08-13T19:56:05.708348024+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35290: remote error: tls: bad certificate 2025-08-13T19:56:05.728864560+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35302: remote error: tls: bad certificate 2025-08-13T19:56:05.748190062+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35306: remote error: tls: bad certificate 2025-08-13T19:56:05.767613786+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35320: remote error: tls: bad certificate 2025-08-13T19:56:05.789348337+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35328: remote error: tls: bad certificate 2025-08-13T19:56:05.811254952+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35330: remote error: tls: bad certificate 2025-08-13T19:56:05.833526528+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35336: remote error: tls: bad certificate 2025-08-13T19:56:05.851578174+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35344: remote error: tls: bad certificate 2025-08-13T19:56:05.872663876+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35346: remote error: tls: bad certificate 2025-08-13T19:56:05.899258815+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35350: remote error: tls: bad certificate 2025-08-13T19:56:05.917351002+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35356: remote error: tls: bad certificate 2025-08-13T19:56:05.940849733+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35362: remote error: tls: bad certificate 
2025-08-13T19:56:05.961742819+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35378: remote error: tls: bad certificate 2025-08-13T19:56:05.982911784+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35384: remote error: tls: bad certificate 2025-08-13T19:56:06.006127677+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35388: remote error: tls: bad certificate 2025-08-13T19:56:06.022642348+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35392: remote error: tls: bad certificate 2025-08-13T19:56:06.046241482+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35408: remote error: tls: bad certificate 2025-08-13T19:56:06.073366907+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35410: remote error: tls: bad certificate 2025-08-13T19:56:06.094022967+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35422: remote error: tls: bad certificate 2025-08-13T19:56:06.112557836+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35426: remote error: tls: bad certificate 2025-08-13T19:56:06.133374070+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35432: remote error: tls: bad certificate 2025-08-13T19:56:06.157912101+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35440: remote error: tls: bad certificate 2025-08-13T19:56:06.175482033+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35450: remote error: tls: bad certificate 2025-08-13T19:56:06.197201003+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35452: remote error: tls: bad certificate 2025-08-13T19:56:06.217123102+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35460: remote error: tls: bad certificate 2025-08-13T19:56:06.237505664+00:00 stderr F 2025/08/13 19:56:06 http: TLS 
handshake error from 127.0.0.1:35476: remote error: tls: bad certificate 2025-08-13T19:56:06.255534069+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35484: remote error: tls: bad certificate 2025-08-13T19:56:06.275069526+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35494: remote error: tls: bad certificate 2025-08-13T19:56:06.292607227+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35506: remote error: tls: bad certificate 2025-08-13T19:56:06.312165656+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35522: remote error: tls: bad certificate 2025-08-13T19:56:06.329967944+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35528: remote error: tls: bad certificate 2025-08-13T19:56:06.351057836+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35542: remote error: tls: bad certificate 2025-08-13T19:56:06.367303130+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35558: remote error: tls: bad certificate 2025-08-13T19:56:06.382659809+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35562: remote error: tls: bad certificate 2025-08-13T19:56:06.400181979+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35568: remote error: tls: bad certificate 2025-08-13T19:56:06.416965998+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35578: remote error: tls: bad certificate 2025-08-13T19:56:06.436212358+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35590: remote error: tls: bad certificate 2025-08-13T19:56:06.459046810+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35604: remote error: tls: bad certificate 2025-08-13T19:56:06.478109724+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35614: remote error: tls: bad certificate 
2025-08-13T19:56:06.497619531+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35616: remote error: tls: bad certificate 2025-08-13T19:56:14.523282831+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48366: remote error: tls: bad certificate 2025-08-13T19:56:14.543111027+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48368: remote error: tls: bad certificate 2025-08-13T19:56:14.561732609+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48378: remote error: tls: bad certificate 2025-08-13T19:56:14.586077774+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48386: remote error: tls: bad certificate 2025-08-13T19:56:14.606737354+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48402: remote error: tls: bad certificate 2025-08-13T19:56:15.225634456+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48404: remote error: tls: bad certificate 2025-08-13T19:56:15.241406516+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48406: remote error: tls: bad certificate 2025-08-13T19:56:15.257578618+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48422: remote error: tls: bad certificate 2025-08-13T19:56:15.274052669+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48428: remote error: tls: bad certificate 2025-08-13T19:56:15.291650561+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48444: remote error: tls: bad certificate 2025-08-13T19:56:15.309985504+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48460: remote error: tls: bad certificate 2025-08-13T19:56:15.332499957+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48464: remote error: tls: bad certificate 2025-08-13T19:56:15.352566890+00:00 stderr F 2025/08/13 19:56:15 http: TLS 
handshake error from 127.0.0.1:48472: remote error: tls: bad certificate 2025-08-13T19:56:15.376644158+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48488: remote error: tls: bad certificate 2025-08-13T19:56:15.395280090+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48500: remote error: tls: bad certificate 2025-08-13T19:56:15.412434010+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48502: remote error: tls: bad certificate 2025-08-13T19:56:15.431586287+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48516: remote error: tls: bad certificate 2025-08-13T19:56:15.447224413+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48528: remote error: tls: bad certificate 2025-08-13T19:56:15.462534960+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48536: remote error: tls: bad certificate 2025-08-13T19:56:15.487488243+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48544: remote error: tls: bad certificate 2025-08-13T19:56:15.504228111+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48548: remote error: tls: bad certificate 2025-08-13T19:56:15.520344662+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48558: remote error: tls: bad certificate 2025-08-13T19:56:15.539117308+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48572: remote error: tls: bad certificate 2025-08-13T19:56:15.554470726+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48588: remote error: tls: bad certificate 2025-08-13T19:56:15.570651978+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48602: remote error: tls: bad certificate 2025-08-13T19:56:15.586091099+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48604: remote error: tls: bad certificate 
2025-08-13T19:56:15.602721954+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48618: remote error: tls: bad certificate 2025-08-13T19:56:15.618959948+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48622: remote error: tls: bad certificate 2025-08-13T19:56:15.637066125+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48630: remote error: tls: bad certificate 2025-08-13T19:56:15.652735732+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48642: remote error: tls: bad certificate 2025-08-13T19:56:15.668418590+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48656: remote error: tls: bad certificate 2025-08-13T19:56:15.683131270+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48670: remote error: tls: bad certificate 2025-08-13T19:56:15.696912744+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48686: remote error: tls: bad certificate 2025-08-13T19:56:15.720008783+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48692: remote error: tls: bad certificate 2025-08-13T19:56:15.735206527+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48702: remote error: tls: bad certificate 2025-08-13T19:56:15.758105811+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48704: remote error: tls: bad certificate 2025-08-13T19:56:15.777319080+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48708: remote error: tls: bad certificate 2025-08-13T19:56:15.798733631+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48714: remote error: tls: bad certificate 2025-08-13T19:56:15.814436700+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48726: remote error: tls: bad certificate 2025-08-13T19:56:15.834244675+00:00 stderr F 2025/08/13 19:56:15 http: TLS 
handshake error from 127.0.0.1:48740: remote error: tls: bad certificate 2025-08-13T19:56:15.854860184+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48756: remote error: tls: bad certificate 2025-08-13T19:56:15.872314452+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48770: remote error: tls: bad certificate 2025-08-13T19:56:15.891735747+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48786: remote error: tls: bad certificate 2025-08-13T19:56:15.913677114+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48798: remote error: tls: bad certificate 2025-08-13T19:56:15.942193228+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48810: remote error: tls: bad certificate 2025-08-13T19:56:15.964604088+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48826: remote error: tls: bad certificate 2025-08-13T19:56:15.985579917+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48838: remote error: tls: bad certificate 2025-08-13T19:56:16.019097874+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48848: remote error: tls: bad certificate 2025-08-13T19:56:16.045058055+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48850: remote error: tls: bad certificate 2025-08-13T19:56:16.066288041+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48856: remote error: tls: bad certificate 2025-08-13T19:56:16.091576833+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48868: remote error: tls: bad certificate 2025-08-13T19:56:16.106235992+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48882: remote error: tls: bad certificate 2025-08-13T19:56:16.122164446+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48892: remote error: tls: bad certificate 
2025-08-13T19:56:16.146996326+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48906: remote error: tls: bad certificate 2025-08-13T19:56:16.163858827+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48914: remote error: tls: bad certificate 2025-08-13T19:56:16.181500171+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48920: remote error: tls: bad certificate 2025-08-13T19:56:16.198979520+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48928: remote error: tls: bad certificate 2025-08-13T19:56:16.220998039+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48942: remote error: tls: bad certificate 2025-08-13T19:56:16.237085798+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48958: remote error: tls: bad certificate 2025-08-13T19:56:16.269023870+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48972: remote error: tls: bad certificate 2025-08-13T19:56:16.282916207+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48974: remote error: tls: bad certificate 2025-08-13T19:56:16.300881490+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48986: remote error: tls: bad certificate 2025-08-13T19:56:16.318491013+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48996: remote error: tls: bad certificate 2025-08-13T19:56:16.337081954+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48998: remote error: tls: bad certificate 2025-08-13T19:56:16.361762798+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49010: remote error: tls: bad certificate 2025-08-13T19:56:16.379760432+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49016: remote error: tls: bad certificate 2025-08-13T19:56:16.397045626+00:00 stderr F 2025/08/13 19:56:16 http: TLS 
handshake error from 127.0.0.1:49020: remote error: tls: bad certificate 2025-08-13T19:56:16.414064142+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49030: remote error: tls: bad certificate 2025-08-13T19:56:16.428882755+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49042: remote error: tls: bad certificate 2025-08-13T19:56:16.443707818+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49056: remote error: tls: bad certificate 2025-08-13T19:56:16.459309754+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49058: remote error: tls: bad certificate 2025-08-13T19:56:16.481357933+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49074: remote error: tls: bad certificate 2025-08-13T19:56:16.501159139+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49086: remote error: tls: bad certificate 2025-08-13T19:56:16.518668019+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49088: remote error: tls: bad certificate 2025-08-13T19:56:16.536877229+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49100: remote error: tls: bad certificate 2025-08-13T19:56:16.556201451+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49114: remote error: tls: bad certificate 2025-08-13T19:56:16.574138933+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49122: remote error: tls: bad certificate 2025-08-13T19:56:16.592120766+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49124: remote error: tls: bad certificate 2025-08-13T19:56:16.608378450+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49132: remote error: tls: bad certificate 2025-08-13T19:56:16.627979080+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49136: remote error: tls: bad certificate 
2025-08-13T19:56:16.661932740+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49144: remote error: tls: bad certificate 2025-08-13T19:56:16.678177733+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49154: remote error: tls: bad certificate 2025-08-13T19:56:16.713048449+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49168: remote error: tls: bad certificate 2025-08-13T19:56:16.731738853+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49182: remote error: tls: bad certificate 2025-08-13T19:56:16.749534101+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49190: remote error: tls: bad certificate 2025-08-13T19:56:16.766628639+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49202: remote error: tls: bad certificate 2025-08-13T19:56:16.785175379+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49210: remote error: tls: bad certificate 2025-08-13T19:56:16.804865541+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49218: remote error: tls: bad certificate 2025-08-13T19:56:16.833077097+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49234: remote error: tls: bad certificate 2025-08-13T19:56:16.852035838+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49250: remote error: tls: bad certificate 2025-08-13T19:56:16.875039805+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49266: remote error: tls: bad certificate 2025-08-13T19:56:16.901080509+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49274: remote error: tls: bad certificate 2025-08-13T19:56:16.917706683+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49290: remote error: tls: bad certificate 2025-08-13T19:56:16.937721095+00:00 stderr F 2025/08/13 19:56:16 http: TLS 
handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:56:16.956477951+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49298: remote error: tls: bad certificate 2025-08-13T19:56:16.976037679+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49314: remote error: tls: bad certificate 2025-08-13T19:56:16.991848391+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49318: remote error: tls: bad certificate 2025-08-13T19:56:17.005632784+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49334: remote error: tls: bad certificate 2025-08-13T19:56:17.030932497+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49342: remote error: tls: bad certificate 2025-08-13T19:56:17.045555164+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 2025-08-13T19:56:17.061213182+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49370: remote error: tls: bad certificate 2025-08-13T19:56:17.102422098+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49386: remote error: tls: bad certificate 2025-08-13T19:56:17.141107583+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49394: remote error: tls: bad certificate 2025-08-13T19:56:17.178469190+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49408: remote error: tls: bad certificate 2025-08-13T19:56:17.226950744+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate 2025-08-13T19:56:17.259250946+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49426: remote error: tls: bad certificate 2025-08-13T19:56:17.300422942+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49440: remote error: tls: bad certificate 
2025-08-13T19:56:17.340849686+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49444: remote error: tls: bad certificate 2025-08-13T19:56:17.379916752+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49448: remote error: tls: bad certificate 2025-08-13T19:56:17.419144322+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49452: remote error: tls: bad certificate 2025-08-13T19:56:17.458899517+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49454: remote error: tls: bad certificate 2025-08-13T19:56:17.502985276+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49466: remote error: tls: bad certificate 2025-08-13T19:56:17.546481698+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49476: remote error: tls: bad certificate 2025-08-13T19:56:17.578102301+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49492: remote error: tls: bad certificate 2025-08-13T19:56:17.622084926+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49496: remote error: tls: bad certificate 2025-08-13T19:56:17.660241796+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49506: remote error: tls: bad certificate 2025-08-13T19:56:17.700904407+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49512: remote error: tls: bad certificate 2025-08-13T19:56:17.741491176+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49516: remote error: tls: bad certificate 2025-08-13T19:56:17.780104989+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49526: remote error: tls: bad certificate 2025-08-13T19:56:17.817659271+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49530: remote error: tls: bad certificate 2025-08-13T19:56:17.858408045+00:00 stderr F 2025/08/13 19:56:17 http: TLS 
handshake error from 127.0.0.1:49540: remote error: tls: bad certificate 2025-08-13T19:56:17.900224919+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49550: remote error: tls: bad certificate 2025-08-13T19:56:17.938504802+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49562: remote error: tls: bad certificate 2025-08-13T19:56:17.979759980+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49568: remote error: tls: bad certificate 2025-08-13T19:56:18.024401465+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49576: remote error: tls: bad certificate 2025-08-13T19:56:18.060726722+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49580: remote error: tls: bad certificate 2025-08-13T19:56:18.099416237+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49584: remote error: tls: bad certificate 2025-08-13T19:56:18.141405796+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49592: remote error: tls: bad certificate 2025-08-13T19:56:18.182291913+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49604: remote error: tls: bad certificate 2025-08-13T19:56:18.223781818+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49606: remote error: tls: bad certificate 2025-08-13T19:56:18.260134486+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49612: remote error: tls: bad certificate 2025-08-13T19:56:18.313018516+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49622: remote error: tls: bad certificate 2025-08-13T19:56:18.352276627+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49630: remote error: tls: bad certificate 2025-08-13T19:56:18.385901677+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49634: remote error: tls: bad certificate 
2025-08-13T19:56:18.421409611+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49650: remote error: tls: bad certificate 2025-08-13T19:56:18.460554168+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49662: remote error: tls: bad certificate 2025-08-13T19:56:18.497052690+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49672: remote error: tls: bad certificate 2025-08-13T19:56:18.540230523+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49680: remote error: tls: bad certificate 2025-08-13T19:56:18.580440782+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49688: remote error: tls: bad certificate 2025-08-13T19:56:18.621526565+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49698: remote error: tls: bad certificate 2025-08-13T19:56:18.660588940+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49700: remote error: tls: bad certificate 2025-08-13T19:56:18.699347827+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49702: remote error: tls: bad certificate 2025-08-13T19:56:18.739736240+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60210: remote error: tls: bad certificate 2025-08-13T19:56:18.778538288+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60212: remote error: tls: bad certificate 2025-08-13T19:56:18.822894145+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60224: remote error: tls: bad certificate 2025-08-13T19:56:18.867266522+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60238: remote error: tls: bad certificate 2025-08-13T19:56:18.901464949+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60254: remote error: tls: bad certificate 2025-08-13T19:56:18.941194953+00:00 stderr F 2025/08/13 19:56:18 http: TLS 
handshake error from 127.0.0.1:60260: remote error: tls: bad certificate 2025-08-13T19:56:18.978890780+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60274: remote error: tls: bad certificate 2025-08-13T19:56:19.023520184+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60282: remote error: tls: bad certificate 2025-08-13T19:56:19.059161562+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60284: remote error: tls: bad certificate 2025-08-13T19:56:19.098975159+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60292: remote error: tls: bad certificate 2025-08-13T19:56:19.141963816+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60304: remote error: tls: bad certificate 2025-08-13T19:56:19.178275523+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60308: remote error: tls: bad certificate 2025-08-13T19:56:19.217360059+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60316: remote error: tls: bad certificate 2025-08-13T19:56:19.260116940+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60320: remote error: tls: bad certificate 2025-08-13T19:56:19.298187827+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60324: remote error: tls: bad certificate 2025-08-13T19:56:19.340386132+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60336: remote error: tls: bad certificate 2025-08-13T19:56:19.382478584+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60348: remote error: tls: bad certificate 2025-08-13T19:56:19.423060843+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60360: remote error: tls: bad certificate 2025-08-13T19:56:19.459342529+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60372: remote error: tls: bad certificate 
2025-08-13T19:56:19.502522042+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60382: remote error: tls: bad certificate 2025-08-13T19:56:19.541055782+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60384: remote error: tls: bad certificate 2025-08-13T19:56:19.580605031+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60386: remote error: tls: bad certificate 2025-08-13T19:56:19.691913910+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60390: remote error: tls: bad certificate 2025-08-13T19:56:19.723640376+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60404: remote error: tls: bad certificate 2025-08-13T19:56:19.748067543+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60414: remote error: tls: bad certificate 2025-08-13T19:56:19.767062166+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60416: remote error: tls: bad certificate 2025-08-13T19:56:19.830188728+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60432: remote error: tls: bad certificate 2025-08-13T19:56:19.847652487+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60442: remote error: tls: bad certificate 2025-08-13T19:56:19.873422673+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60450: remote error: tls: bad certificate 2025-08-13T19:56:19.903512852+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60452: remote error: tls: bad certificate 2025-08-13T19:56:19.940493338+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60458: remote error: tls: bad certificate 2025-08-13T19:56:19.979206143+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60474: remote error: tls: bad certificate 2025-08-13T19:56:20.019782402+00:00 stderr F 2025/08/13 19:56:20 http: TLS 
handshake error from 127.0.0.1:60480: remote error: tls: bad certificate 2025-08-13T19:56:20.060214907+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60492: remote error: tls: bad certificate 2025-08-13T19:56:20.097512922+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60502: remote error: tls: bad certificate 2025-08-13T19:56:20.138601255+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60514: remote error: tls: bad certificate 2025-08-13T19:56:20.179310167+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60528: remote error: tls: bad certificate 2025-08-13T19:56:20.224400055+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60544: remote error: tls: bad certificate 2025-08-13T19:56:20.263304946+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60556: remote error: tls: bad certificate 2025-08-13T19:56:20.302375711+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60560: remote error: tls: bad certificate 2025-08-13T19:56:20.352099811+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60574: remote error: tls: bad certificate 2025-08-13T19:56:20.383428046+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60580: remote error: tls: bad certificate 2025-08-13T19:56:20.419989800+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60588: remote error: tls: bad certificate 2025-08-13T19:56:20.493288673+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60600: remote error: tls: bad certificate 2025-08-13T19:56:20.516630530+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60604: remote error: tls: bad certificate 2025-08-13T19:56:20.543469776+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60608: remote error: tls: bad certificate 
2025-08-13T19:56:20.589926052+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60624: remote error: tls: bad certificate
2025-08-13T19:56:20.621435993+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60640: remote error: tls: bad certificate
2025-08-13T19:56:20.658229593+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60646: remote error: tls: bad certificate
2025-08-13T19:56:20.697298119+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60660: remote error: tls: bad certificate
2025-08-13T19:56:20.741112860+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60664: remote error: tls: bad certificate
2025-08-13T19:56:20.782320987+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60670: remote error: tls: bad certificate
2025-08-13T19:56:20.819863509+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60672: remote error: tls: bad certificate
2025-08-13T19:56:20.860204190+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60674: remote error: tls: bad certificate
2025-08-13T19:56:20.897859876+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60680: remote error: tls: bad certificate
2025-08-13T19:56:20.938916668+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60682: remote error: tls: bad certificate
2025-08-13T19:56:20.981937877+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60698: remote error: tls: bad certificate
2025-08-13T19:56:21.020673772+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60714: remote error: tls: bad certificate
2025-08-13T19:56:21.059742628+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60718: remote error: tls: bad certificate
2025-08-13T19:56:21.103108316+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60722: remote error: tls: bad certificate
2025-08-13T19:56:21.141400180+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60726: remote error: tls: bad certificate
2025-08-13T19:56:21.181011431+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60736: remote error: tls: bad certificate
2025-08-13T19:56:21.221674872+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60738: remote error: tls: bad certificate
2025-08-13T19:56:21.259679737+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60746: remote error: tls: bad certificate
2025-08-13T19:56:24.713609512+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60752: remote error: tls: bad certificate
2025-08-13T19:56:24.736715922+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60764: remote error: tls: bad certificate
2025-08-13T19:56:24.756431835+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60766: remote error: tls: bad certificate
2025-08-13T19:56:24.776895689+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60772: remote error: tls: bad certificate
2025-08-13T19:56:24.795441349+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60782: remote error: tls: bad certificate
2025-08-13T19:56:25.228152195+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60794: remote error: tls: bad certificate
2025-08-13T19:56:25.245030607+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60806: remote error: tls: bad certificate
2025-08-13T19:56:25.266129119+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60808: remote error: tls: bad certificate
2025-08-13T19:56:25.283865286+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60824: remote error: tls: bad certificate
2025-08-13T19:56:25.301257042+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60836: remote error: tls: bad certificate
2025-08-13T19:56:25.315609002+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60844: remote error: tls: bad certificate
2025-08-13T19:56:25.333272366+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60858: remote error: tls: bad certificate
2025-08-13T19:56:25.348940394+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60864: remote error: tls: bad certificate
2025-08-13T19:56:25.370581172+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60872: remote error: tls: bad certificate
2025-08-13T19:56:25.394051512+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60874: remote error: tls: bad certificate
2025-08-13T19:56:25.405926421+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60888: remote error: tls: bad certificate
2025-08-13T19:56:25.422310069+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60902: remote error: tls: bad certificate
2025-08-13T19:56:25.438460950+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60912: remote error: tls: bad certificate
2025-08-13T19:56:25.454018324+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60924: remote error: tls: bad certificate
2025-08-13T19:56:25.471947896+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60938: remote error: tls: bad certificate
2025-08-13T19:56:25.489410415+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60944: remote error: tls: bad certificate
2025-08-13T19:56:25.506624236+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60952: remote error: tls: bad certificate
2025-08-13T19:56:25.531308851+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60958: remote error: tls: bad certificate
2025-08-13T19:56:25.546933968+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60974: remote error: tls: bad certificate
2025-08-13T19:56:25.561763591+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60984: remote error: tls: bad certificate
2025-08-13T19:56:25.578331604+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60988: remote error: tls: bad certificate
2025-08-13T19:56:25.598173051+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60994: remote error: tls: bad certificate
2025-08-13T19:56:25.620897300+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32778: remote error: tls: bad certificate
2025-08-13T19:56:25.638916323+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32794: remote error: tls: bad certificate
2025-08-13T19:56:25.653611893+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32800: remote error: tls: bad certificate
2025-08-13T19:56:25.668929760+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32808: remote error: tls: bad certificate
2025-08-13T19:56:25.688427757+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32822: remote error: tls: bad certificate
2025-08-13T19:56:25.700366698+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32838: remote error: tls: bad certificate
2025-08-13T19:56:25.713189444+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32840: remote error: tls: bad certificate
2025-08-13T19:56:25.730241761+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32854: remote error: tls: bad certificate
2025-08-13T19:56:25.750001565+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32870: remote error: tls: bad certificate
2025-08-13T19:56:25.766507936+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32878: remote error: tls: bad certificate
2025-08-13T19:56:25.781043662+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32888: remote error: tls: bad certificate
2025-08-13T19:56:25.799626672+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32902: remote error: tls: bad certificate
2025-08-13T19:56:25.816681149+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32908: remote error: tls: bad certificate
2025-08-13T19:56:25.836270739+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32922: remote error: tls: bad certificate
2025-08-13T19:56:25.852295126+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32938: remote error: tls: bad certificate
2025-08-13T19:56:25.874538131+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32940: remote error: tls: bad certificate
2025-08-13T19:56:25.899968987+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32944: remote error: tls: bad certificate
2025-08-13T19:56:25.918739933+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32956: remote error: tls: bad certificate
2025-08-13T19:56:25.936371367+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32958: remote error: tls: bad certificate
2025-08-13T19:56:25.952628741+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32960: remote error: tls: bad certificate
2025-08-13T19:56:25.971218142+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32964: remote error: tls: bad certificate
2025-08-13T19:56:25.996639018+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32980: remote error: tls: bad certificate
2025-08-13T19:56:26.015352622+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:32982: remote error: tls: bad certificate
2025-08-13T19:56:26.032520442+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:32986: remote error: tls: bad certificate
2025-08-13T19:56:26.048177349+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33000: remote error: tls: bad certificate
2025-08-13T19:56:26.065905696+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33016: remote error: tls: bad certificate
2025-08-13T19:56:26.100657058+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33032: remote error: tls: bad certificate
2025-08-13T19:56:26.161130055+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33036: remote error: tls: bad certificate
2025-08-13T19:56:26.181934369+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33038: remote error: tls: bad certificate
2025-08-13T19:56:26.207480808+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33044: remote error: tls: bad certificate
2025-08-13T19:56:26.228406646+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33052: remote error: tls: bad certificate
2025-08-13T19:56:26.245143444+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33066: remote error: tls: bad certificate
2025-08-13T19:56:26.274306257+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33072: remote error: tls: bad certificate
2025-08-13T19:56:26.294680768+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33080: remote error: tls: bad certificate
2025-08-13T19:56:26.309468851+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33088: remote error: tls: bad certificate
2025-08-13T19:56:26.324231492+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33094: remote error: tls: bad certificate
2025-08-13T19:56:26.350863583+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33100: remote error: tls: bad certificate
2025-08-13T19:56:26.373687874+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33110: remote error: tls: bad certificate
2025-08-13T19:56:26.394339524+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33126: remote error: tls: bad certificate
2025-08-13T19:56:26.422339264+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33138: remote error: tls: bad certificate
2025-08-13T19:56:26.447093040+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33148: remote error: tls: bad certificate
2025-08-13T19:56:26.463037236+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33162: remote error: tls: bad certificate
2025-08-13T19:56:26.480013631+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33168: remote error: tls: bad certificate
2025-08-13T19:56:26.499265120+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33176: remote error: tls: bad certificate
2025-08-13T19:56:26.516897704+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33190: remote error: tls: bad certificate
2025-08-13T19:56:28.230030422+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33202: remote error: tls: bad certificate
2025-08-13T19:56:28.256028364+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33210: remote error: tls: bad certificate
2025-08-13T19:56:28.276163999+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33222: remote error: tls: bad certificate
2025-08-13T19:56:28.372292724+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33232: remote error: tls: bad certificate
2025-08-13T19:56:28.434955944+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33238: remote error: tls: bad certificate
2025-08-13T19:56:28.462683985+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33246: remote error: tls: bad certificate
2025-08-13T19:56:28.520193828+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33250: remote error: tls: bad certificate
2025-08-13T19:56:28.534392783+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33254: remote error: tls: bad certificate
2025-08-13T19:56:28.558302796+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33268: remote error: tls: bad certificate
2025-08-13T19:56:28.591037830+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33278: remote error: tls: bad certificate
2025-08-13T19:56:28.609239080+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33286: remote error: tls: bad certificate
2025-08-13T19:56:28.632140994+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33290: remote error: tls: bad certificate
2025-08-13T19:56:28.652923158+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33292: remote error: tls: bad certificate
2025-08-13T19:56:28.672149507+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33298: remote error: tls: bad certificate
2025-08-13T19:56:28.691951712+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33308: remote error: tls: bad certificate
2025-08-13T19:56:28.711067508+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33312: remote error: tls: bad certificate
2025-08-13T19:56:28.725863900+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33326: remote error: tls: bad certificate
2025-08-13T19:56:28.743639948+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40208: remote error: tls: bad certificate
2025-08-13T19:56:28.765675137+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40224: remote error: tls: bad certificate
2025-08-13T19:56:28.788074757+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40232: remote error: tls: bad certificate
2025-08-13T19:56:28.806516774+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40234: remote error: tls: bad certificate
2025-08-13T19:56:28.829392007+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40242: remote error: tls: bad certificate
2025-08-13T19:56:28.851298442+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40248: remote error: tls: bad certificate
2025-08-13T19:56:28.869924884+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40254: remote error: tls: bad certificate
2025-08-13T19:56:28.914021143+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40256: remote error: tls: bad certificate
2025-08-13T19:56:28.934675163+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40260: remote error: tls: bad certificate
2025-08-13T19:56:28.953654265+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40276: remote error: tls: bad certificate
2025-08-13T19:56:28.976501517+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40284: remote error: tls: bad certificate
2025-08-13T19:56:29.022968854+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40294: remote error: tls: bad certificate
2025-08-13T19:56:29.041279527+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40298: remote error: tls: bad certificate
2025-08-13T19:56:29.060927628+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40304: remote error: tls: bad certificate
2025-08-13T19:56:29.078177891+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40308: remote error: tls: bad certificate
2025-08-13T19:56:29.111441111+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40310: remote error: tls: bad certificate
2025-08-13T19:56:29.133208612+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40312: remote error: tls: bad certificate
2025-08-13T19:56:29.149290632+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40328: remote error: tls: bad certificate
2025-08-13T19:56:29.167316776+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40334: remote error: tls: bad certificate
2025-08-13T19:56:29.180703269+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40336: remote error: tls: bad certificate
2025-08-13T19:56:29.198915218+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40346: remote error: tls: bad certificate
2025-08-13T19:56:29.224107937+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40360: remote error: tls: bad certificate
2025-08-13T19:56:29.237585812+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40376: remote error: tls: bad certificate
2025-08-13T19:56:29.252533569+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40392: remote error: tls: bad certificate
2025-08-13T19:56:29.270678007+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40406: remote error: tls: bad certificate
2025-08-13T19:56:29.286617012+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40416: remote error: tls: bad certificate
2025-08-13T19:56:29.312717657+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40422: remote error: tls: bad certificate
2025-08-13T19:56:29.333581853+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40428: remote error: tls: bad certificate
2025-08-13T19:56:29.356018324+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40438: remote error: tls: bad certificate
2025-08-13T19:56:29.383142538+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40444: remote error: tls: bad certificate
2025-08-13T19:56:29.444007006+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40450: remote error: tls: bad certificate
2025-08-13T19:56:29.594660068+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40464: remote error: tls: bad certificate
2025-08-13T19:56:29.615071811+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40470: remote error: tls: bad certificate
2025-08-13T19:56:29.638519201+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40484: remote error: tls: bad certificate
2025-08-13T19:56:29.694117008+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40494: remote error: tls: bad certificate
2025-08-13T19:56:29.717957479+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40496: remote error: tls: bad certificate
2025-08-13T19:56:29.734070789+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40510: remote error: tls: bad certificate
2025-08-13T19:56:29.752045142+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40512: remote error: tls: bad certificate
2025-08-13T19:56:29.771756565+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40520: remote error: tls: bad certificate
2025-08-13T19:56:29.792145097+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40524: remote error: tls: bad certificate
2025-08-13T19:56:29.809265456+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40532: remote error: tls: bad certificate
2025-08-13T19:56:29.828091824+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40546: remote error: tls: bad certificate
2025-08-13T19:56:29.851987096+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40556: remote error: tls: bad certificate
2025-08-13T19:56:29.874285573+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40564: remote error: tls: bad certificate
2025-08-13T19:56:29.899997457+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40568: remote error: tls: bad certificate
2025-08-13T19:56:29.921379078+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40574: remote error: tls: bad certificate
2025-08-13T19:56:29.951109226+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40578: remote error: tls: bad certificate
2025-08-13T19:56:29.969078150+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40584: remote error: tls: bad certificate
2025-08-13T19:56:29.993981281+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40600: remote error: tls: bad certificate
2025-08-13T19:56:30.017090871+00:00 stderr F 2025/08/13 19:56:30 http: TLS handshake error from 127.0.0.1:40604: remote error: tls: bad certificate
2025-08-13T19:56:35.187288604+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40608: remote error: tls: bad certificate
2025-08-13T19:56:35.213660707+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40622: remote error: tls: bad certificate
2025-08-13T19:56:35.228910922+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40634: remote error: tls: bad certificate
2025-08-13T19:56:35.249924482+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40650: remote error: tls: bad certificate
2025-08-13T19:56:35.252557028+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40656: remote error: tls: bad certificate
2025-08-13T19:56:35.277141189+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40666: remote error: tls: bad certificate
2025-08-13T19:56:35.284230962+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40670: remote error: tls: bad certificate
2025-08-13T19:56:35.300761394+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40672: remote error: tls: bad certificate
2025-08-13T19:56:35.303906424+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40684: remote error: tls: bad certificate
2025-08-13T19:56:35.316918835+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40700: remote error: tls: bad certificate
2025-08-13T19:56:35.332963674+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40712: remote error: tls: bad certificate
2025-08-13T19:56:35.348473916+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40728: remote error: tls: bad certificate
2025-08-13T19:56:35.365609396+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40736: remote error: tls: bad certificate
2025-08-13T19:56:35.387059708+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40738: remote error: tls: bad certificate
2025-08-13T19:56:35.404192377+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40752: remote error: tls: bad certificate
2025-08-13T19:56:35.425539397+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40768: remote error: tls: bad certificate
2025-08-13T19:56:35.442012117+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40772: remote error: tls: bad certificate
2025-08-13T19:56:35.461092042+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40780: remote error: tls: bad certificate
2025-08-13T19:56:35.477609224+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40784: remote error: tls: bad certificate
2025-08-13T19:56:35.496233846+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40792: remote error: tls: bad certificate
2025-08-13T19:56:35.512696136+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40804: remote error: tls: bad certificate
2025-08-13T19:56:35.531024449+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40812: remote error: tls: bad certificate
2025-08-13T19:56:35.548935241+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40828: remote error: tls: bad certificate
2025-08-13T19:56:35.567623174+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40836: remote error: tls: bad certificate
2025-08-13T19:56:35.584547287+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40846: remote error: tls: bad certificate
2025-08-13T19:56:35.607592976+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40856: remote error: tls: bad certificate
2025-08-13T19:56:35.629469020+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40858: remote error: tls: bad certificate
2025-08-13T19:56:35.645192989+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40862: remote error: tls: bad certificate
2025-08-13T19:56:35.661243847+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40870: remote error: tls: bad certificate
2025-08-13T19:56:35.677320057+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40872: remote error: tls: bad certificate
2025-08-13T19:56:35.701640621+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40884: remote error: tls: bad certificate
2025-08-13T19:56:35.721678683+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40898: remote error: tls: bad certificate
2025-08-13T19:56:35.737307960+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40914: remote error: tls: bad certificate
2025-08-13T19:56:35.754155081+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40922: remote error: tls: bad certificate
2025-08-13T19:56:35.769055146+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40926: remote error: tls: bad certificate
2025-08-13T19:56:35.785245618+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40928: remote error: tls: bad certificate
2025-08-13T19:56:35.800938696+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40942: remote error: tls: bad certificate
2025-08-13T19:56:35.822369228+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40956: remote error: tls: bad certificate
2025-08-13T19:56:35.841470544+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40970: remote error: tls: bad certificate
2025-08-13T19:56:35.860976601+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40978: remote error: tls: bad certificate
2025-08-13T19:56:35.881642571+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40988: remote error: tls: bad certificate
2025-08-13T19:56:35.911084762+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40992: remote error: tls: bad certificate
2025-08-13T19:56:35.929287982+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41000: remote error: tls: bad certificate
2025-08-13T19:56:35.944178507+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41006: remote error: tls: bad certificate
2025-08-13T19:56:35.962487619+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41008: remote error: tls: bad certificate
2025-08-13T19:56:35.984109867+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41024: remote error: tls: bad certificate
2025-08-13T19:56:36.002488702+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41034: remote error: tls: bad certificate
2025-08-13T19:56:36.019123537+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41038: remote error: tls: bad certificate
2025-08-13T19:56:36.039185030+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41054: remote error: tls: bad certificate
2025-08-13T19:56:36.126893464+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41068: remote error: tls: bad certificate
2025-08-13T19:56:36.147953065+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41074: remote error: tls: bad certificate
2025-08-13T19:56:36.188928986+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41076: remote error: tls: bad certificate
2025-08-13T19:56:36.191859689+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41084: remote error: tls: bad certificate
2025-08-13T19:56:36.210958865+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41090: remote error: tls: bad certificate
2025-08-13T19:56:36.230747780+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41096: remote error: tls: bad certificate
2025-08-13T19:56:36.247985622+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41098: remote error: tls: bad certificate
2025-08-13T19:56:36.263572937+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41100: remote error: tls: bad certificate
2025-08-13T19:56:36.287882181+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41116: remote error: tls: bad certificate
2025-08-13T19:56:36.309217850+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41126: remote error: tls: bad certificate
2025-08-13T19:56:36.330068596+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41130: remote error: tls: bad certificate
2025-08-13T19:56:36.353389761+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41142: remote error: tls: bad certificate
2025-08-13T19:56:36.372585129+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41144: remote error: tls: bad certificate
2025-08-13T19:56:36.395342199+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41160: remote error: tls: bad certificate
2025-08-13T19:56:36.412297303+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41172: remote error: tls: bad certificate
2025-08-13T19:56:36.431758728+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41188: remote error: tls: bad certificate
2025-08-13T19:56:36.449463374+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41202: remote error: tls: bad certificate
2025-08-13T19:56:36.466698776+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41214: remote error: tls: bad certificate
2025-08-13T19:56:36.484006140+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41216: remote error: tls: bad certificate
2025-08-13T19:56:36.503870588+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41230: remote error: tls: bad certificate
2025-08-13T19:56:36.522234852+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41246: remote error: tls: bad certificate
2025-08-13T19:56:36.539405542+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41258: remote error: tls: bad certificate
2025-08-13T19:56:36.555686927+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41262: remote error: tls: bad certificate
2025-08-13T19:56:45.232549103+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55276: remote error: tls: bad certificate
2025-08-13T19:56:45.254060377+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55286: remote error: tls: bad certificate
2025-08-13T19:56:45.274263444+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55292: remote error: tls: bad certificate
2025-08-13T19:56:45.295747548+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55304: remote error: tls: bad certificate
2025-08-13T19:56:45.315181213+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55314: remote error: tls: bad certificate
2025-08-13T19:56:45.333756853+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55318: remote error: tls: bad certificate
2025-08-13T19:56:45.354737182+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55334: remote error: tls: bad certificate
2025-08-13T19:56:45.373444306+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55342: remote error: tls: bad certificate
2025-08-13T19:56:45.393533520+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55350: remote error: tls: bad certificate
2025-08-13T19:56:45.410423982+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55360: remote error: tls: bad certificate
2025-08-13T19:56:45.427273023+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate
2025-08-13T19:56:45.444438213+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55378: remote error: tls: bad certificate
2025-08-13T19:56:45.460449321+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55384: remote error: tls: bad certificate
2025-08-13T19:56:45.481026718+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55398: remote error: tls: bad certificate
2025-08-13T19:56:45.505608970+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55414: remote error: tls: bad certificate
2025-08-13T19:56:45.521532935+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55426: remote error: tls: bad certificate
2025-08-13T19:56:45.530583033+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55428: remote error: tls: bad certificate
2025-08-13T19:56:45.544694146+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55436: remote error: tls: bad certificate
2025-08-13T19:56:45.547977270+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55438: remote error: tls: bad certificate
2025-08-13T19:56:45.563034420+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55452: remote error: tls: bad certificate
2025-08-13T19:56:45.565378167+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55468: remote error: tls: bad certificate
2025-08-13T19:56:45.579891871+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55484: remote error: tls: bad certificate
2025-08-13T19:56:45.586223532+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate
2025-08-13T19:56:45.600715616+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55488: remote error: tls: bad certificate
2025-08-13T19:56:45.609232269+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55496: remote error: tls: bad certificate
2025-08-13T19:56:45.618993548+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55512: remote error: tls: bad certificate
2025-08-13T19:56:45.635656164+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55526: remote error: tls: bad certificate
2025-08-13T19:56:45.652487294+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55540: remote error: tls: bad certificate
2025-08-13T19:56:45.675504121+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55542: remote error: tls: bad certificate
2025-08-13T19:56:45.705092476+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55552: remote error: tls: bad certificate
2025-08-13T19:56:45.721005711+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55566: remote error: tls: bad certificate
2025-08-13T19:56:45.735256278+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55570: remote error: tls: bad certificate
2025-08-13T19:56:45.750643727+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55582: remote error: tls: bad certificate
2025-08-13T19:56:45.777755061+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55598: remote error: tls: bad certificate
2025-08-13T19:56:45.799503372+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55606: remote error: tls: bad certificate
2025-08-13T19:56:45.816620341+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55614: remote error: tls: bad certificate
2025-08-13T19:56:45.835108589+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55620: remote error: tls: bad certificate
2025-08-13T19:56:45.852698741+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55630: remote error: tls: bad certificate
2025-08-13T19:56:45.874041651+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55632: remote error: tls: bad certificate
2025-08-13T19:56:45.890161121+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55638: remote error: tls: bad certificate
2025-08-13T19:56:45.907023903+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55644: remote error: tls: bad certificate
2025-08-13T19:56:45.925525041+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55648: remote error: tls: bad certificate
2025-08-13T19:56:45.943903096+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55662: remote error: tls: bad certificate
2025-08-13T19:56:45.963412623+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55668: remote error: tls: bad certificate
2025-08-13T19:56:45.980134920+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate
2025-08-13T19:56:46.000327957+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate
2025-08-13T19:56:46.017528248+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55702: remote error: tls: bad certificate
2025-08-13T19:56:46.032597078+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55712: remote error: tls: bad certificate
2025-08-13T19:56:46.047546835+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55722: remote error: tls: bad certificate
2025-08-13T19:56:46.063708677+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55732: remote error: tls: bad certificate
2025-08-13T19:56:46.081696430+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55738: remote error: tls: bad certificate
2025-08-13T19:56:46.097068889+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55746: remote error: tls: bad certificate
2025-08-13T19:56:46.110881664+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55748: remote error: tls: bad certificate
2025-08-13T19:56:46.126720106+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55762: remote error: tls: bad certificate
2025-08-13T19:56:46.140744616+00:00 stderr F 2025/08/13 19:56:46 http: TLS
handshake error from 127.0.0.1:55766: remote error: tls: bad certificate 2025-08-13T19:56:46.160978344+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55770: remote error: tls: bad certificate 2025-08-13T19:56:46.180014938+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55772: remote error: tls: bad certificate 2025-08-13T19:56:46.200083631+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55776: remote error: tls: bad certificate 2025-08-13T19:56:46.217562270+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55780: remote error: tls: bad certificate 2025-08-13T19:56:46.235698008+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55782: remote error: tls: bad certificate 2025-08-13T19:56:46.251926321+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55790: remote error: tls: bad certificate 2025-08-13T19:56:46.267677251+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55800: remote error: tls: bad certificate 2025-08-13T19:56:46.283754510+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55808: remote error: tls: bad certificate 2025-08-13T19:56:46.301716153+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55820: remote error: tls: bad certificate 2025-08-13T19:56:46.315549568+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55828: remote error: tls: bad certificate 2025-08-13T19:56:46.331707829+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55834: remote error: tls: bad certificate 2025-08-13T19:56:46.348641403+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55838: remote error: tls: bad certificate 2025-08-13T19:56:46.373020389+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55850: remote error: tls: bad certificate 
2025-08-13T19:56:46.389893821+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55852: remote error: tls: bad certificate 2025-08-13T19:56:46.402739798+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55862: remote error: tls: bad certificate 2025-08-13T19:56:46.419895958+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55866: remote error: tls: bad certificate 2025-08-13T19:56:46.436435340+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55876: remote error: tls: bad certificate 2025-08-13T19:56:55.251937089+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57292: remote error: tls: bad certificate 2025-08-13T19:56:55.303880243+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57306: remote error: tls: bad certificate 2025-08-13T19:56:55.334475496+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57318: remote error: tls: bad certificate 2025-08-13T19:56:55.358762930+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57324: remote error: tls: bad certificate 2025-08-13T19:56:55.391517465+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57326: remote error: tls: bad certificate 2025-08-13T19:56:55.414621175+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57332: remote error: tls: bad certificate 2025-08-13T19:56:55.434673147+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate 2025-08-13T19:56:55.459612429+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57346: remote error: tls: bad certificate 2025-08-13T19:56:55.481214456+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57362: remote error: tls: bad certificate 2025-08-13T19:56:55.504114070+00:00 stderr F 2025/08/13 19:56:55 http: TLS 
handshake error from 127.0.0.1:57364: remote error: tls: bad certificate 2025-08-13T19:56:55.519919441+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57378: remote error: tls: bad certificate 2025-08-13T19:56:55.540751916+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57386: remote error: tls: bad certificate 2025-08-13T19:56:55.568641093+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57396: remote error: tls: bad certificate 2025-08-13T19:56:55.587000887+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57400: remote error: tls: bad certificate 2025-08-13T19:56:55.604516937+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57410: remote error: tls: bad certificate 2025-08-13T19:56:55.623090237+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57426: remote error: tls: bad certificate 2025-08-13T19:56:55.645844407+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57442: remote error: tls: bad certificate 2025-08-13T19:56:55.667699131+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57452: remote error: tls: bad certificate 2025-08-13T19:56:55.687297951+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate 2025-08-13T19:56:55.706473568+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57472: remote error: tls: bad certificate 2025-08-13T19:56:55.724226585+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate 2025-08-13T19:56:55.739952854+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57498: remote error: tls: bad certificate 2025-08-13T19:56:55.754345995+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57502: remote error: tls: bad certificate 
2025-08-13T19:56:55.774045348+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57508: remote error: tls: bad certificate 2025-08-13T19:56:55.791300540+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57514: remote error: tls: bad certificate 2025-08-13T19:56:55.809028357+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57518: remote error: tls: bad certificate 2025-08-13T19:56:55.839509397+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57530: remote error: tls: bad certificate 2025-08-13T19:56:55.857904082+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57538: remote error: tls: bad certificate 2025-08-13T19:56:55.864706607+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57546: remote error: tls: bad certificate 2025-08-13T19:56:55.880642142+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57548: remote error: tls: bad certificate 2025-08-13T19:56:55.889942997+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57556: remote error: tls: bad certificate 2025-08-13T19:56:55.912389418+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57558: remote error: tls: bad certificate 2025-08-13T19:56:55.919864572+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57570: remote error: tls: bad certificate 2025-08-13T19:56:55.930057443+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57576: remote error: tls: bad certificate 2025-08-13T19:56:55.946863433+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57590: remote error: tls: bad certificate 2025-08-13T19:56:55.953193543+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57596: remote error: tls: bad certificate 2025-08-13T19:56:55.972275648+00:00 stderr F 2025/08/13 19:56:55 http: TLS 
handshake error from 127.0.0.1:57612: remote error: tls: bad certificate 2025-08-13T19:56:55.979193276+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57628: remote error: tls: bad certificate 2025-08-13T19:56:55.999218188+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57630: remote error: tls: bad certificate 2025-08-13T19:56:56.019032933+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57642: remote error: tls: bad certificate 2025-08-13T19:56:56.038596002+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57646: remote error: tls: bad certificate 2025-08-13T19:56:56.059092197+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57660: remote error: tls: bad certificate 2025-08-13T19:56:56.077239265+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57672: remote error: tls: bad certificate 2025-08-13T19:56:56.093169110+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57684: remote error: tls: bad certificate 2025-08-13T19:56:56.109713403+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57694: remote error: tls: bad certificate 2025-08-13T19:56:56.129161008+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57696: remote error: tls: bad certificate 2025-08-13T19:56:56.145271858+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57706: remote error: tls: bad certificate 2025-08-13T19:56:56.165505986+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate 2025-08-13T19:56:56.184171279+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57726: remote error: tls: bad certificate 2025-08-13T19:56:56.200467834+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57736: remote error: tls: bad certificate 
2025-08-13T19:56:56.218955612+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57750: remote error: tls: bad certificate 2025-08-13T19:56:56.238091339+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57754: remote error: tls: bad certificate 2025-08-13T19:56:56.255381722+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57766: remote error: tls: bad certificate 2025-08-13T19:56:56.273644024+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57778: remote error: tls: bad certificate 2025-08-13T19:56:56.292020579+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57792: remote error: tls: bad certificate 2025-08-13T19:56:56.312756671+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57802: remote error: tls: bad certificate 2025-08-13T19:56:56.334546453+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57814: remote error: tls: bad certificate 2025-08-13T19:56:56.357384105+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57820: remote error: tls: bad certificate 2025-08-13T19:56:56.380181586+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57828: remote error: tls: bad certificate 2025-08-13T19:56:56.395532484+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57838: remote error: tls: bad certificate 2025-08-13T19:56:56.415227557+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57854: remote error: tls: bad certificate 2025-08-13T19:56:56.436356060+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57866: remote error: tls: bad certificate 2025-08-13T19:56:56.454692693+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57868: remote error: tls: bad certificate 2025-08-13T19:56:56.474250182+00:00 stderr F 2025/08/13 19:56:56 http: TLS 
handshake error from 127.0.0.1:57874: remote error: tls: bad certificate 2025-08-13T19:56:56.490920798+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57884: remote error: tls: bad certificate 2025-08-13T19:56:56.508707886+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57890: remote error: tls: bad certificate 2025-08-13T19:56:56.528748348+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57892: remote error: tls: bad certificate 2025-08-13T19:56:56.547876064+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57894: remote error: tls: bad certificate 2025-08-13T19:56:56.567367651+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57906: remote error: tls: bad certificate 2025-08-13T19:56:56.584531361+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57916: remote error: tls: bad certificate 2025-08-13T19:56:56.600718173+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57932: remote error: tls: bad certificate 2025-08-13T19:56:56.617605085+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57948: remote error: tls: bad certificate 2025-08-13T19:57:05.230301916+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45610: remote error: tls: bad certificate 2025-08-13T19:57:05.248021232+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45624: remote error: tls: bad certificate 2025-08-13T19:57:05.273461108+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45634: remote error: tls: bad certificate 2025-08-13T19:57:05.305141713+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45636: remote error: tls: bad certificate 2025-08-13T19:57:05.325868925+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45640: remote error: tls: bad certificate 
2025-08-13T19:57:05.342881591+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45656: remote error: tls: bad certificate 2025-08-13T19:57:05.367365400+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45662: remote error: tls: bad certificate 2025-08-13T19:57:05.386722343+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45678: remote error: tls: bad certificate 2025-08-13T19:57:05.407868847+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45686: remote error: tls: bad certificate 2025-08-13T19:57:05.424290465+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45692: remote error: tls: bad certificate 2025-08-13T19:57:05.442185886+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45698: remote error: tls: bad certificate 2025-08-13T19:57:05.460172400+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45712: remote error: tls: bad certificate 2025-08-13T19:57:05.478593426+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45724: remote error: tls: bad certificate 2025-08-13T19:57:05.495328664+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45730: remote error: tls: bad certificate 2025-08-13T19:57:05.511511216+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45742: remote error: tls: bad certificate 2025-08-13T19:57:05.529711046+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45748: remote error: tls: bad certificate 2025-08-13T19:57:05.548125822+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45754: remote error: tls: bad certificate 2025-08-13T19:57:05.570967944+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45758: remote error: tls: bad certificate 2025-08-13T19:57:05.586724984+00:00 stderr F 2025/08/13 19:57:05 http: TLS 
handshake error from 127.0.0.1:45768: remote error: tls: bad certificate 2025-08-13T19:57:05.605996564+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45780: remote error: tls: bad certificate 2025-08-13T19:57:05.622596058+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45786: remote error: tls: bad certificate 2025-08-13T19:57:05.643546386+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45798: remote error: tls: bad certificate 2025-08-13T19:57:05.660334256+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45802: remote error: tls: bad certificate 2025-08-13T19:57:05.681204832+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45814: remote error: tls: bad certificate 2025-08-13T19:57:05.702463449+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45826: remote error: tls: bad certificate 2025-08-13T19:57:05.723456308+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45840: remote error: tls: bad certificate 2025-08-13T19:57:05.745062205+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45854: remote error: tls: bad certificate 2025-08-13T19:57:05.763269205+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45860: remote error: tls: bad certificate 2025-08-13T19:57:05.788013961+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45870: remote error: tls: bad certificate 2025-08-13T19:57:05.805992145+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45878: remote error: tls: bad certificate 2025-08-13T19:57:05.822549078+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45882: remote error: tls: bad certificate 2025-08-13T19:57:05.846873602+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45896: remote error: tls: bad certificate 
2025-08-13T19:57:05.864119685+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45910: remote error: tls: bad certificate 2025-08-13T19:57:05.880618756+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45920: remote error: tls: bad certificate 2025-08-13T19:57:05.898885137+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45926: remote error: tls: bad certificate 2025-08-13T19:57:05.919036103+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45940: remote error: tls: bad certificate 2025-08-13T19:57:05.934533455+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45956: remote error: tls: bad certificate 2025-08-13T19:57:05.951481689+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45966: remote error: tls: bad certificate 2025-08-13T19:57:05.969558015+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45972: remote error: tls: bad certificate 2025-08-13T19:57:05.988580929+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45986: remote error: tls: bad certificate 2025-08-13T19:57:06.008454336+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46002: remote error: tls: bad certificate 2025-08-13T19:57:06.025493193+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46018: remote error: tls: bad certificate 2025-08-13T19:57:06.046371529+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46020: remote error: tls: bad certificate 2025-08-13T19:57:06.065501075+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46028: remote error: tls: bad certificate 2025-08-13T19:57:06.088051199+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46032: remote error: tls: bad certificate 2025-08-13T19:57:06.105245760+00:00 stderr F 2025/08/13 19:57:06 http: TLS 
handshake error from 127.0.0.1:46036: remote error: tls: bad certificate 2025-08-13T19:57:06.124323665+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46046: remote error: tls: bad certificate 2025-08-13T19:57:06.146168149+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46048: remote error: tls: bad certificate 2025-08-13T19:57:06.166367205+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46064: remote error: tls: bad certificate 2025-08-13T19:57:06.185961905+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46072: remote error: tls: bad certificate 2025-08-13T19:57:06.203137765+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46074: remote error: tls: bad certificate 2025-08-13T19:57:06.284907090+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46082: remote error: tls: bad certificate 2025-08-13T19:57:06.313498497+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46098: remote error: tls: bad certificate 2025-08-13T19:57:06.334065354+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46106: remote error: tls: bad certificate 2025-08-13T19:57:06.337165382+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46116: remote error: tls: bad certificate 2025-08-13T19:57:06.361948500+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46128: remote error: tls: bad certificate 2025-08-13T19:57:06.372693227+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46130: remote error: tls: bad certificate 2025-08-13T19:57:06.398034201+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46142: remote error: tls: bad certificate 2025-08-13T19:57:06.409981882+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46144: remote error: tls: bad certificate 
2025-08-13T19:57:06.421979374+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46158: remote error: tls: bad certificate 2025-08-13T19:57:06.431347652+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46166: remote error: tls: bad certificate 2025-08-13T19:57:06.441501522+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46170: remote error: tls: bad certificate 2025-08-13T19:57:06.448276865+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46172: remote error: tls: bad certificate 2025-08-13T19:57:06.464864639+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46188: remote error: tls: bad certificate 2025-08-13T19:57:06.483985125+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46194: remote error: tls: bad certificate 2025-08-13T19:57:06.501266748+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46196: remote error: tls: bad certificate 2025-08-13T19:57:06.520198659+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46210: remote error: tls: bad certificate 2025-08-13T19:57:06.539083878+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46212: remote error: tls: bad certificate 2025-08-13T19:57:06.559295805+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46218: remote error: tls: bad certificate 2025-08-13T19:57:06.574943602+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46224: remote error: tls: bad certificate 2025-08-13T19:57:06.596600171+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46236: remote error: tls: bad certificate 2025-08-13T19:57:06.614022658+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46252: remote error: tls: bad certificate 2025-08-13T19:57:10.600037687+00:00 stderr F 2025/08/13 19:57:10 http: TLS 
handshake error from 127.0.0.1:55290: remote error: tls: bad certificate 2025-08-13T19:57:10.621431758+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55298: remote error: tls: bad certificate 2025-08-13T19:57:10.635941312+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55308: remote error: tls: bad certificate 2025-08-13T19:57:10.650896939+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55318: remote error: tls: bad certificate 2025-08-13T19:57:10.669900572+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55320: remote error: tls: bad certificate 2025-08-13T19:57:10.689583654+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55336: remote error: tls: bad certificate 2025-08-13T19:57:10.706357373+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55352: remote error: tls: bad certificate 2025-08-13T19:57:10.724895082+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55354: remote error: tls: bad certificate 2025-08-13T19:57:10.739721396+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:57:10.755175867+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55370: remote error: tls: bad certificate 2025-08-13T19:57:10.777086313+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55374: remote error: tls: bad certificate 2025-08-13T19:57:10.795641893+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55388: remote error: tls: bad certificate 2025-08-13T19:57:10.814930223+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55394: remote error: tls: bad certificate 2025-08-13T19:57:10.832184976+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55400: remote error: tls: bad certificate 
2025-08-13T19:57:10.847275607+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55402: remote error: tls: bad certificate 2025-08-13T19:57:10.889940626+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55410: remote error: tls: bad certificate 2025-08-13T19:57:10.907945069+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55420: remote error: tls: bad certificate 2025-08-13T19:57:10.932224823+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55426: remote error: tls: bad certificate 2025-08-13T19:57:10.956453325+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55440: remote error: tls: bad certificate 2025-08-13T19:57:10.973945124+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55444: remote error: tls: bad certificate 2025-08-13T19:57:10.997486336+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55448: remote error: tls: bad certificate 2025-08-13T19:57:11.018105455+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55452: remote error: tls: bad certificate 2025-08-13T19:57:11.037043386+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55456: remote error: tls: bad certificate 2025-08-13T19:57:11.065667623+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55472: remote error: tls: bad certificate 2025-08-13T19:57:11.085483659+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55482: remote error: tls: bad certificate 2025-08-13T19:57:11.101455615+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55488: remote error: tls: bad certificate 2025-08-13T19:57:11.124011199+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55502: remote error: tls: bad certificate 2025-08-13T19:57:11.142479637+00:00 stderr F 2025/08/13 19:57:11 http: TLS 
handshake error from 127.0.0.1:55508: remote error: tls: bad certificate 2025-08-13T19:57:11.160491541+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55516: remote error: tls: bad certificate 2025-08-13T19:57:11.181507541+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55520: remote error: tls: bad certificate 2025-08-13T19:57:11.197179268+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55536: remote error: tls: bad certificate 2025-08-13T19:57:11.221586335+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55540: remote error: tls: bad certificate 2025-08-13T19:57:11.240347491+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate 2025-08-13T19:57:11.254939198+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55546: remote error: tls: bad certificate 2025-08-13T19:57:11.272016065+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55552: remote error: tls: bad certificate 2025-08-13T19:57:11.286613462+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55566: remote error: tls: bad certificate 2025-08-13T19:57:11.306205962+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 2025-08-13T19:57:11.323518106+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55586: remote error: tls: bad certificate 2025-08-13T19:57:11.342847668+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55600: remote error: tls: bad certificate 2025-08-13T19:57:11.362680484+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55612: remote error: tls: bad certificate 2025-08-13T19:57:11.380238716+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55626: remote error: tls: bad certificate 
2025-08-13T19:57:11.406290580+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55634: remote error: tls: bad certificate 2025-08-13T19:57:11.422362459+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55644: remote error: tls: bad certificate 2025-08-13T19:57:11.441281279+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55660: remote error: tls: bad certificate 2025-08-13T19:57:11.462956728+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55668: remote error: tls: bad certificate 2025-08-13T19:57:11.480138568+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55670: remote error: tls: bad certificate 2025-08-13T19:57:11.498075931+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:57:11.515517819+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55676: remote error: tls: bad certificate 2025-08-13T19:57:11.534354167+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate 2025-08-13T19:57:11.556291493+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate 2025-08-13T19:57:11.576725386+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55704: remote error: tls: bad certificate 2025-08-13T19:57:11.598127228+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55718: remote error: tls: bad certificate 2025-08-13T19:57:11.613018833+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55722: remote error: tls: bad certificate 2025-08-13T19:57:11.629109082+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55728: remote error: tls: bad certificate 2025-08-13T19:57:11.647121027+00:00 stderr F 2025/08/13 19:57:11 http: TLS 
handshake error from 127.0.0.1:55740: remote error: tls: bad certificate 2025-08-13T19:57:11.665426079+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55756: remote error: tls: bad certificate 2025-08-13T19:57:11.682359963+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55764: remote error: tls: bad certificate 2025-08-13T19:57:11.700356587+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55770: remote error: tls: bad certificate 2025-08-13T19:57:11.713879813+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55786: remote error: tls: bad certificate 2025-08-13T19:57:11.733523184+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55796: remote error: tls: bad certificate 2025-08-13T19:57:11.777744517+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55798: remote error: tls: bad certificate 2025-08-13T19:57:11.802611327+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55808: remote error: tls: bad certificate 2025-08-13T19:57:11.829885545+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55810: remote error: tls: bad certificate 2025-08-13T19:57:11.858481022+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55820: remote error: tls: bad certificate 2025-08-13T19:57:11.886875443+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55826: remote error: tls: bad certificate 2025-08-13T19:57:11.910943450+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55830: remote error: tls: bad certificate 2025-08-13T19:57:11.931481526+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55842: remote error: tls: bad certificate 2025-08-13T19:57:14.325250390+00:00 stderr F I0813 19:57:14.325188 1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" 
event="REMOVE \"/etc/webhook-cert/tls.crt\"" 2025-08-13T19:57:14.326847276+00:00 stderr F I0813 19:57:14.326768 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T19:57:14.326941248+00:00 stderr F I0813 19:57:14.326923 1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.key\"" 2025-08-13T19:57:14.327263998+00:00 stderr F I0813 19:57:14.327247 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T20:04:58.811366355+00:00 stderr F I0813 20:04:58.810626 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:03.888436763+00:00 stderr F I0813 20:09:03.887738 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:11:01.590145355+00:00 stderr F 2025/08/13 20:11:01 http: TLS handshake error from 127.0.0.1:59200: EOF 2025-08-13T20:41:21.499517961+00:00 stderr F 2025/08/13 20:41:21 http: TLS handshake error from 127.0.0.1:56308: EOF 2025-08-13T20:42:36.402219136+00:00 stderr F I0813 20:42:36.399978 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:46.384981592+00:00 stderr F I0813 20:42:46.384029 1 ovnkubeidentity.go:297] Received signal terminated.... 2025-08-13T20:42:46.384981592+00:00 stderr F I0813 20:42:46.384942 1 ovnkubeidentity.go:77] Waiting (3m20s) for kubernetes-api to stop... 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log
2025-08-13T20:03:52.252101561+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T20:03:52.252902494+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T20:03:52.258513094+00:00 stdout F I0813 20:03:52.257953568 - network-node-identity - start approver 2025-08-13T20:03:52.258534125+00:00 stderr F + echo 'I0813 20:03:52.257953568 - network-node-identity - start approver' 2025-08-13T20:03:52.259139592+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4 2025-08-13T20:03:52.393001511+00:00 stderr F I0813 20:03:52.392657 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443 extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json 
csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]} 2025-08-13T20:03:52.393001511+00:00 stderr F W0813 20:03:52.392885 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T20:03:52.398011554+00:00 stderr F I0813 20:03:52.397928 1 ovnkubeidentity.go:471] Starting certificate signing request approver 2025-08-13T20:03:52.398136927+00:00 stderr F I0813 20:03:52.398089 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity... 2025-08-13T20:03:52.403382227+00:00 stderr F E0813 20:03:52.403301 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:52.403406728+00:00 stderr F I0813 20:03:52.403375 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:04:31.982134314+00:00 stderr F I0813 20:04:31.980755 1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired 2025-08-13T20:04:31.982134314+00:00 stderr F I0813 20:04:31.980925 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:05:00.826537302+00:00 stderr F I0813 20:05:00.826382 1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired 2025-08-13T20:05:00.826537302+00:00 stderr F I0813 20:05:00.826425 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:05:28.356696557+00:00 stderr F I0813 20:05:28.355349 1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired 
2025-08-13T20:05:28.356696557+00:00 stderr F I0813 20:05:28.355398 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:06:04.133623897+00:00 stderr F I0813 20:06:04.133466 1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:06:04.160531788+00:00 stderr F I0813 20:06:04.159733 1 recorder.go:104] "crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"31975"} reason="LeaderElection" 2025-08-13T20:06:04.164734928+00:00 stderr F I0813 20:06:04.164577 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T20:06:04.164734928+00:00 stderr F I0813 20:06:04.164702 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:06:04.211696293+00:00 stderr F I0813 20:06:04.211552 1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (9h50m56.013963378s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:06:04.211696293+00:00 stderr F I0813 20:06:04.211604 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:06:04.253691615+00:00 stderr F I0813 20:06:04.253587 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:06:04.309922526+00:00 stderr F I0813 20:06:04.309672 1 controller.go:220] "Starting 
workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1 2025-08-13T20:06:04.346192274+00:00 stderr F I0813 20:06:04.346069 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 60.352µs 2025-08-13T20:06:04.346385140+00:00 stderr F I0813 20:06:04.346317 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 29.561µs 2025-08-13T20:06:04.346551995+00:00 stderr F I0813 20:06:04.346480 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 22.77µs 2025-08-13T20:06:04.346604026+00:00 stderr F I0813 20:06:04.346568 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 19.381µs 2025-08-13T20:06:04.346665638+00:00 stderr F I0813 20:06:04.346625 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 18.661µs 2025-08-13T20:06:04.346920915+00:00 stderr F I0813 20:06:04.346762 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 15.041µs 2025-08-13T20:08:26.272461173+00:00 stderr F E0813 20:08:26.272090 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:26.297370847+00:00 stderr F I0813 20:08:26.293078 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 2 items received 2025-08-13T20:08:26.308511837+00:00 stderr F I0813 20:08:26.303578 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=599&watch=true": dial tcp 
192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.178411178+00:00 stderr F I0813 20:08:27.177465 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=541&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:29.356501355+00:00 stderr F I0813 20:08:29.356300 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=472&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:34.382094292+00:00 stderr F I0813 20:08:34.376626 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=456&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:45.578744749+00:00 stderr F I0813 20:08:45.578627 1 reflector.go:449] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest closed with: too old resource version: 32771 (32913) 2025-08-13T20:09:03.647365741+00:00 stderr F I0813 20:09:03.646470 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.660568 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest 
from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663174 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 62.051µs 2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663258 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 38.281µs 2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663320 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 39.452µs 2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.664963 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 1.617777ms 2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.665162 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 37.851µs 2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.665323 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 19.921µs 2025-08-13T20:14:41.664443608+00:00 stderr F I0813 20:14:41.664204 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 6 items received 2025-08-13T20:23:18.672638163+00:00 stderr F I0813 20:23:18.672237 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 10 items received 2025-08-13T20:32:30.677906302+00:00 stderr F I0813 20:32:30.677644 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 11 items received 2025-08-13T20:42:11.687999923+00:00 stderr F I0813 20:42:11.684848 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 11 items received 2025-08-13T20:42:36.392193737+00:00 stderr F I0813 20:42:36.345690 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.412533663+00:00 stderr F I0813 20:42:36.412501 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 0 items received 2025-08-13T20:42:36.547542146+00:00 stderr F I0813 20:42:36.546111 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=37397&timeoutSeconds=395&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:38.747969165+00:00 stderr F I0813 20:42:38.747179 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=37397&timeoutSeconds=562&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.143997762+00:00 stderr F I0813 20:42:40.143888 1 ovnkubeidentity.go:297] Received signal terminated.... 
2025-08-13T20:42:40.159443517+00:00 stderr F I0813 20:42:40.159310 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:42:40.160173778+00:00 stderr F I0813 20:42:40.160116 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:42:40.161560158+00:00 stderr F I0813 20:42:40.161479 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:40.161560158+00:00 stderr F I0813 20:42:40.161527 1 controller.go:242] "All workers finished" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:40.162157096+00:00 stderr F I0813 20:42:40.162096 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:42:40.162300340+00:00 stderr F I0813 20:42:40.162205 1 reflector.go:295] Stopping reflector *v1.CertificateSigningRequest (9h50m56.013963378s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163570 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163608 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163621 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" 2025-08-13T20:42:40.168672223+00:00 stderr F E0813 20:42:40.167441 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:40.169350333+00:00 stderr F I0813 20:42:40.168700 1 recorder.go:104] "crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 stopped leading" logger="events" 
type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"37542"} reason="LeaderElection"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.log
2026-01-20T10:47:24.936135091+00:00 stderr F + [[ -f /env/_master ]] 2026-01-20T10:47:24.936465599+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2026-01-20T10:47:24.940679613+00:00 stdout F I0120 10:47:24.938572167 - network-node-identity - start approver 2026-01-20T10:47:24.940696173+00:00 stderr F + echo 'I0120 10:47:24.938572167 - network-node-identity - start approver' 2026-01-20T10:47:24.940696173+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4 2026-01-20T10:47:24.982001520+00:00 stderr F I0120 10:47:24.981867 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443 extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]} 2026-01-20T10:47:24.982001520+00:00 stderr F W0120 10:47:24.981961 1 client_config.go:618] Neither 
--kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2026-01-20T10:47:24.982766781+00:00 stderr F I0120 10:47:24.982734 1 ovnkubeidentity.go:471] Starting certificate signing request approver 2026-01-20T10:47:24.984165879+00:00 stderr F I0120 10:47:24.982813 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity... 2026-01-20T10:47:24.990583712+00:00 stderr F I0120 10:47:24.990464 1 leaderelection.go:354] lock is held by crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 and has not yet expired 2026-01-20T10:47:24.990583712+00:00 stderr F I0120 10:47:24.990480 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2026-01-20T10:47:53.600250683+00:00 stderr F I0120 10:47:53.600169 1 leaderelection.go:354] lock is held by crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 and has not yet expired 2026-01-20T10:47:53.600250683+00:00 stderr F I0120 10:47:53.600190 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2026-01-20T10:48:32.353389860+00:00 stderr F I0120 10:48:32.353148 1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity 2026-01-20T10:48:32.354591715+00:00 stderr F I0120 10:48:32.354535 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2026-01-20T10:48:32.354591715+00:00 stderr F I0120 10:48:32.354572 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2026-01-20T10:48:32.354591715+00:00 stderr F I0120 10:48:32.354526 1 recorder.go:104] "crc_93218677-d723-4d59-88a6-2c9b331e1b9b became leader" logger="events" type="Normal" 
object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"39902"} reason="LeaderElection" 2026-01-20T10:48:32.359876073+00:00 stderr F I0120 10:48:32.359768 1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (10h26m35.67204286s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:48:32.359876073+00:00 stderr F I0120 10:48:32.359805 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:48:32.363119434+00:00 stderr F I0120 10:48:32.363011 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:48:32.460461659+00:00 stderr F I0120 10:48:32.460310 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1 2026-01-20T10:48:32.461150728+00:00 stderr F I0120 10:48:32.461049 1 recorder.go:104] "CSR \"csr-d877d\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-d877d"} reason="CSRApproved" 2026-01-20T10:48:32.468280558+00:00 stderr F I0120 10:48:32.467113 1 approver.go:230] Finished syncing CSR csr-d877d for crc node in 6.337368ms 2026-01-20T10:48:32.468280558+00:00 stderr F I0120 10:48:32.467216 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 58.982µs 2026-01-20T10:48:32.468280558+00:00 stderr F I0120 10:48:32.467359 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 28.671µs 2026-01-20T10:48:32.468280558+00:00 stderr F I0120 10:48:32.467562 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 48.711µs 2026-01-20T10:48:32.468280558+00:00 stderr F I0120 
10:48:32.467664 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 41.231µs 2026-01-20T10:48:32.468280558+00:00 stderr F I0120 10:48:32.467763 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 27.761µs 2026-01-20T10:48:32.468280558+00:00 stderr F I0120 10:48:32.467853 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 30.911µs 2026-01-20T10:48:32.468280558+00:00 stderr F I0120 10:48:32.467897 1 approver.go:230] Finished syncing CSR csr-d877d for unknown node in 26.281µs 2026-01-20T10:48:32.511676358+00:00 stderr F I0120 10:48:32.511597 1 approver.go:230] Finished syncing CSR csr-d877d for unknown node in 82.012µs 2026-01-20T10:48:35.154259618+00:00 stderr F I0120 10:48:35.154164 1 recorder.go:104] "CSR \"csr-d6729\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-d6729"} reason="CSRApproved" 2026-01-20T10:48:35.159018492+00:00 stderr F I0120 10:48:35.158956 1 approver.go:230] Finished syncing CSR csr-d6729 for crc node in 5.171196ms 2026-01-20T10:48:35.159424223+00:00 stderr F I0120 10:48:35.159315 1 approver.go:230] Finished syncing CSR csr-d6729 for unknown node in 263.698µs 2026-01-20T10:48:35.168116227+00:00 stderr F I0120 10:48:35.165826 1 approver.go:230] Finished syncing CSR csr-d6729 for unknown node in 91.592µs 2026-01-20T10:54:30.365509087+00:00 stderr F I0120 10:54:30.365430 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 12 items received 2026-01-20T10:57:09.921054057+00:00 stderr F I0120 10:57:09.920401 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 2 items received 2026-01-20T10:57:09.921204311+00:00 stderr F I0120 10:57:09.921054 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest 
returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42605&timeoutSeconds=434&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:11.487149313+00:00 stderr F I0120 10:57:11.487045 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42605&timeoutSeconds=306&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:12.705373909+00:00 stderr F E0120 10:57:12.705294 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:13.155384460+00:00 stderr F I0120 10:57:13.155282 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42605&timeoutSeconds=589&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:19.047393635+00:00 stderr F I0120 10:57:19.047295 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42605&timeoutSeconds=444&watch=true": dial tcp 38.102.83.220:6443: connect: connection refused - backing off 2026-01-20T10:57:26.448119820+00:00 stderr F 
I0120 10:57:26.448013 1 reflector.go:449] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest closed with: too old resource version: 42605 (43263) 2026-01-20T10:57:51.519102482+00:00 stderr F I0120 10:57:51.519013 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:57:51.522239384+00:00 stderr F I0120 10:57:51.522208 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:57:51.523603000+00:00 stderr F I0120 10:57:51.522632 1 approver.go:230] Finished syncing CSR csr-d6729 for unknown node in 21.061µs 2026-01-20T10:57:51.523603000+00:00 stderr F I0120 10:57:51.522676 1 approver.go:230] Finished syncing CSR csr-d877d for unknown node in 19.161µs 2026-01-20T10:57:51.523603000+00:00 stderr F I0120 10:57:51.522711 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 19.331µs 2026-01-20T10:57:51.523603000+00:00 stderr F I0120 10:57:51.522743 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 19.831µs 2026-01-20T10:57:51.523603000+00:00 stderr F I0120 10:57:51.522772 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 17.99µs 2026-01-20T10:57:51.523603000+00:00 stderr F I0120 10:57:51.522808 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 23.76µs 2026-01-20T10:57:51.523603000+00:00 stderr F I0120 10:57:51.522883 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 18.84µs 2026-01-20T10:57:51.523603000+00:00 stderr F I0120 10:57:51.522986 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 18.131µs
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log
2025-08-13T19:50:46.034128438+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T19:50:46.061916362+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T19:50:46.146976623+00:00 stdout F I0813 19:50:46.088947634 - network-node-identity - start approver 2025-08-13T19:50:46.147024094+00:00 stderr F + echo 'I0813 19:50:46.088947634 - network-node-identity - start approver' 2025-08-13T19:50:46.147024094+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4 2025-08-13T19:50:47.258007987+00:00 stderr F I0813 19:50:47.257460 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443 extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]} 2025-08-13T19:50:47.259446928+00:00 stderr F W0813 19:50:47.259418 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-08-13T19:50:47.270012330+00:00 stderr F I0813 19:50:47.269622 1 ovnkubeidentity.go:471] Starting certificate signing request approver 2025-08-13T19:50:47.283521506+00:00 stderr F I0813 19:50:47.282897 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity... 2025-08-13T19:50:47.517100522+00:00 stderr F I0813 19:50:47.516520 1 leaderelection.go:354] lock is held by crc_b2366d4a-899d-4575-ad93-10121ab7b42a and has not yet expired 2025-08-13T19:50:47.517190414+00:00 stderr F I0813 19:50:47.517169 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:24.777301158+00:00 stderr F I0813 19:51:24.777094 1 leaderelection.go:354] lock is held by crc_b2366d4a-899d-4575-ad93-10121ab7b42a and has not yet expired 2025-08-13T19:51:24.777301158+00:00 stderr F I0813 19:51:24.777172 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:48.849637807+00:00 stderr F I0813 19:51:48.849570 1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:48.850659406+00:00 stderr F I0813 19:51:48.850602 1 recorder.go:104] "crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"313ac7f3-4ba1-4dd0-b6b5-40f9f3a73f08","apiVersion":"coordination.k8s.io/v1","resourceVersion":"26671"} reason="LeaderElection" 2025-08-13T19:51:48.851389587+00:00 stderr F I0813 19:51:48.851289 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T19:51:48.851494360+00:00 stderr F I0813 19:51:48.851475 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" 
controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T19:51:48.856428690+00:00 stderr F I0813 19:51:48.856310 1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (9h28m13.239043519s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.856428690+00:00 stderr F I0813 19:51:48.856387 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.869034989+00:00 stderr F I0813 19:51:48.868974 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.957216392+00:00 stderr F I0813 19:51:48.957111 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1 2025-08-13T19:51:48.959225619+00:00 stderr F I0813 19:51:48.959157 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 16.16µs 2025-08-13T19:51:48.959737294+00:00 stderr F I0813 19:51:48.959701 1 recorder.go:104] "CSR \"csr-dpjmc\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-dpjmc"} reason="CSRApproved" 2025-08-13T19:51:48.971123478+00:00 stderr F I0813 19:51:48.971041 1 approver.go:230] Finished syncing CSR csr-dpjmc for crc node in 11.734344ms 2025-08-13T19:51:48.971246191+00:00 stderr F I0813 19:51:48.971186 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 59.072µs 2025-08-13T19:51:48.971320084+00:00 stderr F I0813 19:51:48.971288 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 35.321µs 2025-08-13T19:51:48.971392386+00:00 stderr F I0813 19:51:48.971339 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 24.941µs 2025-08-13T19:51:48.971698394+00:00 stderr F I0813 
19:51:48.971647 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 26.731µs 2025-08-13T19:51:48.983918182+00:00 stderr F I0813 19:51:48.983873 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 132.074µs 2025-08-13T19:57:39.305594345+00:00 stderr F I0813 19:57:39.305421 1 recorder.go:104] "CSR \"csr-fxkbs\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-fxkbs"} reason="CSRApproved" 2025-08-13T19:57:39.316617830+00:00 stderr F I0813 19:57:39.316333 1 approver.go:230] Finished syncing CSR csr-fxkbs for crc node in 12.345512ms 2025-08-13T19:57:39.316617830+00:00 stderr F I0813 19:57:39.316501 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 61.332µs 2025-08-13T19:57:39.330480056+00:00 stderr F I0813 19:57:39.330297 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 62.762µs 2025-08-13T19:58:46.872170084+00:00 stderr F I0813 19:58:46.872041 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 15 items received 2025-08-13T20:02:29.640165504+00:00 stderr F I0813 20:02:29.639336 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 6 items received 2025-08-13T20:02:29.652962089+00:00 stderr F I0813 20:02:29.650652 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=373&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.045345338+00:00 stderr F I0813 20:02:31.045086 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of 
*v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=339&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:34.125400204+00:00 stderr F I0813 20:02:34.125327 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=508&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.664872788+00:00 stderr F I0813 20:02:39.664401 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=503&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:40.049084358+00:00 stderr F E0813 20:02:40.048894 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:02:47.690939252+00:00 stderr F I0813 20:02:47.690717 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=566&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 
2025-08-13T20:03:00.053560022+00:00 stderr F E0813 20:03:00.053423 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:00.839901233+00:00 stderr F I0813 20:03:00.839743 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=591&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:10.047475339+00:00 stderr F I0813 20:03:10.047083 1 leaderelection.go:285] failed to renew lease openshift-network-node-identity/ovnkube-identity: timed out waiting for the condition 2025-08-13T20:03:10.050289689+00:00 stderr F E0813 20:03:10.050206 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:10.050881566+00:00 stderr F I0813 20:03:10.050704 1 recorder.go:104] "crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"30647"} reason="LeaderElection" 2025-08-13T20:03:10.051385910+00:00 stderr F I0813 20:03:10.051306 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051417 1 internal.go:520] "Stopping and waiting for leader election 
runnables" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051459 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051469 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051476 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051484 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" 2025-08-13T20:03:10.052057009+00:00 stderr F error running approver: leader election lost ././@LongLink0000644000000000000000000000024600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015133657715033040 5ustar zuulzuul././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015133657735033042 5ustar zuulzuul././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000001202615133657715033043 0ustar zuulzuul2025-08-13T20:08:10.249757059+00:00 stderr F I0813 20:08:10.244741 1 observer_polling.go:159] Starting file 
observer 2025-08-13T20:08:10.252956431+00:00 stderr F I0813 20:08:10.252847 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-08-13T20:08:10.353970297+00:00 stderr F I0813 20:08:10.353841 1 base_controller.go:73] Caches are synced for CertSyncController 2025-08-13T20:08:10.353970297+00:00 stderr F I0813 20:08:10.353922 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 2025-08-13T20:08:10.354230874+00:00 stderr F I0813 20:08:10.354151 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:08:10.354230874+00:00 stderr F I0813 20:08:10.354188 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.319571 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.319662 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.320047 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.320067 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612236448+00:00 stderr F I0813 20:09:06.612141 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612236448+00:00 stderr F I0813 20:09:06.612204 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612404873+00:00 stderr F I0813 20:09:06.612359 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612404873+00:00 stderr F I0813 20:09:06.612391 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612481265+00:00 stderr F I0813 20:09:06.612449 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612481265+00:00 stderr F I0813 20:09:06.612458 1 
certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.324387120+00:00 stderr F I0813 20:19:06.324231 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.324387120+00:00 stderr F I0813 20:19:06.324298 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.612445676+00:00 stderr F I0813 20:19:06.612323 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.612445676+00:00 stderr F I0813 20:19:06.612387 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.612665402+00:00 stderr F I0813 20:19:06.612603 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.612665402+00:00 stderr F I0813 20:19:06.612632 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324132 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324208 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324741 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324753 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.613248366+00:00 stderr F I0813 20:29:06.613183 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.613663758+00:00 stderr F I0813 20:29:06.613587 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.614190093+00:00 stderr F I0813 20:29:06.614167 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.614294426+00:00 stderr F I0813 20:29:06.614275 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key 
false}] 2025-08-13T20:39:06.325265148+00:00 stderr F I0813 20:39:06.324939 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:39:06.326232376+00:00 stderr F I0813 20:39:06.325074 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:39:06.326451882+00:00 stderr F I0813 20:39:06.326356 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:39:06.326451882+00:00 stderr F I0813 20:39:06.326386 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:39:06.614742214+00:00 stderr F I0813 20:39:06.614606 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:39:06.614742214+00:00 stderr F I0813 20:39:06.614657 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000002230115133657735033042 0ustar zuulzuul2026-01-20T10:47:06.703540280+00:00 stderr F I0120 10:47:06.702300 1 observer_polling.go:159] Starting file observer 2026-01-20T10:47:06.703540280+00:00 stderr F I0120 10:47:06.702742 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2026-01-20T10:47:16.706078476+00:00 stderr F W0120 10:47:16.705958 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:47:16.706137978+00:00 stderr F I0120 10:47:16.706114 1 trace.go:236] Trace[192545612]: "Reflector ListAndWatch" 
name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (20-Jan-2026 10:47:06.701) (total time: 10004ms): 2026-01-20T10:47:16.706137978+00:00 stderr F Trace[192545612]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (10:47:16.705) 2026-01-20T10:47:16.706137978+00:00 stderr F Trace[192545612]: [10.00464692s] [10.00464692s] END 2026-01-20T10:47:16.706278643+00:00 stderr F E0120 10:47:16.706251 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:47:16.706330364+00:00 stderr F W0120 10:47:16.706288 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:47:16.706378776+00:00 stderr F I0120 10:47:16.706361 1 trace.go:236] Trace[1624463316]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (20-Jan-2026 10:47:06.702) (total time: 10004ms): 2026-01-20T10:47:16.706378776+00:00 stderr F Trace[1624463316]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (10:47:16.705) 2026-01-20T10:47:16.706378776+00:00 stderr F Trace[1624463316]: [10.004217291s] [10.004217291s] END 2026-01-20T10:47:16.706389866+00:00 stderr F E0120 10:47:16.706379 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:47:23.204999371+00:00 stderr F I0120 10:47:23.203024 1 base_controller.go:73] Caches are synced for CertSyncController 2026-01-20T10:47:23.204999371+00:00 stderr F I0120 10:47:23.203101 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 2026-01-20T10:47:23.204999371+00:00 stderr F I0120 10:47:23.203180 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:47:23.204999371+00:00 stderr F I0120 10:47:23.203197 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:42.242180278+00:00 stderr F I0120 10:49:42.241331 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:42.242180278+00:00 stderr F I0120 10:49:42.241358 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:42.334580543+00:00 stderr F I0120 10:49:42.334351 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:42.334580543+00:00 stderr F I0120 10:49:42.334375 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:42.341456393+00:00 stderr F I0120 10:49:42.341397 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:42.341456393+00:00 stderr F I0120 10:49:42.341422 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:42.348577579+00:00 stderr F I0120 10:49:42.348500 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:42.348577579+00:00 stderr F I0120 10:49:42.348514 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:42.355112339+00:00 stderr F I0120 10:49:42.355019 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:42.355112339+00:00 stderr F I0120 10:49:42.355046 1 
certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:42.383825363+00:00 stderr F I0120 10:49:42.383233 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:42.383825363+00:00 stderr F I0120 10:49:42.383255 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:47.167687287+00:00 stderr F I0120 10:49:47.167585 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:47.167687287+00:00 stderr F I0120 10:49:47.167605 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:47.175389151+00:00 stderr F I0120 10:49:47.174967 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:47.175389151+00:00 stderr F I0120 10:49:47.174984 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:47.181301611+00:00 stderr F I0120 10:49:47.181251 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:47.181301611+00:00 stderr F I0120 10:49:47.181279 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:47.221855007+00:00 stderr F I0120 10:49:47.221288 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:47.221855007+00:00 stderr F I0120 10:49:47.221312 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:47.228182839+00:00 stderr F I0120 10:49:47.228132 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:47.228182839+00:00 stderr F I0120 10:49:47.228157 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:49:47.236052289+00:00 stderr F I0120 10:49:47.235989 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:49:47.236052289+00:00 stderr F I0120 10:49:47.236003 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key 
false}] 2026-01-20T10:57:23.136493323+00:00 stderr F I0120 10:57:23.135999 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:23.136493323+00:00 stderr F I0120 10:57:23.136039 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:57:23.136493323+00:00 stderr F I0120 10:57:23.136203 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:23.136493323+00:00 stderr F I0120 10:57:23.136212 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:57:23.143583581+00:00 stderr F I0120 10:57:23.143462 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:23.143583581+00:00 stderr F I0120 10:57:23.143489 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:57:43.934497074+00:00 stderr F I0120 10:57:43.934397 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:43.934497074+00:00 stderr F I0120 10:57:43.934428 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:57:43.935307835+00:00 stderr F I0120 10:57:43.935228 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:43.935341596+00:00 stderr F I0120 10:57:43.935296 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:57:43.935472570+00:00 stderr F I0120 10:57:43.935432 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:43.935490441+00:00 stderr F I0120 10:57:43.935465 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:57:43.936340993+00:00 stderr F I0120 10:57:43.936286 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:43.936367514+00:00 stderr F I0120 10:57:43.936332 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:57:48.888535785+00:00 stderr F I0120 10:57:48.888462 1 
certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:48.888535785+00:00 stderr F I0120 10:57:48.888495 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2026-01-20T10:57:48.888687999+00:00 stderr F I0120 10:57:48.888655 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:57:48.888687999+00:00 stderr F I0120 10:57:48.888671 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000001134515133657735033050 0ustar zuulzuul2026-01-20T10:42:03.667616189+00:00 stderr F I0120 10:42:03.666947 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2026-01-20T10:42:03.667803354+00:00 stderr F I0120 10:42:03.667541 1 observer_polling.go:159] Starting file observer 2026-01-20T10:42:03.678594429+00:00 stderr F W0120 10:42:03.678020 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.678976880+00:00 stderr F E0120 10:42:03.678934 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.679173856+00:00 stderr F W0120 10:42:03.678186 1 reflector.go:539] 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.679206297+00:00 stderr F E0120 10:42:03.679188 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:15.040868886+00:00 stderr F W0120 10:42:15.040571 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:42:15.040868886+00:00 stderr F I0120 10:42:15.040667 1 trace.go:236] Trace[2051927927]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (20-Jan-2026 10:42:05.037) (total time: 10003ms): 2026-01-20T10:42:15.040868886+00:00 stderr F Trace[2051927927]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (10:42:15.040) 2026-01-20T10:42:15.040868886+00:00 stderr F Trace[2051927927]: [10.003287796s] [10.003287796s] END 2026-01-20T10:42:15.040868886+00:00 stderr F E0120 10:42:15.040686 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:42:15.127054688+00:00 stderr F W0120 10:42:15.126910 1 reflector.go:539] 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:42:15.127054688+00:00 stderr F I0120 10:42:15.127021 1 trace.go:236] Trace[1434856364]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (20-Jan-2026 10:42:05.124) (total time: 10002ms): 2026-01-20T10:42:15.127054688+00:00 stderr F Trace[1434856364]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (10:42:15.126) 2026-01-20T10:42:15.127054688+00:00 stderr F Trace[1434856364]: [10.002561264s] [10.002561264s] END 2026-01-20T10:42:15.127054688+00:00 stderr F E0120 10:42:15.127040 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:42:22.567504580+00:00 stderr F I0120 10:42:22.567374 1 base_controller.go:73] Caches are synced for CertSyncController 2026-01-20T10:42:22.567504580+00:00 stderr F I0120 10:42:22.567439 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 
2026-01-20T10:42:22.567569392+00:00 stderr F I0120 10:42:22.567522 1 certsync_controller.go:66] Syncing configmaps: [] 2026-01-20T10:42:22.567617094+00:00 stderr F I0120 10:42:22.567539 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015133657735033042 5ustar zuulzuul././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000014267515133657715033061 0ustar zuulzuul2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366728 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366807 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366811 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366815 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366819 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366823 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 
2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366827 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366830 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366834 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366838 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366842 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366845 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366849 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366852 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366856 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366859 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366863 1 feature_gate.go:227] unrecognized feature gate: Example 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366867 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366870 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366874 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366877 1 feature_gate.go:227] unrecognized feature 
gate: MetricsCollectionProfiles 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366881 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366886 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366889 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366893 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366896 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366900 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366903 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366907 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366914 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366918 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366922 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366926 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366929 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366934 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 
2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366938 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366941 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366945 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366949 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366952 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366956 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366959 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366964 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366968 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366971 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366975 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366978 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366982 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366986 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366989 1 
feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366993 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.366997 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.367000 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.367004 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.367007 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.367011 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.367015 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.367018 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.367022 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2026-01-20T10:47:06.367129870+00:00 stderr F W0120 10:47:06.367026 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2026-01-20T10:47:06.367786387+00:00 stderr F I0120 10:47:06.367749 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2026-01-20T10:47:06.367786387+00:00 stderr F I0120 10:47:06.367777 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2026-01-20T10:47:06.367798238+00:00 stderr F I0120 10:47:06.367782 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2026-01-20T10:47:06.367798238+00:00 stderr F I0120 10:47:06.367788 1 flags.go:64] FLAG: 
--authentication-skip-lookup="false" 2026-01-20T10:47:06.367798238+00:00 stderr F I0120 10:47:06.367792 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2026-01-20T10:47:06.367806938+00:00 stderr F I0120 10:47:06.367797 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 2026-01-20T10:47:06.367806938+00:00 stderr F I0120 10:47:06.367800 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2026-01-20T10:47:06.367824378+00:00 stderr F I0120 10:47:06.367805 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2026-01-20T10:47:06.367824378+00:00 stderr F I0120 10:47:06.367810 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2026-01-20T10:47:06.367824378+00:00 stderr F I0120 10:47:06.367813 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2026-01-20T10:47:06.367824378+00:00 stderr F I0120 10:47:06.367816 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2026-01-20T10:47:06.367824378+00:00 stderr F I0120 10:47:06.367820 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2026-01-20T10:47:06.367855668+00:00 stderr F I0120 10:47:06.367836 1 flags.go:64] FLAG: --client-ca-file="" 2026-01-20T10:47:06.367855668+00:00 stderr F I0120 10:47:06.367843 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2026-01-20T10:47:06.367855668+00:00 stderr F I0120 10:47:06.367847 1 flags.go:64] FLAG: --contention-profiling="true" 2026-01-20T10:47:06.367855668+00:00 stderr F I0120 10:47:06.367850 1 flags.go:64] FLAG: --disabled-metrics="[]" 2026-01-20T10:47:06.367885769+00:00 stderr F I0120 10:47:06.367855 1 flags.go:64] FLAG: 
--feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2026-01-20T10:47:06.367885769+00:00 stderr F I0120 10:47:06.367878 1 flags.go:64] FLAG: --help="false" 2026-01-20T10:47:06.367893689+00:00 stderr F I0120 10:47:06.367882 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2026-01-20T10:47:06.367893689+00:00 stderr F I0120 10:47:06.367889 1 flags.go:64] FLAG: --kube-api-burst="100" 2026-01-20T10:47:06.367901279+00:00 stderr F I0120 10:47:06.367893 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2026-01-20T10:47:06.367917800+00:00 stderr F I0120 10:47:06.367897 1 flags.go:64] FLAG: --kube-api-qps="50" 2026-01-20T10:47:06.367917800+00:00 stderr F I0120 10:47:06.367903 1 flags.go:64] FLAG: --kubeconfig="" 2026-01-20T10:47:06.367917800+00:00 stderr F I0120 10:47:06.367905 1 flags.go:64] FLAG: --leader-elect="true" 2026-01-20T10:47:06.367917800+00:00 stderr F I0120 10:47:06.367908 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2026-01-20T10:47:06.367917800+00:00 stderr F I0120 10:47:06.367911 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s" 2026-01-20T10:47:06.367917800+00:00 stderr F I0120 10:47:06.367915 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2026-01-20T10:47:06.367931950+00:00 stderr F I0120 10:47:06.367918 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler" 2026-01-20T10:47:06.367931950+00:00 stderr F I0120 10:47:06.367921 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2026-01-20T10:47:06.367931950+00:00 stderr F I0120 10:47:06.367924 1 flags.go:64] FLAG: 
--leader-elect-retry-period="2s" 2026-01-20T10:47:06.367931950+00:00 stderr F I0120 10:47:06.367927 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2026-01-20T10:47:06.367940220+00:00 stderr F I0120 10:47:06.367930 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2026-01-20T10:47:06.367940220+00:00 stderr F I0120 10:47:06.367936 1 flags.go:64] FLAG: --log-json-split-stream="false" 2026-01-20T10:47:06.367947810+00:00 stderr F I0120 10:47:06.367939 1 flags.go:64] FLAG: --logging-format="text" 2026-01-20T10:47:06.367947810+00:00 stderr F I0120 10:47:06.367942 1 flags.go:64] FLAG: --master="" 2026-01-20T10:47:06.367947810+00:00 stderr F I0120 10:47:06.367945 1 flags.go:64] FLAG: --permit-address-sharing="false" 2026-01-20T10:47:06.367955740+00:00 stderr F I0120 10:47:06.367948 1 flags.go:64] FLAG: --permit-port-sharing="false" 2026-01-20T10:47:06.367955740+00:00 stderr F I0120 10:47:06.367951 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s" 2026-01-20T10:47:06.367963201+00:00 stderr F I0120 10:47:06.367954 1 flags.go:64] FLAG: --profiling="true" 2026-01-20T10:47:06.367963201+00:00 stderr F I0120 10:47:06.367957 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2026-01-20T10:47:06.367970491+00:00 stderr F I0120 10:47:06.367961 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2026-01-20T10:47:06.367970491+00:00 stderr F I0120 10:47:06.367965 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2026-01-20T10:47:06.367977911+00:00 stderr F I0120 10:47:06.367969 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2026-01-20T10:47:06.367977911+00:00 stderr F I0120 10:47:06.367973 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2026-01-20T10:47:06.367985021+00:00 stderr F I0120 10:47:06.367977 1 flags.go:64] FLAG: --secure-port="10259" 2026-01-20T10:47:06.367985021+00:00 stderr F I0120 10:47:06.367981 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 
2026-01-20T10:47:06.367992121+00:00 stderr F I0120 10:47:06.367983 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2026-01-20T10:47:06.367999341+00:00 stderr F I0120 10:47:06.367987 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2026-01-20T10:47:06.367999341+00:00 stderr F I0120 10:47:06.367995 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2026-01-20T10:47:06.368006711+00:00 stderr F I0120 10:47:06.367998 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:47:06.368006711+00:00 stderr F I0120 10:47:06.368002 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2026-01-20T10:47:06.368015351+00:00 stderr F I0120 10:47:06.368007 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2026-01-20T10:47:06.368015351+00:00 stderr F I0120 10:47:06.368010 1 flags.go:64] FLAG: --v="2" 2026-01-20T10:47:06.368028821+00:00 stderr F I0120 10:47:06.368015 1 flags.go:64] FLAG: --version="false" 2026-01-20T10:47:06.368028821+00:00 stderr F I0120 10:47:06.368020 1 flags.go:64] FLAG: --vmodule="" 2026-01-20T10:47:06.368028821+00:00 stderr F I0120 10:47:06.368024 1 flags.go:64] FLAG: --write-config-to="" 2026-01-20T10:47:06.371719435+00:00 stderr F I0120 10:47:06.371691 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:47:16.600283059+00:00 stderr F W0120 10:47:16.600215 1 authentication.go:368] Error looking up in-cluster authentication configuration: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout 2026-01-20T10:47:16.600366742+00:00 stderr F W0120 10:47:16.600350 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous. 2026-01-20T10:47:16.600403773+00:00 stderr F W0120 10:47:16.600389 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false 2026-01-20T10:47:23.143370424+00:00 stderr F I0120 10:47:23.143303 1 configfile.go:94] "Using component config" config=< 2026-01-20T10:47:23.143370424+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:47:23.143370424+00:00 stderr F clientConnection: 2026-01-20T10:47:23.143370424+00:00 stderr F acceptContentTypes: "" 2026-01-20T10:47:23.143370424+00:00 stderr F burst: 100 2026-01-20T10:47:23.143370424+00:00 stderr F contentType: application/vnd.kubernetes.protobuf 2026-01-20T10:47:23.143370424+00:00 stderr F kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig 2026-01-20T10:47:23.143370424+00:00 stderr F qps: 50 2026-01-20T10:47:23.143370424+00:00 stderr F enableContentionProfiling: false 2026-01-20T10:47:23.143370424+00:00 stderr F enableProfiling: false 2026-01-20T10:47:23.143370424+00:00 stderr F kind: KubeSchedulerConfiguration 2026-01-20T10:47:23.143370424+00:00 stderr F leaderElection: 2026-01-20T10:47:23.143370424+00:00 stderr F leaderElect: true 2026-01-20T10:47:23.143370424+00:00 stderr F leaseDuration: 2m17s 2026-01-20T10:47:23.143370424+00:00 stderr F renewDeadline: 1m47s 2026-01-20T10:47:23.143370424+00:00 stderr F resourceLock: leases 2026-01-20T10:47:23.143370424+00:00 stderr F resourceName: kube-scheduler 2026-01-20T10:47:23.143370424+00:00 stderr F resourceNamespace: openshift-kube-scheduler 2026-01-20T10:47:23.143370424+00:00 stderr F retryPeriod: 26s 
2026-01-20T10:47:23.143370424+00:00 stderr F parallelism: 16 2026-01-20T10:47:23.143370424+00:00 stderr F percentageOfNodesToScore: 0 2026-01-20T10:47:23.143370424+00:00 stderr F podInitialBackoffSeconds: 1 2026-01-20T10:47:23.143370424+00:00 stderr F podMaxBackoffSeconds: 10 2026-01-20T10:47:23.143370424+00:00 stderr F profiles: 2026-01-20T10:47:23.143370424+00:00 stderr F - pluginConfig: 2026-01-20T10:47:23.143370424+00:00 stderr F - args: 2026-01-20T10:47:23.143370424+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:47:23.143370424+00:00 stderr F kind: DefaultPreemptionArgs 2026-01-20T10:47:23.143370424+00:00 stderr F minCandidateNodesAbsolute: 100 2026-01-20T10:47:23.143370424+00:00 stderr F minCandidateNodesPercentage: 10 2026-01-20T10:47:23.143370424+00:00 stderr F name: DefaultPreemption 2026-01-20T10:47:23.143370424+00:00 stderr F - args: 2026-01-20T10:47:23.143370424+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:47:23.143370424+00:00 stderr F hardPodAffinityWeight: 1 2026-01-20T10:47:23.143370424+00:00 stderr F ignorePreferredTermsOfExistingPods: false 2026-01-20T10:47:23.143370424+00:00 stderr F kind: InterPodAffinityArgs 2026-01-20T10:47:23.143370424+00:00 stderr F name: InterPodAffinity 2026-01-20T10:47:23.143370424+00:00 stderr F - args: 2026-01-20T10:47:23.143370424+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:47:23.143370424+00:00 stderr F kind: NodeAffinityArgs 2026-01-20T10:47:23.143370424+00:00 stderr F name: NodeAffinity 2026-01-20T10:47:23.143370424+00:00 stderr F - args: 2026-01-20T10:47:23.143370424+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:47:23.143370424+00:00 stderr F kind: NodeResourcesBalancedAllocationArgs 2026-01-20T10:47:23.143370424+00:00 stderr F resources: 2026-01-20T10:47:23.143370424+00:00 stderr F - name: cpu 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 1 2026-01-20T10:47:23.143370424+00:00 stderr F - name: memory 
2026-01-20T10:47:23.143370424+00:00 stderr F weight: 1 2026-01-20T10:47:23.143370424+00:00 stderr F name: NodeResourcesBalancedAllocation 2026-01-20T10:47:23.143370424+00:00 stderr F - args: 2026-01-20T10:47:23.143370424+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:47:23.143370424+00:00 stderr F kind: NodeResourcesFitArgs 2026-01-20T10:47:23.143370424+00:00 stderr F scoringStrategy: 2026-01-20T10:47:23.143370424+00:00 stderr F resources: 2026-01-20T10:47:23.143370424+00:00 stderr F - name: cpu 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 1 2026-01-20T10:47:23.143370424+00:00 stderr F - name: memory 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 1 2026-01-20T10:47:23.143370424+00:00 stderr F type: LeastAllocated 2026-01-20T10:47:23.143370424+00:00 stderr F name: NodeResourcesFit 2026-01-20T10:47:23.143370424+00:00 stderr F - args: 2026-01-20T10:47:23.143370424+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:47:23.143370424+00:00 stderr F defaultingType: System 2026-01-20T10:47:23.143370424+00:00 stderr F kind: PodTopologySpreadArgs 2026-01-20T10:47:23.143370424+00:00 stderr F name: PodTopologySpread 2026-01-20T10:47:23.143370424+00:00 stderr F - args: 2026-01-20T10:47:23.143370424+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:47:23.143370424+00:00 stderr F bindTimeoutSeconds: 600 2026-01-20T10:47:23.143370424+00:00 stderr F kind: VolumeBindingArgs 2026-01-20T10:47:23.143370424+00:00 stderr F name: VolumeBinding 2026-01-20T10:47:23.143370424+00:00 stderr F plugins: 2026-01-20T10:47:23.143370424+00:00 stderr F bind: {} 2026-01-20T10:47:23.143370424+00:00 stderr F filter: {} 2026-01-20T10:47:23.143370424+00:00 stderr F multiPoint: 2026-01-20T10:47:23.143370424+00:00 stderr F enabled: 2026-01-20T10:47:23.143370424+00:00 stderr F - name: PrioritySort 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: NodeUnschedulable 
2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: NodeName 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: TaintToleration 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 3 2026-01-20T10:47:23.143370424+00:00 stderr F - name: NodeAffinity 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 2 2026-01-20T10:47:23.143370424+00:00 stderr F - name: NodePorts 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: NodeResourcesFit 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 1 2026-01-20T10:47:23.143370424+00:00 stderr F - name: VolumeRestrictions 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: EBSLimits 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: GCEPDLimits 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: NodeVolumeLimits 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: AzureDiskLimits 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: VolumeBinding 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: VolumeZone 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: PodTopologySpread 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 2 2026-01-20T10:47:23.143370424+00:00 stderr F - name: InterPodAffinity 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 2 2026-01-20T10:47:23.143370424+00:00 stderr F - name: DefaultPreemption 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: NodeResourcesBalancedAllocation 
2026-01-20T10:47:23.143370424+00:00 stderr F weight: 1 2026-01-20T10:47:23.143370424+00:00 stderr F - name: ImageLocality 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 1 2026-01-20T10:47:23.143370424+00:00 stderr F - name: DefaultBinder 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F - name: SchedulingGates 2026-01-20T10:47:23.143370424+00:00 stderr F weight: 0 2026-01-20T10:47:23.143370424+00:00 stderr F permit: {} 2026-01-20T10:47:23.143370424+00:00 stderr F postBind: {} 2026-01-20T10:47:23.143370424+00:00 stderr F postFilter: {} 2026-01-20T10:47:23.143370424+00:00 stderr F preBind: {} 2026-01-20T10:47:23.143370424+00:00 stderr F preEnqueue: {} 2026-01-20T10:47:23.143370424+00:00 stderr F preFilter: {} 2026-01-20T10:47:23.143370424+00:00 stderr F preScore: {} 2026-01-20T10:47:23.143370424+00:00 stderr F queueSort: {} 2026-01-20T10:47:23.143370424+00:00 stderr F reserve: {} 2026-01-20T10:47:23.143370424+00:00 stderr F score: {} 2026-01-20T10:47:23.143370424+00:00 stderr F schedulerName: default-scheduler 2026-01-20T10:47:23.143370424+00:00 stderr F > 2026-01-20T10:47:23.144166166+00:00 stderr F I0120 10:47:23.144132 1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3" 2026-01-20T10:47:23.144242368+00:00 stderr F I0120 10:47:23.144220 1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2026-01-20T10:47:23.146486649+00:00 stderr F I0120 10:47:23.146414 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:47:23.151760291+00:00 stderr F I0120 10:47:23.151216 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:47:23.152832260+00:00 stderr F I0120 10:47:23.152611 1 dynamic_serving_content.go:132] "Starting controller" 
name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:47:23.162258275+00:00 stderr F I0120 10:47:23.162181 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2026-01-20 10:47:23.15838164 +0000 UTC))" 2026-01-20T10:47:23.163144509+00:00 stderr F I0120 10:47:23.163044 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906026\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906026\" (2026-01-20 09:47:06 +0000 UTC to 2027-01-20 09:47:06 +0000 UTC (now=2026-01-20 10:47:23.162960534 +0000 UTC))" 2026-01-20T10:47:23.163777056+00:00 stderr F I0120 10:47:23.163198 1 secure_serving.go:213] Serving securely on [::]:10259 2026-01-20T10:47:23.163796676+00:00 stderr F I0120 10:47:23.163349 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:47:23.176643964+00:00 stderr F I0120 10:47:23.175449 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.176643964+00:00 stderr F I0120 10:47:23.176055 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.176643964+00:00 stderr F I0120 10:47:23.176248 1 reflector.go:351] Caches populated for *v1.ReplicationController from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.181140765+00:00 stderr F I0120 10:47:23.181099 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.189038669+00:00 stderr F I0120 10:47:23.184584 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.189038669+00:00 stderr F I0120 10:47:23.184612 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.189038669+00:00 stderr F I0120 10:47:23.184830 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.189038669+00:00 stderr F I0120 10:47:23.185005 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.189038669+00:00 stderr F I0120 10:47:23.185039 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.189038669+00:00 stderr F I0120 10:47:23.185927 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.199997135+00:00 stderr F I0120 10:47:23.199932 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.200350315+00:00 stderr F I0120 10:47:23.200208 1 node_tree.go:65] "Added node in listed group to NodeTree" node="crc" zone="" 2026-01-20T10:47:23.202050301+00:00 stderr F I0120 10:47:23.201847 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.202375940+00:00 stderr F I0120 10:47:23.202333 1 reflector.go:351] Caches populated for *v1.Namespace from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.212261577+00:00 stderr F I0120 10:47:23.212159 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.212400960+00:00 stderr F I0120 10:47:23.212182 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257146 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257648 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:47:23.257612804 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257676 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:47:23.257661585 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257701 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:47:23.257682006 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257724 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:47:23.257707886 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257741 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.257729257 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257758 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.257746317 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257774 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.257762768 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257791 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.257779158 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257820 1 tlsconfig.go:178] "Loaded client CA" index=8 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:47:23.257802449 +0000 UTC))" 2026-01-20T10:47:23.258118637+00:00 stderr F I0120 10:47:23.257838 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:47:23.257825699 +0000 UTC))" 2026-01-20T10:47:23.262101475+00:00 stderr F I0120 10:47:23.258247 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2026-01-20 10:47:23.25822852 +0000 UTC))" 2026-01-20T10:47:23.262101475+00:00 stderr F I0120 10:47:23.258610 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906026\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906026\" (2026-01-20 09:47:06 +0000 UTC to 2027-01-20 09:47:06 +0000 UTC (now=2026-01-20 10:47:23.25858798 +0000 UTC))" 2026-01-20T10:47:23.270897672+00:00 stderr F I0120 10:47:23.270841 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler... 
2026-01-20T10:49:49.082592223+00:00 stderr F I0120 10:49:49.082541 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/kube-scheduler 2026-01-20T10:49:49.093289459+00:00 stderr F I0120 10:49:49.093222 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-ndkkp" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:49:49.093937819+00:00 stderr F I0120 10:49:49.093906 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:49:49.094050252+00:00 stderr F I0120 10:49:49.094032 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-qr2m4" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:49:49.094518046+00:00 stderr F I0120 10:49:49.094488 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-tshnz" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:50:57.203126453+00:00 stderr F I0120 10:50:57.203038 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/marketplace-operator-8b455464d-nc8zc" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:51:04.315415460+00:00 stderr F I0120 10:51:04.315311 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-2mx7j" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:51:04.916564160+00:00 stderr F I0120 10:51:04.915472 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-mpjb7" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:51:05.906629316+00:00 stderr F I0120 10:51:05.906360 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-2nxg8" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:51:06.511427791+00:00 stderr F 
I0120 10:51:06.511356 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-6m4w2" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:51:06.907691297+00:00 stderr F I0120 10:51:06.907627 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-9k9jk" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:53:15.396430456+00:00 stderr F I0120 10:53:15.393342 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-multus/cni-sysctl-allowlist-ds-xhs5c" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:55:16.099045537+00:00 stderr F I0120 10:55:16.098865 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-image-registry/image-registry-75b7bb6564-ln84v" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096193 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.096153234 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096229 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.096213906 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096249 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC 
(now=2026-01-20 10:56:07.096235216 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096266 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.096253887 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096282 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096270337 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096298 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096286888 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096316 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096303028 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096334 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" 
[] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.096321469 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096351 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.096338349 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096370 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.096356269 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096388 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.09637511 +0000 UTC))" 2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.096795 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2026-01-20 10:56:07.096775021 +0000 UTC))" 
2026-01-20T10:56:07.099334050+00:00 stderr F I0120 10:56:07.097147 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906026\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906026\" (2026-01-20 09:47:06 +0000 UTC to 2027-01-20 09:47:06 +0000 UTC (now=2026-01-20 10:56:07.09712905 +0000 UTC))" 2026-01-20T10:56:34.514647268+00:00 stderr F I0120 10:56:34.511278 1 schedule_one.go:302] "Successfully bound pod to node" pod="cert-manager/cert-manager-cainjector-676dd9bd64-mggnx" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:56:34.514647268+00:00 stderr F I0120 10:56:34.513869 1 schedule_one.go:302] "Successfully bound pod to node" pod="cert-manager/cert-manager-758df9885c-cq6zm" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:56:34.523188897+00:00 stderr F I0120 10:56:34.522751 1 schedule_one.go:302] "Successfully bound pod to node" pod="cert-manager/cert-manager-webhook-855f577f79-7bdxq" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:56:43.733270208+00:00 stderr F I0120 10:56:43.732976 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-ovn-kubernetes/ovnkube-node-sdkgg" node="crc" evaluatedNodes=1 feasibleNodes=1 2026-01-20T10:57:11.249686993+00:00 stderr F E0120 10:57:11.249583 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 38.102.83.220:6443: connect: connection refused 2026-01-20T10:57:38.455781007+00:00 stderr F I0120 10:57:38.455713 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:41.269694852+00:00 stderr F I0120 10:57:41.269637 1 reflector.go:351] Caches populated for *v1.StorageClass from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:42.377340544+00:00 stderr F I0120 10:57:42.377271 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:43.905692522+00:00 stderr F I0120 10:57:43.905612 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:44.070989924+00:00 stderr F I0120 10:57:44.070925 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.133093361+00:00 stderr F I0120 10:57:45.132957 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:46.184622689+00:00 stderr F I0120 10:57:46.184564 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:47.023188516+00:00 stderr F I0120 10:57:47.023102 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.459954117+00:00 stderr F I0120 10:57:49.459332 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.727657706+00:00 stderr F I0120 10:57:49.727576 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:50.170841546+00:00 stderr F I0120 10:57:50.170724 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:52.441120064+00:00 stderr F I0120 10:57:52.441013 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:53.753200263+00:00 stderr F I0120 10:57:53.753116 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:54.724527140+00:00 stderr F I0120 10:57:54.724470 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:55.359363878+00:00 stderr F I0120 10:57:55.359308 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229

[file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.log]

2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230190 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230330 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230337 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230342 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230347 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230352 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230356 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230403 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 
2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230413 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230418 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230422 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230427 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230432 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230436 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230441 1 feature_gate.go:227] unrecognized feature gate: Example 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230446 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230450 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230455 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230460 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230464 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230469 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230474 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-08-13T20:08:10.230920209+00:00 stderr F 
W0813 20:08:10.230479 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230484 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230488 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230493 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230497 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230502 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230507 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230511 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230523 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230528 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230533 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230537 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230542 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230546 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230551 1 feature_gate.go:227] unrecognized feature 
gate: ClusterAPIInstallIBMCloud 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230555 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230560 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230565 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230569 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230574 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230580 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230585 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230590 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230594 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230599 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230604 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230608 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230614 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230621 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 
20:08:10.230628 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230634 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230639 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230644 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230649 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230661 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230667 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230672 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230677 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231000 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231022 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231062 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231071 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231077 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231083 1 
flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231087 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231094 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231099 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T20:08:10.231116925+00:00 stderr F I0813 20:08:10.231104 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T20:08:10.231116925+00:00 stderr F I0813 20:08:10.231108 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231115 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231119 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231123 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-08-13T20:08:10.231139305+00:00 stderr F I0813 20:08:10.231128 1 flags.go:64] FLAG: --contention-profiling="true" 2025-08-13T20:08:10.231139305+00:00 stderr F I0813 20:08:10.231132 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231138 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 
20:08:10.231184 1 flags.go:64] FLAG: --help="false" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231191 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231197 1 flags.go:64] FLAG: --kube-api-burst="100" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231203 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231208 1 flags.go:64] FLAG: --kube-api-qps="50" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231214 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231218 1 flags.go:64] FLAG: --leader-elect="true" 2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231222 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231226 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s" 2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231231 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231235 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler" 2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231239 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231244 1 flags.go:64] FLAG: --leader-elect-retry-period="2s" 2025-08-13T20:08:10.231261759+00:00 stderr F I0813 20:08:10.231248 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T20:08:10.231261759+00:00 stderr F I0813 20:08:10.231253 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231260 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231264 1 flags.go:64] FLAG: --logging-format="text" 
2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231268 1 flags.go:64] FLAG: --master=""
2025-08-13T20:08:10.231283149+00:00 stderr F I0813 20:08:10.231272 1 flags.go:64] FLAG: --permit-address-sharing="false"
2025-08-13T20:08:10.231283149+00:00 stderr F I0813 20:08:10.231277 1 flags.go:64] FLAG: --permit-port-sharing="false"
2025-08-13T20:08:10.231293160+00:00 stderr F I0813 20:08:10.231281 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s"
2025-08-13T20:08:10.231293160+00:00 stderr F I0813 20:08:10.231285 1 flags.go:64] FLAG: --profiling="true"
2025-08-13T20:08:10.231302840+00:00 stderr F I0813 20:08:10.231289 1 flags.go:64] FLAG: --requestheader-allowed-names="[]"
2025-08-13T20:08:10.231302840+00:00 stderr F I0813 20:08:10.231294 1 flags.go:64] FLAG: --requestheader-client-ca-file=""
2025-08-13T20:08:10.231312790+00:00 stderr F I0813 20:08:10.231298 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
2025-08-13T20:08:10.231312790+00:00 stderr F I0813 20:08:10.231304 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
2025-08-13T20:08:10.231322541+00:00 stderr F I0813 20:08:10.231310 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
2025-08-13T20:08:10.231322541+00:00 stderr F I0813 20:08:10.231316 1 flags.go:64] FLAG: --secure-port="10259"
2025-08-13T20:08:10.231332641+00:00 stderr F I0813 20:08:10.231320 1 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
2025-08-13T20:08:10.231332641+00:00 stderr F I0813 20:08:10.231324 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt"
2025-08-13T20:08:10.231342871+00:00 stderr F I0813 20:08:10.231329 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]"
2025-08-13T20:08:10.231342871+00:00 stderr F I0813 20:08:10.231337 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12"
2025-08-13T20:08:10.231353441+00:00 stderr F I0813 20:08:10.231342 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:10.231353441+00:00 stderr F I0813 20:08:10.231347 1 flags.go:64] FLAG: --tls-sni-cert-key="[]"
2025-08-13T20:08:10.231363342+00:00 stderr F I0813 20:08:10.231353 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false"
2025-08-13T20:08:10.231373152+00:00 stderr F I0813 20:08:10.231357 1 flags.go:64] FLAG: --v="2"
2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231369 1 flags.go:64] FLAG: --version="false"
2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231375 1 flags.go:64] FLAG: --vmodule=""
2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231380 1 flags.go:64] FLAG: --write-config-to=""
2025-08-13T20:08:10.245082115+00:00 stderr F I0813 20:08:10.244995 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:11.146145129+00:00 stderr F I0813 20:08:11.146059 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T20:08:11.188037240+00:00 stderr F I0813 20:08:11.187945 1 configfile.go:94] "Using component config" config=<
  apiVersion: kubescheduler.config.k8s.io/v1
  clientConnection:
    acceptContentTypes: ""
    burst: 100
    contentType: application/vnd.kubernetes.protobuf
    kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig
    qps: 50
  enableContentionProfiling: false
  enableProfiling: false
  kind: KubeSchedulerConfiguration
  leaderElection:
    leaderElect: true
    leaseDuration: 2m17s
    renewDeadline: 1m47s
    resourceLock: leases
    resourceName: kube-scheduler
    resourceNamespace: openshift-kube-scheduler
    retryPeriod: 26s
  parallelism: 16
  percentageOfNodesToScore: 0
  podInitialBackoffSeconds: 1
  podMaxBackoffSeconds: 10
  profiles:
  - pluginConfig:
    - args:
        apiVersion: kubescheduler.config.k8s.io/v1
        kind: DefaultPreemptionArgs
        minCandidateNodesAbsolute: 100
        minCandidateNodesPercentage: 10
      name: DefaultPreemption
    - args:
        apiVersion: kubescheduler.config.k8s.io/v1
        hardPodAffinityWeight: 1
        ignorePreferredTermsOfExistingPods: false
        kind: InterPodAffinityArgs
      name: InterPodAffinity
    - args:
        apiVersion: kubescheduler.config.k8s.io/v1
        kind: NodeAffinityArgs
      name: NodeAffinity
    - args:
        apiVersion: kubescheduler.config.k8s.io/v1
        kind: NodeResourcesBalancedAllocationArgs
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
      name: NodeResourcesBalancedAllocation
    - args:
        apiVersion: kubescheduler.config.k8s.io/v1
        kind: NodeResourcesFitArgs
        scoringStrategy:
          resources:
          - name: cpu
            weight: 1
          - name: memory
            weight: 1
          type: LeastAllocated
      name: NodeResourcesFit
    - args:
        apiVersion: kubescheduler.config.k8s.io/v1
        defaultingType: System
        kind: PodTopologySpreadArgs
      name: PodTopologySpread
    - args:
        apiVersion: kubescheduler.config.k8s.io/v1
        bindTimeoutSeconds: 600
        kind: VolumeBindingArgs
      name: VolumeBinding
    plugins:
      bind: {}
      filter: {}
      multiPoint:
        enabled:
        - name: PrioritySort
          weight: 0
        - name: NodeUnschedulable
          weight: 0
        - name: NodeName
          weight: 0
        - name: TaintToleration
          weight: 3
        - name: NodeAffinity
          weight: 2
        - name: NodePorts
          weight: 0
        - name: NodeResourcesFit
          weight: 1
        - name: VolumeRestrictions
          weight: 0
        - name: EBSLimits
          weight: 0
        - name: GCEPDLimits
          weight: 0
        - name: NodeVolumeLimits
          weight: 0
        - name: AzureDiskLimits
          weight: 0
        - name: VolumeBinding
          weight: 0
        - name: VolumeZone
          weight: 0
        - name: PodTopologySpread
          weight: 2
        - name: InterPodAffinity
          weight: 2
        - name: DefaultPreemption
          weight: 0
        - name: NodeResourcesBalancedAllocation
          weight: 1
        - name: ImageLocality
          weight: 1
        - name: DefaultBinder
          weight: 0
        - name: SchedulingGates
          weight: 0
      permit: {}
      postBind: {}
      postFilter: {}
      preBind: {}
      preEnqueue: {}
      preFilter: {}
      preScore: {}
      queueSort: {}
      reserve: {}
      score: {}
    schedulerName: default-scheduler
2025-08-13T20:08:11.188037240+00:00 stderr F >
2025-08-13T20:08:11.188878124+00:00 stderr F I0813 20:08:11.188854 1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3"
2025-08-13T20:08:11.188957786+00:00 stderr F I0813 20:08:11.188943 1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2025-08-13T20:08:11.203107062+00:00 stderr F I0813 20:08:11.203033 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.202985438 +0000 UTC))"
2025-08-13T20:08:11.203489373+00:00 stderr F I0813 20:08:11.203445 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.203324348 +0000 UTC))"
2025-08-13T20:08:11.203548794+00:00 stderr F I0813 20:08:11.203507 1 secure_serving.go:213] Serving securely on [::]:10259
2025-08-13T20:08:11.203942036+00:00 stderr F I0813 20:08:11.203879 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T20:08:11.204870982+00:00 stderr F I0813 20:08:11.204750 1 configmap_cafile_content.go:202] "Starting
controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:08:11.205753528+00:00 stderr F I0813 20:08:11.205396 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:11.205753528+00:00 stderr F I0813 20:08:11.205692 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:08:11.206858439+00:00 stderr F I0813 20:08:11.206722 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:08:11.208265020+00:00 stderr F I0813 20:08:11.208201 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:08:11.208802005+00:00 stderr F I0813 20:08:11.208759 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:08:11.211168003+00:00 stderr F I0813 20:08:11.209991 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:08:11.217854585+00:00 stderr F I0813 20:08:11.217143 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.217854585+00:00 stderr F I0813 20:08:11.217420 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.218323598+00:00 stderr F I0813 20:08:11.218294 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.218656498+00:00 stderr F I0813 20:08:11.218594 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.225140743+00:00 stderr F I0813 20:08:11.225093 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.225409441+00:00 stderr F I0813 20:08:11.225093 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.225541035+00:00 stderr F I0813 20:08:11.225393 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.228423748+00:00 stderr F I0813 20:08:11.228394 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.228852450+00:00 stderr F I0813 20:08:11.228737 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.229345944+00:00 stderr F I0813 20:08:11.228441 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.229720875+00:00 stderr F I0813 20:08:11.229585 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.229720875+00:00 stderr F I0813 20:08:11.229561 1 node_tree.go:65] "Added node in listed group to NodeTree" node="crc" zone="" 2025-08-13T20:08:11.230234189+00:00 stderr F I0813 20:08:11.230210 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.233049880+00:00 stderr F I0813 20:08:11.232089 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.262424672+00:00 stderr F I0813 20:08:11.260649 1 reflector.go:351] Caches populated for *v1.Namespace 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.275833337+00:00 stderr F I0813 20:08:11.274170 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.275833337+00:00 stderr F I0813 20:08:11.274459 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.302108150+00:00 stderr F I0813 20:08:11.301417 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.304706425+00:00 stderr F I0813 20:08:11.304648 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler... 2025-08-13T20:08:11.310589033+00:00 stderr F I0813 20:08:11.310070 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:08:11.310714567+00:00 stderr F I0813 20:08:11.310689 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:08:11.310912263+00:00 stderr F I0813 20:08:11.310699 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:11.310624834 +0000 UTC))" 2025-08-13T20:08:11.311012305+00:00 stderr F I0813 20:08:11.310992 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 
20:08:11.310959374 +0000 UTC))" 2025-08-13T20:08:11.311070797+00:00 stderr F I0813 20:08:11.311056 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.311035286 +0000 UTC))" 2025-08-13T20:08:11.311119569+00:00 stderr F I0813 20:08:11.311106 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.311089788 +0000 UTC))" 2025-08-13T20:08:11.311165560+00:00 stderr F I0813 20:08:11.311152 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311138159 +0000 UTC))" 2025-08-13T20:08:11.311209871+00:00 stderr F I0813 20:08:11.311197 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC 
to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.31118407 +0000 UTC))" 2025-08-13T20:08:11.311252592+00:00 stderr F I0813 20:08:11.311240 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311227522 +0000 UTC))" 2025-08-13T20:08:11.311322654+00:00 stderr F I0813 20:08:11.311299 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311282933 +0000 UTC))" 2025-08-13T20:08:11.311389356+00:00 stderr F I0813 20:08:11.311374 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:11.311342025 +0000 UTC))" 2025-08-13T20:08:11.311438288+00:00 stderr F I0813 20:08:11.311424 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 
UTC (now=2025-08-13 20:08:11.311411097 +0000 UTC))" 2025-08-13T20:08:11.312300042+00:00 stderr F I0813 20:08:11.312264 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.312244321 +0000 UTC))" 2025-08-13T20:08:11.312595841+00:00 stderr F I0813 20:08:11.312580 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.3125637 +0000 UTC))" 2025-08-13T20:08:11.317314156+00:00 stderr F I0813 20:08:11.317284 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/kube-scheduler 2025-08-13T20:08:11.318466309+00:00 stderr F I0813 20:08:11.318358 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318649 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:11.318623444 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318698 1 tlsconfig.go:178] "Loaded 
client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:11.318681925 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318718 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.318705526 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318742 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.318723397 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318761 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318747857 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318844 1 
tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318767468 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318870 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.31885504 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318932 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318882411 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318956 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:11.318941623 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 
20:08:11.318976 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:11.318966564 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318995 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318983994 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.319304 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.319284853 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.319577 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.319561951 +0000 UTC))" 2025-08-13T20:08:37.333024227+00:00 stderr F E0813 
20:08:37.331678 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:09:01.242213943+00:00 stderr F I0813 20:09:01.242038 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.336110067+00:00 stderr F I0813 20:09:03.335887 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.904718640+00:00 stderr F I0813 20:09:03.904657 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:04.237111850+00:00 stderr F I0813 20:09:04.236960 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:04.238841120+00:00 stderr F I0813 20:09:04.238728 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:05.534492978+00:00 stderr F I0813 20:09:05.534406 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:05.723631721+00:00 stderr F I0813 20:09:05.723574 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:06.019105803+00:00 stderr F I0813 20:09:06.018397 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.053737367+00:00 stderr F I0813 20:09:07.053594 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:09:07.761629982+00:00 stderr F I0813 20:09:07.761557 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.771045132+00:00 stderr F I0813 20:09:07.770994 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.834012487+00:00 stderr F I0813 20:09:07.833944 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.952277558+00:00 stderr F I0813 20:09:07.952198 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.981075823+00:00 stderr F I0813 20:09:07.980921 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:10.114889060+00:00 stderr F I0813 20:09:10.114739 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:10.765327109+00:00 stderr F I0813 20:09:10.764682 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:12.779279180+00:00 stderr F I0813 20:09:12.779210 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.272706390+00:00 stderr F I0813 20:10:15.272394 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:10:59.918217989+00:00 stderr F I0813 20:10:59.912574 1 schedule_one.go:992] "Unable to schedule pod; no fit; waiting" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" err="0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. 
preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules." 2025-08-13T20:11:00.012890153+00:00 stderr F I0813 20:11:00.012639 1 schedule_one.go:992] "Unable to schedule pod; no fit; waiting" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" err="0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules." 2025-08-13T20:11:01.525253034+00:00 stderr F I0813 20:11:01.523937 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:11:01.531536584+00:00 stderr F I0813 20:11:01.528683 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:15:00.383192080+00:00 stderr F I0813 20:15:00.378262 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:16:58.186838320+00:00 stderr F I0813 20:16:58.185353 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-8bbjz" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:17:00.178201349+00:00 stderr F I0813 20:17:00.177988 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-nsk78" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:17:16.061619591+00:00 stderr F I0813 20:17:16.061426 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-swl5s" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:17:30.380858087+00:00 stderr F I0813 20:17:30.380641 1 schedule_one.go:302] "Successfully bound pod to node" 
pod="openshift-marketplace/community-operators-tfv59" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:27:05.675630296+00:00 stderr F I0813 20:27:05.672255 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-jbzn9" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:27:05.827866179+00:00 stderr F I0813 20:27:05.827567 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-xldzg" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:28:43.342763392+00:00 stderr F I0813 20:28:43.342273 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-hvwvm" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:29:30.103067461+00:00 stderr F I0813 20:29:30.100957 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-zdwjn" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:30:02.007482088+00:00 stderr F I0813 20:30:02.005463 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:37:48.222392256+00:00 stderr F I0813 20:37:48.219625 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-nkzlk" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:38:36.098357314+00:00 stderr F I0813 20:38:36.087705 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-4kmbv" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:41:21.475952852+00:00 stderr F I0813 20:41:21.453662 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-k2tgr" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:42:26.033155404+00:00 stderr F I0813 20:42:26.032886 1 schedule_one.go:302] "Successfully bound pod to 
node" pod="openshift-marketplace/community-operators-sdddl" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:42:36.442535128+00:00 stderr F I0813 20:42:36.441417 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464219033+00:00 stderr F I0813 20:42:36.439661 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464219033+00:00 stderr F I0813 20:42:36.459284 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464431250+00:00 stderr F I0813 20:42:36.464388 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464653396+00:00 stderr F I0813 20:42:36.464635 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464760989+00:00 stderr F I0813 20:42:36.464743 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469013992+00:00 stderr F I0813 20:42:36.468989 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484011014+00:00 stderr F I0813 20:42:36.469107 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484375045+00:00 stderr F I0813 20:42:36.484316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484535129+00:00 stderr F I0813 20:42:36.484516 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484705284+00:00 stderr F I0813 20:42:36.484647 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484908530+00:00 stderr F I0813 20:42:36.484885 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.485015883+00:00 stderr F I0813 20:42:36.484998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.485104616+00:00 stderr F I0813 20:42:36.485089 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.485197898+00:00 stderr F I0813 20:42:36.485182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.485337742+00:00 stderr F I0813 20:42:36.485316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.508276894+00:00 stderr F I0813 20:42:36.430578 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.603936832+00:00 stderr F I0813 20:42:37.603863 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:37.604083756+00:00 stderr F I0813 20:42:37.604062 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:37.604331843+00:00 stderr F I0813 20:42:37.604306 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:37.604409386+00:00 stderr F I0813 20:42:37.604392 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:42:37.604490028+00:00 stderr F I0813 20:42:37.604472 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:37.609753310+00:00 stderr F I0813 20:42:37.607518 1 scheduling_queue.go:870] "Scheduling queue is closed" 2025-08-13T20:42:37.609753310+00:00 stderr F E0813 20:42:37.608056 1 
leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:37.609753310+00:00 stderr F I0813 20:42:37.608306 1 server.go:248] "Requested to terminate, exiting"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.log
2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260717 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260892 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260897 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260901 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260905 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260909 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260913 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260917 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260921 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260925 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260929 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260933 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260937 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260942 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260945 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260949 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260953 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260957 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260961 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260965 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260970 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260974 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260978 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 
2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260981 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260985 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260989 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260993 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.260996 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261000 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261004 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261007 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261015 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261019 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261023 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261027 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261030 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261034 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2026-01-20T10:42:03.261052204+00:00 stderr F W0120 10:42:03.261038 1 
feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261043 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261048 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261052 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261056 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261060 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261064 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261068 1 feature_gate.go:227] unrecognized feature gate: Example 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261072 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261076 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261080 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261084 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261087 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261091 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261095 1 feature_gate.go:227] unrecognized feature gate: 
ClusterAPIInstallVSphere 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261099 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261103 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261107 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261112 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261116 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261120 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261123 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2026-01-20T10:42:03.261338112+00:00 stderr F W0120 10:42:03.261128 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2026-01-20T10:42:03.262637359+00:00 stderr F I0120 10:42:03.262578 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2026-01-20T10:42:03.262637359+00:00 stderr F I0120 10:42:03.262603 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2026-01-20T10:42:03.262637359+00:00 stderr F I0120 10:42:03.262608 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2026-01-20T10:42:03.262637359+00:00 stderr F I0120 10:42:03.262613 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2026-01-20T10:42:03.262637359+00:00 stderr F I0120 10:42:03.262617 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2026-01-20T10:42:03.262637359+00:00 stderr F I0120 10:42:03.262621 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 
2026-01-20T10:42:03.262637359+00:00 stderr F I0120 10:42:03.262625 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262633 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262638 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262642 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262645 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262650 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262653 1 flags.go:64] FLAG: --client-ca-file="" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262656 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262660 1 flags.go:64] FLAG: --contention-profiling="true" 2026-01-20T10:42:03.262668353+00:00 stderr F I0120 10:42:03.262663 1 flags.go:64] FLAG: --disabled-metrics="[]" 2026-01-20T10:42:03.262693791+00:00 stderr F I0120 10:42:03.262668 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2026-01-20T10:42:03.262693791+00:00 stderr F I0120 10:42:03.262689 1 flags.go:64] FLAG: --help="false" 
2026-01-20T10:42:03.262710762+00:00 stderr F I0120 10:42:03.262693 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2026-01-20T10:42:03.262727344+00:00 stderr F I0120 10:42:03.262702 1 flags.go:64] FLAG: --kube-api-burst="100" 2026-01-20T10:42:03.262727344+00:00 stderr F I0120 10:42:03.262712 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2026-01-20T10:42:03.262727344+00:00 stderr F I0120 10:42:03.262716 1 flags.go:64] FLAG: --kube-api-qps="50" 2026-01-20T10:42:03.262727344+00:00 stderr F I0120 10:42:03.262720 1 flags.go:64] FLAG: --kubeconfig="" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262724 1 flags.go:64] FLAG: --leader-elect="true" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262742 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262746 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262749 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262752 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262756 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262759 1 flags.go:64] FLAG: --leader-elect-retry-period="2s" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262762 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2026-01-20T10:42:03.262776040+00:00 stderr F I0120 10:42:03.262765 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 10:42:03.262772 1 flags.go:64] FLAG: --log-json-split-stream="false" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 10:42:03.262777 1 flags.go:64] FLAG: --logging-format="text" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 
10:42:03.262781 1 flags.go:64] FLAG: --master="" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 10:42:03.262783 1 flags.go:64] FLAG: --permit-address-sharing="false" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 10:42:03.262786 1 flags.go:64] FLAG: --permit-port-sharing="false" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 10:42:03.262789 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 10:42:03.262793 1 flags.go:64] FLAG: --profiling="true" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 10:42:03.262796 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2026-01-20T10:42:03.262807394+00:00 stderr F I0120 10:42:03.262800 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2026-01-20T10:42:03.262828854+00:00 stderr F I0120 10:42:03.262803 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2026-01-20T10:42:03.262828854+00:00 stderr F I0120 10:42:03.262809 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2026-01-20T10:42:03.262828854+00:00 stderr F I0120 10:42:03.262816 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2026-01-20T10:42:03.262828854+00:00 stderr F I0120 10:42:03.262820 1 flags.go:64] FLAG: --secure-port="10259" 2026-01-20T10:42:03.262828854+00:00 stderr F I0120 10:42:03.262823 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2026-01-20T10:42:03.262848104+00:00 stderr F I0120 10:42:03.262826 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2026-01-20T10:42:03.262848104+00:00 stderr F I0120 10:42:03.262831 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2026-01-20T10:42:03.262848104+00:00 
stderr F I0120 10:42:03.262839 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2026-01-20T10:42:03.262848104+00:00 stderr F I0120 10:42:03.262843 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:42:03.262866545+00:00 stderr F I0120 10:42:03.262847 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2026-01-20T10:42:03.262866545+00:00 stderr F I0120 10:42:03.262852 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2026-01-20T10:42:03.262866545+00:00 stderr F I0120 10:42:03.262855 1 flags.go:64] FLAG: --v="2" 2026-01-20T10:42:03.262866545+00:00 stderr F I0120 10:42:03.262859 1 flags.go:64] FLAG: --version="false" 2026-01-20T10:42:03.262891583+00:00 stderr F I0120 10:42:03.262864 1 flags.go:64] FLAG: --vmodule="" 2026-01-20T10:42:03.262891583+00:00 stderr F I0120 10:42:03.262868 1 flags.go:64] FLAG: --write-config-to="" 2026-01-20T10:42:03.267935225+00:00 stderr F I0120 10:42:03.267604 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:42:03.999390229+00:00 stderr F W0120 10:42:03.999318 1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:03.999390229+00:00 stderr F W0120 10:42:03.999347 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous. 
2026-01-20T10:42:03.999390229+00:00 stderr F W0120 10:42:03.999353 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false 2026-01-20T10:42:04.011353839+00:00 stderr F I0120 10:42:04.011289 1 configfile.go:94] "Using component config" config=< 2026-01-20T10:42:04.011353839+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:42:04.011353839+00:00 stderr F clientConnection: 2026-01-20T10:42:04.011353839+00:00 stderr F acceptContentTypes: "" 2026-01-20T10:42:04.011353839+00:00 stderr F burst: 100 2026-01-20T10:42:04.011353839+00:00 stderr F contentType: application/vnd.kubernetes.protobuf 2026-01-20T10:42:04.011353839+00:00 stderr F kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig 2026-01-20T10:42:04.011353839+00:00 stderr F qps: 50 2026-01-20T10:42:04.011353839+00:00 stderr F enableContentionProfiling: false 2026-01-20T10:42:04.011353839+00:00 stderr F enableProfiling: false 2026-01-20T10:42:04.011353839+00:00 stderr F kind: KubeSchedulerConfiguration 2026-01-20T10:42:04.011353839+00:00 stderr F leaderElection: 2026-01-20T10:42:04.011353839+00:00 stderr F leaderElect: true 2026-01-20T10:42:04.011353839+00:00 stderr F leaseDuration: 2m17s 2026-01-20T10:42:04.011353839+00:00 stderr F renewDeadline: 1m47s 2026-01-20T10:42:04.011353839+00:00 stderr F resourceLock: leases 2026-01-20T10:42:04.011353839+00:00 stderr F resourceName: kube-scheduler 2026-01-20T10:42:04.011353839+00:00 stderr F resourceNamespace: openshift-kube-scheduler 2026-01-20T10:42:04.011353839+00:00 stderr F retryPeriod: 26s 2026-01-20T10:42:04.011353839+00:00 stderr F parallelism: 16 2026-01-20T10:42:04.011353839+00:00 stderr F percentageOfNodesToScore: 0 2026-01-20T10:42:04.011353839+00:00 stderr F podInitialBackoffSeconds: 1 2026-01-20T10:42:04.011353839+00:00 stderr F podMaxBackoffSeconds: 10 2026-01-20T10:42:04.011353839+00:00 stderr F profiles: 
2026-01-20T10:42:04.011353839+00:00 stderr F - pluginConfig: 2026-01-20T10:42:04.011353839+00:00 stderr F - args: 2026-01-20T10:42:04.011353839+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:42:04.011353839+00:00 stderr F kind: DefaultPreemptionArgs 2026-01-20T10:42:04.011353839+00:00 stderr F minCandidateNodesAbsolute: 100 2026-01-20T10:42:04.011353839+00:00 stderr F minCandidateNodesPercentage: 10 2026-01-20T10:42:04.011353839+00:00 stderr F name: DefaultPreemption 2026-01-20T10:42:04.011353839+00:00 stderr F - args: 2026-01-20T10:42:04.011353839+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:42:04.011353839+00:00 stderr F hardPodAffinityWeight: 1 2026-01-20T10:42:04.011353839+00:00 stderr F ignorePreferredTermsOfExistingPods: false 2026-01-20T10:42:04.011353839+00:00 stderr F kind: InterPodAffinityArgs 2026-01-20T10:42:04.011353839+00:00 stderr F name: InterPodAffinity 2026-01-20T10:42:04.011353839+00:00 stderr F - args: 2026-01-20T10:42:04.011353839+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:42:04.011353839+00:00 stderr F kind: NodeAffinityArgs 2026-01-20T10:42:04.011353839+00:00 stderr F name: NodeAffinity 2026-01-20T10:42:04.011353839+00:00 stderr F - args: 2026-01-20T10:42:04.011353839+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:42:04.011353839+00:00 stderr F kind: NodeResourcesBalancedAllocationArgs 2026-01-20T10:42:04.011353839+00:00 stderr F resources: 2026-01-20T10:42:04.011353839+00:00 stderr F - name: cpu 2026-01-20T10:42:04.011353839+00:00 stderr F weight: 1 2026-01-20T10:42:04.011353839+00:00 stderr F - name: memory 2026-01-20T10:42:04.011353839+00:00 stderr F weight: 1 2026-01-20T10:42:04.011353839+00:00 stderr F name: NodeResourcesBalancedAllocation 2026-01-20T10:42:04.011353839+00:00 stderr F - args: 2026-01-20T10:42:04.011353839+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:42:04.011353839+00:00 stderr F kind: 
NodeResourcesFitArgs 2026-01-20T10:42:04.011353839+00:00 stderr F scoringStrategy: 2026-01-20T10:42:04.011353839+00:00 stderr F resources: 2026-01-20T10:42:04.011353839+00:00 stderr F - name: cpu 2026-01-20T10:42:04.011353839+00:00 stderr F weight: 1 2026-01-20T10:42:04.011353839+00:00 stderr F - name: memory 2026-01-20T10:42:04.011353839+00:00 stderr F weight: 1 2026-01-20T10:42:04.011353839+00:00 stderr F type: LeastAllocated 2026-01-20T10:42:04.011353839+00:00 stderr F name: NodeResourcesFit 2026-01-20T10:42:04.011353839+00:00 stderr F - args: 2026-01-20T10:42:04.011353839+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:42:04.011353839+00:00 stderr F defaultingType: System 2026-01-20T10:42:04.011353839+00:00 stderr F kind: PodTopologySpreadArgs 2026-01-20T10:42:04.011353839+00:00 stderr F name: PodTopologySpread 2026-01-20T10:42:04.011353839+00:00 stderr F - args: 2026-01-20T10:42:04.011353839+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2026-01-20T10:42:04.011353839+00:00 stderr F bindTimeoutSeconds: 600 2026-01-20T10:42:04.011353839+00:00 stderr F kind: VolumeBindingArgs 2026-01-20T10:42:04.011353839+00:00 stderr F name: VolumeBinding 2026-01-20T10:42:04.011353839+00:00 stderr F plugins: 2026-01-20T10:42:04.011353839+00:00 stderr F bind: {} 2026-01-20T10:42:04.011353839+00:00 stderr F filter: {} 2026-01-20T10:42:04.011353839+00:00 stderr F multiPoint: 2026-01-20T10:42:04.011353839+00:00 stderr F enabled: 2026-01-20T10:42:04.011353839+00:00 stderr F - name: PrioritySort 2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0 2026-01-20T10:42:04.011353839+00:00 stderr F - name: NodeUnschedulable 2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0 2026-01-20T10:42:04.011353839+00:00 stderr F - name: NodeName 2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0 2026-01-20T10:42:04.011353839+00:00 stderr F - name: TaintToleration 2026-01-20T10:42:04.011353839+00:00 stderr F weight: 3 2026-01-20T10:42:04.011353839+00:00 
stderr F - name: NodeAffinity
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 2
2026-01-20T10:42:04.011353839+00:00 stderr F - name: NodePorts
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: NodeResourcesFit
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 1
2026-01-20T10:42:04.011353839+00:00 stderr F - name: VolumeRestrictions
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: EBSLimits
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: GCEPDLimits
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: NodeVolumeLimits
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: AzureDiskLimits
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: VolumeBinding
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: VolumeZone
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: PodTopologySpread
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 2
2026-01-20T10:42:04.011353839+00:00 stderr F - name: InterPodAffinity
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 2
2026-01-20T10:42:04.011353839+00:00 stderr F - name: DefaultPreemption
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: NodeResourcesBalancedAllocation
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 1
2026-01-20T10:42:04.011353839+00:00 stderr F - name: ImageLocality
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 1
2026-01-20T10:42:04.011353839+00:00 stderr F - name: DefaultBinder
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F - name: SchedulingGates
2026-01-20T10:42:04.011353839+00:00 stderr F weight: 0
2026-01-20T10:42:04.011353839+00:00 stderr F permit: {}
2026-01-20T10:42:04.011353839+00:00 stderr F postBind: {}
2026-01-20T10:42:04.011353839+00:00 stderr F postFilter: {}
2026-01-20T10:42:04.011353839+00:00 stderr F preBind: {}
2026-01-20T10:42:04.011353839+00:00 stderr F preEnqueue: {}
2026-01-20T10:42:04.011353839+00:00 stderr F preFilter: {}
2026-01-20T10:42:04.011353839+00:00 stderr F preScore: {}
2026-01-20T10:42:04.011353839+00:00 stderr F queueSort: {}
2026-01-20T10:42:04.011353839+00:00 stderr F reserve: {}
2026-01-20T10:42:04.011353839+00:00 stderr F score: {}
2026-01-20T10:42:04.011353839+00:00 stderr F schedulerName: default-scheduler
2026-01-20T10:42:04.011353839+00:00 stderr F  >
2026-01-20T10:42:04.012341717+00:00 stderr F I0120 10:42:04.012307       1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3"
2026-01-20T10:42:04.012341717+00:00 stderr F I0120 10:42:04.012323       1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2026-01-20T10:42:04.013616604+00:00 stderr F I0120 10:42:04.013536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2026-01-20T10:42:04.013792319+00:00 stderr F I0120 10:42:04.013757       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2026-01-20T10:42:04.013935923+00:00 stderr F I0120 10:42:04.013916       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2026-01-20 10:42:04.013880801 +0000 UTC))"
2026-01-20T10:42:04.014295684+00:00 stderr F I0120 10:42:04.014270       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768905723\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768905723\" (2026-01-20 09:42:03 +0000 UTC to 2027-01-20 09:42:03 +0000 UTC (now=2026-01-20 10:42:04.014228352 +0000 UTC))"
2026-01-20T10:42:04.014311675+00:00 stderr F I0120 10:42:04.014300       1 secure_serving.go:213] Serving securely on [::]:10259
2026-01-20T10:42:04.014369316+00:00 stderr F I0120 10:42:04.014336       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2026-01-20T10:42:04.014435498+00:00 stderr F I0120 10:42:04.014404       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2026-01-20T10:42:04.016393735+00:00 stderr F W0120 10:42:04.016299       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016476777+00:00 stderr F E0120 10:42:04.016452       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016476777+00:00 stderr F W0120 10:42:04.016308       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016503728+00:00 stderr F E0120 10:42:04.016484       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016538509+00:00 stderr F W0120 10:42:04.016456       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016590060+00:00 stderr F W0120 10:42:04.016557       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016599471+00:00 stderr F E0120 10:42:04.016587       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016599471+00:00 stderr F E0120 10:42:04.016567       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016647772+00:00 stderr F W0120 10:42:04.016619       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016647772+00:00 stderr F E0120 10:42:04.016641       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.016701484+00:00 stderr F W0120 10:42:04.016671       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017132567+00:00 stderr F E0120 10:42:04.017119       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017161618+00:00 stderr F W0120 10:42:04.016709       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017200439+00:00 stderr F E0120 10:42:04.017189       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017221910+00:00 stderr F W0120 10:42:04.016714       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017258771+00:00 stderr F E0120 10:42:04.017248       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017284021+00:00 stderr F W0120 10:42:04.016772       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017323393+00:00 stderr F E0120 10:42:04.017311       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017345233+00:00 stderr F W0120 10:42:04.016822       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017380214+00:00 stderr F E0120 10:42:04.017368       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017411585+00:00 stderr F W0120 10:42:04.016855       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017454616+00:00 stderr F E0120 10:42:04.017442       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017500048+00:00 stderr F W0120 10:42:04.016931       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017589960+00:00 stderr F E0120 10:42:04.017540       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017589960+00:00 stderr F W0120 10:42:04.016900       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017599791+00:00 stderr F E0120 10:42:04.017590       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017607801+00:00 stderr F W0120 10:42:04.016808       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017659972+00:00 stderr F E0120 10:42:04.017624       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017659972+00:00 stderr F W0120 10:42:04.017229       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.017669143+00:00 stderr F E0120 10:42:04.017657       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.881884792+00:00 stderr F W0120 10:42:04.881728       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.881884792+00:00 stderr F E0120 10:42:04.881847       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.894431739+00:00 stderr F W0120 10:42:04.894346       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.894431739+00:00 stderr F E0120 10:42:04.894400       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.933685073+00:00 stderr F W0120 10:42:04.933566       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.933685073+00:00 stderr F E0120 10:42:04.933622       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.958277450+00:00 stderr F W0120 10:42:04.958194       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.958277450+00:00 stderr F E0120 10:42:04.958244       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.962051689+00:00 stderr F W0120 10:42:04.961986       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.962102991+00:00 stderr F E0120 10:42:04.962076       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.978972992+00:00 stderr F W0120 10:42:04.978885       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:04.978972992+00:00 stderr F E0120 10:42:04.978948       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.136552216+00:00 stderr F W0120 10:42:05.136337       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.136552216+00:00 stderr F E0120 10:42:05.136377       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.392920909+00:00 stderr F W0120 10:42:05.392787       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.392920909+00:00 stderr F E0120 10:42:05.392836       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.432442441+00:00 stderr F W0120 10:42:05.432328       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.432442441+00:00 stderr F W0120 10:42:05.432379       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.432442441+00:00 stderr F W0120 10:42:05.432367       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.432442441+00:00 stderr F E0120 10:42:05.432393       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.432442441+00:00 stderr F E0120 10:42:05.432431       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.432491612+00:00 stderr F E0120 10:42:05.432430       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.438046544+00:00 stderr F W0120 10:42:05.437979       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.438046544+00:00 stderr F E0120 10:42:05.438012       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.439899979+00:00 stderr F W0120 10:42:05.439847       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.439899979+00:00 stderr F E0120 10:42:05.439874       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.445984536+00:00 stderr F W0120 10:42:05.445928       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.445984536+00:00 stderr F E0120 10:42:05.445953       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.453684709+00:00 stderr F W0120 10:42:05.453589       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:05.453684709+00:00 stderr F E0120 10:42:05.453656       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:06.753498457+00:00 stderr F W0120 10:42:06.753371       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:06.753498457+00:00 stderr F E0120 10:42:06.753473       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:06.774586592+00:00 stderr F W0120 10:42:06.774517       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:06.774586592+00:00 stderr F E0120 10:42:06.774569       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:06.827051422+00:00 stderr F W0120 10:42:06.826938       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:06.827051422+00:00 stderr F E0120 10:42:06.826981       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.051652058+00:00 stderr F W0120 10:42:07.051511       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.051652058+00:00 stderr F E0120 10:42:07.051580       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.164008493+00:00 stderr F W0120 10:42:07.163859       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.164008493+00:00 stderr F E0120 10:42:07.163906       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.234068035+00:00 stderr F W0120 10:42:07.233950       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.234068035+00:00 stderr F E0120 10:42:07.234011       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.550373356+00:00 stderr F W0120 10:42:07.550198       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.550373356+00:00 stderr F E0120 10:42:07.550243       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.592037480+00:00 stderr F W0120 10:42:07.591924       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.592037480+00:00 stderr F E0120 10:42:07.591968       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.904274141+00:00 stderr F W0120 10:42:07.904158       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:07.904274141+00:00 stderr F E0120 10:42:07.904219       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.038676089+00:00 stderr F W0120 10:42:08.038540       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.038676089+00:00 stderr F E0120 10:42:08.038577       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.072162865+00:00 stderr F W0120 10:42:08.072068       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.072162865+00:00 stderr F E0120 10:42:08.072107       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.255528710+00:00 stderr F W0120 10:42:08.255428       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.255528710+00:00 stderr F E0120 10:42:08.255471       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.457827847+00:00 stderr F W0120 10:42:08.457682       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.457827847+00:00 stderr F E0120 10:42:08.457772       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.466385626+00:00 stderr F W0120 10:42:08.466307       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.466385626+00:00 stderr F E0120 10:42:08.466346       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.532696369+00:00 stderr F W0120 10:42:08.532563       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:08.532696369+00:00 stderr F E0120 10:42:08.532640       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:10.229990493+00:00 stderr F W0120 10:42:10.229855       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:10.229990493+00:00 stderr F E0120 10:42:10.229928       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:10.779027237+00:00 stderr F W0120 10:42:10.778861       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:10.779027237+00:00 stderr F E0120 10:42:10.778948       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:11.961943379+00:00 stderr F W0120 10:42:11.961775       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:11.961943379+00:00 stderr F E0120 10:42:11.961867       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:12.137949668+00:00 stderr F W0120 10:42:12.137777       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:12.137949668+00:00 stderr F E0120 10:42:12.137858       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:12.281796652+00:00 stderr F W0120 10:42:12.281528       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:12.281796652+00:00 stderr F E0120 10:42:12.281601       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:12.291419122+00:00 stderr F W0120 10:42:12.291341 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:12.291419122+00:00 stderr F E0120 10:42:12.291390 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.370566108+00:00 stderr F W0120 10:42:13.370418 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.370566108+00:00 stderr F E0120 10:42:13.370465 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.443627258+00:00 stderr F W0120 10:42:13.443517 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.443627258+00:00 stderr F E0120 10:42:13.443569 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.588211352+00:00 stderr F W0120 10:42:13.588081 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.588211352+00:00 stderr F E0120 10:42:13.588151 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.690410552+00:00 stderr F W0120 10:42:13.690246 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.690410552+00:00 stderr F E0120 10:42:13.690310 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.727853853+00:00 stderr F W0120 
10:42:13.727691 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.727853853+00:00 stderr F E0120 10:42:13.727817 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.800926532+00:00 stderr F W0120 10:42:13.800779 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.800926532+00:00 stderr F E0120 10:42:13.800843 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.810638046+00:00 stderr F W0120 10:42:13.810527 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:13.810638046+00:00 stderr F E0120 10:42:13.810569 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup 
api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:14.303452740+00:00 stderr F W0120 10:42:14.303203 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:14.303452740+00:00 stderr F E0120 10:42:14.303306 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:14.741585392+00:00 stderr F W0120 10:42:14.741391 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:14.741585392+00:00 stderr F E0120 10:42:14.741476 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:18.525465249+00:00 stderr F W0120 10:42:18.525355 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:18.525465249+00:00 stderr F E0120 10:42:18.525423 1 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:20.785425164+00:00 stderr F W0120 10:42:20.784473 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:20.785425164+00:00 stderr F E0120 10:42:20.785346 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:20.960420865+00:00 stderr F W0120 10:42:20.960311 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:20.960420865+00:00 stderr F E0120 10:42:20.960347 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:21.271081871+00:00 stderr F W0120 10:42:21.270939 1 
reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:21.271081871+00:00 stderr F E0120 10:42:21.271016 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:22.005637143+00:00 stderr F W0120 10:42:22.005518 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:22.005637143+00:00 stderr F E0120 10:42:22.005577 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:22.143102280+00:00 stderr F W0120 10:42:22.142929 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:22.143102280+00:00 stderr F E0120 10:42:22.143038 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: 
Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:22.848304755+00:00 stderr F W0120 10:42:22.848154 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:22.848304755+00:00 stderr F E0120 10:42:22.848211 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.033521384+00:00 stderr F W0120 10:42:23.033380 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.033521384+00:00 stderr F E0120 10:42:23.033432 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.190797318+00:00 stderr F W0120 10:42:23.190611 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.190797318+00:00 stderr F E0120 
10:42:23.190671 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.468703509+00:00 stderr F W0120 10:42:23.468566 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.468703509+00:00 stderr F E0120 10:42:23.468624 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.470777840+00:00 stderr F W0120 10:42:23.470680 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.470777840+00:00 stderr F E0120 10:42:23.470728 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.754808768+00:00 stderr F W0120 10:42:23.754632 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.754808768+00:00 stderr F E0120 10:42:23.754710 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.828934280+00:00 stderr F W0120 10:42:23.828798 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.828934280+00:00 stderr F E0120 10:42:23.828882 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.975672817+00:00 stderr F W0120 10:42:23.975505 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:23.975672817+00:00 stderr F E0120 10:42:23.975598 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:24.675424154+00:00 stderr F W0120 10:42:24.675278 
1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:24.675424154+00:00 stderr F E0120 10:42:24.675375 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:33.626965675+00:00 stderr F W0120 10:42:33.626832 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:33.626965675+00:00 stderr F E0120 10:42:33.626920 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:35.773140354+00:00 stderr F W0120 10:42:35.773020 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:35.773140354+00:00 stderr F E0120 10:42:35.773094 1 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:35.932429658+00:00 stderr F W0120 10:42:35.932287 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:35.932429658+00:00 stderr F E0120 10:42:35.932361 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:36.443441954+00:00 stderr F W0120 10:42:36.443246 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:36.443441954+00:00 stderr F E0120 10:42:36.443398 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:36.763632667+00:00 stderr F W0120 10:42:36.763482 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:36.763632667+00:00 stderr F E0120 10:42:36.763588 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:37.768973432+00:00 stderr F W0120 10:42:37.768803 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:37.768973432+00:00 stderr F E0120 10:42:37.768907 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:37.901350450+00:00 stderr F W0120 10:42:37.901230 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:37.901513965+00:00 stderr F E0120 10:42:37.901487 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 
2026-01-20T10:42:39.478967165+00:00 stderr F W0120 10:42:39.478714 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:39.478967165+00:00 stderr F E0120 10:42:39.478903 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:39.806304467+00:00 stderr F W0120 10:42:39.806118 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:39.806304467+00:00 stderr F E0120 10:42:39.806252 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:40.167999461+00:00 stderr F W0120 10:42:40.167880 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:40.167999461+00:00 stderr F E0120 10:42:40.167966 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:40.217708660+00:00 stderr F W0120 10:42:40.217602 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:40.217708660+00:00 stderr F E0120 10:42:40.217671 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:40.248414715+00:00 stderr F W0120 10:42:40.248283 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:40.248414715+00:00 stderr F E0120 10:42:40.248383 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:45.139611427+00:00 stderr F W0120 10:42:45.139473 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no 
such host 2026-01-20T10:42:45.139611427+00:00 stderr F E0120 10:42:45.139558 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:46.281997207+00:00 stderr F W0120 10:42:46.281858 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:46.281997207+00:00 stderr F E0120 10:42:46.281971 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:47.420486812+00:00 stderr F W0120 10:42:47.420341 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:42:47.420486812+00:00 stderr F E0120 10:42:47.420427 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:03.019136700+00:00 stderr F W0120 10:43:03.018994 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to 
list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:03.019136700+00:00 stderr F E0120 10:43:03.019056 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:05.496223017+00:00 stderr F W0120 10:43:05.496082 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:05.496361852+00:00 stderr F E0120 10:43:05.496335 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:05.555486392+00:00 stderr F W0120 10:43:05.555363 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:05.555486392+00:00 stderr F E0120 10:43:05.555437 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 
2026-01-20T10:43:09.260708473+00:00 stderr F W0120 10:43:09.260562 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:09.260708473+00:00 stderr F E0120 10:43:09.260640 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:12.649511691+00:00 stderr F W0120 10:43:12.649396 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:12.649511691+00:00 stderr F E0120 10:43:12.649470 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:14.931966008+00:00 stderr F W0120 10:43:14.931860 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:14.932030390+00:00 stderr F E0120 10:43:14.931958 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to 
list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:18.241250978+00:00 stderr F W0120 10:43:18.241142 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:18.241250978+00:00 stderr F E0120 10:43:18.241208 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:18.732911440+00:00 stderr F W0120 10:43:18.732790 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:18.732911440+00:00 stderr F E0120 10:43:18.732860 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:19.104565097+00:00 stderr F W0120 10:43:19.104419 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:19.104565097+00:00 stderr F E0120 10:43:19.104544 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:20.499659980+00:00 stderr F W0120 10:43:20.499516 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:20.499659980+00:00 stderr F E0120 10:43:20.499580 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:20.758213279+00:00 stderr F W0120 10:43:20.758057 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:20.758213279+00:00 stderr F E0120 10:43:20.758140 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:21.722697168+00:00 stderr F W0120 10:43:21.722580 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:21.722697168+00:00 stderr F E0120 10:43:21.722663 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:24.233861812+00:00 stderr F W0120 10:43:24.233725 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:24.233861812+00:00 stderr F E0120 10:43:24.233830 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:30.785424883+00:00 stderr F W0120 10:43:30.785269 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:30.785424883+00:00 stderr F 
E0120 10:43:30.785347 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:36.253116184+00:00 stderr F W0120 10:43:36.252393 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:36.253116184+00:00 stderr F E0120 10:43:36.253062 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:45.575310118+00:00 stderr F W0120 10:43:45.575216 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:45.575310118+00:00 stderr F E0120 10:43:45.575272 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:48.078299706+00:00 stderr F W0120 10:43:48.078147 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get 
"https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:48.078299706+00:00 stderr F E0120 10:43:48.078228 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:48.759659292+00:00 stderr F W0120 10:43:48.759551 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:48.759659292+00:00 stderr F E0120 10:43:48.759604 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:50.444837033+00:00 stderr F W0120 10:43:50.444628 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:50.444837033+00:00 stderr F E0120 10:43:50.444695 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 
2026-01-20T10:43:52.169765921+00:00 stderr F W0120 10:43:52.169631 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:52.169765921+00:00 stderr F E0120 10:43:52.169694 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:52.558608414+00:00 stderr F W0120 10:43:52.558526 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:52.558723518+00:00 stderr F E0120 10:43:52.558709 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:55.348246918+00:00 stderr F W0120 10:43:55.348165 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:55.348246918+00:00 stderr F E0120 10:43:55.348212 1 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:56.100045663+00:00 stderr F W0120 10:43:56.099916 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:56.100045663+00:00 stderr F E0120 10:43:56.099984 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:59.004526618+00:00 stderr F W0120 10:43:59.004416 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:43:59.004526618+00:00 stderr F E0120 10:43:59.004476 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:05.589126412+00:00 stderr F W0120 10:44:05.589062 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:05.589257848+00:00 stderr F E0120 10:44:05.589234 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:07.799046190+00:00 stderr F W0120 10:44:07.798866 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:07.799046190+00:00 stderr F E0120 10:44:07.798996 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:13.079375880+00:00 stderr F W0120 10:44:13.079263 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:13.079375880+00:00 stderr F E0120 10:44:13.079345 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:13.745287155+00:00 stderr F W0120 10:44:13.745158 1 reflector.go:539] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:13.745287155+00:00 stderr F E0120 10:44:13.745216 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:14.603848400+00:00 stderr F W0120 10:44:14.603703 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:14.603848400+00:00 stderr F E0120 10:44:14.603779 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:16.671824017+00:00 stderr F W0120 10:44:16.671686 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:16.671885279+00:00 stderr F E0120 10:44:16.671851 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:25.117258127+00:00 stderr F W0120 10:44:25.117141 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:25.117258127+00:00 stderr F E0120 10:44:25.117211 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:28.733324746+00:00 stderr F W0120 10:44:28.733201 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:28.733324746+00:00 stderr F E0120 10:44:28.733277 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:32.239237447+00:00 stderr F W0120 10:44:32.239077 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:32.239237447+00:00 stderr F E0120 10:44:32.239134 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:33.175912587+00:00 stderr F W0120 10:44:33.175808 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:33.175912587+00:00 stderr F E0120 10:44:33.175879 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:34.307060667+00:00 stderr F W0120 10:44:34.306945 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:34.307242252+00:00 stderr F E0120 10:44:34.307205 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:43.847357651+00:00 stderr F 
W0120 10:44:43.847210 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:43.847357651+00:00 stderr F E0120 10:44:43.847315 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:46.030545220+00:00 stderr F W0120 10:44:46.030370 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:46.030545220+00:00 stderr F E0120 10:44:46.030473 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:47.120117159+00:00 stderr F W0120 10:44:47.119953 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:47.120117159+00:00 stderr F E0120 10:44:47.120066 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get 
"https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:49.586176923+00:00 stderr F W0120 10:44:49.586020 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:49.586176923+00:00 stderr F E0120 10:44:49.586109 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:49.598426986+00:00 stderr F W0120 10:44:49.598365 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:49.598426986+00:00 stderr F E0120 10:44:49.598403 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:50.188644442+00:00 stderr F W0120 10:44:50.188510 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 
199.204.44.24:53: no such host 2026-01-20T10:44:50.188644442+00:00 stderr F E0120 10:44:50.188624 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:54.224918283+00:00 stderr F W0120 10:44:54.224776 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:54.224918283+00:00 stderr F E0120 10:44:54.224873 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:59.212447338+00:00 stderr F W0120 10:44:59.212211 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:44:59.212447338+00:00 stderr F E0120 10:44:59.212315 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: 
lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:01.941591599+00:00 stderr F W0120 10:45:01.941451 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:01.941591599+00:00 stderr F E0120 10:45:01.941546 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:02.051983032+00:00 stderr F W0120 10:45:02.051849 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:02.051983032+00:00 stderr F E0120 10:45:02.051942 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:04.296265723+00:00 stderr F W0120 10:45:04.296088 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:04.296265723+00:00 stderr F E0120 10:45:04.296189 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: 
Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:05.594590502+00:00 stderr F W0120 10:45:05.594375 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:05.594590502+00:00 stderr F E0120 10:45:05.594506 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:10.471917832+00:00 stderr F W0120 10:45:10.471723 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:10.471917832+00:00 stderr F E0120 10:45:10.471855 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:18.614906225+00:00 stderr F W0120 10:45:18.614787 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 
199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable 2026-01-20T10:45:18.614906225+00:00 stderr F E0120 10:45:18.614862 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable 2026-01-20T10:45:20.198085957+00:00 stderr F W0120 10:45:20.197043 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:20.198085957+00:00 stderr F E0120 10:45:20.198020 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:20.921528001+00:00 stderr F W0120 10:45:20.921450 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:20.921528001+00:00 stderr F E0120 10:45:20.921504 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:22.431366242+00:00 stderr F W0120 10:45:22.431192 1 
reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:22.431366242+00:00 stderr F E0120 10:45:22.431296 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:29.715967444+00:00 stderr F W0120 10:45:29.715420 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:29.715967444+00:00 stderr F E0120 10:45:29.715905 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:31.386528399+00:00 stderr F W0120 10:45:31.386308 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:31.386528399+00:00 stderr F E0120 10:45:31.386436 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:33.086587520+00:00 stderr F W0120 10:45:33.086405 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:33.086587520+00:00 stderr F E0120 10:45:33.086508 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:35.764918806+00:00 stderr F W0120 10:45:35.764716 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:35.764918806+00:00 stderr F E0120 10:45:35.764836 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:37.795828527+00:00 stderr F W0120 10:45:37.795644 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 
199.204.44.24:53: no such host 2026-01-20T10:45:37.795828527+00:00 stderr F E0120 10:45:37.795704 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:38.808061433+00:00 stderr F W0120 10:45:38.807917 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:38.808061433+00:00 stderr F E0120 10:45:38.807981 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:40.458233710+00:00 stderr F W0120 10:45:40.458107 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:40.458233710+00:00 stderr F E0120 10:45:40.458190 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:41.505322640+00:00 stderr F W0120 10:45:41.505183 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed 
to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:41.505322640+00:00 stderr F E0120 10:45:41.505256 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:45.690048772+00:00 stderr F W0120 10:45:45.689872 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:45.690048772+00:00 stderr F E0120 10:45:45.689935 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:52.676254926+00:00 stderr F W0120 10:45:52.676098 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:52.676254926+00:00 stderr F E0120 10:45:52.676194 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:53.465300375+00:00 stderr F W0120 
10:45:53.465166 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:53.465300375+00:00 stderr F E0120 10:45:53.465241 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:03.010720054+00:00 stderr F W0120 10:46:03.010588 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:03.010720054+00:00 stderr F E0120 10:46:03.010651 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:03.298921635+00:00 stderr F W0120 10:46:03.298782 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:03.298921635+00:00 stderr F E0120 10:46:03.298888 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch 
*v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:06.277255564+00:00 stderr F W0120 10:46:06.276162 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:06.277255564+00:00 stderr F E0120 10:46:06.277179 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:10.694538082+00:00 stderr F W0120 10:46:10.694414 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:10.694538082+00:00 stderr F E0120 10:46:10.694487 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:11.280885197+00:00 stderr F W0120 10:46:11.280769 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:11.280885197+00:00 stderr F E0120 10:46:11.280843 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:12.271278513+00:00 stderr F E0120 10:46:12.271211 1 server.go:224] "waiting for handlers to sync" err="context canceled" 2026-01-20T10:46:12.271328664+00:00 stderr F I0120 10:46:12.271293 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:46:12.271472688+00:00 stderr F I0120 10:46:12.271453 1 secure_serving.go:258] Stopped listening on [::]:10259 2026-01-20T10:46:12.271633542+00:00 stderr F E0120 10:46:12.271542 1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:46:12.271633542+00:00 stderr F I0120 10:46:12.271619 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:46:12.271767306+00:00 stderr F I0120 10:46:12.271223 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2026-01-20T10:46:12.271834428+00:00 stderr F I0120 10:46:12.271811 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler... 
2026-01-20T10:46:12.271845058+00:00 stderr F I0120 10:46:12.271834 1 server.go:248] "Requested to terminate, exiting"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.log
2026-01-20T10:47:05.869036598+00:00 stdout P Waiting for port :10259 to be released.

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.log
2025-08-13T20:08:08.757827394+00:00 stdout P Waiting for port :10259 to be released.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.log
2026-01-20T10:42:02.274630645+00:00 stdout P Waiting for port :10259 to be released.

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.log
2026-01-20T10:47:06.760878537+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done'
2026-01-20T10:47:06.764804013+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')'
2026-01-20T10:47:06.770118806+00:00 stderr F + '[' -n '' ']'
2026-01-20T10:47:06.770772253+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2
2026-01-20T10:47:06.835100482+00:00 stderr F W0120 10:47:06.834683 1 cmd.go:245] Using insecure, self-signed certificates 2026-01-20T10:47:06.835100482+00:00 stderr F I0120 10:47:06.834951 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1768906026 cert, and key in /tmp/serving-cert-1319517355/serving-signer.crt, /tmp/serving-cert-1319517355/serving-signer.key 2026-01-20T10:47:07.179882041+00:00 stderr F I0120 10:47:07.179811 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:47:07.180880923+00:00 stderr F I0120 10:47:07.180828 1 observer_polling.go:159] Starting file observer 2026-01-20T10:47:17.185164374+00:00 stderr F W0120 10:47:17.185096 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/pods": net/http: TLS handshake timeout 2026-01-20T10:47:17.185264257+00:00 stderr F I0120 10:47:17.185241 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$ 2026-01-20T10:47:23.293411551+00:00 stderr F I0120 10:47:23.293352 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:47:23.299358352+00:00 stderr F I0120 10:47:23.299313 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock... 
2026-01-20T10:52:13.833849826+00:00 stderr F I0120 10:52:13.833070 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/cert-recovery-controller-lock 2026-01-20T10:52:13.834180975+00:00 stderr F I0120 10:52:13.834056 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler", Name:"cert-recovery-controller-lock", UID:"e24b93c2-79d9-43db-937a-d4e24725daea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_4f951dd1-adb7-40e3-85d9-62e13a3082c4 became leader 2026-01-20T10:52:13.836492281+00:00 stderr F I0120 10:52:13.836405 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:52:13.842555752+00:00 stderr F I0120 10:52:13.842476 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:52:13.843018816+00:00 stderr F I0120 10:52:13.842957 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:52:13.854313806+00:00 stderr F I0120 10:52:13.854236 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:52:13.854443970+00:00 stderr F I0120 10:52:13.854364 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:52:13.875825365+00:00 stderr F I0120 10:52:13.875746 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:52:13.936865344+00:00 stderr F I0120 10:52:13.936791 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:52:13.936980097+00:00 stderr F I0120 10:52:13.936956 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2026-01-20T10:57:13.899935770+00:00 stderr F E0120 10:57:13.899855 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/cert-recovery-controller-lock?timeout=4m0s": dial tcp [::1]:6443: connect: connection refused
2026-01-20T10:57:44.105241619+00:00 stderr F I0120 10:57:44.105172 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:57:45.583352479+00:00 stderr F I0120 10:57:45.583276 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:57:48.369408066+00:00 stderr F I0120 10:57:48.369308 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:57:48.419829980+00:00 stderr F I0120 10:57:48.419761 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:57:48.503484982+00:00 stderr F I0120 10:57:48.503399 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.log
2025-08-13T20:08:10.423486490+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done'
2025-08-13T20:08:10.429459371+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')'
2025-08-13T20:08:10.435645509+00:00 stderr F + '[' -n '' ']' 2025-08-13T20:08:10.436465452+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2 2025-08-13T20:08:10.591871248+00:00 stderr F W0813 20:08:10.591324 1 cmd.go:245] Using insecure, self-signed certificates 2025-08-13T20:08:10.591871248+00:00 stderr F I0813 20:08:10.591729 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1755115690 cert, and key in /tmp/serving-cert-2660222687/serving-signer.crt, /tmp/serving-cert-2660222687/serving-signer.key 2025-08-13T20:08:10.967190418+00:00 stderr F I0813 20:08:10.967073 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:08:10.968185366+00:00 stderr F I0813 20:08:10.968134 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:10.988939231+00:00 stderr F I0813 20:08:10.988860 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$ 2025-08-13T20:08:10.998709851+00:00 stderr F I0813 20:08:10.998553 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:08:11.000208574+00:00 stderr F I0813 20:08:10.998984 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock... 
2025-08-13T20:08:11.009863831+00:00 stderr F I0813 20:08:11.009712 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/cert-recovery-controller-lock 2025-08-13T20:08:11.010233502+00:00 stderr F I0813 20:08:11.009910 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler", Name:"cert-recovery-controller-lock", UID:"e24b93c2-79d9-43db-937a-d4e24725daea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32844", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_d5b9b15f-ff83-4db6-8ab7-bb13bf3420f4 became leader 2025-08-13T20:08:11.012003963+00:00 stderr F I0813 20:08:11.011877 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:08:11.027060494+00:00 stderr F I0813 20:08:11.025374 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.041092377+00:00 stderr F I0813 20:08:11.040728 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.041394735+00:00 stderr F I0813 20:08:11.040988 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.047141560+00:00 stderr F I0813 20:08:11.046865 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.065527837+00:00 stderr F I0813 20:08:11.065403 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.113013099+00:00 stderr F I0813 20:08:11.112833 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:08:11.113013099+00:00 stderr F I0813 20:08:11.112876 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:08:59.290396854+00:00 stderr F I0813 20:08:59.290258 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.506841487+00:00 stderr F I0813 20:09:06.506684 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.035347489+00:00 stderr F I0813 20:09:07.035041 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.586281885+00:00 stderr F I0813 20:09:07.586001 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:11.968019000+00:00 stderr F I0813 20:09:11.967733 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:42:36.324418343+00:00 stderr F I0813 20:42:36.324115 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.336306026+00:00 stderr F I0813 20:42:36.334141 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.428947056+00:00 stderr F I0813 20:42:36.349394 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.429622136+00:00 stderr F I0813 20:42:36.429561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.431371316+00:00 stderr F I0813 20:42:36.431343 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.290643040+00:00 stderr F I0813 20:42:37.286428 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:37.293847222+00:00 stderr F E0813 20:42:37.292964 1 leaderelection.go:308] Failed to release lock: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/cert-recovery-controller-lock?timeout=4m0s": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:42:37.293847222+00:00 stderr F W0813 20:42:37.293055 1 leaderelection.go:85] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.log

2026-01-20T10:42:03.730940044+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done' 2026-01-20T10:42:03.736754475+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')' 2026-01-20T10:42:03.742695488+00:00 stderr F + '[' -n '' ']' 2026-01-20T10:42:03.743945644+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2 2026-01-20T10:42:03.799933356+00:00 stderr F W0120 10:42:03.799789 1 cmd.go:245] Using insecure, self-signed certificates 2026-01-20T10:42:03.800137032+00:00 stderr F I0120 10:42:03.800113 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1768905723 cert, and key in /tmp/serving-cert-2498382323/serving-signer.crt, /tmp/serving-cert-2498382323/serving-signer.key 2026-01-20T10:42:03.945807968+00:00 stderr F I0120 10:42:03.945676 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. 
The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:42:03.946592170+00:00 stderr F I0120 10:42:03.946528 1 observer_polling.go:159] Starting file observer 2026-01-20T10:42:03.948087954+00:00 stderr F W0120 10:42:03.947983 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/pods": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.948138046+00:00 stderr F I0120 10:42:03.948126 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$ 2026-01-20T10:42:03.949558878+00:00 stderr F W0120 10:42:03.948712 1 builder.go:358] unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.949558878+00:00 stderr F I0120 10:42:03.948973 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock... 
2026-01-20T10:42:03.949849736+00:00 stderr F E0120 10:42:03.949799 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/cert-recovery-controller-lock?timeout=1m47s": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.950595918+00:00 stderr F I0120 10:42:03.950435 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-scheduler", Name:"openshift-kube-scheduler", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.952643117+00:00 stderr F E0120 10:42:03.952586 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/events\": dial tcp [::1]:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-kube-scheduler.188c6a62451e19e6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2026-01-20 10:42:03.948702182 +0000 UTC m=+0.199814715,LastTimestamp:2026-01-20 10:42:03.948702182 +0000 UTC m=+0.199814715,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2026-01-20T10:42:14.746390433+00:00 stderr F E0120 10:42:14.746300 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{openshift-kube-scheduler.188c6a62451e19e6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2026-01-20 10:42:03.948702182 +0000 UTC m=+0.199814715,LastTimestamp:2026-01-20 10:42:03.948702182 +0000 UTC m=+0.199814715,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2026-01-20T10:46:12.270947584+00:00 stderr F I0120 10:46:12.267473 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2026-01-20T10:46:12.270947584+00:00 stderr F W0120 10:46:12.267573 1 leaderelection.go:85] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.log

2025-08-13T19:59:38.816516836+00:00 stderr F W0813 19:59:38.811105 1 cmd.go:245] Using insecure, self-signed certificates 2025-08-13T19:59:42.551613375+00:00 stderr F I0813 19:59:42.550430 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:43.926057823+00:00 stderr F I0813 19:59:43.924317 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-08-13T19:59:49.044814837+00:00 stderr F I0813 19:59:49.018356 1 secure_serving.go:57] Forcing use of http/1.1 only 
2025-08-13T19:59:49.044814837+00:00 stderr F W0813 19:59:49.041495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:49.044814837+00:00 stderr F W0813 19:59:49.041514 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:49.353686632+00:00 stderr F I0813 19:59:49.352555 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-737639254/tls.crt::/tmp/serving-cert-737639254/tls.key" 2025-08-13T19:59:49.357611804+00:00 stderr F I0813 19:59:49.356070 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.404952 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.405109 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.406528 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.406576 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:49.411376676+00:00 stderr F I0813 19:59:49.409598 1 secure_serving.go:213] Serving securely on [::]:17698 2025-08-13T19:59:49.481618619+00:00 stderr F I0813 19:59:49.410082 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:49.488757592+00:00 stderr F I0813 19:59:49.481955 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:49.695002670+00:00 stderr F I0813 
19:59:49.694746 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:49.696477542+00:00 stderr F E0813 19:59:49.695955 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.696477542+00:00 stderr F E0813 19:59:49.696059 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.734727193+00:00 stderr F I0813 19:59:49.722956 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:49.734727193+00:00 stderr F E0813 19:59:49.723053 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.736936305+00:00 stderr F E0813 19:59:49.735304 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.736936305+00:00 stderr F E0813 19:59:49.735356 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.829455883+00:00 stderr F E0813 19:59:49.829356 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.850042510+00:00 stderr F E0813 19:59:49.849956 1 
configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.879030246+00:00 stderr F E0813 19:59:49.877548 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.891467700+00:00 stderr F E0813 19:59:49.891384 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.996736061+00:00 stderr F E0813 19:59:49.996679 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.996927636+00:00 stderr F E0813 19:59:49.996905 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.118993176+00:00 stderr F E0813 19:59:50.116700 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.167541430+00:00 stderr F E0813 19:59:50.166359 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.276515046+00:00 stderr F I0813 19:59:50.267147 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-08-13T19:59:50.293519981+00:00 stderr F E0813 
19:59:50.278355 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.315617001+00:00 stderr F I0813 19:59:50.307588 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:50.489017894+00:00 stderr F E0813 19:59:50.488212 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.618445184+00:00 stderr F E0813 19:59:50.618021 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.147165276+00:00 stderr F E0813 19:59:51.144391 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.258646404+00:00 stderr F E0813 19:59:51.258287 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:55.970593550+00:00 stderr F I0813 19:59:55.968309 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-08-13T19:59:55.970593550+00:00 stderr F I0813 19:59:55.970408 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 
2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977334 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977375 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977385 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977448 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-08-13T19:59:55.994104840+00:00 stderr F I0813 19:59:55.991731 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-08-13T19:59:56.043528339+00:00 stderr F I0813 19:59:56.043238 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2025-08-13T19:59:56.043528339+00:00 stderr F I0813 19:59:56.043391 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-08-13T19:59:56.347940657+00:00 stderr F I0813 19:59:56.345926 1 base_controller.go:73] Caches are synced for check-endpoints 2025-08-13T19:59:56.347940657+00:00 stderr F I0813 19:59:56.346237 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 2025-08-13T20:42:42.524890884+00:00 stderr F I0813 20:42:42.523195 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.log

2026-01-20T10:49:35.301279534+00:00 stderr F W0120 10:49:35.300913 1 cmd.go:245] Using insecure, self-signed certificates 2026-01-20T10:49:35.827812392+00:00 stderr F I0120 10:49:35.827526 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:35.874528344+00:00 stderr F I0120 10:49:35.874227 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2026-01-20T10:49:36.281252943+00:00 stderr F I0120 10:49:36.280053 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:36.281252943+00:00 stderr F W0120 10:49:36.280726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:36.281252943+00:00 stderr F W0120 10:49:36.280732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.287481 1 secure_serving.go:213] Serving securely on [::]:17698 2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.287536 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.288160 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-1516387770/tls.crt::/tmp/serving-cert-1516387770/tls.key" 2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.288301 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.288746 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.288763 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.288791 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.288828 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:36.288849234+00:00 stderr F I0120 10:49:36.288840 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:36.306771651+00:00 stderr F I0120 10:49:36.306723 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2026-01-20T10:49:36.389354656+00:00 stderr F I0120 10:49:36.389293 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2026-01-20T10:49:36.389398357+00:00 stderr F I0120 10:49:36.389351 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:36.389617494+00:00 stderr F I0120 10:49:36.389593 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.007515 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.008601 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.008700 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.008706 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.008710 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.008732 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.009850 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.009872 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2026-01-20T10:49:37.012127045+00:00 stderr F I0120 10:49:37.009878 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2026-01-20T10:49:37.111701868+00:00 stderr F I0120 10:49:37.111270 1 base_controller.go:73] Caches are synced for check-endpoints 2026-01-20T10:49:37.111701868+00:00 stderr F I0120 10:49:37.111312 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.log

2025-08-13T20:00:59.204934877+00:00 stderr F I0813 20:00:59.201500 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/ 2025-08-13T20:00:59.221560951+00:00 stderr F I0813 20:00:59.220554 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:59.227991934+00:00 stderr F I0813 20:00:59.227901 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:59.230144035+00:00 stderr F I0813 20:00:59.229979 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:00:59.237435533+00:00 stderr F I0813 20:00:59.235747 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:06.985203130+00:00 stderr F I0813 20:01:06.952113 1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145 2025-08-13T20:01:20.230947497+00:00 stderr F I0813 20:01:20.230035 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.230947497+00:00 stderr F W0813 20:01:20.230635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.230947497+00:00 stderr F W0813 20:01:20.230645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.949569307+00:00 stderr F I0813 20:01:20.949138 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:20.950898435+00:00 stderr F I0813 20:01:20.950870 1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock... 
2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957052 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957150 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957211 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957229 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957253 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957261 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:20.962225238+00:00 stderr F I0813 20:01:20.962192 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:20.962340592+00:00 stderr F I0813 20:01:20.962318 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.962574838+00:00 stderr F I0813 20:01:20.962552 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:21.071599137+00:00 stderr F I0813 20:01:21.071521 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:21.072017149+00:00 stderr F I0813 20:01:21.071742 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 
2025-08-13T20:01:21.072623786+00:00 stderr F I0813 20:01:21.071762 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:21.420290630+00:00 stderr F I0813 20:01:21.417525 1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock 2025-08-13T20:01:21.420502616+00:00 stderr F I0813 20:01:21.420447 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-768d5b5d86-722mg_41a3e9d3-dcbb-4c99-b08b-92fe107d13b5 became leader 2025-08-13T20:01:21.693231682+00:00 stderr F I0813 20:01:21.693176 1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0] 2025-08-13T20:01:21.803696622+00:00 stderr F I0813 20:01:21.803631 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114639 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", 
"NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114639 1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration 
VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114760 1 starter.go:499] waiting for cluster version informer sync... 2025-08-13T20:01:22.835208314+00:00 stderr F I0813 20:01:22.833505 1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers... 
2025-08-13T20:01:22.835208314+00:00 stderr F I0813 20:01:22.834925 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835586 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835631 1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835942 1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838357 1 base_controller.go:67] Waiting for caches to sync for FSyncController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838389 1 base_controller.go:73] Caches are synced for FSyncController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838398 1 base_controller.go:110] Starting #1 worker of FSyncController controller ... 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838614 1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838633 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843569 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843631 1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843638 1 base_controller.go:73] Caches are synced for EtcdCertCleanerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843644 1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ... 
2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843715 1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843734 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844119 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844149 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844166 1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844171 1 base_controller.go:73] Caches are synced for EtcdMembersController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844176 1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ... 
2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844207 1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844222 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844905 1 base_controller.go:67] Waiting for caches to sync for ScriptController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844929 1 base_controller.go:67] Waiting for caches to sync for DefragController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844961 1 envvarcontroller.go:193] Starting EnvVarController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844980 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:01:22.845886058+00:00 stderr F E0813 20:01:22.845191 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845358 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845681 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.846999 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847159 1 base_controller.go:67] Waiting 
for caches to sync for BackingResourceController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847181 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847196 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847225 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:01:22.851604501+00:00 stderr F I0813 20:01:22.851481 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:01:22.851868739+00:00 stderr F E0813 20:01:22.851701 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:22.852231279+00:00 stderr F I0813 20:01:22.852034 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T20:01:22.866823725+00:00 stderr F I0813 20:01:22.865982 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd 2025-08-13T20:01:22.866823725+00:00 stderr F I0813 20:01:22.866018 1 base_controller.go:73] Caches are synced for StatusSyncer_etcd 2025-08-13T20:01:22.866823725+00:00 stderr F I0813 20:01:22.866027 1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ... 
2025-08-13T20:01:22.907728821+00:00 stderr F E0813 20:01:22.907578 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:22.945245131+00:00 stderr F I0813 20:01:22.945145 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:22.945245131+00:00 stderr F I0813 20:01:22.945194 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:22.945523019+00:00 stderr F I0813 20:01:22.945492 1 base_controller.go:73] Caches are synced for DefragController 2025-08-13T20:01:22.945563420+00:00 stderr F I0813 20:01:22.945549 1 base_controller.go:110] Starting #1 worker of DefragController controller ... 2025-08-13T20:01:22.947247808+00:00 stderr F I0813 20:01:22.947181 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:01:22.947302060+00:00 stderr F I0813 20:01:22.947287 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:01:22.948105423+00:00 stderr F I0813 20:01:22.948050 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:01:22.948105423+00:00 stderr F I0813 20:01:22.948083 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T20:01:22.948126773+00:00 stderr F I0813 20:01:22.948109 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:22.948126773+00:00 stderr F I0813 20:01:22.948114 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:01:22.948223566+00:00 stderr F I0813 20:01:22.948206 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:01:22.948262877+00:00 stderr F I0813 20:01:22.948248 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:01:22.952234560+00:00 stderr F I0813 20:01:22.952136 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:01:22.952234560+00:00 stderr F I0813 20:01:22.952166 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T20:01:22.952550329+00:00 stderr F I0813 20:01:22.952431 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:22.956176783+00:00 stderr F E0813 20:01:22.956032 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:22.964645374+00:00 stderr F I0813 20:01:22.964470 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:22.988731301+00:00 stderr F E0813 20:01:22.988570 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.022307729+00:00 stderr F E0813 20:01:23.022169 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.040754585+00:00 stderr F I0813 20:01:23.040666 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.045177371+00:00 stderr F I0813 20:01:23.045113 1 base_controller.go:73] 
Caches are synced for ScriptController 2025-08-13T20:01:23.045177371+00:00 stderr F I0813 20:01:23.045158 1 base_controller.go:110] Starting #1 worker of ScriptController controller ... 2025-08-13T20:01:23.045208242+00:00 stderr F I0813 20:01:23.045200 1 envvarcontroller.go:199] caches synced 2025-08-13T20:01:23.082200166+00:00 stderr F E0813 20:01:23.082082 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.183869165+00:00 stderr F E0813 20:01:23.183735 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.240423948+00:00 stderr F I0813 20:01:23.240267 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.244659499+00:00 stderr F I0813 20:01:23.244611 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:01:23.244714180+00:00 stderr F I0813 20:01:23.244699 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.248015 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.248116 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.252637 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253148 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253159 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253185 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253189 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 
2025-08-13T20:01:23.363347963+00:00 stderr F E0813 20:01:23.363263 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.475565093+00:00 stderr F I0813 20:01:23.475449 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.478628840+00:00 stderr F E0813 20:01:23.478489 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:23.480253096+00:00 stderr F I0813 20:01:23.480151 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:23.481074550+00:00 stderr F I0813 20:01:23.481010 1 prune_controller.go:269] Nothing to prune 
2025-08-13T20:01:23.539188437+00:00 stderr F I0813 20:01:23.539021 1 base_controller.go:73] Caches are synced for EtcdStaticResources 2025-08-13T20:01:23.539188437+00:00 stderr F I0813 20:01:23.539124 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ... 2025-08-13T20:01:23.671176050+00:00 stderr F I0813 20:01:23.661078 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.711893141+00:00 stderr F E0813 20:01:23.710921 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737021 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737068 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737113 1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737119 1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ... 2025-08-13T20:01:23.739571481+00:00 stderr F I0813 20:01:23.739135 1 base_controller.go:73] Caches are synced for EtcdCertSignerController 2025-08-13T20:01:23.739571481+00:00 stderr F I0813 20:01:23.739217 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ... 2025-08-13T20:01:23.739752336+00:00 stderr F I0813 20:01:23.739723 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:01:23.739863029+00:00 stderr F I0813 20:01:23.739821 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2025-08-13T20:01:23.740017183+00:00 stderr F E0813 20:01:23.739997 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.745144849+00:00 stderr F I0813 20:01:23.745064 1 base_controller.go:73] Caches are synced for EtcdEndpointsController 2025-08-13T20:01:23.745144849+00:00 stderr F I0813 20:01:23.745101 1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ... 2025-08-13T20:01:23.745293434+00:00 stderr F I0813 20:01:23.745208 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:23.745365916+00:00 stderr F I0813 20:01:23.745308 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:01:23.745365916+00:00 stderr F I0813 20:01:23.745348 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.746990 1 base_controller.go:73] Caches are synced for ClusterMemberController 2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.747024 1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ... 
2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.747092 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:23.760225729+00:00 stderr F E0813 20:01:23.760115 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.770430700+00:00 stderr F E0813 20:01:23.770333 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.794754584+00:00 stderr F E0813 20:01:23.794566 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.837589226+00:00 stderr F E0813 20:01:23.835259 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.845963064+00:00 stderr F I0813 20:01:23.841472 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.876753702+00:00 stderr F E0813 20:01:23.876682 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:23.887741586+00:00 stderr F E0813 20:01:23.886505 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.903355931+00:00 stderr F I0813 20:01:23.893974 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:23.905662336+00:00 stderr F I0813 20:01:23.905547 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No 
unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:23.909276520+00:00 stderr F I0813 20:01:23.905930 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:23.917638608+00:00 stderr F E0813 20:01:23.917231 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.984460943+00:00 stderr F I0813 20:01:23.984397 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:24.011359700+00:00 stderr F E0813 20:01:24.008314 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced 
2025-08-13T20:01:24.011359700+00:00 stderr F E0813 20:01:24.009051 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:24.012053910+00:00 stderr F I0813 20:01:24.012017 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:24.030213498+00:00 stderr F E0813 20:01:24.026310 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.030213498+00:00 stderr F E0813 20:01:24.026659 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.038711430+00:00 stderr F I0813 20:01:24.037373 1 request.go:697] Waited for 1.198785213s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/secrets?limit=500&resourceVersion=0 2025-08-13T20:01:24.042444197+00:00 stderr F E0813 20:01:24.041417 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.043651721+00:00 stderr F I0813 20:01:24.043530 1 status_controller.go:218] 
clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:24.046381469+00:00 stderr F I0813 20:01:24.043917 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:24.048881210+00:00 stderr F I0813 20:01:24.048655 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:24.086625656+00:00 stderr F E0813 20:01:24.086488 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.169315404+00:00 stderr F E0813 20:01:24.169212 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.240874095+00:00 stderr F I0813 20:01:24.239677 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244433 1 base_controller.go:73] Caches are synced for BootstrapTeardownController 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244464 1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ... 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244495 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244500 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244696 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:24.331661633+00:00 stderr F E0813 20:01:24.330450 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.337326115+00:00 stderr F I0813 20:01:24.336511 1 base_controller.go:73] Caches are synced for MachineDeletionHooksController 2025-08-13T20:01:24.337326115+00:00 stderr F I0813 20:01:24.336565 1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ... 2025-08-13T20:01:24.359629601+00:00 stderr F E0813 20:01:24.358162 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:24.445604942+00:00 stderr F I0813 20:01:24.445404 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:24.551146152+00:00 stderr F I0813 20:01:24.547065 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:01:24.551146152+00:00 stderr F I0813 20:01:24.547110 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:01:24.652969385+00:00 stderr F E0813 20:01:24.652424 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values
2025-08-13T20:01:25.238667776+00:00 stderr F I0813 20:01:25.236481 1 request.go:697] Waited for 1.696867805s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd
2025-08-13T20:01:25.293606912+00:00 stderr F E0813 20:01:25.293553 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values
2025-08-13T20:01:25.678198389+00:00 stderr F E0813 20:01:25.678017 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:01:26.575472982+00:00 stderr F E0813 20:01:26.575394 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values
2025-08-13T20:01:28.261052465+00:00 stderr F E0813 20:01:28.258990 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:01:29.139588316+00:00 stderr F E0813 20:01:29.136873 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values
2025-08-13T20:01:29.233407391+00:00 stderr F I0813 20:01:29.233029 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:01:29.234221804+00:00 stderr F E0813 20:01:29.234128 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values
2025-08-13T20:01:29.261491642+00:00 stderr F E0813 20:01:29.261353 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]
2025-08-13T20:01:29.263634923+00:00 stderr F E0813 20:01:29.262756 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found
2025-08-13T20:01:29.263896160+00:00 stderr F I0813 20:01:29.263863 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EnvVarControllerUpdatingStatus' Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:01:29.364256102+00:00 stderr F E0813 20:01:29.360701 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:01:29.364256102+00:00 stderr F I0813 20:01:29.361877 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:31.538098707+00:00 stderr F I0813 20:01:31.502003 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:31.538098707+00:00 stderr F I0813 20:01:31.502226 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
2025-08-13T20:01:33.450883531+00:00 stderr F E0813 20:01:33.448764 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:01:35.546125383+00:00 stderr F I0813 20:01:35.545906 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:01:36.913351769+00:00 stderr F E0813 20:01:36.909745 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:01:36.913351769+00:00 stderr F I0813 20:01:36.913015 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:42.986081825+00:00 stderr F I0813 20:01:42.983366 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:01:42.998117788+00:00 stderr F I0813 20:01:42.995881 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:43.031426538+00:00 stderr F I0813 20:01:43.030573 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
2025-08-13T20:01:43.705192780+00:00 stderr F E0813 20:01:43.705110 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:01:56.176110932+00:00 stderr F E0813 20:01:56.135500 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:01:56.176110932+00:00 stderr F I0813 20:01:56.146405 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:56.176110932+00:00 stderr F I0813 20:01:56.168699 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:01:58.241888256+00:00 stderr F I0813 20:01:58.240961 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:58.244003966+00:00 stderr F I0813 20:01:58.243317 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
2025-08-13T20:01:59.464618610+00:00 stderr F E0813 20:01:59.454164 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:02:04.200344733+00:00 stderr F E0813 20:02:04.199129 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:02:22.853589283+00:00 stderr F E0813 20:02:22.852691 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:02:27.916072140+00:00 stderr F E0813 20:02:27.915438 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused - error from a previous attempt: dial tcp 10.217.4.1:443: connect: connection reset by peer, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:27.944047368+00:00 stderr F E0813 20:02:27.943599 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:28.310388068+00:00 stderr F E0813 20:02:28.310273 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:29.121359753+00:00 stderr F E0813 20:02:29.121111 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:29.910865446+00:00 stderr F E0813 20:02:29.910411 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:30.712047130+00:00 stderr F E0813 20:02:30.711543 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:31.521644476+00:00 stderr F E0813 20:02:31.521476 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:31.904943841+00:00 stderr F E0813 20:02:31.904879 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error on serving cert sync for node crc: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-serving-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.905151887+00:00 stderr F I0813 20:02:31.905066 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.909971884+00:00 stderr F E0813 20:02:31.909916 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}"
2025-08-13T20:02:31.913650089+00:00 stderr F E0813 20:02:31.913236 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.914249526+00:00 stderr F I0813 20:02:31.914150 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.917480528+00:00 stderr F E0813 20:02:31.917423 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.917612332+00:00 stderr F I0813 20:02:31.917558 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.930318194+00:00 stderr F E0813 20:02:31.930241 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.930375156+00:00 stderr F I0813 20:02:31.930349 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.975917815+00:00 stderr F E0813 20:02:31.975647 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.975917815+00:00 stderr F I0813 20:02:31.975896 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.059535441+00:00 stderr F E0813 20:02:32.059417 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.059535441+00:00 stderr F I0813 20:02:32.059488 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.223615501+00:00 stderr F E0813 20:02:32.223472 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.223615501+00:00 stderr F I0813 20:02:32.223557 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.310012026+00:00 stderr F E0813 20:02:32.309952 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status":
dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.548999514+00:00 stderr F E0813 20:02:32.548871 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.548999514+00:00 stderr F I0813 20:02:32.548968 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.111589303+00:00 stderr F E0813 20:02:33.111451 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:33.194473277+00:00 stderr F E0813 20:02:33.194344 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.194708674+00:00 stderr F I0813 20:02:33.194606 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.407945453+00:00 stderr F E0813 20:02:34.407394 1 
base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:34.481051079+00:00 stderr F E0813 20:02:34.480960 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.481220563+00:00 stderr F I0813 20:02:34.481118 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.983359442+00:00 stderr F E0813 20:02:36.983231 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:37.045231388+00:00 stderr F E0813 20:02:37.045168 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.045450154+00:00 stderr F I0813 20:02:37.045403 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.823228592+00:00 stderr F E0813 20:02:40.822097 1 
event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:02:42.119568465+00:00 stderr F E0813 20:02:42.119470 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, 
"etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.172278588+00:00 stderr F E0813 20:02:42.172223 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.172419712+00:00 stderr F I0813 20:02:42.172387 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' 
reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.181326239+00:00 stderr F E0813 20:02:45.181006 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:02:50.826440458+00:00 stderr F E0813 20:02:50.826086 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:02:52.378155643+00:00 stderr F E0813 20:02:52.378050 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: 
connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:52.440554073+00:00 stderr F E0813 20:02:52.440430 1 
base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.440626755+00:00 stderr F I0813 20:02:52.440586 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:00.828872299+00:00 stderr F E0813 20:03:00.828215 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:10.831582506+00:00 stderr F E0813 20:03:10.831026 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:12.883654656+00:00 stderr F E0813 20:03:12.883340 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:12.929214286+00:00 stderr F E0813 20:03:12.929073 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.929364070+00:00 stderr F I0813 20:03:12.929250 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:20.838115277+00:00 stderr F E0813 20:03:20.837631 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:22.859631275+00:00 stderr F E0813 20:03:22.859033 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 
2025-08-13T20:03:22.957431345+00:00 stderr F E0813 20:03:22.957273 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:22.967341118+00:00 stderr F E0813 20:03:22.967259 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:22.982678106+00:00 stderr F E0813 20:03:22.982628 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.008378959+00:00 stderr F E0813 20:03:23.008287 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.053133056+00:00 stderr F E0813 20:03:23.053046 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.053411613+00:00 stderr F I0813 20:03:23.053338 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.056392899+00:00 stderr F E0813 20:03:23.056266 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.057256643+00:00 stderr F E0813 20:03:23.057197 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.057374826+00:00 stderr F E0813 20:03:23.057317 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:23.057744617+00:00 stderr F E0813 20:03:23.057688 1 base_controller.go:266] "ScriptController" controller failed to sync "", 
err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.057887711+00:00 stderr F I0813 20:03:23.057738 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.062345748+00:00 stderr F E0813 20:03:23.062284 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.062372019+00:00 stderr F I0813 20:03:23.062353 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.356343475+00:00 stderr F E0813 20:03:23.356244 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.356714196+00:00 stderr F I0813 20:03:23.356653 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.557486883+00:00 stderr F E0813 20:03:23.557332 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.757595502+00:00 stderr F E0813 20:03:23.757413 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.757595502+00:00 stderr F I0813 20:03:23.757553 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.780075193+00:00 stderr F E0813 20:03:23.780005 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.786199928+00:00 stderr F E0813 20:03:23.786101 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.786246539+00:00 stderr F I0813 20:03:23.786202 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.787520656+00:00 stderr F E0813 20:03:23.787490 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:23.957981659+00:00 stderr F E0813 20:03:23.957886 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.957981659+00:00 stderr F I0813 20:03:23.957943 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.154655539+00:00 stderr F E0813 20:03:24.154590 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.158468888+00:00 stderr F E0813 20:03:24.158439 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.357587489+00:00 stderr F E0813 20:03:24.357513 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.357626330+00:00 stderr F I0813 20:03:24.357589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.505373634+00:00 stderr F E0813 20:03:24.505108 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.551723986+00:00 stderr F I0813 20:03:24.551623 1 request.go:697] Waited for 1.009458816s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd 2025-08-13T20:03:24.558413797+00:00 stderr F E0813 20:03:24.558307 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.558517610+00:00 stderr F I0813 20:03:24.558484 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.757933178+00:00 stderr F E0813 20:03:24.757746 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.758035851+00:00 stderr F I0813 20:03:24.757993 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.958230252+00:00 stderr F E0813 20:03:24.958137 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:25.153714949+00:00 stderr F E0813 20:03:25.153597 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.158041973+00:00 stderr F E0813 20:03:25.157992 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.158129965+00:00 stderr F I0813 
20:03:25.158072 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.358476540+00:00 stderr F E0813 20:03:25.358158 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.358615464+00:00 stderr F I0813 20:03:25.358296 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.558516896+00:00 stderr F E0813 20:03:25.558383 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.752531091+00:00 stderr F I0813 20:03:25.752453 1 request.go:697] Waited for 1.193181188s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts 2025-08-13T20:03:25.758576004+00:00 stderr F E0813 20:03:25.757732 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.758576004+00:00 stderr F I0813 20:03:25.757892 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.857071064+00:00 stderr F E0813 20:03:25.856945 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:25.958207469+00:00 stderr F E0813 20:03:25.958059 1 base_controller.go:266] "ScriptController" 
controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.958207469+00:00 stderr F I0813 20:03:25.958158 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.153532231+00:00 stderr F E0813 20:03:26.153414 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.158027719+00:00 stderr F E0813 20:03:26.157943 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.357712246+00:00 stderr F E0813 20:03:26.357613 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.357827169+00:00 stderr F I0813 20:03:26.357705 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.557380822+00:00 stderr F E0813 20:03:26.557258 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:26.758231992+00:00 stderr F E0813 20:03:26.758118 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.958319360+00:00 stderr F E0813 20:03:26.958176 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.958319360+00:00 stderr F I0813 20:03:26.958226 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.157973746+00:00 stderr F E0813 20:03:27.157640 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.157973746+00:00 stderr F I0813 20:03:27.157736 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.352905736+00:00 stderr F I0813 20:03:27.352124 1 request.go:697] Waited for 1.178087108s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller 2025-08-13T20:03:27.355495730+00:00 stderr F E0813 20:03:27.355381 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.558159942+00:00 stderr F E0813 20:03:27.558037 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, 
"etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:27.754411790+00:00 stderr F E0813 20:03:27.754272 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.957162104+00:00 stderr F E0813 20:03:27.957062 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.957613027+00:00 stderr F 
I0813 20:03:27.957217 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.156063508+00:00 stderr F E0813 20:03:28.155964 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.355594340+00:00 stderr F E0813 20:03:28.355485 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:28.444511917+00:00 stderr F E0813 20:03:28.444455 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.444640480+00:00 stderr F I0813 20:03:28.444615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.551331324+00:00 stderr F I0813 20:03:28.551276 1 request.go:697] Waited for 1.154931357s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller 2025-08-13T20:03:28.553475925+00:00 stderr F E0813 20:03:28.553451 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.762089566+00:00 stderr F E0813 20:03:28.761998 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.957700287+00:00 stderr F E0813 20:03:28.957497 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.957700287+00:00 stderr F I0813 20:03:28.957571 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.355682610+00:00 stderr F E0813 20:03:29.355320 1 base_controller.go:268] InstallerStateController 
reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.537526057+00:00 stderr F E0813 20:03:29.537405 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:29.556090137+00:00 stderr F E0813 20:03:29.555992 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.765203002+00:00 stderr F E0813 20:03:29.765039 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:29.765297575+00:00 stderr F I0813 20:03:29.765179 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.958615090+00:00 stderr F E0813 20:03:29.958470 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.153994804+00:00 stderr F E0813 20:03:30.153764 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.359384223+00:00 stderr F E0813 20:03:30.359136 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:30.555817727+00:00 stderr F E0813 20:03:30.555628 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.755600527+00:00 stderr F E0813 20:03:30.755477 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.755941116+00:00 stderr F I0813 20:03:30.755697 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.840899200+00:00 stderr F E0813 20:03:30.840733 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:31.012560357+00:00 stderr F E0813 20:03:31.012387 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.012560357+00:00 stderr F I0813 20:03:31.012536 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.159309693+00:00 stderr F E0813 20:03:31.159218 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.356281642+00:00 stderr F E0813 20:03:31.356157 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.356618142+00:00 stderr F I0813 20:03:31.356536 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.554473645+00:00 stderr F E0813 20:03:31.554337 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.755144050+00:00 stderr F E0813 20:03:31.755021 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.955864116+00:00 stderr F E0813 20:03:31.955695 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.956168485+00:00 stderr F I0813 20:03:31.956101 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.353411897+00:00 stderr F E0813 20:03:32.353282 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.555996746+00:00 stderr F E0813 20:03:32.555895 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:32.556484370+00:00 stderr F I0813 20:03:32.556404 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.757106004+00:00 stderr F E0813 20:03:32.756929 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.958043466+00:00 stderr F E0813 20:03:32.957913 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.160642345+00:00 stderr F E0813 20:03:33.156525 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.160642345+00:00 stderr F I0813 20:03:33.158978 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.559447462+00:00 stderr F E0813 20:03:33.558753 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:33.757573125+00:00 stderr F E0813 20:03:33.757461 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.757573125+00:00 stderr F I0813 20:03:33.757547 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.957753815+00:00 stderr F E0813 20:03:33.957575 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.161032784+00:00 stderr F E0813 20:03:34.159741 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.355140732+00:00 stderr F E0813 20:03:34.355038 1 base_controller.go:268] 
ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.355239124+00:00 stderr F I0813 20:03:34.355179 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.755950416+00:00 stderr F E0813 20:03:34.755750 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.956662761+00:00 stderr F E0813 20:03:34.956237 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.956662761+00:00 stderr F I0813 20:03:34.956341 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.169619416+00:00 stderr F E0813 20:03:35.169293 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:03:35.557111580+00:00 stderr F E0813 20:03:35.557005 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.861442162+00:00 stderr F E0813 20:03:35.861273 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:35.962182946+00:00 stderr F E0813 20:03:35.962038 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.962252108+00:00 stderr F I0813 20:03:35.962169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.139711820+00:00 stderr F E0813 20:03:36.139611 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.139925116+00:00 stderr F I0813 20:03:36.139700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.157481337+00:00 stderr F E0813 20:03:36.157376 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.556533341+00:00 stderr F E0813 20:03:36.556330 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.556533341+00:00 stderr F I0813 20:03:36.556415 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.759466900+00:00 stderr F E0813 20:03:36.759334 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.953633649+00:00 stderr F E0813 20:03:36.953531 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.356673327+00:00 stderr F E0813 20:03:37.356579 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.555997593+00:00 stderr F E0813 20:03:37.555915 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.959219406+00:00 stderr F E0813 20:03:37.959149 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.355437569+00:00 stderr F E0813 20:03:38.355380 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.556167655+00:00 stderr F E0813 20:03:38.556111 1 
base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.556307989+00:00 stderr F I0813 20:03:38.556219 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.955301350+00:00 stderr F E0813 20:03:38.955074 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:39.156957833+00:00 stderr F E0813 20:03:39.156656 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.356349261+00:00 stderr F E0813 20:03:39.356168 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:39.356349261+00:00 stderr F I0813 20:03:39.356305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.541460462+00:00 stderr F E0813 20:03:39.541323 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:39.760090389+00:00 stderr F E0813 20:03:39.758278 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:40.410572265+00:00 stderr F E0813 20:03:40.410156 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:40.554832701+00:00 stderr F E0813 20:03:40.554724 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:40.844255727+00:00 stderr F E0813 20:03:40.844164 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:41.697942270+00:00 stderr F E0813 20:03:41.696580 1 base_controller.go:268] TargetConfigController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.076366176+00:00 stderr F E0813 20:03:42.076310 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.683909466+00:00 stderr F E0813 20:03:42.683724 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.122655472+00:00 stderr F E0813 20:03:43.122546 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.681258448+00:00 stderr F E0813 20:03:43.681139 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.681320709+00:00 stderr F I0813 20:03:43.681273 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.263280191+00:00 stderr F E0813 20:03:44.263223 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.482826264+00:00 stderr F E0813 20:03:44.482146 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.482826264+00:00 stderr F I0813 20:03:44.482228 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:45.864068586+00:00 stderr F E0813 20:03:45.863953 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:46.393626573+00:00 stderr F E0813 20:03:46.393442 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.393626573+00:00 stderr F I0813 20:03:46.393548 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:48.262920558+00:00 stderr F E0813 20:03:48.262499 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.207182985+00:00 stderr F E0813 20:03:49.207077 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:49.391407061+00:00 stderr F 
E0813 20:03:49.391305 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.545107794+00:00 stderr F E0813 20:03:49.544819 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:50.846910431+00:00 stderr F E0813 20:03:50.846526 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:52.325943768+00:00 stderr F E0813 20:03:52.322551 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.940969334+00:00 stderr F E0813 20:03:52.940373 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.932722016+00:00 stderr F E0813 20:03:53.931117 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:53.950185344+00:00 stderr F E0813 20:03:53.946198 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.950185344+00:00 stderr F I0813 20:03:53.946277 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.974173028+00:00 stderr F E0813 20:03:53.974113 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.974330663+00:00 stderr F I0813 20:03:53.974301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.735887598+00:00 stderr F E0813 20:03:54.734474 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.735887598+00:00 stderr F I0813 20:03:54.735026 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:03:55.873922234+00:00 stderr F E0813 20:03:55.873512 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:58.509095808+00:00 stderr F E0813 20:03:58.508630 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.547894352+00:00 stderr F E0813 20:03:59.547725 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:59.658252940+00:00 stderr F E0813 20:03:59.647921 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:00.859073585+00:00 stderr F E0813 20:04:00.855063 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:05.881238883+00:00 stderr F E0813 20:04:05.878083 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:07.554363661+00:00 stderr F E0813 20:04:07.554312 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:07.554545866+00:00 stderr F I0813 20:04:07.554507 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.551030160+00:00 stderr F E0813 20:04:09.550761 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:09.724299903+00:00 stderr F E0813 20:04:09.722238 1 base_controller.go:268] BackingResourceController reconciliation failed: 
["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:10.862056161+00:00 stderr F E0813 20:04:10.861468 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:12.806143739+00:00 stderr F E0813 20:04:12.806031 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:13.424606892+00:00 stderr F E0813 20:04:13.424481 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:14.434228454+00:00 stderr F E0813 20:04:14.434114 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:14.434322137+00:00 stderr F I0813 20:04:14.434260 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.221155572+00:00 stderr F E0813 20:04:15.221031 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.221155572+00:00 stderr F I0813 20:04:15.221124 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:15.894947624+00:00 stderr F E0813 20:04:15.892361 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:18.996137771+00:00 stderr F E0813 20:04:18.995688 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.556094465+00:00 stderr F E0813 20:04:19.555673 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:20.142189435+00:00 stderr F E0813 20:04:20.142129 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.869223985+00:00 stderr F E0813 20:04:20.867149 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:20.869223985+00:00 stderr F E0813 20:04:20.867241 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:20.869580806+00:00 stderr F E0813 20:04:20.869524 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 
openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:22.857735944+00:00 stderr F E0813 20:04:22.857600 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:04:22.961238868+00:00 stderr F E0813 20:04:22.959978 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.051968907+00:00 stderr F E0813 
20:04:23.051911 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.052106241+00:00 stderr F I0813 20:04:23.052049 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.052967065+00:00 stderr F E0813 20:04:23.052883 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.054510849+00:00 stderr F E0813 20:04:23.054456 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.054532100+00:00 stderr F I0813 20:04:23.054510 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.261045995+00:00 stderr F E0813 20:04:23.260917 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.262444295+00:00 stderr F E0813 20:04:23.262379 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.559501720+00:00 stderr F E0813 20:04:23.559378 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.760595529+00:00 stderr F E0813 20:04:23.760494 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.761087813+00:00 stderr F E0813 20:04:23.760634 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.761243628+00:00 stderr F I0813 20:04:23.760744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.238534455+00:00 stderr F E0813 20:04:24.236428 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:24.238534455+00:00 stderr F I0813 20:04:24.237021 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.511558044+00:00 stderr F E0813 20:04:24.510976 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.895454472+00:00 stderr F E0813 20:04:25.895088 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:29.439894403+00:00 stderr F E0813 20:04:29.438675 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:29.557961674+00:00 stderr F E0813 20:04:29.557907 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:35.898907446+00:00 stderr F E0813 20:04:35.898239 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:39.444014303+00:00 stderr F E0813 20:04:39.442629 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:39.582918490+00:00 stderr F E0813 20:04:39.582396 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:45.905067071+00:00 stderr F E0813 20:04:45.903678 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:48.532965973+00:00 stderr F E0813 20:04:48.528929 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:48.532965973+00:00 stderr F I0813 20:04:48.532009 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:49.445322549+00:00 stderr F E0813 20:04:49.444886 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 
20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:49.585467953+00:00 stderr F E0813 20:04:49.585240 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:50.693385169+00:00 stderr F E0813 20:04:50.693311 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:53.773511932+00:00 stderr F E0813 20:04:53.773174 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:54.390655633+00:00 stderr F E0813 20:04:54.390251 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.404886357+00:00 stderr F E0813 20:04:55.404737 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.405273519+00:00 stderr F I0813 20:04:55.404969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.909447746+00:00 stderr F E0813 20:04:55.909312 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 
openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:56.190326910+00:00 stderr F E0813 20:04:56.190154 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:56.190381151+00:00 stderr F I0813 20:04:56.190310 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.452388172+00:00 stderr F E0813 20:04:59.451291 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 
10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:59.593054480+00:00 stderr F E0813 20:04:59.591542 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC 
m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:00.060713062+00:00 stderr F E0813 20:05:00.058154 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:01.130496844+00:00 stderr F E0813 20:05:01.130339 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.967346406+00:00 stderr F E0813 20:05:05.966528 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:05.967413718+00:00 stderr F E0813 20:05:05.967379 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:05.969130917+00:00 stderr F E0813 20:05:05.969101 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c29267bd768\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put 
\"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.957751582 +0000 UTC m=+145.691815432,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:09.457429598+00:00 stderr F E0813 20:05:09.456992 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:05:09.598327073+00:00 stderr F E0813 20:05:09.598138 1 event.go:355] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:09.598327073+00:00 stderr F E0813 20:05:09.598283 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 
20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:09.600359401+00:00 stderr F E0813 20:05:09.600235 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c28facabbfa\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.057670635 +0000 UTC m=+144.791734625,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:13.643070308+00:00 stderr F E0813 20:05:13.642637 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c28facabbfa\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.057670635 +0000 UTC m=+144.791734625,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:13.767411739+00:00 stderr F E0813 20:05:13.767326 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c29267bd768\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.957751582 +0000 UTC m=+145.691815432,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:19.462051600+00:00 stderr F E0813 20:05:19.461334 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:05:22.879651976+00:00 stderr F E0813 20:05:22.878646 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:05:29.048246551+00:00 stderr F E0813 20:05:29.046023 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:06:02.210131115+00:00 stderr F I0813 20:06:02.208595 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:03.033576196+00:00 stderr F I0813 20:06:03.033438 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.445424067+00:00 stderr F I0813 20:06:06.444339 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:07.608175334+00:00 stderr F I0813 20:06:07.608122 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.612634827+00:00 stderr F I0813 20:06:08.611959 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:09.414593552+00:00 stderr F I0813 20:06:09.414529 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:12.348169958+00:00 stderr F I0813 20:06:12.348000 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:12.836973505+00:00 stderr F I0813 20:06:12.835622 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:15.988378780+00:00 stderr F I0813 20:06:15.985500 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:22.843903184+00:00 stderr F I0813 20:06:22.842449 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:22.861297962+00:00 stderr F E0813 20:06:22.861183 1 base_controller.go:268] 
FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:06:23.072250683+00:00 stderr F I0813 20:06:23.072111 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:23.617974219+00:00 stderr F I0813 20:06:23.617182 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.659640189+00:00 stderr F I0813 20:06:24.659469 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=etcds from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.676183782+00:00 stderr F I0813 20:06:24.675972 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:27.158032882+00:00 stderr F I0813 20:06:27.157843 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:27.558221692+00:00 stderr F I0813 20:06:27.558152 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.358379415+00:00 stderr F I0813 20:06:28.358311 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.174883817+00:00 stderr F I0813 20:06:29.174307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.207098629+00:00 stderr F I0813 20:06:29.206943 1 reflector.go:351] Caches populated for *v1beta1.Machine from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.561481258+00:00 stderr F I0813 20:06:29.557522 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:30.560253238+00:00 stderr F I0813 
20:06:30.560145 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.778853495+00:00 stderr F I0813 20:06:31.706217 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:32.160490047+00:00 stderr F I0813 20:06:32.160293 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.148306849+00:00 stderr F I0813 20:06:33.147728 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.372511267+00:00 stderr F I0813 20:06:33.365420 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:34.644663810+00:00 stderr F I0813 20:06:34.644608 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:37.201202349+00:00 stderr F I0813 20:06:37.200137 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:39.541999251+00:00 stderr F I0813 20:06:39.522707 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:42.650540216+00:00 stderr F I0813 20:06:42.649910 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.907959937+00:00 stderr F I0813 20:06:43.897571 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.809039123+00:00 stderr F I0813 20:06:45.808925 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:50.511690451+00:00 stderr F I0813 20:06:50.502367 
1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:50.553990194+00:00 stderr F I0813 20:06:50.552587 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:57.566845679+00:00 stderr F I0813 20:06:57.561917 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:02.371117412+00:00 stderr F I0813 20:07:02.370179 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:04.722915769+00:00 stderr F I0813 20:07:04.722764 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:10.046854770+00:00 stderr F I0813 20:07:10.044642 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:22.928281282+00:00 stderr F E0813 20:07:22.926421 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:08:22.866636195+00:00 stderr F E0813 20:08:22.865703 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:08:24.161995414+00:00 stderr F E0813 20:08:24.161937 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error on peer cert sync for node crc: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-peer-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.184943762+00:00 stderr F I0813 20:08:24.182280 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.223274251+00:00 stderr F E0813 20:08:24.222855 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:24.359258560+00:00 stderr F I0813 20:08:24.358504 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 
'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.359258560+00:00 stderr F E0813 20:08:24.359213 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.569482947+00:00 stderr F E0813 20:08:24.567947 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.569482947+00:00 stderr F I0813 20:08:24.568045 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.690223139+00:00 stderr F E0813 20:08:24.689972 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.757401485+00:00 stderr F E0813 20:08:24.757221 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.757401485+00:00 stderr F I0813 20:08:24.757345 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.956251467+00:00 stderr F E0813 20:08:24.956169 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.956301158+00:00 stderr F I0813 20:08:24.956261 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.153401538+00:00 stderr F E0813 20:08:25.153177 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.155126058+00:00 stderr F I0813 20:08:25.153268 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.350054066+00:00 stderr F E0813 20:08:25.350001 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.350190790+00:00 stderr F I0813 20:08:25.350166 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.716429421+00:00 stderr F E0813 20:08:25.714732 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.716429421+00:00 stderr F I0813 20:08:25.714869 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.387466040+00:00 stderr F E0813 20:08:26.377075 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.387466040+00:00 stderr F I0813 20:08:26.377191 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.679662479+00:00 stderr F E0813 20:08:27.678724 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.679662479+00:00 stderr F I0813 20:08:27.678885 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.268107832+00:00 stderr F E0813 20:08:30.267513 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.268107832+00:00 stderr F I0813 20:08:30.267695 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.645977526+00:00 stderr F E0813 20:08:31.645840 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:35.396656560+00:00 stderr F E0813 20:08:35.393708 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.396656560+00:00 stderr F I0813 20:08:35.394473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.650405050+00:00 stderr F E0813 20:08:41.648539 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:45.655396576+00:00 stderr F E0813 20:08:45.654638 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:45.655446358+00:00 stderr F I0813 20:08:45.654841 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.655745521+00:00 stderr F E0813 20:08:51.655347 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:09:22.866896272+00:00 stderr F E0813 20:09:22.866157 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:09:28.690253803+00:00 stderr F I0813 20:09:28.689154 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:29.796471329+00:00 stderr F I0813 20:09:29.796019 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:29.950505895+00:00 stderr F I0813 20:09:29.950372 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:30.868878596+00:00 stderr F I0813 20:09:30.868418 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.896868311+00:00 stderr F I0813 20:09:32.896678 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:33.825653849+00:00 stderr F I0813 20:09:33.825145 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:34.341240701+00:00 stderr F I0813 20:09:34.341160 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:34.508565869+00:00 stderr F I0813 20:09:34.508463 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.550575104+00:00 stderr F I0813 20:09:35.550203 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:36.763696894+00:00 stderr F I0813 20:09:36.763292 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:37.445863483+00:00 stderr F I0813 20:09:37.445703 1 reflector.go:351] Caches populated for *v1.Secret from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.041151681+00:00 stderr F I0813 20:09:39.040874 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.059841067+00:00 stderr F I0813 20:09:39.059728 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.775376112+00:00 stderr F I0813 20:09:39.775289 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.877276473+00:00 stderr F I0813 20:09:39.877139 1 reflector.go:351] Caches populated for *v1beta1.Machine from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:42.554569883+00:00 stderr F I0813 20:09:42.553989 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:44.886981005+00:00 stderr F I0813 20:09:44.885150 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.269155223+00:00 stderr F I0813 20:09:45.269000 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.299185674+00:00 stderr F I0813 20:09:45.291372 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=etcds from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.301321245+00:00 stderr F I0813 20:09:45.301283 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.321293898+00:00 stderr F I0813 20:09:45.305596 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:45.683142992+00:00 stderr F I0813 20:09:45.683015 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.246210666+00:00 stderr F I0813 20:09:46.246035 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.483549671+00:00 stderr F I0813 20:09:46.480153 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.069358597+00:00 stderr F I0813 20:09:48.066045 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.655359978+00:00 stderr F I0813 20:09:48.655206 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.226945617+00:00 stderr F I0813 20:09:50.224154 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.332282769+00:00 stderr F I0813 20:09:52.331846 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:54.468656401+00:00 stderr F I0813 20:09:54.468339 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.661100629+00:00 stderr F I0813 20:09:55.652730 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:56.423555399+00:00 stderr F I0813 20:09:56.423197 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:08.511851241+00:00 stderr F I0813 20:10:08.507574 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:14.081605350+00:00 stderr F I0813 20:10:14.079826 1 reflector.go:351] Caches populated for *v1.FeatureGate from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:21.924516261+00:00 stderr F I0813 20:10:21.923721 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:22.870503533+00:00 stderr F E0813 20:10:22.870263 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:10:25.574907120+00:00 stderr F I0813 20:10:25.574734 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:35.172134291+00:00 stderr F I0813 20:10:35.168313 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:43.224549840+00:00 stderr F I0813 20:10:43.223556 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:11:22.868676659+00:00 stderr F E0813 20:11:22.868202 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:12:22.881343790+00:00 stderr F E0813 20:12:22.877458 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:13:22.867875010+00:00 stderr F E0813 20:13:22.867581 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:14:22.868843436+00:00 
stderr F E0813 20:14:22.868197 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:15:22.867512094+00:00 stderr F E0813 20:15:22.867177 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:16:22.870828367+00:00 stderr F E0813 20:16:22.870141 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:17:22.876213876+00:00 stderr F E0813 20:17:22.875455 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:18:22.877519331+00:00 stderr F E0813 20:18:22.876956 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:19:22.880135864+00:00 stderr F E0813 20:19:22.879267 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:20:22.877295837+00:00 stderr F E0813 20:20:22.876172 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 
10.217.4.10:53: no such host
2025-08-13T20:21:22.884254785+00:00 stderr F E0813 20:21:22.878816 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:22:09.064314212+00:00 stderr F E0813 20:22:09.063425 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:22:22.884322808+00:00 stderr F E0813 20:22:22.882273 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:23:22.874296613+00:00 stderr F E0813 20:23:22.873563 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:24:22.880233946+00:00 stderr F E0813 20:24:22.879473 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:25:22.877911382+00:00 stderr F E0813 20:25:22.877520 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:26:22.890656328+00:00 stderr F E0813 20:26:22.889040 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:27:22.881600995+00:00 stderr F E0813 20:27:22.881267 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:28:22.881465280+00:00 stderr F E0813 20:28:22.881095 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:29:22.880966008+00:00 stderr F E0813 20:29:22.880293 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:30:22.883764059+00:00 stderr F E0813 20:30:22.883122 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:31:22.885060755+00:00 stderr F E0813 20:31:22.884194 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:32:22.891502994+00:00 stderr F E0813 20:32:22.890695 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:33:22.884361515+00:00 stderr F E0813 20:33:22.883390 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:34:22.885392643+00:00 stderr F E0813 20:34:22.884208 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:35:22.886406289+00:00 stderr F E0813 20:35:22.885518 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:36:22.891921201+00:00 stderr F E0813 20:36:22.889617 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:37:22.886728388+00:00 stderr F E0813 20:37:22.886091 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:38:22.891078907+00:00 stderr F E0813 20:38:22.890725 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:38:49.076961309+00:00 stderr F E0813 20:38:49.076496 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:39:22.889738503+00:00 stderr F E0813 20:39:22.888686 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:40:22.889029618+00:00 stderr F E0813 20:40:22.888462 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:41:22.895517819+00:00 stderr F E0813 20:41:22.893211 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:42:22.891578222+00:00 stderr F E0813 20:42:22.891138 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T20:42:36.405189682+00:00 stderr F I0813 20:42:36.341327 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.344825 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.352873 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.352895 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.418628739+00:00 stderr F I0813 20:42:36.352915 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.352971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353002 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353046 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353086 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353132 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353217 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353266 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.450603191+00:00 stderr F I0813 20:42:36.353283 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.451313861+00:00 stderr F I0813 20:42:36.353299 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.451663361+00:00 stderr F I0813 20:42:36.353316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.463306937+00:00 stderr F I0813 20:42:36.353361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.463971006+00:00 stderr F I0813 20:42:36.353378 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.464496041+00:00 stderr F I0813 20:42:36.353393 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.464896573+00:00 stderr F I0813 20:42:36.353411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.465287254+00:00 stderr F I0813 20:42:36.353428 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.465534021+00:00 stderr F I0813 20:42:36.353445 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.466008965+00:00 stderr F I0813 20:42:36.353460 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.466395156+00:00 stderr F I0813 20:42:36.353475 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.466717065+00:00 stderr F I0813 20:42:36.353491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.467177499+00:00 stderr F I0813 20:42:36.353506 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.467517179+00:00 stderr F I0813 20:42:36.353521 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.467915070+00:00 stderr F I0813 20:42:36.353535 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.468287871+00:00 stderr F I0813 20:42:36.353552 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.468517787+00:00 stderr F I0813 20:42:36.353568 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.468972860+00:00 stderr F I0813 20:42:36.353585 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.469270839+00:00 stderr F I0813 20:42:36.353602 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.469545767+00:00 stderr F I0813 20:42:36.353617 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.511689472+00:00 stderr F I0813 20:42:36.353630 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.531136833+00:00 stderr F I0813 20:42:36.353647 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.531136833+00:00 stderr F I0813 20:42:36.342482 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.120750 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125509 1 base_controller.go:172] Shutting down ClusterMemberController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125549 1 base_controller.go:172] Shutting down RevisionController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125596 1 base_controller.go:172] Shutting down EtcdEndpointsController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125613 1 base_controller.go:172] Shutting down TargetConfigController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125628 1 base_controller.go:172] Shutting down EtcdCertSignerController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125644 1 base_controller.go:172] Shutting down ClusterMemberRemovalController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125663 1 base_controller.go:172] Shutting down MissingStaticPodController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125685 1 base_controller.go:172] Shutting down EtcdStaticResources ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125703 1 base_controller.go:172] Shutting down StaticPodStateController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125717 1 base_controller.go:172] Shutting down InstallerStateController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125732 1 base_controller.go:172] Shutting down GuardController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125750 1 base_controller.go:172] Shutting down InstallerController ...
2025-08-13T20:42:41.126171969+00:00 stderr F I0813 20:42:41.126131 1 base_controller.go:172] Shutting down ScriptController ...
2025-08-13T20:42:41.126296192+00:00 stderr F I0813 20:42:41.126270 1 base_controller.go:172] Shutting down PruneController ...
2025-08-13T20:42:41.126367504+00:00 stderr F I0813 20:42:41.126349 1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:41.126429406+00:00 stderr F I0813 20:42:41.126412 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:42:41.126499908+00:00 stderr F I0813 20:42:41.126481 1 base_controller.go:172] Shutting down BackingResourceController ...
2025-08-13T20:42:41.126563860+00:00 stderr F I0813 20:42:41.126545 1 base_controller.go:172] Shutting down NodeController ...
2025-08-13T20:42:41.126642342+00:00 stderr F I0813 20:42:41.126618 1 base_controller.go:172] Shutting down DefragController ...
2025-08-13T20:42:41.126717944+00:00 stderr F I0813 20:42:41.126697 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:42:41.126853818+00:00 stderr F I0813 20:42:41.126765 1 base_controller.go:172] Shutting down StatusSyncer_etcd ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127575 1 base_controller.go:172] Shutting down BootstrapTeardownController ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127632 1 base_controller.go:172] Shutting down ResourceSyncController ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127646 1 base_controller.go:172] Shutting down MachineDeletionHooksController ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127671 1 base_controller.go:114] Shutting down worker of ClusterMemberController controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127675 1 base_controller.go:172] Shutting down ConfigObserver ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127706 1 base_controller.go:114] Shutting down worker of BootstrapTeardownController controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.128508 1 envvarcontroller.go:209] Shutting down EnvVarController
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127684 1 base_controller.go:104] All ClusterMemberController workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F E0813 20:42:41.128859 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127889 1 base_controller.go:104] All BootstrapTeardownController workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129352 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129526 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129543 1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129560 1 base_controller.go:114] Shutting down worker of MachineDeletionHooksController controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129570 1 base_controller.go:104] All MachineDeletionHooksController workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129630 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129654 1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129830 1 base_controller.go:172] Shutting down EtcdMembersController ...
2025-08-13T20:42:41.129907136+00:00 stderr F I0813 20:42:41.129897 1 base_controller.go:172] Shutting down EtcdCertCleanerController ...
2025-08-13T20:42:41.129981228+00:00 stderr F I0813 20:42:41.126895 1 base_controller.go:150] All StatusSyncer_etcd post start hooks have been terminated
2025-08-13T20:42:41.132899923+00:00 stderr F I0813 20:42:41.130971 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:42:41.132899923+00:00 stderr F W0813 20:42:41.132287 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.log
2025-08-13T19:59:19.439582302+00:00 stderr F I0813 19:59:18.611601 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:19.439582302+00:00 stderr F I0813 19:59:18.511681 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/
2025-08-13T19:59:19.497175674+00:00 stderr F I0813 19:59:18.676219 1 cmd.go:240] Using service-serving-cert provided certificates
2025-08-13T19:59:19.497590426+00:00 stderr F I0813 19:59:19.497506 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:19.742684212+00:00 stderr F I0813 19:59:19.687992 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:24.021392759+00:00 stderr F I0813 19:59:24.020506 1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145
2025-08-13T19:59:29.136189326+00:00 stderr F I0813 19:59:29.134492 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:29.136189326+00:00 stderr F W0813 19:59:29.135111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:29.136189326+00:00 stderr F W0813 19:59:29.135122 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:29.236993950+00:00 stderr F I0813 19:59:29.235641 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:29.242432165+00:00 stderr F I0813 19:59:29.240557 1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:29.242432165+00:00 stderr F I0813 19:59:29.241447 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:29.273372357+00:00 stderr F I0813 19:59:29.272043 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:29.313331306+00:00 stderr F I0813 19:59:29.312113 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.313331306+00:00 stderr F I0813 19:59:29.312268 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:29.364638288+00:00 stderr F I0813 19:59:29.364375 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:29.367155910+00:00 stderr F I0813 19:59:29.365767 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:29.555894250+00:00 stderr F I0813 19:59:29.546100 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:29.555894250+00:00 stderr F I0813 19:59:29.546695 1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock...
2025-08-13T19:59:29.569083816+00:00 stderr F I0813 19:59:29.567958 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:29.569083816+00:00 stderr F I0813 19:59:29.569001 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:29.713290977+00:00 stderr F I0813 19:59:29.713230 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:29.724559348+00:00 stderr F E0813 19:59:29.722298 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.725038032+00:00 stderr F E0813 19:59:29.725012 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.736378395+00:00 stderr F E0813 19:59:29.730717 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.736378395+00:00 stderr F E0813 19:59:29.734086 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.743035945+00:00 stderr F E0813 19:59:29.742379 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.744939749+00:00 stderr F E0813 19:59:29.744270 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.766698299+00:00 stderr F E0813 19:59:29.763559 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.766698299+00:00 stderr F E0813 19:59:29.766038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.774469311+00:00 stderr F I0813 19:59:29.774188 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:29.912187596+00:00 stderr F E0813 19:59:29.908002 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.912187596+00:00 stderr F E0813 19:59:29.908064 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.990180180+00:00 stderr F E0813 19:59:29.990117 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.039085414+00:00 stderr F E0813 19:59:30.039024 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.167941217+00:00 stderr F E0813 19:59:30.167862 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.204724855+00:00 stderr F E0813 19:59:30.204571 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.489536614+00:00 stderr F E0813 19:59:30.488746 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.525665924+00:00 stderr F E0813 19:59:30.525598 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:31.151526054+00:00 stderr F E0813 19:59:31.151456 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:31.175321802+00:00 stderr F I0813 19:59:31.175063 1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock
2025-08-13T19:59:31.175363553+00:00 stderr F E0813 19:59:31.175316 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:31.208255471+00:00 stderr F I0813 19:59:31.208171 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28128", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-768d5b5d86-722mg_13b0a2d9-9b76-44b9-abad-e471c2f65ca3 became leader
2025-08-13T19:59:31.550569339+00:00 stderr F I0813 19:59:31.549964 1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0]
2025-08-13T19:59:32.441695689+00:00 stderr F E0813 19:59:32.437879 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:32.458201790+00:00 stderr F E0813 19:59:32.458107 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:33.874763240+00:00 stderr F I0813 19:59:33.874019 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:34.190295535+00:00 stderr F I0813 19:59:34.184537 1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:34.190295535+00:00 stderr F I0813 19:59:34.184682 1 starter.go:499] waiting for cluster version informer sync...
2025-08-13T19:59:34.282931475+00:00 stderr F I0813 19:59:34.280679 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:34.412808117+00:00 stderr F I0813 19:59:34.403536 1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers...
2025-08-13T19:59:34.599604162+00:00 stderr F I0813 19:59:34.599224 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController
2025-08-13T19:59:35.074728296+00:00 stderr F I0813 19:59:35.074419 1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController
2025-08-13T19:59:35.080904012+00:00 stderr F I0813 19:59:35.080535 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.115441 1 base_controller.go:67] Waiting for caches to sync for FSyncController
2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.247896 1 base_controller.go:73] Caches are synced for FSyncController
2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.247922 1 base_controller.go:110] Starting #1 worker of FSyncController controller ...
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243497 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243594 1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243610 1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313378 1 base_controller.go:73] Caches are synced for EtcdCertCleanerController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313393 1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ...
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243630 1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243649 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243908 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243926 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243939 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243951 1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313995 1 base_controller.go:73] Caches are synced for EtcdMembersController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.314001 1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ...
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243962 1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController 2025-08-13T19:59:35.315551700+00:00 stderr F E0813 19:59:35.314409 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243973 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.314517 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243992 1 base_controller.go:67] Waiting for caches to sync for ScriptController 2025-08-13T19:59:35.388949042+00:00 stderr F I0813 19:59:35.244004 1 base_controller.go:67] Waiting for caches to sync for DefragController 2025-08-13T19:59:35.389179068+00:00 stderr F E0813 19:59:35.389154 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:35.389382674+00:00 stderr F I0813 19:59:35.244225 1 envvarcontroller.go:193] Starting EnvVarController 2025-08-13T19:59:35.391058182+00:00 stderr F I0813 19:59:35.391020 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.468019336+00:00 stderr F I0813 19:59:35.244247 
1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:35.483227819+00:00 stderr F E0813 19:59:35.476989 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244441 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.478066 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244455 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244539 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244551 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244567 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244736 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244751 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244761 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244770 1 base_controller.go:67] Waiting for caches to sync for GuardController 
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.245278 1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources 2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.245323 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.504348 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.509325 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.529198640+00:00 stderr F I0813 19:59:35.509567 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229019 1 base_controller.go:73] Caches are synced for StatusSyncer_etcd 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229573 1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ... 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229618 1 base_controller.go:73] Caches are synced for BootstrapTeardownController 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229624 1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ... 
2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229640 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229645 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:38.315580626+00:00 stderr F I0813 19:59:38.315147 1 base_controller.go:73] Caches are synced for MachineDeletionHooksController 2025-08-13T19:59:38.315580626+00:00 stderr F I0813 19:59:38.315168 1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ... 2025-08-13T19:59:38.445108599+00:00 stderr F E0813 19:59:38.444373 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:38.515916647+00:00 stderr F E0813 19:59:38.503438 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:38.566434227+00:00 stderr F I0813 19:59:38.566370 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:38.575941538+00:00 stderr F I0813 19:59:38.575907 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T19:59:38.577320148+00:00 stderr F I0813 19:59:38.577282 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:38.606453528+00:00 stderr F I0813 19:59:38.597541 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.681984 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.682102 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.682111 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.654724 1 base_controller.go:73] Caches are synced for DefragController 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.683248 1 base_controller.go:110] Starting #1 worker of DefragController controller ... 2025-08-13T19:59:38.838939705+00:00 stderr F I0813 19:59:38.836534 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:38.843890516+00:00 stderr F I0813 19:59:38.842074 1 base_controller.go:73] Caches are synced for ScriptController 2025-08-13T19:59:38.874057006+00:00 stderr F I0813 19:59:38.872051 1 base_controller.go:110] Starting #1 worker of ScriptController controller ... 2025-08-13T19:59:38.982047663+00:00 stderr F I0813 19:59:38.967712 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:38.982047663+00:00 stderr F E0813 19:59:38.970342 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:38.982047663+00:00 stderr F I0813 19:59:38.971545 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:39.036536197+00:00 stderr F I0813 19:59:39.036468 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:39.036630399+00:00 stderr F I0813 19:59:39.036612 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 
2025-08-13T19:59:39.098609596+00:00 stderr F I0813 19:59:39.098186 1 envvarcontroller.go:199] caches synced 2025-08-13T19:59:39.512175835+00:00 stderr F E0813 19:59:39.511575 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:39.554505982+00:00 stderr F I0813 19:59:39.554323 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:39.555444998+00:00 stderr F E0813 19:59:39.555415 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:39.597947860+00:00 stderr F E0813 19:59:39.597723 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:39.614323977+00:00 stderr F E0813 19:59:39.614245 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:41.444218429+00:00 stderr F I0813 19:59:41.402415 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:41.444218429+00:00 stderr F I0813 19:59:41.410329 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:41.510703544+00:00 stderr F E0813 19:59:41.453610 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 
2025-08-13T19:59:41.575286225+00:00 stderr F E0813 19:59:41.491014 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:41.688451451+00:00 stderr F E0813 19:59:41.686147 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.716241883+00:00 stderr F E0813 19:59:41.714508 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:41.772432005+00:00 stderr F E0813 19:59:41.772374 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found 2025-08-13T19:59:41.792715813+00:00 stderr F I0813 19:59:41.792615 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:41.797374666+00:00 stderr F E0813 19:59:41.793568 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:41.813208317+00:00 stderr F I0813 19:59:41.802685 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:41.815346458+00:00 stderr F I0813 19:59:41.815115 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.030002477+00:00 stderr F I0813 19:59:42.029895 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.032700264+00:00 stderr F I0813 19:59:42.032424 1 reflector.go:351] Caches 
populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.045939881+00:00 stderr F E0813 19:59:42.045747 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:42.050357667+00:00 stderr F I0813 19:59:42.050321 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:42Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"EnvVarController_Error::NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:42.064411728+00:00 stderr F I0813 19:59:42.051461 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded changed from 
False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:42.073188328+00:00 stderr F E0813 19:59:42.070405 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:42.084309165+00:00 stderr F I0813 19:59:42.057153 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.084309165+00:00 stderr F I0813 19:59:42.059321 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.104760 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.220958 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.239738 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.221066 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.239829 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.221247 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.240287 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T19:59:42.244982595+00:00 stderr F I0813 19:59:42.241586 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.244982595+00:00 stderr F I0813 19:59:42.242611 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.267741034+00:00 stderr F I0813 19:59:42.267670 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.274360852+00:00 stderr F I0813 19:59:42.274307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.364353727+00:00 stderr F I0813 19:59:42.221285 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:42.364470211+00:00 stderr F I0813 19:59:42.364447 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T19:59:42.381169387+00:00 stderr F I0813 19:59:42.368972 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.385226692+00:00 stderr F I0813 19:59:42.221433 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.411953034+00:00 stderr F I0813 19:59:42.411890 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:42.415565527+00:00 stderr F I0813 19:59:42.415474 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:42.415733612+00:00 stderr F I0813 19:59:42.415686 1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController 2025-08-13T19:59:42.415963909+00:00 stderr F I0813 19:59:42.415763 1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ... 
2025-08-13T19:59:42.439443888+00:00 stderr F I0813 19:59:42.438475 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:42.455607459+00:00 stderr F E0813 19:59:42.455540 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:42.455983369+00:00 stderr F I0813 19:59:42.455959 1 base_controller.go:73] Caches are synced for EtcdEndpointsController 2025-08-13T19:59:42.456028521+00:00 stderr F I0813 19:59:42.456015 1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ... 2025-08-13T19:59:42.456351400+00:00 stderr F I0813 19:59:42.456328 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:42.456389951+00:00 stderr F I0813 19:59:42.456377 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T19:59:42.456532865+00:00 stderr F I0813 19:59:42.456516 1 base_controller.go:73] Caches are synced for EtcdCertSignerController 2025-08-13T19:59:42.456565556+00:00 stderr F I0813 19:59:42.456553 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ... 2025-08-13T19:59:42.459898211+00:00 stderr F I0813 19:59:42.459871 1 trace.go:236] Trace[761031083]: "DeltaFIFO Pop Process" ID:system:controller:pvc-protection-controller,Depth:106,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.274) (total time: 185ms): 2025-08-13T19:59:42.459898211+00:00 stderr F Trace[761031083]: [185.063576ms] [185.063576ms] END 2025-08-13T19:59:42.515986220+00:00 stderr F I0813 19:59:42.514103 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T19:59:42.515986220+00:00 stderr F I0813 19:59:42.515190 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 
2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.519198 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.520748 1 base_controller.go:73] Caches are synced for ClusterMemberController 2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.520766 1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ... 2025-08-13T19:59:42.529508635+00:00 stderr F I0813 19:59:42.529367 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:42.541220119+00:00 stderr F I0813 19:59:42.535950 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:42.541220119+00:00 stderr F I0813 19:59:42.536164 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:42.542379502+00:00 stderr F I0813 19:59:42.542265 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:43.388393107+00:00 stderr F I0813 19:59:43.174330 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:43.437977401+00:00 stderr F I0813 19:59:43.437366 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:43.442007086+00:00 stderr F I0813 19:59:43.190035 1 trace.go:236] Trace[790508828]: "DeltaFIFO Pop Process" ID:system:openshift:controller:serviceaccount-controller,Depth:41,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.552) (total time: 637ms): 2025-08-13T19:59:43.442007086+00:00 stderr F Trace[790508828]: [637.43805ms] [637.43805ms] END 2025-08-13T19:59:43.461142281+00:00 stderr F I0813 19:59:43.190583 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:43.461142281+00:00 stderr F I0813 19:59:43.460085 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T19:59:43.469627043+00:00 stderr F I0813 19:59:43.235059 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:43.469885710+00:00 stderr F E0813 19:59:43.240476 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:43.470704823+00:00 stderr F E0813 19:59:43.240674 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:43.470889159+00:00 stderr F I0813 19:59:43.248768 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:43.477025134+00:00 stderr F I0813 19:59:43.476267 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"EnvVarController_Error::EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:43.494418570+00:00 stderr F I0813 19:59:43.255933 1 helpers.go:184] lister was stale at resourceVersion=28242, live get showed resourceVersion=28318 2025-08-13T19:59:43.494418570+00:00 stderr F E0813 19:59:43.329586 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found 2025-08-13T19:59:43.494418570+00:00 stderr F E0813 19:59:43.353066 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.513639467+00:00 stderr F I0813 19:59:43.512527 1 base_controller.go:73] Caches are synced for EtcdStaticResources 2025-08-13T19:59:43.513639467+00:00 stderr F I0813 19:59:43.512590 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ... 
2025-08-13T19:59:43.687120012+00:00 stderr F E0813 19:59:43.683712 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:43.926955579+00:00 stderr F E0813 19:59:43.891241 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T19:59:44.024683875+00:00 stderr F I0813 19:59:44.017254 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values","reason":"EnvVarController_Error::EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady::ScriptController_Error","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-08-13T19:59:44.115764081+00:00 stderr F I0813 19:59:44.040471 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" 2025-08-13T19:59:44.115764081+00:00 stderr F I0813 19:59:44.066649 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:44.115764081+00:00 stderr F E0813 19:59:44.076233 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:44.460925990+00:00 stderr F I0813 19:59:44.406579 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:44.491474701+00:00 stderr F E0813 19:59:44.491373 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:44.501905558+00:00 stderr F I0813 19:59:44.499984 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EtcdMembersControllerDegraded: getting cache 
client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values","reason":"EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady::ScriptController_Error","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:44.510090791+00:00 stderr F I0813 19:59:44.507255 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not 
found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" 2025-08-13T19:59:45.405026942+00:00 stderr F E0813 19:59:45.402517 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:46.080965999+00:00 stderr F I0813 19:59:46.077699 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" 2025-08-13T19:59:46.527543259+00:00 stderr F I0813 19:59:46.523634 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:46.544948805+00:00 
stderr F I0813 19:59:46.544689 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:46.832987076+00:00 stderr F E0813 19:59:46.821501 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:47.102551730+00:00 stderr F I0813 19:59:47.102160 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:47.363246801+00:00 stderr F I0813 19:59:47.317985 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members 
found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:47.363246801+00:00 stderr F I0813 19:59:47.334353 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded changed from True to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found") 2025-08-13T19:59:47.767981449+00:00 stderr F I0813 19:59:47.766996 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not 
synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T19:59:48.064737628+00:00 stderr F I0813 19:59:48.051566 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:48.064737628+00:00 stderr F I0813 19:59:48.055625 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:48.611643899+00:00 stderr F I0813 19:59:48.609906 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at 
revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:48.611643899+00:00 stderr F I0813 19:59:48.610301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T19:59:48.812943397+00:00 stderr F E0813 19:59:48.806732 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:49.572577771+00:00 stderr F E0813 19:59:49.572209 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:51.830068133+00:00 stderr F I0813 19:59:51.825763 1 requestheader_controller.go:244] 
Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:51.928955572+00:00 stderr F E0813 19:59:51.927523 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.961419857+00:00 stderr F I0813 19:59:51.961204 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.961109858 +0000 UTC))" 2025-08-13T19:59:51.961559271+00:00 stderr F I0813 19:59:51.961543 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.96152121 +0000 UTC))" 2025-08-13T19:59:51.961682245+00:00 stderr F I0813 19:59:51.961664 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.961640153 +0000 UTC))" 2025-08-13T19:59:51.961743026+00:00 stderr F I0813 19:59:51.961724 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.961704295 +0000 UTC))" 2025-08-13T19:59:51.961958763+00:00 stderr F I0813 19:59:51.961828 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.961762997 +0000 UTC))" 2025-08-13T19:59:51.962028875+00:00 stderr F I0813 19:59:51.962009 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.961990933 +0000 UTC))" 2025-08-13T19:59:51.962091646+00:00 stderr F I0813 19:59:51.962076 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.962050705 +0000 UTC))" 2025-08-13T19:59:51.962191349+00:00 stderr F I0813 19:59:51.962170 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.962119247 +0000 UTC))" 2025-08-13T19:59:51.962244911+00:00 stderr F I0813 19:59:51.962231 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.96221247 +0000 UTC))" 2025-08-13T19:59:51.962662473+00:00 stderr F I0813 19:59:51.962641 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 19:59:51.962616281 +0000 UTC))" 2025-08-13T19:59:51.963046684+00:00 stderr F I0813 19:59:51.963019 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 19:59:51.963000952 +0000 UTC))" 2025-08-13T19:59:54.765899009+00:00 stderr F E0813 19:59:54.737133 1 base_controller.go:268] FSyncController 
reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:05.196145423+00:00 stderr F E0813 20:00:05.186288 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:05.765550329+00:00 stderr F I0813 20:00:05.765069 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.765023964 +0000 UTC))" 2025-08-13T20:00:05.783205133+00:00 stderr F I0813 20:00:05.783177 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.783096489 +0000 UTC))" 2025-08-13T20:00:05.783328776+00:00 stderr F I0813 20:00:05.783309 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783252924 +0000 UTC))" 2025-08-13T20:00:05.783385568+00:00 stderr F I0813 
20:00:05.783368 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783351887 +0000 UTC))" 2025-08-13T20:00:05.783446519+00:00 stderr F I0813 20:00:05.783426 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783409788 +0000 UTC))" 2025-08-13T20:00:05.783515761+00:00 stderr F I0813 20:00:05.783502 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783485201 +0000 UTC))" 2025-08-13T20:00:05.783586993+00:00 stderr F I0813 20:00:05.783570 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783552522 +0000 UTC))" 
2025-08-13T20:00:05.783645745+00:00 stderr F I0813 20:00:05.783629 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783610724 +0000 UTC))" 2025-08-13T20:00:05.783709567+00:00 stderr F I0813 20:00:05.783694 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.783672236 +0000 UTC))" 2025-08-13T20:00:05.783764869+00:00 stderr F I0813 20:00:05.783746 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783731698 +0000 UTC))" 2025-08-13T20:00:05.784556051+00:00 stderr F I0813 20:00:05.784526 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 
UTC (now=2025-08-13 20:00:05.78450408 +0000 UTC))" 2025-08-13T20:00:05.838385956+00:00 stderr F I0813 20:00:05.834488 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:05.784988084 +0000 UTC))" 2025-08-13T20:00:25.750118848+00:00 stderr F E0813 20:00:25.749354 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:35.469254510+00:00 stderr F E0813 20:00:35.468465 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:40.271403626+00:00 stderr F I0813 20:00:40.269711 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.307409 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308144 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC 
(now=2025-08-13 20:00:40.308098593 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308174 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:40.308159874 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308233 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:40.308183055 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308251 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:40.308239327 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308269 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 
19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308257217 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308288 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308276678 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308307 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308293358 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308333 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308313139 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308353 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC 
to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:40.30834121 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308399 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.30836468 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308688 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:18 +0000 UTC to 2027-08-13 20:00:19 +0000 UTC (now=2025-08-13 20:00:40.308670389 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.309022 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:40.309005079 +0000 UTC))" 2025-08-13T20:00:40.309778251+00:00 stderr F I0813 20:00:40.309716 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.342161 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been 
modified (old="7cc073c73c8c431e58bd599b22534e379416474b407ab24fcc4003eb26d4d413", new="d7210eb97e9fdfd0251de57fa3139593b93cc605b31992f9f8fc921c7054baed") 2025-08-13T20:00:41.345958686+00:00 stderr F W0813 20:00:41.342771 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.342183 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="052d72315f99f9ec238bcbb8fe50e397b66ccc142d1ac794929899df735a889d", new="8fce782bf4ffc195a605185b97044a1cf85fb8408671072386211fc5d0f7f9b4") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.343639 1 cmd.go:160] exiting because "/var/run/secrets/serving-cert/tls.key" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344070 1 observer_polling.go:120] Observed file "/var/run/configmaps/etcd-service-ca/service-ca.crt" has been modified (old="4aae21eb5e7288ebd1f51edb8217b701366dd5aec958415476bca84ab942e90c", new="51e7a388d2ba2794fb8e557a4c52a736ae262d754d30e8729e9392e40100869b") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344086 1 cmd.go:160] exiting because "/var/run/configmaps/etcd-service-ca/service-ca.crt" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344144 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="7cc073c73c8c431e58bd599b22534e379416474b407ab24fcc4003eb26d4d413", new="d7210eb97e9fdfd0251de57fa3139593b93cc605b31992f9f8fc921c7054baed") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344153 1 cmd.go:160] exiting because "/var/run/secrets/serving-cert/tls.crt" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344617 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344649 1 genericapiserver.go:536] 
"[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:00:41.353961434+00:00 stderr F I0813 20:00:41.352308 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:00:41.387389708+00:00 stderr F I0813 20:00:41.385105 1 base_controller.go:172] Shutting down EtcdStaticResources ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.393930 1 envvarcontroller.go:209] Shutting down EnvVarController 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394029 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394084 1 base_controller.go:172] Shutting down ScriptController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394116 1 base_controller.go:172] Shutting down DefragController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394132 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394148 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394171 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394376 1 base_controller.go:172] Shutting down MachineDeletionHooksController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394470 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394490 1 base_controller.go:172] Shutting down BootstrapTeardownController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394512 1 base_controller.go:172] Shutting down StatusSyncer_etcd ... 
2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394526 1 base_controller.go:150] All StatusSyncer_etcd post start hooks have been terminated 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394649 1 base_controller.go:172] Shutting down FSyncController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394665 1 base_controller.go:172] Shutting down EtcdMembersController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394682 1 base_controller.go:172] Shutting down EtcdCertCleanerController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.395094 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450279 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450362 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450540 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450579 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:00:41.457009103+00:00 stderr F I0813 20:00:41.456673 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:00:41.457009103+00:00 stderr F I0813 20:00:41.456755 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465575 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465631 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465673 1 base_controller.go:172] Shutting down ClusterMemberController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465719 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465751 1 base_controller.go:172] Shutting down EtcdCertSignerController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465860 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465892 1 base_controller.go:172] Shutting down EtcdEndpointsController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466643 1 base_controller.go:172] Shutting down ClusterMemberRemovalController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466720 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466750 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467204 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467253 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467270 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467280 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467341 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467357 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467454 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467468 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467483 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467495 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467501 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467515 1 base_controller.go:114] Shutting down worker of ClusterMemberController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467527 1 base_controller.go:104] All ClusterMemberController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467548 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467558 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467563 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467576 1 base_controller.go:114] Shutting down worker of EtcdCertSignerController controller ... 
2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467586 1 base_controller.go:104] All EtcdCertSignerController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467602 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467618 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467633 1 base_controller.go:114] Shutting down worker of EtcdEndpointsController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467647 1 base_controller.go:104] All EtcdEndpointsController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471415 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471485 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471493 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471525 1 base_controller.go:114] Shutting down worker of MachineDeletionHooksController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471534 1 base_controller.go:104] All MachineDeletionHooksController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471623 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471632 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471642 1 base_controller.go:114] Shutting down worker of BootstrapTeardownController controller ... 
2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471650 1 base_controller.go:104] All BootstrapTeardownController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471660 1 base_controller.go:114] Shutting down worker of StatusSyncer_etcd controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471670 1 base_controller.go:104] All StatusSyncer_etcd workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471680 1 base_controller.go:114] Shutting down worker of FSyncController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471688 1 base_controller.go:104] All FSyncController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471697 1 base_controller.go:114] Shutting down worker of EtcdMembersController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471707 1 base_controller.go:104] All EtcdMembersController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471716 1 base_controller.go:114] Shutting down worker of EtcdCertCleanerController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471724 1 base_controller.go:104] All EtcdCertCleanerController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472098 1 base_controller.go:114] Shutting down worker of ClusterMemberRemovalController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472111 1 base_controller.go:104] All ClusterMemberRemovalController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472298 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472906 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 
2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472940 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472949 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473459 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473707 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473762 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474073 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474175 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474217 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474233 1 base_controller.go:114] Shutting down worker of NodeController controller ... 
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474250 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474259 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475635 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475688 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475742 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:00:41.475882591+00:00 stderr F I0813 20:00:41.475872 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530027 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530740 1 base_controller.go:114] Shutting down worker of EtcdStaticResources controller ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530756 1 base_controller.go:104] All EtcdStaticResources workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530780 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530956 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530971 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534080 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534227 1 base_controller.go:114] Shutting down worker of DefragController controller ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534240 1 base_controller.go:104] All DefragController workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534251 1 base_controller.go:114] Shutting down worker of ScriptController controller ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534258 1 base_controller.go:104] All ScriptController workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534280 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534311 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534320 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534325 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:00:41.541754909+00:00 stderr F I0813 20:00:41.541461 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:00:41.541754909+00:00 stderr F I0813 20:00:41.541554 1 builder.go:329] server exited 2025-08-13T20:00:41.551133457+00:00 stderr F I0813 20:00:41.542312 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="052d72315f99f9ec238bcbb8fe50e397b66ccc142d1ac794929899df735a889d", new="8fce782bf4ffc195a605185b97044a1cf85fb8408671072386211fc5d0f7f9b4") 2025-08-13T20:00:41.551133457+00:00 stderr F W0813 20:00:41.542993 1 leaderelection.go:84] leader election lost

[archive entry: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.log]

2026-01-20T10:49:33.047715891+00:00 stderr F I0120 10:49:33.047316 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/ 2026-01-20T10:49:33.047715891+00:00 stderr F I0120 10:49:33.047656 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:33.048100253+00:00 stderr F I0120 10:49:33.047427 1 cmd.go:240] Using service-serving-cert provided certificates 2026-01-20T10:49:33.048100253+00:00 stderr F I0120 10:49:33.047954 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s.
Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:33.049240418+00:00 stderr F I0120 10:49:33.049142 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:33.094567078+00:00 stderr F I0120 10:49:33.094467 1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145 2026-01-20T10:49:33.503753412+00:00 stderr F I0120 10:49:33.501594 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:33.503753412+00:00 stderr F W0120 10:49:33.502299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:33.503753412+00:00 stderr F W0120 10:49:33.502381 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:33.512913581+00:00 stderr F I0120 10:49:33.510702 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:33.512913581+00:00 stderr F I0120 10:49:33.511085 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:33.512913581+00:00 stderr F I0120 10:49:33.511138 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:33.512913581+00:00 stderr F I0120 10:49:33.511276 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:33.512913581+00:00 stderr F I0120 10:49:33.511791 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:33.517650625+00:00 stderr F I0120 10:49:33.513383 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:33.517650625+00:00 stderr F I0120 10:49:33.513826 1 
configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:33.517650625+00:00 stderr F I0120 10:49:33.513834 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:33.517650625+00:00 stderr F I0120 10:49:33.514397 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:33.517650625+00:00 stderr F I0120 10:49:33.514651 1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock... 2026-01-20T10:49:33.517650625+00:00 stderr F I0120 10:49:33.515459 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:33.615986250+00:00 stderr F I0120 10:49:33.615913 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:33.615986250+00:00 stderr F I0120 10:49:33.615927 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:33.616023311+00:00 stderr F I0120 10:49:33.615976 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:54:32.112144484+00:00 stderr F I0120 10:54:32.111363 1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock 2026-01-20T10:54:32.114151278+00:00 stderr F I0120 10:54:32.114045 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41907", FieldPath:""}): type: 'Normal' reason: 
'LeaderElection' etcd-operator-768d5b5d86-722mg_6b232442-60e5-4257-aa97-b1a648d9b9f6 became leader 2026-01-20T10:54:32.117988700+00:00 stderr F I0120 10:54:32.117942 1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0] 2026-01-20T10:54:32.134396287+00:00 stderr F I0120 10:54:32.134282 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:54:32.138892228+00:00 stderr F I0120 10:54:32.138812 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", 
"MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:54:32.139023991+00:00 stderr F I0120 10:54:32.138932 1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap 
OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:54:32.139053942+00:00 stderr F I0120 10:54:32.139016 1 starter.go:499] waiting for cluster version informer sync... 2026-01-20T10:54:32.244246626+00:00 stderr F I0120 10:54:32.244147 1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers... 2026-01-20T10:54:32.244856072+00:00 stderr F I0120 10:54:32.244783 1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController 2026-01-20T10:54:32.244856072+00:00 stderr F I0120 10:54:32.244797 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController 2026-01-20T10:54:32.246150187+00:00 stderr F I0120 10:54:32.246117 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2026-01-20T10:54:32.246412013+00:00 stderr F I0120 10:54:32.246366 1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController 2026-01-20T10:54:32.246412013+00:00 stderr F I0120 10:54:32.246394 1 base_controller.go:73] Caches are synced for EtcdMembersController 2026-01-20T10:54:32.246412013+00:00 stderr F I0120 10:54:32.246401 1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ... 
2026-01-20T10:54:32.247019879+00:00 stderr F I0120 10:54:32.246970 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2026-01-20T10:54:32.247043130+00:00 stderr F I0120 10:54:32.247016 1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController 2026-01-20T10:54:32.247043130+00:00 stderr F I0120 10:54:32.247038 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2026-01-20T10:54:32.247128852+00:00 stderr F I0120 10:54:32.247101 1 base_controller.go:67] Waiting for caches to sync for ScriptController 2026-01-20T10:54:32.247152163+00:00 stderr F I0120 10:54:32.247132 1 base_controller.go:67] Waiting for caches to sync for DefragController 2026-01-20T10:54:32.247200864+00:00 stderr F I0120 10:54:32.247174 1 envvarcontroller.go:193] Starting EnvVarController 2026-01-20T10:54:32.247241375+00:00 stderr F I0120 10:54:32.247218 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2026-01-20T10:54:32.247285366+00:00 stderr F I0120 10:54:32.247261 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2026-01-20T10:54:32.247413910+00:00 stderr F I0120 10:54:32.247385 1 base_controller.go:67] Waiting for caches to sync for FSyncController 2026-01-20T10:54:32.247413910+00:00 stderr F I0120 10:54:32.247399 1 base_controller.go:73] Caches are synced for FSyncController 2026-01-20T10:54:32.247413910+00:00 stderr F I0120 10:54:32.247407 1 base_controller.go:110] Starting #1 worker of FSyncController controller ... 
2026-01-20T10:54:32.247435970+00:00 stderr F I0120 10:54:32.247417 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2026-01-20T10:54:32.247507972+00:00 stderr F I0120 10:54:32.247484 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController 2026-01-20T10:54:32.249676010+00:00 stderr F I0120 10:54:32.249618 1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources 2026-01-20T10:54:32.250134212+00:00 stderr F I0120 10:54:32.250051 1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController 2026-01-20T10:54:32.250225655+00:00 stderr F I0120 10:54:32.250190 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:54:32.250270136+00:00 stderr F I0120 10:54:32.250238 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2026-01-20T10:54:32.250385909+00:00 stderr F I0120 10:54:32.250294 1 base_controller.go:67] Waiting for caches to sync for GuardController 2026-01-20T10:54:32.250385909+00:00 stderr F I0120 10:54:32.250351 1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController 2026-01-20T10:54:32.250513692+00:00 stderr F I0120 10:54:32.250445 1 base_controller.go:67] Waiting for caches to sync for NodeController 2026-01-20T10:54:32.250513692+00:00 stderr F I0120 10:54:32.250494 1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController 2026-01-20T10:54:32.250513692+00:00 stderr F I0120 10:54:32.250500 1 base_controller.go:73] Caches are synced for EtcdCertCleanerController 2026-01-20T10:54:32.250531513+00:00 stderr F I0120 10:54:32.250509 1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ... 
2026-01-20T10:54:32.250771080+00:00 stderr F I0120 10:54:32.250732 1 base_controller.go:67] Waiting for caches to sync for PruneController 2026-01-20T10:54:32.250827512+00:00 stderr F I0120 10:54:32.250815 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2026-01-20T10:54:32.251520780+00:00 stderr F I0120 10:54:32.251467 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2026-01-20T10:54:32.251727895+00:00 stderr F I0120 10:54:32.251656 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:54:32.252098795+00:00 stderr F I0120 10:54:32.252037 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2026-01-20T10:54:32.252491095+00:00 stderr F E0120 10:54:32.252425 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2026-01-20T10:54:32.252616639+00:00 stderr F I0120 10:54:32.252578 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd 2026-01-20T10:54:32.253042880+00:00 stderr F I0120 10:54:32.252992 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2026-01-20T10:54:32.260518879+00:00 stderr F E0120 10:54:32.259292 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2026-01-20T10:54:32.260518879+00:00 stderr F I0120 10:54:32.259427 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' 
reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2026-01-20T10:54:32.273275430+00:00 stderr F E0120 10:54:32.272923 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:32.290127359+00:00 stderr F E0120 10:54:32.285925 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:32.301190764+00:00 stderr F E0120 10:54:32.301044 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced 2026-01-20T10:54:32.301327468+00:00 stderr F E0120 10:54:32.301294 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:32.322390539+00:00 stderr F I0120 10:54:32.322317 1 etcdcli_pool.go:70] creating a new cached client 2026-01-20T10:54:32.328194463+00:00 stderr F E0120 10:54:32.327672 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:32.345444923+00:00 stderr F I0120 10:54:32.345296 1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController 2026-01-20T10:54:32.345444923+00:00 stderr F I0120 10:54:32.345332 1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ... 
2026-01-20T10:54:32.347430676+00:00 stderr F I0120 10:54:32.347321 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2026-01-20T10:54:32.347430676+00:00 stderr F I0120 10:54:32.347356 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2026-01-20T10:54:32.347516328+00:00 stderr F I0120 10:54:32.347479 1 base_controller.go:73] Caches are synced for RevisionController 2026-01-20T10:54:32.347516328+00:00 stderr F I0120 10:54:32.347496 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2026-01-20T10:54:32.347797706+00:00 stderr F I0120 10:54:32.347751 1 base_controller.go:73] Caches are synced for DefragController 2026-01-20T10:54:32.347797706+00:00 stderr F I0120 10:54:32.347778 1 base_controller.go:110] Starting #1 worker of DefragController controller ... 2026-01-20T10:54:32.347845477+00:00 stderr F I0120 10:54:32.347819 1 base_controller.go:73] Caches are synced for ScriptController 2026-01-20T10:54:32.347845477+00:00 stderr F I0120 10:54:32.347830 1 base_controller.go:110] Starting #1 worker of ScriptController controller ... 2026-01-20T10:54:32.350265992+00:00 stderr F I0120 10:54:32.350224 1 base_controller.go:73] Caches are synced for EtcdEndpointsController 2026-01-20T10:54:32.350265992+00:00 stderr F I0120 10:54:32.350243 1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ... 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350460 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350487 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350490 1 base_controller.go:73] Caches are synced for EtcdCertSignerController 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350498 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ... 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350517 1 base_controller.go:73] Caches are synced for TargetConfigController 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350522 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350816 1 base_controller.go:73] Caches are synced for NodeController 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350830 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350881 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2026-01-20T10:54:32.351011191+00:00 stderr F I0120 10:54:32.350889 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2026-01-20T10:54:32.351181516+00:00 stderr F I0120 10:54:32.351147 1 base_controller.go:73] Caches are synced for PruneController 2026-01-20T10:54:32.351236327+00:00 stderr F I0120 10:54:32.351220 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2026-01-20T10:54:32.352013599+00:00 stderr F I0120 10:54:32.351987 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:54:32.354119854+00:00 stderr F I0120 10:54:32.353448 1 base_controller.go:73] Caches are synced for StatusSyncer_etcd 2026-01-20T10:54:32.354119854+00:00 stderr F I0120 10:54:32.353479 1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ... 
2026-01-20T10:54:32.354397711+00:00 stderr F I0120 10:54:32.354366 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:54:32.364464710+00:00 stderr F I0120 10:54:32.364334 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2026-01-20T10:54:32.366878745+00:00 stderr F I0120 10:54:32.366846 1 prune_controller.go:269] 
Nothing to prune 2026-01-20T10:54:32.367318916+00:00 stderr F E0120 10:54:32.367271 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2026-01-20T10:54:32.367431299+00:00 stderr F I0120 10:54:32.367405 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:54:32.377705594+00:00 stderr F E0120 10:54:32.377639 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:32.379696847+00:00 stderr F I0120 10:54:32.379363 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2026-01-20T10:54:32.399502005+00:00 stderr F I0120 10:54:32.399413 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:54:32.400195313+00:00 stderr F I0120 10:54:32.400104 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:54:32.407963350+00:00 stderr F I0120 10:54:32.407878 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2026-01-20T10:54:32.452824876+00:00 stderr F I0120 10:54:32.452679 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:32.461845126+00:00 stderr F E0120 10:54:32.461759 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:32.552278807+00:00 stderr F I0120 10:54:32.552196 1 base_controller.go:73] Caches are synced for BackingResourceController 2026-01-20T10:54:32.552278807+00:00 stderr F I0120 10:54:32.552223 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 
2026-01-20T10:54:32.626427984+00:00 stderr F E0120 10:54:32.626353 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:32.649210541+00:00 stderr F I0120 10:54:32.649149 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:32.651467012+00:00 stderr F I0120 10:54:32.651423 1 base_controller.go:73] Caches are synced for GuardController 2026-01-20T10:54:32.651467012+00:00 stderr F I0120 10:54:32.651448 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2026-01-20T10:54:32.651487592+00:00 stderr F I0120 10:54:32.651478 1 base_controller.go:73] Caches are synced for ClusterMemberController 2026-01-20T10:54:32.651495842+00:00 stderr F I0120 10:54:32.651487 1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ... 2026-01-20T10:54:32.746557125+00:00 stderr F I0120 10:54:32.746461 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2026-01-20T10:54:32.746557125+00:00 stderr F I0120 10:54:32.746496 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2026-01-20T10:54:32.747734757+00:00 stderr F I0120 10:54:32.747695 1 base_controller.go:73] Caches are synced for InstallerStateController 2026-01-20T10:54:32.747754328+00:00 stderr F I0120 10:54:32.747729 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2026-01-20T10:54:32.747819020+00:00 stderr F I0120 10:54:32.747705 1 base_controller.go:73] Caches are synced for StaticPodStateController 2026-01-20T10:54:32.747819020+00:00 stderr F I0120 10:54:32.747815 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 
2026-01-20T10:54:32.747869771+00:00 stderr F I0120 10:54:32.747839 1 base_controller.go:73] Caches are synced for InstallerController 2026-01-20T10:54:32.747869771+00:00 stderr F I0120 10:54:32.747856 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2026-01-20T10:54:32.848499573+00:00 stderr F I0120 10:54:32.848421 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:32.951233712+00:00 stderr F E0120 10:54:32.951122 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:33.050326643+00:00 stderr F I0120 10:54:33.050148 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:33.250447197+00:00 stderr F I0120 10:54:33.249462 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:33.445757083+00:00 stderr F I0120 10:54:33.445622 1 request.go:697] Waited for 1.198078977s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services?limit=500&resourceVersion=0 2026-01-20T10:54:33.447955542+00:00 stderr F I0120 10:54:33.447916 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:33.450348916+00:00 stderr F I0120 10:54:33.450322 1 base_controller.go:73] Caches are synced for EtcdStaticResources 2026-01-20T10:54:33.450348916+00:00 stderr F I0120 10:54:33.450342 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ... 
2026-01-20T10:54:33.596442780+00:00 stderr F E0120 10:54:33.596354 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:33.652311779+00:00 stderr F I0120 10:54:33.652213 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:33.848927220+00:00 stderr F I0120 10:54:33.848431 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:33.852762842+00:00 stderr F I0120 10:54:33.852694 1 base_controller.go:73] Caches are synced for ConfigObserver 2026-01-20T10:54:33.852762842+00:00 stderr F I0120 10:54:33.852710 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:54:33.852762842+00:00 stderr F I0120 10:54:33.852725 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2026-01-20T10:54:33.852805554+00:00 stderr F I0120 10:54:33.852713 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2026-01-20T10:54:33.945181056+00:00 stderr F I0120 10:54:33.945043 1 base_controller.go:73] Caches are synced for MachineDeletionHooksController 2026-01-20T10:54:33.945181056+00:00 stderr F I0120 10:54:33.945121 1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ... 2026-01-20T10:54:33.947414156+00:00 stderr F I0120 10:54:33.947363 1 base_controller.go:73] Caches are synced for BootstrapTeardownController 2026-01-20T10:54:33.947414156+00:00 stderr F I0120 10:54:33.947381 1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ... 
2026-01-20T10:54:34.268116455+00:00 stderr F I0120 10:54:34.267499 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:54:34.268824974+00:00 stderr F I0120 10:54:34.268780 1 etcdcli_pool.go:70] creating a new cached client 2026-01-20T10:54:34.270945340+00:00 stderr F I0120 10:54:34.270888 1 etcdcli_pool.go:70] creating a new cached client 2026-01-20T10:54:34.271011842+00:00 stderr F I0120 10:54:34.270959 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:54:34.288447697+00:00 stderr F I0120 10:54:34.288336 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to 
"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2026-01-20T10:54:34.446309334+00:00 stderr F I0120 10:54:34.446184 1 request.go:697] Waited for 1.89335055s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa 2026-01-20T10:54:34.882749889+00:00 stderr F E0120 10:54:34.882665 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:35.646329204+00:00 stderr F I0120 10:54:35.646225 1 request.go:697] Waited for 1.378472887s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts 2026-01-20T10:54:36.846488786+00:00 stderr F I0120 10:54:36.846404 1 request.go:697] Waited for 1.195717524s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts 2026-01-20T10:54:37.448838453+00:00 stderr F E0120 10:54:37.448764 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:42.574999261+00:00 stderr F E0120 10:54:42.574296 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:54:52.825147439+00:00 stderr F E0120 10:54:52.824336 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:55:13.317816027+00:00 stderr F E0120 10:55:13.317105 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:55:32.258903981+00:00 stderr F E0120 10:55:32.258153 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:55:54.294921502+00:00 stderr F E0120 10:55:54.294347 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:56:07.106833532+00:00 stderr F I0120 10:56:07.106350 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.106318398 +0000 UTC))" 2026-01-20T10:56:07.106908804+00:00 stderr F I0120 10:56:07.106897 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.106880903 +0000 UTC))" 2026-01-20T10:56:07.106948175+00:00 stderr F I0120 10:56:07.106938 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.106924104 +0000 UTC))" 2026-01-20T10:56:07.106996736+00:00 stderr F I0120 10:56:07.106985 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.106971215 +0000 UTC))" 2026-01-20T10:56:07.107073268+00:00 stderr F I0120 10:56:07.107042 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107010436 +0000 UTC))" 2026-01-20T10:56:07.107121249+00:00 stderr F I0120 10:56:07.107111 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107093849 +0000 UTC))" 2026-01-20T10:56:07.107158430+00:00 stderr F I0120 
10:56:07.107148 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.10713485 +0000 UTC))" 2026-01-20T10:56:07.107194921+00:00 stderr F I0120 10:56:07.107185 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107171891 +0000 UTC))" 2026-01-20T10:56:07.107233082+00:00 stderr F I0120 10:56:07.107223 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.107209262 +0000 UTC))" 2026-01-20T10:56:07.111327103+00:00 stderr F I0120 10:56:07.107280 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.107266843 +0000 UTC))" 2026-01-20T10:56:07.111400735+00:00 stderr F I0120 10:56:07.111384 1 
tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.111360584 +0000 UTC))" 2026-01-20T10:56:07.111449206+00:00 stderr F I0120 10:56:07.111438 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.111423645 +0000 UTC))" 2026-01-20T10:56:07.111821646+00:00 stderr F I0120 10:56:07.111800 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:18 +0000 UTC to 2027-08-13 20:00:19 +0000 UTC (now=2026-01-20 10:56:07.111773995 +0000 UTC))" 2026-01-20T10:56:07.112136464+00:00 stderr F I0120 10:56:07.112118 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906173\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906173\" (2026-01-20 09:49:33 +0000 UTC to 2027-01-20 09:49:33 +0000 UTC (now=2026-01-20 10:56:07.112100273 +0000 UTC))" 2026-01-20T10:56:32.257538778+00:00 stderr F E0120 10:56:32.257086 
1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:57:32.151210131+00:00 stderr F E0120 10:57:32.150216 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.257647716+00:00 stderr F E0120 10:57:32.257596 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2026-01-20T10:57:32.353822899+00:00 stderr F E0120 10:57:32.353764 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.354004373+00:00 stderr F I0120 10:57:32.353973 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.354507107+00:00 stderr F E0120 10:57:32.354457 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: 
connection refused 2026-01-20T10:57:32.354537888+00:00 stderr F I0120 10:57:32.354514 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.355834032+00:00 stderr F E0120 10:57:32.355800 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.188c6b3a6e5eb5a2 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2026-01-20 10:57:32.353738146 +0000 UTC m=+479.403189635,LastTimestamp:2026-01-20 10:57:32.353738146 +0000 UTC m=+479.403189635,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2026-01-20T10:57:32.355901074+00:00 stderr F E0120 10:57:32.355884 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: 
connection refused" event="&Event{ObjectMeta:{etcd-operator.188c6b3a6e6925b2 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2026-01-20 10:57:32.354422194 +0000 UTC m=+479.403873683,LastTimestamp:2026-01-20 10:57:32.354422194 +0000 UTC m=+479.403873683,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2026-01-20T10:57:32.356622413+00:00 stderr F E0120 10:57:32.356585 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.356699245+00:00 stderr F E0120 10:57:32.356684 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.356755416+00:00 stderr F I0120 10:57:32.356738 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.357808175+00:00 stderr F E0120 10:57:32.357776 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.357937318+00:00 stderr F I0120 10:57:32.357897 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.359142819+00:00 stderr F E0120 10:57:32.359110 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.188c6b3a6e9be447 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2026-01-20 10:57:32.357747783 +0000 UTC m=+479.407199282,LastTimestamp:2026-01-20 10:57:32.357747783 +0000 UTC 
m=+479.407199282,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2026-01-20T10:57:32.360746472+00:00 stderr F E0120 10:57:32.360711 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.361463441+00:00 stderr F E0120 10:57:32.361442 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.361520642+00:00 stderr F I0120 10:57:32.361487 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.362652132+00:00 stderr F E0120 10:57:32.362609 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.362688203+00:00 stderr F I0120 10:57:32.362652 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.365302082+00:00 stderr F E0120 10:57:32.365281 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.365383884+00:00 stderr F I0120 10:57:32.365367 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.367358567+00:00 stderr F E0120 10:57:32.367316 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.553250962+00:00 stderr F E0120 10:57:32.553176 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.553293604+00:00 stderr F I0120 10:57:32.553230 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2026-01-20T10:57:32.753335474+00:00 stderr F E0120 10:57:32.753254 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.753364625+00:00 stderr F I0120 10:57:32.753339 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.974129113+00:00 stderr F E0120 10:57:32.974036 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:32.974187455+00:00 stderr F I0120 10:57:32.974134 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.154246297+00:00 stderr F E0120 10:57:33.154160 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.154405531+00:00 stderr F I0120 10:57:33.154366 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.353132006+00:00 stderr F E0120 10:57:33.352608 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.353485065+00:00 stderr F E0120 10:57:33.353421 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.353485065+00:00 stderr F I0120 10:57:33.353460 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.553168416+00:00 stderr F E0120 10:57:33.552979 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.753028452+00:00 stderr F E0120 10:57:33.752911 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.753146435+00:00 stderr F I0120 10:57:33.753021 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.951837059+00:00 stderr F E0120 10:57:33.951764 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:33.952684831+00:00 stderr F E0120 10:57:33.952661 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:34.153573814+00:00 stderr F E0120 10:57:34.152971 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.153705988+00:00 stderr F I0120 
10:57:34.153046 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.352966857+00:00 stderr F E0120 10:57:34.352863 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.553265704+00:00 stderr F E0120 10:57:34.553192 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.553400587+00:00 stderr F I0120 10:57:34.553366 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.752132643+00:00 stderr F E0120 10:57:34.751450 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.753413917+00:00 stderr F E0120 10:57:34.753309 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.753413917+00:00 stderr F I0120 10:57:34.753365 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.953028886+00:00 stderr F E0120 10:57:34.952942 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.953028886+00:00 stderr F I0120 10:57:34.952994 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.152906351+00:00 stderr F E0120 10:57:35.152790 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection 
refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:35.353514457+00:00 stderr F E0120 10:57:35.353444 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.353514457+00:00 stderr F I0120 10:57:35.353496 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.552739135+00:00 stderr F E0120 10:57:35.552672 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.552780636+00:00 stderr F I0120 10:57:35.552731 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.751559643+00:00 stderr F E0120 10:57:35.751470 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 
10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.753051393+00:00 stderr F E0120 10:57:35.752996 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.953911194+00:00 stderr F E0120 10:57:35.953849 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.153046730+00:00 stderr F E0120 10:57:36.152937 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.153046730+00:00 stderr F I0120 10:57:36.152997 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.353756078+00:00 stderr F E0120 10:57:36.353685 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.353839841+00:00 stderr F I0120 10:57:36.353796 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 
'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.549688860+00:00 stderr F E0120 10:57:36.549621 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.188c6b3a6e9be447 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2026-01-20 10:57:32.357747783 +0000 UTC m=+479.407199282,LastTimestamp:2026-01-20 10:57:32.357747783 +0000 UTC m=+479.407199282,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2026-01-20T10:57:36.551605841+00:00 stderr F E0120 10:57:36.551554 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.552371561+00:00 stderr F E0120 10:57:36.552321 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.552400452+00:00 stderr F I0120 10:57:36.552358 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.752967246+00:00 stderr F E0120 10:57:36.752879 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:36.953348175+00:00 stderr F E0120 10:57:36.953276 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.953402336+00:00 stderr F I0120 10:57:36.953345 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.153374955+00:00 stderr F E0120 10:57:37.153273 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:37.353208029+00:00 stderr F E0120 10:57:37.353126 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.552157681+00:00 stderr F E0120 10:57:37.552029 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.552943401+00:00 stderr F E0120 10:57:37.552871 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.753298190+00:00 stderr F E0120 10:57:37.753199 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.753298190+00:00 stderr F I0120 10:57:37.753274 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.829896025+00:00 
stderr F E0120 10:57:37.829810 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.188c6b3a6e6925b2 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2026-01-20 10:57:32.354422194 +0000 UTC m=+479.403873683,LastTimestamp:2026-01-20 10:57:32.354422194 +0000 UTC m=+479.403873683,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2026-01-20T10:57:37.977894529+00:00 stderr F E0120 10:57:37.977349 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.977894529+00:00 stderr F I0120 10:57:37.977583 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.153190565+00:00 stderr F E0120 10:57:38.153093 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.153190565+00:00 stderr F I0120 10:57:38.153163 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.353096362+00:00 stderr F E0120 10:57:38.352950 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:38.551830077+00:00 stderr F E0120 10:57:38.551725 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.553175002+00:00 stderr F E0120 10:57:38.553049 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.754618100+00:00 stderr F E0120 10:57:38.754195 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.754680511+00:00 stderr F I0120 10:57:38.754281 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.036732271+00:00 stderr F E0120 10:57:39.036628 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.036972307+00:00 stderr F I0120 10:57:39.036737 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.152860421+00:00 stderr F E0120 10:57:39.152777 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2026-01-20T10:57:39.354810382+00:00 stderr F E0120 10:57:39.354726 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:39.552990453+00:00 stderr F E0120 10:57:39.552867 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.552990453+00:00 stderr F I0120 10:57:39.552948 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.753492726+00:00 stderr F E0120 10:57:39.753428 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.951283666+00:00 stderr F I0120 10:57:39.951207 1 request.go:697] Waited for 1.078884672s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller 2026-01-20T10:57:39.952260573+00:00 stderr F E0120 10:57:39.952227 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.952836348+00:00 stderr F E0120 10:57:39.952799 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.952890149+00:00 stderr F I0120 10:57:39.952856 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.553903573+00:00 stderr F E0120 10:57:40.553836 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:40.753222214+00:00 stderr F E0120 10:57:40.753154 1 base_controller.go:268] 
StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.876252687+00:00 stderr F E0120 10:57:40.875658 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.188c6b3a6e5eb5a2 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2026-01-20 10:57:32.353738146 +0000 UTC m=+479.403189635,LastTimestamp:2026-01-20 10:57:32.353738146 +0000 UTC m=+479.403189635,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2026-01-20T10:57:40.953962432+00:00 stderr F E0120 10:57:40.953898 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.954020554+00:00 stderr F I0120 10:57:40.953986 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): 
type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.169711838+00:00 stderr F E0120 10:57:41.169651 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: 
connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:41.354116095+00:00 stderr F E0120 10:57:41.354041 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.552604174+00:00 stderr F E0120 10:57:41.552513 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.600651954+00:00 stderr F E0120 10:57:41.600574 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.600698036+00:00 stderr F I0120 10:57:41.600641 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.751026912+00:00 stderr F I0120 10:57:41.750966 1 request.go:697] Waited for 1.036275945s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa 2026-01-20T10:57:41.755428617+00:00 stderr F E0120 10:57:41.755346 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:41.959843523+00:00 stderr F E0120 10:57:41.959760 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.120524963+00:00 stderr F E0120 10:57:42.120448 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.120572194+00:00 stderr F I0120 10:57:42.120511 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.158093186+00:00 stderr F E0120 10:57:42.157990 1 base_controller.go:266] "ScriptController" controller 
failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.158201639+00:00 stderr F I0120 10:57:42.158121 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.753296627+00:00 stderr F E0120 10:57:42.753145 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:42.953106191+00:00 stderr F E0120 10:57:42.953026 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.953252265+00:00 stderr F I0120 10:57:42.953194 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:43.153938682+00:00 stderr F E0120 10:57:43.153806 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:43.554877345+00:00 stderr F E0120 10:57:43.554790 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:43.752190923+00:00 stderr F E0120 10:57:43.752141 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:43.952465930+00:00 stderr F E0120 10:57:43.952411 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:43.952573822+00:00 stderr F I0120 10:57:43.952552 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.355037985+00:00 stderr F E0120 10:57:44.354953 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:44.753983006+00:00 stderr F E0120 10:57:44.753896 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.954192120+00:00 stderr F E0120 10:57:44.953089 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.954192120+00:00 stderr F I0120 10:57:44.953195 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.156733556+00:00 stderr F E0120 10:57:45.156361 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:45.353342366+00:00 stderr F E0120 10:57:45.353244 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.554506515+00:00 stderr F E0120 10:57:45.554118 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.554506515+00:00 stderr F I0120 10:57:45.554217 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.153288690+00:00 stderr F E0120 10:57:46.153212 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.153337632+00:00 stderr F I0120 10:57:46.153294 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.355991501+00:00 stderr F E0120 10:57:46.355911 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:46.552051056+00:00 stderr F E0120 10:57:46.551963 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.188c6b3a6e9be447 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2026-01-20 10:57:32.357747783 +0000 UTC m=+479.407199282,LastTimestamp:2026-01-20 10:57:32.357747783 +0000 UTC m=+479.407199282,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2026-01-20T10:57:46.724017254+00:00 stderr F E0120 10:57:46.723949 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.724052395+00:00 stderr F I0120 10:57:46.724014 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.754481449+00:00 stderr F E0120 10:57:46.754403 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.954863848+00:00 stderr F E0120 10:57:46.954754 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.155541125+00:00 stderr F E0120 10:57:47.155467 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.155580816+00:00 stderr F I0120 10:57:47.155543 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.249479489+00:00 stderr F E0120 10:57:47.249012 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.249539062+00:00 stderr F I0120 10:57:47.249128 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.351657462+00:00 stderr F I0120 10:57:47.351158 1 request.go:697] Waited for 1.036529182s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller 2026-01-20T10:57:47.353326576+00:00 stderr F E0120 10:57:47.353256 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.832497028+00:00 stderr F E0120 10:57:47.832422 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.188c6b3a6e6925b2 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2026-01-20 10:57:32.354422194 +0000 UTC m=+479.403873683,LastTimestamp:2026-01-20 10:57:32.354422194 +0000 UTC m=+479.403873683,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2026-01-20T10:57:47.954431602+00:00 stderr F E0120 10:57:47.954344 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.954530545+00:00 stderr F I0120 10:57:47.954455 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:48.154917285+00:00 stderr F E0120 10:57:48.154836 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:48.353926137+00:00 stderr F E0120 10:57:48.353851 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:50.411995733+00:00 stderr F E0120 10:57:50.411911 1 base_controller.go:268] EtcdStaticResources reconciliation failed: "etcd/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:58:16.302532517+00:00 stderr F I0120 10:58:16.301757 1 reflector.go:351] Caches populated for *v1beta1.Machine from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:18.077108363+00:00 stderr F I0120 10:58:18.077023 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000027700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015133657716033063 5ustar zuulzuul././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015133657741033061 5ustar zuulzuul././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000034603115133657716033074 0ustar zuulzuul2026-01-20T10:49:36.576963600+00:00 stderr F I0120 10:49:36.574510 1 cmd.go:241] Using service-serving-cert provided certificates 2026-01-20T10:49:36.576963600+00:00 stderr 
F I0120 10:49:36.574878 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:36.576963600+00:00 stderr F I0120 10:49:36.575362 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:36.631723558+00:00 stderr F I0120 10:49:36.631170 1 builder.go:299] kube-apiserver-operator version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2026-01-20T10:49:37.356101232+00:00 stderr F I0120 10:49:37.352280 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:37.356101232+00:00 stderr F W0120 10:49:37.352919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:37.356101232+00:00 stderr F W0120 10:49:37.352925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:37.361197637+00:00 stderr F I0120 10:49:37.360049 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:37.361197637+00:00 stderr F I0120 10:49:37.360347 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock... 
2026-01-20T10:49:37.365088835+00:00 stderr F I0120 10:49:37.361599 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:37.365088835+00:00 stderr F I0120 10:49:37.361650 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:37.365088835+00:00 stderr F I0120 10:49:37.361927 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:37.365088835+00:00 stderr F I0120 10:49:37.361942 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.365088835+00:00 stderr F I0120 10:49:37.361985 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:37.365088835+00:00 stderr F I0120 10:49:37.361991 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.375094791+00:00 stderr F I0120 10:49:37.371599 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:37.393099999+00:00 stderr F I0120 10:49:37.392498 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:37.393099999+00:00 stderr F I0120 10:49:37.392570 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:37.465250497+00:00 stderr F I0120 10:49:37.462798 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.465250497+00:00 stderr F I0120 10:49:37.462861 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 
2026-01-20T10:49:37.465250497+00:00 stderr F I0120 10:49:37.462941 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:54:46.074327913+00:00 stderr F I0120 10:54:46.073332 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock 2026-01-20T10:54:46.074395464+00:00 stderr F I0120 10:54:46.073757 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41966", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_7074dc38-f83f-41b5-96e4-f4dc34d0eab8 became leader 2026-01-20T10:54:46.076269435+00:00 stderr F I0120 10:54:46.076021 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:54:46.079136861+00:00 stderr F I0120 10:54:46.079100 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", 
"VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:54:46.079222043+00:00 stderr F I0120 10:54:46.079184 1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC 
ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:54:46.131104796+00:00 stderr F I0120 10:54:46.131002 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133434 1 termination_observer.go:145] Starting TerminationObserver 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133483 1 base_controller.go:67] Waiting for caches to sync for EventWatchController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133519 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133537 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133554 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133574 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 
2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133593 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133613 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133641 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133659 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133675 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133680 1 base_controller.go:73] Caches are synced for PodSecurityReadinessController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.133687 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ... 
2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.134556 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.134601 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.134638 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController 2026-01-20T10:54:46.134731483+00:00 stderr F I0120 10:54:46.134655 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2026-01-20T10:54:46.134927208+00:00 stderr F I0120 10:54:46.134871 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver 2026-01-20T10:54:46.134927208+00:00 stderr F I0120 10:54:46.134908 1 certrotationcontroller.go:886] Starting CertRotation 2026-01-20T10:54:46.134927208+00:00 stderr F I0120 10:54:46.134914 1 certrotationcontroller.go:851] Waiting for CertRotation 2026-01-20T10:54:46.134966339+00:00 stderr F I0120 10:54:46.134945 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2026-01-20T10:54:46.134978200+00:00 stderr F I0120 10:54:46.134972 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController 2026-01-20T10:54:46.135042891+00:00 stderr F I0120 10:54:46.135012 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2026-01-20T10:54:46.135042891+00:00 stderr F I0120 10:54:46.135036 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2026-01-20T10:54:46.135098623+00:00 stderr F I0120 10:54:46.135050 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2026-01-20T10:54:46.135098623+00:00 stderr F I0120 10:54:46.135089 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2026-01-20T10:54:46.135386230+00:00 stderr F I0120 10:54:46.135353 1 base_controller.go:67] Waiting for 
caches to sync for MissingStaticPodController 2026-01-20T10:54:46.135386230+00:00 stderr F I0120 10:54:46.135382 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2026-01-20T10:54:46.135424641+00:00 stderr F I0120 10:54:46.135402 1 base_controller.go:67] Waiting for caches to sync for PruneController 2026-01-20T10:54:46.135434072+00:00 stderr F I0120 10:54:46.135421 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2026-01-20T10:54:46.135443402+00:00 stderr F I0120 10:54:46.135410 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2026-01-20T10:54:46.135443402+00:00 stderr F I0120 10:54:46.135438 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:54:46.135472773+00:00 stderr F I0120 10:54:46.135455 1 base_controller.go:67] Waiting for caches to sync for GuardController 2026-01-20T10:54:46.135510424+00:00 stderr F I0120 10:54:46.135484 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:54:46.135584696+00:00 stderr F I0120 10:54:46.135550 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2026-01-20T10:54:46.135654147+00:00 stderr F I0120 10:54:46.135608 1 base_controller.go:67] Waiting for caches to sync for NodeController 2026-01-20T10:54:46.135654147+00:00 stderr F I0120 10:54:46.135644 1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition 2026-01-20T10:54:46.135698729+00:00 stderr F I0120 10:54:46.135667 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback 2026-01-20T10:54:46.135740790+00:00 stderr F I0120 10:54:46.135715 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2026-01-20T10:54:46.135840912+00:00 stderr F I0120 10:54:46.135784 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.231100 1 
base_controller.go:73] Caches are synced for SCCReconcileController 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.231143 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ... 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233545 1 base_controller.go:73] Caches are synced for EventWatchController 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233556 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ... 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233572 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233578 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233631 1 base_controller.go:73] Caches are synced for auditPolicyController 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233662 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233753 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233863 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController 2026-01-20T10:54:46.234118752+00:00 stderr F I0120 10:54:46.233873 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ... 
2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.234932 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.234964 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235030 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235039 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235102 1 certrotationcontroller.go:869] Finished waiting for CertRotation 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235144 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235210 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235275 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235288 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235299 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235310 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235320 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235331 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235344 1 base_controller.go:67] 
Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235360 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235370 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235383 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235400 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235405 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235419 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235423 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235452 1 base_controller.go:73] Caches are synced for EncryptionStateController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235456 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235466 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.235470 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.236966 1 base_controller.go:73] Caches are synced for InstallerController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.236979 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 
2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237147 1 base_controller.go:73] Caches are synced for InstallerStateController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237175 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237191 1 base_controller.go:73] Caches are synced for RevisionController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237220 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237290 1 base_controller.go:73] Caches are synced for BackingResourceController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237300 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237305 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237314 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237332 1 base_controller.go:73] Caches are synced for StaticPodStateController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237337 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237341 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237357 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237494 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237505 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237705 1 base_controller.go:73] Caches are synced for PruneController 2026-01-20T10:54:46.237801050+00:00 stderr F I0120 10:54:46.237712 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2026-01-20T10:54:46.237942794+00:00 stderr F I0120 10:54:46.237923 1 base_controller.go:73] Caches are synced for StaticPodStateFallback 2026-01-20T10:54:46.237975885+00:00 stderr F I0120 10:54:46.237964 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ... 2026-01-20T10:54:46.238023606+00:00 stderr F I0120 10:54:46.238010 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition 2026-01-20T10:54:46.238055907+00:00 stderr F I0120 10:54:46.238044 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ... 2026-01-20T10:54:46.240536803+00:00 stderr F I0120 10:54:46.239369 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:54:46.278705201+00:00 stderr F I0120 10:54:46.278641 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.320596858+00:00 stderr F I0120 10:54:46.320525 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.334404216+00:00 stderr F I0120 10:54:46.334332 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController 2026-01-20T10:54:46.334404216+00:00 stderr F I0120 10:54:46.334375 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ... 
2026-01-20T10:54:46.334443787+00:00 stderr F I0120 10:54:46.334419 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2026-01-20T10:54:46.334443787+00:00 stderr F I0120 10:54:46.334425 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2026-01-20T10:54:46.335663319+00:00 stderr F I0120 10:54:46.335633 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.335663319+00:00 stderr F I0120 10:54:46.335647 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.335848404+00:00 stderr F I0120 10:54:46.335820 1 base_controller.go:73] Caches are synced for NodeKubeconfigController 2026-01-20T10:54:46.335848404+00:00 stderr F I0120 10:54:46.335842 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ... 2026-01-20T10:54:46.335969657+00:00 stderr F I0120 10:54:46.335931 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.335969657+00:00 stderr F I0120 10:54:46.335964 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.336003088+00:00 stderr F I0120 10:54:46.335986 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336003088+00:00 stderr F I0120 10:54:46.335995 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.336044539+00:00 stderr F I0120 10:54:46.336028 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336044539+00:00 stderr F I0120 10:54:46.336038 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2026-01-20T10:54:46.336079410+00:00 stderr F I0120 10:54:46.336050 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336089510+00:00 stderr F I0120 10:54:46.336079 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.336111782+00:00 stderr F I0120 10:54:46.336096 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336111782+00:00 stderr F I0120 10:54:46.336106 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.336143362+00:00 stderr F I0120 10:54:46.336126 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336143362+00:00 stderr F I0120 10:54:46.336137 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.336165823+00:00 stderr F I0120 10:54:46.336150 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336165823+00:00 stderr F I0120 10:54:46.336159 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.336193514+00:00 stderr F I0120 10:54:46.336177 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336193514+00:00 stderr F I0120 10:54:46.336187 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.336223025+00:00 stderr F I0120 10:54:46.336207 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336223025+00:00 stderr F I0120 10:54:46.336216 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2026-01-20T10:54:46.336254525+00:00 stderr F I0120 10:54:46.336238 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336254525+00:00 stderr F I0120 10:54:46.336247 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.336272766+00:00 stderr F I0120 10:54:46.336261 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:54:46.336272766+00:00 stderr F I0120 10:54:46.336266 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:54:46.392674889+00:00 stderr F I0120 10:54:46.392601 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.478663641+00:00 stderr F I0120 10:54:46.478582 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.520488746+00:00 stderr F I0120 10:54:46.520415 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.678757205+00:00 stderr F I0120 10:54:46.678664 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.722908273+00:00 stderr F I0120 10:54:46.722837 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.880457222+00:00 stderr F I0120 10:54:46.880363 1 reflector.go:351] Caches populated for *v1.OAuth from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.922225595+00:00 stderr F I0120 10:54:46.919730 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:46.935047627+00:00 stderr F I0120 10:54:46.934978 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources 
2026-01-20T10:54:46.935047627+00:00 stderr F I0120 10:54:46.935010 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ... 2026-01-20T10:54:47.132909611+00:00 stderr F I0120 10:54:47.132822 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:47.317528873+00:00 stderr F I0120 10:54:47.317422 1 request.go:697] Waited for 1.185011688s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/secrets?limit=500&resourceVersion=0 2026-01-20T10:54:47.321267332+00:00 stderr F I0120 10:54:47.321190 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:47.519197868+00:00 stderr F I0120 10:54:47.519127 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:47.720254488+00:00 stderr F I0120 10:54:47.720140 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:47.931591131+00:00 stderr F I0120 10:54:47.931453 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:48.119315465+00:00 stderr F I0120 10:54:48.119172 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:48.319406859+00:00 stderr F I0120 10:54:48.319015 1 request.go:697] Waited for 2.186449353s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/endpoints?limit=500&resourceVersion=0 2026-01-20T10:54:48.320691614+00:00 stderr F I0120 10:54:48.320655 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2026-01-20T10:54:48.334172303+00:00 stderr F I0120 10:54:48.334124 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2026-01-20T10:54:48.334172303+00:00 stderr F I0120 10:54:48.334155 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2026-01-20T10:54:48.519773500+00:00 stderr F I0120 10:54:48.519631 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:48.535219953+00:00 stderr F I0120 10:54:48.534593 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController 2026-01-20T10:54:48.535219953+00:00 stderr F I0120 10:54:48.534626 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ... 2026-01-20T10:54:48.535719926+00:00 stderr F I0120 10:54:48.535698 1 base_controller.go:73] Caches are synced for GuardController 2026-01-20T10:54:48.535770237+00:00 stderr F I0120 10:54:48.535747 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2026-01-20T10:54:48.536206009+00:00 stderr F I0120 10:54:48.535718 1 base_controller.go:73] Caches are synced for NodeController 2026-01-20T10:54:48.536253320+00:00 stderr F I0120 10:54:48.536223 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2026-01-20T10:54:48.719935606+00:00 stderr F I0120 10:54:48.719851 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:48.733812986+00:00 stderr F I0120 10:54:48.733747 1 base_controller.go:73] Caches are synced for webhookSupportabilityController 2026-01-20T10:54:48.733812986+00:00 stderr F I0120 10:54:48.733777 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ... 
2026-01-20T10:54:48.920833181+00:00 stderr F I0120 10:54:48.920752 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:48.935776969+00:00 stderr F I0120 10:54:48.935730 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController 2026-01-20T10:54:48.935776969+00:00 stderr F I0120 10:54:48.935764 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ... 2026-01-20T10:54:48.935881953+00:00 stderr F I0120 10:54:48.935821 1 base_controller.go:73] Caches are synced for TargetConfigController 2026-01-20T10:54:48.935892033+00:00 stderr F I0120 10:54:48.935875 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2026-01-20T10:54:49.128906548+00:00 stderr F I0120 10:54:49.128814 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:54:49.135010151+00:00 stderr F I0120 10:54:49.134948 1 base_controller.go:73] Caches are synced for ConfigObserver 2026-01-20T10:54:49.135010151+00:00 stderr F I0120 10:54:49.134990 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2026-01-20T10:54:49.136286355+00:00 stderr F I0120 10:54:49.136245 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:54:49.136286355+00:00 stderr F I0120 10:54:49.136278 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2026-01-20T10:54:49.516800468+00:00 stderr F I0120 10:54:49.516710 1 request.go:697] Waited for 3.281775862s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2026-01-20T10:54:50.517601716+00:00 stderr F I0120 10:54:50.517526 1 request.go:697] Waited for 4.180864289s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2026-01-20T10:56:07.069916769+00:00 stderr F I0120 10:56:07.066650 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 13 triggered by "required configmap/kubelet-serving-ca has changed" 2026-01-20T10:56:07.090748729+00:00 stderr P I0120 10:56:07.090642 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkzt 2026-01-20T10:56:07.090813980+00:00 stderr F 
HP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZ
Bnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2026-01-20T10:56:07.090813980+00:00 stderr P I0120 10:56:07.090641 1 core.go:358] ConfigMap "openshift-kube-apiserver/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxO 
2026-01-20T10:56:07.090834031+00:00 stderr F rcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:18Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2026-01-20T10:56:07Z"}],"resourceVersion":null,"uid":"449953d7-35d8-4eaf-8671-65eda2b482f7"}} 2026-01-20T10:56:07.091021516+00:00 stderr F I0120 10:56:07.090952 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 2026-01-20T10:56:07.091021516+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:56:07.091021516+00:00 stderr F I0120 10:56:07.091005 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver: 2026-01-20T10:56:07.091021516+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106405 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.106353129 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106439 1 tlsconfig.go:178] "Loaded client CA" 
index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.106424841 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106458 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.106443931 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106476 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.106463132 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106495 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.106482682 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106517 1 tlsconfig.go:178] 
"Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.106499723 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106535 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.106521673 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106553 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.106539624 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106570 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.106556634 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106588 1 
tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.106576495 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106608 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.106595325 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106627 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.106612636 +0000 UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.106952 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2026-01-20 10:56:07.106932694 +0000 
UTC))" 2026-01-20T10:56:07.108014103+00:00 stderr F I0120 10:56:07.107421 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:36 +0000 UTC to 2027-01-20 09:49:36 +0000 UTC (now=2026-01-20 10:56:07.107401567 +0000 UTC))" 2026-01-20T10:56:07.247441744+00:00 stderr F I0120 10:56:07.247370 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:07.849971416+00:00 stderr P I0120 10:56:07.849470 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBA 2026-01-20T10:56:07.850143540+00:00 stderr F 
QC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07
fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2026-01-20T10:56:07Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2026-01-20T10:56:07.851651671+00:00 stderr F I0120 10:56:07.850656 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", 
FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2026-01-20T10:56:07.851651671+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:56:08.241744807+00:00 stderr F I0120 10:56:08.241680 1 request.go:697] Waited for 1.121741891s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client 2026-01-20T10:56:08.450733850+00:00 stderr F I0120 10:56:08.450599 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:09.051214426+00:00 stderr F I0120 10:56:09.047052 1 core.go:358] ConfigMap "openshift-config-managed/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2026-01-20T10:56:07Z"}],"resourceVersion":null,"uid":"1b8f54dc-8896-4a59-8c53-834fed1d81fd"}} 2026-01-20T10:56:09.054431832+00:00 stderr F I0120 10:56:09.053713 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-config-managed: 2026-01-20T10:56:09.054431832+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:56:09.441643770+00:00 stderr F I0120 10:56:09.441576 1 request.go:697] Waited for 1.160732809s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver 2026-01-20T10:56:09.648399853+00:00 stderr F I0120 10:56:09.647662 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:10.647872885+00:00 stderr F I0120 10:56:10.647792 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:11.649877924+00:00 stderr F I0120 10:56:11.649806 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:12.648138183+00:00 stderr F I0120 10:56:12.647980 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", 
Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:13.647162232+00:00 stderr F I0120 10:56:13.646127 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:14.647985619+00:00 stderr F I0120 10:56:14.645745 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:15.652760943+00:00 stderr F I0120 10:56:15.652358 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:16.647167539+00:00 stderr F I0120 10:56:16.647081 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-13 
-n openshift-kube-apiserver because it was missing 2026-01-20T10:56:17.652053385+00:00 stderr F I0120 10:56:17.651970 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:19.451370027+00:00 stderr F I0120 10:56:19.451292 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:20.251346310+00:00 stderr F I0120 10:56:20.247792 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:20.849406751+00:00 stderr F I0120 10:56:20.846998 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-13 -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:21.647310708+00:00 stderr F I0120 10:56:21.647224 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", 
Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 13 triggered by "required configmap/kubelet-serving-ca has changed" 2026-01-20T10:56:21.660542835+00:00 stderr F I0120 10:56:21.660503 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:56:21.664616335+00:00 stderr F I0120 10:56:21.664564 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 13 created because required configmap/kubelet-serving-ca has changed 2026-01-20T10:56:22.841267273+00:00 stderr F I0120 10:56:22.841183 1 request.go:697] Waited for 1.178200711s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2026-01-20T10:56:23.841385071+00:00 stderr F I0120 10:56:23.841291 1 request.go:697] Waited for 1.39637638s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2026-01-20T10:56:24.645348262+00:00 stderr F I0120 10:56:24.644980 1 installer_controller.go:524] node crc with revision 12 is the oldest and needs new revision 13 2026-01-20T10:56:24.645407733+00:00 stderr F I0120 10:56:24.645392 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2026-01-20T10:56:24.645407733+00:00 stderr F NodeName: (string) (len=3) "crc", 2026-01-20T10:56:24.645407733+00:00 stderr F CurrentRevision: (int32) 12, 2026-01-20T10:56:24.645407733+00:00 stderr F TargetRevision: (int32) 13, 2026-01-20T10:56:24.645407733+00:00 stderr F LastFailedRevision: (int32) 1, 2026-01-20T10:56:24.645407733+00:00 stderr F 
LastFailedTime: (*v1.Time)(0xc005abac78)(2024-06-26 12:52:09 +0000 UTC), 2026-01-20T10:56:24.645407733+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2026-01-20T10:56:24.645407733+00:00 stderr F LastFailedCount: (int) 1, 2026-01-20T10:56:24.645407733+00:00 stderr F LastFallbackCount: (int) 0, 2026-01-20T10:56:24.645407733+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2026-01-20T10:56:24.645407733+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) 
\"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2026-01-20T10:56:24.645407733+00:00 stderr F } 2026-01-20T10:56:24.645407733+00:00 stderr F } 2026-01-20T10:56:24.659840341+00:00 stderr F I0120 10:56:24.659731 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 12 to 13 because node crc with revision 12 is the oldest 2026-01-20T10:56:24.672829000+00:00 stderr F I0120 10:56:24.672735 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:56:24.682511211+00:00 stderr F I0120 10:56:24.682375 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-20T10:56:24Z","message":"NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12; 0 nodes have achieved new revision 
13","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2026-01-20T10:56:24.698411729+00:00 stderr F I0120 10:56:24.697455 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12; 0 nodes have achieved new revision 13" 2026-01-20T10:56:25.841737810+00:00 stderr F I0120 10:56:25.841630 1 request.go:697] Waited for 1.177381878s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2026-01-20T10:56:27.453691560+00:00 stderr F I0120 10:56:27.453602 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-13-crc -n openshift-kube-apiserver because it was missing 2026-01-20T10:56:28.044177568+00:00 stderr F I0120 10:56:28.043563 1 installer_controller.go:512] "crc" is in transition to 13, 
but has not made progress because installer is not finished, but in Pending phase 2026-01-20T10:56:28.641651692+00:00 stderr F I0120 10:56:28.641472 1 request.go:697] Waited for 1.187052778s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dopenshift-kube-apiserver 2026-01-20T10:56:31.045851107+00:00 stderr F I0120 10:56:31.045091 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2026-01-20T10:56:32.244980360+00:00 stderr F I0120 10:56:32.244880 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2026-01-20T10:56:34.533246697+00:00 stderr F W0120 10:56:34.532349 1 degraded_webhook.go:147] failed to connect to webhook "webhook.cert-manager.io" via service "cert-manager-webhook.cert-manager.svc:443": dial tcp 10.217.5.163:443: connect: connection refused 2026-01-20T10:56:35.538309149+00:00 stderr F W0120 10:56:35.537439 1 degraded_webhook.go:147] failed to connect to webhook "webhook.cert-manager.io" via service "cert-manager-webhook.cert-manager.svc:443": dial tcp 10.217.5.163:443: connect: connection refused 2026-01-20T10:56:37.546332766+00:00 stderr F W0120 10:56:37.545571 1 degraded_webhook.go:147] failed to connect to webhook "webhook.cert-manager.io" via service "cert-manager-webhook.cert-manager.svc:443": dial tcp 10.217.5.163:443: connect: connection refused 2026-01-20T10:56:38.555021125+00:00 stderr F W0120 10:56:38.554919 1 degraded_webhook.go:147] failed to connect to webhook "webhook.cert-manager.io" via service "cert-manager-webhook.cert-manager.svc:443": dial tcp 10.217.5.163:443: connect: connection refused 2026-01-20T10:56:40.600130970+00:00 stderr F I0120 10:56:40.600043 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:56:40.613614813+00:00 stderr F 
I0120 10:56:40.613514 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2026-01-20T10:56:40.614782154+00:00 stderr F W0120 10:56:40.614729 1 degraded_webhook.go:147] failed to connect to webhook "webhook.cert-manager.io" via service "cert-manager-webhook.cert-manager.svc:443": dial tcp 10.217.5.163:443: connect: connection refused 2026-01-20T10:56:41.620139534+00:00 stderr F W0120 10:56:41.619982 1 degraded_webhook.go:147] failed to connect to webhook "webhook.cert-manager.io" via service "cert-manager-webhook.cert-manager.svc:443": dial tcp 10.217.5.163:443: connect: connection refused 2026-01-20T10:56:59.116628583+00:00 stderr F I0120 10:56:59.115893 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:56:59.130533781+00:00 stderr F I0120 10:56:59.130247 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2026-01-20T10:56:59.179254529+00:00 stderr F I0120 10:56:59.176945 1 prune_controller.go:269] Nothing to prune 2026-01-20T10:57:00.317625683+00:00 stderr F I0120 10:57:00.317520 1 request.go:697] Waited for 1.137639284s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2026-01-20T10:57:01.519705203+00:00 stderr F I0120 10:57:01.519617 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2026-01-20T10:57:06.773010666+00:00 stderr F I0120 10:57:06.772614 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2026-01-20T10:57:07.843584418+00:00 stderr F I0120 10:57:07.842838 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" 
(last termination at 2025-08-13 20:08:47 +0000 UTC) at 2026-01-20 10:57:07 +0000 UTC 2026-01-20T10:57:09.871235270+00:00 stderr F I0120 10:57:09.870055 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:08:47 +0000 UTC) at 2026-01-20 10:57:09 +0000 UTC 2026-01-20T10:57:16.254805324+00:00 stderr F E0120 10:57:16.254006 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.265373884+00:00 stderr F E0120 10:57:16.265275 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.281132731+00:00 stderr F E0120 10:57:16.280633 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.307527809+00:00 stderr F E0120 10:57:16.307457 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.351367878+00:00 stderr F E0120 10:57:16.351298 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.438012130+00:00 stderr F E0120 10:57:16.436252 1 base_controller.go:268] 
auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.600968839+00:00 stderr F E0120 10:57:16.600280 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.941593667+00:00 stderr F E0120 10:57:16.941391 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:17.586398069+00:00 stderr F E0120 10:57:17.586301 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:18.871855464+00:00 stderr F E0120 10:57:18.871395 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:21.438683944+00:00 stderr F E0120 10:57:21.438350 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.254736666+00:00 stderr F E0120 10:57:26.254046 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 
10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.561752926+00:00 stderr F E0120 10:57:26.561689 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.254897475+00:00 stderr F E0120 10:57:36.254370 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.117667908+00:00 stderr F E0120 10:57:46.115300 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.240142157+00:00 stderr F E0120 10:57:46.240038 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.240966129+00:00 stderr F E0120 10:57:46.240650 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.243787763+00:00 stderr F E0120 10:57:46.243742 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:46.250617294+00:00 stderr F E0120 10:57:46.250510 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.252606857+00:00 stderr F E0120 10:57:46.252560 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.253892091+00:00 stderr F E0120 10:57:46.253867 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.254007074+00:00 stderr F E0120 10:57:46.253981 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:46.262186410+00:00 stderr F E0120 10:57:46.262135 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.266714790+00:00 stderr F E0120 10:57:46.266661 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.268348413+00:00 stderr F E0120 10:57:46.268317 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:46.440870826+00:00 stderr F E0120 10:57:46.440423 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.641261725+00:00 stderr F E0120 10:57:46.641190 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.844129630+00:00 stderr 
F E0120 10:57:46.844043 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:47.040191675+00:00 stderr F E0120 10:57:47.040106 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.246811540+00:00 stderr F E0120 10:57:47.246716 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.444343053+00:00 stderr F E0120 10:57:47.442921 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: 
connect: connection refused] 2026-01-20T10:57:47.845516512+00:00 stderr F E0120 10:57:47.843379 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:48.059537962+00:00 stderr F E0120 10:57:48.059433 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:48.242383057+00:00 stderr F E0120 10:57:48.242314 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:48.442890090+00:00 stderr F E0120 10:57:48.442821 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:49.511603352+00:00 stderr F E0120 10:57:49.511523 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: "assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 
2026-01-20T10:58:18.405330329+00:00 stderr F I0120 10:58:18.404718 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.log 2025-08-13T20:01:27.619418140+00:00 stderr F I0813 20:01:27.619067 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:01:27.619418140+00:00 stderr F I0813 20:01:27.619317 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:01:27.623407753+00:00 stderr F I0813 20:01:27.623350 1 observer_polling.go:159] Starting file observer
2025-08-13T20:02:01.273561360+00:00 stderr F I0813 20:02:01.271283 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:02:01.273561360+00:00 stderr F I0813 20:02:01.272562 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock... 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302114 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302209 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302358 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302384 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302511 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.321242 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.302391 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.322353 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.322378 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:02:01.404880544+00:00 stderr F I0813 20:02:01.403667 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:02:01.423025072+00:00 stderr F I0813 20:02:01.422737 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:02:01.423450894+00:00 stderr F I0813 20:02:01.423368 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:02:05.703355729+00:00 stderr F I0813 20:02:05.700614 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock 2025-08-13T20:02:05.713348184+00:00 stderr F I0813 20:02:05.711766 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_7069e219-921b-44d2-84f4-301ccffeb8ac became leader 2025-08-13T20:02:05.759933782+00:00 stderr F I0813 20:02:05.759742 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:02:05.774889959+00:00 stderr F I0813 20:02:05.772704 1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders 
DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:02:05.776863865+00:00 stderr F I0813 20:02:05.776663 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", 
"MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:02:17.879199399+00:00 stderr F I0813 20:02:17.878450 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources 2025-08-13T20:02:17.879984281+00:00 stderr F I0813 20:02:17.879924 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController 2025-08-13T20:02:17.880006932+00:00 stderr F I0813 20:02:17.879998 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:02:17.880069803+00:00 stderr F I0813 20:02:17.880021 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController 
2025-08-13T20:02:17.880069803+00:00 stderr F I0813 20:02:17.880056 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:02:17.880263309+00:00 stderr F I0813 20:02:17.880213 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver 2025-08-13T20:02:17.880322861+00:00 stderr F I0813 20:02:17.880279 1 certrotationcontroller.go:886] Starting CertRotation 2025-08-13T20:02:17.880322861+00:00 stderr F I0813 20:02:17.880304 1 certrotationcontroller.go:851] Waiting for CertRotation 2025-08-13T20:02:17.880552497+00:00 stderr F I0813 20:02:17.880502 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T20:02:17.880552497+00:00 stderr F I0813 20:02:17.880537 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController 2025-08-13T20:02:17.881171435+00:00 stderr F I0813 20:02:17.881136 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController 2025-08-13T20:02:17.881326439+00:00 stderr F I0813 20:02:17.881303 1 base_controller.go:67] Waiting for caches to sync for EventWatchController 2025-08-13T20:02:17.881586467+00:00 stderr F I0813 20:02:17.881561 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController 2025-08-13T20:02:17.881675539+00:00 stderr F I0813 20:02:17.881633 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T20:02:17.882130002+00:00 stderr F I0813 20:02:17.882093 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:02:17.882254426+00:00 stderr F I0813 20:02:17.882212 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T20:02:17.882353649+00:00 stderr F I0813 20:02:17.882331 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController 2025-08-13T20:02:17.882443481+00:00 stderr F I0813 20:02:17.882421 1 base_controller.go:67] 
Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T20:02:17.882553794+00:00 stderr F I0813 20:02:17.882516 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController 2025-08-13T20:02:17.882888394+00:00 stderr F I0813 20:02:17.881137 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController 2025-08-13T20:02:17.882955656+00:00 stderr F I0813 20:02:17.882936 1 base_controller.go:73] Caches are synced for PodSecurityReadinessController 2025-08-13T20:02:17.883161222+00:00 stderr F I0813 20:02:17.883118 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ... 2025-08-13T20:02:17.903573644+00:00 stderr F I0813 20:02:17.903482 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T20:02:17.906328282+00:00 stderr F I0813 20:02:17.906267 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T20:02:17.906367934+00:00 stderr F I0813 20:02:17.906325 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T20:02:17.906367934+00:00 stderr F I0813 20:02:17.906339 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T20:02:17.913309502+00:00 stderr F I0813 20:02:17.913261 1 termination_observer.go:145] Starting TerminationObserver 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918169 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918245 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918301 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918422 1 base_controller.go:67] Waiting for caches to sync for InstallerController 
2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918437 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918450 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918464 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918476 1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918488 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918501 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918748 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918766 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:02:17.921977849+00:00 stderr F I0813 20:02:17.921919 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:02:17.921977849+00:00 stderr F I0813 20:02:17.921959 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:02:18.011930785+00:00 stderr F I0813 20:02:18.011883 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController 2025-08-13T20:02:18.011993247+00:00 stderr F I0813 20:02:18.011979 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ... 
2025-08-13T20:02:18.012509172+00:00 stderr F E0813 20:02:18.012486 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T20:02:18.019541782+00:00 stderr F E0813 20:02:18.019469 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T20:02:18.032089430+00:00 stderr F E0813 20:02:18.032013 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T20:02:18.096188709+00:00 stderr F I0813 20:02:18.096086 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController
2025-08-13T20:02:18.096188709+00:00 stderr F I0813 20:02:18.096143 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ...
2025-08-13T20:02:18.096367914+00:00 stderr F I0813 20:02:18.096309 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T20:02:18.096412085+00:00 stderr F I0813 20:02:18.096397 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T20:02:18.097138846+00:00 stderr F I0813 20:02:18.097071 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController
2025-08-13T20:02:18.097163917+00:00 stderr F I0813 20:02:18.097117 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ...
2025-08-13T20:02:18.097237749+00:00 stderr F I0813 20:02:18.097176 1 base_controller.go:73] Caches are synced for SCCReconcileController
2025-08-13T20:02:18.097237749+00:00 stderr F I0813 20:02:18.097209 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ...
2025-08-13T20:02:18.097459735+00:00 stderr F I0813 20:02:18.097399 1 certrotationcontroller.go:869] Finished waiting for CertRotation
2025-08-13T20:02:18.097583379+00:00 stderr F I0813 20:02:18.097544 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.097727063+00:00 stderr F I0813 20:02:18.097635 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver
2025-08-13T20:02:18.097828716+00:00 stderr F I0813 20:02:18.097741 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ...
2025-08-13T20:02:18.098194596+00:00 stderr F I0813 20:02:18.098132 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098210446+00:00 stderr F I0813 20:02:18.098192 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098220017+00:00 stderr F I0813 20:02:18.098212 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098311529+00:00 stderr F I0813 20:02:18.098264 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098359401+00:00 stderr F I0813 20:02:18.098344 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098421402+00:00 stderr F I0813 20:02:18.098375 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098501605+00:00 stderr F I0813 20:02:18.098460 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098584537+00:00 stderr F I0813 20:02:18.098566 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098637529+00:00 stderr F I0813 20:02:18.098625 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098670160+00:00 stderr F I0813 20:02:18.098345 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.098703180+00:00 stderr F I0813 20:02:18.098363 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:02:18.118752002+00:00 stderr F I0813 20:02:18.118654 1 base_controller.go:73] Caches are synced for NodeController
2025-08-13T20:02:18.118752002+00:00 stderr F I0813 20:02:18.118719 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-08-13T20:02:18.118881376+00:00 stderr F I0813 20:02:18.118690 1 base_controller.go:73] Caches are synced for PruneController
2025-08-13T20:02:18.118927347+00:00 stderr F I0813 20:02:18.118913 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-08-13T20:02:18.119201625+00:00 stderr F I0813 20:02:18.118861 1 base_controller.go:73] Caches are synced for BackingResourceController
2025-08-13T20:02:18.119241016+00:00 stderr F I0813 20:02:18.119228 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ...
2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122062 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122114 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122134 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T20:02:18.122199531+00:00 stderr F I0813 20:02:18.122189 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T20:02:18.126026110+00:00 stderr F I0813 20:02:18.125901 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:02:18.296336678+00:00 stderr F I0813 20:02:18.296214 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:18.296719959+00:00 stderr F I0813 20:02:18.296667 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:18.383032062+00:00 stderr F I0813 20:02:18.382910 1 base_controller.go:73] Caches are synced for webhookSupportabilityController
2025-08-13T20:02:18.383032062+00:00 stderr F I0813 20:02:18.382952 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ...
2025-08-13T20:02:18.486049641+00:00 stderr F I0813 20:02:18.485949 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:18.685675145+00:00 stderr F I0813 20:02:18.685551 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:18.783402033+00:00 stderr F I0813 20:02:18.782672 1 base_controller.go:73] Caches are synced for ConnectivityCheckController
2025-08-13T20:02:18.783402033+00:00 stderr F I0813 20:02:18.782721 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ...
2025-08-13T20:02:18.886902746+00:00 stderr F I0813 20:02:18.886694 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:18.904420416+00:00 stderr F I0813 20:02:18.904290 1 base_controller.go:73] Caches are synced for EncryptionKeyController
2025-08-13T20:02:18.904420416+00:00 stderr F I0813 20:02:18.904336 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ...
2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906616 1 base_controller.go:73] Caches are synced for EncryptionMigrationController
2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906658 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ...
2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906658 1 base_controller.go:73] Caches are synced for EncryptionStateController
2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906679 1 base_controller.go:73] Caches are synced for EncryptionPruneController
2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906694 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ...
2025-08-13T20:02:18.906734692+00:00 stderr F I0813 20:02:18.906681 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ...
2025-08-13T20:02:18.919072304+00:00 stderr F I0813 20:02:18.918963 1 base_controller.go:73] Caches are synced for StaticPodStateFallback
2025-08-13T20:02:18.919072304+00:00 stderr F I0813 20:02:18.919023 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ...
2025-08-13T20:02:18.919108845+00:00 stderr F I0813 20:02:18.919085 1 base_controller.go:73] Caches are synced for InstallerController
2025-08-13T20:02:18.919108845+00:00 stderr F I0813 20:02:18.919093 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-08-13T20:02:18.919913978+00:00 stderr F I0813 20:02:18.919758 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition
2025-08-13T20:02:18.919913978+00:00 stderr F I0813 20:02:18.919904 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ...
2025-08-13T20:02:18.920003100+00:00 stderr F I0813 20:02:18.919954 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-08-13T20:02:18.920089033+00:00 stderr F I0813 20:02:18.919981 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-08-13T20:02:18.920103733+00:00 stderr F I0813 20:02:18.920091 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-08-13T20:02:18.920103733+00:00 stderr F I0813 20:02:18.920099 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-08-13T20:02:18.922216883+00:00 stderr F I0813 20:02:18.921920 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9
2025-08-13T20:02:18.922646015+00:00 stderr F I0813 20:02:18.922185 1 base_controller.go:73] Caches are synced for GuardController
2025-08-13T20:02:18.922662596+00:00 stderr F I0813 20:02:18.922647 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-08-13T20:02:18.981311399+00:00 stderr F I0813 20:02:18.981186 1 base_controller.go:73] Caches are synced for EncryptionConditionController
2025-08-13T20:02:18.981311399+00:00 stderr F I0813 20:02:18.981246 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ...
2025-08-13T20:02:19.082276589+00:00 stderr F I0813 20:02:19.082038 1 request.go:697] Waited for 1.160648461s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps?limit=500&resourceVersion=0
2025-08-13T20:02:19.108201699+00:00 stderr F I0813 20:02:19.108016 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121188 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121243 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121245 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121281 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183548 1 base_controller.go:73] Caches are synced for auditPolicyController
2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183577 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile
2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183615 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ...
2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183591 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ...
2025-08-13T20:02:19.185129783+00:00 stderr F I0813 20:02:19.185067 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
2025-08-13T20:02:19.284423006+00:00 stderr F I0813 20:02:19.284276 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:19.379709014+00:00 stderr F I0813 20:02:19.379559 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources
2025-08-13T20:02:19.379709014+00:00 stderr F I0813 20:02:19.379598 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ...
2025-08-13T20:02:19.485287026+00:00 stderr F I0813 20:02:19.485140 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:19.499418569+00:00 stderr F I0813 20:02:19.499274 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.499418569+00:00 stderr F I0813 20:02:19.499329 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.582021656+00:00 stderr F I0813 20:02:19.581705 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController
2025-08-13T20:02:19.582021656+00:00 stderr F I0813 20:02:19.581760 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ...
2025-08-13T20:02:19.685821037+00:00 stderr F I0813 20:02:19.685580 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698586 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698622 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698647 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698673 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698693 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698719 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698767 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698888 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698963 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.699015473+00:00 stderr F I0813 20:02:19.698999 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.699130016+00:00 stderr F I0813 20:02:19.699047 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.699130016+00:00 stderr F I0813 20:02:19.699090 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.699147557+00:00 stderr F I0813 20:02:19.699138 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.699157397+00:00 stderr F I0813 20:02:19.699146 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699174 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699206 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699229 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699237 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703331 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703368 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703384 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703388 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780765 1 base_controller.go:73] Caches are synced for TargetConfigController
2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780887 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ...
2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780899 1 base_controller.go:73] Caches are synced for NodeKubeconfigController
2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780925 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ...
2025-08-13T20:02:19.889872838+00:00 stderr F I0813 20:02:19.889719 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:20.082710318+00:00 stderr F I0813 20:02:20.082501 1 request.go:697] Waited for 2.16078237s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/kube-system/secrets?limit=500&resourceVersion=0
2025-08-13T20:02:20.092984811+00:00 stderr F I0813 20:02:20.092914 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:20.293953334+00:00 stderr F I0813 20:02:20.289478 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:20.468287497+00:00 stderr F E0813 20:02:20.468230 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]
2025-08-13T20:02:20.492221110+00:00 stderr F I0813 20:02:20.492157 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:02:20.499635562+00:00 stderr F I0813 20:02:20.499560 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:02:20.501735802+00:00 stderr F I0813 20:02:20.501518 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:20.502890005+00:00 stderr F I0813 20:02:20.502770 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:20.526226710+00:00 stderr F I0813 20:02:20.519326 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T20:02:20.526226710+00:00 stderr F I0813 20:02:20.519364 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T20:02:20.582665250+00:00 stderr F I0813 20:02:20.582493 1 base_controller.go:73] Caches are synced for EventWatchController
2025-08-13T20:02:20.582665250+00:00 stderr F I0813 20:02:20.582569 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ...
2025-08-13T20:02:20.685719830+00:00 stderr F I0813 20:02:20.685578 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:02:20.781261646+00:00 stderr F I0813 20:02:20.781129 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T20:02:20.781261646+00:00 stderr F I0813 20:02:20.781199 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T20:02:21.088760848+00:00 stderr F I0813 20:02:21.082530 1 request.go:697] Waited for 2.160698948s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller
2025-08-13T20:02:21.154002859+00:00 stderr F I0813 20:02:21.152140 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]"
2025-08-13T20:02:21.182432900+00:00 stderr F I0813 20:02:21.182253 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:02:21.906225198+00:00 stderr F E0813 20:02:21.905498 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:02:22.082699332+00:00 stderr F I0813 20:02:22.082563 1 request.go:697] Waited for 2.295925205s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config
2025-08-13T20:02:23.283449646+00:00 stderr F I0813 20:02:23.282137 1 request.go:697] Waited for 1.368757607s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies
2025-08-13T20:02:23.596598178+00:00 stderr F I0813 20:02:23.593187 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authenticator -n openshift-kube-apiserver because it changed
2025-08-13T20:02:23.598113781+00:00 stderr F I0813 20:02:23.598044 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 10 triggered by "optional secret/webhook-authenticator has changed"
2025-08-13T20:02:25.681928317+00:00 stderr F I0813 20:02:25.681602 1 request.go:697] Waited for 1.173333492s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies
2025-08-13T20:02:25.917500667+00:00 stderr F I0813 20:02:25.917379 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-10 -n openshift-kube-apiserver because it was missing
2025-08-13T20:02:25.919894385+00:00 stderr F I0813 20:02:25.919761 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:02:26.197572827+00:00 stderr F I0813 20:02:26.195195 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:02:26.202027674+00:00 stderr F I0813 20:02:26.199059 1 prune_controller.go:269] Nothing to prune
2025-08-13T20:02:26.839246442+00:00 stderr F I0813 20:02:26.838416 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing
2025-08-13T20:02:26.890933386+00:00 stderr F I0813 20:02:26.890736 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]" to "NodeControllerDegraded: All master nodes are ready"
2025-08-13T20:02:26.906316425+00:00 stderr F E0813 20:02:26.905734 1 podsecurityreadinesscontroller.go:102] "namespace:" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-config-operator?dryRun=All&fieldManager=pod-security-readiness-controller&force=false\": dial tcp 10.217.4.1:443: connect: connection refused" openshift-config-operator="(MISSING)"
2025-08-13T20:02:27.062032857+00:00 stderr F I0813 20:02:27.061872 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" (last termination at 2024-06-27 13:28:05 +0000 UTC) at 2025-08-13 20:02:26 +0000 UTC
2025-08-13T20:02:27.282296610+00:00 stderr F I0813 20:02:27.282173 1 request.go:697] Waited for 1.078495576s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa
2025-08-13T20:02:27.288882048+00:00 stderr F E0813 20:02:27.288736 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:27.493059402+00:00 stderr F W0813 20:02:27.492835 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.493059402+00:00 stderr F E0813 20:02:27.492985 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.689448045+00:00 stderr F I0813 20:02:27.688426 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapCreateFailed' Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.696928148+00:00 stderr F I0813 20:02:27.696380 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RevisionCreateFailed' Failed to create revision 10: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.697385821+00:00 stderr F E0813 20:02:27.697363 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.702021973+00:00 stderr F E0813 20:02:27.701974 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.723108285+00:00 stderr F E0813 20:02:27.722548 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.749338123+00:00 stderr F E0813 20:02:27.749277 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.794133701+00:00 stderr F E0813 20:02:27.793987 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.878679733+00:00 stderr F E0813 20:02:27.878589 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.888764171+00:00 stderr F E0813 20:02:27.888701 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:28.043317470+00:00 stderr F E0813 20:02:28.043179 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:28.289375869+00:00 stderr F E0813 20:02:28.289268 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:28.379214582+00:00 stderr F E0813 20:02:28.379076 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:28.497073854+00:00 stderr F W0813 20:02:28.496880 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:28.497182557+00:00 stderr F E0813 20:02:28.497164 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.689759121+00:00 stderr F E0813 20:02:28.689575 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.894884463+00:00 stderr F E0813 20:02:28.894732 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.025637333+00:00 stderr F E0813 20:02:29.025366 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.088538967+00:00 stderr F E0813 20:02:29.088396 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.295313266+00:00 stderr F W0813 20:02:29.295216 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.295313266+00:00 stderr F E0813 20:02:29.295284 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.488824236+00:00 stderr F E0813 20:02:29.488450 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.505160672+00:00 stderr F I0813 20:02:29.503630 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2024-06-27 13:28:05 +0000 UTC) at 2025-08-13 20:02:28 +0000 UTC 2025-08-13T20:02:29.907564892+00:00 stderr F E0813 20:02:29.907113 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.091381636+00:00 stderr F W0813 20:02:30.088902 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:30.091381636+00:00 stderr F E0813 20:02:30.089000 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.299063050+00:00 stderr F E0813 20:02:30.298658 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.330682142+00:00 stderr F E0813 20:02:30.330624 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.697258979+00:00 stderr F E0813 20:02:30.697085 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.887421723+00:00 stderr F W0813 20:02:30.887227 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.887421723+00:00 stderr F E0813 20:02:30.887279 1 
base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.086941195+00:00 stderr F E0813 20:02:31.086732 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.490093166+00:00 stderr F E0813 20:02:31.489980 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.686267992+00:00 stderr F W0813 20:02:31.686138 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.686267992+00:00 stderr F E0813 20:02:31.686205 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.875890662+00:00 stderr F E0813 20:02:31.875719 1 event.go:355] "Unable to 
write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:31.901012308+00:00 stderr F E0813 20:02:31.900911 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.913477984+00:00 stderr F E0813 20:02:31.913424 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/controlplane.operator.openshift.io/v1alpha1/namespaces/openshift-kube-apiserver/podnetworkconnectivitychecks": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917548790+00:00 stderr F E0813 20:02:31.917479 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:02:31.921520413+00:00 stderr F E0813 20:02:31.921463 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.935493062+00:00 stderr F E0813 20:02:31.935342 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.957609803+00:00 stderr F E0813 20:02:31.957460 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.999495908+00:00 stderr F E0813 20:02:31.999379 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.090438442+00:00 stderr F E0813 20:02:32.090324 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.253104613+00:00 stderr F E0813 20:02:32.252976 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.287631368+00:00 stderr F E0813 20:02:32.287450 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.485872313+00:00 stderr F W0813 20:02:32.485628 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.485872313+00:00 stderr F E0813 20:02:32.485740 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.575041717+00:00 stderr F E0813 20:02:32.574943 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.685457496+00:00 stderr F E0813 20:02:32.685363 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.896541708+00:00 stderr F E0813 20:02:32.896479 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.081990088+00:00 stderr F I0813 20:02:33.081929 1 request.go:697] Waited for 1.153124285s due to 
client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client 2025-08-13T20:02:33.116142203+00:00 stderr F E0813 20:02:33.116034 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:02:33.219207153+00:00 stderr F E0813 20:02:33.219100 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.288288754+00:00 stderr F E0813 20:02:33.288215 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.838225562+00:00 stderr F W0813 20:02:33.837417 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.838328125+00:00 stderr F E0813 20:02:33.838272 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.895170166+00:00 stderr F E0813 20:02:33.895117 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:02:34.123866160+00:00 stderr F I0813 20:02:34.123707 1 request.go:697] Waited for 1.437381574s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:34.132409974+00:00 stderr F E0813 20:02:34.131888 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.285426288+00:00 stderr F I0813 20:02:34.285325 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.287761615+00:00 stderr F E0813 20:02:34.287687 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.287761615+00:00 stderr F E0813 20:02:34.287726 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:02:34.501765090+00:00 stderr F E0813 20:02:34.501711 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.704875064+00:00 stderr F E0813 20:02:34.704735 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.888966325+00:00 stderr F E0813 20:02:34.888912 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.086482340+00:00 stderr F W0813 20:02:35.086334 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.086482340+00:00 stderr F E0813 20:02:35.086413 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.286871476+00:00 stderr F I0813 20:02:35.286587 1 request.go:697] Waited for 1.149563123s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:35.291251821+00:00 stderr F E0813 20:02:35.291186 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.889880789+00:00 stderr F E0813 20:02:35.889726 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:36.090603975+00:00 stderr F E0813 20:02:36.090455 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.482122994+00:00 stderr F I0813 20:02:36.482009 1 request.go:697] Waited for 1.394598594s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:02:36.485995604+00:00 stderr F W0813 20:02:36.485923 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.485995604+00:00 stderr F E0813 20:02:36.485981 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.686004040+00:00 stderr F E0813 20:02:36.685921 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.885666116+00:00 stderr F I0813 20:02:36.885489 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.886292603+00:00 stderr F E0813 20:02:36.886219 1 base_controller.go:268] 
InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.064326992+00:00 stderr F E0813 20:02:37.064202 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.109672166+00:00 stderr F E0813 20:02:37.109536 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:37.287078257+00:00 stderr F E0813 20:02:37.286993 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.682584849+00:00 stderr F I0813 20:02:37.682427 1 request.go:697] Waited for 1.195569716s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:02:37.686699647+00:00 stderr F W0813 20:02:37.686536 1 
base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.686699647+00:00 stderr F E0813 20:02:37.686594 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.886023922+00:00 stderr F E0813 20:02:37.885969 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.021723653+00:00 stderr F E0813 20:02:38.021300 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.496642781+00:00 stderr F E0813 20:02:38.496414 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:38.610677824+00:00 stderr F E0813 20:02:38.610594 1 event.go:355] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:38.682735480+00:00 stderr F I0813 20:02:38.682566 1 request.go:697] Waited for 1.3545324s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:02:38.686076805+00:00 stderr F E0813 20:02:38.685979 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.085831439+00:00 stderr F I0813 20:02:39.085615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": 
Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.087639121+00:00 stderr F E0813 20:02:39.087595 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.487886129+00:00 stderr F E0813 20:02:39.487464 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.895188618+00:00 stderr F W0813 20:02:39.895017 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.895188618+00:00 stderr F E0813 20:02:39.895131 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.283222147+00:00 stderr F I0813 20:02:40.282596 1 request.go:697] Waited for 1.096328425s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:40.288259131+00:00 stderr F E0813 20:02:40.287945 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.513629800+00:00 stderr F E0813 20:02:40.513510 1 
base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:40.686668777+00:00 stderr F E0813 20:02:40.686571 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.084468975+00:00 stderr F I0813 20:02:41.084342 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.086141903+00:00 stderr F E0813 20:02:41.086113 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:02:41.215589026+00:00 stderr F E0813 20:02:41.215513 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:02:41.285356717+00:00 stderr F E0813 20:02:41.285243 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:02:41.682650530+00:00 stderr F E0813 20:02:41.682502 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:02:41.685689016+00:00 stderr F E0813 20:02:41.685538 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.885515477+00:00 stderr F E0813 20:02:41.885385 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.082715363+00:00 stderr F I0813 20:02:42.082621 1 request.go:697] Waited for 
1.024058585s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:02:42.088271282+00:00 stderr F E0813 20:02:42.088207 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.186979738+00:00 stderr F E0813 20:02:42.186926 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.885542287+00:00 stderr F I0813 20:02:42.885433 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.887598435+00:00 stderr F E0813 20:02:42.887569 1 base_controller.go:268] InstallerController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.106315655+00:00 stderr F E0813 20:02:43.106176 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:43.288137192+00:00 stderr F E0813 20:02:43.288006 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.284390023+00:00 stderr F I0813 20:02:44.284299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.297508357+00:00 stderr F E0813 20:02:44.297295 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:02:45.087187274+00:00 stderr F E0813 20:02:45.087053 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.317925606+00:00 stderr F E0813 20:02:45.317397 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:45.684556595+00:00 stderr F I0813 20:02:45.684425 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.688731014+00:00 stderr F E0813 20:02:45.688693 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:46.087318435+00:00 stderr F E0813 20:02:46.087233 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.884228139+00:00 stderr F I0813 20:02:46.884159 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.886405971+00:00 stderr F E0813 20:02:46.886333 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.109306260+00:00 stderr F E0813 20:02:47.109244 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:02:48.092381354+00:00 stderr F E0813 20:02:48.092322 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:48.287721726+00:00 stderr F I0813 20:02:48.287638 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.289058344+00:00 stderr F E0813 20:02:48.289024 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.335550790+00:00 stderr F E0813 20:02:48.335494 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.486293661+00:00 stderr F E0813 20:02:48.486221 1 base_controller.go:268] StaticPodStateController 
reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.049689662+00:00 stderr F E0813 20:02:49.049111 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:49.543433607+00:00 stderr F E0813 20:02:49.543292 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:49.684554953+00:00 stderr F I0813 20:02:49.684439 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.686370745+00:00 stderr F E0813 20:02:49.686338 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.885480655+00:00 stderr F E0813 20:02:49.885305 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.087540489+00:00 stderr F E0813 20:02:50.087422 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.697695005+00:00 stderr F W0813 20:02:50.697388 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.697695005+00:00 stderr F E0813 20:02:50.697469 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229356262+00:00 stderr F E0813 20:02:51.229271 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:02:51.284985769+00:00 stderr F I0813 20:02:51.284770 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.286830041+00:00 stderr F E0813 20:02:51.286659 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.485913661+00:00 stderr F E0813 20:02:51.485733 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.685286878+00:00 stderr F E0813 20:02:51.685195 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:02:52.176266554+00:00 stderr F E0813 20:02:52.176148 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:52.430383403+00:00 stderr F E0813 20:02:52.430286 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.285294732+00:00 stderr F I0813 
20:02:53.285147 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.286495956+00:00 stderr F E0813 20:02:53.286258 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.493943244+00:00 stderr F E0813 20:02:53.487108 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.707885268+00:00 stderr F E0813 20:02:53.707699 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:02:54.086622012+00:00 stderr F E0813 20:02:54.086481 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.499963951+00:00 stderr F E0813 20:02:55.499741 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:55.686172193+00:00 stderr F E0813 20:02:55.686116 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:57.288966766+00:00 stderr F E0813 20:02:57.287212 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:57.506960085+00:00 stderr F E0813 20:02:57.506636 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:59.052462204+00:00 stderr F E0813 20:02:59.052368 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:59.089362067+00:00 stderr F E0813 20:02:59.089260 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:59.285310937+00:00 stderr F E0813 20:02:59.285216 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:59.485938219+00:00 stderr F E0813 20:02:59.485835 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:59.711761171+00:00 stderr F E0813 20:02:59.711633 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:03:00.887365237+00:00 stderr F E0813 20:03:00.887225 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:01.232437542+00:00 stderr F E0813 20:03:01.232338 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 
2025-08-13T20:03:01.687880194+00:00 stderr F E0813 20:03:01.687697 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:02.086353261+00:00 stderr F E0813 20:03:02.086236 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.466545123+00:00 stderr F E0813 20:03:03.466443 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.684693427+00:00 stderr F I0813 20:03:03.684563 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.686747335+00:00 stderr F E0813 20:03:03.686674 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.467702884+00:00 stderr F E0813 20:03:04.467650 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:08.822029240+00:00 stderr F E0813 20:03:08.821291 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:09.055237253+00:00 stderr F E0813 20:03:09.055153 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n 
openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:09.202882104+00:00 stderr F E0813 20:03:09.200200 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:09.379519793+00:00 stderr F E0813 20:03:09.379391 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:10.016453054+00:00 stderr F E0813 20:03:10.016044 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:03:11.184720560+00:00 stderr F W0813 20:03:11.184594 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.184720560+00:00 stderr F E0813 20:03:11.184678 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.235424036+00:00 stderr F E0813 20:03:11.235291 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 
20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:11.690173799+00:00 stderr F E0813 20:03:11.689975 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:12.912997853+00:00 stderr F E0813 20:03:12.912941 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:13.726670644+00:00 stderr F E0813 20:03:13.726599 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.125912634+00:00 stderr F E0813 20:03:18.125755 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:18.922056237+00:00 stderr F E0813 20:03:18.921896 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.924148366+00:00 stderr F E0813 20:03:18.924051 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.932113354+00:00 stderr F E0813 20:03:18.932008 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.946991928+00:00 stderr F E0813 20:03:18.946904 1 
base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.968900543+00:00 stderr F E0813 20:03:18.968646 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.010868720+00:00 stderr F E0813 20:03:19.010530 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.058148869+00:00 stderr F E0813 20:03:19.058059 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 
2025-08-13T20:03:19.100995822+00:00 stderr F E0813 20:03:19.100933 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.194039566+00:00 stderr F E0813 20:03:19.193906 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.262645363+00:00 stderr F E0813 20:03:19.262539 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.418278873+00:00 stderr F E0813 20:03:19.417415 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): 
Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): 
Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: 
connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:19.591219037+00:00 stderr F E0813 20:03:19.591118 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:19.723619304+00:00 stderr F E0813 20:03:19.723502 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:20.728083109+00:00 stderr F E0813 20:03:20.727651 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.124169747+00:00 stderr F E0813 20:03:21.124066 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.237491670+00:00 stderr F E0813 20:03:21.237385 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message 
changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:21.691880852+00:00 stderr F E0813 20:03:21.691747 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC 
m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:22.011119850+00:00 stderr F E0813 20:03:22.010974 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.174660680+00:00 stderr F I0813 20:03:24.174209 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.176144142+00:00 stderr F E0813 20:03:24.176038 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.573362163+00:00 stderr F E0813 20:03:24.573244 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.954020392+00:00 stderr F E0813 20:03:24.953891 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:03:29.062750943+00:00 stderr F E0813 20:03:29.062580 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:29.198595688+00:00 stderr F E0813 20:03:29.198530 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.695720750+00:00 stderr F E0813 20:03:29.695628 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.557427003+00:00 stderr F E0813 20:03:30.557361 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:31.240127639+00:00 stderr F E0813 20:03:31.240077 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC 
m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:31.694991844+00:00 stderr F E0813 20:03:31.694894 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:34.237457784+00:00 stderr F E0813 20:03:34.237057 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.067095360+00:00 stderr F E0813 20:03:39.066545 1 event.go:355] "Unable to write event (may retry after 
sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:39.197105628+00:00 stderr F E0813 20:03:39.196995 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.938886849+00:00 stderr F E0813 20:03:39.938749 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:41.243562208+00:00 stderr F E0813 20:03:41.243196 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 
openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:41.697672863+00:00 stderr F E0813 20:03:41.697378 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:49.071193746+00:00 stderr F E0813 20:03:49.070619 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:49.200004021+00:00 stderr F E0813 20:03:49.199886 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.786542082+00:00 stderr F E0813 20:03:49.786481 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:51.262599080+00:00 stderr F E0813 20:03:51.262267 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are 
ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:51.710307585+00:00 stderr F E0813 20:03:51.710129 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:52.158916293+00:00 stderr F W0813 20:03:52.158156 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.158916293+00:00 stderr F E0813 20:03:52.158834 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.881637898+00:00 stderr F E0813 20:03:53.879692 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.102664930+00:00 stderr F E0813 20:03:59.102179 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:59.199332758+00:00 
stderr F E0813 20:03:59.199234 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:00.451170349+00:00 stderr F E0813 20:04:00.448977 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:01.292417287+00:00 stderr F E0813 20:04:01.292352 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC 
m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:01.734228721+00:00 stderr F E0813 20:04:01.728554 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:05.252811975+00:00 stderr F I0813 20:04:05.252134 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 
1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.261754790+00:00 stderr F E0813 20:04:05.260639 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.110825073+00:00 stderr F E0813 20:04:09.110712 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:09.200838809+00:00 stderr F E0813 20:04:09.200682 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.386987151+00:00 stderr F E0813 20:04:09.386466 1 
leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:11.350253577+00:00 stderr F E0813 20:04:11.349438 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:11.731660527+00:00 stderr F E0813 20:04:11.731547 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:13.412571139+00:00 stderr F E0813 20:04:13.412460 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.126151473+00:00 stderr F E0813 20:04:18.125554 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:18.922528371+00:00 stderr F E0813 20:04:18.922199 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.924317022+00:00 stderr F E0813 20:04:18.924241 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.134651123+00:00 stderr F E0813 20:04:19.134583 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was 
missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.134989332+00:00 stderr F E0813 20:04:19.134963 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.136563777+00:00 stderr F E0813 20:04:19.136456 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.200251414+00:00 stderr F E0813 20:04:19.200126 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.458201473+00:00 stderr F E0813 20:04:19.457505 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:04:19.810122922+00:00 stderr F E0813 20:04:19.810042 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:21.354154239+00:00 stderr F E0813 20:04:21.353918 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 
2025-08-13T20:04:21.354203371+00:00 stderr F E0813 20:04:21.354157 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:21.737023152+00:00 stderr F E0813 20:04:21.736903 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:21.737290399+00:00 stderr F E0813 20:04:21.737166 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:21.747134221+00:00 stderr F E0813 20:04:21.747030 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:24.410946792+00:00 stderr F E0813 20:04:24.408355 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:25.586371371+00:00 stderr F E0813 20:04:25.585478 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC 
m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:29.312902817+00:00 stderr F E0813 20:04:29.312340 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:34.413537650+00:00 stderr F E0813 20:04:34.413133 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:35.594003465+00:00 stderr F E0813 20:04:35.590587 1 event.go:355] "Unable to write event 
(may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:39.240500555+00:00 stderr F E0813 20:04:39.239746 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.412515322+00:00 stderr F E0813 20:04:41.412003 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.517410856+00:00 stderr F E0813 20:04:41.517330 1 base_controller.go:268] BackingResourceController reconciliation failed: 
["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:44.424010949+00:00 stderr F E0813 20:04:44.423326 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:45.607638554+00:00 stderr F E0813 20:04:45.596688 1 
event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:46.880512543+00:00 stderr F E0813 20:04:46.880306 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:49.205967865+00:00 stderr F E0813 20:04:49.202745 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.944390879+00:00 stderr F E0813 20:04:52.943846 1 base_controller.go:268] KubeAPIServerStaticResources 
reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" 
(string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): 
Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:54.427027635+00:00 stderr F E0813 20:04:54.426478 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:55.599457919+00:00 stderr F E0813 20:04:55.599021 1 event.go:355] "Unable to 
write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:56.193565092+00:00 stderr F E0813 20:04:56.193401 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.210231627+00:00 stderr F E0813 20:04:59.204607 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:04.431498625+00:00 stderr F E0813 20:05:04.431061 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:05:05.601999314+00:00 stderr F E0813 20:05:05.601540 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:05:09.236661316+00:00 stderr F E0813 20:05:09.236236 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.383111080+00:00 stderr F E0813 20:05:09.382720 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.829698110+00:00 stderr F E0813 20:05:11.829275 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.091414447+00:00 stderr F W0813 20:05:14.090446 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.091414447+00:00 stderr F E0813 20:05:14.090630 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.441321307+00:00 stderr F E0813 20:05:14.440432 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:05:15.606367279+00:00 stderr F E0813 20:05:15.605755 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:05:15.806682745+00:00 stderr F E0813 20:05:15.806565 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:18.128535484+00:00 stderr F E0813 20:05:18.128411 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:05:18.923285102+00:00 stderr F 
E0813 20:05:18.922736 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:18.938201949+00:00 stderr F E0813 20:05:18.938058 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:19.209547759+00:00 stderr F E0813 20:05:19.209191 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:19.444272021+00:00 stderr F E0813 20:05:19.443961 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:05:19.828615827+00:00 stderr F E0813 20:05:19.828516 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:43.067359005+00:00 stderr F I0813 20:05:43.065919 1 installer_controller.go:500] "crc" moving to 
(v1.NodeStatus) { 2025-08-13T20:05:43.067359005+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:05:43.067359005+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:05:43.067359005+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0021abea8)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:05:43.067359005+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) 
\"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:05:43.067359005+00:00 stderr F } 2025-08-13T20:05:43.067359005+00:00 stderr F } 2025-08-13T20:05:43.067359005+00:00 stderr F because static pod is ready 2025-08-13T20:05:43.114641889+00:00 stderr F I0813 20:05:43.114519 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31246 2025-08-13T20:05:43.150149946+00:00 stderr F I0813 20:05:43.148693 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 9 because static pod is ready 2025-08-13T20:05:55.437383641+00:00 stderr F I0813 20:05:55.436663 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.013147382+00:00 stderr F I0813 20:05:57.926832 1 reflector.go:351] Caches populated for 
*v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.052286792+00:00 stderr F I0813 20:05:58.052155 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31823 2025-08-13T20:05:58.128587747+00:00 stderr F I0813 20:05:58.127110 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.200418084+00:00 stderr F I0813 20:05:58.200336 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.200418084+00:00 stderr F W0813 20:05:58.200380 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 2025-08-13T20:05:58.200463696+00:00 stderr F E0813 20:05:58.200427 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:05:58.314226252+00:00 stderr F I0813 20:05:58.311183 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.314226252+00:00 stderr F W0813 20:05:58.311231 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 2025-08-13T20:05:58.314226252+00:00 stderr F E0813 20:05:58.311258 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:05:58.342522043+00:00 stderr F I0813 20:05:58.342053 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.342522043+00:00 stderr F W0813 20:05:58.342474 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 
2025-08-13T20:05:58.342522043+00:00 stderr F E0813 20:05:58.342503 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:06:00.719452509+00:00 stderr F I0813 20:06:00.719020 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:01.479830303+00:00 stderr F I0813 20:06:01.479663 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:02.789855037+00:00 stderr F I0813 20:06:02.787758 1 reflector.go:351] Caches populated for *v1.KubeAPIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.220363992+00:00 stderr F I0813 20:06:06.219590 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.797371706+00:00 stderr F I0813 20:06:06.796734 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.084738681+00:00 stderr F I0813 20:06:08.084679 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.348720890+00:00 stderr F I0813 20:06:08.348309 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:10.168909382+00:00 stderr F I0813 20:06:10.167730 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:13.305920214+00:00 stderr F I0813 20:06:13.304600 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:13.823342542+00:00 stderr F I0813 20:06:13.823278 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:15.913681751+00:00 stderr F I0813 20:06:15.912900 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:17.953297747+00:00 stderr F I0813 20:06:17.953166 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:18.997522470+00:00 stderr F I0813 20:06:18.995168 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:20.484612853+00:00 stderr F I0813 20:06:20.481273 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:20.832134275+00:00 stderr F I0813 20:06:20.830053 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:21.027976373+00:00 stderr F I0813 20:06:21.026982 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:23.123362347+00:00 stderr F I0813 20:06:23.122947 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.086211088+00:00 stderr F I0813 20:06:24.086107 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.098150370+00:00 stderr F I0813 20:06:24.098014 1 termination_observer.go:130] Observed termination of API server pod "kube-apiserver-crc" at 2025-08-13 20:04:15 +0000 UTC 2025-08-13T20:06:24.112124860+00:00 stderr F I0813 20:06:24.112002 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:24.112124860+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:24.112124860+00:00 stderr F CurrentRevision: (int32) 
9, 2025-08-13T20:06:24.112124860+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0027b9050)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:24.112124860+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) 
(len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:24.112124860+00:00 stderr F } 2025-08-13T20:06:24.112124860+00:00 stderr F } 2025-08-13T20:06:24.112124860+00:00 stderr F because static pod is ready 2025-08-13T20:06:24.143504988+00:00 stderr F I0813 20:06:24.143375 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:06:24.164448378+00:00 stderr F I0813 20:06:24.164305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 9 because static pod is ready 2025-08-13T20:06:24.687857027+00:00 stderr F I0813 20:06:24.687742 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:25.501171447+00:00 stderr F I0813 20:06:25.501098 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:25.501171447+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:25.501171447+00:00 stderr F CurrentRevision: 
(int32) 9, 2025-08-13T20:06:25.501171447+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedTime: (*v1.Time)(0xc005bf0c90)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:25.501171447+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: 
(string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:25.501171447+00:00 stderr F } 2025-08-13T20:06:25.501171447+00:00 stderr F } 2025-08-13T20:06:25.501171447+00:00 stderr F because static pod is ready 2025-08-13T20:06:25.517202186+00:00 stderr F I0813 20:06:25.517044 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:25.543665974+00:00 stderr F I0813 20:06:25.543613 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=32041 2025-08-13T20:06:26.140531476+00:00 stderr F I0813 20:06:26.140387 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:27.886909364+00:00 stderr F I0813 20:06:27.886722 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.477964700+00:00 stderr F I0813 20:06:28.477299 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.803335017+00:00 stderr F I0813 20:06:28.803175 1 reflector.go:351] Caches populated for 
*v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.803641036+00:00 stderr F I0813 20:06:28.803498 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:06:29.674410552+00:00 stderr F I0813 20:06:29.671815 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.877198799+00:00 stderr F I0813 20:06:29.875523 1 reflector.go:351] Caches populated for *v1.Authentication from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.889383038+00:00 stderr F I0813 20:06:29.886916 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.292428349+00:00 stderr F I0813 20:06:31.292313 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.494001678+00:00 stderr F I0813 20:06:31.490208 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:32.677649895+00:00 stderr F I0813 20:06:32.677031 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.013536475+00:00 stderr F I0813 20:06:33.013155 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.276858445+00:00 stderr F I0813 20:06:33.274143 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.278657506+00:00 stderr F I0813 20:06:33.278563 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: 
api.crc.testing 2025-08-13T20:06:35.037602926+00:00 stderr F I0813 20:06:35.036613 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:37.144938036+00:00 stderr F I0813 20:06:37.141555 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:38.823364977+00:00 stderr F I0813 20:06:38.822602 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:42.599176403+00:00 stderr F I0813 20:06:42.598474 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.194326857+00:00 stderr F I0813 20:06:43.193966 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.197930590+00:00 stderr F I0813 20:06:43.195748 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:43.198819035+00:00 stderr F I0813 20:06:43.198697 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are 
synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:43.209392969+00:00 stderr F I0813 20:06:43.209285 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:43.212251571+00:00 stderr F I0813 20:06:43.212085 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 11 triggered by "configmap \"kube-apiserver-cert-syncer-kubeconfig-10\" not found" 2025-08-13T20:06:43.224306866+00:00 stderr F I0813 20:06:43.224140 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.540526073+00:00 stderr F I0813 20:06:43.538687 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.861718771+00:00 stderr F I0813 20:06:43.861624 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 
'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10" 2025-08-13T20:06:43.863087181+00:00 stderr F E0813 20:06:43.862915 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:43.874888159+00:00 stderr F I0813 20:06:43.874700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:43.876200427+00:00 stderr F I0813 20:06:43.876129 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: 
bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:43.880994614+00:00 stderr F I0813 20:06:43.880931 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:44.686921531+00:00 stderr F I0813 20:06:44.684407 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes 
have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:44.710498407+00:00 stderr F I0813 20:06:44.698668 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]" 2025-08-13T20:06:44.795200656+00:00 stderr F E0813 20:06:44.795113 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:44.812895113+00:00 stderr F E0813 20:06:44.808500 1 base_controller.go:268] InstallerController 
reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.007854942+00:00 stderr F I0813 20:06:45.007627 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.013050641+00:00 stderr F E0813 20:06:45.012958 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.015726088+00:00 stderr F E0813 20:06:45.015702 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.017251091+00:00 stderr F I0813 20:06:45.015751 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.035925207+00:00 stderr F I0813 20:06:45.035704 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.036284657+00:00 stderr F E0813 20:06:45.036250 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.108955201+00:00 stderr F I0813 20:06:45.108736 1 reflector.go:351] Caches populated for *v1.OAuth from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.119003569+00:00 stderr F E0813 20:06:45.118342 1 base_controller.go:268] InstallerController reconciliation failed: missing required 
resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.119003569+00:00 stderr F I0813 20:06:45.118418 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.229408244+00:00 stderr F I0813 20:06:45.229219 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.281371224+00:00 stderr F E0813 20:06:45.281270 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.281371224+00:00 stderr F I0813 20:06:45.281347 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: 
bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.605049374+00:00 stderr F E0813 20:06:45.604950 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.605106336+00:00 stderr F I0813 20:06:45.605022 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:46.159407527+00:00 stderr F I0813 20:06:46.159252 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:46.250021875+00:00 stderr F E0813 20:06:46.247318 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: 
bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:46.250021875+00:00 stderr F I0813 20:06:46.247419 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:47.529952613+00:00 stderr F E0813 20:06:47.529861 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:47.530083207+00:00 stderr F I0813 20:06:47.530058 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: 
etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:47.873036359+00:00 stderr F I0813 20:06:47.871318 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:48.555591518+00:00 stderr F I0813 20:06:48.555494 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:48.954120725+00:00 stderr F I0813 20:06:48.953954 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:49.227756030+00:00 stderr F I0813 20:06:49.224391 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:49.587960507+00:00 stderr F I0813 20:06:49.587728 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:49.877065196+00:00 stderr F I0813 20:06:49.875605 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.092003138+00:00 
stderr F E0813 20:06:50.091852 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:50.092003138+00:00 stderr F I0813 20:06:50.091961 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:50.490470183+00:00 stderr F I0813 20:06:50.486047 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.625026801+00:00 stderr F I0813 20:06:50.621506 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.721054824+00:00 stderr F 
I0813 20:06:50.690508 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.783372881+00:00 stderr F I0813 20:06:50.783231 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:51.189895356+00:00 stderr F I0813 20:06:51.187513 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:51.750118559+00:00 stderr F I0813 20:06:51.743548 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:53.682530292+00:00 stderr F I0813 20:06:53.680117 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:54.823520696+00:00 stderr F I0813 20:06:54.821366 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:54.949069235+00:00 stderr F I0813 20:06:54.948306 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:55.217855282+00:00 stderr F E0813 20:06:55.214692 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:55.223659848+00:00 stderr F I0813 20:06:55.223140 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: 
bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:55.380837275+00:00 stderr F I0813 20:06:55.380586 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:55.922704220+00:00 stderr F I0813 20:06:55.920082 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:56.053962303+00:00 stderr F I0813 20:06:56.049454 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 11 triggered by "configmap \"kube-apiserver-cert-syncer-kubeconfig-10\" not found" 2025-08-13T20:06:56.360839252+00:00 stderr F I0813 20:06:56.360704 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 11 created because configmap "kube-apiserver-cert-syncer-kubeconfig-10" not found 2025-08-13T20:06:56.440267529+00:00 stderr F I0813 20:06:56.440209 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:56.452133759+00:00 stderr F W0813 20:06:56.450383 1 staticpod.go:38] revision 11 is unexpectedly 
already the latest available revision. This is a possible race! 2025-08-13T20:06:56.452133759+00:00 stderr F E0813 20:06:56.450437 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 11 2025-08-13T20:06:57.529571870+00:00 stderr F I0813 20:06:57.528545 1 request.go:697] Waited for 1.102452078s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:06:59.153954122+00:00 stderr F I0813 20:06:59.153508 1 installer_controller.go:524] node crc with revision 9 is the oldest and needs new revision 11 2025-08-13T20:06:59.154122097+00:00 stderr F I0813 20:06:59.154101 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:59.154122097+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:59.154122097+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:06:59.154122097+00:00 stderr F TargetRevision: (int32) 11, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedTime: (*v1.Time)(0xc002214d08)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:59.154122097+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) 
\"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:59.154122097+00:00 stderr F } 2025-08-13T20:06:59.154122097+00:00 stderr F } 2025-08-13T20:06:59.241574604+00:00 stderr F I0813 20:06:59.235963 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 9 to 11 because node crc with revision 9 is the oldest 2025-08-13T20:06:59.256743019+00:00 stderr F I0813 20:06:59.256671 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:59.258615703+00:00 stderr F I0813 20:06:59.258583 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:59.267689573+00:00 stderr F I0813 20:06:59.267567 1 reflector.go:351] Caches populated for *v1.ConfigMap 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:59.305698563+00:00 stderr F I0813 20:06:59.305637 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11" 2025-08-13T20:06:59.360396791+00:00 stderr F I0813 20:06:59.360342 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:59.364085437+00:00 stderr F I0813 20:06:59.363854 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are 
synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:59.391010869+00:00 stderr F I0813 20:06:59.390540 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:07:00.332573145+00:00 stderr F I0813 20:07:00.332426 1 request.go:697] Waited for 1.018757499s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:01.331607958+00:00 stderr F I0813 20:07:01.328004 1 request.go:697] Waited for 1.351597851s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:07:03.219312669+00:00 stderr F I0813 20:07:03.215042 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' 
Created Pod/installer-11-crc -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:04.482399593+00:00 stderr F I0813 20:07:04.480938 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:05.929009659+00:00 stderr F I0813 20:07:05.928059 1 request.go:697] Waited for 1.107414811s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client 2025-08-13T20:07:06.134280204+00:00 stderr F I0813 20:07:06.133848 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:06.931311815+00:00 stderr F I0813 20:07:06.928683 1 request.go:697] Waited for 1.128893936s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config 2025-08-13T20:07:08.540323188+00:00 stderr F I0813 20:07:08.539269 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:09.934975573+00:00 stderr F I0813 20:07:09.934565 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:10.134911265+00:00 stderr F I0813 20:07:10.134731 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:15.687022109+00:00 stderr F I0813 20:07:15.682408 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 
'StartingNewRevision' new revision 12 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:15.727955753+00:00 stderr F I0813 20:07:15.726710 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:15.787348406+00:00 stderr F I0813 20:07:15.785969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:15.841157668+00:00 stderr F I0813 20:07:15.840703 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:16.958534985+00:00 stderr F I0813 20:07:16.958185 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:17.311540395+00:00 stderr F I0813 20:07:17.304820 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:18.708619280+00:00 stderr F I0813 20:07:18.691467 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:19.950694422+00:00 stderr F I0813 20:07:19.917858 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:20.068989554+00:00 stderr F I0813 20:07:20.066161 1 request.go:697] Waited for 1.141655182s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:20.562498663+00:00 stderr F I0813 20:07:20.535507 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:21.514635910+00:00 
stderr F I0813 20:07:21.512470 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:22.503481592+00:00 stderr F I0813 20:07:22.494349 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:23.613992292+00:00 stderr F I0813 20:07:23.612845 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:25.144175883+00:00 stderr F I0813 20:07:25.142508 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:25.706344610+00:00 stderr F I0813 20:07:25.704672 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:26.723927306+00:00 stderr F I0813 20:07:26.687676 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:27.483437834+00:00 stderr F I0813 20:07:27.482905 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 12 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:27.541270120+00:00 stderr F I0813 20:07:27.533928 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:27.541270120+00:00 stderr F I0813 20:07:27.538561 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 12 created because required secret/localhost-recovery-client-token has changed 2025-08-13T20:07:28.676481077+00:00 stderr F I0813 20:07:28.675817 1 request.go:697] Waited for 1.140203711s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:29.865947060+00:00 stderr F I0813 20:07:29.865352 1 request.go:697] Waited for 
1.175779001s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc 2025-08-13T20:07:31.498696882+00:00 stderr F I0813 20:07:31.497544 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:31.498696882+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:31.498696882+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:07:31.498696882+00:00 stderr F TargetRevision: (int32) 12, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001e57e90)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:31.498696882+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n 
(string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:07:31.498696882+00:00 stderr F } 2025-08-13T20:07:31.498696882+00:00 stderr F } 2025-08-13T20:07:31.498696882+00:00 stderr F because new revision pending 2025-08-13T20:07:32.044521161+00:00 stderr F I0813 20:07:32.044386 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 
12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.045112688+00:00 stderr F I0813 20:07:32.045042 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:32.362962951+00:00 stderr F I0813 20:07:32.328253 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.387055412+00:00 stderr F I0813 20:07:32.386819 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12" 2025-08-13T20:07:32.393638680+00:00 stderr F E0813 20:07:32.393100 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:07:32.410870024+00:00 stderr F I0813 20:07:32.408004 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are 
synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.476455865+00:00 stderr F E0813 20:07:32.476356 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:07:33.066909054+00:00 stderr F I0813 20:07:33.065163 1 request.go:697] Waited for 1.014589539s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:07:33.908378810+00:00 stderr F I0813 20:07:33.906597 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-12-crc -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:34.471827254+00:00 stderr F I0813 20:07:34.470752 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:35.082344628+00:00 stderr F I0813 20:07:35.082140 1 request.go:697] Waited for 1.17853902s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:36.264957995+00:00 stderr F I0813 20:07:36.264379 1 request.go:697] Waited for 1.169600653s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 
2025-08-13T20:07:37.266086328+00:00 stderr F I0813 20:07:37.265156 1 request.go:697] Waited for 1.195307181s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller 2025-08-13T20:07:37.705277070+00:00 stderr F I0813 20:07:37.705217 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:40.448543522+00:00 stderr F I0813 20:07:40.447141 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:41.068953629+00:00 stderr F I0813 20:07:41.068385 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:42.872562730+00:00 stderr F I0813 20:07:42.872111 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:23.948041820+00:00 stderr F I0813 20:08:23.946664 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:23.987622815+00:00 stderr F I0813 20:08:23.985531 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:04:15 +0000 UTC) at 2025-08-13 20:08:23 +0000 UTC 2025-08-13T20:08:26.078102491+00:00 stderr F I0813 20:08:26.072992 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:04:15 +0000 UTC) at 2025-08-13 20:08:25 +0000 UTC 2025-08-13T20:08:29.272876957+00:00 stderr F E0813 20:08:29.272045 1 base_controller.go:268] 
auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.287636911+00:00 stderr F E0813 20:08:29.287217 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.301855268+00:00 stderr F E0813 20:08:29.301817 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.329675506+00:00 stderr F E0813 20:08:29.329619 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.377394244+00:00 stderr F E0813 20:08:29.377301 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.464043468+00:00 stderr F E0813 20:08:29.463572 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.628308168+00:00 stderr F E0813 20:08:29.627972 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.953248174+00:00 stderr F E0813 20:08:29.953100 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.602610212+00:00 stderr F E0813 20:08:30.602162 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.890209348+00:00 stderr F E0813 20:08:31.889997 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.459982315+00:00 stderr F E0813 20:08:34.459464 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.274705807+00:00 stderr F E0813 20:08:39.274066 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.586015852+00:00 stderr F E0813 20:08:39.584968 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.277814194+00:00 stderr F E0813 20:08:49.276218 1 base_controller.go:268] auditPolicyController 
reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:59.279422830+00:00 stderr F E0813 20:08:59.278561 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.072093276+00:00 stderr F E0813 20:09:00.072031 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:30.676249413+00:00 stderr F I0813 20:09:30.672351 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.652853485+00:00 stderr F I0813 20:09:32.652667 1 reflector.go:351] Caches populated for *v1.Authentication from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.997060833+00:00 stderr F I0813 20:09:32.996997 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:33.004507667+00:00 stderr F I0813 20:09:33.004432 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:34.052182014+00:00 stderr F I0813 20:09:34.052080 1 request.go:697] Waited for 1.047880553s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:09:34.859533992+00:00 stderr F I0813 20:09:34.859391 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.055166871+00:00 stderr F I0813 
20:09:35.055104 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.274030664+00:00 stderr F I0813 20:09:35.273887 1 request.go:697] Waited for 1.346065751s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:35.654672239+00:00 stderr F I0813 20:09:35.654588 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.847469136+00:00 stderr F I0813 20:09:35.846666 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.056865441+00:00 stderr F I0813 20:09:38.056747 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.855268932+00:00 stderr F I0813 20:09:38.855138 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.942880734+00:00 stderr F I0813 20:09:38.942760 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.454254595+00:00 stderr F I0813 20:09:39.454166 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.870002795+00:00 stderr F I0813 20:09:39.869847 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:40.312326496+00:00 stderr F I0813 20:09:40.310888 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:40.316394453+00:00 stderr F I0813 20:09:40.313333 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 
2025-08-13T20:09:40.498634928+00:00 stderr F I0813 20:09:40.498514 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:40.503509538+00:00 stderr F I0813 20:09:40.503433 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:40Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:40.522071150+00:00 stderr F I0813 20:09:40.521872 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 12"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12" 2025-08-13T20:09:40.948998720+00:00 stderr F I0813 20:09:40.948523 1 reflector.go:351] Caches populated for *v1.Image from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.268646514+00:00 stderr F I0813 20:09:41.265043 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.289537333+00:00 stderr F E0813 20:09:41.288663 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.290885212+00:00 stderr F I0813 20:09:41.290625 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at 
revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.307019204+00:00 stderr F E0813 20:09:41.306859 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.308151097+00:00 stderr F I0813 20:09:41.307896 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.313935433+00:00 stderr F E0813 20:09:41.313848 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": 
the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.317861915+00:00 stderr F I0813 20:09:41.317767 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.323179508+00:00 stderr F E0813 20:09:41.323077 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.364632316+00:00 stderr F I0813 20:09:41.364497 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 
12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.371219595+00:00 stderr F E0813 20:09:41.371120 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.454053880+00:00 stderr F I0813 20:09:41.453389 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.462409540+00:00 stderr F E0813 
20:09:41.462300 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.625209247+00:00 stderr F I0813 20:09:41.624176 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.625209247+00:00 stderr F I0813 20:09:41.624462 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.637270673+00:00 stderr F E0813 20:09:41.637136 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.651362987+00:00 stderr F I0813 20:09:41.651285 1 request.go:697] Waited for 1.142561947s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver 2025-08-13T20:09:41.964522376+00:00 stderr F I0813 20:09:41.962065 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.970516497+00:00 stderr F E0813 20:09:41.970466 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:42.613871283+00:00 stderr F I0813 20:09:42.613744 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:42Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are 
active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:42.624072166+00:00 stderr F E0813 20:09:42.623097 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:42.674291996+00:00 stderr F I0813 20:09:42.674173 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.255737967+00:00 stderr F I0813 20:09:43.255395 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.507659819+00:00 stderr F I0813 20:09:43.507470 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.906040130+00:00 stderr F I0813 20:09:43.905940 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:43Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:43.915460560+00:00 stderr F E0813 20:09:43.915310 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:45.063148636+00:00 stderr F I0813 20:09:45.062119 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.454994181+00:00 stderr F I0813 20:09:45.454880 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.556559583+00:00 stderr F I0813 20:09:45.556125 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.029896635+00:00 stderr F I0813 20:09:47.029137 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.431534712+00:00 stderr F I0813 20:09:49.431396 1 reflector.go:351] Caches populated for *v1.OAuth from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.603868443+00:00 stderr F I0813 20:09:49.603016 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.097949529+00:00 stderr F I0813 20:09:51.095966 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.156008534+00:00 stderr F I0813 20:09:51.155372 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.346948488+00:00 stderr F I0813 20:09:51.343971 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.899874211+00:00 stderr F I0813 20:09:51.897396 1 reflector.go:351] Caches populated for *v1.KubeAPIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.322158879+00:00 stderr F I0813 20:09:52.322056 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.717963967+00:00 stderr F I0813 20:09:52.717106 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.001761305+00:00 stderr F I0813 20:09:55.001372 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.604439374+00:00 stderr F I0813 20:09:55.604041 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.691689536+00:00 stderr F I0813 20:09:55.691355 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:56.397833182+00:00 stderr F I0813 20:09:56.397367 1 termination_observer.go:130] Observed termination of API server pod "kube-apiserver-crc" at 2025-08-13 20:08:47 +0000 UTC 2025-08-13T20:09:56.793974910+00:00 stderr F I0813 20:09:56.792242 1 request.go:697] Waited for 1.18414682s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:57.991351730+00:00 
stderr F I0813 20:09:57.991270 1 request.go:697] Waited for 1.191825831s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:58.200016761+00:00 stderr F I0813 20:09:58.199677 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:59.278447041+00:00 stderr F I0813 20:09:59.278325 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:00.344520857+00:00 stderr F I0813 20:10:00.344375 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:00.597497930+00:00 stderr F I0813 20:10:00.594133 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:06.140558853+00:00 stderr F I0813 20:10:06.136477 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:08.678223861+00:00 stderr F I0813 20:10:08.677607 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:11.615709241+00:00 stderr F I0813 20:10:11.614708 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:12.447865849+00:00 stderr F I0813 20:10:12.447401 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.023764372+00:00 stderr F I0813 20:10:15.023635 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.189231935+00:00 stderr F I0813 20:10:16.187663 1 reflector.go:351] Caches populated for 
*v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.832884551+00:00 stderr F I0813 20:10:18.831311 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:19.913476462+00:00 stderr F I0813 20:10:19.913155 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:20.042624055+00:00 stderr F I0813 20:10:20.042485 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:22.281866056+00:00 stderr F I0813 20:10:22.280848 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:26.156467354+00:00 stderr F I0813 20:10:26.156096 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:28.443609229+00:00 stderr F I0813 20:10:28.442724 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:33.622891484+00:00 stderr F I0813 20:10:33.622596 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:10:33.626321162+00:00 stderr F I0813 20:10:33.622305 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:17:07.038035726+00:00 stderr F I0813 20:17:07.036853 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:19:40.315920166+00:00 stderr F I0813 20:19:40.315069 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: 
api.crc.testing 2025-08-13T20:19:56.823831628+00:00 stderr F I0813 20:19:56.822700 1 request.go:697] Waited for 1.189846745s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:19:57.816367055+00:00 stderr F I0813 20:19:57.815909 1 request.go:697] Waited for 1.194141978s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:20:33.628645173+00:00 stderr F I0813 20:20:33.627675 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:25:08.288098181+00:00 stderr F I0813 20:25:08.287198 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:29:40.318423466+00:00 stderr F I0813 20:29:40.316980 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:29:56.621923248+00:00 stderr F I0813 20:29:56.621255 1 request.go:697] Waited for 1.002703993s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:29:57.820948224+00:00 stderr F I0813 20:29:57.820346 1 request.go:697] Waited for 1.188787292s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:30:33.631933560+00:00 stderr F I0813 20:30:33.629183 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default 
openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:39:40.323212651+00:00 stderr F I0813 20:39:40.322450 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:39:56.818558953+00:00 stderr F I0813 20:39:56.817863 1 request.go:697] Waited for 1.198272606s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:39:58.019908028+00:00 stderr F I0813 20:39:58.017924 1 request.go:697] Waited for 1.193313884s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:40:30.989897036+00:00 stderr F I0813 20:40:30.988880 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:40:33.630681710+00:00 stderr F I0813 20:40:33.630274 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.312511 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.320010 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.322303 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.322653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.312996 1 streamwatcher.go:111] Unexpected EOF during watch stream 
event decoding: unexpected EOF 2025-08-13T20:42:36.383656031+00:00 stderr F I0813 20:42:36.383617 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384451584+00:00 stderr F I0813 20:42:36.384429 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384734842+00:00 stderr F I0813 20:42:36.384713 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421150922+00:00 stderr F I0813 20:42:36.404204 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424615362+00:00 stderr F I0813 20:42:36.424585 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.425559339+00:00 stderr F I0813 20:42:36.425536 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.448576532+00:00 stderr F I0813 20:42:36.448525 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.449479658+00:00 stderr F I0813 20:42:36.449420 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.451915479+00:00 stderr F I0813 20:42:36.313036 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454242126+00:00 stderr F I0813 20:42:36.313051 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454667788+00:00 stderr F I0813 20:42:36.313065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509514919+00:00 stderr F I0813 20:42:36.313078 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509514919+00:00 stderr F I0813 20:42:36.313093 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.519564509+00:00 stderr F I0813 20:42:36.313106 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.519564509+00:00 stderr F I0813 20:42:36.313119 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540144902+00:00 stderr F I0813 20:42:36.313131 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313156 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313168 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313214 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313270 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.573976478+00:00 stderr F I0813 20:42:36.313298 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.574460152+00:00 stderr F I0813 20:42:36.313311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.574851103+00:00 stderr F I0813 20:42:36.313322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575304116+00:00 stderr F I0813 20:42:36.313332 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575304116+00:00 stderr F I0813 20:42:36.313343 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313376 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313387 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313403 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313415 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576568523+00:00 stderr F I0813 20:42:36.313426 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576968474+00:00 stderr F I0813 20:42:36.313443 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.582537195+00:00 stderr F I0813 20:42:36.313454 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583345418+00:00 stderr F I0813 20:42:36.313467 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583345418+00:00 stderr F I0813 20:42:36.313483 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583700218+00:00 stderr F I0813 20:42:36.313494 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.589248298+00:00 stderr F I0813 20:42:36.313522 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313539 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313615 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.319488 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.451063 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.469514 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:39.450401166+00:00 stderr F E0813 20:42:39.449691 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.458503730+00:00 stderr F E0813 20:42:39.458413 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.472485983+00:00 stderr F E0813 20:42:39.472408 1 
base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.496672820+00:00 stderr F E0813 20:42:39.496580 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.541319437+00:00 stderr F E0813 20:42:39.541106 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.626199914+00:00 stderr F E0813 20:42:39.626025 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.789937274+00:00 stderr F E0813 20:42:39.789271 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.112324339+00:00 stderr F E0813 20:42:40.112277 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.694097262+00:00 stderr F I0813 20:42:40.693760 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:40.694507514+00:00 stderr F I0813 20:42:40.694398 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:42:40.694530464+00:00 stderr F I0813 20:42:40.694496 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694530464+00:00 stderr F I0813 20:42:40.694507 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694528 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694534 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694542 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694577336+00:00 stderr F I0813 20:42:40.694558 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694577336+00:00 stderr F I0813 20:42:40.694564 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694596276+00:00 stderr F I0813 20:42:40.694587 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694596276+00:00 stderr F I0813 20:42:40.694592 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694607 1 base_controller.go:172] Shutting down BoundSATokenSignerController ... 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694614 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694648 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694654 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694691069+00:00 stderr F I0813 20:42:40.694671 1 base_controller.go:172] Shutting down KubeAPIServerStaticResources ... 2025-08-13T20:42:40.694705539+00:00 stderr F I0813 20:42:40.694691 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:42:40.694719900+00:00 stderr F I0813 20:42:40.694704 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:42:40.694734100+00:00 stderr F I0813 20:42:40.694717 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:42:40.694748590+00:00 stderr F I0813 20:42:40.694731 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694910 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694951 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694966 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694979 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694994 1 base_controller.go:172] Shutting down StartupMonitorPodCondition ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695009 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695021 1 base_controller.go:172] Shutting down StaticPodStateFallback ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695036 1 base_controller.go:172] Shutting down EncryptionStateController ... 
2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695049 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:42:40.695271006+00:00 stderr F I0813 20:42:40.695200 1 base_controller.go:172] Shutting down StatusSyncer_kube-apiserver ... 2025-08-13T20:42:40.695320867+00:00 stderr F I0813 20:42:40.695290 1 base_controller.go:172] Shutting down PodSecurityReadinessController ... 2025-08-13T20:42:40.695376439+00:00 stderr F I0813 20:42:40.695358 1 certrotationcontroller.go:899] Shutting down CertRotation 2025-08-13T20:42:40.695475131+00:00 stderr F I0813 20:42:40.695445 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.695549334+00:00 stderr F I0813 20:42:40.695531 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.694746 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696122 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.695061 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696139 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696150 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696153 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696170 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696182       1 controller_manager.go:54] NodeController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696187       1 base_controller.go:172] Shutting down webhookSupportabilityController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696192       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696200       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696211       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696214       1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696259       1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696259       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696271       1 base_controller.go:114] Shutting down worker of BoundSATokenSignerController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696281       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696292       1 base_controller.go:114] Shutting down worker of KubeAPIServerStaticResources controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696296       1 base_controller.go:114] Shutting down worker of BackingResourceController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696304       1 base_controller.go:114] Shutting down worker of auditPolicyController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696307       1 base_controller.go:114] Shutting down worker of GuardController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696313       1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696315       1 base_controller.go:104] All GuardController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696322       1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696325       1 controller_manager.go:54] GuardController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696330       1 base_controller.go:114] Shutting down worker of RevisionController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696334       1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696337       1 base_controller.go:104] All RevisionController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696343       1 base_controller.go:114] Shutting down worker of InstallerStateController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696344       1 controller_manager.go:54] RevisionController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696352       1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696362       1 base_controller.go:114] Shutting down worker of PruneController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696366       1 base_controller.go:114] Shutting down worker of StartupMonitorPodCondition controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696368       1 base_controller.go:104] All PruneController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696374       1 base_controller.go:104] All StartupMonitorPodCondition workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696375       1 controller_manager.go:54] PruneController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696379       1 controller_manager.go:54] StartupMonitorPodCondition controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696387       1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696396       1 base_controller.go:114] Shutting down worker of webhookSupportabilityController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696404       1 base_controller.go:104] All webhookSupportabilityController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696405       1 base_controller.go:114] Shutting down worker of InstallerController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696415       1 base_controller.go:104] All InstallerController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696422       1 controller_manager.go:54] InstallerController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696425       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696431       1 base_controller.go:114] Shutting down worker of StaticPodStateFallback controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696435       1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696439       1 base_controller.go:104] All StaticPodStateFallback workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696440       1 controller_manager.go:54] LoggingSyncer controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696445       1 controller_manager.go:54] StaticPodStateFallback controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696472       1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696389       1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696486       1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696487       1 base_controller.go:104] All ConnectivityCheckController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696451       1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696503       1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696509       1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696464       1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696529       1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696568       1 base_controller.go:172] Shutting down EventWatchController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696583       1 base_controller.go:172] Shutting down ResourceSyncController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696597       1 base_controller.go:172] Shutting down NodeKubeconfigController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696612       1 base_controller.go:172] Shutting down TargetConfigController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696626       1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696631       1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696644       1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696649       1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696671       1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696675       1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696688       1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696692       1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696702       1 base_controller.go:114] Shutting down worker of EventWatchController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696708       1 base_controller.go:104] All EventWatchController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696720       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696728       1 base_controller.go:114] Shutting down worker of NodeKubeconfigController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696737       1 base_controller.go:114] Shutting down worker of TargetConfigController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr P I0813 
2025-08-13T20:42:40.697432808+00:00 stderr F 20:42:40.696750       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696758       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696758       1 base_controller.go:104] All auditPolicyController workers have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696767       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696837       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696857       1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696879       1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696883       1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696898       1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696905       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696917       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696924       1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697087       1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697126       1 base_controller.go:172] Shutting down SCCReconcileController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697139       1 base_controller.go:172] Shutting down CertRotationTimeUpgradeableController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697151       1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697164       1 base_controller.go:172] Shutting down ServiceAccountIssuerController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697176       1 base_controller.go:172] Shutting down KubeletVersionSkewController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697318       1 base_controller.go:114] Shutting down worker of SCCReconcileController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697327       1 base_controller.go:104] All SCCReconcileController workers have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697336       1 base_controller.go:114] Shutting down worker of CertRotationTimeUpgradeableController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697344       1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697358       1 base_controller.go:114] Shutting down worker of ServiceAccountIssuerController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697366       1 base_controller.go:114] Shutting down worker of KubeletVersionSkewController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697371       1 base_controller.go:104] All KubeletVersionSkewController workers have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697383       1 termination_observer.go:155] Shutting down TerminationObserver
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697412       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:42:40.697462309+00:00 stderr F I0813 20:42:40.697445       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:42:40.697680035+00:00 stderr F I0813 20:42:40.697577       1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:42:40.697680035+00:00 stderr F I0813 20:42:40.697663       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2025-08-13T20:42:40.697839070+00:00 stderr F I0813 20:42:40.697729       1 secure_serving.go:258] Stopped listening on [::]:8443
2025-08-13T20:42:40.697839070+00:00 stderr F I0813 20:42:40.697827       1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:42:40.697864340+00:00 stderr F I0813 20:42:40.697850       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:42:40.698041385+00:00 stderr F I0813 20:42:40.697971       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:42:40.698041385+00:00 stderr F I0813 20:42:40.698012       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:42:40.698062476+00:00 stderr F I0813 20:42:40.698041       1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
2025-08-13T20:42:40.698077136+00:00 stderr F I0813 20:42:40.698062       1 builder.go:330] server exited
2025-08-13T20:42:40.698211990+00:00 stderr F I0813 20:42:40.698156       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:42:40.698211990+00:00 stderr F I0813 20:42:40.698186       1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:42:40.698268332+00:00 stderr F I0813 20:42:40.698211       1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:42:40.698284232+00:00 stderr F I0813 20:42:40.698262       1 base_controller.go:104] All TargetConfigController workers have been terminated
2025-08-13T20:42:40.698284232+00:00 stderr F I0813 20:42:40.698274       1 base_controller.go:104] All NodeKubeconfigController workers have been terminated
2025-08-13T20:42:40.698298763+00:00 stderr F I0813 20:42:40.698287       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698313053+00:00 stderr F I0813 20:42:40.698298       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698313053+00:00 stderr F I0813 20:42:40.698307       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698331914+00:00 stderr F I0813 20:42:40.698321       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698346064+00:00 stderr F I0813 20:42:40.698330       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698357       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698397       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698409       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698438587+00:00 stderr F I0813 20:42:40.698416       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698438587+00:00 stderr F I0813 20:42:40.698426       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698453327+00:00 stderr F I0813 20:42:40.698436       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698453327+00:00 stderr F I0813 20:42:40.698445       1 base_controller.go:104] All BoundSATokenSignerController workers have been terminated
2025-08-13T20:42:40.698475168+00:00 stderr F I0813 20:42:40.698457       1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698475168+00:00 stderr F I0813 20:42:40.698467       1 base_controller.go:104] All KubeAPIServerStaticResources workers have been terminated
2025-08-13T20:42:40.698489928+00:00 stderr F I0813 20:42:40.698480       1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated
2025-08-13T20:42:40.698504369+00:00 stderr F I0813 20:42:40.698493       1 base_controller.go:104] All MissingStaticPodController workers have been terminated
2025-08-13T20:42:40.698518789+00:00 stderr F I0813 20:42:40.698501       1 controller_manager.go:54] MissingStaticPodController controller terminated
2025-08-13T20:42:40.698533250+00:00 stderr F I0813 20:42:40.698515       1 base_controller.go:104] All EncryptionConditionController workers have been terminated
2025-08-13T20:42:40.698533250+00:00 stderr F I0813 20:42:40.698528       1 base_controller.go:150] All StatusSyncer_kube-apiserver post start hooks have been terminated
2025-08-13T20:42:40.698548030+00:00 stderr F I0813 20:42:40.698537       1 base_controller.go:104] All StaticPodStateController workers have been terminated
2025-08-13T20:42:40.698548030+00:00 stderr F I0813 20:42:40.698542       1 controller_manager.go:54] StaticPodStateController controller terminated
2025-08-13T20:42:40.698562680+00:00 stderr F I0813 20:42:40.698550       1 base_controller.go:104] All EncryptionPruneController workers have been terminated
2025-08-13T20:42:40.698577201+00:00 stderr F I0813 20:42:40.698560       1 base_controller.go:104] All InstallerStateController workers have been terminated
2025-08-13T20:42:40.698577201+00:00 stderr F I0813 20:42:40.698565       1 controller_manager.go:54] InstallerStateController controller terminated
2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698574       1 base_controller.go:104] All EncryptionMigrationController workers have been terminated
2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698583       1 base_controller.go:104] All EncryptionStateController workers have been terminated
2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698585       1 base_controller.go:114] Shutting down worker of PodSecurityReadinessController controller ...
2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698593       1 base_controller.go:104] All EncryptionKeyController workers have been terminated
2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698598       1 base_controller.go:104] All PodSecurityReadinessController workers have been terminated
2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698602       1 base_controller.go:104] All BackingResourceController workers have been terminated
2025-08-13T20:42:40.698623482+00:00 stderr F I0813 20:42:40.698608       1 controller_manager.go:54] BackingResourceController controller terminated
2025-08-13T20:42:40.698623482+00:00 stderr F I0813 20:42:40.698618       1 base_controller.go:104] All ServiceAccountIssuerController workers have been terminated
2025-08-13T20:42:40.698637963+00:00 stderr F I0813 20:42:40.698626       1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated
2025-08-13T20:42:40.698652123+00:00 stderr F I0813 20:42:40.698636       1 base_controller.go:104] All CertRotationTimeUpgradeableController workers have been terminated
2025-08-13T20:42:40.698843558+00:00 stderr F I0813 20:42:40.698711       1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-apiserver controller ...
2025-08-13T20:42:40.698843558+00:00 stderr F I0813 20:42:40.698739       1 base_controller.go:104] All StatusSyncer_kube-apiserver workers have been terminated
2025-08-13T20:42:40.700186097+00:00 stderr F E0813 20:42:40.700133       1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.700392933+00:00 stderr F W0813 20:42:40.700361       1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.log
2025-08-13T19:59:10.406231987+00:00 stderr F I0813 19:59:10.403110       1 cmd.go:241] Using service-serving-cert provided certificates
2025-08-13T19:59:10.406231987+00:00 stderr F I0813 19:59:10.404001       1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:10.410000464+00:00 stderr F I0813 19:59:10.409906       1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:11.410106593+00:00 stderr F I0813 19:59:11.409319       1 builder.go:299] kube-apiserver-operator version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2
2025-08-13T19:59:20.080237695+00:00 stderr F I0813 19:59:20.064219       1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:20.080237695+00:00 stderr F W0813 19:59:20.064941       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:20.080237695+00:00 stderr F W0813 19:59:20.064957       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:20.968396892+00:00 stderr F I0813 19:59:20.929396       1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:20.968960328+00:00 stderr F I0813 19:59:20.968911       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:20.969213366+00:00 stderr F I0813 19:59:20.969182       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:20.969589386+00:00 stderr F I0813 19:59:20.969553       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018540       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018626       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018655       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.020506       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.020534       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:21.067076155+00:00 stderr F I0813 19:59:21.064635       1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:21.198603954+00:00 stderr F I0813 19:59:21.198514       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:21.201436764+00:00 stderr F I0813 19:59:21.201395       1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock...
2025-08-13T19:59:21.319617853+00:00 stderr F I0813 19:59:21.219440       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:21.319942802+00:00 stderr F I0813 19:59:21.319909       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.324274       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.324414       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.337285       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.354668482+00:00 stderr F E0813 19:59:21.354211       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.354668482+00:00 stderr F E0813 19:59:21.354276       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.378231954+00:00 stderr F E0813 19:59:21.368212       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.428961370+00:00 stderr F E0813 19:59:21.427143       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.428961370+00:00 stderr F E0813 19:59:21.427270       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.582008663+00:00 stderr F E0813 19:59:21.564368       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.582008663+00:00 stderr F E0813 19:59:21.564741       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.662010263+00:00 stderr F E0813 19:59:21.659349       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.662010263+00:00 stderr F E0813 19:59:21.659493       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.821740476+00:00 stderr F E0813 19:59:21.821422       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.821740476+00:00 stderr F E0813 19:59:21.821479       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.902165139+00:00 stderr F I0813 19:59:21.896240       1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock
2025-08-13T19:59:21.902165139+00:00 stderr F I0813 19:59:21.900088       1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28027", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_c7f30bb9-5fae-472f-b491-3b6d1380fb20 became leader
2025-08-13T19:59:21.945262068+00:00 stderr F I0813 19:59:21.921222       1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:22.226725461+00:00 stderr F E0813 19:59:22.141585       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:22.226725461+00:00 stderr F E0813 19:59:22.214588       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:22.285924848+00:00 stderr F I0813 19:59:22.260011       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:22.300489964+00:00 stderr F I0813 19:59:22.300383       1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:22.796644677+00:00 stderr F E0813 19:59:22.788656       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:22.893581660+00:00 stderr F E0813 19:59:22.858965       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.070172439+00:00 stderr F E0813 19:59:24.069365       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.143020776+00:00 stderr F E0813 19:59:24.141008       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.649134033+00:00 stderr F E0813 19:59:26.643536       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.733443906+00:00 stderr F E0813 19:59:26.724689       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:27.725856005+00:00 stderr F I0813 19:59:27.724591 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController 2025-08-13T19:59:29.825671460+00:00 stderr F I0813 19:59:29.810612 1 request.go:697] Waited for 1.996140239s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.749243 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750474 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750492 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750507 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750698 1 certrotationcontroller.go:886] Starting CertRotation 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750725 1 certrotationcontroller.go:851] Waiting for CertRotation 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750753 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750770 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769353 1 termination_observer.go:145] Starting TerminationObserver 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769414 1 base_controller.go:67] Waiting for caches to sync for EventWatchController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769432 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController 
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769448 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769463 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769483 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769501 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769518 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769529 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769545 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769559 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769563 1 base_controller.go:73] Caches are synced for PodSecurityReadinessController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769577 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ... 
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770420 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770545 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770569 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770588 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770601 1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770624 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback 2025-08-13T19:59:32.784351197+00:00 stderr F I0813 19:59:32.784319 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T19:59:32.784537142+00:00 stderr F I0813 19:59:32.784520 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808536 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808607 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808633 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809391 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809431 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809451 1 base_controller.go:67] Waiting for 
caches to sync for EncryptionPruneController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809466 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T19:59:33.057951796+00:00 stderr F E0813 19:59:32.938639 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:33.057951796+00:00 stderr F E0813 19:59:32.938876 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:33.133759637+00:00 stderr F I0813 19:59:33.097995 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver 2025-08-13T19:59:33.133759637+00:00 stderr F I0813 19:59:33.098146 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:33.627575204+00:00 stderr F I0813 19:59:33.566350 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620676 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620732 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620975 1 base_controller.go:73] Caches are synced for StaticPodStateFallback 2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620997 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ... 
2025-08-13T19:59:35.761142381+00:00 stderr F I0813 19:59:35.759386 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources 2025-08-13T19:59:35.769575441+00:00 stderr F I0813 19:59:35.769540 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController 2025-08-13T19:59:35.769643613+00:00 stderr F I0813 19:59:35.769625 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ... 2025-08-13T19:59:35.769994773+00:00 stderr F E0813 19:59:35.769972 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.815189132+00:00 stderr F I0813 19:59:35.815031 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:35.822472109+00:00 stderr F E0813 19:59:35.818123 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.822472109+00:00 stderr F I0813 19:59:35.820422 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T19:59:35.822472109+00:00 stderr F E0813 19:59:35.820875 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.939666840+00:00 stderr F E0813 19:59:35.938506 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.295905945+00:00 stderr F E0813 19:59:36.295711 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.295951286+00:00 stderr F E0813 19:59:36.295921 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.308655118+00:00 stderr F E0813 19:59:36.308104 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.471091479+00:00 stderr F E0813 19:59:36.470635 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.471091479+00:00 stderr F E0813 19:59:36.470884 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.522045071+00:00 stderr F E0813 19:59:36.520763 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.569665028+00:00 stderr F E0813 19:59:36.569470 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:40.217258604+00:00 stderr F I0813 19:59:40.207376 1 trace.go:236] Trace[1993198320]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 
19:59:29.838) (total time: 10368ms): 2025-08-13T19:59:40.217258604+00:00 stderr F Trace[1993198320]: ---"Objects listed" error: 10368ms (19:59:40.207) 2025-08-13T19:59:40.217258604+00:00 stderr F Trace[1993198320]: [10.368623989s] [10.368623989s] END 2025-08-13T19:59:40.272640772+00:00 stderr F E0813 19:59:40.262013 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:40.621248000+00:00 stderr F E0813 19:59:40.583928 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:41.229194869+00:00 stderr F E0813 19:59:41.224921 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:42.024455049+00:00 stderr F I0813 19:59:42.020242 1 trace.go:236] Trace[1834057747]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.787) (total time: 10582ms): 2025-08-13T19:59:42.024455049+00:00 stderr F Trace[1834057747]: ---"Objects listed" error: 6515ms (19:59:36.303) 2025-08-13T19:59:42.024455049+00:00 stderr F Trace[1834057747]: [10.582929308s] [10.582929308s] END 2025-08-13T19:59:42.024455049+00:00 stderr F I0813 19:59:42.021271 1 trace.go:236] Trace[2078079104]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.796) (total time: 14224ms): 2025-08-13T19:59:42.024455049+00:00 stderr F Trace[2078079104]: ---"Objects listed" error: 14224ms (19:59:42.021) 2025-08-13T19:59:42.024455049+00:00 stderr F Trace[2078079104]: [14.224350017s] [14.224350017s] END 2025-08-13T19:59:42.024455049+00:00 stderr F E0813 19:59:42.021567 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:42.029403850+00:00 stderr F 
I0813 19:59:42.029324 1 trace.go:236] Trace[582360872]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.778) (total time: 12250ms): 2025-08-13T19:59:42.029403850+00:00 stderr F Trace[582360872]: ---"Objects listed" error: 12250ms (19:59:42.029) 2025-08-13T19:59:42.029403850+00:00 stderr F Trace[582360872]: [12.250847503s] [12.250847503s] END 2025-08-13T19:59:42.033875407+00:00 stderr F I0813 19:59:42.032164 1 trace.go:236] Trace[1034056651]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.827) (total time: 12205ms): 2025-08-13T19:59:42.033875407+00:00 stderr F Trace[1034056651]: ---"Objects listed" error: 12205ms (19:59:42.032) 2025-08-13T19:59:42.033875407+00:00 stderr F Trace[1034056651]: [12.205071418s] [12.205071418s] END 2025-08-13T19:59:42.033875407+00:00 stderr F I0813 19:59:42.032517 1 trace.go:236] Trace[518580895]: "DeltaFIFO Pop Process" ID:openshift-etcd/builder-dockercfg-sqwsk,Depth:16,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:40.368) (total time: 1663ms): 2025-08-13T19:59:42.033875407+00:00 stderr F Trace[518580895]: [1.663718075s] [1.663718075s] END 2025-08-13T19:59:42.062383520+00:00 stderr F I0813 19:59:42.062268 1 trace.go:236] Trace[1780588024]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.725) (total time: 14336ms): 2025-08-13T19:59:42.062383520+00:00 stderr F Trace[1780588024]: ---"Objects listed" error: 14309ms (19:59:42.034) 2025-08-13T19:59:42.062383520+00:00 stderr F Trace[1780588024]: [14.336749981s] [14.336749981s] END 2025-08-13T19:59:42.096995946+00:00 stderr F I0813 19:59:42.086121 1 trace.go:236] Trace[137601519]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.785) (total time: 12300ms): 2025-08-13T19:59:42.096995946+00:00 stderr F Trace[137601519]: ---"Objects listed" error: 
12300ms (19:59:42.086) 2025-08-13T19:59:42.096995946+00:00 stderr F Trace[137601519]: [12.300748394s] [12.300748394s] END 2025-08-13T19:59:42.130937464+00:00 stderr F I0813 19:59:42.129235 1 trace.go:236] Trace[134539601]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.725) (total time: 14403ms): 2025-08-13T19:59:42.130937464+00:00 stderr F Trace[134539601]: ---"Objects listed" error: 14403ms (19:59:42.128) 2025-08-13T19:59:42.130937464+00:00 stderr F Trace[134539601]: [14.403338048s] [14.403338048s] END 2025-08-13T19:59:42.133862377+00:00 stderr F I0813 19:59:42.131372 1 trace.go:236] Trace[1185044808]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 14363ms): 2025-08-13T19:59:42.133862377+00:00 stderr F Trace[1185044808]: ---"Objects listed" error: 14362ms (19:59:42.131) 2025-08-13T19:59:42.133862377+00:00 stderr F Trace[1185044808]: [14.363246906s] [14.363246906s] END 2025-08-13T19:59:42.133862377+00:00 stderr F E0813 19:59:42.133728 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:42.143759409+00:00 stderr F E0813 19:59:42.133993 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:42.154493295+00:00 stderr F I0813 19:59:42.150129 1 trace.go:236] Trace[1599639947]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.814) (total time: 12335ms): 2025-08-13T19:59:42.154493295+00:00 stderr F Trace[1599639947]: ---"Objects listed" error: 12335ms (19:59:42.150) 2025-08-13T19:59:42.154493295+00:00 stderr F Trace[1599639947]: [12.335962369s] [12.335962369s] END 2025-08-13T19:59:42.177003577+00:00 stderr F I0813 19:59:42.175075 1 trace.go:236] Trace[638251685]: "Reflector ListAndWatch" 
name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.839) (total time: 12335ms): 2025-08-13T19:59:42.177003577+00:00 stderr F Trace[638251685]: ---"Objects listed" error: 12335ms (19:59:42.174) 2025-08-13T19:59:42.177003577+00:00 stderr F Trace[638251685]: [12.33529162s] [12.33529162s] END 2025-08-13T19:59:42.750373270+00:00 stderr F I0813 19:59:42.183435 1 trace.go:236] Trace[27614155]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.785) (total time: 14398ms): 2025-08-13T19:59:42.750373270+00:00 stderr F Trace[27614155]: ---"Objects listed" error: 14397ms (19:59:42.183) 2025-08-13T19:59:42.750373270+00:00 stderr F Trace[27614155]: [14.398033837s] [14.398033837s] END 2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.104636 1 trace.go:236] Trace[2014468090]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver-operator/builder-dockercfg-2cs69,Depth:13,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.203) (total time: 899ms): 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[2014468090]: [899.704855ms] [899.704855ms] END 2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.104728 1 trace.go:236] Trace[2080221332]: "DeltaFIFO Pop Process" ID:openshift,Depth:58,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.205) (total time: 899ms): 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[2080221332]: [899.1632ms] [899.1632ms] END 2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.105632 1 trace.go:236] Trace[1806978334]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.778) (total time: 13326ms): 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1806978334]: ---"Objects listed" error: 12623ms (19:59:42.402) 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1806978334]: [13.326226606s] [13.326226606s] END 2025-08-13T19:59:43.106877212+00:00 stderr F I0813 
19:59:42.191421 1 trace.go:236] Trace[1088521478]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.797) (total time: 14394ms): 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1088521478]: ---"Objects listed" error: 14393ms (19:59:42.190) 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1088521478]: [14.394350463s] [14.394350463s] END 2025-08-13T19:59:43.107256163+00:00 stderr F I0813 19:59:43.107191 1 trace.go:236] Trace[924750758]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.781) (total time: 13320ms): 2025-08-13T19:59:43.107256163+00:00 stderr F Trace[924750758]: ---"Objects listed" error: 13320ms (19:59:43.106) 2025-08-13T19:59:43.107256163+00:00 stderr F Trace[924750758]: [13.320284976s] [13.320284976s] END 2025-08-13T19:59:43.135742825+00:00 stderr F I0813 19:59:43.135669 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver 2025-08-13T19:59:43.135915400+00:00 stderr F I0813 19:59:43.135899 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ... 2025-08-13T19:59:43.135977682+00:00 stderr F I0813 19:59:43.135964 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController 2025-08-13T19:59:43.136017843+00:00 stderr F I0813 19:59:43.135997 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ... 2025-08-13T19:59:43.136067364+00:00 stderr F I0813 19:59:43.136054 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:43.136098325+00:00 stderr F I0813 19:59:43.136086 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 
2025-08-13T19:59:43.136177638+00:00 stderr F I0813 19:59:43.136162 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:43.136218839+00:00 stderr F I0813 19:59:43.136205 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T19:59:43.136698632+00:00 stderr F I0813 19:59:43.136636 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:43.161048897+00:00 stderr F I0813 19:59:43.160759 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T19:59:43.171934617+00:00 stderr F I0813 19:59:43.171879 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.203209 1 trace.go:236] Trace[1013357871]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.783) (total time: 14419ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1013357871]: ---"Objects listed" error: 14419ms (19:59:42.202) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1013357871]: [14.419417737s] [14.419417737s] END 2025-08-13T19:59:44.502769353+00:00 stderr F E0813 19:59:42.203284 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.203738 1 trace.go:236] Trace[904353601]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.785) (total time: 14418ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[904353601]: ---"Objects listed" error: 14418ms (19:59:42.203) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[904353601]: [14.418199793s] [14.418199793s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.253665 1 trace.go:236] Trace[1900812755]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
(13-Aug-2025 19:59:27.818) (total time: 14435ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1900812755]: ---"Objects listed" error: 14435ms (19:59:42.253) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1900812755]: [14.435143226s] [14.435143226s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.313282 1 trace.go:236] Trace[800968062]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.814) (total time: 12498ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[800968062]: ---"Objects listed" error: 12498ms (19:59:42.313) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[800968062]: [12.498837281s] [12.498837281s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.402149 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.421581 1 trace.go:236] Trace[222516314]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/bootstrap-kube-apiserver-crc.17dc8ed8c9d1f157,Depth:239,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.191) (total time: 230ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[222516314]: [230.016326ms] [230.016326ms] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.699445 1 trace.go:236] Trace[1849060913]: "DeltaFIFO Pop Process" ID:kube-system/builder-dockercfg-kkqp2,Depth:33,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.161) (total time: 537ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1849060913]: [537.766968ms] [537.766968ms] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.704628 1 trace.go:236] Trace[1382711218]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.986) (total time: 12718ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1382711218]: ---"Objects listed" error: 12718ms (19:59:42.704) 
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1382711218]: [12.718383669s] [12.718383669s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.750164 1 certrotationcontroller.go:869] Finished waiting for CertRotation 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.064057 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.096032 1 trace.go:236] Trace[1559849221]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:28.000) (total time: 15095ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1559849221]: ---"Objects listed" error: 15094ms (19:59:43.094) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1559849221]: [15.095690773s] [15.095690773s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176166 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176183 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176196 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.188026 1 base_controller.go:73] Caches are synced for SCCReconcileController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.280637 1 trace.go:236] Trace[839686592]: "DeltaFIFO Pop Process" ID:control-plane-machine-set-operator,Depth:204,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.179) (total time: 100ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[839686592]: [100.798494ms] [100.798494ms] END 2025-08-13T19:59:44.502769353+00:00 stderr F E0813 19:59:43.465159 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:44.501106 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T19:59:44.505865951+00:00 stderr F I0813 19:59:44.503724 1 trace.go:236] Trace[1060106757]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.818) (total time: 14684ms): 2025-08-13T19:59:44.505865951+00:00 stderr F Trace[1060106757]: ---"Objects listed" error: 14684ms (19:59:44.503) 2025-08-13T19:59:44.505865951+00:00 stderr F Trace[1060106757]: [14.684920135s] [14.684920135s] END 2025-08-13T19:59:44.702386043+00:00 stderr F I0813 19:59:44.702232 1 trace.go:236] Trace[1629861906]: "DeltaFIFO Pop Process" ID:system:controller:disruption-controller,Depth:111,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.363) (total time: 1339ms): 2025-08-13T19:59:44.702386043+00:00 stderr F Trace[1629861906]: [1.339129512s] [1.339129512s] END 2025-08-13T19:59:44.837178345+00:00 stderr F I0813 19:59:44.837078 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:44.935728074+00:00 stderr F I0813 19:59:44.905141 1 trace.go:236] Trace[740019728]: "DeltaFIFO Pop Process" ID:system:openshift:openshift-controller-manager:image-trigger-controller,Depth:33,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.704) (total time: 200ms): 2025-08-13T19:59:44.935728074+00:00 stderr F Trace[740019728]: [200.473475ms] [200.473475ms] END 2025-08-13T19:59:44.959257775+00:00 stderr F I0813 19:59:44.959194 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ... 2025-08-13T19:59:45.190940199+00:00 stderr F I0813 19:59:45.141035 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ... 
2025-08-13T19:59:45.191140695+00:00 stderr F I0813 19:59:45.157727 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T19:59:45.191207477+00:00 stderr F I0813 19:59:45.157743 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:45.191256538+00:00 stderr F I0813 19:59:45.157751 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:45.404111546+00:00 stderr F I0813 19:59:45.157767 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ... 2025-08-13T19:59:45.404111546+00:00 stderr F I0813 19:59:45.171163 1 trace.go:236] Trace[1633723082]: "DeltaFIFO Pop Process" ID:system:openshift:operator:openshift-apiserver-operator,Depth:18,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.905) (total time: 183ms): 2025-08-13T19:59:45.404111546+00:00 stderr F Trace[1633723082]: [183.235093ms] [183.235093ms] END 2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094748 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094945 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094958 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T19:59:46.105011265+00:00 stderr F I0813 19:59:46.101336 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111438 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111508 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111519 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.111606923+00:00 stderr F I0813 19:59:46.111557 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.112875449+00:00 stderr F I0813 19:59:46.112743 1 trace.go:236] Trace[1514538963]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.826) (total time: 16285ms): 2025-08-13T19:59:46.112875449+00:00 stderr F Trace[1514538963]: ---"Objects listed" error: 16285ms (19:59:46.112) 2025-08-13T19:59:46.112875449+00:00 stderr F Trace[1514538963]: [16.285982883s] [16.285982883s] END 2025-08-13T19:59:46.141217627+00:00 stderr F I0813 19:59:46.141153 1 trace.go:236] Trace[2132981388]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/kube-apiserver-crc.17dc8f01381192ba,Depth:193,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.992) (total time: 1148ms): 2025-08-13T19:59:46.141217627+00:00 stderr F Trace[2132981388]: [1.148505488s] [1.148505488s] END 2025-08-13T19:59:46.144456709+00:00 stderr F I0813 19:59:46.144416 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.144650605+00:00 stderr F I0813 19:59:46.144624 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.144927433+00:00 stderr F I0813 19:59:46.144902 1 base_controller.go:67] Waiting for caches to sync for 
CertRotationController 2025-08-13T19:59:46.154048783+00:00 stderr F I0813 19:59:46.153965 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8 2025-08-13T19:59:46.154176986+00:00 stderr F I0813 19:59:46.154156 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.158647634+00:00 stderr F I0813 19:59:46.158612 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.158740346+00:00 stderr F I0813 19:59:46.158722 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.158907421+00:00 stderr F I0813 19:59:46.158762 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.158953042+00:00 stderr F I0813 19:59:46.158939 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T19:59:46.159055045+00:00 stderr F I0813 19:59:46.159037 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:46.159088616+00:00 stderr F I0813 19:59:46.159077 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:46.160268820+00:00 stderr F E0813 19:59:46.160204 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.160356262+00:00 stderr F I0813 19:59:46.160302 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.327105875+00:00 stderr F I0813 19:59:46.323731 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.640573681+00:00 stderr F I0813 19:59:46.334064 1 trace.go:236] Trace[383502669]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/kube-apiserver-crc.17dcded7fc33a4b6,Depth:142,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:46.176) (total time: 157ms): 2025-08-13T19:59:46.640573681+00:00 stderr F Trace[383502669]: [157.750217ms] [157.750217ms] END 2025-08-13T19:59:46.640638053+00:00 stderr F I0813 19:59:46.347238 1 trace.go:236] Trace[927508524]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 18578ms): 2025-08-13T19:59:46.640638053+00:00 stderr F Trace[927508524]: ---"Objects listed" error: 18578ms (19:59:46.347) 2025-08-13T19:59:46.640638053+00:00 stderr F Trace[927508524]: [18.578666736s] [18.578666736s] END 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.640610 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.347553 1 trace.go:236] Trace[669931977]: "Reflector ListAndWatch" 
name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 18578ms): 2025-08-13T19:59:46.652059088+00:00 stderr F Trace[669931977]: ---"Objects listed" error: 18407ms (19:59:46.175) 2025-08-13T19:59:46.652059088+00:00 stderr F Trace[669931977]: [18.578891232s] [18.578891232s] END 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647321 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.368954 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647454 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.369250 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647500 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.369267 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.649332 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T19:59:46.652506731+00:00 stderr F I0813 19:59:46.369287 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.652586393+00:00 stderr F I0813 19:59:46.652556 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.652668456+00:00 stderr F I0813 19:59:46.384797 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.652713147+00:00 stderr F I0813 19:59:46.652698 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T19:59:46.657545305+00:00 stderr F I0813 19:59:46.398921 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T19:59:46.657621557+00:00 stderr F I0813 19:59:46.657601 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T19:59:46.660720275+00:00 stderr F I0813 19:59:46.399045 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T19:59:46.661012374+00:00 stderr F I0813 19:59:46.660988 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T19:59:46.662276710+00:00 stderr F I0813 19:59:46.449596 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.662334651+00:00 stderr F I0813 19:59:46.662316 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.663541776+00:00 stderr F I0813 19:59:46.449633 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.663610078+00:00 stderr F I0813 19:59:46.663585 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.666594283+00:00 stderr F I0813 19:59:46.451435 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T19:59:46.666665075+00:00 stderr F I0813 19:59:46.666646 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T19:59:46.673105408+00:00 stderr F I0813 19:59:46.468242 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources 2025-08-13T19:59:46.673184980+00:00 stderr F I0813 19:59:46.673163 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ... 
2025-08-13T19:59:46.676508885+00:00 stderr F I0813 19:59:46.468300 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.676573737+00:00 stderr F I0813 19:59:46.676555 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.681108276+00:00 stderr F I0813 19:59:46.468319 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.681302312+00:00 stderr F I0813 19:59:46.681180 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.707044046+00:00 stderr F I0813 19:59:46.485287 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController 2025-08-13T19:59:46.707230501+00:00 stderr F I0813 19:59:46.707200 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ... 2025-08-13T19:59:46.750947557+00:00 stderr F I0813 19:59:46.485317 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T19:59:46.750947557+00:00 stderr F I0813 19:59:46.750875 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2025-08-13T19:59:46.751920665+00:00 stderr F I0813 19:59:46.502223 1 base_controller.go:73] Caches are synced for NodeKubeconfigController 2025-08-13T19:59:46.751920665+00:00 stderr F I0813 19:59:46.751097 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ... 2025-08-13T19:59:46.831887244+00:00 stderr F I0813 19:59:46.825572 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.831887244+00:00 stderr F I0813 19:59:46.825695 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T19:59:46.865169383+00:00 stderr F I0813 19:59:46.865101 1 base_controller.go:73] Caches are synced for EventWatchController 2025-08-13T19:59:46.865286456+00:00 stderr F I0813 19:59:46.865268 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ... 2025-08-13T19:59:47.704232902+00:00 stderr F I0813 19:59:47.692032 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:47.862264577+00:00 stderr F I0813 19:59:47.844259 1 trace.go:236] Trace[1477171505]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.786) (total time: 20057ms): 2025-08-13T19:59:47.862264577+00:00 stderr F Trace[1477171505]: ---"Objects listed" error: 20057ms (19:59:47.844) 2025-08-13T19:59:47.862264577+00:00 stderr F Trace[1477171505]: [20.057651336s] [20.057651336s] END 2025-08-13T19:59:47.862264577+00:00 stderr F I0813 19:59:47.844328 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:47.903459451+00:00 stderr F I0813 19:59:46.502273 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-08-13T19:59:47.903459451+00:00 stderr F I0813 19:59:47.887274 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 
2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:47.918354 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:46.502285 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:47.918431 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:48.329221767+00:00 stderr F I0813 19:59:46.502296 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T19:59:48.329364341+00:00 stderr F I0813 19:59:48.329343 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 
2025-08-13T19:59:48.450923837+00:00 stderr F I0813 19:59:48.433261 1 trace.go:236] Trace[1914587675]: "DeltaFIFO Pop Process" ID:openshift-config-managed/dashboard-cluster-total,Depth:35,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:47.844) (total time: 588ms): 2025-08-13T19:59:48.450923837+00:00 stderr F Trace[1914587675]: [588.63424ms] [588.63424ms] END 2025-08-13T19:59:48.658903396+00:00 stderr F I0813 19:59:48.658421 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:46.515255 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:48.760552 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:48.760587 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539556 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539612 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539633 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T19:59:49.680538308+00:00 stderr F I0813 19:59:49.680477 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:49.757899653+00:00 stderr F I0813 19:59:49.746243 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:49.757899653+00:00 stderr F I0813 19:59:49.746307 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2025-08-13T19:59:50.217136634+00:00 stderr P I0813 19:59:50.215305 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKAB 2025-08-13T19:59:50.217380451+00:00 stderr F 
UmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7
iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:50.251507593+00:00 stderr F I0813 19:59:50.250263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T19:59:50.251507593+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.251590646+00:00 stderr F I0813 19:59:50.251524 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All 
is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:50.251992417+00:00 stderr F I0813 19:59:50.251867 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:50.296506296+00:00 stderr F I0813 19:59:50.294579 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/node-kubeconfigs -n openshift-kube-apiserver because it changed 2025-08-13T19:59:50.656940961+00:00 stderr F I0813 19:59:50.655745 1 core.go:358] ConfigMap "openshift-kube-apiserver/aggregator-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIV+a/r/KBVSQwDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2FnZ3JlZ2F0b3It\nY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX2FnZ3JlZ2F0b3ItY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwz1oeDcXqAniG+VxAzEbZbeheswm\nibqk0LwWbA9YAD2aJCC2U0gbXouz0u1dzDnEuwzslM0OFq2kW+1RmEB1drVBkCMV\ny/gKGmRafqGt31/rDe81XneOBzrUC/rNVDZq7rx4wsZ8YzYkPhj1frvlCCWyOdyB\n+nWF+ZZQHLXeSuHuVGnfGqmckiQf/R8ITZp/vniyeOED0w8B9ZdfVHNYJksR/Vn2\ngslU8a/mluPzSCyD10aHnX5c75yTzW4TBQvytjkEpDR5LBoRmHiuL64999DtWonq\niX7TdcoQY1LuHyilaXIp0TazmkRb3ycHAY/RQ3xumj9I25D8eLCwWvI8GwIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\nWtUaz8JmZMUc/fPnQTR0L7R9wakwHwYDVR0jBBgwFoAUWtUaz8JmZMUc/fPnQTR0\nL7R9wakwDQYJKoZIhvcNAQELBQADggEBAECt0YM/4XvtPuVY5pY2aAXRuthsw2zQ\nnXVvR5jDsfpaNvMXQWaMjM1B+giNKhDDeLmcF7GlWmFfccqBPicgFhUgQANE3ALN\ngq2Wttd641M6i4B3UuRewNySj1sc12wfgAcwaRcecDTCsZo5yuF90z4mXpZs7MWh\nKCxYPpAtLqi17IF1tJVz/03L+6WD5kUProTELtY7/KBJYV/GONMG+KAMBjg1ikMK\njA0HQiCZiWDuW1ZdAwuvh6oRNWoQy6w9Wksard/AnfXUFBwNgULMp56+tOOPHxtm\nu3XYTN0dPJXsimSk4KfS0by8waS7ocoXa3LgQxb/6h0ympDbcWtgD0w=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:00Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}},"f:labels":{".":{},"f:auth.openshift.io/managed-certificate-type":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:33Z"}],"resourceVersion":null,"uid":"1d0d5c4a-d5a2-488a-94e2-bf622b67cadf"}} 2025-08-13T19:59:50.660061100+00:00 stderr F E0813 19:59:50.656059 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io 
"kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:50.745166186+00:00 stderr F E0813 19:59:50.745016 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8] 2025-08-13T19:59:50.913502905+00:00 stderr F I0813 19:59:50.913036 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/aggregator-client-ca -n openshift-kube-apiserver: 2025-08-13T19:59:50.913502905+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.932861687+00:00 stderr F I0813 19:59:50.930694 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: 
aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:51.056973045+00:00 stderr F I0813 19:59:51.056886 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:52.549088319+00:00 stderr F I0813 19:59:52.370677 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:52.370630792 +0000 UTC))" 2025-08-13T19:59:52.657562101+00:00 stderr F I0813 19:59:52.450127 1 
requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.494126 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.668765 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:52.668661858 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669440 1 tlsconfig.go:178] "Loaded client CA" 
index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.669342427 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669571 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.66945185 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669823 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.669603794 +0000 UTC))" 2025-08-13T19:59:52.671953091+00:00 stderr F I0813 19:59:52.670080 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.669968045 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 
19:59:52.883408 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.834619428 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 19:59:52.883495 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.8834619 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 19:59:52.883516 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.883502232 +0000 UTC))" 2025-08-13T19:59:52.884120229+00:00 stderr F I0813 19:59:52.883954 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 
19:59:52.883929784 +0000 UTC))" 2025-08-13T19:59:52.884397857+00:00 stderr F I0813 19:59:52.884342 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 19:59:52.884319995 +0000 UTC))" 2025-08-13T19:59:52.988627408+00:00 stderr F I0813 19:59:52.988069 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.006953971+00:00 stderr F I0813 19:59:53.004302 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:53.225649734+00:00 stderr F I0813 19:59:53.221749 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded 
message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:53.558720948+00:00 stderr F I0813 19:59:53.558238 1 trace.go:236] Trace[1369719048]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.838) (total time: 23717ms): 2025-08-13T19:59:53.558720948+00:00 stderr F Trace[1369719048]: ---"Objects listed" error: 23716ms (19:59:53.555) 2025-08-13T19:59:53.558720948+00:00 stderr F Trace[1369719048]: [23.717376611s] [23.717376611s] END 2025-08-13T19:59:53.558720948+00:00 stderr F I0813 19:59:53.558621 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.583756152+00:00 stderr F I0813 19:59:53.583564 1 base_controller.go:73] Caches are synced for webhookSupportabilityController 2025-08-13T19:59:53.583756152+00:00 stderr F I0813 19:59:53.583593 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ... 
2025-08-13T19:59:53.605947754+00:00 stderr F I0813 19:59:53.605280 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:59:53.605947754+00:00 stderr F I0813 19:59:53.605320 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T19:59:58.613089865+00:00 stderr F I0813 19:59:58.609004 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 9 triggered by "optional secret/webhook-authenticator has changed" 2025-08-13T19:59:58.966505629+00:00 stderr F I0813 19:59:58.917090 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authenticator -n openshift-kube-apiserver because it changed 2025-08-13T19:59:59.839284638+00:00 stderr F I0813 19:59:59.839152 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.806181934+00:00 stderr F I0813 20:00:01.796903 1 core.go:358] ConfigMap "openshift-kube-apiserver/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:18Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-08-13T20:00:01Z"}],"resourceVersion":null,"uid":"449953d7-35d8-4eaf-8671-65eda2b482f7"}} 2025-08-13T20:00:01.809347355+00:00 stderr F I0813 20:00:01.808935 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver: 2025-08-13T20:00:01.809347355+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:01.809347355+00:00 stderr F I0813 20:00:01.809002 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.907607275+00:00 stderr F I0813 20:00:01.904389 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.924416555+00:00 stderr F I0813 20:00:01.924338 1 core.go:358] ConfigMap "openshift-config-managed/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:01Z"}],"resourceVersion":null,"uid":"1b8f54dc-8896-4a59-8c53-834fed1d81fd"}} 2025-08-13T20:00:01.953431302+00:00 stderr F I0813 20:00:01.929953 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-config-managed: 2025-08-13T20:00:01.953431302+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:02.037372065+00:00 stderr F I0813 20:00:02.035056 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:02.070134019+00:00 stderr F I0813 20:00:02.068760 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:02.923722088+00:00 stderr F I0813 20:00:02.874163 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:03.445770364+00:00 stderr F I0813 20:00:03.445627 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-9 -n openshift-kube-apiserver because it was missing 
2025-08-13T20:00:04.121005026+00:00 stderr F I0813 20:00:04.112026 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:05.170528143+00:00 stderr F I0813 20:00:05.160062 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:05.681253986+00:00 stderr P I0813 20:00:05.677309 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAw 2025-08-13T20:00:05.681315977+00:00 stderr F 
ggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvh
PQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:05.682706277+00:00 stderr F I0813 20:00:05.681355 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 2025-08-13T20:00:05.682706277+00:00 stderr F cause by 
changes in data.ca-bundle.crt 2025-08-13T20:00:05.934535818+00:00 stderr F I0813 20:00:05.920362 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.920269061 +0000 UTC))" 2025-08-13T20:00:05.935118714+00:00 stderr F I0813 20:00:05.935088 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.934695512 +0000 UTC))" 2025-08-13T20:00:05.935300110+00:00 stderr F I0813 20:00:05.935278 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.935151055 +0000 UTC))" 2025-08-13T20:00:05.935382762+00:00 stderr F I0813 20:00:05.935362 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.935336371 +0000 UTC))" 
2025-08-13T20:00:05.935441924+00:00 stderr F I0813 20:00:05.935428 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935407243 +0000 UTC))" 2025-08-13T20:00:05.935499995+00:00 stderr F I0813 20:00:05.935487 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935465504 +0000 UTC))" 2025-08-13T20:00:05.935693671+00:00 stderr F I0813 20:00:05.935672 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935519506 +0000 UTC))" 2025-08-13T20:00:05.935751492+00:00 stderr F I0813 20:00:05.935736 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:00:05.935717932 +0000 UTC))" 2025-08-13T20:00:05.939093088+00:00 stderr F I0813 20:00:05.939070 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.939023986 +0000 UTC))" 2025-08-13T20:00:05.939164370+00:00 stderr F I0813 20:00:05.939150 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.939132769 +0000 UTC))" 2025-08-13T20:00:05.939613873+00:00 stderr F I0813 20:00:05.939589 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:05.939563471 +0000 UTC))" 2025-08-13T20:00:05.940415505+00:00 stderr F I0813 20:00:05.940349 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC 
(now=2025-08-13 20:00:05.940324553 +0000 UTC))" 2025-08-13T20:00:06.125240245+00:00 stderr F I0813 20:00:06.124171 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:06.867873281+00:00 stderr F I0813 20:00:06.858398 1 request.go:697] Waited for 1.172199574s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-apiserver-client-ca 2025-08-13T20:00:08.100086485+00:00 stderr F I0813 20:00:07.974193 1 request.go:697] Waited for 1.291491224s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/trusted-ca-bundle 2025-08-13T20:00:08.100086485+00:00 stderr P I0813 20:00:07.983291 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZ 2025-08-13T20:00:08.100180528+00:00 stderr F 
XJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.189908176+00:00 stderr F I0813 20:00:08.189030 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", 
FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:08.212464720+00:00 stderr F I0813 20:00:08.206614 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T20:00:08.212464720+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:10.243962056+00:00 stderr F I0813 20:00:10.231607 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:10.463320621+00:00 stderr F I0813 20:00:10.460161 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:11.026082297+00:00 stderr F I0813 20:00:11.005505 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-9 -n openshift-kube-apiserver because it was missing 
2025-08-13T20:00:11.373230415+00:00 stderr F I0813 20:00:11.353030 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 9 triggered by "optional secret/webhook-authenticator has changed" 2025-08-13T20:00:11.822691641+00:00 stderr F I0813 20:00:11.822014 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 9 created because optional secret/webhook-authenticator has changed 2025-08-13T20:00:11.846949852+00:00 stderr F I0813 20:00:11.846098 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:12.011013411+00:00 stderr F W0813 20:00:12.010579 1 staticpod.go:38] revision 9 is unexpectedly already the latest available revision. This is a possible race! 
2025-08-13T20:00:12.011013411+00:00 stderr F E0813 20:00:12.010970 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 9 2025-08-13T20:00:13.096090201+00:00 stderr F I0813 20:00:13.095264 1 installer_controller.go:524] node crc with revision 8 is the oldest and needs new revision 9 2025-08-13T20:00:13.096090201+00:00 stderr F I0813 20:00:13.096032 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:13.096090201+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:13.096090201+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:13.096090201+00:00 stderr F TargetRevision: (int32) 9, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedTime: (*v1.Time)(0xc003c319e0)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:13.096090201+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) 
\"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:00:13.096090201+00:00 stderr F } 2025-08-13T20:00:13.096090201+00:00 stderr F } 2025-08-13T20:00:13.198325806+00:00 stderr F I0813 20:00:13.195633 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 8 to 9 because node crc with revision 8 is the oldest 2025-08-13T20:00:13.284744520+00:00 stderr F 
I0813 20:00:13.284573 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:13.285207963+00:00 stderr F I0813 20:00:13.285148 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.486186314+00:00 stderr F I0813 20:00:13.485044 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" 2025-08-13T20:00:13.491050473+00:00 stderr F I0813 20:00:13.491008 1 
status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.571507167+00:00 stderr F E0813 20:00:13.571212 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:14.466612320+00:00 stderr F I0813 20:00:14.458400 1 request.go:697] Waited for 1.135419415s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:00:17.365699214+00:00 stderr F I0813 20:00:17.348199 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-9-crc -n 
openshift-kube-apiserver because it was missing 2025-08-13T20:00:17.620944372+00:00 stderr F I0813 20:00:17.619554 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:18.586967406+00:00 stderr F I0813 20:00:18.586756 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:21.262488005+00:00 stderr F I0813 20:00:21.259307 1 request.go:697] Waited for 1.120055997s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:00:22.147030787+00:00 stderr F I0813 20:00:22.142248 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:25.082034549+00:00 stderr F I0813 20:00:25.074714 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:28.579868887+00:00 stderr F I0813 20:00:28.571487 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:49.954168169+00:00 stderr P I0813 20:00:49.869279 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwgg 2025-08-13T20:00:49.954592101+00:00 stderr F 
Ei\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+c
JP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:49.954592101+00:00 stderr F I0813 20:00:49.950160 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 
2025-08-13T20:00:49.954592101+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:50.452469968+00:00 stderr P I0813 20:00:50.451511 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE 2025-08-13T20:00:50.452521249+00:00 stderr F 
3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtn
fJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:50.491882951+00:00 stderr F I0813 20:00:50.472665 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T20:00:50.491882951+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:59.988258923+00:00 stderr F I0813 20:00:59.971160 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.971063173 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.991927 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.991860806 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992048 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.991966509 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992073 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.992059152 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992099 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992080352 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992128 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992105723 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992151 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992136864 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992169 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992156825 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.992174365 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992228 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.992200266 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992255 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992240197 +0000 UTC))" 2025-08-13T20:00:59.994898803+00:00 stderr F I0813 20:00:59.992640 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:59.992603197 +0000 UTC))" 2025-08-13T20:00:59.994898803+00:00 stderr F I0813 20:00:59.993035 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 20:00:59.993011119 +0000 UTC))" 2025-08-13T20:01:14.400175850+00:00 stderr F I0813 20:01:14.390670 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:14.400175850+00:00 stderr F I0813 20:01:14.393475 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:14.429368972+00:00 stderr F I0813 20:01:14.424378 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:14.429368972+00:00 stderr F I0813 20:01:14.425685 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:14.425611745 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.425769 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:14.425704257 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485069 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:14.484961197 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485106 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:14.485085791 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485272 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485183653 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485306 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485284976 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485351 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485314047 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485377 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485357498 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485404 1 tlsconfig.go:178] "Loaded client CA" index=8 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:14.485386229 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485472 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:14.485444671 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485516 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485497092 +0000 UTC))" 2025-08-13T20:01:14.505378189+00:00 stderr F I0813 20:01:14.489240 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-08-13 20:01:14.489194768 +0000 UTC))" 2025-08-13T20:01:14.507034327+00:00 stderr F I0813 20:01:14.506154 1 
named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 20:01:14.48961271 +0000 UTC))" 2025-08-13T20:01:15.604133219+00:00 stderr F I0813 20:01:15.603594 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="b57d435dda9015b836844ccd5987ed8197c1e056dd12749adf934051633baa90", new="60b634b6a45b60ea7526d03c4e0dd32ed9c3754978fa8314240e9b0d791c4ab0") 2025-08-13T20:01:15.604133219+00:00 stderr F W0813 20:01:15.603963 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:15.604133219+00:00 stderr F I0813 20:01:15.604069 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="1b9136fb24ce5731f959f039a819bcb38f1319259d783f75058cdbd1e9634c27", new="f77cc21f2f2924073edb7f73a484e9ce3a552373ca706e2e4a46fb72d4f6a8fa") 2025-08-13T20:01:15.604323094+00:00 stderr F I0813 20:01:15.604280 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:15.607932247+00:00 stderr F I0813 20:01:15.604630 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608191 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608316 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608230 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608365 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608414 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608358 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608285 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608378 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608391 1 base_controller.go:172] Shutting down webhookSupportabilityController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608399 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608432 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608512 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608477 1 base_controller.go:172] Shutting down ServiceAccountIssuerController ... 2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608527 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608531 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608550604+00:00 stderr F I0813 20:01:15.608544 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:15.608562215+00:00 stderr F I0813 20:01:15.608556 1 base_controller.go:172] Shutting down ResourceSyncController ... 
2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608587 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608601 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608589 1 certrotationcontroller.go:899] Shutting down CertRotation 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608616 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:01:15.608677738+00:00 stderr F I0813 20:01:15.608658 1 base_controller.go:172] Shutting down StartupMonitorPodCondition ... 2025-08-13T20:01:15.608711579+00:00 stderr F I0813 20:01:15.608671 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:01:15.608739830+00:00 stderr F I0813 20:01:15.608679 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:01:15.608766861+00:00 stderr F I0813 20:01:15.608705 1 base_controller.go:172] Shutting down EventWatchController ... 2025-08-13T20:01:15.610025417+00:00 stderr F I0813 20:01:15.608737 1 termination_observer.go:155] Shutting down TerminationObserver 2025-08-13T20:01:15.610150250+00:00 stderr F I0813 20:01:15.610097 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 2025-08-13T20:01:15.610183981+00:00 stderr F I0813 20:01:15.610136 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:15.610236473+00:00 stderr F I0813 20:01:15.610223 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:15.610264143+00:00 stderr F I0813 20:01:15.610200 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.610303184+00:00 stderr F I0813 20:01:15.610289 1 base_controller.go:172] Shutting down InstallerController ... 
2025-08-13T20:01:15.610344556+00:00 stderr F I0813 20:01:15.610332 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:15.610415938+00:00 stderr F I0813 20:01:15.610362 1 base_controller.go:172] Shutting down CertRotationTimeUpgradeableController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608865 1 base_controller.go:172] Shutting down NodeKubeconfigController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608868 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608884 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608913 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608921 1 base_controller.go:172] Shutting down BoundSATokenSignerController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608955 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608979 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609010 1 base_controller.go:172] Shutting down KubeAPIServerStaticResources ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609030 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609049 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609074 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609093 1 base_controller.go:172] Shutting down BackingResourceController ... 
2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609210 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609252 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609320 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.608263 1 base_controller.go:172] Shutting down SCCReconcileController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609339 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609361 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609505 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609514 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609651 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609737 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.609751 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.609759 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.610261 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:15.641143274+00:00 stderr F I0813 20:01:15.610350 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610443 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610451 1 base_controller.go:114] Shutting down worker of ServiceAccountIssuerController controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610460 1 base_controller.go:114] Shutting down worker of StartupMonitorPodCondition controller ... 2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.608763 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.610501 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.610508 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610516 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610967 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610992 1 base_controller.go:172] Shutting down StatusSyncer_kube-apiserver ... 2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611015 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-apiserver controller ... 
2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611016 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611190 1 base_controller.go:114] Shutting down worker of SCCReconcileController controller ... 2025-08-13T20:01:15.641202416+00:00 stderr F I0813 20:01:15.611203 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641202416+00:00 stderr F I0813 20:01:15.611229 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:01:15.641217486+00:00 stderr F I0813 20:01:15.611318 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:15.641217486+00:00 stderr F I0813 20:01:15.611364 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611392 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611420 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611550 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:15.641239587+00:00 stderr F I0813 20:01:15.611598 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:15.641239587+00:00 stderr F I0813 20:01:15.611643 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:01:15.641249067+00:00 stderr F I0813 20:01:15.611675 1 base_controller.go:114] Shutting down worker of webhookSupportabilityController controller ... 
2025-08-13T20:01:15.641249067+00:00 stderr F I0813 20:01:15.611683 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.611704 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.611986 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.612066 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:01:15.641269957+00:00 stderr F I0813 20:01:15.612072 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:01:15.641269957+00:00 stderr F I0813 20:01:15.612084 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612104 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612113 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612120 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:15.641299968+00:00 stderr F I0813 20:01:15.612127 1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ... 2025-08-13T20:01:15.641309749+00:00 stderr F I0813 20:01:15.612135 1 base_controller.go:114] Shutting down worker of EventWatchController controller ... 2025-08-13T20:01:15.641309749+00:00 stderr F I0813 20:01:15.612141 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612148 1 base_controller.go:114] Shutting down worker of NodeKubeconfigController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612221 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612236 1 base_controller.go:172] Shutting down KubeletVersionSkewController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612252 1 base_controller.go:172] Shutting down StaticPodStateFallback ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612263 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612280 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612286 1 base_controller.go:114] Shutting down worker of KubeletVersionSkewController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612292 1 base_controller.go:114] Shutting down worker of StaticPodStateFallback controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612296 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612330 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612339 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612356 1 base_controller.go:114] Shutting down worker of BoundSATokenSignerController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612365 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612413 1 base_controller.go:114] Shutting down worker of CertRotationTimeUpgradeableController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612581 1 base_controller.go:172] Shutting down PodSecurityReadinessController ... 2025-08-13T20:01:15.641769122+00:00 stderr F I0813 20:01:15.612700 1 base_controller.go:114] Shutting down worker of PodSecurityReadinessController controller ... 2025-08-13T20:01:15.641769122+00:00 stderr F I0813 20:01:15.619109 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:15.641935446+00:00 stderr F E0813 20:01:15.630417 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": context canceled, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): client rate limiter Wait returned an error: context canceled, 
"assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/api-usage.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/audit-errors.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/cpu-utilization.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-requests.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/podsecurity-violations.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:01:15.641935446+00:00 stderr F I0813 20:01:15.641395 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641958087+00:00 stderr F I0813 20:01:15.641945 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641967987+00:00 stderr F I0813 20:01:15.641411 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641967987+00:00 stderr F I0813 20:01:15.641963 1 
base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641977968+00:00 stderr F I0813 20:01:15.641419 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641977968+00:00 stderr F I0813 20:01:15.641425 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641987658+00:00 stderr F I0813 20:01:15.641440 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641987658+00:00 stderr F I0813 20:01:15.641980 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641997388+00:00 stderr F I0813 20:01:15.641450 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.643992 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644035 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644047 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644061 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644098 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:15.644242882+00:00 stderr F I0813 20:01:15.644208 1 base_controller.go:104] All StaticPodStateFallback workers have been terminated 2025-08-13T20:01:15.645271592+00:00 stderr F I0813 20:01:15.645070 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:15.645271592+00:00 stderr F I0813 20:01:15.644765 1 
base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.645499838+00:00 stderr F I0813 20:01:15.645456 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128512 1 base_controller.go:104] All CertRotationTimeUpgradeableController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128679 1 base_controller.go:104] All PodSecurityReadinessController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128768 1 base_controller.go:114] Shutting down worker of KubeAPIServerStaticResources controller ... 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128827 1 base_controller.go:104] All webhookSupportabilityController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128877 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128832 1 base_controller.go:104] All KubeAPIServerStaticResources workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128894 1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:15.641461 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128904 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128911 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128913 1 base_controller.go:104] All EventWatchController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641470 1 
base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128920 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641480 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128925 1 base_controller.go:104] All NodeKubeconfigController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641488 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641497 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128947 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:15.641507 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:15.641530 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:16.128969 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:16.128970 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641553 1 base_controller.go:104] All ServiceAccountIssuerController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128976 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641564 1 base_controller.go:104] All StartupMonitorPodCondition workers have been terminated 
2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129046 1 controller_manager.go:54] StartupMonitorPodCondition controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129023 1 base_controller.go:104] All BoundSATokenSignerController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641569 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129054 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129067 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129071 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129072 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641586 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129085 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641607 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641618 1 base_controller.go:150] All StatusSyncer_kube-apiserver post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129112 1 base_controller.go:104] All StatusSyncer_kube-apiserver workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641645 1 base_controller.go:104] All SCCReconcileController workers have been terminated 
2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641654 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641455 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129134 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641540 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129144 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128988 1 builder.go:330] server exited 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641518 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128997 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129004 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129007 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129032 1 base_controller.go:104] All KubeletVersionSkewController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641597 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129190 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641577 1 base_controller.go:104] All PruneController workers have been 
terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129207 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:16.130086206+00:00 stderr F I0813 20:01:16.130032 1 controller_manager.go:54] StaticPodStateFallback controller terminated 2025-08-13T20:01:16.135711526+00:00 stderr F I0813 20:01:16.133970 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:16.135711526+00:00 stderr F I0813 20:01:16.134100 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:18.800179041+00:00 stderr F W0813 20:01:18.797867 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.log
2026-01-20T10:49:36.313016411+00:00 stderr F I0120 10:49:36.312098 1 cmd.go:240] Using service-serving-cert provided certificates 2026-01-20T10:49:36.313016411+00:00 stderr F I0120 10:49:36.312591 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:36.313779674+00:00 stderr F I0120 10:49:36.313736 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:36.414544713+00:00 stderr F I0120 10:49:36.414294 1 builder.go:298] openshift-apiserver-operator version - 2026-01-20T10:49:37.079323092+00:00 stderr F I0120 10:49:37.075897 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:37.079323092+00:00 stderr F W0120 10:49:37.076810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:37.079323092+00:00 stderr F W0120 10:49:37.076819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:37.080246470+00:00 stderr F I0120 10:49:37.080225 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:37.092812513+00:00 stderr F I0120 10:49:37.092788 1 leaderelection.go:250] attempting to acquire leader lease openshift-apiserver-operator/openshift-apiserver-operator-lock... 
2026-01-20T10:49:37.109754839+00:00 stderr F I0120 10:49:37.096252 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:37.109754839+00:00 stderr F I0120 10:49:37.096315 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:37.109754839+00:00 stderr F I0120 10:49:37.099656 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:37.118088022+00:00 stderr F I0120 10:49:37.116629 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:37.118088022+00:00 stderr F I0120 10:49:37.117191 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:37.118088022+00:00 stderr F I0120 10:49:37.117680 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:37.118088022+00:00 stderr F I0120 10:49:37.117694 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.118874616+00:00 stderr F I0120 10:49:37.118837 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:37.118874616+00:00 stderr F I0120 10:49:37.118868 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.200264716+00:00 stderr F I0120 10:49:37.200184 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:37.220003197+00:00 stderr F I0120 10:49:37.219943 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2026-01-20T10:49:37.221288056+00:00 stderr F I0120 10:49:37.221169 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:55:14.797940153+00:00 stderr F I0120 10:55:14.797299 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock 2026-01-20T10:55:14.797940153+00:00 stderr F I0120 10:55:14.797481 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42075", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_e2cdd58f-bd0e-41be-8cc4-e455fe72b5a8 became leader 2026-01-20T10:55:14.800280756+00:00 stderr F I0120 10:55:14.800236 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:55:14.804171400+00:00 stderr F I0120 10:55:14.804119 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack 
MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:55:14.804230771+00:00 stderr F I0120 10:55:14.804182 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", 
"EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2026-01-20T10:55:14.816889128+00:00 stderr F I0120 10:55:14.816807 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2026-01-20T10:55:14.817366751+00:00 stderr F I0120 10:55:14.817327 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2026-01-20T10:55:14.821987214+00:00 stderr F I0120 10:55:14.820719 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2026-01-20T10:55:14.825027415+00:00 stderr F I0120 10:55:14.824974 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver 2026-01-20T10:55:14.825027415+00:00 stderr F I0120 10:55:14.824999 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2026-01-20T10:55:14.825585760+00:00 stderr F I0120 10:55:14.825552 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver 2026-01-20T10:55:14.826834744+00:00 stderr F I0120 10:55:14.826796 1 base_controller.go:67] Waiting for caches to sync for 
SecretRevisionPruneController 2026-01-20T10:55:14.826854134+00:00 stderr F I0120 10:55:14.826839 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2026-01-20T10:55:14.827088931+00:00 stderr F I0120 10:55:14.827038 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2026-01-20T10:55:14.827192043+00:00 stderr F I0120 10:55:14.824987 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:55:14.827353828+00:00 stderr F I0120 10:55:14.827337 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2026-01-20T10:55:14.827397809+00:00 stderr F I0120 10:55:14.827383 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2026-01-20T10:55:14.827417779+00:00 stderr F I0120 10:55:14.827407 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController 2026-01-20T10:55:14.827428220+00:00 stderr F I0120 10:55:14.827423 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2026-01-20T10:55:14.827457060+00:00 stderr F I0120 10:55:14.827446 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2026-01-20T10:55:14.827482861+00:00 stderr F I0120 10:55:14.827470 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2026-01-20T10:55:14.827491601+00:00 stderr F I0120 10:55:14.827485 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2026-01-20T10:55:14.828522068+00:00 stderr F I0120 10:55:14.828497 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:55:14.830275375+00:00 stderr F I0120 10:55:14.830160 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2026-01-20T10:55:14.920134511+00:00 stderr F I0120 10:55:14.918359 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2026-01-20T10:55:14.920134511+00:00 
stderr F I0120 10:55:14.918410 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2026-01-20T10:55:14.925223357+00:00 stderr F I0120 10:55:14.925165 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2026-01-20T10:55:14.925295698+00:00 stderr F I0120 10:55:14.925277 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 2026-01-20T10:55:14.926193932+00:00 stderr F I0120 10:55:14.926149 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver 2026-01-20T10:55:14.926193932+00:00 stderr F I0120 10:55:14.926170 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ... 2026-01-20T10:55:14.926242603+00:00 stderr F I0120 10:55:14.926188 1 base_controller.go:73] Caches are synced for auditPolicyController 2026-01-20T10:55:14.926242603+00:00 stderr F I0120 10:55:14.926211 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2026-01-20T10:55:14.926615243+00:00 stderr F I0120 10:55:14.926577 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2026-01-20T10:55:14.935252113+00:00 stderr F I0120 10:55:14.928180 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2026-01-20T10:55:14.935252113+00:00 stderr F I0120 10:55:14.928192 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2026-01-20T10:55:14.935252113+00:00 stderr F I0120 10:55:14.928284 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:55:14.935252113+00:00 stderr F I0120 10:55:14.928338 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2026-01-20T10:55:14.935252113+00:00 stderr F I0120 10:55:14.931497 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2026-01-20T10:55:14.935252113+00:00 stderr F I0120 10:55:14.931507 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2026-01-20T10:55:15.003467762+00:00 stderr F I0120 10:55:15.003099 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:55:15.019284723+00:00 stderr F I0120 10:55:15.019225 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:15.108045380+00:00 stderr F I0120 10:55:15.107985 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2026-01-20T10:55:15.218952557+00:00 stderr F I0120 10:55:15.218897 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:15.227923796+00:00 stderr F I0120 10:55:15.227828 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2026-01-20T10:55:15.227923796+00:00 stderr F I0120 10:55:15.227880 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 
2026-01-20T10:55:15.228219913+00:00 stderr F I0120 10:55:15.228038 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2026-01-20T10:55:15.424511076+00:00 stderr F I0120 10:55:15.423796 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:15.624900678+00:00 stderr F I0120 10:55:15.624029 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:15.819709801+00:00 stderr F I0120 10:55:15.819616 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:15.827033776+00:00 stderr F I0120 10:55:15.826970 1 base_controller.go:73] Caches are synced for RevisionController 2026-01-20T10:55:15.827033776+00:00 stderr F I0120 10:55:15.826998 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2026-01-20T10:55:15.827033776+00:00 stderr F I0120 10:55:15.827006 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2026-01-20T10:55:15.827105088+00:00 stderr F I0120 10:55:15.827033 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2026-01-20T10:55:15.827208231+00:00 stderr F I0120 10:55:15.827161 1 base_controller.go:73] Caches are synced for ConfigObserver 2026-01-20T10:55:15.827208231+00:00 stderr F I0120 10:55:15.827176 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2026-01-20T10:55:15.828215767+00:00 stderr F I0120 10:55:15.828146 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2026-01-20T10:55:15.828215767+00:00 stderr F I0120 10:55:15.828175 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2026-01-20T10:55:15.828215767+00:00 stderr F I0120 10:55:15.828192 1 base_controller.go:73] Caches are synced for EncryptionStateController 2026-01-20T10:55:15.828215767+00:00 stderr F I0120 10:55:15.828206 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2026-01-20T10:55:15.828245258+00:00 stderr F I0120 10:55:15.828216 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2026-01-20T10:55:15.828245258+00:00 stderr F I0120 10:55:15.828227 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2026-01-20T10:55:15.828245258+00:00 stderr F I0120 10:55:15.828233 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2026-01-20T10:55:15.828245258+00:00 stderr F I0120 10:55:15.828239 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2026-01-20T10:55:15.828259558+00:00 stderr F I0120 10:55:15.828177 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2026-01-20T10:55:15.828962067+00:00 stderr F I0120 10:55:15.828196 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2026-01-20T10:55:15.828962067+00:00 stderr F I0120 10:55:15.828156 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2026-01-20T10:55:15.828962067+00:00 stderr F I0120 10:55:15.828944 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 
2026-01-20T10:55:16.023376550+00:00 stderr F I0120 10:55:16.023269 1 request.go:697] Waited for 1.197762799s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services?limit=500&resourceVersion=0 2026-01-20T10:55:16.026086542+00:00 stderr F I0120 10:55:16.025976 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:16.230944533+00:00 stderr F I0120 10:55:16.230839 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:16.429431324+00:00 stderr F I0120 10:55:16.429318 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:16.625092610+00:00 stderr F I0120 10:55:16.624362 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:16.718303445+00:00 stderr F I0120 10:55:16.718226 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2026-01-20T10:55:16.718303445+00:00 stderr F I0120 10:55:16.718258 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 
2026-01-20T10:55:16.831735968+00:00 stderr F I0120 10:55:16.831629 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:17.024617620+00:00 stderr F I0120 10:55:17.024544 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:17.217610044+00:00 stderr F I0120 10:55:17.217185 1 request.go:697] Waited for 2.389807254s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?limit=500&resourceVersion=0 2026-01-20T10:55:17.219928016+00:00 stderr F I0120 10:55:17.219882 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:17.420965286+00:00 stderr F I0120 10:55:17.420564 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:17.620453723+00:00 stderr F I0120 10:55:17.620371 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:17.821140153+00:00 stderr F I0120 10:55:17.820951 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2026-01-20T10:55:17.829385853+00:00 stderr F I0120 10:55:17.829290 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:55:17.829385853+00:00 stderr F I0120 10:55:17.829328 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2026-01-20T10:55:18.417585483+00:00 stderr F I0120 10:55:18.417442 1 request.go:697] Waited for 3.483985262s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099454 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.099379971 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099658 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.099639908 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099686 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.099665478 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099708 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.099693009 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099733 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.09971472 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099757 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.09973929 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099778 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099763721 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099800 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099783761 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099824 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.099807762 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099847 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.099833633 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099867 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.099852993 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.099890 1 tlsconfig.go:178] "Loaded client CA" 
index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.099874674 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.100351 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2026-01-20 10:56:07.100329357 +0000 UTC))" 2026-01-20T10:56:07.101488158+00:00 stderr F I0120 10:56:07.100700 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906176\" (2026-01-20 09:49:36 +0000 UTC to 2027-01-20 09:49:36 +0000 UTC (now=2026-01-20 10:56:07.100683636 +0000 UTC))" 2026-01-20T10:57:14.816102638+00:00 stderr F E0120 10:57:14.815461 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:14.928954292+00:00 stderr F E0120 10:57:14.928840 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:14.934401506+00:00 stderr F E0120 10:57:14.934345 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:14.936883322+00:00 stderr F E0120 10:57:14.936821 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:14.941545316+00:00 stderr F E0120 10:57:14.941479 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: 
connection refused 2026-01-20T10:57:14.945007857+00:00 stderr F E0120 10:57:14.944917 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:14.946577408+00:00 stderr F E0120 10:57:14.946505 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:15.131578060+00:00 stderr F E0120 10:57:15.131489 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.249232362+00:00 stderr F E0120 10:57:15.249163 1 
base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.530533461+00:00 stderr F E0120 10:57:15.530409 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:15.731200168+00:00 stderr F E0120 10:57:15.731110 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.130511408+00:00 stderr F E0120 10:57:16.130386 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.330634401+00:00 stderr F E0120 10:57:16.330529 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.531504072+00:00 stderr F E0120 10:57:16.531436 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:16.730564406+00:00 stderr F E0120 10:57:16.730481 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.842858186+00:00 stderr F E0120 10:57:16.842791 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:16.941405272+00:00 stderr F E0120 10:57:16.941340 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:17.329455334+00:00 stderr F E0120 10:57:17.329362 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:17.532190465+00:00 stderr F E0120 10:57:17.532114 1 base_controller.go:268] auditPolicyController reconciliation 
failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:17.929465111+00:00 stderr F E0120 10:57:17.929354 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:18.130641652+00:00 stderr F E0120 10:57:18.130559 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:18.333426004+00:00 stderr F E0120 10:57:18.333361 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:18.642258752+00:00 stderr F E0120 10:57:18.642192 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:18.732575610+00:00 stderr F E0120 10:57:18.732510 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:19.130514624+00:00 stderr F E0120 10:57:19.130040 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:19.332827324+00:00 stderr F E0120 10:57:19.332753 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:19.531401785+00:00 stderr F E0120 10:57:19.531339 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:20.133787966+00:00 stderr F E0120 10:57:20.133440 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:20.442212122+00:00 stderr F E0120 10:57:20.442146 1 
base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:20.730401913+00:00 stderr F E0120 10:57:20.730352 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:20.933039712+00:00 stderr F E0120 10:57:20.932970 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:21.131724066+00:00 stderr F E0120 10:57:21.131608 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:21.733273975+00:00 stderr F E0120 10:57:21.733177 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:22.241940007+00:00 stderr F E0120 10:57:22.241858 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:22.384874647+00:00 stderr F E0120 10:57:22.384787 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:23.293661880+00:00 stderr F E0120 10:57:23.292736 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:23.675141508+00:00 stderr F E0120 10:57:23.675003 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:23.695457636+00:00 stderr F E0120 10:57:23.695394 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.048664396+00:00 stderr F E0120 10:57:24.048586 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.949108669+00:00 stderr F E0120 10:57:24.947776 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.843703236+00:00 stderr F E0120 10:57:25.843584 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.244762853+00:00 stderr F E0120 10:57:26.244677 1 
base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:27.644843798+00:00 stderr F E0120 10:57:27.644253 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:28.415913670+00:00 stderr F E0120 10:57:28.415814 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:28.820474958+00:00 stderr F E0120 10:57:28.820355 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.441218815+00:00 stderr F E0120 10:57:29.441139 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:31.242983343+00:00 stderr F E0120 10:57:31.242920 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:31.371446030+00:00 stderr F E0120 10:57:31.371371 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:33.048501340+00:00 stderr F E0120 10:57:33.047476 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.841930808+00:00 stderr F E0120 10:57:34.841412 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:34.945317432+00:00 stderr F E0120 10:57:34.945251 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.666311944+00:00 stderr F E0120 10:57:36.666230 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.658692863+00:00 stderr F E0120 10:57:38.658617 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:41.622348829+00:00 stderr F E0120 10:57:41.621448 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:44.945708346+00:00 stderr F E0120 10:57:44.945640 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.090534496+00:00 stderr F E0120 10:57:45.090475 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:46.692695226+00:00 stderr F E0120 10:57:46.691834 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.log
2025-08-13T20:00:32.623453154+00:00 stderr F I0813 20:00:32.622897 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:32.623453154+00:00 stderr F I0813 20:00:32.623330 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:32.625343068+00:00 stderr F I0813 20:00:32.623961 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:33.276056553+00:00 stderr F I0813 20:00:33.274072 1 builder.go:298] openshift-apiserver-operator version - 2025-08-13T20:00:36.573424133+00:00 stderr F I0813 20:00:36.572654 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:36.573424133+00:00 stderr F W0813 20:00:36.573276 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:36.573424133+00:00 stderr F W0813 20:00:36.573285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.880697 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.883661 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.883892 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.884303 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.884422 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.886040 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.886058 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:36.904856823+00:00 stderr F I0813 20:00:36.903498 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:36.904856823+00:00 stderr F I0813 20:00:36.903553 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:36.923594798+00:00 stderr F I0813 20:00:36.920666 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:36.923594798+00:00 stderr F I0813 20:00:36.921349 1 leaderelection.go:250] attempting to acquire leader lease 
openshift-apiserver-operator/openshift-apiserver-operator-lock... 2025-08-13T20:00:36.989631781+00:00 stderr F I0813 20:00:36.986773 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:36.990036712+00:00 stderr F I0813 20:00:36.990004 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:37.010309200+00:00 stderr F I0813 20:00:37.010248 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:37.044255768+00:00 stderr F I0813 20:00:37.044199 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock 2025-08-13T20:00:37.064193987+00:00 stderr F I0813 20:00:37.046499 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"29854", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_a533f076-9102-4f1c-ac58-2cc3fe6b65c6 became leader 2025-08-13T20:00:37.064279739+00:00 stderr F I0813 20:00:37.059666 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:00:37.151322491+00:00 stderr F I0813 20:00:37.151228 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation 
EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:00:37.203134919+00:00 stderr F I0813 20:00:37.201263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", 
"NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:00:37.642070444+00:00 stderr F I0813 20:00:37.637452 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T20:00:37.650333960+00:00 stderr F I0813 20:00:37.643338 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:00:37.659630705+00:00 stderr F I0813 20:00:37.650486 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:00:37.659732778+00:00 stderr F I0813 20:00:37.658070 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:00:37.659787410+00:00 stderr F I0813 
20:00:37.658093 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:00:37.680132630+00:00 stderr F I0813 20:00:37.680081 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:00:37.680234053+00:00 stderr F I0813 20:00:37.680217 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver 2025-08-13T20:00:38.720224967+00:00 stderr F I0813 20:00:38.719249 1 request.go:697] Waited for 1.078839913s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver-operator/secrets?limit=500&resourceVersion=0 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738681 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738733 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738746 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738760 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738772 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738827 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738905 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.739020 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 
20:00:38.739036 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.739050 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T20:00:38.805124978+00:00 stderr F I0813 20:00:38.797689 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T20:00:38.805124978+00:00 stderr F I0813 20:00:38.798463 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T20:00:41.851229294+00:00 stderr F I0813 20:00:41.800206 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T20:00:42.094738887+00:00 stderr F I0813 20:00:42.094226 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:41.805143 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:42.094944 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:41.848831 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T20:00:42.095051716+00:00 stderr F I0813 20:00:42.095011 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T20:00:42.101372966+00:00 stderr F I0813 20:00:42.101290 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:00:42.101372966+00:00 stderr F I0813 20:00:42.101339 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T20:00:42.102589461+00:00 stderr F I0813 20:00:41.849214 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2025-08-13T20:00:42.102589461+00:00 stderr F I0813 20:00:42.102575 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T20:00:42.102870659+00:00 stderr F I0813 20:00:41.849228 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T20:00:42.102870659+00:00 stderr F I0813 20:00:42.102854 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:00:42.103007643+00:00 stderr F I0813 20:00:41.849236 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T20:00:42.103007643+00:00 stderr F I0813 20:00:42.102939 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2025-08-13T20:00:42.103378783+00:00 stderr F I0813 20:00:41.849559 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T20:00:42.103378783+00:00 stderr F I0813 20:00:42.103367 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 
2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:42.103666 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:41.849576 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:42.103710 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:41.849585 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110192 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:41.849679 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110352 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.849703 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.150683 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.961021 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.150763 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.961108 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151139 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.021017 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151487 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.032626 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151738 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T20:00:42.203779096+00:00 stderr F I0813 20:00:42.202033 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2025-08-13T20:00:42.203779096+00:00 stderr F I0813 20:00:42.202077 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 
2025-08-13T20:00:42.260567555+00:00 stderr F I0813 20:00:42.260259 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.299249098+00:00 stderr F I0813 20:00:42.298608 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.322060519+00:00 stderr F I0813 20:00:42.321963 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.327430152+00:00 stderr F I0813 20:00:42.323555 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.362673617+00:00 stderr F I0813 20:00:42.360270 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:00:42.362673617+00:00 stderr F I0813 20:00:42.360314 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:00:42.670889245+00:00 stderr F I0813 20:00:42.669476 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:00:42.743163146+00:00 stderr F I0813 20:00:42.742372 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T20:00:42.743163146+00:00 stderr F I0813 20:00:42.742670 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 
2025-08-13T20:00:45.524920664+00:00 stderr F I0813 20:00:45.508635 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServicesAvailable: PreconditionNotReady","reason":"APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:45.821627595+00:00 stderr F I0813 20:00:45.812347 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: PreconditionNotReady") 2025-08-13T20:01:00.051369613+00:00 stderr F I0813 20:00:59.999945 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.999559696 +0000 UTC))" 2025-08-13T20:01:00.051369613+00:00 stderr F I0813 20:01:00.051334 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.051282991 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051370 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.051347462 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051403 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.051380133 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051429 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051415584 +0000 UTC))" 2025-08-13T20:01:00.051566589+00:00 stderr F I0813 20:01:00.051481 1 tlsconfig.go:178] "Loaded 
client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051435195 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051592 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051564049 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051651 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.05162805 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051670 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.051659281 +0000 UTC))" 2025-08-13T20:01:00.051711783+00:00 stderr F I0813 20:01:00.051697 1 
tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.051680092 +0000 UTC))" 2025-08-13T20:01:00.051812326+00:00 stderr F I0813 20:01:00.051723 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051707923 +0000 UTC))" 2025-08-13T20:01:00.052406013+00:00 stderr F I0813 20:01:00.052327 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2025-08-13 20:01:00.05230566 +0000 UTC))" 2025-08-13T20:01:00.052915467+00:00 stderr F I0813 20:01:00.052691 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115236\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115234\" (2025-08-13 19:00:33 +0000 UTC to 2026-08-13 19:00:33 +0000 UTC (now=2025-08-13 20:01:00.05265489 +0000 UTC))" 2025-08-13T20:01:00.075203073+00:00 stderr F I0813 20:01:00.074877 1 status_controller.go:218] 
clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:04.674153274+00:00 stderr F I0813 20:01:04.672724 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" 2025-08-13T20:02:16.819652382+00:00 stderr F I0813 20:02:16.817488 1 status_controller.go:218] 
clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:02:17.836711546+00:00 stderr F I0813 20:02:17.836540 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)" 2025-08-13T20:02:31.903143649+00:00 stderr F E0813 20:02:31.902389 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.915566814+00:00 stderr F E0813 20:02:31.915523 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.929668616+00:00 stderr F E0813 20:02:31.929568 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.953661520+00:00 stderr F E0813 20:02:31.953490 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.998468599+00:00 stderr F E0813 20:02:31.998309 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.094028095+00:00 stderr F E0813 20:02:32.091120 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.123160126+00:00 stderr F E0813 20:02:32.123058 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.132639056+00:00 stderr F E0813 20:02:32.132542 1 
base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.149736984+00:00 stderr F E0813 20:02:32.149581 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.174235313+00:00 stderr F E0813 20:02:32.174067 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.218270729+00:00 stderr F E0813 20:02:32.218136 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.300349550+00:00 stderr F E0813 20:02:32.300297 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.500620033+00:00 stderr F E0813 20:02:32.500564 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.700608449+00:00 stderr F E0813 20:02:32.700509 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.901161410+00:00 stderr F E0813 20:02:32.901109 1 
base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.225520973+00:00 stderr F E0813 20:02:33.225466 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.345324831+00:00 stderr F E0813 20:02:33.345274 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.888991740+00:00 stderr F E0813 20:02:33.888337 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.632116838+00:00 stderr F E0813 20:02:34.632014 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.172962687+00:00 stderr F E0813 20:02:35.172879 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.197114260+00:00 stderr F E0813 20:02:37.197026 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.737456145+00:00 stderr F E0813 
20:02:37.737335 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.123017143+00:00 stderr F E0813 20:02:42.122325 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.155753357+00:00 stderr F E0813 20:02:42.155690 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.163199439+00:00 stderr F E0813 20:02:42.162939 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.163199439+00:00 stderr F E0813 20:02:42.163106 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: 
connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.174492282+00:00 stderr F E0813 20:02:42.174391 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.179982768+00:00 stderr F E0813 20:02:42.179920 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: 
connect: connection refused] 2025-08-13T20:02:42.322272097+00:00 stderr F E0813 20:02:42.322144 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.521540853+00:00 stderr F E0813 20:02:42.521307 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.922004457+00:00 stderr F E0813 20:02:42.921908 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.124500944+00:00 stderr F E0813 20:02:43.124391 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:43.323042348+00:00 stderr F E0813 20:02:43.322908 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.521237212+00:00 stderr F E0813 20:02:43.521044 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.920910073+00:00 stderr F E0813 20:02:43.920768 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.323879629+00:00 stderr F E0813 20:02:44.323719 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:44.521968140+00:00 stderr F E0813 20:02:44.521871 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.124535789+00:00 stderr F E0813 20:02:45.124346 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, 
Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:45.319572743+00:00 stderr F E0813 20:02:45.319381 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.924420698+00:00 stderr F E0813 20:02:45.924237 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:46.523057565+00:00 stderr F E0813 20:02:46.522935 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:46.720616821+00:00 stderr F E0813 20:02:46.720477 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.336036357+00:00 stderr F E0813 20:02:47.335935 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:47.994396188+00:00 stderr F E0813 20:02:47.993511 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:49.283347107+00:00 stderr F E0813 20:02:49.283216 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.283472481+00:00 stderr F E0813 20:02:49.283445 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:51.855959557+00:00 stderr F E0813 20:02:51.855716 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:52.123110748+00:00 stderr F E0813 20:02:52.122730 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.567385021+00:00 stderr F E0813 20:02:52.567277 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.409958606+00:00 stderr F E0813 20:02:54.409592 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:56.099987657+00:00 stderr F E0813 20:02:56.099735 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:56.992247701+00:00 stderr F E0813 20:02:56.991649 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:02.124640714+00:00 stderr F E0813 20:03:02.123737 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:03.807227862+00:00 stderr F E0813 20:03:03.807117 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.652390122+00:00 stderr F E0813 20:03:04.652187 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:07.242069628+00:00 stderr F E0813 20:03:07.241944 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:12.125445946+00:00 stderr F E0813 20:03:12.124657 1 base_controller.go:268] 
auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.051087142+00:00 stderr F E0813 20:03:13.050957 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.132117281+00:00 stderr F E0813 20:03:22.131173 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.136296422+00:00 stderr F E0813 20:03:25.136123 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.738604899+00:00 stderr F E0813 20:03:27.738450 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:32.124766154+00:00 stderr F E0813 20:03:32.124621 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.126310690+00:00 stderr F E0813 20:03:42.125532 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.154910106+00:00 stderr F E0813 20:03:42.154730 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.161279778+00:00 stderr F E0813 20:03:42.161177 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:42.205909101+00:00 stderr F I0813 20:03:42.205800 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.207388163+00:00 stderr F E0813 20:03:42.207359 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:03:42.213896209+00:00 stderr F I0813 20:03:42.213740 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.215527656+00:00 stderr F E0813 20:03:42.215513 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.227506847+00:00 stderr F I0813 20:03:42.227401 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds 
pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.229187455+00:00 stderr F E0813 20:03:42.229161 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.250562135+00:00 stderr F I0813 20:03:42.250442 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.252163971+00:00 stderr F E0813 20:03:42.252100 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.293449997+00:00 stderr F I0813 20:03:42.293392 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.295256739+00:00 stderr F E0813 20:03:42.295181 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.377169276+00:00 stderr F I0813 20:03:42.377103 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.379393929+00:00 stderr F E0813 20:03:42.379299 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.541071301+00:00 stderr F I0813 20:03:42.540941 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds 
pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.545997002+00:00 stderr F E0813 20:03:42.545764 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.866756742+00:00 stderr F I0813 20:03:42.866645 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.867915695+00:00 stderr F E0813 20:03:42.867756 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.509672403+00:00 stderr F I0813 20:03:43.509555 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:43Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:43.511247028+00:00 stderr F E0813 20:03:43.511182 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.793050554+00:00 stderr F I0813 20:03:44.792933 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:44Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:44.795308088+00:00 stderr F E0813 20:03:44.795247 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:47.357068477+00:00 stderr F I0813 20:03:47.356945 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:47Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds 
pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:47.359056754+00:00 stderr F E0813 20:03:47.358985 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.133416455+00:00 stderr F E0813 20:03:52.132768 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.493938720+00:00 stderr F I0813 20:03:52.486096 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:52Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:52.493938720+00:00 stderr F E0813 20:03:52.488188 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.024984158+00:00 stderr F E0813 20:03:54.024284 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.105893981+00:00 stderr F E0813 20:03:56.104408 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.137089403+00:00 stderr F E0813 20:04:02.135923 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.739498108+00:00 stderr F I0813 20:04:02.734664 1 status_controller.go:218] 
clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:02Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:02.740609460+00:00 stderr F E0813 20:04:02.740483 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.102662429+00:00 stderr F E0813 20:04:06.101089 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:08.723089732+00:00 stderr F E0813 20:04:08.722924 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:12.136377423+00:00 stderr F E0813 20:04:12.135667 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:22.140944498+00:00 stderr F E0813 20:04:22.139880 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.242015481+00:00 stderr F I0813 20:04:23.241952 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:23Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds 
pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:23.246984593+00:00 stderr F E0813 20:04:23.246831 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:32.136452333+00:00 stderr F E0813 20:04:32.135741 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.157979619+00:00 stderr F E0813 20:04:42.157223 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.178114396+00:00 stderr F E0813 20:04:42.177968 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.183583663+00:00 stderr F E0813 20:04:42.183519 1 
base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:42.205522291+00:00 stderr F I0813 20:04:42.205452 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available 
on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:42.207164318+00:00 stderr F E0813 20:04:42.207094 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.143327400+00:00 stderr F E0813 20:04:52.142325 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:56.135917062+00:00 stderr F E0813 20:04:56.134039 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:02.150079333+00:00 stderr F E0813 20:05:02.149400 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:04.226529326+00:00 stderr F I0813 20:05:04.223354 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:05:04Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 
containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:04.226756632+00:00 stderr F E0813 20:05:04.226667 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.436760004+00:00 stderr F E0813 20:05:12.436000 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.982682525+00:00 stderr F E0813 20:05:15.982157 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.214735049+00:00 stderr F I0813 20:05:42.213508 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:05:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are 
unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:42.275706025+00:00 stderr F I0813 20:05:42.275473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)") 2025-08-13T20:05:57.285378861+00:00 stderr F I0813 20:05:57.284355 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:57.523034657+00:00 stderr F I0813 20:05:57.519246 1 core.go:359] ConfigMap "openshift-apiserver/image-import-ca" changes: {"data":{"image-registry.openshift-image-registry.svc..5000":"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n","image-registry.openshift-image-registry.svc.cluster.local..5000":"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:05:57.525101056+00:00 stderr F I0813 20:05:57.523954 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/image-import-ca -n openshift-apiserver: 2025-08-13T20:05:57.525101056+00:00 stderr F cause by changes in data.image-registry.openshift-image-registry.svc..5000,data.image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:05:57.677947233+00:00 stderr F I0813 20:05:57.676320 1 apps.go:154] Deployment 
"openshift-apiserver/apiserver" changes: {"metadata":{"annotations":{"operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap":"ZjlHVA==","operator.openshift.io/spec-hash":"7538696d7771eb6997d5f9627023b75abea5bcd941bd000eddd83452c44c117a"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap":"ZjlHVA=="}},"spec":{"containers":[{"args":["if [ -s /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec openshift-apiserver start --config=/var/run/configmaps/config/config.yaml -v=2\n"],"command":["/bin/bash","-ec"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"openshift-apiserver","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":1,"httpGet":{"path":"readyz","port":8443,"scheme":"HTTPS"},"periodSeconds":5,"successThreshold":1,"timeoutSeconds":10},"resources":{"requests":{"cpu":"100m","memory":"200Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"periodSeconds":5,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/lib/kubelet/","name":"node-pullsecrets","readOnly":true},{"mountPath":"/var/run/con
figmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/audit","name":"audit"},{"mountPath":"/var/run/secrets/etcd-client","name":"etcd-client"},{"mountPath":"/var/run/configmaps/etcd-serving-ca","name":"etcd-serving-ca"},{"mountPath":"/var/run/configmaps/image-import-ca","name":"image-import-ca"},{"mountPath":"/var/run/configmaps/trusted-ca-bundle","name":"trusted-ca-bundle"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/var/run/secrets/encryption-config","name":"encryption-config"},{"mountPath":"/var/log/openshift-apiserver","name":"audit-dir"}]},{"args":["--listen","0.0.0.0:17698","--namespace","$(POD_NAMESPACE)","--v","2"],"command":["cluster-kube-apiserver-operator","check-endpoints"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","imagePullPolicy":"IfNotPresent","name":"openshift-apiserver-check-endpoints","ports":[{"containerPort":17698,"name":"check-endpoints","protocol":"TCP"}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError"}],"dnsPolicy":null,"initContainers":[{"command":["sh","-c","chmod 0700 /var/log/openshift-apiserver \u0026\u0026 touch /var/log/openshift-apiserver/audit.log \u0026\u0026 chmod 0600 
/var/log/openshift-apiserver/*"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78","imagePullPolicy":"IfNotPresent","name":"fix-audit-permissions","resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"securityContext":{"privileged":true,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/log/openshift-apiserver","name":"audit-dir"}]}],"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"hostPath":{"path":"/var/lib/kubelet/","type":"Directory"},"name":"node-pullsecrets"},{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"audit-1"},"name":"audit"},{"name":"etcd-client","secret":{"defaultMode":384,"secretName":"etcd-client"}},{"configMap":{"name":"etcd-serving-ca"},"name":"etcd-serving-ca"},{"configMap":{"name":"image-import-ca","optional":true},"name":"image-import-ca"},{"name":"serving-cert","secret":{"defaultMode":384,"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle","optional":true},"name":"trusted-ca-bundle"},{"name":"encryption-config","secret":{"defaultMode":384,"optional":true,"secretName":"encryption-config-1"}},{"hostPath":{"path":"/var/log/openshift-apiserver"},"name":"audit-dir"}]}}}} 2025-08-13T20:05:57.712527173+00:00 stderr F I0813 20:05:57.710377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/apiserver -n openshift-apiserver because it changed 2025-08-13T20:05:57.792728349+00:00 stderr F I0813 20:05:57.792468 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31903 
2025-08-13T20:06:06.054104191+00:00 stderr F I0813 20:06:06.052307 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:06.357041686+00:00 stderr F I0813 20:06:06.356939 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:06.445729146+00:00 stderr F I0813 20:06:06.445316 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31909 2025-08-13T20:06:06.477306380+00:00 stderr F I0813 20:06:06.477196 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31909 2025-08-13T20:06:11.032088889+00:00 stderr F I0813 20:06:11.031411 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:11.299406344+00:00 stderr F I0813 20:06:11.299320 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:13.285884901+00:00 stderr F I0813 20:06:13.284224 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.486063610+00:00 stderr F I0813 20:06:14.485923 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:15.055726412+00:00 stderr F I0813 20:06:15.053898 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:15.459610248+00:00 stderr F I0813 20:06:15.459453 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:17.607433323+00:00 stderr F I0813 20:06:17.607118 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.355976544+00:00 stderr F I0813 20:06:19.354945 1 
reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.425608978+00:00 stderr F I0813 20:06:19.425257 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.623244878+00:00 stderr F I0813 20:06:19.623160 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31984 2025-08-13T20:06:19.669516183+00:00 stderr F I0813 20:06:19.668198 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31984 2025-08-13T20:06:19.768531358+00:00 stderr F I0813 20:06:19.768080 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:20.567335762+00:00 stderr F I0813 20:06:20.566920 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:21.953215278+00:00 stderr F I0813 20:06:21.952766 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.176324287+00:00 stderr F I0813 20:06:22.176251 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:06:23.357625934+00:00 stderr F I0813 20:06:23.357518 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:23.807047764+00:00 stderr F I0813 20:06:23.806931 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.827925682+00:00 stderr F E0813 20:06:23.827635 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.829383633+00:00 stderr F I0813 20:06:23.829291 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:23.834485579+00:00 stderr F I0813 20:06:23.834379 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.842751926+00:00 stderr F E0813 20:06:23.842631 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.855149561+00:00 stderr F I0813 20:06:23.855012 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.862187083+00:00 stderr F E0813 20:06:23.862066 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: 
Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.884900333+00:00 stderr F I0813 20:06:23.884209 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.891101781+00:00 stderr F E0813 20:06:23.891048 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.932631530+00:00 stderr F I0813 20:06:23.932329 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in 
apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.940998389+00:00 stderr F E0813 20:06:23.940698 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.024163381+00:00 stderr F I0813 20:06:24.023531 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:24.031439149+00:00 stderr F E0813 20:06:24.031375 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.193903412+00:00 stderr F I0813 20:06:24.193399 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:24.201835099+00:00 stderr F E0813 20:06:24.201172 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: 
Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.524439637+00:00 stderr F I0813 20:06:24.522204 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:24.536657377+00:00 stderr F E0813 20:06:24.536589 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.971001895+00:00 stderr F I0813 20:06:24.970928 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:25.177934641+00:00 stderr F I0813 20:06:25.177836 1 status_controller.go:218] clusteroperator/openshift-apiserver diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:25.185500097+00:00 stderr F E0813 20:06:25.185333 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:26.476481046+00:00 stderr F I0813 20:06:26.476125 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no 
apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:26.492605468+00:00 stderr F E0813 20:06:26.492414 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:29.054767127+00:00 stderr F I0813 20:06:29.053969 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:29.065161865+00:00 stderr F E0813 20:06:29.063297 1 
base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:32.761019095+00:00 stderr F I0813 20:06:32.749752 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:33.373923428+00:00 stderr F I0813 20:06:33.370632 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:34.188020078+00:00 stderr F I0813 20:06:34.186083 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:34.201345500+00:00 stderr F E0813 20:06:34.201140 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; 
please apply your changes to the latest version and try again 2025-08-13T20:06:34.629962859+00:00 stderr F I0813 20:06:34.629748 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:06:35.628992572+00:00 stderr F I0813 20:06:35.623979 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:06:37.573096771+00:00 stderr F I0813 20:06:37.559256 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:39.863063066+00:00 stderr F I0813 20:06:39.854404 1 reflector.go:351] Caches populated for *v1.OpenShiftAPIServer from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:40.874002091+00:00 stderr F I0813 20:06:40.873474 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:41.517049087+00:00 stderr F I0813 20:06:41.516981 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:06:42.215088761+00:00 stderr F I0813 20:06:42.212846 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods 
available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:42.230288416+00:00 stderr F E0813 20:06:42.228844 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:42.403380339+00:00 stderr F I0813 20:06:42.402470 1 reflector.go:351] Caches populated for *v1.Project from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:42.618395174+00:00 stderr F I0813 20:06:42.618271 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:43.381188904+00:00 stderr F I0813 20:06:43.380529 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:43.702493616+00:00 stderr F I0813 20:06:43.702335 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:44.447973600+00:00 stderr F I0813 20:06:44.446315 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds 
pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:44.709153819+00:00 stderr F E0813 20:06:44.707647 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:45.663485390+00:00 stderr F I0813 20:06:45.663401 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:46.013983068+00:00 stderr F I0813 20:06:46.013676 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:46.630691410+00:00 stderr F I0813 20:06:46.629730 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:47.052339409+00:00 stderr F I0813 20:06:47.051586 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:48.619540462+00:00 stderr F I0813 20:06:48.619377 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from 
k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:06:49.788030533+00:00 stderr F I0813 20:06:49.784534 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:50.214571492+00:00 stderr F I0813 20:06:50.214240 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:52.329369285+00:00 stderr F I0813 20:06:52.329004 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:52.581238027+00:00 stderr F I0813 20:06:52.581124 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:53.108701330+00:00 stderr F I0813 20:06:53.106313 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:53.850514579+00:00 stderr F I0813 20:06:53.850039 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:53.852128345+00:00 stderr F I0813 20:06:53.851354 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:54.563951803+00:00 stderr F I0813 20:06:54.563666 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)") 2025-08-13T20:06:57.320102145+00:00 stderr F I0813 20:06:57.319606 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:01.233182636+00:00 stderr F I0813 20:07:01.232496 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:17.890728631+00:00 stderr F I0813 20:07:17.881267 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any 
node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:18.637895843+00:00 stderr F I0813 20:07:18.635400 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" 2025-08-13T20:07:21.193390350+00:00 stderr F I0813 20:07:21.178719 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:21.265029694+00:00 stderr F I0813 20:07:21.263587 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" 2025-08-13T20:07:24.311204460+00:00 stderr F I0813 20:07:24.310365 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:24.368491813+00:00 stderr F I0813 20:07:24.367473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" 2025-08-13T20:07:27.080709085+00:00 stderr F I0813 20:07:27.078242 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:27.121975998+00:00 stderr F I0813 20:07:27.118623 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)" 2025-08-13T20:07:32.597969319+00:00 stderr F I0813 20:07:32.596873 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.667490962+00:00 stderr F I0813 20:07:32.658855 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" 2025-08-13T20:07:32.750329137+00:00 stderr F E0813 20:07:32.750225 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:07:32.750329137+00:00 stderr F apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:07:32.759322405+00:00 stderr F I0813 20:07:32.752724 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver 
()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServerDeployment_NoPod::APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.792450235+00:00 stderr F I0813 20:07:32.792049 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:07:34.176923719+00:00 stderr F I0813 20:07:34.170758 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.","reason":"APIServerDeployment_NoPod","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:34.195911053+00:00 stderr F I0813 20:07:34.195827 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: 
endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node." 2025-08-13T20:07:36.311843249+00:00 stderr F I0813 20:07:36.311067 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:07:36Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:36.346835552+00:00 stderr F I0813 20:07:36.346490 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "All is well",Available changed from False to True ("All is well") 2025-08-13T20:08:32.179388330+00:00 stderr F E0813 20:08:32.178583 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.189343265+00:00 stderr F E0813 20:08:32.189232 1 base_controller.go:268] 
auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.203144140+00:00 stderr F E0813 20:08:32.203070 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.228753714+00:00 stderr F E0813 20:08:32.228697 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.275595507+00:00 stderr F E0813 20:08:32.275537 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.289332641+00:00 stderr F E0813 20:08:32.289208 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.360314446+00:00 stderr F E0813 20:08:32.360136 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.527515109+00:00 stderr F E0813 20:08:32.527362 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.856120041+00:00 stderr F E0813 20:08:32.855691 1 base_controller.go:268] auditPolicyController reconciliation 
failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.501560646+00:00 stderr F E0813 20:08:33.501472 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.881626283+00:00 stderr F E0813 20:08:33.881532 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.787856165+00:00 stderr F E0813 20:08:34.786299 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.683511705+00:00 stderr F E0813 20:08:35.682585 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.352861596+00:00 stderr F E0813 20:08:37.352634 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.486967511+00:00 stderr F E0813 20:08:37.486760 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.283191570+00:00 stderr F E0813 20:08:39.282318 1 base_controller.go:268] 
APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.096391326+00:00 stderr F E0813 20:08:41.095863 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.181862167+00:00 stderr F E0813 20:08:42.179122 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.207859713+00:00 stderr F E0813 20:08:42.207708 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.208449180+00:00 stderr F E0813 20:08:42.207955 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.229404461+00:00 stderr F E0813 20:08:42.229189 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.229404461+00:00 stderr F E0813 20:08:42.229343 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.268871782+00:00 stderr F E0813 20:08:42.267704 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.382185411+00:00 stderr F E0813 20:08:42.381628 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.766076528+00:00 stderr F E0813 20:08:42.766028 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.891857444+00:00 stderr F E0813 20:08:42.889247 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.973855744+00:00 stderr F E0813 20:08:42.971113 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.367543341+00:00 stderr F E0813 20:08:43.367458 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.570761568+00:00 stderr F E0813 20:08:43.570658 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.769219928+00:00 stderr F E0813 20:08:43.769164 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.368482849+00:00 stderr F E0813 20:08:44.366601 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.576396050+00:00 stderr F E0813 20:08:44.572921 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 
10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:44.687210467+00:00 stderr F E0813 20:08:44.687127 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.170720400+00:00 stderr F E0813 20:08:45.170091 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.367202854+00:00 stderr F E0813 20:08:45.366522 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.970722057+00:00 stderr F E0813 20:08:45.970545 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" 
(string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.481114281+00:00 stderr F E0813 20:08:46.481039 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.570690708+00:00 stderr F E0813 20:08:46.570554 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:08:46.766928924+00:00 stderr F E0813 20:08:46.766834 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.389880155+00:00 stderr F E0813 20:08:47.387262 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:48.059283578+00:00 stderr F E0813 20:08:48.059137 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:48.287271614+00:00 stderr F E0813 20:08:48.287221 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.328361563+00:00 stderr F E0813 20:08:49.328167 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.350525429+00:00 stderr F E0813 20:08:49.350239 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:50.082487095+00:00 stderr F E0813 20:08:50.081155 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.883630395+00:00 stderr F E0813 20:08:51.883481 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.921413918+00:00 stderr F E0813 20:08:51.921209 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:52.180256249+00:00 stderr F E0813 20:08:52.180202 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.684069314+00:00 stderr F E0813 20:08:53.682653 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.453376391+00:00 stderr F E0813 20:08:54.451082 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.419142742+00:00 stderr F E0813 20:08:56.418869 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.052167092+00:00 stderr F E0813 20:08:57.052043 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:09:29.824090490+00:00 stderr F I0813 20:09:29.822973 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:31.240366727+00:00 stderr F I0813 20:09:31.239942 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.295474821+00:00 stderr F I0813 20:09:36.294940 1 reflector.go:351] Caches populated for *v1.Endpoints from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.472729763+00:00 stderr F I0813 20:09:36.472581 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.011372797+00:00 stderr F I0813 20:09:39.011140 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.190142743+00:00 stderr F I0813 20:09:39.189974 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.949937737+00:00 stderr F I0813 20:09:39.949751 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:40.632052473+00:00 stderr F I0813 20:09:40.631458 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:42.697111970+00:00 stderr F I0813 20:09:42.696118 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:09:44.948264682+00:00 stderr F I0813 20:09:44.947880 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.090233883+00:00 stderr F I0813 20:09:45.090158 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.949887510+00:00 stderr F I0813 20:09:45.949400 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:46.360890224+00:00 stderr F I0813 20:09:46.360329 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:48.613127477+00:00 stderr F I0813 20:09:48.612972 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration 
from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:09:48.923736133+00:00 stderr F I0813 20:09:48.922579 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:49.794622532+00:00 stderr F I0813 20:09:49.793994 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:50.355175894+00:00 stderr F I0813 20:09:50.352768 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:50.502105847+00:00 stderr F I0813 20:09:50.500868 1 reflector.go:351] Caches populated for *v1.OpenShiftAPIServer from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:52.014953081+00:00 stderr F I0813 20:09:52.013366 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:52.469163333+00:00 stderr F I0813 20:09:52.468239 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:09:52.841878240+00:00 stderr F I0813 20:09:52.840162 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:09:53.419225933+00:00 stderr F I0813 20:09:53.416296 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.028120271+00:00 stderr F I0813 20:09:54.026244 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.472161952+00:00 stderr F I0813 20:09:54.471226 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.804352795+00:00 stderr F I0813 
20:09:54.803583 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:55.208708598+00:00 stderr F I0813 20:09:55.208626 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:56.289377132+00:00 stderr F I0813 20:09:56.288969 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:57.105866032+00:00 stderr F I0813 20:09:57.105686 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:05.900208772+00:00 stderr F I0813 20:10:05.899418 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:12.147163668+00:00 stderr F I0813 20:10:12.146472 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.256298709+00:00 stderr F I0813 20:10:15.242093 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:20.755480033+00:00 stderr F I0813 20:10:20.754899 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:21.980853556+00:00 stderr F I0813 20:10:21.980244 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:23.527055176+00:00 stderr F I0813 20:10:23.526690 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:23.859439076+00:00 stderr F I0813 20:10:23.859356 1 reflector.go:351] Caches populated for *v1.Project from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 
2025-08-13T20:10:24.663610433+00:00 stderr F I0813 20:10:24.662181 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:10:26.217124734+00:00 stderr F I0813 20:10:26.216375 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.979341987+00:00 stderr F I0813 20:10:27.979198 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.016644348+00:00 stderr F I0813 20:10:29.016063 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:33.261888743+00:00 stderr F I0813 20:10:33.261552 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:34.479462892+00:00 stderr F I0813 20:10:34.478962 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:34.698461010+00:00 stderr F I0813 20:10:34.697436 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:36.213468297+00:00 stderr F I0813 20:10:36.213066 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:37.684223695+00:00 stderr F I0813 20:10:37.684042 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:11:00.169609486+00:00 stderr F I0813 20:11:00.165362 1 request.go:697] Waited for 1.001815503s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/route.openshift.io/v1 2025-08-13T20:21:41.161724381+00:00 stderr F I0813 20:21:41.160393 1 
request.go:697] Waited for 1.000116633s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/template.openshift.io/v1 2025-08-13T20:42:36.411058991+00:00 stderr F I0813 20:42:36.401963 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415098707+00:00 stderr F I0813 20:42:36.414695 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.416342 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.419879 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.421094 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.441999143+00:00 stderr F I0813 20:42:36.436903 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454961217+00:00 stderr F I0813 20:42:36.452746 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464151 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464389 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464482 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464527 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464587 1 streamwatcher.go:111] Unexpected EOF 
during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464646 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.465696 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.465866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466178 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466359 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466497 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466640 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466733 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466873 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466952 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.467003 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.467068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 
20:42:36.483443 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.483970 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484189 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484396 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.508609013+00:00 stderr F I0813 20:42:36.507478 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.547376721+00:00 stderr F I0813 20:42:36.547282 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.563360252+00:00 stderr F I0813 20:42:36.563269 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.563690941+00:00 stderr F I0813 20:42:36.563595 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.571200458+00:00 stderr F I0813 20:42:36.571058 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.579639761+00:00 stderr F I0813 20:42:36.579547 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.579848497+00:00 stderr F I0813 20:42:36.579727 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.580176156+00:00 stderr F I0813 20:42:36.580117 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.585343085+00:00 stderr F I0813 20:42:36.584706 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.585343085+00:00 stderr F I0813 20:42:36.585101 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.601554873+00:00 stderr F I0813 20:42:36.601410 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604017204+00:00 stderr F I0813 20:42:36.603073 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604017204+00:00 stderr F I0813 20:42:36.603646 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604410115+00:00 stderr F I0813 20:42:36.604202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.605026003+00:00 stderr F I0813 20:42:36.604971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.560395498+00:00 stderr F I0813 20:42:41.559614 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.564204617+00:00 stderr F I0813 20:42:41.562965 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.565885616+00:00 stderr F I0813 20:42:41.565738 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.567214934+00:00 stderr F I0813 20:42:41.567135 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:41.567214934+00:00 stderr F I0813 20:42:41.567198 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-apiserver ... 
2025-08-13T20:42:41.567470661+00:00 stderr F I0813 20:42:41.567393 1 base_controller.go:172] Shutting down StatusSyncer_openshift-apiserver ...
2025-08-13T20:42:41.567531943+00:00 stderr F I0813 20:42:41.567493 1 base_controller.go:172] Shutting down EncryptionKeyController ...
2025-08-13T20:42:41.567531943+00:00 stderr F I0813 20:42:41.567512 1 base_controller.go:172] Shutting down ResourceSyncController ...
2025-08-13T20:42:41.567577785+00:00 stderr F I0813 20:42:41.567549 1 base_controller.go:172] Shutting down ConnectivityCheckController ...
2025-08-13T20:42:41.567591225+00:00 stderr F I0813 20:42:41.567554 1 base_controller.go:172] Shutting down SecretRevisionPruneController ...
2025-08-13T20:42:41.568050328+00:00 stderr F I0813 20:42:41.567536 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:42:41.568158161+00:00 stderr F I0813 20:42:41.567213 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:42:41.568158161+00:00 stderr F I0813 20:42:41.568152 1 base_controller.go:172] Shutting down APIServerStaticResources ...
2025-08-13T20:42:41.568176892+00:00 stderr F I0813 20:42:41.568166 1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:41.568189682+00:00 stderr F I0813 20:42:41.568179 1 base_controller.go:172] Shutting down EncryptionMigrationController ...
2025-08-13T20:42:41.568201383+00:00 stderr F I0813 20:42:41.568193 1 base_controller.go:172] Shutting down EncryptionStateController ...
2025-08-13T20:42:41.568213603+00:00 stderr F I0813 20:42:41.568206 1 base_controller.go:172] Shutting down EncryptionPruneController ...
2025-08-13T20:42:41.568263164+00:00 stderr F I0813 20:42:41.568220 1 base_controller.go:172] Shutting down auditPolicyController ...
2025-08-13T20:42:41.568278355+00:00 stderr F I0813 20:42:41.568267 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ...
2025-08-13T20:42:41.568290495+00:00 stderr F I0813 20:42:41.568279 1 base_controller.go:172] Shutting down OpenShiftAPIServerWorkloadController ...
2025-08-13T20:42:41.568302435+00:00 stderr F I0813 20:42:41.568293 1 base_controller.go:172] Shutting down ConfigObserver ...
2025-08-13T20:42:41.568314006+00:00 stderr F I0813 20:42:41.568305 1 base_controller.go:172] Shutting down EncryptionConditionController ...
2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.567455 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-apiserver controller ...
2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.567589 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.568842 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ...
2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.568869 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated
2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568101 1 base_controller.go:114] Shutting down worker of RevisionController controller ...
2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568948 1 base_controller.go:104] All RevisionController workers have been terminated
2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568319 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ...
2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568964 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-apiserver controller ...
2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568974 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568981 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...
2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568982 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated
2025-08-13T20:42:41.569004836+00:00 stderr F I0813 20:42:41.568996 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ...
2025-08-13T20:42:41.569016266+00:00 stderr F I0813 20:42:41.569008 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:42:41.569027306+00:00 stderr F I0813 20:42:41.569019 1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:42:41.569038277+00:00 stderr F I0813 20:42:41.569030 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ...
2025-08-13T20:42:41.569049237+00:00 stderr F I0813 20:42:41.569038 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ...
2025-08-13T20:42:41.569060767+00:00 stderr F I0813 20:42:41.569046 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ...
2025-08-13T20:42:41.569072158+00:00 stderr F I0813 20:42:41.569059 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ...
2025-08-13T20:42:41.569083528+00:00 stderr F I0813 20:42:41.569067 1 base_controller.go:104] All APIServerStaticResources workers have been terminated
2025-08-13T20:42:41.569130949+00:00 stderr F I0813 20:42:41.569092 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ...
2025-08-13T20:42:41.569130949+00:00 stderr F I0813 20:42:41.569126 1 base_controller.go:114] Shutting down worker of OpenShiftAPIServerWorkloadController controller ...
2025-08-13T20:42:41.569143190+00:00 stderr F I0813 20:42:41.569135 1 base_controller.go:104] All OpenShiftAPIServerWorkloadController workers have been terminated
2025-08-13T20:42:41.569152760+00:00 stderr F I0813 20:42:41.569145 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:42:41.569163740+00:00 stderr F I0813 20:42:41.569153 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ...
2025-08-13T20:42:41.569217322+00:00 stderr F I0813 20:42:41.569184 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated
2025-08-13T20:42:41.569217322+00:00 stderr F I0813 20:42:41.569211 1 base_controller.go:104] All NamespaceFinalizerController_openshift-apiserver workers have been terminated
2025-08-13T20:42:41.569276544+00:00 stderr F I0813 20:42:41.569251 1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:42:41.569276544+00:00 stderr F I0813 20:42:41.569263 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated
2025-08-13T20:42:41.569290064+00:00 stderr F I0813 20:42:41.569274 1 base_controller.go:104] All EncryptionKeyController workers have been terminated
2025-08-13T20:42:41.569290064+00:00 stderr F I0813 20:42:41.569284 1 base_controller.go:104] All EncryptionPruneController workers have been terminated
2025-08-13T20:42:41.569302034+00:00 stderr F I0813 20:42:41.569293 1 base_controller.go:104] All EncryptionStateController workers have been terminated
2025-08-13T20:42:41.569313445+00:00 stderr F I0813 20:42:41.569303 1 base_controller.go:104] All EncryptionConditionController workers have been terminated
2025-08-13T20:42:41.570580971+00:00 stderr F I0813 20:42:41.570511 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:42:41.571554339+00:00 stderr F I0813 20:42:41.571477 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
2025-08-13T20:42:41.571822437+00:00 stderr F I0813 20:42:41.571727 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ...
2025-08-13T20:42:41.571822437+00:00 stderr F I0813 20:42:41.571759 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated
2025-08-13T20:42:41.572262430+00:00 stderr F I0813 20:42:41.572191 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated
2025-08-13T20:42:41.572279030+00:00 stderr F I0813 20:42:41.572263 1 base_controller.go:104] All auditPolicyController workers have been terminated
2025-08-13T20:42:41.572570778+00:00 stderr F I0813 20:42:41.572519 1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:42:41.572683722+00:00 stderr F I0813 20:42:41.572279 1 base_controller.go:150] All StatusSyncer_openshift-apiserver post start hooks have been terminated
2025-08-13T20:42:41.572683722+00:00 stderr F I0813 20:42:41.572664 1 base_controller.go:104] All StatusSyncer_openshift-apiserver workers have been terminated
2025-08-13T20:42:41.573062053+00:00 stderr F I0813 20:42:41.572997 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:42:41.574001360+00:00 stderr F I0813 20:42:41.573930 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2025-08-13T20:42:41.574739081+00:00 stderr F I0813 20:42:41.574010 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:42:41.574865255+00:00 stderr F I0813 20:42:41.574102 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:42:41.574896446+00:00 stderr F I0813 20:42:41.574375 1 secure_serving.go:258] Stopped listening on [::]:8443
2025-08-13T20:42:41.574984268+00:00 stderr F E0813 20:42:41.574407 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574941 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574958 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574974 1 builder.go:329] server exited
2025-08-13T20:42:41.575146793+00:00 stderr F I0813 20:42:41.574458 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:42:41.575146793+00:00 stderr F I0813 20:42:41.573201 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:42:41.576441030+00:00 stderr F I0813 20:42:41.576366 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37378", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_a533f076-9102-4f1c-ac58-2cc3fe6b65c6 stopped leading
2025-08-13T20:42:41.578623853+00:00 stderr F W0813 20:42:41.578497 1 leaderelection.go:84] leader election lost
././@LongLink0000644000000000000000000000034100000000000011601 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-op0000644000175000017500000024575515133657716033116 0ustar zuulzuul
2025-08-13T19:59:34.135123902+00:00 stderr F I0813 19:59:34.115398 1 cmd.go:240] Using service-serving-cert provided certificates
2025-08-13T19:59:34.136554623+00:00 stderr F I0813 19:59:34.136493 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:34.219283961+00:00 stderr F I0813 19:59:34.207593 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:35.552098122+00:00 stderr F I0813 19:59:35.549458 1 builder.go:298] openshift-apiserver-operator version -
2025-08-13T19:59:41.023017032+00:00 stderr F I0813 19:59:41.021715 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:41.023017032+00:00 stderr F W0813 19:59:41.022603 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:41.023017032+00:00 stderr F W0813 19:59:41.022613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:41.121703825+00:00 stderr F I0813 19:59:41.096993 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:41.121703825+00:00 stderr F I0813 19:59:41.097644 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:41.124078033+00:00 stderr F I0813 19:59:41.123190 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:41.133035848+00:00 stderr F I0813 19:59:41.132945 1 leaderelection.go:250] attempting to acquire leader lease openshift-apiserver-operator/openshift-apiserver-operator-lock...
2025-08-13T19:59:41.144280409+00:00 stderr F I0813 19:59:41.136102 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.144507235+00:00 stderr F I0813 19:59:41.144422 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:41.145183715+00:00 stderr F I0813 19:59:41.141975 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:41.145266797+00:00 stderr F I0813 19:59:41.145244 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:41.230075334+00:00 stderr F I0813 19:59:41.206005 1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:41.230075334+00:00 stderr F I0813 19:59:41.207491 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:41.258358011+00:00 stderr F I0813 19:59:41.234516 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:41.507550514+00:00 stderr F I0813 19:59:41.490492 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:41.622481470+00:00 stderr F I0813 19:59:41.562304 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.562884 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.562933 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.594311 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.638918819+00:00 stderr F E0813 19:59:41.638768 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.642404768+00:00 stderr F E0813 19:59:41.642245 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.687986337+00:00 stderr F E0813 19:59:41.686170 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.792514437+00:00 stderr F E0813 19:59:41.790374 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.854587096+00:00 stderr F E0813 19:59:41.854483 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.854587096+00:00 stderr F E0813 19:59:41.854544 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.924636323+00:00 stderr F I0813 19:59:41.889206 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock
2025-08-13T19:59:41.950578953+00:00 stderr F I0813 19:59:41.948675 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:42.022083821+00:00 stderr F I0813 19:59:42.018917 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28297", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_f4e743f1-18d7-4ed5-bf22-f0d7b2e289da became leader
2025-08-13T19:59:42.022083821+00:00 stderr F E0813 19:59:42.019038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.022083821+00:00 stderr F E0813 19:59:42.019069 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.100370823+00:00 stderr F E0813 19:59:42.100311 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.184075459+00:00 stderr F E0813 19:59:42.184009 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.305423498+00:00 stderr F E0813 19:59:42.266711 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.308877906+00:00 stderr F I0813 19:59:42.308767 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:42.324312196+00:00 stderr F I0813 19:59:42.314029 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:42.325739457+00:00 stderr F I0813 19:59:42.325498 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:42.505465430+00:00 stderr F E0813 19:59:42.505124 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.587987932+00:00 stderr F E0813 19:59:42.587167 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.157083323+00:00 stderr F E0813 19:59:43.155147 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.243961490+00:00 stderr F E0813 19:59:43.242357 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.423531584+00:00 stderr F I0813 19:59:44.350010 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController
2025-08-13T19:59:44.423531584+00:00 stderr F I0813 19:59:44.370098 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T19:59:44.425127769+00:00 stderr F I0813 19:59:44.425086 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController
2025-08-13T19:59:44.425207902+00:00 stderr F I0813 19:59:44.425189 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.428650 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452637 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452689 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452706 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452940 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453112 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453130 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453280 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453303 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453360 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453383 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453447 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController
2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453460 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController
2025-08-13T19:59:44.499400267+00:00 stderr F E0813 19:59:44.497287 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.529484614+00:00 stderr F E0813 19:59:44.528348 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.637050180+00:00 stderr F I0813 19:59:44.636109 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources
2025-08-13T19:59:44.680926361+00:00 stderr F I0813 19:59:44.678565 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver
2025-08-13T19:59:45.346473993+00:00 stderr F I0813 19:59:45.333954 1 trace.go:236] Trace[880847307]: "DeltaFIFO Pop Process" ID:config-operator,Depth:22,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.152) (total time: 180ms):
2025-08-13T19:59:45.346473993+00:00 stderr F Trace[880847307]: [180.65508ms] [180.65508ms] END
2025-08-13T19:59:45.347405809+00:00 stderr F I0813 19:59:45.347363 1 request.go:697] Waited for 1.074015845s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps?limit=500&resourceVersion=0
2025-08-13T19:59:46.881152029+00:00 stderr F I0813 19:59:46.880389 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver
2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.911315 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ...
2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.880450 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController
2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.911661 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ...
2025-08-13T19:59:47.039904944+00:00 stderr F I0813 19:59:47.038817 1 base_controller.go:73] Caches are synced for APIServerStaticResources
2025-08-13T19:59:47.039904944+00:00 stderr F I0813 19:59:47.038944 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ...
2025-08-13T19:59:47.050216138+00:00 stderr F I0813 19:59:47.049173 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T19:59:47.050216138+00:00 stderr F I0813 19:59:47.049312 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T19:59:47.268890551+00:00 stderr F I0813 19:59:47.268767 1 base_controller.go:73] Caches are synced for EncryptionConditionController
2025-08-13T19:59:47.268993164+00:00 stderr F I0813 19:59:47.268972 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ...
2025-08-13T19:59:47.269063136+00:00 stderr F I0813 19:59:47.269043 1 base_controller.go:73] Caches are synced for EncryptionKeyController
2025-08-13T19:59:47.269097157+00:00 stderr F I0813 19:59:47.269085 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ...
2025-08-13T19:59:47.318959819+00:00 stderr F E0813 19:59:47.282138 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.318959819+00:00 stderr F E0813 19:59:47.282332 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.359016360+00:00 stderr F I0813 19:59:47.358950 1 base_controller.go:73] Caches are synced for EncryptionMigrationController
2025-08-13T19:59:47.359111993+00:00 stderr F I0813 19:59:47.359088 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ...
2025-08-13T19:59:47.385992089+00:00 stderr F I0813 19:59:47.365671 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T19:59:47.385992089+00:00 stderr F I0813 19:59:47.365900 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.147306 1 base_controller.go:73] Caches are synced for EncryptionStateController
2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.152040 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ...
2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151264 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.152162 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151273 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.153572 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151281 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.153614 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:48.178754988+00:00 stderr F I0813 19:59:48.178520 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T19:59:48.178984295+00:00 stderr F I0813 19:59:48.178887 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.189988 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151349 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190079 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151410 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190101 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151426 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190641 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T19:59:48.201255449+00:00 stderr F I0813 19:59:48.201208 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:48.201310881+00:00 stderr F I0813 19:59:48.151535 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:48.201365282+00:00 stderr F I0813 19:59:48.201348 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T19:59:48.379765438+00:00 stderr F I0813 19:59:48.377924 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.679660601+00:00 stderr F I0813 19:59:52.678581 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()") 2025-08-13T19:59:52.815968576+00:00 stderr F I0813 19:59:52.814213 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.856690037+00:00 stderr F I0813 19:59:52.856631 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 
2025-08-13T19:59:52.947888527+00:00 stderr F I0813 19:59:52.930025 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985768 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:52.985684334 +0000 UTC))" 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985892 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:52.98587307 +0000 UTC))" 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985914 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.98589904 +0000 UTC))" 2025-08-13T19:59:52.986020194+00:00 stderr F I0813 19:59:52.985939 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" 
[] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.985919701 +0000 UTC))" 2025-08-13T19:59:52.989767381+00:00 stderr F I0813 19:59:52.989708 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.985948382 +0000 UTC))" 2025-08-13T19:59:52.989870384+00:00 stderr F I0813 19:59:52.989827 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.98974213 +0000 UTC))" 2025-08-13T19:59:52.989922125+00:00 stderr F I0813 19:59:52.989879 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989863123 +0000 UTC))" 2025-08-13T19:59:52.989934526+00:00 stderr F I0813 19:59:52.989918 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989903795 +0000 UTC))" 2025-08-13T19:59:52.989986507+00:00 stderr F I0813 19:59:52.989946 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989924315 +0000 UTC))" 2025-08-13T19:59:52.990365098+00:00 stderr F I0813 19:59:52.990315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:52.990295666 +0000 UTC))" 2025-08-13T19:59:53.015943127+00:00 stderr F I0813 19:59:52.990714 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 19:59:52.990651836 +0000 UTC))" 2025-08-13T19:59:53.044103870+00:00 stderr F I0813 19:59:52.941961 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.150721773+00:00 stderr F I0813 19:59:54.136572 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.150721773+00:00 stderr F I0813 19:59:54.141652 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.186460882+00:00 stderr F I0813 19:59:54.186325 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:54.186460882+00:00 stderr F I0813 19:59:54.186427 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:55.132750746+00:00 stderr F I0813 19:59:55.131992 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.849174409+00:00 stderr F I0813 19:59:55.847137 1 trace.go:236] Trace[896855870]: "Reflector ListAndWatch" name:k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 (13-Aug-2025 19:59:44.428) (total time: 11413ms): 2025-08-13T19:59:55.849174409+00:00 stderr F Trace[896855870]: ---"Objects listed" 
error: 11412ms (19:59:55.841) 2025-08-13T19:59:55.849174409+00:00 stderr F Trace[896855870]: [11.413184959s] [11.413184959s] END 2025-08-13T19:59:55.849174409+00:00 stderr F I0813 19:59:55.847586 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T19:59:55.852177834+00:00 stderr F I0813 19:59:55.850951 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:59:55.852177834+00:00 stderr F I0813 19:59:55.851008 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T19:59:56.081976655+00:00 stderr F I0813 19:59:56.063059 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)" 2025-08-13T20:00:00.935404843+00:00 stderr F I0813 20:00:00.926421 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name 
\"https\"","reason":"APIServerDeployment_NoPod::APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:01.007640322+00:00 stderr F E0813 20:00:01.007204 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F 
apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.324598297+00:00 stderr F I0813 20:00:01.321168 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses 
with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:00:01.389367993+00:00 stderr F I0813 20:00:01.389240 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in 
\"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:01.733756640+00:00 stderr F I0813 20:00:01.729465 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with 
port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:01.733756640+00:00 stderr F I0813 20:00:01.702742 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in 
\"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name 
\"https\"" 2025-08-13T20:00:01.893116002+00:00 stderr F E0813 20:00:01.892352 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.341122666+00:00 stderr F I0813 20:00:02.338716 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:02.380369065+00:00 stderr F E0813 20:00:02.378934 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:02.898135048+00:00 stderr F I0813 20:00:02.894040 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All 
is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:02.960055244+00:00 stderr F E0813 20:00:02.959041 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.961464364+00:00 stderr F I0813 20:00:02.960628 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name 
\"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:00:05.445899825+00:00 stderr F I0813 20:00:05.444491 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:05Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:05.525944197+00:00 stderr F I0813 20:00:05.520820 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") 2025-08-13T20:00:05.783362207+00:00 stderr F I0813 20:00:05.767722 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.767683 +0000 UTC))" 2025-08-13T20:00:05.783543412+00:00 stderr F I0813 20:00:05.783514 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.78345736 +0000 UTC))" 2025-08-13T20:00:05.783626085+00:00 stderr F I0813 20:00:05.783603 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" 
(2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783570633 +0000 UTC))" 2025-08-13T20:00:05.783685886+00:00 stderr F I0813 20:00:05.783668 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783647445 +0000 UTC))" 2025-08-13T20:00:05.783761848+00:00 stderr F I0813 20:00:05.783741 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783705937 +0000 UTC))" 2025-08-13T20:00:05.784002945+00:00 stderr F I0813 20:00:05.783985 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783963184 +0000 UTC))" 2025-08-13T20:00:05.784071537+00:00 stderr F I0813 20:00:05.784056 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.784040176 +0000 UTC))" 2025-08-13T20:00:05.798856269+00:00 stderr F I0813 20:00:05.784390 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.784092188 +0000 UTC))" 2025-08-13T20:00:05.799622201+00:00 stderr F I0813 20:00:05.799596 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.799511798 +0000 UTC))" 2025-08-13T20:00:05.799693793+00:00 stderr F I0813 20:00:05.799679 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799652172 +0000 UTC))" 2025-08-13T20:00:05.800171056+00:00 stderr F I0813 20:00:05.800144 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] 
validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.800123445 +0000 UTC))" 2025-08-13T20:00:05.800500196+00:00 stderr F I0813 20:00:05.800476 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 20:00:05.800458495 +0000 UTC))" 2025-08-13T20:00:28.273069989+00:00 stderr F I0813 20:00:28.266938 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:28.273069989+00:00 stderr F I0813 20:00:28.272188 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.272753 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:28.272702608 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275316 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:28.275274432 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275353 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:28.275331893 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275377 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:28.275358604 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275399 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275385875 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275428 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275405685 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275448 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275435676 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275467 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275454277 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275494 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:28.275476947 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275524 1 tlsconfig.go:178] "Loaded client 
CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275506228 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275954 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2025-08-13 20:00:28.27593466 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.279726 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 20:00:28.279546053 +0000 UTC))" 2025-08-13T20:00:28.309754775+00:00 stderr F I0813 20:00:28.308236 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:29.270989773+00:00 stderr F I0813 20:00:29.263950 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="32d40e5d8e7856640e9aa0fa689da20ce17efb0f940d3f20467677d758499f97", new="619bd1166efa93cd4a6ecce49845ad14ea28915925e46f2a1ae6f0f79bf4e301") 
2025-08-13T20:00:29.270989773+00:00 stderr F W0813 20:00:29.264566 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:00:29.270989773+00:00 stderr F I0813 20:00:29.264640 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="2deeedad0b20e8edd995d2b2452184a3a6d229c608ee3d8e51038616f572694e", new="cc755c6895764b0695467ba8a77e5ba8598a858e253fc2c3e013de460f5584c5") 2025-08-13T20:00:29.271215499+00:00 stderr F I0813 20:00:29.271181 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:00:29.271295661+00:00 stderr F I0813 20:00:29.271281 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:00:29.271485157+00:00 stderr F I0813 20:00:29.271434 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:00:29.271501527+00:00 stderr F I0813 20:00:29.271493 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:00:29.271551519+00:00 stderr F I0813 20:00:29.271514 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:00:29.271562879+00:00 stderr F I0813 20:00:29.271550 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:00:29.271682982+00:00 stderr F I0813 20:00:29.271572 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:00:29.271682982+00:00 stderr F I0813 20:00:29.271586 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-apiserver ... 2025-08-13T20:00:29.271738294+00:00 stderr F I0813 20:00:29.271698 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 
2025-08-13T20:00:29.271738294+00:00 stderr F I0813 20:00:29.271732 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271748 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271770 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271828 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:29.271880868+00:00 stderr F I0813 20:00:29.271858 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:00:29.271880868+00:00 stderr F I0813 20:00:29.271874 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:00:29.271890288+00:00 stderr F I0813 20:00:29.271881 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:00:29.271899219+00:00 stderr F I0813 20:00:29.271889 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 2025-08-13T20:00:29.271899219+00:00 stderr F I0813 20:00:29.271895 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:00:29.271908319+00:00 stderr F I0813 20:00:29.271902 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T20:00:29.271917359+00:00 stderr F I0813 20:00:29.271908 1 base_controller.go:104] All NamespaceFinalizerController_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.271983701+00:00 stderr F I0813 20:00:29.271959 1 base_controller.go:172] Shutting down ConfigObserver ... 
2025-08-13T20:00:29.272080664+00:00 stderr F I0813 20:00:29.272047 1 base_controller.go:172] Shutting down OpenShiftAPIServerWorkloadController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273043 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273101 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273117 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273129 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273144 1 base_controller.go:172] Shutting down StatusSyncer_openshift-apiserver ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273149 1 base_controller.go:150] All StatusSyncer_openshift-apiserver post start hooks have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273175 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273188 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273201 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273210 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273217 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273226 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 
2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273235 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273240 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273249 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273255 1 base_controller.go:104] All StatusSyncer_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273263 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273267 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273274 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273279 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273307 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273318 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 
2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273323 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273412 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273463 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273552 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273577 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273602 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273725 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 
2025-08-13T20:00:29.276727666+00:00 stderr F I0813 20:00:29.276689 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:29.277242271+00:00 stderr F I0813 20:00:29.277217 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:00:29.277347174+00:00 stderr F I0813 20:00:29.277326 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:29.280376320+00:00 stderr F I0813 20:00:29.277485 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:00:29.281461021+00:00 stderr F E0813 20:00:29.281299 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:29.281558024+00:00 stderr F I0813 20:00:29.281539 1 base_controller.go:114] Shutting down worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.281357 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282340 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282541 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282553 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294267 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294408 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294470 1 builder.go:329] server exited 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294494 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294505 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:00:29.325711713+00:00 stderr F I0813 20:00:29.325682 1 base_controller.go:104] All OpenShiftAPIServerWorkloadController workers have been terminated 2025-08-13T20:00:29.325923179+00:00 stderr F I0813 20:00:29.325901 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:00:29.326444954+00:00 stderr F I0813 20:00:29.326421 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 
2025-08-13T20:00:29.326606559+00:00 stderr F E0813 20:00:29.326582 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/controlplane.operator.openshift.io/v1alpha1/namespaces/openshift-apiserver/podnetworkconnectivitychecks": context canceled
2025-08-13T20:00:29.326673950+00:00 stderr F I0813 20:00:29.326657 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ...
2025-08-13T20:00:29.326720562+00:00 stderr F I0813 20:00:29.326704 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated
2025-08-13T20:00:29.326878466+00:00 stderr F I0813 20:00:29.326859 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:00:29.326955608+00:00 stderr F I0813 20:00:29.326939 1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:00:29.406027283+00:00 stderr F W0813 20:00:29.405506 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412/cert-manager-webhook/0.log
2026-01-20T10:56:42.147144053+00:00 stderr F I0120 10:56:42.146903 1 webhook.go:47] "starting cert-manager webhook" logger="cert-manager.webhook.webhook" version="v1.19.2" git_commit="6e38ee57a338a1f27bb724ddb5933f4b8e23e567" go_version="go1.25.5" platform="linux/amd64"
2026-01-20T10:56:42.147704238+00:00 stderr F I0120 10:56:42.147651 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
2026-01-20T10:56:42.147704238+00:00 stderr F I0120 10:56:42.147679 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
2026-01-20T10:56:42.147704238+00:00 stderr F I0120 10:56:42.147684 1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
2026-01-20T10:56:42.147704238+00:00 stderr F I0120 10:56:42.147688 1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
2026-01-20T10:56:42.147704238+00:00 stderr F I0120 10:56:42.147693 1 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
2026-01-20T10:56:42.149606489+00:00 stderr F I0120 10:56:42.149568 1 webhook.go:132] "using dynamic certificate generating using CA stored in Secret resource" logger="cert-manager.webhook.webhook" secret_namespace="cert-manager" secret_name="cert-manager-webhook-ca"
2026-01-20T10:56:42.149606489+00:00 stderr F I0120 10:56:42.149589 1 webhook.go:144] "serving insecurely as tls certificate data not provided" logger="cert-manager.webhook.webhook"
2026-01-20T10:56:42.151768308+00:00 stderr F I0120 10:56:42.151515 1 server.go:193] "listening for insecure healthz connections" logger="cert-manager.webhook" address=6080
2026-01-20T10:56:42.151862570+00:00 stderr F I0120 10:56:42.151833 1 server.go:183] "Registering webhook" logger="cert-manager.controller-runtime.webhook" path="/mutate"
2026-01-20T10:56:42.151923482+00:00 stderr F I0120 10:56:42.151896 1 server.go:183] "Registering webhook" logger="cert-manager.controller-runtime.webhook" path="/validate"
2026-01-20T10:56:42.151957903+00:00 stderr F I0120 10:56:42.151939 1 server.go:208] "Starting metrics server" logger="cert-manager.controller-runtime.metrics"
2026-01-20T10:56:42.152151738+00:00 stderr F I0120 10:56:42.152105 1 server.go:191] "Starting webhook server" logger="cert-manager.controller-runtime.webhook"
2026-01-20T10:56:42.152151738+00:00 stderr F I0120 10:56:42.152097 1 server.go:247] "Serving metrics server" logger="cert-manager.controller-runtime.metrics" bindAddress="0.0.0.0:9402" secure=false
2026-01-20T10:56:42.152400314+00:00 stderr F I0120 10:56:42.152339 1 server.go:242] "Serving webhook server" logger="cert-manager.controller-runtime.webhook" host="" port=10250
2026-01-20T10:56:42.156433992+00:00 stderr F E0120 10:56:42.156380 1 dynamic_source.go:221] "Failed to generate serving certificate, retrying..." err="no tls.Certificate available yet, try again later" logger="cert-manager" interval="1s"
2026-01-20T10:56:42.189977555+00:00 stderr F I0120 10:56:42.189884 1 reflector.go:436] "Caches populated" type="*v1.Secret" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290"
2026-01-20T10:56:42.255123439+00:00 stderr F I0120 10:56:42.254316 1 authority.go:265] "Will regenerate CA" logger="cert-manager" reason="CA secret not found"
2026-01-20T10:56:42.262075425+00:00 stderr F I0120 10:56:42.261978 1 authority.go:403] "Created new root CA Secret" logger="cert-manager"
2026-01-20T10:56:42.263049232+00:00 stderr F I0120 10:56:42.263016 1 authority.go:285] "Detected change in CA secret data, update current CA data and notify watches" logger="cert-manager"
2026-01-20T10:56:43.159007977+00:00 stderr F I0120 10:56:43.158860 1 dynamic_source.go:289] "Updated cert-manager TLS certificate" logger="cert-manager" DNSNames=["cert-manager-webhook","cert-manager-webhook.cert-manager","cert-manager-webhook.cert-manager.svc"]
2026-01-20T10:56:43.159007977+00:00 stderr F I0120 10:56:43.158946 1 dynamic_source.go:172] "Detected root CA rotation - regenerating serving certificates" logger="cert-manager"
2026-01-20T10:56:43.164966358+00:00 stderr F I0120 10:56:43.164656 1 dynamic_source.go:289] "Updated cert-manager TLS certificate" logger="cert-manager" DNSNames=["cert-manager-webhook","cert-manager-webhook.cert-manager","cert-manager-webhook.cert-manager.svc"]
2026-01-20T10:56:59.038784534+00:00 stderr F I0120 10:56:59.038165 1 ???:1] "http: TLS handshake error from 10.217.0.7:42926: remote error: tls: bad certificate"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.log
2026-01-20T10:49:36.013903690+00:00 stderr F I0120 10:49:36.012685 1 cmd.go:47] Starting console conversion webhook server
2026-01-20T10:49:36.015667564+00:00 stderr F I0120 10:49:36.014508 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher"
2026-01-20T10:49:36.015667564+00:00 stderr F I0120 10:49:36.015198 1 cmd.go:93] Serving on [::]:9443
2026-01-20T10:49:36.015801698+00:00 stderr F I0120 10:49:36.015685 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.log
2025-08-13T19:59:37.777127617+00:00 stderr F I0813 19:59:37.775509 1 cmd.go:47] Starting console conversion webhook server
2025-08-13T19:59:37.955172273+00:00 stderr F I0813 19:59:37.955027 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher"
2025-08-13T19:59:37.956180592+00:00 stderr F I0813 19:59:37.956106 1 cmd.go:93] Serving on [::]:9443
2025-08-13T19:59:37.957682034+00:00 stderr F I0813 19:59:37.957611 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher"
2025-08-13T20:00:50.382659807+00:00 stderr F I0813 20:00:50.373360 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher"
2025-08-13T20:00:50.382659807+00:00 stderr F I0813 20:00:50.374426 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.log
2025-08-13T20:07:25.650906231+00:00 stderr F I0813 20:07:25.647078 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc00086f720 max-eligible-revision:0xc00086f4a0 protected-revisions:0xc00086f540 resource-dir:0xc00086f5e0 static-pod-name:0xc00086f680 v:0xc00086fe00] [0xc00086fe00 0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f720 0xc00086f680] [] map[cert-dir:0xc00086f720 help:0xc0008821e0 log-flush-frequency:0xc00086fd60 max-eligible-revision:0xc00086f4a0 protected-revisions:0xc00086f540 resource-dir:0xc00086f5e0 static-pod-name:0xc00086f680 v:0xc00086fe00 vmodule:0xc00086fea0] [0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f680 0xc00086f720 0xc00086fd60 0xc00086fe00 0xc00086fea0 0xc0008821e0] [0xc00086f720 0xc0008821e0 0xc00086fd60 0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f680 0xc00086fe00 0xc00086fea0] map[104:0xc0008821e0 118:0xc00086fe00] [] -1 0 0xc00085e870 true 0x73b100 []}
2025-08-13T20:07:25.650906231+00:00 stderr F I0813 20:07:25.648509 1 cmd.go:41] (*prune.PruneOptions)(0xc00085d8b0)({
2025-08-13T20:07:25.650906231+00:00 stderr F  MaxEligibleRevision: (int) 11,
2025-08-13T20:07:25.650906231+00:00 stderr F  ProtectedRevisions: ([]int) (len=6 cap=6) {
2025-08-13T20:07:25.650906231+00:00 stderr F   (int) 6,
2025-08-13T20:07:25.650906231+00:00 stderr F   (int) 7,
2025-08-13T20:07:25.650906231+00:00 stderr F   (int) 8,
2025-08-13T20:07:25.650906231+00:00 stderr F   (int) 9,
2025-08-13T20:07:25.650906231+00:00 stderr F   (int) 10,
2025-08-13T20:07:25.650906231+00:00 stderr F   (int) 11
2025-08-13T20:07:25.650906231+00:00 stderr F  },
2025-08-13T20:07:25.650906231+00:00 stderr F  ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
2025-08-13T20:07:25.650906231+00:00 stderr F  CertDir: (string) (len=29) "kube-controller-manager-certs",
2025-08-13T20:07:25.650906231+00:00 stderr F  StaticPodName: (string) (len=27) "kube-controller-manager-pod"
2025-08-13T20:07:25.650906231+00:00 stderr F })
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log
2026-01-20T10:49:37.874769920+00:00 stdout F Copying system trust bundle
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239315 1 feature_gate.go:239] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
2026-01-20T10:49:39.242083218+00:00 stderr F I0120 10:49:39.239355 1 feature_gate.go:249] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239383 1 config.go:121] Ignoring unknown FeatureGate "NetworkDiagnosticsConfig"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239388 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderExternal"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239391 1 config.go:121] Ignoring unknown FeatureGate "ExternalOIDC"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239395 1 config.go:121] Ignoring unknown FeatureGate "ExternalRouteCertificate"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239398 1 config.go:121] Ignoring unknown FeatureGate "BuildCSIVolumes"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239401 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstall"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239405 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallGCP"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239408 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallPowerVS"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239411 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallVSphere"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239414 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfigAPI"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239418 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIOperatorDisableMachineHealthCheckController"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239422 1 config.go:121] Ignoring unknown FeatureGate "MachineConfigNodes"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239425 1 config.go:121] Ignoring unknown FeatureGate "SigstoreImageVerification"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239429 1 config.go:121] Ignoring unknown FeatureGate "UpgradeStatus"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239432 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallOpenStack"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239435 1 config.go:121] Ignoring unknown FeatureGate "DNSNameResolver"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239438 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderGCP"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239442 1 config.go:121] Ignoring unknown FeatureGate "GatewayAPI"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239445 1 config.go:121] Ignoring unknown FeatureGate "PinnedImages"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239448 1 config.go:121] Ignoring unknown FeatureGate "AzureWorkloadIdentity"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239452 1 config.go:121] Ignoring unknown FeatureGate "BareMetalLoadBalancer"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239455 1 config.go:121] Ignoring unknown FeatureGate "ImagePolicy"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239458 1 config.go:121] Ignoring unknown FeatureGate "MetricsCollectionProfiles"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239465 1 config.go:121] Ignoring unknown FeatureGate "NodeDisruptionPolicy"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239468 1 config.go:121] Ignoring unknown FeatureGate "VSphereStaticIPs"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239472 1 config.go:121] Ignoring unknown FeatureGate "CSIDriverSharedResource"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239475 1 config.go:121] Ignoring unknown FeatureGate "OpenShiftPodSecurityAdmission"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239478 1 config.go:121] Ignoring unknown FeatureGate "AlibabaPlatform"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239481 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAWS"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239485 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAzure"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239488 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderAzure"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239491 1 config.go:121] Ignoring unknown FeatureGate "NetworkLiveMigration"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239494 1 config.go:121] Ignoring unknown FeatureGate "Example"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239498 1 config.go:121] Ignoring unknown FeatureGate "InsightsOnDemandDataGather"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239501 1 config.go:121] Ignoring unknown FeatureGate "InstallAlternateInfrastructureAWS"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239504 1 config.go:121] Ignoring unknown FeatureGate "ManagedBootImages"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239508 1 config.go:121] Ignoring unknown FeatureGate "VSphereMultiVCenters"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239511 1 config.go:121] Ignoring unknown FeatureGate "VolumeGroupSnapshot"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239514 1 config.go:121] Ignoring unknown FeatureGate "AdminNetworkPolicy"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239518 1 config.go:121] Ignoring unknown FeatureGate "EtcdBackendQuota"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239521 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallNutanix"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239524 1 config.go:121] Ignoring unknown FeatureGate "GCPLabelsTags"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239527 1 config.go:121] Ignoring unknown FeatureGate "VSphereControlPlaneMachineSet"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239531 1 config.go:121] Ignoring unknown FeatureGate "MetricsServer"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239534 1 config.go:121] Ignoring unknown FeatureGate "MixedCPUsAllocation"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239537 1 config.go:121] Ignoring unknown FeatureGate "VSphereDriverConfiguration"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239540 1 config.go:121] Ignoring unknown FeatureGate "ChunkSizeMiB"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239543 1 config.go:121] Ignoring unknown FeatureGate "HardwareSpeed"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239547 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfig"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239550 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIProviderOpenStack"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239553 1 config.go:121] Ignoring unknown FeatureGate "NewOLM"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239556 1 config.go:121] Ignoring unknown FeatureGate "AutomatedEtcdBackup"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239559 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallIBMCloud"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239563 1 config.go:121] Ignoring unknown FeatureGate "GCPClusterHostedDNS"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239566 1 config.go:121] Ignoring unknown FeatureGate "PlatformOperators"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239569 1 config.go:121] Ignoring unknown FeatureGate "SignatureStores"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239572 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProvider"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239576 1 config.go:121] Ignoring unknown FeatureGate "PrivateHostedZoneAWS"
2026-01-20T10:49:39.242083218+00:00 stderr F W0120 10:49:39.239579 1 config.go:121] Ignoring unknown FeatureGate "OnClusterBuild"
2026-01-20T10:49:39.270510544+00:00 stderr F I0120 10:49:39.270421 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2026-01-20T10:49:39.876107970+00:00 stderr F I0120 10:49:39.875248 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2026-01-20T10:49:39.884224577+00:00 stderr F I0120 10:49:39.884103 1 audit.go:340] Using audit backend: ignoreErrors
2026-01-20T10:49:39.886241748+00:00 stderr F I0120 10:49:39.886190 1 plugins.go:83] "Registered admission plugin" plugin="NamespaceLifecycle"
2026-01-20T10:49:39.886241748+00:00 stderr F I0120 10:49:39.886211 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionWebhook"
2026-01-20T10:49:39.886241748+00:00 stderr F I0120 10:49:39.886217 1 plugins.go:83] "Registered admission plugin" plugin="MutatingAdmissionWebhook"
2026-01-20T10:49:39.886241748+00:00 stderr F I0120 10:49:39.886223 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionPolicy"
2026-01-20T10:49:39.897459580+00:00 stderr F I0120 10:49:39.897396 1 admission.go:48] Admission plugin "project.openshift.io/ProjectRequestLimit" is not configured so it will be disabled.
2026-01-20T10:49:39.903660269+00:00 stderr F I0120 10:49:39.903600 1 plugins.go:157] Loaded 5 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,build.openshift.io/BuildConfigSecretInjector,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,MutatingAdmissionWebhook.
2026-01-20T10:49:39.903720671+00:00 stderr F I0120 10:49:39.903694 1 plugins.go:160] Loaded 9 validating admission controller(s) successfully in the following order: OwnerReferencesPermissionEnforcement,build.openshift.io/BuildConfigSecretInjector,build.openshift.io/BuildByStrategy,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,quota.openshift.io/ClusterResourceQuota,route.openshift.io/RequiredRouteAnnotations,ValidatingAdmissionWebhook,ResourceQuota.
2026-01-20T10:49:39.904660290+00:00 stderr F I0120 10:49:39.904624 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:39.904660290+00:00 stderr F I0120 10:49:39.904643 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:39.904679890+00:00 stderr F I0120 10:49:39.904664 1 maxinflight.go:116] "Set denominator for readonly requests" limit=3000
2026-01-20T10:49:39.904679890+00:00 stderr F I0120 10:49:39.904669 1 maxinflight.go:120] "Set denominator for mutating requests" limit=1500
2026-01-20T10:49:39.928324480+00:00 stderr F I0120 10:49:39.928252 1 store.go:1579] "Monitoring resource count at path" resource="deploymentconfigs.apps.openshift.io" path="//deploymentconfigs"
2026-01-20T10:49:39.930945540+00:00 stderr F I0120 10:49:39.930329 1 cacher.go:451] cacher (deploymentconfigs.apps.openshift.io): initialized
2026-01-20T10:49:39.930945540+00:00 stderr F I0120 10:49:39.930364 1 reflector.go:351] Caches populated for *apps.DeploymentConfig from storage/cacher.go:/deploymentconfigs
2026-01-20T10:49:39.933305852+00:00 stderr F I0120 10:49:39.933257 1 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager
2026-01-20T10:49:39.933374064+00:00 stderr F I0120 10:49:39.933345 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:39.933374064+00:00 stderr F I0120 10:49:39.933358 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:39.939199791+00:00 stderr F I0120 10:49:39.937771 1 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager
2026-01-20T10:49:39.939199791+00:00 stderr F I0120 10:49:39.937835 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:39.939199791+00:00 stderr F I0120 10:49:39.937843 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:39.944054539+00:00 stderr F I0120 10:49:39.942161 1 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager
2026-01-20T10:49:39.944054539+00:00 stderr F I0120 10:49:39.942271 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:39.944054539+00:00 stderr F I0120 10:49:39.942284 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:39.957870030+00:00 stderr F I0120 10:49:39.957797 1 store.go:1579] "Monitoring resource count at path" resource="routes.route.openshift.io" path="//routes"
2026-01-20T10:49:39.958979334+00:00 stderr F I0120 10:49:39.958932 1 handler.go:275] Adding GroupVersion route.openshift.io v1 to ResourceManager
2026-01-20T10:49:39.959042536+00:00 stderr F I0120 10:49:39.959010 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:39.959042536+00:00 stderr F I0120 10:49:39.959021 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:39.964499832+00:00 stderr F I0120 10:49:39.964437 1 cacher.go:451] cacher (routes.route.openshift.io): initialized
2026-01-20T10:49:39.964499832+00:00 stderr F I0120 10:49:39.964473 1 reflector.go:351] Caches populated for *route.Route from storage/cacher.go:/routes
2026-01-20T10:49:39.968311129+00:00 stderr F I0120 10:49:39.967703 1 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.openshift.io" path="//rangeallocations"
2026-01-20T10:49:39.971356381+00:00 stderr F I0120 10:49:39.971312 1 cacher.go:451] cacher (rangeallocations.security.openshift.io): initialized
2026-01-20T10:49:39.971356381+00:00 stderr F I0120 10:49:39.971334 1 reflector.go:351] Caches populated for *security.RangeAllocation from storage/cacher.go:/rangeallocations
2026-01-20T10:49:39.977650323+00:00 stderr F I0120 10:49:39.976607 1 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager
2026-01-20T10:49:39.977650323+00:00 stderr F I0120 10:49:39.976757 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:39.977650323+00:00 stderr F I0120 10:49:39.976769 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:39.995318651+00:00 stderr F I0120 10:49:39.995258 1 store.go:1579] "Monitoring resource count at path" resource="builds.build.openshift.io" path="//builds"
2026-01-20T10:49:39.997957041+00:00 stderr F I0120 10:49:39.997433 1 cacher.go:451] cacher (builds.build.openshift.io): initialized
2026-01-20T10:49:39.997957041+00:00 stderr F I0120 10:49:39.997463 1 reflector.go:351] Caches populated for *build.Build from storage/cacher.go:/builds
2026-01-20T10:49:40.004568763+00:00 stderr F I0120 10:49:40.003695 1 store.go:1579] "Monitoring resource count at path" resource="buildconfigs.build.openshift.io" path="//buildconfigs"
2026-01-20T10:49:40.010143603+00:00 stderr F I0120 10:49:40.005845 1 cacher.go:451] cacher (buildconfigs.build.openshift.io): initialized
2026-01-20T10:49:40.010143603+00:00 stderr F I0120 10:49:40.005869 1 reflector.go:351] Caches populated for *build.BuildConfig from storage/cacher.go:/buildconfigs
2026-01-20T10:49:40.010143603+00:00 stderr F I0120 10:49:40.006976 1 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager
2026-01-20T10:49:40.010143603+00:00 stderr F I0120 10:49:40.007048 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:40.010143603+00:00 stderr F I0120 10:49:40.007082 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:40.022511889+00:00 stderr F I0120 10:49:40.022450 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca, incoming err: 
2026-01-20T10:49:40.022511889+00:00 stderr F I0120 10:49:40.022473 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca
2026-01-20T10:49:40.022511889+00:00 stderr F I0120 10:49:40.022504 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123, incoming err: 
2026-01-20T10:49:40.022541510+00:00 stderr F I0120 10:49:40.022509 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123
2026-01-20T10:49:40.022541510+00:00 stderr F I0120 10:49:40.022521 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/default-route-openshift-image-registry.apps-crc.testing, incoming err: 
2026-01-20T10:49:40.022610852+00:00 stderr F I0120 10:49:40.022592 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc..5000, incoming err: 
2026-01-20T10:49:40.022673204+00:00 stderr F I0120 10:49:40.022651 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 
2026-01-20T10:49:40.022730336+00:00 stderr F I0120 10:49:40.022709 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..data, incoming err: 
2026-01-20T10:49:40.022730336+00:00 stderr F I0120 10:49:40.022718 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..data
2026-01-20T10:49:40.022741096+00:00 stderr F I0120 10:49:40.022728 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing, incoming err: 
2026-01-20T10:49:40.022741096+00:00 stderr F I0120 10:49:40.022733 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing
2026-01-20T10:49:40.022750116+00:00 stderr F I0120 10:49:40.022742 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000, incoming err: 
2026-01-20T10:49:40.022750116+00:00 stderr F I0120 10:49:40.022746 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000
2026-01-20T10:49:40.022763777+00:00 stderr F I0120 10:49:40.022756 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 
2026-01-20T10:49:40.022771327+00:00 stderr F I0120 10:49:40.022761 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000
2026-01-20T10:49:40.035026760+00:00 stderr F I0120 10:49:40.034926 1 store.go:1579] "Monitoring resource count at path" resource="images.image.openshift.io" path="//images"
2026-01-20T10:49:40.043014863+00:00 stderr F I0120 10:49:40.042815 1 store.go:1579] "Monitoring resource count at path" resource="imagestreams.image.openshift.io" path="//imagestreams"
2026-01-20T10:49:40.065878070+00:00 stderr F I0120 10:49:40.065823 1 cacher.go:451] cacher (imagestreams.image.openshift.io): initialized
2026-01-20T10:49:40.065942802+00:00 stderr F I0120 10:49:40.065932 1 reflector.go:351] Caches populated for *image.ImageStream from storage/cacher.go:/imagestreams
2026-01-20T10:49:40.068415187+00:00 stderr F I0120 10:49:40.068366 1 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager
2026-01-20T10:49:40.068415187+00:00 stderr F W0120 10:49:40.068393 1 genericapiserver.go:756] Skipping API image.openshift.io/1.0 because it has no resources.
2026-01-20T10:49:40.068415187+00:00 stderr F W0120 10:49:40.068404 1 genericapiserver.go:756] Skipping API image.openshift.io/pre012 because it has no resources.
2026-01-20T10:49:40.069161930+00:00 stderr F I0120 10:49:40.068568 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:40.069161930+00:00 stderr F I0120 10:49:40.068583 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:40.071788310+00:00 stderr F I0120 10:49:40.071758 1 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager
2026-01-20T10:49:40.071849342+00:00 stderr F I0120 10:49:40.071815 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:40.071849342+00:00 stderr F I0120 10:49:40.071826 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:40.088075806+00:00 stderr F I0120 10:49:40.087650 1 store.go:1579] "Monitoring resource count at path" resource="templates.template.openshift.io" path="//templates"
2026-01-20T10:49:40.096222034+00:00 stderr F I0120 10:49:40.096162 1 store.go:1579] "Monitoring resource count at path" resource="templateinstances.template.openshift.io" path="//templateinstances"
2026-01-20T10:49:40.098205615+00:00 stderr F I0120 10:49:40.098158 1 cacher.go:451] cacher (templates.template.openshift.io): initialized
2026-01-20T10:49:40.098205615+00:00 stderr F I0120 10:49:40.098190 1 reflector.go:351] Caches populated for *template.Template from storage/cacher.go:/templates
2026-01-20T10:49:40.098716060+00:00 stderr F I0120 10:49:40.098689 1 cacher.go:451] cacher (templateinstances.template.openshift.io): initialized
2026-01-20T10:49:40.098716060+00:00 stderr F I0120 10:49:40.098702 1 reflector.go:351] Caches populated for *template.TemplateInstance from storage/cacher.go:/templateinstances
2026-01-20T10:49:40.108986763+00:00 stderr F I0120 10:49:40.108941 1 store.go:1579] "Monitoring resource count at path" resource="brokertemplateinstances.template.openshift.io" path="//brokertemplateinstances"
2026-01-20T10:49:40.110085596+00:00 stderr F I0120 10:49:40.110040 1 cacher.go:451] cacher (brokertemplateinstances.template.openshift.io): initialized
2026-01-20T10:49:40.110085596+00:00 stderr F I0120 10:49:40.110080 1 reflector.go:351] Caches populated for *template.BrokerTemplateInstance from storage/cacher.go:/brokertemplateinstances
2026-01-20T10:49:40.110867441+00:00 stderr F I0120 10:49:40.110849 1 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager
2026-01-20T10:49:40.110966834+00:00 stderr F I0120 10:49:40.110935 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2026-01-20T10:49:40.110995985+00:00 stderr F I0120 10:49:40.110986 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2026-01-20T10:49:40.155673345+00:00 stderr F I0120 10:49:40.155611 1 cacher.go:451] cacher (images.image.openshift.io): initialized
2026-01-20T10:49:40.155673345+00:00 stderr F I0120 10:49:40.155667 1 reflector.go:351] Caches populated for *image.Image from storage/cacher.go:/images
2026-01-20T10:49:40.557668890+00:00 stderr F I0120 10:49:40.557606 1 server.go:50] Starting master on 0.0.0.0:8443 (v0.0.0-master+$Format:%H$)
2026-01-20T10:49:40.557710041+00:00 stderr F I0120 10:49:40.557682 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2026-01-20T10:49:40.559943949+00:00 stderr F I0120 10:49:40.559895 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2026-01-20T10:49:40.559943949+00:00 stderr F I0120 10:49:40.559913 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2026-01-20T10:49:40.559961580+00:00 stderr F I0120 10:49:40.559936 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2026-01-20T10:49:40.559961580+00:00 stderr F I0120 10:49:40.559954 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2026-01-20T10:49:40.560278329+00:00 stderr F I0120 10:49:40.559904 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2026-01-20T10:49:40.560278329+00:00 stderr F I0120 10:49:40.560271 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2026-01-20T10:49:40.560562458+00:00 stderr F I0120 10:49:40.560527 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:49:40.560470585 +0000 UTC))"
2026-01-20T10:49:40.560855187+00:00 stderr F I0120 10:49:40.560803 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906179\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906179\" (2026-01-20 09:49:39 +0000 UTC to 2027-01-20 09:49:39 +0000 UTC (now=2026-01-20 10:49:40.560783974 +0000 UTC))"
2026-01-20T10:49:40.560855187+00:00 
stderr F I0120 10:49:40.560825 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:40.560855187+00:00 stderr F I0120 10:49:40.560851 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:49:40.560890288+00:00 stderr F I0120 10:49:40.560867 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:40.560988051+00:00 stderr F I0120 10:49:40.560960 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:40.561598379+00:00 stderr F I0120 10:49:40.561572 1 openshift_apiserver.go:593] Using default project node label selector: 2026-01-20T10:49:40.562094825+00:00 stderr F I0120 10:49:40.562045 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.562265410+00:00 stderr F I0120 10:49:40.562206 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2026-01-20T10:49:40.568365696+00:00 stderr F I0120 10:49:40.568331 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.569054726+00:00 stderr F I0120 10:49:40.569029 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.576959017+00:00 stderr F I0120 10:49:40.575263 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.576959017+00:00 stderr F I0120 10:49:40.575540 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.576959017+00:00 stderr F I0120 10:49:40.575728 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 
2026-01-20T10:49:40.576959017+00:00 stderr F I0120 10:49:40.576591 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.577090741+00:00 stderr F I0120 10:49:40.577072 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.581845356+00:00 stderr F I0120 10:49:40.581794 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.581845356+00:00 stderr F I0120 10:49:40.581826 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.582359782+00:00 stderr F I0120 10:49:40.582338 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.588958333+00:00 stderr F I0120 10:49:40.588924 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.590593653+00:00 stderr F I0120 10:49:40.590562 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.595987347+00:00 stderr F I0120 10:49:40.595942 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.596036329+00:00 stderr F I0120 10:49:40.596007 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.596980147+00:00 stderr F I0120 10:49:40.596955 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.601083522+00:00 stderr F I0120 10:49:40.599299 1 reflector.go:351] 
Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.618490003+00:00 stderr F I0120 10:49:40.618432 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.619354419+00:00 stderr F I0120 10:49:40.619301 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.620930387+00:00 stderr F I0120 10:49:40.620886 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.621385921+00:00 stderr F I0120 10:49:40.621360 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.634845131+00:00 stderr F I0120 10:49:40.634616 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.639975026+00:00 stderr F I0120 10:49:40.639927 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:40.660504832+00:00 stderr F I0120 10:49:40.660359 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:40.660504832+00:00 stderr F I0120 10:49:40.660455 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:40.660504832+00:00 stderr F I0120 10:49:40.660488 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:40.660665957+00:00 stderr F I0120 10:49:40.660642 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:40.660613416 +0000 UTC))" 2026-01-20T10:49:40.661039578+00:00 stderr F I0120 10:49:40.660950 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:49:40.660932025 +0000 UTC))" 2026-01-20T10:49:40.661282076+00:00 stderr F I0120 10:49:40.661260 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906179\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906179\" (2026-01-20 09:49:39 +0000 UTC to 2027-01-20 09:49:39 +0000 UTC (now=2026-01-20 10:49:40.661245025 +0000 UTC))" 2026-01-20T10:49:40.661446971+00:00 stderr F I0120 10:49:40.661424 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:40.661396709 +0000 UTC))" 2026-01-20T10:49:40.661457791+00:00 stderr F I0120 10:49:40.661447 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:40.6614334 +0000 UTC))" 2026-01-20T10:49:40.661486042+00:00 stderr F I0120 10:49:40.661466 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:40.661452481 +0000 UTC))" 2026-01-20T10:49:40.661493992+00:00 stderr F I0120 10:49:40.661487 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:40.661476152 +0000 UTC))" 2026-01-20T10:49:40.661519373+00:00 stderr F I0120 10:49:40.661503 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:40.661491722 +0000 UTC))" 2026-01-20T10:49:40.661546584+00:00 stderr F I0120 10:49:40.661530 1 tlsconfig.go:178] "Loaded client 
CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:40.661519583 +0000 UTC))" 2026-01-20T10:49:40.661559824+00:00 stderr F I0120 10:49:40.661552 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:40.661538323 +0000 UTC))" 2026-01-20T10:49:40.661583605+00:00 stderr F I0120 10:49:40.661567 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:40.661556194 +0000 UTC))" 2026-01-20T10:49:40.661591425+00:00 stderr F I0120 10:49:40.661585 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:40.661573964 +0000 UTC))" 2026-01-20T10:49:40.661616946+00:00 stderr F I0120 10:49:40.661600 1 tlsconfig.go:178] 
"Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:40.661591275 +0000 UTC))" 2026-01-20T10:49:40.661626116+00:00 stderr F I0120 10:49:40.661619 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:40.661608476 +0000 UTC))" 2026-01-20T10:49:40.661896954+00:00 stderr F I0120 10:49:40.661876 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:49:40.661861203 +0000 UTC))" 2026-01-20T10:49:40.662175563+00:00 stderr F I0120 10:49:40.662157 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906179\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906179\" (2026-01-20 09:49:39 +0000 UTC to 2027-01-20 09:49:39 +0000 UTC (now=2026-01-20 10:49:40.662146472 +0000 UTC))" 2026-01-20T10:49:40.705146772+00:00 stderr F I0120 10:49:40.705091 1 reflector.go:351] Caches populated for *etcd.ImageLayers from 
k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:56:07.107987652+00:00 stderr F I0120 10:56:07.107913 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.107868149 +0000 UTC))" 2026-01-20T10:56:07.107987652+00:00 stderr F I0120 10:56:07.107953 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.107939561 +0000 UTC))" 2026-01-20T10:56:07.107987652+00:00 stderr F I0120 10:56:07.107977 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.107959232 +0000 UTC))" 2026-01-20T10:56:07.108031233+00:00 stderr F I0120 10:56:07.107997 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.107982152 
+0000 UTC))" 2026-01-20T10:56:07.108031233+00:00 stderr F I0120 10:56:07.108020 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.108001953 +0000 UTC))" 2026-01-20T10:56:07.108087825+00:00 stderr F I0120 10:56:07.108053 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.108029083 +0000 UTC))" 2026-01-20T10:56:07.108106625+00:00 stderr F I0120 10:56:07.108100 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.108082385 +0000 UTC))" 2026-01-20T10:56:07.108132836+00:00 stderr F I0120 10:56:07.108120 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 
10:56:07.108105325 +0000 UTC))" 2026-01-20T10:56:07.108195038+00:00 stderr F I0120 10:56:07.108146 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.108127996 +0000 UTC))" 2026-01-20T10:56:07.108195038+00:00 stderr F I0120 10:56:07.108162 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.108151777 +0000 UTC))" 2026-01-20T10:56:07.108195038+00:00 stderr F I0120 10:56:07.108184 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.108169897 +0000 UTC))" 2026-01-20T10:56:07.108241339+00:00 stderr F I0120 10:56:07.108210 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2026-01-20 10:56:07.108188778 +0000 UTC))" 2026-01-20T10:56:07.109218936+00:00 stderr F I0120 10:56:07.108582 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:56:07.108561419 +0000 UTC))" 2026-01-20T10:56:07.109218936+00:00 stderr F I0120 10:56:07.108937 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906179\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906179\" (2026-01-20 09:49:39 +0000 UTC to 2027-01-20 09:49:39 +0000 UTC (now=2026-01-20 10:56:07.108917488 +0000 UTC))" 2026-01-20T10:56:33.790480003+00:00 stderr F E0120 10:56:33.790131 1 strategy.go:60] unable to parse manifest for "sha256:4f35566977c35306a8f2102841ceb7fa10a6d9ac47c079131caed5655140f9b2": unexpected end of JSON input 2026-01-20T10:56:34.577674663+00:00 stderr F E0120 10:56:34.577572 1 strategy.go:60] unable to parse manifest for "sha256:5dfcc5b000a1fab4be66bbd43e4db44b61176e2bcba9c24f6fe887dea9b7fd49": unexpected end of JSON input 2026-01-20T10:56:34.579625546+00:00 stderr F E0120 10:56:34.579177 1 strategy.go:60] unable to parse manifest for "sha256:7f501bba8a09957a0ac28ba0c20768f087cf0b16d92139b3f8f0758e9f60691f": unexpected end of JSON input 2026-01-20T10:56:34.581015713+00:00 stderr F E0120 10:56:34.580974 1 strategy.go:60] unable to parse manifest for "sha256:7902b4cbd202ce8eeda6cadf7826b5b74d2715fc9a9950446121858d71a8860b": unexpected end of JSON input 2026-01-20T10:56:34.592819611+00:00 stderr F I0120 10:56:34.592769 1 trace.go:236] Trace[866744255]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d29abccc-a572-4dfe-815e-dbefe1b023d4,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:33.462) (total time: 1130ms): 2026-01-20T10:56:34.592819611+00:00 stderr F Trace[866744255]: ---"Write to database call succeeded" len:795 1125ms (10:56:34.588) 2026-01-20T10:56:34.592819611+00:00 stderr F Trace[866744255]: [1.130054954s] [1.130054954s] END 2026-01-20T10:56:34.880946972+00:00 stderr F E0120 10:56:34.880856 1 strategy.go:60] unable to parse manifest for "sha256:df8858f0c01ae1657a14234a94f6785cbb2fba7f12c9d0325f427a3f1284481b": unexpected end of JSON input 2026-01-20T10:56:34.882596637+00:00 stderr F E0120 10:56:34.882545 1 strategy.go:60] unable to parse manifest for "sha256:3fbe86a646d6eb899a619930c0f052d90f5c46342c050d0675f59ae2bf68dc13": unexpected end of JSON input 2026-01-20T10:56:34.883704026+00:00 stderr F E0120 10:56:34.883626 1 strategy.go:60] unable to parse manifest for "sha256:3a9b09f62edaf1464cdc0a47f56fbe407d6c175aaf68aba5f79a6de39fcaa6ea": unexpected end of JSON input 2026-01-20T10:56:34.889898914+00:00 stderr F I0120 10:56:34.889854 1 trace.go:236] Trace[1526430637]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a9f2b949-f3ea-4be9-9e10-45b582361d6e,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) 
kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:33.286) (total time: 1603ms): 2026-01-20T10:56:34.889898914+00:00 stderr F Trace[1526430637]: ---"Write to database call succeeded" len:506 1602ms (10:56:34.888) 2026-01-20T10:56:34.889898914+00:00 stderr F Trace[1526430637]: [1.603526454s] [1.603526454s] END 2026-01-20T10:56:35.608220720+00:00 stderr F E0120 10:56:35.608104 1 strategy.go:60] unable to parse manifest for "sha256:55dc61c31ea50a8f7a45e993a9b3220097974948b5cd1ab3f317e7702e8cb6fc": unexpected end of JSON input 2026-01-20T10:56:36.988646502+00:00 stderr F E0120 10:56:36.986332 1 strategy.go:60] unable to parse manifest for "sha256:70a21b3f93c05843ce9d07f125b1464436caf01680bb733754a2a5df5bc3b11b": unexpected end of JSON input 2026-01-20T10:56:36.988646502+00:00 stderr F E0120 10:56:36.987733 1 strategy.go:60] unable to parse manifest for "sha256:8ef04c895436412065c0f1090db68060d2bb339a400e8653ca3a370211690d1f": unexpected end of JSON input 2026-01-20T10:56:36.991449437+00:00 stderr F E0120 10:56:36.989236 1 strategy.go:60] unable to parse manifest for "sha256:080eba193988a6d0cf8e553598de1b9aaf900e823df1386fdee193d7b581a6b6": unexpected end of JSON input 2026-01-20T10:56:36.994036427+00:00 stderr F I0120 10:56:36.993992 1 trace.go:236] Trace[1545045939]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:72ce9e34-02e7-477c-ac6b-0663c4d49322,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:35.797) (total time: 1196ms): 2026-01-20T10:56:36.994036427+00:00 stderr F Trace[1545045939]: ---"Write to database call succeeded" 
len:739 1195ms (10:56:36.993) 2026-01-20T10:56:36.994036427+00:00 stderr F Trace[1545045939]: [1.196664618s] [1.196664618s] END 2026-01-20T10:56:37.576152348+00:00 stderr F E0120 10:56:37.576099 1 strategy.go:60] unable to parse manifest for "sha256:7bcc365e0ba823ed020ee6e6c3e0c23be5871c8dea3f7f1a65029002c83f9e55": unexpected end of JSON input 2026-01-20T10:56:37.579156290+00:00 stderr F E0120 10:56:37.579132 1 strategy.go:60] unable to parse manifest for "sha256:6a9e81b2eea2f32f2750909b6aa037c2c2e68be3bc9daf3c7a3163c9e1df379f": unexpected end of JSON input 2026-01-20T10:56:37.581550093+00:00 stderr F E0120 10:56:37.581521 1 strategy.go:60] unable to parse manifest for "sha256:00cf28cf9a6c427962f922855a6cc32692c760764ce2ce7411cf605dd510367f": unexpected end of JSON input 2026-01-20T10:56:37.584000679+00:00 stderr F E0120 10:56:37.583975 1 strategy.go:60] unable to parse manifest for "sha256:2cee344e4cfcfdc9a117fd82baa6f2d5daa7eeed450e02cd5d5554b424410439": unexpected end of JSON input 2026-01-20T10:56:37.586346993+00:00 stderr F E0120 10:56:37.586319 1 strategy.go:60] unable to parse manifest for "sha256:aa02a20c2edf83a009746b45a0fd2e0b4a2b224fdef1581046f6afef38c0bee2": unexpected end of JSON input 2026-01-20T10:56:37.586437315+00:00 stderr F E0120 10:56:37.586400 1 strategy.go:60] unable to parse manifest for "sha256:d186c94f8843f854d77b2b05d10efb0d272f88a4bf4f1d8ebe304428b9396392": unexpected end of JSON input 2026-01-20T10:56:37.588195343+00:00 stderr F E0120 10:56:37.588150 1 strategy.go:60] unable to parse manifest for "sha256:59b88fb0c467ca43bf3c1af6bfd8777577638dd8079f995cdb20b6f4e20ce0b6": unexpected end of JSON input 2026-01-20T10:56:37.590376752+00:00 stderr F E0120 10:56:37.590341 1 strategy.go:60] unable to parse manifest for "sha256:e37aeaeb0159194a9855350e13e399470f39ce340d6381069933742990741fb8": unexpected end of JSON input 2026-01-20T10:56:37.590610258+00:00 stderr F E0120 10:56:37.590585 1 strategy.go:60] unable to parse manifest for 
"sha256:603d10af5e3476add5b5726fdef893033869ae89824ee43949a46c9f004ef65d": unexpected end of JSON input 2026-01-20T10:56:37.593015642+00:00 stderr F E0120 10:56:37.592987 1 strategy.go:60] unable to parse manifest for "sha256:f89a54e6d1340be8ddd84a602cb4f1f27c1983417f655941645bf11809d49f18": unexpected end of JSON input 2026-01-20T10:56:37.593373772+00:00 stderr F E0120 10:56:37.593334 1 strategy.go:60] unable to parse manifest for "sha256:eed7e29bf583e4f01e170bb9f22f2a78098bf15243269b670c307caa6813b783": unexpected end of JSON input 2026-01-20T10:56:37.595808967+00:00 stderr F E0120 10:56:37.595754 1 strategy.go:60] unable to parse manifest for "sha256:739fac452e78a21a16b66e0451b85590b9e48ec7a1ed3887fbb9ed85cf564275": unexpected end of JSON input 2026-01-20T10:56:37.596379083+00:00 stderr F E0120 10:56:37.596352 1 strategy.go:60] unable to parse manifest for "sha256:b80a514f136f738736d6bf654dc3258c13b04a819e001dd8a39ef2f7475fd9d9": unexpected end of JSON input 2026-01-20T10:56:37.597943425+00:00 stderr F E0120 10:56:37.597890 1 strategy.go:60] unable to parse manifest for "sha256:0eea1d20aaa26041edf26b925fb204d839e5b93122190191893a0299b2e1b589": unexpected end of JSON input 2026-01-20T10:56:37.598779868+00:00 stderr F E0120 10:56:37.598722 1 strategy.go:60] unable to parse manifest for "sha256:7ef75cdbc399425105060771cb8e700198cc0bddcfb60bf4311bf87ea62fd440": unexpected end of JSON input 2026-01-20T10:56:37.599840666+00:00 stderr F E0120 10:56:37.599804 1 strategy.go:60] unable to parse manifest for "sha256:3b94ccfa422b8ba0014302a3cfc6916b69f0f5a9dfd757b6704049834d4ff0ae": unexpected end of JSON input 2026-01-20T10:56:37.601593663+00:00 stderr F E0120 10:56:37.601564 1 strategy.go:60] unable to parse manifest for "sha256:46a4e73ddb085d1f36b39903ea13ba307bb958789707e9afde048764b3e3cae2": unexpected end of JSON input 2026-01-20T10:56:37.603765561+00:00 stderr F E0120 10:56:37.603718 1 strategy.go:60] unable to parse manifest for 
"sha256:bcb0e15cc9d2d3449f0b1acac7b0275035a80e1b3b835391b5464f7bf4553b89": unexpected end of JSON input 2026-01-20T10:56:37.604965284+00:00 stderr F I0120 10:56:37.604931 1 trace.go:236] Trace[1221687104]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:27d0ac1d-5fb4-4029-ae1f-e3de88340960,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:33.413) (total time: 4191ms): 2026-01-20T10:56:37.604965284+00:00 stderr F Trace[1221687104]: ---"Write to database call succeeded" len:1230 4189ms (10:56:37.603) 2026-01-20T10:56:37.604965284+00:00 stderr F Trace[1221687104]: [4.191522866s] [4.191522866s] END 2026-01-20T10:56:37.610935074+00:00 stderr F I0120 10:56:37.610600 1 trace.go:236] Trace[916164701]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:061c4aad-e721-48d5-9682-b439ef6124fb,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:33.774) (total time: 3835ms): 2026-01-20T10:56:37.610935074+00:00 stderr F Trace[916164701]: ---"Write to database call succeeded" len:953 3834ms (10:56:37.609) 2026-01-20T10:56:37.610935074+00:00 stderr F Trace[916164701]: [3.835767323s] [3.835767323s] END 2026-01-20T10:56:37.677131546+00:00 stderr F E0120 10:56:37.676981 1 strategy.go:60] unable to parse manifest for 
"sha256:5fb3543c0d42146f0506c1ea4d09575131da6a2f27885729b7cfce13a0fa90e3": unexpected end of JSON input 2026-01-20T10:56:37.680870076+00:00 stderr F E0120 10:56:37.680808 1 strategy.go:60] unable to parse manifest for "sha256:1d68b58a73f4cf15fcd886ab39fddf18be923b52b24cb8ec3ab1da2d3e9bd5f6": unexpected end of JSON input 2026-01-20T10:56:37.683741793+00:00 stderr F E0120 10:56:37.683690 1 strategy.go:60] unable to parse manifest for "sha256:7de877b0e748cdb47cb702400f3ddaa3c3744a022887e2213c2bb27775ab4b25": unexpected end of JSON input 2026-01-20T10:56:37.686208680+00:00 stderr F E0120 10:56:37.686168 1 strategy.go:60] unable to parse manifest for "sha256:af9c08644ca057d83ef4b7d8de1489f01c5a52ff8670133b8a09162831b7fb34": unexpected end of JSON input 2026-01-20T10:56:37.688399469+00:00 stderr F E0120 10:56:37.688348 1 strategy.go:60] unable to parse manifest for "sha256:b053401886c06581d3c296855525cc13e0613100a596ed007bb69d5f8e972346": unexpected end of JSON input 2026-01-20T10:56:37.690414782+00:00 stderr F E0120 10:56:37.690372 1 strategy.go:60] unable to parse manifest for "sha256:61555b923dabe4ff734279ed1bdb9eb6d450c760e1cc04463cf88608ac8d1338": unexpected end of JSON input 2026-01-20T10:56:37.692590881+00:00 stderr F E0120 10:56:37.692548 1 strategy.go:60] unable to parse manifest for "sha256:9ab26cb4005e9b60fd6349950957bbd0120efba216036da53c547c6f1c9e5e7f": unexpected end of JSON input 2026-01-20T10:56:37.694799721+00:00 stderr F E0120 10:56:37.694745 1 strategy.go:60] unable to parse manifest for "sha256:2254dc2f421f496b504aafbbd8ea37e660652c4b6b4f9a0681664b10873be7fe": unexpected end of JSON input 2026-01-20T10:56:37.697369950+00:00 stderr F E0120 10:56:37.697319 1 strategy.go:60] unable to parse manifest for "sha256:e4b1599ba6e88f6df7c4e67d6397371d61b6829d926411184e9855e71e840b8c": unexpected end of JSON input 2026-01-20T10:56:37.707267256+00:00 stderr F I0120 10:56:37.706413 1 trace.go:236] Trace[1648942413]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a608da52-f3ed-4e5b-9beb-00b8c4f931dd,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:34.384) (total time: 3322ms): 2026-01-20T10:56:37.707267256+00:00 stderr F Trace[1648942413]: ---"Write to database call succeeded" len:1132 3319ms (10:56:37.703) 2026-01-20T10:56:37.707267256+00:00 stderr F Trace[1648942413]: [3.322175585s] [3.322175585s] END 2026-01-20T10:56:37.717540663+00:00 stderr F E0120 10:56:37.717470 1 strategy.go:60] unable to parse manifest for "sha256:a8e4081414cfa644e212ded354dfee12706e63afb19a27c0c0ae2c8c64e56ca6": unexpected end of JSON input 2026-01-20T10:56:37.721094058+00:00 stderr F E0120 10:56:37.720989 1 strategy.go:60] unable to parse manifest for "sha256:38c7e4f7dea04bb536f05d78e0107ebc2a3607cf030db7f5c249f13ce1f52d59": unexpected end of JSON input 2026-01-20T10:56:37.723670137+00:00 stderr F E0120 10:56:37.723624 1 strategy.go:60] unable to parse manifest for "sha256:d2f17aaf2f871fda5620466d69ac67b9c355c0bae5912a1dbef9a51ca8813e50": unexpected end of JSON input 2026-01-20T10:56:37.726308138+00:00 stderr F E0120 10:56:37.726245 1 strategy.go:60] unable to parse manifest for "sha256:e4be2fb7216f432632819b2441df42a5a0063f7f473c2923ca6912b2d64b7494": unexpected end of JSON input 2026-01-20T10:56:37.728268631+00:00 stderr F E0120 10:56:37.728211 1 strategy.go:60] unable to parse manifest for "sha256:14de89e89efc97aee3b50141108b7833708c3a93ad90bf89940025ab5267ba86": unexpected end of JSON input 2026-01-20T10:56:37.730457430+00:00 stderr F E0120 10:56:37.730387 1 strategy.go:60] unable to parse manifest for 
"sha256:f438230ed2c2e609d0d7dbc430ccf1e9bad2660e6410187fd6e9b14a2952e70b": unexpected end of JSON input 2026-01-20T10:56:37.732353671+00:00 stderr F E0120 10:56:37.732296 1 strategy.go:60] unable to parse manifest for "sha256:f953734d89252219c3dcd8f703ba8b58c9c8a0f5dfa9425c9e56ec0834f7d288": unexpected end of JSON input 2026-01-20T10:56:37.734254972+00:00 stderr F E0120 10:56:37.734188 1 strategy.go:60] unable to parse manifest for "sha256:e4223a60b887ec24cad7dd70fdb6c3f2c107fb7118331be6f45d626219cfe7f3": unexpected end of JSON input 2026-01-20T10:56:37.738638280+00:00 stderr F I0120 10:56:37.738515 1 trace.go:236] Trace[1416259998]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c6184ea8-337b-4644-83cb-54c9c9f815c3,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:34.298) (total time: 3440ms): 2026-01-20T10:56:37.738638280+00:00 stderr F Trace[1416259998]: ---"Write to database call succeeded" len:1025 3439ms (10:56:37.737) 2026-01-20T10:56:37.738638280+00:00 stderr F Trace[1416259998]: [3.440383225s] [3.440383225s] END 2026-01-20T10:56:38.005974803+00:00 stderr F E0120 10:56:38.005877 1 strategy.go:60] unable to parse manifest for "sha256:431753c8a6a8541fdc0edd3385b2c765925d244fdd2347d2baa61303789696be": unexpected end of JSON input 2026-01-20T10:56:38.009381445+00:00 stderr F E0120 10:56:38.009321 1 strategy.go:60] unable to parse manifest for "sha256:64acf3403b5c2c85f7a28f326c63f1312b568db059c66d90b34e3c59fde3a74b": unexpected end of JSON input 2026-01-20T10:56:38.012101398+00:00 stderr F E0120 10:56:38.012015 1 strategy.go:60] unable to parse manifest for 
"sha256:74051f86b00fb102e34276f03a310c16bc57b9c2a001a56ba66359e15ee48ba6": unexpected end of JSON input 2026-01-20T10:56:38.014686177+00:00 stderr F E0120 10:56:38.014637 1 strategy.go:60] unable to parse manifest for "sha256:33d4dff40514e91d86b42e90b24b09a5ca770d9f67657c936363d348cd33d188": unexpected end of JSON input 2026-01-20T10:56:38.017470462+00:00 stderr F E0120 10:56:38.017419 1 strategy.go:60] unable to parse manifest for "sha256:7711108ef60ef6f0536bfa26914af2afaf6455ce6e4c4abd391e31a2d95d0178": unexpected end of JSON input 2026-01-20T10:56:38.019788285+00:00 stderr F E0120 10:56:38.019674 1 strategy.go:60] unable to parse manifest for "sha256:b163564be6ed5b80816e61a4ee31e42f42dbbf345253daac10ecc9fadf31baa3": unexpected end of JSON input 2026-01-20T10:56:38.022087157+00:00 stderr F E0120 10:56:38.022033 1 strategy.go:60] unable to parse manifest for "sha256:920ff7e5efc777cb523669c425fd7b553176c9f4b34a85ceddcb548c2ac5f78a": unexpected end of JSON input 2026-01-20T10:56:38.025048356+00:00 stderr F E0120 10:56:38.024760 1 strategy.go:60] unable to parse manifest for "sha256:32a5e806bd88b40568d46864fd313541498e38fabfc5afb5f3bdfe052c4b4c5f": unexpected end of JSON input 2026-01-20T10:56:38.027243325+00:00 stderr F E0120 10:56:38.027215 1 strategy.go:60] unable to parse manifest for "sha256:229ee7b88c5f700c95d557d0b37b8f78dbb6b125b188c3bf050cfdb32aec7962": unexpected end of JSON input 2026-01-20T10:56:38.029748362+00:00 stderr F E0120 10:56:38.029697 1 strategy.go:60] unable to parse manifest for "sha256:78bf175cecb15524b2ef81bff8cc11acdf7c0f74c08417f0e443483912e4878a": unexpected end of JSON input 2026-01-20T10:56:38.035284871+00:00 stderr F I0120 10:56:38.035259 1 trace.go:236] Trace[2049608815]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d077cb10-3ce4-443d-86a3-91512ae0ed20,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:33.717) (total time: 4317ms): 2026-01-20T10:56:38.035284871+00:00 stderr F Trace[2049608815]: ---"Write to database call succeeded" len:1241 4316ms (10:56:38.034) 2026-01-20T10:56:38.035284871+00:00 stderr F Trace[2049608815]: [4.317537375s] [4.317537375s] END 2026-01-20T10:56:39.362229004+00:00 stderr F E0120 10:56:39.362157 1 strategy.go:60] unable to parse manifest for "sha256:2ae058ee7239213fb495491112be8cc7e6d6661864fd399deb27f23f50f05eb4": unexpected end of JSON input 2026-01-20T10:56:39.365948433+00:00 stderr F E0120 10:56:39.365854 1 strategy.go:60] unable to parse manifest for "sha256:db3f5192237bfdab2355304f17916e09bc29d6d529fdec48b09a08290ae35905": unexpected end of JSON input 2026-01-20T10:56:39.369621423+00:00 stderr F E0120 10:56:39.369535 1 strategy.go:60] unable to parse manifest for "sha256:b4cb02a4e7cb915b6890d592ed5b4ab67bcef19bf855029c95231f51dd071352": unexpected end of JSON input 2026-01-20T10:56:39.373110006+00:00 stderr F E0120 10:56:39.372993 1 strategy.go:60] unable to parse manifest for "sha256:fa9556628c15b8eb22cafccb737b3fbcecfd681a5c2cfea3302dd771c644a7db": unexpected end of JSON input 2026-01-20T10:56:39.376422465+00:00 stderr F E0120 10:56:39.376344 1 strategy.go:60] unable to parse manifest for "sha256:a0a6db2dcdb3d49e36bd0665e3e00f242a690391700e42cab14e86b154152bfd": unexpected end of JSON input 2026-01-20T10:56:39.380613519+00:00 stderr F E0120 10:56:39.380541 1 strategy.go:60] unable to parse manifest for 
"sha256:e90172ca0f09acf5db1721bd7df304dffd184e00145072132cb71c7f0797adf6": unexpected end of JSON input 2026-01-20T10:56:39.384627196+00:00 stderr F E0120 10:56:39.384570 1 strategy.go:60] unable to parse manifest for "sha256:421d1f6a10e263677b7687ccea8e4a59058e2e3c80585505eec9a9c2e6f9f40e": unexpected end of JSON input 2026-01-20T10:56:39.389016805+00:00 stderr F E0120 10:56:39.388925 1 strategy.go:60] unable to parse manifest for "sha256:6c009f430da02bdcff618a7dcd085d7d22547263eeebfb8d6377a4cf6f58769d": unexpected end of JSON input 2026-01-20T10:56:39.393648099+00:00 stderr F E0120 10:56:39.393575 1 strategy.go:60] unable to parse manifest for "sha256:dc84fed0f6f40975a2277c126438c8aa15c70eeac75981dbaa4b6b853eff61a6": unexpected end of JSON input 2026-01-20T10:56:39.396965638+00:00 stderr F E0120 10:56:39.396928 1 strategy.go:60] unable to parse manifest for "sha256:78af15475eac13d2ff439b33a9c3bdd39147858a824c420e8042fd5f35adce15": unexpected end of JSON input 2026-01-20T10:56:39.400006560+00:00 stderr F E0120 10:56:39.399968 1 strategy.go:60] unable to parse manifest for "sha256:06bbbf9272d5c5161f444388593e9bd8db793d8a2d95a50b429b3c0301fafcdd": unexpected end of JSON input 2026-01-20T10:56:39.403482424+00:00 stderr F E0120 10:56:39.403433 1 strategy.go:60] unable to parse manifest for "sha256:caba895933209aa9a4f3121f9ec8e5e8013398ab4f72bd3ff255227aad8d2c3e": unexpected end of JSON input 2026-01-20T10:56:39.409004392+00:00 stderr F E0120 10:56:39.408923 1 strategy.go:60] unable to parse manifest for "sha256:dbe9905fe2b20ed30b0e2d64543016fa9c145eeb5a678f720ba9d2055f0c9f88": unexpected end of JSON input 2026-01-20T10:56:39.421346414+00:00 stderr F I0120 10:56:39.421285 1 trace.go:236] Trace[6941692]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4d0e7f4f-9e88-4f97-8af1-6ef212927141,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:33.232) (total time: 6188ms): 2026-01-20T10:56:39.421346414+00:00 stderr F Trace[6941692]: ---"Write to database call succeeded" len:1743 6185ms (10:56:39.418) 2026-01-20T10:56:39.421346414+00:00 stderr F Trace[6941692]: [6.188365741s] [6.188365741s] END 2026-01-20T10:56:39.641081027+00:00 stderr F E0120 10:56:39.640965 1 strategy.go:60] unable to parse manifest for "sha256:496e23be70520863bce6f7cdc54d280aca2c133d06e992795c4dcbde1a9dd1ab": unexpected end of JSON input 2026-01-20T10:56:39.644536299+00:00 stderr F E0120 10:56:39.644451 1 strategy.go:60] unable to parse manifest for "sha256:022488b1bf697b7dd8c393171a3247bef4ea545a9ab828501e72168f2aac9415": unexpected end of JSON input 2026-01-20T10:56:39.647414097+00:00 stderr F E0120 10:56:39.647336 1 strategy.go:60] unable to parse manifest for "sha256:7164a06e9ba98a3ce9991bd7019512488efe30895175bb463e255f00eb9421fd": unexpected end of JSON input 2026-01-20T10:56:39.650390377+00:00 stderr F E0120 10:56:39.650346 1 strategy.go:60] unable to parse manifest for "sha256:81684e422367a075ac113e69ea11d8721416ce4bedea035e25313c5e726fd7d1": unexpected end of JSON input 2026-01-20T10:56:39.659687187+00:00 stderr F E0120 10:56:39.659619 1 strategy.go:60] unable to parse manifest for "sha256:b838fa18dab68d43a19f0c329c3643850691b8f9915823c4f8d25685eb293a11": unexpected end of JSON input 2026-01-20T10:56:39.662620216+00:00 stderr F E0120 10:56:39.662560 1 strategy.go:60] unable to parse manifest for 
"sha256:8a5b580b76c2fc2dfe55d13bb0dd53e8c71d718fc1a3773264b1710f49060222": unexpected end of JSON input 2026-01-20T10:56:39.665570535+00:00 stderr F E0120 10:56:39.665468 1 strategy.go:60] unable to parse manifest for "sha256:2f59ad75b66a3169b0b03032afb09aa3cfa531dbd844e3d3a562246e7d09c282": unexpected end of JSON input 2026-01-20T10:56:39.668099663+00:00 stderr F E0120 10:56:39.668037 1 strategy.go:60] unable to parse manifest for "sha256:9d759db3bb650e5367216ce261779c5a58693fc7ae10f21cd264011562bd746d": unexpected end of JSON input 2026-01-20T10:56:39.668314879+00:00 stderr F E0120 10:56:39.668256 1 strategy.go:60] unable to parse manifest for "sha256:e851770fd181ef49193111f7afcdbf872ad23f3a8234e0e07a742c4ca2882c3d": unexpected end of JSON input 2026-01-20T10:56:39.670800096+00:00 stderr F E0120 10:56:39.670760 1 strategy.go:60] unable to parse manifest for "sha256:bf5e518dba2aa935829d9db88d933a264e54ffbfa80041b41287fd70c1c35ba5": unexpected end of JSON input 2026-01-20T10:56:39.672564824+00:00 stderr F E0120 10:56:39.672517 1 strategy.go:60] unable to parse manifest for "sha256:ce5c0becf829aca80734b4caf3ab6b76cb00f7d78f4e39fb136636a764dea7f6": unexpected end of JSON input 2026-01-20T10:56:39.673529789+00:00 stderr F E0120 10:56:39.672915 1 strategy.go:60] unable to parse manifest for "sha256:f7ca08a8dda3610fcc10cc1fe5f5d0b9f8fc7a283b01975d0fe2c1e77ae06193": unexpected end of JSON input 2026-01-20T10:56:39.675336088+00:00 stderr F E0120 10:56:39.675300 1 strategy.go:60] unable to parse manifest for "sha256:3f00540ce2a3a01d2a147a7d73825fe78697be213a050bd09edae36266d6bc40": unexpected end of JSON input 2026-01-20T10:56:39.678480643+00:00 stderr F E0120 10:56:39.678395 1 strategy.go:60] unable to parse manifest for "sha256:868224c3b7c309b9e04003af70a5563af8e4c662f0c53f2a7606e0573c9fad85": unexpected end of JSON input 2026-01-20T10:56:39.681006461+00:00 stderr F E0120 10:56:39.680942 1 strategy.go:60] unable to parse manifest for 
"sha256:0669a28577b41bb05c67492ef18a1d48a299ac54d1500df8f9f8f760ce4be24b": unexpected end of JSON input 2026-01-20T10:56:39.683372524+00:00 stderr F I0120 10:56:39.683274 1 trace.go:236] Trace[262708617]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b45be9ac-3e7d-4c54-b945-1fe32bb6e7eb,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:34.641) (total time: 5041ms): 2026-01-20T10:56:39.683372524+00:00 stderr F Trace[262708617]: ---"Write to database call succeeded" len:1142 5039ms (10:56:39.680) 2026-01-20T10:56:39.683372524+00:00 stderr F Trace[262708617]: [5.041829303s] [5.041829303s] END 2026-01-20T10:56:39.685460980+00:00 stderr F E0120 10:56:39.685390 1 strategy.go:60] unable to parse manifest for "sha256:9036a59a8275f9c205ef5fc674f38c0495275a1a7912029f9a784406bb00b1f5": unexpected end of JSON input 2026-01-20T10:56:39.689419707+00:00 stderr F E0120 10:56:39.689366 1 strategy.go:60] unable to parse manifest for "sha256:425e2c7c355bea32be238aa2c7bdd363b6ab3709412bdf095efe28a8f6c07d84": unexpected end of JSON input 2026-01-20T10:56:39.692099489+00:00 stderr F E0120 10:56:39.692001 1 strategy.go:60] unable to parse manifest for "sha256:67fee4b64b269f5666a1051d806635b675903ef56d07b7cc019d3d59ff1aa97c": unexpected end of JSON input 2026-01-20T10:56:39.694446722+00:00 stderr F E0120 10:56:39.694391 1 strategy.go:60] unable to parse manifest for "sha256:b85cbdbc289752c91ac7f468cffef916fe9ab01865f3e32cfcc44ccdd633b168": unexpected end of JSON input 2026-01-20T10:56:39.696352603+00:00 stderr F E0120 10:56:39.696299 1 strategy.go:60] unable to parse manifest for 
"sha256:663eb81388ae8f824e7920c272f6d2e2274cf6c140d61416607261cdce9d50e2": unexpected end of JSON input 2026-01-20T10:56:39.705208891+00:00 stderr F I0120 10:56:39.703724 1 trace.go:236] Trace[1751141825]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ded729c9-2db2-4586-904b-bd577b838e87,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (20-Jan-2026 10:56:35.268) (total time: 4435ms): 2026-01-20T10:56:39.705208891+00:00 stderr F Trace[1751141825]: ---"Write to database call succeeded" len:1153 4433ms (10:56:39.702) 2026-01-20T10:56:39.705208891+00:00 stderr F Trace[1751141825]: [4.435046057s] [4.435046057s] END 2026-01-20T10:57:24.141909752+00:00 stderr F E0120 10:57:24.141780 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.141909752+00:00 stderr F E0120 10:57:24.141885 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.147423038+00:00 stderr F E0120 10:57:24.147346 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.147464989+00:00 stderr F E0120 10:57:24.147426 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2026-01-20T10:57:24.149370000+00:00 stderr F E0120 10:57:24.149309 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.149388860+00:00 stderr F E0120 10:57:24.149375 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.154904816+00:00 stderr F E0120 10:57:24.154838 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.154981588+00:00 stderr F E0120 10:57:24.154965 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.181128079+00:00 stderr F E0120 10:57:24.180307 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.181128079+00:00 stderr F E0120 10:57:24.180367 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.188108293+00:00 stderr F E0120 10:57:24.187483 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.188108293+00:00 stderr F E0120 10:57:24.187555 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: 
connection refused 2026-01-20T10:57:24.191572516+00:00 stderr F E0120 10:57:24.191549 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.191629157+00:00 stderr F E0120 10:57:24.191616 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.192995263+00:00 stderr F E0120 10:57:24.192975 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.193049864+00:00 stderr F E0120 10:57:24.193034 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.194423281+00:00 stderr F E0120 10:57:24.194399 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:24.194495123+00:00 stderr F E0120 10:57:24.194469 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.099662490+00:00 stderr F E0120 10:57:25.099596 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.099662490+00:00 stderr F E0120 10:57:25.099654 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 
10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.103277816+00:00 stderr F E0120 10:57:25.103220 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.103369428+00:00 stderr F E0120 10:57:25.103352 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.130636289+00:00 stderr F E0120 10:57:25.130569 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.130636289+00:00 stderr F E0120 10:57:25.130607 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.130817264+00:00 stderr F E0120 10:57:25.130569 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.130848835+00:00 stderr F E0120 10:57:25.130830 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.132566741+00:00 stderr F E0120 10:57:25.132542 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.132583531+00:00 stderr F E0120 10:57:25.132564 1 errors.go:77] Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.133981768+00:00 stderr F E0120 10:57:25.133956 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.134182443+00:00 stderr F E0120 10:57:25.134162 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.135163978+00:00 stderr F E0120 10:57:25.135135 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.135180029+00:00 stderr F E0120 10:57:25.135159 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.135566960+00:00 stderr F E0120 10:57:25.135543 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.135575370+00:00 stderr F E0120 10:57:25.135568 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.137246174+00:00 stderr F E0120 10:57:25.137222 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.137259054+00:00 stderr F E0120 10:57:25.137244 1 
errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.138163278+00:00 stderr F E0120 10:57:25.138134 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.138179199+00:00 stderr F E0120 10:57:25.138164 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.139854703+00:00 stderr F E0120 10:57:25.139824 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.139854703+00:00 stderr F E0120 10:57:25.139849 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.140609012+00:00 stderr F E0120 10:57:25.140585 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.140619093+00:00 stderr F E0120 10:57:25.140608 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.141878167+00:00 stderr F E0120 10:57:25.141843 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.141878167+00:00 stderr F E0120 
10:57:25.141871 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.142803351+00:00 stderr F E0120 10:57:25.142772 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.142819901+00:00 stderr F E0120 10:57:25.142805 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.146548429+00:00 stderr F E0120 10:57:25.146510 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.146548429+00:00 stderr F E0120 10:57:25.146538 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.148033159+00:00 stderr F E0120 10:57:25.147997 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.148049590+00:00 stderr F E0120 10:57:25.148028 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.149100107+00:00 stderr F E0120 10:57:25.149076 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.149113737+00:00 
stderr F E0120 10:57:25.149098 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.149358364+00:00 stderr F E0120 10:57:25.149336 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.149365694+00:00 stderr F E0120 10:57:25.149358 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.030671111+00:00 stderr F E0120 10:57:26.030616 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.030769703+00:00 stderr F E0120 10:57:26.030748 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.492186456+00:00 stderr F E0120 10:57:26.491931 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.492397981+00:00 stderr F E0120 10:57:26.492383 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:26.890645653+00:00 stderr F E0120 10:57:26.890536 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2026-01-20T10:57:26.890786907+00:00 stderr F E0120 10:57:26.890729 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:28.381463828+00:00 stderr F E0120 10:57:28.381388 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:28.381578281+00:00 stderr F E0120 10:57:28.381547 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.578287509+00:00 stderr F E0120 10:57:29.578211 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.578446574+00:00 stderr F E0120 10:57:29.578416 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.791959230+00:00 stderr F E0120 10:57:29.791871 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:29.792035982+00:00 stderr F E0120 10:57:29.792021 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:31.988483888+00:00 stderr F E0120 10:57:31.988399 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: 
connection refused 2026-01-20T10:57:31.988567800+00:00 stderr F E0120 10:57:31.988544 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.295142443+00:00 stderr F E0120 10:57:35.295030 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:35.295142443+00:00 stderr F E0120 10:57:35.295118 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.234369372+00:00 stderr F E0120 10:57:40.234304 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:40.234369372+00:00 stderr F E0120 10:57:40.234343 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.499557397+00:00 stderr F E0120 10:57:44.499502 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.499594628+00:00 stderr F E0120 10:57:44.499553 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.955584327+00:00 stderr F E0120 10:57:44.955521 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 
10.217.4.1:443: connect: connection refused 2026-01-20T10:57:44.955584327+00:00 stderr F E0120 10:57:44.955566 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.301637718+00:00 stderr F E0120 10:57:45.301575 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.301637718+00:00 stderr F E0120 10:57:45.301629 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.935777989+00:00 stderr F E0120 10:57:45.935666 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:45.935777989+00:00 stderr F E0120 10:57:45.935744 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.065192686+00:00 stderr F E0120 10:57:47.065117 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.065192686+00:00 stderr F E0120 10:57:47.065176 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log
2025-08-13T20:07:23.235321095+00:00 stdout F Copying system trust bundle 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.020450 1 feature_gate.go:239] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-08-13T20:07:25.021836665+00:00 stderr F I0813 20:07:25.020513 1 feature_gate.go:249] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021595 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstall" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021612 1 config.go:121] Ignoring unknown FeatureGate "OpenShiftPodSecurityAdmission" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021617 1 config.go:121] Ignoring unknown FeatureGate "SigstoreImageVerification" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021621 1 config.go:121] Ignoring unknown FeatureGate "SignatureStores" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021626 1 config.go:121] Ignoring unknown FeatureGate "NewOLM" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021630 1 config.go:121] Ignoring unknown FeatureGate "AzureWorkloadIdentity" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813
20:07:25.021635 1 config.go:121] Ignoring unknown FeatureGate "CSIDriverSharedResource" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021639 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallPowerVS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021716 1 config.go:121] Ignoring unknown FeatureGate "AdminNetworkPolicy" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021722 1 config.go:121] Ignoring unknown FeatureGate "Example" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021727 1 config.go:121] Ignoring unknown FeatureGate "MixedCPUsAllocation" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021731 1 config.go:121] Ignoring unknown FeatureGate "MetricsCollectionProfiles" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021736 1 config.go:121] Ignoring unknown FeatureGate "PrivateHostedZoneAWS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021740 1 config.go:121] Ignoring unknown FeatureGate "UpgradeStatus" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021745 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAzure" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021750 1 config.go:121] Ignoring unknown FeatureGate "DNSNameResolver" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021754 1 config.go:121] Ignoring unknown FeatureGate "NetworkDiagnosticsConfig" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021758 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallVSphere" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021762 1 config.go:121] Ignoring unknown FeatureGate "ExternalOIDC" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021767 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfigAPI" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021809 1 config.go:121] Ignoring unknown FeatureGate "NodeDisruptionPolicy" 
2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021818 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderGCP" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021822 1 config.go:121] Ignoring unknown FeatureGate "GCPClusterHostedDNS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021826 1 config.go:121] Ignoring unknown FeatureGate "HardwareSpeed" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021830 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfig" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021837 1 config.go:121] Ignoring unknown FeatureGate "VSphereMultiVCenters" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021841 1 config.go:121] Ignoring unknown FeatureGate "BuildCSIVolumes" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021845 1 config.go:121] Ignoring unknown FeatureGate "ChunkSizeMiB" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021860 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallNutanix" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021865 1 config.go:121] Ignoring unknown FeatureGate "VSphereStaticIPs" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021869 1 config.go:121] Ignoring unknown FeatureGate "ImagePolicy" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021898 1 config.go:121] Ignoring unknown FeatureGate "ManagedBootImages" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021907 1 config.go:121] Ignoring unknown FeatureGate "PinnedImages" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021912 1 config.go:121] Ignoring unknown FeatureGate "VSphereControlPlaneMachineSet" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021916 1 config.go:121] Ignoring unknown FeatureGate "NetworkLiveMigration" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021920 1 config.go:121] Ignoring unknown FeatureGate 
"ClusterAPIInstallOpenStack" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021924 1 config.go:121] Ignoring unknown FeatureGate "GCPLabelsTags" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021929 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIProviderOpenStack" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021933 1 config.go:121] Ignoring unknown FeatureGate "MetricsServer" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021938 1 config.go:121] Ignoring unknown FeatureGate "VSphereDriverConfiguration" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021942 1 config.go:121] Ignoring unknown FeatureGate "EtcdBackendQuota" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021946 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderExternal" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021950 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIOperatorDisableMachineHealthCheckController" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021956 1 config.go:121] Ignoring unknown FeatureGate "PlatformOperators" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021960 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallGCP" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021964 1 config.go:121] Ignoring unknown FeatureGate "OnClusterBuild" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021969 1 config.go:121] Ignoring unknown FeatureGate "VolumeGroupSnapshot" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021973 1 config.go:121] Ignoring unknown FeatureGate "ExternalRouteCertificate" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021978 1 config.go:121] Ignoring unknown FeatureGate "AlibabaPlatform" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021982 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAWS" 2025-08-13T20:07:25.021997390+00:00 
stderr F W0813 20:07:25.021986 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallIBMCloud" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021991 1 config.go:121] Ignoring unknown FeatureGate "AutomatedEtcdBackup" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.021995 1 config.go:121] Ignoring unknown FeatureGate "BareMetalLoadBalancer" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.022000 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProvider" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.022004 1 config.go:121] Ignoring unknown FeatureGate "InstallAlternateInfrastructureAWS" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022008 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderAzure" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022013 1 config.go:121] Ignoring unknown FeatureGate "MachineConfigNodes" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022017 1 config.go:121] Ignoring unknown FeatureGate "GatewayAPI" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022022 1 config.go:121] Ignoring unknown FeatureGate "InsightsOnDemandDataGather" 2025-08-13T20:07:25.025046997+00:00 stderr F I0813 20:07:25.024565 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:07:25.679468230+00:00 stderr F I0813 20:07:25.676369 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:07:25.711720245+00:00 stderr F I0813 20:07:25.709906 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735038 1 plugins.go:83] "Registered admission plugin" plugin="NamespaceLifecycle" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735540 1 plugins.go:83] "Registered admission plugin" 
plugin="ValidatingAdmissionWebhook" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735556 1 plugins.go:83] "Registered admission plugin" plugin="MutatingAdmissionWebhook" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735562 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionPolicy" 2025-08-13T20:07:25.736633209+00:00 stderr F I0813 20:07:25.736209 1 admission.go:48] Admission plugin "project.openshift.io/ProjectRequestLimit" is not configured so it will be disabled. 2025-08-13T20:07:25.773228408+00:00 stderr F I0813 20:07:25.773109 1 plugins.go:157] Loaded 5 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,build.openshift.io/BuildConfigSecretInjector,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,MutatingAdmissionWebhook. 2025-08-13T20:07:25.773228408+00:00 stderr F I0813 20:07:25.773159 1 plugins.go:160] Loaded 9 validating admission controller(s) successfully in the following order: OwnerReferencesPermissionEnforcement,build.openshift.io/BuildConfigSecretInjector,build.openshift.io/BuildByStrategy,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,quota.openshift.io/ClusterResourceQuota,route.openshift.io/RequiredRouteAnnotations,ValidatingAdmissionWebhook,ResourceQuota. 
2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.773960 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774009 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774038 1 maxinflight.go:116] "Set denominator for readonly requests" limit=3000 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774045 1 maxinflight.go:120] "Set denominator for mutating requests" limit=1500 2025-08-13T20:07:25.823566991+00:00 stderr F I0813 20:07:25.823427 1 store.go:1579] "Monitoring resource count at path" resource="builds.build.openshift.io" path="//builds" 2025-08-13T20:07:25.833390993+00:00 stderr F I0813 20:07:25.833321 1 store.go:1579] "Monitoring resource count at path" resource="buildconfigs.build.openshift.io" path="//buildconfigs" 2025-08-13T20:07:25.841593648+00:00 stderr F I0813 20:07:25.841419 1 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager 2025-08-13T20:07:25.841593648+00:00 stderr F I0813 20:07:25.841554 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.841593648+00:00 stderr F I0813 20:07:25.841566 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:25.846512949+00:00 stderr F I0813 20:07:25.846443 1 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager 2025-08-13T20:07:25.846623362+00:00 stderr F I0813 20:07:25.846572 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.846623362+00:00 stderr F I0813 20:07:25.846612 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.020963071+00:00 stderr F I0813 20:07:26.020524 1 store.go:1579] "Monitoring resource count at path" resource="routes.route.openshift.io" path="//routes" 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.024741 1 handler.go:275] Adding GroupVersion route.openshift.io v1 to 
ResourceManager 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.025146 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.025164 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.046650937+00:00 stderr F I0813 20:07:26.046552 1 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.openshift.io" path="//rangeallocations" 2025-08-13T20:07:26.057699754+00:00 stderr F I0813 20:07:26.057587 1 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.057933581+00:00 stderr F I0813 20:07:26.057844 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.057933581+00:00 stderr F I0813 20:07:26.057912 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.074950239+00:00 stderr F I0813 20:07:26.071717 1 store.go:1579] "Monitoring resource count at path" resource="deploymentconfigs.apps.openshift.io" path="//deploymentconfigs" 2025-08-13T20:07:26.094870880+00:00 stderr F I0813 20:07:26.094722 1 cacher.go:451] cacher (buildconfigs.build.openshift.io): initialized 2025-08-13T20:07:26.095282612+00:00 stderr F I0813 20:07:26.095244 1 cacher.go:451] cacher (rangeallocations.security.openshift.io): initialized 2025-08-13T20:07:26.095351024+00:00 stderr F I0813 20:07:26.095335 1 reflector.go:351] Caches populated for *security.RangeAllocation from storage/cacher.go:/rangeallocations 2025-08-13T20:07:26.095544449+00:00 stderr F I0813 20:07:26.095525 1 cacher.go:451] cacher (deploymentconfigs.apps.openshift.io): initialized 2025-08-13T20:07:26.095588340+00:00 stderr F I0813 20:07:26.095575 1 reflector.go:351] Caches populated for *apps.DeploymentConfig from storage/cacher.go:/deploymentconfigs 2025-08-13T20:07:26.096232139+00:00 stderr F I0813 20:07:26.096203 1 cacher.go:451] cacher (builds.build.openshift.io): initialized 
2025-08-13T20:07:26.099095701+00:00 stderr F I0813 20:07:26.099067 1 reflector.go:351] Caches populated for *build.Build from storage/cacher.go:/builds 2025-08-13T20:07:26.099946485+00:00 stderr F I0813 20:07:26.099859 1 reflector.go:351] Caches populated for *build.BuildConfig from storage/cacher.go:/buildconfigs 2025-08-13T20:07:26.101696446+00:00 stderr F I0813 20:07:26.101627 1 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.102704534+00:00 stderr F I0813 20:07:26.102633 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.102704534+00:00 stderr F I0813 20:07:26.102685 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.132331104+00:00 stderr F I0813 20:07:26.132237 1 cacher.go:451] cacher (routes.route.openshift.io): initialized 2025-08-13T20:07:26.132331104+00:00 stderr F I0813 20:07:26.132293 1 reflector.go:351] Caches populated for *route.Route from storage/cacher.go:/routes 2025-08-13T20:07:26.173127094+00:00 stderr F I0813 20:07:26.173021 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca, incoming err: 2025-08-13T20:07:26.173127094+00:00 stderr F I0813 20:07:26.173074 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173126 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123, incoming err: 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173132 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173146 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/default-route-openshift-image-registry.apps-crc.testing, incoming err: 2025-08-13T20:07:26.173252247+00:00 stderr F 
I0813 20:07:26.173208 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc..5000, incoming err: 2025-08-13T20:07:26.173362670+00:00 stderr F I0813 20:07:26.173322 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 2025-08-13T20:07:26.173467893+00:00 stderr F I0813 20:07:26.173425 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..data, incoming err: 2025-08-13T20:07:26.173467893+00:00 stderr F I0813 20:07:26.173453 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..data 2025-08-13T20:07:26.173480724+00:00 stderr F I0813 20:07:26.173473 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing, incoming err: 2025-08-13T20:07:26.173490634+00:00 stderr F I0813 20:07:26.173479 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:07:26.173502834+00:00 stderr F I0813 20:07:26.173496 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000, incoming err: 2025-08-13T20:07:26.173512675+00:00 stderr F I0813 20:07:26.173501 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000 2025-08-13T20:07:26.173522885+00:00 stderr F I0813 20:07:26.173513 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 2025-08-13T20:07:26.173522885+00:00 stderr F I0813 20:07:26.173518 1 apiserver.go:161] skipping dir or symlink: 
/var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:07:26.229558662+00:00 stderr F I0813 20:07:26.229472 1 store.go:1579] "Monitoring resource count at path" resource="images.image.openshift.io" path="//images" 2025-08-13T20:07:26.262353442+00:00 stderr F I0813 20:07:26.262258 1 store.go:1579] "Monitoring resource count at path" resource="imagestreams.image.openshift.io" path="//imagestreams" 2025-08-13T20:07:26.345720652+00:00 stderr F I0813 20:07:26.345645 1 cacher.go:451] cacher (imagestreams.image.openshift.io): initialized 2025-08-13T20:07:26.349918072+00:00 stderr F I0813 20:07:26.349863 1 reflector.go:351] Caches populated for *image.ImageStream from storage/cacher.go:/imagestreams 2025-08-13T20:07:26.369736351+00:00 stderr F I0813 20:07:26.369613 1 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.370657947+00:00 stderr F W0813 20:07:26.370604 1 genericapiserver.go:756] Skipping API image.openshift.io/1.0 because it has no resources. 2025-08-13T20:07:26.370657947+00:00 stderr F W0813 20:07:26.370641 1 genericapiserver.go:756] Skipping API image.openshift.io/pre012 because it has no resources. 
2025-08-13T20:07:26.371867112+00:00 stderr F I0813 20:07:26.371470 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.372350326+00:00 stderr F I0813 20:07:26.372293 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.385405080+00:00 stderr F I0813 20:07:26.385348 1 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.385561074+00:00 stderr F I0813 20:07:26.385511 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.385561074+00:00 stderr F I0813 20:07:26.385546 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.416983185+00:00 stderr F I0813 20:07:26.416673 1 store.go:1579] "Monitoring resource count at path" resource="templates.template.openshift.io" path="//templates" 2025-08-13T20:07:26.436443863+00:00 stderr F I0813 20:07:26.436348 1 store.go:1579] "Monitoring resource count at path" resource="templateinstances.template.openshift.io" path="//templateinstances" 2025-08-13T20:07:26.443011912+00:00 stderr F I0813 20:07:26.442942 1 cacher.go:451] cacher (templateinstances.template.openshift.io): initialized 2025-08-13T20:07:26.443040462+00:00 stderr F I0813 20:07:26.443009 1 reflector.go:351] Caches populated for *template.TemplateInstance from storage/cacher.go:/templateinstances 2025-08-13T20:07:26.455495849+00:00 stderr F I0813 20:07:26.454602 1 store.go:1579] "Monitoring resource count at path" resource="brokertemplateinstances.template.openshift.io" path="//brokertemplateinstances" 2025-08-13T20:07:26.458248898+00:00 stderr F I0813 20:07:26.458174 1 cacher.go:451] cacher (templates.template.openshift.io): initialized 2025-08-13T20:07:26.458248898+00:00 stderr F I0813 20:07:26.458240 1 reflector.go:351] Caches populated for *template.Template from storage/cacher.go:/templates 2025-08-13T20:07:26.460369019+00:00 stderr F I0813 20:07:26.460124 1 cacher.go:451] cacher (images.image.openshift.io): initialized 
2025-08-13T20:07:26.460369019+00:00 stderr F I0813 20:07:26.460182 1 reflector.go:351] Caches populated for *image.Image from storage/cacher.go:/images 2025-08-13T20:07:26.463958292+00:00 stderr F I0813 20:07:26.463908 1 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.464163788+00:00 stderr F I0813 20:07:26.464118 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.464163788+00:00 stderr F I0813 20:07:26.464149 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.464408955+00:00 stderr F I0813 20:07:26.464364 1 cacher.go:451] cacher (brokertemplateinstances.template.openshift.io): initialized 2025-08-13T20:07:26.464455926+00:00 stderr F I0813 20:07:26.464418 1 reflector.go:351] Caches populated for *template.BrokerTemplateInstance from storage/cacher.go:/brokertemplateinstances 2025-08-13T20:07:26.484150241+00:00 stderr F I0813 20:07:26.483684 1 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.484232063+00:00 stderr F I0813 20:07:26.483850 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.484232063+00:00 stderr F I0813 20:07:26.483908 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:27.540162388+00:00 stderr F I0813 20:07:27.537590 1 server.go:50] Starting master on 0.0.0.0:8443 (v0.0.0-master+$Format:%H$) 2025-08-13T20:07:27.540162388+00:00 stderr F I0813 20:07:27.537839 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551033 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551069 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 
20:07:27.551125 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551144 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551172 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551226 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551313 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.551267526 +0000 UTC))" 2025-08-13T20:07:27.551645367+00:00 stderr F I0813 20:07:27.551598 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.551574325 +0000 UTC))" 2025-08-13T20:07:27.551660568+00:00 stderr F I0813 20:07:27.551646 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:07:27.551725429+00:00 stderr F I0813 20:07:27.551686 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be 
initiated 2025-08-13T20:07:27.551740480+00:00 stderr F I0813 20:07:27.551729 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:07:27.551972457+00:00 stderr F I0813 20:07:27.551937 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:07:27.552988366+00:00 stderr F I0813 20:07:27.552961 1 openshift_apiserver.go:593] Using default project node label selector: 2025-08-13T20:07:27.553094279+00:00 stderr F I0813 20:07:27.553077 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-08-13T20:07:27.572047802+00:00 stderr F I0813 20:07:27.571750 1 healthz.go:261] poststarthook/authorization.openshift.io-bootstrapclusterroles,poststarthook/authorization.openshift.io-ensurenodebootstrap-sa check failed: healthz 2025-08-13T20:07:27.572047802+00:00 stderr F [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: not finished 2025-08-13T20:07:27.572047802+00:00 stderr F [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: not finished 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575071 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575487 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575951 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.576187 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.577214520+00:00 stderr F I0813 20:07:27.577137 
1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.577962422+00:00 stderr F I0813 20:07:27.577938 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.578227459+00:00 stderr F I0813 20:07:27.578206 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.588645468+00:00 stderr F I0813 20:07:27.585074 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.591386827+00:00 stderr F I0813 20:07:27.591347 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.593579579+00:00 stderr F I0813 20:07:27.593471 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.593949880+00:00 stderr F I0813 20:07:27.593924 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.595380011+00:00 stderr F I0813 20:07:27.595134 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.596744490+00:00 stderr F I0813 20:07:27.596714 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.598120860+00:00 stderr F I0813 20:07:27.598093 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.600147248+00:00 stderr F I0813 20:07:27.600118 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 
2025-08-13T20:07:27.614977173+00:00 stderr F I0813 20:07:27.611062 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.614977173+00:00 stderr F I0813 20:07:27.611978 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.616675152+00:00 stderr F I0813 20:07:27.615986 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.618291068+00:00 stderr F I0813 20:07:27.617171 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.620571273+00:00 stderr F I0813 20:07:27.620521 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.627531003+00:00 stderr F I0813 20:07:27.627368 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.651381577+00:00 stderr F I0813 20:07:27.651267 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:27.651381577+00:00 stderr F I0813 20:07:27.651361 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:07:27.652189850+00:00 stderr F I0813 20:07:27.651622 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:27.652189850+00:00 stderr F I0813 20:07:27.651749 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.651709916 +0000 UTC))" 2025-08-13T20:07:27.652219171+00:00 stderr F I0813 20:07:27.652193 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.652171329 +0000 UTC))" 2025-08-13T20:07:27.652622852+00:00 stderr F I0813 20:07:27.652480 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.652463258 +0000 UTC))" 2025-08-13T20:07:27.654845316+00:00 stderr F I0813 20:07:27.654701 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:27.654653201 +0000 UTC))" 2025-08-13T20:07:27.654873837+00:00 stderr F I0813 20:07:27.654844 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC 
to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:27.654817125 +0000 UTC))" 2025-08-13T20:07:27.654922278+00:00 stderr F I0813 20:07:27.654901 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:27.654853046 +0000 UTC))" 2025-08-13T20:07:27.654938689+00:00 stderr F I0813 20:07:27.654928 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:27.654912818 +0000 UTC))" 2025-08-13T20:07:27.654953739+00:00 stderr F I0813 20:07:27.654946 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.654934669 +0000 UTC))" 2025-08-13T20:07:27.654980680+00:00 stderr F I0813 20:07:27.654970 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.654952329 +0000 UTC))" 2025-08-13T20:07:27.655065712+00:00 stderr F I0813 20:07:27.654994 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.65497726 +0000 UTC))" 2025-08-13T20:07:27.655065712+00:00 stderr F I0813 20:07:27.655043 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.655025441 +0000 UTC))" 2025-08-13T20:07:27.655082973+00:00 stderr F I0813 20:07:27.655067 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:27.655050672 +0000 UTC))" 2025-08-13T20:07:27.655094843+00:00 stderr F I0813 20:07:27.655086 1 tlsconfig.go:178] "Loaded client CA" index=9 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:27.655075233 +0000 UTC))" 2025-08-13T20:07:27.655130624+00:00 stderr F I0813 20:07:27.655107 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.655092493 +0000 UTC))" 2025-08-13T20:07:27.658235403+00:00 stderr F I0813 20:07:27.658033 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.658011927 +0000 UTC))" 2025-08-13T20:07:27.658620234+00:00 stderr F I0813 20:07:27.658438 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.658417289 +0000 UTC))" 2025-08-13T20:07:27.661626220+00:00 stderr F I0813 20:07:27.661547 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 
2025-08-13T20:07:27.705870569+00:00 stderr F I0813 20:07:27.703063 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.851120933+00:00 stderr F I0813 20:07:27.851060 1 reflector.go:351] Caches populated for *etcd.ImageLayers from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:08:03.895683401+00:00 stderr F E0813 20:08:03.895372 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input 2025-08-13T20:08:03.914238963+00:00 stderr F I0813 20:08:03.913112 1 trace.go:236] Trace[499527712]: "Create" accept:application/json, */*,audit-id:064330a5-92f0-4ee6-b57c-a4a65def5eee,client:10.217.0.46,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:cluster-samples-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:08:03.000) (total time: 912ms): 2025-08-13T20:08:03.914238963+00:00 stderr F Trace[499527712]: ---"Write to database call succeeded" len:436 908ms (20:08:03.911) 2025-08-13T20:08:03.914238963+00:00 stderr F Trace[499527712]: [912.494531ms] [912.494531ms] END 2025-08-13T20:08:42.680410211+00:00 stderr F E0813 20:08:42.679639 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.680410211+00:00 stderr F E0813 20:08:42.680000 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.687005750+00:00 stderr F E0813 20:08:42.686917 1 webhook.go:253] Failed to make webhook authorizer request: 
Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.687221417+00:00 stderr F E0813 20:08:42.687005 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.691993933+00:00 stderr F E0813 20:08:42.691953 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.692173599+00:00 stderr F E0813 20:08:42.692135 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.695301828+00:00 stderr F E0813 20:08:42.695256 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.695398301+00:00 stderr F E0813 20:08:42.695380 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.747725021+00:00 stderr F E0813 20:08:42.747669 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.747938077+00:00 stderr F E0813 20:08:42.747888 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.752690934+00:00 stderr F E0813 20:08:42.752657 1 webhook.go:253] Failed to make webhook 
authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.752819277+00:00 stderr F E0813 20:08:42.752753 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.756291127+00:00 stderr F E0813 20:08:42.756265 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.756361749+00:00 stderr F E0813 20:08:42.756345 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.760296242+00:00 stderr F E0813 20:08:42.760195 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.760377704+00:00 stderr F E0813 20:08:42.760333 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.763271847+00:00 stderr F E0813 20:08:42.763191 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.763293148+00:00 stderr F E0813 20:08:42.763279 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.517637905+00:00 stderr F E0813 20:08:43.517509 1 webhook.go:253] Failed to 
make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.517637905+00:00 stderr F E0813 20:08:43.517602 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.522309689+00:00 stderr F E0813 20:08:43.521746 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.522309689+00:00 stderr F E0813 20:08:43.521870 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.525432328+00:00 stderr F E0813 20:08:43.525397 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.525530711+00:00 stderr F E0813 20:08:43.525514 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.534683723+00:00 stderr F E0813 20:08:43.534064 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.534683723+00:00 stderr F E0813 20:08:43.534151 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.536467764+00:00 stderr F E0813 20:08:43.536396 1 
webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.536542457+00:00 stderr F E0813 20:08:43.536487 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.546050469+00:00 stderr F E0813 20:08:43.545968 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.546185853+00:00 stderr F E0813 20:08:43.546164 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.582191905+00:00 stderr F E0813 20:08:43.581025 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.582191905+00:00 stderr F E0813 20:08:43.581113 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.597845344+00:00 stderr F E0813 20:08:43.597742 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.598038620+00:00 stderr F E0813 20:08:43.598016 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.605990648+00:00 stderr F E0813 
20:08:43.605034 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.605990648+00:00 stderr F E0813 20:08:43.605299 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.617762435+00:00 stderr F E0813 20:08:43.617714 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.618463275+00:00 stderr F E0813 20:08:43.618441 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.631235552+00:00 stderr F E0813 20:08:43.631111 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.631235552+00:00 stderr F E0813 20:08:43.631195 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.633063684+00:00 stderr F E0813 20:08:43.633030 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.633179007+00:00 stderr F E0813 20:08:43.633158 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.664941568+00:00 
stderr F E0813 20:08:43.664815 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.664941568+00:00 stderr F E0813 20:08:43.664883 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.665028590+00:00 stderr F E0813 20:08:43.664756 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.665098482+00:00 stderr F E0813 20:08:43.665082 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.670473807+00:00 stderr F E0813 20:08:43.670001 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.670473807+00:00 stderr F E0813 20:08:43.670073 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.671117565+00:00 stderr F E0813 20:08:43.670938 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.671117565+00:00 stderr F E0813 20:08:43.670992 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:43.674330557+00:00 stderr F E0813 20:08:43.674193 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.674330557+00:00 stderr F E0813 20:08:43.674263 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.677569540+00:00 stderr F E0813 20:08:43.677422 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.677569540+00:00 stderr F E0813 20:08:43.677516 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.518948113+00:00 stderr F E0813 20:08:44.518836 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.519334454+00:00 stderr F E0813 20:08:44.519105 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.255174302+00:00 stderr F E0813 20:08:45.255113 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.255495301+00:00 stderr F E0813 20:08:45.255471 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:08:45.782377827+00:00 stderr F E0813 20:08:45.782289 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.782557402+00:00 stderr F E0813 20:08:45.782437 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.639857322+00:00 stderr F E0813 20:08:47.638317 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.639857322+00:00 stderr F E0813 20:08:47.638519 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.879284687+00:00 stderr F E0813 20:08:47.879093 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.879455542+00:00 stderr F E0813 20:08:47.879342 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159244084+00:00 stderr F E0813 20:08:48.159178 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159371267+00:00 stderr F E0813 20:08:48.159305 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159446559+00:00 stderr F E0813 20:08:48.159400 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159477540+00:00 stderr F E0813 20:08:48.159408 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.168062356+00:00 stderr F E0813 20:08:48.168036 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.168134548+00:00 stderr F E0813 20:08:48.168119 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.200477386+00:00 stderr F E0813 20:08:48.200372 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.200694172+00:00 stderr F E0813 20:08:48.200614 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.204462580+00:00 stderr F E0813 20:08:48.204390 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.204605784+00:00 stderr F E0813 20:08:48.204549 1 errors.go:77] Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.215371863+00:00 stderr F E0813 20:08:48.215286 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.215519747+00:00 stderr F E0813 20:08:48.215463 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.236359945+00:00 stderr F E0813 20:08:48.236258 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.236598911+00:00 stderr F E0813 20:08:48.236510 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.268222328+00:00 stderr F E0813 20:08:48.268104 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.268717872+00:00 stderr F E0813 20:08:48.268693 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.482943234+00:00 stderr F E0813 20:08:48.480051 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.482943234+00:00 stderr F E0813 20:08:48.480202 1 
errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.568999762+00:00 stderr F E0813 20:08:48.568880 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570530536+00:00 stderr F E0813 20:08:48.569033 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.580451170+00:00 stderr F E0813 20:08:48.580403 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.580549253+00:00 stderr F E0813 20:08:48.580532 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.012194089+00:00 stderr F E0813 20:08:49.012114 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.012219579+00:00 stderr F E0813 20:08:49.012201 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.326890131+00:00 stderr F E0813 20:08:49.326755 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.327013085+00:00 stderr F E0813 
20:08:49.326884 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.673518909+00:00 stderr F E0813 20:08:49.673432 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.673628263+00:00 stderr F E0813 20:08:49.673527 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.723191914+00:00 stderr F E0813 20:08:49.723034 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.723191914+00:00 stderr F E0813 20:08:49.723164 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.819041402+00:00 stderr F E0813 20:08:49.818928 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.819041402+00:00 stderr F E0813 20:08:49.818993 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.824400875+00:00 stderr F E0813 20:08:49.824312 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.824400875+00:00 
stderr F E0813 20:08:49.824372 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.955995548+00:00 stderr F E0813 20:08:49.955762 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.955995548+00:00 stderr F E0813 20:08:49.955940 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.066716563+00:00 stderr F E0813 20:08:50.066586 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.066716563+00:00 stderr F E0813 20:08:50.066667 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.139565671+00:00 stderr F E0813 20:08:50.138008 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.139565671+00:00 stderr F E0813 20:08:50.138109 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.306224180+00:00 stderr F E0813 20:08:51.306053 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:51.306224180+00:00 stderr F E0813 20:08:51.306140 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.772305213+00:00 stderr F E0813 20:08:51.771570 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.772305213+00:00 stderr F E0813 20:08:51.772201 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.803185008+00:00 stderr F E0813 20:08:51.803111 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.803239480+00:00 stderr F E0813 20:08:51.803183 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.176496652+00:00 stderr F E0813 20:08:52.176361 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.176496652+00:00 stderr F E0813 20:08:52.176478 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.487428976+00:00 stderr F E0813 20:08:52.487371 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:08:52.487649713+00:00 stderr F E0813 20:08:52.487587 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.496487466+00:00 stderr F E0813 20:08:52.496452 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.496672701+00:00 stderr F E0813 20:08:52.496648 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.810637063+00:00 stderr F E0813 20:08:52.810556 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.810637063+00:00 stderr F E0813 20:08:52.810624 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.814600507+00:00 stderr F E0813 20:08:52.814523 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.814600507+00:00 stderr F E0813 20:08:52.814589 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.029824327+00:00 stderr F E0813 20:08:53.029690 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.029975582+00:00 stderr F E0813 20:08:53.029890 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.010145296+00:00 stderr F E0813 20:08:56.009052 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.010145296+00:00 stderr F E0813 20:08:56.009245 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.600006948+00:00 stderr F E0813 20:08:56.599861 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.600116191+00:00 stderr F E0813 20:08:56.600000 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.304175456+00:00 stderr F E0813 20:08:57.304076 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.304227648+00:00 stderr F E0813 20:08:57.304172 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.337506012+00:00 stderr F E0813 20:08:57.337392 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.337506012+00:00 stderr F E0813 20:08:57.337457 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.610713075+00:00 stderr F E0813 20:08:57.610572 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.610713075+00:00 stderr F E0813 20:08:57.610688 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.794263737+00:00 stderr F E0813 20:08:57.794096 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.794263737+00:00 stderr F E0813 20:08:57.794186 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.658566789+00:00 stderr F E0813 20:08:58.658408 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.658633531+00:00 stderr F E0813 20:08:58.658563 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.713140353+00:00 stderr F E0813 20:08:58.713075 1 webhook.go:253] Failed to make webhook authorizer 
request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.713287518+00:00 stderr F E0813 20:08:58.713269 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.742951748+00:00 stderr F E0813 20:08:58.742866 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.743079232+00:00 stderr F E0813 20:08:58.743062 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:34.148973349+00:00 stderr F I0813 20:09:34.148730 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:34.869006983+00:00 stderr F I0813 20:09:34.868932 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:35.576449306+00:00 stderr F I0813 20:09:35.576346 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:35.784050738+00:00 stderr F I0813 20:09:35.783984 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:37.050415975+00:00 stderr F I0813 20:09:37.050268 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:37.312150859+00:00 stderr F I0813 20:09:37.310042 1 reflector.go:351] Caches populated for *v1.Namespace from 
k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:39.898706978+00:00 stderr F I0813 20:09:39.898600 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:46.110149745+00:00 stderr F I0813 20:09:46.110030 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:48.834429082+00:00 stderr F I0813 20:09:48.833277 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:48.892728974+00:00 stderr F I0813 20:09:48.892672 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:51.791420782+00:00 stderr F I0813 20:09:51.791292 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:53.442499030+00:00 stderr F I0813 20:09:53.435370 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:56.934371515+00:00 stderr F I0813 20:09:56.934268 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:59.692452641+00:00 stderr F I0813 20:09:59.692353 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:03.923274763+00:00 stderr F I0813 20:10:03.923110 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:05.041617317+00:00 stderr F I0813 20:10:05.041496 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:13.806297546+00:00 stderr F I0813 20:10:13.806192 1 reflector.go:351] Caches 
populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:14.139715366+00:00 stderr F I0813 20:10:14.139628 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:35.573370085+00:00 stderr F I0813 20:10:35.573172 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:35.662360187+00:00 stderr F I0813 20:10:35.662208 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:36.571610796+00:00 stderr F I0813 20:10:36.571495 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:17:34.122353834+00:00 stderr F I0813 20:17:34.118870 1 trace.go:236] Trace[1693487721]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1fe089cf-6742-4b4d-a354-6690bd53f61e,client:10.217.0.19,api-group:route.openshift.io,api-version:v1,name:oauth-openshift,subresource:,namespace:openshift-authentication,protocol:HTTP/2.0,resource:routes,scope:resource,url:/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:17:33.447) (total time: 671ms): 2025-08-13T20:17:34.122353834+00:00 stderr F Trace[1693487721]: ---"About to write a response" 670ms (20:17:34.118) 2025-08-13T20:17:34.122353834+00:00 stderr F Trace[1693487721]: [671.040603ms] [671.040603ms] END 2025-08-13T20:18:26.366246249+00:00 stderr F I0813 20:18:26.362702 1 trace.go:236] Trace[1917860378]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:07499f87-1a86-4458-a0a3-18f0c0d9c932,client:10.217.0.19,api-group:route.openshift.io,api-version:v1,name:oauth-openshift,subresource:,namespace:openshift-authentication,protocol:HTTP/2.0,resource:routes,scope:resource,url:/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:18:25.572) (total time: 789ms): 2025-08-13T20:18:26.366246249+00:00 stderr F Trace[1917860378]: ---"About to write a response" 788ms (20:18:26.360) 2025-08-13T20:18:26.366246249+00:00 stderr F Trace[1917860378]: [789.800313ms] [789.800313ms] END 2025-08-13T20:18:34.542003345+00:00 stderr F E0813 20:18:34.540993 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input 2025-08-13T20:18:34.763501320+00:00 stderr F I0813 20:18:34.762714 1 trace.go:236] Trace[2007589504]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:05d22fbd-a8ec-46ef-8a2c-3a920dd334ea,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:18:33.432) (total time: 1330ms): 2025-08-13T20:18:34.763501320+00:00 stderr F Trace[2007589504]: ---"Write to database call succeeded" len:287 1316ms (20:18:34.751) 2025-08-13T20:18:34.763501320+00:00 stderr F Trace[2007589504]: [1.330074082s] [1.330074082s] END 2025-08-13T20:22:18.881545212+00:00 stderr F E0813 20:22:18.880539 1 strategy.go:60] unable to parse manifest for 
"sha256:5f73c1b804b7ff63f61151b4f194fe45c645de27671a182582eac8b3fcb30dd4": unexpected end of JSON input 2025-08-13T20:22:18.903663024+00:00 stderr F I0813 20:22:18.903580 1 trace.go:236] Trace[1121898041]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:da392399-76ee-46af-89a0-9e901be9ab96,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:22:18.339) (total time: 563ms): 2025-08-13T20:22:18.903663024+00:00 stderr F Trace[1121898041]: ---"Write to database call succeeded" len:274 562ms (20:22:18.902) 2025-08-13T20:22:18.903663024+00:00 stderr F Trace[1121898041]: [563.883166ms] [563.883166ms] END 2025-08-13T20:33:33.938260917+00:00 stderr F E0813 20:33:33.937966 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input 2025-08-13T20:33:33.953413383+00:00 stderr F I0813 20:33:33.951719 1 trace.go:236] Trace[1496715544]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:11e9111b-024c-48db-a0d2-0ce5b8dbf5ab,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:33:33.343) (total time: 607ms): 2025-08-13T20:33:33.953413383+00:00 stderr F Trace[1496715544]: ---"Write to database call succeeded" len:287 606ms (20:33:33.950) 
2025-08-13T20:33:33.953413383+00:00 stderr F Trace[1496715544]: [607.831872ms] [607.831872ms] END 2025-08-13T20:41:04.002220584+00:00 stderr F E0813 20:41:04.001878 1 strategy.go:60] unable to parse manifest for "sha256:5f73c1b804b7ff63f61151b4f194fe45c645de27671a182582eac8b3fcb30dd4": unexpected end of JSON input 2025-08-13T20:41:04.014897090+00:00 stderr F I0813 20:41:04.013060 1 trace.go:236] Trace[586486215]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:934f2366-fb55-42fc-80ba-80239bd09acf,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:41:03.334) (total time: 678ms): 2025-08-13T20:41:04.014897090+00:00 stderr F Trace[586486215]: ---"Write to database call succeeded" len:274 676ms (20:41:04.011) 2025-08-13T20:41:04.014897090+00:00 stderr F Trace[586486215]: [678.206432ms] [678.206432ms] END 2025-08-13T20:42:36.314535278+00:00 stderr F I0813 20:42:36.313683 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.321893450+00:00 stderr F I0813 20:42:36.321625 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.322135567+00:00 stderr F I0813 20:42:36.322017 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382031144+00:00 stderr F I0813 20:42:36.316915 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.414129159+00:00 stderr F I0813 20:42:36.316948 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.415028545+00:00 stderr F I0813 20:42:36.316971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415411526+00:00 stderr F I0813 20:42:36.316985 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415672034+00:00 stderr F I0813 20:42:36.316998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415962002+00:00 stderr F I0813 20:42:36.317012 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437615566+00:00 stderr F I0813 20:42:36.317023 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.441272512+00:00 stderr F I0813 20:42:36.317122 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.441877979+00:00 stderr F I0813 20:42:36.317139 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442279471+00:00 stderr F I0813 20:42:36.317166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442527238+00:00 stderr F I0813 20:42:36.317179 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445592236+00:00 stderr F I0813 20:42:36.317191 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446068460+00:00 stderr F I0813 20:42:36.317208 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446335858+00:00 stderr F I0813 20:42:36.317253 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.457722176+00:00 stderr F I0813 20:42:36.317274 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317307 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317326 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:38.033451965+00:00 stderr F I0813 20:42:38.032809 1 genericapiserver.go:689] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:38.033451965+00:00 stderr F I0813 20:42:38.032891 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:38.035510985+00:00 stderr F I0813 20:42:38.034578 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving 2025-08-13T20:42:38.035510985+00:00 stderr F I0813 20:42:38.032881 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks has completed 2025-08-13T20:42:38.041608500+00:00 stderr F W0813 20:42:38.041489 1 genericapiserver.go:1060] failed to create event openshift-apiserver/apiserver-7fc54b8dd7-d2bhp.185b6e4d4a884464: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/events": dial tcp 10.217.4.1:443: connect: connection refused
[tar entries for pod openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755 under home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/: fix-audit-permissions/ (directory), fix-audit-permissions/1.log (empty), fix-audit-permissions/0.log (empty), openshift-apiserver-check-endpoints/ (directory)]
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.log:
2025-08-13T20:07:24.772404583+00:00 stderr F W0813 20:07:24.771967 1 cmd.go:245] Using insecure, self-signed certificates 2025-08-13T20:07:24.777288853+00:00 stderr F I0813 20:07:24.776657 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755115644 cert, and key in /tmp/serving-cert-2464900357/serving-signer.crt, /tmp/serving-cert-2464900357/serving-signer.key 2025-08-13T20:07:25.331597096+00:00 stderr F I0813 20:07:25.325173 1 observer_polling.go:159] Starting file observer 2025-08-13T20:07:25.420711131+00:00 stderr F I0813 20:07:25.420175 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-08-13T20:07:25.435699481+00:00 stderr F I0813 20:07:25.435631 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" 2025-08-13T20:07:26.555451245+00:00 stderr F I0813 20:07:26.516921 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569648 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569692 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569711 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569717 1 maxinflight.go:120] "Set denominator for mutating requests"
limit=200 2025-08-13T20:07:26.603546134+00:00 stderr F I0813 20:07:26.602166 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:07:26.603546134+00:00 stderr F I0813 20:07:26.602519 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:07:26.603546134+00:00 stderr F W0813 20:07:26.602556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:07:26.603546134+00:00 stderr F W0813 20:07:26.602565 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622290 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622370 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622410 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622447 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.623139 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622993 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.623361 1 shared_informer.go:311] Waiting for caches to sync for 
RequestHeaderAuthRequestController 2025-08-13T20:07:26.626915804+00:00 stderr F I0813 20:07:26.626743 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-08-13T20:07:26.631294090+00:00 stderr F I0813 20:07:26.631213 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.631294090+00:00 stderr F I0813 20:07:26.631217 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.633355789+00:00 stderr F I0813 20:07:26.631563 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.646267679+00:00 stderr F I0813 20:07:26.646177 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.646104804 +0000 UTC))" 2025-08-13T20:07:26.646669191+00:00 stderr F I0813 20:07:26.646580 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.646549117 +0000 UTC))" 2025-08-13T20:07:26.646669191+00:00 stderr F I0813 20:07:26.646640 1 secure_serving.go:213] Serving securely on [::]:17698 2025-08-13T20:07:26.646684411+00:00 stderr F I0813 20:07:26.646675 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:07:26.646754703+00:00 stderr F I0813 20:07:26.646702 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726158 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726391 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726754 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:26.726710525 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727133 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727492 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:26.726770897 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727561 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.727525769 +0000 UTC))" 
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727583 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.72756897 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727602 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727589171 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727622 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727608371 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727641 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:07:26.727627762 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727660 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727646862 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727682 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:26.727668973 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727700 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:26.727688594 +0000 UTC))" 2025-08-13T20:07:26.733341906+00:00 stderr F I0813 20:07:26.733256 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.733224312 +0000 UTC))" 2025-08-13T20:07:26.733905142+00:00 stderr F I0813 20:07:26.733831 1 
named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.733764008 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734252 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:26.734215741 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734304 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:26.734285943 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734325 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.734311043 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734345 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.734332014 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734392 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734378285 +0000 UTC))" 2025-08-13T20:07:26.734438267+00:00 stderr F I0813 20:07:26.734411 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734398416 +0000 UTC))" 2025-08-13T20:07:26.734448417+00:00 stderr F I0813 20:07:26.734439 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734416206 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734459 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734445447 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734505 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:26.734487998 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734535 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:26.734523269 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734570 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.7345425 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734901 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.734860739 +0000 UTC))" 2025-08-13T20:07:26.735531588+00:00 stderr F I0813 20:07:26.735199 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.735180778 +0000 UTC))" 2025-08-13T20:07:27.249549866+00:00 stderr F I0813 20:07:27.249414 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:27.328082557+00:00 stderr F I0813 20:07:27.327932 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-08-13T20:07:27.328258672+00:00 stderr F I0813 20:07:27.328237 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2025-08-13T20:07:27.328386936+00:00 stderr F I0813 20:07:27.328372 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-08-13T20:07:27.329224680+00:00 stderr F I0813 20:07:27.329204 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-08-13T20:07:27.329332243+00:00 stderr F I0813 20:07:27.329318 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 
2025-08-13T20:07:27.340871004+00:00 stderr F I0813 20:07:27.330220 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-08-13T20:07:27.349559893+00:00 stderr F I0813 20:07:27.349500 1 base_controller.go:73] Caches are synced for check-endpoints 2025-08-13T20:07:27.349665676+00:00 stderr F I0813 20:07:27.349644 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 2025-08-13T20:07:27.349838191+00:00 stderr F I0813 20:07:27.330947 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-08-13T20:07:27.349909243+00:00 stderr F I0813 20:07:27.331010 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2025-08-13T20:07:27.349959345+00:00 stderr F I0813 20:07:27.349941 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-08-13T20:07:27.350053037+00:00 stderr F I0813 20:07:27.348318 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:27.350318915+00:00 stderr F I0813 20:07:27.349376 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.535505534+00:00 stderr F I0813 20:09:38.533238 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.910861766+00:00 stderr F I0813 20:09:38.910681 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.325177351+00:00 stderr F I0813 20:09:48.324483 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.727563139+00:00 stderr F I0813 20:09:49.726230 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:09:55.459037956+00:00 stderr F I0813 20:09:55.458404 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:58.082039310+00:00 stderr F I0813 20:09:58.081099 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.316616768+00:00 stderr F I0813 20:42:36.315144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319874512+00:00 stderr F I0813 20:42:36.319513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.320954413+00:00 stderr F I0813 20:42:36.320719 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.321202490+00:00 stderr F I0813 20:42:36.321099 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.323320731+00:00 stderr F I0813 20:42:36.322202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.323320731+00:00 stderr F I0813 20:42:36.322520 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.888191757+00:00 stderr F I0813 20:42:37.887497 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:37.888191757+00:00 stderr F I0813 20:42:37.888159 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:37.888314061+00:00 stderr F I0813 20:42:37.888189 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:37.888514306+00:00 stderr F I0813 20:42:37.888377 1 base_controller.go:172] Shutting down check-endpoints ... 
2025-08-13T20:42:37.888753863+00:00 stderr F I0813 20:42:37.888529 1 base_controller.go:172] Shutting down CheckEndpointsStop ...
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.log:
2026-01-20T10:49:39.104793736+00:00 stderr F W0120 10:49:39.104020 1 cmd.go:245] Using insecure, self-signed certificates 2026-01-20T10:49:39.104921600+00:00 stderr F I0120 10:49:39.104734 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768906179 cert, and key in /tmp/serving-cert-3847061339/serving-signer.crt, /tmp/serving-cert-3847061339/serving-signer.key 2026-01-20T10:49:39.516903608+00:00 stderr F I0120 10:49:39.516833 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:39.535200646+00:00 stderr F I0120 10:49:39.535146 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2026-01-20T10:49:39.536045521+00:00 stderr F I0120 10:49:39.536017 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3847061339/tls.crt::/tmp/serving-cert-3847061339/tls.key" 2026-01-20T10:49:39.865787215+00:00 stderr F I0120 10:49:39.865725 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:49:39.867488598+00:00 stderr F I0120 10:49:39.867422 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:39.867488598+00:00 stderr F I0120 10:49:39.867437 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:49:39.867488598+00:00 stderr F I0120 10:49:39.867450
1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:49:39.867488598+00:00 stderr F I0120 10:49:39.867455 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2026-01-20T10:49:39.870217251+00:00 stderr F I0120 10:49:39.870175 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:39.870217251+00:00 stderr F W0120 10:49:39.870202 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:39.870217251+00:00 stderr F W0120 10:49:39.870208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:39.870571891+00:00 stderr F I0120 10:49:39.870464 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:49:39.874695426+00:00 stderr F I0120 10:49:39.874236 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:39.874695426+00:00 stderr F I0120 10:49:39.874262 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:39.874695426+00:00 stderr F I0120 10:49:39.874310 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:39.874695426+00:00 stderr F I0120 10:49:39.874320 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:39.874695426+00:00 stderr F I0120 10:49:39.874333 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:39.874695426+00:00 stderr F I0120 10:49:39.874338 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2026-01-20T10:49:39.875308885+00:00 stderr F I0120 10:49:39.875280 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2026-01-20T10:49:39.875308885+00:00 stderr F I0120 10:49:39.875291 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-3847061339/tls.crt::/tmp/serving-cert-3847061339/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1768906179\" (2026-01-20 10:49:38 +0000 UTC to 2026-02-19 10:49:39 +0000 UTC (now=2026-01-20 10:49:39.875247773 +0000 UTC))" 2026-01-20T10:49:39.875665997+00:00 stderr F I0120 10:49:39.875642 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906179\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906179\" (2026-01-20 09:49:39 +0000 UTC to 2027-01-20 09:49:39 +0000 UTC (now=2026-01-20 10:49:39.875613975 +0000 UTC))" 2026-01-20T10:49:39.875678747+00:00 stderr F I0120 10:49:39.875669 1 secure_serving.go:213] Serving securely on [::]:17698 2026-01-20T10:49:39.875710208+00:00 stderr F I0120 10:49:39.875692 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:49:39.875736509+00:00 stderr F I0120 10:49:39.875718 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-3847061339/tls.crt::/tmp/serving-cert-3847061339/tls.key" 2026-01-20T10:49:39.875857352+00:00 stderr F I0120 10:49:39.875827 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:39.883211786+00:00 stderr F I0120 10:49:39.883158 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:49:39.883738482+00:00 stderr F I0120 10:49:39.883715 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2026-01-20T10:49:39.885269679+00:00 stderr F I0120 10:49:39.885223 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:49:39.975570529+00:00 stderr F I0120 10:49:39.975504 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:39.975570529+00:00 stderr F I0120 10:49:39.975520 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:39.975632881+00:00 stderr F I0120 10:49:39.975594 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:39.976322552+00:00 stderr F I0120 10:49:39.975771 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.975741554 +0000 UTC))" 2026-01-20T10:49:39.976322552+00:00 stderr F I0120 10:49:39.976053 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-3847061339/tls.crt::/tmp/serving-cert-3847061339/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1768906179\" (2026-01-20 10:49:38 +0000 UTC to 2026-02-19 10:49:39 +0000 UTC (now=2026-01-20 10:49:39.976038223 +0000 UTC))" 2026-01-20T10:49:39.976322552+00:00 stderr F I0120 10:49:39.976310 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906179\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906179\" (2026-01-20 09:49:39 +0000 UTC 
to 2027-01-20 09:49:39 +0000 UTC (now=2026-01-20 10:49:39.976296791 +0000 UTC))" 2026-01-20T10:49:39.976496488+00:00 stderr F I0120 10:49:39.976473 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:39.976459857 +0000 UTC))" 2026-01-20T10:49:39.976507728+00:00 stderr F I0120 10:49:39.976496 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:39.976483028 +0000 UTC))" 2026-01-20T10:49:39.976525289+00:00 stderr F I0120 10:49:39.976516 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:39.976501368 +0000 UTC))" 2026-01-20T10:49:39.976553700+00:00 stderr F I0120 10:49:39.976533 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC 
(now=2026-01-20 10:49:39.976520939 +0000 UTC))" 2026-01-20T10:49:39.976591781+00:00 stderr F I0120 10:49:39.976553 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.976541019 +0000 UTC))" 2026-01-20T10:49:39.976600771+00:00 stderr F I0120 10:49:39.976593 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.97655884 +0000 UTC))" 2026-01-20T10:49:39.976634312+00:00 stderr F I0120 10:49:39.976612 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.976598671 +0000 UTC))" 2026-01-20T10:49:39.976643942+00:00 stderr F I0120 10:49:39.976633 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 
19:49:26 +0000 UTC (now=2026-01-20 10:49:39.976620992 +0000 UTC))" 2026-01-20T10:49:39.976656643+00:00 stderr F I0120 10:49:39.976649 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:39.976637272 +0000 UTC))" 2026-01-20T10:49:39.976693544+00:00 stderr F I0120 10:49:39.976669 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:39.976657223 +0000 UTC))" 2026-01-20T10:49:39.976703664+00:00 stderr F I0120 10:49:39.976695 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.976678433 +0000 UTC))" 2026-01-20T10:49:39.976962622+00:00 stderr F I0120 10:49:39.976935 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-3847061339/tls.crt::/tmp/serving-cert-3847061339/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1768906179\" (2026-01-20 10:49:38 +0000 UTC to 2026-02-19 10:49:39 +0000 UTC (now=2026-01-20 10:49:39.976924031 +0000 UTC))" 
2026-01-20T10:49:39.981907772+00:00 stderr F I0120 10:49:39.977213 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906179\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906179\" (2026-01-20 09:49:39 +0000 UTC to 2027-01-20 09:49:39 +0000 UTC (now=2026-01-20 10:49:39.977201619 +0000 UTC))" 2026-01-20T10:49:40.098840974+00:00 stderr F I0120 10:49:40.098774 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:49:40.176375976+00:00 stderr F I0120 10:49:40.176311 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2026-01-20T10:49:40.176375976+00:00 stderr F I0120 10:49:40.176343 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2026-01-20T10:49:40.176429577+00:00 stderr F I0120 10:49:40.176404 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2026-01-20T10:49:40.176429577+00:00 stderr F I0120 10:49:40.176409 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2026-01-20T10:49:40.176429577+00:00 stderr F I0120 10:49:40.176413 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2026-01-20T10:49:40.176601593+00:00 stderr F I0120 10:49:40.176573 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2026-01-20T10:49:40.176655804+00:00 stderr F I0120 10:49:40.176619 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2026-01-20T10:49:40.176691255+00:00 stderr F I0120 10:49:40.176520 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 
2026-01-20T10:49:40.176691255+00:00 stderr F I0120 10:49:40.176675 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2026-01-20T10:49:40.183089860+00:00 stderr F I0120 10:49:40.182747 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:49:40.187079112+00:00 stderr F I0120 10:49:40.183296 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:49:40.277667271+00:00 stderr F I0120 10:49:40.277594 1 base_controller.go:73] Caches are synced for check-endpoints 2026-01-20T10:49:40.277667271+00:00 stderr F I0120 10:49:40.277628 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 2026-01-20T10:56:07.105148256+00:00 stderr F I0120 10:56:07.096815 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.09676916 +0000 UTC))" 2026-01-20T10:56:07.105204537+00:00 stderr F I0120 10:56:07.105146 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.105117245 +0000 UTC))" 2026-01-20T10:56:07.105204537+00:00 stderr F I0120 10:56:07.105177 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.105156006 +0000 UTC))" 2026-01-20T10:56:07.105213058+00:00 stderr F I0120 10:56:07.105201 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.105183357 +0000 UTC))" 2026-01-20T10:56:07.105360321+00:00 stderr F I0120 10:56:07.105227 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.105209437 +0000 UTC))" 2026-01-20T10:56:07.105360321+00:00 stderr F I0120 10:56:07.105256 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.105239348 +0000 UTC))" 2026-01-20T10:56:07.105360321+00:00 stderr F I0120 
10:56:07.105278 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.105262159 +0000 UTC))" 2026-01-20T10:56:07.105360321+00:00 stderr F I0120 10:56:07.105302 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.105284829 +0000 UTC))" 2026-01-20T10:56:07.105360321+00:00 stderr F I0120 10:56:07.105325 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.10530734 +0000 UTC))" 2026-01-20T10:56:07.105360321+00:00 stderr F I0120 10:56:07.105349 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.105331931 +0000 UTC))" 2026-01-20T10:56:07.105401723+00:00 stderr F I0120 10:56:07.105374 1 
tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.105356051 +0000 UTC))" 2026-01-20T10:56:07.105450224+00:00 stderr F I0120 10:56:07.105406 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.105385922 +0000 UTC))" 2026-01-20T10:56:07.107137680+00:00 stderr F I0120 10:56:07.105820 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-3847061339/tls.crt::/tmp/serving-cert-3847061339/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1768906179\" (2026-01-20 10:49:38 +0000 UTC to 2026-02-19 10:49:39 +0000 UTC (now=2026-01-20 10:56:07.105801164 +0000 UTC))" 2026-01-20T10:56:07.107137680+00:00 stderr F I0120 10:56:07.106219 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906179\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906179\" (2026-01-20 09:49:39 +0000 UTC to 2027-01-20 09:49:39 +0000 UTC (now=2026-01-20 10:56:07.106176044 +0000 UTC))" 2026-01-20T10:58:20.615356535+00:00 stderr F I0120 10:58:20.614777 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.log
2025-08-13T20:05:30.075912298+00:00 stderr F W0813 20:05:30.075190 1 authoptions.go:112] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!
2025-08-13T20:05:30.788889885+00:00 stderr F I0813 20:05:30.788210 1 main.go:605] Binding to [::]:8443...
2025-08-13T20:05:30.788889885+00:00 stderr F I0813 20:05:30.788264 1 main.go:607] using TLS
2025-08-13T20:05:33.795639346+00:00 stderr F I0813 20:05:33.789942 1 metrics.go:128] serverconfig.Metrics: Update ConsolePlugin metrics...
2025-08-13T20:05:33.904070491+00:00 stderr F I0813 20:05:33.895041 1 metrics.go:138] serverconfig.Metrics: Update ConsolePlugin metrics: &map[] (took 103.776412ms)
2025-08-13T20:05:35.792924520+00:00 stderr F I0813 20:05:35.791269 1 metrics.go:80] usage.Metrics: Count console users...
2025-08-13T20:05:36.280627718+00:00 stderr F I0813 20:05:36.280048 1 metrics.go:156] usage.Metrics: Update console users metrics: 0 kubeadmin, 0 cluster-admins, 0 developers, 0 unknown/errors (took 488.51125ms)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.log
2026-01-20T10:49:36.025434381+00:00 stderr F W0120 10:49:36.024442 1 authoptions.go:112] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!
2026-01-20T10:49:46.326321599+00:00 stderr F I0120 10:49:46.325706 1 main.go:605] Binding to [::]:8443...
2026-01-20T10:49:46.326321599+00:00 stderr F I0120 10:49:46.326264 1 main.go:607] using TLS
2026-01-20T10:49:49.328452402+00:00 stderr F I0120 10:49:49.327934 1 metrics.go:128] serverconfig.Metrics: Update ConsolePlugin metrics...
2026-01-20T10:49:49.526662700+00:00 stderr F I0120 10:49:49.526306 1 metrics.go:138] serverconfig.Metrics: Update ConsolePlugin metrics: &map[] (took 197.876167ms)
2026-01-20T10:49:51.325659946+00:00 stderr F I0120 10:49:51.325509 1 metrics.go:80] usage.Metrics: Count console users...
2026-01-20T10:49:51.814698461+00:00 stderr F I0120 10:49:51.814622 1 metrics.go:156] usage.Metrics: Update console users metrics: 0 kubeadmin, 0 cluster-admins, 0 developers, 0 unknown/errors (took 489.018244ms)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.log
2026-01-20T10:49:37.239394437+00:00 stderr F I0120 10:49:37.237406 1 cmd.go:233] Using service-serving-cert provided certificates
2026-01-20T10:49:37.244132602+00:00 stderr F I0120 10:49:37.240009 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2026-01-20T10:49:37.244132602+00:00 stderr F I0120 10:49:37.240708 1 observer_polling.go:159] Starting file observer
2026-01-20T10:49:37.292081882+00:00 stderr F I0120 10:49:37.291997 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad
2026-01-20T10:49:38.171359584+00:00 stderr F I0120 10:49:38.170261 1 secure_serving.go:57] Forcing use of http/1.1 only
2026-01-20T10:49:38.171359584+00:00 stderr F W0120 10:49:38.170710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2026-01-20T10:49:38.171359584+00:00 stderr F W0120 10:49:38.170716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2026-01-20T10:49:38.199108649+00:00 stderr F I0120 10:49:38.198412 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:38.205521784+00:00 stderr F I0120 10:49:38.200543 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:38.205521784+00:00 stderr F I0120 10:49:38.200606 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:38.205521784+00:00 stderr F I0120 10:49:38.200657 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:38.205521784+00:00 stderr F I0120 10:49:38.201439 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:38.205521784+00:00 stderr F I0120 10:49:38.203256 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock... 
2026-01-20T10:49:38.205521784+00:00 stderr F I0120 10:49:38.204287 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:38.205521784+00:00 stderr F I0120 10:49:38.205425 1 secure_serving.go:210] Serving securely on [::]:8443 2026-01-20T10:49:38.205521784+00:00 stderr F I0120 10:49:38.205457 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:38.220027577+00:00 stderr F I0120 10:49:38.201118 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:38.220027577+00:00 stderr F I0120 10:49:38.217639 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:38.305481889+00:00 stderr F I0120 10:49:38.305413 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:38.305540181+00:00 stderr F I0120 10:49:38.305490 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:38.320181517+00:00 stderr F I0120 10:49:38.320119 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:54:36.745503334+00:00 stderr F I0120 10:54:36.744572 1 leaderelection.go:260] successfully acquired lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock 2026-01-20T10:54:36.745503334+00:00 stderr F I0120 10:54:36.744668 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", 
APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41941", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_f9f339cf-eac0-4234-8423-dd1e540c150a became leader 2026-01-20T10:54:36.759436266+00:00 stderr F I0120 10:54:36.759312 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:54:36.759436266+00:00 stderr F I0120 10:54:36.759343 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:54:36.759436266+00:00 stderr F I0120 10:54:36.759370 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2026-01-20T10:54:36.759683292+00:00 stderr F I0120 10:54:36.759621 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources 2026-01-20T10:54:36.759683292+00:00 stderr F I0120 10:54:36.759647 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources 2026-01-20T10:54:36.759683292+00:00 stderr F I0120 10:54:36.759653 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ... 2026-01-20T10:54:36.759683292+00:00 stderr F I0120 10:54:36.759654 1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController 2026-01-20T10:54:36.759683292+00:00 stderr F I0120 10:54:36.759670 1 base_controller.go:73] Caches are synced for StaticConditionsController 2026-01-20T10:54:36.759706583+00:00 stderr F I0120 10:54:36.759678 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ... 
2026-01-20T10:54:36.759857567+00:00 stderr F I0120 10:54:36.759794 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2026-01-20T10:54:36.759857567+00:00 stderr F I0120 10:54:36.759826 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2026-01-20T10:54:36.759857567+00:00 stderr F I0120 10:54:36.759834 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2026-01-20T10:54:36.759880708+00:00 stderr F I0120 10:54:36.759868 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator 2026-01-20T10:54:36.760184856+00:00 stderr F I0120 10:54:36.760127 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator 2026-01-20T10:54:36.860785307+00:00 stderr F I0120 10:54:36.860687 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator 2026-01-20T10:54:36.860785307+00:00 stderr F I0120 10:54:36.860720 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ... 2026-01-20T10:54:36.860849589+00:00 stderr F I0120 10:54:36.860776 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator 2026-01-20T10:54:36.860849589+00:00 stderr F I0120 10:54:36.860808 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ... 
2026-01-20T10:56:07.104541460+00:00 stderr F I0120 10:56:07.103829 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.103761719 +0000 UTC))" 2026-01-20T10:56:07.104541460+00:00 stderr F I0120 10:56:07.104527 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.104508249 +0000 UTC))" 2026-01-20T10:56:07.104583211+00:00 stderr F I0120 10:56:07.104549 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.10453284 +0000 UTC))" 2026-01-20T10:56:07.104583211+00:00 stderr F I0120 10:56:07.104575 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.10455472 +0000 UTC))" 2026-01-20T10:56:07.104637902+00:00 stderr 
F I0120 10:56:07.104595 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.104581111 +0000 UTC))" 2026-01-20T10:56:07.104637902+00:00 stderr F I0120 10:56:07.104622 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.104606201 +0000 UTC))" 2026-01-20T10:56:07.104650193+00:00 stderr F I0120 10:56:07.104643 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.104628232 +0000 UTC))" 2026-01-20T10:56:07.105118375+00:00 stderr F I0120 10:56:07.104665 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.104649133 +0000 UTC))" 
2026-01-20T10:56:07.105118375+00:00 stderr F I0120 10:56:07.104690 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.104674563 +0000 UTC))" 2026-01-20T10:56:07.105118375+00:00 stderr F I0120 10:56:07.104712 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.104698014 +0000 UTC))" 2026-01-20T10:56:07.105118375+00:00 stderr F I0120 10:56:07.104750 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.104719274 +0000 UTC))" 2026-01-20T10:56:07.105118375+00:00 stderr F I0120 10:56:07.104775 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.104755825 +0000 
UTC))" 2026-01-20T10:56:07.109002470+00:00 stderr F I0120 10:56:07.105262 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2026-01-20 10:56:07.105229368 +0000 UTC))" 2026-01-20T10:56:07.109002470+00:00 stderr F I0120 10:56:07.105674 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:56:07.10565585 +0000 UTC))" 2026-01-20T10:57:36.765020715+00:00 stderr F E0120 10:57:36.764486 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:36.772705728+00:00 stderr F E0120 10:57:36.772674 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:36.780925136+00:00 stderr F E0120 10:57:36.780874 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.785504716+00:00 stderr F E0120 10:57:36.785474 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:36.809184123+00:00 stderr F E0120 10:57:36.809120 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:36.854523232+00:00 stderr F E0120 10:57:36.854464 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:36.864415943+00:00 stderr F W0120 10:57:36.864372 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.864415943+00:00 stderr F E0120 10:57:36.864401 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.872653781+00:00 stderr F W0120 10:57:36.872610 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.872653781+00:00 stderr F E0120 10:57:36.872645 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.889062155+00:00 stderr F W0120 10:57:36.889009 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.889091045+00:00 stderr F E0120 10:57:36.889072 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.912887995+00:00 stderr F W0120 10:57:36.912803 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.912887995+00:00 stderr F E0120 10:57:36.912862 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.955938673+00:00 stderr F W0120 10:57:36.955842 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.955938673+00:00 stderr F E0120 10:57:36.955908 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.040795037+00:00 stderr F W0120 10:57:37.040710 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2026-01-20T10:57:37.040795037+00:00 stderr F E0120 10:57:37.040750 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.166254385+00:00 stderr F E0120 10:57:37.165541 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:37.365338780+00:00 stderr F W0120 10:57:37.365255 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.365338780+00:00 stderr F E0120 10:57:37.365308 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.566434118+00:00 stderr F E0120 10:57:37.566335 1 
base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:37.766258102+00:00 stderr F W0120 10:57:37.766172 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.766258102+00:00 stderr F E0120 10:57:37.766235 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:37.980487848+00:00 stderr F E0120 10:57:37.980389 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:38.411942838+00:00 stderr F W0120 10:57:38.411821 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.411942838+00:00 stderr F E0120 10:57:38.411888 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:38.630045376+00:00 stderr F E0120 10:57:38.629915 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: 
connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:39.696910519+00:00 stderr F W0120 10:57:39.696837 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.696910519+00:00 stderr F E0120 10:57:39.696886 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:39.917617446+00:00 stderr F E0120 10:57:39.917527 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:42.260932236+00:00 stderr F W0120 10:57:42.260400 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.260932236+00:00 stderr F E0120 10:57:42.260837 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:42.486298146+00:00 stderr F E0120 10:57:42.486193 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2026-01-20T10:57:47.388393583+00:00 stderr F W0120 10:57:47.387723 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.388393583+00:00 stderr F E0120 10:57:47.388330 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get 
"https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:47.612110460+00:00 stderr F E0120 10:57:47.611520 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.log

2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.616690 1 cmd.go:233] Using service-serving-cert provided certificates 2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.617047 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew.
The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.617822 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:46.189065082+00:00 stderr F I0813 20:00:46.188028 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad 2025-08-13T20:00:57.392828055+00:00 stderr F I0813 20:00:57.391890 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:57.392828055+00:00 stderr F W0813 20:00:57.392512 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:57.392828055+00:00 stderr F W0813 20:00:57.392521 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:03.400228587+00:00 stderr F I0813 20:01:03.372176 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:03.400228587+00:00 stderr F I0813 20:01:03.376636 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:03.402490702+00:00 stderr F I0813 20:01:03.382316 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:03.408512604+00:00 stderr F I0813 20:01:03.382385 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:03.409650426+00:00 stderr F I0813 20:01:03.382414 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:03.433956279+00:00 stderr F I0813 20:01:03.427085 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:03.463050499+00:00 stderr F I0813 20:01:03.447590 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:03.467044513+00:00 stderr F I0813 20:01:03.464488 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:03.467044513+00:00 stderr F I0813 20:01:03.464725 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:01:03.551232493+00:00 stderr F I0813 20:01:03.549119 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:03.586958362+00:00 stderr F I0813 20:01:03.585324 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:03.602420253+00:00 stderr F I0813 20:01:03.598102 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock... 
2025-08-13T20:01:03.646296424+00:00 stderr F I0813 20:01:03.644984 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:03.665302456+00:00 stderr F I0813 20:01:03.665177 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:02:58.875037003+00:00 stderr F E0813 20:02:58.874152 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.552341092+00:00 stderr F E0813 20:04:47.551352 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:25.498325085+00:00 stderr F I0813 20:06:25.496973 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32049", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_7f35bdb1-fde8-47f2-9c84-83ee0362fb0d became leader 2025-08-13T20:06:25.499967312+00:00 stderr F I0813 20:06:25.499398 1 leaderelection.go:260] successfully acquired lease 
openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587371 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587401 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587485 1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587504 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587368 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595316 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595372 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595391 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ... 2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687663 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687688 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator 2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687728 1 base_controller.go:73] Caches are synced for StaticConditionsController 2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687709 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 
2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687739 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ... 2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.687731 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ... 2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.688646 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources 2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.688661 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ... 2025-08-13T20:06:25.691854777+00:00 stderr F I0813 20:06:25.689721 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:06:25.691854777+00:00 stderr F I0813 20:06:25.689750 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:08:25.599533059+00:00 stderr F E0813 20:08:25.597985 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.734448848+00:00 stderr F W0813 20:08:25.734004 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.735539009+00:00 stderr F E0813 20:08:25.734976 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:08:25.735539009+00:00 stderr F E0813 20:08:25.735112 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.745218906+00:00 stderr F W0813 20:08:25.745141 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.745249227+00:00 stderr F E0813 20:08:25.745211 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.750312662+00:00 stderr F E0813 20:08:25.750288 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, 
"kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.759633450+00:00 stderr F W0813 20:08:25.759430 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.759633450+00:00 stderr F E0813 20:08:25.759503 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.770068159+00:00 stderr F E0813 20:08:25.770022 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.789868556+00:00 stderr F W0813 20:08:25.788442 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.789868556+00:00 stderr F E0813 20:08:25.788577 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.806198235+00:00 stderr F E0813 20:08:25.806145 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.857742602+00:00 stderr F W0813 
20:08:25.852403 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.857742602+00:00 stderr F E0813 20:08:25.852463 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.861860991+00:00 stderr F E0813 20:08:25.860102 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.943490411+00:00 stderr F W0813 20:08:25.943359 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.943548423+00:00 stderr F E0813 20:08:25.943538 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation 
failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.167700009+00:00 stderr F W0813 20:08:26.147031 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.168963606+00:00 stderr F E0813 20:08:26.168842 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.336574241+00:00 stderr F E0813 20:08:26.336238 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:26.535848075+00:00 stderr F W0813 20:08:26.534337 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.535848075+00:00 stderr F E0813 20:08:26.534557 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.734114189+00:00 stderr F E0813 20:08:26.734020 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:27.063517294+00:00 stderr F E0813 20:08:27.063454 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:27.184202064+00:00 stderr F W0813 20:08:27.184100 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.184202064+00:00 stderr F E0813 20:08:27.184168 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.720215052+00:00 stderr F E0813 20:08:27.719910 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: 
connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:28.473667634+00:00 stderr F W0813 20:08:28.473522 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.473667634+00:00 stderr F E0813 20:08:28.473628 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.023002023+00:00 stderr F E0813 20:08:29.022568 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:31.039953282+00:00 stderr F W0813 20:08:31.039353 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.039953282+00:00 stderr F E0813 20:08:31.039685 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.591380821+00:00 stderr F E0813 20:08:31.591277 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.165164983+00:00 stderr F W0813 20:08:36.164486 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.165164983+00:00 stderr F E0813 20:08:36.165119 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get 
"https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.719395434+00:00 stderr F E0813 20:08:36.718222 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.418918077+00:00 stderr F W0813 20:08:46.416514 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.418918077+00:00 stderr F E0813 20:08:46.418697 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.965185829+00:00 stderr F E0813 20:08:46.964570 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:36.461846675+00:00 stderr F I0813 20:42:36.430750 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461846675+00:00 stderr F I0813 20:42:36.438450 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.467606601+00:00 stderr F I0813 20:42:36.443218 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.471980047+00:00 stderr F I0813 20:42:36.438644 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.472057619+00:00 stderr F I0813 20:42:36.451160 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.478446954+00:00 stderr F I0813 20:42:36.441451 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.483736176+00:00 stderr F I0813 20:42:36.451208 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.244733697+00:00 stderr F I0813 20:42:41.242977 1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting 
down controller. 2025-08-13T20:42:41.245123478+00:00 stderr F I0813 20:42:41.245089 1 base_controller.go:172] Shutting down KubeStorageVersionMigrator ... 2025-08-13T20:42:41.245201180+00:00 stderr F I0813 20:42:41.245181 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:41.245334404+00:00 stderr F I0813 20:42:41.245305 1 base_controller.go:172] Shutting down KubeStorageVersionMigratorStaticResources ... 2025-08-13T20:42:41.245404646+00:00 stderr F I0813 20:42:41.245314 1 base_controller.go:172] Shutting down StatusSyncer_kube-storage-version-migrator ... 2025-08-13T20:42:41.246118767+00:00 stderr F I0813 20:42:41.246045 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:41.246118767+00:00 stderr F I0813 20:42:41.245437 1 base_controller.go:150] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated 2025-08-13T20:42:41.246338343+00:00 stderr F I0813 20:42:41.246165 1 base_controller.go:172] Shutting down StaticConditionsController ... 2025-08-13T20:42:41.246648952+00:00 stderr F I0813 20:42:41.246111 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:42:41.246648952+00:00 stderr F I0813 20:42:41.246205 1 base_controller.go:114] Shutting down worker of StaticConditionsController controller ... 2025-08-13T20:42:41.246720334+00:00 stderr F I0813 20:42:41.246375 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ... 
2025-08-13T20:42:41.247062304+00:00 stderr F I0813 20:42:41.245736 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.247193198+00:00 stderr F I0813 20:42:41.247165 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.247711623+00:00 stderr F I0813 20:42:41.247664 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:41.247711623+00:00 stderr F I0813 20:42:41.247686 1 base_controller.go:104] All StaticConditionsController workers have been terminated 2025-08-13T20:42:41.247734993+00:00 stderr F I0813 20:42:41.247715 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigratorStaticResources controller ... 2025-08-13T20:42:41.247734993+00:00 stderr F I0813 20:42:41.247725 1 base_controller.go:104] All KubeStorageVersionMigratorStaticResources workers have been terminated 2025-08-13T20:42:41.247749164+00:00 stderr F I0813 20:42:41.247733 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:41.247749164+00:00 stderr F I0813 20:42:41.247739 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:41.248002231+00:00 stderr F W0813 20:42:41.247762 1 builder.go:109] graceful termination failed, controllers failed with error: stopped home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log 2025-08-13T19:59:35.330178547+00:00 stderr F I0813 19:59:35.328428 1 cmd.go:233] Using service-serving-cert provided certificates 2025-08-13T19:59:35.355461738+00:00 stderr F I0813 19:59:35.332273 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:35.355461738+00:00 stderr F I0813 19:59:35.339700 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:37.266163302+00:00 stderr F I0813 19:59:37.198407 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad 2025-08-13T19:59:43.151264028+00:00 stderr F I0813 19:59:43.111663 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:43.151420742+00:00 stderr F W0813 19:59:43.151395 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:43.151455293+00:00 stderr F W0813 19:59:43.151443 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:43.215601542+00:00 stderr F I0813 19:59:43.215510 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:43.243873157+00:00 stderr F I0813 19:59:43.243742 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T19:59:43.244636699+00:00 stderr F I0813 19:59:43.244613 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.342200 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.245514 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.246310 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.343769 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:43.386934916+00:00 stderr F I0813 19:59:43.386311 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock... 
2025-08-13T19:59:43.406232186+00:00 stderr F I0813 19:59:43.315336 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.406232186+00:00 stderr F I0813 19:59:43.405896 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:43.480296707+00:00 stderr F I0813 19:59:43.268900 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:44.354915128+00:00 stderr F I0813 19:59:44.351258 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:44.432892331+00:00 stderr F I0813 19:59:44.432397 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:44.467855027+00:00 stderr F I0813 19:59:44.467672 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:44.490205385+00:00 stderr F E0813 19:59:44.488873 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.490205385+00:00 stderr F E0813 19:59:44.488994 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.520984552+00:00 stderr F E0813 19:59:44.519629 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.554039574+00:00 stderr F E0813 19:59:44.553983 1 configmap_cafile_content.go:243] 
kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.564088650+00:00 stderr F E0813 19:59:44.564054 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.573408966+00:00 stderr F E0813 19:59:44.572182 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.613393246+00:00 stderr F E0813 19:59:44.588701 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.613393246+00:00 stderr F E0813 19:59:44.604003 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.633157629+00:00 stderr F E0813 19:59:44.632735 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.654976221+00:00 stderr F E0813 19:59:44.654441 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.713615723+00:00 stderr F E0813 19:59:44.713422 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:44.736544557+00:00 stderr F E0813 19:59:44.736012       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.877195426+00:00 stderr F E0813 19:59:44.873915       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.900236113+00:00 stderr F E0813 19:59:44.900115       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.213142002+00:00 stderr F E0813 19:59:45.212320       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.238307950+00:00 stderr F E0813 19:59:45.235450       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.856013558+00:00 stderr F E0813 19:59:45.855286       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.879065245+00:00 stderr F E0813 19:59:45.878010       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:46.461090794+00:00 stderr F I0813 19:59:46.460595       1 leaderelection.go:260] successfully acquired lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock
2025-08-13T19:59:46.462002060+00:00 stderr F I0813 19:59:46.461413       1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28371", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_1a321c6b-3aae-44eb-a5dc-fa9e08493642 became leader
2025-08-13T19:59:47.139745260+00:00 stderr F E0813 19:59:47.136166       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.160170902+00:00 stderr F E0813 19:59:47.160114       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.211266       1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212603       1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212710       1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212710       1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.231932       1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator
2025-08-13T19:59:47.250364453+00:00 stderr F I0813 19:59:47.208997       1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:47.356337384+00:00 stderr F I0813 19:59:47.356281       1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:47.356431077+00:00 stderr F I0813 19:59:47.356417       1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:47.413240606+00:00 stderr F I0813 19:59:47.413189       1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources
2025-08-13T19:59:47.413321108+00:00 stderr F I0813 19:59:47.413301       1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ...
2025-08-13T19:59:47.413666138+00:00 stderr F I0813 19:59:47.413500       1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T19:59:47.413666138+00:00 stderr F I0813 19:59:47.413658       1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T19:59:47.418444654+00:00 stderr F I0813 19:59:47.415580       1 base_controller.go:73] Caches are synced for StaticConditionsController
2025-08-13T19:59:47.418444654+00:00 stderr F I0813 19:59:47.415616       1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ...
2025-08-13T19:59:47.834501895+00:00 stderr F I0813 19:59:47.819048       1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator
2025-08-13T19:59:47.834501895+00:00 stderr F I0813 19:59:47.819436       1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ...
2025-08-13T19:59:47.933420705+00:00 stderr F I0813 19:59:47.932095       1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator
2025-08-13T19:59:47.933420705+00:00 stderr F I0813 19:59:47.932166       1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ...
2025-08-13T19:59:48.570223548+00:00 stderr F I0813 19:59:48.569636       1 status_controller.go:213] clusteroperator/kube-storage-version-migrator diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
2025-08-13T19:59:48.812035401+00:00 stderr F I0813 19:59:48.811900       1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
2025-08-13T19:59:49.703053540+00:00 stderr F E0813 19:59:49.702187       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:49.722226326+00:00 stderr F E0813 19:59:49.721050       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:51.828009674+00:00 stderr F I0813 19:59:51.826737       1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876354       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.876167977 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876408       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.876391944 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876427       1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.876414654 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876444       1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.876432905 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876461       1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876449915 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876480       1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876466376 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876496       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876484806 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876517       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876500697 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876539       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876525097 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.877084       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 19:59:51.877056793 +0000 UTC))"
2025-08-13T19:59:51.877674800+00:00 stderr F I0813 19:59:51.877505       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 19:59:51.877488145 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.730045       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.729990125 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747565       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.747497084 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747598       1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.747581067 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747621       1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.747605227 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747644       1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747631328 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747663       1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747650479 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747705       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747667939 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747727       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747714131 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747743       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.747731741 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747766       1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747755002 +0000 UTC))"
2025-08-13T20:00:05.749320426+00:00 stderr F I0813 20:00:05.748276       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:00:05.748243846 +0000 UTC))"
2025-08-13T20:00:05.772445416+00:00 stderr F I0813 20:00:05.769337       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:05.769289016 +0000 UTC))"
2025-08-13T20:00:34.342539512+00:00 stderr F I0813 20:00:34.340074       1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:34.342539512+00:00 stderr F I0813 20:00:34.342506       1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.crt"
2025-08-13T20:00:34.347134543+00:00 stderr F I0813 20:00:34.346908       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347876       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:34.347766291 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347908       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:34.347894655 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347944       1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:34.347915466 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347964       1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:34.347950157 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347989       1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.347970797 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348009       1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.347997468 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348026       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348014418 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348051       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348031179 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348082       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:34.34806024 +0000 UTC))"
2025-08-13T20:00:34.348300997+00:00 stderr F I0813 20:00:34.348159       1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348095171 +0000 UTC))"
2025-08-13T20:00:34.351572430+00:00 stderr F I0813 20:00:34.348503       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-08-13 20:00:34.348471461 +0000 UTC))"
2025-08-13T20:00:34.357169469+00:00 stderr F I0813 20:00:34.356184       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:34.356076198 +0000 UTC))"
2025-08-13T20:00:35.369679330+00:00 stderr F I0813 20:00:35.364527       1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="43548186e7ce5eab21976aea3b471207a358b9f8fb63bf325b8f4755a5142ae9", new="cdf9d7851715e3205e610dc8b06ddc4b8a158c767e0f50cab6e974e6fee4d6bf")
2025-08-13T20:00:35.369679330+00:00 stderr F W0813 20:00:35.369569       1 builder.go:132] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified
2025-08-13T20:00:35.369753702+00:00 stderr F I0813 20:00:35.369717       1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="9e10b51cb3256c60ae44b395564462b79050e988d1626d5f34804f849a3655a7", new="f7b6ebeaff863e5f1a2771d98136282ec8f6675eb20222ebefd0d2097785c6f3")
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372131       1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372256       1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372306       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372714       1 base_controller.go:172] Shutting down KubeStorageVersionMigrator ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372742       1 base_controller.go:172] Shutting down StaticConditionsController ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372894       1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373245       1 base_controller.go:114] Shutting down worker of StaticConditionsController controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373320       1 base_controller.go:104] All StaticConditionsController workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373339       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373685       1 secure_serving.go:255] Stopped listening on [::]:8443
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373702       1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376760       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376876       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376922       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376984       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377018       1 genericapiserver.go:701] [graceful-termination] apiserver is exiting
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377039       1 builder.go:302] server exited
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377111       1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377129       1 base_controller.go:104] All KubeStorageVersionMigrator workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377162       1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377182       1 base_controller.go:172] Shutting down KubeStorageVersionMigratorStaticResources ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377194       1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377277       1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377284       1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377292       1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigratorStaticResources controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377298       1 base_controller.go:104] All KubeStorageVersionMigratorStaticResources workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377307       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377314       1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377334       1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-storage-version-migrator controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378324       1 base_controller.go:172] Shutting down StatusSyncer_kube-storage-version-migrator ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378427       1 base_controller.go:150] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378437       1 base_controller.go:104] All StatusSyncer_kube-storage-version-migrator workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F W0813 20:00:35.381309       1 builder.go:109] graceful termination failed, controllers failed with error: stopped

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log:
(empty)

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log:
(empty)

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log:
2025-08-13T19:50:57.794478745+00:00 stdout F 2025-08-13T19:50:57+00:00 [cnibincopy] Successfully copied files in /usr/src/plugins/rhel9/bin/ to /host/opt/cni/bin/upgrade_d145407d-9046-4618-84a2-0bd3cab7b7ed
2025-08-13T19:50:57.820920950+00:00 stdout F 2025-08-13T19:50:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d145407d-9046-4618-84a2-0bd3cab7b7ed to /host/opt/cni/bin/

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log:
2026-01-20T10:47:26.059298781+00:00 stdout F 2026-01-20T10:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/plugins/rhel9/bin/ to /host/opt/cni/bin/upgrade_55bc497b-8b30-458b-ab20-bc0f37c51e02
2026-01-20T10:47:26.067714598+00:00 stdout F 2026-01-20T10:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55bc497b-8b30-458b-ab20-bc0f37c51e02 to /host/opt/cni/bin/

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log:
2025-08-13T19:50:47.211892989+00:00 stdout F 2025-08-13T19:50:47+00:00 [cnibincopy] Successfully copied files in /usr/src/egress-router-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb608648-8ddd-4f33-bc1d-0991683cda60
2025-08-13T19:50:47.335048529+00:00 stdout F 2025-08-13T19:50:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb608648-8ddd-4f33-bc1d-0991683cda60 to /host/opt/cni/bin/

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log:
2026-01-20T10:47:24.940354004+00:00 stdout F 2026-01-20T10:47:24+00:00 [cnibincopy] Successfully copied files in /usr/src/egress-router-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e3da0e3-1292-434d-8bd0-85ea1dc14b02
2026-01-20T10:47:24.950159670+00:00 stdout F 2026-01-20T10:47:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e3da0e3-1292-434d-8bd0-85ea1dc14b02 to /host/opt/cni/bin/

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log:
2026-01-20T10:47:30.878474420+00:00 stdout F Done configuring CNI.
Sleep=false ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000012015133657716033240 0ustar zuulzuul2025-08-13T19:51:11.553008882+00:00 stdout F Done configuring CNI. Sleep=false ././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015133657741033243 5ustar zuulzuul././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000063015133657716033246 0ustar zuulzuul2025-08-13T19:51:10.079928701+00:00 stdout F 2025-08-13T19:51:10+00:00 [cnibincopy] Successfully copied files in /usr/src/whereabouts/rhel9/bin/ to /host/opt/cni/bin/upgrade_2927e247-a3e2-4e6c-9d4c-53a5b8439023 2025-08-13T19:51:10.090046880+00:00 stdout F 2025-08-13T19:51:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2927e247-a3e2-4e6c-9d4c-53a5b8439023 to /host/opt/cni/bin/ ././@LongLink0000644000000000000000000000030600000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000063015133657716033246 0ustar zuulzuul2026-01-20T10:47:29.860645729+00:00 stdout F 2026-01-20T10:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/whereabouts/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b75d71f-febb-4657-8dad-e0402052bf93 2026-01-20T10:47:29.870119365+00:00 stdout F 2026-01-20T10:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b75d71f-febb-4657-8dad-e0402052bf93 to /host/opt/cni/bin/ ././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015133657741033243 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000063315133657716033251 0ustar zuulzuul2025-08-13T19:51:02.025704074+00:00 stdout F 2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/route-override/rhel9/bin/ to /host/opt/cni/bin/upgrade_63e00e37-7601-412f-989f-3015d2849f1c 2025-08-13T19:51:02.044774749+00:00 stdout F 2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_63e00e37-7601-412f-989f-3015d2849f1c to /host/opt/cni/bin/ ././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000063315133657716033251 0ustar zuulzuul2026-01-20T10:47:27.737642193+00:00 stdout F 2026-01-20T10:47:27+00:00 [cnibincopy] Successfully copied files in /usr/src/route-override/rhel9/bin/ to /host/opt/cni/bin/upgrade_788976f2-d9c0-41f5-81e2-5a47df3d4820 2026-01-20T10:47:27.743745138+00:00 stdout F 2026-01-20T10:47:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_788976f2-d9c0-41f5-81e2-5a47df3d4820 to /host/opt/cni/bin/ ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015133657741033243 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000061015133657716033244 0ustar zuulzuul2026-01-20T10:47:26.403747055+00:00 stdout F 2026-01-20T10:47:26+00:00 [cnibincopy] Successfully copied files in /bondcni/rhel9/ to /host/opt/cni/bin/upgrade_67841807-7dd1-4169-83c2-014df4f9be7e 
2026-01-20T10:47:26.407819204+00:00 stdout F 2026-01-20T10:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_67841807-7dd1-4169-83c2-014df4f9be7e to /host/opt/cni/bin/

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log:
2025-08-13T19:50:59.775239797+00:00 stdout F 2025-08-13T19:50:59+00:00 [cnibincopy] Successfully copied files in /bondcni/rhel9/ to /host/opt/cni/bin/upgrade_b1bfe828-9c07-4461-84b8-c6d1eb367d18
2025-08-13T19:50:59.791945004+00:00 stdout F 2025-08-13T19:50:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b1bfe828-9c07-4461-84b8-c6d1eb367d18 to /host/opt/cni/bin/

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.log:
2026-01-20T10:49:35.451209010+00:00 stderr F I0120 10:49:35.449106 1 main.go:45] Version:216149b14d9cb61ae90ac65d839a448fb11075bb 2026-01-20T10:49:35.451209010+00:00 stderr F I0120 10:49:35.449988 1 main.go:46] Starting with config{ :9091 crc} 2026-01-20T10:49:35.455237243+00:00 stderr F W0120 10:49:35.453906 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2026-01-20T10:49:35.471152067+00:00 stderr F I0120 10:49:35.463111 1 controller.go:42] Setting up event handlers 2026-01-20T10:49:35.471152067+00:00 stderr F I0120 10:49:35.463320 1 podmetrics.go:101] Serving network metrics 2026-01-20T10:49:35.471152067+00:00 stderr F I0120 10:49:35.463328 1 controller.go:101] Starting pod controller 2026-01-20T10:49:35.471152067+00:00 stderr F I0120 10:49:35.463331 1 controller.go:104] Waiting for informer caches to sync 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.670892 1 controller.go:109] Starting workers 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671734 1 controller.go:114] Started workers 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671774 1 controller.go:192] Received pod 'csi-hostpathplugin-hvm8g' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671820 1 controller.go:192] Received pod 'openshift-apiserver-operator-7c88c4c865-kn67m' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671888 1 controller.go:151] Successfully synced 'openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671898 1
controller.go:192] Received pod 'apiserver-7fc54b8dd7-d2bhp' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671914 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-7fc54b8dd7-d2bhp' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671920 1 controller.go:192] Received pod 'authentication-operator-7cc7ff75d5-g9qv8' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671932 1 controller.go:151] Successfully synced 'openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671937 1 controller.go:192] Received pod 'oauth-openshift-74fc7c67cc-xqf8b' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671965 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671969 1 controller.go:192] Received pod 'cluster-samples-operator-bc474d5d6-wshwg' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671979 1 controller.go:151] Successfully synced 'openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671984 1 controller.go:192] Received pod 'openshift-config-operator-77658b5b66-dq5sc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671993 1 controller.go:151] Successfully synced 'openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.671997 1 controller.go:192] Received pod 'console-conversion-webhook-595f9969b-l6z49' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672009 1 controller.go:151] Successfully synced 'openshift-console-operator/console-conversion-webhook-595f9969b-l6z49' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672013 1 controller.go:192] Received pod 'console-operator-5dbbc74dc9-cp5cd' 
2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672021 1 controller.go:151] Successfully synced 'openshift-console-operator/console-operator-5dbbc74dc9-cp5cd' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672025 1 controller.go:192] Received pod 'console-644bb77b49-5x5xk' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672040 1 controller.go:151] Successfully synced 'openshift-console/console-644bb77b49-5x5xk' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672045 1 controller.go:192] Received pod 'downloads-65476884b9-9wcvx' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672082 1 controller.go:151] Successfully synced 'openshift-console/downloads-65476884b9-9wcvx' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672088 1 controller.go:192] Received pod 'openshift-controller-manager-operator-7978d7d7f6-2nt8z' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672098 1 controller.go:151] Successfully synced 'openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672103 1 controller.go:192] Received pod 'controller-manager-778975cc4f-x5vcf' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672115 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-778975cc4f-x5vcf' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672119 1 controller.go:192] Received pod 'dns-operator-75f687757b-nz2xb' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672132 1 controller.go:151] Successfully synced 'openshift-dns-operator/dns-operator-75f687757b-nz2xb' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672136 1 controller.go:192] Received pod 'dns-default-gbw49' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672147 1 controller.go:151] Successfully synced 'openshift-dns/dns-default-gbw49' 
2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672150 1 controller.go:192] Received pod 'etcd-operator-768d5b5d86-722mg' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672160 1 controller.go:151] Successfully synced 'openshift-etcd-operator/etcd-operator-768d5b5d86-722mg' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672164 1 controller.go:192] Received pod 'cluster-image-registry-operator-7769bd8d7d-q5cvv' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672173 1 controller.go:151] Successfully synced 'openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672177 1 controller.go:192] Received pod 'image-registry-75779c45fd-v2j2v' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672186 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672193 1 controller.go:192] Received pod 'ingress-canary-2vhcn' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672201 1 controller.go:151] Successfully synced 'openshift-ingress-canary/ingress-canary-2vhcn' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672205 1 controller.go:192] Received pod 'ingress-operator-7d46d5bb6d-rrg6t' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672214 1 controller.go:151] Successfully synced 'openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672217 1 controller.go:192] Received pod 'kube-apiserver-operator-78d54458c4-sc8h7' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672226 1 controller.go:151] Successfully synced 'openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672229 1 controller.go:192] Received pod 'installer-12-crc' 
2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672238 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-12-crc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672242 1 controller.go:192] Received pod 'installer-9-crc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672250 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-9-crc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672254 1 controller.go:192] Received pod 'kube-controller-manager-operator-6f6cb54958-rbddb' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672264 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672268 1 controller.go:192] Received pod 'installer-10-crc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672277 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-crc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672280 1 controller.go:192] Received pod 'installer-10-retry-1-crc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672294 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-retry-1-crc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672298 1 controller.go:192] Received pod 'installer-11-crc' 2026-01-20T10:49:35.672315895+00:00 stderr F I0120 10:49:35.672307 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-11-crc' 2026-01-20T10:49:35.672374397+00:00 stderr F I0120 10:49:35.672311 1 controller.go:192] Received pod 'revision-pruner-10-crc' 2026-01-20T10:49:35.672374397+00:00 stderr F I0120 10:49:35.672323 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-10-crc' 2026-01-20T10:49:35.672374397+00:00 stderr F I0120 10:49:35.672327 1 
controller.go:192] Received pod 'revision-pruner-11-crc' 2026-01-20T10:49:35.672374397+00:00 stderr F I0120 10:49:35.672336 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-11-crc' 2026-01-20T10:49:35.672374397+00:00 stderr F I0120 10:49:35.672340 1 controller.go:192] Received pod 'revision-pruner-8-crc' 2026-01-20T10:49:35.672374397+00:00 stderr F I0120 10:49:35.672360 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-8-crc' 2026-01-20T10:49:35.672374397+00:00 stderr F I0120 10:49:35.672364 1 controller.go:192] Received pod 'revision-pruner-9-crc' 2026-01-20T10:49:35.672385067+00:00 stderr F I0120 10:49:35.672373 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-9-crc' 2026-01-20T10:49:35.672385067+00:00 stderr F I0120 10:49:35.672377 1 controller.go:192] Received pod 'openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2026-01-20T10:49:35.672392967+00:00 stderr F I0120 10:49:35.672386 1 controller.go:151] Successfully synced 'openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2026-01-20T10:49:35.672392967+00:00 stderr F I0120 10:49:35.672390 1 controller.go:192] Received pod 'installer-7-crc' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672400 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-7-crc' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672409 1 controller.go:192] Received pod 'installer-8-crc' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672424 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-8-crc' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672428 1 controller.go:192] Received pod 'kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672439 1 controller.go:151] Successfully synced 
'openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672443 1 controller.go:192] Received pod 'migrator-f7c6d88df-q2fnv' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672453 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672457 1 controller.go:192] Received pod 'control-plane-machine-set-operator-649bd778b4-tt5tw' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672466 1 controller.go:151] Successfully synced 'openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672470 1 controller.go:192] Received pod 'machine-api-operator-788b7c6b6c-ctdmb' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672479 1 controller.go:151] Successfully synced 'openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb' 2026-01-20T10:49:35.672486890+00:00 stderr F I0120 10:49:35.672483 1 controller.go:192] Received pod 'machine-config-controller-6df6df6b6b-58shh' 2026-01-20T10:49:35.672503381+00:00 stderr F I0120 10:49:35.672492 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh' 2026-01-20T10:49:35.672503381+00:00 stderr F I0120 10:49:35.672496 1 controller.go:192] Received pod 'machine-config-operator-76788bff89-wkjgm' 2026-01-20T10:49:35.672510941+00:00 stderr F I0120 10:49:35.672506 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm' 2026-01-20T10:49:35.672529841+00:00 stderr F I0120 10:49:35.672510 1 controller.go:192] Received pod 'certified-operators-7287f' 2026-01-20T10:49:35.672529841+00:00 stderr F I0120 10:49:35.672520 1 controller.go:151] Successfully synced 
'openshift-marketplace/certified-operators-7287f' 2026-01-20T10:49:35.672529841+00:00 stderr F I0120 10:49:35.672523 1 controller.go:192] Received pod 'community-operators-8jhz6' 2026-01-20T10:49:35.672537952+00:00 stderr F I0120 10:49:35.672532 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6' 2026-01-20T10:49:35.672545162+00:00 stderr F I0120 10:49:35.672537 1 controller.go:192] Received pod 'community-operators-sdddl' 2026-01-20T10:49:35.672553242+00:00 stderr F I0120 10:49:35.672547 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl' 2026-01-20T10:49:35.672560272+00:00 stderr F I0120 10:49:35.672551 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-f9xdt' 2026-01-20T10:49:35.672567203+00:00 stderr F I0120 10:49:35.672562 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt' 2026-01-20T10:49:35.672574143+00:00 stderr F I0120 10:49:35.672566 1 controller.go:192] Received pod 'redhat-marketplace-8s8pc' 2026-01-20T10:49:35.672581053+00:00 stderr F I0120 10:49:35.672576 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc' 2026-01-20T10:49:35.672587973+00:00 stderr F I0120 10:49:35.672580 1 controller.go:192] Received pod 'redhat-operators-f4jkp' 2026-01-20T10:49:35.672595513+00:00 stderr F I0120 10:49:35.672589 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp' 2026-01-20T10:49:35.672602434+00:00 stderr F I0120 10:49:35.672593 1 controller.go:192] Received pod 'multus-admission-controller-6c7c885997-4hbbc' 2026-01-20T10:49:35.672610014+00:00 stderr F I0120 10:49:35.672604 1 controller.go:151] Successfully synced 'openshift-multus/multus-admission-controller-6c7c885997-4hbbc' 2026-01-20T10:49:35.672616924+00:00 stderr F I0120 10:49:35.672607 1 controller.go:192] Received pod 'network-metrics-daemon-qdfr4' 2026-01-20T10:49:35.672624294+00:00 stderr F 
I0120 10:49:35.672617 1 controller.go:151] Successfully synced 'openshift-multus/network-metrics-daemon-qdfr4' 2026-01-20T10:49:35.672624294+00:00 stderr F I0120 10:49:35.672621 1 controller.go:192] Received pod 'network-check-source-5c5478f8c-vqvt7' 2026-01-20T10:49:35.672661565+00:00 stderr F I0120 10:49:35.672630 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7' 2026-01-20T10:49:35.672661565+00:00 stderr F I0120 10:49:35.672638 1 controller.go:192] Received pod 'network-check-target-v54bt' 2026-01-20T10:49:35.672661565+00:00 stderr F I0120 10:49:35.672652 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-target-v54bt' 2026-01-20T10:49:35.672661565+00:00 stderr F I0120 10:49:35.672657 1 controller.go:192] Received pod 'apiserver-69c565c9b6-vbdpd' 2026-01-20T10:49:35.672674996+00:00 stderr F I0120 10:49:35.672667 1 controller.go:151] Successfully synced 'openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd' 2026-01-20T10:49:35.672674996+00:00 stderr F I0120 10:49:35.672671 1 controller.go:192] Received pod 'catalog-operator-857456c46-7f5wf' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672681 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672689 1 controller.go:192] Received pod 'collect-profiles-29251920-wcws2' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672699 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672703 1 controller.go:192] Received pod 'collect-profiles-29251935-d7x6j' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672718 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j' 2026-01-20T10:49:35.672763819+00:00 
stderr F I0120 10:49:35.672722 1 controller.go:192] Received pod 'collect-profiles-29251950-x8jjd' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672732 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672736 1 controller.go:192] Received pod 'olm-operator-6d8474f75f-x54mh' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672746 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672749 1 controller.go:192] Received pod 'package-server-manager-84d578d794-jw7r2' 2026-01-20T10:49:35.672763819+00:00 stderr F I0120 10:49:35.672759 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2' 2026-01-20T10:49:35.672776859+00:00 stderr F I0120 10:49:35.672763 1 controller.go:192] Received pod 'packageserver-8464bcc55b-sjnqz' 2026-01-20T10:49:35.672776859+00:00 stderr F I0120 10:49:35.672773 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz' 2026-01-20T10:49:35.672784389+00:00 stderr F I0120 10:49:35.672776 1 controller.go:192] Received pod 'route-controller-manager-776b8b7477-sfpvs' 2026-01-20T10:49:35.672791779+00:00 stderr F I0120 10:49:35.672786 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs' 2026-01-20T10:49:35.672798810+00:00 stderr F I0120 10:49:35.672790 1 controller.go:192] Received pod 'service-ca-operator-546b4f8984-pwccz' 2026-01-20T10:49:35.672807040+00:00 stderr F I0120 10:49:35.672799 1 controller.go:151] Successfully synced 'openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz' 2026-01-20T10:49:35.672807040+00:00 stderr F I0120 10:49:35.672804 1 controller.go:192] Received pod 
'service-ca-666f99b6f-kk8kg' 2026-01-20T10:49:35.672821690+00:00 stderr F I0120 10:49:35.672813 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-kk8kg' 2026-01-20T10:49:35.672849841+00:00 stderr F I0120 10:49:35.672830 1 controller.go:151] Successfully synced 'hostpath-provisioner/csi-hostpathplugin-hvm8g' 2026-01-20T10:50:43.756184758+00:00 stderr F I0120 10:50:43.755446 1 controller.go:192] Received pod 'collect-profiles-29481765-pbh8m' 2026-01-20T10:50:43.756318511+00:00 stderr F I0120 10:50:43.756292 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29481765-pbh8m' 2026-01-20T10:50:43.852135282+00:00 stderr F I0120 10:50:43.848936 1 controller.go:192] Received pod 'redhat-operators-ndkkp' 2026-01-20T10:50:43.852135282+00:00 stderr F I0120 10:50:43.848974 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-ndkkp' 2026-01-20T10:50:43.855195530+00:00 stderr F I0120 10:50:43.854440 1 controller.go:192] Received pod 'redhat-marketplace-qr2m4' 2026-01-20T10:50:43.855195530+00:00 stderr F I0120 10:50:43.854467 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-qr2m4' 2026-01-20T10:50:43.871696805+00:00 stderr F I0120 10:50:43.871630 1 controller.go:192] Received pod 'certified-operators-tshnz' 2026-01-20T10:50:43.871696805+00:00 stderr F I0120 10:50:43.871671 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-tshnz' 2026-01-20T10:50:49.950163904+00:00 stderr F I0120 10:50:49.948934 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2' 2026-01-20T10:50:58.058510118+00:00 stderr F I0120 10:50:58.058110 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-nc8zc' 2026-01-20T10:50:58.058510118+00:00 stderr F I0120 10:50:58.058503 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-nc8zc' 
2026-01-20T10:50:59.045247697+00:00 stderr F I0120 10:50:59.045184 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt' 2026-01-20T10:51:02.750341507+00:00 stderr F I0120 10:51:02.749774 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-qr2m4' 2026-01-20T10:51:03.403726802+00:00 stderr F I0120 10:51:03.403631 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc' 2026-01-20T10:51:03.424275174+00:00 stderr F I0120 10:51:03.424238 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-7287f' 2026-01-20T10:51:03.452657033+00:00 stderr F I0120 10:51:03.450444 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp' 2026-01-20T10:51:03.548878305+00:00 stderr F I0120 10:51:03.548799 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-ndkkp' 2026-01-20T10:51:04.019360670+00:00 stderr F I0120 10:51:04.017670 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6' 2026-01-20T10:51:04.886706140+00:00 stderr F I0120 10:51:04.885952 1 controller.go:192] Received pod 'redhat-marketplace-2mx7j' 2026-01-20T10:51:04.886763602+00:00 stderr F I0120 10:51:04.886706 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-2mx7j' 2026-01-20T10:51:05.555187290+00:00 stderr F I0120 10:51:05.555144 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl' 2026-01-20T10:51:05.605178210+00:00 stderr F I0120 10:51:05.605101 1 controller.go:192] Received pod 'certified-operators-mpjb7' 2026-01-20T10:51:05.605178210+00:00 stderr F I0120 10:51:05.605168 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-mpjb7' 2026-01-20T10:51:05.826382343+00:00 stderr F I0120 10:51:05.826317 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-tshnz' 
2026-01-20T10:51:06.419729699+00:00 stderr F I0120 10:51:06.419417 1 controller.go:192] Received pod 'redhat-operators-2nxg8' 2026-01-20T10:51:06.419729699+00:00 stderr F I0120 10:51:06.419718 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-2nxg8' 2026-01-20T10:51:07.097757494+00:00 stderr F I0120 10:51:07.097019 1 controller.go:192] Received pod 'community-operators-6m4w2' 2026-01-20T10:51:07.097757494+00:00 stderr F I0120 10:51:07.097679 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-6m4w2' 2026-01-20T10:51:07.428289647+00:00 stderr F I0120 10:51:07.427550 1 controller.go:192] Received pod 'community-operators-9k9jk' 2026-01-20T10:51:07.428289647+00:00 stderr F I0120 10:51:07.427586 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-9k9jk' 2026-01-20T10:51:45.637371526+00:00 stderr F I0120 10:51:45.636425 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-9k9jk' 2026-01-20T10:53:58.628861427+00:00 stderr F I0120 10:53:58.627886 1 controller.go:151] Successfully synced 'openshift-multus/cni-sysctl-allowlist-ds-xhs5c' 2026-01-20T10:55:16.783176254+00:00 stderr F I0120 10:55:16.782375 1 controller.go:192] Received pod 'image-registry-75b7bb6564-ln84v' 2026-01-20T10:55:16.783253376+00:00 stderr F I0120 10:55:16.783190 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75b7bb6564-ln84v' 2026-01-20T10:56:02.400613229+00:00 stderr F I0120 10:56:02.400019 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v' 2026-01-20T10:56:28.019981546+00:00 stderr F I0120 10:56:28.017286 1 controller.go:192] Received pod 'installer-13-crc' 2026-01-20T10:56:28.019981546+00:00 stderr F I0120 10:56:28.017827 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-13-crc' 2026-01-20T10:56:35.219499231+00:00 stderr F I0120 10:56:35.218572 1 controller.go:192] 
Received pod 'cert-manager-cainjector-676dd9bd64-mggnx' 2026-01-20T10:56:35.219499231+00:00 stderr F I0120 10:56:35.219443 1 controller.go:151] Successfully synced 'cert-manager/cert-manager-cainjector-676dd9bd64-mggnx' 2026-01-20T10:56:35.264625495+00:00 stderr F I0120 10:56:35.264531 1 controller.go:192] Received pod 'cert-manager-webhook-855f577f79-7bdxq' 2026-01-20T10:56:35.264733428+00:00 stderr F I0120 10:56:35.264719 1 controller.go:151] Successfully synced 'cert-manager/cert-manager-webhook-855f577f79-7bdxq' 2026-01-20T10:56:35.268345316+00:00 stderr F I0120 10:56:35.267741 1 controller.go:192] Received pod 'cert-manager-758df9885c-cq6zm' 2026-01-20T10:56:35.268345316+00:00 stderr F I0120 10:56:35.267767 1 controller.go:151] Successfully synced 'cert-manager/cert-manager-758df9885c-cq6zm' 2026-01-20T10:56:44.740178320+00:00 stderr F I0120 10:56:44.738258 1 controller.go:151] Successfully synced 'openshift-ovn-kubernetes/ovnkube-node-44qcg'

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log

2025-08-13T19:59:14.203072729+00:00 stderr F I0813 19:59:14.200136 1 main.go:45] Version:216149b14d9cb61ae90ac65d839a448fb11075bb 2025-08-13T19:59:14.203072729+00:00 stderr F I0813 19:59:14.201374 1 main.go:46] Starting with config{ :9091 crc} 2025-08-13T19:59:14.243408519+00:00 stderr F W0813 19:59:14.238333 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-08-13T19:59:15.680406063+00:00 stderr F I0813 19:59:15.679464 1 controller.go:42] Setting up event handlers 2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811144 1 podmetrics.go:101] Serving network metrics 2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811485 1 controller.go:101] Starting pod controller 2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811492 1 controller.go:104] Waiting for informer caches to sync 2025-08-13T19:59:22.235926353+00:00 stderr F I0813 19:59:22.214243 1 controller.go:109] Starting workers 2025-08-13T19:59:22.251194178+00:00 stderr F I0813 19:59:22.239598 1 controller.go:114] Started workers 2025-08-13T19:59:22.251194178+00:00 stderr F I0813 19:59:22.239961 1 controller.go:192] Received pod 'csi-hostpathplugin-hvm8g' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.292973 1 controller.go:192] Received pod 'openshift-apiserver-operator-7c88c4c865-kn67m' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293124 1 controller.go:151] Successfully synced 'openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293138 1 controller.go:192] Received pod 'apiserver-67cbf64bc9-mtx25' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293157 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-mtx25' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293165 1 controller.go:192] Received pod 'authentication-operator-7cc7ff75d5-g9qv8' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293176 1 controller.go:151] Successfully synced 'openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293182 1 controller.go:192] Received pod 'oauth-openshift-765b47f944-n2lhl' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293199 1 controller.go:151] Successfully synced 
'openshift-authentication/oauth-openshift-765b47f944-n2lhl' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293206 1 controller.go:192] Received pod 'cluster-samples-operator-bc474d5d6-wshwg' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293217 1 controller.go:151] Successfully synced 'openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293221 1 controller.go:192] Received pod 'openshift-config-operator-77658b5b66-dq5sc' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293246 1 controller.go:151] Successfully synced 'openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293253 1 controller.go:192] Received pod 'console-conversion-webhook-595f9969b-l6z49' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293269 1 controller.go:151] Successfully synced 'openshift-console-operator/console-conversion-webhook-595f9969b-l6z49' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293276 1 controller.go:192] Received pod 'console-operator-5dbbc74dc9-cp5cd' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293287 1 controller.go:151] Successfully synced 'openshift-console-operator/console-operator-5dbbc74dc9-cp5cd' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293292 1 controller.go:192] Received pod 'console-84fccc7b6-mkncc' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293306 1 controller.go:151] Successfully synced 'openshift-console/console-84fccc7b6-mkncc' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293311 1 controller.go:192] Received pod 'downloads-65476884b9-9wcvx' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293327 1 controller.go:151] Successfully synced 'openshift-console/downloads-65476884b9-9wcvx' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293334 1 
controller.go:192] Received pod 'openshift-controller-manager-operator-7978d7d7f6-2nt8z' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293345 1 controller.go:151] Successfully synced 'openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293350 1 controller.go:192] Received pod 'controller-manager-6ff78978b4-q4vv8' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.350984 1 controller.go:151] Successfully synced 'hostpath-provisioner/csi-hostpathplugin-hvm8g' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403421 1 controller.go:192] Received pod 'dns-operator-75f687757b-nz2xb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403501 1 controller.go:151] Successfully synced 'openshift-dns-operator/dns-operator-75f687757b-nz2xb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403513 1 controller.go:192] Received pod 'dns-default-gbw49' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403534 1 controller.go:151] Successfully synced 'openshift-dns/dns-default-gbw49' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403539 1 controller.go:192] Received pod 'etcd-operator-768d5b5d86-722mg' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403550 1 controller.go:151] Successfully synced 'openshift-etcd-operator/etcd-operator-768d5b5d86-722mg' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403556 1 controller.go:192] Received pod 'cluster-image-registry-operator-7769bd8d7d-q5cvv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403567 1 controller.go:151] Successfully synced 'openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403572 1 controller.go:192] Received pod 'image-registry-585546dd8b-v5m4t' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403583 1 
controller.go:151] Successfully synced 'openshift-image-registry/image-registry-585546dd8b-v5m4t' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403587 1 controller.go:192] Received pod 'ingress-canary-2vhcn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403598 1 controller.go:151] Successfully synced 'openshift-ingress-canary/ingress-canary-2vhcn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403607 1 controller.go:192] Received pod 'ingress-operator-7d46d5bb6d-rrg6t' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403624 1 controller.go:151] Successfully synced 'openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403629 1 controller.go:192] Received pod 'kube-apiserver-operator-78d54458c4-sc8h7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403659 1 controller.go:151] Successfully synced 'openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403665 1 controller.go:192] Received pod 'kube-controller-manager-operator-6f6cb54958-rbddb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403675 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403680 1 controller.go:192] Received pod 'revision-pruner-8-crc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403691 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-8-crc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403696 1 controller.go:192] Received pod 'openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403721 1 controller.go:151] Successfully synced 
'openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403728 1 controller.go:192] Received pod 'kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403744 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403751 1 controller.go:192] Received pod 'migrator-f7c6d88df-q2fnv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403767 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403828 1 controller.go:192] Received pod 'control-plane-machine-set-operator-649bd778b4-tt5tw' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403905 1 controller.go:151] Successfully synced 'openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403915 1 controller.go:192] Received pod 'machine-api-operator-788b7c6b6c-ctdmb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403929 1 controller.go:151] Successfully synced 'openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403935 1 controller.go:192] Received pod 'machine-config-controller-6df6df6b6b-58shh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403944 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403949 1 controller.go:192] Received pod 'machine-config-operator-76788bff89-wkjgm' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403960 1 controller.go:151] Successfully synced 
'openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403973 1 controller.go:192] Received pod 'certified-operators-7287f' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403985 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-7287f' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403990 1 controller.go:192] Received pod 'certified-operators-g4v97' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404000 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-g4v97' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404005 1 controller.go:192] Received pod 'community-operators-8jhz6' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404015 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404024 1 controller.go:192] Received pod 'community-operators-k9qqb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404039 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-k9qqb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404043 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-f9xdt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404055 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404060 1 controller.go:192] Received pod 'redhat-marketplace-8s8pc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404070 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404075 1 controller.go:192] Received pod 'redhat-marketplace-rmwfn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 
19:59:22.404093 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-rmwfn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404099 1 controller.go:192] Received pod 'redhat-operators-dcqzh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404109 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-dcqzh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404114 1 controller.go:192] Received pod 'redhat-operators-f4jkp' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404124 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404128 1 controller.go:192] Received pod 'multus-admission-controller-6c7c885997-4hbbc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404141 1 controller.go:151] Successfully synced 'openshift-multus/multus-admission-controller-6c7c885997-4hbbc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404149 1 controller.go:192] Received pod 'network-metrics-daemon-qdfr4' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404162 1 controller.go:151] Successfully synced 'openshift-multus/network-metrics-daemon-qdfr4' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404166 1 controller.go:192] Received pod 'network-check-source-5c5478f8c-vqvt7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404176 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404180 1 controller.go:192] Received pod 'network-check-target-v54bt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404191 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-target-v54bt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404196 1 controller.go:192] Received pod 
'apiserver-69c565c9b6-vbdpd' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404212 1 controller.go:151] Successfully synced 'openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404218 1 controller.go:192] Received pod 'catalog-operator-857456c46-7f5wf' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404229 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404233 1 controller.go:192] Received pod 'collect-profiles-29251905-zmjv9' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404243 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404248 1 controller.go:192] Received pod 'olm-operator-6d8474f75f-x54mh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404257 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404262 1 controller.go:192] Received pod 'package-server-manager-84d578d794-jw7r2' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404278 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2' 2025-08-13T19:59:22.415768500+00:00 stderr P I0813 19:59:22.404286 1 con 2025-08-13T19:59:22.416128400+00:00 stderr F troller.go:192] Received pod 'packageserver-8464bcc55b-sjnqz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404298 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404302 1 controller.go:192] Received pod 'route-controller-manager-5c4dbb8899-tchz5' 
2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404318 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404323 1 controller.go:192] Received pod 'service-ca-operator-546b4f8984-pwccz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404345 1 controller.go:151] Successfully synced 'openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404349 1 controller.go:192] Received pod 'service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404361 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404375 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-6ff78978b4-q4vv8' 2025-08-13T19:59:47.793265519+00:00 stderr F I0813 19:59:47.789858 1 controller.go:192] Received pod 'service-ca-666f99b6f-kk8kg' 2025-08-13T19:59:47.798560020+00:00 stderr F I0813 19:59:47.795202 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-kk8kg' 2025-08-13T19:59:52.481966496+00:00 stderr F I0813 19:59:52.454353 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:54.699326581+00:00 stderr F I0813 19:59:54.695209 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-6ff78978b4-q4vv8' 2025-08-13T20:00:00.033952767+00:00 stderr F I0813 20:00:00.016212 1 controller.go:192] Received pod 'controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:00.033952767+00:00 stderr F I0813 20:00:00.031888 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:01.733428071+00:00 stderr F I0813 20:00:01.686450 1 controller.go:192] Received 
pod 'route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:01.733428071+00:00 stderr F I0813 20:00:01.687228 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:05.392908974+00:00 stderr F I0813 20:00:05.381095 1 controller.go:192] Received pod 'collect-profiles-29251920-wcws2' 2025-08-13T20:00:05.392908974+00:00 stderr F I0813 20:00:05.390516 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2' 2025-08-13T20:00:12.837175058+00:00 stderr F I0813 20:00:12.831533 1 controller.go:192] Received pod 'revision-pruner-9-crc' 2025-08-13T20:00:12.837175058+00:00 stderr F I0813 20:00:12.832637 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-9-crc' 2025-08-13T20:00:13.415470178+00:00 stderr F I0813 20:00:13.414068 1 controller.go:192] Received pod 'installer-9-crc' 2025-08-13T20:00:13.415470178+00:00 stderr F I0813 20:00:13.414423 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-9-crc' 2025-08-13T20:00:18.087146555+00:00 stderr F I0813 20:00:18.085440 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:19.199895773+00:00 stderr F I0813 20:00:19.199376 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:23.817216183+00:00 stderr F I0813 20:00:23.814290 1 controller.go:192] Received pod 'route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:23.817216183+00:00 stderr F I0813 20:00:23.815556 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.262921 1 controller.go:192] Received pod 'installer-9-crc' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263147 1 
controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-9-crc' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263157 1 controller.go:192] Received pod 'console-5d9678894c-wx62n' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263168 1 controller.go:151] Successfully synced 'openshift-console/console-5d9678894c-wx62n' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263180 1 controller.go:192] Received pod 'controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263191 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:27.488118776+00:00 stderr F I0813 20:00:27.486613 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5' 2025-08-13T20:00:27.637899127+00:00 stderr F I0813 20:00:27.636553 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-585546dd8b-v5m4t' 2025-08-13T20:00:29.897264330+00:00 stderr F I0813 20:00:29.895923 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:30.862824172+00:00 stderr F I0813 20:00:30.859600 1 controller.go:192] Received pod 'image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:00:30.862824172+00:00 stderr F I0813 20:00:30.860019 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:00:30.904206192+00:00 stderr F I0813 20:00:30.904154 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:30.971050898+00:00 stderr F I0813 20:00:30.970969 1 controller.go:192] Received pod 'image-registry-75779c45fd-v2j2v' 2025-08-13T20:00:30.971139431+00:00 stderr F I0813 20:00:30.971121 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v' 
2025-08-13T20:00:31.082874517+00:00 stderr F I0813 20:00:31.082592 1 controller.go:192] Received pod 'controller-manager-78589965b8-vmcwt' 2025-08-13T20:00:31.082874517+00:00 stderr F I0813 20:00:31.082653 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-78589965b8-vmcwt' 2025-08-13T20:00:36.540416602+00:00 stderr F I0813 20:00:36.533143 1 controller.go:192] Received pod 'route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:00:36.540416602+00:00 stderr F I0813 20:00:36.533597 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:00:41.455954023+00:00 stderr F I0813 20:00:41.454478 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-mtx25' 2025-08-13T20:00:45.801433589+00:00 stderr F I0813 20:00:45.784549 1 controller.go:192] Received pod 'installer-10-crc' 2025-08-13T20:00:45.801433589+00:00 stderr F I0813 20:00:45.798645 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-crc' 2025-08-13T20:00:45.900313318+00:00 stderr F I0813 20:00:45.873345 1 controller.go:192] Received pod 'revision-pruner-10-crc' 2025-08-13T20:00:45.900531905+00:00 stderr F I0813 20:00:45.900504 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-10-crc' 2025-08-13T20:00:45.951490608+00:00 stderr F I0813 20:00:45.949257 1 controller.go:192] Received pod 'installer-7-crc' 2025-08-13T20:00:45.951490608+00:00 stderr F I0813 20:00:45.949399 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-7-crc' 2025-08-13T20:00:47.103216737+00:00 stderr F I0813 20:00:47.102333 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-765b47f944-n2lhl' 2025-08-13T20:01:00.041614175+00:00 stderr F I0813 20:01:00.039017 1 controller.go:192] Received pod 'oauth-openshift-74fc7c67cc-xqf8b' 2025-08-13T20:01:00.054891223+00:00 stderr F 
I0813 20:01:00.054311 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b' 2025-08-13T20:01:00.080921856+00:00 stderr F I0813 20:01:00.078163 1 controller.go:192] Received pod 'apiserver-67cbf64bc9-jjfds' 2025-08-13T20:01:00.080921856+00:00 stderr F I0813 20:01:00.078340 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-jjfds' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.282197 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-9-crc' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.283523 1 controller.go:192] Received pod 'console-644bb77b49-5x5xk' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.283591 1 controller.go:151] Successfully synced 'openshift-console/console-644bb77b49-5x5xk' 2025-08-13T20:06:25.106206287+00:00 stderr F I0813 20:06:25.105569 1 controller.go:192] Received pod 'installer-10-retry-1-crc' 2025-08-13T20:06:25.106544196+00:00 stderr F I0813 20:06:25.105651 1 controller.go:192] Received pod 'controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:06:25.109592163+00:00 stderr F I0813 20:06:25.109563 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-retry-1-crc' 2025-08-13T20:06:25.109648825+00:00 stderr F I0813 20:06:25.109635 1 controller.go:192] Received pod 'route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:06:25.109696206+00:00 stderr F I0813 20:06:25.109682 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:06:25.109731127+00:00 stderr F I0813 20:06:25.109719 1 controller.go:151] Successfully synced 'openshift-console/console-84fccc7b6-mkncc' 2025-08-13T20:06:25.109764118+00:00 stderr F I0813 20:06:25.109753 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-78589965b8-vmcwt' 2025-08-13T20:06:25.109920543+00:00 stderr F I0813 
20:06:25.109906 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:06:25.109960774+00:00 stderr F I0813 20:06:25.109949 1 controller.go:151] Successfully synced 'openshift-console/console-5d9678894c-wx62n' 2025-08-13T20:06:25.109990255+00:00 stderr F I0813 20:06:25.109979 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/kube-apiserver-startup-monitor-crc' 2025-08-13T20:06:25.110058797+00:00 stderr F I0813 20:06:25.110002 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:06:25.110085968+00:00 stderr F I0813 20:06:25.110010 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:06:32.084964732+00:00 stderr F I0813 20:06:32.083272 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-rmwfn' 2025-08-13T20:06:34.485249150+00:00 stderr F I0813 20:06:34.484717 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-dcqzh' 2025-08-13T20:06:34.779125875+00:00 stderr F I0813 20:06:34.778404 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-g4v97' 2025-08-13T20:06:35.123551180+00:00 stderr F I0813 20:06:35.123213 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-k9qqb' 2025-08-13T20:06:35.928407086+00:00 stderr F I0813 20:06:35.928154 1 controller.go:192] Received pod 'redhat-marketplace-4txfd' 2025-08-13T20:06:35.929223220+00:00 stderr F I0813 20:06:35.929127 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-4txfd' 2025-08-13T20:06:36.643606302+00:00 stderr F I0813 20:06:36.642995 1 controller.go:192] Received pod 'certified-operators-cfdk8' 2025-08-13T20:06:36.643736426+00:00 stderr F I0813 20:06:36.643721 1 controller.go:151] Successfully synced 
'openshift-marketplace/certified-operators-cfdk8' 2025-08-13T20:06:37.666895990+00:00 stderr F I0813 20:06:37.665212 1 controller.go:192] Received pod 'redhat-operators-pmqwc' 2025-08-13T20:06:37.666895990+00:00 stderr F I0813 20:06:37.665954 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-pmqwc' 2025-08-13T20:06:39.388929202+00:00 stderr F I0813 20:06:39.388413 1 controller.go:192] Received pod 'community-operators-p7svp' 2025-08-13T20:06:39.389035525+00:00 stderr F I0813 20:06:39.389019 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-p7svp' 2025-08-13T20:06:49.910865775+00:00 stderr F I0813 20:06:49.910061 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/kube-controller-manager-crc' 2025-08-13T20:07:05.606433680+00:00 stderr F I0813 20:07:05.590499 1 controller.go:192] Received pod 'installer-11-crc' 2025-08-13T20:07:05.606433680+00:00 stderr F I0813 20:07:05.591545 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-11-crc' 2025-08-13T20:07:09.571116321+00:00 stderr F I0813 20:07:09.568456 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-4txfd' 2025-08-13T20:07:17.348392622+00:00 stderr F I0813 20:07:17.344598 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-jjfds' 2025-08-13T20:07:21.108945819+00:00 stderr F I0813 20:07:21.104214 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-cfdk8' 2025-08-13T20:07:21.346738817+00:00 stderr F I0813 20:07:21.346147 1 controller.go:192] Received pod 'apiserver-7fc54b8dd7-d2bhp' 2025-08-13T20:07:21.346738817+00:00 stderr F I0813 20:07:21.346207 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-7fc54b8dd7-d2bhp' 2025-08-13T20:07:23.498192142+00:00 stderr F I0813 20:07:23.495542 1 controller.go:192] Received pod 'revision-pruner-11-crc' 2025-08-13T20:07:23.498192142+00:00 stderr F I0813 
20:07:23.497749 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-11-crc' 2025-08-13T20:07:24.100475910+00:00 stderr F I0813 20:07:24.094438 1 controller.go:192] Received pod 'installer-8-crc' 2025-08-13T20:07:24.100475910+00:00 stderr F I0813 20:07:24.095461 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-8-crc' 2025-08-13T20:07:26.190039028+00:00 stderr F I0813 20:07:26.184720 1 controller.go:192] Received pod 'installer-11-crc' 2025-08-13T20:07:26.190039028+00:00 stderr F I0813 20:07:26.189167 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-11-crc' 2025-08-13T20:07:34.911652425+00:00 stderr F I0813 20:07:34.905613 1 controller.go:192] Received pod 'installer-12-crc' 2025-08-13T20:07:34.911652425+00:00 stderr F I0813 20:07:34.906370 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-12-crc' 2025-08-13T20:07:41.042723107+00:00 stderr F I0813 20:07:41.042017 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-11-crc' 2025-08-13T20:07:42.641679050+00:00 stderr F I0813 20:07:42.640862 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-p7svp' 2025-08-13T20:08:01.087979002+00:00 stderr F I0813 20:08:01.087350 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-pmqwc' 2025-08-13T20:08:08.267400173+00:00 stderr F I0813 20:08:08.264471 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/openshift-kube-scheduler-crc' 2025-08-13T20:08:12.291050774+00:00 stderr F I0813 20:08:12.289975 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/kube-controller-manager-crc' 2025-08-13T20:10:21.976031818+00:00 stderr F I0813 20:10:21.975169 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/kube-apiserver-startup-monitor-crc' 2025-08-13T20:10:50.613462976+00:00 stderr F I0813 20:10:50.611261 1 controller.go:151] 
Successfully synced 'openshift-multus/cni-sysctl-allowlist-ds-jx5m8' 2025-08-13T20:11:00.793692470+00:00 stderr F I0813 20:11:00.789632 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:11:00.852394873+00:00 stderr F I0813 20:11:00.852251 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:11:02.192181446+00:00 stderr F I0813 20:11:02.192032 1 controller.go:192] Received pod 'controller-manager-778975cc4f-x5vcf' 2025-08-13T20:11:02.192437293+00:00 stderr F I0813 20:11:02.192326 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-778975cc4f-x5vcf' 2025-08-13T20:11:02.283304968+00:00 stderr F I0813 20:11:02.281406 1 controller.go:192] Received pod 'route-controller-manager-776b8b7477-sfpvs' 2025-08-13T20:11:02.283304968+00:00 stderr F I0813 20:11:02.281657 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs' 2025-08-13T20:15:01.024125786+00:00 stderr F I0813 20:15:01.017300 1 controller.go:192] Received pod 'collect-profiles-29251935-d7x6j' 2025-08-13T20:15:01.024125786+00:00 stderr F I0813 20:15:01.020210 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j' 2025-08-13T20:16:58.871475492+00:00 stderr F I0813 20:16:58.866052 1 controller.go:192] Received pod 'certified-operators-8bbjz' 2025-08-13T20:16:58.871475492+00:00 stderr F I0813 20:16:58.870147 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-8bbjz' 2025-08-13T20:17:01.049845789+00:00 stderr F I0813 20:17:01.049067 1 controller.go:192] Received pod 'redhat-marketplace-nsk78' 2025-08-13T20:17:01.049906111+00:00 stderr F I0813 20:17:01.049893 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nsk78' 2025-08-13T20:17:21.605438507+00:00 
stderr F I0813 20:17:21.604580 1 controller.go:192] Received pod 'redhat-operators-swl5s' 2025-08-13T20:17:21.605556170+00:00 stderr F I0813 20:17:21.605442 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-swl5s' 2025-08-13T20:17:31.144134404+00:00 stderr F I0813 20:17:31.142706 1 controller.go:192] Received pod 'community-operators-tfv59' 2025-08-13T20:17:31.144134404+00:00 stderr F I0813 20:17:31.143821 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-tfv59' 2025-08-13T20:17:36.938844574+00:00 stderr F I0813 20:17:36.938059 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nsk78' 2025-08-13T20:17:45.316184057+00:00 stderr F I0813 20:17:45.314435 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-8bbjz' 2025-08-13T20:18:56.581914971+00:00 stderr F I0813 20:18:56.581177 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-tfv59' 2025-08-13T20:19:38.583898355+00:00 stderr F I0813 20:19:38.580924 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-swl5s' 2025-08-13T20:27:06.515561143+00:00 stderr F I0813 20:27:06.514117 1 controller.go:192] Received pod 'redhat-marketplace-jbzn9' 2025-08-13T20:27:06.522213573+00:00 stderr F I0813 20:27:06.521390 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-jbzn9' 2025-08-13T20:27:06.626739732+00:00 stderr F I0813 20:27:06.626334 1 controller.go:192] Received pod 'certified-operators-xldzg' 2025-08-13T20:27:06.626769043+00:00 stderr F I0813 20:27:06.626758 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-xldzg' 2025-08-13T20:27:30.206946955+00:00 stderr F I0813 20:27:30.204841 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-xldzg' 2025-08-13T20:27:30.240926917+00:00 stderr F I0813 20:27:30.239904 1 controller.go:151] Successfully synced 
'openshift-marketplace/redhat-marketplace-jbzn9' 2025-08-13T20:28:44.070439440+00:00 stderr F I0813 20:28:44.069725 1 controller.go:192] Received pod 'community-operators-hvwvm' 2025-08-13T20:28:44.071889282+00:00 stderr F I0813 20:28:44.070535 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-hvwvm' 2025-08-13T20:29:07.485652753+00:00 stderr F I0813 20:29:07.484254 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-hvwvm' 2025-08-13T20:29:30.800937402+00:00 stderr F I0813 20:29:30.800342 1 controller.go:192] Received pod 'redhat-operators-zdwjn' 2025-08-13T20:29:30.801141928+00:00 stderr F I0813 20:29:30.801085 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-zdwjn' 2025-08-13T20:30:02.812209420+00:00 stderr F I0813 20:30:02.808013 1 controller.go:192] Received pod 'collect-profiles-29251950-x8jjd' 2025-08-13T20:30:02.812209420+00:00 stderr F I0813 20:30:02.808719 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd' 2025-08-13T20:30:08.194448628+00:00 stderr F I0813 20:30:08.193089 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9' 2025-08-13T20:30:34.188255151+00:00 stderr F I0813 20:30:34.187324 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-zdwjn' 2025-08-13T20:37:48.902847474+00:00 stderr F I0813 20:37:48.896192 1 controller.go:192] Received pod 'redhat-marketplace-nkzlk' 2025-08-13T20:37:48.902847474+00:00 stderr F I0813 20:37:48.899938 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nkzlk' 2025-08-13T20:38:09.866597412+00:00 stderr F I0813 20:38:09.865627 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nkzlk' 2025-08-13T20:38:36.821311697+00:00 stderr F I0813 20:38:36.817256 1 controller.go:192] Received pod 'certified-operators-4kmbv' 
2025-08-13T20:38:36.826287260+00:00 stderr F I0813 20:38:36.824009 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-4kmbv' 2025-08-13T20:38:58.234193701+00:00 stderr F I0813 20:38:58.233474 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-4kmbv' 2025-08-13T20:41:22.218162180+00:00 stderr F I0813 20:41:22.216147 1 controller.go:192] Received pod 'redhat-operators-k2tgr' 2025-08-13T20:41:22.218162180+00:00 stderr F I0813 20:41:22.217285 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-k2tgr' 2025-08-13T20:42:15.662994211+00:00 stderr F I0813 20:42:15.659488 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-k2tgr' 2025-08-13T20:42:27.892076227+00:00 stderr F I0813 20:42:27.887166 1 controller.go:192] Received pod 'community-operators-sdddl' 2025-08-13T20:42:27.892076227+00:00 stderr F I0813 20:42:27.888532 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl' 2025-08-13T20:42:44.151210941+00:00 stderr F I0813 20:42:44.150298 1 controller.go:116] Shutting down workers
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.log <==
2026-01-20T10:49:36.551438233+00:00 stderr F W0120 10:49:36.551036 1 deprecated.go:66] 2026-01-20T10:49:36.551438233+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:49:36.551438233+00:00 stderr F 2026-01-20T10:49:36.551438233+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2026-01-20T10:49:36.551438233+00:00 stderr F 2026-01-20T10:49:36.551438233+00:00 stderr F =============================================== 2026-01-20T10:49:36.551438233+00:00 stderr F 2026-01-20T10:49:36.551909447+00:00 stderr F I0120 10:49:36.551887 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:49:36.551967149+00:00 stderr F I0120 10:49:36.551946 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:49:36.552523286+00:00 stderr F I0120 10:49:36.552488 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2026-01-20T10:49:36.557644142+00:00 stderr F I0120 10:49:36.554256 1 kube-rbac-proxy.go:402] Listening securely on :8443
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.log <==
2025-08-13T19:59:23.418270946+00:00 stderr F W0813 19:59:23.412030 1 deprecated.go:66] 2025-08-13T19:59:23.418270946+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F =============================================== 2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F I0813 19:59:23.413617 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:23.418270946+00:00 stderr F I0813 19:59:23.413677 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:23.432166223+00:00 stderr F I0813 19:59:23.432126 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-08-13T19:59:23.450188556+00:00 stderr F I0813 19:59:23.450142 1 kube-rbac-proxy.go:402] Listening securely on :8443 2025-08-13T20:42:43.082211382+00:00 stderr F I0813 20:42:43.081428 1 kube-rbac-proxy.go:493] received interrupt, shutting down
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.log <==
2025-08-13T20:07:36.638917816+00:00 stderr F I0813 20:07:36.636214 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc000a0f180 cert-dir:0xc000a0f360 cert-secrets:0xc000a0f0e0 configmaps:0xc000a0ec80 namespace:0xc000a0eaa0 optional-cert-configmaps:0xc000a0f2c0 optional-cert-secrets:0xc000a0f220 optional-configmaps:0xc000a0edc0 optional-secrets:0xc000a0ed20 pod:0xc000a0eb40 pod-manifest-dir:0xc000a0ef00 resource-dir:0xc000a0ee60 revision:0xc000a0ea00 secrets:0xc000a0ebe0 v:0xc000a1e780] [0xc000a1e780 0xc000a0ea00 0xc000a0eaa0 0xc000a0eb40 0xc000a0ee60 0xc000a0ef00 0xc000a0ec80 0xc000a0edc0 0xc000a0ebe0 0xc000a0ed20 0xc000a0f360 0xc000a0f180 0xc000a0f2c0 0xc000a0f0e0 0xc000a0f220] [] map[cert-configmaps:0xc000a0f180 cert-dir:0xc000a0f360 cert-secrets:0xc000a0f0e0 configmaps:0xc000a0ec80 help:0xc000a1eb40 kubeconfig:0xc000a0e960 log-flush-frequency:0xc000a1e6e0 namespace:0xc000a0eaa0 optional-cert-configmaps:0xc000a0f2c0 optional-cert-secrets:0xc000a0f220 optional-configmaps:0xc000a0edc0 optional-secrets:0xc000a0ed20 pod:0xc000a0eb40 pod-manifest-dir:0xc000a0ef00 pod-manifests-lock-file:0xc000a0f040 resource-dir:0xc000a0ee60 revision:0xc000a0ea00 secrets:0xc000a0ebe0 timeout-duration:0xc000a0efa0 v:0xc000a1e780 vmodule:0xc000a1e820] [0xc000a0e960 0xc000a0ea00 0xc000a0eaa0 0xc000a0eb40 0xc000a0ebe0 0xc000a0ec80 0xc000a0ed20 0xc000a0edc0 0xc000a0ee60 0xc000a0ef00 0xc000a0efa0 0xc000a0f040 0xc000a0f0e0 0xc000a0f180 0xc000a0f220 0xc000a0f2c0 0xc000a0f360 0xc000a1e6e0 0xc000a1e780 0xc000a1e820 0xc000a1eb40] [0xc000a0f180 0xc000a0f360 0xc000a0f0e0 0xc000a0ec80 0xc000a1eb40 0xc000a0e960 0xc000a1e6e0 0xc000a0eaa0 0xc000a0f2c0 0xc000a0f220 0xc000a0edc0 0xc000a0ed20 0xc000a0eb40 0xc000a0ef00 0xc000a0f040 0xc000a0ee60 0xc000a0ea00 0xc000a0ebe0 0xc000a0efa0 0xc000a1e780 0xc000a1e820] map[104:0xc000a1eb40 118:0xc000a1e780] [] -1 0 0xc000a023c0 true 0xa51380 []} 2025-08-13T20:07:36.638917816+00:00 stderr F I0813 20:07:36.636984 1 cmd.go:93]
(*installerpod.InstallOptions)(0xc000a0a340)({ 2025-08-13T20:07:36.638917816+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:07:36.638917816+00:00 stderr F Revision: (string) (len=2) "12", 2025-08-13T20:07:36.638917816+00:00 stderr F NodeName: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver", 2025-08-13T20:07:36.638917816+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:07:36.638917816+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=11) "etcd-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=34) "localhost-recovery-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "encryption-config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "webhook-authenticator" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=15) "etcd-serving-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=18) "kubelet-serving-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=22) "sa-token-signing-certs", 
2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=14) "oauth-metadata", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=12) "cloud-config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "aggregator-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "service-network-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=33) "bound-service-account-signing-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=14) "kubelet-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=16) "node-kubeconfigs" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "user-serving-cert", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-000", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) 
"user-serving-cert-001", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-002", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-003", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-004", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-005", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-006", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-007", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-008", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-009" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=9) "client-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", 2025-08-13T20:07:36.638917816+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:07:36.638917816+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:07:36.638917816+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:07:36.638917816+00:00 stderr F 
StaticPodManifestsLockFile: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:07:36.638917816+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:07:36.638917816+00:00 stderr F }) 2025-08-13T20:07:36.647040639+00:00 stderr F I0813 20:07:36.644426 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:07:36.676170244+00:00 stderr F I0813 20:07:36.676103 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:07:36.729144953+00:00 stderr F I0813 20:07:36.729077 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting 2025-08-13T20:07:46.742419322+00:00 stderr F I0813 20:07:46.742303 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:08:16.746174226+00:00 stderr F I0813 20:08:16.745979 1 cmd.go:521] Getting installer pods for node crc 2025-08-13T20:08:16.759085647+00:00 stderr F I0813 20:08:16.758998 1 cmd.go:539] Latest installer revision for node crc is: 12 2025-08-13T20:08:16.759085647+00:00 stderr F I0813 20:08:16.759051 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:08:16.766542280+00:00 stderr F I0813 20:08:16.766427 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:08:16.766638183+00:00 stderr F I0813 20:08:16.766618 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12" ... 2025-08-13T20:08:16.767661233+00:00 stderr F I0813 20:08:16.767400 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12" ... 2025-08-13T20:08:16.767719274+00:00 stderr F I0813 20:08:16.767702 1 cmd.go:226] Getting secrets ... 
2025-08-13T20:08:16.777540006+00:00 stderr F I0813 20:08:16.777422 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-12 2025-08-13T20:08:16.787704677+00:00 stderr F I0813 20:08:16.784537 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-12 2025-08-13T20:08:16.790097306+00:00 stderr F I0813 20:08:16.790066 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-12 2025-08-13T20:08:16.792622618+00:00 stderr F I0813 20:08:16.792596 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-12: secrets "encryption-config-12" not found 2025-08-13T20:08:16.798056444+00:00 stderr F I0813 20:08:16.797997 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-12 2025-08-13T20:08:16.798153447+00:00 stderr F I0813 20:08:16.798139 1 cmd.go:239] Getting config maps ... 2025-08-13T20:08:16.807858675+00:00 stderr F I0813 20:08:16.803474 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-12 2025-08-13T20:08:16.830867405+00:00 stderr F I0813 20:08:16.828390 1 copy.go:60] Got configMap openshift-kube-apiserver/config-12 2025-08-13T20:08:16.837695091+00:00 stderr F I0813 20:08:16.836653 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-12 2025-08-13T20:08:16.953566323+00:00 stderr F I0813 20:08:16.953507 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-12 2025-08-13T20:08:17.152655191+00:00 stderr F I0813 20:08:17.152585 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-12 2025-08-13T20:08:17.356523426+00:00 stderr F I0813 20:08:17.356468 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-12 2025-08-13T20:08:17.560140594+00:00 stderr F I0813 20:08:17.558391 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-12 2025-08-13T20:08:17.752620232+00:00 stderr F I0813 20:08:17.752494 1 copy.go:60] Got configMap 
openshift-kube-apiserver/sa-token-signing-certs-12 2025-08-13T20:08:17.958951147+00:00 stderr F I0813 20:08:17.958857 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-12: configmaps "cloud-config-12" not found 2025-08-13T20:08:18.158022865+00:00 stderr F I0813 20:08:18.156610 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-12 2025-08-13T20:08:18.352269654+00:00 stderr F I0813 20:08:18.352156 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-12 2025-08-13T20:08:18.352369817+00:00 stderr F I0813 20:08:18.352353 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client" ... 2025-08-13T20:08:18.353113228+00:00 stderr F I0813 20:08:18.353009 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client/tls.crt" ... 2025-08-13T20:08:18.353340855+00:00 stderr F I0813 20:08:18.353319 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client/tls.key" ... 2025-08-13T20:08:18.353507260+00:00 stderr F I0813 20:08:18.353487 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token" ... 2025-08-13T20:08:18.353665544+00:00 stderr F I0813 20:08:18.353647 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:08:18.354413516+00:00 stderr F I0813 20:08:18.353881 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:08:18.354604501+00:00 stderr F I0813 20:08:18.354582 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/ca.crt" ... 
2025-08-13T20:08:18.354759505+00:00 stderr F I0813 20:08:18.354739 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/namespace" ...
2025-08-13T20:08:18.355074254+00:00 stderr F I0813 20:08:18.354982 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey" ...
2025-08-13T20:08:18.355209868+00:00 stderr F I0813 20:08:18.355190 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey/tls.crt" ...
2025-08-13T20:08:18.355372803+00:00 stderr F I0813 20:08:18.355353 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey/tls.key" ...
2025-08-13T20:08:18.356621279+00:00 stderr F I0813 20:08:18.356597 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/webhook-authenticator" ...
2025-08-13T20:08:18.356843525+00:00 stderr F I0813 20:08:18.356820 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/webhook-authenticator/kubeConfig" ...
2025-08-13T20:08:18.357036741+00:00 stderr F I0813 20:08:18.357015 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/bound-sa-token-signing-certs" ...
2025-08-13T20:08:18.357222456+00:00 stderr F I0813 20:08:18.357203 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ...
2025-08-13T20:08:18.357353840+00:00 stderr F I0813 20:08:18.357335 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/config" ...
2025-08-13T20:08:18.357935916+00:00 stderr F I0813 20:08:18.357489 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/config/config.yaml" ...
2025-08-13T20:08:18.359209643+00:00 stderr F I0813 20:08:18.359186 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/etcd-serving-ca" ...
2025-08-13T20:08:18.359448360+00:00 stderr F I0813 20:08:18.359410 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/etcd-serving-ca/ca-bundle.crt" ...
2025-08-13T20:08:18.359598054+00:00 stderr F I0813 20:08:18.359579 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-audit-policies" ...
2025-08-13T20:08:18.359723238+00:00 stderr F I0813 20:08:18.359705 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-audit-policies/policy.yaml" ...
2025-08-13T20:08:18.359975605+00:00 stderr F I0813 20:08:18.359952 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-cert-syncer-kubeconfig" ...
2025-08-13T20:08:18.360104889+00:00 stderr F I0813 20:08:18.360086 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:08:18.360250723+00:00 stderr F I0813 20:08:18.360232 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod" ...
2025-08-13T20:08:18.360398597+00:00 stderr F I0813 20:08:18.360378 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/forceRedeploymentReason" ...
2025-08-13T20:08:18.360577992+00:00 stderr F I0813 20:08:18.360555 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ...
2025-08-13T20:08:18.360767418+00:00 stderr F I0813 20:08:18.360716 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/pod.yaml" ...
2025-08-13T20:08:18.361041326+00:00 stderr F I0813 20:08:18.361018 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/version" ...
2025-08-13T20:08:18.361181070+00:00 stderr F I0813 20:08:18.361161 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kubelet-serving-ca" ...
2025-08-13T20:08:18.361530210+00:00 stderr F I0813 20:08:18.361505 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kubelet-serving-ca/ca-bundle.crt" ...
2025-08-13T20:08:18.361743146+00:00 stderr F I0813 20:08:18.361708 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs" ...
2025-08-13T20:08:18.361983433+00:00 stderr F I0813 20:08:18.361934 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-001.pub" ...
2025-08-13T20:08:18.362161068+00:00 stderr F I0813 20:08:18.362141 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-002.pub" ...
2025-08-13T20:08:18.362731884+00:00 stderr F I0813 20:08:18.362563 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-003.pub" ...
2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364128 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-server-ca" ...
2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364307 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ...
2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364445 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/oauth-metadata" ...
2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364556 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/oauth-metadata/oauthMetadata" ...
2025-08-13T20:08:18.367237633+00:00 stderr F I0813 20:08:18.367154 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ...
2025-08-13T20:08:18.367237633+00:00 stderr F I0813 20:08:18.367217 1 cmd.go:226] Getting secrets ...
2025-08-13T20:08:18.553964087+00:00 stderr F I0813 20:08:18.551863 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client
2025-08-13T20:08:18.755506325+00:00 stderr F I0813 20:08:18.755397 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key
2025-08-13T20:08:18.961047308+00:00 stderr F I0813 20:08:18.959393 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key
2025-08-13T20:08:19.157436069+00:00 stderr F I0813 20:08:19.157218 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key
2025-08-13T20:08:19.361055187+00:00 stderr F I0813 20:08:19.360239 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey
2025-08-13T20:08:19.554011299+00:00 stderr F I0813 20:08:19.552970 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey
2025-08-13T20:08:19.758824191+00:00 stderr F I0813 20:08:19.758068 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client
2025-08-13T20:08:19.952841864+00:00 stderr F I0813 20:08:19.952693 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey
2025-08-13T20:08:20.155019351+00:00 stderr F I0813 20:08:20.152283 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs
2025-08-13T20:08:20.355912051+00:00 stderr F I0813 20:08:20.354435 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey
2025-08-13T20:08:20.557744358+00:00 stderr F I0813 20:08:20.557637 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found
2025-08-13T20:08:20.752183112+00:00 stderr F I0813 20:08:20.752136 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found
2025-08-13T20:08:20.951144607+00:00 stderr F I0813 20:08:20.951043 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found
2025-08-13T20:08:21.153445757+00:00 stderr F I0813 20:08:21.153364 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found
2025-08-13T20:08:21.358305480+00:00 stderr F I0813 20:08:21.358147 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found
2025-08-13T20:08:21.552003393+00:00 stderr F I0813 20:08:21.550526 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found
2025-08-13T20:08:21.754680194+00:00 stderr F I0813 20:08:21.754592 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found
2025-08-13T20:08:21.952917598+00:00 stderr F I0813 20:08:21.952702 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found
2025-08-13T20:08:22.158927624+00:00 stderr F I0813 20:08:22.156637 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found
2025-08-13T20:08:22.368824392+00:00 stderr F I0813 20:08:22.365760 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found
2025-08-13T20:08:22.559923021+00:00 stderr F I0813 20:08:22.557941 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found
2025-08-13T20:08:22.559923021+00:00 stderr F I0813 20:08:22.558009 1 cmd.go:239] Getting config maps ...
2025-08-13T20:08:22.753218473+00:00 stderr F I0813 20:08:22.752977 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca
2025-08-13T20:08:22.958543640+00:00 stderr F I0813 20:08:22.958422 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig
2025-08-13T20:08:23.152462540+00:00 stderr F I0813 20:08:23.152406 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca
2025-08-13T20:08:23.355346357+00:00 stderr F I0813 20:08:23.355215 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig
2025-08-13T20:08:23.633038419+00:00 stderr F I0813 20:08:23.632400 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle
2025-08-13T20:08:23.633038419+00:00 stderr F I0813 20:08:23.632844 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ...
2025-08-13T20:08:23.633354288+00:00 stderr F I0813 20:08:23.633123 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ...
2025-08-13T20:08:23.634466239+00:00 stderr F I0813 20:08:23.634378 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ...
2025-08-13T20:08:23.634614964+00:00 stderr F I0813 20:08:23.634555 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ...
2025-08-13T20:08:23.634614964+00:00 stderr F I0813 20:08:23.634596 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ...
2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.635714 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ...
2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.636052 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ...
2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.636078 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ...
2025-08-13T20:08:23.643978932+00:00 stderr F I0813 20:08:23.643653 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ...
2025-08-13T20:08:23.643978932+00:00 stderr F I0813 20:08:23.643958 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ...
2025-08-13T20:08:23.644008123+00:00 stderr F I0813 20:08:23.643993 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ...
2025-08-13T20:08:23.646206346+00:00 stderr F I0813 20:08:23.646135 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ...
2025-08-13T20:08:23.646387351+00:00 stderr F I0813 20:08:23.646327 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ...
2025-08-13T20:08:23.646387351+00:00 stderr F I0813 20:08:23.646374 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ...
2025-08-13T20:08:23.646686290+00:00 stderr F I0813 20:08:23.646604 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ...
2025-08-13T20:08:23.653659940+00:00 stderr F I0813 20:08:23.653566 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ...
2025-08-13T20:08:23.653728932+00:00 stderr F I0813 20:08:23.653695 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ...
2025-08-13T20:08:23.656370107+00:00 stderr F I0813 20:08:23.656272 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ...
2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.658150 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ...
2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.658211 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ...
2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659295 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ...
2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659416 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ...
2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659443 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ...
2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662231 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ...
2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662485 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ...
2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662504 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ...
2025-08-13T20:08:23.664350806+00:00 stderr F I0813 20:08:23.664307 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ...
2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.664501 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ...
2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665514 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ...
2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665643 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ...
2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665658 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ...
2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665954 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ...
2025-08-13T20:08:23.666230140+00:00 stderr F I0813 20:08:23.666179 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ...
2025-08-13T20:08:23.666230140+00:00 stderr F I0813 20:08:23.666201 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ...
2025-08-13T20:08:23.666524959+00:00 stderr F I0813 20:08:23.666397 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ...
2025-08-13T20:08:23.666524959+00:00 stderr F I0813 20:08:23.666438 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ...
2025-08-13T20:08:23.675065014+00:00 stderr F I0813 20:08:23.674383 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ...
2025-08-13T20:08:23.727612830+00:00 stderr F I0813 20:08:23.678973 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ...
2025-08-13T20:08:23.728107704+00:00 stderr F I0813 20:08:23.728022 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ...
2025-08-13T20:08:23.728170316+00:00 stderr F I0813 20:08:23.728088 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ...
2025-08-13T20:08:23.730094851+00:00 stderr F I0813 20:08:23.730033 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ...
2025-08-13T20:08:23.730271116+00:00 stderr F I0813 20:08:23.730167 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ...
2025-08-13T20:08:23.730853573+00:00 stderr F I0813 20:08:23.730656 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-12 -n openshift-kube-apiserver
2025-08-13T20:08:23.756294352+00:00 stderr F I0813 20:08:23.754194 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ...
2025-08-13T20:08:23.756294352+00:00 stderr F I0813 20:08:23.754297 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key
2025-08-13T20:08:23.756294352+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"12"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=12","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
2025-08-13T20:08:23.767161554+00:00 stderr F I0813 20:08:23.767055 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/kube-apiserver-startup-monitor-pod.yaml" ...
2025-08-13T20:08:23.767422721+00:00 stderr F I0813 20:08:23.767311 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ...
2025-08-13T20:08:23.767422721+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"12"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=12","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
2025-08-13T20:08:23.767702189+00:00 stderr F I0813 20:08:23.767602 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key
2025-08-13T20:08:23.767702189+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"12"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"12"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50M
2025-08-13T20:08:23.767733110+00:00 stderr F i"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
2025-08-13T20:08:23.768399900+00:00 stderr F I0813 20:08:23.768332 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/kube-apiserver-pod.yaml" ...
2025-08-13T20:08:23.768798391+00:00 stderr F I0813 20:08:23.768742 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ...
2025-08-13T20:08:23.768926285+00:00 stderr F I0813 20:08:23.768828 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ...
2025-08-13T20:08:23.768926285+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"12"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. 
If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"12"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"
name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"re 2025-08-13T20:08:23.768952735+00:00 stderr F 
quests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000755000175000017500000000000015133657715032777 5ustar zuulzuul././@LongLink0000644000000000000000000000034100000000000011601 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log
2025-08-13T20:04:17.711019301+00:00 stderr F I0813 20:04:17.710517 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:04:17.731650389+00:00 stderr F I0813 20:04:17.731502 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:04:17.741661665+00:00 stderr F W0813 20:04:17.741251 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.742158909+00:00 stderr F W0813 20:04:17.741915 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.750083895+00:00 stderr F E0813 20:04:17.749981 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.750083895+00:00 stderr F E0813 20:04:17.749982 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.584086456+00:00 stderr F W0813 20:04:18.583537 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:18.584086456+00:00 stderr F E0813 20:04:18.583991 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.647018632+00:00 stderr F W0813 20:04:18.646930 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.647018632+00:00 stderr F E0813 20:04:18.646976 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.350624461+00:00 stderr F W0813 20:04:20.346192 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.350624461+00:00 stderr F E0813 20:04:20.346495 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.358060563+00:00 stderr F W0813 20:04:20.357964 1 reflector.go:539] 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.358214727+00:00 stderr F E0813 20:04:20.358149 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.939148562+00:00 stderr F W0813 20:04:23.938654 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.939262655+00:00 stderr F E0813 20:04:23.939234 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.261915080+00:00 stderr F W0813 20:04:25.260621 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.261915080+00:00 stderr F E0813 20:04:25.260694 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list 
*v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.435909533+00:00 stderr F W0813 20:04:36.434632 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.435909533+00:00 stderr F E0813 20:04:36.435360 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.668995607+00:00 stderr F W0813 20:04:36.660927 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.668995607+00:00 stderr F E0813 20:04:36.668906 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.885207502+00:00 stderr F W0813 20:04:50.884304 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:04:50.918401032+00:00 stderr F E0813 20:04:50.918193 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.839595078+00:00 stderr F W0813 20:04:52.839119 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.839595078+00:00 stderr F E0813 20:04:52.839544 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:17.762563304+00:00 stderr F F0813 20:05:17.755149 1 main.go:175] timed out waiting for FeatureGate detection
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log
2026-01-20T10:49:35.362723856+00:00 stderr F I0120 10:49:35.362205 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s.
Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:35.373079990+00:00 stderr F I0120 10:49:35.372849 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2026-01-20T10:49:35.406921631+00:00 stderr F I0120 10:49:35.406852 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:49:35.407454228+00:00 stderr F I0120 10:49:35.407417 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2026-01-20T10:49:35.409087597+00:00 stderr F I0120 10:49:35.408949 1 recorder_logging.go:44] &Event{ObjectMeta:{dummy.188c6acb622c5a43 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:FeatureGatesInitialized,Message:FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", 
"ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}},Source:EventSource{Component:,Host:,},FirstTimestamp:2026-01-20 10:49:35.407741507 +0000 UTC m=+0.383829293,LastTimestamp:2026-01-20 10:49:35.407741507 +0000 UTC m=+0.383829293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} 2026-01-20T10:49:35.409104478+00:00 stderr F I0120 10:49:35.409041 1 main.go:173] FeatureGates initialized: [AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI 
HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2026-01-20T10:49:35.409564112+00:00 stderr F I0120 10:49:35.409540 1 webhook.go:173] "msg"="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" 2026-01-20T10:49:35.409635574+00:00 stderr F I0120 10:49:35.409605 1 webhook.go:189] "msg"="Registering a validating webhook" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2026-01-20T10:49:35.409714256+00:00 stderr F I0120 10:49:35.409692 1 server.go:183] "msg"="Registering webhook" "logger"="controller-runtime.webhook" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2026-01-20T10:49:35.409791738+00:00 stderr F I0120 10:49:35.409776 1 main.go:228] "msg"="starting manager" "logger"="setup" 2026-01-20T10:49:35.411167691+00:00 stderr F I0120 10:49:35.409968 1 server.go:185] "msg"="Starting metrics server" 
"logger"="controller-runtime.metrics" 2026-01-20T10:49:35.432116018+00:00 stderr F I0120 10:49:35.420215 1 server.go:50] "msg"="starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 2026-01-20T10:49:35.432116018+00:00 stderr F I0120 10:49:35.420392 1 server.go:224] "msg"="Serving metrics server" "bindAddress"=":8080" "logger"="controller-runtime.metrics" "secure"=false 2026-01-20T10:49:35.432116018+00:00 stderr F I0120 10:49:35.420434 1 server.go:191] "msg"="Starting webhook server" "logger"="controller-runtime.webhook" 2026-01-20T10:49:35.432116018+00:00 stderr F I0120 10:49:35.420900 1 certwatcher.go:161] "msg"="Updated current TLS certificate" "logger"="controller-runtime.certwatcher" 2026-01-20T10:49:35.432116018+00:00 stderr F I0120 10:49:35.420942 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/control-plane-machine-set-leader... 2026-01-20T10:49:35.432116018+00:00 stderr F I0120 10:49:35.421344 1 certwatcher.go:115] "msg"="Starting certificate watcher" "logger"="controller-runtime.certwatcher" 2026-01-20T10:49:35.432116018+00:00 stderr F I0120 10:49:35.423647 1 server.go:242] "msg"="Serving webhook server" "host"="" "logger"="controller-runtime.webhook" "port"=9443 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.152682 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/control-plane-machine-set-leader 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.153723 1 recorder.go:104] "msg"="control-plane-machine-set-operator-649bd778b4-tt5tw_145ec87e-104f-4f0d-91b1-14d953a79a4c became leader" "logger"="events" "object"={"kind":"Lease","namespace":"openshift-machine-api","name":"control-plane-machine-set-leader","uid":"04d6c6f9-cb98-4c35-8cde-538add77d9ad","apiVersion":"coordination.k8s.io/v1","resourceVersion":"41567"} "reason"="LeaderElection" "type"="Normal" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154104 1 controller.go:178] "msg"="Starting 
EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ControlPlaneMachineSet" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154119 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1beta1.Machine" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154129 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Node" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154139 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ClusterOperator" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154147 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Infrastructure" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154155 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachineset" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154317 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1.ControlPlaneMachineSet" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154327 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1beta1.Machine" 2026-01-20T10:51:58.157101392+00:00 stderr F I0120 10:51:58.154334 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachinesetgenerator" 2026-01-20T10:51:58.181872284+00:00 stderr F I0120 10:51:58.181793 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:51:58.195225933+00:00 stderr F I0120 10:51:58.195140 1 reflector.go:351] Caches populated for *v1beta1.Machine from 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:51:58.195607413+00:00 stderr F I0120 10:51:58.195545 1 reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:51:58.196191370+00:00 stderr F I0120 10:51:58.196155 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:51:58.206714998+00:00 stderr F I0120 10:51:58.206653 1 reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2026-01-20T10:51:58.286477667+00:00 stderr F I0120 10:51:58.286422 1 watch_filters.go:179] reconcile triggered by infrastructure change 2026-01-20T10:51:58.297023076+00:00 stderr F I0120 10:51:58.296990 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachinesetgenerator" "worker count"=1 2026-01-20T10:51:58.302182323+00:00 stderr F I0120 10:51:58.302134 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachineset" "worker count"=1 2026-01-20T10:51:58.302200293+00:00 stderr F I0120 10:51:58.302191 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="ae08767c-a386-488d-8779-4825e3882777" 2026-01-20T10:51:58.302634925+00:00 stderr F I0120 10:51:58.302598 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="ae08767c-a386-488d-8779-4825e3882777" 2026-01-20T10:51:58.302655096+00:00 stderr F I0120 10:51:58.302646 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" 
"reconcileID"="ae08767c-a386-488d-8779-4825e3882777" 2026-01-20T10:57:10.362342187+00:00 stderr F E0120 10:57:10.361614 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:36.364292687+00:00 stderr F E0120 10:57:36.364204 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.log
2025-08-13T20:05:35.452608365+00:00 stderr F I0813 20:05:35.448700 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:05:35.502237766+00:00 stderr F I0813 20:05:35.502054 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:05:35.689005154+00:00 stderr F I0813 20:05:35.688074 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.735570748+00:00 stderr F I0813 20:05:35.734903 1 recorder_logging.go:44] &Event{ObjectMeta:{dummy.185b6c47dd59a765 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:FeatureGatesInitialized,Message:FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", 
"MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}},Source:EventSource{Component:,Host:,},FirstTimestamp:2025-08-13 20:05:35.703058277 +0000 UTC m=+1.630097491,LastTimestamp:2025-08-13 20:05:35.703058277 +0000 UTC m=+1.630097491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} 2025-08-13T20:05:35.735570748+00:00 stderr F I0813 20:05:35.735070 1 main.go:173] FeatureGates initialized: [AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles 
MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743278 1 webhook.go:173] "msg"="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743496 1 webhook.go:189] "msg"="Registering a validating webhook" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743655 1 server.go:183] "msg"="Registering webhook" "logger"="controller-runtime.webhook" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743716 1 main.go:228] "msg"="starting manager" "logger"="setup" 2025-08-13T20:05:35.808901448+00:00 stderr F I0813 20:05:35.807974 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.809405 1 server.go:185] "msg"="Starting metrics server" "logger"="controller-runtime.metrics" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.809583 1 
server.go:224] "msg"="Serving metrics server" "bindAddress"=":8080" "logger"="controller-runtime.metrics" "secure"=false 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.812095 1 server.go:50] "msg"="starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.812187 1 server.go:191] "msg"="Starting webhook server" "logger"="controller-runtime.webhook" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.813033 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/control-plane-machine-set-leader... 2025-08-13T20:05:35.820718106+00:00 stderr F I0813 20:05:35.820343 1 certwatcher.go:161] "msg"="Updated current TLS certificate" "logger"="controller-runtime.certwatcher" 2025-08-13T20:05:35.820718106+00:00 stderr F I0813 20:05:35.820531 1 server.go:242] "msg"="Serving webhook server" "host"="" "logger"="controller-runtime.webhook" "port"=9443 2025-08-13T20:05:35.820741327+00:00 stderr F I0813 20:05:35.820719 1 certwatcher.go:115] "msg"="Starting certificate watcher" "logger"="controller-runtime.certwatcher" 2025-08-13T20:08:08.092392625+00:00 stderr F I0813 20:08:08.090438 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/control-plane-machine-set-leader 2025-08-13T20:08:08.094648189+00:00 stderr F I0813 20:08:08.094304 1 recorder.go:104] "msg"="control-plane-machine-set-operator-649bd778b4-tt5tw_d3b7e6b8-9166-459d-a6c8-d99794b50433 became leader" "logger"="events" "object"={"kind":"Lease","namespace":"openshift-machine-api","name":"control-plane-machine-set-leader","uid":"04d6c6f9-cb98-4c35-8cde-538add77d9ad","apiVersion":"coordination.k8s.io/v1","resourceVersion":"32821"} "reason"="LeaderElection" "type"="Normal" 2025-08-13T20:08:08.102699080+00:00 stderr F I0813 20:08:08.102529 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ControlPlaneMachineSet" 
2025-08-13T20:08:08.102699080+00:00 stderr F I0813 20:08:08.102605 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103345 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1beta1.Machine" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103378 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103747 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1beta1.Machine" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103844 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Node" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103922 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ClusterOperator" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103944 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Infrastructure" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103964 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachineset" 2025-08-13T20:08:08.201578905+00:00 stderr F I0813 20:08:08.201332 1 reflector.go:351] Caches populated for *v1beta1.Machine from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.201701399+00:00 stderr F I0813 20:08:08.201676 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.209287466+00:00 stderr F I0813 20:08:08.208649 1 
reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.222844255+00:00 stderr F I0813 20:08:08.222401 1 reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.231268937+00:00 stderr F I0813 20:08:08.231171 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.282215947+00:00 stderr F I0813 20:08:08.280096 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.328518 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachineset" "worker count"=1 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.330982 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333281 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333431 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333528 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachinesetgenerator" "worker count"=1 2025-08-13T20:08:34.116134166+00:00 stderr F E0813 20:08:34.115389 1 leaderelection.go:332] error retrieving 
resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.119765093+00:00 stderr F E0813 20:09:00.118762 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:43.292954294+00:00 stderr F I0813 20:09:43.291277 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:50.954091206+00:00 stderr F I0813 20:09:50.953177 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:50.954306271+00:00 stderr F I0813 20:09:50.953764 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-08-13T20:09:50.954429834+00:00 stderr F I0813 20:09:50.954375 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 2025-08-13T20:09:50.954588299+00:00 stderr F I0813 20:09:50.954555 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 2025-08-13T20:09:50.954711542+00:00 stderr F I0813 20:09:50.954681 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" 
"reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 2025-08-13T20:09:52.483540696+00:00 stderr F I0813 20:09:52.483360 1 reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484079 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484208 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484248 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:58.740448876+00:00 stderr F I0813 20:09:58.739092 1 reflector.go:351] Caches populated for *v1beta1.Machine from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:10:25.144246163+00:00 stderr F I0813 20:10:25.143425 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:39.931663151+00:00 stderr F I0813 20:10:39.929697 1 reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:10:45.189277050+00:00 stderr F I0813 20:10:45.188639 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:37:36.184186726+00:00 stderr F I0813 20:37:36.183258 
1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:37:36.191103225+00:00 stderr F I0813 20:37:36.190959 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:37:36.195214644+00:00 stderr F I0813 20:37:36.195112 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:41:15.217404928+00:00 stderr F I0813 20:41:15.216565 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-08-13T20:41:15.217873911+00:00 stderr F I0813 20:41:15.217737 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:41:15.218243532+00:00 stderr F I0813 20:41:15.218176 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:41:15.218480519+00:00 stderr F I0813 20:41:15.218389 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:42:36.410736622+00:00 stderr F I0813 20:42:36.392587 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.410736622+00:00 stderr F I0813 20:42:36.387065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.423466448+00:00 stderr F I0813 20:42:36.393743 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.423466448+00:00 stderr F I0813 20:42:36.387287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.435931618+00:00 stderr F I0813 20:42:36.432591 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.435931618+00:00 stderr F I0813 20:42:36.386855 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437936926+00:00 stderr F I0813 20:42:36.393763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:39.121052261+00:00 stderr F I0813 20:42:39.120327 1 internal.go:516] "msg"="Stopping and waiting for non leader election runnables" 2025-08-13T20:42:39.121052261+00:00 stderr F I0813 20:42:39.121041 1 internal.go:520] "msg"="Stopping and waiting for leader election runnables" 2025-08-13T20:42:39.123198083+00:00 stderr F I0813 20:42:39.123122 1 controller.go:240] "msg"="Shutdown signal received, waiting for all workers to finish" "controller"="controlplanemachineset" 2025-08-13T20:42:39.123198083+00:00 stderr F I0813 20:42:39.123186 1 controller.go:242] "msg"="All workers finished" "controller"="controlplanemachineset" 2025-08-13T20:42:39.124597463+00:00 stderr F I0813 20:42:39.124511 1 controller.go:240] "msg"="Shutdown signal received, waiting for all workers to finish" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:42:39.124651265+00:00 stderr F I0813 20:42:39.124614 1 controller.go:242] "msg"="All workers finished" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:42:39.124716827+00:00 stderr F 
I0813 20:42:39.124673 1 internal.go:526] "msg"="Stopping and waiting for caches" 2025-08-13T20:42:39.129012041+00:00 stderr F I0813 20:42:39.128957 1 internal.go:530] "msg"="Stopping and waiting for webhooks" 2025-08-13T20:42:39.129362521+00:00 stderr F I0813 20:42:39.129333 1 server.go:249] "msg"="Shutting down webhook server with timeout of 1 minute" "logger"="controller-runtime.webhook" 2025-08-13T20:42:39.129548196+00:00 stderr F I0813 20:42:39.129529 1 internal.go:533] "msg"="Stopping and waiting for HTTP servers" 2025-08-13T20:42:39.129749002+00:00 stderr F I0813 20:42:39.129657 1 server.go:231] "msg"="Shutting down metrics server with timeout of 1 minute" "logger"="controller-runtime.metrics" 2025-08-13T20:42:39.130267357+00:00 stderr F I0813 20:42:39.130159 1 server.go:43] "msg"="shutting down server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 2025-08-13T20:42:39.130490133+00:00 stderr F I0813 20:42:39.130410 1 internal.go:537] "msg"="Wait completed, proceeding to shutdown the manager" 2025-08-13T20:42:39.136122996+00:00 stderr F E0813 20:42:39.135999 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.log
2026-01-20T10:57:21.382031026+00:00 stdout F Fixing audit permissions ... 2026-01-20T10:57:21.390329226+00:00 stdout F Acquiring exclusive lock /var/log/kube-apiserver/.lock ... 2026-01-20T10:57:21.391728112+00:00 stdout F flock: getting lock took 0.000006 seconds
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.log
2026-01-20T10:57:24.018541980+00:00 stderr F W0120 10:57:24.018387 1 cmd.go:245] Using insecure, self-signed certificates
2026-01-20T10:57:24.018762945+00:00 stderr F I0120 10:57:24.018732 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768906644 cert, and key in /tmp/serving-cert-445666046/serving-signer.crt, /tmp/serving-cert-445666046/serving-signer.key 2026-01-20T10:57:24.319376305+00:00 stderr F I0120 10:57:24.319252 1 observer_polling.go:159] Starting file observer 2026-01-20T10:57:29.333049634+00:00 stderr F I0120 10:57:29.332973 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2026-01-20T10:57:29.334345818+00:00 stderr F I0120 10:57:29.334300 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-445666046/tls.crt::/tmp/serving-cert-445666046/tls.key" 2026-01-20T10:57:29.570080433+00:00 stderr F I0120 10:57:29.569991 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:57:29.572299740+00:00 stderr F I0120 10:57:29.572257 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:57:29.572299740+00:00 stderr F I0120 10:57:29.572274 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:57:29.572299740+00:00 stderr F I0120 10:57:29.572288 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:57:29.572299740+00:00 stderr F I0120 10:57:29.572293 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2026-01-20T10:57:29.575418733+00:00 stderr F I0120 10:57:29.575384 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:57:29.575418733+00:00 stderr F W0120 10:57:29.575413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:57:29.575436133+00:00 stderr F W0120 10:57:29.575420 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2026-01-20T10:57:29.575747692+00:00 stderr F I0120 10:57:29.575721 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:57:29.579302306+00:00 stderr F I0120 10:57:29.579268 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:57:29.579302306+00:00 stderr F I0120 10:57:29.579296 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:57:29.579371378+00:00 stderr F I0120 10:57:29.579340 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:57:29.579371378+00:00 stderr F I0120 10:57:29.579359 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:57:29.579447200+00:00 stderr F I0120 10:57:29.579412 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:57:29.579459390+00:00 stderr F I0120 10:57:29.579450 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:57:29.579824140+00:00 stderr F I0120 10:57:29.579790 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-445666046/tls.crt::/tmp/serving-cert-445666046/tls.key" 2026-01-20T10:57:29.580009974+00:00 stderr F I0120 10:57:29.579975 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-445666046/tls.crt::/tmp/serving-cert-445666046/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1768906644\" (2026-01-20 10:57:23 +0000 UTC to 2026-02-19 10:57:24 +0000 UTC (now=2026-01-20 10:57:29.579947953 +0000 UTC))" 
2026-01-20T10:57:29.580361524+00:00 stderr F I0120 10:57:29.580327 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906649\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906649\" (2026-01-20 09:57:29 +0000 UTC to 2027-01-20 09:57:29 +0000 UTC (now=2026-01-20 10:57:29.580298212 +0000 UTC))" 2026-01-20T10:57:29.582101250+00:00 stderr F I0120 10:57:29.582018 1 secure_serving.go:213] Serving securely on [::]:17697 2026-01-20T10:57:29.582101250+00:00 stderr F I0120 10:57:29.582090 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:57:29.582245784+00:00 stderr F I0120 10:57:29.582223 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.582954882+00:00 stderr F I0120 10:57:29.582922 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2026-01-20T10:57:29.583290661+00:00 stderr F I0120 10:57:29.583260 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:57:29.583969370+00:00 stderr F I0120 10:57:29.583931 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.585927481+00:00 stderr F I0120 10:57:29.585872 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.679481776+00:00 stderr F I0120 10:57:29.679405 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:57:29.679481776+00:00 stderr F I0120 10:57:29.679425 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:57:29.679590498+00:00 stderr F I0120 10:57:29.679544 1 shared_informer.go:318] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:57:29.679977658+00:00 stderr F I0120 10:57:29.679942 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:57:29.679907237 +0000 UTC))" 2026-01-20T10:57:29.680014949+00:00 stderr F I0120 10:57:29.679997 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:57:29.679957068 +0000 UTC))" 2026-01-20T10:57:29.680024280+00:00 stderr F I0120 10:57:29.680020 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:57:29.680005339 +0000 UTC))" 2026-01-20T10:57:29.680048210+00:00 stderr F I0120 10:57:29.680036 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC 
(now=2026-01-20 10:57:29.68002468 +0000 UTC))" 2026-01-20T10:57:29.680104242+00:00 stderr F I0120 10:57:29.680090 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:29.68004347 +0000 UTC))" 2026-01-20T10:57:29.680118162+00:00 stderr F I0120 10:57:29.680112 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:29.680099212 +0000 UTC))" 2026-01-20T10:57:29.680138993+00:00 stderr F I0120 10:57:29.680127 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:29.680116172 +0000 UTC))" 2026-01-20T10:57:29.680158113+00:00 stderr F I0120 10:57:29.680145 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 
19:49:26 +0000 UTC (now=2026-01-20 10:57:29.680134103 +0000 UTC))" 2026-01-20T10:57:29.680179414+00:00 stderr F I0120 10:57:29.680168 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:57:29.680152003 +0000 UTC))" 2026-01-20T10:57:29.680226645+00:00 stderr F I0120 10:57:29.680208 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:57:29.680197624 +0000 UTC))" 2026-01-20T10:57:29.680249396+00:00 stderr F I0120 10:57:29.680234 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:57:29.680220915 +0000 UTC))" 2026-01-20T10:57:29.680257816+00:00 stderr F I0120 10:57:29.680252 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 
2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:29.680240515 +0000 UTC))" 2026-01-20T10:57:29.680554654+00:00 stderr F I0120 10:57:29.680531 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-445666046/tls.crt::/tmp/serving-cert-445666046/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1768906644\" (2026-01-20 10:57:23 +0000 UTC to 2026-02-19 10:57:24 +0000 UTC (now=2026-01-20 10:57:29.680518113 +0000 UTC))" 2026-01-20T10:57:29.680807250+00:00 stderr F I0120 10:57:29.680789 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906649\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906649\" (2026-01-20 09:57:29 +0000 UTC to 2027-01-20 09:57:29 +0000 UTC (now=2026-01-20 10:57:29.680764599 +0000 UTC))" 2026-01-20T10:57:29.870365013+00:00 stderr F I0120 10:57:29.870285 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.885379690+00:00 stderr F I0120 10:57:29.885273 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2026-01-20T10:57:29.885379690+00:00 stderr F I0120 10:57:29.885344 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2026-01-20T10:57:29.885454832+00:00 stderr F I0120 10:57:29.885429 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2026-01-20T10:57:29.885454832+00:00 stderr F I0120 10:57:29.885441 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2026-01-20T10:57:29.885454832+00:00 stderr F I0120 10:57:29.885445 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2026-01-20T10:57:29.885499253+00:00 stderr F I0120 10:57:29.885477 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 
2026-01-20T10:57:29.885975916+00:00 stderr F I0120 10:57:29.885942 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2026-01-20T10:57:29.885975916+00:00 stderr F I0120 10:57:29.885964 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2026-01-20T10:57:29.885975916+00:00 stderr F I0120 10:57:29.885972 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2026-01-20T10:57:29.887728712+00:00 stderr F I0120 10:57:29.887682 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.893652829+00:00 stderr F I0120 10:57:29.893614 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.986667789+00:00 stderr F I0120 10:57:29.986579 1 base_controller.go:73] Caches are synced for check-endpoints 2026-01-20T10:57:29.986667789+00:00 stderr F I0120 10:57:29.986615 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.log <==
2026-01-20T10:57:23.742298984+00:00 stderr F I0120 10:57:23.742003 1 readyz.go:111] Listening on 0.0.0.0:6080

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.log <==
2026-01-20T10:57:23.151398887+00:00 stderr F W0120 10:57:23.151251 1 cmd.go:245] Using insecure, self-signed certificates
2026-01-20T10:57:23.151577102+00:00 stderr F I0120 10:57:23.151550 1 crypto.go:601] Generating new CA for cert-regeneration-controller-signer@1768906643 cert, and key in /tmp/serving-cert-3561638804/serving-signer.crt, /tmp/serving-cert-3561638804/serving-signer.key 2026-01-20T10:57:23.538516384+00:00 stderr F I0120 10:57:23.538431 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:57:23.541208516+00:00 stderr F I0120 10:57:23.541017 1 observer_polling.go:159] Starting file observer 2026-01-20T10:57:29.041830172+00:00 stderr F I0120 10:57:29.041755 1 builder.go:299] cert-regeneration-controller version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2026-01-20T10:57:29.046307631+00:00 stderr F I0120 10:57:29.046235 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:57:29.046640750+00:00 stderr F I0120 10:57:29.046613 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver/cert-regeneration-controller-lock... 
2026-01-20T10:57:29.054674512+00:00 stderr F I0120 10:57:29.054620 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver/cert-regeneration-controller-lock 2026-01-20T10:57:29.054989191+00:00 stderr F I0120 10:57:29.054882 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver", Name:"cert-regeneration-controller-lock", UID:"eb250dab-ea81-4164-a3be-f2834c870dea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"43302", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_1d22a29f-607b-4d65-8db7-d2177e7f3810 became leader 2026-01-20T10:57:29.066354511+00:00 stderr F I0120 10:57:29.066276 1 cabundlesyncer.go:82] Starting CA bundle controller 2026-01-20T10:57:29.066354511+00:00 stderr F I0120 10:57:29.066322 1 shared_informer.go:311] Waiting for caches to sync for CABundleController 2026-01-20T10:57:29.066799673+00:00 stderr F I0120 10:57:29.066763 1 certrotationcontroller.go:886] Starting CertRotation 2026-01-20T10:57:29.066799673+00:00 stderr F I0120 10:57:29.066794 1 certrotationcontroller.go:851] Waiting for CertRotation 2026-01-20T10:57:29.072240977+00:00 stderr F I0120 10:57:29.072180 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.072565815+00:00 stderr F I0120 10:57:29.072522 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.073811818+00:00 stderr F I0120 10:57:29.073777 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.074460576+00:00 stderr F I0120 10:57:29.074418 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.074613079+00:00 stderr F I0120 10:57:29.074567 1 reflector.go:351] Caches populated for *v1.Secret from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.075439221+00:00 stderr F I0120 10:57:29.075406 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.096957650+00:00 stderr F I0120 10:57:29.096894 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.149218112+00:00 stderr F I0120 10:57:29.149144 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.152643723+00:00 stderr F I0120 10:57:29.152605 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.168291576+00:00 stderr F I0120 10:57:29.168211 1 shared_informer.go:318] Caches are synced for CABundleController 2026-01-20T10:57:29.168345028+00:00 stderr F I0120 10:57:29.154955 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.168538963+00:00 stderr F I0120 10:57:29.168448 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2026-01-20T10:57:29.168588654+00:00 stderr F I0120 10:57:29.168571 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2026-01-20T10:57:29.168618225+00:00 stderr F I0120 10:57:29.168607 1 internalloadbalancer.go:27] syncing internal loadbalancer hostnames: api-int.crc.testing 2026-01-20T10:57:29.168647756+00:00 stderr F I0120 10:57:29.168637 1 certrotationcontroller.go:869] Finished waiting for CertRotation 2026-01-20T10:57:29.168866721+00:00 stderr F I0120 10:57:29.168705 1 base_controller.go:67] Waiting for 
caches to sync for CertRotationController 2026-01-20T10:57:29.168916513+00:00 stderr F I0120 10:57:29.168904 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.168950604+00:00 stderr F I0120 10:57:29.168938 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:57:29.169015645+00:00 stderr F I0120 10:57:29.169001 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.169048856+00:00 stderr F I0120 10:57:29.169037 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.169097637+00:00 stderr F I0120 10:57:29.169084 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:57:29.169147119+00:00 stderr F I0120 10:57:29.169123 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2026-01-20T10:57:29.169159759+00:00 stderr F I0120 10:57:29.169152 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.169170559+00:00 stderr F I0120 10:57:29.169159 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.169170559+00:00 stderr F I0120 10:57:29.169166 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:57:29.169206240+00:00 stderr F I0120 10:57:29.169184 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.169206240+00:00 stderr F I0120 10:57:29.169193 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.169206240+00:00 stderr F I0120 10:57:29.169198 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2026-01-20T10:57:29.169219801+00:00 stderr F I0120 10:57:29.169212 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.169219801+00:00 stderr F I0120 10:57:29.169216 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.169229561+00:00 stderr F I0120 10:57:29.169220 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:57:29.169263962+00:00 stderr F I0120 10:57:29.169244 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.169263962+00:00 stderr F I0120 10:57:29.169252 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.169263962+00:00 stderr F I0120 10:57:29.169256 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:57:29.172129398+00:00 stderr F I0120 10:57:29.169015 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.172200250+00:00 stderr F I0120 10:57:29.172163 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.172200250+00:00 stderr F I0120 10:57:29.172194 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.172211161+00:00 stderr F I0120 10:57:29.172201 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2026-01-20T10:57:29.172223021+00:00 stderr F I0120 10:57:29.169083 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2026-01-20T10:57:29.172234411+00:00 stderr F I0120 10:57:29.172173 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.172243452+00:00 stderr F I0120 10:57:29.172233 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:57:29.173716870+00:00 stderr F I0120 10:57:29.172142 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.173716870+00:00 stderr F I0120 10:57:29.173704 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.173716870+00:00 stderr F I0120 10:57:29.173710 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:57:29.174236413+00:00 stderr F I0120 10:57:29.172151 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.174236413+00:00 stderr F I0120 10:57:29.174219 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.174236413+00:00 stderr F I0120 10:57:29.174229 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:57:29.174259034+00:00 stderr F I0120 10:57:29.172124 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2026-01-20T10:57:29.174259034+00:00 stderr F I0120 10:57:29.174251 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:57:29.174259034+00:00 stderr F I0120 10:57:29.174255 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2026-01-20T10:57:29.175575119+00:00 stderr F I0120 10:57:29.175544 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2026-01-20T10:57:29.175575119+00:00 stderr F I0120 10:57:29.175557 1 base_controller.go:73] Caches are synced for CertRotationController
2026-01-20T10:57:29.175575119+00:00 stderr F I0120 10:57:29.175562 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.log <==
2026-01-20T10:57:22.812287739+00:00 stderr F I0120 10:57:22.812170 1 base_controller.go:67] Waiting for caches to sync for CertSyncController
2026-01-20T10:57:22.812444103+00:00 stderr F I0120 10:57:22.812432 1 observer_polling.go:159] Starting file observer
2026-01-20T10:57:29.128634918+00:00 stderr F I0120 10:57:29.128541 1 base_controller.go:73] Caches are synced for CertSyncController
2026-01-20T10:57:29.128634918+00:00 stderr F I0120 10:57:29.128613 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...
2026-01-20T10:57:29.128770891+00:00 stderr F I0120 10:57:29.128717 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]
2026-01-20T10:57:29.129317926+00:00 stderr F I0120 10:57:29.129252 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.log <==
2026-01-20T10:57:22.349975613+00:00 stdout F flock: getting lock took 0.000008 seconds
2026-01-20T10:57:22.350321132+00:00 stdout F Copying system trust bundle ... 2026-01-20T10:57:22.367111416+00:00 stderr F I0120 10:57:22.366965 1 loader.go:395] Config loaded from file: /etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig 2026-01-20T10:57:22.367575668+00:00 stderr F Copying termination logs to "/var/log/kube-apiserver/termination.log" 2026-01-20T10:57:22.367654231+00:00 stderr F I0120 10:57:22.367623 1 main.go:161] Touching termination lock file "/var/log/kube-apiserver/.terminating" 2026-01-20T10:57:22.368633057+00:00 stderr F I0120 10:57:22.368527 1 main.go:219] Launching sub-process "/usr/bin/hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=192.168.126.11 -v=2 --permit-address-sharing" 2026-01-20T10:57:22.470625304+00:00 stderr F Flag --openshift-config has been deprecated, to be removed 2026-01-20T10:57:22.470772248+00:00 stderr F I0120 10:57:22.470645 16 flags.go:64] FLAG: --admission-control="[]" 2026-01-20T10:57:22.470772248+00:00 stderr F I0120 10:57:22.470741 16 flags.go:64] FLAG: --admission-control-config-file="" 2026-01-20T10:57:22.470772248+00:00 stderr F I0120 10:57:22.470748 16 flags.go:64] FLAG: --advertise-address="192.168.126.11" 2026-01-20T10:57:22.470772248+00:00 stderr F I0120 10:57:22.470754 16 flags.go:64] FLAG: --aggregator-reject-forwarding-redirect="true" 2026-01-20T10:57:22.470772248+00:00 stderr F I0120 10:57:22.470760 16 flags.go:64] FLAG: --allow-metric-labels="[]" 2026-01-20T10:57:22.470784738+00:00 stderr F I0120 10:57:22.470769 16 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2026-01-20T10:57:22.470784738+00:00 stderr F I0120 10:57:22.470774 16 flags.go:64] FLAG: --allow-privileged="false" 2026-01-20T10:57:22.470784738+00:00 stderr F I0120 10:57:22.470778 16 flags.go:64] FLAG: --anonymous-auth="true" 2026-01-20T10:57:22.470814349+00:00 stderr F I0120 10:57:22.470783 16 flags.go:64] 
FLAG: --api-audiences="[]" 2026-01-20T10:57:22.470814349+00:00 stderr F I0120 10:57:22.470793 16 flags.go:64] FLAG: --apiserver-count="1" 2026-01-20T10:57:22.470814349+00:00 stderr F I0120 10:57:22.470798 16 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2026-01-20T10:57:22.470814349+00:00 stderr F I0120 10:57:22.470803 16 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2026-01-20T10:57:22.470838889+00:00 stderr F I0120 10:57:22.470807 16 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2026-01-20T10:57:22.470838889+00:00 stderr F I0120 10:57:22.470817 16 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2026-01-20T10:57:22.470838889+00:00 stderr F I0120 10:57:22.470821 16 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2026-01-20T10:57:22.470838889+00:00 stderr F I0120 10:57:22.470825 16 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2026-01-20T10:57:22.470838889+00:00 stderr F I0120 10:57:22.470831 16 flags.go:64] FLAG: --audit-log-compress="false" 2026-01-20T10:57:22.470865880+00:00 stderr F I0120 10:57:22.470835 16 flags.go:64] FLAG: --audit-log-format="json" 2026-01-20T10:57:22.470865880+00:00 stderr F I0120 10:57:22.470843 16 flags.go:64] FLAG: --audit-log-maxage="0" 2026-01-20T10:57:22.470865880+00:00 stderr F I0120 10:57:22.470847 16 flags.go:64] FLAG: --audit-log-maxbackup="0" 2026-01-20T10:57:22.470865880+00:00 stderr F I0120 10:57:22.470851 16 flags.go:64] FLAG: --audit-log-maxsize="0" 2026-01-20T10:57:22.470865880+00:00 stderr F I0120 10:57:22.470855 16 flags.go:64] FLAG: --audit-log-mode="blocking" 2026-01-20T10:57:22.470865880+00:00 stderr F I0120 10:57:22.470860 16 flags.go:64] FLAG: --audit-log-path="" 2026-01-20T10:57:22.470875030+00:00 stderr F I0120 10:57:22.470863 16 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2026-01-20T10:57:22.470896991+00:00 stderr F I0120 10:57:22.470868 16 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2026-01-20T10:57:22.470896991+00:00 stderr F I0120 
10:57:22.470877 16 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2026-01-20T10:57:22.470896991+00:00 stderr F I0120 10:57:22.470880 16 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2026-01-20T10:57:22.470896991+00:00 stderr F I0120 10:57:22.470883 16 flags.go:64] FLAG: --audit-policy-file="" 2026-01-20T10:57:22.470896991+00:00 stderr F I0120 10:57:22.470886 16 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2026-01-20T10:57:22.470896991+00:00 stderr F I0120 10:57:22.470889 16 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2026-01-20T10:57:22.470922442+00:00 stderr F I0120 10:57:22.470893 16 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2026-01-20T10:57:22.470922442+00:00 stderr F I0120 10:57:22.470898 16 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2026-01-20T10:57:22.470922442+00:00 stderr F I0120 10:57:22.470901 16 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2026-01-20T10:57:22.470922442+00:00 stderr F I0120 10:57:22.470905 16 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2026-01-20T10:57:22.470922442+00:00 stderr F I0120 10:57:22.470908 16 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2026-01-20T10:57:22.470922442+00:00 stderr F I0120 10:57:22.470912 16 flags.go:64] FLAG: --audit-webhook-config-file="" 2026-01-20T10:57:22.470922442+00:00 stderr F I0120 10:57:22.470915 16 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2026-01-20T10:57:22.470931182+00:00 stderr F I0120 10:57:22.470919 16 flags.go:64] FLAG: --audit-webhook-mode="batch" 2026-01-20T10:57:22.470931182+00:00 stderr F I0120 10:57:22.470922 16 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2026-01-20T10:57:22.470938092+00:00 stderr F I0120 10:57:22.470925 16 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2026-01-20T10:57:22.470938092+00:00 stderr F I0120 10:57:22.470929 16 flags.go:64] FLAG: 
--audit-webhook-truncate-max-event-size="102400" 2026-01-20T10:57:22.470938092+00:00 stderr F I0120 10:57:22.470932 16 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2026-01-20T10:57:22.470945602+00:00 stderr F I0120 10:57:22.470936 16 flags.go:64] FLAG: --authentication-config="" 2026-01-20T10:57:22.470945602+00:00 stderr F I0120 10:57:22.470939 16 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" 2026-01-20T10:57:22.470952552+00:00 stderr F I0120 10:57:22.470942 16 flags.go:64] FLAG: --authentication-token-webhook-config-file="" 2026-01-20T10:57:22.470952552+00:00 stderr F I0120 10:57:22.470945 16 flags.go:64] FLAG: --authentication-token-webhook-version="v1beta1" 2026-01-20T10:57:22.470959543+00:00 stderr F I0120 10:57:22.470949 16 flags.go:64] FLAG: --authorization-config="" 2026-01-20T10:57:22.470959543+00:00 stderr F I0120 10:57:22.470952 16 flags.go:64] FLAG: --authorization-mode="[]" 2026-01-20T10:57:22.470966553+00:00 stderr F I0120 10:57:22.470957 16 flags.go:64] FLAG: --authorization-policy-file="" 2026-01-20T10:57:22.470966553+00:00 stderr F I0120 10:57:22.470961 16 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" 2026-01-20T10:57:22.470973373+00:00 stderr F I0120 10:57:22.470965 16 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" 2026-01-20T10:57:22.470973373+00:00 stderr F I0120 10:57:22.470968 16 flags.go:64] FLAG: --authorization-webhook-config-file="" 2026-01-20T10:57:22.470980183+00:00 stderr F I0120 10:57:22.470971 16 flags.go:64] FLAG: --authorization-webhook-version="v1beta1" 2026-01-20T10:57:22.470986743+00:00 stderr F I0120 10:57:22.470975 16 flags.go:64] FLAG: --bind-address="0.0.0.0" 2026-01-20T10:57:22.470986743+00:00 stderr F I0120 10:57:22.470978 16 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2026-01-20T10:57:22.470986743+00:00 stderr F I0120 10:57:22.470982 16 flags.go:64] FLAG: --client-ca-file="" 2026-01-20T10:57:22.470998244+00:00 stderr F I0120 
10:57:22.470985 16 flags.go:64] FLAG: --cloud-config="" 2026-01-20T10:57:22.470998244+00:00 stderr F I0120 10:57:22.470988 16 flags.go:64] FLAG: --cloud-provider="" 2026-01-20T10:57:22.471029324+00:00 stderr F I0120 10:57:22.470992 16 flags.go:64] FLAG: --cloud-provider-gce-l7lb-src-cidrs="130.211.0.0/22,35.191.0.0/16" 2026-01-20T10:57:22.471029324+00:00 stderr F I0120 10:57:22.471008 16 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2026-01-20T10:57:22.471029324+00:00 stderr F I0120 10:57:22.471013 16 flags.go:64] FLAG: --contention-profiling="false" 2026-01-20T10:57:22.471029324+00:00 stderr F I0120 10:57:22.471016 16 flags.go:64] FLAG: --cors-allowed-origins="[]" 2026-01-20T10:57:22.471029324+00:00 stderr F I0120 10:57:22.471020 16 flags.go:64] FLAG: --debug-socket-path="" 2026-01-20T10:57:22.471046755+00:00 stderr F I0120 10:57:22.471023 16 flags.go:64] FLAG: --default-not-ready-toleration-seconds="300" 2026-01-20T10:57:22.471046755+00:00 stderr F I0120 10:57:22.471027 16 flags.go:64] FLAG: --default-unreachable-toleration-seconds="300" 2026-01-20T10:57:22.471046755+00:00 stderr F I0120 10:57:22.471031 16 flags.go:64] FLAG: --default-watch-cache-size="100" 2026-01-20T10:57:22.471046755+00:00 stderr F I0120 10:57:22.471034 16 flags.go:64] FLAG: --delete-collection-workers="1" 2026-01-20T10:57:22.471073165+00:00 stderr F I0120 10:57:22.471038 16 flags.go:64] FLAG: --disable-admission-plugins="[]" 2026-01-20T10:57:22.471073165+00:00 stderr F I0120 10:57:22.471045 16 flags.go:64] FLAG: --disabled-metrics="[]" 2026-01-20T10:57:22.471073165+00:00 stderr F I0120 10:57:22.471049 16 flags.go:64] FLAG: --egress-selector-config-file="" 2026-01-20T10:57:22.471109396+00:00 stderr F I0120 10:57:22.471053 16 flags.go:64] FLAG: --enable-admission-plugins="[]" 2026-01-20T10:57:22.471109396+00:00 stderr F I0120 10:57:22.471084 16 flags.go:64] FLAG: --enable-aggregator-routing="false" 
2026-01-20T10:57:22.471109396+00:00 stderr F I0120 10:57:22.471088 16 flags.go:64] FLAG: --enable-bootstrap-token-auth="false" 2026-01-20T10:57:22.471109396+00:00 stderr F I0120 10:57:22.471092 16 flags.go:64] FLAG: --enable-garbage-collector="true" 2026-01-20T10:57:22.471109396+00:00 stderr F I0120 10:57:22.471095 16 flags.go:64] FLAG: --enable-logs-handler="true" 2026-01-20T10:57:22.471109396+00:00 stderr F I0120 10:57:22.471098 16 flags.go:64] FLAG: --enable-priority-and-fairness="true" 2026-01-20T10:57:22.471109396+00:00 stderr F I0120 10:57:22.471101 16 flags.go:64] FLAG: --encryption-provider-config="" 2026-01-20T10:57:22.471109396+00:00 stderr F I0120 10:57:22.471104 16 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2026-01-20T10:57:22.471119657+00:00 stderr F I0120 10:57:22.471107 16 flags.go:64] FLAG: --endpoint-reconciler-type="lease" 2026-01-20T10:57:22.471119657+00:00 stderr F I0120 10:57:22.471111 16 flags.go:64] FLAG: --etcd-cafile="" 2026-01-20T10:57:22.471119657+00:00 stderr F I0120 10:57:22.471114 16 flags.go:64] FLAG: --etcd-certfile="" 2026-01-20T10:57:22.471127187+00:00 stderr F I0120 10:57:22.471117 16 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2026-01-20T10:57:22.471127187+00:00 stderr F I0120 10:57:22.471121 16 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2026-01-20T10:57:22.471134007+00:00 stderr F I0120 10:57:22.471125 16 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2026-01-20T10:57:22.471140657+00:00 stderr F I0120 10:57:22.471129 16 flags.go:64] FLAG: --etcd-healthcheck-timeout="2s" 2026-01-20T10:57:22.471140657+00:00 stderr F I0120 10:57:22.471132 16 flags.go:64] FLAG: --etcd-keyfile="" 2026-01-20T10:57:22.471147457+00:00 stderr F I0120 10:57:22.471136 16 flags.go:64] FLAG: --etcd-prefix="/registry" 2026-01-20T10:57:22.471147457+00:00 stderr F I0120 10:57:22.471139 16 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" 2026-01-20T10:57:22.471160388+00:00 stderr F I0120 
10:57:22.471142 16 flags.go:64] FLAG: --etcd-servers="[]" 2026-01-20T10:57:22.471160388+00:00 stderr F I0120 10:57:22.471146 16 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2026-01-20T10:57:22.471160388+00:00 stderr F I0120 10:57:22.471152 16 flags.go:64] FLAG: --event-ttl="1h0m0s" 2026-01-20T10:57:22.471160388+00:00 stderr F I0120 10:57:22.471155 16 flags.go:64] FLAG: --external-hostname="" 2026-01-20T10:57:22.471185718+00:00 stderr F I0120 10:57:22.471158 16 flags.go:64] FLAG: --feature-gates="" 2026-01-20T10:57:22.471185718+00:00 stderr F I0120 10:57:22.471166 16 flags.go:64] FLAG: --goaway-chance="0" 2026-01-20T10:57:22.471185718+00:00 stderr F I0120 10:57:22.471171 16 flags.go:64] FLAG: --help="false" 2026-01-20T10:57:22.471185718+00:00 stderr F I0120 10:57:22.471174 16 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2026-01-20T10:57:22.471185718+00:00 stderr F I0120 10:57:22.471177 16 flags.go:64] FLAG: --kubelet-certificate-authority="" 2026-01-20T10:57:22.471185718+00:00 stderr F I0120 10:57:22.471180 16 flags.go:64] FLAG: --kubelet-client-certificate="" 2026-01-20T10:57:22.471194659+00:00 stderr F I0120 10:57:22.471183 16 flags.go:64] FLAG: --kubelet-client-key="" 2026-01-20T10:57:22.471194659+00:00 stderr F I0120 10:57:22.471186 16 flags.go:64] FLAG: --kubelet-port="10250" 2026-01-20T10:57:22.471201529+00:00 stderr F I0120 10:57:22.471191 16 flags.go:64] FLAG: --kubelet-preferred-address-types="[Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]" 2026-01-20T10:57:22.471201529+00:00 stderr F I0120 10:57:22.471196 16 flags.go:64] FLAG: --kubelet-read-only-port="10255" 2026-01-20T10:57:22.471208359+00:00 stderr F I0120 10:57:22.471199 16 flags.go:64] FLAG: --kubelet-timeout="5s" 2026-01-20T10:57:22.471208359+00:00 stderr F I0120 10:57:22.471203 16 flags.go:64] FLAG: --kubernetes-service-node-port="0" 2026-01-20T10:57:22.471215159+00:00 stderr F I0120 10:57:22.471206 16 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 
2026-01-20T10:57:22.471221719+00:00 stderr F I0120 10:57:22.471210 16 flags.go:64] FLAG: --livez-grace-period="0s" 2026-01-20T10:57:22.471221719+00:00 stderr F I0120 10:57:22.471214 16 flags.go:64] FLAG: --log-flush-frequency="5s" 2026-01-20T10:57:22.471250251+00:00 stderr F I0120 10:57:22.471217 16 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2026-01-20T10:57:22.471250251+00:00 stderr F I0120 10:57:22.471226 16 flags.go:64] FLAG: --log-json-split-stream="false" 2026-01-20T10:57:22.471250251+00:00 stderr F I0120 10:57:22.471229 16 flags.go:64] FLAG: --logging-format="text" 2026-01-20T10:57:22.471250251+00:00 stderr F I0120 10:57:22.471232 16 flags.go:64] FLAG: --max-connection-bytes-per-sec="0" 2026-01-20T10:57:22.471250251+00:00 stderr F I0120 10:57:22.471235 16 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2026-01-20T10:57:22.471250251+00:00 stderr F I0120 10:57:22.471239 16 flags.go:64] FLAG: --max-requests-inflight="400" 2026-01-20T10:57:22.471250251+00:00 stderr F I0120 10:57:22.471243 16 flags.go:64] FLAG: --min-request-timeout="1800" 2026-01-20T10:57:22.471259411+00:00 stderr F I0120 10:57:22.471247 16 flags.go:64] FLAG: --oidc-ca-file="" 2026-01-20T10:57:22.471259411+00:00 stderr F I0120 10:57:22.471251 16 flags.go:64] FLAG: --oidc-client-id="" 2026-01-20T10:57:22.471266232+00:00 stderr F I0120 10:57:22.471256 16 flags.go:64] FLAG: --oidc-groups-claim="" 2026-01-20T10:57:22.471266232+00:00 stderr F I0120 10:57:22.471260 16 flags.go:64] FLAG: --oidc-groups-prefix="" 2026-01-20T10:57:22.471273032+00:00 stderr F I0120 10:57:22.471265 16 flags.go:64] FLAG: --oidc-issuer-url="" 2026-01-20T10:57:22.471303372+00:00 stderr F I0120 10:57:22.471269 16 flags.go:64] FLAG: --oidc-required-claim="" 2026-01-20T10:57:22.471303372+00:00 stderr F I0120 10:57:22.471281 16 flags.go:64] FLAG: --oidc-signing-algs="[RS256]" 2026-01-20T10:57:22.471303372+00:00 stderr F I0120 10:57:22.471286 16 flags.go:64] FLAG: --oidc-username-claim="sub" 
2026-01-20T10:57:22.471303372+00:00 stderr F I0120 10:57:22.471290 16 flags.go:64] FLAG: --oidc-username-prefix="" 2026-01-20T10:57:22.471303372+00:00 stderr F I0120 10:57:22.471293 16 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2026-01-20T10:57:22.471303372+00:00 stderr F I0120 10:57:22.471297 16 flags.go:64] FLAG: --peer-advertise-ip="" 2026-01-20T10:57:22.471317083+00:00 stderr F I0120 10:57:22.471301 16 flags.go:64] FLAG: --peer-advertise-port="" 2026-01-20T10:57:22.471317083+00:00 stderr F I0120 10:57:22.471304 16 flags.go:64] FLAG: --peer-ca-file="" 2026-01-20T10:57:22.471338833+00:00 stderr F I0120 10:57:22.471308 16 flags.go:64] FLAG: --permit-address-sharing="true" 2026-01-20T10:57:22.471338833+00:00 stderr F I0120 10:57:22.471317 16 flags.go:64] FLAG: --permit-port-sharing="false" 2026-01-20T10:57:22.471338833+00:00 stderr F I0120 10:57:22.471320 16 flags.go:64] FLAG: --profiling="true" 2026-01-20T10:57:22.471338833+00:00 stderr F I0120 10:57:22.471325 16 flags.go:64] FLAG: --proxy-client-cert-file="" 2026-01-20T10:57:22.471338833+00:00 stderr F I0120 10:57:22.471329 16 flags.go:64] FLAG: --proxy-client-key-file="" 2026-01-20T10:57:22.471347484+00:00 stderr F I0120 10:57:22.471333 16 flags.go:64] FLAG: --request-timeout="1m0s" 2026-01-20T10:57:22.471347484+00:00 stderr F I0120 10:57:22.471338 16 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2026-01-20T10:57:22.471347484+00:00 stderr F I0120 10:57:22.471343 16 flags.go:64] FLAG: --requestheader-client-ca-file="" 2026-01-20T10:57:22.471378864+00:00 stderr F I0120 10:57:22.471347 16 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[]" 2026-01-20T10:57:22.471378864+00:00 stderr F I0120 10:57:22.471359 16 flags.go:64] FLAG: --requestheader-group-headers="[]" 2026-01-20T10:57:22.471378864+00:00 stderr F I0120 10:57:22.471363 16 flags.go:64] FLAG: --requestheader-username-headers="[]" 2026-01-20T10:57:22.471378864+00:00 stderr F I0120 
10:57:22.471369 16 flags.go:64] FLAG: --runtime-config="" 2026-01-20T10:57:22.471387605+00:00 stderr F I0120 10:57:22.471376 16 flags.go:64] FLAG: --secure-port="6443" 2026-01-20T10:57:22.471387605+00:00 stderr F I0120 10:57:22.471381 16 flags.go:64] FLAG: --send-retry-after-while-not-ready-once="false" 2026-01-20T10:57:22.471394465+00:00 stderr F I0120 10:57:22.471384 16 flags.go:64] FLAG: --service-account-extend-token-expiration="true" 2026-01-20T10:57:22.471416435+00:00 stderr F I0120 10:57:22.471388 16 flags.go:64] FLAG: --service-account-issuer="[]" 2026-01-20T10:57:22.471416435+00:00 stderr F I0120 10:57:22.471396 16 flags.go:64] FLAG: --service-account-jwks-uri="" 2026-01-20T10:57:22.471416435+00:00 stderr F I0120 10:57:22.471399 16 flags.go:64] FLAG: --service-account-key-file="[]" 2026-01-20T10:57:22.471416435+00:00 stderr F I0120 10:57:22.471404 16 flags.go:64] FLAG: --service-account-lookup="true" 2026-01-20T10:57:22.471416435+00:00 stderr F I0120 10:57:22.471407 16 flags.go:64] FLAG: --service-account-max-token-expiration="0s" 2026-01-20T10:57:22.471416435+00:00 stderr F I0120 10:57:22.471410 16 flags.go:64] FLAG: --service-account-signing-key-file="" 2026-01-20T10:57:22.471425196+00:00 stderr F I0120 10:57:22.471413 16 flags.go:64] FLAG: --service-cluster-ip-range="" 2026-01-20T10:57:22.471448016+00:00 stderr F I0120 10:57:22.471416 16 flags.go:64] FLAG: --service-node-port-range="30000-32767" 2026-01-20T10:57:22.471448016+00:00 stderr F I0120 10:57:22.471428 16 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2026-01-20T10:57:22.471448016+00:00 stderr F I0120 10:57:22.471431 16 flags.go:64] FLAG: --shutdown-delay-duration="0s" 2026-01-20T10:57:22.471448016+00:00 stderr F I0120 10:57:22.471434 16 flags.go:64] FLAG: --shutdown-send-retry-after="false" 2026-01-20T10:57:22.471448016+00:00 stderr F I0120 10:57:22.471438 16 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2026-01-20T10:57:22.471448016+00:00 stderr F I0120 
10:57:22.471441 16 flags.go:64] FLAG: --storage-backend="" 2026-01-20T10:57:22.471461237+00:00 stderr F I0120 10:57:22.471444 16 flags.go:64] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" 2026-01-20T10:57:22.471461237+00:00 stderr F I0120 10:57:22.471449 16 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2026-01-20T10:57:22.471461237+00:00 stderr F I0120 10:57:22.471453 16 flags.go:64] FLAG: --tls-cert-file="" 2026-01-20T10:57:22.471468647+00:00 stderr F I0120 10:57:22.471456 16 flags.go:64] FLAG: --tls-cipher-suites="[]" 2026-01-20T10:57:22.471468647+00:00 stderr F I0120 10:57:22.471461 16 flags.go:64] FLAG: --tls-min-version="" 2026-01-20T10:57:22.471475487+00:00 stderr F I0120 10:57:22.471464 16 flags.go:64] FLAG: --tls-private-key-file="" 2026-01-20T10:57:22.471475487+00:00 stderr F I0120 10:57:22.471467 16 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2026-01-20T10:57:22.471482297+00:00 stderr F I0120 10:57:22.471473 16 flags.go:64] FLAG: --token-auth-file="" 2026-01-20T10:57:22.471482297+00:00 stderr F I0120 10:57:22.471476 16 flags.go:64] FLAG: --tracing-config-file="" 2026-01-20T10:57:22.471489107+00:00 stderr F I0120 10:57:22.471479 16 flags.go:64] FLAG: --v="2" 2026-01-20T10:57:22.471495708+00:00 stderr F I0120 10:57:22.471484 16 flags.go:64] FLAG: --version="false" 2026-01-20T10:57:22.471495708+00:00 stderr F I0120 10:57:22.471489 16 flags.go:64] FLAG: --vmodule="" 2026-01-20T10:57:22.471502538+00:00 stderr F I0120 10:57:22.471493 16 flags.go:64] FLAG: --watch-cache="true" 2026-01-20T10:57:22.471502538+00:00 stderr F I0120 10:57:22.471497 16 flags.go:64] FLAG: --watch-cache-sizes="[]" 2026-01-20T10:57:22.471572020+00:00 stderr F I0120 10:57:22.471538 16 plugins.go:83] "Registered admission plugin" plugin="authorization.openshift.io/RestrictSubjectBindings" 2026-01-20T10:57:22.471572020+00:00 stderr F I0120 10:57:22.471555 16 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/RouteHostAssignment" 
2026-01-20T10:57:22.471572020+00:00 stderr F I0120 10:57:22.471562 16 plugins.go:83] "Registered admission plugin" plugin="image.openshift.io/ImagePolicy" 2026-01-20T10:57:22.471603470+00:00 stderr F I0120 10:57:22.471569 16 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/IngressAdmission" 2026-01-20T10:57:22.471603470+00:00 stderr F I0120 10:57:22.471580 16 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/ManagementCPUsOverride" 2026-01-20T10:57:22.471603470+00:00 stderr F I0120 10:57:22.471587 16 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/ManagedNode" 2026-01-20T10:57:22.471603470+00:00 stderr F I0120 10:57:22.471595 16 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/MixedCPUs" 2026-01-20T10:57:22.471631891+00:00 stderr F I0120 10:57:22.471603 16 plugins.go:83] "Registered admission plugin" plugin="scheduling.openshift.io/OriginPodNodeEnvironment" 2026-01-20T10:57:22.471631891+00:00 stderr F I0120 10:57:22.471615 16 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/ClusterResourceOverride" 2026-01-20T10:57:22.471631891+00:00 stderr F I0120 10:57:22.471623 16 plugins.go:83] "Registered admission plugin" plugin="quota.openshift.io/ClusterResourceQuota" 2026-01-20T10:57:22.471656982+00:00 stderr F I0120 10:57:22.471630 16 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/RunOnceDuration" 2026-01-20T10:57:22.471656982+00:00 stderr F I0120 10:57:22.471641 16 plugins.go:83] "Registered admission plugin" plugin="scheduling.openshift.io/PodNodeConstraints" 2026-01-20T10:57:22.471656982+00:00 stderr F I0120 10:57:22.471647 16 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/SecurityContextConstraint" 2026-01-20T10:57:22.471696883+00:00 stderr F I0120 10:57:22.471664 16 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/SCCExecRestrictions" 
2026-01-20T10:57:22.471696883+00:00 stderr F I0120 10:57:22.471676 16 plugins.go:83] "Registered admission plugin" plugin="network.openshift.io/ExternalIPRanger" 2026-01-20T10:57:22.471696883+00:00 stderr F I0120 10:57:22.471685 16 plugins.go:83] "Registered admission plugin" plugin="network.openshift.io/RestrictedEndpointsAdmission" 2026-01-20T10:57:22.471719463+00:00 stderr F I0120 10:57:22.471692 16 plugins.go:83] "Registered admission plugin" plugin="storage.openshift.io/CSIInlineVolumeSecurity" 2026-01-20T10:57:22.471719463+00:00 stderr F I0120 10:57:22.471706 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAPIServer" 2026-01-20T10:57:22.471719463+00:00 stderr F I0120 10:57:22.471713 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAuthentication" 2026-01-20T10:57:22.471748204+00:00 stderr F I0120 10:57:22.471720 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateFeatureGate" 2026-01-20T10:57:22.471748204+00:00 stderr F I0120 10:57:22.471731 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateConsole" 2026-01-20T10:57:22.471748204+00:00 stderr F I0120 10:57:22.471737 16 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/ValidateDNS" 2026-01-20T10:57:22.471769785+00:00 stderr F I0120 10:57:22.471744 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateImage" 2026-01-20T10:57:22.471769785+00:00 stderr F I0120 10:57:22.471761 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateOAuth" 2026-01-20T10:57:22.471777285+00:00 stderr F I0120 10:57:22.471767 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateProject" 2026-01-20T10:57:22.471803726+00:00 stderr F I0120 10:57:22.471774 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/DenyDeleteClusterConfiguration" 
2026-01-20T10:57:22.471803726+00:00 stderr F I0120 10:57:22.471785 16 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/DenyDeleteClusterOperators" 2026-01-20T10:57:22.471803726+00:00 stderr F I0120 10:57:22.471790 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateScheduler" 2026-01-20T10:57:22.471814056+00:00 stderr F I0120 10:57:22.471797 16 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/ValidateKubeControllerManager" 2026-01-20T10:57:22.471814056+00:00 stderr F I0120 10:57:22.471807 16 plugins.go:83] "Registered admission plugin" plugin="quota.openshift.io/ValidateClusterResourceQuota" 2026-01-20T10:57:22.471841977+00:00 stderr F I0120 10:57:22.471814 16 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/ValidateSecurityContextConstraints" 2026-01-20T10:57:22.471841977+00:00 stderr F I0120 10:57:22.471825 16 plugins.go:83] "Registered admission plugin" plugin="authorization.openshift.io/ValidateRoleBindingRestriction" 2026-01-20T10:57:22.471841977+00:00 stderr F I0120 10:57:22.471832 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateNetwork" 2026-01-20T10:57:22.471865317+00:00 stderr F I0120 10:57:22.471839 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAPIRequestCount" 2026-01-20T10:57:22.471865317+00:00 stderr F I0120 10:57:22.471849 16 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/RestrictExtremeWorkerLatencyProfile" 2026-01-20T10:57:22.471865317+00:00 stderr F I0120 10:57:22.471856 16 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/DefaultSecurityContextConstraints" 2026-01-20T10:57:22.471889728+00:00 stderr F I0120 10:57:22.471863 16 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/ValidateRoute" 2026-01-20T10:57:22.471889728+00:00 stderr F I0120 10:57:22.471874 16 plugins.go:83] "Registered admission 
plugin" plugin="route.openshift.io/DefaultRoute" 2026-01-20T10:57:22.474970049+00:00 stderr F W0120 10:57:22.474715 16 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2026-01-20T10:57:22.474970049+00:00 stderr F I0120 10:57:22.474927 16 feature_gate.go:250] feature gates: &{map[]} 2026-01-20T10:57:22.475015460+00:00 stderr F W0120 10:57:22.474975 16 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2026-01-20T10:57:22.475015460+00:00 stderr F I0120 10:57:22.474988 16 feature_gate.go:250] feature gates: &{map[]} 2026-01-20T10:57:22.475185775+00:00 stderr F W0120 10:57:22.475021 16 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2026-01-20T10:57:22.475185775+00:00 stderr F I0120 10:57:22.475032 16 feature_gate.go:250] feature gates: &{map[]} 2026-01-20T10:57:22.475185775+00:00 stderr F W0120 10:57:22.475095 16 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2026-01-20T10:57:22.475185775+00:00 stderr F I0120 10:57:22.475102 16 feature_gate.go:250] feature gates: &{map[]} 2026-01-20T10:57:22.475185775+00:00 stderr F W0120 10:57:22.475143 16 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2026-01-20T10:57:22.475185775+00:00 stderr F I0120 10:57:22.475149 16 feature_gate.go:250] feature gates: &{map[]} 2026-01-20T10:57:22.475265947+00:00 stderr F W0120 10:57:22.475217 16 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2026-01-20T10:57:22.475265947+00:00 stderr F I0120 10:57:22.475231 16 feature_gate.go:250] feature gates: &{map[]} 2026-01-20T10:57:22.475328618+00:00 stderr F W0120 10:57:22.475291 16 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2026-01-20T10:57:22.475328618+00:00 stderr F I0120 10:57:22.475303 16 feature_gate.go:250] feature gates: &{map[]} 2026-01-20T10:57:22.475389210+00:00 stderr F W0120 10:57:22.475351 16 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2026-01-20T10:57:22.475389210+00:00 stderr F 
I0120 10:57:22.475370 16 feature_gate.go:250] feature gates: &{map[]} 2026-01-20T10:57:22.475449522+00:00 stderr F I0120 10:57:22.475412 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475497423+00:00 stderr F W0120 10:57:22.475462 16 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2026-01-20T10:57:22.475497423+00:00 stderr F I0120 10:57:22.475473 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475556834+00:00 stderr F W0120 10:57:22.475515 16 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2026-01-20T10:57:22.475556834+00:00 stderr F I0120 10:57:22.475526 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475616156+00:00 stderr F W0120 10:57:22.475576 16 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2026-01-20T10:57:22.475616156+00:00 stderr F I0120 10:57:22.475588 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475662157+00:00 stderr F W0120 10:57:22.475626 16 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2026-01-20T10:57:22.475662157+00:00 stderr F I0120 10:57:22.475637 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475708118+00:00 stderr F W0120 10:57:22.475678 16 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2026-01-20T10:57:22.475708118+00:00 stderr F I0120 10:57:22.475690 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475763620+00:00 stderr F W0120 10:57:22.475731 16 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2026-01-20T10:57:22.475763620+00:00 stderr F I0120 10:57:22.475742 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475821801+00:00 stderr F W0120 10:57:22.475792 16 feature_gate.go:227] 
unrecognized feature gate: ClusterAPIInstallOpenStack 2026-01-20T10:57:22.475821801+00:00 stderr F I0120 10:57:22.475805 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475912614+00:00 stderr F W0120 10:57:22.475865 16 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2026-01-20T10:57:22.475940314+00:00 stderr F I0120 10:57:22.475879 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.475988266+00:00 stderr F W0120 10:57:22.475957 16 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2026-01-20T10:57:22.475988266+00:00 stderr F I0120 10:57:22.475968 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.476046247+00:00 stderr F W0120 10:57:22.476008 16 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2026-01-20T10:57:22.476046247+00:00 stderr F I0120 10:57:22.476019 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2026-01-20T10:57:22.476117319+00:00 stderr F I0120 10:57:22.476058 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true]} 2026-01-20T10:57:22.476158720+00:00 stderr F I0120 10:57:22.476119 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false]} 2026-01-20T10:57:22.476209341+00:00 stderr F W0120 10:57:22.476177 16 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2026-01-20T10:57:22.476209341+00:00 stderr F I0120 10:57:22.476189 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false]} 2026-01-20T10:57:22.476295954+00:00 stderr F I0120 10:57:22.476234 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476337795+00:00 stderr F W0120 10:57:22.476286 16 feature_gate.go:227] unrecognized feature gate: Example 2026-01-20T10:57:22.476337795+00:00 stderr F I0120 10:57:22.476298 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476389306+00:00 stderr F W0120 10:57:22.476347 16 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2026-01-20T10:57:22.476389306+00:00 stderr F I0120 10:57:22.476360 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476458698+00:00 stderr F W0120 10:57:22.476413 16 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2026-01-20T10:57:22.476458698+00:00 stderr F I0120 10:57:22.476426 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476505379+00:00 stderr F W0120 10:57:22.476469 16 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2026-01-20T10:57:22.476505379+00:00 stderr F I0120 10:57:22.476481 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476618792+00:00 stderr F W0120 10:57:22.476528 16 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2026-01-20T10:57:22.476618792+00:00 stderr F I0120 10:57:22.476562 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476649173+00:00 stderr F W0120 10:57:22.476614 16 
feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2026-01-20T10:57:22.476649173+00:00 stderr F I0120 10:57:22.476626 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476714025+00:00 stderr F W0120 10:57:22.476679 16 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2026-01-20T10:57:22.476714025+00:00 stderr F I0120 10:57:22.476691 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476773686+00:00 stderr F W0120 10:57:22.476739 16 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2026-01-20T10:57:22.476773686+00:00 stderr F I0120 10:57:22.476751 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476864480+00:00 stderr F W0120 10:57:22.476825 16 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2026-01-20T10:57:22.476864480+00:00 stderr F I0120 10:57:22.476838 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476917401+00:00 stderr F W0120 10:57:22.476883 16 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2026-01-20T10:57:22.476917401+00:00 stderr F I0120 10:57:22.476895 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.476989143+00:00 stderr F W0120 10:57:22.476953 16 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2026-01-20T10:57:22.476989143+00:00 stderr F I0120 10:57:22.476966 16 
feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.477055365+00:00 stderr F W0120 10:57:22.477011 16 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2026-01-20T10:57:22.477055365+00:00 stderr F I0120 10:57:22.477023 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.477138927+00:00 stderr F W0120 10:57:22.477091 16 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2026-01-20T10:57:22.477138927+00:00 stderr F I0120 10:57:22.477105 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.477189988+00:00 stderr F W0120 10:57:22.477156 16 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2026-01-20T10:57:22.477189988+00:00 stderr F I0120 10:57:22.477169 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.477274000+00:00 stderr F W0120 10:57:22.477239 16 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2026-01-20T10:57:22.477274000+00:00 stderr F I0120 10:57:22.477252 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.477324482+00:00 stderr F W0120 10:57:22.477296 16 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2026-01-20T10:57:22.477324482+00:00 stderr F I0120 10:57:22.477308 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false]} 2026-01-20T10:57:22.477433214+00:00 stderr F W0120 10:57:22.477381 16 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2026-01-20T10:57:22.477433214+00:00 stderr F I0120 10:57:22.477392 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2026-01-20T10:57:22.477469095+00:00 stderr F W0120 10:57:22.477438 16 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2026-01-20T10:57:22.477476656+00:00 stderr F I0120 10:57:22.477452 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2026-01-20T10:57:22.477541067+00:00 stderr F W0120 10:57:22.477506 16 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2026-01-20T10:57:22.477541067+00:00 stderr F I0120 10:57:22.477518 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2026-01-20T10:57:22.477615229+00:00 stderr F W0120 10:57:22.477584 16 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2026-01-20T10:57:22.477615229+00:00 stderr F I0120 10:57:22.477598 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2026-01-20T10:57:22.477682141+00:00 stderr F W0120 10:57:22.477645 16 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2026-01-20T10:57:22.477682141+00:00 stderr F I0120 10:57:22.477658 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2026-01-20T10:57:22.477746243+00:00 stderr F I0120 10:57:22.477700 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2026-01-20T10:57:22.477831885+00:00 stderr F W0120 10:57:22.477762 16 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2026-01-20T10:57:22.477831885+00:00 stderr F I0120 10:57:22.477774 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2026-01-20T10:57:22.477881396+00:00 stderr F W0120 10:57:22.477848 16 feature_gate.go:227] unrecognized feature gate: MetricsServer 2026-01-20T10:57:22.477881396+00:00 stderr F I0120 10:57:22.477861 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2026-01-20T10:57:22.477949908+00:00 stderr F W0120 10:57:22.477913 16 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2026-01-20T10:57:22.477949908+00:00 stderr F I0120 10:57:22.477925 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2026-01-20T10:57:22.478054431+00:00 stderr F W0120 10:57:22.477976 16 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2026-01-20T10:57:22.478054431+00:00 stderr F I0120 10:57:22.477989 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true 
MaxUnavailableStatefulSet:false]} 2026-01-20T10:57:22.478087591+00:00 stderr F W0120 10:57:22.478040 16 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2026-01-20T10:57:22.478130423+00:00 stderr F I0120 10:57:22.478053 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2026-01-20T10:57:22.478203895+00:00 stderr F W0120 10:57:22.478162 16 feature_gate.go:227] unrecognized feature gate: NewOLM 2026-01-20T10:57:22.478203895+00:00 stderr F I0120 10:57:22.478175 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2026-01-20T10:57:22.478269586+00:00 stderr F W0120 10:57:22.478235 16 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2026-01-20T10:57:22.478269586+00:00 stderr F I0120 10:57:22.478247 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2026-01-20T10:57:22.478339558+00:00 stderr F I0120 10:57:22.478298 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2026-01-20T10:57:22.478392249+00:00 stderr F W0120 10:57:22.478359 16 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2026-01-20T10:57:22.478392249+00:00 stderr F I0120 10:57:22.478371 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 
2026-01-20T10:57:22.478485742+00:00 stderr F W0120 10:57:22.478430 16 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2026-01-20T10:57:22.478485742+00:00 stderr F I0120 10:57:22.478441 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2026-01-20T10:57:22.478532873+00:00 stderr F W0120 10:57:22.478498 16 feature_gate.go:227] unrecognized feature gate: PinnedImages 2026-01-20T10:57:22.478532873+00:00 stderr F I0120 10:57:22.478513 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2026-01-20T10:57:22.478598735+00:00 stderr F W0120 10:57:22.478556 16 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2026-01-20T10:57:22.478598735+00:00 stderr F I0120 10:57:22.478568 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2026-01-20T10:57:22.478664847+00:00 stderr F W0120 10:57:22.478625 16 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2026-01-20T10:57:22.478664847+00:00 stderr F I0120 10:57:22.478637 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2026-01-20T10:57:22.478733688+00:00 stderr F I0120 10:57:22.478686 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true 
MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false]} 2026-01-20T10:57:22.478789030+00:00 stderr F I0120 10:57:22.478742 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false]} 2026-01-20T10:57:22.478869492+00:00 stderr F I0120 10:57:22.478820 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false]} 2026-01-20T10:57:22.480003813+00:00 stderr F I0120 10:57:22.478885 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F W0120 10:57:22.478944 16 feature_gate.go:227] unrecognized feature gate: SignatureStores 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.478952 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F W0120 10:57:22.478999 16 feature_gate.go:227] unrecognized feature gate: 
SigstoreImageVerification 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479010 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479097 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F W0120 10:57:22.479158 16 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479170 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F W0120 10:57:22.479229 16 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479241 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true 
MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F W0120 10:57:22.479301 16 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479313 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F W0120 10:57:22.479365 16 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479378 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F W0120 10:57:22.479438 16 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479448 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479500 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F W0120 10:57:22.479567 16 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479579 16 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} 2026-01-20T10:57:22.480020183+00:00 stderr F Flag --openshift-config has been deprecated, to be removed 2026-01-20T10:57:22.480020183+00:00 stderr F Flag --enable-logs-handler has been deprecated, This flag will be removed in v1.19 2026-01-20T10:57:22.480020183+00:00 stderr F Flag --kubelet-read-only-port has been deprecated, kubelet-read-only-port is deprecated and will be removed. 
2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479708 16 flags.go:64] FLAG: --admission-control="[]" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479720 16 flags.go:64] FLAG: --admission-control-config-file="/tmp/kubeapiserver-admission-config.yaml1873568072" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479725 16 flags.go:64] FLAG: --advertise-address="192.168.126.11" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479729 16 flags.go:64] FLAG: --aggregator-reject-forwarding-redirect="true" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479733 16 flags.go:64] FLAG: --allow-metric-labels="[]" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479742 16 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479748 16 flags.go:64] FLAG: --allow-privileged="true" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479753 16 flags.go:64] FLAG: --anonymous-auth="true" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479756 16 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479764 16 flags.go:64] FLAG: --apiserver-count="1" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479768 16 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479772 16 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479776 16 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479780 16 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479783 16 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479787 16 flags.go:64] FLAG: 
--audit-log-batch-throttle-qps="0" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479792 16 flags.go:64] FLAG: --audit-log-compress="false" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479796 16 flags.go:64] FLAG: --audit-log-format="json" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479799 16 flags.go:64] FLAG: --audit-log-maxage="0" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479803 16 flags.go:64] FLAG: --audit-log-maxbackup="10" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479807 16 flags.go:64] FLAG: --audit-log-maxsize="200" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479810 16 flags.go:64] FLAG: --audit-log-mode="blocking" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479815 16 flags.go:64] FLAG: --audit-log-path="/var/log/kube-apiserver/audit.log" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479819 16 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479827 16 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479832 16 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479836 16 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479840 16 flags.go:64] FLAG: --audit-policy-file="/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-audit-policies/policy.yaml" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479846 16 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2026-01-20T10:57:22.480020183+00:00 stderr F I0120 10:57:22.479850 16 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2026-01-20T10:57:22.480020183+00:00 stderr P I0120 10:57:22.479854 16 flags.go:6 2026-01-20T10:57:22.480052904+00:00 stderr F 4] FLAG: 
--audit-webhook-batch-max-size="400" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479858 16 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479862 16 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479867 16 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479871 16 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479876 16 flags.go:64] FLAG: --audit-webhook-config-file="" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479880 16 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479884 16 flags.go:64] FLAG: --audit-webhook-mode="batch" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479887 16 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479891 16 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479895 16 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479899 16 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479906 16 flags.go:64] FLAG: --authentication-config="" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479910 16 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479913 16 flags.go:64] FLAG: --authentication-token-webhook-config-file="/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479918 16 
flags.go:64] FLAG: --authentication-token-webhook-version="v1" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479922 16 flags.go:64] FLAG: --authorization-config="" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479925 16 flags.go:64] FLAG: --authorization-mode="[Scope,SystemMasters,RBAC,Node]" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479934 16 flags.go:64] FLAG: --authorization-policy-file="" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479937 16 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479941 16 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479945 16 flags.go:64] FLAG: --authorization-webhook-config-file="" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479948 16 flags.go:64] FLAG: --authorization-webhook-version="v1beta1" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479952 16 flags.go:64] FLAG: --bind-address="0.0.0.0" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479956 16 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479960 16 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479964 16 flags.go:64] FLAG: --cloud-config="" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479968 16 flags.go:64] FLAG: --cloud-provider="" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479971 16 flags.go:64] FLAG: --cloud-provider-gce-l7lb-src-cidrs="130.211.0.0/22,35.191.0.0/16" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479978 16 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2026-01-20T10:57:22.480052904+00:00 stderr F 
I0120 10:57:22.479987 16 flags.go:64] FLAG: --contention-profiling="false" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.479991 16 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.480000 16 flags.go:64] FLAG: --debug-socket-path="" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.480003 16 flags.go:64] FLAG: --default-not-ready-toleration-seconds="300" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.480008 16 flags.go:64] FLAG: --default-unreachable-toleration-seconds="300" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.480012 16 flags.go:64] FLAG: --default-watch-cache-size="100" 2026-01-20T10:57:22.480052904+00:00 stderr F I0120 10:57:22.480016 16 flags.go:64] FLAG: --delete-collection-workers="1" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480020 16 flags.go:64] FLAG: --disable-admission-plugins="[]" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480030 16 flags.go:64] FLAG: --disabled-metrics="[]" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480037 16 flags.go:64] FLAG: --egress-selector-config-file="" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480041 16 flags.go:64] FLAG: 
--enable-admission-plugins="[CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DefaultIngressClass,DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,MutatingAdmissionWebhook,NamespaceLifecycle,NodeRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,PersistentVolumeLabel,PodNodeSelector,PodTolerationRestriction,Priority,ResourceQuota,RuntimeClass,ServiceAccount,StorageObjectInUseProtection,TaintNodesByCondition,ValidatingAdmissionWebhook,ValidatingAdmissionPolicy,authorization.openshift.io/RestrictSubjectBindings,authorization.openshift.io/ValidateRoleBindingRestriction,config.openshift.io/DenyDeleteClusterConfiguration,config.openshift.io/ValidateAPIServer,config.openshift.io/ValidateAuthentication,config.openshift.io/ValidateConsole,config.openshift.io/ValidateFeatureGate,config.openshift.io/ValidateImage,config.openshift.io/ValidateOAuth,config.openshift.io/ValidateProject,config.openshift.io/ValidateScheduler,image.openshift.io/ImagePolicy,network.openshift.io/ExternalIPRanger,network.openshift.io/RestrictedEndpointsAdmission,quota.openshift.io/ClusterResourceQuota,quota.openshift.io/ValidateClusterResourceQuota,route.openshift.io/IngressAdmission,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/DefaultSecurityContextConstraints,security.openshift.io/SCCExecRestrictions,security.openshift.io/SecurityContextConstraint,security.openshift.io/ValidateSecurityContextConstraints,storage.openshift.io/CSIInlineVolumeSecurity]" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480111 16 flags.go:64] FLAG: --enable-aggregator-routing="true" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480116 16 flags.go:64] FLAG: --enable-bootstrap-token-auth="false" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480120 16 flags.go:64] FLAG: --enable-garbage-collector="true" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480124 16 flags.go:64] FLAG: 
--enable-logs-handler="false" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480127 16 flags.go:64] FLAG: --enable-priority-and-fairness="true" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480131 16 flags.go:64] FLAG: --encryption-provider-config="" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480134 16 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480141 16 flags.go:64] FLAG: --endpoint-reconciler-type="lease" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480145 16 flags.go:64] FLAG: --etcd-cafile="/etc/kubernetes/static-pod-resources/configmaps/etcd-serving-ca/ca-bundle.crt" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480150 16 flags.go:64] FLAG: --etcd-certfile="/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.crt" 2026-01-20T10:57:22.481143362+00:00 stderr F I0120 10:57:22.480154 16 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480158 16 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480161 16 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480165 16 flags.go:64] FLAG: --etcd-healthcheck-timeout="9s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480168 16 flags.go:64] FLAG: --etcd-keyfile="/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.key" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480173 16 flags.go:64] FLAG: --etcd-prefix="kubernetes.io" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480176 16 flags.go:64] FLAG: --etcd-readycheck-timeout="9s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480180 16 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379,https://localhost:2379]" 
2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480189 16 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480197 16 flags.go:64] FLAG: --event-ttl="3h0m0s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480202 16 flags.go:64] FLAG: --external-hostname="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480205 16 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480227 16 flags.go:64] FLAG: --goaway-chance="0" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480232 16 flags.go:64] FLAG: --help="false" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480236 16 flags.go:64] FLAG: --http2-max-streams-per-connection="2000" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480244 16 flags.go:64] FLAG: --kubelet-certificate-authority="/etc/kubernetes/static-pod-resources/configmaps/kubelet-serving-ca/ca-bundle.crt" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480250 16 flags.go:64] FLAG: --kubelet-client-certificate="/etc/kubernetes/static-pod-certs/secrets/kubelet-client/tls.crt" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480254 16 flags.go:64] FLAG: --kubelet-client-key="/etc/kubernetes/static-pod-certs/secrets/kubelet-client/tls.key" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480259 16 flags.go:64] FLAG: --kubelet-port="10250" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480264 16 flags.go:64] FLAG: 
--kubelet-preferred-address-types="[InternalIP]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480272 16 flags.go:64] FLAG: --kubelet-read-only-port="0" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480276 16 flags.go:64] FLAG: --kubelet-timeout="5s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480280 16 flags.go:64] FLAG: --kubernetes-service-node-port="0" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480285 16 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480289 16 flags.go:64] FLAG: --livez-grace-period="0s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480293 16 flags.go:64] FLAG: --log-flush-frequency="5s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480303 16 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480307 16 flags.go:64] FLAG: --log-json-split-stream="false" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480311 16 flags.go:64] FLAG: --logging-format="text" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480318 16 flags.go:64] FLAG: --max-connection-bytes-per-sec="0" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480322 16 flags.go:64] FLAG: --max-mutating-requests-inflight="1000" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480326 16 flags.go:64] FLAG: --max-requests-inflight="3000" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480332 16 flags.go:64] FLAG: --min-request-timeout="3600" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480340 16 flags.go:64] FLAG: --oidc-ca-file="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480347 16 flags.go:64] FLAG: --oidc-client-id="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480351 16 flags.go:64] FLAG: --oidc-groups-claim="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480355 
16 flags.go:64] FLAG: --oidc-groups-prefix="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480362 16 flags.go:64] FLAG: --oidc-issuer-url="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480366 16 flags.go:64] FLAG: --oidc-required-claim="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480371 16 flags.go:64] FLAG: --oidc-signing-algs="[RS256]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480385 16 flags.go:64] FLAG: --oidc-username-claim="sub" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480389 16 flags.go:64] FLAG: --oidc-username-prefix="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480392 16 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480398 16 flags.go:64] FLAG: --peer-advertise-ip="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480403 16 flags.go:64] FLAG: --peer-advertise-port="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480407 16 flags.go:64] FLAG: --peer-ca-file="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480411 16 flags.go:64] FLAG: --permit-address-sharing="true" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480414 16 flags.go:64] FLAG: --permit-port-sharing="false" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480418 16 flags.go:64] FLAG: --profiling="true" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480422 16 flags.go:64] FLAG: --proxy-client-cert-file="/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480426 16 flags.go:64] FLAG: --proxy-client-key-file="/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480435 16 flags.go:64] FLAG: --request-timeout="1m0s" 2026-01-20T10:57:22.481178913+00:00 
stderr F I0120 10:57:22.480439 16 flags.go:64] FLAG: --requestheader-allowed-names="[kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480449 16 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480454 16 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[X-Remote-Extra-]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480462 16 flags.go:64] FLAG: --requestheader-group-headers="[X-Remote-Group]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480470 16 flags.go:64] FLAG: --requestheader-username-headers="[X-Remote-User]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480478 16 flags.go:64] FLAG: --runtime-config="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480483 16 flags.go:64] FLAG: --secure-port="6443" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480487 16 flags.go:64] FLAG: --send-retry-after-while-not-ready-once="true" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480490 16 flags.go:64] FLAG: --service-account-extend-token-expiration="true" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480494 16 flags.go:64] FLAG: --service-account-issuer="[https://kubernetes.default.svc]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480503 16 flags.go:64] FLAG: --service-account-jwks-uri="https://api.crc.testing:6443/openid/v1/jwks" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480507 16 flags.go:64] FLAG: 
--service-account-key-file="[/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-001.pub,/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-002.pub,/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-003.pub,/etc/kubernetes/static-pod-resources/configmaps/bound-sa-token-signing-certs/service-account-001.pub]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480548 16 flags.go:64] FLAG: --service-account-lookup="true" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480553 16 flags.go:64] FLAG: --service-account-max-token-expiration="0s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480557 16 flags.go:64] FLAG: --service-account-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/bound-service-account-signing-key/service-account.key" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480562 16 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480566 16 flags.go:64] FLAG: --service-node-port-range="30000-32767" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480576 16 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480580 16 flags.go:64] FLAG: --shutdown-delay-duration="0s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480583 16 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480587 16 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480591 16 flags.go:64] FLAG: --storage-backend="etcd3" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480595 16 flags.go:64] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480599 16 
flags.go:64] FLAG: --strict-transport-security-directives="[max-age=31536000,includeSubDomains,preload]" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480607 16 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt" 2026-01-20T10:57:22.481178913+00:00 stderr F I0120 10:57:22.480612 16 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480633 16 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480637 16 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480643 16 flags.go:64] FLAG: --tls-sni-cert-key="[/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key;/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt,/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key]" 
2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480673 16 flags.go:64] FLAG: --token-auth-file="" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480678 16 flags.go:64] FLAG: --tracing-config-file="" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480682 16 flags.go:64] FLAG: --v="2" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480687 16 flags.go:64] FLAG: --version="false" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480697 16 flags.go:64] FLAG: --vmodule="" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480701 16 flags.go:64] FLAG: --watch-cache="true" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480707 16 flags.go:64] FLAG: --watch-cache-sizes="[]" 2026-01-20T10:57:22.481217034+00:00 stderr F I0120 10:57:22.480783 16 options.go:222] external host was not specified, using 192.168.126.11 2026-01-20T10:57:22.482609691+00:00 stderr F I0120 10:57:22.482527 16 server.go:189] Version: v1.29.5+29c95f3 2026-01-20T10:57:22.482609691+00:00 stderr F I0120 10:57:22.482559 16 server.go:191] "Golang settings" GOGC="100" GOMAXPROCS="" GOTRACEBACK="" 2026-01-20T10:57:22.483447563+00:00 stderr F I0120 10:57:22.483396 16 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2026-01-20T10:57:22.484051969+00:00 stderr F I0120 10:57:22.483725 16 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key" 2026-01-20T10:57:22.484565442+00:00 stderr F I0120 10:57:22.484491 16 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2026-01-20T10:57:22.485327253+00:00 stderr F I0120 10:57:22.485262 16 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key" 2026-01-20T10:57:22.485825536+00:00 stderr F I0120 10:57:22.485776 16 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" 2026-01-20T10:57:22.486389931+00:00 stderr F I0120 10:57:22.486324 16 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key" 2026-01-20T10:57:22.863837333+00:00 stderr F I0120 10:57:22.863669 16 apf_controller.go:292] NewTestableController "Controller" with serverConcurrencyLimit=4000, name=Controller, asFieldManager="api-priority-and-fairness-config-consumer-v1" 2026-01-20T10:57:22.863965026+00:00 stderr F I0120 10:57:22.863805 16 apf_controller.go:861] Introducing queues for priority level "exempt": config={"type":"Exempt","exempt":{"nominalConcurrencyShares":0,"lendablePercent":0}}, nominalCL=0, lendableCL=0, borrowingCL=4000, currentCL=0, quiescing=false (shares=0xc00067e5c0, shareSum=5) 2026-01-20T10:57:22.863965026+00:00 stderr F I0120 10:57:22.863906 16 apf_controller.go:861] Introducing queues for priority level "catch-all": 
config={"type":"Limited","limited":{"nominalConcurrencyShares":5,"limitResponse":{"type":"Reject"},"lendablePercent":0}}, nominalCL=4000, lendableCL=0, borrowingCL=4000, currentCL=4000, quiescing=false (shares=0xc00067e060, shareSum=5) 2026-01-20T10:57:22.877250427+00:00 stderr F I0120 10:57:22.877142 16 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:57:22.877340249+00:00 stderr F I0120 10:57:22.877256 16 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:57:22.882128866+00:00 stderr F I0120 10:57:22.881633 16 shared_informer.go:311] Waiting for caches to sync for node_authorizer 2026-01-20T10:57:22.882418414+00:00 stderr F I0120 10:57:22.882317 16 audit.go:340] Using audit backend: ignoreErrors 2026-01-20T10:57:22.900020910+00:00 stderr F I0120 10:57:22.899863 16 store.go:1579] "Monitoring resource count at path" resource="events" path="//events" 2026-01-20T10:57:22.901175040+00:00 stderr F I0120 10:57:22.901025 16 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:57:22.902953708+00:00 stderr F I0120 10:57:22.902389 16 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:57:22.914796821+00:00 stderr F W0120 10:57:22.914651 16 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. 
2026-01-20T10:57:22.915168880+00:00 stderr F I0120 10:57:22.915095 16 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled. 2026-01-20T10:57:22.915417107+00:00 stderr F I0120 10:57:22.915346 16 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled. 2026-01-20T10:57:22.915417107+00:00 stderr F I0120 10:57:22.915370 16 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled. 2026-01-20T10:57:22.922234467+00:00 stderr F I0120 10:57:22.921201 16 plugins.go:157] Loaded 23 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,autoscaling.openshift.io/ManagementCPUsOverride,scheduling.openshift.io/OriginPodNodeEnvironment,image.openshift.io/ImagePolicy,security.openshift.io/SecurityContextConstraint,route.openshift.io/RouteHostAssignment,autoscaling.openshift.io/MixedCPUs,route.openshift.io/DefaultRoute,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook. 
2026-01-20T10:57:22.922234467+00:00 stderr F I0120 10:57:22.921230 16 plugins.go:160] Loaded 47 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,autoscaling.openshift.io/ManagementCPUsOverride,authorization.openshift.io/RestrictSubjectBindings,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,network.openshift.io/RestrictedEndpointsAdmission,image.openshift.io/ImagePolicy,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,storage.openshift.io/CSIInlineVolumeSecurity,autoscaling.openshift.io/ManagedNode,config.openshift.io/ValidateAPIServer,config.openshift.io/ValidateAuthentication,config.openshift.io/ValidateFeatureGate,config.openshift.io/ValidateConsole,operator.openshift.io/ValidateDNS,config.openshift.io/ValidateImage,config.openshift.io/ValidateOAuth,config.openshift.io/ValidateProject,config.openshift.io/DenyDeleteClusterConfiguration,operator.openshift.io/DenyDeleteClusterOperators,config.openshift.io/ValidateScheduler,quota.openshift.io/ValidateClusterResourceQuota,security.openshift.io/ValidateSecurityContextConstraints,authorization.openshift.io/ValidateRoleBindingRestriction,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,route.openshift.io/ValidateRoute,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota,quota.openshift.io/ClusterResourceQuota. 
2026-01-20T10:57:22.922234467+00:00 stderr F I0120 10:57:22.921691 16 instance.go:297] Using reconciler: lease 2026-01-20T10:57:22.962298397+00:00 stderr F I0120 10:57:22.962146 16 store.go:1579] "Monitoring resource count at path" resource="customresourcedefinitions.apiextensions.k8s.io" path="//apiextensions.k8s.io/customresourcedefinitions" 2026-01-20T10:57:22.965182553+00:00 stderr F I0120 10:57:22.965030 16 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager 2026-01-20T10:57:22.965182553+00:00 stderr F W0120 10:57:22.965091 16 genericapiserver.go:774] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.009446804+00:00 stderr F I0120 10:57:23.000991 16 store.go:1579] "Monitoring resource count at path" resource="events" path="//events" 2026-01-20T10:57:23.013154771+00:00 stderr F I0120 10:57:23.012986 16 store.go:1579] "Monitoring resource count at path" resource="resourcequotas" path="//resourcequotas" 2026-01-20T10:57:23.014911418+00:00 stderr F I0120 10:57:23.014784 16 cacher.go:460] cacher (resourcequotas): initialized 2026-01-20T10:57:23.014911418+00:00 stderr F I0120 10:57:23.014821 16 reflector.go:351] Caches populated for *core.ResourceQuota from storage/cacher.go:/resourcequotas 2026-01-20T10:57:23.026488774+00:00 stderr F I0120 10:57:23.026342 16 store.go:1579] "Monitoring resource count at path" resource="secrets" path="//secrets" 2026-01-20T10:57:23.046473353+00:00 stderr F I0120 10:57:23.044890 16 store.go:1579] "Monitoring resource count at path" resource="configmaps" path="//configmaps" 2026-01-20T10:57:23.055794419+00:00 stderr F I0120 10:57:23.055123 16 store.go:1579] "Monitoring resource count at path" resource="namespaces" path="//namespaces" 2026-01-20T10:57:23.069604464+00:00 stderr F I0120 10:57:23.069430 16 cacher.go:460] cacher (namespaces): initialized 2026-01-20T10:57:23.069604464+00:00 stderr F I0120 10:57:23.069486 16 reflector.go:351] Caches populated for 
*core.Namespace from storage/cacher.go:/namespaces 2026-01-20T10:57:23.074694339+00:00 stderr F I0120 10:57:23.072266 16 store.go:1579] "Monitoring resource count at path" resource="serviceaccounts" path="//serviceaccounts" 2026-01-20T10:57:23.082286220+00:00 stderr F I0120 10:57:23.082153 16 store.go:1579] "Monitoring resource count at path" resource="podtemplates" path="//podtemplates" 2026-01-20T10:57:23.085235348+00:00 stderr F I0120 10:57:23.084204 16 cacher.go:460] cacher (podtemplates): initialized 2026-01-20T10:57:23.085235348+00:00 stderr F I0120 10:57:23.084233 16 reflector.go:351] Caches populated for *core.PodTemplate from storage/cacher.go:/podtemplates 2026-01-20T10:57:23.086310276+00:00 stderr F I0120 10:57:23.086215 16 cacher.go:460] cacher (serviceaccounts): initialized 2026-01-20T10:57:23.086310276+00:00 stderr F I0120 10:57:23.086264 16 reflector.go:351] Caches populated for *core.ServiceAccount from storage/cacher.go:/serviceaccounts 2026-01-20T10:57:23.089698035+00:00 stderr F I0120 10:57:23.089574 16 store.go:1579] "Monitoring resource count at path" resource="limitranges" path="//limitranges" 2026-01-20T10:57:23.095428677+00:00 stderr F I0120 10:57:23.091079 16 cacher.go:460] cacher (limitranges): initialized 2026-01-20T10:57:23.095428677+00:00 stderr F I0120 10:57:23.091108 16 reflector.go:351] Caches populated for *core.LimitRange from storage/cacher.go:/limitranges 2026-01-20T10:57:23.095428677+00:00 stderr F I0120 10:57:23.091287 16 cacher.go:460] cacher (secrets): initialized 2026-01-20T10:57:23.095428677+00:00 stderr F I0120 10:57:23.091313 16 reflector.go:351] Caches populated for *core.Secret from storage/cacher.go:/secrets 2026-01-20T10:57:23.099621608+00:00 stderr F I0120 10:57:23.098162 16 store.go:1579] "Monitoring resource count at path" resource="persistentvolumes" path="//persistentvolumes" 2026-01-20T10:57:23.099881895+00:00 stderr F I0120 10:57:23.099823 16 cacher.go:460] cacher (persistentvolumes): initialized 
2026-01-20T10:57:23.099881895+00:00 stderr F I0120 10:57:23.099850 16 reflector.go:351] Caches populated for *core.PersistentVolume from storage/cacher.go:/persistentvolumes 2026-01-20T10:57:23.109726815+00:00 stderr F I0120 10:57:23.109453 16 store.go:1579] "Monitoring resource count at path" resource="persistentvolumeclaims" path="//persistentvolumeclaims" 2026-01-20T10:57:23.112039946+00:00 stderr F I0120 10:57:23.111960 16 cacher.go:460] cacher (persistentvolumeclaims): initialized 2026-01-20T10:57:23.112095198+00:00 stderr F I0120 10:57:23.112040 16 reflector.go:351] Caches populated for *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims 2026-01-20T10:57:23.122324599+00:00 stderr F I0120 10:57:23.122179 16 store.go:1579] "Monitoring resource count at path" resource="endpoints" path="//services/endpoints" 2026-01-20T10:57:23.129908269+00:00 stderr F I0120 10:57:23.129813 16 cacher.go:460] cacher (endpoints): initialized 2026-01-20T10:57:23.129908269+00:00 stderr F I0120 10:57:23.129849 16 reflector.go:351] Caches populated for *core.Endpoints from storage/cacher.go:/services/endpoints 2026-01-20T10:57:23.132534289+00:00 stderr F I0120 10:57:23.132428 16 cacher.go:460] cacher (configmaps): initialized 2026-01-20T10:57:23.132534289+00:00 stderr F I0120 10:57:23.132474 16 reflector.go:351] Caches populated for *core.ConfigMap from storage/cacher.go:/configmaps 2026-01-20T10:57:23.135550619+00:00 stderr F I0120 10:57:23.135449 16 store.go:1579] "Monitoring resource count at path" resource="nodes" path="//minions" 2026-01-20T10:57:23.143032156+00:00 stderr F I0120 10:57:23.142919 16 cacher.go:460] cacher (nodes): initialized 2026-01-20T10:57:23.143032156+00:00 stderr F I0120 10:57:23.142952 16 reflector.go:351] Caches populated for *core.Node from storage/cacher.go:/minions 2026-01-20T10:57:23.148523531+00:00 stderr F I0120 10:57:23.148438 16 store.go:1579] "Monitoring resource count at path" resource="pods" path="//pods" 
2026-01-20T10:57:23.156164513+00:00 stderr F I0120 10:57:23.156046 16 store.go:1579] "Monitoring resource count at path" resource="services" path="//services/specs" 2026-01-20T10:57:23.159958123+00:00 stderr F I0120 10:57:23.159880 16 cacher.go:460] cacher (services): initialized 2026-01-20T10:57:23.160026265+00:00 stderr F I0120 10:57:23.159990 16 reflector.go:351] Caches populated for *core.Service from storage/cacher.go:/services/specs 2026-01-20T10:57:23.162384838+00:00 stderr F I0120 10:57:23.162301 16 cacher.go:460] cacher (pods): initialized 2026-01-20T10:57:23.162384838+00:00 stderr F I0120 10:57:23.162323 16 reflector.go:351] Caches populated for *core.Pod from storage/cacher.go:/pods 2026-01-20T10:57:23.164438612+00:00 stderr F I0120 10:57:23.164344 16 store.go:1579] "Monitoring resource count at path" resource="serviceaccounts" path="//serviceaccounts" 2026-01-20T10:57:23.172389773+00:00 stderr F I0120 10:57:23.172312 16 store.go:1579] "Monitoring resource count at path" resource="replicationcontrollers" path="//controllers" 2026-01-20T10:57:23.172757362+00:00 stderr F I0120 10:57:23.172698 16 cacher.go:460] cacher (serviceaccounts): initialized 2026-01-20T10:57:23.172757362+00:00 stderr F I0120 10:57:23.172715 16 reflector.go:351] Caches populated for *core.ServiceAccount from storage/cacher.go:/serviceaccounts 2026-01-20T10:57:23.173730618+00:00 stderr F I0120 10:57:23.173660 16 instance.go:707] Enabling API group "". 
2026-01-20T10:57:23.173748428+00:00 stderr F I0120 10:57:23.173721 16 cacher.go:460] cacher (replicationcontrollers): initialized 2026-01-20T10:57:23.173748428+00:00 stderr F I0120 10:57:23.173736 16 reflector.go:351] Caches populated for *core.ReplicationController from storage/cacher.go:/controllers 2026-01-20T10:57:23.205412166+00:00 stderr F I0120 10:57:23.205284 16 handler.go:275] Adding GroupVersion v1 to ResourceManager 2026-01-20T10:57:23.205527989+00:00 stderr F I0120 10:57:23.205479 16 instance.go:694] API group "internal.apiserver.k8s.io" is not enabled, skipping. 2026-01-20T10:57:23.205576351+00:00 stderr F I0120 10:57:23.205524 16 instance.go:707] Enabling API group "authentication.k8s.io". 2026-01-20T10:57:23.205608781+00:00 stderr F I0120 10:57:23.205571 16 instance.go:707] Enabling API group "authorization.k8s.io". 2026-01-20T10:57:23.212413371+00:00 stderr F I0120 10:57:23.212358 16 store.go:1579] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers" 2026-01-20T10:57:23.213613742+00:00 stderr F I0120 10:57:23.213476 16 cacher.go:460] cacher (horizontalpodautoscalers.autoscaling): initialized 2026-01-20T10:57:23.213613742+00:00 stderr F I0120 10:57:23.213499 16 reflector.go:351] Caches populated for *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers 2026-01-20T10:57:23.218514302+00:00 stderr F I0120 10:57:23.218443 16 store.go:1579] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers" 2026-01-20T10:57:23.218514302+00:00 stderr F I0120 10:57:23.218482 16 instance.go:707] Enabling API group "autoscaling". 
2026-01-20T10:57:23.219439697+00:00 stderr F I0120 10:57:23.219369 16 cacher.go:460] cacher (horizontalpodautoscalers.autoscaling): initialized 2026-01-20T10:57:23.219439697+00:00 stderr F I0120 10:57:23.219396 16 reflector.go:351] Caches populated for *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers 2026-01-20T10:57:23.224766107+00:00 stderr F I0120 10:57:23.224701 16 store.go:1579] "Monitoring resource count at path" resource="jobs.batch" path="//jobs" 2026-01-20T10:57:23.226107323+00:00 stderr F I0120 10:57:23.225989 16 cacher.go:460] cacher (jobs.batch): initialized 2026-01-20T10:57:23.226107323+00:00 stderr F I0120 10:57:23.226011 16 reflector.go:351] Caches populated for *batch.Job from storage/cacher.go:/jobs 2026-01-20T10:57:23.233732655+00:00 stderr F I0120 10:57:23.233087 16 store.go:1579] "Monitoring resource count at path" resource="cronjobs.batch" path="//cronjobs" 2026-01-20T10:57:23.233732655+00:00 stderr F I0120 10:57:23.233181 16 instance.go:707] Enabling API group "batch". 2026-01-20T10:57:23.234433943+00:00 stderr F I0120 10:57:23.234369 16 cacher.go:460] cacher (cronjobs.batch): initialized 2026-01-20T10:57:23.234433943+00:00 stderr F I0120 10:57:23.234398 16 reflector.go:351] Caches populated for *batch.CronJob from storage/cacher.go:/cronjobs 2026-01-20T10:57:23.241133160+00:00 stderr F I0120 10:57:23.241031 16 store.go:1579] "Monitoring resource count at path" resource="certificatesigningrequests.certificates.k8s.io" path="//certificatesigningrequests" 2026-01-20T10:57:23.241261444+00:00 stderr F I0120 10:57:23.241204 16 instance.go:707] Enabling API group "certificates.k8s.io". 
2026-01-20T10:57:23.242498807+00:00 stderr F I0120 10:57:23.242411 16 cacher.go:460] cacher (certificatesigningrequests.certificates.k8s.io): initialized 2026-01-20T10:57:23.242498807+00:00 stderr F I0120 10:57:23.242434 16 reflector.go:351] Caches populated for *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests 2026-01-20T10:57:23.243718319+00:00 stderr F I0120 10:57:23.243647 16 cacher.go:460] cacher (customresourcedefinitions.apiextensions.k8s.io): initialized 2026-01-20T10:57:23.243718319+00:00 stderr F I0120 10:57:23.243673 16 reflector.go:351] Caches populated for *apiextensions.CustomResourceDefinition from storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions 2026-01-20T10:57:23.252195483+00:00 stderr F I0120 10:57:23.252100 16 store.go:1579] "Monitoring resource count at path" resource="leases.coordination.k8s.io" path="//leases" 2026-01-20T10:57:23.252195483+00:00 stderr F I0120 10:57:23.252139 16 instance.go:707] Enabling API group "coordination.k8s.io". 2026-01-20T10:57:23.254191816+00:00 stderr F I0120 10:57:23.253639 16 cacher.go:460] cacher (leases.coordination.k8s.io): initialized 2026-01-20T10:57:23.254191816+00:00 stderr F I0120 10:57:23.253672 16 reflector.go:351] Caches populated for *coordination.Lease from storage/cacher.go:/leases 2026-01-20T10:57:23.260372379+00:00 stderr F I0120 10:57:23.260299 16 store.go:1579] "Monitoring resource count at path" resource="endpointslices.discovery.k8s.io" path="//endpointslices" 2026-01-20T10:57:23.260372379+00:00 stderr F I0120 10:57:23.260347 16 instance.go:707] Enabling API group "discovery.k8s.io". 
2026-01-20T10:57:23.262850875+00:00 stderr F I0120 10:57:23.262792 16 cacher.go:460] cacher (endpointslices.discovery.k8s.io): initialized 2026-01-20T10:57:23.262918296+00:00 stderr F I0120 10:57:23.262852 16 reflector.go:351] Caches populated for *discovery.EndpointSlice from storage/cacher.go:/endpointslices 2026-01-20T10:57:23.269680266+00:00 stderr F I0120 10:57:23.269587 16 store.go:1579] "Monitoring resource count at path" resource="networkpolicies.networking.k8s.io" path="//networkpolicies" 2026-01-20T10:57:23.270964519+00:00 stderr F I0120 10:57:23.270895 16 cacher.go:460] cacher (networkpolicies.networking.k8s.io): initialized 2026-01-20T10:57:23.270964519+00:00 stderr F I0120 10:57:23.270922 16 reflector.go:351] Caches populated for *networking.NetworkPolicy from storage/cacher.go:/networkpolicies 2026-01-20T10:57:23.278442087+00:00 stderr F I0120 10:57:23.278372 16 store.go:1579] "Monitoring resource count at path" resource="ingresses.networking.k8s.io" path="//ingress" 2026-01-20T10:57:23.279690000+00:00 stderr F I0120 10:57:23.279608 16 cacher.go:460] cacher (ingresses.networking.k8s.io): initialized 2026-01-20T10:57:23.279690000+00:00 stderr F I0120 10:57:23.279645 16 reflector.go:351] Caches populated for *networking.Ingress from storage/cacher.go:/ingress 2026-01-20T10:57:23.286983523+00:00 stderr F I0120 10:57:23.286890 16 store.go:1579] "Monitoring resource count at path" resource="ingressclasses.networking.k8s.io" path="//ingressclasses" 2026-01-20T10:57:23.287120857+00:00 stderr F I0120 10:57:23.287011 16 instance.go:707] Enabling API group "networking.k8s.io". 
2026-01-20T10:57:23.288907343+00:00 stderr F I0120 10:57:23.288814 16 cacher.go:460] cacher (ingressclasses.networking.k8s.io): initialized 2026-01-20T10:57:23.288907343+00:00 stderr F I0120 10:57:23.288843 16 reflector.go:351] Caches populated for *networking.IngressClass from storage/cacher.go:/ingressclasses 2026-01-20T10:57:23.297308186+00:00 stderr F I0120 10:57:23.297179 16 store.go:1579] "Monitoring resource count at path" resource="runtimeclasses.node.k8s.io" path="//runtimeclasses" 2026-01-20T10:57:23.297308186+00:00 stderr F I0120 10:57:23.297230 16 instance.go:707] Enabling API group "node.k8s.io". 2026-01-20T10:57:23.303413638+00:00 stderr F I0120 10:57:23.303265 16 cacher.go:460] cacher (runtimeclasses.node.k8s.io): initialized 2026-01-20T10:57:23.303413638+00:00 stderr F I0120 10:57:23.303299 16 reflector.go:351] Caches populated for *node.RuntimeClass from storage/cacher.go:/runtimeclasses 2026-01-20T10:57:23.307632859+00:00 stderr F I0120 10:57:23.307526 16 store.go:1579] "Monitoring resource count at path" resource="poddisruptionbudgets.policy" path="//poddisruptionbudgets" 2026-01-20T10:57:23.307632859+00:00 stderr F I0120 10:57:23.307573 16 instance.go:707] Enabling API group "policy". 
2026-01-20T10:57:23.309238842+00:00 stderr F I0120 10:57:23.309151 16 cacher.go:460] cacher (poddisruptionbudgets.policy): initialized 2026-01-20T10:57:23.309238842+00:00 stderr F I0120 10:57:23.309192 16 reflector.go:351] Caches populated for *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets 2026-01-20T10:57:23.314874211+00:00 stderr F I0120 10:57:23.314756 16 store.go:1579] "Monitoring resource count at path" resource="roles.rbac.authorization.k8s.io" path="//roles" 2026-01-20T10:57:23.319559984+00:00 stderr F I0120 10:57:23.319473 16 cacher.go:460] cacher (roles.rbac.authorization.k8s.io): initialized 2026-01-20T10:57:23.319559984+00:00 stderr F I0120 10:57:23.319500 16 reflector.go:351] Caches populated for *rbac.Role from storage/cacher.go:/roles 2026-01-20T10:57:23.325355927+00:00 stderr F I0120 10:57:23.325272 16 store.go:1579] "Monitoring resource count at path" resource="rolebindings.rbac.authorization.k8s.io" path="//rolebindings" 2026-01-20T10:57:23.331909641+00:00 stderr F I0120 10:57:23.331797 16 cacher.go:460] cacher (rolebindings.rbac.authorization.k8s.io): initialized 2026-01-20T10:57:23.331909641+00:00 stderr F I0120 10:57:23.331822 16 reflector.go:351] Caches populated for *rbac.RoleBinding from storage/cacher.go:/rolebindings 2026-01-20T10:57:23.335192598+00:00 stderr F I0120 10:57:23.335129 16 store.go:1579] "Monitoring resource count at path" resource="clusterroles.rbac.authorization.k8s.io" path="//clusterroles" 2026-01-20T10:57:23.340946140+00:00 stderr F I0120 10:57:23.340865 16 cacher.go:460] cacher (clusterroles.rbac.authorization.k8s.io): initialized 2026-01-20T10:57:23.340963881+00:00 stderr F I0120 10:57:23.340945 16 reflector.go:351] Caches populated for *rbac.ClusterRole from storage/cacher.go:/clusterroles 2026-01-20T10:57:23.342506902+00:00 stderr F I0120 10:57:23.342449 16 store.go:1579] "Monitoring resource count at path" resource="clusterrolebindings.rbac.authorization.k8s.io" path="//clusterrolebindings" 
2026-01-20T10:57:23.342674316+00:00 stderr F I0120 10:57:23.342630 16 instance.go:707] Enabling API group "rbac.authorization.k8s.io". 2026-01-20T10:57:23.346367054+00:00 stderr F I0120 10:57:23.346313 16 cacher.go:460] cacher (clusterrolebindings.rbac.authorization.k8s.io): initialized 2026-01-20T10:57:23.346384384+00:00 stderr F I0120 10:57:23.346365 16 reflector.go:351] Caches populated for *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings 2026-01-20T10:57:23.353003479+00:00 stderr F I0120 10:57:23.352628 16 store.go:1579] "Monitoring resource count at path" resource="priorityclasses.scheduling.k8s.io" path="//priorityclasses" 2026-01-20T10:57:23.353003479+00:00 stderr F I0120 10:57:23.352672 16 instance.go:707] Enabling API group "scheduling.k8s.io". 2026-01-20T10:57:23.354315094+00:00 stderr F I0120 10:57:23.354239 16 cacher.go:460] cacher (priorityclasses.scheduling.k8s.io): initialized 2026-01-20T10:57:23.354315094+00:00 stderr F I0120 10:57:23.354271 16 reflector.go:351] Caches populated for *scheduling.PriorityClass from storage/cacher.go:/priorityclasses 2026-01-20T10:57:23.360590850+00:00 stderr F I0120 10:57:23.360516 16 store.go:1579] "Monitoring resource count at path" resource="storageclasses.storage.k8s.io" path="//storageclasses" 2026-01-20T10:57:23.361709109+00:00 stderr F I0120 10:57:23.361639 16 cacher.go:460] cacher (storageclasses.storage.k8s.io): initialized 2026-01-20T10:57:23.361709109+00:00 stderr F I0120 10:57:23.361664 16 reflector.go:351] Caches populated for *storage.StorageClass from storage/cacher.go:/storageclasses 2026-01-20T10:57:23.367574254+00:00 stderr F I0120 10:57:23.367398 16 store.go:1579] "Monitoring resource count at path" resource="volumeattachments.storage.k8s.io" path="//volumeattachments" 2026-01-20T10:57:23.368309104+00:00 stderr F I0120 10:57:23.368086 16 cacher.go:460] cacher (volumeattachments.storage.k8s.io): initialized 2026-01-20T10:57:23.368309104+00:00 stderr F I0120 10:57:23.368105 16 
reflector.go:351] Caches populated for *storage.VolumeAttachment from storage/cacher.go:/volumeattachments 2026-01-20T10:57:23.374367304+00:00 stderr F I0120 10:57:23.374297 16 store.go:1579] "Monitoring resource count at path" resource="csinodes.storage.k8s.io" path="//csinodes" 2026-01-20T10:57:23.376930462+00:00 stderr F I0120 10:57:23.376843 16 cacher.go:460] cacher (csinodes.storage.k8s.io): initialized 2026-01-20T10:57:23.376930462+00:00 stderr F I0120 10:57:23.376867 16 reflector.go:351] Caches populated for *storage.CSINode from storage/cacher.go:/csinodes 2026-01-20T10:57:23.382524300+00:00 stderr F I0120 10:57:23.382331 16 store.go:1579] "Monitoring resource count at path" resource="csidrivers.storage.k8s.io" path="//csidrivers" 2026-01-20T10:57:23.386246798+00:00 stderr F I0120 10:57:23.384655 16 cacher.go:460] cacher (csidrivers.storage.k8s.io): initialized 2026-01-20T10:57:23.386246798+00:00 stderr F I0120 10:57:23.384670 16 reflector.go:351] Caches populated for *storage.CSIDriver from storage/cacher.go:/csidrivers 2026-01-20T10:57:23.391106147+00:00 stderr F I0120 10:57:23.391001 16 store.go:1579] "Monitoring resource count at path" resource="csistoragecapacities.storage.k8s.io" path="//csistoragecapacities" 2026-01-20T10:57:23.392099363+00:00 stderr F I0120 10:57:23.391107 16 instance.go:707] Enabling API group "storage.k8s.io". 
2026-01-20T10:57:23.393106730+00:00 stderr F I0120 10:57:23.392919 16 cacher.go:460] cacher (csistoragecapacities.storage.k8s.io): initialized 2026-01-20T10:57:23.393106730+00:00 stderr F I0120 10:57:23.392940 16 reflector.go:351] Caches populated for *storage.CSIStorageCapacity from storage/cacher.go:/csistoragecapacities 2026-01-20T10:57:23.400214607+00:00 stderr F I0120 10:57:23.400139 16 store.go:1579] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas" 2026-01-20T10:57:23.406288478+00:00 stderr F I0120 10:57:23.405661 16 cacher.go:460] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized 2026-01-20T10:57:23.406288478+00:00 stderr F I0120 10:57:23.405692 16 reflector.go:351] Caches populated for *flowcontrol.FlowSchema from storage/cacher.go:/flowschemas 2026-01-20T10:57:23.410186751+00:00 stderr F I0120 10:57:23.410122 16 store.go:1579] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations" 2026-01-20T10:57:23.411790843+00:00 stderr F I0120 10:57:23.411705 16 cacher.go:460] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized 2026-01-20T10:57:23.411790843+00:00 stderr F I0120 10:57:23.411727 16 reflector.go:351] Caches populated for *flowcontrol.PriorityLevelConfiguration from storage/cacher.go:/prioritylevelconfigurations 2026-01-20T10:57:23.418871521+00:00 stderr F I0120 10:57:23.418807 16 store.go:1579] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas" 2026-01-20T10:57:23.419894988+00:00 stderr F I0120 10:57:23.419814 16 cacher.go:460] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized 2026-01-20T10:57:23.419894988+00:00 stderr F I0120 10:57:23.419835 16 reflector.go:351] Caches populated for *flowcontrol.FlowSchema from storage/cacher.go:/flowschemas 2026-01-20T10:57:23.425384913+00:00 stderr F I0120 
10:57:23.425323 16 store.go:1579] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations" 2026-01-20T10:57:23.425430964+00:00 stderr F I0120 10:57:23.425392 16 instance.go:707] Enabling API group "flowcontrol.apiserver.k8s.io". 2026-01-20T10:57:23.426161823+00:00 stderr F I0120 10:57:23.426107 16 cacher.go:460] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized 2026-01-20T10:57:23.426176804+00:00 stderr F I0120 10:57:23.426150 16 reflector.go:351] Caches populated for *flowcontrol.PriorityLevelConfiguration from storage/cacher.go:/prioritylevelconfigurations 2026-01-20T10:57:23.436145458+00:00 stderr F I0120 10:57:23.436086 16 store.go:1579] "Monitoring resource count at path" resource="deployments.apps" path="//deployments" 2026-01-20T10:57:23.441312015+00:00 stderr F I0120 10:57:23.441230 16 cacher.go:460] cacher (deployments.apps): initialized 2026-01-20T10:57:23.441312015+00:00 stderr F I0120 10:57:23.441264 16 reflector.go:351] Caches populated for *apps.Deployment from storage/cacher.go:/deployments 2026-01-20T10:57:23.444412856+00:00 stderr F I0120 10:57:23.444134 16 store.go:1579] "Monitoring resource count at path" resource="statefulsets.apps" path="//statefulsets" 2026-01-20T10:57:23.445429993+00:00 stderr F I0120 10:57:23.445351 16 cacher.go:460] cacher (statefulsets.apps): initialized 2026-01-20T10:57:23.445429993+00:00 stderr F I0120 10:57:23.445377 16 reflector.go:351] Caches populated for *apps.StatefulSet from storage/cacher.go:/statefulsets 2026-01-20T10:57:23.451431571+00:00 stderr F I0120 10:57:23.451360 16 store.go:1579] "Monitoring resource count at path" resource="daemonsets.apps" path="//daemonsets" 2026-01-20T10:57:23.458271563+00:00 stderr F I0120 10:57:23.457344 16 cacher.go:460] cacher (daemonsets.apps): initialized 2026-01-20T10:57:23.458271563+00:00 stderr F I0120 10:57:23.457373 16 reflector.go:351] Caches populated for 
*apps.DaemonSet from storage/cacher.go:/daemonsets 2026-01-20T10:57:23.458271563+00:00 stderr F I0120 10:57:23.458214 16 store.go:1579] "Monitoring resource count at path" resource="replicasets.apps" path="//replicasets" 2026-01-20T10:57:23.467139087+00:00 stderr F I0120 10:57:23.467022 16 store.go:1579] "Monitoring resource count at path" resource="controllerrevisions.apps" path="//controllerrevisions" 2026-01-20T10:57:23.467205529+00:00 stderr F I0120 10:57:23.467159 16 instance.go:707] Enabling API group "apps". 2026-01-20T10:57:23.467680151+00:00 stderr F I0120 10:57:23.467605 16 cacher.go:460] cacher (replicasets.apps): initialized 2026-01-20T10:57:23.467680151+00:00 stderr F I0120 10:57:23.467636 16 reflector.go:351] Caches populated for *apps.ReplicaSet from storage/cacher.go:/replicasets 2026-01-20T10:57:23.470451325+00:00 stderr F I0120 10:57:23.469662 16 cacher.go:460] cacher (controllerrevisions.apps): initialized 2026-01-20T10:57:23.470451325+00:00 stderr F I0120 10:57:23.469681 16 reflector.go:351] Caches populated for *apps.ControllerRevision from storage/cacher.go:/controllerrevisions 2026-01-20T10:57:23.474010769+00:00 stderr F I0120 10:57:23.473926 16 store.go:1579] "Monitoring resource count at path" resource="validatingwebhookconfigurations.admissionregistration.k8s.io" path="//validatingwebhookconfigurations" 2026-01-20T10:57:23.475389706+00:00 stderr F I0120 10:57:23.475333 16 cacher.go:460] cacher (validatingwebhookconfigurations.admissionregistration.k8s.io): initialized 2026-01-20T10:57:23.475389706+00:00 stderr F I0120 10:57:23.475355 16 reflector.go:351] Caches populated for *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations 2026-01-20T10:57:23.480970823+00:00 stderr F I0120 10:57:23.480904 16 store.go:1579] "Monitoring resource count at path" resource="mutatingwebhookconfigurations.admissionregistration.k8s.io" path="//mutatingwebhookconfigurations" 
2026-01-20T10:57:23.482077972+00:00 stderr F I0120 10:57:23.481023 16 instance.go:707] Enabling API group "admissionregistration.k8s.io". 2026-01-20T10:57:23.485585265+00:00 stderr F I0120 10:57:23.483463 16 cacher.go:460] cacher (mutatingwebhookconfigurations.admissionregistration.k8s.io): initialized 2026-01-20T10:57:23.485585265+00:00 stderr F I0120 10:57:23.483492 16 reflector.go:351] Caches populated for *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations 2026-01-20T10:57:23.490992369+00:00 stderr F I0120 10:57:23.490626 16 store.go:1579] "Monitoring resource count at path" resource="events" path="//events" 2026-01-20T10:57:23.490992369+00:00 stderr F I0120 10:57:23.490683 16 instance.go:707] Enabling API group "events.k8s.io". 2026-01-20T10:57:23.490992369+00:00 stderr F I0120 10:57:23.490702 16 instance.go:694] API group "resource.k8s.io" is not enabled, skipping. 2026-01-20T10:57:23.500306165+00:00 stderr F I0120 10:57:23.498994 16 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.500306165+00:00 stderr F W0120 10:57:23.499017 16 genericapiserver.go:774] Skipping API authentication.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.500306165+00:00 stderr F W0120 10:57:23.499023 16 genericapiserver.go:774] Skipping API authentication.k8s.io/v1alpha1 because it has no resources. 2026-01-20T10:57:23.500306165+00:00 stderr F I0120 10:57:23.499443 16 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.500306165+00:00 stderr F W0120 10:57:23.499449 16 genericapiserver.go:774] Skipping API authorization.k8s.io/v1beta1 because it has no resources. 
2026-01-20T10:57:23.500306165+00:00 stderr F I0120 10:57:23.500086 16 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager 2026-01-20T10:57:23.500749926+00:00 stderr F I0120 10:57:23.500661 16 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager 2026-01-20T10:57:23.500749926+00:00 stderr F W0120 10:57:23.500674 16 genericapiserver.go:774] Skipping API autoscaling/v2beta1 because it has no resources. 2026-01-20T10:57:23.500749926+00:00 stderr F W0120 10:57:23.500678 16 genericapiserver.go:774] Skipping API autoscaling/v2beta2 because it has no resources. 2026-01-20T10:57:23.502212665+00:00 stderr F I0120 10:57:23.502135 16 handler.go:275] Adding GroupVersion batch v1 to ResourceManager 2026-01-20T10:57:23.502212665+00:00 stderr F W0120 10:57:23.502149 16 genericapiserver.go:774] Skipping API batch/v1beta1 because it has no resources. 2026-01-20T10:57:23.502796881+00:00 stderr F I0120 10:57:23.502718 16 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.502796881+00:00 stderr F W0120 10:57:23.502731 16 genericapiserver.go:774] Skipping API certificates.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.502796881+00:00 stderr F W0120 10:57:23.502735 16 genericapiserver.go:774] Skipping API certificates.k8s.io/v1alpha1 because it has no resources. 2026-01-20T10:57:23.503357025+00:00 stderr F I0120 10:57:23.503293 16 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.503357025+00:00 stderr F W0120 10:57:23.503306 16 genericapiserver.go:774] Skipping API coordination.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.503357025+00:00 stderr F W0120 10:57:23.503332 16 genericapiserver.go:774] Skipping API discovery.k8s.io/v1beta1 because it has no resources. 
2026-01-20T10:57:23.503875069+00:00 stderr F I0120 10:57:23.503795 16 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.504923726+00:00 stderr F I0120 10:57:23.504848 16 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.504923726+00:00 stderr F W0120 10:57:23.504865 16 genericapiserver.go:774] Skipping API networking.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.504923726+00:00 stderr F W0120 10:57:23.504869 16 genericapiserver.go:774] Skipping API networking.k8s.io/v1alpha1 because it has no resources. 2026-01-20T10:57:23.505266896+00:00 stderr F I0120 10:57:23.505155 16 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.505266896+00:00 stderr F W0120 10:57:23.505168 16 genericapiserver.go:774] Skipping API node.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.505266896+00:00 stderr F W0120 10:57:23.505172 16 genericapiserver.go:774] Skipping API node.k8s.io/v1alpha1 because it has no resources. 2026-01-20T10:57:23.505693817+00:00 stderr F I0120 10:57:23.505635 16 handler.go:275] Adding GroupVersion policy v1 to ResourceManager 2026-01-20T10:57:23.505693817+00:00 stderr F W0120 10:57:23.505650 16 genericapiserver.go:774] Skipping API policy/v1beta1 because it has no resources. 2026-01-20T10:57:23.508574883+00:00 stderr F I0120 10:57:23.508478 16 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.508574883+00:00 stderr F W0120 10:57:23.508494 16 genericapiserver.go:774] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.508574883+00:00 stderr F W0120 10:57:23.508498 16 genericapiserver.go:774] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. 
2026-01-20T10:57:23.509129848+00:00 stderr F I0120 10:57:23.508981 16 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.509129848+00:00 stderr F W0120 10:57:23.508995 16 genericapiserver.go:774] Skipping API scheduling.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.509129848+00:00 stderr F W0120 10:57:23.508999 16 genericapiserver.go:774] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. 2026-01-20T10:57:23.510469703+00:00 stderr F I0120 10:57:23.510367 16 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.510469703+00:00 stderr F W0120 10:57:23.510382 16 genericapiserver.go:774] Skipping API storage.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.510469703+00:00 stderr F W0120 10:57:23.510386 16 genericapiserver.go:774] Skipping API storage.k8s.io/v1alpha1 because it has no resources. 2026-01-20T10:57:23.511365267+00:00 stderr F I0120 10:57:23.511289 16 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager 2026-01-20T10:57:23.512049755+00:00 stderr F I0120 10:57:23.511960 16 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.512049755+00:00 stderr F W0120 10:57:23.511974 16 genericapiserver.go:774] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources. 2026-01-20T10:57:23.512049755+00:00 stderr F W0120 10:57:23.511978 16 genericapiserver.go:774] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.514602243+00:00 stderr F I0120 10:57:23.514474 16 handler.go:275] Adding GroupVersion apps v1 to ResourceManager 2026-01-20T10:57:23.514602243+00:00 stderr F W0120 10:57:23.514491 16 genericapiserver.go:774] Skipping API apps/v1beta2 because it has no resources. 
2026-01-20T10:57:23.514602243+00:00 stderr F W0120 10:57:23.514495 16 genericapiserver.go:774] Skipping API apps/v1beta1 because it has no resources. 2026-01-20T10:57:23.516443652+00:00 stderr F I0120 10:57:23.516318 16 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.516443652+00:00 stderr F W0120 10:57:23.516333 16 genericapiserver.go:774] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.516443652+00:00 stderr F W0120 10:57:23.516338 16 genericapiserver.go:774] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources. 2026-01-20T10:57:23.518054924+00:00 stderr F I0120 10:57:23.517600 16 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.518054924+00:00 stderr F W0120 10:57:23.517616 16 genericapiserver.go:774] Skipping API events.k8s.io/v1beta1 because it has no resources. 2026-01-20T10:57:23.529450375+00:00 stderr F I0120 10:57:23.529340 16 store.go:1579] "Monitoring resource count at path" resource="apiservices.apiregistration.k8s.io" path="//apiregistration.k8s.io/apiservices" 2026-01-20T10:57:23.530639087+00:00 stderr F I0120 10:57:23.530335 16 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager 2026-01-20T10:57:23.530639087+00:00 stderr F W0120 10:57:23.530355 16 genericapiserver.go:774] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. 
2026-01-20T10:57:23.531459148+00:00 stderr F I0120 10:57:23.531352 16 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="aggregator-proxy-cert::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key" 2026-01-20T10:57:23.540049855+00:00 stderr F I0120 10:57:23.539928 16 cacher.go:460] cacher (apiservices.apiregistration.k8s.io): initialized 2026-01-20T10:57:23.540049855+00:00 stderr F I0120 10:57:23.540013 16 reflector.go:351] Caches populated for *apiregistration.APIService from storage/cacher.go:/apiregistration.k8s.io/apiservices 2026-01-20T10:57:23.951198989+00:00 stderr F I0120 10:57:23.950520 16 genericapiserver.go:576] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s" 2026-01-20T10:57:23.951198989+00:00 stderr F I0120 10:57:23.950794 16 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:57:23.951198989+00:00 stderr F I0120 10:57:23.950834 16 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:57:23.952550134+00:00 stderr F I0120 10:57:23.951428 16 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2026-01-20T10:57:23.952550134+00:00 stderr F I0120 10:57:23.951675 16 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key" 2026-01-20T10:57:23.952550134+00:00 stderr F I0120 10:57:23.952015 16 dynamic_serving_content.go:132] "Starting controller" 
name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2026-01-20T10:57:23.952826511+00:00 stderr F I0120 10:57:23.952659 16 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key" 2026-01-20T10:57:23.952986625+00:00 stderr F I0120 10:57:23.952921 16 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" 2026-01-20T10:57:23.953140859+00:00 stderr F I0120 10:57:23.953054 16 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key" 2026-01-20T10:57:23.953306925+00:00 stderr F I0120 10:57:23.953172 16 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:57:23.953143139 +0000 UTC))" 2026-01-20T10:57:23.953306925+00:00 stderr F I0120 10:57:23.953248 16 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] 
issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:57:23.953210632 +0000 UTC))" 2026-01-20T10:57:23.953306925+00:00 stderr F I0120 10:57:23.953269 16 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:57:23.953256713 +0000 UTC))" 2026-01-20T10:57:23.953338786+00:00 stderr F I0120 10:57:23.953304 16 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:57:23.953274594 +0000 UTC))" 2026-01-20T10:57:23.953338786+00:00 stderr F I0120 10:57:23.953327 16 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:57:23.953312745 +0000 UTC))" 2026-01-20T10:57:23.953376556+00:00 stderr F I0120 10:57:23.953344 16 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.953333225 +0000 UTC))" 2026-01-20T10:57:23.953415097+00:00 stderr F I0120 10:57:23.953382 16 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.953350956 +0000 UTC))" 2026-01-20T10:57:23.953415097+00:00 stderr F I0120 10:57:23.953402 16 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.953390967 +0000 UTC))" 2026-01-20T10:57:23.953449438+00:00 stderr F I0120 10:57:23.953417 16 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:57:23.953406777 +0000 UTC))" 2026-01-20T10:57:23.953491409+00:00 stderr F I0120 
10:57:23.953459 16 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.953425918 +0000 UTC))" 2026-01-20T10:57:23.953523370+00:00 stderr F I0120 10:57:23.953482 16 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.953469549 +0000 UTC))" 2026-01-20T10:57:23.954275740+00:00 stderr F I0120 10:57:23.953812 16 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" certDetail="\"10.217.4.1\" [serving] validServingFor=[10.217.4.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,10.217.4.1] issuer=\"kube-apiserver-service-network-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.953793857 +0000 UTC))" 2026-01-20T10:57:23.954275740+00:00 stderr F I0120 10:57:23.954129 16 named_certificates.go:53] "Loaded SNI cert" index=5 
certName="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key" certDetail="\"localhost-recovery\" [serving] validServingFor=[localhost-recovery] issuer=\"openshift-kube-apiserver-operator_localhost-recovery-serving-signer@1719406013\" (2024-06-26 12:47:06 +0000 UTC to 2034-06-24 12:46:53 +0000 UTC (now=2026-01-20 10:57:23.954106946 +0000 UTC))" 2026-01-20T10:57:23.954534607+00:00 stderr F I0120 10:57:23.954475 16 named_certificates.go:53] "Loaded SNI cert" index=4 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" certDetail="\"api-int.crc.testing\" [serving] validServingFor=[api-int.crc.testing] issuer=\"kube-apiserver-lb-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.954459135 +0000 UTC))" 2026-01-20T10:57:23.954849975+00:00 stderr F I0120 10:57:23.954791 16 named_certificates.go:53] "Loaded SNI cert" index=3 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key" certDetail="\"api.crc.testing\" [serving] validServingFor=[api.crc.testing] issuer=\"kube-apiserver-lb-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.954776463 +0000 UTC))" 2026-01-20T10:57:23.955159213+00:00 stderr F I0120 10:57:23.955104 16 named_certificates.go:53] "Loaded SNI cert" index=2 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" certDetail="\"10.217.4.1\" [serving] 
validServingFor=[10.217.4.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,10.217.4.1] issuer=\"kube-apiserver-service-network-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.955085641 +0000 UTC))" 2026-01-20T10:57:23.955487161+00:00 stderr F I0120 10:57:23.955433 16 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key" certDetail="\"127.0.0.1\" [serving] validServingFor=[127.0.0.1,localhost,127.0.0.1] issuer=\"kube-apiserver-localhost-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:23.95541668 +0000 UTC))" 2026-01-20T10:57:23.955777689+00:00 stderr F I0120 10:57:23.955723 16 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906642\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906642\" (2026-01-20 09:57:22 +0000 UTC to 2027-01-20 09:57:22 +0000 UTC (now=2026-01-20 10:57:23.955710117 +0000 UTC))" 2026-01-20T10:57:23.955820360+00:00 stderr F I0120 10:57:23.955780 16 secure_serving.go:213] Serving securely on [::]:6443 2026-01-20T10:57:23.955895592+00:00 stderr F I0120 10:57:23.955828 16 genericapiserver.go:700] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:57:23.955895592+00:00 stderr F I0120 10:57:23.955869 16 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key" 2026-01-20T10:57:23.955967944+00:00 stderr F I0120 10:57:23.955914 16 
tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:57:23.956081188+00:00 stderr F I0120 10:57:23.956000 16 available_controller.go:445] Starting AvailableConditionController 2026-01-20T10:57:23.956081188+00:00 stderr F I0120 10:57:23.956016 16 cache.go:32] Waiting for caches to sync for AvailableConditionController controller 2026-01-20T10:57:23.956081188+00:00 stderr F I0120 10:57:23.956044 16 controller.go:80] Starting OpenAPI V3 AggregationController 2026-01-20T10:57:23.956151970+00:00 stderr F I0120 10:57:23.956051 16 apf_controller.go:374] Starting API Priority and Fairness config controller 2026-01-20T10:57:23.956151970+00:00 stderr F I0120 10:57:23.956113 16 apiaccess_count_controller.go:89] Starting APIRequestCount controller. 2026-01-20T10:57:23.956206651+00:00 stderr F I0120 10:57:23.956161 16 customresource_discovery_controller.go:289] Starting DiscoveryController 2026-01-20T10:57:23.956206651+00:00 stderr F I0120 10:57:23.956184 16 controller.go:116] Starting legacy_token_tracking_controller 2026-01-20T10:57:23.956206651+00:00 stderr F I0120 10:57:23.956198 16 shared_informer.go:311] Waiting for caches to sync for configmaps 2026-01-20T10:57:23.956260903+00:00 stderr F I0120 10:57:23.956225 16 apiservice_controller.go:97] Starting APIServiceRegistrationController 2026-01-20T10:57:23.956260903+00:00 stderr F I0120 10:57:23.956235 16 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller 2026-01-20T10:57:23.956310704+00:00 stderr F I0120 10:57:23.956261 16 gc_controller.go:78] Starting apiserver lease garbage collector 2026-01-20T10:57:23.956698274+00:00 stderr F I0120 10:57:23.956594 16 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller 2026-01-20T10:57:23.956698274+00:00 stderr F I0120 10:57:23.956618 16 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller 2026-01-20T10:57:23.956715325+00:00 
stderr F I0120 10:57:23.956656 16 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:57:23.956829338+00:00 stderr F I0120 10:57:23.956762 16 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:57:23.956977241+00:00 stderr F I0120 10:57:23.956927 16 aggregator.go:163] waiting for initial CRD sync... 2026-01-20T10:57:23.956977241+00:00 stderr F I0120 10:57:23.956946 16 controller.go:78] Starting OpenAPI AggregationController 2026-01-20T10:57:23.957016782+00:00 stderr F I0120 10:57:23.956980 16 system_namespaces_controller.go:67] Starting system namespaces controller 2026-01-20T10:57:23.957088134+00:00 stderr F I0120 10:57:23.957032 16 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2026-01-20T10:57:23.957088134+00:00 stderr F I0120 10:57:23.957052 16 crdregistration_controller.go:112] Starting crd-autoregister controller 2026-01-20T10:57:23.957104625+00:00 stderr F I0120 10:57:23.957071 16 shared_informer.go:311] Waiting for caches to sync for crd-autoregister 2026-01-20T10:57:23.957203807+00:00 stderr F I0120 10:57:23.957124 16 handler_discovery.go:412] Starting ResourceDiscoveryManager 2026-01-20T10:57:23.957286839+00:00 stderr F I0120 10:57:23.957228 16 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController 2026-01-20T10:57:23.957367272+00:00 stderr F I0120 10:57:23.957323 16 controller.go:133] Starting OpenAPI controller 2026-01-20T10:57:23.957400402+00:00 stderr F I0120 10:57:23.957359 16 controller.go:85] Starting OpenAPI V3 controller 2026-01-20T10:57:23.957400402+00:00 stderr F I0120 10:57:23.957386 16 naming_controller.go:291] Starting NamingConditionController 2026-01-20T10:57:23.957460464+00:00 stderr F I0120 10:57:23.957418 16 establishing_controller.go:76] Starting 
EstablishingController 2026-01-20T10:57:23.971439084+00:00 stderr F W0120 10:57:23.969740 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-console/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:23.971439084+00:00 stderr F I0120 10:57:23.970225 16 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController 2026-01-20T10:57:23.971439084+00:00 stderr F I0120 10:57:23.970272 16 crd_finalizer.go:266] Starting CRDFinalizer 2026-01-20T10:57:23.971439084+00:00 stderr F W0120 10:57:23.970572 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:23.971439084+00:00 stderr F W0120 10:57:23.971133 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-machine-api/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:23.971785753+00:00 stderr F W0120 10:57:23.971659 16 patch_genericapiserver.go:204] Request to "/apis/apps/v1/controllerrevisions" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2026-01-20T10:57:23.971785753+00:00 stderr F W0120 10:57:23.971730 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-dns-operator/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.973027726+00:00 stderr F W0120 10:57:23.972379 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-image-registry/secrets" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.973027726+00:00 stderr F W0120 10:57:23.972536 16 patch_genericapiserver.go:204] Request to "/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.975639445+00:00 stderr F W0120 10:57:23.973103 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-ingress-canary/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.975639445+00:00 stderr F W0120 10:57:23.973472 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/kube-system/configmaps" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.975639445+00:00 stderr F W0120 10:57:23.973662 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-image-registry/secrets" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.975639445+00:00 stderr F W0120 10:57:23.974393 16 patch_genericapiserver.go:204] Request to "/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master" (source IP 38.102.83.220:36296, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.975639445+00:00 stderr F W0120 10:57:23.974823 16 patch_genericapiserver.go:204] Request to "/apis/operator.openshift.io/v1/networks/cluster" (source IP 38.102.83.220:36250, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.975639445+00:00 stderr F W0120 10:57:23.974947 16 patch_genericapiserver.go:204] Request to "/apis/config.openshift.io/v1/clusterversions/version/status" (source IP 38.102.83.220:36236, user agent "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format/openshift-cluster-version") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.983785750+00:00 stderr F W0120 10:57:23.977717 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.983785750+00:00 stderr F W0120 10:57:23.979044 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-ovn-kubernetes/pods" (source IP 38.102.83.220:36342, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.983785750+00:00 stderr F W0120 10:57:23.979597 16 patch_genericapiserver.go:204] Request to "/apis/networking.k8s.io/v1/networkpolicies" (source IP 38.102.83.220:36342, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.983785750+00:00 stderr F I0120 10:57:23.980382 16 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2026-01-20T10:57:23.983785750+00:00 stderr F W0120 10:57:23.980720 16 patch_genericapiserver.go:204] Request to "/api/v1/pods" (source IP 38.102.83.220:36322, user agent "multus-daemon/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:23.983785750+00:00 stderr F I0120 10:57:23.981453 16 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/configmaps" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2026-01-20T10:57:23.983785750+00:00 stderr F I0120 10:57:23.981598 16 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/secrets" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2026-01-20T10:57:23.983785750+00:00 stderr F E0120 10:57:23.981841 16 sdn_readyz_wait.go:100] api-openshift-apiserver-available did not find any IPs for kubernetes.default.svc endpoint
2026-01-20T10:57:23.986086031+00:00 stderr F I0120 10:57:23.985977 16 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-scheduler/secrets" (user agent "cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2026-01-20T10:57:23.989871992+00:00 stderr F I0120 10:57:23.988424 16 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2026-01-20T10:57:23.989871992+00:00 stderr F I0120 10:57:23.989246 16 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.989911683+00:00 stderr F I0120 10:57:23.989862 16 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.990793046+00:00 stderr F I0120 10:57:23.990092 16 reflector.go:351] Caches populated for *v1.VolumeAttachment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.990793046+00:00 stderr F I0120 10:57:23.990246 16 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.990793046+00:00 stderr F I0120 10:57:23.990625 16 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.991155505+00:00 stderr F I0120 10:57:23.991074 16 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.992704256+00:00 stderr F I0120 10:57:23.991597 16 reflector.go:351] Caches populated for *v1.APIService from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.992704256+00:00 stderr F I0120 10:57:23.992534 16 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.992745678+00:00 stderr F I0120 10:57:23.992666 16 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.993407725+00:00 stderr F I0120 10:57:23.993337 16 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.993931479+00:00 stderr F I0120 10:57:23.993859 16 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.994023461+00:00 stderr F I0120 10:57:23.993967 16 reflector.go:351] Caches populated for *v1.FlowSchema from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.995605103+00:00 stderr F I0120 10:57:23.994751 16 reflector.go:351] Caches populated for *v1.PriorityLevelConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.995605103+00:00 stderr F I0120 10:57:23.994789 16 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.995605103+00:00 stderr F I0120 10:57:23.995085 16 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.995605103+00:00 stderr F I0120 10:57:23.995184 16 reflector.go:351] Caches populated for *v1.PriorityClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.995605103+00:00 stderr F I0120 10:57:23.995441 16 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.995912781+00:00 stderr F E0120 10:57:23.995842 16 sdn_readyz_wait.go:100] api-openshift-oauth-apiserver-available did not find any IPs for kubernetes.default.svc endpoint
2026-01-20T10:57:23.995912781+00:00 stderr F I0120 10:57:23.995860 16 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:23.996008884+00:00 stderr F I0120 10:57:23.995950 16 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.000539973+00:00 stderr F I0120 10:57:24.000422 16 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.005657698+00:00 stderr F I0120 10:57:24.005577 16 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.012038858+00:00 stderr F I0120 10:57:24.011927 16 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.012565082+00:00 stderr F I0120 10:57:24.012448 16 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.018892939+00:00 stderr F I0120 10:57:24.018796 16 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.036465933+00:00 stderr F W0120 10:57:24.036346 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-etcd-operator/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.046198441+00:00 stderr F W0120 10:57:24.045775 16 patch_genericapiserver.go:204] Request to "/apis/apps/v1/daemonsets" (source IP 38.102.83.220:36250, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.047628228+00:00 stderr F I0120 10:57:24.047518 16 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.057900880+00:00 stderr F I0120 10:57:24.057787 16 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
2026-01-20T10:57:24.057900880+00:00 stderr F I0120 10:57:24.057831 16 apf_controller.go:379] Running API Priority and Fairness config worker
2026-01-20T10:57:24.057900880+00:00 stderr F I0120 10:57:24.057860 16 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
2026-01-20T10:57:24.058140897+00:00 stderr F I0120 10:57:24.058071 16 cache.go:39] Caches are synced for AvailableConditionController controller
2026-01-20T10:57:24.058606769+00:00 stderr F I0120 10:57:24.058505 16 apf_controller.go:861] Introducing queues for priority level "node-high": config={"type":"Limited","limited":{"nominalConcurrencyShares":40,"limitResponse":{"type":"Queue","queuing":{"queues":64,"handSize":6,"queueLengthLimit":50}},"lendablePercent":25}}, nominalCL=628, lendableCL=157, borrowingCL=4000, currentCL=550, quiescing=false (shares=0xc00360f068, shareSum=255)
2026-01-20T10:57:24.058606769+00:00 stderr F I0120 10:57:24.058575 16 apf_controller.go:861] Introducing queues for priority level "system": config={"type":"Limited","limited":{"nominalConcurrencyShares":30,"limitResponse":{"type":"Queue","queuing":{"queues":64,"handSize":6,"queueLengthLimit":50}},"lendablePercent":33}}, nominalCL=471, lendableCL=155, borrowingCL=4000, currentCL=394, quiescing=false (shares=0xc00360f168, shareSum=255)
2026-01-20T10:57:24.058646830+00:00 stderr F I0120 10:57:24.058591 16 apf_controller.go:861] Introducing queues for priority level "workload-low": config={"type":"Limited","limited":{"nominalConcurrencyShares":100,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":90}}, nominalCL=1569, lendableCL=1412, borrowingCL=4000, currentCL=863, quiescing=false (shares=0xc00360f220, shareSum=255)
2026-01-20T10:57:24.058659160+00:00 stderr F I0120 10:57:24.058630 16 apf_controller.go:861] Introducing queues for priority level "workload-high": config={"type":"Limited","limited":{"nominalConcurrencyShares":40,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":50}}, nominalCL=628, lendableCL=314, borrowingCL=4000, currentCL=471, quiescing=false (shares=0xc00360f1d0, shareSum=255)
2026-01-20T10:57:24.058708351+00:00 stderr F I0120 10:57:24.058656 16 apf_controller.go:861] Introducing queues for priority level "global-default": config={"type":"Limited","limited":{"nominalConcurrencyShares":20,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":50}}, nominalCL=314, lendableCL=157, borrowingCL=4000, currentCL=236, quiescing=false (shares=0xc00360efa0, shareSum=255)
2026-01-20T10:57:24.058708351+00:00 stderr F I0120 10:57:24.058681 16 apf_controller.go:861] Introducing queues for priority level "leader-election": config={"type":"Limited","limited":{"nominalConcurrencyShares":10,"limitResponse":{"type":"Queue","queuing":{"queues":16,"handSize":4,"queueLengthLimit":50}},"lendablePercent":0}}, nominalCL=157, lendableCL=0, borrowingCL=4000, currentCL=157, quiescing=false (shares=0xc00360f010, shareSum=255)
2026-01-20T10:57:24.058766963+00:00 stderr F I0120 10:57:24.058700 16 apf_controller.go:861] Introducing queues for priority level "openshift-control-plane-operators": config={"type":"Limited","limited":{"nominalConcurrencyShares":10,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":33}}, nominalCL=157, lendableCL=52, borrowingCL=4000, currentCL=131, quiescing=false (shares=0xc00360f0d8, shareSum=255)
2026-01-20T10:57:24.058766963+00:00 stderr F I0120 10:57:24.058740 16 apf_controller.go:869] Retaining queues for priority level "catch-all": config={"type":"Limited","limited":{"nominalConcurrencyShares":5,"limitResponse":{"type":"Reject"},"lendablePercent":0}}, nominalCL=79, lendableCL=0, borrowingCL=4000, currentCL=4000, quiescing=false, numPending=0 (shares=0xc00360ef30, shareSum=255)
2026-01-20T10:57:24.058813064+00:00 stderr F I0120 10:57:24.058765 16 shared_informer.go:318] Caches are synced for configmaps
2026-01-20T10:57:24.058896176+00:00 stderr F I0120 10:57:24.058814 16 apf_controller.go:455] "Update CurrentCL" plName="global-default" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=357 concurrencyDenominator=357 backstop=false
2026-01-20T10:57:24.058896176+00:00 stderr F I0120 10:57:24.058848 16 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2026-01-20T10:57:24.058976218+00:00 stderr F I0120 10:57:24.058906 16 apf_controller.go:455] "Update CurrentCL" plName="node-high" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=1072 concurrencyDenominator=1072 backstop=false
2026-01-20T10:57:24.059033710+00:00 stderr F I0120 10:57:24.058972 16 apf_controller.go:455] "Update CurrentCL" plName="system" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=719 concurrencyDenominator=719 backstop=false
2026-01-20T10:57:24.059127202+00:00 stderr F I0120 10:57:24.059030 16 apf_controller.go:455] "Update CurrentCL" plName="workload-low" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=357 concurrencyDenominator=357 backstop=false
2026-01-20T10:57:24.059230095+00:00 stderr F I0120 10:57:24.059165 16 apf_controller.go:455] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=3 seatDemandAvg=3.0000000000000004 seatDemandStdev=0 seatDemandSmoothed=3.0000000000000004 fairFrac=2.275724843661171 currentCL=7 concurrencyDenominator=7 backstop=false
2026-01-20T10:57:24.059230095+00:00 stderr F I0120 10:57:24.059204 16 apf_controller.go:455] "Update CurrentCL" plName="workload-high" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=715 concurrencyDenominator=715 backstop=false
2026-01-20T10:57:24.059393799+00:00 stderr F I0120 10:57:24.059327 16 apf_controller.go:455] "Update CurrentCL" plName="catch-all" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=180 concurrencyDenominator=180 backstop=false
2026-01-20T10:57:24.059393799+00:00 stderr F I0120 10:57:24.059369 16 apf_controller.go:455] "Update CurrentCL" plName="leader-election" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=357 concurrencyDenominator=357 backstop=false
2026-01-20T10:57:24.059453551+00:00 stderr F I0120 10:57:24.059410 16 apf_controller.go:455] "Update CurrentCL" plName="openshift-control-plane-operators" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=239 concurrencyDenominator=239 backstop=false
2026-01-20T10:57:24.087286417+00:00 stderr F W0120 10:57:24.085836 16 patch_genericapiserver.go:204] Request to "/apis/storage.k8s.io/v1/csistoragecapacities" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.094176270+00:00 stderr F I0120 10:57:24.094012 16 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
2026-01-20T10:57:24.097107277+00:00 stderr F I0120 10:57:24.096437 16 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.100424275+00:00 stderr F I0120 10:57:24.100335 16 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.102302965+00:00 stderr F I0120 10:57:24.101692 16 healthz.go:261] informer-sync,poststarthook/start-apiextensions-controllers,poststarthook/crd-informer-synced,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,poststarthook/apiservice-registration-controller check failed: readyz
2026-01-20T10:57:24.102302965+00:00 stderr F [-]informer-sync failed: 2 informers not started yet: [*v1.Secret *v1.ConfigMap]
2026-01-20T10:57:24.102302965+00:00 stderr F [-]poststarthook/start-apiextensions-controllers failed: not finished
2026-01-20T10:57:24.102302965+00:00 stderr F [-]poststarthook/crd-informer-synced failed: not finished
2026-01-20T10:57:24.102302965+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2026-01-20T10:57:24.102302965+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2026-01-20T10:57:24.102302965+00:00 stderr F [-]poststarthook/apiservice-registration-controller failed: not finished
2026-01-20T10:57:24.106921636+00:00 stderr F I0120 10:57:24.103093 16 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager
2026-01-20T10:57:24.107010869+00:00 stderr F I0120 10:57:24.106921 16 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.111907848+00:00 stderr F I0120 10:57:24.109305 16 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.116307235+00:00 stderr F I0120 10:57:24.116174 16 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.120134406+00:00 stderr F I0120 10:57:24.119599 16 handler.go:275] Adding GroupVersion oauth.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.123788372+00:00 stderr F W0120 10:57:24.123699 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-ovn-kubernetes/secrets" (source IP 38.102.83.220:36250, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.135191484+00:00 stderr F I0120 10:57:24.132047 16 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.135191484+00:00 stderr F I0120 10:57:24.132230 16 handler.go:275] Adding GroupVersion route.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.135191484+00:00 stderr F I0120 10:57:24.132252 16 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.135191484+00:00 stderr F I0120 10:57:24.133761 16 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.135191484+00:00 stderr F I0120 10:57:24.133875 16 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.135191484+00:00 stderr F I0120 10:57:24.134552 16 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.135191484+00:00 stderr F I0120 10:57:24.135054 16 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.139219181+00:00 stderr F I0120 10:57:24.135211 16 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.139219181+00:00 stderr F I0120 10:57:24.137252 16 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.142546489+00:00 stderr F E0120 10:57:24.142475 16 controller.go:146] Error updating APIService "v1.apps.openshift.io" with err: failed to download v1.apps.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.142546489+00:00 stderr F , Header: map[Audit-Id:[b54f0aac-08b0-4112-8a88-c2a9e1b17d3c] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.149118652+00:00 stderr F E0120 10:57:24.148023 16 controller.go:146] Error updating APIService "v1.authorization.openshift.io" with err: failed to download v1.authorization.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.149118652+00:00 stderr F , Header: map[Audit-Id:[dc2a57ad-b49c-4c9a-8d6a-769521206780] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.152152713+00:00 stderr F E0120 10:57:24.152001 16 controller.go:146] Error updating APIService "v1.build.openshift.io" with err: failed to download v1.build.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.152152713+00:00 stderr F , Header: map[Audit-Id:[3f10231c-43b5-45f1-a8e6-346093bb361c] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.156043516+00:00 stderr F E0120 10:57:24.155955 16 controller.go:146] Error updating APIService "v1.image.openshift.io" with err: failed to download v1.image.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.156043516+00:00 stderr F , Header: map[Audit-Id:[e162735e-14e4-4f63-926a-2e3453219817] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.160295088+00:00 stderr F E0120 10:57:24.160201 16 controller.go:146] Error updating APIService "v1.oauth.openshift.io" with err: failed to download v1.oauth.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.160295088+00:00 stderr F , Header: map[Audit-Id:[857bd6e5-2ff5-457b-b843-08e9bc9dce2c] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.163823872+00:00 stderr F E0120 10:57:24.163724 16 controller.go:146] Error updating APIService "v1.packages.operators.coreos.com" with err: failed to download v1.packages.operators.coreos.com: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.163823872+00:00 stderr F , Header: map[Audit-Id:[972ad2bf-bf25-4553-9876-63774cf09935] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.181554220+00:00 stderr F E0120 10:57:24.181418 16 controller.go:146] Error updating APIService "v1.project.openshift.io" with err: failed to download v1.project.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.181554220+00:00 stderr F , Header: map[Audit-Id:[ce87d6e2-2f21-44b1-b230-14c7064a3a6d] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.182138296+00:00 stderr F I0120 10:57:24.182047 16 shared_informer.go:318] Caches are synced for node_authorizer
2026-01-20T10:57:24.188707880+00:00 stderr F E0120 10:57:24.188587 16 controller.go:146] Error updating APIService "v1.quota.openshift.io" with err: failed to download v1.quota.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.188707880+00:00 stderr F , Header: map[Audit-Id:[69225b6b-4efb-44d7-a577-9bdcf6bd6650] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.192013117+00:00 stderr F E0120 10:57:24.191934 16 controller.go:146] Error updating APIService "v1.route.openshift.io" with err: failed to download v1.route.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.192013117+00:00 stderr F , Header: map[Audit-Id:[76f6335b-afb8-46ec-8550-8677f4cd5975] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.193524367+00:00 stderr F E0120 10:57:24.193450 16 controller.go:146] Error updating APIService "v1.security.openshift.io" with err: failed to download v1.security.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.193524367+00:00 stderr F , Header: map[Audit-Id:[e78fe5a8-8407-471d-bc74-fd17799ce20a] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.194049700+00:00 stderr F I0120 10:57:24.193763 16 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:57:24.194888313+00:00 stderr F E0120 10:57:24.194806 16 controller.go:146] Error updating APIService "v1.template.openshift.io" with err: failed to download v1.template.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.194888313+00:00 stderr F , Header: map[Audit-Id:[229a22b3-2bb3-4d54-b0e7-1866ad44b303] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.196834884+00:00 stderr F E0120 10:57:24.196751 16 controller.go:146] Error updating APIService "v1.user.openshift.io" with err: failed to download v1.user.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.196834884+00:00 stderr F , Header: map[Audit-Id:[04fd0776-45e5-4d1a-b2bb-6ab79f3ea085] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:24 GMT] X-Content-Type-Options:[nosniff]]
2026-01-20T10:57:24.208196325+00:00 stderr F I0120 10:57:24.208099 16 healthz.go:261] poststarthook/start-apiextensions-controllers,poststarthook/crd-informer-synced,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2026-01-20T10:57:24.208196325+00:00 stderr F [-]poststarthook/start-apiextensions-controllers failed: not finished
2026-01-20T10:57:24.208196325+00:00 stderr F [-]poststarthook/crd-informer-synced failed: not finished
2026-01-20T10:57:24.208196325+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2026-01-20T10:57:24.208196325+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2026-01-20T10:57:24.237784197+00:00 stderr F W0120 10:57:24.237636 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-route-controller-manager/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.257003996+00:00 stderr F I0120 10:57:24.256911 16 handler.go:275] Adding GroupVersion policy.networking.k8s.io v1alpha1 to ResourceManager
2026-01-20T10:57:24.257166160+00:00 stderr F I0120 10:57:24.257110 16 handler.go:275] Adding GroupVersion machine.openshift.io v1beta1 to ResourceManager
2026-01-20T10:57:24.257309114+00:00 stderr F I0120 10:57:24.256927 16 genericapiserver.go:527] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2026-01-20T10:57:24.257309114+00:00 stderr F I0120 10:57:24.257233 16 shared_informer.go:318] Caches are synced for crd-autoregister
2026-01-20T10:57:24.257436897+00:00 stderr F I0120 10:57:24.257391 16 handler.go:275] Adding GroupVersion operator.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.257853528+00:00 stderr F I0120 10:57:24.257802 16 handler.go:275] Adding GroupVersion operators.coreos.com v1alpha1 to ResourceManager
2026-01-20T10:57:24.257999272+00:00 stderr F I0120 10:57:24.257963 16 handler.go:275] Adding GroupVersion config.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.258275879+00:00 stderr F I0120 10:57:24.258229 16 handler.go:275] Adding GroupVersion ipam.cluster.x-k8s.io v1alpha1 to ResourceManager
2026-01-20T10:57:24.258344571+00:00 stderr F I0120 10:57:24.258311 16 handler.go:275] Adding GroupVersion ipam.cluster.x-k8s.io v1beta1 to ResourceManager
2026-01-20T10:57:24.258691291+00:00 stderr F I0120 10:57:24.258651 16 handler.go:275] Adding GroupVersion console.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.258792793+00:00 stderr F I0120 10:57:24.258749 16 aggregator.go:165] initial CRD sync complete...
2026-01-20T10:57:24.258848285+00:00 stderr F I0120 10:57:24.258798 16 autoregister_controller.go:141] Starting autoregister controller
2026-01-20T10:57:24.258882516+00:00 stderr F I0120 10:57:24.258851 16 cache.go:32] Waiting for caches to sync for autoregister controller
2026-01-20T10:57:24.258921137+00:00 stderr F I0120 10:57:24.258883 16 cache.go:39] Caches are synced for autoregister controller
2026-01-20T10:57:24.259587024+00:00 stderr F I0120 10:57:24.259527 16 handler.go:275] Adding GroupVersion machineconfiguration.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.259832260+00:00 stderr F I0120 10:57:24.259790 16 handler.go:275] Adding GroupVersion console.openshift.io v1alpha1 to ResourceManager
2026-01-20T10:57:24.260081277+00:00 stderr F I0120 10:57:24.260027 16 handler.go:275] Adding GroupVersion monitoring.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.260317833+00:00 stderr F I0120 10:57:24.260271 16 handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager
2026-01-20T10:57:24.260377555+00:00 stderr F I0120 10:57:24.260344 16 handler.go:275] Adding GroupVersion monitoring.coreos.com v1alpha1 to ResourceManager
2026-01-20T10:57:24.260447936+00:00 stderr F I0120 10:57:24.260405 16 handler.go:275] Adding GroupVersion monitoring.coreos.com v1beta1 to ResourceManager
2026-01-20T10:57:24.260496888+00:00 stderr F I0120 10:57:24.260464 16 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.260767195+00:00 stderr F I0120 10:57:24.260724 16 handler.go:275] Adding GroupVersion operators.coreos.com v1 to ResourceManager
2026-01-20T10:57:24.260828276+00:00 stderr F I0120 10:57:24.260796 16 handler.go:275] Adding GroupVersion operators.coreos.com v2 to ResourceManager
2026-01-20T10:57:24.260879048+00:00 stderr F I0120 10:57:24.260847 16 handler.go:275] Adding GroupVersion acme.cert-manager.io v1 to ResourceManager
2026-01-20T10:57:24.261170205+00:00 stderr F I0120 10:57:24.261086 16 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.261397392+00:00 stderr F I0120 10:57:24.261355 16 handler.go:275] Adding GroupVersion infrastructure.cluster.x-k8s.io v1alpha5 to ResourceManager
2026-01-20T10:57:24.261460504+00:00 stderr F I0120 10:57:24.261426 16 handler.go:275] Adding GroupVersion infrastructure.cluster.x-k8s.io v1beta1 to ResourceManager
2026-01-20T10:57:24.261511715+00:00 stderr F I0120 10:57:24.261478 16 handler.go:275] Adding GroupVersion k8s.cni.cncf.io v1 to ResourceManager
2026-01-20T10:57:24.261592427+00:00 stderr F I0120 10:57:24.261556 16 handler.go:275] Adding GroupVersion cert-manager.io v1 to ResourceManager
2026-01-20T10:57:24.261690840+00:00 stderr F I0120 10:57:24.261657 16 handler.go:275] Adding GroupVersion security.internal.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.261819313+00:00 stderr F I0120 10:57:24.261781 16 handler.go:275] Adding GroupVersion k8s.ovn.org v1 to ResourceManager
2026-01-20T10:57:24.263875107+00:00 stderr F I0120 10:57:24.263125 16 handler.go:275] Adding GroupVersion operators.coreos.com v1alpha2 to ResourceManager
2026-01-20T10:57:24.263875107+00:00 stderr F I0120 10:57:24.263372 16 handler.go:275] Adding GroupVersion controlplane.operator.openshift.io v1alpha1 to ResourceManager
2026-01-20T10:57:24.263875107+00:00 stderr F I0120 10:57:24.263499 16 handler.go:275] Adding GroupVersion network.operator.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.263875107+00:00 stderr F I0120 10:57:24.263643 16 handler.go:275] Adding GroupVersion migration.k8s.io v1alpha1 to ResourceManager
2026-01-20T10:57:24.265450509+00:00 stderr F I0120 10:57:24.265393 16 handler.go:275] Adding GroupVersion operator.openshift.io v1alpha1 to ResourceManager
2026-01-20T10:57:24.265975753+00:00 stderr F I0120 10:57:24.265930 16 handler.go:275] Adding GroupVersion ingress.operator.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.266037474+00:00 stderr F I0120 10:57:24.266004 16 handler.go:275] Adding GroupVersion imageregistry.operator.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.266749613+00:00 stderr F I0120 10:57:24.266699 16 handler.go:275] Adding GroupVersion helm.openshift.io v1beta1 to ResourceManager
2026-01-20T10:57:24.266801564+00:00 stderr F I0120 10:57:24.266768 16 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.267866383+00:00 stderr F I0120 10:57:24.267815 16 handler.go:275] Adding GroupVersion autoscaling.openshift.io v1beta1 to ResourceManager
2026-01-20T10:57:24.268119460+00:00 stderr F W0120 10:57:24.268027 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/hostpath-provisioner/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.268331545+00:00 stderr F I0120 10:57:24.268117 16 handler.go:275] Adding GroupVersion whereabouts.cni.cncf.io v1alpha1 to ResourceManager
2026-01-20T10:57:24.268705185+00:00 stderr F I0120 10:57:24.268661 16 handler.go:275] Adding GroupVersion samples.operator.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.268776857+00:00 stderr F I0120 10:57:24.268732 16 handler.go:275] Adding GroupVersion machine.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.269391063+00:00 stderr F I0120 10:57:24.268975 16 handler.go:275] Adding GroupVersion apiserver.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.273263156+00:00 stderr F I0120 10:57:24.273185 16 handler.go:275] Adding GroupVersion autoscaling.openshift.io v1 to ResourceManager
2026-01-20T10:57:24.273684787+00:00 stderr F I0120 10:57:24.273613 16 controller.go:222] Updating CRD OpenAPI spec because adminnetworkpolicies.policy.networking.k8s.io changed
2026-01-20T10:57:24.273684787+00:00 stderr F I0120 10:57:24.273662 16 controller.go:222] Updating CRD OpenAPI spec because adminpolicybasedexternalroutes.k8s.ovn.org changed
2026-01-20T10:57:24.273684787+00:00 stderr F I0120 10:57:24.273673 16 controller.go:222] Updating CRD OpenAPI spec because alertingrules.monitoring.openshift.io changed
2026-01-20T10:57:24.273727118+00:00 stderr F I0120 10:57:24.273685 16 controller.go:222] Updating CRD OpenAPI spec because alertmanagerconfigs.monitoring.coreos.com changed
2026-01-20T10:57:24.273727118+00:00 stderr F I0120 10:57:24.273698 16 controller.go:222] Updating CRD OpenAPI spec because alertmanagers.monitoring.coreos.com changed
2026-01-20T10:57:24.273727118+00:00 stderr F I0120 10:57:24.273710 16 controller.go:222] Updating CRD OpenAPI spec because alertrelabelconfigs.monitoring.openshift.io changed
2026-01-20T10:57:24.273763809+00:00 stderr F I0120 10:57:24.273720 16 controller.go:222] Updating CRD OpenAPI spec because apirequestcounts.apiserver.openshift.io changed
2026-01-20T10:57:24.273763809+00:00 stderr F I0120 10:57:24.273733 16 controller.go:222] Updating CRD OpenAPI spec because apiservers.config.openshift.io changed
2026-01-20T10:57:24.273763809+00:00 stderr F I0120 10:57:24.273742 16 controller.go:222] Updating CRD OpenAPI spec because authentications.config.openshift.io changed
2026-01-20T10:57:24.273763809+00:00 stderr F I0120 10:57:24.273752 16 controller.go:222] Updating CRD OpenAPI spec because authentications.operator.openshift.io changed
2026-01-20T10:57:24.273804380+00:00 stderr F I0120 10:57:24.273761 16 controller.go:222] Updating CRD OpenAPI spec because baselineadminnetworkpolicies.policy.networking.k8s.io changed
2026-01-20T10:57:24.273804380+00:00 stderr F I0120 10:57:24.273774 16 controller.go:222] Updating CRD OpenAPI spec because builds.config.openshift.io changed
2026-01-20T10:57:24.273804380+00:00 stderr F I0120 10:57:24.273783 16 controller.go:222] Updating CRD OpenAPI spec because catalogsources.operators.coreos.com changed
2026-01-20T10:57:24.273804380+00:00 stderr F I0120 10:57:24.273794 16 controller.go:222] Updating CRD OpenAPI spec because certificaterequests.cert-manager.io changed
2026-01-20T10:57:24.273842951+00:00 stderr F I0120 10:57:24.273803 16 controller.go:222] Updating CRD OpenAPI spec because certificates.cert-manager.io changed
2026-01-20T10:57:24.273842951+00:00 stderr F I0120 10:57:24.273818 16 controller.go:222] Updating CRD OpenAPI spec because challenges.acme.cert-manager.io changed
2026-01-20T10:57:24.273842951+00:00 stderr F I0120 10:57:24.273827 16 controller.go:222] Updating CRD OpenAPI spec because clusterautoscalers.autoscaling.openshift.io changed
2026-01-20T10:57:24.273880662+00:00 stderr F I0120 10:57:24.273838 16 controller.go:222] Updating CRD OpenAPI spec because clustercsidrivers.operator.openshift.io changed
2026-01-20T10:57:24.273880662+00:00 stderr F I0120 10:57:24.273851 16 controller.go:222] Updating CRD OpenAPI spec because clusterissuers.cert-manager.io changed
2026-01-20T10:57:24.273880662+00:00 stderr F I0120 10:57:24.273861 16 controller.go:222] Updating CRD OpenAPI spec because clusteroperators.config.openshift.io changed
2026-01-20T10:57:24.273880662+00:00 stderr F I0120 10:57:24.273870 16 controller.go:222] Updating CRD OpenAPI spec because clusterresourcequotas.quota.openshift.io changed
2026-01-20T10:57:24.273923593+00:00 stderr F I0120 10:57:24.273882 16 controller.go:222] Updating CRD OpenAPI spec because clusterserviceversions.operators.coreos.com changed
2026-01-20T10:57:24.273923593+00:00 stderr F I0120 10:57:24.273897 16 controller.go:222] Updating CRD OpenAPI spec because clusterversions.config.openshift.io changed
2026-01-20T10:57:24.273923593+00:00 stderr F I0120 10:57:24.273912 16 controller.go:222] Updating CRD OpenAPI spec because configs.imageregistry.operator.openshift.io changed
2026-01-20T10:57:24.273967834+00:00 stderr F I0120 10:57:24.273924 16 controller.go:222] Updating CRD OpenAPI spec because configs.operator.openshift.io changed
2026-01-20T10:57:24.273967834+00:00 stderr F I0120 10:57:24.273942 16 controller.go:222] Updating CRD OpenAPI spec because configs.samples.operator.openshift.io changed
2026-01-20T10:57:24.273967834+00:00 stderr F I0120 10:57:24.273953 16 controller.go:222] Updating CRD OpenAPI spec because consoleclidownloads.console.openshift.io changed
2026-01-20T10:57:24.274002305+00:00 stderr F I0120 10:57:24.273963 16 controller.go:222] Updating CRD OpenAPI spec because consoleexternalloglinks.console.openshift.io changed
2026-01-20T10:57:24.274002305+00:00 stderr F I0120 10:57:24.273976 16 controller.go:222] Updating CRD OpenAPI spec because consolelinks.console.openshift.io changed
2026-01-20T10:57:24.274002305+00:00 stderr F I0120 10:57:24.273987 16 controller.go:222] Updating CRD OpenAPI spec because consolenotifications.console.openshift.io changed
2026-01-20T10:57:24.274039896+00:00 stderr F I0120 10:57:24.273999 16 controller.go:222] Updating CRD OpenAPI spec because consoleplugins.console.openshift.io changed
2026-01-20T10:57:24.274039896+00:00 stderr F I0120 10:57:24.274016 16 controller.go:222] Updating CRD OpenAPI spec because consolequickstarts.console.openshift.io changed
2026-01-20T10:57:24.274039896+00:00 stderr F I0120 10:57:24.274025 16 controller.go:222] Updating CRD OpenAPI spec because consoles.config.openshift.io changed
2026-01-20T10:57:24.274086577+00:00 stderr F I0120 10:57:24.274033 16 controller.go:222] Updating CRD OpenAPI spec because consoles.operator.openshift.io changed
2026-01-20T10:57:24.274086577+00:00 stderr F I0120 10:57:24.274048 16 controller.go:222] Updating CRD OpenAPI spec because consolesamples.console.openshift.io changed
2026-01-20T10:57:24.274086577+00:00 stderr F I0120 10:57:24.274071 16 controller.go:222] Updating CRD OpenAPI spec because consoleyamlsamples.console.openshift.io changed
2026-01-20T10:57:24.274108968+00:00 stderr F I0120 10:57:24.274081 16 controller.go:222] Updating CRD OpenAPI spec because containerruntimeconfigs.machineconfiguration.openshift.io changed
2026-01-20T10:57:24.274118188+00:00 stderr F I0120 10:57:24.274092 16 controller.go:222] Updating CRD OpenAPI spec because controllerconfigs.machineconfiguration.openshift.io changed
2026-01-20T10:57:24.274118188+00:00 stderr F I0120 10:57:24.274105 16 controller.go:222] Updating CRD OpenAPI spec because controlplanemachinesets.machine.openshift.io changed
2026-01-20T10:57:24.274126118+00:00 stderr F I0120 10:57:24.274113 16 controller.go:222] Updating CRD OpenAPI spec because csisnapshotcontrollers.operator.openshift.io changed
2026-01-20T10:57:24.274162919+00:00 stderr F I0120 10:57:24.274122 16 controller.go:222] Updating CRD OpenAPI spec because dnses.config.openshift.io changed
2026-01-20T10:57:24.274162919+00:00 stderr F I0120 10:57:24.274135 16 controller.go:222] Updating CRD OpenAPI spec because dnses.operator.openshift.io changed
2026-01-20T10:57:24.274162919+00:00 stderr F I0120 10:57:24.274145 16 controller.go:222] Updating CRD OpenAPI spec because dnsrecords.ingress.operator.openshift.io changed
2026-01-20T10:57:24.274162919+00:00 stderr F I0120 10:57:24.274155 16 controller.go:222] Updating CRD OpenAPI spec because egressfirewalls.k8s.ovn.org changed
2026-01-20T10:57:24.274206690+00:00 stderr F I0120 10:57:24.274166 16 controller.go:222] Updating CRD OpenAPI spec because egressips.k8s.ovn.org changed
2026-01-20T10:57:24.274206690+00:00 stderr F I0120 10:57:24.274180 16 controller.go:222] Updating CRD OpenAPI spec because egressqoses.k8s.ovn.org changed
2026-01-20T10:57:24.274206690+00:00 stderr F I0120 10:57:24.274189 16 controller.go:222] Updating CRD OpenAPI spec because egressrouters.network.operator.openshift.io changed
2026-01-20T10:57:24.274215651+00:00 stderr F I0120 10:57:24.274199 16 controller.go:222] Updating CRD OpenAPI spec because egressservices.k8s.ovn.org changed
2026-01-20T10:57:24.274252682+00:00 stderr F I0120 10:57:24.274212 16 controller.go:222] Updating CRD OpenAPI spec because etcds.operator.openshift.io changed
2026-01-20T10:57:24.274252682+00:00 stderr F I0120 10:57:24.274235 16 controller.go:222] Updating CRD OpenAPI spec because featuregates.config.openshift.io changed
2026-01-20T10:57:24.274252682+00:00 stderr F I0120 10:57:24.274244 16 controller.go:222] Updating CRD OpenAPI spec because helmchartrepositories.helm.openshift.io changed
2026-01-20T10:57:24.274293433+00:00 stderr F I0120 10:57:24.274254 16 controller.go:222] Updating CRD OpenAPI spec because imagecontentpolicies.config.openshift.io changed
2026-01-20T10:57:24.274293433+00:00 stderr F I0120 10:57:24.274267 16 controller.go:222] Updating CRD OpenAPI spec because imagecontentsourcepolicies.operator.openshift.io changed
2026-01-20T10:57:24.274293433+00:00 stderr F I0120 10:57:24.274275 16 controller.go:222] Updating CRD OpenAPI spec because imagedigestmirrorsets.config.openshift.io changed
2026-01-20T10:57:24.274293433+00:00 stderr F I0120 10:57:24.274285 16 controller.go:222] Updating CRD OpenAPI spec because imagepruners.imageregistry.operator.openshift.io changed
2026-01-20T10:57:24.274335134+00:00 stderr F I0120 10:57:24.274295 16 controller.go:222] Updating CRD OpenAPI spec because images.config.openshift.io changed
2026-01-20T10:57:24.274335134+00:00 stderr F I0120 10:57:24.274312 16 controller.go:222] Updating CRD OpenAPI spec because imagetagmirrorsets.config.openshift.io changed
2026-01-20T10:57:24.274335134+00:00 stderr F I0120 10:57:24.274323 16 controller.go:222] Updating CRD OpenAPI spec because infrastructures.config.openshift.io changed
2026-01-20T10:57:24.274347644+00:00 stderr F I0120 10:57:24.274333 16 controller.go:222] Updating CRD OpenAPI spec because ingresscontrollers.operator.openshift.io changed
2026-01-20T10:57:24.274384105+00:00 stderr F I0120 10:57:24.274345 16 controller.go:222] Updating CRD OpenAPI spec because ingresses.config.openshift.io changed
2026-01-20T10:57:24.274384105+00:00 stderr F I0120 10:57:24.274358 16 controller.go:222] Updating CRD OpenAPI spec because installplans.operators.coreos.com changed
2026-01-20T10:57:24.274384105+00:00 stderr F I0120 10:57:24.274369 16 controller.go:222] Updating CRD OpenAPI spec because ipaddressclaims.ipam.cluster.x-k8s.io changed
2026-01-20T10:57:24.274420526+00:00 stderr F I0120 10:57:24.274379 16 controller.go:222] Updating CRD OpenAPI spec because ipaddresses.ipam.cluster.x-k8s.io changed
2026-01-20T10:57:24.274420526+00:00 stderr F I0120 10:57:24.274393 16 controller.go:222] Updating CRD OpenAPI spec because ippools.whereabouts.cni.cncf.io changed
2026-01-20T10:57:24.274420526+00:00 stderr F I0120 10:57:24.274403 16 controller.go:222] Updating CRD OpenAPI spec because issuers.cert-manager.io changed
2026-01-20T10:57:24.274429406+00:00 stderr F I0120 10:57:24.274412 16 controller.go:222] Updating CRD OpenAPI spec because kubeapiservers.operator.openshift.io changed
2026-01-20T10:57:24.274465077+00:00 stderr F I0120 10:57:24.274423 16 controller.go:222] Updating CRD OpenAPI spec because kubecontrollermanagers.operator.openshift.io changed
2026-01-20T10:57:24.274465077+00:00 stderr F I0120 10:57:24.274437 16 controller.go:222] Updating CRD OpenAPI spec because kubeletconfigs.machineconfiguration.openshift.io changed
2026-01-20T10:57:24.274465077+00:00 stderr F I0120 10:57:24.274447 16 controller.go:222] Updating CRD OpenAPI spec because kubeschedulers.operator.openshift.io changed
2026-01-20T10:57:24.274473967+00:00 stderr F I0120 10:57:24.274457 16 controller.go:222] Updating CRD OpenAPI spec because kubestorageversionmigrators.operator.openshift.io changed
2026-01-20T10:57:24.274507568+00:00 stderr F I0120 10:57:24.274468 16 controller.go:222] Updating CRD OpenAPI spec because machineautoscalers.autoscaling.openshift.io changed
2026-01-20T10:57:24.274507568+00:00 stderr F I0120 10:57:24.274480 16 controller.go:222] Updating CRD OpenAPI spec because machineconfigpools.machineconfiguration.openshift.io changed
2026-01-20T10:57:24.274507568+00:00 stderr F I0120 10:57:24.274490 16 controller.go:222] Updating CRD OpenAPI spec because machineconfigs.machineconfiguration.openshift.io changed
2026-01-20T10:57:24.274516408+00:00 stderr F I0120 10:57:24.274500 16 controller.go:222] Updating CRD OpenAPI spec because machineconfigurations.operator.openshift.io changed
2026-01-20T10:57:24.274566560+00:00 stderr F I0120 10:57:24.274520 16 controller.go:222] Updating CRD OpenAPI spec because machinehealthchecks.machine.openshift.io changed
2026-01-20T10:57:24.274566560+00:00 stderr F I0120 10:57:24.274535 16 controller.go:222] Updating CRD OpenAPI spec because machines.machine.openshift.io changed
2026-01-20T10:57:24.274566560+00:00 stderr F I0120 10:57:24.274545 16 controller.go:222] Updating CRD OpenAPI spec because machinesets.machine.openshift.io changed
2026-01-20T10:57:24.274566560+00:00 stderr F I0120 10:57:24.274554 16 controller.go:222] Updating CRD OpenAPI spec because metal3remediations.infrastructure.cluster.x-k8s.io changed
2026-01-20T10:57:24.274605091+00:00 stderr F I0120 10:57:24.274565 16 controller.go:222] Updating CRD OpenAPI spec because metal3remediationtemplates.infrastructure.cluster.x-k8s.io changed
2026-01-20T10:57:24.274605091+00:00 stderr F I0120 10:57:24.274578 16 controller.go:222] Updating CRD OpenAPI spec because network-attachment-definitions.k8s.cni.cncf.io changed
2026-01-20T10:57:24.274605091+00:00 stderr F I0120 10:57:24.274590 16 controller.go:222] Updating CRD OpenAPI spec because networks.config.openshift.io changed
2026-01-20T10:57:24.274639792+00:00 stderr F I0120 10:57:24.274599 16 controller.go:222] Updating CRD OpenAPI spec because networks.operator.openshift.io changed
2026-01-20T10:57:24.274639792+00:00 stderr F I0120 10:57:24.274614 16 controller.go:222] Updating CRD OpenAPI spec because nodes.config.openshift.io changed
2026-01-20T10:57:24.274639792+00:00 stderr F I0120 10:57:24.274624 16 controller.go:222] Updating CRD OpenAPI spec because oauths.config.openshift.io changed
2026-01-20T10:57:24.274688563+00:00 stderr F I0120 10:57:24.274634 16 controller.go:222] Updating CRD OpenAPI spec because olmconfigs.operators.coreos.com changed
2026-01-20T10:57:24.274688563+00:00 stderr F I0120 10:57:24.274648 16 controller.go:222] Updating CRD OpenAPI spec because openshiftapiservers.operator.openshift.io changed
2026-01-20T10:57:24.274688563+00:00 stderr F I0120 10:57:24.274657 16 controller.go:222] Updating CRD OpenAPI spec because openshiftcontrollermanagers.operator.openshift.io changed
2026-01-20T10:57:24.274688563+00:00 stderr F I0120 10:57:24.274668 16 controller.go:222] Updating CRD OpenAPI spec because operatorconditions.operators.coreos.com changed
2026-01-20T10:57:24.274688563+00:00 stderr F I0120 10:57:24.274677 16 controller.go:222] Updating CRD OpenAPI spec because operatorgroups.operators.coreos.com changed
2026-01-20T10:57:24.274698933+00:00 stderr F I0120 10:57:24.274687 16 controller.go:222] Updating CRD OpenAPI spec because operatorhubs.config.openshift.io changed
2026-01-20T10:57:24.274737844+00:00 stderr F I0120 10:57:24.274697 16 controller.go:222] Updating CRD OpenAPI spec because operatorpkis.network.operator.openshift.io changed
2026-01-20T10:57:24.274737844+00:00 stderr F I0120 10:57:24.274712 16 controller.go:222] Updating CRD OpenAPI spec because operators.operators.coreos.com changed
2026-01-20T10:57:24.274737844+00:00 stderr F I0120 10:57:24.274721 16 controller.go:222] Updating CRD OpenAPI spec because orders.acme.cert-manager.io changed
2026-01-20T10:57:24.274746735+00:00 stderr F I0120 10:57:24.274731 16 controller.go:222] Updating CRD OpenAPI spec because overlappingrangeipreservations.whereabouts.cni.cncf.io changed
2026-01-20T10:57:24.274780775+00:00 stderr F I0120 10:57:24.274742 16 controller.go:222] Updating CRD OpenAPI spec because podmonitors.monitoring.coreos.com changed
2026-01-20T10:57:24.274780775+00:00 stderr F I0120 10:57:24.274755 16 controller.go:222] Updating CRD OpenAPI spec because podnetworkconnectivitychecks.controlplane.operator.openshift.io changed
2026-01-20T10:57:24.274780775+00:00 stderr F I0120 10:57:24.274766 16 controller.go:222] Updating CRD OpenAPI spec because probes.monitoring.coreos.com changed
2026-01-20T10:57:24.274821176+00:00 stderr F I0120 10:57:24.274780 16 controller.go:222] Updating CRD OpenAPI spec because projecthelmchartrepositories.helm.openshift.io changed
2026-01-20T10:57:24.274821176+00:00 stderr F I0120 10:57:24.274797 16 controller.go:222] Updating CRD OpenAPI spec because projects.config.openshift.io changed
2026-01-20T10:57:24.274851797+00:00 stderr F I0120 10:57:24.274811 16 controller.go:222] Updating CRD OpenAPI spec because prometheuses.monitoring.coreos.com changed
2026-01-20T10:57:24.274851797+00:00 stderr F I0120 10:57:24.274827 16 controller.go:222] Updating CRD OpenAPI spec because prometheusrules.monitoring.coreos.com changed
2026-01-20T10:57:24.274851797+00:00 stderr F I0120 10:57:24.274838 16 controller.go:222] Updating CRD OpenAPI spec because proxies.config.openshift.io changed
2026-01-20T10:57:24.274888968+00:00 stderr F I0120 10:57:24.274849 16 controller.go:222] Updating CRD OpenAPI spec because rangeallocations.security.internal.openshift.io changed
2026-01-20T10:57:24.274888968+00:00 stderr F I0120 10:57:24.274862 16 controller.go:222] Updating CRD OpenAPI spec because rolebindingrestrictions.authorization.openshift.io changed
2026-01-20T10:57:24.274888968+00:00 stderr F I0120 10:57:24.274872 16 controller.go:222] Updating CRD OpenAPI spec because schedulers.config.openshift.io changed
2026-01-20T10:57:24.274924389+00:00 stderr F I0120 10:57:24.274883 16 controller.go:222] Updating CRD OpenAPI spec because securitycontextconstraints.security.openshift.io changed
2026-01-20T10:57:24.274924389+00:00 stderr F I0120 10:57:24.274899 16 controller.go:222] Updating CRD OpenAPI spec because servicecas.operator.openshift.io changed
2026-01-20T10:57:24.274959200+00:00 stderr F I0120 10:57:24.274918 16 controller.go:222] Updating CRD OpenAPI spec because servicemonitors.monitoring.coreos.com changed
2026-01-20T10:57:24.274959200+00:00 stderr F I0120 10:57:24.274938 16 controller.go:222] Updating CRD OpenAPI spec because storages.operator.openshift.io changed
2026-01-20T10:57:24.274959200+00:00 stderr F I0120 10:57:24.274949 16 controller.go:222] Updating CRD OpenAPI spec because storagestates.migration.k8s.io changed
2026-01-20T10:57:24.274969820+00:00 stderr F I0120 10:57:24.274958 16 controller.go:222] Updating CRD OpenAPI spec because storageversionmigrations.migration.k8s.io changed
2026-01-20T10:57:24.275014882+00:00 stderr F I0120 10:57:24.274968 16 controller.go:222] Updating CRD OpenAPI spec because subscriptions.operators.coreos.com changed
2026-01-20T10:57:24.275014882+00:00 stderr F I0120 10:57:24.274983 16 controller.go:222] Updating CRD OpenAPI spec because thanosrulers.monitoring.coreos.com changed
2026-01-20T10:57:24.283740582+00:00 stderr F W0120 10:57:24.283622 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-cluster-machine-approver/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.303759022+00:00 stderr F W0120 10:57:24.303635 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-marketplace/secrets" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.307782878+00:00 stderr F I0120 10:57:24.307690 16 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2026-01-20T10:57:24.307782878+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2026-01-20T10:57:24.307782878+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2026-01-20T10:57:24.326211146+00:00 stderr F I0120 10:57:24.326087 16 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2026-01-20T10:57:24.330521400+00:00 stderr F W0120 10:57:24.330426 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-marketplace/secrets" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.406268973+00:00 stderr F W0120 10:57:24.406106 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-network-node-identity/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.408078971+00:00 stderr F I0120 10:57:24.407981 16 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2026-01-20T10:57:24.408078971+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2026-01-20T10:57:24.408078971+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2026-01-20T10:57:24.430565366+00:00 stderr F W0120 10:57:24.430418 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-authentication-operator/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.437122809+00:00 stderr F W0120 10:57:24.436998 16 patch_genericapiserver.go:204] Request to "/apis/acme.cert-manager.io/v1/challenges" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.472219767+00:00 stderr F I0120 10:57:24.471963 16 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-controller-manager/secrets" (user agent "cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2026-01-20T10:57:24.498729818+00:00 stderr F W0120 10:57:24.498583 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-apiserver/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.522334583+00:00 stderr F I0120 10:57:24.522179 16 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2026-01-20T10:57:24.522334583+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2026-01-20T10:57:24.522334583+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2026-01-20T10:57:24.556297001+00:00 stderr F W0120 10:57:24.556157 16 patch_genericapiserver.go:204] Request to "/api/v1/podtemplates" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.607735251+00:00 stderr F I0120 10:57:24.607521 16 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2026-01-20T10:57:24.607735251+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2026-01-20T10:57:24.607735251+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2026-01-20T10:57:24.711432933+00:00 stderr F I0120 10:57:24.711287 16 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2026-01-20T10:57:24.711432933+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2026-01-20T10:57:24.711432933+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2026-01-20T10:57:24.765319898+00:00 stderr F W0120 10:57:24.765155 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver/endpoints" (source IP 38.102.83.220:36250, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2026-01-20T10:57:24.781846655+00:00 stderr F W0120 10:57:24.780450 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-dns/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:24.807755441+00:00 stderr F I0120 10:57:24.807584 16 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2026-01-20T10:57:24.807755441+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2026-01-20T10:57:24.807755441+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2026-01-20T10:57:24.825261144+00:00 stderr F W0120 10:57:24.825046 16 patch_genericapiserver.go:204] Request to "/api/v1/configmaps" (source IP 38.102.83.220:36250, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:24.866210206+00:00 stderr F W0120 10:57:24.865553 16 patch_genericapiserver.go:204] Request to "/apis/operators.coreos.com/v1alpha1/catalogsources" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:24.879387735+00:00 stderr F W0120 10:57:24.879249 16 patch_genericapiserver.go:204] Request to "/apis/network.operator.openshift.io/v1/operatorpkis" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2026-01-20T10:57:24.899268140+00:00 stderr F W0120 10:57:24.899031 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-config-managed/configmaps" (source IP 38.102.83.220:36236, user agent "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format/openshift-config") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:24.906158143+00:00 stderr F W0120 10:57:24.905999 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-authentication/secrets" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:24.907525169+00:00 stderr F I0120 10:57:24.907378 16 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2026-01-20T10:57:24.907525169+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2026-01-20T10:57:24.907525169+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2026-01-20T10:57:24.961088935+00:00 stderr F W0120 10:57:24.960962 16 patch_genericapiserver.go:204] Request to "/apis/apps/v1/daemonsets" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:24.970274569+00:00 stderr F I0120 10:57:24.970088 16 storage_scheduling.go:111] all system priority classes are created successfully or already exist. 2026-01-20T10:57:24.988583122+00:00 stderr F W0120 10:57:24.988469 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2026-01-20T10:57:25.002042509+00:00 stderr F W0120 10:57:25.001931 16 patch_genericapiserver.go:204] Request to "/apis/k8s.cni.cncf.io/v1/network-attachment-definitions" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.007343849+00:00 stderr F I0120 10:57:25.007253 16 healthz.go:261] poststarthook/rbac/bootstrap-roles check failed: readyz 2026-01-20T10:57:25.007343849+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2026-01-20T10:57:25.023984209+00:00 stderr F W0120 10:57:25.023836 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-ovn-kubernetes/configmaps" (source IP 38.102.83.220:36250, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.086739758+00:00 stderr F W0120 10:57:25.086562 16 patch_genericapiserver.go:204] Request to "/api/v1/nodes" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.090541328+00:00 stderr F E0120 10:57:25.090448 16 controller.go:113] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.090541328+00:00 stderr F I0120 10:57:25.090471 16 controller.go:126] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue. 
2026-01-20T10:57:25.092128541+00:00 stderr F E0120 10:57:25.092034 16 controller.go:102] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to download v1.packages.operators.coreos.com: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.092128541+00:00 stderr F , Header: map[Audit-Id:[bdf36397-1df3-4b86-9459-09a666492135] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.092128541+00:00 stderr F I0120 10:57:25.092053 16 controller.go:109] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue. 2026-01-20T10:57:25.100160033+00:00 stderr F E0120 10:57:25.100048 16 controller.go:113] loading OpenAPI spec for "v1.quota.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.100160033+00:00 stderr F I0120 10:57:25.100108 16 controller.go:126] OpenAPI AggregationController: action for item v1.quota.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.102669840+00:00 stderr F E0120 10:57:25.102502 16 controller.go:113] loading OpenAPI spec for "v1.oauth.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.102669840+00:00 stderr F I0120 10:57:25.102525 16 controller.go:126] OpenAPI AggregationController: action for item v1.oauth.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.103940003+00:00 stderr F E0120 10:57:25.103830 16 controller.go:102] loading OpenAPI spec for "v1.quota.openshift.io" failed with: failed to download v1.quota.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.103940003+00:00 stderr F , Header: map[Audit-Id:[779ae9b1-aa6d-4c67-95cd-03bb32115feb] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.103940003+00:00 stderr F I0120 10:57:25.103855 16 controller.go:109] OpenAPI AggregationController: action for item v1.quota.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.105902435+00:00 stderr F W0120 10:57:25.105780 16 patch_genericapiserver.go:204] Request to "/api/v1/services" (source IP 38.102.83.220:36342, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup. 
2026-01-20T10:57:25.106659965+00:00 stderr F E0120 10:57:25.106571 16 controller.go:102] loading OpenAPI spec for "v1.oauth.openshift.io" failed with: failed to download v1.oauth.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.106659965+00:00 stderr F , Header: map[Audit-Id:[0c2d381b-ac63-4c14-abe1-9c85b996a7ec] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.106659965+00:00 stderr F I0120 10:57:25.106592 16 controller.go:109] OpenAPI AggregationController: action for item v1.oauth.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.107232090+00:00 stderr F W0120 10:57:25.107127 16 patch_genericapiserver.go:204] Request to "/apis/rbac.authorization.k8s.io/v1/roles" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.107232090+00:00 stderr F I0120 10:57:25.107181 16 healthz.go:261] poststarthook/rbac/bootstrap-roles check failed: readyz 2026-01-20T10:57:25.107232090+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2026-01-20T10:57:25.120440959+00:00 stderr F W0120 10:57:25.120211 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-authentication/secrets" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 
2026-01-20T10:57:25.131245835+00:00 stderr F E0120 10:57:25.131020 16 controller.go:102] loading OpenAPI spec for "v1.image.openshift.io" failed with: failed to download v1.image.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.131245835+00:00 stderr F , Header: map[Audit-Id:[6c690d2e-b55a-4e0c-a3f2-c50e9091a355] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.131245835+00:00 stderr F I0120 10:57:25.131040 16 controller.go:109] OpenAPI AggregationController: action for item v1.image.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.132235571+00:00 stderr F E0120 10:57:25.132132 16 controller.go:113] loading OpenAPI spec for "v1.image.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.132235571+00:00 stderr F I0120 10:57:25.132149 16 controller.go:126] OpenAPI AggregationController: action for item v1.image.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.132965031+00:00 stderr F E0120 10:57:25.132858 16 controller.go:102] loading OpenAPI spec for "v1.route.openshift.io" failed with: failed to download v1.route.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.132965031+00:00 stderr F , Header: map[Audit-Id:[38484c67-cadf-44ce-bb64-22c436e69884] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.133423283+00:00 stderr F I0120 10:57:25.133335 16 controller.go:109] OpenAPI AggregationController: action for item v1.route.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.134575693+00:00 stderr F E0120 10:57:25.134486 16 controller.go:113] loading OpenAPI spec for "v1.route.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.134575693+00:00 stderr F I0120 10:57:25.134504 16 controller.go:126] OpenAPI AggregationController: action for item v1.route.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.135955090+00:00 stderr F E0120 10:57:25.135850 16 controller.go:102] loading OpenAPI spec for "v1.security.openshift.io" failed with: failed to download v1.security.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.135955090+00:00 stderr F , Header: map[Audit-Id:[6d32b5a3-3750-47be-a4fb-48409f4aecc2] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.135955090+00:00 stderr F I0120 10:57:25.135866 16 controller.go:109] OpenAPI AggregationController: action for item v1.security.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.137022718+00:00 stderr F E0120 10:57:25.136926 16 controller.go:113] loading OpenAPI spec for "v1.security.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.137022718+00:00 stderr F I0120 10:57:25.136946 16 controller.go:126] OpenAPI AggregationController: action for item v1.security.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.137724136+00:00 stderr F E0120 10:57:25.137589 16 controller.go:102] loading OpenAPI spec for "v1.template.openshift.io" failed with: failed to download v1.template.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.137724136+00:00 stderr F , Header: map[Audit-Id:[cad47c58-4853-4cba-b26f-0aeb40450fa3] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.138216990+00:00 stderr F I0120 10:57:25.138120 16 controller.go:109] OpenAPI AggregationController: action for item v1.template.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.139578656+00:00 stderr F E0120 10:57:25.139427 16 controller.go:113] loading OpenAPI spec for "v1.template.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.139578656+00:00 stderr F I0120 10:57:25.139447 16 controller.go:126] OpenAPI AggregationController: action for item v1.template.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.140475199+00:00 stderr F E0120 10:57:25.140372 16 controller.go:102] loading OpenAPI spec for "v1.build.openshift.io" failed with: failed to download v1.build.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.140475199+00:00 stderr F , Header: map[Audit-Id:[7eddaf52-8524-4dd1-b3b6-1ae05da0d9c0] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.140666774+00:00 stderr F I0120 10:57:25.140594 16 controller.go:109] OpenAPI AggregationController: action for item v1.build.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.141461266+00:00 stderr F E0120 10:57:25.141368 16 controller.go:113] loading OpenAPI spec for "v1.build.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.141461266+00:00 stderr F I0120 10:57:25.141428 16 controller.go:126] OpenAPI AggregationController: action for item v1.build.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.143288943+00:00 stderr F E0120 10:57:25.142604 16 controller.go:102] loading OpenAPI spec for "v1.project.openshift.io" failed with: failed to download v1.project.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.143288943+00:00 stderr F , Header: map[Audit-Id:[55891339-122e-4793-be15-fbd370048188] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.143288943+00:00 stderr F I0120 10:57:25.142622 16 controller.go:109] OpenAPI AggregationController: action for item v1.project.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.143424817+00:00 stderr F E0120 10:57:25.143342 16 controller.go:113] loading OpenAPI spec for "v1.project.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.143811048+00:00 stderr F I0120 10:57:25.143722 16 controller.go:126] OpenAPI AggregationController: action for item v1.project.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.145266966+00:00 stderr F E0120 10:57:25.145140 16 controller.go:102] loading OpenAPI spec for "v1.user.openshift.io" failed with: failed to download v1.user.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.145266966+00:00 stderr F , Header: map[Audit-Id:[1121bd53-01c2-4ca9-9f84-5de4fe217596] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.145266966+00:00 stderr F I0120 10:57:25.145159 16 controller.go:109] OpenAPI AggregationController: action for item v1.user.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.146834638+00:00 stderr F E0120 10:57:25.146718 16 controller.go:113] loading OpenAPI spec for "v1.user.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.146834638+00:00 stderr F I0120 10:57:25.146744 16 controller.go:126] OpenAPI AggregationController: action for item v1.user.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.146955441+00:00 stderr F E0120 10:57:25.146867 16 controller.go:102] loading OpenAPI spec for "v1.authorization.openshift.io" failed with: failed to download v1.authorization.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.146955441+00:00 stderr F , Header: map[Audit-Id:[dffb0b76-b37f-4e8d-96b5-e03efde18d10] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.148028749+00:00 stderr F I0120 10:57:25.147912 16 controller.go:109] OpenAPI AggregationController: action for item v1.authorization.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.148401249+00:00 stderr F E0120 10:57:25.148290 16 controller.go:113] loading OpenAPI spec for "v1.authorization.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.148401249+00:00 stderr F I0120 10:57:25.148311 16 controller.go:126] OpenAPI AggregationController: action for item v1.authorization.openshift.io: Rate Limited Requeue. 
2026-01-20T10:57:25.149543250+00:00 stderr F E0120 10:57:25.149339 16 controller.go:102] loading OpenAPI spec for "v1.apps.openshift.io" failed with: failed to download v1.apps.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:57:25.149543250+00:00 stderr F , Header: map[Audit-Id:[90a1368d-6a02-4520-9dce-004549f4b818] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 20 Jan 2026 10:57:25 GMT] X-Content-Type-Options:[nosniff]] 2026-01-20T10:57:25.149543250+00:00 stderr F I0120 10:57:25.149355 16 controller.go:109] OpenAPI AggregationController: action for item v1.apps.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.151192683+00:00 stderr F E0120 10:57:25.151048 16 controller.go:113] loading OpenAPI spec for "v1.apps.openshift.io" failed with: Error, could not get list of group versions for APIService 2026-01-20T10:57:25.151192683+00:00 stderr F I0120 10:57:25.151082 16 controller.go:126] OpenAPI AggregationController: action for item v1.apps.openshift.io: Rate Limited Requeue. 2026-01-20T10:57:25.191970991+00:00 stderr F W0120 10:57:25.191825 16 patch_genericapiserver.go:204] Request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2026-01-20T10:57:25.207003989+00:00 stderr F I0120 10:57:25.206872 16 healthz.go:261] poststarthook/rbac/bootstrap-roles check failed: readyz 2026-01-20T10:57:25.207003989+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2026-01-20T10:57:25.218850852+00:00 stderr F W0120 10:57:25.218657 16 patch_genericapiserver.go:204] Request to "/apis/controlplane.operator.openshift.io/v1alpha1/namespaces/openshift-network-diagnostics/podnetworkconnectivitychecks" (source IP 38.102.83.220:36250, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.219319724+00:00 stderr F W0120 10:57:25.219090 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-apiserver/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.248381443+00:00 stderr F W0120 10:57:25.245294 16 patch_genericapiserver.go:204] Request to "/apis/k8s.cni.cncf.io/v1/network-attachment-definitions" (source IP 38.102.83.220:36296, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.278621473+00:00 stderr F W0120 10:57:25.278463 16 patch_genericapiserver.go:204] Request to "/apis/cert-manager.io/v1/issuers" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2026-01-20T10:57:25.294290287+00:00 stderr F W0120 10:57:25.294158 16 patch_genericapiserver.go:204] Request to "/api/v1/replicationcontrollers" (source IP 38.102.83.220:36286, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.294290287+00:00 stderr F W0120 10:57:25.294197 16 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps" (source IP 38.102.83.220:36310, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2026-01-20T10:57:25.308026870+00:00 stderr F I0120 10:57:25.307923 16 patch_genericapiserver.go:93] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-crc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'KubeAPIReadyz' readyz=true 2026-01-20T10:57:25.315993691+00:00 stderr F W0120 10:57:25.315739 16 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.126.11] 2026-01-20T10:57:25.318844446+00:00 stderr F I0120 10:57:25.318764 16 controller.go:624] quota admission added evaluator for: endpoints 2026-01-20T10:57:25.334450379+00:00 stderr F I0120 10:57:25.334308 16 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io 2026-01-20T10:57:25.654982186+00:00 stderr F I0120 10:57:25.653693 16 store.go:1579] "Monitoring resource count at path" resource="operatorpkis.network.operator.openshift.io" path="//network.operator.openshift.io/operatorpkis" 2026-01-20T10:57:25.656489026+00:00 stderr F I0120 10:57:25.656384 16 cacher.go:460] cacher (operatorpkis.network.operator.openshift.io): initialized 2026-01-20T10:57:25.656489026+00:00 stderr F I0120 10:57:25.656415 16 reflector.go:351] Caches populated for network.operator.openshift.io/v1, Kind=OperatorPKI 
from storage/cacher.go:/network.operator.openshift.io/operatorpkis 2026-01-20T10:57:25.679617867+00:00 stderr F I0120 10:57:25.679400 16 store.go:1579] "Monitoring resource count at path" resource="ingresscontrollers.operator.openshift.io" path="//operator.openshift.io/ingresscontrollers" 2026-01-20T10:57:25.683111569+00:00 stderr F I0120 10:57:25.682957 16 cacher.go:460] cacher (ingresscontrollers.operator.openshift.io): initialized 2026-01-20T10:57:25.683169681+00:00 stderr F I0120 10:57:25.683120 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=IngressController from storage/cacher.go:/operator.openshift.io/ingresscontrollers 2026-01-20T10:57:25.690110435+00:00 stderr F I0120 10:57:25.689976 16 store.go:1579] "Monitoring resource count at path" resource="machines.machine.openshift.io" path="//machine.openshift.io/machines" 2026-01-20T10:57:25.691087871+00:00 stderr F I0120 10:57:25.690940 16 cacher.go:460] cacher (machines.machine.openshift.io): initialized 2026-01-20T10:57:25.691087871+00:00 stderr F I0120 10:57:25.690965 16 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=Machine from storage/cacher.go:/machine.openshift.io/machines 2026-01-20T10:57:25.705300736+00:00 stderr F I0120 10:57:25.705188 16 store.go:1579] "Monitoring resource count at path" resource="clusterversions.config.openshift.io" path="//config.openshift.io/clusterversions" 2026-01-20T10:57:25.709721874+00:00 stderr F I0120 10:57:25.709655 16 cacher.go:460] cacher (clusterversions.config.openshift.io): initialized 2026-01-20T10:57:25.709721874+00:00 stderr F I0120 10:57:25.709679 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ClusterVersion from storage/cacher.go:/config.openshift.io/clusterversions 2026-01-20T10:57:25.760481106+00:00 stderr F I0120 10:57:25.760324 16 store.go:1579] "Monitoring resource count at path" resource="machinesets.machine.openshift.io" path="//machine.openshift.io/machinesets" 
2026-01-20T10:57:25.761933804+00:00 stderr F I0120 10:57:25.761835 16 cacher.go:460] cacher (machinesets.machine.openshift.io): initialized 2026-01-20T10:57:25.761933804+00:00 stderr F I0120 10:57:25.761871 16 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=MachineSet from storage/cacher.go:/machine.openshift.io/machinesets 2026-01-20T10:57:25.771313933+00:00 stderr F I0120 10:57:25.771174 16 store.go:1579] "Monitoring resource count at path" resource="kubecontrollermanagers.operator.openshift.io" path="//operator.openshift.io/kubecontrollermanagers" 2026-01-20T10:57:25.774249720+00:00 stderr F I0120 10:57:25.774106 16 cacher.go:460] cacher (kubecontrollermanagers.operator.openshift.io): initialized 2026-01-20T10:57:25.774249720+00:00 stderr F I0120 10:57:25.774144 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeControllerManager from storage/cacher.go:/operator.openshift.io/kubecontrollermanagers 2026-01-20T10:57:25.781412499+00:00 stderr F I0120 10:57:25.781315 16 store.go:1579] "Monitoring resource count at path" resource="prometheusrules.monitoring.coreos.com" path="//monitoring.coreos.com/prometheusrules" 2026-01-20T10:57:25.792267036+00:00 stderr F I0120 10:57:25.792152 16 cacher.go:460] cacher (prometheusrules.monitoring.coreos.com): initialized 2026-01-20T10:57:25.792267036+00:00 stderr F I0120 10:57:25.792182 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=PrometheusRule from storage/cacher.go:/monitoring.coreos.com/prometheusrules 2026-01-20T10:57:25.965512338+00:00 stderr F I0120 10:57:25.965413 16 store.go:1579] "Monitoring resource count at path" resource="egressqoses.k8s.ovn.org" path="//k8s.ovn.org/egressqoses" 2026-01-20T10:57:25.966889554+00:00 stderr F I0120 10:57:25.966800 16 cacher.go:460] cacher (egressqoses.k8s.ovn.org): initialized 2026-01-20T10:57:25.966889554+00:00 stderr F I0120 10:57:25.966820 16 reflector.go:351] Caches populated for k8s.ovn.org/v1, 
Kind=EgressQoS from storage/cacher.go:/k8s.ovn.org/egressqoses 2026-01-20T10:57:26.159340304+00:00 stderr F I0120 10:57:26.159197 16 store.go:1579] "Monitoring resource count at path" resource="egressfirewalls.k8s.ovn.org" path="//k8s.ovn.org/egressfirewalls" 2026-01-20T10:57:26.160807563+00:00 stderr F I0120 10:57:26.160718 16 cacher.go:460] cacher (egressfirewalls.k8s.ovn.org): initialized 2026-01-20T10:57:26.160807563+00:00 stderr F I0120 10:57:26.160742 16 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressFirewall from storage/cacher.go:/k8s.ovn.org/egressfirewalls 2026-01-20T10:57:26.288823578+00:00 stderr F I0120 10:57:26.288671 16 store.go:1579] "Monitoring resource count at path" resource="networks.config.openshift.io" path="//config.openshift.io/networks" 2026-01-20T10:57:26.291225681+00:00 stderr F I0120 10:57:26.291126 16 cacher.go:460] cacher (networks.config.openshift.io): initialized 2026-01-20T10:57:26.291311343+00:00 stderr F I0120 10:57:26.291247 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Network from storage/cacher.go:/config.openshift.io/networks 2026-01-20T10:57:26.385037113+00:00 stderr F I0120 10:57:26.384864 16 store.go:1579] "Monitoring resource count at path" resource="servicemonitors.monitoring.coreos.com" path="//monitoring.coreos.com/servicemonitors" 2026-01-20T10:57:26.398902129+00:00 stderr F I0120 10:57:26.398775 16 cacher.go:460] cacher (servicemonitors.monitoring.coreos.com): initialized 2026-01-20T10:57:26.398902129+00:00 stderr F I0120 10:57:26.398796 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=ServiceMonitor from storage/cacher.go:/monitoring.coreos.com/servicemonitors 2026-01-20T10:57:26.485049307+00:00 stderr F I0120 10:57:26.484918 16 store.go:1579] "Monitoring resource count at path" resource="subscriptions.operators.coreos.com" path="//operators.coreos.com/subscriptions" 2026-01-20T10:57:26.486199338+00:00 stderr F I0120 10:57:26.486098 16 
cacher.go:460] cacher (subscriptions.operators.coreos.com): initialized 2026-01-20T10:57:26.486199338+00:00 stderr F I0120 10:57:26.486124 16 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=Subscription from storage/cacher.go:/operators.coreos.com/subscriptions 2026-01-20T10:57:26.525820705+00:00 stderr F I0120 10:57:26.525699 16 store.go:1579] "Monitoring resource count at path" resource="rolebindingrestrictions.authorization.openshift.io" path="//authorization.openshift.io/rolebindingrestrictions" 2026-01-20T10:57:26.526825732+00:00 stderr F I0120 10:57:26.526776 16 cacher.go:460] cacher (rolebindingrestrictions.authorization.openshift.io): initialized 2026-01-20T10:57:26.526825732+00:00 stderr F I0120 10:57:26.526798 16 reflector.go:351] Caches populated for authorization.openshift.io/v1, Kind=RoleBindingRestriction from storage/cacher.go:/authorization.openshift.io/rolebindingrestrictions 2026-01-20T10:57:26.544682654+00:00 stderr F I0120 10:57:26.543761 16 store.go:1579] "Monitoring resource count at path" resource="egressservices.k8s.ovn.org" path="//k8s.ovn.org/egressservices" 2026-01-20T10:57:26.544956622+00:00 stderr F I0120 10:57:26.544913 16 cacher.go:460] cacher (egressservices.k8s.ovn.org): initialized 2026-01-20T10:57:26.544956622+00:00 stderr F I0120 10:57:26.544934 16 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressService from storage/cacher.go:/k8s.ovn.org/egressservices 2026-01-20T10:57:26.598914938+00:00 stderr F I0120 10:57:26.598760 16 store.go:1579] "Monitoring resource count at path" resource="metal3remediations.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediations" 2026-01-20T10:57:26.600024547+00:00 stderr F I0120 10:57:26.599812 16 cacher.go:460] cacher (metal3remediations.infrastructure.cluster.x-k8s.io): initialized 2026-01-20T10:57:26.600024547+00:00 stderr F I0120 10:57:26.599834 16 reflector.go:351] Caches populated for 
infrastructure.cluster.x-k8s.io/v1alpha5, Kind=Metal3Remediation from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediations 2026-01-20T10:57:26.605701057+00:00 stderr F I0120 10:57:26.605615 16 store.go:1579] "Monitoring resource count at path" resource="metal3remediations.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediations" 2026-01-20T10:57:26.606492229+00:00 stderr F I0120 10:57:26.606405 16 cacher.go:460] cacher (metal3remediations.infrastructure.cluster.x-k8s.io): initialized 2026-01-20T10:57:26.606492229+00:00 stderr F I0120 10:57:26.606429 16 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1beta1, Kind=Metal3Remediation from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediations 2026-01-20T10:57:26.636905453+00:00 stderr F I0120 10:57:26.636766 16 store.go:1579] "Monitoring resource count at path" resource="network-attachment-definitions.k8s.cni.cncf.io" path="//k8s.cni.cncf.io/network-attachment-definitions" 2026-01-20T10:57:26.640147289+00:00 stderr F I0120 10:57:26.639970 16 cacher.go:460] cacher (network-attachment-definitions.k8s.cni.cncf.io): initialized 2026-01-20T10:57:26.640147289+00:00 stderr F I0120 10:57:26.640008 16 reflector.go:351] Caches populated for k8s.cni.cncf.io/v1, Kind=NetworkAttachmentDefinition from storage/cacher.go:/k8s.cni.cncf.io/network-attachment-definitions 2026-01-20T10:57:26.662640174+00:00 stderr F I0120 10:57:26.662522 16 store.go:1579] "Monitoring resource count at path" resource="baselineadminnetworkpolicies.policy.networking.k8s.io" path="//policy.networking.k8s.io/baselineadminnetworkpolicies" 2026-01-20T10:57:26.663547208+00:00 stderr F I0120 10:57:26.663474 16 cacher.go:460] cacher (baselineadminnetworkpolicies.policy.networking.k8s.io): initialized 2026-01-20T10:57:26.663547208+00:00 stderr F I0120 10:57:26.663500 16 reflector.go:351] Caches populated for policy.networking.k8s.io/v1alpha1, Kind=BaselineAdminNetworkPolicy 
from storage/cacher.go:/policy.networking.k8s.io/baselineadminnetworkpolicies 2026-01-20T10:57:26.749311276+00:00 stderr F I0120 10:57:26.749188 16 store.go:1579] "Monitoring resource count at path" resource="alertmanagerconfigs.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagerconfigs" 2026-01-20T10:57:26.751175335+00:00 stderr F I0120 10:57:26.750477 16 cacher.go:460] cacher (alertmanagerconfigs.monitoring.coreos.com): initialized 2026-01-20T10:57:26.751175335+00:00 stderr F I0120 10:57:26.750502 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1alpha1, Kind=AlertmanagerConfig from storage/cacher.go:/monitoring.coreos.com/alertmanagerconfigs 2026-01-20T10:57:26.759916466+00:00 stderr F I0120 10:57:26.759804 16 store.go:1579] "Monitoring resource count at path" resource="alertmanagerconfigs.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagerconfigs" 2026-01-20T10:57:26.760802020+00:00 stderr F I0120 10:57:26.760715 16 cacher.go:460] cacher (alertmanagerconfigs.monitoring.coreos.com): initialized 2026-01-20T10:57:26.760802020+00:00 stderr F I0120 10:57:26.760739 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1beta1, Kind=AlertmanagerConfig from storage/cacher.go:/monitoring.coreos.com/alertmanagerconfigs 2026-01-20T10:57:26.812826005+00:00 stderr F I0120 10:57:26.812719 16 store.go:1579] "Monitoring resource count at path" resource="machinehealthchecks.machine.openshift.io" path="//machine.openshift.io/machinehealthchecks" 2026-01-20T10:57:26.814946891+00:00 stderr F I0120 10:57:26.814869 16 cacher.go:460] cacher (machinehealthchecks.machine.openshift.io): initialized 2026-01-20T10:57:26.814946891+00:00 stderr F I0120 10:57:26.814895 16 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=MachineHealthCheck from storage/cacher.go:/machine.openshift.io/machinehealthchecks 2026-01-20T10:57:26.877500235+00:00 stderr F I0120 10:57:26.877366 16 store.go:1579] "Monitoring resource count 
at path" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" path="//whereabouts.cni.cncf.io/overlappingrangeipreservations" 2026-01-20T10:57:26.878749719+00:00 stderr F I0120 10:57:26.878678 16 cacher.go:460] cacher (overlappingrangeipreservations.whereabouts.cni.cncf.io): initialized 2026-01-20T10:57:26.878749719+00:00 stderr F I0120 10:57:26.878699 16 reflector.go:351] Caches populated for whereabouts.cni.cncf.io/v1alpha1, Kind=OverlappingRangeIPReservation from storage/cacher.go:/whereabouts.cni.cncf.io/overlappingrangeipreservations 2026-01-20T10:57:26.994188772+00:00 stderr F I0120 10:57:26.994026 16 store.go:1579] "Monitoring resource count at path" resource="probes.monitoring.coreos.com" path="//monitoring.coreos.com/probes" 2026-01-20T10:57:26.995493936+00:00 stderr F I0120 10:57:26.995411 16 cacher.go:460] cacher (probes.monitoring.coreos.com): initialized 2026-01-20T10:57:26.995493936+00:00 stderr F I0120 10:57:26.995432 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Probe from storage/cacher.go:/monitoring.coreos.com/probes 2026-01-20T10:57:27.076093507+00:00 stderr F I0120 10:57:27.075860 16 store.go:1579] "Monitoring resource count at path" resource="ipaddresses.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddresses" 2026-01-20T10:57:27.078609194+00:00 stderr F I0120 10:57:27.078465 16 cacher.go:460] cacher (ipaddresses.ipam.cluster.x-k8s.io): initialized 2026-01-20T10:57:27.078609194+00:00 stderr F I0120 10:57:27.078498 16 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1alpha1, Kind=IPAddress from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddresses 2026-01-20T10:57:27.085586609+00:00 stderr F I0120 10:57:27.085461 16 store.go:1579] "Monitoring resource count at path" resource="ipaddresses.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddresses" 2026-01-20T10:57:27.086914524+00:00 stderr F I0120 10:57:27.086814 16 cacher.go:460] cacher (ipaddresses.ipam.cluster.x-k8s.io): 
initialized 2026-01-20T10:57:27.086914524+00:00 stderr F I0120 10:57:27.086836 16 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1beta1, Kind=IPAddress from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddresses 2026-01-20T10:57:27.138660873+00:00 stderr F I0120 10:57:27.138507 16 store.go:1579] "Monitoring resource count at path" resource="orders.acme.cert-manager.io" path="//acme.cert-manager.io/orders" 2026-01-20T10:57:27.139908345+00:00 stderr F I0120 10:57:27.139832 16 cacher.go:460] cacher (orders.acme.cert-manager.io): initialized 2026-01-20T10:57:27.139908345+00:00 stderr F I0120 10:57:27.139870 16 reflector.go:351] Caches populated for acme.cert-manager.io/v1, Kind=Order from storage/cacher.go:/acme.cert-manager.io/orders 2026-01-20T10:57:27.289420349+00:00 stderr F I0120 10:57:27.289225 16 store.go:1579] "Monitoring resource count at path" resource="networks.operator.openshift.io" path="//operator.openshift.io/networks" 2026-01-20T10:57:27.291880884+00:00 stderr F I0120 10:57:27.291743 16 cacher.go:460] cacher (networks.operator.openshift.io): initialized 2026-01-20T10:57:27.291880884+00:00 stderr F I0120 10:57:27.291772 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Network from storage/cacher.go:/operator.openshift.io/networks 2026-01-20T10:57:27.432333948+00:00 stderr F I0120 10:57:27.432139 16 store.go:1579] "Monitoring resource count at path" resource="prometheuses.monitoring.coreos.com" path="//monitoring.coreos.com/prometheuses" 2026-01-20T10:57:27.433529550+00:00 stderr F I0120 10:57:27.433443 16 cacher.go:460] cacher (prometheuses.monitoring.coreos.com): initialized 2026-01-20T10:57:27.433529550+00:00 stderr F I0120 10:57:27.433465 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Prometheus from storage/cacher.go:/monitoring.coreos.com/prometheuses 2026-01-20T10:57:27.465702660+00:00 stderr F I0120 10:57:27.465537 16 store.go:1579] "Monitoring resource count at path" 
resource="thanosrulers.monitoring.coreos.com" path="//monitoring.coreos.com/thanosrulers" 2026-01-20T10:57:27.466727538+00:00 stderr F I0120 10:57:27.466663 16 cacher.go:460] cacher (thanosrulers.monitoring.coreos.com): initialized 2026-01-20T10:57:27.466727538+00:00 stderr F I0120 10:57:27.466692 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=ThanosRuler from storage/cacher.go:/monitoring.coreos.com/thanosrulers 2026-01-20T10:57:27.478635393+00:00 stderr F I0120 10:57:27.478525 16 store.go:1579] "Monitoring resource count at path" resource="clusterresourcequotas.quota.openshift.io" path="//quota.openshift.io/clusterresourcequotas" 2026-01-20T10:57:27.479991968+00:00 stderr F I0120 10:57:27.479922 16 cacher.go:460] cacher (clusterresourcequotas.quota.openshift.io): initialized 2026-01-20T10:57:27.479991968+00:00 stderr F I0120 10:57:27.479944 16 reflector.go:351] Caches populated for quota.openshift.io/v1, Kind=ClusterResourceQuota from storage/cacher.go:/quota.openshift.io/clusterresourcequotas 2026-01-20T10:57:27.493471485+00:00 stderr F I0120 10:57:27.493231 16 store.go:1579] "Monitoring resource count at path" resource="installplans.operators.coreos.com" path="//operators.coreos.com/installplans" 2026-01-20T10:57:27.494315318+00:00 stderr F I0120 10:57:27.494238 16 cacher.go:460] cacher (installplans.operators.coreos.com): initialized 2026-01-20T10:57:27.494315318+00:00 stderr F I0120 10:57:27.494267 16 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=InstallPlan from storage/cacher.go:/operators.coreos.com/installplans 2026-01-20T10:57:27.505868393+00:00 stderr F I0120 10:57:27.505769 16 store.go:1579] "Monitoring resource count at path" resource="operatorgroups.operators.coreos.com" path="//operators.coreos.com/operatorgroups" 2026-01-20T10:57:27.510237538+00:00 stderr F I0120 10:57:27.510163 16 cacher.go:460] cacher (operatorgroups.operators.coreos.com): initialized 2026-01-20T10:57:27.510237538+00:00 
stderr F I0120 10:57:27.510189 16 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OperatorGroup from storage/cacher.go:/operators.coreos.com/operatorgroups 2026-01-20T10:57:27.513317780+00:00 stderr F I0120 10:57:27.513237 16 store.go:1579] "Monitoring resource count at path" resource="operatorgroups.operators.coreos.com" path="//operators.coreos.com/operatorgroups" 2026-01-20T10:57:27.515231951+00:00 stderr F I0120 10:57:27.515174 16 cacher.go:460] cacher (operatorgroups.operators.coreos.com): initialized 2026-01-20T10:57:27.515231951+00:00 stderr F I0120 10:57:27.515191 16 reflector.go:351] Caches populated for operators.coreos.com/v1alpha2, Kind=OperatorGroup from storage/cacher.go:/operators.coreos.com/operatorgroups 2026-01-20T10:57:27.562766098+00:00 stderr F I0120 10:57:27.562656 16 store.go:1579] "Monitoring resource count at path" resource="adminpolicybasedexternalroutes.k8s.ovn.org" path="//k8s.ovn.org/adminpolicybasedexternalroutes" 2026-01-20T10:57:27.564330590+00:00 stderr F I0120 10:57:27.564276 16 cacher.go:460] cacher (adminpolicybasedexternalroutes.k8s.ovn.org): initialized 2026-01-20T10:57:27.564330590+00:00 stderr F I0120 10:57:27.564295 16 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=AdminPolicyBasedExternalRoute from storage/cacher.go:/k8s.ovn.org/adminpolicybasedexternalroutes 2026-01-20T10:57:27.588138689+00:00 stderr F I0120 10:57:27.588006 16 store.go:1579] "Monitoring resource count at path" resource="controlplanemachinesets.machine.openshift.io" path="//machine.openshift.io/controlplanemachinesets" 2026-01-20T10:57:27.590498481+00:00 stderr F I0120 10:57:27.590428 16 cacher.go:460] cacher (controlplanemachinesets.machine.openshift.io): initialized 2026-01-20T10:57:27.590524512+00:00 stderr F I0120 10:57:27.590491 16 reflector.go:351] Caches populated for machine.openshift.io/v1, Kind=ControlPlaneMachineSet from storage/cacher.go:/machine.openshift.io/controlplanemachinesets 
2026-01-20T10:57:27.885903133+00:00 stderr F I0120 10:57:27.885793 16 store.go:1579] "Monitoring resource count at path" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" path="//controlplane.operator.openshift.io/podnetworkconnectivitychecks" 2026-01-20T10:57:27.891126591+00:00 stderr F I0120 10:57:27.891032 16 cacher.go:460] cacher (podnetworkconnectivitychecks.controlplane.operator.openshift.io): initialized 2026-01-20T10:57:27.891126591+00:00 stderr F I0120 10:57:27.891053 16 reflector.go:351] Caches populated for controlplane.operator.openshift.io/v1alpha1, Kind=PodNetworkConnectivityCheck from storage/cacher.go:/controlplane.operator.openshift.io/podnetworkconnectivitychecks 2026-01-20T10:57:28.000914084+00:00 stderr F I0120 10:57:28.000782 16 store.go:1579] "Monitoring resource count at path" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediationtemplates" 2026-01-20T10:57:28.002379784+00:00 stderr F I0120 10:57:28.002285 16 cacher.go:460] cacher (metal3remediationtemplates.infrastructure.cluster.x-k8s.io): initialized 2026-01-20T10:57:28.002379784+00:00 stderr F I0120 10:57:28.002326 16 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1alpha5, Kind=Metal3RemediationTemplate from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediationtemplates 2026-01-20T10:57:28.009468471+00:00 stderr F I0120 10:57:28.009369 16 store.go:1579] "Monitoring resource count at path" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediationtemplates" 2026-01-20T10:57:28.010260442+00:00 stderr F I0120 10:57:28.010176 16 cacher.go:460] cacher (metal3remediationtemplates.infrastructure.cluster.x-k8s.io): initialized 2026-01-20T10:57:28.010260442+00:00 stderr F I0120 10:57:28.010200 16 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1beta1, 
Kind=Metal3RemediationTemplate from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediationtemplates 2026-01-20T10:57:28.018338656+00:00 stderr F I0120 10:57:28.018229 16 store.go:1579] "Monitoring resource count at path" resource="machineautoscalers.autoscaling.openshift.io" path="//autoscaling.openshift.io/machineautoscalers" 2026-01-20T10:57:28.019024984+00:00 stderr F I0120 10:57:28.018928 16 cacher.go:460] cacher (machineautoscalers.autoscaling.openshift.io): initialized 2026-01-20T10:57:28.019024984+00:00 stderr F I0120 10:57:28.018961 16 reflector.go:351] Caches populated for autoscaling.openshift.io/v1beta1, Kind=MachineAutoscaler from storage/cacher.go:/autoscaling.openshift.io/machineautoscalers 2026-01-20T10:57:28.029425819+00:00 stderr F I0120 10:57:28.029330 16 store.go:1579] "Monitoring resource count at path" resource="egressips.k8s.ovn.org" path="//k8s.ovn.org/egressips" 2026-01-20T10:57:28.030197289+00:00 stderr F I0120 10:57:28.030033 16 cacher.go:460] cacher (egressips.k8s.ovn.org): initialized 2026-01-20T10:57:28.030197289+00:00 stderr F I0120 10:57:28.030106 16 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressIP from storage/cacher.go:/k8s.ovn.org/egressips 2026-01-20T10:57:28.153852839+00:00 stderr F I0120 10:57:28.153674 16 store.go:1579] "Monitoring resource count at path" resource="kubeschedulers.operator.openshift.io" path="//operator.openshift.io/kubeschedulers" 2026-01-20T10:57:28.155616826+00:00 stderr F I0120 10:57:28.155520 16 cacher.go:460] cacher (kubeschedulers.operator.openshift.io): initialized 2026-01-20T10:57:28.155616826+00:00 stderr F I0120 10:57:28.155547 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeScheduler from storage/cacher.go:/operator.openshift.io/kubeschedulers 2026-01-20T10:57:28.630286399+00:00 stderr F I0120 10:57:28.630153 16 store.go:1579] "Monitoring resource count at path" resource="infrastructures.config.openshift.io" 
path="//config.openshift.io/infrastructures" 2026-01-20T10:57:28.631638445+00:00 stderr F I0120 10:57:28.631553 16 cacher.go:460] cacher (infrastructures.config.openshift.io): initialized 2026-01-20T10:57:28.631638445+00:00 stderr F I0120 10:57:28.631580 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Infrastructure from storage/cacher.go:/config.openshift.io/infrastructures 2026-01-20T10:57:28.653154413+00:00 stderr F I0120 10:57:28.653030 16 store.go:1579] "Monitoring resource count at path" resource="alertmanagers.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagers" 2026-01-20T10:57:28.653987676+00:00 stderr F I0120 10:57:28.653920 16 cacher.go:460] cacher (alertmanagers.monitoring.coreos.com): initialized 2026-01-20T10:57:28.653987676+00:00 stderr F I0120 10:57:28.653941 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Alertmanager from storage/cacher.go:/monitoring.coreos.com/alertmanagers 2026-01-20T10:57:28.747556000+00:00 stderr F I0120 10:57:28.747420 16 store.go:1579] "Monitoring resource count at path" resource="ipaddressclaims.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddressclaims" 2026-01-20T10:57:28.748666919+00:00 stderr F I0120 10:57:28.748593 16 cacher.go:460] cacher (ipaddressclaims.ipam.cluster.x-k8s.io): initialized 2026-01-20T10:57:28.748666919+00:00 stderr F I0120 10:57:28.748616 16 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1alpha1, Kind=IPAddressClaim from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddressclaims 2026-01-20T10:57:28.759171457+00:00 stderr F I0120 10:57:28.759052 16 store.go:1579] "Monitoring resource count at path" resource="ipaddressclaims.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddressclaims" 2026-01-20T10:57:28.760483212+00:00 stderr F I0120 10:57:28.760414 16 cacher.go:460] cacher (ipaddressclaims.ipam.cluster.x-k8s.io): initialized 2026-01-20T10:57:28.760483212+00:00 stderr F I0120 10:57:28.760439 16 
reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1beta1, Kind=IPAddressClaim from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddressclaims 2026-01-20T10:57:28.794076611+00:00 stderr F I0120 10:57:28.793918 16 store.go:1579] "Monitoring resource count at path" resource="featuregates.config.openshift.io" path="//config.openshift.io/featuregates" 2026-01-20T10:57:28.796091263+00:00 stderr F I0120 10:57:28.795950 16 cacher.go:460] cacher (featuregates.config.openshift.io): initialized 2026-01-20T10:57:28.796091263+00:00 stderr F I0120 10:57:28.795988 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=FeatureGate from storage/cacher.go:/config.openshift.io/featuregates 2026-01-20T10:57:28.818467935+00:00 stderr F I0120 10:57:28.818333 16 store.go:1579] "Monitoring resource count at path" resource="securitycontextconstraints.security.openshift.io" path="//security.openshift.io/securitycontextconstraints" 2026-01-20T10:57:28.826464357+00:00 stderr F I0120 10:57:28.826326 16 cacher.go:460] cacher (securitycontextconstraints.security.openshift.io): initialized 2026-01-20T10:57:28.826464357+00:00 stderr F I0120 10:57:28.826355 16 reflector.go:351] Caches populated for security.openshift.io/v1, Kind=SecurityContextConstraints from storage/cacher.go:/security.openshift.io/securitycontextconstraints 2026-01-20T10:57:28.905693092+00:00 stderr F I0120 10:57:28.905573 16 store.go:1579] "Monitoring resource count at path" resource="clusteroperators.config.openshift.io" path="//config.openshift.io/clusteroperators" 2026-01-20T10:57:28.931154975+00:00 stderr F I0120 10:57:28.931001 16 cacher.go:460] cacher (clusteroperators.config.openshift.io): initialized 2026-01-20T10:57:28.931154975+00:00 stderr F I0120 10:57:28.931050 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ClusterOperator from storage/cacher.go:/config.openshift.io/clusteroperators 2026-01-20T10:57:28.970826475+00:00 stderr F I0120 10:57:28.970700 16 store.go:1579] 
"Monitoring resource count at path" resource="operatorconditions.operators.coreos.com" path="//operators.coreos.com/operatorconditions" 2026-01-20T10:57:28.972675603+00:00 stderr F I0120 10:57:28.972591 16 cacher.go:460] cacher (operatorconditions.operators.coreos.com): initialized 2026-01-20T10:57:28.972675603+00:00 stderr F I0120 10:57:28.972636 16 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OperatorCondition from storage/cacher.go:/operators.coreos.com/operatorconditions 2026-01-20T10:57:28.980985763+00:00 stderr F I0120 10:57:28.980872 16 store.go:1579] "Monitoring resource count at path" resource="operatorconditions.operators.coreos.com" path="//operators.coreos.com/operatorconditions" 2026-01-20T10:57:28.982075782+00:00 stderr F I0120 10:57:28.981973 16 cacher.go:460] cacher (operatorconditions.operators.coreos.com): initialized 2026-01-20T10:57:28.982075782+00:00 stderr F I0120 10:57:28.982010 16 reflector.go:351] Caches populated for operators.coreos.com/v2, Kind=OperatorCondition from storage/cacher.go:/operators.coreos.com/operatorconditions 2026-01-20T10:57:28.986708264+00:00 stderr F I0120 10:57:28.986637 16 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:28.991326077+00:00 stderr F I0120 10:57:28.991213 16 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:28.992407085+00:00 stderr F I0120 10:57:28.992334 16 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:28.999019410+00:00 stderr F I0120 10:57:28.998934 16 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:29.005091031+00:00 stderr F I0120 10:57:29.004949 16 store.go:1579] "Monitoring resource count at path" 
resource="apirequestcounts.apiserver.openshift.io" path="//apiserver.openshift.io/apirequestcounts" 2026-01-20T10:57:29.049178787+00:00 stderr F I0120 10:57:29.049024 16 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io 2026-01-20T10:57:29.053169742+00:00 stderr F I0120 10:57:29.053020 16 cacher.go:901] cacher (leases.coordination.k8s.io): 1 objects queued in incoming channel. 2026-01-20T10:57:29.053169742+00:00 stderr F I0120 10:57:29.053052 16 cacher.go:901] cacher (leases.coordination.k8s.io): 2 objects queued in incoming channel. 2026-01-20T10:57:29.053331216+00:00 stderr F I0120 10:57:29.053262 16 trace.go:236] Trace[1827967632]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ca458521-fb7f-430b-8beb-db5bdc689a1d,client:38.102.83.220,api-group:coordination.k8s.io,api-version:v1,name:crc,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc,user-agent:kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21,verb:PUT (20-Jan-2026 10:57:28.545) (total time: 507ms): 2026-01-20T10:57:29.053331216+00:00 stderr F Trace[1827967632]: ["GuaranteedUpdate etcd3" audit-id:ca458521-fb7f-430b-8beb-db5bdc689a1d,key:/leases/kube-node-lease/crc,type:*coordination.Lease,resource:leases.coordination.k8s.io 507ms (10:57:28.545) 2026-01-20T10:57:29.053331216+00:00 stderr F Trace[1827967632]: ---"About to Encode" 503ms (10:57:29.049)] 2026-01-20T10:57:29.053331216+00:00 stderr F Trace[1827967632]: [507.644495ms] [507.644495ms] END 2026-01-20T10:57:29.086267598+00:00 stderr F I0120 10:57:29.086153 16 store.go:1579] "Monitoring resource count at path" resource="kubeapiservers.operator.openshift.io" path="//operator.openshift.io/kubeapiservers" 2026-01-20T10:57:29.146234383+00:00 stderr F I0120 10:57:29.143912 16 cacher.go:460] cacher (kubeapiservers.operator.openshift.io): initialized 2026-01-20T10:57:29.146234383+00:00 
stderr F I0120 10:57:29.143965 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeAPIServer from storage/cacher.go:/operator.openshift.io/kubeapiservers 2026-01-20T10:57:29.174821979+00:00 stderr F I0120 10:57:29.174701 16 store.go:1579] "Monitoring resource count at path" resource="certificates.cert-manager.io" path="//cert-manager.io/certificates" 2026-01-20T10:57:29.179489633+00:00 stderr F I0120 10:57:29.179398 16 cacher.go:460] cacher (certificates.cert-manager.io): initialized 2026-01-20T10:57:29.179489633+00:00 stderr F I0120 10:57:29.179431 16 reflector.go:351] Caches populated for cert-manager.io/v1, Kind=Certificate from storage/cacher.go:/cert-manager.io/certificates 2026-01-20T10:57:29.196123813+00:00 stderr F I0120 10:57:29.195994 16 store.go:1579] "Monitoring resource count at path" resource="proxies.config.openshift.io" path="//config.openshift.io/proxies" 2026-01-20T10:57:29.201927086+00:00 stderr F I0120 10:57:29.201824 16 cacher.go:460] cacher (proxies.config.openshift.io): initialized 2026-01-20T10:57:29.201967827+00:00 stderr F I0120 10:57:29.201909 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Proxy from storage/cacher.go:/config.openshift.io/proxies 2026-01-20T10:57:29.236550191+00:00 stderr F I0120 10:57:29.236436 16 store.go:1579] "Monitoring resource count at path" resource="podmonitors.monitoring.coreos.com" path="//monitoring.coreos.com/podmonitors" 2026-01-20T10:57:29.241014310+00:00 stderr F I0120 10:57:29.240932 16 cacher.go:460] cacher (podmonitors.monitoring.coreos.com): initialized 2026-01-20T10:57:29.241014310+00:00 stderr F I0120 10:57:29.240959 16 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=PodMonitor from storage/cacher.go:/monitoring.coreos.com/podmonitors 2026-01-20T10:57:29.250400038+00:00 stderr F I0120 10:57:29.250328 16 store.go:1579] "Monitoring resource count at path" resource="alertingrules.monitoring.openshift.io" 
path="//monitoring.openshift.io/alertingrules" 2026-01-20T10:57:29.251816795+00:00 stderr F I0120 10:57:29.251465 16 cacher.go:460] cacher (alertingrules.monitoring.openshift.io): initialized 2026-01-20T10:57:29.251816795+00:00 stderr F I0120 10:57:29.251490 16 reflector.go:351] Caches populated for monitoring.openshift.io/v1, Kind=AlertingRule from storage/cacher.go:/monitoring.openshift.io/alertingrules 2026-01-20T10:57:29.259894489+00:00 stderr F I0120 10:57:29.259831 16 store.go:1579] "Monitoring resource count at path" resource="machineconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/machineconfigs" 2026-01-20T10:57:29.278052759+00:00 stderr F I0120 10:57:29.277923 16 cacher.go:460] cacher (machineconfigs.machineconfiguration.openshift.io): initialized 2026-01-20T10:57:29.278052759+00:00 stderr F I0120 10:57:29.277948 16 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=MachineConfig from storage/cacher.go:/machineconfiguration.openshift.io/machineconfigs 2026-01-20T10:57:29.307765565+00:00 stderr F I0120 10:57:29.307636 16 cacher.go:460] cacher (apirequestcounts.apiserver.openshift.io): initialized 2026-01-20T10:57:29.307765565+00:00 stderr F I0120 10:57:29.307689 16 reflector.go:351] Caches populated for apiserver.openshift.io/v1, Kind=APIRequestCount from storage/cacher.go:/apiserver.openshift.io/apirequestcounts 2026-01-20T10:57:29.488415642+00:00 stderr F I0120 10:57:29.488271 16 store.go:1579] "Monitoring resource count at path" resource="challenges.acme.cert-manager.io" path="//acme.cert-manager.io/challenges" 2026-01-20T10:57:29.489715337+00:00 stderr F I0120 10:57:29.489644 16 cacher.go:460] cacher (challenges.acme.cert-manager.io): initialized 2026-01-20T10:57:29.489732937+00:00 stderr F I0120 10:57:29.489699 16 reflector.go:351] Caches populated for acme.cert-manager.io/v1, Kind=Challenge from storage/cacher.go:/acme.cert-manager.io/challenges 2026-01-20T10:57:29.633744625+00:00 
stderr F I0120 10:57:29.632387 16 controller.go:624] quota admission added evaluator for: namespaces 2026-01-20T10:57:29.647595642+00:00 stderr F I0120 10:57:29.647462 16 controller.go:624] quota admission added evaluator for: serviceaccounts 2026-01-20T10:57:29.654901005+00:00 stderr F I0120 10:57:29.654776 16 store.go:1579] "Monitoring resource count at path" resource="certificaterequests.cert-manager.io" path="//cert-manager.io/certificaterequests" 2026-01-20T10:57:29.656040325+00:00 stderr F I0120 10:57:29.655963 16 cacher.go:460] cacher (certificaterequests.cert-manager.io): initialized 2026-01-20T10:57:29.656040325+00:00 stderr F I0120 10:57:29.655990 16 reflector.go:351] Caches populated for cert-manager.io/v1, Kind=CertificateRequest from storage/cacher.go:/cert-manager.io/certificaterequests 2026-01-20T10:57:29.787789919+00:00 stderr F I0120 10:57:29.787649 16 store.go:1579] "Monitoring resource count at path" resource="egressrouters.network.operator.openshift.io" path="//network.operator.openshift.io/egressrouters" 2026-01-20T10:57:29.788795766+00:00 stderr F I0120 10:57:29.788715 16 cacher.go:460] cacher (egressrouters.network.operator.openshift.io): initialized 2026-01-20T10:57:29.788795766+00:00 stderr F I0120 10:57:29.788752 16 reflector.go:351] Caches populated for network.operator.openshift.io/v1, Kind=EgressRouter from storage/cacher.go:/network.operator.openshift.io/egressrouters 2026-01-20T10:57:29.867562529+00:00 stderr F I0120 10:57:29.867421 16 store.go:1579] "Monitoring resource count at path" resource="projecthelmchartrepositories.helm.openshift.io" path="//helm.openshift.io/projecthelmchartrepositories" 2026-01-20T10:57:29.872980603+00:00 stderr F I0120 10:57:29.872850 16 cacher.go:460] cacher (projecthelmchartrepositories.helm.openshift.io): initialized 2026-01-20T10:57:29.872980603+00:00 stderr F I0120 10:57:29.872877 16 reflector.go:351] Caches populated for helm.openshift.io/v1beta1, Kind=ProjectHelmChartRepository from 
storage/cacher.go:/helm.openshift.io/projecthelmchartrepositories 2026-01-20T10:57:29.881482027+00:00 stderr F I0120 10:57:29.881417 16 store.go:1579] "Monitoring resource count at path" resource="catalogsources.operators.coreos.com" path="//operators.coreos.com/catalogsources" 2026-01-20T10:57:29.884772034+00:00 stderr F I0120 10:57:29.884674 16 cacher.go:460] cacher (catalogsources.operators.coreos.com): initialized 2026-01-20T10:57:29.884772034+00:00 stderr F I0120 10:57:29.884703 16 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=CatalogSource from storage/cacher.go:/operators.coreos.com/catalogsources 2026-01-20T10:57:29.893196897+00:00 stderr F I0120 10:57:29.892824 16 store.go:1579] "Monitoring resource count at path" resource="ippools.whereabouts.cni.cncf.io" path="//whereabouts.cni.cncf.io/ippools" 2026-01-20T10:57:29.894113631+00:00 stderr F I0120 10:57:29.894017 16 cacher.go:460] cacher (ippools.whereabouts.cni.cncf.io): initialized 2026-01-20T10:57:29.894113631+00:00 stderr F I0120 10:57:29.894038 16 reflector.go:351] Caches populated for whereabouts.cni.cncf.io/v1alpha1, Kind=IPPool from storage/cacher.go:/whereabouts.cni.cncf.io/ippools 2026-01-20T10:57:30.117867888+00:00 stderr F I0120 10:57:30.117719 16 store.go:1579] "Monitoring resource count at path" resource="controllerconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/controllerconfigs" 2026-01-20T10:57:30.120770365+00:00 stderr F I0120 10:57:30.120657 16 cacher.go:460] cacher (controllerconfigs.machineconfiguration.openshift.io): initialized 2026-01-20T10:57:30.120770365+00:00 stderr F I0120 10:57:30.120695 16 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=ControllerConfig from storage/cacher.go:/machineconfiguration.openshift.io/controllerconfigs 2026-01-20T10:57:30.235893539+00:00 stderr F I0120 10:57:30.235782 16 store.go:1579] "Monitoring resource count at path" 
resource="openshiftapiservers.operator.openshift.io" path="//operator.openshift.io/openshiftapiservers" 2026-01-20T10:57:30.238364725+00:00 stderr F I0120 10:57:30.238293 16 cacher.go:460] cacher (openshiftapiservers.operator.openshift.io): initialized 2026-01-20T10:57:30.238364725+00:00 stderr F I0120 10:57:30.238315 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=OpenShiftAPIServer from storage/cacher.go:/operator.openshift.io/openshiftapiservers 2026-01-20T10:57:30.299970194+00:00 stderr F I0120 10:57:30.299814 16 store.go:1579] "Monitoring resource count at path" resource="issuers.cert-manager.io" path="//cert-manager.io/issuers" 2026-01-20T10:57:30.301830714+00:00 stderr F I0120 10:57:30.301739 16 cacher.go:460] cacher (issuers.cert-manager.io): initialized 2026-01-20T10:57:30.301851514+00:00 stderr F I0120 10:57:30.301820 16 reflector.go:351] Caches populated for cert-manager.io/v1, Kind=Issuer from storage/cacher.go:/cert-manager.io/issuers 2026-01-20T10:57:30.328701694+00:00 stderr F I0120 10:57:30.328517 16 store.go:1579] "Monitoring resource count at path" resource="clusterserviceversions.operators.coreos.com" path="//operators.coreos.com/clusterserviceversions" 2026-01-20T10:57:30.332324590+00:00 stderr F I0120 10:57:30.332198 16 cacher.go:460] cacher (clusterserviceversions.operators.coreos.com): initialized 2026-01-20T10:57:30.332324590+00:00 stderr F I0120 10:57:30.332235 16 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=ClusterServiceVersion from storage/cacher.go:/operators.coreos.com/clusterserviceversions 2026-01-20T10:57:30.713162332+00:00 stderr F I0120 10:57:30.713000 16 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io 2026-01-20T10:57:30.752849881+00:00 stderr F I0120 10:57:30.752712 16 store.go:1579] "Monitoring resource count at path" resource="adminnetworkpolicies.policy.networking.k8s.io" path="//policy.networking.k8s.io/adminnetworkpolicies" 
2026-01-20T10:57:30.754016131+00:00 stderr F I0120 10:57:30.753941 16 cacher.go:460] cacher (adminnetworkpolicies.policy.networking.k8s.io): initialized 2026-01-20T10:57:30.754016131+00:00 stderr F I0120 10:57:30.753963 16 reflector.go:351] Caches populated for policy.networking.k8s.io/v1alpha1, Kind=AdminNetworkPolicy from storage/cacher.go:/policy.networking.k8s.io/adminnetworkpolicies 2026-01-20T10:57:31.112931523+00:00 stderr F I0120 10:57:31.112795 16 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io 2026-01-20T10:57:31.468289141+00:00 stderr F I0120 10:57:31.467900 16 store.go:1579] "Monitoring resource count at path" resource="alertrelabelconfigs.monitoring.openshift.io" path="//monitoring.openshift.io/alertrelabelconfigs" 2026-01-20T10:57:31.470247223+00:00 stderr F I0120 10:57:31.470156 16 cacher.go:460] cacher (alertrelabelconfigs.monitoring.openshift.io): initialized 2026-01-20T10:57:31.470247223+00:00 stderr F I0120 10:57:31.470192 16 reflector.go:351] Caches populated for monitoring.openshift.io/v1, Kind=AlertRelabelConfig from storage/cacher.go:/monitoring.openshift.io/alertrelabelconfigs 2026-01-20T10:57:31.935411294+00:00 stderr F I0120 10:57:31.935230 16 store.go:1579] "Monitoring resource count at path" resource="dnsrecords.ingress.operator.openshift.io" path="//ingress.operator.openshift.io/dnsrecords" 2026-01-20T10:57:31.936848351+00:00 stderr F I0120 10:57:31.936740 16 cacher.go:460] cacher (dnsrecords.ingress.operator.openshift.io): initialized 2026-01-20T10:57:31.936848351+00:00 stderr F I0120 10:57:31.936782 16 reflector.go:351] Caches populated for ingress.operator.openshift.io/v1, Kind=DNSRecord from storage/cacher.go:/ingress.operator.openshift.io/dnsrecords 2026-01-20T10:57:32.122184014+00:00 stderr F I0120 10:57:32.121995 16 controller.go:624] quota admission added evaluator for: daemonsets.apps 2026-01-20T10:57:33.316115487+00:00 stderr F I0120 10:57:33.315783 16 controller.go:624] quota admission 
added evaluator for: servicemonitors.monitoring.coreos.com 2026-01-20T10:57:34.864895806+00:00 stderr F I0120 10:57:34.864771 16 store.go:1579] "Monitoring resource count at path" resource="apiservers.config.openshift.io" path="//config.openshift.io/apiservers" 2026-01-20T10:57:34.866671422+00:00 stderr F I0120 10:57:34.866557 16 cacher.go:460] cacher (apiservers.config.openshift.io): initialized 2026-01-20T10:57:34.866671422+00:00 stderr F I0120 10:57:34.866582 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=APIServer from storage/cacher.go:/config.openshift.io/apiservers 2026-01-20T10:57:34.881276028+00:00 stderr F I0120 10:57:34.881143 16 store.go:1579] "Monitoring resource count at path" resource="authentications.config.openshift.io" path="//config.openshift.io/authentications" 2026-01-20T10:57:34.883600010+00:00 stderr F I0120 10:57:34.883423 16 cacher.go:460] cacher (authentications.config.openshift.io): initialized 2026-01-20T10:57:34.883600010+00:00 stderr F I0120 10:57:34.883543 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Authentication from storage/cacher.go:/config.openshift.io/authentications 2026-01-20T10:57:34.913775218+00:00 stderr F I0120 10:57:34.913621 16 store.go:1579] "Monitoring resource count at path" resource="consoles.config.openshift.io" path="//config.openshift.io/consoles" 2026-01-20T10:57:34.914941848+00:00 stderr F I0120 10:57:34.914871 16 cacher.go:460] cacher (consoles.config.openshift.io): initialized 2026-01-20T10:57:34.914941848+00:00 stderr F I0120 10:57:34.914890 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Console from storage/cacher.go:/config.openshift.io/consoles 2026-01-20T10:57:34.933752206+00:00 stderr F I0120 10:57:34.933616 16 store.go:1579] "Monitoring resource count at path" resource="dnses.config.openshift.io" path="//config.openshift.io/dnses" 2026-01-20T10:57:34.934861496+00:00 stderr F I0120 10:57:34.934737 16 cacher.go:460] cacher 
(dnses.config.openshift.io): initialized 2026-01-20T10:57:34.934861496+00:00 stderr F I0120 10:57:34.934758 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=DNS from storage/cacher.go:/config.openshift.io/dnses 2026-01-20T10:57:34.958184692+00:00 stderr F I0120 10:57:34.958053 16 store.go:1579] "Monitoring resource count at path" resource="images.config.openshift.io" path="//config.openshift.io/images" 2026-01-20T10:57:34.962151647+00:00 stderr F I0120 10:57:34.962026 16 cacher.go:460] cacher (images.config.openshift.io): initialized 2026-01-20T10:57:34.962151647+00:00 stderr F I0120 10:57:34.962067 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Image from storage/cacher.go:/config.openshift.io/images 2026-01-20T10:57:34.977892344+00:00 stderr F I0120 10:57:34.977761 16 store.go:1579] "Monitoring resource count at path" resource="ingresses.config.openshift.io" path="//config.openshift.io/ingresses" 2026-01-20T10:57:34.979730112+00:00 stderr F I0120 10:57:34.979663 16 cacher.go:460] cacher (ingresses.config.openshift.io): initialized 2026-01-20T10:57:34.979730112+00:00 stderr F I0120 10:57:34.979685 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Ingress from storage/cacher.go:/config.openshift.io/ingresses 2026-01-20T10:57:34.993392533+00:00 stderr F I0120 10:57:34.993272 16 store.go:1579] "Monitoring resource count at path" resource="oauths.config.openshift.io" path="//config.openshift.io/oauths" 2026-01-20T10:57:34.994783200+00:00 stderr F I0120 10:57:34.994725 16 cacher.go:460] cacher (oauths.config.openshift.io): initialized 2026-01-20T10:57:34.994783200+00:00 stderr F I0120 10:57:34.994754 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=OAuth from storage/cacher.go:/config.openshift.io/oauths 2026-01-20T10:57:35.005515004+00:00 stderr F I0120 10:57:35.005411 16 store.go:1579] "Monitoring resource count at path" resource="projects.config.openshift.io" 
path="//config.openshift.io/projects" 2026-01-20T10:57:35.007222869+00:00 stderr F I0120 10:57:35.007157 16 cacher.go:460] cacher (projects.config.openshift.io): initialized 2026-01-20T10:57:35.007222869+00:00 stderr F I0120 10:57:35.007179 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Project from storage/cacher.go:/config.openshift.io/projects 2026-01-20T10:57:35.051560901+00:00 stderr F I0120 10:57:35.050328 16 store.go:1579] "Monitoring resource count at path" resource="schedulers.config.openshift.io" path="//config.openshift.io/schedulers" 2026-01-20T10:57:35.052119976+00:00 stderr F I0120 10:57:35.052034 16 cacher.go:460] cacher (schedulers.config.openshift.io): initialized 2026-01-20T10:57:35.052119976+00:00 stderr F I0120 10:57:35.052061 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Scheduler from storage/cacher.go:/config.openshift.io/schedulers 2026-01-20T10:57:35.124909541+00:00 stderr F I0120 10:57:35.124752 16 controller.go:624] quota admission added evaluator for: deployments.apps 2026-01-20T10:57:35.206272193+00:00 stderr F I0120 10:57:35.206052 16 store.go:1579] "Monitoring resource count at path" resource="operatorhubs.config.openshift.io" path="//config.openshift.io/operatorhubs" 2026-01-20T10:57:35.208039850+00:00 stderr F I0120 10:57:35.207932 16 cacher.go:460] cacher (operatorhubs.config.openshift.io): initialized 2026-01-20T10:57:35.208039850+00:00 stderr F I0120 10:57:35.207955 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=OperatorHub from storage/cacher.go:/config.openshift.io/operatorhubs 2026-01-20T10:57:35.904355905+00:00 stderr F I0120 10:57:35.904211 16 store.go:1579] "Monitoring resource count at path" resource="helmchartrepositories.helm.openshift.io" path="//helm.openshift.io/helmchartrepositories" 2026-01-20T10:57:35.906076859+00:00 stderr F I0120 10:57:35.905990 16 cacher.go:460] cacher (helmchartrepositories.helm.openshift.io): initialized 
2026-01-20T10:57:35.906076859+00:00 stderr F I0120 10:57:35.906010 16 reflector.go:351] Caches populated for helm.openshift.io/v1beta1, Kind=HelmChartRepository from storage/cacher.go:/helm.openshift.io/helmchartrepositories 2026-01-20T10:57:35.912851079+00:00 stderr F I0120 10:57:35.912749 16 controller.go:624] quota admission added evaluator for: prometheusrules.monitoring.coreos.com 2026-01-20T10:57:40.313178176+00:00 stderr F I0120 10:57:40.312984 16 controller.go:624] quota admission added evaluator for: operatorpkis.network.operator.openshift.io 2026-01-20T10:57:41.681694138+00:00 stderr F I0120 10:57:41.681497 16 controller.go:624] quota admission added evaluator for: podnetworkconnectivitychecks.controlplane.operator.openshift.io 2026-01-20T10:57:41.681694138+00:00 stderr F I0120 10:57:41.681532 16 controller.go:624] quota admission added evaluator for: podnetworkconnectivitychecks.controlplane.operator.openshift.io 2026-01-20T10:57:44.111949797+00:00 stderr F I0120 10:57:44.111813 16 controller.go:624] quota admission added evaluator for: serviceaccounts 2026-01-20T10:57:44.313332352+00:00 stderr F I0120 10:57:44.313057 16 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io 2026-01-20T10:57:44.514583914+00:00 stderr F I0120 10:57:44.514329 16 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io 2026-01-20T10:57:45.319821520+00:00 stderr F I0120 10:57:45.319656 16 controller.go:624] quota admission added evaluator for: deployments.apps 2026-01-20T10:57:45.714397804+00:00 stderr F I0120 10:57:45.714264 16 controller.go:624] quota admission added evaluator for: servicemonitors.monitoring.coreos.com 2026-01-20T10:57:46.318176881+00:00 stderr F I0120 10:57:46.317551 16 controller.go:624] quota admission added evaluator for: daemonsets.apps 2026-01-20T10:57:48.657718821+00:00 stderr F I0120 10:57:48.657570 16 store.go:1579] "Monitoring resource count at path" 
resource="consoleplugins.console.openshift.io" path="//console.openshift.io/consoleplugins" 2026-01-20T10:57:48.658814370+00:00 stderr F I0120 10:57:48.658689 16 cacher.go:460] cacher (consoleplugins.console.openshift.io): initialized 2026-01-20T10:57:48.658814370+00:00 stderr F I0120 10:57:48.658733 16 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsolePlugin from storage/cacher.go:/console.openshift.io/consoleplugins 2026-01-20T10:57:48.665420865+00:00 stderr F I0120 10:57:48.665326 16 store.go:1579] "Monitoring resource count at path" resource="consoleplugins.console.openshift.io" path="//console.openshift.io/consoleplugins" 2026-01-20T10:57:48.667027288+00:00 stderr F I0120 10:57:48.666958 16 cacher.go:460] cacher (consoleplugins.console.openshift.io): initialized 2026-01-20T10:57:48.667027288+00:00 stderr F I0120 10:57:48.666975 16 reflector.go:351] Caches populated for console.openshift.io/v1alpha1, Kind=ConsolePlugin from storage/cacher.go:/console.openshift.io/consoleplugins 2026-01-20T10:57:48.712262184+00:00 stderr F I0120 10:57:48.712132 16 controller.go:624] quota admission added evaluator for: operatorpkis.network.operator.openshift.io 2026-01-20T10:57:48.856886198+00:00 stderr F I0120 10:57:48.856770 16 store.go:1579] "Monitoring resource count at path" resource="authentications.operator.openshift.io" path="//operator.openshift.io/authentications" 2026-01-20T10:57:48.860575886+00:00 stderr F I0120 10:57:48.860486 16 cacher.go:460] cacher (authentications.operator.openshift.io): initialized 2026-01-20T10:57:48.860575886+00:00 stderr F I0120 10:57:48.860515 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Authentication from storage/cacher.go:/operator.openshift.io/authentications 2026-01-20T10:57:48.902264398+00:00 stderr F I0120 10:57:48.901831 16 store.go:1579] "Monitoring resource count at path" resource="containerruntimeconfigs.machineconfiguration.openshift.io" 
path="//machineconfiguration.openshift.io/containerruntimeconfigs" 2026-01-20T10:57:48.902740831+00:00 stderr F I0120 10:57:48.902651 16 cacher.go:460] cacher (containerruntimeconfigs.machineconfiguration.openshift.io): initialized 2026-01-20T10:57:48.902740831+00:00 stderr F I0120 10:57:48.902672 16 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=ContainerRuntimeConfig from storage/cacher.go:/machineconfiguration.openshift.io/containerruntimeconfigs 2026-01-20T10:57:49.382058157+00:00 stderr F I0120 10:57:49.381896 16 store.go:1579] "Monitoring resource count at path" resource="imagetagmirrorsets.config.openshift.io" path="//config.openshift.io/imagetagmirrorsets" 2026-01-20T10:57:49.383605947+00:00 stderr F I0120 10:57:49.383520 16 cacher.go:460] cacher (imagetagmirrorsets.config.openshift.io): initialized 2026-01-20T10:57:49.383605947+00:00 stderr F I0120 10:57:49.383567 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ImageTagMirrorSet from storage/cacher.go:/config.openshift.io/imagetagmirrorsets 2026-01-20T10:57:49.461404125+00:00 stderr F I0120 10:57:49.461295 16 store.go:1579] "Monitoring resource count at path" resource="storageversionmigrations.migration.k8s.io" path="//migration.k8s.io/storageversionmigrations" 2026-01-20T10:57:49.464398754+00:00 stderr F I0120 10:57:49.464270 16 cacher.go:460] cacher (storageversionmigrations.migration.k8s.io): initialized 2026-01-20T10:57:49.464398754+00:00 stderr F I0120 10:57:49.464294 16 reflector.go:351] Caches populated for migration.k8s.io/v1alpha1, Kind=StorageVersionMigration from storage/cacher.go:/migration.k8s.io/storageversionmigrations 2026-01-20T10:57:49.489644781+00:00 stderr F I0120 10:57:49.489254 16 store.go:1579] "Monitoring resource count at path" resource="configs.imageregistry.operator.openshift.io" path="//imageregistry.operator.openshift.io/configs" 2026-01-20T10:57:49.491779318+00:00 stderr F I0120 10:57:49.491701 16 cacher.go:460] cacher 
(configs.imageregistry.operator.openshift.io): initialized 2026-01-20T10:57:49.491804179+00:00 stderr F I0120 10:57:49.491768 16 reflector.go:351] Caches populated for imageregistry.operator.openshift.io/v1, Kind=Config from storage/cacher.go:/imageregistry.operator.openshift.io/configs 2026-01-20T10:57:50.150424266+00:00 stderr F I0120 10:57:50.150275 16 store.go:1579] "Monitoring resource count at path" resource="consolenotifications.console.openshift.io" path="//console.openshift.io/consolenotifications" 2026-01-20T10:57:50.152014208+00:00 stderr F I0120 10:57:50.151917 16 cacher.go:460] cacher (consolenotifications.console.openshift.io): initialized 2026-01-20T10:57:50.152014208+00:00 stderr F I0120 10:57:50.151944 16 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleNotification from storage/cacher.go:/console.openshift.io/consolenotifications 2026-01-20T10:57:50.248301564+00:00 stderr F I0120 10:57:50.248175 16 store.go:1579] "Monitoring resource count at path" resource="consoleclidownloads.console.openshift.io" path="//console.openshift.io/consoleclidownloads" 2026-01-20T10:57:50.250524323+00:00 stderr F I0120 10:57:50.250433 16 cacher.go:460] cacher (consoleclidownloads.console.openshift.io): initialized 2026-01-20T10:57:50.250524323+00:00 stderr F I0120 10:57:50.250455 16 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleCLIDownload from storage/cacher.go:/console.openshift.io/consoleclidownloads 2026-01-20T10:57:50.367340732+00:00 stderr F I0120 10:57:50.367176 16 store.go:1579] "Monitoring resource count at path" resource="olmconfigs.operators.coreos.com" path="//operators.coreos.com/olmconfigs" 2026-01-20T10:57:50.369209041+00:00 stderr F I0120 10:57:50.368984 16 cacher.go:460] cacher (olmconfigs.operators.coreos.com): initialized 2026-01-20T10:57:50.369209041+00:00 stderr F I0120 10:57:50.369009 16 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OLMConfig from 
storage/cacher.go:/operators.coreos.com/olmconfigs 2026-01-20T10:57:50.382036461+00:00 stderr F I0120 10:57:50.381924 16 store.go:1579] "Monitoring resource count at path" resource="etcds.operator.openshift.io" path="//operator.openshift.io/etcds" 2026-01-20T10:57:50.389329223+00:00 stderr F I0120 10:57:50.389226 16 cacher.go:460] cacher (etcds.operator.openshift.io): initialized 2026-01-20T10:57:50.389329223+00:00 stderr F I0120 10:57:50.389265 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Etcd from storage/cacher.go:/operator.openshift.io/etcds 2026-01-20T10:57:51.328213373+00:00 stderr F I0120 10:57:51.328024 16 store.go:1579] "Monitoring resource count at path" resource="nodes.config.openshift.io" path="//config.openshift.io/nodes" 2026-01-20T10:57:51.330600476+00:00 stderr F I0120 10:57:51.330499 16 cacher.go:460] cacher (nodes.config.openshift.io): initialized 2026-01-20T10:57:51.330600476+00:00 stderr F I0120 10:57:51.330524 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Node from storage/cacher.go:/config.openshift.io/nodes 2026-01-20T10:57:51.657732157+00:00 stderr F I0120 10:57:51.657490 16 store.go:1579] "Monitoring resource count at path" resource="kubeletconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/kubeletconfigs" 2026-01-20T10:57:51.660272505+00:00 stderr F I0120 10:57:51.659965 16 cacher.go:460] cacher (kubeletconfigs.machineconfiguration.openshift.io): initialized 2026-01-20T10:57:51.660272505+00:00 stderr F I0120 10:57:51.660016 16 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=KubeletConfig from storage/cacher.go:/machineconfiguration.openshift.io/kubeletconfigs 2026-01-20T10:57:52.838285308+00:00 stderr F I0120 10:57:52.838176 16 store.go:1579] "Monitoring resource count at path" resource="machineconfigpools.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/machineconfigpools" 
2026-01-20T10:57:52.842266983+00:00 stderr F I0120 10:57:52.842191 16 cacher.go:460] cacher (machineconfigpools.machineconfiguration.openshift.io): initialized 2026-01-20T10:57:52.842266983+00:00 stderr F I0120 10:57:52.842228 16 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=MachineConfigPool from storage/cacher.go:/machineconfiguration.openshift.io/machineconfigpools 2026-01-20T10:57:53.239401716+00:00 stderr F E0120 10:57:53.239229 16 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"client disconnected"}: client disconnected 2026-01-20T10:57:53.239526399+00:00 stderr F E0120 10:57:53.239363 16 wrap.go:54] timeout or abort while handling: method=POST URI="/api/v1/namespaces/kube-system/events" audit-ID="a0622a9a-c408-4ec6-bef0-ff6971a9339a" 2026-01-20T10:57:53.239526399+00:00 stderr F E0120 10:57:53.239388 16 timeout.go:142] post-timeout activity - time-elapsed: 4.67µs, POST "/api/v1/namespaces/kube-system/events" result: 2026-01-20T10:57:53.480145412+00:00 stderr F I0120 10:57:53.479882 16 store.go:1579] "Monitoring resource count at path" resource="openshiftcontrollermanagers.operator.openshift.io" path="//operator.openshift.io/openshiftcontrollermanagers" 2026-01-20T10:57:53.483029498+00:00 stderr F I0120 10:57:53.482505 16 cacher.go:460] cacher (openshiftcontrollermanagers.operator.openshift.io): initialized 2026-01-20T10:57:53.483029498+00:00 stderr F I0120 10:57:53.482534 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=OpenShiftControllerManager from storage/cacher.go:/operator.openshift.io/openshiftcontrollermanagers 2026-01-20T10:57:54.061234869+00:00 stderr F I0120 10:57:54.061117 16 apf_controller.go:455] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=5 seatDemandAvg=0.007132428186494039 seatDemandStdev=0.10406383771170363 seatDemandSmoothed=2.8116604023988754 fairFrac=2.2759685360990574 currentCL=6 concurrencyDenominator=6 
backstop=false 2026-01-20T10:57:54.542280231+00:00 stderr F I0120 10:57:54.542143 16 store.go:1579] "Monitoring resource count at path" resource="clusterissuers.cert-manager.io" path="//cert-manager.io/clusterissuers" 2026-01-20T10:57:54.543854572+00:00 stderr F I0120 10:57:54.543174 16 cacher.go:460] cacher (clusterissuers.cert-manager.io): initialized 2026-01-20T10:57:54.543854572+00:00 stderr F I0120 10:57:54.543209 16 reflector.go:351] Caches populated for cert-manager.io/v1, Kind=ClusterIssuer from storage/cacher.go:/cert-manager.io/clusterissuers 2026-01-20T10:57:55.592813902+00:00 stderr F I0120 10:57:55.592678 16 controller.go:624] quota admission added evaluator for: csistoragecapacities.storage.k8s.io 2026-01-20T10:57:55.592813902+00:00 stderr F I0120 10:57:55.592723 16 controller.go:624] quota admission added evaluator for: csistoragecapacities.storage.k8s.io 2026-01-20T10:57:58.337626260+00:00 stderr F I0120 10:57:58.337467 16 store.go:1579] "Monitoring resource count at path" resource="builds.config.openshift.io" path="//config.openshift.io/builds" 2026-01-20T10:57:58.345153898+00:00 stderr F I0120 10:57:58.343514 16 cacher.go:460] cacher (builds.config.openshift.io): initialized 2026-01-20T10:57:58.345153898+00:00 stderr F I0120 10:57:58.343569 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Build from storage/cacher.go:/config.openshift.io/builds 2026-01-20T10:57:58.467473864+00:00 stderr F I0120 10:57:58.467285 16 store.go:1579] "Monitoring resource count at path" resource="configs.operator.openshift.io" path="//operator.openshift.io/configs" 2026-01-20T10:57:58.468804078+00:00 stderr F I0120 10:57:58.468735 16 cacher.go:460] cacher (configs.operator.openshift.io): initialized 2026-01-20T10:57:58.468804078+00:00 stderr F I0120 10:57:58.468756 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Config from storage/cacher.go:/operator.openshift.io/configs 2026-01-20T10:58:04.826195925+00:00 stderr F I0120 
10:58:04.826033 16 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.internal.openshift.io" path="//security.internal.openshift.io/rangeallocations" 2026-01-20T10:58:04.827505888+00:00 stderr F I0120 10:58:04.827444 16 cacher.go:460] cacher (rangeallocations.security.internal.openshift.io): initialized 2026-01-20T10:58:04.827551199+00:00 stderr F I0120 10:58:04.827517 16 reflector.go:351] Caches populated for security.internal.openshift.io/v1, Kind=RangeAllocation from storage/cacher.go:/security.internal.openshift.io/rangeallocations 2026-01-20T10:58:05.759139212+00:00 stderr F I0120 10:58:05.758937 16 trace.go:236] Trace[2060590502]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:60f86248-0f20-4ee0-8389-a48b852a37f3,client:38.102.83.39,api-group:,api-version:v1,name:,subresource:,namespace:openshift-must-gather-jdb4k,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/openshift-must-gather-jdb4k/pods,user-agent:oc/4.20.0 (linux/amd64) kubernetes/dc61926,verb:POST (20-Jan-2026 10:58:04.825) (total time: 932ms): 2026-01-20T10:58:05.759139212+00:00 stderr F Trace[2060590502]: ---"Write to database call failed" len:2191,err:pods "must-gather-" is forbidden: error looking up service account openshift-must-gather-jdb4k/default: serviceaccount "default" not found 932ms (10:58:05.758) 2026-01-20T10:58:05.759139212+00:00 stderr F Trace[2060590502]: [932.950958ms] [932.950958ms] END 2026-01-20T10:58:07.456132047+00:00 stderr F I0120 10:58:07.455376 16 store.go:1579] "Monitoring resource count at path" resource="dnses.operator.openshift.io" path="//operator.openshift.io/dnses" 2026-01-20T10:58:07.458832535+00:00 stderr F I0120 10:58:07.458648 16 cacher.go:460] cacher (dnses.operator.openshift.io): initialized 2026-01-20T10:58:07.458832535+00:00 stderr F I0120 10:58:07.458672 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=DNS from 
storage/cacher.go:/operator.openshift.io/dnses 2026-01-20T10:58:07.469081005+00:00 stderr F I0120 10:58:07.468972 16 store.go:1579] "Monitoring resource count at path" resource="imagepruners.imageregistry.operator.openshift.io" path="//imageregistry.operator.openshift.io/imagepruners" 2026-01-20T10:58:07.472579394+00:00 stderr F I0120 10:58:07.472506 16 cacher.go:460] cacher (imagepruners.imageregistry.operator.openshift.io): initialized 2026-01-20T10:58:07.472579394+00:00 stderr F I0120 10:58:07.472539 16 reflector.go:351] Caches populated for imageregistry.operator.openshift.io/v1, Kind=ImagePruner from storage/cacher.go:/imageregistry.operator.openshift.io/imagepruners 2026-01-20T10:58:07.481429019+00:00 stderr F I0120 10:58:07.478797 16 store.go:1579] "Monitoring resource count at path" resource="imagecontentsourcepolicies.operator.openshift.io" path="//operator.openshift.io/imagecontentsourcepolicies" 2026-01-20T10:58:07.481429019+00:00 stderr F I0120 10:58:07.479658 16 cacher.go:460] cacher (imagecontentsourcepolicies.operator.openshift.io): initialized 2026-01-20T10:58:07.481429019+00:00 stderr F I0120 10:58:07.479683 16 reflector.go:351] Caches populated for operator.openshift.io/v1alpha1, Kind=ImageContentSourcePolicy from storage/cacher.go:/operator.openshift.io/imagecontentsourcepolicies 2026-01-20T10:58:07.488495098+00:00 stderr F I0120 10:58:07.488433 16 store.go:1579] "Monitoring resource count at path" resource="imagecontentpolicies.config.openshift.io" path="//config.openshift.io/imagecontentpolicies" 2026-01-20T10:58:07.489449972+00:00 stderr F I0120 10:58:07.489396 16 cacher.go:460] cacher (imagecontentpolicies.config.openshift.io): initialized 2026-01-20T10:58:07.489449972+00:00 stderr F I0120 10:58:07.489418 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ImageContentPolicy from storage/cacher.go:/config.openshift.io/imagecontentpolicies 2026-01-20T10:58:07.495879116+00:00 stderr F I0120 10:58:07.495807 16 cacher.go:901] 
cacher (endpoints): 1 objects queued in incoming channel. 2026-01-20T10:58:07.495879116+00:00 stderr F I0120 10:58:07.495825 16 cacher.go:901] cacher (endpoints): 2 objects queued in incoming channel. 2026-01-20T10:58:07.499691383+00:00 stderr F I0120 10:58:07.499497 16 store.go:1579] "Monitoring resource count at path" resource="operators.operators.coreos.com" path="//operators.coreos.com/operators" 2026-01-20T10:58:07.500995216+00:00 stderr F I0120 10:58:07.500504 16 cacher.go:460] cacher (operators.operators.coreos.com): initialized 2026-01-20T10:58:07.500995216+00:00 stderr F I0120 10:58:07.500528 16 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=Operator from storage/cacher.go:/operators.coreos.com/operators 2026-01-20T10:58:07.505612893+00:00 stderr F I0120 10:58:07.505544 16 cacher.go:901] cacher (endpointslices.discovery.k8s.io): 1 objects queued in incoming channel. 2026-01-20T10:58:07.505612893+00:00 stderr F I0120 10:58:07.505570 16 cacher.go:901] cacher (endpointslices.discovery.k8s.io): 2 objects queued in incoming channel. 
2026-01-20T10:58:07.514095658+00:00 stderr F I0120 10:58:07.513840 16 store.go:1579] "Monitoring resource count at path" resource="clustercsidrivers.operator.openshift.io" path="//operator.openshift.io/clustercsidrivers" 2026-01-20T10:58:07.515207847+00:00 stderr F I0120 10:58:07.515096 16 cacher.go:460] cacher (clustercsidrivers.operator.openshift.io): initialized 2026-01-20T10:58:07.515207847+00:00 stderr F I0120 10:58:07.515113 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=ClusterCSIDriver from storage/cacher.go:/operator.openshift.io/clustercsidrivers 2026-01-20T10:58:07.524937604+00:00 stderr F I0120 10:58:07.524840 16 store.go:1579] "Monitoring resource count at path" resource="consoleyamlsamples.console.openshift.io" path="//console.openshift.io/consoleyamlsamples" 2026-01-20T10:58:07.526048133+00:00 stderr F I0120 10:58:07.525992 16 cacher.go:460] cacher (consoleyamlsamples.console.openshift.io): initialized 2026-01-20T10:58:07.526048133+00:00 stderr F I0120 10:58:07.526014 16 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleYAMLSample from storage/cacher.go:/console.openshift.io/consoleyamlsamples 2026-01-20T10:58:07.539933115+00:00 stderr F I0120 10:58:07.539847 16 store.go:1579] "Monitoring resource count at path" resource="consolesamples.console.openshift.io" path="//console.openshift.io/consolesamples" 2026-01-20T10:58:07.540694955+00:00 stderr F I0120 10:58:07.540647 16 cacher.go:460] cacher (consolesamples.console.openshift.io): initialized 2026-01-20T10:58:07.540747936+00:00 stderr F I0120 10:58:07.540723 16 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleSample from storage/cacher.go:/console.openshift.io/consolesamples 2026-01-20T10:58:07.553982482+00:00 stderr F I0120 10:58:07.553887 16 store.go:1579] "Monitoring resource count at path" resource="kubestorageversionmigrators.operator.openshift.io" path="//operator.openshift.io/kubestorageversionmigrators" 
2026-01-20T10:58:07.556367752+00:00 stderr F I0120 10:58:07.556290 16 cacher.go:460] cacher (kubestorageversionmigrators.operator.openshift.io): initialized 2026-01-20T10:58:07.556470815+00:00 stderr F I0120 10:58:07.556376 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeStorageVersionMigrator from storage/cacher.go:/operator.openshift.io/kubestorageversionmigrators 2026-01-20T10:58:07.566256994+00:00 stderr F I0120 10:58:07.566166 16 store.go:1579] "Monitoring resource count at path" resource="consolequickstarts.console.openshift.io" path="//console.openshift.io/consolequickstarts" 2026-01-20T10:58:07.593580658+00:00 stderr F I0120 10:58:07.593473 16 store.go:1579] "Monitoring resource count at path" resource="consoles.operator.openshift.io" path="//operator.openshift.io/consoles" 2026-01-20T10:58:07.604860584+00:00 stderr F I0120 10:58:07.604752 16 cacher.go:460] cacher (consoles.operator.openshift.io): initialized 2026-01-20T10:58:07.604860584+00:00 stderr F I0120 10:58:07.604783 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Console from storage/cacher.go:/operator.openshift.io/consoles 2026-01-20T10:58:07.620221874+00:00 stderr F I0120 10:58:07.620073 16 cacher.go:460] cacher (consolequickstarts.console.openshift.io): initialized 2026-01-20T10:58:07.620221874+00:00 stderr F I0120 10:58:07.620103 16 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleQuickStart from storage/cacher.go:/console.openshift.io/consolequickstarts 2026-01-20T10:58:07.630631349+00:00 stderr F I0120 10:58:07.630527 16 store.go:1579] "Monitoring resource count at path" resource="csisnapshotcontrollers.operator.openshift.io" path="//operator.openshift.io/csisnapshotcontrollers" 2026-01-20T10:58:07.634088776+00:00 stderr F I0120 10:58:07.634006 16 cacher.go:460] cacher (csisnapshotcontrollers.operator.openshift.io): initialized 2026-01-20T10:58:07.634088776+00:00 stderr F I0120 10:58:07.634033 16 
reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=CSISnapshotController from storage/cacher.go:/operator.openshift.io/csisnapshotcontrollers 2026-01-20T10:58:07.687003161+00:00 stderr F I0120 10:58:07.686885 16 store.go:1579] "Monitoring resource count at path" resource="clusterautoscalers.autoscaling.openshift.io" path="//autoscaling.openshift.io/clusterautoscalers" 2026-01-20T10:58:07.688496449+00:00 stderr F I0120 10:58:07.688430 16 cacher.go:460] cacher (clusterautoscalers.autoscaling.openshift.io): initialized 2026-01-20T10:58:07.688521400+00:00 stderr F I0120 10:58:07.688499 16 reflector.go:351] Caches populated for autoscaling.openshift.io/v1, Kind=ClusterAutoscaler from storage/cacher.go:/autoscaling.openshift.io/clusterautoscalers 2026-01-20T10:58:07.703884079+00:00 stderr F I0120 10:58:07.703753 16 store.go:1579] "Monitoring resource count at path" resource="consolelinks.console.openshift.io" path="//console.openshift.io/consolelinks" 2026-01-20T10:58:07.704883284+00:00 stderr F I0120 10:58:07.704768 16 cacher.go:460] cacher (consolelinks.console.openshift.io): initialized 2026-01-20T10:58:07.704883284+00:00 stderr F I0120 10:58:07.704795 16 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleLink from storage/cacher.go:/console.openshift.io/consolelinks 2026-01-20T10:58:07.717010213+00:00 stderr F I0120 10:58:07.716905 16 store.go:1579] "Monitoring resource count at path" resource="configs.samples.operator.openshift.io" path="//samples.operator.openshift.io/configs" 2026-01-20T10:58:07.718438149+00:00 stderr F I0120 10:58:07.718369 16 cacher.go:460] cacher (configs.samples.operator.openshift.io): initialized 2026-01-20T10:58:07.718468580+00:00 stderr F I0120 10:58:07.718432 16 reflector.go:351] Caches populated for samples.operator.openshift.io/v1, Kind=Config from storage/cacher.go:/samples.operator.openshift.io/configs 2026-01-20T10:58:07.727932940+00:00 stderr F I0120 10:58:07.727869 16 store.go:1579] 
"Monitoring resource count at path" resource="storagestates.migration.k8s.io" path="//migration.k8s.io/storagestates" 2026-01-20T10:58:07.729160651+00:00 stderr F I0120 10:58:07.729104 16 cacher.go:460] cacher (storagestates.migration.k8s.io): initialized 2026-01-20T10:58:07.729205352+00:00 stderr F I0120 10:58:07.729178 16 reflector.go:351] Caches populated for migration.k8s.io/v1alpha1, Kind=StorageState from storage/cacher.go:/migration.k8s.io/storagestates 2026-01-20T10:58:07.739237648+00:00 stderr F I0120 10:58:07.739162 16 store.go:1579] "Monitoring resource count at path" resource="machineconfigurations.operator.openshift.io" path="//operator.openshift.io/machineconfigurations" 2026-01-20T10:58:07.740283564+00:00 stderr F I0120 10:58:07.740232 16 cacher.go:460] cacher (machineconfigurations.operator.openshift.io): initialized 2026-01-20T10:58:07.740299854+00:00 stderr F I0120 10:58:07.740280 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=MachineConfiguration from storage/cacher.go:/operator.openshift.io/machineconfigurations 2026-01-20T10:58:07.748951924+00:00 stderr F I0120 10:58:07.748840 16 store.go:1579] "Monitoring resource count at path" resource="servicecas.operator.openshift.io" path="//operator.openshift.io/servicecas" 2026-01-20T10:58:07.750223847+00:00 stderr F I0120 10:58:07.750156 16 cacher.go:460] cacher (servicecas.operator.openshift.io): initialized 2026-01-20T10:58:07.750223847+00:00 stderr F I0120 10:58:07.750179 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=ServiceCA from storage/cacher.go:/operator.openshift.io/servicecas 2026-01-20T10:58:07.761694658+00:00 stderr F I0120 10:58:07.761544 16 store.go:1579] "Monitoring resource count at path" resource="consoleexternalloglinks.console.openshift.io" path="//console.openshift.io/consoleexternalloglinks" 2026-01-20T10:58:07.762567450+00:00 stderr F I0120 10:58:07.762432 16 cacher.go:460] cacher (consoleexternalloglinks.console.openshift.io): 
initialized 2026-01-20T10:58:07.762567450+00:00 stderr F I0120 10:58:07.762451 16 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleExternalLogLink from storage/cacher.go:/console.openshift.io/consoleexternalloglinks 2026-01-20T10:58:07.771768023+00:00 stderr F I0120 10:58:07.771660 16 store.go:1579] "Monitoring resource count at path" resource="imagedigestmirrorsets.config.openshift.io" path="//config.openshift.io/imagedigestmirrorsets" 2026-01-20T10:58:07.772811280+00:00 stderr F I0120 10:58:07.772306 16 cacher.go:460] cacher (imagedigestmirrorsets.config.openshift.io): initialized 2026-01-20T10:58:07.772811280+00:00 stderr F I0120 10:58:07.772325 16 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ImageDigestMirrorSet from storage/cacher.go:/config.openshift.io/imagedigestmirrorsets 2026-01-20T10:58:07.784713802+00:00 stderr F I0120 10:58:07.784626 16 store.go:1579] "Monitoring resource count at path" resource="storages.operator.openshift.io" path="//operator.openshift.io/storages" 2026-01-20T10:58:07.785996665+00:00 stderr F I0120 10:58:07.785836 16 cacher.go:460] cacher (storages.operator.openshift.io): initialized 2026-01-20T10:58:07.785996665+00:00 stderr F I0120 10:58:07.785872 16 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Storage from storage/cacher.go:/operator.openshift.io/storages 2026-01-20T10:58:12.525810018+00:00 stderr F I0120 10:58:12.525634 16 httplog.go:92] system:serviceaccount:openshift-apiserver:openshift-apiserver-sa[system:serviceaccounts,system:serviceaccounts:openshift-apiserver,system:authenticated] is impersonating system:serviceaccount:kube-system:namespace-controller[system:serviceaccounts,system:serviceaccounts:kube-system,system:authenticated] 2026-01-20T10:58:12.531182614+00:00 stderr F I0120 10:58:12.531032 16 httplog.go:92] 
system:serviceaccount:openshift-apiserver:openshift-apiserver-sa[system:serviceaccounts,system:serviceaccounts:openshift-apiserver,system:authenticated] is impersonating system:serviceaccount:kube-system:namespace-controller[system:serviceaccounts,system:serviceaccounts:kube-system,system:authenticated] 2026-01-20T10:58:12.879311716+00:00 stderr F I0120 10:58:12.879125 16 httplog.go:92] system:serviceaccount:openshift-apiserver:openshift-apiserver-sa[system:serviceaccounts,system:serviceaccounts:openshift-apiserver,system:authenticated] is impersonating system:serviceaccount:kube-system:namespace-controller[system:serviceaccounts,system:serviceaccounts:kube-system,system:authenticated] 2026-01-20T10:58:12.884943850+00:00 stderr F I0120 10:58:12.884772 16 httplog.go:92] system:serviceaccount:openshift-apiserver:openshift-apiserver-sa[system:serviceaccounts,system:serviceaccounts:openshift-apiserver,system:authenticated] is impersonating system:serviceaccount:kube-system:namespace-controller[system:serviceaccounts,system:serviceaccounts:kube-system,system:authenticated] 2026-01-20T10:58:19.066281728+00:00 stderr F E0120 10:58:19.066147 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=43454&timeout=9m26s&timeoutSeconds=566&watch=true" audit-ID="cb2ed62d-fdbc-47a8-8161-82623edb2b0a" 2026-01-20T10:58:19.066471813+00:00 stderr F E0120 10:58:19.066406 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=43455&timeout=9m32s&timeoutSeconds=572&watch=true" audit-ID="92d0dea1-0136-404f-b196-bc02fa423ec2" 2026-01-20T10:58:19.067776526+00:00 stderr F E0120 10:58:19.067707 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=43467&timeout=8m6s&timeoutSeconds=486&watch=true" 
audit-ID="09e7fbc0-cd02-4b1d-8de0-237a561d0d54" 2026-01-20T10:58:19.067776526+00:00 stderr F E0120 10:58:19.067735 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/template.openshift.io/v1/brokertemplateinstances?allowWatchBookmarks=true&resourceVersion=43472&timeout=8m26s&timeoutSeconds=506&watch=true" audit-ID="a9e6986e-34a1-4485-81f6-9a31a99f9329" 2026-01-20T10:58:19.067845728+00:00 stderr F E0120 10:58:19.067801 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/oauth.openshift.io/v1/useroauthaccesstokens?allowWatchBookmarks=true&resourceVersion=43459&timeout=5m50s&timeoutSeconds=350&watch=true" audit-ID="d342742b-e79a-4d7e-a53c-d09a104afa7a" 2026-01-20T10:58:19.068411932+00:00 stderr F E0120 10:58:19.068353 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=43459&timeout=6m48s&timeoutSeconds=408&watch=true" audit-ID="56a4bb78-2203-4f3b-95c4-15e06c372c9e" 2026-01-20T10:58:19.068411932+00:00 stderr F E0120 10:58:19.068372 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/security.openshift.io/v1/rangeallocations?allowWatchBookmarks=true&resourceVersion=43472&timeout=8m2s&timeoutSeconds=482&watch=true" audit-ID="e806c7f7-b141-4f37-bac3-4fae9766f6bc" 2026-01-20T10:58:19.068488494+00:00 stderr F E0120 10:58:19.068438 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/template.openshift.io/v1/templateinstances?allowWatchBookmarks=true&resourceVersion=43460&timeout=5m59s&timeoutSeconds=359&watch=true" audit-ID="f0713f18-2973-46ee-82b9-a0e85dcff7f4" 2026-01-20T10:58:19.068863803+00:00 stderr F E0120 10:58:19.068806 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=43459&timeout=9m1s&timeoutSeconds=541&watch=true" audit-ID="791e608c-9fed-4c2d-af06-40ce225a810e" 2026-01-20T10:58:19.069094789+00:00 
stderr F E0120 10:58:19.069006 16 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=43455&timeout=8m4s&timeoutSeconds=484&watch=true" audit-ID="24e99f96-9e87-434c-b06c-e8415573b4a3" ././@LongLink0000644000000000000000000000025600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015133657716033072 5ustar zuulzuul././@LongLink0000644000000000000000000000032200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015133657736033074 5ustar zuulzuul././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000001510615133657716033077 0ustar zuulzuul2025-08-13T20:08:13.912570675+00:00 stderr F I0813 20:08:13.912110 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:13.912570675+00:00 stderr F I0813 20:08:13.912498 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.012990 1 base_controller.go:73] Caches are synced for CertSyncController 
2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013087 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013232 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013695 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:00.642329395+00:00 stderr F I0813 20:09:00.642073 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:00.642836380+00:00 stderr F I0813 20:09:00.642748 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.039403674+00:00 stderr F I0813 20:09:06.039168 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.040066213+00:00 stderr F I0813 20:09:06.040023 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.040328071+00:00 stderr F I0813 20:09:06.040299 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.040866436+00:00 stderr F I0813 20:09:06.040697 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.041170375+00:00 stderr F I0813 20:09:06.041082 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.041673049+00:00 stderr F I0813 20:09:06.041618 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} 
{csr-signer false}] 2025-08-13T20:19:00.657764897+00:00 stderr F I0813 20:19:00.657557 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:00.669646716+00:00 stderr F I0813 20:19:00.669516 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:19:00.676291486+00:00 stderr F I0813 20:19:00.676168 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:00.684225793+00:00 stderr F I0813 20:19:00.684075 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:19:06.041870472+00:00 stderr F I0813 20:19:06.041707 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:06.042281974+00:00 stderr F I0813 20:19:06.042201 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:00.643715058+00:00 stderr F I0813 20:29:00.643482 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:00.644704676+00:00 stderr F I0813 20:29:00.644581 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:00.644983084+00:00 stderr F I0813 20:29:00.644921 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:00.646332343+00:00 stderr F I0813 20:29:00.645357 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:06.042539590+00:00 stderr F I0813 20:29:06.042335 1 certsync_controller.go:65] Syncing 
configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:06.043743395+00:00 stderr F I0813 20:29:06.043351 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:06.043743395+00:00 stderr F I0813 20:29:06.043572 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:06.044232079+00:00 stderr F I0813 20:29:06.044104 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:00.644997435+00:00 stderr F I0813 20:39:00.644245 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:00.645629583+00:00 stderr F I0813 20:39:00.645535 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:00.645942712+00:00 stderr F I0813 20:39:00.645848 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:00.646226851+00:00 stderr F I0813 20:39:00.646102 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:06.043658059+00:00 stderr F I0813 20:39:06.043487 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:06.049015103+00:00 stderr F I0813 20:39:06.048915 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:06.049189868+00:00 stderr F I0813 20:39:06.049110 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:06.049439836+00:00 stderr F 
I0813 20:39:06.049391 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] ././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000001164215133657736033102 0ustar zuulzuul2026-01-20T10:42:03.624409539+00:00 stderr F I0120 10:42:03.624258 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2026-01-20T10:42:03.624409539+00:00 stderr F I0120 10:42:03.624386 1 observer_polling.go:159] Starting file observer 2026-01-20T10:42:03.629770275+00:00 stderr F W0120 10:42:03.629614 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.629804506+00:00 stderr F E0120 10:42:03.629783 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.632686910+00:00 stderr F W0120 10:42:03.629819 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:03.632686910+00:00 stderr F E0120 10:42:03.629856 1 
reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:14.653302748+00:00 stderr F W0120 10:42:14.653090 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:42:14.654315309+00:00 stderr F I0120 10:42:14.654232 1 trace.go:236] Trace[1831232996]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (20-Jan-2026 10:42:04.651) (total time: 10002ms): 2026-01-20T10:42:14.654315309+00:00 stderr F Trace[1831232996]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:42:14.653) 2026-01-20T10:42:14.654315309+00:00 stderr F Trace[1831232996]: [10.002079969s] [10.002079969s] END 2026-01-20T10:42:14.654358180+00:00 stderr F E0120 10:42:14.654331 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:42:14.794393492+00:00 stderr F W0120 10:42:14.794237 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:42:14.794460874+00:00 stderr F I0120 10:42:14.794394 1 
trace.go:236] Trace[1832933794]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (20-Jan-2026 10:42:04.792) (total time: 10001ms): 2026-01-20T10:42:14.794460874+00:00 stderr F Trace[1832933794]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:42:14.794) 2026-01-20T10:42:14.794460874+00:00 stderr F Trace[1832933794]: [10.001534494s] [10.001534494s] END 2026-01-20T10:42:14.794460874+00:00 stderr F E0120 10:42:14.794425 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:42:22.025557333+00:00 stderr F I0120 10:42:22.025436 1 base_controller.go:73] Caches are synced for CertSyncController 2026-01-20T10:42:22.025557333+00:00 stderr F I0120 10:42:22.025482 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 
2026-01-20T10:42:22.026045507+00:00 stderr F I0120 10:42:22.025978 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:42:22.026316845+00:00 stderr F I0120 10:42:22.026274 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] ././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000003017415133657736033103 0ustar zuulzuul2026-01-20T10:47:06.674463337+00:00 stderr F I0120 10:47:06.673907 1 observer_polling.go:159] Starting file observer 2026-01-20T10:47:06.674463337+00:00 stderr F I0120 10:47:06.674258 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2026-01-20T10:47:16.681410697+00:00 stderr F W0120 10:47:16.681270 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:47:16.681543911+00:00 stderr F W0120 10:47:16.681270 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:47:16.682164741+00:00 stderr F I0120 10:47:16.682117 1 trace.go:236] Trace[2119463000]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (20-Jan-2026 10:47:06.674) (total time: 10006ms): 
2026-01-20T10:47:16.682164741+00:00 stderr F Trace[2119463000]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10006ms (10:47:16.681) 2026-01-20T10:47:16.682164741+00:00 stderr F Trace[2119463000]: [10.006916559s] [10.006916559s] END 2026-01-20T10:47:16.682164741+00:00 stderr F I0120 10:47:16.682150 1 trace.go:236] Trace[182094307]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (20-Jan-2026 10:47:06.674) (total time: 10007ms): 2026-01-20T10:47:16.682164741+00:00 stderr F Trace[182094307]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10006ms (10:47:16.681) 2026-01-20T10:47:16.682164741+00:00 stderr F Trace[182094307]: [10.007123135s] [10.007123135s] END 2026-01-20T10:47:16.682210322+00:00 stderr F E0120 10:47:16.682177 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:47:16.682210322+00:00 stderr F E0120 10:47:16.682181 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2026-01-20T10:47:23.276489143+00:00 stderr F I0120 10:47:23.275334 1 base_controller.go:73] Caches are synced for CertSyncController 2026-01-20T10:47:23.276489143+00:00 stderr F I0120 10:47:23.275367 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 
2026-01-20T10:47:23.276489143+00:00 stderr F I0120 10:47:23.275685 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:47:23.276489143+00:00 stderr F I0120 10:47:23.275957 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:41.961534840+00:00 stderr F I0120 10:49:41.961271 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:41.961712506+00:00 stderr F I0120 10:49:41.961676 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:42.060317049+00:00 stderr F I0120 10:49:42.059666 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:42.060317049+00:00 stderr F I0120 10:49:42.059888 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:42.077633947+00:00 stderr F I0120 10:49:42.076867 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:42.077633947+00:00 stderr F I0120 10:49:42.077475 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:42.082258777+00:00 stderr F I0120 10:49:42.081887 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:42.082258777+00:00 stderr F I0120 10:49:42.082142 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:42.091950343+00:00 stderr F I0120 10:49:42.091759 1 certsync_controller.go:65] Syncing configmaps: 
[{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:42.091991464+00:00 stderr F I0120 10:49:42.091975 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:42.102466223+00:00 stderr F I0120 10:49:42.102275 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:42.102497174+00:00 stderr F I0120 10:49:42.102480 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:47.034483759+00:00 stderr F I0120 10:49:47.034420 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:47.034656235+00:00 stderr F I0120 10:49:47.034629 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:47.041220444+00:00 stderr F I0120 10:49:47.041183 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:47.041448021+00:00 stderr F I0120 10:49:47.041413 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:47.047745794+00:00 stderr F I0120 10:49:47.047700 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:47.047905168+00:00 stderr F I0120 10:49:47.047881 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:47.087345230+00:00 stderr F I0120 10:49:47.086907 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:47.087345230+00:00 stderr F I0120 
10:49:47.087219 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:47.094644911+00:00 stderr F I0120 10:49:47.094538 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:47.094852649+00:00 stderr F I0120 10:49:47.094826 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:49:47.101766209+00:00 stderr F I0120 10:49:47.101711 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:49:47.101998676+00:00 stderr F I0120 10:49:47.101969 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2026-01-20T10:56:09.218036745+00:00 stderr F I0120 10:56:09.217920 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2026-01-20T10:56:09.221024045+00:00 stderr F I0120 10:56:09.220947 1 certsync_controller.go:146] Creating directory "/etc/kubernetes/static-pod-certs/configmaps/client-ca" ... 2026-01-20T10:56:09.221130968+00:00 stderr F I0120 10:56:09.221093 1 certsync_controller.go:159] Writing configmap manifest "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" ... 
2026-01-20T10:56:09.224717194+00:00 stderr F I0120 10:56:09.224635 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2026-01-20T10:56:09.227258543+00:00 stderr F I0120 10:56:09.227160 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CertificateUpdated' Wrote updated configmap: openshift-kube-controller-manager/client-ca
2026-01-20T10:57:23.139686598+00:00 stderr F I0120 10:57:23.139523 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2026-01-20T10:57:23.139885593+00:00 stderr F I0120 10:57:23.139838 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2026-01-20T10:57:23.140020656+00:00 stderr F I0120 10:57:23.139987 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2026-01-20T10:57:23.140845749+00:00 stderr F I0120 10:57:23.140804 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2026-01-20T10:57:23.244130219+00:00 stderr F I0120 10:57:23.244046 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2026-01-20T10:57:23.255157001+00:00 stderr F I0120 10:57:23.253470 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2026-01-20T10:57:23.255157001+00:00 stderr F I0120 10:57:23.253607 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2026-01-20T10:57:23.255157001+00:00 stderr F I0120 10:57:23.253769 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2026-01-20T10:57:50.919211306+00:00 stderr F I0120 10:57:50.919133 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2026-01-20T10:57:50.919533226+00:00 stderr F I0120 10:57:50.919490 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2026-01-20T10:57:50.919673549+00:00 stderr F I0120 10:57:50.919646 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2026-01-20T10:57:50.919845004+00:00 stderr F I0120 10:57:50.919819 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2026-01-20T10:57:52.721817737+00:00 stderr F I0120 10:57:52.721742 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2026-01-20T10:57:52.722446524+00:00 stderr F I0120 10:57:52.722412 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2026-01-20T10:57:52.722655899+00:00 stderr F I0120 10:57:52.722627 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2026-01-20T10:57:52.722922696+00:00 stderr F I0120 10:57:52.722894 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.log
2025-08-13T20:08:13.208515559+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'
2025-08-13T20:08:13.220701708+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')'
2025-08-13T20:08:13.241290128+00:00 stderr F + '[' -n '' ']'
2025-08-13T20:08:13.250386589+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2
2025-08-13T20:08:13.608173907+00:00 stderr F I0813 20:08:13.607874 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:08:13.609914107+00:00 stderr F I0813 20:08:13.609835 1 observer_polling.go:159] Starting file observer
2025-08-13T20:08:13.624952348+00:00 stderr F I0813 20:08:13.624655 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88
2025-08-13T20:08:13.626584115+00:00 stderr F I0813 20:08:13.626491 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:14.664548094+00:00 stderr F I0813 20:08:14.662425 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.676873 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400
2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678838 1 maxinflight.go:145] "Initialized mutatingChan" len=200
2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678872 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400
2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678879 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200
2025-08-13T20:08:14.693006890+00:00 stderr F I0813 20:08:14.691169 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T20:08:14.693006890+00:00 stderr F I0813 20:08:14.692424 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2025-08-13T20:08:14.711465439+00:00 stderr F W0813 20:08:14.711350 1 builder.go:358] unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
2025-08-13T20:08:14.712233371+00:00 stderr F I0813 20:08:14.712027 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cluster-policy-controller-lock...
2025-08-13T20:08:14.713337843+00:00 stderr F I0813 20:08:14.713274 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T20:08:14.713531348+00:00 stderr F I0813 20:08:14.713444 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T20:08:14.713655132+00:00 stderr F I0813 20:08:14.713580 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:08:14.713655132+00:00 stderr F I0813 20:08:14.713635 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:08:14.713698223+00:00 stderr F I0813 20:08:14.713576 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
2025-08-13T20:08:14.714049543+00:00 stderr F I0813 20:08:14.714017 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:14.716051860+00:00 stderr F I0813 20:08:14.714734 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:08:14.716264537+00:00 stderr F I0813 20:08:14.716236 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:08:14.717059509+00:00 stderr F I0813 20:08:14.716760 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.71671728 +0000 UTC))"
2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.717945 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.717855672 +0000 UTC))"
2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718004 1 secure_serving.go:213] Serving securely on [::]:10357
2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718032 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated
2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718048 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T20:08:14.720447926+00:00 stderr F I0813 20:08:14.720412 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:14.724976626+00:00 stderr F I0813 20:08:14.724881 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:14.729246629+00:00 stderr F I0813 20:08:14.729159 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:14.729419954+00:00 stderr F I0813 20:08:14.729391 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cluster-policy-controller-lock
2025-08-13T20:08:14.730022001+00:00 stderr F I0813 20:08:14.729949 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cluster-policy-controller-lock", UID:"bb093f33-8655-47de-8ab9-7ce6fce91fc7", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_2c95bd88-c329-4928-a119-89e7d6436f66 became leader
2025-08-13T20:08:14.738369680+00:00 stderr F I0813 20:08:14.738315 1 policy_controller.go:78] Starting "openshift.io/cluster-quota-reconciliation"
2025-08-13T20:08:14.814328098+00:00 stderr F I0813 20:08:14.814216 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:08:14.814499683+00:00 stderr F I0813 20:08:14.814217 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T20:08:14.814741960+00:00 stderr F I0813 20:08:14.814693 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.814646487 +0000 UTC))"
2025-08-13T20:08:14.815283746+00:00 stderr F I0813 20:08:14.815214 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.815180533 +0000 UTC))"
2025-08-13T20:08:14.816965214+00:00 stderr F I0813 20:08:14.816924 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:08:14.825132808+00:00 stderr F I0813 20:08:14.825025 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.824993454 +0000 UTC))"
2025-08-13T20:08:14.825274092+00:00 stderr F I0813 20:08:14.825210 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:14.82519215 +0000 UTC))"
2025-08-13T20:08:14.825289022+00:00 stderr F I0813 20:08:14.825274 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:14.825240061 +0000 UTC))"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825400 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825458 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825487 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825506 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825538 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825557 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825576 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org"
2025-08-13T20:08:14.825623412+00:00 stderr F I0813 20:08:14.825592 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com"
2025-08-13T20:08:14.825623412+00:00 stderr F I0813 20:08:14.825609 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io"
2025-08-13T20:08:14.825634042+00:00 stderr F I0813 20:08:14.825626 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com"
2025-08-13T20:08:14.825697544+00:00 stderr F I0813 20:08:14.825642 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com"
2025-08-13T20:08:14.825697544+00:00 stderr F I0813 20:08:14.825692 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825709 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825834 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:14.825282982 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825883 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:14.825864139 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825955 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825940541 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825974 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825963172 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825989 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825979022 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.826007 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825994963 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.826023 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:14.826012713 +0000 UTC))"
2025-08-13T20:08:14.826067965+00:00 stderr F I0813 20:08:14.826049 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:14.826030704 +0000 UTC))"
2025-08-13T20:08:14.826133267+00:00 stderr F I0813 20:08:14.826072 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.826060315 +0000 UTC))"
2025-08-13T20:08:14.826567179+00:00 stderr F I0813 20:08:14.826496 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.826462946 +0000 UTC))"
2025-08-13T20:08:14.826931140+00:00 stderr F I0813 20:08:14.826822 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.826762155 +0000 UTC))"
2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830158 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com"
2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830227 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com"
2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830250 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io"
2025-08-13T20:08:14.830307426+00:00 stderr F I0813 20:08:14.830267 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com"
2025-08-13T20:08:14.830307426+00:00 stderr F I0813 20:08:14.830286 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com"
2025-08-13T20:08:14.830307426+00:00 stderr F I0813 20:08:14.830302 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io"
2025-08-13T20:08:14.830329687+00:00 stderr F I0813 20:08:14.830320 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io"
2025-08-13T20:08:14.831114599+00:00 stderr F I0813 20:08:14.830389 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com"
2025-08-13T20:08:14.831114599+00:00 stderr F I0813 20:08:14.830482 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831415 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831478 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831498 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831521 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831550 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831566 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831597 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831616 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831714 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831735 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831753 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831768 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831948 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831969 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831987 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832034 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832076 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832113 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832154 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832179 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832198 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832218 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832234 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832250 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832268 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832292 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832331 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832350 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832390 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832407 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832432 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832447 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832463 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832480 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832496 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832511 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832546 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832583 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832599 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832615 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832641 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832657 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832713 1 policy_controller.go:88] Started "openshift.io/cluster-quota-reconciliation"
2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832723 1 policy_controller.go:78] Starting "openshift.io/cluster-csr-approver"
2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833019 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller
2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833093 1 reconciliation_controller.go:140] Starting the cluster quota reconciliation controller
2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833272 1 resource_quota_monitor.go:305] "QuotaMonitor running"
2025-08-13T20:08:14.876513471+00:00 stderr F I0813 20:08:14.876383 1 policy_controller.go:88] Started "openshift.io/cluster-csr-approver"
2025-08-13T20:08:14.876513471+00:00 stderr F I0813 20:08:14.876440 1 policy_controller.go:78] Starting "openshift.io/podsecurity-admission-label-syncer"
2025-08-13T20:08:14.877069787+00:00 stderr F I0813 20:08:14.877029 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_csr-approver-controller
2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.883489 1 reconciliation_controller.go:207] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []
2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885290 1 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer"
2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885311 1 policy_controller.go:78] Starting "openshift.io/privileged-namespaces-psa-label-syncer"
2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885539 1 base_controller.go:67] Waiting for caches to sync for pod-security-admission-label-synchronization-controller
2025-08-13T20:08:14.889977587+00:00 stderr F I0813 20:08:14.889608 1 policy_controller.go:88] Started "openshift.io/privileged-namespaces-psa-label-syncer"
2025-08-13T20:08:14.889977587+00:00 stderr F I0813 20:08:14.889639 1 policy_controller.go:78] Starting "openshift.io/namespace-security-allocation"
2025-08-13T20:08:14.890300166+00:00 stderr F I0813 20:08:14.890259 1 privileged_namespaces_controller.go:75] "Starting"
controller="privileged-namespaces-psa-label-syncer" 2025-08-13T20:08:14.890355978+00:00 stderr F I0813 20:08:14.890339 1 shared_informer.go:311] Waiting for caches to sync for privileged-namespaces-psa-label-syncer 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904851 1 policy_controller.go:88] Started "openshift.io/namespace-security-allocation" 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904911 1 policy_controller.go:78] Starting "openshift.io/resourcequota" 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904982 1 base_controller.go:67] Waiting for caches to sync for namespace-security-allocation-controller 2025-08-13T20:08:15.162401148+00:00 stderr F I0813 20:08:15.161553 1 policy_controller.go:88] Started "openshift.io/resourcequota" 2025-08-13T20:08:15.162401148+00:00 stderr F I0813 20:08:15.161594 1 policy_controller.go:91] Started Origin Controllers 2025-08-13T20:08:15.169223483+00:00 stderr F I0813 20:08:15.162368 1 resource_quota_controller.go:294] "Starting resource quota controller" 2025-08-13T20:08:15.169223483+00:00 stderr F I0813 20:08:15.169155 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:15.169337147+00:00 stderr F I0813 20:08:15.169293 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176319 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176639 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176918 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.177210 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.216483758+00:00 stderr F I0813 20:08:15.211262 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.231770197+00:00 stderr F I0813 20:08:15.231706 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.234820014+00:00 stderr F I0813 20:08:15.234745 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.237585623+00:00 stderr F I0813 20:08:15.237558 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.244701137+00:00 stderr F I0813 20:08:15.237756 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.294877586+00:00 stderr F I0813 20:08:15.242155 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.294877586+00:00 stderr F I0813 20:08:15.288151 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295008890+00:00 stderr F I0813 20:08:15.288479 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295342139+00:00 stderr F I0813 20:08:15.295285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295398661+00:00 stderr F I0813 20:08:15.295372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295510714+00:00 stderr F I0813 20:08:15.294031 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295674509+00:00 stderr F I0813 20:08:15.295656 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.298247583+00:00 stderr F I0813 20:08:15.298219 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.298883051+00:00 stderr F I0813 20:08:15.298858 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.345075025+00:00 stderr F I0813 20:08:15.345008 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.352814607+00:00 stderr F I0813 20:08:15.352734 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.377367631+00:00 stderr F I0813 20:08:15.377297 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.380231 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.383691 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.387926 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.389594 1 
base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_csr-approver-controller 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.427446 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 2025-08-13T20:08:15.445339760+00:00 stderr F I0813 20:08:15.445274 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.484563625+00:00 stderr F I0813 20:08:15.484393 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.489766 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [image.openshift.io/v1, Resource=imagestreams], removed: []" 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.489952 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.488683 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.492547094+00:00 stderr F I0813 20:08:15.492488 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.494281263+00:00 stderr F I0813 20:08:15.494227 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.494741616+00:00 stderr F I0813 20:08:15.494715 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "basic-user" not found 2025-08-13T20:08:15.496173787+00:00 stderr F I0813 20:08:15.496128 1 trace.go:236] Trace[1789160028]: "DeltaFIFO Pop Process" 
ID:cluster-autoscaler,Depth:186,Reason:slow event handlers blocking the queue (13-Aug-2025 20:08:15.383) (total time: 112ms): 2025-08-13T20:08:15.496173787+00:00 stderr F Trace[1789160028]: [112.348001ms] [112.348001ms] END 2025-08-13T20:08:15.502290473+00:00 stderr F I0813 20:08:15.502258 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506106 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506151 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-provisioner-cfg" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506159 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-provisioner-cfg" not found 2025-08-13T20:08:15.506199455+00:00 stderr F I0813 20:08:15.506179 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:bootstrap-signer" not found 2025-08-13T20:08:15.506199455+00:00 stderr F I0813 20:08:15.506191 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506198 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io 
"extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506207 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506214 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506236806+00:00 stderr F I0813 20:08:15.506222 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506236806+00:00 stderr F I0813 20:08:15.506229 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506249946+00:00 stderr F I0813 20:08:15.506238 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506249946+00:00 stderr F I0813 20:08:15.506244 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found 2025-08-13T20:08:15.506262537+00:00 stderr F I0813 20:08:15.506252 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found 2025-08-13T20:08:15.506262537+00:00 stderr F I0813 20:08:15.506259 1 sccrolecache.go:466] failed to retrieve a 
role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:bootstrap-signer" not found 2025-08-13T20:08:15.506274767+00:00 stderr F I0813 20:08:15.506267 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:cloud-provider" not found 2025-08-13T20:08:15.506285057+00:00 stderr F I0813 20:08:15.506276 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:token-cleaner" not found 2025-08-13T20:08:15.506297818+00:00 stderr F I0813 20:08:15.506291 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506307568+00:00 stderr F I0813 20:08:15.506300 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found 2025-08-13T20:08:15.506319168+00:00 stderr F I0813 20:08:15.506306 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found 2025-08-13T20:08:15.506319168+00:00 stderr F I0813 20:08:15.506315 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-controller-manager" not found 2025-08-13T20:08:15.506339989+00:00 stderr F I0813 20:08:15.506321 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506339989+00:00 stderr F 
I0813 20:08:15.506335 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506355549+00:00 stderr F I0813 20:08:15.506346 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506367130+00:00 stderr F I0813 20:08:15.506359 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506380 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-approver" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506408 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506423 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-samples-operator" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506429 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506452792+00:00 stderr F I0813 20:08:15.506444 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "csi-snapshot-controller-operator-role" not found 2025-08-13T20:08:15.506467723+00:00 stderr F I0813 20:08:15.506458 1 sccrolecache.go:466] failed to retrieve a role for a 
rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506478923+00:00 stderr F I0813 20:08:15.506471 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-configmap-reader" not found 2025-08-13T20:08:15.506488833+00:00 stderr F I0813 20:08:15.506479 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506488833+00:00 stderr F I0813 20:08:15.506485 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-public" not found 2025-08-13T20:08:15.506501614+00:00 stderr F I0813 20:08:15.506493 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.506513854+00:00 stderr F I0813 20:08:15.506500 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-approver" not found 2025-08-13T20:08:15.506513854+00:00 stderr F I0813 20:08:15.506509 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-network-public-role" not found 2025-08-13T20:08:15.506532774+00:00 stderr F I0813 20:08:15.506521 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:oauth-servercert-trust" not found 2025-08-13T20:08:15.506532774+00:00 stderr F I0813 20:08:15.506528 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: 
role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506548415+00:00 stderr F I0813 20:08:15.506540 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "coreos-pull-secret-reader" not found 2025-08-13T20:08:15.506560575+00:00 stderr F I0813 20:08:15.506549 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506560575+00:00 stderr F I0813 20:08:15.506555 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "ingress-operator" not found 2025-08-13T20:08:15.506572646+00:00 stderr F I0813 20:08:15.506565 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.506588206+00:00 stderr F I0813 20:08:15.506578 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506600286+00:00 stderr F I0813 20:08:15.506585 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506612357+00:00 stderr F I0813 20:08:15.506599 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-user-settings-admin" not found 2025-08-13T20:08:15.506624287+00:00 stderr F I0813 20:08:15.506611 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 
2025-08-13T20:08:15.506624287+00:00 stderr F I0813 20:08:15.506620 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506639688+00:00 stderr F I0813 20:08:15.506632 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506651758+00:00 stderr F I0813 20:08:15.506644 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506667328+00:00 stderr F I0813 20:08:15.506658 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-controller-manager" not found 2025-08-13T20:08:15.506679559+00:00 stderr F I0813 20:08:15.506664 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "dns-operator" not found 2025-08-13T20:08:15.506679559+00:00 stderr F I0813 20:08:15.506674 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506747431+00:00 stderr F I0813 20:08:15.506699 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506747431+00:00 stderr F I0813 20:08:15.506733 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506768191+00:00 stderr F I0813 20:08:15.506746 1 
sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506768191+00:00 stderr F I0813 20:08:15.506762 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-image-registry-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506770 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "node-ca" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506827 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506852 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-creating-openshift-controller-manager" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506861 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-creating-route-controller-manager" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506873 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "ingress-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506880 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506921 1 sccrolecache.go:466] 
failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506939 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506950 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506965 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506978 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506991 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-election-lock-cluster-policy-controller" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506998 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507009 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507022 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve 
role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-listing-configmaps" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507039 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-autoscaler" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507045 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-autoscaler-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507053 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "control-plane-machine-set-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507058 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507066 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507075 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s-cluster-autoscaler-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507083 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s-machine-api-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507095 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't 
retrieve role from role ref: role.rbac.authorization.k8s.io "machine-config-daemon" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507107 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "mcc-prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507115 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "mcd-prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507122 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507134 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "marketplace-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507141 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-marketplace-metrics" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507153 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-monitoring-operator-alert-customization" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507161 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507173 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: 
role.rbac.authorization.k8s.io "whereabouts-cni" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507179 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507190 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "network-diagnostics" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507197 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507209 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "network-node-identity-leases" not found 2025-08-13T20:08:15.509924612+00:00 stderr F E0813 20:08:15.507232 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507241 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507300 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:node-config-reader" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507312 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 
2025-08-13T20:08:15.509924612+00:00 stderr P I0813 20:08:15.507330 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "col 2025-08-13T20:08:15.509993354+00:00 stderr F lect-profiles" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507338 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "operator-lifecycle-manager-metrics" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507344 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "packageserver" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507354 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "packageserver-service-cert" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507374 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-ovn-kubernetes-control-plane-limited" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507382 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-ovn-kubernetes-node-limited" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507389 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507401 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io 
"prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507437 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-route-controller-manager" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507453 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507471 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507490 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "copied-csv-viewer" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507498 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "shared-resource-viewer" not found 2025-08-13T20:08:15.511094655+00:00 stderr F I0813 20:08:15.510971 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.512133275+00:00 stderr F E0813 20:08:15.511975 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-08-13T20:08:15.514202474+00:00 stderr F E0813 20:08:15.514156 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-08-13T20:08:15.524208091+00:00 stderr F I0813 
20:08:15.524124 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.532220851+00:00 stderr F I0813 20:08:15.532160 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.545559513+00:00 stderr F I0813 20:08:15.545469 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.556140527+00:00 stderr F I0813 20:08:15.556044 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.583228713+00:00 stderr F I0813 20:08:15.583171 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.585195870+00:00 stderr F I0813 20:08:15.585160 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.596064472+00:00 stderr F I0813 20:08:15.595457 1 shared_informer.go:318] Caches are synced for resource quota 2025-08-13T20:08:15.596064472+00:00 stderr F I0813 20:08:15.595523 1 resource_quota_controller.go:496] "synced quota controller" 2025-08-13T20:08:15.752113946+00:00 stderr F I0813 20:08:15.752055 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.766668683+00:00 stderr F I0813 20:08:15.766543 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.791997529+00:00 stderr F I0813 20:08:15.791017 1 shared_informer.go:318] Caches are synced for privileged-namespaces-psa-label-syncer 2025-08-13T20:08:15.805262569+00:00 stderr F I0813 20:08:15.805174 1 base_controller.go:73] Caches are synced for namespace-security-allocation-controller 
2025-08-13T20:08:15.805262569+00:00 stderr F I0813 20:08:15.805219 1 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ... 2025-08-13T20:08:15.805317911+00:00 stderr F I0813 20:08:15.805294 1 namespace_scc_allocation_controller.go:111] Repairing SCC UID Allocations 2025-08-13T20:08:15.949999609+00:00 stderr F I0813 20:08:15.949577 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.970305261+00:00 stderr F I0813 20:08:15.968108 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.167957588+00:00 stderr F I0813 20:08:16.167440 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.343548593+00:00 stderr F I0813 20:08:16.343039 1 request.go:697] Waited for 1.173256768s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/resourcequotas?limit=500&resourceVersion=0 2025-08-13T20:08:16.346233830+00:00 stderr F I0813 20:08:16.346152 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.367920662+00:00 stderr F I0813 20:08:16.366461 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.370045532+00:00 stderr F I0813 20:08:16.369989 1 shared_informer.go:318] Caches are synced for resource quota 2025-08-13T20:08:16.508977916+00:00 stderr F I0813 20:08:16.508543 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.549842197+00:00 stderr F I0813 20:08:16.549722 1 reflector.go:351] Caches populated for *v1.Endpoints from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.568407570+00:00 stderr F I0813 20:08:16.568294 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.766668494+00:00 stderr F I0813 20:08:16.766564 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.945633085+00:00 stderr F I0813 20:08:16.945525 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.967295166+00:00 stderr F I0813 20:08:16.967193 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.086702110+00:00 stderr F I0813 20:08:17.085952 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.158487008+00:00 stderr F I0813 20:08:17.157991 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.172618403+00:00 stderr F W0813 20:08:17.172277 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:08:17.172618403+00:00 stderr F I0813 20:08:17.172391 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.181821517+00:00 stderr F W0813 20:08:17.181359 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:08:17.349080902+00:00 stderr F I0813 20:08:17.346494 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.363956099+00:00 stderr F I0813 20:08:17.363068 1 request.go:697] Waited 
for 2.191471622s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/servicemonitors?limit=500&resourceVersion=0 2025-08-13T20:08:17.369678783+00:00 stderr F I0813 20:08:17.369557 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.567909807+00:00 stderr F I0813 20:08:17.567537 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.577176012+00:00 stderr F I0813 20:08:17.577067 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.586982193+00:00 stderr F I0813 20:08:17.586130 1 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller 2025-08-13T20:08:17.586982193+00:00 stderr F I0813 20:08:17.586172 1 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ... 
2025-08-13T20:08:17.768841898+00:00 stderr F I0813 20:08:17.768711 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.973583317+00:00 stderr F I0813 20:08:17.973528 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.980504715+00:00 stderr F I0813 20:08:17.980440 1 namespace_scc_allocation_controller.go:116] Repair complete 2025-08-13T20:08:18.168683820+00:00 stderr F I0813 20:08:18.168557 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.365096742+00:00 stderr F I0813 20:08:18.363927 1 request.go:697] Waited for 3.191556604s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/operators.coreos.com/v2/operatorconditions?limit=500&resourceVersion=0 2025-08-13T20:08:18.368047266+00:00 stderr F I0813 20:08:18.367957 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.567371701+00:00 stderr F I0813 20:08:18.567211 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.766389487+00:00 stderr F I0813 20:08:18.766128 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.966073862+00:00 stderr F I0813 20:08:18.965964 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.180138490+00:00 stderr F I0813 20:08:19.179751 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.364296150+00:00 
stderr F I0813 20:08:19.364147 1 request.go:697] Waited for 4.191066971s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0 2025-08-13T20:08:19.379091444+00:00 stderr F I0813 20:08:19.379038 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.568422642+00:00 stderr F I0813 20:08:19.568366 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.766052129+00:00 stderr F I0813 20:08:19.765998 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.967672679+00:00 stderr F I0813 20:08:19.967043 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.168539889+00:00 stderr F I0813 20:08:20.168358 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.366753172+00:00 stderr F I0813 20:08:20.366609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.563195074+00:00 stderr F I0813 20:08:20.563064 1 request.go:697] Waited for 5.389329947s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/operators.coreos.com/v1/operatorgroups?limit=500&resourceVersion=0 2025-08-13T20:08:20.566910400+00:00 stderr F I0813 20:08:20.566494 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.767176112+00:00 stderr F I0813 20:08:20.767083 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.970139491+00:00 stderr F I0813 20:08:20.970051 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.166165592+00:00 stderr F I0813 20:08:21.166100 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.366337351+00:00 stderr F I0813 20:08:21.366152 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.564432569+00:00 stderr F I0813 20:08:21.564369 1 request.go:697] Waited for 6.389277006s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs?limit=500&resourceVersion=0 2025-08-13T20:08:21.575270540+00:00 stderr F I0813 20:08:21.575083 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.766876404+00:00 stderr F I0813 20:08:21.766682 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.966526208+00:00 stderr F I0813 20:08:21.965911 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.970859722+00:00 stderr F I0813 20:08:21.970506 1 reconciliation_controller.go:224] synced cluster resource quota controller 2025-08-13T20:08:22.034338812+00:00 stderr F I0813 20:08:22.034242 1 reconciliation_controller.go:149] Caches are synced 2025-08-13T20:08:44.520618251+00:00 stderr F E0813 20:08:44.520526 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error 
on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=32866&timeout=5m7s&timeoutSeconds=307&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:45.257388125+00:00 stderr F E0813 20:08:45.257309 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=32866&timeout=5m53s&timeoutSeconds=353&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:45.783930832+00:00 stderr F E0813 20:08:45.783822 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m30s&timeoutSeconds=390&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:47.641227111+00:00 stderr F E0813 20:08:47.639719 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?allowWatchBookmarks=true&resourceVersion=32882&timeout=5m56s&timeoutSeconds=356&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: 
connection refused") has prevented the request from succeeding 2025-08-13T20:08:47.889473979+00:00 stderr F E0813 20:08:47.889166 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=32866&timeout=8m34s&timeoutSeconds=514&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:48.277980608+00:00 stderr F E0813 20:08:48.277870 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=32887&timeout=6m7s&timeoutSeconds=367&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:48.487870096+00:00 stderr F E0813 20:08:48.485466 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=32881&timeout=9m9s&timeoutSeconds=549&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:58.353763679+00:00 stderr F I0813 20:08:58.353644 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-08-13T20:08:58.468278553+00:00 stderr F I0813 20:08:58.468098 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:58.701085768+00:00 stderr F I0813 20:08:58.700666 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:58.937601149+00:00 stderr F I0813 20:08:58.937479 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:59.881007247+00:00 stderr F I0813 20:08:59.880767 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:59.987227123+00:00 stderr F I0813 20:08:59.987057 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.003355905+00:00 stderr F I0813 20:09:00.003258 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.150216806+00:00 stderr F I0813 20:09:00.150103 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.425437117+00:00 stderr F I0813 20:09:00.425289 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.492074917+00:00 stderr F I0813 20:09:00.491969 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.705735963+00:00 stderr F I0813 20:09:00.705607 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.908423513+00:00 stderr F I0813 20:09:00.908259 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:01.307012931+00:00 stderr F I0813 20:09:01.306948 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:01.332152642+00:00 stderr F I0813 20:09:01.332074 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:01.943401017+00:00 stderr F I0813 20:09:01.943318 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.165926027+00:00 stderr F I0813 20:09:02.165756 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.170353624+00:00 stderr F I0813 20:09:02.169458 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.200222970+00:00 stderr F I0813 20:09:02.198436 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.243060378+00:00 stderr F I0813 20:09:02.243000 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.276498097+00:00 stderr F I0813 20:09:02.276377 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.413029791+00:00 stderr F I0813 20:09:02.412912 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.474952837+00:00 stderr F I0813 20:09:02.471915 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.480103634+00:00 
stderr F I0813 20:09:02.479473 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.714094713+00:00 stderr F I0813 20:09:02.713322 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.723976247+00:00 stderr F I0813 20:09:02.723848 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.241760852+00:00 stderr F I0813 20:09:03.241565 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.284594570+00:00 stderr F I0813 20:09:03.284437 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.503286530+00:00 stderr F I0813 20:09:03.503146 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.604591535+00:00 stderr F I0813 20:09:03.604525 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.722984019+00:00 stderr F I0813 20:09:03.722920 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.850325700+00:00 stderr F I0813 20:09:03.850225 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.850922437+00:00 stderr F I0813 20:09:03.850753 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:09:03.851303858+00:00 stderr F 
I0813 20:09:03.851165 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:09:04.098252009+00:00 stderr F I0813 20:09:04.097733 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.152472923+00:00 stderr F I0813 20:09:04.152204 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.225751164+00:00 stderr F I0813 20:09:04.225694 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.817556751+00:00 stderr F I0813 20:09:04.817408 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.831556452+00:00 stderr F W0813 20:09:04.831450 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:09:04.831616414+00:00 stderr F I0813 20:09:04.831588 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.840324364+00:00 stderr F W0813 20:09:04.840265 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:09:04.859071671+00:00 stderr F I0813 20:09:04.858997 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.894092585+00:00 stderr F I0813 20:09:04.893961 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.441834341+00:00 stderr F I0813 20:09:05.441682 1 
reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.474877109+00:00 stderr F I0813 20:09:05.474730 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.562097250+00:00 stderr F I0813 20:09:05.562031 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.640134787+00:00 stderr F I0813 20:09:05.640072 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.838480114+00:00 stderr F I0813 20:09:05.838389 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.990681308+00:00 stderr F I0813 20:09:05.990602 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.433181395+00:00 stderr F I0813 20:09:06.433114 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.474216631+00:00 stderr F I0813 20:09:06.474129 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.505855978+00:00 stderr F I0813 20:09:06.505680 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.513476227+00:00 stderr F I0813 20:09:06.513410 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.558408355+00:00 stderr F I0813 20:09:06.558346 1 reflector.go:351] Caches populated for *v1.ReplicaSet from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.571862281+00:00 stderr F I0813 20:09:06.571677 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.699625664+00:00 stderr F I0813 20:09:06.699451 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.951436104+00:00 stderr F I0813 20:09:06.951223 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.370529009+00:00 stderr F I0813 20:09:07.370423 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.459069147+00:00 stderr F I0813 20:09:07.458924 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.471986478+00:00 stderr F I0813 20:09:07.471871 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.600928745+00:00 stderr F I0813 20:09:07.600544 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.149093319+00:00 stderr F I0813 20:09:08.149020 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.218213131+00:00 stderr F I0813 20:09:08.218130 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.324632872+00:00 stderr F I0813 20:09:08.324517 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.624968103+00:00 stderr F I0813 20:09:08.624869 1 
reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.710337260+00:00 stderr F I0813 20:09:08.710168 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.856873891+00:00 stderr F I0813 20:09:08.856737 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.951410252+00:00 stderr F I0813 20:09:08.951241 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.958402552+00:00 stderr F I0813 20:09:08.958285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:09.342542026+00:00 stderr F I0813 20:09:09.342458 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:09.516463073+00:00 stderr F I0813 20:09:09.516330 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:09.586271414+00:00 stderr F I0813 20:09:09.586068 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:10.136190901+00:00 stderr F I0813 20:09:10.136083 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:10.477340542+00:00 stderr F I0813 20:09:10.476685 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:10.552882668+00:00 stderr F I0813 20:09:10.552464 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:11.762035674+00:00 stderr F I0813 20:09:11.747513 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:11.815543008+00:00 stderr F I0813 20:09:11.813580 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:11.818041440+00:00 stderr F I0813 20:09:11.818002 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:09:11.818113032+00:00 stderr F I0813 20:09:11.818093 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:09:12.066879514+00:00 stderr F I0813 20:09:12.065303 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:12.315147813+00:00 stderr F I0813 20:09:12.303247 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:12.524755562+00:00 stderr F I0813 20:09:12.524662 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:12.720248007+00:00 stderr F I0813 20:09:12.720183 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.244186329+00:00 stderr F I0813 20:09:13.244048 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.257643605+00:00 stderr F I0813 20:09:13.257585 1 reflector.go:351] Caches 
populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.616384901+00:00 stderr F I0813 20:09:13.616285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.735556497+00:00 stderr F I0813 20:09:13.735460 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.984755492+00:00 stderr F I0813 20:09:13.984631 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:17:31.923443489+00:00 stderr F W0813 20:17:31.922743 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:19:03.872750719+00:00 stderr F I0813 20:19:03.872574 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:19:03.872750719+00:00 stderr F I0813 20:19:03.872691 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:19:11.823569241+00:00 stderr F I0813 20:19:11.823055 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:19:11.823691854+00:00 stderr F I0813 20:19:11.823534 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:23:32.941069167+00:00 stderr F W0813 20:23:32.940701 1 
warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:29:03.858920750+00:00 stderr F I0813 20:29:03.858613 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:29:03.858920750+00:00 stderr F I0813 20:29:03.858685 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:29:11.824287689+00:00 stderr F I0813 20:29:11.824144 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:29:11.824287689+00:00 stderr F I0813 20:29:11.824213 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:30:08.952423486+00:00 stderr F W0813 20:30:08.951886 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:38:14.959442937+00:00 stderr F W0813 20:38:14.959259 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:39:03.857190733+00:00 stderr F I0813 20:39:03.856948 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:39:03.857190733+00:00 stderr F I0813 20:39:03.857044 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve 
clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:39:11.824009178+00:00 stderr F I0813 20:39:11.823497 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:39:11.824009178+00:00 stderr F I0813 20:39:11.823576 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:42:36.401697921+00:00 stderr F I0813 20:42:36.401556 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.402032941+00:00 stderr F I0813 20:42:36.401959 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.452903 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453302 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453415 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453526 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453623 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453837 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454083 1 streamwatcher.go:111] 
Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454384 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454477 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454593 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454704 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454871 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454963 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.455028 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.455092 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455254165+00:00 stderr F I0813 20:42:36.455171 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455598315+00:00 stderr F I0813 20:42:36.455311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.508540511+00:00 stderr F I0813 20:42:36.476285 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509114978+00:00 stderr F 
I0813 20:42:36.476311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509639023+00:00 stderr F I0813 20:42:36.476325 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510044045+00:00 stderr F I0813 20:42:36.476337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510289942+00:00 stderr F I0813 20:42:36.476350 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510379984+00:00 stderr F I0813 20:42:36.476374 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510477887+00:00 stderr F I0813 20:42:36.476387 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510663292+00:00 stderr F I0813 20:42:36.476398 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510663292+00:00 stderr F I0813 20:42:36.476411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510860668+00:00 stderr F I0813 20:42:36.476423 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511060774+00:00 stderr F I0813 20:42:36.476435 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511280550+00:00 stderr F I0813 20:42:36.476445 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511485186+00:00 stderr F I0813 20:42:36.476728 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511736753+00:00 stderr F I0813 20:42:36.476742 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.531584186+00:00 stderr F I0813 20:42:36.476757 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531934476+00:00 stderr F I0813 20:42:36.476768 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.532140292+00:00 stderr F I0813 20:42:36.476838 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.532311397+00:00 stderr F I0813 20:42:36.476850 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.532883403+00:00 stderr F I0813 20:42:36.476862 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537875467+00:00 stderr F I0813 20:42:36.476872 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538035062+00:00 stderr F I0813 20:42:36.476884 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538194606+00:00 stderr F I0813 20:42:36.476894 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538423403+00:00 stderr F I0813 20:42:36.476905 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538639449+00:00 stderr F I0813 20:42:36.476915 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538853175+00:00 stderr F I0813 20:42:36.476927 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539185355+00:00 stderr F I0813 20:42:36.476950 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539359230+00:00 stderr F I0813 20:42:36.476961 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540034479+00:00 stderr F I0813 20:42:36.476973 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540474832+00:00 stderr F I0813 20:42:36.476984 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542741847+00:00 stderr F I0813 20:42:36.476995 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542931643+00:00 stderr F I0813 20:42:36.477110 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543525590+00:00 stderr F I0813 20:42:36.477126 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543853859+00:00 stderr F I0813 20:42:36.484817 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.549611285+00:00 stderr F I0813 20:42:36.484846 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.549737449+00:00 stderr F I0813 20:42:36.484864 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550030577+00:00 stderr F I0813 20:42:36.484875 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550115900+00:00 stderr F I0813 20:42:36.484887 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550366187+00:00 stderr F I0813 20:42:36.484898 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550463350+00:00 stderr F I0813 20:42:36.484911 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554275140+00:00 stderr F I0813 20:42:36.484922 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554705332+00:00 stderr F I0813 20:42:36.484934 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554705332+00:00 stderr F I0813 20:42:36.484945 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554825256+00:00 stderr F I0813 20:42:36.484958 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555266918+00:00 stderr F I0813 20:42:36.485013 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555976549+00:00 stderr F I0813 20:42:36.485027 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556657328+00:00 stderr F I0813 20:42:36.485046 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556854264+00:00 stderr F I0813 20:42:36.485063 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557073040+00:00 stderr F I0813 20:42:36.485074 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557320978+00:00 stderr F I0813 20:42:36.485085 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557464872+00:00 stderr F I0813 20:42:36.485095 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557752930+00:00 stderr F I0813 20:42:36.485108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557977447+00:00 stderr F I0813 20:42:36.485119 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.558130701+00:00 stderr F I0813 20:42:36.485138 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558316766+00:00 stderr F I0813 20:42:36.485148 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558438440+00:00 stderr F I0813 20:42:36.485159 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558550093+00:00 stderr F I0813 20:42:36.485173 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568607083+00:00 stderr F I0813 20:42:36.485183 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.570719554+00:00 stderr F I0813 20:42:36.485201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575013368+00:00 stderr F I0813 20:42:36.485212 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575123871+00:00 stderr F I0813 20:42:36.485257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575261375+00:00 stderr F I0813 20:42:36.485276 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.043289618+00:00 stderr F I0813 20:42:37.042098 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:37.044261266+00:00 stderr F I0813 20:42:37.044132 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:37.045152852+00:00 stderr F I0813 20:42:37.044963 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:37.046348896+00:00 stderr F I0813 20:42:37.046268 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:37.046475130+00:00 stderr F I0813 20:42:37.046323 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:37.046685336+00:00 stderr F I0813 20:42:37.046626 1 reconciliation_controller.go:159] Shutting down ClusterQuotaReconcilationController 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047620 1 secure_serving.go:258] Stopped listening on [::]:10357 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047658 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047672 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047765 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047186 1 base_controller.go:172] Shutting down pod-security-admission-label-synchronization-controller ... 
2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048472 1 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller 2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048504 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048523 1 base_controller.go:172] Shutting down namespace-security-allocation-controller ... 2025-08-13T20:42:37.048654573+00:00 stderr F I0813 20:42:37.048574 1 privileged_namespaces_controller.go:85] "Shutting down" controller="privileged-namespaces-psa-label-syncer" 2025-08-13T20:42:37.048654573+00:00 stderr F I0813 20:42:37.048640 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:37.048713745+00:00 stderr F I0813 20:42:37.048670 1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_csr-approver-controller ... 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048937 1 base_controller.go:114] Shutting down worker of namespace-security-allocation-controller controller ... 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048968 1 base_controller.go:104] All namespace-security-allocation-controller workers have been terminated 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048544 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048996 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.049010 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 
2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.049017 1 base_controller.go:104] All WebhookAuthenticatorCertApprover_csr-approver-controller workers have been terminated 2025-08-13T20:42:37.049174998+00:00 stderr F I0813 20:42:37.049126 1 resource_quota_monitor.go:339] "QuotaMonitor stopped monitors" stopped=72 total=72 2025-08-13T20:42:37.049174998+00:00 stderr F I0813 20:42:37.049155 1 resource_quota_monitor.go:340] "QuotaMonitor stopping" 2025-08-13T20:42:37.049209279+00:00 stderr F I0813 20:42:37.049130 1 resource_quota_monitor.go:339] "QuotaMonitor stopped monitors" stopped=1 total=1 2025-08-13T20:42:37.049356803+00:00 stderr F I0813 20:42:37.049337 1 resource_quota_monitor.go:340] "QuotaMonitor stopping" 2025-08-13T20:42:37.049442716+00:00 stderr F I0813 20:42:37.049427 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:37.049846687+00:00 stderr F I0813 20:42:37.049757 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:37.050170747+00:00 stderr F I0813 20:42:37.050063 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:42:37.050614049+00:00 stderr F I0813 20:42:37.050553 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050637 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050666 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050675 1 reconciliation_controller.go:367] resource quota controller worker shutting down 
2025-08-13T20:42:37.050695842+00:00 stderr F I0813 20:42:37.050683 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050711832+00:00 stderr F I0813 20:42:37.050702 1 base_controller.go:114] Shutting down worker of pod-security-admission-label-synchronization-controller controller ... 2025-08-13T20:42:37.050723803+00:00 stderr F I0813 20:42:37.050714 1 base_controller.go:104] All pod-security-admission-label-synchronization-controller workers have been terminated 2025-08-13T20:42:37.051066102+00:00 stderr F I0813 20:42:37.050975 1 resource_quota_controller.go:317] "Shutting down resource quota controller" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051005 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051076 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051096 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051102 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051106 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051116 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051119 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051131 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051132 1 resource_quota_controller.go:279] "resource quota 
controller worker shutting down" 2025-08-13T20:42:37.051166095+00:00 stderr F I0813 20:42:37.051141 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051166095+00:00 stderr F I0813 20:42:37.051146 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051279829+00:00 stderr F I0813 20:42:37.051203 1 builder.go:330] server exited 2025-08-13T20:42:37.058025303+00:00 stderr F E0813 20:42:37.056294 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock?timeout=1m47s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:37.058025303+00:00 stderr F W0813 20:42:37.056374 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.log
2026-01-20T10:45:18.650530309+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done' 2026-01-20T10:45:18.659566803+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')' 2026-01-20T10:45:18.664879829+00:00 stderr F + '[' -n '' ']' 2026-01-20T10:45:18.665951230+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2
2026-01-20T10:45:18.745044337+00:00 stderr F I0120 10:45:18.744806       1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2026-01-20T10:45:18.746460408+00:00 stderr F I0120 10:45:18.746398       1 observer_polling.go:159] Starting file observer
2026-01-20T10:45:18.748138548+00:00 stderr F I0120 10:45:18.748085       1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88
2026-01-20T10:45:18.749710963+00:00 stderr F I0120 10:45:18.749659       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2026-01-20T10:45:46.147699845+00:00 stderr F I0120 10:45:46.147599       1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2026-01-20T10:45:46.147792748+00:00 stderr F F0120 10:45:46.147689       1 cmd.go:170] failed checking apiserver connectivity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/7.log
2026-01-20T10:47:06.243225398+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'
2026-01-20T10:47:06.249085657+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')'
2026-01-20T10:47:06.255600644+00:00 stderr F + '[' -n '' ']'
2026-01-20T10:47:06.255727785+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2
2026-01-20T10:47:06.394546774+00:00 stderr F I0120 10:47:06.394390       1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2026-01-20T10:47:06.395622076+00:00 stderr F I0120 10:47:06.395575 1 observer_polling.go:159] Starting file observer 2026-01-20T10:47:06.400193140+00:00 stderr F I0120 10:47:06.400159 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88 2026-01-20T10:47:06.401798210+00:00 stderr F I0120 10:47:06.401399 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:47:23.144204297+00:00 stderr F I0120 10:47:23.144073 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:47:23.154130925+00:00 stderr F I0120 10:47:23.154007 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:47:23.154233658+00:00 stderr F I0120 10:47:23.154207 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:47:23.154315120+00:00 stderr F I0120 10:47:23.154292 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:47:23.154369482+00:00 stderr F I0120 10:47:23.154350 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2026-01-20T10:47:23.162258075+00:00 stderr F I0120 10:47:23.161726 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:47:23.162258075+00:00 stderr F I0120 10:47:23.161890 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:47:23.165467882+00:00 stderr F W0120 10:47:23.165264 1 builder.go:358] unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope 
2026-01-20T10:47:23.165467882+00:00 stderr F I0120 10:47:23.165428 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope 2026-01-20T10:47:23.165908964+00:00 stderr F I0120 10:47:23.165835 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cluster-policy-controller-lock... 2026-01-20T10:47:23.189686926+00:00 stderr F I0120 10:47:23.188729 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:47:23.189686926+00:00 stderr F I0120 10:47:23.188802 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:47:23.189686926+00:00 stderr F I0120 10:47:23.188863 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:47:23.189686926+00:00 stderr F I0120 10:47:23.188881 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:47:23.191915107+00:00 stderr F I0120 10:47:23.191876 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] 
issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2026-01-20 10:47:23.191739542 +0000 UTC))" 2026-01-20T10:47:23.192394219+00:00 stderr F I0120 10:47:23.192370 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:47:23.192432050+00:00 stderr F I0120 10:47:23.192422 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:47:23.193159271+00:00 stderr F I0120 10:47:23.193126 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:47:23.193722656+00:00 stderr F I0120 10:47:23.193643 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906027\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906026\" (2026-01-20 09:47:06 +0000 UTC to 2027-01-20 09:47:06 +0000 UTC (now=2026-01-20 10:47:23.193542951 +0000 UTC))" 2026-01-20T10:47:23.193761197+00:00 stderr F I0120 10:47:23.193736 1 secure_serving.go:213] Serving securely on [::]:10357 2026-01-20T10:47:23.193851229+00:00 stderr F I0120 10:47:23.193829 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:47:23.193935752+00:00 stderr F I0120 10:47:23.193912 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:47:23.197416255+00:00 stderr F I0120 10:47:23.197394 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:47:23.198102884+00:00 stderr F I0120 10:47:23.198050 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:47:23.198784663+00:00 stderr F I0120 10:47:23.198763 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:47:23.289259349+00:00 stderr F I0120 10:47:23.289214 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.295125 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.295170 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.295327 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.295300052 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.295618 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2026-01-20 10:47:23.29560091 +0000 UTC))" 
2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.295862 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906027\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906026\" (2026-01-20 09:47:06 +0000 UTC to 2027-01-20 09:47:06 +0000 UTC (now=2026-01-20 10:47:23.295849897 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296182 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:47:23.296166385 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296198 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:47:23.296189226 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296212 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:47:23.296202496 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296226 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:47:23.296216477 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296240 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.296230117 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296255 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.296245748 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296269 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.296259228 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296283 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.296272748 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296297 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:47:23.296287609 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296310 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:47:23.296301659 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296325 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:47:23.296315119 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296599 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2026-01-20 10:47:23.296586687 +0000 UTC))" 2026-01-20T10:47:23.297107612+00:00 stderr F I0120 10:47:23.296861 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906027\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906026\" (2026-01-20 09:47:06 +0000 UTC to 2027-01-20 09:47:06 +0000 UTC (now=2026-01-20 10:47:23.296850845 +0000 UTC))" 2026-01-20T10:50:26.923649355+00:00 stderr F I0120 10:50:26.923544 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cluster-policy-controller-lock 2026-01-20T10:50:26.923910272+00:00 stderr F I0120 10:50:26.923638 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cluster-policy-controller-lock", UID:"bb093f33-8655-47de-8ab9-7ce6fce91fc7", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41081", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_56af3a11-942b-4862-809e-ca3bc5e162c8 became leader 2026-01-20T10:50:29.078142368+00:00 stderr F I0120 10:50:29.076979 1 policy_controller.go:78] Starting "openshift.io/namespace-security-allocation" 2026-01-20T10:50:29.087712544+00:00 stderr F I0120 10:50:29.087684 1 policy_controller.go:88] Started "openshift.io/namespace-security-allocation" 2026-01-20T10:50:29.087754076+00:00 stderr F I0120 10:50:29.087743 1 policy_controller.go:78] Starting 
"openshift.io/resourcequota" 2026-01-20T10:50:29.087813307+00:00 stderr F I0120 10:50:29.087744 1 base_controller.go:67] Waiting for caches to sync for namespace-security-allocation-controller 2026-01-20T10:50:29.128087177+00:00 stderr F I0120 10:50:29.128033 1 policy_controller.go:88] Started "openshift.io/resourcequota" 2026-01-20T10:50:29.128146609+00:00 stderr F I0120 10:50:29.128135 1 policy_controller.go:78] Starting "openshift.io/cluster-quota-reconciliation" 2026-01-20T10:50:29.128484549+00:00 stderr F I0120 10:50:29.128431 1 resource_quota_controller.go:294] "Starting resource quota controller" 2026-01-20T10:50:29.128496729+00:00 stderr F I0120 10:50:29.128482 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2026-01-20T10:50:29.128540350+00:00 stderr F I0120 10:50:29.128516 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2026-01-20T10:50:29.139170287+00:00 stderr F I0120 10:50:29.139104 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [image.openshift.io/v1, Resource=imagestreams], removed: []" 2026-01-20T10:50:29.151318816+00:00 stderr F I0120 10:50:29.151236 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 2026-01-20T10:50:29.151318816+00:00 stderr F I0120 10:50:29.151301 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch" 2026-01-20T10:50:29.151428519+00:00 stderr F I0120 10:50:29.151391 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io" 2026-01-20T10:50:29.151480151+00:00 stderr F I0120 10:50:29.151451 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io" 2026-01-20T10:50:29.151535792+00:00 stderr F I0120 10:50:29.151503 1 resource_quota_monitor.go:224] "QuotaMonitor created object count 
evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io" 2026-01-20T10:50:29.151583554+00:00 stderr F I0120 10:50:29.151560 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com" 2026-01-20T10:50:29.151625045+00:00 stderr F I0120 10:50:29.151604 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges" 2026-01-20T10:50:29.151658266+00:00 stderr F I0120 10:50:29.151638 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2026-01-20T10:50:29.151690937+00:00 stderr F I0120 10:50:29.151670 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps" 2026-01-20T10:50:29.151729268+00:00 stderr F I0120 10:50:29.151709 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io" 2026-01-20T10:50:29.151771419+00:00 stderr F I0120 10:50:29.151747 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2026-01-20T10:50:29.151840671+00:00 stderr F I0120 10:50:29.151814 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts" 2026-01-20T10:50:29.151906273+00:00 stderr F I0120 10:50:29.151883 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2026-01-20T10:50:29.151958454+00:00 stderr F I0120 10:50:29.151935 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io" 2026-01-20T10:50:29.152010346+00:00 stderr F I0120 10:50:29.151989 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com" 2026-01-20T10:50:29.152075429+00:00 stderr F I0120 10:50:29.152037 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2026-01-20T10:50:29.152146051+00:00 stderr F I0120 10:50:29.152124 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com" 2026-01-20T10:50:29.152183252+00:00 stderr F I0120 10:50:29.152162 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2026-01-20T10:50:29.152215813+00:00 stderr F I0120 10:50:29.152196 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io" 2026-01-20T10:50:29.152249584+00:00 stderr F I0120 10:50:29.152229 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com" 2026-01-20T10:50:29.152286845+00:00 stderr F I0120 10:50:29.152267 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org" 2026-01-20T10:50:29.152322816+00:00 stderr F I0120 10:50:29.152302 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org" 2026-01-20T10:50:29.152356057+00:00 stderr F I0120 10:50:29.152336 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2026-01-20T10:50:29.152390678+00:00 stderr F I0120 10:50:29.152370 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io" 2026-01-20T10:50:29.152429119+00:00 stderr F I0120 10:50:29.152409 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com" 2026-01-20T10:50:29.152481690+00:00 stderr F I0120 10:50:29.152461 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" 
resource="prometheusrules.monitoring.coreos.com" 2026-01-20T10:50:29.152516381+00:00 stderr F I0120 10:50:29.152496 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io" 2026-01-20T10:50:29.152564923+00:00 stderr F I0120 10:50:29.152544 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io" 2026-01-20T10:50:29.152597414+00:00 stderr F I0120 10:50:29.152577 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2026-01-20T10:50:29.152682026+00:00 stderr F I0120 10:50:29.152653 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com" 2026-01-20T10:50:29.152705157+00:00 stderr F I0120 10:50:29.152689 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com" 2026-01-20T10:50:29.152752418+00:00 stderr F I0120 10:50:29.152732 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io" 2026-01-20T10:50:29.152792559+00:00 stderr F I0120 10:50:29.152771 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io" 2026-01-20T10:50:29.152843631+00:00 stderr F I0120 10:50:29.152820 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" 2026-01-20T10:50:29.152890612+00:00 stderr F I0120 10:50:29.152868 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org" 2026-01-20T10:50:29.152932373+00:00 stderr F I0120 10:50:29.152912 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" 
resource="controlplanemachinesets.machine.openshift.io" 2026-01-20T10:50:29.152968684+00:00 stderr F I0120 10:50:29.152948 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com" 2026-01-20T10:50:29.153010045+00:00 stderr F I0120 10:50:29.152990 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling" 2026-01-20T10:50:29.153084888+00:00 stderr F I0120 10:50:29.153054 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io" 2026-01-20T10:50:29.153153089+00:00 stderr F I0120 10:50:29.153131 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2026-01-20T10:50:29.153187190+00:00 stderr F I0120 10:50:29.153167 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2026-01-20T10:50:29.153230722+00:00 stderr F I0120 10:50:29.153208 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io" 2026-01-20T10:50:29.153268553+00:00 stderr F I0120 10:50:29.153247 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com" 2026-01-20T10:50:29.153303794+00:00 stderr F I0120 10:50:29.153284 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2026-01-20T10:50:29.153345865+00:00 stderr F I0120 10:50:29.153326 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io" 2026-01-20T10:50:29.153384366+00:00 stderr F I0120 10:50:29.153364 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" 
resource="operatorgroups.operators.coreos.com"
2026-01-20T10:50:29.153420047+00:00 stderr F I0120 10:50:29.153400 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io"
2026-01-20T10:50:29.153464468+00:00 stderr F I0120 10:50:29.153444 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io"
2026-01-20T10:50:29.153498619+00:00 stderr F I0120 10:50:29.153478 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io"
2026-01-20T10:50:29.153531560+00:00 stderr F I0120 10:50:29.153512 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
2026-01-20T10:50:29.153581452+00:00 stderr F I0120 10:50:29.153561 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io"
2026-01-20T10:50:29.153616603+00:00 stderr F I0120 10:50:29.153597 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io"
2026-01-20T10:50:29.153651814+00:00 stderr F I0120 10:50:29.153632 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
2026-01-20T10:50:29.153686535+00:00 stderr F I0120 10:50:29.153667 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io"
2026-01-20T10:50:29.153721786+00:00 stderr F I0120 10:50:29.153702 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io"
2026-01-20T10:50:29.153756097+00:00 stderr F I0120 10:50:29.153736 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io"
2026-01-20T10:50:29.153803478+00:00 stderr F I0120 10:50:29.153783 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
2026-01-20T10:50:29.153852529+00:00 stderr F I0120 10:50:29.153832 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
2026-01-20T10:50:29.153885560+00:00 stderr F I0120 10:50:29.153866 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
2026-01-20T10:50:29.153919851+00:00 stderr F I0120 10:50:29.153900 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
2026-01-20T10:50:29.153952452+00:00 stderr F I0120 10:50:29.153933 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
2026-01-20T10:50:29.154040285+00:00 stderr F I0120 10:50:29.154015 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io"
2026-01-20T10:50:29.154083816+00:00 stderr F I0120 10:50:29.154055 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io"
2026-01-20T10:50:29.154209020+00:00 stderr F I0120 10:50:29.154181 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com"
2026-01-20T10:50:29.154267211+00:00 stderr F I0120 10:50:29.154223 1 policy_controller.go:88] Started "openshift.io/cluster-quota-reconciliation"
2026-01-20T10:50:29.154267211+00:00 stderr F I0120 10:50:29.154240 1 policy_controller.go:78] Starting "openshift.io/cluster-csr-approver"
2026-01-20T10:50:29.154337713+00:00 stderr F I0120 10:50:29.154312 1 reconciliation_controller.go:140] Starting the cluster quota reconciliation controller
2026-01-20T10:50:29.154395975+00:00 stderr F I0120 10:50:29.154374 1 resource_quota_monitor.go:305] "QuotaMonitor running"
2026-01-20T10:50:29.154510598+00:00 stderr F I0120 10:50:29.154471 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller
2026-01-20T10:50:29.159140142+00:00 stderr F I0120 10:50:29.159056 1 policy_controller.go:88] Started "openshift.io/cluster-csr-approver"
2026-01-20T10:50:29.159140142+00:00 stderr F I0120 10:50:29.159126 1 policy_controller.go:78] Starting "openshift.io/podsecurity-admission-label-syncer"
2026-01-20T10:50:29.159226394+00:00 stderr F I0120 10:50:29.159192 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_csr-approver-controller
2026-01-20T10:50:29.161868750+00:00 stderr F I0120 10:50:29.161788 1 reconciliation_controller.go:207] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []
2026-01-20T10:50:29.283186736+00:00 stderr F I0120 10:50:29.283100 1 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer"
2026-01-20T10:50:29.283186736+00:00 stderr F I0120 10:50:29.283129 1 policy_controller.go:78] Starting "openshift.io/privileged-namespaces-psa-label-syncer"
2026-01-20T10:50:29.283254197+00:00 stderr F I0120 10:50:29.283190 1 base_controller.go:67] Waiting for caches to sync for pod-security-admission-label-synchronization-controller
2026-01-20T10:50:29.481867330+00:00 stderr F I0120 10:50:29.481810 1 policy_controller.go:88] Started "openshift.io/privileged-namespaces-psa-label-syncer"
2026-01-20T10:50:29.481867330+00:00 stderr F I0120 10:50:29.481830 1 policy_controller.go:91] Started Origin Controllers
2026-01-20T10:50:29.482358364+00:00 stderr F I0120 10:50:29.482178 1 privileged_namespaces_controller.go:75] "Starting" controller="privileged-namespaces-psa-label-syncer"
2026-01-20T10:50:29.482358364+00:00 stderr F I0120 10:50:29.482201 1 shared_informer.go:311] Waiting for caches to sync for privileged-namespaces-psa-label-syncer
2026-01-20T10:50:29.888683161+00:00 stderr F I0120 10:50:29.888597 1 shared_informer.go:311] Waiting for caches to sync for resource quota
2026-01-20T10:50:29.889406992+00:00 stderr F I0120 10:50:29.889381 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.889633189+00:00 stderr F I0120 10:50:29.889581 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.891138692+00:00 stderr F I0120 10:50:29.891084 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.892266375+00:00 stderr F I0120 10:50:29.892208 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.895727144+00:00 stderr F I0120 10:50:29.895686 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.902164509+00:00 stderr F I0120 10:50:29.898574 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.902164509+00:00 stderr F I0120 10:50:29.898886 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.903848748+00:00 stderr F I0120 10:50:29.903327 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.903848748+00:00 stderr F I0120 10:50:29.903595 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.903848748+00:00 stderr F I0120 10:50:29.903748 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.903848748+00:00 stderr F I0120 10:50:29.903770 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.907142863+00:00 stderr F I0120 10:50:29.903962 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.907142863+00:00 stderr F I0120 10:50:29.904530 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.909578393+00:00 stderr F I0120 10:50:29.909551 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.910561821+00:00 stderr F I0120 10:50:29.910385 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.910930602+00:00 stderr F I0120 10:50:29.910902 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.910965413+00:00 stderr F I0120 10:50:29.910946 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.911080266+00:00 stderr F I0120 10:50:29.911043 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.911216730+00:00 stderr F I0120 10:50:29.911193 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.913582198+00:00 stderr F I0120 10:50:29.913554 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.925621576+00:00 stderr F I0120 10:50:29.925559 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.929556239+00:00 stderr F I0120 10:50:29.929521 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.929781325+00:00 stderr F I0120 10:50:29.929751 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.929890448+00:00 stderr F I0120 10:50:29.929873 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.930125645+00:00 stderr F I0120 10:50:29.930107 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.930296970+00:00 stderr F I0120 10:50:29.930251 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.930887968+00:00 stderr F I0120 10:50:29.930821 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.931009141+00:00 stderr F I0120 10:50:29.930980 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "basic-user" not found
2026-01-20T10:50:29.931009141+00:00 stderr F I0120 10:50:29.930995 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.931022721+00:00 stderr F I0120 10:50:29.931002 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933054 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933112 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933120 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-autoscaler" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933129 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-autoscaler-operator" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933136 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-monitoring-operator" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933143 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933151 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-reader" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933167 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933176 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator-imageconfig-reader" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933184 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator-proxy-reader" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933191 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-status" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933198 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933206 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933214 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933221 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933228 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console-operator" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933235 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933243 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "control-plane-machine-set-operator" not found
2026-01-20T10:50:29.933262545+00:00 stderr F I0120 10:50:29.933250 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2026-01-20T10:50:29.933308736+00:00 stderr F I0120 10:50:29.933263 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-provisioner-runner" not found
2026-01-20T10:50:29.933308736+00:00 stderr F I0120 10:50:29.933270 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-provisioner-runner" not found
2026-01-20T10:50:29.933308736+00:00 stderr F I0120 10:50:29.933278 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "csi-snapshot-controller-operator-clusterrole" not found
2026-01-20T10:50:29.933308736+00:00 stderr F I0120 10:50:29.933286 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2026-01-20T10:50:29.933308736+00:00 stderr F I0120 10:50:29.933295 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-image-registry-operator" not found
2026-01-20T10:50:29.933308736+00:00 stderr F I0120 10:50:29.933302 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "dns-monitoring" not found
2026-01-20T10:50:29.933328547+00:00 stderr F I0120 10:50:29.933311 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "helm-chartrepos-viewer" not found
2026-01-20T10:50:29.933328547+00:00 stderr F I0120 10:50:29.933318 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "kube-apiserver" not found
2026-01-20T10:50:29.933339847+00:00 stderr F I0120 10:50:29.933325 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2026-01-20T10:50:29.933339847+00:00 stderr F I0120 10:50:29.933333 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-controllers" not found
2026-01-20T10:50:29.933351038+00:00 stderr F I0120 10:50:29.933341 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-controllers-metal3-remediation" not found
2026-01-20T10:50:29.933361598+00:00 stderr F I0120 10:50:29.933348 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-operator" not found
2026-01-20T10:50:29.933361598+00:00 stderr F I0120 10:50:29.933357 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-operator-ext-remediation" not found
2026-01-20T10:50:29.933372448+00:00 stderr F I0120 10:50:29.933363 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-controller" not found
2026-01-20T10:50:29.933383379+00:00 stderr F I0120 10:50:29.933373 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-daemon" not found
2026-01-20T10:50:29.933393819+00:00 stderr F I0120 10:50:29.933380 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-server" not found
2026-01-20T10:50:29.933393819+00:00 stderr F I0120 10:50:29.933389 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-os-builder" not found
2026-01-20T10:50:29.933404489+00:00 stderr F I0120 10:50:29.933396 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:anyuid" not found
2026-01-20T10:50:29.933414909+00:00 stderr F I0120 10:50:29.933404 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "marketplace-operator" not found
2026-01-20T10:50:29.933414909+00:00 stderr F I0120 10:50:29.933411 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "metrics-daemon-role" not found
2026-01-20T10:50:29.933430970+00:00 stderr F I0120 10:50:29.933419 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-admission-controller-webhook" not found
2026-01-20T10:50:29.933430970+00:00 stderr F I0120 10:50:29.933426 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found
2026-01-20T10:50:29.933441640+00:00 stderr F I0120 10:50:29.933434 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found
2026-01-20T10:50:29.933451991+00:00 stderr F I0120 10:50:29.933441 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus" not found
2026-01-20T10:50:29.933451991+00:00 stderr F I0120 10:50:29.933448 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found
2026-01-20T10:50:29.933462611+00:00 stderr F I0120 10:50:29.933455 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "whereabouts-cni" not found
2026-01-20T10:50:29.933472831+00:00 stderr F I0120 10:50:29.933463 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "network-diagnostics" not found
2026-01-20T10:50:29.933483461+00:00 stderr F I0120 10:50:29.933472 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "network-node-identity" not found
2026-01-20T10:50:29.933493792+00:00 stderr F I0120 10:50:29.933480 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:operator-lifecycle-manager" not found
2026-01-20T10:50:29.933493792+00:00 stderr F I0120 10:50:29.933488 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-dns" not found
2026-01-20T10:50:29.933504322+00:00 stderr F I0120 10:50:29.933496 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-dns-operator" not found
2026-01-20T10:50:29.933514432+00:00 stderr F I0120 10:50:29.933503 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-pruner" not found
2026-01-20T10:50:29.933524883+00:00 stderr F I0120 10:50:29.933512 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ingress-operator" not found
2026-01-20T10:50:29.933524883+00:00 stderr F I0120 10:50:29.933520 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ingress-router" not found
2026-01-20T10:50:29.933535433+00:00 stderr F I0120 10:50:29.933528 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-iptables-alerter" not found
2026-01-20T10:50:29.933545493+00:00 stderr F E0120 10:50:29.933522 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found
2026-01-20T10:50:29.933560564+00:00 stderr F I0120 10:50:29.933535 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-control-plane-limited" not found
2026-01-20T10:50:29.933570964+00:00 stderr F I0120 10:50:29.933558 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-node-limited" not found
2026-01-20T10:50:29.933570964+00:00 stderr F I0120 10:50:29.933567 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-kube-rbac-proxy" not found
2026-01-20T10:50:29.933581994+00:00 stderr F I0120 10:50:29.933573 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found
2026-01-20T10:50:29.933592315+00:00 stderr F I0120 10:50:29.933579 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "prometheus-k8s-scheduler-resources" not found
2026-01-20T10:50:29.933592315+00:00 stderr F I0120 10:50:29.933586 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "registry-monitoring" not found
2026-01-20T10:50:29.933603115+00:00 stderr F I0120 10:50:29.933592 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:registry" not found
2026-01-20T10:50:29.933603115+00:00 stderr F I0120 10:50:29.933598 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "router-monitoring" not found
2026-01-20T10:50:29.933613895+00:00 stderr F I0120 10:50:29.933604 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found
2026-01-20T10:50:29.933613895+00:00 stderr F I0120 10:50:29.933611 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "self-provisioner" not found
2026-01-20T10:50:29.933625305+00:00 stderr F I0120 10:50:29.933616 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2026-01-20T10:50:29.933625305+00:00 stderr F I0120 10:50:29.933622 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-bootstrapper" not found
2026-01-20T10:50:29.933636506+00:00 stderr F I0120 10:50:29.933629 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found
2026-01-20T10:50:29.933646886+00:00 stderr F I0120 10:50:29.933634 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
2026-01-20T10:50:29.933646886+00:00 stderr F I0120 10:50:29.933640 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found
2026-01-20T10:50:29.933675788+00:00 stderr F I0120 10:50:29.933645 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found
2026-01-20T10:50:29.933675788+00:00 stderr F I0120 10:50:29.933656 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found
2026-01-20T10:50:29.933675788+00:00 stderr F I0120 10:50:29.933661 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:attachdetach-controller" not found
2026-01-20T10:50:29.933675788+00:00 stderr F I0120 10:50:29.933667 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:certificate-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933673 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:clusterrole-aggregation-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933680 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:cronjob-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933685 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:daemon-set-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933691 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:deployment-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933696 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:disruption-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933701 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpoint-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933706 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslice-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933712 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslicemirroring-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933717 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ephemeral-volume-controller" not found
2026-01-20T10:50:29.933730819+00:00 stderr F I0120 10:50:29.933723 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:expand-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933728 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:generic-garbage-collector" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933734 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:horizontal-pod-autoscaler" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933741 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:job-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933747 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:legacy-service-account-token-cleaner" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933752 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:namespace-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933758 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:node-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933763 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:persistent-volume-binder" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933768 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pod-garbage-collector" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933773 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pv-protection-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933779 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pvc-protection-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933784 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replicaset-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933790 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replication-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933794 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:resourcequota-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933800 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:root-ca-cert-publisher" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933804 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:route-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933811 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-account-controller" not found
2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933816 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref:
clusterrole.rbac.authorization.k8s.io "system:controller:service-ca-cert-publisher" not found 2026-01-20T10:50:29.933834872+00:00 stderr F I0120 10:50:29.933822 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-controller" not found 2026-01-20T10:50:29.934253604+00:00 stderr F I0120 10:50:29.934225 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:statefulset-controller" not found 2026-01-20T10:50:29.934253604+00:00 stderr F I0120 10:50:29.934239 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-after-finished-controller" not found 2026-01-20T10:50:29.934253604+00:00 stderr F I0120 10:50:29.934246 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-controller" not found 2026-01-20T10:50:29.934276205+00:00 stderr F I0120 10:50:29.934251 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2026-01-20T10:50:29.934276205+00:00 stderr F I0120 10:50:29.934257 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:discovery" not found 2026-01-20T10:50:29.934276205+00:00 stderr F I0120 10:50:29.934262 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2026-01-20T10:50:29.934276205+00:00 stderr F I0120 
10:50:29.934267 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.934276205+00:00 stderr F I0120 10:50:29.934272 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found 2026-01-20T10:50:29.934304696+00:00 stderr F I0120 10:50:29.934278 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-dns" not found 2026-01-20T10:50:29.934304696+00:00 stderr F I0120 10:50:29.934283 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found 2026-01-20T10:50:29.934304696+00:00 stderr F I0120 10:50:29.934288 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:master" not found 2026-01-20T10:50:29.934304696+00:00 stderr F I0120 10:50:29.934293 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:monitoring" not found 2026-01-20T10:50:29.934304696+00:00 stderr F I0120 10:50:29.934298 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node" not found 2026-01-20T10:50:29.934322316+00:00 stderr F I0120 10:50:29.934303 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-admin" not found 
2026-01-20T10:50:29.934322316+00:00 stderr F I0120 10:50:29.934309 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-admin" not found 2026-01-20T10:50:29.934333567+00:00 stderr F I0120 10:50:29.934320 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-bootstrapper" not found 2026-01-20T10:50:29.934333567+00:00 stderr F I0120 10:50:29.934326 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found 2026-01-20T10:50:29.934344257+00:00 stderr F I0120 10:50:29.934331 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found 2026-01-20T10:50:29.934344257+00:00 stderr F I0120 10:50:29.934337 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found 2026-01-20T10:50:29.934354947+00:00 stderr F I0120 10:50:29.934343 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:build-config-change-controller" not found 2026-01-20T10:50:29.934354947+00:00 stderr F I0120 10:50:29.934348 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:build-controller" not found 2026-01-20T10:50:29.934365708+00:00 stderr F I0120 10:50:29.934354 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't 
retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:cluster-csr-approver-controller" not found 2026-01-20T10:50:29.934365708+00:00 stderr F I0120 10:50:29.934361 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:cluster-quota-reconciliation-controller" not found 2026-01-20T10:50:29.934376328+00:00 stderr F I0120 10:50:29.934367 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:default-rolebindings-controller" not found 2026-01-20T10:50:29.934386718+00:00 stderr F I0120 10:50:29.934375 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:deployer-controller" not found 2026-01-20T10:50:29.934386718+00:00 stderr F I0120 10:50:29.934381 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:deploymentconfig-controller" not found 2026-01-20T10:50:29.934397468+00:00 stderr F I0120 10:50:29.934387 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:horizontal-pod-autoscaler" not found 2026-01-20T10:50:29.934397468+00:00 stderr F I0120 10:50:29.934392 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:image-import-controller" not found 2026-01-20T10:50:29.934411569+00:00 stderr F I0120 10:50:29.934398 1 sccrolecache.go:466] failed to retrieve a 
role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:image-trigger-controller" not found 2026-01-20T10:50:29.934411569+00:00 stderr F I0120 10:50:29.934403 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found 2026-01-20T10:50:29.934411569+00:00 stderr F I0120 10:50:29.934408 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:check-endpoints-crd-reader" not found 2026-01-20T10:50:29.934423119+00:00 stderr F I0120 10:50:29.934413 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:check-endpoints-node-reader" not found 2026-01-20T10:50:29.934423119+00:00 stderr F I0120 10:50:29.934419 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:machine-approver" not found 2026-01-20T10:50:29.934434179+00:00 stderr F I0120 10:50:29.934424 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:namespace-security-allocation-controller" not found 2026-01-20T10:50:29.934444510+00:00 stderr F I0120 10:50:29.934431 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:origin-namespace-controller" not found 2026-01-20T10:50:29.934444510+00:00 stderr F I0120 10:50:29.934438 1 sccrolecache.go:466] failed to retrieve a role 
for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:podsecurity-admission-label-syncer-controller" not found 2026-01-20T10:50:29.934455280+00:00 stderr F I0120 10:50:29.934444 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:privileged-namespaces-psa-label-syncer" not found 2026-01-20T10:50:29.934455280+00:00 stderr F I0120 10:50:29.934450 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:pv-recycler-controller" not found 2026-01-20T10:50:29.934466040+00:00 stderr F I0120 10:50:29.934456 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:resourcequota-controller" not found 2026-01-20T10:50:29.934466040+00:00 stderr F I0120 10:50:29.934461 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found 2026-01-20T10:50:29.934476781+00:00 stderr F I0120 10:50:29.934468 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:service-ingress-ip-controller" not found 2026-01-20T10:50:29.934476781+00:00 stderr F I0120 10:50:29.934473 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:serviceaccount-controller" not found 2026-01-20T10:50:29.934518472+00:00 stderr F I0120 10:50:29.934498 1 
sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:serviceaccount-pull-secrets-controller" not found 2026-01-20T10:50:29.934518472+00:00 stderr F I0120 10:50:29.934507 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-instance-controller" not found 2026-01-20T10:50:29.934518472+00:00 stderr F I0120 10:50:29.934513 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "admin" not found 2026-01-20T10:50:29.934531122+00:00 stderr F I0120 10:50:29.934518 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-instance-finalizer-controller" not found 2026-01-20T10:50:29.934531122+00:00 stderr F I0120 10:50:29.934524 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "admin" not found 2026-01-20T10:50:29.934542183+00:00 stderr F I0120 10:50:29.934531 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-service-broker" not found 2026-01-20T10:50:29.934542183+00:00 stderr F I0120 10:50:29.934539 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:unidling-controller" not found 2026-01-20T10:50:29.934553133+00:00 stderr F I0120 10:50:29.934544 1 sccrolecache.go:466] failed to retrieve a role for a 
rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found 2026-01-20T10:50:29.934563803+00:00 stderr F I0120 10:50:29.934550 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934563803+00:00 stderr F I0120 10:50:29.934556 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934574483+00:00 stderr F I0120 10:50:29.934561 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934574483+00:00 stderr F I0120 10:50:29.934567 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager" not found 2026-01-20T10:50:29.934585164+00:00 stderr F I0120 10:50:29.934573 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:image-trigger-controller" not found 2026-01-20T10:50:29.934585164+00:00 stderr F I0120 10:50:29.934579 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:ingress-to-route-controller" not found 2026-01-20T10:50:29.934599554+00:00 stderr F I0120 10:50:29.934586 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io 
"system:openshift:openshift-controller-manager:update-buildconfig-status" not found 2026-01-20T10:50:29.934599554+00:00 stderr F I0120 10:50:29.934591 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-route-controller-manager" not found 2026-01-20T10:50:29.934610264+00:00 stderr F I0120 10:50:29.934597 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934610264+00:00 stderr F I0120 10:50:29.934603 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934620965+00:00 stderr F I0120 10:50:29.934610 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:operator:etcd-backup-role" not found 2026-01-20T10:50:29.934620965+00:00 stderr F I0120 10:50:29.934615 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934631685+00:00 stderr F I0120 10:50:29.934620 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934631685+00:00 stderr F I0120 10:50:29.934625 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934642365+00:00 stderr F I0120 10:50:29.934631 1 sccrolecache.go:466] failed to retrieve a 
role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934642365+00:00 stderr F I0120 10:50:29.934635 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934653716+00:00 stderr F I0120 10:50:29.934641 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934653716+00:00 stderr F I0120 10:50:29.934646 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found 2026-01-20T10:50:29.934664706+00:00 stderr F I0120 10:50:29.934651 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934664706+00:00 stderr F I0120 10:50:29.934657 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934676086+00:00 stderr F I0120 10:50:29.934662 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934676086+00:00 stderr F I0120 10:50:29.934667 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934676086+00:00 stderr F I0120 10:50:29.934673 1 sccrolecache.go:466] failed to retrieve a role 
for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934691167+00:00 stderr F I0120 10:50:29.934679 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934691167+00:00 stderr F I0120 10:50:29.934684 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934728708+00:00 stderr F I0120 10:50:29.934689 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934728708+00:00 stderr F I0120 10:50:29.934721 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2026-01-20T10:50:29.934740648+00:00 stderr F I0120 10:50:29.934729 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found 2026-01-20T10:50:29.934740648+00:00 stderr F I0120 10:50:29.934736 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found 2026-01-20T10:50:29.934752229+00:00 stderr F I0120 10:50:29.934742 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:tokenreview-openshift-controller-manager" not found 2026-01-20T10:50:29.934752229+00:00 stderr F 
I0120 10:50:29.934748 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:tokenreview-openshift-route-controller-manager" not found 2026-01-20T10:50:29.934762739+00:00 stderr F I0120 10:50:29.934755 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:useroauthaccesstoken-manager" not found 2026-01-20T10:50:29.934773109+00:00 stderr F I0120 10:50:29.934762 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found 2026-01-20T10:50:29.934773109+00:00 stderr F I0120 10:50:29.934768 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found 2026-01-20T10:50:29.934783809+00:00 stderr F I0120 10:50:29.934774 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:sdn-reader" not found 2026-01-20T10:50:29.934783809+00:00 stderr F I0120 10:50:29.934780 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found 2026-01-20T10:50:29.934797950+00:00 stderr F I0120 10:50:29.934787 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found 2026-01-20T10:50:29.934797950+00:00 stderr F I0120 10:50:29.934792 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve 
clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:webhook" not found 2026-01-20T10:50:29.935408947+00:00 stderr F E0120 10:50:29.935380 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2026-01-20T10:50:29.985241533+00:00 stderr F I0120 10:50:29.985164 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_csr-approver-controller 2026-01-20T10:50:29.985241533+00:00 stderr F I0120 10:50:29.985200 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 2026-01-20T10:50:29.986587001+00:00 stderr F I0120 10:50:29.986554 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:29.986727725+00:00 stderr F I0120 10:50:29.986708 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-controller-events" not found 2026-01-20T10:50:29.986802698+00:00 stderr F I0120 10:50:29.986783 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-daemon-events" not found 2026-01-20T10:50:29.986948003+00:00 stderr F I0120 10:50:29.986935 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-os-builder-events" not found 2026-01-20T10:50:29.986999434+00:00 stderr F I0120 10:50:29.986970 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2026-01-20T10:50:29.987081407+00:00 stderr F I0120 10:50:29.987039 1 sccrolecache.go:466] 
failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2026-01-20T10:50:29.987081407+00:00 stderr F I0120 10:50:29.987077 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.987096897+00:00 stderr F I0120 10:50:29.987091 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2026-01-20T10:50:29.987118458+00:00 stderr F I0120 10:50:29.987104 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2026-01-20T10:50:29.987118458+00:00 stderr F I0120 10:50:29.987114 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2026-01-20T10:50:29.987126238+00:00 stderr F I0120 10:50:29.987121 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.987134308+00:00 stderr F I0120 10:50:29.987128 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2026-01-20T10:50:29.987150939+00:00 stderr F I0120 10:50:29.987134 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2026-01-20T10:50:29.987150939+00:00 stderr F I0120 
10:50:29.987141 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.987159189+00:00 stderr F I0120 10:50:29.987154 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2026-01-20T10:50:29.987167409+00:00 stderr F I0120 10:50:29.987161 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2026-01-20T10:50:29.987175639+00:00 stderr F I0120 10:50:29.987169 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.987228421+00:00 stderr F I0120 10:50:29.987203 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2026-01-20T10:50:29.987280332+00:00 stderr F I0120 10:50:29.987268 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2026-01-20T10:50:29.987321953+00:00 stderr F I0120 10:50:29.987284 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:29.987495898+00:00 stderr F I0120 10:50:29.987464 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.987701764+00:00 stderr F I0120 10:50:29.987678 1 sccrolecache.go:466] 
failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2026-01-20T10:50:29.987747806+00:00 stderr F I0120 10:50:29.987735 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2026-01-20T10:50:29.987782667+00:00 stderr F I0120 10:50:29.987771 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.987852278+00:00 stderr F I0120 10:50:29.987829 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2026-01-20T10:50:29.987852278+00:00 stderr F I0120 10:50:29.987847 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2026-01-20T10:50:29.987863139+00:00 stderr F I0120 10:50:29.987855 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.988214359+00:00 stderr F I0120 10:50:29.988192 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.988259260+00:00 stderr F I0120 10:50:29.988231 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.988519967+00:00 stderr F I0120 
10:50:29.988502 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2026-01-20T10:50:29.989083053+00:00 stderr F E0120 10:50:29.989041 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2026-01-20T10:50:29.990897546+00:00 stderr F E0120 10:50:29.990864 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2026-01-20T10:50:29.991012129+00:00 stderr F I0120 10:50:29.990983 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:29.998327791+00:00 stderr F I0120 10:50:29.997871 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.006221697+00:00 stderr F I0120 10:50:30.006158 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.088082486+00:00 stderr F I0120 10:50:30.088000 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.088964812+00:00 stderr F I0120 10:50:30.088928 1 shared_informer.go:318] Caches are synced for resource quota 2026-01-20T10:50:30.088964812+00:00 stderr F I0120 10:50:30.088950 1 resource_quota_controller.go:496] "synced quota controller" 2026-01-20T10:50:30.132050443+00:00 stderr F I0120 10:50:30.131990 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.280263193+00:00 stderr F I0120 10:50:30.280206 1 reflector.go:351] Caches populated for *v1.ResourceQuota from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.291877497+00:00 stderr F I0120 10:50:30.291859 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.329704858+00:00 stderr F I0120 10:50:30.329624 1 shared_informer.go:318] Caches are synced for resource quota 2026-01-20T10:50:30.487810553+00:00 stderr F I0120 10:50:30.487761 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.488111831+00:00 stderr F I0120 10:50:30.488081 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.488744180+00:00 stderr F I0120 10:50:30.488719 1 base_controller.go:73] Caches are synced for namespace-security-allocation-controller 2026-01-20T10:50:30.488744180+00:00 stderr F I0120 10:50:30.488737 1 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ... 
2026-01-20T10:50:30.488778021+00:00 stderr F I0120 10:50:30.488762 1 namespace_scc_allocation_controller.go:111] Repairing SCC UID Allocations 2026-01-20T10:50:30.583662905+00:00 stderr F I0120 10:50:30.583612 1 shared_informer.go:318] Caches are synced for privileged-namespaces-psa-label-syncer 2026-01-20T10:50:30.678136816+00:00 stderr F I0120 10:50:30.678090 1 request.go:697] Waited for 1.194840874s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0 2026-01-20T10:50:30.680242067+00:00 stderr F I0120 10:50:30.680202 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.687233678+00:00 stderr F I0120 10:50:30.687192 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.886867561+00:00 stderr F I0120 10:50:30.886728 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:30.890371271+00:00 stderr F I0120 10:50:30.890276 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.083908497+00:00 stderr F I0120 10:50:31.083835 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.087645295+00:00 stderr F I0120 10:50:31.087452 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.287437932+00:00 stderr F I0120 10:50:31.287380 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.488213616+00:00 stderr F I0120 10:50:31.488118 1 reflector.go:351] Caches populated 
for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.505548316+00:00 stderr F I0120 10:50:31.505471 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.584344866+00:00 stderr F I0120 10:50:31.584243 1 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller 2026-01-20T10:50:31.584344866+00:00 stderr F I0120 10:50:31.584279 1 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ... 2026-01-20T10:50:31.598737491+00:00 stderr F I0120 10:50:31.598673 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.678635892+00:00 stderr F I0120 10:50:31.678555 1 request.go:697] Waited for 1.793608467s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/secrets?limit=500&resourceVersion=0 2026-01-20T10:50:31.688366383+00:00 stderr F I0120 10:50:31.688277 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.880412626+00:00 stderr F I0120 10:50:31.880295 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.887230413+00:00 stderr F I0120 10:50:31.887172 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:31.923053285+00:00 stderr F I0120 10:50:31.922950 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:32.087937046+00:00 stderr F I0120 10:50:32.087835 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:32.294258419+00:00 stderr F I0120 10:50:32.294010 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:32.311861806+00:00 stderr F I0120 10:50:32.311772 1 namespace_scc_allocation_controller.go:116] Repair complete 2026-01-20T10:50:32.488329771+00:00 stderr F I0120 10:50:32.488254 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:32.685800011+00:00 stderr F I0120 10:50:32.685743 1 request.go:697] Waited for 2.797417299s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/probes?limit=500&resourceVersion=0 2026-01-20T10:50:32.688723665+00:00 stderr F I0120 10:50:32.688675 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:32.892001242+00:00 stderr F W0120 10:50:32.891933 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2026-01-20T10:50:32.892072124+00:00 stderr F I0120 10:50:32.892040 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:32.897437958+00:00 stderr F W0120 10:50:32.897393 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2026-01-20T10:50:33.088247076+00:00 stderr F I0120 10:50:33.088184 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:33.287780675+00:00 stderr F I0120 10:50:33.287711 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2026-01-20T10:50:33.488928440+00:00 stderr F I0120 10:50:33.488879 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:33.686398279+00:00 stderr F I0120 10:50:33.686296 1 request.go:697] Waited for 3.797831622s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0 2026-01-20T10:50:33.693874705+00:00 stderr F I0120 10:50:33.693798 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:33.889661656+00:00 stderr F I0120 10:50:33.888863 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:34.088778723+00:00 stderr F I0120 10:50:34.088686 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:34.287358645+00:00 stderr F I0120 10:50:34.287279 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:34.487733947+00:00 stderr F I0120 10:50:34.487684 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:34.686587117+00:00 stderr F I0120 10:50:34.686511 1 request.go:697] Waited for 4.797973608s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/operators.coreos.com/v1alpha1/catalogsources?limit=500&resourceVersion=0 2026-01-20T10:50:34.688912164+00:00 stderr F I0120 10:50:34.688879 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:34.888909816+00:00 stderr F I0120 10:50:34.888848 1 
reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:35.088413115+00:00 stderr F I0120 10:50:35.088359 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:35.288458718+00:00 stderr F I0120 10:50:35.287883 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:35.487393270+00:00 stderr F I0120 10:50:35.487333 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:35.691171771+00:00 stderr F I0120 10:50:35.691086 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:35.885835699+00:00 stderr F I0120 10:50:35.885715 1 request.go:697] Waited for 5.996935462s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/ipam.cluster.x-k8s.io/v1beta1/ipaddresses?limit=500&resourceVersion=0 2026-01-20T10:50:35.888864477+00:00 stderr F I0120 10:50:35.888797 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:36.089629901+00:00 stderr F I0120 10:50:36.089553 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:36.288783159+00:00 stderr F I0120 10:50:36.288697 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:36.487988548+00:00 stderr F I0120 10:50:36.487884 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:36.688500146+00:00 stderr 
F I0120 10:50:36.688386 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:50:36.690736121+00:00 stderr F I0120 10:50:36.690656 1 reconciliation_controller.go:224] synced cluster resource quota controller 2026-01-20T10:50:36.755105825+00:00 stderr F I0120 10:50:36.755009 1 reconciliation_controller.go:149] Caches are synced 2026-01-20T10:56:07.111031604+00:00 stderr F I0120 10:56:07.110985 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.110946872 +0000 UTC))" 2026-01-20T10:56:07.111088376+00:00 stderr F I0120 10:56:07.111028 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.111002133 +0000 UTC))" 2026-01-20T10:56:07.111088376+00:00 stderr F I0120 10:56:07.111046 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.111033084 +0000 UTC))" 2026-01-20T10:56:07.111098116+00:00 stderr F I0120 10:56:07.111090 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.111051955 +0000 UTC))" 2026-01-20T10:56:07.111123267+00:00 stderr F I0120 10:56:07.111107 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.111096716 +0000 UTC))" 2026-01-20T10:56:07.111144137+00:00 stderr F I0120 10:56:07.111126 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.111116246 +0000 UTC))" 2026-01-20T10:56:07.111144137+00:00 stderr F I0120 10:56:07.111140 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.111130137 +0000 UTC))" 2026-01-20T10:56:07.111309162+00:00 stderr F I0120 10:56:07.111176 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.111163858 +0000 UTC))" 2026-01-20T10:56:07.111309162+00:00 stderr F I0120 10:56:07.111193 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.111183318 +0000 UTC))" 2026-01-20T10:56:07.111309162+00:00 stderr F I0120 10:56:07.111208 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.111199319 +0000 UTC))" 2026-01-20T10:56:07.111309162+00:00 stderr F I0120 10:56:07.111241 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.111213289 +0000 UTC))" 2026-01-20T10:56:07.111309162+00:00 stderr F I0120 10:56:07.111257 1 
tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.111245981 +0000 UTC))" 2026-01-20T10:56:07.112179946+00:00 stderr F I0120 10:56:07.111645 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2026-01-20 10:56:07.11160979 +0000 UTC))" 2026-01-20T10:56:07.112179946+00:00 stderr F I0120 10:56:07.111988 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906027\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906026\" (2026-01-20 09:47:06 +0000 UTC to 2027-01-20 09:47:06 +0000 UTC (now=2026-01-20 10:56:07.11197222 +0000 UTC))" 2026-01-20T10:56:24.941122079+00:00 stderr F I0120 10:56:24.939981 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openstack namespace 2026-01-20T10:56:24.941122079+00:00 stderr F I0120 10:56:24.940255 1 podsecurity_label_sync_controller.go:304] no 
service accounts were found in the "openstack" NS 2026-01-20T10:56:25.762253261+00:00 stderr F I0120 10:56:25.761617 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openstack-operators namespace 2026-01-20T10:56:30.689164511+00:00 stderr F I0120 10:56:30.686110 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for cert-manager-operator namespace 2026-01-20T10:56:33.671276766+00:00 stderr F I0120 10:56:33.669273 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for cert-manager namespace 2026-01-20T10:56:36.806717587+00:00 stderr F I0120 10:56:36.806605 1 reconciliation_controller.go:207] syncing resource quota controller with updated resources from discovery: added: [acme.cert-manager.io/v1, Resource=challenges acme.cert-manager.io/v1, Resource=orders cert-manager.io/v1, Resource=certificaterequests cert-manager.io/v1, Resource=certificates cert-manager.io/v1, Resource=issuers], removed: [] 2026-01-20T10:56:36.806907952+00:00 stderr F I0120 10:56:36.806840 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="challenges.acme.cert-manager.io" 2026-01-20T10:56:36.807001514+00:00 stderr F I0120 10:56:36.806949 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="certificaterequests.cert-manager.io" 2026-01-20T10:56:36.807094617+00:00 stderr F I0120 10:56:36.807044 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="orders.acme.cert-manager.io" 2026-01-20T10:56:36.807222900+00:00 stderr F I0120 10:56:36.807159 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="certificates.cert-manager.io" 2026-01-20T10:56:36.807242651+00:00 stderr F I0120 10:56:36.807232 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="issuers.cert-manager.io" 2026-01-20T10:56:36.828863283+00:00 stderr F I0120 10:56:36.828788 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:56:36.877149632+00:00 stderr F I0120 10:56:36.873377 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:56:36.904138258+00:00 stderr F I0120 10:56:36.903774 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:56:36.921447154+00:00 stderr F I0120 10:56:36.921335 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:56:36.952952821+00:00 stderr F I0120 10:56:36.952871 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:56:37.008385113+00:00 stderr F I0120 10:56:37.008330 1 reconciliation_controller.go:224] synced cluster resource quota controller 2026-01-20T10:57:26.031955785+00:00 stderr F E0120 10:57:26.031891 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=42483&timeout=8m2s&timeoutSeconds=482&watch=true\": Post 
\"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:26.493358677+00:00 stderr F E0120 10:57:26.493269 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=42659&timeout=8m51s&timeoutSeconds=531&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:26.891876426+00:00 stderr F E0120 10:57:26.891822 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=42607&timeout=6m45s&timeoutSeconds=405&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:28.382661991+00:00 stderr F E0120 10:57:28.382601 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=42620&timeout=5m0s&timeoutSeconds=300&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:29.579791729+00:00 stderr F E0120 10:57:29.579308 1 reflector.go:147] 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=43235&timeout=8m52s&timeoutSeconds=532&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2026-01-20T10:57:29.792923895+00:00 stderr F E0120 10:57:29.792855 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?allowWatchBookmarks=true&resourceVersion=42799&timeout=5m9s&timeoutSeconds=309&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:31.991351873+00:00 stderr F E0120 10:57:31.991247 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=42611&timeout=8m20s&timeoutSeconds=500&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:38.378450092+00:00 stderr F I0120 10:57:38.378361 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:38.836621668+00:00 stderr F I0120 10:57:38.835916 1 reflector.go:351] Caches populated for *v1.Lease from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:38.996238519+00:00 stderr F I0120 10:57:38.996168 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:39.670136301+00:00 stderr F I0120 10:57:39.670025 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:40.235519213+00:00 stderr F W0120 10:57:40.235477 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?resourceVersion=42659\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:40.235584145+00:00 stderr F E0120 10:57:40.235571 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?resourceVersion=42659\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:40.456658831+00:00 stderr F I0120 10:57:40.456564 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:40.869734825+00:00 stderr F I0120 10:57:40.869684 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:41.070052993+00:00 stderr F I0120 10:57:41.070002 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:41.244281890+00:00 stderr F I0120 10:57:41.244210 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:41.285448469+00:00 stderr F I0120 10:57:41.285382 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:41.447989038+00:00 stderr F I0120 10:57:41.447883 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:41.575120309+00:00 stderr F I0120 10:57:41.574988 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:41.928253908+00:00 stderr F I0120 10:57:41.928201 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:42.076416336+00:00 stderr F I0120 10:57:42.076340 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:42.636106298+00:00 stderr F I0120 10:57:42.636027 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:42.780706961+00:00 stderr F I0120 10:57:42.780669 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:42.796417667+00:00 stderr F I0120 10:57:42.796345 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:42.850283211+00:00 stderr F I0120 10:57:42.850218 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:43.071874481+00:00 stderr F I0120 
10:57:43.071796 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:43.414674657+00:00 stderr F I0120 10:57:43.414594 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:43.570946419+00:00 stderr F I0120 10:57:43.570889 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:43.852175137+00:00 stderr F I0120 10:57:43.852123 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:43.868761346+00:00 stderr F I0120 10:57:43.868706 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:44.061470471+00:00 stderr F I0120 10:57:44.061381 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:44.208180262+00:00 stderr F I0120 10:57:44.207605 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:44.500769569+00:00 stderr F W0120 10:57:44.500691 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?resourceVersion=42483\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:44.500769569+00:00 stderr F E0120 10:57:44.500746 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the 
server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?resourceVersion=42483\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:44.778647197+00:00 stderr F I0120 10:57:44.778576 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:44.866701646+00:00 stderr F I0120 10:57:44.866614 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:44.953054720+00:00 stderr F I0120 10:57:44.952995 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:44.956436709+00:00 stderr F W0120 10:57:44.956397 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?resourceVersion=42611\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:44.956456430+00:00 stderr F E0120 10:57:44.956435 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?resourceVersion=42611\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:44.989348290+00:00 stderr F I0120 10:57:44.989309 1 reflector.go:351] 
Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:45.054867432+00:00 stderr F I0120 10:57:45.054795 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:45.288999095+00:00 stderr F I0120 10:57:45.288897 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:45.395552522+00:00 stderr F I0120 10:57:45.395435 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:45.618646181+00:00 stderr F I0120 10:57:45.618295 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:45.620134651+00:00 stderr F I0120 10:57:45.620109 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2026-01-20T10:57:45.620134651+00:00 stderr F I0120 10:57:45.620125 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2026-01-20T10:57:45.936981640+00:00 stderr F W0120 10:57:45.936907 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?resourceVersion=42799\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:45.936981640+00:00 stderr F E0120 
10:57:45.936946 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?resourceVersion=42799\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:46.075023770+00:00 stderr F I0120 10:57:46.074913 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:46.613978514+00:00 stderr F I0120 10:57:46.613900 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:46.693993689+00:00 stderr F I0120 10:57:46.693909 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:46.941675320+00:00 stderr F I0120 10:57:46.941603 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:47.066699606+00:00 stderr F W0120 10:57:47.066631 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?resourceVersion=42620\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:47.066699606+00:00 stderr F E0120 10:57:47.066676 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: 
\"/apis/build.openshift.io/v1/buildconfigs?resourceVersion=42620\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2026-01-20T10:57:47.405275020+00:00 stderr F I0120 10:57:47.405194 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:47.617586625+00:00 stderr F I0120 10:57:47.617516 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:47.843156309+00:00 stderr F I0120 10:57:47.843055 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:47.863262912+00:00 stderr F I0120 10:57:47.863207 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:47.960598515+00:00 stderr F I0120 10:57:47.960514 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:48.071213261+00:00 stderr F I0120 10:57:48.071155 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:48.151952206+00:00 stderr F I0120 10:57:48.151891 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:48.304602213+00:00 stderr F I0120 10:57:48.304537 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:48.423646741+00:00 stderr F I0120 10:57:48.423529 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:48.519106326+00:00 
stderr F I0120 10:57:48.519025 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:48.604545865+00:00 stderr F I0120 10:57:48.604487 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:48.712612843+00:00 stderr F I0120 10:57:48.712530 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:48.719002172+00:00 stderr F I0120 10:57:48.718944 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.163238940+00:00 stderr F I0120 10:57:49.163181 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.193636993+00:00 stderr F I0120 10:57:49.193557 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.360032105+00:00 stderr F I0120 10:57:49.358305 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.361222576+00:00 stderr F I0120 10:57:49.361107 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.361382070+00:00 stderr F I0120 10:57:49.361344 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.367346798+00:00 stderr F I0120 10:57:49.367288 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.430682032+00:00 stderr F I0120 10:57:49.430621 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.458212760+00:00 stderr F I0120 10:57:49.458145 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.766409781+00:00 stderr F I0120 10:57:49.766335 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.987705003+00:00 stderr F I0120 10:57:49.987650 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:49.989603724+00:00 stderr F I0120 10:57:49.989577 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:50.293506390+00:00 stderr F I0120 10:57:50.293412 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:50.324639773+00:00 stderr F I0120 10:57:50.324551 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:50.489386650+00:00 stderr F I0120 10:57:50.489335 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:50.697947945+00:00 stderr F I0120 10:57:50.696493 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:50.845770384+00:00 stderr F I0120 10:57:50.845707 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:50.865505766+00:00 stderr F I0120 10:57:50.865434 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2026-01-20T10:57:50.978014411+00:00 stderr F I0120 10:57:50.977945 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:51.018242186+00:00 stderr F I0120 10:57:51.018167 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:52.145793274+00:00 stderr F I0120 10:57:52.145718 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:52.283355812+00:00 stderr F I0120 10:57:52.283280 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:52.376077415+00:00 stderr F I0120 10:57:52.375973 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:52.409965301+00:00 stderr F I0120 10:57:52.409878 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:52.529865991+00:00 stderr F I0120 10:57:52.529811 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:52.632637329+00:00 stderr F I0120 10:57:52.632505 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:52.636030818+00:00 stderr F I0120 10:57:52.635936 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:52.767147536+00:00 stderr F I0120 10:57:52.766416 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:53.024014209+00:00 stderr F I0120 10:57:53.023933 1 reflector.go:351] Caches 
populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:53.024391589+00:00 stderr F I0120 10:57:53.024369 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2026-01-20T10:57:53.024429430+00:00 stderr F I0120 10:57:53.024419 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2026-01-20T10:57:53.674316247+00:00 stderr F I0120 10:57:53.674211 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:53.840737397+00:00 stderr F I0120 10:57:53.840637 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:54.302226512+00:00 stderr F I0120 10:57:54.302168 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:57:55.575532005+00:00 stderr F I0120 10:57:55.575472 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:58:04.834768543+00:00 stderr F I0120 10:58:04.834681 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openshift-must-gather-jdb4k namespace 2026-01-20T10:58:14.776140577+00:00 stderr F I0120 10:58:14.776094 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:58:15.853286957+00:00 stderr F I0120 10:58:15.853222 1 
reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:58:17.027877572+00:00 stderr F I0120 10:58:17.027789 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:58:20.193290744+00:00 stderr F W0120 10:58:20.193225 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2026-01-20T10:58:20.193327605+00:00 stderr F I0120 10:58:20.193297 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2026-01-20T10:58:20.205340431+00:00 stderr F W0120 10:58:20.205180 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 
[archive entry: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log]
2026-01-20T10:58:19.916580256+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done' 2026-01-20T10:58:19.924941938+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 
2026-01-20T10:58:19.937096677+00:00 stderr F + '[' -n '' ']' 2026-01-20T10:58:19.938455281+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']' 2026-01-20T10:58:19.938481592+00:00 stderr F + echo 'Copying system trust bundle' 2026-01-20T10:58:19.938492882+00:00 stdout F Copying system trust bundle 2026-01-20T10:58:19.938503032+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2026-01-20T10:58:19.945566582+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2026-01-20T10:58:19.945911631+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true 
--feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true 
--feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2026-01-20T10:58:20.020670610+00:00 stderr F W0120 10:58:20.020502 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2026-01-20T10:58:20.020670610+00:00 stderr F W0120 10:58:20.020652 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2026-01-20T10:58:20.020710031+00:00 stderr F W0120 10:58:20.020678 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2026-01-20T10:58:20.020710031+00:00 stderr F W0120 10:58:20.020704 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2026-01-20T10:58:20.020797963+00:00 stderr F W0120 10:58:20.020776 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2026-01-20T10:58:20.020821374+00:00 stderr F W0120 10:58:20.020807 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2026-01-20T10:58:20.020870705+00:00 stderr F W0120 10:58:20.020833 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2026-01-20T10:58:20.020870705+00:00 stderr F W0120 10:58:20.020859 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2026-01-20T10:58:20.020942257+00:00 stderr F W0120 10:58:20.020906 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2026-01-20T10:58:20.020951957+00:00 stderr F W0120 10:58:20.020941 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2026-01-20T10:58:20.020985528+00:00 stderr F W0120 10:58:20.020966 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2026-01-20T10:58:20.021009318+00:00 stderr F W0120 10:58:20.020996 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2026-01-20T10:58:20.021038079+00:00 stderr F W0120 
10:58:20.021021 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2026-01-20T10:58:20.021070830+00:00 stderr F W0120 10:58:20.021051 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2026-01-20T10:58:20.021124681+00:00 stderr F W0120 10:58:20.021100 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2026-01-20T10:58:20.021155752+00:00 stderr F W0120 10:58:20.021138 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2026-01-20T10:58:20.021189613+00:00 stderr F W0120 10:58:20.021172 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2026-01-20T10:58:20.021223494+00:00 stderr F W0120 10:58:20.021205 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2026-01-20T10:58:20.021322766+00:00 stderr F W0120 10:58:20.021295 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2026-01-20T10:58:20.021366727+00:00 stderr F W0120 10:58:20.021349 1 feature_gate.go:227] unrecognized feature gate: Example 2026-01-20T10:58:20.021418319+00:00 stderr F W0120 10:58:20.021401 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2026-01-20T10:58:20.021445279+00:00 stderr F W0120 10:58:20.021429 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2026-01-20T10:58:20.021473360+00:00 stderr F W0120 10:58:20.021457 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2026-01-20T10:58:20.021500521+00:00 stderr F W0120 10:58:20.021484 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2026-01-20T10:58:20.021528141+00:00 stderr F W0120 10:58:20.021512 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2026-01-20T10:58:20.021555372+00:00 stderr F W0120 10:58:20.021539 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2026-01-20T10:58:20.021583083+00:00 stderr F W0120 10:58:20.021567 1 feature_gate.go:227] unrecognized 
feature gate: GCPLabelsTags 2026-01-20T10:58:20.021610373+00:00 stderr F W0120 10:58:20.021594 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2026-01-20T10:58:20.021638414+00:00 stderr F W0120 10:58:20.021622 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2026-01-20T10:58:20.021665605+00:00 stderr F W0120 10:58:20.021649 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2026-01-20T10:58:20.021699406+00:00 stderr F W0120 10:58:20.021683 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2026-01-20T10:58:20.021726956+00:00 stderr F W0120 10:58:20.021711 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2026-01-20T10:58:20.021754387+00:00 stderr F W0120 10:58:20.021738 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2026-01-20T10:58:20.021780378+00:00 stderr F W0120 10:58:20.021764 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2026-01-20T10:58:20.021808828+00:00 stderr F W0120 10:58:20.021792 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2026-01-20T10:58:20.021836629+00:00 stderr F W0120 10:58:20.021819 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2026-01-20T10:58:20.021863650+00:00 stderr F W0120 10:58:20.021847 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2026-01-20T10:58:20.021889680+00:00 stderr F W0120 10:58:20.021873 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2026-01-20T10:58:20.021916271+00:00 stderr F W0120 10:58:20.021900 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2026-01-20T10:58:20.021966352+00:00 stderr F W0120 10:58:20.021950 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2026-01-20T10:58:20.021992073+00:00 stderr F W0120 10:58:20.021976 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2026-01-20T10:58:20.022018024+00:00 stderr F W0120 10:58:20.022002 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2026-01-20T10:58:20.022045534+00:00 stderr F W0120 10:58:20.022029 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2026-01-20T10:58:20.022092115+00:00 stderr F W0120 10:58:20.022055 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2026-01-20T10:58:20.022119776+00:00 stderr F W0120 10:58:20.022104 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2026-01-20T10:58:20.022145817+00:00 stderr F W0120 10:58:20.022132 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2026-01-20T10:58:20.022191848+00:00 stderr F W0120 10:58:20.022177 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2026-01-20T10:58:20.022217739+00:00 stderr F W0120 10:58:20.022204 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2026-01-20T10:58:20.022244179+00:00 stderr F W0120 10:58:20.022230 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2026-01-20T10:58:20.022269910+00:00 stderr F W0120 
10:58:20.022256 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2026-01-20T10:58:20.022300591+00:00 stderr F W0120 10:58:20.022287 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2026-01-20T10:58:20.022429044+00:00 stderr F W0120 10:58:20.022407 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2026-01-20T10:58:20.022450214+00:00 stderr F W0120 10:58:20.022437 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2026-01-20T10:58:20.022497056+00:00 stderr F W0120 10:58:20.022481 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2026-01-20T10:58:20.022524416+00:00 stderr F W0120 10:58:20.022509 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2026-01-20T10:58:20.022556077+00:00 stderr F W0120 10:58:20.022542 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2026-01-20T10:58:20.022586978+00:00 stderr F W0120 10:58:20.022573 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2026-01-20T10:58:20.022645909+00:00 stderr F W0120 10:58:20.022605 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2026-01-20T10:58:20.022693600+00:00 stderr F W0120 10:58:20.022675 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2026-01-20T10:58:20.022812333+00:00 stderr F I0120 10:58:20.022780 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2026-01-20T10:58:20.022812333+00:00 stderr F I0120 10:58:20.022791 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2026-01-20T10:58:20.022812333+00:00 stderr F I0120 10:58:20.022797 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2026-01-20T10:58:20.022812333+00:00 stderr F I0120 10:58:20.022802 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2026-01-20T10:58:20.022812333+00:00 stderr F I0120 10:58:20.022805 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2026-01-20T10:58:20.022824324+00:00 stderr F I0120 
10:58:20.022810 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2026-01-20T10:58:20.022824324+00:00 stderr F I0120 10:58:20.022814 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2026-01-20T10:58:20.022824324+00:00 stderr F I0120 10:58:20.022817 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2026-01-20T10:58:20.022824324+00:00 stderr F I0120 10:58:20.022820 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2026-01-20T10:58:20.022833674+00:00 stderr F I0120 10:58:20.022823 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2026-01-20T10:58:20.022841204+00:00 stderr F I0120 10:58:20.022830 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2026-01-20T10:58:20.022841204+00:00 stderr F I0120 10:58:20.022835 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2026-01-20T10:58:20.022854666+00:00 stderr F I0120 10:58:20.022838 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2026-01-20T10:58:20.022854666+00:00 stderr F I0120 10:58:20.022843 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2026-01-20T10:58:20.022854666+00:00 stderr F I0120 10:58:20.022846 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2026-01-20T10:58:20.022854666+00:00 stderr F I0120 10:58:20.022850 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2026-01-20T10:58:20.022863026+00:00 stderr F I0120 10:58:20.022853 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:58:20.022863026+00:00 stderr F I0120 10:58:20.022857 1 flags.go:64] FLAG: --cloud-config="" 2026-01-20T10:58:20.022863026+00:00 stderr F I0120 10:58:20.022860 1 flags.go:64] FLAG: --cloud-provider="" 2026-01-20T10:58:20.022871046+00:00 stderr F I0120 
10:58:20.022863 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2026-01-20T10:58:20.022879926+00:00 stderr F I0120 10:58:20.022869 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2026-01-20T10:58:20.022879926+00:00 stderr F I0120 10:58:20.022873 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2026-01-20T10:58:20.022889496+00:00 stderr F I0120 10:58:20.022876 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2026-01-20T10:58:20.022889496+00:00 stderr F I0120 10:58:20.022882 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2026-01-20T10:58:20.022889496+00:00 stderr F I0120 10:58:20.022885 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:20.022900037+00:00 stderr F I0120 10:58:20.022888 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2026-01-20T10:58:20.022900037+00:00 stderr F I0120 10:58:20.022892 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file="" 2026-01-20T10:58:20.022900037+00:00 stderr F I0120 10:58:20.022894 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2026-01-20T10:58:20.022910317+00:00 stderr F I0120 10:58:20.022897 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2026-01-20T10:58:20.022910317+00:00 stderr F I0120 10:58:20.022901 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2026-01-20T10:58:20.022910317+00:00 stderr F I0120 10:58:20.022904 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2026-01-20T10:58:20.022910317+00:00 stderr F I0120 10:58:20.022907 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2026-01-20T10:58:20.022918957+00:00 stderr F I0120 10:58:20.022910 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2026-01-20T10:58:20.022918957+00:00 stderr F I0120 
10:58:20.022913 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2026-01-20T10:58:20.022926477+00:00 stderr F I0120 10:58:20.022917 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2026-01-20T10:58:20.022926477+00:00 stderr F I0120 10:58:20.022920 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2026-01-20T10:58:20.022926477+00:00 stderr F I0120 10:58:20.022923 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2026-01-20T10:58:20.022934427+00:00 stderr F I0120 10:58:20.022926 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2026-01-20T10:58:20.022934427+00:00 stderr F I0120 10:58:20.022929 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2026-01-20T10:58:20.022946548+00:00 stderr F I0120 10:58:20.022933 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2026-01-20T10:58:20.022946548+00:00 stderr F I0120 10:58:20.022937 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2026-01-20T10:58:20.022946548+00:00 stderr F I0120 10:58:20.022939 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2026-01-20T10:58:20.022946548+00:00 stderr F I0120 10:58:20.022942 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2026-01-20T10:58:20.022954778+00:00 stderr F I0120 10:58:20.022945 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2026-01-20T10:58:20.022954778+00:00 stderr F I0120 10:58:20.022948 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2026-01-20T10:58:20.022954778+00:00 stderr F I0120 10:58:20.022951 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2026-01-20T10:58:20.022962968+00:00 stderr F I0120 10:58:20.022954 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2026-01-20T10:58:20.022962968+00:00 stderr F I0120 10:58:20.022957 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2026-01-20T10:58:20.022962968+00:00 stderr F I0120 10:58:20.022960 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2026-01-20T10:58:20.022971178+00:00 stderr F I0120 10:58:20.022963 1 
flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2026-01-20T10:58:20.022971178+00:00 stderr F I0120 10:58:20.022966 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2026-01-20T10:58:20.022978659+00:00 stderr F I0120 10:58:20.022968 1 flags.go:64] FLAG: --contention-profiling="false" 2026-01-20T10:58:20.022978659+00:00 stderr F I0120 10:58:20.022973 1 flags.go:64] FLAG: --controller-start-interval="0s" 2026-01-20T10:58:20.022986329+00:00 stderr F I0120 10:58:20.022975 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2026-01-20T10:58:20.022986329+00:00 stderr F I0120 10:58:20.022981 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2026-01-20T10:58:20.022993809+00:00 stderr F I0120 10:58:20.022983 1 flags.go:64] FLAG: --disabled-metrics="[]" 2026-01-20T10:58:20.022993809+00:00 stderr F I0120 10:58:20.022988 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2026-01-20T10:58:20.022993809+00:00 stderr F I0120 10:58:20.022991 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2026-01-20T10:58:20.023001839+00:00 stderr F I0120 10:58:20.022993 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2026-01-20T10:58:20.023001839+00:00 stderr F I0120 10:58:20.022997 1 flags.go:64] FLAG: --enable-leader-migration="false" 2026-01-20T10:58:20.023009549+00:00 stderr F I0120 10:58:20.023000 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2026-01-20T10:58:20.023009549+00:00 stderr F I0120 10:58:20.023003 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2026-01-20T10:58:20.023009549+00:00 stderr F I0120 10:58:20.023006 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2026-01-20T10:58:20.023086281+00:00 stderr F I0120 10:58:20.023009 1 flags.go:64] FLAG: 
--feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2026-01-20T10:58:20.023086281+00:00 stderr F I0120 10:58:20.023033 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2026-01-20T10:58:20.023086281+00:00 stderr F I0120 10:58:20.023037 1 flags.go:64] FLAG: --help="false" 2026-01-20T10:58:20.023086281+00:00 stderr F I0120 10:58:20.023040 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2026-01-20T10:58:20.023086281+00:00 stderr F I0120 10:58:20.023043 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2026-01-20T10:58:20.023086281+00:00 stderr F I0120 10:58:20.023046 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2026-01-20T10:58:20.023086281+00:00 stderr F I0120 10:58:20.023049 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2026-01-20T10:58:20.023086281+00:00 stderr F I0120 10:58:20.023052 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2026-01-20T10:58:20.023131862+00:00 stderr F I0120 10:58:20.023113 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2026-01-20T10:58:20.023131862+00:00 stderr F I0120 10:58:20.023123 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2026-01-20T10:58:20.023131862+00:00 stderr F I0120 10:58:20.023127 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2026-01-20T10:58:20.023140593+00:00 stderr F I0120 10:58:20.023130 1 flags.go:64] FLAG: --kube-api-burst="300" 2026-01-20T10:58:20.023140593+00:00 stderr F I0120 10:58:20.023135 1 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" 2026-01-20T10:58:20.023148123+00:00 stderr F I0120 10:58:20.023138 1 flags.go:64] FLAG: --kube-api-qps="150" 2026-01-20T10:58:20.023148123+00:00 stderr F I0120 10:58:20.023143 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2026-01-20T10:58:20.023155923+00:00 stderr F I0120 10:58:20.023147 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2026-01-20T10:58:20.023155923+00:00 stderr F I0120 10:58:20.023150 1 flags.go:64] FLAG: --leader-elect="true" 2026-01-20T10:58:20.023163663+00:00 stderr F I0120 10:58:20.023153 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2026-01-20T10:58:20.023163663+00:00 stderr F I0120 10:58:20.023157 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2026-01-20T10:58:20.023163663+00:00 stderr F I0120 10:58:20.023160 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2026-01-20T10:58:20.023171773+00:00 stderr F I0120 10:58:20.023163 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2026-01-20T10:58:20.023171773+00:00 stderr F I0120 10:58:20.023167 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2026-01-20T10:58:20.023179474+00:00 stderr F I0120 10:58:20.023169 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2026-01-20T10:58:20.023179474+00:00 stderr F I0120 10:58:20.023173 1 flags.go:64] FLAG: --leader-migration-config="" 2026-01-20T10:58:20.023179474+00:00 stderr F I0120 10:58:20.023176 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2026-01-20T10:58:20.023187594+00:00 stderr F I0120 10:58:20.023179 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2026-01-20T10:58:20.023194994+00:00 stderr F I0120 10:58:20.023182 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2026-01-20T10:58:20.023194994+00:00 stderr F I0120 10:58:20.023189 1 flags.go:64] FLAG: --log-json-split-stream="false" 
2026-01-20T10:58:20.023194994+00:00 stderr F I0120 10:58:20.023192 1 flags.go:64] FLAG: --logging-format="text" 2026-01-20T10:58:20.023202824+00:00 stderr F I0120 10:58:20.023195 1 flags.go:64] FLAG: --master="" 2026-01-20T10:58:20.023202824+00:00 stderr F I0120 10:58:20.023198 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2026-01-20T10:58:20.023210564+00:00 stderr F I0120 10:58:20.023201 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2026-01-20T10:58:20.023210564+00:00 stderr F I0120 10:58:20.023204 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2026-01-20T10:58:20.023210564+00:00 stderr F I0120 10:58:20.023207 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2026-01-20T10:58:20.023218905+00:00 stderr F I0120 10:58:20.023210 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2026-01-20T10:58:20.023218905+00:00 stderr F I0120 10:58:20.023214 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2026-01-20T10:58:20.023230815+00:00 stderr F I0120 10:58:20.023217 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2026-01-20T10:58:20.023230815+00:00 stderr F I0120 10:58:20.023221 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2026-01-20T10:58:20.023230815+00:00 stderr F I0120 10:58:20.023224 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2026-01-20T10:58:20.023230815+00:00 stderr F I0120 10:58:20.023227 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2026-01-20T10:58:20.023239245+00:00 stderr F I0120 10:58:20.023230 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 2026-01-20T10:58:20.023239245+00:00 stderr F I0120 10:58:20.023234 1 flags.go:64] FLAG: --node-monitor-period="5s" 2026-01-20T10:58:20.023246895+00:00 stderr F I0120 10:58:20.023237 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2026-01-20T10:58:20.023246895+00:00 stderr F I0120 10:58:20.023241 1 flags.go:64] FLAG: --node-sync-period="0s" 2026-01-20T10:58:20.023254976+00:00 stderr F I0120 10:58:20.023244 1 flags.go:64] 
FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2026-01-20T10:58:20.023254976+00:00 stderr F I0120 10:58:20.023248 1 flags.go:64] FLAG: --permit-address-sharing="false" 2026-01-20T10:58:20.023254976+00:00 stderr F I0120 10:58:20.023251 1 flags.go:64] FLAG: --permit-port-sharing="false" 2026-01-20T10:58:20.023263076+00:00 stderr F I0120 10:58:20.023255 1 flags.go:64] FLAG: --profiling="true" 2026-01-20T10:58:20.023263076+00:00 stderr F I0120 10:58:20.023258 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2026-01-20T10:58:20.023271746+00:00 stderr F I0120 10:58:20.023261 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2026-01-20T10:58:20.023271746+00:00 stderr F I0120 10:58:20.023265 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2026-01-20T10:58:20.023281166+00:00 stderr F I0120 10:58:20.023268 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2026-01-20T10:58:20.023281166+00:00 stderr F I0120 10:58:20.023273 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2026-01-20T10:58:20.023281166+00:00 stderr F I0120 10:58:20.023277 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2026-01-20T10:58:20.023291476+00:00 stderr F I0120 10:58:20.023280 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 2026-01-20T10:58:20.023291476+00:00 stderr F I0120 10:58:20.023283 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2026-01-20T10:58:20.023291476+00:00 stderr F I0120 10:58:20.023287 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:58:20.023301647+00:00 stderr F I0120 10:58:20.023291 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 
2026-01-20T10:58:20.023301647+00:00 stderr F I0120 10:58:20.023295 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2026-01-20T10:58:20.023311357+00:00 stderr F I0120 10:58:20.023300 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2026-01-20T10:58:20.023311357+00:00 stderr F I0120 10:58:20.023304 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2026-01-20T10:58:20.023311357+00:00 stderr F I0120 10:58:20.023307 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2026-01-20T10:58:20.023321257+00:00 stderr F I0120 10:58:20.023311 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2026-01-20T10:58:20.023321257+00:00 stderr F I0120 10:58:20.023314 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2026-01-20T10:58:20.023321257+00:00 stderr F I0120 10:58:20.023317 1 flags.go:64] FLAG: --secure-port="10257" 2026-01-20T10:58:20.023333907+00:00 stderr F I0120 10:58:20.023321 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2026-01-20T10:58:20.023333907+00:00 stderr F I0120 10:58:20.023325 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2026-01-20T10:58:20.023333907+00:00 stderr F I0120 10:58:20.023328 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2026-01-20T10:58:20.023341908+00:00 stderr F I0120 10:58:20.023331 1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500" 2026-01-20T10:58:20.023341908+00:00 stderr F I0120 10:58:20.023335 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2026-01-20T10:58:20.023366988+00:00 stderr F I0120 10:58:20.023339 1 flags.go:64] FLAG: 
--tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2026-01-20T10:58:20.023366988+00:00 stderr F I0120 10:58:20.023358 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2026-01-20T10:58:20.023366988+00:00 stderr F I0120 10:58:20.023361 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:58:20.023375329+00:00 stderr F I0120 10:58:20.023365 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2026-01-20T10:58:20.023375329+00:00 stderr F I0120 10:58:20.023370 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2026-01-20T10:58:20.023382809+00:00 stderr F I0120 10:58:20.023374 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2026-01-20T10:58:20.023382809+00:00 stderr F I0120 10:58:20.023378 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2026-01-20T10:58:20.023390279+00:00 stderr F I0120 10:58:20.023381 1 flags.go:64] FLAG: --v="2" 2026-01-20T10:58:20.023390279+00:00 stderr F I0120 10:58:20.023385 1 flags.go:64] FLAG: --version="false" 2026-01-20T10:58:20.023397769+00:00 stderr F I0120 10:58:20.023390 1 flags.go:64] FLAG: --vmodule="" 2026-01-20T10:58:20.023397769+00:00 stderr F I0120 10:58:20.023394 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2026-01-20T10:58:20.023405099+00:00 stderr F I0120 10:58:20.023397 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2026-01-20T10:58:20.025908303+00:00 stderr F I0120 10:58:20.025865 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:58:20.268937845+00:00 stderr F I0120 10:58:20.268840 1 
dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:58:20.268937845+00:00 stderr F I0120 10:58:20.268919 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:58:20.271963202+00:00 stderr F I0120 10:58:20.271419 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2026-01-20T10:58:20.271963202+00:00 stderr F I0120 10:58:20.271439 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2026-01-20T10:58:20.273793119+00:00 stderr F I0120 10:58:20.273706 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:58:20.273906492+00:00 stderr F I0120 10:58:20.273855 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:58:20.27382816 +0000 UTC))" 2026-01-20T10:58:20.273932252+00:00 stderr F I0120 10:58:20.273881 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:58:20.273932252+00:00 stderr F I0120 10:58:20.273919 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] 
issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:58:20.273872601 +0000 UTC))" 2026-01-20T10:58:20.273953793+00:00 stderr F I0120 10:58:20.273940 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:58:20.273927902 +0000 UTC))" 2026-01-20T10:58:20.273973823+00:00 stderr F I0120 10:58:20.273957 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:58:20.273945383 +0000 UTC))" 2026-01-20T10:58:20.273993054+00:00 stderr F I0120 10:58:20.273971 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:58:20.273961593 +0000 UTC))" 2026-01-20T10:58:20.274013634+00:00 stderr F I0120 10:58:20.273993 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:58:20.273977033 +0000 UTC))" 2026-01-20T10:58:20.274013634+00:00 stderr F I0120 10:58:20.274008 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:58:20.273998534 +0000 UTC))" 2026-01-20T10:58:20.274134127+00:00 stderr F I0120 10:58:20.274023 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:58:20.274013024 +0000 UTC))" 2026-01-20T10:58:20.274134127+00:00 stderr F I0120 10:58:20.274104 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:58:20.274049775 +0000 UTC))" 2026-01-20T10:58:20.274134127+00:00 stderr F I0120 
10:58:20.274125 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:58:20.274114617 +0000 UTC))" 2026-01-20T10:58:20.274184619+00:00 stderr F I0120 10:58:20.274142 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:58:20.274131727 +0000 UTC))" 2026-01-20T10:58:20.274480156+00:00 stderr F I0120 10:58:20.274375 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2026-01-20T10:58:20.274696022+00:00 stderr F I0120 10:58:20.274469 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2026-01-20 10:58:20.274452805 +0000 UTC))" 2026-01-20T10:58:20.275177655+00:00 stderr F I0120 
10:58:20.275089 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906700\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906700\" (2026-01-20 09:58:20 +0000 UTC to 2027-01-20 09:58:20 +0000 UTC (now=2026-01-20 10:58:20.275042361 +0000 UTC))" 2026-01-20T10:58:20.275177655+00:00 stderr F I0120 10:58:20.275133 1 secure_serving.go:213] Serving securely on [::]:10257 2026-01-20T10:58:20.275556554+00:00 stderr F I0120 10:58:20.275378 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:58:20.275556554+00:00 stderr F I0120 10:58:20.275414 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager... ././@LongLink0000644000000000000000000000031300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000264042715133657716033113 0ustar zuulzuul2025-08-13T20:08:12.802058725+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done' 2025-08-13T20:08:12.814964715+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-08-13T20:08:12.823497100+00:00 stderr F + '[' -n '' ']' 2025-08-13T20:08:12.826307440+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']' 2025-08-13T20:08:12.826307440+00:00 stderr F + echo 'Copying system trust bundle' 2025-08-13T20:08:12.826307440+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2025-08-13T20:08:12.826418804+00:00 stdout F Copying system trust 
bundle 2025-08-13T20:08:12.833987271+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2025-08-13T20:08:12.834642410+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false 
--feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false 
--feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.217037 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218038 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 
2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218143 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218936 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219007 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219115 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219181 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219238 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219374 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-08-13T20:08:13.219493834+00:00 stderr F W0813 20:08:13.219435 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-08-13T20:08:13.219503624+00:00 stderr F W0813 20:08:13.219496 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-08-13T20:08:13.219624757+00:00 stderr F W0813 20:08:13.219574 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-08-13T20:08:13.219694409+00:00 stderr F W0813 20:08:13.219655 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-08-13T20:08:13.219957897+00:00 stderr F W0813 20:08:13.219834 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-08-13T20:08:13.219980567+00:00 stderr F W0813 20:08:13.219954 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-08-13T20:08:13.220114111+00:00 stderr F W0813 20:08:13.220028 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-08-13T20:08:13.220130522+00:00 stderr F W0813 
20:08:13.220122 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-08-13T20:08:13.220338958+00:00 stderr F W0813 20:08:13.220244 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-08-13T20:08:13.220510753+00:00 stderr F W0813 20:08:13.220445 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-08-13T20:08:13.220660927+00:00 stderr F W0813 20:08:13.220583 1 feature_gate.go:227] unrecognized feature gate: Example 2025-08-13T20:08:13.220759810+00:00 stderr F W0813 20:08:13.220672 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-08-13T20:08:13.220759810+00:00 stderr F W0813 20:08:13.220746 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-08-13T20:08:13.220980366+00:00 stderr F W0813 20:08:13.220930 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-08-13T20:08:13.221078669+00:00 stderr F W0813 20:08:13.221034 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-08-13T20:08:13.221226703+00:00 stderr F W0813 20:08:13.221167 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-08-13T20:08:13.221321016+00:00 stderr F W0813 20:08:13.221259 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-08-13T20:08:13.221435869+00:00 stderr F W0813 20:08:13.221357 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-08-13T20:08:13.221449420+00:00 stderr F W0813 20:08:13.221439 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-08-13T20:08:13.221647945+00:00 stderr F W0813 20:08:13.221535 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-08-13T20:08:13.221647945+00:00 stderr F W0813 20:08:13.221628 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-08-13T20:08:13.221755538+00:00 stderr F W0813 20:08:13.221707 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 
2025-08-13T20:08:13.222715416+00:00 stderr F W0813 20:08:13.222675 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-08-13T20:08:13.222878071+00:00 stderr F W0813 20:08:13.222845 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-08-13T20:08:13.223023605+00:00 stderr F W0813 20:08:13.222982 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-08-13T20:08:13.223200670+00:00 stderr F W0813 20:08:13.223143 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-08-13T20:08:13.223277612+00:00 stderr F W0813 20:08:13.223248 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-08-13T20:08:13.223453957+00:00 stderr F W0813 20:08:13.223362 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-08-13T20:08:13.223503038+00:00 stderr F W0813 20:08:13.223473 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-08-13T20:08:13.223605581+00:00 stderr F W0813 20:08:13.223570 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-08-13T20:08:13.223913790+00:00 stderr F W0813 20:08:13.223758 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-08-13T20:08:13.224109056+00:00 stderr F W0813 20:08:13.223957 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-08-13T20:08:13.224109056+00:00 stderr F W0813 20:08:13.224079 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-08-13T20:08:13.224210499+00:00 stderr F W0813 20:08:13.224173 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-08-13T20:08:13.224326502+00:00 stderr F W0813 20:08:13.224289 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-08-13T20:08:13.224450766+00:00 stderr F W0813 20:08:13.224413 1 feature_gate.go:227] unrecognized feature gate: 
NewOLM 2025-08-13T20:08:13.224622561+00:00 stderr F W0813 20:08:13.224535 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-08-13T20:08:13.224868638+00:00 stderr F W0813 20:08:13.224731 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-08-13T20:08:13.224955920+00:00 stderr F W0813 20:08:13.224940 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-08-13T20:08:13.225143595+00:00 stderr F W0813 20:08:13.225056 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-08-13T20:08:13.225218398+00:00 stderr F W0813 20:08:13.225163 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-08-13T20:08:13.225303580+00:00 stderr F W0813 20:08:13.225268 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-08-13T20:08:13.226198776+00:00 stderr F W0813 20:08:13.225924 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-08-13T20:08:13.226198776+00:00 stderr F W0813 20:08:13.226080 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-08-13T20:08:13.226355810+00:00 stderr F W0813 20:08:13.226243 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.231603 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.231843 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232047 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232157 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232381 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 
20:08:13.232672 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232693 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232707 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232716 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232721 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232727 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232735 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232744 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232749 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232753 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232827 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232841 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232846 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232854 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232865 1 flags.go:64] FLAG: 
--cert-dir="/var/run/kubernetes" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232869 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232873 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232882 1 flags.go:64] FLAG: --cloud-config="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232915 1 flags.go:64] FLAG: --cloud-provider="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232920 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232931 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232935 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232942 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232947 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232951 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232961 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232965 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232969 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232973 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 
2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232977 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232981 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232985 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232988 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232994 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233000 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233004 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233008 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233013 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233018 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233022 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233026 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233029 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233033 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233062 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233067 1 flags.go:64] FLAG: 
--concurrent-service-endpoint-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233071 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233075 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233079 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233082 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233086 1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233090 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233094 1 flags.go:64] FLAG: --contention-profiling="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233098 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233106 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233137 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233143 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233155 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233160 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233165 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233169 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 
20:08:13.233174 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233196 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233202 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233206 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233253 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233259 1 flags.go:64] FLAG: --help="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233264 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233268 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233272 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233276 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233279 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233284 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 
20:08:13.233294 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233298 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233303 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233308 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233312 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-08-13T20:08:13.234484243+00:00 stderr P I0813 20:08:13.233319 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps 2025-08-13T20:08:13.234855374+00:00 stderr F /controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233326 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233334 1 flags.go:64] FLAG: --leader-elect="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233339 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233343 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233350 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233354 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233358 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233362 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233366 1 flags.go:64] FLAG: --leader-migration-config="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 
20:08:13.233370 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233374 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233378 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233386 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233390 1 flags.go:64] FLAG: --logging-format="text" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233396 1 flags.go:64] FLAG: --master="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233400 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233404 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233408 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233412 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233420 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233425 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233429 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233433 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233436 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233442 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233447 1 flags.go:64] FLAG: 
--node-monitor-grace-period="40s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233451 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233454 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233458 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233462 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233467 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233471 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233474 1 flags.go:64] FLAG: --profiling="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233478 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233482 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233488 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233492 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233501 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233507 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233511 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233515 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233524 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233529 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233539 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233550 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233564 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233568 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233573 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233577 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233581 1 flags.go:64] FLAG: --secure-port="10257" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233586 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233590 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233594 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233598 1 flags.go:64] FLAG: 
--terminated-pod-gc-threshold="12500" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233602 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233610 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233635 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233723 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233729 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233737 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233743 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233748 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233754 1 flags.go:64] FLAG: --v="2" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233761 1 flags.go:64] FLAG: --version="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233769 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233833 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233840 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2025-08-13T20:08:13.247379713+00:00 stderr F I0813 20:08:13.246721 1 
dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:13.820033282+00:00 stderr F I0813 20:08:13.816639 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:08:13.820033282+00:00 stderr F I0813 20:08:13.816752 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:08:13.827928258+00:00 stderr F I0813 20:08:13.827555 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2025-08-13T20:08:13.827928258+00:00 stderr F I0813 20:08:13.827621 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-08-13T20:08:13.839544281+00:00 stderr F I0813 20:08:13.839462 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:13.839409377 +0000 UTC))" 2025-08-13T20:08:13.839544281+00:00 stderr F I0813 20:08:13.839535 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:13.83951253 +0000 UTC))" 
2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839555 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:13.839542801 +0000 UTC))" 2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839572 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:13.839560951 +0000 UTC))" 2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839587 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839576672 +0000 UTC))" 2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839604 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 
+0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839593662 +0000 UTC))" 2025-08-13T20:08:13.839631154+00:00 stderr F I0813 20:08:13.839620 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839609253 +0000 UTC))" 2025-08-13T20:08:13.839643864+00:00 stderr F I0813 20:08:13.839635 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:13.839624983 +0000 UTC))" 2025-08-13T20:08:13.839692735+00:00 stderr F I0813 20:08:13.839651 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839640424 +0000 UTC))" 2025-08-13T20:08:13.839711196+00:00 stderr F I0813 20:08:13.839702 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 
certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839680665 +0000 UTC))" 2025-08-13T20:08:13.840731445+00:00 stderr F I0813 20:08:13.840547 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:13.840525139 +0000 UTC))" 2025-08-13T20:08:13.841108246+00:00 stderr F I0813 20:08:13.841055 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115693\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:13.841028684 +0000 UTC))" 2025-08-13T20:08:13.841128376+00:00 stderr F I0813 20:08:13.841103 1 secure_serving.go:213] Serving securely on [::]:10257 2025-08-13T20:08:13.841605360+00:00 stderr F I0813 20:08:13.841538 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:08:13.841704753+00:00 stderr F I0813 20:08:13.841650 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:13.841704753+00:00 stderr F I0813 20:08:13.841657 1 dynamic_cafile_content.go:157] "Starting controller" 
name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:08:13.842010682+00:00 stderr F I0813 20:08:13.841932 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:08:13.844290897+00:00 stderr F I0813 20:08:13.844225 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager... 2025-08-13T20:08:24.708357459+00:00 stderr F E0813 20:08:24.708183 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:28.619865836+00:00 stderr F E0813 20:08:28.618751 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:33.289069024+00:00 stderr F E0813 20:08:33.288206 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:37.779061605+00:00 stderr F E0813 20:08:37.778956 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:47.305208117+00:00 stderr F I0813 20:08:47.304304 1 leaderelection.go:260] successfully 
acquired lease kube-system/kube-controller-manager 2025-08-13T20:08:47.307865224+00:00 stderr F I0813 20:08:47.306050 1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="crc_02346a15-e302-4734-adfc-ae3167a2c006 became leader" 2025-08-13T20:08:47.314085082+00:00 stderr F I0813 20:08:47.313970 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-token-controller" 2025-08-13T20:08:47.319254970+00:00 stderr F I0813 20:08:47.319158 1 controllermanager.go:787] "Started controller" controller="serviceaccount-token-controller" 2025-08-13T20:08:47.319254970+00:00 stderr F I0813 20:08:47.319196 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-protection-controller" 2025-08-13T20:08:47.319562919+00:00 stderr F I0813 20:08:47.319326 1 shared_informer.go:311] Waiting for caches to sync for tokens 2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.330725 1 controllermanager.go:787] "Started controller" controller="persistentvolume-protection-controller" 2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.330769 1 controllermanager.go:756] "Starting controller" controller="cronjob-controller" 2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.331075 1 pv_protection_controller.go:78] "Starting PV protection controller" 2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.331085 1 shared_informer.go:311] Waiting for caches to sync for PV protection 2025-08-13T20:08:47.345728149+00:00 stderr F I0813 20:08:47.345645 1 controllermanager.go:787] "Started controller" controller="cronjob-controller" 2025-08-13T20:08:47.345728149+00:00 stderr F I0813 20:08:47.345692 1 controllermanager.go:756] "Starting controller" controller="node-lifecycle-controller" 2025-08-13T20:08:47.348831908+00:00 stderr F I0813 20:08:47.346065 1 cronjob_controllerv2.go:139] "Starting cronjob 
controller v2" 2025-08-13T20:08:47.348831908+00:00 stderr F I0813 20:08:47.346100 1 shared_informer.go:311] Waiting for caches to sync for cronjob 2025-08-13T20:08:47.351836084+00:00 stderr F I0813 20:08:47.349161 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:47.359707530+00:00 stderr F I0813 20:08:47.359642 1 node_lifecycle_controller.go:425] "Controller will reconcile labels" 2025-08-13T20:08:47.362833019+00:00 stderr F I0813 20:08:47.360031 1 controllermanager.go:787] "Started controller" controller="node-lifecycle-controller" 2025-08-13T20:08:47.362833019+00:00 stderr F I0813 20:08:47.360095 1 controllermanager.go:756] "Starting controller" controller="service-lb-controller" 2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.380943 1 node_lifecycle_controller.go:459] "Sending events to api server" 2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.380996 1 node_lifecycle_controller.go:470] "Starting node controller" 2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.381006 1 shared_informer.go:311] Waiting for caches to sync for taint 2025-08-13T20:08:47.400273773+00:00 stderr F E0813 20:08:47.400179 1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" 2025-08-13T20:08:47.400273773+00:00 stderr F I0813 20:08:47.400230 1 controllermanager.go:765] "Warning: skipping controller" controller="service-lb-controller" 2025-08-13T20:08:47.400273773+00:00 stderr F I0813 20:08:47.400244 1 controllermanager.go:756] "Starting controller" controller="persistentvolumeclaim-protection-controller" 2025-08-13T20:08:47.409490347+00:00 stderr F I0813 20:08:47.409417 1 controllermanager.go:787] "Started controller" controller="persistentvolumeclaim-protection-controller" 2025-08-13T20:08:47.409490347+00:00 stderr F I0813 20:08:47.409466 1 controllermanager.go:756] "Starting controller" 
controller="persistentvolume-expander-controller" 2025-08-13T20:08:47.412744711+00:00 stderr F I0813 20:08:47.409730 1 pvc_protection_controller.go:102] "Starting PVC protection controller" 2025-08-13T20:08:47.412744711+00:00 stderr F I0813 20:08:47.409761 1 shared_informer.go:311] Waiting for caches to sync for PVC protection 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413923 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413966 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413977 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413985 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414091 1 controllermanager.go:787] "Started controller" controller="persistentvolume-expander-controller" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414103 1 controllermanager.go:756] "Starting controller" controller="namespace-controller" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414324 1 expand_controller.go:328] "Starting expand controller" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414337 1 shared_informer.go:311] Waiting for caches to sync for expand 2025-08-13T20:08:47.484767685+00:00 stderr F I0813 20:08:47.484662 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.511982 1 controllermanager.go:787] "Started controller" controller="namespace-controller" 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512070 1 controllermanager.go:750] "Warning: controller is disabled" controller="bootstrap-signer-controller" 
2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512082 1 controllermanager.go:756] "Starting controller" controller="cloud-node-lifecycle-controller" 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512320 1 namespace_controller.go:197] "Starting namespace controller" 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512331 1 shared_informer.go:311] Waiting for caches to sync for namespace 2025-08-13T20:08:47.518661417+00:00 stderr F E0813 20:08:47.518410 1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" 2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518457 1 controllermanager.go:765] "Warning: skipping controller" controller="cloud-node-lifecycle-controller" 2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518501 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"] 2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518510 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"] 2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518523 1 controllermanager.go:756] "Starting controller" controller="taint-eviction-controller" 2025-08-13T20:08:47.521696374+00:00 stderr F I0813 20:08:47.521606 1 shared_informer.go:318] Caches are synced for tokens 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528370 1 controllermanager.go:787] "Started controller" controller="taint-eviction-controller" 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528423 1 controllermanager.go:756] "Starting controller" controller="pod-garbage-collector-controller" 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528595 1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller" 
2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528649 1 taint_eviction.go:291] "Sending events to api server" 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528669 1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller 2025-08-13T20:08:47.533961206+00:00 stderr F I0813 20:08:47.533595 1 controllermanager.go:787] "Started controller" controller="pod-garbage-collector-controller" 2025-08-13T20:08:47.533961206+00:00 stderr F I0813 20:08:47.533691 1 controllermanager.go:756] "Starting controller" controller="job-controller" 2025-08-13T20:08:47.542092599+00:00 stderr F I0813 20:08:47.538074 1 gc_controller.go:101] "Starting GC controller" 2025-08-13T20:08:47.542092599+00:00 stderr F I0813 20:08:47.538152 1 shared_informer.go:311] Waiting for caches to sync for GC 2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543498 1 controllermanager.go:787] "Started controller" controller="job-controller" 2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543543 1 controllermanager.go:756] "Starting controller" controller="deployment-controller" 2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543719 1 job_controller.go:224] "Starting job controller" 2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543748 1 shared_informer.go:311] Waiting for caches to sync for job 2025-08-13T20:08:47.556115701+00:00 stderr F I0813 20:08:47.555921 1 controllermanager.go:787] "Started controller" controller="deployment-controller" 2025-08-13T20:08:47.559824737+00:00 stderr F I0813 20:08:47.559712 1 deployment_controller.go:168] "Starting controller" controller="deployment" 2025-08-13T20:08:47.559824737+00:00 stderr F I0813 20:08:47.559810 1 shared_informer.go:311] Waiting for caches to sync for deployment 2025-08-13T20:08:47.565910802+00:00 stderr F I0813 20:08:47.555967 1 controllermanager.go:756] "Starting controller" controller="service-ca-certificate-publisher-controller" 
2025-08-13T20:08:47.578665598+00:00 stderr F I0813 20:08:47.578540 1 controllermanager.go:787] "Started controller" controller="service-ca-certificate-publisher-controller" 2025-08-13T20:08:47.578665598+00:00 stderr F I0813 20:08:47.578585 1 controllermanager.go:756] "Starting controller" controller="ephemeral-volume-controller" 2025-08-13T20:08:47.578839603+00:00 stderr F I0813 20:08:47.578749 1 publisher.go:80] Starting service CA certificate configmap publisher 2025-08-13T20:08:47.578839603+00:00 stderr F I0813 20:08:47.578821 1 shared_informer.go:311] Waiting for caches to sync for crt configmap 2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593446 1 controllermanager.go:787] "Started controller" controller="ephemeral-volume-controller" 2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593506 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"] 2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593563 1 controllermanager.go:756] "Starting controller" controller="resourcequota-controller" 2025-08-13T20:08:47.594115581+00:00 stderr F I0813 20:08:47.593956 1 controller.go:169] "Starting ephemeral volume controller" 2025-08-13T20:08:47.594115581+00:00 stderr F I0813 20:08:47.593990 1 shared_informer.go:311] Waiting for caches to sync for ephemeral 2025-08-13T20:08:47.663499480+00:00 stderr F I0813 20:08:47.663405 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org" 2025-08-13T20:08:47.663592503+00:00 stderr F I0813 20:08:47.663576 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io" 2025-08-13T20:08:47.663757647+00:00 stderr F I0813 20:08:47.663737 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 
2025-08-13T20:08:47.663855670+00:00 stderr F I0813 20:08:47.663838 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps" 2025-08-13T20:08:47.664090207+00:00 stderr F I0813 20:08:47.664064 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling" 2025-08-13T20:08:47.664204100+00:00 stderr F I0813 20:08:47.664180 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io" 2025-08-13T20:08:47.664274882+00:00 stderr F I0813 20:08:47.664256 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com" 2025-08-13T20:08:47.665563459+00:00 stderr F I0813 20:08:47.665538 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io" 2025-08-13T20:08:47.672934471+00:00 stderr F I0813 20:08:47.665683 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2025-08-13T20:08:47.681052003+00:00 stderr F I0813 20:08:47.680988 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com" 2025-08-13T20:08:47.681210638+00:00 stderr F I0813 20:08:47.681193 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2025-08-13T20:08:47.681275660+00:00 stderr F I0813 20:08:47.681261 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints" 2025-08-13T20:08:47.681527937+00:00 stderr F I0813 20:08:47.681514 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps" 2025-08-13T20:08:47.681640190+00:00 stderr F I0813 20:08:47.681622 1 resource_quota_monitor.go:224] "QuotaMonitor 
created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2025-08-13T20:08:47.681755253+00:00 stderr F I0813 20:08:47.681735 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io" 2025-08-13T20:08:47.681918398+00:00 stderr F I0813 20:08:47.681874 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io" 2025-08-13T20:08:47.682005181+00:00 stderr F I0813 20:08:47.681990 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io" 2025-08-13T20:08:47.682088773+00:00 stderr F I0813 20:08:47.682074 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io" 2025-08-13T20:08:47.682201246+00:00 stderr F I0813 20:08:47.682146 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com" 2025-08-13T20:08:47.682254948+00:00 stderr F I0813 20:08:47.682241 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2025-08-13T20:08:47.689962349+00:00 stderr F I0813 20:08:47.682335 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io" 2025-08-13T20:08:47.690404331+00:00 stderr F I0813 20:08:47.690360 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io" 2025-08-13T20:08:47.690549246+00:00 stderr F I0813 20:08:47.690490 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="imagestreams.image.openshift.io" 2025-08-13T20:08:47.690656209+00:00 stderr F I0813 20:08:47.690633 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" 
2025-08-13T20:08:47.690863845+00:00 stderr F I0813 20:08:47.690813 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io" 2025-08-13T20:08:47.690983548+00:00 stderr F I0813 20:08:47.690962 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch" 2025-08-13T20:08:47.691075101+00:00 stderr F I0813 20:08:47.691034 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy" 2025-08-13T20:08:47.691162583+00:00 stderr F I0813 20:08:47.691142 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" 2025-08-13T20:08:47.691261316+00:00 stderr F I0813 20:08:47.691239 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io" 2025-08-13T20:08:47.691387000+00:00 stderr F I0813 20:08:47.691364 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2025-08-13T20:08:47.691459882+00:00 stderr F I0813 20:08:47.691440 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2025-08-13T20:08:47.691533464+00:00 stderr F I0813 20:08:47.691516 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io" 2025-08-13T20:08:47.691663838+00:00 stderr F I0813 20:08:47.691614 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2025-08-13T20:08:47.691750050+00:00 stderr F I0813 20:08:47.691730 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io" 2025-08-13T20:08:47.693725277+00:00 stderr F I0813 20:08:47.691869 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com" 2025-08-13T20:08:47.693972164+00:00 stderr F I0813 20:08:47.693888 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2025-08-13T20:08:47.694075027+00:00 stderr F I0813 20:08:47.694053 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com" 2025-08-13T20:08:47.694980603+00:00 stderr F I0813 20:08:47.694954 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io" 2025-08-13T20:08:47.695050235+00:00 stderr F I0813 20:08:47.695036 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io" 2025-08-13T20:08:47.695113096+00:00 stderr F I0813 20:08:47.695099 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org" 2025-08-13T20:08:47.695176268+00:00 stderr F I0813 20:08:47.695162 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com" 2025-08-13T20:08:47.695549889+00:00 stderr F I0813 20:08:47.695534 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts" 2025-08-13T20:08:47.695709674+00:00 stderr F I0813 20:08:47.695690 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io" 2025-08-13T20:08:47.695829957+00:00 stderr F I0813 20:08:47.695765 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io" 2025-08-13T20:08:47.695972371+00:00 stderr F I0813 20:08:47.695949 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" 
resource="dnsrecords.ingress.operator.openshift.io" 2025-08-13T20:08:47.696053863+00:00 stderr F I0813 20:08:47.696037 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io" 2025-08-13T20:08:47.697097493+00:00 stderr F I0813 20:08:47.696100 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io" 2025-08-13T20:08:47.697188466+00:00 stderr F I0813 20:08:47.697170 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com" 2025-08-13T20:08:47.697251688+00:00 stderr F I0813 20:08:47.697237 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com" 2025-08-13T20:08:47.697311319+00:00 stderr F I0813 20:08:47.697298 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com" 2025-08-13T20:08:47.697363921+00:00 stderr F I0813 20:08:47.697350 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2025-08-13T20:08:47.697460114+00:00 stderr F I0813 20:08:47.697441 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges" 2025-08-13T20:08:47.697514945+00:00 stderr F I0813 20:08:47.697502 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps" 2025-08-13T20:08:47.699990416+00:00 stderr F I0813 20:08:47.699964 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io" 2025-08-13T20:08:47.700063808+00:00 stderr F I0813 20:08:47.700049 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com" 2025-08-13T20:08:47.700124190+00:00 stderr 
F I0813 20:08:47.700110 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2025-08-13T20:08:47.700181062+00:00 stderr F I0813 20:08:47.700167 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io" 2025-08-13T20:08:47.700241273+00:00 stderr F I0813 20:08:47.700227 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com" 2025-08-13T20:08:47.700302275+00:00 stderr F I0813 20:08:47.700289 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch" 2025-08-13T20:08:47.700361927+00:00 stderr F I0813 20:08:47.700348 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io" 2025-08-13T20:08:47.700418678+00:00 stderr F I0813 20:08:47.700404 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io" 2025-08-13T20:08:47.700488130+00:00 stderr F I0813 20:08:47.700473 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io" 2025-08-13T20:08:47.700552982+00:00 stderr F I0813 20:08:47.700539 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org" 2025-08-13T20:08:47.700609754+00:00 stderr F I0813 20:08:47.700595 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com" 2025-08-13T20:08:47.708060808+00:00 stderr F I0813 20:08:47.708031 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io" 2025-08-13T20:08:47.708192761+00:00 stderr F I0813 20:08:47.708170 1 controllermanager.go:787] 
"Started controller" controller="resourcequota-controller" 2025-08-13T20:08:47.708234283+00:00 stderr F I0813 20:08:47.708222 1 controllermanager.go:756] "Starting controller" controller="statefulset-controller" 2025-08-13T20:08:47.708613683+00:00 stderr F I0813 20:08:47.708589 1 resource_quota_controller.go:294] "Starting resource quota controller" 2025-08-13T20:08:47.708658455+00:00 stderr F I0813 20:08:47.708645 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:47.708983164+00:00 stderr F I0813 20:08:47.708956 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-08-13T20:08:47.717629892+00:00 stderr F I0813 20:08:47.717537 1 controllermanager.go:787] "Started controller" controller="statefulset-controller" 2025-08-13T20:08:47.717629892+00:00 stderr F I0813 20:08:47.717590 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-approving-controller" 2025-08-13T20:08:47.717967062+00:00 stderr F I0813 20:08:47.717887 1 stateful_set.go:161] "Starting stateful set controller" 2025-08-13T20:08:47.717967062+00:00 stderr F I0813 20:08:47.717946 1 shared_informer.go:311] Waiting for caches to sync for stateful set 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725686 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-approving-controller" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725735 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-attach-detach-controller" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725962 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, 
Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs 
monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.726044 1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.726061 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving 2025-08-13T20:08:47.731523350+00:00 stderr F W0813 20:08:47.731379 1 probe.go:268] Flexvolume plugin directory at /etc/kubernetes/kubelet-plugins/volume/exec does not exist. Recreating. 
2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733142 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733180 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733191 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733199 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733236 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" 2025-08-13T20:08:47.733397194+00:00 stderr F I0813 20:08:47.733310 1 controllermanager.go:787] "Started controller" controller="persistentvolume-attach-detach-controller" 2025-08-13T20:08:47.733397194+00:00 stderr F I0813 20:08:47.733323 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-controller" 2025-08-13T20:08:47.734055003+00:00 stderr F I0813 20:08:47.733559 1 attach_detach_controller.go:337] "Starting attach detach controller" 2025-08-13T20:08:47.734055003+00:00 stderr F I0813 20:08:47.733597 1 shared_informer.go:311] Waiting for caches to sync for attach detach 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742516 1 controllermanager.go:787] "Started controller" controller="serviceaccount-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742608 1 controllermanager.go:756] "Starting controller" controller="node-ipam-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742622 1 controllermanager.go:765] "Warning: skipping controller" controller="node-ipam-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742629 1 controllermanager.go:756] "Starting controller" controller="root-ca-certificate-publisher-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 
20:08:47.743502 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.743515 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748654 1 controllermanager.go:787] "Started controller" controller="root-ca-certificate-publisher-controller" 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748695 1 controllermanager.go:750] "Warning: controller is disabled" controller="token-cleaner-controller" 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748706 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-binder-controller" 2025-08-13T20:08:47.751055660+00:00 stderr F I0813 20:08:47.750996 1 publisher.go:102] "Starting root CA cert publisher controller" 2025-08-13T20:08:47.751055660+00:00 stderr F I0813 20:08:47.751038 1 shared_informer.go:311] Waiting for caches to sync for crt configmap 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769926 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769963 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769979 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769987 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769994 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770003 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770016 1 plugins.go:642] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770071 1 controllermanager.go:787] "Started controller" controller="persistentvolume-binder-controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770094 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"] 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770106 1 controllermanager.go:756] "Starting controller" controller="endpoints-controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770573 1 pv_controller_base.go:319] "Starting persistent volume controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770661 1 shared_informer.go:311] Waiting for caches to sync for persistent volume 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.783877 1 controllermanager.go:787] "Started controller" controller="endpoints-controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.783943 1 controllermanager.go:756] "Starting controller" controller="garbage-collector-controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.784181 1 endpoints_controller.go:174] "Starting endpoint controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.784193 1 shared_informer.go:311] Waiting for caches to sync for endpoint 2025-08-13T20:08:47.820883362+00:00 stderr F I0813 20:08:47.820581 1 controllermanager.go:787] "Started controller" controller="garbage-collector-controller" 2025-08-13T20:08:47.820883362+00:00 stderr F I0813 20:08:47.820627 1 controllermanager.go:756] "Starting controller" controller="horizontal-pod-autoscaler-controller" 2025-08-13T20:08:47.830451327+00:00 stderr F I0813 20:08:47.830373 1 garbagecollector.go:155] "Starting controller" controller="garbagecollector" 2025-08-13T20:08:47.830451327+00:00 stderr F I0813 20:08:47.830427 1 
shared_informer.go:311] Waiting for caches to sync for garbage collector 2025-08-13T20:08:47.830523069+00:00 stderr F I0813 20:08:47.830478 1 graph_builder.go:302] "Running" component="GraphBuilder" 2025-08-13T20:08:47.877555977+00:00 stderr F I0813 20:08:47.877463 1 controllermanager.go:787] "Started controller" controller="horizontal-pod-autoscaler-controller" 2025-08-13T20:08:47.877555977+00:00 stderr F I0813 20:08:47.877513 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-cleaner-controller" 2025-08-13T20:08:47.877706862+00:00 stderr F I0813 20:08:47.877652 1 horizontal.go:200] "Starting HPA controller" 2025-08-13T20:08:47.877706862+00:00 stderr F I0813 20:08:47.877683 1 shared_informer.go:311] Waiting for caches to sync for HPA 2025-08-13T20:08:47.894572865+00:00 stderr F I0813 20:08:47.894436 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-cleaner-controller" 2025-08-13T20:08:47.894572865+00:00 stderr F I0813 20:08:47.894485 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-signing-controller" 2025-08-13T20:08:47.894677348+00:00 stderr F I0813 20:08:47.894627 1 cleaner.go:83] "Starting CSR cleaner controller" 2025-08-13T20:08:47.906217069+00:00 stderr F I0813 20:08:47.906136 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.907055493+00:00 stderr F I0813 20:08:47.907018 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.907687251+00:00 stderr F I0813 20:08:47.907664 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.908426862+00:00 stderr F I0813 20:08:47.908405 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.908849514+00:00 stderr F I0813 20:08:47.908826 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-signing-controller" 2025-08-13T20:08:47.908945487+00:00 stderr F I0813 20:08:47.908923 1 controllermanager.go:756] "Starting controller" controller="node-route-controller" 2025-08-13T20:08:47.909004379+00:00 stderr F I0813 20:08:47.908989 1 core.go:290] "Will not configure cloud provider routes for allocate-node-cidrs" CIDRs=false routes=true 2025-08-13T20:08:47.909040140+00:00 stderr F I0813 20:08:47.909028 1 controllermanager.go:765] "Warning: skipping controller" controller="node-route-controller" 2025-08-13T20:08:47.909075031+00:00 stderr F I0813 20:08:47.909063 1 controllermanager.go:756] "Starting controller" controller="clusterrole-aggregation-controller" 2025-08-13T20:08:47.909285347+00:00 stderr F I0813 20:08:47.909267 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving" 2025-08-13T20:08:47.909325288+00:00 stderr F I0813 20:08:47.909313 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving 2025-08-13T20:08:47.909398970+00:00 stderr F I0813 20:08:47.909385 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client" 2025-08-13T20:08:47.909430071+00:00 stderr F I0813 20:08:47.909419 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client 2025-08-13T20:08:47.909487813+00:00 stderr F I0813 20:08:47.909474 1 
certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client" 2025-08-13T20:08:47.909518374+00:00 stderr F I0813 20:08:47.909507 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client 2025-08-13T20:08:47.909576075+00:00 stderr F I0813 20:08:47.909562 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown" 2025-08-13T20:08:47.909615386+00:00 stderr F I0813 20:08:47.909600 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown 2025-08-13T20:08:47.909668118+00:00 stderr F I0813 20:08:47.909653 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.909880414+00:00 stderr F I0813 20:08:47.909831 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.910015758+00:00 stderr F I0813 20:08:47.909999 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.910103910+00:00 stderr F I0813 20:08:47.910089 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.925983626+00:00 stderr P I0813 20:08:47.925923 1 garbagecollector.go:241] "syncing garbage collector with updated resources from discovery" attempt=1 diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes 
/v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apiserver.openshift.io/v1, Resource=apirequestcounts apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1, Resource=clusterautoscalers autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds certificates.k8s.io/v1, Resource=certificatesigningrequests config.openshift.io/v1, Resource=apiservers config.openshift.io/v1, Resource=authentications config.openshift.io/v1, Resource=builds config.openshift.io/v1, Resource=clusteroperators config.openshift.io/v1, Resource=clusterversions config.openshift.io/v1, Resource=consoles config.openshift.io/v1, Resource=dnses config.openshift.io/v1, Resource=featuregates config.openshift.io/v1, Resource=imagecontentpolicies config.openshift.io/v1, Resource=imagedigestmirrorsets config.openshift.io/v1, Resource=images config.openshift.io/v1, Resource=imagetagmirrorsets config.openshift.io/v1, Resource=infrastructures config.openshift.io/v1, Resource=ingresses config.openshift.io/v1, Resource=networks config.openshift.io/v1, Resource=nodes config.openshift.io/v1, Resource=oauths config.openshift.io/v1, 
Resource=operatorhubs config.openshift.io/v1, Resource=projects config.openshift.io/v1, Resource=proxies config.openshift.io/v1, Resource=schedulers console.openshift.io/v1, Resource=consoleclidownloads console.openshift.io/v1, Resource=consoleexternalloglinks console.openshift.io/v1, Resource=consolelinks console.openshift.io/v1, Resource=consolenotifications console.openshift.io/v1, Resource=consoleplugins console.openshift.io/v1, Resource=consolequickstarts console.openshift.io/v1, Resource=consolesamples console.openshift.io/v1, Resource=consoleyamlsamples controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1, Resource=prioritylevelconfigurations helm.openshift.io/v1beta1, Resource=helmchartrepositories helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=images image.openshift.io/v1, Resource=imagestreams imageregistry.operator.openshift.io/v1, Resource=configs imageregistry.operator.openshift.io/v1, Resource=imagepruners infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=adminpolicybasedexternalroutes k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressips k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, 
Resource=machinesets machineconfiguration.openshift.io/v1, Resource=containerruntimeconfigs machineconfiguration.openshift.io/v1, Resource=controllerconfigs machineconfiguration.openshift.io/v1, Resource=kubeletconfigs machineconfiguration.openshift.io/v1, Resource=machineconfigpools machineconfiguration.openshift.io/v1, Resource=machineconfigs migration.k8s.io/v1alpha1, Resource=storagestates migration.k8s.io/v1alpha1, Resource=storageversionmigrations monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses oauth.openshift.io/v1, Resource=oauthaccesstokens oauth.openshift.io/v1, Resource=oauthauthorizetokens oauth.openshift.io/v1, Resource=oauthclientauthorizations oauth.openshift.io/v1, Resource=oauthclients oauth.openshift.io/v1, Resource=useroauthaccesstokens operator.openshift.io/v1, Resource=authentications operator.openshift.io/v1, Resource=clustercsidrivers operator.openshift.io/v1, Resource=configs operator.openshift.io/v1, Resource=consoles operator.openshift.io/v1, Resource=csisnapshotcontrollers operator.openshift.io/v1, Resource=dnses operator.openshift.io/v1, Resource=etcds operator.openshift.io/v1, Resource=ingresscontrollers operator.openshift.io/v1, Resource=kubeapiservers operator.openshift.io/v1, 
Resource=kubecontrollermanagers operator.openshift.io/v1, Resource=kubeschedulers operator.openshift.io/v1, Resource=kubestorageversionmigrators operator.openshift.io/v1, Resource=machineconfigurations operator.openshift.io/v1, Resource=networks operator.openshift.io/v1, Resource=openshiftapiservers operator.openshift.io/v1, Resource=openshiftcontrollermanagers operator.openshift.io/v1, Resource=servicecas operator.openshift.io/v1, Resource=storages operator.openshift.io/v1alpha1, Resource=imagecontentsourcepolicies operators.coreos.com/v1, Resource=olmconfigs operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1, Resource=operators operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy.networking.k8s.io/v1alpha1, Resource=adminnetworkpolicies policy.networking.k8s.io/v1alpha1, Resource=baselineadminnetworkpolicies policy/v1, Resource=poddisruptionbudgets project.openshift.io/v1, Resource=projects quota.openshift.io/v1, Resource=clusterresourcequotas rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes samples.operator.openshift.io/v1, Resource=configs scheduling.k8s.io/v1, Resource=priorityclasses security.internal.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=securitycontextconstraints storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments t 2025-08-13T20:08:47.926037307+00:00 stderr F 
emplate.openshift.io/v1, Resource=brokertemplateinstances template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates user.openshift.io/v1, Resource=groups user.openshift.io/v1, Resource=identities user.openshift.io/v1, Resource=users whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2025-08-13T20:08:47.930503085+00:00 stderr F I0813 20:08:47.930474 1 controllermanager.go:787] "Started controller" controller="clusterrole-aggregation-controller" 2025-08-13T20:08:47.930555957+00:00 stderr F I0813 20:08:47.930542 1 controllermanager.go:756] "Starting controller" controller="ttl-after-finished-controller" 2025-08-13T20:08:47.930855115+00:00 stderr F I0813 20:08:47.930766 1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" 2025-08-13T20:08:47.930944148+00:00 stderr F I0813 20:08:47.930920 1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator 2025-08-13T20:08:47.942238932+00:00 stderr F I0813 20:08:47.942157 1 controllermanager.go:787] "Started controller" controller="ttl-after-finished-controller" 2025-08-13T20:08:47.942238932+00:00 stderr F I0813 20:08:47.942206 1 controllermanager.go:756] "Starting controller" controller="endpointslice-controller" 2025-08-13T20:08:47.942862410+00:00 stderr F I0813 20:08:47.942411 1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" 2025-08-13T20:08:47.942862410+00:00 stderr F I0813 20:08:47.942443 1 shared_informer.go:311] Waiting for caches to sync for TTL after finished 2025-08-13T20:08:47.951827457+00:00 stderr F I0813 20:08:47.951738 1 controllermanager.go:787] "Started controller" controller="endpointslice-controller" 2025-08-13T20:08:47.951940510+00:00 stderr F I0813 20:08:47.951885 1 controllermanager.go:756] "Starting controller" controller="endpointslice-mirroring-controller" 2025-08-13T20:08:47.952293440+00:00 
stderr F I0813 20:08:47.952270 1 endpointslice_controller.go:264] "Starting endpoint slice controller" 2025-08-13T20:08:47.952336421+00:00 stderr F I0813 20:08:47.952324 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice 2025-08-13T20:08:47.965276902+00:00 stderr F I0813 20:08:47.965247 1 controllermanager.go:787] "Started controller" controller="endpointslice-mirroring-controller" 2025-08-13T20:08:47.965335574+00:00 stderr F I0813 20:08:47.965322 1 controllermanager.go:756] "Starting controller" controller="daemonset-controller" 2025-08-13T20:08:47.965650653+00:00 stderr F I0813 20:08:47.965587 1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" 2025-08-13T20:08:47.965947521+00:00 stderr F I0813 20:08:47.965884 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring 2025-08-13T20:08:47.984336619+00:00 stderr F I0813 20:08:47.984295 1 controllermanager.go:787] "Started controller" controller="daemonset-controller" 2025-08-13T20:08:47.984405141+00:00 stderr F I0813 20:08:47.984391 1 controllermanager.go:756] "Starting controller" controller="disruption-controller" 2025-08-13T20:08:47.984636617+00:00 stderr F I0813 20:08:47.984617 1 daemon_controller.go:297] "Starting daemon sets controller" 2025-08-13T20:08:47.984675948+00:00 stderr F I0813 20:08:47.984663 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2025-08-13T20:08:48.022675888+00:00 stderr F I0813 20:08:48.022550 1 controllermanager.go:787] "Started controller" controller="disruption-controller" 2025-08-13T20:08:48.022675888+00:00 stderr F I0813 20:08:48.022596 1 controllermanager.go:756] "Starting controller" controller="replicationcontroller-controller" 2025-08-13T20:08:48.022881444+00:00 stderr F I0813 20:08:48.022826 1 disruption.go:433] "Sending events to api server." 
2025-08-13T20:08:48.022881444+00:00 stderr F I0813 20:08:48.022874 1 disruption.go:444] "Starting disruption controller" 2025-08-13T20:08:48.022925815+00:00 stderr F I0813 20:08:48.022885 1 shared_informer.go:311] Waiting for caches to sync for disruption 2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045213 1 controllermanager.go:787] "Started controller" controller="replicationcontroller-controller" 2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045261 1 controllermanager.go:756] "Starting controller" controller="replicaset-controller" 2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045467 1 replica_set.go:214] "Starting controller" name="replicationcontroller" 2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045477 1 shared_informer.go:311] Waiting for caches to sync for ReplicationController 2025-08-13T20:08:48.056757625+00:00 stderr F I0813 20:08:48.056717 1 controllermanager.go:787] "Started controller" controller="replicaset-controller" 2025-08-13T20:08:48.056887219+00:00 stderr F I0813 20:08:48.056870 1 controllermanager.go:750] "Warning: controller is disabled" controller="ttl-controller" 2025-08-13T20:08:48.056970581+00:00 stderr F I0813 20:08:48.056953 1 controllermanager.go:756] "Starting controller" controller="legacy-serviceaccount-token-cleaner-controller" 2025-08-13T20:08:48.057249199+00:00 stderr F I0813 20:08:48.057228 1 replica_set.go:214] "Starting controller" name="replicaset" 2025-08-13T20:08:48.057290110+00:00 stderr F I0813 20:08:48.057277 1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet 2025-08-13T20:08:48.074530845+00:00 stderr F I0813 20:08:48.074487 1 controllermanager.go:787] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller" 2025-08-13T20:08:48.077342405+00:00 stderr F I0813 20:08:48.076871 1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" 2025-08-13T20:08:48.077342405+00:00 stderr F 
I0813 20:08:48.076923 1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner 2025-08-13T20:08:48.086612211+00:00 stderr F I0813 20:08:48.085824 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:48.130402997+00:00 stderr F I0813 20:08:48.130305 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.138358525+00:00 stderr F I0813 20:08:48.138217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.170315431+00:00 stderr F I0813 20:08:48.170265 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.171449313+00:00 stderr F I0813 20:08:48.171427 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.172733700+00:00 stderr F I0813 20:08:48.172666 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.174917463+00:00 stderr F I0813 20:08:48.174283 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.175025206+00:00 stderr F I0813 20:08:48.174973 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.198988943+00:00 stderr F W0813 20:08:48.198926 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: 
connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:48.199176618+00:00 stderr F E0813 20:08:48.199153 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:48.207232879+00:00 stderr F I0813 20:08:48.207003 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.208400983+00:00 stderr F I0813 20:08:48.208321 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.216556407+00:00 stderr F I0813 20:08:48.215182 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.216579717+00:00 stderr F I0813 20:08:48.216549 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.216974529+00:00 stderr F I0813 20:08:48.216922 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217110883+00:00 stderr F I0813 20:08:48.216938 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217265997+00:00 stderr F I0813 20:08:48.217211 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217400161+00:00 
stderr F I0813 20:08:48.217342 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217544645+00:00 stderr F I0813 20:08:48.217463 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217544645+00:00 stderr F I0813 20:08:48.217534 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217710190+00:00 stderr F I0813 20:08:48.217663 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217933916+00:00 stderr F I0813 20:08:48.217865 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.218073750+00:00 stderr F I0813 20:08:48.218022 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.218409630+00:00 stderr F I0813 20:08:48.218346 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.218512183+00:00 stderr F I0813 20:08:48.218456 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.218690978+00:00 stderr F I0813 20:08:48.218631 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.220661534+00:00 stderr F I0813 20:08:48.220472 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.223578098+00:00 stderr F I0813 20:08:48.220985 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.224551166+00:00 stderr F I0813 20:08:48.221162 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"crc\" does not exist" 2025-08-13T20:08:48.224595457+00:00 stderr F I0813 20:08:48.221204 1 topologycache.go:217] "Ignoring node because it has an excluded label" node="crc" 2025-08-13T20:08:48.224743342+00:00 stderr F I0813 20:08:48.224717 1 topologycache.go:253] "Insufficient node info for topology hints" totalZones=0 totalCPU="0" sufficientNodeInfo=true 2025-08-13T20:08:48.224842944+00:00 stderr F I0813 20:08:48.221292 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.225055050+00:00 stderr F I0813 20:08:48.221631 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.225223615+00:00 stderr F I0813 20:08:48.221716 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.225371769+00:00 stderr F I0813 20:08:48.221719 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.225526074+00:00 stderr F W0813 20:08:48.221748 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:48.225670288+00:00 stderr F E0813 20:08:48.225648 1 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:48.225749350+00:00 stderr F I0813 20:08:48.221862 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-28658250" 2025-08-13T20:08:48.225880434+00:00 stderr F I0813 20:08:48.225847 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:08:48.225964516+00:00 stderr F I0813 20:08:48.225950 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:08:48.225994367+00:00 stderr F I0813 20:08:48.221888 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226186543+00:00 stderr F I0813 20:08:48.221990 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226397989+00:00 stderr F I0813 20:08:48.222054 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226537903+00:00 stderr F I0813 20:08:48.222075 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226714268+00:00 stderr F I0813 20:08:48.222116 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226981866+00:00 stderr F I0813 
20:08:48.217472 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.228448028+00:00 stderr F I0813 20:08:48.228393 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.230164247+00:00 stderr F I0813 20:08:48.230139 1 shared_informer.go:318] Caches are synced for taint-eviction-controller 2025-08-13T20:08:48.234610694+00:00 stderr F I0813 20:08:48.231951 1 shared_informer.go:318] Caches are synced for PV protection 2025-08-13T20:08:48.234610694+00:00 stderr F I0813 20:08:48.232593 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.238498946+00:00 stderr F I0813 20:08:48.238401 1 shared_informer.go:318] Caches are synced for GC 2025-08-13T20:08:48.242876641+00:00 stderr F I0813 20:08:48.242709 1 shared_informer.go:318] Caches are synced for TTL after finished 2025-08-13T20:08:48.244119587+00:00 stderr F I0813 20:08:48.244009 1 shared_informer.go:318] Caches are synced for job 2025-08-13T20:08:48.244119587+00:00 stderr F I0813 20:08:48.244088 1 shared_informer.go:318] Caches are synced for service account 2025-08-13T20:08:48.250977604+00:00 stderr F I0813 20:08:48.250638 1 shared_informer.go:318] Caches are synced for ReplicationController 2025-08-13T20:08:48.250977604+00:00 stderr F I0813 20:08:48.250684 1 shared_informer.go:318] Caches are synced for cronjob 2025-08-13T20:08:48.278232365+00:00 stderr F I0813 20:08:48.278135 1 shared_informer.go:318] Caches are synced for HPA 2025-08-13T20:08:48.279346197+00:00 stderr F I0813 20:08:48.279293 1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner 2025-08-13T20:08:48.298048733+00:00 stderr F I0813 20:08:48.297946 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:08:48.317593214+00:00 stderr F I0813 20:08:48.317526 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.318342615+00:00 stderr F I0813 20:08:48.318317 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.325951913+00:00 stderr F I0813 20:08:48.325858 1 shared_informer.go:318] Caches are synced for namespace 2025-08-13T20:08:48.331644416+00:00 stderr F I0813 20:08:48.331605 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.333685375+00:00 stderr F I0813 20:08:48.333658 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.338846663+00:00 stderr F I0813 20:08:48.338697 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.348167540+00:00 stderr F I0813 20:08:48.348048 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator 2025-08-13T20:08:48.350661502+00:00 stderr F I0813 20:08:48.350587 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.350875298+00:00 stderr F W0813 20:08:48.350769 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:48.352023321+00:00 stderr F E0813 20:08:48.351990 1 
reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:48.352316749+00:00 stderr F I0813 20:08:48.352292 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.352696890+00:00 stderr F I0813 20:08:48.352673 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.353075921+00:00 stderr F W0813 20:08:48.353051 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:48.354002427+00:00 stderr F E0813 20:08:48.353935 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:48.354090910+00:00 stderr F W0813 20:08:48.353565 1 
reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:48.354207563+00:00 stderr F E0813 20:08:48.354182 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:48.354292706+00:00 stderr F I0813 20:08:48.353672 1 reflector.go:351] Caches populated for *v1.VolumeAttachment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.354933774+00:00 stderr F W0813 20:08:48.353699 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:48.358727313+00:00 stderr F I0813 20:08:48.358592 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:08:48.359022251+00:00 stderr F I0813 20:08:48.358977 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359144925+00:00 stderr F I0813 20:08:48.353742 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359256398+00:00 stderr F I0813 20:08:48.359208 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359385592+00:00 stderr F W0813 20:08:48.359333 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:48.359401552+00:00 stderr F E0813 20:08:48.359384 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:48.359477644+00:00 stderr F I0813 20:08:48.356161 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359677240+00:00 stderr F I0813 20:08:48.359138 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.360197125+00:00 stderr F I0813 20:08:48.360133 1 reflector.go:351] Caches populated for *v1.RoleBindingRestriction from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.361672467+00:00 stderr F E0813 20:08:48.361440 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:48.375195585+00:00 stderr F I0813 20:08:48.375086 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.379486658+00:00 stderr F I0813 20:08:48.379428 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.379542450+00:00 stderr F I0813 20:08:48.379435 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.379935441+00:00 stderr F I0813 20:08:48.379854 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.381218408+00:00 stderr F I0813 20:08:48.381024 1 shared_informer.go:318] Caches are synced for persistent volume 2025-08-13T20:08:48.387830607+00:00 stderr F I0813 20:08:48.387722 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.388398514+00:00 stderr F I0813 20:08:48.388370 1 shared_informer.go:318] Caches 
are synced for daemon sets 2025-08-13T20:08:48.388496096+00:00 stderr F I0813 20:08:48.388480 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2025-08-13T20:08:48.388531187+00:00 stderr F I0813 20:08:48.388519 1 shared_informer.go:318] Caches are synced for daemon sets 2025-08-13T20:08:48.388627580+00:00 stderr F I0813 20:08:48.388610 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring 2025-08-13T20:08:48.388703572+00:00 stderr F I0813 20:08:48.388683 1 endpointslicemirroring_controller.go:230] "Starting worker threads" total=5 2025-08-13T20:08:48.389498105+00:00 stderr F I0813 20:08:48.389447 1 shared_informer.go:318] Caches are synced for taint 2025-08-13T20:08:48.389560367+00:00 stderr F I0813 20:08:48.389516 1 node_lifecycle_controller.go:676] "Controller observed a new Node" node="crc" 2025-08-13T20:08:48.389620629+00:00 stderr F I0813 20:08:48.389566 1 controller_utils.go:173] "Recording event message for node" event="Registered Node crc in Controller" node="crc" 2025-08-13T20:08:48.389633729+00:00 stderr F I0813 20:08:48.389620 1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone="" 2025-08-13T20:08:48.389888996+00:00 stderr F I0813 20:08:48.389735 1 node_lifecycle_controller.go:874] "Missing timestamp for Node. 
Assuming now as a timestamp" node="crc" 2025-08-13T20:08:48.391201484+00:00 stderr F I0813 20:08:48.391145 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.391743690+00:00 stderr F I0813 20:08:48.391691 1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal" 2025-08-13T20:08:48.393003736+00:00 stderr F I0813 20:08:48.392924 1 event.go:376] "Event occurred" object="crc" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node crc event: Registered Node crc in Controller" 2025-08-13T20:08:48.393402597+00:00 stderr F I0813 20:08:48.388428 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.395087505+00:00 stderr F I0813 20:08:48.395035 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.397462214+00:00 stderr F I0813 20:08:48.397421 1 shared_informer.go:318] Caches are synced for ephemeral 2025-08-13T20:08:48.400224593+00:00 stderr F I0813 20:08:48.400144 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.401080097+00:00 stderr F I0813 20:08:48.400979 1 shared_informer.go:318] Caches are synced for endpoint 2025-08-13T20:08:48.410372524+00:00 stderr F I0813 20:08:48.410296 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.410976841+00:00 stderr F I0813 20:08:48.410950 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.411347542+00:00 stderr F I0813 20:08:48.411308 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.411610529+00:00 stderr F I0813 20:08:48.411587 1 shared_informer.go:318] Caches are synced for PVC protection 2025-08-13T20:08:48.411818525+00:00 stderr F I0813 20:08:48.411718 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.412553236+00:00 stderr F I0813 20:08:48.412524 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.417402485+00:00 stderr F I0813 20:08:48.417332 1 shared_informer.go:318] Caches are synced for expand 2025-08-13T20:08:48.417578380+00:00 stderr F I0813 20:08:48.417554 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown 2025-08-13T20:08:48.417712944+00:00 stderr F I0813 20:08:48.417698 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client 2025-08-13T20:08:48.417850498+00:00 stderr F I0813 20:08:48.417732 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.418007713+00:00 stderr F I0813 20:08:48.417959 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving 2025-08-13T20:08:48.418052554+00:00 stderr F I0813 20:08:48.418037 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client 2025-08-13T20:08:48.418422424+00:00 stderr F I0813 20:08:48.418400 1 shared_informer.go:318] Caches are synced for stateful set 2025-08-13T20:08:48.431114778+00:00 stderr F I0813 20:08:48.423134 1 shared_informer.go:318] Caches are synced for disruption 2025-08-13T20:08:48.435616157+00:00 stderr F I0813 20:08:48.435579 1 shared_informer.go:318] Caches are synced for certificate-csrapproving 2025-08-13T20:08:48.438044067+00:00 stderr F I0813 20:08:48.438016 1 
shared_informer.go:318] Caches are synced for attach detach 2025-08-13T20:08:48.454388596+00:00 stderr F I0813 20:08:48.454249 1 shared_informer.go:318] Caches are synced for endpoint_slice 2025-08-13T20:08:48.454388596+00:00 stderr F I0813 20:08:48.454349 1 endpointslice_controller.go:271] "Starting worker threads" total=5 2025-08-13T20:08:48.460518801+00:00 stderr F I0813 20:08:48.460429 1 shared_informer.go:318] Caches are synced for deployment 2025-08-13T20:08:48.478241410+00:00 stderr F I0813 20:08:48.478170 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.503509074+00:00 stderr F I0813 20:08:48.503079 1 shared_informer.go:318] Caches are synced for ReplicaSet 2025-08-13T20:08:48.508104666+00:00 stderr F I0813 20:08:48.508060 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="131.204µs" 2025-08-13T20:08:48.508303131+00:00 stderr F I0813 20:08:48.508281 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="117.054µs" 2025-08-13T20:08:48.508427425+00:00 stderr F I0813 20:08:48.508411 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="74.312µs" 2025-08-13T20:08:48.508714653+00:00 stderr F I0813 20:08:48.508693 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="233.996µs" 2025-08-13T20:08:48.508920109+00:00 stderr F I0813 20:08:48.508873 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="117.344µs" 2025-08-13T20:08:48.509201057+00:00 stderr F I0813 20:08:48.509124 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="150.545µs" 2025-08-13T20:08:48.509321921+00:00 
stderr F I0813 20:08:48.509304 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="63.222µs" 2025-08-13T20:08:48.509433834+00:00 stderr F I0813 20:08:48.509416 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="64.282µs" 2025-08-13T20:08:48.509532907+00:00 stderr F I0813 20:08:48.509517 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="53.111µs" 2025-08-13T20:08:48.510116973+00:00 stderr F I0813 20:08:48.509758 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="193.206µs" 2025-08-13T20:08:48.510340060+00:00 stderr F I0813 20:08:48.510316 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="66.902µs" 2025-08-13T20:08:48.510444173+00:00 stderr F I0813 20:08:48.510428 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="52.671µs" 2025-08-13T20:08:48.510617848+00:00 stderr F I0813 20:08:48.510548 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="47.331µs" 2025-08-13T20:08:48.510865445+00:00 stderr F I0813 20:08:48.510842 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="184.695µs" 2025-08-13T20:08:48.511003849+00:00 stderr F I0813 20:08:48.510980 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="46.281µs" 2025-08-13T20:08:48.511213555+00:00 stderr F I0813 20:08:48.511193 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="67.052µs"
2025-08-13T20:08:48.511316458+00:00 stderr F I0813 20:08:48.511300 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="50.271µs"
2025-08-13T20:08:48.511487413+00:00 stderr F I0813 20:08:48.511471 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="123.113µs"
2025-08-13T20:08:48.511580455+00:00 stderr F I0813 20:08:48.511565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="46.872µs"
2025-08-13T20:08:48.511685768+00:00 stderr F I0813 20:08:48.511670 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="46.642µs"
2025-08-13T20:08:48.511837123+00:00 stderr F I0813 20:08:48.511762 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="44.761µs"
2025-08-13T20:08:48.512094350+00:00 stderr F I0813 20:08:48.512072 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="190.645µs"
2025-08-13T20:08:48.512197953+00:00 stderr F I0813 20:08:48.512179 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="51.971µs"
2025-08-13T20:08:48.512302406+00:00 stderr F I0813 20:08:48.512286 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="48.421µs"
2025-08-13T20:08:48.512395189+00:00 stderr F I0813 20:08:48.512380 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="48.661µs"
2025-08-13T20:08:48.515739085+00:00 stderr F I0813 20:08:48.515698 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="3.261564ms"
2025-08-13T20:08:48.515985072+00:00 stderr F I0813 20:08:48.515957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="99.363µs"
2025-08-13T20:08:48.516149796+00:00 stderr F I0813 20:08:48.516126 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="93.453µs"
2025-08-13T20:08:48.516332402+00:00 stderr F I0813 20:08:48.516313 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="94.853µs"
2025-08-13T20:08:48.516480176+00:00 stderr F I0813 20:08:48.516457 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="61.871µs"
2025-08-13T20:08:48.516847846+00:00 stderr F I0813 20:08:48.516823 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="304.438µs"
2025-08-13T20:08:48.517000331+00:00 stderr F I0813 20:08:48.516976 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="83.823µs"
2025-08-13T20:08:48.517119064+00:00 stderr F I0813 20:08:48.517101 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="60.702µs"
2025-08-13T20:08:48.517214557+00:00 stderr F I0813 20:08:48.517199 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-75cfd5db5d" duration="48.281µs"
2025-08-13T20:08:48.517416273+00:00 stderr F I0813 20:08:48.517399 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="149.344µs"
2025-08-13T20:08:48.517542806+00:00 stderr F I0813 20:08:48.517495 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="47.162µs"
2025-08-13T20:08:48.517640709+00:00 stderr F I0813 20:08:48.517625 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="47.821µs"
2025-08-13T20:08:48.517733182+00:00 stderr F I0813 20:08:48.517718 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="45.691µs"
2025-08-13T20:08:48.518174034+00:00 stderr F I0813 20:08:48.518154 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="147.475µs"
2025-08-13T20:08:48.518278607+00:00 stderr F I0813 20:08:48.518262 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="48.322µs"
2025-08-13T20:08:48.518372390+00:00 stderr F I0813 20:08:48.518357 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="47.482µs"
2025-08-13T20:08:48.518461153+00:00 stderr F I0813 20:08:48.518445 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="43.021µs"
2025-08-13T20:08:48.518642008+00:00 stderr F I0813 20:08:48.518625 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="133.144µs"
2025-08-13T20:08:48.518733881+00:00 stderr F I0813 20:08:48.518718 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="45.801µs"
2025-08-13T20:08:48.518917706+00:00 stderr F I0813 20:08:48.518871 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="104.853µs"
2025-08-13T20:08:48.519125312+00:00 stderr F I0813 20:08:48.519013 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="51.771µs"
2025-08-13T20:08:48.519227035+00:00 stderr F I0813 20:08:48.519212 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="52.222µs"
2025-08-13T20:08:48.519330148+00:00 stderr F I0813 20:08:48.519314 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="57.292µs"
2025-08-13T20:08:48.519425300+00:00 stderr F I0813 20:08:48.519409 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="45.112µs"
2025-08-13T20:08:48.519514883+00:00 stderr F I0813 20:08:48.519499 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="44.561µs"
2025-08-13T20:08:48.519698918+00:00 stderr F I0813 20:08:48.519682 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="136.063µs"
2025-08-13T20:08:48.519873833+00:00 stderr F I0813 20:08:48.519845 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="106.423µs"
2025-08-13T20:08:48.520035408+00:00 stderr F I0813 20:08:48.520011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="47.032µs"
2025-08-13T20:08:48.520123260+00:00 stderr F I0813 20:08:48.520108 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="38.331µs"
2025-08-13T20:08:48.520224123+00:00 stderr F I0813 20:08:48.520205 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="43.781µs"
2025-08-13T20:08:48.520428139+00:00 stderr F I0813 20:08:48.520411 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="156.254µs"
2025-08-13T20:08:48.520514642+00:00 stderr F I0813 20:08:48.520499 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="39.891µs"
2025-08-13T20:08:48.520614754+00:00 stderr F I0813 20:08:48.520599 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="44.941µs"
2025-08-13T20:08:48.520707577+00:00 stderr F I0813 20:08:48.520691 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="46.591µs"
2025-08-13T20:08:48.521066787+00:00 stderr F I0813 20:08:48.521045 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="304.829µs"
2025-08-13T20:08:48.521168150+00:00 stderr F I0813 20:08:48.521151 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="44.222µs"
2025-08-13T20:08:48.521297464+00:00 stderr F I0813 20:08:48.521282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="42.281µs"
2025-08-13T20:08:48.521387367+00:00 stderr F I0813 20:08:48.521369 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="40.101µs"
2025-08-13T20:08:48.521618653+00:00 stderr F I0813 20:08:48.521596 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="175.855µs"
2025-08-13T20:08:48.521724346+00:00 stderr F I0813 20:08:48.521709 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="56.512µs"
2025-08-13T20:08:48.521877411+00:00 stderr F I0813 20:08:48.521856 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="97.333µs"
2025-08-13T20:08:48.522025955+00:00 stderr F I0813 20:08:48.522009 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="60.672µs"
2025-08-13T20:08:48.522209420+00:00 stderr F I0813 20:08:48.522190 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="130.343µs"
2025-08-13T20:08:48.522306923+00:00 stderr F I0813 20:08:48.522291 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="47.221µs"
2025-08-13T20:08:48.522396456+00:00 stderr F I0813 20:08:48.522378 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="40.661µs"
2025-08-13T20:08:48.522477938+00:00 stderr F I0813 20:08:48.522462 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="35.511µs"
2025-08-13T20:08:48.522561200+00:00 stderr F I0813 20:08:48.522546 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="38.981µs"
2025-08-13T20:08:48.522763826+00:00 stderr F I0813 20:08:48.522744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="123.553µs"
2025-08-13T20:08:48.523657262+00:00 stderr F I0813 20:08:48.523633 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="58.202µs"
2025-08-13T20:08:48.523737074+00:00 stderr F I0813 20:08:48.523680 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="100.573µs"
2025-08-13T20:08:48.523935630+00:00 stderr F I0813 20:08:48.523817 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="44.211µs"
2025-08-13T20:08:48.523935630+00:00 stderr F I0813 20:08:48.523918 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="41.351µs"
2025-08-13T20:08:48.524028362+00:00 stderr F I0813 20:08:48.524010 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="306.618µs"
2025-08-13T20:08:48.524132005+00:00 stderr F I0813 20:08:48.524115 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="49.082µs"
2025-08-13T20:08:48.524378432+00:00 stderr F I0813 20:08:48.524361 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="198.806µs"
2025-08-13T20:08:48.528627834+00:00 stderr F I0813 20:08:48.528586 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.529736296+00:00 stderr F I0813 20:08:48.524011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="66.492µs"
2025-08-13T20:08:48.529835749+00:00 stderr F I0813 20:08:48.523496 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="44.411µs"
2025-08-13T20:08:48.529881410+00:00 stderr F I0813 20:08:48.523422 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-66f66f94cf" duration="76.642µs"
2025-08-13T20:08:48.532723372+00:00 stderr F I0813 20:08:48.532694 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.540652669+00:00 stderr F I0813 20:08:48.540449 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[v1/Node, namespace: openshift-network-diagnostics, name: crc, uid: c83c88d3-f34d-4083-a59d-1c50f90f89b8]" observed="[v1/Node, namespace: , name: crc, uid: c83c88d3-f34d-4083-a59d-1c50f90f89b8]"
2025-08-13T20:08:48.562388312+00:00 stderr F I0813 20:08:48.562313 1 shared_informer.go:311] Waiting for caches to sync for garbage collector
2025-08-13T20:08:48.566424568+00:00 stderr F I0813 20:08:48.566367 1 reflector.go:351] Caches populated for *v1.PriorityLevelConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.566628734+00:00 stderr F I0813 20:08:48.566556 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.566831220+00:00 stderr F I0813 20:08:48.566750 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.567184670+00:00 stderr F I0813 20:08:48.567158 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.567446997+00:00 stderr F I0813 20:08:48.566866 1 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.569103285+00:00 stderr F I0813 20:08:48.567739 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.569210628+00:00 stderr F I0813 20:08:48.569149 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.570571407+00:00 stderr F I0813 20:08:48.570495 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.573558342+00:00 stderr F I0813 20:08:48.572326 1 reflector.go:351] Caches populated for *v1.FlowSchema from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.573985525+00:00 stderr F I0813 20:08:48.573960 1 reflector.go:351] Caches populated for *v1.PriorityClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.574140849+00:00 stderr F W0813 20:08:48.574121 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:48.574193961+00:00 stderr F E0813 20:08:48.574176 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:48.576663381+00:00 stderr F I0813 20:08:48.576638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.577426713+00:00 stderr F I0813 20:08:48.577400 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.579201814+00:00 stderr F I0813 20:08:48.579139 1 shared_informer.go:318] Caches are synced for crt configmap
2025-08-13T20:08:48.586915115+00:00 stderr F W0813 20:08:48.585066 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:48.586915115+00:00 stderr F E0813 20:08:48.585116 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:48.591644731+00:00 stderr F I0813 20:08:48.591578 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.592452454+00:00 stderr F W0813 20:08:48.592423 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:48.592565347+00:00 stderr F I0813 20:08:48.592512 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.592703811+00:00 stderr F I0813 20:08:48.592654 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.592880516+00:00 stderr F I0813 20:08:48.592831 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.593031731+00:00 stderr F I0813 20:08:48.592980 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.593171065+00:00 stderr F E0813 20:08:48.592512 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:48.593312349+00:00 stderr F I0813 20:08:48.592871 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.593403331+00:00 stderr F I0813 20:08:48.593253 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.593720100+00:00 stderr F I0813 20:08:48.593307 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.599483286+00:00 stderr F I0813 20:08:48.599421 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.602196293+00:00 stderr F I0813 20:08:48.600468 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.602296876+00:00 stderr F I0813 20:08:48.600671 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.602373079+00:00 stderr F I0813 20:08:48.601187 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.602494552+00:00 stderr F I0813 20:08:48.601738 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[config.openshift.io/v1/ClusterVersion, namespace: openshift-monitoring, name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" observed="[config.openshift.io/v1/ClusterVersion, namespace: , name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]"
2025-08-13T20:08:48.608744571+00:00 stderr F I0813 20:08:48.608609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.615807384+00:00 stderr F I0813 20:08:48.615638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.622410963+00:00 stderr F I0813 20:08:48.622293 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.623100253+00:00 stderr F I0813 20:08:48.623058 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.623356120+00:00 stderr F I0813 20:08:48.623293 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.623675739+00:00 stderr F I0813 20:08:48.623549 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.628728374+00:00 stderr F I0813 20:08:48.628596 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.652296110+00:00 stderr F I0813 20:08:48.652191 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.658929790+00:00 stderr F I0813 20:08:48.658113 1 shared_informer.go:318] Caches are synced for crt configmap
2025-08-13T20:08:48.676236946+00:00 stderr F I0813 20:08:48.675092 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.708386508+00:00 stderr F I0813 20:08:48.707751 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.741002513+00:00 stderr F I0813 20:08:48.738120 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.741002513+00:00 stderr F I0813 20:08:48.738431 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Console, namespace: openshift-console, name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" observed="[operator.openshift.io/v1/Console, namespace: , name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]"
2025-08-13T20:08:48.841039621+00:00 stderr F I0813 20:08:48.838733 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.844560452+00:00 stderr F I0813 20:08:48.844037 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.862288021+00:00 stderr F I0813 20:08:48.862136 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.874573103+00:00 stderr F I0813 20:08:48.874469 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.893934288+00:00 stderr F I0813 20:08:48.891728 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.916631109+00:00 stderr F I0813 20:08:48.916556 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.927503220+00:00 stderr F I0813 20:08:48.927264 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.941566404+00:00 stderr F I0813 20:08:48.941452 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.958976503+00:00 stderr F I0813 20:08:48.958634 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.977092122+00:00 stderr F I0813 20:08:48.976871 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.990464056+00:00 stderr F I0813 20:08:48.988703 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.012039984+00:00 stderr F I0813 20:08:49.010217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.014452403+00:00 stderr F W0813 20:08:49.014420 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:49.014524605+00:00 stderr F E0813 20:08:49.014506 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:49.022316009+00:00 stderr F I0813 20:08:49.020263 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.037744521+00:00 stderr F I0813 20:08:49.037694 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.048128789+00:00 stderr F I0813 20:08:49.048016 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.064155498+00:00 stderr F I0813 20:08:49.062308 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.078476709+00:00 stderr F I0813 20:08:49.078311 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.091086090+00:00 stderr F I0813 20:08:49.090366 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.105036000+00:00 stderr F I0813 20:08:49.104982 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.120080202+00:00 stderr F I0813 20:08:49.120025 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.130395058+00:00 stderr F I0813 20:08:49.130346 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.147741075+00:00 stderr F I0813 20:08:49.147222 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.162212570+00:00 stderr F I0813 20:08:49.162098 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.176187670+00:00 stderr F I0813 20:08:49.176131 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.190469110+00:00 stderr F I0813 20:08:49.190333 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.208186768+00:00 stderr F I0813 20:08:49.208129 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.208536468+00:00 stderr F I0813 20:08:49.208504 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]"
2025-08-13T20:08:49.208618340+00:00 stderr F I0813 20:08:49.208600 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]"
2025-08-13T20:08:49.220694246+00:00 stderr F I0813 20:08:49.220638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.236635423+00:00 stderr F I0813 20:08:49.236452 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.254709032+00:00 stderr F I0813 20:08:49.254553 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.275917170+00:00 stderr F I0813 20:08:49.275764 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.290558850+00:00 stderr F I0813 20:08:49.290497 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.300722161+00:00 stderr F I0813 20:08:49.300641 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.312479628+00:00 stderr F I0813 20:08:49.312393 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.328198769+00:00 stderr F I0813 20:08:49.328091 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.328658202+00:00 stderr F W0813 20:08:49.328589 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:49.328658202+00:00 stderr F E0813 20:08:49.328644 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:49.344517027+00:00 stderr F I0813 20:08:49.344372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.361311858+00:00 stderr F I0813 20:08:49.360169 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.375651149+00:00 stderr F I0813 20:08:49.375555 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.376022760+00:00 stderr F I0813 20:08:49.375736 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/DNS, namespace: openshift-dns, name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]" observed="[operator.openshift.io/v1/DNS, namespace: , name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]"
2025-08-13T20:08:49.398016871+00:00 stderr F I0813 20:08:49.397925 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.416319915+00:00 stderr F I0813 20:08:49.416217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.428527415+00:00 stderr F I0813 20:08:49.428372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.442392953+00:00 stderr F I0813 20:08:49.442298 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.455387295+00:00 stderr F I0813 20:08:49.455267 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.466575146+00:00 stderr F I0813 20:08:49.466459 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.515061396+00:00 stderr F I0813 20:08:49.514868 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.515827588+00:00 stderr F I0813 20:08:49.515264 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Network, namespace: openshift-host-network, name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]" observed="[operator.openshift.io/v1/Network, namespace: , name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]"
2025-08-13T20:08:49.675247759+00:00 stderr F W0813 20:08:49.675128 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:49.675247759+00:00 stderr F E0813 20:08:49.675207 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:49.725230932+00:00 stderr F W0813 20:08:49.725119 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:49.725230932+00:00 stderr F E0813 20:08:49.725191 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:49.822870962+00:00 stderr F W0813 20:08:49.822769 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:49.822989725+00:00 stderr F E0813 20:08:49.822972 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error:
\"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:49.825884788+00:00 stderr F W0813 20:08:49.825859 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:49.826011682+00:00 stderr F E0813 20:08:49.825995 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:49.888533904+00:00 stderr F W0813 20:08:49.888484 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 
2025-08-13T20:08:49.888609186+00:00 stderr F E0813 20:08:49.888590 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:49.960108826+00:00 stderr F W0813 20:08:49.958192 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:49.960108826+00:00 stderr F E0813 20:08:49.958250 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:49.976144286+00:00 stderr F I0813 20:08:49.976056 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:50.069486842+00:00 stderr F W0813 20:08:50.069337 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:50.069486842+00:00 stderr F E0813 20:08:50.069406 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:50.140110166+00:00 stderr F W0813 20:08:50.139872 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:50.140110166+00:00 stderr F E0813 20:08:50.139952 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:51.308362571+00:00 stderr F W0813 20:08:51.307680 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:51.308362571+00:00 stderr F E0813 20:08:51.307763 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:51.773955860+00:00 stderr F W0813 20:08:51.773871 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:51.774141596+00:00 stderr F E0813 20:08:51.774083 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:51.806047811+00:00 stderr F W0813 20:08:51.805161 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:51.806047811+00:00 stderr F E0813 20:08:51.805217 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:52.179231950+00:00 stderr F W0813 20:08:52.179055 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:52.179231950+00:00 stderr F E0813 20:08:52.179115 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:52.265486693+00:00 stderr F W0813 20:08:52.265263 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:52.265486693+00:00 stderr F E0813 20:08:52.265331 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:52.491827473+00:00 stderr F W0813 20:08:52.491636 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:52.491827473+00:00 stderr F E0813 20:08:52.491694 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:52.500160531+00:00 stderr F W0813 20:08:52.500089 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:52.500160531+00:00 stderr F E0813 20:08:52.500138 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:52.812302311+00:00 stderr F W0813 20:08:52.812161 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:52.812302311+00:00 stderr F E0813 20:08:52.812225 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:52.815945425+00:00 stderr F W0813 20:08:52.815757 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:52.815976186+00:00 stderr F E0813 20:08:52.815945 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:53.031703841+00:00 stderr F W0813 20:08:53.031583 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:53.031703841+00:00 stderr F E0813 20:08:53.031646 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:56.011840344+00:00 stderr F W0813 20:08:56.011526 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:56.011840344+00:00 stderr F E0813 20:08:56.011692 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:56.601994895+00:00 stderr F W0813 20:08:56.601862 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:56.601994895+00:00 stderr F E0813 20:08:56.601951 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:56.803401380+00:00 stderr F W0813 20:08:56.803277 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:56.803401380+00:00 stderr F E0813 20:08:56.803351 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:57.305835734+00:00 stderr F W0813 20:08:57.305720 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:57.305835734+00:00 stderr F E0813 20:08:57.305815 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:57.338952673+00:00 stderr F W0813 20:08:57.338861 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:57.339030125+00:00 stderr F E0813 20:08:57.339013 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:57.612987720+00:00 stderr F W0813 20:08:57.612859 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:57.612987720+00:00 stderr F E0813 20:08:57.612941 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:57.796716768+00:00 stderr F W0813 20:08:57.796587 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:57.796716768+00:00 stderr F E0813 20:08:57.796653 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:58.662998036+00:00 stderr F W0813 20:08:58.662024 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:58.662998036+00:00 stderr F E0813 20:08:58.662124 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:58.715656035+00:00 stderr F W0813 20:08:58.715505 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:58.715656035+00:00 stderr F E0813 20:08:58.715607 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:58.744535883+00:00 stderr F W0813 20:08:58.744405 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:58.744535883+00:00 stderr F E0813 20:08:58.744479 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:09:03.688126690+00:00 stderr F I0813 20:09:03.688005 1 reflector.go:351] Caches populated for *v1.RangeAllocation from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:04.809332475+00:00 stderr F I0813 20:09:04.809170 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:05.427590013+00:00 stderr F I0813 20:09:05.427367 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:05.487728298+00:00 stderr F I0813 20:09:05.487572 1 reflector.go:351] Caches populated for *v1.UserOAuthAccessToken from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:05.573574079+00:00 stderr F I0813 20:09:05.573457 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.227612332+00:00 stderr F I0813 20:09:07.227471 1 reflector.go:351] Caches populated for *v1.BrokerTemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.547623066+00:00 stderr F I0813 20:09:07.545003 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.547623066+00:00 stderr F I0813 20:09:07.545470 1 graph_builder.go:407] "item references an owner with coordinates that do not match the observed identity" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" owner="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]"
2025-08-13T20:09:07.868350962+00:00 stderr F I0813 20:09:07.868204 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:10.443062899+00:00 stderr F I0813 20:09:10.442950 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:11.364711734+00:00 stderr F I0813 20:09:11.364560 1 reflector.go:351] Caches populated for *v1.Template from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:11.388139495+00:00 stderr F I0813 20:09:11.387977 1 shared_informer.go:318] Caches are synced for resource quota
2025-08-13T20:09:11.388139495+00:00 stderr F I0813 20:09:11.388056 1 resource_quota_controller.go:496] "synced quota controller"
2025-08-13T20:09:11.410004902+00:00 stderr F I0813 20:09:11.409687 1 shared_informer.go:318] Caches are synced for resource quota
2025-08-13T20:09:11.430987904+00:00 stderr F I0813 20:09:11.430874 1 shared_informer.go:318] Caches are synced for garbage collector
2025-08-13T20:09:11.430987904+00:00 stderr F I0813 20:09:11.430946 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463303 1 shared_informer.go:318] Caches are synced for garbage collector
2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463340 1 garbagecollector.go:290] "synced garbage collector"
2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463419 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-machine-config-operator, uid: 74ba47b5-3bcc-42c0-9deb-e12d8da952ac]" virtual=false
2025-08-13T20:09:11.463690351+00:00 stderr F I0813 20:09:11.463664 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console-user-settings, uid: a8cf2c15-bc3c-4cf4-9841-5e4727e35f10]" virtual=false
2025-08-13T20:09:11.463939508+00:00 stderr F I0813 20:09:11.463865 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cloud-platform-infra, uid: 2d3b34a1-0340-4ea4-b15a-1a0164234aa8]" virtual=false
2025-08-13T20:09:11.464029561+00:00 stderr F I0813 20:09:11.463976 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-scheduler-operator, uid: 6f0ce1c2-37a2-421e-91e4-7be791e0ea85]" virtual=false
2025-08-13T20:09:11.464126364+00:00 stderr F I0813 20:09:11.464058 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-multus, uid: 117f4806-93cc-4b78-ae9b-74d16192e441]" virtual=false
2025-08-13T20:09:11.464126364+00:00 stderr F I0813 20:09:11.464107 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console, uid: eb0e1992-df9a-419d-b7f4-4d9b50e766e7]" virtual=false
2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464183 1 garbagecollector.go:549] "Processing item" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" virtual=false
2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464264 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-operator, uid: 3796492a-40b7-473e-aa60-d1e803b2692a]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464287 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cloud-network-config-controller, uid: 54f58fbf-f373-4947-8956-e1108a6bd97e]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464302 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config-managed, uid: c4538c2f-b9cd-4ad4-8d0e-02315cca7510]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464303 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" virtual=false 2025-08-13T20:09:11.464452453+00:00 stderr F I0813 20:09:11.464379 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-version, uid: 2a354368-bd77-40cc-ae1a-68449ece8efb]" virtual=false 2025-08-13T20:09:11.464533586+00:00 stderr F I0813 20:09:11.464477 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" virtual=false 2025-08-13T20:09:11.464533586+00:00 stderr F I0813 20:09:11.463860 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-monitoring, uid: 98779a1e-aec5-4b04-aba2-1e5d3ba2ec08]" virtual=false 2025-08-13T20:09:11.464701300+00:00 stderr F I0813 20:09:11.464219 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-machine-approver, uid: 9bfd5222-f6af-4843-b92c-1a5e0e0faafe]" virtual=false 
2025-08-13T20:09:11.464822804+00:00 stderr F I0813 20:09:11.464225 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-node-identity, uid: c5e796ee-668d-4610-a134-4468bf0cbdf2]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464241 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-controller-manager-operator, uid: 73d77f82-bd47-45ec-bb06-d29938e02cfd]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464252 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-machine-api, uid: df6705c6-ec2d-4f74-a803-3f769fd28210]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464266 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-nutanix-infra, uid: 3e99d67f-8f0c-4b0e-b68c-e85c2944daf4]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464267 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-controller-manager-operator, uid: d21d09aa-3262-40bb-bb1c-9a70b88b2f48]" virtual=false 2025-08-13T20:09:11.485689722+00:00 stderr F I0813 20:09:11.483124 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-controller-manager-operator, uid: d21d09aa-3262-40bb-bb1c-9a70b88b2f48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.485689722+00:00 stderr F I0813 20:09:11.483194 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-etcd-operator, uid: 7413916b-b5eb-4718-9e41-632b89d445af]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.486306 1 garbagecollector.go:615] "item has at least one 
existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.486367 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-marketplace, uid: ca0ec70d-cbfe-4901-ae7a-150a9dfe5920]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488112 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-multus, uid: 117f4806-93cc-4b78-ae9b-74d16192e441]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488147 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-openstack-infra, uid: ea73e21f-aeeb-4b3d-9983-95bf78a60eea]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488431 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-operator, uid: 3796492a-40b7-473e-aa60-d1e803b2692a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488457 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-operators, uid: 7920babe-d7ab-4197-9546-c2d10b1040a1]" virtual=false 2025-08-13T20:09:11.489874192+00:00 stderr F I0813 20:09:11.489157 1 garbagecollector.go:615] "item 
has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-monitoring, uid: 98779a1e-aec5-4b04-aba2-1e5d3ba2ec08]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.489874192+00:00 stderr F I0813 20:09:11.489254 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ovirt-infra, uid: 2d9c4601-cca2-4813-b9e6-2830b7097736]" virtual=false 2025-08-13T20:09:11.493095934+00:00 stderr F I0813 20:09:11.491170 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.493095934+00:00 stderr F I0813 20:09:11.491207 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config-operator, uid: 45fc724c-871f-4abe-8f95-988ef4974157]" virtual=false 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495106 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-controller-manager-operator, uid: 73d77f82-bd47-45ec-bb06-d29938e02cfd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495170 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-storage-operator, uid: 01a4f482-d92f-4128-87f6-1f1177ad4f4a]" virtual=false 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495421 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config-managed, uid: c4538c2f-b9cd-4ad4-8d0e-02315cca7510]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495447 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kni-infra, uid: a8db5ce0-f088-4b83-8f52-bf8a94f40b94]" virtual=false 2025-08-13T20:09:11.498893421+00:00 stderr F I0813 20:09:11.498853 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.498959612+00:00 stderr F I0813 20:09:11.498929 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-service-ca-operator, uid: f62224e7-87fc-4c57-b23f-4581de499fa2]" virtual=false 2025-08-13T20:09:11.499281652+00:00 stderr F I0813 20:09:11.499145 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-node-identity, uid: c5e796ee-668d-4610-a134-4468bf0cbdf2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.499281652+00:00 stderr F I0813 20:09:11.499196 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-apiserver-operator, uid: 52965c25-9895-4c80-8901-655c30931a31]" virtual=false 
2025-08-13T20:09:11.530719793+00:00 stderr F I0813 20:09:11.530619 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console, uid: eb0e1992-df9a-419d-b7f4-4d9b50e766e7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.530719793+00:00 stderr F I0813 20:09:11.530670 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ovn-kubernetes, uid: 393084dd-9b88-439d-9c82-e5d9e9fc7290]" virtual=false 2025-08-13T20:09:11.532400661+00:00 stderr F I0813 20:09:11.531008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-operators, uid: 7920babe-d7ab-4197-9546-c2d10b1040a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.532400661+00:00 stderr F I0813 20:09:11.531054 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console-operator, uid: 744177a1-0b8c-479f-9660-aa2ce2d1003b]" virtual=false 2025-08-13T20:09:11.540053941+00:00 stderr F I0813 20:09:11.539760 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-nutanix-infra, uid: 3e99d67f-8f0c-4b0e-b68c-e85c2944daf4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.540053941+00:00 stderr F I0813 20:09:11.539925 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-storage-version-migrator-operator, uid: 56366865-01d3-431a-9703-d295f389a658]" virtual=false 
2025-08-13T20:09:11.559864139+00:00 stderr F I0813 20:09:11.559719 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console-user-settings, uid: a8cf2c15-bc3c-4cf4-9841-5e4727e35f10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.559864139+00:00 stderr F I0813 20:09:11.559764 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-user-workload-monitoring, uid: 1d85aaa3-25a8-4cbd-88ce-765463dbeca9]" virtual=false 2025-08-13T20:09:11.562980258+00:00 stderr F I0813 20:09:11.561154 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cloud-network-config-controller, uid: 54f58fbf-f373-4947-8956-e1108a6bd97e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.562980258+00:00 stderr F I0813 20:09:11.561206 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-authentication-operator, uid: a94f410e-4561-4410-90f7-263645a79260]" virtual=false 2025-08-13T20:09:11.565924383+00:00 stderr F I0813 20:09:11.563281 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-scheduler-operator, uid: 6f0ce1c2-37a2-421e-91e4-7be791e0ea85]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.565924383+00:00 stderr F I0813 20:09:11.563318 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-samples-operator, uid: 
9b74ff82-99c7-48fa-852e-a3960e84fedd]" virtual=false 2025-08-13T20:09:11.577965448+00:00 stderr F I0813 20:09:11.575135 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-machine-approver, uid: 9bfd5222-f6af-4843-b92c-1a5e0e0faafe]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.577965448+00:00 stderr F I0813 20:09:11.575179 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-host-network, uid: b448caf8-673f-4b75-9f4c-85cbabc7ec6c]" virtual=false 2025-08-13T20:09:11.591330640+00:00 stderr F I0813 20:09:11.590238 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-machine-config-operator, uid: 74ba47b5-3bcc-42c0-9deb-e12d8da952ac]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.591330640+00:00 stderr F I0813 20:09:11.590311 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-image-registry, uid: b700e982-db6f-41d5-bf38-c2a82966abe8]" virtual=false 2025-08-13T20:09:11.607001829+00:00 stderr F I0813 20:09:11.605842 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-version, uid: 2a354368-bd77-40cc-ae1a-68449ece8efb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.607001829+00:00 stderr F I0813 20:09:11.605964 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ingress-operator, uid: 
c9d5dc69-e308-44cd-ad51-e2b7cce2619e]" virtual=false 2025-08-13T20:09:11.640845670+00:00 stderr F I0813 20:09:11.640673 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-machine-api, uid: df6705c6-ec2d-4f74-a803-3f769fd28210]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.640887131+00:00 stderr F I0813 20:09:11.640845 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-operator-lifecycle-manager, uid: 45ab0ea7-17dd-4464-9f7e-9913e01318e2]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641360 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-marketplace, uid: ca0ec70d-cbfe-4901-ae7a-150a9dfe5920]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641410 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-vsphere-infra, uid: 91f4e064-f5b2-4ace-91e6-e1732990ada8]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641460 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-etcd-operator, uid: 7413916b-b5eb-4718-9e41-632b89d445af]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: 
openshift-apiserver-operator, uid: 52965c25-9895-4c80-8901-655c30931a31]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641560 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-diagnostics, uid: 267bb0cc-5a49-450c-a61e-a94080c18cf9]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641695 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-service-ca-operator, uid: f62224e7-87fc-4c57-b23f-4581de499fa2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641712 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config, uid: c38b4142-4b5d-445b-9db1-5d43b8323db9]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641780 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-dns-operator, uid: b99894aa-45e2-4109-8ecc-66c171f7a6d8]" virtual=false 2025-08-13T20:09:11.664780876+00:00 stderr F I0813 20:09:11.664673 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-openstack-infra, uid: ea73e21f-aeeb-4b3d-9983-95bf78a60eea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.664780876+00:00 stderr F I0813 20:09:11.664757 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-apiserver-operator, uid: 
778598fb-710e-443a-a27d-e077f62db555]" virtual=false 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670364 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cloud-platform-infra, uid: 2d3b34a1-0340-4ea4-b15a-1a0164234aa8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670410 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: ovn, uid: 5ca00511-43a3-491e-8d30-2c9b23d72bf1]" virtual=false 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ovirt-infra, uid: 2d9c4601-cca2-4813-b9e6-2830b7097736]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670670 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: signer, uid: 17184c6c-10e6-40d8-9367-33d18445ef3e]" virtual=false 2025-08-13T20:09:11.673116295+00:00 stderr F I0813 20:09:11.671985 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-storage-operator, uid: 01a4f482-d92f-4128-87f6-1f1177ad4f4a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.673116295+00:00 stderr F I0813 20:09:11.672040 1 garbagecollector.go:549] "Processing item" 
item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" virtual=false 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699000 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-user-workload-monitoring, uid: 1d85aaa3-25a8-4cbd-88ce-765463dbeca9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699066 1 garbagecollector.go:549] "Processing item" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" virtual=false 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699325 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-host-network, uid: b448caf8-673f-4b75-9f4c-85cbabc7ec6c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699343 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" virtual=false 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699650 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kni-infra, uid: a8db5ce0-f088-4b83-8f52-bf8a94f40b94]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.699716667+00:00 stderr F I0813 20:09:11.699668 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager, name: controller-manager, uid: b42ae171-8338-4274-922f-79cfacb9cfe9]" virtual=false 2025-08-13T20:09:11.702848717+00:00 stderr F I0813 20:09:11.700316 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ovn-kubernetes, uid: 393084dd-9b88-439d-9c82-e5d9e9fc7290]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.702848717+00:00 stderr F I0813 20:09:11.700379 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" virtual=false 2025-08-13T20:09:11.705897565+00:00 stderr F I0813 20:09:11.705756 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-authentication-operator, uid: a94f410e-4561-4410-90f7-263645a79260]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.705897565+00:00 stderr F I0813 20:09:11.705863 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane, uid: 346798bd-68de-4941-ab69-8a5a56dd55f7]" virtual=false 2025-08-13T20:09:11.723875370+00:00 stderr F I0813 20:09:11.722541 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, 
namespace: , name: openshift-console-operator, uid: 744177a1-0b8c-479f-9660-aa2ce2d1003b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.723875370+00:00 stderr F I0813 20:09:11.723168       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: db9b5a0e-2471-4a94-bbe5-01c34efec882]" virtual=false
2025-08-13T20:09:11.762132647+00:00 stderr F I0813 20:09:11.762062       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-controller-manager, name: controller-manager, uid: b42ae171-8338-4274-922f-79cfacb9cfe9]"
2025-08-13T20:09:11.762162448+00:00 stderr F I0813 20:09:11.762129       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-authentication, name: oauth-openshift, uid: 5c77e036-b030-4587-8bd4-079bc5e84c22]" virtual=false
2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.768206       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.768272       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" virtual=false
2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.771153       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-samples-operator, uid: 9b74ff82-99c7-48fa-852e-a3960e84fedd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.771186       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" virtual=false
2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.784705       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: ovn, uid: 5ca00511-43a3-491e-8d30-2c9b23d72bf1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.784743       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" virtual=false
2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.785041       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: db9b5a0e-2471-4a94-bbe5-01c34efec882]"
2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.785059       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-service-ca, name: service-ca, uid: 054eb633-29d2-4eec-90a7-1a83a0e386c1]" virtual=false
2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798363       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-storage-version-migrator-operator, uid: 56366865-01d3-431a-9703-d295f389a658]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798441       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" virtual=false
2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798695       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config-operator, uid: 45fc724c-871f-4abe-8f95-988ef4974157]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798719       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver, name: apiserver, uid: a424780c-5ff8-49aa-b616-57c2d7958f81]" virtual=false
2025-08-13T20:09:11.802963598+00:00 stderr F I0813 20:09:11.800572       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ingress-operator, uid: c9d5dc69-e308-44cd-ad51-e2b7cce2619e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.802963598+00:00 stderr F I0813 20:09:11.800629       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: image-registry, uid: ff5a6fbd-d479-457d-86ba-428162a82d5c]" virtual=false
2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.811035       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-diagnostics, uid: 267bb0cc-5a49-450c-a61e-a94080c18cf9]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.811086       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" virtual=false
2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.812320       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: signer, uid: 17184c6c-10e6-40d8-9367-33d18445ef3e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.812348       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" virtual=false
2025-08-13T20:09:11.836568521+00:00 stderr F I0813 20:09:11.836450       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-authentication, name: oauth-openshift, uid: 5c77e036-b030-4587-8bd4-079bc5e84c22]"
2025-08-13T20:09:11.836568521+00:00 stderr F I0813 20:09:11.836541       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e81203d-c202-48ae-b652-35b68d7e5586]" virtual=false
2025-08-13T20:09:11.836568521+00:00 stderr F I0813 20:09:11.836481       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.836703275+00:00 stderr F I0813 20:09:11.836599       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" virtual=false
2025-08-13T20:09:11.842381268+00:00 stderr F I0813 20:09:11.842199       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-service-ca, name: service-ca, uid: 054eb633-29d2-4eec-90a7-1a83a0e386c1]"
2025-08-13T20:09:11.842381268+00:00 stderr F I0813 20:09:11.842249       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ingress, name: router-default, uid: 9ae4d312-7fc4-4344-ab7a-669da95f56bf]" virtual=false
2025-08-13T20:09:11.845851897+00:00 stderr F I0813 20:09:11.845357       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-operator-lifecycle-manager, uid: 45ab0ea7-17dd-4464-9f7e-9913e01318e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.845851897+00:00 stderr F I0813 20:09:11.845410       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" virtual=false
2025-08-13T20:09:11.847981208+00:00 stderr F I0813 20:09:11.845932       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config, uid: c38b4142-4b5d-445b-9db1-5d43b8323db9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.847981208+00:00 stderr F I0813 20:09:11.845999       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" virtual=false
2025-08-13T20:09:11.850861941+00:00 stderr F I0813 20:09:11.848977       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-vsphere-infra, uid: 91f4e064-f5b2-4ace-91e6-e1732990ada8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.850861941+00:00 stderr F I0813 20:09:11.849009       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" virtual=false
2025-08-13T20:09:11.854377312+00:00 stderr F I0813 20:09:11.854323       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-ingress, name: router-default, uid: 9ae4d312-7fc4-4344-ab7a-669da95f56bf]"
2025-08-13T20:09:11.854398752+00:00 stderr F I0813 20:09:11.854378       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console, name: console, uid: acc4559a-2586-4482-947a-aae611d8d9f6]" virtual=false
2025-08-13T20:09:11.860222669+00:00 stderr F I0813 20:09:11.860191       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-apiserver, name: apiserver, uid: a424780c-5ff8-49aa-b616-57c2d7958f81]"
2025-08-13T20:09:11.860301602+00:00 stderr F I0813 20:09:11.860286       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" virtual=false
2025-08-13T20:09:11.860632361+00:00 stderr F I0813 20:09:11.860607       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-dns-operator, uid: b99894aa-45e2-4109-8ecc-66c171f7a6d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.860687503+00:00 stderr F I0813 20:09:11.860673       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator, name: migrator, uid: 88da59ff-e1b0-4998-b48b-d9b2e9bee2ae]" virtual=false
2025-08-13T20:09:11.860985341+00:00 stderr F I0813 20:09:11.860959       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane, uid: 346798bd-68de-4941-ab69-8a5a56dd55f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.861048233+00:00 stderr F I0813 20:09:11.861033       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 0b1d22b6-78ae-4d00-94ad-381755e08383]" virtual=false
2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888111       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-apiserver-operator, uid: 778598fb-710e-443a-a27d-e077f62db555]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888234       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" virtual=false
2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888673       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator, name: migrator, uid: 88da59ff-e1b0-4998-b48b-d9b2e9bee2ae]"
2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888694       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-oauth-apiserver, name: apiserver, uid: 8ac71ab9-8c3d-4c89-9962-205eed0149d5]" virtual=false
2025-08-13T20:09:11.891468375+00:00 stderr F I0813 20:09:11.891401       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 0b1d22b6-78ae-4d00-94ad-381755e08383]"
2025-08-13T20:09:11.891832736+00:00 stderr F I0813 20:09:11.891737       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" virtual=false
2025-08-13T20:09:11.892158405+00:00 stderr F I0813 20:09:11.892118       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.892280449+00:00 stderr F I0813 20:09:11.892258       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" virtual=false
2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.902848       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.902960       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" virtual=false
2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.903236       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.903260       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console, name: downloads, uid: 03b2baf0-d10c-4001-94a6-800af015de08]" virtual=false
2025-08-13T20:09:11.903731527+00:00 stderr F I0813 20:09:11.903685       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: image-registry, uid: ff5a6fbd-d479-457d-86ba-428162a82d5c]"
2025-08-13T20:09:11.903731527+00:00 stderr F I0813 20:09:11.903710       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" virtual=false
2025-08-13T20:09:11.909882383+00:00 stderr F I0813 20:09:11.909254       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.909882383+00:00 stderr F I0813 20:09:11.909310       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" virtual=false
2025-08-13T20:09:11.922423143+00:00 stderr F I0813 20:09:11.922356       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.922534976+00:00 stderr F I0813 20:09:11.922515       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" virtual=false
2025-08-13T20:09:11.923117003+00:00 stderr F I0813 20:09:11.923036       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.923183885+00:00 stderr F I0813 20:09:11.923168       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" virtual=false
2025-08-13T20:09:11.923394271+00:00 stderr F I0813 20:09:11.923374       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-oauth-apiserver, name: apiserver, uid: 8ac71ab9-8c3d-4c89-9962-205eed0149d5]"
2025-08-13T20:09:11.923445722+00:00 stderr F I0813 20:09:11.923432       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" virtual=false
2025-08-13T20:09:11.929533177+00:00 stderr F I0813 20:09:11.929488       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console, name: console, uid: acc4559a-2586-4482-947a-aae611d8d9f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Console","name":"cluster","uid":"5f9f95ea-d66e-45cc-9aa2-ed289b62d92e","controller":true}]
2025-08-13T20:09:11.929614339+00:00 stderr F I0813 20:09:11.929599       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" virtual=false
2025-08-13T20:09:11.942948991+00:00 stderr F I0813 20:09:11.941430       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.942948991+00:00 stderr F I0813 20:09:11.941514       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" virtual=false
2025-08-13T20:09:11.943400094+00:00 stderr F I0813 20:09:11.943370       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.943472496+00:00 stderr F I0813 20:09:11.943458       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" virtual=false
2025-08-13T20:09:11.971524541+00:00 stderr F I0813 20:09:11.971460       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.971697176+00:00 stderr F I0813 20:09:11.971675       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" virtual=false
2025-08-13T20:09:11.994898121+00:00 stderr F I0813 20:09:11.992669       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console, name: downloads, uid: 03b2baf0-d10c-4001-94a6-800af015de08]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Console","name":"cluster","uid":"5f9f95ea-d66e-45cc-9aa2-ed289b62d92e","controller":true}]
2025-08-13T20:09:11.994898121+00:00 stderr F I0813 20:09:11.992737       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]" virtual=false
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995441       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995501       1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: c7e0a213-d3b0-4220-bc12-3e9beb007a7b]" virtual=false
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995674       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995697       1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]" virtual=false
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.996054       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e81203d-c202-48ae-b652-35b68d7e5586]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.996080       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" virtual=false
2025-08-13T20:09:12.017887330+00:00 stderr F I0813 20:09:12.015311       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.017887330+00:00 stderr F I0813 20:09:12.015397       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" virtual=false
2025-08-13T20:09:12.041160677+00:00 stderr F I0813 20:09:12.041085       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.041262830+00:00 stderr F I0813 20:09:12.041247       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" virtual=false
2025-08-13T20:09:12.049357052+00:00 stderr F I0813 20:09:12.049267       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.049535217+00:00 stderr F I0813 20:09:12.049515       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.049640290+00:00 stderr F I0813 20:09:12.049621       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" virtual=false
2025-08-13T20:09:12.049960139+00:00 stderr F I0813 20:09:12.049932       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.050025021+00:00 stderr F I0813 20:09:12.050011       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" virtual=false
2025-08-13T20:09:12.050276048+00:00 stderr F I0813 20:09:12.050206       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.050363321+00:00 stderr F I0813 20:09:12.050349       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" virtual=false
2025-08-13T20:09:12.050620128+00:00 stderr F I0813 20:09:12.050598       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.050672680+00:00 stderr F I0813 20:09:12.050658       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" virtual=false
2025-08-13T20:09:12.053485991+00:00 stderr F I0813 20:09:12.052006       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" virtual=false
2025-08-13T20:09:12.054286883+00:00 stderr F I0813 20:09:12.054263       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.054364546+00:00 stderr F I0813 20:09:12.054349       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" virtual=false
2025-08-13T20:09:12.054559731+00:00 stderr F I0813 20:09:12.054539       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.054612673+00:00 stderr F I0813 20:09:12.054599       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: networking-rules, uid: fc335c36-0c0a-48e7-a47f-5e72d0e62a18]" virtual=false
2025-08-13T20:09:12.054930162+00:00 stderr F I0813 20:09:12.054880       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.054994774+00:00 stderr F I0813 20:09:12.054979       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" virtual=false
2025-08-13T20:09:12.060510762+00:00 stderr F I0813 20:09:12.060461       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.060623125+00:00 stderr F I0813 20:09:12.060605       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" virtual=false
2025-08-13T20:09:12.061691196+00:00 stderr F I0813 20:09:12.061225       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]"
2025-08-13T20:09:12.061756458+00:00 stderr F I0813 20:09:12.061741       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" virtual=false
2025-08-13T20:09:12.062305663+00:00 stderr F I0813 20:09:12.061474       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.062326874+00:00 stderr F I0813 20:09:12.062307       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" virtual=false
2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.068469       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.068538       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" virtual=false
2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.070876       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.070959       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" virtual=false
2025-08-13T20:09:12.071640041+00:00 stderr F I0813 20:09:12.071612       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.072191877+00:00 stderr F I0813 20:09:12.072166       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" virtual=false
2025-08-13T20:09:12.076067098+00:00 stderr F I0813 20:09:12.076018       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.078967051+00:00 stderr F I0813 20:09:12.078882       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" virtual=false
2025-08-13T20:09:12.079216158+00:00 stderr F I0813 20:09:12.079084       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.080136485+00:00 stderr F I0813 20:09:12.080010       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: c7e0a213-d3b0-4220-bc12-3e9beb007a7b]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"ClusterServiceVersion","name":"packageserver","uid":"0beab272-7637-4d44-b3aa-502dcafbc929","controller":false,"blockOwnerDeletion":false}]
2025-08-13T20:09:12.083709447+00:00 stderr F I0813 20:09:12.083590       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: networking-rules, uid: fc335c36-0c0a-48e7-a47f-5e72d0e62a18]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.097632       1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" virtual=false
2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098549       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098618       1 
garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: master-rules, uid: 999d77e7-76f0-4f53-8849-6b1b62585ead]" virtual=false 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098877 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098990 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" virtual=false 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.099138 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.099174 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" virtual=false 2025-08-13T20:09:12.099485639+00:00 stderr F I0813 20:09:12.099424 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" virtual=false 2025-08-13T20:09:12.099543011+00:00 
stderr F I0813 20:09:12.099509 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-image-registry, uid: b700e982-db6f-41d5-bf38-c2a82966abe8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099672715+00:00 stderr F I0813 20:09:12.099598 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" virtual=false 2025-08-13T20:09:12.099819039+00:00 stderr F I0813 20:09:12.099759 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" virtual=false 2025-08-13T20:09:12.102401363+00:00 stderr F I0813 20:09:12.102314 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.102401363+00:00 stderr F I0813 20:09:12.102368 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" virtual=false 2025-08-13T20:09:12.102957679+00:00 stderr F I0813 20:09:12.102870 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.102978670+00:00 stderr F I0813 20:09:12.102968 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" virtual=false 2025-08-13T20:09:12.107458998+00:00 stderr F I0813 20:09:12.107389 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.107492199+00:00 stderr F I0813 20:09:12.107454 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" virtual=false 2025-08-13T20:09:12.108032264+00:00 stderr F I0813 20:09:12.107973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.108065565+00:00 stderr F I0813 20:09:12.108031 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" 2025-08-13T20:09:12.108065565+00:00 stderr F I0813 20:09:12.108050 1 garbagecollector.go:549] "Processing item" 
item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" virtual=false 2025-08-13T20:09:12.108075166+00:00 stderr F I0813 20:09:12.108066 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" virtual=false 2025-08-13T20:09:12.108594791+00:00 stderr F I0813 20:09:12.108569 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.108654192+00:00 stderr F I0813 20:09:12.108639 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" virtual=false 2025-08-13T20:09:12.109243969+00:00 stderr F I0813 20:09:12.109217 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.109349772+00:00 stderr F I0813 20:09:12.108590 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.109379303+00:00 stderr F I0813 20:09:12.109293 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" virtual=false 2025-08-13T20:09:12.109548158+00:00 stderr F I0813 20:09:12.109372 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 17da81ae-ac8b-4941-aff1-3d2bf3f00608]" virtual=false 2025-08-13T20:09:12.116526798+00:00 stderr F I0813 20:09:12.116423 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" 2025-08-13T20:09:12.116526798+00:00 stderr F I0813 20:09:12.116503 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" virtual=false 2025-08-13T20:09:12.116969681+00:00 stderr F I0813 20:09:12.116897 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" 2025-08-13T20:09:12.116989171+00:00 stderr F I0813 20:09:12.116967 1 garbagecollector.go:549] "Processing item" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905, uid: 7f4930dc-9c3e-449f-86b0-c6f776dc6141]" virtual=false 2025-08-13T20:09:12.117066763+00:00 stderr F I0813 20:09:12.117043 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: 
revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" 2025-08-13T20:09:12.117175677+00:00 stderr F I0813 20:09:12.117156 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" virtual=false 2025-08-13T20:09:12.119927885+00:00 stderr F I0813 20:09:12.118382 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" 2025-08-13T20:09:12.119927885+00:00 stderr F I0813 20:09:12.118434 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" virtual=false 2025-08-13T20:09:12.120191133+00:00 stderr F I0813 20:09:12.120167 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" 2025-08-13T20:09:12.120282096+00:00 stderr F I0813 20:09:12.120267 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" virtual=false 2025-08-13T20:09:12.120491802+00:00 stderr F I0813 20:09:12.120469 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" 2025-08-13T20:09:12.120602845+00:00 stderr F I0813 20:09:12.120585 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F 
I0813 20:09:12.124429 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124502 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124879 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124929 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: master-rules, uid: 999d77e7-76f0-4f53-8849-6b1b62585ead]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125180 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125318 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 
17da81ae-ac8b-4941-aff1-3d2bf3f00608]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125336 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125581 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125599 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125743 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125761 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]" virtual=false 2025-08-13T20:09:12.127499803+00:00 stderr F I0813 20:09:12.127446 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: 
revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" 2025-08-13T20:09:12.127499803+00:00 stderr F I0813 20:09:12.127492 1 garbagecollector.go:549] "Processing item" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251920, uid: 11f6ecfd-32e5-4666-a477-30fd544386df]" virtual=false 2025-08-13T20:09:12.128876522+00:00 stderr F I0813 20:09:12.127733 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.128876522+00:00 stderr F I0813 20:09:12.127820 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" virtual=false 2025-08-13T20:09:12.129457119+00:00 stderr F I0813 20:09:12.129427 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" 2025-08-13T20:09:12.129578222+00:00 stderr F I0813 20:09:12.129523 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.129631784+00:00 stderr F I0813 20:09:12.129591 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: 
network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" virtual=false 2025-08-13T20:09:12.130128788+00:00 stderr F I0813 20:09:12.129973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.130128788+00:00 stderr F I0813 20:09:12.130021 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" virtual=false 2025-08-13T20:09:12.130150349+00:00 stderr F I0813 20:09:12.129513 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" virtual=false 2025-08-13T20:09:12.130355574+00:00 stderr F I0813 20:09:12.130273 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:12.130567431+00:00 stderr F I0813 20:09:12.130470 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]" virtual=false 2025-08-13T20:09:12.131152237+00:00 stderr F I0813 20:09:12.131126 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: 
openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.131214269+00:00 stderr F I0813 20:09:12.131199 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.131665 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.131710 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132331 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132357 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132490 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, 
namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132510 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]" virtual=false 2025-08-13T20:09:12.140217257+00:00 stderr F I0813 20:09:12.140124 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" 2025-08-13T20:09:12.140217257+00:00 stderr F I0813 20:09:12.140202 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]" virtual=false 2025-08-13T20:09:12.140333231+00:00 stderr F I0813 20:09:12.140272 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.140368022+00:00 stderr F I0813 20:09:12.140342 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]" virtual=false 2025-08-13T20:09:12.140540736+00:00 stderr F I0813 20:09:12.140471 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.142763940+00:00 stderr F I0813 20:09:12.141077 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" virtual=false 2025-08-13T20:09:12.143075239+00:00 stderr F I0813 20:09:12.141132 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" 2025-08-13T20:09:12.143192553+00:00 stderr F I0813 20:09:12.143172 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]" virtual=false 2025-08-13T20:09:12.143303126+00:00 stderr F I0813 20:09:12.140679 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]" 2025-08-13T20:09:12.143333507+00:00 stderr F I0813 20:09:12.142878 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.143430199+00:00 stderr F I0813 20:09:12.143372 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]" virtual=false 2025-08-13T20:09:12.143712207+00:00 stderr F I0813 
20:09:12.143690 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]" virtual=false
2025-08-13T20:09:12.143986035+00:00 stderr F I0813 20:09:12.140599 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]"
2025-08-13T20:09:12.144078658+00:00 stderr F I0813 20:09:12.144061 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148210 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]"
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148284 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148522 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}]
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148549 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148763 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]"
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148837 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator-ext-remediation, uid: b645a3b5-011d-4a2c-ac12-008057781b22]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148938 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]"
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.149025 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.149252 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152116508+00:00 stderr F I0813 20:09:12.149954 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905, uid: 7f4930dc-9c3e-449f-86b0-c6f776dc6141]" owner=[{"apiVersion":"batch/v1","kind":"CronJob","name":"collect-profiles","uid":"946673ee-e5bd-418a-934e-c38198674faa","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152259572+00:00 stderr F I0813 20:09:12.152238 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" virtual=false
2025-08-13T20:09:12.152353735+00:00 stderr F I0813 20:09:12.150233 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" virtual=false
2025-08-13T20:09:12.152383816+00:00 stderr F I0813 20:09:12.150263 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]"
2025-08-13T20:09:12.152410617+00:00 stderr F I0813 20:09:12.150432 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]"
2025-08-13T20:09:12.152436268+00:00 stderr F I0813 20:09:12.150476 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}]
2025-08-13T20:09:12.152518170+00:00 stderr F I0813 20:09:12.150517 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251920, uid: 11f6ecfd-32e5-4666-a477-30fd544386df]" owner=[{"apiVersion":"batch/v1","kind":"CronJob","name":"collect-profiles","uid":"946673ee-e5bd-418a-934e-c38198674faa","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152577312+00:00 stderr F I0813 20:09:12.150685 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152605922+00:00 stderr F I0813 20:09:12.150722 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152632433+00:00 stderr F I0813 20:09:12.152010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152751747+00:00 stderr F I0813 20:09:12.152675 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" virtual=false
2025-08-13T20:09:12.153314163+00:00 stderr F I0813 20:09:12.153198 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.153314163+00:00 stderr F I0813 20:09:12.153248 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" virtual=false
2025-08-13T20:09:12.153599511+00:00 stderr F I0813 20:09:12.153500 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" virtual=false
2025-08-13T20:09:12.155172536+00:00 stderr F I0813 20:09:12.155128 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" virtual=false
2025-08-13T20:09:12.157216495+00:00 stderr F I0813 20:09:12.157163 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" virtual=false
2025-08-13T20:09:12.159301594+00:00 stderr F I0813 20:09:12.159273 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" virtual=false
2025-08-13T20:09:12.159539891+00:00 stderr F I0813 20:09:12.159480 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]"
2025-08-13T20:09:12.159560312+00:00 stderr F I0813 20:09:12.159551 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" virtual=false
2025-08-13T20:09:12.159893161+00:00 stderr F I0813 20:09:12.159779 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]"
2025-08-13T20:09:12.159893161+00:00 stderr F I0813 20:09:12.159872 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" virtual=false
2025-08-13T20:09:12.160164019+00:00 stderr F I0813 20:09:12.160109 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" virtual=false
2025-08-13T20:09:12.160282903+00:00 stderr F I0813 20:09:12.160235 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" virtual=false
2025-08-13T20:09:12.160880920+00:00 stderr F I0813 20:09:12.160856 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]"
2025-08-13T20:09:12.161076945+00:00 stderr F I0813 20:09:12.161057 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" virtual=false
2025-08-13T20:09:12.161314362+00:00 stderr F I0813 20:09:12.160926 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.161412945+00:00 stderr F I0813 20:09:12.161393 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" virtual=false
2025-08-13T20:09:12.176955860+00:00 stderr F I0813 20:09:12.176871 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.177926438+00:00 stderr F I0813 20:09:12.177854 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177152 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator-ext-remediation, uid: b645a3b5-011d-4a2c-ac12-008057781b22]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177168 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177350 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177379 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178125084+00:00 stderr F I0813 20:09:12.177707 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178340290+00:00 stderr F I0813 20:09:12.178315 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" virtual=false
2025-08-13T20:09:12.178518585+00:00 stderr F I0813 20:09:12.178500 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-control-plane-metrics, uid: 3461a67f-3c88-4a19-b593-448a4dadfbeb]" virtual=false
2025-08-13T20:09:12.181700677+00:00 stderr F I0813 20:09:12.181587 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-node, uid: d8379b36-ded1-48dc-a90a-ce7085a877fa]" virtual=false
2025-08-13T20:09:12.181861941+00:00 stderr F I0813 20:09:12.181764 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" virtual=false
2025-08-13T20:09:12.182004805+00:00 stderr F I0813 20:09:12.181957 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" virtual=false
2025-08-13T20:09:12.182129669+00:00 stderr F I0813 20:09:12.182066 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" virtual=false
2025-08-13T20:09:12.182449348+00:00 stderr F I0813 20:09:12.182425 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" virtual=false
2025-08-13T20:09:12.192205018+00:00 stderr F I0813 20:09:12.192096 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.192205018+00:00 stderr F I0813 20:09:12.192160 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" virtual=false
2025-08-13T20:09:12.192485266+00:00 stderr F I0813 20:09:12.192441 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.192499226+00:00 stderr F I0813 20:09:12.192484 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" virtual=false
2025-08-13T20:09:12.192868117+00:00 stderr F I0813 20:09:12.192841 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.193029261+00:00 stderr F I0813 20:09:12.192970 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e7e7195-0445-4e7a-b3aa-598d0e9c8ba2]" virtual=false
2025-08-13T20:09:12.193188676+00:00 stderr F I0813 20:09:12.192860 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.193203056+00:00 stderr F I0813 20:09:12.193192 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" virtual=false
2025-08-13T20:09:12.193548976+00:00 stderr F I0813 20:09:12.193526 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.193615328+00:00 stderr F I0813 20:09:12.193599 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" virtual=false
2025-08-13T20:09:12.193941588+00:00 stderr F I0813 20:09:12.193854 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.193963218+00:00 stderr F I0813 20:09:12.193951 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" virtual=false
2025-08-13T20:09:12.194042570+00:00 stderr F I0813 20:09:12.194024 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.194093642+00:00 stderr F I0813 20:09:12.194079 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" virtual=false
2025-08-13T20:09:12.194156014+00:00 stderr F I0813 20:09:12.194104 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.194156014+00:00 stderr F I0813 20:09:12.194126 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.194169124+00:00 stderr F I0813 20:09:12.194156 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" virtual=false
2025-08-13T20:09:12.194290668+00:00 stderr F I0813 20:09:12.194243 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" virtual=false
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194252 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194378 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" virtual=false
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194489 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194509 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" virtual=false
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.195007 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.195045 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" virtual=false
2025-08-13T20:09:12.196044708+00:00 stderr F I0813 20:09:12.196019 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.196108950+00:00 stderr F I0813 20:09:12.196094 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" virtual=false
2025-08-13T20:09:12.208664920+00:00 stderr F I0813 20:09:12.208603 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-node, uid: d8379b36-ded1-48dc-a90a-ce7085a877fa]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.210105881+00:00 stderr F I0813 20:09:12.208749 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" virtual=false
2025-08-13T20:09:12.210307237+00:00 stderr F I0813 20:09:12.210283 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.210369729+00:00 stderr F I0813 20:09:12.210355 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" virtual=false
2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216074 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216092 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-control-plane-metrics, uid: 3461a67f-3c88-4a19-b593-448a4dadfbeb]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216132 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" virtual=false
2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216171 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" virtual=false
2025-08-13T20:09:12.216833614+00:00 stderr F I0813 20:09:12.216740 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.216946657+00:00 stderr F I0813 20:09:12.216893 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" virtual=false
2025-08-13T20:09:12.217211575+00:00 stderr F I0813 20:09:12.217190 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.217284797+00:00 stderr F I0813 20:09:12.217230 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.217297887+00:00 stderr F I0813 20:09:12.217282 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator, uid: e156be76-80be-4eff-8005-7c24938303ae]" virtual=false
2025-08-13T20:09:12.217334648+00:00 stderr F I0813 20:09:12.217320 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]" virtual=false
2025-08-13T20:09:12.217563315+00:00 stderr F I0813 20:09:12.217197 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.217633817+00:00 stderr F I0813 20:09:12.217617 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" virtual=false
2025-08-13T20:09:12.217772131+00:00 stderr F I0813 20:09:12.217549 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.218021748+00:00 stderr F I0813 20:09:12.217975 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.218057489+00:00 stderr F I0813 20:09:12.218019 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" virtual=false
2025-08-13T20:09:12.218309466+00:00 stderr F I0813 20:09:12.218252 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" virtual=false
2025-08-13T20:09:12.218436530+00:00 stderr F I0813 20:09:12.217761 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.218487231+00:00 stderr F I0813 20:09:12.218473 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" virtual=false
2025-08-13T20:09:12.220758446+00:00 stderr F I0813 20:09:12.219119 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.220758446+00:00 stderr F I0813 20:09:12.219254 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 030283b3-acfe-40ed-811c-d9f7f79607f6]" virtual=false
2025-08-13T20:09:12.223625759+00:00 stderr F I0813 20:09:12.223593 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.223704391+00:00 stderr F I0813 20:09:12.223690 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" virtual=false
2025-08-13T20:09:12.223897986+00:00 stderr F I0813 20:09:12.223767 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.223897986+00:00 stderr F I0813 20:09:12.223890 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" virtual=false
2025-08-13T20:09:12.224086352+00:00 stderr F I0813 20:09:12.223641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e7e7195-0445-4e7a-b3aa-598d0e9c8ba2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.224147064+00:00 stderr F I0813 20:09:12.224131 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" virtual=false
2025-08-13T20:09:12.224251647+00:00 stderr F I0813 20:09:12.223716 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.224328929+00:00 stderr F I0813 20:09:12.224286 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" virtual=false
2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.225486 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.225542 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" virtual=false
2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.226890 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]"
2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.226954 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" virtual=false
2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.227110 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]"
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.227128 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" virtual=false 2025-08-13T20:09:12.228750075+00:00 stderr F I0813 20:09:12.215968 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.228766906+00:00 stderr F I0813 20:09:12.228754 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" virtual=false 2025-08-13T20:09:12.229110386+00:00 stderr F I0813 20:09:12.229047 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.229110386+00:00 stderr F I0813 20:09:12.229090 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" virtual=false 2025-08-13T20:09:12.240537414+00:00 stderr F I0813 20:09:12.240469 1 garbagecollector.go:615] "item has at least one existing owner, 
will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.240687398+00:00 stderr F I0813 20:09:12.240667 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" virtual=false 2025-08-13T20:09:12.241118140+00:00 stderr F I0813 20:09:12.241091 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.241193892+00:00 stderr F I0813 20:09:12.241176 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" virtual=false 2025-08-13T20:09:12.241581703+00:00 stderr F I0813 20:09:12.241555 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.241669886+00:00 stderr F I0813 20:09:12.241651 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: 
f8e1888b-9575-400b-83eb-6574023cf8d5]" virtual=false 2025-08-13T20:09:12.242099178+00:00 stderr F I0813 20:09:12.242070 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator, uid: e156be76-80be-4eff-8005-7c24938303ae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.242211792+00:00 stderr F I0813 20:09:12.242191 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" virtual=false 2025-08-13T20:09:12.242457019+00:00 stderr F I0813 20:09:12.240588 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.242753617+00:00 stderr F I0813 20:09:12.242730 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" virtual=false 2025-08-13T20:09:12.248399919+00:00 stderr F I0813 20:09:12.248316 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.248445300+00:00 stderr F I0813 20:09:12.248396 1 
garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" virtual=false 2025-08-13T20:09:12.249083479+00:00 stderr F I0813 20:09:12.249057 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.249158521+00:00 stderr F I0813 20:09:12.249141 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" virtual=false 2025-08-13T20:09:12.249591473+00:00 stderr F I0813 20:09:12.249464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.249591473+00:00 stderr F I0813 20:09:12.249531 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" virtual=false 2025-08-13T20:09:12.249687706+00:00 stderr F I0813 20:09:12.249665 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.249750918+00:00 stderr F I0813 20:09:12.249734 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" virtual=false 2025-08-13T20:09:12.250150769+00:00 stderr F I0813 20:09:12.250125 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250220171+00:00 stderr F I0813 20:09:12.250203 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" virtual=false 2025-08-13T20:09:12.250331984+00:00 stderr F I0813 20:09:12.250219 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250386216+00:00 stderr F I0813 20:09:12.250371 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" virtual=false 2025-08-13T20:09:12.250673734+00:00 stderr F I0813 20:09:12.250614 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 
030283b3-acfe-40ed-811c-d9f7f79607f6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250673734+00:00 stderr F I0813 20:09:12.250666 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" virtual=false 2025-08-13T20:09:12.250761387+00:00 stderr F I0813 20:09:12.250741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250876190+00:00 stderr F I0813 20:09:12.250854 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" virtual=false 2025-08-13T20:09:12.251012814+00:00 stderr F I0813 20:09:12.250955 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.251098046+00:00 stderr F I0813 20:09:12.250874 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.251133857+00:00 stderr F I0813 20:09:12.250766 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2025-08-13T20:09:12.251257711+00:00 stderr F I0813 20:09:12.251203 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" virtual=false 2025-08-13T20:09:12.251440246+00:00 stderr F I0813 20:09:12.251394 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" virtual=false 2025-08-13T20:09:12.251581170+00:00 stderr F I0813 20:09:12.251502 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" virtual=false 2025-08-13T20:09:12.251702794+00:00 stderr F I0813 20:09:12.251649 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.251718564+00:00 stderr F I0813 20:09:12.251700 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 
3e887cd0-b481-460c-b943-d944dc64df2f]" virtual=false 2025-08-13T20:09:12.252037803+00:00 stderr F I0813 20:09:12.251978 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.252037803+00:00 stderr F I0813 20:09:12.252008 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" virtual=false 2025-08-13T20:09:12.266836177+00:00 stderr F I0813 20:09:12.266683 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.266836177+00:00 stderr F I0813 20:09:12.266752 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" virtual=false 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.267722 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.267777 1 garbagecollector.go:549] "Processing item" 
item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" virtual=false 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.269128 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: f8e1888b-9575-400b-83eb-6574023cf8d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.269153 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" virtual=false 2025-08-13T20:09:12.275197617+00:00 stderr F I0813 20:09:12.275109 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.275197617+00:00 stderr F I0813 20:09:12.275171 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" virtual=false 2025-08-13T20:09:12.275375472+00:00 stderr F I0813 20:09:12.275237 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:12.275375472+00:00 stderr F I0813 20:09:12.275290 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" virtual=false 2025-08-13T20:09:12.275645460+00:00 stderr F I0813 20:09:12.275588 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 3e887cd0-b481-460c-b943-d944dc64df2f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.275725782+00:00 stderr F I0813 20:09:12.275686 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" virtual=false 2025-08-13T20:09:12.276086313+00:00 stderr F I0813 20:09:12.276002 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.276086313+00:00 stderr F I0813 20:09:12.276046 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" virtual=false 2025-08-13T20:09:12.276288369+00:00 stderr F I0813 20:09:12.276222 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.276288369+00:00 stderr F I0813 20:09:12.276276 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" virtual=false 2025-08-13T20:09:12.284424682+00:00 stderr F I0813 20:09:12.284287 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.284424682+00:00 stderr F I0813 20:09:12.284361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-iptables-alerter, uid: 827450c1-83c3-45d0-9aa5-fbb86a2ae6a5]" virtual=false 2025-08-13T20:09:12.284479383+00:00 stderr F I0813 20:09:12.284447 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284479383+00:00 stderr F I0813 20:09:12.284471 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-node-limited, uid: f845abac-4b6d-49f7-ad14-ea5802490663]" virtual=false 2025-08-13T20:09:12.284707390+00:00 stderr F I0813 20:09:12.284607 1 garbagecollector.go:615] "item has at least one existing owner, will 
not garbage collect" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284707390+00:00 stderr F I0813 20:09:12.284662 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount-anyuid, uid: a16549a7-7bfd-45d7-b114-aea9597226a2]" virtual=false 2025-08-13T20:09:12.284960887+00:00 stderr F I0813 20:09:12.284851 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284960887+00:00 stderr F I0813 20:09:12.284925 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-view, uid: 764fb58b-036b-45ae-97f9-281416353daf]" virtual=false 2025-08-13T20:09:12.285163303+00:00 stderr F I0813 20:09:12.285098 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285163303+00:00 stderr F I0813 20:09:12.285148 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-ancillary-tools, uid: 6b5b3d0e-daa2-4431-9b3a-17f2ad98c6cd]" virtual=false 
2025-08-13T20:09:12.285247625+00:00 stderr F I0813 20:09:12.284660 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285326448+00:00 stderr F I0813 20:09:12.285252 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator-namespaced, uid: 64176e20-57cd-450a-82b8-c734edcf2055]" virtual=false 2025-08-13T20:09:12.285343518+00:00 stderr F I0813 20:09:12.285329 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285381289+00:00 stderr F I0813 20:09:12.285350 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-edit, uid: 945e6f8b-1604-4556-a7bf-195fa62d6c14]" virtual=false 2025-08-13T20:09:12.285605626+00:00 stderr F I0813 20:09:12.285545 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285605626+00:00 stderr F I0813 20:09:12.285594 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, 
namespace: , name: cluster-autoscaler-operator:cluster-reader, uid: 0632bc2e-8683-447f-866f-dc2c3f20dbaa]" virtual=false 2025-08-13T20:09:12.285620386+00:00 stderr F I0813 20:09:12.285602 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285629906+00:00 stderr F I0813 20:09:12.285622 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-operator, uid: dc1b4ca4-3dce-4e4f-b6d5-d095e346f78d]" virtual=false 2025-08-13T20:09:12.286560013+00:00 stderr F I0813 20:09:12.286478 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.286560013+00:00 stderr F I0813 20:09:12.286527 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-proxy-reader, uid: 4f462e59-f589-4aa7-8140-854302a78457]" virtual=false 2025-08-13T20:09:12.286871272+00:00 stderr F I0813 20:09:12.286741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 
2025-08-13T20:09:12.287074038+00:00 stderr F I0813 20:09:12.286935 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-admission-controller-webhook, uid: 8512c398-7b2e-4b14-9cdd-dfe72f813153]" virtual=false 2025-08-13T20:09:12.287955653+00:00 stderr F I0813 20:09:12.287194 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.287955653+00:00 stderr F I0813 20:09:12.287244 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator:cluster-reader, uid: b6fdcbdf-86e5-40c5-9a3a-5dc05d389718]" virtual=false 2025-08-13T20:09:12.293719958+00:00 stderr F I0813 20:09:12.293056 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.293719958+00:00 stderr F I0813 20:09:12.293155 1 garbagecollector.go:549] "Processing item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]" virtual=false 2025-08-13T20:09:12.301522992+00:00 stderr F I0813 20:09:12.297237 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.301522992+00:00 stderr F I0813 20:09:12.297307 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: whereabouts-cni, uid: 88054557-80c4-48d2-a55d-ab10752a9270]" virtual=false 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.310588 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-admission-controller-webhook, uid: 8512c398-7b2e-4b14-9cdd-dfe72f813153]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.310661 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot-v2, uid: 6efcc9e9-c5e4-4315-940a-636bac274a19]" virtual=false 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311587 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311615 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: control-plane-machine-set-operator, uid: a39c14db-78b4-4b58-8651-479169195296]" virtual=false 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311965 1 garbagecollector.go:615] "item has at least one 
existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-node-limited, uid: f845abac-4b6d-49f7-ad14-ea5802490663]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311994 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostaccess, uid: ae10afa1-d594-42ec-9bf7-626463cb1630]" virtual=false 2025-08-13T20:09:12.312497737+00:00 stderr F I0813 20:09:12.312423 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-ancillary-tools, uid: 6b5b3d0e-daa2-4431-9b3a-17f2ad98c6cd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.312497737+00:00 stderr F I0813 20:09:12.312482 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: fab3e514-6318-4108-8ef6-378758fbbc7e]" virtual=false 2025-08-13T20:09:12.312830996+00:00 stderr F I0813 20:09:12.312725 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.312961070+00:00 stderr F I0813 20:09:12.312835 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, 
namespace: , name: system:openshift:scc:restricted-v2, uid: 32267ecd-433d-41f4-a3f0-a0b5b8f32162]" virtual=false 2025-08-13T20:09:12.313132905+00:00 stderr F I0813 20:09:12.313098 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator-namespaced, uid: 64176e20-57cd-450a-82b8-c734edcf2055]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.313132905+00:00 stderr F I0813 20:09:12.313125 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator, uid: 71adaeb8-9332-4dc7-b8b2-52415e589919]" virtual=false 2025-08-13T20:09:12.313529186+00:00 stderr F I0813 20:09:12.313444 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.313680291+00:00 stderr F I0813 20:09:12.313662 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-config-operator:cluster-reader, uid: 3162d1be-8f00-4957-8db6-a2b1361aaedf]" virtual=false 2025-08-13T20:09:12.313748242+00:00 stderr F I0813 20:09:12.313512 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.313867126+00:00 stderr F I0813 20:09:12.313760 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation-aggregation, uid: 726262c4-2098-4379-a985-90474531ff53]" virtual=false 2025-08-13T20:09:12.314473653+00:00 stderr F I0813 20:09:12.313558 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]" 2025-08-13T20:09:12.314575056+00:00 stderr F I0813 20:09:12.314488 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler, uid: 329dfd1b-be44-4c8b-b72d-d32f8fab8705]" virtual=false 2025-08-13T20:09:12.320078794+00:00 stderr F I0813 20:09:12.320053 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount-anyuid, uid: a16549a7-7bfd-45d7-b114-aea9597226a2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.320142526+00:00 stderr F I0813 20:09:12.320128 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:controller:operator-lifecycle-manager, uid: 2b5865fd-ac96-41b7-96bc-48cce5469705]" virtual=false 2025-08-13T20:09:12.331265975+00:00 stderr F I0813 20:09:12.331199 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: 
cluster-autoscaler-operator:cluster-reader, uid: 0632bc2e-8683-447f-866f-dc2c3f20dbaa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.331377248+00:00 stderr F I0813 20:09:12.331361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator, uid: 99f02060-bf1c-484c-8c17-2b1243086e3f]" virtual=false 2025-08-13T20:09:12.332702606+00:00 stderr F I0813 20:09:12.332677 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.332766778+00:00 stderr F I0813 20:09:12.332752 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork, uid: 7fa3d83a-0d76-405a-a7b9-190741931396]" virtual=false 2025-08-13T20:09:12.342028193+00:00 stderr F I0813 20:09:12.341973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-proxy-reader, uid: 4f462e59-f589-4aa7-8140-854302a78457]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.342183748+00:00 stderr F I0813 20:09:12.342168 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork-v2, uid: ad07fee7-e5d1-4a09-8086-698133feb11a]" virtual=false 
2025-08-13T20:09:12.349950290+00:00 stderr F I0813 20:09:12.349734 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.349950290+00:00 stderr F I0813 20:09:12.349851 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: net-attach-def-project, uid: 7c380190-b6b7-4bba-8d3d-6bda6c81bc8e]" virtual=false 2025-08-13T20:09:12.350963259+00:00 stderr F I0813 20:09:12.350931 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: whereabouts-cni, uid: 88054557-80c4-48d2-a55d-ab10752a9270]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.351649619+00:00 stderr F I0813 20:09:12.351627 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-diagnostics, uid: 6e739f62-fe8b-4f65-8df1-a50711c9b496]" virtual=false 2025-08-13T20:09:12.351864935+00:00 stderr F I0813 20:09:12.351084 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-iptables-alerter, uid: 827450c1-83c3-45d0-9aa5-fbb86a2ae6a5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.351967908+00:00 stderr F I0813 20:09:12.351945 1 garbagecollector.go:549] 
"Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-dns-operator, uid: d1456a44-d6b7-4418-911a-f4bbaf6427c4]" virtual=false 2025-08-13T20:09:12.352274527+00:00 stderr F I0813 20:09:12.351370 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-operator, uid: dc1b4ca4-3dce-4e4f-b6d5-d095e346f78d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.352330339+00:00 stderr F I0813 20:09:12.352316 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console, uid: 8c434f71-b240-42c1-88a4-a6fbc903b388]" virtual=false 2025-08-13T20:09:12.352467823+00:00 stderr F I0813 20:09:12.351407 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-view, uid: 764fb58b-036b-45ae-97f9-281416353daf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.352522534+00:00 stderr F I0813 20:09:12.352506 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: helm-chartrepos-viewer, uid: d3d3a0df-2feb-4db3-b436-5b73b4c151eb]" virtual=false 2025-08-13T20:09:12.356563010+00:00 stderr F I0813 20:09:12.356532 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-edit, uid: 945e6f8b-1604-4556-a7bf-195fa62d6c14]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.356750355+00:00 stderr F I0813 20:09:12.356732 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator, uid: dbe391a0-135a-4375-82db-699a0d88ce8e]" virtual=false 2025-08-13T20:09:12.357100215+00:00 stderr F I0813 20:09:12.357078 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator:cluster-reader, uid: b6fdcbdf-86e5-40c5-9a3a-5dc05d389718]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.357158247+00:00 stderr F I0813 20:09:12.357144 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: operatorhub-config-reader, uid: e2651db3-cf40-4b1c-a7da-ae197e936593]" virtual=false 2025-08-13T20:09:12.364557999+00:00 stderr F I0813 20:09:12.364447 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: fab3e514-6318-4108-8ef6-378758fbbc7e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.364761565+00:00 stderr F I0813 20:09:12.364698 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:controller:machine-approver, uid: 672519e3-9ef9-4c9e-a091-7c4b0d3de1c4]" virtual=false 2025-08-13T20:09:12.372323552+00:00 stderr F I0813 20:09:12.372201 1 garbagecollector.go:615] "item has at 
least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator, uid: 71adaeb8-9332-4dc7-b8b2-52415e589919]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.372548008+00:00 stderr F I0813 20:09:12.372528 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot, uid: b76bf753-d87c-45a9-91db-eea6f389f4d5]" virtual=false 2025-08-13T20:09:12.374115013+00:00 stderr F I0813 20:09:12.374089 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler, uid: 329dfd1b-be44-4c8b-b72d-d32f8fab8705]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.374261918+00:00 stderr F I0813 20:09:12.374244 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:anyuid, uid: 74815ef0-3c58-4271-9c93-5c834b5a10e5]" virtual=false 2025-08-13T20:09:12.375837833+00:00 stderr F I0813 20:09:12.375616 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation-aggregation, uid: 726262c4-2098-4379-a985-90474531ff53]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.375837833+00:00 stderr F I0813 20:09:12.375706 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: 
openshift-ovn-kubernetes-kube-rbac-proxy, uid: 5402260d-a88f-4905-b91a-c0ec390d8675]" virtual=false 2025-08-13T20:09:12.376448560+00:00 stderr F I0813 20:09:12.376404 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted-v2, uid: 32267ecd-433d-41f4-a3f0-a0b5b8f32162]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.376448560+00:00 stderr F I0813 20:09:12.376439 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: metrics-daemon-role, uid: 199383b0-9cef-4533-81a7-f22f011a69a5]" virtual=false 2025-08-13T20:09:12.378645203+00:00 stderr F I0813 20:09:12.378560 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork, uid: 7fa3d83a-0d76-405a-a7b9-190741931396]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.378838279+00:00 stderr F I0813 20:09:12.378746 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-node-identity, uid: c11a15d3-53e9-40f6-b868-c05634ec19ff]" virtual=false 2025-08-13T20:09:12.381070403+00:00 stderr F I0813 20:09:12.381048 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot-v2, uid: 6efcc9e9-c5e4-4315-940a-636bac274a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:12.381133485+00:00 stderr F I0813 20:09:12.381119 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers, uid: 9206794b-b0af-49c1-a4fb-787d45f3f1f8]" virtual=false 2025-08-13T20:09:12.396499335+00:00 stderr F I0813 20:09:12.396407 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: control-plane-machine-set-operator, uid: a39c14db-78b4-4b58-8651-479169195296]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.396701841+00:00 stderr F I0813 20:09:12.396623 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-image-registry-operator, uid: 6edd083b-28fe-4808-aa2e-67cd5ba297ab]" virtual=false 2025-08-13T20:09:12.397560596+00:00 stderr F I0813 20:09:12.397538 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-diagnostics, uid: 6e739f62-fe8b-4f65-8df1-a50711c9b496]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.397691069+00:00 stderr F I0813 20:09:12.397673 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: ff646139-2b95-4409-9ed8-321d5912f92e]" virtual=false 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.403536 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: 
system:controller:operator-lifecycle-manager, uid: 2b5865fd-ac96-41b7-96bc-48cce5469705]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.403659 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: marketplace-operator, uid: f411d32e-1a09-441c-99dd-7b75e5b87298]" virtual=false 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404050 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostaccess, uid: ae10afa1-d594-42ec-9bf7-626463cb1630]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404076 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ingress-operator, uid: e3a75357-5a06-4934-a482-0d77f1fbb9b2]" virtual=false 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404267 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-config-operator:cluster-reader, uid: 3162d1be-8f00-4957-8db6-a2b1361aaedf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.404432743+00:00 stderr F I0813 20:09:12.404291 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus, uid: 6e9e0a19-df2a-431c-a2d8-e73b63d4f45c]" virtual=false 2025-08-13T20:09:12.406146462+00:00 stderr F 
I0813 20:09:12.406090 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: helm-chartrepos-viewer, uid: d3d3a0df-2feb-4db3-b436-5b73b4c151eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.406170052+00:00 stderr F I0813 20:09:12.406140 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation, uid: 4f414358-8053-4a3d-b8f5-95980f16dda0]" virtual=false 2025-08-13T20:09:12.406757639+00:00 stderr F I0813 20:09:12.406726 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console, uid: 8c434f71-b240-42c1-88a4-a6fbc903b388]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.406927144+00:00 stderr F I0813 20:09:12.406882 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount, uid: 43c793ea-8f8d-4996-bb74-5e312be44b1a]" virtual=false 2025-08-13T20:09:12.408725406+00:00 stderr F I0813 20:09:12.408614 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork-v2, uid: ad07fee7-e5d1-4a09-8086-698133feb11a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.410531157+00:00 stderr F I0813 20:09:12.410405 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:machine-config-operator:cluster-reader, uid: 1fda91fe-aa05-455a-8865-3034f4e4cff8]" virtual=false 2025-08-13T20:09:12.412443682+00:00 stderr F I0813 20:09:12.412409 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator, uid: 99f02060-bf1c-484c-8c17-2b1243086e3f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.412522745+00:00 stderr F I0813 20:09:12.412508 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-samples-operator:cluster-reader, uid: b1a93bf7-4bf4-4d81-a69b-76fb07155d62]" virtual=false 2025-08-13T20:09:12.416593851+00:00 stderr F I0813 20:09:12.416547 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator, uid: dbe391a0-135a-4375-82db-699a0d88ce8e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.416688384+00:00 stderr F I0813 20:09:12.416673 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-cluster-reader, uid: 2d3a058f-4b36-405a-8d7a-38d4fac9bbdf]" virtual=false 2025-08-13T20:09:12.417007953+00:00 stderr F I0813 20:09:12.416983 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-dns-operator, uid: d1456a44-d6b7-4418-911a-f4bbaf6427c4]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.417067905+00:00 stderr F I0813 20:09:12.417054 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: project-helm-chartrepository-editor, uid: 86c81b4a-a328-46e4-a36e-736a9454eb6d]" virtual=false 2025-08-13T20:09:12.417556739+00:00 stderr F I0813 20:09:12.417534 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: metrics-daemon-role, uid: 199383b0-9cef-4533-81a7-f22f011a69a5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.417611870+00:00 stderr F I0813 20:09:12.417598 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: registry-monitoring, uid: df05d985-dbd7-487d-b14e-5f77e9a774d1]" virtual=false 2025-08-13T20:09:12.419447023+00:00 stderr F I0813 20:09:12.418165 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot, uid: b76bf753-d87c-45a9-91db-eea6f389f4d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.419447023+00:00 stderr F I0813 20:09:12.418219 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted, uid: 75ded65f-c56d-483a-aa78-9a3dfb682f3a]" virtual=false 2025-08-13T20:09:12.422225643+00:00 stderr F I0813 20:09:12.422075 1 garbagecollector.go:615] "item has at least one 
existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: net-attach-def-project, uid: 7c380190-b6b7-4bba-8d3d-6bda6c81bc8e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.422225643+00:00 stderr F I0813 20:09:12.422215       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: prometheus-k8s-scheduler-resources, uid: c2078daf-5c4f-48e6-b914-b4ca03df3cb9]" virtual=false
2025-08-13T20:09:12.423494039+00:00 stderr F I0813 20:09:12.423389       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: operatorhub-config-reader, uid: e2651db3-cf40-4b1c-a7da-ae197e936593]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.423494039+00:00 stderr F I0813 20:09:12.423460       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 8ee51cdb-bf4d-4e61-9535-b23c9dd08843]" virtual=false
2025-08-13T20:09:12.428961936+00:00 stderr F I0813 20:09:12.428863       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: ff646139-2b95-4409-9ed8-321d5912f92e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.429135791+00:00 stderr F I0813 20:09:12.429078       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-extensions-reader, uid: ecabf9a2-18e2-45e0-9591-e2a9b4363684]" virtual=false
2025-08-13T20:09:12.435999688+00:00 stderr F I0813 20:09:12.435876       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:controller:machine-approver, uid: 672519e3-9ef9-4c9e-a091-7c4b0d3de1c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.436231434+00:00 stderr F I0813 20:09:12.436211       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:privileged, uid: 301bd3c5-2517-462f-bb39-9758c8065aa4]" virtual=false
2025-08-13T20:09:12.438677634+00:00 stderr F I0813 20:09:12.438616       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-image-registry-operator, uid: 6edd083b-28fe-4808-aa2e-67cd5ba297ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.438760677+00:00 stderr F I0813 20:09:12.438742       1 garbagecollector.go:549] "Processing item" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" virtual=false
2025-08-13T20:09:12.441166536+00:00 stderr F I0813 20:09:12.441086       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: marketplace-operator, uid: f411d32e-1a09-441c-99dd-7b75e5b87298]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.441266709+00:00 stderr F I0813 20:09:12.441249       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" virtual=false
2025-08-13T20:09:12.450292927+00:00 stderr F I0813 20:09:12.450201       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-kube-rbac-proxy, uid: 5402260d-a88f-4905-b91a-c0ec390d8675]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.451266095+00:00 stderr F I0813 20:09:12.451242       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-node-identity, uid: c11a15d3-53e9-40f6-b868-c05634ec19ff]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.451402359+00:00 stderr F I0813 20:09:12.451385       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: authentication-operator-config, uid: b02c5a6c-aa5e-45ae-9058-5573f870c452]" virtual=false
2025-08-13T20:09:12.452016007+00:00 stderr F I0813 20:09:12.451992       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" virtual=false
2025-08-13T20:09:12.454326373+00:00 stderr F I0813 20:09:12.454300       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers, uid: 9206794b-b0af-49c1-a4fb-787d45f3f1f8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.454392435+00:00 stderr F I0813 20:09:12.454377       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]" virtual=false
2025-08-13T20:09:12.454642582+00:00 stderr F I0813 20:09:12.454594       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:anyuid, uid: 74815ef0-3c58-4271-9c93-5c834b5a10e5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.454700504+00:00 stderr F I0813 20:09:12.454686       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" virtual=false
2025-08-13T20:09:12.455150337+00:00 stderr F I0813 20:09:12.454987       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus, uid: 6e9e0a19-df2a-431c-a2d8-e73b63d4f45c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.455653451+00:00 stderr F I0813 20:09:12.455633       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]" virtual=false
2025-08-13T20:09:12.455999951+00:00 stderr F I0813 20:09:12.455969       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation, uid: 4f414358-8053-4a3d-b8f5-95980f16dda0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.456096404+00:00 stderr F I0813 20:09:12.456078       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" virtual=false
2025-08-13T20:09:12.457175995+00:00 stderr F I0813 20:09:12.457152       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:machine-config-operator:cluster-reader, uid: 1fda91fe-aa05-455a-8865-3034f4e4cff8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.457244887+00:00 stderr F I0813 20:09:12.457226       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]" virtual=false
2025-08-13T20:09:12.477003863+00:00 stderr F I0813 20:09:12.476842       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount, uid: 43c793ea-8f8d-4996-bb74-5e312be44b1a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.477003863+00:00 stderr F I0813 20:09:12.476946       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" virtual=false
2025-08-13T20:09:12.477135457+00:00 stderr F I0813 20:09:12.477107       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-cluster-reader, uid: 2d3a058f-4b36-405a-8d7a-38d4fac9bbdf]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.477344733+00:00 stderr F I0813 20:09:12.477269       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]"
2025-08-13T20:09:12.477344733+00:00 stderr F I0813 20:09:12.477322       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" virtual=false
2025-08-13T20:09:12.477401165+00:00 stderr F I0813 20:09:12.477384       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" virtual=false
2025-08-13T20:09:12.477833677+00:00 stderr F I0813 20:09:12.477752       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]"
2025-08-13T20:09:12.477854728+00:00 stderr F I0813 20:09:12.477844       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" virtual=false
2025-08-13T20:09:12.478324681+00:00 stderr F I0813 20:09:12.478265       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ingress-operator, uid: e3a75357-5a06-4934-a482-0d77f1fbb9b2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.478461955+00:00 stderr F I0813 20:09:12.478443       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" virtual=false
2025-08-13T20:09:12.479538086+00:00 stderr F I0813 20:09:12.479505       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-extensions-reader, uid: ecabf9a2-18e2-45e0-9591-e2a9b4363684]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.479538086+00:00 stderr F I0813 20:09:12.479532       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" virtual=false
2025-08-13T20:09:12.493480686+00:00 stderr F I0813 20:09:12.493361       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.493740073+00:00 stderr F I0813 20:09:12.493643       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" virtual=false
2025-08-13T20:09:12.498699055+00:00 stderr F I0813 20:09:12.498093       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: project-helm-chartrepository-editor, uid: 86c81b4a-a328-46e4-a36e-736a9454eb6d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.498699055+00:00 stderr F I0813 20:09:12.498165       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" virtual=false
2025-08-13T20:09:12.501325531+00:00 stderr F I0813 20:09:12.501260       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.501400523+00:00 stderr F I0813 20:09:12.501324       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" virtual=false
2025-08-13T20:09:12.508439585+00:00 stderr F I0813 20:09:12.508356       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 8ee51cdb-bf4d-4e61-9535-b23c9dd08843]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.508491956+00:00 stderr F I0813 20:09:12.508443       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" virtual=false
2025-08-13T20:09:12.508763754+00:00 stderr F I0813 20:09:12.508721       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-samples-operator:cluster-reader, uid: b1a93bf7-4bf4-4d81-a69b-76fb07155d62]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.508817975+00:00 stderr F I0813 20:09:12.508768       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" virtual=false
2025-08-13T20:09:12.522491778+00:00 stderr F I0813 20:09:12.522362       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted, uid: 75ded65f-c56d-483a-aa78-9a3dfb682f3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.522491778+00:00 stderr F I0813 20:09:12.522440       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-node-limited, uid: 991c2e92-733c-4b46-b14a-a76232a62c05]" virtual=false
2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.526882       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]"
2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.526958       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" virtual=false
2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.528365       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:privileged, uid: 301bd3c5-2517-462f-bb39-9758c8065aa4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.528393       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" virtual=false
2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.529008       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: registry-monitoring, uid: df05d985-dbd7-487d-b14e-5f77e9a774d1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.529033       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" virtual=false
2025-08-13T20:09:12.550622954+00:00 stderr F I0813 20:09:12.550550       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-node-limited, uid: 991c2e92-733c-4b46-b14a-a76232a62c05]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.550622954+00:00 stderr F I0813 20:09:12.550598       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" virtual=false
2025-08-13T20:09:12.551651474+00:00 stderr F I0813 20:09:12.551566       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.551942492+00:00 stderr F I0813 20:09:12.551755       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" virtual=false
2025-08-13T20:09:12.552365464+00:00 stderr F I0813 20:09:12.552307       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: authentication-operator-config, uid: b02c5a6c-aa5e-45ae-9058-5573f870c452]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.552365464+00:00 stderr F I0813 20:09:12.552355       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" virtual=false
2025-08-13T20:09:12.553244169+00:00 stderr F I0813 20:09:12.553152       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.553244169+00:00 stderr F I0813 20:09:12.553197       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" virtual=false
2025-08-13T20:09:12.553442745+00:00 stderr F I0813 20:09:12.553420       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: prometheus-k8s-scheduler-resources, uid: c2078daf-5c4f-48e6-b914-b4ca03df3cb9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.553509177+00:00 stderr F I0813 20:09:12.553491       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" virtual=false
2025-08-13T20:09:12.553583769+00:00 stderr F I0813 20:09:12.553532       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.558109069+00:00 stderr F I0813 20:09:12.557092       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.558109069+00:00 stderr F I0813 20:09:12.557146       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" virtual=false
2025-08-13T20:09:12.559074326+00:00 stderr F I0813 20:09:12.553580       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" virtual=false
2025-08-13T20:09:12.559685664+00:00 stderr F I0813 20:09:12.559656       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.559850299+00:00 stderr F I0813 20:09:12.559764       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" virtual=false
2025-08-13T20:09:12.575375254+00:00 stderr F I0813 20:09:12.575307       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.575933170+00:00 stderr F I0813 20:09:12.575871       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" virtual=false
2025-08-13T20:09:12.576147906+00:00 stderr F I0813 20:09:12.575346       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.576231668+00:00 stderr F I0813 20:09:12.576213       1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" virtual=false
2025-08-13T20:09:12.576455695+00:00 stderr F I0813 20:09:12.575368       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.576455695+00:00 stderr F I0813 20:09:12.576446       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" virtual=false
2025-08-13T20:09:12.577232417+00:00 stderr F I0813 20:09:12.575465       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.577383341+00:00 stderr F I0813 20:09:12.577361       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" virtual=false
2025-08-13T20:09:12.577482354+00:00 stderr F I0813 20:09:12.575524       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.577520175+00:00 stderr F I0813 20:09:12.575563       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.577753982+00:00 stderr F I0813 20:09:12.577675       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" virtual=false
2025-08-13T20:09:12.578699689+00:00 stderr F I0813 20:09:12.577633       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" virtual=false
2025-08-13T20:09:12.582263241+00:00 stderr F I0813 20:09:12.581183       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.582263241+00:00 stderr F I0813 20:09:12.581249       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" virtual=false
2025-08-13T20:09:12.588976464+00:00 stderr F I0813 20:09:12.588896       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.589136238+00:00 stderr F I0813 20:09:12.589075       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" virtual=false
2025-08-13T20:09:12.599089704+00:00 stderr F I0813 20:09:12.598960       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.599089704+00:00 stderr F I0813 20:09:12.599058       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" virtual=false
2025-08-13T20:09:12.605089976+00:00 stderr F I0813 20:09:12.605023       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2025-08-13T20:09:12.605274441+00:00 stderr F I0813 20:09:12.605249       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" virtual=false
2025-08-13T20:09:12.608372920+00:00 stderr F I0813 20:09:12.605634       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.608757351+00:00 stderr F I0813 20:09:12.608729       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.608943616+00:00 stderr F I0813 20:09:12.608858       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" virtual=false
2025-08-13T20:09:12.609003888+00:00 stderr F I0813 20:09:12.608983       1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" virtual=false
2025-08-13T20:09:12.609266195+00:00 stderr F I0813 20:09:12.608610       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2025-08-13T20:09:12.609369928+00:00 stderr F I0813 20:09:12.609314       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" virtual=false
2025-08-13T20:09:12.609510002+00:00 stderr F I0813 20:09:12.608682       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.609510002+00:00 stderr F I0813 20:09:12.609497       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" virtual=false
2025-08-13T20:09:12.609528213+00:00 stderr F I0813 20:09:12.609510       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.609539903+00:00 stderr F I0813 20:09:12.609532       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" virtual=false
2025-08-13T20:09:12.609635346+00:00 stderr F I0813 20:09:12.609081       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.609703858+00:00 stderr F I0813 20:09:12.609650       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" virtual=false
2025-08-13T20:09:12.614504206+00:00 stderr F I0813 20:09:12.614402       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.614616709+00:00 stderr F I0813 20:09:12.614597       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" virtual=false
2025-08-13T20:09:12.619052696+00:00 stderr F I0813 20:09:12.617203       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.619052696+00:00 stderr F I0813 20:09:12.617280       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" virtual=false
2025-08-13T20:09:12.636666471+00:00 stderr F I0813 20:09:12.636562       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.636666471+00:00 stderr F I0813 20:09:12.636637       1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" virtual=false
2025-08-13T20:09:12.652250608+00:00 stderr F I0813 20:09:12.645874       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.652250608+00:00 stderr F I0813 20:09:12.645968       1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" virtual=false
2025-08-13T20:09:12.667004601+00:00 stderr F I0813 20:09:12.660727       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.667004601+00:00 stderr F I0813 20:09:12.661014       1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" virtual=false
2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.670689       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]"
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.670818 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" virtual=false 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671445 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" virtual=false 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671686 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671706 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" virtual=false 2025-08-13T20:09:12.752097651+00:00 stderr F I0813 20:09:12.749502 1 garbagecollector.go:615] "item has at least one existing 
owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.752097651+00:00 stderr F I0813 20:09:12.749572 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" virtual=false 2025-08-13T20:09:12.766936966+00:00 stderr F I0813 20:09:12.764520 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.766936966+00:00 stderr F I0813 20:09:12.764614 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 940e7205-a7c2-4202-bda2-b8d6e6cc323e]" virtual=false 2025-08-13T20:09:12.769938302+00:00 stderr F I0813 20:09:12.769769 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.770103907+00:00 stderr F I0813 20:09:12.769978 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" 
virtual=false 2025-08-13T20:09:12.770444817+00:00 stderr F I0813 20:09:12.770423 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.770582391+00:00 stderr F I0813 20:09:12.770564 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" virtual=false 2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781338 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781442 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" virtual=false 2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781986 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.782077 1 
garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" virtual=false 2025-08-13T20:09:12.796726050+00:00 stderr F I0813 20:09:12.795367 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.796726050+00:00 stderr F I0813 20:09:12.795585 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" virtual=false 2025-08-13T20:09:12.807507729+00:00 stderr F I0813 20:09:12.807422 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.809240 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" virtual=false 2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.810134 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.810160 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 92320b12-3d07-45bf-80cb-218961388b77]" virtual=false 2025-08-13T20:09:12.832015512+00:00 stderr F I0813 20:09:12.829524 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.832015512+00:00 stderr F I0813 20:09:12.829567 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" virtual=false 2025-08-13T20:09:12.865013238+00:00 stderr F I0813 20:09:12.864864 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.865013238+00:00 stderr F I0813 20:09:12.864977 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" virtual=false 2025-08-13T20:09:12.873874792+00:00 stderr F I0813 20:09:12.871746 1 garbagecollector.go:615] "item has at least one existing owner, will not 
garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.873874792+00:00 stderr F I0813 20:09:12.871993 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" virtual=false 2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.890509 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.890661 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" virtual=false 2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.892010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.892114 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" 
virtual=false 2025-08-13T20:09:12.895847312+00:00 stderr F I0813 20:09:12.893068 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 940e7205-a7c2-4202-bda2-b8d6e6cc323e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.895847312+00:00 stderr F I0813 20:09:12.893184 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" virtual=false 2025-08-13T20:09:12.898930050+00:00 stderr F I0813 20:09:12.896708 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.898930050+00:00 stderr F I0813 20:09:12.896988 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" virtual=false 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906484 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 
20:09:12.906555 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" virtual=false 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906652 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906750 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" virtual=false 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.907040 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.907070 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" virtual=false 2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.935885 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936095 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" virtual=false 2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936108 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936135 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" virtual=false 2025-08-13T20:09:12.953929447+00:00 stderr F I0813 20:09:12.953257 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.953929447+00:00 stderr F I0813 20:09:12.953468 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" virtual=false 2025-08-13T20:09:12.965881760+00:00 stderr F I0813 20:09:12.965646 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage 
collect" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.965881760+00:00 stderr F I0813 20:09:12.965692 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" virtual=false 2025-08-13T20:09:12.983866316+00:00 stderr F I0813 20:09:12.983705 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.983866316+00:00 stderr F I0813 20:09:12.983775 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" virtual=false 2025-08-13T20:09:12.995885660+00:00 stderr F I0813 20:09:12.995107 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.995885660+00:00 stderr F I0813 20:09:12.995169 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 
c607908b-16ad-4fb3-95dd-a2d9df939986]" virtual=false 2025-08-13T20:09:12.998082183+00:00 stderr F I0813 20:09:12.996410 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.998082183+00:00 stderr F I0813 20:09:12.996455 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" virtual=false 2025-08-13T20:09:12.998882086+00:00 stderr F I0813 20:09:12.998833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.998934438+00:00 stderr F I0813 20:09:12.998876 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" virtual=false 2025-08-13T20:09:13.002491110+00:00 stderr F I0813 20:09:13.002453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.002587312+00:00 stderr F I0813 
20:09:13.002566 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" virtual=false 2025-08-13T20:09:13.012008912+00:00 stderr F I0813 20:09:13.011947 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.012109495+00:00 stderr F I0813 20:09:13.012094 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" virtual=false 2025-08-13T20:09:13.012169307+00:00 stderr F I0813 20:09:13.011976 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.012261210+00:00 stderr F I0813 20:09:13.012221 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: authentication-operator, uid: 0ab5e157-3b4f-4841-8064-7b4519d31987]" virtual=false 2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.014925 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 92320b12-3d07-45bf-80cb-218961388b77]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.014968 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" virtual=false 2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.015272 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.015311 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" virtual=false 2025-08-13T20:09:13.031552413+00:00 stderr F I0813 20:09:13.031318 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.031552413+00:00 stderr F I0813 20:09:13.031407 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" virtual=false 2025-08-13T20:09:13.043345101+00:00 stderr F I0813 20:09:13.042453 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.043345101+00:00 stderr F I0813 20:09:13.042538 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" virtual=false 2025-08-13T20:09:13.046583434+00:00 stderr F I0813 20:09:13.046029 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.046583434+00:00 stderr F I0813 20:09:13.046094 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" virtual=false 2025-08-13T20:09:13.060381259+00:00 stderr F I0813 20:09:13.060293 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.060588545+00:00 stderr F I0813 20:09:13.060566 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: 
openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" virtual=false 2025-08-13T20:09:13.066276918+00:00 stderr F I0813 20:09:13.066136 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.066276918+00:00 stderr F I0813 20:09:13.066241 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" virtual=false 2025-08-13T20:09:13.089862925+00:00 stderr F I0813 20:09:13.089744 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.090052650+00:00 stderr F I0813 20:09:13.090028 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" virtual=false 2025-08-13T20:09:13.090622636+00:00 stderr F I0813 20:09:13.090564 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.090684488+00:00 stderr F I0813 20:09:13.090668 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" virtual=false 2025-08-13T20:09:13.091029158+00:00 stderr F I0813 20:09:13.091001 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.091160512+00:00 stderr F I0813 20:09:13.091143 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" virtual=false 2025-08-13T20:09:13.095020513+00:00 stderr F I0813 20:09:13.094981 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.095108125+00:00 stderr F I0813 20:09:13.095090 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" virtual=false 2025-08-13T20:09:13.095408744+00:00 stderr F I0813 20:09:13.095384 1 garbagecollector.go:615] "item has at least 
one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.095636160+00:00 stderr F I0813 20:09:13.095549 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" virtual=false 2025-08-13T20:09:13.100217561+00:00 stderr F I0813 20:09:13.100180 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.100395677+00:00 stderr F I0813 20:09:13.100329 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: service-ca-bundle, uid: af98ab2a-94fb-4365-a506-7ec3b3dad927]" virtual=false 2025-08-13T20:09:13.137387087+00:00 stderr F I0813 20:09:13.137293 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.137387087+00:00 stderr F I0813 20:09:13.137372 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: trusted-ca-bundle, uid: 3a5e7c7a-1b75-49bd-b1ad-2610fcb12e76]" 
virtual=false 2025-08-13T20:09:13.137681486+00:00 stderr F I0813 20:09:13.137622 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.137681486+00:00 stderr F I0813 20:09:13.137669 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" virtual=false 2025-08-13T20:09:13.146855939+00:00 stderr F I0813 20:09:13.146467 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.146855939+00:00 stderr F I0813 20:09:13.146753 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" virtual=false 2025-08-13T20:09:13.165397000+00:00 stderr F I0813 20:09:13.163994 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.178827755+00:00 stderr F I0813 20:09:13.167036 1 garbagecollector.go:549] 
"Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" virtual=false 2025-08-13T20:09:13.179596237+00:00 stderr F I0813 20:09:13.179360 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.179881066+00:00 stderr F I0813 20:09:13.179857 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler, uid: 90140550-0bf0-45b6-bc21-2756500fa74e]" virtual=false 2025-08-13T20:09:13.188229685+00:00 stderr F I0813 20:09:13.187076 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.188229685+00:00 stderr F I0813 20:09:13.187157 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler-operator, uid: 122f7969-6bbd-4171-b0d2-3382822b14bc]" virtual=false 2025-08-13T20:09:13.249248164+00:00 stderr F I0813 20:09:13.248861 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.249248164+00:00 stderr F I0813 20:09:13.248947 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-monitoring-operator, uid: f9b8a349-3d3e-43d3-8552-f17fd46dfe4d]" virtual=false 2025-08-13T20:09:13.251207321+00:00 stderr F I0813 20:09:13.251121 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.254506875+00:00 stderr F I0813 20:09:13.254470 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" virtual=false 2025-08-13T20:09:13.254625179+00:00 stderr F I0813 20:09:13.251225 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: authentication-operator, uid: 0ab5e157-3b4f-4841-8064-7b4519d31987]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255062961+00:00 stderr F I0813 20:09:13.251340 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255243496+00:00 stderr F I0813 20:09:13.251382 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255411801+00:00 stderr F I0813 20:09:13.251464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255592556+00:00 stderr F I0813 20:09:13.254317 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255737890+00:00 stderr F I0813 20:09:13.255167 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-network-operator, uid: 741fb11c-22e2-4896-b41e-4bc6506dadb4]" virtual=false 2025-08-13T20:09:13.256109371+00:00 stderr F I0813 20:09:13.255343 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: 
cluster-samples-operator, uid: 2613967c-5e5d-427c-802b-3219c1314d9f]" virtual=false 2025-08-13T20:09:13.256625366+00:00 stderr F I0813 20:09:13.255515 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: 6e8698c1-8ae7-4cf2-9591-7d7aaf70f1e4]" virtual=false 2025-08-13T20:09:13.257069259+00:00 stderr F I0813 20:09:13.255697 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-proxy-reader, uid: 0fc0e835-2d7d-4935-a386-83f3cdcc2356]" virtual=false 2025-08-13T20:09:13.258028156+00:00 stderr F I0813 20:09:13.255882 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-version-operator, uid: d1772a5b-7528-4218-b1a5-b8e37adaddd0]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263495 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: service-ca-bundle, uid: af98ab2a-94fb-4365-a506-7ec3b3dad927]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263563 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console, uid: b060a198-fcb6-4114-b32b-339ffffe6077]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263784 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263864 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-auth-delegator, uid: 5b5cc72d-8cfc-47d3-8393-7b66b592f99e]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264093 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler-operator, uid: 122f7969-6bbd-4171-b0d2-3382822b14bc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264113 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-extensions-reader, uid: 402476b7-93bc-4cfa-b038-c3108d7ea260]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264260 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264277 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator, uid: cd350fa5-6dc3-49dd-a977-ebe3ffe5edaa]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264417 1 garbagecollector.go:615] "item has at least one existing owner, will not 
garbage collect" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264434 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator-auth-delegator, uid: 6aee10d5-48ad-432d-ae71-69853ed0161c]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264580 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264598 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: control-plane-machine-set-operator, uid: a398c8c9-26b5-430a-94f5-42d3f40459d5]" virtual=false 2025-08-13T20:09:13.279019018+00:00 stderr F I0813 20:09:13.278873 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.279211213+00:00 stderr F I0813 20:09:13.279142 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: csi-snapshot-controller-operator-clusterrole, 
uid: 3c762506-422e-4421-915e-5872f5c48dbd]" virtual=false 2025-08-13T20:09:13.279737128+00:00 stderr F I0813 20:09:13.279714 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.279859402+00:00 stderr F I0813 20:09:13.279839 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: custom-account-openshift-machine-config-operator, uid: 7d9e4776-ed62-42b1-b845-cc4c6cc67c88]" virtual=false 2025-08-13T20:09:13.280452989+00:00 stderr F I0813 20:09:13.280430 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: trusted-ca-bundle, uid: 3a5e7c7a-1b75-49bd-b1ad-2610fcb12e76]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.280717027+00:00 stderr F I0813 20:09:13.280694 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: default-account-cluster-image-registry-operator, uid: cda37d84-e0e5-4e44-9a71-02182975026d]" virtual=false 2025-08-13T20:09:13.292366881+00:00 stderr F I0813 20:09:13.292254 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:13.292366881+00:00 stderr F I0813 20:09:13.292319 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: helm-chartrepos-view, uid: 22aa2287-b511-4fb0-a162-d3c7fc093bc5]" virtual=false 2025-08-13T20:09:13.292857535+00:00 stderr F I0813 20:09:13.292771 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.292878845+00:00 stderr F I0813 20:09:13.292857 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers, uid: 9d6d5d27-a1df-40a9-b87c-44c9c7277ec0]" virtual=false 2025-08-13T20:09:13.293312008+00:00 stderr F I0813 20:09:13.293287 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.293394050+00:00 stderr F I0813 20:09:13.293354 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers-baremetal, uid: f4e67882-ed45-4d9c-b562-81ffe3d1cb30]" virtual=false 2025-08-13T20:09:13.298474026+00:00 stderr F I0813 20:09:13.293694 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler, uid: 90140550-0bf0-45b6-bc21-2756500fa74e]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.298474026+00:00 stderr F I0813 20:09:13.293747 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator, uid: 2b488f4f-97f2-434d-a62e-6ce4db9636a2]" virtual=false 2025-08-13T20:09:13.319708115+00:00 stderr F I0813 20:09:13.319464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-monitoring-operator, uid: f9b8a349-3d3e-43d3-8552-f17fd46dfe4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.320034494+00:00 stderr F I0813 20:09:13.320012 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator-ext-remediation, uid: 06cb8f99-d6bc-46a7-bcf1-c610b6fc190a]" virtual=false 2025-08-13T20:09:13.320962761+00:00 stderr F I0813 20:09:13.320839 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-extensions-reader, uid: 402476b7-93bc-4cfa-b038-c3108d7ea260]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.320962761+00:00 stderr F I0813 20:09:13.320946 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: marketplace-operator, uid: 62bf9f3d-b2d7-4e1a-bd45-41b46e18211a]" virtual=false 2025-08-13T20:09:13.321174957+00:00 stderr F I0813 20:09:13.321097 1 garbagecollector.go:615] "item has at least 
one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-version-operator, uid: d1772a5b-7528-4218-b1a5-b8e37adaddd0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.321378752+00:00 stderr F I0813 20:09:13.321353 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.333665 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-auth-delegator, uid: 5b5cc72d-8cfc-47d3-8393-7b66b592f99e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.333743 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: metrics-daemon-sa-rolebinding, uid: 6dc834f4-1d8d-4397-b483-82d00bc808ca]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers-baremetal, uid: f4e67882-ed45-4d9c-b562-81ffe3d1cb30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334185 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: 
, name: multus-admission-controller-webhook, uid: 048da2fc-4827-4d11-943b-079ac5e15768]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334226 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-proxy-reader, uid: 0fc0e835-2d7d-4935-a386-83f3cdcc2356]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334292 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334559 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-network-operator, uid: 741fb11c-22e2-4896-b41e-4bc6506dadb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334588 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334637 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator, uid: 2613967c-5e5d-427c-802b-3219c1314d9f]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334659 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console, uid: b060a198-fcb6-4114-b32b-339ffffe6077]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335000 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335232 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335265 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-ancillary-tools, uid: 7b8f2404-6a99-452b-99e6-2ee5aee1a907]" virtual=false 2025-08-13T20:09:13.335495837+00:00 stderr F I0813 20:09:13.335439 1 garbagecollector.go:615] "item has at least one existing owner, 
will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: helm-chartrepos-view, uid: 22aa2287-b511-4fb0-a162-d3c7fc093bc5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.335512978+00:00 stderr F I0813 20:09:13.335495 1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: controlplanemachineset.machine.openshift.io, uid: c0896a42-9644-4ae1-b9d2-64b9d8d72a93]" virtual=false 2025-08-13T20:09:13.336057523+00:00 stderr F I0813 20:09:13.336029 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: custom-account-openshift-machine-config-operator, uid: 7d9e4776-ed62-42b1-b845-cc4c6cc67c88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.336242219+00:00 stderr F I0813 20:09:13.336219 1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: multus.openshift.io, uid: 8e8dfece-6a87-43e4-aef0-9bae8de3390b]" virtual=false 2025-08-13T20:09:13.336650830+00:00 stderr F I0813 20:09:13.336624 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: default-account-cluster-image-registry-operator, uid: cda37d84-e0e5-4e44-9a71-02182975026d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.336814855+00:00 stderr F I0813 20:09:13.336737 1 garbagecollector.go:549] "Processing item" 
item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: network-node-identity.openshift.io, uid: 6dec73bc-003d-45c2-b80b-6abdd589c12e]" virtual=false 2025-08-13T20:09:13.337409332+00:00 stderr F I0813 20:09:13.337334 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: 6e8698c1-8ae7-4cf2-9591-7d7aaf70f1e4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.338635057+00:00 stderr F I0813 20:09:13.337484 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration, namespace: , name: openshift-control-plane-operators, uid: e58c2fe6-ebe9-4808-9cea-7443d2a56c5c]" virtual=false 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.366693 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator-auth-delegator, uid: 6aee10d5-48ad-432d-ae71-69853ed0161c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.366768 1 garbagecollector.go:549] "Processing item" item="[scheduling.k8s.io/v1/PriorityClass, namespace: , name: openshift-user-critical, uid: 53eb906b-da85-4299-867d-b35bdfc9d7dd]" virtual=false 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367313 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers, uid: 9d6d5d27-a1df-40a9-b87c-44c9c7277ec0]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367335 1 garbagecollector.go:549] "Processing item" item="[operator.openshift.io/v1/KubeControllerManager, namespace: , name: cluster, uid: 466fedf7-9ce3-473e-9b71-9bf08b103d4a]" virtual=false 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367564 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: control-plane-machine-set-operator, uid: a398c8c9-26b5-430a-94f5-42d3f40459d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367583 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" virtual=false 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: marketplace-operator, uid: 62bf9f3d-b2d7-4e1a-bd45-41b46e18211a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367678 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" virtual=false 2025-08-13T20:09:13.374689641+00:00 stderr F I0813 20:09:13.374596 1 garbagecollector.go:615] "item has at least one existing 
owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: 3c762506-422e-4421-915e-5872f5c48dbd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.375007130+00:00 stderr F I0813 20:09:13.374863 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" virtual=false 2025-08-13T20:09:13.376608086+00:00 stderr F I0813 20:09:13.376539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.376703549+00:00 stderr F I0813 20:09:13.376686 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" virtual=false 2025-08-13T20:09:13.377532662+00:00 stderr F I0813 20:09:13.377507 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: multus.openshift.io, uid: 8e8dfece-6a87-43e4-aef0-9bae8de3390b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.377627915+00:00 stderr F I0813 20:09:13.377610 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" virtual=false 2025-08-13T20:09:13.378016906+00:00 stderr F I0813 20:09:13.377993 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator-ext-remediation, uid: 06cb8f99-d6bc-46a7-bcf1-c610b6fc190a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.378108909+00:00 stderr F I0813 20:09:13.378091 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" virtual=false 2025-08-13T20:09:13.380415715+00:00 stderr F I0813 20:09:13.380389 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator, uid: cd350fa5-6dc3-49dd-a977-ebe3ffe5edaa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.380584200+00:00 stderr F I0813 20:09:13.380480 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.380584200+00:00 stderr F I0813 20:09:13.380551 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: 
openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" virtual=false 2025-08-13T20:09:13.380759865+00:00 stderr F I0813 20:09:13.380502 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" virtual=false 2025-08-13T20:09:13.380884209+00:00 stderr F I0813 20:09:13.380865 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator, uid: 2b488f4f-97f2-434d-a62e-6ce4db9636a2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.381071064+00:00 stderr F I0813 20:09:13.381013 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" virtual=false 2025-08-13T20:09:13.406487093+00:00 stderr F I0813 20:09:13.406365 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.406487093+00:00 stderr F I0813 20:09:13.406462 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" virtual=false 2025-08-13T20:09:13.407113531+00:00 stderr F I0813 20:09:13.407055 1 garbagecollector.go:615] "item has at least one existing owner, will not 
garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: metrics-daemon-sa-rolebinding, uid: 6dc834f4-1d8d-4397-b483-82d00bc808ca]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.407135321+00:00 stderr F I0813 20:09:13.407113 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" virtual=false 2025-08-13T20:09:13.407333367+00:00 stderr F I0813 20:09:13.407269 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.407333367+00:00 stderr F I0813 20:09:13.407306 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operator.openshift.io/v1/KubeControllerManager, namespace: , name: cluster, uid: 466fedf7-9ce3-473e-9b71-9bf08b103d4a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.407440120+00:00 stderr F I0813 20:09:13.407343 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" virtual=false 2025-08-13T20:09:13.407496982+00:00 stderr F I0813 20:09:13.407290 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: network-node-identity.openshift.io, uid: 6dec73bc-003d-45c2-b80b-6abdd589c12e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.407560733+00:00 stderr F I0813 20:09:13.407546 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" virtual=false 2025-08-13T20:09:13.407860632+00:00 stderr F I0813 20:09:13.407695 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration, namespace: , name: openshift-control-plane-operators, uid: e58c2fe6-ebe9-4808-9cea-7443d2a56c5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.412673 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: f07b75b1-a4a5-4e43-897d-6a260cd48bda]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408221 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-ancillary-tools, uid: 7b8f2404-6a99-452b-99e6-2ee5aee1a907]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.412981 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408256 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-admission-controller-webhook, uid: 048da2fc-4827-4d11-943b-079ac5e15768]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413217 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408416 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: controlplanemachineset.machine.openshift.io, uid: c0896a42-9644-4ae1-b9d2-64b9d8d72a93]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413340 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408470 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: 
ab127597-b672-4c7c-8321-931d286b7455]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413531 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408544 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413632 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" virtual=false 2025-08-13T20:09:13.413750331+00:00 stderr F I0813 20:09:13.409166 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[scheduling.k8s.io/v1/PriorityClass, namespace: , name: openshift-user-critical, uid: 53eb906b-da85-4299-867d-b35bdfc9d7dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413763121+00:00 stderr F I0813 20:09:13.413753 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" virtual=false 2025-08-13T20:09:13.414027469+00:00 stderr F I0813 
20:09:13.411010 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: f89858c6-0219-4025-8fcf-4fd198d46157]" virtual=false 2025-08-13T20:09:13.424537780+00:00 stderr F I0813 20:09:13.424475 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.424715915+00:00 stderr F I0813 20:09:13.424696 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" virtual=false 2025-08-13T20:09:13.425479357+00:00 stderr F I0813 20:09:13.425452 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.425969221+00:00 stderr F I0813 20:09:13.425946 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" virtual=false 2025-08-13T20:09:13.426696432+00:00 stderr F I0813 20:09:13.426624 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: 
prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.427005881+00:00 stderr F I0813 20:09:13.426982 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" virtual=false 2025-08-13T20:09:13.427273909+00:00 stderr F I0813 20:09:13.427250 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.427334890+00:00 stderr F I0813 20:09:13.427318 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" virtual=false 2025-08-13T20:09:13.435208336+00:00 stderr F I0813 20:09:13.435142 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.435349080+00:00 stderr F I0813 20:09:13.435330 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 2e6f30f3-f11e-4248-ae14-d64b23438f44]" virtual=false 
2025-08-13T20:09:13.435880395+00:00 stderr F I0813 20:09:13.435715 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.436124982+00:00 stderr F I0813 20:09:13.436095 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" virtual=false 2025-08-13T20:09:13.436457652+00:00 stderr F I0813 20:09:13.436435 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.436545814+00:00 stderr F I0813 20:09:13.436531 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" virtual=false 2025-08-13T20:09:13.441559038+00:00 stderr F I0813 20:09:13.441526 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.441675811+00:00 
stderr F I0813 20:09:13.441660 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" virtual=false 2025-08-13T20:09:13.442136135+00:00 stderr F I0813 20:09:13.442085 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.442270429+00:00 stderr F I0813 20:09:13.442250 1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: hostnetwork-v2, uid: 456c5d8f-a352-4d46-a237-b5198b2b47bf]" virtual=false 2025-08-13T20:09:13.442769403+00:00 stderr F I0813 20:09:13.442745 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.442956278+00:00 stderr F I0813 20:09:13.442934 1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: machine-api-termination-handler, uid: 0d71aa13-f5f7-4516-9777-54ea4b30344d]" virtual=false 2025-08-13T20:09:13.443288498+00:00 stderr F I0813 20:09:13.443186 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: 
fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.443348979+00:00 stderr F I0813 20:09:13.443334 1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: nonroot-v2, uid: 414f5c6e-4eb7-4571-b5b7-7dc6cb18bba6]" virtual=false 2025-08-13T20:09:13.443516034+00:00 stderr F I0813 20:09:13.443497 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.443564586+00:00 stderr F I0813 20:09:13.443551 1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: restricted-v2, uid: ccc1f43b-6b7f-41ce-a399-bc1d0b2e9f0d]" virtual=false 2025-08-13T20:09:13.443891595+00:00 stderr F I0813 20:09:13.443838 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.444072160+00:00 stderr F I0813 20:09:13.444056 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver, uid: 08401087-e2b5-41ab-b2ae-1b49b4b21090]" virtual=false 2025-08-13T20:09:13.444386469+00:00 stderr F I0813 20:09:13.444355 1 garbagecollector.go:615] "item has at 
least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: f89858c6-0219-4025-8fcf-4fd198d46157]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.444536564+00:00 stderr F I0813 20:09:13.444491 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-operator, uid: 435f63da-cd39-4cca-8a62-b69e4f06c2e9]" virtual=false 2025-08-13T20:09:13.444979366+00:00 stderr F I0813 20:09:13.444955 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.445090699+00:00 stderr F I0813 20:09:13.445075 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-sar, uid: 939310fe-0d2e-4a8a-b1b3-0a5805456306]" virtual=false 2025-08-13T20:09:13.445266624+00:00 stderr F I0813 20:09:13.445246 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: f07b75b1-a4a5-4e43-897d-6a260cd48bda]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.445316236+00:00 stderr F I0813 20:09:13.445303 1 garbagecollector.go:549] "Processing item" 
item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-authentication-operator, uid: e5927a4e-7a4c-4a11-a3d1-4186ce46adf3]" virtual=false 2025-08-13T20:09:13.451400100+00:00 stderr F I0813 20:09:13.451347 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.451501153+00:00 stderr F I0813 20:09:13.451485 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-controller-manager, uid: 68cc303b-a0cf-4b58-b7ad-75a4bff72246]" virtual=false 2025-08-13T20:09:13.451657248+00:00 stderr F I0813 20:09:13.451637 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.451708439+00:00 stderr F I0813 20:09:13.451694 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-etcd-operator, uid: 9a9602ea-26a7-4c24-9fda-572140113932]" virtual=false 2025-08-13T20:09:13.452104661+00:00 stderr F I0813 20:09:13.452081 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.452163852+00:00 stderr F I0813 20:09:13.452150       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-kube-apiserver-operator, uid: 29bd4a1f-f8f3-4c5b-8b56-4b5faf744ff9]" virtual=false
2025-08-13T20:09:13.452397689+00:00 stderr F I0813 20:09:13.452374       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.452450420+00:00 stderr F I0813 20:09:13.452437       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-monitoring-metrics, uid: 9bbe637c-a1dc-48d4-bf30-3aee0abd189a]" virtual=false
2025-08-13T20:09:13.455024574+00:00 stderr F I0813 20:09:13.454984       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.455118307+00:00 stderr F I0813 20:09:13.455076       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver, uid: b67a06ba-eaf2-4045-a4f3-80f32bd3af74]" virtual=false
2025-08-13T20:09:13.455267011+00:00 stderr F I0813 20:09:13.455247       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.455321923+00:00 stderr F I0813 20:09:13.455307       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver-sar, uid: d58e2800-eb4f-4b97-a79d-1d5375725f25]" virtual=false
2025-08-13T20:09:13.455840988+00:00 stderr F I0813 20:09:13.455766       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.455956611+00:00 stderr F I0813 20:09:13.455934       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-server, uid: 1cc90dfb-3877-4794-b4fb-a55f9f9fd21f]" virtual=false
2025-08-13T20:09:13.461531121+00:00 stderr F I0813 20:09:13.461492       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.461651464+00:00 stderr F I0813 20:09:13.461635       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-ovn-kubernetes, uid: e774665f-b1c0-416c-adde-718d512e91f9]" virtual=false
2025-08-13T20:09:13.468554532+00:00 stderr F I0813 20:09:13.468521       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.468869991+00:00 stderr F I0813 20:09:13.468849       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-cluster-readers, uid: 11e86910-6d61-4703-8234-c26167470b26]" virtual=false
2025-08-13T20:09:13.473698810+00:00 stderr F I0813 20:09:13.473638       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.474087021+00:00 stderr F I0813 20:09:13.474021       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-group, uid: 7560b26d-8f8c-4cda-ac4b-0a2f586da492]" virtual=false
2025-08-13T20:09:13.474732509+00:00 stderr F I0813 20:09:13.474675       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 2e6f30f3-f11e-4248-ae14-d64b23438f44]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.474993347+00:00 stderr F I0813 20:09:13.474880       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-transient, uid: 9ca91046-71b1-4755-8430-f290694fb843]" virtual=false
2025-08-13T20:09:13.484234522+00:00 stderr F I0813 20:09:13.483741       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.484234522+00:00 stderr F I0813 20:09:13.483859       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-whereabouts, uid: be8f44a2-0530-4fc0-a743-77944a2d6cfe]" virtual=false
2025-08-13T20:09:13.499940822+00:00 stderr F I0813 20:09:13.499769       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: hostnetwork-v2, uid: 456c5d8f-a352-4d46-a237-b5198b2b47bf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.500235710+00:00 stderr F I0813 20:09:13.500146       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-diagnostics, uid: 79dad4ee-c1a9-4879-a1dd-69ea8ca307a1]" virtual=false
2025-08-13T20:09:13.504300597+00:00 stderr F I0813 20:09:13.504168       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: machine-api-termination-handler, uid: 0d71aa13-f5f7-4516-9777-54ea4b30344d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.504300597+00:00 stderr F I0813 20:09:13.504250       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-node-identity, uid: 2fd4e670-d0fc-405b-9bb0-978652cd7871]" virtual=false
2025-08-13T20:09:13.512250795+00:00 stderr F I0813 20:09:13.511857       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver, uid: 08401087-e2b5-41ab-b2ae-1b49b4b21090]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.512250795+00:00 stderr F I0813 20:09:13.511992       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: olm-operator-binding-openshift-operator-lifecycle-manager, uid: 9ccd11bb-8a90-4da3-bd98-a6b01e412542]" virtual=false
2025-08-13T20:09:13.512566694+00:00 stderr F I0813 20:09:13.512531       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: nonroot-v2, uid: 414f5c6e-4eb7-4571-b5b7-7dc6cb18bba6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.512652596+00:00 stderr F I0813 20:09:13.512631       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-dns-operator, uid: 97ca47d3-e623-4331-b837-ffc3e3dac836]" virtual=false
2025-08-13T20:09:13.519372129+00:00 stderr F I0813 20:09:13.519325       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: restricted-v2, uid: ccc1f43b-6b7f-41ce-a399-bc1d0b2e9f0d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.519467732+00:00 stderr F I0813 20:09:13.519452       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ingress-operator, uid: 26eba0ad-34cf-49ed-9698-bb35cae97907]" virtual=false
2025-08-13T20:09:13.533261697+00:00 stderr F I0813 20:09:13.533170       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-operator, uid: 435f63da-cd39-4cca-8a62-b69e4f06c2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.533261697+00:00 stderr F I0813 20:09:13.533239       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-iptables-alerter, uid: 848eff75-354e-4076-906d-a4b9cc19945b]" virtual=false
2025-08-13T20:09:13.533665949+00:00 stderr F I0813 20:09:13.533638       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-authentication-operator, uid: e5927a4e-7a4c-4a11-a3d1-4186ce46adf3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.533735521+00:00 stderr F I0813 20:09:13.533719       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 7f642a86-d7eb-4509-8666-f07259f6a62f]" virtual=false
2025-08-13T20:09:13.536348366+00:00 stderr F I0813 20:09:13.536318       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-controller-manager, uid: 68cc303b-a0cf-4b58-b7ad-75a4bff72246]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.536468969+00:00 stderr F I0813 20:09:13.536423       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-identity-limited, uid: 52425be5-e912-46fb-8a48-8e773a98d5c1]" virtual=false
2025-08-13T20:09:13.536743237+00:00 stderr F I0813 20:09:13.536672       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-sar, uid: 939310fe-0d2e-4a8a-b1b3-0a5805456306]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.536759008+00:00 stderr F I0813 20:09:13.536738       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-kube-rbac-proxy, uid: 245c37d5-9ba8-4086-aa11-840b6e8a724e]" virtual=false
2025-08-13T20:09:13.567349115+00:00 stderr F I0813 20:09:13.566833       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-etcd-operator, uid: 9a9602ea-26a7-4c24-9fda-572140113932]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.568242800+00:00 stderr F I0813 20:09:13.568208       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-monitoring-metrics, uid: 9bbe637c-a1dc-48d4-bf30-3aee0abd189a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.568333043+00:00 stderr F I0813 20:09:13.568313       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: registry-monitoring, uid: f277fb99-3fcb-44ac-b911-507aa165a19e]" virtual=false
2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568416       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver, uid: b67a06ba-eaf2-4045-a4f3-80f32bd3af74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568479       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" virtual=false
2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568585       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: prometheus-k8s-scheduler-resources, uid: 83861e61-c21e-4d21-bf6a-21620ffd522d]" virtual=false
2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.572339       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-kube-apiserver-operator, uid: 29bd4a1f-f8f3-4c5b-8b56-4b5faf744ff9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.572424       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" virtual=false
2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.575071       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-server, uid: 1cc90dfb-3877-4794-b4fb-a55f9f9fd21f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.575139       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" virtual=false
2025-08-13T20:09:13.577416103+00:00 stderr F I0813 20:09:13.576873       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver-sar, uid: d58e2800-eb4f-4b97-a79d-1d5375725f25]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.577416103+00:00 stderr F I0813 20:09:13.576958       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" virtual=false
2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.586674       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-ovn-kubernetes, uid: e774665f-b1c0-416c-adde-718d512e91f9]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.586762       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:controller:machine-approver, uid: 7b6449e0-b191-4f94-ad74-226c264035e7]" virtual=false
2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.587023       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-cluster-readers, uid: 11e86910-6d61-4703-8234-c26167470b26]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.587114       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:authentication, uid: 32dcecb0-39fb-4417-a02d-801980f312a5]" virtual=false
2025-08-13T20:09:13.597370555+00:00 stderr F I0813 20:09:13.597273       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-group, uid: 7560b26d-8f8c-4cda-ac4b-0a2f586da492]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.597370555+00:00 stderr F I0813 20:09:13.597352       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:cluster-kube-scheduler-operator, uid: cac5ab58-ac29-4b01-978f-1d735a5c5af1]" virtual=false
2025-08-13T20:09:13.608836714+00:00 stderr F I0813 20:09:13.608728       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-transient, uid: 9ca91046-71b1-4755-8430-f290694fb843]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.608896796+00:00 stderr F I0813 20:09:13.608867       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:etcd-operator, uid: fe8cbe89-e1d8-491c-abcd-6405df195395]" virtual=false
2025-08-13T20:09:13.611452139+00:00 stderr F I0813 20:09:13.611353       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-diagnostics, uid: 79dad4ee-c1a9-4879-a1dd-69ea8ca307a1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.611491640+00:00 stderr F I0813 20:09:13.611409       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-apiserver-operator, uid: 6d428360-3d92-42b3-a692-62fcef7dc28f]" virtual=false
2025-08-13T20:09:13.615759543+00:00 stderr F I0813 20:09:13.615705       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-whereabouts, uid: be8f44a2-0530-4fc0-a743-77944a2d6cfe]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.615830865+00:00 stderr F I0813 20:09:13.615766       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-controller-manager-operator, uid: 8fa65b16-3f3f-4a00-9f7f-b6347622e728]" virtual=false
2025-08-13T20:09:13.626977234+00:00 stderr F I0813 20:09:13.626765       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-node-identity, uid: 2fd4e670-d0fc-405b-9bb0-978652cd7871]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.626977234+00:00 stderr F I0813 20:09:13.626949       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-storage-version-migrator-operator, uid: 7c611553-0af9-459b-9675-c9d6e321b535]" virtual=false
2025-08-13T20:09:13.637646600+00:00 stderr F I0813 20:09:13.637605       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: olm-operator-binding-openshift-operator-lifecycle-manager, uid: 9ccd11bb-8a90-4da3-bd98-a6b01e412542]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.637739093+00:00 stderr F I0813 20:09:13.637723       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-apiserver-operator, uid: 3b1d0d06-e85b-43d3-a710-b1842b977bda]" virtual=false
2025-08-13T20:09:13.646385521+00:00 stderr F I0813 20:09:13.646352       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-dns-operator, uid: 97ca47d3-e623-4331-b837-ffc3e3dac836]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.646500994+00:00 stderr F I0813 20:09:13.646465       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-config-operator, uid: 88bb6e9f-f058-4bf1-91ec-fac61efb59fd]" virtual=false
2025-08-13T20:09:13.649411438+00:00 stderr F I0813 20:09:13.649386       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ingress-operator, uid: 26eba0ad-34cf-49ed-9698-bb35cae97907]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.649478569+00:00 stderr F I0813 20:09:13.649464       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-controller-manager-operator, uid: 6da2b488-b18a-4be7-b541-3d8d33584c6e]" virtual=false
2025-08-13T20:09:13.657377166+00:00 stderr F I0813 20:09:13.657225       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-iptables-alerter, uid: 848eff75-354e-4076-906d-a4b9cc19945b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.657377166+00:00 stderr F I0813 20:09:13.657299       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:service-ca-operator, uid: 90ce341e-6032-46e0-b9d2-7a01b6b8a68b]" virtual=false
2025-08-13T20:09:13.661150164+00:00 stderr F I0813 20:09:13.659981       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 7f642a86-d7eb-4509-8666-f07259f6a62f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.661150164+00:00 stderr F I0813 20:09:13.660034       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:scc:restricted-v2, uid: cb821e01-a19c-4b4c-8409-e70b79a0669a]" virtual=false
2025-08-13T20:09:13.666638971+00:00 stderr F I0813 20:09:13.666528       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-identity-limited, uid: 52425be5-e912-46fb-8a48-8e773a98d5c1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.666638971+00:00 stderr F I0813 20:09:13.666605       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" virtual=false
2025-08-13T20:09:13.676642828+00:00 stderr F I0813 20:09:13.676136       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-kube-rbac-proxy, uid: 245c37d5-9ba8-4086-aa11-840b6e8a724e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.676642828+00:00 stderr F I0813 20:09:13.676211       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" virtual=false
2025-08-13T20:09:13.689615640+00:00 stderr F I0813 20:09:13.688406       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: registry-monitoring, uid: f277fb99-3fcb-44ac-b911-507aa165a19e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.689615640+00:00 stderr F I0813 20:09:13.688528       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]" virtual=false
2025-08-13T20:09:13.690349261+00:00 stderr F I0813 20:09:13.690264       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.690349261+00:00 stderr F I0813 20:09:13.690337       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" virtual=false
2025-08-13T20:09:13.697501996+00:00 stderr F I0813 20:09:13.697274       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: prometheus-k8s-scheduler-resources, uid: 83861e61-c21e-4d21-bf6a-21620ffd522d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.697501996+00:00 stderr F I0813 20:09:13.697347       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]" virtual=false
2025-08-13T20:09:13.700591095+00:00 stderr F I0813 20:09:13.700486       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.700591095+00:00 stderr F I0813 20:09:13.700564       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]" virtual=false
2025-08-13T20:09:13.708161272+00:00 stderr F I0813 20:09:13.707979       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.708161272+00:00 stderr F I0813 20:09:13.708069       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" virtual=false
2025-08-13T20:09:13.712588149+00:00 stderr F I0813 20:09:13.712498       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2025-08-13T20:09:13.712836536+00:00 stderr F I0813 20:09:13.712813       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" virtual=false
2025-08-13T20:09:13.719177678+00:00 stderr F I0813 20:09:13.719008       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:controller:machine-approver, uid: 7b6449e0-b191-4f94-ad74-226c264035e7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.719177678+00:00 stderr F I0813 20:09:13.719076       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" virtual=false
2025-08-13T20:09:13.723093280+00:00 stderr F I0813 20:09:13.722922       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:authentication, uid: 32dcecb0-39fb-4417-a02d-801980f312a5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.723093280+00:00 stderr F I0813 20:09:13.722970       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" virtual=false
2025-08-13T20:09:13.737514924+00:00 stderr F I0813 20:09:13.737391       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:cluster-kube-scheduler-operator, uid: cac5ab58-ac29-4b01-978f-1d735a5c5af1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.737514924+00:00 stderr F I0813 20:09:13.737480       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" virtual=false
2025-08-13T20:09:13.744522544+00:00 stderr F I0813 20:09:13.744425       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]"
2025-08-13T20:09:13.744588796+00:00 stderr F I0813 20:09:13.744524       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" virtual=false
2025-08-13T20:09:13.748353244+00:00 stderr F I0813 20:09:13.748283       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:etcd-operator, uid: fe8cbe89-e1d8-491c-abcd-6405df195395]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.748559260+00:00 stderr F I0813 20:09:13.748531       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" virtual=false
2025-08-13T20:09:13.749489707+00:00 stderr F I0813 20:09:13.749461       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]"
2025-08-13T20:09:13.749588160+00:00 stderr F I0813 20:09:13.749568       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" virtual=false
2025-08-13T20:09:13.750179797+00:00 stderr F I0813 20:09:13.750150       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-controller-manager-operator, uid: 8fa65b16-3f3f-4a00-9f7f-b6347622e728]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.750269479+00:00 stderr F I0813 20:09:13.750240       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" virtual=false
2025-08-13T20:09:13.754986244+00:00 stderr F I0813 20:09:13.754877       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]"
2025-08-13T20:09:13.755054466+00:00 stderr F I0813 20:09:13.754996       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" virtual=false
2025-08-13T20:09:13.755302883+00:00 stderr F I0813 20:09:13.755184       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-apiserver-operator, uid: 6d428360-3d92-42b3-a692-62fcef7dc28f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.755302883+00:00 stderr F I0813 20:09:13.755231       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" virtual=false
2025-08-13T20:09:13.758261748+00:00 stderr F I0813 20:09:13.758150       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]"
2025-08-13T20:09:13.758261748+00:00 stderr F I0813 20:09:13.758198       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" virtual=false
2025-08-13T20:09:13.759644848+00:00 stderr F I0813 20:09:13.759589       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-storage-version-migrator-operator, uid: 7c611553-0af9-459b-9675-c9d6e321b535]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.759644848+00:00 stderr F I0813 20:09:13.759615       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" virtual=false
2025-08-13T20:09:13.764300921+00:00 stderr F I0813 20:09:13.764230       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]"
2025-08-13T20:09:13.764445256+00:00 stderr F I0813 20:09:13.764423       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" virtual=false
2025-08-13T20:09:13.765971299+00:00 stderr F I0813 20:09:13.765741       1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]"
2025-08-13T20:09:13.766123394+00:00 stderr F I0813 20:09:13.766055       1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" virtual=false
2025-08-13T20:09:13.775135382+00:00 stderr F I0813 20:09:13.775063       1 garbagecollector.go:615] "item has at least one existing owner, will
not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-apiserver-operator, uid: 3b1d0d06-e85b-43d3-a710-b1842b977bda]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.775284306+00:00 stderr F I0813 20:09:13.775260 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" virtual=false 2025-08-13T20:09:13.780334191+00:00 stderr F I0813 20:09:13.780199 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-config-operator, uid: 88bb6e9f-f058-4bf1-91ec-fac61efb59fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.780334191+00:00 stderr F I0813 20:09:13.780271 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" virtual=false 2025-08-13T20:09:13.781297219+00:00 stderr F I0813 20:09:13.781215 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-controller-manager-operator, uid: 6da2b488-b18a-4be7-b541-3d8d33584c6e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.781297219+00:00 stderr F I0813 20:09:13.781263 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: 
openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" virtual=false 2025-08-13T20:09:13.789884805+00:00 stderr F I0813 20:09:13.789723 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:service-ca-operator, uid: 90ce341e-6032-46e0-b9d2-7a01b6b8a68b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.789884805+00:00 stderr F I0813 20:09:13.789863 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" virtual=false 2025-08-13T20:09:13.796895156+00:00 stderr F I0813 20:09:13.796698 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:scc:restricted-v2, uid: cb821e01-a19c-4b4c-8409-e70b79a0669a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.796895156+00:00 stderr F I0813 20:09:13.796842 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" virtual=false 2025-08-13T20:09:13.804105813+00:00 stderr F I0813 20:09:13.803146 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.804105813+00:00 stderr F I0813 20:09:13.803218 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" virtual=false 2025-08-13T20:09:13.842101862+00:00 stderr F I0813 20:09:13.841989 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.842101862+00:00 stderr F I0813 20:09:13.842053 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-nodes-identity-limited, uid: c4aa8346-c5c7-4f58-9efa-e1f81fc14f12]" virtual=false 2025-08-13T20:09:13.845007075+00:00 stderr F I0813 20:09:13.844888 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" 2025-08-13T20:09:13.845007075+00:00 stderr F I0813 20:09:13.844962 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-config, uid: d30c2b3b-e0c9-4cd0-9fc5-e48c94f3f693]" virtual=false 2025-08-13T20:09:13.854466177+00:00 stderr F I0813 20:09:13.854348 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.854466177+00:00 stderr F I0813 20:09:13.854424 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-script-lib, uid: 2276c91e-6ff8-4fa7-a126-0ab84fde00d5]" virtual=false 2025-08-13T20:09:13.856594638+00:00 stderr F I0813 20:09:13.856528 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.856594638+00:00 stderr F I0813 20:09:13.856557 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" virtual=false 2025-08-13T20:09:13.865252486+00:00 stderr F I0813 20:09:13.865155 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.865297697+00:00 stderr F I0813 20:09:13.865245 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" virtual=false 2025-08-13T20:09:13.872710760+00:00 stderr F I0813 20:09:13.871890 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.872710760+00:00 stderr F I0813 20:09:13.871974 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" virtual=false 2025-08-13T20:09:13.875620983+00:00 stderr F I0813 20:09:13.874997 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.875620983+00:00 stderr F I0813 20:09:13.875119 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" virtual=false 2025-08-13T20:09:13.878346631+00:00 stderr F I0813 20:09:13.878266 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.878346631+00:00 stderr F I0813 20:09:13.878325 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" virtual=false 
2025-08-13T20:09:13.883757186+00:00 stderr F I0813 20:09:13.883066 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.883757186+00:00 stderr F I0813 20:09:13.883148 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: bab757f6-b841-475a-892d-a531bdd67547]" virtual=false 2025-08-13T20:09:13.886694131+00:00 stderr F I0813 20:09:13.886582 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.886730652+00:00 stderr F I0813 20:09:13.886696 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 82531bdb-b490-4bb6-9435-a96aa861af59]" virtual=false 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 20:09:13.889455 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 
20:09:13.889526 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" virtual=false 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 20:09:13.890261 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.890325895+00:00 stderr F I0813 20:09:13.890313 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 16831aec-8154-47dc-bd17-bf35f40f27a2]" virtual=false 2025-08-13T20:09:13.893479375+00:00 stderr F I0813 20:09:13.893039 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.893479375+00:00 stderr F I0813 20:09:13.893098 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" virtual=false 2025-08-13T20:09:13.897031317+00:00 stderr F I0813 20:09:13.896866 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.897031317+00:00 stderr F I0813 20:09:13.896947 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" virtual=false 2025-08-13T20:09:13.900166967+00:00 stderr F I0813 20:09:13.900055 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.900166967+00:00 stderr F I0813 20:09:13.900142 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" virtual=false 2025-08-13T20:09:13.903428440+00:00 stderr F I0813 20:09:13.903277 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.903428440+00:00 stderr F I0813 20:09:13.903327 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" virtual=false 2025-08-13T20:09:13.906514819+00:00 stderr 
F I0813 20:09:13.906406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.906514819+00:00 stderr F I0813 20:09:13.906470 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminnetworkpolicies.policy.networking.k8s.io, uid: 3b2c3d0f-1154-4221-88d8-96b15373169a]" virtual=false 2025-08-13T20:09:13.924139574+00:00 stderr F I0813 20:09:13.923878 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.924139574+00:00 stderr F I0813 20:09:13.923972 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminpolicybasedexternalroutes.k8s.ovn.org, uid: 8e2ba691-6cd2-489e-929b-7aaef51d1387]" virtual=false 2025-08-13T20:09:13.928879240+00:00 stderr F I0813 20:09:13.928268 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.928879240+00:00 stderr F I0813 20:09:13.928347 1 
garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: baselineadminnetworkpolicies.policy.networking.k8s.io, uid: d059be2d-29f9-4a7a-9bc5-e75a784d63e4]" virtual=false 2025-08-13T20:09:13.935465829+00:00 stderr F I0813 20:09:13.935276 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.935465829+00:00 stderr F I0813 20:09:13.935346 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressfirewalls.k8s.ovn.org, uid: a49e9150-d58b-4ccf-9fa5-c6abaac414b2]" virtual=false 2025-08-13T20:09:13.977415072+00:00 stderr F I0813 20:09:13.977322 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-nodes-identity-limited, uid: c4aa8346-c5c7-4f58-9efa-e1f81fc14f12]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.977415072+00:00 stderr F I0813 20:09:13.977387 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressips.k8s.ovn.org, uid: 9f3a9522-f1f8-4d28-bf47-0823bd97a03c]" virtual=false 2025-08-13T20:09:13.979169852+00:00 stderr F I0813 20:09:13.979111 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-config, uid: 
d30c2b3b-e0c9-4cd0-9fc5-e48c94f3f693]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.979265325+00:00 stderr F I0813 20:09:13.979166 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressqoses.k8s.ovn.org, uid: c31943bc-e8ef-4bee-9147-31e90cad6638]" virtual=false 2025-08-13T20:09:13.988566541+00:00 stderr F I0813 20:09:13.988453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-script-lib, uid: 2276c91e-6ff8-4fa7-a126-0ab84fde00d5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.988566541+00:00 stderr F I0813 20:09:13.988525 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressservices.k8s.ovn.org, uid: 1b3e9680-0044-44dc-b306-5f8b2922b184]" virtual=false 2025-08-13T20:09:13.994508452+00:00 stderr F I0813 20:09:13.994420 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.994508452+00:00 stderr F I0813 20:09:13.994475 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: ippools.whereabouts.cni.cncf.io, uid: 6327ea74-f505-4fab-a6a5-2f489c34a8c6]" virtual=false 2025-08-13T20:09:14.001161803+00:00 stderr F I0813 
20:09:14.000597 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.001161803+00:00 stderr F I0813 20:09:14.000767 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: network-attachment-definitions.k8s.cni.cncf.io, uid: a99eded9-1e26-4704-8381-dc109f03cc1f]" virtual=false 2025-08-13T20:09:14.006212027+00:00 stderr F I0813 20:09:14.006105 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.006212027+00:00 stderr F I0813 20:09:14.006157 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: overlappingrangeipreservations.whereabouts.cni.cncf.io, uid: ae18c5bd-30f0-4e7f-bcc1-2d4dcbc94239]" virtual=false 2025-08-13T20:09:14.007450383+00:00 stderr F I0813 20:09:14.006640 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.007450383+00:00 stderr F I0813 20:09:14.006713 1 garbagecollector.go:549] "Processing 
item" item="[config.openshift.io/v1/Image, namespace: , name: cluster, uid: f9e57099-766b-4ef5-9883-f7bf72d3acd8]" virtual=false 2025-08-13T20:09:14.009835451+00:00 stderr F I0813 20:09:14.009730 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.009863642+00:00 stderr F I0813 20:09:14.009847 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-11405dc064e9fc83a779a06d1cd665b3, uid: 6c39c9b1-6ae4-4da9-bc28-1744aa4c8a1d]" virtual=false 2025-08-13T20:09:14.013264310+00:00 stderr F I0813 20:09:14.013186 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: bab757f6-b841-475a-892d-a531bdd67547]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.013264310+00:00 stderr F I0813 20:09:14.013233 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5, uid: c414de32-5f1e-433d-8ddc-6ef8e86afda7]" virtual=false 2025-08-13T20:09:14.017742038+00:00 stderr F I0813 20:09:14.017635 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 
82531bdb-b490-4bb6-9435-a96aa861af59]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.017742038+00:00 stderr F I0813 20:09:14.017719 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, uid: 8980258c-3f45-4fdf-85aa-5d7816ef57b0]" virtual=false 2025-08-13T20:09:14.019994403+00:00 stderr F I0813 20:09:14.019883 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.020016583+00:00 stderr F I0813 20:09:14.019999 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-worker-83accf81260e29bcce65a184dd980479, uid: 064c58c6-b7e3-4279-8d84-dee3da8cc701]" virtual=false 2025-08-13T20:09:14.025669655+00:00 stderr F I0813 20:09:14.025112 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 16831aec-8154-47dc-bd17-bf35f40f27a2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.025669655+00:00 stderr F I0813 20:09:14.025162 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: 
rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff, uid: 3ef54a1f-2601-44d6-aa6d-d19e1277d0b9]" virtual=false 2025-08-13T20:09:14.027746875+00:00 stderr F I0813 20:09:14.027600 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.027746875+00:00 stderr F I0813 20:09:14.027686 1 garbagecollector.go:549] "Processing item" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" virtual=false 2025-08-13T20:09:14.029419133+00:00 stderr F I0813 20:09:14.029059 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.034613732+00:00 stderr F I0813 20:09:14.034479 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.038099632+00:00 stderr F I0813 20:09:14.037756 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage 
collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.046549154+00:00 stderr F I0813 20:09:14.046371 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminnetworkpolicies.policy.networking.k8s.io, uid: 3b2c3d0f-1154-4221-88d8-96b15373169a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.065247960+00:00 stderr F I0813 20:09:14.065136 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: baselineadminnetworkpolicies.policy.networking.k8s.io, uid: d059be2d-29f9-4a7a-9bc5-e75a784d63e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.065678152+00:00 stderr F I0813 20:09:14.065612 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminpolicybasedexternalroutes.k8s.ovn.org, uid: 8e2ba691-6cd2-489e-929b-7aaef51d1387]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.066061833+00:00 stderr F I0813 20:09:14.066024 1 garbagecollector.go:615] "item has at least one existing owner, will not 
garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressfirewalls.k8s.ovn.org, uid: a49e9150-d58b-4ccf-9fa5-c6abaac414b2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.097353211+00:00 stderr F I0813 20:09:14.096713 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressips.k8s.ovn.org, uid: 9f3a9522-f1f8-4d28-bf47-0823bd97a03c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.100254584+00:00 stderr F I0813 20:09:14.100156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressqoses.k8s.ovn.org, uid: c31943bc-e8ef-4bee-9147-31e90cad6638]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.104339911+00:00 stderr F I0813 20:09:14.103727 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressservices.k8s.ovn.org, uid: 1b3e9680-0044-44dc-b306-5f8b2922b184]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.107078699+00:00 stderr F I0813 20:09:14.106627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , 
name: ippools.whereabouts.cni.cncf.io, uid: 6327ea74-f505-4fab-a6a5-2f489c34a8c6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.114501082+00:00 stderr F I0813 20:09:14.113938 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: overlappingrangeipreservations.whereabouts.cni.cncf.io, uid: ae18c5bd-30f0-4e7f-bcc1-2d4dcbc94239]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.117990042+00:00 stderr F I0813 20:09:14.116225 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: network-attachment-definitions.k8s.cni.cncf.io, uid: a99eded9-1e26-4704-8381-dc109f03cc1f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.120355630+00:00 stderr F I0813 20:09:14.119464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[config.openshift.io/v1/Image, namespace: , name: cluster, uid: f9e57099-766b-4ef5-9883-f7bf72d3acd8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.121110752+00:00 stderr F I0813 20:09:14.121034 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-11405dc064e9fc83a779a06d1cd665b3, uid: 6c39c9b1-6ae4-4da9-bc28-1744aa4c8a1d]" 
owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.123210882+00:00 stderr F I0813 20:09:14.123012 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5, uid: c414de32-5f1e-433d-8ddc-6ef8e86afda7]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.125406005+00:00 stderr F I0813 20:09:14.125289 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, uid: 8980258c-3f45-4fdf-85aa-5d7816ef57b0]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.130221963+00:00 stderr F I0813 20:09:14.130187 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-worker-83accf81260e29bcce65a184dd980479, uid: 064c58c6-b7e3-4279-8d84-dee3da8cc701]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"worker","uid":"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.132879569+00:00 stderr F I0813 20:09:14.132824 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , 
name: rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff, uid: 3ef54a1f-2601-44d6-aa6d-d19e1277d0b9]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"worker","uid":"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.135660319+00:00 stderr F I0813 20:09:14.135627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2025-08-13T20:10:15.242365070+00:00 stderr F I0813 20:10:15.233259 1 event.go:376] "Event occurred" object="openshift-multus/cni-sysctl-allowlist-ds" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cni-sysctl-allowlist-ds-jx5m8" 2025-08-13T20:10:18.196245188+00:00 stderr F I0813 20:10:18.196096 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ControllerRevision, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-56444b9596, uid: 296d6144-09f5-4dc7-9ab3-000f2dd8cf46]" virtual=false 2025-08-13T20:10:18.196610098+00:00 stderr F I0813 20:10:18.196559 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-jx5m8, uid: b78e72e3-8ece-4d66-aa9c-25445bacdc99]" virtual=false 2025-08-13T20:10:18.219466944+00:00 stderr F I0813 20:10:18.219401 1 garbagecollector.go:688] "Deleting item" item="[apps/v1/ControllerRevision, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-56444b9596, uid: 296d6144-09f5-4dc7-9ab3-000f2dd8cf46]" propagationPolicy="Background" 2025-08-13T20:10:18.220159104+00:00 stderr F I0813 20:10:18.219455 1 garbagecollector.go:688] "Deleting item" item="[v1/Pod, namespace: openshift-multus, name: 
cni-sysctl-allowlist-ds-jx5m8, uid: b78e72e3-8ece-4d66-aa9c-25445bacdc99]" propagationPolicy="Background" 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.683724 1 replica_set.go:621] "Too many replicas" replicaSet="openshift-route-controller-manager/route-controller-manager-6884dcf749" need=0 deleting=1 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684108 1 replica_set.go:248] "Found related ReplicaSets" replicaSet="openshift-route-controller-manager/route-controller-manager-6884dcf749" relatedReplicaSets=["openshift-route-controller-manager/route-controller-manager-776b8b7477","openshift-route-controller-manager/route-controller-manager-5446f98575","openshift-route-controller-manager/route-controller-manager-5b77f9fd48","openshift-route-controller-manager/route-controller-manager-5c4dbb8899","openshift-route-controller-manager/route-controller-manager-6884dcf749","openshift-route-controller-manager/route-controller-manager-777dbbb7bb","openshift-route-controller-manager/route-controller-manager-7d967d98df","openshift-route-controller-manager/route-controller-manager-846977c6bc","openshift-route-controller-manager/route-controller-manager-66f66f94cf","openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc","openshift-route-controller-manager/route-controller-manager-7f79969969","openshift-route-controller-manager/route-controller-manager-868695ccb4"] 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684171 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set route-controller-manager-6884dcf749 to 0 from 1" 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684410 1 controller_utils.go:609] "Deleting pod" controller="route-controller-manager-6884dcf749" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" 
2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684749 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set controller-manager-598fc85fd4 to 0 from 1" 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687007 1 replica_set.go:621] "Too many replicas" replicaSet="openshift-controller-manager/controller-manager-598fc85fd4" need=0 deleting=1 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687101 1 replica_set.go:248] "Found related ReplicaSets" replicaSet="openshift-controller-manager/controller-manager-598fc85fd4" relatedReplicaSets=["openshift-controller-manager/controller-manager-659898b96d","openshift-controller-manager/controller-manager-6ff78978b4","openshift-controller-manager/controller-manager-75cfd5db5d","openshift-controller-manager/controller-manager-7bbb4b7f4c","openshift-controller-manager/controller-manager-99c8765d7","openshift-controller-manager/controller-manager-b69786f4f","openshift-controller-manager/controller-manager-5797bcd546","openshift-controller-manager/controller-manager-67685c4459","openshift-controller-manager/controller-manager-78589965b8","openshift-controller-manager/controller-manager-c4dd57946","openshift-controller-manager/controller-manager-778975cc4f","openshift-controller-manager/controller-manager-598fc85fd4"] 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687333 1 controller_utils.go:609] "Deleting pod" controller="controller-manager-598fc85fd4" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" 2025-08-13T20:10:59.695564135+00:00 stderr F I0813 20:10:59.695433 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-controller-manager/controller-manager" err="Operation cannot be fulfilled on deployments.apps \"controller-manager\": the object has been modified; please apply 
your changes to the latest version and try again" 2025-08-13T20:10:59.702011570+00:00 stderr F I0813 20:10:59.701910 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-route-controller-manager/route-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"route-controller-manager\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:10:59.721870629+00:00 stderr F I0813 20:10:59.721668 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set route-controller-manager-776b8b7477 to 1 from 0" 2025-08-13T20:10:59.731856846+00:00 stderr F I0813 20:10:59.728994 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set controller-manager-778975cc4f to 1 from 0" 2025-08-13T20:10:59.748226745+00:00 stderr F I0813 20:10:59.747647 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager-6884dcf749" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: route-controller-manager-6884dcf749-n4qpx" 2025-08-13T20:10:59.800091072+00:00 stderr F I0813 20:10:59.799961 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="116.454919ms" 2025-08-13T20:10:59.807908986+00:00 stderr F I0813 20:10:59.800945 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager-598fc85fd4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: controller-manager-598fc85fd4-8wlsm" 
2025-08-13T20:10:59.809548343+00:00 stderr F I0813 20:10:59.809388 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="161.014787ms" 2025-08-13T20:10:59.809574074+00:00 stderr F I0813 20:10:59.809537 1 replica_set.go:585] "Too few replicas" replicaSet="openshift-controller-manager/controller-manager-778975cc4f" need=1 creating=1 2025-08-13T20:10:59.840894912+00:00 stderr F I0813 20:10:59.840650 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="192.344795ms" 2025-08-13T20:10:59.841197671+00:00 stderr F I0813 20:10:59.841143 1 replica_set.go:585] "Too few replicas" replicaSet="openshift-route-controller-manager/route-controller-manager-776b8b7477" need=1 creating=1 2025-08-13T20:10:59.842972571+00:00 stderr F I0813 20:10:59.841585 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="154.722446ms" 2025-08-13T20:10:59.846082041+00:00 stderr F I0813 20:10:59.843119 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-route-controller-manager/route-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"route-controller-manager\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:10:59.847259364+00:00 stderr F I0813 20:10:59.847173 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="47.132021ms" 2025-08-13T20:10:59.850840847+00:00 stderr F I0813 20:10:59.849446 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="121.073µs" 2025-08-13T20:10:59.895562309+00:00 stderr F I0813 20:10:59.893518 1 event.go:376] "Event occurred" 
object="openshift-controller-manager/controller-manager-778975cc4f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: controller-manager-778975cc4f-x5vcf" 2025-08-13T20:10:59.916033096+00:00 stderr F I0813 20:10:59.913261 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager-776b8b7477" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: route-controller-manager-776b8b7477-sfpvs" 2025-08-13T20:10:59.956079364+00:00 stderr F I0813 20:10:59.955984 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="114.325958ms" 2025-08-13T20:10:59.956286910+00:00 stderr F I0813 20:10:59.956255 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="93.453µs" 2025-08-13T20:10:59.970990092+00:00 stderr F I0813 20:10:59.970868 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="161.369486ms" 2025-08-13T20:10:59.994078604+00:00 stderr F I0813 20:10:59.993975 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="153.216473ms" 2025-08-13T20:11:00.002982189+00:00 stderr F I0813 20:11:00.002852 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="31.625627ms" 2025-08-13T20:11:00.003505364+00:00 stderr F I0813 20:11:00.003416 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="217.526µs" 2025-08-13T20:11:00.006635064+00:00 stderr F I0813 20:11:00.006602 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-controller-manager/controller-manager-778975cc4f" duration="38.101µs" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.020429 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="26.091848ms" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.021165 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="131.754µs" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.021224 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="18.911µs" 2025-08-13T20:11:00.051909872+00:00 stderr F I0813 20:11:00.051612 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="62.532µs" 2025-08-13T20:11:00.399700743+00:00 stderr F I0813 20:11:00.399640 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="77.293µs" 2025-08-13T20:11:00.509064199+00:00 stderr F I0813 20:11:00.508908 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="66.812µs" 2025-08-13T20:11:00.666413040+00:00 stderr F I0813 20:11:00.666271 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="78.033µs" 2025-08-13T20:11:00.744231472+00:00 stderr F I0813 20:11:00.744045 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="304.548µs" 2025-08-13T20:11:00.772302806+00:00 stderr F I0813 20:11:00.772210 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-controller-manager/controller-manager-598fc85fd4" duration="83.382µs" 2025-08-13T20:11:00.782855379+00:00 stderr F I0813 20:11:00.780878 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="96.213µs" 2025-08-13T20:11:00.820830668+00:00 stderr F I0813 20:11:00.815070 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="59.832µs" 2025-08-13T20:11:00.848323746+00:00 stderr F I0813 20:11:00.845189 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="57.481µs" 2025-08-13T20:11:01.521766564+00:00 stderr F I0813 20:11:01.521384 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="407.931µs" 2025-08-13T20:11:01.553975188+00:00 stderr F I0813 20:11:01.552726 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="95.063µs" 2025-08-13T20:11:01.573428735+00:00 stderr F I0813 20:11:01.573000 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="215.366µs" 2025-08-13T20:11:01.597047563+00:00 stderr F I0813 20:11:01.596684 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="84.792µs" 2025-08-13T20:11:01.614690128+00:00 stderr F I0813 20:11:01.612124 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="61.432µs" 2025-08-13T20:11:01.643016831+00:00 stderr F I0813 20:11:01.642674 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="85.323µs" 2025-08-13T20:11:02.189949232+00:00 stderr F I0813 20:11:02.189586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="118.123µs" 2025-08-13T20:11:02.284109911+00:00 stderr F I0813 20:11:02.282662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="160.454µs" 2025-08-13T20:11:02.706573933+00:00 stderr F I0813 20:11:02.706232 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="3.430578ms" 2025-08-13T20:11:02.747130546+00:00 stderr F I0813 20:11:02.742157 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="59.321µs" 2025-08-13T20:11:03.756875606+00:00 stderr F I0813 20:11:03.756699 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="27.958882ms" 2025-08-13T20:11:03.757122423+00:00 stderr F I0813 20:11:03.757071 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="118.803µs" 2025-08-13T20:11:03.831419043+00:00 stderr F I0813 20:11:03.831349 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="48.773649ms" 2025-08-13T20:11:03.831652280+00:00 stderr F I0813 20:11:03.831627 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="92.093µs" 2025-08-13T20:11:03.840501514+00:00 stderr F I0813 20:11:03.837482 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-route-controller-manager/route-controller-manager-66f66f94cf" duration="9.26µs" 2025-08-13T20:11:03.878395910+00:00 stderr F I0813 20:11:03.875526 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-75cfd5db5d" duration="11.16µs" 2025-08-13T20:15:00.203593711+00:00 stderr F I0813 20:15:00.203243 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job collect-profiles-29251935" 2025-08-13T20:15:00.210739586+00:00 stderr F I0813 20:15:00.210633 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.344609754+00:00 stderr F I0813 20:15:00.339868 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.344609754+00:00 stderr F I0813 20:15:00.340109 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251935" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: collect-profiles-29251935-d7x6j" 2025-08-13T20:15:00.374665496+00:00 stderr F I0813 20:15:00.374329 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.382615054+00:00 stderr F I0813 20:15:00.379152 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.411165522+00:00 stderr F I0813 20:15:00.409102 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.466745326+00:00 stderr F I0813 20:15:00.466507 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 
2025-08-13T20:15:01.020671647+00:00 stderr F I0813 20:15:01.020543 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:02.374771730+00:00 stderr F I0813 20:15:02.374665 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:03.385932521+00:00 stderr F I0813 20:15:03.385873 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:03.395491475+00:00 stderr F I0813 20:15:03.395432 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:04.448076683+00:00 stderr F I0813 20:15:04.446505 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:04.689482515+00:00 stderr F I0813 20:15:04.689325 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.382243267+00:00 stderr F I0813 20:15:05.381489 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.546215138+00:00 stderr F I0813 20:15:05.545919 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.593678289+00:00 stderr F I0813 20:15:05.593530 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.629989270+00:00 stderr F I0813 20:15:05.629819 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251935" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" 2025-08-13T20:15:05.629989270+00:00 stderr F I0813 20:15:05.629766 1 job_controller.go:554] "enqueueing job" 
key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.675380811+00:00 stderr F I0813 20:15:05.675228 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-28658250" 2025-08-13T20:15:05.675952768+00:00 stderr F I0813 20:15:05.675839 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted job collect-profiles-28658250" 2025-08-13T20:15:05.675952768+00:00 stderr F I0813 20:15:05.675883 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SawCompletedJob" message="Saw completed job: collect-profiles-29251935, status: Complete" 2025-08-13T20:18:48.575668025+00:00 stderr F I0813 20:18:48.575532 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:18:48.576946972+00:00 stderr F I0813 20:18:48.576877 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="501.145µs" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578014 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="252.247µs" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578835 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578877 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:18:48.579479144+00:00 stderr F I0813 20:18:48.579402 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-image-registry/image-registry-585546dd8b" duration="1.280157ms" 2025-08-13T20:18:48.579602698+00:00 stderr F I0813 20:18:48.579552 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="62.932µs" 2025-08-13T20:18:48.580147903+00:00 stderr F I0813 20:18:48.580037 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="397.592µs" 2025-08-13T20:18:48.580406671+00:00 stderr F I0813 20:18:48.580310 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="204.405µs" 2025-08-13T20:18:48.580451922+00:00 stderr F I0813 20:18:48.580415 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="49.102µs" 2025-08-13T20:18:48.580581386+00:00 stderr F I0813 20:18:48.580511 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="37.091µs" 2025-08-13T20:18:48.580660798+00:00 stderr F I0813 20:18:48.580609 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="48.081µs" 2025-08-13T20:18:48.580824103+00:00 stderr F I0813 20:18:48.580726 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="39.251µs" 2025-08-13T20:18:48.581135021+00:00 stderr F I0813 20:18:48.581062 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="175.775µs" 2025-08-13T20:18:48.581301466+00:00 stderr F I0813 20:18:48.581206 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" 
duration="45.061µs" 2025-08-13T20:18:48.581301466+00:00 stderr F I0813 20:18:48.581288 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="37.992µs" 2025-08-13T20:18:48.581422490+00:00 stderr F I0813 20:18:48.581384 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="62.401µs" 2025-08-13T20:18:48.581541263+00:00 stderr F I0813 20:18:48.581504 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="46.972µs" 2025-08-13T20:18:48.581645876+00:00 stderr F I0813 20:18:48.581610 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="31.421µs" 2025-08-13T20:18:48.581941554+00:00 stderr F I0813 20:18:48.581832 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="120.523µs" 2025-08-13T20:18:48.581961025+00:00 stderr F I0813 20:18:48.581947 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="37.771µs" 2025-08-13T20:18:48.582184921+00:00 stderr F I0813 20:18:48.582075 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="27.691µs" 2025-08-13T20:18:48.582184921+00:00 stderr F I0813 20:18:48.582163 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="41.281µs" 2025-08-13T20:18:48.582299475+00:00 stderr F I0813 20:18:48.582249 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="37.531µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 
20:18:48.582409 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="56.532µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582562 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="52.932µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582634 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="38.631µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="81.872µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583189 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="102.563µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583290 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="42.012µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583454 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="44.371µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583506 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="30.511µs" 2025-08-13T20:18:48.583678534+00:00 stderr F I0813 20:18:48.583610 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="68.892µs" 2025-08-13T20:18:48.583689784+00:00 stderr F I0813 20:18:48.583675 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="41.401µs" 
2025-08-13T20:18:48.583878770+00:00 stderr F I0813 20:18:48.583763 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="44.461µs" 2025-08-13T20:18:48.584012814+00:00 stderr F I0813 20:18:48.583939 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="44.741µs" 2025-08-13T20:18:48.584209389+00:00 stderr F I0813 20:18:48.584158 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="120.563µs" 2025-08-13T20:18:48.584343763+00:00 stderr F I0813 20:18:48.584270 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="37.201µs" 2025-08-13T20:18:48.584480837+00:00 stderr F I0813 20:18:48.584360 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="40.231µs" 2025-08-13T20:18:48.584703693+00:00 stderr F I0813 20:18:48.584654 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="49.902µs" 2025-08-13T20:18:48.585082864+00:00 stderr F I0813 20:18:48.584838 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="122.894µs" 2025-08-13T20:18:48.585082864+00:00 stderr F I0813 20:18:48.585011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="88.173µs" 2025-08-13T20:18:48.585104995+00:00 stderr F I0813 20:18:48.585079 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="41.281µs" 2025-08-13T20:18:48.585226448+00:00 stderr F I0813 20:18:48.585142 1 replica_set.go:676] 
"Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="31.771µs" 2025-08-13T20:18:48.585293140+00:00 stderr F I0813 20:18:48.585242 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="57.141µs" 2025-08-13T20:18:48.585418334+00:00 stderr F I0813 20:18:48.585364 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="39.571µs" 2025-08-13T20:18:48.585596599+00:00 stderr F I0813 20:18:48.585510 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="45.891µs" 2025-08-13T20:18:48.585695662+00:00 stderr F I0813 20:18:48.585636 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="57.112µs" 2025-08-13T20:18:48.585842386+00:00 stderr F I0813 20:18:48.585746 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="55.842µs" 2025-08-13T20:18:48.586018651+00:00 stderr F I0813 20:18:48.585930 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="50.101µs" 2025-08-13T20:18:48.586438363+00:00 stderr F I0813 20:18:48.586348 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="59.982µs" 2025-08-13T20:18:48.586755572+00:00 stderr F I0813 20:18:48.586671 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="245.357µs" 2025-08-13T20:18:48.587424411+00:00 stderr F I0813 20:18:48.587357 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" 
duration="36.031µs" 2025-08-13T20:18:48.594681488+00:00 stderr F I0813 20:18:48.594599 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="37.611µs" 2025-08-13T20:18:48.594706819+00:00 stderr F I0813 20:18:48.594697 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="32.811µs" 2025-08-13T20:18:48.594801982+00:00 stderr F I0813 20:18:48.594751 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="24.561µs" 2025-08-13T20:18:48.594943216+00:00 stderr F I0813 20:18:48.594887 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="30.461µs" 2025-08-13T20:18:48.595144622+00:00 stderr F I0813 20:18:48.595051 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="35.071µs" 2025-08-13T20:18:48.595160232+00:00 stderr F I0813 20:18:48.595142 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="37.141µs" 2025-08-13T20:18:48.595255485+00:00 stderr F I0813 20:18:48.595197 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="23.461µs" 2025-08-13T20:18:48.595335777+00:00 stderr F I0813 20:18:48.595282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="30.451µs" 2025-08-13T20:18:48.595410959+00:00 stderr F I0813 20:18:48.595359 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="27.43µs" 
2025-08-13T20:18:48.595473371+00:00 stderr F I0813 20:18:48.595435 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="25.651µs" 2025-08-13T20:18:48.595512062+00:00 stderr F I0813 20:18:48.595498 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="34.711µs" 2025-08-13T20:18:48.596365926+00:00 stderr F I0813 20:18:48.596286 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="34.821µs" 2025-08-13T20:18:48.596385377+00:00 stderr F I0813 20:18:48.596367 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="36.311µs" 2025-08-13T20:18:48.596489480+00:00 stderr F I0813 20:18:48.596420 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="23.921µs" 2025-08-13T20:18:48.596502440+00:00 stderr F I0813 20:18:48.596485 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="29.291µs" 2025-08-13T20:18:48.596592103+00:00 stderr F I0813 20:18:48.596541 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="25.54µs" 2025-08-13T20:18:48.596678375+00:00 stderr F I0813 20:18:48.596627 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="32.831µs" 2025-08-13T20:18:48.596820069+00:00 stderr F I0813 20:18:48.596731 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="34.631µs" 2025-08-13T20:18:48.597425287+00:00 stderr F I0813 20:18:48.597366 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="592.737µs" 2025-08-13T20:18:48.597515979+00:00 stderr F I0813 20:18:48.597470 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="40.571µs" 2025-08-13T20:18:48.597581281+00:00 stderr F I0813 20:18:48.597543 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="36.811µs" 2025-08-13T20:18:48.597709255+00:00 stderr F I0813 20:18:48.597629 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="30.711µs" 2025-08-13T20:18:48.597852239+00:00 stderr F I0813 20:18:48.597758 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="22.721µs" 2025-08-13T20:18:48.597909680+00:00 stderr F I0813 20:18:48.597875 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="41.331µs" 2025-08-13T20:18:48.597958332+00:00 stderr F I0813 20:18:48.597927 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="20.131µs" 2025-08-13T20:18:48.598063635+00:00 stderr F I0813 20:18:48.598017 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="73.102µs" 2025-08-13T20:18:48.598143787+00:00 stderr F I0813 20:18:48.598101 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="31.091µs" 2025-08-13T20:18:48.598239250+00:00 stderr F I0813 20:18:48.598197 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="32.871µs" 
2025-08-13T20:18:48.598292361+00:00 stderr F I0813 20:18:48.598251 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="21.781µs" 2025-08-13T20:18:48.598344053+00:00 stderr F I0813 20:18:48.598310 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="16.211µs" 2025-08-13T20:18:48.598395504+00:00 stderr F I0813 20:18:48.598359 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="27.161µs" 2025-08-13T20:18:48.598407275+00:00 stderr F I0813 20:18:48.598393 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="21.41µs" 2025-08-13T20:18:48.598491587+00:00 stderr F I0813 20:18:48.598449 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="34.551µs" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561715 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561942 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561975 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:28:48.572416611+00:00 stderr F I0813 20:28:48.572297 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="85.463µs" 2025-08-13T20:28:48.572499904+00:00 stderr F I0813 20:28:48.572463 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="117.013µs" 
2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572551 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="34.691µs" 2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572558 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="33.061µs" 2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572483 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="76.202µs" 2025-08-13T20:28:48.572757081+00:00 stderr F I0813 20:28:48.572662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="75.922µs" 2025-08-13T20:28:48.572873254+00:00 stderr F I0813 20:28:48.572822 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="41.641µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.572909 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="47.451µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573449 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="47.121µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573518 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="38.131µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573685 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="45.511µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573745 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="43.461µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573854 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="78.192µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573899 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="27.591µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573961 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="23.76µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573998 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="24.511µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574109 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="39.352µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574180 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="33.721µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="55.232µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574505 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="205.626µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574598 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="60.452µs" 2025-08-13T20:28:48.574711027+00:00 
stderr F I0813 20:28:48.574636 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="24.411µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574697 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="29.201µs" 2025-08-13T20:28:48.574873782+00:00 stderr F I0813 20:28:48.574763 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="30.641µs" 2025-08-13T20:28:48.574894543+00:00 stderr F I0813 20:28:48.574884 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="26.231µs" 2025-08-13T20:28:48.574996966+00:00 stderr F I0813 20:28:48.574952 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="37.291µs" 2025-08-13T20:28:48.575148860+00:00 stderr F I0813 20:28:48.575105 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="61.651µs" 2025-08-13T20:28:48.575220462+00:00 stderr F I0813 20:28:48.575180 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="33.781µs" 2025-08-13T20:28:48.575297414+00:00 stderr F I0813 20:28:48.575258 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="27.651µs" 2025-08-13T20:28:48.575443028+00:00 stderr F I0813 20:28:48.575400 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="54.142µs" 2025-08-13T20:28:48.575522701+00:00 stderr F I0813 20:28:48.575478 1 replica_set.go:676] "Finished syncing" 
kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="40.361µs" 2025-08-13T20:28:48.575628464+00:00 stderr F I0813 20:28:48.575586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="44.871µs" 2025-08-13T20:28:48.575731537+00:00 stderr F I0813 20:28:48.575692 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="32.401µs" 2025-08-13T20:28:48.575953403+00:00 stderr F I0813 20:28:48.575902 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="130.744µs" 2025-08-13T20:28:48.576073166+00:00 stderr F I0813 20:28:48.576033 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="50.821µs" 2025-08-13T20:28:48.576170999+00:00 stderr F I0813 20:28:48.576127 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="57.532µs" 2025-08-13T20:28:48.582190482+00:00 stderr F I0813 20:28:48.582120 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="29.911µs" 2025-08-13T20:28:48.582343637+00:00 stderr F I0813 20:28:48.582293 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="129.864µs" 2025-08-13T20:28:48.582499811+00:00 stderr F I0813 20:28:48.582449 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="45.631µs" 2025-08-13T20:28:48.583335125+00:00 stderr F I0813 20:28:48.583281 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="113.853µs" 2025-08-13T20:28:48.583938443+00:00 stderr F I0813 20:28:48.583890 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="45.131µs" 2025-08-13T20:28:48.584046436+00:00 stderr F I0813 20:28:48.583990 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="33.051µs" 2025-08-13T20:28:48.584188930+00:00 stderr F I0813 20:28:48.584142 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="99.673µs" 2025-08-13T20:28:48.585535798+00:00 stderr F I0813 20:28:48.584312 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="44.421µs" 2025-08-13T20:28:48.586182467+00:00 stderr F I0813 20:28:48.586133 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="913.937µs" 2025-08-13T20:28:48.586293080+00:00 stderr F I0813 20:28:48.586249 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="59.842µs" 2025-08-13T20:28:48.586364702+00:00 stderr F I0813 20:28:48.586323 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="39.511µs" 2025-08-13T20:28:48.586441614+00:00 stderr F I0813 20:28:48.586400 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="34.391µs" 2025-08-13T20:28:48.586544867+00:00 stderr F I0813 20:28:48.586500 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="43.001µs" 2025-08-13T20:28:48.586659321+00:00 stderr F I0813 
20:28:48.586616 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="59.341µs" 2025-08-13T20:28:48.586722923+00:00 stderr F I0813 20:28:48.586682 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="30.361µs" 2025-08-13T20:28:48.586953449+00:00 stderr F I0813 20:28:48.586879 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="114.814µs" 2025-08-13T20:28:48.587034661+00:00 stderr F I0813 20:28:48.586968 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="51.511µs" 2025-08-13T20:28:48.587130644+00:00 stderr F I0813 20:28:48.587086 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="41.431µs" 2025-08-13T20:28:48.587262428+00:00 stderr F I0813 20:28:48.587178 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="34.911µs" 2025-08-13T20:28:48.587368671+00:00 stderr F I0813 20:28:48.587326 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="75.113µs" 2025-08-13T20:28:48.587453754+00:00 stderr F I0813 20:28:48.587412 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="30.911µs" 2025-08-13T20:28:48.587557306+00:00 stderr F I0813 20:28:48.587514 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="38.641µs" 2025-08-13T20:28:48.587613678+00:00 stderr F I0813 20:28:48.587576 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="23.861µs" 2025-08-13T20:28:48.587703781+00:00 stderr F I0813 20:28:48.587656 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="28.301µs" 2025-08-13T20:28:48.587851305+00:00 stderr F I0813 20:28:48.587761 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="45.171µs" 2025-08-13T20:28:48.587911167+00:00 stderr F I0813 20:28:48.587869 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="29.431µs" 2025-08-13T20:28:48.588074651+00:00 stderr F I0813 20:28:48.587997 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="42.771µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588170 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="74.632µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588259 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="37.031µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588304 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="30.261µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588366 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="34.341µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588419 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="21.48µs" 
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588496 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="23.58µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="39.001µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="91.553µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588606 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="15.021µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588676 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="29.521µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="54.632µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588868 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="99.053µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="39.531µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.589076 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="98.843µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.589133 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="24.191µs" 2025-08-13T20:28:48.589284106+00:00 stderr F I0813 20:28:48.589212 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="49.501µs" 2025-08-13T20:28:48.589284106+00:00 stderr F I0813 20:28:48.589245 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="18.491µs" 2025-08-13T20:28:48.589508483+00:00 stderr F I0813 20:28:48.589442 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="166.574µs" 2025-08-13T20:28:48.589623106+00:00 stderr F I0813 20:28:48.589571 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="61.752µs" 2025-08-13T20:28:48.589721529+00:00 stderr F I0813 20:28:48.589668 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="40.511µs" 2025-08-13T20:30:01.235727093+00:00 stderr F I0813 20:30:01.235606 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job collect-profiles-29251950" 2025-08-13T20:30:01.241730696+00:00 stderr F I0813 20:30:01.241647 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:01.844719909+00:00 stderr F I0813 20:30:01.844619 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:01.844986387+00:00 stderr F I0813 20:30:01.844929 1 event.go:376] "Event occurred" 
object="openshift-operator-lifecycle-manager/collect-profiles-29251950" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: collect-profiles-29251950-x8jjd" 2025-08-13T20:30:01.967460807+00:00 stderr F I0813 20:30:01.966767 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:01.986882936+00:00 stderr F I0813 20:30:01.986767 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:02.037764628+00:00 stderr F I0813 20:30:02.037702 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:02.065930468+00:00 stderr F I0813 20:30:02.065451 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:02.813309422+00:00 stderr F I0813 20:30:02.811373 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:03.324810106+00:00 stderr F I0813 20:30:03.324606 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:04.354378292+00:00 stderr F I0813 20:30:04.354319 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:05.704293336+00:00 stderr F I0813 20:30:05.704129 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:06.998551512+00:00 stderr F I0813 20:30:06.997918 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:07.101296225+00:00 stderr F I0813 20:30:07.100365 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 
2025-08-13T20:30:07.348520052+00:00 stderr F I0813 20:30:07.348345 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:08.078659269+00:00 stderr F I0813 20:30:08.075462 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:08.095253666+00:00 stderr F I0813 20:30:08.095157 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:08.107656023+00:00 stderr F I0813 20:30:08.107555 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251950" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" 2025-08-13T20:30:08.108126586+00:00 stderr F I0813 20:30:08.108002 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:08.125248199+00:00 stderr F I0813 20:30:08.125193 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted job collect-profiles-29251905" 2025-08-13T20:30:08.125360992+00:00 stderr F I0813 20:30:08.125312 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SawCompletedJob" message="Saw completed job: collect-profiles-29251950, status: Complete" 2025-08-13T20:30:08.127420121+00:00 stderr F I0813 20:30:08.127298 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:30:08.131391925+00:00 stderr F I0813 20:30:08.131365 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-operator-lifecycle-manager, name: 
collect-profiles-29251905-zmjv9, uid: 8500d7bd-50fb-4ca6-af41-b7a24cae43cd]" virtual=false 2025-08-13T20:30:08.156610490+00:00 stderr F I0813 20:30:08.156406 1 garbagecollector.go:688] "Deleting item" item="[v1/Pod, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905-zmjv9, uid: 8500d7bd-50fb-4ca6-af41-b7a24cae43cd]" propagationPolicy="Background" 2025-08-13T20:38:48.563688441+00:00 stderr F I0813 20:38:48.563520 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:38:48.563921368+00:00 stderr F I0813 20:38:48.563891 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:38:48.563995810+00:00 stderr F I0813 20:38:48.563976 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:38:48.579185338+00:00 stderr F I0813 20:38:48.579044 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="96.223µs" 2025-08-13T20:38:48.579382724+00:00 stderr F I0813 20:38:48.579325 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="64.582µs" 2025-08-13T20:38:48.579478777+00:00 stderr F I0813 20:38:48.579417 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="70.062µs" 2025-08-13T20:38:48.579595150+00:00 stderr F I0813 20:38:48.579542 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="128.404µs" 2025-08-13T20:38:48.579691783+00:00 stderr F I0813 20:38:48.579644 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="38.571µs" 2025-08-13T20:38:48.579874068+00:00 
stderr F I0813 20:38:48.579751 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="23.39µs" 2025-08-13T20:38:48.579874068+00:00 stderr F I0813 20:38:48.579855 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="53.032µs" 2025-08-13T20:38:48.579893449+00:00 stderr F I0813 20:38:48.579555 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="41.712µs" 2025-08-13T20:38:48.579893449+00:00 stderr F I0813 20:38:48.579881 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="32.901µs" 2025-08-13T20:38:48.580004002+00:00 stderr F I0813 20:38:48.579957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="33.591µs" 2025-08-13T20:38:48.580082494+00:00 stderr F I0813 20:38:48.580029 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="35.401µs" 2025-08-13T20:38:48.580273420+00:00 stderr F I0813 20:38:48.580135 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="29.191µs" 2025-08-13T20:38:48.580355612+00:00 stderr F I0813 20:38:48.580309 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="38.441µs" 2025-08-13T20:38:48.580447345+00:00 stderr F I0813 20:38:48.580402 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="35.661µs" 2025-08-13T20:38:48.580511216+00:00 stderr F I0813 20:38:48.580468 1 
replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="29.241µs" 2025-08-13T20:38:48.580644020+00:00 stderr F I0813 20:38:48.580575 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="46.271µs" 2025-08-13T20:38:48.580763494+00:00 stderr F I0813 20:38:48.580716 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="42.351µs" 2025-08-13T20:38:48.581030471+00:00 stderr F I0813 20:38:48.580978 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="71.373µs" 2025-08-13T20:38:48.581085763+00:00 stderr F I0813 20:38:48.581044 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="26.761µs" 2025-08-13T20:38:48.581194846+00:00 stderr F I0813 20:38:48.581123 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="28.801µs" 2025-08-13T20:38:48.581346230+00:00 stderr F I0813 20:38:48.581230 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="34.051µs" 2025-08-13T20:38:48.581507155+00:00 stderr F I0813 20:38:48.581457 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="34.011µs" 2025-08-13T20:38:48.581600998+00:00 stderr F I0813 20:38:48.581520 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="23.7µs" 2025-08-13T20:38:48.581851845+00:00 stderr F I0813 20:38:48.581745 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="91.943µs" 
2025-08-13T20:38:48.582722360+00:00 stderr F I0813 20:38:48.582669 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="79.962µs" 2025-08-13T20:38:48.582849384+00:00 stderr F I0813 20:38:48.582757 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="27.321µs" 2025-08-13T20:38:48.583107141+00:00 stderr F I0813 20:38:48.583052 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="197.715µs" 2025-08-13T20:38:48.583309887+00:00 stderr F I0813 20:38:48.583255 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="117.753µs" 2025-08-13T20:38:48.583549744+00:00 stderr F I0813 20:38:48.583504 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="205.326µs" 2025-08-13T20:38:48.583708129+00:00 stderr F I0813 20:38:48.583662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="28.671µs" 2025-08-13T20:38:48.583830742+00:00 stderr F I0813 20:38:48.583742 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="41.281µs" 2025-08-13T20:38:48.583936325+00:00 stderr F I0813 20:38:48.583891 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="26.831µs" 2025-08-13T20:38:48.584010307+00:00 stderr F I0813 20:38:48.583968 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="37.761µs" 2025-08-13T20:38:48.584276475+00:00 stderr F I0813 20:38:48.584227 1 replica_set.go:676] "Finished 
syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="196.265µs" 2025-08-13T20:38:48.584376748+00:00 stderr F I0813 20:38:48.584333 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="32.111µs" 2025-08-13T20:38:48.584441130+00:00 stderr F I0813 20:38:48.584401 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="31.011µs" 2025-08-13T20:38:48.584523642+00:00 stderr F I0813 20:38:48.584483 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="27.481µs" 2025-08-13T20:38:48.584710087+00:00 stderr F I0813 20:38:48.584664 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="38.741µs" 2025-08-13T20:38:48.584883842+00:00 stderr F I0813 20:38:48.584758 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="26.791µs" 2025-08-13T20:38:48.584902043+00:00 stderr F I0813 20:38:48.584887 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="52.341µs" 2025-08-13T20:38:48.584992956+00:00 stderr F I0813 20:38:48.584949 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="27.581µs" 2025-08-13T20:38:48.585127759+00:00 stderr F I0813 20:38:48.585070 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="53.972µs" 2025-08-13T20:38:48.585459759+00:00 stderr F I0813 20:38:48.585410 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="33.531µs" 2025-08-13T20:38:48.585527751+00:00 stderr F I0813 20:38:48.585485 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="34.571µs" 2025-08-13T20:38:48.585616714+00:00 stderr F I0813 20:38:48.585565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="31.711µs" 2025-08-13T20:38:48.589687711+00:00 stderr F I0813 20:38:48.589631 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="4.005145ms" 2025-08-13T20:38:48.589843235+00:00 stderr F I0813 20:38:48.589742 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="66.022µs" 2025-08-13T20:38:48.590003700+00:00 stderr F I0813 20:38:48.589957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="50.462µs" 2025-08-13T20:38:48.590130664+00:00 stderr F I0813 20:38:48.590062 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="33.121µs" 2025-08-13T20:38:48.590369011+00:00 stderr F I0813 20:38:48.590321 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="223.846µs" 2025-08-13T20:38:48.590464003+00:00 stderr F I0813 20:38:48.590420 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="36.831µs" 2025-08-13T20:38:48.590549516+00:00 stderr F I0813 20:38:48.590505 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-multus/multus-admission-controller-5b6c864f95" duration="28.981µs" 2025-08-13T20:38:48.590647599+00:00 stderr F I0813 20:38:48.590601 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="38.001µs" 2025-08-13T20:38:48.590744671+00:00 stderr F I0813 20:38:48.590701 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="60.032µs" 2025-08-13T20:38:48.590946127+00:00 stderr F I0813 20:38:48.590847 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="88.072µs" 2025-08-13T20:38:48.590946127+00:00 stderr F I0813 20:38:48.590939 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="30.511µs" 2025-08-13T20:38:48.591036330+00:00 stderr F I0813 20:38:48.590992 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="39.751µs" 2025-08-13T20:38:48.591137643+00:00 stderr F I0813 20:38:48.591096 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="48.141µs" 2025-08-13T20:38:48.591320338+00:00 stderr F I0813 20:38:48.591264 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="39.011µs" 2025-08-13T20:38:48.591434191+00:00 stderr F I0813 20:38:48.591365 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="42.922µs" 2025-08-13T20:38:48.591563155+00:00 stderr F I0813 20:38:48.591507 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="34.601µs" 2025-08-13T20:38:48.591712439+00:00 stderr F I0813 20:38:48.591665 1 
replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="63.352µs" 2025-08-13T20:38:48.591858964+00:00 stderr F I0813 20:38:48.591761 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="55.722µs" 2025-08-13T20:38:48.592280436+00:00 stderr F I0813 20:38:48.592216 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="112.373µs" 2025-08-13T20:38:48.592280436+00:00 stderr F I0813 20:38:48.592241 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="54.821µs" 2025-08-13T20:38:48.592337147+00:00 stderr F I0813 20:38:48.592292 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="28.831µs" 2025-08-13T20:38:48.592410689+00:00 stderr F I0813 20:38:48.592361 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="55.082µs" 2025-08-13T20:38:48.592423270+00:00 stderr F I0813 20:38:48.592407 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="48.321µs" 2025-08-13T20:38:48.592544483+00:00 stderr F I0813 20:38:48.592501 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="46.191µs" 2025-08-13T20:38:48.592544483+00:00 stderr F I0813 20:38:48.592500 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="73.452µs" 2025-08-13T20:38:48.592645506+00:00 stderr F I0813 20:38:48.592602 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="35.051µs" 
2025-08-13T20:38:48.592645506+00:00 stderr F I0813 20:38:48.592607 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="74.652µs" 2025-08-13T20:38:48.592742669+00:00 stderr F I0813 20:38:48.592700 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="37.121µs" 2025-08-13T20:38:48.592858822+00:00 stderr F I0813 20:38:48.592814 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="32.851µs" 2025-08-13T20:38:48.593329486+00:00 stderr F I0813 20:38:48.593282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="36.211µs" 2025-08-13T20:38:48.593329486+00:00 stderr F I0813 20:38:48.593317 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="71.042µs" 2025-08-13T20:38:48.593463430+00:00 stderr F I0813 20:38:48.593410 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="59.982µs" 2025-08-13T20:38:48.593463430+00:00 stderr F I0813 20:38:48.593430 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="57.031µs" 2025-08-13T20:38:48.593569523+00:00 stderr F I0813 20:38:48.593515 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="33.301µs" 2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593581 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="57.002µs" 2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593520 1 replica_set.go:676] 
"Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="108.993µs" 2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593656 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="93.573µs" 2025-08-13T20:38:48.593840021+00:00 stderr F I0813 20:38:48.593757 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="761.432µs" 2025-08-13T20:42:36.319033048+00:00 stderr F I0813 20:42:36.318589 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319139171+00:00 stderr F I0813 20:42:36.319088 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319178332+00:00 stderr F I0813 20:42:36.317754 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319271204+00:00 stderr F I0813 20:42:36.319077 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319420109+00:00 stderr F I0813 20:42:36.317882 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.328448 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.330937 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.330939 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331078 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331255 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331304 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331362 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331380 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331430 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331436 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331495 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331521 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331638 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331714 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331767 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331857 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331953 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331966 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332074 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332104 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332160 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332176 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332270 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332294 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332370 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332392 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332495 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332518 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332543 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332633 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332724 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332746 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332906 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333041 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333101 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333156 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334203 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334319 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334389 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334458 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334576 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334639 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334689 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334739 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334892535+00:00 stderr F I0813 20:42:36.334863 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334973467+00:00 stderr F I0813 20:42:36.334926 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335049099+00:00 stderr F I0813 20:42:36.335007 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335274136+00:00 stderr F I0813 20:42:36.335166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335292186+00:00 stderr F I0813 20:42:36.335281 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335344998+00:00 stderr F I0813 20:42:36.335317 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335406310+00:00 stderr F I0813 20:42:36.335360 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335515213+00:00 stderr F I0813 20:42:36.335444 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335531863+00:00 stderr F I0813 20:42:36.335523 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335630546+00:00 stderr F I0813 20:42:36.335586 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335706258+00:00 stderr F I0813 20:42:36.335663 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335871703+00:00 stderr F I0813 20:42:36.335753 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335890124+00:00 stderr F I0813 20:42:36.335880 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.336009607+00:00 stderr F I0813 20:42:36.335967 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346184240+00:00 stderr F I0813 20:42:36.346144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346439938+00:00 stderr F I0813 20:42:36.346417 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346598742+00:00 stderr F I0813 20:42:36.346578 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346698135+00:00 stderr F I0813 20:42:36.346682 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346955483+00:00 stderr F I0813 20:42:36.346934 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347066456+00:00 stderr F I0813 20:42:36.347050 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347323543+00:00 stderr F I0813 20:42:36.347301 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347448877+00:00 stderr F I0813 20:42:36.347432 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347572440+00:00 stderr F I0813 20:42:36.347555 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347707654+00:00 stderr F I0813 20:42:36.347691 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347887830+00:00 stderr F I0813 20:42:36.347866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.354572862+00:00 stderr F I0813 20:42:36.347987 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.354670485+00:00 stderr F I0813 20:42:36.335346 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.354752457+00:00 stderr F I0813 20:42:36.347994 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.354893622+00:00 stderr F I0813 20:42:36.348005 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355175890+00:00 stderr F I0813 20:42:36.348010 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355309814+00:00 stderr F I0813 20:42:36.335334 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355389816+00:00 stderr F I0813 20:42:36.348124 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.355461678+00:00 stderr F I0813 20:42:36.348139 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355558661+00:00 stderr F I0813 20:42:36.348153 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355760807+00:00 stderr F I0813 20:42:36.348167 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353021 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353054 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353083 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353097 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353131 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353146 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353163 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353188 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353212 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353256 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353275 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.507074939+00:00 stderr F I0813 20:42:36.353287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353300 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353313 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353324 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353349 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353373 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353385 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353404 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353417 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353429 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353441 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353452 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353469 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353480 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539522884+00:00 stderr F I0813 20:42:36.353493 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539643678+00:00 stderr F I0813 20:42:36.353505 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539880275+00:00 stderr F I0813 20:42:36.353532 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539911286+00:00 stderr F I0813 20:42:36.353546 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540170323+00:00 stderr F I0813 20:42:36.353558 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.540352968+00:00 stderr F I0813 20:42:36.353570 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541644816+00:00 stderr F I0813 20:42:36.353582 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541845051+00:00 stderr F I0813 20:42:36.353593 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541925174+00:00 stderr F I0813 20:42:36.353685 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542021067+00:00 stderr F I0813 20:42:36.353711 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542147120+00:00 stderr F I0813 20:42:36.353723 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542277294+00:00 stderr F I0813 20:42:36.353733 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542345226+00:00 stderr F I0813 20:42:36.353745 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542485010+00:00 stderr F I0813 20:42:36.353824 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542703936+00:00 stderr F I0813 20:42:36.353846 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543057326+00:00 stderr F I0813 20:42:36.353862 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543371405+00:00 stderr F I0813 20:42:36.353884 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543665844+00:00 stderr F I0813 20:42:36.353900 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543995933+00:00 stderr F I0813 20:42:36.353910 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.544116567+00:00 stderr F I0813 20:42:36.353924 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.544263771+00:00 stderr F I0813 20:42:36.353941 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550576843+00:00 stderr F I0813 20:42:36.353952 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554663011+00:00 stderr F I0813 20:42:36.353961 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554996211+00:00 stderr F I0813 20:42:36.353972 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555433513+00:00 stderr F I0813 20:42:36.353983 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555596668+00:00 stderr F I0813 20:42:36.353998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555743802+00:00 stderr F I0813 20:42:36.354008 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556133413+00:00 stderr F I0813 20:42:36.354020 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556312429+00:00 stderr F I0813 20:42:36.354034 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354045 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354055 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354070 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354081 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354170 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354187 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354203 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354221 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354280 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354307 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354353 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354924 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.167199781+00:00 stderr F I0813 20:42:37.167091 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:42:37.167384846+00:00 stderr F I0813 20:42:37.167363 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:37.169842477+00:00 stderr F I0813 20:42:37.168712 1 secure_serving.go:258] Stopped listening on [::]:10257 2025-08-13T20:42:37.172441962+00:00 stderr F I0813 20:42:37.172372 1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:42:37.177007703+00:00 stderr F I0813 20:42:37.176268 1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:42:37.179891917+00:00 stderr F I0813 20:42:37.179850 1 publisher.go:114] "Shutting down root CA cert publisher controller" 2025-08-13T20:42:37.181820452+00:00 stderr F I0813 20:42:37.180049 1 publisher.go:92] Shutting down service CA certificate configmap publisher 2025-08-13T20:42:37.185348774+00:00 stderr F I0813 20:42:37.185285 1 garbagecollector.go:175] "Shutting down controller" controller="garbagecollector" 2025-08-13T20:42:37.185440086+00:00 stderr F I0813 20:42:37.185376 1 job_controller.go:238] "Shutting down job controller"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.log
2026-01-20T10:57:23.311522421+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done' 2026-01-20T10:57:23.314598824+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:23.324727101+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,25sec,0)' ']' 2026-01-20T10:57:23.324727101+00:00 stderr F + sleep 1 2026-01-20T10:57:24.325527157+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:24.332052880+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,24sec,0)' ']' 2026-01-20T10:57:24.332052880+00:00 stderr F + sleep 1 2026-01-20T10:57:25.336247386+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:25.342644395+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,23sec,0)' ']' 2026-01-20T10:57:25.342757509+00:00 stderr F + sleep 1 2026-01-20T10:57:26.346386290+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:26.357548065+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,22sec,0)' ']' 2026-01-20T10:57:26.357548065+00:00 stderr F + sleep 1 2026-01-20T10:57:27.361960367+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:27.370843292+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,21sec,0)' ']' 2026-01-20T10:57:27.370843292+00:00 stderr F +
sleep 1 2026-01-20T10:57:28.375026588+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:28.379575719+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,20sec,0)' ']' 2026-01-20T10:57:28.379575719+00:00 stderr F + sleep 1 2026-01-20T10:57:29.383493258+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:29.389188188+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,19sec,0)' ']' 2026-01-20T10:57:29.389188188+00:00 stderr F + sleep 1 2026-01-20T10:57:30.393405495+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:30.399255490+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,18sec,0)' ']' 2026-01-20T10:57:30.399255490+00:00 stderr F + sleep 1 2026-01-20T10:57:31.402869971+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:31.413390259+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,17sec,0)' ']' 2026-01-20T10:57:31.413390259+00:00 stderr F + sleep 1 2026-01-20T10:57:32.416949028+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:32.422131466+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,16sec,0)' ']' 2026-01-20T10:57:32.422131466+00:00 stderr F + sleep 1 2026-01-20T10:57:33.424968566+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:33.429711781+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,15sec,0)' ']' 2026-01-20T10:57:33.429711781+00:00 stderr F + sleep 1 2026-01-20T10:57:34.434915114+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:34.444380015+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 
timer:(timewait,14sec,0)' ']' 2026-01-20T10:57:34.444380015+00:00 stderr F + sleep 1 2026-01-20T10:57:35.448439247+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:35.453134782+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,13sec,0)' ']' 2026-01-20T10:57:35.453134782+00:00 stderr F + sleep 1 2026-01-20T10:57:36.456230699+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:36.462119894+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,12sec,0)' ']' 2026-01-20T10:57:36.462221767+00:00 stderr F + sleep 1 2026-01-20T10:57:37.465431447+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:37.474442365+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,11sec,0)' ']' 2026-01-20T10:57:37.474560318+00:00 stderr F + sleep 1 2026-01-20T10:57:38.478434737+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:38.488311157+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,10sec,0)' ']' 2026-01-20T10:57:38.488502722+00:00 stderr F + sleep 1 2026-01-20T10:57:39.493862950+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:39.500348881+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,9.753ms,0)' ']' 2026-01-20T10:57:39.500423853+00:00 stderr F + sleep 1 2026-01-20T10:57:40.504815655+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:40.511311826+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,8.743ms,0)' ']' 2026-01-20T10:57:40.511411449+00:00 stderr F + sleep 1 2026-01-20T10:57:41.514895706+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:41.519476618+00:00 stderr F + '[' -n 
'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,7.734ms,0)' ']' 2026-01-20T10:57:41.519529639+00:00 stderr F + sleep 1 2026-01-20T10:57:42.524369323+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:42.530365291+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,6.723ms,0)' ']' 2026-01-20T10:57:42.530433753+00:00 stderr F + sleep 1 2026-01-20T10:57:43.533242243+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:43.537916937+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,5.715ms,0)' ']' 2026-01-20T10:57:43.537969138+00:00 stderr F + sleep 1 2026-01-20T10:57:44.541764653+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:44.546843558+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,4.707ms,0)' ']' 2026-01-20T10:57:44.546907140+00:00 stderr F + sleep 1 2026-01-20T10:57:45.550153321+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:45.556439317+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,3.697ms,0)' ']' 2026-01-20T10:57:45.556505668+00:00 stderr F + sleep 1 2026-01-20T10:57:46.559559274+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:46.564322630+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,2.689ms,0)' ']' 2026-01-20T10:57:46.564356191+00:00 stderr F + sleep 1 2026-01-20T10:57:47.567440258+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:47.572658436+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,1.681ms,0)' ']' 2026-01-20T10:57:47.572658436+00:00 stderr F + sleep 1 2026-01-20T10:57:48.575779574+00:00 stderr F ++ ss 
-Htanop '(' sport = 10257 ')' 2026-01-20T10:57:48.581566297+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,673ms,0)' ']' 2026-01-20T10:57:48.581566297+00:00 stderr F + sleep 1 2026-01-20T10:57:49.587567681+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:49.592790939+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:60050 timer:(timewait,,0)' ']' 2026-01-20T10:57:49.592790939+00:00 stderr F + sleep 1 2026-01-20T10:57:50.597191761+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:57:50.605455349+00:00 stderr F + '[' -n '' ']' 2026-01-20T10:57:50.606489057+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']' 2026-01-20T10:57:50.606558499+00:00 stdout F Copying system trust bundle 2026-01-20T10:57:50.606568919+00:00 stderr F + echo 'Copying system trust bundle' 2026-01-20T10:57:50.606568919+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2026-01-20T10:57:50.610545124+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2026-01-20T10:57:50.610908633+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 
--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false 
--feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true 
--pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12
2026-01-20T10:57:50.706224504+00:00 stderr F W0120 10:57:50.705836       1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
2026-01-20T10:57:50.706224504+00:00 stderr F W0120 10:57:50.706206       1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
2026-01-20T10:57:50.706272005+00:00 stderr F W0120 10:57:50.706237       1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
2026-01-20T10:57:50.706279845+00:00 stderr F W0120 10:57:50.706270       1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
2026-01-20T10:57:50.706325076+00:00 stderr F W0120 10:57:50.706305       1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
2026-01-20T10:57:50.706363127+00:00 stderr F W0120 10:57:50.706344       1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
2026-01-20T10:57:50.706401289+00:00 stderr F W0120 10:57:50.706378       1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
2026-01-20T10:57:50.706426320+00:00 stderr F W0120 10:57:50.706412       1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
2026-01-20T10:57:50.706482961+00:00 stderr F W0120 10:57:50.706464       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
2026-01-20T10:57:50.706513272+00:00 stderr F W0120 10:57:50.706495       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
2026-01-20T10:57:50.706547593+00:00 stderr F W0120 10:57:50.706528       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
2026-01-20T10:57:50.706570344+00:00 stderr F W0120 10:57:50.706557       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
2026-01-20T10:57:50.706602195+00:00 stderr F W0120 10:57:50.706584       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
2026-01-20T10:57:50.706622205+00:00 stderr F W0120 10:57:50.706614       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
2026-01-20T10:57:50.706661006+00:00 stderr F W0120 10:57:50.706641       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
2026-01-20T10:57:50.706691247+00:00 stderr F W0120 10:57:50.706673       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
2026-01-20T10:57:50.706724588+00:00 stderr F W0120 10:57:50.706705       1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
2026-01-20T10:57:50.706763839+00:00 stderr F W0120 10:57:50.706739       1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
2026-01-20T10:57:50.706854821+00:00 stderr F W0120 10:57:50.706832       1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
2026-01-20T10:57:50.706908473+00:00 stderr F W0120 10:57:50.706889       1 feature_gate.go:227] unrecognized feature gate: Example
2026-01-20T10:57:50.706938453+00:00 stderr F W0120 10:57:50.706920       1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
2026-01-20T10:57:50.706972764+00:00 stderr F W0120 10:57:50.706954       1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
2026-01-20T10:57:50.707002065+00:00 stderr F W0120 10:57:50.706984       1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
2026-01-20T10:57:50.707030596+00:00 stderr F W0120 10:57:50.707013       1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
2026-01-20T10:57:50.707075437+00:00 stderr F W0120 10:57:50.707041       1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
2026-01-20T10:57:50.707104898+00:00 stderr F W0120 10:57:50.707089       1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
2026-01-20T10:57:50.707135039+00:00 stderr F W0120 10:57:50.707116       1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
2026-01-20T10:57:50.707166519+00:00 stderr F W0120 10:57:50.707148       1 feature_gate.go:227] unrecognized feature gate: GatewayAPI
2026-01-20T10:57:50.707195900+00:00 stderr F W0120 10:57:50.707178       1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
2026-01-20T10:57:50.707224241+00:00 stderr F W0120 10:57:50.707207       1 feature_gate.go:227] unrecognized feature gate: ImagePolicy
2026-01-20T10:57:50.707252152+00:00 stderr F W0120 10:57:50.707234       1 feature_gate.go:227] unrecognized feature gate: InsightsConfig
2026-01-20T10:57:50.707282132+00:00 stderr F W0120 10:57:50.707263       1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
2026-01-20T10:57:50.707305313+00:00 stderr F W0120 10:57:50.707290       1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
2026-01-20T10:57:50.707332914+00:00 stderr F W0120 10:57:50.707315       1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
2026-01-20T10:57:50.707360494+00:00 stderr F W0120 10:57:50.707341       1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
2026-01-20T10:57:50.707383235+00:00 stderr F W0120 10:57:50.707370       1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
2026-01-20T10:57:50.707482678+00:00 stderr F W0120 10:57:50.707461       1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
2026-01-20T10:57:50.707513968+00:00 stderr F W0120 10:57:50.707490       1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
2026-01-20T10:57:50.707543829+00:00 stderr F W0120 10:57:50.707520       1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
2026-01-20T10:57:50.707593360+00:00 stderr F W0120 10:57:50.707575       1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
2026-01-20T10:57:50.707622691+00:00 stderr F W0120 10:57:50.707604       1 feature_gate.go:227] unrecognized feature gate: MetricsServer
2026-01-20T10:57:50.707637402+00:00 stderr F W0120 10:57:50.707631       1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
2026-01-20T10:57:50.707675313+00:00 stderr F W0120 10:57:50.707657       1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
2026-01-20T10:57:50.707709794+00:00 stderr F W0120 10:57:50.707691       1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
2026-01-20T10:57:50.707739324+00:00 stderr F W0120 10:57:50.707721       1 feature_gate.go:227] unrecognized feature gate: NewOLM
2026-01-20T10:57:50.707762185+00:00 stderr F W0120 10:57:50.707748       1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
2026-01-20T10:57:50.707816966+00:00 stderr F W0120 10:57:50.707797       1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
2026-01-20T10:57:50.707867648+00:00 stderr F W0120 10:57:50.707849       1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
2026-01-20T10:57:50.707902959+00:00 stderr F W0120 10:57:50.707885       1 feature_gate.go:227] unrecognized feature gate: PinnedImages
2026-01-20T10:57:50.707944750+00:00 stderr F W0120 10:57:50.707926       1 feature_gate.go:227] unrecognized feature gate: PlatformOperators
2026-01-20T10:57:50.707991041+00:00 stderr F W0120 10:57:50.707972       1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
2026-01-20T10:57:50.708261478+00:00 stderr F W0120 10:57:50.708227       1 feature_gate.go:227] unrecognized feature gate: SignatureStores
2026-01-20T10:57:50.708314199+00:00 stderr F W0120 10:57:50.708288       1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
2026-01-20T10:57:50.708383441+00:00 stderr F W0120 10:57:50.708357       1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
2026-01-20T10:57:50.708419162+00:00 stderr F W0120 10:57:50.708400       1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
2026-01-20T10:57:50.708456133+00:00 stderr F W0120 10:57:50.708435       1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
2026-01-20T10:57:50.708493754+00:00 stderr F W0120 10:57:50.708475       1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
2026-01-20T10:57:50.708533815+00:00 stderr F W0120 10:57:50.708515       1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
2026-01-20T10:57:50.708607817+00:00 stderr F W0120 10:57:50.708581       1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
2026-01-20T10:57:50.708735870+00:00 stderr F I0120 10:57:50.708697       1 flags.go:64] FLAG: --allocate-node-cidrs="false"
2026-01-20T10:57:50.708735870+00:00 stderr F I0120 10:57:50.708713       1 flags.go:64] FLAG: --allow-metric-labels="[]"
2026-01-20T10:57:50.708735870+00:00 stderr F I0120 10:57:50.708720       1 flags.go:64] FLAG: --allow-metric-labels-manifest=""
2026-01-20T10:57:50.708735870+00:00 stderr F I0120 10:57:50.708725       1 flags.go:64] FLAG: --allow-untagged-cloud="false"
2026-01-20T10:57:50.708735870+00:00 stderr F I0120 10:57:50.708730       1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s"
2026-01-20T10:57:50.708752661+00:00 stderr F I0120 10:57:50.708735       1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig"
2026-01-20T10:57:50.708752661+00:00 stderr F I0120 10:57:50.708740       1 flags.go:64] FLAG: --authentication-skip-lookup="false"
2026-01-20T10:57:50.708752661+00:00 stderr F I0120 10:57:50.708743       1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s"
2026-01-20T10:57:50.708752661+00:00 stderr F I0120 10:57:50.708746       1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false"
2026-01-20T10:57:50.708764141+00:00 stderr F I0120 10:57:50.708749       1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
2026-01-20T10:57:50.708764141+00:00 stderr F I0120 10:57:50.708757       1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig"
2026-01-20T10:57:50.708764141+00:00 stderr F I0120 10:57:50.708761       1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
2026-01-20T10:57:50.708779821+00:00 stderr F I0120 10:57:50.708764       1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
2026-01-20T10:57:50.708813762+00:00 stderr F I0120 10:57:50.708767       1 flags.go:64] FLAG: --bind-address="0.0.0.0"
2026-01-20T10:57:50.708813762+00:00 stderr F I0120 10:57:50.708795       1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes"
2026-01-20T10:57:50.708813762+00:00 stderr F I0120 10:57:50.708798       1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator"
2026-01-20T10:57:50.708813762+00:00 stderr F I0120 10:57:50.708802       1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2026-01-20T10:57:50.708813762+00:00 stderr F I0120 10:57:50.708808       1 flags.go:64] FLAG: --cloud-config=""
2026-01-20T10:57:50.708823813+00:00 stderr F I0120 10:57:50.708812       1 flags.go:64] FLAG: --cloud-provider=""
2026-01-20T10:57:50.708831403+00:00 stderr F I0120 10:57:50.708816       1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
2026-01-20T10:57:50.708831403+00:00 stderr F I0120 10:57:50.708826       1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22"
2026-01-20T10:57:50.708839263+00:00 stderr F I0120 10:57:50.708831       1 flags.go:64] FLAG: --cluster-name="crc-d8rkd"
2026-01-20T10:57:50.708839263+00:00 stderr F I0120 10:57:50.708834       1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt"
2026-01-20T10:57:50.708847093+00:00 stderr F I0120 10:57:50.708838       1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s"
2026-01-20T10:57:50.708847093+00:00 stderr F I0120 10:57:50.708842       1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key"
2026-01-20T10:57:50.708855023+00:00 stderr F I0120 10:57:50.708845       1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file=""
2026-01-20T10:57:50.708855023+00:00 stderr F I0120 10:57:50.708849       1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file=""
2026-01-20T10:57:50.708855023+00:00 stderr F I0120 10:57:50.708851       1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file=""
2026-01-20T10:57:50.708863384+00:00 stderr F I0120 10:57:50.708854       1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file=""
2026-01-20T10:57:50.708863384+00:00 stderr F I0120 10:57:50.708857       1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file=""
2026-01-20T10:57:50.708863384+00:00 stderr F I0120 10:57:50.708860       1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file=""
2026-01-20T10:57:50.708871414+00:00 stderr F I0120 10:57:50.708863       1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file=""
2026-01-20T10:57:50.708871414+00:00 stderr F I0120 10:57:50.708866       1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file=""
2026-01-20T10:57:50.708879174+00:00 stderr F I0120 10:57:50.708869       1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5"
2026-01-20T10:57:50.708879174+00:00 stderr F I0120 10:57:50.708873       1 flags.go:64] FLAG: --concurrent-deployment-syncs="5"
2026-01-20T10:57:50.708879174+00:00 stderr F I0120 10:57:50.708876       1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5"
2026-01-20T10:57:50.708887664+00:00 stderr F I0120 10:57:50.708879       1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5"
2026-01-20T10:57:50.708887664+00:00 stderr F I0120 10:57:50.708882       1 flags.go:64] FLAG: --concurrent-gc-syncs="20"
2026-01-20T10:57:50.708895544+00:00 stderr F I0120 10:57:50.708885       1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5"
2026-01-20T10:57:50.708895544+00:00 stderr F I0120 10:57:50.708889       1 flags.go:64] FLAG: --concurrent-job-syncs="5"
2026-01-20T10:57:50.708895544+00:00 stderr F I0120 10:57:50.708892       1 flags.go:64] FLAG: --concurrent-namespace-syncs="10"
2026-01-20T10:57:50.708909815+00:00 stderr F I0120 10:57:50.708895       1 flags.go:64] FLAG: --concurrent-rc-syncs="5"
2026-01-20T10:57:50.708909815+00:00 stderr F I0120 10:57:50.708898       1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5"
2026-01-20T10:57:50.708909815+00:00 stderr F I0120 10:57:50.708901       1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5"
2026-01-20T10:57:50.708909815+00:00 stderr F I0120 10:57:50.708903       1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5"
2026-01-20T10:57:50.708909815+00:00 stderr F I0120 10:57:50.708906       1 flags.go:64] FLAG: --concurrent-service-syncs="1"
2026-01-20T10:57:50.708919065+00:00 stderr F I0120 10:57:50.708910       1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5"
2026-01-20T10:57:50.708919065+00:00 stderr F I0120 10:57:50.708913       1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5"
2026-01-20T10:57:50.708919065+00:00 stderr F I0120 10:57:50.708915       1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5"
2026-01-20T10:57:50.708927585+00:00 stderr F I0120 10:57:50.708918       1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5"
2026-01-20T10:57:50.708927585+00:00 stderr F I0120 10:57:50.708922       1 flags.go:64] FLAG: --configure-cloud-routes="true"
2026-01-20T10:57:50.708935176+00:00 stderr F I0120 10:57:50.708925       1 flags.go:64] FLAG: --contention-profiling="false"
2026-01-20T10:57:50.708935176+00:00 stderr F I0120 10:57:50.708929       1 flags.go:64] FLAG: --controller-start-interval="0s"
2026-01-20T10:57:50.708942736+00:00 stderr F I0120 10:57:50.708932       1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]"
2026-01-20T10:57:50.708942736+00:00 stderr F I0120 10:57:50.708938       1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false"
2026-01-20T10:57:50.708950756+00:00 stderr F I0120 10:57:50.708941       1 flags.go:64] FLAG: --disabled-metrics="[]"
2026-01-20T10:57:50.708950756+00:00 stderr F I0120 10:57:50.708945       1 flags.go:64] FLAG: --enable-dynamic-provisioning="true"
2026-01-20T10:57:50.708958516+00:00 stderr F I0120 10:57:50.708948       1 flags.go:64] FLAG: --enable-garbage-collector="true"
2026-01-20T10:57:50.708958516+00:00 stderr F I0120 10:57:50.708952       1 flags.go:64] FLAG: --enable-hostpath-provisioner="false"
2026-01-20T10:57:50.708958516+00:00 stderr F I0120 10:57:50.708955       1 flags.go:64] FLAG: --enable-leader-migration="false"
2026-01-20T10:57:50.708967006+00:00 stderr F I0120 10:57:50.708958       1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s"
2026-01-20T10:57:50.708967006+00:00 stderr F I0120 10:57:50.708962       1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s"
2026-01-20T10:57:50.708974547+00:00 stderr F I0120 10:57:50.708964       1 flags.go:64] FLAG: --external-cloud-volume-plugin=""
2026-01-20T10:57:50.709012948+00:00 stderr F I0120 10:57:50.708968       1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false"
2026-01-20T10:57:50.709012948+00:00 stderr F I0120 10:57:50.708999       1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
2026-01-20T10:57:50.709012948+00:00 stderr F I0120 10:57:50.709005       1 flags.go:64] FLAG: --help="false"
2026-01-20T10:57:50.709012948+00:00 stderr F I0120 10:57:50.709008       1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s"
2026-01-20T10:57:50.709025698+00:00 stderr F I0120 10:57:50.709012       1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s"
2026-01-20T10:57:50.709025698+00:00 stderr F I0120 10:57:50.709015       1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s"
2026-01-20T10:57:50.709025698+00:00 stderr F I0120 10:57:50.709018       1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s"
2026-01-20T10:57:50.709025698+00:00 stderr F I0120 10:57:50.709021       1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s"
2026-01-20T10:57:50.709034338+00:00 stderr F I0120 10:57:50.709024       1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1"
2026-01-20T10:57:50.709034338+00:00 stderr F I0120 10:57:50.709029       1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s"
2026-01-20T10:57:50.709041808+00:00 stderr F I0120 10:57:50.709033       1 flags.go:64] FLAG: --http2-max-streams-per-connection="0"
2026-01-20T10:57:50.709041808+00:00 stderr F I0120 10:57:50.709037       1 flags.go:64] FLAG: --kube-api-burst="300"
2026-01-20T10:57:50.709049518+00:00 stderr F I0120 10:57:50.709040       1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
2026-01-20T10:57:50.709049518+00:00 stderr F I0120 10:57:50.709044       1 flags.go:64] FLAG: --kube-api-qps="150"
2026-01-20T10:57:50.709069469+00:00 stderr F I0120 10:57:50.709048       1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig"
2026-01-20T10:57:50.709069469+00:00 stderr F I0120 10:57:50.709053       1 flags.go:64] FLAG: --large-cluster-size-threshold="50"
2026-01-20T10:57:50.709077989+00:00 stderr F I0120 10:57:50.709072       1 flags.go:64] FLAG: --leader-elect="true"
2026-01-20T10:57:50.709089870+00:00 stderr F I0120 10:57:50.709076       1 flags.go:64] FLAG: --leader-elect-lease-duration="15s"
2026-01-20T10:57:50.709089870+00:00 stderr F I0120 10:57:50.709080       1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s"
2026-01-20T10:57:50.709089870+00:00 stderr F I0120 10:57:50.709083       1 flags.go:64] FLAG: --leader-elect-resource-lock="leases"
2026-01-20T10:57:50.709089870+00:00 stderr F I0120 10:57:50.709086       1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager"
2026-01-20T10:57:50.709099230+00:00 stderr F I0120 10:57:50.709090       1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
2026-01-20T10:57:50.709099230+00:00 stderr F I0120 10:57:50.709093       1 flags.go:64] FLAG: --leader-elect-retry-period="3s"
2026-01-20T10:57:50.709099230+00:00 stderr F I0120 10:57:50.709096       1 flags.go:64] FLAG: --leader-migration-config=""
2026-01-20T10:57:50.709107750+00:00 stderr F I0120 10:57:50.709099       1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s"
2026-01-20T10:57:50.709107750+00:00 stderr F I0120 10:57:50.709103       1 flags.go:64] FLAG: --log-flush-frequency="5s"
2026-01-20T10:57:50.709116410+00:00 stderr F I0120 10:57:50.709107       1 flags.go:64] FLAG: --log-json-info-buffer-size="0"
2026-01-20T10:57:50.709116410+00:00 stderr F I0120 10:57:50.709113       1 flags.go:64] FLAG: --log-json-split-stream="false"
2026-01-20T10:57:50.709124880+00:00 stderr F I0120 10:57:50.709116       1 flags.go:64] FLAG: --logging-format="text"
2026-01-20T10:57:50.709124880+00:00 stderr F I0120 10:57:50.709119       1 flags.go:64] FLAG: --master=""
2026-01-20T10:57:50.709133031+00:00 stderr F I0120 10:57:50.709122       1 flags.go:64] FLAG: --max-endpoints-per-slice="100"
2026-01-20T10:57:50.709133031+00:00 stderr F I0120 10:57:50.709126       1 flags.go:64] FLAG: --min-resync-period="12h0m0s"
2026-01-20T10:57:50.709133031+00:00 stderr F I0120 10:57:50.709129       1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5"
2026-01-20T10:57:50.709141891+00:00 stderr F I0120 10:57:50.709133       1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s"
2026-01-20T10:57:50.709141891+00:00 stderr F I0120 10:57:50.709136       1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000"
2026-01-20T10:57:50.709154091+00:00 stderr F I0120 10:57:50.709139       1 flags.go:64] FLAG: --namespace-sync-period="5m0s"
2026-01-20T10:57:50.709154091+00:00 stderr F I0120 10:57:50.709143       1 flags.go:64] FLAG: --node-cidr-mask-size="0"
2026-01-20T10:57:50.709154091+00:00 stderr F I0120 10:57:50.709147       1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0"
2026-01-20T10:57:50.709154091+00:00 stderr F I0120 10:57:50.709149       1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0"
2026-01-20T10:57:50.709162561+00:00 stderr F I0120 10:57:50.709152       1 flags.go:64] FLAG: --node-eviction-rate="0.1"
2026-01-20T10:57:50.709162561+00:00 stderr F I0120 10:57:50.709156       1 flags.go:64] FLAG: --node-monitor-grace-period="40s"
2026-01-20T10:57:50.709162561+00:00 stderr F I0120 10:57:50.709159       1 flags.go:64] FLAG: --node-monitor-period="5s"
2026-01-20T10:57:50.709171122+00:00 stderr F I0120 10:57:50.709162       1 flags.go:64] FLAG: --node-startup-grace-period="1m0s"
2026-01-20T10:57:50.709171122+00:00 stderr F I0120 10:57:50.709165       1 flags.go:64] FLAG: --node-sync-period="0s"
2026-01-20T10:57:50.709178992+00:00 stderr F I0120 10:57:50.709168       1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml"
2026-01-20T10:57:50.709178992+00:00 stderr F I0120 10:57:50.709172       1 flags.go:64] FLAG: --permit-address-sharing="false"
2026-01-20T10:57:50.709178992+00:00 stderr F I0120 10:57:50.709175       1 flags.go:64] FLAG: --permit-port-sharing="false"
2026-01-20T10:57:50.709187752+00:00 stderr F I0120 10:57:50.709178       1 flags.go:64] FLAG: --profiling="true"
2026-01-20T10:57:50.709187752+00:00 stderr F I0120 10:57:50.709182       1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30"
2026-01-20T10:57:50.709187752+00:00 stderr F I0120 10:57:50.709184       1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60"
2026-01-20T10:57:50.709196143+00:00 stderr F I0120 10:57:50.709188       1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300"
2026-01-20T10:57:50.709196143+00:00 stderr F I0120 10:57:50.709191       1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml"
2026-01-20T10:57:50.709204024+00:00 stderr F I0120 10:57:50.709195       1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml"
2026-01-20T10:57:50.709204024+00:00 stderr F I0120 10:57:50.709199       1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30"
2026-01-20T10:57:50.709211774+00:00 stderr F I0120 10:57:50.709202       1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s"
2026-01-20T10:57:50.709211774+00:00 stderr F I0120 10:57:50.709206       1 flags.go:64] FLAG: --requestheader-allowed-names="[]"
2026-01-20T10:57:50.709219664+00:00 stderr F I0120 10:57:50.709210       1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2026-01-20T10:57:50.709219664+00:00 stderr F I0120 10:57:50.709214       1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
2026-01-20T10:57:50.709227674+00:00 stderr F I0120 10:57:50.709219       1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
2026-01-20T10:57:50.709234994+00:00 stderr F I0120 10:57:50.709223       1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
2026-01-20T10:57:50.709234994+00:00 stderr F I0120 10:57:50.709229       1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s"
2026-01-20T10:57:50.709242705+00:00 stderr F I0120 10:57:50.709232       1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt"
2026-01-20T10:57:50.709242705+00:00 stderr F I0120 10:57:50.709238       1 flags.go:64] FLAG: --route-reconciliation-period="10s"
2026-01-20T10:57:50.709254045+00:00 stderr F I0120 10:57:50.709242       1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01"
2026-01-20T10:57:50.709254045+00:00 stderr F I0120 10:57:50.709245       1 flags.go:64] FLAG: --secure-port="10257"
2026-01-20T10:57:50.709254045+00:00 stderr F I0120 10:57:50.709248       1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key"
2026-01-20T10:57:50.709262445+00:00 stderr F I0120 10:57:50.709252       1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23"
2026-01-20T10:57:50.709262445+00:00 stderr F I0120 10:57:50.709256       1 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
2026-01-20T10:57:50.709262445+00:00 stderr F I0120 10:57:50.709258       1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500"
2026-01-20T10:57:50.709270295+00:00 stderr F I0120 10:57:50.709261       1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt"
2026-01-20T10:57:50.709277955+00:00 stderr F I0120 10:57:50.709265       1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]"
2026-01-20T10:57:50.709277955+00:00 stderr F I0120 10:57:50.709273       1 flags.go:64] FLAG: --tls-min-version="VersionTLS12"
2026-01-20T10:57:50.709287096+00:00 stderr F I0120 10:57:50.709277       1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2026-01-20T10:57:50.709287096+00:00 stderr F I0120 10:57:50.709281       1 flags.go:64] FLAG: --tls-sni-cert-key="[]"
2026-01-20T10:57:50.709294866+00:00 stderr F I0120 10:57:50.709287       1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55"
2026-01-20T10:57:50.709294866+00:00 stderr F I0120 10:57:50.709290       1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false"
2026-01-20T10:57:50.709302586+00:00 stderr F I0120 10:57:50.709294       1 flags.go:64] FLAG: --use-service-account-credentials="true"
2026-01-20T10:57:50.709302586+00:00 stderr F I0120 10:57:50.709297       1 flags.go:64] FLAG: --v="2"
2026-01-20T10:57:50.709310336+00:00 stderr F I0120 10:57:50.709301       1 flags.go:64] FLAG: --version="false"
2026-01-20T10:57:50.709310336+00:00 stderr F I0120 10:57:50.709306       1 flags.go:64] FLAG: --vmodule=""
2026-01-20T10:57:50.709318617+00:00 stderr F I0120 10:57:50.709311       1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true"
2026-01-20T10:57:50.709318617+00:00 stderr F I0120 10:57:50.709314       1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]"
2026-01-20T10:57:50.712473010+00:00 stderr F I0120 10:57:50.712433       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2026-01-20T10:57:50.965601384+00:00 stderr F I0120 10:57:50.965529       1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2026-01-20T10:57:50.965632364+00:00 stderr F I0120 10:57:50.965612       1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2026-01-20T10:57:50.967873664+00:00 stderr F I0120 10:57:50.967820       1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3"
2026-01-20T10:57:50.967946566+00:00 stderr F I0120 10:57:50.967927       1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2026-01-20T10:57:50.969928638+00:00 stderr F I0120 10:57:50.969865       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2026-01-20T10:57:50.969928638+00:00 stderr F I0120 10:57:50.969897       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2026-01-20T10:57:50.970677178+00:00 stderr F I0120 10:57:50.970615       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2026-01-20T10:57:50.970880253+00:00 stderr F I0120 10:57:50.970827       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:57:50.970790271 +0000 UTC))"
2026-01-20T10:57:50.970954995+00:00 stderr F I0120 10:57:50.970938       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:57:50.970911344 +0000 UTC))"
2026-01-20T10:57:50.971009887+00:00 stderr F I0120 10:57:50.970996       1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:57:50.970976466 +0000 UTC))"
2026-01-20T10:57:50.971102989+00:00 stderr F I0120 10:57:50.971081       1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:57:50.971029837 +0000 UTC))"
2026-01-20T10:57:50.971167211+00:00 stderr F I0120 10:57:50.971152       1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:57:50.97113326 +0000 UTC))"
2026-01-20T10:57:50.971223682+00:00 stderr F I0120 10:57:50.971209       1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:50.971189321 +0000 UTC))"
2026-01-20T10:57:50.971274654+00:00 stderr F I0120 10:57:50.971261       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:50.971243623 +0000 UTC))"
2026-01-20T10:57:50.971339065+00:00 stderr F I0120 10:57:50.971325       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:50.971305724 +0000 UTC))"
2026-01-20T10:57:50.971405007+00:00 stderr F I0120 10:57:50.971390       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:57:50.971372496 +0000 UTC))"
2026-01-20T10:57:50.971458948+00:00 stderr F I0120 10:57:50.971445       1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:50.971427518 +0000 UTC))"
2026-01-20T10:57:50.971512290+00:00 stderr F I0120 10:57:50.971498       1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:57:50.971480989 +0000 UTC))"
2026-01-20T10:57:50.971935751+00:00 stderr F I0120 10:57:50.971918       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2026-01-20 10:57:50.97189513 +0000 UTC))"
2026-01-20T10:57:50.972422784+00:00 stderr F I0120 10:57:50.972386       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906670\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906670\" (2026-01-20 09:57:50 +0000 UTC to 2027-01-20 09:57:50 +0000 UTC (now=2026-01-20 10:57:50.972364692 +0000 UTC))"
2026-01-20T10:57:50.972497155+00:00 stderr F I0120 10:57:50.972480       1 secure_serving.go:213] Serving securely on [::]:10257
2026-01-20T10:57:50.972549778+00:00 stderr F I0120 10:57:50.972507       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2026-01-20T10:57:50.972987309+00:00 stderr F I0120 10:57:50.972965       1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...
2026-01-20T10:58:07.114325285+00:00 stderr F I0120 10:58:07.114262       1 leaderelection.go:260] successfully acquired lease kube-system/kube-controller-manager
2026-01-20T10:58:07.114569361+00:00 stderr F I0120 10:58:07.114539       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="crc_efd38a5e-3eda-44e4-baec-2c773be24890 became leader"
2026-01-20T10:58:07.116148840+00:00 stderr F I0120 10:58:07.116125       1 controllermanager.go:756] "Starting controller" controller="serviceaccount-token-controller"
2026-01-20T10:58:07.117463744+00:00 stderr F I0120 10:58:07.117442       1 controllermanager.go:787] "Started controller" controller="serviceaccount-token-controller"
2026-01-20T10:58:07.117506245+00:00 stderr F I0120 10:58:07.117496       1 controllermanager.go:756] "Starting controller" controller="replicaset-controller"
2026-01-20T10:58:07.117546346+00:00 stderr F I0120 10:58:07.117450       1 shared_informer.go:311] Waiting for caches to sync for tokens
2026-01-20T10:58:07.120648535+00:00 stderr F I0120 10:58:07.120601       1 controllermanager.go:787] "Started controller" controller="replicaset-controller"
2026-01-20T10:58:07.120648535+00:00 stderr F I0120 10:58:07.120639       1 controllermanager.go:756] "Starting controller" controller="root-ca-certificate-publisher-controller"
2026-01-20T10:58:07.120810789+00:00 stderr F I0120 10:58:07.120783       1 replica_set.go:214] "Starting controller" name="replicaset"
2026-01-20T10:58:07.120810789+00:00 stderr F I0120 10:58:07.120805       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
2026-01-20T10:58:07.124816731+00:00 stderr F I0120 10:58:07.124766       1 controllermanager.go:787] "Started controller" controller="root-ca-certificate-publisher-controller"
2026-01-20T10:58:07.124816731+00:00 stderr F I0120 10:58:07.124810       1 controllermanager.go:756] "Starting controller" controller="service-ca-certificate-publisher-controller"
2026-01-20T10:58:07.124969984+00:00 stderr F I0120 10:58:07.124943       1 publisher.go:102] "Starting root CA cert publisher controller"
2026-01-20T10:58:07.125009795+00:00 stderr F I0120 10:58:07.124996       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
2026-01-20T10:58:07.134896527+00:00 stderr F I0120 10:58:07.134820       1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2026-01-20T10:58:07.139217246+00:00 stderr F I0120 10:58:07.139166       1 controllermanager.go:787] "Started controller" controller="service-ca-certificate-publisher-controller"
2026-01-20T10:58:07.139245467+00:00 stderr F I0120 10:58:07.139225       1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
2026-01-20T10:58:07.139245467+00:00 stderr F I0120 10:58:07.139238       1 controllermanager.go:750] "Warning: controller is disabled" controller="bootstrap-signer-controller"
2026-01-20T10:58:07.139272958+00:00 stderr F I0120 10:58:07.139248       1 controllermanager.go:756] "Starting controller" controller="node-ipam-controller"
2026-01-20T10:58:07.139272958+00:00 stderr F I0120 10:58:07.139258       1
controllermanager.go:765] "Warning: skipping controller" controller="node-ipam-controller" 2026-01-20T10:58:07.139272958+00:00 stderr F I0120 10:58:07.139267 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-binder-controller" 2026-01-20T10:58:07.139590546+00:00 stderr F I0120 10:58:07.139552 1 publisher.go:80] Starting service CA certificate configmap publisher 2026-01-20T10:58:07.140379226+00:00 stderr F I0120 10:58:07.140337 1 shared_informer.go:311] Waiting for caches to sync for crt configmap 2026-01-20T10:58:07.152935895+00:00 stderr F I0120 10:58:07.152306 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path" 2026-01-20T10:58:07.152935895+00:00 stderr F I0120 10:58:07.152898 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs" 2026-01-20T10:58:07.152989126+00:00 stderr F I0120 10:58:07.152936 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2026-01-20T10:58:07.152989126+00:00 stderr F I0120 10:58:07.152947 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2026-01-20T10:58:07.152989126+00:00 stderr F I0120 10:58:07.152957 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2026-01-20T10:58:07.152989126+00:00 stderr F I0120 10:58:07.152968 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" 2026-01-20T10:58:07.153030277+00:00 stderr F I0120 10:58:07.152993 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" 2026-01-20T10:58:07.153184181+00:00 stderr F I0120 10:58:07.153140 1 controllermanager.go:787] "Started controller" controller="persistentvolume-binder-controller" 2026-01-20T10:58:07.153224312+00:00 stderr F I0120 10:58:07.153198 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-attach-detach-controller" 2026-01-20T10:58:07.153437108+00:00 stderr F I0120 10:58:07.153394 1 pv_controller_base.go:319] "Starting persistent volume 
controller" 2026-01-20T10:58:07.153437108+00:00 stderr F I0120 10:58:07.153430 1 shared_informer.go:311] Waiting for caches to sync for persistent volume 2026-01-20T10:58:07.158370283+00:00 stderr F W0120 10:58:07.158284 1 probe.go:268] Flexvolume plugin directory at /etc/kubernetes/kubelet-plugins/volume/exec does not exist. Recreating. 2026-01-20T10:58:07.158976169+00:00 stderr F I0120 10:58:07.158938 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2026-01-20T10:58:07.158976169+00:00 stderr F I0120 10:58:07.158956 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2026-01-20T10:58:07.158976169+00:00 stderr F I0120 10:58:07.158964 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2026-01-20T10:58:07.158993479+00:00 stderr F I0120 10:58:07.158978 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" 2026-01-20T10:58:07.159041651+00:00 stderr F I0120 10:58:07.159017 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" 2026-01-20T10:58:07.159192284+00:00 stderr F I0120 10:58:07.159164 1 controllermanager.go:787] "Started controller" controller="persistentvolume-attach-detach-controller" 2026-01-20T10:58:07.159192284+00:00 stderr F I0120 10:58:07.159186 1 controllermanager.go:756] "Starting controller" controller="taint-eviction-controller" 2026-01-20T10:58:07.159489032+00:00 stderr F I0120 10:58:07.159446 1 attach_detach_controller.go:337] "Starting attach detach controller" 2026-01-20T10:58:07.159651676+00:00 stderr F I0120 10:58:07.159626 1 shared_informer.go:311] Waiting for caches to sync for attach detach 2026-01-20T10:58:07.160572839+00:00 stderr F I0120 10:58:07.160517 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.163271368+00:00 stderr F I0120 10:58:07.163233 1 controllermanager.go:787] "Started controller" controller="taint-eviction-controller" 
2026-01-20T10:58:07.163296918+00:00 stderr F I0120 10:58:07.163272 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"] 2026-01-20T10:58:07.163296918+00:00 stderr F I0120 10:58:07.163285 1 controllermanager.go:756] "Starting controller" controller="legacy-serviceaccount-token-cleaner-controller" 2026-01-20T10:58:07.163350250+00:00 stderr F I0120 10:58:07.163314 1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller" 2026-01-20T10:58:07.163386840+00:00 stderr F I0120 10:58:07.163363 1 taint_eviction.go:291] "Sending events to api server" 2026-01-20T10:58:07.163429171+00:00 stderr F I0120 10:58:07.163406 1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller 2026-01-20T10:58:07.165599717+00:00 stderr F I0120 10:58:07.165573 1 controllermanager.go:787] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller" 2026-01-20T10:58:07.165599717+00:00 stderr F I0120 10:58:07.165588 1 controllermanager.go:756] "Starting controller" controller="namespace-controller" 2026-01-20T10:58:07.165722410+00:00 stderr F I0120 10:58:07.165696 1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" 2026-01-20T10:58:07.165762121+00:00 stderr F I0120 10:58:07.165748 1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner 2026-01-20T10:58:07.201691944+00:00 stderr F I0120 10:58:07.201622 1 controllermanager.go:787] "Started controller" controller="namespace-controller" 2026-01-20T10:58:07.201691944+00:00 stderr F I0120 10:58:07.201657 1 controllermanager.go:756] "Starting controller" controller="garbage-collector-controller" 2026-01-20T10:58:07.201834697+00:00 stderr F I0120 10:58:07.201807 1 namespace_controller.go:197] "Starting namespace controller" 2026-01-20T10:58:07.201887279+00:00 stderr F I0120 10:58:07.201869 1 
shared_informer.go:311] Waiting for caches to sync for namespace 2026-01-20T10:58:07.210273282+00:00 stderr F I0120 10:58:07.210220 1 garbagecollector.go:155] "Starting controller" controller="garbagecollector" 2026-01-20T10:58:07.210273282+00:00 stderr F I0120 10:58:07.210249 1 controllermanager.go:787] "Started controller" controller="garbage-collector-controller" 2026-01-20T10:58:07.210308353+00:00 stderr F I0120 10:58:07.210283 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-cleaner-controller" 2026-01-20T10:58:07.210308353+00:00 stderr F I0120 10:58:07.210283 1 graph_builder.go:302] "Running" component="GraphBuilder" 2026-01-20T10:58:07.210391255+00:00 stderr F I0120 10:58:07.210260 1 shared_informer.go:311] Waiting for caches to sync for garbage collector 2026-01-20T10:58:07.212404276+00:00 stderr F I0120 10:58:07.212366 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-cleaner-controller" 2026-01-20T10:58:07.212404276+00:00 stderr F I0120 10:58:07.212393 1 controllermanager.go:756] "Starting controller" controller="node-lifecycle-controller" 2026-01-20T10:58:07.212514189+00:00 stderr F I0120 10:58:07.212490 1 cleaner.go:83] "Starting CSR cleaner controller" 2026-01-20T10:58:07.215448653+00:00 stderr F I0120 10:58:07.215411 1 node_lifecycle_controller.go:425] "Controller will reconcile labels" 2026-01-20T10:58:07.215531865+00:00 stderr F I0120 10:58:07.215503 1 controllermanager.go:787] "Started controller" controller="node-lifecycle-controller" 2026-01-20T10:58:07.215531865+00:00 stderr F I0120 10:58:07.215526 1 controllermanager.go:756] "Starting controller" controller="clusterrole-aggregation-controller" 2026-01-20T10:58:07.215641608+00:00 stderr F I0120 10:58:07.215615 1 node_lifecycle_controller.go:459] "Sending events to api server" 2026-01-20T10:58:07.215763911+00:00 stderr F I0120 10:58:07.215740 1 node_lifecycle_controller.go:470] "Starting node controller" 
2026-01-20T10:58:07.215816042+00:00 stderr F I0120 10:58:07.215798 1 shared_informer.go:311] Waiting for caches to sync for taint 2026-01-20T10:58:07.218237114+00:00 stderr F I0120 10:58:07.218179 1 shared_informer.go:318] Caches are synced for tokens 2026-01-20T10:58:07.218510921+00:00 stderr F I0120 10:58:07.218479 1 controllermanager.go:787] "Started controller" controller="clusterrole-aggregation-controller" 2026-01-20T10:58:07.218510921+00:00 stderr F I0120 10:58:07.218494 1 controllermanager.go:756] "Starting controller" controller="ttl-after-finished-controller" 2026-01-20T10:58:07.218616864+00:00 stderr F I0120 10:58:07.218593 1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" 2026-01-20T10:58:07.218672385+00:00 stderr F I0120 10:58:07.218654 1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator 2026-01-20T10:58:07.220569984+00:00 stderr F I0120 10:58:07.220531 1 controllermanager.go:787] "Started controller" controller="ttl-after-finished-controller" 2026-01-20T10:58:07.220569984+00:00 stderr F I0120 10:58:07.220565 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"] 2026-01-20T10:58:07.220596204+00:00 stderr F I0120 10:58:07.220575 1 controllermanager.go:756] "Starting controller" controller="resourcequota-controller" 2026-01-20T10:58:07.220732078+00:00 stderr F I0120 10:58:07.220684 1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" 2026-01-20T10:58:07.220732078+00:00 stderr F I0120 10:58:07.220707 1 shared_informer.go:311] Waiting for caches to sync for TTL after finished 2026-01-20T10:58:07.235350309+00:00 stderr P I0120 10:58:07.235306 1 garbagecollector.go:241] "syncing garbage collector with updated resources from discovery" attempt=1 diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, 
Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services acme.cert-manager.io/v1, Resource=challenges acme.cert-manager.io/v1, Resource=orders admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apiserver.openshift.io/v1, Resource=apirequestcounts apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1, Resource=clusterautoscalers autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds cert-manager.io/v1, Resource=certificaterequests cert-manager.io/v1, Resource=certificates cert-manager.io/v1, Resource=clusterissuers cert-manager.io/v1, Resource=issuers certificates.k8s.io/v1, Resource=certificatesigningrequests config.openshift.io/v1, Resource=apiservers config.openshift.io/v1, Resource=authentications config.openshift.io/v1, Resource=builds config.openshift.io/v1, Resource=clusteroperators config.openshift.io/v1, Resource=clusterversions config.openshift.io/v1, Resource=consoles config.openshift.io/v1, Resource=dnses config.openshift.io/v1, Resource=featuregates config.openshift.io/v1, Resource=imagecontentpolicies config.openshift.io/v1, Resource=imagedigestmirrorsets config.openshift.io/v1, 
Resource=images config.openshift.io/v1, Resource=imagetagmirrorsets config.openshift.io/v1, Resource=infrastructures config.openshift.io/v1, Resource=ingresses config.openshift.io/v1, Resource=networks config.openshift.io/v1, Resource=nodes config.openshift.io/v1, Resource=oauths config.openshift.io/v1, Resource=operatorhubs config.openshift.io/v1, Resource=projects config.openshift.io/v1, Resource=proxies config.openshift.io/v1, Resource=schedulers console.openshift.io/v1, Resource=consoleclidownloads console.openshift.io/v1, Resource=consoleexternalloglinks console.openshift.io/v1, Resource=consolelinks console.openshift.io/v1, Resource=consolenotifications console.openshift.io/v1, Resource=consoleplugins console.openshift.io/v1, Resource=consolequickstarts console.openshift.io/v1, Resource=consolesamples console.openshift.io/v1, Resource=consoleyamlsamples controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1, Resource=prioritylevelconfigurations helm.openshift.io/v1beta1, Resource=helmchartrepositories helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=images image.openshift.io/v1, Resource=imagestreams imageregistry.operator.openshift.io/v1, Resource=configs imageregistry.operator.openshift.io/v1, Resource=imagepruners infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=adminpolicybasedexternalroutes k8s.ovn.org/v1, Resource=egressfirewalls 
k8s.ovn.org/v1, Resource=egressips k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets machineconfiguration.openshift.io/v1, Resource=containerruntimeconfigs machineconfiguration.openshift.io/v1, Resource=controllerconfigs machineconfiguration.openshift.io/v1, Resource=kubeletconfigs machineconfiguration.openshift.io/v1, Resource=machineconfigpools machineconfiguration.openshift.io/v1, Resource=machineconfigs migration.k8s.io/v1alpha1, Resource=storagestates migration.k8s.io/v1alpha1, Resource=storageversionmigrations monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses oauth.openshift.io/v1, Resource=oauthaccesstokens oauth.openshift.io/v1, Resource=oauthauthorizetokens oauth.openshift.io/v1, Resource=oauthclientauthorizations oauth.openshift.io/v1, Resource=oauthclients oauth.openshift.io/v1, Resource=useroauthaccesstokens operator.openshift.io/v1, Resource=authentications operator.openshift.io/v1, Resource=clustercsidrivers operator.openshift.io/v1, Resource=configs operator.openshift.io/v1, 
Resource=consoles operator.openshift.io/v1, Resource=csisnapshotcontrollers operator.openshift.io/v1, Resource=dnses operator.openshift.io/v1, Resource=etcds operator.openshift.io/v1, Resource=ingresscontrollers operator.openshift.io/v1, Resource=kubeapiservers operator.openshift.io/v1, Resource=kubecontrollermanagers operator.openshift.io/v1, Resource=kubeschedulers operator.openshift.io/v1, Resource=kubestorageversionmigrators operator.openshift.io/v1, Resource=machineconfigurations operator.openshift.io/v1, Resource=networks operator.openshift.io/v1, Resource=openshiftapiservers operator.openshift.io/v1, Resource=openshiftcontrollermanagers operator.openshift.io/v1, Resource=servicecas operator.openshift.io/v1, Resource=storages operator.openshift.io/v1alpha1, Resource=imagecontentsourcepolicies operators.coreos.com/v1, Resource=olmconfigs operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1, Resource=operators operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy.networking.k8s.io/v1alpha1, Resource=adminnetworkpolicies policy.networking.k8s.io/v1alpha1, Resource=baselineadminnetworkpolicies policy/v1, Resource=poddisruptionbudgets project.openshift.io/v1, Resource=projects quota.openshift.io/v1, Resource=clusterresourcequotas rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes samples.operator.openshift.io/v1, Resource=configs scheduling.k8s.io/v1, Resource=priorityclasses security.internal.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=rangeallocations security.openshift. 
2026-01-20T10:58:07.235396870+00:00 stderr F io/v1, Resource=securitycontextconstraints storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments template.openshift.io/v1, Resource=brokertemplateinstances template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates user.openshift.io/v1, Resource=groups user.openshift.io/v1, Resource=identities user.openshift.io/v1, Resource=users whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2026-01-20T10:58:07.251289403+00:00 stderr F I0120 10:58:07.251254 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2026-01-20T10:58:07.251386396+00:00 stderr F I0120 10:58:07.251374 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io" 2026-01-20T10:58:07.251438187+00:00 stderr F I0120 10:58:07.251428 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com" 2026-01-20T10:58:07.251491999+00:00 stderr F I0120 10:58:07.251481 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io" 2026-01-20T10:58:07.251540920+00:00 stderr F I0120 10:58:07.251531 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io" 2026-01-20T10:58:07.251584751+00:00 stderr F I0120 10:58:07.251575 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2026-01-20T10:58:07.251626662+00:00 stderr F I0120 10:58:07.251616 1 resource_quota_monitor.go:224] "QuotaMonitor created object count 
evaluator" resource="controlplanemachinesets.machine.openshift.io" 2026-01-20T10:58:07.251848148+00:00 stderr F I0120 10:58:07.251833 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2026-01-20T10:58:07.251890429+00:00 stderr F I0120 10:58:07.251880 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch" 2026-01-20T10:58:07.251953640+00:00 stderr F I0120 10:58:07.251943 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io" 2026-01-20T10:58:07.252019162+00:00 stderr F I0120 10:58:07.252008 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org" 2026-01-20T10:58:07.252084844+00:00 stderr F I0120 10:58:07.252051 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io" 2026-01-20T10:58:07.252428512+00:00 stderr F I0120 10:58:07.252414 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts" 2026-01-20T10:58:07.252509214+00:00 stderr F I0120 10:58:07.252498 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="certificaterequests.cert-manager.io" 2026-01-20T10:58:07.252556096+00:00 stderr F I0120 10:58:07.252545 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com" 2026-01-20T10:58:07.252634227+00:00 stderr F I0120 10:58:07.252620 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io" 2026-01-20T10:58:07.252714700+00:00 stderr F I0120 10:58:07.252701 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io" 2026-01-20T10:58:07.252787081+00:00 stderr F 
I0120 10:58:07.252775 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com" 2026-01-20T10:58:07.252884614+00:00 stderr F I0120 10:58:07.252873 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com" 2026-01-20T10:58:07.252921805+00:00 stderr F I0120 10:58:07.252912 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps" 2026-01-20T10:58:07.252989776+00:00 stderr F I0120 10:58:07.252979 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch" 2026-01-20T10:58:07.253047218+00:00 stderr F I0120 10:58:07.253036 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io" 2026-01-20T10:58:07.253127840+00:00 stderr F I0120 10:58:07.253115 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io" 2026-01-20T10:58:07.253175111+00:00 stderr F I0120 10:58:07.253165 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org" 2026-01-20T10:58:07.253229902+00:00 stderr F I0120 10:58:07.253216 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com" 2026-01-20T10:58:07.253321155+00:00 stderr F I0120 10:58:07.253310 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2026-01-20T10:58:07.253432798+00:00 stderr F I0120 10:58:07.253421 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io" 2026-01-20T10:58:07.253501219+00:00 stderr F I0120 10:58:07.253491 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" 
resource="ipaddresses.ipam.cluster.x-k8s.io" 2026-01-20T10:58:07.253536380+00:00 stderr F I0120 10:58:07.253527 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2026-01-20T10:58:07.253858198+00:00 stderr F I0120 10:58:07.253846 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2026-01-20T10:58:07.253925920+00:00 stderr F I0120 10:58:07.253914 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2026-01-20T10:58:07.254001992+00:00 stderr F I0120 10:58:07.253989 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org" 2026-01-20T10:58:07.254106104+00:00 stderr F I0120 10:58:07.254094 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com" 2026-01-20T10:58:07.254202858+00:00 stderr F I0120 10:58:07.254188 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com" 2026-01-20T10:58:07.254765942+00:00 stderr F I0120 10:58:07.254747 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps" 2026-01-20T10:58:07.254821703+00:00 stderr F I0120 10:58:07.254811 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" 2026-01-20T10:58:07.254863364+00:00 stderr F I0120 10:58:07.254853 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io" 2026-01-20T10:58:07.254900985+00:00 stderr F I0120 10:58:07.254891 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io" 
2026-01-20T10:58:07.254941676+00:00 stderr F I0120 10:58:07.254932 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="imagestreams.image.openshift.io" 2026-01-20T10:58:07.254984647+00:00 stderr F I0120 10:58:07.254975 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io" 2026-01-20T10:58:07.255031309+00:00 stderr F I0120 10:58:07.255021 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com" 2026-01-20T10:58:07.255109350+00:00 stderr F I0120 10:58:07.255098 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2026-01-20T10:58:07.255158262+00:00 stderr F I0120 10:58:07.255148 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io" 2026-01-20T10:58:07.255199463+00:00 stderr F I0120 10:58:07.255188 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io" 2026-01-20T10:58:07.255258514+00:00 stderr F I0120 10:58:07.255247 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2026-01-20T10:58:07.255303555+00:00 stderr F I0120 10:58:07.255293 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io" 2026-01-20T10:58:07.255342546+00:00 stderr F I0120 10:58:07.255333 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io" 2026-01-20T10:58:07.255386607+00:00 stderr F I0120 10:58:07.255376 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 2026-01-20T10:58:07.255432419+00:00 
stderr F I0120 10:58:07.255422 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io" 2026-01-20T10:58:07.255472380+00:00 stderr F I0120 10:58:07.255462 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com" 2026-01-20T10:58:07.255510641+00:00 stderr F I0120 10:58:07.255501 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="issuers.cert-manager.io" 2026-01-20T10:58:07.255550542+00:00 stderr F I0120 10:58:07.255541 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" 2026-01-20T10:58:07.255589943+00:00 stderr F I0120 10:58:07.255580 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com" 2026-01-20T10:58:07.255636944+00:00 stderr F I0120 10:58:07.255627 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io" 2026-01-20T10:58:07.255687475+00:00 stderr F I0120 10:58:07.255677 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io" 2026-01-20T10:58:07.255727806+00:00 stderr F I0120 10:58:07.255718 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="certificates.cert-manager.io" 2026-01-20T10:58:07.255769667+00:00 stderr F I0120 10:58:07.255759 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io" 2026-01-20T10:58:07.255809718+00:00 stderr F I0120 10:58:07.255800 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints" 2026-01-20T10:58:07.255843589+00:00 stderr F I0120 10:58:07.255833 1 resource_quota_monitor.go:224] "QuotaMonitor created object count 
evaluator" resource="limitranges" 2026-01-20T10:58:07.255882210+00:00 stderr F I0120 10:58:07.255872 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="orders.acme.cert-manager.io" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256591 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256663 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256719 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256746 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="challenges.acme.cert-manager.io" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256762 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256779 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256801 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256820 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps" 2026-01-20T10:58:07.256859694+00:00 stderr F I0120 10:58:07.256842 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io" 2026-01-20T10:58:07.256900085+00:00 stderr F I0120 10:58:07.256878 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com" 2026-01-20T10:58:07.256908436+00:00 stderr F I0120 10:58:07.256902 1 controllermanager.go:787] "Started controller" controller="resourcequota-controller" 2026-01-20T10:58:07.256918426+00:00 stderr F I0120 10:58:07.256912 1 controllermanager.go:756] "Starting controller" controller="job-controller" 2026-01-20T10:58:07.256997709+00:00 stderr F I0120 10:58:07.256981 1 resource_quota_controller.go:294] "Starting resource quota controller" 2026-01-20T10:58:07.257032020+00:00 stderr F I0120 10:58:07.257020 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2026-01-20T10:58:07.257096891+00:00 stderr F I0120 10:58:07.257084 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2026-01-20T10:58:07.260581059+00:00 stderr F I0120 10:58:07.260532 1 controllermanager.go:787] "Started controller" controller="job-controller" 2026-01-20T10:58:07.260581059+00:00 stderr F I0120 10:58:07.260549 1 controllermanager.go:756] "Starting controller" controller="deployment-controller" 2026-01-20T10:58:07.260720963+00:00 stderr F I0120 10:58:07.260704 1 job_controller.go:224] "Starting job controller" 2026-01-20T10:58:07.260751024+00:00 stderr F I0120 10:58:07.260741 1 shared_informer.go:311] Waiting for caches to sync for job 2026-01-20T10:58:07.262909659+00:00 stderr F I0120 10:58:07.262874 1 controllermanager.go:787] "Started controller" controller="deployment-controller" 2026-01-20T10:58:07.262909659+00:00 stderr F I0120 10:58:07.262890 1 controllermanager.go:750] "Warning: controller is disabled" controller="ttl-controller" 2026-01-20T10:58:07.262909659+00:00 stderr F I0120 10:58:07.262898 1 controllermanager.go:750] "Warning: controller is disabled" controller="token-cleaner-controller" 2026-01-20T10:58:07.262929429+00:00 stderr F I0120 10:58:07.262908 1 controllermanager.go:756] "Starting controller" 
controller="persistentvolume-protection-controller" 2026-01-20T10:58:07.262984481+00:00 stderr F I0120 10:58:07.262951 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services acme.cert-manager.io/v1, Resource=challenges acme.cert-manager.io/v1, Resource=orders apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds cert-manager.io/v1, Resource=certificaterequests cert-manager.io/v1, Resource=certificates cert-manager.io/v1, Resource=issuers controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, 
Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2026-01-20T10:58:07.263084743+00:00 stderr F I0120 10:58:07.263052 1 deployment_controller.go:168] "Starting controller" 
controller="deployment" 2026-01-20T10:58:07.263126514+00:00 stderr F I0120 10:58:07.263114 1 shared_informer.go:311] Waiting for caches to sync for deployment 2026-01-20T10:58:07.265897425+00:00 stderr F I0120 10:58:07.265827 1 controllermanager.go:787] "Started controller" controller="persistentvolume-protection-controller" 2026-01-20T10:58:07.265897425+00:00 stderr F I0120 10:58:07.265857 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-expander-controller" 2026-01-20T10:58:07.266039678+00:00 stderr F I0120 10:58:07.266020 1 pv_protection_controller.go:78] "Starting PV protection controller" 2026-01-20T10:58:07.266131271+00:00 stderr F I0120 10:58:07.266112 1 shared_informer.go:311] Waiting for caches to sync for PV protection 2026-01-20T10:58:07.269721811+00:00 stderr F I0120 10:58:07.269658 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2026-01-20T10:58:07.269721811+00:00 stderr F I0120 10:58:07.269706 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2026-01-20T10:58:07.269721811+00:00 stderr F I0120 10:58:07.269716 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2026-01-20T10:58:07.269751162+00:00 stderr F I0120 10:58:07.269724 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2026-01-20T10:58:07.269808104+00:00 stderr F I0120 10:58:07.269791 1 controllermanager.go:787] "Started controller" controller="persistentvolume-expander-controller" 2026-01-20T10:58:07.269817654+00:00 stderr F I0120 10:58:07.269807 1 controllermanager.go:756] "Starting controller" controller="endpoints-controller" 2026-01-20T10:58:07.269881355+00:00 stderr F I0120 10:58:07.269866 1 expand_controller.go:328] "Starting expand controller" 2026-01-20T10:58:07.269910686+00:00 stderr F I0120 10:58:07.269900 1 shared_informer.go:311] Waiting for caches to sync for expand 2026-01-20T10:58:07.271996229+00:00 stderr F I0120 10:58:07.271944 1 
controllermanager.go:787] "Started controller" controller="endpoints-controller" 2026-01-20T10:58:07.271996229+00:00 stderr F I0120 10:58:07.271964 1 controllermanager.go:756] "Starting controller" controller="endpointslice-controller" 2026-01-20T10:58:07.272163324+00:00 stderr F I0120 10:58:07.272119 1 endpoints_controller.go:174] "Starting endpoint controller" 2026-01-20T10:58:07.272163324+00:00 stderr F I0120 10:58:07.272134 1 shared_informer.go:311] Waiting for caches to sync for endpoint 2026-01-20T10:58:07.274146334+00:00 stderr F I0120 10:58:07.274114 1 controllermanager.go:787] "Started controller" controller="endpointslice-controller" 2026-01-20T10:58:07.274146334+00:00 stderr F I0120 10:58:07.274133 1 controllermanager.go:756] "Starting controller" controller="pod-garbage-collector-controller" 2026-01-20T10:58:07.274247967+00:00 stderr F I0120 10:58:07.274229 1 endpointslice_controller.go:264] "Starting endpoint slice controller" 2026-01-20T10:58:07.274277147+00:00 stderr F I0120 10:58:07.274267 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice 2026-01-20T10:58:07.276756321+00:00 stderr F I0120 10:58:07.276729 1 controllermanager.go:787] "Started controller" controller="pod-garbage-collector-controller" 2026-01-20T10:58:07.276756321+00:00 stderr F I0120 10:58:07.276745 1 controllermanager.go:756] "Starting controller" controller="daemonset-controller" 2026-01-20T10:58:07.277210682+00:00 stderr F I0120 10:58:07.276812 1 gc_controller.go:101] "Starting GC controller" 2026-01-20T10:58:07.277255403+00:00 stderr F I0120 10:58:07.277242 1 shared_informer.go:311] Waiting for caches to sync for GC 2026-01-20T10:58:07.279497890+00:00 stderr F I0120 10:58:07.279458 1 controllermanager.go:787] "Started controller" controller="daemonset-controller" 2026-01-20T10:58:07.279497890+00:00 stderr F I0120 10:58:07.279484 1 controllermanager.go:756] "Starting controller" controller="horizontal-pod-autoscaler-controller" 
2026-01-20T10:58:07.279903250+00:00 stderr F I0120 10:58:07.279864 1 daemon_controller.go:297] "Starting daemon sets controller" 2026-01-20T10:58:07.279903250+00:00 stderr F I0120 10:58:07.279889 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2026-01-20T10:58:07.291111845+00:00 stderr F I0120 10:58:07.291048 1 controllermanager.go:787] "Started controller" controller="horizontal-pod-autoscaler-controller" 2026-01-20T10:58:07.291183177+00:00 stderr F I0120 10:58:07.291135 1 horizontal.go:200] "Starting HPA controller" 2026-01-20T10:58:07.291183177+00:00 stderr F I0120 10:58:07.291171 1 shared_informer.go:311] Waiting for caches to sync for HPA 2026-01-20T10:58:07.291214088+00:00 stderr F I0120 10:58:07.291202 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-approving-controller" 2026-01-20T10:58:07.292988892+00:00 stderr F I0120 10:58:07.292955 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-approving-controller" 2026-01-20T10:58:07.292988892+00:00 stderr F I0120 10:58:07.292972 1 controllermanager.go:756] "Starting controller" controller="endpointslice-mirroring-controller" 2026-01-20T10:58:07.293113105+00:00 stderr F I0120 10:58:07.293075 1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving" 2026-01-20T10:58:07.293113105+00:00 stderr F I0120 10:58:07.293105 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving 2026-01-20T10:58:07.296151932+00:00 stderr F I0120 10:58:07.296109 1 controllermanager.go:787] "Started controller" controller="endpointslice-mirroring-controller" 2026-01-20T10:58:07.296208675+00:00 stderr F I0120 10:58:07.296198 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-controller" 2026-01-20T10:58:07.296260746+00:00 stderr F I0120 10:58:07.296239 1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" 
2026-01-20T10:58:07.296260746+00:00 stderr F I0120 10:58:07.296254 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring 2026-01-20T10:58:07.298514302+00:00 stderr F I0120 10:58:07.298459 1 controllermanager.go:787] "Started controller" controller="serviceaccount-controller" 2026-01-20T10:58:07.298514302+00:00 stderr F I0120 10:58:07.298478 1 controllermanager.go:756] "Starting controller" controller="statefulset-controller" 2026-01-20T10:58:07.298588254+00:00 stderr F I0120 10:58:07.298561 1 serviceaccounts_controller.go:111] "Starting service account controller" 2026-01-20T10:58:07.298588254+00:00 stderr F I0120 10:58:07.298576 1 shared_informer.go:311] Waiting for caches to sync for service account 2026-01-20T10:58:07.300825241+00:00 stderr F I0120 10:58:07.300797 1 controllermanager.go:787] "Started controller" controller="statefulset-controller" 2026-01-20T10:58:07.300888323+00:00 stderr F I0120 10:58:07.300877 1 controllermanager.go:756] "Starting controller" controller="service-lb-controller" 2026-01-20T10:58:07.300953125+00:00 stderr F I0120 10:58:07.300925 1 stateful_set.go:161] "Starting stateful set controller" 2026-01-20T10:58:07.300953125+00:00 stderr F I0120 10:58:07.300939 1 shared_informer.go:311] Waiting for caches to sync for stateful set 2026-01-20T10:58:07.303167471+00:00 stderr F E0120 10:58:07.303146 1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" 2026-01-20T10:58:07.303213592+00:00 stderr F I0120 10:58:07.303200 1 controllermanager.go:765] "Warning: skipping controller" controller="service-lb-controller" 2026-01-20T10:58:07.303246443+00:00 stderr F I0120 10:58:07.303236 1 controllermanager.go:756] "Starting controller" controller="cloud-node-lifecycle-controller" 2026-01-20T10:58:07.305636804+00:00 stderr F E0120 10:58:07.305619 1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" 
2026-01-20T10:58:07.305683165+00:00 stderr F I0120 10:58:07.305671 1 controllermanager.go:765] "Warning: skipping controller" controller="cloud-node-lifecycle-controller" 2026-01-20T10:58:07.305716596+00:00 stderr F I0120 10:58:07.305706 1 controllermanager.go:756] "Starting controller" controller="persistentvolumeclaim-protection-controller" 2026-01-20T10:58:07.308596689+00:00 stderr F I0120 10:58:07.308559 1 controllermanager.go:787] "Started controller" controller="persistentvolumeclaim-protection-controller" 2026-01-20T10:58:07.308615139+00:00 stderr F I0120 10:58:07.308592 1 controllermanager.go:756] "Starting controller" controller="disruption-controller" 2026-01-20T10:58:07.308759543+00:00 stderr F I0120 10:58:07.308665 1 pvc_protection_controller.go:102] "Starting PVC protection controller" 2026-01-20T10:58:07.308759543+00:00 stderr F I0120 10:58:07.308689 1 shared_informer.go:311] Waiting for caches to sync for PVC protection 2026-01-20T10:58:07.321613290+00:00 stderr F I0120 10:58:07.321555 1 controllermanager.go:787] "Started controller" controller="disruption-controller" 2026-01-20T10:58:07.321613290+00:00 stderr F I0120 10:58:07.321578 1 controllermanager.go:756] "Starting controller" controller="node-route-controller" 2026-01-20T10:58:07.321643081+00:00 stderr F I0120 10:58:07.321616 1 core.go:290] "Will not configure cloud provider routes for allocate-node-cidrs" CIDRs=false routes=true 2026-01-20T10:58:07.321643081+00:00 stderr F I0120 10:58:07.321624 1 controllermanager.go:765] "Warning: skipping controller" controller="node-route-controller" 2026-01-20T10:58:07.321643081+00:00 stderr F I0120 10:58:07.321635 1 disruption.go:433] "Sending events to api server." 
2026-01-20T10:58:07.321652351+00:00 stderr F I0120 10:58:07.321637 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"] 2026-01-20T10:58:07.321659221+00:00 stderr F I0120 10:58:07.321652 1 controllermanager.go:756] "Starting controller" controller="replicationcontroller-controller" 2026-01-20T10:58:07.321703622+00:00 stderr F I0120 10:58:07.321681 1 disruption.go:444] "Starting disruption controller" 2026-01-20T10:58:07.321703622+00:00 stderr F I0120 10:58:07.321694 1 shared_informer.go:311] Waiting for caches to sync for disruption 2026-01-20T10:58:07.324490303+00:00 stderr F I0120 10:58:07.324464 1 controllermanager.go:787] "Started controller" controller="replicationcontroller-controller" 2026-01-20T10:58:07.324542454+00:00 stderr F I0120 10:58:07.324529 1 controllermanager.go:756] "Starting controller" controller="cronjob-controller" 2026-01-20T10:58:07.324584705+00:00 stderr F I0120 10:58:07.324563 1 replica_set.go:214] "Starting controller" name="replicationcontroller" 2026-01-20T10:58:07.324584705+00:00 stderr F I0120 10:58:07.324575 1 shared_informer.go:311] Waiting for caches to sync for ReplicationController 2026-01-20T10:58:07.326888853+00:00 stderr F I0120 10:58:07.326860 1 controllermanager.go:787] "Started controller" controller="cronjob-controller" 2026-01-20T10:58:07.326916234+00:00 stderr F I0120 10:58:07.326895 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-signing-controller" 2026-01-20T10:58:07.326995477+00:00 stderr F I0120 10:58:07.326944 1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" 2026-01-20T10:58:07.326995477+00:00 stderr F I0120 10:58:07.326967 1 shared_informer.go:311] Waiting for caches to sync for cronjob 2026-01-20T10:58:07.329684814+00:00 stderr F I0120 10:58:07.329659 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:07.330128126+00:00 stderr F I0120 10:58:07.330080 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving" 2026-01-20T10:58:07.330128126+00:00 stderr F I0120 10:58:07.330122 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving 2026-01-20T10:58:07.330164997+00:00 stderr F I0120 10:58:07.330140 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:07.330386893+00:00 stderr F I0120 10:58:07.330373 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:07.330856304+00:00 stderr F I0120 10:58:07.330813 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client" 2026-01-20T10:58:07.330856304+00:00 stderr F I0120 10:58:07.330843 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client 2026-01-20T10:58:07.330900896+00:00 stderr F I0120 10:58:07.330879 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:07.331073930+00:00 stderr F I0120 10:58:07.331047 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:07.331411048+00:00 stderr F I0120 10:58:07.331386 1 certificate_controller.go:115] "Starting 
certificate controller" name="csrsigning-kube-apiserver-client" 2026-01-20T10:58:07.331411048+00:00 stderr F I0120 10:58:07.331397 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client 2026-01-20T10:58:07.331423419+00:00 stderr F I0120 10:58:07.331411 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:07.331706546+00:00 stderr F I0120 10:58:07.331690 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:07.332081005+00:00 stderr F I0120 10:58:07.332044 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-signing-controller" 2026-01-20T10:58:07.332131576+00:00 stderr F I0120 10:58:07.332104 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2026-01-20T10:58:07.332222029+00:00 stderr F I0120 10:58:07.332204 1 controllermanager.go:756] "Starting controller" controller="ephemeral-volume-controller" 2026-01-20T10:58:07.332307111+00:00 stderr F I0120 10:58:07.332081 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown" 2026-01-20T10:58:07.332307111+00:00 stderr F I0120 10:58:07.332297 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown 2026-01-20T10:58:07.336623741+00:00 stderr F I0120 10:58:07.335160 1 controllermanager.go:787] "Started controller" controller="ephemeral-volume-controller" 2026-01-20T10:58:07.336623741+00:00 stderr F I0120 10:58:07.335293 1 controller.go:169] "Starting ephemeral volume controller" 
2026-01-20T10:58:07.336623741+00:00 stderr F I0120 10:58:07.335305 1 shared_informer.go:311] Waiting for caches to sync for ephemeral 2026-01-20T10:58:07.338866268+00:00 stderr F I0120 10:58:07.338825 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.339168796+00:00 stderr F I0120 10:58:07.339128 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.339234607+00:00 stderr F I0120 10:58:07.339207 1 reflector.go:351] Caches populated for *v1.VolumeAttachment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.340337685+00:00 stderr F I0120 10:58:07.340292 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2026-01-20T10:58:07.340715844+00:00 stderr F I0120 10:58:07.340681 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.341270969+00:00 stderr F I0120 10:58:07.341239 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.346003 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.346522 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.346640 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.346832 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.346940 1 
reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.347916 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.348121 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"crc\" does not exist" 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.348229 1 topologycache.go:217] "Ignoring node because it has an excluded label" node="crc" 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.348249 1 topologycache.go:253] "Insufficient node info for topology hints" totalZones=0 totalCPU="0" sufficientNodeInfo=true 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.348503 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.348610 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.349109888+00:00 stderr F I0120 10:58:07.348704 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.360109907+00:00 stderr F I0120 10:58:07.359240 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.362883838+00:00 stderr F I0120 10:58:07.360154 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.362883838+00:00 stderr F I0120 10:58:07.360207 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.362883838+00:00 stderr F I0120 10:58:07.360249 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.362883838+00:00 stderr F I0120 10:58:07.360941 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.362883838+00:00 stderr F I0120 10:58:07.361125 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.362883838+00:00 stderr F I0120 10:58:07.361502 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.362883838+00:00 stderr F I0120 10:58:07.362016 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.363267797+00:00 stderr F I0120 10:58:07.363229 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.364763556+00:00 stderr F I0120 10:58:07.364698 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.364763556+00:00 stderr F I0120 10:58:07.364743 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.365010542+00:00 stderr F I0120 10:58:07.364978 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.365195316+00:00 stderr F I0120 10:58:07.365172 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2026-01-20T10:58:07.365973986+00:00 stderr F I0120 10:58:07.365937 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.366326456+00:00 stderr F I0120 10:58:07.366266 1 shared_informer.go:318] Caches are synced for PV protection 2026-01-20T10:58:07.366601523+00:00 stderr F I0120 10:58:07.366492 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.366601523+00:00 stderr F I0120 10:58:07.366497 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.366679995+00:00 stderr F I0120 10:58:07.366661 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.366794087+00:00 stderr F I0120 10:58:07.366764 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.366794087+00:00 stderr F I0120 10:58:07.366775 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.366864269+00:00 stderr F I0120 10:58:07.366665 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.366908560+00:00 stderr F I0120 10:58:07.366704 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.369210889+00:00 stderr F I0120 10:58:07.369147 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.369360303+00:00 stderr F I0120 10:58:07.369334 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.369623639+00:00 stderr F I0120 10:58:07.369588 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.369671490+00:00 stderr F I0120 10:58:07.369643 1 reflector.go:351] Caches populated for *v1.RoleBindingRestriction from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.369752272+00:00 stderr F I0120 10:58:07.369727 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.369966198+00:00 stderr F I0120 10:58:07.369909 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.370031269+00:00 stderr F I0120 10:58:07.370000 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.370031269+00:00 stderr F I0120 10:58:07.370024 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.371155218+00:00 stderr F I0120 10:58:07.371118 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.372276147+00:00 stderr F I0120 10:58:07.372237 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.372276147+00:00 stderr F I0120 10:58:07.372243 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.372344118+00:00 stderr F I0120 10:58:07.372320 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.372901712+00:00 stderr F I0120 10:58:07.372871 
1 shared_informer.go:318] Caches are synced for expand 2026-01-20T10:58:07.373045986+00:00 stderr F I0120 10:58:07.372973 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.373148649+00:00 stderr F I0120 10:58:07.373124 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377450538+00:00 stderr F I0120 10:58:07.377404 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377492509+00:00 stderr F I0120 10:58:07.377475 1 shared_informer.go:318] Caches are synced for GC 2026-01-20T10:58:07.377492509+00:00 stderr F I0120 10:58:07.377478 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377616232+00:00 stderr F I0120 10:58:07.377407 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377616232+00:00 stderr F I0120 10:58:07.377523 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377616232+00:00 stderr F I0120 10:58:07.377559 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377616232+00:00 stderr F I0120 10:58:07.377588 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377676804+00:00 stderr F I0120 10:58:07.377647 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377685924+00:00 stderr F I0120 10:58:07.377677 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377730855+00:00 stderr F I0120 10:58:07.377682 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377730855+00:00 stderr F I0120 10:58:07.377691 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377758876+00:00 stderr F I0120 10:58:07.377737 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377767136+00:00 stderr F I0120 10:58:07.377697 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.377812697+00:00 stderr F I0120 10:58:07.377777 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.378330900+00:00 stderr F I0120 10:58:07.378290 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.378503145+00:00 stderr F I0120 10:58:07.378468 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.378709750+00:00 stderr F I0120 10:58:07.378681 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.378744401+00:00 stderr F I0120 10:58:07.378719 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.378804112+00:00 stderr F I0120 10:58:07.378778 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2026-01-20T10:58:07.378851423+00:00 stderr F I0120 10:58:07.378828 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.378851423+00:00 stderr F I0120 10:58:07.378839 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.378892364+00:00 stderr F I0120 10:58:07.378845 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.378948986+00:00 stderr F I0120 10:58:07.378926 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.379183202+00:00 stderr F I0120 10:58:07.379149 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.379323695+00:00 stderr F I0120 10:58:07.379294 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2026-01-20T10:58:07.379332835+00:00 stderr F I0120 10:58:07.379325 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2026-01-20T10:58:07.379366156+00:00 stderr F I0120 10:58:07.379343 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29481765" 2026-01-20T10:58:07.381315616+00:00 stderr F I0120 10:58:07.381286 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.381610083+00:00 stderr F I0120 10:58:07.381586 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.382785843+00:00 stderr F I0120 10:58:07.382757 1 reflector.go:351] Caches populated for *v1.RoleBinding from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.384931368+00:00 stderr F I0120 10:58:07.384889 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.391669549+00:00 stderr F I0120 10:58:07.391623 1 shared_informer.go:318] Caches are synced for HPA 2026-01-20T10:58:07.393707450+00:00 stderr F I0120 10:58:07.393677 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.394681516+00:00 stderr F I0120 10:58:07.394652 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.399089377+00:00 stderr F I0120 10:58:07.396143 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.399983861+00:00 stderr F I0120 10:58:07.399946 1 reflector.go:351] Caches populated for *v1.Template from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.404131555+00:00 stderr F I0120 10:58:07.400525 1 shared_informer.go:318] Caches are synced for service account 2026-01-20T10:58:07.404131555+00:00 stderr F I0120 10:58:07.401102 1 shared_informer.go:318] Caches are synced for stateful set 2026-01-20T10:58:07.404131555+00:00 stderr F I0120 10:58:07.402516 1 shared_informer.go:318] Caches are synced for namespace 2026-01-20T10:58:07.404181117+00:00 stderr F I0120 10:58:07.404155 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.409576934+00:00 stderr F I0120 10:58:07.409518 1 shared_informer.go:318] Caches are synced for PVC protection 2026-01-20T10:58:07.413993036+00:00 stderr F I0120 10:58:07.413407 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2026-01-20T10:58:07.416127200+00:00 stderr F I0120 10:58:07.416093 1 shared_informer.go:318] Caches are synced for taint 2026-01-20T10:58:07.416189781+00:00 stderr F I0120 10:58:07.416159 1 node_lifecycle_controller.go:676] "Controller observed a new Node" node="crc" 2026-01-20T10:58:07.416259973+00:00 stderr F I0120 10:58:07.416206 1 controller_utils.go:173] "Recording event message for node" event="Registered Node crc in Controller" node="crc" 2026-01-20T10:58:07.416304494+00:00 stderr F I0120 10:58:07.416280 1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone="" 2026-01-20T10:58:07.416456528+00:00 stderr F I0120 10:58:07.416421 1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="crc" 2026-01-20T10:58:07.416675665+00:00 stderr F I0120 10:58:07.416647 1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal" 2026-01-20T10:58:07.416927621+00:00 stderr F I0120 10:58:07.416899 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.417841254+00:00 stderr F I0120 10:58:07.417810 1 event.go:376] "Event occurred" object="crc" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node crc event: Registered Node crc in Controller" 2026-01-20T10:58:07.421375093+00:00 stderr F I0120 10:58:07.421338 1 shared_informer.go:318] Caches are synced for TTL after finished 2026-01-20T10:58:07.423238581+00:00 stderr F I0120 10:58:07.423212 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator 2026-01-20T10:58:07.424705558+00:00 stderr F I0120 10:58:07.424670 1 shared_informer.go:318] Caches are synced for ReplicationController 2026-01-20T10:58:07.424797320+00:00 stderr F I0120 10:58:07.424740 1 shared_informer.go:318] Caches are synced for disruption 2026-01-20T10:58:07.427111359+00:00 stderr F I0120 10:58:07.427053 1 
shared_informer.go:318] Caches are synced for cronjob 2026-01-20T10:58:07.431807659+00:00 stderr F I0120 10:58:07.431777 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client 2026-01-20T10:58:07.431826179+00:00 stderr F I0120 10:58:07.431805 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client 2026-01-20T10:58:07.431933712+00:00 stderr F I0120 10:58:07.431805 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving 2026-01-20T10:58:07.432243160+00:00 stderr F I0120 10:58:07.432207 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[apps/v1/daemonset, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" observed="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" 2026-01-20T10:58:07.432365733+00:00 stderr F I0120 10:58:07.432341 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown 2026-01-20T10:58:07.435427310+00:00 stderr F I0120 10:58:07.435375 1 shared_informer.go:318] Caches are synced for ephemeral 2026-01-20T10:58:07.441394942+00:00 stderr F I0120 10:58:07.441345 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.442692085+00:00 stderr F I0120 10:58:07.442656 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.442715616+00:00 stderr F I0120 10:58:07.442691 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.442750457+00:00 stderr F I0120 10:58:07.442733 1 reflector.go:351] Caches populated for *v1.PriorityClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.442816338+00:00 stderr F 
I0120 10:58:07.442786 1 reflector.go:351] Caches populated for *v1.PriorityLevelConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.442861939+00:00 stderr F I0120 10:58:07.442834 1 shared_informer.go:311] Waiting for caches to sync for garbage collector 2026-01-20T10:58:07.449867037+00:00 stderr F I0120 10:58:07.446472 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.452327799+00:00 stderr F I0120 10:58:07.452091 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.452327799+00:00 stderr F I0120 10:58:07.452304 1 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.452937165+00:00 stderr F I0120 10:58:07.452491 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.452937165+00:00 stderr F I0120 10:58:07.452643 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.452937165+00:00 stderr F I0120 10:58:07.452744 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.452937165+00:00 stderr F I0120 10:58:07.452836 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.455657044+00:00 stderr F I0120 10:58:07.453126 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.455657044+00:00 stderr F I0120 10:58:07.453710 1 shared_informer.go:318] Caches are synced for persistent volume 2026-01-20T10:58:07.455732996+00:00 
stderr F I0120 10:58:07.455706 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.456292071+00:00 stderr F I0120 10:58:07.456207 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.456500156+00:00 stderr F I0120 10:58:07.456357 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.456500156+00:00 stderr F I0120 10:58:07.456471 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457286846+00:00 stderr F I0120 10:58:07.456668 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457286846+00:00 stderr F I0120 10:58:07.456751 1 reflector.go:351] Caches populated for *v1.FlowSchema from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457286846+00:00 stderr F I0120 10:58:07.456972 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457286846+00:00 stderr F I0120 10:58:07.456998 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457286846+00:00 stderr F I0120 10:58:07.457147 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457286846+00:00 stderr F I0120 10:58:07.457188 1 shared_informer.go:318] Caches are synced for resource quota 2026-01-20T10:58:07.457286846+00:00 stderr F I0120 10:58:07.457214 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457286846+00:00 stderr F I0120 10:58:07.457265 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457692796+00:00 stderr F I0120 10:58:07.457318 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457692796+00:00 stderr F I0120 10:58:07.457415 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457692796+00:00 stderr F I0120 10:58:07.457678 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.457798459+00:00 stderr F I0120 10:58:07.457731 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[config.openshift.io/v1/ClusterVersion, namespace: openshift-authentication-operator, name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" observed="[config.openshift.io/v1/ClusterVersion, namespace: , name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" 2026-01-20T10:58:07.458002304+00:00 stderr F I0120 10:58:07.457943 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.458143057+00:00 stderr F I0120 10:58:07.458030 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.458143057+00:00 stderr F I0120 10:58:07.457949 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.459322828+00:00 stderr F I0120 10:58:07.459249 1 reflector.go:351] Caches populated for *v1.UserOAuthAccessToken from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.459579134+00:00 stderr F I0120 10:58:07.459490 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.459579134+00:00 stderr F I0120 10:58:07.459520 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.459676737+00:00 stderr F I0120 10:58:07.459626 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.459689287+00:00 stderr F I0120 10:58:07.459672 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.461168524+00:00 stderr F I0120 10:58:07.459712 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]" 2026-01-20T10:58:07.461168524+00:00 stderr F I0120 10:58:07.459747 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.461168524+00:00 stderr F I0120 10:58:07.459748 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]" 2026-01-20T10:58:07.461168524+00:00 stderr F I0120 10:58:07.459781 1 shared_informer.go:318] Caches are synced for 
attach detach 2026-01-20T10:58:07.461168524+00:00 stderr F I0120 10:58:07.459976 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.461168524+00:00 stderr F I0120 10:58:07.460242 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.461168524+00:00 stderr F I0120 10:58:07.461147 1 reconciler.go:352] "attacherDetacher.AttachVolume started" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" nodeName="crc" scheduledPods=["openshift-image-registry/image-registry-75b7bb6564-ln84v"] 2026-01-20T10:58:07.461202105+00:00 stderr F I0120 10:58:07.461170 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.461242466+00:00 stderr F E0120 10:58:07.461219 1 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:crc}" failed. No retries permitted until 2026-01-20 10:58:07.961186174 +0000 UTC m=+17.334621427 (durationBeforeRetry 500ms). 
Error: AttachVolume.FindAttachablePluginBySpec failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") from node "crc" 2026-01-20T10:58:07.461272526+00:00 stderr F I0120 10:58:07.461253 1 shared_informer.go:318] Caches are synced for job 2026-01-20T10:58:07.461329898+00:00 stderr F I0120 10:58:07.461308 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.461418430+00:00 stderr F I0120 10:58:07.461394 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.461451801+00:00 stderr F I0120 10:58:07.461428 1 desired_state_of_world_populator.go:168] "Volume changes from attachable to non-attachable" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2026-01-20T10:58:07.461640067+00:00 stderr F I0120 10:58:07.461611 1 event.go:376] "Event occurred" object="openshift-image-registry/image-registry-75b7bb6564-ln84v" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.FindAttachablePluginBySpec failed for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" " 2026-01-20T10:58:07.464323395+00:00 stderr F I0120 10:58:07.464298 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.464747995+00:00 stderr F I0120 10:58:07.464719 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.464982861+00:00 stderr F I0120 10:58:07.464932 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.465901274+00:00 stderr F 
I0120 10:58:07.465862 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466025787+00:00 stderr F I0120 10:58:07.466002 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466153451+00:00 stderr F I0120 10:58:07.466125 1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner 2026-01-20T10:58:07.466196422+00:00 stderr F I0120 10:58:07.466179 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466337775+00:00 stderr F I0120 10:58:07.466315 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466398557+00:00 stderr F I0120 10:58:07.466383 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466448408+00:00 stderr F I0120 10:58:07.466433 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466546551+00:00 stderr F I0120 10:58:07.466532 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466546551+00:00 stderr F I0120 10:58:07.466542 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466646003+00:00 stderr F I0120 10:58:07.466625 1 reflector.go:351] Caches populated for *v1.BrokerTemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466676244+00:00 stderr F I0120 10:58:07.466660 1 reflector.go:351] Caches populated for *v1.RangeAllocation from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.466802057+00:00 stderr F I0120 10:58:07.466390 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.467012472+00:00 stderr F I0120 10:58:07.466987 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.467265250+00:00 stderr F I0120 10:58:07.467251 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.467265250+00:00 stderr F I0120 10:58:07.466134 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.467433904+00:00 stderr F I0120 10:58:07.467421 1 shared_informer.go:318] Caches are synced for taint-eviction-controller 2026-01-20T10:58:07.467509696+00:00 stderr F I0120 10:58:07.467492 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.467626369+00:00 stderr F I0120 10:58:07.467609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.467775932+00:00 stderr F I0120 10:58:07.467760 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.467844564+00:00 stderr F I0120 10:58:07.467830 1 shared_informer.go:318] Caches are synced for deployment 2026-01-20T10:58:07.467975517+00:00 stderr F I0120 10:58:07.467950 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Network, namespace: openshift-ovn-kubernetes, name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]" observed="[operator.openshift.io/v1/Network, 
namespace: , name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]" 2026-01-20T10:58:07.468020859+00:00 stderr F I0120 10:58:07.468003 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/DNS, namespace: openshift-dns, name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]" observed="[operator.openshift.io/v1/DNS, namespace: , name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]" 2026-01-20T10:58:07.468685805+00:00 stderr F I0120 10:58:07.468660 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.469010223+00:00 stderr F I0120 10:58:07.468988 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.475000765+00:00 stderr F I0120 10:58:07.473584 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.475956270+00:00 stderr F I0120 10:58:07.475921 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.476246647+00:00 stderr F I0120 10:58:07.476176 1 shared_informer.go:318] Caches are synced for endpoint 2026-01-20T10:58:07.478449604+00:00 stderr F I0120 10:58:07.478416 1 shared_informer.go:318] Caches are synced for endpoint_slice 2026-01-20T10:58:07.478460354+00:00 stderr F I0120 10:58:07.478445 1 endpointslice_controller.go:271] "Starting worker threads" total=5 2026-01-20T10:58:07.480162617+00:00 stderr F I0120 10:58:07.480128 1 shared_informer.go:318] Caches are synced for daemon sets 2026-01-20T10:58:07.480162617+00:00 stderr F I0120 10:58:07.480143 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2026-01-20T10:58:07.480162617+00:00 stderr F I0120 10:58:07.480148 1 shared_informer.go:318] Caches are synced for daemon sets 
2026-01-20T10:58:07.480540546+00:00 stderr F I0120 10:58:07.480509 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.490650213+00:00 stderr F I0120 10:58:07.490577 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.494401178+00:00 stderr F I0120 10:58:07.494344 1 shared_informer.go:318] Caches are synced for certificate-csrapproving 2026-01-20T10:58:07.496419900+00:00 stderr F I0120 10:58:07.496382 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring 2026-01-20T10:58:07.496433010+00:00 stderr F I0120 10:58:07.496417 1 endpointslicemirroring_controller.go:230] "Starting worker threads" total=5 2026-01-20T10:58:07.501477668+00:00 stderr F I0120 10:58:07.501439 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.516991542+00:00 stderr F I0120 10:58:07.516869 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521399 1 shared_informer.go:318] Caches are synced for ReplicaSet 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521556 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="cert-manager/cert-manager-758df9885c" duration="90.922µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521596 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="cert-manager/cert-manager-cainjector-676dd9bd64" duration="24.891µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521638 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="cert-manager/cert-manager-webhook-855f577f79" duration="33.681µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521672 1 replica_set.go:676] 
"Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="25.081µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521718 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="37.761µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521771 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="46.201µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521814 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="35.911µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521846 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="25.99µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521875 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="22.04µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521900 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="18.421µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521926 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="19.09µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521948 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="15.63µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521972 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="17.221µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.521995 1 
replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="16.12µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.522017 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="15.511µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.522040 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="15.69µs" 2026-01-20T10:58:07.522134173+00:00 stderr F I0120 10:58:07.522107 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="59.992µs" 2026-01-20T10:58:07.522195844+00:00 stderr F I0120 10:58:07.522132 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="16.41µs" 2026-01-20T10:58:07.522195844+00:00 stderr F I0120 10:58:07.522157 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="18.55µs" 2026-01-20T10:58:07.522195844+00:00 stderr F I0120 10:58:07.522182 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="17.611µs" 2026-01-20T10:58:07.522207134+00:00 stderr F I0120 10:58:07.522202 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="13.94µs" 2026-01-20T10:58:07.522254036+00:00 stderr F I0120 10:58:07.522229 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="20.061µs" 2026-01-20T10:58:07.522279436+00:00 stderr F I0120 10:58:07.522264 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="19.971µs" 2026-01-20T10:58:07.522318647+00:00 stderr F I0120 10:58:07.522297 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="22.471µs" 2026-01-20T10:58:07.522345728+00:00 stderr F I0120 10:58:07.522330 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="21.44µs" 2026-01-20T10:58:07.522384389+00:00 stderr F I0120 10:58:07.522369 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="28.59µs" 2026-01-20T10:58:07.522419960+00:00 stderr F I0120 10:58:07.522405 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="25.32µs" 2026-01-20T10:58:07.522451641+00:00 stderr F I0120 10:58:07.522436 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="20.62µs" 2026-01-20T10:58:07.522485641+00:00 stderr F I0120 10:58:07.522471 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="23.521µs" 2026-01-20T10:58:07.522517922+00:00 stderr F I0120 10:58:07.522504 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="22.101µs" 2026-01-20T10:58:07.522544823+00:00 stderr F I0120 10:58:07.522530 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="15.08µs" 2026-01-20T10:58:07.522581054+00:00 stderr F I0120 10:58:07.522566 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="22.47µs" 2026-01-20T10:58:07.522612235+00:00 stderr F I0120 10:58:07.522597 1 replica_set.go:676] "Finished syncing" 
kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="20.32µs" 2026-01-20T10:58:07.522640845+00:00 stderr F I0120 10:58:07.522626 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="18.311µs" 2026-01-20T10:58:07.522667616+00:00 stderr F I0120 10:58:07.522653 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="16.821µs" 2026-01-20T10:58:07.522702657+00:00 stderr F I0120 10:58:07.522688 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="23.6µs" 2026-01-20T10:58:07.522739598+00:00 stderr F I0120 10:58:07.522724 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="25.43µs" 2026-01-20T10:58:07.522802739+00:00 stderr F I0120 10:58:07.522775 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="28.58µs" 2026-01-20T10:58:07.522813260+00:00 stderr F I0120 10:58:07.522805 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="17.16µs" 2026-01-20T10:58:07.522849891+00:00 stderr F I0120 10:58:07.522834 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="22.04µs" 2026-01-20T10:58:07.522881591+00:00 stderr F I0120 10:58:07.522867 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="18.951µs" 2026-01-20T10:58:07.522915192+00:00 stderr F I0120 10:58:07.522901 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="23.321µs" 
2026-01-20T10:58:07.522941783+00:00 stderr F I0120 10:58:07.522925 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="14.48µs" 2026-01-20T10:58:07.522976284+00:00 stderr F I0120 10:58:07.522958 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="21.74µs" 2026-01-20T10:58:07.523011975+00:00 stderr F I0120 10:58:07.522994 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="23.331µs" 2026-01-20T10:58:07.523047065+00:00 stderr F I0120 10:58:07.523030 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="24.301µs" 2026-01-20T10:58:07.523097208+00:00 stderr F I0120 10:58:07.523081 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="40.631µs" 2026-01-20T10:58:07.523127779+00:00 stderr F I0120 10:58:07.523113 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="20.9µs" 2026-01-20T10:58:07.523163929+00:00 stderr F I0120 10:58:07.523148 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75b7bb6564" duration="23.831µs" 2026-01-20T10:58:07.523192810+00:00 stderr F I0120 10:58:07.523177 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="19.031µs" 2026-01-20T10:58:07.523225431+00:00 stderr F I0120 10:58:07.523211 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="22.791µs" 2026-01-20T10:58:07.523264362+00:00 stderr F I0120 10:58:07.523247 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-ingress/router-default-5c9bf7bc58" duration="25.37µs" 2026-01-20T10:58:07.523298603+00:00 stderr F I0120 10:58:07.523284 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="22.82µs" 2026-01-20T10:58:07.523333934+00:00 stderr F I0120 10:58:07.523316 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="21.6µs" 2026-01-20T10:58:07.523366544+00:00 stderr F I0120 10:58:07.523349 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="21.381µs" 2026-01-20T10:58:07.523396965+00:00 stderr F I0120 10:58:07.523381 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="21.891µs" 2026-01-20T10:58:07.523418976+00:00 stderr F I0120 10:58:07.523405 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="12.28µs" 2026-01-20T10:58:07.523452687+00:00 stderr F I0120 10:58:07.523435 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="19.79µs" 2026-01-20T10:58:07.523487158+00:00 stderr F I0120 10:58:07.523472 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="22.161µs" 2026-01-20T10:58:07.523550399+00:00 stderr F I0120 10:58:07.523530 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="22.171µs" 2026-01-20T10:58:07.523581250+00:00 stderr F I0120 10:58:07.523566 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="24.89µs" 2026-01-20T10:58:07.523628891+00:00 stderr F I0120 10:58:07.523611 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="33.181µs" 2026-01-20T10:58:07.523660232+00:00 stderr F I0120 10:58:07.523643 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="19.35µs" 2026-01-20T10:58:07.523696593+00:00 stderr F I0120 10:58:07.523682 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="29.15µs" 2026-01-20T10:58:07.523739594+00:00 stderr F I0120 10:58:07.523723 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="29.96µs" 2026-01-20T10:58:07.523777405+00:00 stderr F I0120 10:58:07.523761 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="25.68µs" 2026-01-20T10:58:07.523818996+00:00 stderr F I0120 10:58:07.523803 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="31.28µs" 2026-01-20T10:58:07.523862477+00:00 stderr F I0120 10:58:07.523847 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="33.55µs" 2026-01-20T10:58:07.523907458+00:00 stderr F I0120 10:58:07.523892 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="34.371µs" 2026-01-20T10:58:07.523950059+00:00 stderr F I0120 10:58:07.523933 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="28.631µs" 2026-01-20T10:58:07.523986660+00:00 stderr F I0120 
10:58:07.523969 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="25.68µs" 2026-01-20T10:58:07.524073012+00:00 stderr F I0120 10:58:07.524009 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="28.48µs" 2026-01-20T10:58:07.524087873+00:00 stderr F I0120 10:58:07.524075 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="48.861µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524115 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="31.371µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524150 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="22.181µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524175 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="17.901µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524201 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="19.04µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524229 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="18.901µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524270 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="33.421µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524299 1 
replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="21.971µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524325 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="19.09µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524351 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="19.54µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524382 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="24.041µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524414 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="24.611µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524441 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="20.26µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524467 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="19.551µs" 2026-01-20T10:58:07.524683277+00:00 stderr F I0120 10:58:07.524490 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="16.641µs" 2026-01-20T10:58:07.525999891+00:00 stderr F I0120 10:58:07.525344 1 shared_informer.go:318] Caches are synced for crt configmap 2026-01-20T10:58:07.528467243+00:00 stderr F I0120 10:58:07.528443 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2026-01-20T10:58:07.541280339+00:00 stderr F I0120 10:58:07.541213 1 shared_informer.go:318] Caches are synced for resource quota 2026-01-20T10:58:07.541280339+00:00 stderr F I0120 10:58:07.541245 1 resource_quota_controller.go:496] "synced quota controller" 2026-01-20T10:58:07.541757661+00:00 stderr F I0120 10:58:07.541727 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.542364636+00:00 stderr F I0120 10:58:07.542319 1 shared_informer.go:318] Caches are synced for crt configmap 2026-01-20T10:58:07.558442065+00:00 stderr F I0120 10:58:07.558376 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.618113380+00:00 stderr F I0120 10:58:07.617165 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.618113380+00:00 stderr F I0120 10:58:07.617403 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Console, namespace: openshift-console, name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" observed="[operator.openshift.io/v1/Console, namespace: , name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" 2026-01-20T10:58:07.632552458+00:00 stderr F I0120 10:58:07.632489 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.635379780+00:00 stderr F I0120 10:58:07.635348 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.689647648+00:00 stderr F I0120 10:58:07.689591 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.706344892+00:00 stderr F I0120 10:58:07.706266 1 
reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.720894541+00:00 stderr F I0120 10:58:07.720833 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.730533007+00:00 stderr F I0120 10:58:07.730162 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.742411128+00:00 stderr F I0120 10:58:07.742366 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.751359225+00:00 stderr F I0120 10:58:07.751313 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.763195646+00:00 stderr F I0120 10:58:07.763156 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.775541950+00:00 stderr F I0120 10:58:07.775485 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.786928209+00:00 stderr F I0120 10:58:07.786873 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:58:07.837271068+00:00 stderr F I0120 10:58:07.837197 1 shared_informer.go:318] Caches are synced for garbage collector 2026-01-20T10:58:07.837271068+00:00 stderr F I0120 10:58:07.837246 1 garbagecollector.go:166] "All resource monitors have synced. 
Proceeding to collect garbage" 2026-01-20T10:58:07.843387563+00:00 stderr F I0120 10:58:07.843343 1 shared_informer.go:318] Caches are synced for garbage collector 2026-01-20T10:58:07.843387563+00:00 stderr F I0120 10:58:07.843375 1 garbagecollector.go:290] "synced garbage collector" 2026-01-20T10:58:07.843472725+00:00 stderr F I0120 10:58:07.843437 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" virtual=false 2026-01-20T10:58:07.843513946+00:00 stderr F I0120 10:58:07.843483 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" virtual=false 2026-01-20T10:58:07.843523006+00:00 stderr F I0120 10:58:07.843500 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" virtual=false 2026-01-20T10:58:07.843570917+00:00 stderr F I0120 10:58:07.843530 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" virtual=false 2026-01-20T10:58:07.843616039+00:00 stderr F I0120 10:58:07.843593 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" virtual=false 2026-01-20T10:58:07.843626079+00:00 stderr F I0120 10:58:07.843611 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" virtual=false 2026-01-20T10:58:07.843780453+00:00 stderr F I0120 
10:58:07.843745 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" virtual=false 2026-01-20T10:58:07.843780453+00:00 stderr F I0120 10:58:07.843761 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" virtual=false 2026-01-20T10:58:07.843812254+00:00 stderr F I0120 10:58:07.843794 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" virtual=false 2026-01-20T10:58:07.843918726+00:00 stderr F I0120 10:58:07.843896 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" virtual=false 2026-01-20T10:58:07.843980098+00:00 stderr F I0120 10:58:07.843954 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" virtual=false 2026-01-20T10:58:07.843988028+00:00 stderr F I0120 10:58:07.843971 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" virtual=false 2026-01-20T10:58:07.843995438+00:00 stderr F I0120 10:58:07.843983 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" virtual=false 2026-01-20T10:58:07.844035859+00:00 stderr F I0120 10:58:07.843894 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" virtual=false 2026-01-20T10:58:07.844035859+00:00 stderr F I0120 10:58:07.844024 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" virtual=false 2026-01-20T10:58:07.844233544+00:00 stderr F I0120 10:58:07.844193 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" virtual=false 2026-01-20T10:58:07.844233544+00:00 stderr F I0120 10:58:07.844208 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" virtual=false 2026-01-20T10:58:07.844282085+00:00 stderr F I0120 10:58:07.844260 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" virtual=false 2026-01-20T10:58:07.844290155+00:00 stderr F I0120 10:58:07.844246 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" virtual=false 2026-01-20T10:58:07.844409828+00:00 stderr F I0120 10:58:07.843534 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/ClusterServiceVersion, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: 0beab272-7637-4d44-b3aa-502dcafbc929]" virtual=false 2026-01-20T10:58:07.850178205+00:00 stderr F I0120 10:58:07.849923 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" 
item="[operators.coreos.com/v1alpha1/ClusterServiceVersion, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: 0beab272-7637-4d44-b3aa-502dcafbc929]" 2026-01-20T10:58:07.850178205+00:00 stderr F I0120 10:58:07.849964 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" virtual=false 2026-01-20T10:58:07.855163752+00:00 stderr F I0120 10:58:07.854846 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.855163752+00:00 stderr F I0120 10:58:07.854899 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" virtual=false 2026-01-20T10:58:07.858239500+00:00 stderr F I0120 10:58:07.858132 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.858239500+00:00 stderr F I0120 10:58:07.858209 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" virtual=false 2026-01-20T10:58:07.858991539+00:00 stderr F I0120 10:58:07.858731 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.858991539+00:00 stderr F I0120 10:58:07.858766 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" virtual=false 2026-01-20T10:58:07.858991539+00:00 stderr F I0120 10:58:07.858975 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.859029690+00:00 stderr F I0120 10:58:07.858980 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.859029690+00:00 stderr F I0120 10:58:07.858997 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" virtual=false 2026-01-20T10:58:07.859081231+00:00 stderr F I0120 10:58:07.859032 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" virtual=false
2026-01-20T10:58:07.859802440+00:00 stderr F I0120 10:58:07.859316 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.859802440+00:00 stderr F I0120 10:58:07.859348 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" virtual=false
2026-01-20T10:58:07.859802440+00:00 stderr F I0120 10:58:07.859552 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.859802440+00:00 stderr F I0120 10:58:07.859572 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" virtual=false
2026-01-20T10:58:07.862323294+00:00 stderr F I0120 10:58:07.862278 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.862345025+00:00 stderr F I0120 10:58:07.862337 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" virtual=false
2026-01-20T10:58:07.862436907+00:00 stderr F I0120 10:58:07.862398 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.862465748+00:00 stderr F I0120 10:58:07.862444 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" virtual=false
2026-01-20T10:58:07.864653482+00:00 stderr F I0120 10:58:07.862920 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.864653482+00:00 stderr F I0120 10:58:07.862940 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v2/OperatorCondition, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: e1433908-0107-419f-b096-c81f5e875465]" virtual=false
2026-01-20T10:58:07.864653482+00:00 stderr F I0120 10:58:07.863508 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.864653482+00:00 stderr F I0120 10:58:07.863546 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" virtual=false
2026-01-20T10:58:07.864653482+00:00 stderr F I0120 10:58:07.863906 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.864653482+00:00 stderr F I0120 10:58:07.863922 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" virtual=false
2026-01-20T10:58:07.868774678+00:00 stderr F I0120 10:58:07.868706 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.868774678+00:00 stderr F I0120 10:58:07.868742 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" virtual=false
2026-01-20T10:58:07.868874180+00:00 stderr F I0120 10:58:07.868833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.868894221+00:00 stderr F I0120 10:58:07.868871 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" virtual=false
2026-01-20T10:58:07.868953272+00:00 stderr F I0120 10:58:07.868921 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.868961892+00:00 stderr F I0120 10:58:07.868948 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-monitoring, name: cluster-monitoring-operator-alert-customization, uid: 453b9a7e-62aa-4f3d-9330-918d9f77515b]" virtual=false
2026-01-20T10:58:07.869031284+00:00 stderr F I0120 10:58:07.868982 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.869080455+00:00 stderr F I0120 10:58:07.869034 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" virtual=false
2026-01-20T10:58:07.869199828+00:00 stderr F I0120 10:58:07.869155 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.869199828+00:00 stderr F I0120 10:58:07.869179 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" virtual=false
2026-01-20T10:58:07.869393583+00:00 stderr F I0120 10:58:07.869343 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.869393583+00:00 stderr F I0120 10:58:07.869382 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.869405503+00:00 stderr F I0120 10:58:07.869395 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" virtual=false
2026-01-20T10:58:07.869455065+00:00 stderr F I0120 10:58:07.869426 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" virtual=false
2026-01-20T10:58:07.869470625+00:00 stderr F I0120 10:58:07.869444 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.869554867+00:00 stderr F I0120 10:58:07.869520 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" virtual=false
2026-01-20T10:58:07.870464831+00:00 stderr F I0120 10:58:07.869965 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.870464831+00:00 stderr F I0120 10:58:07.869995 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" virtual=false
2026-01-20T10:58:07.870464831+00:00 stderr F I0120 10:58:07.870354 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.870464831+00:00 stderr F I0120 10:58:07.870430 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" virtual=false
2026-01-20T10:58:07.871029625+00:00 stderr F I0120 10:58:07.870957 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.871029625+00:00 stderr F I0120 10:58:07.871005 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" virtual=false
2026-01-20T10:58:07.874192925+00:00 stderr F I0120 10:58:07.872570 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.874192925+00:00 stderr F I0120 10:58:07.872612 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-monitoring, name: console-operator, uid: 9d4ff0ed-d9b6-4c92-8dab-48c5d5dcf7ae]" virtual=false
2026-01-20T10:58:07.874192925+00:00 stderr F I0120 10:58:07.873302 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.874192925+00:00 stderr F I0120 10:58:07.873350 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" virtual=false
2026-01-20T10:58:07.874192925+00:00 stderr F I0120 10:58:07.873724 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.874192925+00:00 stderr F I0120 10:58:07.873767 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v2/OperatorCondition, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: e1433908-0107-419f-b096-c81f5e875465]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"ClusterServiceVersion","name":"packageserver","uid":"0beab272-7637-4d44-b3aa-502dcafbc929","controller":true,"blockOwnerDeletion":false}]
2026-01-20T10:58:07.874192925+00:00 stderr F I0120 10:58:07.873754 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" virtual=false
2026-01-20T10:58:07.874192925+00:00 stderr F I0120 10:58:07.873806 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" virtual=false
2026-01-20T10:58:07.878584676+00:00 stderr F I0120 10:58:07.878539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.878584676+00:00 stderr F I0120 10:58:07.878571 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" virtual=false
2026-01-20T10:58:07.879226393+00:00 stderr F I0120 10:58:07.879177 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.879239274+00:00 stderr F I0120 10:58:07.879225 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" virtual=false
2026-01-20T10:58:07.879550522+00:00 stderr F I0120 10:58:07.879517 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.879566352+00:00 stderr F I0120 10:58:07.879551 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" virtual=false
2026-01-20T10:58:07.879631304+00:00 stderr F I0120 10:58:07.879598 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.879680465+00:00 stderr F I0120 10:58:07.879658 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" virtual=false
2026-01-20T10:58:07.881974803+00:00 stderr F I0120 10:58:07.881941 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.882011564+00:00 stderr F I0120 10:58:07.881994 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" virtual=false
2026-01-20T10:58:07.882779253+00:00 stderr F I0120 10:58:07.882731 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.882795434+00:00 stderr F I0120 10:58:07.882777 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" virtual=false
2026-01-20T10:58:07.883965573+00:00 stderr F I0120 10:58:07.883916 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.884027245+00:00 stderr F I0120 10:58:07.883959 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" virtual=false
2026-01-20T10:58:07.884365123+00:00 stderr F I0120 10:58:07.884326 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-monitoring, name: cluster-monitoring-operator-alert-customization, uid: 453b9a7e-62aa-4f3d-9330-918d9f77515b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.884377313+00:00 stderr F I0120 10:58:07.884369 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: fafa61e3-9783-4739-8430-0d7952720a14]" virtual=false
2026-01-20T10:58:07.884747244+00:00 stderr F I0120 10:58:07.884707 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.884758194+00:00 stderr F I0120 10:58:07.884748 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" virtual=false
2026-01-20T10:58:07.885165944+00:00 stderr F I0120 10:58:07.885122 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.885205105+00:00 stderr F I0120 10:58:07.885181 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" virtual=false
2026-01-20T10:58:07.886584680+00:00 stderr F I0120 10:58:07.886537 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.886605130+00:00 stderr F I0120 10:58:07.886582 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-user-workload-monitoring, name: cluster-monitoring-operator, uid: 10195306-33ff-4c68-8c56-cd4ae639cbde]" virtual=false
2026-01-20T10:58:07.888522199+00:00 stderr F I0120 10:58:07.888441 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.888540460+00:00 stderr F I0120 10:58:07.888518 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: csi-snapshot-controller-operator-authentication-reader, uid: daae309d-1001-4cca-b012-aa17027ac98c]" virtual=false
2026-01-20T10:58:07.893117356+00:00 stderr F I0120 10:58:07.891437 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.893117356+00:00 stderr F I0120 10:58:07.891484 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" virtual=false
2026-01-20T10:58:07.893117356+00:00 stderr F I0120 10:58:07.892233 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-monitoring, name: console-operator, uid: 9d4ff0ed-d9b6-4c92-8dab-48c5d5dcf7ae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.893117356+00:00 stderr F I0120 10:58:07.892290 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: b81faa8b-7299-4812-aa0b-64bd21477eb2]" virtual=false
2026-01-20T10:58:07.896535793+00:00 stderr F I0120 10:58:07.895308 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.896535793+00:00 stderr F I0120 10:58:07.895366 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" virtual=false
2026-01-20T10:58:07.896535793+00:00 stderr F I0120 10:58:07.895961 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.896535793+00:00 stderr F I0120 10:58:07.895993 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 63fb0acf-79a0-4111-a6ec-5dda2d4bdf26]" virtual=false
2026-01-20T10:58:07.896535793+00:00 stderr F I0120 10:58:07.895995 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.896535793+00:00 stderr F I0120 10:58:07.896090 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" virtual=false
2026-01-20T10:58:07.896535793+00:00 stderr F I0120 10:58:07.896467 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.896535793+00:00 stderr F I0120 10:58:07.896525 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" virtual=false
2026-01-20T10:58:07.898231175+00:00 stderr F I0120 10:58:07.896985 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.898231175+00:00 stderr F I0120 10:58:07.897011 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" virtual=false
2026-01-20T10:58:07.903010937+00:00 stderr F I0120 10:58:07.902967 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.903045628+00:00 stderr F I0120 10:58:07.903009 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" virtual=false
2026-01-20T10:58:07.903254743+00:00 stderr F I0120 10:58:07.903223 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.903254743+00:00 stderr F I0120 10:58:07.903239 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" virtual=false
2026-01-20T10:58:07.903517420+00:00 stderr F I0120 10:58:07.903485 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.903527530+00:00 stderr F I0120 10:58:07.903516 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" virtual=false
2026-01-20T10:58:07.903686644+00:00 stderr F I0120 10:58:07.903641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.903694024+00:00 stderr F I0120 10:58:07.903687 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" virtual=false
2026-01-20T10:58:07.903822067+00:00 stderr F I0120 10:58:07.903769 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: csi-snapshot-controller-operator-authentication-reader, uid: daae309d-1001-4cca-b012-aa17027ac98c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.903866898+00:00 stderr F I0120 10:58:07.903841 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" virtual=false
2026-01-20T10:58:07.904048493+00:00 stderr F I0120 10:58:07.903642 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.904085275+00:00 stderr F I0120 10:58:07.904053 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" virtual=false
2026-01-20T10:58:07.908932377+00:00 stderr F I0120 10:58:07.908879 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.909079721+00:00 stderr F I0120 10:58:07.909045 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-user-workload-monitoring, name: cluster-monitoring-operator, uid: 10195306-33ff-4c68-8c56-cd4ae639cbde]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.909233415+00:00 stderr F I0120 10:58:07.909195 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.909242585+00:00 stderr F I0120 10:58:07.909233 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console, name: console-operator, uid: fffaa351-d6ff-44c3-9a2e-3f8fb82481ce]" virtual=false
2026-01-20T10:58:07.909323497+00:00 stderr F I0120 10:58:07.909300 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" virtual=false
2026-01-20T10:58:07.909517072+00:00 stderr F I0120 10:58:07.909419 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: bbf1a2e5-8b73-4bae-b5a6-d557ecbce86f]" virtual=false
2026-01-20T10:58:07.910990290+00:00 stderr F I0120 10:58:07.910947 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.911008160+00:00 stderr F I0120 10:58:07.910989 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-dns-operator, name: dns-operator, uid: f37660aa-b829-4efb-af91-c0475f8a91fd]" virtual=false
2026-01-20T10:58:07.911384830+00:00 stderr F I0120 10:58:07.911342 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: fafa61e3-9783-4739-8430-0d7952720a14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.911402080+00:00 stderr F I0120 10:58:07.911393 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cloud-platform-infra, uid: 2d3b34a1-0340-4ea4-b15a-1a0164234aa8]" virtual=false
2026-01-20T10:58:07.913418602+00:00 stderr F I0120 10:58:07.913373 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.913418602+00:00 stderr F I0120 10:58:07.913393 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-machine-approver, uid: 9bfd5222-f6af-4843-b92c-1a5e0e0faafe]" virtual=false
2026-01-20T10:58:07.913685008+00:00 stderr F I0120 10:58:07.913648 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.913685008+00:00 stderr F I0120 10:58:07.913680 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console-operator, uid: 744177a1-0b8c-479f-9660-aa2ce2d1003b]" virtual=false
2026-01-20T10:58:07.914875828+00:00 stderr F I0120 10:58:07.913932 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.914875828+00:00 stderr F I0120 10:58:07.913958 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: machine-api-operator, uid: b434b818-3667-41cd-9f86-670cf81c3dc9]" virtual=false
2026-01-20T10:58:07.928999567+00:00 stderr F I0120 10:58:07.928932 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]"
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.929029028+00:00 stderr F I0120 10:58:07.928989 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-monitoring, uid: 98779a1e-aec5-4b04-aba2-1e5d3ba2ec08]" virtual=false 2026-01-20T10:58:07.929322726+00:00 stderr F I0120 10:58:07.929288 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.929322726+00:00 stderr F I0120 10:58:07.929313 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-scheduler-operator, uid: 6f0ce1c2-37a2-421e-91e4-7be791e0ea85]" virtual=false 2026-01-20T10:58:07.929478530+00:00 stderr F I0120 10:58:07.929439 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: b81faa8b-7299-4812-aa0b-64bd21477eb2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.929478530+00:00 stderr F I0120 10:58:07.929459 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-marketplace, uid: ca0ec70d-cbfe-4901-ae7a-150a9dfe5920]" virtual=false 2026-01-20T10:58:07.930034854+00:00 stderr F I0120 10:58:07.929995 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.930034854+00:00 stderr F I0120 10:58:07.930016 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-nutanix-infra, uid: 3e99d67f-8f0c-4b0e-b68c-e85c2944daf4]" virtual=false 2026-01-20T10:58:07.930477765+00:00 stderr F I0120 10:58:07.930428 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.930477765+00:00 stderr F I0120 10:58:07.930448 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-openstack-infra, uid: ea73e21f-aeeb-4b3d-9983-95bf78a60eea]" virtual=false 2026-01-20T10:58:07.930650409+00:00 stderr F I0120 10:58:07.930613 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.930650409+00:00 stderr F I0120 10:58:07.930640 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: 6398f534-6d5a-4a68-8bf3-84a463b928f0]" 
virtual=false 2026-01-20T10:58:07.930834624+00:00 stderr F I0120 10:58:07.930792 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 63fb0acf-79a0-4111-a6ec-5dda2d4bdf26]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.930834624+00:00 stderr F I0120 10:58:07.930817 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-dns-operator, uid: b99894aa-45e2-4109-8ecc-66c171f7a6d8]" virtual=false 2026-01-20T10:58:07.931027479+00:00 stderr F I0120 10:58:07.930993 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.931027479+00:00 stderr F I0120 10:58:07.931019 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" virtual=false 2026-01-20T10:58:07.931239694+00:00 stderr F I0120 10:58:07.931204 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.931239694+00:00 stderr F I0120 10:58:07.931230 1 garbagecollector.go:549] 
"Processing item" item="[v1/Namespace, namespace: , name: openshift-multus, uid: 117f4806-93cc-4b78-ae9b-74d16192e441]" virtual=false 2026-01-20T10:58:07.932408794+00:00 stderr F I0120 10:58:07.931491 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.932408794+00:00 stderr F I0120 10:58:07.931514 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" virtual=false 2026-01-20T10:58:07.932408794+00:00 stderr F I0120 10:58:07.931807 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.932408794+00:00 stderr F I0120 10:58:07.931826 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-vsphere-infra, uid: 91f4e064-f5b2-4ace-91e6-e1732990ada8]" virtual=false 2026-01-20T10:58:07.940111889+00:00 stderr F I0120 10:58:07.938148 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cloud-platform-infra, uid: 2d3b34a1-0340-4ea4-b15a-1a0164234aa8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2026-01-20T10:58:07.940111889+00:00 stderr F I0120 10:58:07.938197 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-version, uid: 2a354368-bd77-40cc-ae1a-68449ece8efb]" virtual=false 2026-01-20T10:58:07.940111889+00:00 stderr F I0120 10:58:07.939264 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-multus, uid: 117f4806-93cc-4b78-ae9b-74d16192e441]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.940111889+00:00 stderr F I0120 10:58:07.939287 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-controller-manager-operator, uid: d21d09aa-3262-40bb-bb1c-9a70b88b2f48]" virtual=false 2026-01-20T10:58:07.943659070+00:00 stderr F I0120 10:58:07.940287 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: bbf1a2e5-8b73-4bae-b5a6-d557ecbce86f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.943659070+00:00 stderr F I0120 10:58:07.940316 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-etcd-operator, uid: 7413916b-b5eb-4718-9e41-632b89d445af]" virtual=false 2026-01-20T10:58:07.943659070+00:00 stderr F I0120 10:58:07.941871 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-machine-approver, uid: 9bfd5222-f6af-4843-b92c-1a5e0e0faafe]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.943659070+00:00 stderr F I0120 10:58:07.941900 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-image-registry, uid: b700e982-db6f-41d5-bf38-c2a82966abe8]" virtual=false 2026-01-20T10:58:07.943659070+00:00 stderr F I0120 10:58:07.942229 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.943659070+00:00 stderr F I0120 10:58:07.942251 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: community-operators, uid: daa5c70d-2f05-4c99-b062-49370cf4b7bd]" virtual=false 2026-01-20T10:58:07.943659070+00:00 stderr F I0120 10:58:07.943183 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console, name: console-operator, uid: fffaa351-d6ff-44c3-9a2e-3f8fb82481ce]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.943659070+00:00 stderr F I0120 10:58:07.943211 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-diagnostics, uid: 267bb0cc-5a49-450c-a61e-a94080c18cf9]" virtual=false 2026-01-20T10:58:07.945341362+00:00 stderr F I0120 10:58:07.943846 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: 6398f534-6d5a-4a68-8bf3-84a463b928f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.945341362+00:00 stderr F I0120 10:58:07.943871 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-operator-lifecycle-manager, uid: 45ab0ea7-17dd-4464-9f7e-9913e01318e2]" virtual=false 2026-01-20T10:58:07.945341362+00:00 stderr F I0120 10:58:07.944146 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: machine-api-operator, uid: b434b818-3667-41cd-9f86-670cf81c3dc9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.945341362+00:00 stderr F I0120 10:58:07.944165 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-user-workload-monitoring, uid: 1d85aaa3-25a8-4cbd-88ce-765463dbeca9]" virtual=false 2026-01-20T10:58:07.945341362+00:00 stderr F I0120 10:58:07.945177 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-nutanix-infra, uid: 3e99d67f-8f0c-4b0e-b68c-e85c2944daf4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.945341362+00:00 stderr F I0120 10:58:07.945198 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-samples-operator, uid: 9b74ff82-99c7-48fa-852e-a3960e84fedd]" virtual=false 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.945854 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.945885 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kni-infra, uid: a8db5ce0-f088-4b83-8f52-bf8a94f40b94]" virtual=false 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.947439 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-marketplace, uid: ca0ec70d-cbfe-4901-ae7a-150a9dfe5920]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.947462 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-node-identity, uid: c5e796ee-668d-4610-a134-4468bf0cbdf2]" virtual=false 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.947538 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.947595 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console-user-settings, uid: a8cf2c15-bc3c-4cf4-9841-5e4727e35f10]" 
virtual=false 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.947853 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-scheduler-operator, uid: 6f0ce1c2-37a2-421e-91e4-7be791e0ea85]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.947876 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-storage-version-migrator-operator, uid: 56366865-01d3-431a-9703-d295f389a658]" virtual=false 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.948021 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-monitoring, uid: 98779a1e-aec5-4b04-aba2-1e5d3ba2ec08]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.949093088+00:00 stderr F I0120 10:58:07.948039 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config, uid: c38b4142-4b5d-445b-9db1-5d43b8323db9]" virtual=false 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.950799 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-vsphere-infra, uid: 91f4e064-f5b2-4ace-91e6-e1732990ada8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.950832 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-host-network, uid: b448caf8-673f-4b75-9f4c-85cbabc7ec6c]" 
virtual=false 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.951134 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-controller-manager-operator, uid: d21d09aa-3262-40bb-bb1c-9a70b88b2f48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.951150 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-machine-config-operator, uid: 74ba47b5-3bcc-42c0-9deb-e12d8da952ac]" virtual=false 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.951359 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-openstack-infra, uid: ea73e21f-aeeb-4b3d-9983-95bf78a60eea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.951374 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-operators, uid: 7920babe-d7ab-4197-9546-c2d10b1040a1]" virtual=false 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.951793 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: community-operators, uid: daa5c70d-2f05-4c99-b062-49370cf4b7bd]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","name":"community-operators","uid":"e583c58d-4569-4cab-9192-62c813516208","controller":false,"blockOwnerDeletion":false}] 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.951808 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: 
openshift-network-operator, uid: 3796492a-40b7-473e-aa60-d1e803b2692a]" virtual=false 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.952027 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.952158656+00:00 stderr F I0120 10:58:07.952040 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cloud-network-config-controller, uid: 54f58fbf-f373-4947-8956-e1108a6bd97e]" virtual=false 2026-01-20T10:58:07.958093147+00:00 stderr F I0120 10:58:07.955174 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console-operator, uid: 744177a1-0b8c-479f-9660-aa2ce2d1003b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.958093147+00:00 stderr F I0120 10:58:07.955224 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console, uid: eb0e1992-df9a-419d-b7f4-4d9b50e766e7]" virtual=false 2026-01-20T10:58:07.958093147+00:00 stderr F I0120 10:58:07.955396 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-dns-operator, name: dns-operator, uid: f37660aa-b829-4efb-af91-c0475f8a91fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.958093147+00:00 stderr F I0120 10:58:07.955420 1 garbagecollector.go:549] "Processing item" 
item="[v1/Namespace, namespace: , name: openshift-kube-apiserver-operator, uid: 778598fb-710e-443a-a27d-e077f62db555]" virtual=false 2026-01-20T10:58:07.958093147+00:00 stderr F I0120 10:58:07.955556 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-dns-operator, uid: b99894aa-45e2-4109-8ecc-66c171f7a6d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.958093147+00:00 stderr F I0120 10:58:07.955565 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-user-workload-monitoring, uid: 1d85aaa3-25a8-4cbd-88ce-765463dbeca9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.958093147+00:00 stderr F I0120 10:58:07.955570 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-machine-api, uid: df6705c6-ec2d-4f74-a803-3f769fd28210]" virtual=false 2026-01-20T10:58:07.958093147+00:00 stderr F I0120 10:58:07.955579 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ovirt-infra, uid: 2d9c4601-cca2-4813-b9e6-2830b7097736]" virtual=false 2026-01-20T10:58:07.961170545+00:00 stderr F I0120 10:58:07.961129 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-etcd-operator, uid: 7413916b-b5eb-4718-9e41-632b89d445af]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.961187475+00:00 stderr F I0120 10:58:07.961175 1 garbagecollector.go:549] "Processing item" 
item="[v1/Namespace, namespace: , name: openshift-apiserver-operator, uid: 52965c25-9895-4c80-8901-655c30931a31]" virtual=false 2026-01-20T10:58:07.961543354+00:00 stderr F I0120 10:58:07.961502 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-image-registry, uid: b700e982-db6f-41d5-bf38-c2a82966abe8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.961543354+00:00 stderr F I0120 10:58:07.961527 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-storage-operator, uid: 01a4f482-d92f-4128-87f6-1f1177ad4f4a]" virtual=false 2026-01-20T10:58:07.961694158+00:00 stderr F I0120 10:58:07.961670 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-operator-lifecycle-manager, uid: 45ab0ea7-17dd-4464-9f7e-9913e01318e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.961704908+00:00 stderr F I0120 10:58:07.961691 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-authentication-operator, uid: a94f410e-4561-4410-90f7-263645a79260]" virtual=false 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.961979 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-diagnostics, uid: 267bb0cc-5a49-450c-a61e-a94080c18cf9]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962005 1 
garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-controller-manager-operator, uid: 73d77f82-bd47-45ec-bb06-d29938e02cfd]" virtual=false 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962284 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config, uid: c38b4142-4b5d-445b-9db1-5d43b8323db9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962290 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-machine-config-operator, uid: 74ba47b5-3bcc-42c0-9deb-e12d8da952ac]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962305 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config-managed, uid: c4538c2f-b9cd-4ad4-8d0e-02315cca7510]" virtual=false 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962307 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config-operator, uid: 45fc724c-871f-4abe-8f95-988ef4974157]" virtual=false 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962301 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console-user-settings, uid: a8cf2c15-bc3c-4cf4-9841-5e4727e35f10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962348 1 
garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-service-ca-operator, uid: f62224e7-87fc-4c57-b23f-4581de499fa2]" virtual=false 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962436 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-samples-operator, uid: 9b74ff82-99c7-48fa-852e-a3960e84fedd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962453 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ingress-operator, uid: c9d5dc69-e308-44cd-ad51-e2b7cce2619e]" virtual=false 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962471 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-storage-version-migrator-operator, uid: 56366865-01d3-431a-9703-d295f389a658]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962490 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ovn-kubernetes, uid: 393084dd-9b88-439d-9c82-e5d9e9fc7290]" virtual=false 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 10:58:07.962627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-version, uid: 2a354368-bd77-40cc-ae1a-68449ece8efb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.962743574+00:00 stderr F I0120 
10:58:07.962656 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console, name: prometheus-k8s, uid: 68628535-73c6-4332-9439-4dbc315aff94]" virtual=false 2026-01-20T10:58:07.962789395+00:00 stderr F I0120 10:58:07.962750 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kni-infra, uid: a8db5ce0-f088-4b83-8f52-bf8a94f40b94]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.962789395+00:00 stderr F I0120 10:58:07.962755 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-node-identity, uid: c5e796ee-668d-4610-a134-4468bf0cbdf2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.962789395+00:00 stderr F I0120 10:58:07.962773 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: 529eb670-d2f5-4a87-a1cc-b0a4cbd22e03]" virtual=false 2026-01-20T10:58:07.962789395+00:00 stderr F I0120 10:58:07.962772 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-operator, name: prometheus-k8s, uid: 7ed8090b-3718-4d38-abfd-8f495986d2c4]" virtual=false 2026-01-20T10:58:07.965077973+00:00 stderr F I0120 10:58:07.965010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-host-network, uid: b448caf8-673f-4b75-9f4c-85cbabc7ec6c]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.965096374+00:00 stderr F I0120 10:58:07.965082 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift, name: copied-csv-viewers, uid: 51e093b6-6761-48c8-bdf3-e1d1320734a3]" virtual=false 2026-01-20T10:58:07.965157375+00:00 stderr F I0120 10:58:07.965123 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-operators, uid: 7920babe-d7ab-4197-9546-c2d10b1040a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.965157375+00:00 stderr F I0120 10:58:07.965149 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: node-ca, uid: 153c121a-eaab-48ca-a2d5-58b32eb3c97b]" virtual=false 2026-01-20T10:58:07.968261264+00:00 stderr F I0120 10:58:07.968232 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cloud-network-config-controller, uid: 54f58fbf-f373-4947-8956-e1108a6bd97e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.968279835+00:00 stderr F I0120 10:58:07.968257 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 82531bdb-b490-4bb6-9435-a96aa861af59]" virtual=false 2026-01-20T10:58:07.968993444+00:00 stderr F I0120 10:58:07.968500 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage 
collect" item="[v1/Namespace, namespace: , name: openshift-network-operator, uid: 3796492a-40b7-473e-aa60-d1e803b2692a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.968993444+00:00 stderr F I0120 10:58:07.968523 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-apiserver, name: prometheus-k8s, uid: aaaa746a-592e-4f64-9701-4c374c38bb62]" virtual=false 2026-01-20T10:58:07.968993444+00:00 stderr F I0120 10:58:07.968666 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-apiserver-operator, uid: 778598fb-710e-443a-a27d-e077f62db555]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.968993444+00:00 stderr F I0120 10:58:07.968680 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: redhat-marketplace, uid: 73712edb-385d-4bf0-9c03-b6c570b1a22f]" virtual=false 2026-01-20T10:58:07.969362433+00:00 stderr F I0120 10:58:07.969166 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console, uid: eb0e1992-df9a-419d-b7f4-4d9b50e766e7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.969362433+00:00 stderr F I0120 10:58:07.969208 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" virtual=false 2026-01-20T10:58:07.970993674+00:00 stderr F I0120 10:58:07.970440 1 garbagecollector.go:615] 
"item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ovirt-infra, uid: 2d9c4601-cca2-4813-b9e6-2830b7097736]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.970993674+00:00 stderr F I0120 10:58:07.970465 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 030283b3-acfe-40ed-811c-d9f7f79607f6]" virtual=false 2026-01-20T10:58:07.970993674+00:00 stderr F I0120 10:58:07.970649 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-machine-api, uid: df6705c6-ec2d-4f74-a803-3f769fd28210]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.970993674+00:00 stderr F I0120 10:58:07.970667 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" virtual=false 2026-01-20T10:58:07.970993674+00:00 stderr F I0120 10:58:07.970800 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-apiserver-operator, uid: 52965c25-9895-4c80-8901-655c30931a31]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.970993674+00:00 stderr F I0120 10:58:07.970817 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" virtual=false 2026-01-20T10:58:07.975173760+00:00 stderr F I0120 10:58:07.974640 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-storage-operator, uid: 01a4f482-d92f-4128-87f6-1f1177ad4f4a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.975173760+00:00 stderr F I0120 10:58:07.974668 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console, name: downloads, uid: d6818508-d113-4821-84c8-94f59cfa13cb]" virtual=false 2026-01-20T10:58:07.975173760+00:00 stderr F I0120 10:58:07.974813 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-controller-manager-operator, uid: 73d77f82-bd47-45ec-bb06-d29938e02cfd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.975173760+00:00 stderr F I0120 10:58:07.974832 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" virtual=false 2026-01-20T10:58:07.975173760+00:00 stderr F I0120 10:58:07.975135 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config-managed, uid: c4538c2f-b9cd-4ad4-8d0e-02315cca7510]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.975173760+00:00 stderr F I0120 10:58:07.975148 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" virtual=false 
2026-01-20T10:58:07.975337494+00:00 stderr F I0120 10:58:07.975298 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ovn-kubernetes, uid: 393084dd-9b88-439d-9c82-e5d9e9fc7290]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.975337494+00:00 stderr F I0120 10:58:07.975318 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" virtual=false 2026-01-20T10:58:07.976474023+00:00 stderr F I0120 10:58:07.976186 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: 529eb670-d2f5-4a87-a1cc-b0a4cbd22e03]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.976474023+00:00 stderr F I0120 10:58:07.976208 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: cert-manager, name: cert-manager-cainjector, uid: 17915584-2494-42c4-a8a9-acf2692b2235]" virtual=false 2026-01-20T10:58:07.976474023+00:00 stderr F I0120 10:58:07.976397 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ingress-operator, uid: c9d5dc69-e308-44cd-ad51-e2b7cce2619e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.976474023+00:00 stderr F I0120 10:58:07.976409 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: 
openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" virtual=false 2026-01-20T10:58:07.976844522+00:00 stderr F I0120 10:58:07.976651 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-service-ca-operator, uid: f62224e7-87fc-4c57-b23f-4581de499fa2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.976844522+00:00 stderr F I0120 10:58:07.976672 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" virtual=false 2026-01-20T10:58:07.976844522+00:00 stderr F I0120 10:58:07.976771 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config-operator, uid: 45fc724c-871f-4abe-8f95-988ef4974157]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.976844522+00:00 stderr F I0120 10:58:07.976785 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: console-operator, uid: 4e4abf45-f8a5-4971-bef5-322574b06e51]" virtual=false 2026-01-20T10:58:07.978199727+00:00 stderr F I0120 10:58:07.977794 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-operator, name: prometheus-k8s, uid: 7ed8090b-3718-4d38-abfd-8f495986d2c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.978199727+00:00 stderr F I0120 
10:58:07.977825 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" virtual=false 2026-01-20T10:58:07.978199727+00:00 stderr F I0120 10:58:07.978080 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-authentication-operator, uid: a94f410e-4561-4410-90f7-263645a79260]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.978199727+00:00 stderr F I0120 10:58:07.978096 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: cert-manager, name: cert-manager, uid: 3e6e73db-5a03-46ff-9ebd-6afc528fb218]" virtual=false 2026-01-20T10:58:07.978199727+00:00 stderr F I0120 10:58:07.978132 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console, name: prometheus-k8s, uid: 68628535-73c6-4332-9439-4dbc315aff94]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.978199727+00:00 stderr F I0120 10:58:07.978158 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" virtual=false 2026-01-20T10:58:07.978239928+00:00 stderr F I0120 10:58:07.978218 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-console, name: downloads, uid: d6818508-d113-4821-84c8-94f59cfa13cb]" 2026-01-20T10:58:07.978291029+00:00 stderr F I0120 10:58:07.978261 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: 
openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" virtual=false 2026-01-20T10:58:07.978925055+00:00 stderr F I0120 10:58:07.978887 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 82531bdb-b490-4bb6-9435-a96aa861af59]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.978925055+00:00 stderr F I0120 10:58:07.978916 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: packageserver-service, uid: 8099635d-a821-489e-8b18-cae3e83f00b2]" virtual=false 2026-01-20T10:58:07.979120570+00:00 stderr F I0120 10:58:07.979088 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift, name: copied-csv-viewers, uid: 51e093b6-6761-48c8-bdf3-e1d1320734a3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.979120570+00:00 stderr F I0120 10:58:07.979108 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: 105b901a-a54d-4dae-b7b6-99e83d48166f]" virtual=false 2026-01-20T10:58:07.979254404+00:00 stderr F I0120 10:58:07.979222 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: node-ca, uid: 153c121a-eaab-48ca-a2d5-58b32eb3c97b]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.979254404+00:00 stderr F I0120 10:58:07.979247 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver, name: api, uid: fb5bd66d-5e82-4bcc-8126-39324a92dccc]" virtual=false 2026-01-20T10:58:07.981151872+00:00 stderr F I0120 10:58:07.981119 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: redhat-marketplace, uid: 73712edb-385d-4bf0-9c03-b6c570b1a22f]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","name":"redhat-marketplace","uid":"6f259421-4edb-49d8-a6ce-aa41dfc64264","controller":false,"blockOwnerDeletion":false}] 2026-01-20T10:58:07.981151872+00:00 stderr F I0120 10:58:07.981142 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" virtual=false 2026-01-20T10:58:07.981527921+00:00 stderr F I0120 10:58:07.981497 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: 105b901a-a54d-4dae-b7b6-99e83d48166f]" 2026-01-20T10:58:07.981527921+00:00 stderr F I0120 10:58:07.981520 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console, name: console, uid: 5b0bdd1d-b81c-479c-9a03-f3ff2b5db014]" virtual=false 2026-01-20T10:58:07.982286640+00:00 stderr F I0120 10:58:07.981775 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: cert-manager, name: cert-manager, uid: 3e6e73db-5a03-46ff-9ebd-6afc528fb218]" 2026-01-20T10:58:07.982286640+00:00 stderr F I0120 10:58:07.981793 1 garbagecollector.go:549] "Processing 
item" item="[v1/Service, namespace: openshift-image-registry, name: image-registry, uid: 7b12735e-9db4-4c6e-99f6-b2626c4e9f08]" virtual=false 2026-01-20T10:58:07.982286640+00:00 stderr F I0120 10:58:07.982003 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.982286640+00:00 stderr F I0120 10:58:07.982019 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" virtual=false 2026-01-20T10:58:07.982286640+00:00 stderr F I0120 10:58:07.982135 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: cert-manager, name: cert-manager-cainjector, uid: 17915584-2494-42c4-a8a9-acf2692b2235]" 2026-01-20T10:58:07.982286640+00:00 stderr F I0120 10:58:07.982148 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: cert-manager, name: cert-manager-webhook, uid: d1133a70-2dbf-40a7-a6bb-813db182c8f1]" virtual=false 2026-01-20T10:58:07.982314551+00:00 stderr F I0120 10:58:07.982307 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-apiserver, name: api, uid: fb5bd66d-5e82-4bcc-8126-39324a92dccc]" 2026-01-20T10:58:07.982324311+00:00 stderr F I0120 10:58:07.982318 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" virtual=false 2026-01-20T10:58:07.982571819+00:00 stderr F I0120 10:58:07.982532 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage 
collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.982571819+00:00 stderr F I0120 10:58:07.982554 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" virtual=false 2026-01-20T10:58:07.982698482+00:00 stderr F I0120 10:58:07.982669 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:07.982698482+00:00 stderr F I0120 10:58:07.982687 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" virtual=false 2026-01-20T10:58:07.986692453+00:00 stderr F I0120 10:58:07.986639 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-console, name: console, uid: 5b0bdd1d-b81c-479c-9a03-f3ff2b5db014]" 2026-01-20T10:58:07.986692453+00:00 stderr F I0120 10:58:07.986666 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" virtual=false 2026-01-20T10:58:07.987390190+00:00 stderr F I0120 10:58:07.987013 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: 
openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2026-01-20T10:58:07.987390190+00:00 stderr F I0120 10:58:07.987035 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" virtual=false 2026-01-20T10:58:07.987390190+00:00 stderr F I0120 10:58:07.987162 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 030283b3-acfe-40ed-811c-d9f7f79607f6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.987390190+00:00 stderr F I0120 10:58:07.987178 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: certified-operators, uid: 97052848-7332-4254-8854-60d45bb91123]" virtual=false 2026-01-20T10:58:07.987390190+00:00 stderr F I0120 10:58:07.987307 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.987390190+00:00 stderr F I0120 10:58:07.987319 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" virtual=false 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.987436 1 garbagecollector.go:615] "item has at least one existing owner, will 
not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-apiserver, name: prometheus-k8s, uid: aaaa746a-592e-4f64-9701-4c374c38bb62]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.987454 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-authentication, name: oauth-openshift, uid: 64190ecd-229c-482a-966a-b5649b5042ed]" virtual=false 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.987572 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.987585 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-controller-manager, name: controller-manager, uid: 2222c363-21dc-4d99-b2be-80dc3cdf8209]" virtual=false 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.987703 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-image-registry, name: image-registry, uid: 7b12735e-9db4-4c6e-99f6-b2626c4e9f08]" 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.987714 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-dns, name: dns-default, uid: 9c0247d8-5697-41ea-812e-582bb93c9b4d]" virtual=false 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.987920 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: cert-manager, name: cert-manager-webhook, uid: 
d1133a70-2dbf-40a7-a6bb-813db182c8f1]" 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.987932 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress, name: router-internal-default, uid: 3ded9605-ced3-4583-97b6-f93264b463a7]" virtual=false 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988361 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988375 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" virtual=false 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988490 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: console-operator, uid: 4e4abf45-f8a5-4971-bef5-322574b06e51]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988505 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" virtual=false 2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988618 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 
7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988631 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" virtual=false
2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988809 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: packageserver-service, uid: 8099635d-a821-489e-8b18-cae3e83f00b2]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"ClusterServiceVersion","name":"packageserver","uid":"0beab272-7637-4d44-b3aa-502dcafbc929","controller":false,"blockOwnerDeletion":false}]
2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988824 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: redhat-operators, uid: adccbaa4-8d5b-4985-9a89-66271ea4bf4e]" virtual=false
2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988977 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.988993 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" virtual=false
2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.989148 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.989508484+00:00 stderr F I0120 10:58:07.989161 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" virtual=false
2026-01-20T10:58:07.990852648+00:00 stderr F I0120 10:58:07.990651 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-controller-manager, name: controller-manager, uid: 2222c363-21dc-4d99-b2be-80dc3cdf8209]"
2026-01-20T10:58:07.990852648+00:00 stderr F I0120 10:58:07.990672 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-etcd, name: etcd, uid: 09198b54-ff7d-4bc0-9111-00e2f443a981]" virtual=false
2026-01-20T10:58:07.992438859+00:00 stderr F I0120 10:58:07.990876 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-authentication, name: oauth-openshift, uid: 64190ecd-229c-482a-966a-b5649b5042ed]"
2026-01-20T10:58:07.992438859+00:00 stderr F I0120 10:58:07.990894 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-scheduler, name: scheduler, uid: a839a554-406d-4df8-b3ae-b533cb3e24bc]" virtual=false
2026-01-20T10:58:07.992438859+00:00 stderr F I0120 10:58:07.991170 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.992438859+00:00 stderr F I0120 10:58:07.991187 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" virtual=false
2026-01-20T10:58:07.992837699+00:00 stderr F I0120 10:58:07.992813 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.992860879+00:00 stderr F I0120 10:58:07.992838 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 3e887cd0-b481-460c-b943-d944dc64df2f]" virtual=false
2026-01-20T10:58:07.993616168+00:00 stderr F I0120 10:58:07.992994 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.993616168+00:00 stderr F I0120 10:58:07.993013 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 8108d04e-011f-45ba-91f8-5a8b6851d41d]" virtual=false
2026-01-20T10:58:07.993616168+00:00 stderr F I0120 10:58:07.993153 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: certified-operators, uid: 97052848-7332-4254-8854-60d45bb91123]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","name":"certified-operators","uid":"16d5fe82-aef0-4700-8b13-e78e71d2a10d","controller":false,"blockOwnerDeletion":false}]
2026-01-20T10:58:07.993616168+00:00 stderr F I0120 10:58:07.993167 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" virtual=false
2026-01-20T10:58:07.993616168+00:00 stderr F I0120 10:58:07.993282 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.993616168+00:00 stderr F I0120 10:58:07.993297 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" virtual=false
2026-01-20T10:58:07.993616168+00:00 stderr F I0120 10:58:07.993502 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.993616168+00:00 stderr F I0120 10:58:07.993538 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" virtual=false
2026-01-20T10:58:07.994775968+00:00 stderr F I0120 10:58:07.994579 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-kube-scheduler, name: scheduler, uid: a839a554-406d-4df8-b3ae-b533cb3e24bc]"
2026-01-20T10:58:07.994775968+00:00 stderr F I0120 10:58:07.994604 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" virtual=false
2026-01-20T10:58:07.995171678+00:00 stderr F I0120 10:58:07.995011 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress, name: router-internal-default, uid: 3ded9605-ced3-4583-97b6-f93264b463a7]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"router-default","uid":"9ae4d312-7fc4-4344-ab7a-669da95f56bf","controller":true}]
2026-01-20T10:58:07.995171678+00:00 stderr F I0120 10:58:07.995041 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-apiserver, name: apiserver, uid: 44a33f79-7e24-4f1b-bc46-f52dfcec13b8]" virtual=false
2026-01-20T10:58:07.995190289+00:00 stderr F I0120 10:58:07.995182 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-dns, name: dns-default, uid: 9c0247d8-5697-41ea-812e-582bb93c9b4d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}]
2026-01-20T10:58:07.995309332+00:00 stderr F I0120 10:58:07.995200 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" virtual=false
2026-01-20T10:58:07.995309332+00:00 stderr F I0120 10:58:07.995275 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-etcd, name: etcd, uid: 09198b54-ff7d-4bc0-9111-00e2f443a981]"
2026-01-20T10:58:07.995309332+00:00 stderr F I0120 10:58:07.995289 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-oauth-apiserver, name: api, uid: 8ccd218c-b483-42f1-81ef-8a1e9a05f574]" virtual=false
2026-01-20T10:58:07.995938627+00:00 stderr F I0120 10:58:07.995355 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.995938627+00:00 stderr F I0120 10:58:07.995376 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" virtual=false
2026-01-20T10:58:07.998563064+00:00 stderr F I0120 10:58:07.998528 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.998581844+00:00 stderr F I0120 10:58:07.998568 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" virtual=false
2026-01-20T10:58:07.998680687+00:00 stderr F I0120 10:58:07.998656 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: redhat-operators, uid: adccbaa4-8d5b-4985-9a89-66271ea4bf4e]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","name":"redhat-operators","uid":"9ba0c63a-ccef-4143-b195-48b1ad0b0bb7","controller":false,"blockOwnerDeletion":false}]
2026-01-20T10:58:07.998680687+00:00 stderr F I0120 10:58:07.998674 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" virtual=false
2026-01-20T10:58:07.998975644+00:00 stderr F I0120 10:58:07.998950 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.998975644+00:00 stderr F I0120 10:58:07.998968 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: f8e1888b-9575-400b-83eb-6574023cf8d5]" virtual=false
2026-01-20T10:58:07.999264702+00:00 stderr F I0120 10:58:07.999237 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:07.999297802+00:00 stderr F I0120 10:58:07.999278 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 419fdf14-5d8d-4271-b9e7-729de80d8cd2]" virtual=false
2026-01-20T10:58:07.999468648+00:00 stderr F I0120 10:58:07.999448 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.999476638+00:00 stderr F I0120 10:58:07.999469 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" virtual=false
2026-01-20T10:58:07.999569650+00:00 stderr F I0120 10:58:07.999552 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:07.999577800+00:00 stderr F I0120 10:58:07.999568 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" virtual=false
2026-01-20T10:58:08.000255807+00:00 stderr F I0120 10:58:08.000193 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.000255807+00:00 stderr F I0120 10:58:08.000236 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: e3ac9630-07e1-4aa4-843f-40a4b0edff98]" virtual=false
2026-01-20T10:58:08.000477143+00:00 stderr F I0120 10:58:08.000434 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.000477143+00:00 stderr F I0120 10:58:08.000467 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-operator, name: prometheus-k8s, uid: 7188429c-116a-41d4-b2a2-45b51b794769]" virtual=false
2026-01-20T10:58:08.000533714+00:00 stderr F I0120 10:58:08.000504 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.000541915+00:00 stderr F I0120 10:58:08.000533 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-operator, name: prometheus-k8s, uid: 282ae96b-f5ba-4a11-bc5e-9b2688f7715c]" virtual=false
2026-01-20T10:58:08.000550965+00:00 stderr F I0120 10:58:08.000540 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.000583496+00:00 stderr F I0120 10:58:08.000563 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: df902098-e780-4aea-b4ba-386424eac15b]" virtual=false
2026-01-20T10:58:08.001475328+00:00 stderr F I0120 10:58:08.001444 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-kube-apiserver, name: apiserver, uid: 44a33f79-7e24-4f1b-bc46-f52dfcec13b8]"
2026-01-20T10:58:08.001475328+00:00 stderr F I0120 10:58:08.001465 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: b70f0647-f140-40de-94b5-a719b7eea118]" virtual=false
2026-01-20T10:58:08.006326501+00:00 stderr F I0120 10:58:08.006286 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-oauth-apiserver, name: api, uid: 8ccd218c-b483-42f1-81ef-8a1e9a05f574]"
2026-01-20T10:58:08.006326501+00:00 stderr F I0120 10:58:08.006313 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" virtual=false
2026-01-20T10:58:08.022596915+00:00 stderr F I0120 10:58:08.022530 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 3e887cd0-b481-460c-b943-d944dc64df2f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.022619855+00:00 stderr F I0120 10:58:08.022593 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: cluster-samples-operator-openshift-config-secret-reader, uid: 8e56c231-008d-4621-9c6e-23f979369b21]" virtual=false
2026-01-20T10:58:08.026130474+00:00 stderr F I0120 10:58:08.026079 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.026150125+00:00 stderr F I0120 10:58:08.026121 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: machine-api-controllers, uid: afaa6258-38bd-452b-b419-a7a2631b3090]" virtual=false
2026-01-20T10:58:08.031111121+00:00 stderr F I0120 10:58:08.031057 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.031111121+00:00 stderr F I0120 10:58:08.031103 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 90f553e9-1bdb-4888-8400-8635eb23b719]" virtual=false
2026-01-20T10:58:08.035346368+00:00 stderr F I0120 10:58:08.035315 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.035370239+00:00 stderr F I0120 10:58:08.035344 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-version, name: prometheus-k8s, uid: 9e098fa2-db52-4c33-b39a-02358e3a9e0c]" virtual=false
2026-01-20T10:58:08.039091154+00:00 stderr F I0120 10:58:08.039047 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 8108d04e-011f-45ba-91f8-5a8b6851d41d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.039118525+00:00 stderr F I0120 10:58:08.039110 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: 8b0b5b45-675e-4709-9b90-74e3f108e035]" virtual=false
2026-01-20T10:58:08.043185068+00:00 stderr F I0120 10:58:08.043107 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.043185068+00:00 stderr F I0120 10:58:08.043115 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/Service, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 419fdf14-5d8d-4271-b9e7-729de80d8cd2]"
2026-01-20T10:58:08.043185068+00:00 stderr F I0120 10:58:08.043134 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: 9830b203-a5a1-400e-ab46-bc60cd82db13]" virtual=false
2026-01-20T10:58:08.043185068+00:00 stderr F I0120 10:58:08.043160 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" virtual=false
2026-01-20T10:58:08.072628326+00:00 stderr F I0120 10:58:08.072548 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.072628326+00:00 stderr F I0120 10:58:08.072600 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift, name: cluster-samples-operator-openshift-edit, uid: 51645039-f461-4bd6-b917-c350f2081dd5]" virtual=false
2026-01-20T10:58:08.077971972+00:00 stderr F I0120 10:58:08.077916 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.077971972+00:00 stderr F I0120 10:58:08.077953 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: a213bff1-520d-4e2f-927a-40d13b2f1901]" virtual=false
2026-01-20T10:58:08.110107128+00:00 stderr F I0120 10:58:08.106971 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.110107128+00:00 stderr F I0120 10:58:08.107024 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: console-operator, uid: 51573cb0-4753-43d5-a80a-ca9398cf9f90]" virtual=false
2026-01-20T10:58:08.116807357+00:00 stderr F I0120 10:58:08.116419 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.116807357+00:00 stderr F I0120 10:58:08.116459 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: a97a87c8-9657-4a6f-88f4-38a7871d42d4]" virtual=false
2026-01-20T10:58:08.122323187+00:00 stderr F I0120 10:58:08.120568 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.122323187+00:00 stderr F I0120 10:58:08.120607 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b4dbad3d-fcdf-47df-9f99-96752d898355]" virtual=false
2026-01-20T10:58:08.127097139+00:00 stderr F I0120 10:58:08.126548 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: f8e1888b-9575-400b-83eb-6574023cf8d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.127097139+00:00 stderr F I0120 10:58:08.126597 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: ffe4690e-17be-4ba4-b050-35c28bc0fc83]" virtual=false
2026-01-20T10:58:08.127097139+00:00 stderr F I0120 10:58:08.126741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: e3ac9630-07e1-4aa4-843f-40a4b0edff98]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.127097139+00:00 stderr F I0120 10:58:08.126761 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 25069215-dc01-419a-ad4e-bcf1eeb7deaf]" virtual=false
2026-01-20T10:58:08.137905584+00:00 stderr F I0120 10:58:08.134407 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.137905584+00:00 stderr F I0120 10:58:08.134458 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 57452db9-9f49-4e6d-9f15-a66c5d7a3b74]" virtual=false
2026-01-20T10:58:08.140526960+00:00 stderr F I0120 10:58:08.140474 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-operator, name: prometheus-k8s, uid: 7188429c-116a-41d4-b2a2-45b51b794769]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.140568281+00:00 stderr F I0120 10:58:08.140529 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6f2a9ce0-e85a-4870-a5b8-641b64dc9c93]" virtual=false
2026-01-20T10:58:08.167121155+00:00 stderr F I0120 10:58:08.164534 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: b70f0647-f140-40de-94b5-a719b7eea118]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.167121155+00:00 stderr F I0120 10:58:08.164582 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" virtual=false
2026-01-20T10:58:08.175118939+00:00 stderr F I0120 10:58:08.173400 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-operator, name: prometheus-k8s, uid: 282ae96b-f5ba-4a11-bc5e-9b2688f7715c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.175118939+00:00 stderr F I0120 10:58:08.173452 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-operator, name: console-operator, uid: b185912e-a17b-4654-b0d2-b107b556fec5]" virtual=false
2026-01-20T10:58:08.175118939+00:00 stderr F I0120 10:58:08.173601 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: cluster-samples-operator-openshift-config-secret-reader, uid: 8e56c231-008d-4621-9c6e-23f979369b21]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.175118939+00:00 stderr F I0120 10:58:08.173617 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-marketplace, name: marketplace-operator, uid: ce0496bb-6d27-4a64-b0ea-2858aaaaf38f]" virtual=false
2026-01-20T10:58:08.179167792+00:00 stderr F I0120 10:58:08.177751 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: df902098-e780-4aea-b4ba-386424eac15b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.179167792+00:00 stderr F I0120 10:58:08.177790 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: b230d512-d31a-48b9-b1eb-5b0af8942d72]" virtual=false
2026-01-20T10:58:08.179167792+00:00 stderr F I0120 10:58:08.177895 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.179167792+00:00 stderr F I0120 10:58:08.177910 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" virtual=false
2026-01-20T10:58:08.185263457+00:00 stderr F I0120 10:58:08.183319 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: 8b0b5b45-675e-4709-9b90-74e3f108e035]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.185263457+00:00 stderr F I0120 10:58:08.183360 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: 5e63327b-7c2f-40c4-bdb1-3382f0df6785]" virtual=false
2026-01-20T10:58:08.185263457+00:00 stderr F I0120 10:58:08.183552 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.185263457+00:00 stderr F I0120 10:58:08.183570 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" virtual=false
2026-01-20T10:58:08.187710209+00:00 stderr F I0120 10:58:08.187660 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: machine-api-controllers, uid: afaa6258-38bd-452b-b419-a7a2631b3090]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.187710209+00:00 stderr F I0120 10:58:08.187697 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 1d070311-4fbd-4dae-9140-d78d7f4d4575]" virtual=false
2026-01-20T10:58:08.192601162+00:00 stderr F I0120 10:58:08.192414 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 90f553e9-1bdb-4888-8400-8635eb23b719]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.192601162+00:00 stderr F I0120 10:58:08.192478 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-authentication, name: prometheus-k8s, uid: 7c67d003-8fda-45c8-9db8-f7c2add219eb]" virtual=false
2026-01-20T10:58:08.195366173+00:00 stderr F I0120 10:58:08.195312 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: 9830b203-a5a1-400e-ab46-bc60cd82db13]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.195366173+00:00 stderr F I0120 10:58:08.195337 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-operator, uid: e9525389-2dca-4e82-a0c8-1959e8d3f6ef]" virtual=false
2026-01-20T10:58:08.196296597+00:00 stderr F I0120 10:58:08.196232 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-version, name: prometheus-k8s, uid: 9e098fa2-db52-4c33-b39a-02358e3a9e0c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.196296597+00:00 stderr F I0120 10:58:08.196277 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 5b8609fb-7185-421b-a062-64861725c240]" virtual=false
2026-01-20T10:58:08.203552861+00:00 stderr F I0120 10:58:08.203427 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: a213bff1-520d-4e2f-927a-40d13b2f1901]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.203552861+00:00 stderr F I0120 10:58:08.203468 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: dcbd6504-f1ec-4cd3-aa4c-d9fa262e8691]" virtual=false
2026-01-20T10:58:08.204137526+00:00 stderr F I0120 10:58:08.204096 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift, name: cluster-samples-operator-openshift-edit, uid: 51645039-f461-4bd6-b917-c350f2081dd5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.204137526+00:00 stderr F I0120 10:58:08.204126 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: machine-approver, uid: bfc0a838-d2db-4b1b-b2fa-b1b3ed79837f]" virtual=false
2026-01-20T10:58:08.205050259+00:00 stderr F I0120 10:58:08.205008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: console-operator, uid: 51573cb0-4753-43d5-a80a-ca9398cf9f90]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.205084480+00:00 stderr F I0120 10:58:08.205046 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: console, uid: f23dffb7-2c10-4dad-918c-95f11496c236]" virtual=false
2026-01-20T10:58:08.226788892+00:00 stderr F I0120 10:58:08.226661 1
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b4dbad3d-fcdf-47df-9f99-96752d898355]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.226788892+00:00 stderr F I0120 10:58:08.226721 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: aac1809c-7591-4279-9b3a-f54481665b62]" virtual=false 2026-01-20T10:58:08.227343225+00:00 stderr F I0120 10:58:08.227283 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: a97a87c8-9657-4a6f-88f4-38a7871d42d4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.227343225+00:00 stderr F I0120 10:58:08.227332 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" virtual=false 2026-01-20T10:58:08.233946523+00:00 stderr F I0120 10:58:08.233862 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: ffe4690e-17be-4ba4-b050-35c28bc0fc83]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.233946523+00:00 stderr F I0120 10:58:08.233902 1 garbagecollector.go:549] 
"Processing item" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" virtual=false 2026-01-20T10:58:08.240007297+00:00 stderr F I0120 10:58:08.239942 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 25069215-dc01-419a-ad4e-bcf1eeb7deaf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.240007297+00:00 stderr F I0120 10:58:08.239975 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: bf0c3d32-dd79-434c-b9f7-cd199254cb05]" virtual=false 2026-01-20T10:58:08.246903042+00:00 stderr F I0120 10:58:08.246836 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 57452db9-9f49-4e6d-9f15-a66c5d7a3b74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.246903042+00:00 stderr F I0120 10:58:08.246884 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 0c21355c-3ceb-4fa7-8948-24ef652948e2]" virtual=false 2026-01-20T10:58:08.253311815+00:00 stderr F I0120 10:58:08.253269 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 
6f2a9ce0-e85a-4870-a5b8-641b64dc9c93]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.253336045+00:00 stderr F I0120 10:58:08.253309 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-configmap-reader, uid: 96f1c1d8-079f-4d3e-ab80-520fea692547]" virtual=false 2026-01-20T10:58:08.261754849+00:00 stderr F I0120 10:58:08.261636 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.261754849+00:00 stderr F I0120 10:58:08.261681 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: b1006f5c-7799-42f1-904e-88f572ba47b3]" virtual=false 2026-01-20T10:58:08.285432781+00:00 stderr F I0120 10:58:08.284828 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-operator, name: console-operator, uid: b185912e-a17b-4654-b0d2-b107b556fec5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.285432781+00:00 stderr F I0120 10:58:08.284890 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 
4f8c14ca-36ee-4b58-aa1f-cac42171174a]" virtual=false 2026-01-20T10:58:08.285432781+00:00 stderr F I0120 10:58:08.285252 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-marketplace, name: marketplace-operator, uid: ce0496bb-6d27-4a64-b0ea-2858aaaaf38f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.285432781+00:00 stderr F I0120 10:58:08.285294 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ingress-operator, name: ingress-operator, uid: fb403acd-60bb-4e1d-812d-5b3609688224]" virtual=false 2026-01-20T10:58:08.285432781+00:00 stderr F I0120 10:58:08.285371 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: b230d512-d31a-48b9-b1eb-5b0af8942d72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.285432781+00:00 stderr F I0120 10:58:08.285422 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: prometheus-k8s, uid: a9e56c6d-5ac9-4b79-990a-a087cbdff6b6]" virtual=false 2026-01-20T10:58:08.290716526+00:00 stderr F I0120 10:58:08.290591 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.290716526+00:00 stderr F I0120 10:58:08.290636 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 46d97ac2-17af-4ffa-903a-c11271097ec9]" virtual=false 2026-01-20T10:58:08.295461935+00:00 stderr F I0120 10:58:08.295414 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: 5e63327b-7c2f-40c4-bdb1-3382f0df6785]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.295484736+00:00 stderr F I0120 10:58:08.295460 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: machine-api-controllers, uid: 6e3cba9d-4203-4082-ad69-807338a0bc67]" virtual=false 2026-01-20T10:58:08.297711283+00:00 stderr F I0120 10:58:08.297670 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.297743613+00:00 stderr F I0120 10:58:08.297714 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: cluster-monitoring-operator-alert-customization, uid: 
5ab31ba2-d0b0-4677-80c5-034b6c6541d6]" virtual=false 2026-01-20T10:58:08.307196733+00:00 stderr F I0120 10:58:08.307109 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 1d070311-4fbd-4dae-9140-d78d7f4d4575]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.307196733+00:00 stderr F I0120 10:58:08.307187 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" virtual=false 2026-01-20T10:58:08.312807886+00:00 stderr F I0120 10:58:08.312734 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-authentication, name: prometheus-k8s, uid: 7c67d003-8fda-45c8-9db8-f7c2add219eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.312807886+00:00 stderr F I0120 10:58:08.312789 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: 239e5b05-c173-4227-8370-2ccdce453a8c]" virtual=false 2026-01-20T10:58:08.319676071+00:00 stderr F I0120 10:58:08.319607 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-operator, uid: e9525389-2dca-4e82-a0c8-1959e8d3f6ef]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.319676071+00:00 stderr F I0120 10:58:08.319661 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" virtual=false 2026-01-20T10:58:08.325553650+00:00 stderr F I0120 10:58:08.325448 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 5b8609fb-7185-421b-a062-64861725c240]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.325553650+00:00 stderr F I0120 10:58:08.325514 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: ingress-operator, uid: 10dd3c44-d460-4ab3-b671-c27cf19cd1d5]" virtual=false 2026-01-20T10:58:08.334245611+00:00 stderr F I0120 10:58:08.332320 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: dcbd6504-f1ec-4cd3-aa4c-d9fa262e8691]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.334245611+00:00 stderr F I0120 10:58:08.332362 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" virtual=false 2026-01-20T10:58:08.338952171+00:00 stderr F I0120 10:58:08.338157 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: machine-approver, uid: bfc0a838-d2db-4b1b-b2fa-b1b3ed79837f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.338952171+00:00 stderr F I0120 10:58:08.338186 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: console-operator, uid: 4c176e02-35d3-42a6-9509-b0af233a3594]" virtual=false 2026-01-20T10:58:08.339158086+00:00 stderr F I0120 10:58:08.339131 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: console, uid: f23dffb7-2c10-4dad-918c-95f11496c236]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.339172356+00:00 stderr F I0120 10:58:08.339154 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: d9bf7097-7fa5-4e09-8381-9bfaa22c2f7a]" virtual=false 2026-01-20T10:58:08.354219568+00:00 stderr F I0120 10:58:08.354129 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: aac1809c-7591-4279-9b3a-f54481665b62]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.354219568+00:00 stderr F I0120 10:58:08.354186 1 garbagecollector.go:549] "Processing item" 
item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" virtual=false 2026-01-20T10:58:08.359402700+00:00 stderr F I0120 10:58:08.359359 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.359456671+00:00 stderr F I0120 10:58:08.359433 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" virtual=false 2026-01-20T10:58:08.368953052+00:00 stderr F I0120 10:58:08.368882 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.368953052+00:00 stderr F I0120 10:58:08.368931 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" virtual=false 2026-01-20T10:58:08.371893607+00:00 stderr F I0120 10:58:08.371716 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: bf0c3d32-dd79-434c-b9f7-cd199254cb05]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.371893607+00:00 stderr F I0120 10:58:08.371745 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" virtual=false 2026-01-20T10:58:08.386155390+00:00 stderr F I0120 10:58:08.386033 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 0c21355c-3ceb-4fa7-8948-24ef652948e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.386155390+00:00 stderr F I0120 10:58:08.386103 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" virtual=false 2026-01-20T10:58:08.388165690+00:00 stderr F I0120 10:58:08.387438 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-configmap-reader, uid: 96f1c1d8-079f-4d3e-ab80-520fea692547]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.388165690+00:00 stderr F I0120 10:58:08.387472 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" virtual=false 2026-01-20T10:58:08.399141919+00:00 stderr F I0120 10:58:08.399075 1 garbagecollector.go:615] 
"item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: b1006f5c-7799-42f1-904e-88f572ba47b3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.399176380+00:00 stderr F I0120 10:58:08.399136 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" virtual=false 2026-01-20T10:58:08.411582665+00:00 stderr F I0120 10:58:08.411485 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 4f8c14ca-36ee-4b58-aa1f-cac42171174a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.411582665+00:00 stderr F I0120 10:58:08.411527 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" virtual=false 2026-01-20T10:58:08.412520529+00:00 stderr F I0120 10:58:08.412451 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ingress-operator, name: ingress-operator, uid: fb403acd-60bb-4e1d-812d-5b3609688224]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.412520529+00:00 stderr F I0120 10:58:08.412498 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: 
openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" virtual=false 2026-01-20T10:58:08.417720861+00:00 stderr F I0120 10:58:08.417642 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: prometheus-k8s, uid: a9e56c6d-5ac9-4b79-990a-a087cbdff6b6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.417720861+00:00 stderr F I0120 10:58:08.417686 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" virtual=false 2026-01-20T10:58:08.419315781+00:00 stderr F I0120 10:58:08.419276 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 46d97ac2-17af-4ffa-903a-c11271097ec9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.419342842+00:00 stderr F I0120 10:58:08.419309 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]" virtual=false 2026-01-20T10:58:08.426988076+00:00 stderr F I0120 10:58:08.426929 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: machine-api-controllers, uid: 6e3cba9d-4203-4082-ad69-807338a0bc67]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.426988076+00:00 stderr F I0120 10:58:08.426955 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" virtual=false 2026-01-20T10:58:08.432427944+00:00 stderr F I0120 10:58:08.432376 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: cluster-monitoring-operator-alert-customization, uid: 5ab31ba2-d0b0-4677-80c5-034b6c6541d6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.432427944+00:00 stderr F I0120 10:58:08.432401 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" virtual=false 2026-01-20T10:58:08.441408632+00:00 stderr F I0120 10:58:08.441342 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.441556676+00:00 stderr F I0120 10:58:08.441530 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" virtual=false 2026-01-20T10:58:08.447101657+00:00 stderr F I0120 
10:58:08.447010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: 239e5b05-c173-4227-8370-2ccdce453a8c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.447101657+00:00 stderr F I0120 10:58:08.447077 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" virtual=false
2026-01-20T10:58:08.451653273+00:00 stderr F I0120 10:58:08.451587 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.451653273+00:00 stderr F I0120 10:58:08.451631 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" virtual=false
2026-01-20T10:58:08.460126858+00:00 stderr F I0120 10:58:08.460048 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: ingress-operator, uid: 10dd3c44-d460-4ab3-b671-c27cf19cd1d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.460126858+00:00 stderr F I0120 10:58:08.460117 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" virtual=false
2026-01-20T10:58:08.462443177+00:00 stderr F I0120 10:58:08.462387 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.462443177+00:00 stderr F I0120 10:58:08.462416 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" virtual=false
2026-01-20T10:58:08.470129553+00:00 stderr F I0120 10:58:08.470038 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: ingress-operator, uid: 10dd3c44-d460-4ab3-b671-c27cf19cd1d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.470129553+00:00 stderr F I0120 10:58:08.470038 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: console-operator, uid: 4c176e02-35d3-42a6-9509-b0af233a3594]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.470129553+00:00 stderr F I0120 10:58:08.470093 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" virtual=false
2026-01-20T10:58:08.474688908+00:00 stderr F I0120 10:58:08.474660 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: d9bf7097-7fa5-4e09-8381-9bfaa22c2f7a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.474785360+00:00 stderr F I0120 10:58:08.474755 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" virtual=false
2026-01-20T10:58:08.489226157+00:00 stderr F I0120 10:58:08.489186 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.489314959+00:00 stderr F I0120 10:58:08.489299 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-node-limited, uid: f845abac-4b6d-49f7-ad14-ea5802490663]" virtual=false
2026-01-20T10:58:08.495545378+00:00 stderr F I0120 10:58:08.495505 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.495633390+00:00 stderr F I0120 10:58:08.495619 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" virtual=false
2026-01-20T10:58:08.506107446+00:00 stderr F I0120 10:58:08.503575 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.506107446+00:00 stderr F I0120 10:58:08.503614 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" virtual=false
2026-01-20T10:58:08.506684541+00:00 stderr F I0120 10:58:08.506656 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.506745813+00:00 stderr F I0120 10:58:08.506733 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator-namespaced, uid: 64176e20-57cd-450a-82b8-c734edcf2055]" virtual=false
2026-01-20T10:58:08.513479783+00:00 stderr F I0120 10:58:08.513450 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.513497074+00:00 stderr F I0120 10:58:08.513484 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" virtual=false
2026-01-20T10:58:08.522491052+00:00 stderr F I0120 10:58:08.522470 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.522547203+00:00 stderr F I0120 10:58:08.522535 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" virtual=false
2026-01-20T10:58:08.532276071+00:00 stderr F I0120 10:58:08.532238 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.532276071+00:00 stderr F I0120 10:58:08.532270 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" virtual=false
2026-01-20T10:58:08.540349076+00:00 stderr F I0120 10:58:08.540298 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.540368207+00:00 stderr F I0120 10:58:08.540345 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork-v2, uid: ad07fee7-e5d1-4a09-8086-698133feb11a]" virtual=false
2026-01-20T10:58:08.549091908+00:00 stderr F I0120 10:58:08.548015 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.549091908+00:00 stderr F I0120 10:58:08.548040 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-diagnostics, uid: 6e739f62-fe8b-4f65-8df1-a50711c9b496]" virtual=false
2026-01-20T10:58:08.550038542+00:00 stderr F I0120 10:58:08.550013 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.550115324+00:00 stderr F I0120 10:58:08.550102 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-config-operator:cluster-reader, uid: 3162d1be-8f00-4957-8db6-a2b1361aaedf]" virtual=false
2026-01-20T10:58:08.555683485+00:00 stderr F I0120 10:58:08.555637 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.555711716+00:00 stderr F I0120 10:58:08.555695 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator, uid: 99f02060-bf1c-484c-8c17-2b1243086e3f]" virtual=false
2026-01-20T10:58:08.562080287+00:00 stderr F I0120 10:58:08.562034 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.562146159+00:00 stderr F I0120 10:58:08.562128 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ingress-operator, uid: e3a75357-5a06-4934-a482-0d77f1fbb9b2]" virtual=false
2026-01-20T10:58:08.566918541+00:00 stderr F I0120 10:58:08.566877 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.566937841+00:00 stderr F I0120 10:58:08.566921 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator:cluster-reader, uid: b6fdcbdf-86e5-40c5-9a3a-5dc05d389718]" virtual=false
2026-01-20T10:58:08.578842173+00:00 stderr F I0120 10:58:08.578795 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.578864714+00:00 stderr F I0120 10:58:08.578842 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: fab3e514-6318-4108-8ef6-378758fbbc7e]" virtual=false
2026-01-20T10:58:08.579447549+00:00 stderr F I0120 10:58:08.579415 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.579447549+00:00 stderr F I0120 10:58:08.579442 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-proxy-reader, uid: 4f462e59-f589-4aa7-8140-854302a78457]" virtual=false
2026-01-20T10:58:08.586572880+00:00 stderr F I0120 10:58:08.586548 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.586633311+00:00 stderr F I0120 10:58:08.586622 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-samples-operator:cluster-reader, uid: b1a93bf7-4bf4-4d81-a69b-76fb07155d62]" virtual=false
2026-01-20T10:58:08.591260299+00:00 stderr F I0120 10:58:08.591209 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.591277950+00:00 stderr F I0120 10:58:08.591265 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:privileged, uid: 301bd3c5-2517-462f-bb39-9758c8065aa4]" virtual=false
2026-01-20T10:58:08.594717557+00:00 stderr F I0120 10:58:08.594662 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.594717557+00:00 stderr F I0120 10:58:08.594696 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: ff646139-2b95-4409-9ed8-321d5912f92e]" virtual=false
2026-01-20T10:58:08.604354431+00:00 stderr F I0120 10:58:08.604317 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.604386652+00:00 stderr F I0120 10:58:08.604349 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers, uid: 9206794b-b0af-49c1-a4fb-787d45f3f1f8]" virtual=false
2026-01-20T10:58:08.608392364+00:00 stderr F I0120 10:58:08.608361 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.608476146+00:00 stderr F I0120 10:58:08.608459 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation-aggregation, uid: 726262c4-2098-4379-a985-90474531ff53]" virtual=false
2026-01-20T10:58:08.617661489+00:00 stderr F I0120 10:58:08.617597 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-node-limited, uid: f845abac-4b6d-49f7-ad14-ea5802490663]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.617661489+00:00 stderr F I0120 10:58:08.617632 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus, uid: 6e9e0a19-df2a-431c-a2d8-e73b63d4f45c]" virtual=false
2026-01-20T10:58:08.627777427+00:00 stderr F I0120 10:58:08.627711 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.627777427+00:00 stderr F I0120 10:58:08.627765 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: prometheus-k8s-scheduler-resources, uid: c2078daf-5c4f-48e6-b914-b4ca03df3cb9]" virtual=false
2026-01-20T10:58:08.635114492+00:00 stderr F I0120 10:58:08.635027 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.635232285+00:00 stderr F I0120 10:58:08.635206 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostaccess, uid: ae10afa1-d594-42ec-9bf7-626463cb1630]" virtual=false
2026-01-20T10:58:08.641198788+00:00 stderr F I0120 10:58:08.641135 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator-namespaced, uid: 64176e20-57cd-450a-82b8-c734edcf2055]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.641344731+00:00 stderr F I0120 10:58:08.641328 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-extensions-reader, uid: ecabf9a2-18e2-45e0-9591-e2a9b4363684]" virtual=false
2026-01-20T10:58:08.651098309+00:00 stderr F I0120 10:58:08.651015 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.651253783+00:00 stderr F I0120 10:58:08.651232 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-iptables-alerter, uid: 827450c1-83c3-45d0-9aa5-fbb86a2ae6a5]" virtual=false
2026-01-20T10:58:08.659268626+00:00 stderr F I0120 10:58:08.659177 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.659321938+00:00 stderr F I0120 10:58:08.659269 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: whereabouts-cni, uid: 88054557-80c4-48d2-a55d-ab10752a9270]" virtual=false
2026-01-20T10:58:08.668832169+00:00 stderr F I0120 10:58:08.668790 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.668932912+00:00 stderr F I0120 10:58:08.668916 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:controller:operator-lifecycle-manager, uid: 2b5865fd-ac96-41b7-96bc-48cce5469705]" virtual=false
2026-01-20T10:58:08.674669388+00:00 stderr F I0120 10:58:08.674639 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork-v2, uid: ad07fee7-e5d1-4a09-8086-698133feb11a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.674835942+00:00 stderr F I0120 10:58:08.674819 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-view, uid: 764fb58b-036b-45ae-97f9-281416353daf]" virtual=false
2026-01-20T10:58:08.678379782+00:00 stderr F I0120 10:58:08.678316 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-diagnostics, uid: 6e739f62-fe8b-4f65-8df1-a50711c9b496]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.678418123+00:00 stderr F I0120 10:58:08.678377 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler, uid: 329dfd1b-be44-4c8b-b72d-d32f8fab8705]" virtual=false
2026-01-20T10:58:08.687087443+00:00 stderr F I0120 10:58:08.686999 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-config-operator:cluster-reader, uid: 3162d1be-8f00-4957-8db6-a2b1361aaedf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.687135194+00:00 stderr F I0120 10:58:08.687054 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator, uid: 71adaeb8-9332-4dc7-b8b2-52415e589919]" virtual=false
2026-01-20T10:58:08.691200607+00:00 stderr F I0120 10:58:08.691162 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator, uid: 99f02060-bf1c-484c-8c17-2b1243086e3f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.691284260+00:00 stderr F I0120 10:58:08.691270 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork, uid: 7fa3d83a-0d76-405a-a7b9-190741931396]" virtual=false
2026-01-20T10:58:08.693737212+00:00 stderr F I0120 10:58:08.693674 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ingress-operator, uid: e3a75357-5a06-4934-a482-0d77f1fbb9b2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.693760022+00:00 stderr F I0120 10:58:08.693729 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: metrics-daemon-role, uid: 199383b0-9cef-4533-81a7-f22f011a69a5]" virtual=false
2026-01-20T10:58:08.700813842+00:00 stderr F I0120 10:58:08.700746 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator:cluster-reader, uid: b6fdcbdf-86e5-40c5-9a3a-5dc05d389718]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.700876993+00:00 stderr F I0120 10:58:08.700812 1 garbagecollector.go:549] "Processing item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]" virtual=false
2026-01-20T10:58:08.708265951+00:00 stderr F I0120 10:58:08.708192 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: fab3e514-6318-4108-8ef6-378758fbbc7e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.708296592+00:00 stderr F I0120 10:58:08.708264 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: marketplace-operator, uid: f411d32e-1a09-441c-99dd-7b75e5b87298]" virtual=false
2026-01-20T10:58:08.714645613+00:00 stderr F I0120 10:58:08.714592 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-proxy-reader, uid: 4f462e59-f589-4aa7-8140-854302a78457]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.714672724+00:00 stderr F I0120 10:58:08.714642 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console, uid: 8c434f71-b240-42c1-88a4-a6fbc903b388]" virtual=false
2026-01-20T10:58:08.717606578+00:00 stderr F I0120 10:58:08.717569 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-samples-operator:cluster-reader, uid: b1a93bf7-4bf4-4d81-a69b-76fb07155d62]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.717624789+00:00 stderr F I0120 10:58:08.717616 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-dns-operator, uid: d1456a44-d6b7-4418-911a-f4bbaf6427c4]" virtual=false
2026-01-20T10:58:08.723054567+00:00 stderr F I0120 10:58:08.723008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:privileged, uid: 301bd3c5-2517-462f-bb39-9758c8065aa4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.723090028+00:00 stderr F I0120 10:58:08.723052 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:controller:machine-approver, uid: 672519e3-9ef9-4c9e-a091-7c4b0d3de1c4]" virtual=false
2026-01-20T10:58:08.727854919+00:00 stderr F I0120 10:58:08.727803 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: ff646139-2b95-4409-9ed8-321d5912f92e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.727905580+00:00 stderr F I0120 10:58:08.727877 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: control-plane-machine-set-operator, uid: a39c14db-78b4-4b58-8651-479169195296]" virtual=false
2026-01-20T10:58:08.734751784+00:00 stderr F I0120 10:58:08.734698 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers, uid: 9206794b-b0af-49c1-a4fb-787d45f3f1f8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.734775384+00:00 stderr F I0120 10:58:08.734759 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation, uid: 4f414358-8053-4a3d-b8f5-95980f16dda0]" virtual=false
2026-01-20T10:58:08.741335011+00:00 stderr F I0120 10:58:08.741283 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation-aggregation, uid: 726262c4-2098-4379-a985-90474531ff53]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.741365221+00:00 stderr F I0120 10:58:08.741334 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: registry-monitoring, uid: df05d985-dbd7-487d-b14e-5f77e9a774d1]" virtual=false
2026-01-20T10:58:08.748441121+00:00 stderr F I0120 10:58:08.748367 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus, uid: 6e9e0a19-df2a-431c-a2d8-e73b63d4f45c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.748441121+00:00 stderr F I0120 10:58:08.748419 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted, uid: 75ded65f-c56d-483a-aa78-9a3dfb682f3a]" virtual=false
2026-01-20T10:58:08.763319179+00:00 stderr F I0120 10:58:08.763150 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: prometheus-k8s-scheduler-resources, uid: c2078daf-5c4f-48e6-b914-b4ca03df3cb9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.763319179+00:00 stderr F I0120 10:58:08.763234 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator:cluster-reader, uid: 0632bc2e-8683-447f-866f-dc2c3f20dbaa]" virtual=false
2026-01-20T10:58:08.763319179+00:00 stderr F I0120 10:58:08.763245 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]"
2026-01-20T10:58:08.763319179+00:00 stderr F I0120 10:58:08.763306 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: helm-chartrepos-viewer, uid: d3d3a0df-2feb-4db3-b436-5b73b4c151eb]" virtual=false
2026-01-20T10:58:08.770402449+00:00 stderr F I0120 10:58:08.770319 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostaccess, uid: ae10afa1-d594-42ec-9bf7-626463cb1630]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.770402449+00:00 stderr F I0120 10:58:08.770379 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted-v2, uid: 32267ecd-433d-41f4-a3f0-a0b5b8f32162]" virtual=false
2026-01-20T10:58:08.775989451+00:00 stderr F I0120 10:58:08.775915 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-extensions-reader, uid: ecabf9a2-18e2-45e0-9591-e2a9b4363684]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.775989451+00:00 stderr F I0120 10:58:08.775980 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 8ee51cdb-bf4d-4e61-9535-b23c9dd08843]" virtual=false
2026-01-20T10:58:08.783498871+00:00 stderr F I0120 10:58:08.783432 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-iptables-alerter, uid: 827450c1-83c3-45d0-9aa5-fbb86a2ae6a5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.783498871+00:00 stderr F I0120 10:58:08.783485 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot, uid: b76bf753-d87c-45a9-91db-eea6f389f4d5]" virtual=false
2026-01-20T10:58:08.791924575+00:00 stderr F I0120 10:58:08.791858 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: whereabouts-cni, uid: 88054557-80c4-48d2-a55d-ab10752a9270]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:08.791924575+00:00 stderr F I0120 10:58:08.791904 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-edit, uid: 945e6f8b-1604-4556-a7bf-195fa62d6c14]" virtual=false
2026-01-20T10:58:08.800305448+00:00 stderr F I0120 10:58:08.800216 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:controller:operator-lifecycle-manager, uid: 2b5865fd-ac96-41b7-96bc-48cce5469705]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:08.800305448+00:00 stderr F I0120 10:58:08.800286 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator-ext-remediation, uid: b645a3b5-011d-4a2c-ac12-008057781b22]" virtual=false
2026-01-20T10:58:08.805611193+00:00 stderr F I0120 10:58:08.805549 1 garbagecollector.go:615]
"item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-view, uid: 764fb58b-036b-45ae-97f9-281416353daf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.805611193+00:00 stderr F I0120 10:58:08.805592 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:machine-config-operator:cluster-reader, uid: 1fda91fe-aa05-455a-8865-3034f4e4cff8]" virtual=false 2026-01-20T10:58:08.813690999+00:00 stderr F I0120 10:58:08.813602 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler, uid: 329dfd1b-be44-4c8b-b72d-d32f8fab8705]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.813690999+00:00 stderr F I0120 10:58:08.813643 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount, uid: 43c793ea-8f8d-4996-bb74-5e312be44b1a]" virtual=false 2026-01-20T10:58:08.822818210+00:00 stderr F I0120 10:58:08.822745 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator, uid: 71adaeb8-9332-4dc7-b8b2-52415e589919]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.822818210+00:00 stderr F I0120 10:58:08.822790 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: 
multus-admission-controller-webhook, uid: 8512c398-7b2e-4b14-9cdd-dfe72f813153]" virtual=false 2026-01-20T10:58:08.824288518+00:00 stderr F I0120 10:58:08.824229 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork, uid: 7fa3d83a-0d76-405a-a7b9-190741931396]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.824288518+00:00 stderr F I0120 10:58:08.824260 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-image-registry-operator, uid: 6edd083b-28fe-4808-aa2e-67cd5ba297ab]" virtual=false 2026-01-20T10:58:08.825279783+00:00 stderr F I0120 10:58:08.825206 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: metrics-daemon-role, uid: 199383b0-9cef-4533-81a7-f22f011a69a5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.825309744+00:00 stderr F I0120 10:58:08.825278 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator, uid: dbe391a0-135a-4375-82db-699a0d88ce8e]" virtual=false 2026-01-20T10:58:08.841719660+00:00 stderr F I0120 10:58:08.841633 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: marketplace-operator, uid: f411d32e-1a09-441c-99dd-7b75e5b87298]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2026-01-20T10:58:08.841761631+00:00 stderr F I0120 10:58:08.841728 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-operator, uid: dc1b4ca4-3dce-4e4f-b6d5-d095e346f78d]" virtual=false 2026-01-20T10:58:08.850710579+00:00 stderr F I0120 10:58:08.850610 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console, uid: 8c434f71-b240-42c1-88a4-a6fbc903b388]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.850759620+00:00 stderr F I0120 10:58:08.850724 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-kube-rbac-proxy, uid: 5402260d-a88f-4905-b91a-c0ec390d8675]" virtual=false 2026-01-20T10:58:08.851669584+00:00 stderr F I0120 10:58:08.851611 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-dns-operator, uid: d1456a44-d6b7-4418-911a-f4bbaf6427c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.851829768+00:00 stderr F I0120 10:58:08.851794 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:anyuid, uid: 74815ef0-3c58-4271-9c93-5c834b5a10e5]" virtual=false 2026-01-20T10:58:08.856667491+00:00 stderr F I0120 10:58:08.856617 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:controller:machine-approver, uid: 672519e3-9ef9-4c9e-a091-7c4b0d3de1c4]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.856814144+00:00 stderr F I0120 10:58:08.856779 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-node-identity, uid: c11a15d3-53e9-40f6-b868-c05634ec19ff]" virtual=false 2026-01-20T10:58:08.867644719+00:00 stderr F I0120 10:58:08.867549 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: control-plane-machine-set-operator, uid: a39c14db-78b4-4b58-8651-479169195296]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.867644719+00:00 stderr F I0120 10:58:08.867610 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: project-helm-chartrepository-editor, uid: 86c81b4a-a328-46e4-a36e-736a9454eb6d]" virtual=false 2026-01-20T10:58:08.868424209+00:00 stderr F I0120 10:58:08.868335 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation, uid: 4f414358-8053-4a3d-b8f5-95980f16dda0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.868424209+00:00 stderr F I0120 10:58:08.868364 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: operatorhub-config-reader, uid: e2651db3-cf40-4b1c-a7da-ae197e936593]" virtual=false 2026-01-20T10:58:08.877256203+00:00 stderr F I0120 10:58:08.877186 1 garbagecollector.go:615] "item has at least one 
existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: registry-monitoring, uid: df05d985-dbd7-487d-b14e-5f77e9a774d1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.877256203+00:00 stderr F I0120 10:58:08.877247 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator, uid: e156be76-80be-4eff-8005-7c24938303ae]" virtual=false 2026-01-20T10:58:08.883670006+00:00 stderr F I0120 10:58:08.883622 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted, uid: 75ded65f-c56d-483a-aa78-9a3dfb682f3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.883874581+00:00 stderr F I0120 10:58:08.883846 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-ancillary-tools, uid: 6b5b3d0e-daa2-4431-9b3a-17f2ad98c6cd]" virtual=false 2026-01-20T10:58:08.896209815+00:00 stderr F I0120 10:58:08.896046 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: helm-chartrepos-viewer, uid: d3d3a0df-2feb-4db3-b436-5b73b4c151eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.896209815+00:00 stderr F I0120 10:58:08.896195 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: net-attach-def-project, uid: 
7c380190-b6b7-4bba-8d3d-6bda6c81bc8e]" virtual=false 2026-01-20T10:58:08.902384252+00:00 stderr F I0120 10:58:08.902308 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator:cluster-reader, uid: 0632bc2e-8683-447f-866f-dc2c3f20dbaa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.902495364+00:00 stderr F I0120 10:58:08.902481 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount-anyuid, uid: a16549a7-7bfd-45d7-b114-aea9597226a2]" virtual=false 2026-01-20T10:58:08.902666089+00:00 stderr F I0120 10:58:08.902484 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted-v2, uid: 32267ecd-433d-41f4-a3f0-a0b5b8f32162]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.902676289+00:00 stderr F I0120 10:58:08.902662 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot-v2, uid: 6efcc9e9-c5e4-4315-940a-636bac274a19]" virtual=false 2026-01-20T10:58:08.906338602+00:00 stderr F I0120 10:58:08.906270 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 8ee51cdb-bf4d-4e61-9535-b23c9dd08843]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.906366522+00:00 stderr F I0120 10:58:08.906339 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-cluster-reader, uid: 2d3a058f-4b36-405a-8d7a-38d4fac9bbdf]" virtual=false 2026-01-20T10:58:08.918748688+00:00 stderr F I0120 10:58:08.918630 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot, uid: b76bf753-d87c-45a9-91db-eea6f389f4d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.918748688+00:00 stderr F I0120 10:58:08.918730 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" virtual=false 2026-01-20T10:58:08.932980639+00:00 stderr F I0120 10:58:08.932874 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-edit, uid: 945e6f8b-1604-4556-a7bf-195fa62d6c14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.932980639+00:00 stderr F I0120 10:58:08.932945 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" virtual=false 2026-01-20T10:58:08.940279984+00:00 stderr F I0120 10:58:08.940208 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:machine-config-operator:cluster-reader, uid: 1fda91fe-aa05-455a-8865-3034f4e4cff8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.940452568+00:00 stderr F I0120 10:58:08.940397 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]" virtual=false 2026-01-20T10:58:08.942030939+00:00 stderr F I0120 10:58:08.941954 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator-ext-remediation, uid: b645a3b5-011d-4a2c-ac12-008057781b22]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.942091850+00:00 stderr F I0120 10:58:08.942026 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-machine-approver, name: machine-approver-7874c8775, uid: c132add1-7a11-405a-a206-ad555920dbc3]" virtual=false 2026-01-20T10:58:08.947466257+00:00 stderr F I0120 10:58:08.947390 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount, uid: 43c793ea-8f8d-4996-bb74-5e312be44b1a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.947466257+00:00 stderr F I0120 10:58:08.947442 1 garbagecollector.go:549] "Processing item" 
item="[apps/v1/ReplicaSet, namespace: openshift-cluster-version, name: cluster-version-operator-6d5d9649f6, uid: c82e10c3-bd71-43ae-bdff-a5dce77ff096]" virtual=false 2026-01-20T10:58:08.947923898+00:00 stderr F I0120 10:58:08.947860 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-admission-controller-webhook, uid: 8512c398-7b2e-4b14-9cdd-dfe72f813153]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.947943669+00:00 stderr F I0120 10:58:08.947922 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-7978d7d7f6, uid: 89e69b0e-6261-49ff-80e6-98dbd196bd87]" virtual=false 2026-01-20T10:58:08.958018185+00:00 stderr F I0120 10:58:08.957935 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-image-registry-operator, uid: 6edd083b-28fe-4808-aa2e-67cd5ba297ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.958225310+00:00 stderr F I0120 10:58:08.958201 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-service-ca-operator, name: service-ca-operator-546b4f8984, uid: 9edef8f0-3959-4861-8d92-af5228053363]" virtual=false 2026-01-20T10:58:08.959279837+00:00 stderr F I0120 10:58:08.959211 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator, uid: dbe391a0-135a-4375-82db-699a0d88ce8e]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.959279837+00:00 stderr F I0120 10:58:08.959265 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: olm-operator-6d8474f75f, uid: d58f2cd8-620a-4650-8e4f-65f3b094b59b]" virtual=false 2026-01-20T10:58:08.978318140+00:00 stderr F I0120 10:58:08.978210 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-kube-rbac-proxy, uid: 5402260d-a88f-4905-b91a-c0ec390d8675]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.978318140+00:00 stderr F I0120 10:58:08.978283 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: cert-manager, name: cert-manager-cainjector-676dd9bd64, uid: 67e14b54-4172-4263-86d0-f2f044841ede]" virtual=false 2026-01-20T10:58:08.978592307+00:00 stderr F I0120 10:58:08.978547 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-operator, uid: dc1b4ca4-3dce-4e4f-b6d5-d095e346f78d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.978592307+00:00 stderr F I0120 10:58:08.978572 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-image-registry, name: image-registry-75b7bb6564, uid: bedccd10-9902-4a37-a6d8-886a484d944c]" virtual=false 2026-01-20T10:58:08.983164164+00:00 stderr F I0120 10:58:08.983044 1 garbagecollector.go:596] "item doesn't have 
an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" 2026-01-20T10:58:08.983201705+00:00 stderr F I0120 10:58:08.983167 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" virtual=false 2026-01-20T10:58:08.986438087+00:00 stderr F I0120 10:58:08.986382 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:anyuid, uid: 74815ef0-3c58-4271-9c93-5c834b5a10e5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.986438087+00:00 stderr F I0120 10:58:08.986409 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-machine-api, name: control-plane-machine-set-operator-649bd778b4, uid: 8b73a9c4-0835-468c-8d91-7b4c54aef119]" virtual=false 2026-01-20T10:58:08.987675828+00:00 stderr F I0120 10:58:08.987596 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-node-identity, uid: c11a15d3-53e9-40f6-b868-c05634ec19ff]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:08.987675828+00:00 stderr F I0120 10:58:08.987626 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]" virtual=false 2026-01-20T10:58:08.993211178+00:00 stderr F I0120 
10:58:08.993110 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: operatorhub-config-reader, uid: e2651db3-cf40-4b1c-a7da-ae197e936593]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.993211178+00:00 stderr F I0120 10:58:08.993168 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" virtual=false 2026-01-20T10:58:08.996945644+00:00 stderr F I0120 10:58:08.996889 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" 2026-01-20T10:58:08.997134199+00:00 stderr F I0120 10:58:08.997100 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" virtual=false 2026-01-20T10:58:08.997565089+00:00 stderr F I0120 10:58:08.997488 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: project-helm-chartrepository-editor, uid: 86c81b4a-a328-46e4-a36e-736a9454eb6d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:08.997565089+00:00 stderr F I0120 10:58:08.997517 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-ingress-operator, name: ingress-operator-7d46d5bb6d, uid: 8836d337-e669-4b70-84af-a8e408527030]" virtual=false 2026-01-20T10:58:09.003624723+00:00 stderr F I0120 
10:58:09.003584 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]" 2026-01-20T10:58:09.003804368+00:00 stderr F I0120 10:58:09.003755 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]" virtual=false 2026-01-20T10:58:09.004498565+00:00 stderr F I0120 10:58:09.004408 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator, uid: e156be76-80be-4eff-8005-7c24938303ae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.004498565+00:00 stderr F I0120 10:58:09.004460 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-machine-config-operator, name: machine-config-controller-6df6df6b6b, uid: b8a1c4b8-fe9f-4171-a66c-6b7abb807274]" virtual=false 2026-01-20T10:58:09.017590238+00:00 stderr F I0120 10:58:09.017530 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-ancillary-tools, uid: 6b5b3d0e-daa2-4431-9b3a-17f2ad98c6cd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.017590238+00:00 stderr F I0120 10:58:09.017576 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-oauth-apiserver, name: apiserver-69c565c9b6, uid: 21e763e9-1eef-4800-be82-6a4db146f2d1]" virtual=false 
2026-01-20T10:58:09.026845603+00:00 stderr F I0120 10:58:09.026803 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: net-attach-def-project, uid: 7c380190-b6b7-4bba-8d3d-6bda6c81bc8e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.026877453+00:00 stderr F I0120 10:58:09.026847 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane-77c846df58, uid: 435781ac-1a0b-430f-bb65-abd6c1ca968f]" virtual=false 2026-01-20T10:58:09.036485548+00:00 stderr F I0120 10:58:09.036386 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount-anyuid, uid: a16549a7-7bfd-45d7-b114-aea9597226a2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.036485548+00:00 stderr F I0120 10:58:09.036446 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-route-controller-manager, name: route-controller-manager-776b8b7477, uid: 6bdcaed9-ad1b-4cd3-b947-bcc889a39f90]" virtual=false 2026-01-20T10:58:09.038855178+00:00 stderr F I0120 10:58:09.038790 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot-v2, uid: 6efcc9e9-c5e4-4315-940a-636bac274a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.038855178+00:00 stderr F I0120 
10:58:09.038827 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: cert-manager, name: cert-manager-webhook-855f577f79, uid: 3305f8ae-25d6-412c-a622-01ac79405495]" virtual=false
2026-01-20T10:58:09.040597872+00:00 stderr F I0120 10:58:09.040572 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-cluster-reader, uid: 2d3a058f-4b36-405a-8d7a-38d4fac9bbdf]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.040615832+00:00 stderr F I0120 10:58:09.040595 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-authentication-operator, name: authentication-operator-7cc7ff75d5, uid: 988a2264-fbab-4296-93d4-de8bc5a38df5]" virtual=false
2026-01-20T10:58:09.046461601+00:00 stderr F I0120 10:58:09.046413 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]"
2026-01-20T10:58:09.046461601+00:00 stderr F I0120 10:58:09.046439 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-config-operator, name: openshift-config-operator-77658b5b66, uid: 9cef21ef-8176-4c79-a1b0-a266ead9d4f6]" virtual=false
2026-01-20T10:58:09.052714010+00:00 stderr F I0120 10:58:09.052685 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]"
2026-01-20T10:58:09.052714010+00:00 stderr F I0120 10:58:09.052707 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]" virtual=false
2026-01-20T10:58:09.067395973+00:00 stderr F I0120 10:58:09.067152 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]"
2026-01-20T10:58:09.067395973+00:00 stderr F I0120 10:58:09.067202 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-storage-version-migrator, name: migrator-f7c6d88df, uid: dfdeca8a-71c7-4f55-ad85-86a3d4f6ef9d]" virtual=false
2026-01-20T10:58:09.073411676+00:00 stderr F I0120 10:58:09.073335 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-machine-approver, name: machine-approver-7874c8775, uid: c132add1-7a11-405a-a206-ad555920dbc3]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"machine-approver","uid":"7ee99584-ec5a-490c-bc55-11ed3e60244a","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.073411676+00:00 stderr F I0120 10:58:09.073362 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-network-diagnostics, name: network-check-source-5c5478f8c, uid: 1983421b-516e-40bf-be2f-d5ca39e5e8f0]" virtual=false
2026-01-20T10:58:09.075961151+00:00 stderr F I0120 10:58:09.075918 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-version, name: cluster-version-operator-6d5d9649f6, uid: c82e10c3-bd71-43ae-bdff-a5dce77ff096]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cluster-version-operator","uid":"b5151a8e-7df7-4f3b-9ada-e1cfd0badda9","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.075961151+00:00 stderr F I0120 10:58:09.075942 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-apiserver, name: apiserver-7fc54b8dd7, uid: 5f61a04f-ba2f-40ae-a540-5a3f540b9dc2]" virtual=false
2026-01-20T10:58:09.079117271+00:00 stderr F I0120 10:58:09.079013 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-7978d7d7f6, uid: 89e69b0e-6261-49ff-80e6-98dbd196bd87]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"openshift-controller-manager-operator","uid":"945d64e1-c873-4e9d-b5ff-47904d2b347f","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.079117271+00:00 stderr F I0120 10:58:09.079036 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-6f6cb54958, uid: 05830690-2017-4811-86f7-90459442ac07]" virtual=false
2026-01-20T10:58:09.086134108+00:00 stderr F I0120 10:58:09.086048 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-service-ca-operator, name: service-ca-operator-546b4f8984, uid: 9edef8f0-3959-4861-8d92-af5228053363]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"service-ca-operator","uid":"4e10c137-983b-49c4-b977-7d19390e427b","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.086134108+00:00 stderr F I0120 10:58:09.086106 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-5d9b995f6b, uid: 10507f34-78cd-4b53-87ae-2511ad70bc72]" virtual=false
2026-01-20T10:58:09.089531665+00:00 stderr F I0120 10:58:09.089444 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: olm-operator-6d8474f75f, uid: d58f2cd8-620a-4650-8e4f-65f3b094b59b]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"olm-operator","uid":"e9ed1986-ebb6-4ce1-af63-63b3f002df9e","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.089531665+00:00 stderr F I0120 10:58:09.089512 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator-686c6c748c, uid: 2625a7a1-fddd-409b-a1b0-0dcea9e8febb]" virtual=false
2026-01-20T10:58:09.104220918+00:00 stderr F I0120 10:58:09.104146 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: cert-manager, name: cert-manager-cainjector-676dd9bd64, uid: 67e14b54-4172-4263-86d0-f2f044841ede]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cert-manager-cainjector","uid":"ffeb61c7-b2d2-4eb9-bd47-9472ba4ad4af","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.104220918+00:00 stderr F I0120 10:58:09.104199 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 17da81ae-ac8b-4941-aff1-3d2bf3f00608]" virtual=false
2026-01-20T10:58:09.109323228+00:00 stderr F I0120 10:58:09.109290 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-image-registry, name: image-registry-75b7bb6564, uid: bedccd10-9902-4a37-a6d8-886a484d944c]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"image-registry","uid":"ff5a6fbd-d479-457d-86ba-428162a82d5c","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.109344809+00:00 stderr F I0120 10:58:09.109330 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-authentication, name: oauth-openshift-74fc7c67cc, uid: 19328877-ae29-4751-a48b-fd7ef5663e0e]" virtual=false
2026-01-20T10:58:09.116039008+00:00 stderr F I0120 10:58:09.115984 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-machine-api, name: control-plane-machine-set-operator-649bd778b4, uid: 8b73a9c4-0835-468c-8d91-7b4c54aef119]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"control-plane-machine-set-operator","uid":"e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.116039008+00:00 stderr F I0120 10:58:09.116026 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-dns-operator, name: dns-operator-75f687757b, uid: 89316280-b23a-4936-afc9-ac76646fd5b0]" virtual=false
2026-01-20T10:58:09.118452350+00:00 stderr F I0120 10:58:09.118420 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]"
2026-01-20T10:58:09.118452350+00:00 stderr F I0120 10:58:09.118443 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-etcd-operator, name: etcd-operator-768d5b5d86, uid: 5f81e406-3d11-4aea-acf0-1e40fdd29de2]" virtual=false
2026-01-20T10:58:09.123320594+00:00 stderr F I0120 10:58:09.123272 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.123346834+00:00 stderr F I0120 10:58:09.123332 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]" virtual=false
2026-01-20T10:58:09.126549575+00:00 stderr F I0120 10:58:09.126505 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}]
2026-01-20T10:58:09.126549575+00:00 stderr F I0120 10:58:09.126539 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" virtual=false
2026-01-20T10:58:09.129320106+00:00 stderr F I0120 10:58:09.129281 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-ingress-operator, name: ingress-operator-7d46d5bb6d, uid: 8836d337-e669-4b70-84af-a8e408527030]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"ingress-operator","uid":"a575f0c7-77ce-41f4-a832-11dcd8374f9e","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.129320106+00:00 stderr F I0120 10:58:09.129301 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-marketplace, name: marketplace-operator-8b455464d, uid: facbd59b-fbe9-42f6-9512-9219bc0f7c59]" virtual=false
2026-01-20T10:58:09.136727714+00:00 stderr F I0120 10:58:09.136670 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-machine-config-operator, name: machine-config-controller-6df6df6b6b, uid: b8a1c4b8-fe9f-4171-a66c-6b7abb807274]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"machine-config-controller","uid":"0b1d22b6-78ae-4d00-94ad-381755e08383","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.136753515+00:00 stderr F I0120 10:58:09.136740 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-multus, name: multus-admission-controller-6c7c885997, uid: cc1283b1-c4b8-45e0-98f4-c53dae453433]" virtual=false
2026-01-20T10:58:09.149524219+00:00 stderr F I0120 10:58:09.149469 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-oauth-apiserver, name: apiserver-69c565c9b6, uid: 21e763e9-1eef-4800-be82-6a4db146f2d1]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"apiserver","uid":"8ac71ab9-8c3d-4c89-9962-205eed0149d5","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.149552950+00:00 stderr F I0120 10:58:09.149524 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-7c88c4c865, uid: 769e5acf-9de5-406b-a06b-bd222bbd75ef]" virtual=false
2026-01-20T10:58:09.160918678+00:00 stderr F I0120 10:58:09.160840 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane-77c846df58, uid: 435781ac-1a0b-430f-bb65-abd6c1ca968f]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"ovnkube-control-plane","uid":"346798bd-68de-4941-ab69-8a5a56dd55f7","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.160918678+00:00 stderr F I0120 10:58:09.160890 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" virtual=false
2026-01-20T10:58:09.162501819+00:00 stderr F I0120 10:58:09.162463 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-route-controller-manager, name: route-controller-manager-776b8b7477, uid: 6bdcaed9-ad1b-4cd3-b947-bcc889a39f90]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"route-controller-manager","uid":"db9b5a0e-2471-4a94-bbe5-01c34efec882","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.162519679+00:00 stderr F I0120 10:58:09.162505 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-ingress-canary, name: ingress-canary-2vhcn, uid: 0b5d722a-1123-4935-9740-52a08d018bc9]" virtual=false
2026-01-20T10:58:09.166714405+00:00 stderr F I0120 10:58:09.166674 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: cert-manager, name: cert-manager-webhook-855f577f79, uid: 3305f8ae-25d6-412c-a622-01ac79405495]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cert-manager-webhook","uid":"c42c8502-cba5-4edd-b9d0-f209e94b81d7","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.166731966+00:00 stderr F I0120 10:58:09.166711 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-ingress, name: router-default-5c9bf7bc58, uid: 0efb51ee-9643-4b0f-9d44-4a886579a1c5]" virtual=false
2026-01-20T10:58:09.169260170+00:00 stderr F I0120 10:58:09.169233 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 17da81ae-ac8b-4941-aff1-3d2bf3f00608]"
2026-01-20T10:58:09.169277691+00:00 stderr F I0120 10:58:09.169267 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" virtual=false
2026-01-20T10:58:09.173528979+00:00 stderr F I0120 10:58:09.173491 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-authentication-operator, name: authentication-operator-7cc7ff75d5, uid: 988a2264-fbab-4296-93d4-de8bc5a38df5]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"authentication-operator","uid":"5e81203d-c202-48ae-b652-35b68d7e5586","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.173549140+00:00 stderr F I0120 10:58:09.173540 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]" virtual=false
2026-01-20T10:58:09.178977197+00:00 stderr F I0120 10:58:09.178931 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-config-operator, name: openshift-config-operator-77658b5b66, uid: 9cef21ef-8176-4c79-a1b0-a266ead9d4f6]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"openshift-config-operator","uid":"46cebc51-d29e-4081-9edb-d9f437810b86","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.178995158+00:00 stderr F I0120 10:58:09.178980 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-network-operator, name: network-operator-767c585db5, uid: 032f7ffa-36f3-4edf-882a-6a236dc81772]" virtual=false
2026-01-20T10:58:09.188757335+00:00 stderr F I0120 10:58:09.188696 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]"
2026-01-20T10:58:09.188757335+00:00 stderr F I0120 10:58:09.188732 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator-bc474d5d6, uid: 47558afd-87c6-4c87-9f4f-0cedb5bf1348]" virtual=false
2026-01-20T10:58:09.192844190+00:00 stderr F I0120 10:58:09.192802 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]"
2026-01-20T10:58:09.192844190+00:00 stderr F I0120 10:58:09.192827 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-console-operator, name: console-operator-5dbbc74dc9, uid: f1923bf4-180b-4023-9749-8f4ab7def832]" virtual=false
2026-01-20T10:58:09.199613581+00:00 stderr F I0120 10:58:09.199375 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-storage-version-migrator, name: migrator-f7c6d88df, uid: dfdeca8a-71c7-4f55-ad85-86a3d4f6ef9d]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"migrator","uid":"88da59ff-e1b0-4998-b48b-d9b2e9bee2ae","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.199613581+00:00 stderr F I0120 10:58:09.199410 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-console, name: console-644bb77b49, uid: cb02675b-d6fb-40e7-b7d7-0c82671355c4]" virtual=false
2026-01-20T10:58:09.206733563+00:00 stderr F I0120 10:58:09.206694 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-network-diagnostics, name: network-check-source-5c5478f8c, uid: 1983421b-516e-40bf-be2f-d5ca39e5e8f0]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"network-check-source","uid":"5694fe8b-b5a5-4c14-bc2c-e30718ec8465","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.206755313+00:00 stderr F I0120 10:58:09.206742 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" virtual=false
2026-01-20T10:58:09.209580755+00:00 stderr F I0120 10:58:09.209538 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-apiserver, name: apiserver-7fc54b8dd7, uid: 5f61a04f-ba2f-40ae-a540-5a3f540b9dc2]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"apiserver","uid":"a424780c-5ff8-49aa-b616-57c2d7958f81","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.209580755+00:00 stderr F I0120 10:58:09.209568 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: cert-manager, name: cert-manager-758df9885c, uid: 531fcbf9-d0b3-42a6-8574-841297f51ed6]" virtual=false
2026-01-20T10:58:09.212576971+00:00 stderr F I0120 10:58:09.212526 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-6f6cb54958, uid: 05830690-2017-4811-86f7-90459442ac07]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"kube-controller-manager-operator","uid":"8a9ccf98-e60f-4580-94d2-1560cf66cd74","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.212576971+00:00 stderr F I0120 10:58:09.212560 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-78d54458c4, uid: 0653d8bb-773a-4180-8163-2e0e8b1868bc]" virtual=false
2026-01-20T10:58:09.218910462+00:00 stderr F I0120 10:58:09.218832 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-5d9b995f6b, uid: 10507f34-78cd-4b53-87ae-2511ad70bc72]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"openshift-kube-scheduler-operator","uid":"f13e36c5-b283-4235-867d-e2ae26d7fa2a","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.218910462+00:00 stderr F I0120 10:58:09.218896 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" virtual=false
2026-01-20T10:58:09.222645086+00:00 stderr F I0120 10:58:09.222585 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator-686c6c748c, uid: 2625a7a1-fddd-409b-a1b0-0dcea9e8febb]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"kube-storage-version-migrator-operator","uid":"59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.222666037+00:00 stderr F I0120 10:58:09.222639 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" virtual=false
2026-01-20T10:58:09.240212432+00:00 stderr F I0120 10:58:09.240155 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]"
2026-01-20T10:58:09.240212432+00:00 stderr F I0120 10:58:09.240199 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-service-ca, name: service-ca-666f99b6f, uid: 8e1c6ac1-0ab1-48bc-9de4-83068455da2e]" virtual=false
2026-01-20T10:58:09.243344942+00:00 stderr F I0120 10:58:09.243303 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-authentication, name: oauth-openshift-74fc7c67cc, uid: 19328877-ae29-4751-a48b-fd7ef5663e0e]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"oauth-openshift","uid":"5c77e036-b030-4587-8bd4-079bc5e84c22","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.243363523+00:00 stderr F I0120 10:58:09.243346 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-console-operator, name: console-conversion-webhook-595f9969b, uid: 74c18b0a-8cb0-4535-bfe8-26315d38e6da]" virtual=false
2026-01-20T10:58:09.249415336+00:00 stderr F I0120 10:58:09.249363 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-dns-operator, name: dns-operator-75f687757b, uid: 89316280-b23a-4936-afc9-ac76646fd5b0]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"dns-operator","uid":"d7110071-d620-4ed4-b7e1-05c1c458b7f0","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.249415336+00:00 stderr F I0120 10:58:09.249403 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" virtual=false
2026-01-20T10:58:09.252624708+00:00 stderr F I0120 10:58:09.252575 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-etcd-operator, name: etcd-operator-768d5b5d86, uid: 5f81e406-3d11-4aea-acf0-1e40fdd29de2]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"etcd-operator","uid":"fb798e33-6d4c-4082-a60c-594a9db7124a","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.252624708+00:00 stderr F I0120 10:58:09.252604 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" virtual=false
2026-01-20T10:58:09.263338500+00:00 stderr F I0120 10:58:09.263264 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-marketplace, name: marketplace-operator-8b455464d, uid: facbd59b-fbe9-42f6-9512-9219bc0f7c59]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"marketplace-operator","uid":"0b54327c-0c40-46f3-a17b-0f07f095ccb7","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.263338500+00:00 stderr F I0120 10:58:09.263302 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" virtual=false
2026-01-20T10:58:09.270284466+00:00 stderr F I0120 10:58:09.270198 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-multus, name: multus-admission-controller-6c7c885997, uid: cc1283b1-c4b8-45e0-98f4-c53dae453433]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"multus-admission-controller","uid":"3d4e04fb-152e-4b67-b110-7b7edfa1a90a","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.270284466+00:00 stderr F I0120 10:58:09.270227 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-84d578d794, uid: 3c47c3d4-976b-483f-964e-8c46f11ecf15]" virtual=false
2026-01-20T10:58:09.271941599+00:00 stderr F I0120 10:58:09.271895 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]"
2026-01-20T10:58:09.271963059+00:00 stderr F I0120 10:58:09.271940 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-image-registry, name: cluster-image-registry-operator-7769bd8d7d, uid: 9711a43d-4a1b-4a69-9c39-f52785ebbf0d]" virtual=false
2026-01-20T10:58:09.282622670+00:00 stderr F I0120 10:58:09.282572 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-7c88c4c865, uid: 769e5acf-9de5-406b-a06b-bd222bbd75ef]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"openshift-apiserver-operator","uid":"bf9e0c20-07cb-4537-b7f9-efae9f964f5e","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.282622670+00:00 stderr F I0120 10:58:09.282614 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-13, uid: 0355a172-cce0-4ffe-bbfd-f05c340462a5]" virtual=false
2026-01-20T10:58:09.292728036+00:00 stderr F I0120 10:58:09.292628 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}]
2026-01-20T10:58:09.292728036+00:00 stderr F I0120 10:58:09.292658 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-machine-api, name: machine-api-operator-788b7c6b6c, uid: 405b6017-8f85-4458-96f6-8265aa8e42d7]" virtual=false
2026-01-20T10:58:09.295495386+00:00 stderr F I0120 10:58:09.295429 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Pod, namespace: openshift-ingress-canary, name: ingress-canary-2vhcn, uid: 0b5d722a-1123-4935-9740-52a08d018bc9]" owner=[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.295495386+00:00 stderr F I0120 10:58:09.295460 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-console, name: downloads-65476884b9, uid: 8abeeec7-ca20-4671-b076-82bbe28392eb]" virtual=false
2026-01-20T10:58:09.300674568+00:00 stderr F I0120 10:58:09.300589 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-ingress, name: router-default-5c9bf7bc58, uid: 0efb51ee-9643-4b0f-9d44-4a886579a1c5]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"router-default","uid":"9ae4d312-7fc4-4344-ab7a-669da95f56bf","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.300674568+00:00 stderr F I0120 10:58:09.300617 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: packageserver-8464bcc55b, uid: 6ee8a05c-e6f7-4c44-96b6-a21e4fd03368]" virtual=false
2026-01-20T10:58:09.304052144+00:00 stderr F I0120 10:58:09.304024 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.304094165+00:00 stderr F I0120 10:58:09.304055 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" virtual=false
2026-01-20T10:58:09.312843457+00:00 stderr F I0120 10:58:09.312763 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-network-operator, name: network-operator-767c585db5, uid: 032f7ffa-36f3-4edf-882a-6a236dc81772]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"network-operator","uid":"d09aa085-6368-4540-a8c1-4e4c3e9e7344","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.312843457+00:00 stderr F I0120 10:58:09.312809 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-controller-manager, name: controller-manager-778975cc4f, uid: b3903b7b-b29a-4c4c-afdf-e6733a611156]" virtual=false
2026-01-20T10:58:09.316112521+00:00 stderr F I0120 10:58:09.316054 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]"
2026-01-20T10:58:09.316112521+00:00 stderr F I0120 10:58:09.316099 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]" virtual=false
2026-01-20T10:58:09.320033710+00:00 stderr F I0120 10:58:09.319999 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]"
2026-01-20T10:58:09.320053020+00:00 stderr F I0120 10:58:09.320031 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" virtual=false
2026-01-20T10:58:09.323187510+00:00 stderr F I0120 10:58:09.323147 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator-bc474d5d6, uid: 47558afd-87c6-4c87-9f4f-0cedb5bf1348]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cluster-samples-operator","uid":"d75ff515-88a9-4644-8711-c99a391dcc77","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.323187510+00:00 stderr F I0120 10:58:09.323180 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]" virtual=false
2026-01-20T10:58:09.326863463+00:00 stderr F I0120 10:58:09.326820 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-console-operator, name: console-operator-5dbbc74dc9, uid: f1923bf4-180b-4023-9749-8f4ab7def832]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"console-operator","uid":"e977212b-5bb5-4096-9f11-353076a2ebeb","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.326863463+00:00 stderr F I0120 10:58:09.326846 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-857456c46, uid: d5cc066b-fa23-4d13-982e-a52282290255]" virtual=false
2026-01-20T10:58:09.333907102+00:00 stderr F I0120 10:58:09.333837 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-console, name: console-644bb77b49, uid: cb02675b-d6fb-40e7-b7d7-0c82671355c4]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"console","uid":"acc4559a-2586-4482-947a-aae611d8d9f6","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.333926873+00:00 stderr F I0120 10:58:09.333909 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-machine-config-operator, name: machine-config-operator-76788bff89, uid: 77c5a195-a240-44aa-beb3-a4a81134728a]" virtual=false
2026-01-20T10:58:09.344289896+00:00 stderr F I0120 10:58:09.344175 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: cert-manager, name: cert-manager-758df9885c, uid: 531fcbf9-d0b3-42a6-8574-841297f51ed6]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cert-manager","uid":"a1bcf83b-4ea7-4f86-a661-8ffeaec556e6","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.344289896+00:00 stderr F I0120 10:58:09.344230 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" virtual=false
2026-01-20T10:58:09.347198740+00:00 stderr F I0120 10:58:09.347103 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-78d54458c4, uid: 0653d8bb-773a-4180-8163-2e0e8b1868bc]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"kube-apiserver-operator","uid":"1685682f-c45b-43b7-8431-b19c7e8a7d30","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.347198740+00:00 stderr F I0120 10:58:09.347165 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" virtual=false
2026-01-20T10:58:09.349584380+00:00 stderr F I0120 10:58:09.349535 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-13, uid: 0355a172-cce0-4ffe-bbfd-f05c340462a5]"
2026-01-20T10:58:09.349584380+00:00 stderr F I0120 10:58:09.349565 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" virtual=false
2026-01-20T10:58:09.353023298+00:00 stderr F I0120 10:58:09.352962 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.353023298+00:00 stderr F I0120 10:58:09.352992 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" virtual=false
2026-01-20T10:58:09.357800229+00:00 stderr F I0120 10:58:09.357711 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.357800229+00:00 stderr F I0120 10:58:09.357772 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" virtual=false
2026-01-20T10:58:09.373706333+00:00 stderr F I0120 10:58:09.373616 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-service-ca, name: service-ca-666f99b6f, uid: 8e1c6ac1-0ab1-48bc-9de4-83068455da2e]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"service-ca","uid":"054eb633-29d2-4eec-90a7-1a83a0e386c1","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.373706333+00:00 stderr F I0120 10:58:09.373666 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" virtual=false
2026-01-20T10:58:09.375641813+00:00 stderr F I0120 10:58:09.375576 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-console-operator, name: console-conversion-webhook-595f9969b, uid: 74c18b0a-8cb0-4535-bfe8-26315d38e6da]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"console-conversion-webhook","uid":"4dae11c2-6acd-446b-b52c-67345d4c21ea","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.375670873+00:00 stderr F I0120 10:58:09.375644 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" virtual=false
2026-01-20T10:58:09.383010019+00:00 stderr F I0120 10:58:09.382956 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]"
2026-01-20T10:58:09.383010019+00:00 stderr F I0120 10:58:09.382988 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" virtual=false
2026-01-20T10:58:09.389474744+00:00 stderr F I0120 10:58:09.389386 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]"
2026-01-20T10:58:09.389474744+00:00 stderr F I0120 10:58:09.389445 1 garbagecollector.go:549] "Processing item"
item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" virtual=false 2026-01-20T10:58:09.397052736+00:00 stderr F I0120 10:58:09.396989 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.397052736+00:00 stderr F I0120 10:58:09.397025 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" virtual=false 2026-01-20T10:58:09.402694889+00:00 stderr F I0120 10:58:09.402653 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-84d578d794, uid: 3c47c3d4-976b-483f-964e-8c46f11ecf15]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"package-server-manager","uid":"3368f5bb-29da-4770-b432-4e5d1a8491a9","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.402719590+00:00 stderr F I0120 10:58:09.402710 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" virtual=false 2026-01-20T10:58:09.409274027+00:00 stderr F I0120 10:58:09.409230 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-image-registry, name: cluster-image-registry-operator-7769bd8d7d, uid: 
9711a43d-4a1b-4a69-9c39-f52785ebbf0d]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cluster-image-registry-operator","uid":"485aecbc-d986-4290-a12b-2be6eccbc76c","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.409295057+00:00 stderr F I0120 10:58:09.409276 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" virtual=false 2026-01-20T10:58:09.426620377+00:00 stderr F I0120 10:58:09.426557 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-machine-api, name: machine-api-operator-788b7c6b6c, uid: 405b6017-8f85-4458-96f6-8265aa8e42d7]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"machine-api-operator","uid":"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.426658378+00:00 stderr F I0120 10:58:09.426611 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" virtual=false 2026-01-20T10:58:09.429085120+00:00 stderr F I0120 10:58:09.429032 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-console, name: downloads-65476884b9, uid: 8abeeec7-ca20-4671-b076-82bbe28392eb]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"downloads","uid":"03b2baf0-d10c-4001-94a6-800af015de08","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.429085120+00:00 stderr F I0120 10:58:09.429078 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" virtual=false 
2026-01-20T10:58:09.433637956+00:00 stderr F I0120 10:58:09.433523 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: packageserver-8464bcc55b, uid: 6ee8a05c-e6f7-4c44-96b6-a21e4fd03368]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"packageserver","uid":"c7e0a213-d3b0-4220-bc12-3e9beb007a7b","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.433637956+00:00 stderr F I0120 10:58:09.433554 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" virtual=false 2026-01-20T10:58:09.436871118+00:00 stderr F I0120 10:58:09.436809 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.436871118+00:00 stderr F I0120 10:58:09.436858 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" virtual=false 2026-01-20T10:58:09.478179017+00:00 stderr F I0120 10:58:09.446855 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-controller-manager, name: controller-manager-778975cc4f, uid: b3903b7b-b29a-4c4c-afdf-e6733a611156]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"controller-manager","uid":"b42ae171-8338-4274-922f-79cfacb9cfe9","controller":true,"blockOwnerDeletion":true}] 
2026-01-20T10:58:09.478179017+00:00 stderr F I0120 10:58:09.446893 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" virtual=false 2026-01-20T10:58:09.478179017+00:00 stderr F I0120 10:58:09.454876 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.478179017+00:00 stderr F I0120 10:58:09.454897 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" virtual=false 2026-01-20T10:58:09.478179017+00:00 stderr F I0120 10:58:09.458758 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-857456c46, uid: d5cc066b-fa23-4d13-982e-a52282290255]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"catalog-operator","uid":"4a08358e-6f9b-492b-9df8-8e54f40e2fb4","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.478179017+00:00 stderr F I0120 10:58:09.458780 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" virtual=false 2026-01-20T10:58:09.478179017+00:00 stderr F I0120 10:58:09.466233 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: 
openshift-machine-config-operator, name: machine-config-operator-76788bff89, uid: 77c5a195-a240-44aa-beb3-a4a81134728a]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"machine-config-operator","uid":"8cb0f5f7-4dca-477c-8627-e6db485e4cb2","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.478179017+00:00 stderr F I0120 10:58:09.466257 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" virtual=false 2026-01-20T10:58:09.483771699+00:00 stderr F I0120 10:58:09.483672 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.483862031+00:00 stderr F I0120 10:58:09.483827 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" virtual=false 2026-01-20T10:58:09.485483372+00:00 stderr F I0120 10:58:09.485427 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.485483372+00:00 stderr F I0120 10:58:09.485469 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: 
openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" virtual=false 2026-01-20T10:58:09.487690289+00:00 stderr F I0120 10:58:09.487624 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.487690289+00:00 stderr F I0120 10:58:09.487679 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" virtual=false 2026-01-20T10:58:09.492208303+00:00 stderr F I0120 10:58:09.492039 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.492208303+00:00 stderr F I0120 10:58:09.492156 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" virtual=false 2026-01-20T10:58:09.493045695+00:00 stderr F I0120 10:58:09.492972 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.493045695+00:00 stderr F I0120 10:58:09.493000 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" virtual=false 2026-01-20T10:58:09.509534544+00:00 stderr F I0120 10:58:09.509080 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.509534544+00:00 stderr F I0120 10:58:09.509118 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" virtual=false 2026-01-20T10:58:09.513026502+00:00 stderr F I0120 10:58:09.512921 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.513026502+00:00 stderr F I0120 10:58:09.512970 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" virtual=false 2026-01-20T10:58:09.525224462+00:00 stderr F I0120 
10:58:09.525107 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.525224462+00:00 stderr F I0120 10:58:09.525151 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" virtual=false 2026-01-20T10:58:09.525852067+00:00 stderr F I0120 10:58:09.525799 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.525852067+00:00 stderr F I0120 10:58:09.525829 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" virtual=false 2026-01-20T10:58:09.533030610+00:00 stderr F I0120 10:58:09.532947 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.533030610+00:00 stderr F 
I0120 10:58:09.532995 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" virtual=false 2026-01-20T10:58:09.538295644+00:00 stderr F I0120 10:58:09.538227 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.538295644+00:00 stderr F I0120 10:58:09.538275 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" virtual=false 2026-01-20T10:58:09.544029080+00:00 stderr F I0120 10:58:09.543952 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.544029080+00:00 stderr F I0120 10:58:09.543987 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" virtual=false 2026-01-20T10:58:09.558013315+00:00 stderr F I0120 10:58:09.557905 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, 
namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.558013315+00:00 stderr F I0120 10:58:09.557939 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" virtual=false 2026-01-20T10:58:09.565239448+00:00 stderr F I0120 10:58:09.565151 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.565239448+00:00 stderr F I0120 10:58:09.565190 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" virtual=false 2026-01-20T10:58:09.569468086+00:00 stderr F I0120 10:58:09.569406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.569468086+00:00 stderr F I0120 10:58:09.569435 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: 
openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" virtual=false 2026-01-20T10:58:09.572528453+00:00 stderr F I0120 10:58:09.572458 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.572528453+00:00 stderr F I0120 10:58:09.572506 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" virtual=false 2026-01-20T10:58:09.583345228+00:00 stderr F I0120 10:58:09.583295 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.583345228+00:00 stderr F I0120 10:58:09.583329 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" virtual=false 2026-01-20T10:58:09.587674858+00:00 stderr F I0120 10:58:09.587626 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.587821892+00:00 stderr F I0120 10:58:09.587769 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" virtual=false 2026-01-20T10:58:09.593770573+00:00 stderr F I0120 10:58:09.593721 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:09.593770573+00:00 stderr F I0120 10:58:09.593762 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" virtual=false 2026-01-20T10:58:09.599437637+00:00 stderr F I0120 10:58:09.599397 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.599547960+00:00 stderr F I0120 10:58:09.599523 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" virtual=false 
2026-01-20T10:58:09.609810370+00:00 stderr F I0120 10:58:09.609753 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.611261627+00:00 stderr F I0120 10:58:09.611220 1 garbagecollector.go:549] "Processing item" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" virtual=false 2026-01-20T10:58:09.616130091+00:00 stderr F I0120 10:58:09.616011 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.616241594+00:00 stderr F I0120 10:58:09.616215 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" virtual=false 2026-01-20T10:58:09.619131437+00:00 stderr F I0120 10:58:09.619050 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.619292221+00:00 stderr F I0120 10:58:09.619256 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" virtual=false
2026-01-20T10:58:09.620364898+00:00 stderr F I0120 10:58:09.620321 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.620487121+00:00 stderr F I0120 10:58:09.620461 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" virtual=false
2026-01-20T10:58:09.630350042+00:00 stderr F I0120 10:58:09.630226 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.630350042+00:00 stderr F I0120 10:58:09.630304 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" virtual=false
2026-01-20T10:58:09.642470380+00:00 stderr F I0120 10:58:09.642383 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.642470380+00:00 stderr F I0120 10:58:09.642441 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" virtual=false
2026-01-20T10:58:09.646399129+00:00 stderr F I0120 10:58:09.646277 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.646399129+00:00 stderr F I0120 10:58:09.646317 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" virtual=false
2026-01-20T10:58:09.653267634+00:00 stderr F I0120 10:58:09.653195 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.653267634+00:00 stderr F I0120 10:58:09.653239 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" virtual=false
2026-01-20T10:58:09.658854016+00:00 stderr F I0120 10:58:09.658797 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.658854016+00:00 stderr F I0120 10:58:09.658831 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" virtual=false
2026-01-20T10:58:09.668314986+00:00 stderr F I0120 10:58:09.667872 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.668314986+00:00 stderr F I0120 10:58:09.667924 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" virtual=false
2026-01-20T10:58:09.672506453+00:00 stderr F I0120 10:58:09.672377 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.672506453+00:00 stderr F I0120 10:58:09.672418 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" virtual=false
2026-01-20T10:58:09.675513149+00:00 stderr F I0120 10:58:09.675367 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.675712594+00:00 stderr F I0120 10:58:09.675658 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" virtual=false
2026-01-20T10:58:09.693349762+00:00 stderr F I0120 10:58:09.693261 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.693349762+00:00 stderr F I0120 10:58:09.693311 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" virtual=false
2026-01-20T10:58:09.698584725+00:00 stderr F I0120 10:58:09.698532 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.698584725+00:00 stderr F I0120 10:58:09.698564 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" virtual=false
2026-01-20T10:58:09.703734266+00:00 stderr F I0120 10:58:09.703671 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.703734266+00:00 stderr F I0120 10:58:09.703722 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" virtual=false
2026-01-20T10:58:09.705848180+00:00 stderr F I0120 10:58:09.705795 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.705864881+00:00 stderr F I0120 10:58:09.705855 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" virtual=false
2026-01-20T10:58:09.713412171+00:00 stderr F I0120 10:58:09.713333 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.713448342+00:00 stderr F I0120 10:58:09.713411 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" virtual=false
2026-01-20T10:58:09.724545994+00:00 stderr F I0120 10:58:09.724462 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.724545994+00:00 stderr F I0120 10:58:09.724517 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" virtual=false
2026-01-20T10:58:09.732024275+00:00 stderr F I0120 10:58:09.731951 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.732024275+00:00 stderr F I0120 10:58:09.731997 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" virtual=false
2026-01-20T10:58:09.732746173+00:00 stderr F I0120 10:58:09.732703 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.732746173+00:00 stderr F I0120 10:58:09.732733 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" virtual=false
2026-01-20T10:58:09.752501404+00:00 stderr F I0120 10:58:09.749451 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.752501404+00:00 stderr F I0120 10:58:09.749513 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" virtual=false
2026-01-20T10:58:09.755564762+00:00 stderr F I0120 10:58:09.753964 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.755564762+00:00 stderr F I0120 10:58:09.754003 1 garbagecollector.go:549] "Processing item" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" virtual=false
2026-01-20T10:58:09.771259901+00:00 stderr F I0120 10:58:09.769233 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.771259901+00:00 stderr F I0120 10:58:09.769285 1 garbagecollector.go:549] "Processing item" item="[v1/ResourceQuota, namespace: openshift-host-network, name: host-network-namespace-quotas, uid: 499f87a8-1221-4cfd-b28c-9ae80d5ba123]" virtual=false
2026-01-20T10:58:09.771259901+00:00 stderr F I0120 10:58:09.769374 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.771259901+00:00 stderr F I0120 10:58:09.769395 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: installer-artifacts, uid: 897a4e58-3c9c-4859-a14e-e70529b83bbe]" virtual=false
2026-01-20T10:58:09.771259901+00:00 stderr F I0120 10:58:09.769482 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.771259901+00:00 stderr F I0120 10:58:09.769500 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: tools, uid: dfed6660-519c-4896-a7a0-c3ac7cb3f64b]" virtual=false
2026-01-20T10:58:09.789607878+00:00 stderr F I0120 10:58:09.789548 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.789730941+00:00 stderr F I0120 10:58:09.789717 1 garbagecollector.go:549] "Processing item" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" virtual=false
2026-01-20T10:58:09.797807956+00:00 stderr F I0120 10:58:09.797764 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.797950790+00:00 stderr F I0120 10:58:09.797934 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" virtual=false
2026-01-20T10:58:09.798743080+00:00 stderr F I0120 10:58:09.798720 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.798823142+00:00 stderr F I0120 10:58:09.798810 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: cli, uid: bffe1e0b-75b9-453c-a1d8-5723606ab263]" virtual=false
2026-01-20T10:58:09.811442232+00:00 stderr F I0120 10:58:09.811388 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.811560245+00:00 stderr F I0120 10:58:09.811545 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: network-tools, uid: c03fc985-15de-4664-a9fe-10bd1f45afd8]" virtual=false
2026-01-20T10:58:09.830042904+00:00 stderr F I0120 10:58:09.829985 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.830174398+00:00 stderr F I0120 10:58:09.830158 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" virtual=false
2026-01-20T10:58:09.832646710+00:00 stderr F I0120 10:58:09.832616 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.832746723+00:00 stderr F I0120 10:58:09.832729 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" virtual=false
2026-01-20T10:58:09.836633211+00:00 stderr F I0120 10:58:09.836597 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.836733824+00:00 stderr F I0120 10:58:09.836719 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" virtual=false
2026-01-20T10:58:09.840767977+00:00 stderr F I0120 10:58:09.840730 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.840860949+00:00 stderr F I0120 10:58:09.840847 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: oauth-proxy, uid: 9acec097-9868-44f6-a179-49640dd8e719]" virtual=false
2026-01-20T10:58:09.847871497+00:00 stderr F I0120 10:58:09.847799 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.848007630+00:00 stderr F I0120 10:58:09.847989 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" virtual=false
2026-01-20T10:58:09.861990975+00:00 stderr F I0120 10:58:09.861893 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.861990975+00:00 stderr F I0120 10:58:09.861944 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: driver-toolkit, uid: bf2aafeb-b943-4673-8978-ff549746c6bb]" virtual=false
2026-01-20T10:58:09.863642258+00:00 stderr F I0120 10:58:09.863534 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.863642258+00:00 stderr F I0120 10:58:09.863577 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.863642258+00:00 stderr F I0120 10:58:09.863602 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: must-gather, uid: c474c767-c860-4e73-b1b6-ce1a6f4cf507]" virtual=false
2026-01-20T10:58:09.863642258+00:00 stderr F I0120 10:58:09.863623 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" virtual=false
2026-01-20T10:58:09.866469870+00:00 stderr F I0120 10:58:09.866425 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.866494010+00:00 stderr F I0120 10:58:09.866471 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: cli-artifacts, uid: 942e0ba6-9f7e-4dd1-9976-a373d64fbf65]" virtual=false
2026-01-20T10:58:09.867505375+00:00 stderr F I0120 10:58:09.867476 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.867563627+00:00 stderr F I0120 10:58:09.867507 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" virtual=false
2026-01-20T10:58:09.870322757+00:00 stderr F I0120 10:58:09.870301 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.870340598+00:00 stderr F I0120 10:58:09.870324 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: tests, uid: 7778095f-760e-4c45-992d-fa5bfe5ceefd]" virtual=false
2026-01-20T10:58:09.871988070+00:00 stderr F I0120 10:58:09.871965 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.872005290+00:00 stderr F I0120 10:58:09.871989 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" virtual=false
2026-01-20T10:58:09.873935670+00:00 stderr F I0120 10:58:09.873885 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ResourceQuota, namespace: openshift-host-network, name: host-network-namespace-quotas, uid: 499f87a8-1221-4cfd-b28c-9ae80d5ba123]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.873968240+00:00 stderr F I0120 10:58:09.873949 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" virtual=false
2026-01-20T10:58:09.875934930+00:00 stderr F I0120 10:58:09.875899 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.875954020+00:00 stderr F I0120 10:58:09.875947 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-monitoring, name: openshift-cluster-monitoring, uid: 58528e0e-a83e-439a-a129-7b0a4ae24a96]" virtual=false
2026-01-20T10:58:09.884394765+00:00 stderr F I0120 10:58:09.884359 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: installer-artifacts, uid: 897a4e58-3c9c-4859-a14e-e70529b83bbe]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.884420335+00:00 stderr F I0120 10:58:09.884392 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" virtual=false
2026-01-20T10:58:09.900653048+00:00 stderr F I0120 10:58:09.900508 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: tools, uid: dfed6660-519c-4896-a7a0-c3ac7cb3f64b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.900653048+00:00 stderr F I0120 10:58:09.900589 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" virtual=false
2026-01-20T10:58:09.900958235+00:00 stderr F I0120 10:58:09.900891 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.900958235+00:00 stderr F I0120 10:58:09.900944 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" virtual=false
2026-01-20T10:58:09.903401877+00:00 stderr F I0120 10:58:09.903365 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}]
2026-01-20T10:58:09.903427228+00:00 stderr F I0120 10:58:09.903412 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" virtual=false
2026-01-20T10:58:09.921307513+00:00 stderr F I0120 10:58:09.921110 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: cli, uid: bffe1e0b-75b9-453c-a1d8-5723606ab263]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.921307513+00:00 stderr F I0120 10:58:09.921173 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" virtual=false
2026-01-20T10:58:09.932501087+00:00 stderr F I0120 10:58:09.932311 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: network-tools, uid: c03fc985-15de-4664-a9fe-10bd1f45afd8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.932501087+00:00 stderr F I0120 10:58:09.932353 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: installer, uid: 88d58746-7912-47b5-a52e-f0badf538ff2]" virtual=false
2026-01-20T10:58:09.947927189+00:00 stderr F I0120 10:58:09.947857 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.948100033+00:00 stderr F I0120 10:58:09.948050 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" virtual=false
2026-01-20T10:58:09.955079421+00:00 stderr F I0120 10:58:09.955004 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.955106151+00:00 stderr F I0120 10:58:09.955084 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" virtual=false
2026-01-20T10:58:09.956708971+00:00 stderr F I0120 10:58:09.956667 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.956830135+00:00 stderr F I0120 10:58:09.956806 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 86c0fe0d-29a3-4976-8e34-622c5487375c]" virtual=false
2026-01-20T10:58:09.959249176+00:00 stderr F I0120 10:58:09.959182 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.959276937+00:00 stderr F I0120 10:58:09.959250 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" virtual=false
2026-01-20T10:58:09.963130915+00:00 stderr F I0120 10:58:09.963083 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: oauth-proxy, uid: 9acec097-9868-44f6-a179-49640dd8e719]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.963130915+00:00 stderr F I0120 10:58:09.963110 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" virtual=false
2026-01-20T10:58:09.975219372+00:00 stderr F I0120 10:58:09.975153 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: driver-toolkit, uid: bf2aafeb-b943-4673-8978-ff549746c6bb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.975219372+00:00 stderr F I0120 10:58:09.975200 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" virtual=false
2026-01-20T10:58:09.977423558+00:00 stderr F I0120 10:58:09.977362 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:09.977423558+00:00 stderr F I0120 10:58:09.977387 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" virtual=false
2026-01-20T10:58:09.982737213+00:00 stderr F I0120 10:58:09.982336 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: must-gather, uid: c474c767-c860-4e73-b1b6-ce1a6f4cf507]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.982737213+00:00 stderr F I0120 10:58:09.982366 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" virtual=false
2026-01-20T10:58:09.988695175+00:00 stderr F I0120 10:58:09.988657 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:09.988712205+00:00 stderr F I0120 10:58:09.988701 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" virtual=false
2026-01-20T10:58:09.992915361+00:00 stderr F I0120 10:58:09.992874 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: cli-artifacts, uid: 942e0ba6-9f7e-4dd1-9976-a373d64fbf65]"
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.992935592+00:00 stderr F I0120 10:58:09.992927 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" virtual=false 2026-01-20T10:58:09.999789636+00:00 stderr F I0120 10:58:09.999743 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: tests, uid: 7778095f-760e-4c45-992d-fa5bfe5ceefd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:09.999846818+00:00 stderr F I0120 10:58:09.999815 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" virtual=false 2026-01-20T10:58:10.003121721+00:00 stderr F I0120 10:58:10.001925 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.003121721+00:00 stderr F I0120 10:58:10.001951 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: builder, uid: e5aa0f53-6b29-446f-b20e-4098ddf36d61]" virtual=false 2026-01-20T10:58:10.004157877+00:00 stderr F I0120 10:58:10.003824 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, 
namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:10.004157877+00:00 stderr F I0120 10:58:10.003867 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" virtual=false 2026-01-20T10:58:10.009657346+00:00 stderr F I0120 10:58:10.009632 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-monitoring, name: openshift-cluster-monitoring, uid: 58528e0e-a83e-439a-a129-7b0a4ae24a96]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.009724868+00:00 stderr F I0120 10:58:10.009713 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" virtual=false 2026-01-20T10:58:10.012756305+00:00 stderr F I0120 10:58:10.012736 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.012807846+00:00 stderr F I0120 10:58:10.012796 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: default, uid: 76e83ab2-9557-4a68-bf85-85479267a323]" virtual=false 
2026-01-20T10:58:10.013107934+00:00 stderr F I0120 10:58:10.013090 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" 2026-01-20T10:58:10.013152255+00:00 stderr F I0120 10:58:10.013142 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]" virtual=false 2026-01-20T10:58:10.035383240+00:00 stderr F I0120 10:58:10.035290 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:10.035383240+00:00 stderr F I0120 10:58:10.035352 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" virtual=false 2026-01-20T10:58:10.036880258+00:00 stderr F I0120 10:58:10.036832 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.036914069+00:00 stderr F I0120 10:58:10.036894 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" virtual=false 
2026-01-20T10:58:10.037986956+00:00 stderr F I0120 10:58:10.037932 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:10.037986956+00:00 stderr F I0120 10:58:10.037963 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" virtual=false 2026-01-20T10:58:10.053003438+00:00 stderr F I0120 10:58:10.052944 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.053003438+00:00 stderr F I0120 10:58:10.052981 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" virtual=false 2026-01-20T10:58:10.066874010+00:00 stderr F I0120 10:58:10.066828 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: installer, uid: 88d58746-7912-47b5-a52e-f0badf538ff2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.066918901+00:00 stderr F I0120 10:58:10.066871 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, 
namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" virtual=false 2026-01-20T10:58:10.067181218+00:00 stderr F I0120 10:58:10.067156 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: builder, uid: e5aa0f53-6b29-446f-b20e-4098ddf36d61]" 2026-01-20T10:58:10.067209969+00:00 stderr F I0120 10:58:10.067198 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: builder, uid: 46bd11b1-e144-4f00-b9f6-a584df6e9a0e]" virtual=false 2026-01-20T10:58:10.075873899+00:00 stderr F I0120 10:58:10.075845 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: default, uid: 76e83ab2-9557-4a68-bf85-85479267a323]" 2026-01-20T10:58:10.075906780+00:00 stderr F I0120 10:58:10.075876 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" virtual=false 2026-01-20T10:58:10.079734216+00:00 stderr F I0120 10:58:10.079709 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]" 2026-01-20T10:58:10.079734216+00:00 stderr F I0120 10:58:10.079731 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" virtual=false 2026-01-20T10:58:10.085901053+00:00 stderr F I0120 10:58:10.085872 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: 
openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.085933964+00:00 stderr F I0120 10:58:10.085915 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" virtual=false 2026-01-20T10:58:10.092361958+00:00 stderr F I0120 10:58:10.092291 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 86c0fe0d-29a3-4976-8e34-622c5487375c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.092415339+00:00 stderr F I0120 10:58:10.092355 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" virtual=false 2026-01-20T10:58:10.092734817+00:00 stderr F I0120 10:58:10.092703 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.092749707+00:00 stderr F I0120 10:58:10.092741 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" virtual=false 2026-01-20T10:58:10.096240566+00:00 stderr F I0120 10:58:10.096200 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.096262976+00:00 stderr F I0120 10:58:10.096236 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: deployer, uid: f74640ad-73ea-44c8-a54b-275c3bc4150d]" virtual=false 2026-01-20T10:58:10.099136819+00:00 stderr F I0120 10:58:10.099087 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" 2026-01-20T10:58:10.099136819+00:00 stderr F I0120 10:58:10.099111 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: default, name: default, uid: d048f127-d5e0-4b33-a504-c18c59c46f0b]" virtual=false 2026-01-20T10:58:10.109605216+00:00 stderr F I0120 10:58:10.109535 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.109605216+00:00 stderr F I0120 10:58:10.109583 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" virtual=false 2026-01-20T10:58:10.115497205+00:00 stderr F I0120 10:58:10.115450 1 garbagecollector.go:615] "item has at least one existing owner, 
will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.115497205+00:00 stderr F I0120 10:58:10.115489 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" virtual=false 2026-01-20T10:58:10.115798003+00:00 stderr F I0120 10:58:10.115767 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.115810713+00:00 stderr F I0120 10:58:10.115801 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-public, name: deployer, uid: 50560a3b-b057-466d-bcf6-867cc67ab743]" virtual=false 2026-01-20T10:58:10.119507157+00:00 stderr F I0120 10:58:10.119479 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.119592059+00:00 stderr F I0120 10:58:10.119575 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" virtual=false 
2026-01-20T10:58:10.127196052+00:00 stderr F I0120 10:58:10.127137 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.127279154+00:00 stderr F I0120 10:58:10.127192 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: deployer, uid: d2891b43-c43d-4301-8b2f-ab810f64e280]" virtual=false 2026-01-20T10:58:10.129659755+00:00 stderr F I0120 10:58:10.129633 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.129739137+00:00 stderr F I0120 10:58:10.129723 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" virtual=false 2026-01-20T10:58:10.132256211+00:00 stderr F I0120 10:58:10.132231 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: builder, uid: 46bd11b1-e144-4f00-b9f6-a584df6e9a0e]" 2026-01-20T10:58:10.132311602+00:00 stderr F I0120 10:58:10.132300 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" virtual=false 2026-01-20T10:58:10.137994646+00:00 stderr F 
I0120 10:58:10.137937 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:10.137994646+00:00 stderr F I0120 10:58:10.137981 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-4, uid: 932884fb-7f39-49cc-bc5a-34d361a12e91]" virtual=false 2026-01-20T10:58:10.142239754+00:00 stderr F I0120 10:58:10.142203 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.142239754+00:00 stderr F I0120 10:58:10.142234 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: deployer, uid: efac8c96-daad-473a-a712-f4f4f97472b1]" virtual=false 2026-01-20T10:58:10.152470884+00:00 stderr F I0120 10:58:10.152433 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" 2026-01-20T10:58:10.152554776+00:00 stderr F I0120 10:58:10.152543 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]" virtual=false 2026-01-20T10:58:10.156353562+00:00 stderr F I0120 10:58:10.156328 1 
garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" 2026-01-20T10:58:10.156379273+00:00 stderr F I0120 10:58:10.156352 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-7, uid: 29e42176-b5f0-48b5-a766-b1545d67b91c]" virtual=false 2026-01-20T10:58:10.162597932+00:00 stderr F I0120 10:58:10.162546 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: deployer, uid: f74640ad-73ea-44c8-a54b-275c3bc4150d]" 2026-01-20T10:58:10.162597932+00:00 stderr F I0120 10:58:10.162575 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" virtual=false 2026-01-20T10:58:10.162705864+00:00 stderr F I0120 10:58:10.162681 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.162705864+00:00 stderr F I0120 10:58:10.162703 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]" virtual=false 2026-01-20T10:58:10.165292510+00:00 stderr F I0120 10:58:10.165255 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: default, name: default, uid: d048f127-d5e0-4b33-a504-c18c59c46f0b]" 2026-01-20T10:58:10.165292510+00:00 stderr F I0120 
10:58:10.165276 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" virtual=false 2026-01-20T10:58:10.171650991+00:00 stderr F I0120 10:58:10.171622 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.171650991+00:00 stderr F I0120 10:58:10.171645 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" virtual=false 2026-01-20T10:58:10.179738957+00:00 stderr F I0120 10:58:10.179681 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-public, name: deployer, uid: 50560a3b-b057-466d-bcf6-867cc67ab743]" 2026-01-20T10:58:10.179738957+00:00 stderr F I0120 10:58:10.179721 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" virtual=false 2026-01-20T10:58:10.188011607+00:00 stderr F I0120 10:58:10.187539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:10.188011607+00:00 stderr F I0120 10:58:10.187585 1 garbagecollector.go:549] "Processing item" 
item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]" virtual=false 2026-01-20T10:58:10.188890319+00:00 stderr F I0120 10:58:10.188855 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: deployer, uid: d2891b43-c43d-4301-8b2f-ab810f64e280]" 2026-01-20T10:58:10.188905719+00:00 stderr F I0120 10:58:10.188897 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" virtual=false 2026-01-20T10:58:10.197217471+00:00 stderr F I0120 10:58:10.197156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.197217471+00:00 stderr F I0120 10:58:10.197197 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]" virtual=false 2026-01-20T10:58:10.203030088+00:00 stderr F I0120 10:58:10.202954 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-4, uid: 932884fb-7f39-49cc-bc5a-34d361a12e91]" 2026-01-20T10:58:10.203030088+00:00 stderr F I0120 10:58:10.202994 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" virtual=false 2026-01-20T10:58:10.205777258+00:00 stderr F I0120 10:58:10.205732 1 garbagecollector.go:596] "item doesn't have an owner, continue on 
next item" item="[v1/ServiceAccount, namespace: openshift-multus, name: deployer, uid: efac8c96-daad-473a-a712-f4f4f97472b1]" 2026-01-20T10:58:10.205803389+00:00 stderr F I0120 10:58:10.205785 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" virtual=false 2026-01-20T10:58:10.213080494+00:00 stderr F I0120 10:58:10.212969 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.213080494+00:00 stderr F I0120 10:58:10.213034 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]" virtual=false 2026-01-20T10:58:10.218014259+00:00 stderr F I0120 10:58:10.217854 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.218014259+00:00 stderr F I0120 10:58:10.217888 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" virtual=false 2026-01-20T10:58:10.219444825+00:00 stderr F I0120 10:58:10.219393 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: 
openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]"
2026-01-20T10:58:10.219467846+00:00 stderr F I0120 10:58:10.219459 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]" virtual=false
2026-01-20T10:58:10.222664387+00:00 stderr F I0120 10:58:10.222632 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-7, uid: 29e42176-b5f0-48b5-a766-b1545d67b91c]"
2026-01-20T10:58:10.222688997+00:00 stderr F I0120 10:58:10.222676 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" virtual=false
2026-01-20T10:58:10.222722178+00:00 stderr F I0120 10:58:10.222687 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.222810241+00:00 stderr F I0120 10:58:10.222770 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" virtual=false
2026-01-20T10:58:10.229875731+00:00 stderr F I0120 10:58:10.229827 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]"
2026-01-20T10:58:10.229875731+00:00 stderr F I0120 10:58:10.229861 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]" virtual=false
2026-01-20T10:58:10.233164614+00:00 stderr F I0120 10:58:10.233131 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]"
2026-01-20T10:58:10.233247556+00:00 stderr F I0120 10:58:10.233232 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" virtual=false
2026-01-20T10:58:10.241223979+00:00 stderr F I0120 10:58:10.241144 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.241223979+00:00 stderr F I0120 10:58:10.241185 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" virtual=false
2026-01-20T10:58:10.247180250+00:00 stderr F I0120 10:58:10.246255 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.247180250+00:00 stderr F I0120 10:58:10.246298 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" virtual=false
2026-01-20T10:58:10.247716313+00:00 stderr F I0120 10:58:10.247676 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]"
2026-01-20T10:58:10.247716313+00:00 stderr F I0120 10:58:10.247707 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" virtual=false
2026-01-20T10:58:10.252909055+00:00 stderr F I0120 10:58:10.252880 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.252927996+00:00 stderr F I0120 10:58:10.252912 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" virtual=false
2026-01-20T10:58:10.253049749+00:00 stderr F I0120 10:58:10.252999 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]"
2026-01-20T10:58:10.253138601+00:00 stderr F I0120 10:58:10.253104 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" virtual=false
2026-01-20T10:58:10.256945417+00:00 stderr F I0120 10:58:10.256888 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]"
2026-01-20T10:58:10.256962918+00:00 stderr F I0120 10:58:10.256941 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" virtual=false
2026-01-20T10:58:10.262220612+00:00 stderr F I0120 10:58:10.262181 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.262273873+00:00 stderr F I0120 10:58:10.262220 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" virtual=false
2026-01-20T10:58:10.262550570+00:00 stderr F I0120 10:58:10.262512 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]"
2026-01-20T10:58:10.262550570+00:00 stderr F I0120 10:58:10.262536 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" virtual=false
2026-01-20T10:58:10.269930328+00:00 stderr F I0120 10:58:10.269882 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]"
2026-01-20T10:58:10.269930328+00:00 stderr F I0120 10:58:10.269914 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" virtual=false
2026-01-20T10:58:10.273089668+00:00 stderr F I0120 10:58:10.273042 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]"
2026-01-20T10:58:10.273111898+00:00 stderr F I0120 10:58:10.273092 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" virtual=false
2026-01-20T10:58:10.273845557+00:00 stderr F I0120 10:58:10.273811 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.273845557+00:00 stderr F I0120 10:58:10.273836 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" virtual=false
2026-01-20T10:58:10.275328225+00:00 stderr F I0120 10:58:10.275293 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]"
2026-01-20T10:58:10.275328225+00:00 stderr F I0120 10:58:10.275317 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" virtual=false
2026-01-20T10:58:10.282299591+00:00 stderr F I0120 10:58:10.282260 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]"
2026-01-20T10:58:10.282299591+00:00 stderr F I0120 10:58:10.282283 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" virtual=false
2026-01-20T10:58:10.296685138+00:00 stderr F I0120 10:58:10.296623 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.296685138+00:00 stderr F I0120 10:58:10.296670 1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: controlplanemachineset.machine.openshift.io, uid: c0896a42-9644-4ae1-b9d2-64b9d8d72a93]" virtual=false
2026-01-20T10:58:10.297899348+00:00 stderr F I0120 10:58:10.297864 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]"
2026-01-20T10:58:10.297899348+00:00 stderr F I0120 10:58:10.297887 1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: multus.openshift.io, uid: 8e8dfece-6a87-43e4-aef0-9bae8de3390b]" virtual=false
2026-01-20T10:58:10.305596234+00:00 stderr F I0120 10:58:10.305510 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:10.305596234+00:00 stderr F I0120 10:58:10.305545 1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: network-node-identity.openshift.io, uid: 6dec73bc-003d-45c2-b80b-6abdd589c12e]" virtual=false
2026-01-20T10:58:10.353302485+00:00 stderr F I0120 10:58:10.353240 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:10.353302485+00:00 stderr F I0120 10:58:10.353284 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" virtual=false
2026-01-20T10:58:10.358039115+00:00 stderr F I0120 10:58:10.357989 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.358039115+00:00 stderr F I0120 10:58:10.358022 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" virtual=false
2026-01-20T10:58:10.360819246+00:00 stderr F I0120 10:58:10.360789 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:10.360819246+00:00 stderr F I0120 10:58:10.360811 1 garbagecollector.go:549] "Processing item" item="[scheduling.k8s.io/v1/PriorityClass, namespace: , name: openshift-user-critical, uid: 53eb906b-da85-4299-867d-b35bdfc9d7dd]" virtual=false
2026-01-20T10:58:10.369287881+00:00 stderr F I0120 10:58:10.369251 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.369365544+00:00 stderr F I0120 10:58:10.369348 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" virtual=false
2026-01-20T10:58:10.374924585+00:00 stderr F I0120 10:58:10.374895 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.374943185+00:00 stderr F I0120 10:58:10.374928 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration, namespace: , name: openshift-control-plane-operators, uid: e58c2fe6-ebe9-4808-9cea-7443d2a56c5c]" virtual=false
2026-01-20T10:58:10.381446350+00:00 stderr F I0120 10:58:10.381405 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.381446350+00:00 stderr F I0120 10:58:10.381429 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" virtual=false
2026-01-20T10:58:10.383569755+00:00 stderr F I0120 10:58:10.383543 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.383587085+00:00 stderr F I0120 10:58:10.383569 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" virtual=false
2026-01-20T10:58:10.386926870+00:00 stderr F I0120 10:58:10.386908 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.386981671+00:00 stderr F I0120 10:58:10.386969 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" virtual=false
2026-01-20T10:58:10.389475214+00:00 stderr F I0120 10:58:10.389447 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.389475214+00:00 stderr F I0120 10:58:10.389468 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" virtual=false
2026-01-20T10:58:10.395650441+00:00 stderr F I0120 10:58:10.395619 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.395650441+00:00 stderr F I0120 10:58:10.395645 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" virtual=false
2026-01-20T10:58:10.395850486+00:00 stderr F I0120 10:58:10.395833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.395893517+00:00 stderr F I0120 10:58:10.395883 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" virtual=false
2026-01-20T10:58:10.399123429+00:00 stderr F I0120 10:58:10.399106 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.399172870+00:00 stderr F I0120 10:58:10.399162 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" virtual=false
2026-01-20T10:58:10.406735213+00:00 stderr F I0120 10:58:10.406706 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.406735213+00:00 stderr F I0120 10:58:10.406704 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.406810235+00:00 stderr F I0120 10:58:10.406780 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" virtual=false
2026-01-20T10:58:10.406835605+00:00 stderr F I0120 10:58:10.406731 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" virtual=false
2026-01-20T10:58:10.411314989+00:00 stderr F I0120 10:58:10.411268 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.411335060+00:00 stderr F I0120 10:58:10.411310 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]" virtual=false
2026-01-20T10:58:10.412223492+00:00 stderr F I0120 10:58:10.412176 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.412292244+00:00 stderr F I0120 10:58:10.412260 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" virtual=false
2026-01-20T10:58:10.417897606+00:00 stderr F I0120 10:58:10.417863 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.417914277+00:00 stderr F I0120 10:58:10.417897 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" virtual=false
2026-01-20T10:58:10.425701755+00:00 stderr F I0120 10:58:10.425589 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: controlplanemachineset.machine.openshift.io, uid: c0896a42-9644-4ae1-b9d2-64b9d8d72a93]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.425701755+00:00 stderr F I0120 10:58:10.425641 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" virtual=false
2026-01-20T10:58:10.431805570+00:00 stderr F I0120 10:58:10.431760 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: multus.openshift.io, uid: 8e8dfece-6a87-43e4-aef0-9bae8de3390b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.431805570+00:00 stderr F I0120 10:58:10.431795 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]" virtual=false
2026-01-20T10:58:10.436763796+00:00 stderr F I0120 10:58:10.436735 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: network-node-identity.openshift.io, uid: 6dec73bc-003d-45c2-b80b-6abdd589c12e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.436782786+00:00 stderr F I0120 10:58:10.436777 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" virtual=false
2026-01-20T10:58:10.476095715+00:00 stderr F I0120 10:58:10.476006 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]"
2026-01-20T10:58:10.476131376+00:00 stderr F I0120 10:58:10.476116 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" virtual=false
2026-01-20T10:58:10.486382755+00:00 stderr F I0120 10:58:10.486351 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.486458977+00:00 stderr F I0120 10:58:10.486447 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" virtual=false
2026-01-20T10:58:10.490668385+00:00 stderr F I0120 10:58:10.490631 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.490688525+00:00 stderr F I0120 10:58:10.490681 1 garbagecollector.go:549] "Processing item" item="[migration.k8s.io/v1alpha1/StorageVersionMigration, namespace: , name: console-plugin-storage-version-migration, uid: 97786232-f628-4bf7-9aa1-0ab2580b9a37]" virtual=false
2026-01-20T10:58:10.497265672+00:00 stderr F I0120 10:58:10.497238 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]"
2026-01-20T10:58:10.497289522+00:00 stderr F I0120 10:58:10.497268 1 garbagecollector.go:549] "Processing item" item="[migration.k8s.io/v1alpha1/StorageVersionMigration, namespace: , name: machineconfiguration-controllerconfig-storage-version-migration, uid: 4cbaaaf2-51d8-401d-bae6-b6f1e5b45d39]" virtual=false
2026-01-20T10:58:10.498229287+00:00 stderr F I0120 10:58:10.498209 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[scheduling.k8s.io/v1/PriorityClass, namespace: , name: openshift-user-critical, uid: 53eb906b-da85-4299-867d-b35bdfc9d7dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.498280608+00:00 stderr F I0120 10:58:10.498269 1 garbagecollector.go:549] "Processing item" item="[migration.k8s.io/v1alpha1/StorageVersionMigration, namespace: , name: machineconfiguration-machineconfigpool-storage-version-migration, uid: 7bf21232-ae22-481b-a79a-dd7eacf3583e]" virtual=false
2026-01-20T10:58:10.503339506+00:00 stderr F I0120 10:58:10.503291 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]"
2026-01-20T10:58:10.503339506+00:00 stderr F I0120 10:58:10.503321 1 garbagecollector.go:549] "Processing item" item="[operator.openshift.io/v1/Etcd, namespace: , name: cluster, uid: f72bfffd-24a6-4b3a-a0e5-af2471354406]" virtual=false
2026-01-20T10:58:10.505269955+00:00 stderr F I0120 10:58:10.505238 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.505269955+00:00 stderr F I0120 10:58:10.505264 1 garbagecollector.go:549] "Processing item" item="[config.openshift.io/v1/Console, namespace: , name: cluster, uid: 1acbce88-d79f-4a02-bd85-4634f5c624be]" virtual=false
2026-01-20T10:58:10.514142301+00:00 stderr F I0120 10:58:10.511899 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration, namespace: , name: openshift-control-plane-operators, uid: e58c2fe6-ebe9-4808-9cea-7443d2a56c5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.514142301+00:00 stderr F I0120 10:58:10.511925 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler, uid: 90140550-0bf0-45b6-bc21-2756500fa74e]" virtual=false
2026-01-20T10:58:10.519548078+00:00 stderr F I0120 10:58:10.519513 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.519569378+00:00 stderr F I0120 10:58:10.519561 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler-operator, uid: 122f7969-6bbd-4171-b0d2-3382822b14bc]" virtual=false
2026-01-20T10:58:10.523589531+00:00 stderr F I0120 10:58:10.523547 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.523661213+00:00 stderr F I0120 10:58:10.523633 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-monitoring-operator, uid: f9b8a349-3d3e-43d3-8552-f17fd46dfe4d]" virtual=false
2026-01-20T10:58:10.524097584+00:00 stderr F I0120 10:58:10.524074 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.524121844+00:00 stderr F I0120 10:58:10.524109 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-network-operator, uid: 741fb11c-22e2-4896-b41e-4bc6506dadb4]" virtual=false
2026-01-20T10:58:10.529628764+00:00 stderr F I0120 10:58:10.529587 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.529661565+00:00 stderr F I0120 10:58:10.529638 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator, uid: 2613967c-5e5d-427c-802b-3219c1314d9f]" virtual=false
2026-01-20T10:58:10.530295721+00:00 stderr F I0120 10:58:10.530261 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.530309351+00:00 stderr F I0120 10:58:10.530300 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: 6e8698c1-8ae7-4cf2-9591-7d7aaf70f1e4]" virtual=false
2026-01-20T10:58:10.532757174+00:00 stderr F I0120 10:58:10.532719 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.532771334+00:00 stderr F I0120 10:58:10.532760 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-proxy-reader, uid: 0fc0e835-2d7d-4935-a386-83f3cdcc2356]" virtual=false
2026-01-20T10:58:10.535591286+00:00 stderr F I0120 10:58:10.535554 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:10.535606626+00:00 stderr F I0120 10:58:10.535594 1 garbagecollector.go:549] "Processing item" item="[operator.openshift.io/v1/OpenShiftAPIServer, namespace: , name: cluster, uid: 5f5e95ce-59b9-4ad4-8913-c62d41574952]" virtual=false
2026-01-20T10:58:10.539510065+00:00 stderr F I0120 10:58:10.539467 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.539525385+00:00 stderr F I0120 10:58:10.539511 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-version-operator, uid: d1772a5b-7528-4218-b1a5-b8e37adaddd0]" virtual=false
2026-01-20T10:58:10.539757361+00:00 stderr F I0120 10:58:10.539709 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:10.539809942+00:00 stderr F I0120 10:58:10.539774 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console, uid: b060a198-fcb6-4114-b32b-339ffffe6077]" virtual=false
2026-01-20T10:58:10.546178005+00:00 stderr F I0120 10:58:10.546126 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:10.546197005+00:00 stderr F I0120 10:58:10.546177 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-auth-delegator, uid: 5b5cc72d-8cfc-47d3-8393-7b66b592f99e]" virtual=false
2026-01-20T10:58:10.552988557+00:00 stderr F I0120 10:58:10.552927 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]"
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2026-01-20T10:58:10.552988557+00:00 stderr F I0120 10:58:10.552981 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-extensions-reader, uid: 402476b7-93bc-4cfa-b038-c3108d7ea260]" virtual=false 2026-01-20T10:58:10.558484907+00:00 stderr F I0120 10:58:10.558438 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.558506718+00:00 stderr F I0120 10:58:10.558487 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator, uid: cd350fa5-6dc3-49dd-a977-ebe3ffe5edaa]" virtual=false 2026-01-20T10:58:10.633522133+00:00 stderr F I0120 10:58:10.633449 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[migration.k8s.io/v1alpha1/StorageVersionMigration, namespace: , name: machineconfiguration-controllerconfig-storage-version-migration, uid: 4cbaaaf2-51d8-401d-bae6-b6f1e5b45d39]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.633522133+00:00 stderr F I0120 10:58:10.633503 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator-auth-delegator, uid: 6aee10d5-48ad-432d-ae71-69853ed0161c]" virtual=false 2026-01-20T10:58:10.636113609+00:00 stderr F I0120 10:58:10.635960 1 garbagecollector.go:615] 
"item has at least one existing owner, will not garbage collect" item="[migration.k8s.io/v1alpha1/StorageVersionMigration, namespace: , name: machineconfiguration-machineconfigpool-storage-version-migration, uid: 7bf21232-ae22-481b-a79a-dd7eacf3583e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.636113609+00:00 stderr F I0120 10:58:10.635990 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: control-plane-machine-set-operator, uid: a398c8c9-26b5-430a-94f5-42d3f40459d5]" virtual=false 2026-01-20T10:58:10.637010501+00:00 stderr F I0120 10:58:10.636960 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[migration.k8s.io/v1alpha1/StorageVersionMigration, namespace: , name: console-plugin-storage-version-migration, uid: 97786232-f628-4bf7-9aa1-0ab2580b9a37]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.637045342+00:00 stderr F I0120 10:58:10.637024 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: 3c762506-422e-4421-915e-5872f5c48dbd]" virtual=false 2026-01-20T10:58:10.640959191+00:00 stderr F I0120 10:58:10.639106 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.640959191+00:00 stderr F I0120 10:58:10.639147 1 
garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: custom-account-openshift-machine-config-operator, uid: 7d9e4776-ed62-42b1-b845-cc4c6cc67c88]" virtual=false 2026-01-20T10:58:10.640959191+00:00 stderr F I0120 10:58:10.639364 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operator.openshift.io/v1/Etcd, namespace: , name: cluster, uid: f72bfffd-24a6-4b3a-a0e5-af2471354406]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2026-01-20T10:58:10.640959191+00:00 stderr F I0120 10:58:10.639386 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: default-account-cluster-image-registry-operator, uid: cda37d84-e0e5-4e44-9a71-02182975026d]" virtual=false 2026-01-20T10:58:10.643960799+00:00 stderr F I0120 10:58:10.643895 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.643960799+00:00 stderr F I0120 10:58:10.643943 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: helm-chartrepos-view, uid: 22aa2287-b511-4fb0-a162-d3c7fc093bc5]" virtual=false 2026-01-20T10:58:10.645491807+00:00 stderr F I0120 10:58:10.645417 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[config.openshift.io/v1/Console, namespace: , name: cluster, uid: 1acbce88-d79f-4a02-bd85-4634f5c624be]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2026-01-20T10:58:10.645491807+00:00 stderr F I0120 10:58:10.645461 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers, uid: 9d6d5d27-a1df-40a9-b87c-44c9c7277ec0]" virtual=false 2026-01-20T10:58:10.645842026+00:00 stderr F I0120 10:58:10.645808 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler, uid: 90140550-0bf0-45b6-bc21-2756500fa74e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.645861396+00:00 stderr F I0120 10:58:10.645846 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers-baremetal, uid: f4e67882-ed45-4d9c-b562-81ffe3d1cb30]" virtual=false 2026-01-20T10:58:10.649576421+00:00 stderr F I0120 10:58:10.649527 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler-operator, uid: 122f7969-6bbd-4171-b0d2-3382822b14bc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.649594842+00:00 stderr F I0120 10:58:10.649574 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator, uid: 2b488f4f-97f2-434d-a62e-6ce4db9636a2]" virtual=false 2026-01-20T10:58:10.649766216+00:00 stderr F I0120 10:58:10.649698 1 garbagecollector.go:615] "item has at least one existing owner, will 
not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-monitoring-operator, uid: f9b8a349-3d3e-43d3-8552-f17fd46dfe4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.649776606+00:00 stderr F I0120 10:58:10.649766 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator-ext-remediation, uid: 06cb8f99-d6bc-46a7-bcf1-c610b6fc190a]" virtual=false 2026-01-20T10:58:10.654498066+00:00 stderr F I0120 10:58:10.654453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-network-operator, uid: 741fb11c-22e2-4896-b41e-4bc6506dadb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.654498066+00:00 stderr F I0120 10:58:10.654480 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: marketplace-operator, uid: 62bf9f3d-b2d7-4e1a-bd45-41b46e18211a]" virtual=false 2026-01-20T10:58:10.655426680+00:00 stderr F I0120 10:58:10.655401 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator, uid: 2613967c-5e5d-427c-802b-3219c1314d9f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.655464081+00:00 stderr F I0120 10:58:10.655438 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: 
metrics-daemon-sa-rolebinding, uid: 6dc834f4-1d8d-4397-b483-82d00bc808ca]" virtual=false 2026-01-20T10:58:10.659046561+00:00 stderr F I0120 10:58:10.659013 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: 6e8698c1-8ae7-4cf2-9591-7d7aaf70f1e4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.659046561+00:00 stderr F I0120 10:58:10.659036 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-admission-controller-webhook, uid: 048da2fc-4827-4d11-943b-079ac5e15768]" virtual=false 2026-01-20T10:58:10.663081993+00:00 stderr F I0120 10:58:10.663012 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-proxy-reader, uid: 0fc0e835-2d7d-4935-a386-83f3cdcc2356]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.663162485+00:00 stderr F I0120 10:58:10.663110 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-ancillary-tools, uid: 7b8f2404-6a99-452b-99e6-2ee5aee1a907]" virtual=false 2026-01-20T10:58:10.667411814+00:00 stderr F I0120 10:58:10.667353 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-version-operator, uid: d1772a5b-7528-4218-b1a5-b8e37adaddd0]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.667433254+00:00 stderr F I0120 10:58:10.667413 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-cluster-readers, uid: 11e86910-6d61-4703-8234-c26167470b26]" virtual=false 2026-01-20T10:58:10.669248661+00:00 stderr F I0120 10:58:10.668917 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operator.openshift.io/v1/OpenShiftAPIServer, namespace: , name: cluster, uid: 5f5e95ce-59b9-4ad4-8913-c62d41574952]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2026-01-20T10:58:10.669248661+00:00 stderr F I0120 10:58:10.668976 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-group, uid: 7560b26d-8f8c-4cda-ac4b-0a2f586da492]" virtual=false 2026-01-20T10:58:10.672491043+00:00 stderr F I0120 10:58:10.672452 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-auth-delegator, uid: 5b5cc72d-8cfc-47d3-8393-7b66b592f99e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.672526294+00:00 stderr F I0120 10:58:10.672505 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-transient, uid: 9ca91046-71b1-4755-8430-f290694fb843]" virtual=false 2026-01-20T10:58:10.677292064+00:00 stderr F I0120 10:58:10.676101 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-extensions-reader, uid: 402476b7-93bc-4cfa-b038-c3108d7ea260]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.677292064+00:00 stderr F I0120 10:58:10.676149 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-whereabouts, uid: be8f44a2-0530-4fc0-a743-77944a2d6cfe]" virtual=false 2026-01-20T10:58:10.680330142+00:00 stderr F I0120 10:58:10.680296 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator, uid: cd350fa5-6dc3-49dd-a977-ebe3ffe5edaa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.680330142+00:00 stderr F I0120 10:58:10.680321 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-diagnostics, uid: 79dad4ee-c1a9-4879-a1dd-69ea8ca307a1]" virtual=false 2026-01-20T10:58:10.682210889+00:00 stderr F I0120 10:58:10.682173 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console, uid: b060a198-fcb6-4114-b32b-339ffffe6077]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.682210889+00:00 stderr F I0120 10:58:10.682201 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-node-identity, uid: 2fd4e670-d0fc-405b-9bb0-978652cd7871]" virtual=false 
2026-01-20T10:58:10.754036234+00:00 stderr F I0120 10:58:10.753958 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator-auth-delegator, uid: 6aee10d5-48ad-432d-ae71-69853ed0161c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.754100356+00:00 stderr F I0120 10:58:10.754031 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: olm-operator-binding-openshift-operator-lifecycle-manager, uid: 9ccd11bb-8a90-4da3-bd98-a6b01e412542]" virtual=false 2026-01-20T10:58:10.756898057+00:00 stderr F I0120 10:58:10.756833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: control-plane-machine-set-operator, uid: a398c8c9-26b5-430a-94f5-42d3f40459d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.756918957+00:00 stderr F I0120 10:58:10.756894 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-dns-operator, uid: 97ca47d3-e623-4331-b837-ffc3e3dac836]" virtual=false 2026-01-20T10:58:10.759818431+00:00 stderr F I0120 10:58:10.759762 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: 3c762506-422e-4421-915e-5872f5c48dbd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2026-01-20T10:58:10.759910453+00:00 stderr F I0120 10:58:10.759864 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ingress-operator, uid: 26eba0ad-34cf-49ed-9698-bb35cae97907]" virtual=false 2026-01-20T10:58:10.762366286+00:00 stderr F I0120 10:58:10.762304 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: custom-account-openshift-machine-config-operator, uid: 7d9e4776-ed62-42b1-b845-cc4c6cc67c88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.762388346+00:00 stderr F I0120 10:58:10.762374 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-iptables-alerter, uid: 848eff75-354e-4076-906d-a4b9cc19945b]" virtual=false 2026-01-20T10:58:10.765840194+00:00 stderr F I0120 10:58:10.765788 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: default-account-cluster-image-registry-operator, uid: cda37d84-e0e5-4e44-9a71-02182975026d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.765860754+00:00 stderr F I0120 10:58:10.765850 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 7f642a86-d7eb-4509-8666-f07259f6a62f]" virtual=false 2026-01-20T10:58:10.769046825+00:00 stderr F I0120 10:58:10.768991 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: helm-chartrepos-view, uid: 22aa2287-b511-4fb0-a162-d3c7fc093bc5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2026-01-20T10:58:10.769046825+00:00 stderr F I0120 10:58:10.769038 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-identity-limited, uid: 52425be5-e912-46fb-8a48-8e773a98d5c1]" virtual=false 2026-01-20T10:58:10.772490782+00:00 stderr F I0120 10:58:10.772436 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers, uid: 9d6d5d27-a1df-40a9-b87c-44c9c7277ec0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.772490782+00:00 stderr F I0120 10:58:10.772463 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-kube-rbac-proxy, uid: 245c37d5-9ba8-4086-aa11-840b6e8a724e]" virtual=false 2026-01-20T10:58:10.777359236+00:00 stderr F I0120 10:58:10.777313 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers-baremetal, uid: f4e67882-ed45-4d9c-b562-81ffe3d1cb30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.777385577+00:00 stderr F I0120 10:58:10.777361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: 
prometheus-k8s-scheduler-resources, uid: 83861e61-c21e-4d21-bf6a-21620ffd522d]" virtual=false 2026-01-20T10:58:10.780602559+00:00 stderr F I0120 10:58:10.780548 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator, uid: 2b488f4f-97f2-434d-a62e-6ce4db9636a2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.780602559+00:00 stderr F I0120 10:58:10.780582 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: registry-monitoring, uid: f277fb99-3fcb-44ac-b911-507aa165a19e]" virtual=false 2026-01-20T10:58:10.782308822+00:00 stderr F I0120 10:58:10.782258 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator-ext-remediation, uid: 06cb8f99-d6bc-46a7-bcf1-c610b6fc190a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.782308822+00:00 stderr F I0120 10:58:10.782297 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:controller:machine-approver, uid: 7b6449e0-b191-4f94-ad74-226c264035e7]" virtual=false 2026-01-20T10:58:10.786421876+00:00 stderr F I0120 10:58:10.786367 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: marketplace-operator, uid: 62bf9f3d-b2d7-4e1a-bd45-41b46e18211a]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:10.786421876+00:00 stderr F I0120 10:58:10.786397 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" virtual=false 2026-01-20T10:58:10.787458773+00:00 stderr F I0120 10:58:10.787404 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: metrics-daemon-sa-rolebinding, uid: 6dc834f4-1d8d-4397-b483-82d00bc808ca]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:10.787458773+00:00 stderr F I0120 10:58:10.787433 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:authentication, uid: 32dcecb0-39fb-4417-a02d-801980f312a5]" virtual=false 2026-01-20T10:58:10.791005923+00:00 stderr F I0120 10:58:10.790948 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-admission-controller-webhook, uid: 048da2fc-4827-4d11-943b-079ac5e15768]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:10.791005923+00:00 stderr F I0120 10:58:10.790981 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:cluster-kube-scheduler-operator, uid: cac5ab58-ac29-4b01-978f-1d735a5c5af1]" virtual=false 
2026-01-20T10:58:10.793178188+00:00 stderr F I0120 10:58:10.793134 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-ancillary-tools, uid: 7b8f2404-6a99-452b-99e6-2ee5aee1a907]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.793178188+00:00 stderr F I0120 10:58:10.793167 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" virtual=false
2026-01-20T10:58:10.796836531+00:00 stderr F I0120 10:58:10.796780 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-cluster-readers, uid: 11e86910-6d61-4703-8234-c26167470b26]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.796858162+00:00 stderr F I0120 10:58:10.796830 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" virtual=false
2026-01-20T10:58:10.800249028+00:00 stderr F I0120 10:58:10.800192 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-group, uid: 7560b26d-8f8c-4cda-ac4b-0a2f586da492]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.800249028+00:00 stderr F I0120 10:58:10.800231 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" virtual=false
2026-01-20T10:58:10.802855894+00:00 stderr F I0120 10:58:10.802805 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-transient, uid: 9ca91046-71b1-4755-8430-f290694fb843]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.802855894+00:00 stderr F I0120 10:58:10.802828 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" virtual=false
2026-01-20T10:58:10.806366194+00:00 stderr F I0120 10:58:10.806306 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-whereabouts, uid: be8f44a2-0530-4fc0-a743-77944a2d6cfe]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.806366194+00:00 stderr F I0120 10:58:10.806349 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:etcd-operator, uid: fe8cbe89-e1d8-491c-abcd-6405df195395]" virtual=false
2026-01-20T10:58:10.809586585+00:00 stderr F I0120 10:58:10.809533 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-diagnostics, uid: 79dad4ee-c1a9-4879-a1dd-69ea8ca307a1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.809586585+00:00 stderr F I0120 10:58:10.809568 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-apiserver-operator, uid: 6d428360-3d92-42b3-a692-62fcef7dc28f]" virtual=false
2026-01-20T10:58:10.813288879+00:00 stderr F I0120 10:58:10.813235 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-node-identity, uid: 2fd4e670-d0fc-405b-9bb0-978652cd7871]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.813288879+00:00 stderr F I0120 10:58:10.813282 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-controller-manager-operator, uid: 8fa65b16-3f3f-4a00-9f7f-b6347622e728]" virtual=false
2026-01-20T10:58:10.887359180+00:00 stderr F I0120 10:58:10.887253 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: olm-operator-binding-openshift-operator-lifecycle-manager, uid: 9ccd11bb-8a90-4da3-bd98-a6b01e412542]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.887359180+00:00 stderr F I0120 10:58:10.887313 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" virtual=false
2026-01-20T10:58:10.892050370+00:00 stderr F I0120 10:58:10.892002 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ingress-operator, uid: 26eba0ad-34cf-49ed-9698-bb35cae97907]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.892050370+00:00 stderr F I0120 10:58:10.892043 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" virtual=false
2026-01-20T10:58:10.892561512+00:00 stderr F I0120 10:58:10.892517 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-dns-operator, uid: 97ca47d3-e623-4331-b837-ffc3e3dac836]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.892561512+00:00 stderr F I0120 10:58:10.892550 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-storage-version-migrator-operator, uid: 7c611553-0af9-459b-9675-c9d6e321b535]" virtual=false
2026-01-20T10:58:10.893936338+00:00 stderr F I0120 10:58:10.893901 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-iptables-alerter, uid: 848eff75-354e-4076-906d-a4b9cc19945b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.893936338+00:00 stderr F I0120 10:58:10.893928 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-apiserver-operator, uid: 3b1d0d06-e85b-43d3-a710-b1842b977bda]" virtual=false
2026-01-20T10:58:10.898212516+00:00 stderr F I0120 10:58:10.898171 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 7f642a86-d7eb-4509-8666-f07259f6a62f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.898275758+00:00 stderr F I0120 10:58:10.898255 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" virtual=false
2026-01-20T10:58:10.900838423+00:00 stderr F I0120 10:58:10.900800 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-identity-limited, uid: 52425be5-e912-46fb-8a48-8e773a98d5c1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.900867323+00:00 stderr F I0120 10:58:10.900858 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" virtual=false
2026-01-20T10:58:10.903422118+00:00 stderr F I0120 10:58:10.903374 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-kube-rbac-proxy, uid: 245c37d5-9ba8-4086-aa11-840b6e8a724e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:10.903422118+00:00 stderr F I0120 10:58:10.903403 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" virtual=false
2026-01-20T10:58:10.910216132+00:00 stderr F I0120 10:58:10.910152 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: prometheus-k8s-scheduler-resources, uid: 83861e61-c21e-4d21-bf6a-21620ffd522d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.910238302+00:00 stderr F I0120 10:58:10.910211 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-config-operator, uid: 88bb6e9f-f058-4bf1-91ec-fac61efb59fd]" virtual=false
2026-01-20T10:58:10.916966933+00:00 stderr F I0120 10:58:10.916158 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: registry-monitoring, uid: f277fb99-3fcb-44ac-b911-507aa165a19e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.916966933+00:00 stderr F I0120 10:58:10.916216 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-controller-manager-operator, uid: 6da2b488-b18a-4be7-b541-3d8d33584c6e]" virtual=false
2026-01-20T10:58:10.917232999+00:00 stderr F I0120 10:58:10.917179 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:controller:machine-approver, uid: 7b6449e0-b191-4f94-ad74-226c264035e7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.917232999+00:00 stderr F I0120 10:58:10.917224 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:service-ca-operator, uid: 90ce341e-6032-46e0-b9d2-7a01b6b8a68b]" virtual=false
2026-01-20T10:58:10.919040776+00:00 stderr F I0120 10:58:10.919004 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.919076786+00:00 stderr F I0120 10:58:10.919037 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-monitoring, name: telemetry-config, uid: 74c753c8-b52b-48a3-9800-ce294f362b5d]" virtual=false
2026-01-20T10:58:10.921778635+00:00 stderr F I0120 10:58:10.921732 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:authentication, uid: 32dcecb0-39fb-4417-a02d-801980f312a5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.921796096+00:00 stderr F I0120 10:58:10.921785 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" virtual=false
2026-01-20T10:58:10.926866014+00:00 stderr F I0120 10:58:10.926824 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:cluster-kube-scheduler-operator, uid: cac5ab58-ac29-4b01-978f-1d735a5c5af1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.926891785+00:00 stderr F I0120 10:58:10.926867 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:scc:restricted-v2, uid: cb821e01-a19c-4b4c-8409-e70b79a0669a]" virtual=false
2026-01-20T10:58:10.930315772+00:00 stderr F I0120 10:58:10.930283 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.930360603+00:00 stderr F I0120 10:58:10.930317 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" virtual=false
2026-01-20T10:58:10.949419977+00:00 stderr F I0120 10:58:10.947940 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:10.949419977+00:00 stderr F I0120 10:58:10.947971 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" virtual=false
2026-01-20T10:58:10.949419977+00:00 stderr F I0120 10:58:10.948215 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:etcd-operator, uid: fe8cbe89-e1d8-491c-abcd-6405df195395]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.949419977+00:00 stderr F I0120 10:58:10.948236 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" virtual=false
2026-01-20T10:58:10.949419977+00:00 stderr F I0120 10:58:10.949294 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.949419977+00:00 stderr F I0120 10:58:10.949365 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" virtual=false
2026-01-20T10:58:10.957209674+00:00 stderr F I0120 10:58:10.957143 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-apiserver-operator, uid: 6d428360-3d92-42b3-a692-62fcef7dc28f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.957209674+00:00 stderr F I0120 10:58:10.957177 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" virtual=false
2026-01-20T10:58:10.963130275+00:00 stderr F I0120 10:58:10.961327 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-controller-manager-operator, uid: 8fa65b16-3f3f-4a00-9f7f-b6347622e728]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.963130275+00:00 stderr F I0120 10:58:10.961357 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" virtual=false
2026-01-20T10:58:10.967126917+00:00 stderr F I0120 10:58:10.963473 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:10.967126917+00:00 stderr F I0120 10:58:10.963532 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" virtual=false
2026-01-20T10:58:11.017939297+00:00 stderr F I0120 10:58:11.017459 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]"
2026-01-20T10:58:11.017939297+00:00 stderr F I0120 10:58:11.017520 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" virtual=false
2026-01-20T10:58:11.024092163+00:00 stderr F I0120 10:58:11.021122 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.024092163+00:00 stderr F I0120 10:58:11.021165 1 garbagecollector.go:549] "Processing item" item="[config.openshift.io/v1/OperatorHub, namespace: , name: cluster, uid: a3ebaa3e-0f5e-4439-aca1-2c7fd0b4318a]" virtual=false
2026-01-20T10:58:11.024092163+00:00 stderr F I0120 10:58:11.021834 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.024092163+00:00 stderr F I0120 10:58:11.021882 1 garbagecollector.go:549] "Processing item" item="[config.openshift.io/v1/Image, namespace: , name: cluster, uid: f9e57099-766b-4ef5-9883-f7bf72d3acd8]" virtual=false
2026-01-20T10:58:11.028107086+00:00 stderr F I0120 10:58:11.025379 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-apiserver-operator, uid: 3b1d0d06-e85b-43d3-a710-b1842b977bda]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.028107086+00:00 stderr F I0120 10:58:11.025407 1 garbagecollector.go:549] "Processing item" item="[console.openshift.io/v1/ConsoleCLIDownload, namespace: , name: helm-download-links, uid: 8ec5a8ef-9dcd-4c55-b0d4-8901c50a6e86]" virtual=false
2026-01-20T10:58:11.032119787+00:00 stderr F I0120 10:58:11.031345 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-storage-version-migrator-operator, uid: 7c611553-0af9-459b-9675-c9d6e321b535]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.032119787+00:00 stderr F I0120 10:58:11.031378 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver, uid: 08401087-e2b5-41ab-b2ae-1b49b4b21090]" virtual=false
2026-01-20T10:58:11.036092719+00:00 stderr F I0120 10:58:11.035279 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.036092719+00:00 stderr F I0120 10:58:11.035303 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-operator, uid: 435f63da-cd39-4cca-8a62-b69e4f06c2e9]" virtual=false
2026-01-20T10:58:11.039248009+00:00 stderr F I0120 10:58:11.038617 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-config-operator, uid: 88bb6e9f-f058-4bf1-91ec-fac61efb59fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.039248009+00:00 stderr F I0120 10:58:11.038638 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-sar, uid: 939310fe-0d2e-4a8a-b1b3-0a5805456306]" virtual=false
2026-01-20T10:58:11.041129916+00:00 stderr F I0120 10:58:11.041091 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.041129916+00:00 stderr F I0120 10:58:11.041116 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-authentication-operator, uid: e5927a4e-7a4c-4a11-a3d1-4186ce46adf3]" virtual=false
2026-01-20T10:58:11.044459131+00:00 stderr F I0120 10:58:11.044424 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-controller-manager-operator, uid: 6da2b488-b18a-4be7-b541-3d8d33584c6e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.044459131+00:00 stderr F I0120 10:58:11.044451 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-controller-manager, uid: 68cc303b-a0cf-4b58-b7ad-75a4bff72246]" virtual=false
2026-01-20T10:58:11.047658803+00:00 stderr F I0120 10:58:11.047605 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.047658803+00:00 stderr F I0120 10:58:11.047652 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-etcd-operator, uid: 9a9602ea-26a7-4c24-9fda-572140113932]" virtual=false
2026-01-20T10:58:11.050730201+00:00 stderr F I0120 10:58:11.050687 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:service-ca-operator, uid: 90ce341e-6032-46e0-b9d2-7a01b6b8a68b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.050749921+00:00 stderr F I0120 10:58:11.050726 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-kube-apiserver-operator, uid: 29bd4a1f-f8f3-4c5b-8b56-4b5faf744ff9]" virtual=false
2026-01-20T10:58:11.054684461+00:00 stderr F I0120 10:58:11.054640 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:11.054704941+00:00 stderr F I0120 10:58:11.054698 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-monitoring-metrics, uid: 9bbe637c-a1dc-48d4-bf30-3aee0abd189a]" virtual=false
2026-01-20T10:58:11.056236320+00:00 stderr F I0120 10:58:11.056015 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-monitoring, name: telemetry-config, uid: 74c753c8-b52b-48a3-9800-ce294f362b5d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.056236320+00:00 stderr F I0120 10:58:11.056053 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver, uid: b67a06ba-eaf2-4045-a4f3-80f32bd3af74]" virtual=false
2026-01-20T10:58:11.061609697+00:00 stderr F I0120 10:58:11.061214 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:11.061609697+00:00 stderr F I0120 10:58:11.061222 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:scc:restricted-v2, uid: cb821e01-a19c-4b4c-8409-e70b79a0669a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.061609697+00:00 stderr F I0120 10:58:11.061259 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver-sar, uid: d58e2800-eb4f-4b97-a79d-1d5375725f25]" virtual=false
2026-01-20T10:58:11.061609697+00:00 stderr F I0120 10:58:11.061264 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-server, uid: 1cc90dfb-3877-4794-b4fb-a55f9f9fd21f]" virtual=false
2026-01-20T10:58:11.063547125+00:00 stderr F I0120 10:58:11.063213 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:11.063547125+00:00 stderr F I0120 10:58:11.063246 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-ovn-kubernetes, uid: e774665f-b1c0-416c-adde-718d512e91f9]" virtual=false
2026-01-20T10:58:11.067302542+00:00 stderr F I0120 10:58:11.067264 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:11.067302542+00:00 stderr F I0120 10:58:11.067292 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminnetworkpolicies.policy.networking.k8s.io, uid: 3b2c3d0f-1154-4221-88d8-96b15373169a]" virtual=false
2026-01-20T10:58:11.070855722+00:00 stderr F I0120 10:58:11.070816 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:11.070873912+00:00 stderr F I0120 10:58:11.070855 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminpolicybasedexternalroutes.k8s.ovn.org, uid: 8e2ba691-6cd2-489e-929b-7aaef51d1387]" virtual=false
2026-01-20T10:58:11.073301044+00:00 stderr F I0120 10:58:11.073254 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2026-01-20T10:58:11.073301044+00:00 stderr F I0120 10:58:11.073288 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: baselineadminnetworkpolicies.policy.networking.k8s.io, uid: d059be2d-29f9-4a7a-9bc5-e75a784d63e4]" virtual=false
2026-01-20T10:58:11.089958557+00:00 stderr F I0120 10:58:11.089895 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:11.089958557+00:00 stderr F I0120 10:58:11.089931 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressfirewalls.k8s.ovn.org, uid: a49e9150-d58b-4ccf-9fa5-c6abaac414b2]" virtual=false
2026-01-20T10:58:11.146964065+00:00 stderr F I0120 10:58:11.146872 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.146964065+00:00 stderr F I0120 10:58:11.146922 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressips.k8s.ovn.org, uid: 9f3a9522-f1f8-4d28-bf47-0823bd97a03c]" virtual=false
2026-01-20T10:58:11.148618937+00:00 stderr F I0120 10:58:11.148591 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[config.openshift.io/v1/OperatorHub, namespace: , name: cluster, uid: a3ebaa3e-0f5e-4439-aca1-2c7fd0b4318a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:11.148638067+00:00 stderr F I0120 10:58:11.148618 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressqoses.k8s.ovn.org, uid: c31943bc-e8ef-4bee-9147-31e90cad6638]" virtual=false
2026-01-20T10:58:11.152862114+00:00 stderr F I0120 10:58:11.152827 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[config.openshift.io/v1/Image, namespace: , name: cluster, uid: f9e57099-766b-4ef5-9883-f7bf72d3acd8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2026-01-20T10:58:11.152862114+00:00 stderr F I0120 10:58:11.152855 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressservices.k8s.ovn.org, uid: 1b3e9680-0044-44dc-b306-5f8b2922b184]" virtual=false
2026-01-20T10:58:11.159377910+00:00 stderr F I0120 10:58:11.159326 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[console.openshift.io/v1/ConsoleCLIDownload, namespace: , name: helm-download-links, uid: 8ec5a8ef-9dcd-4c55-b0d4-8901c50a6e86]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.159401061+00:00 stderr F I0120 10:58:11.159380 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: ippools.whereabouts.cni.cncf.io, uid: 6327ea74-f505-4fab-a6a5-2f489c34a8c6]" virtual=false
2026-01-20T10:58:11.163952796+00:00 stderr F I0120 10:58:11.163909 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver, uid: 08401087-e2b5-41ab-b2ae-1b49b4b21090]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.163985637+00:00 stderr F I0120 10:58:11.163958 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: network-attachment-definitions.k8s.cni.cncf.io, uid: a99eded9-1e26-4704-8381-dc109f03cc1f]" virtual=false
2026-01-20T10:58:11.167533157+00:00 stderr F I0120 10:58:11.167488 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-operator, uid: 435f63da-cd39-4cca-8a62-b69e4f06c2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.167550837+00:00 stderr F I0120 10:58:11.167537 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: overlappingrangeipreservations.whereabouts.cni.cncf.io, uid: ae18c5bd-30f0-4e7f-bcc1-2d4dcbc94239]" virtual=false
2026-01-20T10:58:11.170412080+00:00 stderr F I0120 10:58:11.170351 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-sar, uid: 939310fe-0d2e-4a8a-b1b3-0a5805456306]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.173154339+00:00 stderr F I0120 10:58:11.173132 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-authentication-operator, uid: e5927a4e-7a4c-4a11-a3d1-4186ce46adf3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.176037723+00:00 stderr F I0120 10:58:11.176020 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-controller-manager, uid: 68cc303b-a0cf-4b58-b7ad-75a4bff72246]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.180079916+00:00 stderr F I0120 10:58:11.180013 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-etcd-operator, uid: 9a9602ea-26a7-4c24-9fda-572140113932]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.183021371+00:00 stderr F I0120 10:58:11.182921 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-kube-apiserver-operator, uid: 29bd4a1f-f8f3-4c5b-8b56-4b5faf744ff9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2026-01-20T10:58:11.187716080+00:00 stderr F I0120 10:58:11.187677 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-monitoring-metrics, uid: 9bbe637c-a1dc-48d4-bf30-3aee0abd189a]"
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:11.190817709+00:00 stderr F I0120 10:58:11.190768 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver, uid: b67a06ba-eaf2-4045-a4f3-80f32bd3af74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:11.194209955+00:00 stderr F I0120 10:58:11.194174 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-server, uid: 1cc90dfb-3877-4794-b4fb-a55f9f9fd21f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2026-01-20T10:58:11.196993045+00:00 stderr F I0120 10:58:11.196931 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-ovn-kubernetes, uid: e774665f-b1c0-416c-adde-718d512e91f9]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.197552619+00:00 stderr F I0120 10:58:11.197502 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver-sar, uid: d58e2800-eb4f-4b97-a79d-1d5375725f25]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2026-01-20T10:58:11.200034203+00:00 stderr F I0120 10:58:11.199992 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminpolicybasedexternalroutes.k8s.ovn.org, uid: 8e2ba691-6cd2-489e-929b-7aaef51d1387]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.204656940+00:00 stderr F I0120 10:58:11.204618 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminnetworkpolicies.policy.networking.k8s.io, uid: 3b2c3d0f-1154-4221-88d8-96b15373169a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.208017946+00:00 stderr F I0120 10:58:11.207953 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: baselineadminnetworkpolicies.policy.networking.k8s.io, uid: d059be2d-29f9-4a7a-9bc5-e75a784d63e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.221144959+00:00 stderr F I0120 10:58:11.221052 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressfirewalls.k8s.ovn.org, uid: a49e9150-d58b-4ccf-9fa5-c6abaac414b2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 
2026-01-20T10:58:11.233800360+00:00 stderr F I0120 10:58:11.233737 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressips.k8s.ovn.org, uid: 9f3a9522-f1f8-4d28-bf47-0823bd97a03c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.236483488+00:00 stderr F I0120 10:58:11.236435 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressqoses.k8s.ovn.org, uid: c31943bc-e8ef-4bee-9147-31e90cad6638]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.240304715+00:00 stderr F I0120 10:58:11.240256 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressservices.k8s.ovn.org, uid: 1b3e9680-0044-44dc-b306-5f8b2922b184]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.244045831+00:00 stderr F I0120 10:58:11.243998 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: ippools.whereabouts.cni.cncf.io, uid: 6327ea74-f505-4fab-a6a5-2f489c34a8c6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.246720739+00:00 stderr F I0120 10:58:11.246662 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: network-attachment-definitions.k8s.cni.cncf.io, uid: a99eded9-1e26-4704-8381-dc109f03cc1f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:11.250579866+00:00 stderr F I0120 10:58:11.250543 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: overlappingrangeipreservations.whereabouts.cni.cncf.io, uid: ae18c5bd-30f0-4e7f-bcc1-2d4dcbc94239]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2026-01-20T10:58:12.994475942+00:00 stderr F I0120 10:58:12.994347 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: must-gather-hpl5w, uid: cd7b16f1-0abe-4901-9d33-515efa827346]" virtual=false 2026-01-20T10:58:12.997293964+00:00 stderr F I0120 10:58:12.997239 1 garbagecollector.go:688] "Deleting item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: must-gather-hpl5w, uid: cd7b16f1-0abe-4901-9d33-515efa827346]" propagationPolicy="Background" 2026-01-20T10:58:17.928164009+00:00 stderr F I0120 10:58:17.927857 1 namespace_controller.go:182] "Namespace has been deleted" namespace="openshift-must-gather-jdb4k"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.log
2026-01-20T10:42:02.269217579+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done' 2026-01-20T10:42:02.276954797+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2026-01-20T10:42:02.288526056+00:00 stderr F + '[' -n '' ']' 2026-01-20T10:42:02.289893891+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']' 2026-01-20T10:42:02.290262202+00:00 stdout F Copying system trust bundle 2026-01-20T10:42:02.290284801+00:00 stderr F + echo 'Copying system trust bundle' 2026-01-20T10:42:02.290284801+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2026-01-20T10:42:02.300902059+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2026-01-20T10:42:02.483943609+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2
--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false 
--feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true 
--pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2026-01-20T10:42:03.020911625+00:00 stderr F W0120 10:42:03.020656 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2026-01-20T10:42:03.020975523+00:00 stderr F W0120 10:42:03.020907 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2026-01-20T10:42:03.020975523+00:00 stderr F W0120 10:42:03.020963 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2026-01-20T10:42:03.021072355+00:00 stderr F W0120 10:42:03.021021 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2026-01-20T10:42:03.021115564+00:00 stderr F W0120 10:42:03.021087 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2026-01-20T10:42:03.021189757+00:00 stderr F W0120 10:42:03.021149 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2026-01-20T10:42:03.021244800+00:00 stderr F W0120 10:42:03.021212 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2026-01-20T10:42:03.021330377+00:00 stderr F W0120 10:42:03.021286 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2026-01-20T10:42:03.021477404+00:00 stderr 
F W0120 10:42:03.021434 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2026-01-20T10:42:03.021555696+00:00 stderr F W0120 10:42:03.021516 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2026-01-20T10:42:03.021644102+00:00 stderr F W0120 10:42:03.021610 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2026-01-20T10:42:03.021724742+00:00 stderr F W0120 10:42:03.021692 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2026-01-20T10:42:03.021837406+00:00 stderr F W0120 10:42:03.021803 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2026-01-20T10:42:03.021928671+00:00 stderr F W0120 10:42:03.021895 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2026-01-20T10:42:03.022970156+00:00 stderr F W0120 10:42:03.022909 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2026-01-20T10:42:03.023039252+00:00 stderr F W0120 10:42:03.023005 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2026-01-20T10:42:03.023126188+00:00 stderr F W0120 10:42:03.023091 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2026-01-20T10:42:03.023210767+00:00 stderr F W0120 10:42:03.023177 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2026-01-20T10:42:03.023445520+00:00 stderr F W0120 10:42:03.023401 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2026-01-20T10:42:03.023605111+00:00 stderr F W0120 10:42:03.023570 1 feature_gate.go:227] unrecognized feature gate: Example 2026-01-20T10:42:03.023719914+00:00 stderr F W0120 10:42:03.023686 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2026-01-20T10:42:03.023830220+00:00 stderr F W0120 10:42:03.023795 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2026-01-20T10:42:03.023931120+00:00 stderr F W0120 10:42:03.023898 1 feature_gate.go:227] 
unrecognized feature gate: ExternalCloudProviderGCP 2026-01-20T10:42:03.024011660+00:00 stderr F W0120 10:42:03.023979 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2026-01-20T10:42:03.024084094+00:00 stderr F W0120 10:42:03.024052 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2026-01-20T10:42:03.024202335+00:00 stderr F W0120 10:42:03.024164 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2026-01-20T10:42:03.024277988+00:00 stderr F W0120 10:42:03.024246 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2026-01-20T10:42:03.024363085+00:00 stderr F W0120 10:42:03.024331 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2026-01-20T10:42:03.024444195+00:00 stderr F W0120 10:42:03.024412 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2026-01-20T10:42:03.024545695+00:00 stderr F W0120 10:42:03.024511 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2026-01-20T10:42:03.024616790+00:00 stderr F W0120 10:42:03.024583 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2026-01-20T10:42:03.024716930+00:00 stderr F W0120 10:42:03.024683 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2026-01-20T10:42:03.024825047+00:00 stderr F W0120 10:42:03.024791 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2026-01-20T10:42:03.024909005+00:00 stderr F W0120 10:42:03.024876 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2026-01-20T10:42:03.024994942+00:00 stderr F W0120 10:42:03.024961 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2026-01-20T10:42:03.025075203+00:00 stderr F W0120 10:42:03.025042 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2026-01-20T10:42:03.025153054+00:00 stderr F W0120 10:42:03.025120 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2026-01-20T10:42:03.025244799+00:00 stderr F W0120 10:42:03.025213 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2026-01-20T10:42:03.025341262+00:00 stderr F W0120 10:42:03.025298 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2026-01-20T10:42:03.025494796+00:00 stderr F W0120 10:42:03.025456 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2026-01-20T10:42:03.025590448+00:00 stderr F W0120 10:42:03.025546 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2026-01-20T10:42:03.025655946+00:00 stderr F W0120 10:42:03.025630 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2026-01-20T10:42:03.025757955+00:00 stderr F W0120 10:42:03.025706 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2026-01-20T10:42:03.025903503+00:00 stderr F W0120 10:42:03.025867 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2026-01-20T10:42:03.025974708+00:00 stderr F W0120 10:42:03.025945 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2026-01-20T10:42:03.026066162+00:00 stderr F W0120 10:42:03.026032 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2026-01-20T10:42:03.026204664+00:00 stderr F W0120 10:42:03.026172 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2026-01-20T10:42:03.026609073+00:00 stderr F W0120 10:42:03.026574 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2026-01-20T10:42:03.026856411+00:00 stderr F W0120 10:42:03.026791 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2026-01-20T10:42:03.026917321+00:00 stderr F W0120 
10:42:03.026884 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2026-01-20T10:42:03.026998800+00:00 stderr F W0120 10:42:03.026968 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2026-01-20T10:42:03.027440651+00:00 stderr F W0120 10:42:03.027404 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2026-01-20T10:42:03.027530187+00:00 stderr F W0120 10:42:03.027495 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2026-01-20T10:42:03.027760263+00:00 stderr F W0120 10:42:03.027700 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2026-01-20T10:42:03.027848119+00:00 stderr F W0120 10:42:03.027816 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2026-01-20T10:42:03.027928819+00:00 stderr F W0120 10:42:03.027897 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2026-01-20T10:42:03.028051838+00:00 stderr F W0120 10:42:03.028018 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2026-01-20T10:42:03.028159646+00:00 stderr F W0120 10:42:03.028119 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2026-01-20T10:42:03.028328502+00:00 stderr F W0120 10:42:03.028295 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2026-01-20T10:42:03.028537619+00:00 stderr F I0120 10:42:03.028497 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2026-01-20T10:42:03.028551062+00:00 stderr F I0120 10:42:03.028520 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2026-01-20T10:42:03.028562167+00:00 stderr F I0120 10:42:03.028545 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2026-01-20T10:42:03.028562167+00:00 stderr F I0120 10:42:03.028555 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2026-01-20T10:42:03.028576230+00:00 stderr F I0120 10:42:03.028563 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2026-01-20T10:42:03.028586934+00:00 stderr F I0120 
10:42:03.028574 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2026-01-20T10:42:03.028598029+00:00 stderr F I0120 10:42:03.028584 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2026-01-20T10:42:03.028598029+00:00 stderr F I0120 10:42:03.028591 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2026-01-20T10:42:03.028609083+00:00 stderr F I0120 10:42:03.028598 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2026-01-20T10:42:03.028647934+00:00 stderr F I0120 10:42:03.028605 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2026-01-20T10:42:03.028647934+00:00 stderr F I0120 10:42:03.028631 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2026-01-20T10:42:03.028647934+00:00 stderr F I0120 10:42:03.028640 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2026-01-20T10:42:03.028674451+00:00 stderr F I0120 10:42:03.028647 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2026-01-20T10:42:03.028674451+00:00 stderr F I0120 10:42:03.028654 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2026-01-20T10:42:03.028674451+00:00 stderr F I0120 10:42:03.028663 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2026-01-20T10:42:03.028687514+00:00 stderr F I0120 10:42:03.028675 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2026-01-20T10:42:03.028698589+00:00 stderr F I0120 10:42:03.028683 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2026-01-20T10:42:03.028698589+00:00 stderr F I0120 10:42:03.028691 1 flags.go:64] FLAG: --cloud-config="" 2026-01-20T10:42:03.028709693+00:00 stderr F I0120 10:42:03.028697 1 flags.go:64] FLAG: --cloud-provider="" 2026-01-20T10:42:03.028720328+00:00 stderr F I0120 
10:42:03.028706 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
2026-01-20T10:42:03.028755011+00:00 stderr F I0120 10:42:03.028719 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22"
2026-01-20T10:42:03.028755011+00:00 stderr F I0120 10:42:03.028726 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd"
2026-01-20T10:42:03.028771563+00:00 stderr F I0120 10:42:03.028759 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt"
2026-01-20T10:42:03.028782907+00:00 stderr F I0120 10:42:03.028769 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s"
2026-01-20T10:42:03.028782907+00:00 stderr F I0120 10:42:03.028776 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key"
2026-01-20T10:42:03.028794781+00:00 stderr F I0120 10:42:03.028784 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file=""
2026-01-20T10:42:03.028806206+00:00 stderr F I0120 10:42:03.028791 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file=""
2026-01-20T10:42:03.028806206+00:00 stderr F I0120 10:42:03.028799 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file=""
2026-01-20T10:42:03.028817720+00:00 stderr F I0120 10:42:03.028805 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file=""
2026-01-20T10:42:03.028817720+00:00 stderr F I0120 10:42:03.028812 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file=""
2026-01-20T10:42:03.028828694+00:00 stderr F I0120 10:42:03.028818 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file=""
2026-01-20T10:42:03.028839759+00:00 stderr F I0120 10:42:03.028826 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file=""
2026-01-20T10:42:03.028839759+00:00 stderr F I0120 10:42:03.028832 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file=""
2026-01-20T10:42:03.028882118+00:00 stderr F I0120 10:42:03.028846 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5"
2026-01-20T10:42:03.028882118+00:00 stderr F I0120 10:42:03.028862 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5"
2026-01-20T10:42:03.028882118+00:00 stderr F I0120 10:42:03.028869 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5"
2026-01-20T10:42:03.028882118+00:00 stderr F I0120 10:42:03.028876 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5"
2026-01-20T10:42:03.028896191+00:00 stderr F I0120 10:42:03.028883 1 flags.go:64] FLAG: --concurrent-gc-syncs="20"
2026-01-20T10:42:03.028896191+00:00 stderr F I0120 10:42:03.028890 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5"
2026-01-20T10:42:03.028913472+00:00 stderr F I0120 10:42:03.028896 1 flags.go:64] FLAG: --concurrent-job-syncs="5"
2026-01-20T10:42:03.028913472+00:00 stderr F I0120 10:42:03.028902 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10"
2026-01-20T10:42:03.028925017+00:00 stderr F I0120 10:42:03.028909 1 flags.go:64] FLAG: --concurrent-rc-syncs="5"
2026-01-20T10:42:03.028925017+00:00 stderr F I0120 10:42:03.028919 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5"
2026-01-20T10:42:03.028936681+00:00 stderr F I0120 10:42:03.028925 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5"
2026-01-20T10:42:03.028936681+00:00 stderr F I0120 10:42:03.028932 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5"
2026-01-20T10:42:03.028947666+00:00 stderr F I0120 10:42:03.028938 1 flags.go:64] FLAG: --concurrent-service-syncs="1"
2026-01-20T10:42:03.028958700+00:00 stderr F I0120 10:42:03.028945 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5"
2026-01-20T10:42:03.028958700+00:00 stderr F I0120 10:42:03.028951 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5"
2026-01-20T10:42:03.028970404+00:00 stderr F I0120 10:42:03.028958 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5"
2026-01-20T10:42:03.028970404+00:00 stderr F I0120 10:42:03.028964 1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5"
2026-01-20T10:42:03.028981369+00:00 stderr F I0120 10:42:03.028971 1 flags.go:64] FLAG: --configure-cloud-routes="true"
2026-01-20T10:42:03.028991944+00:00 stderr F I0120 10:42:03.028983 1 flags.go:64] FLAG: --contention-profiling="false"
2026-01-20T10:42:03.029002508+00:00 stderr F I0120 10:42:03.028989 1 flags.go:64] FLAG: --controller-start-interval="0s"
2026-01-20T10:42:03.029013073+00:00 stderr F I0120 10:42:03.028997 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]"
2026-01-20T10:42:03.029023658+00:00 stderr F I0120 10:42:03.029012 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false"
2026-01-20T10:42:03.029034213+00:00 stderr F I0120 10:42:03.029019 1 flags.go:64] FLAG: --disabled-metrics="[]"
2026-01-20T10:42:03.029045207+00:00 stderr F I0120 10:42:03.029031 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true"
2026-01-20T10:42:03.029045207+00:00 stderr F I0120 10:42:03.029038 1 flags.go:64] FLAG: --enable-garbage-collector="true"
2026-01-20T10:42:03.029056672+00:00 stderr F I0120 10:42:03.029045 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false"
2026-01-20T10:42:03.029056672+00:00 stderr F I0120 10:42:03.029051 1 flags.go:64] FLAG: --enable-leader-migration="false"
2026-01-20T10:42:03.029067616+00:00 stderr F I0120 10:42:03.029058 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s"
2026-01-20T10:42:03.029078651+00:00 stderr F I0120 10:42:03.029065 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s"
2026-01-20T10:42:03.029078651+00:00 stderr F I0120 10:42:03.029071 1 flags.go:64] FLAG: --external-cloud-volume-plugin=""
2026-01-20T10:42:03.029153394+00:00 stderr F I0120 10:42:03.029078 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false"
2026-01-20T10:42:03.029153394+00:00 stderr F I0120 10:42:03.029136 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
2026-01-20T10:42:03.029153394+00:00 stderr F I0120 10:42:03.029146 1 flags.go:64] FLAG: --help="false"
2026-01-20T10:42:03.029167067+00:00 stderr F I0120 10:42:03.029153 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s"
2026-01-20T10:42:03.029167067+00:00 stderr F I0120 10:42:03.029160 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s"
2026-01-20T10:42:03.029183059+00:00 stderr F I0120 10:42:03.029167 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s"
2026-01-20T10:42:03.029194193+00:00 stderr F I0120 10:42:03.029179 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s"
2026-01-20T10:42:03.029194193+00:00 stderr F I0120 10:42:03.029187 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s"
2026-01-20T10:42:03.029205718+00:00 stderr F I0120 10:42:03.029193 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1"
2026-01-20T10:42:03.029216302+00:00 stderr F I0120 10:42:03.029205 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s"
2026-01-20T10:42:03.029227297+00:00 stderr F I0120 10:42:03.029212 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0"
2026-01-20T10:42:03.029227297+00:00 stderr F I0120 10:42:03.029220 1 flags.go:64] FLAG: --kube-api-burst="300"
2026-01-20T10:42:03.029238262+00:00 stderr F I0120 10:42:03.029227 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
2026-01-20T10:42:03.029248826+00:00 stderr F I0120 10:42:03.029235 1 flags.go:64] FLAG: --kube-api-qps="150"
2026-01-20T10:42:03.029259891+00:00 stderr F I0120 10:42:03.029245 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig"
2026-01-20T10:42:03.029259891+00:00 stderr F I0120 10:42:03.029255 1 flags.go:64] FLAG: --large-cluster-size-threshold="50"
2026-01-20T10:42:03.029270895+00:00 stderr F I0120 10:42:03.029261 1 flags.go:64] FLAG: --leader-elect="true"
2026-01-20T10:42:03.029281920+00:00 stderr F I0120 10:42:03.029268 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s"
2026-01-20T10:42:03.029281920+00:00 stderr F I0120 10:42:03.029275 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s"
2026-01-20T10:42:03.029293634+00:00 stderr F I0120 10:42:03.029282 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases"
2026-01-20T10:42:03.029304659+00:00 stderr F I0120 10:42:03.029289 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager"
2026-01-20T10:42:03.029304659+00:00 stderr F I0120 10:42:03.029297 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
2026-01-20T10:42:03.029316143+00:00 stderr F I0120 10:42:03.029303 1 flags.go:64] FLAG: --leader-elect-retry-period="3s"
2026-01-20T10:42:03.029316143+00:00 stderr F I0120 10:42:03.029310 1 flags.go:64] FLAG: --leader-migration-config=""
2026-01-20T10:42:03.029355593+00:00 stderr F I0120 10:42:03.029322 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s"
2026-01-20T10:42:03.029355593+00:00 stderr F I0120 10:42:03.029336 1 flags.go:64] FLAG: --log-flush-frequency="5s"
2026-01-20T10:42:03.029355593+00:00 stderr F I0120 10:42:03.029343 1 flags.go:64] FLAG: --log-json-info-buffer-size="0"
2026-01-20T10:42:03.029368877+00:00 stderr F I0120 10:42:03.029355 1 flags.go:64] FLAG: --log-json-split-stream="false"
2026-01-20T10:42:03.029368877+00:00 stderr F I0120 10:42:03.029362 1 flags.go:64] FLAG: --logging-format="text"
2026-01-20T10:42:03.029380361+00:00 stderr F I0120 10:42:03.029368 1 flags.go:64] FLAG: --master=""
2026-01-20T10:42:03.029380361+00:00 stderr F I0120 10:42:03.029375 1 flags.go:64] FLAG: --max-endpoints-per-slice="100"
2026-01-20T10:42:03.029391356+00:00 stderr F I0120 10:42:03.029381 1 flags.go:64] FLAG: --min-resync-period="12h0m0s"
2026-01-20T10:42:03.029402440+00:00 stderr F I0120 10:42:03.029388 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5"
2026-01-20T10:42:03.029402440+00:00 stderr F I0120 10:42:03.029395 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s"
2026-01-20T10:42:03.029414114+00:00 stderr F I0120 10:42:03.029402 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000"
2026-01-20T10:42:03.029414114+00:00 stderr F I0120 10:42:03.029409 1 flags.go:64] FLAG: --namespace-sync-period="5m0s"
2026-01-20T10:42:03.029437783+00:00 stderr F I0120 10:42:03.029416 1 flags.go:64] FLAG: --node-cidr-mask-size="0"
2026-01-20T10:42:03.029437783+00:00 stderr F I0120 10:42:03.029422 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0"
2026-01-20T10:42:03.029437783+00:00 stderr F I0120 10:42:03.029428 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0"
2026-01-20T10:42:03.029449567+00:00 stderr F I0120 10:42:03.029434 1 flags.go:64] FLAG: --node-eviction-rate="0.1"
2026-01-20T10:42:03.029449567+00:00 stderr F I0120 10:42:03.029442 1 flags.go:64] FLAG: --node-monitor-grace-period="40s"
2026-01-20T10:42:03.029460581+00:00 stderr F I0120 10:42:03.029448 1 flags.go:64] FLAG: --node-monitor-period="5s"
2026-01-20T10:42:03.029471146+00:00 stderr F I0120 10:42:03.029461 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s"
2026-01-20T10:42:03.029482191+00:00 stderr F I0120 10:42:03.029469 1 flags.go:64] FLAG: --node-sync-period="0s"
2026-01-20T10:42:03.029482191+00:00 stderr F I0120 10:42:03.029475 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml"
2026-01-20T10:42:03.029493175+00:00 stderr F I0120 10:42:03.029483 1 flags.go:64] FLAG: --permit-address-sharing="false"
2026-01-20T10:42:03.029504230+00:00 stderr F I0120 10:42:03.029490 1 flags.go:64] FLAG: --permit-port-sharing="false"
2026-01-20T10:42:03.029504230+00:00 stderr F I0120 10:42:03.029497 1 flags.go:64] FLAG: --profiling="true"
2026-01-20T10:42:03.029515634+00:00 stderr F I0120 10:42:03.029503 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30"
2026-01-20T10:42:03.029515634+00:00 stderr F I0120 10:42:03.029510 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60"
2026-01-20T10:42:03.029526619+00:00 stderr F I0120 10:42:03.029516 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300"
2026-01-20T10:42:03.029537883+00:00 stderr F I0120 10:42:03.029523 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml"
2026-01-20T10:42:03.029549267+00:00 stderr F I0120 10:42:03.029532 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml"
2026-01-20T10:42:03.029549267+00:00 stderr F I0120 10:42:03.029541 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30"
2026-01-20T10:42:03.029560292+00:00 stderr F I0120 10:42:03.029547 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s"
2026-01-20T10:42:03.029570837+00:00 stderr F I0120 10:42:03.029554 1 flags.go:64] FLAG: --requestheader-allowed-names="[]"
2026-01-20T10:42:03.029581391+00:00 stderr F I0120 10:42:03.029567 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2026-01-20T10:42:03.029594365+00:00 stderr F I0120 10:42:03.029578 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
2026-01-20T10:42:03.029642561+00:00 stderr F I0120 10:42:03.029592 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
2026-01-20T10:42:03.029642561+00:00 stderr F I0120 10:42:03.029622 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
2026-01-20T10:42:03.029654905+00:00 stderr F I0120 10:42:03.029646 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s"
2026-01-20T10:42:03.029665570+00:00 stderr F I0120 10:42:03.029654 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt"
2026-01-20T10:42:03.029676614+00:00 stderr F I0120 10:42:03.029663 1 flags.go:64] FLAG: --route-reconciliation-period="10s"
2026-01-20T10:42:03.029676614+00:00 stderr F I0120 10:42:03.029670 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01"
2026-01-20T10:42:03.029693116+00:00 stderr F I0120 10:42:03.029677 1 flags.go:64] FLAG: --secure-port="10257"
2026-01-20T10:42:03.029693116+00:00 stderr F I0120 10:42:03.029684 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key"
2026-01-20T10:42:03.029704890+00:00 stderr F I0120 10:42:03.029693 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23"
2026-01-20T10:42:03.029704890+00:00 stderr F I0120 10:42:03.029700 1 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
2026-01-20T10:42:03.029715795+00:00 stderr F I0120 10:42:03.029707 1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500"
2026-01-20T10:42:03.029726360+00:00 stderr F I0120 10:42:03.029714 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt"
2026-01-20T10:42:03.029822222+00:00 stderr F I0120 10:42:03.029722 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]"
2026-01-20T10:42:03.029822222+00:00 stderr F I0120 10:42:03.029805 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12"
2026-01-20T10:42:03.029822222+00:00 stderr F I0120 10:42:03.029814 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2026-01-20T10:42:03.029842392+00:00 stderr F I0120 10:42:03.029822 1 flags.go:64] FLAG: --tls-sni-cert-key="[]"
2026-01-20T10:42:03.029853567+00:00 stderr F I0120 10:42:03.029841 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55"
2026-01-20T10:42:03.029853567+00:00 stderr F I0120 10:42:03.029848 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false"
2026-01-20T10:42:03.029864511+00:00 stderr F I0120 10:42:03.029855 1 flags.go:64] FLAG: --use-service-account-credentials="true"
2026-01-20T10:42:03.029875036+00:00 stderr F I0120 10:42:03.029862 1 flags.go:64] FLAG: --v="2"
2026-01-20T10:42:03.029887900+00:00 stderr F I0120 10:42:03.029877 1 flags.go:64] FLAG: --version="false"
2026-01-20T10:42:03.029899104+00:00 stderr F I0120 10:42:03.029886 1 flags.go:64] FLAG: --vmodule=""
2026-01-20T10:42:03.029909589+00:00 stderr F I0120 10:42:03.029895 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true"
2026-01-20T10:42:03.029920094+00:00 stderr F I0120 10:42:03.029902 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]"
2026-01-20T10:42:03.054956035+00:00 stderr F I0120 10:42:03.054872 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2026-01-20T10:42:03.496934314+00:00 stderr F I0120 10:42:03.496833 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2026-01-20T10:42:03.497856960+00:00 stderr F I0120 10:42:03.497803 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2026-01-20T10:42:03.502202657+00:00 stderr F I0120 10:42:03.502139 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3"
2026-01-20T10:42:03.502202657+00:00 stderr F I0120 10:42:03.502177 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2026-01-20T10:42:03.511004024+00:00 stderr F I0120 10:42:03.510941 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2026-01-20T10:42:03.511432506+00:00 stderr F I0120 10:42:03.511373 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:42:03.511318273 +0000 UTC))"
2026-01-20T10:42:03.511460717+00:00 stderr F I0120 10:42:03.511429 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:42:03.511406935 +0000 UTC))"
2026-01-20T10:42:03.511483498+00:00 stderr F I0120 10:42:03.511455 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:42:03.511438286 +0000 UTC))"
2026-01-20T10:42:03.511504728+00:00 stderr F I0120 10:42:03.511479 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:42:03.511464467 +0000 UTC))"
2026-01-20T10:42:03.511520799+00:00 stderr F I0120 10:42:03.511501 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:42:03.511487038 +0000 UTC))"
2026-01-20T10:42:03.511610891+00:00 stderr F I0120 10:42:03.511535 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:42:03.511509478 +0000 UTC))"
2026-01-20T10:42:03.511610891+00:00 stderr F I0120 10:42:03.511561 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:42:03.511547069 +0000 UTC))"
2026-01-20T10:42:03.511610891+00:00 stderr F I0120 10:42:03.511596 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:42:03.51156743 +0000 UTC))"
2026-01-20T10:42:03.511652052+00:00 stderr F I0120 10:42:03.511617 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:42:03.511601901 +0000 UTC))"
2026-01-20T10:42:03.511652052+00:00 stderr F I0120 10:42:03.511642 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:42:03.511626932 +0000 UTC))"
2026-01-20T10:42:03.511872969+00:00 stderr F I0120 10:42:03.511833 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2026-01-20T10:42:03.512053904+00:00 stderr F I0120 10:42:03.512010 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2026-01-20 10:42:03.511984632 +0000 UTC))"
2026-01-20T10:42:03.512445935+00:00 stderr F I0120 10:42:03.512403 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768905723\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768905723\" (2026-01-20 09:42:03 +0000 UTC to 2027-01-20 09:42:03 +0000 UTC (now=2026-01-20 10:42:03.512380203 +0000 UTC))"
2026-01-20T10:42:03.512476056+00:00 stderr F I0120 10:42:03.512439 1 secure_serving.go:213] Serving securely on [::]:10257
2026-01-20T10:42:03.512552429+00:00 stderr F I0120 10:42:03.512513 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2026-01-20T10:42:03.513168777+00:00 stderr F I0120 10:42:03.513109 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2026-01-20T10:42:03.513795925+00:00 stderr F I0120 10:42:03.513756 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...
2026-01-20T10:42:03.517468362+00:00 stderr F E0120 10:42:03.517391 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:06.646243251+00:00 stderr F E0120 10:42:06.646115 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:12.707849301+00:00 stderr F E0120 10:42:12.707622 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:15.737911965+00:00 stderr F E0120 10:42:15.737813 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:20.442085937+00:00 stderr F E0120 10:42:20.441950 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:25.950123530+00:00 stderr F E0120 10:42:25.950024 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:31.521408540+00:00 stderr F E0120 10:42:31.521276 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:37.474994412+00:00 stderr F E0120 10:42:37.474853 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:40.704269593+00:00 stderr F E0120 10:42:40.704169 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:44.564584467+00:00 stderr F E0120 10:42:44.564507 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:50.566257689+00:00 stderr F E0120 10:42:50.566161 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:42:54.964040410+00:00 stderr F E0120 10:42:54.963975 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:00.493084897+00:00 stderr F E0120 10:43:00.492397 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:04.844158871+00:00 stderr F E0120 10:43:04.844045 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:10.277964609+00:00 stderr F E0120 10:43:10.277870 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:13.621404541+00:00 stderr F E0120 10:43:13.621286 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:19.389616569+00:00 stderr F E0120 10:43:19.389485 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:23.404648213+00:00 stderr F E0120 10:43:23.404511 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:28.969942178+00:00 stderr F E0120 10:43:28.969864 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:34.264955503+00:00 stderr F E0120 10:43:34.264828 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:38.333946695+00:00 stderr F E0120 10:43:38.333833 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:41.746784997+00:00 stderr F E0120 10:43:41.746639 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:46.512120854+00:00 stderr F E0120 10:43:46.512025 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:52.316333449+00:00 stderr F E0120 10:43:52.316256 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:43:57.995939257+00:00 stderr F E0120 10:43:57.995848 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:03.882471917+00:00 stderr F E0120 10:44:03.882328 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:07.405928076+00:00 stderr F E0120 10:44:07.405822 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:11.959135126+00:00 stderr F E0120 10:44:11.959024 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:15.315286703+00:00 stderr F E0120 10:44:15.315199 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:19.424810286+00:00 stderr F E0120 10:44:19.424670 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:24.478684817+00:00 stderr F E0120 10:44:24.478545 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:27.736353115+00:00 stderr F E0120 10:44:27.736211 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:34.077015669+00:00 stderr F E0120 10:44:34.076907 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:39.277099026+00:00 stderr F E0120 10:44:39.276942 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:43.586536635+00:00 stderr F E0120 10:44:43.586409 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:49.256230407+00:00 stderr F E0120 10:44:49.256099 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:53.198123830+00:00 stderr F E0120 10:44:53.198037 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2026-01-20T10:44:57.979064327+00:00 stderr F E0120 10:44:57.978958 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial
tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:01.986490850+00:00 stderr F E0120 10:45:01.986378 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:07.725480599+00:00 stderr F E0120 10:45:07.725373 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:13.212875776+00:00 stderr F E0120 10:45:13.212706 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:19.332731670+00:00 stderr F E0120 10:45:19.331950 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable 2026-01-20T10:45:22.753488139+00:00 stderr F E0120 10:45:22.753386 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:26.767402566+00:00 stderr F E0120 10:45:26.767301 
1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:31.266958429+00:00 stderr F E0120 10:45:31.266825 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:35.749685252+00:00 stderr F E0120 10:45:35.749537 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:40.494689376+00:00 stderr F E0120 10:45:40.494546 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:45.014042820+00:00 stderr F E0120 10:45:45.013931 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:49.080246672+00:00 stderr F E0120 10:45:49.080122 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:45:53.927014426+00:00 stderr F E0120 10:45:53.926323 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:00.160484752+00:00 stderr F E0120 10:46:00.160397 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:03.363117807+00:00 stderr F E0120 10:46:03.362326 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:09.020706857+00:00 stderr F E0120 10:46:09.020620 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2026-01-20T10:46:12.291761116+00:00 stderr F I0120 10:46:12.291467 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2026-01-20T10:46:12.291761116+00:00 stderr F I0120 10:46:12.291499 1 dynamic_cafile_content.go:171] "Shutting down controller" 
name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2026-01-20T10:46:12.291761116+00:00 stderr F I0120 10:46:12.291552 1 controllermanager.go:332] Requested to terminate. Exiting.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.log
2025-08-13T20:08:14.197956647+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2025-08-13T20:08:14.202063545+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2025-08-13T20:08:14.221748139+00:00 stderr F + '[' -n '' ']' 2025-08-13T20:08:14.225107916+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 2025-08-13T20:08:14.457003233+00:00 stderr F W0813 20:08:14.455918 1 cmd.go:244] Using insecure, self-signed certificates 2025-08-13T20:08:14.457003233+00:00 stderr F I0813 20:08:14.456485 1 crypto.go:601] Generating new CA for
cert-recovery-controller-signer@1755115694 cert, and key in /tmp/serving-cert-54853747/serving-signer.crt, /tmp/serving-cert-54853747/serving-signer.key 2025-08-13T20:08:15.002454652+00:00 stderr F I0813 20:08:15.002044 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:08:15.006327353+00:00 stderr F I0813 20:08:15.006199 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:15.025571455+00:00 stderr F I0813 20:08:15.025458 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T20:08:15.032073211+00:00 stderr F I0813 20:08:15.032017 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-08-13T20:08:15.034477840+00:00 stderr F I0813 20:08:15.032351 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 
2025-08-13T20:08:15.047637117+00:00 stderr F I0813 20:08:15.046947 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock 2025-08-13T20:08:15.050152229+00:00 stderr F I0813 20:08:15.047956 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"fea77749-6e8a-4e4e-9933-ff0da4b5904e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32883", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_4dcd504f-4413-4e50-8836-0f9844860e38 became leader 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.072722 1 csrcontroller.go:102] Starting CSR controller 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.072814 1 shared_informer.go:311] Waiting for caches to sync for CSRController 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.073619 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:08:15.110874860+00:00 stderr F I0813 20:08:15.110647 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.111544390+00:00 stderr F I0813 20:08:15.111477 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.125419677+00:00 stderr F I0813 20:08:15.125251 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.136199657+00:00 stderr F I0813 20:08:15.136093 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.140250753+00:00 stderr F I0813 20:08:15.140161 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.149215380+00:00 stderr F I0813 20:08:15.149097 1 reflector.go:351] Caches 
populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.157014833+00:00 stderr F I0813 20:08:15.153563 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.157014833+00:00 stderr F I0813 20:08:15.155864 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.163627113+00:00 stderr F I0813 20:08:15.163577 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.174963438+00:00 stderr F I0813 20:08:15.174910 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:08:15.175033700+00:00 stderr F I0813 20:08:15.175018 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:08:15.175122503+00:00 stderr F I0813 20:08:15.175107 1 shared_informer.go:318] Caches are synced for CSRController 2025-08-13T20:08:15.175204425+00:00 stderr F I0813 20:08:15.175189 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:08:15.175239226+00:00 stderr F I0813 20:08:15.175227 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:08:15.175270617+00:00 stderr F I0813 20:08:15.175259 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:08:25.185664033+00:00 stderr F E0813 20:08:25.185445 1 csrcontroller.go:146] key failed with : Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:08:35.194931186+00:00 stderr F E0813 20:08:35.194614 1 csrcontroller.go:146] key failed with : Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:09:00.927591932+00:00 stderr F I0813 20:09:00.927364 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.126142517+00:00 stderr F I0813 20:09:03.125981 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.735864809+00:00 stderr F I0813 20:09:03.733930 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.052092650+00:00 stderr F I0813 20:09:07.050311 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:08.250621750+00:00 stderr F I0813 20:09:08.250516 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:08.999427729+00:00 stderr F I0813 20:09:08.999355 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:09.482069647+00:00 stderr F I0813 20:09:09.481699 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:10.710151277+00:00 stderr F I0813 20:09:10.710080 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:12.143121140+00:00 stderr F I0813 20:09:12.141232 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.401701 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.376416 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.473142 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483130 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483496 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483591 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483732 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.485449 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.485893 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.839045900+00:00 stderr F I0813 20:42:36.838854 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:36.844614870+00:00 stderr F E0813 20:42:36.844372 1 leaderelection.go:308] Failed to release lock: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock?timeout=4m0s": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:42:36.848355598+00:00 stderr F I0813 20:42:36.848208 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848420 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848449 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848475 1 csrcontroller.go:104] Shutting down CSR controller 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848485 1 csrcontroller.go:106] CSR controller shut down 2025-08-13T20:42:36.849512342+00:00 stderr F I0813 20:42:36.848300 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:36.849512342+00:00 stderr F I0813 20:42:36.849486 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:36.849651526+00:00 stderr F I0813 20:42:36.848500 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:42:36.849651526+00:00 stderr F I0813 20:42:36.849609 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:36.851173319+00:00 stderr F W0813 20:42:36.850995 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.log
2026-01-20T10:42:03.692885796+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2026-01-20T10:42:03.697528751+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2026-01-20T10:42:03.701182627+00:00 stderr F + '[' -n '' ']' 2026-01-20T10:42:03.701879738+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 2026-01-20T10:42:03.758812297+00:00 stderr F W0120 10:42:03.758334 1 cmd.go:244] Using insecure, self-signed certificates 2026-01-20T10:42:03.758812297+00:00 stderr F I0120 10:42:03.758708 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1768905723 cert, and key in /tmp/serving-cert-3179794406/serving-signer.crt, /tmp/serving-cert-3179794406/serving-signer.key 2026-01-20T10:42:04.114531565+00:00 stderr F I0120 10:42:04.114461 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s.
Worst graceful lease acquisition is {26s}. 2026-01-20T10:42:04.115370070+00:00 stderr F I0120 10:42:04.115309 1 observer_polling.go:159] Starting file observer 2026-01-20T10:42:04.117441400+00:00 stderr F W0120 10:42:04.117360 1 builder.go:266] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:04.117772821+00:00 stderr F I0120 10:42:04.117587 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2026-01-20T10:42:04.120504040+00:00 stderr F W0120 10:42:04.120418 1 builder.go:357] unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:04.120597523+00:00 stderr F I0120 10:42:04.120562 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:04.121174040+00:00 stderr F I0120 10:42:04.121080 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 
2026-01-20T10:42:04.122324993+00:00 stderr F E0120 10:42:04.122260 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock?timeout=1m47s": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:42:04.123856088+00:00 stderr F E0120 10:42:04.123784 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp [::1]:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-kube-controller-manager.188c6a624f596837 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-controller-manager,Name:openshift-kube-controller-manager,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2026-01-20 10:42:04.120361015 +0000 UTC m=+0.414773891,LastTimestamp:2026-01-20 10:42:04.120361015 +0000 UTC m=+0.414773891,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2026-01-20T10:46:12.290114303+00:00 stderr F I0120 10:46:12.289906 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 
2026-01-20T10:46:12.290114303+00:00 stderr F W0120 10:46:12.289981 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.log
2026-01-20T10:47:06.760072897+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2026-01-20T10:47:06.764203276+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2026-01-20T10:47:06.769147344+00:00 stderr F + '[' -n '' ']' 2026-01-20T10:47:06.769936323+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 2026-01-20T10:47:06.818668098+00:00 stderr F W0120 10:47:06.818551 1 cmd.go:244] Using insecure, self-signed certificates 2026-01-20T10:47:06.818906811+00:00 stderr F I0120 10:47:06.818890 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1768906026 cert, and key in /tmp/serving-cert-1465303195/serving-signer.crt, /tmp/serving-cert-1465303195/serving-signer.key 2026-01-20T10:47:07.233891159+00:00 stderr F I0120 10:47:07.233838 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2026-01-20T10:47:07.237315069+00:00 stderr F I0120 10:47:07.237250 1 observer_polling.go:159] Starting file observer 2026-01-20T10:47:17.239311438+00:00 stderr F W0120 10:47:17.239223 1 builder.go:266] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods": net/http: TLS handshake timeout 2026-01-20T10:47:17.239429841+00:00 stderr F I0120 10:47:17.239406 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2026-01-20T10:47:23.243630726+00:00 stderr F I0120 10:47:23.241296 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2026-01-20T10:47:23.243630726+00:00 stderr F I0120 10:47:23.241777 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 2026-01-20T10:53:10.529114151+00:00 stderr F I0120 10:53:10.527542 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock 2026-01-20T10:53:10.529203823+00:00 stderr F I0120 10:53:10.527739 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"fea77749-6e8a-4e4e-9933-ff0da4b5904e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_df652003-412a-45d0-a98c-710ba8ecb355 became leader 2026-01-20T10:53:10.534625663+00:00 stderr F I0120 10:53:10.534520 1 csrcontroller.go:102] Starting CSR controller 2026-01-20T10:53:10.534625663+00:00 stderr F I0120 10:53:10.534573 1 shared_informer.go:311] Waiting for caches to sync for CSRController 2026-01-20T10:53:10.539110789+00:00 stderr F I0120 10:53:10.536355 1 base_controller.go:67] Waiting for caches to sync for 
CertRotationController 2026-01-20T10:53:10.539110789+00:00 stderr F I0120 10:53:10.536907 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.541281725+00:00 stderr F I0120 10:53:10.539887 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.546117450+00:00 stderr F I0120 10:53:10.543376 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.553214463+00:00 stderr F I0120 10:53:10.553135 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.554255860+00:00 stderr F I0120 10:53:10.553969 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.561485796+00:00 stderr F I0120 10:53:10.561435 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.568648272+00:00 stderr F I0120 10:53:10.568568 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.579074050+00:00 stderr F I0120 10:53:10.578986 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.587475907+00:00 stderr F I0120 10:53:10.587377 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:53:10.636338360+00:00 stderr F I0120 10:53:10.636230 1 shared_informer.go:318] Caches are synced for CSRController 2026-01-20T10:53:10.636338360+00:00 stderr F I0120 10:53:10.636308 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2026-01-20T10:53:10.636338360+00:00 
stderr F I0120 10:53:10.636315 1 base_controller.go:73] Caches are synced for ResourceSyncController 2026-01-20T10:53:10.636338360+00:00 stderr F I0120 10:53:10.636321 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2026-01-20T10:53:10.637643353+00:00 stderr F I0120 10:53:10.637578 1 base_controller.go:73] Caches are synced for CertRotationController 2026-01-20T10:53:10.637643353+00:00 stderr F I0120 10:53:10.637597 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2026-01-20T10:56:07.034236988+00:00 stderr F I0120 10:56:07.034161 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2026-01-20T10:56:07.034375582+00:00 stderr F I0120 10:56:07.034356 1 csrcontroller.go:178] Refreshed CSRCABundle. 
2026-01-20T10:56:07.034808103+00:00 stderr F I0120 10:56:07.034751 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator: 2026-01-20T10:56:07.034808103+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:56:07.041149644+00:00 stderr F I0120 10:56:07.040016 1 core.go:359] ConfigMap "openshift-config-managed/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIG1tSIcjm7gAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjYwMTIwMTA1NTU0WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3Njg5MDY1\nNTQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCscGmreTD5gOW5KP2F\nNUpoBc5+Spnnq3B2tYkRoXcWNc2BgZ+jMRfrKhGDPLeI0kIISHxkrBN7rWAljF/6\nfC5FOupYyRzaUwkNQnjvvIuPXtDkwKkkHjfsK+w4R+SMgmXxZnBschgQlQADWx1F\nhXXj4t/rIAOD6wDn2huK9ofQ3778YfMevXs4C8/wUAoMZsH7WA45mDgK0xz829BU\n4HQ4QgB87eR8dGwI+Ck76jdHYHx29ZeflIDi09CtLSSobnLLsJJzfqcOnaDDL3FZ\nvA+xigo/n1FjfSNGaEj7Sgub6VUD9bMS90VVTXY0CjSLgR+cdFBQoDdlgxOrcOpl\ns2z5AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBRbK19lc9H0VacvthXsqwv+GhYFITAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAnzrWGRHyvk+7dUDPe76j\nPAzLpAmD1UUwYW+uGXf3c81/Iqtn1+c3sGWje6RB7fFIee4gGj5NHYfU7f9j6rVH\nhFRFN3DFXfTpNb7i7i8+hvNbVmHV/k/XDv/z5BdwkDZM8qySTeQH9hPxLWN5LHkV\nxtm1n0YndLacqybZOXTN1ZYOr8HQpTsMK/B/rg2zN5rhUBC3xsf2qtamvReY7VXr\ngWlJQqShNWoVfblmZB8sv23Kj9n7f1QwJ32oCYcRh3dmOpfNl4ESRnrczQrqXR1U\niMutGpudJdSdfPNI/+Igd7BsVxnQofWes4qab411PeoLlMoBc0XCfICcam2+yrG+\ncQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:13Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2026-01-20T10:56:07Z"}],"resourceVersion":null,"uid":"4aabbce1-72f4-478a-b382-9ed7c988ad76"}} 2026-01-20T10:56:07.041701879+00:00 stderr F I0120 10:56:07.041492 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 
'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-controller-ca -n openshift-config-managed: 2026-01-20T10:56:07.041701879+00:00 stderr F cause by changes in data.ca-bundle.crt 2026-01-20T10:57:10.565674874+00:00 stderr F E0120 10:57:10.565526 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock?timeout=4m0s": dial tcp [::1]:6443: connect: connection refused 2026-01-20T10:57:38.594616599+00:00 stderr F I0120 10:57:38.594492 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:42.346005426+00:00 stderr F I0120 10:57:42.345889 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:45.602632229+00:00 stderr F I0120 10:57:45.602542 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:46.695078869+00:00 stderr F I0120 10:57:46.695004 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:47.776738834+00:00 stderr F I0120 10:57:47.776589 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.556326125+00:00 stderr F I0120 10:57:49.556240 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:49.955225154+00:00 stderr F I0120 10:57:49.955147 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:52.891712001+00:00 stderr F I0120 10:57:52.891618 1 reflector.go:351] Caches populated for *v1.Secret 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2026-01-20T10:57:54.829133606+00:00 stderr F I0120 10:57:54.828850 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
[tar entries: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.log (empty) and .../openshift-api/1.log (empty)]
[tar entry: .../openshift-config-operator/3.log]
2026-01-20T10:49:37.152949625+00:00 stderr F I0120 10:49:37.150847 1 cmd.go:241] Using service-serving-cert provided certificates 2026-01-20T10:49:37.157091751+00:00 stderr F I0120 10:49:37.153665 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2026-01-20T10:49:37.161324010+00:00 stderr F I0120 10:49:37.158352 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:37.264082839+00:00 stderr F I0120 10:49:37.263963 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2026-01-20T10:49:38.054120043+00:00 stderr F I0120 10:49:38.049015 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:38.054120043+00:00 stderr F W0120 10:49:38.049555 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2026-01-20T10:49:38.054120043+00:00 stderr F W0120 10:49:38.049562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:38.069254314+00:00 stderr F I0120 10:49:38.063966 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:38.069254314+00:00 stderr F I0120 10:49:38.064067 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:38.069254314+00:00 stderr F I0120 10:49:38.066574 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:38.069254314+00:00 stderr F I0120 10:49:38.066735 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:38.069254314+00:00 stderr F I0120 10:49:38.066780 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:38.069254314+00:00 stderr F I0120 10:49:38.066790 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:38.076143234+00:00 stderr F I0120 10:49:38.074252 1 secure_serving.go:213] Serving securely on [::]:8443 2026-01-20T10:49:38.076143234+00:00 stderr F I0120 10:49:38.074307 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2026-01-20T10:49:38.076143234+00:00 stderr F I0120 10:49:38.074352 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:38.076143234+00:00 stderr F I0120 10:49:38.074448 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 
2026-01-20T10:49:38.076143234+00:00 stderr F I0120 10:49:38.074640 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 2026-01-20T10:49:38.167616040+00:00 stderr F I0120 10:49:38.167523 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:38.174548891+00:00 stderr F I0120 10:49:38.174206 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:38.174548891+00:00 stderr F I0120 10:49:38.174266 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:55:54.069884297+00:00 stderr F I0120 10:55:54.069187 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2026-01-20T10:55:54.069946208+00:00 stderr F I0120 10:55:54.069419 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42317", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_c43a248d-112a-4865-be78-2a12359d571d became leader 2026-01-20T10:55:54.085903508+00:00 stderr F I0120 10:55:54.085821 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2026-01-20T10:55:54.085903508+00:00 stderr F I0120 10:55:54.085844 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2026-01-20T10:55:54.085903508+00:00 stderr F I0120 10:55:54.085887 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2026-01-20T10:55:54.085961699+00:00 stderr F I0120 10:55:54.085911 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2026-01-20T10:55:54.085961699+00:00 
stderr F I0120 10:55:54.085931 1 base_controller.go:67] Waiting for caches to sync for FeatureGateController 2026-01-20T10:55:54.085961699+00:00 stderr F I0120 10:55:54.085947 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 2026-01-20T10:55:54.085961699+00:00 stderr F I0120 10:55:54.085953 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2026-01-20T10:55:54.085982620+00:00 stderr F I0120 10:55:54.085963 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 2026-01-20T10:55:54.086001561+00:00 stderr F I0120 10:55:54.085972 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2026-01-20T10:55:54.086001561+00:00 stderr F I0120 10:55:54.085972 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2026-01-20T10:55:54.086019951+00:00 stderr F I0120 10:55:54.086007 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2026-01-20T10:55:54.086036951+00:00 stderr F I0120 10:55:54.086020 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 
2026-01-20T10:55:54.086113653+00:00 stderr F I0120 10:55:54.086046 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2026-01-20T10:55:54.086306459+00:00 stderr F I0120 10:55:54.086229 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2026-01-20T10:55:54.088029495+00:00 stderr F I0120 10:55:54.086112 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2026-01-20T10:55:54.088029495+00:00 stderr F E0120 10:55:54.087886 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2026-01-20T10:55:54.186085954+00:00 stderr F I0120 10:55:54.185997 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2026-01-20T10:55:54.186085954+00:00 stderr F I0120 10:55:54.186036 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 2026-01-20T10:55:54.186149985+00:00 stderr F I0120 10:55:54.186101 1 base_controller.go:73] Caches are synced for FeatureGateController 2026-01-20T10:55:54.186149985+00:00 stderr F I0120 10:55:54.186142 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 2026-01-20T10:55:54.186200607+00:00 stderr F I0120 10:55:54.186155 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2026-01-20T10:55:54.186208357+00:00 stderr F I0120 10:55:54.186201 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 
2026-01-20T10:55:54.186268048+00:00 stderr F I0120 10:55:54.186245 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2026-01-20T10:55:54.186268048+00:00 stderr F I0120 10:55:54.186255 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 2026-01-20T10:55:54.186304069+00:00 stderr F I0120 10:55:54.186246 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2026-01-20T10:55:54.186336220+00:00 stderr F I0120 10:55:54.186310 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2026-01-20T10:55:54.186384592+00:00 stderr F I0120 10:55:54.186146 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2026-01-20T10:55:54.186392992+00:00 stderr F I0120 10:55:54.186380 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 2026-01-20T10:55:54.186392992+00:00 stderr F I0120 10:55:54.186354 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2026-01-20T10:55:54.186401762+00:00 stderr F I0120 10:55:54.186396 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 2026-01-20T10:55:54.188765146+00:00 stderr F I0120 10:55:54.188718 1 base_controller.go:73] Caches are synced for LoggingSyncer 2026-01-20T10:55:54.188765146+00:00 stderr F I0120 10:55:54.188744 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.096980 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.096940935 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097148 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.09713438 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097165 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.097153791 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097182 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.097171741 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 
stderr F I0120 10:56:07.097197 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.097186482 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097217 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.097201132 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097234 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.097223234 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097249 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.097238624 +0000 UTC))" 
2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097263 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.097253434 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097279 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.097270355 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097295 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.097284215 +0000 UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097309 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.097299186 +0000 
UTC))" 2026-01-20T10:56:07.098265221+00:00 stderr F I0120 10:56:07.097591 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2026-01-20 10:56:07.097573263 +0000 UTC))" 2026-01-20T10:56:07.101520539+00:00 stderr F I0120 10:56:07.100096 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906177\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:56:07.10005244 +0000 UTC))"
[tar entry: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.log]
2025-08-13T20:00:06.609040181+00:00 stderr F I0813 20:00:06.606756 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:00:06.609040181+00:00 stderr F I0813 20:00:06.607693 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:00:06.613715154+00:00 stderr F I0813 20:00:06.613562 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:06.745670326+00:00 stderr F I0813 20:00:06.745105 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2025-08-13T20:00:08.615501452+00:00 stderr F I0813 20:00:08.614612 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:08.615501452+00:00 stderr F W0813 20:00:08.615358 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:08.615501452+00:00 stderr F W0813 20:00:08.615367 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:08.803599465+00:00 stderr F I0813 20:00:08.803212 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:08.804180802+00:00 stderr F I0813 20:00:08.804153 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:08.804909533+00:00 stderr F I0813 20:00:08.804883 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:08.804961744+00:00 stderr F I0813 20:00:08.804948 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:08.811385237+00:00 stderr F I0813 20:00:08.811308 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:08.811453569+00:00 stderr F I0813 20:00:08.811439 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:08.816644177+00:00 stderr F I0813 20:00:08.816601 1 
builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:08.823141632+00:00 stderr F I0813 20:00:08.823095 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 2025-08-13T20:00:08.827510887+00:00 stderr F I0813 20:00:08.827483 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:00:08.828151935+00:00 stderr F I0813 20:00:08.827768 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:08.835083783+00:00 stderr F I0813 20:00:08.827946 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:10.206555929+00:00 stderr F I0813 20:00:10.205128 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:10.206555929+00:00 stderr F I0813 20:00:10.205730 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:10.213300141+00:00 stderr F I0813 20:00:10.213253 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:10.859242570+00:00 stderr F I0813 20:00:10.858742 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2025-08-13T20:00:10.860707962+00:00 stderr F I0813 20:00:10.860514 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"29031", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_122f4599-0c2f-4c0b-a0bb-7ed9e07d3e2c became leader 2025-08-13T20:00:11.819948463+00:00 stderr F I0813 
20:00:11.818486 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820062 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820101 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820196 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.820960 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821011 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821026 1 base_controller.go:67] Waiting for caches to sync for FeatureGateController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821053 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821058 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821064 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 
2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.825981 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.826333 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.826354 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2025-08-13T20:00:11.844577405+00:00 stderr F I0813 20:00:11.844491 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2025-08-13T20:00:11.845871972+00:00 stderr F E0813 20:00:11.844912 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:11.845871972+00:00 stderr F I0813 20:00:11.844973 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:11.859098619+00:00 stderr F E0813 20:00:11.857896 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:11.968039095+00:00 stderr F E0813 20:00:11.951700 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:12.048349065+00:00 stderr F I0813 20:00:12.048045 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:00:12.048349065+00:00 stderr F I0813 20:00:12.048126 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099542 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099591 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099812 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099867 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:12.253356991+00:00 stderr F I0813 20:00:12.250025 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.286079044+00:00 stderr F I0813 20:00:12.272341 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.286079044+00:00 stderr F I0813 20:00:12.274011 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.442111723+00:00 stderr F I0813 20:00:12.442008 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.445729096+00:00 stderr F I0813 20:00:12.445624 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2025-08-13T20:00:12.450233745+00:00 stderr F I0813 20:00:12.450133 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 2025-08-13T20:00:12.450256085+00:00 stderr F I0813 20:00:12.450242 1 base_controller.go:73] Caches are synced for FeatureGateController 2025-08-13T20:00:12.450256085+00:00 stderr F I0813 20:00:12.450249 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 
2025-08-13T20:00:12.450478772+00:00 stderr F I0813 20:00:12.449988 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.469317419+00:00 stderr F I0813 20:00:12.469145 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2025-08-13T20:00:12.469317419+00:00 stderr F I0813 20:00:12.469192 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 2025-08-13T20:00:12.475613638+00:00 stderr F I0813 20:00:12.475482 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.521284411+00:00 stderr F I0813 20:00:12.521184 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2025-08-13T20:00:12.521284411+00:00 stderr F I0813 20:00:12.521242 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 2025-08-13T20:00:12.558069769+00:00 stderr F I0813 20:00:12.556643 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2025-08-13T20:00:12.558069769+00:00 stderr F I0813 20:00:12.556684 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 
2025-08-13T20:01:00.024546078+00:00 stderr F I0813 20:01:00.011104 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.01059532 +0000 UTC))" 2025-08-13T20:01:00.024546078+00:00 stderr F I0813 20:01:00.011756 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.011738473 +0000 UTC))" 2025-08-13T20:01:00.049043057+00:00 stderr F I0813 20:01:00.048971 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.011764744 +0000 UTC))" 2025-08-13T20:01:00.049170980+00:00 stderr F I0813 20:01:00.049151 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049107288 +0000 UTC))" 2025-08-13T20:01:00.049355846+00:00 
stderr F I0813 20:01:00.049331 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049197771 +0000 UTC))" 2025-08-13T20:01:00.049464969+00:00 stderr F I0813 20:01:00.049447 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049422757 +0000 UTC))" 2025-08-13T20:01:00.049530141+00:00 stderr F I0813 20:01:00.049513 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049491139 +0000 UTC))" 2025-08-13T20:01:00.049596852+00:00 stderr F I0813 20:01:00.049579 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049557081 +0000 UTC))" 
2025-08-13T20:01:00.049661514+00:00 stderr F I0813 20:01:00.049645 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.049620973 +0000 UTC))" 2025-08-13T20:01:00.059181316+00:00 stderr F I0813 20:01:00.049740 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.049719456 +0000 UTC))" 2025-08-13T20:01:00.059327780+00:00 stderr F I0813 20:01:00.059308 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.059262308 +0000 UTC))" 2025-08-13T20:01:00.068147291+00:00 stderr F I0813 20:01:00.068084 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 
20:01:00.068004057 +0000 UTC))" 2025-08-13T20:01:00.068977955+00:00 stderr F I0813 20:01:00.068951 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115208\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115207\" (2025-08-13 19:00:06 +0000 UTC to 2026-08-13 19:00:06 +0000 UTC (now=2025-08-13 20:01:00.068701687 +0000 UTC))" 2025-08-13T20:01:23.351014291+00:00 stderr F I0813 20:01:23.350151 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:23.351014291+00:00 stderr F I0813 20:01:23.350951 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:23.351996309+00:00 stderr F I0813 20:01:23.351484 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352336 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:23.352288928 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352389 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:23.35237251 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352411 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:23.352397221 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352427 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:23.352416221 +0000 UTC))" 2025-08-13T20:01:23.352508124+00:00 stderr F I0813 20:01:23.352445 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352434132 +0000 UTC))" 2025-08-13T20:01:23.352508124+00:00 stderr F I0813 20:01:23.352480 1 tlsconfig.go:178] "Loaded client 
CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352451412 +0000 UTC))" 2025-08-13T20:01:23.352518574+00:00 stderr F I0813 20:01:23.352505 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352486313 +0000 UTC))" 2025-08-13T20:01:23.352563405+00:00 stderr F I0813 20:01:23.352529 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352510784 +0000 UTC))" 2025-08-13T20:01:23.352575506+00:00 stderr F I0813 20:01:23.352567 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:23.352552205 +0000 UTC))" 2025-08-13T20:01:23.352612767+00:00 stderr F I0813 20:01:23.352584 1 tlsconfig.go:178] 
"Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:23.352574216 +0000 UTC))" 2025-08-13T20:01:23.352666948+00:00 stderr F I0813 20:01:23.352621 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352607877 +0000 UTC))" 2025-08-13T20:01:23.354274304+00:00 stderr F I0813 20:01:23.354230 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:23.354202292 +0000 UTC))" 2025-08-13T20:01:23.354967744+00:00 stderr F I0813 20:01:23.354866 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115208\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115207\" (2025-08-13 19:00:06 +0000 UTC to 2026-08-13 19:00:06 +0000 UTC (now=2025-08-13 20:01:23.354759108 +0000 UTC))" 2025-08-13T20:01:26.622344829+00:00 stderr F I0813 20:01:26.621656 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" 
has been modified (old="7021067932790448a11809da10b860f6f1ea1555731d97a3cf678bc8b9574622", new="dca6c81c3751f96f1b64e72dc06b40fe72f2952cfcac2b16deea87fc6cd08c4d") 2025-08-13T20:01:26.622547835+00:00 stderr F W0813 20:01:26.622499 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.key was modified 2025-08-13T20:01:26.622707979+00:00 stderr F I0813 20:01:26.622619 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="e7c4eabcdc7aa32e59c3e68ad4841e132e9166775ff32392cab346655d2dac9f", new="adf358f49f26d932aaa3db3a86640e1ed83874695ab0abb173ca1ba5a73101ec") 2025-08-13T20:01:26.623009248+00:00 stderr F I0813 20:01:26.622955 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:26.623130121+00:00 stderr F I0813 20:01:26.623115 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:26.623421960+00:00 stderr F I0813 20:01:26.623368 1 base_controller.go:172] Shutting down FeatureGateController ... 2025-08-13T20:01:26.623436000+00:00 stderr F I0813 20:01:26.623426 1 base_controller.go:172] Shutting down AWSPlatformServiceLocationController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623444 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623475 1 base_controller.go:172] Shutting down FeatureUpgradeableController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623493 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:26.623617945+00:00 stderr F I0813 20:01:26.623573 1 base_controller.go:172] Shutting down LatencySensitiveRemovalController ... 2025-08-13T20:01:26.623617945+00:00 stderr F I0813 20:01:26.623607 1 base_controller.go:172] Shutting down ConfigOperatorController ... 
2025-08-13T20:01:26.624546072+00:00 stderr F I0813 20:01:26.624517 1 base_controller.go:172] Shutting down KubeCloudConfigController ... 2025-08-13T20:01:26.624602753+00:00 stderr F I0813 20:01:26.624589 1 base_controller.go:172] Shutting down MigrationPlatformStatusController ... 2025-08-13T20:01:26.624657085+00:00 stderr F I0813 20:01:26.624644 1 base_controller.go:172] Shutting down StatusSyncer_config-operator ... 2025-08-13T20:01:26.624698276+00:00 stderr F I0813 20:01:26.624676 1 base_controller.go:150] All StatusSyncer_config-operator post start hooks have been terminated 2025-08-13T20:01:26.624765838+00:00 stderr F I0813 20:01:26.624750 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:26.624957463+00:00 stderr F I0813 20:01:26.624931 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:26.625110698+00:00 stderr F I0813 20:01:26.625092 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:26.625173410+00:00 stderr F I0813 20:01:26.625159 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:26.625242542+00:00 stderr F I0813 20:01:26.625229 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:26.626234790+00:00 stderr F I0813 20:01:26.626160 1 base_controller.go:114] Shutting down worker of FeatureUpgradeableController controller ... 2025-08-13T20:01:26.626252370+00:00 stderr F I0813 20:01:26.626240 1 base_controller.go:104] All FeatureUpgradeableController workers have been terminated 2025-08-13T20:01:26.626338233+00:00 stderr F I0813 20:01:26.626291 1 base_controller.go:114] Shutting down worker of AWSPlatformServiceLocationController controller ... 
2025-08-13T20:01:26.626350213+00:00 stderr F I0813 20:01:26.626339 1 base_controller.go:104] All AWSPlatformServiceLocationController workers have been terminated 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626367 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626391 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626142 1 base_controller.go:114] Shutting down worker of FeatureGateController controller ... 2025-08-13T20:01:26.626419325+00:00 stderr F I0813 20:01:26.626407 1 base_controller.go:104] All FeatureGateController workers have been terminated 2025-08-13T20:01:26.626428965+00:00 stderr F I0813 20:01:26.626418 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:26.626428965+00:00 stderr F I0813 20:01:26.626425 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626613 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626682 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626711 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626744 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.630990065+00:00 stderr F I0813 20:01:26.630905 1 base_controller.go:114] Shutting down worker 
of LatencySensitiveRemovalController controller ... 2025-08-13T20:01:26.630990065+00:00 stderr F I0813 20:01:26.630965 1 base_controller.go:104] All LatencySensitiveRemovalController workers have been terminated 2025-08-13T20:01:26.631095068+00:00 stderr F I0813 20:01:26.631051 1 base_controller.go:114] Shutting down worker of MigrationPlatformStatusController controller ... 2025-08-13T20:01:26.631095068+00:00 stderr F I0813 20:01:26.631077 1 base_controller.go:104] All MigrationPlatformStatusController workers have been terminated 2025-08-13T20:01:26.631125419+00:00 stderr F I0813 20:01:26.631059 1 base_controller.go:114] Shutting down worker of StatusSyncer_config-operator controller ... 2025-08-13T20:01:26.631165940+00:00 stderr F I0813 20:01:26.631152 1 base_controller.go:104] All StatusSyncer_config-operator workers have been terminated 2025-08-13T20:01:26.631203632+00:00 stderr F I0813 20:01:26.631119 1 base_controller.go:114] Shutting down worker of KubeCloudConfigController controller ... 2025-08-13T20:01:26.631244433+00:00 stderr F I0813 20:01:26.631232 1 base_controller.go:104] All KubeCloudConfigController workers have been terminated 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631340 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631376 1 builder.go:330] server exited 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631415 1 base_controller.go:114] Shutting down worker of ConfigOperatorController controller ... 
2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631458 1 base_controller.go:104] All ConfigOperatorController workers have been terminated 2025-08-13T20:01:29.264183958+00:00 stderr F W0813 20:01:29.256407 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.log
2025-08-13T20:05:36.187245573+00:00 stderr F I0813 20:05:36.180676 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:36.192640098+00:00 stderr F I0813 20:05:36.189383 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:36.197893678+00:00 stderr F I0813 20:05:36.193740 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:36.478370310+00:00 stderr F I0813 20:05:36.476926 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2025-08-13T20:05:37.273257573+00:00 stderr F I0813 20:05:37.272955 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:37.273257573+00:00 stderr F W0813 20:05:37.273193 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.273257573+00:00 stderr F W0813 20:05:37.273201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T20:05:37.325391825+00:00 stderr F I0813 20:05:37.325120 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:37.325612042+00:00 stderr F I0813 20:05:37.325561 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 2025-08-13T20:05:37.326370803+00:00 stderr F I0813 20:05:37.326289 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:37.328065692+00:00 stderr F I0813 20:05:37.327701 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:37.328065692+00:00 stderr F I0813 20:05:37.327944 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328474 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328514 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328476 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328555 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:37.328614748+00:00 stderr F I0813 20:05:37.328588 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:37.329384150+00:00 stderr F I0813 20:05:37.328735 1 tlsconfig.go:240] "Starting 
DynamicServingCertificateController" 2025-08-13T20:05:37.414109896+00:00 stderr F I0813 20:05:37.413550 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2025-08-13T20:05:37.416395842+00:00 stderr F I0813 20:05:37.416309 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31747", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_81393664-a273-42ce-b620-ccd9229e7705 became leader 2025-08-13T20:05:37.434366396+00:00 stderr F I0813 20:05:37.430576 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:37.434366396+00:00 stderr F I0813 20:05:37.430762 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.435024715+00:00 stderr F I0813 20:05:37.434560 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.625336365+00:00 stderr F I0813 20:05:37.625240 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:37.626072746+00:00 stderr F I0813 20:05:37.626024 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2025-08-13T20:05:37.626157699+00:00 stderr F I0813 20:05:37.626137 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2025-08-13T20:05:37.626332764+00:00 stderr F I0813 20:05:37.626234 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2025-08-13T20:05:37.626433157+00:00 stderr F I0813 20:05:37.626375 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 
20:05:37.626486 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626567 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626854 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626992 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627100 1 base_controller.go:67] Waiting for caches to sync for FeatureGateController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627123 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627196 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627203 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 
2025-08-13T20:05:37.627753844+00:00 stderr F I0813 20:05:37.627732 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2025-08-13T20:05:37.628027542+00:00 stderr F E0813 20:05:37.628001 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.630202254+00:00 stderr F I0813 20:05:37.630165 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:05:37.633285683+00:00 stderr F E0813 20:05:37.633261 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.643717082+00:00 stderr F E0813 20:05:37.643665 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.725701009+00:00 stderr F I0813 20:05:37.725637 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:37.725853283+00:00 stderr F I0813 20:05:37.725831 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:37.726215824+00:00 stderr F I0813 20:05:37.726192 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2025-08-13T20:05:37.726284146+00:00 stderr F I0813 20:05:37.726268 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 
2025-08-13T20:05:37.726529923+00:00 stderr F I0813 20:05:37.726509 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2025-08-13T20:05:37.726569094+00:00 stderr F I0813 20:05:37.726556 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 2025-08-13T20:05:37.730930929+00:00 stderr F I0813 20:05:37.730051 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2025-08-13T20:05:37.731001201+00:00 stderr F I0813 20:05:37.730983 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 2025-08-13T20:05:37.731230777+00:00 stderr F I0813 20:05:37.731187 1 base_controller.go:73] Caches are synced for FeatureGateController 2025-08-13T20:05:37.731271559+00:00 stderr F I0813 20:05:37.731257 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 2025-08-13T20:05:37.731335730+00:00 stderr F I0813 20:05:37.731317 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2025-08-13T20:05:37.731375092+00:00 stderr F I0813 20:05:37.731362 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 2025-08-13T20:05:37.731427753+00:00 stderr F I0813 20:05:37.731405 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:05:37.731459594+00:00 stderr F I0813 20:05:37.731448 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:05:37.767118245+00:00 stderr F I0813 20:05:37.766661 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:37.827659169+00:00 stderr F I0813 20:05:37.827513 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2025-08-13T20:05:37.828510483+00:00 stderr F I0813 20:05:37.828486 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 
2025-08-13T20:08:37.866996977+00:00 stderr F W0813 20:08:37.863766 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.866996977+00:00 stderr F E0813 20:08:37.865660 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.878356312+00:00 stderr F W0813 20:08:37.878274 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.878477386+00:00 stderr F E0813 20:08:37.878372 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.892624171+00:00 stderr F W0813 20:08:37.892524 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.892673483+00:00 stderr F E0813 20:08:37.892652 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.918055460+00:00 stderr F W0813 20:08:37.917452 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.918055460+00:00 stderr F E0813 20:08:37.917522 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.962652209+00:00 stderr F W0813 20:08:37.962491 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.962652209+00:00 stderr F E0813 20:08:37.962563 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.049389966+00:00 stderr F W0813 20:08:38.049292 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.049499499+00:00 stderr F E0813 20:08:38.049381 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.215044335+00:00 stderr F W0813 20:08:38.214875 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.215044335+00:00 stderr F E0813 20:08:38.214976 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.258970315+00:00 stderr F E0813 20:08:38.258730 1 leaderelection.go:332] error retrieving resource lock openshift-config-operator/config-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.542396601+00:00 stderr F W0813 20:08:38.541764 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.542880965+00:00 stderr F E0813 20:08:38.542361 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.191196853+00:00 stderr F W0813 20:08:39.190668 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.191196853+00:00 stderr F E0813 20:08:39.191134 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.485072119+00:00 stderr F W0813 20:08:40.483459 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.485072119+00:00 
stderr F E0813 20:08:40.483518 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.050922693+00:00 stderr F W0813 20:08:43.048040 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.050922693+00:00 stderr F E0813 20:08:43.050394 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.184106516+00:00 stderr F W0813 20:08:48.180432 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.184106516+00:00 stderr F E0813 20:08:48.182751 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.427457842+00:00 stderr F W0813 20:08:58.426605 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.427457842+00:00 stderr F E0813 20:08:58.427363 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:09:29.989139703+00:00 stderr F I0813 20:09:29.988276 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.132409295+00:00 stderr F I0813 20:09:35.131738 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.851451214+00:00 stderr F I0813 20:09:41.848936 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.746116538+00:00 stderr F I0813 20:09:45.745718 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.935623861+00:00 stderr F I0813 20:09:45.935482 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.245138595+00:00 stderr F I0813 20:09:46.245075 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.696959325+00:00 stderr F I0813 20:09:52.694671 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.301629342+00:00 stderr F I0813 20:09:55.301139 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.755286399+00:00 stderr F I0813 20:09:55.755198 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=configs from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.900307047+00:00 stderr F I0813 20:09:55.900239 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.602007832+00:00 stderr F I0813 20:10:18.601141 1 reflector.go:351] Caches populated for *v1.ConfigMap 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.371746557+00:00 stderr F I0813 20:42:36.370140 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.371746557+00:00 stderr F I0813 20:42:36.371308 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388339156+00:00 stderr F I0813 20:42:36.368481 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388339156+00:00 stderr F I0813 20:42:36.370500 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421560184+00:00 stderr F I0813 20:42:36.370600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370615 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370628 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370640 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.472563384+00:00 stderr F I0813 20:42:36.370687 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514733510+00:00 stderr F I0813 20:42:36.361477 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.879286220+00:00 stderr F W0813 20:42:37.876887 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.884558642+00:00 stderr F E0813 20:42:37.883645 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.894989093+00:00 stderr F W0813 20:42:37.892563 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.894989093+00:00 stderr F E0813 20:42:37.892634 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.916855614+00:00 stderr F W0813 20:42:37.916590 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.916855614+00:00 stderr F E0813 20:42:37.916663 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.953634034+00:00 stderr F W0813 20:42:37.941999 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.953634034+00:00 stderr F E0813 20:42:37.942076 1 base_controller.go:268] 
KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.999502676+00:00 stderr F W0813 20:42:37.996164 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.999502676+00:00 stderr F E0813 20:42:37.996257 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.089921413+00:00 stderr F W0813 20:42:38.088872 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.090067927+00:00 stderr F E0813 20:42:38.089995 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.254857078+00:00 stderr F W0813 20:42:38.254491 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.254857078+00:00 stderr F E0813 20:42:38.254583 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.579708984+00:00 
stderr F W0813 20:42:38.578982 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.579708984+00:00 stderr F E0813 20:42:38.579042 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.223660669+00:00 stderr F W0813 20:42:39.222692 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.223660669+00:00 stderr F E0813 20:42:39.223042 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.239147526+00:00 stderr F E0813 20:42:39.238968 1 leaderelection.go:332] error retrieving resource lock openshift-config-operator/config-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506215615+00:00 stderr F W0813 20:42:40.505414 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506215615+00:00 stderr F E0813 20:42:40.505864 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.594969504+00:00 stderr F I0813 20:42:40.593861 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.595041106+00:00 stderr F I0813 20:42:40.594988 1 leaderelection.go:285] failed to renew lease openshift-config-operator/config-operator-lock: timed out waiting for the condition 2025-08-13T20:42:40.596103676+00:00 stderr F I0813 20:42:40.596073 1 base_controller.go:172] Shutting down LatencySensitiveRemovalController ... 2025-08-13T20:42:40.596210150+00:00 stderr F I0813 20:42:40.596137 1 base_controller.go:172] Shutting down ConfigOperatorController ... 2025-08-13T20:42:40.596289842+00:00 stderr F I0813 20:42:40.596269 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597314 1 base_controller.go:172] Shutting down MigrationPlatformStatusController ... 2025-08-13T20:42:40.597412994+00:00 stderr F E0813 20:42:40.597329 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597388 1 base_controller.go:172] Shutting down KubeCloudConfigController ... 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597405 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:40.597965530+00:00 stderr F I0813 20:42:40.597940 1 base_controller.go:172] Shutting down FeatureGateController ... 2025-08-13T20:42:40.598019672+00:00 stderr F I0813 20:42:40.598006 1 base_controller.go:172] Shutting down FeatureUpgradeableController ... 
2025-08-13T20:42:40.598061883+00:00 stderr F I0813 20:42:40.598050 1 base_controller.go:172] Shutting down StatusSyncer_config-operator ... 2025-08-13T20:42:40.598991580+00:00 stderr F I0813 20:42:40.597152 1 base_controller.go:114] Shutting down worker of LatencySensitiveRemovalController controller ... 2025-08-13T20:42:40.599345920+00:00 stderr F I0813 20:42:40.598080 1 base_controller.go:150] All StatusSyncer_config-operator post start hooks have been terminated 2025-08-13T20:42:40.599588477+00:00 stderr F I0813 20:42:40.599560 1 base_controller.go:172] Shutting down AWSPlatformServiceLocationController ... 2025-08-13T20:42:40.599653929+00:00 stderr F I0813 20:42:40.599635 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.600121662+00:00 stderr F I0813 20:42:40.600094 1 base_controller.go:104] All LatencySensitiveRemovalController workers have been terminated 2025-08-13T20:42:40.600173824+00:00 stderr F W0813 20:42:40.600096 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015133657716033071 5ustar zuulzuul././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015133657737033074 5ustar zuulzuul././@LongLink0000644000000000000000000000031300000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000026200415133657716033077 0ustar zuulzuul2025-08-13T19:59:19.965638478+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="Using in-cluster kube client config" 2025-08-13T19:59:20.758915041+00:00 stderr F time="2025-08-13T19:59:20Z" level=info msg="Defaulting Interval to '12h0m0s'" 2025-08-13T19:59:21.874938483+00:00 stderr F I0813 19:59:21.865254 1 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="operator ready" 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting informers..." 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="informers started" 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="waiting for caches to sync..." 2025-08-13T19:59:22.456175441+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting workers..." 
2025-08-13T19:59:22.478751865+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace
2025-08-13T19:59:22.707297850+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace
2025-08-13T19:59:22.802301208+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace
2025-08-13T19:59:22.948340281+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace
2025-08-13T19:59:23.487362766+00:00 stderr F I0813 19:59:23.487270 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:23.487461169+00:00 stderr F I0813 19:59:23.487443 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:23.617713662+00:00 stderr F I0813 19:59:23.616166 1 secure_serving.go:213] Serving securely on [::]:5443
2025-08-13T19:59:23.658459603+00:00 stderr F I0813 19:59:23.658385 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:23.658534185+00:00 stderr F I0813 19:59:23.658520 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:23.658735911+00:00 stderr F I0813 19:59:23.658704 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:23.658818703+00:00 stderr F I0813 19:59:23.658762 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:23.658948727+00:00 stderr F I0813 19:59:23.658931 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.658987408+00:00 stderr F I0813 19:59:23.658970 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:23.660177232+00:00 stderr F I0813 19:59:23.659931 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:23.683340952+00:00 stderr F I0813 19:59:23.683297 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:23.683464446+00:00 stderr F I0813 19:59:23.660065 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key"
2025-08-13T19:59:23.697110585+00:00 stderr F I0813 19:59:23.693327 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:23.707916413+00:00 stderr F W0813 19:59:23.706281 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:50216->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:23.749929450+00:00 stderr F W0813 19:59:23.749230 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:59130->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:23.789565970+00:00 stderr F W0813 19:59:23.774492 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:34813->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:23.789565970+00:00 stderr F I0813 19:59:23.665072 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.789565970+00:00 stderr F I0813 19:59:23.788885 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:23.851459845+00:00 stderr F I0813 19:59:23.838251 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.859441 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.862289 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.862499 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.862529 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.872977 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.976643633+00:00 stderr F W0813 19:59:23.959545 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46183->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.959998 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.960030 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960080 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960117 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960260 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:23.977470177+00:00 stderr F E0813 19:59:23.977429 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.977557779+00:00 stderr F E0813 19:59:23.977533 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.978249119+00:00 stderr F E0813 19:59:23.978222 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.988976935+00:00 stderr F E0813 19:59:23.988881 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.996523760+00:00 stderr F E0813 19:59:23.996485 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:23.999168595+00:00 stderr F E0813 19:59:23.999078 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.003033765+00:00 stderr F E0813 19:59:24.002966 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.015609204+00:00 stderr F E0813 19:59:24.014282 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.033490984+00:00 stderr F E0813 19:59:24.012013 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.033653068+00:00 stderr F E0813 19:59:24.033622 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.044284161+00:00 stderr F E0813 19:59:24.043528 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.054992656+00:00 stderr F E0813 19:59:24.054324 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.054992656+00:00 stderr F E0813 19:59:24.054381 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.078767994+00:00 stderr F E0813 19:59:24.077021 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.097853998+00:00 stderr F E0813 19:59:24.095203 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.125724753+00:00 stderr F E0813 19:59:24.124005 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.136417948+00:00 stderr F E0813 19:59:24.136044 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.160041621+00:00 stderr F E0813 19:59:24.157966 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.176171051+00:00 stderr F E0813 19:59:24.175957 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.288501183+00:00 stderr F E0813 19:59:24.284585 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.300258988+00:00 stderr F E0813 19:59:24.300092 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.322673827+00:00 stderr F E0813 19:59:24.321370 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.338433826+00:00 stderr F E0813 19:59:24.338312 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.642557 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.643580 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.646489 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.682534534+00:00 stderr F E0813 19:59:24.678028 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.286624 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.287059 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.287505 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:25.301457677+00:00 stderr F W0813 19:59:25.298317 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55001->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:25.318372169+00:00 stderr F W0813 19:59:25.318170 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:54845->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:25.318478232+00:00 stderr F W0813 19:59:25.318405 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49886->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:25.318491072+00:00 stderr F W0813 19:59:25.318472 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60072->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:25.318637136+00:00 stderr F E0813 19:59:25.318603 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.249019077+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace
2025-08-13T19:59:26.256263304+00:00 stderr F time="2025-08-13T19:59:26Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55001->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}"
2025-08-13T19:59:26.295117731+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace
2025-08-13T19:59:26.295117731+00:00 stderr F time="2025-08-13T19:59:26Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60072->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}"
2025-08-13T19:59:26.583654216+00:00 stderr F E0813 19:59:26.582010 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.583654216+00:00 stderr F E0813 19:59:26.582431 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.584416318+00:00 stderr F E0813 19:59:26.584012 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.605873800+00:00 stderr F E0813 19:59:26.605652 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:27.095678202+00:00 stderr F E0813 19:59:27.094975 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.123041232+00:00 stderr F E0813 19:59:27.122181 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.123041232+00:00 stderr F E0813 19:59:27.122689 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.123218527+00:00 stderr F E0813 19:59:27.123107 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.176560857+00:00 stderr F E0813 19:59:27.126044 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.247275103+00:00 stderr F W0813 19:59:27.245344 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43093->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:27.489334133+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace
2025-08-13T19:59:27.489534669+00:00 stderr F time="2025-08-13T19:59:27Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49886->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}"
2025-08-13T19:59:27.611118185+00:00 stderr F W0813 19:59:27.610462 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38532->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:27.611508286+00:00 stderr F W0813 19:59:27.611349 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38374->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:27.666886914+00:00 stderr F W0813 19:59:27.662119 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:53325->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:27.722828369+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace
2025-08-13T19:59:27.722999974+00:00 stderr F time="2025-08-13T19:59:27Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38532->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}"
2025-08-13T19:59:27.892387802+00:00 stderr F E0813 19:59:27.892315 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.894210574+00:00 stderr F E0813 19:59:27.893518 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.894714329+00:00 stderr F E0813 19:59:27.893532 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.895250094+00:00 stderr F E0813 19:59:27.893611 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.896173530+00:00 stderr F E0813 19:59:27.893652 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.939682790+00:00 stderr F E0813 19:59:27.907594 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.975969835+00:00 stderr F E0813 19:59:27.975907 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.979482245+00:00 stderr F E0813 19:59:27.979448 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.285637351+00:00 stderr F E0813 19:59:28.149888 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.305910959+00:00 stderr F E0813 19:59:28.153221 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.305910959+00:00 stderr F E0813 19:59:28.154254 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.321043260+00:00 stderr F E0813 19:59:28.155060 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.509691 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.510964 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.511241 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.538281293+00:00 stderr F E0813 19:59:28.538083 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.539471637+00:00 stderr F E0813 19:59:28.538879 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.580161727+00:00 stderr F E0813 19:59:28.576581 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.586629151+00:00 stderr F E0813 19:59:28.583976 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.586629151+00:00 stderr F E0813 19:59:28.586318 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.596542443+00:00 stderr F E0813 19:59:28.595931 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.617760498+00:00 stderr F E0813 19:59:28.615621 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.620897 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.621387 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.621475 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.627189007+00:00 stderr F E0813 19:59:28.626373 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.627189007+00:00 stderr F E0813 19:59:28.626969 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.702275718+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace
2025-08-13T19:59:28.702328449+00:00 stderr F time="2025-08-13T19:59:28Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43093->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.860255 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.860620 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861078 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861521 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861751 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:29.142703442+00:00 stderr F E0813 19:59:29.142638 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.153274173+00:00 stderr F E0813 19:59:29.153156 1 configmap_cafile_content.go:243]
kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.155073075+00:00 stderr F E0813 19:59:29.153425 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.166039687+00:00 stderr F E0813 19:59:29.165990 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.258589855+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T19:59:29.258742140+00:00 stderr F time="2025-08-13T19:59:29Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38374->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.419873 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420250 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, 
AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420442 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420622 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420927 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.626727279+00:00 stderr F W0813 19:59:29.625254 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49867->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:29.928732508+00:00 stderr F W0813 19:59:29.916278 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52709->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:30.160441743+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T19:59:30.168229755+00:00 stderr F time="2025-08-13T19:59:30Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52709->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-08-13T19:59:30.201098882+00:00 stderr F E0813 19:59:30.201004 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.205076685+00:00 stderr F E0813 19:59:30.201367 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.205076685+00:00 stderr F E0813 19:59:30.201700 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.261903495+00:00 stderr F E0813 19:59:30.242444 1 
authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.275320018+00:00 stderr F E0813 19:59:30.275283 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.571214002+00:00 stderr F W0813 19:59:30.571157 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:40819->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:30.798852981+00:00 stderr F W0813 19:59:30.796635 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43500->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.705416 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706088 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706282 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706505 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.724590 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:33.186124520+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" 
address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T19:59:33.186291375+00:00 stderr F time="2025-08-13T19:59:33Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43500->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-08-13T19:59:33.731313711+00:00 stderr F W0813 19:59:33.731250 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T19:59:34.232634982+00:00 stderr F W0813 19:59:34.231206 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T19:59:34.280269209+00:00 stderr F E0813 19:59:34.266200 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.283483771+00:00 stderr F E0813 19:59:34.281889 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.305643193+00:00 stderr F E0813 19:59:34.305574 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.306139547+00:00 stderr F E0813 19:59:34.306120 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.545033677+00:00 stderr F E0813 19:59:34.522194 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.545033677+00:00 stderr F E0813 19:59:34.522523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.674225549+00:00 stderr F E0813 19:59:34.673123 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, 
AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.706747196+00:00 stderr F E0813 19:59:34.705343 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.754865488+00:00 stderr F E0813 19:59:34.669727 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.850960187+00:00 stderr F W0813 19:59:34.850308 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T19:59:35.058219095+00:00 stderr F W0813 19:59:35.056680 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T19:59:39.987047992+00:00 stderr F E0813 19:59:39.985498 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.039932499+00:00 stderr F E0813 19:59:40.039605 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.100807584+00:00 stderr F E0813 19:59:40.100483 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.260306171+00:00 stderr F E0813 19:59:40.243282 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.283765190+00:00 stderr F E0813 19:59:40.260700 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.700961392+00:00 stderr F W0813 19:59:40.680350 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T19:59:40.788635771+00:00 stderr F W0813 19:59:40.788513 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T19:59:41.074215822+00:00 stderr F W0813 19:59:41.073168 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T19:59:41.319629407+00:00 stderr F W0813 19:59:41.315176 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T19:59:44.540736135+00:00 stderr F E0813 19:59:44.524881 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.540736135+00:00 stderr F E0813 19:59:44.540258 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.633273543+00:00 stderr F E0813 19:59:44.633111 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.633371135+00:00 stderr F E0813 19:59:44.633328 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.932636194+00:00 stderr F W0813 19:59:49.931997 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T19:59:50.531744722+00:00 stderr F E0813 19:59:50.530292 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.844545 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.845564 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.852372 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871571450+00:00 stderr F E0813 19:59:50.871534 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:51.326018234+00:00 stderr F W0813 19:59:51.325587 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T19:59:51.450971396+00:00 stderr F W0813 19:59:51.450692 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T19:59:52.606671350+00:00 stderr F W0813 19:59:52.605750 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:03.507188075+00:00 stderr F W0813 20:00:03.505514 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:00:06.623020709+00:00 stderr F W0813 20:00:06.622177 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:00:06.623415940+00:00 stderr F W0813 20:00:06.623258 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:00:07.675007495+00:00 stderr F W0813 20:00:07.671547 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:27.945990342+00:00 stderr F W0813 20:00:27.941701 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:00:29.988055289+00:00 stderr F W0813 20:00:29.985630 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:31.935746376+00:00 stderr F W0813 20:00:31.935224 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:00:35.857438308+00:00 stderr F W0813 20:00:35.855636 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:01:10.363073166+00:00 stderr F W0813 20:01:10.361738 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:01:11.010767704+00:00 stderr F W0813 20:01:11.009529 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:01:11.878877308+00:00 stderr F W0813 20:01:11.876332 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:01:21.349283955+00:00 stderr F W0813 20:01:21.348270 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:02:22.615119270+00:00 stderr F W0813 20:02:22.614245 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:02:24.747603143+00:00 stderr F W0813 20:02:24.746366 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:02:26.988925622+00:00 stderr F W0813 20:02:26.988481 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:02:30.453457865+00:00 stderr F W0813 20:02:30.452679 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:03:52.874053825+00:00 stderr F W0813 20:03:52.871907 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:04:04.401934902+00:00 stderr F E0813 20:04:04.401068 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.402998952+00:00 stderr F E0813 20:04:04.402948 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.281572926+00:00 stderr F E0813 20:04:05.280372 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.281572926+00:00 stderr F E0813 20:04:05.281070 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.286935489+00:00 stderr F E0813 20:04:05.284345 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.286935489+00:00 stderr F E0813 20:04:05.284410 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.215685534+00:00 stderr F W0813 20:04:09.215602 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:04:29.629577865+00:00 stderr F W0813 20:04:29.629008 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:04:35.640297450+00:00 stderr F W0813 20:04:35.639529 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:05:05.288435794+00:00 stderr F E0813 20:05:05.287900 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.288496986+00:00 stderr F E0813 20:05:05.288432 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.321420159+00:00 stderr F E0813 20:05:05.321363 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.321516132+00:00 stderr F E0813 20:05:05.321498 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:33.199988341+00:00 
stderr F time="2025-08-13T20:06:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:06:33.200242358+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:06:46.091914232+00:00 stderr F time="2025-08-13T20:06:46Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:06:46.092101228+00:00 stderr F time="2025-08-13T20:06:46Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-08-13T20:06:46.092101228+00:00 stderr F time="2025-08-13T20:06:46Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:06:48.336406265+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 
2025-08-13T20:07:00.663275096+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:07:19.651681279+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:07:41.934622389+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:07:50.348659746+00:00 stderr F time="2025-08-13T20:07:50Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:08:18.638742578+00:00 stderr F time="2025-08-13T20:08:18Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:08:42.737265511+00:00 stderr F E0813 20:08:42.735492 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.744188410+00:00 stderr F 
E0813 20:08:42.744107 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.621857893+00:00 stderr F E0813 20:08:43.618322 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.621857893+00:00 stderr F E0813 20:08:43.618410 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.625036794+00:00 stderr F E0813 20:08:43.624526 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.625036794+00:00 stderr F E0813 20:08:43.624621 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:40.151279360+00:00 stderr F time="2025-08-13T20:09:40Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:09:40.151279360+00:00 stderr F time="2025-08-13T20:09:40Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:09:49.496565397+00:00 stderr F time="2025-08-13T20:09:49Z" level=info msg="updating PackageManifest based on 
CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:09:52.875492813+00:00 stderr F time="2025-08-13T20:09:52Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:16:58.160638342+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:17:00.205634852+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:17:16.072748758+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:17:30.179055164+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:27:05.651766544+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="updating PackageManifest 
based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:27:05.841924311+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:28:43.332268651+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:29:25.916234348+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:29:25.916953379+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:29:33.240381805+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:29:36.514472290+00:00 stderr F time="2025-08-13T20:29:36Z" level=info 
msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:29:44.206538782+00:00 stderr F time="2025-08-13T20:29:44Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:29:52.126891777+00:00 stderr F time="2025-08-13T20:29:52Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:37:48.225824045+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:38:36.094882524+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:41:21.481556744+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:42:26.028870400+00:00 stderr F time="2025-08-13T20:42:26Z" 
level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:42:39.339171660+00:00 stderr F W0813 20:42:39.338527 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:39.734393723+00:00 stderr F W0813 20:42:39.734342 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:39.951022478+00:00 stderr F W0813 20:42:39.950937 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:40.047724586+00:00 stderr F W0813 20:42:40.047677 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:40.344992617+00:00 stderr F W0813 20:42:40.344945 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:40.742693633+00:00 stderr F W0813 20:42:40.742601 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:40.959143163+00:00 stderr F W0813 20:42:40.959053 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:41.055167932+00:00 stderr F W0813 20:42:41.055001 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:41.670657416+00:00 stderr F W0813 20:42:41.670253 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:42.480053712+00:00 stderr F W0813 20:42:42.479949 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:42.551026478+00:00 stderr F W0813 20:42:42.550906 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:42.839825044+00:00 stderr F W0813 20:42:42.839695 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:44.057333295+00:00 stderr F I0813 20:42:44.056154 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:44.057333295+00:00 stderr F I0813 20:42:44.056440 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:44.057445078+00:00 stderr F I0813 20:42:44.057377 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:44.057554841+00:00 stderr F I0813 20:42:44.057509 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:44.057614363+00:00 stderr F I0813 20:42:44.057573 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:44.057673074+00:00 stderr F I0813 20:42:44.057635 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:44.058142168+00:00 stderr F I0813 20:42:44.057951 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key" 2025-08-13T20:42:44.058402505+00:00 stderr F I0813 20:42:44.058348 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:44.058523589+00:00 stderr F I0813 20:42:44.058456 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:44.059177148+00:00 stderr F I0813 20:42:44.059151 1 secure_serving.go:258] Stopped listening on [::]:5443 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log
2026-01-20T10:49:37.651159519+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="Using in-cluster kube client config"
2026-01-20T10:49:37.655092159+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="Defaulting Interval to '12h0m0s'"
2026-01-20T10:49:37.666076154+00:00 stderr F I0120 10:49:37.664540 1 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager
2026-01-20T10:49:37.669100976+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3"
2026-01-20T10:49:37.669100976+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="operator ready"
2026-01-20T10:49:37.669100976+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="starting informers..."
2026-01-20T10:49:37.669100976+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="informers started"
2026-01-20T10:49:37.669100976+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="waiting for caches to sync..."
2026-01-20T10:49:37.770804513+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="starting workers..."
2026-01-20T10:49:37.772356701+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="connecting to source" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2026-01-20T10:49:37.772399122+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="connecting to source" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2026-01-20T10:49:37.777362254+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2026-01-20T10:49:37.777825778+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2026-01-20T10:49:37.818477646+00:00 stderr F W0120 10:49:37.818405 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:40951->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:37.819231868+00:00 stderr F W0120 10:49:37.818827 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:39244->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:37.819257209+00:00 stderr F W0120 10:49:37.819243 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:48398->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:37.825281003+00:00 stderr F W0120 10:49:37.823553 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:33977->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.854928 1 secure_serving.go:213] Serving securely on [::]:5443 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.854979 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855146 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855208 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855216 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855221 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key" 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855505 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855509 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855226 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855598 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855513 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855818 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855825 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855833 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:37.856315159+00:00 stderr F I0120 10:49:37.855837 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.873137771+00:00 stderr F time="2026-01-20T10:49:37Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2026-01-20T10:49:37.875615556+00:00 stderr F time="2026-01-20T10:49:37Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:48398->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}" 2026-01-20T10:49:37.958266403+00:00 stderr F I0120 10:49:37.955370 1 shared_informer.go:318] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.958266403+00:00 stderr F I0120 10:49:37.955410 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:37.958266403+00:00 stderr F I0120 10:49:37.956168 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:37.958266403+00:00 stderr F I0120 10:49:37.956182 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:37.958266403+00:00 stderr F I0120 10:49:37.957550 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:37.958266403+00:00 stderr F I0120 10:49:37.957584 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:38.474489507+00:00 stderr F time="2026-01-20T10:49:38Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2026-01-20T10:49:38.474489507+00:00 stderr F time="2026-01-20T10:49:38Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:40951->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}" 2026-01-20T10:49:38.860520046+00:00 stderr F W0120 10:49:38.859781 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", 
ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:50387->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:38.866073215+00:00 stderr F W0120 10:49:38.865315 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46483->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:38.866073215+00:00 stderr F W0120 10:49:38.865358 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52845->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:38.866768006+00:00 stderr F W0120 10:49:38.866524 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:36624->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:39.079642810+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2026-01-20T10:49:39.079642810+00:00 stderr F time="2026-01-20T10:49:39Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:50387->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2026-01-20T10:49:39.677030266+00:00 stderr F time="2026-01-20T10:49:39Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2026-01-20T10:49:39.677030266+00:00 stderr F time="2026-01-20T10:49:39Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46483->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}" 2026-01-20T10:49:40.264874581+00:00 stderr F W0120 10:49:40.263383 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: 
"redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:35071->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:40.369805068+00:00 stderr F W0120 10:49:40.367454 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:36770->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:40.477231259+00:00 stderr F W0120 10:49:40.476691 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:33961->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:40.644097382+00:00 stderr F W0120 10:49:40.643248 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:56470->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:42.401252804+00:00 stderr F W0120 10:49:42.401190 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:42088->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:42.753617187+00:00 stderr F W0120 10:49:42.753562 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38003->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:43.266840459+00:00 stderr F W0120 10:49:43.264738 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55343->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:43.391578349+00:00 stderr F W0120 10:49:43.391523 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52390->10.217.4.10:53: read: connection refused" 2026-01-20T10:49:46.603565714+00:00 stderr F W0120 10:49:46.603501 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2026-01-20T10:49:46.919216258+00:00 stderr F W0120 10:49:46.919160 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2026-01-20T10:49:46.951817811+00:00 stderr F W0120 10:49:46.951778 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2026-01-20T10:49:47.739173744+00:00 stderr F W0120 10:49:47.738850 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2026-01-20T10:49:52.207198717+00:00 stderr F W0120 10:49:52.207157 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2026-01-20T10:49:53.207357181+00:00 stderr F W0120 10:49:53.206854 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2026-01-20T10:49:53.257674654+00:00 stderr F W0120 10:49:53.257597 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2026-01-20T10:49:53.875431610+00:00 stderr F W0120 10:49:53.875376 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2026-01-20T10:50:01.920755036+00:00 stderr F W0120 10:50:01.920436 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2026-01-20T10:50:03.519761961+00:00 stderr F W0120 10:50:03.519338 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2026-01-20T10:50:03.771498359+00:00 stderr F W0120 10:50:03.771410 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2026-01-20T10:50:04.100221011+00:00 stderr F W0120 10:50:04.100138 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2026-01-20T10:50:20.589505143+00:00 stderr F W0120 10:50:20.588913 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2026-01-20T10:50:21.112306318+00:00 stderr F W0120 10:50:21.112265 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2026-01-20T10:50:21.848663476+00:00 stderr F W0120 10:50:21.848563 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2026-01-20T10:50:22.905032283+00:00 stderr F W0120 10:50:22.904988 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2026-01-20T10:50:47.392035440+00:00 stderr F W0120 10:50:47.391952 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2026-01-20T10:50:50.851347987+00:00 stderr F W0120 10:50:50.851292 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2026-01-20T10:50:51.684083160+00:00 stderr F W0120 10:50:51.683999 1 logging.go:59] [core] [Channel #1 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2026-01-20T10:50:52.517671147+00:00 stderr F W0120 10:50:52.517620 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2026-01-20T10:51:00.918148289+00:00 stderr F time="2026-01-20T10:51:00Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2026-01-20T10:51:00.918148289+00:00 stderr F time="2026-01-20T10:51:00Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2026-01-20T10:51:03.107975632+00:00 stderr F time="2026-01-20T10:51:03Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2026-01-20T10:51:03.107999383+00:00 stderr F time="2026-01-20T10:51:03Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2026-01-20T10:51:05.907642474+00:00 stderr F time="2026-01-20T10:51:05Z" 
level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2026-01-20T10:51:05.907668475+00:00 stderr F time="2026-01-20T10:51:05Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused\"" source="{redhat-operators openshift-marketplace}" 2026-01-20T10:51:06.915309847+00:00 stderr F time="2026-01-20T10:51:06Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2026-01-20T10:51:06.915309847+00:00 stderr F time="2026-01-20T10:51:06Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2026-01-20T10:51:31.540264771+00:00 stderr F time="2026-01-20T10:51:31Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2026-01-20T10:51:31.540264771+00:00 stderr F time="2026-01-20T10:51:31Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators 
openshift-marketplace}" 2026-01-20T10:51:32.937614809+00:00 stderr F time="2026-01-20T10:51:32Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2026-01-20T10:51:35.736245817+00:00 stderr F time="2026-01-20T10:51:35Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2026-01-20T10:51:35.736304039+00:00 stderr F time="2026-01-20T10:51:35Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused\"" source="{certified-operators openshift-marketplace}" 2026-01-20T10:51:36.932844138+00:00 stderr F time="2026-01-20T10:51:36Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2026-01-20T10:51:41.415340732+00:00 stderr F W0120 10:51:41.415281 1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2026-01-20T10:51:45.108958256+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2026-01-20T10:51:45.992873798+00:00 stderr F time="2026-01-20T10:51:45Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2026-01-20T10:51:45.992873798+00:00 stderr F time="2026-01-20T10:51:45Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused\"" source="{redhat-operators openshift-marketplace}" 2026-01-20T10:51:52.890134825+00:00 stderr F time="2026-01-20T10:51:52Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2026-01-20T10:52:37.590115965+00:00 stderr F time="2026-01-20T10:52:37Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2026-01-20T10:52:37.590115965+00:00 stderr F time="2026-01-20T10:52:37Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = 
connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused\"" source="{redhat-operators openshift-marketplace}"
2026-01-20T10:57:24.162968898+00:00 stderr F E0120 10:57:24.162354 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:24.163026970+00:00 stderr F E0120 10:57:24.162970 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:25.090233560+00:00 stderr F E0120 10:57:25.090144 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:25.090233560+00:00 stderr F E0120 10:57:25.090192 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:25.090449126+00:00 stderr F E0120 10:57:25.090419 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2026-01-20T10:57:25.090449126+00:00 stderr F E0120 10:57:25.090440 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log
2026-01-20T10:49:37.117870966+00:00 stderr F W0120 10:49:37.111780 1 cmd.go:237] Using insecure, self-signed certificates
2026-01-20T10:49:37.117870966+00:00 stderr F I0120 10:49:37.117787 1 crypto.go:601] Generating new CA for service-ca-controller-signer@1768906177 cert, and key in /tmp/serving-cert-1133640266/serving-signer.crt, /tmp/serving-cert-1133640266/serving-signer.key
2026-01-20T10:49:37.536123105+00:00 stderr F I0120 10:49:37.534708 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2026-01-20T10:49:37.540077166+00:00 stderr F I0120 10:49:37.536253 1 observer_polling.go:159] Starting file observer 2026-01-20T10:49:37.586169119+00:00 stderr F I0120 10:49:37.586031 1 builder.go:271] service-ca-controller version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2026-01-20T10:49:37.588181721+00:00 stderr F I0120 10:49:37.588145 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1133640266/tls.crt::/tmp/serving-cert-1133640266/tls.key" 2026-01-20T10:49:38.421887386+00:00 stderr F I0120 10:49:38.420209 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:49:38.430887910+00:00 stderr F I0120 10:49:38.430857 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:38.430933261+00:00 stderr F I0120 10:49:38.430923 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2026-01-20T10:49:38.430966572+00:00 stderr F I0120 10:49:38.430957 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:49:38.430990673+00:00 stderr F I0120 10:49:38.430982 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2026-01-20T10:49:38.433760337+00:00 stderr F I0120 10:49:38.433743 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:38.433799478+00:00 stderr F W0120 10:49:38.433789 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2026-01-20T10:49:38.433825459+00:00 stderr F W0120 10:49:38.433816 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2026-01-20T10:49:38.434012915+00:00 stderr F I0120 10:49:38.433998 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:49:38.439821151+00:00 stderr F I0120 10:49:38.439662 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:38.439821151+00:00 stderr F I0120 10:49:38.439692 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:38.439821151+00:00 stderr F I0120 10:49:38.439765 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:38.439821151+00:00 stderr F I0120 10:49:38.439773 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:38.439821151+00:00 stderr F I0120 10:49:38.439791 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:38.439821151+00:00 stderr F I0120 10:49:38.439797 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:38.440513392+00:00 stderr F I0120 10:49:38.440492 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1133640266/tls.crt::/tmp/serving-cert-1133640266/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1768906177\" (2026-01-20 10:49:36 +0000 UTC to 2026-02-19 10:49:37 +0000 UTC (now=2026-01-20 10:49:38.4404562 +0000 UTC))" 2026-01-20T10:49:38.440817431+00:00 stderr F I0120 10:49:38.440804 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:49:38.44078644 +0000 UTC))" 2026-01-20T10:49:38.440855093+00:00 stderr F I0120 10:49:38.440845 1 secure_serving.go:210] Serving securely on [::]:8443 2026-01-20T10:49:38.440892934+00:00 stderr F I0120 10:49:38.440883 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:49:38.440939915+00:00 stderr F I0120 10:49:38.440925 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-1133640266/tls.crt::/tmp/serving-cert-1133640266/tls.key" 2026-01-20T10:49:38.441095870+00:00 stderr F I0120 10:49:38.441079 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:38.445597477+00:00 stderr F I0120 10:49:38.445551 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2026-01-20T10:49:38.448358301+00:00 stderr F I0120 10:49:38.445808 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock... 
2026-01-20T10:49:38.542106847+00:00 stderr F I0120 10:49:38.541546 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:38.542106847+00:00 stderr F I0120 10:49:38.541869 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:38.542106847+00:00 stderr F I0120 10:49:38.541972 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:38.549104110+00:00 stderr F I0120 10:49:38.544382 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:38.544335995 +0000 UTC))" 2026-01-20T10:49:38.549104110+00:00 stderr F I0120 10:49:38.544940 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1133640266/tls.crt::/tmp/serving-cert-1133640266/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1768906177\" (2026-01-20 10:49:36 +0000 UTC to 2026-02-19 10:49:37 +0000 UTC (now=2026-01-20 10:49:38.544924043 +0000 UTC))" 2026-01-20T10:49:38.549104110+00:00 stderr F I0120 10:49:38.545223 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:49:38.545208432 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549174 1 tlsconfig.go:178] "Loaded 
client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:38.549140211 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549206 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:38.549195323 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549223 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:38.549210253 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549238 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:38.549227404 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549253 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:38.549242804 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549268 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:38.549257155 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549303 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:38.549271445 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549320 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:38.549308036 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549333 1 tlsconfig.go:178] "Loaded 
client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:38.549323897 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549351 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:38.549341977 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549367 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:38.549357698 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549645 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1133640266/tls.crt::/tmp/serving-cert-1133640266/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1768906177\" (2026-01-20 10:49:36 +0000 UTC to 2026-02-19 10:49:37 +0000 UTC (now=2026-01-20 10:49:38.549632596 +0000 UTC))" 2026-01-20T10:49:38.553094411+00:00 stderr F I0120 10:49:38.549885 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" 
certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:49:38.549873143 +0000 UTC))" 2026-01-20T10:54:43.921577177+00:00 stderr F I0120 10:54:43.920753 1 leaderelection.go:260] successfully acquired lease openshift-service-ca/service-ca-controller-lock 2026-01-20T10:54:43.921577177+00:00 stderr F I0120 10:54:43.920905 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca", Name:"service-ca-controller-lock", UID:"0db0ad19-cc12-475d-ab84-bad75b08b334", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41962", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-666f99b6f-kk8kg_0883a330-3f69-4211-9a61-2581cc36557b became leader 2026-01-20T10:54:43.925210483+00:00 stderr F I0120 10:54:43.925114 1 base_controller.go:67] Waiting for caches to sync for ConfigMapCABundleInjector 2026-01-20T10:54:43.926045165+00:00 stderr F I0120 10:54:43.925952 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertUpdateController 2026-01-20T10:54:43.926045165+00:00 stderr F I0120 10:54:43.926000 1 base_controller.go:67] Waiting for caches to sync for CRDCABundleInjector 2026-01-20T10:54:43.926106967+00:00 stderr F I0120 10:54:43.926083 1 base_controller.go:67] Waiting for caches to sync for MutatingWebhookCABundleInjector 2026-01-20T10:54:43.926135987+00:00 stderr F I0120 10:54:43.926108 1 base_controller.go:67] Waiting for caches to sync for ValidatingWebhookCABundleInjector 2026-01-20T10:54:43.926156178+00:00 stderr F I0120 10:54:43.926131 1 base_controller.go:67] Waiting for caches to sync for LegacyVulnerableConfigMapCABundleInjector 2026-01-20T10:54:43.926351553+00:00 stderr F I0120 10:54:43.926275 1 base_controller.go:67] Waiting for caches to sync for APIServiceCABundleInjector 2026-01-20T10:54:43.926654961+00:00 stderr F I0120 
10:54:43.926585 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertController 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031225 1 base_controller.go:73] Caches are synced for MutatingWebhookCABundleInjector 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031274 1 base_controller.go:110] Starting #1 worker of MutatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031290 1 base_controller.go:110] Starting #2 worker of MutatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031295 1 base_controller.go:110] Starting #3 worker of MutatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031300 1 base_controller.go:110] Starting #4 worker of MutatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031305 1 base_controller.go:110] Starting #5 worker of MutatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031463 1 base_controller.go:73] Caches are synced for ValidatingWebhookCABundleInjector 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031469 1 base_controller.go:110] Starting #1 worker of ValidatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031476 1 base_controller.go:110] Starting #2 worker of ValidatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031481 1 base_controller.go:110] Starting #3 worker of ValidatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031487 1 base_controller.go:110] Starting #4 worker of ValidatingWebhookCABundleInjector controller ... 
2026-01-20T10:54:44.032143963+00:00 stderr F I0120 10:54:44.031492 1 base_controller.go:110] Starting #5 worker of ValidatingWebhookCABundleInjector controller ... 2026-01-20T10:54:44.036109749+00:00 stderr F I0120 10:54:44.035611 1 base_controller.go:73] Caches are synced for APIServiceCABundleInjector 2026-01-20T10:54:44.036109749+00:00 stderr F I0120 10:54:44.035640 1 base_controller.go:110] Starting #1 worker of APIServiceCABundleInjector controller ... 2026-01-20T10:54:44.036109749+00:00 stderr F I0120 10:54:44.035652 1 base_controller.go:110] Starting #2 worker of APIServiceCABundleInjector controller ... 2026-01-20T10:54:44.036109749+00:00 stderr F I0120 10:54:44.035659 1 base_controller.go:110] Starting #3 worker of APIServiceCABundleInjector controller ... 2026-01-20T10:54:44.036109749+00:00 stderr F I0120 10:54:44.035666 1 base_controller.go:110] Starting #4 worker of APIServiceCABundleInjector controller ... 2026-01-20T10:54:44.036109749+00:00 stderr F I0120 10:54:44.035671 1 base_controller.go:110] Starting #5 worker of APIServiceCABundleInjector controller ... 2026-01-20T10:54:44.226397441+00:00 stderr F I0120 10:54:44.226279 1 base_controller.go:73] Caches are synced for LegacyVulnerableConfigMapCABundleInjector 2026-01-20T10:54:44.226397441+00:00 stderr F I0120 10:54:44.226320 1 base_controller.go:110] Starting #1 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2026-01-20T10:54:44.226397441+00:00 stderr F I0120 10:54:44.226336 1 base_controller.go:110] Starting #2 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2026-01-20T10:54:44.226397441+00:00 stderr F I0120 10:54:44.226341 1 base_controller.go:110] Starting #3 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2026-01-20T10:54:44.226397441+00:00 stderr F I0120 10:54:44.226350 1 base_controller.go:110] Starting #4 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 
2026-01-20T10:54:44.226397441+00:00 stderr F I0120 10:54:44.226359 1 base_controller.go:110] Starting #5 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2026-01-20T10:54:44.226397441+00:00 stderr F I0120 10:54:44.226368 1 base_controller.go:73] Caches are synced for ServiceServingCertUpdateController 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226402 1 base_controller.go:73] Caches are synced for CRDCABundleInjector 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226429 1 base_controller.go:110] Starting #1 worker of CRDCABundleInjector controller ... 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226441 1 base_controller.go:110] Starting #2 worker of CRDCABundleInjector controller ... 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226480 1 base_controller.go:110] Starting #3 worker of CRDCABundleInjector controller ... 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226491 1 base_controller.go:110] Starting #4 worker of CRDCABundleInjector controller ... 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226506 1 base_controller.go:110] Starting #5 worker of CRDCABundleInjector controller ... 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226408 1 base_controller.go:110] Starting #1 worker of ServiceServingCertUpdateController controller ... 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226532 1 base_controller.go:110] Starting #2 worker of ServiceServingCertUpdateController controller ... 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226537 1 base_controller.go:110] Starting #3 worker of ServiceServingCertUpdateController controller ... 2026-01-20T10:54:44.226548305+00:00 stderr F I0120 10:54:44.226542 1 base_controller.go:110] Starting #4 worker of ServiceServingCertUpdateController controller ... 
2026-01-20T10:54:44.226592666+00:00 stderr F I0120 10:54:44.226546 1 base_controller.go:110] Starting #5 worker of ServiceServingCertUpdateController controller ... 2026-01-20T10:54:44.226934886+00:00 stderr F I0120 10:54:44.226899 1 base_controller.go:73] Caches are synced for ServiceServingCertController 2026-01-20T10:54:44.226997608+00:00 stderr F I0120 10:54:44.226975 1 base_controller.go:110] Starting #1 worker of ServiceServingCertController controller ... 2026-01-20T10:54:44.227042509+00:00 stderr F I0120 10:54:44.227028 1 base_controller.go:110] Starting #2 worker of ServiceServingCertController controller ... 2026-01-20T10:54:44.227135922+00:00 stderr F I0120 10:54:44.227118 1 base_controller.go:110] Starting #3 worker of ServiceServingCertController controller ... 2026-01-20T10:54:44.227187973+00:00 stderr F I0120 10:54:44.227173 1 base_controller.go:110] Starting #4 worker of ServiceServingCertController controller ... 2026-01-20T10:54:44.227226034+00:00 stderr F I0120 10:54:44.227213 1 base_controller.go:110] Starting #5 worker of ServiceServingCertController controller ... 2026-01-20T10:54:44.227354107+00:00 stderr F I0120 10:54:44.226907 1 base_controller.go:73] Caches are synced for ConfigMapCABundleInjector 2026-01-20T10:54:44.227414059+00:00 stderr F I0120 10:54:44.227371 1 base_controller.go:110] Starting #1 worker of ConfigMapCABundleInjector controller ... 2026-01-20T10:54:44.227414059+00:00 stderr F I0120 10:54:44.227403 1 base_controller.go:110] Starting #2 worker of ConfigMapCABundleInjector controller ... 2026-01-20T10:54:44.227426989+00:00 stderr F I0120 10:54:44.227411 1 base_controller.go:110] Starting #3 worker of ConfigMapCABundleInjector controller ... 2026-01-20T10:54:44.227438650+00:00 stderr F I0120 10:54:44.227428 1 base_controller.go:110] Starting #4 worker of ConfigMapCABundleInjector controller ... 
2026-01-20T10:54:44.227451770+00:00 stderr F I0120 10:54:44.227435 1 base_controller.go:110] Starting #5 worker of ConfigMapCABundleInjector controller ... 2026-01-20T10:56:07.100681036+00:00 stderr F I0120 10:56:07.100048 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.100009537 +0000 UTC))" 2026-01-20T10:56:07.100681036+00:00 stderr F I0120 10:56:07.100631 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.100619365 +0000 UTC))" 2026-01-20T10:56:07.100681036+00:00 stderr F I0120 10:56:07.100648 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.100636055 +0000 UTC))" 2026-01-20T10:56:07.100681036+00:00 stderr F I0120 10:56:07.100663 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] 
issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.100652506 +0000 UTC))" 2026-01-20T10:56:07.100724267+00:00 stderr F I0120 10:56:07.100681 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.100667406 +0000 UTC))" 2026-01-20T10:56:07.100724267+00:00 stderr F I0120 10:56:07.100701 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.100686046 +0000 UTC))" 2026-01-20T10:56:07.100724267+00:00 stderr F I0120 10:56:07.100719 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.100705757 +0000 UTC))" 2026-01-20T10:56:07.100764919+00:00 stderr F I0120 10:56:07.100735 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.100724647 +0000 UTC))" 2026-01-20T10:56:07.100764919+00:00 stderr F I0120 10:56:07.100756 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.100745078 +0000 UTC))" 2026-01-20T10:56:07.100809510+00:00 stderr F I0120 10:56:07.100776 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.100761678 +0000 UTC))" 2026-01-20T10:56:07.100809510+00:00 stderr F I0120 10:56:07.100801 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.100786399 +0000 UTC))" 2026-01-20T10:56:07.100841861+00:00 stderr F I0120 10:56:07.100819 1 tlsconfig.go:178] "Loaded client CA" index=11 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.10080711 +0000 UTC))" 2026-01-20T10:56:07.102376811+00:00 stderr F I0120 10:56:07.101869 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1133640266/tls.crt::/tmp/serving-cert-1133640266/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1768906177\" (2026-01-20 10:49:36 +0000 UTC to 2026-02-19 10:49:37 +0000 UTC (now=2026-01-20 10:56:07.101778435 +0000 UTC))" 2026-01-20T10:56:07.109255977+00:00 stderr F I0120 10:56:07.109179 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:56:07.109138964 +0000 UTC))" 2026-01-20T10:56:24.946980226+00:00 stderr F I0120 10:56:24.946015 1 configmap.go:109] updating configmap openstack/openshift-service-ca.crt with the service signing CA bundle 2026-01-20T10:56:25.756526857+00:00 stderr F I0120 10:56:25.756467 1 configmap.go:109] updating configmap openstack-operators/openshift-service-ca.crt with the service signing CA bundle 2026-01-20T10:56:30.689430938+00:00 stderr F I0120 10:56:30.687490 1 configmap.go:109] updating configmap cert-manager-operator/openshift-service-ca.crt with the service signing CA bundle 2026-01-20T10:56:33.652256694+00:00 stderr F I0120 10:56:33.650049 1 configmap.go:109] updating configmap cert-manager/openshift-service-ca.crt with the service signing CA bundle 
2026-01-20T10:57:43.978917419+00:00 stderr F E0120 10:57:43.978293 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log
2025-08-13T19:59:53.905156333+00:00 stderr F W0813 19:59:53.904104 1 cmd.go:237] Using insecure, self-signed certificates 2025-08-13T19:59:53.905741480+00:00 stderr F I0813 19:59:53.905400 1 crypto.go:601] Generating new CA for service-ca-controller-signer@1755115193 cert, and key in /tmp/serving-cert-2770977124/serving-signer.crt, /tmp/serving-cert-2770977124/serving-signer.key 2025-08-13T19:59:56.792955952+00:00 stderr F I0813 19:59:56.785561 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T19:59:56.795018501+00:00 stderr F I0813 19:59:56.794758 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:57.536248119+00:00 stderr F I0813 19:59:57.532205 1 builder.go:271] service-ca-controller version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-08-13T19:59:57.570850555+00:00 stderr F I0813 19:59:57.568995 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" 2025-08-13T20:00:00.381378871+00:00 stderr F I0813 20:00:00.361664 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:00:00.690333237+00:00 stderr F I0813 20:00:00.690271 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:00:00.690551553+00:00 stderr F I0813 20:00:00.690534 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:00:00.690677027+00:00 stderr F I0813 20:00:00.690658 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:00:00.690713028+00:00 stderr F I0813 20:00:00.690701 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:00:00.779052026+00:00 stderr F I0813 20:00:00.773924 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:00:00.779052026+00:00 stderr F I0813 20:00:00.774096 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:00.779052026+00:00 stderr F W0813 20:00:00.774116 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:00.779052026+00:00 stderr F W0813 20:00:00.774123 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:00:00.877103421+00:00 stderr F I0813 20:00:00.870488 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:00.877103421+00:00 stderr F I0813 20:00:00.871260 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock... 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.966590 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967111 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967373 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967386 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967427 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967435 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:00.978537222+00:00 stderr F I0813 20:00:00.978440 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" 2025-08-13T20:00:01.237433212+00:00 stderr F I0813 20:00:00.994394 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] 
validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:00.994350203 +0000 UTC))" 2025-08-13T20:00:01.275449706+00:00 stderr F I0813 20:00:01.275385 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:01.279296405+00:00 stderr F I0813 20:00:01.010429 1 leaderelection.go:260] successfully acquired lease openshift-service-ca/service-ca-controller-lock 2025-08-13T20:00:01.279622185+00:00 stderr F I0813 20:00:01.012093 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca", Name:"service-ca-controller-lock", UID:"0db0ad19-cc12-475d-ab84-bad75b08b334", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28836", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-666f99b6f-kk8kg_02e86f09-10c8-4c1e-ad51-e1a5d4b40977 became leader 2025-08-13T20:00:01.279657306+00:00 stderr F I0813 20:00:01.084581 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:01.308670713+00:00 stderr F I0813 20:00:01.196057 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:01.530259199+00:00 stderr F I0813 20:00:01.530197 1 base_controller.go:67] Waiting for caches to sync for CRDCABundleInjector 2025-08-13T20:00:01.530383673+00:00 stderr F I0813 20:00:01.530364 1 base_controller.go:67] Waiting for caches to sync for MutatingWebhookCABundleInjector 2025-08-13T20:00:01.530430114+00:00 stderr F I0813 20:00:01.530417 1 base_controller.go:67] Waiting for caches to sync for ConfigMapCABundleInjector 2025-08-13T20:00:01.530882487+00:00 stderr F I0813 20:00:01.530768 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:01.530729443 +0000 UTC))" 2025-08-13T20:00:01.530963879+00:00 stderr F I0813 20:00:01.530940 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:00:01.531060582+00:00 stderr F I0813 20:00:01.531037 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:00:01.531119894+00:00 stderr F I0813 20:00:01.531102 1 base_controller.go:67] Waiting for caches to sync for APIServiceCABundleInjector 2025-08-13T20:00:01.531203066+00:00 stderr F I0813 20:00:01.531180 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:01.531558186+00:00 stderr F I0813 20:00:01.531528 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:01.531387811 +0000 UTC))" 2025-08-13T20:00:01.531647709+00:00 stderr F I0813 20:00:01.531623 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:01.531593267 +0000 UTC))" 2025-08-13T20:00:01.531753892+00:00 stderr F I0813 20:00:01.531728 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:01.53169742 +0000 UTC))" 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.536568 1 base_controller.go:67] Waiting for caches to sync for ValidatingWebhookCABundleInjector 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.536977 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertUpdateController 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.558136 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertController 2025-08-13T20:00:01.583302481+00:00 stderr F I0813 20:00:01.583259 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:01.583185308 +0000 UTC))" 2025-08-13T20:00:01.583375023+00:00 stderr F I0813 20:00:01.583354 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583331552 +0000 UTC))" 2025-08-13T20:00:01.583433285+00:00 stderr F I0813 20:00:01.583416 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583397484 +0000 UTC))" 2025-08-13T20:00:01.583497847+00:00 stderr F I0813 20:00:01.583478 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583459136 +0000 UTC))" 2025-08-13T20:00:01.583627270+00:00 stderr F I0813 20:00:01.583607 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583585969 +0000 UTC))" 2025-08-13T20:00:01.585390851+00:00 stderr F I0813 20:00:01.585359 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.585325739 +0000 UTC))" 2025-08-13T20:00:01.586007958+00:00 stderr F I0813 20:00:01.585984 1 tlsconfig.go:200] "Loaded serving 
cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:01.585965167 +0000 UTC))" 2025-08-13T20:00:01.597391783+00:00 stderr F I0813 20:00:01.597350 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:01.59730462 +0000 UTC))" 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670628 1 base_controller.go:73] Caches are synced for MutatingWebhookCABundleInjector 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670675 1 base_controller.go:110] Starting #1 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670687 1 base_controller.go:110] Starting #2 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670692 1 base_controller.go:110] Starting #3 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670703 1 base_controller.go:110] Starting #4 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.680986 1 base_controller.go:73] Caches are synced for APIServiceCABundleInjector 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681021 1 base_controller.go:110] Starting #1 worker of APIServiceCABundleInjector controller ... 
2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681031 1 base_controller.go:110] Starting #2 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681038 1 base_controller.go:110] Starting #3 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.691703231+00:00 stderr F I0813 20:00:01.691667 1 apiservice.go:62] updating apiservice v1.apps.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.703267051+00:00 stderr F I0813 20:00:01.703221 1 apiservice.go:62] updating apiservice v1.authorization.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.761934793+00:00 stderr F I0813 20:00:01.761658 1 base_controller.go:110] Starting #5 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783472 1 base_controller.go:110] Starting #4 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783523 1 base_controller.go:110] Starting #5 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783573 1 apiservice.go:62] updating apiservice v1.build.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783755 1 apiservice.go:62] updating apiservice v1.image.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783893 1 apiservice.go:62] updating apiservice v1.oauth.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.880091571+00:00 stderr F I0813 20:00:01.880026 1 base_controller.go:73] Caches are synced for ValidatingWebhookCABundleInjector 2025-08-13T20:00:01.880192074+00:00 stderr F I0813 20:00:01.880176 1 base_controller.go:110] Starting #1 worker of ValidatingWebhookCABundleInjector controller ... 
2025-08-13T20:00:01.880236895+00:00 stderr F I0813 20:00:01.880223 1 base_controller.go:110] Starting #2 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880268266+00:00 stderr F I0813 20:00:01.880256 1 base_controller.go:110] Starting #3 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880304517+00:00 stderr F I0813 20:00:01.880290 1 base_controller.go:110] Starting #4 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880334788+00:00 stderr F I0813 20:00:01.880323 1 base_controller.go:110] Starting #5 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880497083+00:00 stderr F I0813 20:00:01.880473 1 admissionwebhook.go:116] updating validatingwebhookconfiguration controlplanemachineset.machine.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.882019036+00:00 stderr F I0813 20:00:01.881994 1 admissionwebhook.go:116] updating validatingwebhookconfiguration multus.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.922162840+00:00 stderr F I0813 20:00:01.918431 1 base_controller.go:67] Waiting for caches to sync for LegacyVulnerableConfigMapCABundleInjector 2025-08-13T20:00:02.498433851+00:00 stderr F I0813 20:00:02.491383 1 apiservice.go:62] updating apiservice v1.project.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.498640 1 apiservice.go:62] updating apiservice v1.quota.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.501503 1 apiservice.go:62] updating apiservice v1.security.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.506265 1 apiservice.go:62] updating apiservice v1.route.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.878309293+00:00 stderr F E0813 20:00:02.873725 1 base_controller.go:266] 
"APIServiceCABundleInjector" controller failed to sync "v1.apps.openshift.io", err: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.apps.openshift.io": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.878309293+00:00 stderr F I0813 20:00:02.874328 1 apiservice.go:62] updating apiservice v1.template.openshift.io with the service signing CA bundle 2025-08-13T20:00:03.200411708+00:00 stderr F I0813 20:00:03.183130 1 apiservice.go:62] updating apiservice v1.user.openshift.io with the service signing CA bundle 2025-08-13T20:00:03.200411708+00:00 stderr F I0813 20:00:03.183506 1 apiservice.go:62] updating apiservice v1.apps.openshift.io with the service signing CA bundle 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292110 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:06.292061932 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292742 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:06.292719861 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292769 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:06.292749792 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292874 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:06.292826134 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292902 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292885976 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292927 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292909116 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292951 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292935147 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292974 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292957398 +0000 UTC))" 2025-08-13T20:00:06.293007149+00:00 stderr F I0813 20:00:06.292999 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:06.292980798 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293026 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.293007269 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293450 1 tlsconfig.go:200] "Loaded serving 
cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:06.293427291 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293893 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:06.293768051 +0000 UTC))" 2025-08-13T20:00:06.488525964+00:00 stderr F I0813 20:00:06.485459 1 base_controller.go:73] Caches are synced for ServiceServingCertController 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507060 1 base_controller.go:110] Starting #1 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507115 1 base_controller.go:110] Starting #2 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507121 1 base_controller.go:110] Starting #3 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507130 1 base_controller.go:110] Starting #4 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507135 1 base_controller.go:110] Starting #5 worker of ServiceServingCertController controller ... 
2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.486469 1 base_controller.go:73] Caches are synced for ServiceServingCertUpdateController 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518259 1 base_controller.go:110] Starting #1 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518276 1 base_controller.go:110] Starting #2 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518281 1 base_controller.go:110] Starting #3 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518290 1 base_controller.go:110] Starting #4 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518295 1 base_controller.go:110] Starting #5 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.548071772+00:00 stderr F I0813 20:00:06.547959 1 base_controller.go:73] Caches are synced for CRDCABundleInjector 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548118 1 base_controller.go:110] Starting #1 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548214 1 base_controller.go:110] Starting #2 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548221 1 base_controller.go:110] Starting #3 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548227 1 base_controller.go:110] Starting #4 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548231 1 base_controller.go:110] Starting #5 worker of CRDCABundleInjector controller ... 
2025-08-13T20:00:06.745229034+00:00 stderr F I0813 20:00:06.745082 1 crd.go:69] updating customresourcedefinition alertmanagerconfigs.monitoring.coreos.com conversion webhook config with the service signing CA bundle 2025-08-13T20:00:06.879052430+00:00 stderr F I0813 20:00:06.878991 1 crd.go:69] updating customresourcedefinition consoleplugins.console.openshift.io conversion webhook config with the service signing CA bundle 2025-08-13T20:00:11.997074793+00:00 stderr F I0813 20:00:11.996116 1 trace.go:236] Trace[2035768624]: "DeltaFIFO Pop Process" ID:openshift-authentication/kube-root-ca.crt,Depth:468,Reason:slow event handlers blocking the queue (13-Aug-2025 20:00:10.956) (total time: 1039ms): 2025-08-13T20:00:11.997074793+00:00 stderr F Trace[2035768624]: [1.039966012s] [1.039966012s] END 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232646 1 base_controller.go:73] Caches are synced for LegacyVulnerableConfigMapCABundleInjector 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232763 1 base_controller.go:110] Starting #1 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232822 1 base_controller.go:110] Starting #2 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232828 1 base_controller.go:110] Starting #3 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232855 1 base_controller.go:110] Starting #4 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232862 1 base_controller.go:110] Starting #5 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 
2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232933 1 base_controller.go:73] Caches are synced for ConfigMapCABundleInjector 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232940 1 base_controller.go:110] Starting #1 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232945 1 base_controller.go:110] Starting #2 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232952 1 base_controller.go:110] Starting #3 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232957 1 base_controller.go:110] Starting #4 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232962 1 base_controller.go:110] Starting #5 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233002 1 configmap.go:109] updating configmap default/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233395 1 configmap.go:109] updating configmap hostpath-provisioner/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233493 1 configmap.go:109] updating configmap kube-node-lease/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233767 1 configmap.go:109] updating configmap kube-public/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233979 1 configmap.go:109] updating configmap kube-system/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.307510945+00:00 stderr F I0813 20:00:12.307368 1 configmap.go:109] updating configmap openshift-apiserver-operator/openshift-service-ca.crt with the service 
signing CA bundle 2025-08-13T20:00:12.312396304+00:00 stderr F I0813 20:00:12.312358 1 configmap.go:109] updating configmap openshift-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.345914 1 configmap.go:109] updating configmap openshift-authentication-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.346312 1 configmap.go:109] updating configmap openshift-authentication-operator/service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.346883 1 configmap.go:109] updating configmap openshift-authentication/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.363920953+00:00 stderr F I0813 20:00:12.359612 1 configmap.go:109] updating configmap openshift-authentication/v4-0-config-system-service-ca with the service signing CA bundle 2025-08-13T20:00:12.390245744+00:00 stderr F I0813 20:00:12.390087 1 configmap.go:109] updating configmap openshift-cloud-network-config-controller/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.653877521+00:00 stderr F I0813 20:00:12.652872 1 configmap.go:109] updating configmap openshift-cloud-platform-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.821553 1 configmap.go:109] updating configmap openshift-cluster-machine-approver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.822059 1 configmap.go:109] updating configmap openshift-cluster-samples-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.835767 1 configmap.go:109] updating configmap openshift-cluster-storage-operator/openshift-service-ca.crt with the service signing CA bundle 
2025-08-13T20:00:12.876028596+00:00 stderr F I0813 20:00:12.873745 1 configmap.go:109] updating configmap openshift-cluster-version/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.012099386+00:00 stderr F I0813 20:00:13.003114 1 configmap.go:109] updating configmap openshift-config-managed/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.173388725+00:00 stderr F I0813 20:00:13.173331 1 configmap.go:109] updating configmap openshift-config-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.377111194+00:00 stderr F I0813 20:00:13.355691 1 configmap.go:109] updating configmap openshift-config/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.601168223+00:00 stderr F I0813 20:00:13.600240 1 configmap.go:109] updating configmap openshift-console-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.812380335+00:00 stderr F I0813 20:00:13.806721 1 configmap.go:109] updating configmap openshift-console-user-settings/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:14.609907106+00:00 stderr F I0813 20:00:14.607658 1 configmap.go:109] updating configmap openshift-console/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:14.727356554+00:00 stderr F I0813 20:00:14.724130 1 configmap.go:109] updating configmap openshift-console/service-ca with the service signing CA bundle 2025-08-13T20:00:15.440417256+00:00 stderr F I0813 20:00:15.439282 1 configmap.go:109] updating configmap openshift-controller-manager-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:15.515047674+00:00 stderr F I0813 20:00:15.513957 1 configmap.go:109] updating configmap openshift-controller-manager/openshift-service-ca with the service signing CA bundle 2025-08-13T20:00:16.540070781+00:00 stderr F I0813 20:00:16.539585 1 configmap.go:109] updating 
configmap openshift-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.540382920+00:00 stderr F I0813 20:00:16.540302 1 configmap.go:109] updating configmap openshift-dns-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.560032700+00:00 stderr F I0813 20:00:16.559963 1 configmap.go:109] updating configmap openshift-dns/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.560454953+00:00 stderr F I0813 20:00:16.560428 1 configmap.go:109] updating configmap openshift-etcd-operator/etcd-service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:16.687434013+00:00 stderr F I0813 20:00:16.680442 1 configmap.go:109] updating configmap openshift-etcd-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.762941806+00:00 stderr F I0813 20:00:16.750734 1 configmap.go:109] updating configmap openshift-etcd/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792458758+00:00 stderr F I0813 20:00:16.792392 1 configmap.go:109] updating configmap openshift-host-network/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792644173+00:00 stderr F I0813 20:00:16.792622 1 configmap.go:109] updating configmap openshift-image-registry/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792730676+00:00 stderr F I0813 20:00:16.792711 1 configmap.go:109] updating configmap openshift-image-registry/serviceca with the service signing CA bundle 2025-08-13T20:00:16.796090031+00:00 stderr F I0813 20:00:16.796058 1 configmap.go:109] updating configmap openshift-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.818726757+00:00 stderr F I0813 20:00:16.818316 1 configmap.go:109] updating configmap openshift-ingress-canary/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.049716543+00:00 stderr 
F I0813 20:00:17.049660 1 configmap.go:109] updating configmap openshift-ingress-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.317574771+00:00 stderr F I0813 20:00:17.315887 1 configmap.go:109] updating configmap openshift-ingress/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.455584107+00:00 stderr F I0813 20:00:17.452246 1 configmap.go:109] updating configmap openshift-ingress/service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:17.570884264+00:00 stderr F I0813 20:00:17.570709 1 configmap.go:109] updating configmap openshift-kni-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.955957004+00:00 stderr F I0813 20:00:17.953019 1 configmap.go:109] updating configmap openshift-kube-apiserver-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.955957004+00:00 stderr F I0813 20:00:17.953016 1 configmap.go:109] updating configmap openshift-kube-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.188114034+00:00 stderr F I0813 20:00:18.187887 1 configmap.go:109] updating configmap openshift-kube-controller-manager-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.522050445+00:00 stderr F I0813 20:00:18.521633 1 configmap.go:109] updating configmap openshift-kube-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.825947690+00:00 stderr F I0813 20:00:18.825317 1 configmap.go:109] updating configmap openshift-kube-scheduler-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.933481736+00:00 stderr F I0813 20:00:18.886131 1 configmap.go:109] updating configmap openshift-kube-scheduler/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.208135938+00:00 stderr F I0813 20:00:19.207619 1 configmap.go:109] updating configmap 
openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.587523646+00:00 stderr F I0813 20:00:19.586045 1 configmap.go:109] updating configmap openshift-kube-storage-version-migrator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.606589159+00:00 stderr F I0813 20:00:19.606153 1 configmap.go:109] updating configmap openshift-machine-api/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.009422926+00:00 stderr F I0813 20:00:20.002186 1 configmap.go:109] updating configmap openshift-machine-config-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.009422926+00:00 stderr F I0813 20:00:20.002507 1 configmap.go:109] updating configmap openshift-marketplace/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.895244294+00:00 stderr F I0813 20:00:20.890157 1 configmap.go:109] updating configmap openshift-monitoring/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.895421089+00:00 stderr F I0813 20:00:20.895375 1 request.go:697] Waited for 1.250420235s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/api/v1/namespaces/openshift-machine-api/configmaps/openshift-service-ca.crt 2025-08-13T20:00:21.179931351+00:00 stderr F I0813 20:00:21.005663 1 configmap.go:109] updating configmap openshift-multus/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.188084174+00:00 stderr F I0813 20:00:21.016159 1 configmap.go:109] updating configmap openshift-network-node-identity/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.219745687+00:00 stderr F I0813 20:00:21.016256 1 configmap.go:109] updating configmap openshift-network-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.220406856+00:00 stderr F I0813 20:00:21.014189 1 
configmap.go:109] updating configmap openshift-network-diagnostics/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.986147588+00:00 stderr F I0813 20:00:21.974352 1 configmap.go:109] updating configmap openshift-node/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.995887216+00:00 stderr F I0813 20:00:21.993021 1 configmap.go:109] updating configmap openshift-nutanix-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042403 1 configmap.go:109] updating configmap openshift-oauth-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042635 1 configmap.go:109] updating configmap openshift-openstack-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042704 1 configmap.go:109] updating configmap openshift-operator-lifecycle-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.133694216+00:00 stderr F I0813 20:00:22.130556 1 configmap.go:109] updating configmap openshift-operators/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.139828371+00:00 stderr F I0813 20:00:22.138307 1 configmap.go:109] updating configmap openshift-ovirt-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.514936557+00:00 stderr F I0813 20:00:22.506140 1 configmap.go:109] updating configmap openshift-ovn-kubernetes/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.620769205+00:00 stderr F I0813 20:00:22.591045 1 configmap.go:109] updating configmap openshift-route-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.145944782+00:00 stderr F I0813 20:00:23.144593 1 configmap.go:109] updating configmap 
openshift-service-ca-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.185630594+00:00 stderr F I0813 20:00:23.185527 1 configmap.go:109] updating configmap openshift-service-ca/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.222596558+00:00 stderr F I0813 20:00:23.222544 1 configmap.go:109] updating configmap openshift-user-workload-monitoring/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.344223196+00:00 stderr F I0813 20:00:23.337740 1 configmap.go:109] updating configmap openshift-vsphere-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.801506735+00:00 stderr F I0813 20:00:23.795233 1 configmap.go:109] updating configmap openshift/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.991128 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.990982871 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.991967 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.991945759 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992043 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.99197661 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992090 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.992072732 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992118 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992102963 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992147 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992130494 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992191 
1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992179605 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992209 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992196796 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992227 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.992214366 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992253 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.992242747 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992278 1 tlsconfig.go:178] "Loaded 
client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992259978 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992932 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:59.992908786 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.993287 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:59.993263116 +0000 UTC))" 2025-08-13T20:03:15.140543818+00:00 stderr F E0813 20:03:15.139505 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.507620624+00:00 stderr F E0813 20:04:15.506936 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:05:15.192296423+00:00 stderr F E0813 20:05:15.190409 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.387910163+00:00 stderr F I0813 20:42:36.387265 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388108869+00:00 stderr F I0813 20:42:36.387996 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.386245 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.386322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.396159 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.396305 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429408 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429654 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429735 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429929 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.745388331+00:00 stderr F I0813 20:42:41.744466 
1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.745575776+00:00 stderr F I0813 20:42:41.745476 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.745575776+00:00 stderr F I0813 20:42:41.745524 1 base_controller.go:172] Shutting down ConfigMapCABundleInjector ... 2025-08-13T20:42:41.745592667+00:00 stderr F I0813 20:42:41.745571 1 base_controller.go:172] Shutting down LegacyVulnerableConfigMapCABundleInjector ... 2025-08-13T20:42:41.745592667+00:00 stderr F I0813 20:42:41.745575 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.745602877+00:00 stderr F I0813 20:42:41.745590 1 base_controller.go:172] Shutting down CRDCABundleInjector ... 2025-08-13T20:42:41.745602877+00:00 stderr F I0813 20:42:41.745594 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:41.745647888+00:00 stderr F I0813 20:42:41.745613 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:41.745647888+00:00 stderr F I0813 20:42:41.745637 1 base_controller.go:172] Shutting down ServiceServingCertUpdateController ... 2025-08-13T20:42:41.745754302+00:00 stderr F I0813 20:42:41.745684 1 base_controller.go:172] Shutting down ServiceServingCertController ... 2025-08-13T20:42:41.745754302+00:00 stderr F I0813 20:42:41.745701 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:41.745857274+00:00 stderr F I0813 20:42:41.745770 1 base_controller.go:172] Shutting down ValidatingWebhookCABundleInjector ... 
2025-08-13T20:42:41.745857274+00:00 stderr F I0813 20:42:41.745820 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:41.745876575+00:00 stderr F I0813 20:42:41.745860 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:41.745876575+00:00 stderr F I0813 20:42:41.745868 1 base_controller.go:172] Shutting down APIServiceCABundleInjector ... 2025-08-13T20:42:41.745887015+00:00 stderr F I0813 20:42:41.745876 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:41.745897006+00:00 stderr F I0813 20:42:41.745886 1 base_controller.go:172] Shutting down MutatingWebhookCABundleInjector ... 2025-08-13T20:42:41.745951497+00:00 stderr F I0813 20:42:41.745904 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.745972808+00:00 stderr F I0813 20:42:41.745951 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.745972808+00:00 stderr F I0813 20:42:41.745961 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.745969 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.745980 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 
2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.746006 1 base_controller.go:104] All ConfigMapCABundleInjector workers have been terminated 2025-08-13T20:42:41.746335778+00:00 stderr F I0813 20:42:41.746152 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:41.746543974+00:00 stderr F I0813 20:42:41.746486 1 secure_serving.go:255] Stopped listening on [::]:8443 2025-08-13T20:42:41.746559525+00:00 stderr F I0813 20:42:41.746540 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:41.746569765+00:00 stderr F I0813 20:42:41.746559 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:41.746605376+00:00 stderr F I0813 20:42:41.746584 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:41.746689328+00:00 stderr F I0813 20:42:41.746642 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" 2025-08-13T20:42:41.746870454+00:00 stderr F I0813 20:42:41.746834 1 base_controller.go:114] Shutting down worker of ServiceServingCertController controller ... 2025-08-13T20:42:41.746870454+00:00 stderr F I0813 20:42:41.746864 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746938256+00:00 stderr F I0813 20:42:41.746906 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746951286+00:00 stderr F I0813 20:42:41.746936 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 
2025-08-13T20:42:41.746951286+00:00 stderr F I0813 20:42:41.746945 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746962886+00:00 stderr F I0813 20:42:41.746952 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746974507+00:00 stderr F I0813 20:42:41.746962 1 base_controller.go:104] All LegacyVulnerableConfigMapCABundleInjector workers have been terminated 2025-08-13T20:42:41.747098780+00:00 stderr F I0813 20:42:41.747030 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting 2025-08-13T20:42:41.747098780+00:00 stderr F I0813 20:42:41.747076 1 builder.go:302] server exited 2025-08-13T20:42:41.747499952+00:00 stderr F E0813 20:42:41.747390 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.747646846+00:00 stderr F W0813 20:42:41.747540 1 leaderelection.go:85] leader election lost 2025-08-13T20:42:41.747646846+00:00 stderr F I0813 20:42:41.747611 1 base_controller.go:114] Shutting down worker of ServiceServingCertController controller ... 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.log
2025-08-13T20:01:14.502314102+00:00 stdout F Copying system trust bundle
2025-08-13T20:01:15.583350346+00:00 stderr F I0813 20:01:15.577320 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key"
2025-08-13T20:01:15.583350346+00:00 stderr F I0813 20:01:15.579292 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing"
2025-08-13T20:01:16.424418308+00:00 stderr F 
I0813 20:01:16.424200 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T20:01:18.860923183+00:00 stderr F I0813 20:01:18.827448 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302331 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302376 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302397 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302403 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:01:20.554633356+00:00 stderr F I0813 20:01:20.533716 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.554633356+00:00 stderr F I0813 20:01:20.534993 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951690 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951755 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951914 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951936 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951962 1 configmap_cafile_content.go:202] "Starting controller" 
name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951970 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.953060 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-08-13T20:01:20.958902234+00:00 stderr F I0813 20:01:20.956315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:20.952733968 +0000 UTC))" 2025-08-13T20:01:20.958902234+00:00 stderr F I0813 20:01:20.956749 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 20:01:20.956695071 +0000 UTC))" 2025-08-13T20:01:20.965157162+00:00 stderr F I0813 20:01:20.960488 1 dynamic_serving_content.go:132] "Starting controller" 
name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994375 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:20.994131168 +0000 UTC))" 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994452 1 secure_serving.go:213] Serving securely on [::]:6443 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994490 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994509 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:20.999536142+00:00 stderr F I0813 20:01:20.997020 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.003034432+00:00 stderr F I0813 20:01:21.001406 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.017450403+00:00 stderr F I0813 20:01:21.017345 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.017450403+00:00 stderr F I0813 20:01:21.017361 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.094297944+00:00 stderr F I0813 20:01:21.094024 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:21.094297944+00:00 stderr F I0813 20:01:21.094195 1 shared_informer.go:318] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:21.094297944+00:00 stderr F I0813 20:01:21.094252 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.094431 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.094400877 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.094820 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:21.094748977 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095175 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 20:01:21.095146758 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 
20:01:21.095428 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:21.095416776 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095927 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:21.095881159 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095953 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:21.095938851 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096000 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.095980842 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096017 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.096006323 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096034 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096022563 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096054 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096041484 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096070 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096058844 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096087 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096075395 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096137 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:21.096092525 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096164 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:21.096147627 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096183 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096171578 +0000 UTC))" 2025-08-13T20:01:21.117871356+00:00 stderr F I0813 20:01:21.115991 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:21.115950232 +0000 UTC))" 2025-08-13T20:01:21.117871356+00:00 stderr F I0813 20:01:21.116413 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 20:01:21.116373694 +0000 UTC))" 2025-08-13T20:01:21.118922666+00:00 stderr F I0813 20:01:21.118737 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:21.118719181 +0000 UTC))" 2025-08-13T20:06:02.194718284+00:00 stderr F I0813 20:06:02.193134 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:00.871143946+00:00 stderr F I0813 20:07:00.868133 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:06.442934193+00:00 stderr F I0813 20:07:06.442399 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:35.102717324+00:00 stderr F I0813 20:09:35.102399 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:36.885284120+00:00 stderr F I0813 20:09:36.885208 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:45.285875742+00:00 stderr F I0813 20:09:45.285584 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:42:36.418082203+00:00 stderr F I0813 20:42:36.411925 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418082203+00:00 stderr F I0813 20:42:36.415480 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418442384+00:00 stderr F I0813 20:42:36.418365 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418907447+00:00 stderr F I0813 20:42:36.409926 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.919276434+00:00 stderr F I0813 20:42:40.918383 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.919544461+00:00 stderr F I0813 20:42:40.918425 1 genericapiserver.go:689] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.919602463+00:00 stderr F I0813 20:42:40.919510 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks has completed 2025-08-13T20:42:40.925908115+00:00 stderr F I0813 20:42:40.921108 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' 
Received signal to terminate, becoming unready, but keeping serving 2025-08-13T20:42:40.927359407+00:00 stderr F I0813 20:42:40.927288 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished 2025-08-13T20:42:40.927359407+00:00 stderr F I0813 20:42:40.927331 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:40.927452619+00:00 stderr F I0813 20:42:40.927395 1 genericapiserver.go:612] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:40.927466860+00:00 stderr F I0813 20:42:40.927451 1 genericapiserver.go:647] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.928975223+00:00 stderr F I0813 20:42:40.928936 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-08-13T20:42:40.928996954+00:00 stderr F I0813 20:42:40.928977 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening 2025-08-13T20:42:40.930019173+00:00 stderr F I0813 20:42:40.929742 1 genericapiserver.go:638] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930071 1 genericapiserver.go:679] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930094 1 genericapiserver.go:703] "[graceful-termination] audit backend 
shutdown completed" 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930104 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished 2025-08-13T20:42:40.930631431+00:00 stderr F I0813 20:42:40.930547 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.931352372+00:00 stderr F I0813 20:42:40.931206 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.931506586+00:00 stderr F I0813 20:42:40.931453 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.931543027+00:00 stderr F I0813 20:42:40.931469 1 secure_serving.go:258] Stopped listening on [::]:6443 2025-08-13T20:42:40.931606989+00:00 stderr F I0813 20:42:40.931589 1 genericapiserver.go:595] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.931656681+00:00 stderr F I0813 20:42:40.931644 1 genericapiserver.go:711] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.931714812+00:00 stderr F I0813 20:42:40.931688 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationGracefulTerminationFinished' All pending requests processed 2025-08-13T20:42:40.932471814+00:00 stderr F I0813 20:42:40.932398 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.932471814+00:00 stderr F I0813 20:42:40.932425 1 dynamic_serving_content.go:146] "Shutting down controller" 
name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-08-13T20:42:40.933570996+00:00 stderr F I0813 20:42:40.933454 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.log

2026-01-20T10:49:36.377354420+00:00 stdout F Copying system trust bundle 2026-01-20T10:49:37.082271741+00:00 stderr F I0120 10:49:37.079396 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2026-01-20T10:49:37.082271741+00:00 stderr F I0120 10:49:37.080182 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2026-01-20T10:49:38.810227624+00:00 stderr F I0120 10:49:38.809886 1 audit.go:340] Using audit backend: ignoreErrors 2026-01-20T10:49:39.142643079+00:00 stderr F I0120 10:49:39.142546 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2026-01-20T10:49:39.156304265+00:00 stderr F I0120 10:49:39.155046 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2026-01-20T10:49:39.156304265+00:00 stderr F I0120 10:49:39.155094 1 maxinflight.go:145] "Initialized mutatingChan" len=200 
2026-01-20T10:49:39.156304265+00:00 stderr F I0120 10:49:39.155108 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2026-01-20T10:49:39.156304265+00:00 stderr F I0120 10:49:39.155114 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2026-01-20T10:49:39.163928757+00:00 stderr F I0120 10:49:39.163880 1 secure_serving.go:57] Forcing use of http/1.1 only 2026-01-20T10:49:39.163997620+00:00 stderr F I0120 10:49:39.163942 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2026-01-20T10:49:39.169787706+00:00 stderr F I0120 10:49:39.169737 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2026-01-20T10:49:39.169957101+00:00 stderr F I0120 10:49:39.169934 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2026-01-20T10:49:39.170195889+00:00 stderr F I0120 10:49:39.170167 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2026-01-20T10:49:39.170195889+00:00 stderr F I0120 10:49:39.170188 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:39.170254240+00:00 stderr F I0120 10:49:39.170236 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2026-01-20T10:49:39.170254240+00:00 stderr F I0120 10:49:39.170245 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:39.170764216+00:00 stderr F I0120 10:49:39.170743 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 
certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2026-01-20 10:49:39.170697744 +0000 UTC))" 2026-01-20T10:49:39.170936141+00:00 stderr F I0120 10:49:39.170908 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2026-01-20T10:49:39.171197959+00:00 stderr F I0120 10:49:39.171180 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2026-01-20 10:49:39.171158598 +0000 UTC))" 2026-01-20T10:49:39.171443806+00:00 stderr F I0120 10:49:39.171415 1 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2026-01-20T10:49:39.171562840+00:00 stderr F I0120 10:49:39.171550 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:49:39.171533679 +0000 UTC))" 2026-01-20T10:49:39.171693854+00:00 stderr F I0120 10:49:39.171604 1 secure_serving.go:213] Serving 
securely on [::]:6443 2026-01-20T10:49:39.171746575+00:00 stderr F I0120 10:49:39.171735 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2026-01-20T10:49:39.171775386+00:00 stderr F I0120 10:49:39.171739 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2026-01-20T10:49:39.176295244+00:00 stderr F W0120 10:49:39.175998 1 reflector.go:539] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2026-01-20T10:49:39.198819740+00:00 stderr F E0120 10:49:39.198745 1 reflector.go:147] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: Failed to watch *v1.Group: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2026-01-20T10:49:39.199123119+00:00 stderr F I0120 10:49:39.199028 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.199412208+00:00 stderr F I0120 10:49:39.199213 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.199524131+00:00 stderr F I0120 10:49:39.199498 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:49:39.272546866+00:00 stderr F I0120 10:49:39.271171 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2026-01-20T10:49:39.272546866+00:00 stderr F I0120 10:49:39.271214 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2026-01-20T10:49:39.272546866+00:00 stderr F I0120 10:49:39.271244 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2026-01-20T10:49:39.272546866+00:00 stderr F I0120 10:49:39.271480 1 
tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.271449553 +0000 UTC))" 2026-01-20T10:49:39.272546866+00:00 stderr F I0120 10:49:39.272005 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2026-01-20 10:49:39.271764932 +0000 UTC))" 2026-01-20T10:49:39.272546866+00:00 stderr F I0120 10:49:39.272299 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2026-01-20 10:49:39.272282978 +0000 UTC))" 2026-01-20T10:49:39.272602547+00:00 stderr F I0120 10:49:39.272546 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 
10:49:39.272535305 +0000 UTC))" 2026-01-20T10:49:39.272712891+00:00 stderr F I0120 10:49:39.272691 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:49:39.272676669 +0000 UTC))" 2026-01-20T10:49:39.272721771+00:00 stderr F I0120 10:49:39.272715 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:49:39.27270465 +0000 UTC))" 2026-01-20T10:49:39.272747432+00:00 stderr F I0120 10:49:39.272730 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:39.272719991 +0000 UTC))" 2026-01-20T10:49:39.272755392+00:00 stderr F I0120 10:49:39.272749 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:49:39.272739161 +0000 UTC))" 
2026-01-20T10:49:39.272772282+00:00 stderr F I0120 10:49:39.272763 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.272753502 +0000 UTC))" 2026-01-20T10:49:39.272800773+00:00 stderr F I0120 10:49:39.272779 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.272769952 +0000 UTC))" 2026-01-20T10:49:39.272813304+00:00 stderr F I0120 10:49:39.272798 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.272787003 +0000 UTC))" 2026-01-20T10:49:39.272824114+00:00 stderr F I0120 10:49:39.272817 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 
10:49:39.272804183 +0000 UTC))" 2026-01-20T10:49:39.272868745+00:00 stderr F I0120 10:49:39.272835 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:49:39.272822694 +0000 UTC))" 2026-01-20T10:49:39.276536628+00:00 stderr F I0120 10:49:39.276508 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:49:39.276481226 +0000 UTC))" 2026-01-20T10:49:39.276569659+00:00 stderr F I0120 10:49:39.276549 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:49:39.276525357 +0000 UTC))" 2026-01-20T10:49:39.276897579+00:00 stderr F I0120 10:49:39.276858 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] 
issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2026-01-20 10:49:39.276841387 +0000 UTC))" 2026-01-20T10:49:39.277700133+00:00 stderr F I0120 10:49:39.277177 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2026-01-20 10:49:39.277163107 +0000 UTC))" 2026-01-20T10:49:39.277700133+00:00 stderr F I0120 10:49:39.277449 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:49:39.277437625 +0000 UTC))" 2026-01-20T10:49:40.634107988+00:00 stderr F W0120 10:49:40.631443 1 reflector.go:539] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2026-01-20T10:49:40.634107988+00:00 stderr F E0120 10:49:40.631474 1 reflector.go:147] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: Failed to watch *v1.Group: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2026-01-20T10:49:43.268318925+00:00 stderr F I0120 10:49:43.267453 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107470 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2026-01-20 10:56:07.107408797 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107527 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2026-01-20 10:56:07.107504139 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107564 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.10753728 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107594 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2026-01-20 10:56:07.107572541 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107626 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107603602 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107659 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107634743 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107688 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107667074 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107719 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107697065 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107748 1 tlsconfig.go:178] "Loaded 
client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.107726735 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107783 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2026-01-20 10:56:07.107760986 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107815 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1768906554\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2026-01-20 10:55:54 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2026-01-20 10:56:07.107792207 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.107844 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2026-01-20 10:56:07.107822588 +0000 UTC))" 2026-01-20T10:56:07.108702832+00:00 stderr F I0120 10:56:07.108462 1 tlsconfig.go:200] 
"Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2026-01-20 10:56:07.108427445 +0000 UTC))" 2026-01-20T10:56:07.109249327+00:00 stderr F I0120 10:56:07.109009 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2026-01-20 10:56:07.10898313 +0000 UTC))" 2026-01-20T10:56:07.113146021+00:00 stderr F I0120 10:56:07.113105 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1768906178\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1768906177\" (2026-01-20 09:49:37 +0000 UTC to 2027-01-20 09:49:37 +0000 UTC (now=2026-01-20 10:56:07.113032548 +0000 UTC))" 2026-01-20T10:58:19.463788275+00:00 stderr F I0120 10:58:19.463717 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log:

2025-08-13T19:59:04.519182976+00:00 stderr F W0813 19:59:04.505970 1 deprecated.go:66] 2025-08-13T19:59:04.519182976+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:04.519182976+00:00 stderr F 2025-08-13T19:59:04.519182976+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:04.519182976+00:00 stderr F 2025-08-13T19:59:04.519182976+00:00 stderr F =============================================== 2025-08-13T19:59:04.519182976+00:00 stderr F 2025-08-13T19:59:04.521923144+00:00 stderr F I0813 19:59:04.521868 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:04.522683976+00:00 stderr F I0813 19:59:04.522582 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:04.531614440+00:00 stderr F I0813 19:59:04.530405 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443 2025-08-13T19:59:04.536668264+00:00 stderr F I0813 19:59:04.535283 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443 2025-08-13T20:42:43.954344275+00:00 stderr F I0813 20:42:43.954103 1 kube-rbac-proxy.go:493] received interrupt, shutting down

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log:

2026-01-20T10:49:34.827517304+00:00 stderr F W0120 10:49:34.826927 1 deprecated.go:66] 2026-01-20T10:49:34.827517304+00:00 stderr F ==== Removed Flag Warning ====================== 2026-01-20T10:49:34.827517304+00:00 stderr F 2026-01-20T10:49:34.827517304+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2026-01-20T10:49:34.827517304+00:00 stderr F 2026-01-20T10:49:34.827517304+00:00 stderr F =============================================== 2026-01-20T10:49:34.827517304+00:00 stderr F 2026-01-20T10:49:34.827671368+00:00 stderr F I0120 10:49:34.827616 1 kube-rbac-proxy.go:233] Valid token audiences: 2026-01-20T10:49:34.827671368+00:00 stderr F I0120 10:49:34.827658 1 kube-rbac-proxy.go:347] Reading certificate files 2026-01-20T10:49:34.828527124+00:00 stderr F I0120 10:49:34.828180 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443 2026-01-20T10:49:34.834709332+00:00 stderr F I0120 10:49:34.834667 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log:

2025-08-13T19:59:47.427869073+00:00 stderr F 2025-08-13T19:59:47Z INFO setup starting manager 2025-08-13T19:59:47.581323448+00:00 stderr F 2025-08-13T19:59:47Z INFO controller-runtime.metrics Starting metrics server 2025-08-13T19:59:47.581953346+00:00 stderr F 2025-08-13T19:59:47Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":9090", "secure": false}
2025-08-13T19:59:47.596895631+00:00 stderr F 2025-08-13T19:59:47Z INFO starting server {"kind": "pprof", "addr": "[::]:6060"} 2025-08-13T19:59:47.596895631+00:00 stderr F 2025-08-13T19:59:47Z INFO starting server {"kind": "health probe", "addr": "[::]:8080"} 2025-08-13T19:59:47.895817023+00:00 stderr F I0813 19:59:47.761307 1 leaderelection.go:250] attempting to acquire leader lease openshift-operator-lifecycle-manager/packageserver-controller-lock... 2025-08-13T19:59:48.598371620+00:00 stderr F I0813 19:59:48.595508 1 leaderelection.go:260] successfully acquired lease openshift-operator-lifecycle-manager/packageserver-controller-lock 2025-08-13T19:59:48.599217465+00:00 stderr F 2025-08-13T19:59:48Z DEBUG events package-server-manager-84d578d794-jw7r2_e40d9bf0-eba2-484b-a0df-0a92c0213730 became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"openshift-operator-lifecycle-manager","name":"packageserver-controller-lock","uid":"0beb9bb7-cfd9-4760-98f3-f0c893f5cf42","apiVersion":"coordination.k8s.io/v1","resourceVersion":"28430"}, "reason": "LeaderElection"} 2025-08-13T19:59:48.600629435+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:48.600653725+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1.Infrastructure"} 2025-08-13T19:59:48.600653725+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting Controller {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"} 2025-08-13T19:59:50.305095201+00:00 stderr F 2025-08-13T19:59:50Z INFO controllers.packageserver requeueing the packageserver deployment after encountering 
infrastructure event {"infrastructure": "cluster"} 2025-08-13T19:59:51.209164813+00:00 stderr F 2025-08-13T19:59:51Z INFO Starting workers {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "worker count": 1} 2025-08-13T19:59:51.543424412+00:00 stderr F 2025-08-13T19:59:51Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T19:59:51.543424412+00:00 stderr F 2025-08-13T19:59:51Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:52.151692491+00:00 stderr F 2025-08-13T19:59:52Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:52.151692491+00:00 stderr F 2025-08-13T19:59:52Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T19:59:53.088333270+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T19:59:53.088333270+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:53.390313498+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T19:59:53.390313498+00:00 stderr F 2025-08-13T19:59:53Z 
INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:53.390534724+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:53.390534724+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T19:59:53.534931020+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T19:59:53.534931020+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:03:01.319955058+00:00 stderr F E0813 20:03:01.318734 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:01.394983163+00:00 stderr F E0813 20:04:01.393330 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:01.342585258+00:00 stderr F E0813 20:05:01.341321 1 leaderelection.go:332] error retrieving resource lock 
openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:29.332357546+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T20:06:29.332357546+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:29.340078067+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:29.360067510+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T20:06:29.387128505+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T20:06:29.387128505+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver requeueing the packageserver deployment after 
encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T20:06:41.194134500+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T20:06:41.194134500+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:09:45.228025353+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T20:09:45.228025353+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver handling current request {"csv": 
{"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T20:09:45.367941865+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T20:09:45.368022437+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:10:16.711473839+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T20:10:16.711690345+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:10:16.712255871+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:10:16.723341429+00:00 stderr F 
2025-08-13T20:10:16Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T20:10:16.745037671+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T20:10:16.745037671+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:42:42.894471130+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for non leader election runnables 2025-08-13T20:42:42.894471130+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for leader election runnables 2025-08-13T20:42:42.901418100+00:00 stderr F 2025-08-13T20:42:42Z INFO Shutdown signal received, waiting for all workers to finish {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"} 2025-08-13T20:42:42.901418100+00:00 stderr F 2025-08-13T20:42:42Z INFO All workers finished {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"} 2025-08-13T20:42:42.902414669+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for caches 2025-08-13T20:42:42.910454040+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for webhooks 2025-08-13T20:42:42.910542823+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for HTTP servers 2025-08-13T20:42:42.913641892+00:00 stderr F 2025-08-13T20:42:42Z INFO shutting down server {"kind": "health probe", "addr": "[::]:8080"} 2025-08-13T20:42:42.913984912+00:00 stderr F 2025-08-13T20:42:42Z INFO shutting down server {"kind": "pprof", "addr": "[::]:6060"} 2025-08-13T20:42:42.915348912+00:00 stderr F 
2025-08-13T20:42:42Z INFO controller-runtime.metrics Shutting down metrics server with timeout of 1 minute

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.log:

2026-01-20T10:49:38.766122610+00:00 stderr F 2026-01-20T10:49:38Z INFO setup starting manager 2026-01-20T10:49:38.774118974+00:00 stderr F 2026-01-20T10:49:38Z INFO controller-runtime.metrics Starting metrics server 2026-01-20T10:49:38.774146944+00:00 stderr F 2026-01-20T10:49:38Z INFO starting server {"kind": "pprof", "addr": "[::]:6060"} 2026-01-20T10:49:38.774795285+00:00 stderr F 2026-01-20T10:49:38Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":9090", "secure": false} 2026-01-20T10:49:38.778720154+00:00 stderr F 2026-01-20T10:49:38Z INFO starting server {"kind": "health probe", "addr": "[::]:8080"} 2026-01-20T10:49:38.849975664+00:00 stderr F I0120 10:49:38.849886 1 leaderelection.go:250] attempting to acquire leader lease openshift-operator-lifecycle-manager/packageserver-controller-lock...
2026-01-20T10:54:20.536188241+00:00 stderr F I0120 10:54:20.535280 1 leaderelection.go:260] successfully acquired lease openshift-operator-lifecycle-manager/packageserver-controller-lock 2026-01-20T10:54:20.544383800+00:00 stderr F 2026-01-20T10:54:20Z DEBUG events package-server-manager-84d578d794-jw7r2_ee5e479b-f62f-4dce-b421-9ae18cb2c58c became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"openshift-operator-lifecycle-manager","name":"packageserver-controller-lock","uid":"0beb9bb7-cfd9-4760-98f3-f0c893f5cf42","apiVersion":"coordination.k8s.io/v1","resourceVersion":"41881"}, "reason": "LeaderElection"} 2026-01-20T10:54:20.544383800+00:00 stderr F 2026-01-20T10:54:20Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1alpha1.ClusterServiceVersion"} 2026-01-20T10:54:20.544383800+00:00 stderr F 2026-01-20T10:54:20Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1.Infrastructure"} 2026-01-20T10:54:20.544383800+00:00 stderr F 2026-01-20T10:54:20Z INFO Starting Controller {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"} 2026-01-20T10:54:20.674553059+00:00 stderr F 2026-01-20T10:54:20Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2026-01-20T10:54:20.674553059+00:00 stderr F 2026-01-20T10:54:20Z INFO Starting workers {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "worker count": 1} 2026-01-20T10:54:20.675121604+00:00 stderr F 2026-01-20T10:54:20Z INFO controllers.packageserver handling current request {"csv": 
{"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2026-01-20T10:54:20.675202926+00:00 stderr F 2026-01-20T10:54:20Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2026-01-20T10:54:20.980943868+00:00 stderr F 2026-01-20T10:54:20Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2026-01-20T10:54:20.981576204+00:00 stderr F 2026-01-20T10:54:20Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2026-01-20T10:54:21.031773282+00:00 stderr F 2026-01-20T10:54:21Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2026-01-20T10:54:21.031773282+00:00 stderr F 2026-01-20T10:54:21Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2026-01-20T10:57:20.668691821+00:00 stderr F E0120 10:57:20.668028 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused 2026-01-20T10:58:18.867916169+00:00 stderr F 2026-01-20T10:58:18Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2026-01-20T10:58:18.867916169+00:00 stderr F 2026-01-20T10:58:18Z INFO 
controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2026-01-20T10:58:18.867916169+00:00 stderr F 2026-01-20T10:58:18Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2026-01-20T10:58:18.868191966+00:00 stderr F 2026-01-20T10:58:18Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2026-01-20T10:58:18.890139204+00:00 stderr F 2026-01-20T10:58:18Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2026-01-20T10:58:18.890139204+00:00 stderr F 2026-01-20T10:58:18Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/registry/0.log:

2026-01-20T10:55:17.582447800+00:00 stderr F time="2026-01-20T10:55:17.582229054Z" level=info msg="start registry" distribution_version=v3.0.0+unknown go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" openshift_version=4.16.0-202406131906.p0.g58a613b.assembly.stream.el9-58a613b 2026-01-20T10:55:17.582728647+00:00 stderr F time="2026-01-20T10:55:17.582666456Z" level=info msg="caching project quota objects with TTL 1m0s" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2026-01-20T10:55:17.583653762+00:00 stderr F time="2026-01-20T10:55:17.58355987Z" level=info msg="redis not configured" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2026-01-20T10:55:17.583858578+00:00 stderr F time="2026-01-20T10:55:17.583802916Z" level=info msg="using openshift blob descriptor cache" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2026-01-20T10:55:17.583858578+00:00 stderr F time="2026-01-20T10:55:17.583829997Z" level=warning msg="Registry does not implement RepositoryRemover.
Will not be able to delete repos and tags" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2026-01-20T10:55:17.584051093+00:00 stderr F time="2026-01-20T10:55:17.58393114Z" level=info msg="Starting upload purge in 32m0s" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2026-01-20T10:55:17.585926913+00:00 stderr F time="2026-01-20T10:55:17.58578978Z" level=info msg="Using \"image-registry.openshift-image-registry.svc:5000\" as Docker Registry URL" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2026-01-20T10:55:17.586334734+00:00 stderr F time="2026-01-20T10:55:17.586238741Z" level=info msg="listening on :5000, tls" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2026-01-20T10:55:26.439480072+00:00 stderr F time="2026-01-20T10:55:26.439200854Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=bf515e13-83c7-41f3-ad8c-d2ca0ec81924 http.request.method=GET http.request.remoteaddr="10.217.0.2:41374" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="105.993µs" http.response.status=200 http.response.written=0 2026-01-20T10:55:36.441315653+00:00 stderr F time="2026-01-20T10:55:36.441165999Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=f1d3565b-f3c7-4e9e-8208-d4e46f0893ee http.request.method=GET http.request.remoteaddr="10.217.0.2:50764" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="153.444µs" http.response.status=200 http.response.written=0 2026-01-20T10:55:36.441940181+00:00 stderr F time="2026-01-20T10:55:36.441852988Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=f0874846-16aa-4109-bc3c-e585b9140d6f http.request.method=GET 
http.request.remoteaddr="10.217.0.2:50766" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="47.801µs" http.response.status=200 http.response.written=0 2026-01-20T10:55:46.440663675+00:00 stderr F time="2026-01-20T10:55:46.439887305Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=4dca053a-4164-439e-bae5-9714d2854ef8 http.request.method=GET http.request.remoteaddr="10.217.0.2:47470" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="171.276µs" http.response.status=200 http.response.written=0 2026-01-20T10:55:46.440828970+00:00 stderr F time="2026-01-20T10:55:46.44044997Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=12854d1e-cc08-40ba-b2ad-7fc32f2766a0 http.request.method=GET http.request.remoteaddr="10.217.0.2:47462" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="57.152µs" http.response.status=200 http.response.written=0 2026-01-20T10:55:56.454913167+00:00 stderr F time="2026-01-20T10:55:56.444305152Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=4deea445-a49f-478d-af50-004a62c2ca39 http.request.method=GET http.request.remoteaddr="10.217.0.2:53156" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="65.532µs" http.response.status=200 http.response.written=0 2026-01-20T10:55:56.454913167+00:00 stderr F time="2026-01-20T10:55:56.445459932Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=8505a7b5-ea57-46be-b1bb-a08a8bb46c1f http.request.method=GET http.request.remoteaddr="10.217.0.2:53168" http.request.uri=/healthz 
http.request.useragent=kube-probe/1.29 http.response.duration="26.87µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:06.438717725+00:00 stderr F time="2026-01-20T10:56:06.438128959Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=7bf04500-2f11-4756-8d6f-4bd200d00ecf http.request.method=GET http.request.remoteaddr="10.217.0.2:53588" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="40.901µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:06.439321672+00:00 stderr F time="2026-01-20T10:56:06.439213269Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=75215583-7e2f-4afc-96ae-ef1945fb531c http.request.method=GET http.request.remoteaddr="10.217.0.2:53586" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="48.681µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:16.438622997+00:00 stderr F time="2026-01-20T10:56:16.437709943Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=1cc5a7ee-8e41-4f0a-980b-69da7f3dadea http.request.method=GET http.request.remoteaddr="10.217.0.2:50020" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="43.421µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:16.438706059+00:00 stderr F time="2026-01-20T10:56:16.438596127Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=930cb0b5-23e8-4201-939a-1601ac808720 http.request.method=GET http.request.remoteaddr="10.217.0.2:50008" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="19.52µs" 
http.response.status=200 http.response.written=0 2026-01-20T10:56:26.439973636+00:00 stderr F time="2026-01-20T10:56:26.438966479Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=078ea000-d5d6-497b-b2e5-b4f3f6e85a63 http.request.method=GET http.request.remoteaddr="10.217.0.2:50704" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="112.013µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:26.440128790+00:00 stderr F time="2026-01-20T10:56:26.439940825Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=4f06db23-dc9e-4bc7-b5bc-88ee8508d8b7 http.request.method=GET http.request.remoteaddr="10.217.0.2:50702" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="74.102µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:36.438098768+00:00 stderr F time="2026-01-20T10:56:36.437447031Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=e0d0281b-cd6f-4274-a8d1-889f01af129c http.request.method=GET http.request.remoteaddr="10.217.0.2:41388" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="62.082µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:36.438345405+00:00 stderr F time="2026-01-20T10:56:36.438280423Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=45c56357-1faf-43f1-b332-185df0649a5a http.request.method=GET http.request.remoteaddr="10.217.0.2:41400" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="24.121µs" http.response.status=200 http.response.written=0 
2026-01-20T10:56:46.437787185+00:00 stderr F time="2026-01-20T10:56:46.437180929Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=5d6dc2e0-bdec-42bc-a04b-41bd630b9a28 http.request.method=GET http.request.remoteaddr="10.217.0.2:36370" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="45.601µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:46.437869087+00:00 stderr F time="2026-01-20T10:56:46.437345253Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=474ed9b8-a33d-40d4-8da1-33432c895e13 http.request.method=GET http.request.remoteaddr="10.217.0.2:36362" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="42.431µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:56.437948553+00:00 stderr F time="2026-01-20T10:56:56.437363068Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=30bedf63-3d25-4b8f-9538-a1a3d31a17c3 http.request.method=GET http.request.remoteaddr="10.217.0.2:43300" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="43.491µs" http.response.status=200 http.response.written=0 2026-01-20T10:56:56.438926240+00:00 stderr F time="2026-01-20T10:56:56.438857958Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=b16a9c35-a347-4340-9115-fac8516cd2aa http.request.method=GET http.request.remoteaddr="10.217.0.2:43312" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="60.881µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:06.438042638+00:00 stderr F time="2026-01-20T10:57:06.437456843Z" 
level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=b54c147f-4457-4dc1-be79-bf2aaa84d8d5 http.request.method=GET http.request.remoteaddr="10.217.0.2:46256" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="45.461µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:06.439627440+00:00 stderr F time="2026-01-20T10:57:06.439521797Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=b5cd0dd5-21ce-45ee-9704-7e93a8ef516f http.request.method=GET http.request.remoteaddr="10.217.0.2:46258" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="202.795µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:16.439435977+00:00 stderr F time="2026-01-20T10:57:16.439213121Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=677d35ed-382e-4a0d-9406-1a238de543ed http.request.method=GET http.request.remoteaddr="10.217.0.2:55836" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="61.402µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:16.439643362+00:00 stderr F time="2026-01-20T10:57:16.439576441Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=142ae326-16c6-4a84-a7fe-c8a0e859389c http.request.method=GET http.request.remoteaddr="10.217.0.2:55828" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="45.491µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:26.439962555+00:00 stderr F time="2026-01-20T10:57:26.439169694Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) 
X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=070d38c2-4533-43c2-97bb-a51a95b1b721 http.request.method=GET http.request.remoteaddr="10.217.0.2:40802" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="58.332µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:26.440184510+00:00 stderr F time="2026-01-20T10:57:26.439230835Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=9128d407-24f0-4ddf-8fbf-457ed02d5786 http.request.method=GET http.request.remoteaddr="10.217.0.2:40796" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="80.602µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:36.439258700+00:00 stderr F time="2026-01-20T10:57:36.438595902Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=7f0bb8c5-86b5-4ac5-8dc7-443a36abba69 http.request.method=GET http.request.remoteaddr="10.217.0.2:53012" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="51.042µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:36.439362683+00:00 stderr F time="2026-01-20T10:57:36.438825628Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=aff514cb-ce6d-4274-9a5c-8490dab1c139 http.request.method=GET http.request.remoteaddr="10.217.0.2:53024" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="85.523µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:46.436968722+00:00 stderr F time="2026-01-20T10:57:46.436374186Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" 
http.request.id=d87c99eb-057b-44ae-9974-f523d6e96ab4 http.request.method=GET http.request.remoteaddr="10.217.0.2:58576" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="52.721µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:46.437965739+00:00 stderr F time="2026-01-20T10:57:46.437889767Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=c5f8bb19-d3ce-4b58-8f80-6f4b4a4c5329 http.request.method=GET http.request.remoteaddr="10.217.0.2:58578" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="17.35µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:56.439242056+00:00 stderr F time="2026-01-20T10:57:56.438510727Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=9f943747-954a-4736-9b9f-bf2044fc827b http.request.method=GET http.request.remoteaddr="10.217.0.2:34574" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="46.421µs" http.response.status=200 http.response.written=0 2026-01-20T10:57:56.439315788+00:00 stderr F time="2026-01-20T10:57:56.439254397Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=a8c03e2a-3244-4306-a6ea-7332608f5468 http.request.method=GET http.request.remoteaddr="10.217.0.2:34590" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="47.331µs" http.response.status=200 http.response.written=0 2026-01-20T10:58:06.441223267+00:00 stderr F time="2026-01-20T10:58:06.440204601Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=002eb1ee-a20f-4d1d-83a1-9e59ba53a8c7 http.request.method=GET 
http.request.remoteaddr="10.217.0.2:60812" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="192.565µs" http.response.status=200 http.response.written=0 2026-01-20T10:58:06.441392081+00:00 stderr F time="2026-01-20T10:58:06.440750165Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=4efc4fe7-e8f5-4df8-860a-0909819ef0d9 http.request.method=GET http.request.remoteaddr="10.217.0.2:60802" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="48.461µs" http.response.status=200 http.response.written=0 2026-01-20T10:58:16.437824305+00:00 stderr F time="2026-01-20T10:58:16.437482496Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=605b8057-1cbf-4800-96f6-5f12a671e601 http.request.method=GET http.request.remoteaddr="10.217.0.2:50106" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="42.471µs" http.response.status=200 http.response.written=0 2026-01-20T10:58:16.439718822+00:00 stderr F time="2026-01-20T10:58:16.43965196Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.37:5000" http.request.id=dd3e2d38-9691-4c90-a751-3e8655a523ce http.request.method=GET http.request.remoteaddr="10.217.0.2:50120" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="34.761µs" http.response.status=200 http.response.written=0

home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_run_hook_without_retry.log

[WARNING]: Found variable using reserved name: namespace PLAY [Download tools] ********************************************************** TASK [download_tools : Install build dependencies name=['jq', 'skopeo',
'sqlite', 'httpd-tools', 'virt-install', 'gcc', 'python3-jinja2', 'xmlstarlet', 'openssl']] *** Tuesday 20 January 2026 10:55:43 +0000 (0:00:00.038) 0:00:00.038 ******* Tuesday 20 January 2026 10:55:43 +0000 (0:00:00.038) 0:00:00.038 ******* changed: [localhost] TASK [download_tools : Set opm download url suffix opm_url_suffix=latest/download] *** Tuesday 20 January 2026 10:55:48 +0000 (0:00:05.442) 0:00:05.481 ******* Tuesday 20 January 2026 10:55:48 +0000 (0:00:05.442) 0:00:05.480 ******* skipping: [localhost] TASK [download_tools : Set opm download url suffix opm_url_suffix=download/{{ opm_version }}] *** Tuesday 20 January 2026 10:55:48 +0000 (0:00:00.045) 0:00:05.526 ******* Tuesday 20 January 2026 10:55:48 +0000 (0:00:00.045) 0:00:05.526 ******* ok: [localhost] TASK [download_tools : Create $HOME/bin dir path={{ lookup('env', 'HOME') }}/bin, state=directory, mode=0755] *** Tuesday 20 January 2026 10:55:48 +0000 (0:00:00.047) 0:00:05.574 ******* Tuesday 20 January 2026 10:55:48 +0000 (0:00:00.047) 0:00:05.573 ******* ok: [localhost] TASK [download_tools : Download opm url=https://github.com/operator-framework/operator-registry/releases/{{ opm_url_suffix }}/linux-amd64-opm, dest={{ lookup('env', 'HOME') }}/bin/opm, mode=0755, timeout=30] *** Tuesday 20 January 2026 10:55:49 +0000 (0:00:00.396) 0:00:05.970 ******* Tuesday 20 January 2026 10:55:49 +0000 (0:00:00.396) 0:00:05.970 ******* changed: [localhost] TASK [download_tools : Get version from sdk_version _sdk_version={{ sdk_version | regex_search('v(.*)', '\1') | first }}] *** Tuesday 20 January 2026 10:55:50 +0000 (0:00:01.162) 0:00:07.132 ******* Tuesday 20 January 2026 10:55:50 +0000 (0:00:01.162) 0:00:07.132 ******* ok: [localhost] TASK [download_tools : Set operator-sdk file for version < 1.3.0 _operator_sdk_file=operator-sdk-{{ sdk_version }}-x86_64-linux-gnu] *** Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.052) 0:00:07.185 ******* Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.052) 0:00:07.185 
******* skipping: [localhost] TASK [download_tools : Set operator-sdk file for version >= 1.3.0 _operator_sdk_file=operator-sdk_linux_amd64] *** Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.038) 0:00:07.224 ******* Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.038) 0:00:07.223 ******* ok: [localhost] TASK [download_tools : Download operator-sdk url=https://github.com/operator-framework/operator-sdk/releases/download/{{ sdk_version }}/{{ _operator_sdk_file }}, dest={{ lookup('env', 'HOME') }}/bin/operator-sdk, mode=0755, force=True, timeout=30] *** Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.039) 0:00:07.263 ******* Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.039) 0:00:07.263 ******* changed: [localhost] TASK [download_tools : Download and extract kustomize src=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F{{ kustomize_version }}/kustomize_{{ kustomize_version }}_linux_amd64.tar.gz, dest={{ lookup('env', 'HOME') }}/bin/, remote_src=True] *** Tuesday 20 January 2026 10:55:51 +0000 (0:00:01.378) 0:00:08.641 ******* Tuesday 20 January 2026 10:55:51 +0000 (0:00:01.378) 0:00:08.641 ******* changed: [localhost] TASK [download_tools : Download kubectl url=https://dl.k8s.io/release/{{ kubectl_version }}/bin/linux/amd64/kubectl, dest={{ lookup('env', 'HOME') }}/bin/kubectl, mode=0755, timeout=30] *** Tuesday 20 January 2026 10:55:53 +0000 (0:00:01.497) 0:00:10.138 ******* Tuesday 20 January 2026 10:55:53 +0000 (0:00:01.497) 0:00:10.138 ******* ok: [localhost] TASK [download_tools : Download kuttl url=https://github.com/kudobuilder/kuttl/releases/download/v{{ kuttl_version }}/kubectl-kuttl_{{ kuttl_version }}_linux_x86_64, dest={{ lookup('env', 'HOME') }}/bin/kubectl-kuttl, mode=0755, timeout=30] *** Tuesday 20 January 2026 10:55:53 +0000 (0:00:00.462) 0:00:10.601 ******* Tuesday 20 January 2026 10:55:53 +0000 (0:00:00.462) 0:00:10.601 ******* changed: [localhost] TASK [download_tools : Download chainsaw 
src=https://github.com/kyverno/chainsaw/releases/download/v{{ chainsaw_version }}/chainsaw_linux_amd64.tar.gz, dest={{ lookup('env', 'HOME') }}/bin/, remote_src=True, extra_opts=['--exclude', 'README.md', '--exclude', 'LICENSE']] *** Tuesday 20 January 2026 10:55:54 +0000 (0:00:01.020) 0:00:11.622 ******* Tuesday 20 January 2026 10:55:54 +0000 (0:00:01.020) 0:00:11.622 ******* changed: [localhost] TASK [download_tools : Download and extract yq src=https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64.tar.gz, dest={{ lookup('env', 'HOME') }}/bin/, remote_src=True, mode=0755] *** Tuesday 20 January 2026 10:55:58 +0000 (0:00:03.438) 0:00:15.060 ******* Tuesday 20 January 2026 10:55:58 +0000 (0:00:03.438) 0:00:15.060 ******* changed: [localhost] TASK [download_tools : Link yq_linux_amd64 as yq src={{ lookup('env', 'HOME') }}/bin/yq_linux_amd64, dest={{ lookup('env', 'HOME') }}/bin/yq, state=link] *** Tuesday 20 January 2026 10:56:00 +0000 (0:00:02.003) 0:00:17.063 ******* Tuesday 20 January 2026 10:56:00 +0000 (0:00:02.003) 0:00:17.063 ******* changed: [localhost] TASK [download_tools : Deinstall golang state=absent, name=['golang-bin', 'golang-src', 'golang']] *** Tuesday 20 January 2026 10:56:00 +0000 (0:00:00.511) 0:00:17.575 ******* Tuesday 20 January 2026 10:56:00 +0000 (0:00:00.511) 0:00:17.575 ******* ok: [localhost] TASK [download_tools : Delete old go version installed from upstream path={{ item }}, state=absent] *** Tuesday 20 January 2026 10:56:02 +0000 (0:00:01.811) 0:00:19.387 ******* Tuesday 20 January 2026 10:56:02 +0000 (0:00:01.811) 0:00:19.386 ******* ok: [localhost] => (item=/usr/local/go) ok: [localhost] => (item=/home/zuul/bin/go) ok: [localhost] => (item=/home/zuul/bin/gofmt) ok: [localhost] => (item=/usr/local/bin/go) ok: [localhost] => (item=/usr/local/bin/gofmt) TASK [download_tools : Download and extract golang src=https://golang.org/dl/go{{ go_version }}.linux-amd64.tar.gz, dest=/usr/local, remote_src=True, 
extra_opts=['--exclude', 'go/misc', '--exclude', 'go/pkg/linux_amd64_race', '--exclude', 'go/test']] *** Tuesday 20 January 2026 10:56:04 +0000 (0:00:02.045) 0:00:21.432 ******* Tuesday 20 January 2026 10:56:04 +0000 (0:00:02.045) 0:00:21.431 ******* changed: [localhost] TASK [download_tools : Set alternatives link to installed go version _raw_params=set -e update-alternatives --install /usr/local/bin/{{ item }} {{ item }} /usr/local/go/bin/{{ item }} 1 ] *** Tuesday 20 January 2026 10:56:18 +0000 (0:00:13.352) 0:00:34.784 ******* Tuesday 20 January 2026 10:56:18 +0000 (0:00:13.352) 0:00:34.784 ******* changed: [localhost] => (item=go) changed: [localhost] => (item=gofmt) TASK [download_tools : Clean bash cache msg=When move from rpm to upstream version, make sure to clean bash cache using `hash -d go`] *** Tuesday 20 January 2026 10:56:18 +0000 (0:00:00.802) 0:00:35.587 ******* Tuesday 20 January 2026 10:56:18 +0000 (0:00:00.802) 0:00:35.586 ******* ok: [localhost] => msg: When move from rpm to upstream version, make sure to clean bash cache using `hash -d go` PLAY RECAP ********************************************************************* localhost : ok=18 changed=10 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 Tuesday 20 January 2026 10:56:18 +0000 (0:00:00.053) 0:00:35.640 ******* =============================================================================== download_tools : Download and extract golang --------------------------- 13.35s download_tools : Install build dependencies ----------------------------- 5.44s download_tools : Download chainsaw -------------------------------------- 3.44s download_tools : Delete old go version installed from upstream ---------- 2.05s download_tools : Download and extract yq -------------------------------- 2.00s download_tools : Deinstall golang --------------------------------------- 1.81s download_tools : Download and extract kustomize ------------------------- 1.50s download_tools : Download operator-sdk 
---------------------------------- 1.38s download_tools : Download opm ------------------------------------------- 1.16s download_tools : Download kuttl ----------------------------------------- 1.02s download_tools : Set alternatives link to installed go version ---------- 0.80s download_tools : Link yq_linux_amd64 as yq ------------------------------ 0.51s download_tools : Download kubectl --------------------------------------- 0.46s download_tools : Create $HOME/bin dir ----------------------------------- 0.40s download_tools : Clean bash cache --------------------------------------- 0.05s download_tools : Get version from sdk_version --------------------------- 0.05s download_tools : Set opm download url suffix ---------------------------- 0.05s download_tools : Set opm download url suffix ---------------------------- 0.05s download_tools : Set operator-sdk file for version >= 1.3.0 ------------- 0.04s download_tools : Set operator-sdk file for version < 1.3.0 -------------- 0.04s Tuesday 20 January 2026 10:56:18 +0000 (0:00:00.054) 0:00:35.640 ******* =============================================================================== download_tools --------------------------------------------------------- 35.60s ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ total ------------------------------------------------------------------ 35.60s

home/zuul/zuul-output/logs/ci-framework-data/logs/pre_infra_download_needed_tools.log

2026-01-20 10:55:43,276 p=32423 u=zuul n=ansible | [WARNING]: Found variable using reserved name: namespace 2026-01-20 10:55:43,277 p=32423 u=zuul n=ansible | PLAY [Download tools] ********************************************************** 2026-01-20 10:55:43,312 p=32423 u=zuul n=ansible | TASK [download_tools : Install build dependencies name=['jq', 'skopeo', 'sqlite', 'httpd-tools', 'virt-install', 'gcc', 'python3-jinja2', 'xmlstarlet', 'openssl']]
*** 2026-01-20 10:55:43,312 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:43 +0000 (0:00:00.038) 0:00:00.038 ******* 2026-01-20 10:55:43,312 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:43 +0000 (0:00:00.038) 0:00:00.038 ******* 2026-01-20 10:55:48,741 p=32423 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:48,754 p=32423 u=zuul n=ansible | TASK [download_tools : Set opm download url suffix opm_url_suffix=latest/download] *** 2026-01-20 10:55:48,754 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:48 +0000 (0:00:05.442) 0:00:05.481 ******* 2026-01-20 10:55:48,754 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:48 +0000 (0:00:05.442) 0:00:05.480 ******* 2026-01-20 10:55:48,786 p=32423 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:48,799 p=32423 u=zuul n=ansible | TASK [download_tools : Set opm download url suffix opm_url_suffix=download/{{ opm_version }}] *** 2026-01-20 10:55:48,800 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:48 +0000 (0:00:00.045) 0:00:05.526 ******* 2026-01-20 10:55:48,800 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:48 +0000 (0:00:00.045) 0:00:05.526 ******* 2026-01-20 10:55:48,832 p=32423 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:48,847 p=32423 u=zuul n=ansible | TASK [download_tools : Create $HOME/bin dir path={{ lookup('env', 'HOME') }}/bin, state=directory, mode=0755] *** 2026-01-20 10:55:48,847 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:48 +0000 (0:00:00.047) 0:00:05.574 ******* 2026-01-20 10:55:48,848 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:48 +0000 (0:00:00.047) 0:00:05.573 ******* 2026-01-20 10:55:49,233 p=32423 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:49,243 p=32423 u=zuul n=ansible | TASK [download_tools : Download opm url=https://github.com/operator-framework/operator-registry/releases/{{ opm_url_suffix }}/linux-amd64-opm, dest={{ lookup('env', 'HOME') }}/bin/opm, mode=0755, timeout=30] *** 
2026-01-20 10:55:49,244 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:49 +0000 (0:00:00.396) 0:00:05.970 ******* 2026-01-20 10:55:49,244 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:49 +0000 (0:00:00.396) 0:00:05.970 ******* 2026-01-20 10:55:50,383 p=32423 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:50,405 p=32423 u=zuul n=ansible | TASK [download_tools : Get version from sdk_version _sdk_version={{ sdk_version | regex_search('v(.*)', '\1') | first }}] *** 2026-01-20 10:55:50,406 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:50 +0000 (0:00:01.162) 0:00:07.132 ******* 2026-01-20 10:55:50,406 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:50 +0000 (0:00:01.162) 0:00:07.132 ******* 2026-01-20 10:55:50,445 p=32423 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:50,458 p=32423 u=zuul n=ansible | TASK [download_tools : Set operator-sdk file for version < 1.3.0 _operator_sdk_file=operator-sdk-{{ sdk_version }}-x86_64-linux-gnu] *** 2026-01-20 10:55:50,459 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.052) 0:00:07.185 ******* 2026-01-20 10:55:50,459 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.052) 0:00:07.185 ******* 2026-01-20 10:55:50,482 p=32423 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:50,497 p=32423 u=zuul n=ansible | TASK [download_tools : Set operator-sdk file for version >= 1.3.0 _operator_sdk_file=operator-sdk_linux_amd64] *** 2026-01-20 10:55:50,497 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.038) 0:00:07.224 ******* 2026-01-20 10:55:50,497 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.038) 0:00:07.223 ******* 2026-01-20 10:55:50,524 p=32423 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:50,536 p=32423 u=zuul n=ansible | TASK [download_tools : Download operator-sdk url=https://github.com/operator-framework/operator-sdk/releases/download/{{ sdk_version }}/{{ 
_operator_sdk_file }}, dest={{ lookup('env', 'HOME') }}/bin/operator-sdk, mode=0755, force=True, timeout=30] *** 2026-01-20 10:55:50,537 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.039) 0:00:07.263 ******* 2026-01-20 10:55:50,537 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:50 +0000 (0:00:00.039) 0:00:07.263 ******* 2026-01-20 10:55:51,903 p=32423 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:51,915 p=32423 u=zuul n=ansible | TASK [download_tools : Download and extract kustomize src=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F{{ kustomize_version }}/kustomize_{{ kustomize_version }}_linux_amd64.tar.gz, dest={{ lookup('env', 'HOME') }}/bin/, remote_src=True] *** 2026-01-20 10:55:51,915 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:51 +0000 (0:00:01.378) 0:00:08.641 ******* 2026-01-20 10:55:51,915 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:51 +0000 (0:00:01.378) 0:00:08.641 ******* 2026-01-20 10:55:53,402 p=32423 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:53,412 p=32423 u=zuul n=ansible | TASK [download_tools : Download kubectl url=https://dl.k8s.io/release/{{ kubectl_version }}/bin/linux/amd64/kubectl, dest={{ lookup('env', 'HOME') }}/bin/kubectl, mode=0755, timeout=30] *** 2026-01-20 10:55:53,412 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:53 +0000 (0:00:01.497) 0:00:10.138 ******* 2026-01-20 10:55:53,412 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:53 +0000 (0:00:01.497) 0:00:10.138 ******* 2026-01-20 10:55:53,864 p=32423 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:53,874 p=32423 u=zuul n=ansible | TASK [download_tools : Download kuttl url=https://github.com/kudobuilder/kuttl/releases/download/v{{ kuttl_version }}/kubectl-kuttl_{{ kuttl_version }}_linux_x86_64, dest={{ lookup('env', 'HOME') }}/bin/kubectl-kuttl, mode=0755, timeout=30] *** 2026-01-20 10:55:53,875 p=32423 u=zuul n=ansible | Tuesday 20 January 
2026 10:55:53 +0000 (0:00:00.462) 0:00:10.601 ******* 2026-01-20 10:55:53,875 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:53 +0000 (0:00:00.462) 0:00:10.601 ******* 2026-01-20 10:55:54,882 p=32423 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:54,895 p=32423 u=zuul n=ansible | TASK [download_tools : Download chainsaw src=https://github.com/kyverno/chainsaw/releases/download/v{{ chainsaw_version }}/chainsaw_linux_amd64.tar.gz, dest={{ lookup('env', 'HOME') }}/bin/, remote_src=True, extra_opts=['--exclude', 'README.md', '--exclude', 'LICENSE']] *** 2026-01-20 10:55:54,896 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:54 +0000 (0:00:01.020) 0:00:11.622 ******* 2026-01-20 10:55:54,896 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:54 +0000 (0:00:01.020) 0:00:11.622 ******* 2026-01-20 10:55:58,323 p=32423 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:58,333 p=32423 u=zuul n=ansible | TASK [download_tools : Download and extract yq src=https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64.tar.gz, dest={{ lookup('env', 'HOME') }}/bin/, remote_src=True, mode=0755] *** 2026-01-20 10:55:58,334 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:58 +0000 (0:00:03.438) 0:00:15.060 ******* 2026-01-20 10:55:58,334 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:55:58 +0000 (0:00:03.438) 0:00:15.060 ******* 2026-01-20 10:56:00,326 p=32423 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:00,337 p=32423 u=zuul n=ansible | TASK [download_tools : Link yq_linux_amd64 as yq src={{ lookup('env', 'HOME') }}/bin/yq_linux_amd64, dest={{ lookup('env', 'HOME') }}/bin/yq, state=link] *** 2026-01-20 10:56:00,337 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:00 +0000 (0:00:02.003) 0:00:17.063 ******* 2026-01-20 10:56:00,337 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:00 +0000 (0:00:02.003) 0:00:17.063 ******* 2026-01-20 10:56:00,827 p=32423 u=zuul n=ansible | changed: [localhost] 
2026-01-20 10:56:00,848 p=32423 u=zuul n=ansible | TASK [download_tools : Deinstall golang state=absent, name=['golang-bin', 'golang-src', 'golang']] *** 2026-01-20 10:56:00,848 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:00 +0000 (0:00:00.511) 0:00:17.575 ******* 2026-01-20 10:56:00,849 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:00 +0000 (0:00:00.511) 0:00:17.575 ******* 2026-01-20 10:56:02,648 p=32423 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:02,660 p=32423 u=zuul n=ansible | TASK [download_tools : Delete old go version installed from upstream path={{ item }}, state=absent] *** 2026-01-20 10:56:02,660 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:02 +0000 (0:00:01.811) 0:00:19.387 ******* 2026-01-20 10:56:02,660 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:02 +0000 (0:00:01.811) 0:00:19.386 ******* 2026-01-20 10:56:03,801 p=32423 u=zuul n=ansible | ok: [localhost] => (item=/usr/local/go) 2026-01-20 10:56:04,010 p=32423 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/bin/go) 2026-01-20 10:56:04,234 p=32423 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/bin/gofmt) 2026-01-20 10:56:04,471 p=32423 u=zuul n=ansible | ok: [localhost] => (item=/usr/local/bin/go) 2026-01-20 10:56:04,692 p=32423 u=zuul n=ansible | ok: [localhost] => (item=/usr/local/bin/gofmt) 2026-01-20 10:56:04,705 p=32423 u=zuul n=ansible | TASK [download_tools : Download and extract golang src=https://golang.org/dl/go{{ go_version }}.linux-amd64.tar.gz, dest=/usr/local, remote_src=True, extra_opts=['--exclude', 'go/misc', '--exclude', 'go/pkg/linux_amd64_race', '--exclude', 'go/test']] *** 2026-01-20 10:56:04,706 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:04 +0000 (0:00:02.045) 0:00:21.432 ******* 2026-01-20 10:56:04,706 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:04 +0000 (0:00:02.045) 0:00:21.431 ******* 2026-01-20 10:56:18,048 p=32423 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:18,058 
p=32423 u=zuul n=ansible | TASK [download_tools : Set alternatives link to installed go version _raw_params=set -e update-alternatives --install /usr/local/bin/{{ item }} {{ item }} /usr/local/go/bin/{{ item }} 1 ] *** 2026-01-20 10:56:18,058 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:18 +0000 (0:00:13.352) 0:00:34.784 ******* 2026-01-20 10:56:18,058 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:18 +0000 (0:00:13.352) 0:00:34.784 ******* 2026-01-20 10:56:18,392 p=32423 u=zuul n=ansible | changed: [localhost] => (item=go) 2026-01-20 10:56:18,849 p=32423 u=zuul n=ansible | changed: [localhost] => (item=gofmt) 2026-01-20 10:56:18,860 p=32423 u=zuul n=ansible | TASK [download_tools : Clean bash cache msg=When move from rpm to upstream version, make sure to clean bash cache using `hash -d go`] *** 2026-01-20 10:56:18,860 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:18 +0000 (0:00:00.802) 0:00:35.587 ******* 2026-01-20 10:56:18,860 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:18 +0000 (0:00:00.802) 0:00:35.586 ******* 2026-01-20 10:56:18,874 p=32423 u=zuul n=ansible | ok: [localhost] => msg: When move from rpm to upstream version, make sure to clean bash cache using `hash -d go` 2026-01-20 10:56:18,913 p=32423 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2026-01-20 10:56:18,913 p=32423 u=zuul n=ansible | localhost : ok=18 changed=10 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:18 +0000 (0:00:00.053) 0:00:35.640 ******* 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | =============================================================================== 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Download and extract golang --------------------------- 13.35s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Install build dependencies 
----------------------------- 5.44s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Download chainsaw -------------------------------------- 3.44s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Delete old go version installed from upstream ---------- 2.05s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Download and extract yq -------------------------------- 2.00s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Deinstall golang --------------------------------------- 1.81s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Download and extract kustomize ------------------------- 1.50s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Download operator-sdk ---------------------------------- 1.38s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Download opm ------------------------------------------- 1.16s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Download kuttl ----------------------------------------- 1.02s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Set alternatives link to installed go version ---------- 0.80s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Link yq_linux_amd64 as yq ------------------------------ 0.51s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Download kubectl --------------------------------------- 0.46s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Create $HOME/bin dir ----------------------------------- 0.40s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Clean bash cache --------------------------------------- 0.05s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Get version from sdk_version --------------------------- 0.05s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Set opm download url suffix ---------------------------- 
0.05s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Set opm download url suffix ---------------------------- 0.05s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Set operator-sdk file for version >= 1.3.0 ------------- 0.04s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | download_tools : Set operator-sdk file for version < 1.3.0 -------------- 0.04s 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | Tuesday 20 January 2026 10:56:18 +0000 (0:00:00.054) 0:00:35.640 ******* 2026-01-20 10:56:18,914 p=32423 u=zuul n=ansible | =============================================================================== 2026-01-20 10:56:18,915 p=32423 u=zuul n=ansible | download_tools --------------------------------------------------------- 35.60s 2026-01-20 10:56:18,915 p=32423 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2026-01-20 10:56:18,915 p=32423 u=zuul n=ansible | total ------------------------------------------------------------------ 35.60s home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_001_fetch_openshift.log0000644000175000017500000000035215133657525027635 0ustar zuulzuulWARNING: Using insecure TLS client config. Setting this option is not supported! Login successful. You have access to 64 projects, the list has been suppressed. You can list all projects with 'oc projects' Using project "default". 
home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_002_run_hook_without_retry_fetch.log0000644000175000017500000003335115133657612032475 0ustar zuulzuul[WARNING]: Found variable using reserved name: namespace PLAY [Sync repos for controller to compute for periodic jobs and gating repo] *** TASK [Gathering Facts ] ******************************************************** Tuesday 20 January 2026 10:57:03 +0000 (0:00:00.011) 0:00:00.011 ******* Tuesday 20 January 2026 10:57:03 +0000 (0:00:00.011) 0:00:00.011 ******* ok: [compute-0] ok: [compute-1] TASK [Check for gating repo on controller path={{ cifmw_basedir }}/artifacts/repositories/gating.repo] *** Tuesday 20 January 2026 10:57:05 +0000 (0:00:01.467) 0:00:01.479 ******* Tuesday 20 January 2026 10:57:05 +0000 (0:00:01.467) 0:00:01.478 ******* ok: [compute-0 -> controller(38.102.83.39)] ok: [compute-1 -> controller(38.102.83.39)] TASK [Copy repositories from controller to computes dest=/etc/yum.repos.d/, src={{ cifmw_basedir }}/artifacts/repositories/, mode=0755] *** Tuesday 20 January 2026 10:57:06 +0000 (0:00:00.732) 0:00:02.212 ******* Tuesday 20 January 2026 10:57:06 +0000 (0:00:00.732) 0:00:02.211 ******* changed: [compute-1] changed: [compute-0] PLAY [Build dataset hook] ****************************************************** TASK [Load parameters dir={{ item }}, ignore_unknown_extensions=True, extensions=['yaml', 'yml']] *** Tuesday 20 January 2026 10:57:11 +0000 (0:00:05.195) 0:00:07.407 ******* Tuesday 20 January 2026 10:57:11 +0000 (0:00:05.195) 0:00:07.406 ******* ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) ok: [localhost] => (item=/etc/ci/env) TASK [Ensure CRC hostname is set _crc_hostname={{ cifmw_crc_hostname | default('crc') }}] *** Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.140) 0:00:07.548 ******* Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.140) 0:00:07.547 ******* ok: [localhost] TASK [Check we have some compute in inventory computes_len={{ 
groups['computes'] | default([]) | length }}] *** Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.081) 0:00:07.629 ******* Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.081) 0:00:07.629 ******* ok: [localhost] TASK [Ensure that the isolated net was configured for crc that=['crc_ci_bootstrap_networks_out is defined', 'crc_ci_bootstrap_networks_out[_crc_hostname] is defined', "crc_ci_bootstrap_networks_out[_crc_hostname]['default'] is defined"]] *** Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.041) 0:00:07.671 ******* Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.041) 0:00:07.671 ******* ok: [localhost] => changed: false msg: All assertions passed TASK [Ensure we have needed bits for compute when needed that=['crc_ci_bootstrap_networks_out[_first_compute] is defined', "crc_ci_bootstrap_networks_out[_first_compute]['default'] is defined"]] *** Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.037) 0:00:07.708 ******* Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.037) 0:00:07.708 ******* ok: [localhost] => changed: false msg: All assertions passed TASK [Set facts for further usage within the framework cifmw_edpm_prepare_extra_vars={'NNCP_INTERFACE': '{{ crc_ci_bootstrap_networks_out[_crc_hostname].default.iface }}', 'NNCP_DNS_SERVER': "{{\n cifmw_nncp_dns_server |\n default(crc_ci_bootstrap_networks_out[_crc_hostname].default.ip) |\n split('/') | first\n}}", 'NETWORK_MTU': '{{ crc_ci_bootstrap_networks_out[_crc_hostname].default.mtu }}'}] *** Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.065) 0:00:07.774 ******* Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.066) 0:00:07.774 ******* ok: [localhost] TASK [Ensure the kustomizations dirs exists path={{ cifmw_basedir }}/artifacts/manifests/kustomizations/{{ item }}, state=directory, mode=0755] *** Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.051) 0:00:07.826 ******* Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.051) 0:00:07.825 ******* changed: [localhost] => (item=dataplane) changed: [localhost] => 
(item=controlplane) TASK [Create OpenStackControlPlane CR Kustomization dest={{ cifmw_basedir }}/artifacts/manifests/kustomizations/controlplane/99-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: namespace: {{ namespace }} patches: - target: kind: OpenStackControlPlane patch: |- - op: replace path: /spec/dns/template/options value: [ { "key": "server", "values": [ "192.168.122.10" ] }, { "key": "no-negcache", "values": [] } ], mode=0644] *** Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.509) 0:00:08.335 ******* Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.509) 0:00:08.334 ******* changed: [localhost] TASK [Set specific fact for compute accesses cifmw_edpm_deploy_extra_vars={{ edpm_install_yamls_vars }}] *** Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.428) 0:00:08.763 ******* Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.428) 0:00:08.762 ******* ok: [localhost] TASK [Create EDPM CR Kustomization mode=0644, dest={{ cifmw_basedir }}/artifacts/manifests/kustomizations/dataplane/99-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: namespace: {{ namespace }} patches: - target: kind: OpenStackDataPlaneNodeSet patch: |- {% for compute_node in groups['computes'] %} - op: replace path: /spec/nodes/edpm-{{ compute_node }}/hostName value: "{{compute_node}}" {% endfor %} - op: replace path: /spec/nodeTemplate/ansible/ansibleVars/neutron_public_interface_name value: "{{ crc_ci_bootstrap_networks_out[_first_compute].default.iface | default('') }}" {% for compute_node in groups['computes'] %} - op: replace path: /spec/nodes/edpm-{{ compute_node }}/networks/0/defaultRoute value: false {% endfor %} {% for compute_node in groups['computes'] if compute_node != _first_compute %} - op: replace path: /spec/nodes/edpm-{{ compute_node }}/ansible/ansibleHost value: >- {{ crc_ci_bootstrap_networks_out[compute_node].default.ip4 | 
default(crc_ci_bootstrap_networks_out[compute_node].default.ip) | ansible.utils.ipaddr('address') }} - op: replace path: /spec/nodes/edpm-{{ compute_node }}/networks/0/fixedIP value: >- {{ crc_ci_bootstrap_networks_out[compute_node].default.ip4 | default(crc_ci_bootstrap_networks_out[compute_node].default.ip) | ansible.utils.ipaddr('address') }} {% endfor %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_os_net_config_mappings value: net_config_data_lookup: edpm-compute: nic2: "{{ crc_ci_bootstrap_networks_out[_first_compute].default.iface | default('ens7') }}" - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_network_config_debug value: true - op: add path: /spec/env value: {} - op: add path: /spec/env value: - name: "ANSIBLE_VERBOSITY" value: "2" - op: replace path: /spec/nodeTemplate/ansible/ansibleVars/edpm_network_config_template value: |- {%- raw %} --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic1 use_dhcp: true mtu: {{ min_viable_mtu }} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% if edpm_network_config_nmstate | bool %} # this ovs_extra configuration fixes OSPRH-17551, but it will be not needed when FDP-1472 is resolved ovs_extra: - "set interface eth1 external-ids:ovn-egress-iface=true" {% endif %} {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ 
'_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} {% endraw %} - op: replace path: /spec/nodeTemplate/ansible/ansibleUser value: "{{ hostvars[_first_compute].ansible_user | default('zuul') }}" - op: replace path: /spec/nodeTemplate/ansible/ansibleVars/ctlplane_dns_nameservers value: {% for dns_server in dns_servers %} - "{{ dns_server }}" {% endfor %} {% if content_provider_registry_ip is defined %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_container_registry_insecure_registries value: ["{{ content_provider_registry_ip }}:5001"] {% endif %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_sshd_allowed_ranges value: ["0.0.0.0/0"] {% if cifmw_hook_fetch_compute_facts_edpm_cmd is defined %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_bootstrap_command value: |- {{ cifmw_hook_fetch_compute_facts_edpm_cmd | indent( width=8) }} {% endif %} {% if cifmw_edpm_telemetry_enabled_exporters is defined and cifmw_edpm_telemetry_enabled_exporters | length > 0 %} - op: replace path: /spec/nodeTemplate/ansible/ansibleVars/edpm_telemetry_enabled_exporters value: {% for exporter in cifmw_edpm_telemetry_enabled_exporters %} - "{{ exporter }}" {% endfor %} {% endif %}] *** Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.118) 0:00:08.881 ******* Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.118) 0:00:08.881 ******* changed: [localhost] TASK [Ensure we know about the private host keys _raw_params=ssh-keyscan {{ cifmw_edpm_deploy_extra_vars.DATAPLANE_COMPUTE_IP }} >> ~/.ssh/known_hosts ] *** Tuesday 20 January 2026 10:57:13 +0000 (0:00:00.513) 0:00:09.395 ******* Tuesday 20 January 2026 10:57:13 +0000 (0:00:00.513) 0:00:09.394 ******* changed: [localhost] TASK [Save compute info dest={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml, content={{ 
file_content | to_nice_yaml }}, mode=0644] *** Tuesday 20 January 2026 10:57:13 +0000 (0:00:00.428) 0:00:09.823 ******* Tuesday 20 January 2026 10:57:13 +0000 (0:00:00.428) 0:00:09.822 ******* changed: [localhost] PLAY RECAP ********************************************************************* compute-0 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 compute-1 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 localhost : ok=12 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.551) 0:00:10.375 ******* =============================================================================== Copy repositories from controller to computes --------------------------- 5.20s Gathering Facts --------------------------------------------------------- 1.47s Check for gating repo on controller ------------------------------------- 0.73s Save compute info ------------------------------------------------------- 0.55s Create EDPM CR Kustomization -------------------------------------------- 0.51s Ensure the kustomizations dirs exists ----------------------------------- 0.51s Ensure we know about the private host keys ------------------------------ 0.43s Create OpenStackControlPlane CR Kustomization --------------------------- 0.43s Load parameters --------------------------------------------------------- 0.14s Set specific fact for compute accesses ---------------------------------- 0.12s Ensure CRC hostname is set ---------------------------------------------- 0.08s Ensure we have needed bits for compute when needed ---------------------- 0.07s Set facts for further usage within the framework ------------------------ 0.05s Check we have some compute in inventory --------------------------------- 0.04s Ensure that the isolated net was configured for crc --------------------- 0.04s Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.552) 0:00:10.374 ******* 
=============================================================================== ansible.builtin.copy ---------------------------------------------------- 6.69s gather_facts ------------------------------------------------------------ 1.47s ansible.builtin.stat ---------------------------------------------------- 0.73s ansible.builtin.file ---------------------------------------------------- 0.51s ansible.builtin.shell --------------------------------------------------- 0.43s ansible.builtin.set_fact ------------------------------------------------ 0.29s ansible.builtin.include_vars -------------------------------------------- 0.14s ansible.builtin.assert -------------------------------------------------- 0.10s ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ total ------------------------------------------------------------------ 10.36s home/zuul/zuul-output/logs/ci-framework-data/logs/post_infra_fetch_nodes_facts_and_save_the.log0000644000175000017500000004537015133657612032441 0ustar zuulzuul2026-01-20 10:57:03,807 p=33593 u=zuul n=ansible | [WARNING]: Found variable using reserved name: namespace 2026-01-20 10:57:03,807 p=33593 u=zuul n=ansible | PLAY [Sync repos for controller to compute for periodic jobs and gating repo] *** 2026-01-20 10:57:03,816 p=33593 u=zuul n=ansible | TASK [Gathering Facts ] ******************************************************** 2026-01-20 10:57:03,816 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:03 +0000 (0:00:00.011) 0:00:00.011 ******* 2026-01-20 10:57:03,816 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:03 +0000 (0:00:00.011) 0:00:00.011 ******* 2026-01-20 10:57:05,234 p=33593 u=zuul n=ansible | ok: [compute-0] 2026-01-20 10:57:05,248 p=33593 u=zuul n=ansible | ok: [compute-1] 2026-01-20 10:57:05,284 p=33593 u=zuul n=ansible | TASK [Check for gating repo on controller path={{ cifmw_basedir }}/artifacts/repositories/gating.repo] *** 2026-01-20 10:57:05,284 p=33593 u=zuul 
n=ansible | Tuesday 20 January 2026 10:57:05 +0000 (0:00:01.467) 0:00:01.479 ******* 2026-01-20 10:57:05,284 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:05 +0000 (0:00:01.467) 0:00:01.478 ******* 2026-01-20 10:57:05,989 p=33593 u=zuul n=ansible | ok: [compute-0 -> controller(38.102.83.39)] 2026-01-20 10:57:06,008 p=33593 u=zuul n=ansible | ok: [compute-1 -> controller(38.102.83.39)] 2026-01-20 10:57:06,016 p=33593 u=zuul n=ansible | TASK [Copy repositories from controller to computes dest=/etc/yum.repos.d/, src={{ cifmw_basedir }}/artifacts/repositories/, mode=0755] *** 2026-01-20 10:57:06,016 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:06 +0000 (0:00:00.732) 0:00:02.212 ******* 2026-01-20 10:57:06,016 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:06 +0000 (0:00:00.732) 0:00:02.211 ******* 2026-01-20 10:57:11,006 p=33593 u=zuul n=ansible | changed: [compute-1] 2026-01-20 10:57:11,130 p=33593 u=zuul n=ansible | changed: [compute-0] 2026-01-20 10:57:11,183 p=33593 u=zuul n=ansible | PLAY [Build dataset hook] ****************************************************** 2026-01-20 10:57:11,212 p=33593 u=zuul n=ansible | TASK [Load parameters dir={{ item }}, ignore_unknown_extensions=True, extensions=['yaml', 'yml']] *** 2026-01-20 10:57:11,212 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:05.195) 0:00:07.407 ******* 2026-01-20 10:57:11,212 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:05.195) 0:00:07.406 ******* 2026-01-20 10:57:11,317 p=33593 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2026-01-20 10:57:11,327 p=33593 u=zuul n=ansible | ok: [localhost] => (item=/etc/ci/env) 2026-01-20 10:57:11,352 p=33593 u=zuul n=ansible | TASK [Ensure CRC hostname is set _crc_hostname={{ cifmw_crc_hostname | default('crc') }}] *** 2026-01-20 10:57:11,353 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.140) 0:00:07.548 ******* 
2026-01-20 10:57:11,353 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.140) 0:00:07.547 ******* 2026-01-20 10:57:11,385 p=33593 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:11,434 p=33593 u=zuul n=ansible | TASK [Check we have some compute in inventory computes_len={{ groups['computes'] | default([]) | length }}] *** 2026-01-20 10:57:11,434 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.081) 0:00:07.629 ******* 2026-01-20 10:57:11,434 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.081) 0:00:07.629 ******* 2026-01-20 10:57:11,465 p=33593 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:11,476 p=33593 u=zuul n=ansible | TASK [Ensure that the isolated net was configured for crc that=['crc_ci_bootstrap_networks_out is defined', 'crc_ci_bootstrap_networks_out[_crc_hostname] is defined', "crc_ci_bootstrap_networks_out[_crc_hostname]['default'] is defined"]] *** 2026-01-20 10:57:11,476 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.041) 0:00:07.671 ******* 2026-01-20 10:57:11,476 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.041) 0:00:07.671 ******* 2026-01-20 10:57:11,503 p=33593 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2026-01-20 10:57:11,513 p=33593 u=zuul n=ansible | TASK [Ensure we have needed bits for compute when needed that=['crc_ci_bootstrap_networks_out[_first_compute] is defined', "crc_ci_bootstrap_networks_out[_first_compute]['default'] is defined"]] *** 2026-01-20 10:57:11,513 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.037) 0:00:07.708 ******* 2026-01-20 10:57:11,513 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.037) 0:00:07.708 ******* 2026-01-20 10:57:11,563 p=33593 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2026-01-20 10:57:11,579 p=33593 u=zuul n=ansible | TASK [Set facts 
for further usage within the framework cifmw_edpm_prepare_extra_vars={'NNCP_INTERFACE': '{{ crc_ci_bootstrap_networks_out[_crc_hostname].default.iface }}', 'NNCP_DNS_SERVER': "{{\n cifmw_nncp_dns_server |\n default(crc_ci_bootstrap_networks_out[_crc_hostname].default.ip) |\n split('/') | first\n}}", 'NETWORK_MTU': '{{ crc_ci_bootstrap_networks_out[_crc_hostname].default.mtu }}'}] *** 2026-01-20 10:57:11,579 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.065) 0:00:07.774 ******* 2026-01-20 10:57:11,579 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.066) 0:00:07.774 ******* 2026-01-20 10:57:11,615 p=33593 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:11,630 p=33593 u=zuul n=ansible | TASK [Ensure the kustomizations dirs exists path={{ cifmw_basedir }}/artifacts/manifests/kustomizations/{{ item }}, state=directory, mode=0755] *** 2026-01-20 10:57:11,630 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.051) 0:00:07.826 ******* 2026-01-20 10:57:11,630 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:11 +0000 (0:00:00.051) 0:00:07.825 ******* 2026-01-20 10:57:11,944 p=33593 u=zuul n=ansible | changed: [localhost] => (item=dataplane) 2026-01-20 10:57:12,114 p=33593 u=zuul n=ansible | changed: [localhost] => (item=controlplane) 2026-01-20 10:57:12,139 p=33593 u=zuul n=ansible | TASK [Create OpenStackControlPlane CR Kustomization dest={{ cifmw_basedir }}/artifacts/manifests/kustomizations/controlplane/99-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: namespace: {{ namespace }} patches: - target: kind: OpenStackControlPlane patch: |- - op: replace path: /spec/dns/template/options value: [ { "key": "server", "values": [ "192.168.122.10" ] }, { "key": "no-negcache", "values": [] } ], mode=0644] *** 2026-01-20 10:57:12,140 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.509) 0:00:08.335 ******* 
2026-01-20 10:57:12,140 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.509) 0:00:08.334 ******* 2026-01-20 10:57:12,553 p=33593 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:12,567 p=33593 u=zuul n=ansible | TASK [Set specific fact for compute accesses cifmw_edpm_deploy_extra_vars={{ edpm_install_yamls_vars }}] *** 2026-01-20 10:57:12,568 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.428) 0:00:08.763 ******* 2026-01-20 10:57:12,568 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.428) 0:00:08.762 ******* 2026-01-20 10:57:12,668 p=33593 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:12,686 p=33593 u=zuul n=ansible | TASK [Create EDPM CR Kustomization mode=0644, dest={{ cifmw_basedir }}/artifacts/manifests/kustomizations/dataplane/99-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: namespace: {{ namespace }} patches: - target: kind: OpenStackDataPlaneNodeSet patch: |- {% for compute_node in groups['computes'] %} - op: replace path: /spec/nodes/edpm-{{ compute_node }}/hostName value: "{{compute_node}}" {% endfor %} - op: replace path: /spec/nodeTemplate/ansible/ansibleVars/neutron_public_interface_name value: "{{ crc_ci_bootstrap_networks_out[_first_compute].default.iface | default('') }}" {% for compute_node in groups['computes'] %} - op: replace path: /spec/nodes/edpm-{{ compute_node }}/networks/0/defaultRoute value: false {% endfor %} {% for compute_node in groups['computes'] if compute_node != _first_compute %} - op: replace path: /spec/nodes/edpm-{{ compute_node }}/ansible/ansibleHost value: >- {{ crc_ci_bootstrap_networks_out[compute_node].default.ip4 | default(crc_ci_bootstrap_networks_out[compute_node].default.ip) | ansible.utils.ipaddr('address') }} - op: replace path: /spec/nodes/edpm-{{ compute_node }}/networks/0/fixedIP value: >- {{ crc_ci_bootstrap_networks_out[compute_node].default.ip4 | 
default(crc_ci_bootstrap_networks_out[compute_node].default.ip) | ansible.utils.ipaddr('address') }} {% endfor %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_os_net_config_mappings value: net_config_data_lookup: edpm-compute: nic2: "{{ crc_ci_bootstrap_networks_out[_first_compute].default.iface | default('ens7') }}" - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_network_config_debug value: true - op: add path: /spec/env value: {} - op: add path: /spec/env value: - name: "ANSIBLE_VERBOSITY" value: "2" - op: replace path: /spec/nodeTemplate/ansible/ansibleVars/edpm_network_config_template value: |- {%- raw %} --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic1 use_dhcp: true mtu: {{ min_viable_mtu }} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% if edpm_network_config_nmstate | bool %} # this ovs_extra configuration fixes OSPRH-17551, but it will be not needed when FDP-1472 is resolved ovs_extra: - "set interface eth1 external-ids:ovn-egress-iface=true" {% endif %} {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} {% endraw %} - op: 
replace path: /spec/nodeTemplate/ansible/ansibleUser value: "{{ hostvars[_first_compute].ansible_user | default('zuul') }}" - op: replace path: /spec/nodeTemplate/ansible/ansibleVars/ctlplane_dns_nameservers value: {% for dns_server in dns_servers %} - "{{ dns_server }}" {% endfor %} {% if content_provider_registry_ip is defined %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_container_registry_insecure_registries value: ["{{ content_provider_registry_ip }}:5001"] {% endif %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_sshd_allowed_ranges value: ["0.0.0.0/0"] {% if cifmw_hook_fetch_compute_facts_edpm_cmd is defined %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_bootstrap_command value: |- {{ cifmw_hook_fetch_compute_facts_edpm_cmd | indent( width=8) }} {% endif %} {% if cifmw_edpm_telemetry_enabled_exporters is defined and cifmw_edpm_telemetry_enabled_exporters | length > 0 %} - op: replace path: /spec/nodeTemplate/ansible/ansibleVars/edpm_telemetry_enabled_exporters value: {% for exporter in cifmw_edpm_telemetry_enabled_exporters %} - "{{ exporter }}" {% endfor %} {% endif %}] *** 2026-01-20 10:57:12,686 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.118) 0:00:08.881 ******* 2026-01-20 10:57:12,686 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:12 +0000 (0:00:00.118) 0:00:08.881 ******* 2026-01-20 10:57:13,185 p=33593 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:13,199 p=33593 u=zuul n=ansible | TASK [Ensure we know about the private host keys _raw_params=ssh-keyscan {{ cifmw_edpm_deploy_extra_vars.DATAPLANE_COMPUTE_IP }} >> ~/.ssh/known_hosts ] *** 2026-01-20 10:57:13,199 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:13 +0000 (0:00:00.513) 0:00:09.395 ******* 2026-01-20 10:57:13,200 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:13 +0000 (0:00:00.513) 0:00:09.394 ******* 2026-01-20 10:57:13,614 p=33593 u=zuul n=ansible | changed: [localhost] 
2026-01-20 10:57:13,628 p=33593 u=zuul n=ansible | TASK [Save compute info dest={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml, content={{ file_content | to_nice_yaml }}, mode=0644] *** 2026-01-20 10:57:13,628 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:13 +0000 (0:00:00.428) 0:00:09.823 ******* 2026-01-20 10:57:13,628 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:13 +0000 (0:00:00.428) 0:00:09.822 ******* 2026-01-20 10:57:14,101 p=33593 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:14,179 p=33593 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2026-01-20 10:57:14,179 p=33593 u=zuul n=ansible | compute-0 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-20 10:57:14,179 p=33593 u=zuul n=ansible | compute-1 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-20 10:57:14,179 p=33593 u=zuul n=ansible | localhost : ok=12 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-20 10:57:14,179 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.551) 0:00:10.375 ******* 2026-01-20 10:57:14,179 p=33593 u=zuul n=ansible | =============================================================================== 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Copy repositories from controller to computes --------------------------- 5.20s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 1.47s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Check for gating repo on controller ------------------------------------- 0.73s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Save compute info ------------------------------------------------------- 0.55s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Create EDPM CR Kustomization -------------------------------------------- 0.51s 2026-01-20 10:57:14,180 p=33593 u=zuul 
n=ansible | Ensure the kustomizations dirs exists ----------------------------------- 0.51s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Ensure we know about the private host keys ------------------------------ 0.43s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Create OpenStackControlPlane CR Kustomization --------------------------- 0.43s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Load parameters --------------------------------------------------------- 0.14s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Set specific fact for compute accesses ---------------------------------- 0.12s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Ensure CRC hostname is set ---------------------------------------------- 0.08s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Ensure we have needed bits for compute when needed ---------------------- 0.07s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Set facts for further usage within the framework ------------------------ 0.05s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Check we have some compute in inventory --------------------------------- 0.04s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Ensure that the isolated net was configured for crc --------------------- 0.04s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.552) 0:00:10.374 ******* 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | =============================================================================== 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | ansible.builtin.copy ---------------------------------------------------- 6.69s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | gather_facts ------------------------------------------------------------ 1.47s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | ansible.builtin.stat ---------------------------------------------------- 0.73s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | ansible.builtin.file 
---------------------------------------------------- 0.51s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | ansible.builtin.shell --------------------------------------------------- 0.43s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | ansible.builtin.set_fact ------------------------------------------------ 0.29s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | ansible.builtin.include_vars -------------------------------------------- 0.14s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | ansible.builtin.assert -------------------------------------------------- 0.10s 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2026-01-20 10:57:14,180 p=33593 u=zuul n=ansible | total ------------------------------------------------------------------ 10.36s home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_003_run_hook_without_retry_80.log0000644000175000017500000000413715133657620031633 0ustar zuulzuul[WARNING]: Found variable using reserved name: namespace PLAY [Kustomize ControlPlane for horizon service] ****************************** TASK [Ensure the kustomizations dir exists path={{ cifmw_basedir }}/artifacts/manifests/kustomizations/controlplane, state=directory, mode=0755] *** Tuesday 20 January 2026 10:57:19 +0000 (0:00:00.065) 0:00:00.065 ******* Tuesday 20 January 2026 10:57:19 +0000 (0:00:00.063) 0:00:00.063 ******* ok: [localhost] TASK [Create kustomize yaml to enable Horizon dest={{ cifmw_basedir }}/artifacts/manifests/kustomizations/controlplane/80-horizon-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namespace: {{ namespace }} patches: - target: kind: OpenStackControlPlane patch: |- - op: add path: /spec/horizon/enabled value: true - op: add path: /spec/horizon/template/memcachedInstance value: memcached, mode=0644] *** Tuesday 20 January 2026 10:57:19 +0000 (0:00:00.304) 0:00:00.370 ******* Tuesday 20 January 2026 
10:57:19 +0000 (0:00:00.304) 0:00:00.368 ******* changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.651) 0:00:01.021 ******* =============================================================================== Create kustomize yaml to enable Horizon --------------------------------- 0.65s Ensure the kustomizations dir exists ------------------------------------ 0.30s Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.651) 0:00:01.020 ******* =============================================================================== ansible.builtin.copy ---------------------------------------------------- 0.65s ansible.builtin.file ---------------------------------------------------- 0.30s ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ total ------------------------------------------------------------------- 0.96s home/zuul/zuul-output/logs/ci-framework-data/logs/pre_deploy_80_kustomize_openstack_cr.log0000644000175000017500000000627515133657620031372 0ustar zuulzuul2026-01-20 10:57:19,011 p=34076 u=zuul n=ansible | [WARNING]: Found variable using reserved name: namespace 2026-01-20 10:57:19,011 p=34076 u=zuul n=ansible | PLAY [Kustomize ControlPlane for horizon service] ****************************** 2026-01-20 10:57:19,060 p=34076 u=zuul n=ansible | TASK [Ensure the kustomizations dir exists path={{ cifmw_basedir }}/artifacts/manifests/kustomizations/controlplane, state=directory, mode=0755] *** 2026-01-20 10:57:19,060 p=34076 u=zuul n=ansible | Tuesday 20 January 2026 10:57:19 +0000 (0:00:00.065) 0:00:00.065 ******* 2026-01-20 10:57:19,061 p=34076 u=zuul n=ansible | Tuesday 20 January 2026 10:57:19 +0000 (0:00:00.063) 0:00:00.063 ******* 2026-01-20 10:57:19,355 p=34076 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:19,365 p=34076 u=zuul n=ansible | TASK [Create 
kustomize yaml to enable Horizon dest={{ cifmw_basedir }}/artifacts/manifests/kustomizations/controlplane/80-horizon-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namespace: {{ namespace }} patches: - target: kind: OpenStackControlPlane patch: |- - op: add path: /spec/horizon/enabled value: true - op: add path: /spec/horizon/template/memcachedInstance value: memcached, mode=0644] *** 2026-01-20 10:57:19,365 p=34076 u=zuul n=ansible | Tuesday 20 January 2026 10:57:19 +0000 (0:00:00.304) 0:00:00.370 ******* 2026-01-20 10:57:19,365 p=34076 u=zuul n=ansible | Tuesday 20 January 2026 10:57:19 +0000 (0:00:00.304) 0:00:00.368 ******* 2026-01-20 10:57:19,980 p=34076 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:20,017 p=34076 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2026-01-20 10:57:20,017 p=34076 u=zuul n=ansible | localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-20 10:57:20,017 p=34076 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.651) 0:00:01.021 ******* 2026-01-20 10:57:20,017 p=34076 u=zuul n=ansible | =============================================================================== 2026-01-20 10:57:20,017 p=34076 u=zuul n=ansible | Create kustomize yaml to enable Horizon --------------------------------- 0.65s 2026-01-20 10:57:20,017 p=34076 u=zuul n=ansible | Ensure the kustomizations dir exists ------------------------------------ 0.30s 2026-01-20 10:57:20,017 p=34076 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.651) 0:00:01.020 ******* 2026-01-20 10:57:20,018 p=34076 u=zuul n=ansible | =============================================================================== 2026-01-20 10:57:20,018 p=34076 u=zuul n=ansible | ansible.builtin.copy ---------------------------------------------------- 0.65s 2026-01-20 10:57:20,018 p=34076 u=zuul n=ansible | ansible.builtin.file 
---------------------------------------------------- 0.30s 2026-01-20 10:57:20,018 p=34076 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2026-01-20 10:57:20,018 p=34076 u=zuul n=ansible | total ------------------------------------------------------------------- 0.96s home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_004_run_hook_without_retry_create.log0000644000175000017500000000623215133657622032650 0ustar zuulzuul[WARNING]: Found variable using reserved name: namespace PLAY [Deploy cluster-observability-operator] *********************************** TASK [Create the COO subscription _raw_params=oc create -f - < changed: true cmd: | oc create -f - < changed: true cmd: | oc create -f - < '/tmp/crc-logs-artifacts/pods' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup' -> 
'/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics' -> 
'/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.log' 
-> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76'
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/registry' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/registry'
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/registry/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/registry/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-utilities'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-utilities/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/registry-server'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/registry-server/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-content'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-content/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e'
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator' -> 
'/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/northd' -> 
'/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/northd' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/northd/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/northd/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/nbdb' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/nbdb' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/nbdb/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/nbdb/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kubecfg-setup' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kubecfg-setup' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kubecfg-setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kubecfg-setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/sbdb' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/sbdb' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/sbdb/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/sbdb/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovnkube-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovnkube-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovnkube-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovnkube-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-acl-logging' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-acl-logging' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-acl-logging/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-acl-logging/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-node' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-node' 
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-node/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-node/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-ovn-metrics' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-ovn-metrics' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-ovn-metrics/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-ovn-metrics/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/7.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/7.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console' 
'/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator' -> 
'/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/d9aeaa1aa1d02c7e8201fbb13a3ee252fd99aa6b0819f3318aaa2bd88982712e.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/d9aeaa1aa1d02c7e8201fbb13a3ee252fd99aa6b0819f3318aaa2bd88982712e.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be' -> 
'/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator' -> 
'/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.log' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342' 
'/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints' -> 
'/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver' 
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter' -> 
'/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/registry-server' 
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea/marketplace-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea/marketplace-operator' 
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea/marketplace-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea/marketplace-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103/collect-profiles' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103/collect-profiles/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a' 
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b' 
'/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector/0.log' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector/0.log' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector/1.log' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector/1.log' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller/1.log' -> 
'/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-utilities/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412' -> 
'/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412/cert-manager-webhook' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412/cert-manager-webhook' '/ostree/deploy/rhcos/var/log/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412/cert-manager-webhook/0.log' -> '/tmp/crc-logs-artifacts/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412/cert-manager-webhook/0.log' + sudo chown -R core:core /tmp/crc-logs-artifacts home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_copy_logs_from_crc.log0000644000175000017500000003511715133657716030345 0ustar zuulzuulExecuting: program /usr/bin/ssh host api.crc.testing, user core, command sftp OpenSSH_9.9p1, OpenSSL 3.5.1 1 Jul 2025 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: configuration requests final Match pass debug1: re-parsing configuration debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: Connecting to api.crc.testing [38.102.83.220] port 22. debug1: Connection established. 
debug1: identity file /home/zuul/.ssh/id_cifw type 2 debug1: identity file /home/zuul/.ssh/id_cifw-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_9.9 debug1: Remote protocol version 2.0, remote software version OpenSSH_8.7 debug1: compat_banner: match: OpenSSH_8.7 pat OpenSSH* compat 0x04000000 debug1: Authenticating to api.crc.testing:22 as 'core' debug1: load_hostkeys: fopen /home/zuul/.ssh/known_hosts2: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ssh-ed25519 debug1: kex: server->client cipher: aes256-gcm@openssh.com MAC: compression: none debug1: kex: client->server cipher: aes256-gcm@openssh.com MAC: compression: none debug1: kex: curve25519-sha256 need=32 dh_need=32 debug1: kex: curve25519-sha256 need=32 dh_need=32 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: SSH2_MSG_KEX_ECDH_REPLY received debug1: Server host key: ssh-ed25519 SHA256:/ZfZ15bRL0d31T2CAq03Iw4h8DAqA2+9vySbGcnzmJo debug1: load_hostkeys: fopen /home/zuul/.ssh/known_hosts2: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: Host 'api.crc.testing' is known and matches the ED25519 host key. 
debug1: Found key in /home/zuul/.ssh/known_hosts:43 debug1: ssh_packet_send2_wrapped: resetting send seqnr 3 debug1: rekey out after 4294967296 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: ssh_packet_read_poll2: resetting read seqnr 3 debug1: SSH2_MSG_NEWKEYS received debug1: rekey in after 4294967296 blocks debug1: SSH2_MSG_EXT_INFO received debug1: kex_ext_info_client_parse: server-sig-algs= debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug1: No credentials were supplied, or the credentials were unavailable or inaccessible No Kerberos credentials available (default cache: KCM:) debug1: No credentials were supplied, or the credentials were unavailable or inaccessible No Kerberos credentials available (default cache: KCM:) debug1: Next authentication method: publickey debug1: Will attempt key: /home/zuul/.ssh/id_cifw ECDSA SHA256:+ectkkowT8UO342BaYaayieWw2qo7SgQGGoUemtz4GY explicit debug1: Offering public key: /home/zuul/.ssh/id_cifw ECDSA SHA256:+ectkkowT8UO342BaYaayieWw2qo7SgQGGoUemtz4GY explicit debug1: Server accepts key: /home/zuul/.ssh/id_cifw ECDSA SHA256:+ectkkowT8UO342BaYaayieWw2qo7SgQGGoUemtz4GY explicit Authenticated to api.crc.testing ([38.102.83.220]:22) using "publickey". debug1: pkcs11_del_provider: called, provider_id = (null) debug1: channel 0: new session [client-session] (inactive timeout: 0) debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. 
debug1: pledge: filesystem debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0 debug1: client_input_hostkeys: searching /home/zuul/.ssh/known_hosts for api.crc.testing / (none) debug1: client_input_hostkeys: searching /home/zuul/.ssh/known_hosts2 for api.crc.testing / (none) debug1: client_input_hostkeys: hostkeys file /home/zuul/.ssh/known_hosts2 does not exist debug1: client_input_hostkeys: no new or deprecated keys from server debug1: Remote: /var/home/core/.ssh/authorized_keys:28: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug1: Remote: /var/home/core/.ssh/authorized_keys:28: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug1: Sending subsystem: sftp debug1: pledge: fork scp: debug1: Fetching /tmp/crc-logs-artifacts/ to /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts scp: debug1: truncating at 4625 scp: debug1: truncating at 59795 scp: debug1: truncating at 31986 scp: debug1: truncating at 61484 scp: debug1: truncating at 1917 scp: debug1: truncating at 440 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 736 scp: debug1: truncating at 9874 scp: debug1: truncating at 15883 scp: debug1: truncating at 12298 scp: debug1: truncating at 16471 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 440 scp: debug1: truncating at 1976 scp: debug1: truncating at 329426 scp: debug1: truncating at 265 scp: debug1: truncating at 15541 scp: debug1: truncating at 116 scp: debug1: truncating at 11373 scp: debug1: truncating at 1648 scp: debug1: truncating at 1973 scp: debug1: truncating at 19721 scp: debug1: truncating at 739 scp: debug1: truncating at 59909 scp: debug1: truncating at 43448 scp: debug1: truncating at 9383 scp: debug1: truncating at 4785 scp: debug1: truncating at 5142 scp: debug1: truncating at 50621 scp: debug1: truncating at 59236 scp: debug1: truncating at 148903 scp: debug1: truncating at 85 
scp: debug1: truncating at 85 scp: debug1: truncating at 85 scp: debug1: truncating at 5593 scp: debug1: truncating at 6351 scp: debug1: truncating at 5919 scp: debug1: truncating at 23257 scp: debug1: truncating at 25256 scp: debug1: truncating at 7794 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 440 scp: debug1: truncating at 59637 scp: debug1: truncating at 120 scp: debug1: truncating at 381 scp: debug1: truncating at 1875 scp: debug1: truncating at 2038 scp: debug1: truncating at 9955 scp: debug1: truncating at 4976 scp: debug1: truncating at 98939 scp: debug1: truncating at 93408 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 23827 scp: debug1: truncating at 23569 scp: debug1: truncating at 96 scp: debug1: truncating at 0 scp: debug1: truncating at 122585 scp: debug1: truncating at 78087 scp: debug1: truncating at 73869 scp: debug1: truncating at 574827 scp: debug1: truncating at 398831 scp: debug1: truncating at 411501 scp: debug1: truncating at 25403 scp: debug1: truncating at 31322 scp: debug1: truncating at 51603 scp: debug1: truncating at 11526 scp: debug1: truncating at 75 scp: debug1: truncating at 43455 scp: debug1: truncating at 127371 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 7599 scp: debug1: truncating at 7732 scp: debug1: truncating at 12999 scp: debug1: truncating at 12595 scp: debug1: truncating at 10649 scp: debug1: truncating at 571 scp: debug1: truncating at 909 scp: debug1: truncating at 9654 scp: debug1: truncating at 13139 scp: debug1: truncating at 22065 scp: debug1: truncating at 1187 scp: debug1: truncating at 1054 scp: debug1: truncating at 14404 scp: debug1: truncating at 5821 scp: debug1: truncating at 664 scp: debug1: truncating at 4037 scp: debug1: truncating at 112225 scp: debug1: truncating at 177988 scp: debug1: truncating at 30933 scp: debug1: truncating at 83 scp: debug1: truncating at 0 scp: debug1: 
truncating at 1212 scp: debug1: truncating at 1345 scp: debug1: truncating at 49270 scp: debug1: truncating at 16333 scp: debug1: truncating at 30477 scp: debug1: truncating at 39824 scp: debug1: truncating at 28909 scp: debug1: truncating at 249420 scp: debug1: truncating at 23773 scp: debug1: truncating at 26036 scp: debug1: truncating at 58729 scp: debug1: truncating at 4141 scp: debug1: truncating at 760212 scp: debug1: truncating at 14943 scp: debug1: truncating at 11338 scp: debug1: truncating at 12256 scp: debug1: truncating at 1042 scp: debug1: truncating at 0 scp: debug1: truncating at 1043 scp: debug1: truncating at 11354 scp: debug1: truncating at 4901 scp: debug1: truncating at 20284 scp: debug1: truncating at 6726 scp: debug1: truncating at 12386 scp: debug1: truncating at 4974 scp: debug1: truncating at 132818 scp: debug1: truncating at 2076 scp: debug1: truncating at 155784 scp: debug1: truncating at 44792 scp: debug1: truncating at 737559 scp: debug1: truncating at 679184 scp: debug1: truncating at 64407 scp: debug1: truncating at 602 scp: debug1: truncating at 2922 scp: debug1: truncating at 434522 scp: debug1: truncating at 1508074 scp: debug1: truncating at 491271 scp: debug1: truncating at 217484 scp: debug1: truncating at 210349 scp: debug1: truncating at 4640 scp: debug1: truncating at 4680 scp: debug1: truncating at 5158 scp: debug1: truncating at 1961847 scp: debug1: truncating at 8033 scp: debug1: truncating at 2347 scp: debug1: truncating at 0 scp: debug1: truncating at 2415 scp: debug1: truncating at 5622 scp: debug1: truncating at 16967 scp: debug1: truncating at 33371 scp: debug1: truncating at 31684 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 2031 scp: debug1: truncating at 46063 scp: debug1: truncating at 11634 scp: debug1: truncating at 391605 scp: debug1: truncating at 890 scp: debug1: truncating at 164882 scp: debug1: truncating at 3764 scp: debug1: truncating at 61 scp: debug1: truncating 
at 61 scp: debug1: truncating at 31683 scp: debug1: truncating at 6688 scp: debug1: truncating at 51450 scp: debug1: truncating at 3349 scp: debug1: truncating at 52224 scp: debug1: truncating at 37860 scp: debug1: truncating at 23357 scp: debug1: truncating at 16479 scp: debug1: truncating at 46647 scp: debug1: truncating at 296671 scp: debug1: truncating at 100273 scp: debug1: truncating at 119903 scp: debug1: truncating at 34762 scp: debug1: truncating at 34528 scp: debug1: truncating at 39921 scp: debug1: truncating at 91140 scp: debug1: truncating at 34964 scp: debug1: truncating at 995512 scp: debug1: truncating at 11627525 scp: debug1: truncating at 7599 scp: debug1: truncating at 7732 scp: debug1: truncating at 5271 scp: debug1: truncating at 7598 scp: debug1: truncating at 10792 scp: debug1: truncating at 1212 scp: debug1: truncating at 1345 scp: debug1: truncating at 79699 scp: debug1: truncating at 136260 scp: debug1: truncating at 1173 scp: debug1: truncating at 1040 scp: debug1: truncating at 1276 scp: debug1: truncating at 1386 scp: debug1: truncating at 736 scp: debug1: truncating at 27628 scp: debug1: truncating at 117785 scp: debug1: truncating at 475401 scp: debug1: truncating at 222883 scp: debug1: truncating at 408 scp: debug1: truncating at 408 scp: debug1: truncating at 411 scp: debug1: truncating at 411 scp: debug1: truncating at 392 scp: debug1: truncating at 392 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 404 scp: debug1: truncating at 404 scp: debug1: truncating at 414 scp: debug1: truncating at 414 scp: debug1: truncating at 80 scp: debug1: truncating at 80 scp: debug1: truncating at 157417 scp: debug1: truncating at 56924 scp: debug1: truncating at 1316 scp: debug1: truncating at 1183 scp: debug1: truncating at 0 scp: debug1: truncating at 440 scp: debug1: truncating at 0 scp: debug1: truncating at 167419 scp: debug1: truncating at 38969 scp: debug1: truncating at 172802 scp: debug1: truncating at 
769174 scp: debug1: truncating at 227835 scp: debug1: truncating at 27836 scp: debug1: truncating at 59897 scp: debug1: truncating at 214174 scp: debug1: truncating at 84973 scp: debug1: truncating at 27450 scp: debug1: truncating at 40893 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 18595 scp: debug1: truncating at 65035 scp: debug1: truncating at 17107 scp: debug1: truncating at 1533 scp: debug1: truncating at 1533 scp: debug1: truncating at 60194 scp: debug1: truncating at 180933 scp: debug1: truncating at 396 scp: debug1: truncating at 396 scp: debug1: truncating at 11549 scp: debug1: truncating at 13994 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 43448 scp: debug1: truncating at 27628 scp: debug1: truncating at 23803 scp: debug1: truncating at 57419 scp: debug1: truncating at 8727966 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 74 scp: debug1: truncating at 74 scp: debug1: truncating at 74 scp: debug1: truncating at 17961 scp: debug1: truncating at 17958 scp: debug1: truncating at 20593 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 240 scp: debug1: truncating at 1184 scp: debug1: truncating at 518 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 24998 scp: debug1: truncating at 29309 scp: debug1: truncating at 45393 scp: debug1: truncating at 48554 scp: debug1: truncating at 2301 scp: debug1: truncating at 614882 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 73965 scp: debug1: truncating at 324520 scp: debug1: truncating at 193188 scp: debug1: truncating at 2192 scp: debug1: truncating at 1509 scp: debug1: truncating at 5045 scp: debug1: truncating at 101 scp: 
debug1: truncating at 101 scp: debug1: truncating at 101 scp: debug1: truncating at 22128 scp: debug1: truncating at 79965 scp: debug1: truncating at 15339 scp: debug1: truncating at 17534 scp: debug1: truncating at 1212 scp: debug1: truncating at 1345 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: channel 0: free: client-session, nchannels 1 Transferred: sent 172588, received 40433328 bytes, in 1.3 seconds Bytes per second: sent 136637.8, received 32011048.4 debug1: Exit status 0

home/zuul/zuul-output/logs/ci-framework-data/logs/ansible.log

2026-01-20 10:54:06,928 p=30902 u=zuul n=ansible | Starting galaxy collection install process 2026-01-20 10:54:06,929 p=30902 u=zuul n=ansible | Process install dependency map 2026-01-20 10:54:22,513 p=30902 u=zuul n=ansible | Starting collection install process 2026-01-20 10:54:22,514 p=30902 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+05fab9c7' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2026-01-20 10:54:23,013 p=30902 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+05fab9c7 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2026-01-20 10:54:23,013 p=30902 u=zuul n=ansible | cifmw.general:1.0.0+05fab9c7 was installed successfully 2026-01-20 10:54:23,013 p=30902 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2026-01-20 10:54:23,070 p=30902 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2026-01-20 10:54:23,070 p=30902 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2026-01-20 10:54:23,070 p=30902 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2026-01-20 10:54:23,794 p=30902 u=zuul 
n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general 2026-01-20 10:54:23,794 p=30902 u=zuul n=ansible | community.general:10.0.1 was installed successfully 2026-01-20 10:54:23,794 p=30902 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix' 2026-01-20 10:54:23,845 p=30902 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix 2026-01-20 10:54:23,845 p=30902 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully 2026-01-20 10:54:23,845 p=30902 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils' 2026-01-20 10:54:23,943 p=30902 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils 2026-01-20 10:54:23,943 p=30902 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully 2026-01-20 10:54:23,943 p=30902 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt' 2026-01-20 10:54:23,967 p=30902 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt 2026-01-20 10:54:23,967 p=30902 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully 2026-01-20 10:54:23,968 p=30902 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto' 2026-01-20 10:54:24,118 p=30902 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto 2026-01-20 10:54:24,118 p=30902 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully 2026-01-20 10:54:24,118 p=30902 u=zuul n=ansible | Installing 
'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core' 2026-01-20 10:54:24,238 p=30902 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core 2026-01-20 10:54:24,238 p=30902 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully 2026-01-20 10:54:24,238 p=30902 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon' 2026-01-20 10:54:24,309 p=30902 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon 2026-01-20 10:54:24,309 p=30902 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully 2026-01-20 10:54:24,309 p=30902 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template' 2026-01-20 10:54:24,328 p=30902 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template 2026-01-20 10:54:24,328 p=30902 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully 2026-01-20 10:54:24,328 p=30902 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos' 2026-01-20 10:54:24,563 p=30902 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos 2026-01-20 10:54:24,563 p=30902 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully 2026-01-20 10:54:24,563 p=30902 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios' 2026-01-20 10:54:24,827 p=30902 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at 
/home/zuul/.ansible/collections/ansible_collections/cisco/ios 2026-01-20 10:54:24,827 p=30902 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully 2026-01-20 10:54:24,827 p=30902 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx' 2026-01-20 10:54:24,860 p=30902 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx 2026-01-20 10:54:24,861 p=30902 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully 2026-01-20 10:54:24,861 p=30902 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2026-01-20 10:54:24,888 p=30902 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd 2026-01-20 10:54:24,888 p=30902 u=zuul n=ansible | community.okd:4.0.0 was installed successfully 2026-01-20 10:54:24,888 p=30902 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@' 2026-01-20 10:54:24,976 p=30902 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@ 2026-01-20 10:54:24,976 p=30902 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully 2026-01-20 10:54:34,401 p=31537 u=zuul n=ansible | PLAY [Remove status flag] ****************************************************** 2026-01-20 10:54:34,421 p=31537 u=zuul n=ansible | TASK [Gathering Facts ] ******************************************************** 2026-01-20 10:54:34,421 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:34 +0000 (0:00:00.038) 0:00:00.038 ******* 2026-01-20 10:54:34,421 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:34 +0000 (0:00:00.036) 0:00:00.036 ******* 2026-01-20 10:54:35,349 p=31537 u=zuul n=ansible | ok: 
[localhost] 2026-01-20 10:54:35,372 p=31537 u=zuul n=ansible | TASK [Delete success flag if exists path={{ ansible_user_dir }}/cifmw-success, state=absent] *** 2026-01-20 10:54:35,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.951) 0:00:00.989 ******* 2026-01-20 10:54:35,373 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.951) 0:00:00.987 ******* 2026-01-20 10:54:35,684 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:35,693 p=31537 u=zuul n=ansible | TASK [Inherit from parent scenarios if needed _raw_params=ci/playbooks/tasks/inherit_parent_scenario.yml] *** 2026-01-20 10:54:35,693 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.320) 0:00:01.309 ******* 2026-01-20 10:54:35,693 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.320) 0:00:01.308 ******* 2026-01-20 10:54:35,715 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/tasks/inherit_parent_scenario.yml for localhost 2026-01-20 10:54:35,769 p=31537 u=zuul n=ansible | TASK [Inherit from parent parameter file if instructed file={{ item }}] ******** 2026-01-20 10:54:35,769 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.076) 0:00:01.386 ******* 2026-01-20 10:54:35,769 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.076) 0:00:01.384 ******* 2026-01-20 10:54:35,791 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:35,798 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] *** 2026-01-20 10:54:35,798 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.028) 0:00:01.414 ******* 2026-01-20 10:54:35,798 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 
(0:00:00.028) 0:00:01.413 ******* 2026-01-20 10:54:35,821 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:35,827 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Get customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] *** 2026-01-20 10:54:35,827 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.029) 0:00:01.444 ******* 2026-01-20 10:54:35,827 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.029) 0:00:01.442 ******* 2026-01-20 10:54:35,910 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:35,917 p=31537 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] *** 2026-01-20 10:54:35,917 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.089) 0:00:01.533 ******* 2026-01-20 10:54:35,917 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.089) 0:00:01.532 ******* 2026-01-20 10:54:36,117 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:36,123 p=31537 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] *** 2026-01-20 10:54:36,123 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.205) 0:00:01.739 ******* 2026-01-20 10:54:36,123 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.205) 0:00:01.738 ******* 2026-01-20 10:54:36,144 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:36,152 p=31537 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ 
cifmw_install_ca_bundle_inline }}, mode=0644] *** 2026-01-20 10:54:36,152 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.028) 0:00:01.768 ******* 2026-01-20 10:54:36,152 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.028) 0:00:01.767 ******* 2026-01-20 10:54:36,171 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:36,177 p=31537 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] *** 2026-01-20 10:54:36,177 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.025) 0:00:01.794 ******* 2026-01-20 10:54:36,177 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.025) 0:00:01.792 ******* 2026-01-20 10:54:36,199 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:36,208 p=31537 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] *************** 2026-01-20 10:54:36,208 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.030) 0:00:01.825 ******* 2026-01-20 10:54:36,208 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.031) 0:00:01.823 ******* 2026-01-20 10:54:37,599 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:37,611 p=31537 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] *** 2026-01-20 10:54:37,611 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:37 +0000 (0:00:01.402) 0:00:03.227 ******* 2026-01-20 10:54:37,611 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:37 +0000 (0:00:01.402) 0:00:03.226 ******* 2026-01-20 10:54:37,794 p=31537 u=zuul n=ansible | changed: [localhost] => (item=tmp) 2026-01-20 10:54:38,008 p=31537 u=zuul n=ansible | changed: [localhost] => 
(item=artifacts/repositories) 2026-01-20 10:54:38,516 p=31537 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup) 2026-01-20 10:54:38,524 p=31537 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] *** 2026-01-20 10:54:38,524 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:38 +0000 (0:00:00.913) 0:00:04.140 ******* 2026-01-20 10:54:38,524 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:38 +0000 (0:00:00.913) 0:00:04.139 ******* 2026-01-20 10:54:39,505 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:39,511 p=31537 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] *** 2026-01-20 10:54:39,511 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:39 +0000 (0:00:00.987) 0:00:05.127 ******* 2026-01-20 10:54:39,511 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:39 +0000 (0:00:00.987) 0:00:05.126 ******* 2026-01-20 10:54:40,626 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:40,632 p=31537 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] *** 2026-01-20 10:54:40,632 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:40 +0000 (0:00:01.121) 0:00:06.249 ******* 2026-01-20 10:54:40,632 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:40 +0000 (0:00:01.121) 0:00:06.247 ******* 2026-01-20 10:54:49,643 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:49,651 p=31537 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python 
setup.py install] *** 2026-01-20 10:54:49,652 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:49 +0000 (0:00:09.019) 0:00:15.268 ******* 2026-01-20 10:54:49,652 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:49 +0000 (0:00:09.019) 0:00:15.267 ******* 2026-01-20 10:54:50,444 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:50,451 p=31537 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] *** 2026-01-20 10:54:50,451 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:50 +0000 (0:00:00.799) 0:00:16.067 ******* 2026-01-20 10:54:50,451 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:50 +0000 (0:00:00.799) 0:00:16.066 ******* 2026-01-20 10:54:50,469 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:50,475 p=31537 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] *** 2026-01-20 10:54:50,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:50 +0000 (0:00:00.024) 0:00:16.092 ******* 2026-01-20 10:54:50,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:50 +0000 (0:00:00.024) 0:00:16.090 ******* 2026-01-20 10:54:51,083 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:51,089 p=31537 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ 
cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] ***
2026-01-20 10:54:51,089 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.613) 0:00:16.706 *******
2026-01-20 10:54:51,089 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.613) 0:00:16.704 *******
2026-01-20 10:54:51,118 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:51,124 p=31537 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] ***
2026-01-20 10:54:51,125 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.035) 0:00:16.741 *******
2026-01-20 10:54:51,125 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.035) 0:00:16.740 *******
2026-01-20 10:54:51,154 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:51,160 p=31537 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] ***
2026-01-20 10:54:51,160 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.035) 0:00:16.777 *******
2026-01-20 10:54:51,160 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.035) 0:00:16.775 *******
2026-01-20 10:54:51,191 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:51,199 p=31537 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] ***
2026-01-20 10:54:51,200 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.039) 0:00:16.816 *******
2026-01-20 10:54:51,200 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.039) 0:00:16.815 *******
2026-01-20 10:54:51,644 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:54:51,653 p=31537 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2026-01-20 10:54:51,653 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.453) 0:00:17.269 *******
2026-01-20 10:54:51,653 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.453) 0:00:17.268 *******
2026-01-20 10:54:52,322 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:54:52,336 p=31537 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2026-01-20 10:54:52,336 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.683) 0:00:17.952 *******
2026-01-20 10:54:52,336 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.683) 0:00:17.951 *******
2026-01-20 10:54:52,352 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,366 p=31537 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] ***
2026-01-20 10:54:52,366 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.030) 0:00:17.983 *******
2026-01-20 10:54:52,366 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.030) 0:00:17.981 *******
2026-01-20 10:54:52,382 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,392 p=31537 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] ***
2026-01-20 10:54:52,393 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.026) 0:00:18.009 *******
2026-01-20 10:54:52,393 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.026) 0:00:18.008 *******
2026-01-20 10:54:52,408 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,417 p=31537 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] ***
2026-01-20 10:54:52,417 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.034 *******
2026-01-20 10:54:52,417 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.032 *******
2026-01-20 10:54:52,444 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:54:52,452 p=31537 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] ***
2026-01-20 10:54:52,452 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.034) 0:00:18.068 *******
2026-01-20 10:54:52,452 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.034) 0:00:18.067 *******
2026-01-20 10:54:52,465 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,475 p=31537 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] ***
2026-01-20 10:54:52,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.022) 0:00:18.091 *******
2026-01-20 10:54:52,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.022) 0:00:18.090 *******
2026-01-20 10:54:52,489 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,499 p=31537 u=zuul n=ansible | TASK [Download the RPM name=krb_request] ***************************************
2026-01-20 10:54:52,499 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.115 *******
2026-01-20 10:54:52,499 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.114 *******
2026-01-20 10:54:52,512 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,522 p=31537 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] ***
2026-01-20 10:54:52,522 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.138 *******
2026-01-20 10:54:52,522 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.137 *******
2026-01-20 10:54:52,537 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,546 p=31537 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] ***
2026-01-20 10:54:52,546 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.163 *******
2026-01-20 10:54:52,546 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.161 *******
2026-01-20 10:54:52,560 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,569 p=31537 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] ***
2026-01-20 10:54:52,569 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.022) 0:00:18.186 *******
2026-01-20 10:54:52,569 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.022) 0:00:18.184 *******
2026-01-20 10:54:52,582 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,593 p=31537 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] ***
2026-01-20 10:54:52,593 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.209 *******
2026-01-20 10:54:52,593 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.208 *******
2026-01-20 10:54:52,606 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:52,616 p=31537 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] ***
2026-01-20 10:54:52,616 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.232 *******
2026-01-20 10:54:52,616 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.231 *******
2026-01-20 10:54:52,792 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:54:52,799 p=31537 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] ***
2026-01-20 10:54:52,799 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.183) 0:00:18.416 *******
2026-01-20 10:54:52,799 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.183) 0:00:18.414 *******
2026-01-20 10:54:52,994 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:54:53,001 p=31537 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] ***
2026-01-20 10:54:53,001 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.201) 0:00:18.618 *******
2026-01-20 10:54:53,001 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.201) 0:00:18.616 *******
2026-01-20 10:54:53,214 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:54:53,223 p=31537 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] ***
2026-01-20 10:54:53,223 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.221) 0:00:18.840 *******
2026-01-20 10:54:53,223 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.221) 0:00:18.838 *******
2026-01-20 10:54:53,753 p=31537 u=zuul n=ansible | fatal: [localhost]: FAILED! => changed: false
    elapsed: 0
    msg: 'Status code was -1 and not [200]: Request failed: '
    redirected: false
    status: -1
    url: http://38.102.83.51:8766/gating.repo
2026-01-20 10:54:53,753 p=31537 u=zuul n=ansible | ...ignoring
2026-01-20 10:54:53,762 p=31537 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] ***
2026-01-20 10:54:53,762 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.538) 0:00:19.378 *******
2026-01-20 10:54:53,762 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.538) 0:00:19.377 *******
2026-01-20 10:54:53,792 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:53,800 p=31537 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] ***
2026-01-20 10:54:53,800 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.038) 0:00:19.417 *******
2026-01-20 10:54:53,800 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.038) 0:00:19.415 *******
2026-01-20 10:54:53,831 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:53,838 p=31537 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] ***
2026-01-20 10:54:53,838 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.037) 0:00:19.454 *******
2026-01-20 10:54:53,838 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.037) 0:00:19.453 *******
2026-01-20 10:54:53,871 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:53,877 p=31537 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ cifmw_repo_setup_output }}/{{ _comp_repo }}] ***
2026-01-20 10:54:53,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.039) 0:00:19.493 *******
2026-01-20 10:54:53,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.039) 0:00:19.492 *******
2026-01-20 10:54:53,905 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:53,914 p=31537 u=zuul n=ansible | TASK [repo_setup : Lower the priority of componennt repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] ***
2026-01-20 10:54:53,914 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.037) 0:00:19.531 *******
2026-01-20 10:54:53,914 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.037) 0:00:19.529 *******
2026-01-20 10:54:53,944 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:54:53,950 p=31537 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] ***
2026-01-20 10:54:53,951 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.036) 0:00:19.567 *******
2026-01-20 10:54:53,951 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.036) 0:00:19.566 *******
2026-01-20 10:54:54,253 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:54:54,260 p=31537 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] ***
2026-01-20 10:54:54,260 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:54 +0000 (0:00:00.309) 0:00:19.876 *******
2026-01-20 10:54:54,260 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:54 +0000 (0:00:00.309) 0:00:19.875 *******
2026-01-20 10:54:54,480 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo)
2026-01-20 10:54:54,654 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo)
2026-01-20 10:54:54,661 p=31537 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] ***
2026-01-20 10:54:54,661 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:54 +0000 (0:00:00.401) 0:00:20.278 *******
2026-01-20 10:54:54,661 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:54 +0000 (0:00:00.401) 0:00:20.276 *******
2026-01-20 10:54:55,070 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:54:55,077 p=31537 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] ***
2026-01-20 10:54:55,077 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.416) 0:00:20.694 *******
2026-01-20 10:54:55,077 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.416) 0:00:20.692 *******
2026-01-20 10:54:55,318 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:54:55,329 p=31537 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] ***
2026-01-20 10:54:55,329 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.252) 0:00:20.946 *******
2026-01-20 10:54:55,330 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.252) 0:00:20.944 *******
2026-01-20 10:54:55,363 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml)
2026-01-20 10:54:55,372 p=31537 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] *********
2026-01-20 10:54:55,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.042) 0:00:20.988 *******
2026-01-20 10:54:55,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.042) 0:00:20.987 *******
2026-01-20 10:54:55,389 p=31537 u=zuul n=ansible | ok: [localhost] => cifmw_ci_setup_packages:
    - bash-completion
    - ca-certificates
    - git-core
    - make
    - tar
    - tmux
    - python3-pip
2026-01-20 10:54:55,398 p=31537 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] ***
2026-01-20 10:54:55,398 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.025) 0:00:21.014 *******
2026-01-20 10:54:55,398 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.025) 0:00:21.013 *******
2026-01-20 10:55:29,325 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:55:29,332 p=31537 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] ***
2026-01-20 10:55:29,332 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:33.934) 0:00:54.948 *******
2026-01-20 10:55:29,334 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:33.936) 0:00:54.949 *******
2026-01-20 10:55:29,530 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:55:29,537 p=31537 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] ***
2026-01-20 10:55:29,537 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:00.204) 0:00:55.153 *******
2026-01-20 10:55:29,537 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:00.202) 0:00:55.152 *******
2026-01-20 10:55:29,725 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:55:29,731 p=31537 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] ***
2026-01-20 10:55:29,731 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:00.194) 0:00:55.348 *******
2026-01-20 10:55:29,732 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:00.194) 0:00:55.346 *******
2026-01-20 10:55:34,985 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:55:34,991 p=31537 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] ***
2026-01-20 10:55:34,992 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:34 +0000 (0:00:05.260) 0:01:00.608 *******
2026-01-20 10:55:34,992 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:34 +0000 (0:00:05.260) 0:01:00.607 *******
2026-01-20 10:55:35,014 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:35,021 p=31537 u=zuul n=ansible | TASK [ci_setup : Create completion file] ***************************************
2026-01-20 10:55:35,021 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.029) 0:01:00.637 *******
2026-01-20 10:55:35,021 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.029) 0:01:00.636 *******
2026-01-20 10:55:35,300 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:55:35,306 p=31537 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] ***
2026-01-20 10:55:35,306 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.285) 0:01:00.923 *******
2026-01-20 10:55:35,306 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.285) 0:01:00.921 *******
2026-01-20 10:55:35,617 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:55:35,623 p=31537 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] ****
2026-01-20 10:55:35,623 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.316) 0:01:01.239 *******
2026-01-20 10:55:35,623 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.316) 0:01:01.238 *******
2026-01-20 10:55:35,639 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:35,646 p=31537 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] ***
2026-01-20 10:55:35,646 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.023) 0:01:01.263 *******
2026-01-20 10:55:35,646 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.023) 0:01:01.261 *******
2026-01-20 10:55:35,660 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:35,665 p=31537 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] ***
2026-01-20 10:55:35,665 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.282 *******
2026-01-20 10:55:35,665 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.280 *******
2026-01-20 10:55:35,678 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:35,685 p=31537 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] ***
2026-01-20 10:55:35,685 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.301 *******
2026-01-20 10:55:35,685 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.300 *******
2026-01-20 10:55:35,698 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:35,704 p=31537 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] ***
2026-01-20 10:55:35,704 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.321 *******
2026-01-20 10:55:35,704 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.319 *******
2026-01-20 10:55:35,716 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:35,724 p=31537 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] ***
2026-01-20 10:55:35,724 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.020) 0:01:01.341 *******
2026-01-20 10:55:35,724 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.020) 0:01:01.339 *******
2026-01-20 10:55:35,742 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:35,749 p=31537 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] ***
2026-01-20 10:55:35,749 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.024) 0:01:01.365 *******
2026-01-20 10:55:35,749 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.024) 0:01:01.364 *******
2026-01-20 10:55:36,033 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr)
2026-01-20 10:55:36,233 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs)
2026-01-20 10:55:36,429 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp)
2026-01-20 10:55:36,633 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes)
2026-01-20 10:55:36,818 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters)
2026-01-20 10:55:36,830 p=31537 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] ***
2026-01-20 10:55:36,830 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:36 +0000 (0:00:01.081) 0:01:02.447 *******
2026-01-20 10:55:36,831 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:36 +0000 (0:00:01.081) 0:01:02.445 *******
2026-01-20 10:55:36,949 p=31537 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] ***
2026-01-20 10:55:36,949 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:36 +0000 (0:00:00.118) 0:01:02.566 *******
2026-01-20 10:55:36,949 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:36 +0000 (0:00:00.118) 0:01:02.564 *******
2026-01-20 10:55:37,167 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts)
2026-01-20 10:55:37,326 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks)
2026-01-20 10:55:37,487 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters)
2026-01-20 10:55:37,494 p=31537 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] ***
2026-01-20 10:55:37,494 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.544) 0:01:03.110 *******
2026-01-20 10:55:37,494 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.544) 0:01:03.109 *******
2026-01-20 10:55:37,526 p=31537 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] ***
2026-01-20 10:55:37,526 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.032) 0:01:03.142 *******
2026-01-20 10:55:37,526 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.032) 0:01:03.141 *******
2026-01-20 10:55:37,568 p=31537 u=zuul n=ansible | ok: [localhost] => (item={'branch': 'main', 'change': '320', 'change_url': 'https://github.com/openstack-k8s-operators/watcher-operator/pull/320', 'commit_id': '581f46572d07c53c87a11aa044b02e73f253eea6', 'patchset': '581f46572d07c53c87a11aa044b02e73f253eea6', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/watcher-operator', 'name': 'openstack-k8s-operators/watcher-operator', 'short_name': 'watcher-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/watcher-operator'}, 'topic': None})
2026-01-20 10:55:37,576 p=31537 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] ***
2026-01-20 10:55:37,576 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.050) 0:01:03.193 *******
2026-01-20 10:55:37,576 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.050) 0:01:03.191 *******
2026-01-20 10:55:37,619 p=31537 u=zuul n=ansible | ok: [localhost] => (item={'branch': 'main', 'change': '320', 'change_url': 'https://github.com/openstack-k8s-operators/watcher-operator/pull/320', 'commit_id': '581f46572d07c53c87a11aa044b02e73f253eea6', 'patchset': '581f46572d07c53c87a11aa044b02e73f253eea6', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/watcher-operator', 'name': 'openstack-k8s-operators/watcher-operator', 'short_name': 'watcher-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/watcher-operator'}, 'topic': None}) => msg: |
    _repo_operator_name: watcher
    _repo_operator_info: [{'key': 'WATCHER_REPO', 'value': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator'}, {'key': 'WATCHER_BRANCH', 'value': ''}]
    cifmw_install_yamls_operators_repo: {'WATCHER_REPO': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator', 'WATCHER_BRANCH': ''}
2026-01-20 10:55:37,632 p=31537 u=zuul n=ansible | TASK [Customize install_yamls devsetup vars if needed name=install_yamls, tasks_from=customize_devsetup_vars.yml] ***
2026-01-20 10:55:37,632 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.056) 0:01:03.249 *******
2026-01-20 10:55:37,632 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.056) 0:01:03.247 *******
2026-01-20 10:55:37,674 p=31537 u=zuul n=ansible | TASK [install_yamls : Update opm_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^opm_version:, line=opm_version: {{ cifmw_install_yamls_opm_version }}, state=present] ***
2026-01-20 10:55:37,674 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.042) 0:01:03.291 *******
2026-01-20 10:55:37,674 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.042) 0:01:03.289 *******
2026-01-20 10:55:37,695 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:37,701 p=31537 u=zuul n=ansible | TASK [install_yamls : Update sdk_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^sdk_version:, line=sdk_version: {{ cifmw_install_yamls_sdk_version }}, state=present] ***
2026-01-20 10:55:37,702 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.027) 0:01:03.318 *******
2026-01-20 10:55:37,702 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.027) 0:01:03.317 *******
2026-01-20 10:55:37,719 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:37,726 p=31537 u=zuul n=ansible | TASK [install_yamls : Update go_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^go_version:, line=go_version: {{ cifmw_install_yamls_go_version }}, state=present] ***
2026-01-20 10:55:37,726 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.024) 0:01:03.342 *******
2026-01-20 10:55:37,726 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.024) 0:01:03.341 *******
2026-01-20 10:55:37,745 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:37,753 p=31537 u=zuul n=ansible | TASK [install_yamls : Update kustomize_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^kustomize_version:, line=kustomize_version: {{ cifmw_install_yamls_kustomize_version }}, state=present] ***
2026-01-20 10:55:37,753 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.026) 0:01:03.369 *******
2026-01-20 10:55:37,753 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.026) 0:01:03.368 *******
2026-01-20 10:55:37,777 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:37,790 p=31537 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] ***
2026-01-20 10:55:37,790 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.037) 0:01:03.407 *******
2026-01-20 10:55:37,790 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.037) 0:01:03.405 *******
2026-01-20 10:55:37,881 p=31537 u=zuul n=ansible | ok: [localhost] => (item={'BMO_SETUP': False, 'INSTALL_CERT_MANAGER': False})
2026-01-20 10:55:37,890 p=31537 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|antelope|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] ***
2026-01-20 10:55:37,890 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.099) 0:01:03.507 *******
2026-01-20 10:55:37,890 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.099) 0:01:03.505 *******
2026-01-20 10:55:37,930 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:55:37,937 p=31537 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] ***
2026-01-20 10:55:37,937 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.046) 0:01:03.553 *******
2026-01-20 10:55:37,937 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.046) 0:01:03.552 *******
2026-01-20 10:55:38,534 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:55:38,543 p=31537 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] ***
2026-01-20 10:55:38,543 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.605) 0:01:04.159 *******
2026-01-20 10:55:38,543 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.605) 0:01:04.158 *******
2026-01-20 10:55:38,732 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:55:38,739 p=31537 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] ***
2026-01-20 10:55:38,739 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.196) 0:01:04.355 *******
2026-01-20 10:55:38,739 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.196) 0:01:04.354 *******
2026-01-20 10:55:38,776 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:55:38,797 p=31537 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] ***
2026-01-20 10:55:38,797 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.057) 0:01:04.413 *******
2026-01-20 10:55:38,797 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.058) 0:01:04.412 *******
2026-01-20 10:55:39,224 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:55:39,237 p=31537 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] ***
2026-01-20 10:55:39,238 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.440) 0:01:04.854 *******
2026-01-20 10:55:39,238 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.440) 0:01:04.853 *******
2026-01-20 10:55:39,266 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:55:39,277 p=31537 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] ***
2026-01-20 10:55:39,277 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.039) 0:01:04.894 *******
2026-01-20 10:55:39,277 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.039)
0:01:04.892 ******* 2026-01-20 10:55:39,296 p=31537 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_environment: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: false OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator 2026-01-20 10:55:39,305 p=31537 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2026-01-20 10:55:39,305 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.027) 0:01:04.921 ******* 2026-01-20 10:55:39,305 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.027) 0:01:04.920 ******* 2026-01-20 10:55:39,337 p=31537 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_OS_IMG_TYPE: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default 
BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: false BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest 
DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: 
https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: false INSTALL_NMSTATE: true || false INSTALL_NNCP: true 
|| false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git 
KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' 
NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main 
NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git 
OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: os**********et SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: 
https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator tripleo_deploy: 'export REGISTRY_PWD:' 2026-01-20 10:55:39,345 p=31537 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] *** 2026-01-20 10:55:39,345 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.040) 0:01:04.961 ******* 2026-01-20 10:55:39,345 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.040) 0:01:04.960 ******* 2026-01-20 10:55:39,683 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:39,691 p=31537 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] ***** 2026-01-20 10:55:39,691 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.346) 0:01:05.308 ******* 2026-01-20 10:55:39,691 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.346) 0:01:05.306 ******* 
2026-01-20 10:55:39,715 p=31537 u=zuul n=ansible | ok: [localhost] =>
    cifmw_generate_makes:
      changed: false
      debug:
        /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile:
        - all
        - help
        - cleanup
        - deploy_cleanup
        - wait
        - crc_storage
        - crc_storage_cleanup
        - crc_storage_release
        - crc_storage_with_retries
        - crc_storage_cleanup_with_retries
        - operator_namespace
        - namespace
        - namespace_cleanup
        - input
        - input_cleanup
        - crc_bmo_setup
        - crc_bmo_cleanup
        - openstack_prep
        - openstack
        - openstack_wait
        - openstack_init
        - openstack_cleanup
        - openstack_repo
        - openstack_deploy_prep
        - openstack_deploy
        - openstack_wait_deploy
        - openstack_deploy_cleanup
        - openstack_update_run
        - update_services
        - update_system
        - openstack_patch_version
        - edpm_deploy_generate_keys
        - edpm_patch_ansible_runner_image
        - edpm_deploy_prep
        - edpm_deploy_cleanup
        - edpm_deploy
        - edpm_deploy_baremetal_prep
        - edpm_deploy_baremetal
        - edpm_wait_deploy_baremetal
        - edpm_wait_deploy
        - edpm_register_dns
        - edpm_nova_discover_hosts
        - openstack_crds
        - openstack_crds_cleanup
        - edpm_deploy_networker_prep
        - edpm_deploy_networker_cleanup
        - edpm_deploy_networker
        - infra_prep
        - infra
        - infra_cleanup
        - dns_deploy_prep
        - dns_deploy
        - dns_deploy_cleanup
        - netconfig_deploy_prep
        - netconfig_deploy
        - netconfig_deploy_cleanup
        - memcached_deploy_prep
        - memcached_deploy
        - memcached_deploy_cleanup
        - keystone_prep
        - keystone
        - keystone_cleanup
        - keystone_deploy_prep
        - keystone_deploy
        - keystone_deploy_cleanup
        - barbican_prep
        - barbican
        - barbican_cleanup
        - barbican_deploy_prep
        - barbican_deploy
        - barbican_deploy_validate
        - barbican_deploy_cleanup
        - mariadb
        - mariadb_cleanup
        - mariadb_deploy_prep
        - mariadb_deploy
        - mariadb_deploy_cleanup
        - placement_prep
        - placement
        - placement_cleanup
        - placement_deploy_prep
        - placement_deploy
        - placement_deploy_cleanup
        - glance_prep
        - glance
        - glance_cleanup
        - glance_deploy_prep
        - glance_deploy
        - glance_deploy_cleanup
        - ovn_prep
        - ovn
        - ovn_cleanup
        - ovn_deploy_prep
        - ovn_deploy
        - ovn_deploy_cleanup
        - neutron_prep
        - neutron
        - neutron_cleanup
        - neutron_deploy_prep
        - neutron_deploy
        - neutron_deploy_cleanup
        - cinder_prep
        - cinder
        - cinder_cleanup
        - cinder_deploy_prep
        - cinder_deploy
        - cinder_deploy_cleanup
        - rabbitmq_prep
        - rabbitmq
        - rabbitmq_cleanup
        - rabbitmq_deploy_prep
        - rabbitmq_deploy
        - rabbitmq_deploy_cleanup
        - ironic_prep
        - ironic
        - ironic_cleanup
        - ironic_deploy_prep
        - ironic_deploy
        - ironic_deploy_cleanup
        - octavia_prep
        - octavia
        - octavia_cleanup
        - octavia_deploy_prep
        - octavia_deploy
        - octavia_deploy_cleanup
        - designate_prep
        - designate
        - designate_cleanup
        - designate_deploy_prep
        - designate_deploy
        - designate_deploy_cleanup
        - nova_prep
        - nova
        - nova_cleanup
        - nova_deploy_prep
        - nova_deploy
        - nova_deploy_cleanup
        - mariadb_kuttl_run
        - mariadb_kuttl
        - kuttl_db_prep
        - kuttl_db_cleanup
        - kuttl_common_prep
        - kuttl_common_cleanup
        - keystone_kuttl_run
        - keystone_kuttl
        - barbican_kuttl_run
        - barbican_kuttl
        - placement_kuttl_run
        - placement_kuttl
        - cinder_kuttl_run
        - cinder_kuttl
        - neutron_kuttl_run
        - neutron_kuttl
        - octavia_kuttl_run
        - octavia_kuttl
        - designate_kuttl
        - designate_kuttl_run
        - ovn_kuttl_run
        - ovn_kuttl
        - infra_kuttl_run
        - infra_kuttl
        - ironic_kuttl_run
        - ironic_kuttl
        - ironic_kuttl_crc
        - heat_kuttl_run
        - heat_kuttl
        - heat_kuttl_crc
        - ansibleee_kuttl_run
        - ansibleee_kuttl_cleanup
        - ansibleee_kuttl_prep
        - ansibleee_kuttl
        - glance_kuttl_run
        - glance_kuttl
        - manila_kuttl_run
        - manila_kuttl
        - swift_kuttl_run
        - swift_kuttl
        - horizon_kuttl_run
        - horizon_kuttl
        - openstack_kuttl_run
        - openstack_kuttl
        - mariadb_chainsaw_run
        - mariadb_chainsaw
        - horizon_prep
        - horizon
        - horizon_cleanup
        - horizon_deploy_prep
        - horizon_deploy
        - horizon_deploy_cleanup
        - heat_prep
        - heat
        - heat_cleanup
        - heat_deploy_prep
        - heat_deploy
        - heat_deploy_cleanup
        - ansibleee_prep
        - ansibleee
        - ansibleee_cleanup
        - baremetal_prep
        - baremetal
        - baremetal_cleanup
        - ceph_help
        - ceph
        - ceph_cleanup
        - rook_prep
        - rook
        - rook_deploy_prep
        - rook_deploy
        - rook_crc_disk
        - rook_cleanup
        - lvms
        - nmstate
        - nncp
        - nncp_cleanup
        - netattach
        - netattach_cleanup
        - metallb
        - metallb_config
        - metallb_config_cleanup
        - metallb_cleanup
        - loki
        - loki_cleanup
        - loki_deploy
        - loki_deploy_cleanup
        - netobserv
        - netobserv_cleanup
        - netobserv_deploy
        - netobserv_deploy_cleanup
        - manila_prep
        - manila
        - manila_cleanup
        - manila_deploy_prep
        - manila_deploy
        - manila_deploy_cleanup
        - telemetry_prep
        - telemetry
        - telemetry_cleanup
        - telemetry_deploy_prep
        - telemetry_deploy
        - telemetry_deploy_cleanup
        - telemetry_kuttl_run
        - telemetry_kuttl
        - swift_prep
        - swift
        - swift_cleanup
        - swift_deploy_prep
        - swift_deploy
        - swift_deploy_cleanup
        - certmanager
        - certmanager_cleanup
        - validate_marketplace
        - redis_deploy_prep
        - redis_deploy
        - redis_deploy_cleanup
        - set_slower_etcd_profile
        /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile:
        - help
        - download_tools
        - nfs
        - nfs_cleanup
        - crc
        - crc_cleanup
        - crc_scrub
        - crc_attach_default_interface
        - crc_attach_default_interface_cleanup
        - ipv6_lab_network
        - ipv6_lab_network_cleanup
        - ipv6_lab_nat64_router
        - ipv6_lab_nat64_router_cleanup
        - ipv6_lab_sno
        - ipv6_lab_sno_cleanup
        - ipv6_lab
        - ipv6_lab_cleanup
        - attach_default_interface
        - attach_default_interface_cleanup
        - network_isolation_bridge
        - network_isolation_bridge_cleanup
        - edpm_baremetal_compute
        - edpm_compute
        - edpm_compute_bootc
        - edpm_ansible_runner
        - edpm_computes_bgp
        - edpm_compute_repos
        - edpm_compute_cleanup
        - edpm_networker
        - edpm_networker_cleanup
        - edpm_deploy_instance
        - tripleo_deploy
        - standalone_deploy
        - standalone_sync
        - standalone
        - standalone_cleanup
        - standalone_snapshot
        - standalone_revert
        - cifmw_prepare
        - cifmw_cleanup
        - bmaas_network
        - bmaas_network_cleanup
        - bmaas_route_crc_and_crc_bmaas_networks
        - bmaas_route_crc_and_crc_bmaas_networks_cleanup
        - bmaas_crc_attach_network
        - bmaas_crc_attach_network_cleanup
        - bmaas_crc_baremetal_bridge
        - bmaas_crc_baremetal_bridge_cleanup
        - bmaas_baremetal_net_nad
        - bmaas_baremetal_net_nad_cleanup
        - bmaas_metallb
        - bmaas_metallb_cleanup
        - bmaas_virtual_bms
        - bmaas_virtual_bms_cleanup
        - bmaas_sushy_emulator
        - bmaas_sushy_emulator_cleanup
        - bmaas_sushy_emulator_wait
        - bmaas_generate_nodes_yaml
        - bmaas
        - bmaas_cleanup
      failed: false
      success: true
2026-01-20 10:55:39,724 p=31537 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] ***
2026-01-20 10:55:39,724 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.032) 0:01:05.341 *******
2026-01-20 10:55:39,724 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.032) 0:01:05.339 *******
2026-01-20 10:55:40,123 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:55:40,130 p=31537 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] ***
2026-01-20 10:55:40,130 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.405) 0:01:05.746 *******
2026-01-20 10:55:40,130 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.405) 0:01:05.745 *******
2026-01-20 10:55:40,150 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:55:40,164 p=31537 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] ***
2026-01-20 10:55:40,164 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.034) 0:01:05.780 *******
2026-01-20 10:55:40,164 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000
(0:00:00.034) 0:01:05.779 ******* 2026-01-20 10:55:40,920 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:40,927 p=31537 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] *** 2026-01-20 10:55:40,927 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.763) 0:01:06.544 ******* 2026-01-20 10:55:40,927 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.763) 0:01:06.542 ******* 2026-01-20 10:55:40,947 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:40,960 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] *** 2026-01-20 10:55:40,960 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.032) 0:01:06.576 ******* 2026-01-20 10:55:40,960 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.032) 0:01:06.575 ******* 2026-01-20 10:55:41,359 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:41,371 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:55:41,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.411) 0:01:06.988 ******* 2026-01-20 10:55:41,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.411) 
0:01:06.987 ******* 2026-01-20 10:55:41,431 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:41,437 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2026-01-20 10:55:41,438 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.066) 0:01:07.054 ******* 2026-01-20 10:55:41,438 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.066) 0:01:07.053 ******* 2026-01-20 10:55:41,540 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:41,557 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:55:41,557 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.119) 0:01:07.173 ******* 2026-01-20 10:55:41,557 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.119) 0:01:07.172 ******* 2026-01-20 10:55:41,685 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Download needed tools', 'inventory': 'localhost,', 'connection': 'local', 'type': 'playbook', 'source': '/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml'}) 2026-01-20 10:55:41,697 p=31537 u=zuul n=ansible | TASK [run_hook : Set playbook path for Download needed tools cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2026-01-20 10:55:41,697 p=31537 
u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.140) 0:01:07.314 ******* 2026-01-20 10:55:41,698 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.140) 0:01:07.312 ******* 2026-01-20 10:55:41,739 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:41,750 p=31537 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2026-01-20 10:55:41,750 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.052) 0:01:07.366 ******* 2026-01-20 10:55:41,750 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.052) 0:01:07.365 ******* 2026-01-20 10:55:41,952 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:41,963 p=31537 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] *** 2026-01-20 10:55:41,963 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.212) 0:01:07.579 ******* 2026-01-20 10:55:41,963 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.212) 0:01:07.578 ******* 2026-01-20 10:55:41,979 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:41,989 p=31537 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2026-01-20 10:55:41,989 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.025) 0:01:07.605 ******* 2026-01-20 10:55:41,989 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.025) 0:01:07.604 ******* 2026-01-20 10:55:42,168 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:42,179 p=31537 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2026-01-20 10:55:42,179 p=31537 u=zuul 
n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.190) 0:01:07.795 ******* 2026-01-20 10:55:42,179 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.190) 0:01:07.794 ******* 2026-01-20 10:55:42,196 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:42,206 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2026-01-20 10:55:42,206 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.027) 0:01:07.822 ******* 2026-01-20 10:55:42,206 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.027) 0:01:07.821 ******* 2026-01-20 10:55:42,384 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:42,401 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:55:42,402 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.195) 0:01:08.018 ******* 2026-01-20 10:55:42,402 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.195) 0:01:08.017 ******* 2026-01-20 10:55:42,607 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:42,620 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Download needed tools] *************** 2026-01-20 10:55:42,620 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.218) 0:01:08.237 ******* 2026-01-20 10:55:42,620 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.218) 0:01:08.235 ******* 2026-01-20 10:55:42,708 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_run_hook_without_retry.log 2026-01-20 10:56:19,010 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:19,019 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Download needed tools] ****************** 2026-01-20 10:56:19,019 
p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:36.399) 0:01:44.636 ******* 2026-01-20 10:56:19,020 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:36.399) 0:01:44.634 ******* 2026-01-20 10:56:19,035 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,043 p=31537 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:56:19,044 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.024) 0:01:44.660 ******* 2026-01-20 10:56:19,044 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.024) 0:01:44.659 ******* 2026-01-20 10:56:19,201 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:19,208 p=31537 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:56:19,208 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.164) 0:01:44.825 ******* 2026-01-20 10:56:19,209 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.164) 0:01:44.823 ******* 2026-01-20 10:56:19,221 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,254 p=31537 u=zuul n=ansible | PLAY [Prepare host virtualization] ********************************************* 2026-01-20 10:56:19,271 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-20 10:56:19,271 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.062) 0:01:44.888 ******* 2026-01-20 10:56:19,271 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.062) 0:01:44.886 ******* 2026-01-20 10:56:19,331 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:19,344 p=31537 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] *************** 
2026-01-20 10:56:19,345 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.073) 0:01:44.961 ******* 2026-01-20 10:56:19,345 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.073) 0:01:44.960 ******* 2026-01-20 10:56:19,370 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,380 p=31537 u=zuul n=ansible | TASK [Prepare OpenShift provisioner node name=openshift_provisioner_node] ****** 2026-01-20 10:56:19,380 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.035) 0:01:44.996 ******* 2026-01-20 10:56:19,380 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.035) 0:01:44.995 ******* 2026-01-20 10:56:19,402 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,433 p=31537 u=zuul n=ansible | PLAY [Run cifmw_setup infra, build package, container and operators, deploy EDPM] *** 2026-01-20 10:56:19,472 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-20 10:56:19,472 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.091) 0:01:45.088 ******* 2026-01-20 10:56:19,472 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.091) 0:01:45.087 ******* 2026-01-20 10:56:19,522 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:19,531 p=31537 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2026-01-20 10:56:19,531 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.059) 0:01:45.148 ******* 2026-01-20 10:56:19,531 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.059) 0:01:45.146 ******* 2026-01-20 10:56:19,704 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:19,713 p=31537 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file
existence that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] *** 2026-01-20 10:56:19,713 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.182) 0:01:45.330 ******* 2026-01-20 10:56:19,713 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.182) 0:01:45.328 ******* 2026-01-20 10:56:19,784 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,791 p=31537 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2026-01-20 10:56:19,791 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.077) 0:01:45.407 ******* 2026-01-20 10:56:19,791 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.077) 0:01:45.406 ******* 2026-01-20 10:56:19,814 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,823 p=31537 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | b64decode | from_yaml }}, cacheable=True] *** 2026-01-20 10:56:19,823 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.031) 0:01:45.439 ******* 2026-01-20 10:56:19,823 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.031) 0:01:45.438 ******* 2026-01-20 10:56:19,845 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,861 p=31537 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] ***************************************** 2026-01-20 10:56:19,861 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.038) 0:01:45.478 ******* 2026-01-20 10:56:19,861 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.038) 0:01:45.476 ******* 2026-01-20 10:56:19,882 p=31537 u=zuul
n=ansible | skipping: [localhost] 2026-01-20 10:56:19,891 p=31537 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] *********************************************** 2026-01-20 10:56:19,891 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.029) 0:01:45.507 ******* 2026-01-20 10:56:19,891 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.029) 0:01:45.506 ******* 2026-01-20 10:56:19,911 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,922 p=31537 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] ************** 2026-01-20 10:56:19,922 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.031) 0:01:45.538 ******* 2026-01-20 10:56:19,922 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.031) 0:01:45.537 ******* 2026-01-20 10:56:19,940 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,949 p=31537 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:56:19,949 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.027) 0:01:45.566 ******* 2026-01-20 10:56:19,949 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.027) 0:01:45.564 ******* 2026-01-20 10:56:20,139 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:20,147 p=31537 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] ***************** 2026-01-20 10:56:20,147 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.197) 0:01:45.763 ******* 2026-01-20 10:56:20,147 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.197) 0:01:45.762 ******* 2026-01-20 10:56:20,177 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for 
localhost 2026-01-20 10:56:20,190 p=31537 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2026-01-20 10:56:20,190 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.043) 0:01:45.806 ******* 2026-01-20 10:56:20,190 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.043) 0:01:45.805 ******* 2026-01-20 10:56:20,211 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,219 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2026-01-20 10:56:20,219 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.029) 0:01:45.835 ******* 2026-01-20 10:56:20,219 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.029) 0:01:45.834 ******* 2026-01-20 10:56:20,240 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,249 p=31537 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] *** 2026-01-20 10:56:20,249 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:45.866 ******* 2026-01-20 10:56:20,249 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:45.864 ******* 2026-01-20 10:56:20,272 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,281 p=31537 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | 
default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{********** cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] *** 2026-01-20 10:56:20,281 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.031) 0:01:45.897 ******* 2026-01-20 10:56:20,281 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.031) 0:01:45.896 ******* 2026-01-20 10:56:20,314 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:20,321 p=31537 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] *** 2026-01-20 10:56:20,321 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.040) 0:01:45.938 ******* 2026-01-20 10:56:20,321 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.040) 0:01:45.936 ******* 2026-01-20 10:56:20,488 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:20,497 p=31537 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] *** 2026-01-20 10:56:20,497 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.175) 0:01:46.114 ******* 2026-01-20 10:56:20,497 p=31537 u=zuul n=ansible | Tuesday 20 
January 2026 10:56:20 +0000 (0:00:00.175) 0:01:46.112 ******* 2026-01-20 10:56:20,522 p=31537 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2026-01-20 10:56:20,531 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] *** 2026-01-20 10:56:20,531 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.033) 0:01:46.147 ******* 2026-01-20 10:56:20,531 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.033) 0:01:46.146 ******* 2026-01-20 10:56:20,550 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,558 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] *** 2026-01-20 10:56:20,559 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.027) 0:01:46.175 ******* 2026-01-20 10:56:20,559 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.027) 0:01:46.174 ******* 2026-01-20 10:56:20,579 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,588 p=31537 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] *** 2026-01-20 10:56:20,588 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.029) 0:01:46.204 ******* 2026-01-20 10:56:20,588 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.029) 0:01:46.203 ******* 2026-01-20 
10:56:20,609 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,618 p=31537 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] *** 2026-01-20 10:56:20,618 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:46.235 ******* 2026-01-20 10:56:20,618 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:46.233 ******* 2026-01-20 10:56:20,643 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:20,651 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] ***************** 2026-01-20 10:56:20,651 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.033) 0:01:46.268 ******* 2026-01-20 10:56:20,651 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.033) 0:01:46.266 ******* 2026-01-20 10:56:20,676 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost 2026-01-20 10:56:20,689 p=31537 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] *** 2026-01-20 10:56:20,689 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.037) 0:01:46.306 ******* 2026-01-20 10:56:20,689 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.037) 0:01:46.304 ******* 2026-01-20 10:56:20,710 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,720 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if 
cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] *** 2026-01-20 10:56:20,720 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:46.336 ******* 2026-01-20 10:56:20,720 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:46.335 ******* 2026-01-20 10:56:20,779 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_fetch_openshift.log 2026-01-20 10:56:21,248 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:21,258 p=31537 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] *** 2026-01-20 10:56:21,258 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.537) 0:01:46.874 ******* 2026-01-20 10:56:21,258 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.537) 0:01:46.873 ******* 2026-01-20 10:56:21,275 p=31537 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2026-01-20 10:56:21,284 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] *** 2026-01-20 10:56:21,284 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.026) 0:01:46.901 ******* 2026-01-20 10:56:21,284 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.026) 0:01:46.899 ******* 2026-01-20 10:56:21,574 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:21,582 p=31537 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | 
default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] *** 2026-01-20 10:56:21,582 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.297) 0:01:47.198 ******* 2026-01-20 10:56:21,582 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.297) 0:01:47.197 ******* 2026-01-20 10:56:21,605 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:21,613 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] *** 2026-01-20 10:56:21,613 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.031) 0:01:47.230 ******* 2026-01-20 10:56:21,613 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.031) 0:01:47.228 ******* 2026-01-20 10:56:21,897 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:21,906 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] *** 2026-01-20 10:56:21,906 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.292) 0:01:47.522 ******* 2026-01-20 10:56:21,906 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.292) 0:01:47.521 ******* 2026-01-20 10:56:22,206 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:22,217 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] **** 2026-01-20 10:56:22,217 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.311) 0:01:47.834 ******* 2026-01-20 10:56:22,217 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.311) 0:01:47.832 ******* 2026-01-20 10:56:22,517 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:22,527 p=31537 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ 
cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] *** 2026-01-20 10:56:22,527 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.309) 0:01:48.143 ******* 2026-01-20 10:56:22,527 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.309) 0:01:48.142 ******* 2026-01-20 10:56:22,563 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:22,571 p=31537 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] *** 2026-01-20 10:56:22,571 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.044) 0:01:48.187 ******* 2026-01-20 10:56:22,571 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.044) 0:01:48.186 ******* 2026-01-20 10:56:22,969 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:22,985 p=31537 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml] *** 2026-01-20 10:56:22,986 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 
(0:00:00.414) 0:01:48.602 ******* 2026-01-20 10:56:22,986 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.414) 0:01:48.601 ******* 2026-01-20 10:56:23,310 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:23,326 p=31537 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] *** 2026-01-20 10:56:23,326 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:23 +0000 (0:00:00.340) 0:01:48.943 ******* 2026-01-20 10:56:23,327 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:23 +0000 (0:00:00.340) 0:01:48.942 ******* 2026-01-20 10:56:23,809 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:23,837 p=31537 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:56:23,837 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:23 +0000 (0:00:00.510) 0:01:49.454 ******* 2026-01-20 10:56:23,837 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:23 +0000 (0:00:00.510) 0:01:49.452 ******* 2026-01-20 10:56:24,026 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:24,043 p=31537 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] *** 2026-01-20 10:56:24,043 
p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:24 +0000 (0:00:00.205) 0:01:49.659 ******* 2026-01-20 10:56:24,043 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:24 +0000 (0:00:00.206) 0:01:49.658 ******* 2026-01-20 10:56:24,072 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:24,097 p=31537 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] *** 2026-01-20 10:56:24,097 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:24 +0000 (0:00:00.054) 0:01:49.714 ******* 2026-01-20 10:56:24,098 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:24 +0000 (0:00:00.054) 0:01:49.713 ******* 2026-01-20 10:56:25,088 p=31537 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2026-01-20 10:56:25,856 p=31537 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2026-01-20 10:56:25,869 p=31537 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] *** 2026-01-20 10:56:25,869 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:01.771) 0:01:51.485 ******* 2026-01-20 10:56:25,869 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:01.771) 0:01:51.484 ******* 2026-01-20 10:56:25,884 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:25,893 p=31537 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | 
default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': {'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] *** 2026-01-20 10:56:25,893 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.023) 0:01:51.509 ******* 2026-01-20 10:56:25,893 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.023) 0:01:51.508 ******* 2026-01-20 10:56:25,914 p=31537 u=zuul n=ansible | skipping: [localhost] => (item=openstack) 2026-01-20 10:56:25,915 p=31537 u=zuul n=ansible | skipping: [localhost] => (item=openstack-operators) 2026-01-20 10:56:25,915 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:25,925 p=31537 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] *** 2026-01-20 10:56:25,925 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.031) 0:01:51.541 ******* 2026-01-20 10:56:25,925 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.031) 0:01:51.540 ******* 2026-01-20 10:56:25,945 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:25,954 p=31537 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ 
cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] *** 2026-01-20 10:56:25,954 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.029) 0:01:51.570 ******* 2026-01-20 10:56:25,954 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.029) 0:01:51.569 ******* 2026-01-20 10:56:25,977 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:25,985 p=31537 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] ************** 2026-01-20 10:56:25,985 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.030) 0:01:51.601 ******* 2026-01-20 10:56:25,985 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.030) 0:01:51.600 ******* 2026-01-20 10:56:26,005 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,013 p=31537 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] *** 2026-01-20 10:56:26,013 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.630 ******* 2026-01-20 10:56:26,013 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.628 ******* 2026-01-20 10:56:26,033 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,041 p=31537 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] *** 2026-01-20 10:56:26,041 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.027) 0:01:51.657 ******* 2026-01-20 10:56:26,041 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.027) 0:01:51.656 ******* 2026-01-20 10:56:26,061 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,070 p=31537 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ 
cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] *** 2026-01-20 10:56:26,070 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.686 ******* 2026-01-20 10:56:26,070 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.685 ******* 2026-01-20 10:56:26,091 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,100 p=31537 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] *** 2026-01-20 10:56:26,100 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.030) 0:01:51.716 ******* 2026-01-20 10:56:26,100 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.030) 0:01:51.715 ******* 2026-01-20 10:56:26,119 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,128 p=31537 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] *** 2026-01-20 10:56:26,128 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.744 ******* 2026-01-20 10:56:26,128 
p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.743 ******* 2026-01-20 10:56:26,912 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:26,922 p=31537 u=zuul n=ansible | TASK [openshift_setup : Create a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] *** 2026-01-20 10:56:26,922 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.794) 0:01:52.539 ******* 2026-01-20 10:56:26,922 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.794) 0:01:52.537 ******* 2026-01-20 10:56:26,946 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,954 p=31537 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] *** 2026-01-20 10:56:26,954 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.032) 0:01:52.571 ******* 2026-01-20 10:56:26,955 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.032) 0:01:52.569 ******* 2026-01-20 10:56:27,918 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:27,930 p=31537 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': 
'/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] *** 2026-01-20 10:56:27,930 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:27 +0000 (0:00:00.975) 0:01:53.546 ******* 2026-01-20 10:56:27,930 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:27 +0000 (0:00:00.975) 0:01:53.545 ******* 2026-01-20 10:56:28,868 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:28,879 p=31537 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] *** 2026-01-20 10:56:28,879 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:28 +0000 (0:00:00.948) 0:01:54.495 ******* 2026-01-20 10:56:28,879 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:28 +0000 (0:00:00.948) 0:01:54.494 ******* 2026-01-20 10:56:29,614 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:29,622 p=31537 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] *** 2026-01-20 10:56:29,623 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.743) 0:01:55.239 ******* 2026-01-20 10:56:29,623 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.743) 0:01:55.238 ******* 2026-01-20 10:56:29,639 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:29,647 p=31537 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] *** 2026-01-20 10:56:29,648 
p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.025) 0:01:55.264 ******* 2026-01-20 10:56:29,648 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.025) 0:01:55.263 ******* 2026-01-20 10:56:29,662 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:29,676 p=31537 u=zuul n=ansible | TASK [Deploy Observability operator. name=openshift_obs] *********************** 2026-01-20 10:56:29,676 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.028) 0:01:55.292 ******* 2026-01-20 10:56:29,676 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.028) 0:01:55.291 ******* 2026-01-20 10:56:29,693 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:29,701 p=31537 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] ************************************** 2026-01-20 10:56:29,702 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.025) 0:01:55.318 ******* 2026-01-20 10:56:29,702 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.025) 0:01:55.317 ******* 2026-01-20 10:56:29,719 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:29,728 p=31537 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] ********************* 2026-01-20 10:56:29,728 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.026) 0:01:55.345 ******* 2026-01-20 10:56:29,728 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.026) 0:01:55.343 ******* 2026-01-20 10:56:29,817 p=31537 u=zuul n=ansible | TASK [cert_manager : Create role needed directories path={{ cifmw_cert_manager_manifests_dir }}, state=directory, mode=0755] *** 2026-01-20 10:56:29,818 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.089) 0:01:55.434 ******* 2026-01-20 10:56:29,818 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.089) 
0:01:55.433 ******* 2026-01-20 10:56:30,039 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:30,047 p=31537 u=zuul n=ansible | TASK [cert_manager : Create the cifmw_cert_manager_operator_namespace namespace kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ cifmw_cert_manager_operator_namespace }}, kind=Namespace, state=present] *** 2026-01-20 10:56:30,048 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.230) 0:01:55.664 ******* 2026-01-20 10:56:30,048 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.230) 0:01:55.663 ******* 2026-01-20 10:56:30,799 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:30,807 p=31537 u=zuul n=ansible | TASK [cert_manager : Install from Release Manifest _raw_params=release_manifest.yml] *** 2026-01-20 10:56:30,807 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.759) 0:01:56.424 ******* 2026-01-20 10:56:30,807 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.759) 0:01:56.422 ******* 2026-01-20 10:56:30,835 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/cert_manager/tasks/release_manifest.yml for localhost 2026-01-20 10:56:30,846 p=31537 u=zuul n=ansible | TASK [cert_manager : Download release manifests url={{ cifmw_cert_manager_release_manifest }}, dest={{ cifmw_cert_manager_manifests_dir }}/cert_manager_manifest.yml, mode=0664] *** 2026-01-20 10:56:30,846 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.038) 0:01:56.463 ******* 2026-01-20 10:56:30,846 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.038) 0:01:56.461 ******* 2026-01-20 10:56:31,516 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:31,524 p=31537 u=zuul n=ansible | TASK [cert_manager : Install cert-manager 
from release manifest kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, state=present, src={{ cifmw_cert_manager_manifests_dir }}/cert_manager_manifest.yml] *** 2026-01-20 10:56:31,525 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:31 +0000 (0:00:00.678) 0:01:57.141 ******* 2026-01-20 10:56:31,525 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:31 +0000 (0:00:00.678) 0:01:57.140 ******* 2026-01-20 10:56:34,932 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:34,978 p=31537 u=zuul n=ansible | TASK [cert_manager : Install from OLM Manifest _raw_params=olm_manifest.yml] *** 2026-01-20 10:56:34,978 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:34 +0000 (0:00:03.453) 0:02:00.595 ******* 2026-01-20 10:56:34,978 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:34 +0000 (0:00:03.453) 0:02:00.593 ******* 2026-01-20 10:56:34,994 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:35,005 p=31537 u=zuul n=ansible | TASK [cert_manager : Check for cert-manager namespace existence kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name=cert-manager, kind=Namespace, field_selectors=['status.phase=Active']] *** 2026-01-20 10:56:35,006 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:35 +0000 (0:00:00.027) 0:02:00.622 ******* 2026-01-20 10:56:35,006 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:35 +0000 (0:00:00.027) 0:02:00.621 ******* 2026-01-20 10:56:35,701 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:35,717 p=31537 u=zuul n=ansible | TASK [cert_manager : Wait for cert-manager pods to be ready kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, namespace=cert-manager, kind=Pod, 
wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Ready', 'status': 'True'}, label_selectors=['app = {{ item }}']] *** 2026-01-20 10:56:35,717 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:35 +0000 (0:00:00.711) 0:02:01.334 ******* 2026-01-20 10:56:35,717 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:35 +0000 (0:00:00.711) 0:02:01.332 ******* 2026-01-20 10:56:46,472 p=31537 u=zuul n=ansible | ok: [localhost] => (item=cainjector) 2026-01-20 10:56:58,537 p=31537 u=zuul n=ansible | ok: [localhost] => (item=webhook) 2026-01-20 10:56:59,264 p=31537 u=zuul n=ansible | ok: [localhost] => (item=cert-manager) 2026-01-20 10:56:59,282 p=31537 u=zuul n=ansible | TASK [cert_manager : Create $HOME/bin dir path={{ ansible_user_dir }}/bin, state=directory, mode=0755] *** 2026-01-20 10:56:59,282 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:59 +0000 (0:00:23.564) 0:02:24.899 ******* 2026-01-20 10:56:59,282 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:59 +0000 (0:00:23.564) 0:02:24.897 ******* 2026-01-20 10:56:59,467 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:59,475 p=31537 u=zuul n=ansible | TASK [cert_manager : Install cert-manager cmctl CLI url=https://github.com/cert-manager/cmctl/releases/{{ cifmw_cert_manager_version }}/download/cmctl_{{ _os }}_{{ _arch }}, dest={{ ansible_user_dir }}/bin/cmctl, mode=0755] *** 2026-01-20 10:56:59,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:59 +0000 (0:00:00.192) 0:02:25.092 ******* 2026-01-20 10:56:59,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:59 +0000 (0:00:00.193) 0:02:25.090 ******* 2026-01-20 10:57:01,309 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:01,328 p=31537 u=zuul n=ansible | TASK [cert_manager : Verify cert_manager api _raw_params={{ ansible_user_dir }}/bin/cmctl check api --wait=2m] *** 2026-01-20 10:57:01,329 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:01.853) 
0:02:26.945 ******* 2026-01-20 10:57:01,329 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:01.853) 0:02:26.944 ******* 2026-01-20 10:57:01,693 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:01,713 p=31537 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] **************** 2026-01-20 10:57:01,713 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.384) 0:02:27.329 ******* 2026-01-20 10:57:01,713 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.384) 0:02:27.328 ******* 2026-01-20 10:57:01,734 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:01,747 p=31537 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ******************************** 2026-01-20 10:57:01,747 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.033) 0:02:27.363 ******* 2026-01-20 10:57:01,747 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.033) 0:02:27.362 ******* 2026-01-20 10:57:01,765 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:01,777 p=31537 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] ******************* 2026-01-20 10:57:01,777 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.029) 0:02:27.393 ******* 2026-01-20 10:57:01,777 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.029) 0:02:27.392 ******* 2026-01-20 10:57:01,799 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:01,810 p=31537 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************ 2026-01-20 10:57:01,811 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.033) 0:02:27.427 ******* 2026-01-20 10:57:01,811 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.033) 0:02:27.426 ******* 2026-01-20 10:57:01,829 p=31537 u=zuul 
n=ansible | skipping: [localhost] 2026-01-20 10:57:01,841 p=31537 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************ 2026-01-20 10:57:01,842 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.031) 0:02:27.458 ******* 2026-01-20 10:57:01,842 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.031) 0:02:27.457 ******* 2026-01-20 10:57:01,865 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:01,877 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:01,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.035) 0:02:27.494 ******* 2026-01-20 10:57:01,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.035) 0:02:27.492 ******* 2026-01-20 10:57:01,942 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:01,952 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2026-01-20 10:57:01,952 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.074) 0:02:27.568 ******* 2026-01-20 10:57:01,952 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.074) 0:02:27.567 ******* 2026-01-20 10:57:02,056 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,067 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:02,068 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.115) 0:02:27.684 ******* 2026-01-20 10:57:02,068 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.115) 0:02:27.683 ******* 2026-01-20 10:57:02,191 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Fetch nodes facts and save them as parameters', 'type': 'playbook', 'inventory': '/home/zuul/ci-framework-data/artifacts/zuul_inventory.yml', 'source': 'fetch_compute_facts.yml'}) 2026-01-20 10:57:02,206 p=31537 u=zuul n=ansible | TASK [run_hook : Set playbook path for Fetch nodes facts and save them as parameters cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2026-01-20 10:57:02,206 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.138) 0:02:27.822 ******* 2026-01-20 10:57:02,206 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.138) 0:02:27.821 ******* 2026-01-20 10:57:02,247 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,255 
p=31537 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2026-01-20 10:57:02,255 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.049) 0:02:27.872 ******* 2026-01-20 10:57:02,255 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.049) 0:02:27.870 ******* 2026-01-20 10:57:02,455 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,463 p=31537 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] *** 2026-01-20 10:57:02,464 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.208) 0:02:28.080 ******* 2026-01-20 10:57:02,464 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.208) 0:02:28.079 ******* 2026-01-20 10:57:02,478 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:02,486 p=31537 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2026-01-20 10:57:02,486 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.022) 0:02:28.102 ******* 2026-01-20 10:57:02,486 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.022) 0:02:28.101 ******* 2026-01-20 10:57:02,654 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,662 p=31537 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2026-01-20 10:57:02,662 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.176) 0:02:28.279 ******* 2026-01-20 10:57:02,662 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.176) 0:02:28.277 ******* 2026-01-20 10:57:02,681 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,689 p=31537 u=zuul 
n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2026-01-20 10:57:02,689 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.026) 0:02:28.305 ******* 2026-01-20 10:57:02,689 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.026) 0:02:28.304 ******* 2026-01-20 10:57:02,872 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,881 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:57:02,881 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.192) 0:02:28.497 ******* 2026-01-20 10:57:02,881 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.192) 0:02:28.496 ******* 2026-01-20 10:57:03,107 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:03,115 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Fetch nodes facts and save them as parameters] *** 2026-01-20 10:57:03,116 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:03 +0000 (0:00:00.234) 0:02:28.732 ******* 2026-01-20 10:57:03,116 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:03 +0000 (0:00:00.234) 0:02:28.731 ******* 2026-01-20 10:57:03,179 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_002_run_hook_without_retry_fetch.log 2026-01-20 10:57:14,297 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:14,307 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Fetch nodes facts and save them as parameters] *** 2026-01-20 10:57:14,307 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:11.191) 0:02:39.923 ******* 2026-01-20 10:57:14,307 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:11.191) 0:02:39.922 ******* 2026-01-20 10:57:14,326 p=31537 u=zuul n=ansible | skipping: 
[localhost] 2026-01-20 10:57:14,334 p=31537 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:57:14,334 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.027) 0:02:39.951 ******* 2026-01-20 10:57:14,334 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.027) 0:02:39.949 ******* 2026-01-20 10:57:14,547 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,555 p=31537 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:57:14,555 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.221) 0:02:40.172 ******* 2026-01-20 10:57:14,556 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.221) 0:02:40.170 ******* 2026-01-20 10:57:14,579 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,598 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:14,598 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.042) 0:02:40.215 ******* 2026-01-20 10:57:14,598 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.042) 0:02:40.213 ******* 2026-01-20 10:57:14,648 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,657 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2026-01-20 10:57:14,657 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.058) 0:02:40.273 ******* 2026-01-20 10:57:14,657 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.058) 0:02:40.272 ******* 2026-01-20 10:57:14,755 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,765 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_package_build _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:14,765 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.108) 0:02:40.382 ******* 2026-01-20 10:57:14,765 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.108) 0:02:40.380 ******* 2026-01-20 10:57:14,862 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:14,877 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-20 10:57:14,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.111) 0:02:40.494 ******* 2026-01-20 10:57:14,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.111) 0:02:40.492 ******* 2026-01-20 10:57:14,919 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,929 p=31537 u=zuul n=ansible | TASK [pkg_build : Generate volume list build_volumes={% for pkg in cifmw_pkg_build_list -%} - "{{ pkg.src|default(cifmw_pkg_build_pkg_basedir ~ '/' ~ pkg.name) }}:/root/src/{{ pkg.name }}:z" - "{{ cifmw_pkg_build_basedir }}/volumes/packages/{{ pkg.name }}:/root/{{ pkg.name }}:z" - "{{ cifmw_pkg_build_basedir }}/logs/build_{{ pkg.name }}:/root/logs:z" {% endfor -%} - "{{ cifmw_pkg_build_basedir }}/volumes/packages/gating_repo:/root/gating_repo:z" - "{{ cifmw_pkg_build_basedir }}/artifacts/repositories:/root/yum.repos.d:z,ro" - "{{ cifmw_pkg_build_basedir }}/artifacts/build-packages.yml:/root/playbook.yml:z,ro" ] *** 2026-01-20 10:57:14,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 
+0000 (0:00:00.051) 0:02:40.545 ******* 2026-01-20 10:57:14,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.051) 0:02:40.544 ******* 2026-01-20 10:57:14,951 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:14,960 p=31537 u=zuul n=ansible | TASK [pkg_build : Build package using container name={{ pkg.name }}-builder, auto_remove=True, detach=False, privileged=True, log_driver=k8s-file, log_level=info, log_opt={'path': '{{ cifmw_pkg_build_basedir }}/logs/{{ pkg.name }}-builder.log'}, image={{ cifmw_pkg_build_ctx_name }}, volume={{ build_volumes | from_yaml }}, security_opt=['label=disable', 'seccomp=unconfined', 'apparmor=unconfined'], env={'PROJECT': '{{ pkg.name }}'}, command=ansible-playbook -i localhost, -c local playbook.yml] *** 2026-01-20 10:57:14,960 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.030) 0:02:40.576 ******* 2026-01-20 10:57:14,960 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.030) 0:02:40.575 ******* 2026-01-20 10:57:14,973 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:14,986 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:14,987 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.026) 0:02:40.603 ******* 2026-01-20 10:57:14,987 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.026) 0:02:40.602 ******* 2026-01-20 10:57:15,051 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,059 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2026-01-20 10:57:15,060 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.072) 0:02:40.676 ******* 2026-01-20 10:57:15,060 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.072) 0:02:40.675 ******* 2026-01-20 10:57:15,159 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,167 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_package_build _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:15,168 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.107) 0:02:40.784 ******* 2026-01-20 10:57:15,168 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.107) 0:02:40.783 ******* 2026-01-20 10:57:15,266 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:15,294 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:15,294 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.126) 0:02:40.911 ******* 2026-01-20 10:57:15,295 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.126) 0:02:40.910 ******* 2026-01-20 10:57:15,344 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,355 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2026-01-20 10:57:15,355 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.060) 0:02:40.971 ******* 2026-01-20 10:57:15,355 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.060) 0:02:40.970 ******* 2026-01-20 10:57:15,451 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,460 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_container_build _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:15,460 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.105) 0:02:41.077 ******* 2026-01-20 10:57:15,460 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.105) 0:02:41.075 ******* 2026-01-20 10:57:15,557 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:15,570 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-20 10:57:15,571 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.110) 0:02:41.187 ******* 2026-01-20 10:57:15,571 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.110) 0:02:41.186 ******* 2026-01-20 10:57:15,614 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,624 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Nothing to do yet msg=No support for that step yet] ******** 2026-01-20 10:57:15,624 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.053) 0:02:41.240 ******* 2026-01-20 10:57:15,624 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.053) 0:02:41.239 ******* 2026-01-20 10:57:15,643 p=31537 u=zuul n=ansible | ok: [localhost] => msg: No support for that step yet 2026-01-20 10:57:15,650 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not 
mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:15,650 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.026) 0:02:41.267 ******* 2026-01-20 10:57:15,651 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.026) 0:02:41.265 ******* 2026-01-20 10:57:15,710 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,718 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2026-01-20 10:57:15,718 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.067) 0:02:41.335 ******* 2026-01-20 10:57:15,718 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.067) 0:02:41.333 ******* 2026-01-20 10:57:15,822 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,835 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_container_build _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:15,836 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.117) 0:02:41.452 ******* 2026-01-20 10:57:15,836 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.117) 0:02:41.451 ******* 2026-01-20 10:57:15,940 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:15,969 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:15,969 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.133) 0:02:41.585 ******* 2026-01-20 10:57:15,969 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.133) 0:02:41.584 ******* 2026-01-20 10:57:16,020 p=31537 u=zuul n=ansible | ok: 
[localhost] 2026-01-20 10:57:16,028 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2026-01-20 10:57:16,028 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.059) 0:02:41.644 ******* 2026-01-20 10:57:16,028 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.059) 0:02:41.643 ******* 2026-01-20 10:57:16,123 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:16,132 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_operator_build _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:16,133 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.104) 0:02:41.749 ******* 2026-01-20 10:57:16,133 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.104) 0:02:41.748 ******* 2026-01-20 10:57:16,238 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,253 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-20 10:57:16,254 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.120) 0:02:41.870 ******* 2026-01-20 10:57:16,254 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.120) 0:02:41.869 ******* 2026-01-20 10:57:16,346 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:16,357 p=31537 u=zuul n=ansible | TASK [operator_build : Ensure mandatory directories exist path={{ cifmw_operator_build_basedir }}/{{ item }}, state=directory, mode=0755] *** 2026-01-20 10:57:16,357 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.103) 0:02:41.974 ******* 2026-01-20 10:57:16,357 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.103) 0:02:41.972 ******* 2026-01-20 10:57:16,384 p=31537 u=zuul n=ansible | skipping: [localhost] => 
(item=artifacts) 2026-01-20 10:57:16,389 p=31537 u=zuul n=ansible | skipping: [localhost] => (item=logs) 2026-01-20 10:57:16,390 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,399 p=31537 u=zuul n=ansible | TASK [operator_build : Initialize role output cifmw_operator_build_output={{ cifmw_operator_build_output }}, cifmw_operator_build_meta_name={{ cifmw_operator_build_meta_name }}] *** 2026-01-20 10:57:16,400 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.042) 0:02:42.016 ******* 2026-01-20 10:57:16,400 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.042) 0:02:42.015 ******* 2026-01-20 10:57:16,474 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,481 p=31537 u=zuul n=ansible | TASK [operator_build : Populate operators list with zuul info _raw_params=zuul_info.yml] *** 2026-01-20 10:57:16,481 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.081) 0:02:42.098 ******* 2026-01-20 10:57:16,481 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.081) 0:02:42.096 ******* 2026-01-20 10:57:16,506 p=31537 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'main', 'change': '320', 'change_url': 'https://github.com/openstack-k8s-operators/watcher-operator/pull/320', 'commit_id': '581f46572d07c53c87a11aa044b02e73f253eea6', 'patchset': '581f46572d07c53c87a11aa044b02e73f253eea6', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/watcher-operator', 'name': 'openstack-k8s-operators/watcher-operator', 'short_name': 'watcher-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/watcher-operator'}, 'topic': None}) 2026-01-20 10:57:16,508 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,517 p=31537 u=zuul n=ansible | TASK [operator_build : Merge lists of operators operators_list={{ [cifmw_operator_build_operators, zuul_info_operators | default([])] | 
community.general.lists_mergeby('name') }}] *** 2026-01-20 10:57:16,517 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.035) 0:02:42.133 ******* 2026-01-20 10:57:16,517 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.035) 0:02:42.132 ******* 2026-01-20 10:57:16,539 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,548 p=31537 u=zuul n=ansible | TASK [operator_build : Get meta_operator src dir from operators_list cifmw_operator_build_meta_src={{ (operators_list | selectattr('name', 'eq', cifmw_operator_build_meta_name) | map(attribute='src') | first ) | default(cifmw_operator_build_meta_src, true) }}] *** 2026-01-20 10:57:16,548 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.031) 0:02:42.164 ******* 2026-01-20 10:57:16,548 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.031) 0:02:42.163 ******* 2026-01-20 10:57:16,570 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,578 p=31537 u=zuul n=ansible | TASK [operator_build : Adds meta-operator to the list operators_list={{ [operators_list, meta_operator_info] | community.general.lists_mergeby('name') }}] *** 2026-01-20 10:57:16,578 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.030) 0:02:42.194 ******* 2026-01-20 10:57:16,578 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.030) 0:02:42.193 ******* 2026-01-20 10:57:16,601 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,609 p=31537 u=zuul n=ansible | TASK [operator_build : Clone operator's code when src dir is empty _raw_params=clone.yml] *** 2026-01-20 10:57:16,609 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.030) 0:02:42.225 ******* 2026-01-20 10:57:16,609 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.030) 0:02:42.224 ******* 2026-01-20 10:57:16,633 p=31537 u=zuul n=ansible | skipping: 
[localhost] 2026-01-20 10:57:16,642 p=31537 u=zuul n=ansible | TASK [operator_build : Building operators _raw_params=build.yml] *************** 2026-01-20 10:57:16,642 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.032) 0:02:42.258 ******* 2026-01-20 10:57:16,642 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.033) 0:02:42.257 ******* 2026-01-20 10:57:16,667 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,676 p=31537 u=zuul n=ansible | TASK [operator_build : Building meta operator _raw_params=build.yml] *********** 2026-01-20 10:57:16,677 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.034) 0:02:42.293 ******* 2026-01-20 10:57:16,677 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.034) 0:02:42.292 ******* 2026-01-20 10:57:16,701 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,710 p=31537 u=zuul n=ansible | TASK [operator_build : Gather role output dest={{ cifmw_operator_build_basedir }}/artifacts/custom-operators.yml, content={{ cifmw_operator_build_output | to_nice_yaml }}, mode=0644] *** 2026-01-20 10:57:16,710 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.033) 0:02:42.326 ******* 2026-01-20 10:57:16,710 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.033) 0:02:42.325 ******* 2026-01-20 10:57:16,736 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:16,752 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:16,752 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.042) 0:02:42.368 ******* 2026-01-20 10:57:16,752 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 
10:57:16 +0000 (0:00:00.042) 0:02:42.367 ******* 2026-01-20 10:57:16,804 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:16,812 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2026-01-20 10:57:16,812 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.059) 0:02:42.428 ******* 2026-01-20 10:57:16,812 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.059) 0:02:42.427 ******* 2026-01-20 10:57:16,920 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:16,929 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_operator_build _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:16,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.116) 0:02:42.545 ******* 2026-01-20 10:57:16,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.116) 0:02:42.544 ******* 2026-01-20 10:57:17,034 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:17,053 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:17,053 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.124) 0:02:42.670 ******* 2026-01-20 10:57:17,053 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.124) 0:02:42.668 ******* 2026-01-20 10:57:17,110 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:17,123 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2026-01-20 10:57:17,124 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.070) 0:02:42.740 ******* 2026-01-20 10:57:17,124 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.070) 0:02:42.739 ******* 2026-01-20 10:57:17,228 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:17,239 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_deploy _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:17,239 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.115) 0:02:42.856 ******* 2026-01-20 10:57:17,239 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.115) 0:02:42.854 ******* 2026-01-20 10:57:17,377 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': '80 Kustomize OpenStack CR', 'type': 'playbook', 'source': 'control_plane_horizon.yml'}) 2026-01-20 10:57:17,386 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Create coo subscription', 'type': 'playbook', 'source': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/deploy_cluster_observability_operator.yaml'}) 2026-01-20 10:57:17,401 p=31537 u=zuul n=ansible | TASK [run_hook : Set playbook path for 80 Kustomize OpenStack CR cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2026-01-20 10:57:17,401 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 
10:57:17 +0000 (0:00:00.161) 0:02:43.017 ******* 2026-01-20 10:57:17,401 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.161) 0:02:43.016 ******* 2026-01-20 10:57:17,447 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:17,456 p=31537 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2026-01-20 10:57:17,456 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.054) 0:02:43.072 ******* 2026-01-20 10:57:17,456 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.054) 0:02:43.071 ******* 2026-01-20 10:57:17,663 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:17,673 p=31537 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] *** 2026-01-20 10:57:17,673 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.217) 0:02:43.290 ******* 2026-01-20 10:57:17,673 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.217) 0:02:43.288 ******* 2026-01-20 10:57:17,696 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:17,705 p=31537 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2026-01-20 10:57:17,706 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.032) 0:02:43.322 ******* 2026-01-20 10:57:17,706 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.032) 0:02:43.321 ******* 2026-01-20 10:57:17,885 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:17,896 p=31537 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2026-01-20 10:57:17,896 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 
(0:00:00.190) 0:02:43.513 ******* 2026-01-20 10:57:17,896 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.190) 0:02:43.511 ******* 2026-01-20 10:57:17,920 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:17,929 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2026-01-20 10:57:17,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.032) 0:02:43.545 ******* 2026-01-20 10:57:17,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.032) 0:02:43.544 ******* 2026-01-20 10:57:18,122 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:18,131 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:57:18,131 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:18 +0000 (0:00:00.201) 0:02:43.747 ******* 2026-01-20 10:57:18,131 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:18 +0000 (0:00:00.201) 0:02:43.746 ******* 2026-01-20 10:57:18,335 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:18,344 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook without retry - 80 Kustomize OpenStack CR] *********** 2026-01-20 10:57:18,344 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:18 +0000 (0:00:00.213) 0:02:43.961 ******* 2026-01-20 10:57:18,344 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:18 +0000 (0:00:00.213) 0:02:43.959 ******* 2026-01-20 10:57:18,398 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_003_run_hook_without_retry_80.log 2026-01-20 10:57:20,124 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:20,134 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook with retry - 80 Kustomize OpenStack CR] ************** 2026-01-20 10:57:20,134 p=31537 u=zuul n=ansible | Tuesday 20 January 
2026 10:57:20 +0000 (0:00:01.789) 0:02:45.751 ******* 2026-01-20 10:57:20,134 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:01.789) 0:02:45.749 ******* 2026-01-20 10:57:20,161 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:20,169 p=31537 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:57:20,169 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.035) 0:02:45.786 ******* 2026-01-20 10:57:20,170 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.035) 0:02:45.784 ******* 2026-01-20 10:57:20,360 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:20,368 p=31537 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:57:20,368 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.198) 0:02:45.984 ******* 2026-01-20 10:57:20,368 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.198) 0:02:45.983 ******* 2026-01-20 10:57:20,389 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:20,398 p=31537 u=zuul n=ansible | TASK [run_hook : Set playbook path for Create coo subscription cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2026-01-20 10:57:20,398 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.030) 0:02:46.015 ******* 2026-01-20 10:57:20,398 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 
10:57:20 +0000 (0:00:00.030) 0:02:46.013 ******* 2026-01-20 10:57:20,441 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:20,449 p=31537 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2026-01-20 10:57:20,450 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.051) 0:02:46.066 ******* 2026-01-20 10:57:20,450 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.051) 0:02:46.065 ******* 2026-01-20 10:57:20,652 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:20,660 p=31537 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] *** 2026-01-20 10:57:20,660 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.210) 0:02:46.277 ******* 2026-01-20 10:57:20,660 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.210) 0:02:46.275 ******* 2026-01-20 10:57:20,682 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:20,691 p=31537 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2026-01-20 10:57:20,691 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.031) 0:02:46.308 ******* 2026-01-20 10:57:20,691 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.031) 0:02:46.306 ******* 2026-01-20 10:57:20,872 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:20,884 p=31537 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2026-01-20 10:57:20,884 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.192) 0:02:46.500 ******* 2026-01-20 10:57:20,884 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 
(0:00:00.192) 0:02:46.499 ******* 2026-01-20 10:57:20,910 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:20,919 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2026-01-20 10:57:20,919 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.035) 0:02:46.535 ******* 2026-01-20 10:57:20,919 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.035) 0:02:46.534 ******* 2026-01-20 10:57:21,106 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:21,114 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:57:21,114 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:21 +0000 (0:00:00.194) 0:02:46.730 ******* 2026-01-20 10:57:21,114 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:21 +0000 (0:00:00.194) 0:02:46.729 ******* 2026-01-20 10:57:21,312 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:21,323 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Create coo subscription] ************* 2026-01-20 10:57:21,323 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:21 +0000 (0:00:00.209) 0:02:46.940 ******* 2026-01-20 10:57:21,323 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:21 +0000 (0:00:00.209) 0:02:46.938 ******* 2026-01-20 10:57:21,382 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_004_run_hook_without_retry_create.log 2026-01-20 10:57:22,601 p=31537 u=zuul n=ansible | fatal: [localhost]: FAILED! 
=> censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result' changed: true 2026-01-20 10:57:22,602 p=31537 u=zuul n=ansible | NO MORE HOSTS LEFT ************************************************************* 2026-01-20 10:57:22,603 p=31537 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | localhost : ok=146 changed=44 unreachable=0 failed=1 skipped=97 rescued=0 ignored=1 2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:22 +0000 (0:00:01.280) 0:02:48.220 ******* 2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | =============================================================================== 2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | run_hook : Run hook without retry - Download needed tools -------------- 36.40s 2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 33.93s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | cert_manager : Wait for cert-manager pods to be ready ------------------ 23.56s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | run_hook : Run hook without retry - Fetch nodes facts and save them as parameters -- 11.19s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 9.02s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | ci_setup : Install openshift client ------------------------------------- 5.26s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | cert_manager : Install cert-manager from release manifest --------------- 3.45s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | cert_manager : Install cert-manager cmctl CLI --------------------------- 1.85s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | run_hook : Run hook without retry - 80 Kustomize OpenStack CR ----------- 1.79s 2026-01-20 10:57:22,605 
p=31537 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 1.77s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.40s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | run_hook : Run hook without retry - Create coo subscription ------------- 1.28s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 1.12s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.08s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Make sure git-core package is installed -------------------- 0.99s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 0.98s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 0.95s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 0.95s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Ensure directories are present ----------------------------- 0.91s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.80s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:22 +0000 (0:00:01.281) 0:02:48.220 ******* 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | =============================================================================== 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | run_hook --------------------------------------------------------------- 57.94s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | ci_setup --------------------------------------------------------------- 41.50s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | cert_manager 
----------------------------------------------------------- 31.90s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup ------------------------------------------------------------- 17.72s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_setup --------------------------------------------------------- 5.84s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_login --------------------------------------------------------- 3.89s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | install_yamls ----------------------------------------------------------- 3.21s 2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | install_ca -------------------------------------------------------------- 1.69s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | cifmw_setup ------------------------------------------------------------- 1.54s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | gather_facts ------------------------------------------------------------ 0.95s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | discover_latest_image --------------------------------------------------- 0.80s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | operator_build ---------------------------------------------------------- 0.39s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | networking_mapper ------------------------------------------------------- 0.33s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | ansible.builtin.file ---------------------------------------------------- 0.32s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | ansible.builtin.include_tasks ------------------------------------------- 0.08s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | pkg_build --------------------------------------------------------------- 0.06s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | ansible.builtin.include_vars -------------------------------------------- 0.03s 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | total ----------------------------------------------------------------- 168.18s home/zuul/zuul-output/logs/ci-framework-data/logs/2026-01-20_10-57/0000775000175000017500000000000015133657764023131 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/2026-01-20_10-57/ansible.log0000666000175000017500000061045115133657622025253 0ustar zuulzuul2026-01-20 10:54:06,928 p=30902 u=zuul n=ansible | Starting galaxy collection install process 2026-01-20 10:54:06,929 p=30902 u=zuul n=ansible | Process install dependency map 2026-01-20 10:54:22,513 p=30902 u=zuul n=ansible | Starting collection install process 2026-01-20 10:54:22,514 p=30902 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+05fab9c7' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2026-01-20 10:54:23,013 p=30902 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+05fab9c7 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2026-01-20 10:54:23,013 p=30902 u=zuul n=ansible | cifmw.general:1.0.0+05fab9c7 was installed successfully 2026-01-20 10:54:23,013 p=30902 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2026-01-20 10:54:23,070 p=30902 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2026-01-20 10:54:23,070 p=30902 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2026-01-20 10:54:23,070 p=30902 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2026-01-20 10:54:23,794 p=30902 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general 2026-01-20 10:54:23,794 p=30902 
u=zuul n=ansible | community.general:10.0.1 was installed successfully 2026-01-20 10:54:23,794 p=30902 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix' 2026-01-20 10:54:23,845 p=30902 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix 2026-01-20 10:54:23,845 p=30902 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully 2026-01-20 10:54:23,845 p=30902 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils' 2026-01-20 10:54:23,943 p=30902 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils 2026-01-20 10:54:23,943 p=30902 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully 2026-01-20 10:54:23,943 p=30902 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt' 2026-01-20 10:54:23,967 p=30902 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt 2026-01-20 10:54:23,967 p=30902 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully 2026-01-20 10:54:23,968 p=30902 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto' 2026-01-20 10:54:24,118 p=30902 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto 2026-01-20 10:54:24,118 p=30902 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully 2026-01-20 10:54:24,118 p=30902 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core' 2026-01-20 10:54:24,238 p=30902 u=zuul n=ansible | Created 
collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core 2026-01-20 10:54:24,238 p=30902 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully 2026-01-20 10:54:24,238 p=30902 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon' 2026-01-20 10:54:24,309 p=30902 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon 2026-01-20 10:54:24,309 p=30902 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully 2026-01-20 10:54:24,309 p=30902 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template' 2026-01-20 10:54:24,328 p=30902 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template 2026-01-20 10:54:24,328 p=30902 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully 2026-01-20 10:54:24,328 p=30902 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos' 2026-01-20 10:54:24,563 p=30902 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos 2026-01-20 10:54:24,563 p=30902 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully 2026-01-20 10:54:24,563 p=30902 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios' 2026-01-20 10:54:24,827 p=30902 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios 2026-01-20 10:54:24,827 p=30902 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully 2026-01-20 10:54:24,827 
p=30902 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx' 2026-01-20 10:54:24,860 p=30902 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx 2026-01-20 10:54:24,861 p=30902 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully 2026-01-20 10:54:24,861 p=30902 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2026-01-20 10:54:24,888 p=30902 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd 2026-01-20 10:54:24,888 p=30902 u=zuul n=ansible | community.okd:4.0.0 was installed successfully 2026-01-20 10:54:24,888 p=30902 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@' 2026-01-20 10:54:24,976 p=30902 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@ 2026-01-20 10:54:24,976 p=30902 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully 2026-01-20 10:54:34,401 p=31537 u=zuul n=ansible | PLAY [Remove status flag] ****************************************************** 2026-01-20 10:54:34,421 p=31537 u=zuul n=ansible | TASK [Gathering Facts ] ******************************************************** 2026-01-20 10:54:34,421 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:34 +0000 (0:00:00.038) 0:00:00.038 ******* 2026-01-20 10:54:34,421 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:34 +0000 (0:00:00.036) 0:00:00.036 ******* 2026-01-20 10:54:35,349 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:35,372 p=31537 u=zuul n=ansible | TASK [Delete success flag if exists path={{ ansible_user_dir }}/cifmw-success, state=absent] *** 2026-01-20 10:54:35,372 
p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.951) 0:00:00.989 ******* 2026-01-20 10:54:35,373 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.951) 0:00:00.987 ******* 2026-01-20 10:54:35,684 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:35,693 p=31537 u=zuul n=ansible | TASK [Inherit from parent scenarios if needed _raw_params=ci/playbooks/tasks/inherit_parent_scenario.yml] *** 2026-01-20 10:54:35,693 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.320) 0:00:01.309 ******* 2026-01-20 10:54:35,693 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.320) 0:00:01.308 ******* 2026-01-20 10:54:35,715 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/tasks/inherit_parent_scenario.yml for localhost 2026-01-20 10:54:35,769 p=31537 u=zuul n=ansible | TASK [Inherit from parent parameter file if instructed file={{ item }}] ******** 2026-01-20 10:54:35,769 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.076) 0:00:01.386 ******* 2026-01-20 10:54:35,769 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.076) 0:00:01.384 ******* 2026-01-20 10:54:35,791 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:35,798 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] *** 2026-01-20 10:54:35,798 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.028) 0:00:01.414 ******* 2026-01-20 10:54:35,798 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.028) 0:00:01.413 ******* 2026-01-20 10:54:35,821 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:35,827 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Get customized 
parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] *** 2026-01-20 10:54:35,827 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.029) 0:00:01.444 ******* 2026-01-20 10:54:35,827 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.029) 0:00:01.442 ******* 2026-01-20 10:54:35,910 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:35,917 p=31537 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] *** 2026-01-20 10:54:35,917 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.089) 0:00:01.533 ******* 2026-01-20 10:54:35,917 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:35 +0000 (0:00:00.089) 0:00:01.532 ******* 2026-01-20 10:54:36,117 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:36,123 p=31537 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] *** 2026-01-20 10:54:36,123 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.205) 0:00:01.739 ******* 2026-01-20 10:54:36,123 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.205) 0:00:01.738 ******* 2026-01-20 10:54:36,144 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:36,152 p=31537 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] *** 2026-01-20 10:54:36,152 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.028) 0:00:01.768 ******* 2026-01-20 10:54:36,152 p=31537 
u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.028) 0:00:01.767 ******* 2026-01-20 10:54:36,171 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:36,177 p=31537 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] *** 2026-01-20 10:54:36,177 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.025) 0:00:01.794 ******* 2026-01-20 10:54:36,177 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.025) 0:00:01.792 ******* 2026-01-20 10:54:36,199 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:36,208 p=31537 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] *************** 2026-01-20 10:54:36,208 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.030) 0:00:01.825 ******* 2026-01-20 10:54:36,208 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:36 +0000 (0:00:00.031) 0:00:01.823 ******* 2026-01-20 10:54:37,599 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:37,611 p=31537 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] *** 2026-01-20 10:54:37,611 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:37 +0000 (0:00:01.402) 0:00:03.227 ******* 2026-01-20 10:54:37,611 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:37 +0000 (0:00:01.402) 0:00:03.226 ******* 2026-01-20 10:54:37,794 p=31537 u=zuul n=ansible | changed: [localhost] => (item=tmp) 2026-01-20 10:54:38,008 p=31537 u=zuul n=ansible | changed: [localhost] => (item=artifacts/repositories) 2026-01-20 10:54:38,516 p=31537 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup) 2026-01-20 10:54:38,524 p=31537 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is 
installed name=git-core, state=present] *** 2026-01-20 10:54:38,524 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:38 +0000 (0:00:00.913) 0:00:04.140 ******* 2026-01-20 10:54:38,524 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:38 +0000 (0:00:00.913) 0:00:04.139 ******* 2026-01-20 10:54:39,505 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:39,511 p=31537 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] *** 2026-01-20 10:54:39,511 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:39 +0000 (0:00:00.987) 0:00:05.127 ******* 2026-01-20 10:54:39,511 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:39 +0000 (0:00:00.987) 0:00:05.126 ******* 2026-01-20 10:54:40,626 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:40,632 p=31537 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] *** 2026-01-20 10:54:40,632 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:40 +0000 (0:00:01.121) 0:00:06.249 ******* 2026-01-20 10:54:40,632 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:40 +0000 (0:00:01.121) 0:00:06.247 ******* 2026-01-20 10:54:49,643 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:49,651 p=31537 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] *** 2026-01-20 10:54:49,652 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:49 +0000 (0:00:09.019) 0:00:15.268 ******* 2026-01-20 10:54:49,652 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 
10:54:49 +0000 (0:00:09.019) 0:00:15.267 ******* 2026-01-20 10:54:50,444 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:50,451 p=31537 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] *** 2026-01-20 10:54:50,451 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:50 +0000 (0:00:00.799) 0:00:16.067 ******* 2026-01-20 10:54:50,451 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:50 +0000 (0:00:00.799) 0:00:16.066 ******* 2026-01-20 10:54:50,469 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:50,475 p=31537 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] *** 2026-01-20 10:54:50,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:50 +0000 (0:00:00.024) 0:00:16.092 ******* 2026-01-20 10:54:50,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:50 +0000 (0:00:00.024) 0:00:16.090 ******* 2026-01-20 10:54:51,083 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:51,089 p=31537 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] *** 2026-01-20 
10:54:51,089 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.613) 0:00:16.706 ******* 2026-01-20 10:54:51,089 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.613) 0:00:16.704 ******* 2026-01-20 10:54:51,118 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:51,124 p=31537 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] *** 2026-01-20 10:54:51,125 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.035) 0:00:16.741 ******* 2026-01-20 10:54:51,125 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.035) 0:00:16.740 ******* 2026-01-20 10:54:51,154 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:51,160 p=31537 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] *** 2026-01-20 10:54:51,160 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.035) 0:00:16.777 ******* 2026-01-20 10:54:51,160 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.035) 0:00:16.775 ******* 2026-01-20 10:54:51,191 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:51,199 p=31537 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | 
length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] *** 2026-01-20 10:54:51,200 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.039) 0:00:16.816 ******* 2026-01-20 10:54:51,200 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.039) 0:00:16.815 ******* 2026-01-20 10:54:51,644 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:51,653 p=31537 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] *** 2026-01-20 10:54:51,653 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.453) 0:00:17.269 ******* 2026-01-20 10:54:51,653 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:51 +0000 (0:00:00.453) 0:00:17.268 ******* 2026-01-20 10:54:52,322 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:52,336 p=31537 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] *** 2026-01-20 10:54:52,336 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.683) 0:00:17.952 ******* 2026-01-20 10:54:52,336 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.683) 0:00:17.951 ******* 2026-01-20 10:54:52,352 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,366 p=31537 u=zuul n=ansible | 
TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] *** 2026-01-20 10:54:52,366 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.030) 0:00:17.983 ******* 2026-01-20 10:54:52,366 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.030) 0:00:17.981 ******* 2026-01-20 10:54:52,382 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,392 p=31537 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] *** 2026-01-20 10:54:52,393 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.026) 0:00:18.009 ******* 2026-01-20 10:54:52,393 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.026) 0:00:18.008 ******* 2026-01-20 10:54:52,408 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,417 p=31537 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] *** 2026-01-20 10:54:52,417 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.034 ******* 2026-01-20 10:54:52,417 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.032 ******* 2026-01-20 10:54:52,444 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:52,452 p=31537 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ 
cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] *** 2026-01-20 10:54:52,452 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.034) 0:00:18.068 ******* 2026-01-20 10:54:52,452 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.034) 0:00:18.067 ******* 2026-01-20 10:54:52,465 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,475 p=31537 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] *** 2026-01-20 10:54:52,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.022) 0:00:18.091 ******* 2026-01-20 10:54:52,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.022) 0:00:18.090 ******* 2026-01-20 10:54:52,489 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,499 p=31537 u=zuul n=ansible | TASK [Download the RPM name=krb_request] *************************************** 2026-01-20 10:54:52,499 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.115 ******* 2026-01-20 10:54:52,499 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.114 ******* 2026-01-20 10:54:52,512 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,522 p=31537 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] *** 2026-01-20 10:54:52,522 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.138 ******* 2026-01-20 10:54:52,522 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.137 ******* 2026-01-20 10:54:52,537 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,546 p=31537 u=zuul n=ansible | TASK 
[repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] *** 2026-01-20 10:54:52,546 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.163 ******* 2026-01-20 10:54:52,546 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.024) 0:00:18.161 ******* 2026-01-20 10:54:52,560 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,569 p=31537 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] *** 2026-01-20 10:54:52,569 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.022) 0:00:18.186 ******* 2026-01-20 10:54:52,569 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.022) 0:00:18.184 ******* 2026-01-20 10:54:52,582 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,593 p=31537 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] *** 2026-01-20 10:54:52,593 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.209 ******* 2026-01-20 10:54:52,593 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.208 ******* 2026-01-20 10:54:52,606 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:52,616 p=31537 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] *** 2026-01-20 10:54:52,616 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.232 ******* 2026-01-20 10:54:52,616 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.023) 0:00:18.231 ******* 2026-01-20 10:54:52,792 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:52,799 p=31537 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ 
cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] *** 2026-01-20 10:54:52,799 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.183) 0:00:18.416 ******* 2026-01-20 10:54:52,799 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:52 +0000 (0:00:00.183) 0:00:18.414 ******* 2026-01-20 10:54:52,994 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:53,001 p=31537 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] *** 2026-01-20 10:54:53,001 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.201) 0:00:18.618 ******* 2026-01-20 10:54:53,001 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.201) 0:00:18.616 ******* 2026-01-20 10:54:53,214 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:53,223 p=31537 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] *** 2026-01-20 10:54:53,223 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.221) 0:00:18.840 ******* 2026-01-20 10:54:53,223 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.221) 0:00:18.838 ******* 2026-01-20 10:54:53,753 p=31537 u=zuul n=ansible | fatal: [localhost]: FAILED! 
=> changed: false elapsed: 0 msg: 'Status code was -1 and not [200]: Request failed: ' redirected: false status: -1 url: http://38.102.83.51:8766/gating.repo 2026-01-20 10:54:53,753 p=31537 u=zuul n=ansible | ...ignoring 2026-01-20 10:54:53,762 p=31537 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] *** 2026-01-20 10:54:53,762 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.538) 0:00:19.378 ******* 2026-01-20 10:54:53,762 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.538) 0:00:19.377 ******* 2026-01-20 10:54:53,792 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:53,800 p=31537 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] *** 2026-01-20 10:54:53,800 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.038) 0:00:19.417 ******* 2026-01-20 10:54:53,800 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.038) 0:00:19.415 ******* 2026-01-20 10:54:53,831 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:53,838 p=31537 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] *** 2026-01-20 10:54:53,838 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.037) 0:00:19.454 ******* 2026-01-20 10:54:53,838 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.037) 0:00:19.453 ******* 2026-01-20 10:54:53,871 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:53,877 p=31537 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ 
cifmw_repo_setup_output }}/{{ _comp_repo }}] *** 2026-01-20 10:54:53,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.039) 0:00:19.493 ******* 2026-01-20 10:54:53,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.039) 0:00:19.492 ******* 2026-01-20 10:54:53,905 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:53,914 p=31537 u=zuul n=ansible | TASK [repo_setup : Lower the priority of component repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] *** 2026-01-20 10:54:53,914 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.037) 0:00:19.531 ******* 2026-01-20 10:54:53,914 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.037) 0:00:19.529 ******* 2026-01-20 10:54:53,944 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:54:53,950 p=31537 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] *** 2026-01-20 10:54:53,951 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.036) 0:00:19.567 ******* 2026-01-20 10:54:53,951 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:53 +0000 (0:00:00.036) 0:00:19.566 ******* 2026-01-20 10:54:54,253 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:54:54,260 p=31537 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] *** 2026-01-20 10:54:54,260 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:54 +0000 (0:00:00.309) 0:00:19.876 ******* 2026-01-20 10:54:54,260 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:54 +0000 (0:00:00.309) 0:00:19.875 ******* 2026-01-20 10:54:54,480 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo) 2026-01-20 10:54:54,654 p=31537 u=zuul
n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo) 2026-01-20 10:54:54,661 p=31537 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] *** 2026-01-20 10:54:54,661 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:54 +0000 (0:00:00.401) 0:00:20.278 ******* 2026-01-20 10:54:54,661 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:54 +0000 (0:00:00.401) 0:00:20.276 ******* 2026-01-20 10:54:55,070 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:55,077 p=31537 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] *** 2026-01-20 10:54:55,077 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.416) 0:00:20.694 ******* 2026-01-20 10:54:55,077 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.416) 0:00:20.692 ******* 2026-01-20 10:54:55,318 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:54:55,329 p=31537 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] *** 2026-01-20 10:54:55,329 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.252) 0:00:20.946 ******* 2026-01-20 10:54:55,330 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.252) 0:00:20.944 ******* 2026-01-20 10:54:55,363 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml) 2026-01-20 10:54:55,372 p=31537 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] ********* 2026-01-20 10:54:55,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.042) 0:00:20.988 ******* 2026-01-20 10:54:55,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.042) 0:00:20.987 ******* 
2026-01-20 10:54:55,389 p=31537 u=zuul n=ansible | ok: [localhost] => cifmw_ci_setup_packages: - bash-completion - ca-certificates - git-core - make - tar - tmux - python3-pip 2026-01-20 10:54:55,398 p=31537 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] *** 2026-01-20 10:54:55,398 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.025) 0:00:21.014 ******* 2026-01-20 10:54:55,398 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:54:55 +0000 (0:00:00.025) 0:00:21.013 ******* 2026-01-20 10:55:29,325 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:29,332 p=31537 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] *** 2026-01-20 10:55:29,332 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:33.934) 0:00:54.948 ******* 2026-01-20 10:55:29,334 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:33.936) 0:00:54.949 ******* 2026-01-20 10:55:29,530 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:29,537 p=31537 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] *** 2026-01-20 10:55:29,537 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:00.204) 0:00:55.153 ******* 2026-01-20 10:55:29,537 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:00.202) 0:00:55.152 ******* 2026-01-20 10:55:29,725 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:29,731 p=31537 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] *** 2026-01-20 10:55:29,731 p=31537 
u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:00.194) 0:00:55.348 ******* 2026-01-20 10:55:29,732 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:29 +0000 (0:00:00.194) 0:00:55.346 ******* 2026-01-20 10:55:34,985 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:34,991 p=31537 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] *** 2026-01-20 10:55:34,992 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:34 +0000 (0:00:05.260) 0:01:00.608 ******* 2026-01-20 10:55:34,992 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:34 +0000 (0:00:05.260) 0:01:00.607 ******* 2026-01-20 10:55:35,014 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:35,021 p=31537 u=zuul n=ansible | TASK [ci_setup : Create completion file] *************************************** 2026-01-20 10:55:35,021 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.029) 0:01:00.637 ******* 2026-01-20 10:55:35,021 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.029) 0:01:00.636 ******* 2026-01-20 10:55:35,300 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:35,306 p=31537 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] *** 2026-01-20 10:55:35,306 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.285) 0:01:00.923 ******* 2026-01-20 10:55:35,306 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.285) 0:01:00.921 ******* 2026-01-20 10:55:35,617 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:35,623 p=31537 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] **** 2026-01-20 10:55:35,623 p=31537 u=zuul n=ansible 
| Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.316) 0:01:01.239 ******* 2026-01-20 10:55:35,623 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.316) 0:01:01.238 ******* 2026-01-20 10:55:35,639 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:35,646 p=31537 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] *** 2026-01-20 10:55:35,646 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.023) 0:01:01.263 ******* 2026-01-20 10:55:35,646 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.023) 0:01:01.261 ******* 2026-01-20 10:55:35,660 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:35,665 p=31537 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] *** 2026-01-20 10:55:35,665 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.282 ******* 2026-01-20 10:55:35,665 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.280 ******* 2026-01-20 10:55:35,678 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:35,685 p=31537 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] *** 2026-01-20 10:55:35,685 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.301 ******* 2026-01-20 10:55:35,685 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.300 ******* 2026-01-20 10:55:35,698 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:35,704 p=31537 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] *** 2026-01-20 10:55:35,704 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 
(0:00:00.019) 0:01:01.321 ******* 2026-01-20 10:55:35,704 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.019) 0:01:01.319 ******* 2026-01-20 10:55:35,716 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:35,724 p=31537 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] *** 2026-01-20 10:55:35,724 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.020) 0:01:01.341 ******* 2026-01-20 10:55:35,724 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.020) 0:01:01.339 ******* 2026-01-20 10:55:35,742 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:35,749 p=31537 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] *** 2026-01-20 10:55:35,749 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.024) 0:01:01.365 ******* 2026-01-20 10:55:35,749 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:35 +0000 (0:00:00.024) 0:01:01.364 ******* 2026-01-20 10:55:36,033 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr) 2026-01-20 10:55:36,233 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs) 2026-01-20 10:55:36,429 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp) 2026-01-20 10:55:36,633 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes) 2026-01-20 10:55:36,818 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2026-01-20 10:55:36,830 p=31537 
u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] *** 2026-01-20 10:55:36,830 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:36 +0000 (0:00:01.081) 0:01:02.447 ******* 2026-01-20 10:55:36,831 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:36 +0000 (0:00:01.081) 0:01:02.445 ******* 2026-01-20 10:55:36,949 p=31537 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] *** 2026-01-20 10:55:36,949 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:36 +0000 (0:00:00.118) 0:01:02.566 ******* 2026-01-20 10:55:36,949 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:36 +0000 (0:00:00.118) 0:01:02.564 ******* 2026-01-20 10:55:37,167 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts) 2026-01-20 10:55:37,326 p=31537 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks) 2026-01-20 10:55:37,487 p=31537 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2026-01-20 10:55:37,494 p=31537 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] *** 2026-01-20 10:55:37,494 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.544) 0:01:03.110 ******* 2026-01-20 10:55:37,494 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.544) 0:01:03.109 ******* 2026-01-20 10:55:37,526 p=31537 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] *** 2026-01-20 10:55:37,526 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.032) 0:01:03.142 ******* 2026-01-20 10:55:37,526 p=31537 u=zuul 
n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.032) 0:01:03.141 ******* 2026-01-20 10:55:37,568 p=31537 u=zuul n=ansible | ok: [localhost] => (item={'branch': 'main', 'change': '320', 'change_url': 'https://github.com/openstack-k8s-operators/watcher-operator/pull/320', 'commit_id': '581f46572d07c53c87a11aa044b02e73f253eea6', 'patchset': '581f46572d07c53c87a11aa044b02e73f253eea6', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/watcher-operator', 'name': 'openstack-k8s-operators/watcher-operator', 'short_name': 'watcher-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/watcher-operator'}, 'topic': None}) 2026-01-20 10:55:37,576 p=31537 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] *** 2026-01-20 10:55:37,576 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.050) 0:01:03.193 ******* 2026-01-20 10:55:37,576 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.050) 0:01:03.191 ******* 2026-01-20 10:55:37,619 p=31537 u=zuul n=ansible | ok: [localhost] => (item={'branch': 'main', 'change': '320', 'change_url': 'https://github.com/openstack-k8s-operators/watcher-operator/pull/320', 'commit_id': '581f46572d07c53c87a11aa044b02e73f253eea6', 'patchset': '581f46572d07c53c87a11aa044b02e73f253eea6', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/watcher-operator', 'name': 'openstack-k8s-operators/watcher-operator', 'short_name': 'watcher-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/watcher-operator'}, 'topic': None}) => msg: | _repo_operator_name: watcher _repo_operator_info: [{'key': 'WATCHER_REPO', 'value': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator'}, 
{'key': 'WATCHER_BRANCH', 'value': ''}] cifmw_install_yamls_operators_repo: {'WATCHER_REPO': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator', 'WATCHER_BRANCH': ''} 2026-01-20 10:55:37,632 p=31537 u=zuul n=ansible | TASK [Customize install_yamls devsetup vars if needed name=install_yamls, tasks_from=customize_devsetup_vars.yml] *** 2026-01-20 10:55:37,632 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.056) 0:01:03.249 ******* 2026-01-20 10:55:37,632 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.056) 0:01:03.247 ******* 2026-01-20 10:55:37,674 p=31537 u=zuul n=ansible | TASK [install_yamls : Update opm_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^opm_version:, line=opm_version: {{ cifmw_install_yamls_opm_version }}, state=present] *** 2026-01-20 10:55:37,674 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.042) 0:01:03.291 ******* 2026-01-20 10:55:37,674 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.042) 0:01:03.289 ******* 2026-01-20 10:55:37,695 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:37,701 p=31537 u=zuul n=ansible | TASK [install_yamls : Update sdk_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^sdk_version:, line=sdk_version: {{ cifmw_install_yamls_sdk_version }}, state=present] *** 2026-01-20 10:55:37,702 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.027) 0:01:03.318 ******* 2026-01-20 10:55:37,702 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.027) 0:01:03.317 ******* 2026-01-20 10:55:37,719 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:37,726 p=31537 u=zuul n=ansible | TASK [install_yamls : Update go_version in install_yamls devsetup/vars/default.yaml path={{ 
cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^go_version:, line=go_version: {{ cifmw_install_yamls_go_version }}, state=present] *** 2026-01-20 10:55:37,726 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.024) 0:01:03.342 ******* 2026-01-20 10:55:37,726 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.024) 0:01:03.341 ******* 2026-01-20 10:55:37,745 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:37,753 p=31537 u=zuul n=ansible | TASK [install_yamls : Update kustomize_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^kustomize_version:, line=kustomize_version: {{ cifmw_install_yamls_kustomize_version }}, state=present] *** 2026-01-20 10:55:37,753 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.026) 0:01:03.369 ******* 2026-01-20 10:55:37,753 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.026) 0:01:03.368 ******* 2026-01-20 10:55:37,777 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:37,790 p=31537 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] *** 2026-01-20 10:55:37,790 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.037) 0:01:03.407 ******* 2026-01-20 10:55:37,790 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.037) 0:01:03.405 ******* 2026-01-20 10:55:37,881 p=31537 u=zuul n=ansible | ok: [localhost] => (item={'BMO_SETUP': False, 'INSTALL_CERT_MANAGER': False}) 2026-01-20 10:55:37,890 p=31537 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | 
items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|antelope|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] *** 2026-01-20 10:55:37,890 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.099) 0:01:03.507 ******* 2026-01-20 10:55:37,890 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.099) 0:01:03.505 ******* 2026-01-20 10:55:37,930 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:37,937 p=31537 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] *** 2026-01-20 10:55:37,937 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.046) 0:01:03.553 ******* 2026-01-20 10:55:37,937 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:37 +0000 (0:00:00.046) 0:01:03.552 ******* 2026-01-20 10:55:38,534 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:38,543 p=31537 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] *** 2026-01-20 10:55:38,543 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.605) 0:01:04.159 ******* 2026-01-20 10:55:38,543 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.605) 0:01:04.158 ******* 2026-01-20 10:55:38,732 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:38,739 p=31537 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor 
cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] *** 2026-01-20 10:55:38,739 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.196) 0:01:04.355 ******* 2026-01-20 10:55:38,739 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.196) 0:01:04.354 ******* 2026-01-20 10:55:38,776 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:38,797 p=31537 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] *** 2026-01-20 10:55:38,797 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.057) 0:01:04.413 ******* 2026-01-20 10:55:38,797 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:38 +0000 (0:00:00.058) 0:01:04.412 ******* 2026-01-20 10:55:39,224 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:39,237 p=31537 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] *** 2026-01-20 10:55:39,238 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.440) 0:01:04.854 ******* 2026-01-20 10:55:39,238 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.440) 0:01:04.853 ******* 2026-01-20 10:55:39,266 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:39,277 p=31537 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] *** 2026-01-20 10:55:39,277 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.039) 0:01:04.894 ******* 2026-01-20 10:55:39,277 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.039) 
0:01:04.892 ******* 2026-01-20 10:55:39,296 p=31537 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_environment: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: false OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator 2026-01-20 10:55:39,305 p=31537 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2026-01-20 10:55:39,305 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.027) 0:01:04.921 ******* 2026-01-20 10:55:39,305 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.027) 0:01:04.920 ******* 2026-01-20 10:55:39,337 p=31537 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_OS_IMG_TYPE: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: 
crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: false BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest 
DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: 
https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: false INSTALL_NMSTATE: true || false 
INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: 
https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main 
MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '1234567842' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 
NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: 
config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: 
https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12345678' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: osp-secret SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: 
https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator tripleo_deploy: 'export REGISTRY_PWD:' 2026-01-20 10:55:39,345 p=31537 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] *** 2026-01-20 10:55:39,345 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.040) 0:01:04.961 ******* 2026-01-20 10:55:39,345 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.040) 0:01:04.960 ******* 2026-01-20 10:55:39,683 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:39,691 p=31537 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] ***** 2026-01-20 10:55:39,691 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.346) 0:01:05.308 ******* 2026-01-20 10:55:39,691 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.346) 0:01:05.306 ******* 
2026-01-20 10:55:39,715 p=31537 u=zuul n=ansible | ok: [localhost] => cifmw_generate_makes: changed: false debug: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile: - all - help - cleanup - deploy_cleanup - wait - crc_storage - crc_storage_cleanup - crc_storage_release - crc_storage_with_retries - crc_storage_cleanup_with_retries - operator_namespace - namespace - namespace_cleanup - input - input_cleanup - crc_bmo_setup - crc_bmo_cleanup - openstack_prep - openstack - openstack_wait - openstack_init - openstack_cleanup - openstack_repo - openstack_deploy_prep - openstack_deploy - openstack_wait_deploy - openstack_deploy_cleanup - openstack_update_run - update_services - update_system - openstack_patch_version - edpm_deploy_generate_keys - edpm_patch_ansible_runner_image - edpm_deploy_prep - edpm_deploy_cleanup - edpm_deploy - edpm_deploy_baremetal_prep - edpm_deploy_baremetal - edpm_wait_deploy_baremetal - edpm_wait_deploy - edpm_register_dns - edpm_nova_discover_hosts - openstack_crds - openstack_crds_cleanup - edpm_deploy_networker_prep - edpm_deploy_networker_cleanup - edpm_deploy_networker - infra_prep - infra - infra_cleanup - dns_deploy_prep - dns_deploy - dns_deploy_cleanup - netconfig_deploy_prep - netconfig_deploy - netconfig_deploy_cleanup - memcached_deploy_prep - memcached_deploy - memcached_deploy_cleanup - keystone_prep - keystone - keystone_cleanup - keystone_deploy_prep - keystone_deploy - keystone_deploy_cleanup - barbican_prep - barbican - barbican_cleanup - barbican_deploy_prep - barbican_deploy - barbican_deploy_validate - barbican_deploy_cleanup - mariadb - mariadb_cleanup - mariadb_deploy_prep - mariadb_deploy - mariadb_deploy_cleanup - placement_prep - placement - placement_cleanup - placement_deploy_prep - placement_deploy - placement_deploy_cleanup - glance_prep - glance - glance_cleanup - glance_deploy_prep - glance_deploy - glance_deploy_cleanup - ovn_prep - ovn - ovn_cleanup - ovn_deploy_prep - ovn_deploy - 
ovn_deploy_cleanup - neutron_prep - neutron - neutron_cleanup - neutron_deploy_prep - neutron_deploy - neutron_deploy_cleanup - cinder_prep - cinder - cinder_cleanup - cinder_deploy_prep - cinder_deploy - cinder_deploy_cleanup - rabbitmq_prep - rabbitmq - rabbitmq_cleanup - rabbitmq_deploy_prep - rabbitmq_deploy - rabbitmq_deploy_cleanup - ironic_prep - ironic - ironic_cleanup - ironic_deploy_prep - ironic_deploy - ironic_deploy_cleanup - octavia_prep - octavia - octavia_cleanup - octavia_deploy_prep - octavia_deploy - octavia_deploy_cleanup - designate_prep - designate - designate_cleanup - designate_deploy_prep - designate_deploy - designate_deploy_cleanup - nova_prep - nova - nova_cleanup - nova_deploy_prep - nova_deploy - nova_deploy_cleanup - mariadb_kuttl_run - mariadb_kuttl - kuttl_db_prep - kuttl_db_cleanup - kuttl_common_prep - kuttl_common_cleanup - keystone_kuttl_run - keystone_kuttl - barbican_kuttl_run - barbican_kuttl - placement_kuttl_run - placement_kuttl - cinder_kuttl_run - cinder_kuttl - neutron_kuttl_run - neutron_kuttl - octavia_kuttl_run - octavia_kuttl - designate_kuttl - designate_kuttl_run - ovn_kuttl_run - ovn_kuttl - infra_kuttl_run - infra_kuttl - ironic_kuttl_run - ironic_kuttl - ironic_kuttl_crc - heat_kuttl_run - heat_kuttl - heat_kuttl_crc - ansibleee_kuttl_run - ansibleee_kuttl_cleanup - ansibleee_kuttl_prep - ansibleee_kuttl - glance_kuttl_run - glance_kuttl - manila_kuttl_run - manila_kuttl - swift_kuttl_run - swift_kuttl - horizon_kuttl_run - horizon_kuttl - openstack_kuttl_run - openstack_kuttl - mariadb_chainsaw_run - mariadb_chainsaw - horizon_prep - horizon - horizon_cleanup - horizon_deploy_prep - horizon_deploy - horizon_deploy_cleanup - heat_prep - heat - heat_cleanup - heat_deploy_prep - heat_deploy - heat_deploy_cleanup - ansibleee_prep - ansibleee - ansibleee_cleanup - baremetal_prep - baremetal - baremetal_cleanup - ceph_help - ceph - ceph_cleanup - rook_prep - rook - rook_deploy_prep - rook_deploy - rook_crc_disk - 
rook_cleanup - lvms - nmstate - nncp - nncp_cleanup - netattach - netattach_cleanup - metallb - metallb_config - metallb_config_cleanup - metallb_cleanup - loki - loki_cleanup - loki_deploy - loki_deploy_cleanup - netobserv - netobserv_cleanup - netobserv_deploy - netobserv_deploy_cleanup - manila_prep - manila - manila_cleanup - manila_deploy_prep - manila_deploy - manila_deploy_cleanup - telemetry_prep - telemetry - telemetry_cleanup - telemetry_deploy_prep - telemetry_deploy - telemetry_deploy_cleanup - telemetry_kuttl_run - telemetry_kuttl - swift_prep - swift - swift_cleanup - swift_deploy_prep - swift_deploy - swift_deploy_cleanup - certmanager - certmanager_cleanup - validate_marketplace - redis_deploy_prep - redis_deploy - redis_deploy_cleanup - set_slower_etcd_profile /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile: - help - download_tools - nfs - nfs_cleanup - crc - crc_cleanup - crc_scrub - crc_attach_default_interface - crc_attach_default_interface_cleanup - ipv6_lab_network - ipv6_lab_network_cleanup - ipv6_lab_nat64_router - ipv6_lab_nat64_router_cleanup - ipv6_lab_sno - ipv6_lab_sno_cleanup - ipv6_lab - ipv6_lab_cleanup - attach_default_interface - attach_default_interface_cleanup - network_isolation_bridge - network_isolation_bridge_cleanup - edpm_baremetal_compute - edpm_compute - edpm_compute_bootc - edpm_ansible_runner - edpm_computes_bgp - edpm_compute_repos - edpm_compute_cleanup - edpm_networker - edpm_networker_cleanup - edpm_deploy_instance - tripleo_deploy - standalone_deploy - standalone_sync - standalone - standalone_cleanup - standalone_snapshot - standalone_revert - cifmw_prepare - cifmw_cleanup - bmaas_network - bmaas_network_cleanup - bmaas_route_crc_and_crc_bmaas_networks - bmaas_route_crc_and_crc_bmaas_networks_cleanup - bmaas_crc_attach_network - bmaas_crc_attach_network_cleanup - bmaas_crc_baremetal_bridge - bmaas_crc_baremetal_bridge_cleanup - bmaas_baremetal_net_nad - 
bmaas_baremetal_net_nad_cleanup - bmaas_metallb - bmaas_metallb_cleanup - bmaas_virtual_bms - bmaas_virtual_bms_cleanup - bmaas_sushy_emulator - bmaas_sushy_emulator_cleanup - bmaas_sushy_emulator_wait - bmaas_generate_nodes_yaml - bmaas - bmaas_cleanup failed: false success: true 2026-01-20 10:55:39,724 p=31537 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] *** 2026-01-20 10:55:39,724 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.032) 0:01:05.341 ******* 2026-01-20 10:55:39,724 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:39 +0000 (0:00:00.032) 0:01:05.339 ******* 2026-01-20 10:55:40,123 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:40,130 p=31537 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] *** 2026-01-20 10:55:40,130 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.405) 0:01:05.746 ******* 2026-01-20 10:55:40,130 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.405) 0:01:05.745 ******* 2026-01-20 10:55:40,150 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:40,164 p=31537 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] *** 2026-01-20 10:55:40,164 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.034) 0:01:05.780 ******* 2026-01-20 10:55:40,164 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 
(0:00:00.034) 0:01:05.779 ******* 2026-01-20 10:55:40,920 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:40,927 p=31537 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] *** 2026-01-20 10:55:40,927 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.763) 0:01:06.544 ******* 2026-01-20 10:55:40,927 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.763) 0:01:06.542 ******* 2026-01-20 10:55:40,947 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:40,960 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] *** 2026-01-20 10:55:40,960 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.032) 0:01:06.576 ******* 2026-01-20 10:55:40,960 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:40 +0000 (0:00:00.032) 0:01:06.575 ******* 2026-01-20 10:55:41,359 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:55:41,371 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:55:41,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.411) 0:01:06.988 ******* 2026-01-20 10:55:41,372 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.411) 
0:01:06.987 ******* 2026-01-20 10:55:41,431 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:41,437 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2026-01-20 10:55:41,438 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.066) 0:01:07.054 ******* 2026-01-20 10:55:41,438 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.066) 0:01:07.053 ******* 2026-01-20 10:55:41,540 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:41,557 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:55:41,557 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.119) 0:01:07.173 ******* 2026-01-20 10:55:41,557 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.119) 0:01:07.172 ******* 2026-01-20 10:55:41,685 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Download needed tools', 'inventory': 'localhost,', 'connection': 'local', 'type': 'playbook', 'source': '/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml'}) 2026-01-20 10:55:41,697 p=31537 u=zuul n=ansible | TASK [run_hook : Set playbook path for Download needed tools cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2026-01-20 10:55:41,697 p=31537 
u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.140) 0:01:07.314 ******* 2026-01-20 10:55:41,698 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.140) 0:01:07.312 ******* 2026-01-20 10:55:41,739 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:41,750 p=31537 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2026-01-20 10:55:41,750 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.052) 0:01:07.366 ******* 2026-01-20 10:55:41,750 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.052) 0:01:07.365 ******* 2026-01-20 10:55:41,952 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:41,963 p=31537 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] *** 2026-01-20 10:55:41,963 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.212) 0:01:07.579 ******* 2026-01-20 10:55:41,963 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.212) 0:01:07.578 ******* 2026-01-20 10:55:41,979 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:55:41,989 p=31537 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2026-01-20 10:55:41,989 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.025) 0:01:07.605 ******* 2026-01-20 10:55:41,989 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:41 +0000 (0:00:00.025) 0:01:07.604 ******* 2026-01-20 10:55:42,168 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:42,179 p=31537 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2026-01-20 10:55:42,179 p=31537 u=zuul 
n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.190) 0:01:07.795 ******* 2026-01-20 10:55:42,179 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.190) 0:01:07.794 ******* 2026-01-20 10:55:42,196 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:42,206 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2026-01-20 10:55:42,206 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.027) 0:01:07.822 ******* 2026-01-20 10:55:42,206 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.027) 0:01:07.821 ******* 2026-01-20 10:55:42,384 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:42,401 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:55:42,402 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.195) 0:01:08.018 ******* 2026-01-20 10:55:42,402 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.195) 0:01:08.017 ******* 2026-01-20 10:55:42,607 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:55:42,620 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Download needed tools] *************** 2026-01-20 10:55:42,620 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.218) 0:01:08.237 ******* 2026-01-20 10:55:42,620 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:55:42 +0000 (0:00:00.218) 0:01:08.235 ******* 2026-01-20 10:55:42,708 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_run_hook_without_retry.log 2026-01-20 10:56:19,010 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:19,019 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Download needed tools] ****************** 2026-01-20 10:56:19,019 
p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:36.399) 0:01:44.636 ******* 2026-01-20 10:56:19,020 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:36.399) 0:01:44.634 ******* 2026-01-20 10:56:19,035 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,043 p=31537 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:56:19,044 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.024) 0:01:44.660 ******* 2026-01-20 10:56:19,044 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.024) 0:01:44.659 ******* 2026-01-20 10:56:19,201 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:19,208 p=31537 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:56:19,208 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.164) 0:01:44.825 ******* 2026-01-20 10:56:19,209 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.164) 0:01:44.823 ******* 2026-01-20 10:56:19,221 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,254 p=31537 u=zuul n=ansible | PLAY [Prepare host virtualization] ********************************************* 2026-01-20 10:56:19,271 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-20 10:56:19,271 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.062) 0:01:44.888 ******* 2026-01-20 10:56:19,271 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.062) 0:01:44.886 ******* 2026-01-20 10:56:19,331 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:19,344 p=31537 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] *************** 
2026-01-20 10:56:19,345 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.073) 0:01:44.961 ******* 2026-01-20 10:56:19,345 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.073) 0:01:44.960 ******* 2026-01-20 10:56:19,370 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,380 p=31537 u=zuul n=ansible | TASK [Prepare OpenShift provisioner node name=openshift_provisioner_node] ****** 2026-01-20 10:56:19,380 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.035) 0:01:44.996 ******* 2026-01-20 10:56:19,380 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.035) 0:01:44.995 ******* 2026-01-20 10:56:19,402 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,433 p=31537 u=zuul n=ansible | PLAY [Run cifmw_setup infra, build package, container and operators, deploy EDPM] *** 2026-01-20 10:56:19,472 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-20 10:56:19,472 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.091) 0:01:45.088 ******* 2026-01-20 10:56:19,472 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.091) 0:01:45.087 ******* 2026-01-20 10:56:19,522 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:19,531 p=31537 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2026-01-20 10:56:19,531 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.059) 0:01:45.148 ******* 2026-01-20 10:56:19,531 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.059) 0:01:45.146 ******* 2026-01-20 10:56:19,704 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:19,713 p=31537 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file
existence that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] *** 2026-01-20 10:56:19,713 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.182) 0:01:45.330 ******* 2026-01-20 10:56:19,713 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.182) 0:01:45.328 ******* 2026-01-20 10:56:19,784 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,791 p=31537 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2026-01-20 10:56:19,791 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.077) 0:01:45.407 ******* 2026-01-20 10:56:19,791 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.077) 0:01:45.406 ******* 2026-01-20 10:56:19,814 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,823 p=31537 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | b64decode | from_yaml }}, cacheable=True] *** 2026-01-20 10:56:19,823 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.031) 0:01:45.439 ******* 2026-01-20 10:56:19,823 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.031) 0:01:45.438 ******* 2026-01-20 10:56:19,845 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,861 p=31537 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] ***************************************** 2026-01-20 10:56:19,861 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.038) 0:01:45.478 ******* 2026-01-20 10:56:19,861 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.038) 0:01:45.476 ******* 2026-01-20 10:56:19,882 p=31537 u=zuul
n=ansible | skipping: [localhost] 2026-01-20 10:56:19,891 p=31537 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] *********************************************** 2026-01-20 10:56:19,891 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.029) 0:01:45.507 ******* 2026-01-20 10:56:19,891 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.029) 0:01:45.506 ******* 2026-01-20 10:56:19,911 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,922 p=31537 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] ************** 2026-01-20 10:56:19,922 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.031) 0:01:45.538 ******* 2026-01-20 10:56:19,922 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.031) 0:01:45.537 ******* 2026-01-20 10:56:19,940 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:19,949 p=31537 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:56:19,949 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.027) 0:01:45.566 ******* 2026-01-20 10:56:19,949 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:19 +0000 (0:00:00.027) 0:01:45.564 ******* 2026-01-20 10:56:20,139 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:20,147 p=31537 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] ***************** 2026-01-20 10:56:20,147 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.197) 0:01:45.763 ******* 2026-01-20 10:56:20,147 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.197) 0:01:45.762 ******* 2026-01-20 10:56:20,177 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for 
localhost 2026-01-20 10:56:20,190 p=31537 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2026-01-20 10:56:20,190 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.043) 0:01:45.806 ******* 2026-01-20 10:56:20,190 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.043) 0:01:45.805 ******* 2026-01-20 10:56:20,211 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,219 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2026-01-20 10:56:20,219 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.029) 0:01:45.835 ******* 2026-01-20 10:56:20,219 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.029) 0:01:45.834 ******* 2026-01-20 10:56:20,240 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,249 p=31537 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] *** 2026-01-20 10:56:20,249 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:45.866 ******* 2026-01-20 10:56:20,249 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:45.864 ******* 2026-01-20 10:56:20,272 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,281 p=31537 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | 
default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{ cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] *** 2026-01-20 10:56:20,281 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.031) 0:01:45.897 ******* 2026-01-20 10:56:20,281 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.031) 0:01:45.896 ******* 2026-01-20 10:56:20,314 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:20,321 p=31537 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] *** 2026-01-20 10:56:20,321 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.040) 0:01:45.938 ******* 2026-01-20 10:56:20,321 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.040) 0:01:45.936 ******* 2026-01-20 10:56:20,488 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:20,497 p=31537 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] *** 2026-01-20 10:56:20,497 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.175) 0:01:46.114 ******* 2026-01-20 10:56:20,497 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 
10:56:20 +0000 (0:00:00.175) 0:01:46.112 ******* 2026-01-20 10:56:20,522 p=31537 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2026-01-20 10:56:20,531 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] *** 2026-01-20 10:56:20,531 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.033) 0:01:46.147 ******* 2026-01-20 10:56:20,531 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.033) 0:01:46.146 ******* 2026-01-20 10:56:20,550 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,558 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] *** 2026-01-20 10:56:20,559 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.027) 0:01:46.175 ******* 2026-01-20 10:56:20,559 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.027) 0:01:46.174 ******* 2026-01-20 10:56:20,579 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,588 p=31537 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] *** 2026-01-20 10:56:20,588 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.029) 0:01:46.204 ******* 2026-01-20 10:56:20,588 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.029) 0:01:46.203 ******* 2026-01-20 10:56:20,609 p=31537 
u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,618 p=31537 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] *** 2026-01-20 10:56:20,618 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:46.235 ******* 2026-01-20 10:56:20,618 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:46.233 ******* 2026-01-20 10:56:20,643 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:20,651 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] ***************** 2026-01-20 10:56:20,651 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.033) 0:01:46.268 ******* 2026-01-20 10:56:20,651 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.033) 0:01:46.266 ******* 2026-01-20 10:56:20,676 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost 2026-01-20 10:56:20,689 p=31537 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] *** 2026-01-20 10:56:20,689 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.037) 0:01:46.306 ******* 2026-01-20 10:56:20,689 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.037) 0:01:46.304 ******* 2026-01-20 10:56:20,710 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:20,720 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ 
cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] *** 2026-01-20 10:56:20,720 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:46.336 ******* 2026-01-20 10:56:20,720 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:20 +0000 (0:00:00.030) 0:01:46.335 ******* 2026-01-20 10:56:20,779 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_fetch_openshift.log 2026-01-20 10:56:21,248 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:21,258 p=31537 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] *** 2026-01-20 10:56:21,258 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.537) 0:01:46.874 ******* 2026-01-20 10:56:21,258 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.537) 0:01:46.873 ******* 2026-01-20 10:56:21,275 p=31537 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2026-01-20 10:56:21,284 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] *** 2026-01-20 10:56:21,284 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.026) 0:01:46.901 ******* 2026-01-20 10:56:21,284 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.026) 0:01:46.899 ******* 2026-01-20 10:56:21,574 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:21,582 p=31537 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | 
ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] *** 2026-01-20 10:56:21,582 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.297) 0:01:47.198 ******* 2026-01-20 10:56:21,582 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.297) 0:01:47.197 ******* 2026-01-20 10:56:21,605 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:21,613 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] *** 2026-01-20 10:56:21,613 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.031) 0:01:47.230 ******* 2026-01-20 10:56:21,613 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.031) 0:01:47.228 ******* 2026-01-20 10:56:21,897 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:21,906 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] *** 2026-01-20 10:56:21,906 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.292) 0:01:47.522 ******* 2026-01-20 10:56:21,906 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:21 +0000 (0:00:00.292) 0:01:47.521 ******* 2026-01-20 10:56:22,206 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:22,217 p=31537 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] **** 2026-01-20 10:56:22,217 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.311) 0:01:47.834 ******* 2026-01-20 10:56:22,217 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.311) 0:01:47.832 ******* 2026-01-20 10:56:22,517 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:22,527 p=31537 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, 
cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] *** 2026-01-20 10:56:22,527 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.309) 0:01:48.143 ******* 2026-01-20 10:56:22,527 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.309) 0:01:48.142 ******* 2026-01-20 10:56:22,563 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:22,571 p=31537 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] *** 2026-01-20 10:56:22,571 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.044) 0:01:48.187 ******* 2026-01-20 10:56:22,571 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.044) 0:01:48.186 ******* 2026-01-20 10:56:22,969 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:22,985 p=31537 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml] *** 2026-01-20 10:56:22,986 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.414) 0:01:48.602 ******* 2026-01-20 
10:56:22,986 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:22 +0000 (0:00:00.414) 0:01:48.601 ******* 2026-01-20 10:56:23,310 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:23,326 p=31537 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] *** 2026-01-20 10:56:23,326 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:23 +0000 (0:00:00.340) 0:01:48.943 ******* 2026-01-20 10:56:23,327 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:23 +0000 (0:00:00.340) 0:01:48.942 ******* 2026-01-20 10:56:23,809 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:23,837 p=31537 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:56:23,837 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:23 +0000 (0:00:00.510) 0:01:49.454 ******* 2026-01-20 10:56:23,837 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:23 +0000 (0:00:00.510) 0:01:49.452 ******* 2026-01-20 10:56:24,026 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:24,043 p=31537 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] *** 2026-01-20 10:56:24,043 p=31537 u=zuul n=ansible | Tuesday 20 January 
2026 10:56:24 +0000 (0:00:00.205) 0:01:49.659 ******* 2026-01-20 10:56:24,043 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:24 +0000 (0:00:00.206) 0:01:49.658 ******* 2026-01-20 10:56:24,072 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:24,097 p=31537 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] *** 2026-01-20 10:56:24,097 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:24 +0000 (0:00:00.054) 0:01:49.714 ******* 2026-01-20 10:56:24,098 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:24 +0000 (0:00:00.054) 0:01:49.713 ******* 2026-01-20 10:56:25,088 p=31537 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2026-01-20 10:56:25,856 p=31537 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2026-01-20 10:56:25,869 p=31537 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] *** 2026-01-20 10:56:25,869 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:01.771) 0:01:51.485 ******* 2026-01-20 10:56:25,869 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:01.771) 0:01:51.484 ******* 2026-01-20 10:56:25,884 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:25,893 p=31537 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 
'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': {'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] *** 2026-01-20 10:56:25,893 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.023) 0:01:51.509 ******* 2026-01-20 10:56:25,893 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.023) 0:01:51.508 ******* 2026-01-20 10:56:25,914 p=31537 u=zuul n=ansible | skipping: [localhost] => (item=openstack) 2026-01-20 10:56:25,915 p=31537 u=zuul n=ansible | skipping: [localhost] => (item=openstack-operators) 2026-01-20 10:56:25,915 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:25,925 p=31537 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] *** 2026-01-20 10:56:25,925 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.031) 0:01:51.541 ******* 2026-01-20 10:56:25,925 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.031) 0:01:51.540 ******* 2026-01-20 10:56:25,945 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:25,954 p=31537 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ 
cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] *** 2026-01-20 10:56:25,954 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.029) 0:01:51.570 ******* 2026-01-20 10:56:25,954 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.029) 0:01:51.569 ******* 2026-01-20 10:56:25,977 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:25,985 p=31537 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] ************** 2026-01-20 10:56:25,985 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.030) 0:01:51.601 ******* 2026-01-20 10:56:25,985 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:25 +0000 (0:00:00.030) 0:01:51.600 ******* 2026-01-20 10:56:26,005 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,013 p=31537 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] *** 2026-01-20 10:56:26,013 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.630 ******* 2026-01-20 10:56:26,013 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.628 ******* 2026-01-20 10:56:26,033 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,041 p=31537 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] *** 2026-01-20 10:56:26,041 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.027) 0:01:51.657 ******* 2026-01-20 10:56:26,041 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.027) 0:01:51.656 ******* 2026-01-20 10:56:26,061 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,070 p=31537 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ 
cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] *** 2026-01-20 10:56:26,070 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.686 ******* 2026-01-20 10:56:26,070 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.685 ******* 2026-01-20 10:56:26,091 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,100 p=31537 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] *** 2026-01-20 10:56:26,100 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.030) 0:01:51.716 ******* 2026-01-20 10:56:26,100 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.030) 0:01:51.715 ******* 2026-01-20 10:56:26,119 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,128 p=31537 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] *** 2026-01-20 10:56:26,128 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.744 ******* 2026-01-20 10:56:26,128 
p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.028) 0:01:51.743 ******* 2026-01-20 10:56:26,912 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:26,922 p=31537 u=zuul n=ansible | TASK [openshift_setup : Create a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] *** 2026-01-20 10:56:26,922 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.794) 0:01:52.539 ******* 2026-01-20 10:56:26,922 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.794) 0:01:52.537 ******* 2026-01-20 10:56:26,946 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:26,954 p=31537 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] *** 2026-01-20 10:56:26,954 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.032) 0:01:52.571 ******* 2026-01-20 10:56:26,955 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:26 +0000 (0:00:00.032) 0:01:52.569 ******* 2026-01-20 10:56:27,918 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:27,930 p=31537 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': 
'/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] *** 2026-01-20 10:56:27,930 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:27 +0000 (0:00:00.975) 0:01:53.546 ******* 2026-01-20 10:56:27,930 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:27 +0000 (0:00:00.975) 0:01:53.545 ******* 2026-01-20 10:56:28,868 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:28,879 p=31537 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] *** 2026-01-20 10:56:28,879 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:28 +0000 (0:00:00.948) 0:01:54.495 ******* 2026-01-20 10:56:28,879 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:28 +0000 (0:00:00.948) 0:01:54.494 ******* 2026-01-20 10:56:29,614 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:29,622 p=31537 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] *** 2026-01-20 10:56:29,623 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.743) 0:01:55.239 ******* 2026-01-20 10:56:29,623 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.743) 0:01:55.238 ******* 2026-01-20 10:56:29,639 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:29,647 p=31537 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] *** 2026-01-20 10:56:29,648 
p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.025) 0:01:55.264 ******* 2026-01-20 10:56:29,648 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.025) 0:01:55.263 ******* 2026-01-20 10:56:29,662 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:29,676 p=31537 u=zuul n=ansible | TASK [Deploy Observability operator. name=openshift_obs] *********************** 2026-01-20 10:56:29,676 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.028) 0:01:55.292 ******* 2026-01-20 10:56:29,676 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.028) 0:01:55.291 ******* 2026-01-20 10:56:29,693 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:29,701 p=31537 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] ************************************** 2026-01-20 10:56:29,702 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.025) 0:01:55.318 ******* 2026-01-20 10:56:29,702 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.025) 0:01:55.317 ******* 2026-01-20 10:56:29,719 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:29,728 p=31537 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] ********************* 2026-01-20 10:56:29,728 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.026) 0:01:55.345 ******* 2026-01-20 10:56:29,728 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.026) 0:01:55.343 ******* 2026-01-20 10:56:29,817 p=31537 u=zuul n=ansible | TASK [cert_manager : Create role needed directories path={{ cifmw_cert_manager_manifests_dir }}, state=directory, mode=0755] *** 2026-01-20 10:56:29,818 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.089) 0:01:55.434 ******* 2026-01-20 10:56:29,818 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:29 +0000 (0:00:00.089) 
0:01:55.433 ******* 2026-01-20 10:56:30,039 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:30,047 p=31537 u=zuul n=ansible | TASK [cert_manager : Create the cifmw_cert_manager_operator_namespace namespace kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ cifmw_cert_manager_operator_namespace }}, kind=Namespace, state=present] *** 2026-01-20 10:56:30,048 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.230) 0:01:55.664 ******* 2026-01-20 10:56:30,048 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.230) 0:01:55.663 ******* 2026-01-20 10:56:30,799 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:30,807 p=31537 u=zuul n=ansible | TASK [cert_manager : Install from Release Manifest _raw_params=release_manifest.yml] *** 2026-01-20 10:56:30,807 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.759) 0:01:56.424 ******* 2026-01-20 10:56:30,807 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.759) 0:01:56.422 ******* 2026-01-20 10:56:30,835 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/cert_manager/tasks/release_manifest.yml for localhost 2026-01-20 10:56:30,846 p=31537 u=zuul n=ansible | TASK [cert_manager : Download release manifests url={{ cifmw_cert_manager_release_manifest }}, dest={{ cifmw_cert_manager_manifests_dir }}/cert_manager_manifest.yml, mode=0664] *** 2026-01-20 10:56:30,846 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.038) 0:01:56.463 ******* 2026-01-20 10:56:30,846 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:30 +0000 (0:00:00.038) 0:01:56.461 ******* 2026-01-20 10:56:31,516 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:31,524 p=31537 u=zuul n=ansible | TASK [cert_manager : Install cert-manager 
from release manifest kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, state=present, src={{ cifmw_cert_manager_manifests_dir }}/cert_manager_manifest.yml] *** 2026-01-20 10:56:31,525 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:31 +0000 (0:00:00.678) 0:01:57.141 ******* 2026-01-20 10:56:31,525 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:31 +0000 (0:00:00.678) 0:01:57.140 ******* 2026-01-20 10:56:34,932 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:56:34,978 p=31537 u=zuul n=ansible | TASK [cert_manager : Install from OLM Manifest _raw_params=olm_manifest.yml] *** 2026-01-20 10:56:34,978 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:34 +0000 (0:00:03.453) 0:02:00.595 ******* 2026-01-20 10:56:34,978 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:34 +0000 (0:00:03.453) 0:02:00.593 ******* 2026-01-20 10:56:34,994 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:56:35,005 p=31537 u=zuul n=ansible | TASK [cert_manager : Check for cert-manager namespace existence kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name=cert-manager, kind=Namespace, field_selectors=['status.phase=Active']] *** 2026-01-20 10:56:35,006 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:35 +0000 (0:00:00.027) 0:02:00.622 ******* 2026-01-20 10:56:35,006 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:35 +0000 (0:00:00.027) 0:02:00.621 ******* 2026-01-20 10:56:35,701 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:35,717 p=31537 u=zuul n=ansible | TASK [cert_manager : Wait for cert-manager pods to be ready kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, namespace=cert-manager, kind=Pod, 
wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Ready', 'status': 'True'}, label_selectors=['app = {{ item }}']] *** 2026-01-20 10:56:35,717 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:35 +0000 (0:00:00.711) 0:02:01.334 ******* 2026-01-20 10:56:35,717 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:35 +0000 (0:00:00.711) 0:02:01.332 ******* 2026-01-20 10:56:46,472 p=31537 u=zuul n=ansible | ok: [localhost] => (item=cainjector) 2026-01-20 10:56:58,537 p=31537 u=zuul n=ansible | ok: [localhost] => (item=webhook) 2026-01-20 10:56:59,264 p=31537 u=zuul n=ansible | ok: [localhost] => (item=cert-manager) 2026-01-20 10:56:59,282 p=31537 u=zuul n=ansible | TASK [cert_manager : Create $HOME/bin dir path={{ ansible_user_dir }}/bin, state=directory, mode=0755] *** 2026-01-20 10:56:59,282 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:59 +0000 (0:00:23.564) 0:02:24.899 ******* 2026-01-20 10:56:59,282 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:59 +0000 (0:00:23.564) 0:02:24.897 ******* 2026-01-20 10:56:59,467 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:56:59,475 p=31537 u=zuul n=ansible | TASK [cert_manager : Install cert-manager cmctl CLI url=https://github.com/cert-manager/cmctl/releases/{{ cifmw_cert_manager_version }}/download/cmctl_{{ _os }}_{{ _arch }}, dest={{ ansible_user_dir }}/bin/cmctl, mode=0755] *** 2026-01-20 10:56:59,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:59 +0000 (0:00:00.192) 0:02:25.092 ******* 2026-01-20 10:56:59,475 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:56:59 +0000 (0:00:00.193) 0:02:25.090 ******* 2026-01-20 10:57:01,309 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:01,328 p=31537 u=zuul n=ansible | TASK [cert_manager : Verify cert_manager api _raw_params={{ ansible_user_dir }}/bin/cmctl check api --wait=2m] *** 2026-01-20 10:57:01,329 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:01.853) 
0:02:26.945 ******* 2026-01-20 10:57:01,329 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:01.853) 0:02:26.944 ******* 2026-01-20 10:57:01,693 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:01,713 p=31537 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] **************** 2026-01-20 10:57:01,713 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.384) 0:02:27.329 ******* 2026-01-20 10:57:01,713 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.384) 0:02:27.328 ******* 2026-01-20 10:57:01,734 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:01,747 p=31537 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ******************************** 2026-01-20 10:57:01,747 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.033) 0:02:27.363 ******* 2026-01-20 10:57:01,747 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.033) 0:02:27.362 ******* 2026-01-20 10:57:01,765 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:01,777 p=31537 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] ******************* 2026-01-20 10:57:01,777 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.029) 0:02:27.393 ******* 2026-01-20 10:57:01,777 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.029) 0:02:27.392 ******* 2026-01-20 10:57:01,799 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:01,810 p=31537 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************ 2026-01-20 10:57:01,811 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.033) 0:02:27.427 ******* 2026-01-20 10:57:01,811 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.033) 0:02:27.426 ******* 2026-01-20 10:57:01,829 p=31537 u=zuul 
n=ansible | skipping: [localhost] 2026-01-20 10:57:01,841 p=31537 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************ 2026-01-20 10:57:01,842 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.031) 0:02:27.458 ******* 2026-01-20 10:57:01,842 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.031) 0:02:27.457 ******* 2026-01-20 10:57:01,865 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:01,877 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:01,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.035) 0:02:27.494 ******* 2026-01-20 10:57:01,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.035) 0:02:27.492 ******* 2026-01-20 10:57:01,942 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:01,952 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2026-01-20 10:57:01,952 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.074) 0:02:27.568 ******* 2026-01-20 10:57:01,952 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:01 +0000 (0:00:00.074) 0:02:27.567 ******* 2026-01-20 10:57:02,056 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,067 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:02,068 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.115) 0:02:27.684 ******* 2026-01-20 10:57:02,068 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.115) 0:02:27.683 ******* 2026-01-20 10:57:02,191 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Fetch nodes facts and save them as parameters', 'type': 'playbook', 'inventory': '/home/zuul/ci-framework-data/artifacts/zuul_inventory.yml', 'source': 'fetch_compute_facts.yml'}) 2026-01-20 10:57:02,206 p=31537 u=zuul n=ansible | TASK [run_hook : Set playbook path for Fetch nodes facts and save them as parameters cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2026-01-20 10:57:02,206 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.138) 0:02:27.822 ******* 2026-01-20 10:57:02,206 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.138) 0:02:27.821 ******* 2026-01-20 10:57:02,247 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,255 
p=31537 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2026-01-20 10:57:02,255 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.049) 0:02:27.872 ******* 2026-01-20 10:57:02,255 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.049) 0:02:27.870 ******* 2026-01-20 10:57:02,455 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,463 p=31537 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] *** 2026-01-20 10:57:02,464 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.208) 0:02:28.080 ******* 2026-01-20 10:57:02,464 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.208) 0:02:28.079 ******* 2026-01-20 10:57:02,478 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:02,486 p=31537 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2026-01-20 10:57:02,486 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.022) 0:02:28.102 ******* 2026-01-20 10:57:02,486 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.022) 0:02:28.101 ******* 2026-01-20 10:57:02,654 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,662 p=31537 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2026-01-20 10:57:02,662 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.176) 0:02:28.279 ******* 2026-01-20 10:57:02,662 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.176) 0:02:28.277 ******* 2026-01-20 10:57:02,681 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,689 p=31537 u=zuul 
n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2026-01-20 10:57:02,689 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.026) 0:02:28.305 ******* 2026-01-20 10:57:02,689 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.026) 0:02:28.304 ******* 2026-01-20 10:57:02,872 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:02,881 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-20 10:57:02,881 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.192) 0:02:28.497 ******* 2026-01-20 10:57:02,881 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:02 +0000 (0:00:00.192) 0:02:28.496 ******* 2026-01-20 10:57:03,107 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:03,115 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Fetch nodes facts and save them as parameters] *** 2026-01-20 10:57:03,116 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:03 +0000 (0:00:00.234) 0:02:28.732 ******* 2026-01-20 10:57:03,116 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:03 +0000 (0:00:00.234) 0:02:28.731 ******* 2026-01-20 10:57:03,179 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_002_run_hook_without_retry_fetch.log 2026-01-20 10:57:14,297 p=31537 u=zuul n=ansible | changed: [localhost] 2026-01-20 10:57:14,307 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Fetch nodes facts and save them as parameters] *** 2026-01-20 10:57:14,307 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:11.191) 0:02:39.923 ******* 2026-01-20 10:57:14,307 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:11.191) 0:02:39.922 ******* 2026-01-20 10:57:14,326 p=31537 u=zuul n=ansible | skipping: 
[localhost] 2026-01-20 10:57:14,334 p=31537 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:57:14,334 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.027) 0:02:39.951 ******* 2026-01-20 10:57:14,334 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.027) 0:02:39.949 ******* 2026-01-20 10:57:14,547 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,555 p=31537 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-20 10:57:14,555 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.221) 0:02:40.172 ******* 2026-01-20 10:57:14,556 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.221) 0:02:40.170 ******* 2026-01-20 10:57:14,579 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,598 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:14,598 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.042) 0:02:40.215 ******* 2026-01-20 10:57:14,598 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.042) 0:02:40.213 ******* 2026-01-20 10:57:14,648 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,657 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2026-01-20 10:57:14,657 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.058) 0:02:40.273 ******* 2026-01-20 10:57:14,657 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.058) 0:02:40.272 ******* 2026-01-20 10:57:14,755 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,765 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_package_build _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:14,765 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.108) 0:02:40.382 ******* 2026-01-20 10:57:14,765 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.108) 0:02:40.380 ******* 2026-01-20 10:57:14,862 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:14,877 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-20 10:57:14,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.111) 0:02:40.494 ******* 2026-01-20 10:57:14,877 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.111) 0:02:40.492 ******* 2026-01-20 10:57:14,919 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:14,929 p=31537 u=zuul n=ansible | TASK [pkg_build : Generate volume list build_volumes={% for pkg in cifmw_pkg_build_list -%} - "{{ pkg.src|default(cifmw_pkg_build_pkg_basedir ~ '/' ~ pkg.name) }}:/root/src/{{ pkg.name }}:z" - "{{ cifmw_pkg_build_basedir }}/volumes/packages/{{ pkg.name }}:/root/{{ pkg.name }}:z" - "{{ cifmw_pkg_build_basedir }}/logs/build_{{ pkg.name }}:/root/logs:z" {% endfor -%} - "{{ cifmw_pkg_build_basedir }}/volumes/packages/gating_repo:/root/gating_repo:z" - "{{ cifmw_pkg_build_basedir }}/artifacts/repositories:/root/yum.repos.d:z,ro" - "{{ cifmw_pkg_build_basedir }}/artifacts/build-packages.yml:/root/playbook.yml:z,ro" ] *** 2026-01-20 10:57:14,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 
+0000 (0:00:00.051) 0:02:40.545 ******* 2026-01-20 10:57:14,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.051) 0:02:40.544 ******* 2026-01-20 10:57:14,951 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:14,960 p=31537 u=zuul n=ansible | TASK [pkg_build : Build package using container name={{ pkg.name }}-builder, auto_remove=True, detach=False, privileged=True, log_driver=k8s-file, log_level=info, log_opt={'path': '{{ cifmw_pkg_build_basedir }}/logs/{{ pkg.name }}-builder.log'}, image={{ cifmw_pkg_build_ctx_name }}, volume={{ build_volumes | from_yaml }}, security_opt=['label=disable', 'seccomp=unconfined', 'apparmor=unconfined'], env={'PROJECT': '{{ pkg.name }}'}, command=ansible-playbook -i localhost, -c local playbook.yml] *** 2026-01-20 10:57:14,960 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.030) 0:02:40.576 ******* 2026-01-20 10:57:14,960 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.030) 0:02:40.575 ******* 2026-01-20 10:57:14,973 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:14,986 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:14,987 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.026) 0:02:40.603 ******* 2026-01-20 10:57:14,987 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:14 +0000 (0:00:00.026) 0:02:40.602 ******* 2026-01-20 10:57:15,051 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,059 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2026-01-20 10:57:15,060 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.072) 0:02:40.676 ******* 2026-01-20 10:57:15,060 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.072) 0:02:40.675 ******* 2026-01-20 10:57:15,159 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,167 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_package_build _raw_params={{ hook.type }}.yml] *** 2026-01-20 10:57:15,168 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.107) 0:02:40.784 ******* 2026-01-20 10:57:15,168 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.107) 0:02:40.783 ******* 2026-01-20 10:57:15,266 p=31537 u=zuul n=ansible | skipping: [localhost] 2026-01-20 10:57:15,294 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2026-01-20 10:57:15,294 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.126) 0:02:40.911 ******* 2026-01-20 10:57:15,295 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.126) 0:02:40.910 ******* 2026-01-20 10:57:15,344 p=31537 u=zuul n=ansible | ok: [localhost] 2026-01-20 10:57:15,355 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
***
2026-01-20 10:57:15,355 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.060) 0:02:40.971 *******
2026-01-20 10:57:15,355 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.060) 0:02:40.970 *******
2026-01-20 10:57:15,451 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:15,460 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_container_build _raw_params={{ hook.type }}.yml] ***
2026-01-20 10:57:15,460 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.105) 0:02:41.077 *******
2026-01-20 10:57:15,460 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.105) 0:02:41.075 *******
2026-01-20 10:57:15,557 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:15,570 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ***
2026-01-20 10:57:15,571 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.110) 0:02:41.187 *******
2026-01-20 10:57:15,571 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.110) 0:02:41.186 *******
2026-01-20 10:57:15,614 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:15,624 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Nothing to do yet msg=No support for that step yet] ********
2026-01-20 10:57:15,624 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.053) 0:02:41.240 *******
2026-01-20 10:57:15,624 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.053) 0:02:41.239 *******
2026-01-20 10:57:15,643 p=31537 u=zuul n=ansible | ok: [localhost] =>
    msg: No support for that step yet
2026-01-20 10:57:15,650 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2026-01-20 10:57:15,650 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.026) 0:02:41.267 *******
2026-01-20 10:57:15,651 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.026) 0:02:41.265 *******
2026-01-20 10:57:15,710 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:15,718 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2026-01-20 10:57:15,718 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.067) 0:02:41.335 *******
2026-01-20 10:57:15,718 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.067) 0:02:41.333 *******
2026-01-20 10:57:15,822 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:15,835 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_container_build _raw_params={{ hook.type }}.yml] ***
2026-01-20 10:57:15,836 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.117) 0:02:41.452 *******
2026-01-20 10:57:15,836 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.117) 0:02:41.451 *******
2026-01-20 10:57:15,940 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:15,969 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2026-01-20 10:57:15,969 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.133) 0:02:41.585 *******
2026-01-20 10:57:15,969 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:15 +0000 (0:00:00.133) 0:02:41.584 *******
2026-01-20 10:57:16,020 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:16,028 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2026-01-20 10:57:16,028 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.059) 0:02:41.644 *******
2026-01-20 10:57:16,028 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.059) 0:02:41.643 *******
2026-01-20 10:57:16,123 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:16,132 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_operator_build _raw_params={{ hook.type }}.yml] ***
2026-01-20 10:57:16,133 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.104) 0:02:41.749 *******
2026-01-20 10:57:16,133 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.104) 0:02:41.748 *******
2026-01-20 10:57:16,238 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,253 p=31537 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ***
2026-01-20 10:57:16,254 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.120) 0:02:41.870 *******
2026-01-20 10:57:16,254 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.120) 0:02:41.869 *******
2026-01-20 10:57:16,346 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:16,357 p=31537 u=zuul n=ansible | TASK [operator_build : Ensure mandatory directories exist path={{ cifmw_operator_build_basedir }}/{{ item }}, state=directory, mode=0755] ***
2026-01-20 10:57:16,357 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.103) 0:02:41.974 *******
2026-01-20 10:57:16,357 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.103) 0:02:41.972 *******
2026-01-20 10:57:16,384 p=31537 u=zuul n=ansible | skipping: [localhost] => (item=artifacts)
2026-01-20 10:57:16,389 p=31537 u=zuul n=ansible | skipping: [localhost] => (item=logs)
2026-01-20 10:57:16,390 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,399 p=31537 u=zuul n=ansible | TASK [operator_build : Initialize role output cifmw_operator_build_output={{ cifmw_operator_build_output }}, cifmw_operator_build_meta_name={{ cifmw_operator_build_meta_name }}] ***
2026-01-20 10:57:16,400 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.042) 0:02:42.016 *******
2026-01-20 10:57:16,400 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.042) 0:02:42.015 *******
2026-01-20 10:57:16,474 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,481 p=31537 u=zuul n=ansible | TASK [operator_build : Populate operators list with zuul info _raw_params=zuul_info.yml] ***
2026-01-20 10:57:16,481 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.081) 0:02:42.098 *******
2026-01-20 10:57:16,481 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.081) 0:02:42.096 *******
2026-01-20 10:57:16,506 p=31537 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'main', 'change': '320', 'change_url': 'https://github.com/openstack-k8s-operators/watcher-operator/pull/320', 'commit_id': '581f46572d07c53c87a11aa044b02e73f253eea6', 'patchset': '581f46572d07c53c87a11aa044b02e73f253eea6', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/watcher-operator', 'name': 'openstack-k8s-operators/watcher-operator', 'short_name': 'watcher-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/watcher-operator'}, 'topic': None})
2026-01-20 10:57:16,508 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,517 p=31537 u=zuul n=ansible | TASK [operator_build : Merge lists of operators operators_list={{ [cifmw_operator_build_operators, zuul_info_operators | default([])] | community.general.lists_mergeby('name') }}] ***
2026-01-20 10:57:16,517 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.035) 0:02:42.133 *******
2026-01-20 10:57:16,517 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.035) 0:02:42.132 *******
2026-01-20 10:57:16,539 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,548 p=31537 u=zuul n=ansible | TASK [operator_build : Get meta_operator src dir from operators_list cifmw_operator_build_meta_src={{ (operators_list | selectattr('name', 'eq', cifmw_operator_build_meta_name) | map(attribute='src') | first ) | default(cifmw_operator_build_meta_src, true) }}] ***
2026-01-20 10:57:16,548 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.031) 0:02:42.164 *******
2026-01-20 10:57:16,548 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.031) 0:02:42.163 *******
2026-01-20 10:57:16,570 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,578 p=31537 u=zuul n=ansible | TASK [operator_build : Adds meta-operator to the list operators_list={{ [operators_list, meta_operator_info] | community.general.lists_mergeby('name') }}] ***
2026-01-20 10:57:16,578 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.030) 0:02:42.194 *******
2026-01-20 10:57:16,578 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.030) 0:02:42.193 *******
2026-01-20 10:57:16,601 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,609 p=31537 u=zuul n=ansible | TASK [operator_build : Clone operator's code when src dir is empty _raw_params=clone.yml] ***
2026-01-20 10:57:16,609 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.030) 0:02:42.225 *******
2026-01-20 10:57:16,609 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.030) 0:02:42.224 *******
2026-01-20 10:57:16,633 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,642 p=31537 u=zuul n=ansible | TASK [operator_build : Building operators _raw_params=build.yml] ***************
2026-01-20 10:57:16,642 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.032) 0:02:42.258 *******
2026-01-20 10:57:16,642 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.033) 0:02:42.257 *******
2026-01-20 10:57:16,667 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,676 p=31537 u=zuul n=ansible | TASK [operator_build : Building meta operator _raw_params=build.yml] ***********
2026-01-20 10:57:16,677 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.034) 0:02:42.293 *******
2026-01-20 10:57:16,677 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.034) 0:02:42.292 *******
2026-01-20 10:57:16,701 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,710 p=31537 u=zuul n=ansible | TASK [operator_build : Gather role output dest={{ cifmw_operator_build_basedir }}/artifacts/custom-operators.yml, content={{ cifmw_operator_build_output | to_nice_yaml }}, mode=0644] ***
2026-01-20 10:57:16,710 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.033) 0:02:42.326 *******
2026-01-20 10:57:16,710 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.033) 0:02:42.325 *******
2026-01-20 10:57:16,736 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:16,752 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2026-01-20 10:57:16,752 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.042) 0:02:42.368 *******
2026-01-20 10:57:16,752 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.042) 0:02:42.367 *******
2026-01-20 10:57:16,804 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:16,812 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2026-01-20 10:57:16,812 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.059) 0:02:42.428 *******
2026-01-20 10:57:16,812 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.059) 0:02:42.427 *******
2026-01-20 10:57:16,920 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:16,929 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_operator_build _raw_params={{ hook.type }}.yml] ***
2026-01-20 10:57:16,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.116) 0:02:42.545 *******
2026-01-20 10:57:16,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:16 +0000 (0:00:00.116) 0:02:42.544 *******
2026-01-20 10:57:17,034 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:17,053 p=31537 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2026-01-20 10:57:17,053 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.124) 0:02:42.670 *******
2026-01-20 10:57:17,053 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.124) 0:02:42.668 *******
2026-01-20 10:57:17,110 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:17,123 p=31537 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2026-01-20 10:57:17,124 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.070) 0:02:42.740 *******
2026-01-20 10:57:17,124 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.070) 0:02:42.739 *******
2026-01-20 10:57:17,228 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:17,239 p=31537 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_deploy _raw_params={{ hook.type }}.yml] ***
2026-01-20 10:57:17,239 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.115) 0:02:42.856 *******
2026-01-20 10:57:17,239 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.115) 0:02:42.854 *******
2026-01-20 10:57:17,377 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': '80 Kustomize OpenStack CR', 'type': 'playbook', 'source': 'control_plane_horizon.yml'})
2026-01-20 10:57:17,386 p=31537 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Create coo subscription', 'type': 'playbook', 'source': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/deploy_cluster_observability_operator.yaml'})
2026-01-20 10:57:17,401 p=31537 u=zuul n=ansible | TASK [run_hook : Set playbook path for 80 Kustomize OpenStack CR cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] ***
2026-01-20 10:57:17,401 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.161) 0:02:43.017 *******
2026-01-20 10:57:17,401 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.161) 0:02:43.016 *******
2026-01-20 10:57:17,447 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:17,456 p=31537 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] ***********************
2026-01-20 10:57:17,456 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.054) 0:02:43.072 *******
2026-01-20 10:57:17,456 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.054) 0:02:43.071 *******
2026-01-20 10:57:17,663 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:17,673 p=31537 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] ***
2026-01-20 10:57:17,673 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.217) 0:02:43.290 *******
2026-01-20 10:57:17,673 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.217) 0:02:43.288 *******
2026-01-20 10:57:17,696 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:17,705 p=31537 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] ***
2026-01-20 10:57:17,706 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.032) 0:02:43.322 *******
2026-01-20 10:57:17,706 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.032) 0:02:43.321 *******
2026-01-20 10:57:17,885 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:17,896 p=31537 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] ***
2026-01-20 10:57:17,896 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.190) 0:02:43.513 *******
2026-01-20 10:57:17,896 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.190) 0:02:43.511 *******
2026-01-20 10:57:17,920 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:17,929 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] ***
2026-01-20 10:57:17,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.032) 0:02:43.545 *******
2026-01-20 10:57:17,929 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:17 +0000 (0:00:00.032) 0:02:43.544 *******
2026-01-20 10:57:18,122 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:18,131 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] ***
2026-01-20 10:57:18,131 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:18 +0000 (0:00:00.201) 0:02:43.747 *******
2026-01-20 10:57:18,131 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:18 +0000 (0:00:00.201) 0:02:43.746 *******
2026-01-20 10:57:18,335 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:18,344 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook without retry - 80 Kustomize OpenStack CR] ***********
2026-01-20 10:57:18,344 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:18 +0000 (0:00:00.213) 0:02:43.961 *******
2026-01-20 10:57:18,344 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:18 +0000 (0:00:00.213) 0:02:43.959 *******
2026-01-20 10:57:18,398 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_003_run_hook_without_retry_80.log
2026-01-20 10:57:20,124 p=31537 u=zuul n=ansible | changed: [localhost]
2026-01-20 10:57:20,134 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook with retry - 80 Kustomize OpenStack CR] **************
2026-01-20 10:57:20,134 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:01.789) 0:02:45.751 *******
2026-01-20 10:57:20,134 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:01.789) 0:02:45.749 *******
2026-01-20 10:57:20,161 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:20,169 p=31537 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] ***
2026-01-20 10:57:20,169 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.035) 0:02:45.786 *******
2026-01-20 10:57:20,170 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.035) 0:02:45.784 *******
2026-01-20 10:57:20,360 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:20,368 p=31537 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] ***
2026-01-20 10:57:20,368 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.198) 0:02:45.984 *******
2026-01-20 10:57:20,368 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.198) 0:02:45.983 *******
2026-01-20 10:57:20,389 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:20,398 p=31537 u=zuul n=ansible | TASK [run_hook : Set playbook path for Create coo subscription cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] ***
2026-01-20 10:57:20,398 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.030) 0:02:46.015 *******
2026-01-20 10:57:20,398 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.030) 0:02:46.013 *******
2026-01-20 10:57:20,441 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:20,449 p=31537 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] ***********************
2026-01-20 10:57:20,450 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.051) 0:02:46.066 *******
2026-01-20 10:57:20,450 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.051) 0:02:46.065 *******
2026-01-20 10:57:20,652 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:20,660 p=31537 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] ***
2026-01-20 10:57:20,660 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.210) 0:02:46.277 *******
2026-01-20 10:57:20,660 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.210) 0:02:46.275 *******
2026-01-20 10:57:20,682 p=31537 u=zuul n=ansible | skipping: [localhost]
2026-01-20 10:57:20,691 p=31537 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] ***
2026-01-20 10:57:20,691 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.031) 0:02:46.308 *******
2026-01-20 10:57:20,691 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.031) 0:02:46.306 *******
2026-01-20 10:57:20,872 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:20,884 p=31537 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] ***
2026-01-20 10:57:20,884 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.192) 0:02:46.500 *******
2026-01-20 10:57:20,884 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.192) 0:02:46.499 *******
2026-01-20 10:57:20,910 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:20,919 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] ***
2026-01-20 10:57:20,919 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.035) 0:02:46.535 *******
2026-01-20 10:57:20,919 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:20 +0000 (0:00:00.035) 0:02:46.534 *******
2026-01-20 10:57:21,106 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:21,114 p=31537 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] ***
2026-01-20 10:57:21,114 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:21 +0000 (0:00:00.194) 0:02:46.730 *******
2026-01-20 10:57:21,114 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:21 +0000 (0:00:00.194) 0:02:46.729 *******
2026-01-20 10:57:21,312 p=31537 u=zuul n=ansible | ok: [localhost]
2026-01-20 10:57:21,323 p=31537 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Create coo subscription] *************
2026-01-20 10:57:21,323 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:21 +0000 (0:00:00.209) 0:02:46.940 *******
2026-01-20 10:57:21,323 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:21 +0000 (0:00:00.209) 0:02:46.938 *******
2026-01-20 10:57:21,382 p=31537 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_004_run_hook_without_retry_create.log
2026-01-20 10:57:22,601 p=31537 u=zuul n=ansible | fatal: [localhost]: FAILED! =>
    censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result'
    changed: true
2026-01-20 10:57:22,602 p=31537 u=zuul n=ansible | NO MORE HOSTS LEFT *************************************************************
2026-01-20 10:57:22,603 p=31537 u=zuul n=ansible | PLAY RECAP *********************************************************************
2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | localhost : ok=146 changed=44 unreachable=0 failed=1 skipped=97 rescued=0 ignored=1
2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:22 +0000 (0:00:01.280) 0:02:48.220 *******
2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | ===============================================================================
2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | run_hook : Run hook without retry - Download needed tools -------------- 36.40s
2026-01-20 10:57:22,604 p=31537 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 33.93s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | cert_manager : Wait for cert-manager pods to be ready ------------------ 23.56s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | run_hook : Run hook without retry - Fetch nodes facts and save them as parameters -- 11.19s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 9.02s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | ci_setup : Install openshift client ------------------------------------- 5.26s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | cert_manager : Install cert-manager from release manifest --------------- 3.45s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | cert_manager : Install cert-manager cmctl CLI --------------------------- 1.85s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | run_hook : Run hook without retry - 80 Kustomize OpenStack CR ----------- 1.79s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 1.77s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.40s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | run_hook : Run hook without retry - Create coo subscription ------------- 1.28s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 1.12s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.08s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Make sure git-core package is installed -------------------- 0.99s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 0.98s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 0.95s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 0.95s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Ensure directories are present ----------------------------- 0.91s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.80s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | Tuesday 20 January 2026 10:57:22 +0000 (0:00:01.281) 0:02:48.220 *******
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | ===============================================================================
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | run_hook --------------------------------------------------------------- 57.94s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | ci_setup --------------------------------------------------------------- 41.50s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | cert_manager ----------------------------------------------------------- 31.90s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | repo_setup ------------------------------------------------------------- 17.72s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_setup --------------------------------------------------------- 5.84s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | openshift_login --------------------------------------------------------- 3.89s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | install_yamls ----------------------------------------------------------- 3.21s
2026-01-20 10:57:22,605 p=31537 u=zuul n=ansible | install_ca -------------------------------------------------------------- 1.69s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | cifmw_setup ------------------------------------------------------------- 1.54s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | gather_facts ------------------------------------------------------------ 0.95s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | discover_latest_image --------------------------------------------------- 0.80s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | operator_build ---------------------------------------------------------- 0.39s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | networking_mapper ------------------------------------------------------- 0.33s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | ansible.builtin.file ---------------------------------------------------- 0.32s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | ansible.builtin.include_tasks ------------------------------------------- 0.08s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | pkg_build --------------------------------------------------------------- 0.06s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | ansible.builtin.include_vars -------------------------------------------- 0.03s
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2026-01-20 10:57:22,606 p=31537 u=zuul n=ansible | total ----------------------------------------------------------------- 168.18s

home/zuul/zuul-output/logs/ci-framework-data/artifacts/install_yamls.sh
export BMO_SETUP=False
export INSTALL_CERT_MANAGER=False
export OUT=/home/zuul/ci-framework-data/artifacts/manifests
export OUTPUT_DIR=/home/zuul/ci-framework-data/artifacts/edpm
export CHECKOUT_FROM_OPENSTACK_REF=true
export OPENSTACK_K8S_BRANCH=main
export WATCHER_REPO=/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator
export WATCHER_BRANCH=

home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/cert-manager/cert_manager_manifest.yml
# Copyright 2022 The cert-manager Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. apiVersion: v1 kind: Namespace metadata: name: cert-manager --- # Source: cert-manager/templates/crd-acme.cert-manager.io_challenges.yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: "challenges.acme.cert-manager.io" annotations: helm.sh/resource-policy: keep labels: app: "cert-manager" app.kubernetes.io/name: "cert-manager" app.kubernetes.io/instance: "cert-manager" app.kubernetes.io/component: "crds" app.kubernetes.io/version: "v1.19.2" spec: group: acme.cert-manager.io names: categories: - cert-manager - cert-manager-acme kind: Challenge listKind: ChallengeList plural: challenges singular: challenge scope: Namespaced versions: - additionalPrinterColumns: - jsonPath: .status.state name: State type: string - jsonPath: .spec.dnsName name: Domain type: string - jsonPath: .status.reason name: Reason priority: 1 type: string - description: CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. jsonPath: .metadata.creationTimestamp name: Age type: date name: v1 schema: openAPIV3Schema: description: Challenge is a type to represent a Challenge request with an ACME server properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: properties: authorizationURL: description: |- The URL to the ACME Authorization resource that this challenge is a part of. type: string dnsName: description: |- dnsName is the identifier that this challenge is for, e.g., example.com. If the requested DNSName is a 'wildcard', this field MUST be set to the non-wildcard domain, e.g., for `*.example.com`, it must be `example.com`. type: string issuerRef: description: |- References a properly configured ACME-type Issuer which should be used to create this Challenge. If the Issuer does not exist, processing will be retried. If the Issuer is not an 'ACME' Issuer, an error will be returned and the Challenge will be marked as failed. properties: group: description: |- Group of the issuer being referred to. Defaults to 'cert-manager.io'. type: string kind: description: |- Kind of the issuer being referred to. Defaults to 'Issuer'. type: string name: description: Name of the issuer being referred to. type: string required: - name type: object key: description: |- The ACME challenge key for this challenge For HTTP01 challenges, this is the value that must be responded with to complete the HTTP01 challenge in the format: `.`. For DNS01 challenges, this is the base64 encoded SHA256 sum of the `.` text that must be set as the TXT record content. type: string solver: description: |- Contains the domain solving configuration that should be used to solve this challenge resource. properties: dns01: description: |- Configures cert-manager to attempt to complete authorizations by performing the DNS01 challenge flow. properties: acmeDNS: description: |- Use the 'ACME DNS' (https://github.com/joohoi/acme-dns) API to manage DNS01 challenge records. properties: accountSecretRef: description: |- A reference to a specific 'key' within a Secret resource. 
                                In some instances, `key` is a required field.
                              properties:
                                key:
                                  description: |-
                                    The key of the entry in the Secret resource's `data` field to be used.
                                    Some instances of this field may be defaulted, in others it may be
                                    required.
                                  type: string
                                name:
                                  description: |-
                                    Name of the resource being referred to.
                                    More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                  type: string
                              required:
                                - name
                              type: object
                            host:
                              type: string
                          required:
                            - accountSecretRef
                            - host
                          type: object
                        akamai:
                          description: Use the Akamai DNS zone management API to manage DNS01 challenge records.
                          properties:
                            accessTokenSecretRef:
                              description: |-
                                A reference to a specific 'key' within a Secret resource.
                                In some instances, `key` is a required field.
                              properties:
                                key:
                                  description: |-
                                    The key of the entry in the Secret resource's `data` field to be used.
                                    Some instances of this field may be defaulted, in others it may be
                                    required.
                                  type: string
                                name:
                                  description: |-
                                    Name of the resource being referred to.
                                    More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                  type: string
                              required:
                                - name
                              type: object
                            clientSecretSecretRef:
                              description: |-
                                A reference to a specific 'key' within a Secret resource.
                                In some instances, `key` is a required field.
                              properties:
                                key:
                                  description: |-
                                    The key of the entry in the Secret resource's `data` field to be used.
                                    Some instances of this field may be defaulted, in others it may be
                                    required.
                                  type: string
                                name:
                                  description: |-
                                    Name of the resource being referred to.
                                    More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                                  type: string
                              required:
                                - name
                              type: object
                            clientTokenSecretRef:
                              description: |-
                                A reference to a specific 'key' within a Secret resource.
                                In some instances, `key` is a required field.
                              properties:
                                key:
                                  description: |-
                                    The key of the entry in the Secret resource's `data` field to be used.
                                    Some instances of this field may be defaulted, in others it may be
                                    required.
type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object serviceConsumerDomain: type: string required: - accessTokenSecretRef - clientSecretSecretRef - clientTokenSecretRef - serviceConsumerDomain type: object azureDNS: description: Use the Microsoft Azure DNS API to manage DNS01 challenge records. properties: clientID: description: |- Auth: Azure Service Principal: The ClientID of the Azure Service Principal used to authenticate with Azure DNS. If set, ClientSecret and TenantID must also be set. type: string clientSecretSecretRef: description: |- Auth: Azure Service Principal: A reference to a Secret containing the password associated with the Service Principal. If set, ClientID and TenantID must also be set. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object environment: description: name of the Azure environment (default AzurePublicCloud) enum: - AzurePublicCloud - AzureChinaCloud - AzureGermanCloud - AzureUSGovernmentCloud type: string hostedZoneName: description: name of the DNS zone that should be used type: string managedIdentity: description: |- Auth: Azure Workload Identity or Azure Managed Service Identity: Settings to enable Azure Workload Identity or Azure Managed Service Identity If set, ClientID, ClientSecret and TenantID must not be set. 
properties: clientID: description: client ID of the managed identity, cannot be used at the same time as resourceID type: string resourceID: description: |- resource ID of the managed identity, cannot be used at the same time as clientID Cannot be used for Azure Managed Service Identity type: string tenantID: description: tenant ID of the managed identity, cannot be used at the same time as resourceID type: string type: object resourceGroupName: description: resource group the DNS zone is located in type: string subscriptionID: description: ID of the Azure subscription type: string tenantID: description: |- Auth: Azure Service Principal: The TenantID of the Azure Service Principal used to authenticate with Azure DNS. If set, ClientID and ClientSecret must also be set. type: string required: - resourceGroupName - subscriptionID type: object cloudDNS: description: Use the Google Cloud DNS API to manage DNS01 challenge records. properties: hostedZoneName: description: |- HostedZoneName is an optional field that tells cert-manager in which Cloud DNS zone the challenge record has to be created. If left empty cert-manager will automatically choose a zone. type: string project: type: string serviceAccountSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - project type: object cloudflare: description: Use the Cloudflare API to manage DNS01 challenge records. properties: apiKeySecretRef: description: |- API key to use to authenticate with Cloudflare. 
Note: using an API token to authenticate is now the recommended method as it allows greater control of permissions. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object apiTokenSecretRef: description: API token used to authenticate with Cloudflare. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object email: description: Email of the account, only required when using API key based authentication. type: string type: object cnameStrategy: description: |- CNAMEStrategy configures how the DNS01 provider should handle CNAME records when found in DNS zones. enum: - None - Follow type: string digitalocean: description: Use the DigitalOcean DNS API to manage DNS01 challenge records. properties: tokenSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - tokenSecretRef type: object rfc2136: description: |- Use RFC2136 ("Dynamic Updates in the Domain Name System") (https://datatracker.ietf.org/doc/rfc2136/) to manage DNS01 challenge records. properties: nameserver: description: |- The IP address or hostname of an authoritative DNS server supporting RFC2136 in the form host:port. If the host is an IPv6 address it must be enclosed in square brackets (e.g [2001:db8::1]) ; port is optional. This field is required. type: string protocol: description: Protocol to use for dynamic DNS update queries. Valid values are (case-sensitive) ``TCP`` and ``UDP``; ``UDP`` (default). enum: - TCP - UDP type: string tsigAlgorithm: description: |- The TSIG Algorithm configured in the DNS supporting RFC2136. Used only when ``tsigSecretSecretRef`` and ``tsigKeyName`` are defined. Supported values are (case-insensitive): ``HMACMD5`` (default), ``HMACSHA1``, ``HMACSHA256`` or ``HMACSHA512``. type: string tsigKeyName: description: |- The TSIG Key name configured in the DNS. If ``tsigSecretSecretRef`` is defined, this field is required. type: string tsigSecretSecretRef: description: |- The name of the secret containing the TSIG value. If ``tsigKeyName`` is defined, this field is required. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - nameserver type: object route53: description: Use the AWS Route53 API to manage DNS01 challenge records. properties: accessKeyID: description: |- The AccessKeyID is used for authentication. 
Cannot be set when SecretAccessKeyID is set. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials type: string accessKeyIDSecretRef: description: |- The SecretAccessKey is used for authentication. If set, pull the AWS access key ID from a key within a Kubernetes Secret. Cannot be set when AccessKeyID is set. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object auth: description: Auth configures how cert-manager authenticates. properties: kubernetes: description: |- Kubernetes authenticates with Route53 using AssumeRoleWithWebIdentity by passing a bound ServiceAccount token. properties: serviceAccountRef: description: |- A reference to a service account that will be used to request a bound token (also known as "projected token"). To use this field, you must configure an RBAC rule to let cert-manager request a token. properties: audiences: description: |- TokenAudiences is an optional list of audiences to include in the token passed to AWS. The default token consisting of the issuer's namespace and name is always included. If unset the audience defaults to `sts.amazonaws.com`. items: type: string type: array x-kubernetes-list-type: atomic name: description: Name of the ServiceAccount used to request a token. 
type: string required: - name type: object required: - serviceAccountRef type: object required: - kubernetes type: object hostedZoneID: description: If set, the provider will manage only this zone in Route53 and will not do a lookup using the route53:ListHostedZonesByName api call. type: string region: description: |- Override the AWS region. Route53 is a global service and does not have regional endpoints but the region specified here (or via environment variables) is used as a hint to help compute the correct AWS credential scope and partition when it connects to Route53. See: - [Amazon Route 53 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/r53.html) - [Global services](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/global-services.html) If you omit this region field, cert-manager will use the region from AWS_REGION and AWS_DEFAULT_REGION environment variables, if they are set in the cert-manager controller Pod. The `region` field is not needed if you use [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). Instead an AWS_REGION environment variable is added to the cert-manager controller Pod by: [Amazon EKS Pod Identity Webhook](https://github.com/aws/amazon-eks-pod-identity-webhook). In this case this `region` field value is ignored. The `region` field is not needed if you use [EKS Pod Identities](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html). Instead an AWS_REGION environment variable is added to the cert-manager controller Pod by: [Amazon EKS Pod Identity Agent](https://github.com/aws/eks-pod-identity-agent), In this case this `region` field value is ignored. 
type: string role: description: |- Role is a Role ARN which the Route53 provider will assume using either the explicit credentials AccessKeyID/SecretAccessKey or the inferred credentials from environment variables, shared credentials file or AWS Instance metadata type: string secretAccessKeySecretRef: description: |- The SecretAccessKey is used for authentication. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object type: object webhook: description: |- Configure an external webhook based DNS01 challenge solver to manage DNS01 challenge records. properties: config: description: |- Additional configuration that should be passed to the webhook apiserver when challenges are processed. This can contain arbitrary JSON data. Secret values should not be specified in this stanza. If secret values are needed (e.g., credentials for a DNS service), you should use a SecretKeySelector to reference a Secret resource. For details on the schema of this field, consult the webhook provider implementation's documentation. x-kubernetes-preserve-unknown-fields: true groupName: description: |- The API group name that should be used when POSTing ChallengePayload resources to the webhook apiserver. This should be the same as the GroupName specified in the webhook provider implementation. type: string solverName: description: |- The name of the solver to use, as defined in the webhook provider implementation. 
This will typically be the name of the provider, e.g., 'cloudflare'. type: string required: - groupName - solverName type: object type: object http01: description: |- Configures cert-manager to attempt to complete authorizations by performing the HTTP01 challenge flow. It is not possible to obtain certificates for wildcard domain names (e.g., `*.example.com`) using the HTTP01 challenge mechanism. properties: gatewayHTTPRoute: description: |- The Gateway API is a sig-network community API that models service networking in Kubernetes (https://gateway-api.sigs.k8s.io/). The Gateway solver will create HTTPRoutes with the specified labels in the same namespace as the challenge. This solver is experimental, and fields / behaviour may change in the future. properties: labels: additionalProperties: type: string description: |- Custom labels that will be applied to HTTPRoutes created by cert-manager while solving HTTP-01 challenges. type: object parentRefs: description: |- When solving an HTTP-01 challenge, cert-manager creates an HTTPRoute. cert-manager needs to know which parentRefs should be used when creating the HTTPRoute. Usually, the parentRef references a Gateway. See: https://gateway-api.sigs.k8s.io/api-types/httproute/#attaching-to-gateways items: description: |- ParentReference identifies an API object (usually a Gateway) that can be considered a parent of this resource (usually a route). There are two kinds of parent resources with "Core" support: * Gateway (Gateway conformance profile) * Service (Mesh conformance profile, ClusterIP Services only) This API may be extended in the future to support additional kinds of parent resources. The API object must be valid in the cluster; the Group and Kind must be registered in the cluster for this reference to be valid. properties: group: default: gateway.networking.k8s.io description: |- Group is the group of the referent. When unspecified, "gateway.networking.k8s.io" is inferred. 
To set the core API group (such as for a "Service" kind referent), Group must be explicitly set to "" (empty string). Support: Core maxLength: 253 pattern: ^$|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ type: string kind: default: Gateway description: |- Kind is kind of the referent. There are two kinds of parent resources with "Core" support: * Gateway (Gateway conformance profile) * Service (Mesh conformance profile, ClusterIP Services only) Support for other resources is Implementation-Specific. maxLength: 63 minLength: 1 pattern: ^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$ type: string name: description: |- Name is the name of the referent. Support: Core maxLength: 253 minLength: 1 type: string namespace: description: |- Namespace is the namespace of the referent. When unspecified, this refers to the local namespace of the Route. Note that there are specific rules for ParentRefs which cross namespace boundaries. Cross-namespace references are only valid if they are explicitly allowed by something in the namespace they are referring to. For example: Gateway has the AllowedRoutes field, and ReferenceGrant provides a generic way to enable any other kind of cross-namespace reference. ParentRefs from a Route to a Service in the same namespace are "producer" routes, which apply default routing rules to inbound connections from any namespace to the Service. ParentRefs from a Route to a Service in a different namespace are "consumer" routes, and these routing rules are only applied to outbound connections originating from the same namespace as the Route, for which the intended destination of the connections are a Service targeted as a ParentRef of the Route. Support: Core maxLength: 63 minLength: 1 pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$ type: string port: description: |- Port is the network port this Route targets. It can be interpreted differently based on the type of parent resource. 
When the parent resource is a Gateway, this targets all listeners listening on the specified port that also support this kind of Route(and select this Route). It's not recommended to set `Port` unless the networking behaviors specified in a Route must apply to a specific port as opposed to a listener(s) whose port(s) may be changed. When both Port and SectionName are specified, the name and port of the selected listener must match both specified values. When the parent resource is a Service, this targets a specific port in the Service spec. When both Port (experimental) and SectionName are specified, the name and port of the selected port must match both specified values. Implementations MAY choose to support other parent resources. Implementations supporting other types of parent resources MUST clearly document how/if Port is interpreted. For the purpose of status, an attachment is considered successful as long as the parent resource accepts it partially. For example, Gateway listeners can restrict which Routes can attach to them by Route kind, namespace, or hostname. If 1 of 2 Gateway listeners accept attachment from the referencing Route, the Route MUST be considered successfully attached. If no Gateway listeners accept attachment from this Route, the Route MUST be considered detached from the Gateway. Support: Extended format: int32 maximum: 65535 minimum: 1 type: integer sectionName: description: |- SectionName is the name of a section within the target resource. In the following resources, SectionName is interpreted as the following: * Gateway: Listener name. When both Port (experimental) and SectionName are specified, the name and port of the selected listener must match both specified values. * Service: Port name. When both Port (experimental) and SectionName are specified, the name and port of the selected listener must match both specified values. Implementations MAY choose to support attaching Routes to other resources. 
If that is the case, they MUST clearly document how SectionName is interpreted. When unspecified (empty string), this will reference the entire resource. For the purpose of status, an attachment is considered successful if at least one section in the parent resource accepts it. For example, Gateway listeners can restrict which Routes can attach to them by Route kind, namespace, or hostname. If 1 of 2 Gateway listeners accept attachment from the referencing Route, the Route MUST be considered successfully attached. If no Gateway listeners accept attachment from this Route, the Route MUST be considered detached from the Gateway. Support: Core maxLength: 253 minLength: 1 pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ type: string required: - name type: object type: array x-kubernetes-list-type: atomic podTemplate: description: |- Optional pod template used to configure the ACME challenge solver pods used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the pod used to solve HTTP01 challenges. Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver pods. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver pods. type: object type: object spec: description: |- PodSpec defines overrides for the HTTP01 challenge solver pod. Check ACMEChallengeSolverHTTP01IngressPodSpec to find out currently supported fields. All other fields will be ignored. properties: affinity: description: If specified, the pod's scheduling constraints properties: nodeAffinity: description: Describes node affinity scheduling rules for the pod. 
properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. items: description: |- An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). properties: preference: description: A node selector term, associated with the corresponding weight. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic weight: description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. format: int32 type: integer required: - preference - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. properties: nodeSelectorTerms: description: Required. A list of node selector terms. The terms are ORed. items: description: |- A null or empty node selector term matches no objects. 
The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-type: atomic required: - nodeSelectorTerms type: object x-kubernetes-map-type: atomic type: object podAffinity: description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. 
type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. 
The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. 
The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. 
type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object podAntiAffinity: description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 
properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and subtracting "weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. 
null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. 
The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object type: object imagePullSecrets: description: If specified, the pod's imagePullSecrets items: description: |- LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. properties: name: default: "" description: |- Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-map-keys: - name x-kubernetes-list-type: map nodeSelector: additionalProperties: type: string description: |- NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ type: object priorityClassName: description: If specified, the pod's priorityClassName. type: string resources: description: |- If specified, the pod's resource requirements. These values override the global resource configuration flags. Note that when only specifying resource limits, ensure they are greater than or equal to the corresponding global resource requests configured via controller flags (--acme-http01-solver-resource-request-cpu, --acme-http01-solver-resource-request-memory). Kubernetes will reject pod creation if limits are lower than requests, causing challenge failures. properties: limits: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object requests: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Requests describes the minimum amount of compute resources required. 
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to the global values configured via controller flags. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object type: object securityContext: description: If specified, the pod's security context properties: fsGroup: description: |- A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer fsGroupChangePolicy: description: |- fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. type: string runAsGroup: description: |- The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer runAsNonRoot: description: |- Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. type: boolean runAsUser: description: |- The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer seLinuxOptions: description: |- The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. properties: level: description: Level is SELinux level label that applies to the container. type: string role: description: Role is a SELinux role label that applies to the container. type: string type: description: Type is a SELinux type label that applies to the container. type: string user: description: User is a SELinux user label that applies to the container. type: string type: object seccompProfile: description: |- The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. properties: localhostProfile: description: |- localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. 
Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type: string type: description: |- type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. type: string required: - type type: object supplementalGroups: description: |- A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. items: format: int64 type: integer type: array x-kubernetes-list-type: atomic sysctls: description: |- Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. items: description: Sysctl defines a kernel parameter to be set properties: name: description: Name of a property to set type: string value: description: Value of a property to set type: string required: - name - value type: object type: array x-kubernetes-list-type: atomic type: object serviceAccountName: description: If specified, the pod's service account type: string tolerations: description: If specified, the pod's tolerations. items: description: |- The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . 
properties: effect: description: |- Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. type: string key: description: |- Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. type: string operator: description: |- Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. type: string tolerationSeconds: description: |- TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. format: int64 type: integer value: description: |- Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. type: string type: object type: array x-kubernetes-list-type: atomic type: object type: object serviceType: description: |- Optional service type for Kubernetes solver service. Supported values are NodePort or ClusterIP. If unset, defaults to NodePort. type: string type: object ingress: description: |- The ingress based HTTP01 challenge solver will solve challenges by creating or modifying Ingress resources in order to route requests for '/.well-known/acme-challenge/XYZ' to 'challenge solver' pods that are provisioned by cert-manager for each Challenge to be completed. properties: class: description: |- This field configures the annotation `kubernetes.io/ingress.class` when creating Ingress resources to solve ACME challenges that use this challenge solver. 
Only one of `class`, `name` or `ingressClassName` may be specified. type: string ingressClassName: description: |- This field configures the field `ingressClassName` on the created Ingress resources used to solve ACME challenges that use this challenge solver. This is the recommended way of configuring the ingress class. Only one of `class`, `name` or `ingressClassName` may be specified. type: string ingressTemplate: description: |- Optional ingress template used to configure the ACME challenge solver ingress used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the ingress used to solve HTTP01 challenges. Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver ingress. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver ingress. type: object type: object type: object name: description: |- The name of the ingress resource that should have ACME challenge solving routes inserted into it in order to solve HTTP01 challenges. This is typically used in conjunction with ingress controllers like ingress-gce, which maintains a 1:1 mapping between external IPs and ingress resources. Only one of `class`, `name` or `ingressClassName` may be specified. type: string podTemplate: description: |- Optional pod template used to configure the ACME challenge solver pods used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the pod used to solve HTTP01 challenges. Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. 
properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver pods. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver pods. type: object type: object spec: description: |- PodSpec defines overrides for the HTTP01 challenge solver pod. Check ACMEChallengeSolverHTTP01IngressPodSpec to find out currently supported fields. All other fields will be ignored. properties: affinity: description: If specified, the pod's scheduling constraints properties: nodeAffinity: description: Describes node affinity scheduling rules for the pod. properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. items: description: |- An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). properties: preference: description: A node selector term, associated with the corresponding weight. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. 
type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic weight: description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 
format: int32 type: integer required: - preference - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. properties: nodeSelectorTerms: description: Required. A list of node selector terms. The terms are ORed. items: description: |- A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. 
items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-type: atomic required: - nodeSelectorTerms type: object x-kubernetes-map-type: atomic type: object podAffinity: description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. 
The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. 
type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 
format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. 
null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object podAntiAffinity: description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and subtracting "weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. 
properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 
type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object type: object imagePullSecrets: description: If specified, the pod's imagePullSecrets items: description: |- LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. properties: name: default: "" description: |- Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-map-keys: - name x-kubernetes-list-type: map nodeSelector: additionalProperties: type: string description: |- NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ type: object priorityClassName: description: If specified, the pod's priorityClassName. type: string resources: description: |- If specified, the pod's resource requirements. These values override the global resource configuration flags. Note that when only specifying resource limits, ensure they are greater than or equal to the corresponding global resource requests configured via controller flags (--acme-http01-solver-resource-request-cpu, --acme-http01-solver-resource-request-memory). Kubernetes will reject pod creation if limits are lower than requests, causing challenge failures. 
properties: limits: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object requests: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to the global values configured via controller flags. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object type: object securityContext: description: If specified, the pod's security context properties: fsGroup: description: |- A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer fsGroupChangePolicy: description: |- fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). 
It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. type: string runAsGroup: description: |- The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer runAsNonRoot: description: |- Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. type: boolean runAsUser: description: |- The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer seLinuxOptions: description: |- The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. properties: level: description: Level is SELinux level label that applies to the container. 
type: string role: description: Role is a SELinux role label that applies to the container. type: string type: description: Type is a SELinux type label that applies to the container. type: string user: description: User is a SELinux user label that applies to the container. type: string type: object seccompProfile: description: |- The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. properties: localhostProfile: description: |- localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type: string type: description: |- type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. type: string required: - type type: object supplementalGroups: description: |- A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. items: format: int64 type: integer type: array x-kubernetes-list-type: atomic sysctls: description: |- Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. 
Note that this field cannot be set when spec.os.name is windows. items: description: Sysctl defines a kernel parameter to be set properties: name: description: Name of a property to set type: string value: description: Value of a property to set type: string required: - name - value type: object type: array x-kubernetes-list-type: atomic type: object serviceAccountName: description: If specified, the pod's service account type: string tolerations: description: If specified, the pod's tolerations. items: description: |- The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. properties: effect: description: |- Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. type: string key: description: |- Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. type: string operator: description: |- Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. type: string tolerationSeconds: description: |- TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. format: int64 type: integer value: description: |- Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 
type: string type: object type: array x-kubernetes-list-type: atomic type: object type: object serviceType: description: |- Optional service type for Kubernetes solver service. Supported values are NodePort or ClusterIP. If unset, defaults to NodePort. type: string type: object type: object selector: description: |- Selector selects a set of DNSNames on the Certificate resource that should be solved using this challenge solver. If not specified, the solver will be treated as the 'default' solver with the lowest priority, i.e. if any other solver has a more specific match, it will be used instead. properties: dnsNames: description: |- List of DNSNames that this solver will be used to solve. If specified and a match is found, a dnsNames selector will take precedence over a dnsZones selector. If multiple solvers match with the same dnsNames value, the solver with the most matching labels in matchLabels will be selected. If neither has more matches, the solver defined earlier in the list will be selected. items: type: string type: array x-kubernetes-list-type: atomic dnsZones: description: |- List of DNSZones that this solver will be used to solve. The most specific DNS zone match specified here will take precedence over other DNS zone matches, so a solver specifying sys.example.com will be selected over one specifying example.com for the domain www.sys.example.com. If multiple solvers match with the same dnsZones value, the solver with the most matching labels in matchLabels will be selected. If neither has more matches, the solver defined earlier in the list will be selected. items: type: string type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- A label selector that is used to refine the set of certificates that this challenge solver will apply to. type: object type: object type: object token: description: |- The ACME challenge token for this challenge. This is the raw value returned from the ACME server. 
type: string type: description: |- The type of ACME challenge this resource represents. One of "HTTP-01" or "DNS-01". enum: - HTTP-01 - DNS-01 type: string url: description: |- The URL of the ACME Challenge resource for this challenge. This can be used to lookup details about the status of this challenge. type: string wildcard: description: |- wildcard will be true if this challenge is for a wildcard identifier, for example '*.example.com'. type: boolean required: - authorizationURL - dnsName - issuerRef - key - solver - token - type - url type: object status: properties: presented: description: |- presented will be set to true if the challenge values for this challenge are currently 'presented'. This *does not* imply the self check is passing. Only that the values have been 'submitted' for the appropriate challenge mechanism (i.e. the DNS01 TXT record has been presented, or the HTTP01 configuration has been configured). type: boolean processing: description: |- Used to denote whether this challenge should be processed or not. This field will only be set to true by the 'scheduling' component. It will only be set to false by the 'challenges' controller, after the challenge has reached a final state or timed out. If this field is set to false, the challenge controller will not take any more action. type: boolean reason: description: |- Contains human readable information on why the Challenge is in the current state. type: string state: description: |- Contains the current 'state' of the challenge. If not set, the state of the challenge is unknown. 
enum: - valid - ready - pending - processing - invalid - expired - errored type: string type: object required: - metadata - spec type: object served: true storage: true subresources: status: {} --- # Source: cert-manager/templates/crd-acme.cert-manager.io_orders.yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: "orders.acme.cert-manager.io" annotations: helm.sh/resource-policy: keep labels: app: "cert-manager" app.kubernetes.io/name: "cert-manager" app.kubernetes.io/instance: "cert-manager" app.kubernetes.io/component: "crds" app.kubernetes.io/version: "v1.19.2" spec: group: acme.cert-manager.io names: categories: - cert-manager - cert-manager-acme kind: Order listKind: OrderList plural: orders singular: order scope: Namespaced versions: - additionalPrinterColumns: - jsonPath: .status.state name: State type: string - jsonPath: .spec.issuerRef.name name: Issuer priority: 1 type: string - jsonPath: .status.reason name: Reason priority: 1 type: string - description: CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. jsonPath: .metadata.creationTimestamp name: Age type: date name: v1 schema: openAPIV3Schema: description: Order is a type to represent an Order with an ACME server properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: properties: commonName: description: |- CommonName is the common name as specified on the DER encoded CSR. If specified, this value must also be present in `dnsNames` or `ipAddresses`. This field must match the corresponding field on the DER encoded CSR. type: string dnsNames: description: |- DNSNames is a list of DNS names that should be included as part of the Order validation process. This field must match the corresponding field on the DER encoded CSR. items: type: string type: array x-kubernetes-list-type: atomic duration: description: |- Duration is the duration for the not after date for the requested certificate. This is set on order creation as per the ACME spec. type: string ipAddresses: description: |- IPAddresses is a list of IP addresses that should be included as part of the Order validation process. This field must match the corresponding field on the DER encoded CSR. items: type: string type: array x-kubernetes-list-type: atomic issuerRef: description: |- IssuerRef references a properly configured ACME-type Issuer which should be used to create this Order. If the Issuer does not exist, processing will be retried. If the Issuer is not an 'ACME' Issuer, an error will be returned and the Order will be marked as failed. properties: group: description: |- Group of the issuer being referred to. Defaults to 'cert-manager.io'. type: string kind: description: |- Kind of the issuer being referred to. Defaults to 'Issuer'. type: string name: description: Name of the issuer being referred to. type: string required: - name type: object profile: description: |- Profile allows requesting a certificate profile from the ACME server. Supported profiles are listed by the server's ACME directory URL. type: string request: description: |- Certificate signing request bytes in DER encoding. 
This will be used when finalizing the order. This field must be set on the order. format: byte type: string required: - issuerRef - request type: object status: properties: authorizations: description: |- Authorizations contains data returned from the ACME server on what authorizations must be completed in order to validate the DNS names specified on the Order. items: description: |- ACMEAuthorization contains data returned from the ACME server on an authorization that must be completed in order to validate a DNS name on an ACME Order resource. properties: challenges: description: |- Challenges specifies the challenge types offered by the ACME server. One of these challenge types will be selected when validating the DNS name and an appropriate Challenge resource will be created to perform the ACME challenge process. items: description: |- Challenge specifies a challenge offered by the ACME server for an Order. An appropriate Challenge resource can be created to perform the ACME challenge process. properties: token: description: |- Token is the token that must be presented for this challenge. This is used to compute the 'key' that must also be presented. type: string type: description: |- Type is the type of challenge being offered, e.g., 'http-01', 'dns-01', 'tls-sni-01', etc. This is the raw value retrieved from the ACME server. Only 'http-01' and 'dns-01' are supported by cert-manager, other values will be ignored. type: string url: description: |- URL is the URL of this challenge. It can be used to retrieve additional metadata about the Challenge from the ACME server. type: string required: - token - type - url type: object type: array x-kubernetes-list-type: atomic identifier: description: Identifier is the DNS name to be validated as part of this authorization type: string initialState: description: |- InitialState is the initial state of the ACME authorization when first fetched from the ACME server. 
If an Authorization is already 'valid', the Order controller will not create a Challenge resource for the authorization. This will occur when working with an ACME server that enables 'authz reuse' (such as Let's Encrypt's production endpoint). If not set and 'identifier' is set, the state is assumed to be pending and a Challenge will be created. enum: - valid - ready - pending - processing - invalid - expired - errored type: string url: description: URL is the URL of the Authorization that must be completed type: string wildcard: description: |- Wildcard will be true if this authorization is for a wildcard DNS name. If this is true, the identifier will be the *non-wildcard* version of the DNS name. For example, if '*.example.com' is the DNS name being validated, this field will be 'true' and the 'identifier' field will be 'example.com'. type: boolean required: - url type: object type: array x-kubernetes-list-type: atomic certificate: description: |- Certificate is a copy of the PEM encoded certificate for this Order. This field will be populated after the order has been successfully finalized with the ACME server, and the order has transitioned to the 'valid' state. format: byte type: string failureTime: description: |- FailureTime stores the time that this order failed. This is used to influence garbage collection and back-off. format: date-time type: string finalizeURL: description: |- FinalizeURL of the Order. This is used to obtain certificates for this order once it has been completed. type: string reason: description: |- Reason optionally provides more information about why the order is in the current state. type: string state: description: |- State contains the current state of this Order resource. States 'success' and 'expired' are 'final' enum: - valid - ready - pending - processing - invalid - expired - errored type: string url: description: |- URL of the Order. This will initially be empty when the resource is first created. 
The Order controller will populate this field when the Order is first processed. This field will be immutable after it is initially set. type: string type: object required: - metadata - spec type: object served: true storage: true subresources: status: {} --- # Source: cert-manager/templates/crd-cert-manager.io_certificaterequests.yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: "certificaterequests.cert-manager.io" annotations: helm.sh/resource-policy: keep labels: app: "cert-manager" app.kubernetes.io/name: "cert-manager" app.kubernetes.io/instance: "cert-manager" app.kubernetes.io/component: "crds" app.kubernetes.io/version: "v1.19.2" spec: group: cert-manager.io names: categories: - cert-manager kind: CertificateRequest listKind: CertificateRequestList plural: certificaterequests shortNames: - cr - crs singular: certificaterequest scope: Namespaced versions: - additionalPrinterColumns: - jsonPath: .status.conditions[?(@.type == "Approved")].status name: Approved type: string - jsonPath: .status.conditions[?(@.type == "Denied")].status name: Denied type: string - jsonPath: .status.conditions[?(@.type == "Ready")].status name: Ready type: string - jsonPath: .spec.issuerRef.name name: Issuer type: string - jsonPath: .spec.username name: Requester type: string - jsonPath: .status.conditions[?(@.type == "Ready")].message name: Status priority: 1 type: string - description: CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. jsonPath: .metadata.creationTimestamp name: Age type: date name: v1 schema: openAPIV3Schema: description: |- A CertificateRequest is used to request a signed certificate from one of the configured issuers. All fields within the CertificateRequest's `spec` are immutable after creation. 
A CertificateRequest will either succeed or fail, as denoted by its `Ready` status condition and its `status.failureTime` field. A CertificateRequest is a one-shot resource, meaning it represents a single point in time request for a certificate and cannot be re-used. properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: description: |- Specification of the desired state of the CertificateRequest resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status properties: duration: description: |- Requested 'duration' (i.e. lifetime) of the Certificate. Note that the issuer may choose to ignore the requested duration, just like any other requested attribute. type: string extra: additionalProperties: items: type: string type: array description: |- Extra contains extra attributes of the user that created the CertificateRequest. Populated by the cert-manager webhook on creation and immutable. type: object groups: description: |- Groups contains group membership of the user that created the CertificateRequest. Populated by the cert-manager webhook on creation and immutable. items: type: string type: array x-kubernetes-list-type: atomic isCA: description: |- Requested basic constraints isCA value. 
Note that the issuer may choose to ignore the requested isCA value, just like any other requested attribute. NOTE: If the CSR in the `Request` field has a BasicConstraints extension, it must have the same isCA value as specified here. If true, this will automatically add the `cert sign` usage to the list of requested `usages`. type: boolean issuerRef: description: |- Reference to the issuer responsible for issuing the certificate. If the issuer is namespace-scoped, it must be in the same namespace as the Certificate. If the issuer is cluster-scoped, it can be used from any namespace. The `name` field of the reference must always be specified. properties: group: description: |- Group of the issuer being referred to. Defaults to 'cert-manager.io'. type: string kind: description: |- Kind of the issuer being referred to. Defaults to 'Issuer'. type: string name: description: Name of the issuer being referred to. type: string required: - name type: object request: description: |- The PEM-encoded X.509 certificate signing request to be submitted to the issuer for signing. If the CSR has a BasicConstraints extension, its isCA attribute must match the `isCA` value of this CertificateRequest. If the CSR has a KeyUsage extension, its key usages must match the key usages in the `usages` field of this CertificateRequest. If the CSR has an ExtKeyUsage extension, its extended key usages must match the extended key usages in the `usages` field of this CertificateRequest. format: byte type: string uid: description: |- UID contains the uid of the user that created the CertificateRequest. Populated by the cert-manager webhook on creation and immutable. type: string usages: description: |- Requested key usages and extended key usages. NOTE: If the CSR in the `Request` field uses the KeyUsage or ExtKeyUsage extension, these extensions must have the same values as specified here without any additional values. If unset, defaults to `digital signature` and `key encipherment`. 
items: description: |- KeyUsage specifies valid usage contexts for keys. See: https://tools.ietf.org/html/rfc5280#section-4.2.1.3 https://tools.ietf.org/html/rfc5280#section-4.2.1.12 Valid KeyUsage values are as follows: "signing", "digital signature", "content commitment", "key encipherment", "key agreement", "data encipherment", "cert sign", "crl sign", "encipher only", "decipher only", "any", "server auth", "client auth", "code signing", "email protection", "s/mime", "ipsec end system", "ipsec tunnel", "ipsec user", "timestamping", "ocsp signing", "microsoft sgc", "netscape sgc" enum: - signing - digital signature - content commitment - key encipherment - key agreement - data encipherment - cert sign - crl sign - encipher only - decipher only - any - server auth - client auth - code signing - email protection - s/mime - ipsec end system - ipsec tunnel - ipsec user - timestamping - ocsp signing - microsoft sgc - netscape sgc type: string type: array x-kubernetes-list-type: atomic username: description: |- Username contains the name of the user that created the CertificateRequest. Populated by the cert-manager webhook on creation and immutable. type: string required: - issuerRef - request type: object status: description: |- Status of the CertificateRequest. This is set and managed automatically. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status properties: ca: description: |- The PEM encoded X.509 certificate of the signer, also known as the CA (Certificate Authority). This is set on a best-effort basis by different issuers. If not set, the CA is assumed to be unknown/not available. format: byte type: string certificate: description: |- The PEM encoded X.509 certificate resulting from the certificate signing request. If not set, the CertificateRequest has either not been completed or has failed. More information on failure can be found by checking the `conditions` field. 
format: byte type: string conditions: description: |- List of status conditions to indicate the status of a CertificateRequest. Known condition types are `Ready`, `InvalidRequest`, `Approved` and `Denied`. items: description: CertificateRequestCondition contains condition information for a CertificateRequest. properties: lastTransitionTime: description: |- LastTransitionTime is the timestamp corresponding to the last status change of this condition. format: date-time type: string message: description: |- Message is a human readable description of the details of the last transition, complementing reason. type: string reason: description: |- Reason is a brief machine readable explanation for the condition's last transition. type: string status: description: Status of the condition, one of (`True`, `False`, `Unknown`). enum: - "True" - "False" - Unknown type: string type: description: |- Type of the condition, known values are (`Ready`, `InvalidRequest`, `Approved`, `Denied`). type: string required: - status - type type: object type: array x-kubernetes-list-map-keys: - type x-kubernetes-list-type: map failureTime: description: |- FailureTime stores the time that this CertificateRequest failed. This is used to influence garbage collection and back-off. 
format: date-time type: string type: object type: object served: true storage: true subresources: status: {} --- # Source: cert-manager/templates/crd-cert-manager.io_certificates.yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: "certificates.cert-manager.io" annotations: helm.sh/resource-policy: keep labels: app: "cert-manager" app.kubernetes.io/name: "cert-manager" app.kubernetes.io/instance: "cert-manager" app.kubernetes.io/component: "crds" app.kubernetes.io/version: "v1.19.2" spec: group: cert-manager.io names: categories: - cert-manager kind: Certificate listKind: CertificateList plural: certificates shortNames: - cert - certs singular: certificate scope: Namespaced versions: - additionalPrinterColumns: - jsonPath: .status.conditions[?(@.type == "Ready")].status name: Ready type: string - jsonPath: .spec.secretName name: Secret type: string - jsonPath: .spec.issuerRef.name name: Issuer priority: 1 type: string - jsonPath: .status.conditions[?(@.type == "Ready")].message name: Status priority: 1 type: string - description: CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. jsonPath: .metadata.creationTimestamp name: Age type: date name: v1 schema: openAPIV3Schema: description: |- A Certificate resource should be created to ensure an up to date and signed X.509 certificate is stored in the Kubernetes Secret resource named in `spec.secretName`. The stored certificate will be renewed before it expires (as configured by `spec.renewBefore`). properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: description: |- Specification of the desired state of the Certificate resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status properties: additionalOutputFormats: description: |- Defines extra output formats of the private key and signed certificate chain to be written to this Certificate's target Secret. items: description: |- CertificateAdditionalOutputFormat defines an additional output format of a Certificate resource. These contain supplementary data formats of the signed certificate chain and paired private key. properties: type: description: |- Type is the name of the format type that should be written to the Certificate's target Secret. enum: - DER - CombinedPEM type: string required: - type type: object type: array x-kubernetes-list-type: atomic commonName: description: |- Requested common name X509 certificate subject attribute. More info: https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6 NOTE: TLS clients will ignore this value when any subject alternative name is set (see https://tools.ietf.org/html/rfc6125#section-6.4.4). Should have a length of 64 characters or fewer to avoid generating invalid CSRs. Cannot be set if the `literalSubject` field is set. type: string dnsNames: description: Requested DNS subject alternative names. items: type: string type: array x-kubernetes-list-type: atomic duration: description: |- Requested 'duration' (i.e. lifetime) of the Certificate. 
Note that the issuer may choose to ignore the requested duration, just like any other requested attribute. If unset, this defaults to 90 days. Minimum accepted duration is 1 hour. Value must be in units accepted by Go time.ParseDuration https://golang.org/pkg/time/#ParseDuration. type: string emailAddresses: description: Requested email subject alternative names. items: type: string type: array x-kubernetes-list-type: atomic encodeUsagesInRequest: description: |- Whether the KeyUsage and ExtKeyUsage extensions should be set in the encoded CSR. This option defaults to true, and should only be disabled if the target issuer does not support CSRs with these X509 KeyUsage/ ExtKeyUsage extensions. type: boolean ipAddresses: description: Requested IP address subject alternative names. items: type: string type: array x-kubernetes-list-type: atomic isCA: description: |- Requested basic constraints isCA value. The isCA value is used to set the `isCA` field on the created CertificateRequest resources. Note that the issuer may choose to ignore the requested isCA value, just like any other requested attribute. If true, this will automatically add the `cert sign` usage to the list of requested `usages`. type: boolean issuerRef: description: |- Reference to the issuer responsible for issuing the certificate. If the issuer is namespace-scoped, it must be in the same namespace as the Certificate. If the issuer is cluster-scoped, it can be used from any namespace. The `name` field of the reference must always be specified. properties: group: description: |- Group of the issuer being referred to. Defaults to 'cert-manager.io'. type: string kind: description: |- Kind of the issuer being referred to. Defaults to 'Issuer'. type: string name: description: Name of the issuer being referred to. type: string required: - name type: object keystores: description: Additional keystore output formats to be stored in the Certificate's Secret. 
properties: jks: description: |- JKS configures options for storing a JKS keystore in the `spec.secretName` Secret resource. properties: alias: description: |- Alias specifies the alias of the key in the keystore, required by the JKS format. If not provided, the default alias `certificate` will be used. type: string create: description: |- Create enables JKS keystore creation for the Certificate. If true, a file named `keystore.jks` will be created in the target Secret resource, encrypted using the password stored in `passwordSecretRef` or `password`. The keystore file will be updated immediately. If the issuer provided a CA certificate, a file named `truststore.jks` will also be created in the target Secret resource, encrypted using the password stored in `passwordSecretRef` containing the issuing Certificate Authority type: boolean password: description: |- Password provides a literal password used to encrypt the JKS keystore. Mutually exclusive with passwordSecretRef. One of password or passwordSecretRef must provide a password with a non-zero length. type: string passwordSecretRef: description: |- PasswordSecretRef is a reference to a non-empty key in a Secret resource containing the password used to encrypt the JKS keystore. Mutually exclusive with password. One of password or passwordSecretRef must provide a password with a non-zero length. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - create type: object pkcs12: description: |- PKCS12 configures options for storing a PKCS12 keystore in the `spec.secretName` Secret resource. 
properties: create: description: |- Create enables PKCS12 keystore creation for the Certificate. If true, a file named `keystore.p12` will be created in the target Secret resource, encrypted using the password stored in `passwordSecretRef` or in `password`. The keystore file will be updated immediately. If the issuer provided a CA certificate, a file named `truststore.p12` will also be created in the target Secret resource, encrypted using the password stored in `passwordSecretRef` containing the issuing Certificate Authority type: boolean password: description: |- Password provides a literal password used to encrypt the PKCS#12 keystore. Mutually exclusive with passwordSecretRef. One of password or passwordSecretRef must provide a password with a non-zero length. type: string passwordSecretRef: description: |- PasswordSecretRef is a reference to a non-empty key in a Secret resource containing the password used to encrypt the PKCS#12 keystore. Mutually exclusive with password. One of password or passwordSecretRef must provide a password with a non-zero length. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object profile: description: |- Profile specifies the key and certificate encryption algorithms and the HMAC algorithm used to create the PKCS12 keystore. Default value is `LegacyRC2` for backward compatibility. If provided, allowed values are: `LegacyRC2`: Deprecated. Not supported by default in OpenSSL 3 or Java 20. `LegacyDES`: Less secure algorithm. Use this option for maximal compatibility. `Modern2023`: Secure algorithm. Use this option in case you have to always use secure algorithms (e.g., because of company policy). 
Please note that the security of the algorithm is not that important in reality, because the unencrypted certificate and private key are also stored in the Secret. enum: - LegacyRC2 - LegacyDES - Modern2023 type: string required: - create type: object type: object literalSubject: description: |- Requested X.509 certificate subject, represented using the LDAP "String Representation of a Distinguished Name" [1]. Important: the LDAP string format also specifies the order of the attributes in the subject, this is important when issuing certs for LDAP authentication. Example: `CN=foo,DC=corp,DC=example,DC=com` More info [1]: https://datatracker.ietf.org/doc/html/rfc4514 More info: https://github.com/cert-manager/cert-manager/issues/3203 More info: https://github.com/cert-manager/cert-manager/issues/4424 Cannot be set if the `subject` or `commonName` field is set. type: string nameConstraints: description: |- x.509 certificate NameConstraint extension which MUST NOT be used in a non-CA certificate. More Info: https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.10 This is an Alpha Feature and is only enabled with the `--feature-gates=NameConstraints=true` option set on both the controller and webhook components. properties: critical: description: if true then the name constraints are marked critical. type: boolean excluded: description: |- Excluded contains the constraints which must be disallowed. Any name matching a restriction in the excluded field is invalid regardless of information appearing in the permitted properties: dnsDomains: description: DNSDomains is a list of DNS domains that are permitted or excluded. items: type: string type: array x-kubernetes-list-type: atomic emailAddresses: description: EmailAddresses is a list of Email Addresses that are permitted or excluded. items: type: string type: array x-kubernetes-list-type: atomic ipRanges: description: |- IPRanges is a list of IP Ranges that are permitted or excluded. 
This should be a valid CIDR notation. items: type: string type: array x-kubernetes-list-type: atomic uriDomains: description: URIDomains is a list of URI domains that are permitted or excluded. items: type: string type: array x-kubernetes-list-type: atomic type: object permitted: description: Permitted contains the constraints in which the names must be located. properties: dnsDomains: description: DNSDomains is a list of DNS domains that are permitted or excluded. items: type: string type: array x-kubernetes-list-type: atomic emailAddresses: description: EmailAddresses is a list of Email Addresses that are permitted or excluded. items: type: string type: array x-kubernetes-list-type: atomic ipRanges: description: |- IPRanges is a list of IP Ranges that are permitted or excluded. This should be a valid CIDR notation. items: type: string type: array x-kubernetes-list-type: atomic uriDomains: description: URIDomains is a list of URI domains that are permitted or excluded. items: type: string type: array x-kubernetes-list-type: atomic type: object type: object otherNames: description: |- `otherNames` is an escape hatch for SAN that allows any type. We currently restrict the support to string like otherNames, cf RFC 5280 p 37 Any UTF8 String valued otherName can be passed with by setting the keys oid: x.x.x.x and UTF8Value: somevalue for `otherName`. Most commonly this would be UPN set with oid: 1.3.6.1.4.1.311.20.2.3 You should ensure that any OID passed is valid for the UTF8String type as we do not explicitly validate this. items: properties: oid: description: |- OID is the object identifier for the otherName SAN. The object identifier must be expressed as a dotted string, for example, "1.2.840.113556.1.4.221". type: string utf8Value: description: |- utf8Value is the string value of the otherName SAN. The utf8Value accepts any valid UTF8 string to set as value for the otherName SAN. 
type: string type: object type: array x-kubernetes-list-type: atomic privateKey: description: |- Private key options. These include the key algorithm and size, the used encoding and the rotation policy. properties: algorithm: description: |- Algorithm is the private key algorithm of the corresponding private key for this certificate. If provided, allowed values are either `RSA`, `ECDSA` or `Ed25519`. If `algorithm` is specified and `size` is not provided, key size of 2048 will be used for `RSA` key algorithm and key size of 256 will be used for `ECDSA` key algorithm. key size is ignored when using the `Ed25519` key algorithm. enum: - RSA - ECDSA - Ed25519 type: string encoding: description: |- The private key cryptography standards (PKCS) encoding for this certificate's private key to be encoded in. If provided, allowed values are `PKCS1` and `PKCS8` standing for PKCS#1 and PKCS#8, respectively. Defaults to `PKCS1` if not specified. enum: - PKCS1 - PKCS8 type: string rotationPolicy: description: |- RotationPolicy controls how private keys should be regenerated when a re-issuance is being processed. If set to `Never`, a private key will only be generated if one does not already exist in the target `spec.secretName`. If one does exist but it does not have the correct algorithm or size, a warning will be raised to await user intervention. If set to `Always`, a private key matching the specified requirements will be generated whenever a re-issuance occurs. Default is `Always`. The default was changed from `Never` to `Always` in cert-manager >=v1.18.0. The new default can be disabled by setting the `--feature-gates=DefaultPrivateKeyRotationPolicyAlways=false` option on the controller component. enum: - Never - Always type: string size: description: |- Size is the key bit size of the corresponding private key for this certificate. If `algorithm` is set to `RSA`, valid values are `2048`, `4096` or `8192`, and will default to `2048` if not specified. 
If `algorithm` is set to `ECDSA`, valid values are `256`, `384` or `521`, and will default to `256` if not specified. If `algorithm` is set to `Ed25519`, Size is ignored. No other values are allowed. type: integer type: object renewBefore: description: |- How long before the currently issued certificate's expiry cert-manager should renew the certificate. For example, if a certificate is valid for 60 minutes, and `renewBefore=10m`, cert-manager will begin to attempt to renew the certificate 50 minutes after it was issued (i.e. when there are 10 minutes remaining until the certificate is no longer valid). NOTE: The actual lifetime of the issued certificate is used to determine the renewal time. If an issuer returns a certificate with a different lifetime than the one requested, cert-manager will use the lifetime of the issued certificate. If unset, this defaults to 1/3 of the issued certificate's lifetime. Minimum accepted value is 5 minutes. Value must be in units accepted by Go time.ParseDuration https://golang.org/pkg/time/#ParseDuration. Cannot be set if the `renewBeforePercentage` field is set. type: string renewBeforePercentage: description: |- `renewBeforePercentage` is like `renewBefore`, except it is a relative percentage rather than an absolute duration. For example, if a certificate is valid for 60 minutes, and `renewBeforePercentage=25`, cert-manager will begin to attempt to renew the certificate 45 minutes after it was issued (i.e. when there are 15 minutes (25%) remaining until the certificate is no longer valid). NOTE: The actual lifetime of the issued certificate is used to determine the renewal time. If an issuer returns a certificate with a different lifetime than the one requested, cert-manager will use the lifetime of the issued certificate. Value must be an integer in the range (0,100). The minimum effective `renewBefore` derived from the `renewBeforePercentage` and `duration` fields is 5 minutes. Cannot be set if the `renewBefore` field is set. 
format: int32 type: integer revisionHistoryLimit: description: |- The maximum number of CertificateRequest revisions that are maintained in the Certificate's history. Each revision represents a single `CertificateRequest` created by this Certificate, either when it was created, renewed, or Spec was changed. Revisions will be removed by oldest first if the number of revisions exceeds this number. If set, revisionHistoryLimit must be a value of `1` or greater. Default value is `1`. format: int32 type: integer secretName: description: |- Name of the Secret resource that will be automatically created and managed by this Certificate resource. It will be populated with a private key and certificate, signed by the denoted issuer. The Secret resource lives in the same namespace as the Certificate resource. type: string secretTemplate: description: |- Defines annotations and labels to be copied to the Certificate's Secret. Labels and annotations on the Secret will be changed as they appear on the SecretTemplate when added or removed. SecretTemplate annotations are added in conjunction with, and cannot overwrite, the base set of annotations cert-manager sets on the Certificate's Secret. properties: annotations: additionalProperties: type: string description: Annotations is a key value map to be copied to the target Kubernetes Secret. type: object labels: additionalProperties: type: string description: Labels is a key value map to be copied to the target Kubernetes Secret. type: object type: object signatureAlgorithm: description: |- Signature algorithm to use. Allowed values for RSA keys: SHA256WithRSA, SHA384WithRSA, SHA512WithRSA. Allowed values for ECDSA keys: ECDSAWithSHA256, ECDSAWithSHA384, ECDSAWithSHA512. Allowed values for Ed25519 keys: PureEd25519. enum: - SHA256WithRSA - SHA384WithRSA - SHA512WithRSA - ECDSAWithSHA256 - ECDSAWithSHA384 - ECDSAWithSHA512 - PureEd25519 type: string subject: description: |- Requested set of X509 certificate subject attributes. 
More info: https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6 The common name attribute is specified separately in the `commonName` field. Cannot be set if the `literalSubject` field is set. properties: countries: description: Countries to be used on the Certificate. items: type: string type: array x-kubernetes-list-type: atomic localities: description: Cities to be used on the Certificate. items: type: string type: array x-kubernetes-list-type: atomic organizationalUnits: description: Organizational Units to be used on the Certificate. items: type: string type: array x-kubernetes-list-type: atomic organizations: description: Organizations to be used on the Certificate. items: type: string type: array x-kubernetes-list-type: atomic postalCodes: description: Postal codes to be used on the Certificate. items: type: string type: array x-kubernetes-list-type: atomic provinces: description: State/Provinces to be used on the Certificate. items: type: string type: array x-kubernetes-list-type: atomic serialNumber: description: Serial number to be used on the Certificate. type: string streetAddresses: description: Street addresses to be used on the Certificate. items: type: string type: array x-kubernetes-list-type: atomic type: object uris: description: Requested URI subject alternative names. items: type: string type: array x-kubernetes-list-type: atomic usages: description: |- Requested key usages and extended key usages. These usages are used to set the `usages` field on the created CertificateRequest resources. If `encodeUsagesInRequest` is unset or set to `true`, the usages will additionally be encoded in the `request` field which contains the CSR blob. If unset, defaults to `digital signature` and `key encipherment`. items: description: |- KeyUsage specifies valid usage contexts for keys. 
See: https://tools.ietf.org/html/rfc5280#section-4.2.1.3 https://tools.ietf.org/html/rfc5280#section-4.2.1.12 Valid KeyUsage values are as follows: "signing", "digital signature", "content commitment", "key encipherment", "key agreement", "data encipherment", "cert sign", "crl sign", "encipher only", "decipher only", "any", "server auth", "client auth", "code signing", "email protection", "s/mime", "ipsec end system", "ipsec tunnel", "ipsec user", "timestamping", "ocsp signing", "microsoft sgc", "netscape sgc" enum: - signing - digital signature - content commitment - key encipherment - key agreement - data encipherment - cert sign - crl sign - encipher only - decipher only - any - server auth - client auth - code signing - email protection - s/mime - ipsec end system - ipsec tunnel - ipsec user - timestamping - ocsp signing - microsoft sgc - netscape sgc type: string type: array x-kubernetes-list-type: atomic required: - issuerRef - secretName type: object status: description: |- Status of the Certificate. This is set and managed automatically. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status properties: conditions: description: |- List of status conditions to indicate the status of certificates. Known condition types are `Ready` and `Issuing`. items: description: CertificateCondition contains condition information for a Certificate. properties: lastTransitionTime: description: |- LastTransitionTime is the timestamp corresponding to the last status change of this condition. format: date-time type: string message: description: |- Message is a human readable description of the details of the last transition, complementing reason. type: string observedGeneration: description: |- If set, this represents the .metadata.generation that the condition was set based upon. 
For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Certificate. format: int64 type: integer reason: description: |- Reason is a brief machine readable explanation for the condition's last transition. type: string status: description: Status of the condition, one of (`True`, `False`, `Unknown`). enum: - "True" - "False" - Unknown type: string type: description: Type of the condition, known values are (`Ready`, `Issuing`). type: string required: - status - type type: object type: array x-kubernetes-list-map-keys: - type x-kubernetes-list-type: map failedIssuanceAttempts: description: |- The number of continuous failed issuance attempts up till now. This field gets removed (if set) on a successful issuance and gets set to 1 if unset and an issuance has failed. If an issuance has failed, the delay till the next issuance will be calculated using formula time.Hour * 2 ^ (failedIssuanceAttempts - 1). type: integer lastFailureTime: description: |- LastFailureTime is set only if the latest issuance for this Certificate failed and contains the time of the failure. If an issuance has failed, the delay till the next issuance will be calculated using formula time.Hour * 2 ^ (failedIssuanceAttempts - 1). If the latest issuance has succeeded this field will be unset. format: date-time type: string nextPrivateKeySecretName: description: |- The name of the Secret resource containing the private key to be used for the next certificate iteration. The keymanager controller will automatically set this field if the `Issuing` condition is set to `True`. It will automatically unset this field when the Issuing condition is not set or False. type: string notAfter: description: |- The expiration time of the certificate stored in the secret named by this resource in `spec.secretName`. 
format: date-time type: string notBefore: description: |- The time after which the certificate stored in the secret named by this resource in `spec.secretName` is valid. format: date-time type: string renewalTime: description: |- RenewalTime is the time at which the certificate will be next renewed. If not set, no upcoming renewal is scheduled. format: date-time type: string revision: description: |- The current 'revision' of the certificate as issued. When a CertificateRequest resource is created, it will have the `cert-manager.io/certificate-revision` set to one greater than the current value of this field. Upon issuance, this field will be set to the value of the annotation on the CertificateRequest resource used to issue the certificate. Persisting the value on the CertificateRequest resource allows the certificates controller to know whether a request is part of an old issuance or if it is part of the ongoing revision's issuance by checking if the revision value in the annotation is greater than this field. 
type: integer type: object type: object served: true storage: true subresources: status: {} --- # Source: cert-manager/templates/crd-cert-manager.io_clusterissuers.yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: "clusterissuers.cert-manager.io" annotations: helm.sh/resource-policy: keep labels: app: "cert-manager" app.kubernetes.io/name: "cert-manager" app.kubernetes.io/instance: "cert-manager" app.kubernetes.io/component: "crds" app.kubernetes.io/version: "v1.19.2" spec: group: cert-manager.io names: categories: - cert-manager kind: ClusterIssuer listKind: ClusterIssuerList plural: clusterissuers shortNames: - ciss singular: clusterissuer scope: Cluster versions: - additionalPrinterColumns: - jsonPath: .status.conditions[?(@.type == "Ready")].status name: Ready type: string - jsonPath: .status.conditions[?(@.type == "Ready")].message name: Status priority: 1 type: string - description: CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. jsonPath: .metadata.creationTimestamp name: Age type: date name: v1 schema: openAPIV3Schema: description: |- A ClusterIssuer represents a certificate issuing authority which can be referenced as part of `issuerRef` fields. It is similar to an Issuer, however it is cluster-scoped and therefore can be referenced by resources that exist in *any* namespace, not just the same namespace as the referent. properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: description: Desired state of the ClusterIssuer resource. properties: acme: description: |- ACME configures this issuer to communicate with a RFC8555 (ACME) server to obtain signed x509 certificates. properties: caBundle: description: |- Base64-encoded bundle of PEM CAs which can be used to validate the certificate chain presented by the ACME server. Mutually exclusive with SkipTLSVerify; prefer using CABundle to prevent various kinds of security vulnerabilities. If CABundle and SkipTLSVerify are unset, the system certificate bundle inside the container is used to validate the TLS connection. format: byte type: string disableAccountKeyGeneration: description: |- Enables or disables generating a new ACME account key. If true, the Issuer resource will *not* request a new account but will expect the account key to be supplied via an existing secret. If false, the cert-manager system will generate a new ACME account key for the Issuer. Defaults to false. type: boolean email: description: |- Email is the email address to be associated with the ACME account. This field is optional, but it is strongly recommended to be set. It will be used to contact you in case of issues with your account or certificates, including expiry notification emails. This field may be updated after the account is initially registered. type: string enableDurationFeature: description: |- Enables requesting a Not After date on certificates that matches the duration of the certificate. 
This is not supported by all ACME servers like Let's Encrypt. If set to true when the ACME server does not support it, it will create an error on the Order. Defaults to false. type: boolean externalAccountBinding: description: |- ExternalAccountBinding is a reference to a CA external account of the ACME server. If set, upon registration cert-manager will attempt to associate the given external account credentials with the registered ACME account. properties: keyAlgorithm: description: |- Deprecated: keyAlgorithm field exists for historical compatibility reasons and should not be used. The algorithm is now hardcoded to HS256 in golang/x/crypto/acme. enum: - HS256 - HS384 - HS512 type: string keyID: description: keyID is the ID of the CA key that the External Account is bound to. type: string keySecretRef: description: |- keySecretRef is a Secret Key Selector referencing a data item in a Kubernetes Secret which holds the symmetric MAC key of the External Account Binding. The `key` is the index string that is paired with the key data in the Secret and should not be confused with the key data itself, or indeed with the External Account Binding keyID above. The secret key stored in the Secret **must** be un-padded, base64 URL encoded data. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - keyID - keySecretRef type: object preferredChain: description: |- PreferredChain is the chain to use if the ACME server outputs multiple. PreferredChain is no guarantee that this one gets delivered by the ACME endpoint. 
For example, for Let's Encrypt's DST cross-sign you would use: "DST Root CA X3" or "ISRG Root X1" for the newer Let's Encrypt root CA. This value picks the first certificate bundle in the combined set of ACME default and alternative chains that has a root-most certificate with this value as its issuer's commonname. maxLength: 64 type: string privateKeySecretRef: description: |- PrivateKey is the name of a Kubernetes Secret resource that will be used to store the automatically generated ACME account private key. Optionally, a `key` may be specified to select a specific entry within the named Secret resource. If `key` is not specified, a default of `tls.key` will be used. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object profile: description: |- Profile allows requesting a certificate profile from the ACME server. Supported profiles are listed by the server's ACME directory URL. type: string server: description: |- Server is the URL used to access the ACME server's 'directory' endpoint. For example, for Let's Encrypt's staging endpoint, you would use: "https://acme-staging-v02.api.letsencrypt.org/directory". Only ACME v2 endpoints (i.e. RFC 8555) are supported. type: string skipTLSVerify: description: |- INSECURE: Enables or disables validation of the ACME server TLS certificate. If true, requests to the ACME server will not have the TLS certificate chain validated. Mutually exclusive with CABundle; prefer using CABundle to prevent various kinds of security vulnerabilities. Only enable this option in development environments. 
If CABundle and SkipTLSVerify are unset, the system certificate bundle inside the container is used to validate the TLS connection. Defaults to false. type: boolean solvers: description: |- Solvers is a list of challenge solvers that will be used to solve ACME challenges for the matching domains. Solver configurations must be provided in order to obtain certificates from an ACME server. For more information, see: https://cert-manager.io/docs/configuration/acme/ items: description: |- An ACMEChallengeSolver describes how to solve ACME challenges for the issuer it is part of. A selector may be provided to use different solving strategies for different DNS names. Only one of HTTP01 or DNS01 must be provided. properties: dns01: description: |- Configures cert-manager to attempt to complete authorizations by performing the DNS01 challenge flow. properties: acmeDNS: description: |- Use the 'ACME DNS' (https://github.com/joohoi/acme-dns) API to manage DNS01 challenge records. properties: accountSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object host: type: string required: - accountSecretRef - host type: object akamai: description: Use the Akamai DNS zone management API to manage DNS01 challenge records. properties: accessTokenSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. 
Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object clientSecretSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object clientTokenSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object serviceConsumerDomain: type: string required: - accessTokenSecretRef - clientSecretSecretRef - clientTokenSecretRef - serviceConsumerDomain type: object azureDNS: description: Use the Microsoft Azure DNS API to manage DNS01 challenge records. properties: clientID: description: |- Auth: Azure Service Principal: The ClientID of the Azure Service Principal used to authenticate with Azure DNS. If set, ClientSecret and TenantID must also be set. type: string clientSecretSecretRef: description: |- Auth: Azure Service Principal: A reference to a Secret containing the password associated with the Service Principal. 
If set, ClientID and TenantID must also be set. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object environment: description: name of the Azure environment (default AzurePublicCloud) enum: - AzurePublicCloud - AzureChinaCloud - AzureGermanCloud - AzureUSGovernmentCloud type: string hostedZoneName: description: name of the DNS zone that should be used type: string managedIdentity: description: |- Auth: Azure Workload Identity or Azure Managed Service Identity: Settings to enable Azure Workload Identity or Azure Managed Service Identity If set, ClientID, ClientSecret and TenantID must not be set. properties: clientID: description: client ID of the managed identity, cannot be used at the same time as resourceID type: string resourceID: description: |- resource ID of the managed identity, cannot be used at the same time as clientID Cannot be used for Azure Managed Service Identity type: string tenantID: description: tenant ID of the managed identity, cannot be used at the same time as resourceID type: string type: object resourceGroupName: description: resource group the DNS zone is located in type: string subscriptionID: description: ID of the Azure subscription type: string tenantID: description: |- Auth: Azure Service Principal: The TenantID of the Azure Service Principal used to authenticate with Azure DNS. If set, ClientID and ClientSecret must also be set. type: string required: - resourceGroupName - subscriptionID type: object cloudDNS: description: Use the Google Cloud DNS API to manage DNS01 challenge records. 
properties: hostedZoneName: description: |- HostedZoneName is an optional field that tells cert-manager in which Cloud DNS zone the challenge record has to be created. If left empty cert-manager will automatically choose a zone. type: string project: type: string serviceAccountSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - project type: object cloudflare: description: Use the Cloudflare API to manage DNS01 challenge records. properties: apiKeySecretRef: description: |- API key to use to authenticate with Cloudflare. Note: using an API token to authenticate is now the recommended method as it allows greater control of permissions. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object apiTokenSecretRef: description: API token used to authenticate with Cloudflare. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object email: description: Email of the account, only required when using API key based authentication. type: string type: object cnameStrategy: description: |- CNAMEStrategy configures how the DNS01 provider should handle CNAME records when found in DNS zones. enum: - None - Follow type: string digitalocean: description: Use the DigitalOcean DNS API to manage DNS01 challenge records. properties: tokenSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - tokenSecretRef type: object rfc2136: description: |- Use RFC2136 ("Dynamic Updates in the Domain Name System") (https://datatracker.ietf.org/doc/rfc2136/) to manage DNS01 challenge records. properties: nameserver: description: |- The IP address or hostname of an authoritative DNS server supporting RFC2136 in the form host:port. If the host is an IPv6 address it must be enclosed in square brackets (e.g [2001:db8::1]) ; port is optional. This field is required. type: string protocol: description: Protocol to use for dynamic DNS update queries. Valid values are (case-sensitive) ``TCP`` and ``UDP``; ``UDP`` (default). enum: - TCP - UDP type: string tsigAlgorithm: description: |- The TSIG Algorithm configured in the DNS supporting RFC2136. Used only when ``tsigSecretSecretRef`` and ``tsigKeyName`` are defined. Supported values are (case-insensitive): ``HMACMD5`` (default), ``HMACSHA1``, ``HMACSHA256`` or ``HMACSHA512``. 
type: string tsigKeyName: description: |- The TSIG Key name configured in the DNS. If ``tsigSecretSecretRef`` is defined, this field is required. type: string tsigSecretSecretRef: description: |- The name of the secret containing the TSIG value. If ``tsigKeyName`` is defined, this field is required. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - nameserver type: object route53: description: Use the AWS Route53 API to manage DNS01 challenge records. properties: accessKeyID: description: |- The AccessKeyID is used for authentication. Cannot be set when SecretAccessKeyID is set. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials type: string accessKeyIDSecretRef: description: |- The SecretAccessKey is used for authentication. If set, pull the AWS access key ID from a key within a Kubernetes Secret. Cannot be set when AccessKeyID is set. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object auth: description: Auth configures how cert-manager authenticates. properties: kubernetes: description: |- Kubernetes authenticates with Route53 using AssumeRoleWithWebIdentity by passing a bound ServiceAccount token. properties: serviceAccountRef: description: |- A reference to a service account that will be used to request a bound token (also known as "projected token"). To use this field, you must configure an RBAC rule to let cert-manager request a token. properties: audiences: description: |- TokenAudiences is an optional list of audiences to include in the token passed to AWS. The default token consisting of the issuer's namespace and name is always included. If unset the audience defaults to `sts.amazonaws.com`. items: type: string type: array x-kubernetes-list-type: atomic name: description: Name of the ServiceAccount used to request a token. type: string required: - name type: object required: - serviceAccountRef type: object required: - kubernetes type: object hostedZoneID: description: If set, the provider will manage only this zone in Route53 and will not do a lookup using the route53:ListHostedZonesByName api call. type: string region: description: |- Override the AWS region. Route53 is a global service and does not have regional endpoints but the region specified here (or via environment variables) is used as a hint to help compute the correct AWS credential scope and partition when it connects to Route53. See: - [Amazon Route 53 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/r53.html) - [Global services](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/global-services.html) If you omit this region field, cert-manager will use the region from AWS_REGION and AWS_DEFAULT_REGION environment variables, if they are set in the cert-manager controller Pod. 
The `region` field is not needed if you use [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). Instead an AWS_REGION environment variable is added to the cert-manager controller Pod by: [Amazon EKS Pod Identity Webhook](https://github.com/aws/amazon-eks-pod-identity-webhook). In this case this `region` field value is ignored. The `region` field is not needed if you use [EKS Pod Identities](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html). Instead an AWS_REGION environment variable is added to the cert-manager controller Pod by: [Amazon EKS Pod Identity Agent](https://github.com/aws/eks-pod-identity-agent), In this case this `region` field value is ignored. type: string role: description: |- Role is a Role ARN which the Route53 provider will assume using either the explicit credentials AccessKeyID/SecretAccessKey or the inferred credentials from environment variables, shared credentials file or AWS Instance metadata type: string secretAccessKeySecretRef: description: |- The SecretAccessKey is used for authentication. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object type: object webhook: description: |- Configure an external webhook based DNS01 challenge solver to manage DNS01 challenge records. 
properties: config: description: |- Additional configuration that should be passed to the webhook apiserver when challenges are processed. This can contain arbitrary JSON data. Secret values should not be specified in this stanza. If secret values are needed (e.g., credentials for a DNS service), you should use a SecretKeySelector to reference a Secret resource. For details on the schema of this field, consult the webhook provider implementation's documentation. x-kubernetes-preserve-unknown-fields: true groupName: description: |- The API group name that should be used when POSTing ChallengePayload resources to the webhook apiserver. This should be the same as the GroupName specified in the webhook provider implementation. type: string solverName: description: |- The name of the solver to use, as defined in the webhook provider implementation. This will typically be the name of the provider, e.g., 'cloudflare'. type: string required: - groupName - solverName type: object type: object http01: description: |- Configures cert-manager to attempt to complete authorizations by performing the HTTP01 challenge flow. It is not possible to obtain certificates for wildcard domain names (e.g., `*.example.com`) using the HTTP01 challenge mechanism. properties: gatewayHTTPRoute: description: |- The Gateway API is a sig-network community API that models service networking in Kubernetes (https://gateway-api.sigs.k8s.io/). The Gateway solver will create HTTPRoutes with the specified labels in the same namespace as the challenge. This solver is experimental, and fields / behaviour may change in the future. properties: labels: additionalProperties: type: string description: |- Custom labels that will be applied to HTTPRoutes created by cert-manager while solving HTTP-01 challenges. type: object parentRefs: description: |- When solving an HTTP-01 challenge, cert-manager creates an HTTPRoute. cert-manager needs to know which parentRefs should be used when creating the HTTPRoute. 
Usually, the parentRef references a Gateway. See: https://gateway-api.sigs.k8s.io/api-types/httproute/#attaching-to-gateways items: description: |- ParentReference identifies an API object (usually a Gateway) that can be considered a parent of this resource (usually a route). There are two kinds of parent resources with "Core" support: * Gateway (Gateway conformance profile) * Service (Mesh conformance profile, ClusterIP Services only) This API may be extended in the future to support additional kinds of parent resources. The API object must be valid in the cluster; the Group and Kind must be registered in the cluster for this reference to be valid. properties: group: default: gateway.networking.k8s.io description: |- Group is the group of the referent. When unspecified, "gateway.networking.k8s.io" is inferred. To set the core API group (such as for a "Service" kind referent), Group must be explicitly set to "" (empty string). Support: Core maxLength: 253 pattern: ^$|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ type: string kind: default: Gateway description: |- Kind is kind of the referent. There are two kinds of parent resources with "Core" support: * Gateway (Gateway conformance profile) * Service (Mesh conformance profile, ClusterIP Services only) Support for other resources is Implementation-Specific. maxLength: 63 minLength: 1 pattern: ^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$ type: string name: description: |- Name is the name of the referent. Support: Core maxLength: 253 minLength: 1 type: string namespace: description: |- Namespace is the namespace of the referent. When unspecified, this refers to the local namespace of the Route. Note that there are specific rules for ParentRefs which cross namespace boundaries. Cross-namespace references are only valid if they are explicitly allowed by something in the namespace they are referring to. 
For example: Gateway has the AllowedRoutes field, and ReferenceGrant provides a generic way to enable any other kind of cross-namespace reference. ParentRefs from a Route to a Service in the same namespace are "producer" routes, which apply default routing rules to inbound connections from any namespace to the Service. ParentRefs from a Route to a Service in a different namespace are "consumer" routes, and these routing rules are only applied to outbound connections originating from the same namespace as the Route, for which the intended destination of the connections are a Service targeted as a ParentRef of the Route. Support: Core maxLength: 63 minLength: 1 pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$ type: string port: description: |- Port is the network port this Route targets. It can be interpreted differently based on the type of parent resource. When the parent resource is a Gateway, this targets all listeners listening on the specified port that also support this kind of Route(and select this Route). It's not recommended to set `Port` unless the networking behaviors specified in a Route must apply to a specific port as opposed to a listener(s) whose port(s) may be changed. When both Port and SectionName are specified, the name and port of the selected listener must match both specified values. When the parent resource is a Service, this targets a specific port in the Service spec. When both Port (experimental) and SectionName are specified, the name and port of the selected port must match both specified values. Implementations MAY choose to support other parent resources. Implementations supporting other types of parent resources MUST clearly document how/if Port is interpreted. For the purpose of status, an attachment is considered successful as long as the parent resource accepts it partially. For example, Gateway listeners can restrict which Routes can attach to them by Route kind, namespace, or hostname. 
If 1 of 2 Gateway listeners accept attachment from the referencing Route, the Route MUST be considered successfully attached. If no Gateway listeners accept attachment from this Route, the Route MUST be considered detached from the Gateway. Support: Extended format: int32 maximum: 65535 minimum: 1 type: integer sectionName: description: |- SectionName is the name of a section within the target resource. In the following resources, SectionName is interpreted as the following: * Gateway: Listener name. When both Port (experimental) and SectionName are specified, the name and port of the selected listener must match both specified values. * Service: Port name. When both Port (experimental) and SectionName are specified, the name and port of the selected listener must match both specified values. Implementations MAY choose to support attaching Routes to other resources. If that is the case, they MUST clearly document how SectionName is interpreted. When unspecified (empty string), this will reference the entire resource. For the purpose of status, an attachment is considered successful if at least one section in the parent resource accepts it. For example, Gateway listeners can restrict which Routes can attach to them by Route kind, namespace, or hostname. If 1 of 2 Gateway listeners accept attachment from the referencing Route, the Route MUST be considered successfully attached. If no Gateway listeners accept attachment from this Route, the Route MUST be considered detached from the Gateway. Support: Core maxLength: 253 minLength: 1 pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ type: string required: - name type: object type: array x-kubernetes-list-type: atomic podTemplate: description: |- Optional pod template used to configure the ACME challenge solver pods used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the pod used to solve HTTP01 challenges. 
Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver pods. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver pods. type: object type: object spec: description: |- PodSpec defines overrides for the HTTP01 challenge solver pod. Check ACMEChallengeSolverHTTP01IngressPodSpec to find out currently supported fields. All other fields will be ignored. properties: affinity: description: If specified, the pod's scheduling constraints properties: nodeAffinity: description: Describes node affinity scheduling rules for the pod. properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. items: description: |- An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). properties: preference: description: A node selector term, associated with the corresponding weight. properties: matchExpressions: description: A list of node selector requirements by node's labels. 
items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic weight: description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. format: int32 type: integer required: - preference - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. properties: nodeSelectorTerms: description: Required. A list of node selector terms. The terms are ORed. items: description: |- A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-type: atomic required: - nodeSelectorTerms type: object x-kubernetes-map-type: atomic type: object podAffinity: description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". 
The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. 
Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object podAntiAffinity: description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and subtracting "weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. 
If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 
type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object type: object imagePullSecrets: description: If specified, the pod's imagePullSecrets items: description: |- LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. properties: name: default: "" description: |- Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-map-keys: - name x-kubernetes-list-type: map nodeSelector: additionalProperties: type: string description: |- NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ type: object priorityClassName: description: If specified, the pod's priorityClassName. type: string resources: description: |- If specified, the pod's resource requirements. These values override the global resource configuration flags. Note that when only specifying resource limits, ensure they are greater than or equal to the corresponding global resource requests configured via controller flags (--acme-http01-solver-resource-request-cpu, --acme-http01-solver-resource-request-memory). Kubernetes will reject pod creation if limits are lower than requests, causing challenge failures. 
properties: limits: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object requests: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to the global values configured via controller flags. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object type: object securityContext: description: If specified, the pod's security context properties: fsGroup: description: |- A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer fsGroupChangePolicy: description: |- fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). 
It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. type: string runAsGroup: description: |- The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer runAsNonRoot: description: |- Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. type: boolean runAsUser: description: |- The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer seLinuxOptions: description: |- The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. properties: level: description: Level is SELinux level label that applies to the container. 
type: string role: description: Role is a SELinux role label that applies to the container. type: string type: description: Type is a SELinux type label that applies to the container. type: string user: description: User is a SELinux user label that applies to the container. type: string type: object seccompProfile: description: |- The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. properties: localhostProfile: description: |- localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type: string type: description: |- type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. type: string required: - type type: object supplementalGroups: description: |- A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. items: format: int64 type: integer type: array x-kubernetes-list-type: atomic sysctls: description: |- Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. 
Note that this field cannot be set when spec.os.name is windows. items: description: Sysctl defines a kernel parameter to be set properties: name: description: Name of a property to set type: string value: description: Value of a property to set type: string required: - name - value type: object type: array x-kubernetes-list-type: atomic type: object serviceAccountName: description: If specified, the pod's service account type: string tolerations: description: If specified, the pod's tolerations. items: description: |- The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. properties: effect: description: |- Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. type: string key: description: |- Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. type: string operator: description: |- Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. type: string tolerationSeconds: description: |- TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. format: int64 type: integer value: description: |- Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 
type: string type: object type: array x-kubernetes-list-type: atomic type: object type: object serviceType: description: |- Optional service type for Kubernetes solver service. Supported values are NodePort or ClusterIP. If unset, defaults to NodePort. type: string type: object ingress: description: |- The ingress based HTTP01 challenge solver will solve challenges by creating or modifying Ingress resources in order to route requests for '/.well-known/acme-challenge/XYZ' to 'challenge solver' pods that are provisioned by cert-manager for each Challenge to be completed. properties: class: description: |- This field configures the annotation `kubernetes.io/ingress.class` when creating Ingress resources to solve ACME challenges that use this challenge solver. Only one of `class`, `name` or `ingressClassName` may be specified. type: string ingressClassName: description: |- This field configures the field `ingressClassName` on the created Ingress resources used to solve ACME challenges that use this challenge solver. This is the recommended way of configuring the ingress class. Only one of `class`, `name` or `ingressClassName` may be specified. type: string ingressTemplate: description: |- Optional ingress template used to configure the ACME challenge solver ingress used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the ingress used to solve HTTP01 challenges. Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver ingress. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver ingress. 
type: object type: object type: object name: description: |- The name of the ingress resource that should have ACME challenge solving routes inserted into it in order to solve HTTP01 challenges. This is typically used in conjunction with ingress controllers like ingress-gce, which maintains a 1:1 mapping between external IPs and ingress resources. Only one of `class`, `name` or `ingressClassName` may be specified. type: string podTemplate: description: |- Optional pod template used to configure the ACME challenge solver pods used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the pod used to solve HTTP01 challenges. Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver pods. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver pods. type: object type: object spec: description: |- PodSpec defines overrides for the HTTP01 challenge solver pod. Check ACMEChallengeSolverHTTP01IngressPodSpec to find out currently supported fields. All other fields will be ignored. properties: affinity: description: If specified, the pod's scheduling constraints properties: nodeAffinity: description: Describes node affinity scheduling rules for the pod. properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. items: description: |- An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). properties: preference: description: A node selector term, associated with the corresponding weight. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. 
type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic weight: description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. format: int32 type: integer required: - preference - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. properties: nodeSelectorTerms: description: Required. A list of node selector terms. The terms are ORed. items: description: |- A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. 
type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-type: atomic required: - nodeSelectorTerms type: object x-kubernetes-map-type: atomic type: object podAffinity: description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). 
properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. 
null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. 
The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object podAntiAffinity: description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and subtracting "weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". 
The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. 
Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object type: object imagePullSecrets: description: If specified, the pod's imagePullSecrets items: description: |- LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. properties: name: default: "" description: |- Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-map-keys: - name x-kubernetes-list-type: map nodeSelector: additionalProperties: type: string description: |- NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ type: object priorityClassName: description: If specified, the pod's priorityClassName. type: string resources: description: |- If specified, the pod's resource requirements. These values override the global resource configuration flags. 
Note that when only specifying resource limits, ensure they are greater than or equal to the corresponding global resource requests configured via controller flags (--acme-http01-solver-resource-request-cpu, --acme-http01-solver-resource-request-memory). Kubernetes will reject pod creation if limits are lower than requests, causing challenge failures. properties: limits: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object requests: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to the global values configured via controller flags. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object type: object securityContext: description: If specified, the pod's security context properties: fsGroup: description: |- A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. 
format: int64 type: integer fsGroupChangePolicy: description: |- fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. type: string runAsGroup: description: |- The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer runAsNonRoot: description: |- Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. type: boolean runAsUser: description: |- The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer seLinuxOptions: description: |- The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. properties: level: description: Level is SELinux level label that applies to the container. type: string role: description: Role is a SELinux role label that applies to the container. type: string type: description: Type is a SELinux type label that applies to the container. type: string user: description: User is a SELinux user label that applies to the container. type: string type: object seccompProfile: description: |- The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. properties: localhostProfile: description: |- localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type: string type: description: |- type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. type: string required: - type type: object supplementalGroups: description: |- A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. 
items: format: int64 type: integer type: array x-kubernetes-list-type: atomic sysctls: description: |- Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. items: description: Sysctl defines a kernel parameter to be set properties: name: description: Name of a property to set type: string value: description: Value of a property to set type: string required: - name - value type: object type: array x-kubernetes-list-type: atomic type: object serviceAccountName: description: If specified, the pod's service account type: string tolerations: description: If specified, the pod's tolerations. items: description: |- The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . properties: effect: description: |- Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. type: string key: description: |- Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. type: string operator: description: |- Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. type: string tolerationSeconds: description: |- TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. 
format: int64 type: integer value: description: |- Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. type: string type: object type: array x-kubernetes-list-type: atomic type: object type: object serviceType: description: |- Optional service type for Kubernetes solver service. Supported values are NodePort or ClusterIP. If unset, defaults to NodePort. type: string type: object type: object selector: description: |- Selector selects a set of DNSNames on the Certificate resource that should be solved using this challenge solver. If not specified, the solver will be treated as the 'default' solver with the lowest priority, i.e. if any other solver has a more specific match, it will be used instead. properties: dnsNames: description: |- List of DNSNames that this solver will be used to solve. If specified and a match is found, a dnsNames selector will take precedence over a dnsZones selector. If multiple solvers match with the same dnsNames value, the solver with the most matching labels in matchLabels will be selected. If neither has more matches, the solver defined earlier in the list will be selected. items: type: string type: array x-kubernetes-list-type: atomic dnsZones: description: |- List of DNSZones that this solver will be used to solve. The most specific DNS zone match specified here will take precedence over other DNS zone matches, so a solver specifying sys.example.com will be selected over one specifying example.com for the domain www.sys.example.com. If multiple solvers match with the same dnsZones value, the solver with the most matching labels in matchLabels will be selected. If neither has more matches, the solver defined earlier in the list will be selected. 
items: type: string type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- A label selector that is used to refine the set of certificates that this challenge solver will apply to. type: object type: object type: object type: array x-kubernetes-list-type: atomic required: - privateKeySecretRef - server type: object ca: description: |- CA configures this issuer to sign certificates using a signing CA keypair stored in a Secret resource. This is used to build internal PKIs that are managed by cert-manager. properties: crlDistributionPoints: description: |- The CRL distribution points is an X.509 v3 certificate extension which identifies the location of the CRL from which the revocation of this certificate can be checked. If not set, certificates will be issued without distribution points set. items: type: string type: array x-kubernetes-list-type: atomic issuingCertificateURLs: description: |- IssuingCertificateURLs is a list of URLs which this issuer should embed into certificates it creates. See https://www.rfc-editor.org/rfc/rfc5280#section-4.2.2.1 for more details. As an example, such a URL might be "http://ca.domain.com/ca.crt". items: type: string type: array x-kubernetes-list-type: atomic ocspServers: description: |- The OCSP server list is an X.509 v3 extension that defines a list of URLs of OCSP responders. The OCSP responders can be queried for the revocation status of an issued certificate. If not set, the certificate will be issued with no OCSP servers set. For example, an OCSP server URL could be "http://ocsp.int-x3.letsencrypt.org". items: type: string type: array x-kubernetes-list-type: atomic secretName: description: |- SecretName is the name of the secret used to sign Certificates issued by this Issuer. 
type: string required: - secretName type: object selfSigned: description: |- SelfSigned configures this issuer to 'self sign' certificates using the private key used to create the CertificateRequest object. properties: crlDistributionPoints: description: |- The CRL distribution points is an X.509 v3 certificate extension which identifies the location of the CRL from which the revocation of this certificate can be checked. If not set, certificates will be issued without CDP. Values are strings. items: type: string type: array x-kubernetes-list-type: atomic type: object vault: description: |- Vault configures this issuer to sign certificates using a HashiCorp Vault PKI backend. properties: auth: description: Auth configures how cert-manager authenticates with the Vault server. properties: appRole: description: |- AppRole authenticates with Vault using the App Role auth mechanism, with the role and secret stored in a Kubernetes Secret resource. properties: path: description: |- Path where the App Role authentication backend is mounted in Vault, e.g: "approle" type: string roleId: description: |- RoleID configured in the App Role authentication backend when setting up the authentication backend in Vault. type: string secretRef: description: |- Reference to a key in a Secret that contains the App Role secret used to authenticate with Vault. The `key` field must be specified and denotes which entry within the Secret resource is used as the app role secret. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. 
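# Illustrative sketch (YAML comments only, so the manifest stream stays valid):
# a minimal ClusterIssuer using the `ca` configuration described above. The
# Secret name "ca-key-pair" is a hypothetical placeholder; per the CA issuer
# contract, it must hold the signing keypair under tls.crt and tls.key.
#
#   apiVersion: cert-manager.io/v1
#   kind: ClusterIssuer
#   metadata:
#     name: internal-ca
#   spec:
#     ca:
#       secretName: ca-key-pair   # hypothetical Secret with tls.crt/tls.key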
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - path - roleId - secretRef type: object clientCertificate: description: |- ClientCertificate authenticates with Vault by presenting a client certificate during the request's TLS handshake. Works only when using HTTPS protocol. properties: mountPath: description: |- The Vault mountPath here is the mount path to use when authenticating with Vault. For example, setting a value to `/v1/auth/foo`, will use the path `/v1/auth/foo/login` to authenticate with Vault. If unspecified, the default value "/v1/auth/cert" will be used. type: string name: description: |- Name of the certificate role to authenticate against. If not set, matching any certificate role, if available. type: string secretName: description: |- Reference to Kubernetes Secret of type "kubernetes.io/tls" (hence containing tls.crt and tls.key) used to authenticate to Vault using TLS client authentication. type: string type: object kubernetes: description: |- Kubernetes authenticates with Vault by passing the ServiceAccount token stored in the named Secret resource to the Vault server. properties: mountPath: description: |- The Vault mountPath here is the mount path to use when authenticating with Vault. For example, setting a value to `/v1/auth/foo`, will use the path `/v1/auth/foo/login` to authenticate with Vault. If unspecified, the default value "/v1/auth/kubernetes" will be used. type: string role: description: |- A required field containing the Vault Role to assume. A Role binds a Kubernetes ServiceAccount with a set of Vault policies. type: string secretRef: description: |- The required Secret field containing a Kubernetes ServiceAccount JWT used for authenticating with Vault. Use of 'ambient credentials' is not supported. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. 
Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object serviceAccountRef: description: |- A reference to a service account that will be used to request a bound token (also known as "projected token"). Compared to using "secretRef", using this field means that you don't rely on statically bound tokens. To use this field, you must configure an RBAC rule to let cert-manager request a token. properties: audiences: description: |- TokenAudiences is an optional list of extra audiences to include in the token passed to Vault. The default token consisting of the issuer's namespace and name is always included. items: type: string type: array x-kubernetes-list-type: atomic name: description: Name of the ServiceAccount used to request a token. type: string required: - name type: object required: - role type: object tokenSecretRef: description: TokenSecretRef authenticates with Vault by presenting a token. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object type: object caBundle: description: |- Base64-encoded bundle of PEM CAs which will be used to validate the certificate chain presented by Vault. Only used if using HTTPS to connect to Vault and ignored for HTTP connections. Mutually exclusive with CABundleSecretRef. If neither CABundle nor CABundleSecretRef are defined, the certificate bundle in the cert-manager controller container is used to validate the TLS connection. 
format: byte type: string caBundleSecretRef: description: |- Reference to a Secret containing a bundle of PEM-encoded CAs to use when verifying the certificate chain presented by Vault when using HTTPS. Mutually exclusive with CABundle. If neither CABundle nor CABundleSecretRef are defined, the certificate bundle in the cert-manager controller container is used to validate the TLS connection. If no key for the Secret is specified, cert-manager will default to 'ca.crt'. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object clientCertSecretRef: description: |- Reference to a Secret containing a PEM-encoded Client Certificate to use when the Vault server requires mTLS. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object clientKeySecretRef: description: |- Reference to a Secret containing a PEM-encoded Client Private Key to use when the Vault server requires mTLS. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object namespace: description: |- Name of the vault namespace. 
Namespaces is a set of features within Vault Enterprise that allows Vault environments to support Secure Multi-tenancy. e.g: "ns1" More about namespaces can be found here https://www.vaultproject.io/docs/enterprise/namespaces type: string path: description: |- Path is the mount path of the Vault PKI backend's `sign` endpoint, e.g: "my_pki_mount/sign/my-role-name". type: string server: description: 'Server is the connection address for the Vault server, e.g: "https://vault.example.com:8200".' type: string serverName: description: |- ServerName is used to verify the hostname on the returned certificates by the Vault server. type: string required: - auth - path - server type: object venafi: description: |- Venafi configures this issuer to sign certificates using a Venafi TPP or Venafi Cloud policy zone. properties: cloud: description: |- Cloud specifies the Venafi cloud configuration settings. Only one of TPP or Cloud may be specified. properties: apiTokenSecretRef: description: APITokenSecretRef is a secret key selector for the Venafi Cloud API token. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object url: description: |- URL is the base URL for Venafi Cloud. Defaults to "https://api.venafi.cloud/". type: string required: - apiTokenSecretRef type: object tpp: description: |- TPP specifies Trust Protection Platform configuration settings. Only one of TPP or Cloud may be specified. properties: caBundle: description: |- Base64-encoded bundle of PEM CAs which will be used to validate the certificate chain presented by the TPP server. Only used if using HTTPS; ignored for HTTP. 
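# Illustrative sketch (YAML comments only; names, paths, and the server URL
# are hypothetical): an Issuer using the `vault` configuration above with
# AppRole authentication. `path` is the mount path of the PKI backend's sign
# endpoint; `auth.appRole.secretRef` points at the Secret holding the SecretID.
#
#   apiVersion: cert-manager.io/v1
#   kind: Issuer
#   metadata:
#     name: vault-issuer
#     namespace: sandbox
#   spec:
#     vault:
#       server: https://vault.example.com:8200
#       path: pki_int/sign/example-dot-com
#       auth:
#         appRole:
#           path: approle
#           roleId: my-role-id                    # hypothetical RoleID
#           secretRef:
#             name: cert-manager-vault-approle    # hypothetical Secret
#             key: secretId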
If undefined, the certificate bundle in the cert-manager controller container is used to validate the chain. format: byte type: string caBundleSecretRef: description: |- Reference to a Secret containing a base64-encoded bundle of PEM CAs which will be used to validate the certificate chain presented by the TPP server. Only used if using HTTPS; ignored for HTTP. Mutually exclusive with CABundle. If neither CABundle nor CABundleSecretRef is defined, the certificate bundle in the cert-manager controller container is used to validate the TLS connection. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object credentialsRef: description: |- CredentialsRef is a reference to a Secret containing the Venafi TPP API credentials. The secret must contain the key 'access-token' for the Access Token Authentication, or two keys, 'username' and 'password' for the API Keys Authentication. properties: name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object url: description: |- URL is the base URL for the vedsdk endpoint of the Venafi TPP instance, for example: "https://tpp.example.com/vedsdk". type: string required: - credentialsRef - url type: object zone: description: |- Zone is the Venafi Policy Zone to use for this issuer. All requests made to the Venafi platform will be restricted by the named zone policy. This field is required. type: string required: - zone type: object type: object status: description: Status of the ClusterIssuer. This is set and managed automatically. 
properties: acme: description: |- ACME specific status options. This field should only be set if the Issuer is configured to use an ACME server to issue certificates. properties: lastPrivateKeyHash: description: |- LastPrivateKeyHash is a hash of the private key associated with the latest registered ACME account, in order to track changes made to registered account associated with the Issuer type: string lastRegisteredEmail: description: |- LastRegisteredEmail is the email associated with the latest registered ACME account, in order to track changes made to registered account associated with the Issuer type: string uri: description: |- URI is the unique account identifier, which can also be used to retrieve account details from the CA type: string type: object conditions: description: |- List of status conditions to indicate the status of a CertificateRequest. Known condition types are `Ready`. items: description: IssuerCondition contains condition information for an Issuer. properties: lastTransitionTime: description: |- LastTransitionTime is the timestamp corresponding to the last status change of this condition. format: date-time type: string message: description: |- Message is a human readable description of the details of the last transition, complementing reason. type: string observedGeneration: description: |- If set, this represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Issuer. format: int64 type: integer reason: description: |- Reason is a brief machine readable explanation for the condition's last transition. type: string status: description: Status of the condition, one of (`True`, `False`, `Unknown`). enum: - "True" - "False" - Unknown type: string type: description: Type of the condition, known values are (`Ready`). 
type: string required: - status - type type: object type: array x-kubernetes-list-map-keys: - type x-kubernetes-list-type: map type: object required: - spec type: object served: true storage: true subresources: status: {} --- # Source: cert-manager/templates/crd-cert-manager.io_issuers.yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: "issuers.cert-manager.io" annotations: helm.sh/resource-policy: keep labels: app: "cert-manager" app.kubernetes.io/name: "cert-manager" app.kubernetes.io/instance: "cert-manager" app.kubernetes.io/component: "crds" app.kubernetes.io/version: "v1.19.2" spec: group: cert-manager.io names: categories: - cert-manager kind: Issuer listKind: IssuerList plural: issuers shortNames: - iss singular: issuer scope: Namespaced versions: - additionalPrinterColumns: - jsonPath: .status.conditions[?(@.type == "Ready")].status name: Ready type: string - jsonPath: .status.conditions[?(@.type == "Ready")].message name: Status priority: 1 type: string - description: CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. jsonPath: .metadata.creationTimestamp name: Age type: date name: v1 schema: openAPIV3Schema: description: |- An Issuer represents a certificate issuing authority which can be referenced as part of `issuerRef` fields. It is scoped to a single namespace and can therefore only be referenced by resources within the same namespace. properties: apiVersion: description: |- APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources type: string kind: description: |- Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds type: string metadata: type: object spec: description: Desired state of the Issuer resource. properties: acme: description: |- ACME configures this issuer to communicate with a RFC8555 (ACME) server to obtain signed x509 certificates. properties: caBundle: description: |- Base64-encoded bundle of PEM CAs which can be used to validate the certificate chain presented by the ACME server. Mutually exclusive with SkipTLSVerify; prefer using CABundle to prevent various kinds of security vulnerabilities. If CABundle and SkipTLSVerify are unset, the system certificate bundle inside the container is used to validate the TLS connection. format: byte type: string disableAccountKeyGeneration: description: |- Enables or disables generating a new ACME account key. If true, the Issuer resource will *not* request a new account but will expect the account key to be supplied via an existing secret. If false, the cert-manager system will generate a new ACME account key for the Issuer. Defaults to false. type: boolean email: description: |- Email is the email address to be associated with the ACME account. This field is optional, but it is strongly recommended to be set. It will be used to contact you in case of issues with your account or certificates, including expiry notification emails. This field may be updated after the account is initially registered. type: string enableDurationFeature: description: |- Enables requesting a Not After date on certificates that matches the duration of the certificate. 
This is not supported by all ACME servers like Let's Encrypt. If set to true when the ACME server does not support it, it will create an error on the Order. Defaults to false. type: boolean externalAccountBinding: description: |- ExternalAccountBinding is a reference to a CA external account of the ACME server. If set, upon registration cert-manager will attempt to associate the given external account credentials with the registered ACME account. properties: keyAlgorithm: description: |- Deprecated: keyAlgorithm field exists for historical compatibility reasons and should not be used. The algorithm is now hardcoded to HS256 in golang/x/crypto/acme. enum: - HS256 - HS384 - HS512 type: string keyID: description: keyID is the ID of the CA key that the External Account is bound to. type: string keySecretRef: description: |- keySecretRef is a Secret Key Selector referencing a data item in a Kubernetes Secret which holds the symmetric MAC key of the External Account Binding. The `key` is the index string that is paired with the key data in the Secret and should not be confused with the key data itself, or indeed with the External Account Binding keyID above. The secret key stored in the Secret **must** be un-padded, base64 URL encoded data. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - keyID - keySecretRef type: object preferredChain: description: |- PreferredChain is the chain to use if the ACME server outputs multiple. PreferredChain is no guarantee that this one gets delivered by the ACME endpoint. 
For example, for Let's Encrypt's DST cross-sign you would use: "DST Root CA X3" or "ISRG Root X1" for the newer Let's Encrypt root CA. This value picks the first certificate bundle in the combined set of ACME default and alternative chains that has a root-most certificate with this value as its issuer's commonname. maxLength: 64 type: string privateKeySecretRef: description: |- PrivateKey is the name of a Kubernetes Secret resource that will be used to store the automatically generated ACME account private key. Optionally, a `key` may be specified to select a specific entry within the named Secret resource. If `key` is not specified, a default of `tls.key` will be used. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object profile: description: |- Profile allows requesting a certificate profile from the ACME server. Supported profiles are listed by the server's ACME directory URL. type: string server: description: |- Server is the URL used to access the ACME server's 'directory' endpoint. For example, for Let's Encrypt's staging endpoint, you would use: "https://acme-staging-v02.api.letsencrypt.org/directory". Only ACME v2 endpoints (i.e. RFC 8555) are supported. type: string skipTLSVerify: description: |- INSECURE: Enables or disables validation of the ACME server TLS certificate. If true, requests to the ACME server will not have the TLS certificate chain validated. Mutually exclusive with CABundle; prefer using CABundle to prevent various kinds of security vulnerabilities. Only enable this option in development environments. 
If CABundle and SkipTLSVerify are unset, the system certificate bundle inside the container is used to validate the TLS connection. Defaults to false. type: boolean solvers: description: |- Solvers is a list of challenge solvers that will be used to solve ACME challenges for the matching domains. Solver configurations must be provided in order to obtain certificates from an ACME server. For more information, see: https://cert-manager.io/docs/configuration/acme/ items: description: |- An ACMEChallengeSolver describes how to solve ACME challenges for the issuer it is part of. A selector may be provided to use different solving strategies for different DNS names. Only one of HTTP01 or DNS01 must be provided. properties: dns01: description: |- Configures cert-manager to attempt to complete authorizations by performing the DNS01 challenge flow. properties: acmeDNS: description: |- Use the 'ACME DNS' (https://github.com/joohoi/acme-dns) API to manage DNS01 challenge records. properties: accountSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object host: type: string required: - accountSecretRef - host type: object akamai: description: Use the Akamai DNS zone management API to manage DNS01 challenge records. properties: accessTokenSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. 
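The acme stanza above can be exercised with a minimal Issuer manifest. This is an illustrative sketch consistent with the schema, not output captured by this must-gather: all names, the namespace, and the acme-dns endpoint are placeholders; the server URL is Let's Encrypt's public staging endpoint mentioned in the schema descriptions.

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging        # placeholder
  namespace: example-ns            # placeholder
spec:
  acme:
    # Only ACME v2 (RFC 8555) directory endpoints are supported.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com       # optional, but strongly recommended
    privateKeySecretRef:
      name: letsencrypt-staging-account-key  # 'key' defaults to tls.key
    solvers:
    - dns01:
        acmeDNS:                   # joohoi/acme-dns style solver from the schema
          host: https://auth.example.com     # placeholder acme-dns host
          accountSecretRef:
            name: acme-dns-credentials       # placeholder Secret name
            key: acmedns.json
```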
Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object clientSecretSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object clientTokenSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object serviceConsumerDomain: type: string required: - accessTokenSecretRef - clientSecretSecretRef - clientTokenSecretRef - serviceConsumerDomain type: object azureDNS: description: Use the Microsoft Azure DNS API to manage DNS01 challenge records. properties: clientID: description: |- Auth: Azure Service Principal: The ClientID of the Azure Service Principal used to authenticate with Azure DNS. If set, ClientSecret and TenantID must also be set. type: string clientSecretSecretRef: description: |- Auth: Azure Service Principal: A reference to a Secret containing the password associated with the Service Principal. 
If set, ClientID and TenantID must also be set. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object environment: description: name of the Azure environment (default AzurePublicCloud) enum: - AzurePublicCloud - AzureChinaCloud - AzureGermanCloud - AzureUSGovernmentCloud type: string hostedZoneName: description: name of the DNS zone that should be used type: string managedIdentity: description: |- Auth: Azure Workload Identity or Azure Managed Service Identity: Settings to enable Azure Workload Identity or Azure Managed Service Identity If set, ClientID, ClientSecret and TenantID must not be set. properties: clientID: description: client ID of the managed identity, cannot be used at the same time as resourceID type: string resourceID: description: |- resource ID of the managed identity, cannot be used at the same time as clientID Cannot be used for Azure Managed Service Identity type: string tenantID: description: tenant ID of the managed identity, cannot be used at the same time as resourceID type: string type: object resourceGroupName: description: resource group the DNS zone is located in type: string subscriptionID: description: ID of the Azure subscription type: string tenantID: description: |- Auth: Azure Service Principal: The TenantID of the Azure Service Principal used to authenticate with Azure DNS. If set, ClientID and ClientSecret must also be set. type: string required: - resourceGroupName - subscriptionID type: object cloudDNS: description: Use the Google Cloud DNS API to manage DNS01 challenge records. 
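A hypothetical azureDNS solver stanza matching the schema above (only resourceGroupName and subscriptionID are required). Per the schema, managedIdentity is mutually exclusive with the clientID/clientSecretSecretRef/tenantID service-principal fields; all IDs and names below are placeholders.

```yaml
    solvers:
    - dns01:
        azureDNS:
          subscriptionID: 00000000-0000-0000-0000-000000000000  # placeholder
          resourceGroupName: example-dns-rg                     # placeholder
          hostedZoneName: example.com
          environment: AzurePublicCloud       # the schema default
          managedIdentity:
            # clientID and resourceID cannot both be set.
            clientID: 11111111-1111-1111-1111-111111111111      # placeholder
```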
properties: hostedZoneName: description: |- HostedZoneName is an optional field that tells cert-manager in which Cloud DNS zone the challenge record has to be created. If left empty cert-manager will automatically choose a zone. type: string project: type: string serviceAccountSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - project type: object cloudflare: description: Use the Cloudflare API to manage DNS01 challenge records. properties: apiKeySecretRef: description: |- API key to use to authenticate with Cloudflare. Note: using an API token to authenticate is now the recommended method as it allows greater control of permissions. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object apiTokenSecretRef: description: API token used to authenticate with Cloudflare. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. 
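A hypothetical cloudflare solver stanza consistent with the schema above. It uses apiTokenSecretRef, which the schema notes is the recommended authentication method; the email field is only required for API-key based auth and is omitted here. Secret and key names are placeholders.

```yaml
    solvers:
    - dns01:
        cloudflare:
          # Token auth is recommended over apiKeySecretRef; with a token,
          # the 'email' field is not required.
          apiTokenSecretRef:
            name: cloudflare-api-token   # placeholder Secret name
            key: api-token               # placeholder key within the Secret
```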
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object email: description: Email of the account, only required when using API key based authentication. type: string type: object cnameStrategy: description: |- CNAMEStrategy configures how the DNS01 provider should handle CNAME records when found in DNS zones. enum: - None - Follow type: string digitalocean: description: Use the DigitalOcean DNS API to manage DNS01 challenge records. properties: tokenSecretRef: description: |- A reference to a specific 'key' within a Secret resource. In some instances, `key` is a required field. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - tokenSecretRef type: object rfc2136: description: |- Use RFC2136 ("Dynamic Updates in the Domain Name System") (https://datatracker.ietf.org/doc/rfc2136/) to manage DNS01 challenge records. properties: nameserver: description: |- The IP address or hostname of an authoritative DNS server supporting RFC2136 in the form host:port. If the host is an IPv6 address it must be enclosed in square brackets (e.g [2001:db8::1]) ; port is optional. This field is required. type: string protocol: description: Protocol to use for dynamic DNS update queries. Valid values are (case-sensitive) ``TCP`` and ``UDP``; ``UDP`` (default). enum: - TCP - UDP type: string tsigAlgorithm: description: |- The TSIG Algorithm configured in the DNS supporting RFC2136. Used only when ``tsigSecretSecretRef`` and ``tsigKeyName`` are defined. Supported values are (case-insensitive): ``HMACMD5`` (default), ``HMACSHA1``, ``HMACSHA256`` or ``HMACSHA512``. 
type: string tsigKeyName: description: |- The TSIG Key name configured in the DNS. If ``tsigSecretSecretRef`` is defined, this field is required. type: string tsigSecretSecretRef: description: |- The name of the secret containing the TSIG value. If ``tsigKeyName`` is defined, this field is required. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - nameserver type: object route53: description: Use the AWS Route53 API to manage DNS01 challenge records. properties: accessKeyID: description: |- The AccessKeyID is used for authentication. Cannot be set when SecretAccessKeyID is set. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials type: string accessKeyIDSecretRef: description: |- The SecretAccessKey is used for authentication. If set, pull the AWS access key ID from a key within a Kubernetes Secret. Cannot be set when AccessKeyID is set. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. 
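A hypothetical rfc2136 solver stanza matching the schema above. Only nameserver is required; tsigKeyName and tsigSecretSecretRef must be set together, and HMACSHA256 is one of the supported TSIG algorithms listed in the schema. The address, key name, and Secret reference are placeholders.

```yaml
    solvers:
    - dns01:
        rfc2136:
          # host:port form; an IPv6 host must be bracketed, e.g. "[2001:db8::1]:53"
          nameserver: "192.0.2.53:53"      # placeholder (TEST-NET address)
          tsigKeyName: example-com-key     # placeholder; requires tsigSecretSecretRef
          tsigAlgorithm: HMACSHA256        # default is HMACMD5
          tsigSecretSecretRef:
            name: tsig-secret              # placeholder Secret name
            key: tsig-secret-key
```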
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object auth: description: Auth configures how cert-manager authenticates. properties: kubernetes: description: |- Kubernetes authenticates with Route53 using AssumeRoleWithWebIdentity by passing a bound ServiceAccount token. properties: serviceAccountRef: description: |- A reference to a service account that will be used to request a bound token (also known as "projected token"). To use this field, you must configure an RBAC rule to let cert-manager request a token. properties: audiences: description: |- TokenAudiences is an optional list of audiences to include in the token passed to AWS. The default token consisting of the issuer's namespace and name is always included. If unset the audience defaults to `sts.amazonaws.com`. items: type: string type: array x-kubernetes-list-type: atomic name: description: Name of the ServiceAccount used to request a token. type: string required: - name type: object required: - serviceAccountRef type: object required: - kubernetes type: object hostedZoneID: description: If set, the provider will manage only this zone in Route53 and will not do a lookup using the route53:ListHostedZonesByName api call. type: string region: description: |- Override the AWS region. Route53 is a global service and does not have regional endpoints but the region specified here (or via environment variables) is used as a hint to help compute the correct AWS credential scope and partition when it connects to Route53. See: - [Amazon Route 53 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/r53.html) - [Global services](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/global-services.html) If you omit this region field, cert-manager will use the region from AWS_REGION and AWS_DEFAULT_REGION environment variables, if they are set in the cert-manager controller Pod. 
The `region` field is not needed if you use [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). Instead an AWS_REGION environment variable is added to the cert-manager controller Pod by: [Amazon EKS Pod Identity Webhook](https://github.com/aws/amazon-eks-pod-identity-webhook). In this case this `region` field value is ignored. The `region` field is not needed if you use [EKS Pod Identities](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html). Instead an AWS_REGION environment variable is added to the cert-manager controller Pod by: [Amazon EKS Pod Identity Agent](https://github.com/aws/eks-pod-identity-agent), In this case this `region` field value is ignored. type: string role: description: |- Role is a Role ARN which the Route53 provider will assume using either the explicit credentials AccessKeyID/SecretAccessKey or the inferred credentials from environment variables, shared credentials file or AWS Instance metadata type: string secretAccessKeySecretRef: description: |- The SecretAccessKey is used for authentication. If neither the Access Key nor Key ID are set, we fall-back to using env vars, shared credentials file or AWS Instance metadata, see: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object type: object webhook: description: |- Configure an external webhook based DNS01 challenge solver to manage DNS01 challenge records. 
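A hypothetical route53 solver stanza using the ambient-credential path described above: auth.kubernetes.serviceAccountRef requests a bound ServiceAccount token for AssumeRoleWithWebIdentity (an RBAC rule must allow cert-manager to request that token). The zone ID, role ARN, and ServiceAccount name are placeholders.

```yaml
    solvers:
    - dns01:
        route53:
          region: us-east-1                # optional hint; ignored under IRSA / Pod Identities
          hostedZoneID: Z0PLACEHOLDER      # optional; skips the ListHostedZonesByName lookup
          role: arn:aws:iam::123456789012:role/cert-manager-dns01  # placeholder ARN
          auth:
            kubernetes:
              serviceAccountRef:
                name: cert-manager         # placeholder ServiceAccount name
                # audiences defaults to sts.amazonaws.com when unset
```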
properties: config: description: |- Additional configuration that should be passed to the webhook apiserver when challenges are processed. This can contain arbitrary JSON data. Secret values should not be specified in this stanza. If secret values are needed (e.g., credentials for a DNS service), you should use a SecretKeySelector to reference a Secret resource. For details on the schema of this field, consult the webhook provider implementation's documentation. x-kubernetes-preserve-unknown-fields: true groupName: description: |- The API group name that should be used when POSTing ChallengePayload resources to the webhook apiserver. This should be the same as the GroupName specified in the webhook provider implementation. type: string solverName: description: |- The name of the solver to use, as defined in the webhook provider implementation. This will typically be the name of the provider, e.g., 'cloudflare'. type: string required: - groupName - solverName type: object type: object http01: description: |- Configures cert-manager to attempt to complete authorizations by performing the HTTP01 challenge flow. It is not possible to obtain certificates for wildcard domain names (e.g., `*.example.com`) using the HTTP01 challenge mechanism. properties: gatewayHTTPRoute: description: |- The Gateway API is a sig-network community API that models service networking in Kubernetes (https://gateway-api.sigs.k8s.io/). The Gateway solver will create HTTPRoutes with the specified labels in the same namespace as the challenge. This solver is experimental, and fields / behaviour may change in the future. properties: labels: additionalProperties: type: string description: |- Custom labels that will be applied to HTTPRoutes created by cert-manager while solving HTTP-01 challenges. type: object parentRefs: description: |- When solving an HTTP-01 challenge, cert-manager creates an HTTPRoute. cert-manager needs to know which parentRefs should be used when creating the HTTPRoute. 
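A hypothetical external webhook solver stanza consistent with the schema above. groupName and solverName are required and must match the webhook provider implementation; the config block carries arbitrary provider-defined JSON, so every field inside it here is invented purely for illustration, and, per the schema, secret values belong in a referenced Secret rather than inline.

```yaml
    solvers:
    - dns01:
        webhook:
          groupName: acme.example.com   # placeholder; must match the provider's GroupName
          solverName: exampledns        # placeholder provider-defined solver name
          config:
            # Arbitrary provider-specific JSON; these keys are hypothetical.
            apiEndpoint: https://dns.example.com
            credentialsSecretRef:       # secret values referenced, not inlined
              name: exampledns-credentials
              key: token
```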
Usually, the parentRef references a Gateway. See: https://gateway-api.sigs.k8s.io/api-types/httproute/#attaching-to-gateways items: description: |- ParentReference identifies an API object (usually a Gateway) that can be considered a parent of this resource (usually a route). There are two kinds of parent resources with "Core" support: * Gateway (Gateway conformance profile) * Service (Mesh conformance profile, ClusterIP Services only) This API may be extended in the future to support additional kinds of parent resources. The API object must be valid in the cluster; the Group and Kind must be registered in the cluster for this reference to be valid. properties: group: default: gateway.networking.k8s.io description: |- Group is the group of the referent. When unspecified, "gateway.networking.k8s.io" is inferred. To set the core API group (such as for a "Service" kind referent), Group must be explicitly set to "" (empty string). Support: Core maxLength: 253 pattern: ^$|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ type: string kind: default: Gateway description: |- Kind is kind of the referent. There are two kinds of parent resources with "Core" support: * Gateway (Gateway conformance profile) * Service (Mesh conformance profile, ClusterIP Services only) Support for other resources is Implementation-Specific. maxLength: 63 minLength: 1 pattern: ^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$ type: string name: description: |- Name is the name of the referent. Support: Core maxLength: 253 minLength: 1 type: string namespace: description: |- Namespace is the namespace of the referent. When unspecified, this refers to the local namespace of the Route. Note that there are specific rules for ParentRefs which cross namespace boundaries. Cross-namespace references are only valid if they are explicitly allowed by something in the namespace they are referring to. 
For example: Gateway has the AllowedRoutes field, and ReferenceGrant provides a generic way to enable any other kind of cross-namespace reference. ParentRefs from a Route to a Service in the same namespace are "producer" routes, which apply default routing rules to inbound connections from any namespace to the Service. ParentRefs from a Route to a Service in a different namespace are "consumer" routes, and these routing rules are only applied to outbound connections originating from the same namespace as the Route, for which the intended destination of the connections are a Service targeted as a ParentRef of the Route. Support: Core maxLength: 63 minLength: 1 pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$ type: string port: description: |- Port is the network port this Route targets. It can be interpreted differently based on the type of parent resource. When the parent resource is a Gateway, this targets all listeners listening on the specified port that also support this kind of Route(and select this Route). It's not recommended to set `Port` unless the networking behaviors specified in a Route must apply to a specific port as opposed to a listener(s) whose port(s) may be changed. When both Port and SectionName are specified, the name and port of the selected listener must match both specified values. When the parent resource is a Service, this targets a specific port in the Service spec. When both Port (experimental) and SectionName are specified, the name and port of the selected port must match both specified values. Implementations MAY choose to support other parent resources. Implementations supporting other types of parent resources MUST clearly document how/if Port is interpreted. For the purpose of status, an attachment is considered successful as long as the parent resource accepts it partially. For example, Gateway listeners can restrict which Routes can attach to them by Route kind, namespace, or hostname. 
If 1 of 2 Gateway listeners accept attachment from the referencing Route, the Route MUST be considered successfully attached. If no Gateway listeners accept attachment from this Route, the Route MUST be considered detached from the Gateway. Support: Extended format: int32 maximum: 65535 minimum: 1 type: integer sectionName: description: |- SectionName is the name of a section within the target resource. In the following resources, SectionName is interpreted as the following: * Gateway: Listener name. When both Port (experimental) and SectionName are specified, the name and port of the selected listener must match both specified values. * Service: Port name. When both Port (experimental) and SectionName are specified, the name and port of the selected listener must match both specified values. Implementations MAY choose to support attaching Routes to other resources. If that is the case, they MUST clearly document how SectionName is interpreted. When unspecified (empty string), this will reference the entire resource. For the purpose of status, an attachment is considered successful if at least one section in the parent resource accepts it. For example, Gateway listeners can restrict which Routes can attach to them by Route kind, namespace, or hostname. If 1 of 2 Gateway listeners accept attachment from the referencing Route, the Route MUST be considered successfully attached. If no Gateway listeners accept attachment from this Route, the Route MUST be considered detached from the Gateway. Support: Core maxLength: 253 minLength: 1 pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ type: string required: - name type: object type: array x-kubernetes-list-type: atomic podTemplate: description: |- Optional pod template used to configure the ACME challenge solver pods used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the pod used to solve HTTP01 challenges. 
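A hypothetical http01 gatewayHTTPRoute solver stanza matching the schema above: cert-manager creates an HTTPRoute with the given labels and attaches it via parentRefs, which usually point at a Gateway (the schema notes this solver is experimental). The Gateway name, namespace, and listener name are placeholders.

```yaml
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - name: example-gateway        # placeholder Gateway name
            namespace: gateway-ns        # placeholder; cross-namespace refs need explicit allowance
            kind: Gateway                # the schema default
            sectionName: http            # optional; targets a specific listener
          labels:
            app: cert-manager-solver     # applied to the generated HTTPRoute
```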
Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver pods. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver pods. type: object type: object spec: description: |- PodSpec defines overrides for the HTTP01 challenge solver pod. Check ACMEChallengeSolverHTTP01IngressPodSpec to find out currently supported fields. All other fields will be ignored. properties: affinity: description: If specified, the pod's scheduling constraints properties: nodeAffinity: description: Describes node affinity scheduling rules for the pod. properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. items: description: |- An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). properties: preference: description: A node selector term, associated with the corresponding weight. properties: matchExpressions: description: A list of node selector requirements by node's labels. 
items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic weight: description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. format: int32 type: integer required: - preference - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. properties: nodeSelectorTerms: description: Required. A list of node selector terms. The terms are ORed. items: description: |- A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-type: atomic required: - nodeSelectorTerms type: object x-kubernetes-map-type: atomic type: object podAffinity: description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". 
The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
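As an illustration of the pod-affinity fields described above, a solver pod template might express a preferred co-location rule like the following. This is a minimal sketch; the `app: frontend` label, the weight, and the use of `podTemplate` as the override point are illustrative assumptions, not values taken from this cluster:

```yaml
# Hypothetical override on an HTTP01 solver podTemplate.
# Label values, weight, and topologyKey below are illustrative assumptions.
podTemplate:
  spec:
    affinity:
      podAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50                 # must fall in the range 1-100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: frontend        # prefer nodes already running these pods
              topologyKey: kubernetes.io/hostname
```

Because this term is under `preferredDuringSchedulingIgnoredDuringExecution`, the scheduler treats it as a weighted preference rather than a hard constraint.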
type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. 
Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object podAntiAffinity: description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and subtracting "weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. 
If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 
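The anti-affinity variant works the same way with the preference inverted; for example, a hard rule keeping two matching pods off the same node could be sketched as follows. The label key/value shown is an illustrative assumption:

```yaml
# Hypothetical hard anti-affinity rule: never schedule two pods carrying
# the matching label on the same node. Label key/value is an assumption.
podTemplate:
  spec:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                acme.cert-manager.io/http01-solver: "true"
            topologyKey: kubernetes.io/hostname   # at most one match per node
```

Because the term is `required` rather than `preferred`, a pod that cannot satisfy it stays Pending instead of being scheduled onto a violating node.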
type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object type: object imagePullSecrets: description: If specified, the pod's imagePullSecrets items: description: |- LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. properties: name: default: "" description: |- Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-map-keys: - name x-kubernetes-list-type: map nodeSelector: additionalProperties: type: string description: |- NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ type: object priorityClassName: description: If specified, the pod's priorityClassName. type: string resources: description: |- If specified, the pod's resource requirements. These values override the global resource configuration flags. Note that when only specifying resource limits, ensure they are greater than or equal to the corresponding global resource requests configured via controller flags (--acme-http01-solver-resource-request-cpu, --acme-http01-solver-resource-request-memory). Kubernetes will reject pod creation if limits are lower than requests, causing challenge failures. 
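Per the constraint described above (limits must be greater than or equal to the corresponding requests, or Kubernetes rejects the pod and the challenge fails), a valid resources override could look like this sketch; the quantities are arbitrary examples:

```yaml
# Illustrative resource override for the solver pod; quantities are
# arbitrary examples. Requests must not exceed limits.
podTemplate:
  spec:
    resources:
      requests:
        cpu: 10m
        memory: 64Mi
      limits:
        cpu: 100m        # >= the 10m request
        memory: 64Mi     # equal to the request, which is allowed
```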
properties: limits: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object requests: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to the global values configured via controller flags. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object type: object securityContext: description: If specified, the pod's security context properties: fsGroup: description: |- A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer fsGroupChangePolicy: description: |- fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). 
It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. type: string runAsGroup: description: |- The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer runAsNonRoot: description: |- Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. type: boolean runAsUser: description: |- The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer seLinuxOptions: description: |- The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. properties: level: description: Level is SELinux level label that applies to the container. 
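A common hardening pattern with the security-context fields above is to force a non-root UID and a default seccomp profile; a minimal sketch, where the UID/GID values are arbitrary:

```yaml
# Illustrative pod-level securityContext; UID/GID values are arbitrary.
podTemplate:
  spec:
    securityContext:
      runAsNonRoot: true     # kubelet refuses to start a container as UID 0
      runAsUser: 1000
      runAsGroup: 1000
      fsGroup: 1000          # volumes become group-owned by GID 1000
      seccompProfile:
        type: RuntimeDefault # use the container runtime's default profile
```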
type: string role: description: Role is a SELinux role label that applies to the container. type: string type: description: Type is a SELinux type label that applies to the container. type: string user: description: User is a SELinux user label that applies to the container. type: string type: object seccompProfile: description: |- The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. properties: localhostProfile: description: |- localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type: string type: description: |- type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. type: string required: - type type: object supplementalGroups: description: |- A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. items: format: int64 type: integer type: array x-kubernetes-list-type: atomic sysctls: description: |- Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. 
Note that this field cannot be set when spec.os.name is windows. items: description: Sysctl defines a kernel parameter to be set properties: name: description: Name of a property to set type: string value: description: Value of a property to set type: string required: - name - value type: object type: array x-kubernetes-list-type: atomic type: object serviceAccountName: description: If specified, the pod's service account type: string tolerations: description: If specified, the pod's tolerations. items: description: |- The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . properties: effect: description: |- Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. type: string key: description: |- Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. type: string operator: description: |- Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. type: string tolerationSeconds: description: |- TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. format: int64 type: integer value: description: |- Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 
type: string type: object type: array x-kubernetes-list-type: atomic type: object type: object serviceType: description: |- Optional service type for Kubernetes solver service. Supported values are NodePort or ClusterIP. If unset, defaults to NodePort. type: string type: object ingress: description: |- The ingress based HTTP01 challenge solver will solve challenges by creating or modifying Ingress resources in order to route requests for '/.well-known/acme-challenge/XYZ' to 'challenge solver' pods that are provisioned by cert-manager for each Challenge to be completed. properties: class: description: |- This field configures the annotation `kubernetes.io/ingress.class` when creating Ingress resources to solve ACME challenges that use this challenge solver. Only one of `class`, `name` or `ingressClassName` may be specified. type: string ingressClassName: description: |- This field configures the field `ingressClassName` on the created Ingress resources used to solve ACME challenges that use this challenge solver. This is the recommended way of configuring the ingress class. Only one of `class`, `name` or `ingressClassName` may be specified. type: string ingressTemplate: description: |- Optional ingress template used to configure the ACME challenge solver ingress used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the ingress used to solve HTTP01 challenges. Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver ingress. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver ingress. 
type: object type: object type: object name: description: |- The name of the ingress resource that should have ACME challenge solving routes inserted into it in order to solve HTTP01 challenges. This is typically used in conjunction with ingress controllers like ingress-gce, which maintains a 1:1 mapping between external IPs and ingress resources. Only one of `class`, `name` or `ingressClassName` may be specified. type: string podTemplate: description: |- Optional pod template used to configure the ACME challenge solver pods used for HTTP01 challenges. properties: metadata: description: |- ObjectMeta overrides for the pod used to solve HTTP01 challenges. Only the 'labels' and 'annotations' fields may be set. If labels or annotations overlap with in-built values, the values here will override the in-built values. properties: annotations: additionalProperties: type: string description: Annotations that should be added to the created ACME HTTP01 solver pods. type: object labels: additionalProperties: type: string description: Labels that should be added to the created ACME HTTP01 solver pods. type: object type: object spec: description: |- PodSpec defines overrides for the HTTP01 challenge solver pod. Check ACMEChallengeSolverHTTP01IngressPodSpec to find out currently supported fields. All other fields will be ignored. properties: affinity: description: If specified, the pod's scheduling constraints properties: nodeAffinity: description: Describes node affinity scheduling rules for the pod. properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. items: description: |- An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). properties: preference: description: A node selector term, associated with the corresponding weight. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. 
type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic weight: description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. format: int32 type: integer required: - preference - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. properties: nodeSelectorTerms: description: Required. A list of node selector terms. The terms are ORed. items: description: |- A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. properties: matchExpressions: description: A list of node selector requirements by node's labels. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. 
type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchFields: description: A list of node selector requirements by node's fields. items: description: |- A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: The label key that the selector applies to. type: string operator: description: |- Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: |- An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-type: atomic required: - nodeSelectorTerms type: object x-kubernetes-map-type: atomic type: object podAffinity: description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). 
properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. 
null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. 
The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object podAntiAffinity: description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). properties: preferredDuringSchedulingIgnoredDuringExecution: description: |- The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and subtracting "weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. items: description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) properties: podAffinityTerm: description: Required. A pod affinity term, associated with the corresponding weight. properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". 
The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
type: string required: - topologyKey type: object weight: description: |- weight associated with matching the corresponding podAffinityTerm, in the range 1-100. format: int32 type: integer required: - podAffinityTerm - weight type: object type: array x-kubernetes-list-type: atomic requiredDuringSchedulingIgnoredDuringExecution: description: |- If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. items: description: |- Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running properties: labelSelector: description: |- A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic matchLabelKeys: description: |- MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic mismatchLabelKeys: description: |- MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. 
Also, mismatchLabelKeys cannot be set when labelSelector isn't set. items: type: string type: array x-kubernetes-list-type: atomic namespaceSelector: description: |- A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. properties: matchExpressions: description: matchExpressions is a list of label selector requirements. The requirements are ANDed. items: description: |- A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. properties: key: description: key is the label key that the selector applies to. type: string operator: description: |- operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. type: string values: description: |- values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. items: type: string type: array x-kubernetes-list-type: atomic required: - key - operator type: object type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. type: object type: object x-kubernetes-map-type: atomic namespaces: description: |- namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". items: type: string type: array x-kubernetes-list-type: atomic topologyKey: description: |- This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. type: string required: - topologyKey type: object type: array x-kubernetes-list-type: atomic type: object type: object imagePullSecrets: description: If specified, the pod's imagePullSecrets items: description: |- LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. properties: name: default: "" description: |- Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string type: object x-kubernetes-map-type: atomic type: array x-kubernetes-list-map-keys: - name x-kubernetes-list-type: map nodeSelector: additionalProperties: type: string description: |- NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ type: object priorityClassName: description: If specified, the pod's priorityClassName. type: string resources: description: |- If specified, the pod's resource requirements. These values override the global resource configuration flags. 
Note that when only specifying resource limits, ensure they are greater than or equal to the corresponding global resource requests configured via controller flags (--acme-http01-solver-resource-request-cpu, --acme-http01-solver-resource-request-memory). Kubernetes will reject pod creation if limits are lower than requests, causing challenge failures. properties: limits: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object requests: additionalProperties: anyOf: - type: integer - type: string pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true description: |- Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to the global values configured via controller flags. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ type: object type: object securityContext: description: If specified, the pod's security context properties: fsGroup: description: |- A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. 
format: int64 type: integer fsGroupChangePolicy: description: |- fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. type: string runAsGroup: description: |- The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer runAsNonRoot: description: |- Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. type: boolean runAsUser: description: |- The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. format: int64 type: integer seLinuxOptions: description: |- The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. properties: level: description: Level is SELinux level label that applies to the container. type: string role: description: Role is a SELinux role label that applies to the container. type: string type: description: Type is a SELinux type label that applies to the container. type: string user: description: User is a SELinux user label that applies to the container. type: string type: object seccompProfile: description: |- The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. properties: localhostProfile: description: |- localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type: string type: description: |- type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. type: string required: - type type: object supplementalGroups: description: |- A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. 
items: format: int64 type: integer type: array x-kubernetes-list-type: atomic sysctls: description: |- Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. items: description: Sysctl defines a kernel parameter to be set properties: name: description: Name of a property to set type: string value: description: Value of a property to set type: string required: - name - value type: object type: array x-kubernetes-list-type: atomic type: object serviceAccountName: description: If specified, the pod's service account type: string tolerations: description: If specified, the pod's tolerations. items: description: |- The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. properties: effect: description: |- Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. type: string key: description: |- Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. type: string operator: description: |- Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. type: string tolerationSeconds: description: |- TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. 
format: int64 type: integer value: description: |- Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. type: string type: object type: array x-kubernetes-list-type: atomic type: object type: object serviceType: description: |- Optional service type for Kubernetes solver service. Supported values are NodePort or ClusterIP. If unset, defaults to NodePort. type: string type: object type: object selector: description: |- Selector selects a set of DNSNames on the Certificate resource that should be solved using this challenge solver. If not specified, the solver will be treated as the 'default' solver with the lowest priority, i.e. if any other solver has a more specific match, it will be used instead. properties: dnsNames: description: |- List of DNSNames that this solver will be used to solve. If specified and a match is found, a dnsNames selector will take precedence over a dnsZones selector. If multiple solvers match with the same dnsNames value, the solver with the most matching labels in matchLabels will be selected. If neither has more matches, the solver defined earlier in the list will be selected. items: type: string type: array x-kubernetes-list-type: atomic dnsZones: description: |- List of DNSZones that this solver will be used to solve. The most specific DNS zone match specified here will take precedence over other DNS zone matches, so a solver specifying sys.example.com will be selected over one specifying example.com for the domain www.sys.example.com. If multiple solvers match with the same dnsZones value, the solver with the most matching labels in matchLabels will be selected. If neither has more matches, the solver defined earlier in the list will be selected. 
items: type: string type: array x-kubernetes-list-type: atomic matchLabels: additionalProperties: type: string description: |- A label selector that is used to refine the set of certificates that this challenge solver will apply to. type: object type: object type: object type: array x-kubernetes-list-type: atomic required: - privateKeySecretRef - server type: object ca: description: |- CA configures this issuer to sign certificates using a signing CA keypair stored in a Secret resource. This is used to build internal PKIs that are managed by cert-manager. properties: crlDistributionPoints: description: |- The CRL distribution points is an X.509 v3 certificate extension which identifies the location of the CRL from which the revocation of this certificate can be checked. If not set, certificates will be issued without distribution points set. items: type: string type: array x-kubernetes-list-type: atomic issuingCertificateURLs: description: |- IssuingCertificateURLs is a list of URLs which this issuer should embed into certificates it creates. See https://www.rfc-editor.org/rfc/rfc5280#section-4.2.2.1 for more details. As an example, such a URL might be "http://ca.domain.com/ca.crt". items: type: string type: array x-kubernetes-list-type: atomic ocspServers: description: |- The OCSP server list is an X.509 v3 extension that defines a list of URLs of OCSP responders. The OCSP responders can be queried for the revocation status of an issued certificate. If not set, the certificate will be issued with no OCSP servers set. For example, an OCSP server URL could be "http://ocsp.int-x3.letsencrypt.org". items: type: string type: array x-kubernetes-list-type: atomic secretName: description: |- SecretName is the name of the secret used to sign Certificates issued by this Issuer. 
type: string required: - secretName type: object selfSigned: description: |- SelfSigned configures this issuer to 'self sign' certificates using the private key used to create the CertificateRequest object. properties: crlDistributionPoints: description: |- The CRL distribution points is an X.509 v3 certificate extension which identifies the location of the CRL from which the revocation of this certificate can be checked. If not set certificate will be issued without CDP. Values are strings. items: type: string type: array x-kubernetes-list-type: atomic type: object vault: description: |- Vault configures this issuer to sign certificates using a HashiCorp Vault PKI backend. properties: auth: description: Auth configures how cert-manager authenticates with the Vault server. properties: appRole: description: |- AppRole authenticates with Vault using the App Role auth mechanism, with the role and secret stored in a Kubernetes Secret resource. properties: path: description: |- Path where the App Role authentication backend is mounted in Vault, e.g: "approle" type: string roleId: description: |- RoleID configured in the App Role authentication backend when setting up the authentication backend in Vault. type: string secretRef: description: |- Reference to a key in a Secret that contains the App Role secret used to authenticate with Vault. The `key` field must be specified and denotes which entry within the Secret resource is used as the app role secret. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object required: - path - roleId - secretRef type: object clientCertificate: description: |- ClientCertificate authenticates with Vault by presenting a client certificate during the request's TLS handshake. Works only when using HTTPS protocol. properties: mountPath: description: |- The Vault mountPath here is the mount path to use when authenticating with Vault. For example, setting a value to `/v1/auth/foo`, will use the path `/v1/auth/foo/login` to authenticate with Vault. If unspecified, the default value "/v1/auth/cert" will be used. type: string name: description: |- Name of the certificate role to authenticate against. If not set, matching any certificate role, if available. type: string secretName: description: |- Reference to Kubernetes Secret of type "kubernetes.io/tls" (hence containing tls.crt and tls.key) used to authenticate to Vault using TLS client authentication. type: string type: object kubernetes: description: |- Kubernetes authenticates with Vault by passing the ServiceAccount token stored in the named Secret resource to the Vault server. properties: mountPath: description: |- The Vault mountPath here is the mount path to use when authenticating with Vault. For example, setting a value to `/v1/auth/foo`, will use the path `/v1/auth/foo/login` to authenticate with Vault. If unspecified, the default value "/v1/auth/kubernetes" will be used. type: string role: description: |- A required field containing the Vault Role to assume. A Role binds a Kubernetes ServiceAccount with a set of Vault policies. type: string secretRef: description: |- The required Secret field containing a Kubernetes ServiceAccount JWT used for authenticating with Vault. Use of 'ambient credentials' is not supported. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. 
Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object serviceAccountRef: description: |- A reference to a service account that will be used to request a bound token (also known as "projected token"). Compared to using "secretRef", using this field means that you don't rely on statically bound tokens. To use this field, you must configure an RBAC rule to let cert-manager request a token. properties: audiences: description: |- TokenAudiences is an optional list of extra audiences to include in the token passed to Vault. The default token consisting of the issuer's namespace and name is always included. items: type: string type: array x-kubernetes-list-type: atomic name: description: Name of the ServiceAccount used to request a token. type: string required: - name type: object required: - role type: object tokenSecretRef: description: TokenSecretRef authenticates with Vault by presenting a token. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object type: object caBundle: description: |- Base64-encoded bundle of PEM CAs which will be used to validate the certificate chain presented by Vault. Only used if using HTTPS to connect to Vault and ignored for HTTP connections. Mutually exclusive with CABundleSecretRef. If neither CABundle nor CABundleSecretRef are defined, the certificate bundle in the cert-manager controller container is used to validate the TLS connection. 
format: byte type: string caBundleSecretRef: description: |- Reference to a Secret containing a bundle of PEM-encoded CAs to use when verifying the certificate chain presented by Vault when using HTTPS. Mutually exclusive with CABundle. If neither CABundle nor CABundleSecretRef are defined, the certificate bundle in the cert-manager controller container is used to validate the TLS connection. If no key for the Secret is specified, cert-manager will default to 'ca.crt'. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object clientCertSecretRef: description: |- Reference to a Secret containing a PEM-encoded Client Certificate to use when the Vault server requires mTLS. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object clientKeySecretRef: description: |- Reference to a Secret containing a PEM-encoded Client Private Key to use when the Vault server requires mTLS. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object namespace: description: |- Name of the vault namespace. 
Namespaces is a set of features within Vault Enterprise that allows Vault environments to support Secure Multi-tenancy. e.g: "ns1" More about namespaces can be found here https://www.vaultproject.io/docs/enterprise/namespaces type: string path: description: |- Path is the mount path of the Vault PKI backend's `sign` endpoint, e.g: "my_pki_mount/sign/my-role-name". type: string server: description: 'Server is the connection address for the Vault server, e.g: "https://vault.example.com:8200".' type: string serverName: description: |- ServerName is used to verify the hostname on the returned certificates by the Vault server. type: string required: - auth - path - server type: object venafi: description: |- Venafi configures this issuer to sign certificates using a Venafi TPP or Venafi Cloud policy zone. properties: cloud: description: |- Cloud specifies the Venafi cloud configuration settings. Only one of TPP or Cloud may be specified. properties: apiTokenSecretRef: description: APITokenSecretRef is a secret key selector for the Venafi Cloud API token. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object url: description: |- URL is the base URL for Venafi Cloud. Defaults to "https://api.venafi.cloud/". type: string required: - apiTokenSecretRef type: object tpp: description: |- TPP specifies Trust Protection Platform configuration settings. Only one of TPP or Cloud may be specified. properties: caBundle: description: |- Base64-encoded bundle of PEM CAs which will be used to validate the certificate chain presented by the TPP server. Only used if using HTTPS; ignored for HTTP. 
If undefined, the certificate bundle in the cert-manager controller container is used to validate the chain. format: byte type: string caBundleSecretRef: description: |- Reference to a Secret containing a base64-encoded bundle of PEM CAs which will be used to validate the certificate chain presented by the TPP server. Only used if using HTTPS; ignored for HTTP. Mutually exclusive with CABundle. If neither CABundle nor CABundleSecretRef is defined, the certificate bundle in the cert-manager controller container is used to validate the TLS connection. properties: key: description: |- The key of the entry in the Secret resource's `data` field to be used. Some instances of this field may be defaulted, in others it may be required. type: string name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object credentialsRef: description: |- CredentialsRef is a reference to a Secret containing the Venafi TPP API credentials. The secret must contain the key 'access-token' for the Access Token Authentication, or two keys, 'username' and 'password' for the API Keys Authentication. properties: name: description: |- Name of the resource being referred to. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names type: string required: - name type: object url: description: |- URL is the base URL for the vedsdk endpoint of the Venafi TPP instance, for example: "https://tpp.example.com/vedsdk". type: string required: - credentialsRef - url type: object zone: description: |- Zone is the Venafi Policy Zone to use for this issuer. All requests made to the Venafi platform will be restricted by the named zone policy. This field is required. type: string required: - zone type: object type: object status: description: Status of the Issuer. This is set and managed automatically. 
properties: acme: description: |- ACME specific status options. This field should only be set if the Issuer is configured to use an ACME server to issue certificates. properties: lastPrivateKeyHash: description: |- LastPrivateKeyHash is a hash of the private key associated with the latest registered ACME account, in order to track changes made to registered account associated with the Issuer type: string lastRegisteredEmail: description: |- LastRegisteredEmail is the email associated with the latest registered ACME account, in order to track changes made to registered account associated with the Issuer type: string uri: description: |- URI is the unique account identifier, which can also be used to retrieve account details from the CA type: string type: object conditions: description: |- List of status conditions to indicate the status of a CertificateRequest. Known condition types are `Ready`. items: description: IssuerCondition contains condition information for an Issuer. properties: lastTransitionTime: description: |- LastTransitionTime is the timestamp corresponding to the last status change of this condition. format: date-time type: string message: description: |- Message is a human readable description of the details of the last transition, complementing reason. type: string observedGeneration: description: |- If set, this represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Issuer. format: int64 type: integer reason: description: |- Reason is a brief machine readable explanation for the condition's last transition. type: string status: description: Status of the condition, one of (`True`, `False`, `Unknown`). enum: - "True" - "False" - Unknown type: string type: description: Type of the condition, known values are (`Ready`). 
type: string required: - status - type type: object type: array x-kubernetes-list-map-keys: - type x-kubernetes-list-type: map type: object required: - spec type: object served: true storage: true subresources: status: {} --- # Source: cert-manager/templates/cainjector-serviceaccount.yaml apiVersion: v1 kind: ServiceAccount automountServiceAccountToken: true metadata: name: cert-manager-cainjector namespace: cert-manager labels: app: cainjector app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" app.kubernetes.io/version: "v1.19.2" --- # Source: cert-manager/templates/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount automountServiceAccountToken: true metadata: name: cert-manager namespace: cert-manager labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" --- # Source: cert-manager/templates/webhook-serviceaccount.yaml apiVersion: v1 kind: ServiceAccount automountServiceAccountToken: true metadata: name: cert-manager-webhook namespace: cert-manager labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" --- # Source: cert-manager/templates/cainjector-rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-cainjector labels: app: cainjector app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["cert-manager.io"] resources: ["certificates"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["events"] verbs: ["get", "create", "update", "patch"] - apiGroups: ["admissionregistration.k8s.io"] resources: 
["validatingwebhookconfigurations", "mutatingwebhookconfigurations"] verbs: ["get", "list", "watch", "update", "patch"] - apiGroups: ["apiregistration.k8s.io"] resources: ["apiservices"] verbs: ["get", "list", "watch", "update", "patch"] - apiGroups: ["apiextensions.k8s.io"] resources: ["customresourcedefinitions"] verbs: ["get", "list", "watch", "update", "patch"] --- # Source: cert-manager/templates/rbac.yaml # Issuer controller role apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-controller-issuers labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["cert-manager.io"] resources: ["issuers", "issuers/status"] verbs: ["update", "patch"] - apiGroups: ["cert-manager.io"] resources: ["issuers"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch", "create", "update", "delete"] - apiGroups: [""] resources: ["events"] verbs: ["create", "patch"] --- # Source: cert-manager/templates/rbac.yaml # ClusterIssuer controller role apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-controller-clusterissuers labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["cert-manager.io"] resources: ["clusterissuers", "clusterissuers/status"] verbs: ["update", "patch"] - apiGroups: ["cert-manager.io"] resources: ["clusterissuers"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch", "create", "update", "delete"] - apiGroups: [""] resources: ["events"] verbs: ["create", "patch"] --- # Source: cert-manager/templates/rbac.yaml # Certificates controller role apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole 
metadata: name: cert-manager-controller-certificates labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["cert-manager.io"] resources: ["certificates", "certificates/status", "certificaterequests", "certificaterequests/status"] verbs: ["update", "patch"] - apiGroups: ["cert-manager.io"] resources: ["certificates", "certificaterequests", "clusterissuers", "issuers"] verbs: ["get", "list", "watch"] # We require these rules to support users with the OwnerReferencesPermissionEnforcement # admission controller enabled: # https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement - apiGroups: ["cert-manager.io"] resources: ["certificates/finalizers", "certificaterequests/finalizers"] verbs: ["update"] - apiGroups: ["acme.cert-manager.io"] resources: ["orders"] verbs: ["create", "delete", "get", "list", "watch"] - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch", "create", "update", "delete", "patch"] - apiGroups: [""] resources: ["events"] verbs: ["create", "patch"] --- # Source: cert-manager/templates/rbac.yaml # Orders controller role apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-controller-orders labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["acme.cert-manager.io"] resources: ["orders", "orders/status"] verbs: ["update", "patch"] - apiGroups: ["acme.cert-manager.io"] resources: ["orders", "challenges"] verbs: ["get", "list", "watch"] - apiGroups: ["cert-manager.io"] resources: ["clusterissuers", "issuers"] verbs: ["get", "list", "watch"] - apiGroups: ["acme.cert-manager.io"] resources: ["challenges"] verbs: ["create", "delete"] # We require these 
rules to support users with the OwnerReferencesPermissionEnforcement # admission controller enabled: # https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement - apiGroups: ["acme.cert-manager.io"] resources: ["orders/finalizers"] verbs: ["update"] - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["events"] verbs: ["create", "patch"] --- # Source: cert-manager/templates/rbac.yaml # Challenges controller role apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-controller-challenges labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rules: # Use to update challenge resource status - apiGroups: ["acme.cert-manager.io"] resources: ["challenges", "challenges/status"] verbs: ["update", "patch"] # Used to watch challenge resources - apiGroups: ["acme.cert-manager.io"] resources: ["challenges"] verbs: ["get", "list", "watch"] # Used to watch challenges, issuer and clusterissuer resources - apiGroups: ["cert-manager.io"] resources: ["issuers", "clusterissuers"] verbs: ["get", "list", "watch"] # Need to be able to retrieve ACME account private key to complete challenges - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch"] # Used to create events - apiGroups: [""] resources: ["events"] verbs: ["create", "patch"] # HTTP01 rules - apiGroups: [""] resources: ["pods", "services"] verbs: ["get", "list", "watch", "create", "delete"] - apiGroups: ["networking.k8s.io"] resources: ["ingresses"] verbs: ["get", "list", "watch", "create", "delete", "update"] - apiGroups: ["gateway.networking.k8s.io"] resources: ["httproutes"] verbs: ["get", "list", "watch", "create", "delete", "update"] # We require the ability to specify a custom hostname when we are creating # new ingress resources. 
# See: https://github.com/openshift/origin/blob/21f191775636f9acadb44fa42beeb4f75b255532/pkg/route/apiserver/admission/ingress_admission.go#L84-L148 - apiGroups: ["route.openshift.io"] resources: ["routes/custom-host"] verbs: ["create"] # We require these rules to support users with the OwnerReferencesPermissionEnforcement # admission controller enabled: # https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement - apiGroups: ["acme.cert-manager.io"] resources: ["challenges/finalizers"] verbs: ["update"] # DNS01 rules (duplicated above) - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch"] --- # Source: cert-manager/templates/rbac.yaml # ingress-shim controller role apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-controller-ingress-shim labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["cert-manager.io"] resources: ["certificates", "certificaterequests"] verbs: ["create", "update", "delete"] - apiGroups: ["cert-manager.io"] resources: ["certificates", "certificaterequests", "issuers", "clusterissuers"] verbs: ["get", "list", "watch"] - apiGroups: ["networking.k8s.io"] resources: ["ingresses"] verbs: ["get", "list", "watch"] # We require these rules to support users with the OwnerReferencesPermissionEnforcement # admission controller enabled: # https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement - apiGroups: ["networking.k8s.io"] resources: ["ingresses/finalizers"] verbs: ["update"] - apiGroups: ["gateway.networking.k8s.io"] resources: ["gateways", "httproutes"] verbs: ["get", "list", "watch"] - apiGroups: ["gateway.networking.k8s.io"] resources: ["gateways/finalizers", "httproutes/finalizers"] verbs: ["update"] - apiGroups: [""] 
resources: ["events"] verbs: ["create", "patch"] --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-cluster-view labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true" rules: - apiGroups: ["cert-manager.io"] resources: ["clusterissuers"] verbs: ["get", "list", "watch"] --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-view labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rbac.authorization.k8s.io/aggregate-to-view: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true" rules: - apiGroups: ["cert-manager.io"] resources: ["certificates", "certificaterequests", "issuers"] verbs: ["get", "list", "watch"] - apiGroups: ["acme.cert-manager.io"] resources: ["challenges", "orders"] verbs: ["get", "list", "watch"] --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-edit labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-admin: "true" rules: - apiGroups: ["cert-manager.io"] resources: ["certificates", "certificaterequests", "issuers"] verbs: ["create", "delete", "deletecollection", "patch", "update"] - apiGroups: ["cert-manager.io"] resources: ["certificates/status"] verbs: 
["update"] - apiGroups: ["acme.cert-manager.io"] resources: ["challenges", "orders"] verbs: ["create", "delete", "deletecollection", "patch", "update"] --- # Source: cert-manager/templates/rbac.yaml # Permission to approve CertificateRequests referencing cert-manager.io Issuers and ClusterIssuers apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-controller-approve:cert-manager-io labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cert-manager" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["cert-manager.io"] resources: ["signers"] verbs: ["approve"] resourceNames: - "issuers.cert-manager.io/*" - "clusterissuers.cert-manager.io/*" --- # Source: cert-manager/templates/rbac.yaml # Permission to: # - Update and sign CertificateSigningRequests referencing cert-manager.io Issuers and ClusterIssuers # - Perform SubjectAccessReviews to test whether users are able to reference Namespaced Issuers apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cert-manager-controller-certificatesigningrequests labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cert-manager" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["certificates.k8s.io"] resources: ["certificatesigningrequests"] verbs: ["get", "list", "watch", "update"] - apiGroups: ["certificates.k8s.io"] resources: ["certificatesigningrequests/status"] verbs: ["update", "patch"] - apiGroups: ["certificates.k8s.io"] resources: ["signers"] resourceNames: ["issuers.cert-manager.io/*", "clusterissuers.cert-manager.io/*"] verbs: ["sign"] - apiGroups: ["authorization.k8s.io"] resources: ["subjectaccessreviews"] verbs: ["create"] --- # Source: cert-manager/templates/webhook-rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: 
cert-manager-webhook:subjectaccessreviews labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["authorization.k8s.io"] resources: ["subjectaccessreviews"] verbs: ["create"] --- # Source: cert-manager/templates/cainjector-rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-cainjector labels: app: cainjector app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-cainjector subjects: - name: cert-manager-cainjector namespace: cert-manager kind: ServiceAccount --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-controller-issuers labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-controller-issuers subjects: - name: cert-manager namespace: cert-manager kind: ServiceAccount --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-controller-clusterissuers labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-controller-clusterissuers subjects: - name: cert-manager namespace: cert-manager kind: ServiceAccount --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: 
ClusterRoleBinding metadata: name: cert-manager-controller-certificates labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-controller-certificates subjects: - name: cert-manager namespace: cert-manager kind: ServiceAccount --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-controller-orders labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-controller-orders subjects: - name: cert-manager namespace: cert-manager kind: ServiceAccount --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-controller-challenges labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-controller-challenges subjects: - name: cert-manager namespace: cert-manager kind: ServiceAccount --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-controller-ingress-shim labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-controller-ingress-shim subjects: - name: cert-manager namespace: cert-manager kind: 
ServiceAccount --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-controller-approve:cert-manager-io labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cert-manager" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-controller-approve:cert-manager-io subjects: - name: cert-manager namespace: cert-manager kind: ServiceAccount --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-controller-certificatesigningrequests labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cert-manager" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-controller-certificatesigningrequests subjects: - name: cert-manager namespace: cert-manager kind: ServiceAccount --- # Source: cert-manager/templates/webhook-rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cert-manager-webhook:subjectaccessreviews labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cert-manager-webhook:subjectaccessreviews subjects: - kind: ServiceAccount name: cert-manager-webhook namespace: cert-manager --- # Source: cert-manager/templates/cainjector-rbac.yaml # leader election rules apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: cert-manager-cainjector:leaderelection namespace: kube-system labels: app: cainjector app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager 
app.kubernetes.io/component: "cainjector" app.kubernetes.io/version: "v1.19.2" rules: # Used for leader election by the controller # cert-manager-cainjector-leader-election is used by the CertificateBased injector controller # see cmd/cainjector/start.go#L113 # cert-manager-cainjector-leader-election-core is used by the SecretBased injector controller # see cmd/cainjector/start.go#L137 - apiGroups: ["coordination.k8s.io"] resources: ["leases"] resourceNames: ["cert-manager-cainjector-leader-election", "cert-manager-cainjector-leader-election-core"] verbs: ["get", "update", "patch"] - apiGroups: ["coordination.k8s.io"] resources: ["leases"] verbs: ["create"] --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: cert-manager:leaderelection namespace: kube-system labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: ["coordination.k8s.io"] resources: ["leases"] resourceNames: ["cert-manager-controller"] verbs: ["get", "update", "patch"] - apiGroups: ["coordination.k8s.io"] resources: ["leases"] verbs: ["create"] --- # Source: cert-manager/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: cert-manager-tokenrequest namespace: cert-manager labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: [""] resources: ["serviceaccounts/token"] resourceNames: ["cert-manager"] verbs: ["create"] --- # Source: cert-manager/templates/webhook-rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: cert-manager-webhook:dynamic-serving namespace: cert-manager labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager 
app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" rules: - apiGroups: [""] resources: ["secrets"] resourceNames: - 'cert-manager-webhook-ca' verbs: ["get", "list", "watch", "update"] # It's not possible to grant CREATE permission on a single resourceName. - apiGroups: [""] resources: ["secrets"] verbs: ["create"] --- # Source: cert-manager/templates/cainjector-rbac.yaml # grant cert-manager permission to manage the leaderelection configmap in the # leader election namespace apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: cert-manager-cainjector:leaderelection namespace: kube-system labels: app: cainjector app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: cert-manager-cainjector:leaderelection subjects: - kind: ServiceAccount name: cert-manager-cainjector namespace: cert-manager --- # Source: cert-manager/templates/rbac.yaml # grant cert-manager permission to manage the leaderelection configmap in the # leader election namespace apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: cert-manager:leaderelection namespace: kube-system labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: cert-manager:leaderelection subjects: - kind: ServiceAccount name: cert-manager namespace: cert-manager --- # Source: cert-manager/templates/rbac.yaml # grant cert-manager permission to create tokens for the serviceaccount apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: cert-manager-tokenrequest namespace: cert-manager labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager 
app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: cert-manager-tokenrequest subjects: - kind: ServiceAccount name: cert-manager namespace: cert-manager --- # Source: cert-manager/templates/webhook-rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: cert-manager-webhook:dynamic-serving namespace: cert-manager labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: cert-manager-webhook:dynamic-serving subjects: - kind: ServiceAccount name: cert-manager-webhook namespace: cert-manager --- # Source: cert-manager/templates/cainjector-service.yaml apiVersion: v1 kind: Service metadata: name: cert-manager-cainjector namespace: cert-manager labels: app: cainjector app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" app.kubernetes.io/version: "v1.19.2" spec: type: ClusterIP ports: - protocol: TCP port: 9402 name: http-metrics selector: app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" --- # Source: cert-manager/templates/service.yaml apiVersion: v1 kind: Service metadata: name: cert-manager namespace: cert-manager labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" spec: type: ClusterIP ports: - protocol: TCP port: 9402 name: tcp-prometheus-servicemonitor targetPort: http-metrics selector: app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" --- # Source: cert-manager/templates/webhook-service.yaml apiVersion: v1 kind: Service metadata: name: 
cert-manager-webhook namespace: cert-manager labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" spec: type: ClusterIP ports: - name: https port: 443 protocol: TCP targetPort: "https" - name: metrics port: 9402 protocol: TCP targetPort: "http-metrics" selector: app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" --- # Source: cert-manager/templates/cainjector-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: cert-manager-cainjector namespace: cert-manager labels: app: cainjector app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" app.kubernetes.io/version: "v1.19.2" spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" template: metadata: labels: app: cainjector app.kubernetes.io/name: cainjector app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "cainjector" app.kubernetes.io/version: "v1.19.2" annotations: prometheus.io/path: "/metrics" prometheus.io/scrape: 'true' prometheus.io/port: '9402' spec: serviceAccountName: cert-manager-cainjector enableServiceLinks: false securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cert-manager-cainjector image: "quay.io/jetstack/cert-manager-cainjector:v1.19.2" imagePullPolicy: IfNotPresent args: - --v=2 - --leader-election-namespace=kube-system ports: - containerPort: 9402 name: http-metrics protocol: TCP env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true nodeSelector: kubernetes.io/os: "linux" --- # Source: cert-manager/templates/deployment.yaml apiVersion: apps/v1 kind: 
Deployment metadata: name: cert-manager namespace: cert-manager labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" template: metadata: labels: app: cert-manager app.kubernetes.io/name: cert-manager app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "controller" app.kubernetes.io/version: "v1.19.2" annotations: prometheus.io/path: "/metrics" prometheus.io/scrape: 'true' prometheus.io/port: '9402' spec: serviceAccountName: cert-manager enableServiceLinks: false securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cert-manager-controller image: "quay.io/jetstack/cert-manager-controller:v1.19.2" imagePullPolicy: IfNotPresent args: - --v=2 - --cluster-resource-namespace=$(POD_NAMESPACE) - --leader-election-namespace=kube-system - --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.19.2 - --max-concurrent-challenges=60 ports: - containerPort: 9402 name: http-metrics protocol: TCP - containerPort: 9403 name: http-healthz protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # LivenessProbe settings are based on those used for the Kubernetes # controller-manager. 
See: # https://github.com/kubernetes/kubernetes/blob/806b30170c61a38fedd54cc9ede4cd6275a1ad3b/cmd/kubeadm/app/util/staticpod/utils.go#L241-L245 livenessProbe: httpGet: port: http-healthz path: /livez scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 successThreshold: 1 failureThreshold: 8 nodeSelector: kubernetes.io/os: "linux" --- # Source: cert-manager/templates/webhook-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: cert-manager-webhook namespace: cert-manager labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" template: metadata: labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" annotations: prometheus.io/path: "/metrics" prometheus.io/scrape: 'true' prometheus.io/port: '9402' spec: serviceAccountName: cert-manager-webhook enableServiceLinks: false securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cert-manager-webhook image: "quay.io/jetstack/cert-manager-webhook:v1.19.2" imagePullPolicy: IfNotPresent args: - --v=2 - --secure-port=10250 - --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) - --dynamic-serving-ca-secret-name=cert-manager-webhook-ca - --dynamic-serving-dns-names=cert-manager-webhook - --dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE) - --dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE).svc ports: - name: https protocol: TCP containerPort: 10250 - name: healthcheck protocol: TCP containerPort: 6080 - containerPort: 9402 name: http-metrics protocol: TCP livenessProbe: httpGet: path: /livez port: healthcheck scheme: HTTP initialDelaySeconds: 60 
periodSeconds: 10 timeoutSeconds: 1 successThreshold: 1 failureThreshold: 3 readinessProbe: httpGet: path: /healthz port: healthcheck scheme: HTTP initialDelaySeconds: 5 periodSeconds: 5 timeoutSeconds: 1 successThreshold: 1 failureThreshold: 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace nodeSelector: kubernetes.io/os: "linux" --- # Source: cert-manager/templates/webhook-mutating-webhook.yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: name: cert-manager-webhook labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" annotations: cert-manager.io/inject-ca-from-secret: "ce**********ca" webhooks: - name: webhook.cert-manager.io rules: - apiGroups: - "cert-manager.io" apiVersions: - "v1" operations: - CREATE resources: - "certificaterequests" admissionReviewVersions: ["v1"] # This webhook only accepts v1 cert-manager resources. # Equivalent matchPolicy ensures that non-v1 resource requests are sent to # this webhook (after the resources have been converted to v1). 
matchPolicy: Equivalent timeoutSeconds: 30 failurePolicy: Fail # Only include 'sideEffects' field in Kubernetes 1.12+ sideEffects: None clientConfig: service: name: cert-manager-webhook namespace: cert-manager path: /mutate --- # Source: cert-manager/templates/webhook-validating-webhook.yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: name: cert-manager-webhook labels: app: webhook app.kubernetes.io/name: webhook app.kubernetes.io/instance: cert-manager app.kubernetes.io/component: "webhook" app.kubernetes.io/version: "v1.19.2" annotations: cert-manager.io/inject-ca-from-secret: "ce**********ca" webhooks: - name: webhook.cert-manager.io namespaceSelector: matchExpressions: - key: cert-manager.io/disable-validation operator: NotIn values: - "true" rules: - apiGroups: - "cert-manager.io" - "acme.cert-manager.io" apiVersions: - "v1" operations: - CREATE - UPDATE resources: - "*/*" admissionReviewVersions: ["v1"] # This webhook only accepts v1 cert-manager resources. # Equivalent matchPolicy ensures that non-v1 resource requests are sent to # this webhook (after the resources have been converted to v1). 
  matchPolicy: Equivalent
  timeoutSeconds: 30
  failurePolicy: Fail
  sideEffects: None
  clientConfig:
    service:
      name: cert-manager-webhook
      namespace: cert-manager
      path: /validate

home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/openstack/
home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/openstack/cr/
home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/kustomizations/
home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/kustomizations/dataplane/

home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/kustomizations/dataplane/99-kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
namespace: openstack
patches:
  - target:
      kind: OpenStackDataPlaneNodeSet
    patch: |-
      - op: replace
        path: /spec/nodes/edpm-compute-0/hostName
        value: "compute-0"
      - op: replace
        path: /spec/nodes/edpm-compute-1/hostName
        value: "compute-1"
      - op: replace
        path: /spec/nodeTemplate/ansible/ansibleVars/neutron_public_interface_name
        value: "eth1"
      - op: replace
        path: /spec/nodes/edpm-compute-0/networks/0/defaultRoute
        value: false
      - op: replace
        path: /spec/nodes/edpm-compute-1/networks/0/defaultRoute
        value: false
      - op: replace
        path: /spec/nodes/edpm-compute-1/ansible/ansibleHost
        value: >-
          192.168.122.101
      - op: replace
        path: /spec/nodes/edpm-compute-1/networks/0/fixedIP
        value: >-
          192.168.122.101
      - op: add
        path: /spec/nodeTemplate/ansible/ansibleVars/edpm_os_net_config_mappings
        value:
          net_config_data_lookup:
            edpm-compute:
              nic2: "eth1"
      - op: add
        path: /spec/nodeTemplate/ansible/ansibleVars/edpm_network_config_debug
        value: true
      - op: add
        path: /spec/env
        value: {}
      - op: add
        path: /spec/env
        value:
          - name: "ANSIBLE_VERBOSITY"
            value: "2"
      - op: replace
        path: /spec/nodeTemplate/ansible/ansibleVars/edpm_network_config_template
        value: |-
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
            - type: interface
              name: nic1
              use_dhcp: true
              mtu: {{ min_viable_mtu }}
            - type: ovs_bridge
              name: {{ neutron_physical_bridge_name }}
              mtu: {{ min_viable_mtu }}
              use_dhcp: false
              dns_servers: {{ ctlplane_dns_nameservers }}
              domain: {{ dns_search_domains }}
              addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
              routes: {{ ctlplane_host_routes }}
              members:
                - type: interface
                  name: nic2
                  mtu: {{ min_viable_mtu }}
                  # force the MAC address of the bridge to this interface
                  primary: true
          {% if edpm_network_config_nmstate | bool %}
                  # this ovs_extra configuration fixes OSPRH-17551, but it will be not needed when FDP-1472 is resolved
                  ovs_extra:
                    - "set interface eth1 external-ids:ovn-egress-iface=true"
          {% endif %}
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
                - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
      - op: replace
        path: /spec/nodeTemplate/ansible/ansibleUser
        value: "zuul"
      - op: replace
        path: /spec/nodeTemplate/ansible/ansibleVars/ctlplane_dns_nameservers
        value:
          - "192.168.122.10"
          - "199.204.44.24"
      - op: add
        path: /spec/nodeTemplate/ansible/ansibleVars/edpm_container_registry_insecure_registries
        value: ["38.102.83.51:5001"]
      - op: add
        path: /spec/nodeTemplate/ansible/ansibleVars/edpm_sshd_allowed_ranges
        value: ["0.0.0.0/0"]
      - op: replace
        path: /spec/nodeTemplate/ansible/ansibleVars/edpm_telemetry_enabled_exporters
        value:
          - "podman_exporter"
          - "openstack_network_exporter"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/kustomizations/controlplane/

home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/kustomizations/controlplane/80-horizon-kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openstack
patches:
  - target:
      kind: OpenStackControlPlane
    patch: |-
      - op: add
        path: /spec/horizon/enabled
        value: true
      - op: add
        path: /spec/horizon/template/memcachedInstance
        value: memcached

home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/kustomizations/controlplane/99-kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
namespace: openstack
patches:
  - target:
      kind: OpenStackControlPlane
    patch: |-
      - op: replace
        path: /spec/dns/template/options
        value: [ { "key": "server", "values": [ "192.168.122.10" ] }, { "key": "no-negcache", "values": [] } ]

home/zuul/zuul-output/logs/ci-framework-data/artifacts/post_infra_fetch_nodes_facts_and_save_the.yml
cifmw_edpm_deploy_extra_vars:
  DATAPLANE_COMPUTE_IP: 192.168.122.100
  DATAPLANE_SINGLE_NODE: 'false'
  DATAPLANE_SSHD_ALLOWED_RANGES: '[''0.0.0.0/0'']'
DATAPLANE_TOTAL_NODES: 2 SSH_KEY_FILE: /home/zuul/.ssh/id_cifw cifmw_edpm_prepare_extra_vars: NETWORK_MTU: 1500 NNCP_DNS_SERVER: 192.168.122.10 NNCP_INTERFACE: ens7 home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_run_hook_without_retry.sh0000644000175000017500000000161115133657456032167 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_run_hook_without_retry.log) 2>&1 export ANSIBLE_CONFIG="/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/ansible.cfg" export ANSIBLE_LOG_PATH="/home/zuul/ci-framework-data/logs/pre_infra_download_needed_tools.log" ansible-playbook -i localhost, -c local -e namespace=openstack -e "@/home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml" -e "cifmw_basedir=/home/zuul/ci-framework-data" -e "step=pre_infra" -e "hook_name=download_needed_tools" -e "playbook_dir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_001_fetch_openshift.sh0000644000175000017500000000032515133657524030501 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_001_fetch_openshift.log) 2>&1 oc login -u kubeadmin -p 123456789 --insecure-skip-tls-verify=true api.crc.testing:6443 home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible-vars.yml0000644000175000017500000241755015133657744025511 0ustar zuulzuul_included_dir: changed: false failed: false stat: atime: 1768906622.6334352 attr_flags: '' attributes: [] block_size: 4096 blocks: 0 charset: binary ctime: 1768906583.763411 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 46197012 isblk: 
false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1768906583.763411 nlink: 2 path: /home/zuul/ci-framework-data/artifacts/parameters pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 120 uid: 1000 version: '3168648319' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true _included_file: changed: false failed: false stat: atime: 1768906623.674463 attr_flags: '' attributes: [] block_size: 4096 blocks: 8 charset: us-ascii checksum: 6d8939fb75e97fe158faa310d709fff99c45d6cf ctime: 1768906582.9373894 dev: 64513 device_type: 0 executable: false exists: true gid: 1000 gr_name: zuul inode: 4366262 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mimetype: text/plain mode: '0600' mtime: 1768906582.7803853 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml pw_name: zuul readable: true rgrp: false roth: false rusr: true size: 280 uid: 1000 version: '1785124229' wgrp: false woth: false writeable: true wusr: true xgrp: false xoth: false xusr: false _parsed_vars: changed: false content: Y2lmbXdfb3BlbnNoaWZ0X2FwaTogYXBpLmNyYy50ZXN0aW5nOjY0NDMKY2lmbXdfb3BlbnNoaWZ0X2NvbnRleHQ6IGRlZmF1bHQvYXBpLWNyYy10ZXN0aW5nOjY0NDMva3ViZWFkbWluCmNpZm13X29wZW5zaGlmdF9rdWJlY29uZmlnOiAvaG9tZS96dXVsLy5jcmMvbWFjaGluZXMvY3JjL2t1YmVjb25maWcKY2lmbXdfb3BlbnNoaWZ0X3Rva2VuOiBzaGEyNTZ+YUNrOVNvTXhBS3pRbl9rWVB2eTFnV19LdlhfTWhWYnAwX3BNMlRuZnV5RQpjaWZtd19vcGVuc2hpZnRfdXNlcjoga3ViZWFkbWluCg== encoding: base64 failed: false source: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml _tmp_dir: changed: true failed: false gid: 10001 group: zuul mode: '0700' owner: zuul path: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/tmp/ansible.1xkebns9 size: 40 state: directory uid: 10001 _yaml_files: changed: false examined: 4 
failed: false files: - atime: 1768906473.7125292 ctime: 1768906470.6654503 dev: 64513 gid: 1000 gr_name: zuul inode: 63057654 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0644' mtime: 1768906470.458445 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml pw_name: zuul rgrp: true roth: true rusr: true size: 20657 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1768906623.6584625 ctime: 1768906583.766411 dev: 64513 gid: 1000 gr_name: zuul inode: 21110197 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0600' mtime: 1768906583.611407 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml pw_name: zuul rgrp: false roth: false rusr: true size: 28355 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1768906543.1623468 ctime: 1768906541.3292987 dev: 64513 gid: 1000 gr_name: zuul inode: 58810635 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0644' mtime: 1768906541.1692946 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml pw_name: zuul rgrp: true roth: true rusr: true size: 9278 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1768906623.674463 ctime: 1768906582.9373894 dev: 64513 gid: 1000 gr_name: zuul inode: 4366262 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0600' mtime: 1768906582.7803853 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml pw_name: zuul rgrp: false roth: false rusr: true size: 280 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false matched: 4 msg: All paths 
examined skipped_paths: {} ansible_all_ipv4_addresses: - 38.102.83.39 ansible_all_ipv6_addresses: - fe80::f816:3eff:fe71:e11f ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ansible_collection_name: null ansible_config_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2026-01-20' day: '20' epoch: '1768906665' epoch_int: '1768906665' hour: '10' iso8601: '2026-01-20T10:57:45Z' iso8601_basic: 20260120T105745596274 iso8601_basic_short: 20260120T105745 iso8601_micro: '2026-01-20T10:57:45.596274Z' minute: '57' month: '01' second: '45' time: '10:57:45' tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' ansible_default_ipv4: address: 38.102.83.39 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:71:e1:1f mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_dependent_role_names: [] ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-26-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 ansible_devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-26-00 model: QEMU 
DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '83883999' sectorsize: 512 size: 40.00 GB start: '2048' uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '83886080' sectorsize: '512' size: 40.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: CentOS ansible_distribution_file_parsed: true ansible_distribution_file_path: /etc/centos-release ansible_distribution_file_variety: CentOS ansible_distribution_major_version: '9' ansible_distribution_release: Stream ansible_distribution_version: '9' ansible_dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 46682 22 SSH_CONNECTION: 38.102.83.114 46682 38.102.83.39 22 USER: 
zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '12' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f ansible_eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] 
tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.39 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe71:e11f prefix: '64' scope: link macaddress: fa:16:3e:71:e1:1f module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.39 all_ipv6_addresses: - fe80::f816:3eff:fe71:e11f ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 crc_ci_bootstrap_instance_default_net_config: mtu: '1500' range: 192.168.122.0/24 router_net: '' transparent: true crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2026-01-20T10:44:55Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. 
hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 2e4cfd41-19cc-4d87-87ce-3666153838d5 hardware_offload_type: null hints: '' id: 97004f9b-ecd9-4295-91be-b58c60a6e487 ip_allocation: immediate mac_address: fa:16:3e:db:79:f1 name: crc-a519b063-d122-47cf-ae3b-7548803df408 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2026-01-20T10:44:55Z' crc_ci_bootstrap_network_name: zuul-ci-net-90366c73 crc_ci_bootstrap_networks_out: compute-0: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.100/24 mac: fa:16:3e:c1:c1:89 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.100/24 mac: 52:54:00:e4:85:d6 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.100/24 mac: 52:54:00:81:02:e8 mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.100/24 mac: 52:54:00:10:b6:c8 mtu: '1496' parent_iface: eth1 vlan: 22 compute-1: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.101/24 mac: fa:16:3e:62:06:49 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.101/24 mac: 52:54:00:1a:ac:e2 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.101/24 mac: 52:54:00:8e:43:4e mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.101/24 mac: 52:54:00:96:51:04 mtu: '1496' parent_iface: eth1 vlan: 22 controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b8:b9:28 mtu: '1500' crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 
192.168.122.10/24 mac: fa:16:3e:db:79:f1 mtu: '1500' internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:75:cf:2c mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:da:18:d2 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:b7:68:1f mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:28Z' description: '' dns_domain: '' id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: true l2_adjacency: true mtu: 1500 name: zuul-ci-net-90366c73 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2026-01-20T10:43:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:36Z' description: '' enable_ndp_proxy: null external_gateway_info: null flavor_id: null id: 6f8c9a88-0252-4b33-9039-3708bccf4928 name: zuul-ci-subnet-router-90366c73 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 1 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2026-01-20T10:43:36Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2026-01-20T10:43:33Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 
2e4cfd41-19cc-4d87-87ce-3666153838d5 ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-90366c73 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2026-01-20T10:43:33Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-90366c73 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-90366c73 date_time: date: '2026-01-20' day: '20' epoch: '1768906665' epoch_int: '1768906665' hour: '10' iso8601: '2026-01-20T10:57:45Z' iso8601_basic: 20260120T105745596274 iso8601_basic_short: 20260120T105745 iso8601_micro: '2026-01-20T10:57:45.596274Z' minute: '57' month: '01' second: '45' time: '10:57:45' tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' default_ipv4: address: 38.102.83.39 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:71:e1:1f mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-26-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-26-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '83883999' sectorsize: 512 size: 40.00 GB start: '2048' uuid: 
22ac9141-3960-4912-b20e-19fc8a328d40 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '83886080' sectorsize: '512' size: 40.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 46682 22 SSH_CONNECTION: 38.102.83.114 46682 38.102.83.39 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '12' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: 
off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.39 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe71:e11f prefix: '64' scope: link macaddress: fa:16:3e:71:e1:1f module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: controller gather_subset: - min hostname: controller hostnqn: 
nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d interfaces: - eth0 - lo is_chroot: false iscsi_iqn: '' kernel: 5.14.0-661.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] 
tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.08 1m: 0.59 5m: 0.22 locally_reachable_ips: ipv4: - 38.102.83.39 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe71:e11f lsb: {} lvm: N/A machine: x86_64 machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 memfree_mb: 3195 memory_mb: nocache: free: 3404 used: 251 real: free: 3195 total: 3655 used: 460 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 3655 module_setup: true mounts: - block_available: 9928916 block_size: 4096 block_total: 10469115 block_used: 540199 device: /dev/vda1 fstype: xfs inode_available: 20917098 inode_total: 20970992 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 40668839936 size_total: 42881495040 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 nodename: controller os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 2 processor_nproc: 2 processor_threads_per_core: 1 processor_vcpus: 2 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 python_version: 3.9.25 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing 
policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+HME4ahJQfJECnlUk3Icgw7DjB45ygINRfee3AcsNVR5rNrzPcoaVTiPZ1YEOGS4KKBJ1Qyzp3CgN+dL12iOs= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIHcCbCXC3if6/1WyJzr5vIzL+Tzi/3I/oQgtmHTAEmym ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQD5hZ02CR6jauFfwvnGyh7Gg7GiN8xU/4woiEx+8xAto75E7Pi9h+8iAczj5rpkBpdIX3G3BSeegzMeog4upoxDVvta9EgXuabnQ49Y7WDm0LPFAPgiFBu/CkrcXHPm6OM5a181eFVk4w9Kf3GDJ9Arh5IdZdAbxXEEdenpbQnlz4hFtl/dGIrohDfCuWmrhq5VMqraCeMpiJ4c2G2iMVgZFQf8LvUICbaySrebir4HAfyv1yWZawS1Nql3bsyHsx9Tf25tbj5CHHs+DhC9NI/UgOmeW8rz3IyrdhqKInbsI/AqSHQPmEAEwzRco8xALMmzjICopKKXB9R++ddv/PAqiZPTrixuYUXRQQyJvx080Visb20dtAZTLYdmY7X1oB8Jgvullh3xFHNeqAu0+7OIeoHl9eCxx2sbk1kEtud1CMuDc5cn7h3ANGHZY3/jP0cUAyel1wQE0olv43z2rzgUpI6+8gKM0edyBLbCbww6/PHtcNkrzxAB3WOaAIzZAI8= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 64 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack zuul_change_list: - watcher-operator ansible_fibre_channel_wwn: [] ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: controller ansible_host: 38.102.83.39 ansible_hostname: controller ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d ansible_interfaces: - eth0 - lo ansible_inventory_sources: - /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml ansible_is_chroot: false 
ansible_iscsi_iqn: '' ansible_kernel: 5.14.0-661.el9.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] 
tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.08 1m: 0.59 5m: 0.22 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.39 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe71:e11f ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 ansible_memfree_mb: 3195 ansible_memory_mb: nocache: free: 3404 used: 251 real: free: 3195 total: 3655 used: 460 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 3655 ansible_mounts: - block_available: 9928916 block_size: 4096 block_total: 10469115 block_used: 540199 device: /dev/vda1 fstype: xfs inode_available: 20917098 inode_total: 20970992 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 40668839936 size_total: 42881495040 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 ansible_nodename: controller ansible_os_family: RedHat ansible_parent_role_names: - cifmw_setup ansible_parent_role_paths: - /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/roles/cifmw_setup ansible_pkg_mgr: dnf ansible_play_batch: &id002 - controller ansible_play_hosts: - controller ansible_play_hosts_all: - compute-0 - compute-1 - controller - crc ansible_play_name: Run ci/playbooks/e2e-collect-logs.yml ansible_play_role_names: &id003 - run_hook - os_must_gather - artifacts - env_op_images - run_hook - cifmw_setup ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: 
true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 2 ansible_processor_nproc: 2 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 2 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.3.1 ansible_python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.25 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_role_name: artifacts ansible_role_names: - cifmw_setup - run_hook - env_op_images - os_must_gather - artifacts ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+HME4ahJQfJECnlUk3Icgw7DjB45ygINRfee3AcsNVR5rNrzPcoaVTiPZ1YEOGS4KKBJ1Qyzp3CgN+dL12iOs= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIHcCbCXC3if6/1WyJzr5vIzL+Tzi/3I/oQgtmHTAEmym ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: 
AAAAB3NzaC1yc2EAAAADAQABAAABgQD5hZ02CR6jauFfwvnGyh7Gg7GiN8xU/4woiEx+8xAto75E7Pi9h+8iAczj5rpkBpdIX3G3BSeegzMeog4upoxDVvta9EgXuabnQ49Y7WDm0LPFAPgiFBu/CkrcXHPm6OM5a181eFVk4w9Kf3GDJ9Arh5IdZdAbxXEEdenpbQnlz4hFtl/dGIrohDfCuWmrhq5VMqraCeMpiJ4c2G2iMVgZFQf8LvUICbaySrebir4HAfyv1yWZawS1Nql3bsyHsx9Tf25tbj5CHHs+DhC9NI/UgOmeW8rz3IyrdhqKInbsI/AqSHQPmEAEwzRco8xALMmzjICopKKXB9R++ddv/PAqiZPTrixuYUXRQQyJvx080Visb20dtAZTLYdmY7X1oB8Jgvullh3xFHNeqAu0+7OIeoHl9eCxx2sbk1kEtud1CMuDc5cn7h3ANGHZY3/jP0cUAyel1wQE0olv43z2rzgUpI6+8gKM0edyBLbCbww6/PHtcNkrzxAB3WOaAIzZAI8= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 64 ansible_user: zuul ansible_user_dir: /home/zuul ansible_user_gecos: '' ansible_user_gid: 1000 ansible_user_id: zuul ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_architecture_repo: /home/zuul/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_basedir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}' cifmw_artifacts_crc_host: api.crc.testing cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_artifacts_crc_sshkey_ed25519: ~/.crc/machines/crc/id_ed25519 cifmw_artifacts_crc_user: core cifmw_artifacts_gather_logs: true cifmw_artifacts_mask_logs: true cifmw_basedir: /home/zuul/ci-framework-data cifmw_build_images_output: {} cifmw_config_certmanager: true cifmw_default_dns_servers: - 1.1.1.1 - 8.8.8.8 
cifmw_deploy_edpm: true cifmw_dlrn_report_result: false cifmw_edpm_deploy_nova_compute_extra_config: '[libvirt] cpu_mode = custom cpu_models = Nehalem ' cifmw_edpm_prepare_kustomizations: - apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namespace: openstack patches: - patch: "apiVersion: core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n telemetry:\n enabled: true\n template:\n \ ceilometer:\n enabled: true\n metricStorage:\n enabled: true\n customMonitoringStack:\n alertmanagerConfig:\n \ disabled: true\n prometheusConfig:\n enableRemoteWriteReceiver: true\n persistentVolumeClaim:\n resources:\n requests:\n \ storage: 20G\n replicas: 1\n scrapeInterval: 30s\n resourceSelector:\n matchLabels:\n service: metricStorage\n retention: 24h" target: kind: OpenStackControlPlane - patch: "apiVersion: core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n telemetry:\n template:\n metricStorage:\n \ monitoringStack: null" target: kind: OpenStackControlPlane - patch: "apiVersion: core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n watcher:\n enabled: true\n template:\n \ decisionengineServiceTemplate:\n customServiceConfig: |\n \ [watcher_cluster_data_model_collectors.compute]\n period = 60\n [watcher_cluster_data_model_collectors.storage]\n period = 60" target: kind: OpenStackControlPlane cifmw_edpm_prepare_skip_crc_storage_creation: true cifmw_edpm_prepare_timeout: 60 cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_env_op_images_dir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}' cifmw_env_op_images_dryrun: false cifmw_env_op_images_file: operator_images.yaml cifmw_extras: - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/multinode-ci.yml' - 
'@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/horizon.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/scenarios/edpm-no-notifications.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/tests/watcher-tempest.yml' cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' 
BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_OS_IMG_TYPE: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: false BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: 
edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' 
CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: false INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 
IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest 
MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: 
config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml 
OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: os**********et SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator tripleo_deploy: 'export REGISTRY_PWD:' cifmw_install_yamls_environment: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator cifmw_installyamls_repos: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_nolog: true cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_context: default/api-crc-testing:6443/kubeadmin cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_password: '12**********89' cifmw_openshift_setup_skip_internal_registry: true cifmw_openshift_setup_skip_internal_registry_tls_verify: true cifmw_openshift_skip_tls_verify: true cifmw_openshift_token: sha256~aCk9SoMxAKzQn_kYPvy1gW_KvX_MhVbp0_pM2TnfuyE cifmw_openshift_user: kubeadmin cifmw_openstack_k8s_operators_org_url: https://github.com/openstack-k8s-operators cifmw_openstack_namespace: openstack cifmw_operator_build_meta_name: openstack-operator cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 
38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_os_must_gather_additional_namespaces: kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko cifmw_os_must_gather_dump_db: ALL cifmw_os_must_gather_host_network: false cifmw_os_must_gather_image: quay.io/openstack-k8s-operators/openstack-must-gather:latest cifmw_os_must_gather_image_push: true cifmw_os_must_gather_image_registry: quay.rdoproject.org/openstack-k8s-operators cifmw_os_must_gather_kubeconfig: '{{ ansible_user_dir }}/.kube/config' cifmw_os_must_gather_namespaces: - '{{ operator_namespace }}' - '{{ cifmw_openstack_namespace }}' - baremetal-operator-system - openshift-machine-api - cert-manager - openshift-nmstate - openshift-marketplace - metallb-system - crc-storage cifmw_os_must_gather_output_dir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}' cifmw_os_must_gather_output_log_dir: '{{ cifmw_os_must_gather_output_dir }}/logs/openstack-must-gather' cifmw_os_must_gather_repo_path: '{{ ansible_user_dir 
}}/src/github.com/openstack-k8s-operators/openstack-must-gather' cifmw_os_must_gather_timeout: 30m cifmw_os_must_gather_volume_percentage: 80 cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cifmw_repo: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_relative: src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_setup_dist_major_version: 9 cifmw_repo_setup_os_release: centos cifmw_run_hook_debug: '{{ (ansible_verbosity | int) >= 2 | bool }}' cifmw_run_test_role: test_operator cifmw_run_tests: true cifmw_status: changed: false failed: false stat: atime: 1768906474.6765542 attr_flags: '' attributes: [] block_size: 4096 blocks: 8 charset: binary ctime: 1768906465.442315 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 16795655 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1768906465.442315 nlink: 21 path: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 4096 uid: 1000 version: '2368198249' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true cifmw_success_flag: changed: false failed: false stat: exists: false cifmw_test_operator_tempest_concurrency: 1 cifmw_test_operator_tempest_exclude_list: 'watcher_tempest_plugin.*client_functional.* watcher_tempest_plugin.tests.scenario.test_execute_strategies.TestExecuteStrategies.test_execute_storage_capacity_balance_strategy watcher_tempest_plugin.*\[.*\breal_load\b.*\].* watcher_tempest_plugin.tests.scenario.test_execute_zone_migration.TestExecuteZoneMigrationStrategy.test_execute_zone_migration_without_destination_host watcher_tempest_plugin.*\[.*\bvolume_migration\b.*\].* ' cifmw_test_operator_tempest_external_plugin: - 
changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_include_list: 'watcher_tempest_plugin.* ' cifmw_test_operator_tempest_namespace: podified-epoxy-centos9 cifmw_test_operator_tempest_registry: 38.102.83.51:5001 cifmw_test_operator_tempest_tempestconf_config: overrides: 'compute.min_microversion 2.56 compute.min_compute_nodes 2 placement.min_microversion 1.29 compute-feature-enabled.live_migration true compute-feature-enabled.block_migration_for_live_migration true service_available.sg_core true telemetry_services.metric_backends prometheus telemetry.disable_ssl_certificate_validation true telemetry.ceilometer_polling_interval 15 optimize.min_microversion 1.0 optimize.max_microversion 1.4 optimize.datasource prometheus optimize.openstack_type podified optimize.proxy_host_address 38.102.83.39 optimize.proxy_host_user zuul optimize.prometheus_host metric-storage-prometheus.openstack.svc optimize.prometheus_ssl_enabled true optimize.prometheus_ssl_cert_dir /etc/prometheus/secrets/combined-ca-bundle optimize.podified_kubeconfig_path /home/zuul/.crc/machines/crc/kubeconfig optimize.podified_namespace openstack optimize.run_continuous_audit_tests true ' cifmw_update_containers: true cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: 38.102.83.51:5001 cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_crc: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 
content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_instance_default_net_config: mtu: '1500' range: 192.168.122.0/24 router_net: '' transparent: true crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2026-01-20T10:44:55Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 2e4cfd41-19cc-4d87-87ce-3666153838d5 hardware_offload_type: null hints: '' id: 97004f9b-ecd9-4295-91be-b58c60a6e487 ip_allocation: immediate mac_address: fa:16:3e:db:79:f1 name: crc-a519b063-d122-47cf-ae3b-7548803df408 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2026-01-20T10:44:55Z' crc_ci_bootstrap_network_name: zuul-ci-net-90366c73 crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: 
config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 crc_ci_bootstrap_networks_out: compute-0: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.100/24 mac: fa:16:3e:c1:c1:89 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.100/24 mac: 52:54:00:e4:85:d6 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.100/24 mac: 52:54:00:81:02:e8 mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.100/24 mac: 52:54:00:10:b6:c8 mtu: '1496' parent_iface: eth1 vlan: 22 compute-1: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.101/24 mac: fa:16:3e:62:06:49 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.101/24 mac: 52:54:00:1a:ac:e2 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.101/24 mac: 52:54:00:8e:43:4e mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.101/24 mac: 52:54:00:96:51:04 mtu: '1496' parent_iface: eth1 vlan: 22 controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b8:b9:28 mtu: '1500' crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:db:79:f1 mtu: '1500' internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:75:cf:2c mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 
172.18.0.5/24 mac: 52:54:00:da:18:d2 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:b7:68:1f mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:28Z' description: '' dns_domain: '' id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: true l2_adjacency: true mtu: 1500 name: zuul-ci-net-90366c73 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2026-01-20T10:43:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:36Z' description: '' enable_ndp_proxy: null external_gateway_info: null flavor_id: null id: 6f8c9a88-0252-4b33-9039-3708bccf4928 name: zuul-ci-subnet-router-90366c73 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 1 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2026-01-20T10:43:36Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2026-01-20T10:43:33Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 2e4cfd41-19cc-4d87-87ce-3666153838d5 ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-90366c73 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] 
subnetpool_id: null tags: [] updated_at: '2026-01-20T10:43:33Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-90366c73 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-90366c73 discovered_interpreter_python: /usr/bin/python3 enable_ramdisk: true environment: - ANSIBLE_LOG_PATH: '{{ ansible_user_dir }}/ci-framework-data/logs/e2e-collect-logs-must-gather.log' fetch_dlrn_hash: false gather_subset: - min group_names: - ungrouped groups: all: - compute-0 - compute-1 - controller - crc computes: - compute-0 - compute-1 ocps: - crc ungrouped: &id001 - controller zuul_unreachable: [] hostvars: compute-0: ansible_all_ipv4_addresses: - 38.102.83.147 ansible_all_ipv6_addresses: - fe80::f816:3eff:fe25:2c21 ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ansible_config_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2026-01-20' day: '20' epoch: '1768905758' epoch_int: '1768905758' hour: '05' iso8601: '2026-01-20T10:42:38Z' iso8601_basic: 20260120T054238543202 iso8601_basic_short: 20260120T054238 iso8601_micro: '2026-01-20T10:42:38.543202Z' minute: '42' month: '01' second: '38' time: 05:42:38 tz: EST tz_dst: EDT tz_offset: '-0500' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' 
ansible_default_ipv4: address: 38.102.83.147 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:25:2c:21 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-36-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 ansible_devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-36-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: CentOS ansible_distribution_file_parsed: true ansible_distribution_file_path: /etc/centos-release ansible_distribution_file_variety: CentOS ansible_distribution_major_version: '9' ansible_distribution_release: Stream ansible_distribution_version: '9' ansible_dns: nameservers: - 199.204.44.24 - 199.204.47.54 search: - novalocal ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus 
DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 46316 22 SSH_CONNECTION: 38.102.83.114 46316 38.102.83.147 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '1' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f ansible_eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] 
tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.147 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe25:2c21 prefix: '64' scope: link macaddress: fa:16:3e:25:2c:21 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.147 all_ipv6_addresses: - fe80::f816:3eff:fe25:2c21 ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 date_time: date: '2026-01-20' day: '20' epoch: '1768905758' epoch_int: '1768905758' hour: '05' iso8601: '2026-01-20T10:42:38Z' iso8601_basic: 20260120T054238543202 iso8601_basic_short: 20260120T054238 iso8601_micro: '2026-01-20T10:42:38.543202Z' minute: '42' month: '01' second: '38' time: 05:42:38 tz: EST tz_dst: EDT tz_offset: '-0500' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' 
default_ipv4: address: 38.102.83.147 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:25:2c:21 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-36-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-36-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 199.204.44.24 - 199.204.47.54 search: - novalocal domain: '' effective_group_id: 1000 effective_user_id: 1000 env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul 
LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 46316 22 SSH_CONNECTION: 38.102.83.114 46316 38.102.83.147 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '1' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' 
tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.147 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe25:2c21 prefix: '64' scope: link macaddress: fa:16:3e:25:2c:21 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: compute-0 gather_subset: - all hostname: compute-0 hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d interfaces: - eth0 - lo is_chroot: false iscsi_iqn: '' kernel: 5.14.0-661.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' 
tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.02 1m: 0.21 5m: 0.07 locally_reachable_ips: ipv4: - 38.102.83.147 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe25:2c21 lsb: {} lvm: N/A machine: x86_64 machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 memfree_mb: 7106 memory_mb: nocache: free: 7314 used: 365 real: free: 7106 total: 7679 used: 573 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 7679 module_setup: true mounts: - block_available: 20341217 block_size: 4096 block_total: 20954875 block_used: 613658 device: /dev/vda1 fstype: xfs inode_available: 41888618 inode_total: 41942512 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83317624832 
size_total: 85831168000 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 nodename: compute-0 os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 8 processor_nproc: 8 processor_threads_per_core: 1 processor_vcpus: 8 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 python_version: 3.9.25 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImXgYRgtVhvHpMswDAh9UlpvtE1pP8PEn0C7uDlhV5zNa4lLLwa9hrNUZaxzn4SgfZQFMldvxjekHdYiRkrnVc= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIIQ3/Z4kVl/AT59/Tv6HVvPhUGgpIZb+Fp7H6BUxNPgr ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5Q8F0s8M9Jon+Lj7prVMGsM10+eKWbur0kefVExM5UwKhTvoBLhRBv97P4RC77BVPJIJVOed6O+/WgH0n++ckHPFc40aPxq8DYHQPYf4zjYU1iX8lN1wWUMfKwwwUb0DMqvTNJeCeiOHoHQ2CUi11sV4G6BR1uemczKMFcj3hgHz1RuVosl14gNqJC7qKzXQQteukA/v88QeBckJmYJ5GDBgONjV5FuF4+2MzQzNsNf+gltZJf/WSFAq8lAxTvkNbl/lrCLoCCKaaz2mBcnbNm+d43ZPVFh6Ww5htzLlHd+ReGyvGTSFFcmUkvxdrMcvsK+x4MMPPHmrayzcYDiRHao4jO98naVN0B/MtgOJ4lIVTCZgZRWkKNCOyeuZVsDhxp/Vwfd6U9FIp5fLJlp9828aFJ0fNJCEBPcJ0OrEEbelI50zGEanlrHou5DH1qVjC7vumBB21AWvdxuWmr1/jQOet/J7AP2qsOZL8NHV/BC5j7bkwFuxQUgSQ2rpIv4k= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 51 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack ansible_fibre_channel_wwn: [] ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: compute-0 ansible_host: 38.102.83.147 ansible_hostname: compute-0 ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d ansible_interfaces: - eth0 - lo ansible_inventory_sources: - /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml ansible_is_chroot: false ansible_iscsi_iqn: '' ansible_kernel: 5.14.0-661.el9.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] 
large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.02 1m: 0.21 5m: 0.07 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.147 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe25:2c21 ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 
ansible_machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 ansible_memfree_mb: 7106 ansible_memory_mb: nocache: free: 7314 used: 365 real: free: 7106 total: 7679 used: 573 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 7679 ansible_mounts: - block_available: 20341217 block_size: 4096 block_total: 20954875 block_used: 613658 device: /dev/vda1 fstype: xfs inode_available: 41888618 inode_total: 41942512 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83317624832 size_total: 85831168000 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 ansible_nodename: compute-0 ansible_os_family: RedHat ansible_pkg_mgr: dnf ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 8 ansible_processor_nproc: 8 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 8 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.3.1 ansible_python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.25 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_run_tags: - all 
ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImXgYRgtVhvHpMswDAh9UlpvtE1pP8PEn0C7uDlhV5zNa4lLLwa9hrNUZaxzn4SgfZQFMldvxjekHdYiRkrnVc= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIIQ3/Z4kVl/AT59/Tv6HVvPhUGgpIZb+Fp7H6BUxNPgr ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQC5Q8F0s8M9Jon+Lj7prVMGsM10+eKWbur0kefVExM5UwKhTvoBLhRBv97P4RC77BVPJIJVOed6O+/WgH0n++ckHPFc40aPxq8DYHQPYf4zjYU1iX8lN1wWUMfKwwwUb0DMqvTNJeCeiOHoHQ2CUi11sV4G6BR1uemczKMFcj3hgHz1RuVosl14gNqJC7qKzXQQteukA/v88QeBckJmYJ5GDBgONjV5FuF4+2MzQzNsNf+gltZJf/WSFAq8lAxTvkNbl/lrCLoCCKaaz2mBcnbNm+d43ZPVFh6Ww5htzLlHd+ReGyvGTSFFcmUkvxdrMcvsK+x4MMPPHmrayzcYDiRHao4jO98naVN0B/MtgOJ4lIVTCZgZRWkKNCOyeuZVsDhxp/Vwfd6U9FIp5fLJlp9828aFJ0fNJCEBPcJ0OrEEbelI50zGEanlrHou5DH1qVjC7vumBB21AWvdxuWmr1/jQOet/J7AP2qsOZL8NHV/BC5j7bkwFuxQUgSQ2rpIv4k= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 51 ansible_user: zuul ansible_user_dir: /home/zuul ansible_user_gecos: '' ansible_user_gid: 1000 ansible_user_id: zuul ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 
2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_architecture_repo: /home/zuul/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_default_dns_servers: - 1.1.1.1 - 8.8.8.8 cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/multinode-ci.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/horizon.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/scenarios/edpm-no-notifications.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/tests/watcher-tempest.yml' cifmw_installyamls_repos: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_nolog: true cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_openstack_k8s_operators_org_url: https://github.com/openstack-k8s-operators cifmw_openstack_namespace: openstack cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 
38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_repo: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_relative: src/github.com/openstack-k8s-operators/ci-framework cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: podified-epoxy-centos9 cifmw_test_operator_tempest_registry: 38.102.83.51:5001 cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: 38.102.83.51:5001 cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: vexxhost crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 
internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '1500' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 discovered_interpreter_python: /usr/bin/python3 enable_ramdisk: true fetch_dlrn_hash: false gather_subset: - all group_names: - computes groups: all: - compute-0 - compute-1 - controller - crc computes: - compute-0 - compute-1 ocps: - crc ungrouped: *id001 zuul_unreachable: [] inventory_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0 inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml inventory_hostname: compute-0 inventory_hostname_short: compute-0 module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: ce914a02-f144-4e95-badb-0391d5aba71b host_id: 5519e7a0ee5dc826795d295efc9c908d171b61deb9bf71b1016f861f interface_ip: 38.102.83.147 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.147 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.147 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__3495a27fa3f7994641c1fe3f418eb5fa4a3ff705 operator_namespace: openstack-operators playbook_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci 
registry_login_enabled: true unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.147 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 
image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: 
default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: ce914a02-f144-4e95-badb-0391d5aba71b host_id: 5519e7a0ee5dc826795d295efc9c908d171b61deb9bf71b1016f861f interface_ip: 38.102.83.147 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.147 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.147 public_ipv6: '' region: RegionOne slot: null push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: main build: 90366c73dd19485aa9d993ddaab6d557 build_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n 
vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null buildset: f20638613fa941389b7ed9d90fc6bce5 buildset_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null change: '320' change_message: "Rabbitmq vhost 
and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n \ spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 child_jobs: [] commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 event_id: 2658e330-f5e9-11f0-9efc-87f0b8025f7f executor: hostname: ze02.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/inventory.yaml log_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/logs result_data_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/results.json src_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/src work_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work items: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: 
rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null job: watcher-operator-validation-epoxy-ocp4-16 jobtags: [] max_attempts: 1 message: UmFiYml0bXEgdmhvc3QgYW5kIHVzZXIgc3VwcG9ydAoKQWRkIG5ldyBtZXNzYWdpbmdCdXMgYW5kIG5vdGlmaWNhdGlvbnNCdXMgaW50ZXJmYWNlcyB0byBob2xkIGNsdXN0ZXIsIHVzZXIgYW5kIHZob3N0IG5hbWVzIGZvciBvcHRpb25hbCB1c2FnZS4NClRoZSBjb250cm9sbGVyIGFkZHMgdGhlc2UgdmFsdWVzIHRvIHRoZSBUcmFuc3BvcnRVUkwgY3JlYXRlIHJlcXVlc3Qgd2hlbiBwcmVzZW50Lg0KDQpBZGRpdGlvbmFsbHksIHdlIG1pZ3JhdGUgUmFiYml0TVEgY2x1c3RlciBuYW1lIHRvIFJhYmJpdE1xIGNvbmZpZyBzdHJ1Y3QgdXNpbmcgRGVmYXVsdFJhYmJpdE1xQ29uZmlnIGZyb20gaW5mcmEtb3BlcmF0b3IgdG8gYXV0b21hdGljYWxseSBwb3B1bGF0ZSB0aGUgbmV3IENsdXN0ZXIgZmllbGQgZnJvbSBsZWdhY3kgUmFiYml0TXFDbHVzdGVyTmFtZS4NCg0KRXhhbXBsZSB1c2FnZToNCg0KYGBgDQogIHNwZWM6DQogICAgbWVzc2FnaW5nQnVzOg0KICAgICAgY2x1c3RlcjogcnBjLXJhYmJpdG1xDQogICAgICB1c2VyOiBycGMtdXNlcg0KICAgICAgdmhvc3Q6IHJwYy12aG9zdA0KICAgIG5vdGlmaWNhdGlvbnNCdXM6DQogICAgICBjbHVzdGVyOiBub3RpZmljYXRpb25zLXJhYmJpdG1xDQogICAgICB1c2VyOiBub3RpZmljYXRpb25zLXVzZXINCiAgICAgIHZob3N0OiBub3RpZmljYXRpb25zLXZob3N0DQpgYGANCg0KSmlyYTogaHR0cHM6Ly9pc3N1ZXMucmVkaGF0LmNvbS9icm93c2UvT1NQUkgtMjM4ODI= patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 pipeline: github-check playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 
trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 playbooks: - path: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/edpm/run.yml roles: - checkout: main checkout_description: playbook branch link_name: ansible/playbook_0/role_0/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_0/ci-framework/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_1/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_1/config/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_2/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_2/zuul-jobs/roles 
- checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_3/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_3/rdo-jobs/roles post_review: false project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: zuul branch commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: zuul branch commit: 43c8ae13d85939e9a3f9cddbe838cbe4616199f7 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: zuul branch commit: 0121df8691096e0883637457925e4142353e35ba name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator 
github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: zuul branch commit: bdf4c9385be5e3e04ff06f67f25d6993db70cf6e name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: main checkout_description: zuul branch commit: 06cd1004cb26b36ba1054ccf7875fad6248762c5 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: zuul branch commit: 74e48015b997b0bb2efc1ad92a6937949f185a0e name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: zuul branch commit: 38e630804dada625f7b015f13f3ac5bb7192f4dd name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: zuul branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true 
short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup github.com/openstack-k8s-operators/watcher-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator checkout: main checkout_description: zuul branch commit: 581f46572d07c53c87a11aa044b02e73f253eea6 name: openstack-k8s-operators/watcher-operator required: false short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: project default branch commit: 691c03cc007bee9934da14cf46c86009616a2aef name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: project default branch commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/pull/320/head resources: {} tenant: rdoproject.org timeout: 10800 topic: null voting: true zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '0' zuul_execution_trusted: 'False' zuul_log_collection: false zuul_success: 'False' zuul_will_retry: 'False' compute-1: ansible_all_ipv4_addresses: - 38.102.83.233 ansible_all_ipv6_addresses: - fe80::f816:3eff:fed1:b2b4 ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_chassis_asset_tag: NA ansible_chassis_serial: NA 
ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ansible_config_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2026-01-20' day: '20' epoch: '1768905758' epoch_int: '1768905758' hour: '05' iso8601: '2026-01-20T10:42:38Z' iso8601_basic: 20260120T054238933528 iso8601_basic_short: 20260120T054238 iso8601_micro: '2026-01-20T10:42:38.933528Z' minute: '42' month: '01' second: '38' time: 05:42:38 tz: EST tz_dst: EDT tz_offset: '-0500' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' ansible_default_ipv4: address: 38.102.83.233 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:d1:b2:b4 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-49-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 ansible_devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-49-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 
22ac9141-3960-4912-b20e-19fc8a328d40 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: CentOS ansible_distribution_file_parsed: true ansible_distribution_file_path: /etc/centos-release ansible_distribution_file_variety: CentOS ansible_distribution_major_version: '9' ansible_distribution_release: Stream ansible_distribution_version: '9' ansible_dns: nameservers: - 199.204.44.24 - 199.204.47.54 search: - novalocal ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 50754 22 SSH_CONNECTION: 38.102.83.114 50754 38.102.83.233 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '1' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f ansible_eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: 
off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.233 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fed1:b2b4 prefix: '64' scope: link macaddress: fa:16:3e:d1:b2:b4 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.233 all_ipv6_addresses: - fe80::f816:3eff:fed1:b2b4 ansible_local: {} apparmor: status: disabled 
architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 date_time: date: '2026-01-20' day: '20' epoch: '1768905758' epoch_int: '1768905758' hour: '05' iso8601: '2026-01-20T10:42:38Z' iso8601_basic: 20260120T054238933528 iso8601_basic_short: 20260120T054238 iso8601_micro: '2026-01-20T10:42:38.933528Z' minute: '42' month: '01' second: '38' time: 05:42:38 tz: EST tz_dst: EDT tz_offset: '-0500' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' default_ipv4: address: 38.102.83.233 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:d1:b2:b4 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-49-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-49-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 
removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 199.204.44.24 - 199.204.47.54 search: - novalocal domain: '' effective_group_id: 1000 effective_user_id: 1000 env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 50754 22 SSH_CONNECTION: 38.102.83.114 50754 38.102.83.233 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '1' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] 
rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.233 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fed1:b2b4 prefix: '64' scope: link macaddress: fa:16:3e:d1:b2:b4 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: compute-1 gather_subset: - all hostname: compute-1 hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d interfaces: - lo - eth0 is_chroot: false iscsi_iqn: '' kernel: 5.14.0-661.el9.x86_64 kernel_version: '#1 SMP 
PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 
broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.04 1m: 0.38 5m: 0.12 locally_reachable_ips: ipv4: - 38.102.83.233 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fed1:b2b4 lsb: {} lvm: N/A machine: x86_64 machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 memfree_mb: 7040 memory_mb: nocache: free: 7248 used: 431 real: free: 7040 total: 7679 used: 639 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 7679 module_setup: true mounts: - block_available: 20341227 block_size: 4096 block_total: 20954875 block_used: 613648 device: /dev/vda1 fstype: xfs inode_available: 41888618 inode_total: 41942512 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83317665792 size_total: 85831168000 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 nodename: compute-1 os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 8 processor_nproc: 8 processor_threads_per_core: 1 processor_vcpus: 8 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 
python_version: 3.9.25 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFhNbYjjeK/oArk7LI0XanEzL0O7pgkf21+cq6QoQ5wbEb5g8C3uy8vUvqvdJBSrbH0ip8mNVOAh0sdh8xx0iOk= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIFJpjSKPikJsHpfPqsw0KgXOHY/fls40YXS3/2usUOaz ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCZYl4khmSSMFZqYE0pbymRw0xsRabp4YkDzSmFiVfyaP7UHC+q6q+POgFEQJNhuYfa6P1PFnIzG58tPOI7IFIuoHc7RKpjo0ghGXsYYCAmY3pW3TtcpNSE5v/+iAD37zJKZJEswxSjjeByERazfj6oCyaWKEP5m9oKYohiMAU8GdyQPiHvmFT4UmmWl0BL+KsiwszcJ/RRzl59M5hlVIT0VW3d1QpEB8WmvxgxaiRGMDOIobkSwxArnabE6IjZdoJFHiuN0JnQuellbUPVEMAV7fK1JA5c8jYZmXSa1QTjjKUVLvGljId6WamT+7+kHcsGoOyMDlaktQnDPwB9F1538fFsSioF8CiLPzSPd0OLmE3Zqg6eQWH8rk8Ox5uQVcGh6AK4yMNMde4GuNcmAmtAEwrzn5PT9BsF2bysDdtpaROdlA4SSyhXZB85irUiLX4/aNsuWaC4iDX/20G3XhDeio7bDHxFlPiT6n+KfWwugRGxcYrUZi4BKkbxBN3aDqE= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 41 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack ansible_fibre_channel_wwn: [] ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: compute-1 ansible_host: 38.102.83.233 ansible_hostname: compute-1 ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d ansible_interfaces: - lo - eth0 ansible_inventory_sources: - 
/var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml ansible_is_chroot: false ansible_iscsi_iqn: '' ansible_kernel: 5.14.0-661.el9.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' 
tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.04 1m: 0.38 5m: 0.12 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.233 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fed1:b2b4 ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 ansible_memfree_mb: 7040 ansible_memory_mb: nocache: free: 7248 used: 431 real: free: 7040 total: 7679 used: 639 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 7679 ansible_mounts: - block_available: 20341227 block_size: 4096 block_total: 20954875 block_used: 613648 device: /dev/vda1 fstype: xfs inode_available: 41888618 inode_total: 41942512 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83317665792 size_total: 85831168000 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 ansible_nodename: compute-1 ansible_os_family: RedHat ansible_pkg_mgr: dnf ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - 
AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 8 ansible_processor_nproc: 8 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 8 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.3.1 ansible_python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.25 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFhNbYjjeK/oArk7LI0XanEzL0O7pgkf21+cq6QoQ5wbEb5g8C3uy8vUvqvdJBSrbH0ip8mNVOAh0sdh8xx0iOk= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIFJpjSKPikJsHpfPqsw0KgXOHY/fls40YXS3/2usUOaz ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZYl4khmSSMFZqYE0pbymRw0xsRabp4YkDzSmFiVfyaP7UHC+q6q+POgFEQJNhuYfa6P1PFnIzG58tPOI7IFIuoHc7RKpjo0ghGXsYYCAmY3pW3TtcpNSE5v/+iAD37zJKZJEswxSjjeByERazfj6oCyaWKEP5m9oKYohiMAU8GdyQPiHvmFT4UmmWl0BL+KsiwszcJ/RRzl59M5hlVIT0VW3d1QpEB8WmvxgxaiRGMDOIobkSwxArnabE6IjZdoJFHiuN0JnQuellbUPVEMAV7fK1JA5c8jYZmXSa1QTjjKUVLvGljId6WamT+7+kHcsGoOyMDlaktQnDPwB9F1538fFsSioF8CiLPzSPd0OLmE3Zqg6eQWH8rk8Ox5uQVcGh6AK4yMNMde4GuNcmAmtAEwrzn5PT9BsF2bysDdtpaROdlA4SSyhXZB85irUiLX4/aNsuWaC4iDX/20G3XhDeio7bDHxFlPiT6n+KfWwugRGxcYrUZi4BKkbxBN3aDqE= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 41 ansible_user: zuul ansible_user_dir: /home/zuul ansible_user_gecos: '' ansible_user_gid: 1000 ansible_user_id: zuul ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_architecture_repo: /home/zuul/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_default_dns_servers: - 1.1.1.1 - 8.8.8.8 cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/multinode-ci.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/horizon.yml' - 
'@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/scenarios/edpm-no-notifications.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/tests/watcher-tempest.yml' cifmw_installyamls_repos: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_nolog: true cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_openstack_k8s_operators_org_url: https://github.com/openstack-k8s-operators cifmw_openstack_namespace: openstack cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_repo: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_relative: src/github.com/openstack-k8s-operators/ci-framework cifmw_test_operator_tempest_external_plugin: - 
changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: podified-epoxy-centos9 cifmw_test_operator_tempest_registry: 38.102.83.51:5001 cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: 38.102.83.51:5001 cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: vexxhost crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '1500' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 discovered_interpreter_python: /usr/bin/python3 enable_ramdisk: true fetch_dlrn_hash: false gather_subset: - all 
group_names: - computes groups: all: - compute-0 - compute-1 - controller - crc computes: - compute-0 - compute-1 ocps: - crc ungrouped: *id001 zuul_unreachable: [] inventory_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0 inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml inventory_hostname: compute-1 inventory_hostname_short: compute-1 module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 70fdcf87-2866-4dad-82d7-0accf142e3fc host_id: bdb78bf25a270582fae0ca49d447ffffc4c7a50a772a0a4c0593588a interface_ip: 38.102.83.233 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.233 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.233 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__3495a27fa3f7994641c1fe3f418eb5fa4a3ff705 operator_namespace: openstack-operators playbook_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.233 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. 
src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' 
cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 70fdcf87-2866-4dad-82d7-0accf142e3fc host_id: bdb78bf25a270582fae0ca49d447ffffc4c7a50a772a0a4c0593588a interface_ip: 38.102.83.233 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.233 private_ipv6: null 
provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.233 public_ipv6: '' region: RegionOne slot: null push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: main build: 90366c73dd19485aa9d993ddaab6d557 build_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null buildset: f20638613fa941389b7ed9d90fc6bce5 buildset_refs: - branch: main change: '320' 
change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n \ spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: 
https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 child_jobs: [] commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 event_id: 2658e330-f5e9-11f0-9efc-87f0b8025f7f executor: hostname: ze02.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/inventory.yaml log_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/logs result_data_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/results.json src_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/src work_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work items: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null job: watcher-operator-validation-epoxy-ocp4-16 
jobtags: [] max_attempts: 1 message: UmFiYml0bXEgdmhvc3QgYW5kIHVzZXIgc3VwcG9ydAoKQWRkIG5ldyBtZXNzYWdpbmdCdXMgYW5kIG5vdGlmaWNhdGlvbnNCdXMgaW50ZXJmYWNlcyB0byBob2xkIGNsdXN0ZXIsIHVzZXIgYW5kIHZob3N0IG5hbWVzIGZvciBvcHRpb25hbCB1c2FnZS4NClRoZSBjb250cm9sbGVyIGFkZHMgdGhlc2UgdmFsdWVzIHRvIHRoZSBUcmFuc3BvcnRVUkwgY3JlYXRlIHJlcXVlc3Qgd2hlbiBwcmVzZW50Lg0KDQpBZGRpdGlvbmFsbHksIHdlIG1pZ3JhdGUgUmFiYml0TVEgY2x1c3RlciBuYW1lIHRvIFJhYmJpdE1xIGNvbmZpZyBzdHJ1Y3QgdXNpbmcgRGVmYXVsdFJhYmJpdE1xQ29uZmlnIGZyb20gaW5mcmEtb3BlcmF0b3IgdG8gYXV0b21hdGljYWxseSBwb3B1bGF0ZSB0aGUgbmV3IENsdXN0ZXIgZmllbGQgZnJvbSBsZWdhY3kgUmFiYml0TXFDbHVzdGVyTmFtZS4NCg0KRXhhbXBsZSB1c2FnZToNCg0KYGBgDQogIHNwZWM6DQogICAgbWVzc2FnaW5nQnVzOg0KICAgICAgY2x1c3RlcjogcnBjLXJhYmJpdG1xDQogICAgICB1c2VyOiBycGMtdXNlcg0KICAgICAgdmhvc3Q6IHJwYy12aG9zdA0KICAgIG5vdGlmaWNhdGlvbnNCdXM6DQogICAgICBjbHVzdGVyOiBub3RpZmljYXRpb25zLXJhYmJpdG1xDQogICAgICB1c2VyOiBub3RpZmljYXRpb25zLXVzZXINCiAgICAgIHZob3N0OiBub3RpZmljYXRpb25zLXZob3N0DQpgYGANCg0KSmlyYTogaHR0cHM6Ly9pc3N1ZXMucmVkaGF0LmNvbS9icm93c2UvT1NQUkgtMjM4ODI= patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 pipeline: github-check playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 
05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 playbooks: - path: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/edpm/run.yml roles: - checkout: main checkout_description: playbook branch link_name: ansible/playbook_0/role_0/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_0/ci-framework/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_1/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_1/config/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_2/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_2/zuul-jobs/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_3/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_3/rdo-jobs/roles post_review: false project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref 
commit: 42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: zuul branch commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: zuul branch commit: 43c8ae13d85939e9a3f9cddbe838cbe4616199f7 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: zuul branch commit: 0121df8691096e0883637457925e4142353e35ba name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: zuul branch commit: bdf4c9385be5e3e04ff06f67f25d6993db70cf6e name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: main checkout_description: zuul branch commit: 06cd1004cb26b36ba1054ccf7875fad6248762c5 name: 
openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: zuul branch commit: 74e48015b997b0bb2efc1ad92a6937949f185a0e name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: zuul branch commit: 38e630804dada625f7b015f13f3ac5bb7192f4dd name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: zuul branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup github.com/openstack-k8s-operators/watcher-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator checkout: main checkout_description: zuul branch commit: 581f46572d07c53c87a11aa044b02e73f253eea6 name: openstack-k8s-operators/watcher-operator required: false short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: project default branch commit: 
691c03cc007bee9934da14cf46c86009616a2aef name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: project default branch commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/pull/320/head resources: {} tenant: rdoproject.org timeout: 10800 topic: null voting: true zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '0' zuul_execution_trusted: 'False' zuul_log_collection: false zuul_success: 'False' zuul_will_retry: 'False' controller: _included_dir: changed: false failed: false stat: atime: 1768906622.6334352 attr_flags: '' attributes: [] block_size: 4096 blocks: 0 charset: binary ctime: 1768906583.763411 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 46197012 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1768906583.763411 nlink: 2 path: /home/zuul/ci-framework-data/artifacts/parameters pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 120 uid: 1000 version: '3168648319' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true _included_file: changed: false failed: false stat: atime: 1768906623.674463 attr_flags: '' attributes: [] block_size: 4096 blocks: 8 charset: us-ascii checksum: 6d8939fb75e97fe158faa310d709fff99c45d6cf ctime: 1768906582.9373894 dev: 64513 device_type: 0 executable: false exists: true gid: 1000 gr_name: zuul inode: 4366262 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: 
true issock: false isuid: false mimetype: text/plain mode: '0600' mtime: 1768906582.7803853 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml pw_name: zuul readable: true rgrp: false roth: false rusr: true size: 280 uid: 1000 version: '1785124229' wgrp: false woth: false writeable: true wusr: true xgrp: false xoth: false xusr: false _parsed_vars: changed: false content: Y2lmbXdfb3BlbnNoaWZ0X2FwaTogYXBpLmNyYy50ZXN0aW5nOjY0NDMKY2lmbXdfb3BlbnNoaWZ0X2NvbnRleHQ6IGRlZmF1bHQvYXBpLWNyYy10ZXN0aW5nOjY0NDMva3ViZWFkbWluCmNpZm13X29wZW5zaGlmdF9rdWJlY29uZmlnOiAvaG9tZS96dXVsLy5jcmMvbWFjaGluZXMvY3JjL2t1YmVjb25maWcKY2lmbXdfb3BlbnNoaWZ0X3Rva2VuOiBzaGEyNTZ+YUNrOVNvTXhBS3pRbl9rWVB2eTFnV19LdlhfTWhWYnAwX3BNMlRuZnV5RQpjaWZtd19vcGVuc2hpZnRfdXNlcjoga3ViZWFkbWluCg== encoding: base64 failed: false source: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml _tmp_dir: changed: true failed: false gid: 10001 group: zuul mode: '0700' owner: zuul path: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/tmp/ansible.1xkebns9 size: 40 state: directory uid: 10001 _yaml_files: changed: false examined: 4 failed: false files: - atime: 1768906473.7125292 ctime: 1768906470.6654503 dev: 64513 gid: 1000 gr_name: zuul inode: 63057654 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0644' mtime: 1768906470.458445 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml pw_name: zuul rgrp: true roth: true rusr: true size: 20657 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1768906623.6584625 ctime: 1768906583.766411 dev: 64513 gid: 1000 gr_name: zuul inode: 21110197 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0600' mtime: 1768906583.611407 nlink: 1 path: 
/home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml pw_name: zuul rgrp: false roth: false rusr: true size: 28355 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1768906543.1623468 ctime: 1768906541.3292987 dev: 64513 gid: 1000 gr_name: zuul inode: 58810635 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0644' mtime: 1768906541.1692946 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml pw_name: zuul rgrp: true roth: true rusr: true size: 9278 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1768906623.674463 ctime: 1768906582.9373894 dev: 64513 gid: 1000 gr_name: zuul inode: 4366262 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0600' mtime: 1768906582.7803853 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml pw_name: zuul rgrp: false roth: false rusr: true size: 280 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false matched: 4 msg: All paths examined skipped_paths: {} ansible_all_ipv4_addresses: - 38.102.83.39 ansible_all_ipv6_addresses: - fe80::f816:3eff:fe71:e11f ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: 
UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ansible_config_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2026-01-20' day: '20' epoch: '1768906665' epoch_int: '1768906665' hour: '10' iso8601: '2026-01-20T10:57:45Z' iso8601_basic: 20260120T105745596274 iso8601_basic_short: 20260120T105745 iso8601_micro: '2026-01-20T10:57:45.596274Z' minute: '57' month: '01' second: '45' time: '10:57:45' tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' ansible_default_ipv4: address: 38.102.83.39 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:71:e1:1f mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-26-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 ansible_devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-26-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '83883999' sectorsize: 512 size: 40.00 GB start: '2048' uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '83886080' sectorsize: '512' size: 40.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: CentOS ansible_distribution_file_parsed: true 
ansible_distribution_file_path: /etc/centos-release ansible_distribution_file_variety: CentOS ansible_distribution_major_version: '9' ansible_distribution_release: Stream ansible_distribution_version: '9' ansible_dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 46682 22 SSH_CONNECTION: 38.102.83.114 46682 38.102.83.39 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '12' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f ansible_eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' 
rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.39 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe71:e11f prefix: '64' scope: link macaddress: fa:16:3e:71:e1:1f module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.39 all_ipv6_addresses: - fe80::f816:3eff:fe71:e11f ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU 
chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 crc_ci_bootstrap_instance_default_net_config: mtu: '1500' range: 192.168.122.0/24 router_net: '' transparent: true crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2026-01-20T10:44:55Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 2e4cfd41-19cc-4d87-87ce-3666153838d5 hardware_offload_type: null hints: '' id: 97004f9b-ecd9-4295-91be-b58c60a6e487 ip_allocation: immediate mac_address: fa:16:3e:db:79:f1 name: crc-a519b063-d122-47cf-ae3b-7548803df408 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2026-01-20T10:44:55Z' crc_ci_bootstrap_network_name: zuul-ci-net-90366c73 crc_ci_bootstrap_networks_out: compute-0: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.100/24 mac: fa:16:3e:c1:c1:89 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.100/24 mac: 52:54:00:e4:85:d6 mtu: 
'1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.100/24 mac: 52:54:00:81:02:e8 mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.100/24 mac: 52:54:00:10:b6:c8 mtu: '1496' parent_iface: eth1 vlan: 22 compute-1: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.101/24 mac: fa:16:3e:62:06:49 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.101/24 mac: 52:54:00:1a:ac:e2 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.101/24 mac: 52:54:00:8e:43:4e mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.101/24 mac: 52:54:00:96:51:04 mtu: '1496' parent_iface: eth1 vlan: 22 controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b8:b9:28 mtu: '1500' crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:db:79:f1 mtu: '1500' internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:75:cf:2c mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:da:18:d2 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:b7:68:1f mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:28Z' description: '' dns_domain: '' id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: true l2_adjacency: true mtu: 1500 name: zuul-ci-net-90366c73 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 
1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2026-01-20T10:43:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:36Z' description: '' enable_ndp_proxy: null external_gateway_info: null flavor_id: null id: 6f8c9a88-0252-4b33-9039-3708bccf4928 name: zuul-ci-subnet-router-90366c73 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 1 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2026-01-20T10:43:36Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2026-01-20T10:43:33Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 2e4cfd41-19cc-4d87-87ce-3666153838d5 ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-90366c73 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2026-01-20T10:43:33Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-90366c73 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-90366c73 date_time: date: '2026-01-20' day: '20' epoch: '1768906665' epoch_int: '1768906665' hour: '10' iso8601: '2026-01-20T10:57:45Z' iso8601_basic: 20260120T105745596274 iso8601_basic_short: 20260120T105745 iso8601_micro: '2026-01-20T10:57:45.596274Z' minute: '57' month: '01' second: '45' time: '10:57:45' tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' default_ipv4: address: 38.102.83.39 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:71:e1:1f mtu: 1500 netmask: 
255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-26-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-26-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '83883999' sectorsize: 512 size: 40.00 GB start: '2048' uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '83886080' sectorsize: '512' size: 40.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul 
MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 46682 22 SSH_CONNECTION: 38.102.83.114 46682 38.102.83.39 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '12' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] 
tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.39 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe71:e11f prefix: '64' scope: link macaddress: fa:16:3e:71:e1:1f module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: controller gather_subset: - min hostname: controller hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d interfaces: - eth0 - lo is_chroot: false iscsi_iqn: '' kernel: 5.14.0-661.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off 
[fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.08 1m: 0.59 5m: 0.22 locally_reachable_ips: ipv4: - 38.102.83.39 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe71:e11f lsb: {} lvm: N/A machine: x86_64 machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 memfree_mb: 3195 memory_mb: nocache: free: 3404 used: 251 real: free: 3195 total: 3655 used: 460 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 3655 module_setup: true mounts: - block_available: 9928916 block_size: 4096 block_total: 10469115 block_used: 540199 device: /dev/vda1 fstype: xfs inode_available: 20917098 inode_total: 20970992 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 40668839936 size_total: 42881495040 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 nodename: controller 
os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 2 processor_nproc: 2 processor_threads_per_core: 1 processor_vcpus: 2 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 python_version: 3.9.25 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+HME4ahJQfJECnlUk3Icgw7DjB45ygINRfee3AcsNVR5rNrzPcoaVTiPZ1YEOGS4KKBJ1Qyzp3CgN+dL12iOs= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIHcCbCXC3if6/1WyJzr5vIzL+Tzi/3I/oQgtmHTAEmym ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQD5hZ02CR6jauFfwvnGyh7Gg7GiN8xU/4woiEx+8xAto75E7Pi9h+8iAczj5rpkBpdIX3G3BSeegzMeog4upoxDVvta9EgXuabnQ49Y7WDm0LPFAPgiFBu/CkrcXHPm6OM5a181eFVk4w9Kf3GDJ9Arh5IdZdAbxXEEdenpbQnlz4hFtl/dGIrohDfCuWmrhq5VMqraCeMpiJ4c2G2iMVgZFQf8LvUICbaySrebir4HAfyv1yWZawS1Nql3bsyHsx9Tf25tbj5CHHs+DhC9NI/UgOmeW8rz3IyrdhqKInbsI/AqSHQPmEAEwzRco8xALMmzjICopKKXB9R++ddv/PAqiZPTrixuYUXRQQyJvx080Visb20dtAZTLYdmY7X1oB8Jgvullh3xFHNeqAu0+7OIeoHl9eCxx2sbk1kEtud1CMuDc5cn7h3ANGHZY3/jP0cUAyel1wQE0olv43z2rzgUpI6+8gKM0edyBLbCbww6/PHtcNkrzxAB3WOaAIzZAI8= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux 
system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 64 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack zuul_change_list: - watcher-operator ansible_fibre_channel_wwn: [] ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: controller ansible_host: 38.102.83.39 ansible_hostname: controller ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d ansible_interfaces: - eth0 - lo ansible_inventory_sources: - /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml ansible_is_chroot: false ansible_iscsi_iqn: '' ansible_kernel: 5.14.0-661.el9.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: 
off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.08 1m: 0.59 5m: 0.22 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.39 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe71:e11f ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 ansible_memfree_mb: 3195 ansible_memory_mb: nocache: free: 3404 used: 251 real: free: 3195 total: 3655 used: 460 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 3655 ansible_mounts: - block_available: 9928916 block_size: 4096 block_total: 10469115 block_used: 540199 device: /dev/vda1 fstype: xfs inode_available: 20917098 inode_total: 20970992 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 40668839936 size_total: 42881495040 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 
ansible_nodename: controller ansible_os_family: RedHat ansible_pkg_mgr: dnf ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 2 ansible_processor_nproc: 2 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 2 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.3.1 ansible_python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.25 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+HME4ahJQfJECnlUk3Icgw7DjB45ygINRfee3AcsNVR5rNrzPcoaVTiPZ1YEOGS4KKBJ1Qyzp3CgN+dL12iOs= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIHcCbCXC3if6/1WyJzr5vIzL+Tzi/3I/oQgtmHTAEmym ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: 
AAAAB3NzaC1yc2EAAAADAQABAAABgQD5hZ02CR6jauFfwvnGyh7Gg7GiN8xU/4woiEx+8xAto75E7Pi9h+8iAczj5rpkBpdIX3G3BSeegzMeog4upoxDVvta9EgXuabnQ49Y7WDm0LPFAPgiFBu/CkrcXHPm6OM5a181eFVk4w9Kf3GDJ9Arh5IdZdAbxXEEdenpbQnlz4hFtl/dGIrohDfCuWmrhq5VMqraCeMpiJ4c2G2iMVgZFQf8LvUICbaySrebir4HAfyv1yWZawS1Nql3bsyHsx9Tf25tbj5CHHs+DhC9NI/UgOmeW8rz3IyrdhqKInbsI/AqSHQPmEAEwzRco8xALMmzjICopKKXB9R++ddv/PAqiZPTrixuYUXRQQyJvx080Visb20dtAZTLYdmY7X1oB8Jgvullh3xFHNeqAu0+7OIeoHl9eCxx2sbk1kEtud1CMuDc5cn7h3ANGHZY3/jP0cUAyel1wQE0olv43z2rzgUpI6+8gKM0edyBLbCbww6/PHtcNkrzxAB3WOaAIzZAI8= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 64 ansible_user: zuul ansible_user_dir: /home/zuul ansible_user_gecos: '' ansible_user_gid: 1000 ansible_user_id: zuul ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_architecture_repo: /home/zuul/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_basedir: /home/zuul/ci-framework-data cifmw_build_images_output: {} cifmw_config_certmanager: true cifmw_default_dns_servers: - 1.1.1.1 - 8.8.8.8 cifmw_deploy_edpm: true cifmw_dlrn_report_result: false cifmw_edpm_deploy_nova_compute_extra_config: '[libvirt] cpu_mode = custom cpu_models = Nehalem ' cifmw_edpm_prepare_kustomizations: - apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namespace: openstack patches: - patch: "apiVersion: 
core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n telemetry:\n enabled: true\n \ template:\n ceilometer:\n enabled: true\n metricStorage:\n \ enabled: true\n customMonitoringStack:\n alertmanagerConfig:\n \ disabled: true\n prometheusConfig:\n enableRemoteWriteReceiver: true\n persistentVolumeClaim:\n resources:\n \ requests:\n storage: 20G\n replicas: 1\n scrapeInterval: 30s\n resourceSelector:\n \ matchLabels:\n service: metricStorage\n \ retention: 24h" target: kind: OpenStackControlPlane - patch: "apiVersion: core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n telemetry:\n template:\n metricStorage:\n \ monitoringStack: null" target: kind: OpenStackControlPlane - patch: "apiVersion: core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n watcher:\n enabled: true\n template:\n \ decisionengineServiceTemplate:\n customServiceConfig: |\n [watcher_cluster_data_model_collectors.compute]\n \ period = 60\n [watcher_cluster_data_model_collectors.storage]\n \ period = 60" target: kind: OpenStackControlPlane cifmw_edpm_prepare_skip_crc_storage_creation: true cifmw_edpm_prepare_timeout: 60 cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/multinode-ci.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/horizon.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/scenarios/edpm-no-notifications.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/tests/watcher-tempest.yml' cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: 
config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_OS_IMG_TYPE: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 
172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: false BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml 
CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 
DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' 
EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: false INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab 
IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml 
MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 
192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: 
https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' 
OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: os**********et SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 
STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator tripleo_deploy: 'export REGISTRY_PWD:' cifmw_install_yamls_environment: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: 
false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator cifmw_installyamls_repos: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_nolog: true cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_context: default/api-crc-testing:6443/kubeadmin cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_password: '12**********89' cifmw_openshift_setup_skip_internal_registry: true cifmw_openshift_setup_skip_internal_registry_tls_verify: true cifmw_openshift_skip_tls_verify: true cifmw_openshift_token: sha256~aCk9SoMxAKzQn_kYPvy1gW_KvX_MhVbp0_pM2TnfuyE cifmw_openshift_user: kubeadmin cifmw_openstack_k8s_operators_org_url: https://github.com/openstack-k8s-operators cifmw_openstack_namespace: openstack cifmw_operator_build_meta_name: openstack-operator cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 
38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cifmw_repo: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_relative: src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_setup_dist_major_version: 9 cifmw_repo_setup_os_release: centos cifmw_run_test_role: test_operator cifmw_run_tests: true cifmw_status: changed: false failed: false stat: atime: 1768906474.6765542 attr_flags: '' attributes: [] block_size: 4096 blocks: 8 charset: binary ctime: 1768906465.442315 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 16795655 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1768906465.442315 nlink: 21 path: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 4096 uid: 1000 version: '2368198249' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true cifmw_success_flag: changed: false failed: false stat: exists: false cifmw_test_operator_tempest_concurrency: 1 cifmw_test_operator_tempest_exclude_list: 'watcher_tempest_plugin.*client_functional.* watcher_tempest_plugin.tests.scenario.test_execute_strategies.TestExecuteStrategies.test_execute_storage_capacity_balance_strategy watcher_tempest_plugin.*\[.*\breal_load\b.*\].* watcher_tempest_plugin.tests.scenario.test_execute_zone_migration.TestExecuteZoneMigrationStrategy.test_execute_zone_migration_without_destination_host watcher_tempest_plugin.*\[.*\bvolume_migration\b.*\].* ' 
cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_include_list: 'watcher_tempest_plugin.* ' cifmw_test_operator_tempest_namespace: podified-epoxy-centos9 cifmw_test_operator_tempest_registry: 38.102.83.51:5001 cifmw_test_operator_tempest_tempestconf_config: overrides: 'compute.min_microversion 2.56 compute.min_compute_nodes 2 placement.min_microversion 1.29 compute-feature-enabled.live_migration true compute-feature-enabled.block_migration_for_live_migration true service_available.sg_core true telemetry_services.metric_backends prometheus telemetry.disable_ssl_certificate_validation true telemetry.ceilometer_polling_interval 15 optimize.min_microversion 1.0 optimize.max_microversion 1.4 optimize.datasource prometheus optimize.openstack_type podified optimize.proxy_host_address 38.102.83.39 optimize.proxy_host_user zuul optimize.prometheus_host metric-storage-prometheus.openstack.svc optimize.prometheus_ssl_enabled true optimize.prometheus_ssl_cert_dir /etc/prometheus/secrets/combined-ca-bundle optimize.podified_kubeconfig_path /home/zuul/.crc/machines/crc/kubeconfig optimize.podified_namespace openstack optimize.run_continuous_audit_tests true ' cifmw_update_containers: true cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: 38.102.83.51:5001 cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_crc: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 
38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: vexxhost crc_ci_bootstrap_instance_default_net_config: mtu: '1500' range: 192.168.122.0/24 router_net: '' transparent: true crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2026-01-20T10:44:55Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 2e4cfd41-19cc-4d87-87ce-3666153838d5 hardware_offload_type: null hints: '' id: 97004f9b-ecd9-4295-91be-b58c60a6e487 ip_allocation: immediate mac_address: fa:16:3e:db:79:f1 name: crc-a519b063-d122-47cf-ae3b-7548803df408 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2026-01-20T10:44:55Z' crc_ci_bootstrap_network_name: zuul-ci-net-90366c73 crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: 
false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '1500' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 crc_ci_bootstrap_networks_out: compute-0: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.100/24 mac: fa:16:3e:c1:c1:89 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.100/24 mac: 52:54:00:e4:85:d6 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.100/24 mac: 52:54:00:81:02:e8 mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.100/24 mac: 52:54:00:10:b6:c8 mtu: '1496' parent_iface: eth1 vlan: 22 compute-1: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.101/24 mac: fa:16:3e:62:06:49 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.101/24 mac: 52:54:00:1a:ac:e2 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.101/24 mac: 52:54:00:8e:43:4e mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.101/24 mac: 52:54:00:96:51:04 mtu: '1496' parent_iface: eth1 vlan: 22 controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b8:b9:28 mtu: '1500' crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:db:79:f1 mtu: '1500' internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:75:cf:2c mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:da:18:d2 mtu: '1496' parent_iface: ens7 vlan: 21 
tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:b7:68:1f mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:28Z' description: '' dns_domain: '' id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: true l2_adjacency: true mtu: 1500 name: zuul-ci-net-90366c73 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2026-01-20T10:43:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:36Z' description: '' enable_ndp_proxy: null external_gateway_info: null flavor_id: null id: 6f8c9a88-0252-4b33-9039-3708bccf4928 name: zuul-ci-subnet-router-90366c73 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 1 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2026-01-20T10:43:36Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2026-01-20T10:43:33Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 2e4cfd41-19cc-4d87-87ce-3666153838d5 ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-90366c73 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2026-01-20T10:43:33Z' 
crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-90366c73 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-90366c73 discovered_interpreter_python: /usr/bin/python3 enable_ramdisk: true fetch_dlrn_hash: false gather_subset: - min group_names: - ungrouped groups: all: - compute-0 - compute-1 - controller - crc computes: - compute-0 - compute-1 ocps: - crc ungrouped: *id001 zuul_unreachable: [] inventory_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0 inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml inventory_hostname: controller inventory_hostname_short: controller logfiles_dest_dir: /home/zuul/ci-framework-data/logs/2026-01-20_10-57 module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f43c7845-aaa4-43d2-b2cb-88299b0d01b3 host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411 interface_ip: 38.102.83.39 label: cloud-centos-9-stream-tripleo-vexxhost-medium private_ipv4: 38.102.83.39 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.39 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__3495a27fa3f7994641c1fe3f418eb5fa4a3ff705 operator_namespace: openstack-operators playbook_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks post_ctlplane_deploy: - name: Tune rabbitmq resources source: rabbitmq_tuning.yml type: playbook post_deploy: - inventory: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/hosts name: Download needed tools source: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml type: playbook - name: Patch Openstack Prometheus to enable admin API source: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/prometheus_admin_api.yaml type: 
playbook post_infra: - inventory: /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml name: Fetch nodes facts and save them as parameters source: fetch_compute_facts.yml type: playbook pre_deploy: - name: 80 Kustomize OpenStack CR source: control_plane_horizon.yml type: playbook pre_deploy_create_coo_subscription: - name: Deploy cluster-observability-operator source: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/deploy_cluster_observability_operator.yaml type: playbook pre_infra: - connection: local inventory: localhost, name: Download needed tools source: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml type: playbook pre_update: - inventory: /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml name: Fetch nodes facts and save them as parameters source: fetch_compute_facts.yml type: playbook push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.39 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. 
src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ 
content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f43c7845-aaa4-43d2-b2cb-88299b0d01b3 host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411 interface_ip: 38.102.83.39 label: cloud-centos-9-stream-tripleo-vexxhost-medium private_ipv4: 38.102.83.39 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.39 public_ipv6: '' region: RegionOne slot: null push_registry: 
quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: main build: 90366c73dd19485aa9d993ddaab6d557 build_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null buildset: f20638613fa941389b7ed9d90fc6bce5 buildset_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and 
vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n \ spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 child_jobs: [] commit_id: 
581f46572d07c53c87a11aa044b02e73f253eea6 event_id: 2658e330-f5e9-11f0-9efc-87f0b8025f7f executor: hostname: ze02.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/inventory.yaml log_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/logs result_data_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/results.json src_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/src work_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work items: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null job: watcher-operator-validation-epoxy-ocp4-16 jobtags: [] max_attempts: 1 message: 
UmFiYml0bXEgdmhvc3QgYW5kIHVzZXIgc3VwcG9ydAoKQWRkIG5ldyBtZXNzYWdpbmdCdXMgYW5kIG5vdGlmaWNhdGlvbnNCdXMgaW50ZXJmYWNlcyB0byBob2xkIGNsdXN0ZXIsIHVzZXIgYW5kIHZob3N0IG5hbWVzIGZvciBvcHRpb25hbCB1c2FnZS4NClRoZSBjb250cm9sbGVyIGFkZHMgdGhlc2UgdmFsdWVzIHRvIHRoZSBUcmFuc3BvcnRVUkwgY3JlYXRlIHJlcXVlc3Qgd2hlbiBwcmVzZW50Lg0KDQpBZGRpdGlvbmFsbHksIHdlIG1pZ3JhdGUgUmFiYml0TVEgY2x1c3RlciBuYW1lIHRvIFJhYmJpdE1xIGNvbmZpZyBzdHJ1Y3QgdXNpbmcgRGVmYXVsdFJhYmJpdE1xQ29uZmlnIGZyb20gaW5mcmEtb3BlcmF0b3IgdG8gYXV0b21hdGljYWxseSBwb3B1bGF0ZSB0aGUgbmV3IENsdXN0ZXIgZmllbGQgZnJvbSBsZWdhY3kgUmFiYml0TXFDbHVzdGVyTmFtZS4NCg0KRXhhbXBsZSB1c2FnZToNCg0KYGBgDQogIHNwZWM6DQogICAgbWVzc2FnaW5nQnVzOg0KICAgICAgY2x1c3RlcjogcnBjLXJhYmJpdG1xDQogICAgICB1c2VyOiBycGMtdXNlcg0KICAgICAgdmhvc3Q6IHJwYy12aG9zdA0KICAgIG5vdGlmaWNhdGlvbnNCdXM6DQogICAgICBjbHVzdGVyOiBub3RpZmljYXRpb25zLXJhYmJpdG1xDQogICAgICB1c2VyOiBub3RpZmljYXRpb25zLXVzZXINCiAgICAgIHZob3N0OiBub3RpZmljYXRpb25zLXZob3N0DQpgYGANCg0KSmlyYTogaHR0cHM6Ly9pc3N1ZXMucmVkaGF0LmNvbS9icm93c2UvT1NQUkgtMjM4ODI= patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 pipeline: github-check playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb 
untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 playbooks: - path: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/edpm/run.yml roles: - checkout: main checkout_description: playbook branch link_name: ansible/playbook_0/role_0/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_0/ci-framework/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_1/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_1/config/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_2/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_2/zuul-jobs/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_3/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_3/rdo-jobs/roles post_review: false project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 
42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: zuul branch commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: zuul branch commit: 43c8ae13d85939e9a3f9cddbe838cbe4616199f7 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: zuul branch commit: 0121df8691096e0883637457925e4142353e35ba name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: zuul branch commit: bdf4c9385be5e3e04ff06f67f25d6993db70cf6e name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: main checkout_description: zuul branch commit: 06cd1004cb26b36ba1054ccf7875fad6248762c5 name: 
openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: zuul branch commit: 74e48015b997b0bb2efc1ad92a6937949f185a0e name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: zuul branch commit: 38e630804dada625f7b015f13f3ac5bb7192f4dd name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: zuul branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup github.com/openstack-k8s-operators/watcher-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator checkout: main checkout_description: zuul branch commit: 581f46572d07c53c87a11aa044b02e73f253eea6 name: openstack-k8s-operators/watcher-operator required: false short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: project default branch commit: 
691c03cc007bee9934da14cf46c86009616a2aef name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: project default branch commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/pull/320/head resources: {} tenant: rdoproject.org timeout: 10800 topic: null voting: true zuul_change_list: - watcher-operator zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '0' zuul_execution_trusted: 'False' zuul_log_collection: false zuul_success: 'False' zuul_will_retry: 'False' crc: ansible_all_ipv4_addresses: - 38.102.83.220 - 192.168.126.11 ansible_all_ipv6_addresses: - fe80::4cdf:649e:2e5b:3790 ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_br_ex: active: true device: br-ex features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' 
rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.220 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::4cdf:649e:2e5b:3790 prefix: '64' scope: link macaddress: fa:16:3e:0d:e7:11 mtu: 1500 promisc: true timestamping: [] type: ether ansible_br_int: active: false device: br-int features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off 
[fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 4e:ec:11:72:80:3b mtu: 1400 promisc: true timestamping: [] type: ether ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: 
/ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' ansible_config_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2026-01-20' day: '20' epoch: '1768905758' epoch_int: '1768905758' hour: '10' iso8601: '2026-01-20T10:42:38Z' iso8601_basic: 20260120T104238722665 iso8601_basic_short: 20260120T104238 iso8601_micro: '2026-01-20T10:42:38.722665Z' minute: '42' month: '01' second: '38' time: '10:42:38' tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' ansible_default_ipv4: address: 38.102.83.220 alias: br-ex broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: br-ex macaddress: fa:16:3e:0d:e7:11 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 vda2: - EFI-SYSTEM vda3: - boot vda4: - root masters: {} uuids: sr0: - 2026-01-20-10-41-28-00 vda2: - 7B77-95E7 vda3: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 ansible_devices: sr0: holders: [] host: 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-28-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '0' vendor: QEMU virtual: 1 vda: holders: [] host: 'SCSI storage controller: Red Hat, Inc. 
Virtio block device' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: [] sectors: '2048' sectorsize: 512 size: 1.00 MB start: '2048' uuid: null vda2: holders: [] links: ids: [] labels: - EFI-SYSTEM masters: [] uuids: - 7B77-95E7 sectors: '260096' sectorsize: 512 size: 127.00 MB start: '4096' uuid: 7B77-95E7 vda3: holders: [] links: ids: [] labels: - boot masters: [] uuids: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 sectors: '786432' sectorsize: 512 size: 384.00 MB start: '264192' uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: holders: [] links: ids: [] labels: - root masters: [] uuids: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 sectors: '166721503' sectorsize: 512 size: 79.50 GB start: '1050624' uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '419430400' sectorsize: '512' size: 200.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: RedHat ansible_distribution_file_parsed: true ansible_distribution_file_path: /etc/redhat-release ansible_distribution_file_search_string: Red Hat ansible_distribution_file_variety: RedHat ansible_distribution_major_version: '4' ansible_distribution_release: NA ansible_distribution_version: '4.16' ansible_dns: nameservers: - 199.204.44.24 - 199.204.47.54 ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_ens3: active: true device: ens3 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off 
[fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: fa:16:3e:0d:e7:11 module: virtio_net mtu: 1500 pciid: virtio1 promisc: true speed: -1 timestamping: [] type: ether ansible_env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus HOME: /var/home/core LANG: C.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: core MOTD_SHOWN: pam PATH: 
/var/home/core/.local/bin:/var/home/core/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /var/home/core SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 48126 22 SSH_CONNECTION: 38.102.83.114 48126 38.102.83.220 22 USER: core XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '2' XDG_SESSION_TYPE: tty _: /usr/bin/python3.9 which_declare: declare -f ansible_eth10: active: true device: eth10 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' 
tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 192.168.126.11 broadcast: 192.168.126.255 netmask: 255.255.255.0 network: 192.168.126.0 prefix: '24' macaddress: f6:c1:8f:d4:3a:02 mtu: 1500 promisc: false timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.220 - 192.168.126.11 all_ipv6_addresses: - fe80::4cdf:649e:2e5b:3790 ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA br_ex: active: true device: br-ex features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] 
tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.220 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::4cdf:649e:2e5b:3790 prefix: '64' scope: link macaddress: fa:16:3e:0d:e7:11 mtu: 1500 promisc: true timestamping: [] type: ether br_int: active: false device: br-int features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: 
off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 4e:ec:11:72:80:3b mtu: 1400 promisc: true timestamping: [] type: ether chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' date_time: date: '2026-01-20' day: '20' epoch: '1768905758' epoch_int: '1768905758' hour: '10' iso8601: '2026-01-20T10:42:38Z' iso8601_basic: 20260120T104238722665 iso8601_basic_short: 20260120T104238 iso8601_micro: '2026-01-20T10:42:38.722665Z' 
minute: '42' month: '01' second: '38' time: '10:42:38' tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' default_ipv4: address: 38.102.83.220 alias: br-ex broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: br-ex macaddress: fa:16:3e:0d:e7:11 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 vda2: - EFI-SYSTEM vda3: - boot vda4: - root masters: {} uuids: sr0: - 2026-01-20-10-41-28-00 vda2: - 7B77-95E7 vda3: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 devices: sr0: holders: [] host: 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-28-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '0' vendor: QEMU virtual: 1 vda: holders: [] host: 'SCSI storage controller: Red Hat, Inc. 
Virtio block device' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: [] sectors: '2048' sectorsize: 512 size: 1.00 MB start: '2048' uuid: null vda2: holders: [] links: ids: [] labels: - EFI-SYSTEM masters: [] uuids: - 7B77-95E7 sectors: '260096' sectorsize: 512 size: 127.00 MB start: '4096' uuid: 7B77-95E7 vda3: holders: [] links: ids: [] labels: - boot masters: [] uuids: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 sectors: '786432' sectorsize: 512 size: 384.00 MB start: '264192' uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: holders: [] links: ids: [] labels: - root masters: [] uuids: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 sectors: '166721503' sectorsize: 512 size: 79.50 GB start: '1050624' uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '419430400' sectorsize: '512' size: 200.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3.9 distribution: RedHat distribution_file_parsed: true distribution_file_path: /etc/redhat-release distribution_file_search_string: Red Hat distribution_file_variety: RedHat distribution_major_version: '4' distribution_release: NA distribution_version: '4.16' dns: nameservers: - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 ens3: active: true device: ens3 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off 
[fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: fa:16:3e:0d:e7:11 module: virtio_net mtu: 1500 pciid: virtio1 promisc: true speed: -1 timestamping: [] type: ether env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus HOME: /var/home/core LANG: C.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: core MOTD_SHOWN: pam PATH: /var/home/core/.local/bin:/var/home/core/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: 
/var/home/core SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 48126 22 SSH_CONNECTION: 38.102.83.114 48126 38.102.83.220 22 USER: core XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '2' XDG_SESSION_TYPE: tty _: /usr/bin/python3.9 which_declare: declare -f eth10: active: true device: eth10 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' 
tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 192.168.126.11 broadcast: 192.168.126.255 netmask: 255.255.255.0 network: 192.168.126.0 prefix: '24' macaddress: f6:c1:8f:d4:3a:02 mtu: 1500 promisc: false timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: crc gather_subset: - all hostname: crc hostnqn: nqn.2014-08.org.nvmexpress:uuid:fe28b1dc-f424-4106-9c95-00604d2bcd5f interfaces: - br-int - ens3 - eth10 - lo - br-ex - ovn-k8s-mp0 - ovs-system is_chroot: true iscsi_iqn: iqn.1994-05.com.redhat:24fed7ce643e kernel: 5.14.0-427.22.1.el9_4.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Mon Jun 10 09:23:36 EDT 2024' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on 
[fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: on [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.12 1m: 1.31 5m: 0.35 locally_reachable_ips: ipv4: - 38.102.83.220 - 127.0.0.0/8 - 127.0.0.1 - 192.168.126.11 ipv6: - ::1 - fe80::4cdf:649e:2e5b:3790 lsb: {} lvm: N/A machine: x86_64 machine_id: c1bd596843fb445da20eca66471ddf66 memfree_mb: 28973 memory_mb: nocache: free: 30295 used: 1800 real: free: 28973 total: 32095 used: 3122 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 32095 module_setup: true mounts: - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /sysroot options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 
20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /etc options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /usr options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /sysroot/ostree/deploy/rhcos/var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 221344 block_size: 1024 block_total: 358271 block_used: 136927 device: /dev/vda3 fstype: ext4 inode_available: 97936 inode_total: 98304 inode_used: 368 
mount: /boot options: ro,seclabel,nosuid,nodev,relatime size_available: 226656256 size_total: 366869504 uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 - block_available: 0 block_size: 2048 block_total: 241 block_used: 241 device: /dev/sr0 fstype: iso9660 inode_available: 0 inode_total: 0 inode_used: 0 mount: /tmp/openstack-config-drive options: ro,relatime,nojoliet,check=s,map=n,blocksize=2048 size_available: 0 size_total: 493568 uuid: 2026-01-20-10-41-28-00 nodename: crc os_family: RedHat ovn_k8s_mp0: active: false device: ovn-k8s-mp0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' 
tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: b6:dc:d9:26:03:d4 mtu: 1400 promisc: true timestamping: [] type: ether ovs_system: active: false device: ovs-system features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] 
tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 76:e5:3b:bf:03:29 mtu: 1500 promisc: true timestamping: [] type: ether pkg_mgr: atomic_container proc_cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor - '8' - AuthenticAMD - AMD EPYC-Rome Processor - '9' - AuthenticAMD - AMD EPYC-Rome Processor - '10' - AuthenticAMD - AMD EPYC-Rome Processor - '11' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 12 processor_nproc: 12 processor_threads_per_core: 1 processor_vcpus: 12 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3.9 has_sslcontext: true type: cpython version: major: 3 micro: 18 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 18 - final - 0 
python_version: 3.9.18 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd services: NetworkManager-clean-initrd-state.service: name: NetworkManager-clean-initrd-state.service source: systemd state: stopped status: enabled NetworkManager-dispatcher.service: name: NetworkManager-dispatcher.service source: systemd state: running status: enabled NetworkManager-wait-online.service: name: NetworkManager-wait-online.service source: systemd state: stopped status: enabled NetworkManager.service: name: NetworkManager.service source: systemd state: running status: enabled afterburn-checkin.service: name: afterburn-checkin.service source: systemd state: stopped status: enabled afterburn-firstboot-checkin.service: name: afterburn-firstboot-checkin.service source: systemd state: stopped status: enabled afterburn-sshkeys@.service: name: afterburn-sshkeys@.service source: systemd state: unknown status: disabled afterburn.service: name: afterburn.service source: systemd state: inactive status: disabled arp-ethers.service: name: arp-ethers.service source: systemd state: inactive status: disabled auditd.service: name: auditd.service source: systemd state: running status: enabled auth-rpcgss-module.service: name: auth-rpcgss-module.service source: systemd state: stopped status: static autovt@.service: name: autovt@.service source: systemd state: unknown status: alias blk-availability.service: name: blk-availability.service source: systemd state: stopped status: disabled bootc-fetch-apply-updates.service: name: bootc-fetch-apply-updates.service source: systemd state: inactive status: static bootkube.service: name: bootkube.service source: systemd state: inactive status: disabled bootupd.service: name: bootupd.service source: systemd state: stopped status: static chrony-wait.service: name: chrony-wait.service source: systemd state: inactive status: disabled 
chronyd-restricted.service: name: chronyd-restricted.service source: systemd state: inactive status: disabled chronyd.service: name: chronyd.service source: systemd state: running status: enabled clevis-luks-askpass.service: name: clevis-luks-askpass.service source: systemd state: stopped status: static cni-dhcp.service: name: cni-dhcp.service source: systemd state: inactive status: disabled configure-cloudinit-ssh.service: name: configure-cloudinit-ssh.service source: systemd state: stopped status: enabled console-getty.service: name: console-getty.service source: systemd state: inactive status: disabled console-login-helper-messages-gensnippet-ssh-keys.service: name: console-login-helper-messages-gensnippet-ssh-keys.service source: systemd state: stopped status: enabled container-getty@.service: name: container-getty@.service source: systemd state: unknown status: static coreos-generate-iscsi-initiatorname.service: name: coreos-generate-iscsi-initiatorname.service source: systemd state: stopped status: enabled coreos-ignition-delete-config.service: name: coreos-ignition-delete-config.service source: systemd state: stopped status: enabled coreos-ignition-firstboot-complete.service: name: coreos-ignition-firstboot-complete.service source: systemd state: stopped status: enabled coreos-ignition-write-issues.service: name: coreos-ignition-write-issues.service source: systemd state: stopped status: enabled coreos-installer-disable-device-auto-activation.service: name: coreos-installer-disable-device-auto-activation.service source: systemd state: inactive status: static coreos-installer-noreboot.service: name: coreos-installer-noreboot.service source: systemd state: inactive status: static coreos-installer-reboot.service: name: coreos-installer-reboot.service source: systemd state: inactive status: static coreos-installer-secure-ipl-reboot.service: name: coreos-installer-secure-ipl-reboot.service source: systemd state: inactive status: static coreos-installer.service: 
name: coreos-installer.service source: systemd state: inactive status: static coreos-liveiso-success.service: name: coreos-liveiso-success.service source: systemd state: stopped status: enabled coreos-platform-chrony-config.service: name: coreos-platform-chrony-config.service source: systemd state: stopped status: enabled coreos-populate-lvmdevices.service: name: coreos-populate-lvmdevices.service source: systemd state: stopped status: enabled coreos-printk-quiet.service: name: coreos-printk-quiet.service source: systemd state: stopped status: enabled coreos-update-ca-trust.service: name: coreos-update-ca-trust.service source: systemd state: stopped status: enabled crc-dnsmasq.service: name: crc-dnsmasq.service source: systemd state: stopped status: not-found crc-pre.service: name: crc-pre.service source: systemd state: stopped status: enabled crio-subid.service: name: crio-subid.service source: systemd state: stopped status: enabled crio-wipe.service: name: crio-wipe.service source: systemd state: stopped status: disabled crio.service: name: crio.service source: systemd state: stopped status: disabled dbus-broker.service: name: dbus-broker.service source: systemd state: running status: enabled dbus-org.freedesktop.hostname1.service: name: dbus-org.freedesktop.hostname1.service source: systemd state: active status: alias dbus-org.freedesktop.locale1.service: name: dbus-org.freedesktop.locale1.service source: systemd state: inactive status: alias dbus-org.freedesktop.login1.service: name: dbus-org.freedesktop.login1.service source: systemd state: active status: alias dbus-org.freedesktop.nm-dispatcher.service: name: dbus-org.freedesktop.nm-dispatcher.service source: systemd state: active status: alias dbus-org.freedesktop.timedate1.service: name: dbus-org.freedesktop.timedate1.service source: systemd state: inactive status: alias dbus.service: name: dbus.service source: systemd state: active status: alias debug-shell.service: name: debug-shell.service source: 
systemd state: inactive status: disabled disable-mglru.service: name: disable-mglru.service source: systemd state: stopped status: enabled display-manager.service: name: display-manager.service source: systemd state: stopped status: not-found dm-event.service: name: dm-event.service source: systemd state: stopped status: static dnf-makecache.service: name: dnf-makecache.service source: systemd state: inactive status: static dnsmasq.service: name: dnsmasq.service source: systemd state: running status: enabled dracut-cmdline.service: name: dracut-cmdline.service source: systemd state: stopped status: static dracut-initqueue.service: name: dracut-initqueue.service source: systemd state: stopped status: static dracut-mount.service: name: dracut-mount.service source: systemd state: stopped status: static dracut-pre-mount.service: name: dracut-pre-mount.service source: systemd state: stopped status: static dracut-pre-pivot.service: name: dracut-pre-pivot.service source: systemd state: stopped status: static dracut-pre-trigger.service: name: dracut-pre-trigger.service source: systemd state: stopped status: static dracut-pre-udev.service: name: dracut-pre-udev.service source: systemd state: stopped status: static dracut-shutdown-onfailure.service: name: dracut-shutdown-onfailure.service source: systemd state: stopped status: static dracut-shutdown.service: name: dracut-shutdown.service source: systemd state: stopped status: static dummy-network.service: name: dummy-network.service source: systemd state: stopped status: enabled emergency.service: name: emergency.service source: systemd state: stopped status: static fcoe.service: name: fcoe.service source: systemd state: stopped status: not-found fstrim.service: name: fstrim.service source: systemd state: inactive status: static fwupd-offline-update.service: name: fwupd-offline-update.service source: systemd state: inactive status: static fwupd-refresh.service: name: fwupd-refresh.service source: systemd state: inactive 
status: static fwupd.service: name: fwupd.service source: systemd state: inactive status: static gcp-routes.service: name: gcp-routes.service source: systemd state: stopped status: enabled getty@.service: name: getty@.service source: systemd state: unknown status: enabled getty@tty1.service: name: getty@tty1.service source: systemd state: running status: active gssproxy.service: name: gssproxy.service source: systemd state: stopped status: disabled gvisor-tap-vsock.service: name: gvisor-tap-vsock.service source: systemd state: stopped status: enabled hypervfcopyd.service: name: hypervfcopyd.service source: systemd state: inactive status: static hypervkvpd.service: name: hypervkvpd.service source: systemd state: inactive status: static hypervvssd.service: name: hypervvssd.service source: systemd state: inactive status: static ignition-delete-config.service: name: ignition-delete-config.service source: systemd state: stopped status: enabled initrd-cleanup.service: name: initrd-cleanup.service source: systemd state: stopped status: static initrd-parse-etc.service: name: initrd-parse-etc.service source: systemd state: stopped status: static initrd-switch-root.service: name: initrd-switch-root.service source: systemd state: stopped status: static initrd-udevadm-cleanup-db.service: name: initrd-udevadm-cleanup-db.service source: systemd state: stopped status: static irqbalance.service: name: irqbalance.service source: systemd state: running status: enabled iscsi-init.service: name: iscsi-init.service source: systemd state: stopped status: disabled iscsi-onboot.service: name: iscsi-onboot.service source: systemd state: stopped status: enabled iscsi-shutdown.service: name: iscsi-shutdown.service source: systemd state: stopped status: static iscsi-starter.service: name: iscsi-starter.service source: systemd state: inactive status: disabled iscsi.service: name: iscsi.service source: systemd state: stopped status: indirect iscsid.service: name: iscsid.service source: systemd 
state: stopped status: disabled iscsiuio.service: name: iscsiuio.service source: systemd state: stopped status: disabled kdump.service: name: kdump.service source: systemd state: stopped status: disabled kmod-static-nodes.service: name: kmod-static-nodes.service source: systemd state: stopped status: static kubelet-auto-node-size.service: name: kubelet-auto-node-size.service source: systemd state: stopped status: enabled kubelet-cleanup.service: name: kubelet-cleanup.service source: systemd state: stopped status: enabled kubelet.service: name: kubelet.service source: systemd state: stopped status: disabled kubens.service: name: kubens.service source: systemd state: stopped status: disabled ldconfig.service: name: ldconfig.service source: systemd state: stopped status: static logrotate.service: name: logrotate.service source: systemd state: stopped status: static lvm2-activation-early.service: name: lvm2-activation-early.service source: systemd state: stopped status: not-found lvm2-lvmpolld.service: name: lvm2-lvmpolld.service source: systemd state: stopped status: static lvm2-monitor.service: name: lvm2-monitor.service source: systemd state: stopped status: enabled machine-config-daemon-firstboot.service: name: machine-config-daemon-firstboot.service source: systemd state: stopped status: enabled machine-config-daemon-pull.service: name: machine-config-daemon-pull.service source: systemd state: stopped status: enabled mdadm-grow-continue@.service: name: mdadm-grow-continue@.service source: systemd state: unknown status: static mdadm-last-resort@.service: name: mdadm-last-resort@.service source: systemd state: unknown status: static mdcheck_continue.service: name: mdcheck_continue.service source: systemd state: inactive status: static mdcheck_start.service: name: mdcheck_start.service source: systemd state: inactive status: static mdmon@.service: name: mdmon@.service source: systemd state: unknown status: static mdmonitor-oneshot.service: name: 
mdmonitor-oneshot.service source: systemd state: inactive status: static mdmonitor.service: name: mdmonitor.service source: systemd state: stopped status: enabled microcode.service: name: microcode.service source: systemd state: stopped status: enabled modprobe@.service: name: modprobe@.service source: systemd state: unknown status: static modprobe@configfs.service: name: modprobe@configfs.service source: systemd state: stopped status: inactive modprobe@drm.service: name: modprobe@drm.service source: systemd state: stopped status: inactive modprobe@efi_pstore.service: name: modprobe@efi_pstore.service source: systemd state: stopped status: inactive modprobe@fuse.service: name: modprobe@fuse.service source: systemd state: stopped status: inactive multipathd.service: name: multipathd.service source: systemd state: stopped status: enabled netavark-dhcp-proxy.service: name: netavark-dhcp-proxy.service source: systemd state: inactive status: disabled netavark-firewalld-reload.service: name: netavark-firewalld-reload.service source: systemd state: inactive status: disabled network.service: name: network.service source: systemd state: stopped status: not-found nfs-blkmap.service: name: nfs-blkmap.service source: systemd state: inactive status: disabled nfs-idmapd.service: name: nfs-idmapd.service source: systemd state: stopped status: static nfs-mountd.service: name: nfs-mountd.service source: systemd state: stopped status: static nfs-server.service: name: nfs-server.service source: systemd state: stopped status: disabled nfs-utils.service: name: nfs-utils.service source: systemd state: stopped status: static nfsdcld.service: name: nfsdcld.service source: systemd state: stopped status: static nftables.service: name: nftables.service source: systemd state: inactive status: disabled nis-domainname.service: name: nis-domainname.service source: systemd state: inactive status: disabled nm-cloud-setup.service: name: nm-cloud-setup.service source: systemd state: inactive status: 
disabled nm-priv-helper.service: name: nm-priv-helper.service source: systemd state: inactive status: static nmstate.service: name: nmstate.service source: systemd state: stopped status: enabled node-valid-hostname.service: name: node-valid-hostname.service source: systemd state: stopped status: enabled nodeip-configuration.service: name: nodeip-configuration.service source: systemd state: stopped status: enabled ntpd.service: name: ntpd.service source: systemd state: stopped status: not-found ntpdate.service: name: ntpdate.service source: systemd state: stopped status: not-found nvmefc-boot-connections.service: name: nvmefc-boot-connections.service source: systemd state: stopped status: enabled nvmf-autoconnect.service: name: nvmf-autoconnect.service source: systemd state: inactive status: disabled nvmf-connect@.service: name: nvmf-connect@.service source: systemd state: unknown status: static openvswitch.service: name: openvswitch.service source: systemd state: stopped status: enabled ostree-boot-complete.service: name: ostree-boot-complete.service source: systemd state: stopped status: enabled-runtime ostree-finalize-staged-hold.service: name: ostree-finalize-staged-hold.service source: systemd state: stopped status: static ostree-finalize-staged.service: name: ostree-finalize-staged.service source: systemd state: stopped status: static ostree-prepare-root.service: name: ostree-prepare-root.service source: systemd state: inactive status: static ostree-readonly-sysroot-migration.service: name: ostree-readonly-sysroot-migration.service source: systemd state: stopped status: disabled ostree-remount.service: name: ostree-remount.service source: systemd state: stopped status: enabled ostree-state-overlay@.service: name: ostree-state-overlay@.service source: systemd state: unknown status: disabled ovs-configuration.service: name: ovs-configuration.service source: systemd state: stopped status: enabled ovs-delete-transient-ports.service: name: 
ovs-delete-transient-ports.service source: systemd state: stopped status: static ovs-vswitchd.service: name: ovs-vswitchd.service source: systemd state: running status: static ovsdb-server.service: name: ovsdb-server.service source: systemd state: running status: static pam_namespace.service: name: pam_namespace.service source: systemd state: inactive status: static plymouth-quit-wait.service: name: plymouth-quit-wait.service source: systemd state: stopped status: not-found plymouth-read-write.service: name: plymouth-read-write.service source: systemd state: stopped status: not-found plymouth-start.service: name: plymouth-start.service source: systemd state: stopped status: not-found podman-auto-update.service: name: podman-auto-update.service source: systemd state: inactive status: disabled podman-clean-transient.service: name: podman-clean-transient.service source: systemd state: inactive status: disabled podman-kube@.service: name: podman-kube@.service source: systemd state: unknown status: disabled podman-restart.service: name: podman-restart.service source: systemd state: inactive status: disabled podman.service: name: podman.service source: systemd state: stopped status: disabled polkit.service: name: polkit.service source: systemd state: inactive status: static qemu-guest-agent.service: name: qemu-guest-agent.service source: systemd state: stopped status: enabled quotaon.service: name: quotaon.service source: systemd state: inactive status: static raid-check.service: name: raid-check.service source: systemd state: inactive status: static rbdmap.service: name: rbdmap.service source: systemd state: stopped status: not-found rc-local.service: name: rc-local.service source: systemd state: stopped status: static rdisc.service: name: rdisc.service source: systemd state: inactive status: disabled rdma-load-modules@.service: name: rdma-load-modules@.service source: systemd state: unknown status: static rdma-ndd.service: name: rdma-ndd.service source: systemd state: 
inactive status: static rescue.service: name: rescue.service source: systemd state: stopped status: static rhcos-usrlocal-selinux-fixup.service: name: rhcos-usrlocal-selinux-fixup.service source: systemd state: stopped status: enabled rpc-gssd.service: name: rpc-gssd.service source: systemd state: stopped status: static rpc-statd-notify.service: name: rpc-statd-notify.service source: systemd state: stopped status: static rpc-statd.service: name: rpc-statd.service source: systemd state: stopped status: static rpc-svcgssd.service: name: rpc-svcgssd.service source: systemd state: stopped status: not-found rpcbind.service: name: rpcbind.service source: systemd state: stopped status: disabled rpm-ostree-bootstatus.service: name: rpm-ostree-bootstatus.service source: systemd state: inactive status: disabled rpm-ostree-countme.service: name: rpm-ostree-countme.service source: systemd state: inactive status: static rpm-ostree-fix-shadow-mode.service: name: rpm-ostree-fix-shadow-mode.service source: systemd state: stopped status: disabled rpm-ostreed-automatic.service: name: rpm-ostreed-automatic.service source: systemd state: inactive status: static rpm-ostreed.service: name: rpm-ostreed.service source: systemd state: inactive status: static rpmdb-rebuild.service: name: rpmdb-rebuild.service source: systemd state: inactive status: disabled selinux-autorelabel-mark.service: name: selinux-autorelabel-mark.service source: systemd state: stopped status: enabled selinux-autorelabel.service: name: selinux-autorelabel.service source: systemd state: inactive status: static selinux-check-proper-disable.service: name: selinux-check-proper-disable.service source: systemd state: inactive status: disabled serial-getty@.service: name: serial-getty@.service source: systemd state: unknown status: disabled sntp.service: name: sntp.service source: systemd state: stopped status: not-found sshd-keygen@.service: name: sshd-keygen@.service source: systemd state: unknown status: disabled 
sshd-keygen@ecdsa.service: name: sshd-keygen@ecdsa.service source: systemd state: stopped status: inactive sshd-keygen@ed25519.service: name: sshd-keygen@ed25519.service source: systemd state: stopped status: inactive sshd-keygen@rsa.service: name: sshd-keygen@rsa.service source: systemd state: stopped status: inactive sshd.service: name: sshd.service source: systemd state: running status: enabled sshd@.service: name: sshd@.service source: systemd state: unknown status: static sssd-autofs.service: name: sssd-autofs.service source: systemd state: inactive status: indirect sssd-nss.service: name: sssd-nss.service source: systemd state: inactive status: indirect sssd-pac.service: name: sssd-pac.service source: systemd state: inactive status: indirect sssd-pam.service: name: sssd-pam.service source: systemd state: inactive status: indirect sssd-ssh.service: name: sssd-ssh.service source: systemd state: inactive status: indirect sssd-sudo.service: name: sssd-sudo.service source: systemd state: inactive status: indirect sssd.service: name: sssd.service source: systemd state: stopped status: enabled stalld.service: name: stalld.service source: systemd state: inactive status: disabled syslog.service: name: syslog.service source: systemd state: stopped status: not-found system-update-cleanup.service: name: system-update-cleanup.service source: systemd state: inactive status: static systemd-ask-password-console.service: name: systemd-ask-password-console.service source: systemd state: stopped status: static systemd-ask-password-wall.service: name: systemd-ask-password-wall.service source: systemd state: stopped status: static systemd-backlight@.service: name: systemd-backlight@.service source: systemd state: unknown status: static systemd-binfmt.service: name: systemd-binfmt.service source: systemd state: stopped status: static systemd-bless-boot.service: name: systemd-bless-boot.service source: systemd state: inactive status: static systemd-boot-check-no-failures.service: 
name: systemd-boot-check-no-failures.service source: systemd state: inactive status: disabled systemd-boot-random-seed.service: name: systemd-boot-random-seed.service source: systemd state: stopped status: static systemd-boot-update.service: name: systemd-boot-update.service source: systemd state: stopped status: enabled systemd-coredump@.service: name: systemd-coredump@.service source: systemd state: unknown status: static systemd-exit.service: name: systemd-exit.service source: systemd state: inactive status: static systemd-fsck-root.service: name: systemd-fsck-root.service source: systemd state: stopped status: static systemd-fsck@.service: name: systemd-fsck@.service source: systemd state: unknown status: static systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service: name: systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service source: systemd state: stopped status: active systemd-growfs-root.service: name: systemd-growfs-root.service source: systemd state: inactive status: static systemd-growfs@.service: name: systemd-growfs@.service source: systemd state: unknown status: static systemd-halt.service: name: systemd-halt.service source: systemd state: inactive status: static systemd-hibernate-resume@.service: name: systemd-hibernate-resume@.service source: systemd state: unknown status: static systemd-hibernate.service: name: systemd-hibernate.service source: systemd state: inactive status: static systemd-hostnamed.service: name: systemd-hostnamed.service source: systemd state: running status: static systemd-hwdb-update.service: name: systemd-hwdb-update.service source: systemd state: stopped status: static systemd-hybrid-sleep.service: name: systemd-hybrid-sleep.service source: systemd state: inactive status: static systemd-initctl.service: name: systemd-initctl.service source: systemd state: stopped status: static systemd-journal-catalog-update.service: name: 
systemd-journal-catalog-update.service source: systemd state: stopped status: static systemd-journal-flush.service: name: systemd-journal-flush.service source: systemd state: stopped status: static systemd-journal-gatewayd.service: name: systemd-journal-gatewayd.service source: systemd state: inactive status: indirect systemd-journal-remote.service: name: systemd-journal-remote.service source: systemd state: inactive status: indirect systemd-journal-upload.service: name: systemd-journal-upload.service source: systemd state: inactive status: disabled systemd-journald.service: name: systemd-journald.service source: systemd state: running status: static systemd-journald@.service: name: systemd-journald@.service source: systemd state: unknown status: static systemd-kexec.service: name: systemd-kexec.service source: systemd state: inactive status: static systemd-localed.service: name: systemd-localed.service source: systemd state: inactive status: static systemd-logind.service: name: systemd-logind.service source: systemd state: running status: static systemd-machine-id-commit.service: name: systemd-machine-id-commit.service source: systemd state: stopped status: static systemd-modules-load.service: name: systemd-modules-load.service source: systemd state: stopped status: static systemd-network-generator.service: name: systemd-network-generator.service source: systemd state: stopped status: enabled systemd-pcrfs-root.service: name: systemd-pcrfs-root.service source: systemd state: inactive status: static systemd-pcrfs@.service: name: systemd-pcrfs@.service source: systemd state: unknown status: static systemd-pcrmachine.service: name: systemd-pcrmachine.service source: systemd state: stopped status: static systemd-pcrphase-initrd.service: name: systemd-pcrphase-initrd.service source: systemd state: stopped status: static systemd-pcrphase-sysinit.service: name: systemd-pcrphase-sysinit.service source: systemd state: stopped status: static systemd-pcrphase.service: name: 
systemd-pcrphase.service source: systemd state: stopped status: static systemd-poweroff.service: name: systemd-poweroff.service source: systemd state: inactive status: static systemd-pstore.service: name: systemd-pstore.service source: systemd state: stopped status: enabled systemd-quotacheck.service: name: systemd-quotacheck.service source: systemd state: stopped status: static systemd-random-seed.service: name: systemd-random-seed.service source: systemd state: stopped status: static systemd-reboot.service: name: systemd-reboot.service source: systemd state: inactive status: static systemd-remount-fs.service: name: systemd-remount-fs.service source: systemd state: stopped status: enabled-runtime systemd-repart.service: name: systemd-repart.service source: systemd state: stopped status: masked systemd-rfkill.service: name: systemd-rfkill.service source: systemd state: stopped status: static systemd-suspend-then-hibernate.service: name: systemd-suspend-then-hibernate.service source: systemd state: inactive status: static systemd-suspend.service: name: systemd-suspend.service source: systemd state: inactive status: static systemd-sysctl.service: name: systemd-sysctl.service source: systemd state: stopped status: static systemd-sysext.service: name: systemd-sysext.service source: systemd state: stopped status: disabled systemd-sysupdate-reboot.service: name: systemd-sysupdate-reboot.service source: systemd state: inactive status: indirect systemd-sysupdate.service: name: systemd-sysupdate.service source: systemd state: inactive status: indirect systemd-sysusers.service: name: systemd-sysusers.service source: systemd state: stopped status: static systemd-timedated.service: name: systemd-timedated.service source: systemd state: inactive status: static systemd-timesyncd.service: name: systemd-timesyncd.service source: systemd state: stopped status: not-found systemd-tmpfiles-clean.service: name: systemd-tmpfiles-clean.service source: systemd state: stopped status: 
static systemd-tmpfiles-setup-dev.service: name: systemd-tmpfiles-setup-dev.service source: systemd state: stopped status: static systemd-tmpfiles-setup.service: name: systemd-tmpfiles-setup.service source: systemd state: stopped status: static systemd-tmpfiles.service: name: systemd-tmpfiles.service source: systemd state: stopped status: not-found systemd-udev-settle.service: name: systemd-udev-settle.service source: systemd state: stopped status: static systemd-udev-trigger.service: name: systemd-udev-trigger.service source: systemd state: stopped status: static systemd-udevd.service: name: systemd-udevd.service source: systemd state: running status: static systemd-update-done.service: name: systemd-update-done.service source: systemd state: stopped status: static systemd-update-utmp-runlevel.service: name: systemd-update-utmp-runlevel.service source: systemd state: stopped status: static systemd-update-utmp.service: name: systemd-update-utmp.service source: systemd state: stopped status: static systemd-user-sessions.service: name: systemd-user-sessions.service source: systemd state: stopped status: static systemd-vconsole-setup.service: name: systemd-vconsole-setup.service source: systemd state: stopped status: static systemd-volatile-root.service: name: systemd-volatile-root.service source: systemd state: inactive status: static systemd-zram-setup@.service: name: systemd-zram-setup@.service source: systemd state: unknown status: static teamd@.service: name: teamd@.service source: systemd state: unknown status: static unbound-anchor.service: name: unbound-anchor.service source: systemd state: stopped status: static user-runtime-dir@.service: name: user-runtime-dir@.service source: systemd state: unknown status: static user-runtime-dir@0.service: name: user-runtime-dir@0.service source: systemd state: stopped status: active user-runtime-dir@1000.service: name: user-runtime-dir@1000.service source: systemd state: stopped status: active user@.service: name: 
user@.service source: systemd state: unknown status: static user@0.service: name: user@0.service source: systemd state: running status: active user@1000.service: name: user@1000.service source: systemd state: running status: active vgauthd.service: name: vgauthd.service source: systemd state: stopped status: enabled vmtoolsd.service: name: vmtoolsd.service source: systemd state: stopped status: enabled wait-for-primary-ip.service: name: wait-for-primary-ip.service source: systemd state: stopped status: enabled ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDs7MQU61ADe4LfEllZo6w2h2Vo1Z9nNArIkKGmgua8bOly2nQBIoDIKgNOXqUpoIZx1528UeeHSQu9SxYL21mo= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIDKHFhjB7ae+dVOClQLGXnCaMXGjEeLhmEhxE64Ddkhe ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCr2rWpvTGLA5BK4eYXB55gorB9vAJK1K0iUmnm+r9AcvcXH33bR/O6ZNh9h85mHU5l1Gw9nBLRbHn42EU+6Ht6te2Z1gIiJEKpfiC0sR0aMcT4hKQWHmwYqQM/VLXhPiS4OnhO1OJuz0arj1Anr1hDcEJpVTAj3sbfkgzzbBeEWMg2V3Apr1fqDimNlyWRiDFy3TUdKfnB7nucGaGbHneeVxvwv81RGur6I9VHZe/odqEQTGRUBXdu57xybxd6Yc3863ayL5L1OhGTN/x7d8qeEJGb9zt6VvtFWlpVjIXa2l+uTZVfTvufdLwxJdBRg0kHMXH2ZJ3U8w9NRHMBHG7M6YjX0w95uCB/FnyN6s8V/KRQtSnC6Wt6YMP438rM2K9yydXdS/qUQm5hQLP7eY8/Nl4+RDQAvZOjPp+DeUxXfZOqR4qq8tCKi/5Cvd7ChYfPyymeV4RKAJf971EuO0zphyDK8knic0c2XTybK6WTM8lYcbUMYJxg1CW5o1VMjpk= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 59 user_dir: /var/home/core user_gecos: CoreOS Admin user_gid: 1000 user_id: core user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack ansible_fibre_channel_wwn: [] 
ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: crc ansible_host: 38.102.83.220 ansible_hostname: crc ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:fe28b1dc-f424-4106-9c95-00604d2bcd5f ansible_interfaces: - br-int - ens3 - eth10 - lo - br-ex - ovn-k8s-mp0 - ovs-system ansible_inventory_sources: - /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml ansible_is_chroot: true ansible_iscsi_iqn: iqn.1994-05.com.redhat:24fed7ce643e ansible_kernel: 5.14.0-427.22.1.el9_4.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Mon Jun 10 09:23:36 EDT 2024' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] 
tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: on [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.12 1m: 1.31 5m: 0.35 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.220 - 127.0.0.0/8 - 127.0.0.1 - 192.168.126.11 ipv6: - ::1 - fe80::4cdf:649e:2e5b:3790 ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: c1bd596843fb445da20eca66471ddf66 ansible_memfree_mb: 28973 ansible_memory_mb: nocache: free: 30295 used: 1800 real: free: 28973 total: 32095 used: 3122 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 32095 ansible_mounts: - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /sysroot options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 
54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /etc options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /usr options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /sysroot/ostree/deploy/rhcos/var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220385 block_size: 4096 block_total: 20823803 block_used: 7603418 device: /dev/vda4 fstype: xfs inode_available: 41489053 inode_total: 41680320 inode_used: 191267 mount: /var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150696960 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 221344 block_size: 1024 block_total: 358271 block_used: 136927 device: /dev/vda3 fstype: ext4 inode_available: 97936 inode_total: 98304 inode_used: 368 mount: /boot options: ro,seclabel,nosuid,nodev,relatime size_available: 226656256 size_total: 366869504 uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 - block_available: 0 block_size: 2048 block_total: 241 block_used: 241 device: 
/dev/sr0 fstype: iso9660 inode_available: 0 inode_total: 0 inode_used: 0 mount: /tmp/openstack-config-drive options: ro,relatime,nojoliet,check=s,map=n,blocksize=2048 size_available: 0 size_total: 493568 uuid: 2026-01-20-10-41-28-00 ansible_nodename: crc ansible_os_family: RedHat ansible_ovn_k8s_mp0: active: false device: ovn-k8s-mp0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' 
tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: b6:dc:d9:26:03:d4 mtu: 1400 promisc: true timestamping: [] type: ether ansible_ovs_system: active: false device: ovs-system features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' 
tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 76:e5:3b:bf:03:29 mtu: 1500 promisc: true timestamping: [] type: ether ansible_pkg_mgr: atomic_container ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor - '8' - AuthenticAMD - AMD EPYC-Rome Processor - '9' - AuthenticAMD - AMD EPYC-Rome Processor - '10' - AuthenticAMD - AMD EPYC-Rome Processor - '11' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 12 ansible_processor_nproc: 12 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 12 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.3.1 ansible_python: executable: /usr/bin/python3.9 has_sslcontext: true type: cpython version: major: 3 micro: 18 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 18 - final - 0 
ansible_python_interpreter: auto ansible_python_version: 3.9.18 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDs7MQU61ADe4LfEllZo6w2h2Vo1Z9nNArIkKGmgua8bOly2nQBIoDIKgNOXqUpoIZx1528UeeHSQu9SxYL21mo= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIDKHFhjB7ae+dVOClQLGXnCaMXGjEeLhmEhxE64Ddkhe ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCr2rWpvTGLA5BK4eYXB55gorB9vAJK1K0iUmnm+r9AcvcXH33bR/O6ZNh9h85mHU5l1Gw9nBLRbHn42EU+6Ht6te2Z1gIiJEKpfiC0sR0aMcT4hKQWHmwYqQM/VLXhPiS4OnhO1OJuz0arj1Anr1hDcEJpVTAj3sbfkgzzbBeEWMg2V3Apr1fqDimNlyWRiDFy3TUdKfnB7nucGaGbHneeVxvwv81RGur6I9VHZe/odqEQTGRUBXdu57xybxd6Yc3863ayL5L1OhGTN/x7d8qeEJGb9zt6VvtFWlpVjIXa2l+uTZVfTvufdLwxJdBRg0kHMXH2ZJ3U8w9NRHMBHG7M6YjX0w95uCB/FnyN6s8V/KRQtSnC6Wt6YMP438rM2K9yydXdS/qUQm5hQLP7eY8/Nl4+RDQAvZOjPp+DeUxXfZOqR4qq8tCKi/5Cvd7ChYfPyymeV4RKAJf971EuO0zphyDK8knic0c2XTybK6WTM8lYcbUMYJxg1CW5o1VMjpk= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 59 ansible_user: core ansible_user_dir: /var/home/core ansible_user_gecos: CoreOS Admin ansible_user_gid: 1000 ansible_user_id: core ansible_user_shell: /bin/bash ansible_user_uid: 1000 
ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_architecture_repo: /var/home/core/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_default_dns_servers: - 1.1.1.1 - 8.8.8.8 cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@/var/home/core/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/multinode-ci.yml' - '@/var/home/core/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/horizon.yml' - '@/var/home/core/src/github.com/openstack-k8s-operators/watcher-operator/ci/scenarios/edpm-no-notifications.yml' - '@/var/home/core/src/github.com/openstack-k8s-operators/watcher-operator/ci/tests/watcher-tempest.yml' cifmw_installyamls_repos: /var/home/core/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_nolog: true cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: /var/home/core/.crc/machines/crc/kubeconfig cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_openstack_k8s_operators_org_url: https://github.com/openstack-k8s-operators cifmw_openstack_namespace: openstack cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 
38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_repo: /var/home/core/src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_relative: src/github.com/openstack-k8s-operators/ci-framework cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: podified-epoxy-centos9 cifmw_test_operator_tempest_registry: 38.102.83.51:5001 cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: 38.102.83.51:5001 cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 
content_provider_registry_available: true
content_provider_registry_ip: 38.102.83.51
content_provider_registry_ip_port: 38.102.83.51:5001
crc_ci_bootstrap_cloud_name: vexxhost
crc_ci_bootstrap_networking:
  instances:
    compute-0:
      networks:
        default: {ip: 192.168.122.100}
        internal-api: {config_nm: false, ip: 172.17.0.100}
        storage: {config_nm: false, ip: 172.18.0.100}
        tenant: {config_nm: false, ip: 172.19.0.100}
    compute-1:
      networks:
        default: {ip: 192.168.122.101}
        internal-api: {config_nm: false, ip: 172.17.0.101}
        storage: {config_nm: false, ip: 172.18.0.101}
        tenant: {config_nm: false, ip: 172.19.0.101}
    controller:
      networks:
        default: {ip: 192.168.122.11}
    crc:
      networks:
        default: {ip: 192.168.122.10}
        internal-api: {ip: 172.17.0.5}
        storage: {ip: 172.18.0.5}
        tenant: {ip: 172.19.0.5}
  networks:
    default: {mtu: '1500', range: 192.168.122.0/24, router_net: '', transparent: true}
    internal-api: {range: 172.17.0.0/24, vlan: 20}
    storage: {range: 172.18.0.0/24, vlan: 21}
    tenant: {range: 172.19.0.0/24, vlan: 22}
discovered_interpreter_python: /usr/bin/python3.9
enable_ramdisk: true
fetch_dlrn_hash: false
gather_subset:
- all
group_names:
- ocps
groups:
  all:
  - compute-0
  - compute-1
  - controller
  - crc
  computes:
  - compute-0
  - compute-1
  ocps:
  - crc
  ungrouped: *id001
  zuul_unreachable: []
inventory_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0
inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml
inventory_hostname: crc
inventory_hostname_short: crc
module_setup: true
nodepool:
  az: nova
  cloud: vexxhost-nodepool-tripleo
  external_id: a519b063-d122-47cf-ae3b-7548803df408
  host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411
  interface_ip: 38.102.83.220
  label: coreos-crc-extracted-2-39-0-3xl
  private_ipv4: 38.102.83.220
  private_ipv6: null
  provider: vexxhost-nodepool-tripleo
  public_ipv4: 38.102.83.220
  public_ipv6: ''
  region: RegionOne
  slot: null
omit: __omit_place_holder__3495a27fa3f7994641c1fe3f418eb5fa4a3ff705
operator_namespace: openstack-operators
playbook_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks
push_registry: quay.rdoproject.org
quay_login_secret_name: quay_nextgen_zuulgithubci
registry_login_enabled: true
services:
  NetworkManager-clean-initrd-state.service: {name: NetworkManager-clean-initrd-state.service, source: systemd, state: stopped, status: enabled}
  NetworkManager-dispatcher.service: {name: NetworkManager-dispatcher.service, source: systemd, state: running, status: enabled}
  NetworkManager-wait-online.service: {name: NetworkManager-wait-online.service, source: systemd, state: stopped, status: enabled}
  NetworkManager.service: {name: NetworkManager.service, source: systemd, state: running, status: enabled}
  afterburn-checkin.service: {name: afterburn-checkin.service, source: systemd, state: stopped, status: enabled}
  afterburn-firstboot-checkin.service: {name: afterburn-firstboot-checkin.service, source: systemd, state: stopped, status: enabled}
  afterburn-sshkeys@.service: {name: afterburn-sshkeys@.service, source: systemd, state: unknown, status: disabled}
  afterburn.service: {name: afterburn.service, source: systemd, state: inactive, status: disabled}
  arp-ethers.service: {name: arp-ethers.service, source: systemd, state: inactive, status: disabled}
  auditd.service: {name: auditd.service, source: systemd, state: running, status: enabled}
  auth-rpcgss-module.service: {name: auth-rpcgss-module.service, source: systemd, state: stopped, status: static}
  autovt@.service: {name: autovt@.service, source: systemd, state: unknown, status: alias}
  blk-availability.service: {name: blk-availability.service, source: systemd, state: stopped, status: disabled}
  bootc-fetch-apply-updates.service: {name: bootc-fetch-apply-updates.service, source: systemd, state: inactive, status: static}
  bootkube.service: {name: bootkube.service, source: systemd, state: inactive, status: disabled}
  bootupd.service: {name: bootupd.service, source: systemd, state: stopped, status: static}
  chrony-wait.service: {name: chrony-wait.service, source: systemd, state: inactive, status: disabled}
  chronyd-restricted.service: {name: chronyd-restricted.service, source: systemd, state: inactive, status: disabled}
  chronyd.service: {name: chronyd.service, source: systemd, state: running, status: enabled}
  clevis-luks-askpass.service: {name: clevis-luks-askpass.service, source: systemd, state: stopped, status: static}
  cni-dhcp.service: {name: cni-dhcp.service, source: systemd, state: inactive, status: disabled}
  configure-cloudinit-ssh.service: {name: configure-cloudinit-ssh.service, source: systemd, state: stopped, status: enabled}
  console-getty.service: {name: console-getty.service, source: systemd, state: inactive, status: disabled}
  console-login-helper-messages-gensnippet-ssh-keys.service: {name: console-login-helper-messages-gensnippet-ssh-keys.service, source: systemd, state: stopped, status: enabled}
  container-getty@.service: {name: container-getty@.service, source: systemd, state: unknown, status: static}
  coreos-generate-iscsi-initiatorname.service: {name: coreos-generate-iscsi-initiatorname.service, source: systemd, state: stopped, status: enabled}
  coreos-ignition-delete-config.service: {name: coreos-ignition-delete-config.service, source: systemd, state: stopped, status: enabled}
  coreos-ignition-firstboot-complete.service: {name: coreos-ignition-firstboot-complete.service, source: systemd, state: stopped, status: enabled}
  coreos-ignition-write-issues.service: {name: coreos-ignition-write-issues.service, source: systemd, state: stopped, status: enabled}
  coreos-installer-disable-device-auto-activation.service: {name: coreos-installer-disable-device-auto-activation.service, source: systemd, state: inactive, status: static}
  coreos-installer-noreboot.service: {name: coreos-installer-noreboot.service, source: systemd, state: inactive, status: static}
  coreos-installer-reboot.service: {name: coreos-installer-reboot.service, source: systemd, state: inactive, status: static}
  coreos-installer-secure-ipl-reboot.service: {name: coreos-installer-secure-ipl-reboot.service, source: systemd, state: inactive, status: static}
  coreos-installer.service: {name: coreos-installer.service, source: systemd, state: inactive, status: static}
  coreos-liveiso-success.service: {name: coreos-liveiso-success.service, source: systemd, state: stopped, status: enabled}
  coreos-platform-chrony-config.service: {name: coreos-platform-chrony-config.service, source: systemd, state: stopped, status: enabled}
  coreos-populate-lvmdevices.service: {name: coreos-populate-lvmdevices.service, source: systemd, state: stopped, status: enabled}
  coreos-printk-quiet.service: {name: coreos-printk-quiet.service, source: systemd, state: stopped, status: enabled}
  coreos-update-ca-trust.service: {name: coreos-update-ca-trust.service, source: systemd, state: stopped, status: enabled}
  crc-dnsmasq.service: {name: crc-dnsmasq.service, source: systemd, state: stopped, status: not-found}
  crc-pre.service: {name: crc-pre.service, source: systemd, state: stopped, status: enabled}
  crio-subid.service: {name: crio-subid.service, source: systemd, state: stopped, status: enabled}
  crio-wipe.service: {name: crio-wipe.service, source: systemd, state: stopped, status: disabled}
  crio.service: {name: crio.service, source: systemd, state: stopped, status: disabled}
  dbus-broker.service: {name: dbus-broker.service, source: systemd, state: running, status: enabled}
  dbus-org.freedesktop.hostname1.service: {name: dbus-org.freedesktop.hostname1.service, source: systemd, state: active, status: alias}
  dbus-org.freedesktop.locale1.service: {name: dbus-org.freedesktop.locale1.service, source: systemd, state: inactive, status: alias}
  dbus-org.freedesktop.login1.service: {name: dbus-org.freedesktop.login1.service, source: systemd, state: active, status: alias}
  dbus-org.freedesktop.nm-dispatcher.service: {name: dbus-org.freedesktop.nm-dispatcher.service, source: systemd, state: active, status: alias}
  dbus-org.freedesktop.timedate1.service: {name: dbus-org.freedesktop.timedate1.service, source: systemd, state: inactive, status: alias}
  dbus.service: {name: dbus.service, source: systemd, state: active, status: alias}
  debug-shell.service: {name: debug-shell.service, source: systemd, state: inactive, status: disabled}
  disable-mglru.service: {name: disable-mglru.service, source: systemd, state: stopped, status: enabled}
  display-manager.service: {name: display-manager.service, source: systemd, state: stopped, status: not-found}
  dm-event.service: {name: dm-event.service, source: systemd, state: stopped, status: static}
  dnf-makecache.service: {name: dnf-makecache.service, source: systemd, state: inactive, status: static}
  dnsmasq.service: {name: dnsmasq.service, source: systemd, state: running, status: enabled}
  dracut-cmdline.service: {name: dracut-cmdline.service, source: systemd, state: stopped, status: static}
  dracut-initqueue.service: {name: dracut-initqueue.service, source: systemd, state: stopped, status: static}
  dracut-mount.service: {name: dracut-mount.service, source: systemd, state: stopped, status: static}
  dracut-pre-mount.service: {name: dracut-pre-mount.service, source: systemd, state: stopped, status: static}
  dracut-pre-pivot.service: {name: dracut-pre-pivot.service, source: systemd, state: stopped, status: static}
  dracut-pre-trigger.service: {name: dracut-pre-trigger.service, source: systemd, state: stopped, status: static}
  dracut-pre-udev.service: {name: dracut-pre-udev.service, source: systemd, state: stopped, status: static}
  dracut-shutdown-onfailure.service: {name: dracut-shutdown-onfailure.service, source: systemd, state: stopped, status: static}
  dracut-shutdown.service: {name: dracut-shutdown.service, source: systemd, state: stopped, status: static}
  dummy-network.service: {name: dummy-network.service, source: systemd, state: stopped, status: enabled}
  emergency.service: {name: emergency.service, source: systemd, state: stopped, status: static}
  fcoe.service: {name: fcoe.service, source: systemd, state: stopped, status: not-found}
  fstrim.service: {name: fstrim.service, source: systemd, state: inactive, status: static}
  fwupd-offline-update.service: {name: fwupd-offline-update.service, source: systemd, state: inactive, status: static}
  fwupd-refresh.service: {name: fwupd-refresh.service, source: systemd, state: inactive, status: static}
  fwupd.service: {name: fwupd.service, source: systemd, state: inactive, status: static}
  gcp-routes.service: {name: gcp-routes.service, source: systemd, state: stopped, status: enabled}
  getty@.service: {name: getty@.service, source: systemd, state: unknown, status: enabled}
  getty@tty1.service: {name: getty@tty1.service, source: systemd, state: running, status: active}
  gssproxy.service: {name: gssproxy.service, source: systemd, state: stopped, status: disabled}
  gvisor-tap-vsock.service: {name: gvisor-tap-vsock.service, source: systemd, state: stopped, status: enabled}
  hypervfcopyd.service: {name: hypervfcopyd.service, source: systemd, state: inactive, status: static}
  hypervkvpd.service: {name: hypervkvpd.service, source: systemd, state: inactive, status: static}
  hypervvssd.service: {name: hypervvssd.service, source: systemd, state: inactive, status: static}
  ignition-delete-config.service: {name: ignition-delete-config.service, source: systemd, state: stopped, status: enabled}
  initrd-cleanup.service: {name: initrd-cleanup.service, source: systemd, state: stopped, status: static}
  initrd-parse-etc.service: {name: initrd-parse-etc.service, source: systemd, state: stopped, status: static}
  initrd-switch-root.service: {name: initrd-switch-root.service, source: systemd, state: stopped, status: static}
  initrd-udevadm-cleanup-db.service: {name: initrd-udevadm-cleanup-db.service, source: systemd, state: stopped, status: static}
  irqbalance.service: {name: irqbalance.service, source: systemd, state: running, status: enabled}
  iscsi-init.service: {name: iscsi-init.service, source: systemd, state: stopped, status: disabled}
  iscsi-onboot.service: {name: iscsi-onboot.service, source: systemd, state: stopped, status: enabled}
  iscsi-shutdown.service: {name: iscsi-shutdown.service, source: systemd, state: stopped, status: static}
  iscsi-starter.service: {name: iscsi-starter.service, source: systemd, state: inactive, status: disabled}
  iscsi.service: {name: iscsi.service, source: systemd, state: stopped, status: indirect}
  iscsid.service: {name: iscsid.service, source: systemd, state: stopped, status: disabled}
  iscsiuio.service: {name: iscsiuio.service, source: systemd, state: stopped, status: disabled}
  kdump.service: {name: kdump.service, source: systemd, state: stopped, status: disabled}
  kmod-static-nodes.service: {name: kmod-static-nodes.service, source: systemd, state: stopped, status: static}
  kubelet-auto-node-size.service: {name: kubelet-auto-node-size.service, source: systemd, state: stopped, status: enabled}
  kubelet-cleanup.service: {name: kubelet-cleanup.service, source: systemd, state: stopped, status: enabled}
  kubelet.service: {name: kubelet.service, source: systemd, state: stopped, status: disabled}
  kubens.service: {name: kubens.service, source: systemd, state: stopped, status: disabled}
  ldconfig.service: {name: ldconfig.service, source: systemd, state: stopped, status: static}
  logrotate.service: {name: logrotate.service, source: systemd, state: stopped, status: static}
  lvm2-activation-early.service: {name: lvm2-activation-early.service, source: systemd, state: stopped, status: not-found}
  lvm2-lvmpolld.service: {name: lvm2-lvmpolld.service, source: systemd, state: stopped, status: static}
  lvm2-monitor.service: {name: lvm2-monitor.service, source: systemd, state: stopped, status: enabled}
  machine-config-daemon-firstboot.service: {name: machine-config-daemon-firstboot.service, source: systemd, state: stopped, status: enabled}
  machine-config-daemon-pull.service: {name: machine-config-daemon-pull.service, source: systemd, state: stopped, status: enabled}
  mdadm-grow-continue@.service: {name: mdadm-grow-continue@.service, source: systemd, state: unknown, status: static}
  mdadm-last-resort@.service: {name: mdadm-last-resort@.service, source: systemd, state: unknown, status: static}
  mdcheck_continue.service: {name: mdcheck_continue.service, source: systemd, state: inactive, status: static}
  mdcheck_start.service: {name: mdcheck_start.service, source: systemd, state: inactive, status: static}
  mdmon@.service: {name: mdmon@.service, source: systemd, state: unknown, status: static}
  mdmonitor-oneshot.service: {name: mdmonitor-oneshot.service, source: systemd, state: inactive, status: static}
  mdmonitor.service: {name: mdmonitor.service, source: systemd, state: stopped, status: enabled}
  microcode.service: {name: microcode.service, source: systemd, state: stopped, status: enabled}
  modprobe@.service: {name: modprobe@.service, source: systemd, state: unknown, status: static}
  modprobe@configfs.service: {name: modprobe@configfs.service, source: systemd, state: stopped, status: inactive}
  modprobe@drm.service: {name: modprobe@drm.service, source: systemd, state: stopped, status: inactive}
  modprobe@efi_pstore.service: {name: modprobe@efi_pstore.service, source: systemd, state: stopped, status: inactive}
  modprobe@fuse.service: {name: modprobe@fuse.service, source: systemd, state: stopped, status: inactive}
  multipathd.service: {name: multipathd.service, source: systemd, state: stopped, status: enabled}
  netavark-dhcp-proxy.service: {name: netavark-dhcp-proxy.service, source: systemd, state: inactive, status: disabled}
  netavark-firewalld-reload.service: {name: netavark-firewalld-reload.service, source: systemd, state: inactive, status: disabled}
  network.service: {name: network.service, source: systemd, state: stopped, status: not-found}
  nfs-blkmap.service: {name: nfs-blkmap.service, source: systemd, state: inactive, status: disabled}
  nfs-idmapd.service: {name: nfs-idmapd.service, source: systemd, state: stopped, status: static}
  nfs-mountd.service: {name: nfs-mountd.service, source: systemd, state: stopped, status: static}
  nfs-server.service: {name: nfs-server.service, source: systemd, state: stopped, status: disabled}
  nfs-utils.service: {name: nfs-utils.service, source: systemd, state: stopped, status: static}
  nfsdcld.service: {name: nfsdcld.service, source: systemd, state: stopped, status: static}
  nftables.service: {name: nftables.service, source: systemd, state: inactive, status: disabled}
  nis-domainname.service: {name: nis-domainname.service, source: systemd, state: inactive, status: disabled}
  nm-cloud-setup.service: {name: nm-cloud-setup.service, source: systemd, state: inactive, status: disabled}
  nm-priv-helper.service: {name: nm-priv-helper.service, source: systemd, state: inactive, status: static}
  nmstate.service: {name: nmstate.service, source: systemd, state: stopped, status: enabled}
  node-valid-hostname.service: {name: node-valid-hostname.service, source: systemd, state: stopped, status: enabled}
  nodeip-configuration.service: {name: nodeip-configuration.service, source: systemd, state: stopped, status: enabled}
  ntpd.service: {name: ntpd.service, source: systemd, state: stopped, status: not-found}
  ntpdate.service: {name: ntpdate.service, source: systemd, state: stopped, status: not-found}
  nvmefc-boot-connections.service: {name: nvmefc-boot-connections.service, source: systemd, state: stopped, status: enabled}
  nvmf-autoconnect.service: {name: nvmf-autoconnect.service, source: systemd, state: inactive, status: disabled}
  nvmf-connect@.service: {name: nvmf-connect@.service, source: systemd, state: unknown, status: static}
  openvswitch.service: {name: openvswitch.service, source: systemd, state: stopped, status: enabled}
  ostree-boot-complete.service: {name: ostree-boot-complete.service, source: systemd, state: stopped, status: enabled-runtime}
  ostree-finalize-staged-hold.service: {name: ostree-finalize-staged-hold.service, source: systemd, state: stopped, status: static}
  ostree-finalize-staged.service: {name: ostree-finalize-staged.service, source: systemd, state: stopped, status: static}
  ostree-prepare-root.service: {name: ostree-prepare-root.service, source: systemd, state: inactive, status: static}
  ostree-readonly-sysroot-migration.service: {name: ostree-readonly-sysroot-migration.service, source: systemd, state: stopped, status: disabled}
  ostree-remount.service: {name: ostree-remount.service, source: systemd, state: stopped, status: enabled}
  ostree-state-overlay@.service: {name: ostree-state-overlay@.service, source: systemd, state: unknown, status: disabled}
  ovs-configuration.service: {name: ovs-configuration.service, source: systemd, state: stopped, status: enabled}
  ovs-delete-transient-ports.service: {name: ovs-delete-transient-ports.service, source: systemd, state: stopped, status: static}
  ovs-vswitchd.service: {name: ovs-vswitchd.service, source: systemd, state: running, status: static}
  ovsdb-server.service: {name: ovsdb-server.service, source: systemd, state: running, status: static}
  pam_namespace.service: {name: pam_namespace.service, source: systemd, state: inactive, status: static}
  plymouth-quit-wait.service: {name: plymouth-quit-wait.service, source: systemd, state: stopped, status: not-found}
  plymouth-read-write.service: {name: plymouth-read-write.service, source: systemd, state: stopped, status: not-found}
  plymouth-start.service: {name: plymouth-start.service, source: systemd, state: stopped, status: not-found}
  podman-auto-update.service: {name: podman-auto-update.service, source: systemd, state: inactive, status: disabled}
  podman-clean-transient.service: {name: podman-clean-transient.service, source: systemd, state: inactive, status: disabled}
  podman-kube@.service: {name: podman-kube@.service, source: systemd, state: unknown, status: disabled}
  podman-restart.service: {name: podman-restart.service, source: systemd, state: inactive, status: disabled}
  podman.service: {name: podman.service, source: systemd, state: stopped, status: disabled}
  polkit.service: {name: polkit.service, source: systemd, state: inactive, status: static}
  qemu-guest-agent.service: {name: qemu-guest-agent.service, source: systemd, state: stopped, status: enabled}
  quotaon.service: {name: quotaon.service, source: systemd, state: inactive, status: static}
  raid-check.service: {name: raid-check.service, source: systemd, state: inactive, status: static}
  rbdmap.service: {name: rbdmap.service, source: systemd, state: stopped, status: not-found}
  rc-local.service: {name: rc-local.service, source: systemd, state: stopped, status: static}
  rdisc.service: {name: rdisc.service, source: systemd, state: inactive, status: disabled}
  rdma-load-modules@.service: {name: rdma-load-modules@.service, source: systemd, state: unknown, status: static}
  rdma-ndd.service: {name: rdma-ndd.service, source: systemd, state: inactive, status: static}
  rescue.service: {name: rescue.service, source: systemd, state: stopped, status: static}
  rhcos-usrlocal-selinux-fixup.service: {name: rhcos-usrlocal-selinux-fixup.service, source: systemd, state: stopped, status: enabled}
  rpc-gssd.service: {name: rpc-gssd.service, source: systemd, state: stopped, status: static}
  rpc-statd-notify.service: {name: rpc-statd-notify.service, source: systemd, state: stopped, status: static}
  rpc-statd.service: {name: rpc-statd.service, source: systemd, state: stopped, status: static}
  rpc-svcgssd.service: {name: rpc-svcgssd.service, source: systemd, state: stopped, status: not-found}
  rpcbind.service: {name: rpcbind.service, source: systemd, state: stopped, status: disabled}
  rpm-ostree-bootstatus.service: {name: rpm-ostree-bootstatus.service, source: systemd, state: inactive, status: disabled}
  rpm-ostree-countme.service: {name: rpm-ostree-countme.service, source: systemd, state: inactive, status: static}
  rpm-ostree-fix-shadow-mode.service: {name: rpm-ostree-fix-shadow-mode.service, source: systemd, state: stopped, status: disabled}
  rpm-ostreed-automatic.service: {name: rpm-ostreed-automatic.service, source: systemd, state: inactive, status: static}
  rpm-ostreed.service: {name: rpm-ostreed.service, source: systemd, state: inactive, status: static}
  rpmdb-rebuild.service: {name: rpmdb-rebuild.service, source: systemd, state: inactive, status: disabled}
  selinux-autorelabel-mark.service: {name: selinux-autorelabel-mark.service, source: systemd, state: stopped, status: enabled}
  selinux-autorelabel.service: {name: selinux-autorelabel.service, source: systemd, state: inactive, status: static}
  selinux-check-proper-disable.service: {name: selinux-check-proper-disable.service, source: systemd, state: inactive, status: disabled}
  serial-getty@.service: {name: serial-getty@.service, source: systemd, state: unknown, status: disabled}
  sntp.service: {name: sntp.service, source: systemd, state: stopped, status: not-found}
  sshd-keygen@.service: {name: sshd-keygen@.service, source: systemd, state: unknown, status: disabled}
  sshd-keygen@ecdsa.service: {name: sshd-keygen@ecdsa.service, source: systemd, state: stopped, status: inactive}
  sshd-keygen@ed25519.service: {name: sshd-keygen@ed25519.service, source: systemd, state: stopped, status: inactive}
  sshd-keygen@rsa.service: {name: sshd-keygen@rsa.service, source: systemd, state: stopped, status: inactive}
  sshd.service: {name: sshd.service, source: systemd, state: running, status: enabled}
  sshd@.service: {name: sshd@.service, source: systemd, state: unknown, status: static}
  sssd-autofs.service: {name: sssd-autofs.service, source: systemd, state: inactive, status: indirect}
  sssd-nss.service: {name: sssd-nss.service, source: systemd, state: inactive, status: indirect}
  sssd-pac.service: {name: sssd-pac.service, source: systemd, state: inactive, status: indirect}
  sssd-pam.service: {name: sssd-pam.service, source: systemd, state: inactive, status: indirect}
  sssd-ssh.service: {name: sssd-ssh.service, source: systemd, state: inactive, status: indirect}
  sssd-sudo.service: {name: sssd-sudo.service, source: systemd, state: inactive, status: indirect}
  sssd.service: {name: sssd.service, source: systemd, state: stopped, status: enabled}
  stalld.service: {name: stalld.service, source: systemd, state: inactive, status: disabled}
  syslog.service: {name: syslog.service, source: systemd, state: stopped, status: not-found}
  system-update-cleanup.service: {name: system-update-cleanup.service, source: systemd, state: inactive, status: static}
  systemd-ask-password-console.service: {name: systemd-ask-password-console.service, source: systemd, state: stopped, status: static}
  systemd-ask-password-wall.service: {name: systemd-ask-password-wall.service, source: systemd, state: stopped, status: static}
  systemd-backlight@.service: {name: systemd-backlight@.service, source: systemd, state: unknown, status: static}
  systemd-binfmt.service: {name: systemd-binfmt.service, source: systemd, state: stopped, status: static}
  systemd-bless-boot.service: {name: systemd-bless-boot.service, source: systemd, state: inactive, status: static}
  systemd-boot-check-no-failures.service: {name: systemd-boot-check-no-failures.service, source: systemd, state: inactive, status: disabled}
  systemd-boot-random-seed.service: {name: systemd-boot-random-seed.service, source: systemd, state: stopped, status: static}
  systemd-boot-update.service: {name: systemd-boot-update.service, source: systemd, state: stopped, status: enabled}
  systemd-coredump@.service: {name: systemd-coredump@.service, source: systemd, state: unknown, status: static}
  systemd-exit.service: {name: systemd-exit.service, source: systemd, state: inactive, status: static}
  systemd-fsck-root.service: {name: systemd-fsck-root.service, source: systemd, state: stopped, status: static}
  systemd-fsck@.service: {name: systemd-fsck@.service, source: systemd, state: unknown, status: static}
  systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service: {name: systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service, source: systemd, state: stopped, status: active}
  systemd-growfs-root.service: {name: systemd-growfs-root.service, source: systemd, state: inactive, status: static}
  systemd-growfs@.service: {name: systemd-growfs@.service, source: systemd, state: unknown, status: static}
  systemd-halt.service: {name: systemd-halt.service, source: systemd, state: inactive, status: static}
  systemd-hibernate-resume@.service: {name: systemd-hibernate-resume@.service, source: systemd, state: unknown, status: static}
  systemd-hibernate.service: {name: systemd-hibernate.service, source: systemd, state: inactive, status: static}
  systemd-hostnamed.service: {name: systemd-hostnamed.service, source: systemd, state: running, status: static}
  systemd-hwdb-update.service: {name: systemd-hwdb-update.service, source: systemd, state: stopped, status: static}
  systemd-hybrid-sleep.service: {name: systemd-hybrid-sleep.service, source: systemd, state: inactive, status: static}
  systemd-initctl.service: {name: systemd-initctl.service, source: systemd, state: stopped, status: static}
  systemd-journal-catalog-update.service: {name: systemd-journal-catalog-update.service, source: systemd, state: stopped, status: static}
  systemd-journal-flush.service: {name: systemd-journal-flush.service, source: systemd, state: stopped, status: static}
  systemd-journal-gatewayd.service: {name: systemd-journal-gatewayd.service, source: systemd, state: inactive, status: indirect}
  systemd-journal-remote.service: {name: systemd-journal-remote.service, source: systemd, state: inactive, status: indirect}
  systemd-journal-upload.service: {name: systemd-journal-upload.service, source: systemd, state: inactive, status: disabled}
  systemd-journald.service: {name: systemd-journald.service, source: systemd, state: running, status: static}
  systemd-journald@.service: {name: systemd-journald@.service, source: systemd, state: unknown, status: static}
  systemd-kexec.service: {name: systemd-kexec.service, source: systemd, state: inactive, status: static}
  systemd-localed.service: {name: systemd-localed.service, source: systemd, state: inactive, status: static}
  systemd-logind.service: {name: systemd-logind.service, source: systemd, state: running, status: static}
  systemd-machine-id-commit.service: {name: systemd-machine-id-commit.service, source: systemd, state: stopped, status: static}
  systemd-modules-load.service: {name: systemd-modules-load.service, source: systemd, state: stopped, status: static}
  systemd-network-generator.service: {name: systemd-network-generator.service, source: systemd, state: stopped, status: enabled}
  systemd-pcrfs-root.service: {name: systemd-pcrfs-root.service, source: systemd, state: inactive, status: static}
  systemd-pcrfs@.service: {name: systemd-pcrfs@.service, source: systemd, state: unknown, status: static}
  systemd-pcrmachine.service: {name: systemd-pcrmachine.service, source: systemd, state: stopped, status: static}
  systemd-pcrphase-initrd.service: {name: systemd-pcrphase-initrd.service, source: systemd, state: stopped, status: static}
  systemd-pcrphase-sysinit.service: {name: systemd-pcrphase-sysinit.service, source: systemd, state: stopped, status: static}
  systemd-pcrphase.service: {name: systemd-pcrphase.service, source: systemd, state: stopped, status: static}
  systemd-poweroff.service: {name: systemd-poweroff.service, source: systemd, state: inactive, status: static}
  systemd-pstore.service: {name: systemd-pstore.service, source: systemd, state: stopped, status: enabled}
  systemd-quotacheck.service: {name: systemd-quotacheck.service, source: systemd, state: stopped, status: static}
  systemd-random-seed.service: {name: systemd-random-seed.service, source: systemd, state: stopped, status: static}
  systemd-reboot.service: {name: systemd-reboot.service, source: systemd, state: inactive, status: static}
  systemd-remount-fs.service: {name: systemd-remount-fs.service, source: systemd, state: stopped, status: enabled-runtime}
  systemd-repart.service: {name: systemd-repart.service, source: systemd, state: stopped, status: masked}
  systemd-rfkill.service: {name: systemd-rfkill.service, source: systemd, state: stopped, status: static}
  systemd-suspend-then-hibernate.service: {name: systemd-suspend-then-hibernate.service, source: systemd, state: inactive, status: static}
  systemd-suspend.service: {name: systemd-suspend.service, source: systemd, state: inactive, status: static}
  systemd-sysctl.service: {name: systemd-sysctl.service, source: systemd, state: stopped, status: static}
  systemd-sysext.service: {name: systemd-sysext.service, source: systemd, state: stopped, status: disabled}
  systemd-sysupdate-reboot.service: {name: systemd-sysupdate-reboot.service, source: systemd, state: inactive, status: indirect}
  systemd-sysupdate.service: {name: systemd-sysupdate.service, source: systemd, state: inactive, status: indirect}
  systemd-sysusers.service: {name: systemd-sysusers.service, source: systemd, state: stopped, status: static}
  systemd-timedated.service: {name: systemd-timedated.service, source: systemd, state: inactive, status: static}
  systemd-timesyncd.service: {name: systemd-timesyncd.service, source: systemd, state: stopped, status: not-found}
  systemd-tmpfiles-clean.service: {name: systemd-tmpfiles-clean.service, source: systemd, state: stopped, status: static}
  systemd-tmpfiles-setup-dev.service: {name: systemd-tmpfiles-setup-dev.service, source: systemd, state: stopped, status: static}
  systemd-tmpfiles-setup.service: {name: systemd-tmpfiles-setup.service, source: systemd, state: stopped, status: static}
  systemd-tmpfiles.service: {name: systemd-tmpfiles.service, source: systemd, state: stopped, status: not-found}
  systemd-udev-settle.service: {name: systemd-udev-settle.service, source: systemd, state: stopped, status: static}
  systemd-udev-trigger.service: {name: systemd-udev-trigger.service, source: systemd, state: stopped, status: static}
  systemd-udevd.service: {name: systemd-udevd.service, source: systemd, state: running, status: static}
  systemd-update-done.service: {name: systemd-update-done.service, source: systemd, state: stopped, status: static}
  systemd-update-utmp-runlevel.service: {name: systemd-update-utmp-runlevel.service, source: systemd, state: stopped, status: static}
  systemd-update-utmp.service: {name: systemd-update-utmp.service, source: systemd, state: stopped, status: static}
  systemd-user-sessions.service: {name: systemd-user-sessions.service, source: systemd, state: stopped, status: static}
  systemd-vconsole-setup.service: {name: systemd-vconsole-setup.service, source: systemd, state: stopped, status: static}
  systemd-volatile-root.service: {name: systemd-volatile-root.service, source: systemd, state: inactive, status: static}
  systemd-zram-setup@.service: {name: systemd-zram-setup@.service, source: systemd, state: unknown, status: static}
  teamd@.service: {name: teamd@.service, source: systemd, state: unknown, status: static}
  unbound-anchor.service: {name: unbound-anchor.service, source: systemd, state: stopped, status: static}
  user-runtime-dir@.service: {name: user-runtime-dir@.service, source: systemd, state: unknown, status: static}
  user-runtime-dir@0.service: {name: user-runtime-dir@0.service, source: systemd, state: stopped, status: active}
  user-runtime-dir@1000.service: {name: user-runtime-dir@1000.service, source: systemd, state: stopped, status: active}
  user@.service: {name: user@.service, source: systemd, state: unknown, status: static}
  user@0.service: {name: user@0.service, source: systemd, state: running, status: active}
  user@1000.service: {name: user@1000.service, source: systemd, state: running, status: active}
  vgauthd.service: {name: vgauthd.service, source: systemd, state: stopped, status: enabled}
  vmtoolsd.service: {name: vmtoolsd.service, source: systemd, state: stopped, status: enabled}
  wait-for-primary-ip.service: {name: wait-for-primary-ip.service, source: systemd, state: stopped, status: enabled}
unsafe_vars:
  ansible_connection: ssh
  ansible_host: 38.102.83.220
  ansible_port: 22
  ansible_python_interpreter: auto
  ansible_user: core
  cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw
  cifmw_build_images_output: {}
  cifmw_dlrn_report_result: false
  cifmw_edpm_telemetry_enabled_exporters:
  - podman_exporter
  - openstack_network_exporter
  cifmw_extras:
  - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml'
  - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml'
  - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml'
  - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/tests/watcher-tempest.yml'
  cifmw_openshift_api: api.crc.testing:6443
  cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig'
  cifmw_openshift_password: '12**********89'
  cifmw_openshift_skip_tls_verify: true
  cifmw_openshift_user: kubeadmin
  cifmw_operator_build_output:
    operators:
      openstack-operator:
        git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd
        git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator
        image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd
        image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd
        image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd
      watcher-operator:
        git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6
        git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator
        image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6
        image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6
        image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6
  cifmw_test_operator_tempest_external_plugin:
  - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e
    changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin
    repository: https://opendev.org/openstack/watcher-tempest-plugin.git
  cifmw_test_operator_tempest_image_tag: watcher_latest
  cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}'
  cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}'
  cifmw_update_containers_openstack: false
  cifmw_update_containers_org: podified-epoxy-centos9
  cifmw_update_containers_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}'
  cifmw_update_containers_tag: watcher_latest
  cifmw_update_containers_watcher: true
  cifmw_use_libvirt: false
  cifmw_zuul_target_host: controller
  content_provider_dlrn_md5_hash: ''
  content_provider_gating_repo_available: false
  content_provider_gating_repo_url: ''
  content_provider_os_registry_namespace: podified-epoxy-centos9
  content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9
  content_provider_registry_available: true
  content_provider_registry_ip: 38.102.83.51
  content_provider_registry_ip_port: 38.102.83.51:5001
  crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}'
  crc_ci_bootstrap_networking:
    instances:
      compute-0:
        networks:
          default: {ip: 192.168.122.100}
          internal-api: {config_nm: false, ip: 172.17.0.100}
          storage: {config_nm: false, ip: 172.18.0.100}
          tenant: {config_nm: false, ip: 172.19.0.100}
      compute-1:
        networks:
          default: {ip: 192.168.122.101}
          internal-api: {config_nm: false, ip: 172.17.0.101}
          storage: {config_nm: false, ip: 172.18.0.101}
          tenant: {config_nm: false, ip: 172.19.0.101}
      controller:
        networks:
          default: {ip: 192.168.122.11}
      crc:
        networks:
          default: {ip: 192.168.122.10}
          internal-api: {ip: 172.17.0.5}
          storage: {ip: 172.18.0.5}
          tenant: {ip: 172.19.0.5}
    networks:
      default: {mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}', range: 192.168.122.0/24, router_net: '', transparent: true}
      internal-api: {range: 172.17.0.0/24, vlan: 20}
      storage: {range: 172.18.0.0/24, vlan: 21}
      tenant: {range: 172.19.0.0/24, vlan: 22}
  enable_ramdisk: true
  fetch_dlrn_hash: false
  nodepool:
    az: nova
    cloud: vexxhost-nodepool-tripleo
    external_id: a519b063-d122-47cf-ae3b-7548803df408
    host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411
    interface_ip: 38.102.83.220
    label: coreos-crc-extracted-2-39-0-3xl
    private_ipv4: 38.102.83.220
    private_ipv6: null
    provider: vexxhost-nodepool-tripleo
    public_ipv4: 38.102.83.220
    public_ipv6: ''
    region: RegionOne
    slot: null
  push_registry: quay.rdoproject.org
quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: main build: 90366c73dd19485aa9d993ddaab6d557 build_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null buildset: f20638613fa941389b7ed9d90fc6bce5 buildset_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for 
optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n \ spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 child_jobs: [] commit_id: 
581f46572d07c53c87a11aa044b02e73f253eea6 event_id: 2658e330-f5e9-11f0-9efc-87f0b8025f7f executor: hostname: ze02.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/inventory.yaml log_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/logs result_data_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/results.json src_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/src work_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work items: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null job: watcher-operator-validation-epoxy-ocp4-16 jobtags: [] max_attempts: 1 message: 
UmFiYml0bXEgdmhvc3QgYW5kIHVzZXIgc3VwcG9ydAoKQWRkIG5ldyBtZXNzYWdpbmdCdXMgYW5kIG5vdGlmaWNhdGlvbnNCdXMgaW50ZXJmYWNlcyB0byBob2xkIGNsdXN0ZXIsIHVzZXIgYW5kIHZob3N0IG5hbWVzIGZvciBvcHRpb25hbCB1c2FnZS4NClRoZSBjb250cm9sbGVyIGFkZHMgdGhlc2UgdmFsdWVzIHRvIHRoZSBUcmFuc3BvcnRVUkwgY3JlYXRlIHJlcXVlc3Qgd2hlbiBwcmVzZW50Lg0KDQpBZGRpdGlvbmFsbHksIHdlIG1pZ3JhdGUgUmFiYml0TVEgY2x1c3RlciBuYW1lIHRvIFJhYmJpdE1xIGNvbmZpZyBzdHJ1Y3QgdXNpbmcgRGVmYXVsdFJhYmJpdE1xQ29uZmlnIGZyb20gaW5mcmEtb3BlcmF0b3IgdG8gYXV0b21hdGljYWxseSBwb3B1bGF0ZSB0aGUgbmV3IENsdXN0ZXIgZmllbGQgZnJvbSBsZWdhY3kgUmFiYml0TXFDbHVzdGVyTmFtZS4NCg0KRXhhbXBsZSB1c2FnZToNCg0KYGBgDQogIHNwZWM6DQogICAgbWVzc2FnaW5nQnVzOg0KICAgICAgY2x1c3RlcjogcnBjLXJhYmJpdG1xDQogICAgICB1c2VyOiBycGMtdXNlcg0KICAgICAgdmhvc3Q6IHJwYy12aG9zdA0KICAgIG5vdGlmaWNhdGlvbnNCdXM6DQogICAgICBjbHVzdGVyOiBub3RpZmljYXRpb25zLXJhYmJpdG1xDQogICAgICB1c2VyOiBub3RpZmljYXRpb25zLXVzZXINCiAgICAgIHZob3N0OiBub3RpZmljYXRpb25zLXZob3N0DQpgYGANCg0KSmlyYTogaHR0cHM6Ly9pc3N1ZXMucmVkaGF0LmNvbS9icm93c2UvT1NQUkgtMjM4ODI= patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 pipeline: github-check playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb 
untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 playbooks: - path: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/edpm/run.yml roles: - checkout: main checkout_description: playbook branch link_name: ansible/playbook_0/role_0/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_0/ci-framework/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_1/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_1/config/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_2/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_2/zuul-jobs/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_3/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_3/rdo-jobs/roles post_review: false project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 
42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: zuul branch commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: zuul branch commit: 43c8ae13d85939e9a3f9cddbe838cbe4616199f7 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: zuul branch commit: 0121df8691096e0883637457925e4142353e35ba name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: zuul branch commit: bdf4c9385be5e3e04ff06f67f25d6993db70cf6e name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: main checkout_description: zuul branch commit: 06cd1004cb26b36ba1054ccf7875fad6248762c5 name: 
openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: zuul branch commit: 74e48015b997b0bb2efc1ad92a6937949f185a0e name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: zuul branch commit: 38e630804dada625f7b015f13f3ac5bb7192f4dd name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: zuul branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup github.com/openstack-k8s-operators/watcher-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator checkout: main checkout_description: zuul branch commit: 581f46572d07c53c87a11aa044b02e73f253eea6 name: openstack-k8s-operators/watcher-operator required: false short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: project default branch commit: 
691c03cc007bee9934da14cf46c86009616a2aef name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: project default branch commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/pull/320/head resources: {} tenant: rdoproject.org timeout: 10800 topic: null voting: true zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '0' zuul_execution_trusted: 'False' zuul_log_collection: false zuul_success: 'False' zuul_will_retry: 'False' inventory_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0 inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/post_playbook_0/inventory.yaml inventory_hostname: controller inventory_hostname_short: controller logfiles_dest_dir: /home/zuul/ci-framework-data/logs/2026-01-20_10-57 module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f43c7845-aaa4-43d2-b2cb-88299b0d01b3 host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411 interface_ip: 38.102.83.39 label: cloud-centos-9-stream-tripleo-vexxhost-medium private_ipv4: 38.102.83.39 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.39 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__3495a27fa3f7994641c1fe3f418eb5fa4a3ff705 operator_namespace: '{{ cifmw_install_yamls_defaults[''OPERATOR_NAMESPACE''] | default(''openstack-operators'') }}' play_hosts: *id002 playbook_dir: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks 
post_ctlplane_deploy: - name: Tune rabbitmq resources source: rabbitmq_tuning.yml type: playbook post_deploy: - inventory: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/hosts name: Download needed tools source: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml type: playbook - name: Patch Openstack Prometheus to enable admin API source: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/prometheus_admin_api.yaml type: playbook post_infra: - inventory: /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml name: Fetch nodes facts and save them as parameters source: fetch_compute_facts.yml type: playbook pre_deploy: - name: 80 Kustomize OpenStack CR source: control_plane_horizon.yml type: playbook pre_deploy_create_coo_subscription: - name: Deploy cluster-observability-operator source: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/deploy_cluster_observability_operator.yaml type: playbook pre_infra: - connection: local inventory: localhost, name: Download needed tools source: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml type: playbook pre_update: - inventory: /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml name: Fetch nodes facts and save them as parameters source: fetch_compute_facts.yml type: playbook push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true role_name: artifacts role_names: *id003 role_path: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/roles/artifacts role_uuid: fa163efc-24cc-51cb-a243-000000000030 unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.39 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: 
false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 
cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 
192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f43c7845-aaa4-43d2-b2cb-88299b0d01b3 host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411 interface_ip: 38.102.83.39 label: cloud-centos-9-stream-tripleo-vexxhost-medium private_ipv4: 38.102.83.39 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.39 public_ipv6: '' region: RegionOne slot: null push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: main build: 90366c73dd19485aa9d993ddaab6d557 build_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n \ cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: 
https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null buildset: f20638613fa941389b7ed9d90fc6bce5 buildset_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n \ cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when 
present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n \ cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 child_jobs: [] commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 event_id: 2658e330-f5e9-11f0-9efc-87f0b8025f7f executor: hostname: ze02.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/inventory.yaml log_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/logs result_data_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/results.json src_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/src work_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work items: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n \ cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: 
https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null job: watcher-operator-validation-epoxy-ocp4-16 jobtags: [] max_attempts: 1 message: UmFiYml0bXEgdmhvc3QgYW5kIHVzZXIgc3VwcG9ydAoKQWRkIG5ldyBtZXNzYWdpbmdCdXMgYW5kIG5vdGlmaWNhdGlvbnNCdXMgaW50ZXJmYWNlcyB0byBob2xkIGNsdXN0ZXIsIHVzZXIgYW5kIHZob3N0IG5hbWVzIGZvciBvcHRpb25hbCB1c2FnZS4NClRoZSBjb250cm9sbGVyIGFkZHMgdGhlc2UgdmFsdWVzIHRvIHRoZSBUcmFuc3BvcnRVUkwgY3JlYXRlIHJlcXVlc3Qgd2hlbiBwcmVzZW50Lg0KDQpBZGRpdGlvbmFsbHksIHdlIG1pZ3JhdGUgUmFiYml0TVEgY2x1c3RlciBuYW1lIHRvIFJhYmJpdE1xIGNvbmZpZyBzdHJ1Y3QgdXNpbmcgRGVmYXVsdFJhYmJpdE1xQ29uZmlnIGZyb20gaW5mcmEtb3BlcmF0b3IgdG8gYXV0b21hdGljYWxseSBwb3B1bGF0ZSB0aGUgbmV3IENsdXN0ZXIgZmllbGQgZnJvbSBsZWdhY3kgUmFiYml0TXFDbHVzdGVyTmFtZS4NCg0KRXhhbXBsZSB1c2FnZToNCg0KYGBgDQogIHNwZWM6DQogICAgbWVzc2FnaW5nQnVzOg0KICAgICAgY2x1c3RlcjogcnBjLXJhYmJpdG1xDQogICAgICB1c2VyOiBycGMtdXNlcg0KICAgICAgdmhvc3Q6IHJwYy12aG9zdA0KICAgIG5vdGlmaWNhdGlvbnNCdXM6DQogICAgICBjbHVzdGVyOiBub3RpZmljYXRpb25zLXJhYmJpdG1xDQogICAgICB1c2VyOiBub3RpZmljYXRpb25zLXVzZXINCiAgICAgIHZob3N0OiBub3RpZmljYXRpb25zLXZob3N0DQpgYGANCg0KSmlyYTogaHR0cHM6Ly9pc3N1ZXMucmVkaGF0LmNvbS9icm93c2UvT1NQUkgtMjM4ODI= patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 pipeline: github-check playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef trusted/project_2/review.rdoproject.org/rdo-jobs: 
canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 playbooks: - path: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/edpm/run.yml roles: - checkout: main checkout_description: playbook branch link_name: ansible/playbook_0/role_0/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_0/ci-framework/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_1/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_1/config/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_2/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_2/zuul-jobs/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_3/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: 
ansible/playbook_0/role_3/rdo-jobs/roles post_review: false project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: zuul branch commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: zuul branch commit: 43c8ae13d85939e9a3f9cddbe838cbe4616199f7 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: zuul branch commit: 0121df8691096e0883637457925e4142353e35ba name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: zuul branch 
commit: bdf4c9385be5e3e04ff06f67f25d6993db70cf6e name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: main checkout_description: zuul branch commit: 06cd1004cb26b36ba1054ccf7875fad6248762c5 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: zuul branch commit: 74e48015b997b0bb2efc1ad92a6937949f185a0e name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: zuul branch commit: 38e630804dada625f7b015f13f3ac5bb7192f4dd name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: zuul branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup github.com/openstack-k8s-operators/watcher-operator: canonical_hostname: github.com canonical_name: 
github.com/openstack-k8s-operators/watcher-operator checkout: main checkout_description: zuul branch commit: 581f46572d07c53c87a11aa044b02e73f253eea6 name: openstack-k8s-operators/watcher-operator required: false short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: project default branch commit: 691c03cc007bee9934da14cf46c86009616a2aef name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: project default branch commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/pull/320/head resources: {} tenant: rdoproject.org timeout: 10800 topic: null voting: true zuul_change_list: - watcher-operator zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '0' zuul_execution_trusted: 'False' zuul_log_collection: false zuul_success: 'False' zuul_will_retry: 'False' home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_002_run_hook_without_retry_fetch.sh0000644000175000017500000000205415133657577033350 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_002_run_hook_without_retry_fetch.log) 2>&1 export ANSIBLE_CONFIG="/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/ansible.cfg" export ANSIBLE_LOG_PATH="/home/zuul/ci-framework-data/logs/post_infra_fetch_nodes_facts_and_save_the.log" ansible-playbook -i /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml -e namespace=openstack -e 
"@/home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml" -e "cifmw_basedir=/home/zuul/ci-framework-data" -e "step=post_infra" -e "hook_name=fetch_nodes_facts_and_save_the" -e "playbook_dir=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/hooks/playbooks" /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/hooks/playbooks/fetch_compute_facts.yml home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_003_run_hook_without_retry_80.sh0000644000175000017500000000204115133657616032475 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_003_run_hook_without_retry_80.log) 2>&1 export ANSIBLE_CONFIG="/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/ansible.cfg" export ANSIBLE_LOG_PATH="/home/zuul/ci-framework-data/logs/pre_deploy_80_kustomize_openstack_cr.log" ansible-playbook -i /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml -e namespace=openstack -e "@/home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml" -e "cifmw_basedir=/home/zuul/ci-framework-data" -e "step=pre_deploy" -e "hook_name=80_kustomize_openstack_cr" -e "playbook_dir=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/hooks/playbooks" /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/hooks/playbooks/control_plane_horizon.yml ././@LongLink0000644000000000000000000000014600000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_004_run_hook_without_retry_create.shhome/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_004_run_hook_without_retry_create.s0000644000175000017500000000206415133657621033343 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_004_run_hook_without_retry_create.log) 2>&1 export ANSIBLE_CONFIG="/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/ansible.cfg" export ANSIBLE_LOG_PATH="/home/zuul/ci-framework-data/logs/pre_deploy_create_coo_subscription.log" ansible-playbook -i /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml -e namespace=openstack -e "@/home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml" -e "@/home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml" -e "cifmw_basedir=/home/zuul/ci-framework-data" -e "step=pre_deploy" -e "hook_name=create_coo_subscription" -e "playbook_dir=/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks" /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/deploy_cluster_observability_operator.yaml home/zuul/zuul-output/logs/ci-framework-data/artifacts/resolv.conf0000644000175000017500000000015215133657654024541 0ustar zuulzuul# Generated by NetworkManager nameserver 192.168.122.10 nameserver 199.204.44.24 nameserver 199.204.47.54 home/zuul/zuul-output/logs/ci-framework-data/artifacts/hosts0000644000175000017500000000023715133657654023447 0ustar zuulzuul127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 home/zuul/zuul-output/logs/ci-framework-data/artifacts/ip-network.txt0000644000175000017500000000315415133657654025225 0ustar zuulzuuldefault 
via 38.102.83.1 dev eth0 proto dhcp src 38.102.83.39 metric 100 38.102.83.0/24 dev eth0 proto kernel scope link src 38.102.83.39 metric 100 169.254.169.254 via 38.102.83.126 dev eth0 proto dhcp src 38.102.83.39 metric 100 192.168.122.0/24 dev eth1 proto kernel scope link src 192.168.122.11 metric 101 0: from all lookup local 32766: from all lookup main 32767: from all lookup default [ { "ifindex": 1, "ifname": "lo", "flags": [ "LOOPBACK","UP","LOWER_UP" ], "mtu": 65536, "qdisc": "noqueue", "operstate": "UNKNOWN", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "loopback", "address": "00:00:00:00:00:00", "broadcast": "00:00:00:00:00:00" },{ "ifindex": 2, "ifname": "eth0", "flags": [ "BROADCAST","MULTICAST","UP","LOWER_UP" ], "mtu": 1500, "qdisc": "fq_codel", "operstate": "UP", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "ether", "address": "fa:16:3e:71:e1:1f", "broadcast": "ff:ff:ff:ff:ff:ff", "altnames": [ "enp0s3","ens3" ] },{ "ifindex": 3, "ifname": "eth1", "flags": [ "BROADCAST","MULTICAST","UP","LOWER_UP" ], "mtu": 1500, "qdisc": "fq_codel", "operstate": "UP", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "ether", "address": "fa:16:3e:b8:b9:28", "broadcast": "ff:ff:ff:ff:ff:ff", "altnames": [ "enp0s7","ens7" ] } ] home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_check_for_oc.sh0000644000175000017500000000020715133657672027737 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_check_for_oc.log) 2>&1 command -v oc home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_run_openstack_must_gather.sh0000644000175000017500000000135115133657674032613 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_run_openstack_must_gather.log) 2>&1 timeout 2700.0 oc adm must-gather --image quay.io/openstack-k8s-operators/openstack-must-gather:latest 
--timeout 30m --host-network=False --dest-dir /home/zuul/ci-framework-data/logs/openstack-must-gather --volume-percentage=80 -- ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=$OPENSTACK_DATABASES SOS_EDPM=$SOS_EDPM SOS_DECOMPRESS=$SOS_DECOMPRESS gather 2>&1 || { rc=$? if [ $rc -eq 124 ]; then echo "The must gather command did not finish on time!" echo "2700.0 seconds was not enough to finish the task." fi } home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_prepare_root_ssh.sh0000644000175000017500000000122315133657712030703 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_prepare_root_ssh.log) 2>&1 ssh -i ~/.ssh/id_cifw core@api.crc.testing < >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_copy_logs_from_crc.log) 2>&1 scp -v -r -i ~/.ssh/id_cifw core@api.crc.testing:/tmp/crc-logs-artifacts /home/zuul/ci-framework-data/logs/crc/ home/zuul/zuul-output/logs/ci-framework-data/artifacts/zuul_inventory.yml0000644000175000017500000016517515133657744026240 0ustar zuulzuulall: children: computes: hosts: compute-0: null compute-1: null ocps: hosts: crc: null zuul_unreachable: hosts: {} hosts: compute-0: ansible_connection: ssh ansible_host: 38.102.83.147 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. 
src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' 
cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: ce914a02-f144-4e95-badb-0391d5aba71b host_id: 5519e7a0ee5dc826795d295efc9c908d171b61deb9bf71b1016f861f interface_ip: 38.102.83.147 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.147 private_ipv6: null 
provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.147 public_ipv6: '' region: RegionOne slot: null push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false compute-1: ansible_connection: ssh ansible_host: 38.102.83.233 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. 
src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ 
content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 70fdcf87-2866-4dad-82d7-0accf142e3fc host_id: bdb78bf25a270582fae0ca49d447ffffc4c7a50a772a0a4c0593588a interface_ip: 38.102.83.233 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.233 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.233 public_ipv6: '' region: RegionOne slot: null push_registry: 
quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false controller: ansible_connection: ssh ansible_host: 38.102.83.39 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. 
src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ 
content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f43c7845-aaa4-43d2-b2cb-88299b0d01b3 host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411 interface_ip: 38.102.83.39 label: cloud-centos-9-stream-tripleo-vexxhost-medium private_ipv4: 38.102.83.39 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.39 public_ipv6: '' region: RegionOne slot: null push_registry: 
quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false crc: ansible_connection: ssh ansible_host: 38.102.83.220 ansible_port: 22 ansible_python_interpreter: auto ansible_user: core cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. 
src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ 
content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: a519b063-d122-47cf-ae3b-7548803df408 host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411 interface_ip: 38.102.83.220 label: coreos-crc-extracted-2-39-0-3xl private_ipv4: 38.102.83.220 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.220 public_ipv6: '' region: RegionOne slot: null push_registry: quay.rdoproject.org 
quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul_log_collection: false localhost: ansible_connection: local vars: cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. 
src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ 
content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: main build: 90366c73dd19485aa9d993ddaab6d557 build_refs: - branch: main change: 
'320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null buildset: f20638613fa941389b7ed9d90fc6bce5 buildset_refs: - branch: main change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: 
notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null change: '320' change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n \ spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n notificationsBus:\r\n cluster: notifications-rabbitmq\r\n user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 child_jobs: [] commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 event_id: 2658e330-f5e9-11f0-9efc-87f0b8025f7f executor: hostname: ze02.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/inventory.yaml log_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/logs result_data_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/results.json src_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/src work_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work items: - branch: main change: '320' 
change_message: "Rabbitmq vhost and user support\n\nAdd new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional usage.\r\nThe controller adds these values to the TransportURL create request when present.\r\n\r\nAdditionally, we migrate RabbitMQ cluster name to RabbitMq config struct using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from legacy RabbitMqClusterName.\r\n\r\nExample usage:\r\n\r\n```\r\n spec:\r\n messagingBus:\r\n cluster: rpc-rabbitmq\r\n user: rpc-user\r\n vhost: rpc-vhost\r\n \ notificationsBus:\r\n cluster: notifications-rabbitmq\r\n \ user: notifications-user\r\n vhost: notifications-vhost\r\n```\r\n\r\nJira: https://issues.redhat.com/browse/OSPRH-23882" change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null job: watcher-operator-validation-epoxy-ocp4-16 jobtags: [] max_attempts: 1 message: 
UmFiYml0bXEgdmhvc3QgYW5kIHVzZXIgc3VwcG9ydAoKQWRkIG5ldyBtZXNzYWdpbmdCdXMgYW5kIG5vdGlmaWNhdGlvbnNCdXMgaW50ZXJmYWNlcyB0byBob2xkIGNsdXN0ZXIsIHVzZXIgYW5kIHZob3N0IG5hbWVzIGZvciBvcHRpb25hbCB1c2FnZS4NClRoZSBjb250cm9sbGVyIGFkZHMgdGhlc2UgdmFsdWVzIHRvIHRoZSBUcmFuc3BvcnRVUkwgY3JlYXRlIHJlcXVlc3Qgd2hlbiBwcmVzZW50Lg0KDQpBZGRpdGlvbmFsbHksIHdlIG1pZ3JhdGUgUmFiYml0TVEgY2x1c3RlciBuYW1lIHRvIFJhYmJpdE1xIGNvbmZpZyBzdHJ1Y3QgdXNpbmcgRGVmYXVsdFJhYmJpdE1xQ29uZmlnIGZyb20gaW5mcmEtb3BlcmF0b3IgdG8gYXV0b21hdGljYWxseSBwb3B1bGF0ZSB0aGUgbmV3IENsdXN0ZXIgZmllbGQgZnJvbSBsZWdhY3kgUmFiYml0TXFDbHVzdGVyTmFtZS4NCg0KRXhhbXBsZSB1c2FnZToNCg0KYGBgDQogIHNwZWM6DQogICAgbWVzc2FnaW5nQnVzOg0KICAgICAgY2x1c3RlcjogcnBjLXJhYmJpdG1xDQogICAgICB1c2VyOiBycGMtdXNlcg0KICAgICAgdmhvc3Q6IHJwYy12aG9zdA0KICAgIG5vdGlmaWNhdGlvbnNCdXM6DQogICAgICBjbHVzdGVyOiBub3RpZmljYXRpb25zLXJhYmJpdG1xDQogICAgICB1c2VyOiBub3RpZmljYXRpb25zLXVzZXINCiAgICAgIHZob3N0OiBub3RpZmljYXRpb25zLXZob3N0DQpgYGANCg0KSmlyYTogaHR0cHM6Ly9pc3N1ZXMucmVkaGF0LmNvbS9icm93c2UvT1NQUkgtMjM4ODI= patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 pipeline: github-check playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb 
untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 playbooks: - path: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/edpm/run.yml roles: - checkout: main checkout_description: playbook branch link_name: ansible/playbook_0/role_0/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_0/ci-framework/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_1/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_1/config/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_2/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_2/zuul-jobs/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_3/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_3/rdo-jobs/roles post_review: false project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 
42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: zuul branch commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: zuul branch commit: 43c8ae13d85939e9a3f9cddbe838cbe4616199f7 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: zuul branch commit: 0121df8691096e0883637457925e4142353e35ba name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: zuul branch commit: bdf4c9385be5e3e04ff06f67f25d6993db70cf6e name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: main checkout_description: zuul branch commit: 06cd1004cb26b36ba1054ccf7875fad6248762c5 name: 
openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: zuul branch commit: 74e48015b997b0bb2efc1ad92a6937949f185a0e name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: zuul branch commit: 38e630804dada625f7b015f13f3ac5bb7192f4dd name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: zuul branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup github.com/openstack-k8s-operators/watcher-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator checkout: main checkout_description: zuul branch commit: 581f46572d07c53c87a11aa044b02e73f253eea6 name: openstack-k8s-operators/watcher-operator required: false short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: project default branch commit: 
691c03cc007bee9934da14cf46c86009616a2aef name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: project default branch commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/pull/320/head resources: {} tenant: rdoproject.org timeout: 10800 topic: null voting: true zuul_log_collection: false home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/0000755000175000017500000000000015133657744024525 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/openshift-login-params.yml0000644000175000017500000000043015133657526031631 0ustar zuulzuulcifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_context: default/api-crc-testing:6443/kubeadmin cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_token: sha256~aCk9SoMxAKzQn_kYPvy1gW_KvX_MhVbp0_pM2TnfuyE cifmw_openshift_user: kubeadmin home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/zuul-params.yml0000644000175000017500000005026615133657744027541 0ustar zuulzuulcifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_build_images_output: {} cifmw_dlrn_report_result: false cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/multinode-ci.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/ci-framework'']. src_dir }}/scenarios/centos-9/horizon.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. 
src_dir }}/ci/scenarios/{{ watcher_scenario }}.yml' - '@{{ ansible_user_dir }}/{{ zuul.projects[''github.com/openstack-k8s-operators/watcher-operator'']. src_dir }}/ci/tests/watcher-tempest.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_namespace: '{{ content_provider_os_registry_url | split(''/'') | last }}' cifmw_test_operator_tempest_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' 
cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: '{{ content_provider_os_registry_url | split(''/'') | first }}' cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_libvirt: false cifmw_zuul_target_host: controller content_provider_dlrn_md5_hash: '' content_provider_gating_repo_available: false content_provider_gating_repo_url: '' content_provider_os_registry_namespace: podified-epoxy-centos9 content_provider_os_registry_url: 38.102.83.51:5001/podified-epoxy-centos9 content_provider_registry_available: true content_provider_registry_ip: 38.102.83.51 content_provider_registry_ip_port: 38.102.83.51:5001 crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: compute-0: networks: default: ip: 192.168.122.100 internal-api: config_nm: false ip: 172.17.0.100 storage: config_nm: false ip: 172.18.0.100 tenant: config_nm: false ip: 172.19.0.100 compute-1: networks: default: ip: 192.168.122.101 internal-api: config_nm: false ip: 172.17.0.101 storage: config_nm: false ip: 172.18.0.101 tenant: config_nm: false ip: 172.19.0.101 controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: '{{ (''ibm'' in nodepool.cloud) | ternary(''1440'', ''1500'') }}' range: 192.168.122.0/24 router_net: '' transparent: true internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true fetch_dlrn_hash: false push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true watcher_scenario: edpm-no-notifications watcher_services_tag: watcher_latest watcher_tempest_max_microversion: '1.4' zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' 
- '' - '' ansible_version: '8' attempts: 1 branch: main build: 90366c73dd19485aa9d993ddaab6d557 build_refs: - branch: main change: '320' change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null buildset: f20638613fa941389b7ed9d90fc6bce5 buildset_refs: - branch: main change: '320' change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null change: '320' change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 child_jobs: [] commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 event_id: 2658e330-f5e9-11f0-9efc-87f0b8025f7f executor: hostname: ze02.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/ansible/inventory.yaml log_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/logs result_data_file: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/results.json src_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work/src work_root: /var/lib/zuul/builds/90366c73dd19485aa9d993ddaab6d557/work items: - branch: main change: '320' change_url: https://github.com/openstack-k8s-operators/watcher-operator/pull/320 commit_id: 581f46572d07c53c87a11aa044b02e73f253eea6 patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 
project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator topic: null job: watcher-operator-validation-epoxy-ocp4-16 jobtags: [] max_attempts: 1 message: UmFiYml0bXEgdmhvc3QgYW5kIHVzZXIgc3VwcG9ydAoKQWRkIG5ldyBtZXNzYWdpbmdCdXMgYW5kIG5vdGlmaWNhdGlvbnNCdXMgaW50ZXJmYWNlcyB0byBob2xkIGNsdXN0ZXIsIHVzZXIgYW5kIHZob3N0IG5hbWVzIGZvciBvcHRpb25hbCB1c2FnZS4NClRoZSBjb250cm9sbGVyIGFkZHMgdGhlc2UgdmFsdWVzIHRvIHRoZSBUcmFuc3BvcnRVUkwgY3JlYXRlIHJlcXVlc3Qgd2hlbiBwcmVzZW50Lg0KDQpBZGRpdGlvbmFsbHksIHdlIG1pZ3JhdGUgUmFiYml0TVEgY2x1c3RlciBuYW1lIHRvIFJhYmJpdE1xIGNvbmZpZyBzdHJ1Y3QgdXNpbmcgRGVmYXVsdFJhYmJpdE1xQ29uZmlnIGZyb20gaW5mcmEtb3BlcmF0b3IgdG8gYXV0b21hdGljYWxseSBwb3B1bGF0ZSB0aGUgbmV3IENsdXN0ZXIgZmllbGQgZnJvbSBsZWdhY3kgUmFiYml0TXFDbHVzdGVyTmFtZS4NCg0KRXhhbXBsZSB1c2FnZToNCg0KYGBgDQogIHNwZWM6DQogICAgbWVzc2FnaW5nQnVzOg0KICAgICAgY2x1c3RlcjogcnBjLXJhYmJpdG1xDQogICAgICB1c2VyOiBycGMtdXNlcg0KICAgICAgdmhvc3Q6IHJwYy12aG9zdA0KICAgIG5vdGlmaWNhdGlvbnNCdXM6DQogICAgICBjbHVzdGVyOiBub3RpZmljYXRpb25zLXJhYmJpdG1xDQogICAgICB1c2VyOiBub3RpZmljYXRpb25zLXVzZXINCiAgICAgIHZob3N0OiBub3RpZmljYXRpb25zLXZob3N0DQpgYGANCg0KSmlyYTogaHR0cHM6Ly9pc3N1ZXMucmVkaGF0LmNvbS9icm93c2UvT1NQUkgtMjM4ODI= patchset: 581f46572d07c53c87a11aa044b02e73f253eea6 pipeline: github-check playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: 
canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 691c03cc007bee9934da14cf46c86009616a2aef untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 playbooks: - path: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/edpm/run.yml roles: - checkout: main checkout_description: playbook branch link_name: ansible/playbook_0/role_0/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_0/ci-framework/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_1/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_1/config/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_2/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_2/zuul-jobs/roles - checkout: master checkout_description: project default branch link_name: ansible/playbook_0/role_3/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_3/rdo-jobs/roles post_review: false project: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator name: 
openstack-k8s-operators/watcher-operator short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: zuul branch commit: 05fab9c7c87ad7552b529c3d1173a1772e61e9fb name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: zuul branch commit: 43c8ae13d85939e9a3f9cddbe838cbe4616199f7 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: zuul branch commit: 0121df8691096e0883637457925e4142353e35ba name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: zuul branch commit: bdf4c9385be5e3e04ff06f67f25d6993db70cf6e name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: 
src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: main checkout_description: zuul branch commit: 06cd1004cb26b36ba1054ccf7875fad6248762c5 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: zuul branch commit: 74e48015b997b0bb2efc1ad92a6937949f185a0e name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: zuul branch commit: 38e630804dada625f7b015f13f3ac5bb7192f4dd name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: zuul branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup github.com/openstack-k8s-operators/watcher-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/watcher-operator checkout: main checkout_description: zuul branch commit: 581f46572d07c53c87a11aa044b02e73f253eea6 name: 
openstack-k8s-operators/watcher-operator required: false short_name: watcher-operator src_dir: src/github.com/openstack-k8s-operators/watcher-operator opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: project default branch commit: 691c03cc007bee9934da14cf46c86009616a2aef name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: project default branch commit: 08a84deec7dace955f92270e2cbb8b993f305ad4 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/pull/320/head resources: {} tenant: rdoproject.org timeout: 10800 topic: null voting: true zuul_log_collection: false home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/install-yamls-params.yml0000644000175000017500000006721715133657744031337 0ustar zuulzuulcifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator 
ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_OS_IMG_TYPE: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: 
crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: false BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: false INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: 
netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' 
NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: os**********et SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator tripleo_deploy: 'export REGISTRY_PWD:' cifmw_install_yamls_environment: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/custom-params.yml0000644000175000017500000002210315133657744030041 0ustar zuulzuulcifmw_architecture_repo: /home/zuul/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_basedir: /home/zuul/ci-framework-data cifmw_build_images_output: {} cifmw_config_certmanager: true cifmw_default_dns_servers: - 1.1.1.1 - 8.8.8.8 cifmw_deploy_edpm: true cifmw_dlrn_report_result: false cifmw_edpm_deploy_nova_compute_extra_config: '[libvirt] cpu_mode = custom cpu_models = Nehalem ' cifmw_edpm_prepare_kustomizations: - apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namespace: openstack patches: - patch: "apiVersion: core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n telemetry:\n enabled: true\n template:\n \ ceilometer:\n enabled: true\n metricStorage:\n enabled: true\n customMonitoringStack:\n alertmanagerConfig:\n \ disabled: true\n prometheusConfig:\n enableRemoteWriteReceiver: true\n persistentVolumeClaim:\n resources:\n requests:\n \ storage: 20G\n replicas: 1\n scrapeInterval: 30s\n resourceSelector:\n matchLabels:\n service: metricStorage\n retention: 24h" target: kind: OpenStackControlPlane - patch: "apiVersion: core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n telemetry:\n template:\n metricStorage:\n \ monitoringStack: null" target: kind: OpenStackControlPlane - patch: "apiVersion: core.openstack.org/v1beta1\nkind: OpenStackControlPlane\nmetadata:\n \ name: controlplane\nspec:\n watcher:\n enabled: true\n template:\n \ decisionengineServiceTemplate:\n customServiceConfig: |\n \ [watcher_cluster_data_model_collectors.compute]\n period = 60\n [watcher_cluster_data_model_collectors.storage]\n period = 60" target: kind: OpenStackControlPlane 
cifmw_edpm_prepare_skip_crc_storage_creation: true cifmw_edpm_prepare_timeout: 60 cifmw_edpm_telemetry_enabled_exporters: - podman_exporter - openstack_network_exporter cifmw_extras: - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/multinode-ci.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/scenarios/centos-9/horizon.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/scenarios/edpm-no-notifications.yml' - '@/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/tests/watcher-tempest.yml' cifmw_installyamls_repos: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_nolog: true cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_password: '12**********89' cifmw_openshift_setup_skip_internal_registry: true cifmw_openshift_setup_skip_internal_registry_tls_verify: true cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_openstack_k8s_operators_org_url: https://github.com/openstack-k8s-operators cifmw_openstack_namespace: openstack cifmw_operator_build_meta_name: openstack-operator cifmw_operator_build_output: operators: openstack-operator: git_commit_hash: 38e630804dada625f7b015f13f3ac5bb7192f4dd git_src_dir: ~/src/github.com/openstack-k8s-operators/openstack-operator image: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd image_bundle: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-bundle:38e630804dada625f7b015f13f3ac5bb7192f4dd image_catalog: 38.102.83.51:5001/openstack-k8s-operators/openstack-operator-index:38e630804dada625f7b015f13f3ac5bb7192f4dd watcher-operator: git_commit_hash: 581f46572d07c53c87a11aa044b02e73f253eea6 git_src_dir: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator image: 
38.102.83.51:5001/openstack-k8s-operators/watcher-operator:581f46572d07c53c87a11aa044b02e73f253eea6 image_bundle: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-bundle:581f46572d07c53c87a11aa044b02e73f253eea6 image_catalog: 38.102.83.51:5001/openstack-k8s-operators/watcher-operator-index:581f46572d07c53c87a11aa044b02e73f253eea6 cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cifmw_repo: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_relative: src/github.com/openstack-k8s-operators/ci-framework cifmw_repo_setup_dist_major_version: 9 cifmw_repo_setup_os_release: centos cifmw_run_test_role: test_operator cifmw_run_tests: true cifmw_test_operator_tempest_concurrency: 1 cifmw_test_operator_tempest_exclude_list: 'watcher_tempest_plugin.*client_functional.* watcher_tempest_plugin.tests.scenario.test_execute_strategies.TestExecuteStrategies.test_execute_storage_capacity_balance_strategy watcher_tempest_plugin.*\[.*\breal_load\b.*\].* watcher_tempest_plugin.tests.scenario.test_execute_zone_migration.TestExecuteZoneMigrationStrategy.test_execute_zone_migration_without_destination_host watcher_tempest_plugin.*\[.*\bvolume_migration\b.*\].* ' cifmw_test_operator_tempest_external_plugin: - changeRefspec: 380572db57798530b64dcac14c6b01b0382c5d8e changeRepository: https://review.opendev.org/openstack/watcher-tempest-plugin repository: https://opendev.org/openstack/watcher-tempest-plugin.git cifmw_test_operator_tempest_image_tag: watcher_latest cifmw_test_operator_tempest_include_list: 'watcher_tempest_plugin.* ' cifmw_test_operator_tempest_namespace: podified-epoxy-centos9 cifmw_test_operator_tempest_registry: 38.102.83.51:5001 cifmw_test_operator_tempest_tempestconf_config: overrides: 'compute.min_microversion 2.56 compute.min_compute_nodes 2 placement.min_microversion 1.29 compute-feature-enabled.live_migration true 
compute-feature-enabled.block_migration_for_live_migration true service_available.sg_core true telemetry_services.metric_backends prometheus telemetry.disable_ssl_certificate_validation true telemetry.ceilometer_polling_interval 15 optimize.min_microversion 1.0 optimize.max_microversion 1.4 optimize.datasource prometheus optimize.openstack_type podified optimize.proxy_host_address 38.102.83.39 optimize.proxy_host_user zuul optimize.prometheus_host metric-storage-prometheus.openstack.svc optimize.prometheus_ssl_enabled true optimize.prometheus_ssl_cert_dir /etc/prometheus/secrets/combined-ca-bundle optimize.podified_kubeconfig_path /home/zuul/.crc/machines/crc/kubeconfig optimize.podified_namespace openstack optimize.run_continuous_audit_tests true ' cifmw_update_containers: true cifmw_update_containers_openstack: false cifmw_update_containers_org: podified-epoxy-centos9 cifmw_update_containers_registry: 38.102.83.51:5001 cifmw_update_containers_tag: watcher_latest cifmw_update_containers_watcher: true cifmw_use_crc: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller post_ctlplane_deploy: - name: Tune rabbitmq resources source: rabbitmq_tuning.yml type: playbook post_deploy: - inventory: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/hosts name: Download needed tools source: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml type: playbook - name: Patch Openstack Prometheus to enable admin API source: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/prometheus_admin_api.yaml type: playbook post_infra: - inventory: /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml name: Fetch nodes facts and save them as parameters source: fetch_compute_facts.yml type: playbook pre_deploy: - name: 80 Kustomize OpenStack CR source: control_plane_horizon.yml type: playbook pre_deploy_create_coo_subscription: - name: Deploy cluster-observability-operator source: 
/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/deploy_cluster_observability_operator.yaml type: playbook pre_infra: - connection: local inventory: localhost, name: Download needed tools source: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml type: playbook pre_update: - inventory: /home/zuul/ci-framework-data/artifacts/zuul_inventory.yml name: Fetch nodes facts and save them as parameters source: fetch_compute_facts.yml type: playbook home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible-facts.yml0000644000175000017500000005064715133657653025633 0ustar zuulzuul_ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.39 all_ipv6_addresses: - fe80::f816:3eff:fe71:e11f ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 crc_ci_bootstrap_instance_default_net_config: mtu: '1500' range: 192.168.122.0/24 router_net: '' transparent: true crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2026-01-20T10:44:55Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. 
hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 2e4cfd41-19cc-4d87-87ce-3666153838d5 hardware_offload_type: null hints: '' id: 97004f9b-ecd9-4295-91be-b58c60a6e487 ip_allocation: immediate mac_address: fa:16:3e:db:79:f1 name: crc-a519b063-d122-47cf-ae3b-7548803df408 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2026-01-20T10:44:55Z' crc_ci_bootstrap_network_name: zuul-ci-net-90366c73 crc_ci_bootstrap_networks_out: compute-0: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.100/24 mac: fa:16:3e:c1:c1:89 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.100/24 mac: 52:54:00:e4:85:d6 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.100/24 mac: 52:54:00:81:02:e8 mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.100/24 mac: 52:54:00:10:b6:c8 mtu: '1496' parent_iface: eth1 vlan: 22 compute-1: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.101/24 mac: fa:16:3e:62:06:49 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.101/24 mac: 52:54:00:1a:ac:e2 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.101/24 mac: 52:54:00:8e:43:4e mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.101/24 mac: 52:54:00:96:51:04 mtu: '1496' parent_iface: eth1 vlan: 22 controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b8:b9:28 mtu: '1500' crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 
192.168.122.10/24 mac: fa:16:3e:db:79:f1 mtu: '1500' internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:75:cf:2c mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:da:18:d2 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:b7:68:1f mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:28Z' description: '' dns_domain: '' id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: true l2_adjacency: true mtu: 1500 name: zuul-ci-net-90366c73 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2026-01-20T10:43:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2026-01-20T10:43:36Z' description: '' enable_ndp_proxy: null external_gateway_info: null flavor_id: null id: 6f8c9a88-0252-4b33-9039-3708bccf4928 name: zuul-ci-subnet-router-90366c73 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 1 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2026-01-20T10:43:36Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2026-01-20T10:43:33Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 
2e4cfd41-19cc-4d87-87ce-3666153838d5 ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-90366c73 network_id: 9afcb953-b4d4-4d1b-ba09-6d5eaf6424aa project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2026-01-20T10:43:33Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-90366c73 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-90366c73 date_time: date: '2026-01-20' day: '20' epoch: '1768906665' epoch_int: '1768906665' hour: '10' iso8601: '2026-01-20T10:57:45Z' iso8601_basic: 20260120T105745596274 iso8601_basic_short: 20260120T105745 iso8601_micro: '2026-01-20T10:57:45.596274Z' minute: '57' month: '01' second: '45' time: '10:57:45' tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Tuesday weekday_number: '2' weeknumber: '03' year: '2026' default_ipv4: address: 38.102.83.39 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:71:e1:1f mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2026-01-20-10-41-26-00 vda1: - 22ac9141-3960-4912-b20e-19fc8a328d40 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2026-01-20-10-41-26-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 22ac9141-3960-4912-b20e-19fc8a328d40 sectors: '83883999' sectorsize: 512 size: 40.00 GB start: '2048' uuid: 
22ac9141-3960-4912-b20e-19fc8a328d40 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '83886080' sectorsize: '512' size: 40.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 46682 22 SSH_CONNECTION: 38.102.83.114 46682 38.102.83.39 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '12' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: 
off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.39 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe71:e11f prefix: '64' scope: link macaddress: fa:16:3e:71:e1:1f module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: controller gather_subset: - min hostname: controller hostnqn: 
nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d interfaces: - eth0 - lo is_chroot: false iscsi_iqn: '' kernel: 5.14.0-661.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] 
tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.08 1m: 0.59 5m: 0.22 locally_reachable_ips: ipv4: - 38.102.83.39 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe71:e11f lsb: {} lvm: N/A machine: x86_64 machine_id: 85ac68c10a6e7ae08ceb898dbdca0cb5 memfree_mb: 3195 memory_mb: nocache: free: 3404 used: 251 real: free: 3195 total: 3655 used: 460 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 3655 module_setup: true mounts: - block_available: 9928916 block_size: 4096 block_total: 10469115 block_used: 540199 device: /dev/vda1 fstype: xfs inode_available: 20917098 inode_total: 20970992 inode_used: 53894 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 40668839936 size_total: 42881495040 uuid: 22ac9141-3960-4912-b20e-19fc8a328d40 nodename: controller os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=22ac9141-3960-4912-b20e-19fc8a328d40 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 2 processor_nproc: 2 processor_threads_per_core: 1 processor_vcpus: 2 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 python_version: 3.9.25 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing 
policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+HME4ahJQfJECnlUk3Icgw7DjB45ygINRfee3AcsNVR5rNrzPcoaVTiPZ1YEOGS4KKBJ1Qyzp3CgN+dL12iOs= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIHcCbCXC3if6/1WyJzr5vIzL+Tzi/3I/oQgtmHTAEmym ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQD5hZ02CR6jauFfwvnGyh7Gg7GiN8xU/4woiEx+8xAto75E7Pi9h+8iAczj5rpkBpdIX3G3BSeegzMeog4upoxDVvta9EgXuabnQ49Y7WDm0LPFAPgiFBu/CkrcXHPm6OM5a181eFVk4w9Kf3GDJ9Arh5IdZdAbxXEEdenpbQnlz4hFtl/dGIrohDfCuWmrhq5VMqraCeMpiJ4c2G2iMVgZFQf8LvUICbaySrebir4HAfyv1yWZawS1Nql3bsyHsx9Tf25tbj5CHHs+DhC9NI/UgOmeW8rz3IyrdhqKInbsI/AqSHQPmEAEwzRco8xALMmzjICopKKXB9R++ddv/PAqiZPTrixuYUXRQQyJvx080Visb20dtAZTLYdmY7X1oB8Jgvullh3xFHNeqAu0+7OIeoHl9eCxx2sbk1kEtud1CMuDc5cn7h3ANGHZY3/jP0cUAyel1wQE0olv43z2rzgUpI6+8gKM0edyBLbCbww6/PHtcNkrzxAB3WOaAIzZAI8= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 64 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack zuul_change_list: - watcher-operator home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/0000755000175000017500000000000015133657744025111 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean.repo.md50000644000175000017500000000004115133657373030246 0ustar zuulzuulc3923531bcda0b0811b2d5053f189beb 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean-antelope-testing.repo

[delorean-antelope-testing]
name=dlrn-antelope-testing
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/deps/latest/
enabled=1
gpgcheck=0
module_hotfixes=1

[delorean-antelope-build-deps]
name=dlrn-antelope-build-deps
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/build-deps/latest/
enabled=1
gpgcheck=0
module_hotfixes=1

[centos9-rabbitmq]
name=centos9-rabbitmq
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/messaging/$basearch/rabbitmq-38/
enabled=1
gpgcheck=0
module_hotfixes=1

[centos9-storage]
name=centos9-storage
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/storage/$basearch/ceph-reef/
enabled=1
gpgcheck=0
module_hotfixes=1

[centos9-opstools]
name=centos9-opstools
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/opstools/$basearch/collectd-5/
enabled=1
gpgcheck=0
module_hotfixes=1

[centos9-nfv-ovs]
name=NFV SIG OpenvSwitch
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/nfv/$basearch/openvswitch-2/
gpgcheck=0
enabled=1
module_hotfixes=1

# epel is required for Ceph Reef
[epel-low-priority]
name=Extra Packages for Enterprise Linux $releasever - $basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=1
gpgcheck=0
countme=1
priority=100
includepkgs=libarrow*,parquet*,python3-asyncssh,re2,python3-grpcio,grpc*,abseil*,thrift*,blake3

home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean.repo

[delorean-component-barbican]
name=delorean-openstack-barbican-42b4c41831408a8e323fec3c8983b5c793b64874
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/barbican/42/b4/42b4c41831408a8e323fec3c8983b5c793b64874_08052e9d
enabled=1
gpgcheck=0
priority=1

[delorean-component-baremetal]
name=delorean-python-glean-10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/baremetal/10/df/10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7_36137eb3
enabled=1
gpgcheck=0
priority=1

[delorean-component-cinder]
name=delorean-openstack-cinder-1c00d6490d88e436f26efb71f2ac96e75252e97c
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cinder/1c/00/1c00d6490d88e436f26efb71f2ac96e75252e97c_f716f000
enabled=1
gpgcheck=0
priority=1

[delorean-component-clients]
name=delorean-python-stevedore-c4acc5639fd2329372142e39464fcca0209b0018
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/clients/c4/ac/c4acc5639fd2329372142e39464fcca0209b0018_d3ef8337
enabled=1
gpgcheck=0
priority=1

[delorean-component-cloudops]
name=delorean-python-cloudkitty-tests-tempest-2c80f80e02c5accd099187ea762c8f8389bd7905
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cloudops/2c/80/2c80f80e02c5accd099187ea762c8f8389bd7905_33e4dd93
enabled=1
gpgcheck=0
priority=1

[delorean-component-common]
name=delorean-os-refresh-config-9bfc52b5049be2d8de6134d662fdde9dfa48960f
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/common/9b/fc/9bfc52b5049be2d8de6134d662fdde9dfa48960f_b85780e6
enabled=1
gpgcheck=0
priority=1

[delorean-component-compute]
name=delorean-openstack-nova-6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/compute/6f/8d/6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e_dc05b899
enabled=1
gpgcheck=0
priority=1

[delorean-component-designate]
name=delorean-python-designate-tests-tempest-347fdbc9b4595a10b726526b3c0b5928e5b7fcf2
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/designate/34/7f/347fdbc9b4595a10b726526b3c0b5928e5b7fcf2_3fd39337
enabled=1
gpgcheck=0
priority=1

[delorean-component-glance]
name=delorean-openstack-glance-1fd12c29b339f30fe823e2b5beba14b5f241e52a
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/glance/1f/d1/1fd12c29b339f30fe823e2b5beba14b5f241e52a_0d693729
enabled=1
gpgcheck=0
priority=1

[delorean-component-keystone]
name=delorean-openstack-keystone-e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/keystone/e4/b4/e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7_264c03cc
enabled=1
gpgcheck=0
priority=1

[delorean-component-manila]
name=delorean-openstack-manila-3c01b7181572c95dac462eb19c3121e36cb0fe95
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/manila/3c/01/3c01b7181572c95dac462eb19c3121e36cb0fe95_912dfd18
enabled=1
gpgcheck=0
priority=1

[delorean-component-network]
name=delorean-python-whitebox-neutron-tests-tempest-12cf06ce36a79a584fc757f4c25ff96845573c93
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/network/12/cf/12cf06ce36a79a584fc757f4c25ff96845573c93_3ed3aba3
enabled=1
gpgcheck=0
priority=1

[delorean-component-octavia]
name=delorean-openstack-octavia-ba397f07a7331190208c93368ee23826ac4e2707
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/octavia/ba/39/ba397f07a7331190208c93368ee23826ac4e2707_9d6e596a
enabled=1
gpgcheck=0
priority=1

[delorean-component-optimize]
name=delorean-openstack-watcher-c014f81a8647287f6dcc339321c1256f5a2e82d5
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/optimize/c0/14/c014f81a8647287f6dcc339321c1256f5a2e82d5_bcbfdccc
enabled=1
gpgcheck=0
priority=1

[delorean-component-podified]
name=delorean-ansible-config_template-5ccaa22121a7ff05620975540d81f6efb077d8db
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/podified/5c/ca/5ccaa22121a7ff05620975540d81f6efb077d8db_83eb7cc2
enabled=1
gpgcheck=0
priority=1

[delorean-component-puppet]
name=delorean-puppet-ceph-7352068d7b8c84ded636ab3158dafa6f3851951e
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/puppet/73/52/7352068d7b8c84ded636ab3158dafa6f3851951e_7cde1ad1
enabled=1
gpgcheck=0
priority=1

[delorean-component-swift]
name=delorean-openstack-swift-dc98a8463506ac520c469adb0ef47d0f7753905a
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/swift/dc/98/dc98a8463506ac520c469adb0ef47d0f7753905a_9d02f069
enabled=1
gpgcheck=0
priority=1

[delorean-component-tempest]
name=delorean-python-tempestconf-8515371b7cceebd4282e09f1d8f0cc842df82855
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/tempest/85/15/8515371b7cceebd4282e09f1d8f0cc842df82855_a1e336c7
enabled=1
gpgcheck=0
priority=1

[delorean-component-ui]
name=delorean-openstack-heat-ui-013accbfd179753bc3f0d1f4e5bed07a4fd9f771
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/ui/01/3a/013accbfd179753bc3f0d1f4e5bed07a4fd9f771_0c88e467
enabled=1
gpgcheck=0
priority=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-appstream.repo

[repo-setup-centos-appstream]
name=repo-setup-centos-appstream
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/AppStream/$basearch/os/
gpgcheck=0
enabled=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-baseos.repo

[repo-setup-centos-baseos]
name=repo-setup-centos-baseos
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/BaseOS/$basearch/os/
gpgcheck=0
enabled=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-highavailability.repo

[repo-setup-centos-highavailability]
name=repo-setup-centos-highavailability
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/HighAvailability/$basearch/os/
gpgcheck=0
enabled=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-powertools.repo

[repo-setup-centos-powertools]
name=repo-setup-centos-powertools
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/CRB/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/0000755000175000017500000000000015133657745025307 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/ens3.nmconnection0000644000175000017500000000026215133657654030572 0ustar zuulzuul[connection] id=ens3 uuid=e94616dd-9c37-40d7-ae7c-f2a49628045b type=ethernet interface-name=ens3 [ethernet] [ipv4] method=auto [ipv6] addr-gen-mode=eui64 method=auto [proxy] ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/ci-private-network.nmconnectionhome/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/ci-private-network.nmconnectio0000644000175000017500000000051315133657654033275 0ustar zuulzuul[connection] id=ci-private-network uuid=3dd15e7c-20c8-5084-a843-bc1953df59de type=ethernet autoconnect=true interface-name=eth1 [ethernet] mac-address=fa:16:3e:b8:b9:28 mtu=1500 [ipv4] method=manual addresses=192.168.122.11/24 never-default=true gateway=192.168.122.1 [ipv6] addr-gen-mode=stable-privacy method=disabled [proxy] home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/0000755000175000017500000000000015133657745024405 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean-antelope-testing.repo0000644000175000017500000000317215133657654032347 0ustar zuulzuul[delorean-antelope-testing] name=dlrn-antelope-testing baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/deps/latest/ enabled=1 gpgcheck=0 module_hotfixes=1 [delorean-antelope-build-deps] name=dlrn-antelope-build-deps baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/build-deps/latest/ enabled=1 gpgcheck=0 module_hotfixes=1 
[centos9-rabbitmq] name=centos9-rabbitmq baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/messaging/$basearch/rabbitmq-38/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-storage] name=centos9-storage baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/storage/$basearch/ceph-reef/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-opstools] name=centos9-opstools baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/opstools/$basearch/collectd-5/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-nfv-ovs] name=NFV SIG OpenvSwitch baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/nfv/$basearch/openvswitch-2/ gpgcheck=0 enabled=1 module_hotfixes=1 # epel is required for Ceph Reef [epel-low-priority] name=Extra Packages for Enterprise Linux $releasever - $basearch metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir enabled=1 gpgcheck=0 countme=1 priority=100 includepkgs=libarrow*,parquet*,python3-asyncssh,re2,python3-grpcio,grpc*,abseil*,thrift*,blake3 home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean.repo0000644000175000017500000001341515133657654027070 0ustar zuulzuul[delorean-component-barbican] name=delorean-openstack-barbican-42b4c41831408a8e323fec3c8983b5c793b64874 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/barbican/42/b4/42b4c41831408a8e323fec3c8983b5c793b64874_08052e9d enabled=1 gpgcheck=0 priority=1 [delorean-component-baremetal] name=delorean-python-glean-10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/baremetal/10/df/10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7_36137eb3 enabled=1 gpgcheck=0 priority=1 
[delorean-component-cinder] name=delorean-openstack-cinder-1c00d6490d88e436f26efb71f2ac96e75252e97c baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cinder/1c/00/1c00d6490d88e436f26efb71f2ac96e75252e97c_f716f000 enabled=1 gpgcheck=0 priority=1 [delorean-component-clients] name=delorean-python-stevedore-c4acc5639fd2329372142e39464fcca0209b0018 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/clients/c4/ac/c4acc5639fd2329372142e39464fcca0209b0018_d3ef8337 enabled=1 gpgcheck=0 priority=1 [delorean-component-cloudops] name=delorean-python-cloudkitty-tests-tempest-2c80f80e02c5accd099187ea762c8f8389bd7905 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cloudops/2c/80/2c80f80e02c5accd099187ea762c8f8389bd7905_33e4dd93 enabled=1 gpgcheck=0 priority=1 [delorean-component-common] name=delorean-os-refresh-config-9bfc52b5049be2d8de6134d662fdde9dfa48960f baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/common/9b/fc/9bfc52b5049be2d8de6134d662fdde9dfa48960f_b85780e6 enabled=1 gpgcheck=0 priority=1 [delorean-component-compute] name=delorean-openstack-nova-6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/compute/6f/8d/6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e_dc05b899 enabled=1 gpgcheck=0 priority=1 [delorean-component-designate] name=delorean-python-designate-tests-tempest-347fdbc9b4595a10b726526b3c0b5928e5b7fcf2 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/designate/34/7f/347fdbc9b4595a10b726526b3c0b5928e5b7fcf2_3fd39337 enabled=1 gpgcheck=0 priority=1 [delorean-component-glance] name=delorean-openstack-glance-1fd12c29b339f30fe823e2b5beba14b5f241e52a 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/glance/1f/d1/1fd12c29b339f30fe823e2b5beba14b5f241e52a_0d693729 enabled=1 gpgcheck=0 priority=1 [delorean-component-keystone] name=delorean-openstack-keystone-e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/keystone/e4/b4/e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7_264c03cc enabled=1 gpgcheck=0 priority=1 [delorean-component-manila] name=delorean-openstack-manila-3c01b7181572c95dac462eb19c3121e36cb0fe95 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/manila/3c/01/3c01b7181572c95dac462eb19c3121e36cb0fe95_912dfd18 enabled=1 gpgcheck=0 priority=1 [delorean-component-network] name=delorean-python-whitebox-neutron-tests-tempest-12cf06ce36a79a584fc757f4c25ff96845573c93 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/network/12/cf/12cf06ce36a79a584fc757f4c25ff96845573c93_3ed3aba3 enabled=1 gpgcheck=0 priority=1 [delorean-component-octavia] name=delorean-openstack-octavia-ba397f07a7331190208c93368ee23826ac4e2707 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/octavia/ba/39/ba397f07a7331190208c93368ee23826ac4e2707_9d6e596a enabled=1 gpgcheck=0 priority=1 [delorean-component-optimize] name=delorean-openstack-watcher-c014f81a8647287f6dcc339321c1256f5a2e82d5 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/optimize/c0/14/c014f81a8647287f6dcc339321c1256f5a2e82d5_bcbfdccc enabled=1 gpgcheck=0 priority=1 [delorean-component-podified] name=delorean-ansible-config_template-5ccaa22121a7ff05620975540d81f6efb077d8db 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/podified/5c/ca/5ccaa22121a7ff05620975540d81f6efb077d8db_83eb7cc2 enabled=1 gpgcheck=0 priority=1 [delorean-component-puppet] name=delorean-puppet-ceph-7352068d7b8c84ded636ab3158dafa6f3851951e baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/puppet/73/52/7352068d7b8c84ded636ab3158dafa6f3851951e_7cde1ad1 enabled=1 gpgcheck=0 priority=1 [delorean-component-swift] name=delorean-openstack-swift-dc98a8463506ac520c469adb0ef47d0f7753905a baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/swift/dc/98/dc98a8463506ac520c469adb0ef47d0f7753905a_9d02f069 enabled=1 gpgcheck=0 priority=1 [delorean-component-tempest] name=delorean-python-tempestconf-8515371b7cceebd4282e09f1d8f0cc842df82855 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/tempest/85/15/8515371b7cceebd4282e09f1d8f0cc842df82855_a1e336c7 enabled=1 gpgcheck=0 priority=1 [delorean-component-ui] name=delorean-openstack-heat-ui-013accbfd179753bc3f0d1f4e5bed07a4fd9f771 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/ui/01/3a/013accbfd179753bc3f0d1f4e5bed07a4fd9f771_0c88e467 enabled=1 gpgcheck=0 priority=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean.repo.md50000644000175000017500000000004115133657654027543 0ustar zuulzuulc3923531bcda0b0811b2d5053f189beb home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-appstream.repo0000644000175000017500000000031615133657654032661 0ustar zuulzuul [repo-setup-centos-appstream] name=repo-setup-centos-appstream baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/AppStream/$basearch/os/ gpgcheck=0 enabled=1 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-baseos.repo0000644000175000017500000000030415133657654032136 0ustar zuulzuul [repo-setup-centos-baseos] name=repo-setup-centos-baseos baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/BaseOS/$basearch/os/ gpgcheck=0 enabled=1 ././@LongLink0000644000000000000000000000015100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-highavailability.repohome/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-highavailability.0000644000175000017500000000034215133657654033310 0ustar zuulzuul [repo-setup-centos-highavailability] name=repo-setup-centos-highavailability baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/HighAvailability/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-powertools.repo0000644000175000017500000000031115133657654033075 0ustar zuulzuul [repo-setup-centos-powertools] name=repo-setup-centos-powertools baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/CRB/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/0000755000175000017500000000000015133657451023501 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/0000755000175000017500000000000015133657451027534 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/0000755000175000017500000000000015133657745030667 5ustar zuulzuul././@LongLink0000644000000000000000000000017100000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_validate.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000212115133657453033340 0ustar zuulzuul--- - name: Debug make_barbican_deploy_validate_env when: make_barbican_deploy_validate_env is defined ansible.builtin.debug: var: make_barbican_deploy_validate_env - name: Debug make_barbican_deploy_validate_params when: make_barbican_deploy_validate_params is defined ansible.builtin.debug: var: make_barbican_deploy_validate_params - name: Run barbican_deploy_validate retries: "{{ make_barbican_deploy_validate_retries | default(omit) }}" delay: "{{ make_barbican_deploy_validate_delay | default(omit) }}" until: "{{ make_barbican_deploy_validate_until | default(true) }}" register: "make_barbican_deploy_validate_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_deploy_validate" dry_run: "{{ make_barbican_deploy_validate_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_deploy_validate_env|default({})), **(make_barbican_deploy_validate_params|default({}))) }}" ././@LongLink0000644000000000000000000000017000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000210215133657453033337 0ustar zuulzuul--- - name: Debug make_barbican_deploy_cleanup_env when: make_barbican_deploy_cleanup_env is defined ansible.builtin.debug: var: make_barbican_deploy_cleanup_env - name: Debug make_barbican_deploy_cleanup_params when: make_barbican_deploy_cleanup_params is defined 
ansible.builtin.debug: var: make_barbican_deploy_cleanup_params - name: Run barbican_deploy_cleanup retries: "{{ make_barbican_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_barbican_deploy_cleanup_delay | default(omit) }}" until: "{{ make_barbican_deploy_cleanup_until | default(true) }}" register: "make_barbican_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_deploy_cleanup" dry_run: "{{ make_barbican_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_deploy_cleanup_env|default({})), **(make_barbican_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb.0000644000175000017500000000152215133657453033260 0ustar zuulzuul--- - name: Debug make_mariadb_env when: make_mariadb_env is defined ansible.builtin.debug: var: make_mariadb_env - name: Debug make_mariadb_params when: make_mariadb_params is defined ansible.builtin.debug: var: make_mariadb_params - name: Run mariadb retries: "{{ make_mariadb_retries | default(omit) }}" delay: "{{ make_mariadb_delay | default(omit) }}" until: "{{ make_mariadb_until | default(true) }}" register: "make_mariadb_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb" dry_run: "{{ make_mariadb_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_env|default({})), **(make_mariadb_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000171215133657453033342 0ustar zuulzuul--- - name: Debug make_mariadb_cleanup_env when: make_mariadb_cleanup_env is defined ansible.builtin.debug: var: make_mariadb_cleanup_env - name: Debug make_mariadb_cleanup_params when: make_mariadb_cleanup_params is defined ansible.builtin.debug: var: make_mariadb_cleanup_params - name: Run mariadb_cleanup retries: "{{ make_mariadb_cleanup_retries | default(omit) }}" delay: "{{ make_mariadb_cleanup_delay | default(omit) }}" until: "{{ make_mariadb_cleanup_until | default(true) }}" register: "make_mariadb_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_cleanup" dry_run: "{{ make_mariadb_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_cleanup_env|default({})), **(make_mariadb_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000171215133657453033445 0ustar zuulzuul--- - name: Debug make_keystone_deploy_env when: make_keystone_deploy_env is defined ansible.builtin.debug: var: make_keystone_deploy_env - name: Debug make_keystone_deploy_params when: make_keystone_deploy_params is defined ansible.builtin.debug: var: make_keystone_deploy_params - name: Run keystone_deploy retries: "{{ make_keystone_deploy_retries | default(omit) }}" delay: "{{ make_keystone_deploy_delay | default(omit) }}" until: 
"{{ make_keystone_deploy_until | default(true) }}" register: "make_keystone_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_deploy" dry_run: "{{ make_keystone_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_deploy_env|default({})), **(make_keystone_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000210215133657453033437 0ustar zuulzuul--- - name: Debug make_keystone_deploy_cleanup_env when: make_keystone_deploy_cleanup_env is defined ansible.builtin.debug: var: make_keystone_deploy_cleanup_env - name: Debug make_keystone_deploy_cleanup_params when: make_keystone_deploy_cleanup_params is defined ansible.builtin.debug: var: make_keystone_deploy_cleanup_params - name: Run keystone_deploy_cleanup retries: "{{ make_keystone_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_keystone_deploy_cleanup_delay | default(omit) }}" until: "{{ make_keystone_deploy_cleanup_until | default(true) }}" register: "make_keystone_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_deploy_cleanup" dry_run: "{{ make_keystone_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_deploy_cleanup_env|default({})), **(make_keystone_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000165415133657453033352 0ustar zuulzuul--- - name: Debug make_barbican_prep_env when: make_barbican_prep_env is defined ansible.builtin.debug: var: make_barbican_prep_env - name: Debug make_barbican_prep_params when: make_barbican_prep_params is defined ansible.builtin.debug: var: make_barbican_prep_params - name: Run barbican_prep retries: "{{ make_barbican_prep_retries | default(omit) }}" delay: "{{ make_barbican_prep_delay | default(omit) }}" until: "{{ make_barbican_prep_until | default(true) }}" register: "make_barbican_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_prep" dry_run: "{{ make_barbican_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_prep_env|default({})), **(make_barbican_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000154115133657453033345 0ustar zuulzuul--- - name: Debug make_barbican_env when: make_barbican_env is defined ansible.builtin.debug: var: make_barbican_env - name: Debug make_barbican_params when: make_barbican_params is defined ansible.builtin.debug: var: make_barbican_params - name: Run barbican retries: "{{ make_barbican_retries | default(omit) }}" delay: "{{ make_barbican_delay | default(omit) }}" until: "{{ make_barbican_until | default(true) }}" register: "make_barbican_status" cifmw.general.ci_script: 
output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican" dry_run: "{{ make_barbican_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_env|default({})), **(make_barbican_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000173115133657453033346 0ustar zuulzuul--- - name: Debug make_barbican_cleanup_env when: make_barbican_cleanup_env is defined ansible.builtin.debug: var: make_barbican_cleanup_env - name: Debug make_barbican_cleanup_params when: make_barbican_cleanup_params is defined ansible.builtin.debug: var: make_barbican_cleanup_params - name: Run barbican_cleanup retries: "{{ make_barbican_cleanup_retries | default(omit) }}" delay: "{{ make_barbican_cleanup_delay | default(omit) }}" until: "{{ make_barbican_cleanup_until | default(true) }}" register: "make_barbican_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_cleanup" dry_run: "{{ make_barbican_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_cleanup_env|default({})), **(make_barbican_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000202515133657453033343 0ustar 
zuulzuul--- - name: Debug make_barbican_deploy_prep_env when: make_barbican_deploy_prep_env is defined ansible.builtin.debug: var: make_barbican_deploy_prep_env - name: Debug make_barbican_deploy_prep_params when: make_barbican_deploy_prep_params is defined ansible.builtin.debug: var: make_barbican_deploy_prep_params - name: Run barbican_deploy_prep retries: "{{ make_barbican_deploy_prep_retries | default(omit) }}" delay: "{{ make_barbican_deploy_prep_delay | default(omit) }}" until: "{{ make_barbican_deploy_prep_until | default(true) }}" register: "make_barbican_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_deploy_prep" dry_run: "{{ make_barbican_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_deploy_prep_env|default({})), **(make_barbican_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000171215133657453033345 0ustar zuulzuul--- - name: Debug make_barbican_deploy_env when: make_barbican_deploy_env is defined ansible.builtin.debug: var: make_barbican_deploy_env - name: Debug make_barbican_deploy_params when: make_barbican_deploy_params is defined ansible.builtin.debug: var: make_barbican_deploy_params - name: Run barbican_deploy retries: "{{ make_barbican_deploy_retries | default(omit) }}" delay: "{{ make_barbican_deploy_delay | default(omit) }}" until: "{{ make_barbican_deploy_until | default(true) }}" register: "make_barbican_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') 
}}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_deploy" dry_run: "{{ make_barbican_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_deploy_env|default({})), **(make_barbican_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfi0000644000175000017500000000204415133657453033410 0ustar zuulzuul--- - name: Debug make_netconfig_deploy_prep_env when: make_netconfig_deploy_prep_env is defined ansible.builtin.debug: var: make_netconfig_deploy_prep_env - name: Debug make_netconfig_deploy_prep_params when: make_netconfig_deploy_prep_params is defined ansible.builtin.debug: var: make_netconfig_deploy_prep_params - name: Run netconfig_deploy_prep retries: "{{ make_netconfig_deploy_prep_retries | default(omit) }}" delay: "{{ make_netconfig_deploy_prep_delay | default(omit) }}" until: "{{ make_netconfig_deploy_prep_until | default(true) }}" register: "make_netconfig_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netconfig_deploy_prep" dry_run: "{{ make_netconfig_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netconfig_deploy_prep_env|default({})), **(make_netconfig_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfi0000644000175000017500000000173115133657453033412 0ustar zuulzuul--- - name: Debug make_netconfig_deploy_env when: make_netconfig_deploy_env is defined ansible.builtin.debug: var: make_netconfig_deploy_env - name: Debug make_netconfig_deploy_params when: make_netconfig_deploy_params is defined ansible.builtin.debug: var: make_netconfig_deploy_params - name: Run netconfig_deploy retries: "{{ make_netconfig_deploy_retries | default(omit) }}" delay: "{{ make_netconfig_deploy_delay | default(omit) }}" until: "{{ make_netconfig_deploy_until | default(true) }}" register: "make_netconfig_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netconfig_deploy" dry_run: "{{ make_netconfig_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netconfig_deploy_env|default({})), **(make_netconfig_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfi0000644000175000017500000000212115133657453033404 0ustar zuulzuul--- - name: Debug make_netconfig_deploy_cleanup_env when: make_netconfig_deploy_cleanup_env is defined ansible.builtin.debug: var: make_netconfig_deploy_cleanup_env - name: Debug make_netconfig_deploy_cleanup_params when: make_netconfig_deploy_cleanup_params is defined ansible.builtin.debug: var: make_netconfig_deploy_cleanup_params - name: Run netconfig_deploy_cleanup retries: "{{ 
make_netconfig_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_netconfig_deploy_cleanup_delay | default(omit) }}" until: "{{ make_netconfig_deploy_cleanup_until | default(true) }}" register: "make_netconfig_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netconfig_deploy_cleanup" dry_run: "{{ make_netconfig_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netconfig_deploy_cleanup_env|default({})), **(make_netconfig_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcache0000644000175000017500000000204415133657453033345 0ustar zuulzuul--- - name: Debug make_memcached_deploy_prep_env when: make_memcached_deploy_prep_env is defined ansible.builtin.debug: var: make_memcached_deploy_prep_env - name: Debug make_memcached_deploy_prep_params when: make_memcached_deploy_prep_params is defined ansible.builtin.debug: var: make_memcached_deploy_prep_params - name: Run memcached_deploy_prep retries: "{{ make_memcached_deploy_prep_retries | default(omit) }}" delay: "{{ make_memcached_deploy_prep_delay | default(omit) }}" until: "{{ make_memcached_deploy_prep_until | default(true) }}" register: "make_memcached_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make memcached_deploy_prep" dry_run: "{{ make_memcached_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_memcached_deploy_prep_env|default({})), 
**(make_memcached_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcache0000644000175000017500000000173115133657453033347 0ustar zuulzuul--- - name: Debug make_memcached_deploy_env when: make_memcached_deploy_env is defined ansible.builtin.debug: var: make_memcached_deploy_env - name: Debug make_memcached_deploy_params when: make_memcached_deploy_params is defined ansible.builtin.debug: var: make_memcached_deploy_params - name: Run memcached_deploy retries: "{{ make_memcached_deploy_retries | default(omit) }}" delay: "{{ make_memcached_deploy_delay | default(omit) }}" until: "{{ make_memcached_deploy_until | default(true) }}" register: "make_memcached_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make memcached_deploy" dry_run: "{{ make_memcached_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_memcached_deploy_env|default({})), **(make_memcached_deploy_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_all.yml0000644000175000017500000000142615133657453033156 0ustar zuulzuul--- - name: Debug make_all_env when: make_all_env is defined ansible.builtin.debug: var: make_all_env - name: Debug make_all_params when: make_all_params is defined ansible.builtin.debug: var: make_all_params - name: Run all retries: "{{ make_all_retries | default(omit) }}" delay: "{{ make_all_delay | default(omit) }}" until: "{{ make_all_until | default(true) }}" register: "make_all_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ 
'/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make all" dry_run: "{{ make_all_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_all_env|default({})), **(make_all_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_help.yml0000644000175000017500000000145615133657453033341 0ustar zuulzuul--- - name: Debug make_help_env when: make_help_env is defined ansible.builtin.debug: var: make_help_env - name: Debug make_help_params when: make_help_params is defined ansible.builtin.debug: var: make_help_params - name: Run help retries: "{{ make_help_retries | default(omit) }}" delay: "{{ make_help_delay | default(omit) }}" until: "{{ make_help_until | default(true) }}" register: "make_help_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make help" dry_run: "{{ make_help_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_help_env|default({})), **(make_help_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cleanup.0000644000175000017500000000152215133657453033310 0ustar zuulzuul--- - name: Debug make_cleanup_env when: make_cleanup_env is defined ansible.builtin.debug: var: make_cleanup_env - name: Debug make_cleanup_params when: make_cleanup_params is defined ansible.builtin.debug: var: make_cleanup_params - name: Run cleanup retries: "{{ make_cleanup_retries | default(omit) }}" delay: "{{ make_cleanup_delay | default(omit) }}" until: "{{ make_cleanup_until | default(true) }}" register: "make_cleanup_status" 
cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cleanup" dry_run: "{{ make_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cleanup_env|default({})), **(make_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_deploy_c0000644000175000017500000000167315133657453033410 0ustar zuulzuul--- - name: Debug make_deploy_cleanup_env when: make_deploy_cleanup_env is defined ansible.builtin.debug: var: make_deploy_cleanup_env - name: Debug make_deploy_cleanup_params when: make_deploy_cleanup_params is defined ansible.builtin.debug: var: make_deploy_cleanup_params - name: Run deploy_cleanup retries: "{{ make_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_deploy_cleanup_delay | default(omit) }}" until: "{{ make_deploy_cleanup_until | default(true) }}" register: "make_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make deploy_cleanup" dry_run: "{{ make_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_deploy_cleanup_env|default({})), **(make_deploy_cleanup_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_wait.yml0000644000175000017500000000144515133657453033353 0ustar zuulzuul--- - name: Debug make_wait_env when: make_wait_env is defined ansible.builtin.debug: var: make_wait_env - name: Debug make_wait_params when: make_wait_params is defined ansible.builtin.debug: var: 
make_wait_params - name: Run wait retries: "{{ make_wait_retries | default(omit) }}" delay: "{{ make_wait_delay | default(omit) }}" until: "{{ make_wait_until | default(true) }}" register: "make_wait_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make wait" dry_run: "{{ make_wait_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_wait_env|default({})), **(make_wait_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000161615133657453033425 0ustar zuulzuul--- - name: Debug make_crc_storage_env when: make_crc_storage_env is defined ansible.builtin.debug: var: make_crc_storage_env - name: Debug make_crc_storage_params when: make_crc_storage_params is defined ansible.builtin.debug: var: make_crc_storage_params - name: Run crc_storage retries: "{{ make_crc_storage_retries | default(omit) }}" delay: "{{ make_crc_storage_delay | default(omit) }}" until: "{{ make_crc_storage_until | default(true) }}" register: "make_crc_storage_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage" dry_run: "{{ make_crc_storage_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_env|default({})), **(make_crc_storage_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000200615133657453033417 0ustar zuulzuul--- - name: Debug make_crc_storage_cleanup_env when: make_crc_storage_cleanup_env is defined ansible.builtin.debug: var: make_crc_storage_cleanup_env - name: Debug make_crc_storage_cleanup_params when: make_crc_storage_cleanup_params is defined ansible.builtin.debug: var: make_crc_storage_cleanup_params - name: Run crc_storage_cleanup retries: "{{ make_crc_storage_cleanup_retries | default(omit) }}" delay: "{{ make_crc_storage_cleanup_delay | default(omit) }}" until: "{{ make_crc_storage_cleanup_until | default(true) }}" register: "make_crc_storage_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage_cleanup" dry_run: "{{ make_crc_storage_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_cleanup_env|default({})), **(make_crc_storage_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_release.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000200615133657453033417 0ustar zuulzuul--- - name: Debug make_crc_storage_release_env when: make_crc_storage_release_env is defined ansible.builtin.debug: var: make_crc_storage_release_env - name: Debug make_crc_storage_release_params when: make_crc_storage_release_params is defined ansible.builtin.debug: var: make_crc_storage_release_params - name: Run crc_storage_release retries: "{{ 
make_crc_storage_release_retries | default(omit) }}" delay: "{{ make_crc_storage_release_delay | default(omit) }}" until: "{{ make_crc_storage_release_until | default(true) }}" register: "make_crc_storage_release_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage_release" dry_run: "{{ make_crc_storage_release_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_release_env|default({})), **(make_crc_storage_release_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_with_retries.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000212115133657453033415 0ustar zuulzuul--- - name: Debug make_crc_storage_with_retries_env when: make_crc_storage_with_retries_env is defined ansible.builtin.debug: var: make_crc_storage_with_retries_env - name: Debug make_crc_storage_with_retries_params when: make_crc_storage_with_retries_params is defined ansible.builtin.debug: var: make_crc_storage_with_retries_params - name: Run crc_storage_with_retries retries: "{{ make_crc_storage_with_retries_retries | default(omit) }}" delay: "{{ make_crc_storage_with_retries_delay | default(omit) }}" until: "{{ make_crc_storage_with_retries_until | default(true) }}" register: "make_crc_storage_with_retries_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage_with_retries" dry_run: "{{ make_crc_storage_with_retries_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_with_retries_env|default({})), 
**(make_crc_storage_with_retries_params|default({}))) }}" ././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_cleanup_with_retries.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000231115133657453033416 0ustar zuulzuul--- - name: Debug make_crc_storage_cleanup_with_retries_env when: make_crc_storage_cleanup_with_retries_env is defined ansible.builtin.debug: var: make_crc_storage_cleanup_with_retries_env - name: Debug make_crc_storage_cleanup_with_retries_params when: make_crc_storage_cleanup_with_retries_params is defined ansible.builtin.debug: var: make_crc_storage_cleanup_with_retries_params - name: Run crc_storage_cleanup_with_retries retries: "{{ make_crc_storage_cleanup_with_retries_retries | default(omit) }}" delay: "{{ make_crc_storage_cleanup_with_retries_delay | default(omit) }}" until: "{{ make_crc_storage_cleanup_with_retries_until | default(true) }}" register: "make_crc_storage_cleanup_with_retries_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage_cleanup_with_retries" dry_run: "{{ make_crc_storage_cleanup_with_retries_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_cleanup_with_retries_env|default({})), **(make_crc_storage_cleanup_with_retries_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_operator_namespace.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_operator0000644000175000017500000000176715133657453033451 0ustar zuulzuul--- - name: Debug 
make_operator_namespace_env when: make_operator_namespace_env is defined ansible.builtin.debug: var: make_operator_namespace_env - name: Debug make_operator_namespace_params when: make_operator_namespace_params is defined ansible.builtin.debug: var: make_operator_namespace_params - name: Run operator_namespace retries: "{{ make_operator_namespace_retries | default(omit) }}" delay: "{{ make_operator_namespace_delay | default(omit) }}" until: "{{ make_operator_namespace_until | default(true) }}" register: "make_operator_namespace_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make operator_namespace" dry_run: "{{ make_operator_namespace_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_operator_namespace_env|default({})), **(make_operator_namespace_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespace.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespac0000644000175000017500000000156015133657453033374 0ustar zuulzuul--- - name: Debug make_namespace_env when: make_namespace_env is defined ansible.builtin.debug: var: make_namespace_env - name: Debug make_namespace_params when: make_namespace_params is defined ansible.builtin.debug: var: make_namespace_params - name: Run namespace retries: "{{ make_namespace_retries | default(omit) }}" delay: "{{ make_namespace_delay | default(omit) }}" until: "{{ make_namespace_until | default(true) }}" register: "make_namespace_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make namespace" dry_run: "{{ 
make_namespace_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_namespace_env|default({})), **(make_namespace_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespace_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespac0000644000175000017500000000175015133657453033375 0ustar zuulzuul--- - name: Debug make_namespace_cleanup_env when: make_namespace_cleanup_env is defined ansible.builtin.debug: var: make_namespace_cleanup_env - name: Debug make_namespace_cleanup_params when: make_namespace_cleanup_params is defined ansible.builtin.debug: var: make_namespace_cleanup_params - name: Run namespace_cleanup retries: "{{ make_namespace_cleanup_retries | default(omit) }}" delay: "{{ make_namespace_cleanup_delay | default(omit) }}" until: "{{ make_namespace_cleanup_until | default(true) }}" register: "make_namespace_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make namespace_cleanup" dry_run: "{{ make_namespace_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_namespace_cleanup_env|default({})), **(make_namespace_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input.ym0000644000175000017500000000146415133657453033373 0ustar zuulzuul--- - name: Debug make_input_env when: make_input_env is defined ansible.builtin.debug: var: make_input_env - name: Debug make_input_params when: make_input_params is defined ansible.builtin.debug: 
var: make_input_params - name: Run input retries: "{{ make_input_retries | default(omit) }}" delay: "{{ make_input_delay | default(omit) }}" until: "{{ make_input_until | default(true) }}" register: "make_input_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make input" dry_run: "{{ make_input_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_input_env|default({})), **(make_input_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input_cl0000644000175000017500000000165415133657453033426 0ustar zuulzuul--- - name: Debug make_input_cleanup_env when: make_input_cleanup_env is defined ansible.builtin.debug: var: make_input_cleanup_env - name: Debug make_input_cleanup_params when: make_input_cleanup_params is defined ansible.builtin.debug: var: make_input_cleanup_params - name: Run input_cleanup retries: "{{ make_input_cleanup_retries | default(omit) }}" delay: "{{ make_input_cleanup_delay | default(omit) }}" until: "{{ make_input_cleanup_until | default(true) }}" register: "make_input_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make input_cleanup" dry_run: "{{ make_input_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_input_cleanup_env|default({})), **(make_input_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_setup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_0000644000175000017500000000165415133657453033354 0ustar zuulzuul--- - name: Debug make_crc_bmo_setup_env when: make_crc_bmo_setup_env is defined ansible.builtin.debug: var: make_crc_bmo_setup_env - name: Debug make_crc_bmo_setup_params when: make_crc_bmo_setup_params is defined ansible.builtin.debug: var: make_crc_bmo_setup_params - name: Run crc_bmo_setup retries: "{{ make_crc_bmo_setup_retries | default(omit) }}" delay: "{{ make_crc_bmo_setup_delay | default(omit) }}" until: "{{ make_crc_bmo_setup_until | default(true) }}" register: "make_crc_bmo_setup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_bmo_setup" dry_run: "{{ make_crc_bmo_setup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_bmo_setup_env|default({})), **(make_crc_bmo_setup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_0000644000175000017500000000171215133657453033347 0ustar zuulzuul--- - name: Debug make_crc_bmo_cleanup_env when: make_crc_bmo_cleanup_env is defined ansible.builtin.debug: var: make_crc_bmo_cleanup_env - name: Debug make_crc_bmo_cleanup_params when: make_crc_bmo_cleanup_params is defined ansible.builtin.debug: var: make_crc_bmo_cleanup_params - name: Run crc_bmo_cleanup retries: "{{ make_crc_bmo_cleanup_retries | default(omit) }}" delay: "{{ make_crc_bmo_cleanup_delay | default(omit) }}" until: "{{ make_crc_bmo_cleanup_until | 
default(true) }}" register: "make_crc_bmo_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_bmo_cleanup" dry_run: "{{ make_crc_bmo_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_bmo_cleanup_env|default({})), **(make_crc_bmo_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315133657453033426 0ustar zuulzuul--- - name: Debug make_openstack_prep_env when: make_openstack_prep_env is defined ansible.builtin.debug: var: make_openstack_prep_env - name: Debug make_openstack_prep_params when: make_openstack_prep_params is defined ansible.builtin.debug: var: make_openstack_prep_params - name: Run openstack_prep retries: "{{ make_openstack_prep_retries | default(omit) }}" delay: "{{ make_openstack_prep_delay | default(omit) }}" until: "{{ make_openstack_prep_until | default(true) }}" register: "make_openstack_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_prep" dry_run: "{{ make_openstack_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_prep_env|default({})), **(make_openstack_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000156015133657453033421 0ustar zuulzuul--- - name: Debug make_openstack_env when: make_openstack_env is defined ansible.builtin.debug: var: make_openstack_env - name: Debug make_openstack_params when: make_openstack_params is defined ansible.builtin.debug: var: make_openstack_params - name: Run openstack retries: "{{ make_openstack_retries | default(omit) }}" delay: "{{ make_openstack_delay | default(omit) }}" until: "{{ make_openstack_until | default(true) }}" register: "make_openstack_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack" dry_run: "{{ make_openstack_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_env|default({})), **(make_openstack_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_wait.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315133657453033426 0ustar zuulzuul--- - name: Debug make_openstack_wait_env when: make_openstack_wait_env is defined ansible.builtin.debug: var: make_openstack_wait_env - name: Debug make_openstack_wait_params when: make_openstack_wait_params is defined ansible.builtin.debug: var: make_openstack_wait_params - name: Run openstack_wait retries: "{{ make_openstack_wait_retries | default(omit) }}" delay: "{{ make_openstack_wait_delay | default(omit) }}" until: "{{ make_openstack_wait_until | default(true) }}" register: "make_openstack_wait_status" 
cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_wait" dry_run: "{{ make_openstack_wait_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_wait_env|default({})), **(make_openstack_wait_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_init.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315133657453033426 0ustar zuulzuul--- - name: Debug make_openstack_init_env when: make_openstack_init_env is defined ansible.builtin.debug: var: make_openstack_init_env - name: Debug make_openstack_init_params when: make_openstack_init_params is defined ansible.builtin.debug: var: make_openstack_init_params - name: Run openstack_init retries: "{{ make_openstack_init_retries | default(omit) }}" delay: "{{ make_openstack_init_delay | default(omit) }}" until: "{{ make_openstack_init_until | default(true) }}" register: "make_openstack_init_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_init" dry_run: "{{ make_openstack_init_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_init_env|default({})), **(make_openstack_init_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000175015133657453033422 
0ustar zuulzuul--- - name: Debug make_openstack_cleanup_env when: make_openstack_cleanup_env is defined ansible.builtin.debug: var: make_openstack_cleanup_env - name: Debug make_openstack_cleanup_params when: make_openstack_cleanup_params is defined ansible.builtin.debug: var: make_openstack_cleanup_params - name: Run openstack_cleanup retries: "{{ make_openstack_cleanup_retries | default(omit) }}" delay: "{{ make_openstack_cleanup_delay | default(omit) }}" until: "{{ make_openstack_cleanup_until | default(true) }}" register: "make_openstack_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_cleanup" dry_run: "{{ make_openstack_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_cleanup_env|default({})), **(make_openstack_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_repo.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315133657453033426 0ustar zuulzuul--- - name: Debug make_openstack_repo_env when: make_openstack_repo_env is defined ansible.builtin.debug: var: make_openstack_repo_env - name: Debug make_openstack_repo_params when: make_openstack_repo_params is defined ansible.builtin.debug: var: make_openstack_repo_params - name: Run openstack_repo retries: "{{ make_openstack_repo_retries | default(omit) }}" delay: "{{ make_openstack_repo_delay | default(omit) }}" until: "{{ make_openstack_repo_until | default(true) }}" register: "make_openstack_repo_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_repo" dry_run: "{{ make_openstack_repo_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_repo_env|default({})), **(make_openstack_repo_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000204415133657453033417 0ustar zuulzuul--- - name: Debug make_openstack_deploy_prep_env when: make_openstack_deploy_prep_env is defined ansible.builtin.debug: var: make_openstack_deploy_prep_env - name: Debug make_openstack_deploy_prep_params when: make_openstack_deploy_prep_params is defined ansible.builtin.debug: var: make_openstack_deploy_prep_params - name: Run openstack_deploy_prep retries: "{{ make_openstack_deploy_prep_retries | default(omit) }}" delay: "{{ make_openstack_deploy_prep_delay | default(omit) }}" until: "{{ make_openstack_deploy_prep_until | default(true) }}" register: "make_openstack_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_deploy_prep" dry_run: "{{ make_openstack_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_deploy_prep_env|default({})), **(make_openstack_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000173115133657453033421 0ustar 
---
- name: Debug make_openstack_deploy_env
  when: make_openstack_deploy_env is defined
  ansible.builtin.debug:
    var: make_openstack_deploy_env

- name: Debug make_openstack_deploy_params
  when: make_openstack_deploy_params is defined
  ansible.builtin.debug:
    var: make_openstack_deploy_params

- name: Run openstack_deploy
  retries: "{{ make_openstack_deploy_retries | default(omit) }}"
  delay: "{{ make_openstack_deploy_delay | default(omit) }}"
  until: "{{ make_openstack_deploy_until | default(true) }}"
  register: "make_openstack_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_deploy"
    dry_run: "{{ make_openstack_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_deploy_env|default({})), **(make_openstack_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_wait_deploy.yml
---
- name: Debug make_openstack_wait_deploy_env
  when: make_openstack_wait_deploy_env is defined
  ansible.builtin.debug:
    var: make_openstack_wait_deploy_env

- name: Debug make_openstack_wait_deploy_params
  when: make_openstack_wait_deploy_params is defined
  ansible.builtin.debug:
    var: make_openstack_wait_deploy_params

- name: Run openstack_wait_deploy
  retries: "{{ make_openstack_wait_deploy_retries | default(omit) }}"
  delay: "{{ make_openstack_wait_deploy_delay | default(omit) }}"
  until: "{{ make_openstack_wait_deploy_until | default(true) }}"
  register: "make_openstack_wait_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_wait_deploy"
    dry_run: "{{ make_openstack_wait_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_wait_deploy_env|default({})), **(make_openstack_wait_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy_cleanup.yml
---
- name: Debug make_openstack_deploy_cleanup_env
  when: make_openstack_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_openstack_deploy_cleanup_env

- name: Debug make_openstack_deploy_cleanup_params
  when: make_openstack_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_openstack_deploy_cleanup_params

- name: Run openstack_deploy_cleanup
  retries: "{{ make_openstack_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_openstack_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_openstack_deploy_cleanup_until | default(true) }}"
  register: "make_openstack_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_deploy_cleanup"
    dry_run: "{{ make_openstack_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_deploy_cleanup_env|default({})), **(make_openstack_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_update_run.yml
---
- name: Debug make_openstack_update_run_env
  when: make_openstack_update_run_env is defined
  ansible.builtin.debug:
    var: make_openstack_update_run_env

- name: Debug make_openstack_update_run_params
  when: make_openstack_update_run_params is defined
  ansible.builtin.debug:
    var: make_openstack_update_run_params

- name: Run openstack_update_run
  retries: "{{ make_openstack_update_run_retries | default(omit) }}"
  delay: "{{ make_openstack_update_run_delay | default(omit) }}"
  until: "{{ make_openstack_update_run_until | default(true) }}"
  register: "make_openstack_update_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_update_run"
    dry_run: "{{ make_openstack_update_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_update_run_env|default({})), **(make_openstack_update_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_services.yml
---
- name: Debug make_update_services_env
  when: make_update_services_env is defined
  ansible.builtin.debug:
    var: make_update_services_env

- name: Debug make_update_services_params
  when: make_update_services_params is defined
  ansible.builtin.debug:
    var: make_update_services_params

- name: Run update_services
  retries: "{{ make_update_services_retries | default(omit) }}"
  delay: "{{ make_update_services_delay | default(omit) }}"
  until: "{{ make_update_services_until | default(true) }}"
  register: "make_update_services_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make update_services"
    dry_run: "{{ make_update_services_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_update_services_env|default({})), **(make_update_services_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_system.yml
---
- name: Debug make_update_system_env
  when: make_update_system_env is defined
  ansible.builtin.debug:
    var: make_update_system_env

- name: Debug make_update_system_params
  when: make_update_system_params is defined
  ansible.builtin.debug:
    var: make_update_system_params

- name: Run update_system
  retries: "{{ make_update_system_retries | default(omit) }}"
  delay: "{{ make_update_system_delay | default(omit) }}"
  until: "{{ make_update_system_until | default(true) }}"
  register: "make_update_system_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make update_system"
    dry_run: "{{ make_update_system_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_update_system_env|default({})), **(make_update_system_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_patch_version.yml
---
- name: Debug make_openstack_patch_version_env
  when: make_openstack_patch_version_env is defined
  ansible.builtin.debug:
    var: make_openstack_patch_version_env

- name: Debug make_openstack_patch_version_params
  when: make_openstack_patch_version_params is defined
  ansible.builtin.debug:
    var: make_openstack_patch_version_params

- name: Run openstack_patch_version
  retries: "{{ make_openstack_patch_version_retries | default(omit) }}"
  delay: "{{ make_openstack_patch_version_delay | default(omit) }}"
  until: "{{ make_openstack_patch_version_until | default(true) }}"
  register: "make_openstack_patch_version_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_patch_version"
    dry_run: "{{ make_openstack_patch_version_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_patch_version_env|default({})), **(make_openstack_patch_version_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_generate_keys.yml
---
- name: Debug make_edpm_deploy_generate_keys_env
  when: make_edpm_deploy_generate_keys_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_generate_keys_env

- name: Debug make_edpm_deploy_generate_keys_params
  when: make_edpm_deploy_generate_keys_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_generate_keys_params

- name: Run edpm_deploy_generate_keys
  retries: "{{ make_edpm_deploy_generate_keys_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_generate_keys_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_generate_keys_until | default(true) }}"
  register: "make_edpm_deploy_generate_keys_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_generate_keys"
    dry_run: "{{ make_edpm_deploy_generate_keys_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_generate_keys_env|default({})), **(make_edpm_deploy_generate_keys_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_patch_ansible_runner_image.yml
---
- name: Debug make_edpm_patch_ansible_runner_image_env
  when: make_edpm_patch_ansible_runner_image_env is defined
  ansible.builtin.debug:
    var: make_edpm_patch_ansible_runner_image_env

- name: Debug make_edpm_patch_ansible_runner_image_params
  when: make_edpm_patch_ansible_runner_image_params is defined
  ansible.builtin.debug:
    var: make_edpm_patch_ansible_runner_image_params

- name: Run edpm_patch_ansible_runner_image
  retries: "{{ make_edpm_patch_ansible_runner_image_retries | default(omit) }}"
  delay: "{{ make_edpm_patch_ansible_runner_image_delay | default(omit) }}"
  until: "{{ make_edpm_patch_ansible_runner_image_until | default(true) }}"
  register: "make_edpm_patch_ansible_runner_image_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_patch_ansible_runner_image"
    dry_run: "{{ make_edpm_patch_ansible_runner_image_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_patch_ansible_runner_image_env|default({})), **(make_edpm_patch_ansible_runner_image_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_prep.yml
---
- name: Debug make_edpm_deploy_prep_env
  when: make_edpm_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_prep_env

- name: Debug make_edpm_deploy_prep_params
  when: make_edpm_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_prep_params

- name: Run edpm_deploy_prep
  retries: "{{ make_edpm_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_prep_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_prep_until | default(true) }}"
  register: "make_edpm_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_prep"
    dry_run: "{{ make_edpm_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_prep_env|default({})), **(make_edpm_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_cleanup.yml
---
- name: Debug make_edpm_deploy_cleanup_env
  when: make_edpm_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_cleanup_env

- name: Debug make_edpm_deploy_cleanup_params
  when: make_edpm_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_cleanup_params

- name: Run edpm_deploy_cleanup
  retries: "{{ make_edpm_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_cleanup_until | default(true) }}"
  register: "make_edpm_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_cleanup"
    dry_run: "{{ make_edpm_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_cleanup_env|default({})), **(make_edpm_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy.yml
---
- name: Debug make_edpm_deploy_env
  when: make_edpm_deploy_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_env

- name: Debug make_edpm_deploy_params
  when: make_edpm_deploy_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_params

- name: Run edpm_deploy
  retries: "{{ make_edpm_deploy_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_until | default(true) }}"
  register: "make_edpm_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy"
    dry_run: "{{ make_edpm_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_env|default({})), **(make_edpm_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_baremetal_prep.yml
---
- name: Debug make_edpm_deploy_baremetal_prep_env
  when: make_edpm_deploy_baremetal_prep_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_baremetal_prep_env

- name: Debug make_edpm_deploy_baremetal_prep_params
  when: make_edpm_deploy_baremetal_prep_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_baremetal_prep_params

- name: Run edpm_deploy_baremetal_prep
  retries: "{{ make_edpm_deploy_baremetal_prep_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_baremetal_prep_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_baremetal_prep_until | default(true) }}"
  register: "make_edpm_deploy_baremetal_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_baremetal_prep"
    dry_run: "{{ make_edpm_deploy_baremetal_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_baremetal_prep_env|default({})), **(make_edpm_deploy_baremetal_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_baremetal.yml
---
- name: Debug make_edpm_deploy_baremetal_env
  when: make_edpm_deploy_baremetal_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_baremetal_env

- name: Debug make_edpm_deploy_baremetal_params
  when: make_edpm_deploy_baremetal_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_baremetal_params

- name: Run edpm_deploy_baremetal
  retries: "{{ make_edpm_deploy_baremetal_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_baremetal_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_baremetal_until | default(true) }}"
  register: "make_edpm_deploy_baremetal_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_baremetal"
    dry_run: "{{ make_edpm_deploy_baremetal_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_baremetal_env|default({})), **(make_edpm_deploy_baremetal_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wait_deploy_baremetal.yml
---
- name: Debug make_edpm_wait_deploy_baremetal_env
  when: make_edpm_wait_deploy_baremetal_env is defined
  ansible.builtin.debug:
    var: make_edpm_wait_deploy_baremetal_env

- name: Debug make_edpm_wait_deploy_baremetal_params
  when: make_edpm_wait_deploy_baremetal_params is defined
  ansible.builtin.debug:
    var: make_edpm_wait_deploy_baremetal_params

- name: Run edpm_wait_deploy_baremetal
  retries: "{{ make_edpm_wait_deploy_baremetal_retries | default(omit) }}"
  delay: "{{ make_edpm_wait_deploy_baremetal_delay | default(omit) }}"
  until: "{{ make_edpm_wait_deploy_baremetal_until | default(true) }}"
  register: "make_edpm_wait_deploy_baremetal_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_wait_deploy_baremetal"
    dry_run: "{{ make_edpm_wait_deploy_baremetal_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_wait_deploy_baremetal_env|default({})), **(make_edpm_wait_deploy_baremetal_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wait_deploy.yml
---
- name: Debug make_edpm_wait_deploy_env
  when: make_edpm_wait_deploy_env is defined
  ansible.builtin.debug:
    var: make_edpm_wait_deploy_env

- name: Debug make_edpm_wait_deploy_params
  when: make_edpm_wait_deploy_params is defined
  ansible.builtin.debug:
    var: make_edpm_wait_deploy_params

- name: Run edpm_wait_deploy
  retries: "{{ make_edpm_wait_deploy_retries | default(omit) }}"
  delay: "{{ make_edpm_wait_deploy_delay | default(omit) }}"
  until: "{{ make_edpm_wait_deploy_until | default(true) }}"
  register: "make_edpm_wait_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_wait_deploy"
    dry_run: "{{ make_edpm_wait_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_wait_deploy_env|default({})), **(make_edpm_wait_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_register_dns.yml
---
- name: Debug make_edpm_register_dns_env
  when: make_edpm_register_dns_env is defined
  ansible.builtin.debug:
    var: make_edpm_register_dns_env

- name: Debug make_edpm_register_dns_params
  when: make_edpm_register_dns_params is defined
  ansible.builtin.debug:
    var: make_edpm_register_dns_params

- name: Run edpm_register_dns
  retries: "{{ make_edpm_register_dns_retries | default(omit) }}"
  delay: "{{ make_edpm_register_dns_delay | default(omit) }}"
  until: "{{ make_edpm_register_dns_until | default(true) }}"
  register: "make_edpm_register_dns_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_register_dns"
    dry_run: "{{ make_edpm_register_dns_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_register_dns_env|default({})), **(make_edpm_register_dns_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_nova_discover_hosts.yml
---
- name: Debug make_edpm_nova_discover_hosts_env
  when: make_edpm_nova_discover_hosts_env is defined
  ansible.builtin.debug:
    var: make_edpm_nova_discover_hosts_env

- name: Debug make_edpm_nova_discover_hosts_params
  when: make_edpm_nova_discover_hosts_params is defined
  ansible.builtin.debug:
    var: make_edpm_nova_discover_hosts_params

- name: Run edpm_nova_discover_hosts
  retries: "{{ make_edpm_nova_discover_hosts_retries | default(omit) }}"
  delay: "{{ make_edpm_nova_discover_hosts_delay | default(omit) }}"
  until: "{{ make_edpm_nova_discover_hosts_until | default(true) }}"
  register: "make_edpm_nova_discover_hosts_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_nova_discover_hosts"
    dry_run: "{{ make_edpm_nova_discover_hosts_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_nova_discover_hosts_env|default({})), **(make_edpm_nova_discover_hosts_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_crds.yml
---
- name: Debug make_openstack_crds_env
  when: make_openstack_crds_env is defined
  ansible.builtin.debug:
    var: make_openstack_crds_env

- name: Debug make_openstack_crds_params
  when: make_openstack_crds_params is defined
  ansible.builtin.debug:
    var: make_openstack_crds_params

- name: Run openstack_crds
  retries: "{{ make_openstack_crds_retries | default(omit) }}"
  delay: "{{ make_openstack_crds_delay | default(omit) }}"
  until: "{{ make_openstack_crds_until | default(true) }}"
  register: "make_openstack_crds_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_crds"
    dry_run: "{{ make_openstack_crds_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_crds_env|default({})), **(make_openstack_crds_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_crds_cleanup.yml
---
- name: Debug make_openstack_crds_cleanup_env
  when: make_openstack_crds_cleanup_env is defined
  ansible.builtin.debug:
    var: make_openstack_crds_cleanup_env

- name: Debug make_openstack_crds_cleanup_params
  when: make_openstack_crds_cleanup_params is defined
  ansible.builtin.debug:
    var: make_openstack_crds_cleanup_params

- name: Run openstack_crds_cleanup
  retries: "{{ make_openstack_crds_cleanup_retries | default(omit) }}"
  delay: "{{ make_openstack_crds_cleanup_delay | default(omit) }}"
  until: "{{ make_openstack_crds_cleanup_until | default(true) }}"
  register: "make_openstack_crds_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_crds_cleanup"
    dry_run: "{{ make_openstack_crds_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_crds_cleanup_env|default({})), **(make_openstack_crds_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker_prep.yml
---
- name: Debug make_edpm_deploy_networker_prep_env
  when: make_edpm_deploy_networker_prep_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_prep_env

- name: Debug make_edpm_deploy_networker_prep_params
  when: make_edpm_deploy_networker_prep_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_prep_params

- name: Run edpm_deploy_networker_prep
  retries: "{{ make_edpm_deploy_networker_prep_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_networker_prep_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_networker_prep_until | default(true) }}"
  register: "make_edpm_deploy_networker_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_networker_prep"
    dry_run: "{{ make_edpm_deploy_networker_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_networker_prep_env|default({})), **(make_edpm_deploy_networker_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker_cleanup.yml
---
- name: Debug make_edpm_deploy_networker_cleanup_env
  when: make_edpm_deploy_networker_cleanup_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_cleanup_env

- name: Debug make_edpm_deploy_networker_cleanup_params
  when: make_edpm_deploy_networker_cleanup_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_cleanup_params

- name: Run edpm_deploy_networker_cleanup
  retries: "{{ make_edpm_deploy_networker_cleanup_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_networker_cleanup_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_networker_cleanup_until | default(true) }}"
  register: "make_edpm_deploy_networker_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_networker_cleanup"
    dry_run: "{{ make_edpm_deploy_networker_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_networker_cleanup_env|default({})), **(make_edpm_deploy_networker_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker.yml
---
- name: Debug make_edpm_deploy_networker_env
  when: make_edpm_deploy_networker_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_env

- name: Debug make_edpm_deploy_networker_params
  when: make_edpm_deploy_networker_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_params

- name: Run edpm_deploy_networker
  retries: "{{ make_edpm_deploy_networker_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_networker_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_networker_until | default(true) }}"
  register: "make_edpm_deploy_networker_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_networker"
    dry_run: "{{ make_edpm_deploy_networker_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_networker_env|default({})), **(make_edpm_deploy_networker_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_prep.yml
---
- name: Debug make_infra_prep_env
  when: make_infra_prep_env is defined
  ansible.builtin.debug:
    var: make_infra_prep_env

- name: Debug make_infra_prep_params
  when: make_infra_prep_params is defined
  ansible.builtin.debug:
    var: make_infra_prep_params

- name: Run infra_prep
  retries: "{{ make_infra_prep_retries | default(omit) }}"
  delay: "{{ make_infra_prep_delay | default(omit) }}"
  until: "{{ make_infra_prep_until | default(true) }}"
  register: "make_infra_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra_prep"
    dry_run: "{{ make_infra_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_infra_prep_env|default({})), **(make_infra_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra.yml
---
- name: Debug make_infra_env
  when: make_infra_env is defined
  ansible.builtin.debug:
    var: make_infra_env

- name: Debug make_infra_params
  when: make_infra_params is defined
  ansible.builtin.debug:
    var: make_infra_params

- name: Run infra
  retries: "{{ make_infra_retries | default(omit) }}"
  delay: "{{ make_infra_delay | default(omit) }}"
  until: "{{ make_infra_until | default(true) }}"
  register: "make_infra_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra"
    dry_run: "{{ make_infra_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_infra_env|default({})), **(make_infra_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_cleanup.yml
---
- name: Debug make_infra_cleanup_env
  when: make_infra_cleanup_env is defined
  ansible.builtin.debug:
    var: make_infra_cleanup_env

- name: Debug make_infra_cleanup_params
  when: make_infra_cleanup_params is defined
  ansible.builtin.debug:
    var: make_infra_cleanup_params

- name: Run infra_cleanup
  retries: "{{ make_infra_cleanup_retries | default(omit) }}"
  delay: "{{ make_infra_cleanup_delay | default(omit) }}"
  until: "{{ make_infra_cleanup_until | default(true) }}"
  register: "make_infra_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra_cleanup"
    dry_run: "{{ make_infra_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_infra_cleanup_env|default({})), **(make_infra_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy_prep.yml
---
- name: Debug make_dns_deploy_prep_env
  when: make_dns_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_dns_deploy_prep_env

- name: Debug make_dns_deploy_prep_params
  when: make_dns_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_dns_deploy_prep_params

- name: Run dns_deploy_prep
  retries: "{{ make_dns_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_dns_deploy_prep_delay | default(omit) }}"
  until: "{{ make_dns_deploy_prep_until | default(true) }}"
  register: "make_dns_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make dns_deploy_prep"
    dry_run: "{{ make_dns_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_dns_deploy_prep_env|default({})), **(make_dns_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy.yml
---
- name: Debug make_dns_deploy_env
  when: make_dns_deploy_env is defined
  ansible.builtin.debug:
    var: make_dns_deploy_env

- name: Debug make_dns_deploy_params
  when: make_dns_deploy_params is defined
  ansible.builtin.debug:
    var: make_dns_deploy_params

- name: Run dns_deploy
  retries: "{{ make_dns_deploy_retries | default(omit) }}"
  delay: "{{ make_dns_deploy_delay | default(omit) }}"
  until: "{{ make_dns_deploy_until | default(true) }}"
  register: "make_dns_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make dns_deploy"
    dry_run: "{{ make_dns_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_dns_deploy_env|default({})), **(make_dns_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy_cleanup.yml
---
- name: Debug make_dns_deploy_cleanup_env
  when: make_dns_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_dns_deploy_cleanup_env

- name: Debug make_dns_deploy_cleanup_params
  when: make_dns_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_dns_deploy_cleanup_params

- name: Run dns_deploy_cleanup
  retries: "{{ make_dns_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_dns_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_dns_deploy_cleanup_until | default(true) }}"
  register: "make_dns_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make dns_deploy_cleanup"
    dry_run: "{{ make_dns_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_dns_deploy_cleanup_env|default({})), **(make_dns_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy_cleanup.yml
---
- name: Debug make_memcached_deploy_cleanup_env
  when: make_memcached_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_memcached_deploy_cleanup_env

- name: Debug make_memcached_deploy_cleanup_params
  when: make_memcached_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_memcached_deploy_cleanup_params

- name: Run memcached_deploy_cleanup
  retries: "{{ make_memcached_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_memcached_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_memcached_deploy_cleanup_until | default(true) }}"
  register: "make_memcached_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make memcached_deploy_cleanup"
    dry_run: "{{ make_memcached_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_memcached_deploy_cleanup_env|default({})), **(make_memcached_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_prep.yml
---
- name: Debug make_keystone_prep_env
  when: make_keystone_prep_env is defined
  ansible.builtin.debug:
    var: make_keystone_prep_env

- name: Debug make_keystone_prep_params
  when: make_keystone_prep_params is defined
  ansible.builtin.debug:
    var: make_keystone_prep_params

- name: Run keystone_prep
  retries: "{{ make_keystone_prep_retries | default(omit) }}"
  delay: "{{ make_keystone_prep_delay | default(omit) }}"
  until: "{{ make_keystone_prep_until | default(true) }}"
  register: "make_keystone_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make keystone_prep"
    dry_run: "{{ make_keystone_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_keystone_prep_env|default({})),
**(make_keystone_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000154115133657453033445 0ustar zuulzuul--- - name: Debug make_keystone_env when: make_keystone_env is defined ansible.builtin.debug: var: make_keystone_env - name: Debug make_keystone_params when: make_keystone_params is defined ansible.builtin.debug: var: make_keystone_params - name: Run keystone retries: "{{ make_keystone_retries | default(omit) }}" delay: "{{ make_keystone_delay | default(omit) }}" until: "{{ make_keystone_until | default(true) }}" register: "make_keystone_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone" dry_run: "{{ make_keystone_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_env|default({})), **(make_keystone_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000173115133657453033446 0ustar zuulzuul--- - name: Debug make_keystone_cleanup_env when: make_keystone_cleanup_env is defined ansible.builtin.debug: var: make_keystone_cleanup_env - name: Debug make_keystone_cleanup_params when: make_keystone_cleanup_params is defined ansible.builtin.debug: var: make_keystone_cleanup_params - name: Run keystone_cleanup retries: "{{ make_keystone_cleanup_retries | default(omit) }}" delay: "{{ make_keystone_cleanup_delay | 
default(omit) }}" until: "{{ make_keystone_cleanup_until | default(true) }}" register: "make_keystone_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_cleanup" dry_run: "{{ make_keystone_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_cleanup_env|default({})), **(make_keystone_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000202515133657453033443 0ustar zuulzuul--- - name: Debug make_keystone_deploy_prep_env when: make_keystone_deploy_prep_env is defined ansible.builtin.debug: var: make_keystone_deploy_prep_env - name: Debug make_keystone_deploy_prep_params when: make_keystone_deploy_prep_params is defined ansible.builtin.debug: var: make_keystone_deploy_prep_params - name: Run keystone_deploy_prep retries: "{{ make_keystone_deploy_prep_retries | default(omit) }}" delay: "{{ make_keystone_deploy_prep_delay | default(omit) }}" until: "{{ make_keystone_deploy_prep_until | default(true) }}" register: "make_keystone_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_deploy_prep" dry_run: "{{ make_keystone_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_deploy_prep_env|default({})), **(make_keystone_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000200615133657453033337 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_prep_env when: make_mariadb_deploy_prep_env is defined ansible.builtin.debug: var: make_mariadb_deploy_prep_env - name: Debug make_mariadb_deploy_prep_params when: make_mariadb_deploy_prep_params is defined ansible.builtin.debug: var: make_mariadb_deploy_prep_params - name: Run mariadb_deploy_prep retries: "{{ make_mariadb_deploy_prep_retries | default(omit) }}" delay: "{{ make_mariadb_deploy_prep_delay | default(omit) }}" until: "{{ make_mariadb_deploy_prep_until | default(true) }}" register: "make_mariadb_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy_prep" dry_run: "{{ make_mariadb_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_prep_env|default({})), **(make_mariadb_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000167315133657453033350 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_env when: make_mariadb_deploy_env is defined ansible.builtin.debug: var: make_mariadb_deploy_env - name: Debug make_mariadb_deploy_params when: make_mariadb_deploy_params is defined ansible.builtin.debug: var: make_mariadb_deploy_params - name: Run mariadb_deploy retries: "{{ make_mariadb_deploy_retries | default(omit) }}" delay: "{{ 
make_mariadb_deploy_delay | default(omit) }}" until: "{{ make_mariadb_deploy_until | default(true) }}" register: "make_mariadb_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy" dry_run: "{{ make_mariadb_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_env|default({})), **(make_mariadb_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000206315133657453033342 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_cleanup_env when: make_mariadb_deploy_cleanup_env is defined ansible.builtin.debug: var: make_mariadb_deploy_cleanup_env - name: Debug make_mariadb_deploy_cleanup_params when: make_mariadb_deploy_cleanup_params is defined ansible.builtin.debug: var: make_mariadb_deploy_cleanup_params - name: Run mariadb_deploy_cleanup retries: "{{ make_mariadb_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_mariadb_deploy_cleanup_delay | default(omit) }}" until: "{{ make_mariadb_deploy_cleanup_until | default(true) }}" register: "make_mariadb_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy_cleanup" dry_run: "{{ make_mariadb_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_cleanup_env|default({})), **(make_mariadb_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000167315133657453033376 0ustar zuulzuul--- - name: Debug make_placement_prep_env when: make_placement_prep_env is defined ansible.builtin.debug: var: make_placement_prep_env - name: Debug make_placement_prep_params when: make_placement_prep_params is defined ansible.builtin.debug: var: make_placement_prep_params - name: Run placement_prep retries: "{{ make_placement_prep_retries | default(omit) }}" delay: "{{ make_placement_prep_delay | default(omit) }}" until: "{{ make_placement_prep_until | default(true) }}" register: "make_placement_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_prep" dry_run: "{{ make_placement_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_prep_env|default({})), **(make_placement_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000156015133657453033371 0ustar zuulzuul--- - name: Debug make_placement_env when: make_placement_env is defined ansible.builtin.debug: var: make_placement_env - name: Debug make_placement_params when: make_placement_params is defined ansible.builtin.debug: var: make_placement_params - name: Run placement retries: "{{ make_placement_retries | default(omit) }}" delay: "{{ make_placement_delay | default(omit) }}" until: "{{ make_placement_until | default(true) }}" register: 
"make_placement_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement" dry_run: "{{ make_placement_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_env|default({})), **(make_placement_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000175015133657453033372 0ustar zuulzuul--- - name: Debug make_placement_cleanup_env when: make_placement_cleanup_env is defined ansible.builtin.debug: var: make_placement_cleanup_env - name: Debug make_placement_cleanup_params when: make_placement_cleanup_params is defined ansible.builtin.debug: var: make_placement_cleanup_params - name: Run placement_cleanup retries: "{{ make_placement_cleanup_retries | default(omit) }}" delay: "{{ make_placement_cleanup_delay | default(omit) }}" until: "{{ make_placement_cleanup_until | default(true) }}" register: "make_placement_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_cleanup" dry_run: "{{ make_placement_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_cleanup_env|default({})), **(make_placement_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000204415133657453033367 0ustar zuulzuul--- - name: Debug make_placement_deploy_prep_env when: make_placement_deploy_prep_env is defined ansible.builtin.debug: var: make_placement_deploy_prep_env - name: Debug make_placement_deploy_prep_params when: make_placement_deploy_prep_params is defined ansible.builtin.debug: var: make_placement_deploy_prep_params - name: Run placement_deploy_prep retries: "{{ make_placement_deploy_prep_retries | default(omit) }}" delay: "{{ make_placement_deploy_prep_delay | default(omit) }}" until: "{{ make_placement_deploy_prep_until | default(true) }}" register: "make_placement_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy_prep" dry_run: "{{ make_placement_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_prep_env|default({})), **(make_placement_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000173115133657453033371 0ustar zuulzuul--- - name: Debug make_placement_deploy_env when: make_placement_deploy_env is defined ansible.builtin.debug: var: make_placement_deploy_env - name: Debug make_placement_deploy_params when: make_placement_deploy_params is defined ansible.builtin.debug: var: make_placement_deploy_params - name: Run placement_deploy retries: "{{ 
make_placement_deploy_retries | default(omit) }}" delay: "{{ make_placement_deploy_delay | default(omit) }}" until: "{{ make_placement_deploy_until | default(true) }}" register: "make_placement_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy" dry_run: "{{ make_placement_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_env|default({})), **(make_placement_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000212115133657453033363 0ustar zuulzuul--- - name: Debug make_placement_deploy_cleanup_env when: make_placement_deploy_cleanup_env is defined ansible.builtin.debug: var: make_placement_deploy_cleanup_env - name: Debug make_placement_deploy_cleanup_params when: make_placement_deploy_cleanup_params is defined ansible.builtin.debug: var: make_placement_deploy_cleanup_params - name: Run placement_deploy_cleanup retries: "{{ make_placement_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_placement_deploy_cleanup_delay | default(omit) }}" until: "{{ make_placement_deploy_cleanup_until | default(true) }}" register: "make_placement_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy_cleanup" dry_run: "{{ make_placement_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_cleanup_env|default({})), 
**(make_placement_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_p0000644000175000017500000000161615133657453033357 0ustar zuulzuul--- - name: Debug make_glance_prep_env when: make_glance_prep_env is defined ansible.builtin.debug: var: make_glance_prep_env - name: Debug make_glance_prep_params when: make_glance_prep_params is defined ansible.builtin.debug: var: make_glance_prep_params - name: Run glance_prep retries: "{{ make_glance_prep_retries | default(omit) }}" delay: "{{ make_glance_prep_delay | default(omit) }}" until: "{{ make_glance_prep_until | default(true) }}" register: "make_glance_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_prep" dry_run: "{{ make_glance_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_prep_env|default({})), **(make_glance_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000014700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance.y0000644000175000017500000000150315133657453033302 0ustar zuulzuul--- - name: Debug make_glance_env when: make_glance_env is defined ansible.builtin.debug: var: make_glance_env - name: Debug make_glance_params when: make_glance_params is defined ansible.builtin.debug: var: make_glance_params - name: Run glance retries: "{{ make_glance_retries | default(omit) }}" delay: "{{ make_glance_delay | default(omit) }}" until: "{{ make_glance_until | 
default(true) }}" register: "make_glance_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance" dry_run: "{{ make_glance_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_env|default({})), **(make_glance_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_c0000644000175000017500000000167315133657453033345 0ustar zuulzuul--- - name: Debug make_glance_cleanup_env when: make_glance_cleanup_env is defined ansible.builtin.debug: var: make_glance_cleanup_env - name: Debug make_glance_cleanup_params when: make_glance_cleanup_params is defined ansible.builtin.debug: var: make_glance_cleanup_params - name: Run glance_cleanup retries: "{{ make_glance_cleanup_retries | default(omit) }}" delay: "{{ make_glance_cleanup_delay | default(omit) }}" until: "{{ make_glance_cleanup_until | default(true) }}" register: "make_glance_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_cleanup" dry_run: "{{ make_glance_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_cleanup_env|default({})), **(make_glance_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000176715133657453033352 0ustar zuulzuul--- - name: Debug make_glance_deploy_prep_env when: make_glance_deploy_prep_env is defined ansible.builtin.debug: var: make_glance_deploy_prep_env - name: Debug make_glance_deploy_prep_params when: make_glance_deploy_prep_params is defined ansible.builtin.debug: var: make_glance_deploy_prep_params - name: Run glance_deploy_prep retries: "{{ make_glance_deploy_prep_retries | default(omit) }}" delay: "{{ make_glance_deploy_prep_delay | default(omit) }}" until: "{{ make_glance_deploy_prep_until | default(true) }}" register: "make_glance_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy_prep" dry_run: "{{ make_glance_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_prep_env|default({})), **(make_glance_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000165415133657453033345 0ustar zuulzuul--- - name: Debug make_glance_deploy_env when: make_glance_deploy_env is defined ansible.builtin.debug: var: make_glance_deploy_env - name: Debug make_glance_deploy_params when: make_glance_deploy_params is defined ansible.builtin.debug: var: make_glance_deploy_params - name: Run glance_deploy retries: "{{ make_glance_deploy_retries | default(omit) }}" delay: "{{ make_glance_deploy_delay 
| default(omit) }}" until: "{{ make_glance_deploy_until | default(true) }}" register: "make_glance_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy" dry_run: "{{ make_glance_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_env|default({})), **(make_glance_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000204415133657453033337 0ustar zuulzuul--- - name: Debug make_glance_deploy_cleanup_env when: make_glance_deploy_cleanup_env is defined ansible.builtin.debug: var: make_glance_deploy_cleanup_env - name: Debug make_glance_deploy_cleanup_params when: make_glance_deploy_cleanup_params is defined ansible.builtin.debug: var: make_glance_deploy_cleanup_params - name: Run glance_deploy_cleanup retries: "{{ make_glance_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_glance_deploy_cleanup_delay | default(omit) }}" until: "{{ make_glance_deploy_cleanup_until | default(true) }}" register: "make_glance_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy_cleanup" dry_run: "{{ make_glance_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_cleanup_env|default({})), **(make_glance_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_prep0000644000175000017500000000154115133657453033434 0ustar zuulzuul--- - name: Debug make_ovn_prep_env when: make_ovn_prep_env is defined ansible.builtin.debug: var: make_ovn_prep_env - name: Debug make_ovn_prep_params when: make_ovn_prep_params is defined ansible.builtin.debug: var: make_ovn_prep_params - name: Run ovn_prep retries: "{{ make_ovn_prep_retries | default(omit) }}" delay: "{{ make_ovn_prep_delay | default(omit) }}" until: "{{ make_ovn_prep_until | default(true) }}" register: "make_ovn_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_prep" dry_run: "{{ make_ovn_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_prep_env|default({})), **(make_ovn_prep_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn.yml0000644000175000017500000000142615133657453033210 0ustar zuulzuul--- - name: Debug make_ovn_env when: make_ovn_env is defined ansible.builtin.debug: var: make_ovn_env - name: Debug make_ovn_params when: make_ovn_params is defined ansible.builtin.debug: var: make_ovn_params - name: Run ovn retries: "{{ make_ovn_retries | default(omit) }}" delay: "{{ make_ovn_delay | default(omit) }}" until: "{{ make_ovn_until | default(true) }}" register: "make_ovn_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn" dry_run: "{{ make_ovn_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_env|default({})), **(make_ovn_params|default({}))) 
}}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_clea0000644000175000017500000000161615133657453033375 0ustar zuulzuul--- - name: Debug make_ovn_cleanup_env when: make_ovn_cleanup_env is defined ansible.builtin.debug: var: make_ovn_cleanup_env - name: Debug make_ovn_cleanup_params when: make_ovn_cleanup_params is defined ansible.builtin.debug: var: make_ovn_cleanup_params - name: Run ovn_cleanup retries: "{{ make_ovn_cleanup_retries | default(omit) }}" delay: "{{ make_ovn_cleanup_delay | default(omit) }}" until: "{{ make_ovn_cleanup_until | default(true) }}" register: "make_ovn_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_cleanup" dry_run: "{{ make_ovn_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_cleanup_env|default({})), **(make_ovn_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_depl0000644000175000017500000000171215133657453033412 0ustar zuulzuul--- - name: Debug make_ovn_deploy_prep_env when: make_ovn_deploy_prep_env is defined ansible.builtin.debug: var: make_ovn_deploy_prep_env - name: Debug make_ovn_deploy_prep_params when: make_ovn_deploy_prep_params is defined ansible.builtin.debug: var: make_ovn_deploy_prep_params - name: Run ovn_deploy_prep retries: "{{ make_ovn_deploy_prep_retries | default(omit) }}" delay: "{{ make_ovn_deploy_prep_delay | default(omit) 
}}" until: "{{ make_ovn_deploy_prep_until | default(true) }}" register: "make_ovn_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_deploy_prep" dry_run: "{{ make_ovn_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_deploy_prep_env|default({})), **(make_ovn_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_depl0000644000175000017500000000157715133657453033423 0ustar zuulzuul--- - name: Debug make_ovn_deploy_env when: make_ovn_deploy_env is defined ansible.builtin.debug: var: make_ovn_deploy_env - name: Debug make_ovn_deploy_params when: make_ovn_deploy_params is defined ansible.builtin.debug: var: make_ovn_deploy_params - name: Run ovn_deploy retries: "{{ make_ovn_deploy_retries | default(omit) }}" delay: "{{ make_ovn_deploy_delay | default(omit) }}" until: "{{ make_ovn_deploy_until | default(true) }}" register: "make_ovn_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_deploy" dry_run: "{{ make_ovn_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_deploy_env|default({})), **(make_ovn_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_depl0000644000175000017500000000176715133657453033424 0ustar zuulzuul--- - name: Debug make_ovn_deploy_cleanup_env when: make_ovn_deploy_cleanup_env is defined ansible.builtin.debug: var: make_ovn_deploy_cleanup_env - name: Debug make_ovn_deploy_cleanup_params when: make_ovn_deploy_cleanup_params is defined ansible.builtin.debug: var: make_ovn_deploy_cleanup_params - name: Run ovn_deploy_cleanup retries: "{{ make_ovn_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_ovn_deploy_cleanup_delay | default(omit) }}" until: "{{ make_ovn_deploy_cleanup_until | default(true) }}" register: "make_ovn_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_deploy_cleanup" dry_run: "{{ make_ovn_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_deploy_cleanup_env|default({})), **(make_ovn_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_0000644000175000017500000000163515133657453033441 0ustar zuulzuul--- - name: Debug make_neutron_prep_env when: make_neutron_prep_env is defined ansible.builtin.debug: var: make_neutron_prep_env - name: Debug make_neutron_prep_params when: make_neutron_prep_params is defined ansible.builtin.debug: var: make_neutron_prep_params - name: Run neutron_prep retries: "{{ make_neutron_prep_retries | default(omit) }}" delay: "{{ make_neutron_prep_delay | 
default(omit) }}" until: "{{ make_neutron_prep_until | default(true) }}" register: "make_neutron_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make neutron_prep" dry_run: "{{ make_neutron_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_neutron_prep_env|default({})), **(make_neutron_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron.0000644000175000017500000000152215133657453033353 0ustar zuulzuul--- - name: Debug make_neutron_env when: make_neutron_env is defined ansible.builtin.debug: var: make_neutron_env - name: Debug make_neutron_params when: make_neutron_params is defined ansible.builtin.debug: var: make_neutron_params - name: Run neutron retries: "{{ make_neutron_retries | default(omit) }}" delay: "{{ make_neutron_delay | default(omit) }}" until: "{{ make_neutron_until | default(true) }}" register: "make_neutron_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make neutron" dry_run: "{{ make_neutron_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_neutron_env|default({})), **(make_neutron_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_0000644000175000017500000000171215133657453033435 0ustar 
zuulzuul--- - name: Debug make_neutron_cleanup_env when: make_neutron_cleanup_env is defined ansible.builtin.debug: var: make_neutron_cleanup_env - name: Debug make_neutron_cleanup_params when: make_neutron_cleanup_params is defined ansible.builtin.debug: var: make_neutron_cleanup_params - name: Run neutron_cleanup retries: "{{ make_neutron_cleanup_retries | default(omit) }}" delay: "{{ make_neutron_cleanup_delay | default(omit) }}" until: "{{ make_neutron_cleanup_until | default(true) }}" register: "make_neutron_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make neutron_cleanup" dry_run: "{{ make_neutron_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_neutron_cleanup_env|default({})), **(make_neutron_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_0000644000175000017500000000200615133657453033432 0ustar zuulzuul--- - name: Debug make_neutron_deploy_prep_env when: make_neutron_deploy_prep_env is defined ansible.builtin.debug: var: make_neutron_deploy_prep_env - name: Debug make_neutron_deploy_prep_params when: make_neutron_deploy_prep_params is defined ansible.builtin.debug: var: make_neutron_deploy_prep_params - name: Run neutron_deploy_prep retries: "{{ make_neutron_deploy_prep_retries | default(omit) }}" delay: "{{ make_neutron_deploy_prep_delay | default(omit) }}" until: "{{ make_neutron_deploy_prep_until | default(true) }}" register: "make_neutron_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make neutron_deploy_prep" dry_run: "{{ make_neutron_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_neutron_deploy_prep_env|default({})), **(make_neutron_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_0000644000175000017500000000167315133657453033443 0ustar zuulzuul--- - name: Debug make_neutron_deploy_env when: make_neutron_deploy_env is defined ansible.builtin.debug: var: make_neutron_deploy_env - name: Debug make_neutron_deploy_params when: make_neutron_deploy_params is defined ansible.builtin.debug: var: make_neutron_deploy_params - name: Run neutron_deploy retries: "{{ make_neutron_deploy_retries | default(omit) }}" delay: "{{ make_neutron_deploy_delay | default(omit) }}" until: "{{ make_neutron_deploy_until | default(true) }}" register: "make_neutron_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make neutron_deploy" dry_run: "{{ make_neutron_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_neutron_deploy_env|default({})), **(make_neutron_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_0000644000175000017500000000206315133657453033435 0ustar zuulzuul--- - name: Debug make_neutron_deploy_cleanup_env when: make_neutron_deploy_cleanup_env 
is defined ansible.builtin.debug: var: make_neutron_deploy_cleanup_env - name: Debug make_neutron_deploy_cleanup_params when: make_neutron_deploy_cleanup_params is defined ansible.builtin.debug: var: make_neutron_deploy_cleanup_params - name: Run neutron_deploy_cleanup retries: "{{ make_neutron_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_neutron_deploy_cleanup_delay | default(omit) }}" until: "{{ make_neutron_deploy_cleanup_until | default(true) }}" register: "make_neutron_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make neutron_deploy_cleanup" dry_run: "{{ make_neutron_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_neutron_deploy_cleanup_env|default({})), **(make_neutron_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_p0000644000175000017500000000161615133657453033372 0ustar zuulzuul--- - name: Debug make_cinder_prep_env when: make_cinder_prep_env is defined ansible.builtin.debug: var: make_cinder_prep_env - name: Debug make_cinder_prep_params when: make_cinder_prep_params is defined ansible.builtin.debug: var: make_cinder_prep_params - name: Run cinder_prep retries: "{{ make_cinder_prep_retries | default(omit) }}" delay: "{{ make_cinder_prep_delay | default(omit) }}" until: "{{ make_cinder_prep_until | default(true) }}" register: "make_cinder_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cinder_prep" 
dry_run: "{{ make_cinder_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cinder_prep_env|default({})), **(make_cinder_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000014700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder.y0000644000175000017500000000150315133657453033315 0ustar zuulzuul--- - name: Debug make_cinder_env when: make_cinder_env is defined ansible.builtin.debug: var: make_cinder_env - name: Debug make_cinder_params when: make_cinder_params is defined ansible.builtin.debug: var: make_cinder_params - name: Run cinder retries: "{{ make_cinder_retries | default(omit) }}" delay: "{{ make_cinder_delay | default(omit) }}" until: "{{ make_cinder_until | default(true) }}" register: "make_cinder_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cinder" dry_run: "{{ make_cinder_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cinder_env|default({})), **(make_cinder_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_c0000644000175000017500000000167315133657453033360 0ustar zuulzuul--- - name: Debug make_cinder_cleanup_env when: make_cinder_cleanup_env is defined ansible.builtin.debug: var: make_cinder_cleanup_env - name: Debug make_cinder_cleanup_params when: make_cinder_cleanup_params is defined ansible.builtin.debug: var: make_cinder_cleanup_params - name: Run cinder_cleanup retries: "{{ make_cinder_cleanup_retries | 
default(omit) }}" delay: "{{ make_cinder_cleanup_delay | default(omit) }}" until: "{{ make_cinder_cleanup_until | default(true) }}" register: "make_cinder_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cinder_cleanup" dry_run: "{{ make_cinder_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cinder_cleanup_env|default({})), **(make_cinder_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_d0000644000175000017500000000176715133657453033365 0ustar zuulzuul--- - name: Debug make_cinder_deploy_prep_env when: make_cinder_deploy_prep_env is defined ansible.builtin.debug: var: make_cinder_deploy_prep_env - name: Debug make_cinder_deploy_prep_params when: make_cinder_deploy_prep_params is defined ansible.builtin.debug: var: make_cinder_deploy_prep_params - name: Run cinder_deploy_prep retries: "{{ make_cinder_deploy_prep_retries | default(omit) }}" delay: "{{ make_cinder_deploy_prep_delay | default(omit) }}" until: "{{ make_cinder_deploy_prep_until | default(true) }}" register: "make_cinder_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cinder_deploy_prep" dry_run: "{{ make_cinder_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cinder_deploy_prep_env|default({})), **(make_cinder_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_d0000644000175000017500000000165415133657453033360 0ustar zuulzuul--- - name: Debug make_cinder_deploy_env when: make_cinder_deploy_env is defined ansible.builtin.debug: var: make_cinder_deploy_env - name: Debug make_cinder_deploy_params when: make_cinder_deploy_params is defined ansible.builtin.debug: var: make_cinder_deploy_params - name: Run cinder_deploy retries: "{{ make_cinder_deploy_retries | default(omit) }}" delay: "{{ make_cinder_deploy_delay | default(omit) }}" until: "{{ make_cinder_deploy_until | default(true) }}" register: "make_cinder_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cinder_deploy" dry_run: "{{ make_cinder_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cinder_deploy_env|default({})), **(make_cinder_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_d0000644000175000017500000000204415133657453033352 0ustar zuulzuul--- - name: Debug make_cinder_deploy_cleanup_env when: make_cinder_deploy_cleanup_env is defined ansible.builtin.debug: var: make_cinder_deploy_cleanup_env - name: Debug make_cinder_deploy_cleanup_params when: make_cinder_deploy_cleanup_params is defined ansible.builtin.debug: var: make_cinder_deploy_cleanup_params - name: Run cinder_deploy_cleanup retries: "{{ make_cinder_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_cinder_deploy_cleanup_delay 
| default(omit) }}" until: "{{ make_cinder_deploy_cleanup_until | default(true) }}" register: "make_cinder_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cinder_deploy_cleanup" dry_run: "{{ make_cinder_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cinder_deploy_cleanup_env|default({})), **(make_cinder_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq0000644000175000017500000000165415133657453033412 0ustar zuulzuul--- - name: Debug make_rabbitmq_prep_env when: make_rabbitmq_prep_env is defined ansible.builtin.debug: var: make_rabbitmq_prep_env - name: Debug make_rabbitmq_prep_params when: make_rabbitmq_prep_params is defined ansible.builtin.debug: var: make_rabbitmq_prep_params - name: Run rabbitmq_prep retries: "{{ make_rabbitmq_prep_retries | default(omit) }}" delay: "{{ make_rabbitmq_prep_delay | default(omit) }}" until: "{{ make_rabbitmq_prep_until | default(true) }}" register: "make_rabbitmq_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rabbitmq_prep" dry_run: "{{ make_rabbitmq_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rabbitmq_prep_env|default({})), **(make_rabbitmq_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq0000644000175000017500000000154115133657453033405 0ustar zuulzuul--- - name: Debug make_rabbitmq_env when: make_rabbitmq_env is defined ansible.builtin.debug: var: make_rabbitmq_env - name: Debug make_rabbitmq_params when: make_rabbitmq_params is defined ansible.builtin.debug: var: make_rabbitmq_params - name: Run rabbitmq retries: "{{ make_rabbitmq_retries | default(omit) }}" delay: "{{ make_rabbitmq_delay | default(omit) }}" until: "{{ make_rabbitmq_until | default(true) }}" register: "make_rabbitmq_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rabbitmq" dry_run: "{{ make_rabbitmq_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rabbitmq_env|default({})), **(make_rabbitmq_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq0000644000175000017500000000173115133657453033406 0ustar zuulzuul--- - name: Debug make_rabbitmq_cleanup_env when: make_rabbitmq_cleanup_env is defined ansible.builtin.debug: var: make_rabbitmq_cleanup_env - name: Debug make_rabbitmq_cleanup_params when: make_rabbitmq_cleanup_params is defined ansible.builtin.debug: var: make_rabbitmq_cleanup_params - name: Run rabbitmq_cleanup retries: "{{ make_rabbitmq_cleanup_retries | default(omit) }}" delay: "{{ make_rabbitmq_cleanup_delay | default(omit) }}" until: "{{ make_rabbitmq_cleanup_until | default(true) }}" register: "make_rabbitmq_cleanup_status" 
cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rabbitmq_cleanup" dry_run: "{{ make_rabbitmq_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rabbitmq_cleanup_env|default({})), **(make_rabbitmq_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq0000644000175000017500000000202515133657453033403 0ustar zuulzuul--- - name: Debug make_rabbitmq_deploy_prep_env when: make_rabbitmq_deploy_prep_env is defined ansible.builtin.debug: var: make_rabbitmq_deploy_prep_env - name: Debug make_rabbitmq_deploy_prep_params when: make_rabbitmq_deploy_prep_params is defined ansible.builtin.debug: var: make_rabbitmq_deploy_prep_params - name: Run rabbitmq_deploy_prep retries: "{{ make_rabbitmq_deploy_prep_retries | default(omit) }}" delay: "{{ make_rabbitmq_deploy_prep_delay | default(omit) }}" until: "{{ make_rabbitmq_deploy_prep_until | default(true) }}" register: "make_rabbitmq_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rabbitmq_deploy_prep" dry_run: "{{ make_rabbitmq_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rabbitmq_deploy_prep_env|default({})), **(make_rabbitmq_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq0000644000175000017500000000171215133657453033405 0ustar zuulzuul--- - name: Debug make_rabbitmq_deploy_env when: make_rabbitmq_deploy_env is defined ansible.builtin.debug: var: make_rabbitmq_deploy_env - name: Debug make_rabbitmq_deploy_params when: make_rabbitmq_deploy_params is defined ansible.builtin.debug: var: make_rabbitmq_deploy_params - name: Run rabbitmq_deploy retries: "{{ make_rabbitmq_deploy_retries | default(omit) }}" delay: "{{ make_rabbitmq_deploy_delay | default(omit) }}" until: "{{ make_rabbitmq_deploy_until | default(true) }}" register: "make_rabbitmq_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rabbitmq_deploy" dry_run: "{{ make_rabbitmq_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rabbitmq_deploy_env|default({})), **(make_rabbitmq_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq0000644000175000017500000000210215133657453033377 0ustar zuulzuul--- - name: Debug make_rabbitmq_deploy_cleanup_env when: make_rabbitmq_deploy_cleanup_env is defined ansible.builtin.debug: var: make_rabbitmq_deploy_cleanup_env - name: Debug make_rabbitmq_deploy_cleanup_params when: make_rabbitmq_deploy_cleanup_params is defined ansible.builtin.debug: var: make_rabbitmq_deploy_cleanup_params - name: Run rabbitmq_deploy_cleanup retries: "{{ make_rabbitmq_deploy_cleanup_retries | 
default(omit) }}" delay: "{{ make_rabbitmq_deploy_cleanup_delay | default(omit) }}" until: "{{ make_rabbitmq_deploy_cleanup_until | default(true) }}" register: "make_rabbitmq_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rabbitmq_deploy_cleanup" dry_run: "{{ make_rabbitmq_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rabbitmq_deploy_cleanup_env|default({})), **(make_rabbitmq_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_p0000644000175000017500000000161615133657453033411 0ustar zuulzuul--- - name: Debug make_ironic_prep_env when: make_ironic_prep_env is defined ansible.builtin.debug: var: make_ironic_prep_env - name: Debug make_ironic_prep_params when: make_ironic_prep_params is defined ansible.builtin.debug: var: make_ironic_prep_params - name: Run ironic_prep retries: "{{ make_ironic_prep_retries | default(omit) }}" delay: "{{ make_ironic_prep_delay | default(omit) }}" until: "{{ make_ironic_prep_until | default(true) }}" register: "make_ironic_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic_prep" dry_run: "{{ make_ironic_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_prep_env|default({})), **(make_ironic_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000014700000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic.y0000644000175000017500000000150315133657453033334 0ustar zuulzuul--- - name: Debug make_ironic_env when: make_ironic_env is defined ansible.builtin.debug: var: make_ironic_env - name: Debug make_ironic_params when: make_ironic_params is defined ansible.builtin.debug: var: make_ironic_params - name: Run ironic retries: "{{ make_ironic_retries | default(omit) }}" delay: "{{ make_ironic_delay | default(omit) }}" until: "{{ make_ironic_until | default(true) }}" register: "make_ironic_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic" dry_run: "{{ make_ironic_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_env|default({})), **(make_ironic_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_c0000644000175000017500000000167315133657453033377 0ustar zuulzuul--- - name: Debug make_ironic_cleanup_env when: make_ironic_cleanup_env is defined ansible.builtin.debug: var: make_ironic_cleanup_env - name: Debug make_ironic_cleanup_params when: make_ironic_cleanup_params is defined ansible.builtin.debug: var: make_ironic_cleanup_params - name: Run ironic_cleanup retries: "{{ make_ironic_cleanup_retries | default(omit) }}" delay: "{{ make_ironic_cleanup_delay | default(omit) }}" until: "{{ make_ironic_cleanup_until | default(true) }}" register: "make_ironic_cleanup_status" cifmw.general.ci_script: output_dir: "{{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic_cleanup" dry_run: "{{ make_ironic_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_cleanup_env|default({})), **(make_ironic_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_d0000644000175000017500000000176715133657453033404 0ustar zuulzuul--- - name: Debug make_ironic_deploy_prep_env when: make_ironic_deploy_prep_env is defined ansible.builtin.debug: var: make_ironic_deploy_prep_env - name: Debug make_ironic_deploy_prep_params when: make_ironic_deploy_prep_params is defined ansible.builtin.debug: var: make_ironic_deploy_prep_params - name: Run ironic_deploy_prep retries: "{{ make_ironic_deploy_prep_retries | default(omit) }}" delay: "{{ make_ironic_deploy_prep_delay | default(omit) }}" until: "{{ make_ironic_deploy_prep_until | default(true) }}" register: "make_ironic_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic_deploy_prep" dry_run: "{{ make_ironic_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_deploy_prep_env|default({})), **(make_ironic_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_d0000644000175000017500000000165415133657453033377 0ustar zuulzuul--- - name: Debug make_ironic_deploy_env when: make_ironic_deploy_env is defined ansible.builtin.debug: var: make_ironic_deploy_env - name: Debug make_ironic_deploy_params when: make_ironic_deploy_params is defined ansible.builtin.debug: var: make_ironic_deploy_params - name: Run ironic_deploy retries: "{{ make_ironic_deploy_retries | default(omit) }}" delay: "{{ make_ironic_deploy_delay | default(omit) }}" until: "{{ make_ironic_deploy_until | default(true) }}" register: "make_ironic_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic_deploy" dry_run: "{{ make_ironic_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_deploy_env|default({})), **(make_ironic_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_d0000644000175000017500000000204415133657453033371 0ustar zuulzuul--- - name: Debug make_ironic_deploy_cleanup_env when: make_ironic_deploy_cleanup_env is defined ansible.builtin.debug: var: make_ironic_deploy_cleanup_env - name: Debug make_ironic_deploy_cleanup_params when: make_ironic_deploy_cleanup_params is defined ansible.builtin.debug: var: make_ironic_deploy_cleanup_params - name: Run ironic_deploy_cleanup retries: "{{ make_ironic_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_ironic_deploy_cleanup_delay 
| default(omit) }}" until: "{{ make_ironic_deploy_cleanup_until | default(true) }}" register: "make_ironic_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic_deploy_cleanup" dry_run: "{{ make_ironic_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_deploy_cleanup_env|default({})), **(make_ironic_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_0000644000175000017500000000163515133657453033375 0ustar zuulzuul--- - name: Debug make_octavia_prep_env when: make_octavia_prep_env is defined ansible.builtin.debug: var: make_octavia_prep_env - name: Debug make_octavia_prep_params when: make_octavia_prep_params is defined ansible.builtin.debug: var: make_octavia_prep_params - name: Run octavia_prep retries: "{{ make_octavia_prep_retries | default(omit) }}" delay: "{{ make_octavia_prep_delay | default(omit) }}" until: "{{ make_octavia_prep_until | default(true) }}" register: "make_octavia_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia_prep" dry_run: "{{ make_octavia_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_prep_env|default({})), **(make_octavia_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia.0000644000175000017500000000152215133657453033307 0ustar zuulzuul--- - name: Debug make_octavia_env when: make_octavia_env is defined ansible.builtin.debug: var: make_octavia_env - name: Debug make_octavia_params when: make_octavia_params is defined ansible.builtin.debug: var: make_octavia_params - name: Run octavia retries: "{{ make_octavia_retries | default(omit) }}" delay: "{{ make_octavia_delay | default(omit) }}" until: "{{ make_octavia_until | default(true) }}" register: "make_octavia_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia" dry_run: "{{ make_octavia_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_env|default({})), **(make_octavia_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_0000644000175000017500000000171215133657453033371 0ustar zuulzuul--- - name: Debug make_octavia_cleanup_env when: make_octavia_cleanup_env is defined ansible.builtin.debug: var: make_octavia_cleanup_env - name: Debug make_octavia_cleanup_params when: make_octavia_cleanup_params is defined ansible.builtin.debug: var: make_octavia_cleanup_params - name: Run octavia_cleanup retries: "{{ make_octavia_cleanup_retries | default(omit) }}" delay: "{{ make_octavia_cleanup_delay | default(omit) }}" until: "{{ make_octavia_cleanup_until | default(true) }}" register: "make_octavia_cleanup_status" cifmw.general.ci_script: output_dir: 
"{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia_cleanup" dry_run: "{{ make_octavia_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_cleanup_env|default({})), **(make_octavia_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_0000644000175000017500000000200615133657453033366 0ustar zuulzuul--- - name: Debug make_octavia_deploy_prep_env when: make_octavia_deploy_prep_env is defined ansible.builtin.debug: var: make_octavia_deploy_prep_env - name: Debug make_octavia_deploy_prep_params when: make_octavia_deploy_prep_params is defined ansible.builtin.debug: var: make_octavia_deploy_prep_params - name: Run octavia_deploy_prep retries: "{{ make_octavia_deploy_prep_retries | default(omit) }}" delay: "{{ make_octavia_deploy_prep_delay | default(omit) }}" until: "{{ make_octavia_deploy_prep_until | default(true) }}" register: "make_octavia_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia_deploy_prep" dry_run: "{{ make_octavia_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_deploy_prep_env|default({})), **(make_octavia_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_0000644000175000017500000000167315133657453033377 0ustar zuulzuul--- - name: Debug make_octavia_deploy_env when: make_octavia_deploy_env is defined ansible.builtin.debug: var: make_octavia_deploy_env - name: Debug make_octavia_deploy_params when: make_octavia_deploy_params is defined ansible.builtin.debug: var: make_octavia_deploy_params - name: Run octavia_deploy retries: "{{ make_octavia_deploy_retries | default(omit) }}" delay: "{{ make_octavia_deploy_delay | default(omit) }}" until: "{{ make_octavia_deploy_until | default(true) }}" register: "make_octavia_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia_deploy" dry_run: "{{ make_octavia_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_deploy_env|default({})), **(make_octavia_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_0000644000175000017500000000206315133657453033371 0ustar zuulzuul--- - name: Debug make_octavia_deploy_cleanup_env when: make_octavia_deploy_cleanup_env is defined ansible.builtin.debug: var: make_octavia_deploy_cleanup_env - name: Debug make_octavia_deploy_cleanup_params when: make_octavia_deploy_cleanup_params is defined ansible.builtin.debug: var: make_octavia_deploy_cleanup_params - name: Run octavia_deploy_cleanup retries: "{{ make_octavia_deploy_cleanup_retries | default(omit) }}" delay: "{{ 
make_octavia_deploy_cleanup_delay | default(omit) }}" until: "{{ make_octavia_deploy_cleanup_until | default(true) }}" register: "make_octavia_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia_deploy_cleanup" dry_run: "{{ make_octavia_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_deploy_cleanup_env|default({})), **(make_octavia_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000167315133657453033410 0ustar zuulzuul--- - name: Debug make_designate_prep_env when: make_designate_prep_env is defined ansible.builtin.debug: var: make_designate_prep_env - name: Debug make_designate_prep_params when: make_designate_prep_params is defined ansible.builtin.debug: var: make_designate_prep_params - name: Run designate_prep retries: "{{ make_designate_prep_retries | default(omit) }}" delay: "{{ make_designate_prep_delay | default(omit) }}" until: "{{ make_designate_prep_until | default(true) }}" register: "make_designate_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_prep" dry_run: "{{ make_designate_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_prep_env|default({})), **(make_designate_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000156015133657453033403 0ustar zuulzuul--- - name: Debug make_designate_env when: make_designate_env is defined ansible.builtin.debug: var: make_designate_env - name: Debug make_designate_params when: make_designate_params is defined ansible.builtin.debug: var: make_designate_params - name: Run designate retries: "{{ make_designate_retries | default(omit) }}" delay: "{{ make_designate_delay | default(omit) }}" until: "{{ make_designate_until | default(true) }}" register: "make_designate_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate" dry_run: "{{ make_designate_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_env|default({})), **(make_designate_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000175015133657453033404 0ustar zuulzuul--- - name: Debug make_designate_cleanup_env when: make_designate_cleanup_env is defined ansible.builtin.debug: var: make_designate_cleanup_env - name: Debug make_designate_cleanup_params when: make_designate_cleanup_params is defined ansible.builtin.debug: var: make_designate_cleanup_params - name: Run designate_cleanup retries: "{{ make_designate_cleanup_retries | default(omit) }}" delay: "{{ make_designate_cleanup_delay | default(omit) }}" until: "{{ make_designate_cleanup_until | default(true) }}" register: 
"make_designate_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_cleanup" dry_run: "{{ make_designate_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_cleanup_env|default({})), **(make_designate_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000204415133657453033401 0ustar zuulzuul--- - name: Debug make_designate_deploy_prep_env when: make_designate_deploy_prep_env is defined ansible.builtin.debug: var: make_designate_deploy_prep_env - name: Debug make_designate_deploy_prep_params when: make_designate_deploy_prep_params is defined ansible.builtin.debug: var: make_designate_deploy_prep_params - name: Run designate_deploy_prep retries: "{{ make_designate_deploy_prep_retries | default(omit) }}" delay: "{{ make_designate_deploy_prep_delay | default(omit) }}" until: "{{ make_designate_deploy_prep_until | default(true) }}" register: "make_designate_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_deploy_prep" dry_run: "{{ make_designate_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_deploy_prep_env|default({})), **(make_designate_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000173115133657453033403 0ustar zuulzuul--- - name: Debug make_designate_deploy_env when: make_designate_deploy_env is defined ansible.builtin.debug: var: make_designate_deploy_env - name: Debug make_designate_deploy_params when: make_designate_deploy_params is defined ansible.builtin.debug: var: make_designate_deploy_params - name: Run designate_deploy retries: "{{ make_designate_deploy_retries | default(omit) }}" delay: "{{ make_designate_deploy_delay | default(omit) }}" until: "{{ make_designate_deploy_until | default(true) }}" register: "make_designate_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_deploy" dry_run: "{{ make_designate_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_deploy_env|default({})), **(make_designate_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000212115133657453033375 0ustar zuulzuul--- - name: Debug make_designate_deploy_cleanup_env when: make_designate_deploy_cleanup_env is defined ansible.builtin.debug: var: make_designate_deploy_cleanup_env - name: Debug make_designate_deploy_cleanup_params when: make_designate_deploy_cleanup_params is defined ansible.builtin.debug: var: make_designate_deploy_cleanup_params - name: Run designate_deploy_cleanup retries: "{{ 
make_designate_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_designate_deploy_cleanup_delay | default(omit) }}" until: "{{ make_designate_deploy_cleanup_until | default(true) }}" register: "make_designate_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_deploy_cleanup" dry_run: "{{ make_designate_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_deploy_cleanup_env|default({})), **(make_designate_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_pre0000644000175000017500000000156015133657453033416 0ustar zuulzuul--- - name: Debug make_nova_prep_env when: make_nova_prep_env is defined ansible.builtin.debug: var: make_nova_prep_env - name: Debug make_nova_prep_params when: make_nova_prep_params is defined ansible.builtin.debug: var: make_nova_prep_params - name: Run nova_prep retries: "{{ make_nova_prep_retries | default(omit) }}" delay: "{{ make_nova_prep_delay | default(omit) }}" until: "{{ make_nova_prep_until | default(true) }}" register: "make_nova_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_prep" dry_run: "{{ make_nova_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_prep_env|default({})), **(make_nova_prep_params|default({}))) }}" 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova.yml0000644000175000017500000000144515133657453033352 0ustar zuulzuul--- - name: Debug make_nova_env when: make_nova_env is defined ansible.builtin.debug: var: make_nova_env - name: Debug make_nova_params when: make_nova_params is defined ansible.builtin.debug: var: make_nova_params - name: Run nova retries: "{{ make_nova_retries | default(omit) }}" delay: "{{ make_nova_delay | default(omit) }}" until: "{{ make_nova_until | default(true) }}" register: "make_nova_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova" dry_run: "{{ make_nova_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_env|default({})), **(make_nova_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_cle0000644000175000017500000000163515133657453033376 0ustar zuulzuul--- - name: Debug make_nova_cleanup_env when: make_nova_cleanup_env is defined ansible.builtin.debug: var: make_nova_cleanup_env - name: Debug make_nova_cleanup_params when: make_nova_cleanup_params is defined ansible.builtin.debug: var: make_nova_cleanup_params - name: Run nova_cleanup retries: "{{ make_nova_cleanup_retries | default(omit) }}" delay: "{{ make_nova_cleanup_delay | default(omit) }}" until: "{{ make_nova_cleanup_until | default(true) }}" register: "make_nova_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_cleanup" dry_run: 
"{{ make_nova_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_cleanup_env|default({})), **(make_nova_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_dep0000644000175000017500000000173115133657453033400 0ustar zuulzuul--- - name: Debug make_nova_deploy_prep_env when: make_nova_deploy_prep_env is defined ansible.builtin.debug: var: make_nova_deploy_prep_env - name: Debug make_nova_deploy_prep_params when: make_nova_deploy_prep_params is defined ansible.builtin.debug: var: make_nova_deploy_prep_params - name: Run nova_deploy_prep retries: "{{ make_nova_deploy_prep_retries | default(omit) }}" delay: "{{ make_nova_deploy_prep_delay | default(omit) }}" until: "{{ make_nova_deploy_prep_until | default(true) }}" register: "make_nova_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_deploy_prep" dry_run: "{{ make_nova_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_prep_env|default({})), **(make_nova_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_dep0000644000175000017500000000161615133657453033402 0ustar zuulzuul--- - name: Debug make_nova_deploy_env when: make_nova_deploy_env is defined ansible.builtin.debug: var: make_nova_deploy_env - name: Debug make_nova_deploy_params when: make_nova_deploy_params is 
defined ansible.builtin.debug: var: make_nova_deploy_params - name: Run nova_deploy retries: "{{ make_nova_deploy_retries | default(omit) }}" delay: "{{ make_nova_deploy_delay | default(omit) }}" until: "{{ make_nova_deploy_until | default(true) }}" register: "make_nova_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_deploy" dry_run: "{{ make_nova_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_env|default({})), **(make_nova_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_dep0000644000175000017500000000200615133657453033374 0ustar zuulzuul--- - name: Debug make_nova_deploy_cleanup_env when: make_nova_deploy_cleanup_env is defined ansible.builtin.debug: var: make_nova_deploy_cleanup_env - name: Debug make_nova_deploy_cleanup_params when: make_nova_deploy_cleanup_params is defined ansible.builtin.debug: var: make_nova_deploy_cleanup_params - name: Run nova_deploy_cleanup retries: "{{ make_nova_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_nova_deploy_cleanup_delay | default(omit) }}" until: "{{ make_nova_deploy_cleanup_until | default(true) }}" register: "make_nova_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_deploy_cleanup" dry_run: "{{ make_nova_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_cleanup_env|default({})), 
**(make_nova_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000175015133657453033344 0ustar zuulzuul--- - name: Debug make_mariadb_kuttl_run_env when: make_mariadb_kuttl_run_env is defined ansible.builtin.debug: var: make_mariadb_kuttl_run_env - name: Debug make_mariadb_kuttl_run_params when: make_mariadb_kuttl_run_params is defined ansible.builtin.debug: var: make_mariadb_kuttl_run_params - name: Run mariadb_kuttl_run retries: "{{ make_mariadb_kuttl_run_retries | default(omit) }}" delay: "{{ make_mariadb_kuttl_run_delay | default(omit) }}" until: "{{ make_mariadb_kuttl_run_until | default(true) }}" register: "make_mariadb_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_kuttl_run" dry_run: "{{ make_mariadb_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_kuttl_run_env|default({})), **(make_mariadb_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000165415133657453033347 0ustar zuulzuul--- - name: Debug make_mariadb_kuttl_env when: make_mariadb_kuttl_env is defined ansible.builtin.debug: var: make_mariadb_kuttl_env - name: Debug make_mariadb_kuttl_params when: make_mariadb_kuttl_params is defined ansible.builtin.debug: var: make_mariadb_kuttl_params - name: Run 
mariadb_kuttl retries: "{{ make_mariadb_kuttl_retries | default(omit) }}" delay: "{{ make_mariadb_kuttl_delay | default(omit) }}" until: "{{ make_mariadb_kuttl_until | default(true) }}" register: "make_mariadb_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_kuttl" dry_run: "{{ make_mariadb_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_kuttl_env|default({})), **(make_mariadb_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db0000644000175000017500000000165415133657453033421 0ustar zuulzuul--- - name: Debug make_kuttl_db_prep_env when: make_kuttl_db_prep_env is defined ansible.builtin.debug: var: make_kuttl_db_prep_env - name: Debug make_kuttl_db_prep_params when: make_kuttl_db_prep_params is defined ansible.builtin.debug: var: make_kuttl_db_prep_params - name: Run kuttl_db_prep retries: "{{ make_kuttl_db_prep_retries | default(omit) }}" delay: "{{ make_kuttl_db_prep_delay | default(omit) }}" until: "{{ make_kuttl_db_prep_until | default(true) }}" register: "make_kuttl_db_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_db_prep" dry_run: "{{ make_kuttl_db_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_db_prep_env|default({})), **(make_kuttl_db_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db0000644000175000017500000000173115133657453033415 0ustar zuulzuul--- - name: Debug make_kuttl_db_cleanup_env when: make_kuttl_db_cleanup_env is defined ansible.builtin.debug: var: make_kuttl_db_cleanup_env - name: Debug make_kuttl_db_cleanup_params when: make_kuttl_db_cleanup_params is defined ansible.builtin.debug: var: make_kuttl_db_cleanup_params - name: Run kuttl_db_cleanup retries: "{{ make_kuttl_db_cleanup_retries | default(omit) }}" delay: "{{ make_kuttl_db_cleanup_delay | default(omit) }}" until: "{{ make_kuttl_db_cleanup_until | default(true) }}" register: "make_kuttl_db_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_db_cleanup" dry_run: "{{ make_kuttl_db_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_db_cleanup_env|default({})), **(make_kuttl_db_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_common_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_co0000644000175000017500000000175015133657453033432 0ustar zuulzuul--- - name: Debug make_kuttl_common_prep_env when: make_kuttl_common_prep_env is defined ansible.builtin.debug: var: make_kuttl_common_prep_env - name: Debug make_kuttl_common_prep_params when: make_kuttl_common_prep_params is defined ansible.builtin.debug: var: make_kuttl_common_prep_params - name: Run kuttl_common_prep retries: "{{ make_kuttl_common_prep_retries | default(omit) }}" delay: "{{ 
make_kuttl_common_prep_delay | default(omit) }}" until: "{{ make_kuttl_common_prep_until | default(true) }}" register: "make_kuttl_common_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_common_prep" dry_run: "{{ make_kuttl_common_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_common_prep_env|default({})), **(make_kuttl_common_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_common_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_co0000644000175000017500000000202515133657453033426 0ustar zuulzuul--- - name: Debug make_kuttl_common_cleanup_env when: make_kuttl_common_cleanup_env is defined ansible.builtin.debug: var: make_kuttl_common_cleanup_env - name: Debug make_kuttl_common_cleanup_params when: make_kuttl_common_cleanup_params is defined ansible.builtin.debug: var: make_kuttl_common_cleanup_params - name: Run kuttl_common_cleanup retries: "{{ make_kuttl_common_cleanup_retries | default(omit) }}" delay: "{{ make_kuttl_common_cleanup_delay | default(omit) }}" until: "{{ make_kuttl_common_cleanup_until | default(true) }}" register: "make_kuttl_common_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_common_cleanup" dry_run: "{{ make_kuttl_common_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_common_cleanup_env|default({})), **(make_kuttl_common_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000176715133657453033457 0ustar zuulzuul--- - name: Debug make_keystone_kuttl_run_env when: make_keystone_kuttl_run_env is defined ansible.builtin.debug: var: make_keystone_kuttl_run_env - name: Debug make_keystone_kuttl_run_params when: make_keystone_kuttl_run_params is defined ansible.builtin.debug: var: make_keystone_kuttl_run_params - name: Run keystone_kuttl_run retries: "{{ make_keystone_kuttl_run_retries | default(omit) }}" delay: "{{ make_keystone_kuttl_run_delay | default(omit) }}" until: "{{ make_keystone_kuttl_run_until | default(true) }}" register: "make_keystone_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_kuttl_run" dry_run: "{{ make_keystone_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_kuttl_run_env|default({})), **(make_keystone_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000167315133657453033453 0ustar zuulzuul--- - name: Debug make_keystone_kuttl_env when: make_keystone_kuttl_env is defined ansible.builtin.debug: var: make_keystone_kuttl_env - name: Debug make_keystone_kuttl_params when: make_keystone_kuttl_params is defined ansible.builtin.debug: var: make_keystone_kuttl_params - name: Run keystone_kuttl retries: "{{ make_keystone_kuttl_retries | default(omit) }}" delay: "{{ 
make_keystone_kuttl_delay | default(omit) }}" until: "{{ make_keystone_kuttl_until | default(true) }}" register: "make_keystone_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_kuttl" dry_run: "{{ make_keystone_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_kuttl_env|default({})), **(make_keystone_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000176715133657453033357 0ustar zuulzuul--- - name: Debug make_barbican_kuttl_run_env when: make_barbican_kuttl_run_env is defined ansible.builtin.debug: var: make_barbican_kuttl_run_env - name: Debug make_barbican_kuttl_run_params when: make_barbican_kuttl_run_params is defined ansible.builtin.debug: var: make_barbican_kuttl_run_params - name: Run barbican_kuttl_run retries: "{{ make_barbican_kuttl_run_retries | default(omit) }}" delay: "{{ make_barbican_kuttl_run_delay | default(omit) }}" until: "{{ make_barbican_kuttl_run_until | default(true) }}" register: "make_barbican_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_kuttl_run" dry_run: "{{ make_barbican_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_kuttl_run_env|default({})), **(make_barbican_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar 
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_kuttl.yml
---
- name: Debug make_barbican_kuttl_env
  when: make_barbican_kuttl_env is defined
  ansible.builtin.debug:
    var: make_barbican_kuttl_env

- name: Debug make_barbican_kuttl_params
  when: make_barbican_kuttl_params is defined
  ansible.builtin.debug:
    var: make_barbican_kuttl_params

- name: Run barbican_kuttl
  retries: "{{ make_barbican_kuttl_retries | default(omit) }}"
  delay: "{{ make_barbican_kuttl_delay | default(omit) }}"
  until: "{{ make_barbican_kuttl_until | default(true) }}"
  register: "make_barbican_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_kuttl"
    dry_run: "{{ make_barbican_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_barbican_kuttl_env | default({})), **(make_barbican_kuttl_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_kuttl_run.yml
---
- name: Debug make_placement_kuttl_run_env
  when: make_placement_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_placement_kuttl_run_env

- name: Debug make_placement_kuttl_run_params
  when: make_placement_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_placement_kuttl_run_params

- name: Run placement_kuttl_run
  retries: "{{ make_placement_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_placement_kuttl_run_delay | default(omit) }}"
  until: "{{ make_placement_kuttl_run_until | default(true) }}"
  register: "make_placement_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make placement_kuttl_run"
    dry_run: "{{ make_placement_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_placement_kuttl_run_env | default({})), **(make_placement_kuttl_run_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_kuttl.yml
---
- name: Debug make_placement_kuttl_env
  when: make_placement_kuttl_env is defined
  ansible.builtin.debug:
    var: make_placement_kuttl_env

- name: Debug make_placement_kuttl_params
  when: make_placement_kuttl_params is defined
  ansible.builtin.debug:
    var: make_placement_kuttl_params

- name: Run placement_kuttl
  retries: "{{ make_placement_kuttl_retries | default(omit) }}"
  delay: "{{ make_placement_kuttl_delay | default(omit) }}"
  until: "{{ make_placement_kuttl_until | default(true) }}"
  register: "make_placement_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make placement_kuttl"
    dry_run: "{{ make_placement_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_placement_kuttl_env | default({})), **(make_placement_kuttl_params | default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_kuttl_run.yml
---
- name: Debug make_cinder_kuttl_run_env
  when: make_cinder_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_cinder_kuttl_run_env

- name: Debug make_cinder_kuttl_run_params
  when: make_cinder_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_cinder_kuttl_run_params

- name: Run cinder_kuttl_run
  retries: "{{ make_cinder_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_cinder_kuttl_run_delay | default(omit) }}"
  until: "{{ make_cinder_kuttl_run_until | default(true) }}"
  register: "make_cinder_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_kuttl_run"
    dry_run: "{{ make_cinder_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_cinder_kuttl_run_env | default({})), **(make_cinder_kuttl_run_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_kuttl.yml
---
- name: Debug make_cinder_kuttl_env
  when: make_cinder_kuttl_env is defined
  ansible.builtin.debug:
    var: make_cinder_kuttl_env

- name: Debug make_cinder_kuttl_params
  when: make_cinder_kuttl_params is defined
  ansible.builtin.debug:
    var: make_cinder_kuttl_params

- name: Run cinder_kuttl
  retries: "{{ make_cinder_kuttl_retries | default(omit) }}"
  delay: "{{ make_cinder_kuttl_delay | default(omit) }}"
  until: "{{ make_cinder_kuttl_until | default(true) }}"
  register: "make_cinder_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_kuttl"
    dry_run: "{{ make_cinder_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_cinder_kuttl_env | default({})), **(make_cinder_kuttl_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_kuttl_run.yml
---
- name: Debug make_neutron_kuttl_run_env
  when: make_neutron_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_neutron_kuttl_run_env

- name: Debug make_neutron_kuttl_run_params
  when: make_neutron_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_neutron_kuttl_run_params

- name: Run neutron_kuttl_run
  retries: "{{ make_neutron_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_neutron_kuttl_run_delay | default(omit) }}"
  until: "{{ make_neutron_kuttl_run_until | default(true) }}"
  register: "make_neutron_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_kuttl_run"
    dry_run: "{{ make_neutron_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_neutron_kuttl_run_env | default({})), **(make_neutron_kuttl_run_params | default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_kuttl.yml
---
- name: Debug make_neutron_kuttl_env
  when: make_neutron_kuttl_env is defined
  ansible.builtin.debug:
    var: make_neutron_kuttl_env

- name: Debug make_neutron_kuttl_params
  when: make_neutron_kuttl_params is defined
  ansible.builtin.debug:
    var: make_neutron_kuttl_params

- name: Run neutron_kuttl
  retries: "{{ make_neutron_kuttl_retries | default(omit) }}"
  delay: "{{ make_neutron_kuttl_delay | default(omit) }}"
  until: "{{ make_neutron_kuttl_until | default(true) }}"
  register: "make_neutron_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_kuttl"
    dry_run: "{{ make_neutron_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_neutron_kuttl_env | default({})), **(make_neutron_kuttl_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_kuttl_run.yml
---
- name: Debug make_octavia_kuttl_run_env
  when: make_octavia_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_octavia_kuttl_run_env

- name: Debug make_octavia_kuttl_run_params
  when: make_octavia_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_octavia_kuttl_run_params

- name: Run octavia_kuttl_run
  retries: "{{ make_octavia_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_octavia_kuttl_run_delay | default(omit) }}"
  until: "{{ make_octavia_kuttl_run_until | default(true) }}"
  register: "make_octavia_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia_kuttl_run"
    dry_run: "{{ make_octavia_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_octavia_kuttl_run_env | default({})), **(make_octavia_kuttl_run_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_kuttl.yml
---
- name: Debug make_octavia_kuttl_env
  when: make_octavia_kuttl_env is defined
  ansible.builtin.debug:
    var: make_octavia_kuttl_env

- name: Debug make_octavia_kuttl_params
  when: make_octavia_kuttl_params is defined
  ansible.builtin.debug:
    var: make_octavia_kuttl_params

- name: Run octavia_kuttl
  retries: "{{ make_octavia_kuttl_retries | default(omit) }}"
  delay: "{{ make_octavia_kuttl_delay | default(omit) }}"
  until: "{{ make_octavia_kuttl_until | default(true) }}"
  register: "make_octavia_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia_kuttl"
    dry_run: "{{ make_octavia_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_octavia_kuttl_env | default({})), **(make_octavia_kuttl_params | default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_kuttl.yml
---
- name: Debug make_designate_kuttl_env
  when: make_designate_kuttl_env is defined
  ansible.builtin.debug:
    var: make_designate_kuttl_env

- name: Debug make_designate_kuttl_params
  when: make_designate_kuttl_params is defined
  ansible.builtin.debug:
    var: make_designate_kuttl_params

- name: Run designate_kuttl
  retries: "{{ make_designate_kuttl_retries | default(omit) }}"
  delay: "{{ make_designate_kuttl_delay | default(omit) }}"
  until: "{{ make_designate_kuttl_until | default(true) }}"
  register: "make_designate_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make designate_kuttl"
    dry_run: "{{ make_designate_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_designate_kuttl_env | default({})), **(make_designate_kuttl_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_kuttl_run.yml
---
- name: Debug make_designate_kuttl_run_env
  when: make_designate_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_designate_kuttl_run_env

- name: Debug make_designate_kuttl_run_params
  when: make_designate_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_designate_kuttl_run_params

- name: Run designate_kuttl_run
  retries: "{{ make_designate_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_designate_kuttl_run_delay | default(omit) }}"
  until: "{{ make_designate_kuttl_run_until | default(true) }}"
  register: "make_designate_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make designate_kuttl_run"
    dry_run: "{{ make_designate_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_designate_kuttl_run_env | default({})), **(make_designate_kuttl_run_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_kuttl_run.yml
---
- name: Debug make_ovn_kuttl_run_env
  when: make_ovn_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_ovn_kuttl_run_env

- name: Debug make_ovn_kuttl_run_params
  when: make_ovn_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_ovn_kuttl_run_params

- name: Run ovn_kuttl_run
  retries: "{{ make_ovn_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_ovn_kuttl_run_delay | default(omit) }}"
  until: "{{ make_ovn_kuttl_run_until | default(true) }}"
  register: "make_ovn_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ovn_kuttl_run"
    dry_run: "{{ make_ovn_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ovn_kuttl_run_env | default({})), **(make_ovn_kuttl_run_params | default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_kuttl.yml
---
- name: Debug make_ovn_kuttl_env
  when: make_ovn_kuttl_env is defined
  ansible.builtin.debug:
    var: make_ovn_kuttl_env

- name: Debug make_ovn_kuttl_params
  when: make_ovn_kuttl_params is defined
  ansible.builtin.debug:
    var: make_ovn_kuttl_params

- name: Run ovn_kuttl
  retries: "{{ make_ovn_kuttl_retries | default(omit) }}"
  delay: "{{ make_ovn_kuttl_delay | default(omit) }}"
  until: "{{ make_ovn_kuttl_until | default(true) }}"
  register: "make_ovn_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ovn_kuttl"
    dry_run: "{{ make_ovn_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ovn_kuttl_env | default({})), **(make_ovn_kuttl_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_kuttl_run.yml
---
- name: Debug make_infra_kuttl_run_env
  when: make_infra_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_infra_kuttl_run_env

- name: Debug make_infra_kuttl_run_params
  when: make_infra_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_infra_kuttl_run_params

- name: Run infra_kuttl_run
  retries: "{{ make_infra_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_infra_kuttl_run_delay | default(omit) }}"
  until: "{{ make_infra_kuttl_run_until | default(true) }}"
  register: "make_infra_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra_kuttl_run"
    dry_run: "{{ make_infra_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_infra_kuttl_run_env | default({})), **(make_infra_kuttl_run_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_kuttl.yml
---
- name: Debug make_infra_kuttl_env
  when: make_infra_kuttl_env is defined
  ansible.builtin.debug:
    var: make_infra_kuttl_env

- name: Debug make_infra_kuttl_params
  when: make_infra_kuttl_params is defined
  ansible.builtin.debug:
    var: make_infra_kuttl_params

- name: Run infra_kuttl
  retries: "{{ make_infra_kuttl_retries | default(omit) }}"
  delay: "{{ make_infra_kuttl_delay | default(omit) }}"
  until: "{{ make_infra_kuttl_until | default(true) }}"
  register: "make_infra_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra_kuttl"
    dry_run: "{{ make_infra_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_infra_kuttl_env | default({})), **(make_infra_kuttl_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl_run.yml
---
- name: Debug make_ironic_kuttl_run_env
  when: make_ironic_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_run_env

- name: Debug make_ironic_kuttl_run_params
  when: make_ironic_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_run_params

- name: Run ironic_kuttl_run
  retries: "{{ make_ironic_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_ironic_kuttl_run_delay | default(omit) }}"
  until: "{{ make_ironic_kuttl_run_until | default(true) }}"
  register: "make_ironic_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_kuttl_run"
    dry_run: "{{ make_ironic_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ironic_kuttl_run_env | default({})), **(make_ironic_kuttl_run_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl.yml
---
- name: Debug make_ironic_kuttl_env
  when: make_ironic_kuttl_env is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_env

- name: Debug make_ironic_kuttl_params
  when: make_ironic_kuttl_params is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_params

- name: Run ironic_kuttl
  retries: "{{ make_ironic_kuttl_retries | default(omit) }}"
  delay: "{{ make_ironic_kuttl_delay | default(omit) }}"
  until: "{{ make_ironic_kuttl_until | default(true) }}"
  register: "make_ironic_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_kuttl"
    dry_run: "{{ make_ironic_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ironic_kuttl_env | default({})), **(make_ironic_kuttl_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl_crc.yml
---
- name: Debug make_ironic_kuttl_crc_env
  when: make_ironic_kuttl_crc_env is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_crc_env

- name: Debug make_ironic_kuttl_crc_params
  when: make_ironic_kuttl_crc_params is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_crc_params

- name: Run ironic_kuttl_crc
  retries: "{{ make_ironic_kuttl_crc_retries | default(omit) }}"
  delay: "{{ make_ironic_kuttl_crc_delay | default(omit) }}"
  until: "{{ make_ironic_kuttl_crc_until | default(true) }}"
  register: "make_ironic_kuttl_crc_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_kuttl_crc"
    dry_run: "{{ make_ironic_kuttl_crc_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ironic_kuttl_crc_env | default({})), **(make_ironic_kuttl_crc_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl_run.yml
---
- name: Debug make_heat_kuttl_run_env
  when: make_heat_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_run_env

- name: Debug make_heat_kuttl_run_params
  when: make_heat_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_run_params

- name: Run heat_kuttl_run
  retries: "{{ make_heat_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_heat_kuttl_run_delay | default(omit) }}"
  until: "{{ make_heat_kuttl_run_until | default(true) }}"
  register: "make_heat_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_kuttl_run"
    dry_run: "{{ make_heat_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_heat_kuttl_run_env | default({})), **(make_heat_kuttl_run_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl.yml
---
- name: Debug make_heat_kuttl_env
  when: make_heat_kuttl_env is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_env

- name: Debug make_heat_kuttl_params
  when: make_heat_kuttl_params is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_params

- name: Run heat_kuttl
  retries: "{{ make_heat_kuttl_retries | default(omit) }}"
  delay: "{{ make_heat_kuttl_delay | default(omit) }}"
  until: "{{ make_heat_kuttl_until | default(true) }}"
  register: "make_heat_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_kuttl"
    dry_run: "{{ make_heat_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_heat_kuttl_env | default({})), **(make_heat_kuttl_params | default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl_crc.yml
---
- name: Debug make_heat_kuttl_crc_env
  when: make_heat_kuttl_crc_env is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_crc_env

- name: Debug make_heat_kuttl_crc_params
  when: make_heat_kuttl_crc_params is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_crc_params

- name: Run heat_kuttl_crc
  retries: "{{ make_heat_kuttl_crc_retries | default(omit) }}"
  delay: "{{ make_heat_kuttl_crc_delay | default(omit) }}"
  until: "{{ make_heat_kuttl_crc_until | default(true) }}"
  register: "make_heat_kuttl_crc_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_kuttl_crc"
    dry_run: "{{ make_heat_kuttl_crc_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_heat_kuttl_crc_env | default({})), **(make_heat_kuttl_crc_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_run.yml
---
- name: Debug make_ansibleee_kuttl_run_env
  when: make_ansibleee_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_run_env

- name: Debug make_ansibleee_kuttl_run_params
  when: make_ansibleee_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_run_params

- name: Run ansibleee_kuttl_run
  retries: "{{ make_ansibleee_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_ansibleee_kuttl_run_delay | default(omit) }}"
  until: "{{ make_ansibleee_kuttl_run_until | default(true) }}"
  register: "make_ansibleee_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_kuttl_run"
    dry_run: "{{ make_ansibleee_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ansibleee_kuttl_run_env | default({})), **(make_ansibleee_kuttl_run_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_cleanup.yml
---
- name: Debug make_ansibleee_kuttl_cleanup_env
  when: make_ansibleee_kuttl_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_cleanup_env

- name: Debug make_ansibleee_kuttl_cleanup_params
  when: make_ansibleee_kuttl_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_cleanup_params

- name: Run ansibleee_kuttl_cleanup
  retries: "{{ make_ansibleee_kuttl_cleanup_retries | default(omit) }}"
  delay: "{{ make_ansibleee_kuttl_cleanup_delay | default(omit) }}"
  until: "{{ make_ansibleee_kuttl_cleanup_until | default(true) }}"
  register: "make_ansibleee_kuttl_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_kuttl_cleanup"
    dry_run: "{{ make_ansibleee_kuttl_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ansibleee_kuttl_cleanup_env | default({})), **(make_ansibleee_kuttl_cleanup_params | default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_prep.yml
---
- name: Debug make_ansibleee_kuttl_prep_env
  when: make_ansibleee_kuttl_prep_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_prep_env

- name: Debug make_ansibleee_kuttl_prep_params
  when: make_ansibleee_kuttl_prep_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_prep_params

- name: Run ansibleee_kuttl_prep
  retries: "{{ make_ansibleee_kuttl_prep_retries | default(omit) }}"
  delay: "{{ make_ansibleee_kuttl_prep_delay | default(omit) }}"
  until: "{{ make_ansibleee_kuttl_prep_until | default(true) }}"
  register: "make_ansibleee_kuttl_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_kuttl_prep"
    dry_run: "{{ make_ansibleee_kuttl_prep_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ansibleee_kuttl_prep_env | default({})), **(make_ansibleee_kuttl_prep_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl.yml
---
- name: Debug make_ansibleee_kuttl_env
  when: make_ansibleee_kuttl_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_env

- name: Debug make_ansibleee_kuttl_params
  when: make_ansibleee_kuttl_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_params

- name: Run ansibleee_kuttl
  retries: "{{ make_ansibleee_kuttl_retries | default(omit) }}"
  delay: "{{ make_ansibleee_kuttl_delay | default(omit) }}"
  until: "{{ make_ansibleee_kuttl_until | default(true) }}"
  register: "make_ansibleee_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_kuttl"
    dry_run: "{{ make_ansibleee_kuttl_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_ansibleee_kuttl_env | default({})), **(make_ansibleee_kuttl_params | default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_kuttl_run.yml
---
- name: Debug make_glance_kuttl_run_env
  when: make_glance_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_glance_kuttl_run_env

- name: Debug make_glance_kuttl_run_params
  when: make_glance_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_glance_kuttl_run_params

- name: Run glance_kuttl_run
  retries: "{{ make_glance_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_glance_kuttl_run_delay | default(omit) }}"
  until: "{{ make_glance_kuttl_run_until | default(true) }}"
  register: "make_glance_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make glance_kuttl_run"
    dry_run: "{{ make_glance_kuttl_run_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_glance_kuttl_run_env | default({})), **(make_glance_kuttl_run_params | default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_kuttl.yml:
---
- name: Debug make_glance_kuttl_env
  when: make_glance_kuttl_env is defined
  ansible.builtin.debug:
    var: make_glance_kuttl_env

- name: Debug make_glance_kuttl_params
  when: make_glance_kuttl_params is defined
  ansible.builtin.debug:
    var: make_glance_kuttl_params

- name: Run glance_kuttl
  retries: "{{ make_glance_kuttl_retries | default(omit) }}"
  delay: "{{ make_glance_kuttl_delay | default(omit) }}"
  until: "{{ make_glance_kuttl_until | default(true) }}"
  register: "make_glance_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make glance_kuttl"
    dry_run: "{{ make_glance_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_glance_kuttl_env|default({})), **(make_glance_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_kuttl_run.yml:
---
- name: Debug make_manila_kuttl_run_env
  when: make_manila_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_manila_kuttl_run_env

- name: Debug make_manila_kuttl_run_params
  when: make_manila_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_manila_kuttl_run_params

- name: Run manila_kuttl_run
  retries: "{{ make_manila_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_manila_kuttl_run_delay | default(omit) }}"
  until: "{{ make_manila_kuttl_run_until | default(true) }}"
  register: "make_manila_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_kuttl_run"
    dry_run: "{{ make_manila_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_kuttl_run_env|default({})), **(make_manila_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_kuttl.yml:
---
- name: Debug make_manila_kuttl_env
  when: make_manila_kuttl_env is defined
  ansible.builtin.debug:
    var: make_manila_kuttl_env

- name: Debug make_manila_kuttl_params
  when: make_manila_kuttl_params is defined
  ansible.builtin.debug:
    var: make_manila_kuttl_params

- name: Run manila_kuttl
  retries: "{{ make_manila_kuttl_retries | default(omit) }}"
  delay: "{{ make_manila_kuttl_delay | default(omit) }}"
  until: "{{ make_manila_kuttl_until | default(true) }}"
  register: "make_manila_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_kuttl"
    dry_run: "{{ make_manila_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_kuttl_env|default({})), **(make_manila_kuttl_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_kuttl_run.yml:
---
- name: Debug make_swift_kuttl_run_env
  when: make_swift_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_swift_kuttl_run_env

- name: Debug make_swift_kuttl_run_params
  when: make_swift_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_swift_kuttl_run_params

- name: Run swift_kuttl_run
  retries: "{{ make_swift_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_swift_kuttl_run_delay | default(omit) }}"
  until: "{{ make_swift_kuttl_run_until | default(true) }}"
  register: "make_swift_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_kuttl_run"
    dry_run: "{{ make_swift_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_kuttl_run_env|default({})), **(make_swift_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_kuttl.yml:
---
- name: Debug make_swift_kuttl_env
  when: make_swift_kuttl_env is defined
  ansible.builtin.debug:
    var: make_swift_kuttl_env

- name: Debug make_swift_kuttl_params
  when: make_swift_kuttl_params is defined
  ansible.builtin.debug:
    var: make_swift_kuttl_params

- name: Run swift_kuttl
  retries: "{{ make_swift_kuttl_retries | default(omit) }}"
  delay: "{{ make_swift_kuttl_delay | default(omit) }}"
  until: "{{ make_swift_kuttl_until | default(true) }}"
  register: "make_swift_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_kuttl"
    dry_run: "{{ make_swift_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_kuttl_env|default({})), **(make_swift_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_kuttl_run.yml:
---
- name: Debug make_horizon_kuttl_run_env
  when: make_horizon_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_horizon_kuttl_run_env

- name: Debug make_horizon_kuttl_run_params
  when: make_horizon_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_horizon_kuttl_run_params

- name: Run horizon_kuttl_run
  retries: "{{ make_horizon_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_horizon_kuttl_run_delay | default(omit) }}"
  until: "{{ make_horizon_kuttl_run_until | default(true) }}"
  register: "make_horizon_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_kuttl_run"
    dry_run: "{{ make_horizon_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_kuttl_run_env|default({})), **(make_horizon_kuttl_run_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_kuttl.yml:
---
- name: Debug make_horizon_kuttl_env
  when: make_horizon_kuttl_env is defined
  ansible.builtin.debug:
    var: make_horizon_kuttl_env

- name: Debug make_horizon_kuttl_params
  when: make_horizon_kuttl_params is defined
  ansible.builtin.debug:
    var: make_horizon_kuttl_params

- name: Run horizon_kuttl
  retries: "{{ make_horizon_kuttl_retries | default(omit) }}"
  delay: "{{ make_horizon_kuttl_delay | default(omit) }}"
  until: "{{ make_horizon_kuttl_until | default(true) }}"
  register: "make_horizon_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_kuttl"
    dry_run: "{{ make_horizon_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_kuttl_env|default({})), **(make_horizon_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_kuttl_run.yml:
---
- name: Debug make_openstack_kuttl_run_env
  when: make_openstack_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_openstack_kuttl_run_env

- name: Debug make_openstack_kuttl_run_params
  when: make_openstack_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_openstack_kuttl_run_params

- name: Run openstack_kuttl_run
  retries: "{{ make_openstack_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_openstack_kuttl_run_delay | default(omit) }}"
  until: "{{ make_openstack_kuttl_run_until | default(true) }}"
  register: "make_openstack_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_kuttl_run"
    dry_run: "{{ make_openstack_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_kuttl_run_env|default({})), **(make_openstack_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_kuttl.yml:
---
- name: Debug make_openstack_kuttl_env
  when: make_openstack_kuttl_env is defined
  ansible.builtin.debug:
    var: make_openstack_kuttl_env

- name: Debug make_openstack_kuttl_params
  when: make_openstack_kuttl_params is defined
  ansible.builtin.debug:
    var: make_openstack_kuttl_params

- name: Run openstack_kuttl
  retries: "{{ make_openstack_kuttl_retries | default(omit) }}"
  delay: "{{ make_openstack_kuttl_delay | default(omit) }}"
  until: "{{ make_openstack_kuttl_until | default(true) }}"
  register: "make_openstack_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_kuttl"
    dry_run: "{{ make_openstack_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_kuttl_env|default({})), **(make_openstack_kuttl_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_chainsaw_run.yml:
---
- name: Debug make_mariadb_chainsaw_run_env
  when: make_mariadb_chainsaw_run_env is defined
  ansible.builtin.debug:
    var: make_mariadb_chainsaw_run_env

- name: Debug make_mariadb_chainsaw_run_params
  when: make_mariadb_chainsaw_run_params is defined
  ansible.builtin.debug:
    var: make_mariadb_chainsaw_run_params

- name: Run mariadb_chainsaw_run
  retries: "{{ make_mariadb_chainsaw_run_retries | default(omit) }}"
  delay: "{{ make_mariadb_chainsaw_run_delay | default(omit) }}"
  until: "{{ make_mariadb_chainsaw_run_until | default(true) }}"
  register: "make_mariadb_chainsaw_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make mariadb_chainsaw_run"
    dry_run: "{{ make_mariadb_chainsaw_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_mariadb_chainsaw_run_env|default({})), **(make_mariadb_chainsaw_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_chainsaw.yml:
---
- name: Debug make_mariadb_chainsaw_env
  when: make_mariadb_chainsaw_env is defined
  ansible.builtin.debug:
    var: make_mariadb_chainsaw_env

- name: Debug make_mariadb_chainsaw_params
  when: make_mariadb_chainsaw_params is defined
  ansible.builtin.debug:
    var: make_mariadb_chainsaw_params

- name: Run mariadb_chainsaw
  retries: "{{ make_mariadb_chainsaw_retries | default(omit) }}"
  delay: "{{ make_mariadb_chainsaw_delay | default(omit) }}"
  until: "{{ make_mariadb_chainsaw_until | default(true) }}"
  register: "make_mariadb_chainsaw_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make mariadb_chainsaw"
    dry_run: "{{ make_mariadb_chainsaw_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_mariadb_chainsaw_env|default({})), **(make_mariadb_chainsaw_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_prep.yml:
---
- name: Debug make_horizon_prep_env
  when: make_horizon_prep_env is defined
  ansible.builtin.debug:
    var: make_horizon_prep_env

- name: Debug make_horizon_prep_params
  when: make_horizon_prep_params is defined
  ansible.builtin.debug:
    var: make_horizon_prep_params

- name: Run horizon_prep
  retries: "{{ make_horizon_prep_retries | default(omit) }}"
  delay: "{{ make_horizon_prep_delay | default(omit) }}"
  until: "{{ make_horizon_prep_until | default(true) }}"
  register: "make_horizon_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_prep"
    dry_run: "{{ make_horizon_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_prep_env|default({})), **(make_horizon_prep_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon.yml:
---
- name: Debug make_horizon_env
  when: make_horizon_env is defined
  ansible.builtin.debug:
    var: make_horizon_env

- name: Debug make_horizon_params
  when: make_horizon_params is defined
  ansible.builtin.debug:
    var: make_horizon_params

- name: Run horizon
  retries: "{{ make_horizon_retries | default(omit) }}"
  delay: "{{ make_horizon_delay | default(omit) }}"
  until: "{{ make_horizon_until | default(true) }}"
  register: "make_horizon_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon"
    dry_run: "{{ make_horizon_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_env|default({})), **(make_horizon_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_cleanup.yml:
---
- name: Debug make_horizon_cleanup_env
  when: make_horizon_cleanup_env is defined
  ansible.builtin.debug:
    var: make_horizon_cleanup_env

- name: Debug make_horizon_cleanup_params
  when: make_horizon_cleanup_params is defined
  ansible.builtin.debug:
    var: make_horizon_cleanup_params

- name: Run horizon_cleanup
  retries: "{{ make_horizon_cleanup_retries | default(omit) }}"
  delay: "{{ make_horizon_cleanup_delay | default(omit) }}"
  until: "{{ make_horizon_cleanup_until | default(true) }}"
  register: "make_horizon_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_cleanup"
    dry_run: "{{ make_horizon_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_cleanup_env|default({})), **(make_horizon_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_deploy_prep.yml:
---
- name: Debug make_horizon_deploy_prep_env
  when: make_horizon_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_prep_env

- name: Debug make_horizon_deploy_prep_params
  when: make_horizon_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_prep_params

- name: Run horizon_deploy_prep
  retries: "{{ make_horizon_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_horizon_deploy_prep_delay | default(omit) }}"
  until: "{{ make_horizon_deploy_prep_until | default(true) }}"
  register: "make_horizon_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_deploy_prep"
    dry_run: "{{ make_horizon_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_deploy_prep_env|default({})), **(make_horizon_deploy_prep_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_deploy.yml:
---
- name: Debug make_horizon_deploy_env
  when: make_horizon_deploy_env is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_env

- name: Debug make_horizon_deploy_params
  when: make_horizon_deploy_params is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_params

- name: Run horizon_deploy
  retries: "{{ make_horizon_deploy_retries | default(omit) }}"
  delay: "{{ make_horizon_deploy_delay | default(omit) }}"
  until: "{{ make_horizon_deploy_until | default(true) }}"
  register: "make_horizon_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_deploy"
    dry_run: "{{ make_horizon_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_deploy_env|default({})), **(make_horizon_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_deploy_cleanup.yml:
---
- name: Debug make_horizon_deploy_cleanup_env
  when: make_horizon_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_cleanup_env

- name: Debug make_horizon_deploy_cleanup_params
  when: make_horizon_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_cleanup_params

- name: Run horizon_deploy_cleanup
  retries: "{{ make_horizon_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_horizon_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_horizon_deploy_cleanup_until | default(true) }}"
  register: "make_horizon_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_deploy_cleanup"
    dry_run: "{{ make_horizon_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_deploy_cleanup_env|default({})), **(make_horizon_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_prep.yml:
---
- name: Debug make_heat_prep_env
  when: make_heat_prep_env is defined
  ansible.builtin.debug:
    var: make_heat_prep_env

- name: Debug make_heat_prep_params
  when: make_heat_prep_params is defined
  ansible.builtin.debug:
    var: make_heat_prep_params

- name: Run heat_prep
  retries: "{{ make_heat_prep_retries | default(omit) }}"
  delay: "{{ make_heat_prep_delay | default(omit) }}"
  until: "{{ make_heat_prep_until | default(true) }}"
  register: "make_heat_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_prep"
    dry_run: "{{ make_heat_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_prep_env|default({})), **(make_heat_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat.yml:
---
- name: Debug make_heat_env
  when: make_heat_env is defined
  ansible.builtin.debug:
    var: make_heat_env

- name: Debug make_heat_params
  when: make_heat_params is defined
  ansible.builtin.debug:
    var: make_heat_params

- name: Run heat
  retries: "{{ make_heat_retries | default(omit) }}"
  delay: "{{ make_heat_delay | default(omit) }}"
  until: "{{ make_heat_until | default(true) }}"
  register: "make_heat_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat"
    dry_run: "{{ make_heat_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_env|default({})), **(make_heat_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_cleanup.yml:
---
- name: Debug make_heat_cleanup_env
  when: make_heat_cleanup_env is defined
  ansible.builtin.debug:
    var: make_heat_cleanup_env

- name: Debug make_heat_cleanup_params
  when: make_heat_cleanup_params is defined
  ansible.builtin.debug:
    var: make_heat_cleanup_params

- name: Run heat_cleanup
  retries: "{{ make_heat_cleanup_retries | default(omit) }}"
  delay: "{{ make_heat_cleanup_delay | default(omit) }}"
  until: "{{ make_heat_cleanup_until | default(true) }}"
  register: "make_heat_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_cleanup"
    dry_run: "{{ make_heat_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_cleanup_env|default({})), **(make_heat_cleanup_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_deploy_prep.yml:
---
- name: Debug make_heat_deploy_prep_env
  when: make_heat_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_heat_deploy_prep_env

- name: Debug make_heat_deploy_prep_params
  when: make_heat_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_heat_deploy_prep_params

- name: Run heat_deploy_prep
  retries: "{{ make_heat_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_heat_deploy_prep_delay | default(omit) }}"
  until: "{{ make_heat_deploy_prep_until | default(true) }}"
  register: "make_heat_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_deploy_prep"
    dry_run: "{{ make_heat_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_deploy_prep_env|default({})), **(make_heat_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_deploy.yml:
---
- name: Debug make_heat_deploy_env
  when: make_heat_deploy_env is defined
  ansible.builtin.debug:
    var: make_heat_deploy_env

- name: Debug make_heat_deploy_params
  when: make_heat_deploy_params is defined
  ansible.builtin.debug:
    var: make_heat_deploy_params

- name: Run heat_deploy
  retries: "{{ make_heat_deploy_retries | default(omit) }}"
  delay: "{{ make_heat_deploy_delay | default(omit) }}"
  until: "{{ make_heat_deploy_until | default(true) }}"
  register: "make_heat_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_deploy"
    dry_run: "{{ make_heat_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_deploy_env|default({})), **(make_heat_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_deploy_cleanup.yml:
---
- name: Debug make_heat_deploy_cleanup_env
  when: make_heat_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_heat_deploy_cleanup_env

- name: Debug make_heat_deploy_cleanup_params
  when: make_heat_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_heat_deploy_cleanup_params

- name: Run heat_deploy_cleanup
  retries: "{{ make_heat_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_heat_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_heat_deploy_cleanup_until | default(true) }}"
  register: "make_heat_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_deploy_cleanup"
    dry_run: "{{ make_heat_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_deploy_cleanup_env|default({})), **(make_heat_deploy_cleanup_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_prep.yml:
---
- name: Debug make_ansibleee_prep_env
  when: make_ansibleee_prep_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_prep_env

- name: Debug make_ansibleee_prep_params
  when: make_ansibleee_prep_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_prep_params

- name: Run ansibleee_prep
  retries: "{{ make_ansibleee_prep_retries | default(omit) }}"
  delay: "{{ make_ansibleee_prep_delay | default(omit) }}"
  until: "{{ make_ansibleee_prep_until | default(true) }}"
  register: "make_ansibleee_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_prep"
    dry_run: "{{ make_ansibleee_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_prep_env|default({})), **(make_ansibleee_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee.yml:
---
- name: Debug make_ansibleee_env
  when: make_ansibleee_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_env

- name: Debug make_ansibleee_params
  when: make_ansibleee_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_params

- name: Run ansibleee
  retries: "{{ make_ansibleee_retries | default(omit) }}"
  delay: "{{ make_ansibleee_delay | default(omit) }}"
  until: "{{ make_ansibleee_until | default(true) }}"
  register: "make_ansibleee_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee"
    dry_run: "{{ make_ansibleee_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_env|default({})), **(make_ansibleee_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_cleanup.yml:
---
- name: Debug make_ansibleee_cleanup_env
  when: make_ansibleee_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_cleanup_env

- name: Debug make_ansibleee_cleanup_params
  when: make_ansibleee_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_cleanup_params

- name: Run ansibleee_cleanup
  retries: "{{ make_ansibleee_cleanup_retries | default(omit) }}"
  delay: "{{ make_ansibleee_cleanup_delay | default(omit) }}"
  until: "{{ make_ansibleee_cleanup_until | default(true) }}"
  register: "make_ansibleee_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_cleanup"
    dry_run: "{{ make_ansibleee_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_cleanup_env|default({})), **(make_ansibleee_cleanup_params|default({}))) }}"
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremetal_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremeta0000644000175000017500000000167315133657453033372 0ustar zuulzuul--- - name: Debug make_baremetal_prep_env when: make_baremetal_prep_env is defined ansible.builtin.debug: var: make_baremetal_prep_env - name: Debug make_baremetal_prep_params when: make_baremetal_prep_params is defined ansible.builtin.debug: var: make_baremetal_prep_params - name: Run baremetal_prep retries: "{{ make_baremetal_prep_retries | default(omit) }}" delay: "{{ make_baremetal_prep_delay | default(omit) }}" until: "{{ make_baremetal_prep_until | default(true) }}" register: "make_baremetal_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make baremetal_prep" dry_run: "{{ make_baremetal_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_baremetal_prep_env|default({})), **(make_baremetal_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremetal.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremeta0000644000175000017500000000156015133657453033365 0ustar zuulzuul--- - name: Debug make_baremetal_env when: make_baremetal_env is defined ansible.builtin.debug: var: make_baremetal_env - name: Debug make_baremetal_params when: make_baremetal_params is defined ansible.builtin.debug: var: make_baremetal_params - name: Run baremetal retries: "{{ make_baremetal_retries | default(omit) }}" delay: "{{ make_baremetal_delay | default(omit) }}" until: "{{ make_baremetal_until | default(true) }}" register: 
"make_baremetal_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make baremetal" dry_run: "{{ make_baremetal_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_baremetal_env|default({})), **(make_baremetal_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremetal_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremeta0000644000175000017500000000175015133657453033366 0ustar zuulzuul--- - name: Debug make_baremetal_cleanup_env when: make_baremetal_cleanup_env is defined ansible.builtin.debug: var: make_baremetal_cleanup_env - name: Debug make_baremetal_cleanup_params when: make_baremetal_cleanup_params is defined ansible.builtin.debug: var: make_baremetal_cleanup_params - name: Run baremetal_cleanup retries: "{{ make_baremetal_cleanup_retries | default(omit) }}" delay: "{{ make_baremetal_cleanup_delay | default(omit) }}" until: "{{ make_baremetal_cleanup_until | default(true) }}" register: "make_baremetal_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make baremetal_cleanup" dry_run: "{{ make_baremetal_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_baremetal_cleanup_env|default({})), **(make_baremetal_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph_help.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph_hel0000644000175000017500000000156015133657453033354 0ustar zuulzuul--- - name: Debug make_ceph_help_env when: make_ceph_help_env is defined ansible.builtin.debug: var: make_ceph_help_env - name: Debug make_ceph_help_params when: make_ceph_help_params is defined ansible.builtin.debug: var: make_ceph_help_params - name: Run ceph_help retries: "{{ make_ceph_help_retries | default(omit) }}" delay: "{{ make_ceph_help_delay | default(omit) }}" until: "{{ make_ceph_help_until | default(true) }}" register: "make_ceph_help_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ceph_help" dry_run: "{{ make_ceph_help_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ceph_help_env|default({})), **(make_ceph_help_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph.yml0000644000175000017500000000144515133657453033326 0ustar zuulzuul--- - name: Debug make_ceph_env when: make_ceph_env is defined ansible.builtin.debug: var: make_ceph_env - name: Debug make_ceph_params when: make_ceph_params is defined ansible.builtin.debug: var: make_ceph_params - name: Run ceph retries: "{{ make_ceph_retries | default(omit) }}" delay: "{{ make_ceph_delay | default(omit) }}" until: "{{ make_ceph_until | default(true) }}" register: "make_ceph_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ceph" dry_run: "{{ make_ceph_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ceph_env|default({})), 
**(make_ceph_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph_cle0000644000175000017500000000163515133657453033352 0ustar zuulzuul--- - name: Debug make_ceph_cleanup_env when: make_ceph_cleanup_env is defined ansible.builtin.debug: var: make_ceph_cleanup_env - name: Debug make_ceph_cleanup_params when: make_ceph_cleanup_params is defined ansible.builtin.debug: var: make_ceph_cleanup_params - name: Run ceph_cleanup retries: "{{ make_ceph_cleanup_retries | default(omit) }}" delay: "{{ make_ceph_cleanup_delay | default(omit) }}" until: "{{ make_ceph_cleanup_until | default(true) }}" register: "make_ceph_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ceph_cleanup" dry_run: "{{ make_ceph_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ceph_cleanup_env|default({})), **(make_ceph_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_pre0000644000175000017500000000156015133657453033425 0ustar zuulzuul--- - name: Debug make_rook_prep_env when: make_rook_prep_env is defined ansible.builtin.debug: var: make_rook_prep_env - name: Debug make_rook_prep_params when: make_rook_prep_params is defined ansible.builtin.debug: var: make_rook_prep_params - name: Run rook_prep retries: "{{ make_rook_prep_retries | default(omit) }}" delay: "{{ make_rook_prep_delay | default(omit) }}" until: 
"{{ make_rook_prep_until | default(true) }}" register: "make_rook_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_prep" dry_run: "{{ make_rook_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_prep_env|default({})), **(make_rook_prep_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook.yml0000644000175000017500000000144515133657453033361 0ustar zuulzuul--- - name: Debug make_rook_env when: make_rook_env is defined ansible.builtin.debug: var: make_rook_env - name: Debug make_rook_params when: make_rook_params is defined ansible.builtin.debug: var: make_rook_params - name: Run rook retries: "{{ make_rook_retries | default(omit) }}" delay: "{{ make_rook_delay | default(omit) }}" until: "{{ make_rook_until | default(true) }}" register: "make_rook_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook" dry_run: "{{ make_rook_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_env|default({})), **(make_rook_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_dep0000644000175000017500000000173115133657453033407 0ustar zuulzuul--- - name: Debug make_rook_deploy_prep_env when: make_rook_deploy_prep_env is defined ansible.builtin.debug: var: make_rook_deploy_prep_env - name: Debug make_rook_deploy_prep_params when: make_rook_deploy_prep_params is defined ansible.builtin.debug: var: 
make_rook_deploy_prep_params - name: Run rook_deploy_prep retries: "{{ make_rook_deploy_prep_retries | default(omit) }}" delay: "{{ make_rook_deploy_prep_delay | default(omit) }}" until: "{{ make_rook_deploy_prep_until | default(true) }}" register: "make_rook_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_deploy_prep" dry_run: "{{ make_rook_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_deploy_prep_env|default({})), **(make_rook_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_dep0000644000175000017500000000161615133657453033411 0ustar zuulzuul--- - name: Debug make_rook_deploy_env when: make_rook_deploy_env is defined ansible.builtin.debug: var: make_rook_deploy_env - name: Debug make_rook_deploy_params when: make_rook_deploy_params is defined ansible.builtin.debug: var: make_rook_deploy_params - name: Run rook_deploy retries: "{{ make_rook_deploy_retries | default(omit) }}" delay: "{{ make_rook_deploy_delay | default(omit) }}" until: "{{ make_rook_deploy_until | default(true) }}" register: "make_rook_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_deploy" dry_run: "{{ make_rook_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_deploy_env|default({})), **(make_rook_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_crc_disk.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_crc0000644000175000017500000000165415133657453033412 0ustar zuulzuul--- - name: Debug make_rook_crc_disk_env when: make_rook_crc_disk_env is defined ansible.builtin.debug: var: make_rook_crc_disk_env - name: Debug make_rook_crc_disk_params when: make_rook_crc_disk_params is defined ansible.builtin.debug: var: make_rook_crc_disk_params - name: Run rook_crc_disk retries: "{{ make_rook_crc_disk_retries | default(omit) }}" delay: "{{ make_rook_crc_disk_delay | default(omit) }}" until: "{{ make_rook_crc_disk_until | default(true) }}" register: "make_rook_crc_disk_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_crc_disk" dry_run: "{{ make_rook_crc_disk_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_crc_disk_env|default({})), **(make_rook_crc_disk_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_cle0000644000175000017500000000163515133657453033405 0ustar zuulzuul--- - name: Debug make_rook_cleanup_env when: make_rook_cleanup_env is defined ansible.builtin.debug: var: make_rook_cleanup_env - name: Debug make_rook_cleanup_params when: make_rook_cleanup_params is defined ansible.builtin.debug: var: make_rook_cleanup_params - name: Run rook_cleanup retries: "{{ make_rook_cleanup_retries | default(omit) }}" delay: "{{ make_rook_cleanup_delay | default(omit) }}" until: "{{ make_rook_cleanup_until | default(true) }}" register: 
"make_rook_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_cleanup" dry_run: "{{ make_rook_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_cleanup_env|default({})), **(make_rook_cleanup_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_lvms.yml0000644000175000017500000000144515133657453033370 0ustar zuulzuul--- - name: Debug make_lvms_env when: make_lvms_env is defined ansible.builtin.debug: var: make_lvms_env - name: Debug make_lvms_params when: make_lvms_params is defined ansible.builtin.debug: var: make_lvms_params - name: Run lvms retries: "{{ make_lvms_retries | default(omit) }}" delay: "{{ make_lvms_delay | default(omit) }}" until: "{{ make_lvms_until | default(true) }}" register: "make_lvms_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make lvms" dry_run: "{{ make_lvms_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_lvms_env|default({})), **(make_lvms_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nmstate.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nmstate.0000644000175000017500000000152215133657453033334 0ustar zuulzuul--- - name: Debug make_nmstate_env when: make_nmstate_env is defined ansible.builtin.debug: var: make_nmstate_env - name: Debug make_nmstate_params when: make_nmstate_params is defined ansible.builtin.debug: var: make_nmstate_params - name: Run nmstate retries: "{{ make_nmstate_retries | default(omit) }}" delay: "{{ 
make_nmstate_delay | default(omit) }}" until: "{{ make_nmstate_until | default(true) }}" register: "make_nmstate_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nmstate" dry_run: "{{ make_nmstate_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nmstate_env|default({})), **(make_nmstate_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nncp.yml0000644000175000017500000000144515133657453033345 0ustar zuulzuul--- - name: Debug make_nncp_env when: make_nncp_env is defined ansible.builtin.debug: var: make_nncp_env - name: Debug make_nncp_params when: make_nncp_params is defined ansible.builtin.debug: var: make_nncp_params - name: Run nncp retries: "{{ make_nncp_retries | default(omit) }}" delay: "{{ make_nncp_delay | default(omit) }}" until: "{{ make_nncp_until | default(true) }}" register: "make_nncp_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nncp" dry_run: "{{ make_nncp_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nncp_env|default({})), **(make_nncp_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nncp_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nncp_cle0000644000175000017500000000163515133657453033371 0ustar zuulzuul--- - name: Debug make_nncp_cleanup_env when: make_nncp_cleanup_env is defined ansible.builtin.debug: var: make_nncp_cleanup_env - name: Debug make_nncp_cleanup_params when: make_nncp_cleanup_params is defined ansible.builtin.debug: var: 
make_nncp_cleanup_params - name: Run nncp_cleanup retries: "{{ make_nncp_cleanup_retries | default(omit) }}" delay: "{{ make_nncp_cleanup_delay | default(omit) }}" until: "{{ make_nncp_cleanup_until | default(true) }}" register: "make_nncp_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nncp_cleanup" dry_run: "{{ make_nncp_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nncp_cleanup_env|default({})), **(make_nncp_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netattach.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netattac0000644000175000017500000000156015133657453033410 0ustar zuulzuul--- - name: Debug make_netattach_env when: make_netattach_env is defined ansible.builtin.debug: var: make_netattach_env - name: Debug make_netattach_params when: make_netattach_params is defined ansible.builtin.debug: var: make_netattach_params - name: Run netattach retries: "{{ make_netattach_retries | default(omit) }}" delay: "{{ make_netattach_delay | default(omit) }}" until: "{{ make_netattach_until | default(true) }}" register: "make_netattach_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netattach" dry_run: "{{ make_netattach_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netattach_env|default({})), **(make_netattach_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netattach_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netattac0000644000175000017500000000175015133657453033411 0ustar zuulzuul--- - name: Debug make_netattach_cleanup_env when: make_netattach_cleanup_env is defined ansible.builtin.debug: var: make_netattach_cleanup_env - name: Debug make_netattach_cleanup_params when: make_netattach_cleanup_params is defined ansible.builtin.debug: var: make_netattach_cleanup_params - name: Run netattach_cleanup retries: "{{ make_netattach_cleanup_retries | default(omit) }}" delay: "{{ make_netattach_cleanup_delay | default(omit) }}" until: "{{ make_netattach_cleanup_until | default(true) }}" register: "make_netattach_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netattach_cleanup" dry_run: "{{ make_netattach_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netattach_cleanup_env|default({})), **(make_netattach_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb.0000644000175000017500000000152215133657453033301 0ustar zuulzuul--- - name: Debug make_metallb_env when: make_metallb_env is defined ansible.builtin.debug: var: make_metallb_env - name: Debug make_metallb_params when: make_metallb_params is defined ansible.builtin.debug: var: make_metallb_params - name: Run metallb retries: "{{ make_metallb_retries | default(omit) }}" delay: "{{ make_metallb_delay | default(omit) }}" until: "{{ make_metallb_until | default(true) }}" 
register: "make_metallb_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make metallb" dry_run: "{{ make_metallb_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_metallb_env|default({})), **(make_metallb_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_config.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_0000644000175000017500000000167315133657453033371 0ustar zuulzuul--- - name: Debug make_metallb_config_env when: make_metallb_config_env is defined ansible.builtin.debug: var: make_metallb_config_env - name: Debug make_metallb_config_params when: make_metallb_config_params is defined ansible.builtin.debug: var: make_metallb_config_params - name: Run metallb_config retries: "{{ make_metallb_config_retries | default(omit) }}" delay: "{{ make_metallb_config_delay | default(omit) }}" until: "{{ make_metallb_config_until | default(true) }}" register: "make_metallb_config_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make metallb_config" dry_run: "{{ make_metallb_config_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_metallb_config_env|default({})), **(make_metallb_config_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_config_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_0000644000175000017500000000206315133657453033363 0ustar zuulzuul--- - name: Debug make_metallb_config_cleanup_env when: make_metallb_config_cleanup_env is defined ansible.builtin.debug: var: make_metallb_config_cleanup_env - name: Debug make_metallb_config_cleanup_params when: make_metallb_config_cleanup_params is defined ansible.builtin.debug: var: make_metallb_config_cleanup_params - name: Run metallb_config_cleanup retries: "{{ make_metallb_config_cleanup_retries | default(omit) }}" delay: "{{ make_metallb_config_cleanup_delay | default(omit) }}" until: "{{ make_metallb_config_cleanup_until | default(true) }}" register: "make_metallb_config_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make metallb_config_cleanup" dry_run: "{{ make_metallb_config_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_metallb_config_cleanup_env|default({})), **(make_metallb_config_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_0000644000175000017500000000171215133657453033363 0ustar zuulzuul--- - name: Debug make_metallb_cleanup_env when: make_metallb_cleanup_env is defined ansible.builtin.debug: var: make_metallb_cleanup_env - name: Debug make_metallb_cleanup_params when: make_metallb_cleanup_params is defined ansible.builtin.debug: var: make_metallb_cleanup_params - name: Run metallb_cleanup retries: "{{ 
make_metallb_cleanup_retries | default(omit) }}" delay: "{{ make_metallb_cleanup_delay | default(omit) }}" until: "{{ make_metallb_cleanup_until | default(true) }}" register: "make_metallb_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make metallb_cleanup" dry_run: "{{ make_metallb_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_metallb_cleanup_env|default({})), **(make_metallb_cleanup_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki.yml0000644000175000017500000000144515133657453033345 0ustar zuulzuul--- - name: Debug make_loki_env when: make_loki_env is defined ansible.builtin.debug: var: make_loki_env - name: Debug make_loki_params when: make_loki_params is defined ansible.builtin.debug: var: make_loki_params - name: Run loki retries: "{{ make_loki_retries | default(omit) }}" delay: "{{ make_loki_delay | default(omit) }}" until: "{{ make_loki_until | default(true) }}" register: "make_loki_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make loki" dry_run: "{{ make_loki_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_loki_env|default({})), **(make_loki_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_cle0000644000175000017500000000163515133657453033371 0ustar zuulzuul--- - name: Debug make_loki_cleanup_env when: make_loki_cleanup_env is defined ansible.builtin.debug: var: 
make_loki_cleanup_env - name: Debug make_loki_cleanup_params when: make_loki_cleanup_params is defined ansible.builtin.debug: var: make_loki_cleanup_params - name: Run loki_cleanup retries: "{{ make_loki_cleanup_retries | default(omit) }}" delay: "{{ make_loki_cleanup_delay | default(omit) }}" until: "{{ make_loki_cleanup_until | default(true) }}" register: "make_loki_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make loki_cleanup" dry_run: "{{ make_loki_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_loki_cleanup_env|default({})), **(make_loki_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_dep0000644000175000017500000000161615133657453033375 0ustar zuulzuul--- - name: Debug make_loki_deploy_env when: make_loki_deploy_env is defined ansible.builtin.debug: var: make_loki_deploy_env - name: Debug make_loki_deploy_params when: make_loki_deploy_params is defined ansible.builtin.debug: var: make_loki_deploy_params - name: Run loki_deploy retries: "{{ make_loki_deploy_retries | default(omit) }}" delay: "{{ make_loki_deploy_delay | default(omit) }}" until: "{{ make_loki_deploy_until | default(true) }}" register: "make_loki_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make loki_deploy" dry_run: "{{ make_loki_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_loki_deploy_env|default({})), **(make_loki_deploy_params|default({}))) }}" 
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_deploy_cleanup.yml
---
- name: Debug make_loki_deploy_cleanup_env
  when: make_loki_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_loki_deploy_cleanup_env

- name: Debug make_loki_deploy_cleanup_params
  when: make_loki_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_loki_deploy_cleanup_params

- name: Run loki_deploy_cleanup
  retries: "{{ make_loki_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_loki_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_loki_deploy_cleanup_until | default(true) }}"
  register: "make_loki_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make loki_deploy_cleanup"
    dry_run: "{{ make_loki_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_loki_deploy_cleanup_env|default({})), **(make_loki_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobserv.yml
---
- name: Debug make_netobserv_env
  when: make_netobserv_env is defined
  ansible.builtin.debug:
    var: make_netobserv_env

- name: Debug make_netobserv_params
  when: make_netobserv_params is defined
  ansible.builtin.debug:
    var: make_netobserv_params

- name: Run netobserv
  retries: "{{ make_netobserv_retries | default(omit) }}"
  delay: "{{ make_netobserv_delay | default(omit) }}"
  until: "{{ make_netobserv_until | default(true) }}"
  register: "make_netobserv_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netobserv"
    dry_run: "{{ make_netobserv_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netobserv_env|default({})), **(make_netobserv_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobserv_cleanup.yml
---
- name: Debug make_netobserv_cleanup_env
  when: make_netobserv_cleanup_env is defined
  ansible.builtin.debug:
    var: make_netobserv_cleanup_env

- name: Debug make_netobserv_cleanup_params
  when: make_netobserv_cleanup_params is defined
  ansible.builtin.debug:
    var: make_netobserv_cleanup_params

- name: Run netobserv_cleanup
  retries: "{{ make_netobserv_cleanup_retries | default(omit) }}"
  delay: "{{ make_netobserv_cleanup_delay | default(omit) }}"
  until: "{{ make_netobserv_cleanup_until | default(true) }}"
  register: "make_netobserv_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netobserv_cleanup"
    dry_run: "{{ make_netobserv_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netobserv_cleanup_env|default({})), **(make_netobserv_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobserv_deploy.yml
---
- name: Debug make_netobserv_deploy_env
  when: make_netobserv_deploy_env is defined
  ansible.builtin.debug:
    var: make_netobserv_deploy_env

- name: Debug make_netobserv_deploy_params
  when: make_netobserv_deploy_params is defined
  ansible.builtin.debug:
    var: make_netobserv_deploy_params

- name: Run netobserv_deploy
  retries: "{{ make_netobserv_deploy_retries | default(omit) }}"
  delay: "{{ make_netobserv_deploy_delay | default(omit) }}"
  until: "{{ make_netobserv_deploy_until | default(true) }}"
  register: "make_netobserv_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netobserv_deploy"
    dry_run: "{{ make_netobserv_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netobserv_deploy_env|default({})), **(make_netobserv_deploy_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobserv_deploy_cleanup.yml
---
- name: Debug make_netobserv_deploy_cleanup_env
  when: make_netobserv_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_netobserv_deploy_cleanup_env

- name: Debug make_netobserv_deploy_cleanup_params
  when: make_netobserv_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_netobserv_deploy_cleanup_params

- name: Run netobserv_deploy_cleanup
  retries: "{{ make_netobserv_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_netobserv_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_netobserv_deploy_cleanup_until | default(true) }}"
  register: "make_netobserv_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netobserv_deploy_cleanup"
    dry_run: "{{ make_netobserv_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netobserv_deploy_cleanup_env|default({})), **(make_netobserv_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_prep.yml
---
- name: Debug make_manila_prep_env
  when: make_manila_prep_env is defined
  ansible.builtin.debug:
    var: make_manila_prep_env

- name: Debug make_manila_prep_params
  when: make_manila_prep_params is defined
  ansible.builtin.debug:
    var: make_manila_prep_params

- name: Run manila_prep
  retries: "{{ make_manila_prep_retries | default(omit) }}"
  delay: "{{ make_manila_prep_delay | default(omit) }}"
  until: "{{ make_manila_prep_until | default(true) }}"
  register: "make_manila_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_prep"
    dry_run: "{{ make_manila_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_prep_env|default({})), **(make_manila_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila.yml
---
- name: Debug make_manila_env
  when: make_manila_env is defined
  ansible.builtin.debug:
    var: make_manila_env

- name: Debug make_manila_params
  when: make_manila_params is defined
  ansible.builtin.debug:
    var: make_manila_params

- name: Run manila
  retries: "{{ make_manila_retries | default(omit) }}"
  delay: "{{ make_manila_delay | default(omit) }}"
  until: "{{ make_manila_until | default(true) }}"
  register: "make_manila_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila"
    dry_run: "{{ make_manila_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_env|default({})), **(make_manila_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_cleanup.yml
---
- name: Debug make_manila_cleanup_env
  when: make_manila_cleanup_env is defined
  ansible.builtin.debug:
    var: make_manila_cleanup_env

- name: Debug make_manila_cleanup_params
  when: make_manila_cleanup_params is defined
  ansible.builtin.debug:
    var: make_manila_cleanup_params

- name: Run manila_cleanup
  retries: "{{ make_manila_cleanup_retries | default(omit) }}"
  delay: "{{ make_manila_cleanup_delay | default(omit) }}"
  until: "{{ make_manila_cleanup_until | default(true) }}"
  register: "make_manila_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_cleanup"
    dry_run: "{{ make_manila_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_cleanup_env|default({})), **(make_manila_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy_prep.yml
---
- name: Debug make_manila_deploy_prep_env
  when: make_manila_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_prep_env

- name: Debug make_manila_deploy_prep_params
  when: make_manila_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_prep_params

- name: Run manila_deploy_prep
  retries: "{{ make_manila_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_prep_delay | default(omit) }}"
  until: "{{ make_manila_deploy_prep_until | default(true) }}"
  register: "make_manila_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy_prep"
    dry_run: "{{ make_manila_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_prep_env|default({})), **(make_manila_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy.yml
---
- name: Debug make_manila_deploy_env
  when: make_manila_deploy_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_env

- name: Debug make_manila_deploy_params
  when: make_manila_deploy_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_params

- name: Run manila_deploy
  retries: "{{ make_manila_deploy_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_delay | default(omit) }}"
  until: "{{ make_manila_deploy_until | default(true) }}"
  register: "make_manila_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy"
    dry_run: "{{ make_manila_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_env|default({})), **(make_manila_deploy_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy_cleanup.yml
---
- name: Debug make_manila_deploy_cleanup_env
  when: make_manila_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_cleanup_env

- name: Debug make_manila_deploy_cleanup_params
  when: make_manila_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_cleanup_params

- name: Run manila_deploy_cleanup
  retries: "{{ make_manila_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_manila_deploy_cleanup_until | default(true) }}"
  register: "make_manila_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy_cleanup"
    dry_run: "{{ make_manila_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_cleanup_env|default({})), **(make_manila_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_prep.yml
---
- name: Debug make_telemetry_prep_env
  when: make_telemetry_prep_env is defined
  ansible.builtin.debug:
    var: make_telemetry_prep_env

- name: Debug make_telemetry_prep_params
  when: make_telemetry_prep_params is defined
  ansible.builtin.debug:
    var: make_telemetry_prep_params

- name: Run telemetry_prep
  retries: "{{ make_telemetry_prep_retries | default(omit) }}"
  delay: "{{ make_telemetry_prep_delay | default(omit) }}"
  until: "{{ make_telemetry_prep_until | default(true) }}"
  register: "make_telemetry_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_prep"
    dry_run: "{{ make_telemetry_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_prep_env|default({})), **(make_telemetry_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry.yml
---
- name: Debug make_telemetry_env
  when: make_telemetry_env is defined
  ansible.builtin.debug:
    var: make_telemetry_env

- name: Debug make_telemetry_params
  when: make_telemetry_params is defined
  ansible.builtin.debug:
    var: make_telemetry_params

- name: Run telemetry
  retries: "{{ make_telemetry_retries | default(omit) }}"
  delay: "{{ make_telemetry_delay | default(omit) }}"
  until: "{{ make_telemetry_until | default(true) }}"
  register: "make_telemetry_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry"
    dry_run: "{{ make_telemetry_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_env|default({})), **(make_telemetry_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_cleanup.yml
---
- name: Debug make_telemetry_cleanup_env
  when: make_telemetry_cleanup_env is defined
  ansible.builtin.debug:
    var: make_telemetry_cleanup_env

- name: Debug make_telemetry_cleanup_params
  when: make_telemetry_cleanup_params is defined
  ansible.builtin.debug:
    var: make_telemetry_cleanup_params

- name: Run telemetry_cleanup
  retries: "{{ make_telemetry_cleanup_retries | default(omit) }}"
  delay: "{{ make_telemetry_cleanup_delay | default(omit) }}"
  until: "{{ make_telemetry_cleanup_until | default(true) }}"
  register: "make_telemetry_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_cleanup"
    dry_run: "{{ make_telemetry_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_cleanup_env|default({})), **(make_telemetry_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy_prep.yml
---
- name: Debug make_telemetry_deploy_prep_env
  when: make_telemetry_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_prep_env

- name: Debug make_telemetry_deploy_prep_params
  when: make_telemetry_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_prep_params

- name: Run telemetry_deploy_prep
  retries: "{{ make_telemetry_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_prep_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_prep_until | default(true) }}"
  register: "make_telemetry_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy_prep"
    dry_run: "{{ make_telemetry_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_prep_env|default({})), **(make_telemetry_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy.yml
---
- name: Debug make_telemetry_deploy_env
  when: make_telemetry_deploy_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_env

- name: Debug make_telemetry_deploy_params
  when: make_telemetry_deploy_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_params

- name: Run telemetry_deploy
  retries: "{{ make_telemetry_deploy_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_until | default(true) }}"
  register: "make_telemetry_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy"
    dry_run: "{{ make_telemetry_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_env|default({})), **(make_telemetry_deploy_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy_cleanup.yml
---
- name: Debug make_telemetry_deploy_cleanup_env
  when: make_telemetry_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_cleanup_env

- name: Debug make_telemetry_deploy_cleanup_params
  when: make_telemetry_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_cleanup_params

- name: Run telemetry_deploy_cleanup
  retries: "{{ make_telemetry_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_cleanup_until | default(true) }}"
  register: "make_telemetry_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy_cleanup"
    dry_run: "{{ make_telemetry_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_cleanup_env|default({})), **(make_telemetry_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_kuttl_run.yml
---
- name: Debug make_telemetry_kuttl_run_env
  when: make_telemetry_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_run_env

- name: Debug make_telemetry_kuttl_run_params
  when: make_telemetry_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_run_params

- name: Run telemetry_kuttl_run
  retries: "{{ make_telemetry_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_telemetry_kuttl_run_delay | default(omit) }}"
  until: "{{ make_telemetry_kuttl_run_until | default(true) }}"
  register: "make_telemetry_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_kuttl_run"
    dry_run: "{{ make_telemetry_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_kuttl_run_env|default({})), **(make_telemetry_kuttl_run_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_kuttl.yml
---
- name: Debug make_telemetry_kuttl_env
  when: make_telemetry_kuttl_env is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_env

- name: Debug make_telemetry_kuttl_params
  when: make_telemetry_kuttl_params is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_params

- name: Run telemetry_kuttl
  retries: "{{ make_telemetry_kuttl_retries | default(omit) }}"
  delay: "{{ make_telemetry_kuttl_delay | default(omit) }}"
  until: "{{ make_telemetry_kuttl_until | default(true) }}"
  register: "make_telemetry_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_kuttl"
    dry_run: "{{ make_telemetry_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_kuttl_env|default({})), **(make_telemetry_kuttl_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_prep.yml
---
- name: Debug make_swift_prep_env
  when: make_swift_prep_env is defined
  ansible.builtin.debug:
    var: make_swift_prep_env

- name: Debug make_swift_prep_params
  when: make_swift_prep_params is defined
  ansible.builtin.debug:
    var: make_swift_prep_params

- name: Run swift_prep
  retries: "{{ make_swift_prep_retries | default(omit) }}"
  delay: "{{ make_swift_prep_delay | default(omit) }}"
  until: "{{ make_swift_prep_until | default(true) }}"
  register: "make_swift_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_prep"
    dry_run: "{{ make_swift_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_prep_env|default({})), **(make_swift_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift.yml
---
- name: Debug make_swift_env
  when: make_swift_env is defined
  ansible.builtin.debug:
    var: make_swift_env

- name: Debug make_swift_params
  when: make_swift_params is defined
  ansible.builtin.debug:
    var: make_swift_params

- name: Run swift
  retries: "{{ make_swift_retries | default(omit) }}"
  delay: "{{ make_swift_delay | default(omit) }}"
  until: "{{ make_swift_until | default(true) }}"
  register: "make_swift_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift"
    dry_run: "{{ make_swift_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_env|default({})), **(make_swift_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_cleanup.yml
---
- name: Debug make_swift_cleanup_env
  when: make_swift_cleanup_env is defined
  ansible.builtin.debug:
    var: make_swift_cleanup_env

- name: Debug make_swift_cleanup_params
  when: make_swift_cleanup_params is defined
  ansible.builtin.debug:
    var: make_swift_cleanup_params

- name: Run swift_cleanup
  retries: "{{ make_swift_cleanup_retries | default(omit) }}"
  delay: "{{ make_swift_cleanup_delay | default(omit) }}"
  until: "{{ make_swift_cleanup_until | default(true) }}"
  register: "make_swift_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_cleanup"
    dry_run: "{{ make_swift_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_cleanup_env|default({})), **(make_swift_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy_prep.yml
---
- name: Debug make_swift_deploy_prep_env
  when: make_swift_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_prep_env

- name: Debug make_swift_deploy_prep_params
  when: make_swift_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_prep_params

- name: Run swift_deploy_prep
  retries: "{{ make_swift_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_prep_delay | default(omit) }}"
  until: "{{ make_swift_deploy_prep_until | default(true) }}"
  register: "make_swift_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy_prep"
    dry_run: "{{ make_swift_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_prep_env|default({})), **(make_swift_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy.yml
---
- name: Debug make_swift_deploy_env
  when: make_swift_deploy_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_env

- name: Debug make_swift_deploy_params
  when: make_swift_deploy_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_params

- name: Run swift_deploy
  retries: "{{ make_swift_deploy_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_delay | default(omit) }}"
  until: "{{ make_swift_deploy_until | default(true) }}"
  register: "make_swift_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy"
    dry_run: "{{ make_swift_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_env|default({})), **(make_swift_deploy_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy_cleanup.yml
---
- name: Debug make_swift_deploy_cleanup_env
  when: make_swift_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_cleanup_env

- name: Debug make_swift_deploy_cleanup_params
  when: make_swift_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_cleanup_params

- name: Run swift_deploy_cleanup
  retries: "{{ make_swift_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_swift_deploy_cleanup_until | default(true) }}"
  register: "make_swift_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy_cleanup"
    dry_run: "{{ make_swift_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_cleanup_env|default({})), **(make_swift_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_certmanager.yml
---
- name: Debug make_certmanager_env
  when: make_certmanager_env is defined
  ansible.builtin.debug:
    var: make_certmanager_env

- name: Debug make_certmanager_params
  when: make_certmanager_params is defined
  ansible.builtin.debug:
    var: make_certmanager_params

- name: Run certmanager
  retries: "{{ make_certmanager_retries | default(omit) }}"
  delay: "{{ make_certmanager_delay | default(omit) }}"
  until: "{{ make_certmanager_until | default(true) }}"
  register: "make_certmanager_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make certmanager"
    dry_run: "{{ make_certmanager_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_certmanager_env|default({})), **(make_certmanager_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_certmanager_cleanup.yml
---
- name: Debug make_certmanager_cleanup_env
  when: make_certmanager_cleanup_env is defined
  ansible.builtin.debug:
    var: make_certmanager_cleanup_env

- name: Debug make_certmanager_cleanup_params
  when: make_certmanager_cleanup_params is defined
  ansible.builtin.debug:
    var: make_certmanager_cleanup_params

- name: Run certmanager_cleanup
  retries: "{{ make_certmanager_cleanup_retries | default(omit) }}"
  delay: "{{ make_certmanager_cleanup_delay | default(omit) }}"
  until: "{{ make_certmanager_cleanup_until | default(true) }}"
  register: "make_certmanager_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make certmanager_cleanup"
    dry_run: "{{ make_certmanager_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_certmanager_cleanup_env|default({})), **(make_certmanager_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_validate_marketplace.yml
---
- name: Debug make_validate_marketplace_env
  when: make_validate_marketplace_env is defined
  ansible.builtin.debug:
    var: make_validate_marketplace_env

- name: Debug make_validate_marketplace_params
  when: make_validate_marketplace_params is defined
  ansible.builtin.debug:
    var: make_validate_marketplace_params

- name: Run validate_marketplace
  retries: "{{ make_validate_marketplace_retries | default(omit) }}"
  delay: "{{ make_validate_marketplace_delay | default(omit) }}"
  until: "{{ make_validate_marketplace_until | default(true) }}"
  register: "make_validate_marketplace_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make validate_marketplace"
    dry_run: "{{ make_validate_marketplace_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_validate_marketplace_env|default({})), **(make_validate_marketplace_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy_prep.yml
---
- name: Debug make_redis_deploy_prep_env
  when: make_redis_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_prep_env

- name: Debug make_redis_deploy_prep_params
  when: make_redis_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_prep_params

- name: Run redis_deploy_prep
  retries: "{{ make_redis_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_prep_delay | default(omit) }}"
  until: "{{ make_redis_deploy_prep_until | default(true) }}"
  register: "make_redis_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy_prep"
    dry_run: "{{ make_redis_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_prep_env|default({})), **(make_redis_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy.yml
---
- name: Debug make_redis_deploy_env
  when: make_redis_deploy_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_env

- name: Debug make_redis_deploy_params
  when: make_redis_deploy_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_params

- name: Run redis_deploy
  retries: "{{ make_redis_deploy_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_delay | default(omit) }}"
  until: "{{ make_redis_deploy_until | default(true) }}"
  register: "make_redis_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy"
    dry_run: "{{ make_redis_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_env|default({})), **(make_redis_deploy_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy_cleanup.yml
---
- name: Debug make_redis_deploy_cleanup_env
  when: make_redis_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_cleanup_env

- name: Debug make_redis_deploy_cleanup_params
  when: make_redis_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_cleanup_params

- name: Run redis_deploy_cleanup
  retries: "{{ make_redis_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_redis_deploy_cleanup_until | default(true) }}"
  register: "make_redis_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy_cleanup"
    dry_run: "{{ make_redis_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_cleanup_env|default({})), **(make_redis_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_set_slower_etcd_profile.yml
---
- name: Debug make_set_slower_etcd_profile_env
  when: make_set_slower_etcd_profile_env is defined
  ansible.builtin.debug:
    var: make_set_slower_etcd_profile_env

- name: Debug make_set_slower_etcd_profile_params
  when: make_set_slower_etcd_profile_params is defined
  ansible.builtin.debug:
    var: make_set_slower_etcd_profile_params

- name: Run set_slower_etcd_profile
  retries: "{{ make_set_slower_etcd_profile_retries | default(omit) }}"
  delay: "{{ make_set_slower_etcd_profile_delay | default(omit) }}"
  until: "{{ make_set_slower_etcd_profile_until | default(true) }}"
  register: "make_set_slower_etcd_profile_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make set_slower_etcd_profile"
    dry_run: "{{ make_set_slower_etcd_profile_dryrun|default(false)|bool
}}" extra_args: "{{ dict((make_set_slower_etcd_profile_env|default({})), **(make_set_slower_etcd_profile_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_download_tools.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_download0000644000175000017500000000170415133657453033414 0ustar zuulzuul--- - name: Debug make_download_tools_env when: make_download_tools_env is defined ansible.builtin.debug: var: make_download_tools_env - name: Debug make_download_tools_params when: make_download_tools_params is defined ansible.builtin.debug: var: make_download_tools_params - name: Run download_tools retries: "{{ make_download_tools_retries | default(omit) }}" delay: "{{ make_download_tools_delay | default(omit) }}" until: "{{ make_download_tools_until | default(true) }}" register: "make_download_tools_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make download_tools" dry_run: "{{ make_download_tools_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_download_tools_env|default({})), **(make_download_tools_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nfs.yml0000644000175000017500000000143715133657453033176 0ustar zuulzuul--- - name: Debug make_nfs_env when: make_nfs_env is defined ansible.builtin.debug: var: make_nfs_env - name: Debug make_nfs_params when: make_nfs_params is defined ansible.builtin.debug: var: make_nfs_params - name: Run nfs retries: "{{ make_nfs_retries | default(omit) }}" delay: "{{ make_nfs_delay | default(omit) }}" until: "{{ make_nfs_until | default(true) }}" register: "make_nfs_status" cifmw.general.ci_script: output_dir: 
"{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make nfs" dry_run: "{{ make_nfs_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nfs_env|default({})), **(make_nfs_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nfs_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nfs_clea0000644000175000017500000000162715133657453033363 0ustar zuulzuul--- - name: Debug make_nfs_cleanup_env when: make_nfs_cleanup_env is defined ansible.builtin.debug: var: make_nfs_cleanup_env - name: Debug make_nfs_cleanup_params when: make_nfs_cleanup_params is defined ansible.builtin.debug: var: make_nfs_cleanup_params - name: Run nfs_cleanup retries: "{{ make_nfs_cleanup_retries | default(omit) }}" delay: "{{ make_nfs_cleanup_delay | default(omit) }}" until: "{{ make_nfs_cleanup_until | default(true) }}" register: "make_nfs_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make nfs_cleanup" dry_run: "{{ make_nfs_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nfs_cleanup_env|default({})), **(make_nfs_cleanup_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc.yml0000644000175000017500000000143715133657453033157 0ustar zuulzuul--- - name: Debug make_crc_env when: make_crc_env is defined ansible.builtin.debug: var: make_crc_env - name: Debug make_crc_params when: make_crc_params is defined ansible.builtin.debug: var: make_crc_params - name: Run crc retries: "{{ make_crc_retries | default(omit) }}" delay: "{{ 
make_crc_delay | default(omit) }}" until: "{{ make_crc_until | default(true) }}" register: "make_crc_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make crc" dry_run: "{{ make_crc_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_env|default({})), **(make_crc_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_clea0000644000175000017500000000162715133657453033344 0ustar zuulzuul--- - name: Debug make_crc_cleanup_env when: make_crc_cleanup_env is defined ansible.builtin.debug: var: make_crc_cleanup_env - name: Debug make_crc_cleanup_params when: make_crc_cleanup_params is defined ansible.builtin.debug: var: make_crc_cleanup_params - name: Run crc_cleanup retries: "{{ make_crc_cleanup_retries | default(omit) }}" delay: "{{ make_crc_cleanup_delay | default(omit) }}" until: "{{ make_crc_cleanup_until | default(true) }}" register: "make_crc_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make crc_cleanup" dry_run: "{{ make_crc_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_cleanup_env|default({})), **(make_crc_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_scrub.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_scru0000644000175000017500000000157115133657453033412 0ustar zuulzuul--- - name: Debug make_crc_scrub_env when: make_crc_scrub_env is defined ansible.builtin.debug: var: make_crc_scrub_env - name: Debug make_crc_scrub_params when: make_crc_scrub_params is defined ansible.builtin.debug: var: make_crc_scrub_params - name: Run crc_scrub retries: "{{ make_crc_scrub_retries | default(omit) }}" delay: "{{ make_crc_scrub_delay | default(omit) }}" until: "{{ make_crc_scrub_until | default(true) }}" register: "make_crc_scrub_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make crc_scrub" dry_run: "{{ make_crc_scrub_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_scrub_env|default({})), **(make_crc_scrub_params|default({}))) }}" ././@LongLink0000644000000000000000000000017500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_attach_default_interface.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_atta0000644000175000017500000000222615133657453033365 0ustar zuulzuul--- - name: Debug make_crc_attach_default_interface_env when: make_crc_attach_default_interface_env is defined ansible.builtin.debug: var: make_crc_attach_default_interface_env - name: Debug make_crc_attach_default_interface_params when: make_crc_attach_default_interface_params is defined ansible.builtin.debug: var: make_crc_attach_default_interface_params - name: Run crc_attach_default_interface retries: "{{ make_crc_attach_default_interface_retries | default(omit) }}" delay: "{{ 
make_crc_attach_default_interface_delay | default(omit) }}" until: "{{ make_crc_attach_default_interface_until | default(true) }}" register: "make_crc_attach_default_interface_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make crc_attach_default_interface" dry_run: "{{ make_crc_attach_default_interface_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_attach_default_interface_env|default({})), **(make_crc_attach_default_interface_params|default({}))) }}" ././@LongLink0000644000000000000000000000020500000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_attach_default_interface_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_atta0000644000175000017500000000241615133657453033366 0ustar zuulzuul--- - name: Debug make_crc_attach_default_interface_cleanup_env when: make_crc_attach_default_interface_cleanup_env is defined ansible.builtin.debug: var: make_crc_attach_default_interface_cleanup_env - name: Debug make_crc_attach_default_interface_cleanup_params when: make_crc_attach_default_interface_cleanup_params is defined ansible.builtin.debug: var: make_crc_attach_default_interface_cleanup_params - name: Run crc_attach_default_interface_cleanup retries: "{{ make_crc_attach_default_interface_cleanup_retries | default(omit) }}" delay: "{{ make_crc_attach_default_interface_cleanup_delay | default(omit) }}" until: "{{ make_crc_attach_default_interface_cleanup_until | default(true) }}" register: "make_crc_attach_default_interface_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make 
crc_attach_default_interface_cleanup" dry_run: "{{ make_crc_attach_default_interface_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_attach_default_interface_cleanup_env|default({})), **(make_crc_attach_default_interface_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_network.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab0000644000175000017500000000174215133657453033311 0ustar zuulzuul--- - name: Debug make_ipv6_lab_network_env when: make_ipv6_lab_network_env is defined ansible.builtin.debug: var: make_ipv6_lab_network_env - name: Debug make_ipv6_lab_network_params when: make_ipv6_lab_network_params is defined ansible.builtin.debug: var: make_ipv6_lab_network_params - name: Run ipv6_lab_network retries: "{{ make_ipv6_lab_network_retries | default(omit) }}" delay: "{{ make_ipv6_lab_network_delay | default(omit) }}" until: "{{ make_ipv6_lab_network_until | default(true) }}" register: "make_ipv6_lab_network_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make ipv6_lab_network" dry_run: "{{ make_ipv6_lab_network_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ipv6_lab_network_env|default({})), **(make_ipv6_lab_network_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_network_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab0000644000175000017500000000213215133657453033303 0ustar zuulzuul--- - name: Debug make_ipv6_lab_network_cleanup_env when: 
make_ipv6_lab_network_cleanup_env is defined ansible.builtin.debug: var: make_ipv6_lab_network_cleanup_env - name: Debug make_ipv6_lab_network_cleanup_params when: make_ipv6_lab_network_cleanup_params is defined ansible.builtin.debug: var: make_ipv6_lab_network_cleanup_params - name: Run ipv6_lab_network_cleanup retries: "{{ make_ipv6_lab_network_cleanup_retries | default(omit) }}" delay: "{{ make_ipv6_lab_network_cleanup_delay | default(omit) }}" until: "{{ make_ipv6_lab_network_cleanup_until | default(true) }}" register: "make_ipv6_lab_network_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make ipv6_lab_network_cleanup" dry_run: "{{ make_ipv6_lab_network_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ipv6_lab_network_cleanup_env|default({})), **(make_ipv6_lab_network_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_nat64_router.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab0000644000175000017500000000205515133657453033307 0ustar zuulzuul--- - name: Debug make_ipv6_lab_nat64_router_env when: make_ipv6_lab_nat64_router_env is defined ansible.builtin.debug: var: make_ipv6_lab_nat64_router_env - name: Debug make_ipv6_lab_nat64_router_params when: make_ipv6_lab_nat64_router_params is defined ansible.builtin.debug: var: make_ipv6_lab_nat64_router_params - name: Run ipv6_lab_nat64_router retries: "{{ make_ipv6_lab_nat64_router_retries | default(omit) }}" delay: "{{ make_ipv6_lab_nat64_router_delay | default(omit) }}" until: "{{ make_ipv6_lab_nat64_router_until | default(true) }}" register: "make_ipv6_lab_nat64_router_status" cifmw.general.ci_script: output_dir: 
"{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make ipv6_lab_nat64_router" dry_run: "{{ make_ipv6_lab_nat64_router_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ipv6_lab_nat64_router_env|default({})), **(make_ipv6_lab_nat64_router_params|default({}))) }}" ././@LongLink0000644000000000000000000000017600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_nat64_router_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab0000644000175000017500000000224515133657453033310 0ustar zuulzuul--- - name: Debug make_ipv6_lab_nat64_router_cleanup_env when: make_ipv6_lab_nat64_router_cleanup_env is defined ansible.builtin.debug: var: make_ipv6_lab_nat64_router_cleanup_env - name: Debug make_ipv6_lab_nat64_router_cleanup_params when: make_ipv6_lab_nat64_router_cleanup_params is defined ansible.builtin.debug: var: make_ipv6_lab_nat64_router_cleanup_params - name: Run ipv6_lab_nat64_router_cleanup retries: "{{ make_ipv6_lab_nat64_router_cleanup_retries | default(omit) }}" delay: "{{ make_ipv6_lab_nat64_router_cleanup_delay | default(omit) }}" until: "{{ make_ipv6_lab_nat64_router_cleanup_until | default(true) }}" register: "make_ipv6_lab_nat64_router_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make ipv6_lab_nat64_router_cleanup" dry_run: "{{ make_ipv6_lab_nat64_router_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ipv6_lab_nat64_router_cleanup_env|default({})), **(make_ipv6_lab_nat64_router_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_sno.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab0000644000175000017500000000164615133657453033314 0ustar zuulzuul--- - name: Debug make_ipv6_lab_sno_env when: make_ipv6_lab_sno_env is defined ansible.builtin.debug: var: make_ipv6_lab_sno_env - name: Debug make_ipv6_lab_sno_params when: make_ipv6_lab_sno_params is defined ansible.builtin.debug: var: make_ipv6_lab_sno_params - name: Run ipv6_lab_sno retries: "{{ make_ipv6_lab_sno_retries | default(omit) }}" delay: "{{ make_ipv6_lab_sno_delay | default(omit) }}" until: "{{ make_ipv6_lab_sno_until | default(true) }}" register: "make_ipv6_lab_sno_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make ipv6_lab_sno" dry_run: "{{ make_ipv6_lab_sno_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ipv6_lab_sno_env|default({})), **(make_ipv6_lab_sno_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_sno_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab0000644000175000017500000000203615133657453033306 0ustar zuulzuul--- - name: Debug make_ipv6_lab_sno_cleanup_env when: make_ipv6_lab_sno_cleanup_env is defined ansible.builtin.debug: var: make_ipv6_lab_sno_cleanup_env - name: Debug make_ipv6_lab_sno_cleanup_params when: make_ipv6_lab_sno_cleanup_params is defined ansible.builtin.debug: var: make_ipv6_lab_sno_cleanup_params - name: Run ipv6_lab_sno_cleanup retries: "{{ make_ipv6_lab_sno_cleanup_retries | default(omit) }}" delay: "{{ make_ipv6_lab_sno_cleanup_delay | default(omit) 
}}" until: "{{ make_ipv6_lab_sno_cleanup_until | default(true) }}" register: "make_ipv6_lab_sno_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make ipv6_lab_sno_cleanup" dry_run: "{{ make_ipv6_lab_sno_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ipv6_lab_sno_cleanup_env|default({})), **(make_ipv6_lab_sno_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab0000644000175000017500000000155215133657453033310 0ustar zuulzuul--- - name: Debug make_ipv6_lab_env when: make_ipv6_lab_env is defined ansible.builtin.debug: var: make_ipv6_lab_env - name: Debug make_ipv6_lab_params when: make_ipv6_lab_params is defined ansible.builtin.debug: var: make_ipv6_lab_params - name: Run ipv6_lab retries: "{{ make_ipv6_lab_retries | default(omit) }}" delay: "{{ make_ipv6_lab_delay | default(omit) }}" until: "{{ make_ipv6_lab_until | default(true) }}" register: "make_ipv6_lab_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make ipv6_lab" dry_run: "{{ make_ipv6_lab_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ipv6_lab_env|default({})), **(make_ipv6_lab_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab0000644000175000017500000000174215133657453033311 0ustar zuulzuul--- - name: Debug make_ipv6_lab_cleanup_env when: make_ipv6_lab_cleanup_env is defined ansible.builtin.debug: var: make_ipv6_lab_cleanup_env - name: Debug make_ipv6_lab_cleanup_params when: make_ipv6_lab_cleanup_params is defined ansible.builtin.debug: var: make_ipv6_lab_cleanup_params - name: Run ipv6_lab_cleanup retries: "{{ make_ipv6_lab_cleanup_retries | default(omit) }}" delay: "{{ make_ipv6_lab_cleanup_delay | default(omit) }}" until: "{{ make_ipv6_lab_cleanup_until | default(true) }}" register: "make_ipv6_lab_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make ipv6_lab_cleanup" dry_run: "{{ make_ipv6_lab_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ipv6_lab_cleanup_env|default({})), **(make_ipv6_lab_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_default_interface.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_d0000644000175000017500000000213215133657453033350 0ustar zuulzuul--- - name: Debug make_attach_default_interface_env when: make_attach_default_interface_env is defined ansible.builtin.debug: var: make_attach_default_interface_env - name: Debug make_attach_default_interface_params when: make_attach_default_interface_params is defined ansible.builtin.debug: var: make_attach_default_interface_params - name: Run attach_default_interface retries: "{{ 
make_attach_default_interface_retries | default(omit) }}" delay: "{{ make_attach_default_interface_delay | default(omit) }}" until: "{{ make_attach_default_interface_until | default(true) }}" register: "make_attach_default_interface_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make attach_default_interface" dry_run: "{{ make_attach_default_interface_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_attach_default_interface_env|default({})), **(make_attach_default_interface_params|default({}))) }}" ././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_default_interface_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_d0000644000175000017500000000232215133657453033351 0ustar zuulzuul--- - name: Debug make_attach_default_interface_cleanup_env when: make_attach_default_interface_cleanup_env is defined ansible.builtin.debug: var: make_attach_default_interface_cleanup_env - name: Debug make_attach_default_interface_cleanup_params when: make_attach_default_interface_cleanup_params is defined ansible.builtin.debug: var: make_attach_default_interface_cleanup_params - name: Run attach_default_interface_cleanup retries: "{{ make_attach_default_interface_cleanup_retries | default(omit) }}" delay: "{{ make_attach_default_interface_cleanup_delay | default(omit) }}" until: "{{ make_attach_default_interface_cleanup_until | default(true) }}" register: "make_attach_default_interface_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make 
attach_default_interface_cleanup" dry_run: "{{ make_attach_default_interface_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_attach_default_interface_cleanup_env|default({})), **(make_attach_default_interface_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_isolation_bridge.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_0000644000175000017500000000213215133657453033431 0ustar zuulzuul--- - name: Debug make_network_isolation_bridge_env when: make_network_isolation_bridge_env is defined ansible.builtin.debug: var: make_network_isolation_bridge_env - name: Debug make_network_isolation_bridge_params when: make_network_isolation_bridge_params is defined ansible.builtin.debug: var: make_network_isolation_bridge_params - name: Run network_isolation_bridge retries: "{{ make_network_isolation_bridge_retries | default(omit) }}" delay: "{{ make_network_isolation_bridge_delay | default(omit) }}" until: "{{ make_network_isolation_bridge_until | default(true) }}" register: "make_network_isolation_bridge_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make network_isolation_bridge" dry_run: "{{ make_network_isolation_bridge_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_network_isolation_bridge_env|default({})), **(make_network_isolation_bridge_params|default({}))) }}" ././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_isolation_bridge_cleanup.yml
---
- name: Debug make_network_isolation_bridge_cleanup_env
  when: make_network_isolation_bridge_cleanup_env is defined
  ansible.builtin.debug:
    var: make_network_isolation_bridge_cleanup_env

- name: Debug make_network_isolation_bridge_cleanup_params
  when: make_network_isolation_bridge_cleanup_params is defined
  ansible.builtin.debug:
    var: make_network_isolation_bridge_cleanup_params

- name: Run network_isolation_bridge_cleanup
  retries: "{{ make_network_isolation_bridge_cleanup_retries | default(omit) }}"
  delay: "{{ make_network_isolation_bridge_cleanup_delay | default(omit) }}"
  until: "{{ make_network_isolation_bridge_cleanup_until | default(true) }}"
  register: "make_network_isolation_bridge_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make network_isolation_bridge_cleanup"
    dry_run: "{{ make_network_isolation_bridge_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_network_isolation_bridge_cleanup_env | default({})), **(make_network_isolation_bridge_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_baremetal_compute.yml
---
- name: Debug make_edpm_baremetal_compute_env
  when: make_edpm_baremetal_compute_env is defined
  ansible.builtin.debug:
    var: make_edpm_baremetal_compute_env

- name: Debug make_edpm_baremetal_compute_params
  when: make_edpm_baremetal_compute_params is defined
  ansible.builtin.debug:
    var: make_edpm_baremetal_compute_params

- name: Run edpm_baremetal_compute
  retries: "{{ make_edpm_baremetal_compute_retries | default(omit) }}"
  delay: "{{ make_edpm_baremetal_compute_delay | default(omit) }}"
  until: "{{ make_edpm_baremetal_compute_until | default(true) }}"
  register: "make_edpm_baremetal_compute_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_baremetal_compute"
    dry_run: "{{ make_edpm_baremetal_compute_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_baremetal_compute_env | default({})), **(make_edpm_baremetal_compute_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute.yml
---
- name: Debug make_edpm_compute_env
  when: make_edpm_compute_env is defined
  ansible.builtin.debug:
    var: make_edpm_compute_env

- name: Debug make_edpm_compute_params
  when: make_edpm_compute_params is defined
  ansible.builtin.debug:
    var: make_edpm_compute_params

- name: Run edpm_compute
  retries: "{{ make_edpm_compute_retries | default(omit) }}"
  delay: "{{ make_edpm_compute_delay | default(omit) }}"
  until: "{{ make_edpm_compute_until | default(true) }}"
  register: "make_edpm_compute_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_compute"
    dry_run: "{{ make_edpm_compute_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_compute_env | default({})), **(make_edpm_compute_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_bootc.yml
---
- name: Debug make_edpm_compute_bootc_env
  when: make_edpm_compute_bootc_env is defined
  ansible.builtin.debug:
    var: make_edpm_compute_bootc_env

- name: Debug make_edpm_compute_bootc_params
  when: make_edpm_compute_bootc_params is defined
  ansible.builtin.debug:
    var: make_edpm_compute_bootc_params

- name: Run edpm_compute_bootc
  retries: "{{ make_edpm_compute_bootc_retries | default(omit) }}"
  delay: "{{ make_edpm_compute_bootc_delay | default(omit) }}"
  until: "{{ make_edpm_compute_bootc_until | default(true) }}"
  register: "make_edpm_compute_bootc_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_compute_bootc"
    dry_run: "{{ make_edpm_compute_bootc_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_compute_bootc_env | default({})), **(make_edpm_compute_bootc_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_ansible_runner.yml
---
- name: Debug make_edpm_ansible_runner_env
  when: make_edpm_ansible_runner_env is defined
  ansible.builtin.debug:
    var: make_edpm_ansible_runner_env

- name: Debug make_edpm_ansible_runner_params
  when: make_edpm_ansible_runner_params is defined
  ansible.builtin.debug:
    var: make_edpm_ansible_runner_params

- name: Run edpm_ansible_runner
  retries: "{{ make_edpm_ansible_runner_retries | default(omit) }}"
  delay: "{{ make_edpm_ansible_runner_delay | default(omit) }}"
  until: "{{ make_edpm_ansible_runner_until | default(true) }}"
  register: "make_edpm_ansible_runner_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_ansible_runner"
    dry_run: "{{ make_edpm_ansible_runner_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_ansible_runner_env | default({})), **(make_edpm_ansible_runner_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_computes_bgp.yml
---
- name: Debug make_edpm_computes_bgp_env
  when: make_edpm_computes_bgp_env is defined
  ansible.builtin.debug:
    var: make_edpm_computes_bgp_env

- name: Debug make_edpm_computes_bgp_params
  when: make_edpm_computes_bgp_params is defined
  ansible.builtin.debug:
    var: make_edpm_computes_bgp_params

- name: Run edpm_computes_bgp
  retries: "{{ make_edpm_computes_bgp_retries | default(omit) }}"
  delay: "{{ make_edpm_computes_bgp_delay | default(omit) }}"
  until: "{{ make_edpm_computes_bgp_until | default(true) }}"
  register: "make_edpm_computes_bgp_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_computes_bgp"
    dry_run: "{{ make_edpm_computes_bgp_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_computes_bgp_env | default({})), **(make_edpm_computes_bgp_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_repos.yml
---
- name: Debug make_edpm_compute_repos_env
  when: make_edpm_compute_repos_env is defined
  ansible.builtin.debug:
    var: make_edpm_compute_repos_env

- name: Debug make_edpm_compute_repos_params
  when: make_edpm_compute_repos_params is defined
  ansible.builtin.debug:
    var: make_edpm_compute_repos_params

- name: Run edpm_compute_repos
  retries: "{{ make_edpm_compute_repos_retries | default(omit) }}"
  delay: "{{ make_edpm_compute_repos_delay | default(omit) }}"
  until: "{{ make_edpm_compute_repos_until | default(true) }}"
  register: "make_edpm_compute_repos_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_compute_repos"
    dry_run: "{{ make_edpm_compute_repos_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_compute_repos_env | default({})), **(make_edpm_compute_repos_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_cleanup.yml
---
- name: Debug make_edpm_compute_cleanup_env
  when: make_edpm_compute_cleanup_env is defined
  ansible.builtin.debug:
    var: make_edpm_compute_cleanup_env

- name: Debug make_edpm_compute_cleanup_params
  when: make_edpm_compute_cleanup_params is defined
  ansible.builtin.debug:
    var: make_edpm_compute_cleanup_params

- name: Run edpm_compute_cleanup
  retries: "{{ make_edpm_compute_cleanup_retries | default(omit) }}"
  delay: "{{ make_edpm_compute_cleanup_delay | default(omit) }}"
  until: "{{ make_edpm_compute_cleanup_until | default(true) }}"
  register: "make_edpm_compute_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_compute_cleanup"
    dry_run: "{{ make_edpm_compute_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_compute_cleanup_env | default({})), **(make_edpm_compute_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_networker.yml
---
- name: Debug make_edpm_networker_env
  when: make_edpm_networker_env is defined
  ansible.builtin.debug:
    var: make_edpm_networker_env

- name: Debug make_edpm_networker_params
  when: make_edpm_networker_params is defined
  ansible.builtin.debug:
    var: make_edpm_networker_params

- name: Run edpm_networker
  retries: "{{ make_edpm_networker_retries | default(omit) }}"
  delay: "{{ make_edpm_networker_delay | default(omit) }}"
  until: "{{ make_edpm_networker_until | default(true) }}"
  register: "make_edpm_networker_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_networker"
    dry_run: "{{ make_edpm_networker_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_networker_env | default({})), **(make_edpm_networker_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_networker_cleanup.yml
---
- name: Debug make_edpm_networker_cleanup_env
  when: make_edpm_networker_cleanup_env is defined
  ansible.builtin.debug:
    var: make_edpm_networker_cleanup_env

- name: Debug make_edpm_networker_cleanup_params
  when: make_edpm_networker_cleanup_params is defined
  ansible.builtin.debug:
    var: make_edpm_networker_cleanup_params

- name: Run edpm_networker_cleanup
  retries: "{{ make_edpm_networker_cleanup_retries | default(omit) }}"
  delay: "{{ make_edpm_networker_cleanup_delay | default(omit) }}"
  until: "{{ make_edpm_networker_cleanup_until | default(true) }}"
  register: "make_edpm_networker_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_networker_cleanup"
    dry_run: "{{ make_edpm_networker_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_networker_cleanup_env | default({})), **(make_edpm_networker_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_instance.yml
---
- name: Debug make_edpm_deploy_instance_env
  when: make_edpm_deploy_instance_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_instance_env

- name: Debug make_edpm_deploy_instance_params
  when: make_edpm_deploy_instance_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_instance_params

- name: Run edpm_deploy_instance
  retries: "{{ make_edpm_deploy_instance_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_instance_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_instance_until | default(true) }}"
  register: "make_edpm_deploy_instance_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_deploy_instance"
    dry_run: "{{ make_edpm_deploy_instance_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_edpm_deploy_instance_env | default({})), **(make_edpm_deploy_instance_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_tripleo_deploy.yml
---
- name: Debug make_tripleo_deploy_env
  when: make_tripleo_deploy_env is defined
  ansible.builtin.debug:
    var: make_tripleo_deploy_env

- name: Debug make_tripleo_deploy_params
  when: make_tripleo_deploy_params is defined
  ansible.builtin.debug:
    var: make_tripleo_deploy_params

- name: Run tripleo_deploy
  retries: "{{ make_tripleo_deploy_retries | default(omit) }}"
  delay: "{{ make_tripleo_deploy_delay | default(omit) }}"
  until: "{{ make_tripleo_deploy_until | default(true) }}"
  register: "make_tripleo_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make tripleo_deploy"
    dry_run: "{{ make_tripleo_deploy_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_tripleo_deploy_env | default({})), **(make_tripleo_deploy_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_deploy.yml
---
- name: Debug make_standalone_deploy_env
  when: make_standalone_deploy_env is defined
  ansible.builtin.debug:
    var: make_standalone_deploy_env

- name: Debug make_standalone_deploy_params
  when: make_standalone_deploy_params is defined
  ansible.builtin.debug:
    var: make_standalone_deploy_params

- name: Run standalone_deploy
  retries: "{{ make_standalone_deploy_retries | default(omit) }}"
  delay: "{{ make_standalone_deploy_delay | default(omit) }}"
  until: "{{ make_standalone_deploy_until | default(true) }}"
  register: "make_standalone_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_deploy"
    dry_run: "{{ make_standalone_deploy_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_standalone_deploy_env | default({})), **(make_standalone_deploy_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_sync.yml
---
- name: Debug make_standalone_sync_env
  when: make_standalone_sync_env is defined
  ansible.builtin.debug:
    var: make_standalone_sync_env

- name: Debug make_standalone_sync_params
  when: make_standalone_sync_params is defined
  ansible.builtin.debug:
    var: make_standalone_sync_params

- name: Run standalone_sync
  retries: "{{ make_standalone_sync_retries | default(omit) }}"
  delay: "{{ make_standalone_sync_delay | default(omit) }}"
  until: "{{ make_standalone_sync_until | default(true) }}"
  register: "make_standalone_sync_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_sync"
    dry_run: "{{ make_standalone_sync_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_standalone_sync_env | default({})), **(make_standalone_sync_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone.yml
---
- name: Debug make_standalone_env
  when: make_standalone_env is defined
  ansible.builtin.debug:
    var: make_standalone_env

- name: Debug make_standalone_params
  when: make_standalone_params is defined
  ansible.builtin.debug:
    var: make_standalone_params

- name: Run standalone
  retries: "{{ make_standalone_retries | default(omit) }}"
  delay: "{{ make_standalone_delay | default(omit) }}"
  until: "{{ make_standalone_until | default(true) }}"
  register: "make_standalone_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone"
    dry_run: "{{ make_standalone_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_standalone_env | default({})), **(make_standalone_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_cleanup.yml
---
- name: Debug make_standalone_cleanup_env
  when: make_standalone_cleanup_env is defined
  ansible.builtin.debug:
    var: make_standalone_cleanup_env

- name: Debug make_standalone_cleanup_params
  when: make_standalone_cleanup_params is defined
  ansible.builtin.debug:
    var: make_standalone_cleanup_params

- name: Run standalone_cleanup
  retries: "{{ make_standalone_cleanup_retries | default(omit) }}"
  delay: "{{ make_standalone_cleanup_delay | default(omit) }}"
  until: "{{ make_standalone_cleanup_until | default(true) }}"
  register: "make_standalone_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_cleanup"
    dry_run: "{{ make_standalone_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_standalone_cleanup_env | default({})), **(make_standalone_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_snapshot.yml
---
- name: Debug make_standalone_snapshot_env
  when: make_standalone_snapshot_env is defined
  ansible.builtin.debug:
    var: make_standalone_snapshot_env

- name: Debug make_standalone_snapshot_params
  when: make_standalone_snapshot_params is defined
  ansible.builtin.debug:
    var: make_standalone_snapshot_params

- name: Run standalone_snapshot
  retries: "{{ make_standalone_snapshot_retries | default(omit) }}"
  delay: "{{ make_standalone_snapshot_delay | default(omit) }}"
  until: "{{ make_standalone_snapshot_until | default(true) }}"
  register: "make_standalone_snapshot_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_snapshot"
    dry_run: "{{ make_standalone_snapshot_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_standalone_snapshot_env | default({})), **(make_standalone_snapshot_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_revert.yml
---
- name: Debug make_standalone_revert_env
  when: make_standalone_revert_env is defined
  ansible.builtin.debug:
    var: make_standalone_revert_env

- name: Debug make_standalone_revert_params
  when: make_standalone_revert_params is defined
  ansible.builtin.debug:
    var: make_standalone_revert_params

- name: Run standalone_revert
  retries: "{{ make_standalone_revert_retries | default(omit) }}"
  delay: "{{ make_standalone_revert_delay | default(omit) }}"
  until: "{{ make_standalone_revert_until | default(true) }}"
  register: "make_standalone_revert_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_revert"
    dry_run: "{{ make_standalone_revert_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_standalone_revert_env | default({})), **(make_standalone_revert_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cifmw_prepare.yml
---
- name: Debug make_cifmw_prepare_env
  when: make_cifmw_prepare_env is defined
  ansible.builtin.debug:
    var: make_cifmw_prepare_env

- name: Debug make_cifmw_prepare_params
  when: make_cifmw_prepare_params is defined
  ansible.builtin.debug:
    var: make_cifmw_prepare_params

- name: Run cifmw_prepare
  retries: "{{ make_cifmw_prepare_retries | default(omit) }}"
  delay: "{{ make_cifmw_prepare_delay | default(omit) }}"
  until: "{{ make_cifmw_prepare_until | default(true) }}"
  register: "make_cifmw_prepare_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make cifmw_prepare"
    dry_run: "{{ make_cifmw_prepare_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_cifmw_prepare_env | default({})), **(make_cifmw_prepare_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cifmw_cleanup.yml
---
- name: Debug make_cifmw_cleanup_env
  when: make_cifmw_cleanup_env is defined
  ansible.builtin.debug:
    var: make_cifmw_cleanup_env

- name: Debug make_cifmw_cleanup_params
  when: make_cifmw_cleanup_params is defined
  ansible.builtin.debug:
    var: make_cifmw_cleanup_params

- name: Run cifmw_cleanup
  retries: "{{ make_cifmw_cleanup_retries | default(omit) }}"
  delay: "{{ make_cifmw_cleanup_delay | default(omit) }}"
  until: "{{ make_cifmw_cleanup_until | default(true) }}"
  register: "make_cifmw_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make cifmw_cleanup"
    dry_run: "{{ make_cifmw_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_cifmw_cleanup_env | default({})), **(make_cifmw_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_network.yml
---
- name: Debug make_bmaas_network_env
  when: make_bmaas_network_env is defined
  ansible.builtin.debug:
    var: make_bmaas_network_env

- name: Debug make_bmaas_network_params
  when: make_bmaas_network_params is defined
  ansible.builtin.debug:
    var: make_bmaas_network_params

- name: Run bmaas_network
  retries: "{{ make_bmaas_network_retries | default(omit) }}"
  delay: "{{ make_bmaas_network_delay | default(omit) }}"
  until: "{{ make_bmaas_network_until | default(true) }}"
  register: "make_bmaas_network_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_network"
    dry_run: "{{ make_bmaas_network_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_network_env | default({})), **(make_bmaas_network_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_network_cleanup.yml
---
- name: Debug make_bmaas_network_cleanup_env
  when: make_bmaas_network_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_network_cleanup_env

- name: Debug make_bmaas_network_cleanup_params
  when: make_bmaas_network_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_network_cleanup_params

- name: Run bmaas_network_cleanup
  retries: "{{ make_bmaas_network_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_network_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_network_cleanup_until | default(true) }}"
  register: "make_bmaas_network_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_network_cleanup"
    dry_run: "{{ make_bmaas_network_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_network_cleanup_env | default({})), **(make_bmaas_network_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_route_crc_and_crc_bmaas_networks.yml
---
- name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_env
  when: make_bmaas_route_crc_and_crc_bmaas_networks_env is defined
  ansible.builtin.debug:
    var: make_bmaas_route_crc_and_crc_bmaas_networks_env

- name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_params
  when: make_bmaas_route_crc_and_crc_bmaas_networks_params is defined
  ansible.builtin.debug:
    var: make_bmaas_route_crc_and_crc_bmaas_networks_params

- name: Run bmaas_route_crc_and_crc_bmaas_networks
  retries: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_retries | default(omit) }}"
  delay: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_delay | default(omit) }}"
  until: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_until | default(true) }}"
  register: "make_bmaas_route_crc_and_crc_bmaas_networks_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_route_crc_and_crc_bmaas_networks"
    dry_run: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_route_crc_and_crc_bmaas_networks_env | default({})), **(make_bmaas_route_crc_and_crc_bmaas_networks_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_route_crc_and_crc_bmaas_networks_cleanup.yml
---
- name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env
  when: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env

- name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params
  when: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params

- name: Run bmaas_route_crc_and_crc_bmaas_networks_cleanup
  retries: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_until | default(true) }}"
  register: "make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_route_crc_and_crc_bmaas_networks_cleanup"
    dry_run: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env | default({})), **(make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_metallb.yml
---
- name: Debug make_bmaas_metallb_env
  when: make_bmaas_metallb_env is defined
  ansible.builtin.debug:
    var: make_bmaas_metallb_env

- name: Debug make_bmaas_metallb_params
  when: make_bmaas_metallb_params is defined
  ansible.builtin.debug:
    var: make_bmaas_metallb_params

- name: Run bmaas_metallb
  retries: "{{ make_bmaas_metallb_retries | default(omit) }}"
  delay: "{{ make_bmaas_metallb_delay | default(omit) }}"
  until: "{{ make_bmaas_metallb_until | default(true) }}"
  register: "make_bmaas_metallb_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_metallb"
    dry_run: "{{ make_bmaas_metallb_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_metallb_env | default({})), **(make_bmaas_metallb_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_attach_network.yml
---
- name: Debug make_bmaas_crc_attach_network_env
  when: make_bmaas_crc_attach_network_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_attach_network_env

- name: Debug make_bmaas_crc_attach_network_params
  when: make_bmaas_crc_attach_network_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_attach_network_params

- name: Run bmaas_crc_attach_network
  retries: "{{ make_bmaas_crc_attach_network_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_attach_network_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_attach_network_until | default(true) }}"
  register: "make_bmaas_crc_attach_network_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_attach_network"
    dry_run: "{{ make_bmaas_crc_attach_network_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_crc_attach_network_env | default({})), **(make_bmaas_crc_attach_network_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_attach_network_cleanup.yml
---
- name: Debug make_bmaas_crc_attach_network_cleanup_env
  when: make_bmaas_crc_attach_network_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_attach_network_cleanup_env

- name: Debug make_bmaas_crc_attach_network_cleanup_params
  when: make_bmaas_crc_attach_network_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_attach_network_cleanup_params

- name: Run bmaas_crc_attach_network_cleanup
  retries: "{{ make_bmaas_crc_attach_network_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_attach_network_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_attach_network_cleanup_until | default(true) }}"
  register: "make_bmaas_crc_attach_network_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_attach_network_cleanup"
    dry_run: "{{ make_bmaas_crc_attach_network_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_crc_attach_network_cleanup_env | default({})), **(make_bmaas_crc_attach_network_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_baremetal_bridge.yml
---
- name: Debug make_bmaas_crc_baremetal_bridge_env
  when: make_bmaas_crc_baremetal_bridge_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_env

- name: Debug make_bmaas_crc_baremetal_bridge_params
  when: make_bmaas_crc_baremetal_bridge_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_params

- name: Run bmaas_crc_baremetal_bridge
  retries: "{{ make_bmaas_crc_baremetal_bridge_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_baremetal_bridge_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_baremetal_bridge_until | default(true) }}"
  register: "make_bmaas_crc_baremetal_bridge_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_baremetal_bridge"
    dry_run: "{{ make_bmaas_crc_baremetal_bridge_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_crc_baremetal_bridge_env | default({})), **(make_bmaas_crc_baremetal_bridge_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_baremetal_bridge_cleanup.yml
---
- name: Debug make_bmaas_crc_baremetal_bridge_cleanup_env
  when: make_bmaas_crc_baremetal_bridge_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_cleanup_env

- name: Debug make_bmaas_crc_baremetal_bridge_cleanup_params
  when: make_bmaas_crc_baremetal_bridge_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_cleanup_params

- name: Run bmaas_crc_baremetal_bridge_cleanup
  retries: "{{ make_bmaas_crc_baremetal_bridge_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_baremetal_bridge_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_baremetal_bridge_cleanup_until | default(true) }}"
  register: "make_bmaas_crc_baremetal_bridge_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_baremetal_bridge_cleanup"
    dry_run: "{{ make_bmaas_crc_baremetal_bridge_cleanup_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_crc_baremetal_bridge_cleanup_env | default({})), **(make_bmaas_crc_baremetal_bridge_cleanup_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_baremetal_net_nad.yml
---
- name: Debug make_bmaas_baremetal_net_nad_env
  when: make_bmaas_baremetal_net_nad_env is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_env

- name: Debug make_bmaas_baremetal_net_nad_params
  when: make_bmaas_baremetal_net_nad_params is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_params

- name: Run bmaas_baremetal_net_nad
  retries: "{{ make_bmaas_baremetal_net_nad_retries | default(omit) }}"
  delay: "{{ make_bmaas_baremetal_net_nad_delay | default(omit) }}"
  until: "{{ make_bmaas_baremetal_net_nad_until | default(true) }}"
  register: "make_bmaas_baremetal_net_nad_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir | default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_baremetal_net_nad"
    dry_run: "{{ make_bmaas_baremetal_net_nad_dryrun | default(false) | bool }}"
    extra_args: "{{ dict((make_bmaas_baremetal_net_nad_env | default({})), **(make_bmaas_baremetal_net_nad_params | default({}))) }}"

# File: home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_baremetal_net_nad_cleanup.yml
---
- name: Debug make_bmaas_baremetal_net_nad_cleanup_env
  when:
make_bmaas_baremetal_net_nad_cleanup_env is defined ansible.builtin.debug: var: make_bmaas_baremetal_net_nad_cleanup_env - name: Debug make_bmaas_baremetal_net_nad_cleanup_params when: make_bmaas_baremetal_net_nad_cleanup_params is defined ansible.builtin.debug: var: make_bmaas_baremetal_net_nad_cleanup_params - name: Run bmaas_baremetal_net_nad_cleanup retries: "{{ make_bmaas_baremetal_net_nad_cleanup_retries | default(omit) }}" delay: "{{ make_bmaas_baremetal_net_nad_cleanup_delay | default(omit) }}" until: "{{ make_bmaas_baremetal_net_nad_cleanup_until | default(true) }}" register: "make_bmaas_baremetal_net_nad_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_baremetal_net_nad_cleanup" dry_run: "{{ make_bmaas_baremetal_net_nad_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_baremetal_net_nad_cleanup_env|default({})), **(make_bmaas_baremetal_net_nad_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_metallb_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_me0000644000175000017500000000205515133657453033351 0ustar zuulzuul--- - name: Debug make_bmaas_metallb_cleanup_env when: make_bmaas_metallb_cleanup_env is defined ansible.builtin.debug: var: make_bmaas_metallb_cleanup_env - name: Debug make_bmaas_metallb_cleanup_params when: make_bmaas_metallb_cleanup_params is defined ansible.builtin.debug: var: make_bmaas_metallb_cleanup_params - name: Run bmaas_metallb_cleanup retries: "{{ make_bmaas_metallb_cleanup_retries | default(omit) }}" delay: "{{ make_bmaas_metallb_cleanup_delay | default(omit) }}" until: "{{ make_bmaas_metallb_cleanup_until | 
default(true) }}" register: "make_bmaas_metallb_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_metallb_cleanup" dry_run: "{{ make_bmaas_metallb_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_metallb_cleanup_env|default({})), **(make_bmaas_metallb_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_virtual_bms.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_vi0000644000175000017500000000176115133657453033371 0ustar zuulzuul--- - name: Debug make_bmaas_virtual_bms_env when: make_bmaas_virtual_bms_env is defined ansible.builtin.debug: var: make_bmaas_virtual_bms_env - name: Debug make_bmaas_virtual_bms_params when: make_bmaas_virtual_bms_params is defined ansible.builtin.debug: var: make_bmaas_virtual_bms_params - name: Run bmaas_virtual_bms retries: "{{ make_bmaas_virtual_bms_retries | default(omit) }}" delay: "{{ make_bmaas_virtual_bms_delay | default(omit) }}" until: "{{ make_bmaas_virtual_bms_until | default(true) }}" register: "make_bmaas_virtual_bms_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_virtual_bms" dry_run: "{{ make_bmaas_virtual_bms_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_virtual_bms_env|default({})), **(make_bmaas_virtual_bms_params|default({}))) }}" ././@LongLink0000644000000000000000000000017200000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_virtual_bms_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_vi0000644000175000017500000000215115133657453033363 0ustar zuulzuul--- - name: Debug make_bmaas_virtual_bms_cleanup_env when: make_bmaas_virtual_bms_cleanup_env is defined ansible.builtin.debug: var: make_bmaas_virtual_bms_cleanup_env - name: Debug make_bmaas_virtual_bms_cleanup_params when: make_bmaas_virtual_bms_cleanup_params is defined ansible.builtin.debug: var: make_bmaas_virtual_bms_cleanup_params - name: Run bmaas_virtual_bms_cleanup retries: "{{ make_bmaas_virtual_bms_cleanup_retries | default(omit) }}" delay: "{{ make_bmaas_virtual_bms_cleanup_delay | default(omit) }}" until: "{{ make_bmaas_virtual_bms_cleanup_until | default(true) }}" register: "make_bmaas_virtual_bms_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_virtual_bms_cleanup" dry_run: "{{ make_bmaas_virtual_bms_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_virtual_bms_cleanup_env|default({})), **(make_bmaas_virtual_bms_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_su0000644000175000017500000000203615133657453033376 0ustar zuulzuul--- - name: Debug make_bmaas_sushy_emulator_env when: make_bmaas_sushy_emulator_env is defined ansible.builtin.debug: var: make_bmaas_sushy_emulator_env - name: Debug make_bmaas_sushy_emulator_params when: make_bmaas_sushy_emulator_params is defined 
ansible.builtin.debug: var: make_bmaas_sushy_emulator_params - name: Run bmaas_sushy_emulator retries: "{{ make_bmaas_sushy_emulator_retries | default(omit) }}" delay: "{{ make_bmaas_sushy_emulator_delay | default(omit) }}" until: "{{ make_bmaas_sushy_emulator_until | default(true) }}" register: "make_bmaas_sushy_emulator_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_sushy_emulator" dry_run: "{{ make_bmaas_sushy_emulator_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_sushy_emulator_env|default({})), **(make_bmaas_sushy_emulator_params|default({}))) }}" ././@LongLink0000644000000000000000000000017500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_su0000644000175000017500000000222615133657453033377 0ustar zuulzuul--- - name: Debug make_bmaas_sushy_emulator_cleanup_env when: make_bmaas_sushy_emulator_cleanup_env is defined ansible.builtin.debug: var: make_bmaas_sushy_emulator_cleanup_env - name: Debug make_bmaas_sushy_emulator_cleanup_params when: make_bmaas_sushy_emulator_cleanup_params is defined ansible.builtin.debug: var: make_bmaas_sushy_emulator_cleanup_params - name: Run bmaas_sushy_emulator_cleanup retries: "{{ make_bmaas_sushy_emulator_cleanup_retries | default(omit) }}" delay: "{{ make_bmaas_sushy_emulator_cleanup_delay | default(omit) }}" until: "{{ make_bmaas_sushy_emulator_cleanup_until | default(true) }}" register: "make_bmaas_sushy_emulator_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" 
script: "make bmaas_sushy_emulator_cleanup" dry_run: "{{ make_bmaas_sushy_emulator_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_sushy_emulator_cleanup_env|default({})), **(make_bmaas_sushy_emulator_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000017200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator_wait.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_su0000644000175000017500000000215115133657453033374 0ustar zuulzuul--- - name: Debug make_bmaas_sushy_emulator_wait_env when: make_bmaas_sushy_emulator_wait_env is defined ansible.builtin.debug: var: make_bmaas_sushy_emulator_wait_env - name: Debug make_bmaas_sushy_emulator_wait_params when: make_bmaas_sushy_emulator_wait_params is defined ansible.builtin.debug: var: make_bmaas_sushy_emulator_wait_params - name: Run bmaas_sushy_emulator_wait retries: "{{ make_bmaas_sushy_emulator_wait_retries | default(omit) }}" delay: "{{ make_bmaas_sushy_emulator_wait_delay | default(omit) }}" until: "{{ make_bmaas_sushy_emulator_wait_until | default(true) }}" register: "make_bmaas_sushy_emulator_wait_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_sushy_emulator_wait" dry_run: "{{ make_bmaas_sushy_emulator_wait_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_sushy_emulator_wait_env|default({})), **(make_bmaas_sushy_emulator_wait_params|default({}))) }}" ././@LongLink0000644000000000000000000000017200000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_generate_nodes_yaml.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_ge0000644000175000017500000000215115133657453033340 0ustar zuulzuul--- - name: Debug make_bmaas_generate_nodes_yaml_env when: make_bmaas_generate_nodes_yaml_env is defined ansible.builtin.debug: var: make_bmaas_generate_nodes_yaml_env - name: Debug make_bmaas_generate_nodes_yaml_params when: make_bmaas_generate_nodes_yaml_params is defined ansible.builtin.debug: var: make_bmaas_generate_nodes_yaml_params - name: Run bmaas_generate_nodes_yaml retries: "{{ make_bmaas_generate_nodes_yaml_retries | default(omit) }}" delay: "{{ make_bmaas_generate_nodes_yaml_delay | default(omit) }}" until: "{{ make_bmaas_generate_nodes_yaml_until | default(true) }}" register: "make_bmaas_generate_nodes_yaml_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_generate_nodes_yaml" dry_run: "{{ make_bmaas_generate_nodes_yaml_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_generate_nodes_yaml_env|default({})), **(make_bmaas_generate_nodes_yaml_params|default({}))) }}" ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas.ym0000644000175000017500000000147515133657453033321 0ustar zuulzuul--- - name: Debug make_bmaas_env when: make_bmaas_env is defined ansible.builtin.debug: var: make_bmaas_env - name: Debug make_bmaas_params when: make_bmaas_params is defined ansible.builtin.debug: var: make_bmaas_params - name: Run bmaas retries: "{{ make_bmaas_retries | 
default(omit) }}" delay: "{{ make_bmaas_delay | default(omit) }}" until: "{{ make_bmaas_until | default(true) }}" register: "make_bmaas_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas" dry_run: "{{ make_bmaas_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_env|default({})), **(make_bmaas_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_cl0000644000175000017500000000166515133657453033354 0ustar zuulzuul--- - name: Debug make_bmaas_cleanup_env when: make_bmaas_cleanup_env is defined ansible.builtin.debug: var: make_bmaas_cleanup_env - name: Debug make_bmaas_cleanup_params when: make_bmaas_cleanup_params is defined ansible.builtin.debug: var: make_bmaas_cleanup_params - name: Run bmaas_cleanup retries: "{{ make_bmaas_cleanup_retries | default(omit) }}" delay: "{{ make_bmaas_cleanup_delay | default(omit) }}" until: "{{ make_bmaas_cleanup_until | default(true) }}" register: "make_bmaas_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_cleanup" dry_run: "{{ make_bmaas_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_cleanup_env|default({})), **(make_bmaas_cleanup_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci-env/0000755000175000017500000000000015133657745023544 5ustar 
zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/ci-env/networking-info.yml0000644000175000017500000000536615133657654027420 0ustar zuulzuulcrc_ci_bootstrap_networks_out: compute-0: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.100/24 mac: fa:16:3e:c1:c1:89 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.100/24 mac: 52:54:00:e4:85:d6 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.100/24 mac: 52:54:00:81:02:e8 mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.100/24 mac: 52:54:00:10:b6:c8 mtu: '1496' parent_iface: eth1 vlan: 22 compute-1: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.101/24 mac: fa:16:3e:62:06:49 mtu: '1500' internal-api: iface: eth1.20 ip: 172.17.0.101/24 mac: 52:54:00:1a:ac:e2 mtu: '1496' parent_iface: eth1 vlan: 20 storage: iface: eth1.21 ip: 172.18.0.101/24 mac: 52:54:00:8e:43:4e mtu: '1496' parent_iface: eth1 vlan: 21 tenant: iface: eth1.22 ip: 172.19.0.101/24 mac: 52:54:00:96:51:04 mtu: '1496' parent_iface: eth1 vlan: 22 controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b8:b9:28 mtu: '1500' crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:db:79:f1 mtu: '1500' internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:75:cf:2c mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:da:18:d2 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:b7:68:1f mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 home/zuul/zuul-output/logs/ci-framework-data/artifacts/installed-packages.yml0000644000175000017500000023013215133657670026637 
0ustar zuulzuulNetworkManager: - arch: x86_64 epoch: 1 name: NetworkManager release: 2.el9 source: rpm version: 1.54.3 NetworkManager-libnm: - arch: x86_64 epoch: 1 name: NetworkManager-libnm release: 2.el9 source: rpm version: 1.54.3 NetworkManager-team: - arch: x86_64 epoch: 1 name: NetworkManager-team release: 2.el9 source: rpm version: 1.54.3 NetworkManager-tui: - arch: x86_64 epoch: 1 name: NetworkManager-tui release: 2.el9 source: rpm version: 1.54.3 aardvark-dns: - arch: x86_64 epoch: 2 name: aardvark-dns release: 1.el9 source: rpm version: 1.17.0 abattis-cantarell-fonts: - arch: noarch epoch: null name: abattis-cantarell-fonts release: 4.el9 source: rpm version: '0.301' acl: - arch: x86_64 epoch: null name: acl release: 4.el9 source: rpm version: 2.3.1 adobe-source-code-pro-fonts: - arch: noarch epoch: null name: adobe-source-code-pro-fonts release: 12.el9.1 source: rpm version: 2.030.1.050 alternatives: - arch: x86_64 epoch: null name: alternatives release: 2.el9 source: rpm version: '1.24' annobin: - arch: x86_64 epoch: null name: annobin release: 1.el9 source: rpm version: '12.98' ansible-core: - arch: x86_64 epoch: 1 name: ansible-core release: 2.el9 source: rpm version: 2.14.18 apr: - arch: x86_64 epoch: null name: apr release: 12.el9 source: rpm version: 1.7.0 apr-util: - arch: x86_64 epoch: null name: apr-util release: 23.el9 source: rpm version: 1.6.1 apr-util-bdb: - arch: x86_64 epoch: null name: apr-util-bdb release: 23.el9 source: rpm version: 1.6.1 apr-util-openssl: - arch: x86_64 epoch: null name: apr-util-openssl release: 23.el9 source: rpm version: 1.6.1 attr: - arch: x86_64 epoch: null name: attr release: 3.el9 source: rpm version: 2.5.1 audit: - arch: x86_64 epoch: null name: audit release: 8.el9 source: rpm version: 3.1.5 audit-libs: - arch: x86_64 epoch: null name: audit-libs release: 8.el9 source: rpm version: 3.1.5 authselect: - arch: x86_64 epoch: null name: authselect release: 3.el9 source: rpm version: 1.2.6 authselect-compat: - 
arch: x86_64 epoch: null name: authselect-compat release: 3.el9 source: rpm version: 1.2.6 authselect-libs: - arch: x86_64 epoch: null name: authselect-libs release: 3.el9 source: rpm version: 1.2.6 basesystem: - arch: noarch epoch: null name: basesystem release: 13.el9 source: rpm version: '11' bash: - arch: x86_64 epoch: null name: bash release: 9.el9 source: rpm version: 5.1.8 bash-completion: - arch: noarch epoch: 1 name: bash-completion release: 5.el9 source: rpm version: '2.11' binutils: - arch: x86_64 epoch: null name: binutils release: 69.el9 source: rpm version: 2.35.2 binutils-gold: - arch: x86_64 epoch: null name: binutils-gold release: 69.el9 source: rpm version: 2.35.2 buildah: - arch: x86_64 epoch: 2 name: buildah release: 1.el9 source: rpm version: 1.41.3 bzip2: - arch: x86_64 epoch: null name: bzip2 release: 10.el9 source: rpm version: 1.0.8 bzip2-libs: - arch: x86_64 epoch: null name: bzip2-libs release: 10.el9 source: rpm version: 1.0.8 c-ares: - arch: x86_64 epoch: null name: c-ares release: 2.el9 source: rpm version: 1.19.1 ca-certificates: - arch: noarch epoch: null name: ca-certificates release: 91.el9 source: rpm version: 2025.2.80_v9.0.305 centos-gpg-keys: - arch: noarch epoch: null name: centos-gpg-keys release: 34.el9 source: rpm version: '9.0' centos-logos: - arch: x86_64 epoch: null name: centos-logos release: 1.el9 source: rpm version: '90.9' centos-stream-release: - arch: noarch epoch: null name: centos-stream-release release: 34.el9 source: rpm version: '9.0' centos-stream-repos: - arch: noarch epoch: null name: centos-stream-repos release: 34.el9 source: rpm version: '9.0' checkpolicy: - arch: x86_64 epoch: null name: checkpolicy release: 1.el9 source: rpm version: '3.6' chrony: - arch: x86_64 epoch: null name: chrony release: 1.el9 source: rpm version: '4.8' cloud-init: - arch: noarch epoch: null name: cloud-init release: 8.el9 source: rpm version: '24.4' cloud-utils-growpart: - arch: x86_64 epoch: null name: cloud-utils-growpart 
release: 1.el9 source: rpm version: '0.33' cmake-filesystem: - arch: x86_64 epoch: null name: cmake-filesystem release: 3.el9 source: rpm version: 3.31.8 cockpit-bridge: - arch: noarch epoch: null name: cockpit-bridge release: 1.el9 source: rpm version: '348' cockpit-system: - arch: noarch epoch: null name: cockpit-system release: 1.el9 source: rpm version: '348' cockpit-ws: - arch: x86_64 epoch: null name: cockpit-ws release: 1.el9 source: rpm version: '348' cockpit-ws-selinux: - arch: x86_64 epoch: null name: cockpit-ws-selinux release: 1.el9 source: rpm version: '348' conmon: - arch: x86_64 epoch: 3 name: conmon release: 1.el9 source: rpm version: 2.1.13 container-selinux: - arch: noarch epoch: 4 name: container-selinux release: 1.el9 source: rpm version: 2.244.0 containers-common: - arch: x86_64 epoch: 4 name: containers-common release: 134.el9 source: rpm version: '1' containers-common-extra: - arch: x86_64 epoch: 4 name: containers-common-extra release: 134.el9 source: rpm version: '1' coreutils: - arch: x86_64 epoch: null name: coreutils release: 39.el9 source: rpm version: '8.32' coreutils-common: - arch: x86_64 epoch: null name: coreutils-common release: 39.el9 source: rpm version: '8.32' cpio: - arch: x86_64 epoch: null name: cpio release: 16.el9 source: rpm version: '2.13' cpp: - arch: x86_64 epoch: null name: cpp release: 14.el9 source: rpm version: 11.5.0 cracklib: - arch: x86_64 epoch: null name: cracklib release: 28.el9 source: rpm version: 2.9.6 cracklib-dicts: - arch: x86_64 epoch: null name: cracklib-dicts release: 28.el9 source: rpm version: 2.9.6 createrepo_c: - arch: x86_64 epoch: null name: createrepo_c release: 4.el9 source: rpm version: 0.20.1 createrepo_c-libs: - arch: x86_64 epoch: null name: createrepo_c-libs release: 4.el9 source: rpm version: 0.20.1 criu: - arch: x86_64 epoch: null name: criu release: 3.el9 source: rpm version: '3.19' criu-libs: - arch: x86_64 epoch: null name: criu-libs release: 3.el9 source: rpm version: '3.19' 
cronie: - arch: x86_64 epoch: null name: cronie release: 14.el9 source: rpm version: 1.5.7 cronie-anacron: - arch: x86_64 epoch: null name: cronie-anacron release: 14.el9 source: rpm version: 1.5.7 crontabs: - arch: noarch epoch: null name: crontabs release: 26.20190603git.el9 source: rpm version: '1.11' crun: - arch: x86_64 epoch: null name: crun release: 1.el9 source: rpm version: '1.24' crypto-policies: - arch: noarch epoch: null name: crypto-policies release: 1.gite9c4db2.el9 source: rpm version: '20251126' crypto-policies-scripts: - arch: noarch epoch: null name: crypto-policies-scripts release: 1.gite9c4db2.el9 source: rpm version: '20251126' cryptsetup-libs: - arch: x86_64 epoch: null name: cryptsetup-libs release: 2.el9 source: rpm version: 2.8.1 curl: - arch: x86_64 epoch: null name: curl release: 38.el9 source: rpm version: 7.76.1 cyrus-sasl: - arch: x86_64 epoch: null name: cyrus-sasl release: 21.el9 source: rpm version: 2.1.27 cyrus-sasl-devel: - arch: x86_64 epoch: null name: cyrus-sasl-devel release: 21.el9 source: rpm version: 2.1.27 cyrus-sasl-gssapi: - arch: x86_64 epoch: null name: cyrus-sasl-gssapi release: 21.el9 source: rpm version: 2.1.27 cyrus-sasl-lib: - arch: x86_64 epoch: null name: cyrus-sasl-lib release: 21.el9 source: rpm version: 2.1.27 dbus: - arch: x86_64 epoch: 1 name: dbus release: 8.el9 source: rpm version: 1.12.20 dbus-broker: - arch: x86_64 epoch: null name: dbus-broker release: 7.el9 source: rpm version: '28' dbus-common: - arch: noarch epoch: 1 name: dbus-common release: 8.el9 source: rpm version: 1.12.20 dbus-libs: - arch: x86_64 epoch: 1 name: dbus-libs release: 8.el9 source: rpm version: 1.12.20 dbus-tools: - arch: x86_64 epoch: 1 name: dbus-tools release: 8.el9 source: rpm version: 1.12.20 debugedit: - arch: x86_64 epoch: null name: debugedit release: 11.el9 source: rpm version: '5.0' dejavu-sans-fonts: - arch: noarch epoch: null name: dejavu-sans-fonts release: 18.el9 source: rpm version: '2.37' desktop-file-utils: - 
arch: x86_64 epoch: null name: desktop-file-utils release: 6.el9 source: rpm version: '0.26' device-mapper: - arch: x86_64 epoch: 9 name: device-mapper release: 2.el9 source: rpm version: 1.02.206 device-mapper-libs: - arch: x86_64 epoch: 9 name: device-mapper-libs release: 2.el9 source: rpm version: 1.02.206 dhcp-client: - arch: x86_64 epoch: 12 name: dhcp-client release: 19.b1.el9 source: rpm version: 4.4.2 dhcp-common: - arch: noarch epoch: 12 name: dhcp-common release: 19.b1.el9 source: rpm version: 4.4.2 diffutils: - arch: x86_64 epoch: null name: diffutils release: 12.el9 source: rpm version: '3.7' dnf: - arch: noarch epoch: null name: dnf release: 31.el9 source: rpm version: 4.14.0 dnf-data: - arch: noarch epoch: null name: dnf-data release: 31.el9 source: rpm version: 4.14.0 dnf-plugins-core: - arch: noarch epoch: null name: dnf-plugins-core release: 25.el9 source: rpm version: 4.3.0 dracut: - arch: x86_64 epoch: null name: dracut release: 102.git20250818.el9 source: rpm version: '057' dracut-config-generic: - arch: x86_64 epoch: null name: dracut-config-generic release: 102.git20250818.el9 source: rpm version: '057' dracut-network: - arch: x86_64 epoch: null name: dracut-network release: 102.git20250818.el9 source: rpm version: '057' dracut-squash: - arch: x86_64 epoch: null name: dracut-squash release: 102.git20250818.el9 source: rpm version: '057' dwz: - arch: x86_64 epoch: null name: dwz release: 1.el9 source: rpm version: '0.16' e2fsprogs: - arch: x86_64 epoch: null name: e2fsprogs release: 8.el9 source: rpm version: 1.46.5 e2fsprogs-libs: - arch: x86_64 epoch: null name: e2fsprogs-libs release: 8.el9 source: rpm version: 1.46.5 ed: - arch: x86_64 epoch: null name: ed release: 12.el9 source: rpm version: 1.14.2 efi-srpm-macros: - arch: noarch epoch: null name: efi-srpm-macros release: 4.el9 source: rpm version: '6' elfutils: - arch: x86_64 epoch: null name: elfutils release: 1.el9 source: rpm version: '0.194' elfutils-debuginfod-client: - arch: x86_64 
epoch: null name: elfutils-debuginfod-client release: 1.el9 source: rpm version: '0.194' elfutils-default-yama-scope: - arch: noarch epoch: null name: elfutils-default-yama-scope release: 1.el9 source: rpm version: '0.194' elfutils-libelf: - arch: x86_64 epoch: null name: elfutils-libelf release: 1.el9 source: rpm version: '0.194' elfutils-libs: - arch: x86_64 epoch: null name: elfutils-libs release: 1.el9 source: rpm version: '0.194' emacs-filesystem: - arch: noarch epoch: 1 name: emacs-filesystem release: 18.el9 source: rpm version: '27.2' enchant: - arch: x86_64 epoch: 1 name: enchant release: 30.el9 source: rpm version: 1.6.0 ethtool: - arch: x86_64 epoch: 2 name: ethtool release: 2.el9 source: rpm version: '6.15' expat: - arch: x86_64 epoch: null name: expat release: 6.el9 source: rpm version: 2.5.0 expect: - arch: x86_64 epoch: null name: expect release: 16.el9 source: rpm version: 5.45.4 file: - arch: x86_64 epoch: null name: file release: 16.el9 source: rpm version: '5.39' file-libs: - arch: x86_64 epoch: null name: file-libs release: 16.el9 source: rpm version: '5.39' filesystem: - arch: x86_64 epoch: null name: filesystem release: 5.el9 source: rpm version: '3.16' findutils: - arch: x86_64 epoch: 1 name: findutils release: 7.el9 source: rpm version: 4.8.0 fonts-filesystem: - arch: noarch epoch: 1 name: fonts-filesystem release: 7.el9.1 source: rpm version: 2.0.5 fonts-srpm-macros: - arch: noarch epoch: 1 name: fonts-srpm-macros release: 7.el9.1 source: rpm version: 2.0.5 fuse-common: - arch: x86_64 epoch: null name: fuse-common release: 9.el9 source: rpm version: 3.10.2 fuse-libs: - arch: x86_64 epoch: null name: fuse-libs release: 17.el9 source: rpm version: 2.9.9 fuse-overlayfs: - arch: x86_64 epoch: null name: fuse-overlayfs release: 1.el9 source: rpm version: '1.16' fuse3: - arch: x86_64 epoch: null name: fuse3 release: 9.el9 source: rpm version: 3.10.2 fuse3-libs: - arch: x86_64 epoch: null name: fuse3-libs release: 9.el9 source: rpm version: 3.10.2 
gawk: - arch: x86_64 epoch: null name: gawk release: 6.el9 source: rpm version: 5.1.0 gawk-all-langpacks: - arch: x86_64 epoch: null name: gawk-all-langpacks release: 6.el9 source: rpm version: 5.1.0 gcc: - arch: x86_64 epoch: null name: gcc release: 14.el9 source: rpm version: 11.5.0 gcc-c++: - arch: x86_64 epoch: null name: gcc-c++ release: 14.el9 source: rpm version: 11.5.0 gcc-plugin-annobin: - arch: x86_64 epoch: null name: gcc-plugin-annobin release: 14.el9 source: rpm version: 11.5.0 gdb-minimal: - arch: x86_64 epoch: null name: gdb-minimal release: 2.el9 source: rpm version: '16.3' gdbm-libs: - arch: x86_64 epoch: 1 name: gdbm-libs release: 1.el9 source: rpm version: '1.23' gdisk: - arch: x86_64 epoch: null name: gdisk release: 5.el9 source: rpm version: 1.0.7 gdk-pixbuf2: - arch: x86_64 epoch: null name: gdk-pixbuf2 release: 6.el9 source: rpm version: 2.42.6 geolite2-city: - arch: noarch epoch: null name: geolite2-city release: 6.el9 source: rpm version: '20191217' geolite2-country: - arch: noarch epoch: null name: geolite2-country release: 6.el9 source: rpm version: '20191217' gettext: - arch: x86_64 epoch: null name: gettext release: 8.el9 source: rpm version: '0.21' gettext-libs: - arch: x86_64 epoch: null name: gettext-libs release: 8.el9 source: rpm version: '0.21' ghc-srpm-macros: - arch: noarch epoch: null name: ghc-srpm-macros release: 6.el9 source: rpm version: 1.5.0 git: - arch: x86_64 epoch: null name: git release: 1.el9 source: rpm version: 2.47.3 git-core: - arch: x86_64 epoch: null name: git-core release: 1.el9 source: rpm version: 2.47.3 git-core-doc: - arch: noarch epoch: null name: git-core-doc release: 1.el9 source: rpm version: 2.47.3 glib-networking: - arch: x86_64 epoch: null name: glib-networking release: 3.el9 source: rpm version: 2.68.3 glib2: - arch: x86_64 epoch: null name: glib2 release: 18.el9 source: rpm version: 2.68.4 glibc: - arch: x86_64 epoch: null name: glibc release: 245.el9 source: rpm version: '2.34' glibc-common: - 
arch: x86_64 epoch: null name: glibc-common release: 245.el9 source: rpm version: '2.34' glibc-devel: - arch: x86_64 epoch: null name: glibc-devel release: 245.el9 source: rpm version: '2.34' glibc-gconv-extra: - arch: x86_64 epoch: null name: glibc-gconv-extra release: 245.el9 source: rpm version: '2.34' glibc-headers: - arch: x86_64 epoch: null name: glibc-headers release: 245.el9 source: rpm version: '2.34' glibc-langpack-en: - arch: x86_64 epoch: null name: glibc-langpack-en release: 245.el9 source: rpm version: '2.34' gmp: - arch: x86_64 epoch: 1 name: gmp release: 13.el9 source: rpm version: 6.2.0 gnupg2: - arch: x86_64 epoch: null name: gnupg2 release: 5.el9 source: rpm version: 2.3.3 gnutls: - arch: x86_64 epoch: null name: gnutls release: 2.el9 source: rpm version: 3.8.10 go-srpm-macros: - arch: noarch epoch: null name: go-srpm-macros release: 1.el9 source: rpm version: 3.8.1 gobject-introspection: - arch: x86_64 epoch: null name: gobject-introspection release: 11.el9 source: rpm version: 1.68.0 gpg-pubkey: - arch: null epoch: null name: gpg-pubkey release: 5ccc5b19 source: rpm version: 8483c65d gpgme: - arch: x86_64 epoch: null name: gpgme release: 6.el9 source: rpm version: 1.15.1 grep: - arch: x86_64 epoch: null name: grep release: 5.el9 source: rpm version: '3.6' groff-base: - arch: x86_64 epoch: null name: groff-base release: 10.el9 source: rpm version: 1.22.4 grub2-common: - arch: noarch epoch: 1 name: grub2-common release: 120.el9 source: rpm version: '2.06' grub2-pc: - arch: x86_64 epoch: 1 name: grub2-pc release: 120.el9 source: rpm version: '2.06' grub2-pc-modules: - arch: noarch epoch: 1 name: grub2-pc-modules release: 120.el9 source: rpm version: '2.06' grub2-tools: - arch: x86_64 epoch: 1 name: grub2-tools release: 120.el9 source: rpm version: '2.06' grub2-tools-minimal: - arch: x86_64 epoch: 1 name: grub2-tools-minimal release: 120.el9 source: rpm version: '2.06' grubby: - arch: x86_64 epoch: null name: grubby release: 69.el9 source: rpm 
version: '8.40' gsettings-desktop-schemas: - arch: x86_64 epoch: null name: gsettings-desktop-schemas release: 8.el9 source: rpm version: '40.0' gssproxy: - arch: x86_64 epoch: null name: gssproxy release: 7.el9 source: rpm version: 0.8.4 gzip: - arch: x86_64 epoch: null name: gzip release: 1.el9 source: rpm version: '1.12' hostname: - arch: x86_64 epoch: null name: hostname release: 6.el9 source: rpm version: '3.23' httpd-tools: - arch: x86_64 epoch: null name: httpd-tools release: 10.el9 source: rpm version: 2.4.62 hunspell: - arch: x86_64 epoch: null name: hunspell release: 11.el9 source: rpm version: 1.7.0 hunspell-en-GB: - arch: noarch epoch: null name: hunspell-en-GB release: 20.el9 source: rpm version: 0.20140811.1 hunspell-en-US: - arch: noarch epoch: null name: hunspell-en-US release: 20.el9 source: rpm version: 0.20140811.1 hunspell-filesystem: - arch: x86_64 epoch: null name: hunspell-filesystem release: 11.el9 source: rpm version: 1.7.0 hwdata: - arch: noarch epoch: null name: hwdata release: 9.20.el9 source: rpm version: '0.348' ima-evm-utils: - arch: x86_64 epoch: null name: ima-evm-utils release: 2.el9 source: rpm version: 1.6.2 info: - arch: x86_64 epoch: null name: info release: 15.el9 source: rpm version: '6.7' inih: - arch: x86_64 epoch: null name: inih release: 6.el9 source: rpm version: '49' initscripts-rename-device: - arch: x86_64 epoch: null name: initscripts-rename-device release: 4.el9 source: rpm version: 10.11.8 initscripts-service: - arch: noarch epoch: null name: initscripts-service release: 4.el9 source: rpm version: 10.11.8 ipcalc: - arch: x86_64 epoch: null name: ipcalc release: 5.el9 source: rpm version: 1.0.0 iproute: - arch: x86_64 epoch: null name: iproute release: 1.el9 source: rpm version: 6.17.0 iproute-tc: - arch: x86_64 epoch: null name: iproute-tc release: 1.el9 source: rpm version: 6.17.0 iptables-libs: - arch: x86_64 epoch: null name: iptables-libs release: 11.el9 source: rpm version: 1.8.10 iptables-nft: - arch: x86_64 
epoch: null name: iptables-nft release: 11.el9 source: rpm version: 1.8.10 iptables-nft-services: - arch: noarch epoch: null name: iptables-nft-services release: 11.el9 source: rpm version: 1.8.10 iputils: - arch: x86_64 epoch: null name: iputils release: 15.el9 source: rpm version: '20210202' irqbalance: - arch: x86_64 epoch: 2 name: irqbalance release: 5.el9 source: rpm version: 1.9.4 jansson: - arch: x86_64 epoch: null name: jansson release: 1.el9 source: rpm version: '2.14' jq: - arch: x86_64 epoch: null name: jq release: 19.el9 source: rpm version: '1.6' json-c: - arch: x86_64 epoch: null name: json-c release: 11.el9 source: rpm version: '0.14' json-glib: - arch: x86_64 epoch: null name: json-glib release: 1.el9 source: rpm version: 1.6.6 kbd: - arch: x86_64 epoch: null name: kbd release: 11.el9 source: rpm version: 2.4.0 kbd-legacy: - arch: noarch epoch: null name: kbd-legacy release: 11.el9 source: rpm version: 2.4.0 kbd-misc: - arch: noarch epoch: null name: kbd-misc release: 11.el9 source: rpm version: 2.4.0 kernel: - arch: x86_64 epoch: null name: kernel release: 661.el9 source: rpm version: 5.14.0 kernel-core: - arch: x86_64 epoch: null name: kernel-core release: 661.el9 source: rpm version: 5.14.0 kernel-headers: - arch: x86_64 epoch: null name: kernel-headers release: 661.el9 source: rpm version: 5.14.0 kernel-modules: - arch: x86_64 epoch: null name: kernel-modules release: 661.el9 source: rpm version: 5.14.0 kernel-modules-core: - arch: x86_64 epoch: null name: kernel-modules-core release: 661.el9 source: rpm version: 5.14.0 kernel-srpm-macros: - arch: noarch epoch: null name: kernel-srpm-macros release: 14.el9 source: rpm version: '1.0' kernel-tools: - arch: x86_64 epoch: null name: kernel-tools release: 661.el9 source: rpm version: 5.14.0 kernel-tools-libs: - arch: x86_64 epoch: null name: kernel-tools-libs release: 661.el9 source: rpm version: 5.14.0 kexec-tools: - arch: x86_64 epoch: null name: kexec-tools release: 14.el9 source: rpm version: 
2.0.29 keyutils: - arch: x86_64 epoch: null name: keyutils release: 1.el9 source: rpm version: 1.6.3 keyutils-libs: - arch: x86_64 epoch: null name: keyutils-libs release: 1.el9 source: rpm version: 1.6.3 kmod: - arch: x86_64 epoch: null name: kmod release: 11.el9 source: rpm version: '28' kmod-libs: - arch: x86_64 epoch: null name: kmod-libs release: 11.el9 source: rpm version: '28' kpartx: - arch: x86_64 epoch: null name: kpartx release: 42.el9 source: rpm version: 0.8.7 krb5-libs: - arch: x86_64 epoch: null name: krb5-libs release: 8.el9 source: rpm version: 1.21.1 langpacks-core-en_GB: - arch: noarch epoch: null name: langpacks-core-en_GB release: 16.el9 source: rpm version: '3.0' langpacks-core-font-en: - arch: noarch epoch: null name: langpacks-core-font-en release: 16.el9 source: rpm version: '3.0' langpacks-en_GB: - arch: noarch epoch: null name: langpacks-en_GB release: 16.el9 source: rpm version: '3.0' less: - arch: x86_64 epoch: null name: less release: 6.el9 source: rpm version: '590' libacl: - arch: x86_64 epoch: null name: libacl release: 4.el9 source: rpm version: 2.3.1 libappstream-glib: - arch: x86_64 epoch: null name: libappstream-glib release: 5.el9 source: rpm version: 0.7.18 libarchive: - arch: x86_64 epoch: null name: libarchive release: 6.el9 source: rpm version: 3.5.3 libassuan: - arch: x86_64 epoch: null name: libassuan release: 3.el9 source: rpm version: 2.5.5 libattr: - arch: x86_64 epoch: null name: libattr release: 3.el9 source: rpm version: 2.5.1 libbasicobjects: - arch: x86_64 epoch: null name: libbasicobjects release: 53.el9 source: rpm version: 0.1.1 libblkid: - arch: x86_64 epoch: null name: libblkid release: 21.el9 source: rpm version: 2.37.4 libbpf: - arch: x86_64 epoch: 2 name: libbpf release: 3.el9 source: rpm version: 1.5.0 libbrotli: - arch: x86_64 epoch: null name: libbrotli release: 7.el9 source: rpm version: 1.0.9 libburn: - arch: x86_64 epoch: null name: libburn release: 5.el9 source: rpm version: 1.5.4 libcap: - arch: 
x86_64 epoch: null name: libcap release: 10.el9 source: rpm version: '2.48' libcap-ng: - arch: x86_64 epoch: null name: libcap-ng release: 7.el9 source: rpm version: 0.8.2 libcbor: - arch: x86_64 epoch: null name: libcbor release: 5.el9 source: rpm version: 0.7.0 libcollection: - arch: x86_64 epoch: null name: libcollection release: 53.el9 source: rpm version: 0.7.0 libcom_err: - arch: x86_64 epoch: null name: libcom_err release: 8.el9 source: rpm version: 1.46.5 libcomps: - arch: x86_64 epoch: null name: libcomps release: 1.el9 source: rpm version: 0.1.18 libcurl: - arch: x86_64 epoch: null name: libcurl release: 38.el9 source: rpm version: 7.76.1 libdaemon: - arch: x86_64 epoch: null name: libdaemon release: 23.el9 source: rpm version: '0.14' libdb: - arch: x86_64 epoch: null name: libdb release: 57.el9 source: rpm version: 5.3.28 libdhash: - arch: x86_64 epoch: null name: libdhash release: 53.el9 source: rpm version: 0.5.0 libdnf: - arch: x86_64 epoch: null name: libdnf release: 16.el9 source: rpm version: 0.69.0 libeconf: - arch: x86_64 epoch: null name: libeconf release: 5.el9 source: rpm version: 0.4.1 libedit: - arch: x86_64 epoch: null name: libedit release: 38.20210216cvs.el9 source: rpm version: '3.1' libestr: - arch: x86_64 epoch: null name: libestr release: 4.el9 source: rpm version: 0.1.11 libev: - arch: x86_64 epoch: null name: libev release: 6.el9 source: rpm version: '4.33' libevent: - arch: x86_64 epoch: null name: libevent release: 8.el9 source: rpm version: 2.1.12 libfastjson: - arch: x86_64 epoch: null name: libfastjson release: 5.el9 source: rpm version: 0.99.9 libfdisk: - arch: x86_64 epoch: null name: libfdisk release: 21.el9 source: rpm version: 2.37.4 libffi: - arch: x86_64 epoch: null name: libffi release: 8.el9 source: rpm version: 3.4.2 libffi-devel: - arch: x86_64 epoch: null name: libffi-devel release: 8.el9 source: rpm version: 3.4.2 libfido2: - arch: x86_64 epoch: null name: libfido2 release: 2.el9 source: rpm version: 1.13.0 libgcc: 
- arch: x86_64 epoch: null name: libgcc release: 14.el9 source: rpm version: 11.5.0 libgcrypt: - arch: x86_64 epoch: null name: libgcrypt release: 11.el9 source: rpm version: 1.10.0 libgomp: - arch: x86_64 epoch: null name: libgomp release: 14.el9 source: rpm version: 11.5.0 libgpg-error: - arch: x86_64 epoch: null name: libgpg-error release: 5.el9 source: rpm version: '1.42' libgpg-error-devel: - arch: x86_64 epoch: null name: libgpg-error-devel release: 5.el9 source: rpm version: '1.42' libibverbs: - arch: x86_64 epoch: null name: libibverbs release: 2.el9 source: rpm version: '57.0' libicu: - arch: x86_64 epoch: null name: libicu release: 10.el9 source: rpm version: '67.1' libidn2: - arch: x86_64 epoch: null name: libidn2 release: 7.el9 source: rpm version: 2.3.0 libini_config: - arch: x86_64 epoch: null name: libini_config release: 53.el9 source: rpm version: 1.3.1 libisoburn: - arch: x86_64 epoch: null name: libisoburn release: 5.el9 source: rpm version: 1.5.4 libisofs: - arch: x86_64 epoch: null name: libisofs release: 4.el9 source: rpm version: 1.5.4 libjpeg-turbo: - arch: x86_64 epoch: null name: libjpeg-turbo release: 7.el9 source: rpm version: 2.0.90 libkcapi: - arch: x86_64 epoch: null name: libkcapi release: 2.el9 source: rpm version: 1.4.0 libkcapi-hmaccalc: - arch: x86_64 epoch: null name: libkcapi-hmaccalc release: 2.el9 source: rpm version: 1.4.0 libksba: - arch: x86_64 epoch: null name: libksba release: 7.el9 source: rpm version: 1.5.1 libldb: - arch: x86_64 epoch: 0 name: libldb release: 2.el9 source: rpm version: 4.23.4 libmaxminddb: - arch: x86_64 epoch: null name: libmaxminddb release: 4.el9 source: rpm version: 1.5.2 libmnl: - arch: x86_64 epoch: null name: libmnl release: 16.el9 source: rpm version: 1.0.4 libmodulemd: - arch: x86_64 epoch: null name: libmodulemd release: 2.el9 source: rpm version: 2.13.0 libmount: - arch: x86_64 epoch: null name: libmount release: 21.el9 source: rpm version: 2.37.4 libmpc: - arch: x86_64 epoch: null name: 
libmpc release: 4.el9 source: rpm version: 1.2.1 libndp: - arch: x86_64 epoch: null name: libndp release: 1.el9 source: rpm version: '1.9' libnet: - arch: x86_64 epoch: null name: libnet release: 7.el9 source: rpm version: '1.2' libnetfilter_conntrack: - arch: x86_64 epoch: null name: libnetfilter_conntrack release: 1.el9 source: rpm version: 1.0.9 libnfnetlink: - arch: x86_64 epoch: null name: libnfnetlink release: 23.el9 source: rpm version: 1.0.1 libnfsidmap: - arch: x86_64 epoch: 1 name: libnfsidmap release: 41.el9 source: rpm version: 2.5.4 libnftnl: - arch: x86_64 epoch: null name: libnftnl release: 4.el9 source: rpm version: 1.2.6 libnghttp2: - arch: x86_64 epoch: null name: libnghttp2 release: 6.el9 source: rpm version: 1.43.0 libnl3: - arch: x86_64 epoch: null name: libnl3 release: 1.el9 source: rpm version: 3.11.0 libnl3-cli: - arch: x86_64 epoch: null name: libnl3-cli release: 1.el9 source: rpm version: 3.11.0 libosinfo: - arch: x86_64 epoch: null name: libosinfo release: 1.el9 source: rpm version: 1.10.0 libpath_utils: - arch: x86_64 epoch: null name: libpath_utils release: 53.el9 source: rpm version: 0.2.1 libpcap: - arch: x86_64 epoch: 14 name: libpcap release: 4.el9 source: rpm version: 1.10.0 libpipeline: - arch: x86_64 epoch: null name: libpipeline release: 4.el9 source: rpm version: 1.5.3 libpkgconf: - arch: x86_64 epoch: null name: libpkgconf release: 10.el9 source: rpm version: 1.7.3 libpng: - arch: x86_64 epoch: 2 name: libpng release: 12.el9 source: rpm version: 1.6.37 libproxy: - arch: x86_64 epoch: null name: libproxy release: 35.el9 source: rpm version: 0.4.15 libproxy-webkitgtk4: - arch: x86_64 epoch: null name: libproxy-webkitgtk4 release: 35.el9 source: rpm version: 0.4.15 libpsl: - arch: x86_64 epoch: null name: libpsl release: 5.el9 source: rpm version: 0.21.1 libpwquality: - arch: x86_64 epoch: null name: libpwquality release: 8.el9 source: rpm version: 1.4.4 libref_array: - arch: x86_64 epoch: null name: libref_array release: 53.el9 
source: rpm version: 0.1.5 librepo: - arch: x86_64 epoch: null name: librepo release: 1.el9 source: rpm version: 1.19.0 libreport-filesystem: - arch: noarch epoch: null name: libreport-filesystem release: 6.el9 source: rpm version: 2.15.2 libseccomp: - arch: x86_64 epoch: null name: libseccomp release: 2.el9 source: rpm version: 2.5.2 libselinux: - arch: x86_64 epoch: null name: libselinux release: 3.el9 source: rpm version: '3.6' libselinux-utils: - arch: x86_64 epoch: null name: libselinux-utils release: 3.el9 source: rpm version: '3.6' libsemanage: - arch: x86_64 epoch: null name: libsemanage release: 5.el9 source: rpm version: '3.6' libsepol: - arch: x86_64 epoch: null name: libsepol release: 3.el9 source: rpm version: '3.6' libsigsegv: - arch: x86_64 epoch: null name: libsigsegv release: 4.el9 source: rpm version: '2.13' libslirp: - arch: x86_64 epoch: null name: libslirp release: 8.el9 source: rpm version: 4.4.0 libsmartcols: - arch: x86_64 epoch: null name: libsmartcols release: 21.el9 source: rpm version: 2.37.4 libsolv: - arch: x86_64 epoch: null name: libsolv release: 3.el9 source: rpm version: 0.7.24 libsoup: - arch: x86_64 epoch: null name: libsoup release: 10.el9 source: rpm version: 2.72.0 libss: - arch: x86_64 epoch: null name: libss release: 8.el9 source: rpm version: 1.46.5 libssh: - arch: x86_64 epoch: null name: libssh release: 17.el9 source: rpm version: 0.10.4 libssh-config: - arch: noarch epoch: null name: libssh-config release: 17.el9 source: rpm version: 0.10.4 libsss_certmap: - arch: x86_64 epoch: null name: libsss_certmap release: 5.el9 source: rpm version: 2.9.7 libsss_idmap: - arch: x86_64 epoch: null name: libsss_idmap release: 5.el9 source: rpm version: 2.9.7 libsss_nss_idmap: - arch: x86_64 epoch: null name: libsss_nss_idmap release: 5.el9 source: rpm version: 2.9.7 libsss_sudo: - arch: x86_64 epoch: null name: libsss_sudo release: 5.el9 source: rpm version: 2.9.7 libstdc++: - arch: x86_64 epoch: null name: libstdc++ release: 14.el9 
source: rpm version: 11.5.0 libstdc++-devel: - arch: x86_64 epoch: null name: libstdc++-devel release: 14.el9 source: rpm version: 11.5.0 libstemmer: - arch: x86_64 epoch: null name: libstemmer release: 18.585svn.el9 source: rpm version: '0' libsysfs: - arch: x86_64 epoch: null name: libsysfs release: 11.el9 source: rpm version: 2.1.1 libtalloc: - arch: x86_64 epoch: null name: libtalloc release: 1.el9 source: rpm version: 2.4.3 libtasn1: - arch: x86_64 epoch: null name: libtasn1 release: 9.el9 source: rpm version: 4.16.0 libtdb: - arch: x86_64 epoch: null name: libtdb release: 1.el9 source: rpm version: 1.4.14 libteam: - arch: x86_64 epoch: null name: libteam release: 16.el9 source: rpm version: '1.31' libtevent: - arch: x86_64 epoch: null name: libtevent release: 1.el9 source: rpm version: 0.17.1 libtirpc: - arch: x86_64 epoch: null name: libtirpc release: 9.el9 source: rpm version: 1.3.3 libtool-ltdl: - arch: x86_64 epoch: null name: libtool-ltdl release: 46.el9 source: rpm version: 2.4.6 libunistring: - arch: x86_64 epoch: null name: libunistring release: 15.el9 source: rpm version: 0.9.10 liburing: - arch: x86_64 epoch: null name: liburing release: 1.el9 source: rpm version: '2.12' libuser: - arch: x86_64 epoch: null name: libuser release: 17.el9 source: rpm version: '0.63' libutempter: - arch: x86_64 epoch: null name: libutempter release: 6.el9 source: rpm version: 1.2.1 libuuid: - arch: x86_64 epoch: null name: libuuid release: 21.el9 source: rpm version: 2.37.4 libverto: - arch: x86_64 epoch: null name: libverto release: 3.el9 source: rpm version: 0.3.2 libverto-libev: - arch: x86_64 epoch: null name: libverto-libev release: 3.el9 source: rpm version: 0.3.2 libvirt-client: - arch: x86_64 epoch: null name: libvirt-client release: 2.el9 source: rpm version: 11.10.0 libvirt-libs: - arch: x86_64 epoch: null name: libvirt-libs release: 2.el9 source: rpm version: 11.10.0 libxcrypt: - arch: x86_64 epoch: null name: libxcrypt release: 3.el9 source: rpm version: 
4.4.18 libxcrypt-compat: - arch: x86_64 epoch: null name: libxcrypt-compat release: 3.el9 source: rpm version: 4.4.18 libxcrypt-devel: - arch: x86_64 epoch: null name: libxcrypt-devel release: 3.el9 source: rpm version: 4.4.18 libxml2: - arch: x86_64 epoch: null name: libxml2 release: 14.el9 source: rpm version: 2.9.13 libxml2-devel: - arch: x86_64 epoch: null name: libxml2-devel release: 14.el9 source: rpm version: 2.9.13 libxslt: - arch: x86_64 epoch: null name: libxslt release: 12.el9 source: rpm version: 1.1.34 libxslt-devel: - arch: x86_64 epoch: null name: libxslt-devel release: 12.el9 source: rpm version: 1.1.34 libyaml: - arch: x86_64 epoch: null name: libyaml release: 7.el9 source: rpm version: 0.2.5 libzstd: - arch: x86_64 epoch: null name: libzstd release: 1.el9 source: rpm version: 1.5.5 llvm-filesystem: - arch: x86_64 epoch: null name: llvm-filesystem release: 1.el9 source: rpm version: 21.1.7 llvm-libs: - arch: x86_64 epoch: null name: llvm-libs release: 1.el9 source: rpm version: 21.1.7 lmdb-libs: - arch: x86_64 epoch: null name: lmdb-libs release: 3.el9 source: rpm version: 0.9.29 logrotate: - arch: x86_64 epoch: null name: logrotate release: 12.el9 source: rpm version: 3.18.0 lshw: - arch: x86_64 epoch: null name: lshw release: 4.el9 source: rpm version: B.02.20 lsscsi: - arch: x86_64 epoch: null name: lsscsi release: 6.el9 source: rpm version: '0.32' lua-libs: - arch: x86_64 epoch: null name: lua-libs release: 4.el9 source: rpm version: 5.4.4 lua-srpm-macros: - arch: noarch epoch: null name: lua-srpm-macros release: 6.el9 source: rpm version: '1' lz4-libs: - arch: x86_64 epoch: null name: lz4-libs release: 5.el9 source: rpm version: 1.9.3 lzo: - arch: x86_64 epoch: null name: lzo release: 7.el9 source: rpm version: '2.10' make: - arch: x86_64 epoch: 1 name: make release: 8.el9 source: rpm version: '4.3' man-db: - arch: x86_64 epoch: null name: man-db release: 9.el9 source: rpm version: 2.9.3 microcode_ctl: - arch: noarch epoch: 4 name: 
microcode_ctl release: 1.el9 source: rpm version: '20251111' mpfr: - arch: x86_64 epoch: null name: mpfr release: 8.el9 source: rpm version: 4.1.0 ncurses: - arch: x86_64 epoch: null name: ncurses release: 12.20210508.el9 source: rpm version: '6.2' ncurses-base: - arch: noarch epoch: null name: ncurses-base release: 12.20210508.el9 source: rpm version: '6.2' ncurses-c++-libs: - arch: x86_64 epoch: null name: ncurses-c++-libs release: 12.20210508.el9 source: rpm version: '6.2' ncurses-devel: - arch: x86_64 epoch: null name: ncurses-devel release: 12.20210508.el9 source: rpm version: '6.2' ncurses-libs: - arch: x86_64 epoch: null name: ncurses-libs release: 12.20210508.el9 source: rpm version: '6.2' netavark: - arch: x86_64 epoch: 2 name: netavark release: 1.el9 source: rpm version: 1.16.0 nettle: - arch: x86_64 epoch: null name: nettle release: 1.el9 source: rpm version: 3.10.1 newt: - arch: x86_64 epoch: null name: newt release: 11.el9 source: rpm version: 0.52.21 nfs-utils: - arch: x86_64 epoch: 1 name: nfs-utils release: 41.el9 source: rpm version: 2.5.4 nftables: - arch: x86_64 epoch: 1 name: nftables release: 6.el9 source: rpm version: 1.0.9 npth: - arch: x86_64 epoch: null name: npth release: 8.el9 source: rpm version: '1.6' numactl-libs: - arch: x86_64 epoch: null name: numactl-libs release: 3.el9 source: rpm version: 2.0.19 ocaml-srpm-macros: - arch: noarch epoch: null name: ocaml-srpm-macros release: 6.el9 source: rpm version: '6' oddjob: - arch: x86_64 epoch: null name: oddjob release: 7.el9 source: rpm version: 0.34.7 oddjob-mkhomedir: - arch: x86_64 epoch: null name: oddjob-mkhomedir release: 7.el9 source: rpm version: 0.34.7 oniguruma: - arch: x86_64 epoch: null name: oniguruma release: 1.el9.6 source: rpm version: 6.9.6 openblas-srpm-macros: - arch: noarch epoch: null name: openblas-srpm-macros release: 11.el9 source: rpm version: '2' openldap: - arch: x86_64 epoch: null name: openldap release: 4.el9 source: rpm version: 2.6.8 openldap-devel: - arch: 
x86_64 epoch: null name: openldap-devel release: 4.el9 source: rpm version: 2.6.8 openssh: - arch: x86_64 epoch: null name: openssh release: 3.el9 source: rpm version: 9.9p1 openssh-clients: - arch: x86_64 epoch: null name: openssh-clients release: 3.el9 source: rpm version: 9.9p1 openssh-server: - arch: x86_64 epoch: null name: openssh-server release: 3.el9 source: rpm version: 9.9p1 openssl: - arch: x86_64 epoch: 1 name: openssl release: 6.el9 source: rpm version: 3.5.1 openssl-devel: - arch: x86_64 epoch: 1 name: openssl-devel release: 6.el9 source: rpm version: 3.5.1 openssl-fips-provider: - arch: x86_64 epoch: 1 name: openssl-fips-provider release: 6.el9 source: rpm version: 3.5.1 openssl-libs: - arch: x86_64 epoch: 1 name: openssl-libs release: 6.el9 source: rpm version: 3.5.1 os-prober: - arch: x86_64 epoch: null name: os-prober release: 12.el9 source: rpm version: '1.77' osinfo-db: - arch: noarch epoch: null name: osinfo-db release: 1.el9 source: rpm version: '20250606' osinfo-db-tools: - arch: x86_64 epoch: null name: osinfo-db-tools release: 1.el9 source: rpm version: 1.10.0 p11-kit: - arch: x86_64 epoch: null name: p11-kit release: 1.el9 source: rpm version: 0.25.10 p11-kit-trust: - arch: x86_64 epoch: null name: p11-kit-trust release: 1.el9 source: rpm version: 0.25.10 pam: - arch: x86_64 epoch: null name: pam release: 28.el9 source: rpm version: 1.5.1 parted: - arch: x86_64 epoch: null name: parted release: 3.el9 source: rpm version: '3.5' passt: - arch: x86_64 epoch: null name: passt release: 2.el9 source: rpm version: 0^20251210.gd04c480 passt-selinux: - arch: noarch epoch: null name: passt-selinux release: 2.el9 source: rpm version: 0^20251210.gd04c480 passwd: - arch: x86_64 epoch: null name: passwd release: 12.el9 source: rpm version: '0.80' patch: - arch: x86_64 epoch: null name: patch release: 16.el9 source: rpm version: 2.7.6 pciutils-libs: - arch: x86_64 epoch: null name: pciutils-libs release: 7.el9 source: rpm version: 3.7.0 pcre: - arch: 
x86_64 epoch: null name: pcre release: 4.el9 source: rpm version: '8.44' pcre2: - arch: x86_64 epoch: null name: pcre2 release: 6.el9 source: rpm version: '10.40' pcre2-syntax: - arch: noarch epoch: null name: pcre2-syntax release: 6.el9 source: rpm version: '10.40' perl-AutoLoader: - arch: noarch epoch: 0 name: perl-AutoLoader release: 483.el9 source: rpm version: '5.74' perl-B: - arch: x86_64 epoch: 0 name: perl-B release: 483.el9 source: rpm version: '1.80' perl-Carp: - arch: noarch epoch: null name: perl-Carp release: 460.el9 source: rpm version: '1.50' perl-Class-Struct: - arch: noarch epoch: 0 name: perl-Class-Struct release: 483.el9 source: rpm version: '0.66' perl-Data-Dumper: - arch: x86_64 epoch: null name: perl-Data-Dumper release: 462.el9 source: rpm version: '2.174' perl-Digest: - arch: noarch epoch: null name: perl-Digest release: 4.el9 source: rpm version: '1.19' perl-Digest-MD5: - arch: x86_64 epoch: null name: perl-Digest-MD5 release: 4.el9 source: rpm version: '2.58' perl-DynaLoader: - arch: x86_64 epoch: 0 name: perl-DynaLoader release: 483.el9 source: rpm version: '1.47' perl-Encode: - arch: x86_64 epoch: 4 name: perl-Encode release: 462.el9 source: rpm version: '3.08' perl-Errno: - arch: x86_64 epoch: 0 name: perl-Errno release: 483.el9 source: rpm version: '1.30' perl-Error: - arch: noarch epoch: 1 name: perl-Error release: 7.el9 source: rpm version: '0.17029' perl-Exporter: - arch: noarch epoch: null name: perl-Exporter release: 461.el9 source: rpm version: '5.74' perl-Fcntl: - arch: x86_64 epoch: 0 name: perl-Fcntl release: 483.el9 source: rpm version: '1.13' perl-File-Basename: - arch: noarch epoch: 0 name: perl-File-Basename release: 483.el9 source: rpm version: '2.85' perl-File-Find: - arch: noarch epoch: 0 name: perl-File-Find release: 483.el9 source: rpm version: '1.37' perl-File-Path: - arch: noarch epoch: null name: perl-File-Path release: 4.el9 source: rpm version: '2.18' perl-File-Temp: - arch: noarch epoch: 1 name: perl-File-Temp 
release: 4.el9 source: rpm version: 0.231.100 perl-File-stat: - arch: noarch epoch: 0 name: perl-File-stat release: 483.el9 source: rpm version: '1.09' perl-FileHandle: - arch: noarch epoch: 0 name: perl-FileHandle release: 483.el9 source: rpm version: '2.03' perl-Getopt-Long: - arch: noarch epoch: 1 name: perl-Getopt-Long release: 4.el9 source: rpm version: '2.52' perl-Getopt-Std: - arch: noarch epoch: 0 name: perl-Getopt-Std release: 483.el9 source: rpm version: '1.12' perl-Git: - arch: noarch epoch: null name: perl-Git release: 1.el9 source: rpm version: 2.47.3 perl-HTTP-Tiny: - arch: noarch epoch: null name: perl-HTTP-Tiny release: 462.el9 source: rpm version: '0.076' perl-IO: - arch: x86_64 epoch: 0 name: perl-IO release: 483.el9 source: rpm version: '1.43' perl-IO-Socket-IP: - arch: noarch epoch: null name: perl-IO-Socket-IP release: 5.el9 source: rpm version: '0.41' perl-IO-Socket-SSL: - arch: noarch epoch: null name: perl-IO-Socket-SSL release: 2.el9 source: rpm version: '2.073' perl-IPC-Open3: - arch: noarch epoch: 0 name: perl-IPC-Open3 release: 483.el9 source: rpm version: '1.21' perl-MIME-Base64: - arch: x86_64 epoch: null name: perl-MIME-Base64 release: 4.el9 source: rpm version: '3.16' perl-Mozilla-CA: - arch: noarch epoch: null name: perl-Mozilla-CA release: 6.el9 source: rpm version: '20200520' perl-NDBM_File: - arch: x86_64 epoch: 0 name: perl-NDBM_File release: 483.el9 source: rpm version: '1.15' perl-Net-SSLeay: - arch: x86_64 epoch: null name: perl-Net-SSLeay release: 3.el9 source: rpm version: '1.94' perl-POSIX: - arch: x86_64 epoch: 0 name: perl-POSIX release: 483.el9 source: rpm version: '1.94' perl-PathTools: - arch: x86_64 epoch: null name: perl-PathTools release: 461.el9 source: rpm version: '3.78' perl-Pod-Escapes: - arch: noarch epoch: 1 name: perl-Pod-Escapes release: 460.el9 source: rpm version: '1.07' perl-Pod-Perldoc: - arch: noarch epoch: null name: perl-Pod-Perldoc release: 461.el9 source: rpm version: 3.28.01 perl-Pod-Simple: - 
arch: noarch epoch: 1 name: perl-Pod-Simple release: 4.el9 source: rpm version: '3.42' perl-Pod-Usage: - arch: noarch epoch: 4 name: perl-Pod-Usage release: 4.el9 source: rpm version: '2.01' perl-Scalar-List-Utils: - arch: x86_64 epoch: 4 name: perl-Scalar-List-Utils release: 462.el9 source: rpm version: '1.56' perl-SelectSaver: - arch: noarch epoch: 0 name: perl-SelectSaver release: 483.el9 source: rpm version: '1.02' perl-Socket: - arch: x86_64 epoch: 4 name: perl-Socket release: 4.el9 source: rpm version: '2.031' perl-Storable: - arch: x86_64 epoch: 1 name: perl-Storable release: 460.el9 source: rpm version: '3.21' perl-Symbol: - arch: noarch epoch: 0 name: perl-Symbol release: 483.el9 source: rpm version: '1.08' perl-Term-ANSIColor: - arch: noarch epoch: null name: perl-Term-ANSIColor release: 461.el9 source: rpm version: '5.01' perl-Term-Cap: - arch: noarch epoch: null name: perl-Term-Cap release: 460.el9 source: rpm version: '1.17' perl-TermReadKey: - arch: x86_64 epoch: null name: perl-TermReadKey release: 11.el9 source: rpm version: '2.38' perl-Text-ParseWords: - arch: noarch epoch: null name: perl-Text-ParseWords release: 460.el9 source: rpm version: '3.30' perl-Text-Tabs+Wrap: - arch: noarch epoch: null name: perl-Text-Tabs+Wrap release: 460.el9 source: rpm version: '2013.0523' perl-Time-Local: - arch: noarch epoch: 2 name: perl-Time-Local release: 7.el9 source: rpm version: '1.300' perl-URI: - arch: noarch epoch: null name: perl-URI release: 3.el9 source: rpm version: '5.09' perl-base: - arch: noarch epoch: 0 name: perl-base release: 483.el9 source: rpm version: '2.27' perl-constant: - arch: noarch epoch: null name: perl-constant release: 461.el9 source: rpm version: '1.33' perl-if: - arch: noarch epoch: 0 name: perl-if release: 483.el9 source: rpm version: 0.60.800 perl-interpreter: - arch: x86_64 epoch: 4 name: perl-interpreter release: 483.el9 source: rpm version: 5.32.1 perl-lib: - arch: x86_64 epoch: 0 name: perl-lib release: 483.el9 source: rpm 
version: '0.65' perl-libnet: - arch: noarch epoch: null name: perl-libnet release: 4.el9 source: rpm version: '3.13' perl-libs: - arch: x86_64 epoch: 4 name: perl-libs release: 483.el9 source: rpm version: 5.32.1 perl-mro: - arch: x86_64 epoch: 0 name: perl-mro release: 483.el9 source: rpm version: '1.23' perl-overload: - arch: noarch epoch: 0 name: perl-overload release: 483.el9 source: rpm version: '1.31' perl-overloading: - arch: noarch epoch: 0 name: perl-overloading release: 483.el9 source: rpm version: '0.02' perl-parent: - arch: noarch epoch: 1 name: perl-parent release: 460.el9 source: rpm version: '0.238' perl-podlators: - arch: noarch epoch: 1 name: perl-podlators release: 460.el9 source: rpm version: '4.14' perl-srpm-macros: - arch: noarch epoch: null name: perl-srpm-macros release: 41.el9 source: rpm version: '1' perl-subs: - arch: noarch epoch: 0 name: perl-subs release: 483.el9 source: rpm version: '1.03' perl-vars: - arch: noarch epoch: 0 name: perl-vars release: 483.el9 source: rpm version: '1.05' pigz: - arch: x86_64 epoch: null name: pigz release: 4.el9 source: rpm version: '2.5' pkgconf: - arch: x86_64 epoch: null name: pkgconf release: 10.el9 source: rpm version: 1.7.3 pkgconf-m4: - arch: noarch epoch: null name: pkgconf-m4 release: 10.el9 source: rpm version: 1.7.3 pkgconf-pkg-config: - arch: x86_64 epoch: null name: pkgconf-pkg-config release: 10.el9 source: rpm version: 1.7.3 podman: - arch: x86_64 epoch: 6 name: podman release: 2.el9 source: rpm version: 5.6.0 policycoreutils: - arch: x86_64 epoch: null name: policycoreutils release: 4.el9 source: rpm version: '3.6' policycoreutils-python-utils: - arch: noarch epoch: null name: policycoreutils-python-utils release: 4.el9 source: rpm version: '3.6' polkit: - arch: x86_64 epoch: null name: polkit release: 14.el9 source: rpm version: '0.117' polkit-libs: - arch: x86_64 epoch: null name: polkit-libs release: 14.el9 source: rpm version: '0.117' polkit-pkla-compat: - arch: x86_64 epoch: null name: 
polkit-pkla-compat release: 21.el9 source: rpm version: '0.1' popt: - arch: x86_64 epoch: null name: popt release: 8.el9 source: rpm version: '1.18' prefixdevname: - arch: x86_64 epoch: null name: prefixdevname release: 8.el9 source: rpm version: 0.1.0 procps-ng: - arch: x86_64 epoch: null name: procps-ng release: 14.el9 source: rpm version: 3.3.17 protobuf-c: - arch: x86_64 epoch: null name: protobuf-c release: 13.el9 source: rpm version: 1.3.3 psmisc: - arch: x86_64 epoch: null name: psmisc release: 3.el9 source: rpm version: '23.4' publicsuffix-list-dafsa: - arch: noarch epoch: null name: publicsuffix-list-dafsa release: 3.el9 source: rpm version: '20210518' pyproject-srpm-macros: - arch: noarch epoch: null name: pyproject-srpm-macros release: 1.el9 source: rpm version: 1.18.5 python-rpm-macros: - arch: noarch epoch: null name: python-rpm-macros release: 54.el9 source: rpm version: '3.9' python-srpm-macros: - arch: noarch epoch: null name: python-srpm-macros release: 54.el9 source: rpm version: '3.9' python-unversioned-command: - arch: noarch epoch: null name: python-unversioned-command release: 3.el9 source: rpm version: 3.9.25 python3: - arch: x86_64 epoch: null name: python3 release: 3.el9 source: rpm version: 3.9.25 python3-argcomplete: - arch: noarch epoch: null name: python3-argcomplete release: 5.el9 source: rpm version: 1.12.0 python3-attrs: - arch: noarch epoch: null name: python3-attrs release: 7.el9 source: rpm version: 20.3.0 python3-audit: - arch: x86_64 epoch: null name: python3-audit release: 8.el9 source: rpm version: 3.1.5 python3-babel: - arch: noarch epoch: null name: python3-babel release: 2.el9 source: rpm version: 2.9.1 python3-cffi: - arch: x86_64 epoch: null name: python3-cffi release: 5.el9 source: rpm version: 1.14.5 python3-chardet: - arch: noarch epoch: null name: python3-chardet release: 5.el9 source: rpm version: 4.0.0 python3-configobj: - arch: noarch epoch: null name: python3-configobj release: 25.el9 source: rpm version: 5.0.6 
python3-cryptography: - arch: x86_64 epoch: null name: python3-cryptography release: 5.el9 source: rpm version: 36.0.1 python3-dasbus: - arch: noarch epoch: null name: python3-dasbus release: 1.el9 source: rpm version: '1.7' python3-dateutil: - arch: noarch epoch: 1 name: python3-dateutil release: 1.el9 source: rpm version: 2.9.0.post0 python3-dbus: - arch: x86_64 epoch: null name: python3-dbus release: 2.el9 source: rpm version: 1.2.18 python3-devel: - arch: x86_64 epoch: null name: python3-devel release: 3.el9 source: rpm version: 3.9.25 python3-distro: - arch: noarch epoch: null name: python3-distro release: 7.el9 source: rpm version: 1.5.0 python3-dnf: - arch: noarch epoch: null name: python3-dnf release: 31.el9 source: rpm version: 4.14.0 python3-dnf-plugins-core: - arch: noarch epoch: null name: python3-dnf-plugins-core release: 25.el9 source: rpm version: 4.3.0 python3-enchant: - arch: noarch epoch: null name: python3-enchant release: 5.el9 source: rpm version: 3.2.0 python3-file-magic: - arch: noarch epoch: null name: python3-file-magic release: 16.el9 source: rpm version: '5.39' python3-gobject-base: - arch: x86_64 epoch: null name: python3-gobject-base release: 6.el9 source: rpm version: 3.40.1 python3-gobject-base-noarch: - arch: noarch epoch: null name: python3-gobject-base-noarch release: 6.el9 source: rpm version: 3.40.1 python3-gpg: - arch: x86_64 epoch: null name: python3-gpg release: 6.el9 source: rpm version: 1.15.1 python3-hawkey: - arch: x86_64 epoch: null name: python3-hawkey release: 16.el9 source: rpm version: 0.69.0 python3-idna: - arch: noarch epoch: null name: python3-idna release: 7.el9.1 source: rpm version: '2.10' python3-jinja2: - arch: noarch epoch: null name: python3-jinja2 release: 8.el9 source: rpm version: 2.11.3 python3-jmespath: - arch: noarch epoch: null name: python3-jmespath release: 1.el9 source: rpm version: 1.0.1 python3-jsonpatch: - arch: noarch epoch: null name: python3-jsonpatch release: 16.el9 source: rpm version: 
'1.21' python3-jsonpointer: - arch: noarch epoch: null name: python3-jsonpointer release: 4.el9 source: rpm version: '2.0' python3-jsonschema: - arch: noarch epoch: null name: python3-jsonschema release: 13.el9 source: rpm version: 3.2.0 python3-libcomps: - arch: x86_64 epoch: null name: python3-libcomps release: 1.el9 source: rpm version: 0.1.18 python3-libdnf: - arch: x86_64 epoch: null name: python3-libdnf release: 16.el9 source: rpm version: 0.69.0 python3-libs: - arch: x86_64 epoch: null name: python3-libs release: 3.el9 source: rpm version: 3.9.25 python3-libselinux: - arch: x86_64 epoch: null name: python3-libselinux release: 3.el9 source: rpm version: '3.6' python3-libsemanage: - arch: x86_64 epoch: null name: python3-libsemanage release: 5.el9 source: rpm version: '3.6' python3-libvirt: - arch: x86_64 epoch: null name: python3-libvirt release: 1.el9 source: rpm version: 11.10.0 python3-libxml2: - arch: x86_64 epoch: null name: python3-libxml2 release: 14.el9 source: rpm version: 2.9.13 python3-lxml: - arch: x86_64 epoch: null name: python3-lxml release: 3.el9 source: rpm version: 4.6.5 python3-markupsafe: - arch: x86_64 epoch: null name: python3-markupsafe release: 12.el9 source: rpm version: 1.1.1 python3-netaddr: - arch: noarch epoch: null name: python3-netaddr release: 3.el9 source: rpm version: 0.10.1 python3-netifaces: - arch: x86_64 epoch: null name: python3-netifaces release: 15.el9 source: rpm version: 0.10.6 python3-oauthlib: - arch: noarch epoch: null name: python3-oauthlib release: 5.el9 source: rpm version: 3.1.1 python3-packaging: - arch: noarch epoch: null name: python3-packaging release: 5.el9 source: rpm version: '20.9' python3-pexpect: - arch: noarch epoch: null name: python3-pexpect release: 7.el9 source: rpm version: 4.8.0 python3-pip: - arch: noarch epoch: null name: python3-pip release: 1.el9 source: rpm version: 21.3.1 python3-pip-wheel: - arch: noarch epoch: null name: python3-pip-wheel release: 1.el9 source: rpm version: 21.3.1 
python3-ply: - arch: noarch epoch: null name: python3-ply release: 14.el9 source: rpm version: '3.11' python3-policycoreutils: - arch: noarch epoch: null name: python3-policycoreutils release: 4.el9 source: rpm version: '3.6' python3-prettytable: - arch: noarch epoch: null name: python3-prettytable release: 27.el9 source: rpm version: 0.7.2 python3-ptyprocess: - arch: noarch epoch: null name: python3-ptyprocess release: 12.el9 source: rpm version: 0.6.0 python3-pycparser: - arch: noarch epoch: null name: python3-pycparser release: 6.el9 source: rpm version: '2.20' python3-pyparsing: - arch: noarch epoch: null name: python3-pyparsing release: 9.el9 source: rpm version: 2.4.7 python3-pyrsistent: - arch: x86_64 epoch: null name: python3-pyrsistent release: 8.el9 source: rpm version: 0.17.3 python3-pyserial: - arch: noarch epoch: null name: python3-pyserial release: 12.el9 source: rpm version: '3.4' python3-pysocks: - arch: noarch epoch: null name: python3-pysocks release: 12.el9 source: rpm version: 1.7.1 python3-pytz: - arch: noarch epoch: null name: python3-pytz release: 5.el9 source: rpm version: '2021.1' python3-pyyaml: - arch: x86_64 epoch: null name: python3-pyyaml release: 6.el9 source: rpm version: 5.4.1 python3-requests: - arch: noarch epoch: null name: python3-requests release: 10.el9 source: rpm version: 2.25.1 python3-resolvelib: - arch: noarch epoch: null name: python3-resolvelib release: 5.el9 source: rpm version: 0.5.4 python3-rpm: - arch: x86_64 epoch: null name: python3-rpm release: 40.el9 source: rpm version: 4.16.1.3 python3-rpm-generators: - arch: noarch epoch: null name: python3-rpm-generators release: 9.el9 source: rpm version: '12' python3-rpm-macros: - arch: noarch epoch: null name: python3-rpm-macros release: 54.el9 source: rpm version: '3.9' python3-setools: - arch: x86_64 epoch: null name: python3-setools release: 1.el9 source: rpm version: 4.4.4 python3-setuptools: - arch: noarch epoch: null name: python3-setuptools release: 15.el9 source: 
rpm version: 53.0.0 python3-setuptools-wheel: - arch: noarch epoch: null name: python3-setuptools-wheel release: 15.el9 source: rpm version: 53.0.0 python3-six: - arch: noarch epoch: null name: python3-six release: 9.el9 source: rpm version: 1.15.0 python3-systemd: - arch: x86_64 epoch: null name: python3-systemd release: 19.el9 source: rpm version: '234' python3-urllib3: - arch: noarch epoch: null name: python3-urllib3 release: 6.el9 source: rpm version: 1.26.5 qemu-guest-agent: - arch: x86_64 epoch: 17 name: qemu-guest-agent release: 10.el9 source: rpm version: 10.1.0 qt5-srpm-macros: - arch: noarch epoch: null name: qt5-srpm-macros release: 1.el9 source: rpm version: 5.15.9 quota: - arch: x86_64 epoch: 1 name: quota release: 4.el9 source: rpm version: '4.09' quota-nls: - arch: noarch epoch: 1 name: quota-nls release: 4.el9 source: rpm version: '4.09' readline: - arch: x86_64 epoch: null name: readline release: 4.el9 source: rpm version: '8.1' readline-devel: - arch: x86_64 epoch: null name: readline-devel release: 4.el9 source: rpm version: '8.1' redhat-rpm-config: - arch: noarch epoch: null name: redhat-rpm-config release: 1.el9 source: rpm version: '210' rootfiles: - arch: noarch epoch: null name: rootfiles release: 35.el9 source: rpm version: '8.1' rpcbind: - arch: x86_64 epoch: null name: rpcbind release: 7.el9 source: rpm version: 1.2.6 rpm: - arch: x86_64 epoch: null name: rpm release: 40.el9 source: rpm version: 4.16.1.3 rpm-build: - arch: x86_64 epoch: null name: rpm-build release: 40.el9 source: rpm version: 4.16.1.3 rpm-build-libs: - arch: x86_64 epoch: null name: rpm-build-libs release: 40.el9 source: rpm version: 4.16.1.3 rpm-libs: - arch: x86_64 epoch: null name: rpm-libs release: 40.el9 source: rpm version: 4.16.1.3 rpm-plugin-audit: - arch: x86_64 epoch: null name: rpm-plugin-audit release: 40.el9 source: rpm version: 4.16.1.3 rpm-plugin-selinux: - arch: x86_64 epoch: null name: rpm-plugin-selinux release: 40.el9 source: rpm version: 4.16.1.3 
rpm-plugin-systemd-inhibit: - arch: x86_64 epoch: null name: rpm-plugin-systemd-inhibit release: 40.el9 source: rpm version: 4.16.1.3 rpm-sign: - arch: x86_64 epoch: null name: rpm-sign release: 40.el9 source: rpm version: 4.16.1.3 rpm-sign-libs: - arch: x86_64 epoch: null name: rpm-sign-libs release: 40.el9 source: rpm version: 4.16.1.3 rpmlint: - arch: noarch epoch: null name: rpmlint release: 19.el9 source: rpm version: '1.11' rsync: - arch: x86_64 epoch: null name: rsync release: 4.el9 source: rpm version: 3.2.5 rsyslog: - arch: x86_64 epoch: null name: rsyslog release: 2.el9 source: rpm version: 8.2510.0 rsyslog-logrotate: - arch: x86_64 epoch: null name: rsyslog-logrotate release: 2.el9 source: rpm version: 8.2510.0 ruby: - arch: x86_64 epoch: null name: ruby release: 165.el9 source: rpm version: 3.0.7 ruby-default-gems: - arch: noarch epoch: null name: ruby-default-gems release: 165.el9 source: rpm version: 3.0.7 ruby-devel: - arch: x86_64 epoch: null name: ruby-devel release: 165.el9 source: rpm version: 3.0.7 ruby-libs: - arch: x86_64 epoch: null name: ruby-libs release: 165.el9 source: rpm version: 3.0.7 rubygem-bigdecimal: - arch: x86_64 epoch: null name: rubygem-bigdecimal release: 165.el9 source: rpm version: 3.0.0 rubygem-bundler: - arch: noarch epoch: null name: rubygem-bundler release: 165.el9 source: rpm version: 2.2.33 rubygem-io-console: - arch: x86_64 epoch: null name: rubygem-io-console release: 165.el9 source: rpm version: 0.5.7 rubygem-json: - arch: x86_64 epoch: null name: rubygem-json release: 165.el9 source: rpm version: 2.5.1 rubygem-psych: - arch: x86_64 epoch: null name: rubygem-psych release: 165.el9 source: rpm version: 3.3.2 rubygem-rdoc: - arch: noarch epoch: null name: rubygem-rdoc release: 165.el9 source: rpm version: 6.3.4.1 rubygems: - arch: noarch epoch: null name: rubygems release: 165.el9 source: rpm version: 3.2.33 rust-srpm-macros: - arch: noarch epoch: null name: rust-srpm-macros release: 4.el9 source: rpm version: '17' 
sed: - arch: x86_64 epoch: null name: sed release: 9.el9 source: rpm version: '4.8' selinux-policy: - arch: noarch epoch: null name: selinux-policy release: 1.el9 source: rpm version: 38.1.71 selinux-policy-targeted: - arch: noarch epoch: null name: selinux-policy-targeted release: 1.el9 source: rpm version: 38.1.71 setroubleshoot-plugins: - arch: noarch epoch: null name: setroubleshoot-plugins release: 4.el9 source: rpm version: 3.3.14 setroubleshoot-server: - arch: x86_64 epoch: null name: setroubleshoot-server release: 2.el9 source: rpm version: 3.3.35 setup: - arch: noarch epoch: null name: setup release: 10.el9 source: rpm version: 2.13.7 sg3_utils: - arch: x86_64 epoch: null name: sg3_utils release: 10.el9 source: rpm version: '1.47' sg3_utils-libs: - arch: x86_64 epoch: null name: sg3_utils-libs release: 10.el9 source: rpm version: '1.47' shadow-utils: - arch: x86_64 epoch: 2 name: shadow-utils release: 16.el9 source: rpm version: '4.9' shadow-utils-subid: - arch: x86_64 epoch: 2 name: shadow-utils-subid release: 16.el9 source: rpm version: '4.9' shared-mime-info: - arch: x86_64 epoch: null name: shared-mime-info release: 5.el9 source: rpm version: '2.1' skopeo: - arch: x86_64 epoch: 2 name: skopeo release: 2.el9 source: rpm version: 1.20.0 slang: - arch: x86_64 epoch: null name: slang release: 11.el9 source: rpm version: 2.3.2 slirp4netns: - arch: x86_64 epoch: null name: slirp4netns release: 1.el9 source: rpm version: 1.3.3 snappy: - arch: x86_64 epoch: null name: snappy release: 8.el9 source: rpm version: 1.1.8 sos: - arch: noarch epoch: null name: sos release: 2.el9 source: rpm version: 4.10.1 sqlite: - arch: x86_64 epoch: null name: sqlite release: 9.el9 source: rpm version: 3.34.1 sqlite-libs: - arch: x86_64 epoch: null name: sqlite-libs release: 9.el9 source: rpm version: 3.34.1 squashfs-tools: - arch: x86_64 epoch: null name: squashfs-tools release: 10.git1.el9 source: rpm version: '4.4' sscg: - arch: x86_64 epoch: null name: sscg release: 2.el9 
source: rpm version: 4.0.3 sshpass: - arch: x86_64 epoch: null name: sshpass release: 4.el9 source: rpm version: '1.09' sssd-client: - arch: x86_64 epoch: null name: sssd-client release: 5.el9 source: rpm version: 2.9.7 sssd-common: - arch: x86_64 epoch: null name: sssd-common release: 5.el9 source: rpm version: 2.9.7 sssd-kcm: - arch: x86_64 epoch: null name: sssd-kcm release: 5.el9 source: rpm version: 2.9.7 sssd-nfs-idmap: - arch: x86_64 epoch: null name: sssd-nfs-idmap release: 5.el9 source: rpm version: 2.9.7 sudo: - arch: x86_64 epoch: null name: sudo release: 13.el9 source: rpm version: 1.9.5p2 systemd: - arch: x86_64 epoch: null name: systemd release: 64.el9 source: rpm version: '252' systemd-devel: - arch: x86_64 epoch: null name: systemd-devel release: 64.el9 source: rpm version: '252' systemd-libs: - arch: x86_64 epoch: null name: systemd-libs release: 64.el9 source: rpm version: '252' systemd-pam: - arch: x86_64 epoch: null name: systemd-pam release: 64.el9 source: rpm version: '252' systemd-rpm-macros: - arch: noarch epoch: null name: systemd-rpm-macros release: 64.el9 source: rpm version: '252' systemd-udev: - arch: x86_64 epoch: null name: systemd-udev release: 64.el9 source: rpm version: '252' tar: - arch: x86_64 epoch: 2 name: tar release: 9.el9 source: rpm version: '1.34' tcl: - arch: x86_64 epoch: 1 name: tcl release: 7.el9 source: rpm version: 8.6.10 tcpdump: - arch: x86_64 epoch: 14 name: tcpdump release: 9.el9 source: rpm version: 4.99.0 teamd: - arch: x86_64 epoch: null name: teamd release: 16.el9 source: rpm version: '1.31' time: - arch: x86_64 epoch: null name: time release: 18.el9 source: rpm version: '1.9' tmux: - arch: x86_64 epoch: null name: tmux release: 5.el9 source: rpm version: 3.2a tpm2-tss: - arch: x86_64 epoch: null name: tpm2-tss release: 1.el9 source: rpm version: 3.2.3 traceroute: - arch: x86_64 epoch: 3 name: traceroute release: 1.el9 source: rpm version: 2.1.1 tzdata: - arch: noarch epoch: null name: tzdata release: 1.el9 
source: rpm version: 2025c unzip: - arch: x86_64 epoch: null name: unzip release: 59.el9 source: rpm version: '6.0' userspace-rcu: - arch: x86_64 epoch: null name: userspace-rcu release: 6.el9 source: rpm version: 0.12.1 util-linux: - arch: x86_64 epoch: null name: util-linux release: 21.el9 source: rpm version: 2.37.4 util-linux-core: - arch: x86_64 epoch: null name: util-linux-core release: 21.el9 source: rpm version: 2.37.4 vim-minimal: - arch: x86_64 epoch: 2 name: vim-minimal release: 23.el9 source: rpm version: 8.2.2637 virt-install: - arch: noarch epoch: null name: virt-install release: 1.el9 source: rpm version: 5.0.0 virt-manager-common: - arch: noarch epoch: null name: virt-manager-common release: 1.el9 source: rpm version: 5.0.0 webkit2gtk3-jsc: - arch: x86_64 epoch: null name: webkit2gtk3-jsc release: 1.el9 source: rpm version: 2.50.4 wget: - arch: x86_64 epoch: null name: wget release: 8.el9 source: rpm version: 1.21.1 which: - arch: x86_64 epoch: null name: which release: 30.el9 source: rpm version: '2.21' xfsprogs: - arch: x86_64 epoch: null name: xfsprogs release: 7.el9 source: rpm version: 6.4.0 xmlstarlet: - arch: x86_64 epoch: null name: xmlstarlet release: 20.el9 source: rpm version: 1.6.1 xorriso: - arch: x86_64 epoch: null name: xorriso release: 5.el9 source: rpm version: 1.5.4 xz: - arch: x86_64 epoch: null name: xz release: 8.el9 source: rpm version: 5.2.5 xz-devel: - arch: x86_64 epoch: null name: xz-devel release: 8.el9 source: rpm version: 5.2.5 xz-libs: - arch: x86_64 epoch: null name: xz-libs release: 8.el9 source: rpm version: 5.2.5 yajl: - arch: x86_64 epoch: null name: yajl release: 25.el9 source: rpm version: 2.1.0 yum: - arch: noarch epoch: null name: yum release: 31.el9 source: rpm version: 4.14.0 yum-utils: - arch: noarch epoch: null name: yum-utils release: 25.el9 source: rpm version: 4.3.0 zip: - arch: x86_64 epoch: null name: zip release: 35.el9 source: rpm version: '3.0' zlib: - arch: x86_64 epoch: null name: zlib release: 
41.el9 source: rpm version: 1.2.11 zlib-devel: - arch: x86_64 epoch: null name: zlib-devel release: 41.el9 source: rpm version: 1.2.11 zstd: - arch: x86_64 epoch: null name: zstd release: 1.el9 source: rpm version: 1.5.5
home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2026-01-20_10-59/ansible_facts_cache/compute-0
{ "_ansible_facts_gathered": true, "ansible_all_ipv4_addresses": [ "38.102.83.147", "192.168.122.100" ], "ansible_all_ipv6_addresses": [ "fe80::f816:3eff:fe25:2c21" ], "ansible_apparmor": { "status": "disabled" }, "ansible_architecture": "x86_64", "ansible_bios_date": "04/01/2014", "ansible_bios_vendor": "SeaBIOS", "ansible_bios_version": "1.15.0-1", "ansible_board_asset_tag": "NA", "ansible_board_name": "NA", "ansible_board_serial": "NA", "ansible_board_vendor": "NA", "ansible_board_version": "NA", "ansible_chassis_asset_tag": "NA", "ansible_chassis_serial": "NA", "ansible_chassis_vendor": "QEMU", "ansible_chassis_version": "pc-i440fx-6.2", "ansible_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": 
"UUID=22ac9141-3960-4912-b20e-19fc8a328d40" }, "ansible_date_time": { "date": "2026-01-20", "day": "20", "epoch": "1768906624", "epoch_int": "1768906624", "hour": "10", "iso8601": "2026-01-20T10:57:04Z", "iso8601_basic": "20260120T105704817065", "iso8601_basic_short": "20260120T105704", "iso8601_micro": "2026-01-20T10:57:04.817065Z", "minute": "57", "month": "01", "second": "04", "time": "10:57:04", "tz": "UTC", "tz_dst": "UTC", "tz_offset": "+0000", "weekday": "Tuesday", "weekday_number": "2", "weeknumber": "03", "year": "2026" }, "ansible_default_ipv4": { "address": "38.102.83.147", "alias": "eth0", "broadcast": "38.102.83.255", "gateway": "38.102.83.1", "interface": "eth0", "macaddress": "fa:16:3e:25:2c:21", "mtu": 1500, "netmask": "255.255.255.0", "network": "38.102.83.0", "prefix": "24", "type": "ether" }, "ansible_default_ipv6": {}, "ansible_device_links": { "ids": { "sr0": [ "ata-QEMU_DVD-ROM_QM00001" ] }, "labels": { "sr0": [ "config-2" ] }, "masters": {}, "uuids": { "sr0": [ "2026-01-20-10-41-36-00" ], "vda1": [ "22ac9141-3960-4912-b20e-19fc8a328d40" ] } }, "ansible_devices": { "sr0": { "holders": [], "host": "", "links": { "ids": [ "ata-QEMU_DVD-ROM_QM00001" ], "labels": [ "config-2" ], "masters": [], "uuids": [ "2026-01-20-10-41-36-00" ] }, "model": "QEMU DVD-ROM", "partitions": {}, "removable": "1", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "964", "sectorsize": "2048", "size": "482.00 KB", "support_discard": "2048", "vendor": "QEMU", "virtual": 1 }, "vda": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": { "vda1": { "holders": [], "links": { "ids": [], "labels": [], "masters": [], "uuids": [ "22ac9141-3960-4912-b20e-19fc8a328d40" ] }, "sectors": "167770079", "sectorsize": 512, "size": "80.00 GB", "start": "2048", "uuid": "22ac9141-3960-4912-b20e-19fc8a328d40" } }, "removable": "0", "rotational": "1", 
"sas_address": null, "sas_device_handle": null, "scheduler_mode": "none", "sectors": "167772160", "sectorsize": "512", "size": "80.00 GB", "support_discard": "512", "vendor": "0x1af4", "virtual": 1 } }, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/centos-release", "ansible_distribution_file_variety": "CentOS", "ansible_distribution_major_version": "9", "ansible_distribution_release": "Stream", "ansible_distribution_version": "9", "ansible_dns": { "nameservers": [ "199.204.44.24", "199.204.47.54" ] }, "ansible_domain": "", "ansible_effective_group_id": 1000, "ansible_effective_user_id": 1000, "ansible_env": { "BASH_FUNC_which%%": "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}", "DBUS_SESSION_BUS_ADDRESS": "unix:path=/run/user/1000/bus", "DEBUGINFOD_IMA_CERT_PATH": "/etc/keys/ima:", "DEBUGINFOD_URLS": "https://debuginfod.centos.org/ ", "HOME": "/home/zuul", "LANG": "en_US.UTF-8", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "LOGNAME": "zuul", "MOTD_SHOWN": "pam", "PATH": "/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin", "PWD": "/home/zuul", "SELINUX_LEVEL_REQUESTED": "", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "SHELL": "/bin/bash", "SHLVL": "1", "SSH_CLIENT": "38.102.83.39 56844 22", "SSH_CONNECTION": "38.102.83.39 56844 38.102.83.147 22", "USER": "zuul", "XDG_RUNTIME_DIR": "/run/user/1000", "XDG_SESSION_CLASS": "user", "XDG_SESSION_ID": "7", "XDG_SESSION_TYPE": "tty", "_": "/usr/bin/python3", "which_declare": "declare -f" }, "ansible_eth0": { "active": true, "device": "eth0", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off 
[fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "38.102.83.147", "broadcast": "38.102.83.255", "netmask": 
"255.255.255.0", "network": "38.102.83.0", "prefix": "24" }, "ipv6": [ { "address": "fe80::f816:3eff:fe25:2c21", "prefix": "64", "scope": "link" } ], "macaddress": "fa:16:3e:25:2c:21", "module": "virtio_net", "mtu": 1500, "pciid": "virtio1", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_eth1": { "active": true, "device": "eth1", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", 
"tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "192.168.122.100", "broadcast": "192.168.122.255", "netmask": "255.255.255.0", "network": "192.168.122.0", "prefix": "24" }, "macaddress": "fa:16:3e:c1:c1:89", "module": "virtio_net", "mtu": 1500, "pciid": "virtio5", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "compute-0", "ansible_hostname": "compute-0", "ansible_hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d", "ansible_interfaces": [ "eth0", "eth1", "lo" ], "ansible_is_chroot": false, "ansible_iscsi_iqn": "", "ansible_kernel": "5.14.0-661.el9.x86_64", "ansible_kernel_version": "#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026", "ansible_lo": { "active": true, "device": "lo", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", 
"rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "on", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "on [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "127.0.0.1", "broadcast": "", "netmask": "255.0.0.0", "network": "127.0.0.0", "prefix": "8" }, "ipv6": [ { "address": "::1", "prefix": "128", "scope": "host" } ], "mtu": 65536, "promisc": false, "timestamping": [], "type": "loopback" }, "ansible_loadavg": { "15m": 0.1, "1m": 0.03, "5m": 0.18 }, "ansible_local": {}, "ansible_locally_reachable_ips": { "ipv4": [ "38.102.83.147", "127.0.0.0/8", "127.0.0.1", 
"192.168.122.100" ], "ipv6": [ "::1", "fe80::f816:3eff:fe25:2c21" ] }, "ansible_lsb": {}, "ansible_lvm": "N/A", "ansible_machine": "x86_64", "ansible_machine_id": "85ac68c10a6e7ae08ceb898dbdca0cb5", "ansible_memfree_mb": 6800, "ansible_memory_mb": { "nocache": { "free": 7302, "used": 377 }, "real": { "free": 6800, "total": 7679, "used": 879 }, "swap": { "cached": 0, "free": 0, "total": 0, "used": 0 } }, "ansible_memtotal_mb": 7679, "ansible_mounts": [ { "block_available": 20302618, "block_size": 4096, "block_total": 20954875, "block_used": 652257, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 41887817, "inode_total": 41942512, "inode_used": 54695, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota", "size_available": 83159523328, "size_total": 85831168000, "uuid": "22ac9141-3960-4912-b20e-19fc8a328d40" } ], "ansible_nodename": "compute-0", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "dnf", "ansible_proc_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": "UUID=22ac9141-3960-4912-b20e-19fc8a328d40" }, "ansible_processor": [ "0", "AuthenticAMD", "AMD EPYC-Rome Processor", "1", "AuthenticAMD", "AMD EPYC-Rome Processor", "2", "AuthenticAMD", "AMD EPYC-Rome Processor", "3", "AuthenticAMD", "AMD EPYC-Rome Processor", "4", "AuthenticAMD", "AMD EPYC-Rome Processor", "5", "AuthenticAMD", "AMD EPYC-Rome Processor", "6", "AuthenticAMD", "AMD EPYC-Rome Processor", "7", "AuthenticAMD", "AMD EPYC-Rome Processor" ], "ansible_processor_cores": 1, "ansible_processor_count": 8, "ansible_processor_nproc": 8, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 8, "ansible_product_name": "OpenStack Nova", "ansible_product_serial": "NA", "ansible_product_uuid": "NA", "ansible_product_version": "26.3.1", "ansible_python": { "executable": 
"/usr/bin/python3", "has_sslcontext": true, "type": "cpython", "version": { "major": 3, "micro": 25, "minor": 9, "releaselevel": "final", "serial": 0 }, "version_info": [ 3, 9, 25, "final", 0 ] }, "ansible_python_version": "3.9.25", "ansible_real_group_id": 1000, "ansible_real_user_id": 1000, "ansible_selinux": { "config_mode": "enforcing", "mode": "enforcing", "policyvers": 33, "status": "enabled", "type": "targeted" }, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImXgYRgtVhvHpMswDAh9UlpvtE1pP8PEn0C7uDlhV5zNa4lLLwa9hrNUZaxzn4SgfZQFMldvxjekHdYiRkrnVc=", "ansible_ssh_host_key_ecdsa_public_keytype": "ecdsa-sha2-nistp256", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIQ3/Z4kVl/AT59/Tv6HVvPhUGgpIZb+Fp7H6BUxNPgr", "ansible_ssh_host_key_ed25519_public_keytype": "ssh-ed25519", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABgQC5Q8F0s8M9Jon+Lj7prVMGsM10+eKWbur0kefVExM5UwKhTvoBLhRBv97P4RC77BVPJIJVOed6O+/WgH0n++ckHPFc40aPxq8DYHQPYf4zjYU1iX8lN1wWUMfKwwwUb0DMqvTNJeCeiOHoHQ2CUi11sV4G6BR1uemczKMFcj3hgHz1RuVosl14gNqJC7qKzXQQteukA/v88QeBckJmYJ5GDBgONjV5FuF4+2MzQzNsNf+gltZJf/WSFAq8lAxTvkNbl/lrCLoCCKaaz2mBcnbNm+d43ZPVFh6Ww5htzLlHd+ReGyvGTSFFcmUkvxdrMcvsK+x4MMPPHmrayzcYDiRHao4jO98naVN0B/MtgOJ4lIVTCZgZRWkKNCOyeuZVsDhxp/Vwfd6U9FIp5fLJlp9828aFJ0fNJCEBPcJ0OrEEbelI50zGEanlrHou5DH1qVjC7vumBB21AWvdxuWmr1/jQOet/J7AP2qsOZL8NHV/BC5j7bkwFuxQUgSQ2rpIv4k=", "ansible_ssh_host_key_rsa_public_keytype": "ssh-rsa", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": [ "" ], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "OpenStack Foundation", "ansible_uptime_seconds": 917, "ansible_user_dir": "/home/zuul", "ansible_user_gecos": "", "ansible_user_gid": 1000, "ansible_user_id": "zuul", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 1000, 
"ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_tech_guest": [ "openstack" ], "ansible_virtualization_tech_host": [ "kvm" ], "ansible_virtualization_type": "openstack", "discovered_interpreter_python": "/usr/bin/python3", "gather_subset": [ "all" ], "module_setup": true }././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2026-01-20_10-59/ansible_facts_cache/compute-1home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2026-01-20_10-59/ansible_facts_0000644000175000017500000005561315133657765031633 0ustar zuulzuul{ "_ansible_facts_gathered": true, "ansible_all_ipv4_addresses": [ "192.168.122.101", "38.102.83.233" ], "ansible_all_ipv6_addresses": [ "fe80::f816:3eff:fed1:b2b4" ], "ansible_apparmor": { "status": "disabled" }, "ansible_architecture": "x86_64", "ansible_bios_date": "04/01/2014", "ansible_bios_vendor": "SeaBIOS", "ansible_bios_version": "1.15.0-1", "ansible_board_asset_tag": "NA", "ansible_board_name": "NA", "ansible_board_serial": "NA", "ansible_board_vendor": "NA", "ansible_board_version": "NA", "ansible_chassis_asset_tag": "NA", "ansible_chassis_serial": "NA", "ansible_chassis_vendor": "QEMU", "ansible_chassis_version": "pc-i440fx-6.2", "ansible_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": "UUID=22ac9141-3960-4912-b20e-19fc8a328d40" }, "ansible_date_time": { "date": "2026-01-20", "day": "20", "epoch": "1768906624", "epoch_int": "1768906624", "hour": "10", "iso8601": "2026-01-20T10:57:04Z", "iso8601_basic": "20260120T105704831495", "iso8601_basic_short": "20260120T105704", "iso8601_micro": "2026-01-20T10:57:04.831495Z", "minute": "57", "month": "01", "second": "04", "time": 
"10:57:04", "tz": "UTC", "tz_dst": "UTC", "tz_offset": "+0000", "weekday": "Tuesday", "weekday_number": "2", "weeknumber": "03", "year": "2026" }, "ansible_default_ipv4": { "address": "38.102.83.233", "alias": "eth0", "broadcast": "38.102.83.255", "gateway": "38.102.83.1", "interface": "eth0", "macaddress": "fa:16:3e:d1:b2:b4", "mtu": 1500, "netmask": "255.255.255.0", "network": "38.102.83.0", "prefix": "24", "type": "ether" }, "ansible_default_ipv6": {}, "ansible_device_links": { "ids": { "sr0": [ "ata-QEMU_DVD-ROM_QM00001" ] }, "labels": { "sr0": [ "config-2" ] }, "masters": {}, "uuids": { "sr0": [ "2026-01-20-10-41-49-00" ], "vda1": [ "22ac9141-3960-4912-b20e-19fc8a328d40" ] } }, "ansible_devices": { "sr0": { "holders": [], "host": "", "links": { "ids": [ "ata-QEMU_DVD-ROM_QM00001" ], "labels": [ "config-2" ], "masters": [], "uuids": [ "2026-01-20-10-41-49-00" ] }, "model": "QEMU DVD-ROM", "partitions": {}, "removable": "1", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "964", "sectorsize": "2048", "size": "482.00 KB", "support_discard": "2048", "vendor": "QEMU", "virtual": 1 }, "vda": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": { "vda1": { "holders": [], "links": { "ids": [], "labels": [], "masters": [], "uuids": [ "22ac9141-3960-4912-b20e-19fc8a328d40" ] }, "sectors": "167770079", "sectorsize": 512, "size": "80.00 GB", "start": "2048", "uuid": "22ac9141-3960-4912-b20e-19fc8a328d40" } }, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "none", "sectors": "167772160", "sectorsize": "512", "size": "80.00 GB", "support_discard": "512", "vendor": "0x1af4", "virtual": 1 } }, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/centos-release", "ansible_distribution_file_variety": "CentOS", 
"ansible_distribution_major_version": "9", "ansible_distribution_release": "Stream", "ansible_distribution_version": "9", "ansible_dns": { "nameservers": [ "199.204.44.24", "199.204.47.54" ] }, "ansible_domain": "", "ansible_effective_group_id": 1000, "ansible_effective_user_id": 1000, "ansible_env": { "BASH_FUNC_which%%": "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}", "DBUS_SESSION_BUS_ADDRESS": "unix:path=/run/user/1000/bus", "DEBUGINFOD_IMA_CERT_PATH": "/etc/keys/ima:", "DEBUGINFOD_URLS": "https://debuginfod.centos.org/ ", "HOME": "/home/zuul", "LANG": "en_US.UTF-8", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "LOGNAME": "zuul", "MOTD_SHOWN": "pam", "PATH": "/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin", "PWD": "/home/zuul", "SELINUX_LEVEL_REQUESTED": "", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "SHELL": "/bin/bash", "SHLVL": "1", "SSH_CLIENT": "38.102.83.39 41736 22", "SSH_CONNECTION": "38.102.83.39 41736 38.102.83.233 22", "USER": "zuul", "XDG_RUNTIME_DIR": "/run/user/1000", "XDG_SESSION_CLASS": "user", "XDG_SESSION_ID": "8", "XDG_SESSION_TYPE": "tty", "_": "/usr/bin/python3", "which_declare": "declare -f" }, "ansible_eth0": { "active": true, "device": "eth0", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", 
"rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "38.102.83.233", "broadcast": "38.102.83.255", "netmask": "255.255.255.0", "network": "38.102.83.0", "prefix": "24" }, "ipv6": [ { "address": "fe80::f816:3eff:fed1:b2b4", "prefix": "64", "scope": "link" } ], "macaddress": "fa:16:3e:d1:b2:b4", "module": "virtio_net", "mtu": 1500, "pciid": "virtio1", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_eth1": { "active": true, "device": "eth1", "features": { "esp_hw_offload": 
"off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", 
"tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "192.168.122.101", "broadcast": "192.168.122.255", "netmask": "255.255.255.0", "network": "192.168.122.0", "prefix": "24" }, "macaddress": "fa:16:3e:62:06:49", "module": "virtio_net", "mtu": 1500, "pciid": "virtio5", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "compute-1", "ansible_hostname": "compute-1", "ansible_hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d", "ansible_interfaces": [ "eth0", "eth1", "lo" ], "ansible_is_chroot": false, "ansible_iscsi_iqn": "", "ansible_kernel": "5.14.0-661.el9.x86_64", "ansible_kernel_version": "#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026", "ansible_lo": { "active": true, "device": "lo", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off 
[fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "on", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "on [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "127.0.0.1", "broadcast": "", "netmask": "255.0.0.0", "network": "127.0.0.0", "prefix": "8" }, "ipv6": [ { "address": "::1", "prefix": "128", "scope": "host" } ], "mtu": 65536, "promisc": false, "timestamping": [], "type": "loopback" }, "ansible_loadavg": { "15m": 0.15, "1m": 0.02, "5m": 0.2 }, "ansible_local": {}, "ansible_locally_reachable_ips": { "ipv4": [ "38.102.83.233", "127.0.0.0/8", "127.0.0.1", "192.168.122.101" ], "ipv6": [ "::1", "fe80::f816:3eff:fed1:b2b4" ] }, "ansible_lsb": {}, "ansible_lvm": "N/A", "ansible_machine": "x86_64", "ansible_machine_id": "85ac68c10a6e7ae08ceb898dbdca0cb5", "ansible_memfree_mb": 6795, "ansible_memory_mb": { "nocache": { "free": 7297, "used": 382 }, "real": { "free": 6795, "total": 7679, "used": 884 }, "swap": { "cached": 0, "free": 
0, "total": 0, "used": 0 } }, "ansible_memtotal_mb": 7679, "ansible_mounts": [ { "block_available": 20302613, "block_size": 4096, "block_total": 20954875, "block_used": 652262, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 41887817, "inode_total": 41942512, "inode_used": 54695, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota", "size_available": 83159502848, "size_total": 85831168000, "uuid": "22ac9141-3960-4912-b20e-19fc8a328d40" } ], "ansible_nodename": "compute-1", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "dnf", "ansible_proc_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": "UUID=22ac9141-3960-4912-b20e-19fc8a328d40" }, "ansible_processor": [ "0", "AuthenticAMD", "AMD EPYC-Rome Processor", "1", "AuthenticAMD", "AMD EPYC-Rome Processor", "2", "AuthenticAMD", "AMD EPYC-Rome Processor", "3", "AuthenticAMD", "AMD EPYC-Rome Processor", "4", "AuthenticAMD", "AMD EPYC-Rome Processor", "5", "AuthenticAMD", "AMD EPYC-Rome Processor", "6", "AuthenticAMD", "AMD EPYC-Rome Processor", "7", "AuthenticAMD", "AMD EPYC-Rome Processor" ], "ansible_processor_cores": 1, "ansible_processor_count": 8, "ansible_processor_nproc": 8, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 8, "ansible_product_name": "OpenStack Nova", "ansible_product_serial": "NA", "ansible_product_uuid": "NA", "ansible_product_version": "26.3.1", "ansible_python": { "executable": "/usr/bin/python3", "has_sslcontext": true, "type": "cpython", "version": { "major": 3, "micro": 25, "minor": 9, "releaselevel": "final", "serial": 0 }, "version_info": [ 3, 9, 25, "final", 0 ] }, "ansible_python_version": "3.9.25", "ansible_real_group_id": 1000, "ansible_real_user_id": 1000, "ansible_selinux": { "config_mode": "enforcing", "mode": "enforcing", "policyvers": 33, 
"status": "enabled", "type": "targeted" }, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFhNbYjjeK/oArk7LI0XanEzL0O7pgkf21+cq6QoQ5wbEb5g8C3uy8vUvqvdJBSrbH0ip8mNVOAh0sdh8xx0iOk=", "ansible_ssh_host_key_ecdsa_public_keytype": "ecdsa-sha2-nistp256", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIFJpjSKPikJsHpfPqsw0KgXOHY/fls40YXS3/2usUOaz", "ansible_ssh_host_key_ed25519_public_keytype": "ssh-ed25519", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABgQCZYl4khmSSMFZqYE0pbymRw0xsRabp4YkDzSmFiVfyaP7UHC+q6q+POgFEQJNhuYfa6P1PFnIzG58tPOI7IFIuoHc7RKpjo0ghGXsYYCAmY3pW3TtcpNSE5v/+iAD37zJKZJEswxSjjeByERazfj6oCyaWKEP5m9oKYohiMAU8GdyQPiHvmFT4UmmWl0BL+KsiwszcJ/RRzl59M5hlVIT0VW3d1QpEB8WmvxgxaiRGMDOIobkSwxArnabE6IjZdoJFHiuN0JnQuellbUPVEMAV7fK1JA5c8jYZmXSa1QTjjKUVLvGljId6WamT+7+kHcsGoOyMDlaktQnDPwB9F1538fFsSioF8CiLPzSPd0OLmE3Zqg6eQWH8rk8Ox5uQVcGh6AK4yMNMde4GuNcmAmtAEwrzn5PT9BsF2bysDdtpaROdlA4SSyhXZB85irUiLX4/aNsuWaC4iDX/20G3XhDeio7bDHxFlPiT6n+KfWwugRGxcYrUZi4BKkbxBN3aDqE=", "ansible_ssh_host_key_rsa_public_keytype": "ssh-rsa", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": [ "" ], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "OpenStack Foundation", "ansible_uptime_seconds": 907, "ansible_user_dir": "/home/zuul", "ansible_user_gecos": "", "ansible_user_gid": 1000, "ansible_user_id": "zuul", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 1000, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_tech_guest": [ "openstack" ], "ansible_virtualization_tech_host": [ "kvm" ], "ansible_virtualization_type": "openstack", "discovered_interpreter_python": "/usr/bin/python3", "gather_subset": [ "all" ], "module_setup": true 
}././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2026-01-20_10-59/ansible_facts_cache/localhosthome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2026-01-20_10-59/ansible_facts_0000644000175000017500000016033615133657765031632 0ustar zuulzuul{ "_ansible_facts_gathered": true, "ansible_all_ipv4_addresses": [ "38.102.83.39", "192.168.122.11" ], "ansible_all_ipv6_addresses": [ "fe80::f816:3eff:fe71:e11f" ], "ansible_apparmor": { "status": "disabled" }, "ansible_architecture": "x86_64", "ansible_bios_date": "04/01/2014", "ansible_bios_vendor": "SeaBIOS", "ansible_bios_version": "1.15.0-1", "ansible_board_asset_tag": "NA", "ansible_board_name": "NA", "ansible_board_serial": "NA", "ansible_board_vendor": "NA", "ansible_board_version": "NA", "ansible_chassis_asset_tag": "NA", "ansible_chassis_serial": "NA", "ansible_chassis_vendor": "QEMU", "ansible_chassis_version": "pc-i440fx-6.2", "ansible_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": "UUID=22ac9141-3960-4912-b20e-19fc8a328d40" }, "ansible_date_time": { "date": "2026-01-20", "day": "20", "epoch": "1768906475", "epoch_int": "1768906475", "hour": "10", "iso8601": "2026-01-20T10:54:35Z", "iso8601_basic": "20260120T105435180928", "iso8601_basic_short": "20260120T105435", "iso8601_micro": "2026-01-20T10:54:35.180928Z", "minute": "54", "month": "01", "second": "35", "time": "10:54:35", "tz": "UTC", "tz_dst": "UTC", "tz_offset": "+0000", "weekday": "Tuesday", "weekday_number": "2", "weeknumber": "03", "year": "2026" }, "ansible_default_ipv4": { "address": "38.102.83.39", "alias": "eth0", "broadcast": "38.102.83.255", "gateway": "38.102.83.1", "interface": "eth0", "macaddress": "fa:16:3e:71:e1:1f", "mtu": 1500, "netmask": "255.255.255.0", 
"network": "38.102.83.0", "prefix": "24", "type": "ether" }, "ansible_default_ipv6": {}, "ansible_device_links": { "ids": { "sr0": [ "ata-QEMU_DVD-ROM_QM00001" ] }, "labels": { "sr0": [ "config-2" ] }, "masters": {}, "uuids": { "sr0": [ "2026-01-20-10-41-26-00" ], "vda1": [ "22ac9141-3960-4912-b20e-19fc8a328d40" ] } }, "ansible_devices": { "sr0": { "holders": [], "host": "", "links": { "ids": [ "ata-QEMU_DVD-ROM_QM00001" ], "labels": [ "config-2" ], "masters": [], "uuids": [ "2026-01-20-10-41-26-00" ] }, "model": "QEMU DVD-ROM", "partitions": {}, "removable": "1", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "964", "sectorsize": "2048", "size": "482.00 KB", "support_discard": "2048", "vendor": "QEMU", "virtual": 1 }, "vda": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": { "vda1": { "holders": [], "links": { "ids": [], "labels": [], "masters": [], "uuids": [ "22ac9141-3960-4912-b20e-19fc8a328d40" ] }, "sectors": "83883999", "sectorsize": 512, "size": "40.00 GB", "start": "2048", "uuid": "22ac9141-3960-4912-b20e-19fc8a328d40" } }, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "none", "sectors": "83886080", "sectorsize": "512", "size": "40.00 GB", "support_discard": "512", "vendor": "0x1af4", "virtual": 1 } }, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/centos-release", "ansible_distribution_file_variety": "CentOS", "ansible_distribution_major_version": "9", "ansible_distribution_release": "Stream", "ansible_distribution_version": "9", "ansible_dns": { "nameservers": [ "192.168.122.10", "199.204.44.24", "199.204.47.54" ] }, "ansible_domain": "", "ansible_effective_group_id": 1000, "ansible_effective_user_id": 1000, "ansible_env": { "BASH_FUNC_which%%": "() { ( alias;\n eval ${which_declare} ) | 
/usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}", "DBUS_SESSION_BUS_ADDRESS": "unix:path=/run/user/1000/bus", "DEBUGINFOD_IMA_CERT_PATH": "/etc/keys/ima:", "DEBUGINFOD_URLS": "https://debuginfod.centos.org/ ", "HOME": "/home/zuul", "LANG": "en_US.UTF-8", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "LOGNAME": "zuul", "MOTD_SHOWN": "pam", "PATH": "/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin", "PWD": "/home/zuul/src/github.com/openstack-k8s-operators/ci-framework", "SELINUX_LEVEL_REQUESTED": "", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "SHELL": "/bin/bash", "SHLVL": "2", "SSH_CLIENT": "38.102.83.114 49292 22", "SSH_CONNECTION": "38.102.83.114 49292 38.102.83.39 22", "USER": "zuul", "XDG_RUNTIME_DIR": "/run/user/1000", "XDG_SESSION_CLASS": "user", "XDG_SESSION_ID": "9", "XDG_SESSION_TYPE": "tty", "_": "/usr/bin/python3", "which_declare": "declare -f" }, "ansible_eth0": { "active": true, "device": "eth0", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": 
"off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "38.102.83.39", "broadcast": "38.102.83.255", "netmask": "255.255.255.0", "network": "38.102.83.0", "prefix": "24" }, "ipv6": [ { "address": "fe80::f816:3eff:fe71:e11f", "prefix": "64", "scope": "link" } ], "macaddress": "fa:16:3e:71:e1:1f", "module": "virtio_net", "mtu": 1500, "pciid": "virtio1", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_eth1": { "active": true, "device": "eth1", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off 
[fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "192.168.122.11", "broadcast": "192.168.122.255", "netmask": "255.255.255.0", "network": "192.168.122.0", "prefix": "24" }, "macaddress": 
"fa:16:3e:b8:b9:28", "module": "virtio_net", "mtu": 1500, "pciid": "virtio5", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "controller", "ansible_hostname": "controller", "ansible_hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d", "ansible_interfaces": [ "eth1", "lo", "eth0" ], "ansible_is_chroot": false, "ansible_iscsi_iqn": "", "ansible_kernel": "5.14.0-661.el9.x86_64", "ansible_kernel_version": "#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026", "ansible_lo": { "active": true, "device": "lo", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", 
"tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "on", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "on [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "127.0.0.1", "broadcast": "", "netmask": "255.0.0.0", "network": "127.0.0.0", "prefix": "8" }, "ipv6": [ { "address": "::1", "prefix": "128", "scope": "host" } ], "mtu": 65536, "promisc": false, "timestamping": [], "type": "loopback" }, "ansible_loadavg": { "15m": 0.47, "1m": 1.21, "5m": 0.83 }, "ansible_local": {}, "ansible_locally_reachable_ips": { "ipv4": [ "38.102.83.39", "127.0.0.0/8", "127.0.0.1", "192.168.122.11" ], "ipv6": [ "::1", "fe80::f816:3eff:fe71:e11f" ] }, "ansible_lsb": {}, "ansible_lvm": "N/A", "ansible_machine": "x86_64", "ansible_machine_id": "85ac68c10a6e7ae08ceb898dbdca0cb5", "ansible_memfree_mb": 1339, "ansible_memory_mb": { "nocache": { "free": 2808, "used": 847 }, "real": { "free": 1339, "total": 3655, "used": 2316 }, "swap": { "cached": 0, "free": 0, "total": 0, "used": 0 } }, "ansible_memtotal_mb": 3655, "ansible_mounts": [ { "block_available": 9508234, "block_size": 4096, "block_total": 10469115, "block_used": 960881, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 20821436, "inode_total": 20970992, "inode_used": 149556, "mount": "/", "options": 
"rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota", "size_available": 38945726464, "size_total": 42881495040, "uuid": "22ac9141-3960-4912-b20e-19fc8a328d40" } ], "ansible_nodename": "controller", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "dnf", "ansible_proc_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": "UUID=22ac9141-3960-4912-b20e-19fc8a328d40" }, "ansible_processor": [ "0", "AuthenticAMD", "AMD EPYC-Rome Processor", "1", "AuthenticAMD", "AMD EPYC-Rome Processor" ], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_nproc": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "OpenStack Nova", "ansible_product_serial": "NA", "ansible_product_uuid": "NA", "ansible_product_version": "26.3.1", "ansible_python": { "executable": "/usr/bin/python3", "has_sslcontext": true, "type": "cpython", "version": { "major": 3, "micro": 25, "minor": 9, "releaselevel": "final", "serial": 0 }, "version_info": [ 3, 9, 25, "final", 0 ] }, "ansible_python_version": "3.9.25", "ansible_real_group_id": 1000, "ansible_real_user_id": 1000, "ansible_selinux": { "config_mode": "enforcing", "mode": "enforcing", "policyvers": 33, "status": "enabled", "type": "targeted" }, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+HME4ahJQfJECnlUk3Icgw7DjB45ygINRfee3AcsNVR5rNrzPcoaVTiPZ1YEOGS4KKBJ1Qyzp3CgN+dL12iOs=", "ansible_ssh_host_key_ecdsa_public_keytype": "ecdsa-sha2-nistp256", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIHcCbCXC3if6/1WyJzr5vIzL+Tzi/3I/oQgtmHTAEmym", "ansible_ssh_host_key_ed25519_public_keytype": "ssh-ed25519", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABgQD5hZ02CR6jauFfwvnGyh7Gg7GiN8xU/4woiEx+8xAto75E7Pi9h+8iAczj5rpkBpdIX3G3BSeegzMeog4upoxDVvta9EgXuabnQ49Y7WDm0LPFAPgiFBu/CkrcXHPm6OM5a181eFVk4w9Kf3GDJ9Arh5IdZdAbxXEEdenpbQnlz4hFtl/dGIrohDfCuWmrhq5VMqraCeMpiJ4c2G2iMVgZFQf8LvUICbaySrebir4HAfyv1yWZawS1Nql3bsyHsx9Tf25tbj5CHHs+DhC9NI/UgOmeW8rz3IyrdhqKInbsI/AqSHQPmEAEwzRco8xALMmzjICopKKXB9R++ddv/PAqiZPTrixuYUXRQQyJvx080Visb20dtAZTLYdmY7X1oB8Jgvullh3xFHNeqAu0+7OIeoHl9eCxx2sbk1kEtud1CMuDc5cn7h3ANGHZY3/jP0cUAyel1wQE0olv43z2rzgUpI6+8gKM0edyBLbCbww6/PHtcNkrzxAB3WOaAIzZAI8=", "ansible_ssh_host_key_rsa_public_keytype": "ssh-rsa", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": [ "" ], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "OpenStack Foundation", "ansible_uptime_seconds": 780, "ansible_user_dir": "/home/zuul", "ansible_user_gecos": "", "ansible_user_gid": 1000, "ansible_user_id": "zuul", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 1000, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_tech_guest": [ "openstack" ], "ansible_virtualization_tech_host": [ "kvm" ], "ansible_virtualization_type": "openstack", "cifmw_discovered_hash": "6b1f209ecc539dcfd8634a5c7786c6629def62c87865ceb38b6678fdd81d8a90", "cifmw_discovered_hash_algorithm": "sha256", "cifmw_discovered_image_name": "CentOS-Stream-GenericCloud-x86_64-9-latest.x86_64.qcow2", "cifmw_discovered_image_url": "https://cloud.centos.org/centos/9-stream/x86_64/images//CentOS-Stream-GenericCloud-x86_64-9-latest.x86_64.qcow2", "cifmw_install_yamls_defaults": { "ADOPTED_EXTERNAL_NETWORK": "172.21.1.0/24", "ADOPTED_INTERNALAPI_NETWORK": "172.17.1.0/24", "ADOPTED_STORAGEMGMT_NETWORK": "172.20.1.0/24", "ADOPTED_STORAGE_NETWORK": "172.18.1.0/24", "ADOPTED_TENANT_NETWORK": "172.9.1.0/24", "ANSIBLEEE": "config/samples/_v1beta1_ansibleee.yaml", "ANSIBLEEE_BRANCH": 
"main", "ANSIBLEEE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml", "ANSIBLEEE_IMG": "quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest", "ANSIBLEEE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml", "ANSIBLEEE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests", "ANSIBLEEE_KUTTL_NAMESPACE": "ansibleee-kuttl-tests", "ANSIBLEEE_REPO": "https://github.com/openstack-k8s-operators/openstack-ansibleee-operator", "ANSIBLEE_COMMIT_HASH": "", "BARBICAN": "config/samples/barbican_v1beta1_barbican.yaml", "BARBICAN_BRANCH": "main", "BARBICAN_COMMIT_HASH": "", "BARBICAN_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml", "BARBICAN_DEPL_IMG": "unused", "BARBICAN_IMG": "quay.io/openstack-k8s-operators/barbican-operator-index:latest", "BARBICAN_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml", "BARBICAN_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests", "BARBICAN_KUTTL_NAMESPACE": "barbican-kuttl-tests", "BARBICAN_REPO": "https://github.com/openstack-k8s-operators/barbican-operator.git", "BARBICAN_SERVICE_ENABLED": "true", "BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY": "sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU=", "BAREMETAL_BRANCH": "main", "BAREMETAL_COMMIT_HASH": "", "BAREMETAL_IMG": "quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest", "BAREMETAL_OS_CONTAINER_IMG": "", "BAREMETAL_OS_IMG": "", "BAREMETAL_OS_IMG_TYPE": "", "BAREMETAL_REPO": "https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git", "BAREMETAL_TIMEOUT": "20m", "BASH_IMG": "quay.io/openstack-k8s-operators/bash:latest", "BGP_ASN": "64999", 
"BGP_LEAF_1": "100.65.4.1", "BGP_LEAF_2": "100.64.4.1", "BGP_OVN_ROUTING": "false", "BGP_PEER_ASN": "64999", "BGP_SOURCE_IP": "172.30.4.2", "BGP_SOURCE_IP6": "f00d:f00d:f00d:f00d:f00d:f00d:f00d:42", "BMAAS_BRIDGE_IPV4_PREFIX": "172.20.1.2/24", "BMAAS_BRIDGE_IPV6_PREFIX": "fd00:bbbb::2/64", "BMAAS_INSTANCE_DISK_SIZE": "20", "BMAAS_INSTANCE_MEMORY": "4096", "BMAAS_INSTANCE_NAME_PREFIX": "crc-bmaas", "BMAAS_INSTANCE_NET_MODEL": "virtio", "BMAAS_INSTANCE_OS_VARIANT": "centos-stream9", "BMAAS_INSTANCE_VCPUS": "2", "BMAAS_INSTANCE_VIRT_TYPE": "kvm", "BMAAS_IPV4": "true", "BMAAS_IPV6": "false", "BMAAS_LIBVIRT_USER": "sushyemu", "BMAAS_METALLB_ADDRESS_POOL": "172.20.1.64/26", "BMAAS_METALLB_POOL_NAME": "baremetal", "BMAAS_NETWORK_IPV4_PREFIX": "172.20.1.1/24", "BMAAS_NETWORK_IPV6_PREFIX": "fd00:bbbb::1/64", "BMAAS_NETWORK_NAME": "crc-bmaas", "BMAAS_NODE_COUNT": "1", "BMAAS_OCP_INSTANCE_NAME": "crc", "BMAAS_REDFISH_PASSWORD": "password", "BMAAS_REDFISH_USERNAME": "admin", "BMAAS_ROUTE_LIBVIRT_NETWORKS": "crc-bmaas,crc,default", "BMAAS_SUSHY_EMULATOR_DRIVER": "libvirt", "BMAAS_SUSHY_EMULATOR_IMAGE": "quay.io/metal3-io/sushy-tools:latest", "BMAAS_SUSHY_EMULATOR_NAMESPACE": "sushy-emulator", "BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE": "/etc/openstack/clouds.yaml", "BMAAS_SUSHY_EMULATOR_OS_CLOUD": "openstack", "BMH_NAMESPACE": "openstack", "BMO_BRANCH": "release-0.9", "BMO_CLEANUP": "true", "BMO_COMMIT_HASH": "", "BMO_IPA_BRANCH": "stable/2024.1", "BMO_IRONIC_HOST": "192.168.122.10", "BMO_PROVISIONING_INTERFACE": "", "BMO_REPO": "https://github.com/metal3-io/baremetal-operator", "BMO_SETUP": false, "BMO_SETUP_ROUTE_REPLACE": "true", "BM_CTLPLANE_INTERFACE": "enp1s0", "BM_INSTANCE_MEMORY": "8192", "BM_INSTANCE_NAME_PREFIX": "edpm-compute-baremetal", "BM_INSTANCE_NAME_SUFFIX": "0", "BM_NETWORK_NAME": "default", "BM_NODE_COUNT": "1", "BM_ROOT_PASSWORD": "", "BM_ROOT_PASSWORD_SECRET": "", "CEILOMETER_CENTRAL_DEPL_IMG": "unused", "CEILOMETER_NOTIFICATION_DEPL_IMG": "unused", 
"CEPH_BRANCH": "release-1.15", "CEPH_CLIENT": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml", "CEPH_COMMON": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml", "CEPH_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml", "CEPH_CRDS": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml", "CEPH_IMG": "quay.io/ceph/demo:latest-squid", "CEPH_OP": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml", "CEPH_REPO": "https://github.com/rook/rook.git", "CERTMANAGER_TIMEOUT": "300s", "CHECKOUT_FROM_OPENSTACK_REF": "true", "CINDER": "config/samples/cinder_v1beta1_cinder.yaml", "CINDERAPI_DEPL_IMG": "unused", "CINDERBKP_DEPL_IMG": "unused", "CINDERSCH_DEPL_IMG": "unused", "CINDERVOL_DEPL_IMG": "unused", "CINDER_BRANCH": "main", "CINDER_COMMIT_HASH": "", "CINDER_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml", "CINDER_IMG": "quay.io/openstack-k8s-operators/cinder-operator-index:latest", "CINDER_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml", "CINDER_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests", "CINDER_KUTTL_NAMESPACE": "cinder-kuttl-tests", "CINDER_REPO": "https://github.com/openstack-k8s-operators/cinder-operator.git", "CLEANUP_DIR_CMD": "rm -Rf", "CRC_BGP_NIC_1_MAC": "52:54:00:11:11:11", "CRC_BGP_NIC_2_MAC": "52:54:00:11:11:12", "CRC_HTTPS_PROXY": "", "CRC_HTTP_PROXY": "", "CRC_STORAGE_NAMESPACE": "crc-storage", "CRC_STORAGE_RETRIES": "3", "CRC_URL": "'https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz'", "CRC_VERSION": "latest", "DATAPLANE_ANSIBLE_SECRET": 
"dataplane-ansible-ssh-private-key-secret", "DATAPLANE_ANSIBLE_USER": "", "DATAPLANE_COMPUTE_IP": "192.168.122.100", "DATAPLANE_CONTAINER_PREFIX": "openstack", "DATAPLANE_CONTAINER_TAG": "current-podified", "DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG": "quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest", "DATAPLANE_DEFAULT_GW": "192.168.122.1", "DATAPLANE_EXTRA_NOVA_CONFIG_FILE": "/dev/null", "DATAPLANE_GROWVOLS_ARGS": "/=8GB /tmp=1GB /home=1GB /var=100%", "DATAPLANE_KUSTOMIZE_SCENARIO": "preprovisioned", "DATAPLANE_NETWORKER_IP": "192.168.122.200", "DATAPLANE_NETWORK_INTERFACE_NAME": "eth0", "DATAPLANE_NOVA_NFS_PATH": "", "DATAPLANE_NTP_SERVER": "pool.ntp.org", "DATAPLANE_PLAYBOOK": "osp.edpm.download_cache", "DATAPLANE_REGISTRY_URL": "quay.io/podified-antelope-centos9", "DATAPLANE_RUNNER_IMG": "", "DATAPLANE_SERVER_ROLE": "compute", "DATAPLANE_SSHD_ALLOWED_RANGES": "['192.168.122.0/24']", "DATAPLANE_TIMEOUT": "30m", "DATAPLANE_TLS_ENABLED": "true", "DATAPLANE_TOTAL_NETWORKER_NODES": "1", "DATAPLANE_TOTAL_NODES": "1", "DBSERVICE": "galera", "DESIGNATE": "config/samples/designate_v1beta1_designate.yaml", "DESIGNATE_BRANCH": "main", "DESIGNATE_COMMIT_HASH": "", "DESIGNATE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml", "DESIGNATE_IMG": "quay.io/openstack-k8s-operators/designate-operator-index:latest", "DESIGNATE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml", "DESIGNATE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests", "DESIGNATE_KUTTL_NAMESPACE": "designate-kuttl-tests", "DESIGNATE_REPO": "https://github.com/openstack-k8s-operators/designate-operator.git", "DNSDATA": "config/samples/network_v1beta1_dnsdata.yaml", "DNSDATA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml", "DNSMASQ": 
"config/samples/network_v1beta1_dnsmasq.yaml", "DNSMASQ_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml", "DNS_DEPL_IMG": "unused", "DNS_DOMAIN": "localdomain", "DOWNLOAD_TOOLS_SELECTION": "all", "EDPM_ATTACH_EXTNET": "true", "EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES": "'[]'", "EDPM_COMPUTE_ADDITIONAL_NETWORKS": "'[]'", "EDPM_COMPUTE_CELLS": "1", "EDPM_COMPUTE_CEPH_ENABLED": "true", "EDPM_COMPUTE_CEPH_NOVA": "true", "EDPM_COMPUTE_DHCP_AGENT_ENABLED": "true", "EDPM_COMPUTE_SRIOV_ENABLED": "true", "EDPM_COMPUTE_SUFFIX": "0", "EDPM_CONFIGURE_DEFAULT_ROUTE": "true", "EDPM_CONFIGURE_HUGEPAGES": "false", "EDPM_CONFIGURE_NETWORKING": "true", "EDPM_FIRSTBOOT_EXTRA": "/tmp/edpm-firstboot-extra", "EDPM_NETWORKER_SUFFIX": "0", "EDPM_TOTAL_NETWORKERS": "1", "EDPM_TOTAL_NODES": "1", "GALERA_REPLICAS": "", "GENERATE_SSH_KEYS": "true", "GIT_CLONE_OPTS": "", "GLANCE": "config/samples/glance_v1beta1_glance.yaml", "GLANCEAPI_DEPL_IMG": "unused", "GLANCE_BRANCH": "main", "GLANCE_COMMIT_HASH": "", "GLANCE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml", "GLANCE_IMG": "quay.io/openstack-k8s-operators/glance-operator-index:latest", "GLANCE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml", "GLANCE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests", "GLANCE_KUTTL_NAMESPACE": "glance-kuttl-tests", "GLANCE_REPO": "https://github.com/openstack-k8s-operators/glance-operator.git", "HEAT": "config/samples/heat_v1beta1_heat.yaml", "HEATAPI_DEPL_IMG": "unused", "HEATCFNAPI_DEPL_IMG": "unused", "HEATENGINE_DEPL_IMG": "unused", "HEAT_AUTH_ENCRYPTION_KEY": "767c3ed056cbaa3b9dfedb8c6f825bf0", "HEAT_BRANCH": "main", "HEAT_COMMIT_HASH": "", "HEAT_CR": 
"/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml", "HEAT_IMG": "quay.io/openstack-k8s-operators/heat-operator-index:latest", "HEAT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml", "HEAT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests", "HEAT_KUTTL_NAMESPACE": "heat-kuttl-tests", "HEAT_REPO": "https://github.com/openstack-k8s-operators/heat-operator.git", "HEAT_SERVICE_ENABLED": "true", "HORIZON": "config/samples/horizon_v1beta1_horizon.yaml", "HORIZON_BRANCH": "main", "HORIZON_COMMIT_HASH": "", "HORIZON_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml", "HORIZON_DEPL_IMG": "unused", "HORIZON_IMG": "quay.io/openstack-k8s-operators/horizon-operator-index:latest", "HORIZON_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml", "HORIZON_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests", "HORIZON_KUTTL_NAMESPACE": "horizon-kuttl-tests", "HORIZON_REPO": "https://github.com/openstack-k8s-operators/horizon-operator.git", "INFRA_BRANCH": "main", "INFRA_COMMIT_HASH": "", "INFRA_IMG": "quay.io/openstack-k8s-operators/infra-operator-index:latest", "INFRA_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml", "INFRA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests", "INFRA_KUTTL_NAMESPACE": "infra-kuttl-tests", "INFRA_REPO": "https://github.com/openstack-k8s-operators/infra-operator.git", "INSTALL_CERT_MANAGER": false, "INSTALL_NMSTATE": "true || false", "INSTALL_NNCP": "true || false", "INTERNALAPI_HOST_ROUTES": "", "IPV6_LAB_IPV4_NETWORK_IPADDRESS": "172.30.0.1/24", "IPV6_LAB_IPV6_NETWORK_IPADDRESS": 
"fd00:abcd:abcd:fc00::1/64", "IPV6_LAB_LIBVIRT_STORAGE_POOL": "default", "IPV6_LAB_MANAGE_FIREWALLD": "true", "IPV6_LAB_NAT64_HOST_IPV4": "172.30.0.2/24", "IPV6_LAB_NAT64_HOST_IPV6": "fd00:abcd:abcd:fc00::2/64", "IPV6_LAB_NAT64_INSTANCE_NAME": "nat64-router", "IPV6_LAB_NAT64_IPV6_NETWORK": "fd00:abcd:abcd:fc00::/64", "IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL": "192.168.255.0/24", "IPV6_LAB_NAT64_TAYGA_IPV4": "192.168.255.1", "IPV6_LAB_NAT64_TAYGA_IPV6": "fd00:abcd:abcd:fc00::3", "IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX": "fd00:abcd:abcd:fcff::/96", "IPV6_LAB_NAT64_UPDATE_PACKAGES": "false", "IPV6_LAB_NETWORK_NAME": "nat64", "IPV6_LAB_SNO_CLUSTER_NETWORK": "fd00:abcd:0::/48", "IPV6_LAB_SNO_HOST_IP": "fd00:abcd:abcd:fc00::11", "IPV6_LAB_SNO_HOST_PREFIX": "64", "IPV6_LAB_SNO_INSTANCE_NAME": "sno", "IPV6_LAB_SNO_MACHINE_NETWORK": "fd00:abcd:abcd:fc00::/64", "IPV6_LAB_SNO_OCP_MIRROR_URL": "https://mirror.openshift.com/pub/openshift-v4/clients/ocp", "IPV6_LAB_SNO_OCP_VERSION": "latest-4.14", "IPV6_LAB_SNO_SERVICE_NETWORK": "fd00:abcd:abcd:fc03::/112", "IPV6_LAB_SSH_PUB_KEY": "/home/zuul/.ssh/id_rsa.pub", "IPV6_LAB_WORK_DIR": "/home/zuul/.ipv6lab", "IRONIC": "config/samples/ironic_v1beta1_ironic.yaml", "IRONICAPI_DEPL_IMG": "unused", "IRONICCON_DEPL_IMG": "unused", "IRONICINS_DEPL_IMG": "unused", "IRONICNAG_DEPL_IMG": "unused", "IRONICPXE_DEPL_IMG": "unused", "IRONIC_BRANCH": "main", "IRONIC_COMMIT_HASH": "", "IRONIC_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml", "IRONIC_IMAGE": "quay.io/metal3-io/ironic", "IRONIC_IMAGE_TAG": "release-24.1", "IRONIC_IMG": "quay.io/openstack-k8s-operators/ironic-operator-index:latest", "IRONIC_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml", "IRONIC_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests", "IRONIC_KUTTL_NAMESPACE": "ironic-kuttl-tests", "IRONIC_REPO": 
"https://github.com/openstack-k8s-operators/ironic-operator.git", "KEYSTONEAPI": "config/samples/keystone_v1beta1_keystoneapi.yaml", "KEYSTONEAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml", "KEYSTONEAPI_DEPL_IMG": "unused", "KEYSTONE_BRANCH": "main", "KEYSTONE_COMMIT_HASH": "", "KEYSTONE_FEDERATION_CLIENT_SECRET": "COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f", "KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE": "openstack", "KEYSTONE_IMG": "quay.io/openstack-k8s-operators/keystone-operator-index:latest", "KEYSTONE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml", "KEYSTONE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests", "KEYSTONE_KUTTL_NAMESPACE": "keystone-kuttl-tests", "KEYSTONE_REPO": "https://github.com/openstack-k8s-operators/keystone-operator.git", "KUBEADMIN_PWD": "12345678", "LIBVIRT_SECRET": "libvirt-secret", "LOKI_DEPLOY_MODE": "openshift-network", "LOKI_DEPLOY_NAMESPACE": "netobserv", "LOKI_DEPLOY_SIZE": "1x.demo", "LOKI_NAMESPACE": "openshift-operators-redhat", "LOKI_OPERATOR_GROUP": "openshift-operators-redhat-loki", "LOKI_SUBSCRIPTION": "loki-operator", "LVMS_CR": "1", "MANILA": "config/samples/manila_v1beta1_manila.yaml", "MANILAAPI_DEPL_IMG": "unused", "MANILASCH_DEPL_IMG": "unused", "MANILASHARE_DEPL_IMG": "unused", "MANILA_BRANCH": "main", "MANILA_COMMIT_HASH": "", "MANILA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml", "MANILA_IMG": "quay.io/openstack-k8s-operators/manila-operator-index:latest", "MANILA_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml", "MANILA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests", "MANILA_KUTTL_NAMESPACE": "manila-kuttl-tests", "MANILA_REPO": 
"https://github.com/openstack-k8s-operators/manila-operator.git", "MANILA_SERVICE_ENABLED": "true", "MARIADB": "config/samples/mariadb_v1beta1_galera.yaml", "MARIADB_BRANCH": "main", "MARIADB_CHAINSAW_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml", "MARIADB_CHAINSAW_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests", "MARIADB_CHAINSAW_NAMESPACE": "mariadb-chainsaw-tests", "MARIADB_COMMIT_HASH": "", "MARIADB_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml", "MARIADB_DEPL_IMG": "unused", "MARIADB_IMG": "quay.io/openstack-k8s-operators/mariadb-operator-index:latest", "MARIADB_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml", "MARIADB_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests", "MARIADB_KUTTL_NAMESPACE": "mariadb-kuttl-tests", "MARIADB_REPO": "https://github.com/openstack-k8s-operators/mariadb-operator.git", "MEMCACHED": "config/samples/memcached_v1beta1_memcached.yaml", "MEMCACHED_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml", "MEMCACHED_DEPL_IMG": "unused", "METADATA_SHARED_SECRET": "1234567842", "METALLB_IPV6_POOL": "fd00:aaaa::80-fd00:aaaa::90", "METALLB_POOL": "192.168.122.80-192.168.122.90", "MICROSHIFT": "0", "NAMESPACE": "openstack", "NETCONFIG": "config/samples/network_v1beta1_netconfig.yaml", "NETCONFIG_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml", "NETCONFIG_DEPL_IMG": "unused", "NETOBSERV_DEPLOY_NAMESPACE": "netobserv", "NETOBSERV_NAMESPACE": "openshift-netobserv-operator", "NETOBSERV_OPERATOR_GROUP": "openshift-netobserv-operator-net", "NETOBSERV_SUBSCRIPTION": "netobserv-operator", 
"NETWORK_BGP": "false", "NETWORK_DESIGNATE_ADDRESS_PREFIX": "172.28.0", "NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX": "172.50.0", "NETWORK_INTERNALAPI_ADDRESS_PREFIX": "172.17.0", "NETWORK_ISOLATION": "true", "NETWORK_ISOLATION_INSTANCE_NAME": "crc", "NETWORK_ISOLATION_IPV4": "true", "NETWORK_ISOLATION_IPV4_ADDRESS": "172.16.1.1/24", "NETWORK_ISOLATION_IPV4_NAT": "true", "NETWORK_ISOLATION_IPV6": "false", "NETWORK_ISOLATION_IPV6_ADDRESS": "fd00:aaaa::1/64", "NETWORK_ISOLATION_IP_ADDRESS": "192.168.122.10", "NETWORK_ISOLATION_MAC": "52:54:00:11:11:10", "NETWORK_ISOLATION_NETWORK_NAME": "net-iso", "NETWORK_ISOLATION_NET_NAME": "default", "NETWORK_ISOLATION_USE_DEFAULT_NETWORK": "true", "NETWORK_MTU": "1500", "NETWORK_STORAGEMGMT_ADDRESS_PREFIX": "172.20.0", "NETWORK_STORAGE_ADDRESS_PREFIX": "172.18.0", "NETWORK_STORAGE_MACVLAN": "", "NETWORK_TENANT_ADDRESS_PREFIX": "172.19.0", "NETWORK_VLAN_START": "20", "NETWORK_VLAN_STEP": "1", "NEUTRONAPI": "config/samples/neutron_v1beta1_neutronapi.yaml", "NEUTRONAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml", "NEUTRONAPI_DEPL_IMG": "unused", "NEUTRON_BRANCH": "main", "NEUTRON_COMMIT_HASH": "", "NEUTRON_IMG": "quay.io/openstack-k8s-operators/neutron-operator-index:latest", "NEUTRON_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml", "NEUTRON_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests", "NEUTRON_KUTTL_NAMESPACE": "neutron-kuttl-tests", "NEUTRON_REPO": "https://github.com/openstack-k8s-operators/neutron-operator.git", "NFS_HOME": "/home/nfs", "NMSTATE_NAMESPACE": "openshift-nmstate", "NMSTATE_OPERATOR_GROUP": "openshift-nmstate-tn6k8", "NMSTATE_SUBSCRIPTION": "kubernetes-nmstate-operator", "NNCP_ADDITIONAL_HOST_ROUTES": "", "NNCP_BGP_1_INTERFACE": "enp7s0", "NNCP_BGP_1_IP_ADDRESS": "100.65.4.2", "NNCP_BGP_2_INTERFACE": "enp8s0", 
"NNCP_BGP_2_IP_ADDRESS": "100.64.4.2", "NNCP_BRIDGE": "ospbr", "NNCP_CLEANUP_TIMEOUT": "120s", "NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX": "fd00:aaaa::", "NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX": "10", "NNCP_CTLPLANE_IP_ADDRESS_PREFIX": "192.168.122", "NNCP_CTLPLANE_IP_ADDRESS_SUFFIX": "10", "NNCP_DNS_SERVER": "192.168.122.1", "NNCP_DNS_SERVER_IPV6": "fd00:aaaa::1", "NNCP_GATEWAY": "192.168.122.1", "NNCP_GATEWAY_IPV6": "fd00:aaaa::1", "NNCP_INTERFACE": "enp6s0", "NNCP_NODES": "", "NNCP_TIMEOUT": "240s", "NOVA": "config/samples/nova_v1beta1_nova_collapsed_cell.yaml", "NOVA_BRANCH": "main", "NOVA_COMMIT_HASH": "", "NOVA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml", "NOVA_IMG": "quay.io/openstack-k8s-operators/nova-operator-index:latest", "NOVA_REPO": "https://github.com/openstack-k8s-operators/nova-operator.git", "NUMBER_OF_INSTANCES": "1", "OCP_NETWORK_NAME": "crc", "OCTAVIA": "config/samples/octavia_v1beta1_octavia.yaml", "OCTAVIA_BRANCH": "main", "OCTAVIA_COMMIT_HASH": "", "OCTAVIA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml", "OCTAVIA_IMG": "quay.io/openstack-k8s-operators/octavia-operator-index:latest", "OCTAVIA_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml", "OCTAVIA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests", "OCTAVIA_KUTTL_NAMESPACE": "octavia-kuttl-tests", "OCTAVIA_REPO": "https://github.com/openstack-k8s-operators/octavia-operator.git", "OKD": "false", "OPENSTACK_BRANCH": "main", "OPENSTACK_BUNDLE_IMG": "quay.io/openstack-k8s-operators/openstack-operator-bundle:latest", "OPENSTACK_COMMIT_HASH": "", "OPENSTACK_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml", 
"OPENSTACK_CRDS_DIR": "openstack_crds", "OPENSTACK_CTLPLANE": "config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml", "OPENSTACK_IMG": "quay.io/openstack-k8s-operators/openstack-operator-index:latest", "OPENSTACK_K8S_BRANCH": "main", "OPENSTACK_K8S_TAG": "latest", "OPENSTACK_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml", "OPENSTACK_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests", "OPENSTACK_KUTTL_NAMESPACE": "openstack-kuttl-tests", "OPENSTACK_NEUTRON_CUSTOM_CONF": "", "OPENSTACK_REPO": "https://github.com/openstack-k8s-operators/openstack-operator.git", "OPENSTACK_STORAGE_BUNDLE_IMG": "quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest", "OPERATOR_BASE_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator", "OPERATOR_CHANNEL": "", "OPERATOR_NAMESPACE": "openstack-operators", "OPERATOR_SOURCE": "", "OPERATOR_SOURCE_NAMESPACE": "", "OUT": "/home/zuul/ci-framework-data/artifacts/manifests", "OUTPUT_DIR": "/home/zuul/ci-framework-data/artifacts/edpm", "OVNCONTROLLER": "config/samples/ovn_v1beta1_ovncontroller.yaml", "OVNCONTROLLER_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml", "OVNCONTROLLER_NMAP": "true", "OVNDBS": "config/samples/ovn_v1beta1_ovndbcluster.yaml", "OVNDBS_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml", "OVNNORTHD": "config/samples/ovn_v1beta1_ovnnorthd.yaml", "OVNNORTHD_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml", "OVN_BRANCH": "main", "OVN_COMMIT_HASH": "", "OVN_IMG": "quay.io/openstack-k8s-operators/ovn-operator-index:latest", "OVN_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml", 
"OVN_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests", "OVN_KUTTL_NAMESPACE": "ovn-kuttl-tests", "OVN_REPO": "https://github.com/openstack-k8s-operators/ovn-operator.git", "PASSWORD": "12345678", "PLACEMENTAPI": "config/samples/placement_v1beta1_placementapi.yaml", "PLACEMENTAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml", "PLACEMENTAPI_DEPL_IMG": "unused", "PLACEMENT_BRANCH": "main", "PLACEMENT_COMMIT_HASH": "", "PLACEMENT_IMG": "quay.io/openstack-k8s-operators/placement-operator-index:latest", "PLACEMENT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml", "PLACEMENT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests", "PLACEMENT_KUTTL_NAMESPACE": "placement-kuttl-tests", "PLACEMENT_REPO": "https://github.com/openstack-k8s-operators/placement-operator.git", "PULL_SECRET": "/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/pull-secret.txt", "RABBITMQ": "docs/examples/default-security-context/rabbitmq.yaml", "RABBITMQ_BRANCH": "patches", "RABBITMQ_COMMIT_HASH": "", "RABBITMQ_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml", "RABBITMQ_DEPL_IMG": "unused", "RABBITMQ_IMG": "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest", "RABBITMQ_REPO": "https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git", "REDHAT_OPERATORS": "false", "REDIS": "config/samples/redis_v1beta1_redis.yaml", "REDIS_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml", "REDIS_DEPL_IMG": "unused", "RH_REGISTRY_PWD": "", "RH_REGISTRY_USER": "", "SECRET": "osp-secret", "SG_CORE_DEPL_IMG": "unused", "STANDALONE_COMPUTE_DRIVER": "libvirt", 
"STANDALONE_EXTERNAL_NET_PREFFIX": "172.21.0", "STANDALONE_INTERNALAPI_NET_PREFIX": "172.17.0", "STANDALONE_STORAGEMGMT_NET_PREFIX": "172.20.0", "STANDALONE_STORAGE_NET_PREFIX": "172.18.0", "STANDALONE_TENANT_NET_PREFIX": "172.19.0", "STORAGEMGMT_HOST_ROUTES": "", "STORAGE_CLASS": "local-storage", "STORAGE_HOST_ROUTES": "", "SWIFT": "config/samples/swift_v1beta1_swift.yaml", "SWIFT_BRANCH": "main", "SWIFT_COMMIT_HASH": "", "SWIFT_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml", "SWIFT_IMG": "quay.io/openstack-k8s-operators/swift-operator-index:latest", "SWIFT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml", "SWIFT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests", "SWIFT_KUTTL_NAMESPACE": "swift-kuttl-tests", "SWIFT_REPO": "https://github.com/openstack-k8s-operators/swift-operator.git", "TELEMETRY": "config/samples/telemetry_v1beta1_telemetry.yaml", "TELEMETRY_BRANCH": "main", "TELEMETRY_COMMIT_HASH": "", "TELEMETRY_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml", "TELEMETRY_IMG": "quay.io/openstack-k8s-operators/telemetry-operator-index:latest", "TELEMETRY_KUTTL_BASEDIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator", "TELEMETRY_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml", "TELEMETRY_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites", "TELEMETRY_KUTTL_NAMESPACE": "telemetry-kuttl-tests", "TELEMETRY_KUTTL_RELPATH": "test/kuttl/suites", "TELEMETRY_REPO": "https://github.com/openstack-k8s-operators/telemetry-operator.git", "TENANT_HOST_ROUTES": "", "TIMEOUT": "300s", "TLS_ENABLED": "false", "WATCHER_BRANCH": "", "WATCHER_REPO": 
"/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator", "tripleo_deploy": "export REGISTRY_PWD:" }, "cifmw_install_yamls_environment": { "BMO_SETUP": false, "CHECKOUT_FROM_OPENSTACK_REF": "true", "INSTALL_CERT_MANAGER": false, "KUBECONFIG": "/home/zuul/.crc/machines/crc/kubeconfig", "OPENSTACK_K8S_BRANCH": "main", "OUT": "/home/zuul/ci-framework-data/artifacts/manifests", "OUTPUT_DIR": "/home/zuul/ci-framework-data/artifacts/edpm", "WATCHER_BRANCH": "", "WATCHER_REPO": "/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator" }, "cifmw_openshift_api": "https://api.crc.testing:6443", "cifmw_openshift_context": "default/api-crc-testing:6443/kubeadmin", "cifmw_openshift_kubeconfig": "/home/zuul/.crc/machines/crc/kubeconfig", "cifmw_openshift_login_api": "https://api.crc.testing:6443", "cifmw_openshift_login_cert_login": false, "cifmw_openshift_login_context": "default/api-crc-testing:6443/kubeadmin", "cifmw_openshift_login_kubeconfig": "/home/zuul/.crc/machines/crc/kubeconfig", "cifmw_openshift_login_password": 123456789, "cifmw_openshift_login_token": "sha256~aCk9SoMxAKzQn_kYPvy1gW_KvX_MhVbp0_pM2TnfuyE", "cifmw_openshift_login_user": "kubeadmin", "cifmw_openshift_token": "sha256~aCk9SoMxAKzQn_kYPvy1gW_KvX_MhVbp0_pM2TnfuyE", "cifmw_openshift_user": "kubeadmin", "cifmw_path": "/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin", "cifmw_repo_setup_commit_hash": null, "cifmw_repo_setup_distro_hash": null, "cifmw_repo_setup_dlrn_api_url": "https://trunk.rdoproject.org/api-centos9-antelope", "cifmw_repo_setup_dlrn_url": "https://trunk.rdoproject.org/centos9-antelope/current-podified/delorean.repo.md5", "cifmw_repo_setup_extended_hash": null, "cifmw_repo_setup_full_hash": "c3923531bcda0b0811b2d5053f189beb", "cifmw_repo_setup_release": "antelope", "discovered_interpreter_python": "/usr/bin/python3", "gather_subset": [ "all" ], "module_setup": true 
}
home/zuul/zuul-output/logs/selinux-listing.log:
/home/zuul/ci-framework-data: total 8
drwxr-xr-x. 10 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Jan 20 10:59 artifacts
drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Jan 20 10:58 logs
drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 24 Jan 20 10:54 tmp
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 6 Jan 20 10:55 volumes

/home/zuul/ci-framework-data/artifacts: total 876
drwxrwxrwx. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Jan 20 10:59 ansible_facts.2026-01-20_10-59
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 20903 Jan 20 10:57 ansible-facts.yml
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 663400 Jan 20 10:58 ansible-vars.yml
drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 33 Jan 20 10:58 ci-env
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 135 Jan 20 10:58 ci_script_000_check_for_oc.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 239 Jan 20 10:58 ci_script_000_copy_logs_from_crc.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 659 Jan 20 10:58 ci_script_000_prepare_root_ssh.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 905 Jan 20 10:55 ci_script_000_run_hook_without_retry.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 745 Jan 20 10:58 ci_script_000_run_openstack_must_gather.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 213 Jan 20 10:56 ci_script_001_fetch_openshift.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1068 Jan 20 10:57 ci_script_002_run_hook_without_retry_fetch.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1057 Jan 20 10:57 ci_script_003_run_hook_without_retry_80.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1076 Jan 20 10:57 ci_script_004_run_hook_without_retry_create.sh
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 159 Jan 20 10:57 hosts
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 77914 Jan 20 10:58 installed-packages.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 362 Jan 20 10:55 install_yamls.sh
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1644 Jan 20 10:57 ip-network.txt
drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 65 Jan 20 10:57 manifests
drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 70 Jan 20 10:58 NetworkManager
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 120 Jan 20 10:58 parameters
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 345 Jan 20 10:57 post_infra_fetch_nodes_facts_and_save_the.yml
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Jan 20 10:58 repositories
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 106 Jan 20 10:57 resolv.conf
drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Jan 20 10:55 roles
drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 4096 Jan 20 10:58 yum_repos
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 60029 Jan 20 10:58 zuul_inventory.yml

/home/zuul/ci-framework-data/artifacts/ansible_facts.2026-01-20_10-59: total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 57 Jan 20 10:59 ansible_facts_cache

/home/zuul/ci-framework-data/artifacts/ansible_facts.2026-01-20_10-59/ansible_facts_cache: total 108
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23435 Jan 20 10:59 compute-0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23435 Jan 20 10:59 compute-1
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 57566 Jan 20 10:59 localhost

/home/zuul/ci-framework-data/artifacts/ci-env: total 4
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 2806 Jan 20 10:57 networking-info.yml

/home/zuul/ci-framework-data/artifacts/manifests: total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 39 Jan 20 10:58 cert-manager
drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 43 Jan 20 10:57 kustomizations
drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 16 Jan 20 10:55 openstack

/home/zuul/ci-framework-data/artifacts/manifests/cert-manager: total 988
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1007914 Jan 20 10:58 cert_manager_manifest.yml

/home/zuul/ci-framework-data/artifacts/manifests/kustomizations: total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 72 Jan 20 10:58 controlplane
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 35 Jan 20 10:58 dataplane

/home/zuul/ci-framework-data/artifacts/manifests/kustomizations/controlplane: total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 305 Jan 20 10:57 80-horizon-kustomization.yaml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 402 Jan 20 10:57 99-kustomization.yaml

/home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane: total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4003 Jan 20 10:57 99-kustomization.yaml

/home/zuul/ci-framework-data/artifacts/manifests/openstack: total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 6 Jan 20 10:55 cr

/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr: total 0

/home/zuul/ci-framework-data/artifacts/NetworkManager: total 8
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 331 Jan 20 10:57 ci-private-network.nmconnection
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 178 Jan 20 10:57 ens3.nmconnection

/home/zuul/ci-framework-data/artifacts/parameters: total 68
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 9283 Jan 20 10:58 custom-params.yml
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 28303 Jan 20 10:58 install-yamls-params.yml
-rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 280 Jan 20 10:56 openshift-login-params.yml -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 20662 Jan 20 10:58 zuul-params.yml /home/zuul/ci-framework-data/artifacts/repositories: total 32 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1658 Jan 20 10:54 delorean-antelope-testing.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5901 Jan 20 10:54 delorean.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Jan 20 10:54 delorean.repo.md5 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 206 Jan 20 10:54 repo-setup-centos-appstream.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 196 Jan 20 10:54 repo-setup-centos-baseos.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 226 Jan 20 10:54 repo-setup-centos-highavailability.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 201 Jan 20 10:54 repo-setup-centos-powertools.repo /home/zuul/ci-framework-data/artifacts/roles: total 0 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:55 install_yamls_makes /home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes: total 20 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 16384 Jan 20 10:58 tasks /home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks: total 1256 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 790 Jan 20 10:55 make_all.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_ansibleee_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Jan 20 10:55 make_ansibleee_kuttl_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_ansibleee_kuttl_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_ansibleee_kuttl_run.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_ansibleee_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_ansibleee_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_ansibleee.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Jan 20 10:55 make_attach_default_interface_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Jan 20 10:55 make_attach_default_interface.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_barbican_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Jan 20 10:55 make_barbican_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_barbican_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_barbican_deploy_validate.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_barbican_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_barbican_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_barbican_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_barbican_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Jan 20 10:55 make_barbican.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_baremetal_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_baremetal_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_baremetal.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1219 Jan 20 10:55 make_bmaas_baremetal_net_nad_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1099 Jan 20 10:55 make_bmaas_baremetal_net_nad.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Jan 20 10:55 make_bmaas_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Jan 20 10:55 make_bmaas_crc_attach_network_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Jan 20 10:55 make_bmaas_crc_attach_network.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1264 Jan 20 10:55 make_bmaas_crc_baremetal_bridge_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1144 Jan 20 10:55 make_bmaas_crc_baremetal_bridge.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Jan 20 10:55 make_bmaas_generate_nodes_yaml.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Jan 20 10:55 make_bmaas_metallb_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Jan 20 10:55 make_bmaas_metallb.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Jan 20 10:55 make_bmaas_network_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Jan 20 10:55 make_bmaas_network.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1444 Jan 20 10:55 make_bmaas_route_crc_and_crc_bmaas_networks_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1324 Jan 20 10:55 make_bmaas_route_crc_and_crc_bmaas_networks.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1174 Jan 20 10:55 make_bmaas_sushy_emulator_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Jan 20 10:55 make_bmaas_sushy_emulator_wait.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Jan 20 10:55 make_bmaas_sushy_emulator.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Jan 20 10:55 make_bmaas_virtual_bms_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Jan 20 10:55 make_bmaas_virtual_bms.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 829 Jan 20 10:55 make_bmaas.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_ceph_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_ceph_help.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Jan 20 10:55 make_ceph.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_certmanager_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_certmanager.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Jan 20 10:55 make_cifmw_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Jan 20 10:55 make_cifmw_prepare.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_cinder_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_cinder_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_cinder_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_cinder_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_cinder_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_cinder_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_cinder_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Jan 20 10:55 make_cinder.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Jan 20 10:55 make_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1294 Jan 20 10:55 make_crc_attach_default_interface_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1174 Jan 20 10:55 make_crc_attach_default_interface.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_crc_bmo_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_crc_bmo_setup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 919 Jan 20 10:55 make_crc_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 889 Jan 20 10:55 make_crc_scrub.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1225 Jan 20 10:55 make_crc_storage_cleanup_with_retries.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_crc_storage_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_crc_storage_release.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_crc_storage_with_retries.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_crc_storage.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 799 Jan 20 10:55 make_crc.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_designate_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_designate_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_designate_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_designate_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_designate_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_designate_kuttl.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_designate_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_designate.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_dns_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_dns_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Jan 20 10:55 make_dns_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Jan 20 10:55 make_download_tools.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1039 Jan 20 10:55 make_edpm_ansible_runner.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1084 Jan 20 10:55 make_edpm_baremetal_compute.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Jan 20 10:55 make_edpm_compute_bootc.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Jan 20 10:55 make_edpm_compute_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Jan 20 10:55 make_edpm_compute_repos.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Jan 20 10:55 make_edpm_computes_bgp.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 934 Jan 20 10:55 make_edpm_compute.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Jan 20 10:55 make_edpm_deploy_baremetal_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_edpm_deploy_baremetal.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_edpm_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1120 Jan 20 10:55 make_edpm_deploy_generate_keys.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Jan 20 10:55 make_edpm_deploy_instance.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1180 Jan 20 10:55 make_edpm_deploy_networker_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Jan 20 10:55 make_edpm_deploy_networker_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_edpm_deploy_networker.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_edpm_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_edpm_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1084 Jan 20 10:55 make_edpm_networker_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Jan 20 10:55 make_edpm_networker.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_edpm_nova_discover_hosts.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1210 Jan 20 10:55 make_edpm_patch_ansible_runner_image.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_edpm_register_dns.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Jan 20 10:55 make_edpm_wait_deploy_baremetal.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_edpm_wait_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_glance_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_glance_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_glance_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_glance_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_glance_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_glance_kuttl.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_glance_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Jan 20 10:55 make_glance.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_heat_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_heat_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_heat_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_heat_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_heat_kuttl_crc.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_heat_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Jan 20 10:55 make_heat_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_heat_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Jan 20 10:55 make_heat.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 814 Jan 20 10:55 make_help.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_horizon_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Jan 20 10:55 make_horizon_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_horizon_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_horizon_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_horizon_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_horizon_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_horizon_prep.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Jan 20 10:55 make_horizon.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_infra_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_infra_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_infra_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Jan 20 10:55 make_infra_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Jan 20 10:55 make_infra.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_input_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Jan 20 10:55 make_input.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 994 Jan 20 10:55 make_ipv6_lab_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1189 Jan 20 10:55 make_ipv6_lab_nat64_router_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Jan 20 10:55 make_ipv6_lab_nat64_router.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Jan 20 10:55 make_ipv6_lab_network_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 994 Jan 20 10:55 make_ipv6_lab_network.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Jan 20 10:55 make_ipv6_lab_sno_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 934 Jan 20 10:55 make_ipv6_lab_sno.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 874 Jan 20 10:55 make_ipv6_lab.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_ironic_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_ironic_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_ironic_deploy_prep.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_ironic_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_ironic_kuttl_crc.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_ironic_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_ironic_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_ironic_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Jan 20 10:55 make_ironic.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_keystone_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Jan 20 10:55 make_keystone_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_keystone_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_keystone_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_keystone_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_keystone_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_keystone_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Jan 20 10:55 make_keystone.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_kuttl_common_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_kuttl_common_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_kuttl_db_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_kuttl_db_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_loki_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_loki_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_loki_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Jan 20 10:55 make_loki.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Jan 20 10:55 make_lvms.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_manila_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_manila_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_manila_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_manila_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_manila_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_manila_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_manila_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Jan 20 10:55 make_manila.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_mariadb_chainsaw_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_mariadb_chainsaw.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_mariadb_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Jan 20 10:55 make_mariadb_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_mariadb_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_mariadb_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_mariadb_kuttl_run.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_mariadb_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Jan 20 10:55 make_mariadb.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_memcached_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_memcached_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_memcached_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_metallb_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Jan 20 10:55 make_metallb_config_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_metallb_config.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Jan 20 10:55 make_metallb.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_namespace_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_namespace.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_netattach_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_netattach.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_netconfig_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_netconfig_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_netconfig_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_netobserv_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_netobserv_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_netobserv_deploy.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_netobserv.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Jan 20 10:55 make_network_isolation_bridge_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Jan 20 10:55 make_network_isolation_bridge.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_neutron_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Jan 20 10:55 make_neutron_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_neutron_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_neutron_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_neutron_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_neutron_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_neutron_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Jan 20 10:55 make_neutron.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 919 Jan 20 10:55 make_nfs_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 799 Jan 20 10:55 make_nfs.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Jan 20 10:55 make_nmstate.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_nncp_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Jan 20 10:55 make_nncp.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_nova_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_nova_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_nova_deploy_prep.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_nova_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_nova_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Jan 20 10:55 make_nova.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_octavia_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Jan 20 10:55 make_octavia_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_octavia_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_octavia_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_octavia_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_octavia_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_octavia_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Jan 20 10:55 make_octavia.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_openstack_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Jan 20 10:55 make_openstack_crds_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_openstack_crds.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_openstack_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_openstack_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_openstack_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_openstack_init.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_openstack_kuttl_run.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_openstack_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Jan 20 10:55 make_openstack_patch_version.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_openstack_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_openstack_repo.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_openstack_update_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_openstack_wait_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_openstack_wait.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_openstack.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_operator_namespace.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_ovn_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Jan 20 10:55 make_ovn_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_ovn_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Jan 20 10:55 make_ovn_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_ovn_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_ovn_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Jan 20 10:55 make_ovn_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 790 Jan 20 10:55 make_ovn.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_placement_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_placement_deploy_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_placement_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_placement_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_placement_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_placement_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_placement_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_placement.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_rabbitmq_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Jan 20 10:55 make_rabbitmq_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_rabbitmq_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_rabbitmq_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_rabbitmq_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Jan 20 10:55 make_rabbitmq.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_redis_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_redis_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_redis_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_rook_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_rook_crc_disk.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_rook_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_rook_deploy.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_rook_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Jan 20 10:55 make_rook.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Jan 20 10:55 make_set_slower_etcd_profile.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Jan 20 10:55 make_standalone_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Jan 20 10:55 make_standalone_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Jan 20 10:55 make_standalone_revert.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1039 Jan 20 10:55 make_standalone_snapshot.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 979 Jan 20 10:55 make_standalone_sync.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 904 Jan 20 10:55 make_standalone.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_swift_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_swift_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_swift_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Jan 20 10:55 make_swift_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_swift_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Jan 20 10:55 make_swift_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Jan 20 10:55 make_swift_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Jan 20 10:55 make_swift.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Jan 20 10:55 make_telemetry_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Jan 20 10:55 make_telemetry_deploy_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Jan 20 10:55 make_telemetry_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Jan 20 10:55 make_telemetry_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Jan 20 10:55 make_telemetry_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_telemetry_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Jan 20 10:55 make_telemetry_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Jan 20 10:55 make_telemetry.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Jan 20 10:55 make_tripleo_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Jan 20 10:55 make_update_services.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Jan 20 10:55 make_update_system.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Jan 20 10:55 make_validate_marketplace.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Jan 20 10:55 make_wait.yml /home/zuul/ci-framework-data/artifacts/yum_repos: total 32 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1658 Jan 20 10:57 delorean-antelope-testing.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 5901 Jan 20 10:57 delorean.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 33 Jan 20 10:57 delorean.repo.md5 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 206 Jan 20 10:57 repo-setup-centos-appstream.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 196 Jan 20 10:57 repo-setup-centos-baseos.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 226 Jan 20 10:57 repo-setup-centos-highavailability.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 201 Jan 20 10:57 repo-setup-centos-powertools.repo /home/zuul/ci-framework-data/logs: total 456 drwxrwxr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 25 Jan 20 10:59 2026-01-20_10-57 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 200959 Jan 20 10:58 ansible.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14927 Jan 20 10:58 ci_script_000_copy_logs_from_crc.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 154305 Jan 20 10:58 ci_script_000_prepare_root_ssh.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9953 Jan 20 10:56 ci_script_000_run_hook_without_retry.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2476 Jan 20 10:58 ci_script_000_run_openstack_must_gather.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 234 Jan 20 10:56 ci_script_001_fetch_openshift.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14057 Jan 20 10:57 ci_script_002_run_hook_without_retry_fetch.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2143 Jan 20 10:57 ci_script_003_run_hook_without_retry_80.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3226 Jan 20 10:57 ci_script_004_run_hook_without_retry_create.log drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 crc drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 72 Jan 20 10:58 openstack-must-gather -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 19192 Jan 20 10:57 post_infra_fetch_nodes_facts_and_save_the.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3261 Jan 20 10:57 pre_deploy_80_kustomize_openstack_cr.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3953 Jan 20 10:57 pre_deploy_create_coo_subscription.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 15848 Jan 20 10:56 pre_infra_download_needed_tools.log /home/zuul/ci-framework-data/logs/2026-01-20_10-57: total 200 -rw-rw-rw-. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 201001 Jan 20 10:57 ansible.log /home/zuul/ci-framework-data/logs/crc: total 0 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 18 Jan 20 10:58 crc-logs-artifacts /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts: total 16 drwxr-xr-x. 86 zuul zuul unconfined_u:object_r:user_home_t:s0 12288 Jan 20 10:58 pods /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods: total 16 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Jan 20 10:58 cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Jan 20 10:58 cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 34 Jan 20 10:58 cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 108 Jan 20 10:58 hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787 drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 105 Jan 20 10:58 openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 42 Jan 20 10:58 openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Jan 20 10:58 openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Jan 20 10:58 openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e drwxr-xr-x. 
4 zuul zuul unconfined_u:object_r:user_home_t:s0 64 Jan 20 10:58 openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Jan 20 10:58 openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Jan 20 10:58 openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Jan 20 10:58 openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 21 Jan 20 10:58 openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Jan 20 10:58 openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 39 Jan 20 10:58 openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Jan 20 10:58 openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 51 Jan 20 10:58 openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 40 Jan 20 10:58 openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7 drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 31 Jan 20 10:58 openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 49 Jan 20 10:58 openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb drwxr-xr-x. 9 zuul zuul unconfined_u:object_r:user_home_t:s0 140 Jan 20 10:58 openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 27 Jan 20 10:58 openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 22 Jan 20 10:58 openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 21 Jan 20 10:58 openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Jan 20 10:58 openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 53 Jan 20 10:58 openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Jan 20 10:58 openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Jan 20 10:58 openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Jan 20 10:58 openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Jan 20 10:58 openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90 drwxr-xr-x. 8 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Jan 20 10:58 openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Jan 20 10:58 openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Jan 20 10:58 openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Jan 20 10:58 openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Jan 20 10:58 openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Jan 20 10:58 openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 164 Jan 20 10:58 openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 46 Jan 20 10:58 openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Jan 20 10:58 openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Jan 20 10:58 openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Jan 20 10:58 openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Jan 20 10:58 openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Jan 20 10:58 openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Jan 20 10:58 openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 130 Jan 20 10:58 openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 47 Jan 20 10:58 openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 22 Jan 20 10:58 openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 52 Jan 20 10:58 openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 48 Jan 20 10:58 openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 57 Jan 20 10:58 openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 47 Jan 20 10:58 openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8 drwxr-xr-x. 
4 zuul zuul unconfined_u:object_r:user_home_t:s0 62 Jan 20 10:58 openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Jan 20 10:58 openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Jan 20 10:58 openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 35 Jan 20 10:58 openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Jan 20 10:58 openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Jan 20 10:58 openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 34 Jan 20 10:58 openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Jan 20 10:58 openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Jan 20 10:58 openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e drwxr-xr-x. 9 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Jan 20 10:58 openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 64 Jan 20 10:58 openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 25 Jan 20 10:58 openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 59 Jan 20 10:58 openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Jan 20 10:58 openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 44 Jan 20 10:58 openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Jan 20 10:58 openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Jan 20 10:58 openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Jan 20 10:58 openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Jan 20 10:58 openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Jan 20 10:58 openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Jan 20 10:58 openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Jan 20 10:58 openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Jan 20 10:58 openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 26 Jan 20 10:58 openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 27 Jan 20 10:58 openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 59 Jan 20 10:58 openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Jan 20 10:58 openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3 drwxr-xr-x. 11 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Jan 20 10:58 openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Jan 20 10:58 openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Jan 20 10:58 openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 35 Jan 20 10:58 openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501 /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 cert-manager-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-758df9885c-cq6zm_f12a256b-7128-4680-8f54-8e40a3e56300/cert-manager-controller: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 15883 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9874 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 cert-manager-cainjector /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-cainjector-676dd9bd64-mggnx_c229f43c-9d3a-4848-a9da-d997459f440b/cert-manager-cainjector: total 36 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16471 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12298 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 cert-manager-webhook /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/cert-manager_cert-manager-webhook-855f577f79-7bdxq_e7870154-de6e-4216-81fb-b87e7502c412/cert-manager-webhook: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4625 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 csi-provisioner drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 hostpath-provisioner drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 liveness-probe drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 node-driver-registrar /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner: total 240 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 180933 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 60194 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner: total 84 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 65035 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17107 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 396 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 396 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1533 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1533 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 fix-audit-permissions drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 openshift-apiserver drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 openshift-apiserver-check-endpoints /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver: total 192 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 93408 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 98939 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints: total 48 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23827 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23569 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 openshift-apiserver-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator: total 356 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 84973 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 214174 Jan 20 10:58 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 59897 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 oauth-openshift /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift: total 52 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23257 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 25256 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 authentication-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator: total 904 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 217484 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 491271 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 210349 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 machine-approver-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7732 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7599 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 10792 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7598 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5271 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 83 Jan 20 10:58 2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 cluster-samples-operator drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 cluster-samples-operator-watch -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 d9aeaa1aa1d02c7e8201fbb13a3ee252fd99aa6b0819f3318aaa2bd88982712e.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator: total 320 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 30933 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 112225 Jan 20 10:58 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 177988 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4037 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 664 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 cluster-version-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator: total 12332 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11627525 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 995512 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 openshift-api drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 openshift-config-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator: total 88 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 33371 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31684 Jan 20 10:58 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16967 Jan 20 10:58 3.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 console /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1042 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1043 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 download-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server: total 68 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 75 Jan 20 10:58 5.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 51603 Jan 20 10:58 6.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 11526 Jan 20 10:58 7.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 conversion-webhook-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 909 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 571 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 console-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator: total 1004 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 227835 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 769174 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27836 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 controller-manager /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager: total 276 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 28909 Jan 20 10:58 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 249420 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 openshift-controller-manager-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator: total 376 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 172802 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 38969 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 167419 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 dns drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns: total 608 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 614882 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2301 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Jan 20 10:58 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 dns-node-resolver /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 96 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 dns-operator drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 13994 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11549 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 etcd drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 etcdctl drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 etcd-ensure-env-vars drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 etcd-metrics drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 etcd-readyz drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 etcd-resources-copy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 setup /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd: total 8608 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 8727966 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 57419 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23803 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics: total 64 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 20593 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17958 Jan 20 10:58 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 17961 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz: total 12 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 518 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1184 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 240 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup: total 12 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 etcd-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator: total 512 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 100273 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 296671 Jan 20 10:58 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 119903 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 cluster-image-registry-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator: total 92 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 46647 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23357 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16479 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 registry /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-ln84v_7fb90a11-2a7b-4fba-8ce3-60d4d14cdf76/registry: total 20 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 18595 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 node-ca /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca: total 40 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31683 Jan 20 10:58 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 6688 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 serve-healthcheck-canary /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2922 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 602 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Jan 20 10:58 ingress-operator drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator: total 156 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 48554 Jan 20 10:58 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 29309 Jan 20 10:58 3.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 45393 Jan 20 10:58 4.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 24998 Jan 20 10:58 5.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Jan 20 10:58 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Jan 20 10:58 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Jan 20 10:58 router /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router: total 148 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 52224 Jan 20 10:58 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 37860 Jan 20 10:58 3.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 51450 Jan 20 10:58 4.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3349 Jan 20 10:58 5.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer: total 60 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59795 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_9387c79a-cd5b-4d24-a558-6dbbdd89fe1e/installer: total 60 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59637 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer: total 60 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59909 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 kube-apiserver drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 kube-apiserver-cert-regeneration-controller drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 kube-apiserver-cert-syncer drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 kube-apiserver-check-endpoints drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 kube-apiserver-insecure-readyz drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 setup /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver: total 324 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 329426 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller: total 12 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11373 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer: total 4 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1648 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 15541 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 116 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 265 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 6 Jan 20 10:58 startup-monitor /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor: total 0 /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-apiserver-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator: total 804 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 222883 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 475401 Jan 20 10:58 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 117785 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer: total 20 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 19721 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer: total 44 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43448 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer: total 44 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43448 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 cluster-policy-controller drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Jan 20 10:58 kube-controller-manager drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-controller-manager-cert-syncer drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-controller-manager-recovery-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller: total 292 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 132818 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2076 Jan 20 10:58 6.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 155784 Jan 20 10:58 7.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager: total 1496 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 737559 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 64407 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 679184 Jan 20 10:58 3.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 44792 Jan 20 10:58 4.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer: total 32 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6726 Jan 20 10:58 0.log -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 5026 Jan 20 10:58 1.log -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 12412 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller: total 40 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11354 Jan 20 10:58 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 4901 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 20284 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-controller-manager-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator: total 588 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 193188 Jan 20 10:58 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 324520 Jan 20 10:58 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 73965 Jan 20 10:58 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 pruner /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2031 Jan 20 10:58 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 pruner /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner: total 4 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1976 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 pruner

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1917 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 pruner

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1973 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer:
total 28
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27628 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer:
total 28
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27628 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-scheduler
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-scheduler-cert-syncer
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-scheduler-recovery-controller
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 wait-for-host-port

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler:
total 260
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59236 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 148903 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 50621 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer:
total 28
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5142 Jan 20 10:58 0.log
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 4837 Jan 20 10:58 1.log
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 9409 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller:
total 24
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6351 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5919 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5593 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port:
total 12
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-scheduler-operator-container

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container:
total 276
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 78087 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 122585 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 73869 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 migrator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2038 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1875 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-storage-version-migrator-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator:
total 112
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 39921 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 34528 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 34762 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 control-plane-machine-set-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator:
total 52
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9654 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 22065 Jan 20 10:58 2.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 13139 Jan 20 10:58 3.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 machine-api-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy:
total 16
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7732 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7599 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator:
total 44
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12595 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12999 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 10649 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 kube-rbac-proxy-crio
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 setup

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio:
total 16
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5045 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1509 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2192 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup:
total 12
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 machine-config-controller

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller:
total 216
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 136260 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 79699 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Jan 20 10:58 machine-config-daemon

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon:
total 140
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 15339 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 79965 Jan 20 10:58 2.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 22128 Jan 20 10:58 5.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17534 Jan 20 10:58 6.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 machine-config-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator:
total 68
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16333 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 49270 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 machine-config-server

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server:
total 60
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 46063 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11634 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 extract-content
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 extract-utilities
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 registry-server

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-content:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/extract-utilities:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-mpjb7_1d5b65e7-a4c3-495a-a5b0-72caab7218fd/registry-server:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 extract-content
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 extract-utilities
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 registry-server

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-content:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/extract-utilities:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-6m4w2_bc228c8d-ec8b-45d8-a1a7-e4de2e5f87cd/registry-server:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 marketplace-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-nc8zc_5adb4a31-5991-4381-a1ea-f1b095a071ea/marketplace-operator:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7794 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 extract-content
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 extract-utilities
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 registry-server

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-content:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/extract-utilities:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-2mx7j_df6e4f33-df74-4326-b096-9d3e45a8c55a/registry-server:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 extract-content
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 extract-utilities
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 registry-server

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-content:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/extract-utilities:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-2nxg8_afcd1056-dc0e-4c35-93bd-1c388cd2028e/registry-server:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 bond-cni-plugin
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 cni-plugins
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 egress-router-binary-copy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-multus-additional-cni-plugins
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 routeoverride-cni
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 whereabouts-cni
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 whereabouts-cni-bincopy

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 392 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 392 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 404 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 404 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 414 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 414 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 80 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 80 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 408 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 408 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 multus-admission-controller

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1386 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1276 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Jan 20 10:58 kube-multus

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus:
total 556
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 890 Jan 20 10:58 4.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 391605 Jan 20 10:58 5.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 164882 Jan 20 10:58 7.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3764 Jan 20 10:58 8.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 network-metrics-daemon

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon:
total 68
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 40893 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27450 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 check-endpoints

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints:
total 20
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9955 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4976 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 network-check-target-container

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 61 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 61 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 approver
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 webhook

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver:
total 40
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12256 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14943 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11338 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook:
total 752
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 760212 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4141 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 iptables-alerter

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 381 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 120 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 network-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator:
total 1360
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411501 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 574827 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 398831 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 fix-audit-permissions
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 oauth-apiserver

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver:
total 172
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 127371 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43455 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 catalog-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator:
total 1904
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1508074 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 434522 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 collect-profiles

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 736 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 collect-profiles

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 736 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 collect-profiles

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29481765-pbh8m_835aa241-cfd8-4527-953a-e91ac4516103/collect-profiles:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 739 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 olm-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator:
total 72
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 39824 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 30477 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 packageserver

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver:
total 128
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 91140 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 34964 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 package-server-manager

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1187 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager:
total 24
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14404 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5821 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 ovnkube-cluster-manager

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1316 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1183 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager:
total 212
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 157417 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 56924 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 kubecfg-setup
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 kube-rbac-proxy-node
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 kube-rbac-proxy-ovn-metrics
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 nbdb
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 northd
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 ovn-acl-logging
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 ovn-controller
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 ovnkube-controller
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Jan 20 10:58 sbdb

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kubecfg-setup:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-node:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4680 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/kube-rbac-proxy-ovn-metrics:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4640 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/nbdb:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2415 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/northd:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5622 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-acl-logging:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5158 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovn-controller:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 8033 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/ovnkube-controller:
total 1916
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1961847 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-sdkgg_54d2ac09-0fc5-41b8-99f1-6e15d9191619/sbdb:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2347 Jan 20 10:58 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 route-controller-manager

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager:
total 60
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 25403 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31322 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Jan 20 10:58 service-ca-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator:
total 112
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 58729 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 26036 Jan 20 10:58 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23773 Jan 20 10:58 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Jan 20 10:58 service-ca-controller

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller:
total 96
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 61484 Jan 20 10:58 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31986 Jan 20 10:58 1.log

/home/zuul/ci-framework-data/logs/openstack-must-gather:
total 12
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3336 Jan 20 10:58 event-filter.html
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2304 Jan 20 10:58 must-gather.logs
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 110 Jan 20 10:58 timestamp

/home/zuul/ci-framework-data/tmp:
total 0

/home/zuul/ci-framework-data/volumes:
total 0
home/zuul/zuul-output/logs/README.html0000644000175000017500000000306615133660027016721 0ustar zuulzuul README for CIFMW Logs

Logs of interest

Generated content of interest

home/zuul/zuul-output/logs/installed-pkgs.log0000644000175000017500000004753215133660030020522 0ustar zuulzuulaardvark-dns-1.17.0-1.el9.x86_64 abattis-cantarell-fonts-0.301-4.el9.noarch acl-2.3.1-4.el9.x86_64 adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch alternatives-1.24-2.el9.x86_64 annobin-12.98-1.el9.x86_64 ansible-core-2.14.18-2.el9.x86_64 apr-1.7.0-12.el9.x86_64 apr-util-1.6.1-23.el9.x86_64 apr-util-bdb-1.6.1-23.el9.x86_64 apr-util-openssl-1.6.1-23.el9.x86_64 attr-2.5.1-3.el9.x86_64 audit-3.1.5-8.el9.x86_64 audit-libs-3.1.5-8.el9.x86_64 authselect-1.2.6-3.el9.x86_64 authselect-compat-1.2.6-3.el9.x86_64 authselect-libs-1.2.6-3.el9.x86_64 basesystem-11-13.el9.noarch bash-5.1.8-9.el9.x86_64 bash-completion-2.11-5.el9.noarch binutils-2.35.2-69.el9.x86_64 binutils-gold-2.35.2-69.el9.x86_64 buildah-1.41.3-1.el9.x86_64 bzip2-1.0.8-10.el9.x86_64 bzip2-libs-1.0.8-10.el9.x86_64 ca-certificates-2025.2.80_v9.0.305-91.el9.noarch c-ares-1.19.1-2.el9.x86_64 centos-gpg-keys-9.0-34.el9.noarch centos-logos-90.9-1.el9.x86_64 centos-stream-release-9.0-34.el9.noarch centos-stream-repos-9.0-34.el9.noarch checkpolicy-3.6-1.el9.x86_64 chrony-4.8-1.el9.x86_64 cloud-init-24.4-8.el9.noarch cloud-utils-growpart-0.33-1.el9.x86_64 cmake-filesystem-3.31.8-3.el9.x86_64 cockpit-bridge-348-1.el9.noarch cockpit-system-348-1.el9.noarch cockpit-ws-348-1.el9.x86_64 cockpit-ws-selinux-348-1.el9.x86_64 conmon-2.1.13-1.el9.x86_64 containers-common-1-134.el9.x86_64 containers-common-extra-1-134.el9.x86_64 container-selinux-2.244.0-1.el9.noarch coreutils-8.32-39.el9.x86_64 coreutils-common-8.32-39.el9.x86_64 cpio-2.13-16.el9.x86_64 cpp-11.5.0-14.el9.x86_64 cracklib-2.9.6-28.el9.x86_64 cracklib-dicts-2.9.6-28.el9.x86_64 createrepo_c-0.20.1-4.el9.x86_64 createrepo_c-libs-0.20.1-4.el9.x86_64 criu-3.19-3.el9.x86_64 criu-libs-3.19-3.el9.x86_64 cronie-1.5.7-14.el9.x86_64 cronie-anacron-1.5.7-14.el9.x86_64 crontabs-1.11-26.20190603git.el9.noarch crun-1.24-1.el9.x86_64 
crypto-policies-20251126-1.gite9c4db2.el9.noarch crypto-policies-scripts-20251126-1.gite9c4db2.el9.noarch cryptsetup-libs-2.8.1-2.el9.x86_64 curl-7.76.1-38.el9.x86_64 cyrus-sasl-2.1.27-21.el9.x86_64 cyrus-sasl-devel-2.1.27-21.el9.x86_64 cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 cyrus-sasl-lib-2.1.27-21.el9.x86_64 dbus-1.12.20-8.el9.x86_64 dbus-broker-28-7.el9.x86_64 dbus-common-1.12.20-8.el9.noarch dbus-libs-1.12.20-8.el9.x86_64 dbus-tools-1.12.20-8.el9.x86_64 debugedit-5.0-11.el9.x86_64 dejavu-sans-fonts-2.37-18.el9.noarch desktop-file-utils-0.26-6.el9.x86_64 device-mapper-1.02.206-2.el9.x86_64 device-mapper-libs-1.02.206-2.el9.x86_64 dhcp-client-4.4.2-19.b1.el9.x86_64 dhcp-common-4.4.2-19.b1.el9.noarch diffutils-3.7-12.el9.x86_64 dnf-4.14.0-31.el9.noarch dnf-data-4.14.0-31.el9.noarch dnf-plugins-core-4.3.0-25.el9.noarch dracut-057-102.git20250818.el9.x86_64 dracut-config-generic-057-102.git20250818.el9.x86_64 dracut-network-057-102.git20250818.el9.x86_64 dracut-squash-057-102.git20250818.el9.x86_64 dwz-0.16-1.el9.x86_64 e2fsprogs-1.46.5-8.el9.x86_64 e2fsprogs-libs-1.46.5-8.el9.x86_64 ed-1.14.2-12.el9.x86_64 efi-srpm-macros-6-4.el9.noarch elfutils-0.194-1.el9.x86_64 elfutils-debuginfod-client-0.194-1.el9.x86_64 elfutils-default-yama-scope-0.194-1.el9.noarch elfutils-libelf-0.194-1.el9.x86_64 elfutils-libs-0.194-1.el9.x86_64 emacs-filesystem-27.2-18.el9.noarch enchant-1.6.0-30.el9.x86_64 ethtool-6.15-2.el9.x86_64 expat-2.5.0-6.el9.x86_64 expect-5.45.4-16.el9.x86_64 file-5.39-16.el9.x86_64 file-libs-5.39-16.el9.x86_64 filesystem-3.16-5.el9.x86_64 findutils-4.8.0-7.el9.x86_64 fonts-filesystem-2.0.5-7.el9.1.noarch fonts-srpm-macros-2.0.5-7.el9.1.noarch fuse3-3.10.2-9.el9.x86_64 fuse3-libs-3.10.2-9.el9.x86_64 fuse-common-3.10.2-9.el9.x86_64 fuse-libs-2.9.9-17.el9.x86_64 fuse-overlayfs-1.16-1.el9.x86_64 gawk-5.1.0-6.el9.x86_64 gawk-all-langpacks-5.1.0-6.el9.x86_64 gcc-11.5.0-14.el9.x86_64 gcc-c++-11.5.0-14.el9.x86_64 gcc-plugin-annobin-11.5.0-14.el9.x86_64 
gdb-minimal-16.3-2.el9.x86_64 gdbm-libs-1.23-1.el9.x86_64 gdisk-1.0.7-5.el9.x86_64 gdk-pixbuf2-2.42.6-6.el9.x86_64 geolite2-city-20191217-6.el9.noarch geolite2-country-20191217-6.el9.noarch gettext-0.21-8.el9.x86_64 gettext-libs-0.21-8.el9.x86_64 ghc-srpm-macros-1.5.0-6.el9.noarch git-2.47.3-1.el9.x86_64 git-core-2.47.3-1.el9.x86_64 git-core-doc-2.47.3-1.el9.noarch glib2-2.68.4-18.el9.x86_64 glibc-2.34-245.el9.x86_64 glibc-common-2.34-245.el9.x86_64 glibc-devel-2.34-245.el9.x86_64 glibc-gconv-extra-2.34-245.el9.x86_64 glibc-headers-2.34-245.el9.x86_64 glibc-langpack-en-2.34-245.el9.x86_64 glib-networking-2.68.3-3.el9.x86_64 gmp-6.2.0-13.el9.x86_64 gnupg2-2.3.3-5.el9.x86_64 gnutls-3.8.10-2.el9.x86_64 gobject-introspection-1.68.0-11.el9.x86_64 go-srpm-macros-3.8.1-1.el9.noarch gpgme-1.15.1-6.el9.x86_64 gpg-pubkey-8483c65d-5ccc5b19 grep-3.6-5.el9.x86_64 groff-base-1.22.4-10.el9.x86_64 grub2-common-2.06-120.el9.noarch grub2-pc-2.06-120.el9.x86_64 grub2-pc-modules-2.06-120.el9.noarch grub2-tools-2.06-120.el9.x86_64 grub2-tools-minimal-2.06-120.el9.x86_64 grubby-8.40-69.el9.x86_64 gsettings-desktop-schemas-40.0-8.el9.x86_64 gssproxy-0.8.4-7.el9.x86_64 gzip-1.12-1.el9.x86_64 hostname-3.23-6.el9.x86_64 httpd-tools-2.4.62-10.el9.x86_64 hunspell-1.7.0-11.el9.x86_64 hunspell-en-GB-0.20140811.1-20.el9.noarch hunspell-en-US-0.20140811.1-20.el9.noarch hunspell-filesystem-1.7.0-11.el9.x86_64 hwdata-0.348-9.20.el9.noarch ima-evm-utils-1.6.2-2.el9.x86_64 info-6.7-15.el9.x86_64 inih-49-6.el9.x86_64 initscripts-rename-device-10.11.8-4.el9.x86_64 initscripts-service-10.11.8-4.el9.noarch ipcalc-1.0.0-5.el9.x86_64 iproute-6.17.0-1.el9.x86_64 iproute-tc-6.17.0-1.el9.x86_64 iptables-libs-1.8.10-11.el9.x86_64 iptables-nft-1.8.10-11.el9.x86_64 iptables-nft-services-1.8.10-11.el9.noarch iputils-20210202-15.el9.x86_64 irqbalance-1.9.4-5.el9.x86_64 jansson-2.14-1.el9.x86_64 jq-1.6-19.el9.x86_64 json-c-0.14-11.el9.x86_64 json-glib-1.6.6-1.el9.x86_64 kbd-2.4.0-11.el9.x86_64 
kbd-legacy-2.4.0-11.el9.noarch kbd-misc-2.4.0-11.el9.noarch kernel-5.14.0-661.el9.x86_64 kernel-core-5.14.0-661.el9.x86_64 kernel-headers-5.14.0-661.el9.x86_64 kernel-modules-5.14.0-661.el9.x86_64 kernel-modules-core-5.14.0-661.el9.x86_64 kernel-srpm-macros-1.0-14.el9.noarch kernel-tools-5.14.0-661.el9.x86_64 kernel-tools-libs-5.14.0-661.el9.x86_64 kexec-tools-2.0.29-14.el9.x86_64 keyutils-1.6.3-1.el9.x86_64 keyutils-libs-1.6.3-1.el9.x86_64 kmod-28-11.el9.x86_64 kmod-libs-28-11.el9.x86_64 kpartx-0.8.7-42.el9.x86_64 krb5-libs-1.21.1-8.el9.x86_64 langpacks-core-en_GB-3.0-16.el9.noarch langpacks-core-font-en-3.0-16.el9.noarch langpacks-en_GB-3.0-16.el9.noarch less-590-6.el9.x86_64 libacl-2.3.1-4.el9.x86_64 libappstream-glib-0.7.18-5.el9.x86_64 libarchive-3.5.3-6.el9.x86_64 libassuan-2.5.5-3.el9.x86_64 libattr-2.5.1-3.el9.x86_64 libbasicobjects-0.1.1-53.el9.x86_64 libblkid-2.37.4-21.el9.x86_64 libbpf-1.5.0-3.el9.x86_64 libbrotli-1.0.9-7.el9.x86_64 libburn-1.5.4-5.el9.x86_64 libcap-2.48-10.el9.x86_64 libcap-ng-0.8.2-7.el9.x86_64 libcbor-0.7.0-5.el9.x86_64 libcollection-0.7.0-53.el9.x86_64 libcom_err-1.46.5-8.el9.x86_64 libcomps-0.1.18-1.el9.x86_64 libcurl-7.76.1-38.el9.x86_64 libdaemon-0.14-23.el9.x86_64 libdb-5.3.28-57.el9.x86_64 libdhash-0.5.0-53.el9.x86_64 libdnf-0.69.0-16.el9.x86_64 libeconf-0.4.1-5.el9.x86_64 libedit-3.1-38.20210216cvs.el9.x86_64 libestr-0.1.11-4.el9.x86_64 libev-4.33-6.el9.x86_64 libevent-2.1.12-8.el9.x86_64 libfastjson-0.99.9-5.el9.x86_64 libfdisk-2.37.4-21.el9.x86_64 libffi-3.4.2-8.el9.x86_64 libffi-devel-3.4.2-8.el9.x86_64 libfido2-1.13.0-2.el9.x86_64 libgcc-11.5.0-14.el9.x86_64 libgcrypt-1.10.0-11.el9.x86_64 libgomp-11.5.0-14.el9.x86_64 libgpg-error-1.42-5.el9.x86_64 libgpg-error-devel-1.42-5.el9.x86_64 libibverbs-57.0-2.el9.x86_64 libicu-67.1-10.el9.x86_64 libidn2-2.3.0-7.el9.x86_64 libini_config-1.3.1-53.el9.x86_64 libisoburn-1.5.4-5.el9.x86_64 libisofs-1.5.4-4.el9.x86_64 libjpeg-turbo-2.0.90-7.el9.x86_64 libkcapi-1.4.0-2.el9.x86_64 
libkcapi-hmaccalc-1.4.0-2.el9.x86_64 libksba-1.5.1-7.el9.x86_64 libldb-4.23.4-2.el9.x86_64 libmaxminddb-1.5.2-4.el9.x86_64 libmnl-1.0.4-16.el9.x86_64 libmodulemd-2.13.0-2.el9.x86_64 libmount-2.37.4-21.el9.x86_64 libmpc-1.2.1-4.el9.x86_64 libndp-1.9-1.el9.x86_64 libnet-1.2-7.el9.x86_64 libnetfilter_conntrack-1.0.9-1.el9.x86_64 libnfnetlink-1.0.1-23.el9.x86_64 libnfsidmap-2.5.4-41.el9.x86_64 libnftnl-1.2.6-4.el9.x86_64 libnghttp2-1.43.0-6.el9.x86_64 libnl3-3.11.0-1.el9.x86_64 libnl3-cli-3.11.0-1.el9.x86_64 libosinfo-1.10.0-1.el9.x86_64 libpath_utils-0.2.1-53.el9.x86_64 libpcap-1.10.0-4.el9.x86_64 libpipeline-1.5.3-4.el9.x86_64 libpkgconf-1.7.3-10.el9.x86_64 libpng-1.6.37-12.el9.x86_64 libproxy-0.4.15-35.el9.x86_64 libproxy-webkitgtk4-0.4.15-35.el9.x86_64 libpsl-0.21.1-5.el9.x86_64 libpwquality-1.4.4-8.el9.x86_64 libref_array-0.1.5-53.el9.x86_64 librepo-1.19.0-1.el9.x86_64 libreport-filesystem-2.15.2-6.el9.noarch libseccomp-2.5.2-2.el9.x86_64 libselinux-3.6-3.el9.x86_64 libselinux-utils-3.6-3.el9.x86_64 libsemanage-3.6-5.el9.x86_64 libsepol-3.6-3.el9.x86_64 libsigsegv-2.13-4.el9.x86_64 libslirp-4.4.0-8.el9.x86_64 libsmartcols-2.37.4-21.el9.x86_64 libsolv-0.7.24-3.el9.x86_64 libsoup-2.72.0-10.el9.x86_64 libss-1.46.5-8.el9.x86_64 libssh-0.10.4-17.el9.x86_64 libssh-config-0.10.4-17.el9.noarch libsss_certmap-2.9.7-5.el9.x86_64 libsss_idmap-2.9.7-5.el9.x86_64 libsss_nss_idmap-2.9.7-5.el9.x86_64 libsss_sudo-2.9.7-5.el9.x86_64 libstdc++-11.5.0-14.el9.x86_64 libstdc++-devel-11.5.0-14.el9.x86_64 libstemmer-0-18.585svn.el9.x86_64 libsysfs-2.1.1-11.el9.x86_64 libtalloc-2.4.3-1.el9.x86_64 libtasn1-4.16.0-9.el9.x86_64 libtdb-1.4.14-1.el9.x86_64 libteam-1.31-16.el9.x86_64 libtevent-0.17.1-1.el9.x86_64 libtirpc-1.3.3-9.el9.x86_64 libtool-ltdl-2.4.6-46.el9.x86_64 libunistring-0.9.10-15.el9.x86_64 liburing-2.12-1.el9.x86_64 libuser-0.63-17.el9.x86_64 libutempter-1.2.1-6.el9.x86_64 libuuid-2.37.4-21.el9.x86_64 libverto-0.3.2-3.el9.x86_64 libverto-libev-0.3.2-3.el9.x86_64 
libvirt-client-11.10.0-2.el9.x86_64 libvirt-libs-11.10.0-2.el9.x86_64 libxcrypt-4.4.18-3.el9.x86_64 libxcrypt-compat-4.4.18-3.el9.x86_64 libxcrypt-devel-4.4.18-3.el9.x86_64 libxml2-2.9.13-14.el9.x86_64 libxml2-devel-2.9.13-14.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 libxslt-devel-1.1.34-12.el9.x86_64 libyaml-0.2.5-7.el9.x86_64 libzstd-1.5.5-1.el9.x86_64 llvm-filesystem-21.1.7-1.el9.x86_64 llvm-libs-21.1.7-1.el9.x86_64 lmdb-libs-0.9.29-3.el9.x86_64 logrotate-3.18.0-12.el9.x86_64 lshw-B.02.20-4.el9.x86_64 lsscsi-0.32-6.el9.x86_64 lua-libs-5.4.4-4.el9.x86_64 lua-srpm-macros-1-6.el9.noarch lz4-libs-1.9.3-5.el9.x86_64 lzo-2.10-7.el9.x86_64 make-4.3-8.el9.x86_64 man-db-2.9.3-9.el9.x86_64 microcode_ctl-20251111-1.el9.noarch mpfr-4.1.0-8.el9.x86_64 ncurses-6.2-12.20210508.el9.x86_64 ncurses-base-6.2-12.20210508.el9.noarch ncurses-c++-libs-6.2-12.20210508.el9.x86_64 ncurses-devel-6.2-12.20210508.el9.x86_64 ncurses-libs-6.2-12.20210508.el9.x86_64 netavark-1.16.0-1.el9.x86_64 nettle-3.10.1-1.el9.x86_64 NetworkManager-1.54.3-2.el9.x86_64 NetworkManager-libnm-1.54.3-2.el9.x86_64 NetworkManager-team-1.54.3-2.el9.x86_64 NetworkManager-tui-1.54.3-2.el9.x86_64 newt-0.52.21-11.el9.x86_64 nfs-utils-2.5.4-41.el9.x86_64 nftables-1.0.9-6.el9.x86_64 npth-1.6-8.el9.x86_64 numactl-libs-2.0.19-3.el9.x86_64 ocaml-srpm-macros-6-6.el9.noarch oddjob-0.34.7-7.el9.x86_64 oddjob-mkhomedir-0.34.7-7.el9.x86_64 oniguruma-6.9.6-1.el9.6.x86_64 openblas-srpm-macros-2-11.el9.noarch openldap-2.6.8-4.el9.x86_64 openldap-devel-2.6.8-4.el9.x86_64 openssh-9.9p1-3.el9.x86_64 openssh-clients-9.9p1-3.el9.x86_64 openssh-server-9.9p1-3.el9.x86_64 openssl-3.5.1-6.el9.x86_64 openssl-devel-3.5.1-6.el9.x86_64 openssl-fips-provider-3.5.1-6.el9.x86_64 openssl-libs-3.5.1-6.el9.x86_64 osinfo-db-20250606-1.el9.noarch osinfo-db-tools-1.10.0-1.el9.x86_64 os-prober-1.77-12.el9.x86_64 p11-kit-0.25.10-1.el9.x86_64 p11-kit-trust-0.25.10-1.el9.x86_64 pam-1.5.1-28.el9.x86_64 parted-3.5-3.el9.x86_64 
passt-0^20251210.gd04c480-2.el9.x86_64 passt-selinux-0^20251210.gd04c480-2.el9.noarch passwd-0.80-12.el9.x86_64 patch-2.7.6-16.el9.x86_64 pciutils-libs-3.7.0-7.el9.x86_64 pcre2-10.40-6.el9.x86_64 pcre2-syntax-10.40-6.el9.noarch pcre-8.44-4.el9.x86_64 perl-AutoLoader-5.74-483.el9.noarch perl-B-1.80-483.el9.x86_64 perl-base-2.27-483.el9.noarch perl-Carp-1.50-460.el9.noarch perl-Class-Struct-0.66-483.el9.noarch perl-constant-1.33-461.el9.noarch perl-Data-Dumper-2.174-462.el9.x86_64 perl-Digest-1.19-4.el9.noarch perl-Digest-MD5-2.58-4.el9.x86_64 perl-DynaLoader-1.47-483.el9.x86_64 perl-Encode-3.08-462.el9.x86_64 perl-Errno-1.30-483.el9.x86_64 perl-Error-0.17029-7.el9.noarch perl-Exporter-5.74-461.el9.noarch perl-Fcntl-1.13-483.el9.x86_64 perl-File-Basename-2.85-483.el9.noarch perl-File-Find-1.37-483.el9.noarch perl-FileHandle-2.03-483.el9.noarch perl-File-Path-2.18-4.el9.noarch perl-File-stat-1.09-483.el9.noarch perl-File-Temp-0.231.100-4.el9.noarch perl-Getopt-Long-2.52-4.el9.noarch perl-Getopt-Std-1.12-483.el9.noarch perl-Git-2.47.3-1.el9.noarch perl-HTTP-Tiny-0.076-462.el9.noarch perl-if-0.60.800-483.el9.noarch perl-interpreter-5.32.1-483.el9.x86_64 perl-IO-1.43-483.el9.x86_64 perl-IO-Socket-IP-0.41-5.el9.noarch perl-IO-Socket-SSL-2.073-2.el9.noarch perl-IPC-Open3-1.21-483.el9.noarch perl-lib-0.65-483.el9.x86_64 perl-libnet-3.13-4.el9.noarch perl-libs-5.32.1-483.el9.x86_64 perl-MIME-Base64-3.16-4.el9.x86_64 perl-Mozilla-CA-20200520-6.el9.noarch perl-mro-1.23-483.el9.x86_64 perl-NDBM_File-1.15-483.el9.x86_64 perl-Net-SSLeay-1.94-3.el9.x86_64 perl-overload-1.31-483.el9.noarch perl-overloading-0.02-483.el9.noarch perl-parent-0.238-460.el9.noarch perl-PathTools-3.78-461.el9.x86_64 perl-Pod-Escapes-1.07-460.el9.noarch perl-podlators-4.14-460.el9.noarch perl-Pod-Perldoc-3.28.01-461.el9.noarch perl-Pod-Simple-3.42-4.el9.noarch perl-Pod-Usage-2.01-4.el9.noarch perl-POSIX-1.94-483.el9.x86_64 perl-Scalar-List-Utils-1.56-462.el9.x86_64 perl-SelectSaver-1.02-483.el9.noarch 
perl-Socket-2.031-4.el9.x86_64 perl-srpm-macros-1-41.el9.noarch perl-Storable-3.21-460.el9.x86_64 perl-subs-1.03-483.el9.noarch perl-Symbol-1.08-483.el9.noarch perl-Term-ANSIColor-5.01-461.el9.noarch perl-Term-Cap-1.17-460.el9.noarch perl-TermReadKey-2.38-11.el9.x86_64 perl-Text-ParseWords-3.30-460.el9.noarch perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch perl-Time-Local-1.300-7.el9.noarch perl-URI-5.09-3.el9.noarch perl-vars-1.05-483.el9.noarch pigz-2.5-4.el9.x86_64 pkgconf-1.7.3-10.el9.x86_64 pkgconf-m4-1.7.3-10.el9.noarch pkgconf-pkg-config-1.7.3-10.el9.x86_64 podman-5.6.0-2.el9.x86_64 policycoreutils-3.6-4.el9.x86_64 policycoreutils-python-utils-3.6-4.el9.noarch polkit-0.117-14.el9.x86_64 polkit-libs-0.117-14.el9.x86_64 polkit-pkla-compat-0.1-21.el9.x86_64 popt-1.18-8.el9.x86_64 prefixdevname-0.1.0-8.el9.x86_64 procps-ng-3.3.17-14.el9.x86_64 protobuf-c-1.3.3-13.el9.x86_64 psmisc-23.4-3.el9.x86_64 publicsuffix-list-dafsa-20210518-3.el9.noarch pyproject-srpm-macros-1.18.5-1.el9.noarch python3-3.9.25-3.el9.x86_64 python3-argcomplete-1.12.0-5.el9.noarch python3-attrs-20.3.0-7.el9.noarch python3-audit-3.1.5-8.el9.x86_64 python3-babel-2.9.1-2.el9.noarch python3-cffi-1.14.5-5.el9.x86_64 python3-chardet-4.0.0-5.el9.noarch python3-configobj-5.0.6-25.el9.noarch python3-cryptography-36.0.1-5.el9.x86_64 python3-dasbus-1.7-1.el9.noarch python3-dateutil-2.9.0.post0-1.el9.noarch python3-dbus-1.2.18-2.el9.x86_64 python3-devel-3.9.25-3.el9.x86_64 python3-distro-1.5.0-7.el9.noarch python3-dnf-4.14.0-31.el9.noarch python3-dnf-plugins-core-4.3.0-25.el9.noarch python3-enchant-3.2.0-5.el9.noarch python3-file-magic-5.39-16.el9.noarch python3-gobject-base-3.40.1-6.el9.x86_64 python3-gobject-base-noarch-3.40.1-6.el9.noarch python3-gpg-1.15.1-6.el9.x86_64 python3-hawkey-0.69.0-16.el9.x86_64 python3-idna-2.10-7.el9.1.noarch python3-jinja2-2.11.3-8.el9.noarch python3-jmespath-1.0.1-1.el9.noarch python3-jsonpatch-1.21-16.el9.noarch python3-jsonpointer-2.0-4.el9.noarch 
python3-jsonschema-3.2.0-13.el9.noarch python3-libcomps-0.1.18-1.el9.x86_64 python3-libdnf-0.69.0-16.el9.x86_64 python3-libs-3.9.25-3.el9.x86_64 python3-libselinux-3.6-3.el9.x86_64 python3-libsemanage-3.6-5.el9.x86_64 python3-libvirt-11.10.0-1.el9.x86_64 python3-libxml2-2.9.13-14.el9.x86_64 python3-lxml-4.6.5-3.el9.x86_64 python3-markupsafe-1.1.1-12.el9.x86_64 python3-netaddr-0.10.1-3.el9.noarch python3-netifaces-0.10.6-15.el9.x86_64 python3-oauthlib-3.1.1-5.el9.noarch python3-packaging-20.9-5.el9.noarch python3-pexpect-4.8.0-7.el9.noarch python3-pip-21.3.1-1.el9.noarch python3-pip-wheel-21.3.1-1.el9.noarch python3-ply-3.11-14.el9.noarch python3-policycoreutils-3.6-4.el9.noarch python3-prettytable-0.7.2-27.el9.noarch python3-ptyprocess-0.6.0-12.el9.noarch python3-pycparser-2.20-6.el9.noarch python3-pyparsing-2.4.7-9.el9.noarch python3-pyrsistent-0.17.3-8.el9.x86_64 python3-pyserial-3.4-12.el9.noarch python3-pysocks-1.7.1-12.el9.noarch python3-pytz-2021.1-5.el9.noarch python3-pyyaml-5.4.1-6.el9.x86_64 python3-requests-2.25.1-10.el9.noarch python3-resolvelib-0.5.4-5.el9.noarch python3-rpm-4.16.1.3-40.el9.x86_64 python3-rpm-generators-12-9.el9.noarch python3-rpm-macros-3.9-54.el9.noarch python3-setools-4.4.4-1.el9.x86_64 python3-setuptools-53.0.0-15.el9.noarch python3-setuptools-wheel-53.0.0-15.el9.noarch python3-six-1.15.0-9.el9.noarch python3-systemd-234-19.el9.x86_64 python3-urllib3-1.26.5-6.el9.noarch python-rpm-macros-3.9-54.el9.noarch python-srpm-macros-3.9-54.el9.noarch python-unversioned-command-3.9.25-3.el9.noarch qemu-guest-agent-10.1.0-10.el9.x86_64 qt5-srpm-macros-5.15.9-1.el9.noarch quota-4.09-4.el9.x86_64 quota-nls-4.09-4.el9.noarch readline-8.1-4.el9.x86_64 readline-devel-8.1-4.el9.x86_64 redhat-rpm-config-210-1.el9.noarch rootfiles-8.1-35.el9.noarch rpcbind-1.2.6-7.el9.x86_64 rpm-4.16.1.3-40.el9.x86_64 rpm-build-4.16.1.3-40.el9.x86_64 rpm-build-libs-4.16.1.3-40.el9.x86_64 rpm-libs-4.16.1.3-40.el9.x86_64 rpmlint-1.11-19.el9.noarch 
rpm-plugin-audit-4.16.1.3-40.el9.x86_64 rpm-plugin-selinux-4.16.1.3-40.el9.x86_64 rpm-plugin-systemd-inhibit-4.16.1.3-40.el9.x86_64 rpm-sign-4.16.1.3-40.el9.x86_64 rpm-sign-libs-4.16.1.3-40.el9.x86_64 rsync-3.2.5-4.el9.x86_64 rsyslog-8.2510.0-2.el9.x86_64 rsyslog-logrotate-8.2510.0-2.el9.x86_64 ruby-3.0.7-165.el9.x86_64 ruby-default-gems-3.0.7-165.el9.noarch ruby-devel-3.0.7-165.el9.x86_64 rubygem-bigdecimal-3.0.0-165.el9.x86_64 rubygem-bundler-2.2.33-165.el9.noarch rubygem-io-console-0.5.7-165.el9.x86_64 rubygem-json-2.5.1-165.el9.x86_64 rubygem-psych-3.3.2-165.el9.x86_64 rubygem-rdoc-6.3.4.1-165.el9.noarch rubygems-3.2.33-165.el9.noarch ruby-libs-3.0.7-165.el9.x86_64 rust-srpm-macros-17-4.el9.noarch sed-4.8-9.el9.x86_64 selinux-policy-38.1.71-1.el9.noarch selinux-policy-targeted-38.1.71-1.el9.noarch setroubleshoot-plugins-3.3.14-4.el9.noarch setroubleshoot-server-3.3.35-2.el9.x86_64 setup-2.13.7-10.el9.noarch sg3_utils-1.47-10.el9.x86_64 sg3_utils-libs-1.47-10.el9.x86_64 shadow-utils-4.9-16.el9.x86_64 shadow-utils-subid-4.9-16.el9.x86_64 shared-mime-info-2.1-5.el9.x86_64 skopeo-1.20.0-2.el9.x86_64 slang-2.3.2-11.el9.x86_64 slirp4netns-1.3.3-1.el9.x86_64 snappy-1.1.8-8.el9.x86_64 sos-4.10.1-2.el9.noarch sqlite-3.34.1-9.el9.x86_64 sqlite-libs-3.34.1-9.el9.x86_64 squashfs-tools-4.4-10.git1.el9.x86_64 sscg-4.0.3-2.el9.x86_64 sshpass-1.09-4.el9.x86_64 sssd-client-2.9.7-5.el9.x86_64 sssd-common-2.9.7-5.el9.x86_64 sssd-kcm-2.9.7-5.el9.x86_64 sssd-nfs-idmap-2.9.7-5.el9.x86_64 sudo-1.9.5p2-13.el9.x86_64 systemd-252-64.el9.x86_64 systemd-devel-252-64.el9.x86_64 systemd-libs-252-64.el9.x86_64 systemd-pam-252-64.el9.x86_64 systemd-rpm-macros-252-64.el9.noarch systemd-udev-252-64.el9.x86_64 tar-1.34-9.el9.x86_64 tcl-8.6.10-7.el9.x86_64 tcpdump-4.99.0-9.el9.x86_64 teamd-1.31-16.el9.x86_64 time-1.9-18.el9.x86_64 tmux-3.2a-5.el9.x86_64 tpm2-tss-3.2.3-1.el9.x86_64 traceroute-2.1.1-1.el9.x86_64 tzdata-2025c-1.el9.noarch unzip-6.0-59.el9.x86_64 userspace-rcu-0.12.1-6.el9.x86_64 
util-linux-2.37.4-21.el9.x86_64 util-linux-core-2.37.4-21.el9.x86_64 vim-minimal-8.2.2637-23.el9.x86_64 virt-install-5.0.0-1.el9.noarch virt-manager-common-5.0.0-1.el9.noarch webkit2gtk3-jsc-2.50.4-1.el9.x86_64 wget-1.21.1-8.el9.x86_64 which-2.21-30.el9.x86_64 xfsprogs-6.4.0-7.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 xorriso-1.5.4-5.el9.x86_64 xz-5.2.5-8.el9.x86_64 xz-devel-5.2.5-8.el9.x86_64 xz-libs-5.2.5-8.el9.x86_64 yajl-2.1.0-25.el9.x86_64 yum-4.14.0-31.el9.noarch yum-utils-4.3.0-25.el9.noarch zip-3.0-35.el9.x86_64 zlib-1.2.11-41.el9.x86_64 zlib-devel-1.2.11-41.el9.x86_64 zstd-1.5.5-1.el9.x86_64 home/zuul/zuul-output/logs/python.log0000644000175000017500000000520215133660031017107 0ustar zuulzuulPython 3.9.25 pip 21.3.1 from /usr/lib/python3.9/site-packages/pip (python 3.9) ansible [core 2.15.13] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/zuul/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/zuul/.local/lib/python3.9/site-packages/ansible ansible collection location = /home/zuul/.ansible/collections:/usr/share/ansible/collections executable location = /home/zuul/.local/bin/ansible python version = 3.9.25 (main, Jan 14 2026, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-14)] (/usr/bin/python3) jinja version = 3.1.6 libyaml = True ansible-core==2.15.13 argcomplete==1.12.0 attrs==25.4.0 autopage==0.5.2 Babel==2.9.1 certifi==2026.1.4 cffi==2.0.0 chardet==4.0.0 charset-normalizer==3.4.4 cliff==4.9.1 cloud-init==24.4 cmd2==2.7.0 cockpit @ file:///builddir/build/BUILD/cockpit-348/tmp/wheel/cockpit-348-py3-none-any.whl configobj==5.0.6 cryptography==43.0.3 dasbus==1.7 dbus-python==1.2.18 debtcollector==3.0.0 decorator==5.2.1 distro==1.5.0 dogpile.cache==1.4.1 durationpy==0.10 file-magic==0.4.0 google-auth==2.47.0 gpg==1.15.1 idna==2.10 importlib-resources==5.0.7 importlib_metadata==8.7.1 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.21 jsonpointer==2.0 
jsonschema==4.23.0 jsonschema-specifications==2025.9.1 keystoneauth1==5.11.1 kubernetes==31.0.0 kubernetes-validate==1.31.0 libcomps==0.1.18 libvirt-python==11.10.0 lxml==4.6.5 markdown-it-py==3.0.0 MarkupSafe==3.0.3 mdurl==0.1.2 msgpack==1.1.2 netaddr==1.3.0 netifaces==0.10.6 oauthlib==3.2.2 openstacksdk==4.1.0 os-service-types==1.7.0 osc-lib==4.0.2 oslo.config==10.0.0 oslo.i18n==6.6.0 oslo.serialization==5.8.0 oslo.utils==9.1.0 packaging==20.9 pbr==7.0.3 pexpect==4.8.0 platformdirs==4.4.0 ply==3.11 prettytable==0.7.2 psutil==7.2.1 ptyprocess==0.6.0 pyasn1==0.6.2 pyasn1_modules==0.4.2 pycparser==2.23 pyenchant==3.2.0 Pygments==2.19.2 PyGObject==3.40.1 pyOpenSSL==24.2.1 pyparsing==2.4.7 pyperclip==1.11.0 pyrsistent==0.17.3 pyserial==3.4 PySocks==1.7.1 python-cinderclient==9.7.0 python-dateutil==2.9.0.post0 python-keystoneclient==5.6.0 python-openstackclient==8.0.0 pytz==2021.1 PyYAML==5.4.1 referencing==0.36.2 requests==2.32.5 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 resolvelib==0.5.4 rfc3986==2.0.0 rich==14.2.0 rich-argparse==1.7.2 rpds-py==0.27.1 rpm==4.16.1.3 rsa==4.9.1 selinux==3.6 sepolicy==3.6 setools==4.4.4 setroubleshoot @ file:///builddir/build/BUILD/setroubleshoot-3.3.35/src six==1.15.0 sos==4.10.1 stevedore==5.5.0 systemd-python==234 typing_extensions==4.15.0 tzdata==2025.3 urllib3==1.26.5 wcwidth==0.2.14 websocket-client==1.9.0 wrapt==2.0.1 zipp==3.23.0 home/zuul/zuul-output/logs/dmesg.log0000644000175000017500000014671015133660031016677 0ustar zuulzuul[Tue Jan 20 10:41:34 2026] Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 [Tue Jan 20 10:41:34 2026] The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com. 
[Tue Jan 20 10:41:34 2026] Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M [Tue Jan 20 10:41:34 2026] BIOS-provided physical RAM map: [Tue Jan 20 10:41:34 2026] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [Tue Jan 20 10:41:34 2026] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [Tue Jan 20 10:41:34 2026] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [Tue Jan 20 10:41:34 2026] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable [Tue Jan 20 10:41:34 2026] BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved [Tue Jan 20 10:41:34 2026] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved [Tue Jan 20 10:41:34 2026] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved [Tue Jan 20 10:41:34 2026] BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable [Tue Jan 20 10:41:34 2026] NX (Execute Disable) protection: active [Tue Jan 20 10:41:34 2026] APIC: Static calls initialized [Tue Jan 20 10:41:34 2026] SMBIOS 2.8 present. 
[Tue Jan 20 10:41:34 2026] DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 [Tue Jan 20 10:41:34 2026] Hypervisor detected: KVM [Tue Jan 20 10:41:34 2026] kvm-clock: Using msrs 4b564d01 and 4b564d00 [Tue Jan 20 10:41:34 2026] kvm-clock: using sched offset of 3107079708 cycles [Tue Jan 20 10:41:34 2026] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns [Tue Jan 20 10:41:34 2026] tsc: Detected 2799.998 MHz processor [Tue Jan 20 10:41:34 2026] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [Tue Jan 20 10:41:34 2026] e820: remove [mem 0x000a0000-0x000fffff] usable [Tue Jan 20 10:41:34 2026] last_pfn = 0x140000 max_arch_pfn = 0x400000000 [Tue Jan 20 10:41:34 2026] MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs [Tue Jan 20 10:41:34 2026] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [Tue Jan 20 10:41:34 2026] last_pfn = 0xbffdb max_arch_pfn = 0x400000000 [Tue Jan 20 10:41:34 2026] found SMP MP-table at [mem 0x000f5b60-0x000f5b6f] [Tue Jan 20 10:41:34 2026] Using GB pages for direct mapping [Tue Jan 20 10:41:34 2026] RAMDISK: [mem 0x2d426000-0x32a0afff] [Tue Jan 20 10:41:34 2026] ACPI: Early table checksum verification disabled [Tue Jan 20 10:41:34 2026] ACPI: RSDP 0x00000000000F5910 000014 (v00 BOCHS ) [Tue Jan 20 10:41:34 2026] ACPI: RSDT 0x00000000BFFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Tue Jan 20 10:41:34 2026] ACPI: FACP 0x00000000BFFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Tue Jan 20 10:41:34 2026] ACPI: DSDT 0x00000000BFFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) [Tue Jan 20 10:41:34 2026] ACPI: FACS 0x00000000BFFE0000 000040 [Tue Jan 20 10:41:34 2026] ACPI: APIC 0x00000000BFFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Tue Jan 20 10:41:34 2026] ACPI: WAET 0x00000000BFFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Tue Jan 20 10:41:34 2026] ACPI: Reserving FACP table memory at [mem 
0xbffe172c-0xbffe179f] [Tue Jan 20 10:41:34 2026] ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe172b] [Tue Jan 20 10:41:34 2026] ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] [Tue Jan 20 10:41:34 2026] ACPI: Reserving APIC table memory at [mem 0xbffe17a0-0xbffe181f] [Tue Jan 20 10:41:34 2026] ACPI: Reserving WAET table memory at [mem 0xbffe1820-0xbffe1847] [Tue Jan 20 10:41:34 2026] No NUMA configuration found [Tue Jan 20 10:41:34 2026] Faking a node at [mem 0x0000000000000000-0x000000013fffffff] [Tue Jan 20 10:41:34 2026] NODE_DATA(0) allocated [mem 0x13ffd3000-0x13fffdfff] [Tue Jan 20 10:41:34 2026] crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB) [Tue Jan 20 10:41:34 2026] Zone ranges: [Tue Jan 20 10:41:34 2026] DMA [mem 0x0000000000001000-0x0000000000ffffff] [Tue Jan 20 10:41:34 2026] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [Tue Jan 20 10:41:34 2026] Normal [mem 0x0000000100000000-0x000000013fffffff] [Tue Jan 20 10:41:34 2026] Device empty [Tue Jan 20 10:41:34 2026] Movable zone start for each node [Tue Jan 20 10:41:34 2026] Early memory node ranges [Tue Jan 20 10:41:34 2026] node 0: [mem 0x0000000000001000-0x000000000009efff] [Tue Jan 20 10:41:34 2026] node 0: [mem 0x0000000000100000-0x00000000bffdafff] [Tue Jan 20 10:41:34 2026] node 0: [mem 0x0000000100000000-0x000000013fffffff] [Tue Jan 20 10:41:34 2026] Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] [Tue Jan 20 10:41:34 2026] On node 0, zone DMA: 1 pages in unavailable ranges [Tue Jan 20 10:41:34 2026] On node 0, zone DMA: 97 pages in unavailable ranges [Tue Jan 20 10:41:34 2026] On node 0, zone Normal: 37 pages in unavailable ranges [Tue Jan 20 10:41:34 2026] ACPI: PM-Timer IO Port: 0x608 [Tue Jan 20 10:41:34 2026] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [Tue Jan 20 10:41:34 2026] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 [Tue Jan 20 10:41:34 2026] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 
2 dfl dfl) [Tue Jan 20 10:41:34 2026] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [Tue Jan 20 10:41:34 2026] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [Tue Jan 20 10:41:34 2026] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [Tue Jan 20 10:41:34 2026] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [Tue Jan 20 10:41:34 2026] ACPI: Using ACPI (MADT) for SMP configuration information [Tue Jan 20 10:41:34 2026] TSC deadline timer available [Tue Jan 20 10:41:34 2026] CPU topo: Max. logical packages: 2 [Tue Jan 20 10:41:34 2026] CPU topo: Max. logical dies: 2 [Tue Jan 20 10:41:34 2026] CPU topo: Max. dies per package: 1 [Tue Jan 20 10:41:34 2026] CPU topo: Max. threads per core: 1 [Tue Jan 20 10:41:34 2026] CPU topo: Num. cores per package: 1 [Tue Jan 20 10:41:34 2026] CPU topo: Num. threads per package: 1 [Tue Jan 20 10:41:34 2026] CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs [Tue Jan 20 10:41:34 2026] kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff] [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff] [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff] [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff] [Tue Jan 20 10:41:34 2026] PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff] [Tue Jan 20 10:41:34 2026] [mem 0xc0000000-0xfeffbfff] available for 
PCI devices [Tue Jan 20 10:41:34 2026] Booting paravirtualized kernel on KVM [Tue Jan 20 10:41:34 2026] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns [Tue Jan 20 10:41:34 2026] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 [Tue Jan 20 10:41:34 2026] percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u1048576 [Tue Jan 20 10:41:34 2026] pcpu-alloc: s225280 r8192 d28672 u1048576 alloc=1*2097152 [Tue Jan 20 10:41:34 2026] pcpu-alloc: [0] 0 1 [Tue Jan 20 10:41:34 2026] kvm-guest: PV spinlocks disabled, no host support [Tue Jan 20 10:41:34 2026] Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M [Tue Jan 20 10:41:34 2026] Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space. [Tue Jan 20 10:41:34 2026] random: crng init done [Tue Jan 20 10:41:34 2026] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) [Tue Jan 20 10:41:34 2026] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) [Tue Jan 20 10:41:34 2026] Fallback order for Node 0: 0 [Tue Jan 20 10:41:34 2026] Built 1 zonelists, mobility grouping on. Total pages: 1031899 [Tue Jan 20 10:41:34 2026] Policy zone: Normal [Tue Jan 20 10:41:34 2026] mem auto-init: stack:off, heap alloc:off, heap free:off [Tue Jan 20 10:41:34 2026] software IO TLB: area num 2. [Tue Jan 20 10:41:34 2026] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 [Tue Jan 20 10:41:34 2026] ftrace: allocating 49417 entries in 194 pages [Tue Jan 20 10:41:34 2026] ftrace: allocated 194 pages with 3 groups [Tue Jan 20 10:41:34 2026] Dynamic Preempt: voluntary [Tue Jan 20 10:41:34 2026] rcu: Preemptible hierarchical RCU implementation. 
[Tue Jan 20 10:41:34 2026] rcu: RCU event tracing is enabled. [Tue Jan 20 10:41:34 2026] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=2. [Tue Jan 20 10:41:34 2026] Trampoline variant of Tasks RCU enabled. [Tue Jan 20 10:41:34 2026] Rude variant of Tasks RCU enabled. [Tue Jan 20 10:41:34 2026] Tracing variant of Tasks RCU enabled. [Tue Jan 20 10:41:34 2026] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. [Tue Jan 20 10:41:34 2026] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 [Tue Jan 20 10:41:34 2026] RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. [Tue Jan 20 10:41:34 2026] RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. [Tue Jan 20 10:41:34 2026] RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. [Tue Jan 20 10:41:34 2026] NR_IRQS: 524544, nr_irqs: 440, preallocated irqs: 16 [Tue Jan 20 10:41:34 2026] rcu: srcu_init: Setting srcu_struct sizes based on contention. [Tue Jan 20 10:41:34 2026] kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____) [Tue Jan 20 10:41:34 2026] Console: colour VGA+ 80x25 [Tue Jan 20 10:41:34 2026] printk: console [ttyS0] enabled [Tue Jan 20 10:41:34 2026] ACPI: Core revision 20230331 [Tue Jan 20 10:41:34 2026] APIC: Switch to symmetric I/O mode setup [Tue Jan 20 10:41:34 2026] x2apic enabled [Tue Jan 20 10:41:34 2026] APIC: Switched APIC routing to: physical x2apic [Tue Jan 20 10:41:34 2026] tsc: Marking TSC unstable due to TSCs unsynchronized [Tue Jan 20 10:41:34 2026] Calibrating delay loop (skipped) preset value.. 
5599.99 BogoMIPS (lpj=2799998) [Tue Jan 20 10:41:34 2026] x86/cpu: User Mode Instruction Prevention (UMIP) activated [Tue Jan 20 10:41:34 2026] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 [Tue Jan 20 10:41:34 2026] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 [Tue Jan 20 10:41:34 2026] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [Tue Jan 20 10:41:34 2026] Spectre V2 : Mitigation: Retpolines [Tue Jan 20 10:41:34 2026] Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT [Tue Jan 20 10:41:34 2026] Spectre V2 : Enabling Speculation Barrier for firmware calls [Tue Jan 20 10:41:34 2026] RETBleed: Mitigation: untrained return thunk [Tue Jan 20 10:41:34 2026] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [Tue Jan 20 10:41:34 2026] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl [Tue Jan 20 10:41:34 2026] Speculative Return Stack Overflow: IBPB-extending microcode not applied! [Tue Jan 20 10:41:34 2026] Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. [Tue Jan 20 10:41:34 2026] x86/bugs: return thunk changed [Tue Jan 20 10:41:34 2026] Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode [Tue Jan 20 10:41:34 2026] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [Tue Jan 20 10:41:34 2026] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [Tue Jan 20 10:41:34 2026] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [Tue Jan 20 10:41:34 2026] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [Tue Jan 20 10:41:34 2026] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
[Tue Jan 20 10:41:34 2026] Freeing SMP alternatives memory: 40K [Tue Jan 20 10:41:34 2026] pid_max: default: 32768 minimum: 301 [Tue Jan 20 10:41:34 2026] LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf [Tue Jan 20 10:41:34 2026] landlock: Up and running. [Tue Jan 20 10:41:34 2026] Yama: becoming mindful. [Tue Jan 20 10:41:34 2026] SELinux: Initializing. [Tue Jan 20 10:41:34 2026] LSM support for eBPF active [Tue Jan 20 10:41:34 2026] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) [Tue Jan 20 10:41:34 2026] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) [Tue Jan 20 10:41:34 2026] smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) [Tue Jan 20 10:41:34 2026] Performance Events: Fam17h+ core perfctr, AMD PMU driver. [Tue Jan 20 10:41:34 2026] ... version: 0 [Tue Jan 20 10:41:34 2026] ... bit width: 48 [Tue Jan 20 10:41:34 2026] ... generic registers: 6 [Tue Jan 20 10:41:34 2026] ... value mask: 0000ffffffffffff [Tue Jan 20 10:41:34 2026] ... max period: 00007fffffffffff [Tue Jan 20 10:41:34 2026] ... fixed-purpose events: 0 [Tue Jan 20 10:41:34 2026] ... event mask: 000000000000003f [Tue Jan 20 10:41:34 2026] signal: max sigframe size: 1776 [Tue Jan 20 10:41:34 2026] rcu: Hierarchical SRCU implementation. [Tue Jan 20 10:41:34 2026] rcu: Max phase no-delay instances is 400. [Tue Jan 20 10:41:34 2026] smp: Bringing up secondary CPUs ... [Tue Jan 20 10:41:34 2026] smpboot: x86: Booting SMP configuration: [Tue Jan 20 10:41:34 2026] .... 
node #0, CPUs: #1 [Tue Jan 20 10:41:34 2026] smp: Brought up 1 node, 2 CPUs [Tue Jan 20 10:41:34 2026] smpboot: Total of 2 processors activated (11199.99 BogoMIPS) [Tue Jan 20 10:41:34 2026] node 0 deferred pages initialised in 2ms [Tue Jan 20 10:41:34 2026] Memory: 3644980K/4193764K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 545100K reserved, 0K cma-reserved) [Tue Jan 20 10:41:34 2026] devtmpfs: initialized [Tue Jan 20 10:41:34 2026] x86/mm: Memory block size: 128MB [Tue Jan 20 10:41:34 2026] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns [Tue Jan 20 10:41:34 2026] futex hash table entries: 512 (32768 bytes on 1 NUMA nodes, total 32 KiB, linear). [Tue Jan 20 10:41:34 2026] pinctrl core: initialized pinctrl subsystem [Tue Jan 20 10:41:34 2026] NET: Registered PF_NETLINK/PF_ROUTE protocol family [Tue Jan 20 10:41:34 2026] DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations [Tue Jan 20 10:41:34 2026] DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations [Tue Jan 20 10:41:34 2026] DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations [Tue Jan 20 10:41:34 2026] audit: initializing netlink subsys (disabled) [Tue Jan 20 10:41:34 2026] thermal_sys: Registered thermal governor 'fair_share' [Tue Jan 20 10:41:34 2026] thermal_sys: Registered thermal governor 'step_wise' [Tue Jan 20 10:41:34 2026] thermal_sys: Registered thermal governor 'user_space' [Tue Jan 20 10:41:34 2026] audit: type=2000 audit(1768905694.106:1): state=initialized audit_enabled=0 res=1 [Tue Jan 20 10:41:34 2026] cpuidle: using governor menu [Tue Jan 20 10:41:34 2026] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [Tue Jan 20 10:41:34 2026] PCI: Using configuration type 1 for base access [Tue Jan 20 10:41:34 2026] PCI: Using configuration type 1 for extended access [Tue Jan 20 10:41:34 2026] kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. [Tue Jan 20 10:41:34 2026] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages [Tue Jan 20 10:41:34 2026] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page [Tue Jan 20 10:41:34 2026] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages [Tue Jan 20 10:41:34 2026] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page [Tue Jan 20 10:41:34 2026] Demotion targets for Node 0: null [Tue Jan 20 10:41:34 2026] cryptd: max_cpu_qlen set to 1000 [Tue Jan 20 10:41:34 2026] ACPI: Added _OSI(Module Device) [Tue Jan 20 10:41:34 2026] ACPI: Added _OSI(Processor Device) [Tue Jan 20 10:41:34 2026] ACPI: Added _OSI(Processor Aggregator Device) [Tue Jan 20 10:41:34 2026] ACPI: 1 ACPI AML tables successfully acquired and loaded [Tue Jan 20 10:41:34 2026] ACPI: Interpreter enabled [Tue Jan 20 10:41:34 2026] ACPI: PM: (supports S0 S3 S4 S5) [Tue Jan 20 10:41:34 2026] ACPI: Using IOAPIC for interrupt routing [Tue Jan 20 10:41:34 2026] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [Tue Jan 20 10:41:34 2026] PCI: Using E820 reservations for host bridge windows [Tue Jan 20 10:41:34 2026] ACPI: Enabled 2 GPEs in block 00 to 0F [Tue Jan 20 10:41:34 2026] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) [Tue Jan 20 10:41:34 2026] acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [Tue Jan 20 10:41:34 2026] acpiphp: Slot [3] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [4] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [5] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [6] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [7] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [8] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [9] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [10] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [11] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [12] registered [Tue 
Jan 20 10:41:34 2026] acpiphp: Slot [13] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [14] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [15] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [16] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [17] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [18] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [19] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [20] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [21] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [22] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [23] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [24] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [25] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [26] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [27] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [28] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [29] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [30] registered [Tue Jan 20 10:41:34 2026] acpiphp: Slot [31] registered [Tue Jan 20 10:41:34 2026] PCI host bridge to bus 0000:00 [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: root bus resource [mem 0x140000000-0x1bfffffff window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: root bus resource [bus 00-ff] [Tue Jan 20 10:41:34 2026] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint 
[Tue Jan 20 10:41:34 2026] pci 0000:00:01.1: BAR 4 [io 0xc140-0xc14f] [Tue Jan 20 10:41:34 2026] pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk [Tue Jan 20 10:41:34 2026] pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk [Tue Jan 20 10:41:34 2026] pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk [Tue Jan 20 10:41:34 2026] pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk [Tue Jan 20 10:41:34 2026] pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 0000:00:01.2: BAR 4 [io 0xc100-0xc11f] [Tue Jan 20 10:41:34 2026] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI [Tue Jan 20 10:41:34 2026] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] [Tue Jan 20 10:41:34 2026] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] [Tue Jan 20 10:41:34 2026] pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] [Tue Jan 20 10:41:34 2026] pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] [Tue Jan 20 10:41:34 2026] pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] [Tue Jan 20 10:41:34 2026] pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 
0000:00:04.0: BAR 0 [io 0xc000-0xc07f] [Tue Jan 20 10:41:34 2026] pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] [Tue Jan 20 10:41:34 2026] pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] [Tue Jan 20 10:41:34 2026] pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] [Tue Jan 20 10:41:34 2026] pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] [Tue Jan 20 10:41:34 2026] pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint [Tue Jan 20 10:41:34 2026] pci 0000:00:06.0: BAR 0 [io 0xc120-0xc13f] [Tue Jan 20 10:41:34 2026] pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] [Tue Jan 20 10:41:34 2026] ACPI: PCI: Interrupt link LNKA configured for IRQ 10 [Tue Jan 20 10:41:34 2026] ACPI: PCI: Interrupt link LNKB configured for IRQ 10 [Tue Jan 20 10:41:34 2026] ACPI: PCI: Interrupt link LNKC configured for IRQ 11 [Tue Jan 20 10:41:34 2026] ACPI: PCI: Interrupt link LNKD configured for IRQ 11 [Tue Jan 20 10:41:34 2026] ACPI: PCI: Interrupt link LNKS configured for IRQ 9 [Tue Jan 20 10:41:34 2026] iommu: Default domain type: Translated [Tue Jan 20 10:41:34 2026] iommu: DMA domain TLB invalidation policy: lazy mode [Tue Jan 20 10:41:34 2026] SCSI subsystem initialized [Tue Jan 20 10:41:34 2026] ACPI: bus type USB registered [Tue Jan 20 10:41:34 2026] usbcore: registered new interface driver usbfs [Tue Jan 20 10:41:34 2026] usbcore: registered new interface driver hub [Tue Jan 20 10:41:34 2026] usbcore: registered new device driver usb [Tue Jan 20 10:41:34 2026] pps_core: LinuxPPS API ver. 1 registered [Tue Jan 20 10:41:34 2026] pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti [Tue Jan 20 10:41:34 2026] PTP clock support registered [Tue Jan 20 10:41:34 2026] EDAC MC: Ver: 3.0.0 [Tue Jan 20 10:41:34 2026] NetLabel: Initializing [Tue Jan 20 10:41:34 2026] NetLabel: domain hash size = 128 [Tue Jan 20 10:41:34 2026] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO [Tue Jan 20 10:41:34 2026] NetLabel: unlabeled traffic allowed by default [Tue Jan 20 10:41:34 2026] PCI: Using ACPI for IRQ routing [Tue Jan 20 10:41:34 2026] PCI: pci_cache_line_size set to 64 bytes [Tue Jan 20 10:41:34 2026] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] [Tue Jan 20 10:41:34 2026] e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff] [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: vgaarb: setting as boot VGA device [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: vgaarb: bridge control possible [Tue Jan 20 10:41:34 2026] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none [Tue Jan 20 10:41:34 2026] vgaarb: loaded [Tue Jan 20 10:41:34 2026] clocksource: Switched to clocksource kvm-clock [Tue Jan 20 10:41:34 2026] VFS: Disk quotas dquot_6.6.0 [Tue Jan 20 10:41:34 2026] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [Tue Jan 20 10:41:34 2026] pnp: PnP ACPI init [Tue Jan 20 10:41:34 2026] pnp 00:03: [dma 2] [Tue Jan 20 10:41:34 2026] pnp: PnP ACPI: found 5 devices [Tue Jan 20 10:41:34 2026] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns [Tue Jan 20 10:41:34 2026] NET: Registered PF_INET protocol family [Tue Jan 20 10:41:34 2026] IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) [Tue Jan 20 10:41:34 2026] tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) [Tue Jan 20 10:41:34 2026] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) [Tue Jan 20 10:41:34 2026] TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) [Tue Jan 20 10:41:34 2026] TCP bind hash 
table entries: 32768 (order: 7, 524288 bytes, linear) [Tue Jan 20 10:41:34 2026] TCP: Hash tables configured (established 32768 bind 32768) [Tue Jan 20 10:41:34 2026] MPTCP token hash table entries: 4096 (order: 4, 98304 bytes, linear) [Tue Jan 20 10:41:34 2026] UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) [Tue Jan 20 10:41:34 2026] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) [Tue Jan 20 10:41:34 2026] NET: Registered PF_UNIX/PF_LOCAL protocol family [Tue Jan 20 10:41:34 2026] NET: Registered PF_XDP protocol family [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] [Tue Jan 20 10:41:34 2026] pci_bus 0000:00: resource 8 [mem 0x140000000-0x1bfffffff window] [Tue Jan 20 10:41:34 2026] pci 0000:00:01.0: PIIX3: Enabling Passive Release [Tue Jan 20 10:41:34 2026] pci 0000:00:00.0: Limiting direct PCI/PCI transfers [Tue Jan 20 10:41:34 2026] ACPI: \_SB_.LNKD: Enabled at IRQ 11 [Tue Jan 20 10:41:34 2026] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 112982 usecs [Tue Jan 20 10:41:34 2026] PCI: CLS 0 bytes, default 64 [Tue Jan 20 10:41:34 2026] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [Tue Jan 20 10:41:34 2026] software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB) [Tue Jan 20 10:41:34 2026] Trying to unpack rootfs image as initramfs... 
[Tue Jan 20 10:41:34 2026] ACPI: bus type thunderbolt registered [Tue Jan 20 10:41:34 2026] Initialise system trusted keyrings [Tue Jan 20 10:41:34 2026] Key type blacklist registered [Tue Jan 20 10:41:34 2026] workingset: timestamp_bits=36 max_order=20 bucket_order=0 [Tue Jan 20 10:41:34 2026] zbud: loaded [Tue Jan 20 10:41:34 2026] integrity: Platform Keyring initialized [Tue Jan 20 10:41:34 2026] integrity: Machine keyring initialized [Tue Jan 20 10:41:35 2026] Freeing initrd memory: 87956K [Tue Jan 20 10:41:35 2026] NET: Registered PF_ALG protocol family [Tue Jan 20 10:41:35 2026] xor: automatically using best checksumming function avx [Tue Jan 20 10:41:35 2026] Key type asymmetric registered [Tue Jan 20 10:41:35 2026] Asymmetric key parser 'x509' registered [Tue Jan 20 10:41:35 2026] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246) [Tue Jan 20 10:41:35 2026] io scheduler mq-deadline registered [Tue Jan 20 10:41:35 2026] io scheduler kyber registered [Tue Jan 20 10:41:35 2026] io scheduler bfq registered [Tue Jan 20 10:41:35 2026] atomic64_test: passed for x86-64 platform with CX8 and with SSE [Tue Jan 20 10:41:35 2026] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [Tue Jan 20 10:41:35 2026] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [Tue Jan 20 10:41:35 2026] ACPI: button: Power Button [PWRF] [Tue Jan 20 10:41:35 2026] ACPI: \_SB_.LNKB: Enabled at IRQ 10 [Tue Jan 20 10:41:35 2026] ACPI: \_SB_.LNKC: Enabled at IRQ 11 [Tue Jan 20 10:41:35 2026] ACPI: \_SB_.LNKA: Enabled at IRQ 10 [Tue Jan 20 10:41:35 2026] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [Tue Jan 20 10:41:35 2026] 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A [Tue Jan 20 10:41:35 2026] Non-volatile memory driver v1.3 [Tue Jan 20 10:41:35 2026] rdac: device handler registered [Tue Jan 20 10:41:35 2026] hp_sw: device handler registered [Tue Jan 20 10:41:35 2026] emc: device handler registered [Tue Jan 20 
10:41:35 2026] alua: device handler registered [Tue Jan 20 10:41:35 2026] uhci_hcd 0000:00:01.2: UHCI Host Controller [Tue Jan 20 10:41:35 2026] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 [Tue Jan 20 10:41:35 2026] uhci_hcd 0000:00:01.2: detected 2 ports [Tue Jan 20 10:41:35 2026] uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100 [Tue Jan 20 10:41:35 2026] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14 [Tue Jan 20 10:41:35 2026] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [Tue Jan 20 10:41:35 2026] usb usb1: Product: UHCI Host Controller [Tue Jan 20 10:41:35 2026] usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd [Tue Jan 20 10:41:35 2026] usb usb1: SerialNumber: 0000:00:01.2 [Tue Jan 20 10:41:35 2026] hub 1-0:1.0: USB hub found [Tue Jan 20 10:41:35 2026] hub 1-0:1.0: 2 ports detected [Tue Jan 20 10:41:35 2026] usbcore: registered new interface driver usbserial_generic [Tue Jan 20 10:41:35 2026] usbserial: USB Serial support registered for generic [Tue Jan 20 10:41:35 2026] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 [Tue Jan 20 10:41:35 2026] serio: i8042 KBD port at 0x60,0x64 irq 1 [Tue Jan 20 10:41:35 2026] serio: i8042 AUX port at 0x60,0x64 irq 12 [Tue Jan 20 10:41:35 2026] mousedev: PS/2 mouse device common for all mice [Tue Jan 20 10:41:35 2026] rtc_cmos 00:04: RTC can wake from S4 [Tue Jan 20 10:41:35 2026] rtc_cmos 00:04: registered as rtc0 [Tue Jan 20 10:41:35 2026] rtc_cmos 00:04: setting system clock to 2026-01-20T10:41:34 UTC (1768905694) [Tue Jan 20 10:41:35 2026] rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram [Tue Jan 20 10:41:35 2026] amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled [Tue Jan 20 10:41:35 2026] hid: raw HID events driver (C) Jiri Kosina [Tue Jan 20 10:41:35 2026] usbcore: registered new interface driver usbhid [Tue Jan 20 10:41:35 2026] usbhid: USB HID core driver [Tue Jan 20 
10:41:35 2026] drop_monitor: Initializing network drop monitor service [Tue Jan 20 10:41:35 2026] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 [Tue Jan 20 10:41:35 2026] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4 [Tue Jan 20 10:41:35 2026] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3 [Tue Jan 20 10:41:35 2026] Initializing XFRM netlink socket [Tue Jan 20 10:41:35 2026] NET: Registered PF_INET6 protocol family [Tue Jan 20 10:41:35 2026] Segment Routing with IPv6 [Tue Jan 20 10:41:35 2026] NET: Registered PF_PACKET protocol family [Tue Jan 20 10:41:35 2026] mpls_gso: MPLS GSO support [Tue Jan 20 10:41:35 2026] IPI shorthand broadcast: enabled [Tue Jan 20 10:41:35 2026] AVX2 version of gcm_enc/dec engaged. [Tue Jan 20 10:41:35 2026] AES CTR mode by8 optimization enabled [Tue Jan 20 10:41:35 2026] sched_clock: Marking stable (1162006829, 151139044)->(1407583433, -94437560) [Tue Jan 20 10:41:35 2026] registered taskstats version 1 [Tue Jan 20 10:41:35 2026] Loading compiled-in X.509 certificates [Tue Jan 20 10:41:35 2026] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7' [Tue Jan 20 10:41:35 2026] Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80' [Tue Jan 20 10:41:35 2026] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8' [Tue Jan 20 10:41:35 2026] Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a' [Tue Jan 20 10:41:35 2026] Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0' [Tue Jan 20 10:41:35 2026] Demotion targets for Node 0: null [Tue Jan 20 10:41:35 2026] page_owner is disabled [Tue Jan 20 10:41:35 2026] Key type .fscrypt registered [Tue Jan 20 10:41:35 2026] Key type fscrypt-provisioning 
registered [Tue Jan 20 10:41:35 2026] Key type big_key registered [Tue Jan 20 10:41:35 2026] Key type encrypted registered [Tue Jan 20 10:41:35 2026] ima: No TPM chip found, activating TPM-bypass! [Tue Jan 20 10:41:35 2026] Loading compiled-in module X.509 certificates [Tue Jan 20 10:41:35 2026] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7' [Tue Jan 20 10:41:35 2026] ima: Allocated hash algorithm: sha256 [Tue Jan 20 10:41:35 2026] ima: No architecture policies found [Tue Jan 20 10:41:35 2026] evm: Initialising EVM extended attributes: [Tue Jan 20 10:41:35 2026] evm: security.selinux [Tue Jan 20 10:41:35 2026] evm: security.SMACK64 (disabled) [Tue Jan 20 10:41:35 2026] evm: security.SMACK64EXEC (disabled) [Tue Jan 20 10:41:35 2026] evm: security.SMACK64TRANSMUTE (disabled) [Tue Jan 20 10:41:35 2026] evm: security.SMACK64MMAP (disabled) [Tue Jan 20 10:41:35 2026] evm: security.apparmor (disabled) [Tue Jan 20 10:41:35 2026] evm: security.ima [Tue Jan 20 10:41:35 2026] evm: security.capability [Tue Jan 20 10:41:35 2026] evm: HMAC attrs: 0x1 [Tue Jan 20 10:41:35 2026] usb 1-1: new full-speed USB device number 2 using uhci_hcd [Tue Jan 20 10:41:35 2026] Running certificate verification RSA selftest [Tue Jan 20 10:41:35 2026] Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db' [Tue Jan 20 10:41:35 2026] Running certificate verification ECDSA selftest [Tue Jan 20 10:41:35 2026] Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3' [Tue Jan 20 10:41:35 2026] clk: Disabling unused clocks [Tue Jan 20 10:41:35 2026] Freeing unused decrypted memory: 2028K [Tue Jan 20 10:41:35 2026] Freeing unused kernel image (initmem) memory: 4200K [Tue Jan 20 10:41:35 2026] Write protecting the kernel read-only data: 30720k [Tue Jan 20 10:41:35 2026] Freeing unused kernel image (rodata/data gap) memory: 420K [Tue Jan 
20 10:41:35 2026] x86/mm: Checked W+X mappings: passed, no W+X pages found. [Tue Jan 20 10:41:35 2026] Run /init as init process [Tue Jan 20 10:41:35 2026] with arguments: [Tue Jan 20 10:41:35 2026] /init [Tue Jan 20 10:41:35 2026] with environment: [Tue Jan 20 10:41:35 2026] HOME=/ [Tue Jan 20 10:41:35 2026] TERM=linux [Tue Jan 20 10:41:35 2026] BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 [Tue Jan 20 10:41:35 2026] systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [Tue Jan 20 10:41:35 2026] systemd[1]: Detected virtualization kvm. [Tue Jan 20 10:41:35 2026] systemd[1]: Detected architecture x86-64. [Tue Jan 20 10:41:35 2026] systemd[1]: Running in initrd. [Tue Jan 20 10:41:35 2026] systemd[1]: No hostname configured, using default hostname. [Tue Jan 20 10:41:35 2026] systemd[1]: Hostname set to . [Tue Jan 20 10:41:35 2026] systemd[1]: Initializing machine ID from VM UUID. [Tue Jan 20 10:41:35 2026] usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00 [Tue Jan 20 10:41:35 2026] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10 [Tue Jan 20 10:41:35 2026] usb 1-1: Product: QEMU USB Tablet [Tue Jan 20 10:41:35 2026] usb 1-1: Manufacturer: QEMU [Tue Jan 20 10:41:35 2026] usb 1-1: SerialNumber: 28754-0000:00:01.2-1 [Tue Jan 20 10:41:35 2026] input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5 [Tue Jan 20 10:41:35 2026] hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0 [Tue Jan 20 10:41:35 2026] systemd[1]: Queued start job for default target Initrd Default Target. 
[Tue Jan 20 10:41:35 2026] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [Tue Jan 20 10:41:35 2026] systemd[1]: Reached target Local Encrypted Volumes. [Tue Jan 20 10:41:35 2026] systemd[1]: Reached target Initrd /usr File System. [Tue Jan 20 10:41:35 2026] systemd[1]: Reached target Local File Systems. [Tue Jan 20 10:41:35 2026] systemd[1]: Reached target Path Units. [Tue Jan 20 10:41:35 2026] systemd[1]: Reached target Slice Units. [Tue Jan 20 10:41:35 2026] systemd[1]: Reached target Swaps. [Tue Jan 20 10:41:35 2026] systemd[1]: Reached target Timer Units. [Tue Jan 20 10:41:35 2026] systemd[1]: Listening on D-Bus System Message Bus Socket. [Tue Jan 20 10:41:35 2026] systemd[1]: Listening on Journal Socket (/dev/log). [Tue Jan 20 10:41:35 2026] systemd[1]: Listening on Journal Socket. [Tue Jan 20 10:41:35 2026] systemd[1]: Listening on udev Control Socket. [Tue Jan 20 10:41:35 2026] systemd[1]: Listening on udev Kernel Socket. [Tue Jan 20 10:41:35 2026] systemd[1]: Reached target Socket Units. [Tue Jan 20 10:41:35 2026] systemd[1]: Starting Create List of Static Device Nodes... [Tue Jan 20 10:41:35 2026] systemd[1]: Starting Journal Service... [Tue Jan 20 10:41:35 2026] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [Tue Jan 20 10:41:35 2026] systemd[1]: Starting Apply Kernel Variables... [Tue Jan 20 10:41:35 2026] systemd[1]: Starting Create System Users... [Tue Jan 20 10:41:35 2026] systemd[1]: Starting Setup Virtual Console... [Tue Jan 20 10:41:35 2026] systemd[1]: Finished Create List of Static Device Nodes. [Tue Jan 20 10:41:35 2026] systemd[1]: Finished Apply Kernel Variables. [Tue Jan 20 10:41:35 2026] systemd[1]: Started Journal Service. [Tue Jan 20 10:41:36 2026] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
[Tue Jan 20 10:41:36 2026] device-mapper: uevent: version 1.0.3 [Tue Jan 20 10:41:36 2026] device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev [Tue Jan 20 10:41:36 2026] RPC: Registered named UNIX socket transport module. [Tue Jan 20 10:41:36 2026] RPC: Registered udp transport module. [Tue Jan 20 10:41:36 2026] RPC: Registered tcp transport module. [Tue Jan 20 10:41:36 2026] RPC: Registered tcp-with-tls transport module. [Tue Jan 20 10:41:36 2026] RPC: Registered tcp NFSv4.1 backchannel transport module. [Tue Jan 20 10:41:36 2026] virtio_blk virtio2: 2/0/0 default/read/poll queues [Tue Jan 20 10:41:36 2026] virtio_blk virtio2: [vda] 83886080 512-byte logical blocks (42.9 GB/40.0 GiB) [Tue Jan 20 10:41:36 2026] vda: vda1 [Tue Jan 20 10:41:36 2026] libata version 3.00 loaded. [Tue Jan 20 10:41:36 2026] ata_piix 0000:00:01.1: version 2.13 [Tue Jan 20 10:41:36 2026] scsi host0: ata_piix [Tue Jan 20 10:41:36 2026] scsi host1: ata_piix [Tue Jan 20 10:41:36 2026] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0 [Tue Jan 20 10:41:36 2026] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0 [Tue Jan 20 10:41:36 2026] ata1: found unknown device (class 0) [Tue Jan 20 10:41:36 2026] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 [Tue Jan 20 10:41:36 2026] scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 [Tue Jan 20 10:41:36 2026] scsi 0:0:0:0: Attached scsi generic sg0 type 5 [Tue Jan 20 10:41:36 2026] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray [Tue Jan 20 10:41:36 2026] cdrom: Uniform CD-ROM driver Revision: 3.20 [Tue Jan 20 10:41:36 2026] sr 0:0:0:0: Attached scsi CD-ROM sr0 [Tue Jan 20 10:41:37 2026] SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled [Tue Jan 20 10:41:37 2026] XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40 [Tue Jan 20 10:41:37 2026] XFS (vda1): Ending clean mount [Tue Jan 20 10:41:37 2026] 
systemd-journald[244]: Received SIGTERM from PID 1 (systemd). [Tue Jan 20 10:41:37 2026] audit: type=1404 audit(1768905697.014:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1 [Tue Jan 20 10:41:37 2026] SELinux: policy capability network_peer_controls=1 [Tue Jan 20 10:41:37 2026] SELinux: policy capability open_perms=1 [Tue Jan 20 10:41:37 2026] SELinux: policy capability extended_socket_class=1 [Tue Jan 20 10:41:37 2026] SELinux: policy capability always_check_network=0 [Tue Jan 20 10:41:37 2026] SELinux: policy capability cgroup_seclabel=1 [Tue Jan 20 10:41:37 2026] SELinux: policy capability nnp_nosuid_transition=1 [Tue Jan 20 10:41:37 2026] SELinux: policy capability genfs_seclabel_symlinks=1 [Tue Jan 20 10:41:37 2026] audit: type=1403 audit(1768905697.125:3): auid=4294967295 ses=4294967295 lsm=selinux res=1 [Tue Jan 20 10:41:37 2026] systemd[1]: Successfully loaded SELinux policy in 113.998ms. [Tue Jan 20 10:41:37 2026] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.635ms. [Tue Jan 20 10:41:37 2026] systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [Tue Jan 20 10:41:37 2026] systemd[1]: Detected virtualization kvm. [Tue Jan 20 10:41:37 2026] systemd[1]: Detected architecture x86-64. [Tue Jan 20 10:41:38 2026] systemd-rc-local-generator[548]: /etc/rc.d/rc.local is not marked executable, skipping. [Tue Jan 20 10:41:38 2026] systemd[1]: initrd-switch-root.service: Deactivated successfully. [Tue Jan 20 10:41:38 2026] systemd[1]: Stopped Switch Root. [Tue Jan 20 10:41:38 2026] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
[Tue Jan 20 10:41:38 2026] systemd[1]: Created slice Slice /system/getty. [Tue Jan 20 10:41:38 2026] systemd[1]: Created slice Slice /system/serial-getty. [Tue Jan 20 10:41:38 2026] systemd[1]: Created slice Slice /system/sshd-keygen. [Tue Jan 20 10:41:38 2026] systemd[1]: Created slice User and Session Slice. [Tue Jan 20 10:41:38 2026] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [Tue Jan 20 10:41:38 2026] systemd[1]: Started Forward Password Requests to Wall Directory Watch. [Tue Jan 20 10:41:38 2026] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. [Tue Jan 20 10:41:38 2026] systemd[1]: Reached target Local Encrypted Volumes. [Tue Jan 20 10:41:38 2026] systemd[1]: Stopped target Switch Root. [Tue Jan 20 10:41:38 2026] systemd[1]: Stopped target Initrd File Systems. [Tue Jan 20 10:41:38 2026] systemd[1]: Stopped target Initrd Root File System. [Tue Jan 20 10:41:38 2026] systemd[1]: Reached target Local Integrity Protected Volumes. [Tue Jan 20 10:41:38 2026] systemd[1]: Reached target Path Units. [Tue Jan 20 10:41:38 2026] systemd[1]: Reached target rpc_pipefs.target. [Tue Jan 20 10:41:38 2026] systemd[1]: Reached target Slice Units. [Tue Jan 20 10:41:38 2026] systemd[1]: Reached target Swaps. [Tue Jan 20 10:41:38 2026] systemd[1]: Reached target Local Verity Protected Volumes. [Tue Jan 20 10:41:38 2026] systemd[1]: Listening on RPCbind Server Activation Socket. [Tue Jan 20 10:41:38 2026] systemd[1]: Reached target RPC Port Mapper. [Tue Jan 20 10:41:38 2026] systemd[1]: Listening on Process Core Dump Socket. [Tue Jan 20 10:41:38 2026] systemd[1]: Listening on initctl Compatibility Named Pipe. [Tue Jan 20 10:41:38 2026] systemd[1]: Listening on udev Control Socket. [Tue Jan 20 10:41:38 2026] systemd[1]: Listening on udev Kernel Socket. [Tue Jan 20 10:41:38 2026] systemd[1]: Mounting Huge Pages File System... [Tue Jan 20 10:41:38 2026] systemd[1]: Mounting POSIX Message Queue File System... 
[Tue Jan 20 10:41:38 2026] systemd[1]: Mounting Kernel Debug File System... [Tue Jan 20 10:41:38 2026] systemd[1]: Mounting Kernel Trace File System... [Tue Jan 20 10:41:38 2026] systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Create List of Static Device Nodes... [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Load Kernel Module configfs... [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Load Kernel Module drm... [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Load Kernel Module efi_pstore... [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Load Kernel Module fuse... [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network... [Tue Jan 20 10:41:38 2026] systemd[1]: systemd-fsck-root.service: Deactivated successfully. [Tue Jan 20 10:41:38 2026] systemd[1]: Stopped File System Check on Root Device. [Tue Jan 20 10:41:38 2026] systemd[1]: Stopped Journal Service. [Tue Jan 20 10:41:38 2026] fuse: init (API version 7.37) [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Journal Service... [Tue Jan 20 10:41:38 2026] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Generate network units from Kernel command line... [Tue Jan 20 10:41:38 2026] systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Remount Root and Kernel File Systems... [Tue Jan 20 10:41:38 2026] systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met. [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Apply Kernel Variables... 
[Tue Jan 20 10:41:38 2026] xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff) [Tue Jan 20 10:41:38 2026] systemd[1]: Starting Coldplug All udev Devices... [Tue Jan 20 10:41:38 2026] systemd[1]: Started Journal Service. [Tue Jan 20 10:41:38 2026] ACPI: bus type drm_connector registered [Tue Jan 20 10:41:38 2026] systemd-journald[589]: Received client request to flush runtime journal. [Tue Jan 20 10:41:39 2026] Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled [Tue Jan 20 10:41:39 2026] Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled [Tue Jan 20 10:41:39 2026] input: PC Speaker as /devices/platform/pcspkr/input/input6 [Tue Jan 20 10:41:39 2026] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 [Tue Jan 20 10:41:39 2026] i2c i2c-0: 1/1 memory slots populated (from DMI) [Tue Jan 20 10:41:39 2026] i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD [Tue Jan 20 10:41:39 2026] [drm] pci: virtio-vga detected at 0000:00:02.0 [Tue Jan 20 10:41:39 2026] virtio-pci 0000:00:02.0: vgaarb: deactivate vga console [Tue Jan 20 10:41:39 2026] Console: switching to colour dummy device 80x25 [Tue Jan 20 10:41:39 2026] [drm] features: -virgl +edid -resource_blob -host_visible [Tue Jan 20 10:41:39 2026] [drm] features: -context_init [Tue Jan 20 10:41:39 2026] [drm] number of scanouts: 1 [Tue Jan 20 10:41:39 2026] [drm] number of cap sets: 0 [Tue Jan 20 10:41:39 2026] [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 [Tue Jan 20 10:41:39 2026] fbcon: virtio_gpudrmfb (fb0) is primary device [Tue Jan 20 10:41:39 2026] Console: switching to colour frame buffer device 128x48 [Tue Jan 20 10:41:39 2026] virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device [Tue Jan 20 10:41:39 2026] kvm_amd: TSC scaling supported [Tue Jan 20 10:41:39 2026] kvm_amd: 
Nested Virtualization enabled [Tue Jan 20 10:41:39 2026] kvm_amd: Nested Paging enabled [Tue Jan 20 10:41:39 2026] kvm_amd: LBR virtualization supported [Tue Jan 20 10:41:40 2026] ISO 9660 Extensions: Microsoft Joliet Level 3 [Tue Jan 20 10:41:40 2026] ISO 9660 Extensions: RRIP_1991A [Tue Jan 20 10:41:46 2026] block vda: the capability attribute has been deprecated. [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: BAR 0 [io 0x0000-0x003f] [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff] [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref] [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref] [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: BAR 4 [mem 0x140000000-0x140003fff 64bit pref]: assigned [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned [Tue Jan 20 10:44:49 2026] pci 0000:00:07.0: BAR 0 [io 0x1000-0x103f]: assigned [Tue Jan 20 10:44:49 2026] virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) [Tue Jan 20 10:51:13 2026] systemd-rc-local-generator[8453]: /etc/rc.d/rc.local is not marked executable, skipping. [Tue Jan 20 10:51:43 2026] SELinux: Converting 389 SID table entries... 
[Tue Jan 20 10:51:43 2026] SELinux: policy capability network_peer_controls=1 [Tue Jan 20 10:51:43 2026] SELinux: policy capability open_perms=1 [Tue Jan 20 10:51:43 2026] SELinux: policy capability extended_socket_class=1 [Tue Jan 20 10:51:43 2026] SELinux: policy capability always_check_network=0 [Tue Jan 20 10:51:43 2026] SELinux: policy capability cgroup_seclabel=1 [Tue Jan 20 10:51:43 2026] SELinux: policy capability nnp_nosuid_transition=1 [Tue Jan 20 10:51:43 2026] SELinux: policy capability genfs_seclabel_symlinks=1 [Tue Jan 20 10:51:53 2026] SELinux: Converting 390 SID table entries... [Tue Jan 20 10:51:53 2026] SELinux: policy capability network_peer_controls=1 [Tue Jan 20 10:51:53 2026] SELinux: policy capability open_perms=1 [Tue Jan 20 10:51:53 2026] SELinux: policy capability extended_socket_class=1 [Tue Jan 20 10:51:53 2026] SELinux: policy capability always_check_network=0 [Tue Jan 20 10:51:53 2026] SELinux: policy capability cgroup_seclabel=1 [Tue Jan 20 10:51:53 2026] SELinux: policy capability nnp_nosuid_transition=1 [Tue Jan 20 10:51:53 2026] SELinux: policy capability genfs_seclabel_symlinks=1 [Tue Jan 20 10:52:15 2026] systemd-rc-local-generator[9509]: /etc/rc.d/rc.local is not marked executable, skipping. [Tue Jan 20 10:52:19 2026] evm: overlay not supported home/zuul/zuul-output/logs/selinux-denials.log0000644000000000000000000000000015133660045020604 0ustar rootroothome/zuul/zuul-output/logs/system-config/0000755000175000017500000000000015133660057017663 5ustar zuulzuulhome/zuul/zuul-output/logs/system-config/libvirt/0000755000175000017500000000000015133660060021330 5ustar zuulzuulhome/zuul/zuul-output/logs/system-config/libvirt/libvirt-admin.conf0000644000175000000000000000070215133660060024706 0ustar zuulroot# # This can be used to setup URI aliases for frequently # used connection URIs. Aliases may contain only the # characters a-Z, 0-9, _, -. 
# # Following the '=' may be any valid libvirt admin connection # URI, including arbitrary parameters #uri_aliases = [ # "admin=libvirtd:///system", #] # This specifies the default location the client tries to connect to if no other # URI is provided by the application #uri_default = "libvirtd:///system" home/zuul/zuul-output/logs/system-config/libvirt/libvirt.conf0000644000175000000000000000104315133660060023617 0ustar zuulroot# # This can be used to setup URI aliases for frequently # used connection URIs. Aliases may contain only the # characters a-Z, 0-9, _, -. # # Following the '=' may be any valid libvirt connection # URI, including arbitrary parameters #uri_aliases = [ # "hail=qemu+ssh://root@hail.cloud.example.com/system", # "sleet=qemu+ssh://root@sleet.cloud.example.com/system", #] # # These can be used in cases when no URI is supplied by the application # (@uri_default also prevents probing of the hypervisor driver). # #uri_default = "qemu:///system" home/zuul/zuul-output/logs/registries.conf0000644000000000000000000000763515133660060020046 0ustar rootroot# For more information on this configuration file, see containers-registries.conf(5). # # NOTE: RISK OF USING UNQUALIFIED IMAGE NAMES # We recommend always using fully qualified image names including the registry # server (full dns name), namespace, image name, and tag # (e.g., registry.redhat.io/ubi8/ubi:latest). Pulling by digest (i.e., # quay.io/repository/name@digest) further eliminates the ambiguity of tags. # When using short names, there is always an inherent risk that the image being # pulled could be spoofed. For example, a user wants to pull an image named # `foobar` from a registry and expects it to come from myregistry.com. If # myregistry.com is not first in the search list, an attacker could place a # different `foobar` image at a registry earlier in the search list. The user # would accidentally pull and run the attacker's image and code rather than the # intended content. 
We recommend only adding registries which are completely # trusted (i.e., registries which don't allow unknown or anonymous users to # create accounts with arbitrary names). This will prevent an image from being # spoofed, squatted or otherwise made insecure. If it is necessary to use one # of these registries, it should be added at the end of the list. # # # An array of host[:port] registries to try when pulling an unqualified image, in order. unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"] # [[registry]] # # The "prefix" field is used to choose the relevant [[registry]] TOML table; # # (only) the TOML table with the longest match for the input image name # # (taking into account namespace/repo/tag/digest separators) is used. # # # # The prefix can also be of the form: *.example.com for wildcard subdomain # # matching. # # # # If the prefix field is missing, it defaults to be the same as the "location" field. # prefix = "example.com/foo" # # # If true, unencrypted HTTP as well as TLS connections with untrusted # # certificates are allowed. # insecure = false # # # If true, pulling images with matching names is forbidden. # blocked = false # # # The physical location of the "prefix"-rooted namespace. # # # # By default, this is equal to "prefix" (in which case "prefix" can be omitted # # and the [[registry]] TOML table can only specify "location"). # # # # Example: Given # # prefix = "example.com/foo" # # location = "internal-registry-for-example.net/bar" # # requests for the image example.com/foo/myimage:latest will actually work with the # # internal-registry-for-example.net/bar/myimage:latest image. # # # The location can be empty iff prefix is in a # # wildcarded format: "*.example.com". In this case, the input reference will # # be used as-is without any rewrite. # location = internal-registry-for-example.com/bar" # # # (Possibly-partial) mirrors for the "prefix"-rooted namespace. 
# # # # The mirrors are attempted in the specified order; the first one that can be # # contacted and contains the image will be used (and if none of the mirrors contains the image, # # the primary location specified by the "registry.location" field, or using the unmodified # # user-specified reference, is tried last). # # # # Each TOML table in the "mirror" array can contain the following fields, with the same semantics # # as if specified in the [[registry]] TOML table directly: # # - location # # - insecure # [[registry.mirror]] # location = "example-mirror-0.local/mirror-for-foo" # [[registry.mirror]] # location = "example-mirror-1.local/mirrors/foo" # insecure = true # # Given the above, a pull of example.com/foo/image:latest will try: # # 1. example-mirror-0.local/mirror-for-foo/image:latest # # 2. example-mirror-1.local/mirrors/foo/image:latest # # 3. internal-registry-for-example.net/bar/image:latest # # in order, and use the first one that exists. short-name-mode = "enforcing" # BEGIN ANSIBLE MANAGED BLOCK [[registry]] location = "38.102.83.51:5001" insecure = true # END ANSIBLE MANAGED BLOCK home/zuul/zuul-output/logs/registries.conf.d/0000755000175000000000000000000015133660060020363 5ustar zuulroothome/zuul/zuul-output/logs/registries.conf.d/000-shortnames.conf0000644000175000000000000001735515133660060023725 0ustar zuulroot[aliases] # almalinux "almalinux" = "docker.io/library/almalinux" "almalinux-minimal" = "docker.io/library/almalinux-minimal" # Amazon Linux "amazonlinux" = "public.ecr.aws/amazonlinux/amazonlinux" # Arch Linux "archlinux" = "docker.io/library/archlinux" # centos "centos" = "quay.io/centos/centos" # containers "skopeo" = "quay.io/skopeo/stable" "buildah" = "quay.io/buildah/stable" "podman" = "quay.io/podman/stable" "hello" = "quay.io/podman/hello" "hello-world" = "quay.io/podman/hello" # docker "alpine" = "docker.io/library/alpine" "docker" = "docker.io/library/docker" "registry" = "docker.io/library/registry" "swarm" = 
"docker.io/library/swarm" # Fedora "fedora-bootc" = "registry.fedoraproject.org/fedora-bootc" "fedora-minimal" = "registry.fedoraproject.org/fedora-minimal" "fedora" = "registry.fedoraproject.org/fedora" # Gentoo "gentoo" = "docker.io/gentoo/stage3" # openSUSE "opensuse/tumbleweed" = "registry.opensuse.org/opensuse/tumbleweed" "opensuse/tumbleweed-dnf" = "registry.opensuse.org/opensuse/tumbleweed-dnf" "opensuse/tumbleweed-microdnf" = "registry.opensuse.org/opensuse/tumbleweed-microdnf" "opensuse/leap" = "registry.opensuse.org/opensuse/leap" "opensuse/busybox" = "registry.opensuse.org/opensuse/busybox" "tumbleweed" = "registry.opensuse.org/opensuse/tumbleweed" "tumbleweed-dnf" = "registry.opensuse.org/opensuse/tumbleweed-dnf" "tumbleweed-microdnf" = "registry.opensuse.org/opensuse/tumbleweed-microdnf" "leap" = "registry.opensuse.org/opensuse/leap" "leap-dnf" = "registry.opensuse.org/opensuse/leap-dnf" "leap-microdnf" = "registry.opensuse.org/opensuse/leap-microdnf" "tw-busybox" = "registry.opensuse.org/opensuse/busybox" # OTel (Open Telemetry) - opentelemetry.io "otel/autoinstrumentation-go" = "docker.io/otel/autoinstrumentation-go" "otel/autoinstrumentation-nodejs" = "docker.io/otel/autoinstrumentation-nodejs" "otel/autoinstrumentation-python" = "docker.io/otel/autoinstrumentation-python" "otel/autoinstrumentation-java" = "docker.io/otel/autoinstrumentation-java" "otel/autoinstrumentation-dotnet" = "docker.io/otel/autoinstrumentation-dotnet" "otel/opentelemetry-collector" = "docker.io/otel/opentelemetry-collector" "otel/opentelemetry-collector-contrib" = "docker.io/otel/opentelemetry-collector-contrib" "otel/opentelemetry-collector-contrib-dev" = "docker.io/otel/opentelemetry-collector-contrib-dev" "otel/opentelemetry-collector-k8s" = "docker.io/otel/opentelemetry-collector-k8s" "otel/opentelemetry-operator" = "docker.io/otel/opentelemetry-operator" "otel/opentelemetry-operator-bundle" = "docker.io/otel/opentelemetry-operator-bundle" "otel/operator-opamp-bridge" = 
"docker.io/otel/operator-opamp-bridge" "otel/semconvgen" = "docker.io/otel/semconvgen" "otel/weaver" = "docker.io/otel/weaver" # SUSE "suse/sle15" = "registry.suse.com/suse/sle15" "suse/sles12sp5" = "registry.suse.com/suse/sles12sp5" "suse/sles12sp4" = "registry.suse.com/suse/sles12sp4" "suse/sles12sp3" = "registry.suse.com/suse/sles12sp3" "sle15" = "registry.suse.com/suse/sle15" "sles12sp5" = "registry.suse.com/suse/sles12sp5" "sles12sp4" = "registry.suse.com/suse/sles12sp4" "sles12sp3" = "registry.suse.com/suse/sles12sp3" "bci-base" = "registry.suse.com/bci/bci-base" "bci/bci-base" = "registry.suse.com/bci/bci-base" "bci-micro" = "registry.suse.com/bci/bci-micro" "bci/bci-micro" = "registry.suse.com/bci/bci-micro" "bci-minimal" = "registry.suse.com/bci/bci-minimal" "bci/bci-minimal" = "registry.suse.com/bci/bci-minimal" "bci-busybox" = "registry.suse.com/bci/bci-busybox" "bci/bci-busybox" = "registry.suse.com/bci/bci-busybox" # Red Hat Enterprise Linux "rhel" = "registry.access.redhat.com/rhel" "rhel6" = "registry.access.redhat.com/rhel6" "rhel7" = "registry.access.redhat.com/rhel7" "rhel7.9" = "registry.access.redhat.com/rhel7.9" "rhel-atomic" = "registry.access.redhat.com/rhel-atomic" "rhel9-bootc" = "registry.redhat.io/rhel9/rhel-bootc" "rhel-minimal" = "registry.access.redhat.com/rhel-minimal" "rhel-init" = "registry.access.redhat.com/rhel-init" "rhel7-atomic" = "registry.access.redhat.com/rhel7-atomic" "rhel7-minimal" = "registry.access.redhat.com/rhel7-minimal" "rhel7-init" = "registry.access.redhat.com/rhel7-init" "rhel7/rhel" = "registry.access.redhat.com/rhel7/rhel" "rhel7/rhel-atomic" = "registry.access.redhat.com/rhel7/rhel7/rhel-atomic" "ubi7/ubi" = "registry.access.redhat.com/ubi7/ubi" "ubi7/ubi-minimal" = "registry.access.redhat.com/ubi7-minimal" "ubi7/ubi-init" = "registry.access.redhat.com/ubi7-init" "ubi7" = "registry.access.redhat.com/ubi7" "ubi7-init" = "registry.access.redhat.com/ubi7-init" "ubi7-minimal" = 
"registry.access.redhat.com/ubi7-minimal" "rhel8" = "registry.access.redhat.com/ubi8" "rhel8-init" = "registry.access.redhat.com/ubi8-init" "rhel8-minimal" = "registry.access.redhat.com/ubi8-minimal" "rhel8-micro" = "registry.access.redhat.com/ubi8-micro" "ubi8" = "registry.access.redhat.com/ubi8" "ubi8-minimal" = "registry.access.redhat.com/ubi8-minimal" "ubi8-init" = "registry.access.redhat.com/ubi8-init" "ubi8-micro" = "registry.access.redhat.com/ubi8-micro" "ubi8/ubi" = "registry.access.redhat.com/ubi8/ubi" "ubi8/ubi-minimal" = "registry.access.redhat.com/ubi8-minimal" "ubi8/ubi-init" = "registry.access.redhat.com/ubi8-init" "ubi8/ubi-micro" = "registry.access.redhat.com/ubi8-micro" "ubi8/podman" = "registry.access.redhat.com/ubi8/podman" "ubi8/buildah" = "registry.access.redhat.com/ubi8/buildah" "ubi8/skopeo" = "registry.access.redhat.com/ubi8/skopeo" "rhel9" = "registry.access.redhat.com/ubi9" "rhel9-init" = "registry.access.redhat.com/ubi9-init" "rhel9-minimal" = "registry.access.redhat.com/ubi9-minimal" "rhel9-micro" = "registry.access.redhat.com/ubi9-micro" "ubi9" = "registry.access.redhat.com/ubi9" "ubi9-minimal" = "registry.access.redhat.com/ubi9-minimal" "ubi9-init" = "registry.access.redhat.com/ubi9-init" "ubi9-micro" = "registry.access.redhat.com/ubi9-micro" "ubi9/ubi" = "registry.access.redhat.com/ubi9/ubi" "ubi9/ubi-minimal" = "registry.access.redhat.com/ubi9-minimal" "ubi9/ubi-init" = "registry.access.redhat.com/ubi9-init" "ubi9/ubi-micro" = "registry.access.redhat.com/ubi9-micro" "ubi9/podman" = "registry.access.redhat.com/ubi9/podman" "ubi9/buildah" = "registry.access.redhat.com/ubi9/buildah" "ubi9/skopeo" = "registry.access.redhat.com/ubi9/skopeo" # Rocky Linux "rockylinux" = "quay.io/rockylinux/rockylinux" # Debian "debian" = "docker.io/library/debian" # Kali Linux "kali-bleeding-edge" = "docker.io/kalilinux/kali-bleeding-edge" "kali-dev" = "docker.io/kalilinux/kali-dev" "kali-experimental" = "docker.io/kalilinux/kali-experimental" 
"kali-last-release" = "docker.io/kalilinux/kali-last-release" "kali-rolling" = "docker.io/kalilinux/kali-rolling" # Ubuntu "ubuntu" = "docker.io/library/ubuntu" # Oracle Linux "oraclelinux" = "container-registry.oracle.com/os/oraclelinux" # busybox "busybox" = "docker.io/library/busybox" # golang "golang" = "docker.io/library/golang" # php "php" = "docker.io/library/php" # python "python" = "docker.io/library/python" # rust "rust" = "docker.io/library/rust" # node "node" = "docker.io/library/node" # Grafana Labs "grafana/agent" = "docker.io/grafana/agent" "grafana/grafana" = "docker.io/grafana/grafana" "grafana/k6" = "docker.io/grafana/k6" "grafana/loki" = "docker.io/grafana/loki" "grafana/mimir" = "docker.io/grafana/mimir" "grafana/oncall" = "docker.io/grafana/oncall" "grafana/pyroscope" = "docker.io/grafana/pyroscope" "grafana/tempo" = "docker.io/grafana/tempo" # curl "curl" = "quay.io/curl/curl" # nginx "nginx" = "docker.io/library/nginx" # QUBIP "qubip/pq-container" = "quay.io/qubip/pq-container" home/zuul/zuul-output/artifacts/0000755000175000017500000000000015133656043016110 5ustar zuulzuulhome/zuul/zuul-output/docs/0000755000175000017500000000000015133656044015061 5ustar zuulzuul